Documentation Index
Fetch the complete documentation index at: https://docs.modelslab.com/llms.txt
Use this file to discover all available pages before exploring further.
Model Selection Guide
Every generation request on ModelsLab requires a model_id parameter that tells the API which AI model to use. With 50,000+ models available, this guide helps you discover and choose the right one.
How model_id Works
The model_id is a string identifier you pass in your API request body. It determines which model processes your generation.
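As a sketch, the same request-body shape works across endpoints; only model_id (plus endpoint-specific fields) changes. The "key" and "prompt" field names below are assumptions; confirm them in each endpoint's reference:

```python
def build_request_body(model_id: str, prompt: str, api_key: str) -> dict:
    """Minimal generation request body. "key" and "prompt" are assumed
    field names; model_id selects which model runs the job."""
    return {
        "key": api_key,        # your ModelsLab API key
        "model_id": model_id,  # which model processes the generation
        "prompt": prompt,
    }

# Swapping models is just a different model_id string:
flux_body = build_request_body("flux", "a lighthouse at dusk", "YOUR_API_KEY")
sdxl_body = build_request_body("sdxl", "a lighthouse at dusk", "YOUR_API_KEY")
```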
Three Ways to Discover Models
1. API: Models Endpoint
Query the models API to search programmatically.
2. CLI: modelslab models
The CLI provides rich model discovery commands.
3. Web: Model Browser
Browse and filter all models visually at modelslab.com/models.
Models by Category
Image Generation
Use with /api/v7/images/text-to-image, /api/v7/images/image-to-image, and /api/v7/images/inpaint.
| model_id | Name | Best For |
|---|---|---|
| flux | Flux Dev | Fast, high-quality general images |
| midjourney | MidJourney | Artistic, stylized images |
| sdxl | Stable Diffusion XL | Versatile base model |
| imagen-3 | Google Imagen 3 | Premium photorealistic images |
| imagen-4 | Google Imagen 4 | Latest Google image model |
For image-to-image and inpainting with SD/SDXL models (e.g., midjourney), include the scheduler parameter. Flux models don’t require it.
Video Generation
Use with /api/v7/video-fusion/text-to-video and /api/v7/video-fusion/image-to-video.
| model_id | Name | Type |
|---|---|---|
| seedance-t2v | Seedance Text-to-Video | Text to video |
| seedance-i2v | Seedance Image-to-Video | Image to video |
| wan2.2 | Wan 2.2 | Text/image to video |
| wan2.6-t2v | Wan 2.6 Text-to-Video | Text to video |
| wan2.6-i2v | Wan 2.6 Image-to-Video | Image to video |
| veo2 | Google Veo 2 | Premium video |
| veo3 | Google Veo 3 | Latest Google video |
| sora-2 | OpenAI Sora 2 | Premium video |
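Because video jobs are queued rather than returned inline, a client typically polls the fetch endpoint until the job finishes. A minimal polling sketch follows; the "key", "status", and "id" field names are assumptions, so confirm them against the video-fusion API reference:

```python
import time
import requests

API_BASE = "https://modelslab.com/api/v7/video-fusion"

def poll_until_done(first_response: dict, fetch_fn, interval: float = 10.0) -> dict:
    """Keep calling fetch_fn(job_id) while the response reports "processing".

    fetch_fn is injected so the loop can be exercised without network access.
    """
    resp = first_response
    while resp.get("status") == "processing":
        time.sleep(interval)  # videos can take a while; poll gently
        resp = fetch_fn(resp["id"])
    return resp

def fetch_job(job_id: str, api_key: str) -> dict:
    """One poll of the fetch endpoint for a queued video job."""
    return requests.post(
        f"{API_BASE}/fetch/{job_id}", json={"key": api_key}
    ).json()
```

After submitting to /text-to-video, you would pass the submit response through the loop, e.g. `poll_until_done(submit_resp, lambda i: fetch_job(i, api_key))`.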
Video generation is asynchronous. The API returns a processing status with an id; use /api/v7/video-fusion/fetch/{id} to poll for results.
Audio & Voice
Use with /api/v7/voice/* endpoints.
| model_id | Name | Endpoint |
|---|---|---|
| eleven_multilingual_v2 | ElevenLabs Multilingual v2 | text-to-speech |
| eleven_english_sts_v2 | ElevenLabs Voice Changer | speech-to-speech |
| scribe_v1 | ElevenLabs Scribe | speech-to-text |
| eleven_sound_effect | ElevenLabs SFX | sound-generation |
| music_v1 | ElevenLabs Music | music-gen |
| inworld-tts-1 | Inworld TTS | text-to-speech |
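A text-to-speech call might look like the sketch below. The exact path under /api/v7/voice/ and the "key" and "text" field names are assumptions; verify them in the voice API reference:

```python
import os
import requests

def build_tts_payload(text: str, api_key: str,
                      model_id: str = "eleven_multilingual_v2") -> dict:
    """Request body for speech synthesis; "key" and "text" are assumed names."""
    return {"key": api_key, "model_id": model_id, "text": text}

def synthesize(text: str, api_key: str) -> dict:
    """POST the payload to the (assumed) text-to-speech path."""
    resp = requests.post(
        "https://modelslab.com/api/v7/voice/text-to-speech",
        json=build_tts_payload(text, api_key),
    )
    return resp.json()
```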
LLM / Chat
Use with /api/v7/llm/chat/completions (OpenAI-compatible format).
| model_id | Name | Provider |
|---|---|---|
| meta-llama-3-8B-instruct | Llama 3 8B Instruct | Meta |
| meta-llama-Llama-3.3-70B-Instruct-Turbo | Llama 3.3 70B Turbo | Meta |
| meta-llama-Meta-Llama-3.1-405B-Instruct-Turbo | Llama 3.1 405B Turbo | Meta |
| deepseek-ai-DeepSeek-R1-Distill-Llama-70B | DeepSeek R1 | DeepSeek |
| deepseek-ai-DeepSeek-V3 | DeepSeek V3 | DeepSeek |
| gemini-2.0-flash-001 | Gemini 2.0 Flash | Google |
| gemini-2.5-pro | Gemini 2.5 Pro | Google |
| Qwen-Qwen2.5-72B-Instruct-Turbo | Qwen 2.5 72B Turbo | Qwen |
| mistralai-Mixtral-8x7B-Instruct-v0.1 | Mixtral 8x7B | Mistral |
The chat completions endpoint accepts both model_id and model (OpenAI-compatible alias). The messages array follows the standard OpenAI format.
Code Examples
Image Generation (Python)
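A minimal sketch of a text-to-image request. The model_id value and endpoint path come from the tables above; the "key", "width", "height", and "samples" field names are assumptions to confirm against the API reference:

```python
import os
import requests

API_URL = "https://modelslab.com/api/v7/images/text-to-image"

payload = {
    "key": os.environ.get("MODELSLAB_API_KEY", "YOUR_API_KEY"),
    "model_id": "flux",  # see the image table above for alternatives
    "prompt": "a lighthouse on a cliff at sunset, golden hour",
    "width": 1024,       # assumed field name
    "height": 1024,      # assumed field name
    "samples": 1,        # assumed field name
}

def generate() -> dict:
    """POST the payload and return the parsed JSON response."""
    return requests.post(API_URL, json=payload).json()
```

Calling `generate()` should return JSON with links to the generated images.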
Chat Completion (Python)
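A sketch of a raw HTTP chat completion. The messages array is the standard OpenAI format; the "key" field for authentication is an assumption to verify in the LLM reference:

```python
import os
import requests

CHAT_URL = "https://modelslab.com/api/v7/llm/chat/completions"

payload = {
    "key": os.environ.get("MODELSLAB_API_KEY", "YOUR_API_KEY"),  # assumed auth field
    "model": "meta-llama-3-8B-instruct",  # "model_id" is accepted as well
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what a model_id is in one sentence."},
    ],
}

def chat() -> dict:
    """POST the OpenAI-format body and return the parsed JSON response."""
    return requests.post(CHAT_URL, json=payload).json()
```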
Chat Completion (OpenAI SDK Compatible)
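Because the endpoint is OpenAI-compatible, the official OpenAI Python SDK can point at it. The base_url below is an assumption derived from the /api/v7/llm/chat/completions path; confirm it in the API reference:

```python
import os

MODEL_ID = "meta-llama-3-8B-instruct"

def build_messages(prompt: str) -> list:
    """Standard OpenAI-format messages array."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str) -> str:
    """Send a chat completion through the OpenAI SDK (pip install openai)."""
    from openai import OpenAI
    client = OpenAI(
        base_url="https://modelslab.com/api/v7/llm",  # assumed base URL
        api_key=os.environ.get("MODELSLAB_API_KEY", "YOUR_API_KEY"),
    )
    completion = client.chat.completions.create(
        model=MODEL_ID, messages=build_messages(prompt)
    )
    return completion.choices[0].message.content
```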
CLI Examples
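The model discovery commands from the tips below make a typical CLI session; only the search and detail subcommands documented on this page are shown:

```shell
# Find models that support a given feature
modelslab models search --feature imagen

# Inspect a model's supported parameters before using it
modelslab models detail --id flux
```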
Tips for Choosing the Right Model
- Start with popular models — flux for images, meta-llama-3-8B-instruct for chat
- Use feature filters — modelslab models search --feature imagen narrows results
- Check model details — modelslab models detail --id <model_id> shows supported parameters
- Consider provider — models from different providers (Google, Meta, ElevenLabs) have different capabilities and pricing
- Test with small requests first — use low resolution/token counts while experimenting
Resources
- Model Browser — Visual model discovery
- API Reference — Full endpoint documentation
- CLI Installation — Install the CLI tool

