Model Selection Guide

Every generation request on ModelsLab requires a model_id parameter that tells the API which AI model to use. With 50,000+ models available, this guide helps you discover and choose the right one.

How model_id Works

The model_id is a string identifier you pass in your API request body. It determines which model processes your generation. In Python:
import requests

response = requests.post(
    "https://modelslab.com/api/v7/images/text-to-image",
    json={
        "key": "your_api_key",
        "model_id": "flux",        # <-- this selects the model
        "prompt": "a red apple on a white table",
        "width": 512,
        "height": 512
    }
)

The same request with cURL:

curl -X POST "https://modelslab.com/api/v7/images/text-to-image" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "your_api_key",
    "model_id": "flux",
    "prompt": "a red apple on a white table",
    "width": 512,
    "height": 512
  }'

Three Ways to Discover Models

1. API: Models Endpoint

Query the models API to search programmatically:
# Search by name
curl "https://modelslab.com/api/v7/models?search=flux&key=your_api_key"

# Filter by feature
curl "https://modelslab.com/api/v7/models?feature=imagen&key=your_api_key"
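The same queries can be issued from Python. The exact response envelope of the models endpoint isn't documented here, so the unwrapping below is an assumption; adjust it to whatever shape the API actually returns.

```python
import requests

def search_models(query=None, feature=None, api_key="your_api_key"):
    """Query the v7 models endpoint by name and/or feature category."""
    params = {"key": api_key}
    if query:
        params["search"] = query
    if feature:
        params["feature"] = feature
    resp = requests.get("https://modelslab.com/api/v7/models",
                        params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    # assumption: the endpoint returns either a bare list of models
    # or an object that wraps the list in a "data" field
    return data if isinstance(data, list) else data.get("data", data)

# models = search_models(query="flux")
```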

2. CLI: modelslab models

The CLI provides rich model discovery commands:
# Search by name
modelslab models search --search "flux"

# Filter by feature category
modelslab models search --feature imagen         # Image models
modelslab models search --feature video_fusion   # Video models
modelslab models search --feature audio_gen      # Audio models
modelslab models search --feature llmaster       # LLM/chat models

# Get full details about a model
modelslab models detail --id flux

# JSON output for scripting
modelslab models search --search "flux" --output json --jq '.[].model_id'

3. Web: Model Browser

Browse and filter all models visually at modelslab.com/models.

Models by Category

Image Generation

Use with /api/v7/images/text-to-image, /api/v7/images/image-to-image, and /api/v7/images/inpaint.
model_id     Name                  Best For
flux         Flux Dev              Fast, high-quality general images
midjourney   MidJourney            Artistic, stylized images
sdxl         Stable Diffusion XL   Versatile base model
imagen-3     Google Imagen 3       Premium photorealistic images
imagen-4     Google Imagen 4       Latest Google image model
For image-to-image and inpainting with SD/SDXL models (e.g., midjourney), include the scheduler parameter. Flux models don’t require it.
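As an illustration, an image-to-image payload for an SD/SDXL-family model might look like the following. The init_image parameter name and the scheduler value shown are assumptions; check the endpoint reference for the exact accepted values.

```python
import requests

# Illustrative image-to-image payload for an SD/SDXL-family model.
# The init_image parameter name and the scheduler value are assumptions;
# check the endpoint reference for the exact accepted values.
payload = {
    "key": "your_api_key",
    "model_id": "midjourney",
    "prompt": "turn the apple into a golden pear",
    "init_image": "https://example.com/apple.png",
    "scheduler": "UniPCMultistepScheduler",  # required for SD/SDXL models
    "width": 512,
    "height": 512,
}

# response = requests.post(
#     "https://modelslab.com/api/v7/images/image-to-image", json=payload
# )
```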

Video Generation

Use with /api/v7/video-fusion/text-to-video and /api/v7/video-fusion/image-to-video.
model_id       Name                      Type
seedance-t2v   Seedance Text-to-Video    Text to video
seedance-i2v   Seedance Image-to-Video   Image to video
wan2.1         Wan 2.1                   Text/image to video
wan2.6-t2v     Wan 2.6 Text-to-Video     Text to video
wan2.6-i2v     Wan 2.6 Image-to-Video    Image to video
veo2           Google Veo 2              Premium video
veo3           Google Veo 3              Latest Google video
sora-2         OpenAI Sora 2             Premium video
Video generation is asynchronous. The API returns a processing status with an id; use /api/v7/video-fusion/fetch/{id} to poll for results.
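A minimal polling loop might look like the sketch below. The status values are assumed to mirror the image API (a "processing" status followed by "success"), so verify them against the fetch endpoint reference.

```python
import time
import requests

FETCH_URL = "https://modelslab.com/api/v7/video-fusion/fetch/{id}"

def poll_video(request_id, api_key, interval=10, max_tries=30):
    """Poll the fetch endpoint until the video is ready or we give up."""
    url = FETCH_URL.format(id=request_id)
    for _ in range(max_tries):
        data = requests.post(url, json={"key": api_key}, timeout=30).json()
        # assumption: status values mirror the image API
        if data.get("status") == "success":
            return data["output"]
        time.sleep(interval)
    raise TimeoutError(f"request {request_id} not ready after {max_tries} polls")

# urls = poll_video("123456", "your_api_key")
```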

Audio & Voice

Use with /api/v7/voice/* endpoints.
model_id                 Name                         Endpoint
eleven_multilingual_v2   ElevenLabs Multilingual v2   text-to-speech
eleven_english_sts_v2    ElevenLabs Voice Changer     speech-to-speech
scribe_v1                ElevenLabs Scribe            speech-to-text
eleven_sound_effect      ElevenLabs SFX               sound-generation
music_v1                 ElevenLabs Music             music-gen
inworld-tts-1            Inworld TTS                  text-to-speech
Text-to-speech uses the prompt parameter (not text) and requires a valid ElevenLabs voice_id (e.g., 21m00Tcm4TlvDq8ikWAM for Rachel).
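A text-to-speech request sketch follows. The endpoint path is inferred from the /api/v7/voice/* pattern above and is an assumption; the key points are that the input text goes in `prompt` (not `text`) and that a valid ElevenLabs voice_id is required.

```python
import requests

# Text-to-speech request sketch. Note the parameter is `prompt` (not `text`)
# and a valid ElevenLabs voice_id is required. The endpoint path below is
# inferred from the /api/v7/voice/* pattern, so verify it against the
# endpoint reference.
payload = {
    "key": "your_api_key",
    "model_id": "eleven_multilingual_v2",
    "prompt": "Welcome to ModelsLab!",
    "voice_id": "21m00Tcm4TlvDq8ikWAM",  # Rachel
}

# response = requests.post(
#     "https://modelslab.com/api/v7/voice/text-to-speech", json=payload
# )
```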

LLM / Chat

Use with /api/v7/llm/chat/completions (OpenAI-compatible format).
model_id                                         Name                   Provider
meta-llama-3-8B-instruct                         Llama 3 8B Instruct    Meta
meta-llama-Llama-3.3-70B-Instruct-Turbo          Llama 3.3 70B Turbo    Meta
meta-llama-Meta-Llama-3.1-405B-Instruct-Turbo    Llama 3.1 405B Turbo   Meta
deepseek-ai-DeepSeek-R1-Distill-Llama-70B        DeepSeek R1            DeepSeek
deepseek-ai-DeepSeek-V3                          DeepSeek V3            DeepSeek
gemini-2.0-flash-001                             Gemini 2.0 Flash       Google
gemini-2.5-pro                                   Gemini 2.5 Pro         Google
Qwen-Qwen2.5-72B-Instruct-Turbo                  Qwen 2.5 72B Turbo     Qwen
mistralai-Mixtral-8x7B-Instruct-v0.1             Mixtral 8x7B           Mistral
The chat completions endpoint accepts both model_id and model (OpenAI-compatible alias). The messages array follows the standard OpenAI format.

Code Examples

Image Generation (Python)

import requests

response = requests.post(
    "https://modelslab.com/api/v7/images/text-to-image",
    json={
        "key": "your_api_key",
        "model_id": "flux",
        "prompt": "a futuristic cityscape at sunset, highly detailed",
        "width": 1024,
        "height": 1024,
        "samples": 1
    }
)

data = response.json()
if data["status"] == "success":
    print(f"Image URL: {data['output'][0]}")

Chat Completion (Python)

import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "your_api_key",
        "model_id": "meta-llama-3-8B-instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain quantum computing briefly."}
        ],
        "max_tokens": 200,
        "temperature": 0.7
    }
)

data = response.json()
print(data["choices"][0]["message"]["content"])

Chat Completion (OpenAI SDK Compatible)

from openai import OpenAI

client = OpenAI(
    base_url="https://modelslab.com/api/v7/llm",
    api_key="your_api_key"
)

response = client.chat.completions.create(
    model="meta-llama-3-8B-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=50
)

print(response.choices[0].message.content)

CLI Examples

# Generate image with specific model
modelslab generate image --prompt "sunset over mountains" --model flux

# Chat with an LLM
modelslab generate chat --message "Explain AI" --model meta-llama-3-8B-instruct

# Generate video
modelslab generate video --prompt "ocean waves" --model seedance-t2v

# Set a default model
modelslab config set generation.default_model flux

Tips for Choosing the Right Model

  1. Start with popular models: flux for images, meta-llama-3-8B-instruct for chat
  2. Use feature filters: modelslab models search --feature imagen narrows results
  3. Check model details: modelslab models detail --id <model_id> shows supported parameters
  4. Consider provider: models from different providers (Google, Meta, ElevenLabs) have different capabilities and pricing
  5. Test with small requests first: use low resolution/token counts while experimenting

Resources