ControlNet Main Endpoint

Overview

You can now control Stable Diffusion with ControlNet. The ControlNet models are available in this API.

tip

You can also use this endpoint to inpaint images with ControlNet. Just make sure to pass a link to the mask image via the `mask_image` parameter in the request body and set the `controlnet_model` parameter to "inpaint".
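A minimal inpainting request body might look like the following sketch. The `model_id` and image URLs are placeholders, and setting `controlnet_type` to "inpaint" as well is an assumption based on the main example below (where `controlnet_type` mirrors `controlnet_model`):

```json
{
  "key": "YOUR_API_KEY",
  "model_id": "midjourney",
  "controlnet_model": "inpaint",
  "controlnet_type": "inpaint",
  "init_image": "https://example.com/source.png",
  "mask_image": "https://example.com/mask.png",
  "prompt": "a wooden table in a sunlit room",
  "width": "512",
  "height": "512",
  "samples": "1"
}
```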

info

Read our detailed blog article about the advantages of ControlNet before you dive in.

Request

Send a POST request to the `https://modelslab.com/api/v5/controlnet` endpoint:

```shell
curl --request POST 'https://modelslab.com/api/v5/controlnet' \
  --header 'Content-Type: application/json' \
  --data-raw '{ ... }'
```

Body Attributes

| Parameter | Description | Values |
| --- | --- | --- |
| key | Your API key, used for request authorization. | string |
| model_id | The ID of the model to use. It can be a public model or one you have trained. (Note: ControlNet does not apply when using the model with ID `flux`.) | id |
| controlnet_model | ControlNet model ID. It can be from the models list or user-trained. | id |
| controlnet_type | ControlNet model type, from the models list. | controlnet type |
| auto_hint | Automatically generate a hint image. | "yes"/"no" |
| guess_mode | Set to "yes" if you don't provide a prompt; the model will try to guess from `init_image`. | "yes"/"no" |
| prompt | Text prompt describing the required image modifications. Make it detailed for best results. | string |
| negative_prompt | Items you don't want in the image. | string |
| init_image | Link to the initial image to be used as a reference. | url |
| control_image | Link to the ControlNet image. | url |
| mask_image | Link to the mask image for inpainting. | url |
| width | Width of the image. Maximum 1024 pixels. | integer |
| height | Height of the image. Maximum 1024 pixels. | integer |
| samples | Number of images to return in the response. Maximum 4. | integer |
| scheduler | Set a scheduler (see the list below). | scheduler |
| tomesd | Enable tomesd to generate images faster. Default is "yes". | "yes"/"no" |
| use_karras_sigmas | Use Karras sigmas to generate images; produces nice results. Default is "yes". | "yes"/"no" |
| algorithm_type | Used with the DPMSolverMultistepScheduler. Default is "none". | "dpmsolver++" |
| vae | Use a custom VAE for generating images. Default is null. | null |
| lora_strength | Strength of the LoRA model(s). Range from 0.1 to 1. | string (comma-separated values) |
| lora_model | LoRA model ID(s). Multiple LoRA models are supported; pass comma-separated values. | string (comma-separated values) |
| num_inference_steps | Number of denoising steps. Accepts values 21 or 31. | integer |
| safety_checker | A checker for NSFW images. If detected, such images are replaced by a blank image. Default is "yes". | "yes"/"no" |
| embeddings_model | Use it to pass an embeddings model. | id |
| enhance_prompt | Enhance the prompt for better results. Default is "yes". | "yes"/"no" |
| controlnet_conditioning_scale | Scale for ControlNet guidance. Accepts floating-point values from 0.1 to 5 (e.g. 0.5). | float |
| strength | Prompt strength when using an initial image. Range from 0 to 1. | float |
| seed | Used to reproduce results. Pass null for a random seed. | integer |
| ip_adapter_id | IP-Adapter ID. Supported IDs are ip-adapter_sdxl, ip-adapter_sd15, and ip-adapter-plus-face_sd15. | string |
| ip_adapter_scale | Scale for the IP-Adapter. Should be between 0 and 1. | float |
| ip_adapter_image | Valid image URL for the IP-Adapter. | url |
| webhook | Set a URL to receive a POST API call once image generation is complete. | url |
| track_id | Returned in the response to the webhook API call; used to identify the request. | id |
| upscale | Set to "yes" to upscale the given image resolution two times (2x). | "yes"/"no" |
| clip_skip | Clip skip. Minimum 1, maximum 8. | integer |
| base64 | Get the response as a base64 string. Pass `init_image`, `mask_image`, and `control_image` as base64 strings to get a base64 response. | "yes"/"no" |
| temp | Create a temporary image link valid for 24 hours. | "yes"/"no" |
tip

You can also use multiple ControlNet models at once. Just make sure to pass comma-separated model names to `controlnet_model`, such as "canny,depth", along with an `init_image` in the request body.
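For example, the relevant fields of a multi-ControlNet request body would look like this sketch (the image URL is a placeholder):

```json
{
  "controlnet_model": "canny,depth",
  "init_image": "https://example.com/reference.png"
}
```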

Models

The ControlNet API uses ControlNet 1.1 by default. Supported `controlnet_model` values:

  • canny
  • depth
  • hed
  • mlsd
  • normal
  • openpose
  • scribble
  • segmentation
  • inpaint
  • softedge
  • lineart
  • shuffle
  • tile
  • face_detector
  • qrcode

Schedulers

This endpoint also supports schedulers. Use the `scheduler` parameter in the request body to pass a specific scheduler from the list below:

  • DDPMScheduler
  • DDIMScheduler
  • PNDMScheduler
  • LMSDiscreteScheduler
  • EulerDiscreteScheduler
  • EulerAncestralDiscreteScheduler
  • DPMSolverMultistepScheduler
  • HeunDiscreteScheduler
  • KDPM2DiscreteScheduler
  • DPMSolverSinglestepScheduler
  • KDPM2AncestralDiscreteScheduler
  • UniPCMultistepScheduler
  • DDIMInverseScheduler
  • DEISMultistepScheduler
  • IPNDMScheduler
  • KarrasVeScheduler
  • ScoreSdeVeScheduler
  • LCMScheduler

Example

Body

```json
{
  "key": "",
  "controlnet_type": "canny",
  "controlnet_model": "canny",
  "model_id": "midjourney",
  "init_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
  "mask_image": null,
  "control_image": null,
  "auto_hint": "yes",
  "width": "512",
  "height": "512",
  "prompt": "(a frog wearing blue jean), full-body, Ghibli style, Anime, vibrant colors, HDR, Enhance, ((plain black background)), masterpiece, highly detailed, 4k, HQ, separate colors, bright colors",
  "negative_prompt": "human, unstructure, (black object, white object), colorful background, nsfw",
  "guess_mode": null,
  "use_karras_sigmas": "yes",
  "algorithm_type": null,
  "safety_checker_type": null,
  "tomesd": "yes",
  "vae": null,
  "embeddings": null,
  "lora_strength": null,
  "upscale": null,
  "instant_response": null,
  "strength": 1,
  "guidance_scale": 7.5,
  "samples": "1",
  "safety_checker": null,
  "num_inference_steps": "31",
  "controlnet_conditioning_scale": 0.4,
  "track_id": null,
  "scheduler": "EulerDiscreteScheduler",
  "base64": null,
  "clip_skip": "1",
  "temp": null,
  "seed": null,
  "webhook": null
}
```

Request

```javascript
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

var raw = JSON.stringify({
  "key": "",
  "controlnet_type": "canny",
  "controlnet_model": "canny",
  "model_id": "midjourney",
  "init_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
  "mask_image": null,
  "control_image": null,
  "auto_hint": "yes",
  "width": "512",
  "height": "512",
  "prompt": "(a frog wearing blue jean), full-body, Ghibli style, Anime, vibrant colors, HDR, Enhance, ((plain black background)), masterpiece, highly detailed, 4k, HQ, separate colors, bright colors",
  "negative_prompt": "human, unstructure, (black object, white object), colorful background, nsfw",
  "guess_mode": null,
  "use_karras_sigmas": "yes",
  "algorithm_type": null,
  "safety_checker_type": null,
  "tomesd": "yes",
  "vae": null,
  "embeddings": null,
  "lora_strength": null,
  "upscale": null,
  "instant_response": null,
  "strength": 1,
  "guidance_scale": 7.5,
  "samples": "1",
  "safety_checker": null,
  "num_inference_steps": "31",
  "controlnet_conditioning_scale": 0.4,
  "track_id": null,
  "scheduler": "EulerDiscreteScheduler",
  "base64": null,
  "clip_skip": "1",
  "temp": null,
  "seed": null,
  "webhook": null
});

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("https://modelslab.com/api/v5/controlnet", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
```

Response

```json
{
  "status": "processing",
  "tip": "for faster speed, keep resolution upto 512x512",
  "eta": 146.5279869184,
  "messege": "Try to fetch request after given estimated time",
  "fetch_result": "https://modelslab.com/api/v3/fetch/13902970",
  "id": 13902970,
  "output": "",
  "meta": {
    "prompt": "mdjrny-v4 style a model doing photoshoot, ultra high resolution, 4K image",
    "model_id": "midjourney",
    "controlnet_model": "canny",
    "controlnet_type": "canny",
    "negative_prompt": "",
    "scheduler": "UniPCMultistepScheduler",
    "safetychecker": "no",
    "auto_hint": "yes",
    "guess_mode": "no",
    "strength": 0.55,
    "W": 512,
    "H": 512,
    "guidance_scale": 3,
    "seed": 4016593698,
    "init_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
    "mask_image": null,
    "steps": 30,
    "full_url": "no",
    "upscale": "no",
    "n_samples": 1,
    "embeddings": null,
    "lora": null,
    "outdir": "out",
    "file_prefix": "c8bb8efe-b437-4e94-b508-a6b4705f366a"
  }
}
```
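The response above is asynchronous: `status` is "processing" and `fetch_result` points at a URL you can poll once the `eta` (in seconds) has elapsed. The helper below is a minimal sketch of that polling loop, not an official client: `pollResult` is a hypothetical name, it assumes the fetch endpoint accepts a POST carrying your API key (the common ModelsLab fetch pattern), and it assumes a global `fetch` (Node 18+ or a browser).

```javascript
// Hedged sketch: poll the fetch_result URL from the response until the
// generation finishes. Assumes the fetch endpoint accepts {"key": ...}.
async function pollResult(fetchUrl, apiKey, { intervalMs = 5000, maxTries = 20 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetch(fetchUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ key: apiKey }),
    });
    const data = await res.json();
    if (data.status === "success") return data.output; // generated image URL(s)
    // The API spells the field "messege" in its responses; check both.
    if (data.status === "error") throw new Error(data.messege || data.message || "generation failed");
    // Otherwise still "processing": wait before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for generation");
}
```

For the response shown, you would call `pollResult("https://modelslab.com/api/v3/fetch/13902970", key)` after roughly `eta` seconds.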