ControlNet Multi Endpoint

Overview

You can specify multiple ControlNet models in a single request. Pass them as a comma-separated list in the controlnet_model parameter (for example "canny,depth") and include an init_image in the request body.
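For example, a minimal request body combining two models might look like the following sketch; the API key and image URL are placeholders:

```json
{
  "key": "your_api_key",
  "controlnet_model": "canny,depth",
  "init_image": "https://example.com/source.png",
  "prompt": "a detailed portrait, ultra high resolution"
}
```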

tip

You can also use this endpoint to inpaint images with ControlNet. Pass a link to the mask_image in the request body and set the controlnet_model parameter to "inpaint".
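A minimal inpainting body might look like this sketch; the URLs are placeholders, and the mask marks the region to regenerate:

```json
{
  "key": "your_api_key",
  "controlnet_model": "inpaint",
  "init_image": "https://example.com/source.png",
  "mask_image": "https://example.com/mask.png",
  "prompt": "a red sofa in place of the masked area"
}
```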

info

Read our detailed blog article about ControlNet.

Request

Send a POST request to the https://modelslab.com/api/v5/controlnet endpoint:

curl --request POST 'https://modelslab.com/api/v5/controlnet' \
  --header 'Content-Type: application/json' \
  --data-raw '{ ... }'

Body Attributes

| Parameter | Description |
| --- | --- |
| key | Your API key, used for request authorization. |
| model_id | The ID of the model to use. It can be a public model or your own trained model. (Note: multi ControlNet does not apply when using a Flux model.) |
| controlnet_model | ControlNet model ID. It can be from the models list or a user-trained model. Pass multiple models as a comma-separated list. |
| controlnet_type | ControlNet model type. It can be from the models list. |
| auto_hint | Automatically generate a hint image; options: yes/no. |
| guess_mode | Set this to yes if you don't pass a prompt. The model will try to guess what's in the init_image and create the best variations on its own; options: yes/no. |
| prompt | Text prompt describing the required image modifications. Make it as detailed as possible for best results. |
| negative_prompt | Items you don't want in the image. |
| init_image | Link to the initial image. |
| control_image | Link to the ControlNet image. |
| mask_image | Link to the mask image for inpainting. |
| width | Width of the output image. Maximum resolution: 1024x1024. |
| height | Height of the output image. Maximum resolution: 1024x1024. |
| samples | Number of images to return in the response. The maximum value is 4. |
| scheduler | Use it to set a scheduler; see the list below. |
| tomesd | Enable tomesd to generate images; gives really fast results; default: yes, options: yes/no. |
| use_karras_sigmas | Use Karras sigmas to generate images; gives nice results; default: yes, options: yes/no. |
| algorithm_type | Used with the DPMSolverMultistepScheduler; default: none, options: dpmsolver++. |
| vae | Use a custom VAE for generating images; default: null. |
| lora_strength | Strength of the LoRA model you're using. If using multiple LoRA models, provide a comma-separated list of values, each between 0.1 and 1. |
| lora_model | Multi LoRA is supported; pass comma-separated values, e.g. contrast-fix,yae-miko-genshin. |
| num_inference_steps | Number of denoising steps. Accepted values: 21, 31, … |
| safety_checker | A checker for NSFW images. If such an image is detected, it will be replaced by a blank image; default: yes, options: yes/no. |
| embeddings_model | Use it to pass an embeddings model. |
| ip_adapter_id | IP-Adapter ID. Supported IDs: ip-adapter_sdxl, ip-adapter_sd15, ip-adapter-plus-face_sd15. |
| ip_adapter_scale | Scale for the IP-Adapter, between 0 and 1. |
| ip_adapter_image | Valid image URL for the IP-Adapter. |
| enhance_prompt | Enhance the prompt for better results; default: yes, options: yes/no. |
| controlnet_conditioning_scale | Guidance scale for ControlNet. Accepts floating-point values from 0.1 to 5 (e.g. 0.5). |
| strength | Prompt strength when using init_image. 1.0 corresponds to full destruction of the information in the init image. |
| seed | Used to reproduce results; the same seed will return the same image again. Pass null for a random number. |
| webhook | Set a URL to receive a POST call once image generation is complete. |
| track_id | This ID is returned in the webhook POST call and can be used to identify the request. |
| upscale | Set to "yes" to upscale the generated image resolution two times (2x). If the requested resolution is 512 x 512 px, the generated image will be 1024 x 1024 px. |
| clip_skip | CLIP skip (minimum: 1; maximum: 8). |
| base64 | Get the response as a base64 string; pass init_image, mask_image, and control_image as base64 strings to get a base64 response; default: "no", options: yes/no. |
| temp | Create a temporary image link that is valid for 24 hours; options: yes/no. |
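Because the webhook and track_id parameters define a callback flow, a receiver sketch may help. The following is a minimal, hypothetical Express handler; the exact payload shape of the webhook call is an assumption, apart from track_id, which the table above documents as being included:

```js
const express = require("express");
const app = express();
app.use(express.json());

// Hypothetical endpoint registered via the `webhook` parameter.
// The callback body shape is assumed to resemble the API response;
// `track_id` is documented to be echoed back for matching requests.
app.post("/controlnet-webhook", (req, res) => {
  const { track_id, output } = req.body;
  console.log(`Generation for track_id ${track_id} finished:`, output);
  res.sendStatus(200); // acknowledge receipt
});

app.listen(3000);
```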
tip

You can also use multi ControlNet here. Just pass comma-separated ControlNet models to controlnet_model (e.g. "canny,depth") along with an init_image in the request body.

Models

The ControlNet API uses ControlNet 1.1 by default. Supported controlnet_model values:

  • canny
  • depth
  • hed
  • mlsd
  • normal
  • openpose
  • scribble
  • segmentation
  • inpaint
  • softedge
  • lineart
  • shuffle
  • tile
  • face_detector
  • qrcode

Schedulers

This endpoint also supports schedulers. Use the "scheduler" parameter in the request body to pass a specific scheduler from the list below:

  • DDPMScheduler
  • DDIMScheduler
  • PNDMScheduler
  • LMSDiscreteScheduler
  • EulerDiscreteScheduler
  • EulerAncestralDiscreteScheduler
  • DPMSolverMultistepScheduler
  • HeunDiscreteScheduler
  • KDPM2DiscreteScheduler
  • DPMSolverSinglestepScheduler
  • KDPM2AncestralDiscreteScheduler
  • UniPCMultistepScheduler
  • DDIMInverseScheduler
  • DEISMultistepScheduler
  • IPNDMScheduler
  • KarrasVeScheduler
  • ScoreSdeVeScheduler
  • LCMScheduler

Example

Body

{
  "key": "",
  "controlnet_model": "openpose,canny,face_detector",
  "controlnet_type": "openpose",
  "model_id": "midjourney",
  "auto_hint": "yes",
  "guess_mode": "yes",
  "prompt": "human model doing photoshoot, ultra realistic face, ultra high resolution, 4K image",
  "negative_prompt": null,
  "control_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
  "init_image": "https://cdn.stablediffusionapi.com/generations/0-4957a91a-a45e-459e-b4cd-b3ca4013b847.png",
  "mask_image": null,
  "width": "512",
  "height": "512",
  "samples": "1",
  "scheduler": "UniPCMultistepScheduler",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "guidance_scale": 7.5,
  "controlnet_conditioning_scale": 0.7,
  "strength": 0.55,
  "lora_model": "yae-miko-genshin,more_details",
  "clip_skip": "2",
  "tomesd": "yes",
  "use_karras_sigmas": "yes",
  "vae": null,
  "lora_strength": null,
  "embeddings_model": null,
  "seed": null,
  "webhook": null,
  "track_id": null
}

Request

var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

var raw = JSON.stringify({
  "key": "",
  "controlnet_model": "openpose,canny,face_detector",
  "controlnet_type": "openpose",
  "model_id": "midjourney",
  "auto_hint": "yes",
  "guess_mode": "yes",
  "prompt": "human model doing photoshoot, ultra realistic face, ultra high resolution, 4K image",
  "negative_prompt": null,
  "control_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
  "init_image": "https://cdn.stablediffusionapi.com/generations/0-4957a91a-a45e-459e-b4cd-b3ca4013b847.png",
  "mask_image": null,
  "width": "512",
  "height": "512",
  "samples": "1",
  "scheduler": "UniPCMultistepScheduler",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "guidance_scale": 7.5,
  "controlnet_conditioning_scale": 0.7,
  "strength": 0.55,
  "lora_model": "yae-miko-genshin,more_details",
  "clip_skip": "2",
  "tomesd": "yes",
  "use_karras_sigmas": "yes",
  "vae": null,
  "lora_strength": null,
  "embeddings_model": null,
  "seed": null,
  "webhook": null,
  "track_id": null
});

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("https://modelslab.com/api/v5/controlnet", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));

Response

{
  "status": "success",
  "generationTime": 14.463637351989746,
  "id": 32303444,
  "output": [
    "https://cdn.stablediffusionapi.com/generations/0-0dcf98b7-c397-4536-bd56-f0bf69c6ec1a.png"
  ],
  "meta": {
    "prompt": "mdjrny-v4 style human model doing photoshoot, ultra realistic face, ultra high resolution, 4K image",
    "model_id": "midjourney",
    "controlnet_model": "openpose,canny,face_detector",
    "controlnet_type": "openpose",
    "negative_prompt": "",
    "scheduler": "UniPCMultistepScheduler",
    "safety_checker": "no",
    "auto_hint": "yes",
    "guess_mode": "yes",
    "strength": "1",
    "W": 512,
    "H": 512,
    "guidance_scale": 7.5,
    "controlnet_conditioning_scale": "0.7",
    "seed": 465647573,
    "use_karras_sigmas": "yes",
    "tomesd": "yes",
    "init_image": "https://cdn.stablediffusionapi.com/generations/0-4957a91a-a45e-459e-b4cd-b3ca4013b847.png",
    "mask_image": null,
    "control_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
    "vae": null,
    "steps": 20,
    "full_url": "no",
    "upscale": "no",
    "n_samples": 1,
    "embeddings": null,
    "lora": "yae-miko-genshin,more_details",
    "lora_strength": 1,
    "temp": "no",
    "base64": "no",
    "clip_skip": 2,
    "file_prefix": "0dcf98b7-c397-4536-bd56-f0bf69c6ec1a.png"
  }
}
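Given the response format above, a caller can extract the generated image URLs as follows. This is a minimal sketch reusing requestOptions from the request example; how non-success statuses are shaped is an assumption:

```js
// Minimal sketch: parse the documented success response and list image URLs.
fetch("https://modelslab.com/api/v5/controlnet", requestOptions)
  .then((response) => response.json())
  .then((result) => {
    if (result.status === "success") {
      result.output.forEach((url) => console.log("Generated image:", url));
    } else {
      // Non-success statuses are assumed to carry diagnostic fields.
      console.error("Generation did not succeed:", result);
    }
  })
  .catch((error) => console.error("Request failed:", error));
```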