
Enterprise: Controlnet Endpoint

Overview

This endpoint is used to generate ControlNet images.

[Example ControlNet image]
tip

You can also use this endpoint to inpaint images with ControlNet. Just pass a link to the mask_image in the request body and set controlnet_model to "inpaint".
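As a sketch, an inpainting request body could look like the following. The image URLs and the prompt here are placeholders; the remaining attributes follow the standard body described below:

```javascript
// Hypothetical inpainting request body; URLs below are placeholders.
const inpaintBody = {
  key: "enterprise_api_key",
  model_id: "midjourney",
  controlnet_model: "inpaint",        // switches ControlNet into inpainting mode
  controlnet_type: "inpaint",
  prompt: "replace the masked area with a wooden door",
  init_image: "https://example.com/room.png",       // image to edit (placeholder)
  mask_image: "https://example.com/room-mask.png",  // region to repaint (placeholder)
  width: "512",
  height: "512",
  samples: "1"
};

console.log(JSON.stringify(inpaintBody, null, 2));
```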

Request

Send a POST request to the https://modelslab.com/api/v1/enterprise/controlnet endpoint:

--request POST 'https://modelslab.com/api/v1/enterprise/controlnet' \

Body Attributes

  • key: Your enterprise API key, used for request authorization.
  • model_id: The ID of the model to use. It can be a public model or your trained model.
  • controlnet_model: ControlNet model ID. It can be from the models list or a user-trained model.
  • controlnet_type: ControlNet model type. It can be from the models list.
  • auto_hint: Auto-hint the image; options: yes/no.
  • guess_mode: Set to yes if you don't pass a prompt. The model will try to guess the contents of init_image and create the best variations on its own. Options: yes/no.
  • prompt: Text prompt describing the required image modifications. Make it as detailed as possible for best results.
  • negative_prompt: Items you don't want in the image.
  • init_image: Link to the initial image.
  • control_image: Link to the ControlNet image.
  • mask_image: Link to the mask image, used for inpainting.
  • width: Width of the image. Maximum: 1024.
  • height: Height of the image. Maximum: 1024.
  • samples: Number of images to return in the response. Maximum: 4.
  • scheduler: Use it to set a scheduler.
  • tomesd: Enable tomesd to generate images; gives very fast results. Default: yes, options: yes/no.
  • use_karras_sigmas: Use Karras sigmas to generate images; gives nice results. Default: yes, options: yes/no.
  • algorithm_type: Used with the DPMSolverMultistepScheduler scheduler. Default: none, options: dpmsolver++.
  • vae: Use a custom VAE when generating images. Default: null.
  • lora_strength: Strength of the LoRA model you're using. If using multiple LoRAs, provide a comma-separated value for each, from a minimum of 0.1 to a maximum of 1.
  • lora_model: Multi-LoRA is supported; pass comma-separated model IDs, e.g. contrast-fix,yae-miko-genshin.
  • num_inference_steps: Number of denoising steps. Accepted values: 21, 31.
  • safety_checker: Checker for NSFW images. If such an image is detected, it will be replaced by a blank image. Default: yes, options: yes/no.
  • embeddings_model: Use it to pass an embeddings model.
  • enhance_prompt: Enhance the prompt for better results. Default: yes, options: yes/no.
  • multi_lingual: Use a language other than English. Default: yes, options: yes/no.
  • guidance_scale: Scale for classifier-free guidance (minimum: 1; maximum: 20).
  • controlnet_conditioning_scale: Scale for ControlNet guidance (minimum: 1; maximum: 20).
  • strength: Prompt strength when using init_image; 1.0 corresponds to full destruction of the information in the init image.
  • seed: Used to reproduce results; the same seed will return the same image again. Pass null for a random seed.
  • webhook: Set a URL to receive a POST API call once image generation is complete.
  • track_id: This ID is returned in the response to the webhook API call and is used to identify the webhook request.
  • upscale: Set to "yes" to upscale the given image resolution two times (2x). If the requested resolution is 512 x 512 px, the generated image will be 1024 x 1024 px.
  • clip_skip: Clip skip (minimum: 1; maximum: 8).
  • base64: Get the response as a base64 string; pass init_image, mask_image, and control_image as base64 strings to get a base64 response. Default: "no", options: yes/no.
  • temp: Create a temporary image link, valid for 24 hours. Options: yes/no.
info

To use the load balancer, you need more than one server. Pass the first server's API key, and it will handle load balancing across the other servers.

tip

You can also use multiple ControlNet models. Just pass comma-separated model IDs to controlnet_model, e.g. "canny,depth", along with init_image in the request body.
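A sketch of such a multi-ControlNet request body (the init_image URL is a placeholder, and controlnet_type is assumed to stay a single value as in the example below):

```javascript
// Two ControlNet models applied in one request, as a comma-separated string.
const multiControlnetBody = {
  key: "enterprise_api_key",
  model_id: "midjourney",
  controlnet_model: "canny,depth",  // multiple model IDs, comma-separated
  controlnet_type: "canny",
  prompt: "a model doing photoshoot, ultra high resolution",
  init_image: "https://example.com/reference.png",  // placeholder URL
  width: "512",
  height: "512",
  samples: "1"
};

// The endpoint receives the model list as a single comma-separated string.
console.log(multiControlnetBody.controlnet_model.split(","));
```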

ControlNet Models

The ControlNet API uses ControlNet 1.1 by default. Supported controlnet_model values:

  • canny
  • depth
  • hed
  • mlsd
  • normal
  • openpose
  • scribble
  • segmentation
  • inpaint
  • softedge
  • lineart
  • shuffle
  • tile
  • face_detector
  • qrcode

Schedulers

This endpoint also supports schedulers. Use the "scheduler" parameter in the request body to pass a specific scheduler from the list below:

  • DDPMScheduler
  • DDIMScheduler
  • PNDMScheduler
  • LMSDiscreteScheduler
  • EulerDiscreteScheduler
  • EulerAncestralDiscreteScheduler
  • DPMSolverMultistepScheduler
  • HeunDiscreteScheduler
  • KDPM2DiscreteScheduler
  • DPMSolverSinglestepScheduler
  • KDPM2AncestralDiscreteScheduler
  • UniPCMultistepScheduler
  • DDIMInverseScheduler
  • DEISMultistepScheduler
  • IPNDMScheduler
  • KarrasVeScheduler
  • ScoreSdeVeScheduler
  • LCMScheduler
tip

You can also use multiple LoRA models. Just pass comma-separated LoRA model IDs to lora_model, e.g. "more_details,animie", in the request body.
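A sketch of the matching body fields, pairing each LoRA ID with a strength via lora_strength (the strength values here are arbitrary examples):

```javascript
// Comma-separated LoRA IDs paired with per-model strengths (0.1 to 1).
const loraFields = {
  lora_model: "more_details,animie",  // IDs taken from the tip above
  lora_strength: "0.6,0.8"            // one strength per LoRA, same order
};

// Both fields are plain comma-separated strings; positions line up.
const models = loraFields.lora_model.split(",");
const strengths = loraFields.lora_strength.split(",").map(Number);
models.forEach((model, i) => console.log(`${model} -> strength ${strengths[i]}`));
```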

Example

Body

{
  "key": "enterprise_api_key",
  "controlnet_model": "canny",
  "controlnet_type": "canny",
  "model_id": "midjourney",
  "auto_hint": "yes",
  "guess_mode": "no",
  "prompt": "a model doing photoshoot, ultra high resolution, 4K image",
  "negative_prompt": null,
  "init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
  "mask_image": null,
  "width": "512",
  "height": "512",
  "samples": "1",
  "scheduler": "UniPCMultistepScheduler",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "guidance_scale": 7.5,
  "strength": 0.55,
  "seed": null,
  "webhook": null,
  "track_id": null
}

Request

var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

var raw = JSON.stringify({
  "key": "",
  "controlnet_model": "canny",
  "controlnet_type": "canny",
  "model_id": "midjourney",
  "auto_hint": "yes",
  "guess_mode": "no",
  "prompt": "a model doing photoshoot, ultra high resolution, 4K image",
  "negative_prompt": null,
  "init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
  "mask_image": null,
  "width": "512",
  "height": "512",
  "samples": "1",
  "scheduler": "UniPCMultistepScheduler",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "guidance_scale": 7.5,
  "strength": 0.55,
  "seed": null,
  "webhook": null,
  "track_id": null
});

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("https://modelslab.com/api/v1/enterprise/controlnet", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));

Response

{
  "status": "success",
  "generationTime": 3.6150574684143066,
  "id": 14905468,
  "output": [
    "https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/b989586c-0a5f-41fa-91de-1c5ed5498349-0.png"
  ],
  "meta": {
    "prompt": "mdjrny-v4 style a model doing photoshoot, ultra high resolution, 4K image",
    "model_id": "midjourney",
    "controlnet_model": "canny",
    "controlnet_type": "canny",
    "negative_prompt": "",
    "scheduler": "UniPCMultistepScheduler",
    "safetychecker": "no",
    "auto_hint": "yes",
    "guess_mode": "no",
    "strength": 0.55,
    "W": 512,
    "H": 512,
    "guidance_scale": 3,
    "seed": 254638058,
    "multi_lingual": "no",
    "init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
    "mask_image": null,
    "steps": 20,
    "full_url": "no",
    "upscale": "no",
    "n_samples": 1,
    "embeddings": null,
    "lora": null,
    "outdir": "out",
    "file_prefix": "b989586c-0a5f-41fa-91de-1c5ed5498349"
  }
}
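On the client side you would typically check status and read the generated image URLs from the output array. A minimal sketch, assuming a response shaped like the example above (the URL here is a placeholder):

```javascript
// Minimal handling of a response shaped like the example above.
const response = {
  status: "success",
  generationTime: 3.61,
  id: 14905468,
  output: ["https://example.com/generations/image-0.png"],  // placeholder URL
  meta: { seed: 254638058 }
};

if (response.status === "success") {
  for (const url of response.output) {
    console.log("generated image:", url);
  }
  // The seed from meta can be passed back to reproduce the same image.
  console.log("reuse this seed to reproduce:", response.meta.seed);
}
```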