Enterprise: Controlnet Endpoint
Overview
This endpoint is used to generate ControlNet images.
You can also use this endpoint to inpaint images with ControlNet. Just make sure to pass a link to the mask image in the mask_image parameter and set controlnet_model to "inpaint" in the request body.
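As an illustration, a minimal inpainting request body might look like the sketch below. The API key and image URLs are placeholders, not working values; the parameter names follow the table in this document.

```python
import json

# Sketch of an inpainting request body (placeholder key and URLs).
payload = {
    "key": "your_enterprise_api_key",
    "model_id": "midjourney",
    "controlnet_model": "inpaint",   # selects the inpainting ControlNet
    "controlnet_type": "inpaint",
    "prompt": "a vase of flowers on a wooden table",
    "init_image": "https://example.com/room.png",
    "mask_image": "https://example.com/room-mask.png",  # mask marks the region to repaint
    "width": "512",
    "height": "512",
    "samples": "1",
}

# POST this to the endpoint as application/json, as in the request examples below.
body = json.dumps(payload)
```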
Request
Send a POST request to the https://modelslab.com/api/v1/enterprise/controlnet endpoint:

--request POST 'https://modelslab.com/api/v1/enterprise/controlnet'
Body Attributes
Parameter | Description |
---|---|
key | Your enterprise API Key used for request authorization |
model_id | The ID of the model to be used. It can be public or your trained model. |
controlnet_model | ControlNet model ID. It can be from the models list or user trained. |
controlnet_type | ControlNet model type. It can be from the models list. |
auto_hint | Automatically generate the hint image. Options: yes/no |
guess_mode | Set this to yes if you don't pass any prompt. The model will try to guess what's in the init_image and create best variations on its own. Options: yes/no |
prompt | Text prompt with description of required image modifications. Make it as detailed as possible for best results. |
negative_prompt | Items you don't want in the image |
init_image | Link to the Initial Image |
control_image | Link to the Controlnet Image |
mask_image | Link to the mask image for inpainting |
width | Image width. Maximum dimensions: 1024x1024 |
height | Image height. Maximum dimensions: 1024x1024 |
samples | Number of images to be returned in response. The maximum value is 4. |
scheduler | Use it to set a scheduler. |
tomesd | Enable tomesd (token merging) for faster image generation. Default: yes; options: yes/no |
use_karras_sigmas | Use Karras sigmas during generation; tends to give nicer results. Default: yes; options: yes/no |
algorithm_type | Used with the DPMSolverMultistepScheduler. Default: none; options: dpmsolver++ |
vae | Use a custom VAE for generating images. Default: null |
lora_strength | Strength of the LoRA model(s) you are using. When using multiple LoRA models, pass one comma-separated value per model, each between 0.1 and 1. |
lora_model | Multiple LoRA models are supported; pass comma-separated model IDs, e.g. contrast-fix,yae-miko-genshin |
num_inference_steps | Number of denoising steps. Accepted values: 21, 31. |
safety_checker | A checker for NSFW images. If such an image is detected, it will be replaced by a blank image. Default: yes; options: yes/no |
embeddings_model | Use it to pass an embeddings model. |
enhance_prompt | Enhance prompts for better results; default: yes, options: yes/no |
multi_lingual | Allow prompts in a language other than English. Default: yes; options: yes/no |
guidance_scale | Scale for classifier-free guidance (minimum: 1; maximum: 20) |
controlnet_conditioning_scale | Scale for controlnet guidance (minimum: 1; maximum: 20) |
strength | Prompt strength when using init_image. 1.0 corresponds to full destruction of information in the init image. |
seed | Seed is used to reproduce results, same seed will give you same image in return again. Pass null for a random number. |
webhook | Set a URL to receive a POST callback once image generation is complete. |
track_id | This ID is returned in the response to the webhook API call. This will be used to identify the webhook request. |
upscale | Set this parameter to "yes" if you want to upscale the given image resolution two times (2x). If the requested resolution is 512 x 512 px, the generated image will be 1024 x 1024 px. |
clip_skip | Clip Skip (minimum: 1; maximum: 8) |
base64 | Get the response as a base64 string; pass init_image, mask_image, and control_image as base64 strings to receive a base64 response. Default: "no"; options: yes/no |
temp | Create a temporary image link, valid for 24 hours. Options: yes/no |
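The webhook and track_id parameters above imply a small receiver on your side. As a sketch, the helper below extracts the fields such a receiver typically needs; the exact callback payload is an assumption modeled on the response format shown later in this document (status, output, and the echoed track_id), and the example values are illustrative.

```python
import json

def parse_webhook(body: bytes):
    """Extract the commonly needed fields from the webhook callback body.

    The field names are assumptions based on the documented response format.
    """
    data = json.loads(body)
    return data.get("track_id"), data.get("status"), data.get("output", [])

# Example callback body (illustrative values, not a real generation).
example = json.dumps({
    "status": "success",
    "track_id": "job-42",
    "output": ["https://example.com/result.png"],
}).encode()

track_id, status, output = parse_webhook(example)
```

The track_id lets you match each callback to the original request when several generations are in flight.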
To use the load balancer, you need more than one server. Pass the first server's API key, and it will handle load balancing across the other servers.
You can also use multi-ControlNet. Just make sure to pass comma-separated ControlNet model IDs to controlnet_model, such as "canny,depth", along with init_image in the request body.
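For example, a multi-ControlNet request body could be sketched as follows (placeholder key and image URL; note that the comma-separated value contains no spaces):

```python
# Sketch of a multi-ControlNet request body (placeholder key and URL).
payload = {
    "key": "your_enterprise_api_key",
    "model_id": "midjourney",
    "controlnet_model": "canny,depth",  # comma-separated, no spaces
    "controlnet_type": "canny",
    "prompt": "a model doing photoshoot, ultra high resolution",
    "init_image": "https://example.com/pose.png",
    "width": "512",
    "height": "512",
    "samples": "1",
}

# Each listed ControlNet model is applied during generation.
models = payload["controlnet_model"].split(",")
```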
ControlNet Models
The ControlNet API uses ControlNet 1.1 by default. Supported controlnet_model values:
- canny
- depth
- hed
- mlsd
- normal
- openpose
- scribble
- segmentation
- inpaint
- softedge
- lineart
- shuffle
- tile
- face_detector
- qrcode
Schedulers
This endpoint also supports schedulers. Use the "scheduler" parameter in the request body to pass a specific scheduler from the list below:
- DDPMScheduler
- DDIMScheduler
- PNDMScheduler
- LMSDiscreteScheduler
- EulerDiscreteScheduler
- EulerAncestralDiscreteScheduler
- DPMSolverMultistepScheduler
- HeunDiscreteScheduler
- KDPM2DiscreteScheduler
- DPMSolverSinglestepScheduler
- KDPM2AncestralDiscreteScheduler
- UniPCMultistepScheduler
- DDIMInverseScheduler
- DEISMultistepScheduler
- IPNDMScheduler
- KarrasVeScheduler
- ScoreSdeVeScheduler
- LCMScheduler
You can also use multiple LoRA models. Just make sure to pass comma-separated LoRA model IDs to lora_model, such as "more_details,animie", in the request body.
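The LoRA-related fields of a request body can be sketched as below. The model IDs come from the example above; the strengths are illustrative, and each comma-separated strength pairs with the model ID at the same position.

```python
# Sketch of the LoRA fields in a request body (placeholder key, illustrative strengths).
payload = {
    "key": "your_enterprise_api_key",
    "model_id": "midjourney",
    "prompt": "portrait, intricate details",
    "lora_model": "more_details,animie",  # comma-separated LoRA model IDs
    "lora_strength": "0.6,0.8",           # one strength per LoRA, 0.1 to 1
}

loras = payload["lora_model"].split(",")
strengths = [float(s) for s in payload["lora_strength"].split(",")]
assert len(loras) == len(strengths)  # strengths must pair with models
```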
Example
Body
{
"key": "enterprise_api_key",
"controlnet_model": "canny",
"controlnet_type" :"canny",
"model_id": "midjourney",
"auto_hint": "yes",
"guess_mode" : "no",
"prompt": "a model doing photoshoot, ultra high resolution, 4K image",
"negative_prompt": null,
"init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
"mask_image": null,
"width": "512",
"height": "512",
"samples": "1",
"scheduler": "UniPCMultistepScheduler",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"guidance_scale": 7.5,
"strength": 0.55,
"seed": null,
"webhook": null,
"track_id": null
}
Request
- JS
- PHP
- NODE
- PYTHON
- JAVA
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
var raw = JSON.stringify({
"key": "",
"controlnet_model": "canny",
"controlnet_type" :"canny",
"model_id": "midjourney",
"auto_hint": "yes",
"guess_mode" : "no",
"prompt": "a model doing photoshoot, ultra high resolution, 4K image",
"negative_prompt": null,
"init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
"mask_image": null,
"width": "512",
"height": "512",
"samples": "1",
"scheduler": "UniPCMultistepScheduler",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"guidance_scale": 7.5,
"strength": 0.55,
"seed": null,
"webhook": null,
"track_id": null
});
var requestOptions = {
method: 'POST',
headers: myHeaders,
body: raw,
redirect: 'follow'
};
fetch("https://modelslab.com/api/v1/enterprise/controlnet", requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
<?php
$payload = [
"key" => "",
"controlnet_model" => "canny",
"controlnet_type" => "canny",
"model_id" => "midjourney",
"auto_hint" => "yes",
"guess_mode" => "no",
"prompt" => "a model doing photoshoot, ultra high resolution, 4K image",
"negative_prompt" => null,
"init_image" => "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
"mask_image" => null,
"width" => "512",
"height" => "512",
"samples" => "1",
"scheduler" => "UniPCMultistepScheduler",
"num_inference_steps" => "30",
"safety_checker" => "no",
"enhance_prompt" => "yes",
"guidance_scale" => 7.5,
"strength" => 0.55,
"seed" => null,
"webhook" => null,
"track_id" => null
];
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => 'https://modelslab.com/api/v1/enterprise/controlnet',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => '',
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => 'POST',
CURLOPT_POSTFIELDS => json_encode($payload),
CURLOPT_HTTPHEADER => array(
'Content-Type: application/json'
),
));
$response = curl_exec($curl);
curl_close($curl);
echo $response;
var request = require('request');
var options = {
'method': 'POST',
'url': 'https://modelslab.com/api/v1/enterprise/controlnet',
'headers': {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"key": "",
"controlnet_model": "canny",
"controlnet_type": "canny",
"model_id": "midjourney",
"auto_hint": "yes",
"guess_mode": "no",
"prompt": "a model doing photoshoot, ultra high resolution, 4K image",
"negative_prompt": null,
"init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
"mask_image": null,
"width": "512",
"height": "512",
"samples": "1",
"scheduler": "UniPCMultistepScheduler",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"guidance_scale": 7.5,
"strength": 0.55,
"seed": null,
"webhook": null,
"track_id": null
})
};
request(options, function (error, response) {
if (error) throw new Error(error);
console.log(response.body);
});
import requests
import json
url = "https://modelslab.com/api/v1/enterprise/controlnet"
payload = json.dumps({
"key": "",
"controlnet_model": "canny",
"controlnet_type": "canny",
"model_id": "midjourney",
"auto_hint": "yes",
"guess_mode": "no",
"prompt": "a model doing photoshoot, ultra high resolution, 4K image",
"negative_prompt": None,
"init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
"mask_image": None,
"width": "512",
"height": "512",
"samples": "1",
"scheduler": "UniPCMultistepScheduler",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"guidance_scale": 7.5,
"strength": 0.55,
"seed": None,
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
OkHttpClient client = new OkHttpClient().newBuilder()
.build();
MediaType mediaType = MediaType.parse("application/json");
RequestBody body = RequestBody.create(mediaType, "{\n \"key\": \"\",\n \"controlnet_model\": \"canny\",\n \"controlnet_type\": \"canny\",\n \"model_id\": \"midjourney\",\n \"auto_hint\": \"yes\",\n \"guess_mode\": \"no\",\n \"prompt\": \"a model doing photoshoot, ultra high resolution, 4K image\",\n \"negative_prompt\": null,\n \"init_image\": \"https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png\",\n \"mask_image\": null,\n \"width\": \"512\",\n \"height\": \"512\",\n \"samples\": \"1\",\n \"scheduler\": \"UniPCMultistepScheduler\",\n \"num_inference_steps\": \"30\",\n \"safety_checker\": \"no\",\n \"enhance_prompt\": \"yes\",\n \"guidance_scale\": 7.5,\n \"strength\": 0.55,\n \"seed\": null,\n \"webhook\": null,\n \"track_id\": null\n}");
Request request = new Request.Builder()
.url("https://modelslab.com/api/v1/enterprise/controlnet")
.method("POST", body)
.addHeader("Content-Type", "application/json")
.build();
Response response = client.newCall(request).execute();
Response
{
"status": "success",
"generationTime": 3.6150574684143066,
"id": 14905468,
"output": [
"https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/b989586c-0a5f-41fa-91de-1c5ed5498349-0.png"
],
"meta": {
"prompt": "mdjrny-v4 style a model doing photoshoot, ultra high resolution, 4K image",
"model_id": "midjourney",
"controlnet_model": "canny",
"controlnet_type": "canny",
"negative_prompt": "",
"scheduler": "UniPCMultistepScheduler",
"safetychecker": "no",
"auto_hint": "yes",
"guess_mode": "no",
"strength": 0.55,
"W": 512,
"H": 512,
"guidance_scale": 3,
"seed": 254638058,
"multi_lingual": "no",
"init_image": "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png",
"mask_image": null,
"steps": 20,
"full_url": "no",
"upscale": "no",
"n_samples": 1,
"embeddings": null,
"lora": null,
"outdir": "out",
"file_prefix": "b989586c-0a5f-41fa-91de-1c5ed5498349"
}
}