Enterprise: Video to Video Endpoint

Overview

The Video to Video endpoint generates a new video from an existing video.

The resolution of the output video is 1024x576.

Example result from the Video to Video endpoint
caution

Make sure you add your S3 details for the video server so the generated files are delivered to your bucket. Files generated without S3 details are deleted after 24 hours.

Request

curl --request POST 'https://modelslab.com/api/v1/enterprise/video/video2video'

Make a POST request to the https://modelslab.com/api/v1/enterprise/video/video2video endpoint and pass the required parameters in the request body.
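For example, the full call can be made with curl. This is a minimal sketch: the key value is a placeholder for your own API key, and the body mirrors the example shown later on this page.

curl --request POST 'https://modelslab.com/api/v1/enterprise/video/video2video' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "key": "YOUR_API_KEY",
    "model_id": "dark-sushi-mix",
    "prompt": "fox playing ukulele on a boat floating on magma flowing under the boat",
    "negative_prompt": "low quality",
    "init_video": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-output-1.gif",
    "num_inference_steps": 40,
    "guidance_scale": 9.5,
    "strength": 0.8
  }'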

Body Attributes

| Parameter | Description |
| --- | --- |
| key | Your API key, used for request authorization. |
| model_id | The ID of the model to use. Available models include dark-sushi-mix, epicrealismnaturalsi, and hellonijicute25d. |
| negative_prompt | Items you don't want in the video. |
| seed | Used to reproduce results; the same seed returns the same output again. Pass null for a random number. |
| height | Max height: 768px. |
| width | Max width: 768px. |
| init_video | Link to a valid mp4 or gif file to use as the initial video conditioning. |
| num_frames | Number of frames in the generated video. Max: 25. Defaults to 16. |
| num_inference_steps | Number of denoising steps. Max: 50. Defaults to 20. |
| guidance_scale | Scale for classifier-free guidance. |
| clip_skip | Number of CLIP layers to skip; 2 leads to more aesthetic results. Defaults to null. |
| strength | Amount of variation between the original video and the final video; higher values lead to more variation. Must be between 0 and 1. Defaults to 0.7. |
| output_type | Output format: mp4 or gif. |
| fps | Frame rate of the generated video, in frames per second. |
| lora_models | LoRA models to use with the model ID. Defaults to null. |
| lora_strength | Comma-separated LoRA strengths. Defaults to 1.0. |
| motion_loras | Motion LoRA models to use with the model ID. Defaults to null. |
| motion_lora_strength | Comma-separated motion LoRA strengths. Defaults to 1.0. |
| domain_lora_scale | AnimateDiff v3 domain LoRA scale. Defaults to 1.0. |
| adapter_lora | Motion adapter LoRA for AnimateDiff v3. Defaults to v2_sd15_adapter. |
| upscale_height | Height to upscale to during inference. Max: 1024. Defaults to None. |
| upscale_width | Width to upscale to during inference. Max: 1024. Defaults to None. |
| upscale_strength | Upscaling strength. Max: 1. Defaults to 0.6. |
| upscale_guidance_scale | Upscaling guidance scale. Defaults to 15.0. |
| upscale_num_inference_steps | Number of inference steps for upscaling. Defaults to 20. |
| motion_module | Motion module to use. Options include animatelcm, v2_sd15_mm, and animateDiff-lightning. Defaults to v2_sd15_mm. |
| instant_response | Set to true to instantly receive a response with future links for queued requests instead of waiting (see the polling sketch after the example response). Defaults to false. |
| temp | Set to true to store your generations on our temporary storage. Temporary files are cleaned every 24 hours. Defaults to false. |
| webhook | Set a URL to get a POST API call once the video generation is complete (see the receiver sketch after this table). |
| track_id | This ID is returned in the response to the webhook API call and is used to identify the webhook request. |
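When webhook is set, the API makes a POST call to that URL once generation completes. As a minimal Node.js receiver sketch (the payload shape here is an assumption; only track_id is documented as being echoed back):

var http = require("http");

// Minimal webhook receiver sketch. Assumption: the POST body is JSON and
// contains at least the documented track_id field.
http.createServer(function (req, res) {
  var body = "";
  req.on("data", function (chunk) { body += chunk; });
  req.on("end", function () {
    var payload = JSON.parse(body);
    console.log("generation complete, track_id:", payload.track_id);
    res.writeHead(200);
    res.end("ok");
  });
}).listen(8080);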

Example

Body

{
  "key": "",
  "model_id": "dark-sushi-mix",
  "prompt": "fox playing ukulele on a boat floating on magma flowing under the boat",
  "negative_prompt": "low quality",
  "init_video": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-output-1.gif",
  "clip_skip": 2,
  "num_inference_steps": 40,
  "use_improved_sampling": false,
  "guidance_scale": 9.5,
  "strength": 0.8,
  "base64": false,
  "webhook": null,
  "track_id": null
}

Request

var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

var raw = JSON.stringify({
  "key": "",
  "model_id": "dark-sushi-mix",
  "prompt": "fox playing ukulele on a boat floating on magma flowing under the boat",
  "negative_prompt": "low quality",
  "init_video": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-output-1.gif",
  "clip_skip": 2,
  "num_inference_steps": 40,
  "use_improved_sampling": false,
  "guidance_scale": 9.5,
  "strength": 0.8,
  "base64": false,
  "webhook": null,
  "track_id": null
});

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("https://modelslab.com/api/v1/enterprise/video/video2video", requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
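The snippet above runs as-is in modern browsers and in Node.js 18 or later, where fetch and Headers are available globally.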

Response

Example Response

{
  "status": "success",
  "generationTime": 8.49,
  "id": 500,
  "output": [
    "https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/05ffff8d-a0ba-4019-9df0-5de966c52ad5.gif"
  ],
  "proxy_links": [
    "https://cdn2.stablediffusionapi.com/generations/05ffff8d-a0ba-4019-9df0-5de966c52ad5.gif"
  ],
  "meta": {
    "base64": "no",
    "clip_skip": null,
    "file_prefix": "05ffff8d-a0ba-4019-9df0-5de966c52ad5",
    "fps": 7,
    "guidance_scale": 7,
    "height": 512,
    "init_video": "https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/af5057ce-2a53-4d8f-bdb7-f0e5a6d4064c.mp4",
    "instant_response": "no",
    "model_id": "midjourney",
    "negative_prompt": "low quality",
    "num_frames": 16,
    "num_inference_steps": 20,
    "output_type": "gif",
    "prompt": "An astronaut riding a horse",
    "seed": 3276424082,
    "strength": 0.7,
    "temp": "no",
    "width": 512
  }
}
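When instant_response is true, or while a queued request is still processing, the links in output and proxy_links may point to files that do not exist yet. A minimal sketch of polling such a link until the file is reachable (the retry count and delay are arbitrary choices, not documented values):

// Poll a returned link until the generated file is actually reachable.
// Sketch only: retry count and delay are assumptions, not documented values.
async function waitForOutput(url, retries, delayMs) {
  for (var i = 0; i < retries; i++) {
    var res = await fetch(url, { method: "HEAD" });
    if (res.ok) return url; // file exists; done
    await new Promise(function (resolve) { setTimeout(resolve, delayMs); });
  }
  throw new Error("Output was not ready after " + retries + " attempts");
}

waitForOutput("https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/05ffff8d-a0ba-4019-9df0-5de966c52ad5.gif", 30, 5000)
  .then(url => console.log("ready:", url))
  .catch(error => console.log('error', error));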