Text to Video Ultra Endpoint

Overview

The Text to Video Ultra endpoint generates high-quality, high-resolution HD video from a given text prompt.

Each API call costs $0.20, which is equivalent to 31 credits on the Basic Plan (3,250 API credits) or 43 credits on the Standard Plan (10,000 API credits).
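At those rates, the Basic Plan's 3,250 credits cover roughly 104 Ultra calls (3,250 / 31 ≈ 104), and the Standard Plan's 10,000 credits cover roughly 232 (10,000 / 43 ≈ 232).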

Experience the next level of AI-driven video creation with Text-to-Video Ultra today! 🚀

Open in Playground 🚀

Example Video Generation

Request

curl --request POST 'https://modelslab.com/api/v6/video/text2video_ultra' \
     --header 'Content-Type: application/json'

Make a POST request to the https://modelslab.com/api/v6/video/text2video_ultra endpoint and pass the required parameters in the request body.
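For example, a complete command-line call might look like the following sketch; the body reuses the example values shown later on this page, and you would substitute your own API key:

curl --request POST 'https://modelslab.com/api/v6/video/text2video_ultra' \
     --header 'Content-Type: application/json' \
     --data-raw '{
       "key": "",
       "prompt": "Space Station in space",
       "negative_prompt": "low quality",
       "resolution": 480,
       "num_frames": 81,
       "num_inference_steps": 30,
       "guidance_scale": 5.0,
       "fps": 16
     }'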

Body Attributes

| Parameter | Description | Values |
| --- | --- | --- |
| key | Your API key, used for request authorization. | key |
| prompt | Text prompt describing what you want in the generated video. | String |
| negative_prompt | Items you don't want in the video. | String |
| seed | Used to reproduce results; the same seed returns the same output again. Pass null for a random seed. | Integer or null |
| resolution | Resolution of the generated output. | Integer (Max: 480) |
| num_frames | Number of frames in the generated video. | Integer (Default: 81) |
| num_inference_steps | Number of denoising steps. | Integer (Default: 25, Max: 30) |
| guidance_scale | Scale for classifier-free guidance. | Float (Min: 0.0, Max: 8.0) |
| fps | Frames per second of the generated video; should be less than num_frames. | Integer (Max: 16) |
| portrait | Whether the output should be in portrait mode. | Boolean (Default: false) |
| sample_shift | Controls the sampling shift in the generation process. | Integer (Default: 5, Max: 10) |
| temp | If true, stores the video in temporary storage (cleaned every 24 hours). | Boolean (Default: false) |
| webhook | A URL to receive a POST API call once the video generation is complete. | URL |
| track_id | A unique ID used in the webhook response to identify the request. | String |
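Several of these parameters have hard limits. As a quick client-side sanity check, the sketch below validates the limits listed in the table above; validateUltraParams is a hypothetical helper, not part of the API.

// Minimal sketch: client-side validation of the limits listed in the table above.
// validateUltraParams is a hypothetical helper, not part of the API.
function validateUltraParams({ resolution, num_frames, num_inference_steps, guidance_scale, fps }) {
  const errors = [];
  if (resolution > 480) errors.push("resolution must be at most 480");
  if (num_inference_steps > 30) errors.push("num_inference_steps must be at most 30");
  if (guidance_scale < 0.0 || guidance_scale > 8.0) errors.push("guidance_scale must be between 0.0 and 8.0");
  if (fps > 16) errors.push("fps must be at most 16");
  if (fps >= num_frames) errors.push("fps should be less than num_frames");
  return errors;
}

// Example: the default 81 frames at 16 fps is roughly a 5-second clip.
console.log(validateUltraParams({
  resolution: 480, num_frames: 81, num_inference_steps: 30, guidance_scale: 5.0, fps: 16
})); // -> []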

Example

An example request body for this endpoint looks like this:

Body
{
  "key": "",
  "prompt": "Space Station in space",
  "negative_prompt": "low quality",
  "resolution": 480,
  "num_frames": 81,
  "num_inference_steps": 30,
  "guidance_scale": 5.0,
  "sample_shift": 3,
  "fps": 16,
  "webhook": null,
  "track_id": null
}

Request

// Request headers: the body is sent as JSON.
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

// Request body with the parameters described in the table above.
var raw = JSON.stringify({
  "key": "",
  "prompt": "Space Station in space",
  "negative_prompt": "low quality",
  "resolution": 480,
  "num_frames": 81,
  "num_inference_steps": 30,
  "guidance_scale": 5.0,
  "sample_shift": 3,
  "fps": 16,
  "webhook": null,
  "track_id": null
});

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("https://modelslab.com/api/v6/video/text2video_ultra", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
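Because the endpoint returns JSON, you may prefer response.json() over response.text(). The variation below is a sketch that reuses requestOptions from the snippet above and pulls out the first video URL, assuming the response shape shown in the next section:

// Sketch: parse the JSON response and log the first generated video URL.
// Reuses requestOptions from the example above; the field names follow
// the sample response shown below.
fetch("https://modelslab.com/api/v6/video/text2video_ultra", requestOptions)
  .then(response => response.json())
  .then(result => {
    if (result.status === "success" && Array.isArray(result.output) && result.output.length > 0) {
      console.log("Video URL:", result.output[0]);
    } else {
      console.log("Unexpected response:", result);
    }
  })
  .catch(error => console.log('error', error));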

Response

{
  "status": "success",
  "generationTime": 4.49,
  "id": 147,
  "output": [
    "https://modelslab-bom.s3.amazonaws.com/generations/09e6f35f-2a47-4271-82a5-9cfff8bbc43a.mp4"
  ],
  "proxy_links": [
    "https://modelslab-bom.s3.amazonaws.com/generations/09e6f35f-2a47-4271-82a5-9cfff8bbc43a.mp4"
  ],
  "meta": {
    "base64": "no",
    "file_prefix": "a51498bf-0eee-44c5-a473-81cc867b288a",
    "fps": 16,
    "guidance_scale": 5,
    "id": null,
    "instant_response": "no",
    "negative_prompt": "low quality",
    "num_frames": 81,
    "num_inference_steps": 30,
    "opacity": 0.8,
    "output_type": "mp4",
    "padding_down": 10,
    "padding_right": 10,
    "portrait": "no",
    "prompt": "Space Station in space",
    "rescale": "yes",
    "resolution": 480,
    "sample_shift": 3,
    "scale_down": 6,
    "seed": 3644269522,
    "temp": "no",
    "track_id": null,
    "watermark": "no",
    "webhook": null
  },
  "future_links": [
    "https://modelslab-bom.s3.amazonaws.com/generations/09e6f35f-2a47-4271-82a5-9cfff8bbc43a.mp4"
  ]
}
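If you supply a webhook URL, the API sends a POST request to it once generation completes. The exact webhook payload is not documented on this page; the receiver below is a minimal sketch using Node's built-in http module that assumes the payload resembles the response above and simply logs track_id and the output links:

// Minimal webhook receiver sketch using Node's built-in http module.
// The payload shape is an assumption based on the sample response above.
const http = require("http");

http.createServer((req, res) => {
  let body = "";
  req.on("data", chunk => (body += chunk));
  req.on("end", () => {
    try {
      const payload = JSON.parse(body);
      console.log("track_id:", payload.track_id, "output:", payload.output);
    } catch (e) {
      console.log("Could not parse webhook body:", e);
    }
    res.writeHead(200);
    res.end("ok");
  });
}).listen(3000);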