Train a LoRA Model with Custom Images
Overview
Use this endpoint to train a LoRA model on your own images. You can train a model on any object or person.
Request
Make a POST request to the https://modelslab.com/api/v3/lora_fine_tune endpoint and pass the required parameters in the request body.

curl --request POST 'https://modelslab.com/api/v3/lora_fine_tune' \
  --header 'Content-Type: application/json'
For now, you can only train on the "normal" LoRA and "sdxl" base model types and capture their style.
Body Attributes
| Parameter | Description | Values |
|---|---|---|
| key | Your API key, used for request authorization. | key |
| instance_prompt | Text prompt with the name you want to use to refer to your trained person/object. | string |
| class_prompt | Classification of the trained person/object. | person/object |
| base_model_type | The type of LoRA base model you want to train on. | "normal" or "sdxl" |
| negative_prompt | Items you don't want in the image. | string |
| images | Accessible direct links to images, cropped to 512x512 pixels. About 7-8 images is a good number. | URL(s) |
| training_type | The type of subject you are training on. | "men", "female", "couple", "null" |
| lora_type | Type of LoRA model. | "lora" or "lycoris" |
| max_train_steps | Set to 2 times the number of images (Ni*2); the minimum value is 10 and the maximum is 50. A helper that applies this rule is sketched after the table. | integer |
| webhook | Set a URL to receive a POST call when training is complete. | URL |
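As a concrete illustration of the rules above, here is a minimal Python sketch that assembles a request body from a list of image URLs. The function name, the prompt wording, and the placeholder defaults are only illustrative; the parameter names and the max_train_steps rule (2 times the number of images, kept within 10-50) come from the table.

def build_lora_training_body(api_key, instance_token, class_prompt, image_urls,
                             base_model_type="sdxl", training_type="men",
                             lora_type="lora", webhook=None):
    # max_train_steps: 2 * number of images, clamped to the documented 10-50 range
    max_train_steps = min(max(2 * len(image_urls), 10), 50)
    return {
        "key": api_key,
        "instance_prompt": f"photo of {instance_token}",  # how you want to call the subject
        "class_prompt": class_prompt,                     # e.g. "photo of a man"
        "base_model_type": base_model_type,               # "normal" or "sdxl"
        "negative_prompt": "lowres, bad anatomy, bad hands, text, error, blurry",
        "images": image_urls,                             # direct links to 512x512 images, ~7-8 is a good number
        "training_type": training_type,                   # "men", "female", "couple" or "null"
        "lora_type": lora_type,                           # "lora" or "lycoris"
        "max_train_steps": str(max_train_steps),
        "webhook": webhook,
    }

With the 9 sample images used in the example below, this yields max_train_steps = 18, which matches the example body.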
Training Types
The table below lists all the possible values for the training_type parameter; a small helper for choosing a value is sketched after it.
| Value | Description |
|---|---|
| men | Train on faces of men. |
| female | Train on faces of females. |
| couple | Train on couples of a male and a female; in the images array, pass images of couples instead of images of a single person. |
| null | Train on an object or anything else. |
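If you build request bodies programmatically, you might map a rough subject description to one of these values. The mapping below is only one reasonable convention (a sketch); the four accepted values themselves come from the table.

# Allowed training_type values, as listed in the table above.
TRAINING_TYPES = {"men", "female", "couple", "null"}

def pick_training_type(subject: str) -> str:
    # Map a rough subject description to a documented training_type value.
    subject = subject.lower()
    if "couple" in subject:
        return "couple"
    if subject in ("man", "men", "male"):
        return "men"
    if subject in ("woman", "women", "female"):
        return "female"
    return "null"  # objects or anything else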
Webhook Post JSON
This is an example of the webhook POST body in JSON format; a minimal receiver sketch follows it.

{
    "status": "success",
    "training_status": "deploying_gpu",
    "logs": "it will take upto 25 minutes",
    "model_id": "F5jvdzGnYi"
}
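If you set the webhook parameter, your server needs an endpoint that can accept this POST body. Below is a minimal sketch using Flask (the framework choice, the /lora-webhook route, and the handling logic are assumptions for illustration; any web framework that can read a JSON POST works).

from flask import Flask, request

app = Flask(__name__)

@app.route("/lora-webhook", methods=["POST"])
def lora_webhook():
    payload = request.get_json(force=True)
    # Fields as shown in the example payload above.
    model_id = payload.get("model_id")
    training_status = payload.get("training_status")
    print(f"model {model_id} is now: {training_status}")
    if training_status == "model_ready":
        # The trained model can now be used; store model_id for later generation requests.
        pass
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=8000)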
Training Status Values
The table below describes all possible training statuses; a lookup you can use in code follows it.
| Status | Description |
|---|---|
| deploying_gpu | Deploying GPU. |
| training_started | Training started. |
| training_success | Training completed successfully. |
| trained_model_compressing | Compressing the trained model. |
| trained_model_uploading | Uploading the trained model. |
| trained_model_uploaded | Trained model uploaded. |
| deploying_model | Deploying the trained model. |
| model_ready | The trained model is ready for use. |
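When logging webhook updates, it can be handy to have these statuses available in code. The dictionary below is a direct transcription of the table; the helper name is only illustrative.

# Training statuses and their meanings, transcribed from the table above.
STATUS_DESCRIPTIONS = {
    "deploying_gpu": "Deploying GPU",
    "training_started": "Training started",
    "training_success": "Training completed successfully",
    "trained_model_compressing": "Compressing the trained model",
    "trained_model_uploading": "Uploading the trained model",
    "trained_model_uploaded": "Trained model uploaded",
    "deploying_model": "Deploying the trained model",
    "model_ready": "The trained model is ready for use",
}

def describe_status(training_status: str) -> str:
    # Human-readable description for a status reported via the webhook.
    return STATUS_DESCRIPTIONS.get(training_status, "Unknown status: " + training_status)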
Example
Body
{
    "key": "",
    "instance_prompt": "photo of ambika0 man",
    "class_prompt": "photo of a man",
    "base_model_type": "sdxl",
    "negative_prompt": " lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry",
    "images": [
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/1.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/2.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/3.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/4.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/5.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/6.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/7.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/8.png",
        "https://raw.githubusercontent.com/pnavitha/sampleImages/master/9.png"
    ],
    "seed": "0",
    "training_type": "men",
    "max_train_steps": "18",
    "lora_type": "lora",
    "webhook": null
}
Request
Example requests are shown below in JS, PHP, Node, Python, and Java.
JS

var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
var raw = JSON.stringify({
"key":"",
"instance_prompt": "photo of ambika0 man",
"class_prompt": "photo of a man",
"base_model_type": "sdxl",
"negative_prompt":" lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry",
"images": [
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/1.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/2.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/3.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/4.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/5.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/6.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/7.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/8.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/9.png"
],
"seed": "0",
"training_type": "men",
"max_train_steps": "18",
"lora_type":"lora",
"webhook": null
});
var requestOptions = {
method: 'POST',
headers: myHeaders,
body: raw,
redirect: 'follow'
};
fetch("https://modelslab.com/api/v3/lora_fine_tune", requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
PHP

<?php
$payload = [
"key" => "",
"instance_prompt" => "photo of ambika0 man",
"class_prompt" => "photo of person",
"base_model_type" => "sdxl",
"negative_prompt" => " lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry",
"images" => [
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/1.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/2.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/3.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/4.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/5.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/6.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/7.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/8.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/9.png"
],
"seed" => "0",
"training_type" => "men",
"lora_type":"lora",
"max_train_steps" => "18",
"webhook" => ""
];
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => 'https://modelslab.com/api/v3/lora_fine_tune',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => '',
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => 'POST',
CURLOPT_POSTFIELDS => json_encode($payload),
CURLOPT_HTTPHEADER => array(
'Content-Type: application/json'
),
));
$response = curl_exec($curl);
curl_close($curl);
echo $response;
NODE

var request = require('request');
var options = {
'method': 'POST',
'url': 'https://modelslab.com/api/v3/lora_fine_tune',
'headers': {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"key":"",
"instance_prompt": "photo of ambika0 man",
"class_prompt": "photo of a man",
"base_model_type": "sdxl",
"negative_prompt":" lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry",
"images": [
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/1.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/2.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/3.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/4.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/5.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/6.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/7.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/8.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/9.png"
],
"seed": "0",
"training_type": "men",
"max_train_steps": "18",
"lora_type":"lora",
"webhook": null
})
};
request(options, function (error, response) {
if (error) throw new Error(error);
console.log(response.body);
});
PYTHON

import requests
import json
url = "https://modelslab.com/api/v3/lora_fine_tune"
payload = json.dumps({
"key":"",
"instance_prompt": "photo of ambika0 man",
"class_prompt": "photo of a man",
"base_model_type": "sdxl",
"negative_prompt":" lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry",
"images": [
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/1.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/2.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/3.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/4.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/5.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/6.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/7.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/8.png",
"https://raw.githubusercontent.com/pnavitha/sampleImages/master/9.png"
],
"seed": "0",
"training_type": "men",
"max_train_steps": "18",
"lora_type":"lora",
"webhook": "",
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
JAVA

OkHttpClient client = new OkHttpClient().newBuilder()
.build();
MediaType mediaType = MediaType.parse("application/json");
RequestBody body = RequestBody.create(mediaType, "{\n \"key\":\"\",\n \"instance_prompt\": \"photo of ambika0 man\",\n \"class_prompt\": \"photo of a man\",\n \"base_model_type\": \"normal\",\n \"negative_prompt\":\" lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry\",\n \"images\": [\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/1.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/2.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/3.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/4.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/5.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/6.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/7.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/8.png\",\n \"https://raw.githubusercontent.com/pnavitha/sampleImages/master/9.png\"\n ],\n \"seed\": \"0\",\n \"training_type\": \"men\",\n \"max_train_steps\": \"18\",\n \"lora_type\":\"lora\",\n \"webhook\": null\n}");
Request request = new Request.Builder()
.url("https://modelslab.com/api/v3/lora_fine_tune")
.method("POST", body)
.addHeader("Content-Type", "application/json")
.build();
Response response = client.newCall(request).execute();
Response
{
    "status": "success",
    "messege": "deploying_gpu",
    "data": "it will take upto 30 minutes.",
    "training_id": "F5jvdzGnYi"
}