Chat Endpoint
Overview
The chat endpoint lets you hold an unrestricted, uncensored conversation on any topic. Select the model you want to use by passing its model_id in the request body.
Request
Make a POST request to the https://modelslab.com/api/v6/llm/chat endpoint and pass the required parameters in the request body.
Body Attributes
Parameter | Description | Values |
---|---|---|
key | Your unique API key used for authorization. | string |
model_id | The ID of the LLM model you want to use. | string |
chat_id | The ID of the chat you want to associate with your conversation. | ID |
system_prompt | Optional. The persona the model should follow. Defaults to a helpful prompt. | string |
prompt | The topic or content you would like to discuss in the multi-turn conversation. | string |
max_new_tokens | The maximum number of tokens to generate, ignoring those in the prompt. | integer (Max: 1024, Default: 128) |
do_sample | Whether or not to use sampling. If not, uses greedy decoding. | true or false |
temperature | The value used to modulate the next token probabilities. Should be between 0.1 and 0.3. | float (Default: 0.3) |
top_k | The number of highest probability vocabulary tokens to keep for top-k-filtering. | integer (Max: 128, Default: 50) |
top_p | The smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. | float (Default: 0.95) |
no_repeat_ngram_size | If set to an integer greater than 0, all ngrams of that size can only occur once. | integer (Default: 5) |
seed | The seed used to reproduce results. Pass null for a random number. | integral value |
temp | Whether or not to send proxy links. | true or false (Default: false ) |
generator_type | The expected data type of the output. | json , choices , format , text |
generator_choices | Depends on generator_type . If choices , provide categorical choices; if json , provide JSON schema; if format , provide data type. | string |
reset | If true , resets the conversation with a new system prompt, forgetting all previous context. | true or false |
uncensored_system_prompt | Initialize conversation with an uncensored system prompt. | true or false |
webhook | Provide a URL to receive a POST API call once the operation is complete. | url |
track_id | This ID identifies the webhook request in the response. | integral value |
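The generator_type and generator_choices parameters work together to constrain the model's output. As a minimal sketch (the helper name build_choices_payload is hypothetical, and the comma-separated encoding of generator_choices is an assumption not confirmed by the table above), a request that restricts the model to a fixed set of answers could be built like this:

```python
# Sketch: building a constrained-output request body for the chat endpoint.
# ASSUMPTION: generator_choices is passed as a comma-separated string; the
# table above only says "provide categorical choices" as a string.

def build_choices_payload(key, model_id, prompt, choices):
    """Return a request body asking the model to answer with one of `choices`."""
    return {
        "key": key,
        "model_id": model_id,
        "prompt": prompt,
        "generator_type": "choices",             # constrain output to a fixed set
        "generator_choices": ",".join(choices),  # assumed comma-separated encoding
        "max_new_tokens": 16,
    }

payload = build_choices_payload(
    "YOUR_API_KEY", "zephyr-7b-beta",
    "Is this review positive or negative? 'Great product!'",
    ["positive", "negative"],
)
# POST this payload to https://modelslab.com/api/v6/llm/chat.
```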
Start Conversation
When starting a chat conversation, pass the system_prompt so the model knows which persona to follow. A chat_id is generated on the first request, so you do not need to pass one when starting a chat. Below is a sample request body for starting a chat:
{
"key": "",
"model_id" : "zephyr-7b-beta",
"system_prompt": "Gold Mining around the world",
"prompt" : "Write a step by step guide on how to mine a gold",
"max_new_tokens": 64,
"do_sample": true,
"temperature": 1,
"top_k": 50,
"top_p": 10,
"no_repeat_ngram_size": 5,
"seed": 1235,
"temp": false,
"webhook": null,
"track_id" : null
}
Once the endpoint is called, a sample response looks like this:
{
"status": "success",
"output": [
"https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/chats/49afc793-803b-4b4a-bac9-d2f9318c2b8f.json"
],
"message": "Please note that gold mining is a complex and dangerous process that requires specialized equipment, expertise, and regulatory permits. This guide provides a general overview of the steps involved in small-scale gold mining and shouldnot be interpreted as a substitute for professional guidance and safety precautions.\n\nStep 1: Research the",
"chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"meta": {
"chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"created_at": "2023-11-27T02:48:47.911559",
"do_sample": "yes",
"max_new_tokens": 64,
"model_id": "zephyr-7b-beta",
"no_repeat_ngram_size": 5,
"num_return_sequences": 1,
"pipeline_tag": "text-generation",
"prompt": "Write a step by step guide on how to mine a gold",
"seed": 1235,
"temp": "no",
"temperature": 1,
"top_k": 50,
"top_p": 1,
"updated_at": "2023-11-27T02:48:52.942389"
}
}
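The response carries two pieces you will want to keep: the chat_id (needed to continue the conversation) and the URL in output, which points to the full chat transcript as JSON. A small sketch of extracting both from a parsed response follows; the transcript's internal layout is not documented on this page, so treat it as opaque and fetch it with any HTTP client:

```python
import json

# Sketch: pulling the chat_id and transcript URL out of a chat response.
# The body below is abbreviated from the sample response in the docs.
response_body = json.loads("""
{
  "status": "success",
  "output": ["https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/chats/49afc793-803b-4b4a-bac9-d2f9318c2b8f.json"],
  "chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f"
}
""")

chat_id = response_body["chat_id"]           # reuse this to continue the chat
transcript_url = response_body["output"][0]  # full conversation history (JSON)
```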
Continue Chat Conversation
The response above returns a chat_id. To continue the chat, pass this chat_id along with the other parameters to the endpoint; the system_prompt is not required here. Below is a sample JSON body:
{
"key": "",
"chat_id":"49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"model_id" : "zephyr-7b-beta",
"prompt" : "Tell me the dangers involved in this process",
"max_new_tokens": 64,
"do_sample": true,
"temperature": 1,
"top_k": 50,
"top_p": 10,
"no_repeat_ngram_size": 5,
"seed": 1235,
"temp": false,
"webhook": null,
"track_id" : null
}
The response to the above request looks like this:
{
"status": "success",
"output": [
"https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/chats/49afc793-803b-4b4a-bac9-d2f9318c2b8f.json"
],
"message": "Certainly, here are some of the dangers involved in gold mining:\n\n1. Collapsed Mines: Gold mining can take place in underground tunnels, which can sometimes collapse, trapping and injuring miners.\n\n2. Poisonous Gases: Mines can sometimes contain poison",
"chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"meta": {
"chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"created_at": "2023-11-27T02:48:47.911559",
"do_sample": "yes",
"max_new_tokens": 64,
"model_id": "zephyr-7b-beta",
"no_repeat_ngram_size": 5,
"num_return_sequences": 1,
"pipeline_tag": "text-generation",
"prompt": "Tell me the dangers involved in this process",
"seed": 1235,
"temp": "no",
"temperature": 1,
"top_k": 50,
"top_p": 1,
"updated_at": "2023-11-27T02:51:44.603069"
}
}
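Putting the two steps together: the first request omits chat_id, and every later request reuses the chat_id the server returned. A hedged sketch of a multi-turn wrapper follows; the ChatSession class is hypothetical, and the send argument stands in for any function that POSTs a body to /api/v6/llm/chat and returns the parsed JSON (e.g. a thin wrapper around requests.post). Injecting it keeps the conversation logic independent of the HTTP client:

```python
# Sketch of a multi-turn wrapper around the chat endpoint (hypothetical helper).
class ChatSession:
    def __init__(self, key, model_id, send, system_prompt=None):
        self.base = {"key": key, "model_id": model_id}
        if system_prompt:
            self.base["system_prompt"] = system_prompt  # only needed on the first turn
        self.chat_id = None
        self.send = send  # callable: request body dict -> parsed JSON response

    def ask(self, prompt, **params):
        body = {**self.base, "prompt": prompt, **params}
        if self.chat_id:                     # continue the existing conversation
            body["chat_id"] = self.chat_id
        reply = self.send(body)
        self.chat_id = reply.get("chat_id")  # remember it for later turns
        return reply.get("message")
```

The first call to ask() starts a new chat; every subsequent call automatically includes the stored chat_id, matching the start/continue flow shown above.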
Request
- JS
- PHP
- NODE
- PYTHON
- JAVA
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
var raw = JSON.stringify({
"key": "",
"model_id" : "zephyr-7b-beta",
"system_prompt": "Gold Mining around the world",
"prompt" : "Write a step by step guide on how to mine a gold",
"max_new_tokens": 64,
"do_sample": true,
"temperature": 1,
"top_k": 50,
"top_p": 10,
"no_repeat_ngram_size": 5,
"seed": 1235,
"temp": false,
"webhook": null,
"track_id" : null
});
var requestOptions = {
method: 'POST',
headers: myHeaders,
body: raw,
redirect: 'follow'
};
fetch("https://modelslab.com/api/v6/llm/chat", requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
<?php
$payload = [
"key" => "",
"model_id" => "zephyr-7b-beta",
"system_prompt": "Gold Mining around the world",
"prompt" => "Write a step by step guide on how to mine a gold",
"max_new_tokens"=>64,
"do_sample"=> true,
"temperature" => 1,
"top_k" => 50,
"top_p" => 10,
"no_repeat_ngram_size"=> 5,
"seed" =>1235,
"temp"=> false
"webhook" => null,
"track_id" => null
];
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => 'https://modelslab.com/api/v6/llm/chat',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => '',
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => 'POST',
CURLOPT_POSTFIELDS => json_encode($payload),
CURLOPT_HTTPHEADER => array(
'Content-Type: application/json'
),
));
$response = curl_exec($curl);
curl_close($curl);
echo $response;
var request = require('request');
var options = {
'method': 'POST',
'url': 'https://modelslab.com/api/v6/llm/chat',
'headers': {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"key": "",
"model_id" : "zephyr-7b-beta",
"system_prompt": "Gold Mining around the world",
"prompt" : "Write a step by step guide on how to mine a gold",
"max_new_tokens": 64,
"do_sample": true,
"temperature": 1,
"top_k": 50,
"top_p": 10,
"no_repeat_ngram_size": 5,
"seed": 1235,
"temp": false,
"webhook": null,
"track_id" : null
})
};
request(options, function (error, response) {
if (error) throw new Error(error);
console.log(response.body);
});
import requests
import json
url = "https://modelslab.com/api/v6/llm/chat"
payload = json.dumps({
"key": "",
"model_id" : "HuggingFaceH4/zephyr-7b-beta",
"system_prompt": "Gold Mining around the world",
"prompt" : "Write a step by step guide on how to mine a gold",
"max_new_tokens": 64,
"do_sample": true,
"temperature": 1,
"top_k": 50,
"top_p": 10,
"no_repeat_ngram_size": 5,
"seed": 1235,
"temp": false,
"webhook": None,
"track_id" : None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
OkHttpClient client = new OkHttpClient().newBuilder()
.build();
MediaType mediaType = MediaType.parse("application/json");
RequestBody body = RequestBody.create(mediaType, "{\n \"key\": \"\",\n \"model_id\" : \"zephyr-7b-beta\",\n \"system_prompt\": \"Gold Mining around the world\",\n \"prompt\" : \"Write a step by step guide on how to mine a gold\",\n \"max_new_tokens\": 64,\n \"do_sample\": true,\n \"temperature\": 1,\n \"top_k\": 50,\n \"top_p\": 0.95,\n \"no_repeat_ngram_size\": 5,\n \"seed\": 1235,\n \"temp\": false\n}");
Request request = new Request.Builder()
.url("https://modelslab.com/api/v6/llm/chat")
.method("POST", body)
.addHeader("Content-Type", "application/json")
.build();
Response response = client.newCall(request).execute();
Response
{
"status": "success",
"output": [
"https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/chats/49afc793-803b-4b4a-bac9-d2f9318c2b8f.json"
],
"message": "Please note that gold mining is a complex and dangerous process that requires specialized equipment, expertise, and regulatory permits. This guide provides a general overview of the steps involved in small-scale gold mining and shouldnot be interpreted as a substitute for professional guidance and safety precautions.\n\nStep 1: Research the",
"chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"meta": {
"chat_id": "49afc793-803b-4b4a-bac9-d2f9318c2b8f",
"created_at": "2023-11-27T02:48:47.911559",
"do_sample": "yes",
"max_new_tokens": 64,
"model_id": "zephyr-7b-beta",
"no_repeat_ngram_size": 5,
"num_return_sequences": 1,
"pipeline_tag": "text-generation",
"prompt": "Write a step by step guide on how to mine a gold",
"seed": 1235,
"temp": "no",
"temperature": 1,
"top_k": 50,
"top_p": 1,
"updated_at": "2023-11-27T02:48:52.942389"
}
}