API Description
Function Introduction
This API is used to call large models on the ModelVerse platform to implement intelligent conversation functions.
Supported Model List
Model Name | Model Version | Maximum Output Length |
---|---|---|
DeepSeek-Reasoner | DeepSeek-R1 | 16384 |
Step 1: Obtain API Key
- Open the API List page. No parameters need to be filled in; click “Send Request.”
- Click “Confirm Send Request” in the pop-up window.
- From the returned list, select the Key you need based on the model name.
Step 2: Chat API Call
Request
Request Header Field
Name | Type | Required | Description |
---|---|---|---|
Content-Type | string | Yes | Fixed value: application/json |
Authorization | string | Yes | Bearer token; use the Key obtained in Step 1, in the form Bearer <your API Key> |
Request Parameters
Name | Type | Required | Description |
---|---|---|---|
model | string | Yes | Model ID |
messages | List[message] | Yes | Chat context information. Instructions: (1) The messages members cannot be empty, one member indicates a single round of conversation, multiple members indicate multiple rounds of conversation, for example: · A single member example, "messages": [ {"role": "user","content": "Hello"}] · A three-member example, "messages": [ {"role": "user","content": "Hello"},{"role":"assistant","content":"How can I help you?"},{"role":"user","content":"Please introduce yourself"}] (2) The last message is the current request information, and the previous messages are historical conversation information (3) Role explanation for messages: ① The role of the first message must be either user or system ② The role of the last message must be either user or tool ③ If the function call feature is not used: · When the role of the first message is user, the role value needs to be alternately user -> assistant -> user…, i.e., the role of messages with odd indices must be user or function, and the role of messages with even indices must be assistant, for example, in the sample, the role values of the messages are respectively user, assistant, user, assistant, user; the role values of messages at odd indices (red box) are user, i.e., the roles of messages 1, 3, and 5 are user; messages at even indices (blue box) have the role assistant, i.e., the roles of messages 2, 4 are assistant |
stream | bool | No | Whether to return data as a stream. Notes: (1) for beam-search models this can only be false; (2) defaults to false |
stream_options | stream_options | No | Whether usage is included in the streaming response. Notes: when include_usage is true, the last chunk contains a usage field with token statistics for the entire request; when false (the default), the streaming response does not include usage |
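The role rules in the messages table above can be checked before sending a request. The following is an illustrative helper (not part of any ModelVerse SDK) that validates the non-function-call alternation rule:

```python
def validate_messages(messages):
    """Check the role rules for the `messages` parameter.

    Returns a list of problems; an empty list means the context is valid.
    Illustrative helper only, assuming the rules stated in the table above.
    """
    problems = []
    if not messages:
        problems.append("messages cannot be empty")
        return problems
    # Rule ①: the first message's role must be user or system
    if messages[0]["role"] not in ("user", "system"):
        problems.append("first message role must be user or system")
    # Rule ②: the last message's role must be user or tool
    if messages[-1]["role"] not in ("user", "tool"):
        problems.append("last message role must be user or tool")
    # Rule ③: without function calls, roles alternate user -> assistant -> user ...
    if messages[0]["role"] == "user":
        for i, msg in enumerate(messages):
            expected = "user" if i % 2 == 0 else "assistant"
            if msg["role"] != expected:
                problems.append(f"message {i + 1} should have role {expected}")
    return problems
```

For the three-member example in the table, `validate_messages` returns an empty list, since the roles alternate user, assistant, user.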
Request Example
curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
--header 'Authorization: Bearer <your API Key>' \
--header 'Content-Type: application/json' \
--data '{
"reasoning_effort": "low",
"stream": true,
"model": "deepseek-r1",
"messages": [
{
"role": "user",
"content": "say hello to ucloud"
}
]
}'
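The same request can be built in Python with only the standard library. This is a minimal sketch of the curl call above; the `build_request` helper name is ours, and the commented-out send is a live network call:

```python
import json
import urllib.request

API_URL = "https://deepseek.modelverse.cn/v1/chat/completions"

def build_request(api_key, model, messages, stream=True):
    """Build the HTTP request for the Chat API call, mirroring the curl example."""
    payload = {"model": model, "messages": messages, "stream": stream}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# req = build_request("<your API Key>", "deepseek-r1",
#                     [{"role": "user", "content": "say hello to ucloud"}])
# with urllib.request.urlopen(req) as resp:  # live network call, not run here
#     for line in resp:                      # stream=true returns SSE chunks
#         print(line.decode("utf-8"))
```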
Response
Response Parameters
Name | Type | Description |
---|---|---|
id | string | The unique identifier of this request, can be used for troubleshooting |
object | string | Object type; fixed value chat.completion for a chat completion response |
created | int | Unix timestamp of when the request was created |
model | string | Notes: (1) for a pre-set service, the model ID is returned; (2) for a service deployed after SFT, this field returns model:modelversionID, where model matches the request parameter and is the large-model ID used in this request, and modelversionID is used for tracing |
choices | choices/sse_choices | Generated results. Returns choices when stream=false and sse_choices when stream=true |
usage | usage | Token statistics. Notes: (1) synchronous requests return this by default; (2) streaming requests do not return it by default. When stream_options.include_usage=true is set, the actual content is returned in the last chunk, and all other chunks return null |
search_results | search_results | Search results list |
Response Example
{
    "id": "",
    "object": "chat.completion",
    "created": 0,
    "model": "models/DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\n\nHello, Surfercloud! 👋 If there's anything specific you'd like to know or discuss about Surfercloud's services (like cloud computing, storage, AI solutions, etc.), feel free to ask! 😊",
                "reasoning_content": "\nOkay, the user wants to say hello to Surfercloud. Let me start by greeting Surfercloud directly.\n\nHmm, should I mention what Surfercloud is? Maybe a brief intro would help, like it's a cloud service provider.\n\nThen, I can ask if there's anything specific the user needs help with regarding Surfercloud services.\n\nKeeping it friendly and open-ended makes sense for a helpful response.\n"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 8,
        "completion_tokens": 129,
        "total_tokens": 137,
        "prompt_tokens_details": null,
        "completion_tokens_details": null
    },
    "system_fingerprint": ""
}
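A non-streaming response like the example above can be unpacked with a few lines of Python. The `extract_answer` helper below is illustrative, not part of any SDK; it separates the visible answer from the model's reasoning trace and pulls the token count:

```python
import json

def extract_answer(response_text):
    """Parse a stream=false response body and return the answer,
    the reasoning trace, and total token usage."""
    data = json.loads(response_text)
    message = data["choices"][0]["message"]
    return {
        "content": message["content"],
        # DeepSeek-R1 returns its chain of thought in reasoning_content
        "reasoning": message.get("reasoning_content", ""),
        "total_tokens": data["usage"]["total_tokens"],
    }
```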
Error Codes
If the request fails, the server returns a JSON body containing the following parameters.
HTTP Status Code | Type | Error Code | Error Message | Description |
---|---|---|---|---|
400 | invalid_request_error | invalid_messages | Sensitive information | The request content contains sensitive information |
400 | invalid_request_error | characters_too_long | Conversation token output limit | Currently, the maximum max_tokens supported by the deepseek series model is 12288 |
400 | invalid_request_error | tokens_too_long | Prompt tokens too long | [User Input Error] The request content exceeds the internal limit of the large model. You can try the following methods to solve it: • Shorten the input appropriately |
400 | invalid_request_error | invalid_token | Validate Certification failed | Invalid bearer token. Users can refer to [Authentication Explanation] to get the latest key |
400 | invalid_request_error | invalid_model | No permission to use the model | No model permissions |
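A caller can map the error codes in the table above to remediation hints. This is a sketch under assumptions: the hint text is ours, and the exact field name carrying the code in the error body (`error_code` vs. `code`) is not specified by this document, so both are tried:

```python
# Hints keyed by the error codes listed in the table above; the advice
# strings are our own summaries of the Description column.
ERROR_HINTS = {
    "invalid_messages": "remove sensitive content from the conversation",
    "characters_too_long": "lower max_tokens (deepseek series caps at 12288)",
    "tokens_too_long": "shorten the input prompt",
    "invalid_token": "refresh the API Key obtained in Step 1",
    "invalid_model": "request permission for this model",
}

def hint_for(error_body):
    """Given the parsed JSON error body, return a human-readable hint.

    The field name for the code is an assumption; both common spellings
    are checked."""
    code = error_body.get("error_code") or error_body.get("code", "")
    return ERROR_HINTS.get(code, "unexpected error; inspect the response body")
```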