API Description

Function Introduction

This API calls large models on the ModelVerse platform to provide intelligent conversation capabilities.

Supported Model List

| Model Name | Model Version | Maximum Output Length |
| --- | --- | --- |
| DeepSeek-Reasoner | DeepSeek-R1 | 16384 |

Step 1: Obtain API Key

  • Open the API List page; no parameters need to be filled in. Click "Send Request."
  • Click "Confirm Send Request" in the pop-up window.
  • From the returned list, select the Key for the model you need.

Step 2: Chat API Call

Request

Request Header Field

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Content-Type | string | Yes | Fixed value application/json |
| Authorization | string | Yes | The Key obtained in Step 1 (sent as Bearer <Key>) |

Request Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Model ID |
| messages | List[message] | Yes | Chat context information; see the notes below |
| stream | bool | No | Whether to return data as a streaming response. Explanation: (1) Beam-search models only support false (2) Defaults to false |
| stream_options | stream_options | No | Whether usage is output in the streaming response. Explanation: true: yes; the last chunk carries a usage field with token statistics for the entire request. false (default): the streaming response does not output usage |

Notes on messages:

(1) The members of messages cannot be empty. One member indicates a single-turn conversation; multiple members indicate a multi-turn conversation. For example:
  • A single-member example: "messages": [{"role": "user", "content": "Hello"}]
  • A three-member example: "messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "How can I help you?"}, {"role": "user", "content": "Please introduce yourself"}]
(2) The last message is the current request; the earlier messages are the historical conversation.
(3) Role rules for messages:
  ① The role of the first message must be user or system.
  ② The role of the last message must be user or tool.
  ③ If the function-call feature is not used and the first message's role is user, roles must alternate user -> assistant -> user -> …; that is, messages at odd positions (1st, 3rd, 5th, …) have role user and messages at even positions (2nd, 4th, …) have role assistant. In a five-message example, the roles are user, assistant, user, assistant, user.
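The alternation rules above can be sketched as a small helper. This is a minimal sketch, not part of the API itself; the name build_messages is ours:

```python
def build_messages(history, new_user_message, system_prompt=None):
    """Assemble a messages list for the Chat API.

    `history` is a list of (user_text, assistant_text) pairs from earlier
    turns, so roles alternate user -> assistant -> user as the API
    requires, and the final member is the current user request.
    """
    messages = []
    if system_prompt:
        # The first message may optionally use the system role.
        messages.append({"role": "system", "content": system_prompt})
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    # The last message must be the current request (role user).
    messages.append({"role": "user", "content": new_user_message})
    return messages
```

Passing the assembled list as the messages parameter keeps the role ordering valid for both single-turn and multi-turn requests.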

Request Example

curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
--header 'Authorization: Bearer <your API Key>' \
--header 'Content-Type: application/json' \
--data '{
    "reasoning_effort": "low",
    "stream": true,
    "model": "deepseek-r1",
    "messages": [
        {
            "role": "user",
            "content": "say hello to ucloud"
        }
    ]
}'
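The same call can be made from Python. This is a sketch, assuming the third-party requests package is installed; the helper names build_request and send are ours, not part of the API:

```python
API_URL = "https://deepseek.modelverse.cn/v1/chat/completions"

def build_request(api_key, messages, model="deepseek-r1", stream=False):
    """Assemble the URL, headers, and JSON payload, mirroring the curl example."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "messages": messages, "stream": stream}
    return API_URL, headers, payload

def send(api_key, messages, **kwargs):
    """POST the request and return the parsed JSON response."""
    import requests  # third-party package; assumed installed
    url, headers, payload = build_request(api_key, messages, **kwargs)
    resp = requests.post(url, headers=headers, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()
```

For a streaming request, pass stream=True in the payload and iterate over the response lines instead of calling resp.json().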

Response

Response Parameters

| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique identifier of this request; can be used for troubleshooting |
| object | string | Packet type. chat.completion: multi-turn conversation return |
| created | int | Timestamp |
| model | string | Explanation: (1) For a pre-set service, the model ID is returned. (2) For a service deployed after SFT, this field returns model:modelversionID, where model is the same as the request parameter and is the large-model ID used in this request; modelversionID is used for tracing |
| choices | choices / sse_choices | choices is returned when stream=false; sse_choices is returned when stream=true |
| usage | usage | Token statistics. Explanation: (1) Synchronous requests return it by default. (2) Streaming requests do not return it by default; when stream_options.include_usage=true is set, the last chunk carries the actual content and all other chunks return null |
| search_results | search_results | Search results list |
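The streaming usage behavior described above can be handled as follows. This is a sketch assuming the endpoint emits OpenAI-style SSE lines ("data: {...}" per chunk, ending with "data: [DONE]"); the name consume_stream is ours:

```python
import json

def consume_stream(lines):
    """Collect streamed content deltas and the final usage block.

    With stream_options.include_usage=true, only the last data chunk
    carries a non-null "usage"; earlier chunks report it as null.
    """
    content_parts, usage = [], None
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank separators
        body = line[len("data: "):].strip()
        if body == "[DONE]":
            break
        chunk = json.loads(body)
        if chunk.get("usage"):
            usage = chunk["usage"]  # only the last chunk is non-null
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                content_parts.append(delta["content"])
    return "".join(content_parts), usage
```

In practice, lines would come from iterating over the HTTP response body of a stream=true request.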

Response Example

{
    "id": " ",
    "object": "chat.completion",
    "created": ,
    "model": "models/DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\n\nHello, Surfercloud! 👋 If there's anything specific you'd like to know or discuss about Surfercloud's services (like cloud computing, storage, AI solutions, etc.), feel free to ask! 😊",
                "reasoning_content": "\nOkay, the user wants to say hello to Surfercloud. Let me start by greeting Surfercloud directly.\n\nHmm, should I mention what Surfercloud is? Maybe a brief intro would help, like it's a cloud service provider.\n\nThen, I can ask if there's anything specific the user needs help with regarding Surfercloud services.\n\nKeeping it friendly and open-ended makes sense for a helpful response.\n"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 8,
        "completion_tokens": 129,
        "total_tokens": 137,
        "prompt_tokens_details": null,
        "completion_tokens_details": null
    },
    "system_fingerprint": ""
}
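A response like the one above can be unpacked in a few lines. This is a minimal sketch (the name split_answer is ours) that separates the DeepSeek-R1 reasoning trace from the final answer, using the field names shown in the example:

```python
def split_answer(response):
    """Return (reasoning_content, content) from a non-streaming response.

    `reasoning_content` holds the model's chain-of-thought trace;
    `content` holds the user-visible answer.
    """
    message = response["choices"][0]["message"]
    return message.get("reasoning_content"), message["content"]
```

The .get() call keeps the helper usable with models that do not emit a reasoning trace.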

Error Codes

If the request fails, the JSON body returned by the server includes the following fields.

| HTTP Status Code | Type | Error Code | Error Message | Description |
| --- | --- | --- | --- | --- |
| 400 | invalid_request_error | invalid_messages | Sensitive information | The message contains sensitive content |
| 400 | invalid_request_error | characters_too_long | Conversation token output limit | Currently, the maximum max_tokens supported by the deepseek series models is 12288 |
| 400 | invalid_request_error | tokens_too_long | Prompt tokens too long | [User Input Error] The request content exceeds the internal limit of the large model; try shortening the input |
| 400 | invalid_request_error | invalid_token | Validate Certification failed | Invalid bearer token. Refer to [Authentication Explanation] to get the latest key |
| 400 | invalid_request_error | invalid_model | No permission to use the model | The key has no permission to use this model |
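The error codes above can be mapped to actionable hints on the client side. This is a sketch; the envelope key "error" is an assumption based on the OpenAI-style response shape, and the name describe_error is ours:

```python
def describe_error(status_code, error_body):
    """Map an error response to a hint, following the error-code table.

    `error_body` is the parsed JSON returned on failure; the "error"
    envelope key is assumed, with a fallback to the top-level object.
    """
    err = error_body.get("error", error_body)
    code = err.get("code")
    hints = {
        "invalid_messages": "Request contained sensitive content; revise the input.",
        "characters_too_long": "Lower max_tokens (deepseek series caps it at 12288).",
        "tokens_too_long": "Prompt exceeds the model's input limit; shorten the input.",
        "invalid_token": "Bearer token rejected; obtain a fresh key (see Step 1).",
        "invalid_model": "The key has no permission for this model.",
    }
    return hints.get(code, f"Unrecognized error {code!r} (HTTP {status_code})")
```

Since every code in the table arrives with HTTP 400, branching on the error code rather than the status code is the practical choice here.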