Track Request
Track Request allows you to send your LLM request to our platform. On success, this request will return a request_id, which is required for track-score.
Example Code
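A minimal sketch of a track-request call in Python is shown below. It assembles the parameters described under Request Parameters into a JSON body; note that the REST endpoint URL and the exact field names used here are assumptions, not confirmed by this page, so check the live API reference before relying on them.

```python
import time

def build_track_request_payload(function_name, kwargs, request_response,
                                request_start_time, request_end_time,
                                api_key, tags=None):
    """Assemble the JSON body for a track-request call.

    Field names mirror the parameters described below; they are an
    assumption about the wire format, not taken from this page.
    """
    return {
        "function_name": function_name,
        "kwargs": kwargs,
        "request_response": request_response,
        "request_start_time": request_start_time,
        "request_end_time": request_end_time,
        "tags": tags or [],
        "api_key": api_key,
    }

start = time.time()
# In real code this would be your actual LLM call, e.g.:
# response = openai.Completion.create(engine="text-davinci-003", prompt="Say hello")
response = {"choices": [{"text": "hello"}]}  # stand-in, OpenAI-shaped
end = time.time()

payload = build_track_request_payload(
    function_name="openai.Completion.create",
    kwargs={"engine": "text-davinci-003", "prompt": "Say hello"},
    request_response=response,
    request_start_time=start,
    request_end_time=end,
    api_key="pl_your_api_key",  # placeholder
    tags=["hello-world"],
)

# Then POST the payload, e.g. (endpoint URL is an assumption):
# import requests
# r = requests.post("https://api.promptlayer.com/rest/track-request", json=payload)
# request_id = r.json()["request_id"]  # needed later for track-score
```

The returned request_id is what you would pass to track-score afterwards.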
Request Parameters
The name of the function. For example, if you are using OpenAI, it should be either openai.Completion.create or openai.ChatCompletion.create. These are the specific function signatures that PromptLayer uses to parse the request_response. Some integration libraries use special function_name values, such as langchain.PromptLayerChatOpenAI.
Keyword arguments that are passed into the LLM (such as OpenAI's API). Normally it should include engine and prompt at the very least. If you are using a chat completion or GPT-4, it should include messages instead of prompt.
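To make the distinction concrete, here is a sketch of the two kwargs shapes described above (the engine/model values are illustrative placeholders):

```python
# Completion-style kwargs: engine and prompt at the very least
completion_kwargs = {
    "engine": "text-davinci-003",
    "prompt": "Say hello",
}

# Chat-style kwargs (e.g. GPT-4): messages instead of prompt
chat_kwargs = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Say hello"}],
}
```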
The LLM response. This response must be formatted exactly in OpenAI’s response format.
The time at which the LLM request was initiated.
The time at which the LLM request was completed.
An array of string tags to tag this request on the PromptLayer dashboard.
The ID of the prompt in the PromptLayer Registry that you used for this request (see /prompt-templates/{prompt_name} for how to get this ID, or copy it from the URL in the dashboard).
The input variables you used for a template. This is used for syntax highlighting and, more importantly, for backtesting when you want to iterate on a prompt.
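As an illustration, a template such as "Summarize {article} in {language}" would be tracked with an input-variables mapping like the one below (the variable names here are hypothetical, chosen only for this example):

```python
# Hypothetical template: "Summarize {article} in {language}"
prompt_input_variables = {
    "article": "Full text of the article goes here...",
    "language": "French",
}
```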
The version of the prompt that you are trying to track. This should be an integer.
The API key for authentication.