Frequently Asked Questions

Don’t see your question here? Send a message in Discord or email us at hello@promptlayer.com

I’m having trouble with the LangChain integration.

Try updating both LangChain and PromptLayer to their most recent versions.

Do you support OpenAI function calling?

Yes, we take great pride in staying up to date. PromptLayer is 1-to-1 with OpenAI’s library, so if you are using PromptLayer+OpenAI through the Python libraries, function calling is supported implicitly.

If you are using our REST API, track-request mirrors OpenAI’s request schema. You can add functions into kwargs and use function-type messages just as you would use normal messages in gpt-4.
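Because track-request mirrors OpenAI’s request schema, a function-calling request can be logged by placing the function definitions inside kwargs. A minimal sketch of such a payload follows; the exact field names (function_name, kwargs, request_response, and so on) should be checked against the REST reference, and the timestamps and response here are placeholders:

```python
import json

# Illustrative payload for PromptLayer's track-request endpoint.
# The "kwargs" block mirrors OpenAI's chat-completion request schema,
# so function definitions and function-role messages go in unchanged.
payload = {
    "function_name": "openai.ChatCompletion.create",
    "kwargs": {
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "What's the weather in Boston?"},
        ],
        "functions": [
            {
                "name": "get_current_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    },
    "request_response": {},  # the raw OpenAI response goes here
    "request_start_time": 1693000000.0,  # placeholder timestamps
    "request_end_time": 1693000002.5,
    "api_key": "pl_*****",
}

print(json.dumps(payload["kwargs"]["functions"][0]["name"]))
```

The payload would then be POSTed to the track-request endpoint like any other logged request.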

Does PromptLayer support streaming?

Streaming requests are supported on the PromptLayer Python SDK (both with OpenAI and Anthropic).

If you are using LangChain, streaming is only supported when you use the PromptLayerCallbackHandler. Streaming is not supported through the PromptLayer-specific LLMs (the old way to use LangChain).

Finally, if you are interacting with PromptLayer through our REST API, you will need to store the whole output and log it to PromptLayer (track-request) only after the stream has finished.
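In practice that means accumulating the streamed deltas client-side and sending a single log call at the end. A minimal sketch (the chunk list and the accumulate_stream helper are hypothetical; the track-request call is shown only as a comment):

```python
def accumulate_stream(chunks):
    """Concatenate streamed content deltas into the full completion text."""
    return "".join(chunks)

# Hypothetical content deltas received from a streamed completion
chunks = ["Hello", ", ", "world", "!"]
full_output = accumulate_stream(chunks)
print(full_output)  # -> Hello, world!

# Once the stream is finished, log the complete response in one
# track-request call (illustrative, not executed here):
# requests.post(
#     "https://api.promptlayer.com/rest/track-request",
#     json={
#         "function_name": "openai.ChatCompletion.create",
#         "kwargs": {...},  # the original request arguments
#         "request_response": {
#             "choices": [{"message": {"content": full_output}}]
#         },
#         "api_key": "pl_*****",
#     },
# )
```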

Can I export my data from PromptLayer?

Yes. You can export your usage data with the button shown below.

Filter your training data export by tags, a search query, or metadata.

Do you support on-premises deployment?

Yes, we do support on-premises deployment for a select few of our enterprise customers. However, we are rolling out this option slowly.

If you are interested in onprem, please contact us for more information.

Does AsyncOpenAI work with PromptLayer?

Yes, AsyncOpenAI is compatible with PromptLayer. Use them together as demonstrated in the example below.

import asyncio

import promptlayer

promptlayer.api_key = "pl_*****"

# Instead of `from openai import AsyncOpenAI`, import the client
# through PromptLayer so requests are logged automatically.
AsyncOpenAI = promptlayer.openai.AsyncOpenAI

client = AsyncOpenAI(
    api_key="sk-***",
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    )
    print(chat_completion)

asyncio.run(main())