Helicone

This page covers how to use the Helicone ecosystem within LangChain.

What is Helicone?

Helicone is an open-source observability platform that proxies your OpenAI traffic and gives you key insights into your spend, latency, and usage.

Screenshot of the Helicone dashboard showing average requests per day, response time, tokens per response, total cost, and a graph of requests over time.

Quick start

With your existing LangChain environment, you only need to set the following environment variable:

export OPENAI_API_BASE="https://oai.hconeai.com/v1"
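Equivalently, you can set the variable from Python before any LLM client is constructed. A minimal sketch (this relies only on the standard `OPENAI_API_BASE` environment variable being read at client construction):

```python
import os

# Point OpenAI-bound traffic at the Helicone proxy. Helicone forwards each
# request to OpenAI and logs it along the way. Set this before creating LLMs.
os.environ["OPENAI_API_BASE"] = "https://oai.hconeai.com/v1"

print(os.environ["OPENAI_API_BASE"])
```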

Now head over to helicone.ai to create your account, then add your OpenAI API key in the Helicone dashboard to view your logs.

Interface for entering and managing OpenAI API keys in the Helicone dashboard.

How to enable Helicone caching

from langchain_openai import OpenAI
import openai

# Route all OpenAI traffic through the Helicone proxy
openai.api_base = "https://oai.hconeai.com/v1"

# Enable Helicone's response caching for this LLM via a request header
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm.invoke(text))
API Reference: OpenAI

Helicone caching docs

How to use Helicone custom properties

from langchain_openai import OpenAI
import openai

# Route all OpenAI traffic through the Helicone proxy
openai.api_base = "https://oai.hconeai.com/v1"

# Tag requests with custom properties so they can be segmented in the dashboard
llm = OpenAI(
    temperature=0.9,
    headers={
        "Helicone-Property-Session": "24",
        "Helicone-Property-Conversation": "support_issue_2",
        "Helicone-Property-App": "mobile",
    },
)
text = "What is a helicone?"
print(llm.invoke(text))
API Reference: OpenAI

Helicone property docs
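Custom properties are just request headers prefixed with `Helicone-Property-`, so a small helper keeps call sites tidy. A sketch (the helper name and shape are ours, not part of LangChain or Helicone):

```python
def helicone_properties(**props: str) -> dict:
    """Turn keyword arguments into Helicone custom-property headers.

    Each property name is capitalized and prefixed with "Helicone-Property-",
    matching the header convention shown in the example above.
    """
    return {
        f"Helicone-Property-{name.capitalize()}": value
        for name, value in props.items()
    }

# Would be passed as OpenAI(..., headers=helicone_properties(...))
print(helicone_properties(session="24", conversation="support_issue_2", app="mobile"))
```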

