Monitor OpenAI, Anthropic, and other LLM API costs in real-time. Zero configuration. Zero API keys exposed. Privacy-first alternative to Helicone.
Real-time LLM API spending
Monitoring LLM API costs is complex. Existing solutions demand API key hand-offs, suffer from laggy dashboards, and force privacy compromises.
Your API keys never leave your infrastructure. We only see costs and timestamps.
SDK patches directly into OpenAI/Anthropic clients. No credentials needed.
Add import. Done. No environment variables, no API key passing.
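The patch-based approach can be sketched generically. The `FakeChatCompletions` class and `patch` helper below are illustrative stand-ins, not spend-hawk's actual internals; they show how wrapping a client method lets a tracker record metadata without ever touching credentials:

```python
import functools
import time

class FakeChatCompletions:
    """Stand-in for a client's completions endpoint (illustration only)."""
    def create(self, model, messages):
        # A real client would call the provider's API here.
        return {"model": model, "usage": {"total_tokens": 12}}

def patch(cls, records):
    """Wrap cls.create so each call appends cost metadata to `records`.

    Only metadata (model, token count, timestamp) is captured;
    API keys and message contents are never recorded.
    """
    original = cls.create

    @functools.wraps(original)
    def wrapped(self, model, messages):
        response = original(self, model, messages)
        records.append({
            "model": model,
            "total_tokens": response["usage"]["total_tokens"],
            "timestamp": time.time(),
        })
        return response

    cls.create = wrapped

records = []
patch(FakeChatCompletions, records)
client = FakeChatCompletions()
client.create(model="gpt-4", messages=[{"role": "user", "content": "Hello!"}])
print(records[0]["model"])  # the tracked model name
```

Because the wrapper lives inside your process, nothing is proxied and no credentials leave your infrastructure.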
Problem #1
API keys scattered across config files and environment variables are a security risk.
Problem #2
Cost dashboards lag by hours. Can't debug cost spikes in real-time.
Problem #3
Helicone and similar tools require proxy setup, have slow dashboards, and offer no privacy control.
Everything you need to monitor and optimize LLM costs
Cost metrics updated instantly as API calls complete
Patch-based integration, your keys stay yours
OpenAI, Anthropic, and more with one SDK
Per-request costs, daily aggregates, spend trends
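Per-request cost can be derived directly from token usage. A minimal sketch, using placeholder per-1K-token prices (not actual provider rates):

```python
# Illustrative per-1K-token prices (placeholders, not real provider rates).
PRICES = {"gpt-4": {"input": 0.03, "output": 0.06}}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Compute the dollar cost of one request from its token counts."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1000

# 1,000 prompt tokens + 500 completion tokens at the placeholder rates:
print(f"${request_cost('gpt-4', 1000, 500):.4f}")  # $0.0600
```

Summing these per-request figures over time windows yields the daily aggregates and spend trends.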
Zero-config tracking in 2 lines
from spend_hawk import patch
from openai import OpenAI
# Initialize SDK (2 lines total)
patch()
client = OpenAI(api_key="sk-...")
# Use OpenAI normally
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
# Costs automatically tracked
print(response.choices[0].message.content)

pip install spend-hawk-sdk
Add SDK to your Python project

from spend_hawk import patch
patch()
One line patches OpenAI/Anthropic clients

# Costs tracked automatically
print(response)
Costs tracked in real-time
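Once per-request metadata is recorded, rolling it up into the daily aggregates is straightforward. A sketch with made-up records (the record shape here is an assumption for illustration):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical per-request cost records, as a tracker might store them.
records = [
    {"timestamp": 1700000000, "cost": 0.012},
    {"timestamp": 1700000500, "cost": 0.030},
    {"timestamp": 1700090000, "cost": 0.008},
]

def daily_totals(records):
    """Aggregate per-request costs into per-day (UTC) spend totals."""
    totals = defaultdict(float)
    for r in records:
        day = datetime.fromtimestamp(
            r["timestamp"], tz=timezone.utc
        ).date().isoformat()
        totals[day] += r["cost"]
    return dict(totals)

print(daily_totals(records))
```

The same fold extends to spend trends by comparing consecutive daily totals.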
Audit the code. Verify the claims.
SDK is open-source and MIT licensed. Review the code on GitHub. No hidden behavior. No phone-home telemetry.
Review on GitHub

SDK intercepts calls at the client library level. Keys never transmitted. Only request/response metadata logged.
Pay only for what you use
Perfect to get started
For growing teams
For large organizations
Get started in seconds. Install the SDK, add one import, and start tracking.