Breadcrumb logs every prompt, completion, token count, latency, and cost. Open source and self-hostable in one click, so your data stays in your stack.
Get early access

Wrap a function, name a span. That's it. Every call your app makes gets logged with its prompt, completion, tokens, latency, and cost. No agents, no config files, no SDK with a 30-page setup guide.
Every LLM call your app makes in production gets logged. Real prompts, real responses, real token counts. Not samples. All of them.
Every trace shows its token count and cost. See which calls are expensive and where your budget is actually going.
Runs on your infra. Open source, forever.
Deploy with one click on Railway or anywhere else. Your data never leaves your stack. Read the code, fork it, modify it: no usage fees, no vendor lock-in.
★ github.com/joshuaKnauber/breadcrumb

First-class TypeScript support
The SDK is written in TypeScript with full type coverage. One wrapper: drop it around any call and it starts logging. It works with the Vercel AI SDK out of the box, alongside OpenAI, Anthropic, and others.
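To make the wrap-a-function idea concrete, here is a minimal sketch of the pattern in plain TypeScript. The names `traced`, `SpanLog`, and `onLog` are illustrative assumptions, not the real breadcrumb SDK exports; the actual API may differ.

```typescript
// Hypothetical sketch of the wrap-a-function pattern.
// `traced` and `SpanLog` are illustrative names, not real SDK exports.
type SpanLog = { span: string; latencyMs: number };

function traced<A extends unknown[], R>(
  span: string,
  fn: (...args: A) => Promise<R>,
  onLog: (log: SpanLog) => void,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const start = Date.now();
    try {
      // Run the wrapped call unchanged; the caller sees the same result.
      return await fn(...args);
    } finally {
      // Log the span name and latency whether the call succeeds or throws.
      onLog({ span, latencyMs: Date.now() - start });
    }
  };
}

// Usage: wrap any async call and name the span.
const logs: SpanLog[] = [];
const complete = traced(
  "summarize",
  async (prompt: string) => `echo: ${prompt}`, // stand-in for an LLM call
  (log) => logs.push(log),
);

complete("hello").then((out) => {
  console.log(out);          // "echo: hello"
  console.log(logs[0].span); // "summarize"
});
```

The wrapper returns a function with the same signature as the one it wraps, so it can sit around any provider call without changing calling code.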