Simple, self-hosted LLM observability for TypeScript

Breadcrumb logs every prompt, completion, token count, latency, and cost. Open source and self-hostable in one click, so your data stays in your stack.

Get early access
Live traces widget (latency from 0 to 1s+)

Full LLM tracing with two lines of code.

Wrap a function, name a span. That's it. Every call your app makes gets logged with its prompt, completion, tokens, latency, and cost. No agents, no config files, no SDKs with a 30-page setup guide.
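
Here's the shape that takes. This is only a sketch: the `breadcrumb` package name, the `trace` helper, and its signature are assumptions for illustration, not the SDK's actual surface. The OpenAI client calls are the real OpenAI SDK API.

```ts
import OpenAI from "openai";
import { trace } from "breadcrumb"; // hypothetical import, for illustration only

const openai = new OpenAI();

// Wrap the call, name the span. The wrapper would record the prompt,
// completion, token counts, latency, and cost for every invocation.
const summarize = trace("summarize-ticket", async (text: string) => {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: `Summarize this ticket: ${text}` }],
  });
  return res.choices[0].message.content;
});

await summarize("Customer reports the password reset email never arrives.");
```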

Screenshot: the Breadcrumb tracing dashboard at app.breadcrumb.sh, showing traces, token counts, latency, and costs.

Monitor every LLM call in production.

Every LLM call your app makes in production gets logged. Real prompts, real responses, real token counts. Not samples. All of them.

Cost per trace: $0.0024

Every trace shows its token count and cost. See which calls are expensive and where your budget is actually going.
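
To make the arithmetic concrete, per-trace cost is just tokens multiplied by the model's per-token price. The prices in this sketch are hypothetical placeholders, not any provider's actual rates; they only illustrate how a figure like $0.0024 falls out.

```ts
// Hypothetical prices for illustration only; real prices vary by model.
const PRICE_PER_INPUT_TOKEN = 2 / 1_000_000; // $2 per 1M prompt tokens
const PRICE_PER_OUTPUT_TOKEN = 8 / 1_000_000; // $8 per 1M completion tokens

function traceCost(promptTokens: number, completionTokens: number): number {
  return (
    promptTokens * PRICE_PER_INPUT_TOKEN +
    completionTokens * PRICE_PER_OUTPUT_TOKEN
  );
}

// 800 prompt tokens + 100 completion tokens => $0.0024
console.log(traceCost(800, 100).toFixed(4));
```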

open source

Runs on your infra. Open source, forever.

Deploy with one click on Railway or anywhere else. Your data never leaves your stack. Read the code, fork it, modify it. No usage fees, no vendor lock-in.

★ github.com/joshuaKnauber/breadcrumb
typescript

First-class TypeScript support.

The SDK is written in TypeScript with full type coverage. One wrapper: drop it around any call and it starts logging. Works with the Vercel AI SDK out of the box, as well as OpenAI, Anthropic, and other providers.
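
As a sketch of what that can look like with the Vercel AI SDK: the `trace` wrapper and `breadcrumb` import are assumptions as above, while `generateText` and `@ai-sdk/openai` are the AI SDK's actual API.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { trace } from "breadcrumb"; // hypothetical import, for illustration only

// Assuming the wrapper preserves the wrapped function's parameter and return types.
const classify = trace("classify-intent", (message: string) =>
  generateText({
    model: openai("gpt-4o-mini"),
    prompt: `Classify the intent of this support message: ${message}`,
  })
);

const { text } = await classify("Where is my order?");
console.log(text);
```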

Ready to see what's happening?

Join the waitlist. We'll reach out when it's ready.

Get notified