Langfuse vs ReadMe
Open-source LLM engineering platform — observability, prompts, evals
vs. Developer hub: interactive API reference + guides with usage analytics
Pricing tiers
Langfuse
Hobby (Cloud Free)
Free. 50k units/month included. 30 days data access. 2 users. Community support.
Free
Self-Hosted (OSS)
MIT-licensed. Deploy via Docker Compose or Kubernetes. No usage limits.
$0 base (usage-based)
Core
$29/month. 100k units included ($8 per 100k overage). 90 days retention. Unlimited users. In-app support.
$29/mo
Pro
$199/month. 100k units included ($8 per 100k overage, same as Core). 3 years retention. Unlimited annotation queues. High rate limits.
$199/mo
Teams Add-on
+$300/month. Adds Enterprise SSO + fine-grained RBAC + dedicated Slack support to Pro.
$300/mo
Enterprise
$2,499/month. Everything above, plus custom rate limits, uptime SLA, and a dedicated support engineer. Annual billing available.
$2,499/mo
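The metered Core/Pro pricing above is simple to reason about. A minimal billing sketch, assuming overage is charged per started 100k block (the rounding policy is an assumption; confirm with Langfuse before budgeting):

```python
import math

def monthly_bill(units_used: int, base: float = 29.0,
                 included: int = 100_000,
                 overage_per_100k: float = 8.0) -> float:
    """Estimate a Langfuse Core bill: base fee plus $8 per
    started 100k-unit block beyond the included quota."""
    over = max(0, units_used - included)
    blocks = math.ceil(over / 100_000)
    return base + blocks * overage_per_100k

# 250k units: 150k over quota, 2 started blocks, $29 + $16
print(monthly_bill(250_000))  # 45.0
```

Swap `base=199.0` to model the Pro tier, which shares the same overage rate.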
ReadMe
Free
$0. 1 project. ReadMe branding. Public docs only. Basic features. Intended for evaluation.
Free
Startup
$99/mo. 1 project. Custom domain, remove branding, 3 admins, API Explorer.
$99/mo
Business
$399/mo. Unlimited projects, SSO, usage analytics (Metrics Starter), versioning.
$399/mo
Enterprise
Custom (typically $10k+/yr). SLA, SAML, advanced metrics, large teams, localization.
Custom
Free-tier quotas head-to-head
Comparing Langfuse's Hobby tier with ReadMe's Free tier.
These tiers meter different things: Langfuse caps ingested units (50k/month), while ReadMe caps projects (1). There are no overlapping quota metrics to compare.
Features
Langfuse · 16 features
- Annotation Queues — Human reviewers rate traces. Unlimited on Pro+.
- Dashboards — Aggregate metrics, cost, quality across projects.
- Datasets — Curate test sets from production traces. Run experiments.
- EU Cloud Region — GDPR-compliant hosting in EU.
- Evaluations — LLM-as-judge, manual scores, custom model-graded evaluators.
- LLM Cost Tracking — Automatic cost calculation per provider/model.
- OpenTelemetry Native — OTel SDK → Langfuse endpoint works out of the box.
- Playground — Test prompts + models + variables live.
- Prompt Management — Version, tag, label prompts. Reference from code by label.
- Public API — Full REST API for ingest, query, prompt management.
- Python @observe decorator — One-line decorator to trace any function.
- Self-Hosting — Docker Compose + k8s Helm chart.
- Sessions — Group related traces (conversations, agent runs).
- Tracing — Capture every LLM call, tool call, nested span with inputs/outputs/cost.
- Users Tracking — Segment traces by user ID, track per-user cost.
- Webhooks — Subscribe to trace completion events.
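The tracing model above (nested spans with inputs, outputs, and timing, captured by a one-line decorator) can be sketched in plain Python. This is not the Langfuse SDK, just a stdlib illustration of the `@observe` pattern; the span fields are simplified:

```python
import functools
import time

TRACE: list[dict] = []   # completed spans, innermost finishes first
_depth = 0               # current nesting level

def observe(fn):
    """Minimal stand-in for a Langfuse-style @observe decorator:
    records name, inputs, output, duration, and nesting depth."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        global _depth
        span = {"name": fn.__name__, "input": (args, kwargs), "depth": _depth}
        _depth += 1
        start = time.perf_counter()
        try:
            span["output"] = fn(*args, **kwargs)
            return span["output"]
        finally:
            span["ms"] = (time.perf_counter() - start) * 1000
            _depth -= 1
            TRACE.append(span)
    return wrapper

@observe
def retrieve(query):
    return ["doc1", "doc2"]

@observe
def answer(query):
    docs = retrieve(query)   # nested call becomes a child span
    return f"answer using {len(docs)} docs"

answer("what is tracing?")
```

The real SDK additionally ships spans to the Langfuse backend and attaches cost and user metadata; the nesting-by-call-stack idea is the same.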
ReadMe · 15 features
- API Reference — OpenAPI → interactive reference.
- Changelog — Versioned release notes.
- Custom Branding — Logo, colors, fonts.
- Custom Domain — docs.yourco.com.
- Discussion Forum — Developer community.
- Guides — Markdown-style conceptual docs.
- Interactive Tutorials — Guided walkthroughs.
- Localization — Multi-language docs (Enterprise).
- Metrics — API usage logging + analytics.
- Recipes — Multi-step tutorials.
- Search — Full-text search.
- SSO — SAML + OIDC.
- Suggested Edits — Community-contributed edits.
- Try It Console — In-browser request runner.
- Versioning — Multi-version docs.
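ReadMe's core flow is "upload an OpenAPI document, get an interactive reference." A minimal sketch of the first half of that mapping, in pure Python; the spec fragment is illustrative and the output format is not ReadMe's:

```python
def list_operations(spec: dict) -> list[str]:
    """Flatten an OpenAPI document into 'METHOD /path: summary' lines,
    the raw material a docs hub turns into reference pages."""
    lines = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            lines.append(f"{method.upper()} {path}: {op.get('summary', '')}")
    return lines

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/pets": {
            "get": {"summary": "List pets"},
            "post": {"summary": "Create a pet"},
        },
    },
}
for line in list_operations(spec):
    print(line)
```

A real pipeline would also resolve schemas and generate the Try It request forms; this shows only the path/operation traversal.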
Developer interfaces
| Kind | Langfuse | ReadMe |
|---|---|---|
| CLI | — | rdme CLI |
| SDK | langfuse-js, langfuse-python | Metrics SDK |
| REST | Langfuse REST API | ReadMe API v2 |
| MCP | Langfuse MCP Server | — |
| OTHER | Langfuse Dashboard, OpenTelemetry endpoint | OpenAPI Upload, ReadMe Dashboard, ReadMe Webhooks |
Staxly is an independent catalog of developer platforms. Some links to Langfuse and ReadMe may be affiliate links — Staxly may earn a commission if you sign up through them, at no extra cost to you. Pricing is verified against vendor pages at publication time — reconfirm before buying.
Want this comparison in your AI agent's context? Install the free Staxly MCP server.