Introduction

Trackly is a performance-first LLM observability layer. It provides a simple callback interface for Python developers to track token usage, costs, and latency across 10+ providers, with no added latency in the request path.
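Trackly's actual API is not shown here, so the following is only a minimal sketch of what a callback-style usage tracker could look like. Every name in it (`Tracker`, `on_completion`, `UsageEvent`, the example costs) is assumed for illustration, not taken from Trackly itself.

```python
from dataclasses import dataclass

@dataclass
class UsageEvent:
    """One LLM call's worth of usage data (hypothetical schema)."""
    provider: str
    input_tokens: int
    output_tokens: int
    latency_ms: float
    cost_usd: float

class Tracker:
    """Collects per-call usage events via a callback interface."""
    def __init__(self):
        self.events = []

    def on_completion(self, provider, input_tokens, output_tokens,
                      latency_ms, cost_usd):
        # The application (or an SDK hook) invokes this after each call.
        self.events.append(UsageEvent(provider, input_tokens,
                                      output_tokens, latency_ms, cost_usd))

    def total_cost(self):
        return sum(e.cost_usd for e in self.events)

tracker = Tracker()
tracker.on_completion("openai", 120, 45, 830.0, 0.0021)
tracker.on_completion("anthropic", 200, 90, 1120.0, 0.0047)
print(round(tracker.total_cost(), 4))  # 0.0068
```

A callback interface like this keeps the tracker decoupled from any one provider SDK: each integration only needs to call `on_completion` with the fields it knows.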

Core Features

Universal

Support for OpenAI, Anthropic, Gemini, Groq, Ollama, and more.

Safe

Batched background flushing ensures your app never slows down.

Transparent

Open API schema. Self-host our backend or use our cloud.

Affordable

Precise, provider-specific token counting, so cost reports reflect exactly what you spend.
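The "Safe" feature above relies on batched background flushing. One common way to implement that pattern is a queue drained by a worker thread, sketched below; `BatchFlusher` and its parameters are hypothetical and do not describe Trackly's real internals.

```python
import queue
import threading

class BatchFlusher:
    """Buffers events in memory and flushes them in batches on a
    background thread, so the caller's hot path only enqueues."""

    def __init__(self, flush_fn, batch_size=10, poll_interval=0.05):
        self.q = queue.Queue()
        self.flush_fn = flush_fn          # e.g. an HTTP POST in practice
        self.batch_size = batch_size
        self.poll_interval = poll_interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def record(self, event):
        # Non-blocking from the caller's perspective: just enqueue.
        self.q.put(event)

    def _run(self):
        batch = []
        # Keep draining until stopped AND the queue is empty.
        while not self._stop.is_set() or not self.q.empty():
            try:
                batch.append(self.q.get(timeout=self.poll_interval))
            except queue.Empty:
                pass
            # Flush a full batch, or a partial one when the queue idles.
            if batch and (len(batch) >= self.batch_size or self.q.empty()):
                self.flush_fn(batch)
                batch = []
        if batch:  # flush any leftovers on shutdown
            self.flush_fn(batch)

    def close(self):
        self._stop.set()
        self._thread.join()

flushed = []
f = BatchFlusher(flushed.extend, batch_size=3)
for i in range(7):
    f.record(i)
f.close()
print(sorted(flushed))  # [0, 1, 2, 3, 4, 5, 6]
```

The design choice worth noting: `record` never touches the network, which is what lets a library make a "never slows down your app" claim; the trade-off is that events buffered but not yet flushed can be lost on a hard crash.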