
MLflow Tracing for LLM Observability

MLflow Tracing is a fully OpenTelemetry-compatible LLM observability solution for your applications. It captures the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.


Use Cases Throughout the ML Lifecycle

MLflow Tracing empowers you throughout the end-to-end lifecycle of a machine learning project. Here's how it helps you at each step of the workflow; click the tabs below to learn more:

Debug Issues in Your IDE or Notebook

Traces provide deep insights into what happens beneath the abstractions of GenAI libraries, helping you precisely identify where issues occur.

You can navigate traces seamlessly within your preferred IDE, notebook, or the MLflow UI, eliminating the hassle of switching between multiple tabs or searching through an overwhelming list of traces.

Learn more →


What Makes MLflow Tracing Unique?

Open Source

MLflow is open source and 100% FREE. You don't need to pay additional SaaS costs to add observability to your GenAI stack. Your trace data is hosted on your own infrastructure.

OpenTelemetry

MLflow Tracing is fully compatible with OpenTelemetry, making it free from vendor lock-in and easy to integrate with your existing observability stack.
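As a sketch of what this compatibility enables: because MLflow emits OpenTelemetry-compatible spans, traces can be routed to an OTLP collector by setting the standard OTLP exporter environment variable before your application starts. The endpoint below is a placeholder for your own collector address.

```python
import os

# OTEL_EXPORTER_OTLP_TRACES_ENDPOINT is the standard OpenTelemetry
# environment variable for the OTLP trace exporter. When it is set,
# MLflow routes generated spans to that collector. The URL here is a
# placeholder; substitute your own collector's address.
os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = "http://localhost:4318/v1/traces"
```

Because this uses the standard OTLP convention rather than a vendor-specific hook, the same configuration works with any OpenTelemetry-compatible backend.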

Framework Agnostic

MLflow Tracing integrates with 20+ GenAI libraries, including OpenAI, LangChain, LlamaIndex, DSPy, and Pydantic AI, allowing you to switch between frameworks with ease.

End-to-End Platform

Combined with MLflow's version tracking and evaluation capabilities, MLflow Tracing supports the end-to-end machine learning lifecycle.

Strong Community

MLflow has a vibrant open source community as part of the Linux Foundation, with 20K+ GitHub stars and 20M+ monthly downloads.

Getting Started

One-line Auto Tracing Integrations

MLflow Tracing is integrated with various GenAI libraries and provides a one-line automatic tracing experience for each library (and combinations of them!):

```python
import mlflow

mlflow.openai.autolog()  # or replace 'openai' with another library name, e.g., "anthropic"
```

Each integration has its own documentation page. Supported integrations include: LangChain, LangGraph, Vercel AI SDK, OpenAI Agent, DSPy, PydanticAI, Google ADK, Microsoft Agent Framework, CrewAI, LlamaIndex, AutoGen, Strands Agent SDK, Mastra, Agno, Smolagents, Semantic Kernel, AG2, Haystack, Instructor, txtai, OpenAI, Anthropic, Bedrock, Gemini, Ollama, Groq, Mistral, FireworksAI, DeepSeek, LiteLLM, and Claude Code.

Flexible and Customizable

In addition to the one-line auto tracing experience, MLflow offers a Python SDK for manually instrumenting your code and working with traces programmatically.

Production Readiness

MLflow Tracing is production ready and provides comprehensive monitoring capabilities for your GenAI applications in production environments. With async logging enabled, traces are written in the background and do not impact your application's performance.

For production deployments, we recommend the Lightweight Tracing SDK (`pip install mlflow-tracing`), which minimizes installation size and dependencies while maintaining full tracing capabilities. Compared to the full mlflow package, the mlflow-tracing package has a roughly 95% smaller footprint.

Read Production Monitoring for complete guidance on using MLflow Tracing for monitoring models in production and various backend configuration options.