MLflow 3.9.0 Highlights: AI Assistant, Dashboards, and Judge Optimization

· 6 min read
MLflow maintainers

MLflow 3.9.0 is a major release focused on AI observability and evaluation, bringing powerful new features for building, monitoring, and optimizing AI agents. This release introduces an AI-powered assistant, dashboards for agent performance metrics, a new judge optimization algorithm, a Judge Builder UI, continuous online monitoring with LLM judges, and distributed tracing.

1. MLflow Assistant Powered by Claude Code

MLflow Assistant turns a coding agent like Claude Code into an experienced AI engineer at your side. Unlike typical chatbots, the assistant is aware of your codebase and context: it is not just a Q&A tool, but a full-fledged AI engineer that can find the root causes of issues, set up quality tests, and apply LLMOps best practices to your project.

Key capabilities include:

  • No additional costs: Use your existing Claude Code subscription. MLflow provides the knowledge and integration at no cost.
  • Context-rich assistance: Understands your local codebase, project structure, and provides tailored recommendations—not generic advice.
  • Complete dev-loop: Goes beyond Q&A to fetch MLflow data, read your code, and add tracing, evaluation, and versioning to your project.
  • Fully customizable: Add custom skills, sub-agents, and permissions. Everything runs on your machine with full transparency.

Open the MLflow UI, navigate to the Assistant panel in any experiment page, and follow the setup wizard to get started.

Learn more about MLflow Assistant

2. Dashboards for Agent Performance Metrics

A new "Overview" tab in GenAI experiments provides pre-built charts and visualizations for monitoring agent performance at a glance. Monitor key metrics like latency, request counts, and quality scores without manual configuration. Identify performance trends and anomalies across your agent deployments, and get tool call summaries to understand how your agents are utilizing available tools.

Navigate to any GenAI experiment and click the "Overview" tab to access the dashboard. Charts are automatically populated based on your trace data. Have a specific visualization need? Request additional charts via GitHub Issues.

Learn more about GenAI Dashboards

3. MemAlign: A New Judge Optimizer Algorithm

MemAlign is a new optimization algorithm that learns evaluation guidelines from past feedback and dynamically retrieves relevant examples at runtime. Improve judge accuracy by learning from human feedback patterns, reduce prompt engineering effort with automatic guideline extraction, and adapt judge behavior dynamically based on the input being evaluated.

Use the MemAlignOptimizer to optimize your judges with historical feedback:

import mlflow
from mlflow.genai.judges import make_judge
from mlflow.genai.judges.optimizers import MemAlignOptimizer

# Create a judge
judge = make_judge(
    name="politeness",
    instructions=(
        "Given a user question, evaluate if the chatbot's response is polite and respectful. "
        "Consider the tone, language, and context of the response.\n\n"
        "Question: {{ inputs }}\n"
        "Response: {{ outputs }}"
    ),
    feedback_value_type=bool,
    model="openai:/gpt-5-mini",
)

# Create the MemAlign optimizer
optimizer = MemAlignOptimizer(reflection_lm="openai:/gpt-5-mini")

# Retrieve traces with human feedback
traces = mlflow.search_traces(return_type="list")

# Align the judge
aligned_judge = judge.align(traces=traces, optimizer=optimizer)
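Once aligned, the judge can be applied to new interactions. The snippet below is a minimal sketch, assuming the judge is invoked directly with inputs/outputs keyword arguments matching its instruction template; the exact call signature and return fields may differ, so consult the judge documentation:

# Evaluate a new interaction with the aligned judge (illustrative sketch only)
feedback = aligned_judge(
    inputs={"question": "Where is my order?"},
    outputs="Your order shipped yesterday and should arrive tomorrow.",
)
print(feedback.value)  # e.g. True if the response is judged polite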

Learn more about MemAlign

4. Configuring and Building a Judge with Judge Builder UI

A new visual interface lets you create and test custom LLM judge prompts without writing code. Iterate quickly on judge criteria and scoring rubrics with immediate feedback, test judges on sample traces before deploying to production, and export validated judges to the Python SDK for programmatic integration.

Navigate to the "Judges" section in the MLflow UI and click "Create Judge." Define your evaluation criteria, scoring rubric, and test your judge against sample traces. Once satisfied, export the configuration to use with the MLflow SDK.

Learn more about Judge Builder

5. Continuous Online Monitoring with MLflow LLM Judges

Automatically run LLM judges on incoming traces without writing any code, enabling continuous quality monitoring of your agents in production. Detect quality issues in real-time as traces flow through your system, leverage pre-defined judges for common evaluations like safety, relevance, groundedness, and correctness, and get actionable assessments attached directly to your traces.

Go to the "Judges" tab in your experiment, select from pre-defined judges or use your custom judges, and configure which traces to evaluate. Assessments are automatically attached to matching traces as they arrive.

Learn more about Agent Evaluation

6. Distributed Tracing for Tracking End-to-end Requests

Track requests across multiple services with context propagation, enabling end-to-end visibility into distributed AI systems. Maintain trace continuity across microservices and external API calls, debug issues that span multiple services with a unified trace view, and understand latency and errors at each step of your distributed pipeline.

Use the get_tracing_context_headers_for_http_request and set_tracing_context_from_http_request_headers functions to inject and extract trace context:

# Service A: Inject context into the headers of the outgoing request
import requests

import mlflow
from mlflow.tracing import get_tracing_context_headers_for_http_request

with mlflow.start_span("client-root"):
    headers = get_tracing_context_headers_for_http_request()
    requests.post(
        "https://your.service/handle", headers=headers, json={"input": "hello"}
    )


# Service B: Extract context from incoming request
import mlflow
from flask import Flask, request
from mlflow.tracing import set_tracing_context_from_http_request_headers

app = Flask(__name__)


@app.post("/handle")
def handle():
    headers = dict(request.headers)
    with set_tracing_context_from_http_request_headers(headers):
        with mlflow.start_span("server-handler") as span:
            # ... your logic ...
            span.set_attribute("status", "ok")
    return {"ok": True}

Learn more about Distributed Tracing

Full Changelog

For a comprehensive list of changes, see the release change log.

What's Next

Get Started

Install MLflow 3.9.0 to try these new features:

pip install mlflow==3.9.0

Share Your Feedback

We'd love to hear about your experience with these new features. Let us know via GitHub Issues.


MLflow 3.8.1

· One min read
MLflow maintainers

MLflow 3.8.1 includes several bug fixes and documentation updates.

Bug fixes:

  • [Tracking] Skip registering sqlalchemy store when sqlalchemy lib is not installed (#19563, @WeichenXu123)
  • [Models / Scoring] fix(security): prevent command injection via malicious model artifacts (#19583, @ColeMurray)
  • [Prompts] Fix prompt registration with model_config on Databricks (#19617, @TomeHirata)
  • [UI] Fix UI blank page on plain HTTP by replacing crypto.randomUUID with uuid library (#19644, @copilot-swe-agent)

Small bug fixes and documentation updates:

#19539, #19451, #19409, @smoorjani; #19493, @alkispoly-db

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.8.0

· 5 min read
MLflow maintainers

MLflow 3.8.0 includes several major features and improvements.

Major Features

  • ⚙️ Prompt Model Configuration: Prompts can now include model configuration, allowing you to associate specific model settings with prompt templates for more reproducible LLM workflows; see the sketch after this list. (#18963, #19174, #19279, @chenmoneygithub)
  • In-Progress Trace Display: The Traces UI now supports displaying spans from in-progress traces with auto-polling, enabling real-time debugging and monitoring of long-running LLM applications. (#19265, @B-Step62)
  • ⚖️ DeepEval and RAGAS Judges Integration: New get_judge API enables using DeepEval and RAGAS evaluation metrics as MLflow scorers, providing access to 20+ evaluation metrics including answer relevancy, faithfulness, and hallucination detection. (#18988, @smoorjani, #19345, @SomtochiUmeh)
  • 🛡️ Conversational Safety Scorer: New built-in scorer for evaluating safety of multi-turn conversations, analyzing entire conversation histories for hate speech, harassment, violence, and other safety concerns. (#19106, @joelrobin18)
  • Conversational Tool Call Efficiency Scorer: New built-in scorer for evaluating tool call efficiency in multi-turn agent interactions, detecting redundant calls, missing batching opportunities, and poor tool selections. (#19245, @joelrobin18)
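As a rough illustration of the prompt model configuration feature above, the snippet below registers a prompt with an attached model configuration. The model_config parameter name comes from the release notes, but the keys shown inside it are illustrative assumptions rather than a documented schema:

import mlflow

# Register a prompt together with model settings (a minimal sketch; the
# model_config keys below are illustrative assumptions).
mlflow.genai.register_prompt(
    name="support-summarizer",
    template="Summarize the following support ticket:\n\n{{ ticket }}",
    model_config={"model": "openai:/gpt-4o-mini", "temperature": 0.2},
)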

Important Notice

  • Collection of UI Telemetry. From MLflow 3.8.0 onwards, MLflow will collect anonymized data about UI interactions, similar to the telemetry we collect for the Python SDK. If you manage your own server, UI telemetry is automatically disabled by setting the existing environment variables: MLFLOW_DISABLE_TELEMETRY=true or DO_NOT_TRACK=true. If you do not manage your own server (e.g. you use a managed service or are not the admin), you can still opt out personally via the new "Settings" tab in the MLflow UI. For more information, please read the documentation on usage tracking.
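For example, a server administrator can opt out by exporting either variable before launching the tracking server:

export MLFLOW_DISABLE_TELEMETRY=true   # or: export DO_NOT_TRACK=true
mlflow server --host 0.0.0.0 --port 5000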

Features:

  • [Tracking] Add default passphrase support (#19360, @BenWilson2)
  • [Tracing] Pydantic AI Stream support (#19118, @joelrobin18)
  • [Docs] Deprecate Unity Catalog function integration in AI Gateway (#19457, @harupy)
  • [Tracking] Add --max-results option to mlflow experiments search (#19359, @alkispoly-db)
  • [Tracking] Enhance encryption security (#19253, @BenWilson2)
  • [Tracking] Fix and simplify Gateway store interfaces (#19346, @BenWilson2)
  • [Evaluation] Add inference_params support for LLM Judges (#19152, @debu-sinha)
  • [Tracing] Support batch span export to UC Table (#19324, @B-Step62)
  • [Tracking] Add endpoint tags (#19308, @BenWilson2)
  • [Docs / Evaluation] Add MLFLOW_GENAI_EVAL_MAX_SCORER_WORKERS to limit concurrent scorer execution (#19248, @debu-sinha)
  • [Evaluation / Tracking] Enable search_datasets in Databricks managed MLflow (#19254, @alkispoly-db)
  • [Prompts] render text prompt previews in markdown (#19200, @ispoljari)
  • [UI] Add linked prompts filter for trace search tab (#19192, @TomeHirata)
  • [Evaluation] Automatically wrap async functions when passed to predict_fn (#19249, @smoorjani)
  • [Evaluation] [3/6][builtin judges] Conversational Role Adherence (#19247, @joelrobin18)
  • [Tracking] [Endpoints] [1/x] Add backend DB tables for Endpoints (#19002, @BenWilson2)
  • [Tracking] [Endpoints] [3/x] Entities base definitions (#19004, @BenWilson2)
  • [Tracking] [Endpoints] [4/x] Abstract store interface (#19005, @BenWilson2)
  • [Tracking] [Endpoints] [5/x] SQL Store backend for Endpoints (#19006, @BenWilson2)
  • [Tracking] [Endpoints] [6/x] Protos and entities interfaces (#19007, @BenWilson2)
  • [Tracking] [Endpoints] [7/x] Add rest store implementation (#19008, @BenWilson2)
  • [Tracking] [Endpoints] [8/x] Add credential cache (#19014, @BenWilson2)
  • [Tracking] [Endpoints] [9/x] Add provider, model, and configuration handling (#19009, @BenWilson2)
  • [Evaluation / UI] Add show/hide visibility control for Evaluation runs chart view (#18797) (#18852, @pradpalnis)
  • [Tracking] Add mlflow experiments get command (#19097, @alkispoly-db)
  • [Server-infra] [ Gateway 1/10 ] Simplify secrets and masked secrets with map types (#19440, @BenWilson2)

Bug fixes:

  • [Tracing / UI] Branch 3.8 patch: Fix GraphQL SearchRuns filter using invalid attribute key in trace comparison (#19526, @WeichenXu123)
  • [Scoring / Tracking] Fix artifact download performance regression (#19520, @copilot-swe-agent)
  • [Tracking] Fix SQLAlchemy alias conflict in _search_runs for dataset filters (#19498, @fredericosantos)
  • [Tracking] Add auth support for GraphQL routes (#19278, @BenWilson2)
  • Fix SQL injection vulnerability in UC function execution (#19381, @harupy)
  • [UI] Fix MultiIndex column search crash in dataset schema table (#19461, @copilot-swe-agent)
  • [Tracking] Make datasource failures fail gracefully (#19469, @BenWilson2)
  • [Tracing / Tracking] Fix litellm autolog for versions >= 1.78 (#19459, @harupy)
  • [Model Registry / Tracking] Fix SQLAlchemy engine connection pool leak in model registry and job stores (#19386, @harupy)
  • [UI] [Bug fix] Traces UI: Support filtering on assessments with multiple values (e.g. error and boolean) (#19262, @dbczumar)
  • [Evaluation / Tracing] Fix error initialization in Feedback (#19340, @alkispoly-db)
  • [Models] Switch container build to subprocess for Sagemaker (#19277, @BenWilson2)
  • [Scoring] Fix scorers issue on Strands traces (#18835, @joelrobin18)
  • [Tracking] Stop initializing backend stores in artifacts only mode (#19167, @mprahl)
  • [Evaluation] Parallelize multi-turn session evaluation (#19222, @AveshCSingh)
  • [Tracing] Add safe attribute capture for pydantic_ai (#19219, @BenWilson2)
  • [Model Registry] Fix UC to UC copying regression (#19280, @BenWilson2)
  • [Tracking] Fix artifact path traversal vector (#19260, @BenWilson2)
  • [UI] Fix issue with auth controls on system metrics (#19283, @BenWilson2)
  • [Models] Add context loading for ChatModel (#19250, @BenWilson2)
  • [Tracing] Fix trace decorators usage for LangGraph async callers (#19228, @BenWilson2)
  • [Tracking] Update docker compose to use --artifacts-destination not --default-artifact-root (#19215, @B-Step62)
  • [Build] Reduce clint error message verbosity by consolidating README instructions (#19155, @copilot-swe-agent)

Documentation updates:

  • [Docs] Add specific references for correctness scorers (#19472, @BenWilson2)
  • [Docs] Add documentation for Fluency scorer (#19481, @alkispoly-db)
  • [Docs] Update eval quickstart to put all code into a script (#19444, @achen530)
  • [Docs] Add documentation for KnowledgeRetention scorer (#19478, @alkispoly-db)
  • [Evaluation] Fix non-reproducible code examples in deep-learning.mdx (#19376, @saumilyagupta)
  • [Docs / Evaluation] fix: Confusing documentation for mlflow.genai.evaluate() (#19380, @brandonhawi)
  • [Docs] Deprecate model logging of OpenAI flavor (#19325, @TomeHirata)
  • [Docs] Add rounded corners to video elements in documentation (#19231, @copilot-swe-agent)
  • [Docs] Sync Python/TypeScript tab selections in tracing quickstart docs (#19184, @copilot-swe-agent)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.7.0

· 9 min read
MLflow maintainers

MLflow 3.7.0 includes several major features and improvements for GenAI Observability, Evaluation, and Prompt Management.

Major Features

  • 📝 Experiment Prompts UI: New prompts functionality in the experiment UI allows you to manage and search prompts directly within experiments, with support for filter strings and prompt version search in traces. (#19156, #18919, #18906, @TomeHirata)
  • 💬 Multi-turn Evaluation Support: Enhanced mlflow.genai.evaluate now supports multi-turn conversations, enabling comprehensive assessment of conversational AI applications with DataFrame and list inputs. (#18971, @AveshCSingh)
  • ⚖️ Trace Comparison: New side-by-side comparison view in the Traces UI allows you to analyze and debug LLM application behavior across different runs, making it easier to identify regressions and improvements. (#17138, @joelrobin18)
  • 🌐 Gemini TypeScript SDK: Auto-tracing support for Google's Gemini in TypeScript, expanding MLflow's observability capabilities for JavaScript/TypeScript AI applications. (#18207, @joelrobin18)
  • 🎯 Structured Outputs in Judges: The make_judge API now supports structured outputs, enabling more precise and programmatically consumable evaluation results. (#18529, @TomeHirata)
  • 🔗 VoltAgent Tracing: Added auto-tracing support for VoltAgent, extending MLflow's observability to this AI agent framework. (#19041, @joelrobin18)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.6.0

· 3 min read
MLflow maintainers

MLflow 3.6.0 includes several major features and improvements for AI Observability, Experiment UI, Agent Evaluation and Deployment.

#1: Full OpenTelemetry Support in MLflow Tracking Server

OpenTelemetry Trace Example

MLflow now offers comprehensive OpenTelemetry integration, allowing you to use OpenTelemetry and MLflow seamlessly together for your observability stack.

  • Ingest OpenTelemetry spans directly into the MLflow tracking server (see the sketch after this list)
  • Monitor existing applications that are instrumented with OpenTelemetry
  • Trace AI applications written in arbitrary languages, including Java, Go, Rust, and more
  • Create unified traces that combine MLflow SDK instrumentation with OpenTelemetry auto-instrumentation from third-party libraries
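As an illustrative sketch of pointing an OpenTelemetry-instrumented application at the MLflow tracking server, the snippet below configures the standard OTLP/HTTP span exporter. The ingestion path and the experiment-id header shown here are assumptions; check the MLflow OpenTelemetry documentation for the exact values your server expects:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export OTel spans to the MLflow tracking server over OTLP/HTTP
# (the endpoint path and header name below are assumptions, not documented values).
exporter = OTLPSpanExporter(
    endpoint="http://localhost:5000/v1/traces",
    headers={"x-mlflow-experiment-id": "123"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("my-operation"):
    ...  # application logic traced via OpenTelemetry and ingested by MLflow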

For more information, please check out the blog post.

#2: Session-level View in Trace UI

Session-level View in Trace UI

New chat sessions tab provides a dedicated view for organizing and analyzing related traces at the session level, making it easier to track conversational workflows.

See the Track Users & Sessions guide for more details.

#3: New Supported Frameworks in TypeScript Tracing SDK

Auto-tracing support for Vercel AI SDK, LangChain.js, Mastra, Anthropic SDK, Gemini SDK in TypeScript, expanding MLflow's observability capabilities across popular JavaScript/TypeScript frameworks.

For more information, please check out the TypeScript Tracing SDK.

#4: Tracking Judge Cost and Traces

Comprehensive tracking of LLM judge evaluation costs and traces, providing visibility into evaluation expenses and performance, with automatic cost calculation and rendering.

See LLM Evaluation Guide for more details.

#5: New experiment tab bar

The experiment tab bar has been fully overhauled to provide more intuitive and discoverable navigation of different features in MLflow.

Upgrade to MLflow 3.6.0 to try it out!

#6: Agent Server for Lightning Agent Deployment

# start_server.py
import agent
from mlflow.genai.agent_server import AgentServer

agent_server = AgentServer("ResponsesAgent")
app = agent_server.app


def main():
    agent_server.run(app_import_string="start_server:app")


if __name__ == "__main__":
    main()

Start the server and send a test request:

python3 start_server.py

curl -X POST http://localhost:8000/invocations \
  -H "Content-Type: application/json" \
  -d '{
    "input": [{ "role": "user", "content": "What is the 14th Fibonacci number?"}],
    "stream": true
  }'

New agent server infrastructure for managing and deploying scoring agents with enhanced orchestration capabilities.

See Agent Server Guide for more details.

Breaking Changes and Deprecations

  • Drop numbering suffix (_1, _2, ...) from span names (#18531)
  • Deprecate promptflow, pmdarima, and diviner flavors (#18597, #18577)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.5.1

· 3 min read
MLflow maintainers

MLflow 3.5.1 is a patch release that includes several bug fixes and improvements.

Features:

  • [CLI] Add CLI command to list registered scorers by experiment (#18255, @alkispoly-db)
  • [Deployments] Add configuration option for long-running deployments client requests (#18363, @BenWilson2)
  • [Deployments] Create set_databricks_monitoring_sql_warehouse_id API (#18346, @dbrx-euirim)
  • [Prompts] Show instructions for prompt optimization on prompt registry (#18375, @TomeHirata)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.5.0

· 8 min read
MLflow maintainers

MLflow 3.5.0 includes several major features and improvements!

Major Features

  • ⚙️ Job Execution Backend: Introduced a new job execution backend infrastructure for running asynchronous tasks with individual execution pools, job search capabilities, and transient error handling. (#17676, #18012, #18070, #18071, #18112, #18049, @WeichenXu123)
  • 🎯 Flexible Prompt Optimization API: Introduced a new flexible API for prompt optimization with support for model switching and the GEPA algorithm, enabling more efficient prompt tuning with fewer rollouts. See the documentation to get started. (#18183, #18031, @TomeHirata)
  • 🎨 Enhanced UI Onboarding: Improved in-product onboarding experience with trace quickstart drawer and updated homepage guidance to help users discover MLflow's latest features. (#18098, #18187, @B-Step62)
  • 🔐 Security Middleware for Tracking Server: Added a security middleware layer to protect against DNS rebinding, CORS attacks, and other security threats. Read the documentation for configuration details. (#17910, @BenWilson2)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.4.0

· 7 min read
MLflow maintainers

MLflow 3.4.0 includes several major features and improvements.

Major New Features

  • 📊 OpenTelemetry Metrics Export: MLflow now exports span-level statistics as OpenTelemetry metrics, providing enhanced observability and monitoring capabilities for traced applications. (#17325, @dbczumar)
  • 🤖 MCP Server Integration: Introducing the Model Context Protocol (MCP) server for MLflow, enabling AI assistants and LLMs to interact with MLflow programmatically. (#17122, @harupy)
  • 🧑‍⚖️ Custom Judges API: New make_judge API enables creation of custom evaluation judges for assessing LLM outputs with domain-specific criteria. (#17647, @BenWilson2, @dbczumar, @alkispoly-db, @smoorjani)
  • 📈 Correlations Backend: Implemented backend infrastructure for storing and computing correlations between experiment metrics using NPMI (Normalized Pointwise Mutual Information). (#17309, #17368, @BenWilson2)
  • 🗂️ Evaluation Datasets: MLflow now supports storing and versioning evaluation datasets directly within experiments for reproducible model assessment. (#17447, @BenWilson2)
  • 🔗 Databricks Backend for MLflow Server: MLflow server can now use Databricks as a backend, enabling seamless integration with Databricks workspaces. (#17411, @nsthorat)
  • 🤖 Claude Autologging: Automatic tracing support for Claude AI interactions, capturing conversations and model responses. (#17305, @smoorjani)
  • 🌊 Strands Agent Tracing: Added comprehensive tracing support for Strands agents, including automatic instrumentation for agent workflows and interactions. (#17151, @joelrobin18)
  • 🧪 Experiment Types in UI: MLflow now introduces experiment types, helping reduce clutter between classic ML/DL and GenAI features. MLflow auto-detects the type, but you can easily adjust it via a selector next to the experiment name. (#17605, @daniellok-db)

Features:

  • [Evaluation] Add ability to pass tags via dataframe in mlflow.genai.evaluate (#17549, @smoorjani)
  • [Evaluation] Add custom judge model support for Safety and RetrievalRelevance builtin scorers (#17526, @dbrx-euirim)
  • [Tracing] Add AI commands as MCP prompts for LLM interaction (#17608, @nsthorat)
  • [Tracing] Add MLFLOW_ENABLE_OTLP_EXPORTER environment variable (#17505, @dbczumar)
  • [Tracing] Support OTel and MLflow dual export (#17187, @dbczumar)
  • [Tracing] Make set_destination use ContextVar for thread safety (#17219, @B-Step62)
  • [CLI] Add MLflow commands CLI for exposing prompt commands to LLMs (#17530, @nsthorat)
  • [CLI] Add 'mlflow runs link-traces' command (#17444, @nsthorat)
  • [CLI] Add 'mlflow runs create' command for programmatic run creation (#17417, @nsthorat)
  • [CLI] Add MLflow traces CLI command with comprehensive search and management capabilities (#17302, @nsthorat)
  • [CLI] Add --env-file flag to all MLflow CLI commands (#17509, @nsthorat)
  • [Tracking] Backend for storing scorers in MLflow experiments (#17090, @WeichenXu123)
  • [Model Registry] Allow cross-workspace copying of model versions between WMR and UC (#17458, @arpitjasa-db)
  • [Models] Add automatic Git-based model versioning for GenAI applications (#17076, @harupy)
  • [Models] Improve WheeledModel._download_wheels safety (#17004, @serena-ruan)
  • [Projects] Support resume run for Optuna hyperparameter optimization (#17191, @lu-wang-dl)
  • [Scoring] Add MLFLOW_DEPLOYMENT_CLIENT_HTTP_REQUEST_TIMEOUT environment variable (#17252, @dbczumar)
  • [UI] Add ability to hide/unhide all finished runs in Chart view (#17143, @joelrobin18)
  • [Telemetry] Add MLflow OSS telemetry for invoke_custom_judge_model (#17585, @dbrx-euirim)

Bug fixes:

  • [Evaluation] Implement DSPy LM interface for default Databricks model serving (#17672, @smoorjani)
  • [Evaluation] Fix aggregations incorrectly applied to legacy scorer interface (#17596, @BenWilson2)
  • [Evaluation] Add Unity Catalog table source support for mlflow.evaluate (#17546, @BenWilson2)
  • [Evaluation] Fix custom prompt judge encoding issues with custom judge models (#17584, @dbrx-euirim)
  • [Tracking] Fix OpenAI autolog to properly reconstruct Response objects from streaming events (#17535, @WeichenXu123)
  • [Tracking] Add basic authentication support in TypeScript SDK (#17436, @kevin-lyn)
  • [Tracking] Update scorer endpoints to v3.0 API specification (#17409, @WeichenXu123)
  • [Tracking] Fix scorer status handling in MLflow tracking backend (#17379, @WeichenXu123)
  • [Tracking] Fix missing source-run information in UI (#16682, @WeichenXu123)
  • [Scoring] Fix spark_udf to always use stdin_serve for model serving (#17580, @WeichenXu123)
  • [Scoring] Fix a bug with Spark UDF usage of uv as an environment manager (#17489, @WeichenXu123)
  • [Model Registry] Extract source workspace ID from run_link during model version migration (#17600, @arpitjasa-db)
  • [Models] Improve security by reducing write permissions in temporary directory creation (#17544, @BenWilson2)
  • [Server-infra] Fix --env-file flag compatibility with --dev mode (#17615, @nsthorat)
  • [Server-infra] Fix basic authentication with Uvicorn server (#17523, @kevin-lyn)
  • [UI] Fix experiment comparison functionality in UI (#17550, @Flametaa)
  • [UI] Fix compareExperimentsSearch route definitions (#17459, @WeichenXu123)


For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 3.4.0rc0

· 6 min read
MLflow maintainers

MLflow 3.4.0rc0 is a release candidate for 3.4.0. To install, run the following command:

pip install mlflow==3.4.0rc0

MLflow 3.4.0rc0 includes several major features and improvements.

Major New Features

  • 📊 OpenTelemetry Metrics Export: MLflow now exports span-level statistics as OpenTelemetry metrics, providing enhanced observability and monitoring capabilities for traced applications. (#17325, @dbczumar)
  • 🤖 MCP Server Integration: Introducing the Model Context Protocol (MCP) server for MLflow, enabling AI assistants and LLMs to interact with MLflow programmatically. (#17122, @harupy)
  • 🧑‍⚖️ Custom Judges API: New make_judge API enables creation of custom evaluation judges for assessing LLM outputs with domain-specific criteria. (#17647, @BenWilson2, @dbczumar, @alkispoly-db, @smoorjani)
  • 📈 Correlations Backend: Implemented backend infrastructure for storing and computing correlations between experiment metrics using NPMI (Normalized Pointwise Mutual Information). (#17309, #17368, @BenWilson2)
  • 🗂️ Evaluation Datasets: MLflow now supports storing and versioning evaluation datasets directly within experiments for reproducible model assessment. (#17447, @BenWilson2)
  • 🔗 Databricks Backend for MLflow Server: MLflow server can now use Databricks as a backend, enabling seamless integration with Databricks workspaces. (#17411, @nsthorat)
  • 🤖 Claude Autologging: Automatic tracing support for Claude AI interactions, capturing conversations and model responses. (#17305, @smoorjani)
  • 🌊 Strands Agent Tracing: Added comprehensive tracing support for Strands agents, including automatic instrumentation for agent workflows and interactions. (#17151, @joelrobin18)

Features:

  • [Evaluation] Add ability to pass tags via dataframe in mlflow.genai.evaluate (#17549, @smoorjani)
  • [Evaluation] Add custom judge model support for Safety and RetrievalRelevance builtin scorers (#17526, @dbrx-euirim)
  • [Tracing] Add AI commands as MCP prompts for LLM interaction (#17608, @nsthorat)
  • [Tracing] Add MLFLOW_ENABLE_OTLP_EXPORTER environment variable (#17505, @dbczumar)
  • [Tracing] Support OTel and MLflow dual export (#17187, @dbczumar)
  • [Tracing] Make set_destination use ContextVar for thread safety (#17219, @B-Step62)
  • [CLI] Add MLflow commands CLI for exposing prompt commands to LLMs (#17530, @nsthorat)
  • [CLI] Add 'mlflow runs link-traces' command (#17444, @nsthorat)
  • [CLI] Add 'mlflow runs create' command for programmatic run creation (#17417, @nsthorat)
  • [CLI] Add MLflow traces CLI command with comprehensive search and management capabilities (#17302, @nsthorat)
  • [CLI] Add --env-file flag to all MLflow CLI commands (#17509, @nsthorat)
  • [Tracking] Backend for storing scorers in MLflow experiments (#17090, @WeichenXu123)
  • [Model Registry] Allow cross-workspace copying of model versions between WMR and UC (#17458, @arpitjasa-db)
  • [Models] Add automatic Git-based model versioning for GenAI applications (#17076, @harupy)
  • [Models] Improve WheeledModel._download_wheels safety (#17004, @serena-ruan)
  • [Projects] Support resume run for Optuna hyperparameter optimization (#17191, @lu-wang-dl)
  • [Scoring] Add MLFLOW_DEPLOYMENT_CLIENT_HTTP_REQUEST_TIMEOUT environment variable (#17252, @dbczumar)
  • [UI] Add ability to hide/unhide all finished runs in Chart view (#17143, @joelrobin18)
  • [Telemetry] Add MLflow OSS telemetry for invoke_custom_judge_model (#17585, @dbrx-euirim)

Bug fixes:

  • [Evaluation] Implement DSPy LM interface for default Databricks model serving (#17672, @smoorjani)
  • [Evaluation] Fix aggregations incorrectly applied to legacy scorer interface (#17596, @BenWilson2)
  • [Evaluation] Add Unity Catalog table source support for mlflow.evaluate (#17546, @BenWilson2)
  • [Evaluation] Fix custom prompt judge encoding issues with custom judge models (#17584, @dbrx-euirim)
  • [Tracking] Fix OpenAI autolog to properly reconstruct Response objects from streaming events (#17535, @WeichenXu123)
  • [Tracking] Add basic authentication support in TypeScript SDK (#17436, @kevin-lyn)
  • [Tracking] Update scorer endpoints to v3.0 API specification (#17409, @WeichenXu123)
  • [Tracking] Fix scorer status handling in MLflow tracking backend (#17379, @WeichenXu123)
  • [Tracking] Fix missing source-run information in UI (#16682, @WeichenXu123)
  • [Scoring] Fix spark_udf to always use stdin_serve for model serving (#17580, @WeichenXu123)
  • [Scoring] Fix a bug with Spark UDF usage of uv as an environment manager (#17489, @WeichenXu123)
  • [Model Registry] Extract source workspace ID from run_link during model version migration (#17600, @arpitjasa-db)
  • [Models] Improve security by reducing write permissions in temporary directory creation (#17544, @BenWilson2)
  • [Server-infra] Fix --env-file flag compatibility with --dev mode (#17615, @nsthorat)
  • [Server-infra] Fix basic authentication with Uvicorn server (#17523, @kevin-lyn)
  • [UI] Fix experiment comparison functionality in UI (#17550, @Flametaa)
  • [UI] Fix compareExperimentsSearch route definitions (#17459, @WeichenXu123)

Please try it out and report any issues on the issue tracker.

MLflow 3.3.2

· One min read
MLflow maintainers

MLflow 3.3.2 is a patch release that includes several minor improvements and bugfixes

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.