
MLflow 2.19.0

MLflow maintainers · 2 min read

2.19.0 (2024-12-11)

We are excited to announce the release of MLflow 2.19.0! This release includes a number of significant features, enhancements, and bug fixes.

Major New Features

  • ChatModel enhancements - ChatModel now adopts ChatCompletionRequest and ChatCompletionResponse as its new schema. The predict_stream interface uses ChatCompletionChunk to deliver true streaming responses. Additionally, the custom_inputs and custom_outputs fields in ChatModel now use AnyType, enabling support for a wider variety of data types (a minimal usage sketch follows this list). Note: In a future version of MLflow, ChatParams (and by extension, ChatCompletionRequest) will have the default values for n, temperature, and stream removed. (#13782, #13857, @stevenchen-db)

  • Tracing improvements - MLflow Tracing now supports both automatic and manual tracing for the DSPy, LlamaIndex, and LangChain flavors. Tracing is also automatically enabled during MLflow evaluation for all supported flavors (a combined autologging sketch, which also covers the new CrewAI and Anthropic integrations, follows this list). (#13790, #13793, #13795, #13897, @B-Step62)

  • New Tracing Integrations - MLflow Tracing now supports CrewAI and Anthropic, enabling a one-line, fully automated tracing experience. (#13903, @TomeHirata, #13851, @gabrielfu)

  • AnyType in model signatures - MLflow now supports AnyType in model signatures, which can be used to represent data types that were not supported before (a short signature sketch follows this list). (#13766, @serena-ruan)

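To make the new schema concrete, here is a minimal sketch of a custom ChatModel, assuming the mlflow.types.llm dataclasses shown (ChatMessage, ChatParams, ChatChoice, ChatCompletionResponse); the echo behavior and artifact name are purely illustrative, and a predict_stream override would analogously yield ChatCompletionChunk objects.

```python
import mlflow
from mlflow.types.llm import (
    ChatChoice,
    ChatCompletionResponse,
    ChatMessage,
    ChatParams,
)


class EchoChatModel(mlflow.pyfunc.ChatModel):
    """Illustrative chat model that echoes the last user message back."""

    def predict(
        self, context, messages: list[ChatMessage], params: ChatParams
    ) -> ChatCompletionResponse:
        last_message = messages[-1].content if messages else ""
        return ChatCompletionResponse(
            choices=[
                ChatChoice(
                    index=0,
                    message=ChatMessage(role="assistant", content=f"Echo: {last_message}"),
                )
            ]
        )


with mlflow.start_run():
    mlflow.pyfunc.log_model("chat_model", python_model=EchoChatModel())
```
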
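The sketch below illustrates the one-line autologging experience for the flavors mentioned above, mixed with manual tracing via the fluent APIs; the traced function, span name, and retrieval logic are hypothetical, and the flavor-level autolog entry points are assumed to follow the usual mlflow.<flavor>.autolog() pattern.

```python
import mlflow

# Automatic tracing: one line per flavor enables trace capture for that library.
mlflow.langchain.autolog()
mlflow.llama_index.autolog()
mlflow.dspy.autolog()

# New in 2.19: the same one-line experience for CrewAI and Anthropic.
mlflow.crewai.autolog()
mlflow.anthropic.autolog()


# Manual tracing can be mixed in with the fluent APIs.
@mlflow.trace
def answer(question: str) -> str:
    with mlflow.start_span(name="retrieve") as span:
        span.set_inputs({"question": question})
        context = "..."  # retrieval logic would go here
        span.set_outputs({"context": context})
    return f"Answer based on {context}"
```
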
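And a short sketch of AnyType in a model signature, assuming it is importable from mlflow.types.schema alongside Schema and ColSpec; the column names are illustrative.

```python
from mlflow.models import ModelSignature
from mlflow.types.schema import AnyType, ColSpec, Schema

# A signature whose columns accept arbitrary data that previously had no
# corresponding MLflow type.
signature = ModelSignature(
    inputs=Schema([ColSpec(AnyType(), "payload")]),
    outputs=Schema([ColSpec(AnyType(), "result")]),
)
```
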
Other Features:

  • [Tracking] Add the update_current_trace API for adding tags to an active trace (sketched below). (#13828, @B-Step62)
  • [Deployments] Update Databricks deployments to support AI Gateway and additional update endpoints (#13513, @djliden)
  • [Models] Support uv as an environment manager in mlflow.models.predict (sketched below) (#13824, @serena-ruan)
  • [Models] Add support for type hints, including Pydantic models (#13924, @serena-ruan)
  • [Tracking] Add the trace.search_spans() method for searching spans within traces (sketched below) (#13984, @B-Step62)

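As referenced in the items above, here is a rough sketch of the new tracking and model helpers used together; the span name, tag values, model URI, and input payload are illustrative assumptions rather than a verified end-to-end example.

```python
import mlflow


@mlflow.trace
def pipeline(question: str) -> str:
    # Attach tags to the trace that is currently being recorded.
    mlflow.update_current_trace(tags={"env": "staging"})
    with mlflow.start_span(name="generate") as span:
        span.set_outputs({"answer": "42"})
    return "42"


pipeline("What is the answer to everything?")

# Retrieve the most recent trace and search its spans by name.
trace = mlflow.get_last_active_trace()
generate_spans = trace.search_spans(name="generate")

# Validate a logged model in an isolated environment managed by uv.
mlflow.models.predict(
    model_uri="models:/my-model/1",  # hypothetical model URI
    input_data={"question": "What is the answer to everything?"},
    env_manager="uv",
)
```
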
Bug fixes:

  • [Tracking] Allow passing Spark Connect DataFrames to the mlflow.evaluate API (#13889, @WeichenXu123)
  • [Tracking] Fix mlflow.end_run inside an MLflow run context manager (#13888, @WeichenXu123)
  • [Scoring] Fix the spark_udf conditional check for remote Spark Connect clients and Databricks Serverless (#13827, @WeichenXu123)
  • [Models] Allow changing max_workers for built-in LLM-as-a-Judge metrics (#13858, @B-Step62)
  • [Models] Support saving all LangChain runnables using code-based logging (#13821, @serena-ruan)
  • [Model Registry] Return an empty array when DatabricksSDKModelsArtifactRepository.list_artifacts is called on a file (#14027, @shichengzhou-db)
  • [Tracking] Stringify param values in client.log_batch() (#14015, @B-Step62)
  • [Tracking] Remove deprecated squared parameter (#14028, @B-Step62)
  • [Tracking] Fix request/response field in the search_traces output (#13985, @B-Step62)

Documentation updates:

  • [Docs] Add Ollama and Instructor examples in tracing doc (#13937, @B-Step62)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.