MLflow 2.8.1

2 min read
MLflow maintainers

MLflow 2.8.1 is a patch release containing critical bug fixes and an update as part of our continued work on reworking the docs.

Notable details:

  • The API mlflow.llm.log_predictions has been marked as deprecated, as its functionality has been incorporated into mlflow.log_table. It will be removed in the 2.9.0 release. (#10414, @dbczumar)

Bug fixes:

  • [Artifacts] Fix a regression in 2.8.0 where downloading a single file from a registered model would fail (#10362, @BenWilson2)
  • [Evaluate] Fix the Azure OpenAI integration for mlflow.evaluate when using LLM judge metrics (#10291, @prithvikannan)
  • [Evaluate] Change Examples to optional for the make_genai_metric API (#10353, @prithvikannan)
  • [Evaluate] Remove the fastapi dependency when using mlflow.evaluate for LLM results (#10354, @prithvikannan)
  • [Evaluate] Fix syntax issues and improve the formatting for generated prompt templates (#10402, @annzhang-db)
  • [Gateway] Fix the Gateway configuration validator pre-check for OpenAI to perform instance type validation (#10379, @BenWilson2)
  • [Tracking] Fix an intermittent issue with hanging threads when using asynchronous logging (#10374, @chenmoneygithub)
  • [Tracking] Add a timeout for the mlflow.login() API to catch invalid hostname configuration input errors (#10239, @chenmoneygithub)
  • [Tracking] Add a flush operation at the conclusion of logging system metrics (#10320, @chenmoneygithub)
  • [Models] Correct the prompt template generation logic within the Prompt Engineering UI so that the prompts can be used in the Python API (#10341, @daniellok-db)
  • [Models] Fix an issue in the SHAP model explainability functionality within mlflow.shap.log_explanation so that duplicate or conflicting dependencies are not registered when logging (#10305, @BenWilson2)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.