We are happy to announce the availability of MLflow 2.2.1!
MLflow 2.2.1 is a patch release containing the following bug fixes:
- [Model Registry] Fix a bug that caused too many results to be requested by default when calling `MlflowClient.search_model_versions()` (#7935, @dbczumar)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 2.2.0!
MLflow 2.2.0 includes several major features and improvements:
Features:
- [Recipes] Add support for score calibration to the classification recipe (#7744, @sunishsheth2009)
- [Recipes] Add automatic label encoding to the classification recipe (#7711, @sunishsheth2009)
- [Recipes] Support custom data splitting logic in the classification and regression recipes (#7815, #7588, @sunishsheth2009)
- [Recipes] Introduce customizable MLflow Run name prefixes to the classification and regression recipes (#7746, @kamalesh0406; #7763, @sunishsheth2009)
- [UI] Add a new Chart View to the MLflow Experiment Page for model performance insights (#7864, @hubertzub-db, @apurva-koti, @prithvikannan, @ridhimag11, @sunishseth2009, @dbczumar)
- [UI] Modernize and improve parallel coordinates chart for model tuning (#7864, @hubertzub-db, @apurva-koti, @prithvikannan, @ridhimag11, @sunishseth2009, @dbczumar)
- [UI] Add typeahead suggestions to the MLflow Experiment Page search bar (#7864, @hubertzub-db, @apurva-koti, @prithvikannan, @ridhimag11, @sunishseth2009, @dbczumar)
- [UI] Improve performance of Experiments Sidebar for large numbers of experiments (#7804, @jmahlik)
- [Tracking] Introduce autologging support for native PyTorch models (#7627, @temporaer)
- [Tracking] Allow specifying `model_format` when autologging XGBoost models (#7781, @guyrosin)
- [Tracking] Add the `MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT` environment variable to configure artifact operation timeouts (#7783, @wamartin-aml)
- [Artifacts] Include `Content-Type` response headers for artifacts downloaded from `mlflow server` (#7827, @bali0019)
- [Model Registry] Introduce the `searchModelVersions()` API to the Java client (#7880, @gabrielfu)
- [Model Registry] Introduce `max_results`, `order_by`, and `page_token` arguments to `MlflowClient.search_model_versions()` (#7623, @serena-ruan) (see the usage sketch after this list)
- [Models] Support logging large ONNX models by using external data (#7808, @dogeplusplus)
- [Models] Add support for logging Diviner models fit in Spark (#7800, @BenWilson2)
- [Models] Introduce the `MLFLOW_DEFAULT_PREDICTION_DEVICE` environment variable to set the device for pyfunc model inference (#7922, @ankit-db)
- [Scoring] Publish official Docker images for the MLflow Model scoring server at github.com/mlflow/mlflow/pkgs (#7759, @dbczumar)
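The new pagination arguments can be combined as in the following minimal sketch, which assumes a registered model named `my-model` and uses an illustrative `order_by` key:

```python
from mlflow import MlflowClient

client = MlflowClient()

# Page through versions of a (hypothetical) registered model, newest first,
# using the new max_results / order_by / page_token arguments.
page_token = None
while True:
    page = client.search_model_versions(
        filter_string="name = 'my-model'",
        max_results=50,
        order_by=["version_number DESC"],  # order_by key shown for illustration
        page_token=page_token,
    )
    for mv in page:
        print(mv.name, mv.version, mv.current_stage)
    page_token = page.token  # PagedList exposes the token for the next page
    if not page_token:
        break
```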
Bug fixes:
- [Recipes] Fix dataset format validation in the ingest step for custom dataset sources (#7638, @sunishsheth2009)
- [Recipes] Fix bug in identification of worst performing examples during training (#7658, @sunishsheth2009)
- [Recipes] Ensure consistent rendering of the recipe graph when `inspect()` is called (#7852, @sunishsheth2009)
- [Recipes] Correctly respect the `positive_class` configuration in the transform step (#7626, @sunishsheth2009)
- [Recipes] Make logged metric names consistent with `mlflow.evaluate()` (#7613, @sunishsheth2009)
- [Recipes] Add `run_id` and `artifact_path` keys to logged MLmodel files (#7651, @sunishsheth2009)
- [UI] Fix bugs in UI validation of experiment names, model names, and tag keys (#7818, @subramaniam02)
- [Tracking] Resolve artifact locations to absolute paths when creating experiments (#7670, @bali0019)
- [Tracking] Exclude Delta checkpoints from Spark datasource autologging (#7902, @harupy)
- [Tracking] Consistently return an empty list from GetMetricHistory when a metric does not exist (#7589, @bali0019; #7659, @harupy)
- [Artifacts] Fix support for artifact operations on Windows paths in UNC format (#7750, @bali0019)
- [Artifacts] Fix bug in HDFS artifact listing (#7581, @pwnywiz)
- [Model Registry] Disallow creation of model versions with local filesystem sources in `mlflow server` (#7908, @harupy)
- [Model Registry] Fix handling of deleted model versions in FileStore (#7716, @harupy)
- [Model Registry] Correctly initialize Model Registry SQL tables independently of MLflow Tracking (#7704, @harupy)
- [Models] Correctly move PyTorch model outputs from GPUs to CPUs during inference with pyfunc (#7885, @ankit-db)
- [Build] Fix compatibility issues with Python installations compiled using `PYTHONOPTIMIZE=2` (#7791, @dbczumar)
- [Build] Fix compatibility issues with the upcoming pandas 2.0 release (#7899, @harupy; #7910, @dbczumar)
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 2.1.1!
MLflow 2.1.1 is a patch release containing the following bug fixes:
- [Scoring] Fix an `mlflow.pyfunc.spark_udf()` type casting error on models with a `ColSpec` input schema, and make `PyFuncModel.predict` support dataframes with elements of `numpy.ndarray` type (#7592, @WeichenXu123)
- [Scoring] Make `mlflow.pyfunc.scoring_server.client.ScoringServerClient` support input dataframes with elements of `numpy.ndarray` type (#7594, @WeichenXu123)
- [Tracking] Ensure mlflow imports ML packages lazily (#7597, @harupy)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 2.1.0!
MLflow 2.1.0 includes several major features and improvements:
Features:
- [Recipes] Introduce support for multi-class classification (#7458, @mshtelma)
- [Recipes] Extend the pyfunc representation of classification models to output scores in addition to labels (#7474, @sunishsheth2009)
- [UI] Add user ID and lifecycle stage quick search links to the Runs page (#7462, @jaeday)
- [Tracking] Paginate the GetMetricHistory API (#7523, #7415, @BenWilson2)
- [Tracking] Add Runs search aliases for Run name and start time that correspond to UI column names (#7492, @apurva-koti)
- [Tracking] Add a `/version` endpoint to `mlflow server` for querying the server's MLflow version (#7273, @joncarter1)
- [Model Registry] Add FileStore support for the Model Registry (#6605, @serena-ruan)
- [Model Registry] Introduce an `mlflow.search_registered_models()` fluent API (#7428, @TSienki)
- [Model Registry / Java] Add a `getRegisteredModel()` method to the Java client (#6602) (#7511, @drod331)
- [Model Registry / R] Add an `mlflow_set_model_version_tag()` method to the R client (#7401, @leeweijie)
- [Models] Introduce a `metadata` field to the MLmodel specification and `log_model()` methods (#7237, @jdonzallaz) (see the sketch after this list)
- [Models] Extend `Model.load()` to support loading MLmodel specifications from remote locations (#7517, @dbczumar)
- [Models] Pin the major version of MLflow in Models' `requirements.txt` and `conda.yaml` files (#7364, @BenWilson2)
- [Scoring] Extend `mlflow.pyfunc.spark_udf()` to support StructType results (#7527, @WeichenXu123)
- [Scoring] Extend TensorFlow and Keras Models to support multi-dimensional inputs with `mlflow.pyfunc.spark_udf()` (#7531, #7291, @WeichenXu123)
- [Scoring] Support specifying deployment environment variables and tags when deploying models to SageMaker (#7433, @jhallard)
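A minimal sketch of the new `metadata` field and remote `Model.load()` support mentioned above; the metadata keys and the scikit-learn model below are placeholders:

```python
import mlflow
from mlflow.models import Model
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=500).fit(X, y)
    # Attach free-form metadata to the logged MLmodel specification (new in 2.1.0);
    # the keys and values here are placeholders.
    mlflow.sklearn.log_model(model, "model", metadata={"owner": "ml-team", "stage": "candidate"})

# Model.load() can now resolve MLmodel specifications from remote/tracking URIs.
mlmodel = Model.load(f"runs:/{run.info.run_id}/model")
print(mlmodel.metadata)
```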
Bug fixes:
- [Recipes] Fix a bug that prevented use of custom `early_stop` functions during model tuning (#7538, @sunishsheth2009)
- [Recipes] Fix a bug in the logic used to create a Spark session during data ingestion (#7307, @WeichenXu123)
- [Tracking] Make the metric names produced by `mlflow.autolog()` consistent with `mlflow.evaluate()` (#7418, @wenfeiy-db)
- [Tracking] Fix an autologging bug that caused nested, redundant information to be logged for XGBoost and LightGBM models (#7404, @WeichenXu123)
- [Tracking] Correctly classify SQLAlchemy OperationalErrors as retryable HTTP errors (#7240, @barrywhart)
- [Artifacts] Correctly handle special characters in credentials when using FTP artifact storage (#7479, @HCTsai)
- [Models] Address an issue that prevented MLeap models from being saved on Windows (#6966, @dbczumar)
- [Scoring] Fix a permissions issue encountered when using NFS during model scoring with `mlflow.pyfunc.spark_udf()` (#7427, @WeichenXu123)
Documentation updates:
- [Docs] Add more examples to the Runs search documentation page (#7487, @apurva-koti)
- [Docs] Add documentation for Model flavors developed by the community (#7425, @mmerce)
- [Docs] Add an example for logging and scoring ONNX Models (#7398, @Rusteam)
- [Docs] Fix a typo in the model scoring REST API example for inputs with the `dataframe_split` format (#7540, @zhouyangyu)
- [Docs] Fix a typo in the model scoring REST API example for inputs with the `dataframe_records` format (#7361, @dbczumar)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 2.0.1!
The 2.0.1 version of MLflow is a major milestone release that focuses on simplifying the management of end-to-end MLOps workflows, providing new feature-rich functionality, and expanding upon the production-ready MLOps capabilities offered by MLflow. Check out the MLflow 2.0 blog post for an in-depth walkthrough!
This release contains several important breaking changes from the 1.x API, as well as additional major features and improvements.
Features:
- [Recipes] MLflow Pipelines is now MLflow Recipes - a framework that enables data scientists to quickly develop high-quality models and deploy them to production
- [Recipes] Add support for classification models to MLflow Recipes (#7082, @bbarnes52)
- [UI] Introduce support for pinning runs within the experiments UI (#7177, @harupy)
- [UI] Simplify the layout and provide customized displays of metrics, parameters, and tags within the experiments UI (#7177, @harupy)
- [UI] Simplify run filtering and ordering of runs within the experiments UI (#7177, @harupy)
- [Tracking] Update `mlflow.pyfunc.get_model_dependencies()` to download all referenced requirements files for specified models (#6733, @harupy)
- [Tracking] Add support for selecting the Keras model `save_format` used by `mlflow.tensorflow.autolog()` (#7123, @balvisio)
- [Models] Set `mlflow.evaluate()` status to stable as it is now a production-ready API
- [Models] Simplify APIs for specifying custom metrics and custom artifacts during model evaluation with `mlflow.evaluate()` (#7142, @harupy)
- [Models] Correctly infer the positive label for binary classification within `mlflow.evaluate()` (#7149, @dbczumar)
- [Models] Enable automated signature logging for `tensorflow` and `keras` models when `mlflow.tensorflow.autolog()` is enabled (#6678, @BenWilson2)
- [Models] Add support for native Keras and TensorFlow Core models within `mlflow.tensorflow` (#6530, @WeichenXu123)
- [Models] Add support for defining the `model_format` used by `mlflow.xgboost.save/log_model()` (#7068, @AvikantSrivastava)
- [Scoring] Overhaul the model scoring REST API to introduce format indicators for inputs and support multiple output fields (#6575, @tomasatdatabricks; #7254, @adriangonz) (see the example after this list)
- [Scoring] Add support for ragged arrays in model signatures (#7135, @trangevi)
- [Java] Add a `getModelVersion` API to the Java client (#6955, @wgottschalk)
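As a rough illustration of the overhauled scoring protocol referenced above, a request to a locally served model might look like the following; the host, port, and column names are placeholders:

```python
import requests

# The input format is now declared by the top-level payload key
# ("dataframe_split", "dataframe_records", "instances", or "inputs")
# instead of a Content-Type header. Column names and values are placeholders.
payload = {
    "dataframe_split": {
        "columns": ["feature_1", "feature_2"],
        "data": [[1.0, 2.0], [3.0, 4.0]],
    }
}

# Assumes a model is already being served locally, e.g. via `mlflow models serve -p 5000`.
response = requests.post("http://127.0.0.1:5000/invocations", json=payload)
print(response.json())  # e.g. {"predictions": [...]}
```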
Breaking Changes:
The following list of breaking changes is arranged by order of significance within each category.
- [Core] Support for Python 3.7 has been dropped. MLflow now requires Python >=3.8
- [Recipes] `mlflow.pipelines` APIs have been replaced with `mlflow.recipes`
- [Tracking / Registry] Remove `/preview` routes for Tracking and Model Registry REST APIs (#6667, @harupy)
- [Tracking] Remove deprecated `list` APIs for experiments, models, and runs from the Python, Java, R, and REST APIs (#6785, #6786, #6787, #6788, #6800, #6868, @dbczumar)
- [Tracking] Remove the deprecated `runs` response field from the `Get Experiment` REST API response (#6541, #6524, @dbczumar)
- [Tracking] Remove the deprecated `MlflowClient.download_artifacts` API (#6537, @WeichenXu123)
- [Tracking] Change the behavior of environment variable handling for `MLFLOW_EXPERIMENT_NAME` such that the value is always used when creating an experiment (#6674, @BenWilson2)
- [Tracking] Update `mlflow server` to run in `--serve-artifacts` mode by default (#6502, @harupy)
- [Tracking] Update Experiment ID generation for the Filestore backend to enable threadsafe concurrency (#7070, @BenWilson2)
- [Tracking] Remove `dataset_name` and `on_data_{name | hash}` suffixes from `mlflow.evaluate()` metric keys (#7042, @harupy)
- [Models / Scoring / Projects] Change the default environment manager to `virtualenv` instead of `conda` for model inference and project execution (#6459, #6489, @harupy)
- [Models] Move Keras model logging APIs to the `mlflow.tensorflow` flavor and drop support for TensorFlow Estimators (#6530, @WeichenXu123)
- [Models] Remove the deprecated `mlflow.sklearn.eval_and_log_metrics()` API in favor of the `mlflow.evaluate()` API (#6520, @dbczumar)
- [Models] Require `mlflow.evaluate()` model inputs to be specified as URIs (#6670, @harupy)
- [Models] Drop support for returning custom metrics and artifacts from the same function when using `mlflow.evaluate()`, in favor of `custom_artifacts` (#7142, @harupy)
- [Models] Extend the `PyFuncModel` spec to support `conda` and `virtualenv` subfields (#6684, @harupy)
- [Scoring] Remove support for defining input formats using the `Content-Type` header (#6575, @tomasatdatabricks; #7254, @adriangonz)
- [Scoring] Replace the `--no-conda` CLI option argument for native serving with `--env-manager='local'` (#6501, @harupy)
- [Scoring] Remove the public APIs for `mlflow.sagemaker.deploy()` and `mlflow.sagemaker.delete()` in favor of MLflow deployments APIs, such as `mlflow deployments -t sagemaker` (#6650, @dbczumar)
- [Scoring] Rename the input argument `df` to `inputs` in the `mlflow.deployments.predict()` method (#6681, @BenWilson2)
- [Projects] Replace the `use_conda` argument with the `env_manager` argument within the `run` CLI command for MLflow Projects (#6654, @harupy) (see the sketch after this list)
- [Projects] Modify the MLflow Projects docker image build options by renaming `--skip-image-build` to `--build-image` with a default of `False` (#7011, @harupy)
- [Integrations/Azure] Remove deprecated `mlflow.azureml` modules from MLflow in favor of the `azure-mlflow` deployment plugin (#6691, @BenWilson2)
- [R] Remove conda integration with the R client (#6638, @harupy)
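The Projects `env_manager` change also has a Python-side counterpart; a minimal sketch, assuming the example project URI shown is reachable:

```python
import mlflow

# Execute a project with the active local Python environment instead of a
# conda/virtualenv environment; env_manager replaces the removed use_conda flag.
# The project URI and entry point below are just examples.
submitted = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example",
    entry_point="main",
    env_manager="local",
    synchronous=True,
)
print(submitted.run_id)
```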
Bug fixes:
- [Recipes] Fix rendering issue with profile cards polyfill (#7154, @hubertzub-db)
- [Tracking] Set the MLflow Run name correctly when specified as part of the `tags` argument to `mlflow.start_run()` (#7228, @Cokral)
- [Tracking] Fix an issue with conflicting MLflow Run name assignment if the `mlflow.runName` tag is set (#7138, @harupy)
- [Scoring] Fix an incorrect payload constructor error in the SageMaker deployment client `predict()` API (#7193, @dbczumar)
- [Scoring] Fix an issue where `DataCaptureConfig` information was not preserved when updating a SageMaker deployment (#7281, @harupy)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 1.30.0!
MLflow 1.30.0 includes several major features and improvements:
Features:
- [Pipelines] Introduce hyperparameter tuning support to MLflow Pipelines (#6859, @prithvikannan)
- [Pipelines] Introduce support for prediction outlier comparison to training data set (#6991, @jinzhang21)
- [Pipelines] Introduce support for recording all training parameters for reproducibility (#7026, #7094, @prithvikannan)
- [Pipelines] Add support for `Delta` tables as a datasource in the ingest step (#7010, @sunishsheth2009)
- [Pipelines] Add expanded support for data profiling up to 10,000 columns (#7035, @prithvikanna)
- [Pipelines] Add support for AutoML in MLflow Pipelines using FLAML (#6959, @mshtelma)
- [Pipelines] Add support for simplified transform step execution by allowing for unspecified configuration (#6909, @apurva-koti)
- [Pipelines] Introduce a data preview tab to the transform step card (#7033, @prithvikannan)
- [Tracking] Introduce the `run_name` attribute for the `create_run`, `get_run`, and `update_run` APIs (#6782, #6798, @apurva-koti)
- [Tracking] Add support for searching by `creation_time` and `last_update_time` in the `search_experiments` API (#6979, @harupy)
- [Tracking] Add support for the search terms `run_id IN` and `run_id NOT IN` in the `search_runs` API (#6945, @harupy) (see the sketch after this list)
- [Tracking] Add support for searching by `user_id` and `end_time` in the `search_runs` API (#6881, #6880, @subramaniam02)
- [Tracking] Add support for searching by `run_name` and `run_id` in the `search_runs` API (#6899, @harupy; #6952, @alexacole)
- [Tracking] Add support for synchronizing the run `name` attribute and the `mlflow.runName` tag (#6971, @BenWilson2)
- [Tracking] Add support for signed tracking server requests using AWSSigv4 and AWS IAM (#7044, @pdifranc)
- [Tracking] Introduce the `update_run()` API for modifying the `status` and `name` attributes of existing runs (#7013, @gabrielfu)
- [Tracking] Add support for experiment deletion in the `mlflow gc` CLI (#6977, @shaikmoeed)
- [Models] Add support for environment restoration in the `evaluate()` API (#6728, @jerrylian-db)
- [Models] Remove restrictions on binary classification labels in the `evaluate()` API (#7077, @dbczumar)
- [Scoring] Add support for `BooleanType` to `mlflow.pyfunc.spark_udf()` (#6913, @BenWilson2)
- [SQLAlchemy] Add support for configurable `Pool` class options for `SqlAlchemyStore` (#6883, @mingyu89)
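A sketch of the new `run_id IN (...)` search term with `search_runs`; the experiment ID and run IDs are placeholders:

```python
import mlflow

# Restrict a search to an explicit set of run IDs with the new
# `run_id IN (...)` search term; the experiment ID and run IDs are placeholders.
runs = mlflow.search_runs(
    experiment_ids=["1"],
    filter_string="attributes.run_id IN ('abc123def456', '0123456789ab')",
)
print(runs[["run_id", "status", "start_time"]])
```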
Bug fixes:
- [Pipelines] Enable Pipeline subprocess commands to create a new `SparkSession` if one does not exist (#6846, @prithvikannan)
- [Pipelines] Fix a rendering issue with `bool` column types in Step Card data profiles (#6907, @sunishsheth2009)
- [Pipelines] Add validation and an exception if required step files are missing (#7067, @mingyu89)
- [Pipelines] Change step configuration validation to only be performed during runtime execution of a step (#6967, @prithvikannan)
- [Tracking] Fix an infinite recursion bug when inferring the model schema in `mlflow.pyspark.ml.autolog()` (#6831, @harupy)
- [UI] Remove the browser error notification when failing to fetch artifacts (#7001, @kevingreer)
- [Models] Allow the `mlflow-skinny` package to serve as the base requirement in `MLmodel` requirements (#6974, @BenWilson2)
- [Models] Fix an issue with code path resolution for loading SparkML models (#6968, @dbczumar)
- [Models] Fix an issue with dependency inference in logging SparkML models (#6912, @BenWilson2)
- [Models] Fix an issue involving potential duplicate downloads for SparkML models (#6903, @serena-ruan)
- [Models] Add the missing `pos_label` to `sklearn.metrics.precision_recall_curve` in `mlflow.evaluate()` (#6854, @dbczumar)
- [SQLAlchemy] Fix a bug in `SqlAlchemyStore` where `set_tag()` updated the incorrect tags (#7027, @gabrielfu)
Documentation updates:
- [Models] Update details regarding the default `Keras` serialization format (#7022, @balvisio)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 1.29.0!
MLflow 1.29.0 includes several major features and improvements:
Features:
- [Pipelines] Improve performance and fidelity of dataset profiling in the scikit-learn regression Pipeline (#6792, @sunishsheth2009)
- [Pipelines] Add an mlflow pipelines get-artifact CLI for retrieving Pipeline artifacts (#6517, @prithvikannan)
- [Pipelines] Introduce an option for skipping dataset profiling to the scikit-learn regression Pipeline (#6456, @apurva-koti)
- [Pipelines / UI] Display an mlflow pipelines CLI command for reproducing a Pipeline run in the MLflow UI (#6376, @hubertzub-db)
- [Tracking] Automatically generate friendly names for Runs if not supplied by the user (#6736, @BenWilson2)
- [Tracking] Add `load_text()`, `load_image()`, and `load_dict()` fluent APIs for convenient artifact loading (#6475, @subramaniam02) (see the sketch after this list)
- [Tracking] Add creation_time and last_update_time attributes to the Experiment class (#6756, @subramaniam02)
- [Tracking] Add official MLflow Tracking Server Dockerfiles to the MLflow repository (#6731, @oojo12)
- [Tracking] Add searchExperiments API to Java client and deprecate listExperiments (#6561, @dbczumar)
- [Tracking] Add mlflow_search_experiments API to R client and deprecate mlflow_list_experiments (#6576, @dbczumar)
- [UI] Make URLs clickable in the MLflow Tracking UI (#6526, @marijncv)
- [UI] Introduce support for csv data preview within the artifact viewer pane (#6567, @nnethery)
- [Model Registry / Models] Introduce mlflow.models.add_libraries_to_model() API for adding libraries to an MLflow Model (#6586, @arjundc-db)
- [Models] Add model validation support to mlflow.evaluate() (#6582, @zhe-db, @jerrylian-db)
- [Models] Introduce sample_weights support to mlflow.evaluate() (#6806, @dbczumar)
- [Models] Add pos_label support to mlflow.evaluate() for identifying the positive class (#6696, @harupy)
- [Models] Make the metric name prefix and dataset info configurable in mlflow.evaluate() (#6593, @dbczumar)
- [Models] Add utility for validating the compatibility of a dataset with a model signature (#6494, @serena-ruan)
- [Models] Add predict_proba() support to the pyfunc representation of scikit-learn models (#6631, @skylarbpayne)
- [Models] Add support for Decimal type inference to MLflow Model schemas (#6600, @shitaoli-db)
- [Models] Add new CLI command for generating Dockerfiles for model serving (#6591, @anuarkaliyev23)
- [Scoring] Add /health endpoint to scoring server (#6574, @gabriel-milan)
- [Scoring] Support specifying a variant_name during Sagemaker deployment (#6486, @nfarley-soaren)
- [Scoring] Support specifying a data_capture_config during SageMaker deployment (#6423, @jonwiggins)
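A sketch of the new artifact-loading fluent APIs, assuming they are exposed under the `mlflow.artifacts` module; the artifact file names are placeholders:

```python
import mlflow

with mlflow.start_run() as run:
    mlflow.log_dict({"alpha": 0.1, "l1_ratio": 0.5}, "config.json")
    mlflow.log_text("notes about this run", "notes.txt")

# Load the artifacts back directly from their URIs, without a manual download step.
config = mlflow.artifacts.load_dict(f"runs:/{run.info.run_id}/config.json")
notes = mlflow.artifacts.load_text(f"runs:/{run.info.run_id}/notes.txt")
print(config, notes)
```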
Bug fixes:
- [Tracking] Make Run and Experiment deletion and restoration idempotent (#6641, @dbczumar)
- [UI] Fix an alignment bug affecting the Experiments list in the MLflow UI (#6569, @sunishsheth2009)
- [Models] Fix a regression in the directory path structure of logged Spark Models that occurred in MLflow 1.28.0 (#6683, @gwy1995)
- [Models] No longer reload the main module when loading model code (#6647, @Jooakim)
- [Artifacts] Fix an mlflow server compatibility issue with HDFS when running in --serve-artifacts mode (#6482, @shidianshifen)
- [Scoring] Fix an inference failure with 1-dimensional tensor inputs in TensorFlow and Keras (#6796, @LiamConnell)
Documentation updates:
- [Tracking] Mark the SearchExperiments API as stable (#6551, @dbczumar)
- [Tracking / Model Registry] Deprecate the ListExperiments, ListRegisteredModels, and list_run_infos() APIs (#6550, @dbczumar)
- [Scoring] Deprecate mlflow.sagemaker.deploy() in favor of SageMakerDeploymentClient.create() (#6651, @dbczumar)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 1.28.0!
MLflow 1.28.0 includes several major features and improvements:
Features:
- [Pipelines] Log the full Pipeline runtime configuration to MLflow Tracking during Pipeline execution (#6359, @jinzhang21)
- [Pipelines] Add `pipeline.yaml` configurations to specify the Model Registry backend used for model registration (#6284, @sunishsheth2009)
- [Pipelines] Support optionally skipping the `transform` step of the scikit-learn regression pipeline (#6362, @sunishsheth2009)
- [Pipelines] Add UI links to Runs and Models in Pipeline Step Cards on Databricks (#6294, @dbczumar)
- [Tracking] Introduce the `mlflow.search_experiments()` API for searching experiments by name and by tags (#6333, @WeichenXu123; #6227, #6172, #6154, @harupy) (see the sketch after this list)
- [Tracking] Increase the maximum parameter value length supported by File and SQL backends to 500 characters (#6358, @johnyNJ)
- [Tracking] Introduce an `--older-than` flag to `mlflow gc` for removing runs based on deletion time (#6354, @Jason-CKY)
- [Tracking] Add the `MLFLOW_SQLALCHEMYSTORE_POOL_RECYCLE` environment variable for recycling SQLAlchemy connections (#6344, @postrational)
- [UI] Display deeply nested runs in the Runs Table on the Experiment Page (#6065, @tospe)
- [UI] Add box plot visualization for metrics to the Compare Runs page (#6308, @ahlag)
- [UI] Display tags on the Compare Runs page (#6164, @CaioCavalcanti)
- [UI] Use scientific notation for axes when viewing metric plots in log scale (#6176, @RajezMariner)
- [UI] Add button to Metrics page for downloading metrics as CSV (#6048, @rafaelvp-db)
- [UI] Include NaN and +/- infinity values in plots on the Metrics page (#6422, @hubertzub-db)
- [Tracking / Model Registry] Introduce environment variables to control retry behavior and timeouts for REST API requests (#5745, @peterdhansen)
- [Tracking / Model Registry] Make `MlflowClient` importable as `mlflow.MlflowClient` (#6085, @subramaniam02)
- [Model Registry] Add support for searching registered models and model versions by tags (#6413, #6411, #6320, @WeichenXu123)
- [Model Registry] Add a `stage` parameter to `set_model_version_tag()` (#6185, @subramaniam02)
- [Model Registry] Add a `--registry-store-uri` flag to `mlflow server` for specifying the Model Registry backend URI (#6142, @Secbone)
- [Models] Improve performance of Spark Model logging on Databricks (#6282, @bbarnes52)
- [Models] Include Pandas Series names in inferred model schemas (#6361, @RynoXLI)
- [Scoring] Make `model_uri` optional in `mlflow models build-docker` to support building generic model serving images (#6302, @harupy)
- [R] Support logging of NA and NaN parameter values (#6263, @nathaneastwood)
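A minimal sketch of the new `mlflow.search_experiments()` API; the experiment name pattern and tag values are made up:

```python
import mlflow

# Filter experiments by a name pattern and a tag value; the names and tags
# below are placeholders.
experiments = mlflow.search_experiments(
    filter_string="name LIKE 'forecasting-%' AND tags.team = 'data-science'"
)
for exp in experiments:
    print(exp.experiment_id, exp.name)
```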
Bug fixes and documentation updates:
- [Pipelines] Improve scikit-learn regression pipeline latency by limiting dataset profiling to the first 100 columns (#6297, @sunishsheth2009)
- [Pipelines] Use `xdg-open` instead of `open` for viewing Pipeline results on Linux systems (#6326, @strangiato)
- [Pipelines] Fix a bug that skipped Step Card rendering in Jupyter Notebooks (#6378, @apurva-koti)
- [Tracking] Use the 401 HTTP response code in authorization failure REST API responses, instead of 500 (#6106, @balvisio)
- [Tracking] Correctly classify artifacts as files and directories when using Azure Blob Storage (#6237, @nerdinand)
- [Tracking] Fix a bug in the File backend that caused run metadata to be lost in the event of a failed write (#6388, @dbczumar)
- [Tracking] Adjust `mlflow.pyspark.ml.autolog()` to only log model signatures for supported input / output data types (#6365, @harupy)
- [Tracking] Adjust `mlflow.tensorflow.autolog()` to log TensorFlow early stopping callback info when `log_models=False` is specified (#6170, @WeichenXu123)
- [Tracking] Fix signature and input example logging errors in `mlflow.sklearn.autolog()` for models containing transformers (#6230, @dbczumar)
- [Tracking] Fix a failure in `mlflow gc` that occurred when removing a run whose artifacts had been previously deleted (#6165, @dbczumar)
- [Tracking] Add the missing `sqlparse` library to the MLflow Skinny client, which is required for search support (#6174, @dbczumar)
- [Tracking / Model Registry] Fix an `mlflow server` bug that rejected parameters and tags with empty string values (#6179, @dbczumar)
- [Model Registry] Fix a failure preventing model version schemas from being downloaded with `--serve-artifacts` enabled (#6355, @abbas123456)
- [Scoring] Patch the Java Model Server to support MLflow Models logged on recent versions of the Databricks Runtime (#6337, @dbczumar)
- [Scoring] Verify that either the deployment name or endpoint is specified when invoking the `mlflow deployments predict` CLI (#6323, @dbczumar)
- [Scoring] Properly encode datetime columns when performing batch inference with `mlflow.pyfunc.spark_udf()` (#6244, @harupy)
- [Projects] Fix an issue where local directory paths were misclassified as Git URIs when running Projects (#6218, @ElefHead)
- [R] Fix metric logging behavior for +/- infinity values (#6271, @nathaneastwood)
- [Docs] Move the Python API docs for `MlflowClient` from `mlflow.tracking` to `mlflow.client` (#6405, @dbczumar)
- [Docs] Document that MLflow Pipelines requires Make (#6216, @dbczumar)
- [Docs] Improve documentation for developing and testing MLflow JS changes in `CONTRIBUTING.rst` (#6330, @ahlag)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 1.27.0!
MLflow 1.27.0 includes several major features and improvements:
- [Pipelines] With MLflow 1.27.0, we are excited to announce the release of MLflow Pipelines, an opinionated framework for structuring MLOps workflows that simplifies and standardizes machine learning application development and productionization. MLflow Pipelines makes it easy for data scientists to follow best practices for creating production-ready ML deliverables, allowing them to focus on developing excellent models. MLflow Pipelines also enables ML engineers and DevOps teams to seamlessly deploy models to production and incorporate them into applications. To get started with MLflow Pipelines, check out the docs at https://mlflow.org/docs/latest/pipelines.html. (#6115)
- [UI] Introduce UI support for searching and comparing runs across multiple Experiments (#5971, @r3stl355)
More features:
- [Tracking] When using batch logging APIs, automatically split large sets of metrics, tags, and params into multiple requests (#6052, @nzw0301)
- [Tracking] When an Experiment is deleted, SQL-based backends also move the associated Runs to the "deleted" lifecycle stage (#6064, @AdityaIyengar27)
- [Tracking] Add support for logging single-element `ndarray` and tensor instances as metrics via the `mlflow.log_metric()` API (#5756, @ntakouris) (see the sketch after this list)
- [Models] Add support for `CatBoostRanker` models to the `mlflow.catboost` flavor (#6032, @danielgafni)
- [Models] Integrate SHAP's `KernelExplainer` with `mlflow.evaluate()`, enabling model explanations on categorical data (#6044, #5920, @WeichenXu123)
- [Models] Extend `mlflow.evaluate()` to automatically log the `score()` outputs of scikit-learn models as metrics (#5935, #5903, @WeichenXu123)
Bug fixes and documentation updates:
- [UI] Fix broken model links in the Runs table on the MLflow Experiment Page (#6014, @hctpbl)
- [Tracking/Installation] Require `sqlalchemy>=1.4.0` upon MLflow installation, which is necessary for usage of SQL-based MLflow Tracking backends (#6024, @sniafas)
- [Tracking] Fix a regression that caused `mlflow server` to reject `LogParam` API requests containing empty string values (#6031, @harupy)
- [Tracking] Fix a failure in scikit-learn autologging that occurred when `matplotlib` was not installed on the host system (#5995, @fa9r)
- [Tracking] Fix a failure in TensorFlow autologging that occurred when training models on `tf.data.Dataset` inputs (#6061, @dbczumar)
- [Artifacts] Address artifact download failures from SFTP locations that occurred due to mismanaged concurrency (#5840, @rsundqvist)
- [Models] Fix a bug where MLflow Models did not restore bundled code properly if multiple models use the same code module name (#5926, @BFAnas)
- [Models] Address an issue where `mlflow.sklearn.model()` did not properly restore bundled model code (#6037, @WeichenXu123)
- [Models] Fix a bug in `mlflow.evaluate()` that caused input data objects to be mutated when evaluating certain scikit-learn models (#6141, @dbczumar)
- [Models] Fix a failure in `mlflow.pyfunc.spark_udf` that occurred when the UDF was invoked on an empty RDD partition (#6063, @WeichenXu123)
- [Models] Fix a failure in `mlflow models build-docker` that occurred when `env-manager=local` was specified (#6046, @bneijt)
- [Projects] Improve robustness of the git repository check that occurs prior to MLflow Project execution (#6000, @dkapur17)
- [Projects] Address a failure that arose when running a Project that does not have a `master` branch (#5889, @harupy)
- [Docs] Correct several typos throughout the MLflow docs (#5959, @ryanrussell)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are happy to announce the availability of MLflow 1.26.1!
MLflow 1.26.1 is a patch release containing the following bug fixes:
- [Installation] Fix a compatibility issue with `protobuf >= 4.21.0` (#5945, @harupy)
- [Models] Fix `get_model_dependencies` behavior for `models:` URIs containing artifact paths (#5921, @harupy) (see the sketch after this list)
- [Models] Revert a problematic change to `artifacts` persistence in `mlflow.pyfunc.log_model()` that was introduced in MLflow 1.25.0 (#5891, @kyle-jarvis)
- [Models] Close associated image files when `EvaluationArtifact` outputs from `mlflow.evaluate()` are garbage collected (#5900, @WeichenXu123)
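For context on the `get_model_dependencies` fix above, a minimal usage sketch with a placeholder registered model name and version:

```python
import mlflow.pyfunc

# Resolve the pip requirements for a model referenced through a Model Registry
# URI; the registered model name and version are placeholders.
requirements_path = mlflow.pyfunc.get_model_dependencies("models:/my-model/1")
print(requirements_path)  # local path to the downloaded requirements.txt
```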
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.