
MLflow 1.14.0

MLflow maintainers · 2 min read

We are happy to announce the availability of MLflow 1.14.0!

In addition to bug and documentation fixes, MLflow 1.14.0 includes the following features and improvements:

  • MLflow's model inference APIs (mlflow.pyfunc.predict), built-in model serving tools (mlflow models serve), and model signatures now support tensor inputs. In particular, MLflow now provides built-in support for scoring PyTorch, TensorFlow, Keras, ONNX, and Gluon models with tensor inputs. For more information, see https://mlflow.org/docs/latest/models.html#deploy-mlflow-models (#3808, #3894, #4084, #4068, @wentinghu; #4041, @tomasatdatabricks; #4099, @arjundc-db)
  • Add new mlflow.shap.log_explainer, mlflow.shap.load_explainer APIs for logging and loading shap.Explainer instances (#3989, @vivekchettiar)
  • The MLflow Python client is now available with a reduced dependency set via the mlflow-skinny PyPI package (#4049, @eedeleon)
  • Add new RequestHeaderProvider plugin interface for passing custom request headers with REST API requests made by the MLflow Python client (#4042, @jimmyxu-db)
  • mlflow.keras.log_model now saves models in the TensorFlow SavedModel format by default instead of the older Keras H5 format (#4043, @harupy)
  • mlflow_log_model now supports logging MLeap models in R (#3819, @yitao-li)
  • Add mlflow.pytorch.log_state_dict, mlflow.pytorch.load_state_dict for logging and loading PyTorch state dicts (#3705, @shrinath-suresh)
  • mlflow gc can now garbage-collect artifacts stored in S3 (#3958, @sklingel)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.