mlflow.llama_index

mlflow.llama_index.autolog(log_traces: bool = True, disable: bool = False, silent: bool = False)[source]

Note

Experimental: This function may change or be removed in a future release without warning.

Enables (or disables) and configures autologging from LlamaIndex to MLflow. Currently, MLflow only supports autologging for tracing.

Parameters
  • log_traces – If True, traces are logged for LlamaIndex models during inference. If False, no traces are collected during inference. Defaults to True.

  • disable – If True, disables the LlamaIndex autologging integration. If False, enables the LlamaIndex autologging integration.

  • silent – If True, suppress all event logs and warnings from MLflow during LlamaIndex autologging. If False, show all events and warnings.
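Example usage (a minimal sketch; traces are recorded to the active MLflow experiment during inference):

    import mlflow

    # Turn on trace autologging for LlamaIndex.
    mlflow.llama_index.autolog()

    # ... run queries against an index or engine; each call produces a trace ...

    # Turn the integration off again when it is no longer needed.
    mlflow.llama_index.autolog(disable=True)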

mlflow.llama_index.get_default_conda_env()[source]
Returns

The default Conda environment for MLflow Models produced by calls to save_model() and log_model().

mlflow.llama_index.get_default_pip_requirements()[source]
Returns

A list of default pip requirements for MLflow Models produced by this flavor. Calls to save_model() and log_model() produce a pip environment that, at a minimum, contains these requirements.
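For example, the defaults can be inspected and extended before being passed to log_model() (a sketch; the extra package is hypothetical):

    import mlflow.llama_index

    # Baseline requirements pinned by the flavor.
    reqs = mlflow.llama_index.get_default_pip_requirements()

    # Append a project-specific package (hypothetical) and pass the result
    # to log_model(pip_requirements=...).
    reqs = reqs + ["qdrant-client"]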

mlflow.llama_index.load_model(model_uri, dst_path=None)[source]

Note

Experimental: This function may change or be removed in a future release without warning.

Load a LlamaIndex index/engine/workflow from a local file or a run.

Parameters
  • model_uri

    The location, in URI format, of the MLflow model. For example:

    • /Users/me/path/to/local/model

    • relative/path/to/local/model

    • s3://my_bucket/path/to/model

    • runs:/<mlflow_run_id>/run-relative/path/to/model

    • mlflow-artifacts:/path/to/model

    For more information about supported URI schemes, see Referencing Artifacts.

  • dst_path – The local filesystem path to utilize for downloading the model artifact. This directory must already exist if provided. If unspecified, a local output path will be created.

Returns

The loaded LlamaIndex object (an index, engine, or workflow, depending on what was saved).
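For example (a sketch; the run ID and artifact path are placeholders, assuming an Index object was logged):

    import mlflow

    # Load the native LlamaIndex object rather than a pyfunc wrapper.
    index = mlflow.llama_index.load_model("runs:/<mlflow_run_id>/index")

    # The returned index can be used as usual, e.g. wrapped as a query engine.
    query_engine = index.as_query_engine()
    print(query_engine.query("What is MLflow?"))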

mlflow.llama_index.log_model(llama_index_model, artifact_path: str, engine_type: Optional[str] = None, model_config: Optional[Dict[str, Any]] = None, code_paths: Optional[List[str]] = None, registered_model_name: Optional[str] = None, signature: Optional[mlflow.models.signature.ModelSignature] = None, input_example: Optional[Union[pandas.core.frame.DataFrame, numpy.ndarray, dict, list, csr_matrix, csc_matrix, str, bytes, tuple]] = None, await_registration_for=300, pip_requirements: Optional[Union[List[str], str]] = None, extra_pip_requirements: Optional[Union[List[str], str]] = None, conda_env=None, metadata: Optional[Dict[str, Any]] = None, **kwargs)[source]

Note

Experimental: This function may change or be removed in a future release without warning.

Log a LlamaIndex model as an MLflow artifact for the current run.

Attention

Saving a non-index object is only supported in the ‘Model-from-Code’ saving mode. Please refer to the Models From Code Guide for more information.

Note

When logging a model, MLflow will automatically save the state of the Settings object so that you can use the same settings at inference time. However, please note that some information in the Settings object will not be saved, including:

  • API keys, which are excluded to avoid key leakage.

  • Function objects, which are not serializable.

Parameters
  • llama_index_model

    A LlamaIndex object to be saved. Supported model types are:

    1. An Index object.

    2. An Engine object e.g. ChatEngine, QueryEngine, Retriever.

    3. A Workflow object.

    4. A string representing the path to a script that contains a LlamaIndex model definition of one of the above types.

  • artifact_path – The run-relative artifact path to which to log the model.

  • engine_type

    Required when saving an Index object to determine the inference interface for the index when loaded as a pyfunc model. This field is not required when saving other LlamaIndex objects. The supported values are as follows:

    • "chat": load the index as an instance of the LlamaIndex ChatEngine.

    • "query": load the index as an instance of the LlamaIndex QueryEngine.

    • "retriever": load the index as an instance of the LlamaIndex Retriever.

  • model_config

    The model configuration to apply when loading the model back with mlflow.pyfunc.load_model(). It will be applied in a different way depending on the model type and saving method:

    For in-memory Index objects saved directly, it is passed as keyword arguments when instantiating the LlamaIndex engine with the engine type specified at logging time.

    with mlflow.start_run() as run:
        model_info = mlflow.llama_index.log_model(
            index,
            artifact_path="index",
            engine_type="chat",
            model_config={"top_k": 10},
        )
    
    # When loading back, MLflow will call ``index.as_chat_engine(top_k=10)``
    engine = mlflow.pyfunc.load_model(model_info.model_uri)
    

    For other model types saved with the Models from Code method (https://www.mlflow.org/docs/latest/model/models-from-code.html), the config will be accessed via the mlflow.models.ModelConfig object within your model code.

    with mlflow.start_run() as run:
        model_info = mlflow.llama_index.log_model(
            "model.py",
            artifact_path="model",
            model_config={"qdrant_host": "localhost", "qdrant_port": 6333},
        )
    

    model.py:

    import mlflow
    from llama_index.vector_stores.qdrant import QdrantVectorStore
    import qdrant_client
    
    
    # The model configuration is accessible via the ModelConfig singleton
    model_config = mlflow.models.ModelConfig()
    qdrant_host = model_config.get("qdrant_host", "localhost")
    qdrant_port = model_config.get("qdrant_port", 6333)
    
    client = qdrant_client.QdrantClient(host=qdrant_host, port=qdrant_port)
    # "my_collection" is a placeholder collection name
    vector_store = QdrantVectorStore(collection_name="my_collection", client=client)
    
    # the rest of the model definition...
    
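    At inference time, the logged model_config values are supplied to the model code automatically; in recent MLflow versions they can also be overridden by passing model_config to mlflow.pyfunc.load_model().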

  • code_paths

    A list of local filesystem paths to Python file dependencies (or directories containing file dependencies). These files are prepended to the system path when the model is loaded. Files declared as dependencies for a given model should have relative imports declared from a common root path if multiple files are defined with import dependencies between them to avoid import errors when loading the model.

    For a detailed explanation of code_paths functionality, recommended usage patterns and limitations, see the code_paths usage guide.

  • registered_model_name – This argument may change or be removed in a future release without warning. If given, create a model version under registered_model_name, also creating a registered model if one with the given name does not exist.

  • signature – A Model Signature object that describes the input and output Schema of the model. The model signature can be inferred using infer_signature function of mlflow.models.signature.

  • input_example – one or several instances of valid model input. The input example is used as a hint of what data to feed the model. It will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented format, or a numpy array where the example will be serialized to json by converting it to a list. Bytes are base64-encoded. When the signature parameter is None, the input example is used to infer a model signature.

  • await_registration_for – Number of seconds to wait for the model version to finish being created and reach READY status. By default, the function waits for five minutes. Specify 0 or None to skip waiting.

  • pip_requirements – Either an iterable of pip requirement strings (e.g. ["llama_index", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes the environment this model should be run in. If None, a default list of requirements is inferred by mlflow.models.infer_pip_requirements() from the current software environment. If the requirement inference fails, it falls back to using get_default_pip_requirements(). Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model’s conda environment (conda.yaml) file.

  • extra_pip_requirements

    Either an iterable of pip requirement strings (e.g. ["pandas", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes additional pip requirements that are appended to a default set of pip requirements generated automatically based on the user’s current software environment. Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model’s conda environment (conda.yaml) file.

    Warning

    The following arguments can’t be specified at the same time:

    • conda_env

    • pip_requirements

    • extra_pip_requirements

    This example demonstrates how to specify pip requirements using pip_requirements and extra_pip_requirements.
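    A minimal sketch, assuming index is an in-memory Index object (the pinned version is a placeholder):

    import mlflow

    with mlflow.start_run():
        # Pin the exact environment the model should run in ...
        mlflow.llama_index.log_model(
            index,
            artifact_path="index",
            engine_type="query",
            pip_requirements=["llama_index==x.y.z"],
        )

        # ... or append extras to the automatically inferred defaults.
        mlflow.llama_index.log_model(
            index,
            artifact_path="index-extra",
            engine_type="query",
            extra_pip_requirements=["qdrant-client"],
        )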

  • conda_env

    Either a dictionary representation of a Conda environment or the path to a conda environment yaml file. If provided, this describes the environment this model should be run in. At a minimum, it should specify the dependencies contained in get_default_conda_env(). If None, a conda environment with pip requirements inferred by mlflow.models.infer_pip_requirements() is added to the model. If the requirement inference fails, it falls back to using get_default_pip_requirements(). pip requirements from conda_env are written to a pip requirements.txt file and the full conda environment is written to conda.yaml. The following is an example dictionary representation of a conda environment:

    {
        "name": "mlflow-env",
        "channels": ["conda-forge"],
        "dependencies": [
            "python=3.8.15",
            {
                "pip": [
                    "llama_index==x.y.z"
                ],
            },
        ],
    }
    

  • metadata – Custom metadata dictionary passed to the model and stored in the MLmodel file.

  • kwargs – Additional arguments for mlflow.models.model.Model.
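Example (a minimal end-to-end sketch, assuming llama-index is installed and an LLM/embedding backend such as OpenAI is configured for the default Settings):

    import mlflow
    from llama_index.core import Document, VectorStoreIndex

    # Build a toy index over a single document.
    index = VectorStoreIndex.from_documents(
        [Document(text="MLflow manages the machine learning lifecycle.")]
    )

    with mlflow.start_run():
        model_info = mlflow.llama_index.log_model(
            index,
            artifact_path="index",
            engine_type="query",
            input_example="What does MLflow do?",
        )

    # Load back as a pyfunc model; the index is exposed as a QueryEngine.
    model = mlflow.pyfunc.load_model(model_info.model_uri)
    print(model.predict("What does MLflow do?"))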

mlflow.llama_index.save_model(llama_index_model, path: str, engine_type: Optional[str] = None, model_config: Optional[Union[str, Dict[str, Any]]] = None, code_paths=None, mlflow_model: Optional[mlflow.models.model.Model] = None, signature: Optional[mlflow.models.signature.ModelSignature] = None, input_example: Optional[Union[pandas.core.frame.DataFrame, numpy.ndarray, dict, list, csr_matrix, csc_matrix, str, bytes, tuple]] = None, pip_requirements: Optional[Union[List[str], str]] = None, extra_pip_requirements: Optional[Union[List[str], str]] = None, conda_env=None, metadata: Optional[Dict[str, Any]] = None) → None[source]

Note

Experimental: This function may change or be removed in a future release without warning.

Save a LlamaIndex model to a path on the local file system.

Attention

Saving a non-index object is only supported in the ‘Model-from-Code’ saving mode. Please refer to the Models From Code Guide for more information.

Note

When logging a model, MLflow will automatically save the state of the Settings object so that you can use the same settings at inference time. However, please note that some information in the Settings object will not be saved, including:

  • API keys, which are excluded to avoid key leakage.

  • Function objects, which are not serializable.

Parameters
  • llama_index_model

    A LlamaIndex object to be saved. Supported model types are:

    1. An Index object.

    2. An Engine object e.g. ChatEngine, QueryEngine, Retriever.

    3. A Workflow object.

    4. A string representing the path to a script that contains a LlamaIndex model definition of one of the above types.

  • path – Local path where the model is to be saved.

  • engine_type

    Required when saving an Index object to determine the inference interface for the index when loaded as a pyfunc model. This field is not required when saving other LlamaIndex objects. The supported values are as follows:

    • "chat": load the index as an instance of the LlamaIndex ChatEngine.

    • "query": load the index as an instance of the LlamaIndex QueryEngine.

    • "retriever": load the index as an instance of the LlamaIndex Retriever.

  • model_config – The model configuration to apply when loading the model back with mlflow.pyfunc.load_model(). It will be applied in a different way depending on the model type and saving method. See the docstring of log_model() for more details and usage examples.

  • code_paths

    A list of local filesystem paths to Python file dependencies (or directories containing file dependencies). These files are prepended to the system path when the model is loaded. Files declared as dependencies for a given model should have relative imports declared from a common root path if multiple files are defined with import dependencies between them to avoid import errors when loading the model.

    For a detailed explanation of code_paths functionality, recommended usage patterns and limitations, see the code_paths usage guide.

  • mlflow_model – An MLflow model object that specifies the flavor that this model is being added to.

  • signature – A Model Signature object that describes the input and output Schema of the model. The model signature can be inferred using infer_signature function of mlflow.models.signature.

  • input_example – one or several instances of valid model input. The input example is used as a hint of what data to feed the model. It will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented format, or a numpy array where the example will be serialized to json by converting it to a list. Bytes are base64-encoded. When the signature parameter is None, the input example is used to infer a model signature.

  • pip_requirements – Either an iterable of pip requirement strings (e.g. ["llama_index", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes the environment this model should be run in. If None, a default list of requirements is inferred by mlflow.models.infer_pip_requirements() from the current software environment. If the requirement inference fails, it falls back to using get_default_pip_requirements(). Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model’s conda environment (conda.yaml) file.

  • extra_pip_requirements

    Either an iterable of pip requirement strings (e.g. ["pandas", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes additional pip requirements that are appended to a default set of pip requirements generated automatically based on the user’s current software environment. Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model’s conda environment (conda.yaml) file.

    Warning

    The following arguments can’t be specified at the same time:

    • conda_env

    • pip_requirements

    • extra_pip_requirements

    This example demonstrates how to specify pip requirements using pip_requirements and extra_pip_requirements.
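    A minimal save_model() sketch, mirroring the log_model() example above (assuming index is an in-memory Index object; the path and pinned version are placeholders, and extra_pip_requirements works analogously):

    import mlflow

    # Save to a local directory with explicitly pinned requirements.
    mlflow.llama_index.save_model(
        index,
        path="/tmp/llama_index_model",
        engine_type="query",
        pip_requirements=["llama_index==x.y.z"],
    )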

  • conda_env

    Either a dictionary representation of a Conda environment or the path to a conda environment yaml file. If provided, this describes the environment this model should be run in. At a minimum, it should specify the dependencies contained in get_default_conda_env(). If None, a conda environment with pip requirements inferred by mlflow.models.infer_pip_requirements() is added to the model. If the requirement inference fails, it falls back to using get_default_pip_requirements(). pip requirements from conda_env are written to a pip requirements.txt file and the full conda environment is written to conda.yaml. The following is an example dictionary representation of a conda environment:

    {
        "name": "mlflow-env",
        "channels": ["conda-forge"],
        "dependencies": [
            "python=3.8.15",
            {
                "pip": [
                    "llama_index==x.y.z"
                ],
            },
        ],
    }
    

  • metadata – Custom metadata dictionary passed to the model and stored in the MLmodel file.
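A model saved this way can be reloaded as the native LlamaIndex object with load_model() (a sketch, reusing the hypothetical path from the example above):

    import mlflow

    # Reload the native Index object from the local path used above.
    index = mlflow.llama_index.load_model("/tmp/llama_index_model")
    query_engine = index.as_query_engine()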