Data stores#
A data store defines a storage provider (e.g. file system, S3, Azure blob, Iguazio v3io, etc.).
Storage credentials and parameters#
Data stores might require connection credentials, which can be provided through environment variables or project/job context secrets. The exact credentials depend on the type of the data store and are listed in the following sections. Each parameter can be provided either as an environment variable or as a project secret whose key matches the parameter name.
MLRun jobs that are executed remotely run in independent pods, with their own environment. Setting an environment variable in the development environment (for example, Jupyter) therefore has no effect on the executing pods. Before executing jobs that require access to storage credentials, provide the credentials by assigning environment variables to the MLRun runtime itself, assigning secrets to it, or placing the variables in project secrets.
You can also use data store profiles to provide credentials for Redis.
For example, running a function locally:
import os

# Access an object in AWS S3, in the "input-data" bucket
source_url = "s3://input-data/input_data.csv"
os.environ["AWS_ACCESS_KEY_ID"] = "<access key ID>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<access key>"

# Execute a function that reads from the object pointed at by source_url.
# When running locally, the function can use the local environment variables.
local_run = func.run(name="aws_test", inputs={"source_url": source_url}, local=True)
Running the same function remotely:
# Executing the function remotely using env variables (not recommended!)
func.set_env("AWS_ACCESS_KEY_ID", "<access key ID>").set_env("AWS_SECRET_ACCESS_KEY", "<access key>")
remote_run = func.run(name='aws_test', inputs={'source_url': source_url})
# Using project-secrets (recommended) - project secrets are automatically mounted to project functions
secrets = {"AWS_ACCESS_KEY_ID": "<access key ID>", "AWS_SECRET_ACCESS_KEY": "<access key>"}
db = mlrun.get_run_db()
db.create_project_secrets(project=project_name, provider="kubernetes", secrets=secrets)
remote_run = func.run(name='aws_test', inputs={'source_url': source_url})
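For reference, a hedged sketch of what the handler behind aws_test might look like. The handler body is illustrative only; it relies on MLRun passing each item in inputs to the handler as a DataItem:
import mlrun

def aws_test(context, source_url: mlrun.DataItem):
    # MLRun resolves the s3:// URL using the credentials configured above
    # and hands the object to the handler as a DataItem.
    df = source_url.as_df()  # load the CSV into a pandas DataFrame
    context.logger.info(f"loaded {len(df)} rows from {source_url.url}")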
The following sections list the credentials and configuration parameters applicable to each storage type.
v3io#
When running in an Iguazio system, MLRun automatically configures the executed functions to use v3io storage, and passes the needed parameters (such as the access key) for authentication. Refer to the auto-mount section for more details on this process.
In some cases, the v3io configuration needs to be overridden. The following parameters can be configured:
- V3IO_API — URL pointing to the v3io web-API service.
- V3IO_ACCESS_KEY — access key used to authenticate with the web API.
- V3IO_USERNAME — the username used to authenticate with v3io. While not strictly required when using an access key to authenticate, it is used in several use cases, such as resolving paths to the home directory.
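For example, a minimal sketch of overriding these values for a specific function, reusing the func object from the earlier example (the web-API URL, access key, and username below are placeholders):
# Override the default v3io configuration for one function.
# The web-API URL, access key, and username are placeholders.
func.set_env("V3IO_API", "https://webapi.default-tenant.app.example-cluster.com")
func.set_env("V3IO_ACCESS_KEY", "<v3io access key>")
func.set_env("V3IO_USERNAME", "admin")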
Azure Blob storage#
Azure Blob storage can use several authentication methods. Each requires a different set of parameters, as listed here:
| Authentication method | Parameters |
|---|---|
| Connection string | AZURE_STORAGE_CONNECTION_STRING |
| SAS token | AZURE_STORAGE_ACCOUNT_NAME, AZURE_STORAGE_SAS_TOKEN |
| Account key | AZURE_STORAGE_ACCOUNT_NAME, AZURE_STORAGE_ACCOUNT_KEY |
| Service principal with a client secret | AZURE_STORAGE_ACCOUNT_NAME, AZURE_STORAGE_CLIENT_ID, AZURE_STORAGE_CLIENT_SECRET, AZURE_STORAGE_TENANT_ID |
Note
The AZURE_STORAGE_CONNECTION_STRING configuration uses the BlobServiceClient to access objects. This has limited functionality and cannot be used to access Azure Datalake storage objects. In this case, use one of the other authentication methods, which use the fsspec mechanism.
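For example, a minimal sketch of providing account-key credentials as project secrets and reading an object through the az:// scheme, reusing func and project_name from the earlier example (the account name, key, container, and path are placeholders):
# Store Azure credentials as project secrets so that remote pods can authenticate.
# The account name, key, container, and path are placeholders.
secrets = {
    "AZURE_STORAGE_ACCOUNT_NAME": "<storage account name>",
    "AZURE_STORAGE_ACCOUNT_KEY": "<storage account key>",
}
mlrun.get_run_db().create_project_secrets(project=project_name, provider="kubernetes", secrets=secrets)

source_url = "az://<container>/path/to/input_data.csv"
remote_run = func.run(name="azure_test", inputs={"source_url": source_url})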
Google cloud storage#
- GOOGLE_APPLICATION_CREDENTIALS — path to the application credentials to use (in the form of a JSON file). This can be used if the file is located on shared storage, accessible to pods executing MLRun jobs.
- GCP_CREDENTIALS — when the credentials file cannot be mounted to the pod, this secret or environment variable may contain the contents of the file. If configured in the function pod, MLRun dumps its contents to a temporary file and points GOOGLE_APPLICATION_CREDENTIALS at it. An exception is BigQuerySource, which passes the contents of GCP_CREDENTIALS directly to the query engine.
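For example, a minimal sketch of passing the credentials-file contents through a project secret and reading from a gs:// URL, reusing func and project_name from the earlier example (the local credentials file name and the bucket are placeholders):
# Pass the service-account JSON contents through a project secret; on the pod,
# MLRun writes it to a temporary file and points GOOGLE_APPLICATION_CREDENTIALS at it.
with open("service-account-credentials.json") as f:  # placeholder local file
    gcp_credentials = f.read()

mlrun.get_run_db().create_project_secrets(
    project=project_name, provider="kubernetes", secrets={"GCP_CREDENTIALS": gcp_credentials}
)

source_url = "gs://<bucket>/path/to/input_data.csv"
remote_run = func.run(name="gcs_test", inputs={"source_url": source_url})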
Databricks file system#
Note
Not supported by the spark and remote-spark runtimes.
- DATABRICKS_HOST — hostname, in the format: https://abc-d1e2345f-a6b2.cloud.databricks.com
- DATABRICKS_TOKEN — Databricks access token. Perform Databricks personal access token authentication.
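For example, a minimal sketch of providing these parameters as project secrets and reading a file from DBFS, reusing func and project_name from the earlier example (the workspace URL, token, and path are placeholders):
# Provide Databricks credentials as project secrets; all values are placeholders.
secrets = {
    "DATABRICKS_HOST": "https://abc-d1e2345f-a6b2.cloud.databricks.com",
    "DATABRICKS_TOKEN": "<Databricks personal access token>",
}
mlrun.get_run_db().create_project_secrets(project=project_name, provider="kubernetes", secrets=secrets)

source_url = "dbfs:///path/to/input_data.csv"
remote_run = func.run(name="dbfs_test", inputs={"source_url": source_url})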
S3#
- AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY — access key parameters.
- S3_ENDPOINT_URL — the S3 endpoint to use. If not specified, it defaults to AWS. For example, to access a storage bucket in Wasabi storage, use S3_ENDPOINT_URL = "https://s3.wasabisys.com".
- MLRUN_AWS_ROLE_ARN — IAM role to assume. Connect to AWS using the secret key and access key, and assume the role whose ARN is provided. The ARN must be of the format arn:aws:iam::<account-of-role-to-assume>:role/<name-of-role>.
- AWS_PROFILE — name of credentials profile from a local AWS credentials file. When using a profile, the authentication secrets (if defined) are ignored, and credentials are retrieved from the file. This option should be used for local development where AWS credentials already exist (created by the aws CLI, for example).
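For example, a minimal sketch of pointing MLRun at an S3-compatible endpoint such as Wasabi, reusing func and project_name from the earlier example (the keys and bucket name are placeholders):
# Use an S3-compatible endpoint (Wasabi) instead of AWS; keys and bucket are placeholders.
secrets = {
    "AWS_ACCESS_KEY_ID": "<access key ID>",
    "AWS_SECRET_ACCESS_KEY": "<access key>",
    "S3_ENDPOINT_URL": "https://s3.wasabisys.com",
}
mlrun.get_run_db().create_project_secrets(project=project_name, provider="kubernetes", secrets=secrets)

source_url = "s3://<bucket>/path/to/input_data.csv"
remote_run = func.run(name="wasabi_test", inputs={"source_url": source_url})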
Using data store profiles#
You can use a data store profile to manage datastore credentials. A data store profile holds all the information required to address an external data source, including credentials. You can create multiple profiles for one datasource, for example, two different Redis data stores with different credentials. Targets, sources, and artifacts can all use the data store profile by using the ds://<profile-name> convention.
After you create a profile object, you make it available on remote pods by calling project.register_datastore_profile.
Create a data store profile in the context of a project. For example, to create a Redis datastore profile:
1. Create the profile, for example:
from mlrun.datastore.datastore_profile import DatastoreProfileRedis
profile = DatastoreProfileRedis(name="test_profile", endpoint_url="redis://11.22.33.44:6379", username="user", password="password")
The username and password parameters are optional.
2. Register it within the project:
project.register_datastore_profile(profile)
3. Use the profile by specifying the 'ds' URI scheme. For example:
RedisNoSqlTarget(path="ds://test_profile/a/b")
If you want to use a profile from a different project, you can specify it explicitly in the URI using the format:
RedisNoSqlTarget(path="ds://another_project@test_profile")
To access a profile from the client/SDK, register the profile locally by calling register_temporary_client_datastore_profile() with a profile object.
You can also choose to retrieve the public information of an already registered profile by calling project.get_datastore_profile() and then adding the private credentials before registering it locally:
redis_profile = project.get_datastore_profile("my_profile")
local_redis_profile = DatastoreProfileRedis(name=redis_profile.name, endpoint_url=redis_profile.endpoint_url, username="mylocaluser", password="mylocalpassword")
register_temporary_client_datastore_profile(local_redis_profile)
See also:
- list_datastore_profiles
- register_temporary_client_datastore_profile
The methods get_datastore_profile() and list_datastore_profiles() only return public information about the profiles. Access to private attributes is restricted to applications running in Kubernetes pods.
Note
This feature currently only supports Redis.