Integrating Databricks with Anaconda Package Security Manager (Cloud) enables organizations to maintain security and compliance while leveraging the power of both platforms.

For data science teams working in regulated environments, this integration provides essential security controls over Python package usage. Your organization can enforce security policies and maintain consistent environments across development and production. This helps prevent the use of unauthorized or vulnerable packages while providing comprehensive audit trails of package usage across your Databricks workspaces.

This guide explains how to set up a secure, customized Python environment in Databricks using packages from Anaconda’s Package Security Manager (Cloud).

Prerequisites

Before starting, ensure you have:

  • Administrator access to an Anaconda organization
  • An Anaconda organization access token
  • Docker installed on your local machine
  • A Databricks workspace with admin privileges

Setup and configuration

Step 1: Create a channel

  1. Sign in to Anaconda.com.
  2. Click Channels.
  3. Click Add Channel.
  4. Name your channel databricks.
  5. Set the channel’s Type to Virtual.
  6. Open the Source dropdown and select main.
  7. Set the channel’s Access to Internal.
  8. Click Save.
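
Optionally, before moving on, you can confirm from your local machine that the new channel serves packages. This is a minimal sketch that assumes conda and the conda-token plugin are installed locally; replace <ORG_ID> and <TOKEN> with your organization ID and access token:

    # Authenticate your local conda installation with your organization token
    conda install -n base conda-token
    conda token set <TOKEN>

    # Search the new virtual channel for a package mirrored from main
    conda search --override-channels -c https://repo.anaconda.cloud/repo/<ORG_ID>/databricks numpy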
Step 2: Create and apply a policy

  1. Click Create under POLICIES.

  2. Name your policy databricks.

  3. Configure the policy filter as follows:

    Exclude package if:

    Platform Is not linux-64

    and

    Platform Is not noarch

    This filter excludes every platform except linux-64 and noarch, which are the only platforms Databricks clusters need.

  4. Click Save.

  5. Apply your policy to the databricks channel you created earlier. For more information, see Applying a Policy.

Step 3: Build a custom Docker image

To create a secure Python environment in Databricks, you’ll need to build a custom Docker image using Databricks Container Service. This image will contain your conda-based environment and can be used when launching your Databricks cluster.

For more information, see Customize containers with Databricks Container Service and GitHub - databricks/containers.

  1. Create a directory on your local machine called dcs-conda by running the following command:

    mkdir dcs-conda
    
  2. Change into your new dcs-conda directory and create a file named Dockerfile inside it:

    cd dcs-conda
    vi Dockerfile
    
  3. Add the following content to the Dockerfile (if you prefer not to hardcode your token, see the build-argument variant sketched after these steps):

    # Replace `<TOKEN>` with your organization access token
    # Replace `<ORG_ID>` with your organization ID — found in your organization's URL — `https://anaconda.com/app/organizations/<ORG_ID>`
    
    FROM ubuntu:22.04 AS builder
    RUN apt-get update && apt-get install --yes \
        wget \
        libdigest-sha-perl \
        bzip2 \
        gcc \
        python3-dev \
        libpq-dev \
        libcairo2-dev \
        libdbus-1-dev \
        libgirepository1.0-dev \
        libsnappy-dev \
        git \
        maven && \
        apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
    RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-py311_25.1.1-2-Linux-x86_64.sh -O miniconda.sh && \
        /bin/bash miniconda.sh -b -p /databricks/conda && \
        rm miniconda.sh
    FROM databricksruntime/minimal:15.4-LTS
    RUN apt-get update && apt-get install --yes git && \
        apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
    COPY --from=builder /databricks/conda /databricks/conda
    COPY env.yml /databricks/.conda-env-def/env.yml
    RUN /databricks/conda/bin/conda install -n base conda-token && \
        /databricks/conda/bin/conda token set <TOKEN> && \
        /databricks/conda/bin/conda config --system --prepend channels https://repo.anaconda.cloud/repo/<ORG_ID>/databricks && \
        /databricks/conda/bin/conda config --system --append channels https://repo.anaconda.cloud/repo/conda-forge && \
        /databricks/conda/bin/conda config --system --remove channels https://repo.anaconda.com/pkgs/main && \
        /databricks/conda/bin/conda config --system --remove channels https://repo.anaconda.com/pkgs/r
    RUN /databricks/conda/bin/conda env create --file /databricks/.conda-env-def/env.yml && \
        ln -s /databricks/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
    RUN /databricks/conda/bin/conda config --system --set channel_priority strict && \
        /databricks/conda/bin/conda config --system --set always_yes True
    RUN rm -f /root/.condarc
    ENV GIT_PYTHON_GIT_EXECUTABLE=/usr/bin/git
    ENV DEFAULT_DATABRICKS_ROOT_CONDA_ENV="dcs-conda"
    ENV DATABRICKS_ROOT_CONDA_ENV="dcs-conda"
    
  4. Create an env.yml file inside the dcs-conda directory:

    vi env.yml
    
  5. Add the following content to the env.yml file:

    # Replace `<ORG_ID>` with your organization ID — found in your organization's URL — `https://anaconda.com/app/organizations/<ORG_ID>`
    
    name: dcs-conda
    channels:
      - https://repo.anaconda.cloud/repo/<ORG_ID>/databricks
      - https://repo.anaconda.cloud/repo/conda-forge
    dependencies:
      - python=3.11
      - databricks-sdk
      - grpcio
      - grpcio-status
      - ipykernel
      - ipython
      - jedi
      - jinja2
      - matplotlib
      - nomkl
      - numpy
      - pandas
      - pip
      - pyarrow
      - pyccolo
      - setuptools
      - six
      - traitlets
      - wheel
    

    Please check the recommended package versions in the System environment section of the Databricks Runtime release notes and compatibility documentation.

  6. Build the Docker image:

    docker build -t dcs-conda:15.4-psm .
    
  7. Tag and push your custom image to a Docker registry that your Databricks cluster can pull from. The anaconda/ namespace below is an example; substitute your own registry and repository:

    docker tag dcs-conda:15.4-psm anaconda/dcs-conda:15.4-psm
    docker push anaconda/dcs-conda:15.4-psm
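
If you prefer not to hardcode your organization token in the Dockerfile, one option is to pass it in as a Docker build argument instead. The sketch below assumes you split the token step out of the Dockerfile's existing RUN instruction and leave the channel configuration commands unchanged; note that the token is still stored inside the resulting image by conda-token:

    # Dockerfile variant: declare a build argument and use it for the token
    ARG ANACONDA_TOKEN
    RUN /databricks/conda/bin/conda install -n base conda-token && \
        /databricks/conda/bin/conda token set "$ANACONDA_TOKEN"

    # Supply the token when building the image
    docker build --build-arg ANACONDA_TOKEN=<TOKEN> -t dcs-conda:15.4-psm .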
    
Step 4: Launch a cluster using Databricks Container Service

Clients must be authorized to access Databricks resources using a Databricks account with appropriate permissions. Without proper access, CLI commands and REST API calls will fail. Permissions can be configured by a workspace administrator.

Databricks recommends using OAuth for authorization instead of Personal Access Tokens (PATs). OAuth tokens refresh automatically and reduce security risks associated with token leaks or misuse. For more information, see Authorizing access to Databricks resources.
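
If you prefer to script the cluster creation instead of following the UI steps below, the same settings can be sent to the Clusters REST API. This is a minimal sketch; <DATABRICKS_HOST>, <ACCESS_TOKEN>, and the node type are placeholders that depend on your workspace, authorization method, and cloud provider:

    # Create a cluster that uses the custom Docker image via the Clusters API
    curl -X POST "https://<DATABRICKS_HOST>/api/2.1/clusters/create" \
      -H "Authorization: Bearer <ACCESS_TOKEN>" \
      -H "Content-Type: application/json" \
      -d '{
        "cluster_name": "dcs-conda-psm",
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 1,
        "spark_conf": {
          "spark.databricks.isv.product": "anaconda-psm",
          "spark.databricks.driverNfs.enabled": "false"
        },
        "docker_image": {
          "url": "anaconda/dcs-conda:15.4-psm"
        }
      }'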

  1. Open your Databricks workspace.

  2. Select Compute from the left-hand navigation, then click Create compute.

  3. On the New compute page, specify the Cluster Name.

  4. Under Performance, set the Databricks Runtime Version to a version that supports Databricks Container Service, for example 15.4 LTS.

    This version is under long-term support (LTS). For more information, see Databricks support lifecycles.


    Databricks Runtime for Machine Learning does not support Databricks Container Service.

  5. Open the Advanced options dropdown and select the Spark tab.

  6. Add the following Spark configurations. The spark.databricks.isv.product line must always be added, regardless of other settings:

    spark.databricks.isv.product anaconda-psm
    spark.databricks.driverNfs.enabled false

    To access volumes on Databricks Container Service, add the following configuration to the compute’s Spark config field as well: spark.databricks.unityCatalog.volumes.enabled true.

  7. Select the Docker tab.

  8. Select the Use your own Docker container checkbox.

  9. Enter your custom Docker image in the Docker Image URL field.

  10. Open the Authentication dropdown and select an authentication method.

  11. Click Create compute.

Step 5: Create a notebook and connect it to your cluster

  1. Click New in the top-left corner, then click Notebook.

  2. Specify a name for the notebook.

  3. Click Connect, then select your cluster from the resource list.

Step 6: Verify your conda installation

  1. In your notebook, run one of the following commands to check that conda is installed:

    !conda --help
    
    %sh conda --help
    

    Both commands run shell code from the notebook. !conda --help uses IPython's shell escape to run a single command, while %sh conda --help runs the cell contents as a shell script, which is useful for multi-line scripts but might not have the same environment or PATH.

  2. In your notebook, run the following command to check your source channels:

    !conda config --show channels
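
    Based on the channel configuration baked into the Dockerfile, the output should list your organization channel ahead of conda-forge, along these lines (exact contents can vary with the base configuration):

    channels:
      - https://repo.anaconda.cloud/repo/<ORG_ID>/databricks
      - https://repo.anaconda.cloud/repo/conda-forge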
    
Step 7: Install MLflow from your Anaconda organization channel

MLflow is available through your Anaconda organization channel for use in your Databricks environment.

  1. In your notebook, install MLflow from your Anaconda organization channel:

    !conda install mlflow
    

    This command installs MLflow and all of its dependencies from your Package Security Manager channel.

  2. In your notebook, verify the installation:

    import mlflow
    print("MLflow: " + mlflow.__version__)