
Azure

Sodot MPC Vertex is a self-hosted, deployable service that represents an MPC party in MPC operations (such as keygen, sign, and refresh). It takes care of storing the secret share, as well as running the MPC operations themselves. It can be used seamlessly with all Sodot MPC SDKs. The Vertex exposes a REST API, documented here, that allows creating and managing secret shares easily.

Architecture

The self-hosted solution comes in two forms: a dedicated Terraform module and a Helm chart (which the Terraform module uses internally). Both should be run on your organization's computing infrastructure. We recommend you first look at our example Terraform module and modify it according to your organization's needs.
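
For orientation, using it as a submodule might look like the sketch below (the source path assumes a local clone from step 2; the variable names are the ones documented in step 5, and all values are placeholders):

module "sodot_vertex" {
  # Path to a local clone of the sodot-vertex-az-terraform repo (see step 2)
  source = "./sodot-vertex-az-terraform"

  location            = "westus"
  dockerhub_token     = var.dockerhub_token
  sodot_relay_api_key = var.sodot_relay_api_key
  sodot_relay_url     = "my-relay.XXXXX.com"
  admin_access_token  = var.admin_access_token

  # TLS certificate stored in Azure Key Vault (see step 3)
  tls_cert_key_vault_name = "keyvault-xx"
  tls_cert_rg_name        = "DefaultResourceGroup-XXX"
  tls_cert_uri            = "https://keyvault-xx.vault.azure.net/secrets/cert-xx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

  vertex_dns_address = "my-vertex.XXXXX.com"
}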

Requirements

Before starting the installation process, make sure you have the following installed:

  1. The terraform CLI.
  2. The az CLI tool for Azure.
    1. Make sure you are logged in (i.e., az login).
    2. Make sure your account has permissions to create resources (such as clusters, load balancers, etc.); see the quick check below.
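
You can quickly sanity-check both prerequisites:

# Verify the Terraform CLI is available
terraform -version

# Verify the az CLI is logged in and shows the expected subscription
az account show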

In addition, this guide assumes you're running the provided example Terraform module as the root module (although most steps are applicable when running it as a submodule as well).

Installation Process

The installation process is made of the following steps:

  1. Collecting installation parameters
  2. Cloning the Terraform module
  3. Obtaining a TLS certificate
  4. (Optional) Setting up Azure Confidential Containers
  5. Applying the Terraform module to your cloud environment
  6. Setting the DNS record for URL_ENDPOINT to point to the Vertex

1. Collecting Parameters

At this step you should collect the pieces of information listed below. The capitalized name in parentheses at the end of each bullet is how the rest of this guide refers to that value.

  1. The endpoint URL that you wish to host your Vertex at (URL_ENDPOINT).
  2. The GitHub token which provides access to the Terraform module (GH_TOKEN).
  3. The DockerHub token which provides access to all relevant Helm charts and Docker images (DOCKERHUB_TOKEN).
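
For convenience while following the rest of this guide, you can keep these values in shell variables (the names mirror the placeholders above; the values are yours):

# Placeholders used throughout this guide - replace with your own values
export URL_ENDPOINT="my-vertex.XXXXX.com"
export GH_TOKEN="<your GitHub token>"
export DOCKERHUB_TOKEN="<your DockerHub token>"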

2. Cloning the Terraform Module

Before starting the actual setup process, you will need to clone the Terraform module from the Sodot repository. At this point, you should have a GitHub token that provides access to the Terraform module (GH_TOKEN).

git clone https://x-access-token:<GH_TOKEN>@github.com/sodot-rs/sodot-vertex-az-terraform.git

3. Obtaining a TLS Certificate

You should obtain a TLS certificate for the URL_ENDPOINT domain. The certificate should be configured inside an instance of Azure Key Vault (KEY_VAULT_NAME). To use this certificate for the Vertex, make sure you pass the relevant variables when performing the terraform apply operation (more on that below).
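
As an illustration, if you already hold a PFX bundle for URL_ENDPOINT, importing it into the Key Vault could look like this (the certificate and file names are placeholders):

# Import an existing PFX certificate into the designated Key Vault
az keyvault certificate import \
  --vault-name <KEY_VAULT_NAME> \
  --name vertex-tls-cert \
  --file ./vertex-tls.pfx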

4. (Optional) Setting Up Azure Confidential Containers

In order to run the Vertex on a CoCo (Confidential Container) node (on an AKS cluster) we have to:

  1. Create a confidential, AMD SEV-enabled node pool on our cluster.
  2. Sign our Vertex Helm chart (with the templated values set) using the Azure CLI extension confcom.

Unfortunately, Confidential Containers on AMD SEV are not currently supported by the azurerm_kubernetes_cluster and azurerm_kubernetes_cluster_node_pool Terraform resources. For this reason, until they are fully supported and stable, we call the az CLI from the Terraform code, so you will have to be logged in to the CLI before running terraform apply.
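
On platforms where the confcom extension is supported, it can be installed with the standard Azure CLI extension mechanism:

# Install the confcom extension for the Azure CLI
az extension add --name confcom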

For macOS users, the confcom extension is not supported, so we provide a Dockerfile that can be built and used to run the required Terraform operations.

macOS Builder Image

This requires git clone-ing the Terraform repo.

git clone https://x-access-token:<GH_TOKEN>@github.com/sodot-rs/sodot-vertex-az-terraform.git

Then build the builder image:

cd coco_builder
# build the image
docker build . -t sodot-coco-terraform

Now prepare the builder with all the required credentials:

# Create a docker volume to store your az credentials
docker volume create azure_creds

# Create a docker volume to store your dockerhub credentials
docker volume create docker_creds

# First, login to az cli and store the creds
docker run -v azure_creds:/root/.azure -v docker_creds:/root/.docker -it sodot-coco-terraform az login

# Second, log in to docker.io using the Sodot DockerHub token - enter the token when prompted for a password
docker run -v azure_creds:/root/.azure -v docker_creds:/root/.docker -it sodot-coco-terraform docker login -u sodot

Finally, run the Terraform commands inside the builder image from the root directory of the repo:

# Run these commands in the terraform code directory
docker run -v azure_creds:/root/.azure -v docker_creds:/root/.docker -v .:/root/ -it sodot-coco-terraform terraform init

docker run -v azure_creds:/root/.azure -v docker_creds:/root/.docker -v .:/root/ -it sodot-coco-terraform terraform apply

# You can use the same command to perform other tf commands (e.g. destroy)

Enable the CoCo feature on your Azure Subscription

You must enable the Azure feature KataCcIsolationPreview on your subscription before running the Terraform commands.

This can be done by running:

az feature register --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"

Verify that the feature is now enabled by running:

az feature show --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"
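
Registration may take a few minutes. To watch only the registration state (standard az query syntax), run the following and wait until it prints Registered:

# Print only the registration state of the feature
az feature show --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview" --query properties.state -o tsv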

Finally, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:

az provider register --namespace "Microsoft.ContainerService"

5. Applying the Terraform Module to Your Cloud Environment

Setting up the relevant infrastructure is performed by running terraform apply (after terraform init). This will provision an AKS cluster, a Postgres DB for internal usage of the Vertex, and an additional key vault to hold the Vertex encryption keys. This will also provide the gateway with Key Vault Certificate User level access to the designated key vault (in order to use the certificate provided for TLS termination).
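
When running the example module as the root module, the flow is the standard Terraform one (the variables listed below are passed to the apply step):

# From the root directory of the cloned module
terraform init
terraform plan
terraform apply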

The variables for the Terraform module are located in variables.tf and are provided with sensible defaults where applicable. Users are required to pass the following variables when calling terraform apply (and, correspondingly, terraform destroy):

  • location - the location at which the cluster (and the rest of the resources) will be provisioned.
  • dockerhub_token=<DOCKERHUB_TOKEN> - allows Terraform to install the relevant Helm chart and access the Vertex image.
  • sodot_relay_api_key=<RELAY_API_KEY> - an API key for accessing the Sodot Relay Server.
  • sodot_relay_url=<RELAY_URL> - the URL for your organization's deployed Relay Server.
  • admin_access_token=<ADMIN_TOKEN> - the secret admin token that will be used to set up all other users of the Vertex.
  • tls_cert_key_vault_name=<KEY_VAULT_NAME> - configure the name of the key vault holding the certificate.
  • tls_cert_rg_name=<KEY_VAULT_RESOURCE_GROUP> - configure which Resource Group the key vault is associated with.
    • NOTE: This is not the Resource Group that the Vertex cluster will be created under.
  • tls_cert_uri=<CERTIFICATE_ID> - the ID (URI) of the certificate that will be used for TLS termination.
  • vertex_dns_address=<URL_ENDPOINT> - equal to URL_ENDPOINT.

The rest of the variables have sensible defaults, but it is recommended to go over them and customize them for your own needs.

  • coco_enabled=<boolean> - set to true if you wish to run the Vertex as an Azure Confidential Container (requires following the above step).
  • confidential_node_vm_size=<node pool VM size> - defaults to Standard_DC4as_cc_v5; determines the node pool VM size.

Backup Settings

If you wish to backup your key shares manually, consult the Backing Up Key Shares guide for more Terraform configurations and info.

The Terraform module will attempt to create a new Resource Group for all of its resources. To change this behavior, you can modify the following optional variable:

  • existing_resource_group_name - will reuse the specified existing Resource Group instead of creating a new one when provisioning resources.
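
Instead of passing everything on the command line with -var flags (as in the example command below), the same values can live in a terraform.tfvars file, which Terraform loads automatically:

# terraform.tfvars - placeholder values mirroring the example command below
location                  = "westus"
tls_cert_uri              = "https://keyvault-xx.vault.azure.net/secrets/cert-xx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
tls_cert_key_vault_name   = "keyvault-xx"
tls_cert_rg_name          = "DefaultResourceGroup-XXX"
dockerhub_token           = "dckr_pat_xxxxxxxxxxxx"
sodot_relay_api_key       = "ABEiM0RVZneImaq7zN3u8g=="
sodot_relay_url           = "my-relay.XXXXX.com"
admin_access_token        = "XXXXXXXXXXXXXXXXXX"
vertex_dns_address        = "my-vertex.XXXXX.com"
coco_enabled              = true
confidential_node_vm_size = "Standard_DC4as_cc_v5"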

An example command will look like this:

terraform apply \
-var "location=westus" \
-var "tls_cert_uri=https://keyvault-xx.vault.azure.net/secrets/cert-xx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
-var "tls_cert_key_vault_name=keyvault-xx" \
-var "tls_cert_rg_name=DefaultResourceGroup-XXX" \
-var "dockerhub_token=dckr_pat_xxxxxxxxxxxx" \
-var "sodot_relay_api_key=ABEiM0RVZneImaq7zN3u8g==" \
-var "sodot_relay_url=my-relay.XXXXX.com" \
-var "admin_access_token=XXXXXXXXXXXXXXXXXX" \
-var "vertex_dns_address=my-vertex.XXXXX.com" \
-var "coco_enabled=true" \
-var "confidential_node_vm_size=Standard_DC4as_cc_v5"

6. Setting the DNS Record for URL_ENDPOINT to Point to the Vertex

After running terraform apply, Terraform will output a command of the following format:

az aks get-credentials --resource-group "<DATA_HERE>" --name "<MORE_DATA_HERE>"

Run this command and then run:

kubectl get ingress

This gives you the IP address for your Vertex. Next, create an A record for URL_ENDPOINT that points to that IP. Your Vertex will then be reachable at https://URL_ENDPOINT.
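
If your DNS zone happens to be hosted on Azure DNS, the A record can be created with the az CLI (zone and resource group names are placeholders; any DNS provider works just as well):

# Create an A record for the Vertex; <INGRESS_IP> comes from kubectl get ingress
az network dns record-set a add-record \
  --resource-group <DNS_RESOURCE_GROUP> \
  --zone-name XXXXX.com \
  --record-set-name my-vertex \
  --ipv4-address <INGRESS_IP>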

Now, you can run:

curl https://URL_ENDPOINT/health -vvv

This should return an empty 200 OK response, confirming that the Vertex is indeed up and running.
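
If you prefer a check that is easy to script, curl can print just the status code (standard curl flags):

# Prints 200 when the Vertex is healthy
curl -s -o /dev/null -w "%{http_code}\n" https://URL_ENDPOINT/health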