
AWS

Sodot MPC Vertex is a self-hosted, deployable service that represents an MPC party in MPC operations (such as keygen, sign, refresh, etc.). It takes care of storing the secret share as well as running the MPC operations themselves. It can be used seamlessly with all Sodot MPC SDKs. The Vertex exposes a REST API, documented here, that allows creating and managing secret shares easily.

Architecture

The self-hosted solution comes as a dedicated Terraform module which should be run on your organization's computing infrastructure. We recommend you first look at our example Terraform module and modify it according to your organization's needs. Each operating mode (with or without AWS Nitro) has its own shell script that is loaded onto the relevant machines as part of the installation process; these are covered later in this guide.

Requirements

Before starting the installation process, make sure you have the following installed:

  1. The terraform CLI.
  2. The aws CLI tool for AWS.
    1. Make sure you are logged in (e.g. via aws sso login); you can verify your setup with the commands shown below this list.
    2. Make sure your account has permissions to create resources (such as EC2 instances, RDS instances, etc.).
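
You can sanity-check both prerequisites from a shell; aws sts get-caller-identity should return the account you intend to deploy into:

terraform -version
aws sts get-caller-identity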

In addition, this guide assumes you're running the provided example Terraform module as the root module (although most steps are applicable when running it as a submodule as well).

Installation Process

The installation process is made of the following steps:

  1. Collecting installation parameters
  2. Cloning the Terraform module
  3. Obtaining a TLS certificate
  4. Configuring the Terraform module
  5. Applying the Terraform module to your cloud environment
  6. Setting the DNS Record for URL_ENDPOINT to point to the Vertex

1. Collecting Parameters

At this step you should collect the following pieces of information. The capitalized name in parentheses at the end of each item is how the rest of this guide will refer to that value; for convenience, you can also export these values as shell variables, as shown after the list.

  1. The endpoint URL that you wish to host your Vertex at (URL_ENDPOINT).
  2. The GitHub token which provides access to the Terraform module (GH_TOKEN).
  3. The DockerHub token which provides access to all relevant Helm charts and Docker images (DOCKERHUB_TOKEN).
  4. Your organization's Sodot Relay API key (RELAY_API_KEY) and URL (RELAY_URL).
  5. A secret admin token of your choosing, which will be used to provision new users of the Vertex (ADMIN_TOKEN).
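
If you find it convenient, you can export these values as shell variables so they are at hand while following the rest of this guide (the values below are placeholders):

export URL_ENDPOINT="vertex.example.com"
export GH_TOKEN="<your GitHub token>"
export DOCKERHUB_TOKEN="<your DockerHub token>"
export RELAY_API_KEY="<your Relay API key>"
export RELAY_URL="<your Relay URL>"
export ADMIN_TOKEN="<your chosen admin token>"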

2. Cloning the Terraform Module

Before starting the actual setup process, you will need to clone the Terraform module from the Sodot repository. At this point, you should have a GitHub token that provides access to the Terraform module (GH_TOKEN).

git clone https://x-access-token:<GH_TOKEN>@github.com/sodot-rs/vertex-aws-enclave-tf.git
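
Subsequent terraform commands in this guide are run from inside the cloned module's directory (the directory name follows the repository name):

cd vertex-aws-enclave-tf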

3. Obtaining a TLS Certificate

You should obtain a TLS certificate for the URL_ENDPOINT domain via AWS Certificate Manager. The certificate can either be requested or imported from an existing certificate. You will need the unique ARN of your certificate (TLS_CERT_ARN).
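
The certificate can be requested either from the ACM console or with the AWS CLI. For example (a sketch; the domain and region are placeholders), the following command prints the certificate ARN to use as TLS_CERT_ARN:

aws acm request-certificate \
  --domain-name <URL_ENDPOINT> \
  --validation-method DNS \
  --region <REGION>

Note that you will still need to complete the DNS validation requested by ACM before the certificate is issued.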

4. Configuring the Terraform Module

The variables for the Terraform module are located in variables.tf and come with sensible defaults where applicable.

You are required to pass the following variables when calling terraform apply (and likewise terraform destroy):

  • region - the AWS region at which the infrastructure will be provisioned.
  • dockerhub_token=<DOCKERHUB_TOKEN> - allows Terraform to install the relevant Helm chart and access the Vertex image.
  • tls_certificate_arn=<TLS_CERT_ARN> - the ARN for a TLS certificate, as described in the Obtaining TLS Certificate section.
  • vertex_admin_token=<ADMIN_TOKEN> - the secret admin token that will be used to set up all other users of the Vertex.
  • vertex_relay_api_key=<RELAY_API_KEY> - an API key for accessing the Sodot Relay Server.
  • vertex_relay_address=<RELAY_URL> - the URL of your organization's deployed Relay Server. While technically not required, the default value points to a public non-production Relay Server and should be replaced with your organization's own deployment.

The rest of the variables have sensible defaults, but it is recommended to go over them and customize them for your own needs.

  • vertex_image_tag - the version of the underlying Vertex used.
  • db_instance_type - which instance type to use for the RDS instance.
  • vertex_instance_type - which instance type to use for the Vertex.
  • min_size - the minimum number of Vertex instances for the Vertex Auto-Scaling Group.
  • max_size - the maximum number of Vertex instances for the Vertex Auto-Scaling Group.
  • desired_capacity - the desired number of Vertex instances for the Vertex Auto-Scaling Group.
  • nitro_enclave_enabled - whether to enable the Nitro secure enclave feature.
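
As an alternative to passing each value with -var flags (as in the Example section below), you can keep them in a terraform.tfvars file, which Terraform loads automatically. A minimal sketch with placeholder values:

cat > terraform.tfvars <<'EOF'
region               = "us-east-1"
dockerhub_token      = "<DOCKERHUB_TOKEN>"
tls_certificate_arn  = "<TLS_CERT_ARN>"
vertex_admin_token   = "<ADMIN_TOKEN>"
vertex_relay_api_key = "<RELAY_API_KEY>"
vertex_relay_address = "<RELAY_URL>"
EOF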

Nitro

When running with Nitro enabled (nitro_enclave_enabled=true), several other variables can be configured:

  • enclave_allocated_memory - the amount of memory allocated for the Nitro enclave. Both the parent instance and the enclave itself should have at least 4GB of memory.
  • enclave_allocated_vcpus - the number of vCPUs allocated for the Nitro enclave. The parent should have at least 2 vCPUs while the enclave should have at least 4.
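
For example, when Nitro is enabled, these can be appended to the terraform apply invocation shown in the Example section below; the values are placeholders, and the exact units expected for enclave_allocated_memory are defined in the module's variables.tf:

-var "enclave_allocated_memory=4096" \
-var "enclave_allocated_vcpus=4"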

OpenTelemetry

OpenTelemetry outputs can be enabled for the Vertex. By default, all of the Vertex's OTLP communication is directed to an internal ADOT collector, whose configuration is located in the adot_config.yaml file inside the Terraform module.

  • enable_opentelemetry - enable OpenTelemetry outputs from the Vertex. Directed to the local ADOT collector by default.
  • opentelemetry_endpoint - the endpoint the Vertex will send all OTLP communication to - defaults to the local ADOT collector.
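
For example, to enable OpenTelemetry output to the default local ADOT collector, add the following to the terraform apply invocation:

-var "enable_opentelemetry=true"

To send OTLP traffic to your own collector instead, also set opentelemetry_endpoint (the value below is a placeholder):

-var "opentelemetry_endpoint=http://my-collector.internal:4317"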

5. Applying the Terraform Module to Your Cloud Environment

Setting up the relevant infrastructure is performed by running terraform apply (after terraform init) along with the relevant arguments described above. This will provision several resources, including (but not limited to): an Auto-Scaling Group of EC2 instances for the Vertex to run on (with or without the Nitro secure enclave feature enabled), a Postgres DB for internal usage of the Vertex, new roles and permissions, and an additional KMS key to hold the Vertex encryption keys.

Each Vertex EC2 instance will be configured through its user data with one of the launch scripts under the scripts directory of the Terraform module: the docker script when running over plain Docker, or the enclave script when running inside a secure enclave (see the nitro_enclave_enabled flag). Each script configures the services the Vertex needs and its means of communication, and then runs the Vertex process itself.

Example

An example command will look like this:

terraform apply \
-var "region=us-east-1" \
-var "tls_certificate_arn=arn:aws:acm:XXX" \
-var "dockerhub_token=dckr_pat_xxxxxxxxxxxx" \
-var "vertex_relay_api_key=ABEiM0RVZneImaq7zN3u8g==" \
-var "vertex_relay_address=my-relay.XXXXX.com" \
-var "vertex_admin_token=XXXXXXXXXXXXXXXXXX" \
-var "nitro_enclave_enabled=true" \
-var "db_instance_type=db.t3.micro" \
-var "vertex_instance_type=c5.4xlarge" \
-var "min_size=2" \
-var "max_size=5" \
-var "desired_capacity=3" \

6. Setting the DNS Record for URL_ENDPOINT to Point to the Vertex

After running terraform apply, the module's outputs will include the address of an AWS Load Balancer, named "Public URL", that points to your Vertex Auto-Scaling Group.

You will need to create a DNS record for URL_ENDPOINT that points to that load balancer. You will then be able to communicate with your Vertex at https://URL_ENDPOINT.
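
For example, if the DNS zone for URL_ENDPOINT is hosted in Route 53, a CNAME record can be created with the AWS CLI; the hosted zone ID and load balancer address below are placeholders (if URL_ENDPOINT is a zone apex, use an alias record to the load balancer instead):

aws route53 change-resource-record-sets \
  --hosted-zone-id <HOSTED_ZONE_ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "<URL_ENDPOINT>",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "<LOAD_BALANCER_ADDRESS>" }]
      }
    }]
  }'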

After that, you can run:

curl https://URL_ENDPOINT/health -vvv

You should receive an empty 200 OK response from the Vertex, verifying that it is indeed up and running.