GCP
Sodot MPC Vertex is a self-hosted deployable service that represents an MPC party for MPC operations (e.g. keygen, sign or refresh). It takes care of storing the secret shares, as well as running the MPC operations themselves. It can be used seamlessly with all Sodot MPC SDKs. The Vertex exposes a REST API, documented here, that allows creating and managing secret shares easily.
Architecture
The self-hosted solution comes in two forms: a dedicated Terraform module and a Helm chart (which the Terraform module uses internally). Both should be run on your organization's computing infrastructure. We recommend you first look at our example Terraform module and modify it according to your organization's needs.
Requirements
Before starting the installation process, make sure you have done the following (a quick verification sketch follows the list):
- Install the terraform CLI.
- Install the gcloud CLI tool for GCP.
  - Make sure you are logged in (i.e. gcloud auth application-default login).
  - Make sure your account has permissions to create resources (e.g. clusters or load balancers).
- Set up a Relay Server (instructions).
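A minimal way to verify these prerequisites from a terminal is sketched below; the project ID shown is a placeholder you should replace with your own.
# Confirm the CLIs are installed.
terraform version
gcloud --version
# Log in and confirm you can see the target project (placeholder project ID).
gcloud auth application-default login
gcloud projects describe my-project-id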
This guide assumes you're running the provided example Terraform module as the root module (although most steps are applicable when running it as a submodule as well).
Installation Process
The installation process consists of the following steps:
- Collecting the installation parameters
- Cloning the Terraform module
- Obtaining a TLS certificate
- Applying the Terraform module to your cloud environment
- Setting the DNS record for URL_ENDPOINT to point to the Vertex
The steps below relate to the provided Terraform module as is. Customizing the Helm chart deployment directly is possible as explained in the Customizing Helm Chart Values section.
1. Collecting Parameters
In this step you should collect the following pieces of information. At the end of each bullet, we put in parentheses a capitalized name that will be used to refer to that value throughout the rest of this guide.
- The endpoint URL that you wish to host your Vertex at (URL_ENDPOINT).
- The GitHub token which provides access to the Terraform module (GH_TOKEN).
- The DockerHub token which provides access to all relevant Helm charts and Docker images (DOCKERHUB_TOKEN).
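It can be convenient to keep these values in environment variables while following the rest of the guide; the values below are placeholders for illustration only.
# Placeholder values - substitute your own.
export URL_ENDPOINT="my-vertex.example.com"
export GH_TOKEN="ghp_xxxxxxxxxxxx"
export DOCKERHUB_TOKEN="dckr_pat_xxxxxxxxxxxx"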
2. Cloning the Terraform Module
Before starting the actual setup process, you will need to clone the Terraform module from the Sodot repository.
At this point, you should have a GitHub token that provides access to the Terraform module (GH_TOKEN).
git clone https://x-access-token:<GH_TOKEN>@github.com/sodot-rs/sodot-vertex-gcp-terraform.git
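Subsequent terraform commands in this guide are assumed to be run from inside the cloned module directory (the directory name follows from the repository URL):
cd sodot-vertex-gcp-terraform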
3. Obtaining a TLS Certificate
You should obtain a TLS certificate for the URL_ENDPOINT domain.
The certificate has to be loaded into the GCP Certificate Manager or created by GCP, as a Classic Certificate. TLS_CERT_NAME is the name of the certificate in the GCP Certificate Manager.
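As a sketch, a Classic Certificate can be created with the gcloud CLI, either as a Google-managed certificate or from an existing certificate and key; the names below are placeholders, and the chosen name becomes your TLS_CERT_NAME.
# Google-managed classic certificate for the Vertex domain (placeholder names).
gcloud compute ssl-certificates create my-cert-name \
    --domains=my-vertex.example.com --global
# Or upload an existing certificate and private key instead:
gcloud compute ssl-certificates create my-cert-name \
    --certificate=cert.pem --private-key=key.pem --global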
4. Applying the Terraform Module to Your Cloud Environment
Setting up the relevant infrastructure is performed by running terraform init followed by terraform apply.
This will provision a GKE cluster, a Postgres DB for internal usage by the Vertex, and an additional KMS Keyring to hold the DB encryption keys.
The variables for the Terraform module are located in variables.tf and are provided with sensible defaults where applicable.
Users are required to pass the following variables when calling terraform apply:
- location=<LOCATION> - the location to which the cluster and the rest of the resources will be provisioned.
- project_id=<PROJECT_ID> - the GCP project ID under which resources are deployed.
- dockerhub_token=<DOCKERHUB_TOKEN> - provides Terraform with permissions to pull the Helm chart and the Vertex Docker image.
- admin_access_token=<ADMIN_TOKEN> - the secret admin token that will be used to set up all other users of the Vertex.
- relay_address=<RELAY_URL> - the URL for your organization's deployed Relay Server.
- relay_api_key=<RELAY_API_KEY> - an API key for accessing the Sodot Relay Server.
- vertex_dns_address=<URL_ENDPOINT> - equal to URL_ENDPOINT.
- tls_cert_name=<TLS_CERT_NAME> - the name of the TLS certificate in the GCP Certificate Manager.
The rest of the variables have sensible defaults, but it is recommended to go over them and customize them for your own needs if necessary.
For example, the resulting setup command may look like this:
terraform apply \
-var "location=us-central1" \
-var "project_id=my-project-id" \
-var "dockerhub_token=dckr_pat_xxxxxxxxxxxx" \
-var "relay_api_key=ABEiM0RVZneImaq7zN3u8g==" \
-var "relay_address=my-relay.XXXXX.com" \
-var "admin_access_token=XXXXXXXXXXXXXXXXXX" \
-var "vertex_dns_address=my-vertex.XXXXX.com" \
-var "tls_cert_name=my-cert-name"
5. Setting the DNS Record for URL_ENDPOINT to Point to the Vertex
After running terraform apply, the output will include an IP address under the name vertex_public_ip_address.
Copy it and create an A record for URL_ENDPOINT that points to that IP address.
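If the relevant DNS zone happens to be managed in Google Cloud DNS, a sketch of this step using placeholder zone and hostname values could look as follows; any DNS provider works as long as the A record points at the output IP.
# Read the IP address from the Terraform output.
terraform output vertex_public_ip_address
# Create the A record in a Cloud DNS managed zone (placeholder zone and hostname).
gcloud dns record-sets create my-vertex.example.com. \
    --zone=my-dns-zone --type=A --ttl=300 --rrdatas=<vertex_public_ip_address>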
Finally, test the deployment by running the following command:
curl https://URL_ENDPOINT/health -vvv
The request should result in an empty 200 OK response from the Vertex, confirming it is indeed up and running.
Customizing Helm Chart Values
While the Terraform module is the recommended way to deploy the Vertex, the underlying Helm chart can be customized directly for finer-grained control over the deployment. The Helm chart is available in Sodot's Helm repository and can be downloaded using the following commands:
helm registry login -u sodot registry.hub.docker.com -p <DOCKERHUB_TOKEN>
helm pull oci://registry.hub.docker.com/sodot/vertex-gcp
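The pull produces a packaged chart archive in the current directory; a minimal way to unpack it for editing (the exact file name includes the chart version) is:
# Unpack the pulled chart archive so its values.yaml and templates can be edited.
tar -xzf vertex-gcp-*.tgz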
After extracting the chart files, modify them according to your needs. Then run the following command to install the chart from the extracted directory:
helm install sodot-vertex ./vertex-gcp -f values.yaml
Example
You can use existing cert-manager and external-dns installations deployed on your cluster to manage the TLS certificate and DNS record for the Vertex, respectively, by making the following modifications to the values.yaml file (a sketch follows the list):
- For cert-manager, set cert_manager.use = true and cert_manager.issuer = <name of an Issuer deployed on the cluster>.
- For external-dns, set the external_dns_hostname value to the desired hostname for the Vertex (i.e. URL_ENDPOINT).
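A minimal sketch of such a values.yaml override follows, assuming an Issuer named letsencrypt-prod already exists on the cluster; the exact key layout should be checked against the chart's own values.yaml.
# Placeholder overrides for existing cert-manager and external-dns installations.
cat > values.yaml <<'EOF'
cert_manager:
  use: true
  issuer: letsencrypt-prod
external_dns_hostname: my-vertex.example.com
EOF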
All configurable values, including the ones above, are documented in the values.yaml file.