Published on 00/00/0000
Last updated on 00/00/0000
In today's post, we'll be discussing multi-datacenter Vault clusters that span multiple regions. Most enterprises follow different replication strategies to provide scalable and highly available services. One common replication/disaster recovery strategy for distributed applications is to have a hot-standby replica of the very same deployment already set up in a secondary data center. When a catastrophic event occurs in the primary data center, all traffic is redirected to the secondary data center. Hot standby is a method of redundancy in which the primary and secondary (that is, the backup) instances run simultaneously. All data is mirrored to the secondary instance in real time (using transactional updates), so both instances contain identical data. Usually, an edge server (load balancer) is deployed in front of the two data centers, so clients don't know which instance is serving their requests.

Vault introduced the Raft storage backend in version 1.2, which makes it possible to create highly available (multi-node) Vault clusters without using an external storage backend. This simplifies the setup of an HA/replicated Vault cluster and removes the burden of maintaining a storage backend: in the cloud, for instance, you no longer depend on provider services like GCS, S3, etc.

We always planned to create a multi-datacenter setup for Vault, and this request also comes up frequently in the Bank-Vaults community. With Raft storage available, we decided to give it a go, and added Vault replication across multiple datacenters to our Bank-Vaults Kubernetes operator.
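To make the Raft storage backend concrete, here is a minimal sketch of the `storage "raft"` stanza in a Vault server configuration. The paths, node name, and addresses below are illustrative placeholders, not values from this setup:

```shell
# Minimal Vault server configuration using the integrated Raft storage
# backend. Paths, node_id, and addresses are illustrative placeholders.
cat > /tmp/vault-raft.hcl <<'EOF'
storage "raft" {
  path    = "/vault/data"     # local directory for the Raft log and snapshots
  node_id = "vault-primary-0" # must be unique per node in the cluster
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true          # demo only; use TLS in production
}

# Raft replication between nodes happens over the cluster port (8201)
api_addr     = "http://vault-primary-0:8200"
cluster_addr = "http://vault-primary-0:8201"
EOF
grep -q 'storage "raft"' /tmp/vault-raft.hcl && echo "config written"
```

Each node persists its own copy of the data locally, which is exactly what removes the external storage dependency.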
Bank-Vaults has automated the creation of Raft-based Vault clusters since the 0.6.0 release. The work done in PR 820 added support for running Vault across multiple Kubernetes clusters using the Banzai Cloud Vault operator, Bank-Vaults, and is available as of the 0.8.0 release. It introduces CRD changes that let a Vault instance automatically join the Raft cluster running in another Kubernetes cluster, and stores the unseal keys in AWS S3, encrypted with KMS; the operator then distributes the unseal keys to multiple buckets that span different AWS regions. This feature is supported on Google Cloud as well; there, however, the operator does not have to distribute the unseal keys, as GCS and KMS support multi-region/global spanning. If you are interested in running a multi-region Vault cluster on Azure or Alibaba, let us know.
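For reference, here is a hedged sketch of what the unseal-key configuration on the Vault custom resource can look like. The bucket, prefix, region, and KMS key values below are placeholders, and the exact field names should be verified against the Bank-Vaults CRD documentation:

```shell
# Sketch of the AWS unseal configuration section of a Vault custom
# resource (YAML), written from a shell heredoc. All values are
# placeholders; check field names against the Bank-Vaults CRD reference.
cat > /tmp/unseal-config.yaml <<'EOF'
unsealConfig:
  aws:
    kmsKeyId: "12345678-abcd-ef01-2345-6789abcdef01"  # placeholder KMS key ID
    kmsRegion: "eu-central-1"
    s3Bucket: "bank-vaults-central"                   # placeholder bucket name
    s3Prefix: "raft-"
    s3Region: "eu-central-1"
EOF
grep -q 's3Bucket' /tmp/unseal-config.yaml && echo "unseal config sketched"
```

With a section like this on each cluster's Vault resource, the operator knows where to encrypt and store the unseal keys it later distributes.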
Banzai Cloud Pipeline is a solution-oriented application platform which allows enterprises to develop, deploy, and securely scale container-based applications in multi- and hybrid-cloud environments. Pipeline can create Kubernetes clusters on several cloud providers (AWS, Azure, Alibaba, Google) and on-prem (VMware, bare metal).
The following example demonstrates implementation on AWS. First, you have to create three clusters, because Raft requires at least three nodes to work properly. These clusters will span three European AWS data centers ("eu-central-1", "eu-west-1", "eu-north-1").
for region in "eu-central-1" "eu-west-1" "eu-north-1"; do
banzai cluster create <<EOF
{
  "name": "bv-${region}",
  "location": "${region}",
  "cloud": "amazon",
  "secretName": "aws",
  "properties": {
    "eks": {
      "version": "1.14.7",
      "nodePools": {
        "pool1": {
          "spotPrice": "0.101",
          "count": 1,
          "minCount": 1,
          "maxCount": 3,
          "autoscaling": true,
          "instanceType": "c5.large"
        }
      }
    }
  }
}
EOF
done
Once all the clusters are up and running, you can check the status of the clusters with the banzai cluster list | grep bv- command.
$ banzai cluster list | grep bv-
2862 bv-eu-central-1 eks bonifaido 2020-01-09T08:29:45Z RUNNING
2863 bv-eu-north-1 eks bonifaido 2020-01-09T08:29:47Z RUNNING
2856 bv-eu-west-1 eks bonifaido 2020-01-08T15:05:12Z RUNNING
After all clusters are ready, you need to get their KUBECONFIGs. You can do that with the following command:
for region in "eu-central-1" "eu-west-1" "eu-north-1"; do
banzai cluster sh --cluster-name "bv-${region}" 'cat $KUBECONFIG' > bv-${region}.yaml
done
The Bank-Vaults repository holds a useful script and a bunch of Vault CustomResources to help with the installation. You need to create two S3 buckets and KMS keys as well, and change those in the resource manifests.
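The bucket and key names below are placeholders you would substitute with your own. Echoing the AWS CLI commands instead of running them produces a reviewable dry run of what needs to exist before the script is invoked:

```shell
# The install script expects an S3 bucket in each of two different
# regions, plus a KMS key per region, for the encrypted unseal keys.
# Bucket names are placeholders; the commands are echoed (not executed)
# so the output can be reviewed before running it for real.
for region in eu-central-1 eu-west-1; do
  echo aws s3api create-bucket \
    --bucket "bank-vaults-${region}" \
    --region "${region}" \
    --create-bucket-configuration "LocationConstraint=${region}"
  echo aws kms create-key \
    --region "${region}" \
    --description "bank-vaults unseal key (${region})"
done > /tmp/bucket-setup.sh
cat /tmp/bucket-setup.sh
```

Once the buckets and keys exist, their names go into the resource manifests mentioned above, and the script can be run: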
operator/deploy/multi-dc/multi-dc-raft.sh install \
bv-eu-central-1.yaml \
bv-eu-west-1.yaml \
bv-eu-north-1.yaml
NOTE: For the sake of time, the script we're using assumes that the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are set in our shell. The proper way to do this on an EKS cluster would be to create an IAM Role that grants your Vault instances access to S3 and KMS, and to bind that IAM Role to the vault Kubernetes ServiceAccount those instances are using. For details on how to do that, see the official Amazon EKS documentation.
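The IAM-role approach boils down to the standard EKS IRSA annotation on the ServiceAccount. A minimal sketch, where the role ARN is a placeholder and the (hypothetical) role itself must grant access to the unseal S3 buckets and KMS keys:

```shell
# ServiceAccount for the Vault pods carrying the standard EKS IRSA
# annotation. The role ARN is a placeholder for a pre-created IAM role
# that allows access to the unseal S3 buckets and KMS keys.
cat > /tmp/vault-sa.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/vault-unseal
EOF
# kubectl apply -f /tmp/vault-sa.yaml   # apply in each of the three clusters
grep -q 'role-arn' /tmp/vault-sa.yaml && echo "serviceaccount manifest ready"
```

With this in place, the Vault pods pick up temporary AWS credentials automatically and no static access keys are needed.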
The script will install the Bank-Vaults operator into all three clusters and deploy a federated Vault cluster (using Raft) that is connected in an HA manner across the three Kubernetes clusters. The unseal keys will be encrypted with KMS and stored in S3. The script takes a few minutes to complete.
After the setup has successfully finished, check the status of the Vault cluster and find the current leader:
operator/deploy/multi-dc/multi-dc-raft.sh status \
bv-eu-central-1.yaml \
bv-eu-west-1.yaml \
bv-eu-north-1.yaml
You can emulate a failover by scaling down the leader instance to 0 (in this example, the one in bv-eu-central-1). Use the following command to do that:
# The operator will scale the instance back to 1 in a few seconds,
# but this is enough to simulate a service outage.
export KUBECONFIG=bv-eu-central-1.yaml
kubectl scale statefulset vault-primary --replicas 0
statefulset.apps/vault-primary scaled
Let's check the logs of the secondary instance:
export KUBECONFIG=bv-eu-west-1.yaml
kubectl logs --tail=6 vault-secondary-0 vault
2020-01-09T12:29:32.954Z [WARN] storage.raft: heartbeat timeout reached, starting election: last-leader=
2020-01-09T12:29:32.954Z [INFO] storage.raft: entering candidate state: node="Node at [::]:8201 [Candidate]" term=5
2020-01-09T12:29:32.958Z [INFO] storage.raft: duplicate requestVote for same term: term=5
2020-01-09T12:29:32.958Z [WARN] storage.raft: duplicate requestVote from: candidate=[91, 58, 58, 93, 58, 56, 50, 48, 49]
2020-01-09T12:29:32.958Z [INFO] storage.raft: election won: tally=2
2020-01-09T12:29:32.958Z [INFO] storage.raft: entering leader state: leader="Node at [::]:8201 [Leader]"
As you can see, the Vault instances noticed the blip in the primary instance and elected a new leader.
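The `tally=2` line in the logs reflects Raft's majority rule: an election is won with floor(n/2) + 1 votes, which is also why at least three instances were created in the first place. A quick sketch:

```shell
# Raft quorum: a candidate needs floor(n/2) + 1 votes to win an election.
# A 3-node cluster therefore tolerates one lost node and still elects a
# leader with two votes (the tally=2 seen in the logs above).
for n in 1 2 3 4 5; do
  echo "cluster size ${n} -> quorum $(( n / 2 + 1 ))"
done
# prints, among others: cluster size 3 -> quorum 2
```

With one node per datacenter, losing a single region still leaves a two-node majority, so the remaining instances can keep serving requests.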
Don't forget to remove the whole setup with the clusters after you finish experimenting:
operator/deploy/multi-dc/multi-dc-raft.sh remove \
bv-eu-central-1.yaml \
bv-eu-west-1.yaml \
bv-eu-north-1.yaml
for region in "eu-central-1" "eu-west-1" "eu-north-1"; do
banzai cluster delete --no-interactive "bv-${region}"
done
As you can see, it's quite easy to create a geo-distributed Vault cluster capable of surviving region or AZ outages with the Bank-Vaults Vault operator. If you have questions about this Bank-Vaults feature, or you encounter problems during testing, you can reach us on our community Slack, in the #bank-vaults channel.
Learn more about Bank-Vaults:
As of now, you can only use one Vault instance per region, but we are already working to push past this limitation. The current implementation works on AWS and Google Cloud out of the box; if you are interested in running a multi-region Vault cluster on Azure or Alibaba, let us know.

The Bank-Vaults secret injection webhook became available as an integrated service on the Pipeline platform a few months ago. This means that you can install Bank-Vaults on any Kubernetes cluster by using Pipeline (with the UI or with the CLI), and configure Vault with all the authentication backends and policies necessary to work with the webhook. Secret injection is now a default feature of Pipeline.

Also, as you can see, the AWS load balancers serve as a gateway between the clusters. They play a role very similar to that of Istio Gateways: a load balancer operating at the edge of the mesh, receiving incoming or outgoing HTTP/TCP connections. In a forthcoming blog post we will describe how the whole multi-DC Vault setup can become more Kubernetes-native using Backyards (now Cisco Service Mesh Manager), the Banzai Cloud automated and operationalized service mesh built on Istio.

For more information, or if you're interested in contributing, check out the Bank-Vaults repo - our Vault extension project for Kubernetes - and give us a GitHub star if you think this project deserves it!
Banzai Cloud's Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures - multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on - are default features of the Pipeline platform. #multicloud #hybridcloud #BanzaiCloud