Multi-cluster deployment is a Banzai Cloud Pipeline feature that's turned on by default, and makes it possible to target a deployment to one or more clusters simultaneously, instead of deploying separately on each. Single-cluster deployments in Banzai Cloud Pipeline are usually Helm deployments, the state of a deployment being stored on the cluster itself, by Helm. Multi-cluster deployments differ in that, when a user creates one, it is persisted in the Banzai Cloud Pipeline database as a desired state. The actual state of the deployment is then fetched from each member cluster, and a reconciliation loop moves deployments from their actual state toward the desired one. In the simplest terms, multi-cluster deployments are made up of several single-cluster deployments installed on targeted clusters, and are a convenient way of handling several deployments on multiple clouds/clusters at once.
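To make the desired-state model a bit more concrete, here is a minimal, purely illustrative sketch of what such a reconciliation step decides per cluster. This is not Pipeline's actual implementation; the types and the in-memory "actual state" are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DesiredDeployment:
    release_name: str
    chart: str
    values: dict                                      # common values
    overrides: dict = field(default_factory=dict)     # per-cluster overrides

    def values_for(self, cluster: str) -> dict:
        # cluster-specific overrides win over the common values
        return {**self.values, **self.overrides.get(cluster, {})}

def reconcile(desired: DesiredDeployment, actual_state: dict, clusters: list) -> dict:
    """Return the action needed on each member cluster to reach the desired state."""
    plan = {}
    for cluster in clusters:
        actual = actual_state.get(cluster)            # values currently installed, if any
        target = desired.values_for(cluster)
        if actual is None:
            plan[cluster] = "install"                 # chart not found on the cluster
        elif actual != target:
            plan[cluster] = "upgrade"                 # installed values are stale
        else:
            plan[cluster] = "in sync"
    return plan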
Clusters can be freely removed from their group: the multi-cluster feature is first disabled, then the clusters are removed (the cluster itself is not deleted). Multi-cluster deployments are deleted from a cluster when it leaves the group. When adding new members to a group, deployments are not installed automatically; you have to either Edit or Reconcile in order to install a deployment on the new members. This is because you might want to specify overrides for each new cluster member.
Note that multi-cluster deployment is about distributing your application to different clouds. However, when the Service Mesh feature is switched on, you can wire them into multiple topologies as described in this post: “Easy peer-to-peer multicluster service mesh with the Istio operator”.
Multi-cluster deployments are managed through the clustergroups/:clusterGroupId/deployments API endpoint. In the next section we will guide you through the API, which will also be made available in the Banzai Cloud Pipeline CLI.
POST {{url}}/api/v1/orgs/:orgId/clustergroups/:clusterGroupId/deployments
{
"name": "repoName/chartName",
"namespace":"yourNamespace",
"values": {
"replicaCount": 1,
"image": {
"tag": "2.1"
}
},
"valueOverrides": {
"clusterName1": {
"replicaCount": 2
},
"clusterName2": {
"replicaCount": 3
}
}
}
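If you prefer to drive the API from code rather than the UI, a Create request like the one above can be sent with any HTTP client. The sketch below uses Python's requests library; the URL, organization ID, cluster group ID, and bearer-token authentication are placeholders/assumptions, so adapt them to your Pipeline installation.

import requests

PIPELINE_URL = "https://your-pipeline-host"   # placeholder
ORG_ID, GROUP_ID = 1, 1                       # placeholders
TOKEN = "your-pipeline-token"                 # assumption: bearer-token auth

payload = {
    "name": "repoName/chartName",
    "namespace": "yourNamespace",
    "values": {"replicaCount": 1, "image": {"tag": "2.1"}},
    "valueOverrides": {
        "clusterName1": {"replicaCount": 2},
        "clusterName2": {"replicaCount": 3},
    },
}

resp = requests.post(
    f"{PIPELINE_URL}/api/v1/orgs/{ORG_ID}/clustergroups/{GROUP_ID}/deployments",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())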
We’ve already pointed out the key differences in a Create request like this one: the values field contains common values for deployments on all target clusters, and you may specify overrides for each cluster in the event there are cluster- or cloud-specific differences in deployment values. These cluster-specific values are merged with the common values, after which the result is sent to the Helm client. Typical differences are, for example, StorageClasses, which may differ from cloud provider to cloud provider. Overrides are also handy if you want to specify different object storage credentials for each cloud provider, or for any other cloud- or cluster-specific service.
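As an illustration of that merge (the authoritative behaviour is Helm's own value merging; this is just a simplified sketch), the per-cluster override is laid over the common values, so clusterName1 from the request above ends up with replicaCount 2 but keeps the common image tag:

import copy

def deep_merge(base: dict, override: dict) -> dict:
    """Simplified recursive merge: override wins on conflicting keys."""
    result = copy.deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

common = {"replicaCount": 1, "image": {"tag": "2.1"}}   # "values" from the request
overrides = {"clusterName1": {"replicaCount": 2}}       # "valueOverrides"

print(deep_merge(common, overrides["clusterName1"]))
# -> {'replicaCount': 2, 'image': {'tag': '2.1'}}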
Both the create and edit operations attempt to install or upgrade the deployment on each member cluster of the group, depending on whether there’s already a deployment of the same chart with the same release name.
The status of a deployment operation (create, edit, delete) may differ from the actual status of a deployment. The status of a multi-cluster deployment is a list of Helm chart statuses on each targeted cluster – as you will see below – whereas a create or update operation is only successful if all Helm create/update operations are successful. These operations run in parallel, so total deployment time should not exceed that of deploying to whichever cluster has the highest network latency.
Just like in single-cluster deployments, there is a dryRun flag, which keeps the deployment from being persisted; this comes in handy for validation checks. The uniqueness of the release name is checked across target clusters, and, if you don’t specify a releaseName, a random release name is generated for you.
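A validation-only request might look like the sketch below. Whether dryRun is passed exactly like this as a boolean field in the request body is an assumption based on the single-cluster deployment API; check the Open API spec for the authoritative shape.

payload = {
    "name": "repoName/chartName",
    "namespace": "yourNamespace",
    "dryRun": True,                # validate only, don't persist the deployment
    "values": {"replicaCount": 1},
    # no releaseName given: a random one would be generated on a real create
}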
The update request is very much the same as it is for create, one important difference being the reuseValues flag.
PUT {{url}}/api/v1/orgs/:orgId/clustergroups/:clusterGroupId/deployments/:releaseName
{
"name": "repoName/chartName",
"namespace":"yourNamespace",
"reuseValues": "true",
"values": {
"replicaCount": 1,
"image": {
"tag": "2.1"
}
},
"valueOverrides": {
"clusterName1": {
"replicaCount": 2
},
"clusterName2": {
"replicaCount": 3
}
}
}
Set the reuseValues flag to true to avoid specifying the existing values again, or set it to false if you want to completely override them. In other words, if reuseValues = false, the values are merged together just as they would be during a Create request; otherwise, the common and cluster-specific values are merged on top of the existing values, overriding them where they conflict.
If you are using the Pipeline UI, you can see the actual values in a deployment when you uncheck the reuseValues box.
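The following snippet is an illustrative sketch of what that means for the values sent to Helm for one cluster, following the description above; it is not Pipeline's actual code.

existing = {"replicaCount": 1, "image": {"tag": "2.0"}}   # values already stored for the release
incoming = {"image": {"tag": "2.1"}}                      # values in the edit request

# reuseValues = true: keep the existing values and layer the request on top
with_reuse = {**existing, "image": {**existing["image"], **incoming["image"]}}
# -> {'replicaCount': 1, 'image': {'tag': '2.1'}}

# reuseValues = false: start from the request's values, as in a Create request
without_reuse = incoming
# -> {'image': {'tag': '2.1'}}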
GET {{url}}/api/v1/orgs/:orgId/clustergroups/:clusterGroupId/deployments/:releaseName
{
"releaseName":"sample",
"chart":"stable/tomcat",
"chartName":"tomcat",
"chartVersion":"0.2.0",
"namespace":"default",
"description":"Deploy a basic tomcat application server with sidecar as web archive container",
"createdAt":"2019-06-04T14:17:03Z",
"updatedAt":"2019-06-04T14:17:03Z",
"values":{
...
"replicaCount":1,
"service":{
"externalPort":80,
"internalPort":8080,
"name":"http",
"type":"LoadBalancer"
},
...
},
"valueOverrides":{
"pke-demo":{
"replicaCount":2
},
"gke-demo":{
"replicaCount":3
}
},
"targetClusters":[
{
"clusterId":1681,
"clusterName":"pke-demo",
"cloud":"amazon",
"distribution":"pke",
"status":"DEPLOYED",
"stale":false,
"version":"0.2.0"
},
{
"clusterId":1682,
"clusterName":"gke-demo",
"cloud":"google",
"distribution":"gke",
"status":"DEPLOYED",
"stale":false,
"version":"0.2.0"
}
]
}
Above is a sample of the details fetched from a Tomcat deployment, deployed to a PKE cluster on AWS and to a GKE cluster. As you can see, there’s no single status for a multi-cluster deployment; instead, there are separate statuses for each of the target clusters listed in the targetClusters field (a small status-checking sketch follows the list below). The deployment status of a target cluster can be any one of the following:
UNKNOWN – Pipeline cannot reach the targeted cluster (Tiller, to be more specific)
NOT INSTALLED – the given Helm chart cannot be found on the target cluster
STALE – a Helm chart is installed on the target cluster, however, its existing values differ from the desired ones, calculated by merging the common and cluster-specific values
DEPLOYED, DELETED, SUPERSEDED, FAILED, DELETING, PENDING_INSTALL, PENDING_UPGRADE, PENDING_ROLLBACK – the standard Helm release statuses reported from the target cluster
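Since each target cluster reports its own status, a simple health check is to fetch the deployment and verify that every entry in targetClusters is DEPLOYED. The sketch below uses the same placeholder URL, IDs, token, and the sample release name as in the earlier sketches.

import requests

PIPELINE_URL, ORG_ID, GROUP_ID = "https://your-pipeline-host", 1, 1   # placeholders
TOKEN = "your-pipeline-token"                                         # placeholder

resp = requests.get(
    f"{PIPELINE_URL}/api/v1/orgs/{ORG_ID}/clustergroups/{GROUP_ID}/deployments/sample",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
deployment = resp.json()

# Collect clusters whose Helm status is anything other than DEPLOYED,
# e.g. NOT INSTALLED or STALE, which a Reconcile call can usually fix.
not_ready = [
    tc["clusterName"]
    for tc in deployment["targetClusters"]
    if tc["status"] != "DEPLOYED"
]
print("clusters needing attention:", not_ready)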
Reconcile
PUT {{url}}/api/v1/orgs/:orgId/clustergroups/:clusterGroupId/deployments/:releaseName/sync
Should a multi-cluster deployment’s status on any target cluster differ from DEPLOYED, i.e. be NOT_INSTALLED or STALE, your first and last stop should be the Reconcile operation, which attempts to enforce the desired state on each cluster. Reconcile tries to install or upgrade deployments on the clusters originally targeted in Create and Edit requests. There’s no automatic installation of a deployment to newly added members of a group after a deployment is created, mainly because we don’t know which values you would want specifically overridden for the new cluster; for the same reason, Reconcile doesn’t automatically install to each new member as it is added. If you want to install the deployment on a new member, use Edit. Reconcile also deletes stale deployments – deployments installed on clusters which are no longer members of the group. These are normally deleted when their cluster is removed from the group, but, in the event they remain, Reconcile should resolve the situation.
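Triggering Reconcile from code is just a PUT on the sync endpoint shown above. A sketch, with the same placeholder URL, IDs, token, and release name as before:

import requests

PIPELINE_URL, ORG_ID, GROUP_ID = "https://your-pipeline-host", 1, 1   # placeholders
TOKEN = "your-pipeline-token"                                         # placeholder
RELEASE_NAME = "sample"                                               # placeholder

resp = requests.put(
    f"{PIPELINE_URL}/api/v1/orgs/{ORG_ID}/clustergroups/{GROUP_ID}"
    f"/deployments/{RELEASE_NAME}/sync",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()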
Delete
DELETE {{url}}/api/v1/orgs/:orgId/clustergroups/:clusterGroupId/deployments/:releaseName
The delete operation attempts to delete the deployment from every member of the cluster group on which it exists. There's also a force flag, which is useful whenever problems arise during the delete operation: if, for instance, a cluster is no longer available or cannot be reached.
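A delete from code follows the same pattern. Passing force as a query parameter is an assumption here; the Open API spec is the authoritative reference for how the flag is supplied.

import requests

PIPELINE_URL, ORG_ID, GROUP_ID = "https://your-pipeline-host", 1, 1   # placeholders
TOKEN = "your-pipeline-token"                                         # placeholder

resp = requests.delete(
    f"{PIPELINE_URL}/api/v1/orgs/{ORG_ID}/clustergroups/{GROUP_ID}/deployments/sample",
    params={"force": "true"},   # assumption: force is passed as a query parameter
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()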
For a complete API description, check out the Pipeline Open API spec.
cockroachdb:
JoinExisting:
- "cockroachdb-cockroachdb-0-node-demo-pke-1.sancyx.try.pipeline.banzai.cloud:26257"
- "cockroachdb-cockroachdb-1-node-demo-pke-1.sancyx.try.pipeline.banzai.cloud:26257"
- "cockroachdb-cockroachdb-2-node-demo-pke-1.sancyx.try.pipeline.banzai.cloud:26257"
Comparing these deployments: besides JoinExisting, there were a few other properties that contained the cluster name, as well as cluster-specific overrides we had to enter at the time. However, we plan to introduce a degree of templating facility to support such scenarios.
First, create two clusters, demo-pke-1 & demo-gke-1, which you can do with a few clicks or CLI commands using Pipeline. Once you have the clusters up and RUNNING, you can assign them to a cluster group:
Now create a multi-cluster deployment with the release name cockroachdb, and enter the global common configuration values. In order to be able to install directly from the UI, we have uploaded our experimental chart to the Banzai Cloud chart repository, Banzai Charts.
Enter the overrides for the GKE cluster:
Enter the overrides for the PKE cluster:
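The same deployment could also be created through the API described earlier. In the sketch below the chart reference banzaicloud-stable/cockroachdb is only an assumed name for our experimental chart, and attaching the JoinExisting list (taken from the snippet above, pointing at the demo-pke-1 nodes) to the demo-gke-1 override is likewise an assumption about which cluster joins which; the endpoint, release name, and override structure are as shown earlier in this post.

payload = {
    "name": "banzaicloud-stable/cockroachdb",    # assumed chart reference
    "releaseName": "cockroachdb",
    "namespace": "default",
    "values": {},                                # global common configuration values
    "valueOverrides": {
        "demo-gke-1": {                          # assumption: GKE nodes join the PKE ones
            "cockroachdb": {
                "JoinExisting": [
                    "cockroachdb-cockroachdb-0-node-demo-pke-1.sancyx.try.pipeline.banzai.cloud:26257",
                    "cockroachdb-cockroachdb-1-node-demo-pke-1.sancyx.try.pipeline.banzai.cloud:26257",
                    "cockroachdb-cockroachdb-2-node-demo-pke-1.sancyx.try.pipeline.banzai.cloud:26257",
                ],
            },
        },
    },
}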
Now you should have CockroachDB deployed on both clusters! Since multi-cluster deployments are made up of normal single-cluster deployments, you can check their details on each cluster, and use Reconcile if any of them drift from the desired state.
Finally, you can open the CockroachDB Dashboard, where you should see all the nodes that have joined: