Note that this is not a complete list of providers (we will be open sourcing EKS and Alibaba Cloud support as soon as those services are available to the public; if you need support for any of these providers, don't hesitate to contact us). For an overview of Pipeline, please study this diagram, which contains the main components of the Control Plane and a typical layout for a provisioned cluster.
You will need provider-specific credentials to create secrets in Pipeline (see below).
If you don't have the AWS CLI installed, please follow these instructions. To create a new AWS secret access key and a corresponding AWS access key ID for the specified user, run the following command (change {{username}} to your AWS username):

```bash
aws iam create-access-key --user-name {{username}}
```
You should get something like:
```json
{
  "AccessKey": {
    "UserName": "{{username}}",
    "AccessKeyId": "EWIAJSBDOQWBSDKFJSK",
    "Status": "Active",
    "SecretAccessKey": "OOWSSF89sdkjf78234",
    "CreateDate": "2018-05-04T13:50:07.586Z"
  }
}
```
The relationship between variables and secret keys:
| Credential variable name | Secret variable |
|---|---|
| AccessKeyId | AWS_ACCESS_KEY_ID |
| SecretAccessKey | AWS_SECRET_ACCESS_KEY |
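If you'd like to keep these values handy for the secret creation requests later in this post, one option is to export them in your shell; a minimal sketch using the sample values above (substitute your own keys):

```bash
# Sample values taken from the output above -- replace with your own keys
export AWS_ACCESS_KEY_ID="EWIAJSBDOQWBSDKFJSK"
export AWS_SECRET_ACCESS_KEY="OOWSSF89sdkjf78234"
```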
For this step, you need an Azure subscription with the AKS service enabled.
If the Azure CLI tool is not installed, please run the following commands:

```bash
$ curl -L https://aka.ms/InstallAzureCli | bash
$ exec -l $SHELL
$ az login
```
Create a Service Principal for the Azure Active Directory using the following command:

```bash
$ az ad sp create-for-rbac
```
You should get something like:
```json
{
  "appId": "1234567-1234-1234-1234-1234567890ab",
  "displayName": "azure-cli-2017-08-18-19-25-59",
  "name": "http://azure-cli-2017-08-18-19-25-59",
  "password": "1234567-1234-1234-be18-1234567890ab",
  "tenant": "1234567-1234-1234-be18-1234567890ab"
}
```
Run the following command to get your Azure subscription ID:
```bash
$ az account show --query id
"1234567-1234-1234-1234567890ab"
```
The relationship between variables and secret keys:
| Credential variable name | Secret variable |
|---|---|
| appId | AZURE_CLIENT_ID |
| password | AZURE_CLIENT_SECRET |
| tenant | AZURE_TENANT_ID |
| `az account show --query id` output | AZURE_SUBSCRIPTION_ID |
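As with AWS, you can keep these values in your shell for later; a minimal sketch mapping the service principal output above to the secret variables (values are the samples from above, substitute your own):

```bash
# appId, password and tenant come from the service principal output above
export AZURE_CLIENT_ID="1234567-1234-1234-1234-1234567890ab"
export AZURE_CLIENT_SECRET="1234567-1234-1234-be18-1234567890ab"
export AZURE_TENANT_ID="1234567-1234-1234-be18-1234567890ab"
# The subscription ID can be captured directly from the CLI
export AZURE_SUBSCRIPTION_ID="$(az account show --query id -o tsv)"
```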
The following resource providers must be registered for your subscription:

- Microsoft.Compute
- Microsoft.Storage
- Microsoft.Network
- Microsoft.ContainerService

You can check registered providers with:

```bash
az provider list --query "[].{Provider:namespace, Status:registrationState}" --out table
```
If the above are not registered, you can add them:

```bash
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.Network
```
While registration runs through the necessary zones and datacenters, we suggest you sit back and have a nice hot cup of coffee. You can check the status of each individual service with `az provider show -n Microsoft.ContainerService`.
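If you'd rather let the shell do the waiting, here is a small sketch that polls each namespace until it reports Registered (assumes bash and the Azure CLI):

```bash
for ns in Microsoft.ContainerService Microsoft.Compute Microsoft.Storage Microsoft.Network; do
  # Poll the registration state of each resource provider until it is Registered
  until [ "$(az provider show -n "$ns" --query registrationState -o tsv)" = "Registered" ]; do
    echo "Waiting for $ns to register..."
    sleep 10
  done
done
```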
If you don't have the gcloud CLI installed, please follow these instructions. Then create a new service account and download a JSON key for it in the Google Cloud Console.
Your new public/private key pair will be generated and downloaded to your machine. It serves as the only copy of your private key. You are responsible for storing that key securely.
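If you prefer the CLI to the console flow, a roughly equivalent sketch with gcloud is shown below; the account and project names are taken from the sample key.json that follows, so substitute your own:

```bash
# Create the service account (names are illustrative)
gcloud iam service-accounts create mytestserviceaccount \
  --display-name "Pipeline service account"

# Generate and download a JSON key for it
gcloud iam service-accounts keys create key.json \
  --iam-account mytestserviceaccount@colin-pipeline.iam.gserviceaccount.com
```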
If the service account was successfully created, your `key.json` file should look something like:
{ "type": "service_account",
"project_id": "colin-pipeline", "private_key_id":
"1234567890123456", "private_key": "-----BEGIN PRIVATE
KEY-----\nyourprivatekey\n-----END PRIVATE KEY-----\n",
"client_email":
"mytestserviceaccount@colin-pipeline.iam.gserviceaccount.com",
"client_id": "1234567890123456", "auth_uri":
"https://accounts.google.com/o/oauth2/auth", "token_uri":
"https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url":
"https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url":
"https://www.googleapis.com/robot/v1/metadata/x509/mytestserviceaccount%40colin-pipeline.iam.gserviceaccount.com"
}
You should check on your new service account in the Google Cloud Console.
The JSON received should be exactly what you'll send to Pipeline later.
The current version of Pipeline also supports cluster creation with profiles. The easiest way to create a Kubernetes cluster on one of the supported cloud providers is by using the REST API, which is available as a Postman collection:
Endpoints to be used:
- Create cluster (POST): `{{url}}/api/v1/orgs/{{orgId}}/clusters`
- Get cluster status (GET): `{{url}}/api/v1/orgs/{{orgId}}/clusters/{{cluster_id}}`
- Get cluster config (GET): `{{url}}/api/v1/orgs/{{orgId}}/clusters/{{cluster_id}}/config`
- API endpoint (GET): `{{url}}/api/v1/orgs/{{orgId}}/clusters/{{cluster_id}}/apiendpoint`
- Create secret (POST): `{{url}}/api/v1/orgs/{{orgId}}/secrets`
- Create profile (POST): `{{url}}/api/v1/orgs/{{orgId}}/profiles/cluster`
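The {{...}} placeholders above are Postman variables; if you'd rather run the examples with plain curl, one option is to substitute them with environment variables of your own, for example (values are illustrative):

```bash
# Illustrative values only -- use your own Pipeline host, organization id and token
export URL="pipeline.example.com"
export ORG_ID="1"
export TOKEN="your-bearer-token"

# The curl examples below then become, e.g.:
#   curl -g --request POST --url "http://${URL}/api/v1/orgs/${ORG_ID}/secrets" \
#        --header "Authorization: Bearer ${TOKEN}" ...
```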
For the purposes of this blog, we will need three types of secrets: one for each cloud provider. Each Create secret response contains a secret_id, which the cluster creation requests below reference. The following are sample cURL requests for creating each secret.
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/secrets' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "My amazon secret",
        "type": "amazon",
        "values": {
          "AWS_ACCESS_KEY_ID": "{{AWS_ACCESS_KEY_ID}}",
          "AWS_SECRET_ACCESS_KEY": "{{AWS_SECRET_ACCESS_KEY}}"
        }
      }'
```
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/secrets' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "My azure secret",
        "type": "azure",
        "values": {
          "AZURE_CLIENT_ID": "{{AZURE_CLIENT_ID}}",
          "AZURE_CLIENT_SECRET": "{{AZURE_CLIENT_SECRET}}",
          "AZURE_TENANT_ID": "{{AZURE_TENANT_ID}}",
          "AZURE_SUBSCRIPTION_ID": "{{AZURE_SUBSCRIPTION_ID}}"
        }
      }'
```
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/secrets' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "My google secret",
        "type": "google",
        "values": {
          "type": "{{gke_type}}",
          "project_id": "{{gke-projectId}}",
          "private_key_id": "{{private_key_id}}",
          "private_key": "{{private_key}}",
          "client_email": "{{client_email}}",
          "client_id": "{{client_id}}",
          "auth_uri": "{{auth_uri}}",
          "token_uri": "{{token_uri}}",
          "auth_provider_x509_cert_url": "{{auth_provider_x509_cert_url}}",
          "client_x509_cert_url": "{{client_x509_cert_url}}"
        }
      }'
```
Here is a sample cURL request that will create an Amazon cluster with two nodePools:
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "awscluster-{{username}}-{{$randomInt}}",
        "location": "eu-west-1",
        "cloud": "amazon",
        "secret_id": "{{secret_id}}",
        "nodeInstanceType": "m4.xlarge",
        "properties": {
          "amazon": {
            "nodePools": {
              "pool1": { "instanceType": "m4.xlarge", "spotPrice": "0.2", "minCount": 1, "maxCount": 2, "image": "ami-16bfeb6f" },
              "pool2": { "instanceType": "m4.xlarge", "spotPrice": "0.2", "minCount": 3, "maxCount": 5, "image": "ami-16bfeb6f" }
            },
            "master": { "instanceType": "m4.xlarge", "image": "ami-16bfeb6f" }
          }
        }
      }'
```
If the cluster was created successfully, you'll see something like this after calling the Get cluster status endpoint:
{ "status": "RUNNING",
"status_message": "Cluster is running", "name":
"awscluster-colin-227", "location": "eu-west-1", "cloud":
"amazon", "id": 75, "nodePools": { "pool1": {
"instance_type": "m4.xlarge", "spot_price": "0.2",
"min_count": 1, "max_count": 2, "image": "ami-16bfeb6f" },
"pool2": { "instance_type": "m4.xlarge", "spot_price":
"0.2", "min_count": 3, "max_count": 5, "image":
"ami-16bfeb6f" } } }
``` If the `status`
field is `ERROR` the `status_message` explains what caused
the error.
To download the cluster config, use the `Get cluster config` endpoint; to retrieve the cluster's Kubernetes API address, use the `API endpoint` endpoint.
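For reference, both of those are plain GET calls against the endpoints listed earlier; a sketch:

```bash
# Fetch the cluster config (the kubeconfig used by kubectl below)
curl -g --request GET \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters/{{cluster_id}}/config' \
  --header 'Authorization: Bearer {{token}}'

# Retrieve the cluster's Kubernetes API address
curl -g --request GET \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters/{{cluster_id}}/apiendpoint' \
  --header 'Authorization: Bearer {{token}}'
```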
The `kubectl get nodes` result will be:

```bash
NAME                                        STATUS    ROLES     AGE       VERSION
ip-10-0-0-41.eu-west-1.compute.internal     Ready     master    5m        v1.9.5
ip-10-0-100-33.eu-west-1.compute.internal   Ready     <none>    4m        v1.9.5
ip-10-0-101-48.eu-west-1.compute.internal   Ready     <none>    4m        v1.9.5
ip-10-0-101-85.eu-west-1.compute.internal   Ready     <none>    4m        v1.9.5
```
Here is a sample cURL request that will create an Azure cluster with two nodePools:

```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "azcluster{{username}}{{$randomInt}}",
        "location": "westeurope",
        "cloud": "azure",
        "secret_id": "{{secret_id}}",
        "nodeInstanceType": "Standard_B2ms",
        "properties": {
          "azure": {
            "resourceGroup": "{{az-resource-group}}",
            "kubernetesVersion": "1.9.2",
            "nodePools": {
              "pool1": { "count": 1, "nodeInstanceType": "Standard_B2ms" },
              "pool2": { "count": 1, "nodeInstanceType": "Standard_B2ms" }
            }
          }
        }
      }'
```
If the cluster was created successfully, you'll see something like this after calling the Get cluster status endpoint:
{ "status": "RUNNING",
"status_message": "Cluster is running", "name":
"azclustercolin316", "location": "westeurope", "cloud":
"azure", "id": 76, "nodePools": { "pool1": { "count": 1,
"instance_type": "Standard_B2ms" }, "pool2": { "count": 1,
"instance_type": "Standard_B2ms" } } }
If the `status` field is `ERROR`, the `status_message` explains what caused the error.
To download the cluster config, use the `Get cluster config` endpoint; to retrieve the cluster's Kubernetes API address, use the `API endpoint` endpoint.
The `kubectl get nodes` result should be:
```bash
NAME                   STATUS    ROLES     AGE       VERSION
aks-pool1-27255451-0   Ready     agent     33m       v1.9.2
aks-pool2-27255451-0   Ready     agent     33m       v1.9.2
```
Here is a sample cURL request to create a Google cluster with two nodePools:
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "gkecluster-{{username}}-{{$randomInt}}",
        "location": "us-central1-a",
        "cloud": "google",
        "nodeInstanceType": "n1-standard-1",
        "secret_id": "{{secret_id}}",
        "properties": {
          "google": {
            "master": { "version": "1.9" },
            "nodeVersion": "1.9",
            "nodePools": {
              "pool1": { "count": 1, "nodeInstanceType": "n1-standard-2" }
            }
          }
        }
      }'
```
If the cluster was created successfully, you will see something like this after calling the Get cluster status
endpoint:
{ "status": "RUNNING",
"status_message": "Cluster is running", "name":
"gkecluster-colin-811", "location": "us-central1-a",
"cloud": "google", "id": 79, "nodePools": { "pool1": {
"count": 1, "instance_type": "n1-standard-2" }, "pool2": {
"count": 2, "instance_type": "n1-standard-2" } } }
If the `status` field is `ERROR`, the `status_message` will explain what caused the error.
To download the cluster config, use the `Get cluster config` endpoint; to retrieve the cluster's Kubernetes API address, use the `API endpoint` endpoint.
The `kubectl get nodes` result should be:
```bash
NAME                                           STATUS    ROLES     AGE       VERSION
gke-gkecluster-colin-811-pool1-67b39b0b-v3n3   Ready     <none>    15m       v1.9.6-gke.1
gke-gkecluster-colin-811-pool2-73b01f4f-chm4   Ready     <none>    7m        v1.9.6-gke.1
gke-gkecluster-colin-811-pool2-73b01f4f-rkkz   Ready     <none>    15m       v1.9.6-gke.1
```
Cluster creation with profiles requires an additional field in the request body: `profile_name`. This new field, together with `cloud`, identifies which profile should be used. Profiles can be reused at any time without changes. If you don't want to create your own profile but would like to try one out, Pipeline offers a default profile for every supported provider; in that case, simply pass `default` as the `profile_name`. Here are sample requests that use the default profile on each provider:
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{ "name": "awscluster-{{username}}-{{$randomInt}}",
        "secret_id": "{{secret_id}}",
        "cloud": "amazon",
        "profile_name": "default" }'
```
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{ "name": "azcluster{{username}}{{$randomInt}}",
        "secret_id": "{{secret_id}}",
        "cloud": "azure",
        "profile_name": "default",
        "properties": { "azure": { "resourceGroup": "{{az-resource-group}}" } } }'
```
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/clusters' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{ "name": "gkecluster-{{username}}-{{$randomInt}}",
        "secret_id": "{{secret_id}}",
        "cloud": "google",
        "profile_name": "default" }'
```
Here is a sample cURL request for creating an AWS profile:
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/profiles/cluster' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "{{profile_name}}",
        "location": "eu-west-1",
        "cloud": "amazon",
        "properties": {
          "amazon": {
            "master": { "instanceType": "m4.xlarge", "image": "ami-16bfeb6f" },
            "nodePools": {
              "pool1": { "instanceType": "m4.xlarge", "spotPrice": "0.2", "minCount": 1, "maxCount": 2, "image": "ami-16bfeb6f" },
              "pool2": { "instanceType": "m4.xlarge", "spotPrice": "0.2", "minCount": 3, "maxCount": 5, "image": "ami-16bfeb6f" }
            }
          }
        }
      }'
```
To create an Azure profile, send a request to the Create profile endpoint with the following body:
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/profiles/cluster' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "{{profile_name}}",
        "location": "westeurope",
        "cloud": "azure",
        "properties": {
          "azure": {
            "kubernetesVersion": "1.9.2",
            "nodePools": {
              "pool1": { "count": 1, "nodeInstanceType": "Standard_D2_v2" },
              "pool2": { "count": 2, "nodeInstanceType": "Standard_D2_v2" }
            }
          }
        }
      }'
```
To create a Google profile, send a request to the Create profile endpoint with the following body:
```bash
curl -g --request POST \
  --url 'http://{{url}}/api/v1/orgs/{{orgId}}/profiles/cluster' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/json' \
  -d '{
        "name": "{{profile_name}}",
        "location": "us-central1-a",
        "cloud": "google",
        "properties": {
          "google": {
            "master": { "version": "1.9" },
            "nodeVersion": "1.9",
            "nodePools": {
              "pool1": { "count": 1, "nodeInstanceType": "n1-standard-2" },
              "pool2": { "count": 2, "nodeInstanceType": "n1-standard-2" }
            }
          }
        }
      }'
```
We hope this helps you spin up Kubernetes clusters quickly, on any provider, using a unified and secure interface. We'll stop here, since this is already quite a long post. However, if you are interested in all the features we provide with the Pipeline Platform, please make sure you read our previous posts or check out our GitHub repositories.
If you'd like to learn more about Banzai Cloud, take a look at the other posts on this blog, or at the Pipeline, Hollowtrees and Bank-Vaults projects.