Government organizations and institutions have requirements and goals for their IT infrastructure similar to those of commercial enterprises: it must be flexible enough to adapt to the changing needs of the organization, easy to maintain and monitor, scalable to meet changing workload requirements, highly available and resilient to errors, and of course secure, to protect the sensitive data such organizations must process. In addition, they must meet the requirements of various national and state-level regulations, like the Federal Risk and Authorization Management Program (FedRAMP), the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG), the Federal Information Security Management Act (FISMA), and other legislation.

As we all know, modern IT infrastructures are increasingly cloud-based, and managing large deployments in the cloud is a challenge in itself. Cloud providers have been trying to assist these efforts by offering cloud solutions that comply with these requirements, for example, AWS GovCloud, Google Cloud for government, or Azure Government Cloud Computing. However, since most of these solutions are lacking in higher-level integrations, it is useful to run higher-level management tools on top of them to automate the provisioning of Kubernetes clusters using a versatile control plane, especially if you need to manage a hybrid cloud or clusters in a multicloud environment. In addition to government organizations, other entities such as contractors, research organizations, educational institutions, and other U.S. customers that run sensitive workloads in the cloud can also be affected by the above-mentioned regulations.

Banzai Cloud Pipeline is a management platform and control plane that provides the many (40+) components required for these tasks and makes them work together seamlessly. It provides all the necessary glue code for configuration, resiliency, security, scaling, and external integrations, along with a rich UI, CLI, and API to manage them with ease.

This blog post describes how to run the Banzai Cloud Pipeline platform in the AWS GovCloud US regions. Note that this is a lengthy and somewhat complicated process involving several steps. If you run into problems, feel free to ask for help on our #pipeline-support Slack channel. The procedure involves the following high-level steps:

- Install the banzai CLI tool, which includes the Pipeline installer.
- Manually provision an EKS cluster in a GovCloud region to host Pipeline.
- Install Pipeline on that cluster.
- Create your first PKE cluster from Pipeline.
- Optionally, deprovision everything when you are done.
To complete the installation, you will need the aws-cli tool (preferably the latest version) configured with the related access keys, and access to two separate AWS accounts:

- an AWS GovCloud (US) account, where Pipeline and the clusters will run, and
- a standard AWS account with access to the us-east-1 region.
The latter is needed because AWS product and price information is only available in us-east-1, even for the GovCloud regions.
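Since you will be switching between two sets of credentials, it's easy to lose track of which one is active. A quick way to check (this is a standard AWS CLI call, not specific to Pipeline):

```bash
# Prints the account ID and ARN of the identity your current
# credentials belong to.
aws sts get-caller-identity
```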
Note: This guide expects you to run every command in the same terminal session, because some steps depend on the output of earlier steps. (You can supply those outputs manually in a new terminal, just keep that in mind.)
Pipeline comes with an installer tool, integrated into the banzai CLI tool. Install it with the following command:
curl https://getpipeline.sh | sh
For other installation options, see the banzai CLI documentation.
Create a new directory to work in:
mkdir pipeline-govcloud-poc
cd pipeline-govcloud-poc
Note: the control plane installer of Pipeline Enterprise supports HA EKS and EC2 PKE installation with RDS/Aurora provisioning. For the sake of simplicity, this blog post covers using EKS, but other Kubernetes distributions (like PKE) are also supported.
To install Pipeline into the AWS GovCloud regions, you need an existing Kubernetes cluster to install Pipeline on. Complete the following steps to manually provision an EKS cluster for Pipeline. For more details and customization options for the manual provisioning, see the official AWS documentation. Make sure that you have your local environment set up for running AWS commands (for example, the default region is properly configured). You will also need an access key and secret key pair for accessing the Pipeline cluster.
First, you have to create certain global resources required for provisioning EKS clusters.
Note: This guide uses CloudFormation for creating most of the cloud resources.
Download the template that comes with this post, then create the stack from it:
curl -O {{< posturl >}}/amazon-eks-cluster-role.yaml
aws cloudformation create-stack --stack-name eks-cluster-role --template-body file://amazon-eks-cluster-role.yaml --capabilities CAPABILITY_IAM
Wait for the stack to complete:
aws cloudformation wait stack-create-complete --stack-name eks-cluster-role
Save the output of the stack for later use:
export CLUSTER_ROLE=$(aws cloudformation describe-stacks --stack-name eks-cluster-role | jq -r '.Stacks[0].Outputs[0].OutputValue')
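As a quick sanity check, you can print the saved value; in GovCloud, role ARNs use the aws-us-gov partition:

```bash
# The variable should contain a role ARN along the lines of
# arn:aws-us-gov:iam::ACCOUNTID:role/eks-cluster-role-...
echo $CLUSTER_ROLE
```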
Run the following command:
aws cloudformation create-stack --stack-name eks-node-group-instance-role --template-url https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-06-10/amazon-eks-nodegroup-role.yaml --capabilities CAPABILITY_IAM
Wait for the stack to complete:
aws cloudformation wait stack-create-complete --stack-name eks-node-group-instance-role
Save the output of the stack for later use:
export WORKER_NODE_ROLE=$(aws cloudformation describe-stacks --stack-name eks-node-group-instance-role | jq -r '.Stacks[0].Outputs[0].OutputValue')
Run the following command:
aws cloudformation create-stack --stack-name pipeline-eks-vpc --template-url https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-06-10/amazon-eks-vpc-sample.yaml
Wait for the stack to complete:
aws cloudformation wait stack-create-complete --stack-name pipeline-eks-vpc
Save the output of the stack for later use:
export PIPELINE_VPC_SECURITYGROUP=$(aws cloudformation describe-stacks --stack-name pipeline-eks-vpc | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "SecurityGroups") | .OutputValue')
export PIPELINE_VPC_SUBNETS=$(aws cloudformation describe-stacks --stack-name pipeline-eks-vpc | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "SubnetIds") | .OutputValue')
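Again, a quick echo helps catch an empty variable before it silently breaks a later command:

```bash
# One security group ID and a comma-separated list of subnet IDs
# are expected here.
echo $PIPELINE_VPC_SECURITYGROUP
echo $PIPELINE_VPC_SUBNETS
```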
The EKS cluster itself is a single resource, so you can create it directly with the AWS CLI:
aws eks create-cluster \
--name pipeline-eks \
--kubernetes-version 1.17 \
--role-arn $CLUSTER_ROLE \
--resources-vpc-config subnetIds=$PIPELINE_VPC_SUBNETS,securityGroupIds=$PIPELINE_VPC_SECURITYGROUP
Wait for the cluster to become ready. This will take 5-15 minutes.
aws eks wait cluster-active --name pipeline-eks
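If you want to check on the cluster in the meantime, you can query its status directly:

```bash
# Shows the provisioning status and the API server endpoint; the status
# should read ACTIVE before you continue.
aws eks describe-cluster --name pipeline-eks \
    --query 'cluster.[status,endpoint]' --output text
```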
Run the following command:
aws eks create-nodegroup \
--cluster-name pipeline-eks \
--nodegroup-name pool0 \
--kubernetes-version 1.17 \
--node-role $WORKER_NODE_ROLE \
--subnets $(echo $PIPELINE_VPC_SUBNETS | sed 's/,/ /g') \
--instance-types c5.large \
--ami-type AL2_x86_64 \
--scaling-config minSize=3,maxSize=5,desiredSize=3
Wait for the node group to become ready (this may take some time):
aws eks wait nodegroup-active --cluster-name pipeline-eks --nodegroup-name pool0
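As with the cluster, you can poll the node group's status while you wait:

```bash
# Should print ACTIVE once all nodes have joined the cluster.
aws eks describe-nodegroup --cluster-name pipeline-eks \
    --nodegroup-name pool0 --query 'nodegroup.status' --output text
```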
Pipeline-managed EKS clusters use self-managed node groups, because they provide more flexibility and control over the nodes.
At this point you should already have the cluster up and running. Before moving on to installing Pipeline itself, you need to configure access to the cluster. This requires some manual editing, as described in the following steps.
Start by creating a plain kubeconfig file:
export KUBECONFIG=$PWD/kubeconfig
aws eks update-kubeconfig --name pipeline-eks
Note: Make sure to get the export right, otherwise the following commands might mess up your $HOME/.kube/config file.
Here comes the manual editing part: open the created kubeconfig file and find the users section in it. It should look something like this:
users:
- name: arn:aws-us-gov:eks:us-gov-west-1:ACCOUNTID:cluster/pipeline-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-gov-west-1
      - eks
      - get-token
      - --cluster-name
      - pipeline-eks
      command: aws
Replace the exec section with the following:
exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  args:
  - token
  - -i
  - pipeline-eks
  command: aws-iam-authenticator
  env:
  - name: AWS_REGION
    value: YOUR_REGION (e.g. us-gov-west-1)
  - name: AWS_ACCESS_KEY_ID
    value: YOUR AWS ACCESS KEY ID
  - name: AWS_SECRET_ACCESS_KEY
    value: YOUR AWS SECRET ACCESS KEY
Replace the placeholders in the env section with your own values. Check if you have access to the cluster:
kubectl get pods --all-namespaces
You should see a bunch of pods, some of them beginning with aws-node-:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-8tb7l 1/1 Running 0 17m
kube-system aws-node-m2hjq 1/1 Running 0 17m
kube-system aws-node-mmz6x 1/1 Running 0 17m
kube-system coredns-78dbddd469-qbq96 1/1 Running 0 31m
kube-system coredns-78dbddd469-tnfxw 1/1 Running 0 31m
kube-system kube-proxy-dhbwg 1/1 Running 0 17m
kube-system kube-proxy-j22z8 1/1 Running 0 17m
kube-system kube-proxy-rqcvq 1/1 Running 0 17m
Note: This step was required because the Kubernetes provider of the Pipeline installer expects a self-contained configuration that doesn't require any additional parameters; it still uses aws-iam-authenticator under the hood to request the actual token from AWS.
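You can also double-check that the edited configuration works outside of kubectl by invoking the authenticator directly (assuming the AWS_* variables from the kubeconfig are set in your shell, or your default credentials can access the cluster):

```bash
# Should print an ExecCredential JSON document containing a token
# for the pipeline-eks cluster.
aws-iam-authenticator token -i pipeline-eks
```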
This is the end of the Provisioning EKS cluster for Pipeline section. If everything went well, you can move on to installing Pipeline (it will be much shorter, promise :)).
Now you are ready to install the Banzai Cloud Pipeline platform on your EKS cluster hosted in an AWS GovCloud region.
Note: Make sure your KUBECONFIG environment variable still points to the correct file.
The first thing to do is to initialize a workspace for the installer:
banzai pipeline init --provider=k8s --workspace pipeline-govcloud-poc
Technically this step is not necessary, but it gives you an opportunity to check and modify some configuration, which is actually necessary in this case. In the installer's terminology, a workspace is a directory that contains all the configuration and state required to install and upgrade Pipeline.
The newly created workspace should be under $HOME/.banzai/pipeline/pipeline-govcloud-poc. Inside this directory, you can find two items:
❯ ll $HOME/.banzai/pipeline/pipeline-govcloud-poc
total 8
drwx------ 4 mark staff 128B Jul 6 00:04 ./
drwx------ 12 mark staff 384B Jul 6 00:04 ../
drwx------ 3 mark staff 96B Jul 6 00:04 .kube/
-rw------- 1 mark staff 269B Jul 6 00:04 values.yaml
The values.yaml file contains all the parameters required by the installer:
❯ cat $HOME/.banzai/pipeline/pipeline-govcloud-poc/values.yaml
defaultStorageBackend: mysql
externalHost: auto
ingressHostPort: false
installer:
image: banzaicloud/pipeline-installer@sha256:815eaaf19da91e5fedd1bc5f76b32c050f33116c0ed440136ec4cd6a2726f7b3
provider: k8s
tlsInsecure: true
uuid: b23002c9-caf2-47b0-97da-28fa705dda92
Note: This directory will hold the state of your Pipeline installation, so make sure it doesn't get lost. Since it contains credentials, it's not suitable to store in a git repository. The production version of the Pipeline installer supports remote state storage and secret encryption using KMS.
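Until you move to remote state storage, a plain archive is one simple way to keep a copy of the workspace (shown here with tar; adapt it to your own backup practice):

```bash
# Back up the whole workspace, including values.yaml and the kubeconfig.
# The archive contains credentials, so store it as carefully as the
# directory itself.
tar czf pipeline-govcloud-poc-workspace.tgz \
    -C $HOME/.banzai/pipeline pipeline-govcloud-poc
```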
Open the $HOME/.banzai/pipeline/pipeline-govcloud-poc/values.yaml file in your editor and add the following content at the end of it:
pipeline:
  configuration:
    cloud:
      amazon:
        defaultRegion: "YOUR GOVCLOUD REGION (eg. us-gov-west-1)"
    distribution:
      pke:
        amazon:
          globalRegion: "YOUR GOVCLOUD REGION (eg. us-gov-west-1)"
cloudinfo:
  enabled: true
  image:
  app:
    vault:
      enabled: false
    providers:
      amazon:
        enabled: true
        region: "YOUR GOVCLOUD REGION (eg. us-gov-west-1)"
        accessKey: "YOUR ACCESS KEY ID"
        secretKey: "YOUR SECRET ACCESS KEY"
        pricing:
          accessKey: "YOUR ACCESS KEY ID with access to us-east-1"
          secretKey: "YOUR SECRET ACCESS KEY with access to us-east-1"
telescopes:
  enabled: true
Replace the placeholders with your values. Remember, you need access to two separate AWS accounts:

- the GovCloud account where Pipeline and the clusters run, and
- a standard AWS account with access to the us-east-1 region.

The latter is required because AWS product and price information is only available in us-east-1, even for the GovCloud regions.
Once you have edited the values.yaml file, you can start the installation process:
banzai pipeline up --workspace pipeline-govcloud-poc
It's going to take a few minutes, so feel free to grab a cup of coffee :)
Once the installer is ready, you should see the following output with an interactive question. Take note of this information, as this is how you will be able to access Pipeline. (To display this information later, run the banzai pipeline up command again.)
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
...
Outputs:
pipeline-address = https://a7da3d02615692c87ad7d20666653101-219213321.us-gov-west-1.elb.amazonaws.com/
pipeline-password = bgMPtmFFG3ALp5gc
pipeline-username = admin@example.com
INFO[0181] Pipeline is ready at https://a7da3d02615692c87ad7d20666653101-219213321.us-gov-west-1.elb.amazonaws.com/.
? Do you want to login this CLI tool now? (Y/n)
That's it, now you should be logged in to Pipeline in the CLI tool. You can also log in to the dashboard using the URL in the output. Now that you have installed Pipeline, you can create your first cluster from Pipeline.
First, you need to create an Amazon type secret in Pipeline. To use the credentials available in your current environment, you can use the following command:
```bash
banzai secret create --name=aws --magic --type=amazon
```
You can read more about credentials and access in the [Pipeline documentation](https://banzaicloud.com/docs/pipeline/secrets/providers/pke_aws_auth_credentials/).
Create a PKE cluster descriptor. The quickest way to create a new cluster is by downloading the cluster.json file that comes with this post. A few key details from the cluster descriptor:
By default, Pipeline launches on-demand nodes. You can use spot instances by setting a spotPrice in the node pool configuration (see the sketch below).
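For example, you could patch the downloaded descriptor with jq; note that the .properties.pke.nodePools path and the field names below are assumptions about the layout of cluster.json, so check the file before applying this:

```bash
# Hypothetical sketch: set a spot price on the first node pool of the
# descriptor. Adjust the path to match the actual structure of cluster.json.
jq '.properties.pke.nodePools[0].providerConfig.autoScalingGroup.spotPrice = "0.09"' \
    cluster.json > cluster-spot.json
```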
Note: Normally we use pre-built PKE images that already contain PKE and the necessary components. When using a plain Ubuntu image, PKE installs everything on the fly. In a production environment, we can provide instructions for building custom images.
Run the following command:
wget {{< posturl >}}/cluster.json
banzai cluster create --name first-pke-cluster -f cluster.json
It usually takes about 5-10 minutes to launch a cluster.
Once your cluster is ready, you can quickly access it with the following command:
banzai cluster shell --cluster-name first-pke-cluster
It gives you a new shell preloaded with the appropriate kube config.
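For example, inside that shell, plain kubectl commands already target the new cluster:

```bash
# Run inside the shell opened by `banzai cluster shell`; KUBECONFIG
# is preconfigured there, so this lists the PKE cluster's nodes.
kubectl get nodes
```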
Now that you have a cluster managed by Banzai Cloud Pipeline, you can play around with it, for example, deploy some workload, or enable some of the integrated services on the cluster - either from the command line, or using the Pipeline web interface.
Every once in a while (especially during evaluation) you might want to start over, for example, to move to a different account. This section shows you how to deprovision everything in the correct order, without leaving any leftover resources.
Note: For this you don't need to remain in the same terminal (as mentioned at the beginning of this guide).
Before destroying the Pipeline instance, make sure that every cluster is deleted (if you don't want to keep them), because once Pipeline is down, there is no way to get the state back, and you need to delete any remaining clusters manually.
List your clusters with the banzai CLI tool:
❯ banzai cluster list
Id Name Distribution Location CreatorName CreatedAt Status StatusMessage
1 first-pke-cluster pke us-gov-west-1 admin@example.com 2020-07-06T13:58:54Z RUNNING Cluster is running
Delete them with the same tool:
banzai cluster delete --cluster-name first-pke-cluster
You can remove Pipeline from the EKS cluster with the following command:
banzai pipeline down --workspace pipeline-govcloud-poc
This will deprovision all Pipeline components from the cluster, including the database, so make sure this is what you want.
You can delete the EKS cluster that hosted Pipeline with a series of AWS commands.
First, delete the node group:
aws eks delete-nodegroup --cluster-name pipeline-eks --nodegroup-name pool0
Wait for the node group to be deleted:
aws eks wait nodegroup-deleted --cluster-name pipeline-eks --nodegroup-name pool0
Delete the EKS cluster itself:
aws eks delete-cluster --name pipeline-eks
Wait for the cluster to be deleted:
aws eks wait cluster-deleted --name pipeline-eks
Delete the cluster network CloudFormation stack:
aws cloudformation delete-stack --stack-name pipeline-eks-vpc
Wait for the stack to be deleted:
aws cloudformation wait stack-delete-complete --stack-name pipeline-eks-vpc
Note: Sometimes EKS fails to delete load balancers, causing the network deprovisioning to fail. If that happens, delete the load balancers manually and run the stack delete command again.
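Load balancers created for Kubernetes Services are classic ELBs by default, so one way to hunt for leftovers is to list them together with their VPCs:

```bash
# Any load balancer still attached to the pipeline-eks VPC will block
# the deletion of the pipeline-eks-vpc stack.
aws elb describe-load-balancers \
    --query 'LoadBalancerDescriptions[].[LoadBalancerName,VPCId]' --output text
```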
If you don't want to create new Pipeline instances/EKS clusters, you can delete the global cluster and worker node roles as well:
aws cloudformation delete-stack --stack-name eks-node-group-instance-role
aws cloudformation delete-stack --stack-name eks-cluster-role
The last thing you might want to consider deleting is the pke-global stack that Pipeline creates the first time you launch a PKE cluster in an account:
aws cloudformation delete-stack --stack-name pke-global
Pipeline will detect if it's missing and will recreate it if necessary.
This blog post has demonstrated how to install and run the Banzai Cloud Pipeline container management platform in a restricted AWS GovCloud region, often used by government organizations, contractors, research organizations, educational institutions, and other U.S. customers that run sensitive workloads in the cloud. Using Pipeline gives your organization speed in delivering clusters, along with the extended automation, integrated services, and flexibility the platform provides. If you are interested in testing Banzai Cloud Pipeline in AWS GovCloud and need help, or if your organization uses the government solution of a different cloud provider, contact us.
Banzai Cloud Pipeline Kubernetes Engine (PKE) is a simple, secure and powerful CNCF-certified Kubernetes distribution, the preferred Kubernetes run-time of the Pipeline platform. It was designed to work on any cloud, VM or on bare metal nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes an ever-increasing number of cloud and platform integrations.
Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.