Published on 07/28/2020
Last updated on 05/03/2024

Manage your AWS GovCloud Kubernetes clusters with Pipeline


Government organizations and institutions have IT infrastructure requirements and goals similar to those of commercial enterprises: the infrastructure must be flexible enough to adapt to the changing needs of the organization, easy to maintain and monitor, scalable to meet changing workload requirements, highly available and resilient to failure, and of course secure enough to protect the sensitive data such organizations must process. In addition, they must meet the requirements of various national and state-level regulations, like the Federal Risk and Authorization Management Program (FedRAMP), the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG), the Federal Information Security Management Act (FISMA), and other legislation. Beyond government organizations themselves, other entities such as contractors, research organizations, educational institutions, and other U.S. customers that run sensitive workloads in the cloud can also be affected by these regulations.

Modern IT infrastructures are increasingly cloud-based, and managing large deployments in the cloud is a challenge in itself. Cloud providers have been trying to assist these efforts by offering cloud solutions that comply with these requirements, for example, AWS GovCloud, Google Cloud for government, or Azure Government Cloud Computing. However, since most of these solutions lack higher-level integrations, it is useful to run higher-level management tools on top of them to automate the provisioning of Kubernetes clusters using a versatile control plane, especially if you need to manage a hybrid cloud or clusters in a multicloud environment.

Banzai Cloud Pipeline is a management platform and control plane that provides the many (40+) components required for these tasks and makes them work together seamlessly. It provides all the necessary glue code for configuration, resiliency, security, scaling, and external integrations, as well as a rich UI, CLI, and API to manage everything with ease. This blog post describes how to run the Banzai Cloud Pipeline platform in the AWS GovCloud US regions. Note that this is a lengthy and somewhat complicated process involving several steps. If you run into problems, feel free to ask for help on our #pipeline-support Slack channel. The procedure involves the following high-level steps:
  1. Installing the tools and getting the required accounts
  2. Provisioning an EKS cluster on AWS GovCloud for the Pipeline control plane
  3. Installing the Pipeline control plane on the EKS cluster
  4. Using Pipeline to create and manage clusters

Tools and prerequisites

  • You will need access to two separate AWS accounts, their access keys, and the aws-cli tool (preferably the latest version) configured to use these credentials to complete the installation:
    • one with GovCloud access
    • one with access to the us-east-1 region
    The latter is needed because AWS product and price information is only available in us-east-1, even for the GovCloud regions.
  • kubectl (preferably the latest)
  • aws-iam-authenticator installed in your PATH
  • JQ for scripting (optional)
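If you want to keep the two credential sets cleanly separated, one option is to store them as named AWS CLI profiles. This is just a convenience sketch, not part of the official procedure; the profile names, keys, and file location below are made-up examples:

```shell
# Two named profiles (keys are placeholders), kept in a local file so this
# example doesn't touch your real ~/.aws/credentials.
cat > ./aws-credentials <<'EOF'
[govcloud]
aws_access_key_id = AKIAEXAMPLEGOVCLOUD
aws_secret_access_key = example-govcloud-secret

[commercial]
aws_access_key_id = AKIAEXAMPLEUSEAST1
aws_secret_access_key = example-us-east-1-secret
EOF

# Point the AWS CLI at this file and pick the active profile:
export AWS_SHARED_CREDENTIALS_FILE=$PWD/aws-credentials
export AWS_PROFILE=govcloud   # switch to 'commercial' for us-east-1 calls
```

With this in place, the aws commands in this guide run against the GovCloud account until you change AWS_PROFILE.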

Note: This guide expects you to run every command in the same terminal session, because some steps depend on the output of earlier steps. (You can supply those outputs manually in a new terminal, just keep that in mind.)
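If you do switch terminals, a simple trick is to append each exported value to a small env file and source it in the new session. Again a convenience sketch only (the role ARN below is made up):

```shell
# Persist a value after a step completes...
cat > ./pipeline-env.sh <<'EOF'
export CLUSTER_ROLE="arn:aws-us-gov:iam::123456789012:role/example-cluster-role"
EOF

# ...and restore it in a new terminal session:
. ./pipeline-env.sh
echo "$CLUSTER_ROLE"
```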

  1. Pipeline comes with an installer tool, integrated into the banzai CLI tool. Install it with the following command:
    curl https://getpipeline.sh | sh
    For other installation options, see the banzai CLI documentation.
  2. Create a new directory to work in:
    mkdir pipeline-govcloud-poc
    cd pipeline-govcloud-poc

Provision an EKS cluster on AWS GovCloud {#eks-cluster-govcloud}

Note: The control plane installer of Pipeline Enterprise supports HA EKS and EC2 PKE installation with RDS/Aurora provisioning. For the sake of simplicity, this blog post covers using EKS, but other Kubernetes distributions (like PKE) are also supported.
To install Pipeline into the AWS GovCloud regions, you need an existing Kubernetes cluster to install Pipeline on. Complete the following steps to manually provision an EKS cluster for Pipeline. For more details and customization options, see the official AWS documentation. Make sure that your local environment is set up for running AWS commands (for example, that the default region is properly configured). You will also need an access key/secret key pair for accessing the Pipeline cluster.


First, you have to create certain global resources required for provisioning EKS clusters.
Note: This guide uses CloudFormation for creating most of the cloud resources.
    1. Create a cluster IAM role. The cluster IAM role is required for EKS clusters to manage AWS resources.
      1. Download the amazon-eks-cluster-role.yaml template attached to this post, then create the stack from it:
        wget {{< posturl >}}/amazon-eks-cluster-role.yaml
        aws cloudformation create-stack --stack-name eks-cluster-role --template-body file://amazon-eks-cluster-role.yaml --capabilities CAPABILITY_IAM
      2. Wait for the stack to complete:
        aws cloudformation wait stack-create-complete --stack-name eks-cluster-role
      3. Save the output of the stack for later use:
        export CLUSTER_ROLE=$(aws cloudformation describe-stacks --stack-name eks-cluster-role | jq -r '.Stacks[0].Outputs[0].OutputValue')
    2. Create a Worker node IAM role. The Worker node IAM role is required for kubelet to make API calls to AWS.
      1. Run the following command:
        aws cloudformation create-stack --stack-name eks-node-group-instance-role --template-url https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-06-10/amazon-eks-nodegroup-role.yaml --capabilities CAPABILITY_IAM
      2. Wait for the stack to complete:
        aws cloudformation wait stack-create-complete --stack-name eks-node-group-instance-role
      3. Save the output of the stack for later use:
        export WORKER_NODE_ROLE=$(aws cloudformation describe-stacks --stack-name eks-node-group-instance-role | jq -r '.Stacks[0].Outputs[0].OutputValue')
    3. Set up networking. Use a separate VPC for the Pipeline cluster.
      1. Run the following command:
        aws cloudformation create-stack --stack-name pipeline-eks-vpc --template-url https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-06-10/amazon-eks-vpc-sample.yaml
      2. Wait for the stack to complete:
        aws cloudformation wait stack-create-complete --stack-name pipeline-eks-vpc
      3. Save the output of the stack for later use:
        export PIPELINE_VPC_SECURITYGROUP=$(aws cloudformation describe-stacks --stack-name pipeline-eks-vpc | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "SecurityGroups") | .OutputValue')
        export PIPELINE_VPC_SUBNETS=$(aws cloudformation describe-stacks --stack-name pipeline-eks-vpc | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "SubnetIds") | .OutputValue')
    4. It's time to create the EKS cluster itself.
      1. Since this is a single resource, you can use the AWS CLI:
        aws eks create-cluster \
            --name pipeline-eks \
            --kubernetes-version 1.17 \
            --role-arn $CLUSTER_ROLE \
            --resources-vpc-config subnetIds=$PIPELINE_VPC_SUBNETS,securityGroupIds=$PIPELINE_VPC_SECURITYGROUP
      2. Wait for the cluster to become ready. This will take 5-15 minutes.
        aws eks wait cluster-active --name pipeline-eks
    5. Create a worker node group. The cluster also needs a worker node group to run workloads on. For the sake of simplicity, use a managed node group.
      1. Run the following command:
        aws eks create-nodegroup \
            --cluster-name pipeline-eks \
            --nodegroup-name pool0 \
            --kubernetes-version 1.17 \
            --node-role $WORKER_NODE_ROLE \
            --subnets $(echo $PIPELINE_VPC_SUBNETS | sed 's/,/ /g') \
            --instance-types c5.large \
            --ami-type AL2_x86_64 \
            --scaling-config minSize=3,maxSize=5,desiredSize=3
      2. Wait for the node group to become ready (this may take some time):
        aws eks wait nodegroup-active --cluster-name pipeline-eks --nodegroup-name pool0
Pipeline-managed EKS clusters use self-managed node groups, because they give better flexibility and more control over the nodes.
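To make the jq filters and the --subnets transformation above easier to follow, here is what they do to a canned describe-stacks payload (all IDs are made up for illustration):

```shell
# A trimmed, hypothetical `aws cloudformation describe-stacks` response:
SAMPLE='{"Stacks":[{"Outputs":[
  {"OutputKey":"SecurityGroups","OutputValue":"sg-0123456789abcdef0"},
  {"OutputKey":"SubnetIds","OutputValue":"subnet-aaa,subnet-bbb,subnet-ccc"}]}]}'

# Select one output value by key, as done for the VPC stack:
SUBNETS=$(echo "$SAMPLE" | jq -r '.Stacks[0].Outputs[] | select(.OutputKey == "SubnetIds") | .OutputValue')
echo "$SUBNETS"                      # subnet-aaa,subnet-bbb,subnet-ccc

# `aws eks create-nodegroup --subnets` expects space-separated IDs,
# hence the sed substitution:
echo "$SUBNETS" | sed 's/,/ /g'      # subnet-aaa subnet-bbb subnet-ccc
```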

Accessing the cluster

At this point you should already have the cluster up and running. Before moving on to installing Pipeline itself, you need to configure access to the cluster. This requires some manual editing, as described in the following steps.
  1. Start by creating a plain kubeconfig file:
    export KUBECONFIG=$PWD/kubeconfig
    aws eks update-kubeconfig --name pipeline-eks
    Note: Make sure to get the export right, otherwise the following commands might mess up your $HOME/.kube/config file.
  2. Here comes the manual editing part: open the created kubeconfig file and find the users part in it. It should look something like this:
    users:
    - name: arn:aws-us-gov:eks:us-gov-west-1:ACCOUNTID:cluster/pipeline-eks
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          args:
          - --region
          - us-gov-west-1
          - eks
          - get-token
          - --cluster-name
          - pipeline-eks
          command: aws
  3. Replace the exec section with the following:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          args:
          - token
          - -i
          - pipeline-eks
          command: aws-iam-authenticator
          env:
          - name: AWS_REGION
            value: YOUR_REGION (e.g. us-gov-west-1)
          - name: AWS_ACCESS_KEY_ID
            value: YOUR AWS ACCESS KEY ID
          - name: AWS_SECRET_ACCESS_KEY
            value: YOUR AWS SECRET ACCESS KEY
  4. Replace the placeholders under the env section with your own values.
  5. Check if you have access to the cluster:
    kubectl get pods --all-namespaces
    You should see a bunch of pods, some of them beginning with aws-node-:
    NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
    kube-system   aws-node-8tb7l             1/1     Running   0          17m
    kube-system   aws-node-m2hjq             1/1     Running   0          17m
    kube-system   aws-node-mmz6x             1/1     Running   0          17m
    kube-system   coredns-78dbddd469-qbq96   1/1     Running   0          31m
    kube-system   coredns-78dbddd469-tnfxw   1/1     Running   0          31m
    kube-system   kube-proxy-dhbwg           1/1     Running   0          17m
    kube-system   kube-proxy-j22z8           1/1     Running   0          17m
    kube-system   kube-proxy-rqcvq           1/1     Running   0          17m
    Note: This step was required because the Kubernetes provider of the Pipeline installer expects a self-contained configuration that doesn't require any additional parameters; it will still use aws-iam-authenticator under the hood to request the actual token from AWS.
This is the end of the Provisioning an EKS cluster section. If everything went well, you can move on to installing Pipeline (it will be much shorter, promise :)).

Install Pipeline on your GovCloud EKS cluster {#install-pipeline}

Now you are ready to install the Banzai Cloud Pipeline platform on your EKS cluster hosted in an AWS GovCloud region.
Note: Make sure your KUBECONFIG environment variable still points to the correct file.
  1. The first thing to do is to initialize a workspace for the installer:
    banzai pipeline init --provider=k8s --workspace pipeline-govcloud-poc
    Technically this step is not necessary, but it gives you an opportunity to check and modify some configuration, which is actually necessary in this case. In the installer's terminology, a workspace is a directory that contains all the configuration and state required to install and upgrade Pipeline.
  2. The newly created workspace should be under $HOME/.banzai/pipeline/pipeline-govcloud-poc. Inside this directory, you can find two items:
    ❯ ll $HOME/.banzai/pipeline/pipeline-govcloud-poc
    total 8
    drwx------   4 mark  staff   128B Jul  6 00:04 ./
    drwx------  12 mark  staff   384B Jul  6 00:04 ../
    drwx------   3 mark  staff    96B Jul  6 00:04 .kube/
    -rw-------   1 mark  staff   269B Jul  6 00:04 values.yaml
    • The .kube directory contains a copy of your kube config.
    • The values.yaml file contains all the parameters required by the installer:
      ❯ cat $HOME/.banzai/pipeline/pipeline-govcloud-poc/values.yaml
      defaultStorageBackend: mysql
      externalHost: auto
      ingressHostPort: false
          image: banzaicloud/pipeline-installer@sha256:815eaaf19da91e5fedd1bc5f76b32c050f33116c0ed440136ec4cd6a2726f7b3
      provider: k8s
      tlsInsecure: true
      uuid: b23002c9-caf2-47b0-97da-28fa705dda92
    Note: This directory will hold the state of your Pipeline installation, so make sure it doesn't get lost. Since it contains credentials, it's not suitable to store in a git repository. The production version of the Pipeline installer supports remote state storage and secret encryption using KMS.
  3. Open the $HOME/.banzai/pipeline/pipeline-govcloud-poc/values.yaml file in your editor and add the following content at the end of it:
              "YOUR GOVCLOUD REGION (eg. us-gov-west-1)"
                "YOUR GOVCLOUD REGION (eg. us-gov-west-1)"
      enabled: true
          enabled: false
          enabled: true
          region: "YOUR GOVCLOUD REGION (eg. us-gov-west-1)"
          accessKey: "YOUR ACCESS KEY ID"
          secretKey: "YOUR SECRET ACCESS KEY"
              "YOUR ACCESS KEY ID with access to us-east-1"
              "YOUR SECRET ACCESS KEY with access to us-east-1"
      enabled: true
  4. Replace the placeholders with your values. Remember, you need access to two separate AWS accounts:
    • one with GovCloud access
    • one with access to us-east-1 region
    The latter is required because AWS product and price information is only available in us-east-1, even for the GovCloud regions.
  5. Once you have edited the values.yaml file, you can start the installation process:
    banzai pipeline up --workspace pipeline-govcloud-poc
    It's going to take a few minutes, so feel free to grab a cup of coffee :)
    Once the installer is ready, you should see the following output with an interactive question. Take note of this information as this is how you will be able to access Pipeline. (To display this information later, run the banzai pipeline up command again.)
    Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
    pipeline-address = https://a7da3d02615692c87ad7d20666653101-219213321.us-gov-west-1.elb.amazonaws.com/
    pipeline-password = bgMPtmFFG3ALp5gc
    pipeline-username = admin@example.com
    INFO[0181] Pipeline is ready at https://a7da3d02615692c87ad7d20666653101-219213321.us-gov-west-1.elb.amazonaws.com/.
    ? Do you want to login this CLI tool now? (Y/n)
  6. Generally, you will want to go ahead and log in, but you don't have to. You can always do that later by running banzai login --endpoint YOUR_PIPELINE_ENDPOINT/pipeline. But let's select Yes for now.
  7. You will see a warning that the certificate cannot be verified. This is normal, because the evaluation version of Pipeline uses self-signed certs: select Yes again.
  8. When the installer asks to log in using the browser, select Yes again, then enter the credentials the installer displayed earlier.
That's it, now you should be logged in to Pipeline in the CLI tool. You can also log in to the dashboard using the URL in the output. Now that you have installed Pipeline, you can create your first cluster from Pipeline.

Create your first cluster

The point of provisioning the EKS cluster and installing Pipeline on it was to be able to manage clusters from Pipeline. So in this section, you will create your first PKE cluster on Pipeline. For more details, check the official Pipeline documentation.
  1. First, you need to create an Amazon type secret in Pipeline. To use your current credentials in your current environment, run the following command:
    banzai secret create --name=aws --magic --type=amazon
    You can read more about credentials and access in the [Pipeline documentation](https://banzaicloud.com/docs/pipeline/secrets/providers/pke_aws_auth_credentials/).
  2. Create a PKE cluster descriptor. The quickest way to create a new cluster is by downloading the cluster.json file that comes with this post. A few key details from the cluster descriptor:
    • location controls the region for the cluster. Make sure to keep this in sync with zones under node pools.
    • secretName should refer to the secret that you created in the previous step.
    • You can add more node pools under nodepools if you want.
    • By default, Pipeline launches on-demand nodes. You can use spot instances by setting a spotPrice in the node pool config.
    Note: Normally we use pre-built PKE images that already contain PKE and the necessary components. When using a plain Ubuntu image, PKE installs everything on the fly. In a production environment, we can provide instructions for building custom images.
  3. Run the following command:
    wget {{< posturl >}}/cluster.json
    banzai cluster create --name first-pke-cluster -f cluster.json
    It usually takes about 5-10 minutes to launch a cluster.
  4. Once your cluster is ready, you can quickly access it with the following command:
    banzai cluster shell --cluster-name first-pke-cluster
    It gives you a new shell preloaded with the appropriate kube config.
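To illustrate the fields discussed above, here is a heavily trimmed sketch of what the descriptor's key parts might look like. This is not a complete, working file (the downloadable cluster.json contains the full set of required fields), and all values are placeholders:

```json
{
  "name": "first-pke-cluster",
  "location": "us-gov-west-1",
  "secretName": "aws",
  "nodepools": [
    {
      "name": "pool1",
      "zones": ["us-gov-west-1a"],
      "spotPrice": "0.10"
    }
  ]
}
```

Note how the zones of the node pool stay within the region given in location, and how spotPrice is all it takes to switch a pool from on-demand to spot instances.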

Testing Pipeline

Now that you have a cluster managed by Banzai Cloud Pipeline, you can play around with it, for example, deploy some workload, or enable some of the integrated services on the cluster - either from the command line, or using the Pipeline web interface.

Cleaning up

Every once in a while (especially during evaluation) you might want to start over, for example, to move to a different account. This section shows you how to deprovision everything in the correct order, without leaving any leftover resources.
Note: For this you don't need to remain in the same terminal (as mentioned at the beginning of this guide).

Delete the clusters

Before destroying the Pipeline instance, make sure that every cluster you don't want to keep is deleted, because once Pipeline is down, there is no way to get its state back, and you would have to delete any remaining clusters manually.

  1. List your clusters with the banzai CLI tool:
    ❯ banzai cluster list
    Id  Name               Distribution  Location       CreatorName        CreatedAt             Status   StatusMessage
    1   first-pke-cluster  pke           us-gov-west-1  admin@example.com  2020-07-06T13:58:54Z  RUNNING  Cluster is running
  2. Delete them with the same tool:
    banzai cluster delete --cluster-name first-pke-cluster
  3. Wait for all the clusters to be deleted. You can use the list command to verify that.

Delete Pipeline

You can remove Pipeline from the EKS cluster with the following command:
banzai pipeline down --workspace pipeline-govcloud-poc
This will deprovision all Pipeline components from the cluster, including the database, so make sure this is what you want.

Delete the EKS cluster

You can delete the EKS cluster that hosted Pipeline with a series of AWS commands.
  1. First, delete the node group:
    aws eks delete-nodegroup --cluster-name pipeline-eks --nodegroup-name pool0
  2. Wait for the node group to be deleted:
    aws eks wait nodegroup-deleted --cluster-name pipeline-eks --nodegroup-name pool0
  3. Delete the EKS cluster itself:
    aws eks delete-cluster --name pipeline-eks
  4. Wait for the cluster to be deleted:
    aws eks wait cluster-deleted --name pipeline-eks
  5. Delete the cluster network CloudFormation stack:
    aws cloudformation delete-stack --stack-name pipeline-eks-vpc
  6. Wait for the stack to be deleted:
    aws cloudformation wait stack-delete-complete --stack-name pipeline-eks-vpc
    Note: Sometimes EKS fails to delete load balancers, causing the network deprovisioning to fail. If that happens, delete the load balancers manually and run the stack delete command again.
  7. If you don't want to create new Pipeline instances/EKS clusters, you can delete the global cluster and worker node roles as well:
    aws cloudformation delete-stack --stack-name eks-node-group-instance-role
    aws cloudformation delete-stack --stack-name eks-cluster-role
  8. The last thing you might want to consider deleting is the pke-global stack that Pipeline creates the first time you launch a PKE cluster in an account:
    aws cloudformation delete-stack --stack-name pke-global
    Pipeline will detect if it's missing and will recreate it if necessary.
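The whole teardown sequence above can be sketched as a single function. The DRY_RUN switch is our own addition for previewing the commands without executing anything; run the function with DRY_RUN unset (and real credentials) to actually delete the resources:

```shell
# Teardown in dependency order: node group -> cluster -> VPC stack -> roles.
cleanup() {
  # Print the command instead of running it when DRY_RUN=1:
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }
  run aws eks delete-nodegroup --cluster-name pipeline-eks --nodegroup-name pool0
  run aws eks wait nodegroup-deleted --cluster-name pipeline-eks --nodegroup-name pool0
  run aws eks delete-cluster --name pipeline-eks
  run aws eks wait cluster-deleted --name pipeline-eks
  run aws cloudformation delete-stack --stack-name pipeline-eks-vpc
  run aws cloudformation wait stack-delete-complete --stack-name pipeline-eks-vpc
  run aws cloudformation delete-stack --stack-name eks-node-group-instance-role
  run aws cloudformation delete-stack --stack-name eks-cluster-role
  run aws cloudformation delete-stack --stack-name pke-global
}

DRY_RUN=1 cleanup   # preview only; prints each command prefixed with '+'
```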


This blog post has demonstrated how to install and run the Banzai Cloud Pipeline container management platform in a restricted AWS GovCloud region, often used by government organizations, contractors, research organizations, educational institutions, and other U.S. customers that run sensitive workloads in the cloud. Using Pipeline gives your organization speed in delivering clusters, along with the automation, integrated services, and flexibility provided by the platform. If you are interested in testing Banzai Cloud Pipeline in AWS GovCloud and need help, or your organization uses a different cloud provider's government solution, contact us.

About PKE

Banzai Cloud Pipeline Kubernetes Engine (PKE) is a simple, secure and powerful CNCF-certified Kubernetes distribution, the preferred Kubernetes run-time of the Pipeline platform. It was designed to work on any cloud, VM or on bare metal nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes an ever-increasing number of cloud and platform integrations.

About Banzai Cloud Pipeline

Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures, such as multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, and CI/CD, are default features of the Pipeline platform.