Published on 02/10/2019
Last updated on 03/21/2024

Provider agnostic authentication and authorization in Kubernetes


Banzai Cloud's Pipeline platform allows enterprises to develop, deploy and scale container-based applications on several cloud providers, using multiple Kubernetes distributions. One significant difference between the cloud providers' managed Kubernetes services (we support ACSK, EKS, AKS and GKE) and our own Banzai Cloud Pipeline Kubernetes Engine is our ability to access and configure the Kubernetes API server. Whether our enterprise customers use Banzai Cloud's PKE distribution in a hybrid environment or cloud provider-managed Kubernetes, they demand that we meet the same high standards - the ability to authenticate and authorize (e.g. from LDAP, Active Directory, or any other provider such as GitHub, GitLab, Google, etc.) using a unified, provider-agnostic method. This architecture provides the same strong security measures - multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communication between components using TLS, vulnerability scans, static code analysis, etc. - whether in a managed environment or on our PKE, and all through Pipeline.


  • The Banzai Cloud Pipeline platform can spin up clusters on 6 cloud providers
  • Enterprises prefer to use their own LDAP or AD to authenticate and authorize a user's cloud agnostically
  • Cloud provider-managed Kubernetes does not allow for customization of the K8s API server
  • We use Dex and dynamically plug-in multiple backends
  • Banzai Cloud open-sourced JWT-to-RBAC to automatically generate RBAC resources based on JWT tokens
In order to understand our options and their key differences, let's first go through the methods of authentication and authorization available to us in Kubernetes.


There are quite a few methods for authentication in a Kubernetes cluster:
  • X509 client certificates
  • Static token file
  • Bootstrap tokens
  • Static password file
  • Service account tokens
  • OpenID Connect tokens
  • Webhook token authentication
  • Authenticating proxy
If you run your own Kubernetes cluster or our Pipeline Kubernetes Engine, you have unrestricted control over the API server, so any of the above authentication methods can be used:
  • X509 client certificates: client certificate authentication is enabled by passing the --client-ca-file=cacertfile option to the API server. This is the most popular method of user authentication in kubectl.
  • Static token file: the API server reads bearer tokens from a file when provided with the --token-auth-file=tokenfile flag on the command line.
  • Bootstrap tokens: allow for streamlined bootstrapping of new clusters. The PKE deployment process backed by Pipeline uses these as well.
  • Static password file: basic authentication is enabled by passing the --basic-auth-file=authfile option to the API server.
  • Service account tokens: a service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests (we will come back to these in more detail, later).
  • Authenticating proxy: the API server can be configured to identify users from request header values like X-Remote-User.
This article uses LDAP-based authentication as an example.

OpenID Connect tokens

The simplest way of enabling OAuth token-based authentication in a Kubernetes cluster is by running the API server with special flags.
--oidc-issuer-url=<issuer-url> --oidc-client-id=<client-id> --oidc-ca-file=<CA-cert>
You can read more about OpenID Connect tokens in the official Kubernetes documentation.

Webhook token authentication

The other preferred method of OAuth authentication is webhook token authentication. As with OpenID Connect tokens, this requires us to run the API server with special flags - in this case, --authentication-token-webhook-config-file=<config-file>.
The config file provided to the API server is similar in structure to Kubeconfig files used by client tools like kubectl, and contains all the details that allow the API server to process user tokens.
```yaml
# Kubernetes API version
apiVersion: v1
# variety of API object
kind: Config
# clusters refers to the remote service.
clusters:
  - name: name-of-authn-service
    cluster:
      # CA for verifying the remote service.
      certificate-authority: /path/to/ca.pem
      # URL of remote service to query. Must use 'https'.
      server: https://authn-service/authenticate
# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert
# kubeconfig files require a context. Provide one for the API server, here.
current-context: webhook
contexts:
  - context:
      cluster: name-of-authn-service
      user: name-of-api-server
    name: webhook
```
If you're interested in the details, check out the official Kubernetes documentation.
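To make the exchange concrete, here's a minimal sketch in Python of the TokenReview objects that flow between the API server and the remote authentication service. The helper functions and example values are illustrative, not part of any real service; only the TokenReview field layout follows the Kubernetes authentication.k8s.io/v1beta1 API.

```python
# Sketch of the TokenReview exchange used by webhook token authentication.
# The API server POSTs a TokenReview containing the bearer token; the
# remote service replies with the same kind, filling in "status".

def build_token_review_request(token):
    """TokenReview that the API server sends to the webhook."""
    return {
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "spec": {"token": token},
    }

def build_token_review_response(authenticated, username=None, groups=None):
    """TokenReview that the webhook sends back to the API server."""
    status = {"authenticated": authenticated}
    if authenticated:
        status["user"] = {"username": username, "groups": groups or []}
    return {
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "status": status,
    }

request = build_token_review_request("example.jwt.token")
response = build_token_review_response(True, "janedoe@example.com", ["developers"])
```

If status.authenticated comes back false, the API server rejects the request before authorization is even consulted.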


Authentication itself doesn't allow you to do anything, but simply verifies that you are who you claim to be. After a successful authentication, a Kubernetes cluster will also need to validate that you are permitted to execute whichever action you are trying to perform. This is called authorization, or authz for short. There are four authorization modules in Kubernetes:
  • node - Authorizes API requests made by kubelets
  • ABAC - Attribute-based access control (ABAC was the main authorization module before RBAC)
  • RBAC - Role-based access control
  • Webhook - HTTP callback

Webhook mode

If you'd like to use OAuth-provided JWT tokens for authorization, then the webhook module is the choice for you. As usual, webhook authorization is configured by running the API server with certain flags - --authorization-mode=Webhook and --authorization-webhook-config-file=<config-file>.
```yaml
apiVersion: v1
kind: Config
clusters:
  - name: name-of-authz-service
    cluster:
      # CA for verifying the remote service.
      certificate-authority: /path/to/ca.pem
      # URL of remote service to query. Must use 'https'. May not include parameters.
      server: https://authz-service/authorize
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert
current-context: webhook
contexts:
  - context:
      cluster: name-of-authz-service
      user: name-of-api-server
    name: webhook
```
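The authorization webhook speaks SubjectAccessReview rather than TokenReview. Below is a hedged Python sketch of the request the API server sends and a toy decision function a webhook might implement; the group-based policy is purely illustrative, while the field layout follows the authorization.k8s.io/v1beta1 API.

```python
# Sketch of the SubjectAccessReview exchange used by webhook authorization.
# The API server describes the attempted action in "spec"; the webhook
# answers with status.allowed.

def authorize(review, allowed_groups):
    """Toy decision: allow the request if the user belongs to an allowed group."""
    groups = review["spec"].get("group", [])
    allowed = any(g in allowed_groups for g in groups)
    return {
        "apiVersion": "authorization.k8s.io/v1beta1",
        "kind": "SubjectAccessReview",
        "status": {"allowed": allowed},
    }

review = {
    "apiVersion": "authorization.k8s.io/v1beta1",
    "kind": "SubjectAccessReview",
    "spec": {
        "resourceAttributes": {
            "namespace": "default", "verb": "get", "resource": "pods",
        },
        "user": "janedoe@example.com",
        "group": ["admins", "developers"],
    },
}
decision = authorize(review, allowed_groups={"developers"})
```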

Role-based access control

RBAC authorization mode can be enabled explicitly in Kubernetes clusters, but it's usually on by default. This K8s module provides the objects on which authorization decisions are based. These objects are stored in etcd, just like other Kubernetes resources. Objects:
  • Role
  • RoleBinding
  • ClusterRole
  • ClusterRoleBinding

Role and ClusterRole

A role contains rules that represent a set of permissions. A role can be defined within a namespace with a Role object, or cluster-wide via a ClusterRole object.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
```
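To see how such rules behave, here is a small, illustrative Python sketch of rule matching (not the actual Kubernetes authorizer, which also supports wildcards like "*"): a request is allowed if any rule covers its API group, resource, and verb.

```python
def rule_matches(rule, api_group, resource, verb):
    """True if a single RBAC rule covers the (apiGroup, resource, verb) triple."""
    return (
        api_group in rule["apiGroups"]
        and resource in rule["resources"]
        and verb in rule["verbs"]
    )

def allowed(rules, api_group, resource, verb):
    """A request is permitted if any rule in the role matches it."""
    return any(rule_matches(r, api_group, resource, verb) for r in rules)

# Rules equivalent to the example ClusterRole above.
cluster_role_rules = [
    {
        "apiGroups": ["", "extensions", "apps"],
        "resources": ["deployments", "replicasets", "pods"],
        "verbs": ["get", "list"],
    }
]
```

RBAC is purely additive: there are no "deny" rules, so anything not matched by some rule is simply forbidden.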

RoleBinding and ClusterRoleBinding

Role binding grants those permissions defined within a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts) and a reference to the role being granted. Permissions can be granted within a namespace with a RoleBinding object, or cluster-wide with a ClusterRoleBinding object.
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole-binding
subjects:
  - kind: Group
    name: example-group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-clusterrole
  apiGroup: rbac.authorization.k8s.io
```

Subjects can be

  • Users
  • Groups
  • ServiceAccounts
Users are human users, represented as strings. Group information is provided by Authenticator modules and, like users, groups are represented as strings. There are no format requirements for group names, other than that the system: prefix is reserved for Kubernetes system use. ServiceAccounts have usernames with the system:serviceaccount: prefix and belong to groups with the system:serviceaccounts: prefix. Read more about authorization modules in the official documentation.
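As a quick illustration of those reserved prefixes, a hypothetical helper that classifies a subject string the way Kubernetes interprets it:

```python
def classify_subject(name):
    """Classify a subject string by Kubernetes' reserved name prefixes."""
    if name.startswith("system:serviceaccount:"):
        # ServiceAccount usernames look like system:serviceaccount:<namespace>:<name>
        _, _, namespace, sa_name = name.split(":", 3)
        return ("ServiceAccount", namespace, sa_name)
    if name.startswith("system:"):
        # Everything else under system: is reserved for Kubernetes itself,
        # e.g. the system:serviceaccounts:<namespace> groups.
        return ("SystemReserved", name)
    return ("User", name)
```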

What about cloud provider-managed Kubernetes?

Now that we have a fair understanding of Kubernetes security, let's return to our original problem: how to tackle authentication and authorization across all cloud providers and all Kubernetes distributions. Each of the solutions above requires access to the API server and the ability to configure it - exactly what managed providers don't give us. We're SOL, right? Not so much.

The Banzai Cloud Pipeline platform

We've always been committed to supporting Kubernetes and our container-based application platform on all major providers; we're equally committed to providing easy, seamless and automated portability between cloud vendors. Accordingly, this post highlights a few important aspects of a multi-cloud approach that we learned from our users, and the open source code we developed and made part of the Pipeline platform. We've been trying to find a solution which works across all providers and still gives our enterprise customers the confidence to use their own LDAP or AD.
Note that we support other authentication providers, such as GitHub, Google, GitLab, etc.
Within a Kubernetes cluster, we use service account tokens for authentication, so the corresponding ServiceAccount must be created before we can authenticate and use this type of token.

Automatically create ServiceAccounts based on LDAP in a managed K8s

For authentication we use Dex along with its LDAP connector. When a user in LDAP has group memberships, Dex issues a JWT token containing those memberships. Our open source JWT-to-RBAC project is capable of creating ServiceAccounts, ClusterRoles and ClusterRoleBindings based on JWT tokens. When we create a new ServiceAccount, K8s automatically generates a service account token, as we discussed earlier, and jwt-to-rbac retrieves it.


There are some prerequisites that must be met before you can begin your own tests:
  • Configured Dex server which issues JWT tokens. If you want to issue tokens with Dex, you have to configure it with its LDAP connector. You can use the Banzai Cloud Dex chart to this effect.
  • Configured LDAP server. You can use the openldap docker image for this.
  • Authentication application which uses Dex as an OpenID connector.
Dex acts as a shim between a client app and the upstream identity provider. The client only needs to understand OpenID Connect to query Dex.
The whole process is broken down into two main parts:
  • Dex auth flow, and
  • jwt-to-rbac ServiceAccount creation flow
Dex authentication flow:
  1. A user visits an Authentication App.
  2. The Authentication App redirects users to Dex with an OAuth2 request.
  3. Dex determines the user's identity by looking up the configured upstream identity provider (in this case, LDAP).
  4. Dex redirects the user to the Authentication App with a signed code.
  5. The Authentication App exchanges code with Dex for an access token.
jwt-to-rbac Flow:
  1. The Authentication App has an ID token (JWT)
  2. POST ID token to the jwt-to-rbac App
  3. jwt-to-rbac validates ID token with Dex
  4. jwt-to-rbac extracts username, groups, etc. from the token
  5. jwt-to-rbac calls the API server to create ServiceAccount, ClusterRoles and ClusterRoleBindings
  6. jwt-to-rbac gets a ServiceAccount token and sends it to the Authentication App
  7. The Authentication App sends the service account token back to the user
  8. The user authenticates on Kubernetes using the service account token
The access token issued by Dex contains the following:
```json
{
  "iss": "http://dex/dex",
  "sub": "example-user-id",
  "aud": "example-app",
  "exp": 1549661603,
  "iat": 1549575203,
  "at_hash": "_L5EkeNocRsG7iuUG-pPpQ",
  "email": "janedoe@example.com",
  "email_verified": true,
  "groups": ["admins", "developers"],
  "name": "jane",
  "federated_claims": {
    "connector_id": "ldap",
    "user_id": "cn=jane,ou=People,dc=example,dc=org"
  }
}
```
After jwt-to-rbac extracts the information from the token, it creates a ServiceAccount and a ClusterRoleBinding, using either one of the default K8s ClusterRoles or a custom ClusterRole defined in its configuration as the roleRef - generating the latter if it doesn't yet exist.
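That mapping can be sketched roughly as follows. This is a hypothetical Python rendition (the real project is written in Go): the ServiceAccount name is derived from the token's email the way the sa_list output later in this post suggests, and the default-ClusterRole and namespace choices are assumptions made for illustration.

```python
def manifests_from_claims(claims, cluster_role="edit"):
    """Derive a ServiceAccount and ClusterRoleBinding from decoded JWT claims.

    Rough sketch of what jwt-to-rbac automates; the real project also
    creates custom ClusterRoles for configured groups.
    """
    # e.g. janedoe@example.com -> janedoe-example-com
    sa_name = claims["email"].replace("@", "-").replace(".", "-")
    service_account = {
        "apiVersion": "v1",
        "kind": "ServiceAccount",
        "metadata": {"name": sa_name},
    }
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": f"{sa_name}-{cluster_role}-binding"},
        "subjects": [
            # namespace assumed "default" for this sketch
            {"kind": "ServiceAccount", "name": sa_name, "namespace": "default"}
        ],
        "roleRef": {
            "kind": "ClusterRole",
            "name": cluster_role,
            "apiGroup": "rbac.authorization.k8s.io",
        },
    }
    return service_account, binding

claims = {"email": "janedoe@example.com", "groups": ["admins", "developers"]}
sa, crb = manifests_from_claims(claims)
```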

Default K8s ClusterRoles used by jwt-to-rbac

JWT-to-RBAC does not create a new ClusterRole in every case; for example, if a user is a member of an admin group, it doesn't create a ClusterRole because K8s already has one by default.
  • cluster-admin: allows super-user access to perform any action on any resource.
  • admin: allows admin access, intended to be granted within a namespace using a RoleBinding.
  • edit: allows read/write access to most objects in a namespace.
  • view: allows read-only access to most objects in a namespace.

jwt-to-rbac creates a custom ClusterRole defined in config

In most cases, there are different LDAP groups, so custom groups are mapped to roles which have custom rules.
```toml
[[rbachandler.customGroups]]
groupName = "developers"

  [[rbachandler.customGroups.customRules]]
  verbs = ["get", "list"]
  resources = ["deployments", "replicasets", "pods"]
  apiGroups = ["", "extensions", "apps"]
```
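Conceptually, each configured custom group becomes a ClusterRole. Here is a hedged Python sketch of that translation (the real project does this in Go; the developers-from-jwt naming convention mirrors the crole_list output shown later in this post):

```python
def cluster_role_from_custom_group(group_name, custom_rules):
    """Build a ClusterRole manifest from a configured custom group's rules."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRole",
        "metadata": {"name": f"{group_name}-from-jwt"},
        "rules": [
            {
                "apiGroups": rule["apiGroups"],
                "resources": rule["resources"],
                "verbs": rule["verbs"],
            }
            for rule in custom_rules
        ],
    }

# The customGroups entry from the TOML config above, as a Python structure.
role = cluster_role_from_custom_group(
    "developers",
    [{
        "verbs": ["get", "list"],
        "resources": ["deployments", "replicasets", "pods"],
        "apiGroups": ["", "extensions", "apps"],
    }],
)
```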
To conclude this discussion of our open-sourced JWT-to-RBAC project, consider following the steps below if you'd like to try it, or, check it out in action by subscribing to our free developer beta at https://try.pipeline.banzai.cloud/.

1. Deploy jwt-to-rbac to Kubernetes

After cloning our GitHub repository, you can compile the code and create a Docker image with a single command:
make docker
If you use docker-for-desktop or minikube, you'll be able to easily deploy the solution, locally, with that newly built image.
```shell
kubectl create -f deploy/rbac.yaml
kubectl create -f deploy/configmap.yaml
kubectl create -f deploy/deployment.yaml
kubectl create -f deploy/service.yaml

# port-forward locally
kubectl port-forward svc/jwt-to-rbac 5555
```
Now, you can communicate with the jwt-to-rbac app.

2. POST the access token issued by Dex to jwt-to-rbac API

```shell
curl --request POST \
  --url http://localhost:5555/rbac/ \
  --header 'Content-Type: application/json' \
  --data '{"token": "example.jwt.token"}'

# response:
{
  "Email": "janedoe@example.com",
  "Groups": ["admins", "developers"],
  "FederatedClaimas": {
    "connector_id": "ldap",
    "user_id": "cn=jane,ou=People,dc=example,dc=org"
  }
}
```
ServiceAccount, ClusterRoles (if the access token contains those custom groups we mentioned earlier) and ClusterRoleBindings are created. Listing the created K8s resources:
```shell
curl --request GET \
  --url http://localhost:5555/rbac \
  --header 'Content-Type: application/json'

# response:
{
  "sa_list": ["janedoe-example-com"],
  "crole_list": ["developers-from-jwt"],
  "crolebind_list": ["janedoe-example-com-developers-from-jwt-binding"]
}
```

3. GET the default K8s token of the ServiceAccount

```shell
curl --request GET \
  --url http://localhost:5555/tokens/janedoe-example-com \
  --header 'Content-Type: application/json'

# response:
[
  {
    "name": "janedoe-example-com-token-m4gbj",
    "data": {
      "ca.crt": "example-ca-cert-base64",
      "namespace": "ZGVmYXVsdA==",
      "token": "example-k8s-sa-token-base64"
    }
  }
]
```
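Note that the namespace and token fields in the response are base64 encoded, as in any Kubernetes Secret. Decoding the namespace value from the response above:

```python
import base64

# Secret data in the response is base64 encoded, as in any Kubernetes Secret.
namespace = base64.b64decode("ZGVmYXVsdA==").decode()
# the real token field would be decoded the same way before use
```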

4. Generate a ServiceAccount token with TTL

```shell
curl --request POST \
  --url http://localhost:5555/tokens/janedoe-example-com \
  --header 'Content-Type: application/json' \
  --data '{"duration": "12h30m"}'

# response:
[
  {
    "name": "janedoe-example-com-token-df3re",
    "data": {
      "ca.crt": "example-ca-cert-base64",
      "namespace": "ZGVmYXVsdA==",
      "token": "example-k8s-sa-token-with-ttl-base64"
    }
  }
]
```
Now, you have a base64 encoded service account token.
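The duration field accepts a Go-style duration string like "12h30m". For illustration, a tiny Python parser showing what such a string amounts to in seconds (Go's time.ParseDuration, which a Go service would typically use, accepts more unit types):

```python
import re

def parse_duration(s):
    """Parse a simple Go-style duration like '12h30m' into seconds.

    Illustrative only; Go's time.ParseDuration also handles units
    like "ms" and fractional values.
    """
    units = {"h": 3600, "m": 60, "s": 1}
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", s):
        total += int(value) * units[unit]
    return total
```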

5. Accessing K8s with the ServiceAccount token

You can pass the service account token directly to kubectl with the --token (and --server) flags, or create a kubectl context:

```shell
TOKEN=$(echo "example-k8s-sa-token-base64" | base64 -D)
kubectl config set-credentials "janedoe-example-com" --token=$TOKEN

# with kubectl config get-clusters, you can get cluster names
kubectl config set-context "janedoe-example-com-context" \
  --cluster="clustername" \
  --user="janedoe-example-com" \
  --namespace=default
kubectl config use-context janedoe-example-com-context
kubectl get pod
```
As a final note - since we use Dex, which is an identity service that uses OpenID Connect to drive authentication for other apps, any other supported connector can be used for authentication to Kubernetes.

About Banzai Cloud Pipeline

Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.