Published on 00/00/0000
Last updated on 00/00/0000
Banzai Cloud's Pipeline platform allows enterprises to develop, deploy and scale container-based applications on several cloud providers, using multiple Kubernetes distributions. One significant difference between the cloud providers' managed Kubernetes services (we support ACSK, EKS, AKS and GKE) and our own Banzai Cloud Pipeline Kubernetes Engine is the ability to access and configure the Kubernetes API server. Whether our enterprise customers use Banzai Cloud's PKE distribution in a hybrid environment or cloud provider-managed Kubernetes, they demand we meet the same high standards: the ability to authenticate and authorize (e.g. from LDAP, Active Directory or any other provider such as GitHub, GitLab or Google) using a unified, provider-agnostic method. This architecture provides the same strong security measures (multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, etc.) whether in a managed environment or on our PKE, and all through Pipeline.
In order to understand our options and their key differences, let's first go through the methods of authentication and authorization available to us in Kubernetes.
There are quite a few methods for authentication in a Kubernetes cluster:
- X509 client certificates: client certificate authentication is enabled by passing the `--client-ca-file=cacertfile` option to the API server. This is the most popular method of user authentication in `kubectl`.
- Static token file: the API server reads bearer tokens from a file when provided with the `--token-auth-file=tokenfile` flag on the command line.
- Bootstrap tokens: allow for streamlined bootstrapping of new clusters. The PKE deployment process backed by Pipeline uses these as well.
- Static password file: basic authentication is enabled by passing the `--basic-auth-file=authfile` option to the API server.
- Service account tokens: a service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests (we will come back to these in more detail, later).
- Authenticating proxy: the API server can be configured to identify users from request header values like `X-Remote-User`.

Regardless of whether you use your own Kubernetes cluster or our Pipeline Kubernetes Engine, you should have unrestricted control over your API server, so that any of the above authentication methods work.
This article uses LDAP-based authentication as an example.
The simplest way of enabling OAuth token-based authentication in a Kubernetes cluster is by running the API server with special flags.
```
--oidc-issuer-url=<openid-issuer>
--oidc-client-id=<client-id>
--oidc-ca-file=<CA-cert>
--oidc-username-claim=<JWT-claim-to-username>
--oidc-groups-claim=<JWT-claim-to-groups>
```
You can read more about OpenID Connect tokens in the official Kubernetes documentation.
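To illustrate what those two claim-mapping flags do, here is a minimal sketch that decodes a JWT payload and picks out the claims that `--oidc-username-claim` and `--oidc-groups-claim` would map to a username and groups. Note that this skips signature verification, which the real API server performs against the issuer's keys; the token contents are illustrative.

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying it.
    The real API server also validates the signature against the issuer."""
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def identity_from_claims(claims, username_claim="email", groups_claim="groups"):
    """Mimic the --oidc-username-claim / --oidc-groups-claim mapping."""
    return claims[username_claim], claims.get(groups_claim, [])

# Build a toy token (header.payload.signature) for demonstration.
payload = {"email": "janedoe@example.com", "groups": ["admins", "developers"]}
seg = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
token = "e30." + seg + ".sig"

user, groups = identity_from_claims(decode_jwt_payload(token))
print(user, groups)  # janedoe@example.com ['admins', 'developers']
```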
The other preferred method of OAuth authentication is webhook token authentication. As when using OpenID connect tokens, this requires us to run the API server with special flags.
```
--authentication-token-webhook-config-file=<config-file-accessing-webhook-service>
--authentication-token-webhook-cache-ttl=<access-cache-timeout>
```
The config file provided to the API server is similar in structure to the kubeconfig files used by client tools like `kubectl`, and contains all the details that allow the API server to process user tokens.
```yaml
# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters, here, refers to a remote service.
clusters:
  - name: name-of-authn-service
    cluster:
      # CA for verifying the remote service.
      certificate-authority: /path/to/ca.pem
      # URL of remote service to query. Must use 'https'.
      server: https://authn-service/authenticate

# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert

# kubeconfig files require a context. Provide one for the API server, here.
current-context: webhook
contexts:
  - context:
      cluster: name-of-authn-service
      user: name-of-api-server
    name: webhook
```
If you're interested in the details, check out the official Kubernetes documentation.
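The webhook service named in that config receives a `TokenReview` object from the API server and must answer with the same kind, filling in `status`. Here's a minimal sketch of that exchange (shown with the `v1beta1` API group; the token lookup is a hypothetical stand-in for a real identity backend such as Dex or LDAP):

```python
def review_token(token_review, lookup):
    """Answer a Kubernetes TokenReview request.

    `token_review` is the JSON body the API server POSTs to the webhook;
    `lookup` maps a bearer token to (username, groups), or None if invalid.
    """
    token = token_review["spec"]["token"]
    identity = lookup(token)
    status = {"authenticated": False}
    if identity is not None:
        username, groups = identity
        status = {
            "authenticated": True,
            "user": {"username": username, "groups": groups},
        }
    return {
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "status": status,
    }

# Hypothetical token store standing in for a real identity backend.
tokens = {"valid-token": ("janedoe@example.com", ["admins", "developers"])}
request = {
    "apiVersion": "authentication.k8s.io/v1beta1",
    "kind": "TokenReview",
    "spec": {"token": "valid-token"},
}
print(review_token(request, tokens.get))
```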
Authentication itself doesn't allow you to do anything, but simply verifies that you are who you claim to be. After a successful authentication, a Kubernetes cluster will also need to validate that you are permitted to execute whichever action you are trying to perform. This is called authorization, or authz for short. There are four authorization modules in Kubernetes:

- Node: grants permissions to kubelets based on the pods they are scheduled to run
- ABAC: attribute-based access control, driven by a local policy file
- RBAC: role-based access control, driven by Role and RoleBinding API objects
- Webhook: delegates the decision to an external HTTP(S) service
If you'd like to use OAuth-provided JWT tokens for authorization, then the webhook module is the choice for you. As per usual, webhook authorization can be configured in the API server by running it with certain flags.
```
--authorization-webhook-config-file=<authz-config-file>
```
```yaml
apiVersion: v1
kind: Config
clusters:
  - name: name-of-authz-service
    cluster:
      # CA for verifying the remote service.
      certificate-authority: /path/to/ca.pem
      # URL of remote service to query. Must use 'https'. May not include parameters.
      server: https://authz-service/authorize
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert
current-context: webhook
contexts:
  - context:
      cluster: name-of-authz-service
      user: name-of-api-server
    name: webhook
```
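The authorization webhook, in turn, receives a `SubjectAccessReview` describing who wants to perform which action, and replies with `status.allowed`. A minimal sketch with a hypothetical in-memory policy (a real service would evaluate the JWT-derived groups against its own rules):

```python
def review_access(sar, policy):
    """Answer a Kubernetes SubjectAccessReview request.

    `policy` maps a group name to a set of (verb, resource) pairs it allows.
    """
    spec = sar["spec"]
    attrs = spec.get("resourceAttributes", {})
    wanted = (attrs.get("verb"), attrs.get("resource"))
    allowed = any(wanted in policy.get(g, set()) for g in spec.get("groups", []))
    return {
        "apiVersion": "authorization.k8s.io/v1beta1",
        "kind": "SubjectAccessReview",
        "status": {"allowed": allowed},
    }

# Hypothetical policy: developers may get and list pods, nothing else.
policy = {"developers": {("get", "pods"), ("list", "pods")}}
request = {
    "apiVersion": "authorization.k8s.io/v1beta1",
    "kind": "SubjectAccessReview",
    "spec": {
        "user": "janedoe@example.com",
        "groups": ["developers"],
        "resourceAttributes": {"verb": "list", "resource": "pods"},
    },
}
print(review_access(request, policy)["status"])  # {'allowed': True}
```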
You can enable RBAC authorization mode in Kubernetes clusters, but it's usually enabled by default. This K8s module gives us API objects, which are the basis of authorization decisions. These objects are stored in etcd, just like other Kubernetes resources. The objects are:
A role contains rules that represent a set of permissions. A role can be defined within a namespace with a `Role` object, or cluster-wide via a `ClusterRole` object.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
```
A role binding grants the permissions defined within a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts) and a reference to the role being granted. Permissions can be granted within a namespace with a `RoleBinding` object, or cluster-wide with a `ClusterRoleBinding` object.
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole-binding
subjects:
  - kind: Group
    name: example-group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-clusterrole
  apiGroup: rbac.authorization.k8s.io
```
Users are human users, represented as strings. Group information is provided by authenticator modules. Like users, groups are represented as strings, and they have no format requirements other than that the `system:` prefix is reserved. ServiceAccounts have usernames with the `system:serviceaccount:` prefix and belong to groups with the `system:serviceaccounts:` prefix. Read more about authorization modules in the Kubernetes documentation.
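As a small illustration of these reserved prefixes, here's a sketch that splits a ServiceAccount username of the form `system:serviceaccount:<namespace>:<name>` into its parts:

```python
def parse_serviceaccount_username(username):
    """Split a ServiceAccount username into (namespace, name),
    or return None for subjects that are not ServiceAccounts."""
    prefix = "system:serviceaccount:"
    if not username.startswith(prefix):
        return None
    namespace, _, name = username[len(prefix):].partition(":")
    return namespace, name

print(parse_serviceaccount_username("system:serviceaccount:default:jwt-to-rbac"))
# ('default', 'jwt-to-rbac')
print(parse_serviceaccount_username("janedoe@example.com"))  # None
```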
Now that we have a fair understanding of Kubernetes security, let's return to our original problem: how to tackle authentication and authorization across all cloud providers and all Kubernetes distributions. Each of the above solutions requires access to the API server and the ability to configure it, which managed providers don't give us. We're out of luck, right? Not quite.
We've always been committed to supporting Kubernetes and our container-based application platform on all major providers; however, we're also committed to providing easy, seamless and automated portability between cloud vendors. Accordingly, this post highlights a few important aspects of a multi-cloud approach we learned from our users, and the open source code we developed and made part of the Pipeline platform. We've been trying to find a solution that works across all providers and still gives our enterprise customers the confidence to use their own LDAP or AD.
Note that we support other authentication providers, such as GitHub, Google, GitLab, etc.
Within a Kubernetes cluster, we use service account tokens for authentication, so the corresponding `ServiceAccount` must be created before we can authenticate and use this type of token.
LDAP in a managed K8s

For authentication we use Dex along with its LDAP connector. When a user in LDAP has group memberships, Dex issues a JWT token containing those memberships. Our open source JWT-to-RBAC project is capable of creating `ServiceAccount`, `ClusterRole` and `ClusterRoleBinding` objects based on JWT tokens. When we create a new `ServiceAccount`, K8s automatically generates a service account token, as we discussed earlier, and jwt-to-rbac retrieves it.
There are some prerequisites that must be met before you can begin your own tests.
Dex acts as a shim between a client app and the upstream identity provider. The client only needs to understand OpenID Connect to query Dex.
The whole process is broken down into two main parts: the Dex authentication flow, in which the user logs in through Dex with LDAP credentials and receives a JWT access token, and the jwt-to-rbac flow, in which the service creates the `ServiceAccount`, `ClusterRoles` and `ClusterRoleBindings`, then retrieves the generated `ServiceAccount` token and sends it to the authentication app.

The access token issued by Dex contains the following:
```json
{
  "iss": "http://dex/dex",
  "sub": "CiNjbj1qYW5lLG91PVBlb3BsZSxkYz1leGFtcGxlLGRjPW9yZxIEbGRhcA",
  "aud": "example-app",
  "exp": 1549661603,
  "iat": 1549575203,
  "at_hash": "_L5EkeNocRsG7iuUG-pPpQ",
  "email": "janedoe@example.com",
  "email_verified": true,
  "groups": ["admins", "developers"],
  "name": "jane",
  "federated_claims": {
    "connector_id": "ldap",
    "user_id": "cn=jane,ou=People,dc=example,dc=org"
  }
}
```
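From these claims, jwt-to-rbac derives Kubernetes-friendly resource names such as `janedoe-example-com`. A minimal sketch of such a sanitization step (a hypothetical helper; the actual jwt-to-rbac implementation may differ):

```python
import re

def email_to_resource_name(email):
    """Derive a DNS-1123-style resource name from an email address,
    matching names like 'janedoe-example-com' seen in the API responses."""
    name = re.sub(r"[^a-z0-9]+", "-", email.lower())
    return name.strip("-")

print(email_to_resource_name("janedoe@example.com"))  # janedoe-example-com
```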
After jwt-to-rbac extracts the information from the token, it creates a `ServiceAccount` and a `ClusterRoleBinding`, using one of the default K8s `ClusterRoles` as `roleRef`, or otherwise creating one defined in the configuration if it doesn't yet exist.
JWT-to-RBAC does not create a new `ClusterRole` in every case; for example, if a user is a member of an admin group, it doesn't create a `ClusterRole`, because K8s already has one by default.
| Default ClusterRole | Description |
| --- | --- |
| cluster-admin | Allows super-user access to perform any action on any resource. |
| admin | Allows admin access, intended to be granted within a namespace using a RoleBinding. |
| edit | Allows read/write access to most objects in a namespace. |
| view | Allows read-only access to most objects in a namespace. |
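The decision above (reuse a default ClusterRole, or create a custom one for the group) can be sketched as follows. The mapping table and the `-from-jwt` suffix follow the names seen in the API responses later in this post; the helper itself is hypothetical:

```python
DEFAULT_CLUSTER_ROLES = {"cluster-admin", "admin", "edit", "view"}

def role_ref_for_group(group, group_to_default=None):
    """Return (role_name, needs_creation) for a group from the JWT.

    `group_to_default` optionally maps group names (e.g. 'admins') to one of
    the built-in ClusterRoles; unmapped groups get a generated custom role.
    """
    mapped = (group_to_default or {}).get(group, group)
    if mapped in DEFAULT_CLUSTER_ROLES:
        return mapped, False          # reuse the built-in ClusterRole
    return group + "-from-jwt", True  # a custom ClusterRole must be created

print(role_ref_for_group("admins", {"admins": "admin"}))  # ('admin', False)
print(role_ref_for_group("developers"))                   # ('developers-from-jwt', True)
```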
ClusterRole defined in config

In most cases, there are different LDAP groups, so custom groups are mapped to roles which have custom rules.
```toml
[[rbachandler.customGroups]]
groupName = "developers"

[[rbachandler.customGroups.customRules]]
verbs = ["get", "list"]
resources = ["deployments", "replicasets", "pods"]
apiGroups = ["", "extensions", "apps"]
```
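Such a `customGroups` entry translates directly into the `rules` section of a generated ClusterRole manifest. A sketch of that translation, with the config pre-parsed into a plain dict to stay self-contained (the helper is hypothetical, not jwt-to-rbac's actual code):

```python
def cluster_role_from_custom_group(group):
    """Build a ClusterRole manifest from one rbachandler.customGroups entry."""
    return {
        "kind": "ClusterRole",
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "metadata": {"name": group["groupName"] + "-from-jwt"},
        "rules": [
            {
                "apiGroups": rule["apiGroups"],
                "resources": rule["resources"],
                "verbs": rule["verbs"],
            }
            for rule in group["customRules"]
        ],
    }

# The TOML above, pre-parsed into a dict.
developers = {
    "groupName": "developers",
    "customRules": [{
        "verbs": ["get", "list"],
        "resources": ["deployments", "replicasets", "pods"],
        "apiGroups": ["", "extensions", "apps"],
    }],
}
manifest = cluster_role_from_custom_group(developers)
print(manifest["metadata"]["name"])  # developers-from-jwt
```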
To conclude this discussion of our open-sourced JWT-to-RBAC project, consider following the steps below if you'd like to try it, or check it out in action by subscribing to our free developer beta at https://try.pipeline.banzai.cloud/.
After cloning our GitHub repository, you can compile the code and create a Docker image with a single command:

```shell
make docker
```
If you use docker-for-desktop or minikube, you'll be able to easily deploy the solution, locally, with that newly built image.
```shell
kubectl create -f deploy/rbac.yaml
kubectl create -f deploy/configmap.yaml
kubectl create -f deploy/deployment.yaml
kubectl create -f deploy/service.yaml
# port-forward locally
kubectl port-forward svc/jwt-to-rbac 5555
```
Now, you can communicate with the jwt-to-rbac app.
```shell
curl --request POST \
  --url http://localhost:5555/rbac/ \
  --header 'Content-Type: application/json' \
  --data '{"token": "example.jwt.token"}'

# response:
{
  "Email": "janedoe@example.com",
  "Groups": ["admins", "developers"],
  "FederatedClaimas": {
    "connector_id": "ldap",
    "user_id": "cn=jane,ou=People,dc=example,dc=org"
  }
}
```
The `ServiceAccount`, `ClusterRoles` (if the access token contains those custom groups we mentioned earlier) and `ClusterRoleBindings` are created. Listing the created K8s resources:
```shell
curl --request GET \
  --url http://localhost:5555/rbac \
  --header 'Content-Type: application/json'

# response:
{
  "sa_list": ["janedoe-example-com"],
  "crole_list": ["developers-from-jwt"],
  "crolebind_list": [
    "janedoe-example-com-admin-binding",
    "janedoe-example-com-developers-from-jwt-binding"
  ]
}
```
Retrieving the created `ServiceAccount`'s token:
```shell
curl --request GET \
  --url http://localhost:5555/tokens/janedoe-example-com \
  --header 'Content-Type: application/json'

# response:
[
  {
    "name": "janedoe-example-com-token-m4gbj",
    "data": {
      "ca.crt": "example-ca-cer-base64",
      "namespace": "ZGVmYXVsdA==",
      "token": "example-k8s-sa-token-base64"
    }
  }
]
```
or
```shell
curl --request POST \
  --url http://localhost:5555/tokens/janedoe-example-com \
  --header 'Content-Type: application/json' \
  --data '{"duration": "12h30m"}'

# response:
[
  {
    "name": "janedoe-example-com-token-df3re",
    "data": {
      "ca.crt": "example-ca-cer-base64",
      "namespace": "ZGVmYXVsdA==",
      "token": "example-k8s-sa-token-with-ttl-base64"
    }
  }
]
```
Now you have a base64-encoded service account token.
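The fields in the token response are base64-encoded, just like the `data` of any Kubernetes Secret; decoding one of the sample values from the response above:

```python
import base64

def decode_secret_field(value):
    """Decode one base64-encoded field of a service account token Secret."""
    return base64.b64decode(value).decode()

print(decode_secret_field("ZGVmYXVsdA=="))  # default
```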
You can use the service account token from the command line:
```shell
kubectl --token $TOKEN_TEST --server $APISERVER get po
```
Or create a `kubectl` context:
```shell
TOKEN=$(echo "example-k8s-sa-token-base64" | base64 -D)
kubectl config set-credentials "janedoe-example-com" --token=$TOKEN
# with kubectl config get-clusters, you can get cluster names
kubectl config set-context "janedoe-example-com-context" --cluster="clustername" --user="janedoe-example-com" --namespace=default
kubectl config use-context janedoe-example-com-context
kubectl get pod
```
As a final note - since we use Dex, which is an identity service that uses OpenID Connect to drive authentication for other apps, any other supported connector can be used for authentication to Kubernetes.
Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.