Published on 04/28/2020
Last updated on 03/21/2024

Using templates for injecting dynamic configuration in Vault


Bank-Vaults already supports multiple KMS alternatives for encrypting and storing unseal keys and root tokens. However, during bootstrapping and configuration, you sometimes need to source other secrets to configure Vault securely. In this post, you will learn how to do that, with the help of the valuable contributions of Pato Arvizu. Thank you!

For those unfamiliar with Bank-Vaults, let's do a quick recap. Bank-Vaults is a Vault Swiss Army knife, which makes enterprise-grade security attainable on Kubernetes. It has many 'blades' that cut through all kinds of security problems: the Bank-Vaults Kubernetes operator provides automation; a Go client with automatic token renewal provides dynamic secret generation, multiple unseal options and more; a CLI tool initializes, unseals and configures Vault with authentication methods and secret engines; and direct secret injection into Pods reduces attack surface.


Background - why templates?

When configuring a Vault object via the externalConfig, it's sometimes convenient (or necessary) to inject settings that are known only at runtime (e.g. secrets that you don't want to store in source control, or dynamic resources managed elsewhere), or computations based on multiple values (string or arithmetic operations). For these cases, the operator supports parameterized templating. The vault-configurer component evaluates the templates and injects the rendered configuration into Vault. This templating is based on Go templates, extended by Sprig, with some custom functions available specifically for Bank-Vaults (like decrypting strings using the AWS Key Management Service or the Google Cloud Platform's Cloud Key Management Service).

Special characters

To avoid confusion and potential parsing errors (and interference with other templating tools like Helm), the templates don't use the default delimiters that Go templates use ({{ and }}). Instead, they use ${ for the left delimiter, and } for the right one. Additionally, to quote parameters being passed to functions, surround them with backticks (`). For example, to call the env function, you can use this in your manifest:
password: "${ env `MY_ENVIRONMENT_VARIABLE` }"
In this case, vault-configurer will evaluate MY_ENVIRONMENT_VARIABLE at runtime (assuming it was properly injected) and substitute its value into the password field.


In addition to the default functions in Go templates, you can also use the Sprig library of functions in your configuration; see the Sprig documentation for the full list. One thing to keep in mind is that some Sprig functions return values other than strings, like lists or maps. Make sure that the function you're calling returns a string to avoid unintentionally generating an invalid configuration.

Bank-Vaults template functions

To provide functionality that's more Kubernetes-friendly and cloud-native, Bank-Vaults provides a few additional functions not available in Sprig or Go. The functions and their parameters (in the order they should be passed) are listed below.

awskms

Takes a base64-encoded, KMS-encrypted string and returns the decrypted string. The function also accepts an optional encryption context: any number of additional parameters, each a key-value pair (separated by a =), which together must match the full context used at encryption time.

Note: this function assumes that the vault-configurer pod has the appropriate AWS IAM credentials and permissions to decrypt the given string. You can inject the AWS IAM credentials by using Kubernetes secrets as environment variables, an EC2 instance role, kube2iam, EKS IAM roles, etc.

Parameter         | Type                     | Required
------------------|--------------------------|---------
encodedString     | Base64-encoded string    | Yes
encryptionContext | Variadic list of strings | No
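A hypothetical usage in externalConfig, assuming ENCRYPTED_DB_CREDS holds the base64-encoded ciphertext and that it was encrypted with the context Tool=bank-vaults:

```yaml
# ENCRYPTED_DB_CREDS: base64-encoded KMS ciphertext (hypothetical)
# The trailing parameter repeats the encryption context used at encrypt time
password: '${ awskms (env `ENCRYPTED_DB_CREDS`) `Tool=bank-vaults` }'
```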


gcpkms

Takes a base64-encoded string that was encrypted with a Google Cloud Platform (GCP) symmetric key, and returns the decrypted string.

Note: this function assumes that the vault-configurer pod has the appropriate GCP IAM credentials and permissions to decrypt the given string. You can inject the GCP IAM credentials by using Kubernetes secrets as environment variables, or they can be acquired via service account authentication, etc.

Parameter     | Type                  | Required
--------------|-----------------------|---------
encodedString | Base64-encoded string | Yes
projectId     | String                | Yes
location      | String                | Yes
keyRing       | String                | Yes
key           | String                | Yes
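A hypothetical usage, with the parameters passed in the order shown in the table above (the project, location, keyring, and key names below are placeholders):

```yaml
# All key-path parameters below are placeholders
password: '${ gcpkms (env `ENCRYPTED_DB_CREDS`) `my-project` `global` `my-keyring` `my-key` }'
```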


blob

Reads the contents of a blob at the given URL, either from disk (file) or from a cloud blob storage service (object storage), and returns it. The path must exist and be readable by vault-configurer.

Valid values for the URL parameter are listed below; for more fine-grained options, check the documentation of the underlying library:

  • file:///path/to/dir/file
  • s3://my-bucket/object?region=us-west-1
  • gs://my-bucket/object
  • azblob://my-container/blob

Note: this function assumes that the vault-configurer pod has the appropriate rights to access the given cloud service; see the notes on the awskms and gcpkms functions above.

Parameter | Type   | Required
----------|--------|---------
url       | String | Yes
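For instance, a Vault policy document could be loaded from a file mounted into the vault-configurer pod. This is a hypothetical sketch; the path and policy name are placeholders:

```yaml
policies:
  - name: allow_secrets
    rules: '${ blob `file:///config/allow_secrets.hcl` }'
```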

Full example

In this example, we are going to create a Vault instance with the vault-operator that can create dynamic credentials in an external MySQL instance. The root password is provided to Vault securely with the help of the new awskms template function and configured automatically.

  1. Create an EKS cluster. You can use the Banzai CLI on our hosted service, or your own tools.
    $ banzai cluster create <<EOF
    {
      "name": "bifrost",
      "location": "eu-west-1",
      "cloud": "amazon",
      "secretName": "aws",
      "properties": {
        "eks": {
          "version": "1.15.12",
          "nodePools": {
            "pool1": {
              "spotPrice": "0.03",
              "count": 2,
              "minCount": 2,
              "maxCount": 4,
              "autoscaling": true,
              "instanceType": "t2.medium"
            }
          }
        }
      }
    }
    EOF
  2. Enable authentication based on IAM ServiceAccount for this cluster according to our guide.
  3. Create an RDS Aurora MySQL instance. If you prefer, you can use any other MySQL implementation, even the community Helm chart.
  4. Encrypt the password of that database instance with KMS using the AWS CLI, for example:
    $ aws kms encrypt --region eu-west-1 \
                    --key-id 9f054126-2a98-470c-9f10-9b3b0cad94a1 \
                    --plaintext $(echo -n secretPassword | base64) \
                    --encryption-context Tool=bank-vaults \
                    --output text \
                    --query CiphertextBlob
  5. Modify the example in the Bank-Vaults repository to contain your values and save it to the file cr-awskms.yaml (only the relevant parts of the custom resource are shown here):
      externalConfig:
        secrets:
          - type: database
            description: mysql
            configuration:
              config:
                - name: mysql
                  plugin_name: mysql-database-plugin
                  max_open_connections: 5
                  connection_url: "{{username}}:{{password}}@tcp(vault.cluster-cnehjeaeuy8e.eu-west-1.rds.amazonaws.com:3306)/"
                  allowed_roles: ['*']
                  username: root
                  password: '${ awskms (env `ENCRYPTED_DB_CREDS`) }'
              roles:
                - name: app
                  db_name: mysql
                  creation_statements: "CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT ALL ON `app\_%`.* TO '{{name}}'@'%';"
                  default_ttl: 2m
                  max_ttl: 10m
      envsConfig:
        - name: ENCRYPTED_DB_CREDS
          value: "AQICAHgF8pf5QFJ57AOqyHwBTI+KJ5zPmn5Pew3EzA0QvA8x7gHwYdiEad7eSKiARs35EBaFAAAAbDBqBgkqhkiG9w0BBwagXTBZkWWuUHNahSjQZtmeoQYjMvmHe1WYuCTBZQMEAS4wEQQMSfvJv4JxyVmTpVfoAgEQgCmAIATgudq7IW+0JXUJhT/B+iKsmIy/A2Cud609nx4mRgOsK5+ObbnL1A==" # This should be the base64-encoded ciphertext blob
        - name: AWS_REGION
          value: eu-west-1
    Of course, you can combine this with the new blob function: store the encrypted password in an S3 bucket (if you'd rather not keep encrypted Secrets in your configuration) and pull it back into the config with:
    (awskms (blob `s3://bank-vaults/encrypted/db-creds?region=eu-west-1`))
  6. Install the Vault Operator and the Vault instance:
    helm upgrade --install vault-operator banzaicloud-stable/vault-operator
    kubectl apply -f cr-awskms.yaml
  7. Check the Vault Pods and verify that they are healthy, then check the logs of the vault-configurer with the kubectl logs -f deployment/vault-configurer command to verify that it successfully configured the MySQL database dynamic secret backend.
    If you're interested in contributing, check out the Bank-Vaults repository, or give us a GitHub star.
