Banzai Cloud’s Pipeline platform is an operating system that allows enterprises to develop, deploy and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures (multiple authentication backends, fine-grained authorization, dynamic secret management, automated TLS-secured communication between components, vulnerability scans, static code analysis, and so on) are a tier zero feature of the Pipeline platform, which we strive to automate and enable for all enterprises.
Developers who want to integrate their software into Pipeline benefit from its OpenAPI description and autogenerated language bindings. Some prefer to use the API directly with tools like cURL or Postman, and others use the web interface to get an understanding of the system. The reality is that a single type of interface is never optimal for all use cases: something that is intuitive for a beginner is not necessarily ergonomic or efficient for an experienced operator.
One of the universal standards in the ops world is the command line, which builds on decades of experience with UNIX-like systems. Our plan is to provide a command line tool that is efficient and comfortable for experienced developers as well as for system administrators, whether they manage Pipeline resources interactively or through simple shell scripts that automate long and repetitive workflows.
The Banzai Cloud team consists exclusively of engineers with backgrounds in development and operations. Everyone is a regular user of a number of different command line tools, like kubectl or git, and, of course, everyone has an opinion about what works. In order to reach a compromise, we first had to establish the high level objectives of our CLI tool. We articulated the following desired characteristics:
If you have a basic concept of what Pipeline is for, you should be able to easily find the right options in the CLI tool to operate it. The built-in help system, together with tab completion and prompts for missing parameters, should be good enough that consulting a manual is unnecessary.
Related command line tools should be similar in their approach. If you know how to use one tool, you should be able to expect similar things from another; the tool should not distract users by working differently from the other tools they are used to.
Command line tools are often used repeatedly, but with different parameters, wherein the command is recalled from the shell history and executed again with some changes. We should accommodate this by minimizing the number of parameters so that these commands remain succinct. Accepting verbs and modifiers near the end of parameter lists helps, as it eases the use of command history: one often only has to change a last word or add a new flag to the end of a line. We have to take into account that people who do a significant part of their work in the shell may not know or use most of their shell's line editing features.
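For example (the commands mirror the draft session later in this post), a follow-up invocation should require nothing more than recalling the previous line from history and appending a flag at its end:
% banzai cluster list
% banzai cluster list --fields=+created_by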
One of the reasons people use shell scripts for automation, despite the existence of many general purpose programming languages, is the expressiveness of the pipe operator (|) and the filters provided by UNIX. Simple filters like grep are useful in quick interactive sessions, even if the command line tool also provides filtering capabilities. We should provide output that is easily readable by other tools, not just humans.
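A minimal sketch of the kind of composition we want to enable (the JSON shape of the list output is an assumption here, not a finalized format):
% banzai cluster list | grep Running
% banzai cluster list --json | jq -r '.clusters[].Name'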
If you are performing a specific task for the first time, or only perform it very occasionally, it might be useful for a program to ask for missing inputs. But when you want to repeatedly execute a similar command, either from a shell prompt or a script, you may find it inconvenient that you can't just copy the entire command line. The consensus is that we should make functions easily usable without a terminal (do you remember chat/expect scripts?). The tool should also have an option to explicitly enable non-interactive mode, which is essential for scripting.
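As a sketch of what such a script could look like (the --yes flag for skipping the confirmation prompt appears in the session below; the rest of the details are illustrative, not a finalized interface):
#!/bin/sh
set -e  # abort on the first failing command
# no prompts: missing parameters are an error, confirmation is skipped
banzai cluster create Test3 --vcpu=350 --ram=320 --on-demand=60 --yes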
With the above requirements in mind, we started to draft some common sessions with command lines and example outputs. This made it easier to identify open questions, and to answer them with actual examples.
Basic UNIX commands like ls, grep or echo don't have subcommands. Their names determine what they do; you can only tune their detailed behavior or choose the targets they work on. All of these commands do simple things to files or standard inputs/outputs in the global scope of the shell. One of the first appearances of subcommands was in version control systems (e.g. SCCS from the 1970s). As the complexity of such tools increased, the practice of composing commands from a noun (object) and a verb (action) became the norm, although the order in which these are specified varies from tool to tool. One obvious choice was to style our CLI after kubectl, the most well known Kubernetes-related tool, which uses a verb-noun order. Verb-noun order is closer to natural language, but we found that it works best for tools that perform the same well-defined actions on many different resource types. In the end, we chose the noun-verb approach because of our requirements around discoverability and repetition: it is easier to first select a command group (like clusters), then choose from the actions it supports, than vice versa.
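To make the difference concrete, compare the two orders (the kubectl line is real; the banzai lines come from the draft session below):
% kubectl get pods      # verb-noun: the action comes first
% banzai cluster list   # noun-verb: the resource group comes first
% banzai secret list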
When using Pipeline, you will often need context. By definition, the RESTful API requires that you provide this context in each call. The CLI, however, is not a replacement for cURL; it's there to make your job easier. To accommodate this, we added the concept of context to our command line tool, which allows you to select default secrets, clusters, deployments, and so on. In interactive mode, newly created items are selected as the context by default. The context is stored either in the user's configuration file or in a separate session file. We considered and rejected the idea of the tool tracking context internally: that would make the context less transparent, since the tool would have to guess it using various tricks. Shell scripts, however, should not depend on user-global state, so as to avoid problems with concurrent execution. Let's see an example session (subject to change):
% banzai cluster list
Name Provider Status
Test1 EKS Running
Set up a GKE secret:
% banzai secret create --type=google --name=foo <~/Downloads/test-10g0270b6c06.json
Secret “foo” created and validated successfully, and selected.
% banzai secret list
Name Type
*foo GKE
Create a new cluster:
% banzai cluster create Test2 --vcpu=350 --ram=320 --on-demand=60
Creating GKE cluster with the following details:
Name: Test2
Region: us-east1
Node pools:
- 13 x n1-highcpu-16 (on-demand, 16vcpu, 14.4GB)
- 37 x n1-highcpu-4 (4vcpu, 3.6GB)
Capacity: 356 VCPU, 320 GB
Secret: foo
Do you want to create the cluster? [Y/n] ⏎ # opt-out in config/--yes
Waiting for creation…
Cluster created successfully and selected.
% banzai cluster list --fields=+created_by
Name Provider Status Created by
Test1 EKS Running user
*Test2 GKE Running johndoe
Run kubectl commands locally, in the cluster's context:
% banzai cluster kubectl get pods
Parsable output is the main requirement for using pipes in shell programming. At first, it seemed unnecessary to explicitly define output formats for invocations where the standard output is not a tty. But as we started to draft example commands, it became clear that differentiating between usage modes would cause unexpected, hard to explain results. The ls command is an example of a command whose output depends on its context: take a look at the output of ls | cat, which is a single column without colors, while in your shell ls uses multiple columns and, possibly, colors. This difference in behavior rarely causes problems for ls, but getting the context right for our commands is harder, so we decided to require the user to be explicit about the type of output they need. The situation is different on the input side: if the input comes from a non-tty device (i.e. another process), we can assume it will be JSON. For example, cloning a cluster goes like this:
% banzai cluster get --name=Test2 --json
{"cluster": {"Name": "Test2", "Id": 234, "Status": "RUNNING", …}}
% banzai cluster get --name=Test2 --json | jq '.cluster.Name|="new"' | banzai cluster create
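Put together with explicit non-interactive behavior, this composes into simple automation. A sketch under the assumptions above (the jq filter mirrors the clone example; the --yes flag is the confirmation opt-out shown in the session):
#!/bin/sh
set -e
# clone Test2 under a new name without any prompts
banzai cluster get --name=Test2 --json \
  | jq '.cluster.Name|="Test2-clone"' \
  | banzai cluster create --yes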
Creating a good command line tool is quite complex. Many of us have designed GUIs and used a variety of web interfaces, so we've developed an intuition for what works and what doesn't. CLIs are different: they're cross-breeds between user interfaces and programming constructs. Web interface design is already a thoroughly explored topic with many experts, but even someone who develops their intuition and writes a multitude of command line interfaces over a professional career will, upon taking a step back, often still find gaps in their designs. We're happy to hear from you about any aspect of this topic not covered in this article.