In a perfect world, every workload that you needed to deploy today would be designed from the ground up to be cloud native. It would run inside containers and play nicely with Kubernetes by default.
But the reality is that not all apps do these things. Many businesses need to deploy legacy workloads – such as virtual machines – within cloud native environments, which is complicated enough on its own. Add to this the fact that many businesses don’t have the time or resources to rebuild those workloads from scratch to fit a cloud native security architecture.
As we explained in a previous blog, Calisti – the Cisco Service Mesh Manager – addresses this challenge by making it easy to integrate VMs into an Istio Service Mesh. Calisti, which is built on top of Istio, treats VMs and containers alike as first-class citizens, which means you don’t need to cut corners or perform extra work to get legacy workloads running in a cloud native environment.
To prove the point, let’s walk step-by-step through the process of adding a virtual machine to your Istio Service Mesh using Calisti.
For the purposes of this tutorial, you’ll need:
Note: If you are not using the demo application to test VM integration, skip this step. You can install the demo application on your Calisti cluster by running:
smm demoapp install
Before proceeding further, we want to make sure there are no replicas for the analytics service running in your cluster. So, scale down to zero replicas with:
kubectl scale deploy -n smm-demo analytics-v1 --replicas=0
Verify that there are no replicas by running:
kubectl get pods -n smm-demo | grep analytics
This command should return nothing if you’ve successfully scaled down the analytics service.
Calisti treats VMs as Kubernetes workloads. To connect a legacy VM to the service mesh, you need to assign a set of labels that identify the machine. Do that by defining a WorkloadGroup in YAML, such as the following:
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  labels:
    app: analytics
    version: v0
  name: analytics-v0
  namespace: smm-demo
spec:
  metadata:
    labels:
      app: analytics
      version: v0
  probe:
    httpGet:
      path: /
      host: 127.0.0.1
      port: 8080
      scheme: HTTP
  template:
    network: vm-network-1
    ports:
      http: 8080
      grpc: 8082
      tcp: 8083
    serviceAccount: default
This adds a virtual machine serving the analytics traffic in the demo application.
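To register the WorkloadGroup in your cluster, save the manifest to a file and apply it with kubectl. For example, assuming you saved it as analytics-workloadgroup.yaml:
kubectl apply -f analytics-workloadgroup.yaml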
If you need to allow communication without encryption on some service ports, you can do this by creating a PeerAuthentication object in the smm-demo namespace. For example:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: analytics
  namespace: smm-demo
spec:
  mtls:
    mode: PERMISSIVE
  selector:
    matchLabels:
      app: analytics
      version: v0
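Apply this manifest the same way as the WorkloadGroup, assuming you saved it as analytics-peerauth.yaml:
kubectl apply -f analytics-peerauth.yaml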
Note: You can skip this step if the default Istio Service Mesh networking settings suffice for your workload.
At this point, it’s time to start up the workload on your VM. Do this by logging into the VM and running whichever command starts the workload:
sudo /path/to/your_workload
You’ll also need iptables and curl installed on the VM. So, since you’re logged in anyway, go ahead and install them (if they’re not already installed) with:
sudo apt-get update && sudo apt-get install -y curl iptables
To connect the VM to the Istio Service Mesh, you need to know the URL of the Calisti dashboard, the namespace and name of the WorkloadGroup, and the bearer token of the service account referenced in the .spec.template.serviceAccount field of the WorkloadGroup.
To make it easy to collect this information, we have provided a script, which you can download and run:
#!/bin/bash -e

# Service account referenced by .spec.template.serviceAccount of the WorkloadGroup
SA_NAMESPACE="smm-demo"
SA_SERVICEACCOUNT="default"
SA_BEARER_TOKEN_FILE=~/bearer-token

# Find the secret that stores the service account's token
SA_SECRET_NAME=$(kubectl get serviceaccount $SA_SERVICEACCOUNT -n $SA_NAMESPACE -o json | jq -r '.secrets[0].name')
if [ -z "$SA_SECRET_NAME" ]; then
  echo "Cannot find secret for service account $SA_NAMESPACE/$SA_SERVICEACCOUNT"
  exit 1
fi

# Decode the bearer token from the secret and save it to a file
mkdir -p $(dirname $SA_BEARER_TOKEN_FILE)
if ! kubectl get secret -n $SA_NAMESPACE ${SA_SECRET_NAME} -o json | jq -r '.data.token | @base64d' > $SA_BEARER_TOKEN_FILE ; then
  echo "Cannot get service account bearer token"
  exit 1
fi
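As a usage sketch, assuming you saved the script as get-bearer-token.sh and your kubectl context points at the Calisti cluster (the script also requires jq), you could run:
chmod +x get-bearer-token.sh
./get-bearer-token.sh
cat ~/bearer-token   # the decoded token; you'll pass this to smm-agent in the next step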
At this point, there are a few more steps we need to take to prepare the VM to attach to the mesh. Log into the VM and run these commands (fill in the placeholders with the data collected in the previous step):
curl http://<dashboard-url>/get/smm-agent | bash    # installs smm-agent
smm-agent set workload-group <namespace> <workloadgroup>    # specifies the WorkloadGroup and its namespace
smm-agent set bearer-token <token>    # specifies the bearer token
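For the demo WorkloadGroup used in this tutorial, the calls would look roughly like the following; the dashboard URL remains a placeholder, and the token is the value the script above wrote to ~/bearer-token (assuming you copied that file to the VM):
curl http://<dashboard-url>/get/smm-agent | bash
smm-agent set workload-group smm-demo analytics-v0
smm-agent set bearer-token $(cat ~/bearer-token)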
You can verify that everything was set up correctly by running:
smm-agent show-config
The output should be similar to:
✓ dashboard url=http://a6bc8072e26154e5c9084e0d7f5a9c92-2016650592.eu-north-1.elb.amazonaws.com
✓ target workload-group namespace=smm-demo, name=analytics-v0
✓ no additional labels set
✓ bearer token set
✓ configuration is valid
Now, you can go ahead and attach the VM to the mesh with:
smm-agent reconcile
The output should look similar to:
✓ reconciling host operating system
✓ configuration loaded config=/etc/smm/agent.yaml
✓ install-pilot-agent ❯ downloading and installing OS package component=pilot-agent, platform={linux amd64 deb 0xc00000c168}
✓ install-pilot-agent ❯ downloader reconciles with exponential backoff downloader={pilot-agent {linux amd64 deb 0xc00000c168} true 0xc0002725b0}
...
✓ systemd-ensure-smm-agent-running/systemctl ❯ starting service args=[smm-agent]
✓ systemd-ensure-smm-agent-running/systemctl/start ❯ executing command command=systemctl, args=[start smm-agent], timeout=5m0s
✓ systemd-ensure-smm-agent-running/systemctl/start ❯ command executed successfully command=systemctl, args=[start smm-agent], stdout=, stderr=
✓ changes were made to the host operating system
✓ reconciled host operating system
Finally, to verify the setup, first run this command to check that a WorkloadEntry has been created for the VM:
kubectl get workloadentries -n smm-demo
The output should be similar to:
NAME                                    AGE     ADDRESS
analytics-v0-3.68.232.96-vm-network-1   2m40s   3.68.232.96
You can also check the health of the service with:
kubectl describe workloadentries analytics-v0-3.68.232.96-vm-network-1 -n smm-demo
Look for output such as:
Name:         analytics-v0-3.68.232.96-vm-network-1
Namespace:    smm-demo
Labels:       app=analytics
...
Status:
  Conditions:
    Last Probe Time:       2022-04-01T05:47:47.472143851Z
    Last Transition Time:  2022-04-01T05:47:47.472144917Z
    Status:                True
    Type:                  Healthy
Finally, in the Calisti dashboard, navigate to MENU > TOPOLOGY and verify that the VM is visible and that it is receiving traffic.
Congratulations, your VM is now joined to your mesh, so you can manage legacy application traffic just as seamlessly as cloud native workloads.
At this point, your work is done if you intend to operate the VM in the mesh for an extended period of time. If it’s a short-running VM that you want to remove from the Istio Service Mesh, stay tuned for a future blog post that covers how to do this.
Click here to learn more about Cisco Calisti, or to use it for free.