At Banzai Cloud we're always looking for products or frameworks that add value to our business, which we can enable in our open source PaaS, Pipeline. Any list of such products would include serverless frameworks. Thus, today we're adding Fn as a supported spotguide, making it easy for users to deploy Fn with Pipeline on their chosen cloud provider. Before we dive into how to deploy and use Fn with Pipeline, here are a few reasons why we thought Fn should be supported by Pipeline:
Fn runs on Kubernetes, and Pipeline provisions managed Kubernetes services, which means that applications running on Kubernetes are cloud provider agnostic. At the same time, we've retained the ability to run Kubernetes on-premise.

We need a Kubernetes cluster to deploy Fn onto. With Pipeline, it's easy to spin up Kubernetes clusters on various cloud providers: launch the control plane, then use the create cluster REST API calls.
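As a sketch of that flow, creating a cluster is a single authenticated POST to the Pipeline API once the control plane is up. The host, path, token handling, and body fields below are illustrative placeholders rather than the authoritative schema; the Postman collection referenced below is the source of truth:

$ curl -X POST "http://<control-plane-host>/pipeline/api/v1/clusters" \
    -H "Authorization: Bearer <pipeline-token>" \
    -H "Content-Type: application/json" \
    -d '{ "name": "fn-demo", "cloud": "<cloud-provider>" }'

The real request body also carries provider-specific fields (location, node instance types, and so on), all documented in the collection.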
Note: Pipeline is quickly moving towards a hosted PaaS model wherein all the steps below will be unnecessary, since provisioning the control plane and the supported applications or frameworks will be automated and streamlined. We use managed Kubernetes services (actually, we believe that the future is in managed k8s services deployed to the cloud), and are constantly looking to integrate new services alongside those we already support.
Look for the Deployment Create API call in this Postman collection. To invoke the Deployment Create API, we first need to provide two parameters:

- the id of the target cluster (which can be retrieved with the Cluster List REST API call)
- the name of the Helm chart to deploy, passed in the REST call body:
```json
{
"name": "banzaicloud-stable/fn"
}
```
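Putting the two parameters together, the Deployment Create call is a single POST against the chosen cluster. The path and auth header below follow the pattern of the Pipeline API, but treat them as placeholders and verify them against the Postman collection:

$ curl -X POST "http://<control-plane-host>/pipeline/api/v1/clusters/<cluster-id>/deployments" \
    -H "Authorization: Bearer <pipeline-token>" \
    -H "Content-Type: application/json" \
    -d '{ "name": "banzaicloud-stable/fn" }'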
Those who prefer to deploy Fn manually can do so by using our Fn Helm chart, which Pipeline uses behind the scenes.
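For the manual route, a minimal sketch looks like this (the repository URL is assumed to be the Banzai Cloud stable chart repo; double-check it against the chart's README):

$ helm repo add banzaicloud-stable http://kubernetes-charts.banzaicloud.com/branch/master
$ helm repo update
$ helm install banzaicloud-stable/fn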
The response returned by the Deployment Create call contains:

- the release name, by which the deployed Fn components can be identified in Kubernetes
- notes describing how to reach the Fn service in Kubernetes

Here's an example output from a deployment to Kubernetes:
```json
{
  "release_name": "early-dingo",
  "notes": "The Fn service can be accessed within your cluster at:\n\n - http://early-dingo-fn-api.default:80\n\nSet the FN_API_URL environment variable to this address to use the Fn service from outside the cluster:\n\n!! NOTE: It may take a few minutes for the API load balancer to become available.\n\nYou can watch for EXTERNAL-IP to populate by running:\n\n kubectl get svc --namespace default -w early-dingo-fn-api\n\nThen set\n\n export FN_API_URL=http://$(kubectl get svc --namespace default early-dingo-fn-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80\n\n############################################################################\n### WARNING: Persistence is disabled!!! You will lose function and ###\n### flow state when the MySQL pod is terminated. ###\n### See the README.md for instructions on configuring persistence. ###\n############################################################################\n"
}
```
The following diagram is a high level depiction of the above deployment flow:

Execute the Cluster Public Endpoints REST API call to Pipeline in order to get a list of services that can be reached from outside the Kubernetes cluster. From the returned list, the following public endpoints belong to the deployed Fn:
Name | Host | Ports | Description |
---|---|---|---|
`<fn-release-name>`-fn-api | The public IP or DNS hostname of the endpoint | fn:80 | The Fn API endpoint |
`<fn-release-name>`-fn-ui | The public IP or DNS hostname of the endpoint | fn-ui:80, flow-ui:3000 | The UI endpoint; the Fn UI is available on port 80, the Fn Flow UI on port 3000 |
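If you prefer to look the endpoints up directly in Kubernetes, the corresponding services can be listed with kubectl. A minimal sketch, assuming the services follow the same `<release>`-fn-api / `<release>`-fn-ui naming as the endpoint list and the release is named early-dingo as in the example output above:

$ kubectl get svc --namespace default early-dingo-fn-api early-dingo-fn-ui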
Once Fn is up and running, you can start using it via the Fn CLI tool. All you need to do is point the Fn CLI to the API endpoint of the Fn framework you just deployed.
$ export FN_API_URL=http://<the `Host` listed for <fn-release-name>-fn-api>/
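With FN_API_URL set, a quick sanity check (assuming the fn CLI is already installed) is to ask the server for its version and list the, initially empty, applications:

$ fn version
$ fn apps list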
Now that we have an Fn
framework up and running, it's time for some fun. Let's see if we can deploy this Fn Flow Tutorial example to our serverless framework. This example application requires an external component, a fake SDK dashboard, which the serverless functions of the application can talk to.
The fake SDK dashboard
exists outside the serverless framework, so we need to deploy it separately. Let's deploy it to our Kubernetes cluster and expose it through a service, so as to make it reachable from the outside:
$ kubectl run bristol --image=tteggel/bristol --port=3001
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: bristol-svc
  namespace: default
spec:
  ports:
  - name: bristol
    port: 80
    protocol: TCP
    targetPort: 3001
  selector:
    run: bristol
  sessionAffinity: None
  type: LoadBalancer
EOF
Once the service is up, get its public IP (we’ll refer to this as public-ip-of-fake-dashboard
from here on in) from Kubernetes. This is the IP through which the fake SDK dashboard
is reachable.
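The deployment notes shown earlier use the same pattern for the Fn API service; for the fake dashboard it looks like this:

$ kubectl get svc --namespace default bristol-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'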
Log in to Docker Hub with docker login. This is necessary because Fn generates Docker images from our serverless functions and uploads them to Docker Hub.

Deploy the application to the serverless framework
$ cd FlowSaga
$ fn deploy --all --registry <your dockerhub registry name>
This example application consists of multiple serverless functions which together form an Fn Flow workflow.
Verify the deployment
$ fn routes list travel
path image endpoint
/car/book <your-dockerhub-registry-name>/car-book:0.0.107 `<fn-release-name>`-fn-api host/r/travel/car/book
/car/cancel <your-dockerhub-registry-name>/car-cancel:0.0.91 `<fn-release-name>`-fn-api host/r/travel/car/cancel
/email <your-dockerhub-registry-name>/email:0.0.95 `<fn-release-name>`-fn-api host/r/travel/email
/flight/book <your-dockerhub-registry-name>/flight-book:0.0.67 `<fn-release-name>`-fn-api host/r/travel/flight/book
/flight/cancel <your-dockerhub-registry-name>/flight-cancel:0.0.117 `<fn-release-name>`-fn-api host/r/travel/flight/cancel
/hotel/book <your-dockerhub-registry-name>/hotel-book:0.0.89 `<fn-release-name>`-fn-api host/r/travel/hotel/book
/hotel/cancel <your-dockerhub-registry-name>/hotel-cancel:0.0.87 `<fn-release-name>`-fn-api host/r/travel/hotel/cancel
/trip <your-dockerhub-registry-name>/trip:0.0.230 `<fn-release-name>`-fn-api host/r/travel/trip
Configure the deployed application
$ fn apps config set travel COMPLETER_BASE_URL "http://<fn-kubernetes-deployment-name>-fn-flow"
$ fn routes config set travel /flight/book FLIGHT_API_URL "http://<public-ip-of-fake-dashboard>/flight"
$ fn routes config set travel /flight/book FLIGHT_API_SECRET "shhhh"
$ fn routes config set travel /flight/cancel FLIGHT_API_URL "http://<public-ip-of-fake-dashboard>/flight"
$ fn routes config set travel /flight/cancel FLIGHT_API_SECRET "shhhh"
$ fn routes config set travel /hotel/book HOTEL_API_URL "http://<public-ip-of-fake-dashboard>/hotel"
$ fn routes config set travel /hotel/cancel HOTEL_API_URL "http://<public-ip-of-fake-dashboard>/hotel"
$ fn routes config set travel /car/book CAR_API_URL "http://<public-ip-of-fake-dashboard>/car"
$ fn routes config set travel /car/cancel CAR_API_URL "http://<public-ip-of-fake-dashboard>/car"
$ fn routes config set travel /email EMAIL_API_URL "http://<public-ip-of-fake-dashboard>/email"
Now that it's been deployed and configured, we can pass the input payload to the application. Since this application is a flow of serverless functions, we'll see multiple functions being executed.
$ cd trip
$ fn call travel /trip < sample-payload.json
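Since each route is also exposed over plain HTTP (note the endpoint column in the routes listing above, which follows Fn's /r/<app>/<route> pattern), the same trip can be started with curl as well, assuming the payload file from the previous step:

$ curl -X POST --data @sample-payload.json http://<the `Host` listed for <fn-release-name>-fn-api>/r/travel/trip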
You can follow the execution of the flow on the Fn UI at http://<the `Host` listed for <fn-release-name>-fn-ui>/ and on the Fn Flow UI at http://<the `Host` listed for <fn-release-name>-fn-ui>:3000/.

Pipeline provides out-of-the-box node and platform/Kubernetes metrics for monitoring purposes through Prometheus. Deploying these is as simple as invoking the Deployment Create API with the following body:
```json
{
  "name": "banzaicloud-stable/pipeline-cluster-monitor"
}
```
Fn provides application level metrics that can be consumed by Prometheus. With federated monitoring, metrics collected by the Prometheus instances hosted on Kubernetes clusters managed by Pipeline are exposed through a central Grafana, reachable at http://<pipeline public ip>/grafana.
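As a sketch of what scraping those metrics could look like, the snippet below is a hand-written Prometheus scrape config, assuming the Fn API service from the example deployment above exposes its metrics on the standard /metrics path (verify the path against your Fn version):

```yaml
scrape_configs:
  - job_name: fn
    metrics_path: /metrics
    static_configs:
      # in-cluster address of the Fn API service from the example deployment
      - targets: ['early-dingo-fn-api.default:80']
```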
Hollowtrees is an alert/react-based framework that's part of the Pipeline PaaS. It coordinates monitoring, applies rules, and dispatches action chains to plugins using standard CNCF interfaces. If you'd like to learn more about Hollowtrees, check out this post. Until now, Hollowtrees has only supported microservices, listening on a gRPC interface, as action plugins. This is suboptimal in some use cases, since an action plugin may unnecessarily burn CPU resources while it sits idle. So we've decided to extend Hollowtrees to support Fn functions as action plugins. This helps cost efficiency, because action plugins are only triggered when there's an event to handle, and they exit once the event has been processed instead of running continuously. The diagram below shows how the Hollowtrees architecture changes when using Fn to react to alerts, as opposed to gRPC-based action plugins.

That brings us to the end of this post. In our next entry in the serverless series we'll discuss writing action plugins as serverless functions, as well as some changes we're proposing and contributing in order to make Fn more robust on Kubernetes.