Fluvio is a data-streaming platform that allows you to transform your data stream in real-time using WASM plugins. Integrating Logging operator with Fluvio gives you a flexible data collection pipeline that can transform the data with these plugins. This post shows you how to build a strategic data collection plan so you can handle log management with ease.
Fluvio is gaining popularity as a high-performance, scalable, and fault-tolerant real-time streaming platform. It is an open-source software framework that allows developers to build, deploy, and manage streaming data applications. In addition to being built on cloud-native principles and technologies, it has a low resource footprint and low latency, while providing similar guarantees for your data as other streaming platforms (for example, Apache Kafka).
Another interesting Fluvio feature is SmartModules. SmartModules expose programmable data-streaming functions using WebAssembly, which allows you to manipulate your data stream in real time. The data stays within the Fluvio cluster, so you don't need to access any external services (like Lambda or Functions). Fluvio provides client libraries for several popular programming languages.
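For example, a filter SmartModule can drop records from the stream before they ever reach a consumer. A minimal sketch of the workflow, assuming the SmartModule Development Kit (smdk) is installed; the module name my-filter and the topic name are hypothetical:
# Scaffold and build a filter SmartModule (compiles to WebAssembly)
smdk generate my-filter
cd my-filter && smdk build
# Register the compiled module with the cluster
smdk load
# Apply the filter while consuming a topic (topic name is illustrative)
fluvio consume my-topic --smartmodule my-filter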
Using Fluvio with Logging operator has several benefits when logging and monitoring applications: you can transform and filter log data in real time with WASM-based SmartModules, the data never has to leave the Fluvio cluster, and the platform's low resource footprint keeps the pipeline's overhead small.
In this post, you'll learn how to create a simple logging pipeline for your Kubernetes cluster to send your log data to Fluvio. (For reference, we’ve previously written about centralized logging within Kubernetes and its many benefits.) The pipeline will complete the following steps:
1. Logging operator collects the logs from your cluster.
2. Logging operator sends the log messages to an MQTT broker.
3. A Fluvio MQTT connector fetches the messages from the broker and adds them to a Fluvio topic.
To implement this architecture on your Kubernetes cluster, you'll need to:
1. Set up a Fluvio cluster and create a topic.
2. Deploy an MQTT broker.
3. Create an MQTT connector for Fluvio.
4. Install Logging operator and configure it to send the logs to the MQTT broker.
The Fluvio CLI (command-line interface) is an all-in-one tool for setting up, interacting with, and managing Fluvio clusters.
1. Install the Fluvio CLI by running the following command:
curl -fsS https://packages.fluvio.io/v1/install.sh | bash
2. Add ~/.fluvio/bin/ to your PATH variable.
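For example, on bash you could make the change permanent like this (adjust the profile file for your shell):
# Append the Fluvio binary directory to PATH
echo 'export PATH="$HOME/.fluvio/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc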
3. Set your KUBECONFIG context to the cluster.
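A quick sketch, assuming your target context is named my-cluster (a hypothetical name; list the available contexts first if you're unsure):
# Show the available contexts, then switch to the one running your cluster
kubectl config get-contexts
kubectl config use-context my-cluster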
4. Start the Fluvio cluster by running the following command. (This can take a few minutes.)
fluvio cluster start
5. Verify the cluster by checking the version and status with the following command:
fluvio version
The output should look something like this:
Release Channel : stable
Fluvio CLI : 0.10.2
Fluvio CLI SHA256 : 61808537aa82f7dceb24cfa5cc112cbb98fe507688ebd317afae2fe44f2a0f5e
Fluvio channel frontend SHA256 : b9a07efe2b251d77dd31d65639b1010b03fa1dd34524d957bcc2e5872f80ee65
Fluvio Platform : 0.10.2 (local)
Git Commit : 75be9c2003dbc22d3e8c2da20cb73841725b410a
OS Details : Darwin 13.1 (kernel 22.2.0)
=== Plugin Versions ===
Fluvio Runner (fluvio-run) : 0.0.0
Infinyon Cloud CLI (fluvio-cloud) : 0.2.5
6. Configure port forwarding to the controller and the streaming processing unit (SPU) Fluvio services.
kubectl port-forward service/fluvio-sc-public 30003:9003
kubectl port-forward service/fluvio-sc-internal 30004:9005
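Note that each kubectl port-forward command blocks its terminal, so you may want to run them in the background instead:
# Run both forwards in the background of the same shell
kubectl port-forward service/fluvio-sc-public 30003:9003 &
kubectl port-forward service/fluvio-sc-internal 30004:9005 &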
7. Create a new topic called `log-transformer`:
fluvio topic create log-transformer
The output should be similar to:
topic "log-transformer" created
8. Send a test message:
echo "msg1" | fluvio produce log-transformer
9. Consume the test message from the topic:
fluvio consume log-transformer -B -d
The output should be similar to:
Consuming records from 'log-transformer' starting from the beginning of log
msg1
The MQTT broker acts as a mediator between Logging operator and Fluvio: Logging operator sends the log messages to the MQTT broker, and a Fluvio connector fetches them from there. This example uses the [Eclipse Mosquitto](https://mosquitto.org) MQTT broker.
Install the Mosquitto MQTT broker by running the following commands:
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm repo update
helm install mosquitto k8s-at-home/mosquitto
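To verify that the broker is up before wiring anything to it, check its pod and service. The label selector below assumes the chart's default labels:
# The chart creates a service called mosquitto listening on port 1883
kubectl get pods,svc -l app.kubernetes.io/name=mosquitto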
Create an MQTT Connector, so Fluvio can fetch and process the messages from the MQTT broker.
1. Clone the fluvio-connectors repository and create an MQTT connector.
git clone https://github.com/infinyon/fluvio-connectors.git
cd fluvio-connectors
2. Create a YAML file called mqtt-connector.yaml for the log-transformer topic:
cat > mqtt-connector.yaml <<EOF
version: latest
name: my-mqtt-new
type: mqtt-source
topic: log-transformer
direction: source
create-topic: true
parameters:
  mqtt_topic: "test/demo"
  payload_output_type: json
secrets:
  MQTT_URL: mqtt://mosquitto:1883
EOF
3. Build the connector module and apply the mqtt-connector.yaml file:
cargo run --bin connector-run -- apply --config mqtt-connector.yaml
Wait a few minutes until the build is finished.
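To test the connector end-to-end, you can publish a message straight to the MQTT topic and watch it arrive in the Fluvio topic. A minimal sketch, assuming the broker is reachable as mosquitto inside the cluster and using the mosquitto_pub client bundled in the eclipse-mosquitto image:
# Publish a JSON test message to the MQTT topic the connector subscribes to
kubectl run mqtt-test --rm -i --restart=Never --image=eclipse-mosquitto -- mosquitto_pub -h mosquitto -t test/demo -m '{"msg":"hello"}'
# The record should show up in the Fluvio topic
fluvio consume log-transformer -B -d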
The next stage of your data collection plan should be to get your cluster’s logs sorted. The best way to do it? Install the Logging operator to collect the logs from your cluster and send them to the MQTT broker.
1. Install Logging operator. The easiest way is with Helm:
helm repo add kube-logging https://kube-logging.github.io/helm-charts
helm repo update
helm upgrade --install --wait --create-namespace --namespace logging logging-operator kube-logging/logging-operator
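Verify that the operator is running before creating its custom resources:
# The logging-operator pod should be in Running state
kubectl -n logging get pods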
2. Create a Logging resource.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: fluvio-test
spec:
  controlNamespace: default
  enableRecreateWorkloadOnImmutableFieldChange: true
  fluentbit:
    bufferStorage: {}
    bufferStorageVolume:
      hostPath:
        path: ""
    bufferVolumeImage: {}
    filterKubernetes: {}
    image: {}
    inputTail:
      storage.type: filesystem
    positiondb:
      hostPath:
        path: ""
    resources: {}
    updateStrategy: {}
  syslogNG:
    jsonKeyDelim: "~"
EOF
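Once the Logging resource is applied, the operator deploys the log collector components into the control namespace. You can check that the Fluent Bit and syslog-ng pods have started (their names are typically prefixed with the fluvio-test resource name):
kubectl -n default get pods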
3. Create a SyslogNGOutput resource to instruct Logging operator to send the incoming messages to MQTT.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: mqtt
  namespace: default
spec:
  mqtt:
    address: tcp://mosquitto:1883
    template: |
      $(format-json --subkeys json~ --key-delimiter ~)
    topic: test/demo
EOF
4. Create a SyslogNGFlow resource.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: testflow
  namespace: default
spec:
  localOutputRefs:
    - mqtt
  match: {}
EOF
Now that every piece of the logging pipeline is in place, you can consume messages from Fluvio again. Run:
fluvio consume log-transformer -B -d
The log messages of your cluster should appear in the topic.
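Since the -B and -d flags read the topic from the beginning and then disconnect, you can also tail only new records as they arrive by omitting them:
# Stream new records continuously as they are produced
fluvio consume log-transformer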
Fluvio is an open-source, cloud-native, distributed streaming platform that provides similar assurances to Apache Kafka but requires far fewer resources. Its low footprint and its ability to process data streams in real time using WASM plugins make it especially suitable for logging pipelines. This post has shown you how to build a simple logging pipeline using Fluvio and the Logging operator. In the future, we hope that Fluvio will be able to receive data directly from Logging operator, without the need for an intermediary broker.
If you want other advanced techniques for building a data collection plan, we recommend visiting our post on the advanced logging features available in Kubernetes, which also touches on the different uses of Logging operator.