Apache Spark on Kubernetes series:
- Introduction to Spark on Kubernetes
- Scaling Spark made simple on Kubernetes
- The anatomy of Spark applications on Kubernetes
- Monitoring Apache Spark with Prometheus
- Spark History Server on Kubernetes
- Spark scheduling on Kubernetes demystified
- Spark Streaming Checkpointing on Kubernetes
- Deep dive into monitoring Spark and Zeppelin with Prometheus
- Apache Spark application resilience on Kubernetes

Apache Zeppelin on Kubernetes series:
- Running Zeppelin Spark notebooks on Kubernetes
- Running Zeppelin Spark notebooks on Kubernetes - deep dive

Apache Kafka on Kubernetes series:
- Kafka on Kubernetes - using etcd
In our last blog post we described how to configure spark-submit and the Spark History Server to gather event logs to Amazon S3. Since then, we've added more supported providers to Pipeline and broadened the available options, so Spark event logs can now easily be captured to Amazon S3, Microsoft Azure WASB and Google Cloud Storage. Let's see how this works. You can use our Helm deployment charts directly. We have the following umbrella charts:
For the spark-submit job, the Spark Resource Staging Server, Shuffle Service and Spark History Server should be enabled (by default they're not). If you want to experiment, you can find a few deployment examples here.
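As a minimal sketch, installing such a chart and flipping those switches might look like the lines below; the chart name and value keys here are hypothetical placeholders, so check the actual chart's values.yaml for the real ones:

# Hypothetical chart name and value keys - consult the actual chart for the real switches
helm install banzaicloud-stable/spark \
  --set historyServer.enabled=true \
  --set shuffleService.enabled=true \
  --set resourceStagingServer.enabled=true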
Note: the following steps are automated by Pipeline, but we list them here to help you understand what goes on behind the scenes, and to serve as a guide in case you'd like to reproduce them in your own environment without using Pipeline.
If you prefer to do things manually, you'll need to complete the following steps:
Enable event logging in the Spark driver and configure the event log folders for both spark-submit and the History Server. We've already covered this topic thoroughly in our previous blog post.
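As a quick reminder, these are Spark's standard event logging settings; the storage URL below is a placeholder for whichever bucket you use:

# spark-submit: enable event logging and point it at a cloud bucket (placeholder URL)
--conf spark.eventLog.enabled=true
--conf spark.eventLog.dir=<storage-url>/eventlogs
# Spark History Server: read the logs back from the same location
spark.history.fs.logDirectory=<storage-url>/eventlogs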
You need an image that includes the Hadoop FileSystem drivers for each cloud storage option:
- AWS: the libraries are included by default in Spark's distribution
- Azure: the SDK can be included using the hadoop-2.7 profile
- Google: the Connector has to be included as a dependency of the hadoop-cloud module
Currently, we build our Spark images based on Spark's k8s branch, since not all of its features have been ported to the master branch yet. You'll need a few patches to include the Google Connector; let's see what these are:
- the spark-hadoop-cloud module is not present in Spark k8s and has to be cherry-picked from the master branch
- the spark-hadoop-cloud module is not included in the docker bundle, so it has to be added there
- the Google Connector dependency has to be added to the spark-hadoop-cloud module; this patch also updates Guava to a newer version in the docker bundle, as the current one is quite old
We'll provide a patch that adds an optional Google Connector to the master branch as soon as these features are ported there, so that we can use it as a basis for our Spark images.
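For reference, building Spark with the cloud connectors from a source tree that already has these patches applied could look roughly like this (the profile names are Spark's standard build profiles, but verify them against the branch you're building):

# Build Spark with Kubernetes support plus the Hadoop cloud connectors:
# hadoop-2.7 pulls in the Azure SDK, hadoop-cloud builds the spark-hadoop-cloud module
./build/mvn -Pkubernetes -Phadoop-2.7 -Phadoop-cloud -DskipTests clean package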
Access is granted either by providing different access keys (this works on all cloud providers) or on the basis of policies/rules. Let's see what you need to set up on each cloud provider:
On Amazon, it's possible to gain access to S3 storage using policies. For example, you can add the following policy to your instance profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        ...
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListObjects",
        "s3:DeleteObject"
      ],
      "Resource": "*"
    }
  ]
}
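With such a policy attached to the instance profile, the s3a filesystem should pick up credentials from the instance metadata, so pointing the event log directory at the bucket is all that's left (the bucket name below is a placeholder):

# No explicit keys needed - s3a falls back to the instance profile credentials
--conf spark.eventLog.dir=s3a://my-spark-logs/eventlogs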
On Google Cloud, if the service account used by your cluster's nodes has access to the bucket (which is the case by default when the bucket and the cluster are created in the same project), the only thing you have to add is the following scope to your node config:
Config: &gke.NodeConfig{
    MachineType:    nodePoolModel.NodeInstanceType,
    ServiceAccount: nodePoolModel.ServiceAccount,
    OauthScopes: []string{
        ...
        // read/write access to Google Cloud Storage buckets
        "https://www.googleapis.com/auth/devstorage.read_write",
    },
},
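With that scope in place, credentials come from the node's service account, so the event log directory can point straight at the bucket through the connector's gs:// scheme (the bucket name below is a placeholder):

# The Google Connector authenticates via the node's service account and scopes
--conf spark.eventLog.dir=gs://my-spark-logs/eventlogs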
On Azure, there is no role-based access via the Hadoop FS connector so far, so you have to provide your Storage Account credentials, azureStorageAccountName and azureStorageAccessKey, to both the History Server and spark-submit as options:
-Dspark.hadoop.fs.azure.account.key.{{ azureStorageAccountName }}.blob.core.windows.net={{ azureStorageAccessKey }}
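Spelled out as spark-submit --conf flags, and with the event log directory on the wasb:// scheme, this could look as follows (the container, account and key values are placeholders):

# Placeholder account, container and key - substitute your own Storage Account values
--conf spark.hadoop.fs.azure.account.key.myaccount.blob.core.windows.net=<azureStorageAccessKey>
--conf spark.eventLog.dir=wasb://spark-logs@myaccount.blob.core.windows.net/eventlogs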
Keep in mind that these storage buckets have to be created beforehand. Pipeline automates those steps as well, utilizing a special Kubernetes operator that automatically creates buckets on any cloud provider.