Apache Spark Fundamentals Training Course
Apache Spark is an analytics engine designed to distribute data across a cluster in order to process it in parallel. It contains modules for streaming, SQL, machine learning and graph processing.
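For a flavor of what that looks like in practice, here is a minimal PySpark sketch (assuming the pyspark package is installed; it runs in local mode rather than on a real cluster):

```python
from pyspark.sql import SparkSession

# Start a local session; on a real cluster, master would point at YARN, Kubernetes, or a standalone master
spark = (
    SparkSession.builder
    .appName("spark-fundamentals-demo")
    .master("local[*]")
    .getOrCreate()
)

# Distribute a small dataset across the available cores
df = spark.createDataFrame([("alice", 34), ("bob", 45), ("carol", 29)], ["name", "age"])

# The SQL module queries the distributed data declaratively
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()

spark.stop()
```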
This instructor-led, live training (online or onsite) is aimed at engineers who wish to deploy an Apache Spark system for processing very large amounts of data.
By the end of this training, participants will be able to:
- Install and configure Apache Spark.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Quickly read in and analyze very large data sets.
- Integrate Apache Spark with other machine learning tools.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction
- Apache Spark vs Hadoop MapReduce
Overview of Apache Spark Features and Architecture
Choosing a Programming Language
Setting up Apache Spark
Creating a Sample Application
Choosing the Data Set
Running Data Analysis on the Data
Processing Structured Data with Spark SQL
Processing Streaming Data with Spark Streaming
Integrating Apache Spark with Third-Party Machine Learning Tools
Using Apache Spark for Graph Processing
Optimizing Apache Spark
Troubleshooting
Summary and Conclusion
Requirements
- Experience with the Linux command line
- A general understanding of data processing
- Programming experience with Java, Scala, Python, or R
Audience
- Developers
Open Training Courses require 5+ participants.
Testimonials (5)
A lot of practical examples, different ways to approach the same problem, and sometimes not-so-obvious tricks on how to improve the current solution
Rafal - Nordea
Course - Apache Spark MLlib
The live examples
Ahmet Bolat - Accenture Industrial SS
Course - Python, Spark, and Hadoop for Big Data
very interactive...
Richard Langford
Course - SMACK Stack for Data Science
Sufficient hands-on, trainer is knowledgeable
Chris Tan
Course - A Practical Introduction to Stream Processing
Get to learn Spark Streaming, Databricks and AWS Redshift
Lim Meng Tee - Jobstreet.com Shared Services Sdn. Bhd.
Course - Apache Spark in the Cloud
Related Courses
Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
21 Hours
This course is intended for developers and data scientists who want to understand and implement artificial intelligence in their applications. Special focus is placed on data analytics, distributed AI, and natural language processing.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led, live training in Kazakhstan (online or onsite) is aimed at intermediate-level data scientists and engineers who wish to use Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark (see the sketch after this list).
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
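As a hedged sketch of the first goal above, a Colab setup might look like the following (the CSV path is the sample file Colab instances typically ship with, so treat it as an assumption):

```python
# In a Colab cell, install PySpark first:
# !pip install pyspark

from pyspark.sql import SparkSession

# Colab provides a single VM, so Spark runs in local mode across all available cores
spark = (
    SparkSession.builder
    .appName("colab-bigdata-demo")
    .master("local[*]")
    .getOrCreate()
)

# Read and summarize a CSV; the path below is Colab's bundled sample data (an assumption)
df = spark.read.csv(
    "/content/sample_data/california_housing_train.csv",
    header=True,
    inferSchema=True,
)
df.describe().show()
```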
Big Data Analytics in Health
21 Hours
Big data analytics involves examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.
The health industry has massive amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to health data holds huge potential for deriving insights that improve the delivery of healthcare. However, the enormity of these datasets poses great challenges for analysis and for practical application in a clinical environment.
In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark.
- Understand the characteristics of medical data.
- Apply big data techniques to deal with medical data.
- Study big data systems and algorithms in the context of health applications.
Audience
- Developers
- Data Scientists
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice.
Note
- To request a customized training for this course, please contact us to arrange.
Introduction to Graph Computing
28 Hours
In this instructor-led, live training in Kazakhstan, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.
By the end of this training, participants will be able to:
- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel (a Python sketch follows this list).
- View real-world big data problems in terms of graphs, processes and traversals.
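GraphX itself exposes Scala and Java APIs, so a Python sketch of the vertices-and-edges modeling described above is easiest with the separate GraphFrames package; this is a substitution for illustration, not part of the course outline, and the package coordinates must match your Spark version:

```python
from pyspark.sql import SparkSession
from graphframes import GraphFrame  # separate package, supplied via --packages graphframes:graphframes:<version>

spark = SparkSession.builder.appName("graph-demo").master("local[*]").getOrCreate()

# Model real-world objects as vertices and their relationships as edges
vertices = spark.createDataFrame([("a", "Alice"), ("b", "Bob"), ("c", "Carol")], ["id", "name"])
edges = spark.createDataFrame([("a", "b", "knows"), ("b", "c", "knows")], ["src", "dst", "relationship"])

g = GraphFrame(vertices, edges)
g.inDegrees.show()                                # how many relationships point at each vertex
g.bfs("name = 'Alice'", "name = 'Carol'").show()  # breadth-first traversal from Alice to Carol
```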
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in Kazakhstan (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as the storage engine for on-premises Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc. (see the sketch after this list).
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
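A hedged sketch of the S3 wiring mentioned in this list (the bucket, credentials, and hadoop-aws version are placeholders; the version must match the cluster's Hadoop build):

```python
from pyspark.sql import SparkSession

# s3a:// access needs the hadoop-aws module on the classpath; the version below is illustrative
spark = (
    SparkSession.builder
    .appName("s3-storage-demo")
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.3.4")
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")  # placeholder
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")  # placeholder
    .getOrCreate()
)

# Read directly from S3 instead of HDFS (bucket and path are hypothetical)
df = spark.read.parquet("s3a://my-bucket/events/")
df.printSchema()
```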
Hortonworks Data Platform (HDP) for Administrators
21 Hours
This instructor-led, live training in Kazakhstan (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of a combined Spark + Hadoop solution.
By the end of this training, participants will be able to:
- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
A Practical Introduction to Stream Processing
21 Hours
In this instructor-led, live training in Kazakhstan (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streams.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion (sketched after this list).
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
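To make the record-by-record idea concrete, a minimal Spark Structured Streaming sketch (the Kafka broker address and topic name are assumptions, and the Kafka connector package must match your Spark version):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# The Kafka connector is supplied at submit time, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 app.py
spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Subscribe to a continuous stream of records from a Kafka topic (names are placeholders)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Process each record as it arrives and print results continuously
query = (
    events.select(col("value").cast("string").alias("payload"))
    .writeStream.outputMode("append")
    .format("console")
    .start()
)
query.awaitTermination()
```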
SMACK Stack for Data Science
14 Hours
This instructor-led, live training in Kazakhstan (online or onsite) is aimed at data scientists who wish to use the SMACK stack to build data processing platforms for big data solutions.
By the end of this training, participants will be able to:
- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
Administration of Apache Spark
35 Hours
This instructor-led, live training in Kazakhstan (online or onsite) is aimed at beginner-level to intermediate-level system administrators who wish to deploy, maintain, and optimize Spark clusters.
By the end of this training, participants will be able to:
- Install and configure Apache Spark in various environments.
- Manage cluster resources and monitor Spark applications.
- Optimize the performance of Spark clusters.
- Implement security measures and ensure high availability.
- Debug and troubleshoot common Spark issues.
Apache Spark in the Cloud
21 Hours
Apache Spark's learning curve rises slowly at the beginning: it takes a lot of effort to get the first return. This course aims to jump through that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate an RDD from a DataFrame, learn the Python and Scala APIs, understand executors and tasks, and more. Following best practices, the course also focuses strongly on cloud deployment, Databricks, and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS.
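To illustrate the RDD-versus-DataFrame distinction the course draws, a minimal Python sketch (local mode, toy data):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-vs-dataframe").master("local[*]").getOrCreate()

# RDD: a low-level distributed collection, transformed with explicit functions
rdd = spark.sparkContext.parallelize([1, 2, 3, 4])
print(rdd.map(lambda x: x * 2).collect())  # [2, 4, 6, 8]

# DataFrame: a schema-aware table whose queries Spark's optimizer can plan for you
df = spark.createDataFrame([(1,), (2,), (3,), (4,)], ["n"])
df.selectExpr("n * 2 AS doubled").show()

spark.stop()
```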
AUDIENCE:
Data Engineer, DevOps, Data Scientist
Spark for Developers
21 Hours
OBJECTIVE:
This course will introduce Apache Spark. Students will learn how Spark fits into the Big Data ecosystem and how to use Spark for data analysis. The course covers the Spark shell for interactive data analysis, Spark internals, Spark APIs, Spark SQL, Spark Streaming, machine learning, and GraphX.
AUDIENCE :
Developers / Data Analysts
Scaling Data Pipelines with Spark NLP
14 Hours
This instructor-led, live training in Kazakhstan (online or onsite) is aimed at data scientists and developers who wish to use Spark NLP, built on top of Apache Spark, to develop, implement, and scale natural language text processing models and pipelines.
By the end of this training, participants will be able to:
- Set up the necessary development environment to start building NLP pipelines with Spark NLP.
- Understand the features, architecture, and benefits of using Spark NLP.
- Use the pre-trained models available in Spark NLP to implement text processing (see the sketch after this list).
- Learn how to build, train, and scale Spark NLP models for production-grade projects.
- Apply classification, inference, and sentiment analysis on real-world use cases (clinical data, customer behavior insights, etc.).
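A minimal sketch of the pre-trained-model usage mentioned in this list (it assumes the spark-nlp Python package is installed; the pipeline name follows the John Snow Labs quickstart):

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Starts a Spark session preconfigured with the Spark NLP jars
spark = sparknlp.start()

# Download a pre-trained pipeline and annotate raw text
pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("Spark NLP scales text processing on top of Apache Spark.")
print(result["entities"])
```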
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in Kazakhstan, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in Kazakhstan (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to those used by Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Apache Spark MLlib
35 Hours
MLlib is Spark’s machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, and dimensionality reduction, as well as lower-level optimization primitives and higher-level pipeline APIs.
It divides into two packages:
- spark.mllib contains the original API, built on top of RDDs.
- spark.ml provides a higher-level API, built on top of DataFrames, for constructing ML pipelines (see the sketch after this list).
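A minimal sketch of a spark.ml pipeline (the features, data, and model choice are illustrative only):

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-pipeline-demo").master("local[*]").getOrCreate()

# A tiny labeled dataset (toy values)
train = spark.createDataFrame(
    [(1.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 0.5, 1.0), (0.0, 0.8, 0.0)],
    ["f1", "f2", "label"],
)

# Chain feature assembly and a classifier into one DataFrame-based pipeline
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)

model.transform(train).select("label", "prediction").show()
spark.stop()
```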
Audience
This course is directed at engineers and developers seeking to utilize a built-in machine learning library for Apache Spark.