Apache Spark 2.0 with Java: Learn Spark from a Big Data Guru

Learn to analyze large data sets with Apache Spark through 10+ hands-on examples. Take your big data skills to the next level.
File Size: 1.28 GB
Total length: 3h 27m
Instructor: Tao W.
Language: English
Last update: 5/2018
Ratings: 4.5/5

What you’ll learn

An overview of the architecture of Apache Spark.
Work with Apache Spark’s primary abstraction, resilient distributed datasets (RDDs), to process and analyze large data sets.
Develop Apache Spark 2.0 applications using RDD transformations and actions and Spark SQL.
Scale up Spark applications on a Hadoop YARN cluster through Amazon’s Elastic MapReduce service.
Analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL.
Share information across different nodes of an Apache Spark cluster using broadcast variables and accumulators.
Advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
Best practices for working with Apache Spark in the field.
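The RDD work above rests on Spark's split between lazy transformations (map, filter, flatMap) and eager actions (collect, count), which only trigger computation when called. Real Spark code needs a SparkContext and the spark-core dependency, so as a rough stdlib analogy — not Spark's actual API — java.util.stream shows the same shape: intermediate operations stay lazy until a terminal operation runs. A minimal word-count sketch in that style (class and method names are illustrative only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// java.util.stream used as a stdlib stand-in for an RDD:
// intermediate operations (flatMap, filter) are lazy, and only a
// terminal operation (collect) triggers evaluation -- the same split
// Spark makes between RDD transformations and actions.
public class WordCountSketch {

    // Split lines into words and count occurrences of each word.
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(word -> !word.isEmpty())       // lazy, like an RDD filter
                .collect(Collectors.groupingBy(        // terminal, like an action
                        Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("spark makes big data simple",
                                           "big data with spark");
        System.out.println(wordCount(lines));
    }
}
```

In Spark's Java API the same pipeline would be written against a JavaRDD with the same lambda style, which is why the course recommends (but does not require) Java 8 experience.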

Requirements

A computer running Windows, macOS, or Linux
Prior Java programming experience
Java 8 experience is preferred but NOT required

Description

What is this course about:
This course covers the fundamentals of Apache Spark with Java and teaches you everything you need to know about developing Spark applications with Java. By the end of this course, you will gain in-depth knowledge about Apache Spark, along with general big data analysis and manipulation skills, to help your company adopt Apache Spark for building a big data processing pipeline and data analytics applications.
This course covers 10+ hands-on big data examples. You will learn valuable knowledge about how to frame data analysis problems as Spark problems. Together we will work through examples such as aggregating NASA Apache web logs from different sources; exploring the price trend in California real estate data; writing Spark applications to find the median salary of developers in different countries from the Stack Overflow survey data; and developing a system to analyze how maker spaces are distributed across different regions in the United Kingdom. And much, much more.
What will you learn from this lecture:
In particular, you will learn:

An overview of the architecture of Apache Spark.
Develop Apache Spark 2.0 applications with Java using RDD transformations and actions and Spark SQL.
Work with Apache Spark’s primary abstraction, resilient distributed datasets (RDDs), to process and analyze large data sets.
Deep dive into advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
Scale up Spark applications on a Hadoop YARN cluster through Amazon’s Elastic MapReduce service.
Analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL.
Share information across different nodes of an Apache Spark cluster using broadcast variables and accumulators.
Best practices for working with Apache Spark in the field.
Big data ecosystem overview.
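Two of the sharing mechanisms listed above deserve a quick intuition. A broadcast variable is a small read-only dataset (for example, a lookup table) that Spark ships to every executor once instead of with every task; an accumulator is a counter that tasks only add to and that the driver reads back. As a hedged stdlib analogy — not Spark's Broadcast/LongAccumulator API — a shared immutable Map plus a LongAdder across parallel tasks shows the same pattern (names like resolveCodes are illustrative only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.Collectors;

// Stdlib analogy for two Spark sharing mechanisms:
//  - broadcast variable: a small read-only lookup map every task can read;
//  - accumulator: a counter tasks only add to, read back at the driver.
public class SharedStateSketch {

    public static List<String> resolveCodes(List<String> codes,
                                            Map<String, String> broadcastLookup,
                                            LongAdder unknownCounter) {
        return codes.parallelStream()            // parallel tasks, like executors
                .map(code -> {
                    String name = broadcastLookup.get(code);  // read-only lookup
                    if (name == null) {
                        unknownCounter.increment();   // accumulator-style update
                        return "UNKNOWN";
                    }
                    return name;
                })
                .collect(Collectors.toList());    // encounter order is preserved
    }

    public static void main(String[] args) {
        Map<String, String> airports = Map.of("SFO", "San Francisco",
                                              "JFK", "New York");
        LongAdder unknown = new LongAdder();
        List<String> out = resolveCodes(Arrays.asList("SFO", "XXX", "JFK"),
                                        airports, unknown);
        System.out.println(out + " unknown=" + unknown.sum());
    }
}
```

In real Spark code the map would be wrapped with sparkContext.broadcast(...) so each node receives one copy, which is exactly the pattern the accumulator and broadcast-variable lectures develop.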
Why learn Apache Spark:
Apache Spark gives us the ability to build cutting-edge applications. It is also one of the most compelling technologies of the last decade in terms of its disruption to the big data world.
Spark provides in-memory cluster computing which greatly boosts the speed of iterative algorithms and interactive data mining tasks.
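Why in-memory computing matters for iterative algorithms: without caching, each iteration re-runs the whole chain of transformations that produced the dataset, while persisting the result once — Spark's rdd.cache() / rdd.persist() — lets every later pass reuse it. A minimal stdlib sketch of that trade-off (plain Java collections standing in for an RDD; the numbers and method names are illustrative only):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch of why caching helps iterative jobs. Without a cache, each
// iteration would rebuild the "cleaned" dataset from scratch (what a
// non-persisted RDD lineage does); materializing it once lets every
// later pass reuse it.
public class CachingSketch {

    static int buildCount = 0;  // how many times the expensive step ran

    // Stand-in for an expensive transformation chain (parse + filter).
    static List<Integer> buildCleanData() {
        buildCount++;
        return IntStream.rangeClosed(1, 100)
                .filter(n -> n % 2 == 0)       // keep even numbers 2..100
                .boxed()
                .collect(Collectors.toList());
    }

    // Iterative algorithm: every pass reuses the materialized dataset.
    public static long runIterations(int iterations) {
        List<Integer> cached = buildCleanData();  // "persisted" exactly once
        long total = 0;
        for (int i = 0; i < iterations; i++) {
            total += cached.stream().mapToLong(Integer::longValue).sum();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(runIterations(3));  // 3 x (2+4+...+100) = 7650
        System.out.println(buildCount);        // expensive step ran only once
    }
}
```

The caching and persistence lecture covers when to make this trade (memory vs. recomputation) in real Spark jobs.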
Apache Spark is the next-generation processing engine for big data.
Tons of companies are adopting Apache Spark to extract meaning from massive data sets. Today, you have access to that same big data technology right on your desktop.
Apache Spark is becoming a must-have tool for big data engineers and data scientists.
About the author:
Since 2015, James has been helping his company adopt Apache Spark for building its big data processing pipeline and data analytics applications.
James’ company has gained massive benefits by adopting Apache Spark in production. In this course, he is going to share with you his years of knowledge and best practices of working with Spark in the field.
Why choose this course?
This course is very hands-on. James has put a lot of effort into providing you with not only the theory but also real-life examples of developing Spark applications that you can try out on your own laptop.
James has uploaded all the source code to GitHub, and you will be able to follow along on Windows, macOS, or Linux.
By the end of this course, James is confident that you will gain in-depth knowledge about Spark and general big data analysis and data manipulation skills. You’ll be able to develop Spark applications that analyze gigabytes of data, both on your laptop and in the cloud using Amazon’s Elastic MapReduce service!
30-day Money-back Guarantee!
You will get a 30-day money-back guarantee from Udemy for this course.
If you are not satisfied, simply ask for a refund within 30 days. You will get a full refund, no questions asked.
Are you ready to take your big data analysis skills and career to the next level? Take this course now!
You will go from zero to Spark hero in 4 hours.

Overview

Section 1: Get Started with Apache Spark

Lecture 1 Course Overview

Lecture 2 How to Take this Course and How to Get Support

Lecture 3 Text Lecture: How to Take this Course and How to Get Support

Lecture 4 Introduction to Spark

Lecture 5 Slides

Lecture 6 Java 9 Warning

Lecture 7 Install Java and Git

Lecture 8 Source Code

Lecture 9 Set up Spark project with IntelliJ IDEA

Lecture 10 Set up Spark project with Eclipse

Lecture 11 Text lecture: Set up Spark project with Eclipse

Lecture 12 Run our first Spark job

Lecture 13 Troubleshooting: Running Hadoop on Windows

Section 2: RDD

Lecture 14 RDD Basics

Lecture 15 Create RDDs

Lecture 16 Text Lecture: Create RDDs

Lecture 17 Map and Filter Transformation

Lecture 18 Solution to Airports by Latitude Problem

Lecture 19 FlatMap Transformation

Lecture 20 Text Lecture: flatMap Transformation

Lecture 21 Set Operation

Lecture 22 Sampling With Replacement and Sampling Without Replacement

Lecture 23 Solution for the Same Hosts Problem

Lecture 24 Actions

Lecture 25 Solution to Sum of Numbers Problem

Lecture 26 Important Aspects about RDD

Lecture 27 Summary of RDD Operations

Lecture 28 Caching and Persistence

Section 3: Spark Architecture and Components

Lecture 29 Spark Architecture

Lecture 30 Spark Components

Section 4: Pair RDD

Lecture 31 Introduction to Pair RDD

Lecture 32 Create Pair RDDs

Lecture 33 Filter and mapValues Transformations on Pair RDD

Lecture 34 Reduce By Key Aggregation

Lecture 35 Sample solution for the Average House problem

Lecture 36 Group By Key Transformation

Lecture 37 Sort By Key Transformation

Lecture 38 Sample Solution for the Sorted Word Count Problem

Lecture 39 Data Partitioning

Lecture 40 Join Operations

Lecture 41 Extra Learning Material: How are Big Companies using Apache Spark

Section 5: Advanced Spark Topic

Lecture 42 Accumulators

Lecture 43 Text Lecture: Accumulators

Lecture 44 Solution to StackOverflow Survey Follow-up Problem

Lecture 45 Broadcast Variables

Section 6: Spark SQL

Lecture 46 Introduction to Spark SQL

Lecture 47 Spark SQL in Action

Lecture 48 Spark SQL practice: House Price Problem

Lecture 49 Spark SQL Joins

Lecture 50 Strongly Typed Dataset

Lecture 51 Use Dataset or RDD

Lecture 52 Dataset and RDD Conversion

Lecture 53 Performance Tuning of Spark SQL

Lecture 54 Extra Learning Material: Avoid These Mistakes While Writing Apache Spark Program

Section 7: Running Spark in a Cluster

Lecture 55 Introduction to Running Spark in a Cluster

Lecture 56 Package Spark Application and Use spark-submit

Lecture 57 Run Spark Application on Amazon EMR (Elastic MapReduce) cluster

Section 8: Additional Learning Materials

Lecture 58 Future Learning

Lecture 59 Text Lecture: Future Learning

Lecture 60 Coupons to Our Other Courses

Who this course is for:

Anyone who wants to fully understand how Apache Spark technology works and learn how Apache Spark is being used in the field.
Software engineers who want to develop Apache Spark 2.0 applications using Spark Core and Spark SQL.
Data scientists or data engineers who want to advance their career by improving their big data processing skills.
