Apache Spark 2.x Cookbook

Rishi Yadav
Description

While Apache Spark 1.x gained a lot of traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of APIs, schema awareness, performance, Structured Streaming, and simplified building blocks, letting you build better, faster, smarter, and more accessible big data applications. This book uncovers all these features in the form of structured recipes to analyze and mature large and complex sets of data.

Starting with installing and configuring Apache Spark with various cluster managers, you will learn to set up development environments. Further on, you will be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with various sources, such as Twitter Stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark.

Last but not least, the final few chapters delve deeper into the concepts of graph processing using GraphX, securing your implementations, cluster optimization, and troubleshooting.


Title Page

Apache Spark 2.x Cookbook
Cloud-ready recipes to do analytics and data science on Apache Spark
Rishi Yadav

BIRMINGHAM - MUMBAI

Copyright

Apache Spark 2.x Cookbook

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: May 2017

Production reference: 1300517

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham 
B3 2PB, UK.

ISBN 978-1-78712-726-5

www.packtpub.com

Credits

Author

Rishi Yadav

Copy Editor

Gladson Monteiro

Reviewer

Prashant Verma

Project Coordinator

Nidhi Joshi 

Commissioning Editor

Amey Varangaonkar

Proofreader

Safis Editing

Acquisition Editor

Vinay Argekar 

Indexer

Pratik Shirodkar 

Content Development Editor

Jagruti Babaria

Graphics

Tania Dutta

Technical Editor

Dinesh Pawar

Production Coordinator

Shraddha Falebhai

About the Author

Rishi Yadav has 19 years of experience in designing and developing enterprise applications. He is an open source software expert and advises American companies on big data and public cloud trends. Rishi was honored as one of Silicon Valley's 40 under 40 in 2014. He earned his bachelor's degree from the prestigious Indian Institute of Technology, Delhi, in 1998.

About 12 years ago, Rishi started InfoObjects, a company that helps data-driven businesses gain new insights into data. InfoObjects combines the power of open source and big data to solve business challenges for its clients and has a special focus on Apache Spark. The company has been on the Inc. 5000 list of the fastest growing companies for 6 years in a row. InfoObjects has also been named the best place to work in the Bay Area in 2014 and 2015.

Rishi is an open source contributor and active blogger.

This book is dedicated to my parents, Ganesh and Bhagwati Yadav; I would not be where I am without their unconditional support, trust, and providing me the freedom to choose a path of my own. Special thanks go to my life partner, Anjali, for providing immense support and putting up with my long, arduous hours (yet again). Our 9-year-old son, Vedant, and niece, Kashmira, were the unrelenting force behind keeping me and the book on track. Big thanks to InfoObjects' CTO and my business partner, Sudhir Jangir, for providing valuable feedback and also contributing with recipes on enterprise security, a topic he is passionate about; to our SVP, Bart Hickenlooper, for taking the charge in leading the company to the next level; to Tanmoy Chowdhury and Neeraj Gupta for their valuable advice; to Yogesh Chandani, Animesh Chauhan, and Katie Nelson for running operations skillfully so that I could focus on this book; and to our internal review team (especially Rakesh Chandran) for ironing out the kinks. I would also like to thank Marcel Izumi for, as always, providing creative visuals. I cannot miss thanking our dog, Sparky, for giving me company on my long nights out. Last but not least, special thanks to our valuable clients, partners, and employees, who have made InfoObjects the best place to work at and, needless to say, an immensely successful organization.

About the Reviewer

Prashant Verma started his IT career in 2011 as a Java developer at Ericsson, working in the telecom domain. After a couple of years of Java EE experience, he moved into the big data domain and has worked on almost all the popular big data technologies, such as Hadoop, Spark, Flume, Mongo, and Cassandra. He has also played with Scala. Currently, he works with QA Infotech as a lead data engineer, working on solving e-learning problems using analytics and machine learning.

Prashant has also been working as a freelance consultant in his spare time.

 

I want to thank Packt Publishing for giving me the chance to review the book as well as my employer and my family for their patience while I was busy working on this book.

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1787127265.

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Table of Contents

www.PacktPub.com

Preface

What this book covers

What you need for this book

Who this book is for

Sections

Getting ready

How to do it...

How it works...

There's more...

See also

Conventions

Reader feedback

Customer support

Downloading the color images of this book

Errata

Piracy

Questions

Getting Started with Apache Spark

Introduction

Leveraging Databricks Cloud

How to do it...

How it works...

Cluster

Notebook

Table

Library

Deploying Spark using Amazon EMR

What it represents is much bigger than what it looks like

EMR's architecture

How to do it...

How it works...

EC2 instance types

T2 - Free Tier Burstable (EBS only)

M4 - General purpose (EBS only)

C4 - Compute optimized

X1 - Memory optimized

R4 - Memory optimized

P2 - General purpose GPU

I3 - Storage optimized

D2 - Storage optimized

Installing Spark from binaries

Getting ready

How to do it...

Building the Spark source code with Maven

Getting ready

How to do it...

Launching Spark on Amazon EC2

Getting ready

How to do it...

See also

Deploying Spark on a cluster in standalone mode

Getting ready

How to do it...

How it works...

See also

Deploying Spark on a cluster with Mesos

How to do it...

Deploying Spark on a cluster with YARN

Getting ready

How to do it...

How it works...

Understanding SparkContext and SparkSession

SparkContext

SparkSession

Understanding resilient distributed dataset - RDD

How to do it...

Developing Applications with Spark

Introduction

Exploring the Spark shell

How to do it...

There's more...

Developing a Spark application in Eclipse with Maven

Getting ready

How to do it...

Developing a Spark application in Eclipse with SBT

How to do it...

Developing a Spark application in IntelliJ IDEA with Maven

How to do it...

Developing a Spark application in IntelliJ IDEA with SBT

How to do it...

Developing applications using the Zeppelin notebook

How to do it...

Setting up Kerberos to do authentication

How to do it...

There's more...

Enabling Kerberos authentication for Spark

How to do it...

There's more...

Securing data at rest

Securing data in transit

Spark SQL

Understanding the evolution of schema awareness

Getting ready

DataFrames

Datasets

Schema-aware file formats

Understanding the Catalyst optimizer

Analysis

Logical plan optimization

Physical planning

Code generation

Inferring schema using case classes

How to do it...

There's more...

Programmatically specifying the schema

How to do it...

How it works...

Understanding the Parquet format

How to do it...

How it works...

Partitioning

Predicate pushdown

Parquet Hive interoperability

Loading and saving data using the JSON format

How to do it...

How it works...

Loading and saving data from relational databases

Getting ready

How to do it...

Loading and saving data from an arbitrary source

How to do it...

There's more...

Understanding joins

Getting ready

How to do it...

How it works...

Shuffle hash join

Broadcast hash join

The cartesian join

There's more...

Analyzing nested structures

Getting ready

How to do it...

Working with External Data Sources

Introduction

Loading data from the local filesystem

How to do it...

Loading data from HDFS

How to do it...

Loading data from Amazon S3

How to do it...

Loading data from Apache Cassandra

How to do it...

How it works...

CAP Theorem

Cassandra partitions

Consistency levels

Spark Streaming

Introduction

Classic Spark Streaming

Structured Streaming

WordCount using Structured Streaming

How to do it...

Taking a closer look at Structured Streaming

How to do it...

There's more...

Streaming Twitter data

How to do it...

Streaming using Kafka

Getting ready

How to do it...

Understanding streaming challenges

Late arriving/out-of-order data

Maintaining the state in between batches

Message delivery reliability

Streaming is not an island

Getting Started with Machine Learning

Introduction

Creating vectors

Getting ready

How to do it...

How it works...

Calculating correlation

Getting ready

How to do it...

Understanding feature engineering

Feature selection

Quality of features

Number of features

Feature scaling

Feature extraction

TF-IDF

Term frequency

Inverse document frequency

How to do it...

Understanding Spark ML

Getting ready

How to do it...

Understanding hyperparameter tuning

How to do it...

Supervised Learning with MLlib — Regression

Introduction

Using linear regression

Getting ready

How to do it...

There's more...

Understanding the cost function

There's more...

Doing linear regression with lasso

Bias versus variance

How to do it...

Doing ridge regression

Supervised Learning with MLlib — Classification

Introduction

Doing classification using logistic regression

Getting ready

How to do it...

There's more...

What is ROC?

Doing binary classification using SVM

Getting ready

How to do it...

Doing classification using decision trees

Getting ready

How to do it...

How it works...

There's more...

Doing classification using random forest

Getting ready

How to do it...

Doing classification using gradient boosted trees

Getting ready

How to do it...

Doing classification with Naïve Bayes

Getting ready

How to do it...

Unsupervised Learning

Introduction

Clustering using k-means

Getting ready

How to do it...

Dimensionality reduction with principal component analysis

Getting ready

How to do it...

Dimensionality reduction with singular value decomposition

Getting ready

How to do it...

Recommendations Using Collaborative Filtering

Introduction

Collaborative filtering using explicit feedback

Getting ready

How to do it...

Adding my recommendations and then testing predictions

There's more...

Collaborative filtering using implicit feedback

How to do it...

Graph Processing Using GraphX and GraphFrames

Introduction

Fundamental operations on graphs

Getting ready

How to do it...

Using PageRank

Getting ready

How to do it...

Finding connected components

Getting ready

How to do it...

Performing neighborhood aggregation

Getting ready

How to do it...

Understanding GraphFrames

How to do it...

Optimizations and Performance Tuning

Optimizing memory

How to do it...

How it works...

Garbage collection

Mark and sweep

G1

Spark memory allocation

Leveraging speculation

How to do it...

Optimizing joins

How to do it...

Using compression to improve performance

How to do it...

Using serialization to improve performance

How to do it...

There's more...

Optimizing the level of parallelism

How to do it...

Understanding project Tungsten

How to do it...

How it works...

Tungsten phase 1

Bypassing GC

Cache conscious computation

Code generation for expression evaluation

Tungsten phase 2

Whole-stage code generation

In-memory columnar format

Preface

The success of Hadoop as a big data platform raised user expectations, both in terms of solving different analytics challenges and reducing latency. Various tools evolved over time, but when Apache Spark arrived, it provided a single runtime to address all these challenges. It eliminated the need to combine multiple tools, each with its own challenges and learning curves. By using memory for persistent storage in addition to compute, Apache Spark eliminates the need to store intermediate data on disk and increases processing speed by up to 100 times. It also provides a single runtime that addresses various analytics needs, such as machine learning and real-time streaming, using various libraries. This book covers the installation and configuration of Apache Spark and building solutions using the Spark Core, Spark SQL, Spark Streaming, MLlib, and GraphX libraries.

For more information on this book's recipes, please visit infoobjects.com/spark-cookbook.

What this book covers

Chapter 1, Getting Started with Apache Spark, explains how to install Spark on various environments and cluster managers.

Chapter 2, Developing Applications with Spark, talks about developing Spark applications on different IDEs and using different build tools. 

Chapter 3, Spark SQL, takes you through the Spark SQL module, which helps you access Spark functionality using the SQL interface.

Chapter 4, Working with External Data Sources, covers how to read from and write to various data sources, such as the local filesystem, HDFS, Amazon S3, and Apache Cassandra.

Chapter 5, Spark Streaming, explores the Spark Streaming library to analyze data from real-time data sources, such as Kafka.

Chapter 6, Getting Started with Machine Learning, covers an introduction to machine learning and basic artifacts, such as vectors and matrices.

Chapter 7, Supervised Learning with MLlib – Regression, walks through supervised learning when the outcome variable is continuous.

Chapter 8, Supervised Learning with MLlib – Classification, discusses supervised learning when the outcome variable is discrete.

Chapter 9, Unsupervised Learning, covers unsupervised learning algorithms, such as k-means.

Chapter 10, Recommendations Using Collaborative Filtering, introduces building recommender systems using various techniques, such as ALS.

Chapter 11, Graph Processing Using GraphX and GraphFrames, talks about various graph processing algorithms using GraphX.

Chapter 12, Optimizations and Performance Tuning, covers various optimizations on Apache Spark and performance tuning techniques.

What you need for this book

There are two ways to work with the recipes in this book:

The first is to use Databricks Community Cloud at https://community.cloud.databricks.com. It is a free notebook environment provided by Databricks. All the sample data for this book has also been uploaded to an Amazon Web Services S3 bucket named sparkcookbook.

The second option is to use the InfoObjects Big Data Sandbox, which is a virtual machine built on top of Ubuntu. This software can be downloaded from http://www.infoobjects.com.

Who this book is for

If you are a data engineer, an application developer, or a data scientist who would like to leverage the power of Apache Spark to get better insights from big data, then this is the book for you.

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it..., How it works..., There's more..., and See also).

To give clear instructions on how to complete a recipe, we use these sections as follows:

Getting ready

This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

How to do it...

This section contains the steps required to follow the recipe.

How it works...

This section usually consists of a detailed explanation of what happened in the previous section.

There's more...

This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information relevant to the recipe.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you get the most from your purchase.

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from: https://www.packtpub.com/sites/default/files/downloads/ApacheSpark2xCookbook_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Getting Started with Apache Spark

In this chapter, we will set up Spark and configure it. This chapter contains the following recipes:

Leveraging Databricks Cloud

Deploying Spark using Amazon EMR

Installing Spark from binaries

Building the Spark source code with Maven

Launching Spark on Amazon EC2

Deploying Spark on a cluster in standalone mode

Deploying Spark on a cluster with Mesos

Deploying Spark on a cluster with YARN

Understanding SparkContext and SparkSession

Understanding Resilient Distributed Datasets (RDD)

Introduction

Apache Spark is a general-purpose cluster computing system for processing big data workloads. What sets Spark apart from its predecessors, such as Hadoop MapReduce, is its speed, ease of use, and sophisticated analytics.

It was originally developed at AMPLab, UC Berkeley, in 2009. It was made open source in 2010 under the BSD license and switched to the Apache 2.0 license in 2013. Toward the latter part of 2013, the creators of Spark founded Databricks to focus on Spark's development and future releases.

Databricks offers Spark as a service in the Amazon Web Services (AWS) Cloud, called Databricks Cloud. In this book, we are going to maximize the use of AWS as a data storage layer.

Talking about speed, Spark can achieve subsecond latency on big data workloads. To achieve such low latency, Spark makes use of memory for storage. In MapReduce, memory is primarily used for the actual computation. Spark uses memory both to compute and store objects.
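
The difference is easy to see in code. The following is a minimal sketch, assuming Spark 2.x running in local mode; it marks an RDD for in-memory storage so that the second action is served from memory instead of being recomputed:

import org.apache.spark.sql.SparkSession

// A minimal sketch, assuming Spark 2.x running in local mode
val spark = SparkSession.builder()
  .appName("CachingSketch")
  .master("local[*]")
  .getOrCreate()

val numbers = spark.sparkContext.parallelize(1 to 1000000)
numbers.cache()           // mark the RDD for in-memory storage

println(numbers.count())  // first action computes the RDD and caches it
println(numbers.count())  // second action reads the cached data
spark.stop()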

Spark also provides a unified runtime connecting to various big data storage sources, such as HDFS, Cassandra, and S3. It also provides a rich set of high-level libraries for different big data compute tasks, such as machine learning, SQL processing, graph processing, and real-time streaming. These libraries make development faster and can be combined in an arbitrary fashion.

Though Spark is written in Scala (and this book focuses only on Scala recipes), it also supports Java, Python, and R.

Spark is an open source community project, and everyone uses the pure open source Apache distributions for deployments, unlike Hadoop, which has multiple distributions available with vendor enhancements.

The following figure shows the Spark ecosystem:

Spark's runtime runs on top of a variety of cluster managers, including YARN (Hadoop's compute framework), Mesos, and Spark's own cluster manager called Standalone mode. Alluxio is a memory-centric distributed file system that enables reliable file sharing at memory speed across cluster frameworks. In short, it is an off-heap storage layer in memory that helps share data across jobs and users. Mesos is a cluster manager, which is evolving into a data center operating system. YARN is Hadoop's compute framework and has a robust resource management feature that Spark can seamlessly use.
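
The cluster manager is selected through the master URL, as the following sketch illustrates. The host names and ports are placeholders; in practice, the master is usually supplied through spark-submit rather than hardcoded:

import org.apache.spark.sql.SparkSession

// Illustrative only; host names and ports are placeholders
val spark = SparkSession.builder()
  .appName("ClusterManagerSketch")
  .master("spark://master-host:7077")   // standalone cluster manager
  // .master("yarn")                    // YARN; reads HADOOP_CONF_DIR
  // .master("mesos://mesos-host:5050") // Mesos
  // .master("local[*]")                // local threads, no cluster manager
  .getOrCreate()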

Apache Spark, initially devised as a replacement for MapReduce, had a good proportion of its workloads running on-premises. Now, most of these workloads have moved to public clouds (AWS, Azure, and GCP). In a public cloud, we see two types of applications:

Outcome-driven applications   

Data transformation pipelines

For outcome-driven applications, where the goal is to derive a predefined signal/outcome from the given data, Databricks Cloud fits the bill perfectly. For traditional data transformation pipelines, Amazon's Elastic MapReduce (EMR) does a great job. 

Leveraging Databricks Cloud

Databricks is the company behind Spark. It has a cloud platform that takes out all of the complexity of deploying Spark and provides you with a ready-to-go environment with notebooks for various languages. Databricks Cloud also has a community edition that provides a single-node instance with 6 GB of RAM for free. It is a great starting place for developers. The Spark cluster it creates also terminates after 2 hours of sitting idle.

All the recipes in this book can be run on either the InfoObjects Sandbox or the Databricks Cloud community edition. All the data for the recipes in this book has also been ported to a public S3 bucket called sparkcookbook. Just load these recipes onto the Databricks Cloud community edition, and they will work seamlessly.
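
As an illustration, a recipe's input can be read straight from that bucket. The object key below is a placeholder rather than a path confirmed by this book, and on a plain Spark installation, the s3a connector (hadoop-aws) and AWS credentials would also need to be configured:

// Placeholder key; substitute the dataset a given recipe actually uses
val lines = spark.read.textFile("s3a://sparkcookbook/some-dataset")
lines.show(5)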

How it works...

Let's look at the key concepts in Databricks Cloud.

Cluster

The concept of clusters is self-evident. A cluster contains a master node and one or more slave nodes. These nodes are EC2 nodes, which we are going to learn more about in the next recipe. 

Notebook

The notebook is the most powerful feature of Databricks Cloud. You can write your code in a Scala/Python/R notebook or a simple SQL notebook. These notebooks cover the whole nine yards: you can use them to write code like a programmer, use SQL like an analyst, or do visualization like a Business Intelligence (BI) expert.

Table

Tables enable Spark to run SQL queries.
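
Outside the Databricks UI, the closest code-level analog is registering a DataFrame as a temporary view and then querying it with SQL. The input file in this sketch is a placeholder:

// A minimal sketch; people.json is a placeholder input file
val people = spark.read.json("people.json")
people.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 21").show()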

Library

Library is the section where you upload the libraries you would like to attach to your notebooks. The beauty is that you do not have to upload libraries manually; you can simply provide the Maven coordinates, and Databricks Cloud will find the library for you and attach it.

Deploying Spark using Amazon EMR