This book will introduce you to the most popular Scala tools, libraries, and frameworks through practical recipes around loading, manipulating, and preparing your data. It will also help you explore and make sense of your data using stunning and insightful visualizations and machine learning toolkits.
Starting with introductory recipes on utilizing the Breeze and Spark libraries, get to grips with how to import data from a host of possible sources and how to pre-process numerical, string, and date data. Next, you'll get an understanding of concepts that will help you visualize data using the Apache Zeppelin and Bokeh bindings in Scala, enabling exploratory data analysis. Discover how to program quintessential machine learning algorithms using the Spark ML library. Work through steps to scale your machine learning models and deploy them into a standalone cluster, EC2, YARN, and Mesos. Finally, dip into the powerful options presented by Spark Streaming and machine learning for streaming data, as well as utilizing Spark GraphX.
Copyright © 2015 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: October 2015
Production reference: 1261015
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78439-674-9
www.packtpub.com
Author
Arun Manivannan
Reviewers
Amir Hajian
Shams Mahmood Imam
Gerald Loeffler
Commissioning Editor
Nadeem N. Bagban
Acquisition Editor
Larissa Pinto
Content Development Editor
Rashmi Suvarna
Technical Editor
Tanmayee Patil
Copy Editors
Ameesha Green
Vikrant Phadke
Project Coordinator
Milton Dsouza
Proofreader
Safis Editing
Indexer
Rekha Nair
Production Coordinator
Manu Joseph
Cover Work
Manu Joseph
Arun Manivannan has been an engineer in various multinational companies, tier-1 financial institutions, and start-ups, primarily focusing on developing distributed applications that manage and mine data. His languages of choice are Scala and Java, but he also meddles around with various others for kicks. He blogs at http://rerun.me.
Arun holds a master's degree in software engineering from the National University of Singapore.
He also holds degrees in commerce, computer applications, and HR management. His interests and education could probably be a good dataset for clustering.
I am deeply indebted to my dad, Manivannan, who taught me the value of persistence, hard work and determination in life, and my mom, Arockiamary, without whose prayers and boundless love I'd be nothing. I could never try to pay them back. No words can do justice to thank my loving wife, Daisy. Her humongous faith in me and her support and patience make me believe in lifelong miracles. She simply made me the man I am today.
I can't finish without thanking my 6-year-old son, Jason, for hiding his disappointment in me as I sat in front of the keyboard all the time. In your smiles and hugs, I derive the purpose of my life.
I would like to specially thank Abhilash, Rajesh, and Mohan, who proved that hard times reveal true friends.
It would be a crime not to thank my VCRC friends for being a constant source of inspiration. I am proud to be a part of the bunch.
Also, I sincerely thank the truly awesome reviewers and editors at Packt Publishing. Without their guidance and feedback, this book would have never gotten its current shape. I sincerely apologize for all the typos and errors that could have crept in.
Amir Hajian is a data scientist at the Thomson Reuters Data Innovation Lab. He has a PhD in astrophysics, and prior to joining Thomson Reuters, he was a senior research associate at the Canadian Institute for Theoretical Astrophysics in Toronto and a research physicist at Princeton University. His main focus in recent years has been bringing data science into astrophysics by developing and applying new algorithms for astrophysical data analysis using statistics, machine learning, visualization, and big data technology. Amir's research has been frequently highlighted in the media. He has led multinational research team efforts into successful publications. He has published more than 70 peer-reviewed articles with more than 4,000 citations, giving him an h-index of 34.
I would like to thank the Canadian Institute for Theoretical Astrophysics for providing the excellent computational facilities that I enjoyed during the review of this book.
Shams Mahmood Imam completed his PhD in the department of computer science at Rice University, working under Prof. Vivek Sarkar on the Habanero multicore software research project. His research interests mostly include parallel programming models and runtime systems, with the aim of making the writing of task-parallel programs on multicore machines easier for programmers. Shams is currently completing his thesis, titled Cooperative Execution of Parallel Tasks with Synchronization Constraints. His work involves building a generic framework that efficiently supports all synchronization patterns (and not only those available in actors or the fork-join model) in task-parallel programs. It includes extensions such as Eureka programming for speculative computations in task-parallel models and selectors for coordination protocols in the actor model. Shams implemented a framework as part of the cooperative runtime for the Habanero-Java parallel programming library. His work has been published at leading conferences such as OOPSLA, ECOOP, Euro-Par, and PPPJ. Previously, he was involved in projects such as Habanero-Scala, CnC-Scala, CnC-Matlab, and CnC-Python.
Gerald Loeffler holds an MBA. He was trained as a biochemist and has worked in academia and the pharmaceutical industry, conducting research in parallel and distributed biophysical computer simulations and data science in bioinformatics. He then switched to IT consulting and widened his interests to include general software development and architecture, focusing on JVM-centric enterprise applications, systems, and their integration ever since. Inspired by the practice of commercial software development projects in this context, Gerald has developed a keen interest in team collaboration, the software craftsmanship movement, sound software engineering, type safety, distributed software and system architectures, and the innovations introduced by technologies such as Java EE, Scala, Akka, and Spark. He is employed by MuleSoft as a principal solutions architect in their professional services team, working with EMEA clients on their integration needs and the challenges that spring from them.
Gerald lives with his wife and two cats in Vienna, Austria, where he enjoys music, theatre, and city life.
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.
The JVM has become a clear winner in the race between different platforms for scalable data analysis. The power of the JVM, strong typing, simplicity of code, composability, and the availability of highly abstracted distributed and machine learning frameworks make Scala a clear contender for the top position in large-scale data analysis. Thanks to its dynamic-looking, yet static, type system, scientists and programmers coming from Python backgrounds feel at ease with Scala.
This book aims to provide easy-to-use recipes in Apache Spark, a massively scalable distributed computation framework, and Breeze, a linear algebra library on which Spark's machine learning toolkit is built. The book will also help you explore data using interactive visualizations in Apache Zeppelin.
Other than the handful of frameworks and libraries that we will see in this book, there's a host of other popular data analysis libraries and frameworks that are available for Scala. They are by no means lesser beasts, and they could actually fit our use cases well. Unfortunately, they aren't covered as part of this book.
Apache Flink (http://flink.apache.org/), just like Spark, has first-class support for Scala and provides features that are strikingly similar to Spark. Real-time streaming (unlike Spark's mini-batch DStreams) is its distinctive feature. Flink also provides a machine learning and a graph processing library and runs standalone as well as on the YARN cluster.
Scalding (https://github.com/twitter/scalding) needs no introduction—Scala's idiomatic approach to writing Hadoop MR jobs.
Saddle (https://saddle.github.io/) is the "pandas" (http://pandas.pydata.org/) of Scala, with support for vectors, matrices, and DataFrames.
Spire (https://github.com/non/spire) has a powerful set of advanced numerical types that are not available in the default Scala library. It aims to be fast and precise in its numerical computations.
Akka (http://akka.io) is an actor-based concurrency framework that has actors as its foundation and unit of work. Actors are fault tolerant and distributed.
Accord (https://github.com/wix/accord) is a simple, yet powerful, validation library in Scala.
Chapter 1, Getting Started with Breeze, serves as an introduction to the Breeze linear algebra library's API.
Chapter 2, Getting Started with Apache Spark DataFrames, introduces the DataFrame, a powerful, yet intuitive, relational-table-like data abstraction.
Chapter 3, Loading and Preparing Data – DataFrame, showcases the loading of datasets into Spark DataFrames from a variety of sources, while also introducing the Parquet serialization format.
Chapter 4, Data Visualization, introduces Apache Zeppelin for interactive data visualization using Spark SQL and Spark UDFs. We also briefly discuss Bokeh-Scala, a Scala port of Bokeh (a highly customizable visualization library).
Chapter 5, Learning from Data, focuses on machine learning using Spark MLlib.
Chapter 6, Scaling Up, walks through various deployment alternatives for Spark applications: standalone, YARN, and Mesos.
Chapter 7, Going Further, briefly introduces Spark Streaming and GraphX.
The most important installation that your machine needs is the Java Development Kit (JDK 1.7), which can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.
To run most of the recipes in this book, all you need is SBT. The installation instructions for your favorite operating system are available at http://www.scala-sbt.org/release/tutorial/Setup.html.
There are a few other libraries that we will be using throughout the book, all of which will be imported through SBT. If there is any installation required (for example, HDFS) to run a recipe, the installation URL or the steps themselves will be mentioned in the respective recipe.
Engineers and scientists who are familiar with Scala and would like to exploit the Spark ecosystem for big data analysis will benefit most from this book.
In this book, you will find several headings that appear frequently (Getting ready, How to do it…, How it works…, There's more…, and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
Getting ready: This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.
How to do it…: This section contains the steps required to follow the recipe.
How it works…: This section usually consists of a detailed explanation of what happened in the previous section.
There's more…: This section consists of additional information about the recipe in order to make the reader more knowledgeable about it.
See also: This section provides helpful links to other useful information for the recipe.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We can include other contexts through the use of the include directive."
A block of code is set as follows:
Any command-line input or output is written as follows:
New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Now, if we wish to share this chart with someone or link it to an external website, we can do so by clicking on the gear icon in this paragraph and then clicking on Link this paragraph."
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
In this chapter, we will cover the following recipes:
This chapter gives you a quick overview of one of the most popular data analysis libraries in Scala, how to get it, and its most frequently used functions and data structures.
We will be focusing on Breeze in this first chapter, which is one of the most popular and powerful linear algebra libraries. Spark MLlib, which we will be seeing in the subsequent chapters, builds on top of Breeze and Spark, and provides a powerful framework for scalable machine learning.
In simple terms, Breeze (http://www.scalanlp.org) is a Scala library that extends the Scala collection library to provide support for vectors and matrices in addition to providing a whole bunch of functions that support their manipulation. We could safely compare Breeze to NumPy (http://www.numpy.org/) in Python terms. Breeze forms the foundation of MLlib—the Machine Learning library in Spark, which we will explore in later chapters.
In this first recipe, we will see how to pull the Breeze libraries into our project using the Scala Build Tool (SBT). We will also see a brief history of Breeze to better appreciate why it could be considered the "go-to" linear algebra library in Scala.
For all our recipes, we will be using Scala 2.10.4 along with Java 1.7. I wrote the examples using the Scala IDE, but please feel free to use your favorite IDE.
Let's add the Breeze dependencies into our build.sbt so that we can start playing with them in the subsequent recipes. The Breeze dependencies are just two: the breeze (core) and the breeze-natives dependencies.
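A minimal build.sbt sketch that pulls both in could look like the following; the Breeze version (0.11.2) and the project coordinates shown here are assumptions, so check Maven Central for the latest release that matches your Scala version:

    // build.sbt (project coordinates below are hypothetical)
    organization := "com.example"

    name := "breeze-getting-started"

    scalaVersion := "2.10.4"

    libraryDependencies ++= Seq(
      // the core Breeze library
      "org.scalanlp" %% "breeze" % "0.11.2",
      // the optional native bindings, discussed in the next section
      "org.scalanlp" %% "breeze-natives" % "0.11.2"
    )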
You could import the project into Eclipse using sbt eclipse after installing the sbteclipse plugin (https://github.com/typesafehub/sbteclipse/). For IntelliJ IDEA, you just need to import the project by pointing to the root folder where your build.sbt file is.
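As a sketch, the plugin goes into project/plugins.sbt; the version number shown is an assumption, so check the plugin's page for the current release:

    // project/plugins.sbt (the version number is an assumption)
    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "4.0.0")

Running sbt eclipse from the project root then generates the Eclipse project files.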
Let's look into the details of what the breeze and breeze-natives library dependencies we added bring to us.
Breeze has a long history in that it isn't written from scratch in Scala. Without the native dependency, Breeze leverages the power of netlib-java, which has a Java-compiled version of the FORTRAN reference implementation of BLAS/LAPACK. netlib-java also provides gentle wrappers over the Java-compiled library. What this means is that we could still work without the native dependency, but the performance won't be great, considering that the best performance we could get out of this FORTRAN-translated library is that of the FORTRAN reference implementation itself. For serious number crunching with the best performance, we should add the breeze-natives dependency too.
With the natives dependency added, Breeze looks for machine-specific implementations of the BLAS/LAPACK libraries. The good news is that there are open source and commercial (vendor-provided) implementations for the most popular processors and GPUs. The most popular open source implementations include ATLAS (http://math-atlas.sourceforge.net) and OpenBLAS (http://www.openblas.net/).
If you are running a Mac, you are in luck: native BLAS libraries come out of the box on Macs. Installing native BLAS on Ubuntu/Debian involves just running the commands sketched below.
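The following is a sketch, assuming the stock Ubuntu/Debian packages for ATLAS and OpenBLAS (exact package names can vary between releases):

    # install ATLAS and OpenBLAS builds (assumed package names)
    sudo apt-get install libatlas3-base libopenblas-base

    # choose which installed implementation backs BLAS and LAPACK system-wide
    sudo update-alternatives --config libblas.so.3
    sudo update-alternatives --config liblapack.so.3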
For Windows, please refer to the installation instructions on https://github.com/xianyi/OpenBLAS/wiki/Installation-Guide.
There are subtle yet powerful differences between Breeze vectors and Scala's own scala.collection.Vector. As we'll see in this recipe, Breeze vectors have a lot of functions that are linear algebra specific, and the more important thing to note here is that Breeze's vector is a Scala wrapper over netlib-java, and most calls to the vector's API delegate the call to it.
Vectors are one of the core components in Breeze. They are containers of homogeneous data. In this recipe, we'll first see how to create vectors and then move on to various data manipulation functions to modify those vectors.
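As a quick taste of what's ahead, here is a minimal sketch of creating and manipulating a Breeze dense vector (the values are arbitrary):

    import breeze.linalg.DenseVector

    // construct a dense vector of Doubles from literal values
    val v = DenseVector(1.0, 2.0, 3.0)

    // element-wise arithmetic: scaling by a scalar and adding another vector
    val scaled = v * 2.0                              // DenseVector(2.0, 4.0, 6.0)
    val summed = v + DenseVector(4.0, 5.0, 6.0)       // DenseVector(5.0, 7.0, 9.0)

    // dot product of two vectors
    val dotProduct = v dot DenseVector(4.0, 5.0, 6.0) // 32.0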
