Scala Data Analysis Cookbook

Arun Manivannan
Description

This book will introduce you to the most popular Scala tools, libraries, and frameworks through practical recipes around loading, manipulating, and preparing your data. It will also help you explore and make sense of your data using stunning and insightful visualizations and machine learning toolkits.

Starting with introductory recipes on utilizing the Breeze and Spark libraries, get to grips with how to import data from a host of possible sources and how to pre-process numerical, string, and date data. Next, you'll get an understanding of concepts that will help you visualize data using the Apache Zeppelin and Bokeh bindings in Scala, enabling exploratory data analysis. Discover how to program quintessential machine learning algorithms using the Spark ML library. Work through steps to scale your machine learning models and deploy them to a standalone cluster, EC2, YARN, and Mesos. Finally, dip into the powerful options presented by Spark Streaming and machine learning for streaming data, as well as utilizing Spark GraphX.




Table of Contents

Scala Data Analysis Cookbook
Credits
About the Author
About the Reviewers
www.PacktPub.com
Support files, eBooks, discount offers, and more
Why Subscribe?
Free Access for Packt account holders
Preface
Apache Flink
Scalding
Saddle
Spire
Akka
Accord
What this book covers
What you need for this book
Who this book is for
Sections
Getting ready
How to do it…
How it works…
There's more…
See also
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Getting Started with Breeze
Introduction
Getting Breeze – the linear algebra library
How to do it...
There's more...
The org.scalanlp.breeze dependency
The org.scalanlp.breeze-natives package
Working with vectors
Getting ready
How to do it...
Creating vectors
Constructing a vector from values
Creating a zero vector
Creating a vector out of a function
Creating a vector of linearly spaced values
Creating a vector with values in a specific range
Creating an entire vector with a single value
Slicing a sub-vector from a bigger vector
Creating a Breeze Vector from a Scala Vector
Vector arithmetic
Scalar operations
Calculating the dot product of two vectors
Creating a new vector by adding two vectors together
Appending vectors and converting a vector of one type to another
Concatenating two vectors
Converting a vector of Int to a vector of Double
Computing basic statistics
Mean and variance
Standard deviation
Find the largest value in a vector
Finding the sum, square root and log of all the values in the vector
The Sqrt function
The Log function
Working with matrices
How to do it...
Creating matrices
Creating a matrix from values
Creating a zero matrix
Creating a matrix out of a function
Creating an identity matrix
Creating a matrix from random numbers
Creating from a Scala collection
Matrix arithmetic
Addition
Multiplication
Appending and conversion
Concatenating matrices – vertically
Concatenating matrices – horizontally
Converting a matrix of Int to a matrix of Double
Data manipulation operations
Getting column vectors out of the matrix
Getting row vectors out of the matrix
Getting values inside the matrix
Getting the inverse and transpose of a matrix
Computing basic statistics
Mean and variance
Standard deviation
Finding the largest value in a matrix
Finding the sum, square root and log of all the values in the matrix
Sqrt
Log
Calculating the eigenvectors and eigenvalues of a matrix
How it works...
Vectors and matrices with randomly distributed values
How it works...
Creating vectors with uniformly distributed random values
Creating vectors with normally distributed random values
Creating vectors with random values that have a Poisson distribution
Creating a matrix with uniformly random values
Creating a matrix with normally distributed random values
Creating a matrix with random values that has a Poisson distribution
Reading and writing CSV files
How it works...
2. Getting Started with Apache Spark DataFrames
Introduction
Getting Apache Spark
How to do it...
Creating a DataFrame from CSV
How to do it...
How it works...
There's more…
Manipulating DataFrames
How to do it...
Printing the schema of the DataFrame
Sampling the data in the DataFrame
Selecting DataFrame columns
Filtering data by condition
Sorting data in the frame
Renaming columns
Treating the DataFrame as a relational table
Joining two DataFrames
Inner join
Right outer join
Left outer join
Saving the DataFrame as a file
Creating a DataFrame from Scala case classes
How to do it...
How it works...
3. Loading and Preparing Data – DataFrame
Introduction
Loading more than 22 features into classes
How to do it...
How it works...
There's more…
Loading JSON into DataFrames
How to do it…
Reading a JSON file using SQLContext.jsonFile
Reading a text file and converting it to JSON RDD
Explicitly specifying your schema
There's more…
Storing data as Parquet files
How to do it…
Load a simple CSV file, convert it to case classes, and create a DataFrame from it
Save it as a Parquet file
Install Parquet tools
Using the tools to inspect the Parquet file
Enable compression for the Parquet file
Using the Avro data model in Parquet
How to do it…
Creation of the Avro model
Generation of Avro objects using the sbt-avro plugin
Constructing an RDD of our generated object from Students.csv
Saving RDD[StudentAvro] in a Parquet file
Reading the file back for verification
Using Parquet tools for verification
Loading from RDBMS
How to do it…
Preparing data in DataFrames
How to do it...
4. Data Visualization
Introduction
Visualizing using Zeppelin
How to do it...
Installing Zeppelin
Customizing Zeppelin's server and websocket port
Visualizing data on HDFS – parameterizing inputs
Running custom functions
Adding external dependencies to Zeppelin
Pointing to an external Spark cluster
Creating scatter plots with Bokeh-Scala
How to do it...
Preparing our data
Creating Plot and Document objects
Creating a marker object
Setting the X and Y axes' data range for the plot
Drawing the x and the y axes
Viewing flower species with varying colors
Adding grid lines
Adding a legend to the plot
Creating a time series MultiPlot with Bokeh-Scala
How to do it...
Preparing our data
Creating a plot
Creating a line that joins all the data points
Setting the x and y axes' data range for the plot
Drawing the axes and the grids
Adding tools
Adding a legend to the plot
Multiple plots in the document
5. Learning from Data
Introduction
Supervised and unsupervised learning
Gradient descent
Predicting continuous values using linear regression
How to do it...
Importing the data
Converting each instance into a LabeledPoint
Preparing the training and test data
Scaling the features
Training the model
Predicting against test data
Evaluating the model
Regularizing the parameters
Mini batching
Binary classification using LogisticRegression and SVM
How to do it...
Importing the data
Tokenizing the data and converting it into LabeledPoints
Factoring the inverse document frequency
Prepare the training and test data
Constructing the algorithm
Training the model and predicting the test data
Evaluating the model
Binary classification using LogisticRegression with Pipeline API
How to do it...
Importing and splitting data as test and training sets
Construct the participants of the Pipeline
Preparing a pipeline and training a model
Predicting against test data
Evaluating a model without cross-validation
Constructing parameters for cross-validation
Constructing cross-validator and fit the best model
Evaluating the model with cross-validation
Clustering using K-means
How to do it...
KMeans.RANDOM
KMeans.PARALLEL
K-means++
K-means||
Max iterations
Epsilon
Importing the data and converting it into a vector
Feature scaling the data
Deriving the number of clusters
Constructing the model
Evaluating the model
Feature reduction using principal component analysis
How to do it...
Dimensionality reduction of data for supervised learning
Mean-normalizing the training data
Extracting the principal components
Preparing the labeled data
Preparing the test data
Classify and evaluate the metrics
Dimensionality reduction of data for unsupervised learning
Mean-normalizing the training data
Extracting the principal components
Arriving at the number of components
Evaluating the metrics
6. Scaling Up
Introduction
Building the Uber JAR
How to do it...
Transitive dependency stated explicitly in the SBT dependency
Two different libraries depend on the same external library
Submitting jobs to the Spark cluster (local)
How to do it...
Downloading Spark
Running HDFS on Pseudo-clustered mode
Running the Spark master and slave locally
Pushing data into HDFS
Submitting the Spark application on the cluster
Running the Spark Standalone cluster on EC2
How to do it...
Creating the AccessKey and pem file
Setting the environment variables
Running the launch script
Verifying installation
Making changes to the code
Transferring the data and job files
Loading the dataset into HDFS
Running the job
Destroying the cluster
Running the Spark Job on Mesos (local)
How to do it...
Installing Mesos
Starting the Mesos master and slave
Uploading the Spark binary package and the dataset to HDFS
Running the job
Running the Spark Job on YARN (local)
How to do it...
Installing the Hadoop cluster
Starting HDFS and YARN
Pushing Spark assembly and dataset to HDFS
Running a Spark job in yarn-client mode
Running Spark job in yarn-cluster mode
7. Going Further
Introduction
Using Spark Streaming to subscribe to a Twitter stream
How to do it...
Using Spark as an ETL tool
How to do it...
Using StreamingLogisticRegression to classify a Twitter stream using Kafka as a training stream
How to do it...
Using GraphX to analyze Twitter data
How to do it...
Index

Scala Data Analysis Cookbook

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: October 2015

Production reference: 1261015

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78439-674-9

www.packtpub.com

Credits

Author

Arun Manivannan

Reviewers

Amir Hajian

Shams Mahmood Imam

Gerald Loeffler

Commissioning Editor

Nadeem N. Bagban

Acquisition Editor

Larissa Pinto

Content Development Editor

Rashmi Suvarna

Technical Editor

Tanmayee Patil

Copy Editors

Ameesha Green

Vikrant Phadke

Project Coordinator

Milton Dsouza

Proofreader

Safis Editing

Indexer

Rekha Nair

Production Coordinator

Manu Joseph

Cover Work

Manu Joseph

About the Author

Arun Manivannan has been an engineer in various multinational companies, tier-1 financial institutions, and start-ups, primarily focusing on developing distributed applications that manage and mine data. His languages of choice are Scala and Java, but he also meddles around with various others for kicks. He blogs at http://rerun.me.

Arun holds a master's degree in software engineering from the National University of Singapore.

He also holds degrees in commerce, computer applications, and HR management. His interests and education could probably be a good dataset for clustering.

I am deeply indebted to my dad, Manivannan, who taught me the value of persistence, hard work and determination in life, and my mom, Arockiamary, without whose prayers and boundless love I'd be nothing. I could never try to pay them back. No words can do justice to thank my loving wife, Daisy. Her humongous faith in me and her support and patience make me believe in lifelong miracles. She simply made me the man I am today.

I can't finish without thanking my 6-year old son, Jason, for hiding his disappointment in me as I sat in front of the keyboard all the time. In your smiles and hugs, I derive the purpose of my life.

I would like to specially thank Abhilash, Rajesh, and Mohan, who proved that hard times reveal true friends.

It would be a crime not to thank my VCRC friends for being a constant source of inspiration. I am proud to be a part of the bunch.

Also, I sincerely thank the truly awesome reviewers and editors at Packt Publishing. Without their guidance and feedback, this book would have never gotten its current shape. I sincerely apologize for all the typos and errors that could have crept in.

About the Reviewers

Amir Hajian is a data scientist at the Thomson Reuters Data Innovation Lab. He has a PhD in astrophysics, and prior to joining Thomson Reuters, he was a senior research associate at the Canadian Institute for Theoretical Astrophysics in Toronto and a research physicist at Princeton University. His main focus in recent years has been bringing data science into astrophysics by developing and applying new algorithms for astrophysical data analysis using statistics, machine learning, visualization, and big data technology. Amir's research has been frequently highlighted in the media. He has led multinational research teams to successful publications. He has published more than 70 peer-reviewed articles with more than 4,000 citations, giving him an h-index of 34.

I would like to thank the Canadian Institute for Theoretical Astrophysics for providing the excellent computational facilities that I enjoyed during the review of this book.

Shams Mahmood Imam completed his PhD in the Department of Computer Science at Rice University, working under Prof. Vivek Sarkar in the Habanero multicore software research project. His research interests mostly include parallel programming models and runtime systems, with the aim of making the writing of task-parallel programs on multicore machines easier for programmers. Shams is currently completing his thesis, titled Cooperative Execution of Parallel Tasks with Synchronization Constraints. His work involves building a generic framework that efficiently supports all synchronization patterns (and not only those available in actors or the fork-join model) in task-parallel programs. It includes extensions such as Eureka programming for speculative computations in task-parallel models and selectors for coordination protocols in the actor model. Shams implemented the framework as part of the cooperative runtime for the Habanero-Java parallel programming library. His work has been published at leading conferences such as OOPSLA, ECOOP, Euro-Par, and PPPJ. Previously, he was involved in projects such as Habanero-Scala, CnC-Scala, CnC-Matlab, and CnC-Python.

Gerald Loeffler holds an MBA. Trained as a biochemist, he has worked in academia and the pharmaceutical industry, conducting research in parallel and distributed biophysical computer simulations and data science in bioinformatics. He then switched to IT consulting and widened his interests to include general software development and architecture, focusing ever since on JVM-centric enterprise applications and systems and their integration. Inspired by the practice of commercial software development projects in this context, Gerald has developed a keen interest in team collaboration, the software craftsmanship movement, sound software engineering, type safety, distributed software and system architectures, and the innovations introduced by technologies such as Java EE, Scala, Akka, and Spark. He is employed by MuleSoft as a principal solutions architect in their professional services team, working with EMEA clients on their integration needs and the challenges that spring from them.

Gerald lives with his wife and two cats in Vienna, Austria, where he enjoys music, theatre, and city life.

www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why Subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.

Preface

The JVM has become a clear winner in the race between platforms for scalable data analysis. The power of the JVM, strong typing, simplicity of code, composability, and the availability of highly abstracted distributed computing and machine learning frameworks make Scala a clear contender for the top position in large-scale data analysis. Thanks to its dynamic-looking, yet static, type system, scientists and programmers coming from Python backgrounds feel at ease with Scala.

This book aims to provide easy-to-use recipes in Apache Spark, a massively scalable distributed computation framework, and Breeze, a linear algebra library on which Spark's machine learning toolkit is built. The book will also help you explore data using interactive visualizations in Apache Zeppelin.

Other than the handful of frameworks and libraries that we will see in this book, there's a host of other popular data analysis libraries and frameworks that are available for Scala. They are by no means lesser beasts, and they could actually fit our use cases well. Unfortunately, they aren't covered as part of this book.

Apache Flink

Apache Flink (http://flink.apache.org/), just like Spark, has first-class support for Scala and provides features that are strikingly similar to Spark's. Real-time streaming (unlike Spark's mini-batch DStreams) is its distinctive feature. Flink also provides a machine learning library and a graph processing library, and it runs standalone as well as on a YARN cluster.

Scalding

Scalding (https://github.com/twitter/scalding) needs no introduction—it is Scala's idiomatic approach to writing Hadoop MapReduce jobs.

Saddle

Saddle (https://saddle.github.io/) is the "pandas" (http://pandas.pydata.org/) of Scala, with support for vectors, matrices, and DataFrames.

Spire

Spire (https://github.com/non/spire) has a powerful set of advanced numerical types that are not available in the default Scala library. It aims to be fast and precise in its numerical computations.

Akka

Akka (http://akka.io) is a concurrency framework that has actors as its foundation and unit of work. Actors are fault tolerant and distributed.

Accord

Accord (https://github.com/wix/accord) is a simple, yet powerful, validation library for Scala.

What this book covers

Chapter 1, Getting Started with Breeze, serves as an introduction to the Breeze linear algebra library's API.

Chapter 2, Getting Started with Apache Spark DataFrames, introduces the DataFrame, a powerful, yet intuitive, relational-table-like data abstraction.

Chapter 3, Loading and Preparing Data – DataFrame, showcases the loading of datasets into Spark DataFrames from a variety of sources, while also introducing the Parquet serialization format.

Chapter 4, Data Visualization, introduces Apache Zeppelin for interactive data visualization using Spark SQL and Spark UDFs (user-defined functions). We also briefly discuss Bokeh-Scala, a Scala port of Bokeh (a highly customizable visualization library).

Chapter 5, Learning from Data, focuses on machine learning using Spark MLlib.

Chapter 6, Scaling Up, walks through various deployment alternatives for Spark applications: standalone, YARN, and Mesos.

Chapter 7, Going Further, briefly introduces Spark Streaming and GraphX.

What you need for this book

The most important installation that your machine needs is the Java Development Kit (JDK 1.7), which can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.

To run most of the recipes in this book, all you need is SBT. The installation instructions for your favorite operating system are available at http://www.scala-sbt.org/release/tutorial/Setup.html.

There are a few other libraries that we will be using throughout the book, all of which will be imported through SBT. If there is any installation required (for example, HDFS) to run a recipe, the installation URL or the steps themselves will be mentioned in the respective recipe.

Who this book is for

Engineers and scientists who are familiar with Scala and would like to exploit the Spark ecosystem for big data analysis will benefit most from this book.

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it…, How it works…, There's more…, and See also).

To give clear instructions on how to complete a recipe, we use these sections as follows:

Getting ready

This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

How to do it…

This section contains the steps required to follow the recipe.

How it works…

This section usually consists of a detailed explanation of what happened in the previous section.

There's more…

This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information for the recipe.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We can include other contexts through the use of the include directive."

A block of code is set as follows:

organization := "com.packt"

name := "chapter1-breeze"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.scalanlp" %% "breeze" % "0.11.2",
  //Optional - the 'why' is explained in the How it works section
  "org.scalanlp" %% "breeze-natives" % "0.11.2"
)

Any command-line input or output is written as follows:

sudo apt-get install libatlas3-base libopenblas-base
sudo update-alternatives --config libblas.so.3
sudo update-alternatives --config liblapack.so.3

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Now, if we wish to share this chart with someone or link it to an external website, we can do so by clicking on the gear icon in this paragraph and then clicking on Link this paragraph."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

Chapter 1. Getting Started with Breeze

In this chapter, we will cover the following recipes:

Getting Breeze – the linear algebra library
Working with vectors
Working with matrices
Vectors and matrices with randomly distributed values
Reading and writing CSV files

Introduction

This chapter gives you a quick overview of one of the most popular data analysis libraries in Scala, how to get it, and its most frequently used functions and data structures.

We will be focusing on Breeze in this first chapter, which is one of the most popular and powerful linear algebra libraries. Spark MLlib, which we will be seeing in the subsequent chapters, builds on top of Breeze and Spark, and provides a powerful framework for scalable machine learning.

Getting Breeze – the linear algebra library

In simple terms, Breeze (http://www.scalanlp.org) is a Scala library that extends the Scala collection library to provide support for vectors and matrices in addition to providing a whole bunch of functions that support their manipulation. We could safely compare Breeze to NumPy (http://www.numpy.org/) in Python terms. Breeze forms the foundation of MLlib—the Machine Learning library in Spark, which we will explore in later chapters.
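For readers coming from NumPy, here is a minimal sketch of what Breeze code looks like. The object name is illustrative, and it assumes the 0.11.2 dependencies that we set up in this recipe are on the classpath:

import breeze.linalg.{DenseMatrix, DenseVector}

object BreezeTaste extends App {
  // A dense vector of Doubles, comparable to numpy.array([1.0, 2.0, 3.0])
  val v = DenseVector(1.0, 2.0, 3.0)

  // A 2x2 dense matrix, constructed row by row
  val m = DenseMatrix((1.0, 2.0), (3.0, 4.0))

  println(v * 2.0)   // element-wise scalar multiplication
  println(v dot v)   // dot product: 14.0
  println(m * m)     // matrix multiplication
}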

In this first recipe, we will see how to pull the Breeze libraries into our project using Scala Build Tool (SBT). We will also see a brief history of Breeze to better appreciate why it could be considered as the "go to" linear algebra library in Scala.

Note

For all our recipes, we will be using Scala 2.10.4 along with Java 1.7. I wrote the examples using the Scala IDE, but please feel free to use your favorite IDE.

How to do it...

Let's add the Breeze dependencies into our build.sbt so that we can start playing with them in the subsequent recipes. The Breeze dependencies are just two—the breeze (core) and the breeze-natives dependencies.

1. Under a brand new folder (which will be our project root), create a new file called build.sbt.

2. Next, add the breeze libraries to the project dependencies:

organization := "com.packt"

name := "chapter1-breeze"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.scalanlp" %% "breeze" % "0.11.2",
  //Optional - the 'why' is explained in the How it works section
  "org.scalanlp" %% "breeze-natives" % "0.11.2"
)

3. From that folder, issue a sbt compile command in order to fetch all your dependencies.

Note

You could import the project into Eclipse using sbt eclipse after installing the sbteclipse plugin (https://github.com/typesafehub/sbteclipse/). For IntelliJ IDEA, you just need to import the project by pointing to the root folder where your build.sbt file is.

There's more...

Let's look into the details of what the breeze and breeze-natives library dependencies we added bring to us.

The org.scalanlp.breeze dependency

Breeze has a long history in that it isn't written from scratch in Scala. Without the native dependency, Breeze leverages the power of netlib-java, which has a Java-compiled version of the FORTRAN reference implementation of BLAS/LAPACK. netlib-java also provides gentle wrappers over the Java-compiled library. What this means is that we could still work without the native dependency, but the performance won't be great, considering that the best performance we could get out of this FORTRAN-translated library is the performance of the FORTRAN reference implementation itself. However, for serious number crunching with the best performance, we should add the breeze-natives dependency too.

The org.scalanlp.breeze-natives package

With its native dependencies added, Breeze looks for machine-specific implementations of the BLAS/LAPACK libraries. The good news is that there are open source and (vendor-provided) commercial implementations for most popular processors and GPUs. The most popular open source implementations include ATLAS (http://math-atlas.sourceforge.net) and OpenBLAS (http://www.openblas.net/).

If you are running a Mac, you are in luck—Native BLAS libraries come out of the box on Macs. Installing NativeBLAS on Ubuntu / Debian involves just running the following commands:

sudo apt-get install libatlas3-base libopenblas-base
sudo update-alternatives --config libblas.so.3
sudo update-alternatives --config liblapack.so.3

Tip

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

For Windows, please refer to the installation instructions on https://github.com/xianyi/OpenBLAS/wiki/Installation-Guide.
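Once the natives are installed, you may want to confirm that they are actually being picked up at runtime. One way to do this, assuming the netlib-java backend (com.github.fommil.netlib) that Breeze 0.11.2 builds on is on the classpath, is to print the BLAS implementation that was loaded:

import com.github.fommil.netlib.BLAS

object BlasCheck extends App {
  // Prints the concrete BLAS class that netlib-java selected at runtime.
  // "NativeSystemBLAS" or "NativeRefBLAS" means a native library was found;
  // "F2jBLAS" means we fell back to the pure-Java translation.
  println(BLAS.getInstance().getClass.getName)
}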

Working with vectors

There are subtle yet powerful differences between Breeze vectors and Scala's own scala.collection.Vector. As we'll see in this recipe, Breeze vectors have a lot of functions that are linear algebra specific, and the more important thing to note here is that Breeze's vector is a Scala wrapper over netlib-java, and most calls to the vector's API delegate the call to it.

Vectors are one of the core components in Breeze. They are containers of homogeneous data. In this recipe, we'll first see how to create vectors and then move on to various data manipulation functions to modify those vectors.
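As a quick preview of the sub-recipes that follow, here is a minimal sketch (the names are illustrative) of creating and manipulating Breeze vectors:

import breeze.linalg._

object VectorBasics extends App {
  // Constructing vectors from values, from zeros, and from a function
  val fromValues = DenseVector(2.0, 4.0, 6.0)
  val zeros      = DenseVector.zeros[Double](5)
  val tabulated  = DenseVector.tabulate(5)(i => i * i)   // 0, 1, 4, 9, 16

  // Slicing a sub-vector and simple arithmetic
  println(fromValues(0 to 1))                        // DenseVector(2.0, 4.0)
  println(fromValues + DenseVector(1.0, 1.0, 1.0))   // element-wise addition
  println(fromValues dot fromValues)                 // 56.0

  // Converting a vector of Int to a vector of Double
  val ints    = DenseVector(1, 2, 3)
  val doubles = convert(ints, Double)
  println(doubles)                                   // DenseVector(1.0, 2.0, 3.0)
}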