Data Analysis with R

Tony Fischetti

Description

Frequently the tool of choice for academics, R has spread deep into the private sector and can be found in the production pipelines at some of the most advanced and successful enterprises. The power and domain-specificity of R allows the user to express complex analytics easily, quickly, and succinctly. With over 7,000 user contributed packages, it’s easy to find support for the latest and greatest algorithms and techniques.
Starting with the basics of R and statistical reasoning, Data Analysis with R dives into advanced predictive analytics, showing how to apply those techniques to real-world data through real-world examples.
Packed with engaging problems and exercises, this book begins with a review of R and its syntax. From there, get to grips with the fundamentals of applied statistics and build on this knowledge to perform sophisticated and powerful analytics. Solve the difficulties relating to performing data analysis in practice and find solutions to working with “messy data”, large data, communicating results, and facilitating reproducibility.
This book is engineered to be an invaluable resource through many stages of anyone’s career as a data analyst.


Page count: 499

Year of publication: 2015




Table of Contents

Data Analysis with R
Credits
About the Author
About the Reviewer
www.PacktPub.com
Support files, eBooks, discount offers, and more
Why subscribe?
Free access for Packt account holders
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Downloading the color images of this book
Errata
Piracy
Questions
1. RefresheR
Navigating the basics
Arithmetic and assignment
Logicals and characters
Flow of control
Getting help in R
Vectors
Subsetting
Vectorized functions
Advanced subsetting
Recycling
Functions
Matrices
Loading data into R
Working with packages
Exercises
Summary
2. The Shape of Data
Univariate data
Frequency distributions
Central tendency
Spread
Populations, samples, and estimation
Probability distributions
Visualization methods
Exercises
Summary
3. Describing Relationships
Multivariate data
Relationships between a categorical and a continuous variable
Relationships between two categorical variables
The relationship between two continuous variables
Covariance
Correlation coefficients
Comparing multiple correlations
Visualization methods
Categorical and continuous variables
Two categorical variables
Two continuous variables
More than two continuous variables
Exercises
Summary
4. Probability
Basic probability
A tale of two interpretations
Sampling from distributions
Parameters
The binomial distribution
The normal distribution
The three-sigma rule and using z-tables
Exercises
Summary
5. Using Data to Reason About the World
Estimating means
The sampling distribution
Interval estimation
How did we get 1.96?
Smaller samples
Exercises
Summary
6. Testing Hypotheses
Null Hypothesis Significance Testing
One and two-tailed tests
When things go wrong
A warning about significance
A warning about p-values
Testing the mean of one sample
Assumptions of the one sample t-test
Testing two means
Don't be fooled!
Assumptions of the independent samples t-test
Testing more than two means
Assumptions of ANOVA
Testing independence of proportions
What if my assumptions are unfounded?
Exercises
Summary
7. Bayesian Methods
The big idea behind Bayesian analysis
Choosing a prior
Who cares about coin flips
Enter MCMC – stage left
Using JAGS and runjags
Fitting distributions the Bayesian way
The Bayesian independent samples t-test
Exercises
Summary
8. Predicting Continuous Variables
Linear models
Simple linear regression
Simple linear regression with a binary predictor
A word of warning
Multiple regression
Regression with a non-binary predictor
Kitchen sink regression
The bias-variance trade-off
Cross-validation
Striking a balance
Linear regression diagnostics
Second Anscombe relationship
Third Anscombe relationship
Fourth Anscombe relationship
Advanced topics
Exercises
Summary
9. Predicting Categorical Variables
k-Nearest Neighbors
Using k-NN in R
Confusion matrices
Limitations of k-NN
Logistic regression
Using logistic regression in R
Decision trees
Random forests
Choosing a classifier
The vertical decision boundary
The diagonal decision boundary
The crescent decision boundary
The circular decision boundary
Exercises
Summary
10. Sources of Data
Relational Databases
Why didn't we just do that in SQL?
Using JSON
XML
Other data formats
Online repositories
Exercises
Summary
11. Dealing with Messy Data
Analysis with missing data
Visualizing missing data
Types of missing data
So which one is it?
Unsophisticated methods for dealing with missing data
Complete case analysis
Pairwise deletion
Mean substitution
Hot deck imputation
Regression imputation
Stochastic regression imputation
Multiple imputation
So how does mice come up with the imputed values?
Methods of imputation
Multiple imputation in practice
Analysis with unsanitized data
Checking for out-of-bounds data
Checking the data type of a column
Checking for unexpected categories
Checking for outliers, entry errors, or unlikely data points
Chaining assertions
Other messiness
OpenRefine
Regular expressions
tidyr
Exercises
Summary
12. Dealing with Large Data
Wait to optimize
Using a bigger and faster machine
Be smart about your code
Allocation of memory
Vectorization
Using optimized packages
Using another R implementation
Use parallelization
Getting started with parallel R
An example of (some) substance
Using Rcpp
Be smarter about your code
Exercises
Summary
13. Reproducibility and Best Practices
R Scripting
RStudio
Running R scripts
An example script
Scripting and reproducibility
R projects
Version control
Communicating results
Exercises
Summary
Index

Data Analysis with R

Data Analysis with R

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2015

Production reference: 1171215

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78528-814-2

www.packtpub.com

Credits

Author

Tony Fischetti

Reviewer

Dipanjan Sarkar

Commissioning Editor

Akram Hussain

Acquisition Editor

Meeta Rajani

Content Development Editor

Anish Dhurat

Technical Editor

Siddhesh Patil

Copy Editor

Sonia Mathur

Project Coordinator

Bijal Patel

Proofreader

Safis Editing

Indexer

Monica Ajmera Mehta

Graphics

Disha Haria

Production Coordinator

Conidon Miranda

Cover Work

Conidon Miranda

About the Author

Tony Fischetti is a data scientist at College Factual, where he gets to use R everyday to build personalized rankings and recommender systems. He graduated in cognitive science from Rensselaer Polytechnic Institute, and his thesis was strongly focused on using statistics to study visual short-term memory.

Tony enjoys writing and contributing to open source software, blogging at http://www.onthelambda.com, writing about himself in third person, and sharing his knowledge using simple, approachable language and engaging examples.

The more traditionally exciting of his daily activities include listening to records, playing the guitar and bass (poorly), weight training, and helping others.

Because I'm aware of how incredibly lucky I am, it's really hard to express all the gratitude I have for everyone in my life that helped me—either directly, or indirectly—in completing this book. The following (partial) list is my best attempt at balancing thoroughness whilst also maximizing the number of people who will read this section by keeping it to a manageable length.

First, I'd like to thank all of my educators. In particular, I'd like to thank the Bronx High School of Science and Rensselaer Polytechnic Institute. More specifically, I'd like to thank the Bronx Science Robotics Team, all its members, its team moms, the wonderful Dena Ford and Cherrie Fleisher-Strauss; and Justin Fox. From the latter institution, I'd like to thank all of my professors and advisors. Shout out to Mike Kalsher, Michael Schoelles, Wayne Gray, Bram van Heuveln, Larry Reid, and Keith Anderson (especially Keith Anderson).

I'd like to thank the New York Public Library, Wikipedia, and other freely available educational resources. On a related note, I need to thank the R community and, more generally, all of the authors of R packages and other open source software I use for spending their own personal time to benefit humanity. Shout out to GNU, the R core team, and Hadley Wickham (who wrote a majority of the R packages I use daily).

Next, I'd like to thank the company I work for, College Factual, and all of my brilliant co-workers from whom I've learned so much.

I also need to thank my support network of millions, and my many many friends that have all helped me more than they will likely ever realize.

I'd like to thank my partner, Bethany Wickham, who has been absolutely instrumental in providing much needed and appreciated emotional support during the writing of this book, and putting up with the mood swings that come along with working all day and writing all night.

Next, I'd like to express my gratitude for my sister, Andrea Fischetti, who means the world to me. Throughout my life, she's kept me warm and human in spite of the scientist in me that likes to get all reductionist and cerebral.

Finally, and most importantly, I'd like to thank my parents. This book is for my father, to whom I owe my love of learning and my interest in science and statistics; and to my mother for her love and unwavering support and, to whom I owe my work ethic and ability to handle anything and tackle any challenge.

About the Reviewer

Dipanjan Sarkar is an IT engineer at Intel, the world's largest silicon company, where he works on analytics, business intelligence, and application development. He received his master's degree in information technology from the International Institute of Information Technology, Bangalore. Dipanjan's area of specialization includes software engineering, data science, machine learning, and text analytics.

His interests include learning about new technologies, disruptive start-ups, and data science. In his spare time, he loves reading, playing games, and watching popular sitcoms. Dipanjan also reviewed Learning R for Geospatial Analysis and R Data Analysis Cookbook, both by Packt Publishing.

I would like to thank Bijal Patel, the project coordinator of this book, for making the reviewing experience really interactive and enjoyable.

www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Free access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.

Preface

I'm going to shoot it to you straight: there are a lot of books about data analysis and the R programming language. I'll take it on faith that you already know why it's extremely helpful and fruitful to learn R and data analysis (if not, why are you reading this preface?!) but allow me to make a case for choosing this book to guide you in your journey.

For one, this subject didn't come naturally to me. There are those with an innate talent for grasping the intricacies of statistics the first time it is taught to them; I don't think I'm one of these people. I kept at it because I love science and research and knew that data analysis was necessary, not because it immediately made sense to me. Today, I love the subject in and of itself, rather than instrumentally, but this only came after months of heartache. Eventually, as I consumed resource after resource, the pieces of the puzzle started to come together. After this, I started tutoring all of my friends in the subject—and have seen them trip over the same obstacles that I had to learn to climb. I think that coming from this background gives me a unique perspective on the plight of the statistics student and allows me to reach them in a way that others may not be able to. By the way, don't let the fact that statistics used to baffle me scare you; I have it on fairly good authority that I know what I'm talking about today.

Secondly, this book was born of the frustration that most statistics texts tend to be written in the driest manner possible. In contrast, I adopt a light-hearted buoyant approach—but without becoming agonizingly flippant.

Third, this book includes a lot of material that I wished were covered in more of the resources I used when I was learning about data analysis in R. For example, the entire last unit specifically covers topics that present enormous challenges to R analysts when they first go out to apply their knowledge to imperfect real-world data.

Lastly, I thought long and hard about how to lay out this book and which order of topics was optimal. And when I say long and hard I mean I wrote a library and designed algorithms to do this. The order in which I present the topics in this book was very carefully considered to (a) build on top of each other, (b) follow a reasonable level of difficulty progression allowing for periodic chapters of relatively simpler material (psychologists call this intermittent reinforcement), (c) group highly related topics together, and (d) minimize the number of topics that require knowledge of yet unlearned topics (this is, unfortunately, common in statistics). If you're interested, I detail this procedure in a blog post that you can read at http://bit.ly/teach-stats.

The point is that the book you're holding is a very special one—one that I poured my soul into. Nevertheless, data analysis can be a notoriously difficult subject, and there may be times where nothing seems to make sense. During these times, remember that many others (including myself) have felt stuck, too. Persevere… the reward is great. And remember, if a blockhead like me can do it, you can, too. Go you!

What this book covers

Chapter 1, RefresheR, reviews the aspects of R that subsequent chapters will assume knowledge of. Here, we learn the basics of R syntax, learn R's major data structures, write functions, load data and install packages.

Chapter 2, The Shape of Data, discusses univariate data. We learn about different data types, how to describe univariate data, and how to visualize the shape of these data.

Chapter 3, Describing Relationships, goes on to the subject of multivariate data. In particular, we learn about the three main classes of bivariate relationships and learn how to describe them.

Chapter 4, Probability, kicks off a new unit by laying foundation. We learn about basic probability theory, Bayes' theorem, and probability distributions.

Chapter 5, Using Data to Reason About the World, discusses sampling and estimation theory. Through examples, we learn of the central limit theorem, point estimation and confidence intervals.

Chapter 6, Testing Hypotheses, introduces the subject of Null Hypothesis Significance Testing (NHST). We learn many popular hypothesis tests and their non-parametric alternatives. Most importantly, we gain a thorough understanding of the misconceptions and gotchas of NHST.

Chapter 7, Bayesian Methods, introduces an alternative to NHST based on a more intuitive view of probability. We learn the advantages and drawbacks of this approach, too.

Chapter 8, Predicting Continuous Variables, thoroughly discusses linear regression. Before the chapter's conclusion, we learn all about the technique, when to use it, and what traps to look out for.

Chapter 9, Predicting Categorical Variables, introduces four of the most popular classification techniques. By using all four on the same examples, we gain an appreciation for what makes each technique shine.

Chapter 10, Sources of Data, is all about how to use different data sources in R. In particular, we learn how to interface with databases, and request and load JSON and XML via an engaging example.

Chapter 11, Dealing with Messy Data, introduces some of the snags of working with less than perfect data in practice. The bulk of this chapter is dedicated to missing data, imputation, and identifying and testing for messy data.

Chapter 12, Dealing with Large Data, discusses some of the techniques that can be used to cope with data sets that are larger than can be handled swiftly without a little planning. The key components of this chapter are on parallelization and Rcpp.

Chapter 13, Reproducibility and Best Practices, closes with the extremely important (but often ignored) topic of how to use R like a professional. This includes learning about tooling, organization, and reproducibility.

What you need for this book

All code in this book has been written against the latest version of R—3.2.2 at the time of writing. As a matter of good practice, you should keep your R version up to date but most, if not all, code should work with any reasonably recent version of R. Some of the R packages we will be installing will require more recent versions, though. For the other software that this book uses, instructions will be furnished pro re nata. If you want to get a head start, however, install RStudio, JAGS, and a C++ compiler (or Rtools if you use Windows).

Who this book is for

Whether you are learning data analysis for the first time, or you want to deepen the understanding you already have, this book will prove to be an invaluable resource. If you are looking for a book to bring you all the way through the fundamentals to the application of advanced and effective analytics methodologies, and have some prior programming experience and a mathematical background, then this is for you.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We will use the system.time function to time the execution."

A block of code is set as follows:

library(VIM)
aggr(miss_mtcars, numbers=TRUE)

Any command-line input or output is written as follows:

# R --vanilla CMD BATCH nothing.R

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Clicking the Next button moves you to the next screen."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/Data_Analysis_With_R_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

Getting help in R

Before we go further, it would serve us well to have a brief section detailing how to get help in R. Most R tutorials leave this for one of the last sections—if it is even included at all! In my own personal experience, though, getting help is going to be one of the first things you will want to do as you add more bricks to your R knowledge castle. Learning R doesn't have to be difficult; just take it slowly, ask questions, and get help early. Go you!

It is easy to get help with R right at the console. Running the help.start() function at the prompt will start a manual browser. From here, you can do anything from going over the basics of R to reading the nitty-gritty details on how R works internally.

You can get help on a particular function in R if you know its name, by supplying that name as an argument to the help function. For example, let's say you want to know more about the gsub() function that I sprang on you before. Running the following code:

> help("gsub")
> # or simply
> ?gsub

will display a manual page documenting what the function is, how to use it, and examples of its usage.

This rapid accessibility to documentation means that I'm never hopelessly lost when I encounter a function which I haven't seen before. The downside to this extraordinarily convenient help mechanism is that I rarely bother to remember the order of arguments, since looking them up is just seconds away.
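If all you need is that argument order, base R also provides args(), which prints just a function's signature at the console without opening the full manual page:

```r
# Show only the signature of gsub: argument names, order, and defaults
args(gsub)
# function (pattern, replacement, x, ignore.case = FALSE, perl = FALSE,
#     fixed = FALSE, useBytes = FALSE)
```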

Occasionally, you won't quite remember the exact name of the function you're looking for, but you'll have an idea about what the name should be. For this, you can use the help.search() function.

> help.search("chisquare")
> # or simply
> ??chisquare

For tougher, more semantic queries, nothing beats a good old fashioned web search engine. If you don't get relevant results the first time, try adding the term programming or statistics in there for good measure.

Vectors

Vectors are the most basic data structures in R, and they are ubiquitous indeed. In fact, even the single values that we've been working with thus far were actually vectors of length 1. That's why the interactive R console has been printing [1] along with all of our output.

Vectors are essentially an ordered collection of values of the same atomic data type. Vectors can be arbitrarily large (with some limitations), or they can be just one single value.

The canonical way of building vectors manually is by using the c() function (which stands for combine).

> our.vect <- c(8, 6, 7, 5, 3, 0, 9)
> our.vect
[1] 8 6 7 5 3 0 9

In the preceding example, we created a numeric vector of length 7 (namely, Jenny's telephone number).

Note that if we tried to put character data types into this vector as follows:

> another.vect <- c("8", 6, 7, "-", 3, "0", 9)
> another.vect
[1] "8" "6" "7" "-" "3" "0" "9"

R would convert all the items in the vector (called elements) into character data types to satisfy the condition that all elements of a vector must be of the same type. A similar thing happens when you try to use logical values in a vector with numbers; the logical values would be converted into 1 and 0 (for TRUE and FALSE, respectively). These logicals will turn into "TRUE" and "FALSE" (note the quotation marks) when used in a vector that contains characters.
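To see this coercion for yourself, try mixing types at the console; R silently promotes everything to the most general type present (logical, then numeric, then character):

```r
# Logicals mixed with numbers are coerced to numerics
c(TRUE, FALSE, 2)     # returns 1 0 2

# Anything mixed with characters is coerced to character
c(TRUE, 1, "a")       # returns "TRUE" "1" "a"
```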

Subsetting

It is very common to want to extract one or more elements from a vector. For this, we use a technique called indexing or subsetting. After the vector, we put an integer in square brackets ([]) called the subscript operator. This instructs R to return the element at that index. The indices (plural for index, in case you were wondering!) for vectors in R start at 1, and stop at the length of the vector.

> our.vect[1]                  # to get the first value
[1] 8
> # the function length() returns the length of a vector
> length(our.vect)
[1] 7
> our.vect[length(our.vect)]   # get the last element of a vector
[1] 9

Note that in the preceding code, we used a function in the subscript operator. In cases like these, R evaluates the expression in the subscript operator, and uses the number it returns as the index to extract.

If we get greedy, and try to extract an element at an index that doesn't exist, R will respond with NA, meaning, not available. We see this special value cropping up from time to time throughout this text.

> our.vect[10]
[1] NA

One of the most powerful ideas in R is that you can use vectors to subset other vectors:

> # extract the first, third, fifth, and
> # seventh element from our vector
> our.vect[c(1, 3, 5, 7)]
[1] 8 7 3 9

The ability to use vectors to index other vectors may not seem like much now, but its usefulness will become clear soon.

Another way to create vectors is by using sequences.

> other.vector <- 1:10
> other.vector
[1] 1 2 3 4 5 6 7 8 9 10
> another.vector <- seq(50, 30, by=-2)
> another.vector
[1] 50 48 46 44 42 40 38 36 34 32 30

Above, the 1:10 statement creates a vector from 1 to 10. 10:1 would have created the same 10 element vector, but in reverse. The seq() function is more general in that it allows sequences to be made using steps (among many other things).
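For example, seq() can also produce a fixed number of evenly spaced values via its length.out argument, which is handy when you care about how many points you get rather than the step size:

```r
# Five evenly spaced points between 0 and 1
seq(0, 1, length.out = 5)   # returns 0.00 0.25 0.50 0.75 1.00

# A step size of 3, as with the by argument shown above
seq(2, 20, by = 3)          # returns 2 5 8 11 14 17 20
```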

Combining our knowledge of sequences and vector subsetting, we can get the first 5 digits of Jenny's number thusly:

> our.vect[1:5]
[1] 8 6 7 5 3

Vectorized functions

Part of what makes R so powerful is that many of R's functions take vectors as arguments. These vectorized functions are usually extremely fast and efficient. We've already seen one such function, length(), but there are many many others.

> # takes the mean of a vector
> mean(our.vect)
[1] 5.428571
> sd(our.vect)    # standard deviation
[1] 3.101459
> min(our.vect)
[1] 0
> max(1:10)
[1] 10
> sum(c(1, 2, 3))
[1] 6

In practical settings, such as when reading data from files, it is common to have NA values in vectors:

> messy.vector <- c(8, 6, NA, 7, 5, NA, 3, 0, 9)
> messy.vector
[1]  8  6 NA  7  5 NA  3  0  9
> length(messy.vector)
[1] 9

Some vectorized functions will not allow NA values by default. In these cases, an extra keyword argument must be supplied along with the first argument to the function.

> mean(messy.vector)
[1] NA
> mean(messy.vector, na.rm=TRUE)
[1] 5.428571
> sum(messy.vector, na.rm=FALSE)
[1] NA
> sum(messy.vector, na.rm=TRUE)
[1] 38

As mentioned previously, vectors can be constructed from logical values too.

> log.vector <- c(TRUE, TRUE, FALSE)
> log.vector
[1]  TRUE  TRUE FALSE

Since logical values can be coerced into behaving like numerics, as we saw earlier, if we try to sum a logical vector as follows:

> sum(log.vector)
[1] 2

we will, essentially, get a count of the number of TRUE values in that vector.
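The same coercion trick means that taking the mean() of a logical vector gives the proportion of TRUE values:

```r
log.vector <- c(TRUE, TRUE, FALSE)

# TRUEs count as 1 and FALSEs as 0, so the mean is the fraction of TRUEs
mean(log.vector)   # returns 0.6666667: two out of three elements are TRUE
```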

There are many functions in R which operate on vectors and return logical vectors. is.na() is one such function. It returns a logical vector of the same length as the vector supplied as an argument, with a TRUE in the position of every NA value. Remember our messy vector (from just a minute ago)?

> messy.vector
[1] 8 6 NA 7 5 NA 3 0 9
> is.na(messy.vector)
[1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE FALSE
> #   8     6   NA    7     5   NA    3     0     9

Putting together these pieces of information, we can get a count of the number of NA values in a vector as follows:

> sum(is.na(messy.vector))
[1] 2

When you use Boolean operators on vectors, they also return logical vectors of the same length as the vector being operated on.

> our.vect > 5
[1]  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE

If we wanted to—and we do—count the number of digits in Jenny's phone number that are greater than five, we would do so in the following manner:

> sum(our.vect > 5)
[1] 4

Advanced subsetting

Did I mention that we can use vectors to subset other vectors? When we subset vectors using logical vectors of the same length, only the elements corresponding to the TRUE values are extracted. Hopefully, sparks are starting to go off in your head. If we wanted to extract only the legitimate non-NA digits from Jenny's number, we can do it as follows:

> messy.vector[!is.na(messy.vector)]
[1] 8 6 7 5 3 0 9

This is a very critical trait of R, so let's take our time understanding it; this idiom will come up again and again throughout this book.

The logical vector that yields TRUE when an NA value occurs in messy.vector (from is.na()) is then negated, as a whole, by the negation operator !. The resultant vector is TRUE wherever the corresponding value in messy.vector is not NA. When this logical vector is used to subset the original messy vector, it extracts only the non-NA values from it.

Similarly, we can show all the digits in Jenny's phone number that are greater than five as follows:

> our.vect[our.vect > 5]
[1] 8 6 7 9

Thus far, we've only been displaying elements that have been extracted from a vector. However, just as we've been assigning and re-assigning variables, we can assign values to various indices of a vector, and change the vector as a result. For example, if Jenny tells us that we have the first digit of her phone number wrong (it's really 9), we can reassign just that element without modifying the others.

> our.vect
[1] 8 6 7 5 3 0 9
> our.vect[1] <- 9
> our.vect
[1] 9 6 7 5 3 0 9

Sometimes, it may be required to replace all the NA values in a vector with the value 0. To do that with our messy vector, we can execute the following command:

> messy.vector[is.na(messy.vector)] <- 0
> messy.vector
[1] 8 6 0 7 5 0 3 0 9

Elegant though the preceding solution is, modifying a vector in place is usually discouraged in favor of creating a copy of the original vector and modifying the copy. One such technique for performing this is by using the ifelse() function.

Not to be confused with the if/else control construct, ifelse() is a function that takes 3 arguments: a test that returns a logical/Boolean value, a value to use if the element passes the test, and one to return if the element fails the test.

The preceding in-place modification solution could be re-implemented with ifelse as follows:

> ifelse(is.na(messy.vector), 0, messy.vector)
[1] 8 6 0 7 5 0 3 0 9

Recycling

The last important property of vectors and vector operations in R is that they can be recycled. To understand what I mean, examine the following expression:

> our.vect + 3
[1] 12  9 10  8  6  3 12

This expression adds three to each digit in Jenny's phone number. Although it may look like it, R is not performing this operation between a vector and a single value. Remember when I said that single values are actually vectors of length 1? What is really happening here is that R is told to perform element-wise addition on a vector of length 7 and a vector of length 1. Since element-wise addition is not defined for vectors of differing lengths, R recycles the smaller vector until it reaches the same length as the bigger vector. Once both vectors are the same size, R performs the addition element by element and returns the result.

> our.vect + 3
[1] 12  9 10  8  6  3 12

is tantamount to…

> our.vect + c(3, 3, 3, 3, 3, 3, 3)
[1] 12  9 10  8  6  3 12

If we wanted to extract every other digit from Jenny's phone number, we can do so in the following manner:

> our.vect[c(TRUE, FALSE)]
[1] 9 7 3 9

This works because the vector c(TRUE, FALSE) is repeated until it is of length 7, making it equivalent to the following:

> our.vect[c(TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE)]
[1] 9 7 3 9

One common snag related to vector recycling that R users (useRs, if I may) encounter is that during some arithmetic operations involving vectors of differing lengths, R will warn you if the smaller vector cannot be repeated a whole number of times to reach the length of the bigger vector. This is never a problem when doing vector arithmetic with single values, since a vector of length 1 can always be recycled a whole number of times to match the length of any other vector. It would pose a problem, though, if we were looking to add three to every other element in Jenny's phone number.

> our.vect + c(3, 0)
[1] 12  6 10  5  6  0 12
Warning message:
In our.vect + c(3, 0) :
  longer object length is not a multiple of shorter object length

You will likely learn to love these warnings, as they have stopped many useRs from making grave errors.

Before we move on to the next section, an important thing to note is that in a lot of other programming languages, many of the things that we did would have been implemented using for loops and other control structures. Although there is certainly a place for loops and such in R, oftentimes a more sophisticated solution exists in using just vector/matrix operations. In addition to elegance and brevity, the solution that exploits vectorization and recycling is often many, many times more efficient.
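To see why, consider a quick sketch comparing a loop-based sum of squares to its vectorized equivalent. (This is an illustrative example of my own devising; no timings are shown, since they will vary from machine to machine, but you can wrap each call in system.time() to compare for yourself.)

sum.of.squares.loop <- function(vect){
  # the loop-heavy approach common in other languages
  total <- 0
  for(element in vect){
    total <- total + element^2
  }
  total
}

# the vectorized approach: square the whole vector, then sum it
sum.of.squares.vect <- function(vect){
  sum(vect^2)
}

some.numbers <- 1:1000000
system.time(sum.of.squares.loop(some.numbers))
system.time(sum.of.squares.vect(some.numbers))

On most systems, the vectorized version runs dramatically faster than the loop.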

Functions

If we need to perform some computation multiple times that isn't already a function in R, we usually do so by defining our own functions. A custom function in R is defined using the following syntax:

function.name <- function(argument1, argument2, ...){
  # some functionality
}

For example, if we wanted to write a function that determined if a number supplied as an argument was even, we can do so in the following manner:

> is.even <- function(a.number){
+   remainder <- a.number %% 2
+   if(remainder==0)
+     return(TRUE)
+   return(FALSE)
+ }
>
> # testing it
> is.even(10)
[1] TRUE
> is.even(9)
[1] FALSE

As an example of a function that takes more than one argument, let's generalize the preceding function by creating a function that determines whether the first argument is divisible by its second argument.

> is.divisible.by <- function(large.number, smaller.number){
+   if(large.number %% smaller.number != 0)
+     return(FALSE)
+   return(TRUE)
+ }
>
> # testing it
> is.divisible.by(10, 2)
[1] TRUE
> is.divisible.by(10, 3)
[1] FALSE
> is.divisible.by(9, 3)
[1] TRUE

Our function, is.even(), could now be rewritten simply as:

> is.even <- function(num){
+   is.divisible.by(num, 2)
+ }

It is very common in R to want to apply a particular function to every element of a vector. Instead of using a loop to iterate over the elements of a vector, as we would do in many other languages, we use a function called sapply(). sapply() takes a vector and a function as its arguments. It then applies the function to every element and returns a vector of results. We can use sapply() in this manner to find out which digits in Jenny's phone number are even:

> sapply(our.vect, is.even)
[1] FALSE  TRUE FALSE FALSE FALSE  TRUE FALSE

This worked great because sapply() takes each element and uses it as the argument to is.even(), which takes only one argument. If you wanted to find the digits that are divisible by three, it would require a little more work.

One option is just to define a function is.divisible.by.three() that takes only one argument, and use that in sapply. The more common solution, however, is to define an unnamed function that does just that in the body of the sapply function call:

> sapply(our.vect, function(num){is.divisible.by(num, 3)})
[1]  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE

Here, we essentially created a function that checks whether its argument is divisible by three, except we don't assign it to a variable, and use it directly in the sapply body instead. These one-time-use unnamed functions are called anonymous functions or lambda functions. (The name comes from Alonzo Church's invention of the lambda calculus, if you were wondering.)

This is somewhat of an advanced usage of R, but it is very useful as it comes up very often in practice.

If we wanted to extract the digits in Jenny's phone number that are divisible by both two and three, we could write it as follows:

> where.even <- sapply(our.vect, is.even)
> where.div.3 <- sapply(our.vect, function(num){
+   is.divisible.by(num, 3)})
> # "&" is like the "&&" and operator but for vectors
> our.vect[where.even & where.div.3]
[1] 6 0

Neat-O!

Note that if we wanted to be sticklers, we would have a clause in the function bodies to preclude a modulus computation, where the first number was smaller than the second. If we had, our function would not have erroneously indicated that 0 was divisible by two and three. I'm not a stickler, though, so the functions will remain as is. Fixing this function is left as an exercise for the (stickler) reader.
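For the sticklers who do attempt that exercise, one possible sketch of a guarded version (named differently here so as not to clobber the original function) might look like this:

is.divisible.by.strict <- function(large.number, smaller.number){
  # guard clause: refuse the modulus computation when the
  # first number is smaller than the second
  if(large.number < smaller.number)
    return(FALSE)
  large.number %% smaller.number == 0
}

With this version, is.divisible.by.strict(0, 2) returns FALSE, since 0 is smaller than 2.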

Matrices

In addition to the vector data structure, R has the matrix, data frame, list, and array data structures. Though we will be using all these types (except arrays) in this book, we only need to review the first two in this chapter.

A matrix in R, like in math, is a rectangular array of values (of one type) arranged in rows and columns, and can be manipulated as a whole. Operations on matrices are fundamental to data analysis.

One way of creating a matrix is to just supply a vector to the function matrix().

> a.matrix <- matrix(c(1, 2, 3, 4, 5, 6))
> a.matrix
     [,1]
[1,]    1
[2,]    2
[3,]    3
[4,]    4
[5,]    5
[6,]    6

This produces a matrix with all the supplied values in a single column. We can make a similar matrix with two columns by supplying matrix() with an optional argument, ncol, that specifies the number of columns.

> a.matrix <- matrix(c(1, 2, 3, 4, 5, 6), ncol=2)
> a.matrix
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

We could have produced the same matrix by binding two vectors, c(1, 2, 3) and c(4, 5, 6) by columns using the cbind() function as follows:

> a2.matrix <- cbind(c(1, 2, 3), c(4, 5, 6))

We could create the transpose of this matrix (where the rows and columns are switched) by binding those vectors by row instead:

> a3.matrix <- rbind(c(1, 2, 3), c(4, 5, 6))
> a3.matrix
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6

or by just using the matrix transposition function in R, t().

> t(a2.matrix)
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6

Some other functions that operate on whole matrices are rowSums()/colSums() and rowMeans()/colMeans().

> a2.matrix
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6
> colSums(a2.matrix)
[1]  6 15
> rowMeans(a2.matrix)
[1] 2.5 3.5 4.5

If vectors have sapply(), then matrices have apply(). The preceding two functions could have been written, more verbosely, as:

> apply(a2.matrix, 2, sum)
[1]  6 15
> apply(a2.matrix, 1, mean)
[1] 2.5 3.5 4.5

where 1 instructs R to perform the supplied function over its rows, and 2, over its columns.

The matrix multiplication operator in R is %*%:

> a2.matrix %*% a2.matrix
Error in a2.matrix %*% a2.matrix : non-conformable arguments

Remember, matrix multiplication is only defined for matrices where the number of columns in the first matrix is equal to the number of rows in the second.

> a2.matrix
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6
> a3.matrix
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
> a2.matrix %*% a3.matrix
     [,1] [,2] [,3]
[1,]   17   22   27
[2,]   22   29   36
[3,]   27   36   45
>
> # dim() tells us how many rows and columns
> # (respectively) there are in the given matrix
> dim(a2.matrix)
[1] 3 2

To index the element of a matrix at the second row and first column, you need to supply both of these numbers into the subscripting operator.

> a2.matrix[2,1]
[1] 2

Many useRs get confused and forget the order in which the indices must appear; remember—it's rows first, then columns!

If you leave one of the spaces empty, R will assume you want that whole dimension:

> # returns the whole second column
> a2.matrix[,2]
[1] 4 5 6
> # returns the first row
> a2.matrix[1,]
[1] 1 4

And, as always, we can use vectors in our subscript operator:

> # give me the elements in column 2 at the first and third rows
> a2.matrix[c(1, 3), 2]
[1] 4 6

Loading data into R

Thus far, we've only been entering data directly into the interactive R console. For any data set of non-trivial size this is, obviously, an intractable solution. Fortunately for us, R has a robust suite of functions for reading data directly from external files.

Go ahead and create a file on your hard disk called favorites.txt that looks like this:

flavor,number
pistachio,6
mint chocolate chip,7
vanilla,5
chocolate,10
strawberry,2
neopolitan,4

This data represents the number of students in a class that prefer a particular flavor of soy ice cream. We can read the file into a variable called favs as follows:

> favs <- read.table("favorites.txt", sep=",", header=TRUE)

If you get an error that there is no such file or directory, give R the full path name to your data set or, alternatively, run the following command:

> favs <- read.table(file.choose(), sep=",", header=TRUE)

The preceding command brings up an open file dialog that lets you navigate to the file you've just created.

The argument sep="," tells R that each data element in a row is separated by a comma. Other common data formats have values separated by tabs and pipes ("|"). The value of sep should then be "\t" and "|", respectively.
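For instance, if the same data lived in hypothetical files called favorites.tsv (tab-separated) and favorites.psv (pipe-separated), the corresponding calls would look like this:

> favs <- read.table("favorites.tsv", sep="\t", header=TRUE)
> favs <- read.table("favorites.psv", sep="|", header=TRUE)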

The argument header=TRUE tells R that the first row of the file should be interpreted as the names of the columns. Remember, you can enter ?read.table at the console to learn more about these options.

Reading from files in this comma-separated-values format (usually with the .csv file extension) is so common that R has a more specific function just for it. The preceding data import expression can be best written simply as:

> favs <- read.csv("favorites.txt")

Now, we have all the data in the file held in a variable of class data.frame. A data frame can be thought of as a rectangular array of data that you might see in a spreadsheet application. In this way, a data frame can also be thought of as a matrix; indeed, we can use matrix-style indexing to extract elements from it. A data frame differs from a matrix, though, in that a data frame may have columns of differing types. For example, whereas a matrix would only allow one of these types, the data set we just loaded contains character data in its first column, and numeric data in its second column.

Let's check out what we have by using the head() command, which will show us the first few lines of a data frame:

> head(favs)
               flavor number
1           pistachio      6
2 mint chocolate chip      7
3             vanilla      5
4           chocolate     10
5          strawberry      2
6          neopolitan      4
> class(favs)
[1] "data.frame"
> class(favs$flavor)
[1] "factor"
> class(favs$number)
[1] "numeric"

I lied, ok! So what?! Technically, flavor is a factor data type, not a character type.

We haven't seen factors yet, but the idea behind them is really simple. Essentially, factors are codings for categorical variables, which are variables that take on one of a finite number of categories—think {"high", "medium", and "low"} or {"control", "experimental"}.
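As a quick taste of what a factor looks like at the console (we'll treat them properly later), consider this small example:

> some.factor <- factor(c("high", "low", "medium", "low", "high"))
> some.factor
[1] high   low    medium low    high
Levels: high low medium
> levels(some.factor)
[1] "high"   "low"    "medium"

Notice that R stores the set of possible categories (the levels) alongside the observed values.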

Though factors are extremely useful in statistical modeling in R, the fact that R, by default, automatically interprets a column read from disk as type factor if it contains characters is something that trips up novices and seasoned useRs alike. Because of this, we will usually prevent this behavior manually by adding the stringsAsFactors optional keyword argument to the read.* commands:

> favs <- read.csv("favorites.txt", stringsAsFactors=FALSE)
> class(favs$flavor)
[1] "character"

Much better, for now! If you'd like to make this behavior the new default, read the ?options manual page. We can always convert to factors later on if we need to!

If you haven't noticed already, I've snuck a new operator on you—$, the extract operator. This is the most popular way to extract attributes (or columns) from a data frame. You can also use double square brackets ([[ and ]]) to do this.

These are both in addition to the canonical matrix indexing option. The following three statements are thus, in this context, functionally identical:

> favs$flavor
[1] "pistachio"           "mint chocolate chip" "vanilla"
[4] "chocolate"           "strawberry"          "neopolitan"
> favs[["flavor"]]
[1] "pistachio"           "mint chocolate chip" "vanilla"
[4] "chocolate"           "strawberry"          "neopolitan"
> favs[,1]
[1] "pistachio"           "mint chocolate chip" "vanilla"
[4] "chocolate"           "strawberry"          "neopolitan"

Note

Notice how R has now printed another number in square brackets—besides [1]—along with our output. This is to show us that chocolate is the fourth element of the vector that was returned from the extraction.

You can use the names() function to get a list of the columns available in a data frame. You can even reassign names using the same:

> names(favs)
[1] "flavor" "number"
> names(favs)[1] <- "flav"
> names(favs)
[1] "flav"   "number"

Lastly, we can get a compact display of the structure of a data frame by using the str() function on it:

> str(favs)
'data.frame':   6 obs. of  2 variables:
 $ flav  : chr  "pistachio" "mint chocolate chip" "vanilla" "chocolate" ...
 $ number: num  6 7 5 10 2 4

Actually, you can use this function on any R structure—the property of functions that change their behavior based on the type of input is called polymorphism.
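For example, the very same str() function happily accepts a plain vector or a matrix, adapting its output to each:

> str(c(8, 6, 7, 5, 3, 0, 9))
 num [1:7] 8 6 7 5 3 0 9
> str(matrix(1:6, ncol=2))
 int [1:3, 1:2] 1 2 3 4 5 6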

Working with packages

Robust, performant, and numerous though base R's functions are, we are by no means limited to them! Additional functionality is available in the form of packages. In fact, what makes R such a formidable statistics platform is the astonishing wealth of packages available (well over 7,000 at the time of writing). R's ecosystem is second to none!

Most of these myriad packages exist on the Comprehensive R Archive Network (CRAN). CRAN is the primary repository for user-created packages.

One package that we are going to start using right away is the ggplot2 package. ggplot2 is a plotting system for R. Base R has sophisticated and advanced mechanisms to plot data, but many find ggplot2 more consistent and easier to use. Further, the plots are often more aesthetically pleasing by default.

Let's install it!

# downloads and installs from CRAN
> install.packages("ggplot2")

Now that we have the package downloaded, let's load it into the R session, and test it out by plotting our data from the last section:

> library(ggplot2)
> ggplot(favs, aes(x=flav, y=number)) +
+   geom_bar(stat="identity") +
+   ggtitle("Soy ice cream flavor preferences")

Figure 1.1: Soy ice cream flavor preferences

You're all wrong, Mint Chocolate Chip is way better!

Don't worry about the syntax of the ggplot function, yet. We'll get to it in good time.

You will be installing some more packages as you work through this text. In the meantime, if you want to play around with a few more packages, you can install the gdata and foreign packages that allow you to directly import Excel spreadsheets and SPSS data files respectively directly into R.
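As a sketch of how those imports might look (the file names here are hypothetical, and gdata's read.xls requires Perl on some systems):

> library(gdata)
> spreadsheet.data <- read.xls("my-data.xlsx")
> library(foreign)
> survey.data <- read.spss("my-survey.sav", to.data.frame=TRUE)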

Exercises

You can practice the following exercises to help you get a good grasp of the concepts learned in this chapter:

1. Write a function called simon.says that takes in a character string, and returns that string in all upper case after prepending the string "Simon says: " to the beginning of it.
2. Write a function that takes two matrices as arguments, and returns a logical value representing whether the matrices can be matrix multiplied.
3. Find a free data set on the web, download it, and load it into R. Explore the structure of the data set.
4. Reflect upon how Hester Prynne allowed her scarlet letter to be decorated with flowers by her daughter in Chapter 10. To what extent is this indicative of Hester's recasting of the scarlet letter as a positive part of her identity? Back up your thesis with excerpts from the book.

Summary

In this chapter, we learned about the world's greatest analytics platform, R. We started from the beginning and built a foundation, and will now explore R further, based on the knowledge gained in this chapter. By now, you have become well versed in the basics of R (which, paradoxically, is the hardest part). You now know how to: