Description

Frequently the tool of choice for academics, R has spread deep into the private sector and can be found in the production pipelines at some of the most advanced and successful enterprises. The power and domain-specificity of R allows the user to express complex analytics easily, quickly, and succinctly.
Starting with the basics of R and statistical reasoning, this book dives into advanced predictive analytics, showing how to apply those techniques to real-world data through real-world examples.
Packed with engaging problems and exercises, this book begins with a review of R and its syntax with packages like Rcpp, ggplot2, and dplyr. From there, get to grips with the fundamentals of applied statistics and build on this knowledge to perform sophisticated and powerful analytics. Solve the difficulties relating to performing data analysis in practice and find solutions to working with messy data, large data, communicating results, and facilitating reproducibility.
This book is engineered to be an invaluable resource through many stages of anyone’s career as a data analyst.




Data Analysis with R
Second Edition


A comprehensive guide to manipulating, analyzing, and visualizing data in R


Tony Fischetti


BIRMINGHAM - MUMBAI

Data Analysis with R Second Edition

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Author: Tony Fischetti
Commissioning Editor: Amey Varangaonkar
Acquisition Editor: Tushar Gupta
Content Development Editor: Tejas Limkar
Technical Editor: Danish Shaikh
Copy Editor: Safis Editing
Project Coordinator: Manthan Patel
Proofreader: Safis Editing
Indexer: Tejal Daruwale Soni
Graphics: Tania Dutta
Production Coordinator: Shantanu Zagade

First published: December 2015
Second edition: March 2018

Production reference: 1270318

Published by Packt Publishing Ltd.
Livery Place, 35 Livery Street
Birmingham B3 2PB, UK.

ISBN 978-1-78839-372-0

www.packtpub.com

mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

PacktPub.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Tony Fischetti is a data scientist at the New York Public Library, where he uses R every day. He graduated in cognitive and computer science from Rensselaer Polytechnic Institute. His thesis was strongly focused on using statistics to study visual short-term memory.

He enjoys writing and contributing to open source software, blogging at On The Lambda (http://www.onthelambda.com/), writing about himself in the third person, and sharing knowledge using simple, approachable language and engaging examples.

I'd like to thank the NYPL, the R community, my support network of millions, Toblerone, Ignatius, Lex, and Pierre, and Bethany Wickham. I'd like to give a huge thanks to Andrea Fischetti for her love and support, and for keeping me warm and human. Finally, I thank my father, to whom I owe my love of learning and my interest in science and statistics, and my mother for her love and unwavering support.

About the reviewers

Manoj Kumar is a seasoned consultant with more than 15 years of versatile experience implementing process improvement and operations optimization in manufacturing and production environments, using advanced predictive and prescriptive analytics such as machine learning, deep learning, symbolic dynamics, neural dynamics, circuit mechanisms, and Markov decision processes.

His domain experience is in:

Transportation and Supply Chain Management

Process and manufacturing

Mining and energy

Retail, CPG, Healthcare, Marketing, and F&A


Davor Lozić is a senior software engineer interested in various subjects, especially computer security, algorithms, and data structures. He manages teams of 15+ engineers and is a part-time assistant professor who lectures about database systems, Java, and interoperability. You can visit his website at http://warriorkitty.com and contact him from there. He likes cats! If you want to talk about any aspect of technology or if you have funny pictures of cats, feel free to contact him.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.


Table of Contents

Title Page

Copyright and Credits

Data Analysis with R Second Edition

Packt Upsell

Why subscribe?

PacktPub.com

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Conventions used

Get in touch

Reviews

RefresheR

Navigating the basics

Arithmetic and assignment

Logicals and characters

Flow of control

Getting help in R

Vectors

Subsetting

Vectorized functions

Advanced subsetting

Recycling

Functions

Matrices

Loading data into R

Working with packages

Exercises

Summary

The Shape of Data

Univariate data

Frequency distributions

Central tendency

Spread

Populations, samples, and estimation

Probability distributions

Visualization methods

Exercises

Summary

Describing Relationships

Multivariate data

Relationships between a categorical and continuous variable

Relationships between two categorical variables

The relationship between two continuous variables

Covariance

Correlation coefficients

Comparing multiple correlations

Visualization methods

Categorical and continuous variables

Two categorical variables

Two continuous variables

More than two continuous variables

Exercises

Summary

Probability

Basic probability

A tale of two interpretations

Sampling from distributions

Parameters

The binomial distribution

The normal distribution

The three-sigma rule and using z-tables

Exercises

Summary

Using Data To Reason About The World

Estimating means

The sampling distribution

Interval estimation

How did we get 1.96?

Smaller samples

Exercises

Summary

Testing Hypotheses

The null hypothesis significance testing framework

One and two-tailed tests

Errors in NHST

A warning about significance

A warning about p-values

Testing the mean of one sample

Assumptions of the one sample t-test

Testing two means

Assumptions of the independent samples t-test

Testing more than two means

Assumptions of ANOVA

Testing independence of proportions

What if my assumptions are unfounded?

Exercises

Summary

Bayesian Methods

The big idea behind Bayesian analysis

Choosing a prior

Who cares about coin flips

Enter MCMC – stage left

Using JAGS and runjags

Fitting distributions the Bayesian way

The Bayesian independent samples t-test

Exercises

Summary

The Bootstrap

What's... uhhh... the deal with the bootstrap?

Performing the bootstrap in R (more elegantly)

Confidence intervals

A one-sample test of means

Bootstrapping statistics other than the mean

Busting bootstrap myths

What have we left out?

Exercises

Summary

Predicting Continuous Variables

Linear models

Simple linear regression

Simple linear regression with a binary predictor

A word of warning

Multiple regression

Regression with a non-binary predictor

Kitchen sink regression

The bias-variance trade-off

Cross-validation

Striking a balance

Linear regression diagnostics

Second Anscombe relationship

Third Anscombe relationship

Fourth Anscombe relationship

Advanced topics

Exercises

Summary

Predicting Categorical Variables

k-Nearest neighbors

Using k-NN in R

Confusion matrices

Limitations of k-NN

Logistic regression

Generalized Linear Model (GLM)

Using logistic regression in R

Decision trees

Random forests

Choosing a classifier

The vertical decision boundary

The diagonal decision boundary

The crescent decision boundary

The circular decision boundary

Exercises

Summary

Predicting Changes with Time

What is a time series?

What is forecasting?

Uncertainty

Difficulties in forecasting

Creating and plotting time series

Components of time series

Time series decomposition

White noise

Autocorrelation

Smoothing

Simple exponential smoothing for forecasting

Accuracy assessment

Double exponential smoothing

Triple exponential smoothing

ETS and the state space model

Interventions for improvement

What we didn't cover

Citations for the climate change data

Exercises

Summary

Sources of Data

Relational databases

Why didn't we just do that in SQL?

Using JSON

XML

Other data formats

Online repositories

Exercises

Summary

Dealing with Missing Data

Analysis with missing data

Visualizing missing data

Types of missing data

So which one is it?

Unsophisticated methods for dealing with missing data

Complete case analysis

Pairwise deletion

Mean substitution

Hot deck imputation

Regression imputation

Stochastic regression imputation

Multiple imputation

So how does mice come up with the imputed values?

Methods of imputation

Multiple imputation in practice

Exercises

Summary

Dealing with Messy Data

Checking unsanitized data

Checking for out-of-bounds data

Checking the data type of a column

Checking for unexpected categories

Checking for outliers, entry errors, or unlikely data points

Chaining assertions

Regular expressions

What are regular expressions?

Getting started

Regex for data normalization

More normalization

Other tools for messy data

OpenRefine

Fuzzy matching

Exercises

Summary

Dealing with Large Data

Wait to optimize

Using a bigger and faster machine

Be smart about your code

Allocation of memory

Vectorization

Using optimized packages

Using another R implementation

Using parallelization

Getting started with parallel R

An example of (some) substance

Using Rcpp

Being smarter about your code

Exercises

Summary

Working with Popular R Packages

The data.table package

The i in DT[i, j, by]

What in the world are by reference semantics?

The j in DT[i, j, by]

Using both i and j

Using the by argument for grouping

Joining data tables

Reshaping, melting, and pivoting data

Using dplyr and tidyr to manipulate data

Functional programming as a main tidyverse principle

Loading data for use in dplyr

Manipulating rows

Selecting and renaming columns

Computing on columns

Grouping in dplyr

Joining data

Reshaping data with tidyr

Exercises

Summary

Reproducibility and Best Practices

R scripting

RStudio

Running R scripts

An example script

Scripting and reproducibility

R projects

Version control

Package version management

Communicating results

Exercises

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

I'm going to shoot it to you straight. There are a lot of books about data analysis and the R programming language. I'll take it for granted that you already know why it's extremely helpful and fruitful to learn R and data analysis (if not, why are you reading this preface?!) but allow me to make a case for choosing this book to guide you in your journey.

For one, this subject didn't come naturally to me. There are those with an innate talent for grasping the intricacies of statistics the first time it is taught to them; I don't think I'm one of them. I kept at it because I love science and research, and I knew that data analysis was necessary, not because it immediately made sense to me. Today, I love the subject in and of itself rather than instrumentally, but this came only after months of heartache. Eventually, as I consumed resource after resource, the pieces of the puzzle started to come together. After this, I started tutoring interested friends in the subject—and have seen them trip over the same obstacles that I had to learn to climb. I think that coming from this background gives me a unique perspective of the plight of the statistics student and it allows me to reach them in a way that others may not be able to. By the way, don't let the fact that statistics used to baffle me scare you; I have it on fairly good authority that I know what I'm talking about today.

Secondly, this book was born of the frustration that most statistics texts tend to be written in the driest manner possible. In contrast, I adopt a light-hearted buoyant approach—but without becoming agonizingly flippant.

Third, this book includes a lot of material that I wished were covered in more of the resources I used when I was learning data analysis in R. For example, the entire last unit specifically covers topics that present enormous challenges to R analysts when they first go out to apply their knowledge to imperfect real-world data.

Lastly, I thought long and hard about how to lay out this book and which order of topics was optimal. And when I say "long and hard," I mean I wrote a library and designed algorithms to do this. The order in which I present the topics in this book was very carefully considered to (a) build on top of each other, (b) follow a reasonable level of difficulty progression allowing for periodic chapters of relatively simpler material (psychologists call this intermittent reinforcement), (c) group highly related topics together, and (d) minimize the number of topics that require knowledge of yet unlearned topics (this is, unfortunately, common in statistics). If you're interested, I've detailed this procedure in a blog post that you can read at http://bit.ly/teach-stats.

The point is that the book you're holding is a very special one—one that I poured my soul into. Nevertheless, data analysis can be a notoriously difficult subject, and there may be times where nothing seems to make sense. During these times, remember that many others (including myself) have felt stuck too. Persevere... the reward is great. And remember, if a blockhead like me can do it, you can too. Go you!

Who this book is for

Whether you are learning data analysis for the first time or you want to deepen the understanding you already have, this book will prove an invaluable resource. If you are looking for a book to bring you all the way through the fundamentals to the application of advanced and effective analytics methodologies—and if you have some prior programming experience and a mathematical background—then this is for you.

What this book covers

Chapter 1, RefresheR, reviews the aspects of R that subsequent chapters will assume knowledge of. Here, we learn the basics of R syntax, learn of R's major data structures, write functions, load data, and install packages.

Chapter 2, The Shape of Data, discusses univariate data. We learn about different data types, how to describe univariate data, and how to visualize the shape of this data.

Chapter 3, Describing Relationships, covers multivariate data. In particular, we learn about the three main classes of bivariate relationships and learn how to describe them.

Chapter 4, Probability, kicks off a new unit by laying its foundations. We learn about basic probability theory, Bayes' theorem, and probability distributions.

Chapter 5, Using Data to Reason about the World, discusses sampling and estimation theory. Through examples, we learn of the central limit theorem, point estimation, and confidence intervals.

Chapter 6, Testing Hypotheses, introduces the subject of Null Hypothesis Significance Testing (NHST). We learn of many popular hypothesis tests and their non-parametric alternatives. Perhaps most importantly, we gain a thorough understanding of the misconceptions and gotchas of NHST.

Chapter 7, Bayesian Methods, presents an alternative to NHST based on a more intuitive view of probability. We learn the advantages and drawbacks of this approach too.

Chapter 8, The Bootstrap, details another approach to NHST by using a technique called resampling. We learn of its advantages and shortcomings. In addition, this chapter serves as a great reinforcement of the material in chapters 5 and 6.

Chapter 9, Predicting Continuous Variables, kicks off our new unit on predictive analytics and thoroughly discusses linear regression. Before the chapter's conclusion, we learn all about the technique, when to use it, and what traps to look out for.

Chapter 10, Predicting Categorical Variables, introduces four of the most popular classification techniques. By using all four on the same examples, we gain an appreciation for what makes each technique shine.

Chapter 11, Predicting Changes with Time, closes our unit of predictive analytics by introducing the topics of time series analysis and forecasting. This ends with a firm foundation on one of the premier methods of time series forecasting.

Chapter 12, Sources of Data, begins the final unit detailing data analysis in the real world.  This chapter is all about how to use different data sources in R. In particular, we learn how to interface with databases, and request and load JSON and XML via an engaging example.

Chapter 13, Dealing with Missing Data, details what missing data is, how to identify types of missing data, some not-so-great methods for dealing with them, and two principled methods for handling them.

Chapter 14, Dealing with Messy Data, introduces some of the snags of working with less-than-perfect data in practice. This includes checking for unexpected input, wielding regex, and verifying data veracity with assertr.

Chapter 15, Dealing with Large Data, discusses some of the techniques that can be used to cope with data sets larger than what can be handled swiftly without a little planning. The key components of this chapter are on parallelization and Rcpp.

Chapter 16, Working with Popular R Packages, acknowledges that we’ve already wielded a lot of popular packages in this unit, but this chapter fills in some of the gaps and introduces some of the most modern packages that make speed and ease of use a priority.

Chapter 17, Reproducibility and Best Practices, closes with the extremely important (but often ignored) topic of how to use R like a professional. This includes learning about tooling, organization, and reproducibility.

To get the most out of this book

All code in this book has been written against the latest version of R--3.4.3 at the time of writing. As a matter of good practice, you should keep your R version up to date, but most, if not all, of the code should work with any reasonably recent version of R. Some of the R packages we will be installing will require more recent versions, though. For the other software that this book uses, instructions will be furnished pro re nata. If you want to get a head start, however, install RStudio, JAGS, and a C++ compiler (or Rtools if you use Windows).

Download the example code files

You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packtpub.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Data-Analysis-with-R-Second-Edition. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Mount the downloaded WebStorm-10*.dmg disk image file as another disk in your system."

A block of code is set as follows:

# don't worry about memorizing this
temp.density <- density(airquality$Temp)
pdf <- approxfun(temp.density$x, temp.density$y, rule=2)
integrate(pdf, 80, 90)

When we wish to draw your attention to a particular part of a code block or output, the relevant lines or items are set in bold:

table(mtcars$carb) / length(mtcars$carb)

      1       2       3       4       6       8
0.21875 0.31250 0.09375 0.31250 0.03125 0.03125

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Select System info from the Administration panel."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.

RefresheR

Before we dive into the (other) fun stuff (sampling multi-dimensional probability distributions, using convex optimization to fit data models, and so on), it would be helpful if we review those aspects of R that all subsequent chapters will assume knowledge of.

If you fancy yourself an R guru, you should still, at least, skim through this chapter, because you'll almost certainly find the idioms, packages, and style introduced here to be beneficial for following the rest of the material.

If you don't care much about R (yet), and are just in this for the statistics, you can heave a heavy sigh of relief that, for the most part, you can run the code given in this book in the interactive R interpreter with very little modification and just follow along with the ideas. However, it is my belief (read: delusion) that by the end of this book, you'll cultivate a newfound appreciation for R alongside a robust understanding of methods in data analysis.

Fire up your R interpreter and let's get started!

Navigating the basics

In the interactive R interpreter, any line starting with a > character denotes R asking for input. (If you see a + prompt, it means that you didn't finish typing a statement at the prompt and R is asking you to provide the rest of the expression). Striking the return key will send your input to R to be evaluated. R's response is then spit back at you in the line immediately following your input, after which R asks for more input. This is called a REPL (Read-Evaluate-Print-Loop). It is also possible for R to read a batch of commands saved in a file (unsurprisingly called batch mode), but we'll be using the interactive mode for most of the book.

As you might imagine, R supports all the familiar mathematical operators, just as most other languages do.

Arithmetic and assignment

Check out the following example:

> 2 + 2

[1] 4

> 9 / 3

[1] 3

> 5 %% 2 # modulus operator (remainder of 5 divided by 2)

[1] 1

Anything that occurs after the octothorpe or pound sign, #, (or hash-tag for you young'uns), is ignored by the R interpreter. This is useful to document the code in natural language. These are called comments.

In a multi-operation arithmetic expression, R will follow the standard order of operations from math. In order to override this natural order, you have to use parentheses flanking the sub-expression that you'd like to be performed first:

> 3 + 2 - 10 ^ 2 # ^ is the exponent operator

[1] -95

> 3 + (2 - 10) ^ 2

[1] 67

In practice, almost all compound expressions are split up with intermediate values assigned to variables that, when used in future expressions, are just like substituting the variable with the value that was assigned to it. The (primary) assignment operator is <-:

> # assignments follow the form VARIABLE <- VALUE
> var <- 10
> var

[1] 10

> var ^ 2

[1] 100

> VAR / 2 # variable names are case-sensitive

Error: object 'VAR' not found

Notice that the first and second lines in the preceding code snippet didn't have an output to be displayed, so R just immediately asked for more input. This is because assignments don't have a return value. Their only job is to give a value to a variable or change the existing value of a variable. Generally, operations and functions on variables in R don't change the value of the variable. Instead, they return the result of the operation. If you want to change a variable to the result of an operation using that variable, you have to reassign that variable as follows:

> var # var is 10

[1] 10

> var ^ 2

[1] 100

> var # var is still 10

[1] 10

> var <- var ^ 2 # no return value
> var # var is now 100

[1] 100

Be aware that variable names may contain numbers, underscores, and periods; this is something that trips up a lot of people who are familiar with other programming languages that disallow using periods in variable names. The only further restrictions on variable names are that they must start with a letter (or a period and then a letter), and that they must not be one of the reserved words in R, such as TRUE, Inf, and so on.
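For instance (a minimal sketch with throwaway variable names; the exact wording of the error may vary across R versions):

> my.var_2 <- 5 # legal: letters, numbers, underscores, and periods
> my.var_2

[1] 5

> 2nd.var <- 5 # illegal: names can't start with a number

Error: unexpected symbol in "2nd.var"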

Although the arithmetic operators that we've seen thus far are functions in their own right, most functions in R take the form function_name(value(s) supplied to the function). The values supplied to the function are called arguments of that function:

> cos(3.14159) # cosine function

[1] -1

> cos(pi) # pi is a constant that R provides

[1] -1

> acos(-1) # arccosine function

[1] 3.141593

> acos(cos(pi)) + 10

[1] 13.14159

> # functions can be used as arguments to other functions

If you paid attention in math class, you'll know that the cosine of pi is -1 and that arccosine is the inverse function of cosine.

There are hundreds of such useful functions defined in base R, only a handful of which we will see in this book. Two sections from now, we will be building our very own functions.

Before we move on from arithmetic, it will serve us well to visit some of the odd values that may result from certain operations:

> 1 / 0

[1] Inf

> 0 / 0

[1] NaN

It is common during practical usage of R to accidentally divide by zero. As you can see, this undefined operation yields an infinite value in R. Dividing zero by zero yields the value NaN, which stands for Not a Number.
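Base R also provides predicate functions to test for these special values; a minimal sketch:

> is.nan(0 / 0)

[1] TRUE

> is.finite(1 / 0)

[1] FALSE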

Getting help in R

Before we go further, it would serve us well to have a brief section detailing how to get help in R. Most R tutorials leave this for one of the last sections--if it is even included at all! In my own personal experience, though, getting help is going to be one of the first things you will want to do as you add more bricks to your R knowledge castle. Learning R doesn't have to be difficult; just take it slowly, ask questions, and get help early. Go you!

It is easy to get help with R right at the console. Running the help.start() function at the prompt will start a manual browser. From here, you can do anything from going over the basics of R to reading the nitty-gritty details on how R works internally.

You can get help with a particular function in R if you know its name, by supplying that name as an argument to the help function. For example, let's say you want to know more about the gsub() function that I sprang on you before. Check out the following code:

> help("gsub")
> # or simply
> ?gsub

This will display a manual page documenting what the function is, how to use it, and examples of its usage.

This rapid accessibility to documentation means that I'm never hopelessly lost when I encounter a function that I haven't seen before. The downside to this extraordinarily convenient help mechanism is that I rarely bother to remember the order of arguments as looking them up is just seconds away.
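For example, the args() function prints a function's signature right at the console, sparing you a full trip to the manual page:

> args(gsub)

function (pattern, replacement, x, ignore.case = FALSE, perl = FALSE,
    fixed = FALSE, useBytes = FALSE)
NULL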

Occasionally, you won't quite remember the exact name of the function that you're looking for, but you'll have an idea about what the name should be. For this, you can use the help.search() function:

> help.search("chisquare")
> # or simply
> ??chisquare

For tougher, more semantic queries, nothing beats a good old fashioned web search engine. If you don't get relevant results the first time, try adding the term programming or statistics in there for good measure.

Vectors

Vectors are the most basic data structures in R, and they are ubiquitous indeed. In fact, even the single values that we've been working with thus far were actually vectors of length 1. That's why the interactive R console has been printing [1] along with all of our output.

Vectors are essentially an ordered collection of values of the same atomic data type. Vectors can be arbitrarily large (with some limitations) or they can be just one single value.

The canonical way of building vectors manually is using the c() function (which stands for combine):

> our.vect <- c(8, 6, 7, 5, 3, 0, 9)
> our.vect

[1] 8 6 7 5 3 0 9

In the preceding example, we created a numeric vector of length 7 (namely, Jenny's telephone number).

Let's try to put character data types into this vector as follows:

> another.vect <- c("8", 6, 7, "-", 3, "0", 9)
> another.vect

[1] "8" "6" "7" "-" "3" "0" "9"

R would convert all the items in the vector (called elements) into character data types to satisfy the condition that all elements of a vector must be of the same type. A similar thing happens when you try to use logical values in a vector with numbers; the logical values would be converted into 1 and 0 (for TRUE and FALSE, respectively). These logicals will turn into "TRUE" and "FALSE" (note the quotation marks) when used in a vector that contains characters.
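A quick sketch of this coercion in action:

> c(TRUE, FALSE, 2) # logicals coerced into numerics

[1] 1 0 2

> c(TRUE, 6, "7") # everything coerced into characters

[1] "TRUE" "6" "7"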

Subsetting

It is very common to want to extract one or more elements from a vector. For this, we use a technique called indexing or subsetting. After the vector, we put an integer in square brackets ([]) called the subscript operator. This instructs R to return the element at that index. The indices (plural for index, in case you were wondering!) for vectors in R start at 1 and stop at the length of the vector:

> our.vect[1] # to get the first value

[1] 8

> # the function length() returns the length of a vector
> length(our.vect)

[1] 7

> our.vect[length(our.vect)] # get the last element of a vector

[1] 9

Note that in the preceding code, we used a function in the subscript operator. In cases like these, R evaluates the expression in the subscript operator and uses the number it returns as the index to extract.

If we get greedy and try to extract an element from an index that doesn't exist, R will respond with NA, meaning, not available. We see this special value cropping up from time to time throughout this text:

> our.vect[10]

[1] NA

One of the most powerful ideas in R is that you can use vectors to subset other vectors:

> # extract the first, third, fifth, and
> # seventh element from our vector
> our.vect[c(1, 3, 5, 7)]

[1] 8 7 3 9

The ability to use vectors to index other vectors may not seem like much now, but its usefulness will become clear soon.
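As a related aside, a negative index excludes the element at that position rather than extracting it:

> our.vect[-1] # everything except the first element

[1] 6 7 5 3 0 9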

Another way to create vectors is using sequences:

> other.vector <- 1:10
> other.vector

[1] 1 2 3 4 5 6 7 8 9 10

> another.vector <- seq(50, 30, by=-2)
> another.vector

[1] 50 48 46 44 42 40 38 36 34 32 30

Here, the 1:10 statement creates a vector from 1 to 10. 10:1 would have created the same 10-element vector, but in reverse. The seq() function is more general in that it allows sequences to be made using steps (among many other things).
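For instance, the length.out argument to seq() requests a fixed number of evenly spaced points instead of a step size:

> seq(0, 1, length.out=5)

[1] 0.00 0.25 0.50 0.75 1.00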

Combining our knowledge of sequences and of subsetting vectors with other vectors, we can get the first five digits of Jenny's number:

> our.vect[1:5]

[1] 8 6 7 5 3

Vectorized functions

Part of what makes R so powerful is that many of R's functions take vectors as arguments. These vectorized functions are usually extremely fast and efficient. We've already seen one such function, length(), but there are many, many others:

> # takes the mean of a vector
> mean(our.vect)

[1] 5.428571

> sd(our.vect) # standard deviation

[1] 3.101459

> min(our.vect)

[1] 0

> max(1:10)

[1] 10

> sum(c(1, 2, 3))

[1] 6

In practical settings, such as when reading data from files, it is common to have NA values in vectors:

> messy.vector <- c(8, 6, NA, 7, 5, NA, 3, 0, 9)
> messy.vector

[1] 8 6 NA 7 5 NA 3 0 9

> length(messy.vector)

[1] 9

Some vectorized functions will not allow NA values by default. In these cases, an extra keyword argument must be supplied along with the first argument to the function:

> mean(messy.vector)

[1] NA

> mean(messy.vector, na.rm=TRUE)

[1] 5.428571

> sum(messy.vector, na.rm=FALSE)

[1] NA

> sum(messy.vector, na.rm=TRUE)

[1] 38

As mentioned previously, vectors can be constructed from logical values as well:

> log.vector <- c(TRUE, TRUE, FALSE)
> log.vector

[1] TRUE TRUE FALSE

Since logical values can be coerced into behaving like numerics, as we saw earlier, if we try to sum a logical vector as follows:

> sum(log.vector)

[1] 2

We will, essentially, get a count of the number of TRUE values in that vector.
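By the same logic, taking the mean of a logical vector gives the proportion of TRUE values:

> mean(log.vector)

[1] 0.6666667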

There are many functions in R that operate on vectors and return logical vectors. is.na() is one such function. It returns a logical vector that is the same length as the vector supplied as an argument, with TRUE in the position of every NA value. Remember our messy vector (from just a minute ago)?

> messy.vector

[1] 8 6 NA 7 5 NA 3 0 9

> is.na(messy.vector)

[1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE FALSE

> # 8 6 NA 7 5 NA 3 0 9

Putting together these pieces of information, we can get a count of the number of NA values in a vector as follows:

> sum(is.na(messy.vector))

[1] 2

When you use Boolean operators on vectors, they also return logical vectors of the same length as the vector being operated on:

> our.vect > 5

[1] TRUE TRUE TRUE FALSE FALSE FALSE TRUE

If we wanted to--and we do--count the number of digits in Jenny's phone number that are greater than five, we would do so in the following manner:

> sum(our.vect > 5)

[1] 4

Advanced subsetting

Did I mention that we can use vectors to subset other vectors! When we subset vectors using logical vectors of the same length, only the elements corresponding to the TRUE values are extracted. Hopefully, light bulbs are starting to go off in your head. If we wanted to extract only the legitimate non-NA digits from Jenny's number, we can do it as follows:

> messy.vector[!is.na(messy.vector)]

[1] 8 6 7 5 3 0 9

This is a very critical trait of R, so let's take our time understanding it; this idiom will come up again and again throughout this book.

The logical vector that yields TRUE when an NA value occurs in messy.vector (from is.na()) is then negated (the whole thing) by the negation operator,  !. The resultant vector is TRUE whenever the corresponding value in messy.vector is not NA. When this logical vector is used to subset the original messy vector, it only extracts the non-NA values from it.
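Relatedly, the which() function turns such a logical vector into the integer positions of its TRUE elements:

> which(is.na(messy.vector)) # the indices holding NA values

[1] 3 6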

Similarly, we can show all the digits in Jenny's phone number that are greater than five as follows:

> our.vect[our.vect > 5]

[1] 8 6 7 9

Thus far, we've only been displaying elements that have been extracted from a vector. However, just as we've been assigning and reassigning variables, we can assign values to various indices of a vector and change the vector as a result. For example, if Jenny tells us that we have the first digit of her phone number wrong (it's really 9), we can reassign just that element without modifying the others:

> our.vect

[1] 8 6 7 5 3 0 9

> our.vect[1] <- 9
> our.vect

[1] 9 6 7 5 3 0 9

Sometimes, it may be required to replace all the NA values in a vector with the value 0. To do this with our messy vector, we can execute the following command:

> messy.vector[is.na(messy.vector)] <- 0
> messy.vector

[1] 8 6 0 7 5 0 3 0 9

Elegant though the preceding solution is, modifying a vector in place is usually discouraged in favor of creating a copy of the original vector and modifying the copy. One such technique to perform this is using the ifelse() function.

Not to be confused with the if/else control construct, ifelse() is a function that takes three arguments: a test that returns a logical/Boolean value, a value to use if the element passes the test, and one to return if the element fails the test.

The preceding in-place modification solution could be reimplemented with ifelse as follows:

> ifelse(is.na(messy.vector), 0, messy.vector)

[1] 8 6 0 7 5 0 3 0 9

Recycling

The last important property of vectors and vector operations in R is that they can be recycled. To understand what I mean, examine the following expression:

> our.vect + 3

[1] 12 9 10 8 6 3 12

This expression adds three to each digit in Jenny's phone number. Although it may look so, R is not performing this operation between a vector and a single value. Remember when I said that single values are actually vectors of the length 1? What is really happening here is that R is told to perform element-wise addition on a vector of length 7 and a vector of length 1. As element-wise addition is not defined for vectors of differing lengths, R recycles the smaller vector until it reaches the same length as that of the bigger vector. Once both the vectors are the same size, then R, element by element, performs the addition and returns the result:

> our.vect + 3

[1] 12 9 10 8 6 3 12

This is tantamount to the following:

> our.vect + c(3, 3, 3, 3, 3, 3, 3)

[1] 12 9 10 8 6 3 12

If we wanted to extract every other digit from Jenny's phone number, we can do so in the following manner:

> our.vect[c(TRUE, FALSE)]

[1] 9 7 3 9

This works because the vector c(TRUE, FALSE) is repeated until it reaches length 7, making it equivalent to the following:

> our.vect[c(TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE)]

[1] 9 7 3 9

One common snag related to vector recycling that R users (useRs, if I may) encounter is that during some arithmetic operations involving vectors of discrepant length, R will warn you if the smaller vector cannot be repeated a whole number of times to reach the length of the bigger vector. This is not a problem when doing vector arithmetic with single values as 1 can be repeated any number of times to match the length of any vector (which must, of course, be an integer). It would pose a problem, though, if we were looking to add three to every other element in Jenny's phone number:

> our.vect + c(3, 0)

[1] 12  6 10  5  6  0 12
Warning message:
In our.vect + c(3, 0) :
  longer object length is not a multiple of shorter object length

You will likely learn to love these warnings as they have stopped many useRs from making grave errors.
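If you want the repetition to be explicit (and silent), the rep() function lets you spell it out yourself; this sketch reproduces the preceding result without triggering the warning:

> our.vect + rep(c(3, 0), length.out=length(our.vect))

[1] 12  6 10  5  6  0 12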

Before we move on to the next section, an important thing to note is that in a lot of other programming languages, many of the things that we did would have been implemented using for loops and other control structures. Although there is certainly a place for loops and such in R, often a more sophisticated solution exists in using just vector/matrix operations. In addition to elegance and brevity, the solution that exploits vectorization and recycling is often much more efficient.

Functions

If we need to perform some computation that isn't already available as a function in R multiple times, we usually do so by defining our own functions. A custom function in R is defined using the following syntax:

> function.name <- function(argument1, argument2, ...){
+   # some functionality
+ }

For example, if we wanted to write a function that determined if a number supplied as an argument was even, we can do so in the following manner:

> is.even <- function(a.number){
+   remainder <- a.number %% 2
+   if(remainder==0)
+     return(TRUE)
+   return(FALSE)
+ }
> # testing it
> is.even(10)

[1] TRUE

> is.even(9)

[1] FALSE

As an example of a function that takes more than one argument, let's generalize the preceding function by creating a function that determines whether the first argument is divisible by its second argument:

> is.divisible.by <- function(large.number, smaller.number){
+   if(large.number %% smaller.number != 0)
+     return(FALSE)
+   return(TRUE)
+ }
> # testing it
> is.divisible.by(10, 2)

[1] TRUE

> is.divisible.by(10, 3)

[1] FALSE

> is.divisible.by(9, 3)

[1] TRUE

Our function, is.even(), could now be rewritten simply as follows:

> is.even <- function(num){
+   is.divisible.by(num, 2)
+ }

It is very common in R to want to apply a particular function to every element of a vector. Instead of using a loop to iterate over the elements of a vector, as we would do in many other languages, we use a function called sapply() to perform this. sapply() takes a vector and a function as its arguments. It then applies the function to every element and returns a vector of results. We can use sapply() in this manner to find out which digits in Jenny's phone number are even:

> sapply(our.vect, is.even)

[1] FALSE TRUE FALSE FALSE FALSE TRUE FALSE

This worked great because sapply takes each element and uses it as the argument in is.even(), which takes only one argument. If you wanted to find the digits that are divisible by three, it would require a little bit more work.

One option is just to define a function, is.divisible.by.three(), that takes only one argument, and use it in sapply(). The more common solution, however, is to define an unnamed function that does just that in the body of the sapply function call:

> sapply(our.vect, function(num){is.divisible.by(num, 3)})

[1] TRUE TRUE FALSE FALSE TRUE TRUE TRUE

Here, we essentially created a function that checks whether its argument is divisible by three, except we don't assign it to a variable and use it directly in the sapply body instead. These one-time-use unnamed functions are called anonymous functions or lambda functions. (The name comes from Alonzo Church's invention of the lambda calculus, if you were wondering.)

This is somewhat of an advanced usage of R, but it is very useful as it comes up very often in practice.
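As an aside, base R also offers vapply(), which works like sapply() but has you declare the expected type of each result; declaring the type up front turns silent type surprises into immediate errors. A minimal sketch:

> vapply(our.vect, is.even, logical(1))

[1] FALSE  TRUE FALSE FALSE FALSE  TRUE FALSE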

If we wanted to extract the digits in Jenny's phone number that are divisible by both two and three, we can write it as follows:

> where.even <- sapply(our.vect, is.even)
> where.div.3 <- sapply(our.vect, function(num){
+   is.divisible.by(num, 3)})
> # "&" is like the "&&" AND operator, but for vectors
> our.vect[where.even & where.div.3]

[1] 6 0

Neat-O!

Note that if we wanted to be sticklers, we would have a clause in the function bodies to preclude a modulus computation, where the first number was smaller than the second. If we had, our function would not have erroneously indicated that 0 was divisible by two and three. I'm not a stickler, though, so the function will remain as is. Fixing this function is left as an exercise for the (stickler) reader.

Matrices

In addition to the vector data structure, R has the matrix, data frame, list, and array data structures. Though we will be using all of these types (except arrays) in this book, we only need to review the first two in this chapter.

A matrix in R, like in math, is a rectangular array of values (of one type) arranged in rows and columns and can be manipulated as a whole. Operations on matrices are fundamental to data analysis.

One way of creating a matrix is to just supply a vector to the matrix() function:

> a.matrix <- matrix(c(1, 2, 3, 4, 5, 6))
> a.matrix

     [,1]
[1,]    1
[2,]    2
[3,]    3
[4,]    4
[5,]    5
[6,]    6

This produces a matrix with all the supplied values in a single column. We can make a similar matrix with two columns by supplying matrix() with an optional argument, ncol, that specifies the number of columns:

> a.matrix <- matrix(c(1, 2, 3, 4, 5, 6), ncol=2)
> a.matrix

     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

We could have produced the same matrix by binding two vectors, c(1, 2, 3) and c(4, 5, 6), by columns using the cbind() function as follows:

> a2.matrix <- cbind(c(1, 2, 3), c(4, 5, 6))

We could create the transposition of this matrix (where rows and columns are switched) by binding these vectors by row instead:

> a3.matrix <- rbind(c(1, 2, 3), c(4, 5, 6))
> a3.matrix

     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6

We can do this by just using the matrix transposition function in R, t():

> t(a2.matrix)

Some other functions that operate on whole matrices are rowSums()/colSums() and rowMeans()/colMeans():

> a2.matrix

     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

> colSums(a2.matrix)

[1] 6 15

> rowMeans(a2.matrix)

[1] 2.5 3.5 4.5

If vectors have sapply(), then matrices have apply(). The preceding two functions could have been written, more verbosely, as follows:

> apply(a2.matrix, 2, sum)

[1] 6 15

> apply(a2.matrix, 1, mean)

[1] 2.5 3.5 4.5

Here, 1 instructs R to perform the supplied function over its rows, and 2, over its columns.

The matrix multiplication operator in R is %*%:

> a2.matrix %*% a2.matrix

Error in a2.matrix %*% a2.matrix : non-conformable arguments

Remember, matrix multiplication is only defined for matrices where the number of columns in the first matrix is equal to the number of rows in the second:

> a2.matrix

     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

> a3.matrix

     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6

> a2.matrix %*% a3.matrix

     [,1] [,2] [,3]
[1,]   17   22   27
[2,]   22   29   36
[3,]   27   36   45

> # dim() tells us how many rows and columns
> # (respectively) there are in the given matrix
> dim(a2.matrix)

[1] 3 2

To index the element of a matrix at the second row and first column, you need to supply both of these numbers into the subscripting operator:

> a2.matrix[2,1]

[1] 2

Many useRs get confused and forget the order in which the indices must appear; remember, it's row first, then columns!

If you leave one of the spaces empty, R will assume that you want that whole dimension:

> # returns the whole second column
> a2.matrix[,2]

[1] 4 5 6

> # returns the first row > a2.matrix[1,]

[1] 1 4

As always, we can use vectors in our subscript operator:

> # give me element in column 2 at the first and third row
> a2.matrix[c(1, 3), 2]

[1] 4 6

Loading data into R

Thus far, we've only been entering data directly into the interactive R console. For any dataset of non-trivial size, this is, obviously, an intractable solution. Fortunately for us, R has a robust suite of functions to read data directly from external files.

Go ahead and create a file on your hard disk called favorites.txt that looks like this:

flavor,number
pistachio,6
mint chocolate chip,7
vanilla,5
chocolate,10
strawberry,2
neopolitan,4

This data represents the number of students in a class that prefer a particular flavor of soy ice cream. We can read the file into a variable called favs as follows:

> favs <- read.table("favorites.txt", sep=",", header=TRUE)

If you get an error that there is no such file or directory, give R the full path name to your dataset or, alternatively, run the following command:

> favs <- read.table(file.choose(), sep=",", header=TRUE)

The preceding command brings up an open file dialog to let you navigate to the file that you've just created.

The sep="," argument tells R that each data element in a row is separated by a comma. Other common data formats have values separated by tabs and pipes ("|"). The value of sep should then be "\t" or "|", respectively.

The header=TRUE argument tells R that the first row of the file should be interpreted as the names of the columns. Remember, you can enter ?read.table at the console to learn more about these options.

Reading from files in this comma-separated values format (usually with the .csv file extension) is so common that R has a more specific function just for it. The preceding data import expression can be best written simply as follows:

> favs <- read.csv("favorites.txt")

Now, we have all the data in the file held in a variable of the data.frame class. A data frame can be thought of as a rectangular array of data that you might see in a spreadsheet application. In this way, a data frame can also be thought of as a matrix; indeed, we can use matrix-style indexing to extract elements from it. A data frame differs from a matrix, though, in that a data frame may have columns of differing types. For example, whereas a matrix would only allow one of these types, the dataset that we just loaded contains character data in its first column and numeric data in its second column.

Let's check out what we have using the head() command, which will show us the first few lines of a data frame:

> head(favs)

               flavor number
1           pistachio      6
2 mint chocolate chip      7
3             vanilla      5
4           chocolate     10
5          strawberry      2
6          neopolitan      4

> class(favs)

[1] "data.frame"

> class(favs$flavor)

[1] "factor"

> class(favs$number)

[1] "numeric"

I lied, okay! So what?! Technically, flavor is a factor data type, not a character type.

We haven't seen factors yet, but the idea behind them is really simple. Essentially, factors are codings for categorical variables, which are variables that take on one of a finite number of categories--think {"high", "medium", and "low"} or {"control", "experimental"}.
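You can construct one directly with the factor() function; a minimal sketch:

> factor(c("low", "high", "medium", "low"), levels=c("low", "medium", "high"))

[1] low    high   medium low
Levels: low medium high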

Though factors are extremely useful in statistical modeling in R, the fact that R, by default, automatically interprets a column from the data read from disk as a type factor if it contains characters is something that trips up novices and seasoned useRs alike. Due to this, we will primarily prevent this behavior manually by adding the stringsAsFactors optional keyword argument to the read.* commands:

> favs <- read.csv("favorites.txt", stringsAsFactors=FALSE)
> class(favs$flavor)

[1] "character"

Much better, for now! If you'd like to make this behavior the new default, read the ?options manual page. We can always convert to factors later on if we need to!
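For instance, for the version of R this book targets, setting the option once per session makes this the default for subsequent read.* calls:

> options(stringsAsFactors=FALSE)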

If you haven't noticed already, I've snuck a new operator on you--$, the extract operator. This is the most popular way to extract attributes (or columns) from a data frame. You can also use double square brackets ([[ and ]]) to do this.

These are both in addition to the canonical matrix indexing option. The following three statements are thus, in this context, functionally identical:

> favs$flavor

[1] "pistachio" "mint chocolate chip" "vanilla" [4] "chocolate" "strawberry" "neopolitan"

> favs[["flavor"]]

[1] "pistachio" "mint chocolate chip" "vanilla" [4] "chocolate" "strawberry" "neopolitan"

> favs[,1]

[1] "pistachio" "mint chocolate chip" "vanilla" [4] "chocolate" "strawberry" "neopolitan"

Notice how R has now printed another number in square brackets--besides [1]--along with our output. This is to show us that chocolate is the fourth element of the vector that was returned from the extraction.

You can use the names() function to get a list of the columns available in a data frame. You can even reassign names using the same function:

> names(favs)

[1] "flavor" "number"

> names(favs)[1] <- "flav"
> names(favs)

[1] "flav" "number"

Lastly, we can get a compact display of the structure of a data frame using the str() function on it:

> str(favs)

'data.frame': 6 obs. of  2 variables:
 $ flav  : chr  "pistachio" "mint chocolate chip" "vanilla" "chocolate" ...
 $ number: num  6 7 5 10 2 4

Actually, you can use this function on any R structure--the property of functions that change their behavior based on the type of input is called polymorphism.
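For instance, the very same str() call gives a compact summary of a plain vector:

> str(our.vect)

 num [1:7] 9 6 7 5 3 0 9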

Working with packages