Description

In this information era, where large volumes of data are being generated every day, companies want to get a better grip on it to perform more efficiently than before. This is where skillful data analysts and data scientists come into play, wrangling and exploring data to generate valuable business insights. In order to do that, you’ll need plenty of tools that enable you to extract the most useful knowledge from data.
Data Wrangling with R will help you to gain a deep understanding of ways to wrangle and prepare datasets for exploration, analysis, and modeling. This data book enables you to get your data ready for more optimized analyses, develop your first data model, and perform effective data visualization.
The book begins by teaching you how to load and explore datasets. Then, you’ll get to grips with the modern concepts and tools of data wrangling. As data wrangling and visualization are intrinsically connected, you’ll go over best practices to plot data and extract insights from it. The chapters are designed in a way to help you learn all about modeling, as you will go through the construction of a data science project from end to end, and become familiar with the built-in RStudio, including an application built with Shiny dashboards.
By the end of this book, you’ll have learned how to create your first data model and build an application with Shiny in R.




Data Wrangling with R

Load, explore, transform and visualize data for modeling with tidyverse libraries

Gustavo R Santos

BIRMINGHAM—MUMBAI

Data Wrangling with R

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author(s), nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Reshma Raman

Publishing Product Manager: Apeksha Shetty

Senior Editor: Sushma Reddy

Technical Editor: Rahul Limbachiya

Copy Editor: Safis Editing

Project Coordinator: Farheen Fathima

Proofreader: Safis Editing

Indexer: Tejal Daruwale Soni

Production Designer: Nilesh Mohite

Marketing Coordinator: Nivedita Singh

First published: January 2023

Production reference: 1310123

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80323-540-0

www.packtpub.com

To my wife, Roxane, my other half.

To my children, Maria Fernanda and Marina,

who remind me every day how much I still need to learn.

- Gustavo R Santos

Contributors

About the author

Gustavo R Santos has worked in the technology industry for 13 years, improving processes, analyzing datasets, and creating dashboards. Since 2020, he has been working as a data scientist in the retail industry, wrangling, analyzing, visualizing, and modeling data with the most modern tools such as R, Python, and Databricks. Gustavo also gives lectures from time to time at an online school about data science concepts. He has a background in marketing, is certified as a data scientist by the Data Science Academy, Brazil, and pursues his specialist MBA in data science at the University of São Paulo.

About the reviewers

Chirag Subramanian is an experienced senior data scientist with more than six years of full-time work experience in data science and analytics applied to catastrophe modeling, insurance, travel and tourism, and healthcare industries. Chirag currently works as a senior data scientist within the global insights department at Walgreens Boots Alliance, a Fortune 16 company. Chirag holds a master of science degree in operations research from Northeastern University, Boston. He is currently pursuing his second master of science degree in computational data analytics from Georgia Tech, a top-10 globally ranked school in statistics and operational research by QS World University Rankings. His hobbies include watching cricket, playing table tennis, and writing poetry.

Love Tyagi is the director of data science at one of the top biotech companies in the USA. He completed his bachelor’s in engineering from Delhi, India, and pursued his master’s in data science from George Washington University, USA. Love enjoys building solutions around data analytics, machine learning, and performing statistical analysis. He is, at the core, a data person, and wants to keep improving the way people learn about data, algorithms, and the maths behind it. And that’s why, whenever he has time, he likes to review books, papers, and blogs.

When not working, he enjoys playing cricket and soccer and researching new trends in cooking, as he is also a part-time chef at home. He also loves teaching and someday wants to go back to teaching and start his own institution for learning AI in the most practical and basic way possible.

Table of Contents

Preface

Part 1: Load and Explore Data

1

Fundamentals of Data Wrangling

What is data wrangling?

Why data wrangling?

Benefits

The key steps of data wrangling

Frameworks in Data Science

Summary

Exercises

Further reading

2

Loading and Exploring Datasets

Technical requirements

How to load files to RStudio

Loading a CSV file to R

Tibbles versus Data Frames

Saving files

A workflow for data exploration

Loading and viewing

Descriptive statistics

Missing values

Data distributions

Visualizations

Basic Web Scraping

Getting data from an API

Summary

Exercises

Further reading

3

Basic Data Visualization

Technical requirements

Data visualization

Creating single-variable plots

Dataset

Boxplots

Density plot

Creating two-variable plots

Scatterplot

Bar plot

Line plot

Working with multiple variables

Plots side by side

Summary

Exercises

Further reading

Part 2: Data Wrangling

4

Working with Strings

Introduction to stringr

Detecting patterns

Subset strings

Managing lengths

Mutating strings

Joining and splitting

Ordering strings

Working with regular expressions

Learning the basics

Creating frequency data summaries in R

Regexps in practice

Creating a contingency table using gmodels

Text mining

Tokenization

Stemming and lemmatization

TF-IDF

N-grams

Factors

Summary

Exercises

Further reading

5

Working with Numbers

Technical requirements

Numbers in vectors, matrices, and data frames

Vectors

Matrices

Data frames

Math operations with variables

apply functions

Descriptive statistics

Correlation

Summary

Exercises

Further reading

6

Working with Date and Time Objects

Technical requirements

Introduction to date and time

Date and time with lubridate

Arithmetic operations with datetime

Time zones

Date and time using regular expressions (regexps)

Practicing

Summary

Exercises

Further reading

7

Transformations with Base R

Technical requirements

The dataset

Slicing and filtering

Slicing

Filtering

Grouping and summarizing

Replacing and filling

Arranging

Creating new variables

Binding

Using data.table

Summary

Exercises

Further reading

8

Transformations with Tidyverse Libraries

Technical requirements

What is tidy data

The pipe operator

Slicing and filtering

Slicing

Filtering

Grouping and summarizing data

Replacing and filling data

Arranging data

Creating new variables

The mutate function

Joining datasets

Left Join

Right join

Inner join

Full join

Anti-join

Reshaping a table

Do more with tidyverse

Summary

Exercises

Further reading

9

Exploratory Data Analysis

Technical requirements

Loading the dataset to RStudio

Understanding the data

Treating missing data

Exploring and visualizing the data

Univariate analysis

Multivariate analysis

Exploring

Analysis report

Report

Next steps

Summary

Exercises

Further reading

Part 3: Data Visualization

10

Introduction to ggplot2

Technical requirements

The grammar of graphics

Data

Geometry

Aesthetics

Statistics

Coordinates

Facets

Themes

The basic syntax of ggplot2

Plot types

Histograms

Boxplot

Scatterplot

Bar plots

Line plots

Smooth geometry

Themes

Summary

Exercises

Further reading

11

Enhanced Visualizations with ggplot2

Technical requirements

Facet grids

Map plots

Time series plots

3D plots

Adding interactivity to graphics

Summary

Exercises

Further reading

12

Other Data Visualization Options

Technical requirements

Plotting graphics in Microsoft Power BI using R

Preparing data for plotting

Creating word clouds in RStudio

Summary

Exercises

Further reading

Part 4: Modeling

13

Building a Model with R

Technical requirements

Machine learning concepts

Classification models

Regression models

Supervised and unsupervised learning

Understanding the project

The dataset

The project

The algorithm

Preparing data for modeling in R

Exploring the data with a few visualizations

Selecting the best variables

Modeling

Training

Testing and evaluating the model

Predicting

Summary

Exercises

Further reading

14

Build an Application with Shiny in R

Technical requirements

Learning the basics of Shiny

Get started

Basic functions

Creating an application

The project

Coding

Deploying the application on the web

Summary

Exercises

Further reading

Conclusion

References

Index

Other Books You May Enjoy

Part 1: Load and Explore Data

This part includes the following chapters:

Chapter 1, Fundamentals of Data Wrangling
Chapter 2, Load and Explore Datasets
Chapter 3, Basic Data Visualization

1

Fundamentals of Data Wrangling

The relationship between humans and data is age old. Knowing that our brains can capture and store only a limited amount of information, we had to create ways to keep and organize data.

The first idea of keeping and storing data goes back to 19000 BC (as stated in https://www.thinkautomation.com/histories/the-history-of-data/) when a bone stick is believed to have been used to count things and keep information engraved on it, serving as a tally stick. Since then, words, writing, numbers, and many other forms of data collection have been developed and evolved.

In 1663, John Graunt performed one of the first recognized data analyses, studying births and deaths by gender in the city of London, England.

In 1928, Fritz Pfleumer received the patent for magnetic tapes, a solution to store sound that enabled other researchers to create many of the storage technologies that are still used, such as hard disk drives.

Fast forward to the modern world and the beginning of the computer age: in the 1970s, IBM researchers Raymond Boyce and Donald Chamberlin created the Structured Query Language (SQL) for accessing and modifying data held in databases. The language is still used and, as a matter of fact, many data-wrangling concepts come from it. Concepts such as SELECT, WHERE, GROUP BY, and JOIN are heavily present in any work you perform with datasets. Therefore, a little knowledge of those basic commands might help you throughout this book, although it is not mandatory.
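For readers coming from SQL, the following is only a minimal sketch of how those concepts map onto R. It assumes the dplyr package and R's built-in mtcars dataset purely for illustration; the tidyverse verbs themselves are covered in detail in Part 2 of this book.

# A rough mapping of familiar SQL ideas onto dplyr verbs (illustrative only)
library(dplyr)

mtcars %>%
  select(mpg, cyl, hp) %>%          # SELECT: keep only the columns of interest
  filter(hp > 100) %>%              # WHERE: keep rows that match a condition
  group_by(cyl) %>%                 # GROUP BY: form groups by number of cylinders
  summarise(avg_mpg = mean(mpg))    # aggregate each group, much like SQL's AVG()

# JOINs also have direct counterparts, such as left_join() and inner_join(),
# which are covered in Chapter 8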

In this chapter, we will cover the following main topics:

What is data wrangling?
Why data wrangling?
The key steps of data wrangling

What is data wrangling?

Data wrangling is the process of modifying, cleaning, organizing, and transforming data from one given state to another, with the objective of making it more appropriate for use in analytics and data science.

This concept is also referred to as data munging, and both words are related to the act of changing, manipulating, transforming, and incrementing your dataset.

I bet you’ve already performed data wrangling. It is a common task for all of us. Since our primary school years, we have been taught how to create a table and make counts to organize people’s opinions in a dataset. If you are familiar with MS Excel or similar tools, remember all the times you have sorted, filtered, or added columns to a table, not to mention all of those lookups that you may have performed. All of that is part of the data-wrangling process. Every task performed to somehow improve the data and make it more suitable for analysis can be considered data wrangling.

As a data scientist, you will constantly be provided with different kinds of data, with the mission of transforming the dataset into insights that will, consequentially, form the basis for business decisions. Unlike a few years ago, when the majority of data was presented in a structured form such as text or tables, nowadays, data can come in many other forms, including unstructured formats such as video, audio, or even a combination of those. Thus, it becomes clear that most of the time, data will not be presented ready to work and will require some effort to get it in a ready state, sometimes more than others.

Figure 1.1 – Data before and after wrangling

Figure 1.1 is a visual representation of data wrangling. On the left-hand side, we see three kinds of data points combined; after sorting and tabulating, the data becomes much easier to analyze.

A wrangled dataset is easier to understand and to work with, creating the path to better analysis and modeling, as we shall see in the next section, where we will learn why data wrangling is important to a data science project.

Why data wrangling?

Now you know what data wrangling means, and I am sure that you share the same view as me that this is a tremendously important subject – otherwise, I don’t think you would be reading this book.

In statistics and data science, there is a frequently repeated phrase: garbage in, garbage out. This popular saying captures the central idea behind the importance of wrangling data, because it teaches us that our analysis, or even our model, will only be as good as the data that we present to it. You could also use the weakest-link-in-the-chain analogy to describe that importance, meaning that if your data is weak, the rest of the analysis could easily be broken by questions and arguments.

Let me give you a naïve example, but one that is still very precise, to illustrate my point. If we receive a dataset like the one in Figure 1.2, everything looks right at first glance. There are city names and temperatures, and it is a common format used to present data. However, for data science, this data may not be ideal for use just yet.

Figure 1.2 – Temperatures for cities

Notice that all the columns refer to the same variable, which is Temperature. We would have trouble plotting simple graphics in R with a dataset presented as in Figure 1.2, as well as using it for modeling.

In this case, a simple transformation of the table from wide to long format would be enough to complete the data-wrangling task.

Figure 1.3 – Dataset ready for use

At first glance, Figure 1.2 might appear to be the better-looking option. And, in fact, it is for human eyes. The presentation of the dataset in Figure 1.2 makes it much easier for us to compare values and draw conclusions. However, we must not forget that we are dealing with computers, and machines don’t process data the same way humans do. To a computer, Figure 1.2 has seven variables: City, Jan, Feb, Mar, Apr, May, and Jun, while Figure 1.3 has only three: City, Month, and Temperature.

Now comes the fun part; let’s compare how a computer would receive both sets of data. A command to plot the temperature timeline by city for Figure 1.2 would be as follows: Computer, take a city and the temperatures during the months of Jan, Feb, Mar, Apr, May, and Jun in that city. Then consider each of the names of the months as a point on the x axis and the temperature associated as a point on the y axis. Plot a line for the temperature throughout the months for each of the cities.

Figure 1.3 is much clearer to the computer. It does not need to separate anything. The dataset is ready, so look how the command would be given: Computer, for each city, plot the month on the x axis and the temperature on the y axis.

Much simpler, agree? That is the importance of data wrangling for Data Science.
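To make this concrete, here is a minimal sketch of the wide-to-long transformation described above. The small data frame is a hypothetical stand-in for the table in Figure 1.2 (the actual values are not reproduced here), and the reshaping uses tidyr's pivot_longer(), one common way of performing this step; reshaping tables is covered properly in Chapter 8.

library(tidyr)
library(ggplot2)

# Hypothetical wide table: one column per month, as in Figure 1.2
temps_wide <- data.frame(
  City = c("City A", "City B"),
  Jan  = c(20, 5),
  Feb  = c(22, 7),
  Mar  = c(25, 10)
)

# Wide to long: the month names become values of a Month variable and the
# temperatures collapse into a single Temperature column, as in Figure 1.3
temps_long <- pivot_longer(temps_wide,
                           cols = Jan:Mar,
                           names_to = "Month",
                           values_to = "Temperature")

# With the long format, the plotting instruction is as simple as described:
# for each city, the month goes on the x axis and the temperature on the y axis
# (in practice, Month would be converted to an ordered factor to keep calendar order)
ggplot(temps_long, aes(x = Month, y = Temperature, color = City, group = City)) +
  geom_line()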

Benefits

Performing good data wrangling will improve the overall quality of the entire analysis process. Here are the benefits:

Structured data: Your data will be organized and easily understandable by other data scientists.

Faster results: If the data is already in a usable state, creating plots or using it as input to an algorithm will certainly be faster.

Better data flow: To be able to use the data for modeling or for a dashboard, it needs to be properly formatted and cleaned. Good data wrangling enables the data to flow to the next steps of the process, making data pipelines and automation possible.

Aggregation: As we saw in the example in the previous section, the data must be in a suitable format for the computer to understand. Having well-wrangled datasets will help you aggregate them quickly for insight extraction.

Data quality: Data wrangling is about transforming the data to a ready state. During this process, you will clean, aggregate, filter, and sort it accordingly, visualize the data, assess its quality, deal with outliers, and identify faulty or incomplete data.

Data enriching: During wrangling, you might be able to enrich the data by creating new variables out of the original ones or joining other datasets to make your data more complete (aggregation and enrichment are illustrated in a short sketch below).

Every project, whether related to Data Science or not, can benefit from data wrangling. As we just listed, it brings many benefits to the analysis, ultimately impacting the quality of the deliverables. But to get the best from it, there are steps to follow.
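As a small sketch of the last two benefits listed above, aggregation and enrichment, the snippet below creates a new variable from an existing one and joins a second, hypothetical lookup table. The data frames and column names are made up for illustration only, and the verbs themselves (mutate, left_join, group_by, summarise) are covered in Chapter 8.

library(dplyr)

# Hypothetical sales data and a lookup table with extra information per city
sales  <- data.frame(City = c("A", "B", "A"), Amount = c(100, 250, 80))
cities <- data.frame(City = c("A", "B"), Region = c("North", "South"))

enriched <- sales %>%
  mutate(AmountWithTax = Amount * 1.1) %>%   # enriching: a new variable from an existing one
  left_join(cities, by = "City")             # enriching: joining another dataset

# Aggregation: total amount per region
enriched %>%
  group_by(Region) %>%
  summarise(Total = sum(Amount))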

The key steps of data wrangling

There are some basic steps to help data scientists and analysts to work through the data-wrangling part of the process. Naturally, once you first see a dataset, it is important to understand it, then organize, clean, enrich, and validate it before using it as input for a model.

Figure 1.4 – Steps of data wrangling

Understand: The first step to take once we get our hands on new data is to understand it. Take some time to read the data dictionary, which is a document with the descriptions of the variables, if available, or talk to the owner(s) of the data to really understand what each data point represents and how they do or do not connect to your main purpose and to the business questions you are trying to answer. This will make the following steps clearer.

Format: Step two is to format or organize the data. Raw data may come unstructured or unformatted in a way that is not usable. Therefore, it is important to be familiar with the tidy format. Tidy data is a concept developed by Hadley Wickham in 2014 in a paper of the same name, Tidy Data (The Journal of Statistical Software, vol. 59, 2014), where he presents a standard method to organize and structure datasets, making the cleaning and exploration steps easier. Another benefit is facilitating the transfer of the dataset between different tools that use the same format. Currently, the tidy data concept is widely accepted, which helps you focus on the analysis instead of munging the dataset every time you need to move it down the pipeline.

Tidy data standardizes the way the structure of the data is linked to its semantics; in other words, how the layout is linked with the meaning of the values. More specifically, the structure refers to the rows and columns, which can be labeled. Most of the time, the columns are labeled, but the rows are not. The semantics, on the other hand, is about every value being related to a variable and an observation. In a tidy dataset, each variable is a column that holds all the values for one attribute, and each row is associated with one observation. Take the dataset extract in Figure 1.5 as an example. Looking at the horsepower column, we see values such as 110, 110, 93, and 110 for four different cars. Looking at the observation level, each row is one observation, with one value for each attribute or variable, so a car could be associated with HP=110, 6 cylinders, 21 miles per gallon, and so on.

Figure 1.5 – Tidy data. Each row is one observation; each column is a variable

According to Wickham (https://tinyurl.com/2dh75y56), here are the three rules of tidy data:

Every column is a variable
Every row is an observation
Every cell is a single value

(A short illustration of these rules, using a built-in R dataset, follows this list.)

Clean: This step is relevant to determine the overall quality of the data. There are many forms of data cleaning, such as splitting, parsing variables, handling missing values, dealing with outliers, and removing erroneous entries.

Enrich: As you work through the data-wrangling steps and become more familiar with the data, questions will arise and, sometimes, more data will be needed. That can be solved by either joining another dataset to the original one to bring new variables or creating new ones using those you have.

Validate: To validate is to make sure that the cleaning, formatting, and transformations are all in place and the data is ready for modeling or other analysis.

Analysis/Model: Once everything is complete, your dataset is now ready for use in the next phases of the project, such as the creation of a dashboard or modeling.
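As a quick illustration of the three rules of tidy data, here is a short sketch using R's built-in mtcars dataset, which appears to be the source of the extract shown in Figure 1.5 (an assumption based on the values mentioned above):

# Each row is one observation (a car model), each column is a variable,
# and each cell holds a single value - the three rules of tidy data
head(mtcars[, c("mpg", "cyl", "hp")], 4)
#                 mpg cyl  hp
# Mazda RX4      21.0   6 110
# Mazda RX4 Wag  21.0   6 110
# Datsun 710     22.8   4  93
# Hornet 4 Drive 21.4   6 110

Each printed row corresponds to one car, and, as in the text, the first car is associated with 21 miles per gallon, 6 cylinders, and HP=110.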

As with every process, we must follow steps to reach the best performance and be able to standardize our efforts and allow them to be reproduced and scaled if needed. Next, we will look at three frameworks for Data Science projects that help to make a process easy to follow and reproduce.

Frameworks in Data Science

Data Science is no different from other sciences, and it also follows some common steps. Ergo, frameworks can be designed to guide people through the process, as well as to help implement a standardized process in a company.

It is important that a Data Scientist has a holistic understanding of the flow of the data, from the moment of acquisition to the end point, since the resulting business knowledge is what will support decisions.

In this section, we will take a closer look at three well-known frameworks that can be used for Data Science projects: KDD, SEMMA, and CRISP-DM. Let's get to know more about them.

KDD

KDD stands for Knowledge Discovery in Databases. It is a framework to extract knowledge from data in the context of large databases.

Figure 1.6 – KDD process

The process is iterative and follows these steps:

Data: Acquiring the data from a database
Selection: Creating a representative target set that is a subset of the data, with selected variables or samples of interest
Preprocessing: Data cleaning and preprocessing to remove outliers and handle missing and noisy data
Transformation: Transforming and using dimensionality reduction to format the data
Data Mining: Using algorithms to analyze and search for patterns of interest (for example, classification and clustering)
Interpretation/Evaluation: Interpreting and evaluating the mined patterns

After the evaluation, if the results are not satisfactory, the process can be repeated with enhancements such as more data, a different subset, or a tweaked algorithm.

SEMMA

SEMMA stands for Sample, Explore, Modify, Model, and Assess. These are the steps of the process.

Figure 1.7 – SEMMA process

SEMMA is a cyclic process that flows more naturally with Data Science. It does not contain stages like KDD. The steps are as follows:

Sample: Based on statistics, it requires a sample large enough to be representative but small enough to be quick to work with
Explore: During this step, the goal is to understand the data and generate visualizations and descriptive statistics, looking for patterns and anomalies
Modify: Here is where data wrangling plays a more intensive role, where the transformations occur to make the data ready for modeling
Model: This step is where algorithms are used to generate estimates, predictions, or insights from the data
Assess: Evaluate the results

CRISP-DM

The acronym for this framework means Cross-Industry Standard Process for Data Mining. It provides the data scientist with the typical phases of the project and also an overview of the data mining life cycle.

Figure 1.8 – CRISP-DM life cycle

The CRISP-DM life cycle has six phases, with the arrows indicating the dependencies between each one of them, but the key point here is that there is not a strict order to follow. The project can move back and forth during the process, making it a flexible framework. Let’s go through the steps:

Business understanding: Like the other two frameworks presented, it all starts with understanding the problem, the business. Understanding the business rules and specificities is often even more important than getting to the solution fast. That is because a solution may not be ideal for that kind of business. The business rules must always drive the solution.

Data understanding: This involves collecting and exploring the data. Make sure the data collected is representative of the whole and get familiar with it to be able to find errors, faulty data, and missing values and to assess quality. All these tasks are part of data understanding.

Data preparation: Once you are familiar with the data collected, it is time to wrangle it and prepare it for modeling.

Modeling: This involves applying Data Science algorithms or performing the desired analysis on the processed data.

Evaluation: This step is used to assess whether the solution is aligned with the business requirement and whether it is performing well.

Deployment: In this step, the model reaches its purpose (for example, an application that predicts a group or a value, a dashboard, and so on).

These three frameworks have a lot in common if you look closely. They start with understanding the data, go over data wrangling with cleaning and transforming, then move on to the modeling phase, and end with the evaluation of the model, usually working in iterations to assess flaws and improve the results.

Summary

In this chapter, we learned a little about the history of data wrangling and became familiar with its definition. Every task performed in order to transform or enhance the data and to make it ready for analysis and modeling is what we call data wrangling or data munging.

We also discussed some topics stating the importance of wrangling data before modeling it. A model is a simplified representation of reality, and an algorithm is like a student that needs to understand that reality to give us the best answer about the subject matter. If we teach this student with bad data, we cannot expect to receive a good answer. A model is as good as its input data.

Continuing further in the chapter, we reviewed the benefits of data wrangling, proving that we can improve the quality of our data, resulting in faster results and better outcomes.

In the final sections, we reviewed the basic steps of data wrangling and learned more about three of the most commonly used frameworks for Data Science – KDD, SEMMA, and CRISP-DM. I recommend that you review more information about them to have a holistic view of the life cycle of a Data Science project.

Now, it is important to note how these three frameworks preach the selection of a representative dataset or subset of data. A nice example is given by Aurélien Géron (Hands-on Machine Learning with Scikit-Learn, Keras and TensorFlow, 2nd edition, 2019, pages 32-33). Suppose you want to build an app that takes pictures of flowers and recognizes and classifies them. You could go to the internet and download thousands of pictures; however, they will probably not be representative of the kind of pictures that your model will receive from the app users. Ergo, the model could underperform. This example illustrates the garbage in, garbage out idea: if you don't explore and understand your data thoroughly, you won't know whether it is good enough for modeling.

The frameworks can lead the way, like a map, to explore, understand, and wrangle the data and to make it ready for modeling, decreasing the risk of having a frustrating outcome.

In the next chapter, let’s get our hands on R and start coding.

Exercises

What is data wrangling?
Why is data wrangling important?
What are the steps for data wrangling?
List three Data Science frameworks.

Further reading

Hadley Wickham – Tidy Data: https://tinyurl.com/2dh75y56
What is data wrangling?: https://tinyurl.com/2p93juzn
Data Science methodologies: https://tinyurl.com/2ucxdcch