Statistics for Data Science

James D. Miller

Description

Get your statistics basics right before diving into the world of data science

About This Book

  • No need for a degree in statistics: read this book to build a strong statistical base for data science and real-world programs
  • Implement statistics in data science tasks such as data cleaning, mining, and analysis
  • Learn all about probability, statistics, numerical computations, and more with the help of R programs

Who This Book Is For

This book is intended for developers who want to enter the field of data science and are looking for concise information on statistics, supported by insightful programs and simple explanations. Some basic hands-on experience with R will be useful.

What You Will Learn

  • Analyze the transition from a data developer to a data scientist mindset
  • Get acquainted with the R programs and the logic used for statistical computations
  • Understand mathematical concepts such as variance, standard deviation, probability, matrix calculations, and more
  • Learn to implement statistics in data science tasks such as data cleaning, mining, and analysis
  • Learn the statistical techniques required to perform tasks such as linear regression, regularization, model assessment, boosting, SVMs, and working with neural networks
  • Get comfortable with performing various statistical computations for data science programmatically

In Detail

Data science is an ever-evolving field that is growing in popularity at an exponential rate. Data science includes techniques and theories drawn from the fields of statistics, computer science, and, most importantly, machine learning, as well as databases, data visualization, and so on.

This book takes you through an entire journey of statistics, from knowing very little to becoming comfortable in using various statistical methods for data science tasks. It starts off with simple statistics and then moves on to statistical methods that are used in data science algorithms. The R programs for statistical computation are clearly explained along with their underlying logic. You will come across various mathematical concepts, such as variance, standard deviation, probability, matrix calculations, and more. You will learn only what is required to implement statistics in data science tasks such as data cleaning, mining, and analysis. You will learn the statistical techniques required to perform tasks such as linear regression, regularization, model assessment, boosting, SVMs, and working with neural networks.

By the end of the book, you will be comfortable with performing various statistical computations for data science programmatically.

Style and approach

A step-by-step, comprehensive guide with real-world examples




Statistics for Data Science

Leverage the power of statistics for Data Analysis, Classification, Regression, Machine Learning, and Neural Networks

James D. Miller

BIRMINGHAM - MUMBAI

Statistics for Data Science

Copyright © 2017 Packt Publishing

 

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

 

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

 

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

First published: November 2017

 

Production reference: 1151117

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-78829-067-8

 

www.packtpub.com

Credits

Author

James D. Miller

Copy Editor

Tasneem Fatehi

Reviewer

James C. Mott

Project Coordinator

Manthan Patel

Commissioning Editor

Veena Pagare

Proofreader

Safis Editing

Acquisition Editor

Tushar Gupta

Indexer

Aishwarya Gangawane

Content Development Editor

Snehal Kolte

Graphics

Tania Dutta

Technical Editor

Sayli Nikalje

Production Coordinator

Deepika Naik

About the Author

James D. Miller is an IBM certified expert, creative innovator, and accomplished director, senior project leader, and application/system architect with 35+ years of extensive application and system design and development experience across multiple platforms and technologies. His experience includes introducing customers to new and sometimes disruptive technologies and platforms; integrating with IBM Watson Analytics, Cognos BI, and TM1; web architecture design; systems analysis; GUI design and testing; database modeling; and the design and development of OLAP, client/server, web, and mainframe applications and systems utilizing IBM Watson Analytics, IBM Cognos BI and TM1 (TM1 rules, TI, TM1Web, and Planning Manager), Cognos Framework Manager, dynaSight-ArcPlan, ASP, DHTML, XML, IIS, MS Visual Basic and VBA, Visual Studio, PERL, SPLUNK, WebSuite, MS SQL Server, ORACLE, SYBASE Server, and so on.

His responsibilities have also included all aspects of Windows and SQL solution development and design, including analysis; GUI (and website) design; data modeling; table, screen/form, and script development; SQL (and remote stored procedure and trigger) development and testing; test preparation; and the management and training of programming staff. Other experience includes the development of Extract, Transform, and Load (ETL) infrastructure, such as data transfer automation between mainframe systems (DB2, Lawson, Great Plains, and so on) and client/server SQL Server and web-based applications, and the integration of enterprise applications and data sources.

Mr. Miller has acted as an Internet Applications Development Manager responsible for the design, development, QA, and delivery of multiple websites, including online trading applications, warehouse process control and scheduling systems, and administrative and control applications. He was also responsible for the design, development, and administration of a web-based financial reporting system for a 450-million-dollar organization, reporting directly to the CFO and his executive team.

He has also been responsible for managing and directing multiple resources in various management roles including project and team leader, lead developer and applications development director.

He has authored the following books published by Packt:

Mastering Predictive Analytics with R – Second Edition 

Big Data Visualization 

Learning IBM Watson Analytics 

Implementing Splunk – Second Edition 

Mastering Splunk 

IBM Cognos TM1 Developer's Certification Guide 

He has also authored a number of whitepapers on best practices such as Establishing a Center of Excellence and continues to post blogs on a number of relevant topics based on personal experiences and industry best practices. 

He is a perpetual learner, continuing to pursue new experiences and certifications, and currently holds the following technical certifications:

IBM Certified Developer Cognos TM1

IBM Certified Analyst Cognos TM1

IBM Certified Administrator Cognos TM1

IBM Cognos TM1 Master 385 Certification

IBM Certified Advanced Solution Expert Cognos TM1

IBM OpenPages Developer Fundamentals C2020-001-ENU

IBM Cognos 10 BI Administrator C2020-622

IBM Cognos 10 BI Author C2090-620-ENU

IBM Cognos BI Professional C2090-180-ENU

IBM Cognos 10 BI Metadata Model Developer C2090-632

IBM Certified Solution Expert - Cognos BI

Specialties: the evaluation and introduction of innovative and disruptive technologies; cloud migration; IBM Watson Analytics; big data; data visualization; Cognos BI and TM1 application design and development; OLAP; Visual Basic; SQL Server; forecasting and planning; international application development; business intelligence; project development and delivery; and process improvement.

To Nanette L. Miller: "Like a river flows surely to the sea, darling so it goes, some things are meant to be."

 

About the Reviewer

James Mott, PhD, is a senior education consultant with extensive experience in teaching statistical analysis, modeling, data mining, and predictive analytics. He has over 30 years of experience using SPSS products in his own research, including IBM SPSS Statistics, IBM SPSS Modeler, and IBM SPSS Amos, and has been actively teaching these products to IBM/SPSS customers for over 30 years. In addition, he is an experienced historian with expertise in the research and teaching of 20th-century United States political history and quantitative methods. His specialties are data mining, quantitative methods, statistical analysis, teaching, and consulting.

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1788290674. If you'd like to join our team of regular reviewers, you can email us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Table of Contents

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

Transitioning from Data Developer to Data Scientist

Data developer thinking

Objectives of a data developer

Querying or mining

Data quality or data cleansing

Data modeling

Issue or insights

Thought process

Developer versus scientist

New data, new source

Quality questions

Querying and mining

Performance

Financial reporting

Visualizing

Tools of the trade

Advantages of thinking like a data scientist

Developing a better approach to understanding data

Using statistical thinking during program or database designing

Adding to your personal toolbox

Increased marketability

Perpetual learning

Seeing the future

Transitioning to a data scientist

Let's move ahead

Summary

Declaring the Objectives

Key objectives of data science

Collecting data

Processing data

Exploring and visualizing data

Analyzing the data and/or applying machine learning to the data

Deciding (or planning) based upon acquired insight

Thinking like a data scientist

Bringing statistics into data science

Common terminology

Statistical population

Probability

False positives

Statistical inference

Regression

Fitting

Categorical data

Classification

Clustering

Statistical comparison

Coding

Distributions

Data mining

Decision trees

Machine learning

Munging and wrangling

Visualization

D3

Regularization

Assessment

Cross-validation

Neural networks

Boosting

Lift

Mode

Outlier

Predictive modeling

Big Data

Confidence interval

Writing

Summary

A Developer's Approach to Data Cleaning

Understanding basic data cleaning

Common data issues

Contextual data issues

Cleaning techniques

R and common data issues

Outliers

Step 1 – Profiling the data

Step 2 – Addressing the outliers

Domain expertise

Validity checking

Enhancing data

Harmonization

Standardization

Transformations

Deductive correction

Deterministic imputation

Summary

Data Mining and the Database Developer

Data mining

Common techniques

Visualization

Cluster analysis

Correlation analysis

Discriminant analysis

Factor analysis

Regression analysis

Logistic analysis

Purpose

Mining versus querying

Choosing R for data mining

Visualizations

Current smokers

Missing values

A cluster analysis

Dimensional reduction

Calculating statistical significance

Frequent patterning

Frequent item-setting

Sequence mining

Summary

Statistical Analysis for the Database Developer

Data analysis

Looking closer

Statistical analysis

Summarization

Comparing groups

Samples

Group comparison conclusions

Summarization modeling

Establishing the nature of data

Successful statistical analysis

R and statistical analysis

Summary

Database Progression to Database Regression

Introducing statistical regression

Techniques and approaches for regression

Choosing your technique

Does it fit?

Identifying opportunities for statistical regression

Summarizing data

Exploring relationships

Testing significance of differences

Project profitability

R and statistical regression

A working example

Establishing the data profile

The graphical analysis

Predicting with our linear model

Step 1: Chunking the data

Step 2: Creating the model on the training data

Step 3: Predicting the projected profit on test data

Step 4: Reviewing the model

Step 5: Accuracy and error

Summary

Regularization for Database Improvement

Statistical regularization

Various statistical regularization methods

Ridge

Lasso

Least angles

Opportunities for regularization

Collinearity

Sparse solutions

High-dimensional data

Classification

Using data to understand statistical regularization

Improving data or a data model

Simplification

Relevance

Speed

Transformation

Variation of coefficients

Causal inference

Back to regularization

Reliability

Using R for statistical regularization

Parameter setup

Summary

Database Development and Assessment

Assessment and statistical assessment

Objectives

Baselines

Planning for assessment

Evaluation

Development versus assessment

Planning

Data assessment and data quality assurance

Categorizing quality

Relevance

Cross-validation

Preparing data

R and statistical assessment

Questions to ask

Learning curves

Example of a learning curve

Summary

Databases and Neural Networks

Ask any data scientist

Defining neural network

Nodes

Layers

Training

Solution

Understanding the concepts

Neural network models and database models

No single or main node

Not serial

No memory address to store results

R-based neural networks

References

Data prep and preprocessing

Data splitting

Model parameters

Cross-validation

R packages for ANN development

ANN

ANN2

NNET

Black boxes

A use case

Popular use cases

Character recognition

Image compression

Stock market prediction

Fraud detection

Neuroscience

Summary

Boosting your Database

Definition and purpose

Bias

Categorizing bias

Causes of bias

Bias data collection

Bias sample selection

Variance

ANOVA

Noise

Noisy data

Weak and strong learners

Weak to strong

Model bias

Training and prediction time

Complexity

Which way?

Back to boosting

How it started

AdaBoost

What you can learn from boosting (to help) your database

Using R to illustrate boosting methods

Prepping the data

Training

Ready for boosting

Example results

Summary

Database Classification using Support Vector Machines

Database classification

Data classification in statistics

Guidelines for classifying data

Common guidelines

Definitions

Definition and purpose of an SVM

The trick

Feature space and cheap computations

Drawing the line

More than classification

Downside

Reference resources

Predicting credit scores

Using R and an SVM to classify data in a database

Moving on

Summary

Database Structures and Machine Learning

Data structures and data models

Data structures

Data models

What's the difference?

Relationships

Machine learning

Overview of machine learning concepts

Key elements of machine learning

Representation

Evaluation

Optimization

Types of machine learning

Supervised learning

Unsupervised learning

Semi-supervised learning

Reinforcement learning

Most popular

Applications of machine learning

Machine learning in practice

Understanding

Preparation

Learning

Interpretation

Deployment

Iteration

Using R to apply machine learning techniques to a database

Understanding the data

Preparing

Data developer

Understanding the challenge

Cross-tabbing and plotting

Summary

Preface

Statistics is an absolute prerequisite for any task in the area of data science, but it may also be the most feared deterrent for developers entering the field. This book will take you on a statistical journey from knowing very little to becoming comfortable using various statistical methods for typical data science tasks.

What this book covers

Chapter 1: Transitioning from Data Developer to Data Scientist, sets the stage for the transition from data developer to data scientist. You will understand the difference between a developer's mindset and a data scientist's mindset, why that difference is important, and how to transition into thinking like a data scientist.

Chapter 2: Declaring the Objectives, introduces and explains (from a developer's perspective) the basic objectives behind statistics for data science and introduces you to the important terms and key concepts used in the field of data science.

Chapter 3: A Developer's Approach to Data Cleaning, discusses how a developer might understand and approach the topic of data cleaning using common statistical methods.

Chapter 4: Data Mining and the Database Developer, introduces the developer to mining data using R. You will understand what data mining is, why it is important, and feel comfortable using R for the most common statistical data mining methods: dimensional reduction, frequent patterns, and sequences.

Chapter 5: Statistical Analysis for the Database Developer, discusses the difference between data analysis (or summarization) and statistical data analysis, and then follows the steps for a successful statistical analysis of data: establishing the nature of the data, exploring the relationships presented in the data, creating a summarization model, proving the validity of the model, and employing predictive analytics on the developed model.

Chapter 6: Database Progression to Database Regression, sets out to define statistical regression concepts and outline how a developer might use regression for simple forecasting and prediction within a typical data development project.

Chapter 7: Regularization for Database Improvement, introduces the developer to the idea of statistical regularization to improve data models. You will review what statistical regularization is, why it is important, and various statistical regularization methods.

Chapter 8: Database Development and Assessment, covers the idea of data model assessment and using statistics for assessment. You will understand what statistical assessment is, why it is important, and how to use R for statistical assessment.

Chapter 9: Databases and Neural Networks, defines the neural network model and draws from a developer’s knowledge of data models to help understand the purpose and use of neural networks in data science.

Chapter 10: Boosting your Database, introduces the idea of using statistical boosting to better understand data in a database.

Chapter 11: Database Classification using Support Vector Machines, uses developer terminology to define an SVM, identifies various applications for its use, and walks through an example of using a simple SVM to classify data in a database.

Chapter 12: Database Structures and Machine Learning, aims to provide an explanation of the types of machine learning and shows the developer how to use machine learning processes to understand database mappings and identify patterns within the data.

What you need for this book

This book is intended for those with a data development background who are interested in possibly entering the field of data science and are looking for concise information on the topic of statistics, with the help of insightful programs and simple explanations. Just bring your data development experience and an open mind!

Who this book is for

This book is intended for those developers who are interested in entering the field of data science and are looking for concise information on the topic of statistics, with the help of insightful programs and simple explanations.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: In statistics, a boxplot is a simple way to gain information regarding the shape, variability, and center (or median) of a statistical data set, so we'll use the boxplot with our data to see whether we can identify the median Coin-in and whether there are any outliers.

A block of code is set as follows:

MyFile <- "C:/GammingData/SlotsResults.csv"
MyData <- read.csv(file=MyFile, header=TRUE, sep=",")

New terms and important words are shown in bold. 

Warnings or important notes appear like this.
Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book-what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply email [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files emailed directly to you. You can download the code files by following these steps:

1. Log in or register to our website using your email address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows

Zipeg / iZip / UnRarX for Mac

7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Statistics-for-Data-Science. We also have other code bundles from our rich catalogue of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/StatisticsforDataScience_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at [email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Transitioning from Data Developer to Data Scientist

In this chapter (and throughout this book), we will chart your course for starting and continuing the journey from thinking like a data developer to thinking like a data scientist.

Using developer terminologies and analogies, we will discuss a developer's objectives, what a typical developer mindset might be like, how it differs from a data scientist's mindset, and why there are important differences (as well as similarities) between the two, and we will suggest how to transition yourself into thinking like a data scientist. Finally, we will point out certain advantages of understanding statistics and data science, taking a data perspective, and simply thinking like a data scientist.

In this chapter, we've broken things into the following topics:

The objectives of the data developer role

How a data developer thinks

The differences between a data developer and a data scientist

Advantages of thinking like a data scientist

The steps for transitioning into a data scientist mindset

So, let's get started!

Data developer thinking

Having spent plenty of years wearing the hat of a data developer, it makes sense to start out here with a few quick comments about data developers.

In some circles, a database developer is the equivalent of a data developer. But whether data or database, both would usually be labeled as information technology (IT) professionals. Both spend their time working on or with data and database technologies.

We may see a split between those database (data) developers who focus more on support and routine maintenance (such as administrators) and those who focus more on improving, expanding, and otherwise developing access to data (such as developers).

Your typical data developer will primarily be involved with creating and maintaining access to data rather than consuming that data. He or she will have input into, or may make decisions on, the choice of programming languages for accessing or manipulating data. Data developers also make sure that new data projects adhere to rules on how databases store and handle data, and they create interfaces between data sources.

In addition, some data developers are involved with reviewing and tuning queries written by others and, therefore, must be proficient in the latest tuning techniques, various query languages such as Structured Query Language (SQL), as well as how the data being accessed is stored and structured.

In summary, at least strictly from a data developer's perspective, the focus is all about access to valuable data resources rather than the consumption of those valuable data resources.

Objectives of a data developer

Every role, position, or job post will have its own list of objectives, responsibilities, or initiatives.

As such, in the role of a data developer, one may be charged with some of the following responsibilities:

Maintaining the integrity of a database and infrastructure

Monitoring and optimizing to maintain levels of responsiveness

Ensuring quality and integrity of data resources

Providing appropriate levels of support to communities of users

Enforcing security policies on data resources

As a data scientist, you will note somewhat different objectives. This role will typically include some of the objectives listed here:

Mining data from disparate sources

Identifying patterns or trending

Creating statistical models—modeling

Learning and assessing

Identifying insights and predicting

Do you perhaps notice a theme beginning here?

Note the keywords:

Maintaining

Monitoring

Ensuring

Providing

Enforcing

These terms imply different notions than those terms that may be more associated with the role of a data scientist, such as the following:

Mining

Trending

Modeling

Learning

Predicting

There are also, of course, some activities that may seem analogous between a data developer and a data scientist; these will be examined here.

Querying or mining

As a data developer, you will almost always be in the habit of querying data. Indeed, a data scientist will query data as well. So, what is data mining? Well, when one queries data, one expects to ask a specific question. For example, you might ask, "What was the total number of daffodils sold in April?", expecting to receive back a known, relevant answer, such as "In April, daffodil sales totaled 269 plants."

With data mining, one is usually more absorbed in the data relationships (or the potential relationships between points of data, sometimes referred to as variables) and cognitive analysis. A simple example might be: how does the average daily temperature during the month affect the total number of daffodils sold in April?

Another important distinction between data querying and data mining is that queries are typically historic in nature in that they are used to report past results (total sales in April), while data mining techniques can be forward thinking in that through the use of appropriate statistical methods, they can infer a future result or provide the probability that a result or event will occur. For example, using our earlier example, we might predict higher daffodil sales when the average temperature rises within the selling area.
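To make the contrast concrete, here is a minimal R sketch along the lines of our daffodil example; the data frame and its values are hypothetical, invented purely for illustration:

# Hypothetical month-to-date sales data
Sales <- data.frame(Month = c("March", "April", "April", "April", "May"),
                    UnitsSold = c(180, 100, 90, 79, 210),
                    AvgTempF = c(48, 55, 61, 66, 70))

# Querying: a specific, historic question with a known answer
sum(Sales$UnitsSold[Sales$Month == "April"])    # total daffodils sold in April

# Mining: exploring a potential relationship between variables
cor(Sales$AvgTempF, Sales$UnitsSold)            # does temperature track with sales?

The first statement simply reports a past result, while the correlation hints at a relationship that, with appropriate statistical methods, could support a forward-looking prediction.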

Data quality or data cleansing

Do you think a data developer is interested in the quality of data in a database? Of course, a data developer needs to care about the level of quality of the data they support or provide access to. For a data developer, the process of data quality assurance (DQA) within an organization is more mechanical in nature, such as ensuring data is current and complete and stored in the correct format.

With data cleansing, you see the data scientist put more emphasis on the concept of statistical data quality. This includes using relationships found within the data to improve the levels of data quality. As an example, an individual whose age is nine should not appear as part of a group of legal drivers in the United States; that is incorrectly labeled data.
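As a rough sketch of that kind of logical check, the following hypothetical R snippet flags records whose values contradict one another, such as our nine-year-old labeled as a legal driver (the table and column names are invented for illustration):

# Hypothetical customer records
Customers <- data.frame(Name = c("Ann", "Ben", "Cal"),
                        Age = c(34, 9, 52),
                        LicensedDriver = c(TRUE, TRUE, TRUE))

# Statistical/contextual quality check: a licensed driver under age 16 is suspect
Suspect <- Customers[Customers$LicensedDriver & Customers$Age < 16, ]
Suspect    # any rows returned here need review or relabeling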

You may be familiar with the term munging data. Munging is sometimes defined as the act of tying together systems and interfaces that were not specifically designed to interoperate; it can also be defined as the processing or filtering of raw data into another form for a particular use or need.

Data modeling

Data developers create designs (or models) for data by working closely with key stakeholders based on given requirements such as the ability to rapidly enter sales transactions into an organization's online order entry system. During model design, there are three kinds of data models the data developer must be familiar with—conceptual, logical, and physical—each relatively independent of each other.

Data scientists create models with the intention of training with data samples or populations to identify previously unknown insights or validate current assumptions.
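A minimal sketch of that training idea in R might look like the following; the data and the 80/20 split are hypothetical choices for illustration, not a prescription from this book:

set.seed(42)                                   # make the split reproducible
n <- 100
MyData <- data.frame(x = runif(n, 0, 10))
MyData$y <- 3 * MyData$x + rnorm(n)            # an invented linear relationship

TrainRows <- sample(1:n, size = 0.8 * n)       # hold out 20% for testing
Train <- MyData[TrainRows, ]
Test <- MyData[-TrainRows, ]

Model <- lm(y ~ x, data = Train)               # train on the sample
Predicted <- predict(Model, newdata = Test)    # validate against held-out data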

Modeling data can become complex, and therefore, it is common to see a distinction between the role of data development and data modeling. In these cases, a data developer concentrates on evaluating the data itself, creating meaningful reports, while data modelers evaluate how to collect, maintain, and use the data.

Issue or insights

A lot of a data developer's time may be spent monitoring data, users, and environments, looking for any indications of emerging issues such as unexpected levels of usage that may cause performance bottlenecks or outages. Other common duties include auditing, application integrations, disaster planning and recovery, capacity planning, change management, database software version updating, load balancing, and so on.

Data scientists spend their time evaluating and analyzing data and information in an effort to discover valuable new insights. Hopefully, once established, these insights can then be used to make better business decisions.

There is a related concept to grasp: through the use of analytics, one can identify patterns and trends within data, while an insight is the value obtained through the use of those analytical outputs.

Thought process

Someone's mental procedures or cognitive activity based on interpretations, past experiences, reasoning, problem-solving, imagining, and decision making make up their way of thinking or their thought process.

One can only guess how particular individuals actually think, what their exact thoughts are at a given point in time or during an activity, or what thought process they will use to accomplish their objectives. In general terms, though, a data developer may spend more time thinking about data convenience (making the data available as per the requirements), while data scientists are all about data consumption (finding new ways to leverage the data for insights into existing issues or new opportunities).

To paint a clearer picture, you might use the analogy of the auto mechanic and the school counselor.

An auto mechanic will use his skills along with appropriate tools to keep an automobile available to its owner and running well, or, if an issue has been identified with a vehicle, the mechanic will diagnose the symptoms presented and rectify the problem. This is much like the activities of a data developer.

A school counselor, on the other hand, might examine a vast amount of information regarding a student's past performance, personality traits, and economic statistics to determine what opportunities may exist in that particular student's future. In addition, multiple scenarios may be studied to predict the best outcomes based on the individual student's resources.

Clearly, both aforementioned individuals provide valuable services but use (maybe very) different approaches and individual thought processes to produce the desired results.

Although there is some overlap, when you are a data developer, your thoughts normally revolve around maintaining convenient access to appropriate data resources, but not particularly around the data's substance; that is, you may care about data types, data volumes, and accessibility paths, but not about what cognitive relationships exist in the data or its powerful potential uses.

In the next section, we will explore some simple circumstances in an effort to show various contrasts between the data developer and the data scientist.

Developer versus scientist

To better understand the differences between a data developer and data scientist, let's take a little time here and consider just a few hypotheticals (yet still realistic) situations that may occur during your day.

New data, new source

What happens when new data or a new data source becomes available or is presented?

Here, new data usually means that more current or more up-to-date data has become available. An example of this might be receiving a file each morning of the latest month-to-date sales transactions, usually referred to as an actual update.

In the business world, data can be either real (actual), as in the case of an authenticated sale or sales transaction entered in an order processing system, or supposed, as in the case of an organization forecasting a future (not yet occurred) sale or transaction.

You may receive files of data periodically from an online transaction processing system, which provide the daily sales figures from the first of the month to the current date. You'd want your business reports to show total sales numbers that include the most recent sales transactions.

The idea of a new data source is different. Using the same sort of analogy as before, an example might be a file of sales transactions from a company that a parent company has newly acquired. Another example would be receiving data reporting the results of a recent online survey. This is information collected with a specific purpose in mind, and it is typically not (but could be) a routine event.

Machine (and other) data is accumulating even as you are reading this, providing new and interesting data sources and creating a market for data to be consumed. One interesting example is Amazon Web Services (https://aws.amazon.com/datasets/). Here, you can find massive resources of public data, including the 1000 Genomes Project (an attempt to build the most comprehensive database of human genetic information) as well as NASA's database of satellite imagery of the Earth.

In the previous scenarios, a data developer would most likely be (should be) expecting updated files and have implemented the Extract, Transform, and Load (ETL) processes to automatically process the data, handle any exceptions, and ensure that all the appropriate reports reflect the latest, correct information. Data developers would also deal with transitioning a sales file from a newly acquired company but probably would not be a primary resource for dealing with survey results (or the 1000 Genomes Project).
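A drastically simplified sketch of such a routine load in R might look like this; the file paths and checks are hypothetical, echoing the read.csv pattern used elsewhere in this book:

# Hypothetical daily month-to-date actuals file
MyFile <- "C:/SalesData/MonthToDateSales.csv"

if (file.exists(MyFile)) {
  Actuals <- read.csv(file = MyFile, header = TRUE, sep = ",")
  # Basic exception handling: reject an empty feed rather than report on it
  if (nrow(Actuals) == 0) stop("Empty actuals file - investigate the feed")
  # Refresh the extract that downstream reports read from
  write.csv(Actuals, "C:/SalesData/ReportingExtract.csv", row.names = FALSE)
} else {
  warning("Expected daily actuals file was not received")
}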

Data scientists are not involved in the daily processing of data (such as sales) but will be directly responsible for a survey results project. That is, the data scientist is almost always hands-on with initiatives such as researching and acquiring new sources of information for projects involving surveying. Data scientists most likely would have input even in the designing of surveys as they are the ones who will be using that data in their analysis.

Quality questions

Suppose there are concerns about the quality of the data to be, or being, consumed by the organization. As we alluded to earlier in this chapter, there are different types of data quality concerns, such as what we called mechanical issues as well as statistical issues (and there are others).

Current trending examples of the most common statistical quality concerns include duplicate entries and misspellings, misclassification and aggregation, and changing meanings.
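For instance, here is a small R sketch of screening for two of those concerns, duplicate entries and inconsistent spellings, on an invented customer list (the names and normalization choices are purely illustrative):

CustomerNames <- c("Smith, John", "smith, john ", "Jones, Mary", "Smith, John")

# Normalize case and whitespace so restyled duplicates line up
Normalized <- tolower(trimws(CustomerNames))

Normalized[duplicated(Normalized)]    # duplicate entries to review or merge
unique(Normalized)                    # the de-duplicated list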

If management is questioning the validity of the total sales listed on a daily report, or perhaps doesn't trust it because the majority of your customers appear not to be legally able to drive in the United States, or because the number of the organization's repeat customers is declining, then you have a quality issue:

Quality is a concern to both the data developer and the data scientist. A data developer focuses more on timing and formatting (the mechanics of the data), while the data scientist is more interested in the data's statistical quality (with priority given to issues with the data that may potentially impact the reliability of a particular study).

Querying and mining

Historically, the information technology group or department has been beseeched by a variety of business users to produce and provide reports showing information stored in databases and systems that are of interest.

These ad hoc reporting requests have evolved into requests for on-demand raw data extracts (rather than formatted or pretty printed reports) so that business users could then import the extracted data into a tool such as MS Excel (or others), where they could then perform their own formatting and reporting, or perform further analysis and modeling. In today's world, business users demand more self-service (even mobile) abilities to meet their organization's (or an individual's) analytical and reporting needs, expecting to have access to the updated raw data stores, directly or through smaller, focus-oriented data pools.

"If business applications cannot supply the necessary reporting on their own, business users often will continue their self-service journey."

- Christina Wong (www.datainformed.com)

Creating ad hoc reports and performing extracts based on specific on-demand needs, or providing self-service access to data, falls solely to the role of the organization's data developer. However, take note that a data scientist will want to periodically perform his or her own querying and extracting, usually as part of a project they are working on. They may use these query results to determine the viability and availability of the data they need, or as part of the process to create a sample or population for specific statistical projects. This form of querying may be considered a form of data mining and goes much deeper into the data than simple queries might. This work effort is typically performed by a data scientist rather than a data developer.
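As a small illustration of that last point, a data scientist's exploratory pull might resemble the following hypothetical R sketch, drawing a random sample from a larger extract to judge the data's viability before committing to a full study:

set.seed(7)                                     # make the draw repeatable
# Hypothetical full extract (in practice, the result of a database query)
Extract <- data.frame(OrderID = 1:5000,
                      Amount = round(runif(5000, 5, 500), 2))

# Draw a 2% random sample for a quick viability assessment
SampleRows <- sample(nrow(Extract), size = 0.02 * nrow(Extract))
Study <- Extract[SampleRows, ]

summary(Study$Amount)                           # a quick feel for the variable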

Performance

You can bet that pretty much everyone is, or will be, concerned with the topic of performance. Some forms (of performance) are perhaps a bit more quantifiable, such as what is an acceptable response time for an ad hoc query or extract to complete? Or perhaps what are the total number of mouse-clicks or keystrokes required to enter a sales order? Others may be a bit more difficult to answer or address, such as why does it appear that there is a downward trend in the number of repeat customers?

It is the responsibility of the data developer to create and support data designs (even be involved with infrastructure configuration options) that consistently produce swift response times and are easy to understand and use.

One area of performance responsibility that may be confusing is in the area of website performance. For example, if an organization's website is underperforming, is it because certain pages are slow to load or uninteresting and/or irrelevant to the targeted audience or customer? In this example, both a data developer and a data scientist may be directed to address the problem.