Python for Data Science For Dummies - John Paul Mueller - E-Book

Description

Let Python do the heavy lifting for you as you analyze large datasets. Python for Data Science For Dummies lets you get your hands dirty with data using one of the top programming languages. This beginner's guide takes you step by step through getting started, performing data analysis, understanding datasets and example code, working with Google Colab, sampling data, and beyond. Coding your data analysis tasks will make your life easier, make you more in-demand as an employee, and open the door to valuable knowledge and insights. This new edition is updated for the latest version of Python and includes current, relevant data examples.

* Get a firm background in the basics of Python coding for data analysis
* Learn about data science careers you can pursue with Python coding skills
* Integrate data analysis with multimedia and graphics
* Manage and organize data with cloud-based relational databases

Python careers are on the rise. Grab this user-friendly Dummies guide and gain the programming skills you need to become a data pro.


Page count: 652

Publication year: 2023




Python® for Data Science For Dummies®

To view this book's Cheat Sheet, simply go to www.dummies.com and search for “Python for Data Science For Dummies Cheat Sheet” in the Search box.

Table of Contents

Cover

Title Page

Copyright

Introduction

About This Book

Foolish Assumptions

Icons Used in This Book

Beyond the Book

Where to Go from Here

Part 1: Getting Started with Data Science and Python

Chapter 1: Discovering the Match between Data Science and Python

Understanding Python as a Language

Defining Data Science

Creating the Data Science Pipeline

Understanding Python’s Role in Data Science

Learning to Use Python Fast

Chapter 2: Introducing Python’s Capabilities and Wonders

Working with Python

Performing Rapid Prototyping and Experimentation

Considering Speed of Execution

Visualizing Power

Using the Python Ecosystem for Data Science

Chapter 3: Setting Up Python for Data Science

Working with Anaconda

Installing Anaconda on Windows

Installing Anaconda on Linux

Installing Anaconda on Mac OS X

Downloading the Datasets and Example Code

Chapter 4: Working with Google Colab

Defining Google Colab

Working with Notebooks

Performing Common Tasks

Using Hardware Acceleration

Executing the Code

Viewing Your Notebook

Sharing Your Notebook

Getting Help

Part 2: Getting Your Hands Dirty with Data

Chapter 5: Working with Jupyter Notebook

Using Jupyter Notebook

Performing Multimedia and Graphic Integration

Chapter 6: Working with Real Data

Uploading, Streaming, and Sampling Data

Accessing Data in Structured Flat-File Form

Sending Data in Unstructured File Form

Managing Data from Relational Databases

Interacting with Data from NoSQL Databases

Accessing Data from the Web

Chapter 7: Processing Your Data

Juggling between NumPy and pandas

Validating Your Data

Manipulating Categorical Variables

Dealing with Dates in Your Data

Dealing with Missing Data

Slicing and Dicing: Filtering and Selecting Data

Concatenating and Transforming

Aggregating Data at Any Level

Chapter 8: Reshaping Data

Using the Bag of Words Model to Tokenize Data

Working with Graph Data

Chapter 9: Putting What You Know into Action

Contextualizing Problems and Data

Considering the Art of Feature Creation

Performing Operations on Arrays

Part 3: Visualizing Information

Chapter 10: Getting a Crash Course in Matplotlib

Starting with a Graph

Setting the Axis, Ticks, and Grids

Defining the Line Appearance

Using Labels, Annotations, and Legends

Chapter 11: Visualizing the Data

Choosing the Right Graph

Creating Advanced Scatterplots

Plotting Time Series

Plotting Geographical Data

Visualizing Graphs

Part 4: Wrangling Data

Chapter 12: Stretching Python’s Capabilities

Playing with Scikit-learn

Using Transformative Functions

Considering Timing and Performance

Running in Parallel on Multiple Cores

Chapter 13: Exploring Data Analysis

The EDA Approach

Defining Descriptive Statistics for Numeric Data

Counting for Categorical Data

Creating Applied Visualization for EDA

Understanding Correlation

Working with Cramér's V

Modifying Data Distributions

Chapter 14: Reducing Dimensionality

Understanding SVD

Performing Factor Analysis and PCA

Understanding Some Applications

Chapter 15: Clustering

Clustering with K-means

Performing Hierarchical Clustering

Discovering New Groups with DBScan

Chapter 16: Detecting Outliers in Data

Considering Outlier Detection

Examining a Simple Univariate Method

Developing a Multivariate Approach

Part 5: Learning from Data

Chapter 17: Exploring Four Simple and Effective Algorithms

Guessing the Number: Linear Regression

Moving to Logistic Regression

Making Things as Simple as Naïve Bayes

Learning Lazily with Nearest Neighbors

Chapter 18: Performing Cross-Validation, Selection, and Optimization

Pondering the Problem of Fitting a Model

Cross-Validating

Selecting Variables Like a Pro

Pumping Up Your Hyperparameters

Chapter 19: Increasing Complexity with Linear and Nonlinear Tricks

Using Nonlinear Transformations

Regularizing Linear Models

Fighting with Big Data Chunk by Chunk

Understanding Support Vector Machines

Playing with Neural Networks

Chapter 20: Understanding the Power of the Many

Starting with a Plain Decision Tree

Getting Lost in a Random Forest

Boosting Predictions

Part 6: The Part of Tens

Chapter 21: Ten Essential Data Resources

Discovering the News with Reddit

Getting a Good Start with KDnuggets

Locating Free Learning Resources with Quora

Gaining Insights with Oracle’s AI & Data Science Blog

Accessing the Huge List of Resources on Data Science Central

Discovering New Beginner Data Science Methodologies at Data Science 101

Obtaining the Most Authoritative Sources at Udacity

Receiving Help with Advanced Topics at Conductrics

Obtaining the Facts of Open Source Data Science from Springboard

Zeroing In on Developer Resources with Jonathan Bower

Chapter 22: Ten Data Challenges You Should Take

Removing Personally Identifiable Information

Creating a Secure Data Environment

Working with a Multiple-Data-Source Problem

Honing Your Overfit Strategies

Trudging Through the MovieLens Dataset

Locating the Correct Data Source

Working with Handwritten Information

Working with Pictures

Identifying Data Lineage

Interacting with a Huge Graph

Index

About the Authors

Connect with Dummies

End User License Agreement

List of Tables

Chapter 10

TABLE 10-1 Matplotlib Line Styles

TABLE 10-2 Matplotlib Colors

TABLE 10-3 Matplotlib Markers

Chapter 18

TABLE 18-1 Regression Evaluation Measures

TABLE 18-2 Classification Evaluation Measures

Chapter 19

TABLE 19-1 The SVM Module of Learning Algorithms

TABLE 19-2 The Loss, Penalty, and Dual Constraints

List of Illustrations

Chapter 1

FIGURE 1-1: Loading data into variables so that you can manipulate it.

FIGURE 1-2: Using the variable content to train a linear regression model.

FIGURE 1-3: Outputting a result as a response to the model.

Chapter 3

FIGURE 3-1: Tell the wizard how to install Anaconda on your system.

FIGURE 3-2: Specify an installation location.

FIGURE 3-3: Configure the advanced installation options.

FIGURE 3-4: Create a folder to use to hold the book’s code.

FIGURE 3-5: Notebook uses cells to store your code.

FIGURE 3-6: The housing object contains the loaded dataset.

Chapter 4

FIGURE 4-1: Create a new Python 3 Notebook using the same techniques as normal.

FIGURE 4-2: Use this dialog box to open existing notebooks.

FIGURE 4-3: Colab maintains a history of the revisions for your project.

FIGURE 4-4: Using GitHub means storing your data in a repository.

FIGURE 4-5: Use gists to store individual files or other resources.

FIGURE 4-6: Colab code cells contain a few extras not found in Notebook.

FIGURE 4-7: Use the Editor tab of the Settings dialog box to modify ...

FIGURE 4-8: Colab code cells contain a few extras not found in Notebook.

FIGURE 4-9: Use the GUI to make formatting your text easier.

FIGURE 4-10: Hardware acceleration speeds code execution.

FIGURE 4-11: Use the table of contents to navigate your notebook.

FIGURE 4-12: The notebook information includes both size and settings.

FIGURE 4-13: Send a message or obtain a link to share your notebook.

FIGURE 4-14: Use code snippets to write your applications more quickly.

Chapter 5

FIGURE 5-1: Notebook makes adding styles to your work easy.

FIGURE 5-2: Adding headings makes separating content in your notebooks easy.

FIGURE 5-3: The Help menu contains a selection of common help topics.

FIGURE 5-4: Take your time going through the magic function help, which has a l...

FIGURE 5-5: Embedding images can dress up your notebook presentation.

Chapter 6

FIGURE 6-1: The test image is 100 pixels high and 100 pixels long.

FIGURE 6-2: The raw format of a CSV file is still text and quite readable.

FIGURE 6-3: Use an application such as Excel to create a formatted CSV presenta...

FIGURE 6-4: An Excel file is highly formatted and might contain information of ...

FIGURE 6-5: The image appears onscreen after you render and show it.

FIGURE 6-6: Cropping the image makes it smaller.

FIGURE 6-7: XML is a hierarchical format that can become quite complex.

Chapter 8

FIGURE 8-1: Plotting the original graph.

FIGURE 8-2: Plotting the graph addition.

Chapter 10

FIGURE 10-1: Creating a basic plot that shows just one line.

FIGURE 10-2: Defining a plot that contains multiple lines.

FIGURE 10-3: Specifying how the axes should appear to the viewer.

FIGURE 10-4: Adding grids makes the values easier to read.

FIGURE 10-5: Line styles help differentiate between plots.

FIGURE 10-6: Markers help to emphasize individual values.

FIGURE 10-7: Use labels to identify the axes.

FIGURE 10-8: Annotation can identify points of interest.

FIGURE 10-9: Use legends to identify individual lines.

Chapter 11

FIGURE 11-1: Bar charts make it easier to perform comparisons.

FIGURE 11-2: Histograms let you see distributions of numbers.

FIGURE 11-3: Use boxplots to present groups of numbers.

FIGURE 11-4: Use scatterplots to show groups of data points and their associate...

FIGURE 11-5: Color arrays can make the scatterplot groups stand out better.

FIGURE 11-6: Scatterplot trendlines can show you the general data direction.

FIGURE 11-7: Use line graphs to show the flow of data over time.

FIGURE 11-8: Add a trendline to show the average direction of change in a chart...

FIGURE 11-9: Maps can illustrate data in ways other graphics can't.

FIGURE 11-10: Undirected graphs connect nodes to form patterns.

FIGURE 11-11: Use directed graphs to show direction between nodes.

Chapter 12

FIGURE 12-1: The output from the memory test shows memory usage for each line o...

Chapter 13

FIGURE 13-1: A contingency table based on groups and binning.

FIGURE 13-2: A boxplot comparing all the standardized variables.

FIGURE 13-3: A boxplot of body mass arranged by penguin groups.

FIGURE 13-4: Parallel coordinates anticipate whether groups are easily separabl...

FIGURE 13-5: Flipper length distribution and density.

FIGURE 13-6: Histograms can detail better distributions.

FIGURE 13-7: A scatterplot reveals how two variables relate to each other.

FIGURE 13-8: A matrix of scatterplots displays more information at one time.

FIGURE 13-9: A covariance matrix of the Palmer Penguins dataset.

FIGURE 13-10: A correlation matrix of the Palmer Penguins dataset.

FIGURE 13-11: The distribution of bill depth transformed into a uniform distrib...

FIGURE 13-12: The distribution of bill depth transformed into a normal distribu...

Chapter 14

FIGURE 14-1: The resulting projection of the handwritten data by the t-SNE algo...

FIGURE 14-2: The example application would like to find similar photos.

FIGURE 14-3: The output shows the results that resemble the test image.

Chapter 15

FIGURE 15-1: Cross-tabulation of ground truth and K-means clusters.

FIGURE 15-2: Rate of change of inertia for solutions up to k=20.

FIGURE 15-3: Cross-tabulation of ground truth and Ward method’s agglomerative c...

FIGURE 15-4: A clustering hierarchical tree obtained from agglomerative cluster...

FIGURE 15-5: Cross-tabulation of ground truth and DBScan.

Chapter 16

FIGURE 16-1: Descriptive statistics for the Diabetes DataFrame from Scikit-lear...

FIGURE 16-2: Boxplots.

FIGURE 16-3: The first two and last two components from the PCA.

Chapter 18

FIGURE 18-1: Spatial distribution of house prices in California.

FIGURE 18-2: Boxplot of house prices, grouped by clusters.

FIGURE 18-3: Validation curves.

Chapter 19

FIGURE 19-1: A slow descent optimizing squared error.

FIGURE 19-2: Dividing two groups.

FIGURE 19-3: A viable SVM solution for the problem of the two groups and more.

FIGURE 19-4: The first ten handwritten digits from the digits dataset.

FIGURE 19-5: The training and test scores of the neural network as it learns fr...

Chapter 20

FIGURE 20-1: A tree model of survival rates from the Titanic disaster.

FIGURE 20-2: A tree model of the Mushroom dataset using a depth of five splits.

FIGURE 20-3: Verifying the impact of the number of estimators on Random Forest.



Python® for Data Science For Dummies®, 3rd Edition

Published by: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com

Copyright © 2024 by John Wiley & Sons, Inc., Hoboken, New Jersey

Media and software compilation copyright © 2023 by John Wiley & Sons, Inc. All rights reserved.

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Trademarks: Wiley, For Dummies, the Dummies Man logo, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and may not be used without written permission. Python is a registered trademark of Python Software Foundation Corporation. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: WHILE THE PUBLISHER AND AUTHORS HAVE USED THEIR BEST EFFORTS IN PREPARING THIS WORK, THEY MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING WITHOUT LIMITATION ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES REPRESENTATIVES, WRITTEN SALES MATERIALS OR PROMOTIONAL STATEMENTS FOR THIS WORK. THE FACT THAT AN ORGANIZATION, WEBSITE, OR PRODUCT IS REFERRED TO IN THIS WORK AS A CITATION AND/OR POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT THE PUBLISHER AND AUTHORS ENDORSE THE INFORMATION OR SERVICES THE ORGANIZATION, WEBSITE, OR PRODUCT MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. THIS WORK IS SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING PROFESSIONAL SERVICES. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR YOUR SITUATION. YOU SHOULD CONSULT WITH A SPECIALIST WHERE APPROPRIATE. FURTHER, READERS SHOULD BE AWARE THAT WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAPPEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ. NEITHER THE PUBLISHER NOR AUTHORS SHALL BE LIABLE FOR ANY LOSS OF PROFIT OR ANY OTHER COMMERCIAL DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL, INCIDENTAL, CONSEQUENTIAL, OR OTHER DAMAGES.

For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. For technical support, please visit https://hub.wiley.com/community/support/dummies.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2023946155

ISBN 978-1-394-21314-6 (pbk); ISBN 978-1-394-21308-5 (ebk); ISBN 978-1-394-21309-2 (ePDF)

Introduction

The growth of the internet has been phenomenal. According to Internet World Stats (https://www.internetworldstats.com/emarketing.htm), 69 percent of the world, including developing countries, is now connected in some way to the internet. North America has the highest penetration rate, at 93.4 percent, which means you now have access to nearly everyone just by knowing how to manipulate data. Data science turns this huge amount of data into capabilities that you use absolutely every day to perform an amazing array of tasks or to obtain services from someone else.

You’ve probably used data science in ways that you never expected. For example, when you used your favorite search engine this morning to look for something, it made suggestions on alternative search terms. Those terms are supplied by data science. When you went to the doctor last week and discovered that the lump you found wasn’t cancer, the doctor likely made the prognosis with the help of data science.

In fact, you may work with data science every day and not even know it. Even though many of the purposes of data science elude attention, you have probably become more aware of the data you generate, and with that awareness comes a desire for control over aspects of your life, such as when and where to shop, or whether to have someone perform the task for you. In addition to all its other uses, data science enables you to add that level of control that you, like many people, are looking for today.

Python for Data Science For Dummies, 3rd Edition not only gets you started using data science to perform a wealth of practical tasks but also helps you realize just how many places data science is used. By knowing how to answer data science problems and where to employ data science, you gain a significant advantage over everyone else, increasing your chances at promotion or that new job you really want.

About This Book

The main purpose of Python for Data Science For Dummies, 3rd Edition, is to take the scare factor out of data science by showing you that data science is not only really interesting but also quite doable using Python. You may assume that you need to be a computer science genius to perform the complex tasks normally associated with data science, but that’s far from the truth. Python comes with a host of useful libraries that do all the heavy lifting for you in the background. You don’t even realize how much is going on, and you don’t need to care. All you really need to know is that you want to perform specific tasks, and Python makes these tasks quite accessible.

Part of the emphasis of this book is on using the right tools. You start with either Jupyter Notebook (on desktop systems) or Google Colab (on the web) — two tools that take the sting out of working with Python. The code you place in Jupyter Notebook or Google Colab is presentation quality, and you can mix a number of presentation elements right there in your document. It’s not really like using a traditional development environment at all.

You also discover some interesting techniques in this book. For example, you can create plots of all your data science experiments using Matplotlib, and this book gives you all the details for doing that. This book also spends considerable time showing you available resources (such as packages) and how you can use Scikit-learn to perform some very interesting calculations. Many people would like to know how to perform handwriting recognition, and if you’re one of them, you can use this book to get a leg up on the process.
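As a small taste of what's ahead, here's a minimal Matplotlib sketch (not one of the book's own examples; the data and filename are made up for illustration) that plots a handful of points and saves the result to a file:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display window needed
import matplotlib.pyplot as plt

# Hypothetical data: x values and their squares
x = [0, 1, 2, 3, 4]
y = [v ** 2 for v in x]

fig, ax = plt.subplots()
line, = ax.plot(x, y, "o-", label="x squared")  # circles joined by a line
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("squares.png")  # write the plot to a PNG file
```

Chapter 10 walks through plots like this in detail, including line styles, markers, labels, and legends.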

Of course, you may still be worried about the whole programming environment issue, and this book doesn’t leave you in the dark there, either. At the beginning, you find complete methods you need to get started with data science using Jupyter Notebook or Google Colab. The emphasis is on getting you up and running as quickly as possible, and to make examples straightforward and simple so that the code doesn’t become a stumbling block to learning.

This third edition of the book provides you with updated examples using Python 3.x so that you’re using the most modern version of Python while reading. In addition, you find a stronger emphasis on making examples simpler and on broadening the book's scope with material on deep learning. More important, this edition contains updated datasets that better demonstrate how data science works today. It also touches on modern concerns, such as removing personally identifiable information and enhancing data security. Consequently, you get a lot more out of this edition, thanks to the input provided by thousands of readers before you.

To make absorbing the concepts even easier, this book uses the following conventions:

Text that you’re meant to type just as it appears in the book is in bold. The exception is when you’re working through a list of steps: Because each step is bold, the text to type is not bold.

When you see words in italics as part of a typing sequence, you need to replace that value with something that works for you. For example, if you see “Type Your Name and press Enter,” you need to replace Your Name with your actual name.

Web addresses and programming code appear in monofont. If you're reading a digital version of this book on a device connected to the internet, note that you can click the web address to visit that website, like this: http://www.dummies.com.

When you need to type command sequences, you see them separated by a special arrow, like this: File ⇒ New File. In this example, you go to the File menu first and then select the New File entry on that menu.

Foolish Assumptions

You may find it difficult to believe that we've assumed anything about you — after all, we haven’t even met you yet! Although most assumptions are indeed foolish, we made these assumptions to provide a starting point for the book.

You need to be familiar with the platform you want to use because the book doesn’t offer any guidance in this regard. (Chapter 3 does, however, provide installation instructions for Anaconda, which supports Jupyter Notebook, and Chapter 4 gets you started with Google Colab.) To provide you with maximum information about Python concerning how it applies to data science, this book doesn’t discuss any platform-specific issues. You really do need to know how to install applications, use applications, and generally work with your chosen platform before you begin working with this book.

You must know how to work with Python. This edition of the book no longer contains a Python primer because you can find a wealth of tutorials online (see https://www.w3schools.com/python/ and https://www.tutorialspoint.com/python/ as examples).

This book isn’t a math primer. Yes, you do encounter some complex math, but the emphasis is on helping you use Python and data science to perform analysis tasks rather than teaching math theory. Chapters 1 and 2 give you a better understanding of precisely what you need to know to use this book successfully.

This book also assumes that you can access items on the internet. Sprinkled throughout are numerous references to online material that will enhance your learning experience. However, these added sources are useful only if you actually find and use them.

Icons Used in This Book

As you read this book, you come across icons in the margins, and here’s what those icons mean:

Tips are nice because they help you save time or perform some task without a lot of extra work. The tips in this book are time-saving techniques or pointers to resources that you should try in order to get the maximum benefit from Python or in performing data science–related tasks.

We don’t want to sound like angry parents or some kind of maniacs, but you should avoid doing anything that’s marked with a Warning icon. Otherwise, you may find that your application fails to work as expected, or you get incorrect answers from seemingly bulletproof equations, or (in the worst-case scenario) you lose data.

Whenever you see this icon, think advanced tip or technique. You may find that you don’t need these tidbits of useful information, or they could contain the solution you need to get a program running. Skip these bits of information whenever you like.

If you don’t get anything else out of a particular chapter or section, remember the material marked by this icon. This text usually contains an essential process or a morsel of information that you must know to work with Python or to perform data science–related tasks successfully.

Beyond the Book

This book isn’t the end of your Python or data science experience — it’s really just the beginning. We provide online content to make this book more flexible and better able to meet your needs. That way, as we receive email from you, we can address questions and tell you how updates to either Python or its associated add-ons affect book content. In fact, you gain access to all these cool additions:

Cheat sheet: You remember using crib notes in school to make a better mark on a test, don’t you? You do? Well, a cheat sheet is sort of like that. It provides you with some special notes about tasks that you can do with Python, IPython, IPython Notebook, and data science that not every other person knows. You can find the cheat sheet by going to www.dummies.com and entering Python for Data Science For Dummies, 3rd Edition in the search field. The cheat sheet contains neat information such as the most common programming mistakes, styles for creating plot lines, and common magic functions to use in Jupyter Notebook.

Updates: Sometimes changes happen. For example, we may not have seen an upcoming change when we looked into our crystal ball during the writing of this book. In the past, this possibility simply meant that the book became outdated and less useful, but you can now find updates to the book by searching this book's title at www.dummies.com.

In addition to these updates, check out the blog posts with answers to reader questions and demonstrations of useful book-related techniques at http://blog.johnmuellerbooks.com/.

Companion files: Hey! Who really wants to type all the code in the book and reconstruct all those plots manually? Most readers would prefer to spend their time actually working with Python, performing data science tasks, and seeing the interesting things they can do, rather than typing. Fortunately for you, the examples used in the book are available for download, so all you need to do is read the book to learn Python for Data Science For Dummies usage techniques. You can find these files at www.dummies.com/go/pythonfordatasciencefd3e. You can also find the source code on author John’s website at http://www.johnmuellerbooks.com/source-code/.

Where to Go from Here

It’s time to start your Python for Data Science For Dummies adventure! If you’re completely new to Python and its use for data science tasks, you should start with Chapter 1 and progress through the book at a pace that allows you to absorb as much of the material as possible.

If you’re a novice who’s in an absolute rush to use Python with data science as quickly as possible, you can skip to Chapter 3 (desktop users) or Chapter 4 (web browser users) with the understanding that you may find some topics a bit confusing later. More advanced readers can skip to Chapter 5 to gain an understanding of the tools used in this book.

Readers who have some exposure to Python and know how to use their development environment can save reading time by moving directly to Chapter 6. You can always go back to earlier chapters as necessary when you have questions. However, you should understand how each technique works before moving to the next one. Every technique, coding example, and procedure has important lessons for you, and you could miss vital content if you start skipping too much information.

Part 1

Getting Started with Data Science and Python

IN THIS PART …

Understanding the connection between Python and data science

Getting an overview of Python capabilities

Defining a Python setup for data science

Using Google Colab for data science tasks

Chapter 1

Discovering the Match between Data Science and Python

IN THIS CHAPTER

Discovering the wonders of data science

Exploring how data science works

Creating the connection between Python and data science

Getting started with Python

Data science may seem like one of those technologies that you’d never use, but you’d be wrong. Yes, data science involves the use of advanced math techniques, statistics, and big data. However, data science also involves helping you make smart decisions, creating suggestions for options based on previous choices, and making robots see objects. In fact, people use data science in so many different ways that you almost can’t look anywhere or do anything without feeling the effects of data science on your life. In short, data science is the person behind the partition in the experience of the wonderment of technology. Without data science, much of what you accept as typical and expected today wouldn’t even be possible. This is the reason that being a data scientist is one of the most interesting jobs of the 21st century.

To make data science doable by someone who’s less than a math genius, you need tools. You could use any of a number of tools to perform data science tasks, but Python is uniquely suited to making it easier to work with data science. For one thing, Python provides an incredible number of math-related libraries that help you perform tasks with a less-than-perfect understanding of precisely what is going on. However, Python goes further by supporting multiple coding styles (programming paradigms) and doing other things to make your job easier. Therefore, yes, you could use other languages to write data science applications, but Python reduces your workload, so it’s a natural choice for those who really don’t want to work hard, but rather to work smart.

This chapter gets you started with Python. Even though this book isn’t designed to provide you with a complete Python tutorial, exploring some basic Python issues will reduce the time needed for you to get up to speed. (If you do need a good starting tutorial, please get Beginning Programming with Python For Dummies, 3rd Edition, by John Mueller (Wiley).) You’ll find that the book provides pointers to tutorials and other aids as needed to fill in any gaps that you may have in your Python education.

Understanding Python as a Language

This book uses Python as a programming language because it’s especially well-suited to data science needs and also supports performing general programming tasks. Common wisdom says that Python is interpreted, but as described in the blog post at http://blog.johnmuellerbooks.com/2023/04/10/compiling-python/, Python can act as a compiled language as well. This book uses Jupyter Notebook because the environment works well for learning, but you need to know that Python provides a lot more than you see in this book. With this fact in mind, the following sections provide a brief view of Python as a language.

Viewing Python’s various uses as a general-purpose language

Python isn’t a language just for use in data science; it’s a general-purpose language with many uses beyond what you need to perform data science tasks. Python is important because after you have built a model, you may need to build a user interface and other structural elements around it. The model may simply be one part of a much larger application, all of which you can build using Python. Here are some tasks that developers commonly use Python to perform beyond data science needs:

Web development

General-purpose programming:

Performing Create, Read, Update, and Delete (CRUD) operations on any sort of file

Creating graphical user interfaces (GUIs)

Developing application programming interfaces (APIs)

Game development (something you can read about at https://realpython.com/tutorials/gamedev/)

Automation and scripting

Software testing and prototyping

Language development (Cobra, CoffeeScript, and Go all use a language syntax similar to Python)

Marketing and Search Engine Optimization (SEO)

Common tasks associated with standard applications:

Tracking financial transactions of all sorts

Interacting with various types of messaging strategies

Creating various kinds of lists based on environmental or other inputs

Automating tasks like filling out forms

The list could be much longer, but this gives you an idea of just how capable Python actually is. The view you see of Python in this book is limited to experimenting with and learning about data science, but don’t let this view limit what you actually use Python to do in the future. Python is currently used as a general-purpose programming language in companies like the following:

Amazon

Dropbox

Facebook

Google

IBM

Instagram

Intel

JP Morgan Chase

NASA

Netflix

PayPal

Pinterest

Reddit

Spotify

Stripe

Uber

YouTube

Interpreting Python

You see Python used in this book in an interpreted mode. There are a lot of reasons to take this approach, but the essential reason is that it allows the use of literate programming techniques (https://notebook.community/sfomel/ipython/LiterateProgramming), which greatly enhance learning and significantly reduce the learning curve. The main advantages of using Python in an interpreted mode are that you receive instant feedback, and fixing errors is significantly easier. When combined with a notebook environment, using Python in an interpreted mode also makes it easier to create presentations and reports, as well as to create graphics that present outcomes of various analyses.

Compiling Python

Outside this book, you may find that compiling your Python application is important because doing so can help increase overall application speed. In addition, compiling your code can reduce the potential for others stealing your code and make your applications both more secure and reliable. You do need access to third-party products to compile your code, but you’ll find plenty of available products discussed at https://www.softwaretestinghelp.com/python-compiler/.

Defining Data Science

At one point, the world viewed anyone working with statistics as a sort of accountant or perhaps a mad scientist. Many people consider statistics and analysis of data boring. However, data science is one of those occupations in which the more you learn, the more you want to learn. Answering one question often spawns more questions that are even more interesting than the one you just answered. However, the thing that makes data science so interesting is that you see it everywhere and used in an almost infinite number of ways. The following sections provide more details on why data science is such an amazing field of study.

Considering the emergence of data science

Data science is a relatively new term. William S. Cleveland coined the term in 2001 as part of a paper entitled “Data Science: An Action Plan for Expanding the Technical Areas of the Field of Statistics.” It wasn’t until a year later that the International Council for Science actually recognized data science and created a committee for it. Columbia University got into the act in 2003 by beginning publication of the Journal of Data Science.

However, the mathematical basis behind data science is centuries old because data science is essentially a method of viewing and analyzing statistics and probability. The term statistics first saw essential use in 1749, but the practice is certainly much older than that. People have used statistics to recognize patterns for thousands of years. For example, the historian Thucydides (in his History of the Peloponnesian War) describes how the Athenians calculated the height of the wall of Plataea in the fifth century BC by counting bricks in an unplastered section of the wall. Because the count needed to be accurate, the Athenians took the average of the counts made by several soldiers.

The process of quantifying and understanding statistics is relatively new, but the science itself is quite old. An early attempt to begin documenting the importance of statistics appears in the ninth century, when Al-Kindi wrote Manuscript on Deciphering Cryptographic Messages. In this paper, Al-Kindi describes how to use a combination of statistics and frequency analysis to decipher encrypted messages. Even in the beginning, statistics saw use in the practical application of science to tasks that seemed virtually impossible to complete. Data science continues this process, and to some people it may actually seem like magic.

Outlining the core competencies of a data scientist

As is true of anyone performing most complex trades today, the data scientist requires knowledge of a broad range of skills to perform the required tasks. In fact, so many different skills are required that data scientists often work in teams. Someone who is good at gathering data may team up with an analyst and someone gifted in presenting information. It would be hard to find a single person with all the required skills. With this in mind, the following list describes areas in which a data scientist could excel (with more competencies being better):

Data capture:

It doesn’t matter what sort of math skills you have if you can’t obtain data to analyze in the first place. The act of capturing data begins by managing a data source using database management skills. However, raw data isn’t particularly useful in many situations — you must also understand the data domain so that you can look at the data and begin formulating the sorts of questions to ask. Finally, you must have data-modeling skills so that you understand how the data is connected and whether the data is structured.

Analysis:

After you have data to work with and understand the complexities of that data, you can begin to perform an analysis on it. You perform some analysis using basic statistical tool skills, much like those that just about everyone learns in college. However, the use of specialized math tricks and algorithms can make patterns in the data more obvious or help you draw conclusions that you can’t draw by reviewing the data alone.

Presentation:

Most people don’t understand numbers well. They can’t see the patterns that the data scientist sees. It’s important to provide a graphical presentation of these patterns to help others visualize what the numbers mean and how to apply them in a meaningful way. More important, the presentation must tell a specific story so that the impact of the data isn’t lost.

Linking data science, big data, and AI

Interestingly enough, the act of moving data around so that someone can perform analysis on it is a specialty called Extract, Transform, and Load (ETL). The ETL specialist uses programming languages such as Python to extract the data from a number of sources. Corporations tend not to keep data in one easily accessed location, so finding the data required to perform analysis takes time. After the ETL specialist finds the data, a programming language or other tool transforms it into a common format for analysis purposes. The loading process takes many forms, but this book relies on Python to perform the task. In a large, real-world operation, you may find yourself using tools such as Informatica, MS SSIS, or Teradata to perform the task.
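The ETL flow described above can be sketched with nothing but Python’s standard library. The sketch below is purely illustrative (it is not the book’s code, and the table and column names are invented for the example): extract rows from a CSV source, transform them into a clean common format, and load them into a database.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (an in-memory example here;
# a real ETL job would pull from files, APIs, or other databases)
raw = io.StringIO("name,price\nwidget, 10 \ngadget,20\n")
rows = list(csv.DictReader(raw))

# Transform: clean each record into a consistent format
records = [(r["name"].strip(), float(r["price"])) for r in rows]

# Load: write the cleaned records to a database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT, price REAL)")
con.executemany("INSERT INTO items VALUES (?, ?)", records)
print(con.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 2
```

A production pipeline would add error handling and incremental loading, but the three-stage shape stays the same.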

Data science isn’t necessarily a means to an end; it may instead be a step along the way. As a data scientist works through various datasets and finds interesting facts, these facts may act as input for other sorts of analysis and AI applications. For example, consider that your shopping habits often suggest what books you may like or where you may like to go for a vacation. Shopping or other habits can also help others understand other, sometimes less benign, activities. Machine Learning For Dummies, 2nd Edition, and Artificial Intelligence For Dummies, 2nd Edition, both by John Mueller and Luca Massaron (Wiley), help you understand these other uses of data science. For now, consider the fact that what you learn in this book can have a definite effect on a career path that will go many other places.

EXTRACT, LOAD, AND TRANSFORM (ELT)

You may come across a new way of working with data called ELT, which is a variation of ETL. The article “Extract, Load, Transform (ELT)” (https://www.techtarget.com/searchdatamanagement/definition/Extract-Load-Transform-ELT), describes the difference between the two. This different approach is often used for nonrelational and unstructured data. The overall goal is to simplify the data gathering and management process, possibly allowing the use of a single tool even for large datasets. However, this approach also has significant drawbacks. The ELT approach isn’t covered in this book, but it does pay to know that it exists.

Creating the Data Science Pipeline

Data science is partly art and partly engineering. Recognizing patterns in data, considering what questions to ask, and determining which algorithms work best are all part of the art side of data science. However, to make the art part of data science realizable, the engineering part relies on a specific process to achieve specific goals. This process is the data science pipeline, which requires the data scientist to follow particular steps in the preparation, analysis, and presentation of the data. The following list helps you understand the data science pipeline better so that you can understand how the book employs it during the presentation of examples:

Preparing the data:

The data that you access from various sources doesn’t come in an easily packaged form, ready for analysis. The raw data not only may vary substantially in format but also may require you to transform it to make all the data sources cohesive and amenable to analysis.

Performing exploratory data analysis:

The math behind data analysis relies on engineering principles in that the results are provable and consistent. However, data science provides access to a wealth of statistical methods and algorithms that help you discover patterns in the data. A single approach doesn’t ordinarily do the trick. You typically use an iterative process to rework the data from a number of perspectives. The use of trial and error is part of the data science art.

Learning from data:

As you iterate through various statistical analysis methods and apply algorithms to detect patterns, you begin learning from the data. The data may not tell the story that you originally thought it would, or it may have many stories to tell. Discovery is part of being a data scientist. If you have preconceived ideas of what the data contains, you won’t find the information it actually does contain.

Visualizing:

Visualization means seeing the patterns in the data and then being able to react to those patterns. It also means being able to see when data is not part of the pattern. Think of yourself as a data sculptor, removing the data that lies outside the patterns (the outliers) so that others can see the masterpiece of information beneath.

Obtaining insights and data products:

The data scientist may seem to simply be looking for unique methods of viewing data. However, the process doesn’t end until you have a clear understanding of what the data means. The insights you obtain from manipulating and analyzing the data help you to perform real-world tasks. For example, you can use the results of an analysis to make a business decision.

Understanding Python’s Role in Data Science

Given the right data sources, analysis requirements, and presentation needs, you can use Python for every part of the data science pipeline. In fact, that’s precisely what you do in this book. Every example uses Python to help you understand another part of the data science equation. Of all the languages you could choose for performing data science tasks, Python is the most flexible and capable because it supports so many third-party libraries devoted to the task. The following sections help you better understand why Python is such a good choice for many (if not most) data science needs.

Considering the shifting profile of data scientists

Some people view the data scientist as an unapproachable nerd who performs miracles on data with math. The data scientist is the person behind the curtain in an Oz-like experience. However, this perspective is changing. In many respects, the world now views the data scientist as either an adjunct to a developer or as a new type of developer. The ascendance of applications of all sorts that can learn is the essence of this change. For an application to learn, it has to be able to manipulate large databases and discover new patterns in them. In addition, the application must be able to create new data based on the old data — making an informed prediction of sorts. The new kinds of applications affect people in ways that would have seemed like science fiction just a few years ago. Of course, the most noticeable of these applications define the behaviors of robots that will interact far more closely with people tomorrow than they do today.

From a business perspective, the necessity of fusing data science and application development is obvious: Businesses must perform various sorts of analysis on the huge databases they have collected — to make sense of the information and use it to predict the future. In truth, however, the far greater impact of the melding of these two branches of science — data science and application development — will be felt in terms of creating altogether new kinds of applications, some of which aren’t even possible to imagine with clarity today. For example, new applications could help students learn with greater precision by analyzing their learning trends and creating new instructional methods that work for that particular student. This combination of sciences may also solve a host of medical problems that seem impossible to solve today — not only in keeping disease at bay, but also by solving problems, such as how to create truly usable prosthetic devices that look and act like the real thing.

Working with a multipurpose, simple, and efficient language

Many different ways are available for accomplishing data science tasks. This book covers only one of the myriad methods at your disposal. However, Python represents one of the few single-stop solutions that you can use to solve complex data science problems. Instead of having to use a number of tools to perform a task, you can simply use a single language, Python, to get the job done. The Python difference is the large number of scientific and math libraries created for it by third parties. Plugging in these libraries greatly extends Python and allows it to easily perform tasks that other languages could perform, but with great difficulty.

Python’s libraries are its main selling point; however, Python offers more than reusable code. The most important thing to consider with Python is that it supports four different coding styles:

Functional:

Treats every statement as a mathematical equation and avoids any form of state or mutable data. The main advantage of this approach is having no side effects to consider. In addition, this coding style lends itself better than the others to parallel processing because there is no state to consider. Many developers prefer this coding style for recursion and for lambda calculus.

Imperative:

Performs computations as a direct change to program state. This style is especially useful when manipulating data structures and produces elegant, but simple, code.

Object-oriented:

Relies on data fields that are treated as objects and manipulated only through prescribed methods. Python doesn’t fully support this coding form because it can’t implement features such as data hiding. However, this is a useful coding style for complex applications because it supports encapsulation and polymorphism. This coding style also favors code reuse.

Procedural:

Treats tasks as step-by-step iterations where common tasks are placed in functions that are called as needed. This coding style favors iteration, sequencing, selection, and modularization.
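To make the four styles concrete, here is one tiny task (summing a list) written four ways. This sketch is illustrative only; the class and function names are invented for the example.

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# Functional: express the computation as expressions, with no mutable state
functional_total = reduce(lambda a, b: a + b, numbers)

# Imperative: change program state directly, step by step
imperative_total = 0
for n in numbers:
    imperative_total += n

# Object-oriented: treat the data as an object manipulated through methods
class Accumulator:
    def __init__(self, values):
        self.values = values

    def total(self):
        return sum(self.values)

oo_total = Accumulator(numbers).total()

# Procedural: place the common task in a function that is called as needed
def add_all(values):
    total = 0
    for v in values:
        total += v
    return total

procedural_total = add_all(numbers)

print(functional_total, imperative_total, oo_total, procedural_total)
# 10 10 10 10
```

All four produce the same answer; the difference lies in how the computation is organized, which matters more as programs grow.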

Learning to Use Python Fast

It’s time to try using Python to see the data science pipeline in action. The following sections provide a brief overview of the process you explore in detail in the rest of the book. You won’t actually perform the tasks in the following sections. In fact, you don’t install Python until Chapter 3, so for now, just follow along in the text. This book uses a specific version of Python and an IDE called Jupyter Notebook, so please wait until Chapter 3 to install these features (or skip ahead, if you insist, and install them now). (You can also use Google Colab with the source code in the book, as described in Chapter 4.) Don’t worry about understanding every aspect of the process at this point. The purpose of these sections is to help you gain an understanding of the flow of using Python to perform data science tasks. Many of the details may seem difficult to understand at this point, but the rest of the book will help you understand them.

The examples in this book rely on a web-based application named Jupyter Notebook. The screenshots you see in this and other chapters reflect how Jupyter Notebook looks in Chrome on a Windows 10/11 system. The view you see will contain the same data, but the actual interface may differ a little depending on platform (such as using a notebook instead of a desktop system), operating system, and browser. Don’t worry if you see some slight differences between your display and the screenshots in the book.

You don’t have to type the source code for this chapter in by hand. In fact, it’s a lot easier if you use the downloadable source (see the Introduction for details on downloading the source code). The source code for this chapter appears in the P4DS4D3_01_Quick_Overview.ipynb source code file.

Loading data

Before you can do anything, you need to load some data. The book shows you all sorts of methods for performing this task. In this case, Figure 1-1 shows how to load a dataset called California Housing that contains housing prices and other facts about houses in California. It was obtained from the StatLib repository (see https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html for details). The code places the entire dataset in the housing variable and then places parts of that data in variables named X and y. Think of variables as you would storage boxes. The variables are important because they make it possible to work with the data. The output shows that the dataset contains 20,640 entries with eight features each. The second output shows the name of each of the features.
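Figure 1-1 itself isn’t reproduced here, but the loading pattern it describes is easy to sketch. The book’s example fetches California Housing (which requires a download); the sketch below shows the same pattern using scikit-learn’s bundled diabetes dataset instead so that it runs offline — that dataset substitution is this sketch’s assumption, while the variable names housing, X, and y mirror the description above.

```python
from sklearn.datasets import load_diabetes

# The book loads California Housing; the bundled diabetes dataset
# follows the same Bunch-object pattern without needing a download.
housing = load_diabetes()            # the entire dataset in one variable
X, y = housing.data, housing.target  # features and target in their own "boxes"

print(X.shape)               # (442, 10): number of entries and features
print(housing.feature_names)  # the name of each feature
```

The California Housing version prints (20640, 8) instead, matching the counts quoted in the text.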

Training a model

Now that you have some data to work with, you can do something with it. All sorts of algorithms are built into Python. Figure 1-2 shows a linear regression model. Again, don't worry about precisely how this works; later chapters discuss linear regression in detail. The important thing to note in Figure 1-2 is that Python lets you perform the linear regression using just two statements and place the result in a variable named hypothesis.

FIGURE 1-1: Loading data into variables so that you can manipulate it.

FIGURE 1-2: Using the variable content to train a linear regression model.
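As a hedged sketch of the two statements the text describes, the code below fits scikit-learn’s LinearRegression. The synthetic data standing in for the California Housing features is this sketch’s invention; only the hypothesis variable name comes from the text.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the real data: 100 entries with 8 features each
rng = np.random.default_rng(0)
X = rng.random((100, 8))
y = X @ np.arange(1.0, 9.0) + rng.normal(scale=0.1, size=100)

# The "two statements": create the model object, then fit it to the data
hypothesis = LinearRegression()
hypothesis.fit(X, y)

print(hypothesis.coef_)  # one learned coefficient per feature
```

After fitting, hypothesis holds everything the model learned, ready for inspection or prediction.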

Viewing a result

Performing any sort of analysis doesn’t pay unless you obtain some benefit from it in the form of a result. This book shows all sorts of ways to view output, but Figure 1-3 starts with something simple. In this case, you see the coefficient output from the linear regression analysis. Notice that there is one coefficient for each of the dataset features.

FIGURE 1-3: Outputting a result as a response to the model.

One of the reasons that this book uses Jupyter Notebook is that the product helps you to create nicely formatted output as part of creating the application. Look again at Figure 1-3, and you see a report that you could simply print and offer to a colleague. The output isn’t suitable for many people, but those experienced with Python and data science will find it quite usable and informative.
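A rough, self-contained sketch of that kind of coefficient report follows, assuming scikit-learn and substituting its bundled diabetes dataset for California Housing so the code runs offline; the formatting choices are this sketch’s own.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

data = load_diabetes()
model = LinearRegression().fit(data.data, data.target)

# One coefficient per dataset feature, matched by position
for name, coef in zip(data.feature_names, model.coef_):
    print(f"{name:>4}: {coef:12.2f}")
```

In a notebook, output like this appears directly beneath the code cell, which is what makes the result easy to hand to a colleague.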

Chapter 2

Introducing Python’s Capabilities and Wonders

IN THIS CHAPTER

Getting a quick start with Python

Considering Python’s special features

Defining and exploring the power of Python for the data scientist

All computers run on just one kind of language — machine code. However, unless you want to learn how to talk like a computer in 0s and 1s, machine code isn’t particularly useful. You’d never want to try to define data science problems using machine code. It would take an entire lifetime (if not longer) just to define one problem. Higher-level languages make it possible to write a lot of code that humans can understand quite quickly. The tools used with these languages make it possible to translate the human-readable code into machine code that the machine understands. Therefore, the choice of languages depends on the human need, not the machine need. With this in mind, this chapter introduces you to the capabilities that Python provides that make it a practical choice for the data scientist. After all, you want to know why this book uses Python and not another language, such as Java or C++. These other languages are perfectly good choices for some tasks, but they’re not as suited to meet data science needs.

The chapter begins with some simple Python examples to give you a taste for the language. As part of exploring Python in this chapter, you discover all sorts of interesting features that Python provides. Python gives you access to a host of libraries that are especially suited to meet the needs of the data scientist. In fact, you use a number of these libraries throughout the book as you work through the coding examples. Knowing about these libraries in advance will help you understand the programming examples and why the book shows how to perform tasks in a certain way.

Even though this chapter shows examples of working with Python, you don’t really begin using Python in earnest until Chapter 6. This chapter offers an overview so that you can better understand what Python can do. Chapter 3 shows how to install the particular version of Python used for this book. Chapters 4 and 5 are about tools you can use, with Chapter 4 emphasizing Google Colab, an alternative environment for coding. In short, if you don’t quite understand an example in this chapter, don’t worry: You get plenty of additional information in later chapters.

Working with Python