Over 60 practical recipes on data exploration and analysis
If you are a beginner or intermediate-level professional looking to solve your day-to-day analytical problems with Python, this book is for you. Even with no prior programming or data analytics experience, you will be able to finish each recipe and learn while doing so.
Data analysis is the process of systematically applying statistical and logical techniques to describe and illustrate, condense and recap, and evaluate data. Its importance has been most visible in the information and communication technologies sector, but it is an essential skill in almost all sectors of the economy.
This book provides a rich set of independent recipes that dive into the world of data analytics and modeling using a variety of approaches, tools, and algorithms. You will learn the basics of data handling and modeling, and will build your skills gradually toward more advanced topics such as simulations, raw text processing, social interactions analysis, and more.
First, you will learn some easy-to-follow practical techniques on how to read, write, clean, reformat, explore, and understand your data—arguably the most time-consuming (and the most important) tasks for any data scientist.
In the second section, different independent recipes delve into intermediate topics such as classification, clustering, and prediction. With the help of these easy-to-follow recipes, you will also learn techniques that can easily be extended to solve other real-life problems, such as building recommendation engines or predictive models.
In the third section, you will explore more advanced topics: from graph theory, through natural language processing and discrete choice modeling, to simulations. You will also learn to identify the origins of fraud with the help of graphs, scrape websites, and classify movies based on their reviews.
By the end of this book, you will be able to efficiently use the vast array of tools that the Python environment has to offer.
This hands-on recipe guide is divided into three sections that tackle and overcome real-world data modeling problems faced by data analysts and scientists in their everyday work. Each independent recipe is written in an easy-to-follow, step-by-step fashion.
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: April 2016
Production reference: 1250416
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78355-166-8
www.packtpub.com
Author
Tomasz Drabas
Reviewers
Brett Bloomquist
Khaled Tannir
Commissioning Editor
Dipika Gaonkar
Acquisition Editor
Prachi Bisht
Content Development Editor
Pooja Mhapsekar
Technical Editor
Bharat Patil
Copy Editor
Tasneem Fatehi
Project Coordinator
Francina Pinto
Proofreader
Safis Editing
Indexer
Mariammal Chettiyar
Production Coordinator
Nilesh R. Mohite
Cover Work
Nilesh R. Mohite
Tomasz Drabas is a data scientist working for Microsoft and currently residing in the Seattle area. He has over 12 years of international experience in data analytics and data science in numerous fields, such as advanced technology, airlines, telecommunications, finance, and consulting.
Tomasz started his career in 2003 with LOT Polish Airlines in Warsaw, Poland, while finishing his master's degree in strategy management. In 2007, he moved to Sydney to pursue a doctoral degree in operations research at the University of New South Wales, School of Aviation; his research crossed boundaries between discrete choice modeling and airline operations research. During his time in Sydney, he worked as a data analyst for Beyond Analysis Australia and as a senior data analyst/data scientist for Vodafone Hutchison Australia, among others. He has also published scientific papers, attended international conferences, and served as a reviewer for scientific journals.
In 2015, he relocated to Seattle to begin his work for Microsoft. There, he works on numerous projects that involve solving problems in high-dimensional feature spaces.
First and foremost, I would like to thank my wife, Rachel, and daughter, Skye, for encouraging me to undertake this challenge and tolerating long days of developing code and late nights of writing up. You are the best and I love you beyond bounds! Also, thanks to my family for putting up with me (in general).
Tomasz Bednarz has not only been a great friend but also a great mentor when I was learning programming—thank you! I also want to thank my current and former managers, Mike Stephenson and Rory Carter, as well as numerous colleagues and friends who also encouraged me to finish this book.
Special thanks go to my two former supervisors, Dr Richard Cheng-Lung Wu and Dr Tomasz Jablonski. The master's project with Tomasz sparked my interest in neural networks—lessons that I will never forget. Without Richard's help, I would not have been able to finish my PhD; I will always be grateful for his help, guidance, and friendship.
Brett Bloomquist holds a BS in mathematics and an MS in computer science, specializing in computer-aided geometric design. He has 26 years of work experience in the software industry with a focus on geometric modeling algorithms and computer graphics. More recently, Brett has been applying his mathematics and visualization background as a principal data scientist.
Khaled Tannir is a visionary solution architect with more than 20 years of technical experience, focusing on big data technologies, data science, machine learning, and data mining since 2010.
He is widely recognized as an expert in these fields and has a bachelor's degree in electronics and a master's degree in system information architectures. He is working on completing his PhD.
Khaled has more than 15 certifications (R programming, big data, and many more) and is a Microsoft Certified Solution Developer (MCSD) and an avid technologist.
He has worked for many companies in France (and recently in Canada), leading the development and implementation of software solutions and giving technical presentations.
He is the author of the books RavenDB 2.x Beginner's Guide and Optimizing Hadoop MapReduce, both by Packt Publishing (and both translated into Simplified Chinese), and was a technical reviewer on the books Pentaho Analytics for MongoDB, MongoDB High Availability, and Learning Predictive Analytics with R, also by Packt Publishing.
He enjoys taking landscape and night photos, traveling, playing video games, creating fun electronic gadgets with Arduino, Raspberry Pi, and .NET Gadgeteer, and, of course, spending time with his wife and family.
You can connect with him on LinkedIn or reach him at <[email protected]>.
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.
Data analytics and data science have garnered a lot of attention from businesses around the world. The amount of data generated these days is mind-boggling, and it keeps growing every day; with the proliferation of mobile devices, access to Facebook, YouTube, Netflix, and other 4K video content providers, and the increasing reliance on cloud computing, we can only expect it to grow further.
The task of a data scientist is to clean, transform, and analyze the data in order to provide the business with insights about its customers and/or competitors, monitor the health of the services provided by the company, or automatically present recommendations to drive more opportunities for cross-selling (among many others).
In this book, you will learn how to read, write, clean, and transform data—the tasks that are the most time-consuming but also the most critical. We will then present a broad array of tools and techniques that any data scientist should master, ranging from classification, clustering, and regression, through graph theory and time-series analysis, to discrete choice modeling and simulations. In each chapter, we present an array of detailed examples written in Python that will help you tackle virtually any problem that you might encounter in your career as a data scientist.
Chapter 1, Preparing the Data, covers the process of reading and writing from and to various data formats and databases, as well as cleaning the data using OpenRefine and Python.
Chapter 2, Exploring the Data, describes various techniques that aid in understanding the data. We will see how to calculate distributions of variables and correlations between them and produce some informative charts.
Chapter 3, Classification Techniques, introduces several classification techniques, from simple Naïve Bayes classifiers to more sophisticated Neural Networks and Random Forests.
Chapter 4, Clustering Techniques, explains numerous clustering models; we start with the most common k-means method and finish with more advanced BIRCH and DBSCAN models.
Chapter 5, Reducing Dimensions, presents multiple dimensionality reduction techniques, starting with the most renowned PCA, through its kernel and randomized versions, to LDA.
Chapter 6, Regression Methods, covers many regression models, both linear and nonlinear. We also bring back random forests and SVMs (among others) as these can be used to solve either classification or regression problems.
Chapter 7, Time Series Techniques, explores the methods of handling and understanding time series data as well as building ARMA and ARIMA models.
Chapter 8, Graphs, introduces NetworkX and Gephi to handle, understand, visualize, and analyze data in the form of graphs.
Chapter 9, Natural Language Processing, describes various techniques related to the analytics of free-flow text: part-of-speech tagging, topic extraction, and classification of data in textual form.
Chapter 10, Discrete Choice Models, explains the choice modeling theory and some of the most popular models: the Multinomial, Nested, and Mixed Logit models.
Chapter 11, Simulations, covers the concepts of agent-based simulations; we simulate the functioning of a gas station, out-of-power occurrences for electric vehicles, and sheep-wolf predation scenarios.
For this book, you need a personal computer (it can be a Windows machine, Mac, or Linux) with an installed and configured Python 3.5 environment; we use the Anaconda distribution of Python, which can be downloaded from https://www.continuum.io/downloads.
Throughout this book, we use various Python modules: pandas, NumPy/SciPy, SciKit-Learn, MLPY, StatsModels, PyBrain, NLTK, BeautifulSoup, Optunity, Matplotlib, Seaborn, Bokeh, PyLab, OpenPyXl, PyMongo, SQLAlchemy, NetworkX, and SimPy. Most of these modules come preinstalled with Anaconda, but some need to be installed either via the conda installer or by downloading the module's source and using the python setup.py install command. It is fine if some of these modules are not currently installed on your machine; we will guide you through the installation process.
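As a hedged illustration (the module name below is arbitrary and chosen purely for demonstration), a conda installation is a single command issued from your command line:

conda install seaborn

For modules distributed only as source, you would download and unpack the source and then run python setup.py install from its top-level folder.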
We also use several non-Python tools: OpenRefine to aid in data cleansing and analysis, D3.js to visualize data, Postgres and MongoDB databases to store data, Gephi to visualize graphs, and PythonBiogeme to estimate discrete choice models. We will provide detailed installation instructions where needed.
This book is for everyone who wants to get into the data science field and needs to build up their skills on a set of examples that aim to tackle the problems faced in the corporate world. More advanced practitioners might also find some of the examples refreshing and the more advanced topics covered interesting.
In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
Getting ready: This section tells you what to expect in the recipe and describes how to set up any software or any preliminary settings required for the recipe.
How to do it: This section contains the steps required to follow the recipe.
How it works: This section usually consists of a detailed explanation of what happened in the previous section.
There's more: This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.
See also: This section provides helpful links to other useful information for the recipe.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
You can download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of an archive extraction tool, such as WinRAR or 7-Zip.
The code bundle for this book is also available on GitHub at https://github.com/drabastomek/practicalDataAnalysisCookbook/tree/master/Data.
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/practicaldataanalysiscookbook_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
In this chapter, we will cover the basic tasks of reading, storing, and cleaning data using Python and OpenRefine. You will learn recipes for reading and writing data in a variety of formats, storing it in relational and NoSQL databases, and cleaning it with OpenRefine.
For the following set of recipes, we will use Python to read data in various formats and store it in RDBMS and NoSQL databases.
All the source code and datasets that we will use in this book are available in the GitHub repository for this book. To clone the repository, open your terminal of choice (on Windows, you can use the command line, Cygwin, or Git Bash; on Linux/Mac, you can use Terminal) and issue the following command (in one line):
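As a sketch, assuming the repository URL given in the Downloading the example code section of the preface, the command will look like this:

git clone https://github.com/drabastomek/practicalDataAnalysisCookbook.git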
Note that you need Git installed on your machine. Refer to https://git-scm.com/book/en/v2/Getting-Started-Installing-Git for installation instructions.
In the following four sections, we will use a dataset that consists of 985 real estate transactions. The real estate sales took place in the Sacramento area over a period of five consecutive days. We downloaded the data from https://support.spatialkey.com/spatialkey-sample-csv-data/—specifically, http://samplecsvs.s3.amazonaws.com/Sacramentorealestatetransactions.csv. The data was then transformed into the various formats that are stored in the Data/Chapter01 folder of the GitHub repository.
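As a quick preview, here is a minimal sketch of loading the CSV variant of this dataset with pandas; the exact filename below is an assumption, so check the Data/Chapter01 folder of the repository for the actual names:

import pandas as pd

# filename assumed for illustration; see Data/Chapter01 in the repository
csv_read = pd.read_csv('Data/Chapter01/realEstate_trans.csv')
print(csv_read.head())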
In addition, you will learn how to retrieve information from HTML files. For this purpose, we will use the Wikipedia list of airports starting with the letter A, https://en.wikipedia.org/wiki/List_of_airports_by_IATA_code:_A.
To clean our dataset, we will use OpenRefine; it is a powerful tool to read, clean, and transform data.
Although not as popular as the previously discussed formats for storing large datasets, we sometimes find data in a table on a web page. These structures are normally enclosed within <table> </table> HTML tags. This recipe will show you how to retrieve such data from a web page.
In order to execute the following recipe, you need the pandas and re modules available. The re module is Python's regular expressions module, and we will use it to clean up the column names. Also, the read_html(...) method of pandas requires html5lib to be present on your computer. If you use the Anaconda distribution of Python, you can install it by issuing the following command from your command line:
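The command likely takes the following form (assuming the package is named html5lib in your default conda channel):

conda install html5lib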
Otherwise, you can download the module's source and install it using the python setup.py install command.
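To make the approach concrete, here is a minimal, hedged sketch (not necessarily the book's exact recipe) that retrieves a table with the read_html(...) method of pandas and cleans up the column names with re, using the Wikipedia airports page mentioned earlier:

import re
import pandas as pd

# read_html returns a list of DataFrames, one per <table> found on the page;
# parsing the HTML requires html5lib (or lxml) to be installed
url = 'https://en.wikipedia.org/wiki/List_of_airports_by_IATA_code:_A'
tables = pd.read_html(url, header=0)

# assume the first table on the page holds the airport data
airports = tables[0]

# replace runs of non-alphanumeric characters in the column names
# with underscores and drop any leading/trailing underscores
airports.columns = [re.sub(r'\W+', '_', str(col)).strip('_')
                    for col in airports.columns]
print(airports.head())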