Building Machine Learning Projects with TensorFlow

Rodolfo Bonnin
Description

Engaging projects that will teach you how complex data can be exploited to gain the most insight

About This Book

  • Bored of too much theory on TensorFlow? This book is what you need! Thirteen solid projects and four examples teach you how to implement TensorFlow in production.
  • This example-rich guide teaches you how to perform highly accurate and efficient numerical computing with TensorFlow.
  • It is a practical and methodically explained guide that allows you to apply TensorFlow's features from the very beginning.

Who This Book Is For

This book is for data analysts, data scientists, and researchers who want to increase the speed and efficiency of their machine learning activities and results. Anyone looking for a fresh guide to complex numerical computations with TensorFlow will find this an extremely helpful resource. This book is also for developers who want to implement TensorFlow in production in various scenarios. Some experience with C++ and Python is expected.

What You Will Learn

  • Load, interact, dissect, process, and save complex datasets
  • Solve classification and regression problems using state-of-the-art techniques
  • Predict the outcome of a simple time series using Linear Regression modeling
  • Use a Logistic Regression scheme to predict the future result of a time series
  • Classify images using deep neural network schemes
  • Tag a set of images and detect features using a deep neural network, including a Convolutional Neural Network (CNN) layer
  • Resolve character recognition problems using the Recurrent Neural Network (RNN) model

In Detail

This book of projects highlights how TensorFlow can be used in different scenarios - this includes projects for training models, machine learning, deep learning, and working with various neural networks. Each project provides exciting and insightful exercises that will teach you how to use TensorFlow and show you how layers of data can be explored by working with Tensors. Simply pick a project that is in line with your environment and get stacks of information on how to implement TensorFlow in production.

Style and approach

This book is a practical guide to implementing TensorFlow in production. It explores various scenarios in which you could use TensorFlow and shows you how to use it in the context of real-world projects. This will not only give you an upper hand in the field, but also show you the potential for innovative uses of TensorFlow in your environment. This guide opens the door to second-generation machine learning and numerical computation: a must-have for your bookshelf!


Page count: 202

Publication year: 2016




Table of Contents

Building Machine Learning Projects with TensorFlow
Credits
About the Author
About the Reviewer
www.PacktPub.com
Why subscribe?
Customer Feedback
Preface
What this book covers 
What you need for this book 
Who this book is for 
Conventions 
Reader feedback
Customer support
Downloading the example code 
Errata
Piracy
Questions
1. Exploring and Transforming Data
TensorFlow's main data structure - tensors
Tensor properties - ranks, shapes, and types
Tensor rank
Tensor shape
Tensor data types
Creating new tensors
From numpy to tensors and vice versa
Getting things done - interacting with TensorFlow
Handling the computing workflow - TensorFlow's data flow graph
Computation graph building
Useful operation object methods
Feeding
Variables
Variable initialization
Saving data flow graphs
Graph serialization language - protocol buffers
Useful methods
Example graph building
Running our programs - Sessions
Basic tensor methods
Simple matrix operations
Reduction
Tensor segmentation
Sequences
Tensor shape transformations
Tensor slicing and joining
Dataflow structure and results visualization - TensorBoard
Command line use
How TensorBoard works
Adding Summary nodes
Common Summary operations
Special Summary functions
Interacting with TensorBoard's GUI
Reading information from disk
Tabulated formats - CSV
The Iris dataset
Reading image data
Loading and processing the images
Reading from the standard TensorFlow format
Summary
2. Clustering
Learning from data - unsupervised learning
Clustering
k-means
Mechanics of k-means
Algorithm iteration criterion
k-means algorithm breakdown
Pros and cons of k-means
k-nearest neighbors
Mechanics of k-nearest neighbors
Pros and cons of k-nn
Practical examples with useful libraries
matplotlib plotting library
Sample synthetic data plotting
scikit-learn dataset module
About the scikit-learn library
Synthetic dataset types
Blobs dataset
Employed method
Circle dataset
Employed method
Moon dataset
Project 1 - k-means clustering on synthetic datasets
Dataset description and loading
Generating the dataset
Model architecture
Loss function description and optimizer loop
Stop condition
Results description
Full source code
k-means on circle synthetic data
Project 2 - nearest neighbor on synthetic datasets
Dataset generation
Model architecture
Loss function description
Stop condition
Results description
Full source code
Summary
3. Linear Regression
Univariate linear modelling function
Sample data generation
Determination of the cost function
Least squares
Minimizing the cost function
General minima for least squares
Iterative methods - gradient descent
Example section
Optimizer methods in TensorFlow - the train module
The tf.train.Optimizer class
Other Optimizer instance types
Example 1 - univariate linear regression
Dataset description
Model architecture
Cost function description and Optimizer loop
Stop condition
Results description
Reviewing results with TensorBoard
Full source code
Example - multivariate linear regression
Useful libraries and methods
Pandas library
Dataset description
Model architecture
Loss function description and Optimizer loop
Stop condition
Results description
Full source code
Summary
4. Logistic Regression
Problem description
Logistic function predecessor - the logit functions
Bernoulli distribution
Link function
Logit function
The importance of the logit inverse
The logistic function
Logistic function as a linear modeling generalization
Final estimated regression equation
Properties of the logistic function
Loss function
Multiclass application - softmax regression
Cost function
Data normalization for iterative methods
One hot representation of outputs
Example 1 - univariate logistic regression
Useful libraries and methods
TensorFlow's softmax implementation
Dataset description and loading
The CHDAGE dataset
CHDAGE dataset format
Dataset loading and preprocessing implementation
Model architecture
Loss function description and optimizer loop
Stop condition
Results description
Fitting function representations across epochs
Full source code
Graphical representation
Example 2 - Univariate logistic regression with skflow
Useful libraries and methods
Dataset description
Model architecture
Results description
Full source code
Summary
5. Simple FeedForward Neural Networks
Preliminary concepts
Artificial neurons
Original example - the Perceptron
Perceptron algorithm
Neural network layers
Neural Network activation functions
Gradients and the back propagation algorithm
Minimizing loss function: Gradient descent
Neural networks problem choice - Classification vs Regression
Useful libraries and methods
TensorFlow activation functions
TensorFlow loss optimization methods
Sklearn preprocessing utilities
First project - Non linear synthetic function regression
Dataset description and loading
Dataset preprocessing
Modeling architecture - Loss Function description
Loss function optimizer
Accuracy and Convergence test
Example code
Results description
Second project - Modeling cars fuel efficiency with non linear regression
Dataset description and loading
Dataset preprocessing
Modeling architecture
Convergence test
Results description
Third project - Learning to classify wines: Multiclass classification
Dataset description and loading
Dataset preprocessing
Modeling architecture
Loss function description
Loss function optimizer
Convergence test
Results description
Full source code
Summary
6. Convolutional Neural Networks
Origin of convolutional neural networks
Getting started with convolution
Continuous convolution
Discrete convolution
Kernels and convolutions
Interpretation of the convolution operations
Applying convolution in TensorFlow
Other convolutional operations
Sample code - applying convolution to a grayscale image
Sample kernels results
Subsampling operation - pooling
Properties of subsampling layers
Invariance property
Subsampling layers implementation performance
Applying pool operations in TensorFlow
Other pool operations
Sample code
Improving efficiency - dropout operation
Applying the dropout operation in TensorFlow
Sample code
Convolutional type layer building methods
Convolutional layer
Subsampling layer
Example 1 - MNIST digit classification
Dataset description and loading
Dataset preprocessing
Modelling architecture
Loss function description
Loss function optimizer
Accuracy test
Result description
Full source code
Example 2 - image classification with the CIFAR10 dataset
Dataset description and loading
Dataset preprocessing
Modelling architecture
Loss function description and optimizer
Training and accuracy tests
Results description
Full source code
Summary
7. Recurrent Neural Networks and LSTM
Recurrent neural networks
Exploding and vanishing gradients
LSTM neural networks
The gate operation - a fundamental component
Operation steps
Part 1 - set values to forget (input gate)
Part 2 - set values to keep, change state
Part 3 - output filtered cell state
Other RNN architectures
TensorFlow LSTM useful classes and methods
class tf.nn.rnn_cell.BasicLSTMCell
class MultiRNNCell(RNNCell)
learn.ops.split_squeeze(dim, num_split, tensor_in)
Example 1 - univariate time series prediction with energy consumption data
Dataset description and loading
Dataset preprocessing
Modelling architecture
Loss function description
Convergence test
Results description
Full source code
Example 2 - writing music "a la" Bach
Character level models
Character sequences and probability representation
Encoding music as characters - the ABC music format
ABC format data organization
Useful libraries and methods
Saving and restoring variables and models
Loading and saving pseudocode
Variable saving
Variable restoring
Dataset description and loading
Network Training
Dataset preprocessing
Vocabulary definition
Modelling architecture
Loss function description
Stop condition
Results description
Full source code
Summary
8. Deep Neural Networks
Deep neural network definition
Deep network architectures through time
LeNet 5
AlexNet
Main features
The original inception model
GoogLeNet (Inception V1)
Batch normalized inception (V2)
Inception v3
Residual Networks (ResNet)
Other deep neural network architectures
Example - painting with style - VGG style transfer
Useful libraries and methods
Dataset description and loading
Dataset preprocessing
Modeling architecture
Loss functions
Content loss function
Style loss function
Loss optimization loop
Convergence test
Program execution
Full source code
Summary
9. Running Models at Scale – GPU and Serving
GPU support on TensorFlow
Log device placement and device capabilities
Querying the computing capabilities
Selecting a CPU for computing
Device naming
Example 1 - assigning an operation to the GPU
Example 2 - calculating Pi number in parallel
Solution implementation
Source code
Distributed TensorFlow
Technology components
Jobs
Tasks
Servers
Combined overview
Creating a TensorFlow cluster
ClusterSpec definition format
Creating tf.train.Server
Cluster operation - sending computing methods to tasks
Sample distributed code structure
Example 3 - distributed Pi calculation
Server script
Client script
Full source code
Example 4 - running a distributed model in a cluster
Sample code
Summary
10. Library Installation and Additional Tips
Linux installation
Initial requirements
Ubuntu preparation tasks (needed before any method)
Pip Linux installation method
CPU version
Testing your installation
GPU support
Virtualenv installation method
Environment test
Docker installation method
Installing Docker
Allowing Docker to run with a normal user
Reboot
Testing the Docker installation
Run the TensorFlow container
Linux installation from source
Installing the Git source code version manager
Git installation in Linux (Ubuntu 16.04)
Installing the Bazel build tool
Adding the Bazel distribution URI as a package source
Updating and installing Bazel
Installing GPU support (optional)
Installing CUDA system packages
Creating alternative locations
Installing cuDNN
Clone TensorFlow source
Configuring TensorFlow build
Building TensorFlow
Testing the installation
Windows installation
Classic Docker toolbox method
Installation steps
Downloading the Docker toolbox installer
Creating the Docker machine
MacOS X installation
Install pip
Summary

Building Machine Learning Projects with TensorFlow

Building Machine Learning Projects with TensorFlow

Copyright © 2016 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: November 2016

Production reference: 2220317

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham 

B3 2PB, UK.

ISBN 978-1-78646-658-7

www.packtpub.com

Credits

Author

Rodolfo Bonnin

Copy Editor

Safis Editing

Reviewer

Niko Gamulin

Project Coordinator

Nidhi Joshi 

Commissioning Editor

Veena Pagare

Proofreader

Safis Editing

Acquisition Editor

Namrata Patil 

Indexer

Mariammal Chettiyar 

Content Development Editor

Siddhesh Salvi

Graphics

Disha Haria

Technical Editor

Danish Shaikh

Dharmendra Yadav

Production Coordinator

Arvindkumar Gupta

About the Author

Rodolfo Bonnin is a systems engineer and PhD student at Universidad Tecnológica Nacional, Argentina. He has also pursued postgraduate courses in parallel programming and image understanding at the University of Stuttgart, Germany.

He has been doing research on high-performance computing since 2005 and began studying and implementing convolutional neural networks in 2008, writing a neural network feedforward stage that supports both CPUs and GPUs. More recently, he has been working in the field of fraud pattern detection with neural networks, and he is currently working on signal classification using machine learning techniques.

To my wife and kids and the patience they demonstrated during the writing of this book. Also to the reviewers, who helped give professionalism to this work, and Marcos Boaglio for facilitating equipment to cover the installation chapter. Ad Maiorem Dei Gloriam.

About the Reviewer

Niko Gamulin is a senior software engineer at CloudMondo, a US-based startup, where he develops and implements predictive behavior models for humans and systems. Previously, he developed deep learning models to solve various challenges. He received his PhD in Electrical Engineering from the University of Ljubljana in 2015. His research focused on the creation of machine learning models for churn prediction.

I would like to thank my wonderful daughter Agata, who inspires me to gain more understanding about the learning process, and Ana for being the best wife in the world.

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

  • Fully searchable across every book published by Packt
  • Copy and paste, print, and bookmark content
  • On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review.

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Preface

In recent years, machine learning has changed from a niche technology asset for scientific and theoretical experts to a ubiquitous theme in the day-to-day operations of the majority of the big players in the IT field.

This phenomenon started with the explosion in the volume of available data: during the second half of the 2000s, the advent of many kinds of cheap data capture devices (cellphones with integrated GPS, multi-megapixel cameras, and gravity sensors), and the popularization of new high-dimensional data capture (3D LIDAR and optical systems, the explosion of IoT devices, and so on), made it possible to access a volume of information never seen before.

Additionally, in the hardware field, the approaching limits of Moore's law prompted the development of massively parallel devices, which multiplied the computing power available for training models.

These advances in both hardware and data availability allowed researchers to revisit the work of pioneers in human-vision-based neural network architectures (convolutional neural networks, among others) and to find many new problems to apply them to, thanks to the general availability of data and computing capabilities.

To solve these new kinds of problems, a new interest in creating state-of-the-art machine learning packages was born, with players such as Keras, scikit-learn, Theano, Caffe, and Torch, each with its own vision of the way machine learning models should be defined, trained, and executed.

On 9 November 2015, Google entered the public machine learning arena, deciding to open-source its own machine learning framework, TensorFlow, on which many of its internal projects were based. This first 0.5 release had a number of shortcomings in comparison with other libraries, several of which were addressed later, especially the possibility of running distributed models.

So this little story brings us to the present day, when TensorFlow is one of the main contenders for interested developers; as the number of projects using it as a base increases, so does its importance in the toolbox of any data science practitioner.

In this book, we will implement a wide variety of models using the TensorFlow library, aiming for a low barrier to entry and providing a detailed approach to the problem solutions.

What this book covers 

Chapter 1, Exploring and Transforming Data, guides the reader in understanding the main components of a TensorFlow application and the main data-exploring methods included.

Chapter 2, Clustering, shows how to group different kinds of data elements according to a previously defined similarity criterion.

Chapter 3, Linear Regression, allows the reader to define the first mathematical model to explain diverse phenomena.

Chapter 4, Logistic Regression, is the first step in modeling non-linear phenomena with a very powerful and simple mathematical function.

Chapter 5, Simple Feedforward Neural Networks, allows you to comprehend the main components and mechanisms of neural networks.

Chapter 6, Convolutional Neural Networks, explains the functioning and practical application of this recently rediscovered set of special networks.

Chapter 7, Recurrent Neural Networks, gives a detailed explanation of this very useful architecture for time series data.

Chapter 8, Deep Neural Networks, offers an overview of the latest developments on mixed layer type neural networks.

Chapter 9, Running Models at Scale – GPU and Serving, explains the ways of tackling problems of greater complexity by dividing the work among coordinated units.

Chapter 10, Library Installation and Additional Tips, covers the installation of TensorFlow on Linux, Windows, and Mac architectures, and presents you with some useful code tricks that will ease day-to-day tasks.

What you need for this book 

Software required (with version): TensorFlow 0.10, Jupyter Notebook

Hardware specifications: Any x86 computer

OS required: Ubuntu Linux 16.04
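A quick way to confirm your environment meets these requirements is to check which of the listed packages your Python installation can import. This is a minimal sketch: the package names checked here (`tensorflow`, `jupyter`) are assumptions based on the requirements above, and the snippet does not verify the specific 0.10 version the book targets.

```python
import importlib.util

def is_installed(pkg):
    """Return True if the named package can be imported in this environment."""
    return importlib.util.find_spec(pkg) is not None

# The book's examples assume TensorFlow 0.10 and Jupyter Notebook.
for pkg in ("tensorflow", "jupyter"):
    print(pkg, "->", "found" if is_installed(pkg) else "missing")
```

If either package is reported missing, Chapter 10 covers the installation options (pip, Virtualenv, Docker, and building from source).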

Who this book is for 

This book is for data analysts, data scientists, and researchers who want to make the results of their machine learning activities faster and more efficient. Those who want a crisp guide to complex numerical computations with TensorFlow will find the book extremely helpful. This book is also for developers who want to implement TensorFlow in production in various scenarios. Some experience with C++ and Python is expected.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code 

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR / 7-Zip for Windows
  • Zipeg / iZip / UnRarX for Mac
  • 7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Building-Machine-Learning-Projects-with-TensorFlow. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books (maybe a mistake in the text or the code), we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.