Mastering TensorFlow 1.x - Armando Fandango

Description

Build, scale, and deploy deep neural network models using the star libraries in Python

Key Features

  • Delve into advanced machine learning and deep learning use cases using TensorFlow and Keras
  • Build, deploy, and scale end-to-end deep neural network models in a production environment
  • Learn to deploy TensorFlow on mobile, and distributed TensorFlow on GPUs, clusters, and Kubernetes

Book Description

TensorFlow is the most popular numerical computation library, built from the ground up for distributed, cloud, and mobile environments. It represents data as tensors and computation as graphs.

This book is a comprehensive guide that lets you explore the advanced features of TensorFlow 1.x. Gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet. Leverage the power of TensorFlow and Keras to build deep learning models, using concepts such as transfer learning, generative adversarial networks, and deep reinforcement learning. Throughout the book, you will obtain hands-on experience with varied datasets, such as MNIST, CIFAR-10, PTB, text8, and COCO-Images.

You will learn the advanced features of TensorFlow 1.x, such as distributed TensorFlow with TF Clusters, deploying production models with TensorFlow Serving, and building and deploying TensorFlow models for mobile and embedded devices on the Android and iOS platforms. You will see how to call the TensorFlow and Keras APIs from within the R statistical software, and learn the techniques required for debugging when TensorFlow API-based code does not work as expected.

The book helps you obtain in-depth knowledge of TensorFlow, making you the go-to person for solving artificial intelligence problems. By the end of this guide, you will have mastered the offerings of TensorFlow and Keras, and gained the skills you need to build smarter, faster, and more efficient machine learning and deep learning systems.

What you will learn

  • Master advanced concepts of deep learning such as transfer learning, reinforcement learning, generative models and more, using TensorFlow and Keras
  • Perform supervised (classification and regression) and unsupervised (clustering) learning to solve machine learning tasks
  • Build end-to-end deep learning (CNN, RNN, and Autoencoders) models with TensorFlow
  • Scale and deploy production models with distributed and high-performance computing on GPU and clusters
  • Build TensorFlow models to work with multilayer perceptrons using Keras, TFLearn, and R
  • Learn the functionalities of smart apps by building and deploying TensorFlow models on iOS and Android devices
  • Supercharge TensorFlow with distributed training and deployment on Kubernetes and TensorFlow Clusters

Who this book is for

This book is for data scientists, machine learning engineers, artificial intelligence engineers, and all TensorFlow users who wish to upgrade their TensorFlow knowledge and work on various machine learning and deep learning problems. If you are looking for an easy-to-follow guide that underlines the intricacies and complex use cases of machine learning, you will find this book extremely useful. Some basic understanding of TensorFlow is required to get the most out of the book.

Armando Fandango creates AI-empowered products by leveraging his expertise in deep learning, computational methods, and distributed computing. He advises Owen.ai Inc on AI product strategy. He founded NeuraSights Inc. with the goal of creating insights using neural networks. He is the founder of Vets2Data Inc., a non-profit organization assisting US military veterans in building AI skills. Armando has authored books titled Python Data Analysis - 2nd Edition and Mastering TensorFlow and published research in international journals and conferences.

Page count: 384

Publication year: 2018



Mastering TensorFlow 1.x

Advanced machine learning and deep learning concepts using TensorFlow 1.x and Keras

Armando Fandango

BIRMINGHAM - MUMBAI

Mastering TensorFlow 1.x

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Sunith Shetty
Acquisition Editor: Tushar Gupta
Content Development Editor: Tejas Limkar
Technical Editor: Danish Shaikh
Copy Editors: Safis Editing
Project Coordinator: Manthan Patel
Proofreader: Safis Editing
Indexer: Rekha Nair
Graphics: Tania Dutta
Production Coordinator: Aparna Bhagat

First published: January 2018

Production reference: 1190118

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

ISBN 978-1-78829-206-1

www.packtpub.com

mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

  • Spend less time learning and more time coding with practical eBooks and videos from over 4,000 industry professionals
  • Improve your learning with Skill Plans built especially for you
  • Get a free eBook or video every month
  • Mapt is fully searchable
  • Copy and paste, print, and bookmark content

PacktPub.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Foreword

TensorFlow and Keras are a key part of the "Data Science for Internet of Things" course, which I teach at the University of Oxford. My TensorFlow journey started with Keras. Over time, in our course, we increasingly gravitated towards core TensorFlow in addition to Keras. I believe many people's 'TensorFlow journey' will follow this trajectory.

Armando Fandango's book "Mastering TensorFlow 1.x" provides a road map for this journey. The book is an ambitious undertaking, interweaving Keras and core TensorFlow libraries. It delves into complex themes and libraries such as Sonnet, distributed TensorFlow with TF Clusters, deploying production models with TensorFlow Serving, TensorFlow mobile, and TensorFlow for embedded devices.

In that sense, this is an advanced book. But the author covers deep learning models such as RNN, CNN, autoencoders, generative adversarial models, and deep reinforcement learning through Keras. Armando has clearly drawn upon his experience to make this complex journey easier for readers.

I look forward to increased adoption of this book and learning from it.

Ajit Jaokar

Data Science for IoT Course Creator and Lead Tutor at the University of Oxford / Principal Data Scientist

Contributors

About the author

Armando Fandango creates AI-empowered products by leveraging his expertise in deep learning, computational methods, and distributed computing. He advises Owen.ai Inc on AI product strategy. He founded NeuraSights Inc. with the goal of creating insights using neural networks. He is the founder of Vets2Data Inc., a non-profit organization assisting US military veterans in building AI skills.

Armando has authored books titled Python Data Analysis - 2nd Edition and Mastering TensorFlow and published research in international journals and conferences.

I would like to thank Dr. Paul Wiegand (UCF), Dr. Brian Goldiez (UCF), Tejas Limkar (Packt), and Tushar Gupta (Packt) for helping me complete this book. This work would not have been possible without their inspiration.

About the reviewer

Nick McClure is currently a senior data scientist at PayScale Inc in Seattle, Washington, USA. Previously, he worked at Zillow and Caesar's Entertainment. He has degrees in applied mathematics from the University of Montana and the College of Saint Benedict and Saint John's University. He has also authored TensorFlow Machine Learning Cookbook by Packt. He has a passion for learning, and advocates for analytics, machine learning, and artificial intelligence. He occasionally shares his thoughts and musings on his blog, fromdata.org, and through his Twitter account, @nfmcclure.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Conventions used

Get in touch

Reviews

TensorFlow 101

What is TensorFlow?

TensorFlow core

Code warm-up - Hello TensorFlow

Tensors

Constants

Operations

Placeholders

Creating tensors from Python objects

Variables

Tensors generated from library functions

Populating tensor elements with the same values

Populating tensor elements with sequences

Populating tensor elements with a random distribution

Getting Variables with tf.get_variable()

Data flow graph or computation graph

Order of execution and lazy loading

Executing graphs across compute devices - CPU and GPGPU

Placing graph nodes on specific compute devices

Simple placement

Dynamic placement

Soft placement

GPU memory handling

Multiple graphs

TensorBoard

A TensorBoard minimal example

TensorBoard details

Summary

High-Level Libraries for TensorFlow

TF Estimator - previously TF Learn

TF Slim

TFLearn

Creating the TFLearn Layers

TFLearn core layers

TFLearn convolutional layers

TFLearn recurrent layers

TFLearn normalization layers

TFLearn embedding layers

TFLearn merge layers

TFLearn estimator layers

Creating the TFLearn Model

Types of TFLearn models

Training the TFLearn Model

Using the TFLearn Model

PrettyTensor

Sonnet

Summary

Keras 101

Installing Keras

Neural Network Models in Keras

Workflow for building models in Keras

Creating the Keras model

Sequential API for creating the Keras model

Functional API for creating the Keras model

Keras Layers

Keras core layers

Keras convolutional layers

Keras pooling layers

Keras locally-connected layers

Keras recurrent layers

Keras embedding layers

Keras merge layers

Keras advanced activation layers

Keras normalization layers

Keras noise layers

Adding Layers to the Keras Model

Sequential API to add layers to the Keras model

Functional API to add layers to the Keras Model

Compiling the Keras model

Training the Keras model

Predicting with the Keras model

Additional modules in Keras

Keras sequential model example for MNIST dataset

Summary

Classical Machine Learning with TensorFlow

Simple linear regression

Data preparation

Building a simple regression model

Defining the inputs, parameters, and other variables

Defining the model

Defining the loss function

Defining the optimizer function

Training the model

Using the trained model to predict

Multi-regression

Regularized regression

Lasso regularization

Ridge regularization

ElasticNet regularization

Classification using logistic regression

Logistic regression for binary classification

Logistic regression for multiclass classification

Binary classification

Multiclass classification

Summary

Neural Networks and MLP with TensorFlow and Keras

The perceptron

MultiLayer Perceptron

MLP for image classification

TensorFlow-based MLP for MNIST classification

Keras-based MLP for MNIST classification

TFLearn-based MLP for MNIST classification

Summary of MLP with TensorFlow, Keras, and TFLearn

MLP for time series regression

Summary

RNN with TensorFlow and Keras

Simple Recurrent Neural Network

RNN variants

LSTM network

GRU network

TensorFlow for RNN

TensorFlow RNN Cell Classes

TensorFlow RNN Model Construction Classes

TensorFlow RNN Cell Wrapper Classes

Keras for RNN

Application areas of RNNs

RNN in Keras for MNIST data

Summary

RNN for Time Series Data with TensorFlow and Keras

Airline Passengers dataset

Loading the airpass dataset

Visualizing the airpass dataset

Preprocessing the dataset for RNN models with TensorFlow

Simple RNN in TensorFlow

LSTM in TensorFlow

GRU in TensorFlow

Preprocessing the dataset for RNN models with Keras

Simple RNN with Keras

LSTM with Keras

GRU with Keras

Summary

RNN for Text Data with TensorFlow and Keras

Word vector representations

Preparing the data for word2vec models

Loading and preparing the PTB dataset

Loading and preparing the text8 dataset

Preparing the small validation set

skip-gram model with TensorFlow

Visualize the word embeddings using t-SNE

skip-gram model with Keras

Text generation with RNN models in TensorFlow and Keras

Text generation LSTM in TensorFlow

Text generation LSTM in Keras

Summary

CNN with TensorFlow and Keras

Understanding convolution

Understanding pooling

CNN architecture pattern - LeNet

LeNet for MNIST data

LeNet CNN for MNIST with TensorFlow

LeNet CNN for MNIST with Keras

LeNet for CIFAR10 Data

ConvNets for CIFAR10 with TensorFlow

ConvNets for CIFAR10 with Keras

Summary

Autoencoder with TensorFlow and Keras

Autoencoder types

Stacked autoencoder in TensorFlow

Stacked autoencoder in Keras

Denoising autoencoder in TensorFlow

Denoising autoencoder in Keras

Variational autoencoder in TensorFlow

Variational autoencoder in Keras

Summary

TensorFlow Models in Production with TF Serving

Saving and Restoring models in TensorFlow

Saving and restoring all graph variables with the saver class

Saving and restoring selected variables with the saver class

Saving and restoring Keras models

TensorFlow Serving

Installing TF Serving

Saving models for TF Serving

Serving models with TF Serving

TF Serving in the Docker containers

Installing Docker

Building a Docker image for TF serving

Serving the model in the Docker container

TensorFlow Serving on Kubernetes

Installing Kubernetes

Uploading the Docker image to the dockerhub

Deploying in Kubernetes

Summary

Transfer Learning and Pre-Trained Models

ImageNet dataset

Retraining or fine-tuning models

COCO animals dataset and pre-processing images

VGG16 in TensorFlow

Image classification using pre-trained VGG16 in TensorFlow

Image preprocessing in TensorFlow for pre-trained VGG16

Image classification using retrained  VGG16 in TensorFlow

VGG16 in Keras

Image classification using pre-trained VGG16 in Keras

Image classification using retrained VGG16 in Keras

Inception v3 in TensorFlow

Image classification using Inception v3 in TensorFlow

Image classification using retrained Inception v3 in TensorFlow

Summary

Deep Reinforcement Learning

OpenAI Gym 101

Applying simple policies to a cartpole game

Reinforcement learning 101

Q function (learning to optimize when the model is not available)

Exploration and exploitation in the RL algorithms

V function (learning to optimize when the model is available)

Reinforcement learning techniques

Naive Neural Network policy for Reinforcement Learning

Implementing Q-Learning

Initializing and discretizing for Q-Learning

Q-Learning with Q-Table

Q-Learning with Q-Network or Deep Q-Network (DQN)

Summary

Generative Adversarial Networks

Generative Adversarial Networks 101

Best practices for building and training GANs

Simple GAN with TensorFlow

Simple GAN with Keras

Deep Convolutional GAN with TensorFlow and Keras

Summary

Distributed Models with TensorFlow Clusters

Strategies for distributed execution

TensorFlow clusters

Defining cluster specification

Create the server instances

Define the parameter and operations across servers and devices

Define and train the graph for asynchronous updates

Define and train the graph for synchronous updates

Summary

TensorFlow Models on Mobile and Embedded Platforms

TensorFlow on mobile platforms

TF Mobile in Android apps

TF Mobile demo on Android

TF Mobile in iOS apps

TF Mobile demo on iOS

TensorFlow Lite

TF Lite Demo on Android

TF Lite demo on iOS

Summary

TensorFlow and Keras in R

Installing TensorFlow and Keras packages in R

TF core API in R

TF estimator API in R

Keras API in R

TensorBoard in R

The tfruns package in R

Summary

Debugging TensorFlow Models

Fetching tensor values with tf.Session.run()

Printing tensor values with tf.Print()

Asserting on conditions with tf.Assert()

Debugging with the TensorFlow debugger (tfdbg)

Summary

Tensor Processing Units

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

Google's TensorFlow has become a major player and a go-to tool for developers who want to bring smart processing into their applications. It is now a central research and engineering tool in many organizations, so there is a growing need to learn the advanced use cases of TensorFlow that can be implemented across software and devices to build intelligent systems. With its frequent updates and bug fixes, mastering TensorFlow has become a necessity for creating advanced machine learning and deep learning applications.

Mastering TensorFlow 1.x will help you learn the advanced features TensorFlow has to offer. The book distills the key information needed to enter the world of artificial intelligence, taking intermediate TensorFlow users to the next level. From implementing advanced computations to trending real-world research areas, this book covers it all. Come to grips with this comprehensive guide to establish yourself in the developer community, and you will have a platform from which to contribute to research work and projects.

Who this book is for

This book is for anyone who wants to build or upgrade their skills in applying TensorFlow to deep learning problems. Those who are looking for an easy-to-follow guide that underlines the intricacies and complex use cases of deep learning will find this book useful. A basic understanding of TensorFlow and Python is required to get the most out of the book.

What this book covers

Chapter 1, TensorFlow 101, recaps the basics of TensorFlow, such as how to create tensors, constants, variables, placeholders, and operations. We learn about computation graphs and how to place computation graph nodes on various devices such as GPU. We also learn how to use TensorBoard to visualize various intermediate and final output values.

Chapter 2, High-Level Libraries for TensorFlow, covers several high-level libraries, such as TF Estimators (previously TF Learn), TF Slim, TFLearn, Sonnet, and Pretty Tensor.

Chapter 3, Keras 101, gives a detailed overview of the high-level library Keras, which is now part of the TensorFlow core.

Chapter 4, Classical Machine Learning with TensorFlow, teaches us to use TensorFlow to implement classical machine learning algorithms, such as linear regression and classification with logistic regression.

Chapter 5, Neural Networks and MLP with TensorFlow and Keras, introduces the concept of neural networks and shows how to build simple neural network models. We also cover how to build deep neural network models known as MultiLayer Perceptrons.

Chapter 6, RNN with TensorFlow and Keras, covers how to build Recurrent Neural Networks with TensorFlow and Keras. We cover the internal architecture of RNNs, Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRU). We provide a brief overview of the API functions and classes provided by TensorFlow and Keras to implement RNN models.

Chapter 7, RNN for Time Series Data with TensorFlow and Keras, shows how to build and train RNN models for time series data and provide examples in TensorFlow and Keras libraries.

Chapter 8, RNN for Text Data with TensorFlow and Keras, teaches us how to build and train RNN models for text data and provides examples in TensorFlow and Keras libraries. We learn to build word vectors and embeddings with TensorFlow and Keras, followed by LSTM models for using embeddings to generate text from sample text data.

Chapter 9, CNN with TensorFlow and Keras, covers CNN models for image data and provides examples in TensorFlow and Keras libraries. We implement the LeNet architecture pattern for our example.

Chapter 10, Autoencoder with TensorFlow and Keras, illustrates the Autoencoder models for image data and again provides examples in TensorFlow and Keras libraries. We show the implementation of Simple Autoencoder, Denoising Autoencoder, and Variational Autoencoders.

Chapter 11, TensorFlow Models in Production with TF Serving, teaches us to deploy the models with TensorFlow Serving. We learn how to deploy using TF Serving in Docker containers and Kubernetes clusters.

Chapter 12, Transfer Learning and Pre-Trained Models, shows the use of pretrained models for predictions. We learn how to retrain the models on a different dataset. We provide examples to apply the VGG16 and Inception V3 models, pretrained on the ImageNet dataset, to predict images in the COCO dataset. We also show examples of retraining only the last layer of the models with the COCO dataset to improve the predictions.

Chapter 13, Deep Reinforcement Learning, covers reinforcement learning and the OpenAI gym. We build and train several models using various reinforcement learning strategies, including deep Q networks.

Chapter 14, Generative Adversarial Networks, shows how to build and train generative adversarial models in TensorFlow and Keras. We provide examples of Simple GAN and DCGAN.

Chapter 15, Distributed Models with TensorFlow Clusters, covers distributed training for TensorFlow models using TensorFlow clusters. We provide examples of asynchronous and synchronous update methods for training models in data-parallel fashion.

Chapter 16, TensorFlow Models on Mobile and Embedded Platforms, shows how to deploy TensorFlow models on mobile devices running on iOS and Android platforms. We cover both TF Mobile and TF Lite APIs of the TensorFlow Library.

Chapter 17, TensorFlow and Keras in R, covers how to build and train TensorFlow models in R statistical software. We learn about the three packages provided by R Studio that implement the TF Core, TF Estimators, and Keras API in R.

Chapter 18, Debugging TensorFlow Models, tells us strategies and techniques to find problem hotspots when the models do not work as expected. We cover TensorFlow debugger, along with other methods.

Appendix, Tensor Processing Units, gives a brief overview of Tensor Processing Units (TPUs), platforms optimized to train and run TensorFlow models. Although not yet widely available, they are offered on the Google Cloud Platform and are slated to become available outside GCP soon.

To get the most out of this book

We assume that you are familiar with coding in Python and the basics of TensorFlow and Keras.

If you haven't done so already, install Jupyter Notebook, TensorFlow, and Keras.

Download the code bundle for this book that contains the Python, R, and notebook code files.

Practice with the code as you read along, and try exploring further by modifying the provided sample code.

To practice the Android chapter, you will need Android Studio and an Android device.

To practice the iOS chapter, you will need an Apple computer with Xcode and an Apple device.

To practice the TensorFlow Serving chapter, you will need Docker and Kubernetes installed. Instructions for installing Kubernetes and Docker on Ubuntu are provided in the book.

Download the example code files

You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packtpub.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR/7-Zip for Windows
  • Zipeg/iZip/UnRarX for Mac
  • 7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-TensorFlow-1x. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report it to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.

TensorFlow 101

TensorFlow is one of the most popular libraries for solving machine learning and deep learning problems. Initially developed at Google for internal use, it was released as open source for public use and development. Let us understand the three models of TensorFlow: the data model, the programming model, and the execution model.

The TensorFlow data model consists of tensors, and the programming model consists of data flow graphs (also called computation graphs). The TensorFlow execution model consists of firing the nodes in a sequence based on dependency conditions, starting from the initial nodes that depend on inputs.

In this chapter, we will review the elements of TensorFlow that make up these three models, also known as the core TensorFlow.

We will cover the following topics in this chapter:

  • TensorFlow core
  • Tensors
  • Constants
  • Placeholders
  • Operations
  • Creating tensors from Python objects
  • Variables
  • Tensors generated from library functions
  • Data flow graph or computation graph
  • Order of execution and lazy loading
  • Executing graphs across compute devices - CPU and GPGPU
  • Multiple graphs
  • TensorBoard overview

This book is written with a practical focus in mind, hence you can clone the code from the book's GitHub repository or download it from Packt Publishing. You can follow the code examples in this chapter with the Jupyter Notebook ch-01_TensorFlow_101 included in the code bundle.

What is TensorFlow?

According to the TensorFlow website (www.tensorflow.org):

TensorFlow is an open source library for numerical computation using data flow graphs.

Initially developed by Google for its internal consumption, it was released as open source on November 9, 2015. Since then, TensorFlow has been extensively used to develop machine learning and deep neural network models in various domains and continues to be used within Google for research and product development. TensorFlow 1.0 was released on February 15, 2017. It makes one wonder if it was a Valentine's Day gift from Google to machine learning engineers!

TensorFlow can be described with a data model, a programming model, and an execution model:

  • Data model: comprises tensors, the basic data units that are created, manipulated, and saved in a TensorFlow program.
  • Programming model: comprises data flow graphs or computation graphs. Creating a program in TensorFlow means building one or more TensorFlow computation graphs.
  • Execution model: consists of firing the nodes of a computation graph in a sequence of dependence. The execution starts by running the nodes that are directly connected to inputs and depend only on the inputs being present.
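The interplay of the three models can be sketched in miniature without TensorFlow installed. The toy graph below is plain Python, not the TensorFlow API: leaf nodes hold the data (the data model), building nodes only records the computation (the programming model), and nothing fires until run() is called, much like session.run() in TensorFlow 1.x (the execution model).

```python
# A toy deferred-execution graph -- an illustration of the idea only,
# not the TensorFlow API.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # function to fire when this node executes
        self.inputs = inputs  # nodes this node depends on

    def run(self):
        # Execution model: fire dependencies first, then this node
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    # Data model: leaf nodes hold the data (tensors, in TensorFlow)
    return Node(lambda: value)

def add(a, b):
    # Programming model: building nodes builds the graph; nothing runs yet
    return Node(lambda x, y: x + y, (a, b))

# Build the graph...
c = add(constant(2), constant(3))
# ...and only now execute it, like session.run() in TensorFlow 1.x
print(c.run())  # 5
```

Until c.run() is called, c is only a description of the computation; this deferral is what lets TensorFlow place and optimize graph nodes before executing them.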

To use TensorFlow in your projects, you need to learn how to program using the TensorFlow API. TensorFlow has multiple APIs that can be used to interact with the library. The TF APIs or libraries are divided into two levels:

  • Lower-level library: The lower-level library, also known as TensorFlow core, provides very fine-grained lower-level functionality, thereby offering complete control over how the library is used and implemented in models. We will cover TensorFlow core in this chapter.
  • Higher-level libraries: These libraries provide high-level functionality and are comparatively easier to learn and implement in models. Some of the libraries include TF Estimators, TFLearn, TF Slim, Sonnet, and Keras. We will cover some of these libraries in the next chapter.

TensorFlow core

TensorFlow core is the lower-level library on which the higher-level TensorFlow modules are built. It is important to learn the concepts of this lower-level library before going deeper into advanced TensorFlow. In this section, we will have a quick recap of all those core concepts.

Tensors

Tensors are the basic elements of computation and a fundamental data structure in TensorFlow; they are probably the only data structure you need to learn to use TensorFlow. A tensor is an n-dimensional collection of data, identified by rank, shape, and type.

Rank is the number of dimensions of a tensor, and shape is the list denoting the size in each dimension. A tensor can have any number of dimensions. You may be already familiar with quantities that are a zero-dimensional collection (scalar), a one-dimensional collection (vector), a two-dimensional collection (matrix), and a multidimensional collection.

A scalar value is a tensor of rank 0 and thus has a shape of []. A vector or a one-dimensional array is a tensor of rank 1 and has a shape of [columns]. A matrix or a two-dimensional array is a tensor of rank 2 and has a shape of [rows, columns]. A three-dimensional array would be a tensor of rank 3, and in the same manner, an n-dimensional array would be a tensor of rank n.
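The rank/shape relationship can be checked hands-on with NumPy arrays, whose ndim and shape attributes correspond directly to tensor rank and shape (NumPy is used here purely for illustration):

```python
import numpy as np

scalar = np.array(5)                  # rank 0, shape ()
vector = np.array([1, 2, 3])          # rank 1, shape (3,)
matrix = np.array([[1, 2], [3, 4]])   # rank 2, shape (2, 2)

print(scalar.ndim, scalar.shape)  # 0 ()
print(vector.ndim, vector.shape)  # 1 (3,)
print(matrix.ndim, matrix.shape)  # 2 (2, 2)
```

The same values would be reported by a TensorFlow tensor's rank and shape for equivalent data.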

Refer to the following resources to learn more about tensors and their mathematical underpinnings:

Tensors page on Wikipedia, at https://en.wikipedia.org/wiki/Tensor

Introduction to Tensors guide from NASA, at https://www.grc.nasa.gov/www/k-12/Numbers/Math/documents/Tensors_TM2002211716.pdf

A tensor can store data of one type in all its dimensions, and the data type of its elements is known as the data type of the tensor.

You can also check the data types defined in the latest version of the TensorFlow library at https://www.tensorflow.org/api_docs/python/tf/DType.

At the time of writing this book, TensorFlow had the following data types defined:

| TensorFlow Python API data type | Description |
| --- | --- |
| tf.float16 | 16-bit half-precision floating point |
| tf.float32 | 32-bit single-precision floating point |
| tf.float64 | 64-bit double-precision floating point |
| tf.bfloat16 | 16-bit truncated floating point |
| tf.complex64 | 64-bit single-precision complex |
| tf.complex128 | 128-bit double-precision complex |
| tf.int8 | 8-bit signed integer |
| tf.uint8 | 8-bit unsigned integer |
| tf.uint16 | 16-bit unsigned integer |
| tf.int16 | 16-bit signed integer |
| tf.int32 | 32-bit signed integer |
| tf.int64 | 64-bit signed integer |
| tf.bool | Boolean |
| tf.string | String |
| tf.qint8 | Quantized 8-bit signed integer |
| tf.quint8 | Quantized 8-bit unsigned integer |
| tf.qint16 | Quantized 16-bit signed integer |
| tf.quint16 | Quantized 16-bit unsigned integer |
| tf.qint32 | Quantized 32-bit signed integer |
| tf.resource | Handle to a mutable resource |

We recommend that you avoid using the Python native data types; instead, use TensorFlow data types for defining tensors.

Tensors can be created in the following ways:

By defining constants, operations, and variables, and passing the values to their constructors.

By defining placeholders and passing the values to session.run().

By converting Python objects such as scalar values, lists, and NumPy arrays with the tf.convert_to_tensor() function.
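The three approaches can be sketched as follows. This is written against the tf.compat.v1 API (available in TensorFlow 1.15 and under TensorFlow 2.x) so that it runs on either version; variable names such as c_val are invented for the example:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x-style API
tf.disable_eager_execution()       # graph mode, as in TensorFlow 1.x

# 1. A constant: the value is passed to the constructor
c = tf.constant(42, name='c')

# 2. A placeholder: the value is fed in at session.run() time
p = tf.placeholder(tf.float32, shape=(), name='p')

# 3. A Python/NumPy object converted to a tensor
t = tf.convert_to_tensor(np.array([1.0, 2.0, 3.0]))

sess = tf.Session()
c_val = sess.run(c)
p_val = sess.run(p, feed_dict={p: 3.5})
t_val = sess.run(t)
sess.close()

print(c_val, p_val, t_val)  # 42 3.5 [1. 2. 3.]
```

Note that fetching the placeholder without feeding it a value would raise an error; the value must be supplied in feed_dict on every run.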

Let's examine different ways of creating Tensors.

Constants

The constant valued tensors are created using the tf.constant() function that has the following signature:

tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const',
    verify_shape=False
)

Let's look at the example code provided in the Jupyter Notebook with this book:

c1 = tf.constant(5, name='x')
c2 = tf.constant(6.0, name='y')
c3 = tf.constant(7.0, tf.float32, name='z')

Let's look at the code in detail:

The first line defines a constant tensor c1, gives it the value 5, and names it x.

The second line defines a constant tensor c2, stores the value 6.0, and names it y.

When we print these tensors, we see that the data types of c1 and c2 are automatically deduced by TensorFlow. To specifically define a data type, we can use the dtype parameter or place the data type as the second argument. In the preceding code example, we define the data type as tf.float32 for c3.

Let's print the constants c1, c2, and c3:

print('c1 (x): ', c1)
print('c2 (y): ', c2)
print('c3 (z): ', c3)

When we print these constants, we get the following output:

c1 (x): Tensor("x:0", shape=(), dtype=int32)
c2 (y): Tensor("y:0", shape=(), dtype=float32)
c3 (z): Tensor("z:0", shape=(), dtype=float32)

In order to print the values of these constants, we have to execute them in a TensorFlow session with the tfs.run() command:

print('run([c1,c2,c3]) : ',tfs.run([c1,c2,c3]))

We see the following output:

run([c1,c2,c3]) : [5, 6.0, 7.0]

Tensors generated from library functions

Tensors can also be generated from various TensorFlow functions. These generated tensors can either be assigned to a constant or a variable, or provided to their constructor at the time of initialization.

As an example, the following code generates a vector of 100 zeros and prints it:

a = tf.zeros((100,))
print(tfs.run(a))

TensorFlow provides different types of functions to populate the tensors at the time of their definition:

Populating all elements with the same values

Populating elements with sequences

Populating elements with a random probability distribution, such as the normal distribution or the uniform distribution

Populating tensor elements with a random distribution

TensorFlow provides us with the functions to generate tensors filled with random valued distributions.

The distributions generated are affected by the graph-level or the operation-level seed. The graph-level seed is set using tf.set_random_seed, while the operation-level seed is given as the argument seed in all of the random distribution functions. If no seed is specified, then a random seed is used.

More details on random seeds in TensorFlow can be found at the following link:  https://www.tensorflow.org/api_docs/python/tf/set_random_seed.
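As a quick sketch of the two seed levels, again written against the tf.compat.v1 API so it runs under both TensorFlow 1.15 and 2.x: with both seeds fixed, a freshly created session reproduces the same random values.

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style API
tf.disable_eager_execution()

tf.set_random_seed(42)               # graph-level seed
r = tf.random_uniform((3,), seed=7)  # operation-level seed

# Each new session restarts the generator from the fixed seeds,
# so both sessions produce identical values
with tf.Session() as s1:
    a = s1.run(r)
with tf.Session() as s2:
    b = s2.run(r)

print((a == b).all())  # True
```

Dropping the seed argument (or the graph-level seed) would generally make the two sessions produce different values.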

The following table lists some of the tensor generating functions that populate the elements of a tensor with random valued distributions:

| Tensor generating function | Description |
| --- | --- |
| random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) | Generates a tensor of the specified shape, filled with values from a normal distribution: normal(mean, stddev). |
| truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) | Generates a tensor of the specified shape, filled with values from a truncated normal distribution: normal(mean, stddev). Truncated means that the values returned are always at a distance of less than two standard deviations from the mean. |
| random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None) | Generates a tensor of the specified shape, filled with values from a uniform distribution: uniform([minval, maxval)). |
| random_gamma(shape, alpha, beta=None, dtype=tf.float32, seed=None, name=None) | Generates tensors of the specified shape, filled with values from gamma distributions: gamma(alpha, beta). |

More details on the random_gamma function can be found at the following link: https://www.tensorflow.org/api_docs/python/tf/random_gamma.
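The truncation rule can be illustrated with a small NumPy rejection-sampling sketch; NumPy is used here only to mimic the guarantee that truncated_normal provides, and the helper name is invented for the example:

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, seed=0):
    # Draw from normal(mean, stddev) and redraw any sample lying more
    # than two standard deviations from the mean -- the guarantee that
    # tf.truncated_normal provides.
    rng = np.random.default_rng(seed)
    out = rng.normal(mean, stddev, size=shape)
    mask = np.abs(out - mean) > 2 * stddev
    while mask.any():
        out[mask] = rng.normal(mean, stddev, size=int(mask.sum()))
        mask = np.abs(out - mean) > 2 * stddev
    return out

samples = truncated_normal((1000,))
print(samples.min(), samples.max())  # both within [-2.0, 2.0]
```

Truncation of this kind is often used for weight initialization, where large outlier values can slow down or destabilize training.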