What's New in TensorFlow 2.0

Ajay Baranwal

Description

Get to grips with key structural changes in TensorFlow 2.0




Key Features



  • Explore TF Keras APIs and strategies to run GPUs, TPUs, and compatible APIs across the TensorFlow ecosystem


  • Learn and implement best practices for building data ingestion pipelines using TF 2.0 APIs


  • Migrate your existing code from TensorFlow 1.x to TensorFlow 2.0 seamlessly



Book Description



TensorFlow is an end-to-end machine learning platform for experts as well as beginners, and its new version, TensorFlow 2.0 (TF 2.0), improves its simplicity and ease of use. This book will help you understand and utilize the latest TensorFlow features.







What's New in TensorFlow 2.0 starts by focusing on advanced concepts such as the new TensorFlow Keras APIs, eager execution, and efficient distribution strategies that help you to run your machine learning models on multiple GPUs and TPUs. The book then takes you through the process of building data ingestion and training pipelines, and it provides recommendations and best practices for feeding data to models created using the new tf.keras API. You'll explore the process of building an inference pipeline using TF Serving and other multi-platform deployments before moving on to explore the newly released AIY, which is essentially do-it-yourself AI. This book delves into the core APIs to help you build unified convolutional and recurrent layers and use TensorBoard to visualize deep learning models using what-if analysis.







By the end of the book, you'll have learned about compatibility between TF 2.0 and TF 1.x and be able to migrate to TF 2.0 smoothly.




What you will learn



  • Implement tf.keras APIs in TF 2.0 to build, train, and deploy production-grade models


  • Build models with Keras integration and eager execution


  • Explore distribution strategies to run models on GPUs and TPUs


  • Perform what-if analysis with TensorBoard across a variety of models


  • Discover Vision Kit, Voice Kit, and the Edge TPU for model deployments


  • Build complex input data pipelines for ingesting large training datasets



Who this book is for



If you're a data scientist, machine learning practitioner, deep learning researcher, or AI enthusiast who wants to migrate code to TensorFlow 2.0 and explore its latest features, this book is for you. Prior experience with TensorFlow and Python programming is necessary to understand the concepts covered in the book.

Format: EPUB

Page count: 231

Publication year: 2019




What's New in TensorFlow 2.0

 

 

 

 

 

 

Use the new and improved features of TensorFlow to enhance machine learning and deep learning

 

 

 

 

 

 

 

 

 

 

Ajay Baranwal
Alizishaan Khatri
Tanish Baranwal

 

 

 

 

 

 

 

 

 

 

 

 

BIRMINGHAM - MUMBAI

What's New in TensorFlow 2.0

Copyright © 2019 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

Commissioning Editor: Mrinmayee Kawalkar
Acquisition Editor: Snehal Main
Content Development Editor: Athikho Sapuni Rishana
Senior Editor: Sophie Rogers
Technical Editor: Joseph Sunil
Copy Editor: Safis Editing
Project Coordinator: Kirti Pisat
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Production Designer: Jyoti Chauhan

First published: August 2019

Production reference: 1080819

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham, B3 2PB, UK.

ISBN 978-1-83882-385-6

www.packtpub.com

Contributors

About the authors

Ajay Baranwal works as a director at the Center for Deep Learning in Electronics Manufacturing, where he is responsible for researching and developing TensorFlow-based deep learning applications in the semiconductor and electronics manufacturing industry. Part of his role is to teach and train deep learning techniques to professionals.

He has a solid history of software engineering and management, where he got hooked on deep learning. He moved to natural language understanding (NLU) to pursue deep learning further at Abzooba and built an information retrieval system for the finance sector. He has also worked at Ansys Inc. as a senior manager (engineering) and a technical fellow (data science) and introduced several ML applications.

 

 

Alizishaan Khatri works as a machine learning engineer in Silicon Valley. He uses TensorFlow to design, build, and maintain production-grade systems that use deep learning for NLP applications. A major system he has built uses deep learning to detect offensive content in chats. His other work includes text classification and named entity recognition (NER) systems for different use cases. He is passionate about sharing ideas with the community and frequently speaks at tech conferences across the globe.

He holds a master's degree in computer science from the State University of New York at Buffalo. His thesis proposed a solution to the problem of overfitting in deep learning. Outside of work, he enjoys skiing and mountaineering.

 

Tanish Baranwal is a high school sophomore who lives in California with his family and has worked with his dad on deep learning projects using TensorFlow for the last 3 years. He has been coding for 9 years (since 1st grade) and is well versed in Python and JavaScript; he is now learning C++. He has certificates from various online courses and has won the Entrepreneurship Showcase Award at his school.

Some of his deep learning projects include anomaly detection systems for transaction fraud, a system to save energy by turning off domestic water heaters when not in use, and a fully functional style transfer program that can recreate any photograph in another style. He has also written blogs on deep learning on Medium with over 1,000 views.

About the reviewers

Jay Kim is an experienced data scientist with a broad background in data science, AI, machine learning, deep learning, and statistical analysis. He has worked in a variety of industries, including utilities, the automotive sector, manufacturing, commercial, and research.

 

Narotam Singh has been actively involved in various technical programs and the training of Government of India (GoI) officers in the fields of information technology and communication. He earned his master's degree in electronics and graduated with honors in physics. He also holds a diploma in computer engineering and a postgraduate diploma in computer applications. Presently, he works as a freelancer. He has many research publications to his name and is a technical reviewer of various books. His present research interests include artificial intelligence, machine learning, deep learning, robotics, and spirituality.

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

 

Packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Fully searchable for easy access to vital information

Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Table of Contents

Title Page

Copyright and Credits

What's New in TensorFlow 2.0

Contributors

About the authors

About the reviewers

Packt is searching for authors like you

About Packt

Why subscribe?

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Section 1: TensorFlow 2.0 - Architecture and API Changes

Getting Started with TensorFlow 2.0

Technical requirements

What's new?

Changes from TF 1.x

TF 2.0 installation and setup

Installing and using pip

Using Docker

GPU installation

Installing using Docker

Installing using pip

Using TF 2.0

Rich extensions

Ragged Tensors

What are Ragged Tensors, really?

Constructing a Ragged Tensor

Basic operations on Ragged Tensors

New and important packages

Summary

Keras Default Integration and Eager Execution

Technical requirements

New abstractions in TF 2.0

Diving deep into the Keras API

What is Keras?

Building models

The Keras layers API

Simple model building using the Sequential API

Advanced model building using the functional API

Training models

Saving and loading models

Loading and saving architecture and weights separately

Loading and saving architectures

Loading and saving weights

Saving and loading entire models

Using Keras

Using the SavedModel API

Other features

The keras.applications module

The keras.datasets module

An end-to-end Sequential example

Estimators

Evaluating TensorFlow graphs

Lazy loading versus eager execution

Summary

Section 2: TensorFlow 2.0 - Data and Model Training Pipelines

Designing and Constructing Input Data Pipelines

Technical requirements

Designing and constructing the data pipeline

Raw data

Splitting data into train, validation, and test data

Creating TFRecords

TensorFlow protocol messages – tf.Example

tf.data dataset object creation

Creating dataset objects

Creating datasets using TFRecords

Creating datasets using in-memory objects and tensors

Creating datasets using other formats directly without using TFRecords

Transforming datasets

The map function

The flat_map function

The zip function

The concatenate function

The interleave function

The take(count) function

The filter(predicate) function

Shuffling and repeating the use of tf.data.Dataset

Batching

Prefetching

Validating your data pipeline output before feeding it to the model

Feeding the created dataset to the model

Examples of complete end-to-end data pipelines

Creating tfrecords using pickle files

Best practices and the performance optimization of a data pipeline in TF 2.0 

Built-in datasets in TF 2.0

Summary

Further reading

Model Training and Use of TensorBoard

Technical requirements

Comparing Keras and tf.keras

Comparing estimator and tf.keras

A quick review of machine learning taxonomy and TF support

Creating models using tf.keras 2.0

Sequential APIs

Functional APIs

Model subclassing APIs

Model compilation and training

The compile() API

The fit() API

Saving and restoring a model

Saving checkpoints as the training progresses

Manually saving and restoring weights

Saving and restoring an entire model

Custom training logic

Distributed training

TensorBoard

Hooking up TensorBoard with callbacks and invocation

Visualization of scalar, metrics, tensors, and image data

Graph dashboard

Hyperparameter tuning

What-If Tool

Profiling tool

Summary

Questions

Further reading

Section 3: TensorFlow 2.0 - Model Inference and Deployment and AIY

Model Inference Pipelines - Multi-platform Deployments

Technical requirements

Machine learning workflow – the inference phase

Understanding a model from an inference perspective

Model artifact – the SavedModel format

Understanding the core dataflow model

The tf.function API

The tf.autograph function

Exporting your own SavedModel model

Using the tf.function API

Analyzing SavedModel artifacts

The SavedModel command-line interface

Inference on backend servers

TensorFlow Serving

Setting up TensorFlow Serving

Setting up and running an inference server

When TensorFlow.js meets Node.js

Inference in the browser

Inference on mobile and IoT devices

Summary

AIY Projects and TensorFlow Lite

Introduction to TFLite

Getting started with TFLite

Running TFLite on mobile devices

TFLite on Android

TFLite on iOS

Running TFLite on low-power machines

Running TFLite on an Edge TPU processor

Running TF on the NVIDIA Jetson Nano

Comparing TFLite and TF

AIY

The Voice Kit

The Vision Kit

Summary

Section 4: TensorFlow 2.0 - Migration, Summary

Migrating From TensorFlow 1.x to 2.0

Major changes in TF 2.0

Recommended techniques to employ for idiomatic TF 2.0

Making code TF 2.0-native

Converting TF 1.x models

Upgrading training loops

Other things to note when converting

Frequently asked questions

The future of TF 2.0

More resources to look at

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

TensorFlow is one of the most popular machine learning frameworks, and its new version, TensorFlow 2.0, improves its simplicity and ease of use. This book will help you understand and utilize the latest TensorFlow features.

What's New in TensorFlow 2.0 starts by focusing on advanced concepts such as the new TensorFlow Keras APIs, eager execution, and efficient distribution strategies that help you to run your machine learning models on multiple GPUs and TPUs. The book then takes you through the process of building data ingestion and training pipelines, and it provides recommendations and best practices for feeding data to models created using the new tf.keras API. You'll explore the process of building an inference pipeline using TensorFlow Serving and other multi-platform deployments before moving on to explore the newly released AIY, which is essentially do-it-yourself AI. This book delves into the core APIs to help you build unified convolutional and recurrent layers and use TensorBoard to visualize deep learning models using what-if analysis.

By the end of the book, you'll have learned about the compatibility between TensorFlow 2.0 and TensorFlow 1.x and will be able to smoothly migrate to TensorFlow 2.0.

Who this book is for

If you're a data scientist, machine learning practitioner, deep learning researcher, or AI enthusiast who wants to migrate code to TensorFlow 2.0 and explore its latest features, this book is for you. Prior experience with TensorFlow and Python programming is necessary to understand the concepts covered in the book.

What this book covers

Chapter 1, Getting Started with TensorFlow 2.0, provides a quick bird's-eye view of the architectural and API-level changes in TensorFlow 2.0. It covers TensorFlow 2.0 installation and setup, compares how it has changed compared to TensorFlow 1.x (such as Keras APIs and layer APIs), and also presents the addition of rich extensions such as TensorFlow Probability, Tensor2Tensor, Ragged Tensors, and the newly available custom training logic for loss functions.

Chapter 2, Keras Default Integration and Eager Execution, goes deeper into high-level TensorFlow 2.0 APIs using Keras. It presents a detailed perspective of how graphs are evaluated in TensorFlow 1.x compared to TensorFlow 2.0. It explains lazy evaluation and eager execution and how they are different in TensorFlow 2.0, and it also shows how to use Keras model subclassing to incorporate TensorFlow 2.0 lower APIs for custom-built models.

Chapter 3, Designing and Constructing Input Data Pipelines, gives an overview of how to build complex input data pipelines for ingesting large training and inference datasets in the most common formats, such as CSV, images, and text, using TFRecords and tf.data.Dataset. It gives a general explanation of protocol buffers and protocol messages and how they are implemented using tf.Example. It also explains best practices for using tf.data.Dataset with regard to the shuffling, prefetching, and batching of data, and provides recommendations for building data pipelines.

Chapter 4, Model Training and Use of TensorBoard, covers an overall model training pipeline to enable you to build, train, and validate state-of-the-art models. It talks about how to integrate input data pipelines, create tf.keras models, run training in a distributed manner, and run validations to fine-tune hyperparameters. It explains how to export TensorFlow models for deployment or inferencing, and it outlines the usage of TensorBoard, the changes to it in TensorFlow 2.0, and how to use it for debugging and profiling a model's speed and performance.

Chapter 5, Model Inference Pipelines – Multi-platform Deployments, shows us some deployment strategies for using the trained model to build software applications at scale in a live production environment. Models trained in TensorFlow 2.0 can be deployed on platforms such as servers and web browsers using a variety of programming languages, such as Python and JavaScript.

Chapter 6, AIY Projects and TensorFlow Lite, shows us how to deploy models trained in TensorFlow 2.0 on low-powered embedded systems such as edge devices and mobile systems including Android, iOS, the Raspberry Pi, Edge TPUs, and the NVIDIA Jetson Nano. It also contains details about training and deploying models on Google's AIY kits.

Chapter 7, Migrating From TensorFlow 1.x to 2.0, shows us the conceptual differences between TensorFlow 1.x and TensorFlow 2.0, the compatibility criteria between them, and ways to migrate between them, syntactically and semantically. It also shows several examples of syntactic and semantic migration from TensorFlow 1.x to TensorFlow 2.0, and contains references and future information.

To get the most out of this book

The reader needs to have basic knowledge of Python and TensorFlow.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packt.com.

2. Select the SUPPORT tab.

3. Click on Code Downloads & Errata.

4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/What-s-New-in-TensorFlow-2.0. In case there's an update to the code, it will be updated in the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781838823856_ColorImages.pdf.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Section 1: TensorFlow 2.0 - Architecture and API Changes

This section of the book will give you a quick summary of what is new in TensorFlow 2.0, a comparison with TensorFlow 1.x, the differences between lazy evaluation and eager execution, changes at the architectural level, and API usage with respect to tf.keras and Estimator.

This section contains the following chapters:

Chapter 1, Getting Started with TensorFlow 2.0

Chapter 2, Keras Default Integration and Eager Execution

Getting Started with TensorFlow 2.0

This book aims to familiarize readers with the new features introduced in TensorFlow 2.0 (TF 2.0) and to empower you to unlock its potential while building machine learning applications. This chapter provides a bird's-eye view of new architectural and API-level changes in TF 2.0. We will cover TF 2.0 installation and setup, and will compare the changes with respect to TensorFlow 1.x (TF 1.x), such as Keras APIs and layer APIs. We will also cover the addition of rich extensions, such as TensorFlow Probability, Tensor2Tensor, Ragged Tensors, and the newly available custom training logic for loss functions. This chapter also summarizes the changes to the layers API and other APIs.

The following topics will be covered in this chapter:

What's new?

TF 2.0 installation and setup

Using TF 2.0

Rich extensions

Technical requirements

You will need the following before you can start executing the steps described in the sections ahead:

Python 3.4 or higher

A computer with Ubuntu 16.04 or later (the instructions are similar for most *NIX-based systems, such as macOS or other Linux variants)

What's new?

The philosophy of TF 2.0 is based on simplicity and ease of use. The major updates include easy model building with tf.keras and eager execution, robust model deployment for production and commercial use for any platform, powerful experimentation techniques and tools for research, and API simplification for a more intuitive organization of APIs. 

The new organization of TF 2.0 is illustrated by the following diagram:

The preceding diagram focuses on using the Python API for training and deployment; however, the same process applies to the other supported languages, including Julia, JavaScript, and R. The flow of TF 2.0 is separated into two sections: model training and model deployment. Model training includes the data pipelines, model creation, training, and distribution strategies; model deployment covers the various means of deployment, such as TF Serving, TFLite, TF.js, and other language bindings. The components in this diagram will each be elaborated upon in their respective chapters.

The biggest change in TF 2.0 is the addition of eager execution. Eager execution is an imperative programming environment that evaluates operations immediately, without necessarily building graphs. All operations return concrete values instead of constructing a computational graph that the user can compute later. 

This makes it significantly easier to build and train TensorFlow models and reduces much of the boilerplate code that was attributed to TF 1.x code. Eager execution has an intuitive interface that follows the standard Python code flow. Code written in eager execution is also much easier to debug, as standard Python modules for debugging, such as pdb, can be used to inspect code for sources of error. The creation of custom models is also easier due to the natural Python control flow and support for iteration.
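To make this concrete, here is a minimal sketch of eager execution, assuming a TF 2.x installation; the values are illustrative, not from the book:

```python
import tensorflow as tf

# Operations run immediately and return concrete values -- no graph
# construction or Session.run() as in TF 1.x.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
print(b.numpy())  # [[ 7. 10.]
                  #  [15. 22.]]
```

Because `b` is a concrete tensor, you can inspect it at any point with `print` or a debugger such as pdb, exactly as you would any Python value.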

Another major change in TF 2.0 is the migration to tf.keras as the standard module for creating and training TensorFlow models. The Keras API is the central high-level API in TF 2.0, making it easy to get started with TensorFlow. Although Keras is an independent implementation of deep learning concepts, the tf.keras implementation contains enhancements such as eager execution for immediate iteration and debugging, and tf.data is also included for building scalable input pipelines.

An example workflow in tf.keras would be to first load the data using the tf.data module. This allows for large amounts of data to be streamed from the disk without storing all of the data in memory. Then, the developer builds, trains, and validates the model using tf.keras or the premade estimators. The next step would be to run the model and debug it using the benefits of eager execution. Once the model is ready for full-fledged training, use a distribution strategy for distributed training. Finally, when the model is ready for deployment, export it in the SavedModel format for deployment through any of the deployment options shown in the diagram.
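The workflow described above can be sketched as follows. This is a toy example assuming a TF 2.x installation; the data, layer sizes, and names are illustrative placeholders, not the book's own code:

```python
import tensorflow as tf

# Hypothetical toy data standing in for a real dataset.
xs = tf.random.normal((256, 4))
ys = tf.cast(tf.reduce_sum(xs, axis=1) > 0, tf.float32)

# Step 1: stream data with tf.data instead of holding it all in memory.
ds = tf.data.Dataset.from_tensor_slices((xs, ys)).shuffle(256).batch(32)

# Step 2: build and compile a model using tf.keras.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Step 3: train; eager execution lets you inspect tensors at any point.
model.fit(ds, epochs=1, verbose=0)

# Step 4: when ready for deployment, export in the SavedModel format,
# e.g. with tf.saved_model.save(model, export_dir).
```

Distributed training would wrap the model-building step in a `tf.distribute` strategy scope, as covered in the distribution strategy material later in the book.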

Changes from TF 1.x

The first major difference between TF 1.x and TF 2.0 is the API organization. TF 2.0 has reduced the redundancies in the API structure. Major changes include the removal of tf.app, tf.flags, and tf.logging in favor of other Python modules, such as absl-py and the built-in logging module.

The tf.contrib library is also now removed from the main TensorFlow repo. The code implemented in this library has either been moved to a different location or has been shifted to the TensorFlow add-ons library. The reason for this move is that the contrib module had grown beyond what could be maintained in a single repository. 

Other changes include the removal of the QueueRunner module in favor of using tf.data, the removal of graph collections, and changes in how variables are treated. The QueueRunner module was a way of providing data to a model for training, but was quite complicated and harder to use than tf.data, which is now the default way of feeding data to a model. Other benefits of using tf.data for the data pipeline are explained in Chapter 3, Designing and Constructing Input Data Pipelines.

Another major change in TF 2.0 is that there are no more global variables. In TF 1.x, variables created using tf.Variable would be put on the default graph and would still be recoverable through their names. TF 1.x had numerous mechanisms intended to help users recover their variables, such as variable scopes, global collections, and helper methods such as tf.get_global_step and tf.global_variables_initializer. All of this is removed in TF 2.0 in favor of Python's default variable behavior.
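The new variable behavior can be sketched as follows (assuming TF 2.x; the values are illustrative):

```python
import tensorflow as tf

# A tf.Variable now behaves like a regular Python object: no default
# graph, no global collections, no initializer to run.
v = tf.Variable(3.0)
v.assign_add(1.0)      # in-place update, effective immediately
print(v.numpy())       # 4.0

# Gradients are tracked explicitly with tf.GradientTape instead of
# being recovered from a global graph.
with tf.GradientTape() as tape:
    loss = v * v
grad = tape.gradient(loss, v)
print(grad.numpy())    # 8.0, i.e. d(v^2)/dv evaluated at v = 4
```

When `v` goes out of scope, it is garbage collected like any other Python object; there is no hidden global state to clean up.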

TF 2.0 installation and setup

This section describes the steps required to install TF 2.0 on your system using different methods and on different system configurations. Entry-level users should start with the pip- and virtualenv-based methods. For users of the GPU version, Docker is the recommended method.
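For the pip- and virtualenv-based route, the steps look roughly like the following; the environment name and pinned version are illustrative, and the detailed installation sections later in this chapter are authoritative:

```shell
# Create an isolated virtual environment and activate it.
python3 -m venv tf2-env
source tf2-env/bin/activate

# Install the CPU build of TF 2.0 and verify the installation.
pip install --upgrade pip
pip install "tensorflow==2.0.*"
python -c "import tensorflow as tf; print(tf.__version__)"
```

Keeping TensorFlow in a virtual environment prevents its dependencies from conflicting with other Python packages on your system.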

Using Docker

If you would like to isolate your TensorFlow installation from the rest of your system, you might want to consider installing it using a Docker image. This would require you to have Docker installed on your system. Installation instructions are available at https://docs.docker.com/install/.

In order to use Docker without