Unleash the potential of Databricks for end-to-end machine learning with this comprehensive guide, tailored for experienced data scientists and developers transitioning from DIY or other cloud platforms. Building on a strong foundation in Python, Practical Machine Learning on Databricks serves as your roadmap from development to production, covering all intermediary steps using the Databricks platform.
You’ll start with an overview of machine learning applications, Databricks platform features, and MLflow. Next, you’ll dive into data preparation, model selection, and training essentials and discover the power of the Databricks Feature Store for precomputing feature tables. You’ll also learn to kickstart your projects using Databricks AutoML and automate retraining and deployment through Databricks Workflows.
By the end of this book, you’ll have mastered MLflow for experiment tracking, collaboration, and advanced use cases like model interpretability and governance. The book is enriched with hands-on example code at every step. While primarily focused on generally available features, the book equips you to easily adapt to future innovations in machine learning, Databricks, and MLflow.
Practical Machine Learning on Databricks
Seamlessly transition ML models and MLOps on Databricks
Debu Sinha
BIRMINGHAM—MUMBAI
Practical Machine Learning on Databricks
Copyright © 2023 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Ali Abidi
Publishing Product Manager: Tejashwini R
Content Development Editor: Priyanka Soam
Technical Editor: Kavyashree K S
Copy Editor: Safis Editing
Project Coordinator: Farheen Fathima
Proofreader: Safis Editing
Indexer: Subalakshmi Govindhan
Production Designer: Aparna Bhagat
Marketing Coordinator: Vinishka Kalra
First published: October 2023
Production reference: 1261023
Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul's Square
Birmingham
B3 1RB, UK.
ISBN 978-1-80181-203-0
www.packtpub.com
To my mother, for teaching me the invaluable lessons of persistence and self-belief, which have been my guiding stars in completing this book.
To the committed team at Packt—Dhruv Kataria, Hemangi Lotlikar, and Priyanka Soam—thank you for your unwavering guidance and support throughout this endeavor.
To the founders of Databricks, for not only fostering an exceptional company but also for providing me with the opportunity to grow and learn among a community of brilliant minds.
And to the talented engineers across the globe, with whom I’ve had the privilege to collaborate on architecting solutions to machine learning challenges using Databricks—you inspire me every day.
- Debu Sinha
Debu Sinha is an experienced data science and engineering leader with deep expertise in software engineering and solutions architecture. With over 10 years in the industry, Debu has a proven track record in designing scalable software applications and big data and machine learning systems. As the lead ML specialist on the Specialist Solutions Architect team at Databricks, Debu focuses on AI/ML use cases in the cloud and serves as an expert on LLMs, ML, and MLOps. With prior experience as a start-up co-founder, Debu has demonstrated skills in team building, scaling, and delivering impactful software solutions. An established thought leader, Debu has received multiple awards and regularly speaks at industry events.
Abhinav Bhatnagar (AB), a seasoned data leader with years of expertise, excels in leading teams, governance, and strategy. He holds a master’s in computer science from ASU. Currently a manager of DS&E at Databricks, AB drives data development strategies. Previously, at Truecaller and Cyr3con, he showcased his prowess in boosting revenue through data solutions and architecting AI-driven pipelines. He has been recognized with accolades such as the prestigious 40 Under 40 Data Scientists award. With numerous patents and publications, Abhinav Bhatnagar stands as a remarkable figure in the data science and engineering landscape. His dedication to pushing boundaries in data science, combined with a wealth of experience and an innovative mindset, makes him a trailblazer in tech.
Amreth Chandrasehar is a director at Informatica responsible for ML engineering, observability, and SRE teams. Over the last few years, Amreth has played a key role in cloud migration, generative AI, observability, and ML adoption at various organizations. Amreth is also co-creator of the Conducktor platform, serving T-Mobile’s 100+ million customers, and a tech/customer advisory board member at various companies on observability. Amreth has also co-created and open sourced Kardio.io, a service health dashboard tool. Amreth has been invited to and spoken at several key conferences, has won several awards within the company, and was recently awarded three gold awards – Globee, Stevie, and an International Achievers’ Award – for his contributions to observability and generative AI.
I would like to thank my wife (Ashwinya Mani) and my son (Athvik A) for their patience and support during my review of this book.
This part mainly focuses on data science use cases, the life cycle of a data science project and the personas involved in it (data engineers, analysts, and scientists), and the challenges of ML development in organizations.
This section has the following chapters:
Chapter 1, The ML Process and Its Challenges
Chapter 2, Overview of ML on Databricks
Welcome to the world of simplifying your machine learning (ML) life cycle with the Databricks platform.
As a senior specialist solutions architect at Databricks specializing in ML, I have had the opportunity over the years to collaborate with enterprises to architect ML-capable platforms that solve their unique business use cases on the Databricks platform. Now, that experience is at your service. The knowledge you will gain from this book will open new career opportunities for you and change how you approach architecting ML pipelines for your organization’s ML use cases.
This book assumes that you have a reasonable understanding of Python, as the accompanying code samples are in Python. It is not about teaching ML techniques from scratch; rather, it assumes that you are an experienced data science practitioner who wants to learn how to take ML use cases from development to production, and all the steps in between, using the Databricks platform. Some Python and pandas know-how is therefore required, familiarity with Apache Spark is a plus, and a solid grasp of ML and data science is necessary.
Note
This book focuses on the features that are currently generally available. The code examples provided utilize Databricks notebooks. While Databricks is actively developing features to support workflows using external integrated development environments (IDEs), these specific features are not covered in this book. Also, going through this book will give you a solid foundation to quickly pick up new features as they become GA.
In this chapter, we will cover the following:
Understanding the typical ML process
Discovering the personas involved with the machine learning process in organizations
Challenges with productionizing machine learning use cases in organizations
Understanding the requirements of an enterprise machine learning platform
Exploring Databricks and the Lakehouse architecture
By the end of this chapter, you should have a fundamental understanding of what a typical ML development life cycle looks like in an enterprise and the different personas involved in it. You will also know why most ML projects fail to deliver business value and how the Databricks Lakehouse Platform provides a solution.
The following diagram summarizes the ML process in an organization:
Figure 1.1 – The data science development life cycle consists of three main stages – data preparation, modeling, and deployment
Note
Source: https://azure.microsoft.com/mediahandler/files/resourcefiles/standardizing-the-machine-learning-lifecycle/Standardizing%20ML%20eBook.pdf.
It is an iterative process. The raw structured and unstructured data first lands into a data lake from different sources. A data lake utilizes the scalable and cheap storage provided by cloud storage such as Amazon Simple Storage Service (S3) or Azure Data Lake Storage (ADLS), depending on which cloud provider an organization uses. Due to regulations, many organizations have a multi-cloud strategy, making it essential to choose cloud-agnostic technologies and frameworks to simplify infrastructure management and reduce operational overhead.
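As a minimal sketch of this ingestion step, the following PySpark snippet reads raw data from cloud object storage and lands it in the data lake. The bucket names and paths are hypothetical placeholders, not a prescribed layout:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read raw JSON events from cloud object storage (an S3 path here;
# an abfss:// path would be used for ADLS)
raw_events = spark.read.json("s3://my-landing-bucket/events/2023/")

# Persist to the data lake in an open format for downstream processing
raw_events.write.format("delta").mode("append").save("s3://my-lake/raw/events")
```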
Databricks defined a design pattern called the medallion architecture to organize data in a data lake. Before moving forward, let’s briefly understand what the medallion architecture is:
Figure 1.2 – Databricks medallion architecture
The medallion architecture is a data design pattern that’s used in a Lakehouse to organize data logically. It involves structuring data into layers (Bronze, Silver, and Gold) to progressively improve its quality and structure. The medallion architecture is also referred to as a “multi-hop” architecture.
The Lakehouse architecture, which combines the best features of data lakes and data warehouses, offers several benefits, including a simple data model, ease of implementation, incremental extract, transform, and load (ETL), and the ability to recreate tables from raw data at any time. It also provides features such as ACID transactions and time travel for data versioning and historical analysis. We will expand more on the lakehouse in the Exploring the Databricks Lakehouse architecture section.
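To make the time travel feature concrete, here is a small sketch using Delta Lake with PySpark; the table path, version number, and timestamp are illustrative assumptions:

```python
# Read the current state of a Delta table
current = spark.read.format("delta").load("/mnt/lake/silver/customers")

# Read the same table as it existed at an earlier version...
as_of_version = (spark.read.format("delta")
                 .option("versionAsOf", 0)
                 .load("/mnt/lake/silver/customers"))

# ...or as it existed at a point in time
as_of_time = (spark.read.format("delta")
              .option("timestampAsOf", "2023-01-01")
              .load("/mnt/lake/silver/customers"))
```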
In the medallion architecture, the Bronze layer holds raw data sourced from external systems, preserving its original structure along with additional metadata. The focus here is on quick change data capture (CDC) and maintaining a historical archive. The Silver layer, on the other hand, houses cleansed, conformed, and “just enough” transformed data. It provides an enterprise-wide view of key business entities and serves as a source for self-service analytics, ad hoc reporting, and advanced analytics.
The Gold layer is where curated business-level tables reside that have been organized for consumption and reporting purposes. This layer utilizes denormalized, read-optimized data models with fewer joins. Complex transformations and data quality rules are applied here, facilitating the final presentation layer for various projects, such as customer analytics, product quality analytics, inventory analytics, and more. Traditional data marts and enterprise data warehouses (EDWs) can also be integrated into the lakehouse to enable comprehensive “pan-EDW” advanced analytics and ML.
The medallion architecture aligns well with the concept of a data mesh, where Bronze and Silver tables can be joined in a “one-to-many” fashion to generate multiple downstream tables, enhancing data scalability and autonomy.
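To illustrate the multi-hop flow through the layers, here is a simplified sketch in PySpark with Delta Lake. The table names, paths, and columns are illustrative assumptions, not a prescribed schema:

```python
from pyspark.sql import functions as F

# Bronze: raw data as ingested, plus ingestion metadata
bronze = (spark.read.json("/mnt/landing/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze_orders")

# Silver: cleansed, deduplicated, "just enough" transformed records
silver = (spark.table("bronze_orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("order_total") > 0))
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")

# Gold: denormalized, read-optimized aggregates for consumption
gold = (spark.table("silver_orders")
        .groupBy("customer_id")
        .agg(F.sum("order_total").alias("lifetime_value"),
             F.count("order_id").alias("order_count")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold_customer_orders")
```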
Over the last six years, Apache Spark has overtaken Hadoop as the de facto standard for processing data at scale, owing to its performance advancements and broad developer community adoption and support. There are many excellent books on Apache Spark written by its creators; these are listed in the Further reading section and offer more insight into Spark’s other benefits.
Once the clean data lands in the Gold standard tables, features are generated by combining gold datasets, which act as input for ML model training.
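As a hedged sketch of this step, feature assembly might look like the following, where every table and column name is a hypothetical placeholder:

```python
# Join curated gold tables to assemble a training set
features = (spark.table("gold_customer_orders")
            .join(spark.table("gold_customer_profile"), "customer_id")
            .join(spark.table("gold_churn_labels"), "customer_id"))

# Convert to pandas for single-node training on small data;
# larger datasets would stay in Spark
train_pdf = features.toPandas()
```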
During the model development and training phase, various sets of hyperparameters and ML algorithms are tested to identify the optimal combination of the model and corresponding hyperparameters. This process relies on predetermined evaluation metrics such as accuracy, R2 score, and F1 score.
In the context of ML, hyperparameters are parameters that govern the learning process of a model. They are not learned from the data itself but are set before training. Examples of hyperparameters include the learning rate, regularization strength, number of hidden layers in a neural network, or the choice of a kernel function in a support vector machine. Adjusting these hyperparameters can significantly impact the performance and behavior of the model.
On the other hand, training an ML model involves deriving values for other model parameters, such as node weights or model coefficients. These parameters are learned during the training process using the training data to minimize a chosen loss or error function. They are specific to the model being trained and are determined iteratively through optimization techniques such as gradient descent or closed-form solutions.
Expanding beyond node weights, model parameters can also include coefficients in regression models, intercept terms, feature importance scores in decision trees, or filter weights in convolutional neural networks. These parameters are directly learned from the data during the training process and contribute to the model’s ability to make predictions.
Parameters
You can learn more about parameters at https://en.wikipedia.org/wiki/Parameter.
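To make the distinction concrete, the following scikit-learn snippet shows a hyperparameter chosen before training and the parameters learned from the data. The dataset is synthetic and the alpha value is arbitrary:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=42)

# alpha (regularization strength) is a hyperparameter: set before training
model = Ridge(alpha=1.0)
model.fit(X, y)

# coef_ and intercept_ are model parameters: learned during training
print(model.coef_, model.intercept_)
```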
The finalized model is deployed for batch, streaming, or real-time inference, the last of these typically as a Representational State Transfer (REST) endpoint using containers. In this phase, we set up monitoring for drift and governance around the deployed models to manage the model life cycle and enforce access control around usage.
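As an illustration of real-time inference, a client might query such a REST endpoint as follows. The URL, token, and payload shape here are placeholders, not a specific serving API:

```python
import requests

# Hypothetical endpoint URL and access token
response = requests.post(
    "https://<workspace-url>/serving-endpoints/<endpoint-name>/invocations",
    headers={"Authorization": "Bearer <access-token>"},
    json={"dataframe_records": [{"feature_1": 3.5, "feature_2": 1.2}]},
)
predictions = response.json()
```

With deployment covered, let’s take a look at the different personas involved in taking an ML use case from development to production.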
Typically, three different types of persona are involved in developing an ML solution in an organization:
Data engineers: The data engineers create data pipelines that take in structured, semi-structured, and unstructured data from source systems and ingest it into a data lake. Once the raw data lands in the data lake, the data engineers are also responsible for securely storing it and ensuring that it is reliable, clean, and easy for users in the organization to discover and utilize.
Data scientists: Data scientists collaborate with subject matter experts (SMEs) to understand and address business problems, ensuring a solid business justification for projects. They utilize clean data from data lakes and perform feature engineering, selecting and transforming relevant features. By developing and training multiple ML models with different sets of hyperparameters, data scientists can evaluate them on test sets to identify the best-performing model. Throughout this process, collaboration with SMEs validates the models against business requirements, ensuring their alignment with objectives and key performance indicators (KPIs). This iterative approach helps data scientists select a model that effectively solves the problem and meets the specified KPIs.
Machine learning engineers: The ML engineering teams deploy the ML models created by data scientists into production environments. It is crucial to establish procedures, governance, and access control early on, including defining data scientists’ access to specific environments and data. ML engineers also implement monitoring systems to track model performance and data drift. They enforce governance practices, track model lineage, and ensure access control for data security and compliance throughout the ML life cycle.
A typical ML project life cycle consists of data engineering, then data science, and lastly, production deployment by the ML engineering team. This is an iterative process.
Now, let’s take a look at the various challenges involved in productionizing ML models.
At this point, we understand what a typical ML project life cycle looks like in an organization and the different personas involved in the ML process. It looks very intuitive, though we still see many enterprises struggling to deliver business value from their data science projects.
In 2017, Gartner analyst Nick Heudecker stated that 85% of data science projects fail. A report published by Dimensional Research (https://dimensionalresearch.com/) also found that only 4% of companies have succeeded in deploying ML use cases to production. A 2021 study by Rackspace Global Technologies found that only 20% of the 1,870 organizations surveyed across various industries have mature AI and ML practices.
Sources
See the Further reading section for more details on these statistics.
Most enterprises face some common technical challenges in successfully delivering business value from data science projects:
Unintended data silos and messy data: Data silos are groups of data in an organization that are governed by, and accessible only to, specific users or groups within the organization. There are valid reasons to have data silos, such as compliance with privacy regulations like the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA), but these conditions are usually an exception to the norm. Gartner has stated that almost 87% of organizations have low analytics and business intelligence maturity, meaning that data is not being fully utilized. Data silos generally arise because different departments within an organization have different technology stacks to manage and process their data.
The following figure highlights this challenge:
Figure 1.3 – The tools used by the different teams in an organization and the different silos
Data analysts, data engineers, data scientists, and ML engineers utilize different tools and development environments due to their distinct roles and objectives. Data analysts rely on SQL, spreadsheets, and visualization tools for insights and reporting. Data engineers work with programming languages and platforms such as Apache Spark to build and manage data infrastructure. Data scientists use statistical programming languages, ML frameworks, and data visualization libraries to develop predictive models. ML engineers combine ML expertise with software engineering skills to deploy models into production systems. These divergent toolsets can pose challenges in terms of data consistency, tool compatibility, and collaboration; standardized processes and knowledge sharing can help mitigate these challenges and foster effective teamwork. Traditionally, there has been little to no collaboration between these teams. As a result, a data science use case with validated business value may not be developed at the required pace, negatively impacting the growth and effective management of the business.
When the concept of the data lake emerged in the past decade, it promised a scalable and cheap way to store both structured and unstructured data, with the goal of enabling effective organization-wide usage of, and collaboration on, data. In reality, most data lakes ended up becoming data swamps, with little to no governance of data quality.
This inherently made ML very difficult since an ML model is only as good as the data it’s trained on.
Building and managing an effective ML production environment is challenging: The ML teams at Google have done extensive research on the technical challenges of setting up an ML development environment. A Google research paper on hidden technical debt in ML systems, published at NeurIPS (https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf), documented that writing ML code is just a tiny piece of the whole ML development life cycle. To develop an effective ML development practice in an organization, many tools, configurations, and monitoring aspects need to be integrated into the overall architecture. One of the critical components is monitoring models for drift in performance and providing feedback and retraining:
Figure 1.4 – Hidden Technical Debt in Machine Learning Systems, NeurIPS 2015
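As one simple illustration of the drift monitoring mentioned above, the following sketch compares the training and production distributions of a single feature with a two-sample Kolmogorov-Smirnov test. This is a generic statistical approach under assumed inputs, not a specific Databricks feature:

```python
from scipy.stats import ks_2samp

def has_drifted(train_values, prod_values, alpha=0.05):
    """Flag drift when the two samples are unlikely to share a distribution."""
    statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < alpha

# Hypothetical usage with pandas Series of a feature's values:
# if has_drifted(train_df["age"], recent_prod_df["age"]):
#     ...trigger a retraining workflow
```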
Let’s understand the requirements of an enterprise-grade ML platform a bit more.
In the fast-paced world of artificial intelligence (AI) and ML, an enterprise-grade ML platform takes center stage as a critical component. It is a comprehensive software platform that offers the infrastructure, tools, and processes required to build, deploy, and manage ML models at scale. A truly robust ML platform goes further, extending to every stage of the ML life cycle, from data preparation and model training through deployment to continuous monitoring and improvement.
When we speak of an enterprise-grade ML platform, several key attributes determine its effectiveness, each of which is considered a cornerstone of such platforms. Let’s delve deeper into each of these critical requirements and understand their significance in an enterprise setting.
Scalability is an essential attribute, enabling the platform to adapt to the expanding needs of