MLOps is an emerging field that aims to bring repeatability, automation, and standardization of the software engineering domain to data science and machine learning engineering. By implementing MLOps with Kubernetes, data scientists, IT professionals, and data engineers can collaborate and build machine learning solutions that deliver business value for their organization.
You'll begin by understanding the different components of a machine learning project. Then, you'll design and build a practical end-to-end machine learning project using open source software. As you progress, you'll understand the basics of MLOps and the value it can bring to machine learning projects. You will also gain experience in building, configuring, and using an open source, containerized machine learning platform. In later chapters, you will prepare data, build and deploy machine learning models, and automate workflow tasks using the same platform. Finally, the exercises in this book will help you get hands-on experience in Kubernetes and open source tools, such as JupyterHub, MLflow, and Airflow.
By the end of this book, you'll have learned how to effectively build, train, and deploy a machine learning model using the machine learning platform you built.
A practical handbook for building and using a complete open source machine learning platform on Kubernetes
Faisal Masood
Ross Brigoli
BIRMINGHAM—MUMBAI
Copyright © 2022 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author(s), nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Publishing Product Manager: Dhruv Jagdish Kataria
Senior Editor: David Sugarman
Content Development Editor: Priyanka Soam
Technical Editor: Devanshi Ayare
Copy Editor: Safis Editing
Project Coordinator: Farheen Fathima
Proofreader: Safis Editing
Indexer: Manju Arasan
Production Designer: Nilesh Mohite
Marketing Coordinators: Shifa Ansari, Abeer Riyaz Dawe
First published: June 2022
Production reference: 1190522
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80324-180-7
www.packt.com
"To my daughter, Yleana Zorelle – hopefully, this book will help you understand what Papa does for a living."
Ross Brigoli
"To my wife, Bushra Arif – without your support, none of this would have become a reality"
Faisal Masood
Faisal Masood is a principal architect at Red Hat. He has been helping teams to design and build data science and application platforms using OpenShift, Red Hat's enterprise Kubernetes offering. Faisal has over 20 years of experience in building software and has been building microservices since the pre-Kubernetes era.
Ross Brigoli is an associate principal architect at Red Hat. He has been designing and building software in various industries for over 18 years. He has designed and built data platforms and workflow automation platforms. Before Red Hat, Ross led a data engineering team as an architect in the financial services industry. He currently designs and builds microservices architectures and machine learning solutions on OpenShift.
Audrey Reznik is a senior principal software engineer in the Red Hat Cloud Services – OpenShift Data Science team focusing on managed services, AI/ML workloads, and next-generation platforms. She has been working in the IT industry for over 20 years in full-stack development and data science roles. As a former technical advisor and data scientist, Audrey has been instrumental in educating data scientists and developers about what the OpenShift platform is and how to use OpenShift containers (images) to organize, develop, train, and deploy intelligent applications using MLOps. She is passionate about data science and, in particular, the current opportunities with machine learning and open source technologies.
Cory Latschkowski has made a number of major stops in various IT fields over the past two decades, including high-performance computing (HPC), cybersecurity, data science, and container platform design. Much of his experience was acquired within large organizations, including one Fortune 100 company. His last name is pronounced Latch - cow - ski. His passions are pretty moderate, but he will admit to a love of automation, Kubernetes, RTFM, and bacon. To learn more about his personal bank security questions, ping him on GitHub.
Shahebaz Sayed is a highly skilled certified cloud computing engineer with exceptional development ability and extensive knowledge of scripting and data serialization languages. Shahebaz has expertise in all three major clouds – AWS, Azure, and GCP. He also has extensive experience with technologies such as Kubernetes, Terraform, Docker, and others from the DevOps domain. Shahebaz also holds global certifications, including AWS Certified DevOps Engineer Professional, AWS Solutions Architect Associate, Azure DevOps Expert, Azure Developer Associate, and Kubernetes CKA. He has also worked with Packt as a technical reviewer on multiple projects, including AWS Automation Cookbook, Kubernetes on AWS, and Kubernetes for Serverless Applications.
Machine Learning (ML) is the new black. Organizations are investing in adopting and uplifting their ML capabilities to build new products and improve customer experience. The focus of this book is on assisting organizations and teams to get business value out of ML initiatives. By implementing MLOps with Kubernetes, data scientists, IT operations professionals, and data engineers will be able to collaborate and build ML solutions that create tangible outcomes for their business. This book enables teams to take a practical approach to work together to bring the software engineering discipline to the ML project life cycle.
You'll begin by understanding why MLOps is important and discover the different components of an ML project. Later in the book, you'll design and build a practical end-to-end MLOps project that'll use the most popular OSS components. As you progress, you'll get to grips with the basics of MLOps and the value it can bring to your ML projects, as well as gaining experience in building, configuring, and using an open source, containerized ML platform on Kubernetes. Finally, you'll learn how to prepare data, build and deploy models quickly, and automate tasks for an efficient ML pipeline using a common platform. The exercises in this book will help you get hands-on with using Kubernetes and integrating it with OSS, such as JupyterHub, MLflow, and Airflow.
By the end of this book, you'll have learned how to effectively build, train, and deploy an ML model using the ML platform you built.
This book is for data scientists, data engineers, IT platform owners, AI product owners, and data architects who want to use open source components to compose an ML platform. Although this book starts with the basics, a good understanding of Python and Kubernetes, along with knowledge of the basic concepts of data science and data engineering, will help you grasp the topics covered in this book much better.
Chapter 1, Challenges in Machine Learning, discusses the challenges organizations face in adopting ML and why a good number of ML initiatives may not deliver the expected outcomes. The chapter further discusses the top few reasons why organizations face these challenges.
Chapter 2, Understanding MLOps, continues building on the identified set of problems from Chapter 1, Challenges in Machine Learning, and discusses how we can tackle the challenges in adopting ML. The chapter will provide the definition of MLOps and how it helps organizations to get value out of their ML initiatives. The chapter also provides a blueprint on how companies can adopt MLOps in their ML projects.
Chapter 3, Exploring Kubernetes, first describes why we have chosen Kubernetes as the basis for MLOps in this book. The chapter further defines the core concept of Kubernetes and assists you in creating an environment where the code can be tested. The world is changing fast and part of this high-velocity disruption is the availability of the cloud and cloud-based solutions. This chapter provides an overview of how the Kubernetes-based platform can give you the flexibility to run your solution anywhere.
Chapter 4, The Anatomy of a Machine Learning Platform, takes a 1,000-foot view of what an ML platform looks like. You already know what problems MLOps solves. This chapter defines the components of an MLOps platform in a technology-agnostic way. You will build a solid foundation on the core components of an MLOps platform.
Chapter 5, Data Engineering, covers an important part of any ML project that is often missed. A good number of ML tutorials/books start with a clean dataset, maybe a CSV file to build your model against. The real world is different. Data comes in many shapes and sizes and it is important that you have a well-defined strategy to harvest, process, and prepare data at scale. This chapter will define the role data engineering plays in a successful ML project. It will discuss OSS tools that can provide the basis for data engineering. The chapter will then talk about how you can install these toolsets on the Kubernetes platform.
Chapter 6, Machine Learning Engineering, will move the discussion to the model building, tuning, and deployment activities of an ML development life cycle. The chapter will discuss providing a self-service solution to data scientists so they can work more efficiently and collaborate with data engineering teams and fellow data scientists using the same platform. It will also discuss OSS tools that can provide the basis for model development. The chapter will then talk about how you can install these toolsets on the Kubernetes platform.
Chapter 7, Model Deployment and Automation, covers the deployment phase of the ML project life cycle. The model you build knows the data you provided to it. In the real world, however, the data changes. This chapter discusses the tools and techniques to monitor your model performance. This performance data could be used to decide whether the model needs retraining on a new dataset or whether it's time to build a new model for the given problem.
Chapter 8, Building a Complete ML Project Using the Platform, will define a typical ML project and how each component of the platform is utilized in every step of the project life cycle. The chapter will define the outcomes and requirements of the project and focus on how the MLOps platform facilitates the project life cycle.
Chapter 9, Building Your Data Pipeline, will show how a Spark cluster can be used to ingest and process data. The chapter will show how the platform enables the data engineer to read the raw data from any storage, process it, and write it back to another storage. The main focus is to demonstrate how a Spark cluster can be created on-demand and how workloads could be isolated in a shared environment.
Chapter 10, Building, Deploying, and Monitoring Your Model, will show how the JupyterHub server can be used to build, train, and tune models on the platform. The chapter will show how the platform enables the data scientist to perform the modeling activities in a self-serving fashion. This chapter will also introduce MLflow as the model experiment tracking and model registry component. Now that you have a working model, how do you share it for other teams to consume? This chapter will show how the Seldon Core component allows non-programmers to expose their models as REST APIs. You will see how the deployed APIs automatically scale out using Kubernetes capabilities.
Chapter 11, Machine Learning on Kubernetes, will take you through some of the key ideas to bring forth with you to further your knowledge on the subject. This chapter will cover identifying use cases for the ML platform, operationalizing ML, and running on Kubernetes.
You will need a basic working knowledge of Kubernetes and Python to get the most out of this book's technical exercises. The platform uses multiple software components to cover the full ML development life cycle. You will need the recommended hardware to run all the components with ease.
Running the platform requires a good amount of compute resources. If you do not have the required number of CPU cores and memory on your desktop or laptop computer, we recommend running a virtual machine on Google Cloud or any other cloud platform.
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
A good follow-up after you finish with this book is to create a proof of concept within your team or organization using the platform. Assess the benefits and learn how you can further optimize your organization's data science and ML project life cycle.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Machine-Learning-on-Kubernetes. If there's an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781803241807_ColorImages.pdf.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Notice that you will need to adjust the following command and change the quay.io/ml-on-k8s/ part before executing the command."
A block of code is set as follows:
docker tag scikit-notebook:v1.1.0 quay.io/ml-on-k8s/scikit-notebook:v1.1.0
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
gcloud compute project-info add-metadata --metadata enable-oslogin=FALSE
Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: "The installer will present the following License Agreement screen. Click I Agree."
Tips or Important Notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
Once you've read Machine Learning on Kubernetes, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
In this section, we will define what MLOps is and why it is critical to the success of your AI journey. You will go through the challenges organizations may encounter in their AI journey and how MLOps can assist in overcoming those challenges.
The last chapter of this section will provide a refresher on Kubernetes and the role it plays in bringing MLOps to the OSS community. This is by no means a complete guide to Kubernetes, and you should consult other sources for a deeper treatment.
This section comprises the following chapters:
Chapter 1, Challenges in Machine Learning
Chapter 2, Understanding MLOps
Chapter 3, Exploring Kubernetes

Many people believe that artificial intelligence (AI) is all about the idea of a humanoid robot or an intelligent computer program that takes over humanity. The shocking news is that we are not even close to this. A better term for such incredible machines is human-like intelligence or artificial general intelligence (AGI).
So, what is AI? A more straightforward answer would be a system that uses a combination of data and algorithms to make predictions. AI practitioners call it machine learning or ML. A particular subset of ML algorithms, called deep learning (DL), refers to ML algorithms that use a series of steps, or layers, of computation (Goodfellow, Bengio, and Courville, 2017). This technique employs deep neural networks (DNNs) with multiple layers of artificial neurons that mimic the architecture of the human brain. Though it sounds complicated, this does not mean that a DL system will always perform better than other AI algorithms, or even a traditional programming approach.
ML is not always about DL. Sometimes, a basic statistical model may be a better fit for a problem you are trying to solve than a complex DNN. One of the challenges of implementing ML is selecting the right approach. Moreover, delivering an ML project comes with other challenges, not only on the business and technology side but also in people and processes. These challenges are the primary reasons why most ML initiatives fail to deliver their expected value.
In this chapter, we will revisit a basic understanding of ML and understand the challenges in delivering ML projects that can lead to a project not delivering its promised value.
The following topics will be covered:
Understanding ML
Delivering ML value
Choosing the right approach
Facing the challenges of adopting ML
An overview of the ML platform

In traditional computer programming, a human programmer must write a clear set of instructions in order for a computer program to perform an operation or provide an answer to a question. In ML, however, a human (usually an ML engineer or data scientist) uses data and an algorithm to determine the best set of parameters for a model to yield answers or predictions that are usable. While traditional computer programs provide answers using exact logic (Yes/No, Correct/Wrong), ML algorithms involve fuzziness (Yes/Maybe/No, 80% certain, Not sure, I do not know, and so on).
In other words, ML is a technique for solving problems by using data along with an algorithm, statistical model, or a neural network, to infer or predict the desired answer to a question. Instead of explicitly writing instructions on how to solve a problem, we use a bunch of examples and let the algorithm figure out the best way (the best set of parameters) to solve the problem. ML is useful when it is impossible or extremely difficult to write a set of instructions to solve a problem. A typical example problem where ML shines is computer vision (CV). Though it is easy for any normal human to identify a cat, it is impossible or extremely difficult to manually write code to identify if a given image is of a cat or not. If you are a programmer, try thinking about how you would write this code without ML. This is a good mental exercise.
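To make this contrast concrete, here is a minimal sketch (our illustration, not code from the book's exercises) of letting an algorithm find its own parameters from examples instead of hand-coding a rule. The apartment-price data and the least-squares fit are illustrative assumptions:

```python
# A hand-written rule would hard-code the relationship, e.g. "price = 50 * size".
# In ML, we instead give the algorithm examples and let it find the parameters.

def fit_line(examples):
    """Least-squares fit of y = w * x + b from (x, y) example pairs."""
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in examples)
    var = sum((x - mean_x) ** 2 for x, _ in examples)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Made-up examples: apartment size in square meters -> price in thousands.
examples = [(30, 155), (45, 230), (60, 305), (80, 405)]
w, b = fit_line(examples)

# The learned parameters now generalize to unseen inputs.
predict = lambda x: w * x + b
print(round(predict(50)))  # prints 255
```

Here the "program" is the pair of learned parameters (w, b); feeding in different examples yields a different program, without anyone rewriting the logic.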
The following diagram illustrates where DL and ML sit in terms of AI:
Figure 1.1 – Relationship between AI, ML, and DL
AI is a broad subject covering any basic, rule-based agent system that can replace a human operator, as well as ML and DL. But ML alone is another broad subject. It covers several algorithms, from basic linear regression to very deep convolutional neural networks (CNNs). In traditional programming, no matter which language or framework we use, the process of developing and building applications is the same. In contrast, ML has a wide variety of algorithms, and sometimes they require vastly different approaches to building and using models. For example, a generative adversarial network (GAN), an architecture used in many creative ML models to generate fake human faces, is trained very differently from a basic decision tree model.
Because of the nature of ML projects, some practices in software engineering may not always apply to ML, and some practices, processes, and tools that are not present in traditional programming must be invented.
There are many books, videos, and lectures available on ML and its related topics. In this book, we will cover a more adaptive approach and show how open source software (OSS) can provide the basis for you and your organization to benefit from the AI revolution.
In later chapters, we will tackle the challenges behind operationalizing ML projects by deploying and using an open source toolchain on Kubernetes. Toward the end of the book, we will build a reusable ML platform that provides essential features that will help contribute to delivering a successful ML project.
Before we dig deeper into the software, we must have foundational knowledge, and we must know the practical steps required to successfully deliver business value with ML initiatives. With this knowledge, we will be able to address some of the challenges of implementing an ML platform and identify how they will help deliver the expected value from our ML projects. The primary reason why these promised values are not realized is that the models never get to production. For example, imagine you built an excellent ML model that predicts the outcome of football World Cup matches, but no one could use it during the tournament. As a result, even though the model is successful, it failed to deliver its expected business value. Most organizations' AI and ML initiatives are in the same state. The data science or ML engineering team may have built a perfectly working ML model that could have helped the organization's business and/or its customers; however, these models do not usually get deployed to production. So, what are the challenges teams face that prevent them from putting their ML models into production?
Before deciding to use ML for a given project, understand the problem first and assess whether it can be solved by ML. Invest enough time in working with the right stakeholders to see what the expectations are. Some problems may be better suited to traditional approaches, such as when you have predefined business rules for a given system. It is faster and easier to code rules than it is to train a model, plus you do not need a huge amount of data.
While deciding whether to use ML or not, you can think in terms of whether pattern-based results will work for your problem. If you are building a system that reads data from the frequent-flyer database of an airline to find customers to whom you want to send a promotion, a rule-based system may also give you good and acceptable results. An ML-based system may give you better matches for certain scenarios, but will the time spent on building this system be worth it?
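As a sketch of the rule-based alternative, the selection logic can be a few hand-coded conditions and needs no training data at all. The field names and thresholds below are hypothetical, not taken from the book:

```python
# A hypothetical rule-based selector for a frequent-flyer promotion.
# Field names and thresholds are illustrative assumptions.

def eligible_for_promotion(member):
    """Hand-coded business rules -- no training data or model required."""
    return (
        member["miles_last_year"] >= 25_000
        and member["flights_last_year"] >= 10
        and not member["opted_out_of_marketing"]
    )

members = [
    {"id": 1, "miles_last_year": 40_000, "flights_last_year": 18, "opted_out_of_marketing": False},
    {"id": 2, "miles_last_year": 12_000, "flights_last_year": 4,  "opted_out_of_marketing": False},
    {"id": 3, "miles_last_year": 30_000, "flights_last_year": 12, "opted_out_of_marketing": True},
]

targets = [m["id"] for m in members if eligible_for_promotion(m)]
print(targets)  # prints [1]
```

An ML model might discover subtler patterns in historical response data, but these few lines are cheap to write, easy to audit, and may already be good enough.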
The efficiency of your ML model depends on the quality and accuracy of the data, but unfortunately, data collection and processing activities do not get the attention they deserve, which proves costly in later stages of the project in terms of the model not being suitable enough for the given task.
"Everyone wants to do the model work, not the data work."
– Data Cascades in High-Stakes AI, Sambasivan et al. (see the Further reading section)
The paper cited here discusses this challenge. An interesting example quoted in the paper is of a team building a model to detect a particular pattern in patient scans, which worked brilliantly with test data. However, the model failed in production because the scans being fed into the model contained tiny dust particles, resulting in inferior performance. This example is a classic case of a team being focused on model building and not on how the model will be used in the real world.
One thing that teams should focus on is data validation and cleansing. Data is often missing or incorrect; for example, a string value in a number column, different date formats in the same field, or the same identifier (ID) used for different records when the records come from different systems. All these data anomalies may result in an inefficient model that will lead to inferior performance.
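A minimal sketch of such validation checks, using made-up records that exhibit the anomalies just mentioned (a string in a number column, mixed date formats, and a duplicate ID):

```python
from datetime import datetime

# Made-up records exhibiting common data anomalies -- not the book's dataset.
records = [
    {"id": "A1", "amount": "100.5", "date": "2022-01-15"},
    {"id": "A2", "amount": "oops",  "date": "15/01/2022"},  # bad number, other date format
    {"id": "A1", "amount": "42.0",  "date": "2022-02-01"},  # duplicate ID from another system
]

def check_numeric(value):
    """Flag string values that cannot be parsed as numbers."""
    try:
        float(value)
        return True
    except ValueError:
        return False

def parse_date(value):
    """Accept the two formats we know about; anything else is an anomaly."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    return None

bad_amounts = [r["id"] for r in records if not check_numeric(r["amount"])]

seen, duplicate_ids = set(), []
for r in records:
    if r["id"] in seen:
        duplicate_ids.append(r["id"])
    seen.add(r["id"])

print(bad_amounts)    # prints ['A2']
print(duplicate_ids)  # prints ['A1']
```

Checks like these are cheap to run early in a pipeline and catch exactly the kind of silent corruption that degrades a model later.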
Once you've been through this process and come to the decision that yes, ML is the way to go… what next?
Organizations are eager to adopt ML to drive their business growth. In many projects, the teams become too focused on technical brilliance while not delivering the business value expected from the ML initiative. This can cause early failures that may result in reduced investment for future projects. These are the two main challenges that businesses are facing in making ML mainstream in all the various parts of the business, as outlined here:
Keeping the focus on the big picture
Siloed teams

The first challenge organizations face is building an ecosystem where ML models create value for the business. The challenging part is that teams often do not focus on all aspects of a project and instead focus only on specific areas, resulting in poor value for the business.
How many organizations that we know of are successful in their ML journey? Beyond the Googles, Metas (formerly Facebook), and Netflixes of the world, there are few success stories. The number one reason is that teams focus just on building the model. So, what else is there beyond the algorithm? Google published a paper about the hidden technical debt in ML projects (see the Further reading section at the end of this chapter), and it provides a good summary of things that we need to consider to be successful.
Have a look at the following diagram:
Figure 1.2 – The components of an ML system
Can you see the small block in Figure 1.2? The block in the picture captioned ML is the ML model development part, and you can see that there are a lot more processes involved in ML projects. Let's understand a few of them, as follows:
Data collection and data verification: To have a reliable and trustworthy model, we need a good set of data. ML is all about finding patterns in the data and predicting unseen data results using those patterns. Therefore, the better the quality of your data, the better your model will perform. The data, however, comes in all shapes and sizes. Some of it may reside in files, some in proprietary databases; a dataset may come from data streams, and some data may need to be harvested from Internet of Things (IoT) devices. On top of that, the data may be owned by different teams with different security and regulatory requirements. Therefore, you need to think about technologies that allow you to collect, transform, and process data from various sources and in a variety of formats.

Feature extraction and analysis: Often, assumptions about data quality and completeness are incorrect. Data science teams perform an activity called exploratory data analysis (EDA) in which they read and process data from various sources as fast as they can. Teams further improve their understanding of the data before they invest time in processing the data at scale and going to the model-building stage. Think about how your team or organization can facilitate data exploration to speed up your ML journey.

Data analysis leads to a better understanding of data, but feature extraction is another thing. This is a process of identifying, through experiments, the set of data attributes that influence the accuracy of the model output, and which attributes are irrelevant or noise. For example, in an ML model that classifies whether a bank transaction is fraudulent or not, the name of the account holder is considered irrelevant, or noise, while the amount of the transaction could be an important feature.
The output of this process is a transformed version of the dataset that contains only relevant features and is formatted for consumption in the ML model training process or fitness function. This is sometimes called a feature set. Teams need a tool for performing such analysis and transforming data into a format that is consumable for model training. Data collection, feature extraction, and analysis are also collectively called feature engineering (FE).
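Continuing the fraud example above, a feature set is often just the raw records projected onto the chosen attributes in a fixed order. The field names here are illustrative assumptions, not the book's dataset:

```python
# Turning raw transactions into a feature set: drop attributes judged to be
# noise (the account holder's name) and keep informative ones.
# Field names and values are illustrative assumptions.

RELEVANT_FEATURES = ["amount", "hour_of_day", "is_foreign"]

transactions = [
    {"account_holder": "Jane Doe", "amount": 25.0,  "hour_of_day": 14, "is_foreign": 0},
    {"account_holder": "John Roe", "amount": 980.0, "hour_of_day": 3,  "is_foreign": 1},
]

def to_feature_vector(txn):
    """Project a raw record onto the selected feature set, in a fixed order."""
    return [txn[name] for name in RELEVANT_FEATURES]

feature_set = [to_feature_vector(t) for t in transactions]
print(feature_set)  # prints [[25.0, 14, 0], [980.0, 3, 1]]
```

The transformed output is what the training process or fitness function actually consumes; the raw records stay behind in the source systems.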
Infrastructure, monitoring, and resource management: You need computers to process and explore data, build and train your models, and deploy ML models for consumption. All these activities need processing power and storage capacity, at the lowest possible cost. Think about how your team will get access to hardware resources on-demand and in a self-service fashion. You need to plan how data scientists and engineers will be able to request the required resources in the fastest manner. At the same time, you still need to be able to follow your organization's policies and procedures. You also need system monitoring to optimize resource utilization and improve the operability of your ML platform.

Model development: Once you have data available in the form of consumable features, you need to build your models. Model building requires many iterations with different algorithms and different parameters. Think about how to track the outcomes of different experiments and where to store your models. Often, different teams can reuse each other's work to increase the velocity of the teams further. Think about how teams can share their findings. Teams must have a tool that can facilitate model training and experiment runs, record model performance and experiment metadata, store models, and manage the tagging of models and promotion to an acceptable and deployable state.

Process management: As you can see, there are a lot of things to be done to make a useful model. Think about automating the model deployment and monitoring processes. Different personas would be working on different things such as data tasks, model tasks, infrastructure tasks, and more. The team needs to collaborate and share to achieve a particular outcome. The real world keeps on changing: once your model is deployed into production, you may need to retrain your model with new data regularly.
All these activities need well-defined processes and automated stages so that the team can continue working on high-value tasks.

In summary, you will need an ecosystem that can provide solution components for all of the following building blocks. This single platform will increase the team's velocity via consistent experience within the team for all the needs of an ML system:
- Fetching, storing, and processing data
- Training, tuning, and tracking models
- Deploying and monitoring models
- Automating repetitive tasks, such as data processing and model deployment

But how can we make different teams collaborate and use a common platform to do their tasks?
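To make the training and tracking building block concrete, consider what experiment tracking boils down to: recording, for each run, the parameters tried and the metrics achieved, so that iterations can be compared later. The following is a minimal plain-Python sketch of that idea; the experiment parameters and accuracy values are hypothetical, and real tools such as MLflow (used later in this book) provide this capability and much more:

```python
import time
import uuid

class ExperimentTracker:
    """Toy experiment tracker: records parameters and metrics per run."""

    def __init__(self):
        self.runs = []

    def start_run(self, params):
        # Each run gets an ID, a start time, its parameters, and a metrics dict
        run = {"id": uuid.uuid4().hex, "start": time.time(),
               "params": params, "metrics": {}}
        self.runs.append(run)
        return run

    def log_metric(self, run, name, value):
        run["metrics"][name] = value

    def best_run(self, metric):
        # Pick the run with the highest recorded value of the given metric
        return max(self.runs,
                   key=lambda r: r["metrics"].get(metric, float("-inf")))

tracker = ExperimentTracker()
# Hypothetical iterations over one hyperparameter, with made-up accuracies
for n_estimators, acc in [(50, 0.88), (100, 0.92), (200, 0.91)]:
    run = tracker.start_run({"algorithm": "random_forest",
                             "n_estimators": n_estimators})
    # ... model training would happen here ...
    tracker.log_metric(run, "accuracy", acc)

best = tracker.best_run("accuracy")
print(best["params"])  # → {'algorithm': 'random_forest', 'n_estimators': 100}
```

Even in this toy form, the value is visible: with runs recorded in one place, "which settings worked best?" becomes a query rather than a memory exercise, and the run history can be shared across the team.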
To complete an ML project, you need to have a team that comprises various roles. However, with diverse roles, there comes a challenge of communication, team dynamics, and conflicting priorities. In enterprises, these roles often belong to different teams in different business units (BUs).
ML projects need a variety of teams and personas to be successful. The following diagram shows some of the roles and responsibilities that are required to complete a simple ML project:
Figure 1.3 – Silos involved in ML projects
Let's look at these roles in more detail here:
- Data scientist: This role is the most widely understood one. This persona or team is responsible for exploring the data and running experiment iterations to determine which algorithm is suitable for a given problem.
- Data engineer: The persona or team in this role is responsible for ingesting data from various sources, cleaning it, and making it useful for the data science teams.
- Developers and operations: Once the model is built, this team is responsible for taking the model and deploying it so that it can be used. The operations team is responsible for making sure that compute and storage are available for the other teams to perform data processing, model life-cycle operations, and model inference.
- Business subject-matter expert (SME): Even though data scientists build the ML model, understanding the data and the business domain is critical to building the right model. Imagine a data scientist building a model for predicting COVID-19 without understanding the different parameters. An SME, who would be a medical doctor in this case, would be required to help the data scientists understand the data before moving on to the model-building phase.

Of course, even with the building blocks in place, you're unlikely to succeed at the first attempt.
Building a cross-functional team is not enough. Make sure that the team is empowered to make its own decisions and feels comfortable experimenting with different approaches. The data and ML fields are fast-moving, and the team may choose to adopt a new technology or process, or let go of an existing one, based on the given success criteria.
Form a team of people who are passionate about the work, and when you give them autonomy, you will have the best possible outcome. Enable your teams so that they can adapt to change quickly and deliver value for your business. Establish an iterative and fast feedback cycle where teams receive feedback on work that has been delivered so far. A quick feedback loop will put more focus on solving the business problem.
However, this approach brings its own challenges. Adopting modern technologies may be difficult and time-consuming. Think of Amazon Marketplace: if you want to sell a hot new product, the marketplace lets you bring it to market faster because it takes care of many of the moving parts required to make a sale. Similarly, the ML platform you will learn about in this book enables you to experiment with modern approaches and technologies with ease by supplying basic common services and sandbox environments in which your team can experiment quickly.
It is critical to the success of ML projects that people who belong to distinct groups form a cross-functional, autonomous team. This new team will move with higher velocity, without internal friction, and avoid tedious processes and delays. It is equally critical that the cross-functional team is empowered to drive its own decisions and is supported by self-serving platforms so that it can work independently. The ML platform you will see in this book provides the basis of one such platform, where teams can collaborate and share.
Now, let's take a look at what kind of platform will help you address the challenges we have discussed.
In this section, we will talk about the capabilities of the ML platform that you will need to consider. The aim is to make you aware of the basic building blocks that could form an ecosystem for your team to help you in your ML journey. An ML platform can be thought of as a set of components that assists in the faster development and deployment of ML models and data pipelines.
There are three main characteristics of an ML platform, as outlined here:
- A complete ecosystem: The platform should provide an end-to-end (E2E) solution that includes data life-cycle management, ML life-cycle management, application life-cycle management, and observability.
- Built on open standards: The platform should provide a way to extend and build on the existing baseline. Because the field is fast-moving, it is critical that you can further enhance, tailor, and optimize the platform for your specific needs.
- Self-serving: The platform should be able to provide the resources required by teams automatically and on-demand, from hardware requests to deploying software in production. The platform automates the provisioning of resources based on enterprise controls and recovers them once the job is completed. The resources can be central processing units (CPUs), memory, or disk, or software such as integrated development environments (IDEs) for writing code, or a combination of these.

The following diagram shows the various components of an ML platform that serves different personas, allowing them to collaborate on a common platform:
Figure 1.4 – Personas and their interaction with the platform
Apart from the characteristics described previously, the platform must have the following technical capabilities:
- Workflow automation: The platform should have some form of workflow automation capability, where data engineers can create jobs that perform repetitive tasks such as data ingestion and preparation, and data scientists can orchestrate model training and automate model deployments.
- Security: The platform must be secured to prevent data leaks and data loss, which can have a negative impact on the business.
- Observability: We do not want to run applications without observability, whether it is a traditional application or an ML model. Deploying applications in production without observability is like riding a bike blindfolded. The platform should have a good amount of observability, so that you can monitor the health and performance of the entire system or a sub-system in near real time. This should also include an alerting capability.
- Logging: Logging plays a key role in understanding what happened when systems start behaving in an unexpected way. The platform must have a solid logging mechanism to allow operations teams to better support the ML project.
- Data processing and pipelining: Because ML projects rely on huge amounts of data, the platform must include a reliable, fully featured data processing and data pipelining solution that can scale horizontally.
- Model packaging and deployment: Not all data scientists are experienced software engineers. Although some may have experience in writing applications, it is not safe to assume that all data scientists can write production-grade applications and deploy them to production. Therefore, the platform must be able to automatically package an ML model into an application and serve it.
- ML life cycle: The platform must also be capable of managing ML experiments, tracking performance, storing training and experiment metadata and feature sets, and versioning models. This not only allows data scientists to work efficiently, but also allows them to work collaboratively.
- On-demand resource allocation: One important feature an ML platform should have is the capability for data scientists and data engineers to provision their own runtime resources automatically and on-demand. This eliminates the need for manual requisition of resources and the time wasted on waiting and handovers with operations teams. The platform must allow its users to create their own environments and to allocate the right amount of compute resources they need to do their jobs.

There are already platform products that have most, if not all, of the capabilities you have just learned about. What you will learn in the later chapters of this book is how to build one such platform based on OSS on top of Kubernetes.
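To make the workflow automation capability concrete, here is a toy sketch of a pipeline: a few named steps executed in dependency order, with each step's output feeding the next. This is a drastically simplified version of what a workflow engine such as Airflow (covered later in this book) automates; all step names, functions, and data values here are hypothetical:

```python
# Toy pipeline runner: runs steps in declared order, passing each
# step's output to the next. A real workflow engine adds scheduling,
# retries, parallelism, and monitoring on top of this basic idea.

def ingest():
    # Pretend to pull raw records from a source system (made-up data)
    return [{"age": 40, "label": 1}, {"age": 60, "label": 0}]

def prepare(records):
    # Simple feature preparation: scale age into the [0, 1] range
    return [{"age": r["age"] / 100.0, "label": r["label"]} for r in records]

def train(features):
    # Stand-in for model training: produce a trivial "model" artifact
    threshold = sum(f["age"] for f in features) / len(features)
    return {"type": "threshold_model", "threshold": threshold}

def run_pipeline(steps):
    result = None
    for name, fn in steps:
        print(f"running step: {name}")
        result = fn() if result is None else fn(result)
    return result

model = run_pipeline([
    ("ingest", ingest),
    ("prepare", prepare),
    ("train", train),
])
print(model)
```

Encoding the workflow as data (a list of named steps) rather than ad hoc scripts is what makes it automatable: the same definition can be scheduled nightly, rerun on new data, or extended with a deployment step.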
Even though ML is not new, the recent availability of relatively cheap computing power has allowed many companies to start investing in it. This widespread availability of hardware comes with its own challenges: often, teams do not focus on the big picture, which may result in ML initiatives not delivering the value they promise.