Many IT leaders and professionals are adept at extracting data from a particular type of database and deriving value from it. However, designing and implementing an enterprise-wide holistic data platform with purpose-built data services, all seamlessly working in tandem with the least amount of manual intervention, still poses a challenge.
This book will help you explore end-to-end solutions to common data, analytics, and AI/ML use cases by leveraging AWS services. The chapters systematically take you through all the building blocks of a modern data platform, including data lakes, data warehouses, data ingestion patterns, data consumption patterns, data governance, and AI/ML patterns. Using real-world use cases, each chapter highlights the features and functionalities of numerous AWS services to enable you to create a scalable, flexible, performant, and cost-effective modern data platform.
By the end of this book, you’ll be equipped with all the necessary architectural patterns and be able to apply this knowledge to efficiently build a modern data platform for your organization using AWS services.
Page count: 527
Publication year: 2023
Modern Data Architecture on AWS
A Practical Guide for Building Next-Gen Data Platforms on AWS
Behram Irani
BIRMINGHAM—MUMBAI
Copyright © 2023 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author(s), nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Niranjan Naikwadi
Publishing Product Manager: Tejashwini R
Book Project Manager: Kirti Pisat
Senior Editor: Sushma Reddy
Technical Editor: Sweety Pagaria
Copy Editor: Safis Editing
Proofreader: Safis Editing
Indexer: Tejal Soni
Production Designer: Ponraj Dhandapani
DevRel Marketing Coordinator: Vinishka Kalra
First published: August 2023
Production reference: 1290823
Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB, UK.
ISBN 978-1-80181-339-6
www.packtpub.com
Behram Irani is currently a technology leader with Amazon Web Services (AWS) specializing in data, analytics, and AI/ML. He has spent over 18 years in the tech industry helping organizations, from start-ups to large-scale enterprises, modernize their data platforms. In his last six years at AWS, Behram has been a thought leader in the data, analytics, and AI/ML space, publishing multiple papers and leading digital transformation efforts for many organizations across the globe.
Behram completed his Bachelor of Engineering in Computer Science at the University of Pune and holds an MBA from the University of Florida.
Jongnam Lee is a Customer Engineer at Moloco, Inc., which provides ML-based retail media solutions to e-commerce businesses. As quality data is the core of machine learning, data analytics is the core of his job. Prior to Moloco, he was the Lead Solutions Architect of the AWS Well-Architected Analytics Lens program until March 2022. As a tenured Amazonian who joined the company in 2012, he served in various roles in AWS and Amazon.com, including Amazon.com’s data lake operations and AWS cost optimization initiatives in 2018–2020. Before AWS, he was a senior software engineer at Samsung Electronics HQ in Korea for eight years.
Bo Thomas leads the AWS Analytics Specialist Solutions Architect organization for the US East, Latin America, and specialty business segments. In this role, he advises some of the largest companies in the world on how best to design and manage their analytics and AI/ML platforms. He has more than 10 years of experience leading analytics, data engineering, and research science teams across Amazon’s businesses. Prior to his current role, he led Amazon.com’s enterprise people data warehouse and data lake platforms.
Prior to working at Amazon, Bo was an officer in the US Army leading cavalry units, including a 15-month deployment to Iraq. He has a bachelor’s degree in economics from West Point and an MBA from Duke University.
Gareth Eagar has worked in the IT industry for over 25 years, starting in South Africa, working in the United Kingdom for a few years, and is now based in the United States. In 2017, Gareth started working at Amazon Web Services (AWS), and has held roles as both a Solution Architect and a Data Architect.
Gareth has become a recognized subject matter expert for building data lakes on AWS, and in 2019 he launched the Data Lake Day educational event at the AWS Lofts in NYC and San Francisco. He has delivered a number of public talks and webinars on big data-related topics, and in 2021 he published a book called “Data Engineering with AWS”.
Many IT leaders and professionals know how to get data into a particular type of database and derive value from it. However, designing and implementing an enterprise-wide, holistic data platform with purpose-built data services, all working seamlessly in tandem with minimal manual intervention, remains a challenge.
This book covers end-to-end solutions for many of the common data, analytics, and AI/ML use cases that organizations want to solve using AWS services. The book systematically lays out all the building blocks of a modern data platform, including data lakes, data warehouses, data ingestion patterns, data consumption patterns, data governance, and AI/ML patterns. Using real-world use cases, each chapter highlights the features and functionalities of many AWS services that help create a scalable, flexible, performant, and cost-effective modern data platform.
By the end of this book, readers will be equipped with all the necessary architecture patterns and will be able to apply this knowledge to build a modern data platform for their organization using AWS services.
This book is specifically geared towards helping data architects, data engineers, and other professionals involved in building data platforms. The use case-driven approach in this book helps them conceptualize possible solutions to specific use cases and provides them with design patterns to build data platforms for any organization.
Technical leaders and decision-makers will also benefit from this book, as it gives them a perspective on what the overall data architecture looks like for their organization and how each component of the platform helps with their business needs.
Prologue, Data and Analytics Journey so far, provides historical context on what a data platform looks like in the on-premises world. In this prologue, we will discuss the traditional platform components and their benefits, and then pivot to their shortcomings in meeting new business objectives. This will provide context for the need to build a modern data architecture.
Chapter 1, Modern Data Architecture on AWS, describes what it means to create a modern data architecture. We will also look at how AWS services help materialize this concept and why it is important to create this foundation for current and future business needs.
Chapter 2, Scalable Data Lakes, lays down the foundation of the modern data architecture by establishing a data lake on AWS. We will also look at different layers of the data lake and how each layer has a specific purpose.
Chapter 3, Batch Data Ingestion, provides options to move data in batches from multiple source systems into AWS. We will explore different AWS services that assist in migrating data in bulk from a variety of source systems.
Chapter 4, Streaming Data Ingestion, provides an overview of the need for a real-time streaming architecture pattern and how AWS services help solve use cases that require streaming data to be ingested and consumed in the modern data platform.
Chapter 5, Data Processing, provides options to process and transform data so that it can eventually be consumed for analytics. We will look at some AWS services that help provide scalable, performant, and cost-effective big data processing, especially for running Apache Spark-based workloads.
Chapter 6, Interactive Analytics, provides insights into ad hoc analytics use cases, along with the AWS services that help solve them.
Chapter 7, Data Warehousing, covers a wide range of use cases that can be solved using a modern cloud data warehouse on AWS. We will look at multiple design patterns, including data ingestion, data transformation, and data consumption using the data warehouse on AWS.
Chapter 8, Data Sharing, provides context around how data can be shared within a modern data platform, without creating complete ETL pipelines and without duplicating data in multiple places.
Chapter 9, Data Federation, covers the mechanisms of data federation and the types of use cases that can be solved using federated queries.
Chapter 10, Predictive Analytics, covers a whole range of use cases, along with the services, features, and tools provided by AWS to solve AI, ML, and deep learning-based business problems, with the common goal of achieving predictive analytics.
Chapter 11, Generative AI, provides a variety of use cases across multiple industries that can be solved using GenAI and shows how AWS provides services and tools to help fast-track building GenAI-based applications.
Chapter 12, Operational Analytics, introduces the need for operational analytics, especially log analytics, and how AWS helps with this aspect of the data platform.
Chapter 13, Business Intelligence, provides context around the need for a modern business intelligence tool for creating business-friendly reports and dashboards that support rich visualizations. We will look at how AWS helps with such use cases.
Chapter 14, Data Governance, lays the groundwork for the need for unified data governance and covers many dimensions of data governance, along with the AWS services that assist in solving those use cases.
Chapter 15, Data Mesh, introduces the concept of a data mesh along with its importance in the modern data platform. We will look at the pillars of a data mesh and the AWS services that help solve use cases that require a data mesh pattern.
Chapter 16, Performant and Cost-Effective Data Platform, covers a wide range of options to ensure the data platform built using AWS services is cost-effective as well as performant.
Chapter 17, Automate, Operationalize and Monetize, wraps up the book with concepts around automating the data platform using DevOps, DataOps and MLOps mechanisms. Finally, we will look at options to monetize the modern data platform built on AWS.
The book is geared toward data professionals who are eager to build a modern data platform using many of the AWS data and analytics services. A basic understanding of data and analytics architectures and systems is desirable, along with a beginner-level understanding of the AWS Cloud.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “Mount the downloaded WebStorm-10*.dmg disk image file as another disk in your system.”
A block of code is set as follows:
INSERT INTO processed_cloudtrail_table
SELECT * FROM raw_cloudtrail_table
WHERE conditions;

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Select System info from the Administration panel.”
Use-cases
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you’ve read Modern Data Architecture on AWS, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
Thanks for purchasing this book!
Do you like to read on the go but are unable to carry your print books everywhere?
Is your eBook purchase not compatible with the device of your choice?
Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.
Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.
The perks don’t stop there; you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.
Follow these simple steps to get the benefits:
Scan the QR code or visit the link below:
https://packt.link/free-ebook/9781801813396
Submit your proof of purchase
That’s it! We’ll send your free PDF and other benefits to your email directly.

In this part, we will explore what a modern data architecture entails and how AWS embraces this architecture pattern. We will then expand on how to set up a foundational data platform by building a data lake on AWS.
This part has the following chapters:
Chapter 1, Modern Data Architecture on AWS
Chapter 2, Scalable Data Lakes

“We are surrounded by data but starved for insights”
– Jay Baer
We have been surrounded by digital data for almost a century now and every decade has had its unique challenges regarding how to get the best value out of that data. But these challenges were narrow in scope and manageable since the data itself was manageable. Even though data was rapidly growing in the 20th century, its volume, velocity, and variety were still limited in nature. And then we hit the 21st century and the world of data drastically changed. Data started to exponentially grow due to multiple reasons:
The adoption of the internet picked up speed and data grew into big data
Smartphone devices became a common household entity and these devices all generated tons of data
Social media took off and added to the deluge of information
Robotics, smart edge devices, industrial devices, drones, gaming, VR, and other artificial intelligence-driven gadgets took the growth of data to a whole new level

However, across all this, the common theme that exists even today is that data gets produced, processed, stored, and consumed.
Now, even though the history of data and analytics goes back many decades, I don’t want to dig everything up. Since this book revolves around cloud computing technologies, it is important to understand how we got here, what systems were in place in the on-premises data center world, and why those same systems and the architectural patterns surrounding them struggle to cater to the business and technology needs of today.
In this prologue, we will cover the following main topics:
Introduction to the data and analytics journey
Traditional data platforms
Challenges with on-premises data systems
What this book is all about

If you are already well versed with the traditional data platforms and their challenges, you can skip this introduction and directly jump to Chapter 1.
The online transaction processing (OLTP) and online analytical processing (OLAP) systems worked great by themselves for a very long time when data producers were limited, the volume of data was under control, and data was mostly structured in tabular format. The last 20 years have seen a seismic shift in the way new businesses and technologies have come up.
As the volume, velocity, and variety of data started to pick up steam, data grew into big data and the data processing techniques needed a major overhaul. This gave rise to the Apache Hadoop framework, which changed the way big data was processed and stored. With more data, businesses wanted to get more descriptive and diagnostic analytics out of their data. At the same time, another technology was gaining rapid traction, which gave organizations hope that they could look ahead to the future and predict what may happen in advance so that they could take immediate actions to steer their businesses in the right direction. This was made possible by the rise of artificial intelligence and machine learning and soon, large organizations started investing in predictive analytics projects.
And just when we thought we had big data under control with new frameworks, the data floodgates opened up. The last 10 to 15 years have been revolutionary with the onset of smart devices, including smartphones. Connectivity among all these devices and systems made data grow exponentially. This was termed the Internet of Things (IoT). And to add to the complexity, these devices started to share data in near real time, which meant that data had to be streamed immediately for consumption. The following figure highlights many of the sources from where data gets generated. A lot of insights can be derived from all this data so that organizations can make faster and better decisions:
Figure 00.1 – Big data sources
This also meant that organizations started to carefully organize their technical workforce into personas for dealing with data. The people processing big data came to be known as data engineers, the people dealing with data for future predictions were the data scientists, and the people analyzing the data with various tools were the data analysts. Each type of persona had a well-defined task and there was a strong desire to create or purchase the best technological tool out there to make their day-to-day lives easier.
From a data and analytics point of view, systems started to grow bigger with extra hardware. Organizations started to expand their on-premises data centers with the latest and greatest servers to process all this data as fast as possible and create value for their businesses. However, a lot of architecture patterns for data and analytics remained the same, which meant that many of the old use cases were still getting solved. But with new demands from these businesses, pain areas started popping up more frequently.
Before we get into architecting data platforms in a modern way, it is important to understand the traditional data platforms and know their strengths and limitations. Once we understand the challenges of traditional data platforms in solving new business use cases, we can design a modern data platform in a holistic manner.
Throughout the 1980s and 1990s, the three-tier architecture became a popular way of producing, processing, and storing data. Almost every organization used this pattern as it met the business needs with ease. The three tiers of this architecture were the presentation tier, the application tier, and the data tier:
The presentation tier was the front-facing module and was created either as a thick client – that is, software was installed on the client’s local machine – or as a thin client – that is, a browser-based application.
The application tier would receive the data from the presentation tier and process this data with business logic hosted on the application server.
The data tier was the final resting place for the business data. The data tier was typically a relational database where data was stored in rows and columns of tables.

Figure 00.2 represents a typical three-tier architecture:
Figure 00.2 – A traditional three-tier architecture pattern
This three-tier architecture worked well to meet the transactional nature of businesses. To a certain extent, this system was able to help with creating a basic reporting mechanism to help organizations understand what was happening with their business. But the kind of technology used in this architecture fell short of going a step further – to identify and understand why certain things were happening with their business. So, a new architecture pattern was required that could decouple this transactional system from the analytics type of operations. This paved the way for the creation of an enterprise data warehouse (EDW).
The need for a data warehouse came from the realistic expectations of organizations to derive business intelligence out of the data they were collecting so that they could get better insights from this data and make the necessary adjustments to their business practices. For example, if a retailer is seeing a steady decline in sales from a particular region, they would want to understand what is contributing to this decline.
Now, let’s capture the data flow. All the transactional data is captured by the presentation tier, processed by the application tier, and stored in the data tier of the three-tier architecture. The database behind the data tier is always online and optimized for processing a large number of transactions, which come in the form of INSERT, UPDATE, and DELETE statements. This database also emphasizes fast query processing while maintaining atomicity, consistency, isolation, and durability (ACID) compliance. For this reason, this type of data store is called OLTP.
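To make this concrete, here is a minimal sketch of a typical OLTP write in standard SQL, assuming hypothetical orders and inventory tables; the exact transaction syntax varies slightly between relational databases:

-- One business event recorded as a single atomic (ACID) transaction
BEGIN TRANSACTION;

-- Record the new order
INSERT INTO orders (order_id, customer_id, product_id, quantity, order_ts)
VALUES (1001, 42, 7, 2, CURRENT_TIMESTAMP);

-- Keep stock levels consistent with the order
UPDATE inventory
SET quantity_on_hand = quantity_on_hand - 2
WHERE product_id = 7;

COMMIT;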
To further analyze this data, a path needs to be created that will bring the relevant data over from the OLTP system into the data warehouse. This is where the extract, transform, and load (ETL) layer comes into the picture. And once the data has been brought over to the data warehouse, organizations can create the business intelligence (BI) they need via the reporting and dashboarding capabilities provided by the visualization tier. We will cover the ETL and BI layers in detail in later chapters, but the focus right now is walking through the process and the history behind them.
The data warehouse system is distinctly different from the transactional database system. Firstly, the data warehouse does not constantly get bombarded by transactional data from customer-facing applications. Secondly, the types of operations that are happening in the data warehouse system are specific to mining information insights from all the data, including historical data. Therefore, this system is constantly doing operations such as data aggregation, roll-ups (data consolidation), drill-downs, and slicing and dicing the data. For this reason, the data warehouse is called OLAP.
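By contrast, an OLAP workload typically scans and aggregates large volumes of historical data in a single query. The following is a minimal sketch in standard SQL, assuming a hypothetical sales table; ROLLUP is part of the SQL standard, although support varies by engine:

-- Aggregate three years of history, producing per-category totals,
-- per-region subtotals (roll-ups), and a grand total
SELECT region,
       product_category,
       SUM(sales_amount) AS total_sales
FROM sales
WHERE sale_date >= DATE '2021-01-01'
GROUP BY ROLLUP (region, product_category);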
The following figure shows the OLTP and OLAP systems working together:
Figure 00.3 – The OLTP and OLAP systems working together
The preceding diagram shows all the pieces together. This architectural pattern is still relevant and works great in many cases. However, in the era of cloud computing, business use cases are also rapidly evolving. In the following sections, we will take a look at variations of this design pattern, as well as their advantages and shortcomings.
Ralph Kimball, one of the original architects of data warehousing, proposed the idea of designing the data warehouse with a bottom-up approach. This involved creating many smaller purpose-built data marts inside a data warehouse. A data mart is a subset of the larger data warehouse with a focus on catering to use cases for a specific line of business (LOB) or a specific team. All of these data marts can be combined to form an enterprise-wide data warehouse. The design of data marts is also kept simple by having the data model as a star schema to a large extent. A star schema keeps the data in sets of denormalized tables. These are known as fact tables, and they store all the transactional and event data. Since these tables store all the fast-moving granular data, they accumulate a large number of records over a short period. Then, there are the dimension tables, which typically store characteristics data such as details about people and organizations, product information, geographical information, and so forth. Since such information doesn’t rapidly get produced or changed over a short period, compared to fact tables, dimension tables are relatively smaller in terms of the number of records stored. The following figure shows a bottom-up EDW design approach where individual data marts contribute toward a bigger data warehouse:
Figure 00.4 – Bottom-up EDW design
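To make the star schema described above concrete, here is a minimal sketch of one hypothetical fact table, two dimension tables, and a typical data mart query; the table and column names are illustrative only:

-- Dimension tables: relatively small, descriptive, slowly changing data
CREATE TABLE dim_product (
    product_key  INT PRIMARY KEY,
    product_name VARCHAR(100),
    category     VARCHAR(50)
);

CREATE TABLE dim_store (
    store_key  INT PRIMARY KEY,
    store_name VARCHAR(100),
    region     VARCHAR(50)
);

-- Fact table: large, fast-growing transactional grain,
-- keyed to the dimensions via product_key and store_key
CREATE TABLE fact_sales (
    sale_id      BIGINT,
    product_key  INT,
    store_key    INT,
    sale_date    DATE,
    quantity     INT,
    sales_amount DECIMAL(12, 2)
);

-- A typical data mart query joins the fact table to its dimensions
SELECT s.region, p.category, SUM(f.sales_amount) AS total_sales
FROM fact_sales f
JOIN dim_product p ON f.product_key = p.product_key
JOIN dim_store s ON f.store_key = s.store_key
GROUP BY s.region, p.category;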
Let’s look at a few benefits of the bottom-up approach:
The EDW gets systematically built over a certain period with business-specific groupings of data marts.
The data model’s design is typically created via star schemas, which makes the model denormalized in nature. Some data becomes redundant in this approach but overall, it helps in making the data marts perform better.
An EDW is easier to create since the time taken to set up individual business-specific data marts is shorter compared to setting up an enterprise-wide warehouse.
An EDW that contains data marts also makes it better suited for setting up data lakes. We will cover everything about data lakes in subsequent chapters.

Now, let’s look at the shortcomings of the bottom-up approach:
It is challenging to achieve a fully harmonized integration layer because the EDW is purpose-built for each use case in the form of data marts. Data redundancy also makes it difficult to create a single source of truth.
Denormalized schemas create data redundancy, which makes the tables grow very large. This slows down the performance of ETL job pipelines.
Since the data marts are tightly coupled to the specific business use cases, managing structural changes and their dependencies on the data warehouse becomes a cumbersome process.

Bill Inmon, widely recognized as the father of the data warehouse, proposed the idea of designing the data warehouse with a top-down approach. In this approach, a single source of truth for the data in the form of an EDW is constructed first, using a normalized data model to reduce data redundancy. Data from different sources is mapped to a single data model, which means that all the source elements are transformed and formatted to fit in this enterprise-wide structure created in the data warehouse. The following figure shows a top-down EDW design approach where the warehouse is built first before smaller data marts are created for consumers:
Figure 00.5 – Top-down EDW design
Let’s look at a few benefits of the top-down approach:
The data model is highly normalized, which reduces data redundancy
Since it’s not tied to a specific LOB or use case, the data warehouse can evolve independently at an enterprise level
It provides flexibility for any business requirement changes or data structure updates
ETL pipelines are simpler to create and maintain

Now, let’s look at the shortcomings of the top-down approach:
A normalized data model increases the complexity of schema design
A large number of joins on the normalized tables can make the system compute-intensive and expensive over time
Additional logic is required to create a business-specific data consumption layer, which means additional ETL processes are needed to create data marts from the unified EDW

As data grew exponentially, so did the on-premises systems. However, visible cracks started to appear in the legacy way of architecting data and analytics use cases.
The hardware that was used to process, store, and consume data had to be procured up-front, and then installed and configured before it was ready for use. So, there was operational overhead and risks associated with procuring the hardware, provisioning it, installing software, and maintaining the system all the time. Also, to accommodate for future data growth, people had to estimate additional capacity way in advance. The concept of hardware elasticity didn’t exist. The lack of elasticity in hardware meant that there were scalability risks associated with the systems in place, and these risks would surface whenever there was a sudden growth in the volume of data or when there was a market expansion for the business.
Buying all this extra hardware up-front also meant that a huge capital expenditure investment had to be made for the hardware, with all the extra capacity lying unused from time to time. Also, software licenses had to be paid for and those were expensive, adding to the overall IT costs. Even after buying all the hardware upfront, it was difficult to maintain the data platform’s high performance all the time. As data volumes grew, latency started creeping in, which adversely affected the performance of certain critical systems.
As data grew into big data, the type of data produced was not just structured data; a lot of business use cases required semi-structured data, such as JSON files, and even unstructured data, such as images and PDF files. In subsequent chapters, we will go through some use cases that specify different types of data.
As the sources of data grew, so did the number of ETL pipelines. Managing these pipelines became cumbersome. And on top of that, with so much data movement, data started to duplicate at multiple places, which made it difficult to create a single source of truth for the data.
On the flip side, with so many data sources and data owners within an organization, data became siloed, which made it difficult to share across different LOBs in the organization.
Most of the enterprise data was stored either in an OLTP system such as an RDBMS or in an OLAP system such as a data warehouse. What this meant was that organizations tried to solve most of their new use cases using the systems they had invested so heavily in. The challenge was that these systems were built and optimized for specific types of operations only. Soon, it became evident that to solve other types of data and analytics use cases, specific types of systems needed to be in place to meet the performance requirements.
Lastly, as businesses started to expand in other geographies, these systems needed to be expanded to other locations. And a lot of time, effort, and money was spent scaling the data platform and making it resilient in case of failures.
Before we wrap up this prologue and dive into more details in subsequent chapters, I want to lay the foundation for what you should expect from this book and how the content is laid out.
When you think of a data platform in an organization, it contains a lot of systems that work in tandem to make the platform operational. A data platform contains different types of purpose-built data stores, different types of ETL tools and pipelines for data movement between the data stores, different types of systems that allow end users to consume the data, and different types of security and governance mechanisms in place to keep the platform protected and safe.
To allow the data platform to cater to different types of use cases, it needs to be designed and architected in the best possible manner. With exponential data growth and the need to solve new business use cases, these architectural patterns need to constantly evolve, not just for current needs but also for future ones. Every organization is looking to move to the public cloud as quickly as they can to make their data platforms scalable, agile, performant, cost-effective, and secure.
Amazon Web Services (AWS) provides the broadest and deepest set of data, analytics, and AI/ML services. Organizations can use AWS services to help them derive insights from their data. This book will walk you through how to architect and design your data platform, for specific business use cases, using different AWS services.
In Chapter 1, we will understand what a modern data architecture on AWS looks like, and we will also look at what the pillars of this architecture are. The remainder of this book is organized around those pillars. We will start with a typical data and analytics use case and build on top of it as new use cases come along. By doing this, you will see the progressive build-up of the data platform for a variety of use cases.
One thing to note is that this book won’t have a lot of hands-on coding or other implementation exercises. The idea here is to provide architecture patterns and show how multiple AWS services, along with their specific features, help solve a particular problem. However, at the end of each chapter, I will provide links to hands-on workshops, where you can follow step-by-step instructions to build the components of a modern data platform in your AWS account.
Finally, due to limited space in this book, not every use case for each of the components of the modern data platform can be covered. The idea here is to give you a simple but holistic view of what possible use cases might look like and how you can leverage some key features of many of the AWS services to get toward a working solution. A solution can be achieved in many possible ways, and every solution has pros and cons that are very specific to the implementation. Technology evolves fast and so do many of the AWS services; always do your due diligence and look out for better ways to solve problems.
With that, this short introduction has come to an end. The idea here was to provide a quick history of how data and analytics evolved. We went through the different types of data warehouse designs, along with their pros and cons. We also looked at how the recent exponential growth of data has made it difficult to use the same type of system architecture for all types of use cases.
This gives us a perfect launching pad to understand what modern data architecture is and how it can be architected using different AWS data and analytics services.
Before we dive deep into the actual data and analytics use cases and how to design and build them, let’s address the elephant in the room—what is a modern data architecture, and why build it on Amazon Web Services (AWS)?
One of the fundamental tenets of a modern data architecture on AWS is to seamlessly integrate your data lake, data warehouse, and purpose-built data stores. In the prologue, we looked at what a data warehouse is and what it does. We also looked at the data tier in a three-tier architecture, typically referred to as a relational database management system (RDBMS) and considered a type of purpose-built store. The type of system we haven’t really explored in much detail yet is the data lake. The next chapter is completely dedicated to data lakes, but before we go any further in this chapter, it is important to get some context around the need for data lakes in the first place.
In this chapter, we will cover the following main topics:
Data lakes
The role of a modern data architecture
Modern data architecture on AWS
Pillars of a modern data architecture

Simply put, a data lake is a centralized repository to store all kinds of data. Data can be structured (such as relational database data in tabular format), semi-structured (such as JSON), or unstructured (such as images, PDFs, and so on). Data from all the heterogeneous source systems is collected and processed in this single repository and consumed from it. In its early days, Apache Hadoop became the go-to choice for setting up data lakes. The Hadoop framework provided a storage layer called the Hadoop Distributed File System (HDFS) and a data processing layer called MapReduce. Organizations started using this data lake as a central place for storing and processing all kinds of data. The data lake provided a great alternative for storing and processing data outside relational databases and data warehouses. But soon, setting up data lakes on on-premises infrastructure became a nightmare. We will look at those challenges as we progress through this chapter.
The following diagram shows, at a high level, a data lake as the central data repository for all kinds of data:
Figure 1.1 – A data lake conceptualization
At a high level, we all now know what a data lake is. But we didn’t really get to the core question: why do we need it in the first place?
Let’s look at some of the main reasons why data lakes play a key role:
As data grew, so did the variety of data. Databases and data warehouses were designed to store and process structured data in rows and columns of tables. Deriving insights from semi-structured and unstructured data gained traction, which gave a push to build data lakes.
In the prologue, we touched upon the data silo problem and how businesses struggle to get a complete view of data in a single place. Data lakes solve this problem by making all the enterprise-wide data available in a centralized location for analytics so that businesses can derive value from this data. This enables data democratization.
For data to be stored in databases and data warehouses, a schema needs to be designed first. Table and column definitions need to be created before data can be loaded into these structures. But a lot of times, the data structures are defined by how the data eventually gets consumed. Data lakes allow a schema-on-read mechanism, which makes it easy to create schemas depending on data consumption patterns (see the sketch below).
As the data lake storage is in a centralized location, the data ingestion process becomes easier by simplifying the extract, transform, and load (ETL) pipelines. This allows for quicker ingestion of data into the data lake.
Since databases and data warehouses are built for specific operations, they tend to perform better for specific use cases. However, the dedicated hardware/software makes these systems expensive and slow for all other types of analytics. A data lake decouples the storage from compute, which means that you don’t have to keep the compute allocated to the process all the time. This helps lower setup and operating expenses.

The list of advantages could run into multiple pages, but it’s obvious that data lakes are here to stay. Along the way, while data lakes looked promising for solving many use cases, their adoption hit some challenges, specifically in on-premises infrastructure.
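To illustrate the schema-on-read idea mentioned in the list above, the following is a minimal sketch in Athena/Hive-style SQL. No data is loaded or moved; a schema is simply projected over files that already sit in the data lake, and it can be redefined as consumption patterns change. The bucket path and columns are hypothetical:

-- The table is just a schema applied at read time to files already in the lake
CREATE EXTERNAL TABLE clickstream_events (
    event_id   STRING,
    user_id    STRING,
    event_type STRING,
    event_ts   TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://example-data-lake/raw/clickstream/';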
The following are some of the challenges encountered so far with data lakes:
Since data lakes are the central repository for all kinds of data from all types of systems in the entire organization, it becomes difficult to manage the security and governance of the data. Unless there are strong guardrails put around the data lake, there is always the fear of turning the data lake into a data swamp, which could lead to a security breach, with sensitive and confidential data being leaked out.
With large amounts of data being collected, processed, and consumed, a lot of infrastructure and tools need to be put in place to make the data lake operational. So, the scalability, durability, and resiliency of this system are always a challenge, specifically in the on-premises world.

So, by now, we have understood some of the key data systems that make up a data platform: databases, data warehouses, and data lakes. We also now know how they help businesses to derive value from their data. While these systems come with a lot of promise and possibilities, the challenges we discussed around all of them are real. These challenges get amplified by the rigid constraints of on-premises infrastructure. Ultimately, it’s the business that feels the pain from all the technological challenges.
We are now in the era of cloud computing, and just in the last decade, so much has changed and evolved. We are at a stage where we have the freedom to rearchitect entire systems to take advantage of the cloud. We no longer have to make all the compromises we made in the on-premises way of architecting data platforms.
This brings us to the focal point of this book, around what a modern data architecture looks like and why we need it.
A modern data architecture removes the rigid boundaries between data systems and seamlessly integrates the data lake, data warehouse, and purpose-built data stores. A modern data architecture recognizes the fact that taking a one-size-fits-all approach leads to compromises in the data and analytics platform. And we are not just referring to seamless integration between data systems; it also has to encompass unified governance, along with ease of data movement.
A modern data architecture is a direct response to all the challenges we have seen so far, including exponential data growth, performance and scalability issues, security and data governance nightmares, data silo issues, and—of course—pinching high expenses.
The following diagram shows a modern data architecture at a high level:
Figure 1.2 – Modern data architecture
All the data an organization collects plays a huge role in reinventing its business. The faster it can derive analytical insights from this data, the quicker it can make the right decisions to steer the business forward. However, as the data grows in volume and complexity, it sometimes slows down the business. To give an analogy, the bigger an object on Earth is, the more difficult it is to move around, partly due to the role gravity plays in holding it down. In the same way, the more data grows in organizations, whether in data lakes or in purpose-built stores, the harder it becomes to move all this data around. In short, data also has its own gravity. So, in a modern data architecture, there should be mechanisms in place to allow for easy movement of data, with the eventual goal of deriving insights from it, using the right set of tools and services. The data movement can be inside-out, outside-in, around the perimeter, or shared across.
In this pattern, the data is first ingested and processed in the data lake, and then portions of this data, depending on the use case, are moved into a purpose-built store. For example, data from multiple Software-as-a-Service (SaaS) applications comes to the data lake first, where it is ingested and processed using ETL tools, and a portion of this data is then moved into a data warehouse for daily reporting.
In the following diagram, the arrows show the outward movement of the data from the data lake into the purpose-built stores:
Figure 1.3 – Inside-out data movement
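As an illustration of the inside-out pattern, the following sketch loads a curated subset of data lake data into a data warehouse table for daily reporting, using Amazon Redshift's COPY command; the table name, bucket path, and IAM role are hypothetical placeholders:

-- Load only the processed reporting subset from the data lake into the warehouse
COPY daily_sales_report
FROM 's3://example-data-lake/processed/daily_sales/'
IAM_ROLE 'arn:aws:iam::111122223333:role/example-redshift-load-role'
FORMAT AS PARQUET;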
In this pattern, the data is ingested into a purpose-built store first, and then the data is moved over to a data lake to run analysis on this data. For example, in the three-tier architecture pattern we went through in the prologue, the application data is stored in a relational database. This data is then moved into the data lake for analytics purposes. There may be many such purpose-built systems from which the data can be brought into a centralized data lake.
In the following diagram, the arrows show the inward movement of the data from the purpose-built stores into the data lake:
Figure 1.4 – Outside-in data movement
In certain situations, data needs to be passed along from one purpose-built store to another, for solving specific use cases. Since data is moving around, without the need to place it inside a data lake, this pattern is called data movement around the perimeter. For example, in our three-tier architecture, data can directly be loaded from a transactional database into a data warehouse for analytics.
In the following diagram, the arrows show the movement of data between the purpose-built stores:
Figure 1.5 – Around the perimeter data movement
Finally, data holds little value if it cannot be shared where its value can be unlocked. A modern data architecture allows easy sharing of data, inside as well as outside the organization. For example, portions of data produced in one line of business (LOB) need to be shared with another LOB so that the whole organization can benefit from them.
Now that we have looked at the different patterns for data movement in a modern data platform, let’s move on to what a modern data architecture looks like on AWS.
AWS has been a pioneer in cloud computing; it provides a broad and deep platform to help organizations build sophisticated, scalable, and secure applications and data platforms.
Here’s a quick recap of why millions of customers choose AWS:
Agility: Allows organizations to experiment and innovate quickly and frequently
Elasticity: Takes away the guesswork around hardware capacity provisioning, allowing it to scale up and down with demand
Faster innovation: This is possible because organizations can now focus on implementing things that matter to their businesses and not worry about IT infrastructure
Cost saving: This is significant due to the economies of scale of cloud computing, coupled with pay-as-you-go models
Global reach: This is now possible in minutes due to AWS’ most extensive, reliable, and secure global cloud infrastructure
Service breadth and depth: With over 200 fully featured services to support any cloud workload globally

There is a wealth of information on how AWS has transformed the whole technology space, and multiple books have been published capturing all this knowledge. However, we will zoom in on the data and analytics space and discuss how AWS helps to achieve a modern data architecture.
With a modern data architecture on AWS, customers have the option to leverage purpose-built tools and services to build their data platforms. Security and data governance can be applied in a unified manner. The modern data architecture on AWS also allows organizations to scale their systems at a low cost without impacting the overall system performance. Data can be easily shared across organizational boundaries so that businesses can make decisions with speed and agility at scale.
To achieve all of this, AWS has provided five pillars for building a modern data architecture. Let’s look at them in detail.
A modern data architecture is required to break down data silos so that data analytics, descriptive as well as predictive using artificial intelligence/machine learning (AI/ML), can be done with all the data aggregated into a central location. In order to meet all the business needs around deriving value out of the data in a fast and cost-effective manner, the architecture requires certain pillars to be in place, as follows:
Scalable data lakes
Purpose-built analytics services
Unified data access, including seamless data movement
Unified governance
Performance and cost-effectiveness

The following diagram illustrates these pillars for you:
Figure 1.6 – Pillars of a modern data architecture on AWS
Let’s explore each of the pillars in more detail.
A data lake is the foundation of a strong modern data platform. Data lakes get pretty big in a short period of time since all the business data from multiple sources is brought to this central repository for analysis. Imagine if the IT team had to manage this infrastructure in terms of its scalability, reliability, durability, and high availability (HA), while at the same time making sure it was performant and cost-effective.
This is where Amazon Simple Storage Service (S3) comes into the picture. S3 is an object storage service. It provides out-of-the-box features needed for managing a data lake, such as scalability, HA, and high-performance access to data operations. We have the entire next chapter dedicated to building scalable data lakes, and we will go through use cases and design patterns for building them on AWS.
The following diagram shows Amazon S3 as the central service for storing all the data in a data lake:
Figure 1.7 – Scalable data lakes on AWS
S3 is supported by other AWS analytics services that complement each other to create an end-to-end data platform. We will go through each of these components in the chapters to come.
Leveraging the right tool for the right job is at the forefront of building a modern data architecture on AWS. To achieve this pillar, AWS provides a wide range of data and analytics services that cater to specific use cases so that the best price/performance is achieved.
The following diagram shows the purpose-built services that help to build a modern data platform:
Figure 1.8 – Purpose-built analytics services on AWS
Specific AWS services are used at every stage of building the data platform. We have separate chapters to go into details of each section around data ingestion, data processing, data lakes, data analytics, and data prediction.
When you drive a luxury car, you can feel a difference in every aspect of the ride—plush interiors, a powerful engine, smooth handling, and so on. Similarly, a modern data platform is no good if only the consumers of the platform see its value. All the mechanisms by which you hydrate the platform with data have to be easy to build, operate, and maintain. The movement of data has to be made seamless and easy to operate. This topic has so many use cases and complexities that we have dedicated multiple chapters to it in this book.
If we all lived in a utopian society, what a waste of a topic security and data governance would be. Unfortunately, we live in a harsh reality where bad actors are everywhere, which makes the topic of unified governance and security the topmost priority focus area, always to be taken seriously.
Unified governance in a modern data architecture provides organizations with the simplicity and flexibility of managing access to datasets inside the data platform in one place. Equally important is the ability to conduct an audit of all access trails to ensure compliance. We have a dedicated chapter to understand which capabilities AWS provides to manage data governance across the platform.
And finally, what good is a data platform if it performs sluggishly or if you have to spend a fortune to keep it running? That’s why it’s important to understand and implement the right product feature in the right context, to make the platform operate in an optimal manner. Again, we have a chapter dedicated to this, where we go into detail on how AWS helps with this and which architectural best practices you should implement to keep the price/performance at a highly desirable level.
Before we conclude this chapter, as you may have noticed, most of our focus has been on technical topics so far, and the majority of the book is also geared toward technologists. However, a modern data strategy is successful only if it solves a business need, alleviates business pain points, and allows the business to use the data to maximize its profits. By using a modern data architecture, you may be modernizing your data systems, creating a unified platform that breaks down data silos, or innovating to solve cutting-edge use cases; just remember that it all ties back to the business and how the data platform helps it stay ahead of the curve. So, in the following chapters, we will present use cases from a business point of view first and then get into the technical details of how to architect them using AWS services.
In this chapter, we looked at what data lakes are, why they are important, and what some of the challenges of on-premises data lakes are. We had enough context to pivot toward what a modern data architecture looks like and why it’s important to build data platforms using this architecture pattern. And, as the climax was building up, AWS made a grand entry. We looked at the pillars of a modern data architecture on AWS. The stage is now set to get into details of each of these pillars. The flow of this book going forward is in line with these pillars.
With this chapter, our rollercoaster has just reached the top at cruising speed. Now, in the subsequent chapters, hang tight for all the thrills of the actual use cases and how the whole modern data platform slowly starts to take shape.
In this chapter, we will look at how organizations can build a data platform foundation by creating data lakes on AWS.
We will cover the following main topics:
Why choose Amazon S3 as a data lake store?
Business scenario setup
Data lake layers
Data lake patterns
Data catalogs
Transactional data lakes
Putting it all together

Before we dive deep into the actual data and analytics use cases and explore how to design data lakes on AWS, it is first important to understand why Amazon Simple Storage Service (Amazon S3) is the preferred choice for building a data lake and why it is used as a storage layer to store all kinds of data in a centralized location.
If you recall from the discussions we had in Chapter 1, an ideal store for building a data lake should inherently be scalable, durable, highly performant, easy to use, secure, cost-effective, and integrated with other building blocks of the data lake ecosystem. So, we ask a very important question: why choose Amazon S3 as a data lake store?
S3 checks all the boxes on what we look for in a store for building data lakes. Here are some of the features of S3:
Scalable: S3 is a petabyte-scale object store with virtually unlimited storage
Durable