Master the art of creating scalable, concurrent, and reactive applications using Akka
If you want to use the Lightbend platform to create highly performant reactive applications, then this book is for you. If you are a Scala developer looking for techniques to use all features of the new Akka release and want to incorporate these solutions in your current or new projects, then this book is for you. Expert Java developers who want to build scalable, concurrent, and reactive applications will find this book helpful.
For a programmer, writing multi-threaded applications is critical as it is important to break large tasks into smaller ones and run them simultaneously. Akka is a distributed computing toolkit that uses the abstraction of the Actor model, enabling developers to build correct, concurrent, and distributed applications using Java and Scala with ease.
The book begins with a quick introduction that simplifies concurrent programming with actors. We then proceed to master all aspects of domain-driven design. We'll teach you how to scale out with Akka Remoting/Clustering. Finally, we introduce ConductR as a means to deploy and manage microservices across a cluster.
This comprehensive, fast-paced guide is packed with several real-world use cases that will help you understand concepts, issues, and resolutions while using Akka to create highly performant, scalable, and concurrency-proof reactive applications.
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: October 2016
Production reference: 1141016
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78646-502-3
www.packtpub.com
Author
Christian Baxter
Copy Editor
Zainab Bootwala
Reviewer
Dennis Vriend
Project Coordinator
Suzanne Coutinho
Commissioning Editor
Kunal Parikh
Proofreader
Safis Editing
Acquisition Editor
Dharmesh Parmar
Indexer
Tejal Daruwale Soni
Content Development Editor
Anish Sukumaran
Graphics
Jason Monteiro
Technical Editor
Sunith Shetty
Production Coordinator
Melwyn Dsa
Christian Baxter, from an early age, has always had an interest in understanding how things worked. Growing up, he loved challenges and liked to tinker with and fix things. This inquisitive nature was the driving force that eventually led him into computer programming. While his primary focus in college was life sciences, he always set aside time to study computers and to explore all aspects of computer programming. When he graduated from college during the height of the Internet boom, he taught himself the necessary skills to get a job as a programmer. He’s been happily programming ever since, working across diverse industries such as insurance, travel, recruiting, and advertising. He loves building out high-performance distributed systems using Scala on the Akka platform.
Christian was a longtime Java programmer before making the switch over to Scala in 2010. He was looking for new technologies to build out high throughput and asynchronous systems and loved what he saw from Scala and Akka. Since then, he's been a major advocate for Akka, getting multiple ad tech companies he's worked for to adopt it as a means of building out reactive applications. He's also been an occasional contributor to the Akka codebase, identifying and helping to fix issues. When he's not hacking away on Scala and Akka, you can usually find him answering questions on Stack Overflow as cmbaxter.
A lot went into the writing of this book, and not just from me. As such, I’d like to thank a few people, starting with the two special women in my life.
To my beautiful wife, Laurie. Thank you for being so patient throughout the writing of this book, and with me, in general. All this time, you’ve somehow been able to put up with all my quirks, which is a feat in and of itself. You’ve always believed in me, even if, at times, I didn’t believe in myself. Thank you for always challenging me and never letting me be complacent. Without you, this book would have never been possible.
To my amazing mother, you taught me how to be a good person by instilling in me the values one needs to be successful in life. Your strength of character and resilience to life’s challenges have served as inspirations to me. You’ve always been there for me, no matter how hard the times got. For this, I am eternally grateful.
Lastly, I’d like to thank my very first mentor, Dan Gallagher. I learned so much from you during our time together. Who knows how my career would have ended up if you weren’t there in the beginning to screw my head on straight. I appreciate the enormous amount of patience that must have gone into dealing with me way back then. Thank you so much for starting my career down the right path to where I am today.
Dennis Vriend is a professional with more than 10 years of experience in programming for the Java Virtual Machine. Over the last 4 years, Dennis has been developing and designing fault tolerant, scalable, distributed, and highly performant systems using Scala, Akka, and Spark, and maintaining them across multiple servers. Dennis is active in the open source community and maintains two very successful Akka plugins, the akka-persistence-jdbc plugin and the akka-persistence-inmemory plugin, both useful for designing and testing state-of-the-art business components leveraging domain-driven design and event sourcing in a distributed environment. Dennis is currently working as a software development engineer for Trivento in the Netherlands.
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www.packtpub.com/mapt
Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.
The Akka library is well known in the Scala world for providing a means to build reactive applications. Akka's core building block is the Actor: a simple concurrency unit on top of which you can build asynchronous, event-driven, fault-tolerant, and potentially distributed components. Actors are an excellent starting point for your reactive applications and services, but there's a lot more within the entire Akka platform that you should be considering as well.
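To make that concrete, here is a minimal sketch of the classic actor API from the 2.4.x era; the Greeter actor and its message are illustrative only and not taken from this book's example app.

import akka.actor.{Actor, ActorSystem, Props}

// A trivially small actor: it processes one message at a time from its
// mailbox, so any state it guards never needs explicit locking.
class Greeter extends Actor {
  def receive = {
    case name: String => println(s"Hello, $name")
  }
}

object GreeterMain extends App {
  val system = ActorSystem("example")
  val greeter = system.actorOf(Props[Greeter], "greeter")
  greeter ! "Akka" // fire-and-forget; delivered and handled asynchronously
}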
This book will provide the reader with a purpose-built tour through some of the additional modules within the Akka platform. The tour will be conducted by progressively refactoring an initial Akka application from an inflexible monolith all the way to a set of loosely coupled microservices. Along the way, you will learn how to apply new features, such as event sourcing via Akka Persistence, to the application in an effort to help it scale better. When the journey is complete, you will have a much better understanding of these additional offerings within the Akka platform and how they can help you build your applications and services.
Throughout the refactoring process, new concepts and libraries within Akka will be introduced to the reader on a chapter by chapter basis. Within those chapters, I will detail what each new feature is and how that feature fits into breaking down a monolith into a set of loosely coupled services. Each chapter will also involve coding homework for the reader, giving them an active role in the progressive refactoring process. This hands-on experience will give the reader an immersive understanding of how to use these newer Akka features in real-world code, which is the main goal of this book.
Chapter 1, Building a Better Reactive App, introduces you to the initial sample application and how it will be improved over the course of this book.
Chapter 2, Simplifying Concurrent Programming with Actors, is a refresher on Actors with some refactoring work using Akka’s FSM.
Chapter 3, Curing Anemic Models with Domain-Driven Design, introduces you to domain-driven design (DDD) and how it helps in modeling and building software.
Chapter 4, Making History with Event Sourcing, presents Akka Persistence as a means to build event-sourced entities.
Chapter 5, Separating Concerns with CQRS, teaches you how to separate read and write models using the CQRS pattern.
Chapter 6, Going with the Flow with Akka Streams, explains how Akka Streams can be used to build back-pressure aware, stream-based processing components.
Chapter 7, REST Easy with Akka HTTP, shows you how to leverage Akka HTTP to build and consume RESTful interfaces.
Chapter 8, Scaling Out with Akka Remoting/Clustering, demonstrates how to use remoting and clustering to gain horizontal scalability and high availability.
Chapter 9, Managing Deployments with ConductR, illustrates building, deploying, and locating your microservices with ConductR.
Chapter 10, Troubleshooting and Best Practices, presents a few final tips and best practices for using Akka.
You will need a computer (Windows or Mac OS X) with Java 8 installed on it. You will need to have Simple Build Tool (sbt) installed on that computer as well. This book also leverages Docker, with the installation of Docker being covered in more detail in Chapter 1, Building a Better Reactive App.
If you want to use the Lightbend platform to create highly performant reactive applications, then this book is for you. If you are a Scala developer looking for techniques to use all features of the new Akka release and want to incorporate these solutions in your current or new projects, then this book is for you. Expert Java developers who want to build scalable, concurrent, and reactive applications will find this book helpful.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "If you want to stop a Docker container, use the docker stop command, supplying the name of the container you want to stop."
A block of code is set as follows:
{ "firstName": "Chris", "lastName": "Baxter", "email": "[email protected]" }Any command-line input or output is written as follows:
docker-build.sh
New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Click the Docker whale icon in your system tray and select Preferences in the context menu that pops up."
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book, what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
You can download the code files by following the steps described on the site. Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of your archive extraction tool.
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-Akka. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/MasteringAkka_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books (maybe a mistake in the text or the code), we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at [email protected] with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.
This book is geared towards more experienced Scala and Akka developers looking to build reactive applications on top of the Akka platform.
This book is written for an engineer who has already leveraged Akka in the 2.3.x series and below to build reactive applications. You have a firm understanding of the actor model and how the Akka framework leverages actors to build highly scalable, concurrent, asynchronous, event-driven, and fault-tolerant applications. You've seen the new changes rolled out in Akka 2.4.2 and are curious about how some of these new features such as Akka Streams and Akka HTTP can be leveraged within your reactive applications.
This book will serve as a guide for an engineer who wants to take a functional but flawed reactive application and, through a series of refactors, make improvements to it. It will help you understand what some of the common pitfalls are when building Akka applications. Throughout the various chapters in the book, you will learn how to use Akka and some of its newer features to address those shortcomings.
Imagine you woke up one morning and decided that you were going to take down the mighty Amazon.com. They've spread themselves too thin in trying to sell anything and everything the world has to offer. You see an opportunity back in their original space of online book selling and have started a company to challenge them in that area.
Over the past few months, you and your team of engineers have built out a simple Minimum Viable Product (MVP) reactive bookstore application on top of the Akka 2.3.x series. As this is an MVP, it's pretty basic, but it has served its purpose of getting something to the market quickly to establish a user base and get good feedback to iterate on. The current application covers the following subdomains within the overall domain of a bookstore application:

User management
Book (inventory) management
Sales order processing
Credit card transaction processing
The code that represents the initial application can be found in the initial-example-app folder within the code distribution for this book. The application is an sbt multi-project build with the following individual projects:

common
user-services
book-services
order-services
credit-services
server
If you were to look at the different projects from a dependency view, they would look like this:
The individual services subprojects were set up to avoid any direct code dependencies on each other. If services in different modules need to communicate with each other, then they use the shared domain model (entities and messages) from the common project as the protocol. The initial intention was to allow each module to eventually be built and deployed independently of the others, even though it's currently built together and deployed as a monolith.
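As a rough illustration of that setup, the shared protocol might look something like the following sketch. The names here are hypothetical and not the actual definitions from the common project.

package com.example.bookstore.common

// A shared entity definition that every service module can see.
case class BookstoreUser(id: Int, firstName: String, lastName: String, email: String)

// Messages that act as the protocol between modules. order-services can
// send FindUserById to user-services without depending on its code,
// because both sides depend only on common.
case class FindUserById(id: Int)
case class UserFound(user: BookstoreUser)
case class UserNotFound(id: Int)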
Each service module is made up of HTTP endpoint classes, services, and, in some cases, Data Access Objects (DAOs). The endpoint classes are built on top of the Unfiltered library and allow inbound, REST-oriented HTTP requests to be serviced asynchronously, using Netty under the hood. The service business logic is modeled using the actor model and implemented with Akka actors that are called from the endpoints. Service actors that need to talk to the relational Postgres db do so via DAOs that reside within the same .scala files as the services that use them. The DAOs use Lightbend's Slick library with the Plain SQL approach to talk to Postgres.
The way the application is currently structured, an inbound request that talks to the db would be handled as follows:

1. The endpoint class accepts the inbound HTTP request asynchronously on one of Netty's worker threads.
2. The endpoint sends a message to the appropriate service actor, which processes it on a thread from the actor system's dispatcher.
3. If database access is needed, the service actor invokes a DAO, and Slick executes the blocking JDBC call on its own dedicated thread pool, returning a Future.
4. When that Future completes, the result flows back to the endpoint, and the response is written out to the caller, again without blocking.
As you can see from the preceding steps, there are a few different thread pools involved in servicing the request, but it's done completely asynchronously. The only real blocking in this flow is the JDBC calls made via Slick (sadly, JDBC has yet to incorporate async calls into its API). Thankfully though, those blocking calls are isolated behind Slick's own thread pool.
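The general shape of that pattern is sketched below. The StoreUser table name comes from the app's schema, but the column, message, and class names are assumptions made for illustration; the mechanics, however, mirror the description above: Slick's Plain SQL API returns a Future, and the service actor pipes the result back without ever blocking.

import scala.concurrent.{ExecutionContext, Future}
import akka.actor.Actor
import akka.pattern.pipe
import slick.driver.PostgresDriver.api._

case class FindUserEmail(id: Int)

// The DAO issues a Plain SQL query; Slick runs the blocking JDBC call on
// its own thread pool and immediately hands back a Future.
class UserEmailDao(db: Database)(implicit ec: ExecutionContext) {
  def findEmailById(id: Int): Future[Option[String]] = {
    // The interpolated $id becomes a bind parameter, not string concatenation.
    val query = sql"select email from StoreUser where id = $id".as[String]
    db.run(query).map(_.headOption)
  }
}

// The service actor never blocks: it kicks off the query and pipes the
// eventual result back to whoever asked for it.
class UserEmailService(dao: UserEmailDao) extends Actor {
  import context.dispatcher
  def receive = {
    case FindUserEmail(id) => dao.findEmailById(id).pipeTo(sender())
  }
}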
If you've played with Akka long enough, you know it's taboo to block in the actor system's main Fork/Join pool. Akka is built around the concept of using very few threads to do a lot of work. If you start blocking the dispatcher threads themselves, then your actors can suffer from thread starvation and fall behind in the processing of their mailboxes. This will lead to higher latency in call times, upsetting end users, and nobody wins when that happens. We avoided such problems with this app, so we can pat ourselves on the back for that.
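If you ever do find yourself needing to run blocking code in an Akka application, the standard remedy is to isolate it on a dedicated dispatcher. Here is a minimal sketch, assuming a dispatcher named blocking-io-dispatcher has been defined in application.conf:

import scala.concurrent.{ExecutionContext, Future}
import akka.actor.ActorSystem

object BlockingIsolation {
  // Assumed entry in application.conf:
  //
  //   blocking-io-dispatcher {
  //     type = Dispatcher
  //     executor = "thread-pool-executor"
  //     thread-pool-executor.fixed-pool-size = 16
  //   }
  def fetchSlowly(system: ActorSystem): Future[Int] = {
    // Blocking work runs on the dedicated pool, never on the main
    // Fork/Join pool that drives actor mailboxes.
    implicit val blockingEc: ExecutionContext =
      system.dispatchers.lookup("blocking-io-dispatcher")
    Future {
      Thread.sleep(1000) // stand-in for a blocking call, for example raw JDBC
      42
    }
  }
}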
You should take a little time going through the example code to understand how everything is wired together. Understanding this example app is critical as it serves as the foundation for our progressive refactoring. Spending a little time upfront getting familiar with things will help as different sections are discussed in the upcoming chapters.
The app itself is not perfect. It has intentional shortcomings to give us something to refactor. It was certainly beyond the scope of this book to have me build out a fully functioning, coherent, and production-ready storefront application. The code is just a medium in which to communicate some of the flawed ways in which an Akka reactive application could be put together. It was purposely built as a lead-in to discover some of the newer features in the Akka toolkit as a way to solve some common shortcomings. View it as such, with an open mind, and you will have already taken the first step in our refactoring journey.
Detailed steps to download the code bundle are mentioned in the Preface of this book. Please have a look. The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-Akka.
Now that you have an understanding of the initial code, we can build it and then get it up and running. It's assumed that you already have Scala and sbt installed. Assuming you have those two initial requirements installed, we can get started on getting the example app functional.
Throughout this book, we will be using Docker to handle setting up any additional applications (such as Postgres) and for running the bookstore application itself (within a container). For those unfamiliar with Docker, it is a containerization platform that lets you package and run your applications, ensuring that they run and behave the same no matter what environment they run in. This means that when you are testing things locally, on your Mac or Windows computer, the components will run and behave the same as when they eventually get deployed to whatever production environment you run (say, some Linux distribution on Amazon's Elastic Compute Cloud). You package up all of the application components and their dependencies into a neat little container that can then be run anywhere Docker itself is running.
The decision to use Docker here should make setup simpler (as Docker will handle the majority of it). Also, you won't clutter up your computer with these applications as they will only run as Docker containers instead of being directly installed. When it comes to your Docker installation, you have two possible options:

Docker Toolbox, which runs Docker inside a VirtualBox VM managed by docker-machine
One of the native Docker apps (Docker for Mac or Docker for Windows)
The biggest difference between these two options will be what local host address Docker uses when binding applications to ports. When using Docker Toolbox, docker-machine is used, which will by default bind applications to the local address of 192.168.99.100. When using one of the native Docker apps, the loopback address of 127.0.0.1 (localhost) will be used instead.
If you already have Docker installed, then you can use that pre-existing installation. If you don't have Docker installed and you are on a Mac, then please read through the following link to help you decide between Docker for Mac and Docker Toolbox: https://docs.docker.com/docker-for-mac/docker-toolbox/.
For Windows users, you should check out the following link, reading through the section titled What to know before you install, to see if your computer can support the requirements of Docker for Windows. If so, then go ahead and install that flavor. If not, then fall back to using Docker Toolbox: https://docs.docker.com/docker-for-windows/.
Because we gave you a choice of which Docker flavor to run, and because each flavor will bind to a different local address, we need a consistent way to refer to the host address being used by Docker. The easiest way to do this is to add an entry to your hosts file, setting up an alias for a host called boot2docker. We can then use that alias going forward when referring to the local Docker bind address, both in the scripts provided in the code content for this book and in any examples in the book content.
The entry we need to add to this file will be the same regardless of whether you are on Windows or a Mac. This is the format of the entry that you will need to add to that file:
<docker_ip> boot2docker
You will need to replace the <docker_ip> portion of that line with whatever local host your Docker install is using. So for example, if you installed the native Docker app, then the line would look like this:
127.0.0.1 boot2docker
And if you installed Docker Toolbox and are thus using docker-machine, then the line would look like this:
192.168.99.100 boot2docker
The location of that file will differ depending on whether you are running Windows or are on a Mac. If you are on Windows, then the file can be found at the following location: C:\Windows\System32\Drivers\etc\hosts.
If you are running on a Mac, then the file can be found here: /etc/hosts.
The initial example app uses Postgres for its persistence needs. It's a good choice for a relational database as it's lightweight and fast (for a relational database at least). It also has fantastic support for JSON fields, so much so that people have been using it for a document store too.
We will use Docker to handle setting up Postgres locally for us, so no need to go out and install it yourself. There are also setup scripts provided as part of the code for this chapter that will handle setting up the schema and database tables that the bookstore application needs. I've included an ERD diagram of that schema below for reference as I feel it's important in understanding the table relationships between the entities for the initial version of the bookstore app.
If you are interested in the script that was used to create the tables from this diagram, then you can find it in the sql directory under the initial-example-app root folder from the code distribution, in a file called example-app.sql.
If you are using a Mac, you can skip reading this section. It only pertains to running the .sh scripts used to start up the app on Windows.
As part of the code content for each chapter, there are some bash scripts that handle building and running the bookstore application. As bash is not native to the Windows operating system, you will have to decide how you want to build and start the bookstore application, choosing from one of the following possibilities:
Now that the database is up and running, we can get the Scala code built and then packaged into a Docker container (along with Java 8 and Postgres, via docker-compose) so we can run and play with it locally. First, make sure that you have Docker up and running locally. If you are running one of the native Docker apps, then look for the whale in your system tray. If it's not there, then go and start it up and make sure it shows there before continuing. If you are running Docker Toolbox, then fire up the Docker Quickstart Terminal, which will start up a local docker-machine session within a terminal window with a whale as ASCII art at the top of it. Stay in that window for the remainder of the following commands as that's the only window where you can run Docker-related commands.
From a terminal window within the root of the initial-example-app folder run the following command to get the app all packaged up into a Docker container:
docker-build.sh
This script will instruct sbt to build and package the application. The script will then build a Docker image, tag it, and store it in the local Docker repository. This script could take a while to run initially, as it will need to download a bunch of Docker-related dependencies, so be patient. Once that completes, you can then run the following command in that same terminal window:
launch.sh
This command will also take a while initially as it pulls down all of the components of our container, including Postgres. Once this command completes, you will have the bookstore initial example application container up and running locally, which you can verify by running the following command:
docker ps
That will print out a process list for the containers running under Docker. You should see two rows in that list, one for the bookstore and one for Postgres. If you want to log into Postgres via the psql client, to maybe look at the db before and after interacting with the app, then you can do so by executing the following command:
docker run -it --rm --network initialexampleapp_default postgres psql -h postgres -U docker
When prompted for the password, enter docker. Once in the database, you can switch to the schema used by the example app by running the following command from within psql:
\c akkaexampleapp
From there, you can interact with any of the tables described in the ERD diagram shown earlier.
If you want to stop a Docker container, use the docker stop command, supplying the name of the container you want to stop. Then, use the docker rm command to remove the stopped container or docker restart if you want to start it up again.
Once the app is up and running, we can start interacting with its REST-like API in an effort to see what it can do. The interactions will be broken down by subdomain within the app (represented by the -services projects), detailing the capabilities of the endpoint(s) within that subdomain. We will use the httpie utility to execute our HTTP requests. Here are the installation instructions for each platform.
You can install httpie on your Mac via homebrew. The command to install is as follows:
$ brew install httpie
The installation on Windows is going to be a bit more complicated as you will need Python, curl, and pip. The full instructions are too long to include directly in this book and can be found at: http://jaspreetchahal.org/setting-up-httpie-and-curl-on-windows-environment/.
The first thing we can do when playing with the app's endpoints is to create a new BookstoreUser entity that will be stored in the StoreUser Postgres table. If you cd into the json folder under the initial-example-app root, there will be a user.json file that contains the following json object:
{ "firstName": "Chris", "lastName": "Baxter", "email": "[email protected]" }In order to create a user with these fields, you can execute the following httpie command when in the json folder:
http -v POST boot2docker:8080/api/user < user.json
Here, you can see that we are making use of the hosts file alias we created in the section Adding the boot2docker hosts entry. This lets us make HTTP calls to the bookstore app container that is running in Docker, regardless of what local address it is bound to.
The -v option supplied in that command will allow you to see the entire request that was sent (headers, body, path, and params), which can be helpful if it becomes necessary to debug issues. We won't supply this param on the remainder of the example requests, but you can if you feel you want to see the full request and response. The < symbol implies that we want to send the contents of the user.json file as the POST body. The resulting response JSON will look like the following:
{ "meta": { "statusCode": 200 }, "response": { "createTs": "2016-04-13T00:00:00.000Z", "deleted:":false, "email": "[email protected]", "firstName": "Chris", "id": 1, "lastName": "Baxter", "modifyTs": "2016-04-13T00:00:00.000Z" } }This response structure is going to be the standard for endpoint responses. The "meta" section mirrors the HTTP status code and can optionally contain error information if the request was not successful. The "response" section will be there if the request was successful and can contain either a single object as JSON or an array of objects. Notice that the ID of the new user is also returned in case you want to look that user up later.
You should add a few more JSON files of your own to that directory, representing more users to create, and run the same command referenced earlier (albeit with a different file name) to create the additional users. If you happen to try to create a user with the same e-mail as an existing user, you will get an error.
If you want to view a user that you have created, as long as you know the ID, you can run the following command to do so, using user ID 1 as the example:
http boot2docker:8080/api/user/1
Notice on this request that we don't include an explicit HTTP request verb. That's because httpie assumes a GET request if you do not include a verb.
You can also look up a user by e-mail address with the following request:
http boot2docker:8080/api/user [email protected]The httpie client uses the param==value convention to supply query params for requests. In this example, the query string would be: ?email=chris%40masteringakka.com.
You can make changes to a user's basic info (firstName, lastName, email) by executing the following command:
http PUT boot2docker:8080/api/user/1 < user-edit.json
The included user-edit.json file contains the request JSON needed to change the initially created user's e-mail address. As with the creation, if you pick an e-mail here that is already in use by another user, you will get an error.
If at any time you decide you want to delete a user, you can do so with the following request, using user ID 1 as the example:
http DELETE boot2docker:8080/api/user/1
This will perform a soft delete against the database, so the record will still exist but it won't come back on lookups anymore.
The Book endpoint is for taking actions against the Book entity. Books are what are added to sales orders, so in order to test sales orders, we will need to create a few books first. To create a Book, you can run the following command:
http POST boot2docker:8080/api/book < book.json
As with the BookstoreUser entity, you should create a few more book.json files of your own and run the command to create those books too. Once you are satisfied, you can view a book that you have created by running the following command (using book ID 1 as the example):
http boot2docker:8080/api/book/1
Books support the concept of tags. These tags represent categories that the book is associated with. For example, the 20000 Leagues Under the Sea book that is represented in the book.json file is initially tagged as fiction and sci-fi. The Book endpoint allows you to add additional tags to the book, and it also allows you to remove a tag. Adding the tag ocean to book ID 1 can be done with the following command:
http PUT boot2docker:8080/api/book/1/tag/ocean
If you decide that you would like to remove that tag, then you can execute the following command to do so:
http DELETE boot2docker:8080/api/book/1/tag/ocean
If you want to look up books that match a set of input tags, you can run the following command:
http boot2docker:8080/api/book tag==fiction
This endpoint request supports supplying the tag param multiple times. The query on the backend uses an AND condition across all of the tags supplied, so if you supply multiple tags, then the books that match must have each of the tags supplied. An example of supplying multiple tags would be as follows:
http boot2docker:8080/api/book tag==fiction tag==scifi
You can also look up a book by author by executing a request like this:
http boot2docker:8080/api/book author==Verne
This request supports partial matching on the author, so you don't have to supply the complete author name to get matches.
The last concept that we can test out related to book management is allocating inventory to the book once it's been created. Books are created initially with a 0 inventory amount. If a book does not have any available inventory, it cannot be included on any sales orders. Since not being able to sell books would be bad for business, we need the ability to allocate available inventory for a book in the system.
To indicate that we have five copies of book ID 1 in stock, the request would be as follows:
http PUT boot2docker:8080/api/book/1/inventory/5
Like the BookstoreUser entity, you can also perform a soft delete for a book. You can do so by executing the following request, using book ID 1 as the example:
http DELETE boot2docker:8080/api/book/1
Now that we have users and books created and we have a book with inventory, we can move on to pushing sales orders through the system.
If you want to run a profitable business, at some point, you need to start taking in money. For our bookstore app, this is done by accepting sales orders for the books that we are keeping in inventory. All of the playing with the user and book-related endpoints was done so that we could create SalesOrder entities in the system. A SalesOrder is tied to a BookstoreUser (by user ID) and has 1-n line items, each for a book (by book ID).
The request to create a SalesOrder also contains the credit card info so that we can first charge that card, which is where our money will come from. In the OrderManager code, before moving forward with creating the order, we first call over to the CreditCardTransactionHandler service to charge the card and keep a persistent record of the transaction. As we don't actually own the logic for charging the card ourselves, we call out over HTTP to a fake third-party service (implemented in PretentCreditCardService in the server project) to simulate this interaction.
Within the JSON directory, there is an order.json file that has the valid JSON in it for creating a new SalesOrder within the system. This file assumes that we have already created a BookstoreUser with ID of 1 and a book with an ID of 1, and we have added inventory to that book, which we did in the previous two sections. To create the new order, execute the following command:
http POST boot2docker:8080/api/order < order.json
As long as the supplied userId and bookId exist and that Book has inventory available, the order should be successfully created. Each SalesOrderLineItem will draw down inventory (atomically) for the book that the item is for, in the amount tied to the quantity input for that line item. If you run that same command enough times, you should eventually exhaust all of the available inventory on that book, and you should start getting errors on the creation. This can be fixed by adding more inventory back on the book.
If you want to view a previously created SalesOrder, as long as you know the ID (which is returned in the create response JSON), then you can make the following request (using order ID 1 as the example):
http boot2docker:8080/api/order/1
If you want to look up all of the orders for a particular user ID, then you can execute the following request:
http boot2docker:8080/api/order userId==1
The Order endpoint also supports looking up SalesOrders that contain a line item for a particular book by its ID value. Using book ID 1 as the example, that request looks like this:
http boot2docker:8080/api/order bookId==1
Lastly, you can also look up SalesOrders that have line items for books with a particular tag. That kind of request, using fiction as the tag, would be as follows:
http boot2docker:8080/api/order bookTag==fiction
Unlike the search books by tag functionality, this request only supports supplying a single bookTag param.
By this point, you've had some time to interact with the example app to see what it can do. You've also looked at the code enough to see how everything is coded. In fact, maybe you've coded something similar to this yourself when building reactive apps on top of Akka. So now, the million dollar question is, "What's actually wrong with this app?"
The short answer is probably nothing. Wrong is a very black and white word, and when it comes to coding and application design, you're dealing more with shades of gray. This app may suit some needs perfectly well: for example, if high scalability is not a concern, if you have a small development team, or if the app's functionality doesn't need to expand much more.
This wouldn't be much of a book if we left it at that though. The long answer is that while nothing is absolutely wrong, there is a lot that we can improve upon to help our app and team continue to grow and scale. I'll break down some of the areas that I think can be made better in the following sections. This will help serve as a primer for some of the refactors that we will do in the upcoming chapters.
To me, scalability is a nebulous term. I think a lot of people, when they hear scalability, immediately begin to think of things such as performance, throughput, queries per second, and the likes. These types of areas address the runtime characteristics of whatever application you have deployed. This is certainly a big aspect of the scalability umbrella and very important to your app's growth, but it's not the only thing you should be thinking of.
Another key area of scalability that I think of when discussing the topic is how well your application codebase will scale to the growth of both in-app functionality and the development team. When your team is small (like me as the single developer of this example app) and the feature set of the application is minimal (again, like this example app), then codebase scalability is probably not the first thing on your mind. However, if you expect your company to be successful and grow, then your codebase needs to grow along with it; or else you run the risk of becoming an impediment to the business as opposed to an enabler of it. Therefore, there are some decisions that you can make early on in the growth process to help enable the codebase to scale with the growth of the business and the development team.
There's a lot to discuss related to these two areas of application scalability, and we will break them down in more detail in the subsequent sections.
If you haven't encountered Martin L. Abbott and Michael T. Fisher's excellent book, The Art of Scalability, then you should give it a look some time. This book covers all aspects of scaling a business and the technology that goes along with it. In many a meeting, I've referenced materials from this book when discussing how to architect software components. There's a ton of valuable lessons in here for beginners all the way up to the more seasoned technologists.
In the book, the authors discuss the concept of the Three Dimensions of Scalability for a running application. Those dimensions are represented in the following diagram of the cube:
The X axis scaling is the one that most people are familiar with. You run multiple copies of the application code on different servers and put a load balancer in front to partition inbound traffic. This kind of scaling gives you high availability, in that, if one server dies, the app can still serve traffic as the load balancer will redirect that traffic to the other nodes. This technique also gives you better per-node throughput as each node is only handling a percentage of the traffic. If you see your nodes are struggling to handle the current traffic rate, simply add another node to ease the burden a little on the existing nodes.
This is the kind of scaling that a monolith, like the example app, is most likely to use. It's pretty simple to keep scaling out by just adding more application nodes, but it seems a little inefficient in terms of resource usage. For example, if in your monolith, it's really only one of the services that is receiving the bulk of the additional load, you still need to deploy every other service into the new server even though they do not need the additional headroom.
In this kind of deployment, your services can't really have their own individual scaling profiles. They all scale out together because they are co-deployed in a monolith. When you are sizing out the new instance node in terms of CPU and RAM, you are stuck with a more generic profile, as this node will handle traffic for all of the services. If certain services were more CPU heavy and others were more RAM heavy, you end up having to pick a node that has both a lot of RAM and a high number of CPUs, as opposed to being able to choose between either one. These kinds of decisions can be cost-prohibitive in the cloud-based world, where increases in either of those two areas cost more money, and even more so when they need to be coupled together.
The Y axis scaling approach addresses this exact kind of problem. With Y axis scaling, you break up the application around functional boundaries and then deploy the different functional areas separately and independently of one another. This way, if functional area A needs high CPU and functional area B needs high RAM, you can select individual node instances that are best suited to those needs. In addition, if functional area A receives the bulk of the traffic, then you can increase the number of nodes that handle that area without having to do the same thing for functional areas that receive less traffic.
If you've heard of the Microservices approach to building software (and it would be hard not to have, given how much technical literature on the Web is dedicated to it), then you are familiar with an example of using Y axis scaling to build software. This kind of approach achieves the goal of small independent services that can scale independent of each other, but it comes with additional complexity around the deployment and management of those components. In a simple monolithic, X axis style deployment, you know that every component is on every node, so the load balancer can send traffic to any of the available nodes.
In a Microservice deployment, you need to know where in the node set service A lives (and it should be in multiple nodes to give high availability) so that you can route traffic accordingly. This service location concern complicates these kinds of deployments and can involve bringing in another moving part (software component) to handle it, which further increases the complexity. Many times though, the benefits gained from decoupled, independent services outweigh these additional complexities enough to make this kind of approach worth pursuing.
In Z axis scaling, you take the same component and duplicate it across an entire set of nodes, but you make each node responsible for only a subset of the requests or data. This is commonly referred to as sharding, and it is quite often seen in database-related technologies. If you are running in-memory caching on the nodes, then this kind of approach eliminates duplicate cached data in each node (which can occur with X axis scaling). Only the node responsible for each data set (defined by your partitioning scheme) will receive traffic for that data.
When using Akka Cluster Sharding, you can be sure that only one actor instance is receiving requests for a particular entity or piece of data; this is an example of Z axis scaling.
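As a taste of what that looks like in code, here is a minimal sketch of the Cluster Sharding extractors from the Akka 2.4 API, assuming an illustrative Book entity message and an arbitrary count of 100 shards:

import akka.actor.{ActorRef, ActorSystem, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

case class GetBook(bookId: Int)

object BookSharding {
  // Every message for the same bookId reaches the same single entity actor...
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case msg @ GetBook(id) => (id.toString, msg)
  }

  // ...and entities are partitioned into 100 shards spread across the cluster.
  val extractShardId: ShardRegion.ExtractShardId = {
    case GetBook(id) => (id % 100).toString
  }

  def start(system: ActorSystem, bookProps: Props): ActorRef =
    ClusterSharding(system).start(
      typeName = "Book",
      entityProps = bookProps,
      settings = ClusterShardingSettings(system),
      extractEntityId = extractEntityId,
      extractShardId = extractShardId)
}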
Our initial example app is a monolith, and even though that approach now carries negative connotations, it's not necessarily a bad thing. When starting out on a new application, a monolith-first
