A Blueprint for Production-Ready Web Applications will help you expand upon your coding knowledge and teach you how to create a complete web application. Unlike other guides that focus solely on a singular technology or process, this book shows you how to combine different technologies and processes as needed to meet industry standards.
You’ll begin by learning how to set up your development environment, and use Quart and React to create the backend and frontend, respectively. This book then helps you get to grips with managing and validating accounts, structuring relational tables, and creating forms to manage data. As you progress through the chapters, you’ll gain a comprehensive understanding of web application development by creating a to-do app, which can be used as a base for your future projects. Finally, you’ll find out how to deploy and monitor your application, along with discovering advanced concepts such as managing database migrations and adding multifactor authentication.
By the end of this web development book, you’ll be able to apply the lessons and industry best practices that you’ve learned to both your personal and work projects, allowing you to further develop your coding portfolio.
Leverage industry best practices to create complete web apps with Python, TypeScript, and AWS
Dr. Philip Jones
BIRMINGHAM—MUMBAI
Copyright © 2022 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Pavan Ramchandani
Senior Editor: Hayden Edwards
Technical Editor: Simran Udasi
Copy Editor: Safis Editing
Project Coordinator: Sonam Pandey
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Production Designer: Roshan Kawale
Marketing Coordinators: Anamika Singh and Marylou De Mello
First published: September 2022
Production reference: 2010922
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
978-1-80324-850-9
www.packt.com
Dr. Philip Jones began his career studying physics at the University of Oxford, where he undertook his undergraduate studies and subsequently gained a doctorate in particle physics. He has authored the Quart framework, maintains the Flask framework, and supports the ongoing development of a number of other projects related to the Python HTTP stack. Currently, he works as a chief technical officer in London, and in his spare time, you will find him cycling or walking his dog, Penny.
I would like to thank my friends and family for their continued support and encouragement throughout the process of writing this book.
Sunil Kumar is a passionate and energetic young man who is following his dream of changing the world with technology. He graduated from one of the top engineering colleges in India with a BTech degree in computer science and has years of professional experience. This experience includes backend development using the Quart and Flask frameworks, and frontend development using ReactJS, along with queuing systems such as Kafka and RabbitMQ. Nowadays, he’s working with FinTech companies, helping to drive change in the Indian economy and rethink debt collection systems.
Dr. Murray Hoggett worked in academia for 10 years researching climate change and volcanoes, specializing in numerical and stochastic simulations. Since then, he has worked as a software engineer on projects ranging from embedded systems and native apps to web apps and ML systems. He is currently team lead at TrueCircle, building Python and JavaScript web apps for the recycling industry.
Matt Dawson got his start in the tech industry working as a photographer/surveyor for a PropTech start-up. He developed an interest in engineering, and after graduating from Maker’s Academy, he took a job as a full-stack engineer specializing in Python and TypeScript.
He now works as an infrastructure engineer, seeking a better understanding of how to deploy and scale applications that he was already able to build. He chose this due to his desire to understand the product as a whole, building on his strong foundation in backend/frontend principles.
Matt’s love of tech is drawn from a strong curiosity to try new things and to constantly strive toward new levels of understanding, as well as his firm belief that anything can be made better.
Manuela Redinciuc is a full-stack software engineer from London, currently focusing on expanding her backend expertise at Lifeworks. She comes from a non-technical background and enjoys mentoring and helping others transition into tech roles.
Dr. Stuart Hannah is a professional software engineer living and working in London. He has extensive Python experience, holds a Ph.D. in combinatorics from Strathclyde University, and enjoys working on performant distributed systems.
Before we can build our app, we need a system that is set up for fast development. This means we’ll need to install tooling to autoformat, lint, and test our code, alongside using Git for version control and Terraform to manage the infrastructure.
This part consists of the following chapter:
Chapter 1, Setting Up Our System for Development

The aim of this book is to provide a blueprint for a web app running in a production environment and utilizing as many industry best practices as possible. To do this, we will build a working to-do app, codenamed Tozo, that allows users to track a list of tasks. You can see the finished app in Figure 1.1:
Figure 1.1: The to-do app we’ll build in this book
While the aim is to build a working to-do app, we’ll focus on features that are useful to any app, with much of the functionality and many of the techniques being the same as in the app built here. For example, users will need to log in, change their password, and so on. Therefore, my hope is that you can take this blueprint, remove the small amount of specific to-do code, and build your own app.
In this chapter, we will take a new machine without any tooling and set it up for development. We’ll also set up systems to develop and test the app automatically. Specifically, we’ll install a system package manager and use it to install the various language runtimes and tooling before setting up a remote repository and activating continuous integration. By the end of this chapter, you’ll have everything you need to be able to focus solely on developing the app. This means that you will be able to quickly build and test the features you need in your app for your users.
So, in this chapter, we will cover the following topics:
Aiming for fast development
Setting up our system
Installing Python for backend development
Installing NodeJS for frontend development
Installing Terraform for infrastructure development
Installing PostgreSQL for database development
Adopting a collaborative development process using GitHub

I’ve built the app described in this book and you can use it by visiting the following link: https://tozo.dev. The code is also available at https://github.com/pgjones/tozo (feel free to use that code or the code in this book under the MIT license).
I’m going to assume you have a working knowledge of TypeScript and Python, as these are the languages we’ll use to write the app. However, we’re going to avoid any esoteric language features, and I hope the code is easily understandable. I’m also going to assume you are happy using the command line, rather than focusing on GUI instructions, as most tooling is optimized for command-line usage, so working this way should prove advantageous.
To follow the development in this chapter, use the companion repository at https://github.com/pgjones/tozo and see the commits between the r1-ch1-start and r1-ch1-end tags.
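If you'd like to inspect those commits locally, a minimal sketch (assuming git is already installed, which we cover later in this chapter) is to clone the repository and list the commits between the two tags:

git clone https://github.com/pgjones/tozo.git
cd tozo
git log --oneline r1-ch1-start..r1-ch1-end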
Before we start setting up our system to build the to-do app, it’s important to understand what we are aiming for when building any app, which is to solve our customer’s needs by shipping solutions as quickly as possible. This means that we must understand their needs, translate them into working code, and crucially, deploy the solution with confidence that it works as expected.
When we are developing an app, the shorter the time between making a change to the code and being able to run and see the effect of the change, the better. This is why we will run all of the code locally, with auto-reloading enabled; this should mean that any change we make is testable in our local browser within a few seconds.
Hot/auto-reloading
In development, we ideally want any changes we make to the code to take effect immediately so that we can check that the changes have the desired effect. This feature is called hot or auto-reloading and is active with the React and Quart development servers we are using in this book.
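As an illustrative sketch only (the module and route here are hypothetical; the actual backend entry point is built later in the book), a Quart development server with reloading enabled looks something like this:

from quart import Quart

app = Quart(__name__)

@app.route("/")
async def index():
    return "Hello, Tozo!"

if __name__ == "__main__":
    # The reloader restarts the development server whenever a source file changes
    app.run(debug=True, use_reloader=True)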
I also like to use tooling to help speed up development and gain confidence that the code works as expected. This tooling should run as frequently as possible, ideally as part of an automated process. I have split this tooling into auto-formatting, linting, and testing categories.
The format and style of code matter, as a style different from the one you are used to will take longer to understand. This means more bugs, as you spend more of your time comprehending the style rather than the logic. Also, while you can be consistent yourself, almost everyone has a different preferred style, and I’ve found that these preferences change over time.
In the past, I’ve used tooling to check the styling and report on any inconsistencies. This is helpful but wasteful as every inconsistency must be fixed manually. Fortunately, most languages now have an official, or dominant, auto-formatter that both defines a style and changes all of the code to match it. Using the most popular auto-formatter means that most developers will recognize your code.
We’ll aim to set up our tooling so that there are auto-formatters for as much of the code as possible.
I think of linting in two parts: type checking and static analysis. Type checking requires that we include types when writing the code. I use type hinting, or typed languages, where possible, as this catches a large number of the errors I typically make. Typing also helps document the code, meaning that it makes it clear what objects (types) are expected. While typing costs more effort to write, I think it easily pays off in bugs avoided. Therefore, checking the typing should be our first aim of linting.
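As a small illustration (the function here is hypothetical, not part of the app), type hints let a checker such as mypy reject incorrect calls before the code ever runs:

from datetime import date

def days_until(due: date) -> int:
    # Return how many whole days remain until the due date
    return (due - date.today()).days

days_until(date(2022, 9, 1))    # OK: the argument matches the annotation
# days_until("2022-09-01")      # mypy reports an error: a str is not a date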
The second part, static analysis, allows linters to look for potential issues in naming, usage of functions, possible bugs, security issues, and unused code, and to flag code that is too complex or poorly constructed. These linters are a very low-cost sanity check, as they are quick and easy to run and report few false positives.
While linting will identify bugs and issues with the code, it cannot detect logical issues where correctly written code does the wrong thing. To identify these, we need to write tests that check that the execution of the code results in the expected output. Therefore, it is important that we write tests as we write the code, especially when we discover bugs. We will focus on writing tests that provide an easy way to test that the app works as expected.
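As a sketch of the style of test we'll write (the function under test is a hypothetical example, and pytest itself is added to the project later in the book), a test simply asserts that a use case produces the expected output:

def complete_task(tasks: list[str], task: str) -> list[str]:
    # Return the remaining tasks once the given task is complete
    return [item for item in tasks if item != task]

def test_complete_task() -> None:
    assert complete_task(["Buy milk", "Walk dog"], "Buy milk") == ["Walk dog"]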
Test coverage
Test coverage is used to measure how much of the code has been tested by the test suite. This is typically done by measuring the ratio of lines executed by the tests to the total lines of code. I find this metric unhelpful as it focuses on lines executed rather than use cases that matter to the user. Therefore, I’d encourage you to focus on testing the use cases you think your users require. However, if you’d like to measure coverage this way, you can install pytest-cov using pdm.
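For example, the following sketch adds the plugin and runs the test suite with a coverage report (the coverage target path is an assumption based on the project layout used in this chapter):

pdm add --dev pytest-cov
pdm run pytest tests --cov=src/backend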
Using auto-formatters, linters, and a testing suite allows us to develop with greater confidence and therefore speed, which in turn means a better experience for our users. However, in order to use these tools, we will first need to set up our system effectively.
To effectively develop our app, we will need to be able to develop and run it. This means we will need tooling to manage changes to the code, test and check the app for errors, and run it. This tooling can be installed via a system package manager, of which there are many choices depending on your operating system. I recommend that you install Homebrew on Linux (https://linuxbrew.sh) and macOS (https://brew.sh), or Scoop (https://scoop.sh) on Windows. I’ll show both brew and scoop commands in this book, but you should only use the command that works on your operating system.
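At the time of writing, the instructions on those websites amount to one-line installers similar to the following (check the sites for the current commands); the first is for Homebrew in a Linux or macOS shell, and the remaining two are for Scoop in a Windows PowerShell session:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Invoke-RestMethod get.scoop.sh | Invoke-Expression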
You will also need a code editor to write the code in and a browser to run the app. I recommend that you install VS Code (https://code.visualstudio.com) and Chrome (https://www.google.com/chrome) via the directions given on their websites. With these tools installed, we can now consider how we’ll manage the code.
As we develop our app, we will inevitably make mistakes and want to return to the previous working version. You may also want to share the code with others, or just keep a backup for yourself. This is why we need to manage the code via a version control system. While there are many different version control systems, the majority in this industry use git (https://git-scm.com). It can be installed via the system package manager as follows:
brew install git
scoop install git
Using git
This book can be completed using git add to add files to the repository, git commit to create commits, and git push to update the remote repository. I consider these to be the basic git commands. However, git can still be very confusing to use, and you may end up with your repository in a mess. It does get easier with practice and there is plenty of help online. You can always delete your local repository and start again from the remote version (as I have done many times before).
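For reference, that basic workflow looks like the following sketch (the file and branch names are illustrative; we create the remote repository later in this chapter):

git add backend/pyproject.toml
git commit -m "Add backend tooling configuration"
git push origin main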
Now we have git installed, let’s set the author information as follows:
git config --global user.name "Phil Jones"
git config --global user.email "[email protected]"

The highlighted values should be changed to your name and email address.
Next, we can create a repository for our code by creating a directory called tozo and running the following command within it:
git init .
This will create a .git directory that can be safely ignored. This results in the following project structure:
tozo
└── .git

As we develop, we will want git to ignore certain files and paths. We will do this by creating .gitignore files that list the filenames and file paths that we do not want to be part of our repository.
Writing good commits
The history of changes stored by git can serve as an excellent companion document for your code if git is used well. This is something that won’t seem advantageous at the start, but after a year of development, it will be something you’ll sorely miss if you haven’t been doing it from the beginning. So, I strongly recommend you write good commits.
A good commit contains a single atomic change to the code. This means it is focused (doesn’t combine different changes into one commit) and that it is complete (every commit leaves the code working).
A good commit is also well described and reasoned. This means the commit message explains why the change has been made. This contextual information is invaluable as it will be forgotten quickly and is often required to understand the code.
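As a hypothetical example of such a commit, the message below records both the change and the reasoning behind it (git accepts multiple -m flags, each becoming a paragraph of the message):

git commit -m "Require rebasing when pulling changes" -m "A linear history is easier to read and to use as documentation, so merge commits are avoided."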
With git installed, we can start committing changes; however, we should establish how we intend to combine changes, which, in my opinion, should be done by rebasing.
As I put a lot of value on the git commit history, I recommend using rebases rather than merges when combining changes. The former moves your new local commits on top of any remote changes, leaving a clear, linear history, whereas the latter introduces a merge commit. To make rebasing the default behavior when pulling, run the following command:
git config --global pull.rebase true
We’ve now set up our system with a package manager and version control. Next, we can install the specific tooling we need for the various aspects of the app.
There are a variety of languages that are suitable for backend development, and any would be a fine choice for your app. In this book, I’ve chosen to use Python as I find that the code is more accessible and easier to follow than other languages.
As we will be writing the backend for our app in Python, we will need to have it installed locally. While you may have a Python version already installed, I’d recommend you use the one installed by the system package manager, as follows:
brew install python
scoop install python
The package manager we’ve used so far doesn’t know how to install and manage Python packages, so we also need another package manager. There are many choices in Python, and I think PDM is the best. PDM can be installed with the system package manager on Linux and macOS systems, as follows:
brew install pdm
For Windows systems, it can be installed by running the following commands:
scoop bucket add frostming https://github.com/frostming/scoop-frostming.git
scoop install pdm
We’ll keep the backend code separate in a backend folder, so please create a backend folder at the top level of the project with the following folder structure:
tozo
└── backend
    ├── src
    │   └── backend
    └── tests

Next, we need to inform git that there are files that we don’t want to be tracked in the repository and hence it should ignore them by adding the following to backend/.gitignore:
/__pypackages__
/.mypy_cache
.pdm.toml
.pytest_cache
.venv
*.pyc

For PDM to manage our project, we need to run the following command in the backend directory:
pdm init
When prompted, you should choose the Python version installed using the system package manager earlier.
We can now focus on the specific Python tooling for fast development.
Python does not have an official format or formatter; however, black is the de facto formatter for code and isort is the de facto formatter for imports. We can add both to our project by running the following command in the backend directory:
pdm add --dev black isort
The dev flag
We use the --dev flag here as these tools are only required for developing the backend and therefore do not need to be installed when running in production.
black and isort require the following configuration to work well together, which should be added to the end of the backend/pyproject.toml file.
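A minimal sketch of that configuration is shown below; the essential setting is isort's black-compatible profile, though the book's exact block may include further options:

[tool.isort]
# Order and wrap imports in a style that matches black's formatting
profile = "black"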