Alice and Bob Learn Secure Coding - Tanya Janca - E-Book

Description

Unlock the power of secure coding with this straightforward and approachable guide!

Discover a game-changing resource that caters to developers of all levels with Alice and Bob Learn Secure Coding. With a refreshing approach, the book offers analogies, stories of the characters Alice and Bob, real-life examples, technical explanations, and diagrams to break down intricate security concepts into digestible insights you can apply right away. Explore secure coding in popular languages like Python, Java, JavaScript, and more, while gaining expertise in safeguarding frameworks such as Angular, .NET, and React. Uncover the secrets to combating vulnerabilities by securing your code from the ground up!

Topics include:

  • Secure coding in Python, Java, JavaScript, C/C++, SQL, C#, PHP, and more
  • Security for popular frameworks, including Angular, Express, React, .NET, and Spring
  • Security best practices for APIs, mobile, WebSockets, serverless, IoT, and service mesh
  • Major vulnerability categories, how they happen, the risks, and how to avoid them
  • The Secure System Development Life Cycle, in depth
  • Threat modeling, testing, and code review
  • The agnostic fundamentals of creating secure code that apply to any language or framework


Alice and Bob Learn Secure Coding is designed for a diverse audience, including software developers of all levels, budding security engineers, software architects, and application security professionals. Immerse yourself in practical examples and concrete applications that will deepen your understanding and retention of critical security principles.

Alice and Bob Learn Secure Coding illustrates every concept with easy-to-understand examples and concrete practical applications, helping you grasp and retain both the foundational and the advanced topics within. Don't miss this opportunity to strengthen your knowledge; let Alice and Bob guide you to a secure and successful coding future.


Page count: 742

Publication year: 2025




Table of Contents

Cover

Table of Contents

Title Page

Foreword

Introduction

Part I: General Advice

CHAPTER 1: Introductory Security Fundamentals

Assume All Other Systems and Data Are Insecure

The CIA Triad

Least Privilege

Secure Defaults/Paved Roads

Zero Trust

Defense in Depth

Supply Chain Security

Security by Obscurity

Attack Surface Reduction

Usable Security

Fail Closed/Safe, Then Roll Back

Compliance, Laws, and Regulations

Security Frameworks

Learning from Mistakes and Sharing Those Lessons

Backward Compatibility (and Potential Risks It Introduces)

Threat Modeling

The Difficulty of Patching

Retesting Fixes for New Security Bugs

Chapter Exercises

CHAPTER 2: Beginning

Follow a Secure System Development Life Cycle

Use a Modern Framework and All Available Security Features Within

Input Validation

Output Encoding

Parameterized Queries and ORMs

Authentication and Identity

Authorization and Access Control

Password Management

Protecting Sensitive Data

New Security Header Features

Security Headers Previously Covered

More New Headers

Secure Cookies

Chapter Exercises

CHAPTER 3: Improving

Database Security

File Management

File Uploads

(De)Serialization

Privacy (User/Citizen/Customer/Employee)

Errors

Logging, Monitoring, and Alerting

Fail Closed

Cryptographic Practices

Strongly Typed Languages

Domain‐Driven Development

Memory‐Safe Languages

Chapter Exercises

CHAPTER 4: Achieving

Chapter Exercises

Summary of Part I

Part II: Specific Advice

CHAPTER 5: Technology‐Specific

API Security Best Practices

Mobile Application Security Best Practices

WebSocket Security Best Practices

Serverless Security Best Practices

IoT Security Best Practices

Chapter Exercises

CHAPTER 6: Popular Programming Languages

JavaScript

HTML/CSS

Python

SQL

Node.js

Java

TypeScript

C#

PHP

C/C++

Conclusion

Chapter Exercises

CHAPTER 7: Popular Frameworks

Web and JavaScript

Other Frameworks and Libraries

Chapter Exercises

CHAPTER 8: Vulnerability Categories

Design Flaws / Logic Flaws

Code Bugs / Implementation Errors

Overflows and Other Memory Issues

Injection: Interpreter and Compiler Issues

Input Issues

Authentication and Identity Issues

Authorization and Access Issues

Configuration and Implementation Issues

Fraudulent Transactions

Replay Attacks

Crossing Trust Boundaries

File Handling Issues

Object Handling Issues

Secrets Management Issues

Race Conditions and Timing Issues

Resource Issues

Falling into an Unknown State

Chapter Exercises

Summary of Part II

Part III: Secure System Development Life Cycle

CHAPTER 9: Requirements

Project Kick‐Off: Outline of Your Project's Security Activities

Project Scheduling and Planning

Security Requirements

Chapter Exercises

CHAPTER 10: Design

Threat Modeling

Secure Design Patterns and Concepts

Architecture Whiteboarding

Examining Data Flows

Security User Stories

Chapter Exercises

CHAPTER 11: Coding

Training

Code Review

IDE Plugins and Other Guidance

Verifying That Your Dependencies Are Safe (SCA)

Finding and Managing Secrets

Dynamic Testing (DAST)

Chapter Exercises

CHAPTER 12: Testing

Test Coverage and Timing

Manual Testing

Automated Testing

Fuzzing

Interactive Application Security Testing (IAST)

Bug Bounty Programs

Test Results

Final Thoughts

Chapter Exercises

CHAPTER 13: Release/Deployment

Security Events Within the CI/CD

Securing the CI/CD Pipeline Itself

Assuring the Integrity of Your Release

Security Release Approval

Chapter Exercises

CHAPTER 14: Maintenance

Monitoring, Alerting, and Observability

Blocking/Shielding

Continuous Testing

Security Incidents

Business Continuity and Disaster Recovery Planning

Chapter Exercises

CHAPTER 15: Conclusion

Good Habits

Your Responsibility

How Much Is Enough?

Using Artificial Intelligence Safely

Continuous Learning

Becoming a Champion

Getting Others on Board

Transitioning onto the Security Team

Conclusion

Summary of Part III

APPENDIX A: Resources

APPENDIX B: Answer Keys

Index

Praise for Alice & Bob Learn Secure Coding

Copyright

Dedication

About the Author

About the Technical Editors

Acknowledgments

End User License Agreement

List of Tables

Chapter 3

Table 3.1: Safer Options

List of Illustrations

Chapter 2

Figure 2.1: Access control for Alice

Figure 2.2: Access control for Tanya

Chapter 3

Figure 3.1: Properly Stored Variable

Figure 3.2: Improperly Stored Variable

Chapter 14

Figure 14.1: Potential layers of abstraction

Guide

Cover

Title Page

Foreword

Introduction

Table of Contents

Begin Reading

APPENDIX A Resources

APPENDIX B Answer Keys

Index

Praise for Alice & Bob Learn Secure Coding

Copyright

Dedication

About the Author

About the Technical Editors

Acknowledgments

End User License Agreement


Alice & Bob Learn Secure Coding

Tanya Janca

 

 

 

 

Foreword

From a technological point of view, it's a pretty amazing time to be a software engineer. When I was a much younger nerd, reading science fiction novels and watching early episodes of Doctor Who and Star Trek: The Next Generation, I was fascinated by the technologies on display. We had applications and devices to solve every problem in this new future.

Fast‐forward 25 years, and it feels like we are living in the age of extraordinary invention. Software touches almost every part of our lives, from our work to our free time, and we, as engineers, are the ones building it.

The systems we write help bring us together and help us collaborate and share at a scale never before seen. Software is also helping us identify medical conditions earlier, control robots that can help us navigate unsafe environments, and even drive around our cities for us. With this great focus on software and the benefits it can bring us, however, comes the responsibility we share for what this software can do and its impacts on people, data, and systems.

When I started my software development career, the systems I built (in the finance and taxation space) were less crucial than the ones we build now. They were broadly only used between 9:00 a.m. and 5:00 p.m., Monday to Friday, and if they stopped working, although it was annoying, many manual systems could compensate.

The software we build now is integrated so deeply into our lives that the impacts of incidents and issues are much more varied and, in some cases, much more dangerous.

That is why this book and Tanya's lifetime of work in secure coding are so crucial. As software developers, we are at a crossroads. We have an obligation to make our software and systems of the highest quality and standards. We have a responsibility to balance performance, scaling, usability, observability, and accessibility—ensuring that our applications do the job as intended, when needed, and for as many people as possible.

Now, it's time to add security to this mix.

In this book, Tanya has made secure coding accessible to software developers from all walks of life and all software ecosystems. Her easy‐to‐understand examples and simple, pragmatic approach mean that even the busiest software teams can get started securing their code in a way that works for their software development life cycle, culture, and language choices. Packed with examples and sidebars, this book is easily the grab‐and‐go reference you want to give your team.

If we are going to build this wonderful software‐centric future together, I hope we can all commit to making every day a little more security‐focused than the last and every line of code an opportunity to improve our software quality and security.

Laura Bell Main

CEO and Founder, SafeStack

Introduction

Right now, our industry demands more from programmers than ever before; I'm sure you've all seen articles urging you to be a 10-X engineer, be an expert in several programming languages, and conquer all aspects of full-stack development. But one clear and non-negotiable expectation stands out for software developers: you must create secure applications. It doesn't matter if it wasn't covered in your computer science program, if your security team is unhelpful, or if you don't have the right toolset available. Your users, teammates, and the entire organization are counting on you to provide them with reliable and safe software.

This book will help you write more secure code, full stop. It is a toolkit for standing out among your peers, developing robust applications, and safeguarding their data. By reading this book, you are taking a giant step toward becoming a senior software engineer or the newest member of the security team.

Unlike many other textbooks, this one will make you smile. You may also find it surprisingly easy to understand and consume. As a previous professional entertainer who also has dyslexia, I focused on making this book a joy to read and as easy as possible to understand. Using both humor and numerous teaching tactics (analogies, relatable stories, empathy for Alice and Bob, diagrams, repeating complex abstract concepts, etc.), I tried to write a book that I would have wanted earlier in my career.

This book is a whirlwind of secure coding, covering advice that applies to every tech stack, as well as more specific and detailed advice for 10 programming languages, 9 frameworks, several technologies, and every vulnerability class I am aware of. That is a lot of information. It is easily enough material to create an amazing secure coding guideline for your workplace. Wink, wink.

This isn't just a book; it's an interactive journey. Over the course of 2025 and 2026, I will be doing monthly livestreams for each chapter of the book. These sessions will feature my (expert) friends and me discussing each chapter, including the answers at the end, plus live Q&A.

I didn't just write this book for myself; I wrote it for you. When I started to learn about security, it was a struggle. The few resources I could find were not very good. I made it my personal mission to help streamline the transition to application security and improve our industry's approach to software security. But as time passed, I saw that if I wanted to create substantial change, I needed to turn my attention to developers. And so here I am, dear reader.

I wrote this book for one more reason, and I suspect it's the same reason you picked it up. I want myself and those I love to be able to use software safely. I want the products our industry creates to be trustworthy. As of this writing, that is not the case; we've all heard of endless data breaches, security incidents, and other damage or injury happening as a result of insecure applications. While we wait for our governments to get their collective acts together, I am trying to raise the bar. With this book, I hope to instill a sense of responsibility in every developer to build more secure software.

Everyone is driven by something different in their lives. For me personally, it is extremely important that I do good in my life. By sharing knowledge and cultivating a community of developers committed to building secure software, I hope to protect countless individuals from harm. But for this to happen, I need your commitment. Are you ready to go on this journey with me?

I invite you to immerse yourself in this book and use the knowledge within to create truly remarkable software.

Part I: General Advice

In This Part

Chapter 1:

Introductory Security Fundamentals

Chapter 2:

Beginning

Chapter 3:

Improving

Chapter 4:

Achieving

CHAPTER 1: Introductory Security Fundamentals

This chapter will focus on fundamental security concepts, the sorts of things that every security person wishes every other IT professional knew. You may not have a chance to apply all of them in your work, but understanding them can help you create more robust and defensible systems.

Assume All Other Systems and Data Are Insecure

Perhaps the most important lesson in this entire book is learning to stop trusting computer systems, data, and users by default. As a species, human beings are quite trusting.1 For creating a society with laws, safety, and general order, having most people assume trust from the start is a good thing. It's part of what makes our societal fabric work.

As a result of human beings generally assuming trust, when we design computer systems, we tend to design them in such a way that the systems have an implied trust. What this means is that rather than automatically verifying facts, our computer systems assume they must be true. And this can lead to dire consequences.

COLLEGE COPY CAT

When Alice was in college, she used to save her work files to a shared drive on the network for school. In case her computer ever failed, she knew there would be a backup. In her second year of college, she was preparing a presentation for her software engineering course as part of a class team project. During the class, when her team was going to present the project research, a classmate named Eve2 asked if she could present her own project research first. Alice didn't see anything wrong with this, so she agreed. Eve got up and presented all of Alice's findings as though they were her own! She even used Alice's slide deck and changed the name on the front page to her own. Alice thought, “Are you kidding? How could this happen!?!?!” All of their classmates applauded and were very pleased with Eve. Alice had assumed that saving her files on a shared drive at school meant they would be safe. She never thought a classmate would steal her work and present it as their own or that someone would go rifling through Alice's folders on the shared school drive. Alice had assumed trust, and she had gotten burned.

If we are trying to create a secure system, it is of the utmost importance that we never assume trust. This can mean using multifactor authentication to protect against credential stuffing attacks, double‐checking data you receive from the database to ensure that it is the correct format and size, or performing authentication and authorization against an application programming interface (API) calling a serverless app, even though they were both built by your company.

Examples of implied trust:

  • Zoning in network design: once someone has entered a zone, they can access every other system within that zone without crossing a firewall or reauthenticating.
  • Accepting user input without validating it, and then using that input to build an SQL query, a URL redirect, or another decision within your system.
  • Exposing an API to the internet without a gateway in front of it or any other mechanism to perform authentication and authorization, allowing anyone, including bots, to call it.
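The second example above, trusting user input when building an SQL query, is worth making concrete. A minimal Python sketch of "never assume trust": validate the input against an allow-list, and even then let the database driver bind it as a parameter rather than concatenating it into the query. The table, column, and pattern below are illustrative, not from the book.

```python
import re
import sqlite3

# Allow-list of acceptable usernames (not a block-list of bad characters).
ALLOWED_USERNAME = re.compile(r"^[A-Za-z0-9_]{3,30}$")

def find_user(conn: sqlite3.Connection, username: str):
    # Layer 1: validate the input before using it for anything.
    if not ALLOWED_USERNAME.match(username):
        raise ValueError("invalid username")
    # Layer 2: even validated input is never concatenated into the query;
    # the driver binds it as a parameter, so it can't change the SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

Both layers matter: the allow-list rejects obviously hostile input early, and the parameterized query protects you even if the validation is later weakened.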

If you only learn one thing from this book, I hope it is this: design every system with as little implied trust as possible. Verify everything, including input, decisions, data, and other system integrations. Perform more than one verification if possible and several if the stakes are high (top‐secret information, systems requiring high availability, etc.). Always assume that other systems and data are potentially insecure.

The CIA Triad

The CIA triad comprises the three things that make up the mandate of most IT security teams around the world. It is a cybersecurity team's job to guard the confidentiality, integrity, and availability of the systems and data they are charged with protecting. Security teams protect more than just these three properties, but they are often considered the core topics. Throughout this book, we will cover much more, including privacy, safety, authenticity, and layering our defenses.

TOP‐SECRET WORK

Bob used to work on a top‐secret case for the government of Canada. It was about antiterrorism activities, and that's all he was allowed to say about it. Keeping that sensitive data safe was of the utmost importance, and Bob took all of his training on how to protect that data very seriously. When Bob had his training with the Canadian Centre for Cyber Security, he asked what made the data “top secret” rather than secret or some other classification. The response was, “If that data got out, it could potentially harm Canada as a nation. It could potentially result in hundreds of lost lives and cause various other counterterrorism activities to fail. This information could not only harm those working for the government but also block them from uncovering various plots that could lead to the death of citizens or even, in the worst case, a successful government coup. Your most important task, no matter how complex, onerous, or difficult, is to keep this information from falling into the wrong hands. Guard it, literally, with your life.” Although Bob rarely spoke about this work assignment with his friends and family, they were all very aware of how Bob felt about the importance of his work for the Department of Justice.

NOTE

Confidentiality: the state of keeping or being kept secret or private3

Integrity: internal consistency or lack of corruption in electronic data3

Availability: the quality of being able to be used or obtained at any time3

WHEN CRITICAL INFRASTRUCTURE GOES DOWN

On July 9, 2022, the Rogers telecommunications network went down in Canada.4 Canada has only three major telecommunication companies, and the rest share or rent lines from the main three. As a result of this outage, Canadians couldn't pay bills, surf the net, or even call 911 in large parts of the country for several days. Cybersecurity generally focuses on three things: confidentiality, integrity, and availability (CIA). When Rogers went down, it caused a lack of availability for much of the country, and therefore the outage was a security issue. If one human error can disable emergency services for a large part of the country for several days, that system is not secure. Although the outage itself was not caused by a cyberattack, it interfered with one element of the CIA triad. The lack of contingency planning around these systems is also a security issue. This note is not meant as a critique of Rogers; it is a real-world example of how security is a part of quality, and of the importance of defense in depth, business continuity planning, and disaster recovery.

Although we have seen the definition of integrity, let's talk a little more about the meaning of that word. Often, when we speak of a person who “has integrity,” we mean that you can trust that person: you can rely on them and know they will always make the “right” decision. Integrity is similar for computer systems; when a system and its data have integrity, it means we can use that data to make decisions and know they will be good decisions. This data has already been verified; it is trustworthy.

Now imagine that you are a doctor, and you use a computer to calculate a medication dosage based on your patient's weight, height, any medical conditions they have, and various other factors. When you give the patient the medication, you assume it's the correct amount and that the information the computer gives you will help the patient, not harm them. When you trust the output of the system this way, you are assuming that the system and its data have integrity. Imagine the horror for a doctor if the computer got it wrong and the patient was harmed. For some systems, the integrity of the data is critical (measuring medicine, for instance), and for others, it's not so important (a recipe for a cake that says one egg versus two is not the end of the world). When you are creating a system, it's extremely helpful to know which element of the CIA triad is most important and then to design your systems and tests with that in mind.

Least Privilege

The principle of least privilege (PoLP) refers to an information security concept in which a user is given the minimum levels of access – or permissions – needed to perform their job functions.

— CyberArk5

Although this quote implies that least privilege only applies to users, this is not true; it applies to any person or thing who may have access or privileges, including computer systems, like software or an AI. Least privilege also applies to how long the access is given, as it should only be provided during the time it is required and no longer.

NO MORE ADMIN RIGHTS

Alice remembers her first introduction to the concept of least privilege from the security team. They told her she couldn't have admin privileges on her work desktop anymore. Alice was not pleased. She tried to explain that as a C‐level exec, she needed to have admin privileges so that she could install software she found on the internet and not have to wait on tech support's approval. The security team informed her not only that she should not be installing software that was not provided by the company but also that this was part of the reason they were taking this privilege away. They told her several executives had been phished, and because they had admin privileges and no anti‐malware software, it caused a lot of problems. They told her that as an executive, she was a target for malicious actors because she had so much power within the company. They also explained that she would definitely still be able to do her job without it, and if she had a need for software they were not supplying, they would get it for her—just ask. Alice was not exactly pleased with the conversation, but it seemed clear these security folks were not going to back down, so she said, “Okay, fine!” (as though she had any choice in the matter).

Whenever we publish or deploy a new piece of software or add a user to a network or any other electronic system, we have to decide how much access they will have and for how long. If we give them access to everything, this means they have the ability to cause damage to everything within that system. This could be intentional or accidental. It could also be that a regular user's account was taken over by a malicious actor and then used to wreak havoc on your network, application, or anything else they have been given access to.

That said, if we limit their rights and privileges by only giving them the access that they require to perform their job function, the amount of damage someone could do would be greatly reduced. This applies to all electronic systems, including software! Let's walk through some examples.

Example 1: Walter is a parent of teenage twins, and their curfew is midnight. Walter has electronic smart locks on his home that deny entrance to their fingerprints starting at 12:05 a.m. This means if his teenagers stay out past curfew, they have to ring the doorbell, alerting both parents that they arrived home late. However, the parents can enter the home at any time. The parents' privileges to access their home are different; the twins have fewer. Walter quite likes it that way!

Example 2: You build a graphical user interface (GUI) front end for your application, plus three APIs that perform services for your front end and one database to hold all the data for you. Your application has one service account created on your network; this account is your application's identity on the network. Ideally, your APIs can only be called by the corresponding service account, and all other calls will be blocked or ignored. Your database should only accept connections from your APIs that belong to your app, using your app's service account, and from the database administrator (DBA). This way, if another app or network entity tries to send commands to your database or make calls to your APIs, it won't work. You could protect your system's data further by selecting the least amount of privileges to your data that you need: for instance, selecting read‐only or create‐read‐update‐delete (CRUD) rather than database owner (DBO).

Example 3: You are in charge of access to a lab where sensitive research is being performed. You give everyone their own access card, with a photo of them, that needs to be scanned to get into each of the different laboratories. You set access to various closets or cabinets with the same system but only let people into the areas they are supposed to have access to. For instance, some scientists are allowed to access chemicals, whereas others may use the very special equipment for measuring experiments. Each employee can only access what they need to do their jobs, and nothing more. When employees stop working for the company or stop performing the role that requires this access, the access is revoked.

Example 3, expanded: The laboratories you are protecting have extremely sensitive and dangerous chemicals, and there are top‐secret experiments being performed. The risks in this situation are both physical and political, and thus you increase the level of security and reduce the amount of privileges each person is given. To allow someone into the building, they must swipe their badge and have their body and belongings scanned for weapons; then they enter a cage where they are visually identified by one of the security guards. Once they are in, certain cabinets are only opened with two people's access cards, not one. The important cabinets require multiparty authorization.
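The deny-by-default idea running through these examples can be sketched in a few lines of Python. The account names and permission strings below are illustrative (they are not from the book), but the shape is the important part: access exists only where it was explicitly granted, and an unknown principal gets nothing.

```python
# Explicit grants only; anything not listed here is denied.
# "CRUD, not DBO" from Example 2: the service account can read and
# write rows, but it cannot alter the schema or own the database.
GRANTS = {
    "api_service_account": {"db:read", "db:write"},
    "reporting_account":   {"db:read"},  # read-only, the least it needs
}

def is_allowed(principal: str, permission: str) -> bool:
    # Unknown principals get an empty grant set: the default answer is "no".
    return permission in GRANTS.get(principal, set())
```

Revoking access when a role ends (Example 3) is then just removing an entry from the grant table, rather than hunting down scattered special cases.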

Secure Defaults/Paved Roads

Every computer system that has any configuration has defaults; they are the settings that come with the system by default. Often, users leave whatever the default settings are for their entire usage of the software. This means if the original setting on your software is suboptimal with regard to privacy or security, most of your users will continue to use it in this state. Many software creators take advantage of this, using it to violate their users’ privacy by selling their data, sharing it with other organizations, and other less‐than‐ideal usage.

SECRET SIGNALS

Alice used to use Telegram to send messages to friends because she thought it was very secure and had end-to-end (e2e) encryption. One day a friend told her that, by default, messages are only encrypted from the client to the server; on the server, they are decrypted.6 All those years, she thought her messages had been encrypted and safe, even from the people at Telegram! She felt like an idiot and switched to Signal Messenger.

That said, we security‐minded IT professionals can take advantage of this as well, by setting defaults that are secure. When we make the path of least resistance the secure path, everyone wins. If it is possible to make the easiest way to do something also the most secure way to do it, we are more likely to get the results we want! Whenever possible, create a paved road that leads to the most secure way forward for the user.
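As an illustration of paving the secure road, here is a minimal Python sketch of a configuration object whose defaults are the safe choices. The field names and values are hypothetical; the point is that a user who never touches the settings still gets the secure behavior.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    # Secure defaults: doing nothing gives you the safe configuration.
    # (All fields here are illustrative, not a real product's settings.)
    require_tls: bool = True            # encrypted transport on by default
    verify_certificates: bool = True    # never skip verification silently
    session_timeout_minutes: int = 15   # short sessions unless extended
    debug_mode: bool = False            # never ship verbose errors by default

# Opting into a weaker setting must be a visible, deliberate act:
dev_config = ServerConfig(debug_mode=True)
```

Because the dataclass is frozen, a weakened setting can only appear at construction time, which makes insecure configurations easy to spot in code review.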

Assume Breach / Plan For Failure

The concept of assume breach means to plan, design, and react as though we have been breached, or will be one day. A breach can be someone breaking into a system, stealing your data, breaking into a building, and so on.

Examples of this would be launching your incident response process the moment a big vulnerability is reported to your bug bounty program or your coordinated disclosure program, using multiple layers of defense in case one is surpassed, or monitoring inside your network for suspicious activity.

Planning for failure is very similar to “assume breach,” except that it does not include reacting to events, just planning for them. Designing systems to have multiple layers of defense, including contingency plans, and ensuring that both our business continuity plan (BCP) and our disaster recovery (DR) plans are up to date and actionable are some of the ways that we, as IT professionals, can plan for failure.

REAL‐LIFE PLANNING FOR FAILURE

A real‐life example of planning for failure is the COVID‐19 pandemic hitting its peak in 2020. Think back to when organizations quickly switched to work‐from‐home, moved their sales online, or deployed systems to ensure that employees were safe. Some organizations were ahead of the curve, and it is highly likely that they enacted their BCPs to achieve this. Although data is not publicly available, I (the author) would love to see the numbers for organizations that did and did not have BCPs and DR plans: how much money and time, and how many human lives, were saved as a result of planning for failure.

Zero Trust

Zero Trust is a concept that is the opposite of assumed or implied trust. If you recall the first section in this chapter, “Assume All Other Systems and Data Are Insecure,” this is an extension of that concept. Assumed trust means that after initial verification, we trust a system, person, or account, usually permanently. For instance, someone logging in to a network is allowed to access anything on the whole network, or a person going into a building is allowed to go into every single room on every floor.

In real life, humans are generally trusting, but it depends on the social situation. Imagine having a guest over for a dinner party in your home and finding them going through your medicine cabinet: that would be weird. It would set off alarm bells. You would likely ask them what they are doing or even kick them out of your home. But computers don't understand social situations, so we have to teach them in advance.

Zero Trust means that there is no implied or assumed trust, ever. All systems deny access as a default. Unless a human gave access explicitly, the answer is no. When we implement Zero Trust, this means we have to be careful to ensure that everything is set up properly or things break. It's a labor‐intensive concept to apply, but when done properly, it is extremely effective.
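"Deny by default, allow only what a human explicitly granted" can be sketched in a few lines of Python. This is an illustrative toy, not a real Zero Trust product; the user, resource, and action names are made up:

```python
# Deny-by-default authorization: access is granted ONLY if someone
# explicitly added a (user, resource, action) entry. Everything else,
# including unknown users and unknown resources, falls through to "no".
EXPLICIT_GRANTS = {
    ("alice", "payroll-db", "read"),
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # No implied or assumed trust: there is no "admin" shortcut and no
    # "same network, so it's fine" branch -- only the explicit grant list.
    return (user, resource, action) in EXPLICIT_GRANTS
```

Notice there is no code path that grants access by accident: forgetting to configure something results in a denial (annoying but safe), never in unintended access.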

Defense in Depth

Defense in Depth is the simplest of all the security concepts; it means having more than one layer of defense. Some people have a lock on their front door, which is one layer of defense. Others have a lock, an alarm system, video cameras, and an attack dog. That is defense in depth: multiple defenses that are layered. We do this because there are often weaknesses or unknown chinks in our armor; multiple layers can ensure that attackers are blocked. Ideally, you will decide how many layers of defense are required based on the value of the system and/or data you are protecting and the risk of said systems and/or data coming to harm.
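In code, defense in depth often looks like several independent checks that must all pass, so that a flaw in any single check does not open the door. Below is a small Python sketch for validating an uploaded filename; the limits and allowlist are arbitrary example values:

```python
import re

# Three independent layers; a filename must pass ALL of them.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}
MAX_NAME_LENGTH = 64

def filename_passes_all_layers(name: str) -> bool:
    if len(name) > MAX_NAME_LENGTH:                  # layer 1: length limit
        return False
    if not re.fullmatch(r"[A-Za-z0-9_.-]+", name):   # layer 2: character allowlist
        return False                                 # (blocks "/", so no path traversal)
    if not any(name.endswith(ext) for ext in ALLOWED_EXTENSIONS):
        return False                                 # layer 3: extension allowlist
    return True
```

If a future change accidentally weakens one layer (say, the regex), the others still stand between the attacker and the file system.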

Supply Chain Security

A supply chain is all the things you need to create a product. Let's say you want to make tomato sauce to sell at grocery stores. You will need tomatoes, spices, salt, (possibly) sugar, and a jar to hold it all. You may also want to put a nice label on the jar, meaning you need the label and some glue. To create the product regularly, you will need to have a regular supply of all of these things, and several of them need to be fresh (rotten tomatoes would not make a tasty sauce).

The label, jar, and glue are all products in themselves, which means each of them also has its own supply chain. Each one of their ingredients is part of your tomato sauce supply chain. As you list each and every item and then chain them together in a list, that is the supply chain for your product.

Supply chain security means protecting the entire chain of ingredients. Personally, I don't even know what the ingredients are for glue, but I do know that without it, the labels will fall off. Every single part of the supply chain needs to be kept safe and protected; we can't have broken glass jars or poisonous sauce.

With software, this means each one of the dependencies you have included in your application has to be safe to use. If those dependencies call other libraries, packages, or software, all of that needs to be secure as well! It also includes every tool you use to create the software, such as your Integrated Development Environment (IDE), testing servers, your version control system, and so on. We will talk later in the book about how to do a good job of this, but for now, just understanding that every component, library, framework, or package you use to build your software is part of your supply chain is enough.
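One common supply chain defense is hash pinning: recording the cryptographic digest of each dependency you trust, then refusing anything that doesn't match (this is the idea behind pip's hash-checking mode, `pip install --require-hashes`). The sketch below shows the core check in Python; the filename and its pinned hash are made-up examples (the hash is simply the SHA-256 of the bytes `b"foo\n"`):

```python
import hashlib

# Hashes your team reviewed and pinned in version control (example values).
PINNED_HASHES = {
    "example-lib-1.0.tar.gz":
        "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
}

def is_trusted(filename: str, content: bytes) -> bool:
    expected = PINNED_HASHES.get(filename)
    if expected is None:
        return False  # unknown artifact: fail closed
    # A tampered or substituted download produces a different digest.
    return hashlib.sha256(content).hexdigest() == expected
```

Pinning doesn't make a dependency secure; it ensures you get exactly the bytes you reviewed, and nothing swapped in along the way.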

Security by Obscurity

This concept is mostly referred to with regard to open‐source code versus proprietary code, and which is more secure. It is my opinion that all software can have bugs and flaws, and it is the systems we put in place to detect, fix, and avoid security issues that determine the security of its end state. That said, not sharing a copy of our code publicly would make it more difficult for a determined attacker to analyze it for vulnerabilities. Not broadcasting the SSID (name) of our Wi‐Fi home network can help us avoid random members of the public trying to brute force their way onto it, but a determined or advanced attacker will generally find it. Although “security by obscurity” should never be used as the only defense of a system, it can help protect intellectual property and be used as one of several layers in a more comprehensive security plan.

Attack Surface Reduction

Every part of your application that someone can interact with is part of your app that can be attacked. Anything that can be attacked is considered part of your attack surface. If we can reduce the amount of attack surface we have, there is less for malicious actors to try to wreak havoc on, meaning there is less chance they will be successful.

One of the things that is often unstated about reducing our attack surface is that unused code is usually what we end up removing, and unused code is often insecure code! For instance, if there's a menu item for a feature no one uses anymore, you should remove the code from your application, not just comment out the menu item. Over time, this unused feature will likely not be tested and updated because it's not on the list of things for the QA team and penetration tester to look at. It might get missed, and if it does, that means it's less likely to be secure. Another bonus when reducing the size of your codebase is that it will make it easier to understand and maintain.

NOTE

Key takeaway: Unused code is significantly more likely to be insecure, and thus we should remove it from our apps.

Usable Security

Human beings will always find a way to get their jobs done; we are a determined bunch. When the security team puts a security control in place that prevents people from performing their duties, even the least technical among them will find a way around it; it's human nature. When users find a workaround, it is often a less secure way to use the system. It is best if the security team works hard to ensure that the security mitigations they put in place don't cause difficulty for the end users. Asking for feedback and adjusting security controls so that people are able to get their work done is a way that we can get what we want (secure users) with less conflict. User feedback matters.

NOTE

Examples of insecure user workarounds for inconvenient and/or unusable security policies: users writing passwords on sticky notes because the security team forces them to rotate their passwords every 90 days; applications preventing screenshots, resulting in users taking pictures with their personal devices to share information with their colleagues; an untuned static application security testing (SAST) tool that produces more false positives than true ones being disabled in the pipeline, so the code is then pushed with no security testing at all. The list is endless.

Fail Closed/Safe, Then Roll Back

Whenever there is a problem during a transaction, it is important that we roll it back and do it again. If we were opening something (such as a database connection), we should close it. If we were granting access, we should revoke it. We do this to avoid race conditions, timing attacks, and other potential vulnerabilities. Going back to the start and doing it all again requires very little effort or time with a modern computer. But allowing an error or failure to continue can allow our system to fall into an unknown state. And that is where vulnerabilities often lie in wait: when our system's next action is not predictable. Thus, it is always safer to fail closed, roll back, and start again.
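Database transactions are the classic place to see this principle. In the Python sketch below (using the standard library's sqlite3 module, with a deliberately simulated mid-transfer failure), either both updates happen or neither does; the system never lands in the unknown state where money has left one account but not arrived in the other:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount, fail_midway=False):
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        if fail_midway:
            raise RuntimeError("simulated crash between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()   # both updates succeed together...
    except Exception:
        conn.rollback() # ...or fail closed: undo the partial transfer

transfer(conn, "alice", "bob", 30, fail_midway=True)
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

After the failed transfer, both balances are exactly what they were before: the rollback returned the system to a known state, ready to try again.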

Compliance, Laws, and Regulations

Compliance: the act or process of complying to a desire, demand, proposal, or regimen or to coercion.

— Merriam‐Webster

There aren't very many laws related to cybersecurity outside the United States and the European Union, and as far as I could find on the internet, none of them concentrate solely on the security of software. There are, however, several that cover data security from the United States and EU, which are very closely related to software: the Federal Information Security Modernization Act of 2014 (FISMA), the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the Notice of Security Breach Act of 2003 (only applies to California), the EU Cybersecurity Act, the Directive on Security of Network and Information Systems (NIS Directive), the Digital Operational Resilience Act (DORA), and the General Data Protection Regulation of 2018 (GDPR).7

That said, there are several self‐appointed regulating bodies in the field of information security that offer compliance and standards we can follow. Although many in the industry will agree that security and compliance are not the same thing, creating a requirement for users to follow best security practices can be quite helpful to propel organizations to develop and maintain a more robust security program.

What follows is a non‐exhaustive list of compliance and standards in the field of information security. You do not need to memorize them all, but being aware that they exist and if your organization needs to comply with any of them will help you do your job better:

Payment Card Industry Data Security Standard (PCI DSS):8 For companies that handle credit cards and other forms of payment online.

NIST special publications: For any information systems, there are several different standards available from NIST.

New York Department of Financial Services (NY DFS) Cybersecurity Regulation: For financial organizations from the United States.

Digital Operational Resilience Act (DORA): A European Union regulation that aims to strengthen the IT security of financial entities, including banks, insurance companies, and investment firms; it is set to apply from January 17, 2025. DORA focuses on ensuring that the financial sector in Europe can maintain resilience in the face of significant operational disruptions.

International Organization for Standardization (ISO) standards: A system built to help organizations manage their entire information security program.

General Data Protection Regulation (GDPR): A privacy‐focused regulation that aims to protect consumers.

Health Insurance Portability and Accountability Act (HIPAA)/Health Information Technology for Economic and Clinical Health Act (HITECH): The protection of personal health data of US citizens.

Federal Information Security Management Act (FISMA): Another regulation from the United States to help protect against data breaches for governmental organizations.9

California Consumer Privacy Act (CCPA): Protecting the privacy rights of California residents.

Security Frameworks

There are several organizations that have created security frameworks in an effort to make system processes easier to define, implement, and maintain. What follows is a non‐exhaustive list of some frameworks you may want to learn about on your journey into secure coding and secure applications.

An unsurprising first choice is OWASP (the Open Web Application Security Project), which offers three frameworks: the Security Knowledge Framework, the DevSecOps Maturity Model (DSOMM), and OWASP SAMM (Software Assurance Maturity Model).

The Security Knowledge Framework10 is free and open‐source software that uses OWASP's Application Security Verification Standard (ASVS) as its base for helping developers write secure applications. It gives secure coding examples in several languages so that programmers can look up the most secure way to do whatever they are trying to accomplish.

OWASP DSOMM is a framework that illustrates various security measures that are easily applicable when using the DevOps software development methodology and how to prioritize them.

OWASP SAMM11 is a framework for helping you evaluate the maturity of your secure system development life cycle (S‐SDLC). OWASP SAMM is agnostic of any technology or programming language; it's just there to help you improve your SDLC and other processes to ensure that your software is secure at the end.

OWASP offers several other frameworks, including a web app testing framework. It also has endless documentation about application security, shared freely on its website, https://owasp.org. Its best‐known resource is the OWASP Top Ten Security Risks to Web Applications (affectionately known as “the Top Ten”), which can be found here: https://owasp.org/www-project-top-ten/.

NOTE

When giving security advice to software developers, it is extremely important to be timely, specific, and concise. Back in the day, I worked on a security team with over 20 people, who mostly performed risk assessments that were made up of checklists. When developers asked for advice on their applications, the security team provided answers that absolutely baffled me. A software architect asked if he could be given the list they use at the end of the SDLC to validate the security of his application so that he could build it in from the start. He was told “that would be cheating,” and his request was rejected; apparently the security criteria were a secret. One developer asked for details of how to securely manage his application's session ID and was sent a link to the NIST website, with instructions to read the entire thing (easily over 1,000 pages) so he could learn how to build secure apps. I tried to explain to the security team that this was, effectively, like sending a picture of them giving the middle finger to the developers who sent these questions. It was both unhelpful and insulting. No one agreed with me. I ended up leaving that workplace rather quickly; I just didn't fit in.

Up next is NIST, which releases many frameworks along with other extremely helpful publications; NIST is an amazing online resource. NIST Special Publication (SP) 800‐5312 is an application security framework that describes the recommended risk management practices for software. This publication focuses mostly on how to test software and on tools that are recommended for such testing. The most recent updates include newer tooling such as Interactive Application Security Testing (IAST) and Runtime Application Self‐Protection (RASP) tools.

Also from NIST is the Cybersecurity Framework (CSF), which states on the title page that it is for improving critical infrastructure, but in reality it can apply to most IT systems. This framework focuses on managing and evaluating risk rather than specifics on secure coding or design.

Another group you should be aware of is the Center for Internet Security (CIS), which created a list of Critical Security Controls.13 Previously known as the "SANS Top 20," this list of controls is the biggest bang for your (security) buck at a large organization. It includes controls for securing all of an organization, not just IT or software development. This group also releases regular "CIS Benchmarks," which are vendor‐neutral security configuration guides, as well as pre‐hardened infrastructure images. CIS focuses on infrastructure over custom applications.

The next two are closer to compliance, but technically they are independent bodies. That said, they are usually implemented to prove security maturity to those outside of their organizations, such as customers, investors, and business partners.

The International Organization for Standardization (ISO), ISO 27001, and ISO 27002 certifications are considered the international standard for validating a cybersecurity program.14 It might not be a framework per se, but it guides you through the steps to get your house in order with regard to risk management.

Service Organization Control (SOC) Type 2, also known as SOC2, is a list of 60 controls for securely managing client data.15 The controls for this framework are intense, and it's a lot of work to implement, but when an organization can state that it has “SOC 2 Compliance,” it is a huge selling point for any product/company.

Although several more cybersecurity‐related frameworks certainly exist, none of them focus on the security of software. This leaves us with OWASP and NIST as leaders in providing software‐focused security frameworks at the time of this writing.

Learning from Mistakes and Sharing Those Lessons

Although this topic is not strictly about security, many cybersecurity issues occur due to a lack of knowledge, information, or training. If every security team made a point of sharing information widely within our organizations when a bug was fixed, a security flaw was discovered, or a postmortem was performed after a security incident, we would be less likely, as an industry, to fail so often. With cybersecurity issues, such as data breaches and other failures, happening so often, we must work hard to avoid repeating our mistakes.

Backward Compatibility (and Potential Risks It Introduces)

As someone who prefers to hold on to my personal devices for as long as humanly possible, I value backward compatibility and the cost savings it provides to the end user and my personal bank account. That said, it is not without risk. Software ages very badly; the longer something is in the public eye, the more potentially malicious attention it may have received. Vulnerabilities are sometimes discovered immediately, but often it takes time, and the more time provided, the more likely vulnerabilities will be found. As software, hardware, and their dependencies age, some of them stop being supported, which also has negative security implications (no more testing, no more security updates, etc.). When you are deciding how long to provide backward compatibility for a product you have created, please take security into account and whether you can continue to ensure that it will be safe for the end user.

Threat Modeling

Threat modeling is a process for identifying threats to a system you are charged with creating or maintaining. You identify the potential threats and then analyze whether you should mitigate them or not (some threats are minor or highly unlikely and do not require fixing to maintain your desired security posture). When you perform code review and security testing, you should search for your threat models and try to prove that your system is adequately defended against them.

Threat modeling can be incredibly formal, with a lot of documentation, processes, and time involved. It can also be informal, such as a brainstorming session with some note‐taking. Whichever way you choose, this process is almost always eye‐opening, illuminating several potential issues you had not previously planned for. We will go into further detail on this topic in the third part of this book.

PLANNING FOR PRIVACY

Years ago, when Alice was much younger, she dreamed of being a famous movie star. She auditioned and got small parts in a few independent films and even had a small following of fans. It didn't last long, but she enjoyed it while she could. During this brief stint, an older, more experienced actress took her aside and explained that now that she was “in the public eye,” her threat model had changed. “What's a threat model?” Alice asked. The older woman told her that she would receive mostly good attention, but that with any level of fame it also attracts negative attention, and thus it was very important to protect her personal information, such as where she lived, her date of birth, her social insurance number, and anything else someone might use to find her or impersonate her. Alice took this advice seriously, and as she got older, she was very conscious of what images and information she shared online. Having fans was great, but she didn't want to ever be surprised by one of them showing up at her front door unannounced!

The Difficulty of Patching

When we apply updates to our phones or software on our computers, it is called patching. When we update (patch) our operating system, sometimes it can cause software to have issues or require that older software be updated or it will no longer work. When there is an issue in a dependency for software, if it has its own dependencies, it can mean updating several at a time, which can be complex, difficult, and time‐consuming. Now imagine updating a programming language framework on your custom software. Sometimes this means you need to swap out dependencies or not call certain functions anymore; and once in a while (usually when you are several versions behind), it can require a complete rewrite of your application! After that work has been completed, extensive testing is required to ensure that the application continues to work as intended and that you have not created new bugs (security or otherwise) in the process. Please keep this in mind when someone says, “Just patch it”; it is usually much more work than meets the eye.

When working with companies on their application security programs, the complaint I hear most often is, “The developers won't fix the vulnerabilities we report.” When we dig a little deeper, the reason is often one of the following two issues: the software developers have an inappropriately high workload and don't have the time to fix the issues, or the release process is painful, time‐consuming, and ineffective. More than once, I have told clients, “Your main threat to your software security is your inability to release software fixes in a reasonable amount of time. We need to fix this process as soon as possible.” Anything you can do to streamline or automate this process will help your organization respond faster to security issues and is a good investment of time and money.

PRIVACY THAT DOESN'T WORK

Bob had always loved his MacBook and the Apple macOS operating system, but when the macOS Catalina update came out in 2019, it provided all sorts of new privacy "features" that broke almost everything! He was supposed to give a presentation at work the next day, and his presentation software couldn't access his folders without him messing around for 10 minutes; he was very embarrassed. He tried to make a video that evening on his personal machine, and Catalina struck again! He had to get completely new video recording software; Open Broadcaster Software (OBS) was completely nonfunctional. It took him weeks to get his personal machine back to normal and have everything working the way he liked it. Bob has a lot of sympathy for people who apply patches for a living!

Retesting Fixes for New Security Bugs

Whenever we fix a bug, we are changing and/or writing code. When we write new code or change existing code, it is very possible that we can create a new bug. And some of those bugs are security bugs. It's never on purpose, but it happens to everyone at some point. The point here isn't to shame anyone for making errors; it's what human beings do. The point is that we must always retest our code after we make changes, to avoid this situation whenever possible. Ideally, we do this retesting using automation. With the advent of DevOps and continuous integration and continuous delivery/deployment (CI/CD) pipelines, doing this is easier than ever before. On top of this, there are now hundreds of security tools that can be automated to retest your application in very little time, including free and open‐source options. Keep reading this book, and we will go over how you can test your own apps without the need to be a security expert.
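One practical habit: every time you fix a security bug, capture it as an automated regression test so the same flaw cannot silently return in a later change. The Python sketch below uses a hypothetical redirect validator as the example; the scenario (a validator once bypassable with a protocol-relative URL like "//evil.example") is invented for illustration:

```python
# Hypothetical fix: only allow redirects to relative paths on our own site.
# The original bug was that "//evil.example" starts with "/" but is treated
# by browsers as a full URL to another host.
def is_safe_redirect(target: str) -> bool:
    return target.startswith("/") and not target.startswith("//")

# Regression tests, kept in the automated suite forever:
def test_redirect_regressions():
    assert is_safe_redirect("/account/home")
    assert not is_safe_redirect("https://evil.example")
    assert not is_safe_redirect("//evil.example")  # the original bug
    return True
```

Run in every CI/CD pipeline, tests like this cost seconds per build and make the class of mistake "we reintroduced a fixed vulnerability" dramatically less likely.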

NOTE

KISS: Keep It Simple, Silly. Complexity is the enemy of security. Systems work better when they are kept simple. This applies to everything, including the applications you develop. The more complex the code is, the easier it is to make errors, and the more difficult it is to maintain. Reducing complexity will help you create more secure software.

Chapter Exercises

You can find the answers to these questions in Appendix B, “Answer Keys.”

Why is it important to retest our code after we fix a bug?

Why is it important that we (try to) keep our programming frameworks up to date for custom software that we build?

Which security framework seems the most useful to you, and why?

What is your personal threat model? (Note: There are no wrong answers here. The point is for you to identify potential threats in your life and see if you can mitigate them.)

Give one example of “assumed trust.” It does not have to be IT related.

Think of one new security thing you have learned that you think would benefit your team at work, at school, or in your personal life. Share that information with the appropriate people. It can be something from this book or anything else you think might help someone be more secure.

Give one example of least privilege and how you would apply it. Explain why, and the potential consequences if least privilege was not applied to this situation or system.

Give an example of a potential secure default or “paved road” that you would like to implement.

Give one example of security through obscurity.

Give an example of a situation where you would “fail safe,” “fail closed,” or “roll back a transaction” and how it could provide protection to the users of that system.

CHAPTER 2: Beginning

Secure coding is a bit like muscle memory: the more often you do it, the easier it becomes. You make a habit of validating all the input to your app. You always use parameterized queries. You meet with someone from the security team to ask them to help you classify and label your data, until you know how to do it by yourself.
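Those first two habits can be sketched in a few lines of Python using the standard library's sqlite3 module; the table, column, and validation rules here are example choices, not a prescription:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

def find_user(conn, username: str):
    # Habit 1 -- validate input: allowlist of expected characters and length.
    if not username.isalnum() or len(username) > 30:
        raise ValueError("invalid username")
    # Habit 2 -- parameterized query: the ? placeholder keeps user data out
    # of the SQL text, so input like "' OR '1'='1" can never change the query.
    cur = conn.execute("SELECT username FROM users WHERE username = ?",
                       (username,))
    return cur.fetchone()

row = find_user(conn, "alice")
```

Notice the data travels as a bound parameter, never by string concatenation; even if the validation layer were removed, the query itself would not be injectable.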

In this part of the book, beginning with this chapter, we are going to cover secure coding advice that applies to the majority of programming languages and frameworks. The information in this chapter is the start of your secure coding journey.

Follow a Secure System Development Life Cycle

If I could give only one piece of advice to a company, team, or person, it would be to follow a secure