Alice and Bob Learn Application Security - Tanya Janca - E-Book


Description

Learn application security from the very start, with this comprehensive and approachable guide!

Alice and Bob Learn Application Security is an accessible and thorough resource for anyone seeking to incorporate, from the beginning of the System Development Life Cycle, best security practices in software development. This book covers all the basic subjects such as threat modeling and security testing, but also dives deep into more complex and advanced topics for securing modern software systems and architectures. Throughout, the book offers analogies, stories of the characters Alice and Bob, real-life examples, technical explanations and diagrams to ensure maximum clarity of the many abstract and complicated subjects.

Topics include:

  • Secure requirements, design, coding, and deployment
  • Security Testing (all forms)
  • Common Pitfalls
  • Application Security Programs
  • Securing Modern Applications
  • Software Developer Security Hygiene

Alice and Bob Learn Application Security is perfect for aspiring application security engineers and practicing software developers, as well as software project managers, penetration testers, and chief information security officers who seek to build or improve their application security programs.

Alice and Bob Learn Application Security illustrates all the included concepts with easy-to-understand examples and concrete practical applications, furthering the reader's ability to grasp and retain the foundational and advanced topics contained within.


Page count: 480

Year of publication: 2020




Table of Contents

Cover

Foreword

Introduction

Pushing Left

About This Book

Out-of-Scope Topics

The Answer Key

Part I: What You Must Know to Write Code Safe Enough to Put on the Internet

CHAPTER 1: Security Fundamentals

The Security Mandate: CIA

Assume Breach

Insider Threats

Defense in Depth

Least Privilege

Supply Chain Security

Security by Obscurity

Attack Surface Reduction

Hard Coding

Never Trust, Always Verify

Usable Security

Factors of Authentication

Exercises

CHAPTER 2: Security Requirements

Requirements

Requirements Checklist

Exercises

CHAPTER 3: Secure Design

Design Flaw vs. Security Bug

Secure Design Concepts

Segregation of Production Data

Threat Modeling

Exercises

CHAPTER 4: Secure Code

Selecting Your Framework and Programming Language

Untrusted Data

HTTP Verbs

Identity

Session Management

Bounds Checking

Authentication (AuthN)

Authorization (AuthZ)

Error Handling, Logging, and Monitoring

Exercises

CHAPTER 5: Common Pitfalls

OWASP

Defenses and Vulnerabilities Not Previously Covered

Race Conditions

Closing Comments

Exercises

Part II: What You Should Do to Create Very Good Code

CHAPTER 6: Testing and Deployment

Testing Your Code

Testing Your Application

Testing Your Infrastructure

Testing Your Database

Testing Your APIs and Web Services

Testing Your Integrations

Testing Your Network

Deployment

Exercises

CHAPTER 7: An AppSec Program

Application Security Program Goals

Application Security Activities

Application Security Tools

CHAPTER 8: Securing Modern Applications and Systems

APIs and Microservices

Online Storage

Containers and Orchestration

Serverless

Infrastructure as Code (IaC)

Security as Code (SaC)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

Continuous Integration/Delivery/Deployment

Dev(Sec)Ops

The Cloud

Cloud Workflows

Modern Tooling

Modern Tactics

Summary

Exercises

Part III: Helpful Information on How to Continue to Create Very Good Code

CHAPTER 9: Good Habits

Password Management

Multi-Factor Authentication

Incident Response

Fire Drills

Continuous Scanning

Technical Debt

Inventory

Other Good Habits

Summary

Exercises

CHAPTER 10: Continuous Learning

What to Learn

Take Action

Exercises

Learning Plan

CHAPTER 11: Closing Thoughts

Lingering Questions

Conclusion

APPENDIX A: Resources

Introduction

Chapter 1: Security Fundamentals

Chapter 2: Security Requirements

Chapter 3: Secure Design

Chapter 4: Secure Code

Chapter 5: Common Pitfalls

Chapter 6: Testing and Deployment

Chapter 7: An AppSec Program

Chapter 8: Securing Modern Applications and Systems

Chapter 9: Good Habits

Chapter 10: Continuous Learning

APPENDIX B: Answer Key

Chapter 1: Security Fundamentals

Chapter 2: Security Requirements

Chapter 3: Secure Design

Chapter 4: Secure Code

Chapter 5: Common Pitfalls

Chapter 6: Testing and Deployment

Chapter 7: An AppSec Program

Chapter 8: Securing Modern Applications and Systems

Chapter 9: Good Habits

Chapter 10: Continuous Learning

Index

End User License Agreement

List of Illustrations

Introduction

Figure I-1: System Development Life Cycle (SDLC)

Figure I-2: Shifting/Pushing Left

Chapter 1

Figure 1-1: The CIA Triad is the reason IT Security teams exist.

Figure 1-2: Confidentiality: keeping things safe

Figure 1-3: Integrity means accuracy.

Figure 1-4: Resilience improves availability.

Figure 1-5: Three layers of security for an application; an example of defens...

Figure 1-6: A possible supply chain for Bob’s doll house

Figure 1-7: Example of an application calling APIs and when to authenticate

Chapter 2

Figure 2-1: The System Development Life Cycle (SDLC)

Figure 2-2: Data classifications Bob uses at work

Figure 2-3: Forgotten password flowchart

Figure 2-4: Illustration of a web proxy intercepting web traffic

Chapter 3

Figure 3-1: The System Development Life Cycle (SDLC)

Figure 3-2: Flaws versus bugs

Figure 3-3: Approximate cost to fix security bugs and flaws during the SDLC

Figure 3-4: Pushing left

Figure 3-5: Using a web proxy to circumvent JavaScript validation

Figure 3-6: Example of very basic attack tree for a run-tracking mobile app

Chapter 4

Figure 4-1: Input validation flowchart for untrusted data

Figure 4-2: Session management flow example

Chapter 5

Figure 5-1: CRSF flowchart

Figure 5-2: SSRF flowchart

Chapter 6

Figure 6-1: Continuous Integration/Continuous Delivery (CI/CD)

Chapter 7

Figure 7-1: Security activities added to the SDLC

Chapter 8

Figure 8-1: Simplified microservice architecture

Figure 8-2: Microservice architecture with API gateway

Figure 8-3: Infrastructure as Code workflow

Figure 8-4: File integrity monitoring and application control tooling at work...


Early Praise for Alice & Bob Learn Application Security by Tanya Janca

 

“Tanya knows her stuff. She has a huge depth of experience and expertise in application security, DevSecOps, and cloud security. We can all learn a ton of stuff from Tanya, so you should read her book!”

-Dafydd Stuttard, best-selling co-author of The Web Application Hacker’s Handbook, creator of Burp Suite

 

“I learned so much from this book! Information security is truly everyone’s job — this book is a fantastic overview of the vast knowledge needed by everyone, from developer, infrastructure, security professionals, and so much more. Kudos to Ms. Janca for writing such an educational and practical primer. I loved the realistic stories that frame real-world problems, spanning everything from design, migrating applications from problematic frameworks, mitigating admin risks, and things that every modern developer needs to know.”

-Gene Kim, bestselling author of The Unicorn Project, co-author of The Phoenix Project, DevOps Handbook, Accelerate

 

“Practical guidance for the modern era; Tanya does a great job of communicating current day thinking around AppSec in terms we can all relate to.”

-Troy Hunt, creator of “Have I Been Pwned?”

Alice & Bob Learn Application Security

Tanya Janca

Foreword

There have been dramatic advances in application security over the past several years, with many forces pushing organizations to care about the security of their software: the rise of security incidents caused by insecure software, the growing number of regulations that require companies to take information security seriously, and our growing dependence on internet-enabled software.

Organizations of every size, in every sector of business and government, have experienced breaches and the losses that go with them. However, this growing number of information security events also brings awareness that helps push organizations, and the developers within them, to build more secure software.

While awareness of the cost of insecure software has risen, it’s not enough. We need surgical advice and clear engineering strategies to build secure software. Enterprises that wish to build secure software must turn their focus to a new way of engineering that includes changes to the famous trilogy of people, process, and technology.

That’s where Alice and Bob come in.

Tanya’s book lays out the subject of application security in a clear and concise way, allowing readers to apply what they have learned as they move through the book. The chapters are peppered with tales of Alice and Bob, and how the security decisions we make affect real people’s lives. The book starts with an explanation of the importance of this topic, then teaches all of the main security concepts that we all seem to pick up somewhere, but we’re never quite sure how it happened.

From security requirements for web applications to secure design concepts, from secure coding guidelines to common pitfalls, it’s sprinkled with stories, examples featuring Alice or Bob, and diagrams. It also covers testing and deployment, but it’s certainly not a book about “hacking”; it’s about how to ensure that your applications are tough, rugged, and secure. The book describes how to create an AppSec program, how to secure modern technologies and systems, and habits for developers (or anyone) to keep themselves and their systems safe, and it even includes a learning plan at the end! With tips, tricks, and even jokes, this is not your average textbook.

I hope you enjoy this book as much as I did, and that you decide to fight the good fight with Tanya and me, by building secure software!

-Jim Manico, Founder, Secure Coding Instructor, Manicode Security

Introduction

Why application security? Why should you read this book? Why is security important? Why is it so hard?

If you have picked up this book, you likely already know the answers to these questions. You have seen the headlines about companies that have been “hacked,” data breached, identities stolen, businesses and lives ruined. However, you may not be aware that the number-one cause of data breaches is insecure software, responsible for between 26% and 40% of leaked and stolen records (Verizon Breach Report, 2019).1 Yet when we look at the budgets of most companies, the amount allocated toward ensuring their software is secure is usually much, much lower than that.

Most organizations at this point are highly skilled at protecting their network perimeter (with firewalls), enterprise security (blocking malware and not allowing admin rights for most users), and physical security (badging in and out of secure areas). That said, reliably creating secure software is still an elusive goal for most organizations today. Why?

Right now, universities and colleges are teaching students how to code, but not teaching them how to ensure the code they write is secure, nor are they teaching them even the basics of information security. Most post-secondary programs that do cover security just barely touch upon application security, concentrating instead on identity, network security, and infrastructure.

Imagine if someone went to school to become an electrician but they never learned about safety. Houses would catch fire from time to time because the electricians wouldn't know how to ensure the work that they did was safe. Allowing engineering and computer science students to graduate with inadequate security training is equally dangerous, as they create banking software, software that runs pacemakers, software that safeguards government secrets, and so much more that our society depends on.

This is one part of the problem.

Another part of the problem is that (English-language) training is generally extremely expensive, making it unobtainable for many. There is also no clear career path or training program that a person can take to become a secure coder, security architect, incident responder, or application security engineer. Most people end up with on-the-job training, which means that each of us has a completely different idea of how to do things, with varying results.

Adding to this problem is how profitable it is to commit crimes on the internet; with attribution (figuring out who committed the crime) being so difficult, any application hosted on the internet faces many, many threats. The more valuable the system or the data within it, the more threats it will face.

The last part of this equation is that application security is quite difficult. Unlike infrastructure security, where each copy of Microsoft Windows Server 2008 R2 SP2 is exactly the same, each piece of custom software is a snowflake: unique by design. When you build a deck out of wood in your backyard and you go to the hardware store to buy a 2x4 that is 8 feet long, it will be the same in every store you go to, meaning you can make safe assumptions and calculations. With software this is almost never the case; you must never make assumptions, and you must verify every fact. This means brute-force memorization, automated tools, and other one-size-fits-all solutions rarely work. And that makes application security, as a field, very challenging.

Pushing Left

If you look at the System Development Life Cycle (SDLC) in Figure I-1, you see the various phases moving toward the right of the page. Requirements come before Design, which comes before Coding. Whether you are doing Agile, Waterfall, DevOps, or any other software development methodology, you always need to know what you are building (requirements), make a plan (design), build it (coding), verify that it does all it should do, and nothing more (testing), and then release and maintain it (deployment).

Figure I-1: System Development Life Cycle (SDLC)

Often security activities start in the release or testing phases, far to the right, and quite late in the project. The problem with this is that the later in the process that you fix a flaw (design problem) or a bug (implementation problem), the more it costs and the harder it is to do.

Let me explain this a different way. Imagine Alice and Bob are building a house. They have saved for this project for years, and the contractors are putting the finishing touches on it by putting up wallpaper and adding handles on the cupboards. Alice turns to Bob and says, “Honey, we have 2 children but only one bathroom! How is this going to work?” If they tell the contractors to stop working, the house won't be finished on time. If they ask them to add a second bathroom, where will it go? How much will it cost? Finding out this late in their project would be disastrous. However, if they had figured this out in the requirements phase or during the design phase it would have been easy to add more bathrooms, for very little cost. The same is true for solving security problems.

This is where “shifting left” comes into play: the earlier you can start doing security activities during a software development project, the better the results will be. The arrows in Figure I-2 show a progression of starting security earlier and earlier in your projects. We will discuss later on what these activities are.

Figure I-2: Shifting/Pushing Left

About This Book

This book will teach you the foundations of application security (AppSec for short); that is, how to create secure software. This book is for software developers, information security professionals wanting to know more about the security of software, and anyone who wants to work in the field of application security (which includes penetration testing, aka “ethical hacking”).

If you are a software developer, it is your job to make the most secure software that you know how to make. Your responsibility here cannot be overstated; there are hundreds of programmers for every AppSec engineer in the field, and we cannot do it without you. Reading this book is the first step on the right path. After you’ve read it, you should know enough to make secure software and know where to find answers if you are stuck.

Notes on format: There will be examples of how security issues can potentially affect real users, with the characters Alice and Bob making several appearances throughout the book. You may recall Alice and Bob from other security examples; they have been used to simplify complex topics in our industry since the early days of modern cryptography.

Out-of-Scope Topics

A brief note on topics that are out of scope for this book: incident response (IR), network monitoring and alerting, cloud security, infrastructure security, network security, security operations, identity and access management (IAM), enterprise security, support, anti-phishing, reverse engineering, code obfuscation, and other advanced defense techniques, as well as every other type of security not listed here. Some of these topics are touched upon but are in no way covered exhaustively in this book. Please consume additional resources to learn more about these important topics.

The Answer Key

At the end of each chapter are exercises to help you learn and to test your knowledge. There is an answer key at the end of the book; however, it is incomplete. Many of the questions could be an essay, research paper, or online discussion in themselves, while others are personal in nature (only you can answer what roadblocks you may be facing in your workplace). With this in mind, the answer key is made up of answers (when possible), examples (when appropriate), and some skipped questions, left for online discussion.

In the months following the publication of this book, you will be able to stream recorded discussions answering all of the exercise questions online at youtube.com/shehackspurple under the playlist “Alice and Bob Learn Application Security.” You can subscribe to learn about new videos, watch the previous videos, and explore other free content.

You can participate live in the discussions by subscribing to the SheHacksPurple newsletter to receive invitations to the streams (plus a lot of other free content) at newsletter.shehackspurple.ca.

It doesn't cost anything to attend the discussions or watch them afterward, and you can learn a lot by hearing others' opinions, ideas, successes, and failures. Please join us.

Part I: What You Must Know to Write Code Safe Enough to Put on the Internet

In This Part

Chapter 1: Security Fundamentals

Chapter 2: Security Requirements

Chapter 3: Secure Design

Chapter 4: Secure Code

Chapter 5: Common Pitfalls

CHAPTER 1: Security Fundamentals

Before learning how to create secure software, you need to understand several key security concepts. There is no point in memorizing how to implement a concept if you don’t understand when or why you need it. Learning these principles will ensure you make secure project decisions and are able to argue for better security when you face opposition. Also, knowing the reason behind security rules makes them a lot easier to live with.

The Security Mandate: CIA

The mandate and purpose of every IT security team is to protect the confidentiality, integrity, and availability of the systems and data of the company, government, or organization that they work for. That is why the security team hassles you about having unnecessary administrator rights on your work machine, won’t let you plug unknown devices into the network, and wants you to do all the other things that feel inconvenient; they want to protect these three things. We call it the “CIA Triad” for short (Figure 1-1).

Let’s examine this with our friends Alice and Bob. Alice has type 1 diabetes and uses a tiny device implanted in her arm to check her insulin several times a day, while Bob has a “smart” pacemaker that regulates his heart, which he accesses via a mobile app on his phone. Both of these devices are referred to as IoT medical device implants in our industry.

Figure 1-1: The CIA Triad is the reason IT Security teams exist.

NOTE IoT stands for Internet of Things: physical products that are internet-connected. A smart toaster or a fridge that talks to the internet is an IoT device.

Confidentiality

Alice is the CEO of a large Fortune 500 company, and although she is not ashamed that she is a type 1 diabetic, she does not want this information to become public. She is often interviewed by the media and does public speaking, serving as a role model for many other women in her industry. Alice works hard to keep her personal life private, and this includes her health condition. She believes that some people within her organization are after her job and would do anything to try to portray her as “weak” in an effort to undermine her authority. If her device were to accidentally leak her information, showing itself on public networks, or if her account information became part of a breach, this would be highly embarrassing for her and potentially damaging to her career. Keeping her personal life private is important to Alice.

Bob, on the other hand, is open about his heart condition and happy to tell anyone that he has a pacemaker. He has a great insurance plan with the federal government and is grateful that when he retires he can continue with his plan, despite his pre-existing condition. Confidentiality is not a priority for Bob in this respect (Figure 1-2).

Figure 1-2: Confidentiality: keeping things safe

NOTE Confidentiality is often undervalued in our personal lives. Many people tell me they “have nothing to hide.” Then I ask, “Do you have curtains on your windows at home? Why? I thought that you had nothing to hide?” I’m a blast at parties.

Integrity

Integrity in data (Figure 1-3) means that the data is current, correct, and accurate. Integrity also means that your data has not been altered during transmission; the correct value must be maintained during transit. Integrity in a computer system means that the results it gives are precise and factual. For Bob and Alice, this may be the most crucial of the CIA factors: if either of their systems gives them incorrect treatment, it could result in death. For a human being (as opposed to a company or nation-state), there does not exist a more serious repercussion than end of life. The integrity of their health systems is crucial to ensuring they both remain in good health.

Figure 1-3: Integrity means accuracy.
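A common way to verify integrity in transit is to compare a cryptographic digest of the data before and after transmission. Below is a minimal Python sketch using the standard library; the medical reading is invented for illustration:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# The sender computes a digest before transmission...
message = b"insulin reading: 5.4 mmol/L at 08:00"
digest_sent = sha256_of(message)

# ...and the receiver recomputes it on arrival; a mismatch
# means the data was altered (or corrupted) in transit.
assert sha256_of(message) == digest_sent

# Changing even one character produces a completely different digest.
tampered = b"insulin reading: 9.4 mmol/L at 08:00"
assert sha256_of(tampered) != digest_sent
```

Note that the digest itself must travel over a trusted channel (or be signed); an attacker who can alter the data and the digest together defeats this check.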

CIA is the very core of our entire industry. Without understanding this from the beginning, and how it affects your teammates, your software, and most significantly, your users, you cannot build secure software.

Availability

If Alice’s insulin-measuring device malfunctioned, was tampered with, or had dead batteries, it would not be “available.” Alice usually checks her insulin levels several times a day, but she is able to test manually (by pricking her finger and using a medical kit designed for this purpose) if she needs to, so it is only somewhat important to her that this service is available. A lack of availability of this system would be quite inconvenient for her, but not life-threatening.

Bob, on the other hand, has irregular heartbeats from time to time and never knows when his arrhythmia will strike. If Bob’s pacemaker was not available when his heart was behaving erratically, this could be a life-or-death situation if enough time elapsed. It is vital that his pacemaker is available and that it reacts in real time (immediately) when an emergency happens.

Bob works for the federal government as a clerk managing secret and top-secret documents, and has for many years. He is a proud grandfather and has been trying hard to live a healthy life since his pacemaker was installed.

NOTE Medical devices are generally “real-time” software systems. Real-time means the system must respond to changes in the fastest amount of time possible, generally in milliseconds. It cannot have delays—the responses must be as close as possible to instantaneous or immediate. When Bob’s arrhythmia starts, his pacemaker must act immediately; there cannot be a delay. Most applications are not real-time. If there is a 10-millisecond delay in the purchase of new running shoes, or in predicting traffic changes, it is not truly critical.

Figure 1-4: Resilience improves availability.

NOTE Many customers move to “the cloud” for the sole reason that it is extremely reliable (almost always available) when compared to more traditional in-house data center service levels. As you can see in Figure 1-4, resilience improves availability, making public cloud an attractive option from a security perspective.

The following are security concepts that are well known within the information security industry. It is essential to have a good grasp of these foundational ideas in order to understand how the rest of the topics in this book apply to them. If you are already a security practitioner, you may not need to read this chapter.

Assume Breach

“There are two types of companies: those that have been breached and those that don’t know they’ve been breached yet.”2 It’s such a famous saying in the information security industry that we don’t even know who to attribute it to anymore. It may sound pessimistic, but for those of us who work in incident response, forensics, or other areas of investigation, we know this is all too true.

The concept of assume breach means making preparation and design decisions to ensure that if someone were to gain unapproved access to your network, application, data, or other systems, it would prove difficult, time-consuming, expensive, and risky for them, and you would be able to detect and respond to the situation quickly. It also means monitoring and logging your systems so that, even if you don’t notice a breach until after it occurs, you can at least find out what happened. Many systems also monitor for behavioral changes or anomalies to detect potential breaches. It means preparing for the worst, in advance, to minimize damage, time to detect, and remediation effort.

Let’s look at two examples of how we can apply this principle: a consumer example and a professional example.

As a consumer, Alice opens an online document-sharing account. If she were to “assume breach,” she wouldn’t upload anything sensitive or valuable there (for instance, unregistered intellectual property, photos of a personal nature that could damage her professional or personal life, business secrets, government secrets, etc.). She would also set up monitoring of the account as well as have a plan if the documents were stolen, changed, removed, shared publicly, or otherwise accessed in an unapproved manner. Lastly, she would monitor the entire internet in case they were leaked somewhere. This would be an unrealistic amount of responsibility to expect from a regular consumer; this book does not advise average consumers to “assume breach” in their lives, although doing occasional online searches on yourself is a good idea and not uploading sensitive documents online is definitely advisable.

As a professional, Bob manages secret and top-secret documents. The department Bob works at would never consider the idea of using an online file-sharing service to share their documents; they control every aspect of this valuable information. When they were creating the network and the software systems that manage these documents, they designed them, and their processes, assuming breach. They hunt for threats on their network, designed their network using zero trust, monitor the internet for signs of data leakage, authenticate to APIs before connecting, verify data from the database and from internal APIs, perform red team exercises (security testing in production), and monitor their network and applications closely for anomalies or other signs of breach. They’ve written automated responses to common attack patterns, have processes in place and ready for uncommon attacks, and they analyze behavioral patterns for signs of breach. They operate on the idea that data may have been breached already or could be at any time.

Another example of this would be initiating your incident response process when a serious bug has been disclosed via your responsible disclosure or bug bounty program, assuming that someone else has potentially already found and exploited this bug in your systems.
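The monitoring half of assume breach, watching logs for anomalies such as repeated failed logins, can be sketched in a few lines of Python. The event data, event names, and threshold below are invented for illustration:

```python
from collections import Counter

# Hypothetical (user, action) security events pulled from an access log.
events = [
    ("alice", "login_failed"), ("alice", "login_failed"),
    ("alice", "login_failed"), ("alice", "login_failed"),
    ("alice", "login_failed"), ("bob", "login_ok"),
]

THRESHOLD = 4  # flag any account with more failed logins than this

# Count failed logins per user and flag anomalies worth investigating.
failed = Counter(user for user, action in events if action == "login_failed")
alerts = sorted(user for user, count in failed.items() if count > THRESHOLD)
print(alerts)  # ['alice']
```

Real systems feed events like these into centralized logging and alerting pipelines; the point is that the data is collected and examined before a breach is confirmed, not after.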

According to Wikipedia, coordinated disclosure is a vulnerability disclosure model in which a vulnerability or an issue is disclosed only after a period of time that allows for the vulnerability or issue to be patched or mended.

Bug bounty programs are run by many organizations. They provide recognition and compensation for security researchers who report bugs, especially those pertaining to vulnerabilities.

Insider Threats

An insider threat means that someone who has approved access to your systems, network, and data (usually an employee or consultant) negatively affects one or more of the CIA aspects of your systems, data, and/or network. This can be malicious (on purpose) or accidental.

Here are some examples of malicious threats and the parts of the CIA Triad they affect:

An employee downloading intellectual property onto a portable drive, leaving the building, and then selling the information to your competitors (confidentiality)

An employee deleting a database and its backup on their last day of work because they are angry that they were dismissed (availability)

An employee programming a back door into a system so they can steal from your company (integrity and confidentiality)

An employee downloading sensitive files from another employee’s computer and using them for blackmail (confidentiality)

An employee accidentally deleting files, then changing the logs to cover their mistake (integrity and availability)

An employee not reporting a vulnerability to management in order to avoid the work of fixing it (potentially all three, depending upon the type of vulnerability)

Here are some examples of accidental threats and the parts of the CIA Triad they affect:

Employees using software improperly, causing it to fall into an unknown state (potentially all three)

An employee accidentally deleting valuable data, files, or even entire systems (availability)

An employee accidentally misconfiguring software, the network, or other software in a way that introduces security vulnerabilities (potentially all three)

An inexperienced employee pointing a web proxy/dynamic application security testing (DAST) tool at one of your internal applications, crashing the application (availability) or polluting your database (integrity)

We will cover how to avoid this in later chapters to ensure all of your security testing is performed safely.

WARNING Web proxy software and/or DAST tools are generally forbidden on professional work networks. Also known as “web app scanners,” web proxies are hacker tools and can cause great damage. Never point a web app scanner at a website or application and perform active scanning or other interactive testing without permission. It must be written permission, from someone with the authority to give the permission. Using a DAST tool to interact with a site on the internet (without permission) is a criminal act in many countries. Be careful, and when in doubt, always ask before you start.

Defense in Depth

Defense in depth is the idea of having multiple layers of security in case one is not enough (Figure 1-5). Although this may seem obvious when explained so simply, deciding how many layers and which layers to have can be difficult (especially if your security budget is limited).

“Layers” of security can be processes (checking someone’s ID before giving them their mail, having to pass security testing before software can be released), physical, software, or hardware systems (a lock on a door, a network firewall, hardware encryption), built-in design choices (writing separate functions for code that handles more sensitive tasks in an application, ensuring everyone in a building must enter through one door), and so on.

Figure 1-5: Three layers of security for an application; an example of defense in depth

Here are some examples of using multiple layers:

When creating software: Having security requirements, performing threat modeling, ensuring you use secure design concepts, ensuring you use secure coding tactics, security testing, testing in multiple ways with multiple tools, etc. Each one presents another form of defense, making your application more secure.

Network security: Turning on monitoring, having a SIEM (security information and event management; a dashboard for viewing potential security events in real time), having an IPS/IDS (intrusion prevention/detection system; tools to find and stop intruders on your network), firewalls, and much more. Each new item adds to your defenses.

Physical security: Locks, barbed wire, fences, gates, video cameras, security guards, guard dogs, motion sensors, alarms, etc.

Quite often the most difficult thing when advocating for security is convincing someone that one defense is not enough. Use the value of what you are protecting (reputation, monetary value, national security, etc.) when making these decisions. While it makes little business sense to spend one million dollars protecting something worth one thousand dollars, the examples our industry most often sees are the reverse.

NOTE Threat modeling: Identifying threats to your applications and creating plans for mitigation. More on this in Chapter 3.

SIEM system: Monitoring for your network and applications, a dashboard of potential problems.

Intrusion prevention/detection system (IPS/IDS): Software installed on a network with the intention of detecting and/or preventing network attacks.

Least Privilege

Giving users exactly as much access and control as they need to do their jobs, but nothing more, is the concept of least privilege. The reasoning behind least privilege is that if someone were able to take over your account(s), they wouldn’t get very far. If you are a software developer with access to your code and read/write access to the single database that you are working on, then if someone were able to take over your account, they would only be able to access that one database, your code, your email, and whatever else you have access to. However, if you were the database owner on all of the databases, the intruder could potentially wipe out everything. Although it may be unpleasant to give up your superpowers on your desktop, network, or other systems, you are reducing the risk to those systems significantly by doing so.

Examples of least privilege:

Needing extra security approvals to have access to a lab or area of your building with a higher level of security.

Not having administrator privileges on your desktop at work.

Having read-only access to all of your team’s code and write access to your projects, but not having access to other teams’ code repositories.

Creating a service account for your application to access its database and only giving it read/write access, not database owner (DBO). If the application only requires read access, only give it what it requires to function properly. A service account with only read access to a database cannot be used to alter or delete any of the data, even if it could be used to steal a copy of the data. This reduces the risk greatly.
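The service-account idea can be demonstrated in miniature with SQLite, which can open a database file in read-only mode — a small stand-in for a read-only service account (full database engines would use roles and GRANT statements instead). This is an illustrative sketch, not production code:

```python
import os
import sqlite3
import tempfile

# Create a database file for the example.
path = os.path.join(tempfile.mkdtemp(), "app.db")

# "Administrator" connection: full read/write access.
admin = sqlite3.connect(path)
admin.execute("CREATE TABLE customers (name TEXT)")
admin.execute("INSERT INTO customers VALUES ('Alice')")
admin.commit()
admin.close()

# "Service account" connection: read-only, via a URI with mode=ro.
service = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = service.execute("SELECT name FROM customers").fetchall()
print(rows)  # reading works

try:
    service.execute("DELETE FROM customers")  # writing does not
except sqlite3.OperationalError as e:
    print("write rejected:", e)
```

Even if this connection were hijacked, the attacker could read the data but could not alter or delete it — exactly the risk reduction described above.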

NOTE Software developers and system administrators are attractive targets for most malicious actors, as they have the most privileges. By giving up some of your privileges you will be protecting your organization more than you may realize, and you will earn the respect of the security team at the same time.

Supply Chain Security

Every item you use to create a product is considered to be part of your “supply chain,” with the chain including the entity (supplier) of each item (manufacturer, store, farm, a person, etc.). It’s called a “chain” because each part of it depends on the previous part in order to create the final product. It can include people, companies, natural or manufactured resources, information, licenses, or anything else required to make your end product (which does not need to be physical in nature).

Let’s explain this a bit more clearly with an example. If Bob was building a dollhouse for his grandchildren, he might buy a kit that was made in a factory. That factory would require wood, paper, ink to create color, glue, machines for cutting, workers to run and maintain the machines, and energy to power the machines. To get the wood, the factory would order it from a lumber company, which would cut it down from a forest that it owns or has licensed access to. The paper, ink, and glue are likely all made in different factories. The workers could work directly for the factory or they could be casual contractors. The energy would most likely come from an energy company but could potentially come from solar or wind power, or a generator in case of emergency. Figure 1-6 shows the (hypothetical) supply chain for the kit that Bob has purchased in order to build a dollhouse for his grandchildren for Christmas this year.

Figure 1-6: A possible supply chain for Bob’s dollhouse

What are the potential security (safety) issues with this situation? The glue provided in this kit could be poisonous, or the ink used to decorate the pieces could be toxic. The dollhouse could be manufactured in a facility that also processes nuts, which could cross-contaminate the boxes, which could cause allergic reactions in some children. Incorrect parts could be included, such as a sharp component, which would not be appropriate for a young child. All of these situations are likely to be unintentional on the part of the factory.

When creating software, we also use a supply chain: the frameworks we use to write our code, the libraries we call in order to write to the screen, do advanced math calculations, or draw a button, the application programming interfaces (APIs) we call to perform actions on behalf of our applications, etc. Worse still, each one of these pieces usually depends on other pieces of software, and all of them are potentially maintained by different groups, companies, and/or people. Modern applications are typically made up of 20–40 percent original code3 (what you and your teammates wrote), with the rest being made up of these third-party components, often referred to as “dependencies.” When you plug dependencies into your applications, you are accepting the risks of the code they contain that your application uses. For instance, if you add something to process images into your application rather than writing your own, but it has a serious security flaw in it, your application now has a serious security flaw in it, too.

This is not to suggest that you should write every single line of code yourself; that would not only be extremely inefficient, but you may still make errors that result in security problems. One way to reduce the risk, though, is to use fewer dependencies and to carefully vet the ones that you do decide to include in your software. Many tools on the market (some are even free) can verify whether there are any known security issues with your dependencies. These tools should be run not only every time you push new code to production; your code repository should also be scanned regularly.
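At their core, such tools compare your dependency list against advisory databases of known-vulnerable versions. A toy sketch of the idea (the advisory set below is tiny and partly hypothetical; real tools pull from continuously updated advisory databases):

```python
# Known-bad (name, version) pairs. The event-stream entry reflects the
# real 2018 npm incident; "left-lib" is a made-up example.
KNOWN_VULNERABLE = {
    ("event-stream", "3.3.6"),
    ("left-lib", "1.0.2"),
}

def find_vulnerable(dependencies):
    """Return the (name, version) pairs that match a known advisory."""
    return sorted(dep for dep in dependencies if dep in KNOWN_VULNERABLE)

deps = [("event-stream", "3.3.6"), ("express", "4.17.1")]
print(find_vulnerable(deps))  # → [('event-stream', '3.3.6')]
```

Running a check like this in your build pipeline (via a real scanning tool) turns "we accepted this dependency once" into "we re-verify it on every release."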

SUPPLY CHAIN ATTACK EXAMPLE

The open source Node.js module called event-stream was handed over to a new maintainer in 2018, who added malicious code to it, waited until millions of people had downloaded it via npm (the package manager for Node.js), and then used this vulnerability to steal bitcoins from Copay wallets, which used the event-stream library in their wallet software.4

Another defense tactic against using an insecure software supply chain is using frameworks and other third-party components made by known companies or recognized and well-respected open source groups, just as a chef would only use the finest ingredients in the food they make. You can (and should) take care when choosing which components make it into the final cut of your products.

There have been a handful of publicly exposed supply chain attacks in recent years, where malicious actors injected vulnerabilities into software libraries, firmware (low-level software that is a part of hardware), and even into hardware itself. This threat is real and taking precautions against it will serve any developer well.

Security by Obscurity

The concept of security by obscurity means that if something is hidden, it will be “more secure,” as potential attackers will not notice it. The most common implementation of this is software companies hiding their source code, rather than putting it openly on the internet (this is used as a means to protect their intellectual property and as a security measure). Some go as far as obfuscating their code, changing it so that it is much more difficult, or even impossible, to understand for anyone attempting to reverse engineer the product.

NOTE Obfuscation is making something hard to understand or read. A common tactic is encoding all of the source code in ASCII, Base64, or hex, but that is quite easy for professional reverse engineers to see through. Some companies will double or triple encode their code. Another tactic is to XOR the code (a simple bitwise operation, available as an assembler command) or to create a custom encoding scheme and apply it programmatically. There are also products on the market that can perform more advanced obfuscation.
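To see why encoding-based obfuscation offers so little protection, note how easily it is reversed. A quick Python sketch:

```python
import base64

# Encoding hides code from a casual glance, not from a reverse engineer.
source = 'print("hello")'

# Base64 "obfuscation" and its one-line reversal:
encoded = base64.b64encode(source.encode()).decode()
decoded = base64.b64decode(encoded).decode()

# Single-byte XOR "obfuscation" — applying the same key twice undoes it:
key = 0x5A
xored = bytes(b ^ key for b in source.encode())
recovered = bytes(b ^ key for b in xored).decode()

print(decoded == source, recovered == source)  # → True True
```

Anyone who recognizes the encoding (and reverse engineers do) recovers the original in seconds, which is why obfuscation is only ever one thin layer of defense.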

Another example of “security by obscurity” is having a wireless router suppress the SSID/Wi-Fi name (meaning if you want to connect, you need to know the name) or deploying a web server without a domain name, hoping no one will find it. There are tools to get around both measures, but they still reduce the risk of people attacking your wireless router or web server.

The other side of this is “security by being open,” the argument that if you write open source software, more eyes will be on it and therefore those eyes will find security vulnerabilities and report them. In practice this is rarely true: security researchers rarely review open source code and report bugs for free, open source projects that receive flaw reports do not necessarily fix them, and a finder may decide to sell a vulnerability on the black market (to criminals, to their own government, to foreign governments, etc.) instead of reporting it to the owner of the code repository in a secure manner.

Although security by obscurity is hardly an excellent defense when used on its own, it is certainly helpful as one layer of a “defense in depth” security strategy.

Attack Surface Reduction

Every part of your software can be attacked: each feature, input, page, or button. The smaller your application, the smaller the attack surface; an application with four pages and 10 functions has a much smaller attack surface than one with 20 pages and 100 functions. Every part of your app that could potentially be exposed to an attacker is considered attack surface.

Attack surface reduction means removing anything from your application that is not required. For instance, a feature that is not fully implemented, with its button merely grayed out, would be an ideal starting place for a malicious actor, because it is not yet fully tested or hardened. Instead, remove this code before publishing to production and publish it only once it is finished. Hiding it is not enough; reduce your attack surface by removing that part of your code.

TIP Legacy software often has very large amounts of functionality that is not used. Removing features that are not in use is an excellent way to reduce your attack surface.

If you recall from earlier in the chapter, Alice and Bob both have medical implants, a device to measure insulin for Alice and a pacemaker for Bob. Both of their devices are “smart,” meaning they can connect to them via their smart phones. Alice’s device works over Bluetooth and Bob’s works over Wi-Fi. One way for them to reduce the attack surface of their medical devices would have been to not have gotten smart devices in the first place. However, it’s too late for that in this example. Instead, Alice could disable her insulin measuring device’s Bluetooth “discoverable” setting, and Bob could hide the SSID of his pacemaker, rather than broadcasting it.

Hard Coding

Hard coding means programming values into the code, rather than getting the values organically (from the user, a database, an API, etc.). For example, if you have created a calculator application and the user enters 4 + 4, presses Enter, and the screen shows 8, you will likely assume the calculator works. However, if the user enters 5 + 5, presses Enter, and the screen still shows 8, you may have found an instance of hard coding.

Why is hard coding a potential security issue? The reason is twofold: you cannot trust the output of the application, and the values that have been hard coded are often of a sensitive nature (passwords, API keys, hashes, etc.) and anyone with access to the source code would therefore have access to those hard-coded values. We always want to keep our secrets safe, and hard coding them into our source code is far from safe.

Hard coding is generally considered a symptom of poor software development (there are some exceptions to this). If you encounter it, you should search the entire application for hard coding, as it is unlikely the one instance you have found is unique.
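As a small illustration, here is the anti-pattern next to one safer alternative: reading the secret from the environment at runtime. The variable name PAYMENT_API_KEY is a made-up example, and in production a dedicated secret store is better still:

```python
import os

# Anti-pattern: a secret hard-coded into the source. Anyone with access
# to the code (or its repository history) now has the secret.
HARD_CODED_KEY = "EXAMPLE-KEY-DO-NOT-SHIP"

def get_api_key():
    """Fetch the key from the environment at runtime instead.

    PAYMENT_API_KEY is a hypothetical variable name for this sketch.
    """
    key = os.environ.get("PAYMENT_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key
```

With this approach the secret never appears in source control, and rotating it requires no code change.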

Never Trust, Always Verify

If you take away only one lesson from this book, it should be this: never trust anything outside of your own application. If your application talks to an API, verify it is the correct API and that it has authority to do whatever it’s trying to do. If your application accepts data, from any source, perform validation on the data (ensure it is what you are expecting and that it is appropriate; if it is not, reject it). Even data from your own database could contain malicious input or other contamination. If a user is attempting to access an area of your application that requires special permissions, reverify they have that permission on every single page or feature they use. If a user has authenticated to your application (proven they are who they say they are), continue to verify it’s the same user you are dealing with as they move from page to page (this is called session management). Never assume that because you checked one time, everything is fine from then on; you must always verify and reverify.

NOTE We verify data from our own database because it may contain stored cross-site scripting (XSS) or other values that could damage our program. Stored XSS happens when a program does not perform proper input validation and accidentally saves an XSS attack into its database. When a user later performs an action in your application that retrieves that data, the attack launches against the user in their browser as the data is returned. It is an attack that a user is unable to protect themselves against, and it is generally considered a critical risk if found during security testing.
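One common mitigation, alongside input validation, is output encoding: escape the data when returning it to the browser so a stored payload renders as harmless text. A minimal sketch using only the Python standard library:

```python
import html

# What came back from the database may be a stored XSS payload.
stored_value = '<script>alert("xss")</script>'

# Encode it before sending it to the browser: the markup characters are
# replaced with HTML entities, so it displays as text instead of running.
safe_output = html.escape(stored_value)
print(safe_output)
# → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Real web frameworks usually provide this encoding in their templating layers; the point is that untrusted data gets encoded on the way out, every time.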

Quite often developers forget this lesson and assume trust due to context. For instance, you have a public-facing internet application, and you have extremely tight security on that web app. That web app calls an API (#1) within your network (behind the firewall) all the time, which then calls another API (#2) that changes data in a related database. Often developers don’t bother authenticating (proving identity) to the first API or have the API (#1) verify the app has authorization to call whatever part of the API it’s calling. If they do, however, in this situation, they often perform security measures only on API #1 and then skip doing it on API #2. This results in anyone inside your network being able to call API #2, including malicious actors who shouldn’t be there, insider threats, or even accidental users (Figure 1-7).

Figure 1-7: Example of an application calling APIs and when to authenticate
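A minimal sketch of the fix: every API, including internal API #2, performs its own authorization check instead of trusting that the caller is inside the firewall. The shared token below is hypothetical; real systems would use a secrets manager, signed tokens, or mutual TLS:

```python
import hmac

# Hypothetical shared service token for this sketch; in practice it would
# come from a secrets manager, never from source code.
SERVICE_TOKEN = "example-service-token"

def is_authorized(headers):
    """Check the caller's credential. Every API performs this itself,
    rather than assuming "behind the firewall" means "trusted"."""
    presented = headers.get("Authorization", "")
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(presented, "Bearer " + SERVICE_TOKEN)
```

With this check in both API #1 and API #2, a malicious actor who gets inside the network still cannot call API #2 directly.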

Here are some examples:

A website is vulnerable to stored cross-site scripting, and an attacker uses this to store an attack in the database. If the web application validates the data from the database, the stored attack would be unsuccessful when triggered.

A website charges for access to certain data, which it gets from an API. If a user knows the API is exposed to the internet, and the API does not validate that whoever is calling it is allowed to use it (authentication and authorization), the user can call the API directly and get the data without paying. This would be malicious use of the website; it’s theft.

A regular user of your application is frustrated and pounds on the keyboard repeatedly, accidentally entering much more data than they should have into your application. If your application is validating the input properly, it would reject it if there is too much. However, if the application does not validate the data, perhaps it would overload your variables or be submitted to your database and cause it to crash. When we don’t verify that the data we are getting is what we are expecting (a number in a number field, a date in a date field, an appropriate amount of text, etc.), our application can fall into an unknown state, which is where we find many security bugs. We never want an application to fall into an unknown state.
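The examples above all come down to the same habit: validate type, range, and length, and reject anything unexpected. A minimal sketch (the specific limits chosen here are arbitrary examples):

```python
# Reject anything that is not what we expect — a number in a number
# field, an appropriate amount of text, and so on.
MAX_COMMENT_LENGTH = 500  # arbitrary limit for this sketch

def validate_quantity(raw):
    """Accept only a whole number between 1 and 99."""
    if not raw.isdigit():
        raise ValueError("quantity must be a number")
    value = int(raw)
    if not 1 <= value <= 99:
        raise ValueError("quantity out of range")
    return value

def validate_comment(text):
    """Reject oversized input before it can overload variables or
    reach the database."""
    if len(text) > MAX_COMMENT_LENGTH:
        raise ValueError("comment too long")
    return text
```

Rejecting bad input at the boundary keeps the application out of the unknown states where so many security bugs live.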

Usable Security

If security features make your application difficult to use, users will find a way around it or go to your competitor. There are countless examples online of users creatively circumventing inconvenient security features; humans are very good at solving problems, and we don’t want security to be the problem.

The answer to this is creating usable security features. While turning the internet off would certainly make all our applications safer, it is obviously an unproductive way to protect anyone from threats on the internet. We need to be creative ourselves and find ways to make the easiest way to do something also the most secure way to do it.

Examples of usable security include:

Allowing a fingerprint, facial recognition, or pattern to unlock your personal device instead of a long and complicated password.

Teaching users to create passphrases (a sentence or phrase that is easy to remember and type) rather than enforcing complexity rules (ensuring a special character, number, and lower- and uppercase letters are used, etc.). This increases entropy, making it more difficult for malicious actors to break the password, while also making it easier for users.

Teaching users to use password managers, rather than expecting them to create and remember 100+ unique passwords for all of their accounts.
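The passphrase advice above can be backed up with a little arithmetic. Assuming every character or word is chosen uniformly at random (real human choices are weaker than this), entropy is length × log₂(pool size):

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` choices from a pool of `pool_size`,
    assuming each choice is uniformly random."""
    return length * math.log2(pool_size)

# An 8-character "complex" password drawn from ~72 symbols, versus a
# 4-word passphrase drawn from a 7,776-word (Diceware-style) list:
complex_password = entropy_bits(72, 8)
passphrase = entropy_bits(7776, 4)

print(round(complex_password, 1), round(passphrase, 1))  # → 49.4 51.7
```

The four easy-to-remember words slightly beat the hard-to-remember symbol soup, and each additional word adds another ~12.9 bits.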

Examples of users getting around security measures include:

Users tailgating at secure building entrances (following closely while someone enters a building so that they do not need to swipe to get in).

Users turning off their phones, entering through a scanner meant to detect transmitting devices, then turning it back on once in the secure area where cell phones are banned.

Using a proxy service to visit websites that are blocked by your workplace network.

Taking a photo of your screen to bring a copyright image or sensitive data home.

Using the same password over and over but incrementing the last number of it for easy memory. If your company forces users to reset their password every 90 days, there’s a good chance there are quite a few passwords in your org that follow the format currentSeason_currentYear.

Factors of Authentication

Authentication is proving that you are indeed the real, authentic, you, to a computer. A “factor” of authentication is a method of proving who you are to a computer. Currently there are only three different factors: something you have, something you are, and something you know:

Something you have could be a phone, computer, token, or your badge for work. Something that should only ever be in your possession.

Something you are could be your fingerprint, an iris scan, your gait (the way you walk), or your DNA. Something that is physically unique to you.

Something you know could be a password, a passphrase, a pattern, or a combination of several pieces of information (often referred to as security questions), such as your mother’s maiden name, your date of birth, and your social insurance number. The idea is that it is something that only you would know.

When we log in to accounts online with only a username and password, we are using only one “factor” of authentication, which is significantly less secure than using two or more factors. When accounts are broken into or data is stolen, it is often because only one factor of authentication was protecting the account. Using more than one factor of authentication is usually referred to as multi-factor authentication (MFA), two-factor authentication (2FA), or two-step login. We will refer to it as MFA from now on in this book.

TIP Security questions are passé. It is simple to look up the answers to most security questions on the internet by performing Open Source Intelligence Gathering (OSINT). Do not use security questions as a factor of authentication in your software; they are too easily circumvented by attackers.

When credentials (usernames with corresponding passwords) are stolen and used maliciously to break into accounts, users that have a second factor of authentication are protected; the attacker will not have the second factor of authentication and therefore will be unable to get in. When someone tries to brute force (using a script to automatically try every possible option, very quickly) a system or account that has MFA enabled, even if they eventually get the password, they won’t have the second factor in order to get in. Using a second factor makes your online accounts significantly more difficult to break into.

Examples of MFA include:

Multi-factor: Entering your username and password, then having to use a second device or physical token to receive a code to authenticate. The username and password are one factor (something you know) and using a second device is the second factor (something you have).

Not multi-factor: A username and a password. These are two examples of the same factor; they are both something that you know. Multi-factor authentication means that you have more than one of the different types of factors of authentication, not one or more of the same factor.

Not multi-factor: Using a username and password, and then answering security questions. These are two of the same factor, something you know.

Multi-factor: Username and password, then using your thumbprint.

NOTE Many in the information security industry disagree as to whether using your phone to receive an SMS (text message) with a PIN code is a “good” implementation of MFA, as there are known security flaws within the SMS protocol and some implementations of it. It is my opinion that having a “pretty-darn-good second factor,” rather than having only one factor, is better. Whenever possible, however, ask users to use an authentication application instead of SMS text messages as the second factor.
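Authentication apps typically implement TOTP (time-based one-time passwords, RFC 6238): the server and the app share a secret, and each independently derives the same short-lived code from the current time. A compact sketch using only the Python standard library, checked against the RFC’s published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive an RFC 6238 TOTP code from a base32-encoded shared secret.

    The code changes every `step` seconds; server and authenticator app
    compute it independently, so nothing secret crosses the wire.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
```

Because the code expires within seconds, a stolen password alone is no longer enough to get in — which is exactly the point of a second factor.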

Exercises

These exercises are meant to help you understand the concepts in this chapter. Write out the answers and see which ones you get stuck on. If you have trouble answering some of the questions, you may want to reread the chapter. Every chapter will have exercises like these at the end. If there is a term you are unfamiliar with, look it up in the glossary at the end of the book; that may help with your understanding.

If you have a colleague or professional mentor who you can discuss the answers with, that would be the best way to find out if you are right or wrong, and why. Some of the answers are not Boolean (true/false) and are just to make you contemplate the problem.

Bob sets the Wi-Fi setting on his pacemaker to not broadcast the name of his Wi-Fi. What is this defensive strategy called?

Name an example of a value that could be hard coded and why. (What would be the motivation for the programmer to do that?)

Is a captcha usable security? Why or why not?

Give one example of a good implementation of usable security.

When using information from URL parameters, do you need to validate that data? Why or why not?

If an employee learns a trade secret at work and then sells it to a competitor, this breaks which part(s) of CIA?

If you buy a “smart” refrigerator and connect it to your home network, then have a malicious actor connect to it and change the settings so that it’s slightly warmer and your milk goes bad, which part(s) of CIA did they break?

If someone hacks your smart thermostat and turns off your heat, which part(s) of CIA did they break?