Cloud Native Security - Chris Binnie - E-Book


Description

Explore the latest and most comprehensive guide to securing your Cloud Native technology stack. Cloud Native Security delivers a detailed study into minimizing the attack surfaces found on today's Cloud Native infrastructure. Throughout the work, hands-on examples walk through mitigating threats and the areas of concern that need to be addressed. The book contains the information that professionals need in order to build the diverse mix of niche knowledge required to harden Cloud Native estates.

The book begins with more accessible content about understanding Linux containers and container runtime protection before moving on to more advanced subject matter such as advanced attacks on Kubernetes. You'll also learn about:

* Installing and configuring multiple types of DevSecOps tooling in CI/CD pipelines

* Building a forensic logging system that can provide exceptional levels of detail, suited to busy containerized estates

* Securing the most popular container orchestrator, Kubernetes

* Hardening cloud platforms and automating security enforcement in the cloud using sophisticated policies

Perfect for DevOps engineers, platform engineers, security professionals, and students, Cloud Native Security will earn a place in the libraries of all professionals who wish to improve their understanding of modern security challenges.


Page count: 452

Publication year: 2021




Table of Contents

Cover

Title Page

Introduction

Meeting the Challenge

A Few Conventions

Companion Download Files

How to Contact the Publisher

Part I: Container and Orchestrator Security

CHAPTER 1: What Is A Container?

Common Misconceptions

Container Components

Kernel Capabilities

Other Containers

Summary

CHAPTER 2: Rootless Runtimes

Docker Rootless Mode

Running Rootless Podman

Summary

CHAPTER 3: Container Runtime Protection

Running Falco

Configuring Rules

Summary

CHAPTER 4: Forensic Logging

Things to Consider

Salient Files

Breaking the Rules

Key Commands

The Rules

Parsing Rules

Monitoring

Ordering and Performance

Summary

CHAPTER 5: Kubernetes Vulnerabilities

Mini Kubernetes

Options for Using kube-hunter

Container Deployment

Inside Cluster Tests

Minikube vs. kube-hunter

Getting a List of Tests

Summary

CHAPTER 6: Container Image CVEs

Understanding CVEs

Trivy

Exploring Anchore

Clair

Summary

Part II: DevSecOps Tooling

CHAPTER 7: Baseline Scanning (or, Zap Your Apps)

Where to Find ZAP

Baseline Scanning

Scanning Nmap's Host

Adding Regular Expressions

Summary

CHAPTER 8: Codifying Security

Security Tooling

Installation

Simple Tests

Example Attack Files

Summary

CHAPTER 9: Kubernetes Compliance

Mini Kubernetes

Using kube-bench

Troubleshooting

Automation

Summary

CHAPTER 10: Securing Your Git Repositories

Things to Consider

Installing and Running Gitleaks

Installing and Running GitRob

Summary

CHAPTER 11: Automated Host Security

Machine Images

Idempotency

Secure Shell Example

Kernel Changes

Summary

CHAPTER 12: Server Scanning With Nikto

Things to Consider

Installation

Scanning a Second Host

Running Options

Command-Line Options

Evasion Techniques

The Main Nikto Configuration File

Summary

Part III: Cloud Security

CHAPTER 13: Monitoring Cloud Operations

Host Dashboarding with NetData

Cloud Platform Interrogation with Komiser

Summary

CHAPTER 14: Cloud Guardianship

Installing Cloud Custodian

More Complex Policies

IAM Policies

S3 Data at Rest

Generating Alerts

Summary

CHAPTER 15: Cloud Auditing

Runtime, Host, and Cloud Testing with Lunar

AWS Auditing with Cloud Reports

CIS Benchmarks and AWS Auditing with Prowler

Summary

CHAPTER 16: AWS Cloud Storage

Buckets

Native Security Settings

Automated S3 Attacks

Storage Hunting

Summary

Part IV: Advanced Kubernetes and Runtime Security

CHAPTER 17: Kubernetes External Attacks

The Kubernetes Network Footprint

Attacking the API Server

Attacking etcd

Attacking the Kubelet

Summary

CHAPTER 18: Kubernetes Authorization with RBAC

Kubernetes Authorization Mechanisms

RBAC Overview

RBAC Gotchas

Auditing RBAC

Summary

CHAPTER 19: Network Hardening

Container Network Overview

Restricting Traffic in Kubernetes Clusters

CNI Network Policy Extensions

Summary

CHAPTER 20: Workload Hardening

Using Security Context in Manifests

Mandatory Workload Security

PodSecurityPolicy

PSP Alternatives

Summary

Index

Copyright

About the Authors

About the Technical Editor

End User License Agreement

List of Tables

Chapter 1

Table 1.1: Common Container Components

Chapter 2

Table 2.1: Rootless Mode Limitations and Restrictions

Chapter 4

Table 4.1: Actions for auditd When Disks Are Filling Up Rapidly

Table 4.2: The Different Permissions You Can Apply

Table 4.3: List Options Available for fork and clone Syscalls

Table 4.4: Options for audit_set_failure

Chapter 5

Table 5.1: Deployment Methods for kube-hunter

Table 5.2: Scanning Options That You Can Try in kube-hunter

Table 5.3: Hunting Modes in kube-hunter

Chapter 6

Table 6.1: Policy Matching Criteria That Anchore Can Use Within Its Policies

Table 6.2: The Policies Available from the Policy Hub

Chapter 7

Table 7.1: ZAP Builds Available via Docker

Chapter 8

Table 8.1: Using Tags in Gauntlt to Get More or Less Results

Chapter 12

Table 12.1: Interactive Options for Nikto While It's Running

Table 12.2: IDS Evasion Capabilities Courtesy of Libwhisker

Table 12.3: Nikto Offers “Mutation” Technique Options, Too

Table 12.4: Tuning Options Within Nikto

Chapter 15

Table 15.1: The Many Areas of Coverage That Lunar Offers

Chapter 16

Table 16.1: Public Access Settings for S3 Buckets and Objects

Table 16.2: Ways to List S3 Buckets in S3Scanner

List of Illustrations

Chapter 1

Figure 1.1: How virtual machines and containers reside on a host

Chapter 5

Figure 5.1: The excellent kube-hunter has found Kubernetes components but is...

Figure 5.2: We need the vulnerability IDs so that we can look up more detail...

Figure 5.3: Looking up KHV002 in the Knowledge Base offers more detail.

Figure 5.4: An internal view of Minishift is a slight improvement over k3s's...

Chapter 6

Figure 6.1: The Common Vulnerability Scoring System

Figure 6.2: Trivy's assessment of the latest nginx container image

Figure 6.3: Older versions of images tend to flag more issues, as you'd expe...

Figure 6.4: Anchore is up, courtesy of Docker Compose.

Figure 6.5: Only 2 medium-ranked CVEs have been found by Anchore, but 52 low...

Figure 6.6: Harbor has the excellent Clair CVE scanner built-in.

Figure 6.7: Different scanning results again for the nginx container image

Figure 6.8: Harbor lets you inspect the layers of your images with ease.

Chapter 7

Figure 7.1: A combination of Docker and Webswing means that running ZAP with...

Figure 7.2: A redacted HTML report from a baseline scan

Figure 7.3: A trimmed screenshot of the HTML report after scanning Nmap’s ho...

Chapter 10

Figure 10.1: Fine-grained permissions from GitHub via personal access tokens...

Figure 10.2: GitRob initializing and beginning to scan all repositories belo...

Chapter 11

Figure 11.1: The Ansible directory structure, courtesy of the tree command

Chapter 12

Figure 12.1: Even an HTTP 403 is revealing.

Chapter 13

Figure 13.1: The start of the Netdata installation process

Figure 13.2: Netdata has completed its installation successfully.

Figure 13.3: The top of the dashboard

Figure 13.4: Networking information showing the docker0 network interface

Figure 13.5: The cpuidle dashboard to show how quiet your CPU cores are

Figure 13.6: Temperature metrics can be useful for on-premises hosts that ha...

Figure 13.7: The splash screen for Komiser made available by our container

Figure 13.8: A billing summary per-service plus outstanding support tickets...

Figure 13.9: Checking running instances is useful not just for costs but str...

Figure 13.10: Lambda functions aren't forgotten about in Komiser.

Figure 13.11: Potentially costly utilized network resource in an AWS region...

Chapter 14

Figure 14.1: Cloud Custodian courtesy of the Python installation route

Figure 14.2: In the AWS Console or programmatically, add a tag to an EC2 ins...

Figure 14.3: Highly permissive EC2 policy for our first test policy in Cloud...

Figure 14.4: We have stopped our instance successfully using a policy.

Chapter 15

Figure 15.1: Some of the permissions that your user/role will need in AWS, b...

Figure 15.2: The start of the Cloud Reports build process, courtesy of Node....

Figure 15.3: The end of the build process

Figure 15.4: The IAM policy is very permissive, even as read-only, so be sur...

Figure 15.5: Check your progress via the Last Used column in IAM for your us...

Figure 15.6: HTML output after using the -f html switch, with the AWS accoun...

Figure 15.7: A relatively empty region in the AWS account still produced 16 ...

Figure 15.8: Prowler needs two IAM policies attached to an IAM user or role....

Figure 15.9: Prowler is firing up and ready to scan a (redacted) AWS account...

Chapter 16

Figure 16.1: You should only give S3 Read access to S3 Inspector for obvious...

Figure 16.2: Redacted output from the same results as Listing 16.1, focusing...

Figure 16.3: The top-level listing in the AWS Console of S3 buckets reminds ...

Figure 16.4: There are relatively new Edit Public Access Settings options no...

Figure 16.5: GrayhatWarfare is an excellent resource for learning about stor...

Figure 16.6: Public files discovered in S3 buckets

Chapter 18

Figure 18.1: Rakkess output

Figure 18.2: Rakkess output for the certificate-controller account

Figure 18.3: kubectl-who-can get secrets

Figure 18.4: Example of rback output

Chapter 19

Figure 19.1: Traffic flow in the base Kubernetes cluster

Figure 19.2: Network traffic after default deny policies applied

Figure 19.3: Network traffic after allow-webapp-access policy added

Chapter 20

Figure 20.1: PodSecurityPolicies


Cloud Native Security

 

Chris Binnie

Rory McCune

Introduction

There is little doubt that we have witnessed a dramatic and notable change in the way that software applications are developed and deployed in recent years.

Take a moment to consider what has happened within the last decade alone. Start with the mind-blowing levels of adoption of containers, courtesy of Docker's clever packaging of Linux container technologies. Think of the pivotal maturation of cloud platforms with their ever-evolving service offerings. Remember the now-pervasive use of container orchestrators to herd multiple catlike containers. And do not forget that software applications have been teased apart and broken down into portable, microservice-sized chunks.

Combined, these significant innovations have empowered developers by offering them a whole new toolbox with which their software can be developed and a reliable platform upon which their applications can be deployed.

Hand in hand with other recent milestone innovations in computing, such as the growth of Unix-like operating systems and the birth of the web and the internet as a whole, Cloud Native technologies have already achieved enough to merit a place in the history books. However, as with all newly formed tech, different types of security challenges surface and must be addressed in a timely fashion.

Cloud Native security is a complex, multifaceted topic to understand and even harder to get right. Why is that? The answer lies with the multiple, diverse components that need to be secured. The cloud platform, the underlying host operating system, the container runtime, the container orchestrator, and then the applications themselves each require specialist security attention.

Bear in mind, too, that securing and then monitoring the critical nuts and bolts of a tech stack needs to happen 24 hours a day, all year round. For those working in security who are unaccustomed to Cloud Native technologies, that limited breadth of exposure can make the challenge they are suddenly faced with a real eye-opener.

Among the more advanced attackers, there are many highly adaptive, intelligent, and ultimately extremely patient individuals with a vast amount of development and systems experience who have the ability to pull off exceptional compromises, including those of the highest-profile online services. These individuals, who may also be well-funded, are extremely difficult to keep out of a cloud estate. Only by continually plugging every security hole, with multiple layers of defense, is it possible to hope to do so. They are the attackers, however, that can usually be kept at arm's length. At the highest level, so-called nation-state attackers are advanced enough that many security teams would struggle to even identify if a compromise had been successful.

Insomnia-inducing concerns aside, the good news is that it is possible to significantly increase the effort an attacker must expend to successfully exploit a vulnerability. This can be achieved by using a combination of open source tools and by shifting security to the left in the software lifecycle, empowering developers with greater visibility of threats and therefore giving them more responsibility for the code that makes it into production.

Shifting security to the left, as championed by DevSecOps methodologies, is a worthwhile pursuit, especially when coupled with the introduction of security logic gates into CI/CD pipelines that determine whether to pass or fail software builds. Combined with multiple build tests, wherever they might be needed within the software lifecycle, this approach is highly effective and has grown rapidly in popularity.

Meeting the Challenge

The authors of Cloud Native Security have both worked in the technology and security space for more than 20 years and approach such challenges from different perspectives. For that reason, this book is divided into four distinct sections that together will arm the reader with enough security tooling knowledge, coupled with niche know-how, to improve the security posture of any Cloud Native infrastructure.

The key areas explored in detail within this book are the high-level building blocks already mentioned in the introduction. Part I focuses on container runtime and orchestrator security, Part II on DevSecOps tooling, Part III on the securing and monitoring of cloud platforms, and finally Part IV looks at advanced Kubernetes security.

There is ostensibly less Linux hardening information in this book than coverage of the other facets, because Linux is more mature than the other components in a Cloud Native stack, fast approaching its 30th birthday. However, it would be unfair not to mention that almost every component involved with Cloud Native technologies starts with Linux in one shape or form. It is an often-overlooked cornerstone of security in this space. For that reason, a chapter is dedicated to ensuring that the very best advice, based on industry consensus, is employed when it comes to using automation to harden Linux.

Today's popular cloud platforms are unquestionably each different, but the security skills required to harden them can be transposed from one to another with a little patience. Amazon Web Services (AWS) is still the dominant cloud provider, so this book focuses on AWS; readers working on other cloud platforms, however, will find enough context to work with them in a similar manner. From a Linux perspective, the hands-on examples use Debian derivatives, but equally other Linux distributions will match closely to the examples shown.

Coverage of container security issues often focuses, incorrectly, on static container image analysis alone; within this book, however, readers will find that information relating to container runtime threats is cleanly separated from orchestrator threats for much greater clarity.

The first three parts of this book explore concepts and technologies that are accessible to less experienced readers. The journey then continues through to the last part, where more advanced attacks on Kubernetes are delved into and the chapters are constructed to encourage the reader to absorb, and then research further into, the more complex concepts.

It is our hope that security professionals will gain the diverse mix of niche knowledge required to help secure the Cloud Native estates that they are working on. Equally, as today's developers are increasingly required to learn more about security, they too can keep abreast of the challenges that their roles will involve.

With this in mind, it has been an enjoyable experience collecting thoughts to put them down on paper. The reader's journey now begins with a look at the innards of a Linux container. Not all DevOps engineers can confidently explain what a container is from a Linux perspective. That is something that this book hopes to remedy, in the interests of security.

What Does This Book Cover?

Here's a chapter-by-chapter summary of what you will learn in Cloud Native Security:

Chapter 1: What Is A Container?

   The first chapter in Part I discusses the components that comprise a Linux container. Using hands-on examples, the chapter examines these components from a Linux system's point of view and discusses common types of containers in use today.

Chapter 2: Rootless Runtimes

   This chapter looks at the Holy Grail of running containers: doing so without using the root user. An in-depth examination of Docker's experimental rootless mode, followed by a look at Podman being run without the superuser, helps demonstrate the key differences between the runtimes.

Chapter 3: Container Runtime Protection

   This chapter looks at a powerful open source tool that can provide impressive guardrails around containers. The custom policies can be used to monitor and enforce against unwanted anomalies in a container's behavior.

Chapter 4: Forensic Logging

   This chapter examines the built-in Linux Auditing System, which can provide exceptional levels of detail. Using the auditing system, it is possible to walk, step-by-step, through logged events after an attack to understand fully how a compromise was successful. In addition, misconfigurations and performance issues can be identified with greater ease.
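To give a flavor of what that chapter works through, the auditing system is driven by simple rules files; the following is a minimal, illustrative fragment (the file name and key name here are our own choices, not taken from the book):

```
# /etc/audit/rules.d/example.rules
# Watch /etc/passwd for writes (w) and attribute changes (a),
# tagging matching events with a searchable key
-w /etc/passwd -p wa -k passwd-changes
```

Once a rule like this is loaded, matching events can typically be retrieved from the audit log with ausearch -k passwd-changes.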

Chapter 5: Kubernetes Vulnerabilities

   This chapter looks at a clever tool that uses a number of detailed checks to suggest suitable security and compliance fixes to Kubernetes clusters. Such advice can be useful for auditing both at installation time and in an ongoing fashion.

Chapter 6: Container Image CVEs

   By using the best of three Common Vulnerabilities and Exposures (CVE) scanning tools, or a combination of them, it is possible to capture a highly detailed picture of the vulnerabilities that require patching within static container images.

Chapter 7: Baseline Scanning (or, Zap Your Apps)

   This chapter is the first of Part II, “DevSecOps Tooling,” and explores the benefits of performing baseline tests within a CI/CD pipeline to highlight issues with applications.

Chapter 8: Codifying Security

   This chapter demonstrates a tool that can utilize popular attack applications using custom policies to test for vulnerabilities within newly built services and applications in CI/CD tests.

Chapter 9: Kubernetes Compliance

   This chapter details a tool that is compatible with CI/CD tests that will inspect a Kubernetes cluster using hundreds of different testing criteria and then report on suitable fixes to help with its security posture.

Chapter 10: Securing Your Git Repositories

   This chapter looks at two popular tools that help prevent secrets, tokens, certificates, and passwords from being accidentally stored within code repositories that use the git revision control system. Both are well suited to being called from within CI/CD pipelines.

Chapter 11: Automated Host Security

   This chapter explores an often-overlooked aspect of Cloud Native security, the Linux hosts themselves. By automating the hardening of hosts either once or by frequently enforcing security controls, using a configuration management tool like Ansible, it is possible to help mitigate against attackers gaining a foothold and additionally create predictable, reliable, and more secure hosts.

Chapter 12: Server Scanning With Nikto

   This chapter offers a valuable insight into a tool that will run thousands of tests against applications running on hosts in order to help improve their security posture. It can also be integrated into CI/CD pipeline tests with relative ease.

Chapter 13: Monitoring Cloud Operations

   The first chapter of Part III, “Cloud Security,” suggests solutions to the day-to-day monitoring of cloud infrastructure and how to improve Cloud Security Posture Management (CSPM). Using Open Source tools, it is quite possible to populate impressive dashboards with highly useful, custom metrics and save on operational costs at the same time.

Chapter 14: Cloud Guardianship

   This chapter examines a powerful tool that can be used to automate custom policies to prevent insecure configuration settings within a cloud environment. By gaining a clear understanding of how the tool works, you are then free to deploy some of the many examples included with the software across the AWS, Azure, and Google Cloud platforms.

Chapter 15: Cloud Auditing

   This chapter shows the installation and use of popular auditing tools that can run through hundreds of both Linux and cloud platform compliance tests, some of which are based on the highly popular CIS Benchmarks.

Chapter 16: AWS Cloud Storage

   This chapter looks at how attackers regularly steal vast amounts of sensitive data from cloud storage. It also highlights how easy it is for nefarious visitors to determine whether storage is publicly accessible and then potentially download assets from that storage. In addition, the chapter identifies a paid-for service that helps attackers do just that using automation.

Chapter 17: Kubernetes External Attacks

   This chapter is the first of Part IV, “Advanced Kubernetes and Runtime Security.” It delves deeply into API Server attacks, a common way of exploiting Kubernetes, as well as looking at other integral components of a Kubernetes cluster.

Chapter 18: Kubernetes Authorization with RBAC

   This chapter discusses the role-based access control functionality used for authorization within a Kubernetes cluster. By defining granular access controls, you can significantly restrict the levels of access permitted.
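As a brief illustration of the granularity on offer (this manifest is our own sketch, with illustrative names, rather than an example from the chapter), a namespaced Role can limit a subject to read-only access to a single resource type:

```yaml
# Illustrative Role: read-only access to Pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: webapp
  name: pod-reader
rules:
- apiGroups: [""]   # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

A RoleBinding then grants those verbs to a specific user or service account, and nothing more.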

Chapter 19: Network Hardening

   This chapter explores how networking can be targeted by attackers in a Kubernetes cluster and the modern approach to limiting applications or users moving between network namespaces.
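To illustrate the modern approach mentioned above (a minimal sketch of our own, not taken from the chapter itself), a Kubernetes NetworkPolicy that selects every Pod in a namespace and declares no ingress rules denies all incoming traffic by default:

```yaml
# Illustrative default-deny policy for one namespace: the empty
# podSelector matches all Pods; with no ingress rules listed,
# all incoming traffic to those Pods is blocked
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

More permissive policies are then layered on top to admit only sanctioned traffic.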

Chapter 20: Workload Hardening

   This chapter builds upon the knowledge learned in the earlier chapters of the book and takes a more advanced approach to the hardening of workloads in Kubernetes.

A Few Conventions

This book follows the time-honored tradition of setting coding-language keywords, modifiers, and identifiers (including URLs), when they appear in running text, in the same fixed-space font used for displayed code listings and snippets.

We have also had to make a couple of changes from what you will see on your screen to fit the format of a printed book. First, although Linux command screens typically show white type on a dark background, that scheme does not reproduce well in printed captures. For legibility, we have reversed those screens to black-on-white.

Also note that the width of a printed page does not hold as many characters as a Linux command or output line. In cases where we have had to introduce line breaks that you would not see on the screen, we have made them at meaningful points; and in the rare instances where in entering code you would need to omit a line break we have shown, we tell you so explicitly.

Companion Download Files

As you work through the examples in this book, you will see that most of them are command-line interactions, where you enter a single command and see its output. For the automated tasks demonstrated in Chapter 19, YAML files are available for download from http://www.wiley.com/go/cloudnativesecurity.

How to Contact the Publisher

If you believe you have found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but even with our best efforts an error may occur.

To submit your possible errata, please email it to our Customer Service Team at [email protected] with the subject line “Possible Book Errata Submission.”

Part I: Container and Orchestrator Security

The Cloud Native Computing Foundation, often abbreviated as the CNCF (www.cncf.io), reported in its 2020 survey that “the use of containers in production has increased to 92%, up from 84% last year, and up 300% from our first survey in 2016” and also that “Kubernetes use in production has increased to 83%, up from 78% last year.” The report (www.cncf.io/wp-content/uploads/2020/12/CNCF_Survey_Report_2020.pdf) takes note of a number of useful facts that demonstrate that the way modern applications are developed and hosted is continuing to evolve using Cloud Native technologies and methodologies. A significant component, as the survey demonstrates, involves containerization, and for that reason the first six chapters of this book explore the security of containers and container orchestrators. The final part of the book examines this topic using more advanced examples and scenarios.

In This Part

Chapter 1: What Is A Container?

Chapter 2: Rootless Runtimes

Chapter 3: Container Runtime Protection

Chapter 4: Forensic Logging

Chapter 5: Kubernetes Vulnerabilities

Chapter 6: Container Image CVEs

CHAPTER 1: What Is A Container?

Linux containers as we know them today have been realized through a series of incremental innovations, courtesy of a disparate group of protagonists. Destined for a place in the history books, containers have brought significant change to the way that modern software is now developed; this change will be intriguing to look back upon in the years ahead.

In simple terms, a container is a distinct and relatively isolated unit of code that has a specific purpose. As will be repeated later, the premise of a container is to focus on one key process (such as a web server) and its associated processes. If your web server needs to be upgraded or altered, then no other software components are affected (such as a related database container), making the construction of a technology stack more modular by design.

In this chapter, we will look at how a container is constructed and some of its fundamental components. Without this background information it is difficult to understand how to secure a containerized server estate successfully. We will start by focusing on the way software runs containers; we call that software the container runtime. We will focus on the two most prominent runtimes, Docker and Podman. An examination of the latter should also offer a valuable insight into the relatively recent advances in container runtimes.

As we work through the book, we will look at this area again from a more advanced perspective with an eye firmly on security mitigation. Purposely, rather than studying historical advances of Linux containers, this chapter focuses on identifying the components of a container that security personnel should be concerned about.

Common Misconceptions

In 2014–15, the clever packaging of system and kernel components by Docker Inc. led to an explosion of interest in Linux containers. As Docker's popularity soared, a common misconception was that containers could be treated in the same way as virtual machines (VMs). As technology evolved, this became partially true, but let us consider what that misconception involved to help illustrate some of the security challenges pertinent to containers.

As they would with most VMs, less-informed users trusted that Customer A had no access to Customer B's resources if each customer ran its own containers. This implicit trust is understandable. Hardware virtualization is often used on Linux systems, implemented with tools like the popular Kernel-based Virtual Machine, or KVM (www.linux-kvm.org), for example. Virtual machines using such technologies can run on the same physical machine and do indeed achieve significant levels of segregation, improving their security posture considerably. A white paper from VMware, a long-standing commercial brand, offers a detailed look at how this works.

www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/techpaper/vmw-white-paper-secrty-vsphr-hyprvsr-uslet-101.pdf

This type of virtualization is not to be confused with paravirtualization, utilized by software such as Xen (xenproject.org), where guest operating systems (OSs) can share hardware on a modified host OS.

NOTE

Xen is able to support hardware virtualization and paravirtualization. You can find more information on the subject here:

wiki.xen.org/wiki/Xen_Project_Software_Overview#PV_.28x86.29

In Figure 1.1 we can see the difference between containers and virtual machines. The processes shown are those relevant to a running application. Using our web server example again, one process might be running a web server listening on an HTTP port and another listening for HTTPS. As mentioned, to maintain the desired modularity, containers should service a specific single task (such as a web server). Normally, they will run one main application process, along with any required associated processes.

Figure 1.1: How virtual machines and containers reside on a host

It should be clear that a Linux container is an entirely different animal than a VM. A saying that appears to have gained popularity at Red Hat during the explosion of container popularity noted earlier is that fundamentally “containers are Linux.” One interpretation of such a statement is that if you can appreciate how a Linux system is constructed at a nuts-and-bolts level and understand how to slice up a system into small segments, each of which uses native Linux components, then you will have a reasonable chance of understanding what containers are. For a more specific understanding of where that phrase comes from, visit this Red Hat blog page that explains the motivation behind the phrase: www.redhat.com/en/blog/containers-are-linux.

From the perspective of an underlying host machine, the operating system is not only slicing memory up to share among containers, segmenting the networking stack, dividing up the filesystem, and restricting full access to the CPU; it is also hiding some of the processes that are running in the process table. How are all those aspects of a Linux system controlled centrally? Correct, via the kernel. During the massive proliferation of Docker containers, it became obvious that users did not fully appreciate how the many components hung together.

For example, the Docker runtime has been improved over time with new security features (which we look at in more detail in Chapter 2, “Rootless Runtimes”); but in older versions, it needed to run as the root user without exception. Why? It was because in order to slice up the system into suitable container-shaped chunks, superuser permissions were needed to convince the kernel to allow an application like Docker to do so.

One example scenario (which is still common to this day) that might convey why running as the root user is such a problem involves the popular continuous integration/continuous deployment (CI/CD) automation tool, Jenkins.

TIP

Security in the CI/CD software development pipeline is the subject of the chapters in Part II of this book, “DevSecOps Tooling.”

Imagine that a Jenkins job is configured to run from a server somewhere that makes use of Docker Engine to run a new container; it has built the container image from the Dockerfile passed to it. Think for a second—even the seemingly simplest of tasks such as running a container always used to need root permissions to split up a system's resources, from networking to filesystem access, from kernel namespaces to kernel control groups, and beyond. This meant you needed blind faith in the old (now infamous) password manager in Jenkins to look after the password that ran the Jenkins job. That is because as that job executed on the host, it would have root user permissions.

What better way to examine how a system views a container—which, it is worth repeating, is definitely not a virtual machine—than by using some hands-on examples?

Container Components

There are typically a number of common components on a Linux system that enable the secure use of containers, although new features, and improvements to existing kernel and system features, arrive periodically. These are Linux security features that allow containers to be bundled into a distinct unit and separated from other system resources. Thanks to such system and kernel features, most containers spawned without any nonstandard options to disable those protections have a limited impact on other containers and the underlying host. However, containers are often unwittingly run as the root user, or developers open up security features to ease their development process. Table 1.1 presents the key components.

Table 1.1: Common Container Components

COMPONENT

DESCRIPTION

Kernel namespaces

A logical partitioning of kernel resources to reduce the visibility that processes receive on a system.

Control groups

Functionality to limit usage of system resources such as I/O, CPU, RAM, and networking. Commonly called cgroups.

SElinux/AppArmor

Mandatory Access Control (MAC) for enforcing security-based access control policies across numerous system facets such as filesystems, processes, and networking. Typically, SElinux is found on Red Hat Enterprise Linux (RHEL) derivatives and AppArmor on Debian derivatives. However, SElinux is popular on both, and AppArmor appears to be in an experimental phase for RHEL derivatives such as CentOS.

Seccomp

Secure Computing (seccomp) allows the kernel to restrict numerous system calls; for the Docker perspective, see docs.docker.com/engine/security/seccomp.

Chroot

An isolation technique that uses a pseudo root directory so that processes running within the chroot lose visibility of other defined facets of a system.

Kernel capabilities

Checking and restricting all system calls; more in the next section.
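Several of the components in Table 1.1 can be inspected directly on any Linux host. Kernel namespaces, for example, are exposed under /proc/&lt;PID&gt;/ns, where each entry resolves to a type:[inode] pair; two processes share a namespace exactly when their entries resolve to the same inode. A quick sketch, requiring no container runtime at all:

```shell
# List the namespace handles for the current shell; expect entries
# such as pid, net, mnt, uts, ipc, and user on a modern kernel.
ls /proc/self/ns

# Resolve one of them; every process living in the same PID
# namespace resolves to the same pid:[inode] value.
readlink /proc/self/ns/pid
```

When a container runtime creates a container, it essentially arranges for the new process tree to receive fresh namespaces of the relevant types, which is where the reduced visibility comes from.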

Kernel Capabilities

To inspect the innards of a Linux system and how they relate to containers in practice, we need to look a little more closely at kernel capabilities. The kernel is important here because, before other security hardening techniques were introduced in later versions, Docker allowed (and still allows) users to disable certain security features and open up specific, otherwise locked-down, kernel permissions.

You can find out about Linux kernel capabilities by using the command $ man capabilities (or by visiting man7.org/linux/man-pages/man7/capabilities.7.html).

The manual explains that capabilities offer a Linux system the ability to run permission checks against each system call (commonly called a syscall) that is sent to the kernel. Syscalls are used whenever a system resource requests anything from the kernel; that could involve access to a file, memory, or another process, among many other things. The manual explains that during the usual run of events on traditional Unix-like systems, there are two categories of processes: privileged processes (belonging to the root user) and unprivileged processes (which don't belong to the root user). According to LWN.net (lwn.net/1999/1202/kernel.php3), kernel capabilities were introduced in 1999 with the v2.2 kernel. Using kernel capabilities, it is possible to finely tune how much system access a process can get without being the root user.
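The capability sets the kernel tracks for a process can be seen without extra tooling, because they are published as hex bitmasks in /proc; the capsh utility from the libcap package, if installed, can decode the masks into capability names. A minimal sketch:

```shell
# Print the inheritable, permitted, effective, and bounding capability
# sets for the current shell; a mask of all zeros marks a process with
# no capabilities at all (fully unprivileged).
grep ^Cap /proc/self/status
```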

By contrast, cgroups or control groups were introduced into the kernel in 2006 after being designed by Google engineers to enforce quotas for system resources including RAM and CPU; such limitations are also of great benefit to the security of a system when it is sliced into smaller pieces to run containers.

The problem that kernel capabilities addressed was that privileged processes bypass all kernel permission checks while all nonroot processes are run through security checks that involve monitoring the user ID (UID), group ID (GID), and any other groups the user is a member of (known as supplementary groups). The checks that are performed on processes will be against what is called the effective UID of the process. In other words, imagine that you have just logged in as a nonroot user chris and then elevate to become the root user with an su - command. Your “real UID” (your login user) remains the same; but after you elevate to become the superuser, your “effective UID” is now 0, the UID for the root user. This is an important concept to understand for security, because security controls need to track both UIDs throughout their lifecycle on a system. Clearly you don't want a security application telling you that the root user is attacking your system; instead, you need to know the “real UID”, the login user chris in this example, that elevated to become the root user. If you are ever doing work within a container, testing and changing the USER instruction in the Dockerfile that created the container image, then the id command is a helpful tool, offering output such as this so you can find out exactly which user you currently are:

uid=0(root) gid=0(root) groups=0(root)
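The real versus effective UID distinction just described can be explored with id itself: the -u switch prints the effective UID, and -ru the real one. For an ordinary login shell the two match; after privilege elevation, or inside a setuid program, they can diverge. A small sketch:

```shell
# Effective UID: the identity the kernel's permission checks use.
id -u

# Real UID: the identity of the user who originally logged in, which
# is what security tooling should record for attribution.
id -ru
```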

Even with other security controls used within a Linux system running containers, such as namespaces that segregate access between pods in Kubernetes and OpenShift or containers within a runtime, it is highly advisable never to run a container as the root user. A typical Dockerfile that prevents the root user running within the container might be created as shown in Listing 1.1.

Listing 1.1: A Simple Example Dockerfile of How to Spawn a Container as Nonroot

FROM debian:stable

USER root

RUN apt-get update && apt-get install -y iftop && apt-get clean

USER nobody

CMD bash

In Listing 1.1, the second line explicitly states that the root user is initially used to install the packages in the container image, and then the nobody user actually executes the final command. The USER root line isn't needed if you build the container image as the root user but is added here to demonstrate the change between responsibilities for each USER clearly.

Once an image is built from that Dockerfile, when that image is spawned as a container, it will run as the nobody user, with the predictable UID and GID of 65534 on Debian derivatives or UID/GID 99 on Red Hat Enterprise Linux derivatives. These UIDs or usernames are useful to remember so that you can check that the permissions within your containers are set up to suit your needs. You might need them to mount a storage volume with the correct permissions, for example.
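Rather than memorizing those IDs, you can query the user database of the base image you are working with; a quick sketch (the values in the comment reflect the Debian convention and will differ on other distributions):

```shell
# Print the username, UID, and GID fields for nobody; on a Debian
# derivative this reports nobody with UID and GID 65534.
getent passwd nobody | cut -d: -f1,3,4
```

The same check can be run inside an image before it is deployed, for example with docker run --rm debian:stable getent passwd nobody.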

Now that we have covered some of the theory, we'll move on to a more hands-on approach to demonstrate the components of how a container is constructed. In our case we will not use the dreaded --privileged option, which to all intents and purposes gives a container root permissions. Docker offers the following useful security documentation about privileges and kernel capabilities, which is worth a read to help with greater clarity in this area:

docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities

The docs describe Privileged mode as essentially enabling “…access to all devices on the host as well as [having the ability to] set some configuration in AppArmor or SElinux to allow the container nearly all the same access to the host as processes running outside containers on the host.” In other words, you should rarely, if ever, use this switch on your container command line. It is simply the least secure and laziest approach, widely abused when developers cannot get features to work. Avoiding it might mean, for example, that a volume can only be mounted onto a host's directory from a container with tightened permissions, which takes more effort but achieves a more secure outcome. Rest assured, whichever way you approach the problem there will be a solution using specific kernel capabilities, potentially coupled with other mechanisms, which means that you don't have to open the floodgates and use Privileged mode.

For our example, we will choose two of the most powerful kernel capabilities to demonstrate what a container looks like, from the inside out. They are CAP_SYS_ADMIN and CAP_NET_ADMIN (commonly abbreviated without CAP_ in Docker and kernel parlance).

The first of these enables a container to run a number of sysadmin commands to control a system in ways a root user would. The second capability is similarly powerful and can manipulate both the host's and the container's network stacks. In the Linux manual page (man7.org/linux/man-pages/man7/capabilities.7.html) you can see the capabilities afforded by these --cap-add settings within Docker.

From that web page we can see that Network Admin (CAP_NET_ADMIN) includes the following:

Interface configuration

Administration of IP firewall

Modifying routing tables

Binding to any address for proxying

Switching on promiscuous mode

Enabling multicasting

We will start our look at a container's internal components by running this command:

$ docker run -d --rm --name apache -p443:443 httpd:latest

We can now check that TCP port 443 is available from our Apache container (Apache is also known as httpd) and that the default port, TCP port 80, has been exposed, like so:

$ docker ps

IMAGE COMMAND CREATED STATUS PORTS NAMES

httpd "httpd-foreground" 36 seconds ago Up 33s 80/tcp, 443->443/tcp apache

Having seen the slightly redacted output from that command, we will now use a second container (running Debian Linux) to look inside our first container with the following command, which elevates permissions available to the container using the two kernel capabilities that we just looked at:

$ docker run --rm -it --name debian --pid=container:apache \

--net=container:apache --cap-add sys_admin debian:latest

We will come back to the contents of that command, which started a Debian container in a moment. Now that we're running a Bash shell inside our Debian container, let's see what processes the container is running, by installing the procps package:

root@0237e1ebcc85: /# apt update; apt install procps -y

root@0237e1ebcc85: /# ps -ef

UID PID PPID C STIME TTY TIME CMD

root 1 0 0 15:17 ? 00:00:00 httpd -DFOREGROUND

daemon 9 1 0 15:17 ? 00:00:00 httpd -DFOREGROUND

daemon 10 1 0 15:17 ? 00:00:00 httpd -DFOREGROUND

daemon 11 1 0 15:17 ? 00:00:00 httpd -DFOREGROUND

root 93 0 0 15:45 pts/0 00:00:00 bash

root 670 93 0 15:51 pts/0 00:00:00 ps -ef

We can see from the ps command's output that bash and ps -ef processes are present, but additionally several Apache web server processes are also shown as httpd. Why can we see them when they should be hidden? They are visible thanks to the following switch on the run command for the Debian container:

--pid=container:apache

In other words, we have full access to the apache container's process table from inside the Debian container.

Now try the following commands to see if we have access to the filesystem of the apache container:

root@0237e1ebcc85:/# cd /proc/1/root

root@0237e1ebcc85:/proc/1/root# ls

bin boot dev etc home lib lib64 media mnt opt proc root run sbin

srv sys tmp usr var

There is nothing too unusual from that directory listing. However, you might be surprised to read that what we can see is actually the top level of the Apache container filesystem and not the Debian container's. Proof of this can be found by using this path in the following ls command:

root@0237e1ebcc85:/proc/1/root# ls usr/local/apache2/htdocs

usr/local/apache2/htdocs/index.html

As suspected, there's an HTML file sitting within the apache2 directory:

root@0237e1ebcc85:/proc/1/root# cat usr/local/apache2/htdocs/index.html

<html><body><h1>It works!</h1></body></html>
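What makes this possible is the /proc/&lt;PID&gt;/root magic symlink, which the kernel resolves to the root directory as that process sees it; because we joined the Apache container's PID namespace, PID 1 here is httpd and its root is the Apache image's filesystem. The same mechanism is visible for any process you own, on any Linux host:

```shell
# Every process exposes its own view of the filesystem root; for an
# ordinary process this simply resolves to /.
readlink /proc/self/root
```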

We have proven that we have visibility of the Apache container's process table and its filesystem. Next, we will see what access this switch offers us: --net=container:apache.

Still inside the Debian container we will run this command:

root@0237e1ebcc85:/proc/1/root# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP

link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0

valid_lft forever preferred_lft forever

The slightly abbreviated output from the ip a command offers us two network interfaces, lo for the loopback interface and eth0, which has the IP address 172.17.0.2/16.

Let's exit the Debian container by pressing Ctrl+D and return to our normal system prompt to run a quick test. We named the container apache, so using the following inspect command we can view the end of the output to get the IP address for the Apache container:

$ docker inspect apache | tail -20

Listing 1.2 shows slightly abbreviated output from that command, and lo and behold in the IP Address section we can see the same IP address we saw from within the Debian container a moment ago, as shown in Listing 1.2: "IPAddress": "172.17.0.2".

Listing 1.2: The External View of the Apache Container's Network Stack

"Networks": {

"bridge": {

"IPAMConfig": null,

"Links": null,

"Aliases": null,

"NetworkID": […snip…]

"Gateway": "172.17.0.1",

"IPAddress": "172.17.0.2",

"IPPrefixLen": 16,

"IPv6Gateway": "",

"GlobalIPv6Address": "",

"GlobalIPv6PrefixLen": 0,

"MacAddress": "02:42:ac:11:00:02",

"DriverOpts": null

}

}

}

}

]
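Rather than eyeballing the JSON by hand, the address can be extracted with standard text tools (docker inspect also accepts a --format Go-template flag for the same job). A sketch using a here-document to stand in for the inspect output, so it can be tried without a running container:

```shell
# A trimmed stand-in for `docker inspect apache` output.
inspect_json=$(cat <<'EOF'
"Networks": {
    "bridge": {
        "Gateway": "172.17.0.1",
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16
    }
}
EOF
)

# Pull out the value of the IPAddress key with sed.
ip_addr=$(printf '%s\n' "$inspect_json" \
    | sed -n 's/.*"IPAddress": "\([0-9.]*\)".*/\1/p')
echo "$ip_addr"   # prints 172.17.0.2
```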

Head back into the Debian container now with the same command as earlier, shown here:

$ docker run --rm -it --name debian --pid=container:apache \
--net=container:apache --cap-add sys_admin debian:latest

To prove that the networking is fully passed across to the Debian container from the Apache container, we will install the curl command inside the container:

root@0237e1ebcc85:/# apt update; apt install curl -y

After a little patience (if you restarted the Debian container, you'll need to run apt update again before installing curl; otherwise, you can ignore that step) we can now check what the intertwined network stack means from an internal container perspective with this command:

root@0237e1ebcc85:/# curl -v http://localhost:80

<html><body><h1>It works!</h1></body></html>

And, not straight from the filesystem this time but served over the network using TCP port 80, we see the HTML file saying, “It works!”
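The sharing we have just exercised can be confirmed at the namespace level too: --net=container:apache simply places the Debian container in the Apache container's network namespace, so both resolve /proc/self/ns/net to the same inode. The comparison works for any two processes on a Linux host:

```shell
# Two processes in the same network namespace report the same
# net:[inode] value; a child shell trivially shares its parent's.
readlink /proc/self/ns/net
sh -c 'readlink /proc/self/ns/net'
```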

As we have been able to demonstrate, a Linux system does not need much encouragement to offer visibility between containers and across all the major components of a container. These examples should offer an insight into how containers reside on a host and how easy it is to potentially open security holes between containerized workloads.

Again, because containers are definitely not the same as virtual machines, security differs greatly and needs to be paid close attention to. If a container is run with excessive privileges or punches holes through the security protection offered by kernel capabilities, then not only are other containers at serious risk but the host machine itself is too. A sample of the key concerns of a “container escape” where it is possible to “break out” of a host's relatively standard security controls includes the following:

Disrupting services on any or all containers on the host, causing outages

Attacking the underlying host to cause a denial of service by triggering a stress event, with a view to exhausting available resources, whether that be RAM, CPU, disk space capacity, or I/O, for example

Deleting data on any locally mounted volumes directly on the host machine or wiping critical host directories, causing system failure

Embedding processes on a host that may act as a form of advanced persistent threat (APT), which could lie dormant for a period of time before being taken advantage of at a later date

Other Containers

A little-known fact is that serverless technologies also embrace containerization, or more accurately lightweight virtualization when it comes to AWS Lambda. Making use of KVM as mentioned earlier, AWS uses Firecracker to provide what it calls MicroVMs. When launched, AWS explicitly stated that security was its top priority and ensured that multiple levels of isolation were introduced to provide defense in depth. From a performance perspective, remarkably the MicroVMs can apparently start up in about an eighth of a second. An active Open Source project, Firecracker is an intriguing technology:

github.com/firecracker-microvm/firecracker

As mentioned earlier, the security model is a familiar one, according to the AWS site: “The Firecracker process is jailed using cgroups and seccomp BPF, and has access to a small, tightly controlled list of system calls.”

Apparently, at least according to this page on the AWS forums (forums.aws.amazon.com/thread.jspa?threadID=263968), there are restrictions applied to the containerized service, such as limitations on various kernel capabilities. These are dropped for security purposes and include the likes of ptrace, which allows the monitoring of, and potentially the control of, other processes. Other more obvious services, such as SMTP, are disallowed to prevent spam from leaving a function. And removing the CAP_NET_RAW capability makes it impossible to spoof IP addresses or use raw sockets for capturing traffic.

Another approach to running containers in a more secure fashion is to lean on hardware virtualization to a greater degree. One of the earlier pioneers of containerization was CoreOS (known for a number of other products, such as etcd, which is prevalent in most modern Kubernetes distributions). They created a container runtime called rkt (pronounced “rock-it”), which is sadly now deprecated. The approach from rkt was to make use of KVM as a hypervisor. The premise (explained at coreos.com/rkt/docs/latest/running-kvm-stage1.html) was to use KVM, which provides efficient hardware-level virtualization, to spawn containers rather than systemd-nspawn (wiki.debian.org/nspawn), which can create a slim namespaced container. The sophisticated rkt offered what might be called hard tenancy between containers. This strict isolation enabled true protection for Customer B if Customer A was compromised; and although containers are, again, not virtual machines, rkt bridged a gap where previously few other security innovations had succeeded.

A modern approach being actively developed, similar to that of rkt, is called Kata Containers (katacontainers.io) via the OpenStack Foundation (OSF). The marketing strapline on the website confidently declares that you can achieve the “speed of containers” and still have the “security of VMs.” In a similar vein to rkt, MicroVMs are offered via an Open Source runtime. By using hardware virtualization, the isolation of containerized workloads can be comfortably assured. This post from Red Hat about SElinux alterations for Kata Containers is informative: www.redhat.com/sysadmin/selinux-kata-containers. Its customers apparently include internet giants such as Baidu, which uses Kata Containers in production, and you are encouraged to investigate the offering further.

Finally, following a slight tangent, another interesting addition to this space is courtesy of AWS, which, in 2020, announced the general availability of an Open Source Linux distribution called Bottlerocket (aws.amazon.com/bottlerocket). This operating system is designed specifically to run containers with improved security. The premise for the operational side of Bottlerocket is that creating a distribution that contains only the minimal files required for running containers reduces the attack surface significantly. Coupled with SElinux, to increase isolation between containers and the underlying host, the usual suspects are present too: cgroups, namespaces, and seccomp. There is also device-mapper functionality, via dm-verity, that provides integrity checking of block devices to reduce the chances of advanced persistent threats taking hold. While time will tell if Bottlerocket proves to be popular, it is an interesting development that should be watched.

Summary

In this chapter, we looked at some of the key concepts around container security and how the Linux kernel developers have added a number of features over the years to help protect containerized workloads, along with contributions from commercial entities such as Google.

We then looked at some hands-on examples of how a container is constructed and how containers are ultimately viewed from a system's perspective. Our approach made it easy to appreciate how any kind of privilege escalation can lead to unwanted results for other containers and critically important system resources on a host machine.

Additionally, we saw that the final USER instruction in a Dockerfile should never be left as root within a container and how a simple Dockerfile can be constructed securely if permissions are set correctly for resources, using some forethought. Finally, we noted that other technologies such as serverless also use containerization for their needs.

CHAPTER 2Rootless Runtimes

In Chapter 1, “What Is A Container?,” we looked at the components that make up a container and how a system is sliced up into segments to provide isolation for the standard components that Linux usually offers.