Practical Threat Detection Engineering

A hands-on guide to planning, developing, and validating detection capabilities

Megan Roddie

Jason Deyalsingh

Gary J. Katz

BIRMINGHAM—MUMBAI

Practical Threat Detection Engineering

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Pavan Ramchandani

Publishing Product Manager: Neha Sharma

Senior Content Development Editor: Adrija Mitra

Technical Editor: Rajat Sharma

Copy Editor: Safis Editing

Project Coordinator: Sean Lobo

Proofreader: Safis Editing

Indexer: Tejal Soni

Production Designer: Ponraj Dhandapani

Marketing Coordinator: Marylou De Mello

First published: July 2023

Production reference: 1230623

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80107-671-5

www.packtpub.com

First, thank you to my parents, Geraldine and Michael Roddie, who have consistently pushed me toward success, for supporting me in all aspects of life, no matter how many times I drive them crazy. To my fiancé, Kelvin Clay Fonseca, who unexpectedly came into my life and constantly reminds me of my capabilities, giving me the confidence to face challenges head-on. Finally, thank you to all my present and previous employers and colleagues who gave me amazing opportunities and put me on a path that led me to write this book.

– Megan Roddie

Thank you first to my dad, Sheldon Katz, who taught me to be an inquiring engineer, never complicating technical explanations unnecessarily, and by teaching science wherever it was needed, whether on a food court napkin or late-night studying. To my mom, Ruth Katz, whose positive impact on students, and me, will continue to be relevant in our lives long after the technology in this book becomes dated. Most importantly, I want to thank my amazing wife, Heather, who supported me as I filled each weekend morning at the coffee shop writing this book. To the OneDo coffee shop in Baltimore, which provides a place for so many people to work on their pet projects, study, and get a great cup of coffee. To my coffee shop companion and friend, Katie Walsh, whose encouragement while writing this book was greatly appreciated. To my… okay, I should probably stop.

– Gary Katz

I’d like to express my sincere appreciation to everyone who contributed to this effort. Thanks for the work, thanks for your help, thanks for your guidance, thanks for listening, thanks for the motivation, and thanks for the lulz. Finally, thanks to my family and friends for being patient with me...you might need to continue being patient with me for the foreseeable future! :-/

– Jason Deyalsingh

Contributors

About the authors

Megan Roddie is an experienced information security professional with a diverse background ranging from incident response to threat intelligence to her current role as a detection engineer. Additionally, Megan is a course author and instructor with the SANS Institute where she regularly publishes research on cloud incident response and forensics. Outside of the cyber security industry, Megan trains and competes as a high-level amateur Muay Thai fighter in Austin, TX.

I would like to thank my parents, Geraldine and Mike, for a lifetime of love and support and for always pushing me toward greatness. I would also like to thank my fiancé, Kelvin, for being my biggest cheerleader throughout this process. Finally, thank you to everyone who has played a role in my career thus far, ultimately leading me to write this book.

Jason Deyalsingh is an experienced consultant with over nine years of experience in the cyber security space. He has spent the last 5 years focused on digital forensics and incident response (DFIR). His current hobbies include playing with data and failing to learn Rust.

To the cyber security community and everyone I ever worked with directly or indirectly. Keep crushing it!

Gary J. Katz is still trying to figure out what to do with his life while contemplating what its purpose really is. While not spiraling into this metaphysical black hole compounded by the plagues and insanity of this world, he sometimes thinks about cyber security problems and writes them down. These ruminations are, on occasion, captured in articles and books.

To those who read a technical book from cover to cover, I admire you, even if I could never be one of you.

About the reviewers

Dr. Chelsea Hicks (she/they) has worked in cyber security for more than 10 years, with the last 5 years being focused on threat hunting and incident response. Dr Hicks also has previous experience with machine learning, scripting, and infrastructure building. Dr Hicks received their BSc in computer science, MSc in information technology, and PhD in information technology from the University of Texas at San Antonio. You can find Dr Hicks as the social media coordinator for BSidesSATX, and presenting at events such as SANS Blue Team Summit, Blue Team Village, and Texas Cyber Summit.

Obligatory remark: Nothing I reviewed or provided feedback on in this book represents my employers; it is 100% based on my opinion and thoughts outside of work.

I’d like to thank my family and friends – especially my spouse and mom, who always encourage me and understand the time and commitment it takes to try to stay sharp in this field. I’d also like to thank them for keeping me grounded and making sure I maintain my work-life balance. Thank you to the amazing DFIR and threat-hunting fields for your supportiveness in helping these fields grow – especially folks like hacks4pancakes and shortxstack!

Terrence Williams has worked in the cyber security space for nearly 10 years, with 4 years focused on cloud forensics, incident response, and detection engineering. He began his career in the US Marine Corps, developing skills in hunting advanced persistent threats. Terrence received his BSc in computer science from Saint Leo University and is currently pursuing an MSc in computer science from Vanderbilt University. He has honed his cyber security skills through his roles at Amazon Web Services, Meta Platforms, and Google. Additionally, he serves as a SANS Instructor for the FOR509 course, which focuses on enterprise forensics and incident response.

Rod Soto is a security researcher and co-founder of HackMiami and Pacific Hackers. He was the winner of the 2012 BlackHat Las Vegas CTF competition and the Red Alert ICS CTF contest at DEFCON 2022. He is the founder and lead developer of the Kommand && KonTroll/NOQRTR CTF competitive hacking tournament series.

Table of Contents

Preface

Part 1: Introduction to Detection Engineering

1

Fundamentals of Detection Engineering

Foundational concepts

The Unified Kill Chain

The MITRE ATT&CK framework

The Pyramid of Pain

Types of cyberattacks

The motivation for detection engineering

Defining detection engineering

Important distinctions

The value of a detection engineering program

The need for better detection

The qualities of good detection

The benefits of a detection engineering program

A guide to using this book

The book's structure

Practical exercises

Summary

2

The Detection Engineering Life Cycle

Phase 1 – Requirements Discovery

Characteristics of a complete detection requirement

Detection requirement sources

Exercise – understanding your organization’s detection requirement sources

Phase 2 – Triage

Threat severity

Organizational alignment

Detection coverage

Active exploits

Phase 3 – Investigate

Identify the data source

Determine detection indicator types

Research

Establish validation criteria

Phase 4 – Develop

Phase 5 – Test

Types of test data

Phase 6 – Deploy

Summary

3

Building a Detection Engineering Test Lab

Technical requirements

The Elastic Stack

Deploying the Elastic Stack with Docker

Configuring the Elastic Stack

Setting up Fleet Server

Installing and configuring Fleet Server

Additional configurations for Fleet Server

Adding a host to the lab

Elastic Agent policies

Building your first detection

Additional resources

Summary

Part 2: Detection Creation

4

Detection Data Sources

Technical requirements

Understanding data sources and telemetry

Raw telemetry

Security tooling

MITRE ATT&CK data sources

Identifying your data sources

Looking at data source issues and challenges

Completeness

Quality

Timeliness

Coverage

Exercise – understanding your data sources

Adding data sources

Lab – adding a web server data source

Summary

Further reading

5

Investigating Detection Requirements

Revisiting the phases of detection requirements

Discovering detection requirements

Tools and processes

Exercise – requirements discovery for your organization

Triaging detection requirements

Threat severity

Organizational alignment

Detection coverage

Active exploits

Calculating priority

Investigating detection requirements

Summary

6

Developing Detections Using Indicators of Compromise

Technical requirements

Leveraging indicators of compromise for detection

Example scenario – identifying an IcedID campaign using indicators

Scenario 1 lab

Installing and configuring Sysmon as a data source

Detecting hashes

Detecting network-based indicators

Lab summary

Summary

Further reading

7

Developing Detections Using Behavioral Indicators

Technical requirements

Detecting adversary tools

Example scenario – PsExec usage

Detecting tactics, techniques, and procedures (TTPs)

Example scenario – mark of the web bypass technique

Summary

8

Documentation and Detection Pipelines

Documenting a detection

Lab – documenting a detection

Exploring the detection repository

Detection-as-code

Challenges creating a detection pipeline

Lab – Publishing a rule using Elastic’s detection-rules project

Summary

Part 3: Detection Validation

9

Detection Validation

Technical requirements

Understanding the validation process

Understanding purple team exercises

Simulating adversary activity

Atomic Red Team

CALDERA

Exercise – validating detections for a single technique using Atomic Red Team

Exercise – validating detections for multiple techniques via CALDERA

Using validation results

Measuring detection coverage

Summary

Further reading

10

Leveraging Threat Intelligence

Technical requirements

Threat intelligence overview

Open source intelligence

Internal threat intelligence

Gathering threat intelligence

Threat intelligence in the detection engineering life cycle

Requirements Discovery

Triage

Investigate

Threat intelligence for detection engineering in practice

Example – leveraging threat intel blog posts for detection engineering

Example – leveraging VirusTotal for detection engineering

Threat assessments

Example – leveraging threat assessments for detection engineering

Resources and further reading

Threat intelligence sources and concepts

Online scanners and sandboxes

MITRE ATT&CK

Summary

Part 4: Metrics and Management

11

Performance Management

Introduction to performance management

Assessing the maturity of your detection program

Measuring the efficiency of a detection engineering program

Measuring the effectiveness of a detection engineering program

Prioritizing detection efforts

Precision, noisiness, and recall

Calculating a detection’s efficacy

Low-fidelity coverage metrics

Automated validation

High-fidelity coverage metrics

Summary

Further reading

Part 5: Detection Engineering as a Career

12

Career Guidance for Detection Engineers

Getting a job in detection engineering

Job postings

Developing skills

Detection engineering as a job

Detection engineering roles and responsibilities

The future of detection engineering

Attack surfaces

Visibility

Security device capabilities

Machine learning

Sharing of attack methodology

The adversary

The human

Summary

Index

Other Books You May Enjoy

Preface

Over the past several years, the field of detection engineering has moved increasingly to the forefront of cyber security defense discussions. While the number of conference talks, blog posts, and webcasts surrounding detection engineering has increased, a dedicated book on the topic has not yet appeared on the market. We hope that we can fill that gap with the release of this book. While learning resources for related fields such as threat hunting, threat intelligence, and red teaming are plentiful, detection engineering has a long way to go in providing the training necessary to develop detection engineers.

Our goal is to not only provide a discussion on the topic but to provide you with practical skills using hands-on exercises throughout the book. Furthermore, we hope that these exercises, in combination with the creation of the detection engineering test lab, lead you to continue your education and training by practicing the skills you’ve learned for your own use cases.

The authors of this book have worked in various areas of security, with a current focus on detection engineering, and derived this content from real-life experiences throughout their careers. This, combined with research taken from some of the top minds in this field, provides you with a comprehensive overview of the topics necessary to understand detection engineering. With topics including the detection engineering life cycle, creating and validating detections, career guidance for detection engineers, and everything in between, you should walk away feeling confident in your understanding of what detection engineering is and what it means to the cyber security industry.

We hope this book inspires continued content creation and community contributions in detection engineering.

Who this book is for

This book provides insights into detection engineering via a hands-on methodology for those with an interest in the field. It is primarily focused on security engineers who have some experience in the area. Mid- to senior-level security analysts can also learn from this book. To fully understand the content and follow the labs, you should understand foundational security concepts. Additionally, you should have the technical ability to work with technologies such as virtual machines and containers, which are leveraged in hands-on exercises throughout the book.

What this book covers

Chapter 1, Fundamentals of Detection Engineering, provides an introduction to the foundational concepts that will be referenced throughout the book. It also defines detection engineering to help you understand what exactly detection engineering is.

Chapter 2, The Detection Engineering Life Cycle, introduces the phases of the detection engineering life cycle and different types of continuous monitoring. Each phase of the life cycle will be discussed in depth in later chapters.

Chapter 3, Building a Detection Engineering Test Lab, introduces the technologies that will be used to build a detection engineering test lab. The subsequent hands-on exercises will teach you how to deploy the detection engineering lab that will be leveraged for future labs throughout the book, and how to create a simple detection.

Chapter 4, Detection Data Sources, discusses what detection data sources are, their importance, and the potential challenges faced when leveraging data sources. It will then provide a hands-on exercise to connect a new data source to the detection engineering test lab.

Chapter 5, Investigating Detection Requirements, looks at the first two phases of the detection engineering life cycle. It discusses how to identify and triage detection requirements from a variety of sources and the related methods and processes to be implemented.

Chapter 6, Developing Detections Using Indicators of Compromise, discusses the use of indicators of compromise for the purpose of detection engineering. The concept is demonstrated through an example scenario based on a real-life threat. As part of the exercise, Sysmon will also be introduced and installed in the detection engineering lab.

Chapter 7, Developing Detections Using Behavioral Indicators, builds on Chapter 6 by moving on to developing detections at the behavioral indicator level. Two scenarios and associated exercises are leveraged to introduce the concept: one focused on detecting adversary tools and one focused on detecting tactics, techniques, and procedures (TTPs).

Chapter 8, Documentation and Detection Pipelines, provides an overview of how detections should be documented in order to effectively manage a detection engineering program. It then introduces concepts related to deployment processes and automation, such as CI/CD, along with a lab to demonstrate creating a detection pipeline.

Chapter 9, Detection Validation, provides an overview of validating detections using various methodologies. It will introduce two tools, Atomic Red Team and CALDERA, that can be used for performing validation. An associated hands-on exercise will allow you to work with these tools in your detection engineering test lab.

Chapter 10, Leveraging Threat Intelligence, provides an introduction to cyber threat intelligence with a focus on how it relates to detection engineering. A series of examples is used to demonstrate the use of open source intelligence for detection engineering. Additionally, the chapter will discuss the use of threat assessments to develop detection requirements.

Chapter 11, Performance Management, provides an overview of how to evaluate a detection engineering program as a whole. It includes methodologies for calculating the effectiveness and efficiency of the detections in an organization. Then, it discusses how such data can be used to improve the detection engineering program.

Chapter 12, Career Guidance for Detection Engineers, closes off the book with a discussion on careers in detection engineering. This includes finding jobs, improving your skill sets, and associated training. It then provides insights into the future of detection engineering as a field. Finally, it looks at ways in which detection engineers can contribute to the community.

To get the most out of this book

The primary technologies used in this book are Docker and virtualization software; all other software and operating systems run within them. Due to the use of virtualization software, these labs cannot be run on ARM-based systems, such as M1 Macs. For the labs, we provide setup instructions for both Linux and Windows systems.

Software/hardware covered in the book | Operating system requirements
Docker | Windows or Linux
VirtualBox | Windows or Linux

While the book uses steps and screenshots specific to VirtualBox in the exercises, users who have an understanding of virtualization software may use VMware or other solutions of their choice.

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Practical-Threat-Detection-Engineering. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/qt1nr.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The process described on Elastic’s site involves the use of docker-compose.yaml and a .env file, which docker-compose then interprets to build the Elastic and Kibana nodes.”

A block of code is set as follows:

ES1_DATA=/path/to/large/disk/elasticdata/es01
ES2_DATA=/path/to/large/disk/elasticdata/es02
KIBANA_DATA=/path/to/large/disk/elasticdata/kibana_data

Any command-line input or output is written as follows:

$ docker --version
Docker version v20.10.12, build 20.10.12-0ubuntu4

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “At this point, you are probably wondering what type of data is being sent back to the Elasticsearch backend. You can view this data by navigating to the Discover page, under Analytics in the hamburger menu.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Practical Threat Detection Engineering, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere?

Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there: you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781801076715

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly.

Part 1: Introduction to Detection Engineering

In this part, you will learn about some foundational concepts related to detection engineering. After establishing this baseline knowledge, you’ll be introduced to the detection engineering life cycle, which will be followed throughout the book. To wrap up this part, we’ll guide you through deploying a detection engineering lab, which will support the labs throughout the book.

This section has the following chapters:

Chapter 1, Fundamentals of Detection Engineering

Chapter 2, The Detection Engineering Life Cycle

Chapter 3, Building a Detection Engineering Test Lab

1

Fundamentals of Detection Engineering

Across nearly every industry, a top concern for executives and board members is the security of their digital assets. It’s an understandable concern, given that companies are now more interconnected and reliant on technology than ever before. Digital assets and their supporting infrastructure comprise ever-increasing portions of a typical organization’s inventory. Additionally, more processes are becoming reliant on robust communication technologies. In most cases, these technologies enable companies to operate more effectively. The management and defense of this new digital landscape, however, can be challenging for organizations of any size.

Additionally, where sophisticated attacks used to be limited to nation-state adversaries, the increased interconnectedness of technology, coupled with the emergence of cryptocurrencies, creates a near-perfect environment for cyber criminals to operate in. The addition of sophisticated threat actors motivated by financial gain, rather than only nation-state objectives, has dramatically broadened the set of organizations that must be able to identify and respond to such threats. Stopping these attacks requires increased agility from an organization to combat the adversary. A detection engineering program provides that agility, improving an organization’s ops tempo in operationalizing intelligence about new threats. The primary goal of detection engineering is to develop rules or algorithmic models that automatically and promptly identify the presence of threat actors, or malicious activity in general, so that the relevant teams can take mitigative action.

In this chapter, we will discuss several topics that will provide you with knowledge that will be relevant throughout this book:

Foundational concepts, such as attack frameworks, common attack types, and the definition of detection engineering

The value of a detection engineering program

An overview of this book

Foundational concepts

Understanding how an adversary’s actions can be tracked and categorized allows us to prioritize our detections and understand their scope, or coverage. The following subsections cover common frameworks and models that will be referenced throughout this book. They provide a starting model for framing cyberattacks, their granular sub-components, and how to defend against them.

The Unified Kill Chain

Cyberattacks tend to follow a predictable pattern that should be understood by defenders. This pattern was initially documented as the now famous Lockheed Martin Cyber Kill Chain. This model has been adapted and modernized over time by multiple vendors. The Unified Kill Chain is a notable modernization of the model. This model defines 18 broad tactics across three generalized goals, which provides defenders with a reasonable framework for designing appropriate defenses according to attackers’ objectives. Let’s look at these goals:

In: The attacker’s goal at this phase is to research the potential victim, discover possible attack vectors, and gain and maintain reliable access to a target environment.

Through: Having gained access to a target environment, the threat actor needs to orient themselves and gather supplemental resources required for the remainder of the attack, such as privileged credentials.

Out: These tactics are focused on completing the objective of the cyberattack. In the case of double extortion ransomware, this would include staging files for exfiltration, copying those files to attacker infrastructure, and, finally, the large-scale deployment of ransomware.

Figure 1.1, based on the Unified Kill Chain whitepaper by Paul Pols, shows the individual tactics in each phase of the kill chain:

Figure 1.1 – The Unified Kill Chain

To better understand how the Unified Kill Chain applies to cyberattacks, let’s look at how it maps to a well-known attack. We are specifically going to look at an Emotet attack campaign. Emotet is a malicious payload often distributed via email and used to deliver additional payloads that will carry out the attacker’s final objectives. The specific campaign we will analyze is one reported on by The DFIR Report in November 2022: https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/.

Table 1.1 lists the stages of the attack, as reported in the article, and how they map to the Unified Kill Chain:

Attack Event | Unified Kill Chain Phase Group | Unified Kill Chain Phase
Emotet executed via LNK malspam attachment | In | Delivery
Emotet sends outbound SMTP spam email | Network propagation | Pivoting
Domain enumeration via Cobalt Strike | Through | Discovery
Lateral movement to user workstation | Through | Pivoting
SMB share enumeration | Through | Discovery
Zerologon exploit attempt | In | Exploitation
Remote Management Agent installed | In | Command and control/persistence
Exfiltration via Rclone to Mega | Out | Exfiltration
Ransomware execution | Out | Impact

Table 1.1 – Unified Kill Chain mapping for Emotet attack chain

As can be seen from Table 1.1, not all phases will take place in every attack and may not occur in a linear order.
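To make this mapping concrete, the same table can be expressed as a simple data structure and summarized per phase group. The following Python sketch is purely illustrative (it is not part of the book’s lab material) and uses only the events listed in Table 1.1:

# Illustrative only: the Emotet events from Table 1.1 expressed as
# (event, phase group, phase) tuples, then summarized per phase group.
from collections import Counter

emotet_events = [
    ("Emotet executed via LNK malspam attachment", "In", "Delivery"),
    ("Emotet sends outbound SMTP spam email", "Network propagation", "Pivoting"),
    ("Domain enumeration via Cobalt Strike", "Through", "Discovery"),
    ("Lateral movement to user workstation", "Through", "Pivoting"),
    ("SMB share enumeration", "Through", "Discovery"),
    ("Zerologon exploit attempt", "In", "Exploitation"),
    ("Remote Management Agent installed", "In", "Command and control/persistence"),
    ("Exfiltration via Rclone to Mega", "Out", "Exfiltration"),
    ("Ransomware execution", "Out", "Impact"),
]

events_per_group = Counter(group for _event, group, _phase in emotet_events)
print(events_per_group)
# Counter({'In': 3, 'Through': 3, 'Out': 2, 'Network propagation': 1})

Treating the mapping as data in this way becomes useful later in the book, when detections and coverage are tracked per tactic rather than per report.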

To read the full Unified Kill Chain whitepaper, visit this link: https://www.unifiedkillchain.com/assets/The-Unified-Kill-Chain.pdf.

While this follows the progression of a typical cyberattack, as the paper outlines and as our example shows, it is not uncommon for an attacker to execute some tactics outside this expected order. While the Unified Kill Chain provides a model for how threat actors carry out attacks, it does not dive into the detailed techniques that can be used to achieve the goals of each phase in the kill chain. The MITRE ATT&CK framework provides more granular insight into the tactics, techniques, and procedures leveraged by threat actors.

The MITRE ATT&CK framework

The MITRE ATT&CK framework is a knowledge base developed by the MITRE Corporation. The framework classifies threat actor objectives and catalogs the granular tools and activities related to achieving those objectives.

ATT&CK stands for Adversarial Tactics, Techniques, and Common Knowledge. The MITRE ATT&CK framework groups adversarial techniques into high-level categories called tactics. Each tactic represents a smaller immediate goal within the overall cyberattack. This framework will be referenced frequently throughout this book, providing an effective model for designing and validating detections. The following points detail the high-level tactics included as part of the Enterprise ATT&CK framework:

Reconnaissance: This tactic falls within the initial foothold phase of the Unified Kill Chain. Here, the threat actor gathers information about their target. At this stage, the attacker may use tools to passively collect technical details about the target, such as any publicly accessible infrastructure, emails, and vulnerable associated businesses. In ideal cases, the threat actor may identify publicly accessible and vulnerable interfaces, but reconnaissance can also include gathering information about employees of an organization to identify possible targets for social engineering and to understand how various internal business processes work.

Resource development: This tactic falls within the initial foothold phase of the Unified Kill Chain. Having identified a plausible attack vector, threat actors design an appropriate attack and develop technical resources to facilitate it. This phase includes creating, purchasing, or stealing credentials, infrastructure, or capabilities specifically to support the operation against the target.

Initial access: This tactic falls within the initial foothold phase of the Unified Kill Chain. The threat actor attempts to gain access to an asset in the victim-controlled environment. A variety of tools can be leveraged in combination at this point, ranging from cleverly designed phishing campaigns to deploying code that weaponizes yet-undisclosed vulnerabilities in exposed software interfaces (also known as zero-day attacks).

Execution: Tactics in this category fall within the initial foothold and network propagation phases of the Unified Kill Chain. The attacker aims to execute their code on a target asset. Code used in this phase typically attempts to collect additional details about the target network, understand the security context the code is executing under, or collect data and return it to infrastructure controlled by the threat actor.

Persistence: This tactic falls within the initial foothold category of the Unified Kill Chain. Initial access to a foreign environment can be volatile. Threat actors prefer robust and survivable access to target systems. Persistence techniques focus on maintaining access despite system restarts or modifications to identities and infrastructure.

Privilege escalation: This tactic falls within the network propagation category of the Unified Kill Chain. Having gained access to the victim-controlled environment, the threat actor typically attempts to attain the highest level of privileges possible. Privileged access provides a means for executing nearly every option available to the administrators of the victim, removing many roadblocks that might otherwise prevent the attacker from acting on their objectives. Having privileged access can also make threat actor activities more challenging to detect.

Defense evasion: This tactic falls within the initial foothold category of the Unified Kill Chain. Threat actors must understand the victim’s defense systems to design appropriate methods for avoiding them. Successfully evading defenses increases the likelihood of a successful operation. These tactics focus specifically on finding ways to subvert or otherwise avoid the target’s defensive controls.

Credential access: This tactic falls within the initial foothold and action on objectives categories of the Unified Kill Chain. Identities control access to systems. Harvesting credentials or credential material is essential for completely dominating a victim’s environment. Access to multiple systems and credentials makes navigating environments easier and lets attackers pivot in the event that credentials are modified.

Discovery: This tactic falls within the network propagation category of the Unified Kill Chain. These techniques focus on understanding the victim’s internal environment. The internal network layout, infrastructure configuration, identity information, and defense systems must be understood to plan for the remaining phases of the attack.

Lateral movement: This tactic falls within the action on objectives category of the Unified Kill Chain. Systems that are accessed for the first time often do not have the information or resources (tools, credential material, direct connectivity, or visibility) required to complete objectives. Following the discovery of connected systems, and with the proper credentials, the adversary can, and often needs to, move from the current system to other connected systems. These techniques are all focused on traversing the victim’s environment.

Collection: This tactic falls within the action on objectives category of the Unified Kill Chain. These techniques focus on performing internal reconnaissance. Access to new environments provides new visibility, and understanding the technical environment is essential for planning the subsequent phases of the attack.

Command and control: This tactic falls within the initial foothold category of the Unified Kill Chain. These techniques involve implementing systems that allow the threat actor to remotely control the victim’s environment.

Exfiltration: This tactic falls within the action on objectives category of the Unified Kill Chain. Not all attacks involve exfiltration activities, but tactics in this category have become more popular with the rise of ransomware double extortion attacks. You can find a more detailed description of double extortion ransomware attacks at https://www.zscaler.com/resources/security-terms-glossary/what-is-double-extortion-ransomware. These tactics aim to copy data out of the victim’s environment to attacker-controlled infrastructure.

Impact: This tactic falls within the action on objectives category of the Unified Kill Chain. At this point, the threat actor takes steps to complete their attack. For example, in the case of a ransomware attack, the large-scale encryption of data would fall into this phase.

We encourage you to explore the MITRE ATT&CK framework in full at https://attack.mitre.org/. In this book, we will focus specifically on the Enterprise ATT&CK framework, but MITRE also provides frameworks for ICS and mobile-based attacks. The ATT&CK Navigator, located at https://mitre-attack.github.io/attack-navigator/, is also extremely useful for defenders who want to quickly search for and qualify tactics.
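For readers who prefer to explore the knowledge base programmatically, MITRE also publishes ATT&CK as a STIX 2 bundle in its CTI GitHub repository. The following minimal Python sketch is our own illustration (it assumes the Enterprise bundle remains available at the URL shown) and simply lists the tactic names and counts the technique objects:

# Illustrative sketch: enumerate Enterprise ATT&CK tactics and techniques from
# the STIX 2 bundle published in MITRE's CTI repository (the URL is an
# assumption and may change if the repository layout moves).
import json
import urllib.request

ATTACK_URL = (
    "https://raw.githubusercontent.com/mitre/cti/"
    "master/enterprise-attack/enterprise-attack.json"
)

with urllib.request.urlopen(ATTACK_URL) as response:
    bundle = json.load(response)

# Tactics are stored as "x-mitre-tactic" objects; techniques (and
# sub-techniques) as "attack-pattern" objects.
tactics = sorted(obj["name"] for obj in bundle["objects"] if obj["type"] == "x-mitre-tactic")
techniques = [obj for obj in bundle["objects"] if obj["type"] == "attack-pattern"]

print(f"{len(tactics)} tactics: {', '.join(tactics)}")
print(f"{len(techniques)} technique objects (including sub-techniques)")

Working with the framework as data in this way is what makes the coverage measurements discussed later in the book practical at scale.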

Publications documenting incident response observations typically provide kill chain and MITRE ATT&CK tactic mappings, which help defenders understand how to design detections and other preventive controls.

The Pyramid of Pain

Another helpful model for defenders to understand is the Pyramid of Pain. This model, developed by David Bianco, visualizes the relationship between categories of indicators and the impact of building defenses around each. That impact is expressed as the effort the threat actor must expend to modify their attack once an effective defense is implemented for a given indicator category. Figure 1.2 shows the concept of the Pyramid of Pain:

Figure 1.2 – David Bianco’s Pyramid of Pain

As we can see, controls designed to operate on static indicators such as domain names, IP addresses, and hash values are trivial for adversaries to evade. For example, modifying the hash of a binary simply involves changing a single bit. It is far more difficult for an adversary to modify their tactics, techniques, and procedures (TTPs), which are essentially the foundation of their attack playbook. Controls that target TTPs are the gold standard for defense. However, these are usually more difficult to implement and require reliable data from protected assets, as well as a deep understanding of the adversary’s tactics and capabilities. Defensive controls designed for static indicators remain effective for short-term, tactical defense. You can read David Bianco’s full blog post here: https://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html.
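To see why the bottom of the pyramid is so brittle, consider the following toy Python example. It is a simplified illustration, not a production detection: changing a single byte of a payload defeats a hash indicator, whereas a crude behavioral check on a credential-dumping command line keeps matching until the adversary changes how they operate:

# Toy illustration of the Pyramid of Pain (simplified, not a real detection).
import hashlib

payload_v1 = b"attacker tooling build 1"
payload_v2 = b"attacker tooling build 2"  # a single byte changed

# A hash indicator matches exactly one artifact; any modification breaks it.
print(hashlib.sha256(payload_v1).hexdigest() == hashlib.sha256(payload_v2).hexdigest())  # False

def flags_lsass_dump(command_line: str) -> bool:
    # Behavior-level logic: LSASS memory dumping via comsvcs.dll MiniDump.
    # Evading this requires the adversary to change their procedure, not just
    # recompile or repack a file.
    lowered = command_line.lower()
    return "comsvcs.dll" in lowered and "minidump" in lowered

print(flags_lsass_dump(
    r"rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump 624 C:\temp\lsass.dmp full"
))  # True

Real behavioral detections, of course, operate on endpoint telemetry rather than hardcoded strings; Chapters 6 and 7 build both indicator-based and behavior-based detections in the lab.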

Throughout the remainder of this book, we will frequently reference these concepts. In later chapters, we will illustrate how these models can be used to understand cyberattacks, translate high-level business objectives for defense into detections, and measure coverage against known attacks.

Now that we have gained an understanding of the model for framing cyberattacks, let’s look into the most common types of cyberattacks.

Types of cyberattacks

To detect cyberattacks, detection engineers need to have a base understanding of the attacks that they will face. Some of the most prevalent attacks at the time of writing are summarized here to provide some introductory insight into the attacks we are trying to defend against.

Business Email Compromise (BEC)

The FBI reported receiving a total of 19,954 complaints related to Business Email Compromise (BEC) incidents in 2021. They estimate these complaints represent a cumulative loss of 2.4 billion dollars (USD). The full report can be accessed at https://www.ic3.gov/Media/PDF/AnnualReport/2021_IC3Report.pdf.

BEC attacks target users of the most popular and accessible collaboration tool available – email. The electronic transfer of funds is a normal part of business operations for many organizations. Threat actors research organizations and identify personnel likely to be involved in correspondence related to the exchange of funds. Having identified a target, the threat actor leverages several techniques to gain access to the target’s mailbox (or that of someone adjacent from a business process perspective). With this access, the threat actor’s objective pivots to observing email exchanges to understand internal processes. During this time, the threat actor needs to understand the communication flows and key players. In ideal cases, they will identify a third-party contractor with whom the organization conducts routine business, the people who typically send correspondence for payments, and the person who approves these payments on behalf of the organization. Once the right opportunity arises, the threat actor can intercept and alter email conversations about payment, changing destination account numbers. If this goes unnoticed, funds may be deposited into the attacker’s account instead of the intended recipient’s.

Denial of service (DoS)

Denial of service (DoS) attacks attempt to make services unavailable to legitimate users by overwhelming the service or otherwise impairing the infrastructure the service depends on. There are three main types of DoS attacks: volumetric, protocol, and application attacks.

Volumetric attacks are executed by sending an inordinate volume of traffic to a target system. If the attack persists, it can degrade the service or disrupt it entirely. Protocol attacks focus on the network and transport layers and attempt to deplete the available resources of the networking devices, making the target service unavailable. Application attacks send large volumes of requests to a target service. The service attempts to process each request, which consumes processing power on the underlying systems. Eventually, the available resources are exhausted, and service response times increase to the point where the service becomes unavailable. These types of attacks can be further categorized by their degree of automation and the techniques used.

Increasing the number of systems executing the attack can significantly increase the impact. By making use of compromised systems, threat actors can conduct synchronized DoS attacks against a single target, known as distributed denial of service (DDoS) attacks.
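As a simple illustration of how volumetric activity surfaces in telemetry, the following toy Python sketch (with an arbitrary threshold and made-up documentation-range IP addresses) counts requests per source over a window and flags sources that exceed the threshold. Real DoS and DDoS detection relies on baselining and far richer context, but the underlying counting logic is similar:

# Toy volumetric check (illustrative only): flag sources whose request count in
# a window exceeds a static threshold.
from collections import Counter

THRESHOLD = 1000  # requests per window; arbitrary for illustration

# (timestamp, source_ip) tuples, e.g. parsed from web server access logs
window_events = [(1700000000 + i, "203.0.113.7") for i in range(1500)]
window_events += [(1700000000 + i, "198.51.100.20") for i in range(42)]

requests_per_source = Counter(ip for _ts, ip in window_events)
flagged = {ip: count for ip, count in requests_per_source.items() if count > THRESHOLD}
print(flagged)  # {'203.0.113.7': 1500}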

Malware outbreak

When malicious software, or malware, manages to evade defensive controls, the impact can range broadly, depending on the specific malware family. In low-impact cases, an end user may be bombarded with unsolicited pop-up ads, and in more extreme scenarios, malware can give full control of a system to a remote threat actor. The presence of malware in an enterprise environment usually indicates a possible deficiency in security controls. Seemingly low-impact malware infections can lead to more significant incidents, including full-blown ransomware attacks.

Insider threats

Employees of an organization who perform malicious activity against that organization are known as insider threats. Insider threats can exist at any level of the organization and have various motivations. Malicious insiders can be difficult to defend against since the organization has granted them a degree of trust.

Phishing

Phishing attacks fall under the category of social engineering, where threat actors design attacks around communication and collaboration tools, such as email, instant messaging apps, SMS text messages, and even regular phone calls. The underlying objective in all cases is to entice users to reveal sensitive information, such as credentials or banking information. BEC attacks typically leverage phishing techniques.

Ransomware

While the threat landscape is full of countless actors, with diverse goals ranging from stealthy cyber espionage to tech-support scams, the most prolific and impactful of these is the modern ransomware attack.

The goal of a ransomware attack is to interrupt critical business operations by taking critical systems offline and demanding payment, or a ransom, from the organization. In exchange for a successful payment, the threat actors claim they will return systems to a normal operating state.

Recently, some ransomware operators have added a separate extortion component to their playbook. During their ransomware attack, they exfiltrate sensitive data from the organization’s environment to attacker-controlled systems. Ransomware operators then threaten to publicize this data unless the ransom is paid. This attack is commonly referred to as the double-extortion ransomware attack.

Successful ransomware operations put businesses in a frightening predicament. Apart from untangling the deep complexities of determining whether to pay the ransom, recovering from a successful cyberattack can take months or sometimes years.

These malicious operations have become increasingly sophisticated and successful over time. According to CrowdStrike, the first instance of modern ransomware was recorded in 2005. Between then and now, the frequency, scale, and sophistication of ransomware attacks have only increased. CrowdStrike’s History of Ransomware article provides a summary of the evolution of ransomware. You can read the full article here: https://www.crowdstrike.com/cybersecurity-101/ransomware/history-of-ransomware/.

The motivation for detection engineering

Successful breaches can have expensive impacts, requiring thousands of man-hours to remediate. IBM’s 2022 Cost of a Data Breach report found that the average total cost of a data breach amounted to 4.35 million USD. Typically, the earlier a threat is detected, the lower the cost of remediation; for every phase that an attacker advances through the kill chain, the cost of remediation goes up. While a threat hunt allows an organization to search for an adversary already inside its environment, identification occurs only when and if a search is performed. Detection engineering, in contrast, allows an organization to identify malicious behavior as the activity is performed, reducing the mean time to detect. Given that the same IBM Cost of a Data Breach report determined that the average time to identify and contain a breach was 277 days, there is much work to be done in reducing the time to detection.

To understand how the time to detect an attack greatly determines the impact on the business, let’s consider a scenario where a threat actor gains initial access to an internet-connected workstation via a successful phishing campaign. This unauthorized access was immediately detected by the organization’s security team. They quickly isolated the workstation and performed a full re-imaging of its contents to a known-good state. They also performed a full reset of the user’s credentials, along with those of any other user who interacted with that workstation. Administrators identified the phishing email in their enterprise email solution, and all recipients had their workstations re-imaged and their credentials reset.

In this scenario, the steps that were taken by the security team were relatively simple to execute and would likely be sufficient to remove the threat from the environment. In contrast, if the threat actors were able to gain privileged access, exfiltrate data, and then deploy ransomware across all systems, the task becomes significantly more onerous. The security team would be faced with the dual task of understanding what happened while simultaneously advising on the best way to restore the business’s ability to operate safely. The following table summarizes how the number of assets impacted, the investigative requirements, and typical remediation efforts change across the Unified Kill Chain goals:

Assets impacted
- Initial Foothold: Low value. Typically, this involves edge devices, public-facing servers, or user workstations. Because of their position in modern architectures, these devices are typically untrusted by default.
- Network Propagation: Medium value. Some internal systems. Typically, at this phase, the threat actor has access to some member servers within the environment and has a reliable C2 channel established.
- Action on Objectives: High value. Critical servers such as Active Directory domain controllers, backup servers, or file servers.

Threat actor’s degree of control
- Initial Foothold: Low. The threat actor has unreliable access to a system or is attempting to obtain access to a system, typically through phishing or attacking public-facing services. Typically, this phase is the best opportunity for defenders to remove a threat.
- Network Propagation: Medium. The threat actor has enough control to traverse the network, but not enough control to execute objectives. At this point, threat actors typically have some credentials and have a reliable C2 channel established.
- Action on Objectives: High. The threat actor is fully comfortable operating in the environment. They have found all the resources needed to execute their objectives. At this point, they likely have the highest level of privileges available in the environment.

Data requirement for investigation
- Initial Foothold: Relatively low. Typically, impact at this phase is limited to a small number of assets. Once identified at this phase, the data required for fully scoping the event is limited to a single host.
- Network Propagation: Significant. The capability to traverse the internal network typically indicates the presence of a reliable C2 channel. A higher volume of historic and real-time data is required to identify impacted assets. At this point, incident responders will need to have visibility of all connected assets to fully track lateral movement.
- Action on Objectives: High. Investigators will require access to historical and real-time data from all connected assets. Additionally, in cases where data exfiltration is an objective, telemetry for the access and movement of data will also be required. This data is difficult to collect and is not typically tracked.

Effort required to remediate
- Initial Foothold: Low. Activities at this phase typically occur on edge devices or public-facing assets. The typical posture is to treat these assets as untrusted, so it is common for environments to have capabilities for rapidly isolating these assets.
- Network Propagation: Medium. Traversing the network requires more investigative work to identify the individual assets that were accessed, the degree to which they were utilized, and the requirements for remediation.
- Action on Objectives: High. In nearly every case, this requires rebuilding critical infrastructure. Often, this needs to occur with the added pressure of returning the business to a minimally operational state, to minimize losses.

Table 1.2 – Generalized asset impact and effort versus kill chain goals

The importance of discovering cyberattacks in your environment, and of discovering them as early as possible, should now be plain to see. The right people need to receive the relevant information about an attack in a timely fashion; this is the primary objective of detection engineering.

Defining detection engineering

Quickly identifying, qualifying, and mitigating potential security incidents is a top priority for security teams, and it is a fairly complicated problem to solve. In general terms, security personnel need to be able to do the following:

Collect events from assets that require protection, as well as assets that can indirectly impact them.

Identify events that may indicate a security incident, ideally as soon as they happen.

Understand the impact of the potential incident.

Communicate the high-value details of the event to all relevant teams for investigation and mitigation.

Receive feedback from investigative teams to determine how the whole process can be improved.

Each of these steps can be difficult to execute even within small environments, and the complexity increases radically as the size of the managed environment grows.

Detection engineering definition

Detection engineering can be defined as a set of processes that enable potential threats to be detected within an environment. These processes encompass the end-to-end life cycle, from collecting detection requirements, aggregating system telemetry, and implementing and maintaining detection logic to validating program effectiveness.

To accomplish these goals, a good detection engineering program typically needs to implement four main processes:

Discovery: This involves collecting detection requirements. Here, you must determine whether the requirements are met by existing detections. You must also determine the criticality of the detection, as well as the audiences and timeframes for alerting.

Design, development, and testing: The detection requirement is interpreted, and a plan for implementing the detection is formulated. The designed detection is implemented first in a test environment and tested to ensure it produces the expected results.

Implementation and post-implementation monitoring: The detection is implemented in the production detection environment, where the performance of the detection and the detection systems is monitored.

Validation: Routine testing determines the effectiveness of the detection engineering program as a whole.

Figure 1.3 – The detection engineering processes

Chapter 2, The Detection Engineering Life Cycle, takes a deeper dive into each of these processes.

Important distinctions

Detection engineering can be misunderstood, partly because some processes overlap with other functions within a security organization. We can clarify detection engineering’s position with the following distinctions:

Threat hunting: The threat hunting process proactively develops investigative analyses based on a hypothesis that assumes a successful, undetected breach. The threat hunting process can identify active threats in the environment that managed to evade current security controls. This process provides input to the detection engineering program as it can identify deficiencies in detections. The data that’s available to detection engineering is typically the same data that threat hunters utilize. Therefore, threat hunting can also identify deficiencies in the existing data collection infrastructure that will need to be solved and integrated with the detection infrastructure.

Security operations center (SOC) operations: SOC teams typically focus on monitoring the security environment, whereas detection engineering provides inputs to SOC teams. While the SOC consumes the products of the detection engineering functions, they typically work very closely with them to provide feedback for detection or collection improvements.

Data engineering: Data engineers design, implement, and maintain systems to collect, transform, and distribute data, typically to satisfy data analytics and business intelligence requirements. This aligns with several goals of detection engineering; however, the detection engineering program is heavily security-focused and relies on data engineering to produce the data it needs to build detections.

In this section, we examined some basic cyber security concepts that will be useful throughout this book as we dive into the detection engineering process. Furthermore, we established a definition for detection engineering. With this definition in mind, the following section will examine the value that a detection engineering program brings to an organization.

The value of a detection engineering program

Before a detection engineering program can be established, it must be justified to stakeholders in the organization so that it can receive funding. This section will discuss the importance of detection engineering. Specifically, we will look at the increasing need for good detections, how we define the quality of a detection, and how a detection engineering program fills this need.

The need for better detection

Advancements in software development such as open source, cloud computing, Infrastructure as Code (IaC), and continuous integration/continuous deployment (CI/CD) pipelines have delivered significant benefits to organizations. These advancements allow organizations to easily build upon the technology of others, frequently deploy new versions of their software, quickly stand up and tear down infrastructure, and adapt quickly to changes in their landscape.

Unfortunately, these same advancements have aided threat actors as well. Open source repositories provide a plethora of offensive tools. Cloud computing and IaC allow adversaries to quickly deploy and tear down their command-and-control (C2) infrastructure, while advances in software processes and automation have increased their ops tempo for updating and creating new capabilities. These changes have further eroded the value of static indicators and necessitate better, more sophisticated detections. As such, the field of detection engineering is evolving to support these more sophisticated detections. With an effective detection engineering program, organizations can go beyond detecting static indicators and instead detect malicious activity at the technique level.

The qualities of good detection

There is no single definition of a good detection. Individual cyber security organizations will have varying thresholds for false positive rates – that is, the rate at which detections trigger when they shouldn’t. Additionally, the adversaries they face will differ in sophistication, and the visibility and tools at their disposal will vary. As a detection engineer, you must identify metrics and evaluation criteria that align with your organization’s needs. In Chapter 9, we will review processes and approaches that will help guide those decisions. These evaluation criteria can be broken into three areas:

The ability to detect the adversary
The cost of that ability to the cyber security organization
The cost to the adversary to evade that detection

The ability to detect the adversary can be characterized first by a detection’s coverage – the scope of the activity that the detection identifies. This is most easily understood in terms of MITRE ATT&CK. As mentioned earlier, the framework provides definitions at varying levels of specificity, starting with tactics as the most general grouping, broken down into techniques, and then procedures as the most fine-grained classification. Most behavioral detections focus on detecting one or more procedures taken by an adversary to implement a technique. Increasing a detection’s coverage by detecting multiple procedures associated with a technique, or by creating a detection that works across multiple techniques, often increases the complexity of the detection but can also improve its durability.

Whereas a detection’s coverage can be thought of as its surface area across the MITRE ATT&CK TTPs, its durability identifies how long the detection is expected to remain effective. Understanding the volatility of an adversary’s infrastructure, tools, and procedures, and the relative cost of changing them, can help predict the durability of a detection.
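
As a rough sketch of how coverage could be tracked in practice, the following Python snippet maps a few detections to the ATT&CK technique IDs they address and computes what fraction of a required watchlist has at least one detection. The detection names, technique selections, and watchlist are purely illustrative assumptions:

```python
# Hypothetical mapping of detection names to the ATT&CK technique IDs
# each one addresses (IDs chosen only for illustration).
detections = {
    "powershell_download_cradle": {"T1059.001"},   # PowerShell
    "lsass_memory_access":        {"T1003.001"},   # LSASS credential dumping
    "dns_tunneling_beacon":       {"T1071.004", "T1572"},
}

# Techniques the organization has decided it needs visibility into.
required_techniques = {
    "T1059.001", "T1003.001", "T1071.004", "T1572", "T1547.001",
}

# Coverage: required techniques addressed by at least one detection.
covered = set().union(*detections.values()) & required_techniques
coverage = len(covered) / len(required_techniques)

print(f"Covered techniques: {sorted(covered)}")
print(f"Coverage: {coverage:.0%}")
print(f"Gaps: {sorted(required_techniques - covered)}")
```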

These two evaluation criteria define what portion of attacks we can detect and how long we expect those detections to remain effective. Unfortunately, quantifying these criteria as metrics would require complete knowledge of an adversary’s capabilities and of the tempo at which they change those capabilities. Despite this, we can use these criteria to rank the effectiveness and quality of our detections as we strive to improve our ability to detect the adversary.

We can, however, measure an organization’s historical effectiveness by calculating the mean time to detection: the time from the start of an attack on the organization to the point at which the adversary is detected.
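
As a minimal sketch of that calculation (the incident timestamps below are invented for illustration):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (attack start, time of detection).
incidents = [
    (datetime(2023, 1, 4, 9, 15), datetime(2023, 1, 4, 13, 40)),
    (datetime(2023, 2, 11, 22, 5), datetime(2023, 2, 13, 8, 30)),
    (datetime(2023, 3, 2, 6, 45), datetime(2023, 3, 2, 7, 10)),
]

# Mean time to detection: average of (detected - started) across incidents.
mttd_hours = mean(
    (detected - started).total_seconds() / 3600
    for started, detected in incidents
)
print(f"Mean time to detection: {mttd_hours:.1f} hours")
```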

Our ability to detect the adversary does not come without costs to the cyber security organization. These costs are realized in the creation, running, and maintenance of detections, the resources spent reviewing the associated alerts, and the actions taken based on those alerts. Later in this chapter, we will review the detection engineering workflow; the time required to perform that workflow defines the cost of creating a detection. For example, researching approaches to a technique is necessary to improve the coverage and durability of a detection, but it also increases the cost of creation. As a detection engineer, you must understand that the complexity of a detection affects future analysts’ ability to understand and maintain it. It also affects the efficiency of running the detection (both positively and negatively). Maintaining the detections within an organization is an ongoing process. Staleness describes whether a detection retains its effectiveness or value: is the technique or tool still being actively used? Is the detection looking for something that has been fully patched, or protecting infrastructure or software that is no longer on your network?

Each alert that an analyst must review comes at a cost. The confidence of a detection measures the probability that an alert is a true positive – that is, that the alert was triggered under the expected conditions. However, tuning a detection to reduce its false positive rate can decrease the detection’s coverage and result in attacks going unidentified. In contrast, the noisiness of a detection identifies how often it creates an alert that does not result in remediation. Noisiness might result from low confidence – that is, a high false positive rate – but it can also be related to the impact of the detection. Understanding the potential impact allows us to measure the importance or severity of what has been detected.

For example, a detection might identify reconnaissance scanning of the network. Despite the confidence in the detection, the lack of actionability for this activity might make the detection unacceptably noisy. Each organization must identify its tolerance for false positives when tuning its detections. However, the confidence of a detection and its associated potential impact can be used to prioritize an organization’s alerts. In Chapter 5, we will review how low-fidelity detections can be valuable without significantly affecting analyst productivity.
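
A minimal sketch of how these ideas might be quantified: confidence is estimated from historical alert outcomes (true versus false positives), and each detection carries an assumed impact score on a 1-5 scale. The detection names and numbers are invented for illustration:

```python
# Historical alert outcomes per detection, plus an assumed impact score (1-5).
alert_history = {
    "recon_network_scan":       {"true_positives": 40, "false_positives": 10, "impact": 1},
    "credential_dumping_lsass": {"true_positives": 4,  "false_positives": 1,  "impact": 5},
    "c2_beacon_dns":            {"true_positives": 9,  "false_positives": 21, "impact": 4},
}

def confidence(stats):
    """Estimated probability that an alert from this detection is a true positive."""
    total = stats["true_positives"] + stats["false_positives"]
    return stats["true_positives"] / total if total else 0.0

# Prioritize detections by confidence weighted by potential impact.
ranked = sorted(
    alert_history.items(),
    key=lambda item: confidence(item[1]) * item[1]["impact"],
    reverse=True,
)

for name, stats in ranked:
    score = confidence(stats) * stats["impact"]
    print(f"{name}: confidence={confidence(stats):.2f}, "
          f"impact={stats['impact']}, priority={score:.2f}")
```

Note how the reconnaissance detection, despite its high confidence, ranks lowest because of its limited impact, mirroring the scenario described above.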

The actionability of a detection defines how easy it is for a SOC analyst to leverage the detection to either further analyze the threat or remediate it. This does not mean that every detection must have an immediate action or response. A detection may have such low confidence that it is not worth immediately investigating or responding to; instead, the action associated with the alert is to increase confidence in other related identified activity or to support potential root cause analysis. Unactionable intelligence, however, has limited value. The specificity of a detection supports actionability by explaining what was detected. As an example, a machine learning model may provide increased coverage with a high confidence level but may be unable to explain specifically why an alert was created. This lack of specificity, such as failing to identify the malware family, could reduce actionability by not identifying the capabilities, persistence mechanisms, or other details about the malware required to properly triage or remediate the threat.

Lastly, when evaluating a detection, we must look at the cost to the adversary. While we will not, in most cases, have an inside look at the detailed costs associated with implementing an attack, we can use indirect evidence to estimate adversary cost. Knowledge of how easily an adversary can evade a detection, such as by referencing the Pyramid of Pain, can provide guidance for ranking the cost to the adversary. As an example, the cost of changing a malware hash is significantly less than the cost of changing the malware’s C2 protocol. The volatility of an attacker’s infrastructure, tools, and procedures measures how often the attacker changes their attack in a way that would evade the detection. Identifying the parts of an attack with lower volatility allows the defender to increase the durability of their detections.
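
The sketch below illustrates one way to turn that guidance into a rough ranking, assigning an assumed relative evasion cost to each Pyramid of Pain level and sorting detections by it. The numeric scores are arbitrary placeholders, not values defined by the model:

```python
# Assumed relative cost to the adversary of changing each Pyramid of Pain
# level (higher = more painful to evade). The ordering follows the pyramid;
# the numbers themselves are arbitrary, for illustration only.
PYRAMID_OF_PAIN_COST = {
    "hash_value": 1,
    "ip_address": 2,
    "domain_name": 3,
    "network_host_artifact": 4,
    "tool": 5,
    "ttp": 6,
}

# Hypothetical detections mapped to the indicator level they rely on.
detections = {
    "known_malware_hash_match": "hash_value",
    "c2_protocol_anomaly": "ttp",
}

# Rank detections by how expensive they are for the adversary to evade.
for name, level in sorted(
    detections.items(), key=lambda item: PYRAMID_OF_PAIN_COST[item[1]], reverse=True
):
    print(f"{name}: evasion cost score {PYRAMID_OF_PAIN_COST[level]} ({level})")
```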

The benefits of a detection engineering program

When selling the concept of a detection engineering program to executives, there’s only one justification that matters: a detection engineering program dramatically reduces the risk that a sophisticated adversary can penetrate their network and wreak havoc on their company. While this should be true of every aspect of your cyber security organization, each organization achieves it differently. A detection engineering program differs from other aspects of a cyber security program by allowing the organization to respond to new attacks quickly. It can leverage internal intelligence about adversaries targeting the organization’s industry, as well as the specifics of its network, to customize detections.