Network Forensics

Ric Messier

Description

Intensively hands-on training for real-world network forensics

Network Forensics provides a uniquely practical guide for IT and law enforcement professionals seeking a deeper understanding of cybersecurity. This book is hands-on all the way—by dissecting packets, you gain fundamental knowledge that only comes from experience. Real packet captures and log files demonstrate network traffic investigation, and the learn-by-doing approach relates the essential skills that traditional forensics investigators may not have. From network packet analysis to host artifacts to log analysis and beyond, this book emphasizes the critical techniques that bring evidence to light.

Network forensics is a growing field, and is becoming increasingly central to law enforcement as cybercrime becomes more and more sophisticated. This book provides an unprecedented level of hands-on training to give investigators the skills they need.

  • Investigate packet captures to examine network communications
  • Locate host-based artifacts and analyze network logs
  • Understand intrusion detection systems—and let them do the legwork
  • Have the right architecture and systems in place ahead of an incident

Network data is always changing, and is never saved in one place; an investigator must understand how to examine data over time, which involves specialized skills that go above and beyond memory, mobile, or data forensics. Whether you're preparing for a security certification or just seeking deeper training for a law enforcement or IT role, you can only learn so much from concept; to thoroughly understand something, you need to do it. Network Forensics provides intensive hands-on practice with direct translation to real-world application.


Page count: 669

Publication year: 2017




Table of Contents

Cover

Title Page

Introduction

What This Book Covers

How This Book Is Organized

1 Introduction to Network Forensics

What Is Forensics?

Incident Response

The Need for Network Forensic Practitioners

Summary

References

2 Networking Basics

Protocols

Request for Comments

Internet Registries

Internet Protocol and Addressing

Transmission Control Protocol (TCP)

User Datagram Protocol (UDP)

Ports

Domain Name System

Support Protocols (DHCP)

Support Protocols (ARP)

Summary

References

3 Host-Side Artifacts

Services

Connections

Tools

Summary

4 Packet Capture and Analysis

Capturing Packets

Packet Analysis with Wireshark

Network Miner

Summary

5 Attack Types

Denial of Service Attacks

Vulnerability Exploits

Insider Threats

Evasion

Application Attacks

Summary

6 Location Awareness

Time Zones

Using whois

Traceroute

Geolocation

Location-Based Services

WiFi Positioning

Summary

7 Preparing for Attacks

NetFlow

Logging

Antivirus

Incident Response Preparation

Security Information and Event Management

Summary

8 Intrusion Detection Systems

Detection Styles

Host-Based versus Network-Based

Architecture

Alerting

Summary

9 Using Firewall and Application Logs

Syslog

Event Viewer

Firewall Logs

Common Log Format

Summary

10 Correlating Attacks

Time Synchronization

Packet Capture Times

Log Aggregation and Management

Timelines

Security Information and Event Management

Summary

11 Network Scanning

Port Scanning

Vulnerability Scanning

Port Knocking

Tunneling

Passive Data Gathering

Summary

12 Final Considerations

Encryption

Cloud Computing

The Onion Router (TOR)

Summary

End User License Agreement



List of Illustrations

Chapter 2

Figure 2.1 The Open Systems Interconnection seven layer model.

Figure 2.2 The top of RFC 1.

Figure 2.3 A whois lookup on 4.2.2.1.

Figure 2.4 IP header diagram.

Figure 2.5 TCP header diagram.

Figure 2.6 SYN message.

Figure 2.7 UDP header diagram.

Figure 2.8 Client to server communication.

Figure 2.9 Domain hierarchy.

Figure 2.10 Recursive DNS query diagram.

Figure 2.11 DNS query trace.

Figure 2.12 Layer 2 broadcast request.

Figure 2.13 DHCP ACK message.

Figure 2.14 ARP request.

Chapter 3

Figure 3.1 Windows Services applet.

Figure 3.2 Windows Service Properties dialog.

Figure 3.3 macOS launch daemons.

Figure 3.4 Linux service scripts.

Figure 3.5 Connection list from netstat.

Figure 3.6 Protocol-based network statistics from netstat.

Figure 3.7 nbtstat output.

Figure 3.8 ifconfig output.

Figure 3.9 ipconfig output.

Figure 3.10 TCPView.

Figure 3.11 Looking at remotely opened files with PsFile.

Figure 3.12 Network listeners with Process Explorer.

Figure 3.13 IP statistics using ntop.

Figure 3.14 Top talkers using ntop.

Figure 3.15 Task Manager Ethernet display.

Figure 3.16 Resource Monitor.

Figure 3.17 ARP cache.

Figure 3.18 /proc entry for Nginx web server.

Chapter 4

Figure 4.1 The Wireshark interface.

Figure 4.2 The Wireshark Interface.

Figure 4.3 Copper network tap.

Figure 4.4 Ettercap in Curses mode.

Figure 4.5 Wireshark decode.

Figure 4.6 Name Resolution view.

Figure 4.7 Header field values.

Figure 4.8 Raw packet data.

Figure 4.9 Statistics menu.

Figure 4.10 Conversations view.

Figure 4.11 Protocol Hierarchy view.

Figure 4.12 IPv4 statistics view.

Figure 4.13 Follow stream.

Figure 4.14 Exporting files.

Figure 4.15 Files shared over SMB.

Figure 4.16 Files captured using Network Miner.

Figure 4.17 Hosts tab in Network Miner.

Chapter 5

Figure 5.1 tcpdump of a SYN flood.

Figure 5.2 Packet capture of a Teardrop attack.

Figure 5.3 Malformed HTTP request.

Figure 5.4 UDP flood packet capture.

Figure 5.5 SNMP request.

Figure 5.6 Smurf amplifier registry.

Figure 5.7 DNS ANY request result.

Figure 5.8 distcc attack.

Figure 5.9 Buffer overflow attack.

Figure 5.10 Attack evasion using fragmentation.

Figure 5.11 Attack against Firefox browser.

Figure 5.12 SQL injection attack.

Figure 5.13 XSS attack.

Chapter 6

Figure 6.1 Windows time zones.

Figure 6.2 Windows registry time zone settings.

Figure 6.3 whois lookup.

Figure 6.4 Geolocation lookup.

Figure 6.5 db-ip.com lookup.

Figure 6.6 GeoIP lookup using Wireshark.

Figure 6.7 Geolocation lookup.

Chapter 7

Figure 7.1 NetFlow diagram.

Figure 7.2 NetFlow output.

Figure 7.3 NetFlow output in Microsoft Excel.

Figure 7.4 NetFlow output with decimal values.

Figure 7.5 Sample syslog configuration file.

Figure 7.6 syslog entries.

Figure 7.7 Windows Event Viewer.

Figure 7.8 A Windows event.

Figure 7.9 Using PowerShell for Windows events.

Figure 7.10 PFSense firewall state table.

Figure 7.11 Nagios monitoring console interface.

Figure 7.12 Splunk data configuration.

Figure 7.13 Splunk patterns.

Figure 7.14 GRR hunt.

Chapter 9

Figure 9.1 Windows Event Viewer displaying XML.

Figure 9.2 Windows Event Viewer displaying details.

Figure 9.3 Windows Event Viewer categories.

Figure 9.4 Windows Event Viewer applications and services.

Figure 9.5 Event Viewer Create Custom View dialog.

Figure 9.6 Event Viewer actions.

Figure 9.7 Cleared system log.

Chapter 10

Figure 10.1 Wireshark showing Epoch time in seconds.

Figure 10.2 Hex editor showing times from PCAP.

Figure 10.3 Windows Event Viewer.

Figure 10.4 Creating a subscription in Windows Event Viewer.

Figure 10.5 Creating listener in Splunk.

Figure 10.6 Displaying logs in Splunk.

Figure 10.7 Illogical timeline.

Figure 10.8 Plaso filetypes.

Figure 10.9 PacketTotal console view.

Figure 10.10 PacketTotal timeline view.

Figure 10.11 Wireshark conversation.

Chapter 11

Figure 11.1 OpenVAS start page.

Figure 11.2 OpenVAS results.

Figure 11.3 Nexpose site creation.

Figure 11.4 Nexpose site creation.

Chapter 12

Figure 12.1 TLS handshake in Wireshark.

Figure 12.2 Amazon's web server certificate.

Figure 12.3 Amazon EC2 workflows.

Figure 12.4 Amazon EC2 virtual machine choices.

Figure 12.5 Microsoft OneDrive interface.

Figure 12.6 Microsoft Word new document.

Figure 12.7 Google transparency report.

Figure 12.8 Microsoft transparency report.

Figure 12.9 Wireshark capture of Tor communication.

Figure 12.10 Ahmia search site.

List of Tables

Chapter 2

Table 2.1 Private Addresses

Chapter 3

Table 3.1 TCP Connection States

Chapter 5

Table 5.1 Martian Addresses

Chapter 7

Table 7.1 Syslog Facilities

Table 7.2 Syslog Severity Levels

Chapter 9

Table 9.1 Syslog Facilities

Table 9.2 Syslog Severities

Network Forensics

Ric Messier

Introduction

One of the best things about the different technology fields, should you have the stomach for it—and many don't—is the near constant change. Over the decades I have been involved in technology-based work, I've either had to or managed to reinvent myself and my career every handful of years or less. The world keeps changing and in order to maintain pace, we have to change too. In one of my incarnations that ended not many months ago now, I ran graduate and undergraduate programs at Champlain College in its online division. One of my responsibilities within that role was overseeing development of course materials. Essentially, either I or someone I hired developed the course and then I hired people who could teach it, often the people who did the development, though not always.

In the process of developing a course on network forensics, I discovered that there wasn't a lot of material around that covered it. At the time, I was able to find a single book but it wasn't one that we could make use of at the college because of policies focused on limiting costs to students. As a result, when I was asked what my next book would be, a book on network forensics that would explore in more detail the ideas I think are really important to anyone who is doing network investigations made the most sense to me.

What This Book Covers

I like to understand the why and how of things. I find it serves me better. When I understand the why and how, I don't get stuck in a dinosaur graveyard because at its core, technology continues to cycle around a number of central ideas. This has always been true. When you understand what underpins the technology, you'll see it's a variation on something you've seen before, if you stick around long enough. As a result, what is covered in this book is a lot of “how and why” and less of “these are the latest trendy tools” because once you understand the how and why, once you get to what's underneath, the programs can change and you'll still understand what it is you are looking at, rather than expecting the tools to do the work for you.

This is the reason why this book, while offering up some ideas about investigations, is really more about the technologies that network investigations are looking at. If you understand how networks work, you'll know better where to look for the information you need. You'll also be able to navigate changes. While we've moved from coax to twisted pair to optical to wireless, ultimately the protocols have remained the same for decades. As an example, Ethernet was developed in the 1970s and your wireless network connection, whether it's at home or at your favorite coffee shop down the street, still uses Ethernet. We're changing the delivery mechanism without changing what is being delivered. Had you learned how Ethernet worked in the early 1980s, you could look at a frame of Ethernet traffic today and still understand exactly what is happening.

The same is true of so-called cloud computing. In reality, it's just the latest term for outsourcing or even the service bureaus that were big deals in the '70s and '80s. We outsource our computing needs to companies so we don't have to deal with any of the hassle of the equipment and we can focus on the needs of the business. Cloud computing makes life much easier because delivery of these services has settled down to a small handful of well-known protocols. We know how they all work so there is no deciphering necessary.

At the risk of over-generalizing, for many years now there has been a significant emphasis on digital forensics, seen particularly through the lens of any number of TV shows that glorify the work of a forensic investigator and, in the process, get huge chunks of the work and the processes completely wrong. So-called dead-box forensics has been in use for decades now, where the investigator gets a disk or a disk image and culls through all the files, and maybe even the memory image for artifacts. The way people use computers and computing devices is changing. On top of that, as more and more businesses are affected by incidents that have significant financial impact, they have entirely different needs.

The traditional law enforcement approach to forensics is transitioning, I believe, to more of a consulting approach or an incident response at the corporate level. In short, there will continue to be a growing need for people who can perform network investigations as time goes on. With so many attackers in the business of attacking—their attacks, thefts, scams, and so on are how they make their living—the need for skilled investigators is unlikely to lessen any time in the near future. As long as there is money to be made, you can be sure the criminal incidents will continue.

As you read through this book, you will find that the “what's underneath” is at the heart of everything. We'll talk about a lot of technologies, protocols, and products, but much of it is with the intention of demonstrating that the more things change, the more they stay the same.

How to Use This Book

I've always been a big believer in a hands-on approach to learning. Rather than just talking about theories, you'll look at how the tools work in the field. However, this is not a substitute for actually using them yourself. All of the tools you look at in this book are either open source or have community editions, which means you can spend time using the tools yourself by following along with the different features and capabilities described in each chapter. It's best to see how they all behave in your own environment, especially since some of the examples provided here may look and behave differently on your systems because you'll have different network traffic and configurations. Working along with the text, you'll not only get hands-on experience with the tools, but you will see how everything on your own systems and networks behaves.

How This Book Is Organized

This book is organized so that chapter topics more or less flow from one to the next.

Chapter 1 provides a foundational understanding of forensics. It also looks at what it means to perform forensic investigations, what an incident response might look like, and why both are important. You may choose to skim or skip this chapter, depending on how well-versed you are in the basic legal underpinnings and concepts of forensics and incident response.

Chapter 2 provides the foundation of what you should know about networking and protocols, because the rest of the book will be looking at network traffic in a lot of detail. If you are unfamiliar with networking and the protocols we use to communicate across a network, you should spend a fair amount of time here, getting used to how everything is put together.

Chapter 3 covers host-side artifacts. After all, not everything happens over the bare wire. Communication originates and terminates from end devices like computers, tablets, phones, and a variety of other devices. When communication happens between two devices, there are traces on those devices. We'll cover what those artifacts might be and how you might recover them.

Chapter 4 explains how you would go about capturing network traffic and then analyzing it.

Chapter 5 talks about the different types of attacks you may see on the network. Looking at these attacks relies on the material covered in Chapter 4, because we are going to look at packet captures and analyze them to look at the attack traffic.

Chapter 6 is about how a computer knows where it is and how you can determine where a computer is based on information that you have acquired over the network. You can track this down in a number of ways to varying levels of granularity without engaging Internet service providers.

Chapter 7 covers how you can prepare yourself for a network investigation. Once an incident happens, the network artifacts are gone because they are entirely ephemeral on the wire. If you are employed by or have a relationship with a business that you perform investigations for, you should think about what you need in place so that when an incident happens, you have something to look at. Otherwise you will be blind, deaf, and dumb.

Chapter 8 continues the idea of getting prepared by talking about intrusion detection systems and their role in a potential investigation.

Along the same lines, Chapter 9 is about firewalls and other applications that may be used for collecting network-related information.

Chapter 10 covers how to correlate all of that information once you have it in order to obtain something that you can use. This includes the importance of timelines so you can see what happened and in what order.

Chapter 11 is about performing network scans so you can see what the attacker might see. Network scanning can also tell you things that looking at your different hosts may not tell you.

Finally, Chapter 12 is about other considerations. This includes cryptography and cloud computing and how they can impact a network forensic investigation.

Once you have a better understanding of all of the different types of network communications and all of the supporting information, I hope you will come away with a much better understanding of the importance of making use of the network for investigations. I hope you will find that your skills as a network investigator improve with what you find here.

1 Introduction to Network Forensics

In this chapter, you will learn about:

What network forensics is

Evidence handling standards

Verification of evidence

Sitting in front of his laptop, he stares at a collection of files and reflects on how easy it was to get them. He sent an e-mail to a sales manager at his target company—almost silly how obviously fake it was—and within minutes he knew that he had access to the sales manager's system. It took very little time for him to stage his next steps, which included installing a small rootkit to keep his actions from being noticed, and to ensure his continued presence on the system wouldn't be detected. It also provided him continued access without the sales manager needing to open the e-mail message again. That had taken place weeks back and so far, there appeared to be no evidence that anyone had caught on to his presence not only on the system but, by extension, on the business network the sales manager's laptop was connected to.

It was this network that he was poring over now, looking at a collection of files related to the business's financial planning. There were also spreadsheets including lists of customer names, contact information, and sales projections to those customers. No really big score but definitely some interesting starting points. Fortunately, this user was well-connected with privileges in the enterprise network. This ended up giving him a lot of network shares to choose from, and for the last several weeks he has been busy looking for other systems on the network to take over. Getting access to the address book on this system was really helpful. It allowed him to send messages looking as though they came from this user, sending co-workers to a website that would compromise their systems with some client software, adding them to the growing botnet he had control over. File shares were also good places to not only get documents to make use of, but also to drop some more infected files. The key loggers that were installed have generated some interesting information and keeping an eye on all of that is an ongoing project.

Ultimately, this is becoming quite a little stronghold of systems. It's not exactly the best organization he's been in with respect to quality data, whether intellectual property, large caches of credit card numbers, or even health care information. However, having more systems to continue building the botnet is always good and at some point months or even years down the road, more interesting information may show up. In the meantime, there may be vendors who have trust relationships with this network that could be exploited.

Once inside the network, he has so many potential places to go and places to probe. There is a lot of data to be found and even though it appears that disk encryption is being used fairly consistently across the organization, all of that data is accessible to him as an authenticated user on the network. Wiping logs in places where they were turned on was trivial. This little network was all his for the taking for apparently as long as he felt it would be useful.

Does this sound scary at all to you? In reality, this is far too common and although it's dramatized, it's not that far off from how networks become compromised. Not long ago, technical intrusions were more common than the type of attack just described. In a technical intrusion, attackers use software vulnerabilities to get into a system remotely. This type of attack targets servers sitting in a data center because those are exposed to the outside world. That's not the case anymore. As we continue to learn, attackers are using people to get into systems and networks. This was vividly illustrated in 2013 in Mandiant's report, “APT1: Exposing One of China's Cyber Espionage Units” (https://www.fireeye.com/content/dam/fireeye-www/services/pdfs/mandiant-apt1-report.pdf). Attackers send e-mail with malicious attachments, get someone to visit a website, or just simply park malicious software on a known website and wait for people to visit in order to infect their systems. Unfortunately, this is the world we now live in, a world where companies who haven't had systems compromised are becoming the minority rather than the majority.

This is one reason forensics is becoming such a hot skill to have. Well, that and the fact that the folks on various TV shows make it seem really cool, interesting, and easy. The reality is a different story, of course. Although the news and other media outlets make it seem as though attacks are carried out by solo hackers (an ambiguous and misleading word), the majority of outside attacks businesses are subject to today are perpetrated by well-funded and organized criminal enterprises. There is money to be made from these crimes; criminals are starting to use ransom and extortion to go directly for the money rather than trying to steal something to sell off later on.

The term forensics can be ambiguous. Because of that, it's helpful to have an understanding of what forensics currently is and isn't. Particularly when it comes to network forensics, it's more and more becoming part of incident response. Digital forensics practitioners have to be capable of more than locating images and deleted files that may be common for the large volume of child pornography cases that traditional law enforcement practitioners may be looking for. Sometimes, knowing how to extract files from a computer system isn't enough because information can be obscured and deleted very effectively. Certainly operating system forensics is important, but sometimes it takes more than just understanding what happened on the system itself.

Network forensics is becoming an extremely important set of skills when it comes to situations like the one described at the beginning of the chapter. Rather than relying on what the operating system and disks may be able to tell you, a network forensic investigator can go to the network itself and collect data of an attack in progress or look up historical information that may be available after a company has suffered a security breach with someone taking up long-term residence, someone who has the ability to observe and piece together what they see into a coherent picture. This coherent picture may include information from other sources such as firewalls, application logs, antivirus logs, and a number of other sources.

One advantage to watching the network is that the network can't lie. Applications may not be doing what they are supposed to be doing. Logs may not be available or they may have been wiped. There may be root kits installed to obscure what is happening on a system. Once a network transmission is sent out on the wire, though, the bits are the bits.

Because of situations like the one described in the chapter-opening scenario, it's important to know exactly what forensics is as a practice as well as its role in incident response. Finally, there is a need for not only forensic practitioners in general because of the large number of incidents that occur in businesses around the world, but specifically, there is a need for network forensic practitioners.

What Is Forensics?

Before going further, let's define some terms.

The word forensics comes from the Latin forensis, meaning of or belonging to the forum, that is, to the public. It is related to the word forum. If you have ever been involved in debate teams, you may be familiar with forensics as being related to debate and argumentation. If you are skilled in forensics, you may make a good lawyer. It is from this sense of argument made in public that the word's meaning has extended beyond debate and argumentation. Investigating evidence, in the field or in the lab, to be used in a court case is the practice of forensics because the activity is related to the courts or trials.

This chapter expands on that by talking more specifically about digital forensics. Computer or digital forensics is the practice of investigating computers, digital media, and digital communications for potential artifacts. In this context, the word artifact indicates any object of interest. We wouldn't use the word evidence unless it's actually presented as part of a court case. You may say that an artifact is potential evidence. It may end up being nothing, but because it was extracted from the piles of data that may have been handed to the investigator, we need to refer to it in a way that makes clear the object is something of interest and potentially warrants additional investigation.

Because the word forensics is used in legal settings, you will often find that talk about forensics is involved with law enforcement. Traditionally, that has been the case. However, because many of the techniques and skills that are used by law enforcement are the same as those that may be practiced by an incident response specialist—someone who is investigating a suspicious event or set of events within a business setting—the word forensics also describes the process of identifying digital artifacts within a large collection of data, even in situations where law enforcement isn't involved.

For our purposes, the data we are talking about collecting is network information. This may be packet captures, which are bit-for-bit copies of all communication that has passed across a network interface. The data collected may also come in the form of logs or aggregated data like network flow information.

Any time you handle information that could potentially be used in a court case, it's essential that it be maintained in its original condition, and that you can prove that it hasn't been tampered with. There are ways to ensure that you can demonstrate that the evidence hasn't been tampered with, including maintaining documentation demonstrating who handled it. Additionally, having verifiable proof that the evidence you had at the end is the same as at the beginning is important. The reason for this is that in a court case, technical evidence, such as that from a digital forensic examination, is expected to adhere to an accepted set of standards.

Handling Evidence

The United States of America uses a common law legal system. This is at the federal as well as the state level, with the exception of the state of Louisiana, which uses a civil law system. The United Kingdom also uses a common law system. This means that legislatures enact laws and those laws are then interpreted by the courts for their applicability to specific circumstances. After a court has issued a ruling on a case, that case can then be used as a precedent in subsequent cases. This way every court doesn't have to make a wholly original interpretation of a law for every case. They build on previous cases to create a common interpretation of the law.

When it comes to addressing technical evidence in court cases, a couple of cases are worth understanding. The first, Frye vs. United States, was a 1923 case related to the admissibility of a polygraph test. As we continue to make technological advances, courts can have a hard time keeping up. The Frye standard was one of the first attempts to codify a process that could help ensure that technical or scientific evidence being offered was standardized or accepted within the technical or scientific community. The courts needed a way to evaluate technical or scientific evidence to ensure that it was able to help the trier of facts determine the truth in a trial.

In essence, the Frye standard says that any scientific or technical evidence that is presented before the court must be generally accepted by a meaningful portion of the community of those responsible for the process, principle, or technique being presented. Acceptance by only a small number of colleagues who are also working in a related area doesn't necessarily rise to the standard of general acceptance by the community. Scientific evidence such as that resulting from DNA testing or blood type testing has passed this standard of reliability and veracity and is therefore allowed to be presented in a trial.

The federal court system and most U.S. states have moved past the Frye standard. Instead, they rely on the case Daubert vs. Merrell Dow Pharmaceuticals, Inc. Essentially, the standard of determining whether scientific or technical evidence is relevant hasn't changed substantially. What the majority opinion in the Daubert case argued was that because the Federal Rules of Evidence (FRE) were passed in 1975, those should supersede Frye, which was older. The Supreme Court ruled that in cases where the FRE was in conflict with common laws, such as the standard set by Frye, the FRE had precedence.

The intention of the continuing progress of case law related to technical evidence is to ensure that the evidence presented can be used to assist the trier of facts. The role of the trier of facts in a court case is to come to the truth of the situation. Frye was used to make sure technical evidence was accepted by a community of experts before it could be considered admissible in court. Daubert said that because the Federal Rules of Evidence came later than Frye, it should become the standard in cases of technical evidence. While expert witnesses are used to explain the evidence, the expert witness alone is not sufficient. The witness is a stand-in at trial for the evidence. A witness can be questioned and can provide clarifying information that the evidence directly cannot.

When it comes to digital evidence, we have to consider issues related to the appropriate handling of the data because it can be easily manipulated. For that reason, there's a risk that digital evidence could be considered hearsay if it's mishandled because of the FRE requirements regarding hearsay evidence. Hearsay is relevant here because hearsay is any evidence that is not direct, meaning that it doesn't come from a primary source that can be questioned by the opposition. In short, because there isn't someone sitting on the stand indicating what they saw, it's potentially hearsay unless it is a recording of regular business activities. Of course, the legal aspects are much more complicated than this short discussion might imply, but those are the essentials for those of us without law degrees.

All of this is to say that we have to handle potential evidence carefully so it cannot be challenged as inauthentic or as an inaccurate representation of events. Fortunately, there are ways that we can not only demonstrate that nothing has changed but also maintain a complete record of who has handled the evidence. It is essential that, once evidence has been acquired, it be documented clearly from the point of acquisition using the techniques outlined in the following sections.

Cryptographic Hashes

The best way to demonstrate that evidence has not changed from the point of acquisition is to use a cryptographic hash. Let's say, for example, that you have an image of a disk drive that you are going to investigate. Or, for our purposes, what may be more relevant is to say that we have a file that contains all of the network communications from a particular period of time. In order to have something we can check against later, we would generate a cryptographic hash of those files. The cryptographic hash is the result of a mathematical process that, when given a particular data set as input, generates a fixed-length value output. That fixed-length value can be verified later on with other hashes taken from the same evidence. Because hashing a file will always generate the same value (that is, output), as long as the file (the input data) hasn't changed, courts have accepted cryptographic hashes (of sufficient complexity) as a reliable test of authenticity when it comes to demonstrating that the evidence has not changed over a period of time and repeated interactions.

Two separate sets of data creating the same hash value is called a collision. The problem of determining the collision rate of a particular algorithm falls under a particular probability theory called the birthday paradox. The birthday paradox says that in order to get a 50% probability that two people in a given room have the same birthday, month and day, all you need is to have 23 people in the room. In order to get to 100% probability, you would need 367 people in the room. There is a very slim potential for having 366 people in a room who all have a different birthday. To guarantee that you would have a duplicate, you would need to have 367 (365 + 1 for leap day + 1 to get the duplicate). This particular mathematical problem has the potential to open doors for attacks against the hash algorithm.
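To get a feel for the numbers involved, the following short Python sketch (my own illustration, not something drawn from a forensic tool) computes the birthday-problem probability using the standard approximation. The same formula that gives a roughly 50% chance of a shared birthday among 23 people shows why an accidental collision in a 128-bit hash space is vanishingly unlikely even across a very large set of evidence files; deliberate attacks on the algorithm are the practical concern.

from math import expm1

def collision_probability(n, d):
    # Probability that at least two of n values drawn uniformly from a space of
    # d possible values collide (the birthday problem), using the approximation
    # 1 - exp(-n(n-1)/(2d)); expm1 keeps the result accurate for tiny values.
    return -expm1(-n * (n - 1) / (2 * d))

print(collision_probability(23, 365))            # roughly 0.50
print(collision_probability(1_000_000, 2**128))  # roughly 1.5e-27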

When you hear cryptographic, you may think encryption. We are not talking about encrypting the evidence. Instead, we are talking about passing the evidence through a very complicated mathematical function in order to get a single output value. Hashing algorithms used for this purpose are sometimes called one-way functions because there is no way to get the original data back from just the hash value. Similarly, for a hash algorithm to be acceptable for verifying integrity, there should be no way to have two files with different contents generate the same hash value. This means that we can be highly confident that if we have one hash value each time we test a file, the content of that file hasn't changed because it shouldn't be possible to make any change to the content of the file such that the original hash value is returned. The only way to get the original hash value is for the data to remain unaltered.

NOTE

A cryptographic hash takes into consideration only the data that resides within the file. It does not use any of the metadata like the filename or dates. As a result, you can change the name of the file and the hash value for that file will remain the same.

NOTE

Cryptography is really just about secret writing, which isn't necessarily the same as encryption. Hashes are used in encryption processes as a general rule because they are so good at determining whether something has changed. If you have encrypted something, you want to make sure it hasn't been tampered with in any fashion. You want to know that what you receive is exactly what was sent. The same is true when we are talking about forensic evidence.

For many years, the cryptographic hash standard used by most digital forensic practitioners and tools was Message Digest 5 (MD5). MD5 was created in 1992 and generates a 128-bit value that is typically represented as hexadecimal digits because that form is far shorter and easier to read than printing out all 128 binary bits. To demonstrate the process of hashing, I placed the following text into a file:

Hi, this is some text. It is being placed in this file in order to get a hash value from the file.

The MD5 hash value for that file is 2583a3fab8faaba111a567b1e44c2fa4. No matter how many times I run the MD5 hash utility against that file, I will get the same value back. The MD5 hash algorithm is highly non-linear, however. This means that a change to the file of a single bit will yield an entirely different result, and not just a result that is one bit different from the original hash. Every bit in the file will make a difference to the calculation. If you have an extra space or an end of line where there wasn't one in the original input, the value will be different. To demonstrate this, I changed the first letter of the text file from an H to a G, about as small a change as can be made since the ASCII value for H is 72 and the value for G is 71, a difference of one. The hash value resulting from this altered file is 2a9739d833abe855112dc86f53780908. The two values have nothing obvious in common, demonstrating the complexity of the hashing function.
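You can reproduce this behavior with any implementation of the algorithm. The short Python sketch below uses the standard hashlib module to hash the sample sentence and a copy with the first letter changed; keep in mind that the digests will only match the values above if the bytes being hashed are identical to the original file, including any trailing newline.

import hashlib

original = "Hi, this is some text. It is being placed in this file in order to get a hash value from the file."
altered = "G" + original[1:]   # change the first character from H (ASCII 72) to G (ASCII 71)

# Every correct MD5 implementation produces the same digest for the same input bytes.
print(hashlib.md5(original.encode("ascii")).hexdigest())
print(hashlib.md5(altered.encode("ascii")).hexdigest())

The two digests printed have no obvious relationship to one another, which is exactly the property that makes a hash useful for detecting changes.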

NOTE

MD5 is the algorithm but there are countless implementations of that algorithm. Every program that can generate an MD5 hash value contains an implementation of the MD5 algorithm.

One of the problems with the MD5 algorithm, though, is that it is only 128 bits. This isn't an especially large space in which to be generating values, leading it to be vulnerable to collisions. As a result, for many purposes, the MD5 hash has been superseded by the Secure Hash Algorithm 1 (SHA-1) hash. The SHA-1 hash generates a 160-bit value, which can be rendered using 40 hexadecimal digits. Even this isn't always considered large enough. As a result, the SHA-2 standard for cryptographic hashing has several alternatives that generate longer values. One that you may run into, particularly in the encryption space, is SHA-256, which generates a 256-bit value. Where the 128-bit MD5 hash algorithm has the potential to generate roughly 3.4 × 10^38 unique values, the SHA-256 hash algorithm can yield 1.15 × 10^77 unique values. It boggles the mind to think about how large those numbers are, frankly. Generating a SHA-1 hash against our original text file gives us a value of 286f55360324d42bcb1231ef5706a9774ed0969e. The SHA-256 hash value of our original file is 3ebcc1766a03b456517d10e315623b88bf41541595b5e9f60f8bd48e06bcb7ba. These are all different values that were generated against the same input file.

One thing to keep in mind is that any change at all to the data in the source file will generate a completely different value. Adding or removing a line break, for example, would constitute removing an entire character from the file. If that were done, the file may look identical to your eyes but the hash values would be completely different. To see the difference, you would have to view the file using something like a hexadecimal editor to see how it is truly represented in storage and not just how it is displayed.

You can use a number of utilities to generate these values. The preceding values were generated using the built-in, command-line utilities on a Mac OS system. Linux has similar command-line utilities available. On Microsoft Windows, you can download a number of programs, though Microsoft doesn't include any by default. Microsoft does, however, have a utility that you can download that will generate the different hash values for you. The name of the utility is File Checksum Integrity Verifier (FCIV).
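If you would rather script the process than depend on platform-specific utilities, most languages include these algorithms in their standard library. The following Python sketch, with a placeholder filename, computes MD5, SHA-1, and SHA-256 digests of a file in a single pass, reading in chunks so that a large packet capture never has to fit in memory.

import hashlib

def hash_evidence(path, algorithms=("md5", "sha1", "sha256")):
    # Compute several digests of the same file in one pass, reading in chunks
    # so large captures or disk images are never loaded fully into memory.
    hashers = {name: hashlib.new(name) for name in algorithms}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}

print(hash_evidence("evidence.pcap"))   # hypothetical capture file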

Any time you obtain a file such as a packet capture or a log file, you should immediately generate a hash value for that file. MD5 hash values are considered acceptable in court cases as of the time of this writing, though an investigation would be more durable if algorithms like SHA-1 or SHA-256, which generate longer values, were to be used. MD5 continues to demonstrate flaws the longer it is used and those flaws may eventually make evidence verification from MD5 hashes suspect in a court case.

Over the course of looking at packet captures in Chapter 4, we will talk about some other values that perform similar functions. One of those is the cyclic redundancy check (CRC), which is also mathematically computed and is often used to validate that data hasn't been altered. These sorts of values, though, are commonly called checksums rather than hashes.
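The difference is easy to see side by side. In the minimal illustration below, zlib's CRC-32 produces a short checksum that is good at catching accidental corruption in transit, while SHA-256 produces a much longer value designed so that deliberately crafting different data with the same digest is not feasible, which is why checksums are not relied on to authenticate evidence.

import hashlib
import zlib

data = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

# CRC-32: a 32-bit checksum intended to detect accidental transmission errors.
print(hex(zlib.crc32(data)))

# SHA-256: a 256-bit cryptographic hash intended to resist deliberate tampering.
print(hashlib.sha256(data).hexdigest())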

Chain of Custody

Sometimes it seems as though TV shows like NCIS, CSI, Bones, and others that portray forensics simultaneously advance and set back the field of forensics. Although some of the technical aspects of forensics, including the language, are ridiculous, these shows do sometimes get things right. This was especially true in the early days of NCIS, as an example, where everything they collected was bagged and tagged. If evidence is handed off from one person to another, it must be documented. This documentation is the chain of custody. Evidence should be kept in a protected and locked location if you are going to be presenting any of it in court. Though this may be less necessary if you are involved in investigating an incident on a corporate network, it's still a good habit. For a start, as noted earlier in this chapter, you never know when the event you are investigating may turn from a localized incident to something where legal proceedings are required. As an example, the very first well-known distributed denial of service (DDoS) attack in February 2000 appeared as a number of separate incidents to the companies involved. However, when it came time to prosecute Michael Calce, known as Mafiaboy, the FBI would have needed evidence and that evidence would have come from the individual companies who were targets of the attacks—Yahoo, Dell, Amazon, and so on.

Even in the case of investigating a network incident in a business setting, documenting the chain of custody is a good strategy. This ensures that you know who was handling the potential evidence at any given time. It provides for accountability and a history. If anything were to go wrong at any point, including loss of or damage to the evidence, you would have a historical record of who was handling the evidence and why they had it.

Keeping a record of the date and time for handing off the evidence as well as who is taking responsibility for it and what they intend to do with it is a good chain-of-custody plan. It doesn't take a lot of time and it can be very important. As always, planning can be the key to success, just as lack of planning can be the doorway to failure. The first time you lose a disk drive or have it corrupted and that drive had been handed around to multiple people, you will recognize the importance of audit logs like chain-of-custody documentation. Ideally, you would perform a hash when you first obtain the evidence to ensure that what you are getting is exactly what you expect it to be. You should have a hash value documented so you will have something to compare your hash to in order to demonstrate that no changes have occurred.
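There is no single mandated format for a chain-of-custody record, but the fields are consistent: the item, its acquisition hash, and who released it, who received it, when, and why. The sketch below is one hypothetical way to keep such a log and re-verify the evidence hash at every hand-off; the field names and file name are illustrative only, not a standard.

import hashlib
from datetime import datetime, timezone

def sha256_of(path):
    # Hash the evidence file in chunks so large files are handled comfortably.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_transfer(log, path, acquisition_hash, released_by, received_by, purpose):
    # Re-hash the item at the hand-off and record who took custody and why.
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "item": path,
        "hash_matches_acquisition": sha256_of(path) == acquisition_hash,
        "released_by": released_by,
        "received_by": received_by,
        "purpose": purpose,
    })

custody_log = []
baseline = sha256_of("evidence.pcap")   # hash generated at acquisition
record_transfer(custody_log, "evidence.pcap", baseline,
                "on-call analyst", "forensic examiner", "timeline analysis")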

Incident Response

Incident response may be harder to get your head around if you are a forensic practitioner. If you are a system or network administrator trying to get your head around the idea of forensics, incident response should be old hat to you. When networks belonging to businesses or other organizations (schools, non-profits, governmental, and so on) are subject to a malware infestation, as an example, that would probably trigger an incident response team to get the incident under control as well as investigate its cause. Depending on whom you talk to, you may get different answers, but the process of incident response can be boiled down to four stages: preparation; detection and analysis; containment, eradication, and recovery; and post-incident activity.

What exactly is an incident? How does an incident differ from an event? This is another area where you may find that you get differing opinions depending on whom you talk to. Rather than getting into a deep discussion here, let's go with simple. An event is a change that has been detected in a system. This could be something as simple as plugging an external drive into a system. That will trigger a log message in most cases. That would be an event. Someone attempting to ping a system behind a firewall where the messages are blocked and logged may be an event. An event may even be updating system software, as in the case with a hot fix or a service pack.

An incident, on the other hand, is commonly something that is attributable to human interaction and is often malicious. An incident is always an event, because every incident would result in some sort of observable change to the system. If all of your web servers were infected by malware, that malware would be observable on the system. It would result in events on all of the systems and you would have an incident on your hands. A single system being infected with malware would be an event but wouldn't be enough to rise to a level where you would call an incident response team.

A forensic practitioner would obviously be necessary at the detection and analysis phase but they would typically be involved in the preparation stage as well. Over the course of the book, we will be going over some items that you may want to make sure are in place as an organization goes through preparation stages. Preparation is a very large category of activities, including planning, but from the standpoint of a forensic investigator, it is primarily when you make sure you will have what you need when it comes to doing an analysis. There may also be activity when it comes to eradication, to ensure that the source of the incident has been completely removed. Finally, a forensic investigator would be involved in post-incident activities for lessons learned and process improvement.

In most cases, you would have an incident response team, even if it is small and ad hoc, to deal with incidents because handling incidents is a process. The larger the organization and the more systems involved, the larger the incident response team would likely be. Creating a team up front would be another important activity when it comes to planning. Your organization, as part of the creation of security policies, standards, and processes, should create an incident response team or at least have documentation for how to handle an incident, should one occur. Considering that it's widely believed that a significant proportion of companies in the United States have been breached, meaning they have had attackers compromise systems to gain unauthorized access, “should one occur” is a bit euphemistic. In reality, I should say when an incident occurs. If you haven't had to deal with an incident, it may simply be a result of lack of appropriate detection capabilities.

Forensic practitioners are definitely needed as part of the incident response effort. They need not be full-time forensic practitioners, but simply people already employed at the company who happen to have the knowledge and skills necessary to perform a forensic investigation. They can get to the root cause of an incident, and that requires someone who can dig through filesystems and logs and look in other places within the operating system on the affected hosts.

Without understanding the root cause, it would be difficult to say whether the incident is under control. It would also be difficult to know whether you have found all of the systems that may be impacted because incidents, like unauthorized system access or malware infestations, will commonly impact multiple devices across a network. This is especially true when there is a large commonality in system deployments. In other words, if all systems are created from the same source image, they will all be vulnerable in the same way. Once an attacker finds a way into one, all of the others that have been built using the same image are easy targets.

The forensic investigator will need to be focused on identifying the source of the attack, whether it's a system compromise or a malware infection, to determine what may need to be addressed to make sure a subsequent, similar attack isn't successful. They will also need to be focused on finding any evidence that the attacker attempted to compromise or infect other hosts on the local network. If there is evidence of attempts against systems not on the organization's network, the incident response team should have the capability to reach out to other organizations, including a computer emergency response team (CERT) that may be able to coordinate attacks across multiple organizations.

This is where you may run into the need for the collected artifacts in a larger investigation and potential criminal action. Coordinating with law enforcement will help you, as a forensic investigator, determine your best course of action if there is evidence of either substantial damage or evidence that the attack involves multiple organizations. This is another area where planning is helpful—determining points of contact for local and federal law enforcement ahead of time for when an incident occurs.

The Need for Network Forensic Practitioners

In early 2016, a task force was assembled to discuss how best to approach educating more professionals capable of filling the thousands of jobs expected to be available in the coming years. While this is generally referred to as a need for cybersecurity workers, the term cybersecurity is fairly vague and covers a significant amount of ground. The federal government alone is planning large amounts of spending to make sure it can support a growing need for skilled and knowledgeable people to prevent attacks, defend against attacks, and then respond when an attack has been detected. The initial plan called for spending $3.1 billion on modernization, and if the plan is implemented properly, there will continue to be a need for people who are capable of responding to incidents.

This is just at the level of the federal government. Large consulting companies like Mandiant and Verizon Business as well as the large accounting companies that are also involved in security consulting are hiring a lot of people who have skills or knowledge in the area of forensics. When companies suffer a large-scale incident, particularly smaller or medium-sized companies that can't afford full-time staff capable of handling a complete response, they often bring in a third party to help them out. This has several advantages. One of them is that a third party is less likely to make any assumptions because they have no pre-existing knowledge of the organization. This allows them to be thorough rather than potentially skipping something in the belief they know the answer because of the way “it's supposed to work.” Hiring information technology people who are skilled in information security and forensics can be really expensive. This is especially true for smaller companies that may just need someone who knows a little networking and some Windows administration.

Large companies will often have a staff of people who are responsible for investigations, including those related to digital evidence. This means that the federal government, consulting companies, and large companies are all looking for you, should you be interested in taking on work as a network forensic investigator. This will be challenging work, however, because in addition to an understanding of common forensic procedure and evidence handling, you also need a solid understanding of networking. This includes the TCP/IP suite of protocols as well as a number of application protocols. It also includes an understanding of some of the security technology that is commonly in place in enterprise networks like firewalls and intrusion detection systems.

Because there is currently no end in sight when it comes to computers being compromised by attackers around the world, there is no end in sight for the need for skilled forensics professionals. For forensic investigators without a foundation in network protocols and security technologies, this book intends to address that gap.

Summary

Businesses, government agencies, educational institutions, and non-profits are all subject to attack by skilled adversaries. These adversaries are, more and more, well-funded professional organizations. They may be some form of organized crime or they may be nation-states. The objectives of these two types of organizations may be significantly different but the end result is the same—they obtain some sort of unauthorized access to systems and once they are in place, they can be difficult to detect or extricate. This is where forensics professionals come in.

Forensics is a wide and varied field that has its basis in the legal world. Forensics, in a general sense, is anything to do with court proceedings. For our purposes, while the practice of digital forensics may have some foundation in law enforcement professionals performing investigations as part of criminal proceedings, the skills necessary to perform those investigations cross over to other areas. When it comes to investigations performed within an enterprise rather than by a law enforcement agency, the skills and techniques are the same but there may be differences in how artifacts and evidence are handled. That isn't always the case, of course, because even if you are just looking for the root cause, there is a possibility of what you find being necessary as part of a court case.

Because there is a possibility that artifacts and evidence may be used in court, it's generally a good idea to make use of cryptographic hashes as well as keeping a chain-of-custody document. These two activities will help you maintain accountability and a historical record of how the evidence and artifacts were handled. This is helpful if you have to refer to the events later on.

When it comes to working in an organization that isn't law enforcement, you may be asked to perform forensic investigations as part of an incident response. Incident response teams are becoming common practice at all sizes of organization. It's just how any organization has to operate to ensure that they can get back on their feet quickly and efficiently when an attack happens—whether it's someone who has infiltrated the network by sending an infected e-mail or whether it's an attacker who has broken into the web server through a commonly known vulnerability.

Given the number of organizations around the world that have suffered these attacks, including several highly publicized attacks at Sony, Target, Home Depot, TJ Maxx, and countless others, there is a real need for forensics practitioners who can work with network data. This is because companies are using intrusion detection systems that will generate packet captures surrounding an incident and some organizations will actually perform a wire recording on a continuous basis simply in case an incident takes place. The network is the best place to capture what really happened because the network—the actual wire—can't lie.

References

Morgan, Steve. “Help Wanted: 1,000 Cybersecurity Jobs At OPM, Post-Hack Hiring Approved By DHS.” Forbes, January 13, 2016. Retrieved June 22, 2016, from http://www.forbes.com/sites/stevemorgan/2016/01/31/help-wanted-1000-cybersecurity-jobs-at-opm-post-hack-hiring-approved-by-dhs/#3f10bfe12cd2.

Umberg, Tommy, and Cherrie Warden. “Digital Evidence and Investigatory Protocols.” Digital Evidence and Electronic Signature Law Review 11 (2014). doi:10.14296/deeslr.v11i0.2131.

2 Networking Basics

In this chapter, you will learn about:

What protocols are and how they work

The basics of TCP/IP

The difference between the OSI model and the TCP/IP architecture

Sitting at his desk, he was looking for his next target. A couple of quick Google searches and digging through various job sites gave him some ideas but he needed to know more. He was in need of addresses and hostnames and he knew of several places he would be able to locate that information. With just a few commands in his open terminal window he had a number of network addresses that he could start poking at. That gave him a starting point, and a few DNS queries later he had not only network addresses but some hostnames that went along with them. He was also able to get some contact information that could be useful later on.