Web Application Defender's Cookbook

Ryan C. Barnett

Description

Defending your web applications against hackers and attackers

The top-selling book Web Application Hacker's Handbook showed how attackers and hackers identify and attack vulnerable live web applications. This new Web Application Defender's Cookbook is the perfect counterpoint to that book: it shows you how to defend. Authored by a highly credentialed defensive security expert, this new book details defensive security methods and can be used as courseware for training network security personnel, web server administrators, and security consultants.

Each "recipe" shows you a way to detect and defend against malicious behavior and provides working code examples for the ModSecurity web application firewall module. Topics include identifying vulnerabilities, setting hacker traps, defending different access points, enforcing application flows, and much more.

  • Provides practical tactics for detecting web attacks and malicious behavior and defending against them
  • Written by a preeminent authority on web application firewall technology and web application defense tactics 
  • Offers a series of "recipes" that include working code examples for the open-source ModSecurity web application firewall module

Find the tools, techniques, and expert information you need to detect and respond to web application attacks with Web Application Defender's Cookbook: Battling Hackers and Protecting Users.


Page count: 537

Publication year: 2013




Table of Contents

Cover

Part I: Preparing the Battle Space

Chapter 1: Application Fortification

Recipe 1-1: Real-time Application Profiling

Recipe 1-2: Preventing Data Manipulation with Cryptographic Hash Tokens

Recipe 1-3: Installing the OWASP ModSecurity Core Rule Set (CRS)

Recipe 1-4: Integrating Intrusion Detection System Signatures

Recipe 1-5: Using Bayesian Attack Payload Detection

HTTP Audit Logging

Recipe 1-6: Enable Full HTTP Audit Logging

Recipe 1-7: Logging Only Relevant Transactions

Recipe 1-8: Ignoring Requests for Static Content

Recipe 1-9: Obscuring Sensitive Data in Logs

Recipe 1-10: Sending Alerts to a Central Log Host Using Syslog

Recipe 1-11: Using the ModSecurity AuditConsole

Chapter 2: Vulnerability Identification and Remediation

Internally Developed Applications

Externally Developed Applications

Virtual Patching

Recipe 2-1: Passive Vulnerability Identification

Active Vulnerability Identification

Recipe 2-2: Active Vulnerability Identification

Manual Vulnerability Remediation

Recipe 2-3: Manual Scan Result Conversion

Recipe 2-4: Automated Scan Result Conversion

Recipe 2-5: Real-time Resource Assessments and Virtual Patching

Chapter 3: Poisoned Pawns (Hacker Traps)

Honeytrap Concepts

Recipe 3-1: Adding Honeypot Ports

Recipe 3-2: Adding Fake robots.txt Disallow Entries

Recipe 3-3: Adding Fake HTML Comments

Recipe 3-4: Adding Fake Hidden Form Fields

Recipe 3-5: Adding Fake Cookies

Part II: Asymmetric Warfare

Chapter 4: Reputation and Third-Party Correlation

Suspicious Source Identification

Recipe 4-1: Analyzing the Client's Geographic Location Data

Recipe 4-2: Identifying Suspicious Open Proxy Usage

Recipe 4-3: Utilizing Real-time Blacklist Lookups (RBL)

Recipe 4-4: Running Your Own RBL

Recipe 4-5: Detecting Malicious Links

Chapter 5: Request Data Analysis

Request Data Acquisition

Recipe 5-1: Request Body Access

Recipe 5-2: Identifying Malformed Request Bodies

Recipe 5-3: Normalizing Unicode

Recipe 5-4: Identifying Use of Multiple Encodings

Recipe 5-5: Identifying Encoding Anomalies

Input Validation Anomalies

Recipe 5-6: Detecting Request Method Anomalies

Recipe 5-7: Detecting Invalid URI Data

Recipe 5-8: Detecting Request Header Anomalies

Recipe 5-9: Detecting Additional Parameters

Recipe 5-10: Detecting Missing Parameters

Recipe 5-11: Detecting Duplicate Parameter Names

Recipe 5-12: Detecting Parameter Payload Size Anomalies

Recipe 5-13: Detecting Parameter Character Class Anomalies

Chapter 6: Response Data Analysis

Recipe 6-1: Detecting Response Header Anomalies

Recipe 6-2: Detecting Response Header Information Leakages

Recipe 6-3: Response Body Access

Recipe 6-4: Detecting Page Title Changes

Recipe 6-5: Detecting Page Size Deviations

Recipe 6-6: Detecting Dynamic Content Changes

Recipe 6-7: Detecting Source Code Leakages

Recipe 6-8: Detecting Technical Data Leakages

Recipe 6-9: Detecting Abnormal Response Time Intervals

Recipe 6-10: Detecting Sensitive User Data Leakages

Recipe 6-11: Detecting Trojan, Backdoor, and Webshell Access Attempts

Chapter 7: Defending Authentication

Recipe 7-1: Detecting the Submission of Common/Default Usernames

Recipe 7-2: Detecting the Submission of Multiple Usernames

Recipe 7-3: Detecting Failed Authentication Attempts

Recipe 7-4: Detecting a High Rate of Authentication Attempts

Recipe 7-5: Normalizing Authentication Failure Details

Recipe 7-6: Enforcing Password Complexity

Recipe 7-7: Correlating Usernames with SessionIDs

Chapter 8: Defending Session State

Recipe 8-1: Detecting Invalid Cookies

Recipe 8-2: Detecting Cookie Tampering

Recipe 8-3: Enforcing Session Timeouts

Recipe 8-4: Detecting Client Source Location Changes During Session Lifetime

Recipe 8-5: Detecting Browser Fingerprint Changes During Sessions

Chapter 9: Preventing Application Attacks

Recipe 9-1: Blocking Non-ASCII Characters

Recipe 9-2: Preventing Path-Traversal Attacks

Recipe 9-3: Preventing Forceful Browsing Attacks

Recipe 9-4: Preventing SQL Injection Attacks

Recipe 9-5: Preventing Remote File Inclusion (RFI) Attacks

Recipe 9-6: Preventing OS Commanding Attacks

Recipe 9-7: Preventing HTTP Request Smuggling Attacks

Recipe 9-8: Preventing HTTP Response Splitting Attacks

Recipe 9-9: Preventing XML Attacks

Chapter 10: Preventing Client Attacks

Recipe 10-1: Implementing Content Security Policy (CSP)

Recipe 10-2: Preventing Cross-Site Scripting (XSS) Attacks

Recipe 10-3: Preventing Cross-Site Request Forgery (CSRF) Attacks

Recipe 10-4: Preventing UI Redressing (Clickjacking) Attacks

Recipe 10-5: Detecting Banking Trojan (Man-in-the-Browser) Attacks

Chapter 11: Defending File Uploads

Recipe 11-1: Detecting Large File Sizes

Recipe 11-2: Detecting a Large Number of Files

Recipe 11-3: Inspecting File Attachments for Malware

Chapter 12: Enforcing Access Rate and Application Flows

Recipe 12-1: Detecting High Application Access Rates

Recipe 12-2: Detecting Request/Response Delay Attacks

Recipe 12-3: Identifying Inter-Request Time Delay Anomalies

Recipe 12-4: Identifying Request Flow Anomalies

Recipe 12-5: Identifying a Significant Increase in Resource Usage

Part III: Tactical Response

Chapter 13: Passive Response Actions

Recipe 13-1: Tracking Anomaly Scores

Recipe 13-2: Trap and Trace Audit Logging

Recipe 13-3: Issuing E-mail Alerts

Recipe 13-4: Data Sharing with Request Header Tagging

Chapter 14: Active Response Actions

Recipe 14-1: Using Redirection to Error Pages

Recipe 14-2: Dropping Connections

Recipe 14-3: Blocking the Client Source Address

Recipe 14-4: Restricting Geolocation Access Through Defense Condition (DefCon) Level Changes

Recipe 14-5: Forcing Transaction Delays

Recipe 14-6: Spoofing Successful Attacks

Recipe 14-7: Proxying Traffic to Honeypots

Recipe 14-8: Forcing an Application Logout

Recipe 14-9: Temporarily Locking Account Access

Chapter 15: Intrusive Response Actions

Recipe 15-1: JavaScript Cookie Testing

Recipe 15-2: Validating Users with CAPTCHA Testing

Recipe 15-3: Hooking Malicious Clients with BeEF

Frontmatter

Foreword

Introduction

Part I: Preparing the Battle Space

The art of war teaches us to rely not on the likelihood of the enemy’s not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable.

—Sun Tzu in The Art of War

“Is our web site secure?” If your company’s chief executive officer asked you this question, what would you say? If you respond in the affirmative, the CEO might say, “Prove it.” How do you provide tangible proof that your web applications are adequately protected? This section lists some sample responses and highlights the deficiencies of each. Here’s the first one:

Our web applications are secure because we are compliant with the Payment Card Industry Data Security Standard (PCI DSS).

PCI DSS, like most other regulations, is a minimum standard of due care. This means that achieving compliance does not make your site unhackable. PCI DSS is really all about risk transference (from the credit card companies to the merchants) rather than risk mitigation. If organizations do not truly embrace the concept of reducing risk by securing their environments above and beyond what PCI DSS specifies, the compliance process becomes nothing more than a checkbox paperwork exercise. Although PCI has some admirable aspects, keep this mantra in mind:

It is much easier to pass a PCI audit if you are secure than to be secure because you pass a PCI audit.

In a more general sense, regulations tend to suffer from the control-compliant philosophy. They are input-centric and do not actually analyze or monitor their effectiveness in operations. Richard Bejtlich,1 a respected security thought leader, brilliantly presented this interesting analogy on this topic:

Imagine a football (American-style) team that wants to measure their success during a particular season. Team management decides to measure the height and weight of each player. They time how fast the player runs the 40 yard dash. They note the college from which each player graduated. They collect many other statistics as well, then spend time debating which ones best indicate how successful the football team is. Should the center weigh over 300 pounds? Should the wide receivers have a shoe size of 11 or greater? Should players from the northwest be on the starting lineup? All of this seems perfectly rational to this team. An outsider looks at the situation and says: “Check the scoreboard! You’re down 42–7 and you have a 1–6 record. You guys are losers!”

This is the essence of input-centric versus output-aware security. Regardless of all the preparations, it is on the live production network where all your web security preparations will either pay off or crash and burn. Because development and staging areas rarely adequately mimic production environments, you do not truly know how your web application security will fare until it is accessible by untrusted clients.

Our web applications are secure because we have deployed commercial web security product(s).

This response is an unfortunate result of transitive belief in security. Just because a security vendor's web site or product collateral says that the product will make your web application more secure does not in fact make it so. Security products, just like the applications they are protecting, have flaws of their own. There are also potential issues with mistakes in configuration and deployment, which may allow attackers to manipulate or evade detection.

Our web applications are secure because we use SSL.

Many e-commerce web sites prominently display a seal image indicating that the web site is secure because it uses a Secure Sockets Layer (SSL) certificate purchased from a reputable certificate authority (CA). Use of a signed SSL certificate helps prevent the following attacks:

Network sniffing.

Without SSL, your data is sent across the network using an unencrypted channel. This means that anyone along the path can potentially sniff the traffic off the wire in clear text.

Web site spoofing.

Attackers who set up phishing sites that mimic the legitimate site have a harder time appearing authentic, because they cannot easily obtain a valid SSL certificate for the spoofed domain.

The use of SSL does help mitigate these two issues, but it has one glaring weakness: The use of SSL does absolutely nothing to prevent a malicious user from directly attacking the web application itself. As a matter of fact, many attackers prefer to target SSL-enabled web applications because using this encrypted channel may hide their activities from other network-monitoring devices.

Our web applications are secure because we have alerts demonstrating that we blocked web attacks.

Evidence of blocked attack attempts is good but is not enough. When management asks if the web site is secure, it is really asking what the score of the game is. The CEO wants to know whether you are winning or losing the game of defending your web applications from compromise. In this sense, your response doesn't answer the question. Again referencing Richard Bejtlich's American football analogy, this is like someone asking you who won the Super Bowl and you responding by citing game statistics such as number of plays, time of possession, and yards gained without telling him or her the final score! Not really answering the question, is it? Although providing evidence of blocked attacks is a useful metric, management really wants to know if any successful attacks occurred.

With this concept as a backdrop, here are the web security metrics that I feel are most important for the production network and gauging how the web application’s security mechanisms are performing:

Web transactions per day

should be represented as a number (#). It establishes a baseline of web traffic and provides some perspective for the other metrics.

Attacks detected (true positive)

should be represented as both a number (#) and a percentage (%) of the total web transactions per day. This data is a general indicator of both malicious web traffic and security detection accuracy.

Missed attacks (false negative)

should be represented as both a number (#) and a percentage (%) of the total web transactions per day. This data is a general indicator of the effectiveness of security detection accuracy. This is the key metric that is missing when you attempt to provide the final score of the game.

Blocked traffic (false positive)

should be represented as both a number (#) and a percentage (%) of the total web transactions per day. This data is also a general indicator of the effectiveness of security detection accuracy. This is very important data for many organizations because blocking legitimate customer traffic may mean missed revenue. Organizations should have a method of accurately tracking false positive alerts that took disruptive actions on web transactions.

Attack detection failure rate

should be represented as a percentage (%). It is derived by adding false negatives and false positives and then dividing by true positives.

This percentage gives the overall effectiveness of your web application security detection accuracy.

The attack detection failure rate provides data to better figure out the score of the game. Unfortunately, most organizations do not gather enough information to conduct this type of security metric analysis.
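To make the arithmetic behind these metrics concrete, here is a minimal Python sketch. The daily counts are made-up illustrative numbers, not measurements from any real deployment, and the failure-rate calculation follows the definition given above.

    # Illustrative metric calculation; all counts below are hypothetical.
    transactions_per_day = 1_000_000   # total web transactions per day
    true_positives = 2_400             # attacks detected
    false_negatives = 120              # missed attacks
    false_positives = 60               # legitimate transactions blocked

    def pct(part, whole):
        """Express part as a percentage of whole."""
        return 100.0 * part / whole

    print(f"Attacks detected: {true_positives} ({pct(true_positives, transactions_per_day):.3f}%)")
    print(f"Missed attacks:   {false_negatives} ({pct(false_negatives, transactions_per_day):.3f}%)")
    print(f"Blocked traffic:  {false_positives} ({pct(false_positives, transactions_per_day):.3f}%)")

    # Attack detection failure rate, as defined above:
    # (false negatives + false positives) divided by true positives
    failure_rate = pct(false_negatives + false_positives, true_positives)
    print(f"Attack detection failure rate: {failure_rate:.1f}%")

With these sample counts the failure rate works out to 7.5%, which is the kind of single number management can track over time as the "score of the game."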

Our web applications are secure because we have not identified any abnormal behavior.

From a compromise perspective, identifying abnormal application behavior seems appropriate. The main deficiency with this response has to do with the data used to identify anomalies. Most organizations have failed to properly instrument their web applications to produce sufficient logging detail. Most web sites default to using the web server’s logging mechanisms, such as the Common Log Format (CLF). Here are two sample CLF log entries taken from the Apache web server:

    109.70.36.102 - - [15/Feb/2012:09:08:16 -0500] "POST /wordpress//xmlrpc.php HTTP/1.1" 500 163 "-" "Wordpress Hash Grabber v2.0libwww-perl/6.02"

    109.70.36.102 - - [15/Feb/2012:09:08:17 -0500] "POST /wordpress//xmlrpc.php HTTP/1.1" 200 613 "-" "Wordpress Hash Grabber v2.0libwww-perl/6.02"

Looking at this data, we can see a few indications of potential suspicious or abnormal behavior. The first is that the User-Agent field data shows a value for a known WordPress exploit program, WordPress Hash Grabber. The second indication is the returned HTTP status code tokens. The first entry results in a 500 Internal Server Error status code, and the second entry results in a 200 OK status code. What data in the first entry caused the web application to generate an error condition? We don’t know what parameter data was sent to the application because POST requests pass data in the request body rather than in a QUERY_STRING value that is logged by web servers in the CLF log. What data was returned within the response bodies of these transactions? These are important questions to answer, but CLF logs include only a small subset of the full transactional data. They do not, for instance, include other request headers such as cookies, POST request bodies, or any logging of outbound data. Failure to properly log outbound HTTP response data prevents organizations from answering this critical incident response question: “What data did the attackers steal?” The lack of robust HTTP audit logs is one of the main reasons why organizations cannot conduct proper incident response for web-related incidents.
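For illustration, here is a minimal Python sketch that parses entries like the two above and flags the warning signs that are visible in them. The regular expression and the list of suspicious User-Agent tokens are assumptions made for this example, not rules taken from ModSecurity, and the sketch can only surface what CLF actually records.

    import re

    # Rough parser for log entries like the two shown above. The pattern and
    # the "suspicious" indicators are illustrative assumptions, not
    # production-grade detection logic.
    CLF_PATTERN = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<uri>\S+) \S+" '
        r'(?P<status>\d{3}) (?P<size>\S+) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    SUSPICIOUS_AGENT_TOKENS = ("hash grabber", "libwww-perl")  # example indicators only

    def flag_entry(line):
        """Return the reasons a log entry looks suspicious (empty list if none)."""
        match = CLF_PATTERN.match(line)
        if not match:
            return ["unparseable log entry"]
        reasons = []
        if any(token in match["agent"].lower() for token in SUSPICIOUS_AGENT_TOKENS):
            reasons.append("suspicious User-Agent: " + match["agent"])
        if match["status"].startswith("5"):
            reasons.append("server error status: " + match["status"])
        if match["method"] == "POST":
            reasons.append("POST body not captured by this log format")
        return reasons

    entry = ('109.70.36.102 - - [15/Feb/2012:09:08:16 -0500] '
             '"POST /wordpress//xmlrpc.php HTTP/1.1" 500 163 "-" '
             '"Wordpress Hash Grabber v2.0libwww-perl/6.02"')
    print(flag_entry(entry))

Note that even a perfect parser stops at the last comment: the request body and the response body simply are not there to inspect, which is the gap the following responses address.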

Our web applications are secure because we have not identified any abnormal behavior, and we collect and analyze full HTTP audit logs for signs of malicious behavior.

A key mistake that many organizations make is to use only alert-centric events as indicators of potential incidents. If you log only details about known malicious behaviors, how will you know if your defenses are ever circumvented? New or stealthy attack methods emerge constantly. Thus, it is insufficient to analyze alerts for issues you already know about. You must have full HTTP transactional audit logs at your disposal so that you can analyze them for other signs of malicious activity.

During incident response, management often asks, “What else did this person do?” To accurately answer this question, you must have audit logs of the user’s entire web session, not just a single transaction that was flagged as suspicious.

Our web applications are secure because we have not identified any abnormal behavior, and we collect and analyze full HTTP audit logs for signs of malicious behavior. We also regularly test our applications for the existence of vulnerabilities.

Identifying and blocking web application attack attempts is important, but correlating the target of these attacks with the existence of known vulnerabilities within your applications is paramount. Suppose you are an operational security analyst who manages events centralized within your organization's Security Information and Event Management (SIEM) system. Although a spike in activity for attacks targeting a vulnerability within a Microsoft Internet Information Services (IIS) web server indicates malicious behavior, the severity of these alerts may be substantially lower if your organization does not use the IIS platform. On the other hand, if you see attack alerts for a known vulnerability within the osCommerce application, and you are running that application on the system that is the target of the alert, the threat level should be increased, because a successful compromise is now a possibility. Knowing which applications are deployed in your environment and whether they have specific vulnerabilities is critical for proper security event prioritization. Even if you have conducted full application assessments to identify vulnerabilities, this response is still incomplete, and this final response highlights why:

Our web applications are secure because we have not identified any abnormal behavior, and we collect and analyze full HTTP audit logs for signs of malicious behavior. We also regularly test our applications for the existence of vulnerabilities and our detection and incident response capabilities.

With this final response, you see why the preceding answer is incomplete. Even if you know where your web application vulnerabilities are, you still must actively test your operational security defenses with live simulations of attacks to ensure their effectiveness. Does operational security staff identify the attacks? Are proper incident response countermeasures implemented? How long does it take to implement them? Are the countermeasures effective? Unless you can answer these questions, you will never truly know if your defensive mechanisms are working.
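Returning to the correlation point made above, here is a minimal Python sketch of how alert severity might be adjusted based on whether the targeted software is actually deployed on the alert's destination host. The software inventory, the alert record, and the host names are hypothetical examples, not data from any real SIEM.

    # Hypothetical illustration of vulnerability-aware alert prioritization.
    # The inventory and the alert below are made-up examples.
    DEPLOYED_SOFTWARE = {
        "shop.example.com": {"apache", "oscommerce"},
        "intranet.example.com": {"iis", "aspnet"},
    }

    def prioritize(alert):
        """Raise severity only when the target host actually runs the
        software that the attack is aimed at."""
        installed = DEPLOYED_SOFTWARE.get(alert["target_host"], set())
        if alert["targeted_software"] in installed:
            return "high"          # plausible path to compromise
        return "informational"     # attack against software we do not run

    alert = {
        "signature": "osCommerce known-vulnerability exploit attempt",
        "targeted_software": "oscommerce",
        "target_host": "shop.example.com",
    }
    print(prioritize(alert))  # -> high

The same lookup, run in reverse during live attack simulations, also shows whether your analysts and tooling actually escalate the alerts that matter.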

1. http://taosecurity.blogspot.com/

Chapter 1

Application Fortification

Whoever is first in the field and awaits the coming of the enemy will be fresh for the fight; whoever is second in the field and has to hasten to battle will arrive exhausted.

—Sun Tzu in The Art of War

The recipes in this section walk you through the process of preparing your web application for the production network battlefront.

The first step is application fortification, in which you analyze the current web application that you must protect and enhance its defensive capabilities.