Evade antiviruses and bypass firewalls with the most widely used penetration testing frameworks
Copyright © 2019 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: June 2019
Production reference: 1180619
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-83864-607-3
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Gilberto Najera-Gutierrez is an experienced penetration tester currently working for one of the top security testing service providers in Australia. He obtained leading security and penetration testing certifications, namely Offensive Security Certified Professional (OSCP), EC-Council Certified Security Administrator (ECSA), and GIAC Exploit Researcher and Advanced Penetration Tester (GXPN).
Gilberto has been working as a penetration tester since 2013, and he has been a security enthusiast for almost 20 years. He has successfully conducted penetration tests on networks and applications of some of the biggest corporations, government agencies, and financial institutions in Mexico and Australia.
Juned Ahmed Ansari is a cyber security researcher based out of Mumbai. He currently leads the penetration testing and offensive security team at a prominent multinational corporation (MNC). Juned has worked as a consultant for large private sector enterprises, guiding them on their cyber security program. He has also worked with start-ups, helping them make their final product secure.
Juned has conducted several training sessions on advanced penetration testing, which were focused on teaching students stealth and evasion techniques in highly secure environments. His primary focus areas are penetration testing, threat intelligence, and application security research.
Daniel Teixeira is an IT security expert, author, and trainer, specializing in red team engagements, penetration testing, and vulnerability assessments. His main areas of focus are adversary simulation and the emulation of modern adversarial tactics, techniques, and procedures, as well as vulnerability research and exploit development.
Abhinav Singh is a well-known information security researcher. He is the author of Metasploit Penetration Testing Cookbook (first and second editions) and Instant Wireshark Starter, by Packt. He is an active contributor to the security community—paper publications, articles, and blogs. His work has been quoted in several security and privacy magazines, and digital portals. He is a frequent speaker at eminent international conferences—Black Hat and RSA. His areas of expertise include malware research, reverse engineering, and cloud security.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright
Improving your Penetration Testing Skills
About Packt
Why subscribe?
Packt.com
Contributors
About the authors
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
Introduction to Penetration Testing and Web Applications
Proactive security testing
Different testing methodologies
Ethical hacking
Penetration testing
Vulnerability assessment
Security audits
Considerations when performing penetration testing
Rules of Engagement
The type and scope of testing
Client contact details
Client IT team notifications
Sensitive data handling
Status meeting and reports
The limitations of penetration testing
The need for testing web applications
Reasons to guard against attacks on web applications
Kali Linux
A web application overview for penetration testers
HTTP protocol
Knowing an HTTP request and response
The request header
The response header
HTTP methods
The GET method
The POST method
The HEAD method
The TRACE method
The PUT and DELETE methods
The OPTIONS method
Keeping sessions in HTTP
Cookies
Cookie flow between server and client
Persistent and nonpersistent cookies
Cookie parameters
HTML data in HTTP response
The server-side code
Multilayer web application
Three-layer web application design
Web services
Introducing SOAP and REST web services
HTTP methods in web services
XML and JSON
AJAX
Building blocks of AJAX
The AJAX workflow
HTML5
WebSockets
Setting Up Your Lab with Kali Linux
Kali Linux
Latest improvements in Kali Linux
Installing Kali Linux
Virtualizing Kali Linux versus installing it on physical hardware
Installing on VirtualBox
Creating the virtual machine
Installing the system
Important tools in Kali Linux
CMS & Framework Identification
WPScan
JoomScan
CMSmap
Web Application Proxies
Burp Proxy
Customizing client interception
Modifying requests on the fly
Burp Proxy with HTTPS websites
Zed Attack Proxy
ProxyStrike
Web Crawlers and Directory Bruteforce
DIRB
DirBuster
Uniscan
Web Vulnerability Scanners
Nikto
w3af
Skipfish
Other tools
OpenVAS
Database exploitation
Web application fuzzers
Using Tor for penetration testing
Vulnerable applications and servers to practice on
OWASP Broken Web Applications
Hackazon
Web Security Dojo
Other resources
Reconnaissance and Profiling the Web Server
Reconnaissance
Passive reconnaissance versus active reconnaissance
Information gathering
Domain registration details
Whois – extracting domain information
Identifying related hosts using DNS
Zone transfer using dig
DNS enumeration
DNSEnum
Fierce
DNSRecon
Brute force DNS records using Nmap
Using search engines and public sites to gather information
Google dorks
Shodan
theHarvester
Maltego
Recon-ng – a framework for information gathering
Domain enumeration using Recon-ng
Sub-level and top-level domain enumeration
Reporting modules
Scanning – probing the target
Port scanning using Nmap
Different options for port scan
Evading firewalls and IPS using Nmap
Identifying the operating system
Profiling the server
Identifying virtual hosts
Locating virtual hosts using search engines
Identifying load balancers
Cookie-based load balancer
Other ways of identifying load balancers
Application version fingerprinting
The Nmap version scan
The Amap version scan
Fingerprinting the web application framework
The HTTP header
The WhatWeb scanner
Scanning web servers for vulnerabilities and misconfigurations
Identifying HTTP methods using Nmap
Testing web servers using auxiliary modules in Metasploit
Identifying HTTPS configuration and issues
OpenSSL client
Scanning TLS/SSL configuration with SSLScan
Scanning TLS/SSL configuration with SSLyze
Testing TLS/SSL configuration using Nmap
Spidering web applications
Burp Spider
Application login
Directory brute forcing
DIRB
ZAP's forced browse
Authentication and Session Management Flaws
Authentication schemes in web applications
Platform authentication
Basic
Digest
NTLM
Kerberos
HTTP Negotiate
Drawbacks of platform authentication
Form-based authentication
Two-factor Authentication
OAuth
Session management mechanisms
Sessions based on platform authentication
Session identifiers
Common authentication flaws in web applications
Lack of authentication or incorrect authorization verification
Username enumeration
Discovering passwords by brute force and dictionary attacks
Attacking basic authentication with THC Hydra
Attacking form-based authentication
Using Burp Suite Intruder
Using THC Hydra
The password reset functionality
Recovery instead of reset
Common password reset flaws
Vulnerabilities in 2FA implementations
Detecting and exploiting improper session management
Using Burp Sequencer to evaluate the quality of session IDs
Predicting session IDs
Session Fixation
Preventing authentication and session attacks
Authentication guidelines
Session management guidelines
Detecting and Exploiting Injection-Based Flaws
Command injection
Identifying parameters to inject data
Error-based and blind command injection
Metacharacters for command separator
Exploiting shellshock
Getting a reverse shell
Exploitation using Metasploit
SQL injection
An SQL primer
The SELECT statement
Vulnerable code
SQL injection testing methodology
Extracting data with SQL injection
Getting basic environment information
Blind SQL injection
Automating exploitation
sqlninja
BBQSQL
sqlmap
Attack potential of the SQL injection flaw
XML injection
XPath injection
XPath injection with XCat
The XML External Entity injection
The Entity Expansion attack
NoSQL injection
Testing for NoSQL injection
Exploiting NoSQL injection
Mitigation and prevention of injection vulnerabilities
Finding and Exploiting Cross-Site Scripting (XSS) Vulnerabilities
An overview of Cross-Site Scripting
Persistent XSS
Reflected XSS
DOM-based XSS
XSS using the POST method
Exploiting Cross-Site Scripting
Cookie stealing
Website defacing
Key loggers
Taking control of the user's browser with BeEF-XSS
Scanning for XSS flaws
XSSer
XSS-Sniper
Preventing and mitigating Cross-Site Scripting
Cross-Site Request Forgery, Identification, and Exploitation
Testing for CSRF flaws
Exploiting a CSRF flaw
Exploiting CSRF in a POST request
CSRF on web services
Using Cross-Site Scripting to bypass CSRF protections
Preventing CSRF
Attacking Flaws in Cryptographic Implementations
A cryptography primer
Algorithms and modes
Asymmetric encryption versus symmetric encryption
Symmetric encryption algorithm
Stream and block ciphers
Initialization Vectors
Block cipher modes
Hashing functions
Salt values
Secure communication over SSL/TLS
Secure communication in web applications
TLS encryption process
Identifying weak implementations of SSL/TLS
The OpenSSL command-line tool
SSLScan
SSLyze
Testing SSL configuration using Nmap
Exploiting Heartbleed
POODLE
Custom encryption protocols
Identifying encrypted and hashed information
Hashing algorithms
hash-identifier
Frequency analysis
Entropy analysis
Identifying the encryption algorithm
Common flaws in sensitive data storage and transmission
Using offline cracking tools
Using John the Ripper
Using Hashcat
Preventing flaws in cryptographic implementations
Using Automated Scanners on Web Applications
Considerations before using an automated scanner
Web application vulnerability scanners in Kali Linux
Nikto
Skipfish
Wapiti
OWASP-ZAP scanner
Content Management Systems scanners
WPScan
JoomScan
CMSmap
Fuzzing web applications
Using the OWASP-ZAP fuzzer
Burp Intruder
Post-scanning actions
Metasploit Quick Tips for Security Professionals
Introduction
Installing Metasploit on Windows
Getting ready
How to do it...
Installing Linux and macOS
How to do it...
Installing Metasploit on macOS
How to do it...
Using Metasploit in Kali Linux
Getting ready
How to do it...
There's more...
Upgrading Kali Linux
Setting up a penetration-testing lab
Getting ready
How to do it...
How it works...
Setting up SSH connectivity
Getting ready
How to do it...
Connecting to Kali using SSH
How to do it...
Configuring PostgreSQL
Getting ready
How to do it...
There's more...
Creating workspaces
How to do it...
Using the database
Getting ready
How to do it...
Using the hosts command
How to do it...
Understanding the services command
How to do it...
Information Gathering and Scanning
Introduction
Passive information gathering with Metasploit
Getting ready
How to do it...
DNS Record Scanner and Enumerator
There's more...
CorpWatch Company Name Information Search
Search Engine Subdomains Collector
Censys Search
Shodan Search
Shodan Honeyscore Client
Search Engine Domain Email Address Collector
Active information gathering with Metasploit
How to do it...
TCP Port Scanner
TCP SYN Port Scanner
Port scanning—the Nmap way
Getting ready
How to do it...
How it works...
There's more...
Operating system and version detection
Increasing anonymity
Port scanning—the db_nmap way
Getting ready
How to do it...
Nmap Scripting Engine
Host discovery with ARP Sweep
Getting ready
How to do it...
UDP Service Sweeper
How to do it...
SMB scanning and enumeration
How to do it...
Detecting SSH versions with the SSH Version Scanner
Getting ready
How to do it...
FTP scanning
Getting ready
How to do it...
SMTP enumeration
Getting ready
How to do it...
SNMP enumeration
Getting ready
How to do it...
HTTP scanning
Getting ready
How to do it...
WinRM scanning and brute forcing
Getting ready
How to do it...
Integrating with Nessus
Getting ready
How to do it...
Integrating with NeXpose
Getting ready
How to do it...
Integrating with OpenVAS
How to do it...
Server-Side Exploitation
Introduction
Getting to know MSFconsole
MSFconsole commands
Exploiting a Linux server
Getting ready
How to do it...
How it works...
What about the payload?
SQL injection
Getting ready
How to do it...
Types of shell
Getting ready
How to do it...
Exploiting a Windows Server machine
Getting ready
How to do it...
Exploiting common services
Getting ready
How to do it...
MS17-010 EternalBlue SMB Remote Windows Kernel Pool Corruption
Getting ready
How to do it...
MS17-010 EternalRomance/EternalSynergy/EternalChampion
How to do it...
Installing backdoors
Getting ready
How to do it...
Denial of Service
Getting ready
How to do it...
How to do it...
Meterpreter
Introduction
Understanding the Meterpreter core commands
Getting ready
How to do it...
How it works...
Understanding the Meterpreter filesystem commands
How to do it...
How it works...
Understanding Meterpreter networking commands
Getting ready
How to do it...
How it works...
Understanding the Meterpreter system commands
How to do it...
Setting up multiple communication channels with the target
Getting ready
How to do it...
How it works...
Meterpreter anti-forensics
Getting ready
How to do it...
How it works...
There's more...
The getdesktop and keystroke sniffing
Getting ready
How to do it...
There's more...
Using a scraper Meterpreter script
Getting ready
How to do it...
How it works...
Scraping the system using winenum
How to do it...
Automation with AutoRunScript
How to do it...
Meterpreter resource scripts
How to do it...
Meterpreter timeout control
How to do it...
Meterpreter sleep control
How to do it...
Meterpreter transports
How to do it...
Interacting with the registry
Getting ready
How to do it...
Loading framework plugins
How to do it...
Meterpreter API and mixins
Getting ready
How to do it...
How it works...
Railgun—converting Ruby into a weapon
Getting ready
How to do it...
How it works...
There's more...
Adding DLL and function definitions to Railgun
How to do it...
How it works...
Injecting the VNC server remotely
Getting ready
How to do it...
Enabling Remote Desktop
How to do it...
How it works...
Post-Exploitation
Introduction
Post-exploitation modules
Getting ready
How to do it...
How it works...
How to do it...
How it works...
Bypassing UAC
Getting ready
How to do it...
Dumping the contents of the SAM database
Getting ready
How to do it...
Passing the hash
How to do it...
Incognito attacks with Meterpreter
How to do it...
Using Mimikatz
Getting ready
How to do it...
There's more...
Setting up a persistence with backdoors
Getting ready
How to do it...
Becoming TrustedInstaller
How to do it...
Backdooring Windows binaries
How to do it...
Pivoting with Meterpreter
Getting ready
How to do it...
How it works...
Port forwarding with Meterpreter
Getting ready
How to do it...
Credential harvesting
How to do it...
Enumeration modules
How to do it...
Autoroute and socks proxy server
How to do it...
Analyzing an existing post-exploitation module
Getting ready
How to do it...
How it works...
Writing a post-exploitation module
Getting ready
How to do it...
Using MSFvenom
Introduction
Payloads and payload options
Getting ready
How to do it...
Encoders
How to do it...
There's more...
Output formats
How to do it...
Templates
Getting ready
How to do it...
Meterpreter payloads with trusted certificates
Getting ready
How to do it...
There's more...
Client-Side Exploitation and Antivirus Bypass
Introduction
Exploiting a Windows 10 machine
Getting ready
How to do it...
Bypassing antivirus and IDS/IPS
How to do it...
Metasploit macro exploits
How to do it...
There's more...
Human Interface Device attacks
Getting ready
How to do it...
HTA attack
How to do it...
Backdooring executables using a MITM attack
Getting ready
How to do it...
Creating a Linux trojan
How to do it...
Creating an Android backdoor
Getting ready
How to do it...
There's more...
Social-Engineer Toolkit
Introduction
Getting started with the Social-Engineer Toolkit
Getting ready
How to do it...
How it works...
Working with the spear-phishing attack vector
How to do it...
Website attack vectors
How to do it...
Working with the multi-attack web method
How to do it...
Infectious media generator
How to do it...
How it works...
Working with Modules for Penetration Testing
Introduction
Working with auxiliary modules
Getting ready
How to do it...
DoS attack modules
How to do it...
HTTP
SMB
Post-exploitation modules
Getting ready
How to do it...
Understanding the basics of module building
How to do it...
Analyzing an existing module
Getting ready
How to do it...
Building your own post-exploitation module
Getting ready
How to do it...
Building your own auxiliary module
Getting ready
How to do it...
Other Books You May Enjoy
Leave a review - let other readers know what you think
Penetration testing or ethical hacking is a legal and foolproof way to identify vulnerabilities in your system. With thorough penetration testing, you can secure your system against the majority of threats.
This Learning Path starts with an in-depth explanation of what hacking and penetration testing are. You’ll gain a deep understanding of classical SQL and command injection flaws, and discover ways to exploit these flaws to secure your system. You'll also learn how to create and customize payloads to evade antivirus software and bypass an organization's defenses. Whether it’s exploiting server vulnerabilities and attacking client systems, or compromising mobile phones and installing backdoors, this Learning Path will guide you through all this and more to improve your defense against online attacks.
By the end of this Learning Path, you'll have the knowledge and skills you need to invade a system and identify all its vulnerabilities.
This Learning Path includes content from the following Packt products:
Web Penetration Testing with Kali Linux - Third Edition by Juned Ahmed Ansari and Gilberto Najera-Gutierrez
Metasploit Penetration Testing Cookbook - Third Edition by Daniel Teixeira and Abhinav Singh
This Learning Path is designed for security professionals, web programmers, and pentesters who want to learn vulnerability exploitation and make the most of the Metasploit framework. Some understanding of penetration testing and Metasploit is required, but basic system administration skills and the ability to read code are a must.
Chapter 1, Introduction to Penetration Testing and Web Applications, covers the basic concepts of penetration testing, Kali Linux, and web applications. It starts with the definition of penetration testing itself and other key concepts, followed by the considerations to take into account before engaging in a professional penetration test, such as defining the scope and Rules of Engagement. Then we dig into Kali Linux and see how web applications work, focusing on the aspects that are most relevant to a penetration tester.
Chapter 2, Setting Up Your Lab with Kali Linux, is a technical review of the testing environment that will be used through the rest of the chapters. We start by explaining what Kali Linux is and the tools it includes for the purpose of testing security of web applications; next we look at the vulnerable web applications that will be used in future chapters to demonstrate the vulnerabilities and attacks.
Chapter 3, Reconnaissance and Profiling the Web Server, shows the techniques and tools used by penetration testers and attackers to gain information about the technologies used to develop, host, and support the target application and to identify the first weak spots that may be further exploited. Following the standard penetration testing methodology, the first step is to gather as much information as possible about the targets.
Chapter 4, Authentication and Session Management Flaws, as the name suggests, is dedicated to detection, exploitation, and mitigation of vulnerabilities related to the identification of users and segregation of duties within the application, starting with the explanation of different authentication and session management mechanisms, followed by how these mechanisms can have design or implementation flaws and how those flaws can be taken advantage of by a malicious actor or a penetration tester.
Chapter 5, Detecting and Exploiting Injection-Based Flaws, explains the detection, exploitation, and mitigation of the most common injection flaws. One of the top security concerns for developers is having their applications vulnerable to any kind of injection attack; be it SQL injection, command injection, or any other variant, these flaws can pose a major risk to a web application.
Chapter 6, Finding and Exploiting Cross-Site Scripting (XSS) Vulnerabilities, explains what a Cross-Site Scripting vulnerability is, how and why it poses a security risk, how to identify when a web application is vulnerable, and how an attacker can take advantage of it to grab sensitive information from the user or make them perform actions unknowingly.
Chapter 7, Cross-Site Request Forgery, Identification and Exploitation, explains what a Cross-Site Request Forgery attack is and how it works. Then we discuss the key factors in detecting the flaws that enable it, followed by techniques for exploitation, and finish with prevention and mitigation advice.
Chapter 8, Attacking Flaws in Cryptographic Implementations, starts with an introduction to cryptography concepts that are useful from the perspective of penetration testers, such as how SSL/TLS works in general and a review of encryption, encoding, and hashing concepts and algorithms; then we describe the tools used to identify weak SSL/TLS implementations, together with the exploitation of well-known vulnerabilities. Next, we cover the detection and exploitation of flaws in custom cryptographic algorithms and implementations. We finish the chapter with advice on how to prevent vulnerabilities when using encrypted communications or when storing sensitive information.
Chapter 9, Using Automated Scanners on Web Applications, explains the factors to take into account when using automated scanners and fuzzers on web applications. We also explain how these scanners work and what fuzzing is, followed by usage examples of the scanning and fuzzing tools included in Kali Linux. We conclude with the actions a penetration tester should take after performing an automated scan on a web application in order to deliver valuable results to the application's developer.
Chapter 10, Metasploit Quick Tips for Security Professionals, contains recipes covering how to install Metasploit on different platforms, building a penetration testing lab, configuring Metasploit to use a PostgreSQL database, and using workspaces.
Chapter 11, Information Gathering and Scanning, discusses passive and active information gathering with Metasploit, port scanning, scanning techniques, enumeration, and integration with scanners such as Nessus, NeXpose, and OpenVAS.
Chapter 12, Server-Side Exploitation, includes Linux and Windows server exploitation, SQL injection, backdoor installation, and Denial of Service attacks.
Chapter 13, Meterpreter, covers all of the commands related to Meterpreter, communication channels, keyloggers, automation, loading framework plugins, using Railgun, and much more.
Chapter 14, Post-Exploitation, covers post-exploitation modules, privilege escalation, process migration, bypassing UAC, pass the hash attacks, using Incognito and Mimikatz, backdooring Windows binaries, pivoting, port forwarding, credential harvesting, and writing a post-exploitation module.
Chapter 15, Using MSFvenom, discusses MSFvenom payloads and payload options, encoders, output formats, templates, and how to use Meterpreter payloads with trusted certificates.
Chapter 16, Client-Side Exploitation and Antivirus Bypass, explains how to exploit a Windows 10 machine, antivirus and IDS/IPS bypasses, macro exploits, Human Interface Device attacks, HTA attacks, how to backdoor executables using a MITM attack, and how to create a Linux trojan and an Android backdoor.
Chapter 17, Social-Engineer Toolkit, includes how to get started with the Social-Engineer Toolkit, spear-phishing attack vectors, website attack vectors, working with the multi-attack web method, and infectious media generation.
Chapter 18, Working with Modules for Penetration Testing, covers auxiliary modules, DoS attack modules, post-exploitation modules, and module analyzing and building.
To successfully take advantage of this book, it is recommended that you have a basic understanding of the following topics:
Linux OS installation
Unix/Linux command-line usage
The HTML language
PHP web application programming
Python programming
The practical exercises also make use of the following systems, as physical or virtual machines:
A Metasploitable 2 vulnerable machine
A Metasploitable 3 vulnerable machine
A Windows 7 x86 client machine
A Windows 10 client machine
An Android OS device or a virtual machine
The only hardware necessary is a personal computer, with an operating system capable of running VirtualBox or other virtualization software. As for specifications, the recommended setup is as follows:
Intel i5, i7, or a similar CPU
500 GB of hard drive space
8 GB of RAM
An internet connection
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
Log in or register at www.packt.com.
Select the SUPPORT tab.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Improving-your-Penetration-Testing-Skills. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/9781838646073_ColorImages.pdf.
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "The msfdb command allows you to manage the Metasploit Framework database, not just initialize the database."
A block of code is set as follows:
print_status psh_exec(script)
print_good 'Finished!'
Any command-line input or output is written as follows:
root@kali:~# systemctl start postgresql
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Select the Kali Linux virtual machine and click on Settings."
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
A web application uses the HTTP protocol for client-server communication and requires a web browser as the client interface. It is probably the most ubiquitous type of application in modern companies, from Human Resources' organizational climate surveys to IT technical services for a company's website. Even thick-client and mobile applications and many Internet of Things (IoT) devices make use of web components through web services and the web interfaces that are embedded into them.
Not long ago, it was thought that security was necessary only at the organization's perimeter and only at the network level, so companies spent considerable amounts of money on physical and network security. This, however, brought a somewhat false sense of security, given the reliance on web technologies both inside and outside of the organization. In recent years, we have seen news of spectacular data leaks and breaches of millions of records, including information such as credit card numbers, health histories, home addresses, and the Social Security Numbers (SSNs) of people from all over the world. Many of these attacks started with the exploitation of a web vulnerability or design failure.
Modern organizations acknowledge that they depend on web applications and web technologies, and that they are as prone to attack as their network and operating systems—if not more so. This has resulted in an increase in the number of companies that provide protection or defense services against web attacks, as well as the appearance or growth of technologies such as Web Application Firewall (WAF), Runtime Application Self-Protection (RASP), web vulnerability scanners, and source code scanners. There has also been an increase in the number of organizations that find it valuable to test the security of their applications before releasing them to end users. This provides an opportunity for talented hackers and security professionals to use their skills to find flaws and provide advice on how to fix them, thereby helping companies, hospitals, schools, and governments to have more secure applications and increasingly improved software development practices.
Penetration testing and ethical hacking are proactive ways of testing web applications by performing attacks that are similar to a real attack that could occur on any given day. They are executed in a controlled way with the objective of finding as many security flaws as possible and to provide feedback on how to mitigate the risks posed by such flaws.
It is very beneficial for companies to perform security testing on applications before releasing them to end users. In fact, there are security-conscious corporations that have nearly completely integrated penetration testing, vulnerability assessments, and source code reviews in their software development cycle. Thus, when they release a new application, it has already been through various stages of testing and remediation.
People are often confused by the following terms, using them interchangeably without understanding that, although some aspects of these terms overlap, there are also subtle differences that require your attention:
Ethical hacking
Penetration testing
Vulnerability assessment
Security audits
Hacking is a widely misunderstood term; it means different things to different people, and more often than not a hacker is thought of as a person sitting in a dark enclosure with no social life and malicious intent. Thus, the word ethical is prefixed here to the term hacking. The term ethical hacker refers to professionals who work to identify loopholes and vulnerabilities in systems, report them to the vendor or owner of the system, and, at times, help them fix the system. The tools and techniques used by an ethical hacker are similar to the ones used by a cracker or a black hat hacker, but the aim is different, as they are applied in a professional and legitimate way. Ethical hackers are also known as security researchers.
Penetration testing is a term that we will use very often in this book, and it is a subset of ethical hacking. It is a more professional term used to describe what an ethical hacker does. If you are planning a career in ethical hacking or security testing, you will often see job postings with the title Penetration Tester. Although penetration testing is a subset of ethical hacking, it differs in many ways. It's a more streamlined way of identifying vulnerabilities in systems and finding out whether they are exploitable or not. Penetration testing is governed by a contract between the tester and the owner of the systems to be tested. You need to define the scope of the test in order to identify the systems to be tested. Rules of Engagement need to be defined, which determine the way in which the testing is to be done.
At times, organizations might want only to identify the vulnerabilities that exist in their systems without actually exploiting them and gaining access. Vulnerability assessments are broader than penetration tests. The end result of a vulnerability assessment is a report prioritizing the vulnerabilities found, with the most severe ones listed at the top and the ones posing a lesser risk appearing lower in the report. This report is very helpful for clients who know that they have security issues and who need to identify and prioritize the most critical ones.
Auditing is a systematic procedure that is used to measure the state of a system against a predetermined set of standards. These standards can be industry best practices or an in-house checklist. The primary objective of an audit is to measure and report on conformance. If you are auditing a web server, some of the initial things to look out for are the open ports on the server, harmful HTTP methods, such as TRACE, enabled on the server, the encryption standard used, and the key length.
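As a quick, illustrative example (the hostname is a placeholder and the exact checks depend on the audit checklist), these points can be reviewed from Kali Linux with standard tools:
# Enumerate open web ports and the HTTP methods the server accepts
root@kali:~# nmap -p 80,443 --script http-methods target.example.com
# Manually check whether the TRACE method is enabled
root@kali:~# curl -i -X TRACE http://target.example.com/
# Review the TLS/SSL configuration (protocols, ciphers, and key lengths)
root@kali:~# sslscan target.example.com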
When planning to execute a penetration testing project, be it for a client as a professional penetration tester or as part of a company's internal security team, there are aspects that always need to be considered before starting the engagement.
Rules of Engagement (RoE) is a document that deals with the manner in which the penetration test is to be conducted. Some of the directives that should be clearly spelled out in RoE before you start the penetration test are as follows:
The type and scope of testing
Client contact details
Client IT team notifications
Sensitive data handling
Status meeting and reports
The type of testing can be black box, white box, or an intermediate gray box, depending on how the engagement is performed and the amount of information shared with the testing team.
There are things that can and cannot be done in each type of testing. With black box testing, the testing team works from the view of an attacker who is external to the organization, as the penetration tester starts from scratch and tries to identify the network map, the defense mechanisms implemented, the internet-facing websites and services, and so on. Even though this approach may be more realistic in simulating an external attacker, you need to consider that such information may be easily gathered from public sources, or that the attacker may be a disgruntled employee or ex-employee who already possesses it. Thus, it may be a waste of time and money to take a black box approach if, for example, the target is an internal application meant to be used by employees only.
White box testing is where the testing team is provided with all of the available information about the targets, sometimes even including the source code of the applications, so that little or no time is spent on reconnaissance and scanning. In a gray box test, partial information, such as URLs of applications, user-level documentation, and/or user accounts, is provided to the testing team.
Gray box testing is especially useful when testing web applications, as the main objective is to find vulnerabilities within the application itself, not in the hosting server or network. Penetration testers can work with user accounts to adopt the point of view of a malicious user or an attacker that gained access through social engineering.
Even when all of the necessary precautions are taken when conducting tests, things can occasionally go wrong, because testing involves making computers do things they were not designed to do. Having the right contact information on the client side really helps; a penetration test can inadvertently turn into a Denial-of-Service (DoS) attack. The technical team on the client side should be available 24/7 in case a computer goes down and a hard reset is needed to bring it back online.
Penetration tests are also used as a means to check the readiness of the support staff in responding to incidents and intrusion attempts. You should discuss with the client whether it is to be an announced or unannounced test. If it's an announced test, make sure that you inform the client of the time and date, as well as the source IP addresses from where the testing (attack) will be done, in order to avoid any real intrusion attempts being missed by their IT security team. If it's an unannounced test, discuss with the client what will happen if the test is blocked by an automated system or network administrator. Does the test end there, or do you continue testing? It all depends on the aim of the test, whether it's conducted to test the security of the infrastructure or to check the response of the network security and incident handling team. Even if you are conducting an unannounced test, make sure that someone in the escalation matrix knows about the time and date of the test. Web application penetration tests are usually announced.
During test preparation and execution, the testing team will be provided with and may also find sensitive information about the company, the system, and/or its users. Sensitive data handling needs special attention in the RoE and proper storage and communication measures should be taken (for example, full disk encryption on the testers' computers, encrypting reports if they are sent by email, and so on). If your client is covered under the various regulatory laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), or the European data privacy laws, only authorized personnel should be able to view personal user data.
Communication is key for a successful penetration test. Regular meetings should be scheduled between the testing team and the client organization and routine status reports issued by the testing team. The testing team should present how far they have reached and what vulnerabilities have been found up to that point. The client organization should also confirm whether their detection systems have triggered any alerts resulting from the penetration attempt. If a web server is being tested and a WAF was deployed, it should have logged and blocked attack attempts. As a best practice, the testing team should also document the time when the test was conducted. This will help the security team in correlating the logs with the penetration tests.
Although penetration tests are recommended and should be conducted on a regular basis, there are certain limitations to penetration testing. The quality of the test and its results will directly depend on the skills of the testing team. Penetration tests cannot find all of the vulnerabilities due to the limitation of scope, limitation of access of penetration testers to the testing environment, and limitations of tools used by the tester. The following are some of the limitations of a penetration test:
Limitation of skills: As mentioned earlier, the success and quality of the test will directly depend on the skills and experience of the penetration testing team. Penetration tests can be classified into three broad categories: network, system, and web application penetration testing. You will not get correct results if you make a person skilled in network penetration testing work on a project that involves testing a web application. With the huge number of technologies deployed on the internet today, it is hard to find a person skillful in all three. A tester may have in-depth knowledge of Apache web servers, but might be encountering an IIS server for the first time. Past experience also plays a significant role in the success of the test; mapping a low-risk vulnerability to a system that has a high level of threat is a skill that is only acquired through experience.
Limitation of time: Penetration testing is often a short-term project that has to be completed in a predefined time period. The testing team is required to produce results and identify vulnerabilities within that period. Attackers, on the other hand, have much more time to work on their attacks and can plan them carefully. Penetration testers also have to produce a report at the end of the test, describing the methodology, vulnerabilities identified, and an executive summary. Screenshots have to be taken at regular intervals, which are then added to the report. Clearly, an attacker will not be writing any reports and can therefore dedicate more time to the actual attack.
Limitation of custom exploits: In some highly secure environments, normal penetration testing frameworks and tools are of little use and the team is required to think outside of the box, such as by creating a custom exploit and manually writing scripts to reach the target. Creating exploits is extremely time consuming, and it affects the overall budget and time for the test. In any case, writing custom exploits should be part of the portfolio of any self-respecting penetration tester.
Avoiding DoS attacks: Hacking and penetration testing is the art of making a computer or application do things that it was not designed to do. Thus, at times, a test may lead to a DoS attack rather than gaining access to the system. Many testers do not run such tests in order to avoid inadvertently causing downtime on the system. Since systems are not tested for DoS attacks, they are more prone to attacks by script kiddies, who are just out there looking for such internet-accessible systems in order to seek fame by taking them offline.
Script kiddies are unskilled individuals who exploit easy-to-find and well-known weaknesses in computer systems in order to gain notoriety without understanding, or caring about, the potential harmful consequences. Educating the client about the pros and cons of a DoS test will help them to make the right decision.
Limitation of access: Networks are divided into different segments, and the testing team will often have access and rights to test only those segments that have servers and are accessible from the internet in order to simulate a real-world attack. However, such a test will not detect configuration issues and vulnerabilities on the internal network where the clients are located.
Limitations of tools used: Sometimes, the penetration testing team is only allowed to use a client-approved list of tools and exploitation frameworks. No single tool is complete, whether it is a free version or a commercial one. The testing team needs to be knowledgeable about these tools, and they will have to find alternatives when features are missing from them.
In order to overcome these limitations, large organizations have a dedicated penetration testing team that researches new vulnerabilities and performs tests regularly. Other organizations perform regular configuration reviews in addition to penetration tests.
With the huge number of internet-facing websites and the increase in the number of organizations doing business online, web applications and web servers make an attractive target for attackers. Web applications are everywhere across public and private networks, so attackers don't need to worry about a lack of targets. Only a web browser is required to interact with a web application. Some of the defects in web applications, such as logic flaws, can be exploited even by a layman. For example, if a badly implemented e-commerce website allows users to add items to their cart after the checkout process, a malicious user who finds this out through trial and error can exploit it easily, without needing any special tools.
Vulnerabilities in web applications also provide a means for spreading malware and viruses, and these can spread across the globe in a matter of minutes. Cybercriminals realize considerable financial gains by exploiting web applications and installing malware that will then be passed on to the application's users.
Firewalls at the edge are more permissive to inbound HTTP traffic flowing towards the web server, so the attacker does not require any special ports to be open. The HTTP protocol, which was designed many years ago, does not provide any built-in security features; it's a cleartext protocol, and it requires the additional layer of the HTTPS protocol in order to secure communication. It also does not provide individual session identification, leaving it to the developer to design this in. Many developers are hired directly out of college, and they have only theoretical knowledge of programming languages and no prior experience with the security aspects of web application programming. Even when a vulnerability is reported to the developers, they may take a long time to fix it, as they are busy with feature creation and enhancement of the web application.
Investing resources in writing secure code is an effective method for minimizing web application vulnerabilities. However, writing secure code is easy to say but difficult to implement.
Some of the most compelling reasons to guard against attacks on web applications are as follows:
Protecting customer data
Compliance with law and regulation
Loss of reputation
Revenue loss
Protection against business disruption
If the web application interacts with and stores credit card information, then it needs to be in compliance with the rules and regulations laid out by the Payment Card Industry (PCI). PCI has specific guidelines, such as reviewing all code for vulnerabilities in the web application or installing a WAF in order to mitigate the risk.
When the web application is not tested for vulnerabilities and an attacker gains access to customer data, it can severely affect the brand of the company if a customer files a lawsuit against the company for not adequately protecting their data. It may also lead to revenue losses, since many customers will move to competitors who might assure better security. Attacks on web applications may also result in severe disruption of service if it's a DoS attack, if the server is taken offline to clean up the exposed data, or for a forensics investigation. This might be reflected negatively in the financial statements.
These reasons should be enough to convince the senior management of your organization to invest resources in terms of money, manpower, and skills in order to improve the security of your web applications.
In this book, we will use the tools provided by Kali Linux to accomplish our testing. Kali Linux is a Debian-based GNU/Linux distribution. Kali Linux is used by security professionals to perform offensive security tasks, and it is maintained by a company known as Offensive Security. The predecessor of Kali Linux is BackTrack, which was one of the primary tools used by penetration testers for more than six years until 2013, when it was replaced by Kali Linux. In August 2015, the second version of Kali Linux was released with the code name Kali Sana, and in January 2016, it switched to a rolling release.
This means that the software is continuously updated without the need to change the operating system version. Kali Linux comes with a large set of popular hacking tools, which are ready to use with all of the prerequisites installed. We will take a deep dive into the tools and use them to test web applications that are vulnerable to major flaws which are found in real-world web applications.
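For example, under the rolling-release model, bringing an existing Kali Linux installation up to date is simply a matter of refreshing and upgrading its packages rather than reinstalling the operating system:
# Refresh the package lists and upgrade all installed packages
root@kali:~# apt update
root@kali:~# apt full-upgrade -y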
Web applications involve much more than just HTML code and web servers. If you are not a programmer who is actively involved in the development of web applications, then chances are that you are unfamiliar with the inner workings of the HTTP protocol, the different ways web applications interact with the database, and what exactly happens when a user clicks a link or enters the URL of a website into their web browser.
As a penetration tester, understanding how the information flows from the client to the server and database and then back to the client is very important. This section will include information that will help an individual who has no prior knowledge of web application penetration testing to make use of the tools provided in Kali Linux to conduct an end-to-end web penetration test. You will get a broad overview of the following:
HTTP protocol
Headers in HTTP
Session tracking using cookies
HTML
Architecture of web applications
The underlying protocol that carries web application traffic between the web server and the client is known as the Hypertext Transport Protocol (HTTP). HTTP/1.1, the most common implementation of the protocol, is defined in RFCs 7230-7237, which replaced the older version defined in RFC 2616. The latest version, known as HTTP/2, was published in May 2015, and it is defined in RFC 7540. The first release, HTTP/1.0, is now considered obsolete and is not recommended.
As the internet evolved, new features were added to subsequent releases of the HTTP protocol. HTTP/1.1 added features such as persistent connections, the OPTIONS method, and several improvements in the way HTTP supports caching.
HTTP is a client-server protocol, wherein the client (web browser) makes a request to the server and, in return, the server responds to the request. The response from the server is mostly in the form of HTML-formatted pages. By default, the HTTP protocol uses port 80, but the web server and the client can be configured to use a different port.
HTTP is a cleartext protocol, which means that all of the information between the client and server travels unencrypted, and it can be seen and understood by any intermediary in the communication chain. To tackle this deficiency in HTTP's design, a new implementation was released that establishes an encrypted communication channel with the Secure Sockets Layer (SSL) protocol and then sends HTTP packets through it. This was called HTTPS or HTTP over SSL. In recent years, SSL has been increasingly replaced by a newer protocol called Transport Layer Security (TLS), currently in version 1.2.
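To observe this layering in practice, the OpenSSL command-line client (covered in later chapters) can be used to open a TLS connection to a web server and then send a plain HTTP request through the encrypted channel; the host below is only an example:
# Establish a TLS session to port 443 and display the negotiated protocol and certificate
root@kali:~# openssl s_client -connect www.example.com:443
# Once the handshake completes, a request can be typed manually, followed by a blank line:
GET / HTTP/1.1
Host: www.example.com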
An HTTP request is the message a client sends to the server in order to get some information or execute some action. It has two parts separated by a blank line: the header and body. The header contains all of the information related to the request itself, response expected, cookies, and other relevant control information, and the body contains the data exchanged. An HTTP response has the same structure, changing the content and use of the information contained within it.
Here is an HTTP request captured using a web application proxy when browsing to www.bing.com:
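A representative request of this kind (the header names are real, but the values shown are abbreviated and purely illustrative) looks as follows:
GET / HTTP/1.1
Host: www.bing.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive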
The first line in this header indicates the method of the request: GET, the resource requested: / (that is, the root directory) and the protocol version: HTTP 1.1. There are several other fields that can be in an HTTP header. We will discuss the most relevant fields:
Host: This specifies the host and port number of the resource being requested. A web server may contain more than one site, or it may contain technologies such as shared hosting or load balancing. This parameter is used to distinguish between different sites/applications served by the same infrastructure.
User-Agent: This field is used by the server to identify the type of client (that is, web browser) which will receive the information. It is useful for developers in that the response can be adapted according to the user's configuration, as not all features in the HTTP protocol and in web development languages will be compatible with all browsers.
Cookie: Cookies are temporary values exchanged between the client and server and used, among other reasons, to keep session information.
Content-Type: This indicates to the server the media type contained within the request's body.
Authorization: HTTP allows for per-request client authentication through this parameter. There are multiple modes of authenticating, with the most common being Basic, Digest, NTLM, and Bearer.
Upon receiving a request and processing its contents, the server may respond with a message such as the one shown here:
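A representative response header (again, the values are illustrative) is shown below:
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Set-Cookie: SESSIONID=abc123; path=/; HttpOnly
Content-Length: 1020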
The first line of the response header contains the status code (200), which is a three-digit code. This helps the browser understand the status of the operation. The following are the details of a few important fields:
Status code: There is no field named status code, but the value is passed in the header. The 2xx series of status codes is used to communicate a successful operation back to the web browser. The 3xx series is used to indicate redirection, when the server wants the client to connect to another URL because a web page has moved. The 4xx series is used to indicate an error in the client request, meaning that the user will have to modify the request before resending it. The 5xx series indicates an error on the server side, as the server was unable to complete the operation. In the preceding header, the status code is 200.
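During testing, a quick way to observe these status codes without a full proxy is to have curl print only the code returned for a given URL (the address here is a placeholder); the command prints a single three-digit value such as 200 or 404:
# Print only the HTTP status code returned by the server
root@kali:~# curl -s -o /dev/null -w "%{http_code}\n" http://target.example.com/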
