As Linux continues to gain prominence, there has been a rise in network services being deployed on Linux for cost and flexibility reasons. If you are a networking professional or an infrastructure engineer involved with networks, extensive knowledge of Linux networking is a must.
This book will guide you in building a strong foundation of Linux networking concepts. The book begins by covering various major distributions, how to pick the right distro, and basic Linux network configurations. You'll then move on to Linux network diagnostics, setting up a Linux firewall, and using Linux as a host for network services. You'll discover a wide range of network services, why they're important, and how to configure them in an enterprise environment. Finally, as you work with the example builds in this Linux book, you'll learn to configure various services to defend against common attacks. As you advance to the final chapters, you’ll be well on your way towards building the underpinnings for an all-Linux datacenter.
By the end of this book, you'll be able to not only configure common Linux network services confidently, but also use tried-and-tested methodologies for future Linux installations.
Securely configure and operate Linux network services for the enterprise
Rob VandenBrink
BIRMINGHAM—MUMBAI
Copyright © 2021 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Wilson Dsouza
Publishing Product Manager: Yogesh Deokar
Senior Editor: Athikho Sapuni Rishana
Content Development Editor: Sayali Pingale
Technical Editor: Nithik Cheruvakodan
Copy Editor: Safis Editing
Project Coordinator: Neil Dmello
Proofreader: Safis Editing
Indexer: Manju Arasan
Production Designer: Nilesh Mohite
First published: September 2021
Production reference: 1150921
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
978-1-80020-239-9
www.packt.com
Dedicated to my wife, Karen: together, we make every year better than the last!
– Rob VandenBrink
Rob VandenBrink is a consultant with Coherent Security in Ontario, Canada. He is a volunteer with the Internet Storm Center, a site that posts daily blogs on information security and related stories. Rob also contributes as a volunteer to various security benchmarks at the Center for Internet Security, notably the Palo Alto Networks Firewall benchmark and the Cisco Nexus benchmark.
His areas of specialization include all facets of information security, network infrastructure, network and data center design, IT automation, orchestration, and virtualization. Rob has developed tools for ensuring policy compliance for VPN access users, a variety of networking tools native to Cisco IOS, as well as security audit/assessment tools for both Palo Alto Networks Firewall and VMware vSphere.
Rob has a master's degree in information security engineering from the SANS Technology Institute and holds a variety of SANS/GIAC, VMware, and Cisco certifications.
Melvin Reyes Martin is an enthusiastic senior network engineer who is very passionate about design, improvement, and automation. He has achieved expert-level certifications in networking, such as CCIE Enterprise Infrastructure and CCIE Service Provider. Melvin worked at Cisco Systems for 6 years, implementing exciting new networking technologies for internet service providers in the Latin America and Caribbean regions. He also possesses the Linux+ certification and loves to integrate open source projects into networking. Melvin is a big believer in cloud infrastructure and blockchain technology.
I would like to thank my wife, Nadiolis Varela, and my kids, Aaron and Matthew, for their help and encouragement over the years.
Welcome to Linux for Networking Professionals! If you've ever wondered how to reduce the cost of hosts and services that support your network, you've come to the right place. Or if you're considering how to start securing network services such as DNS, DHCP, or RADIUS, we can help you on that path as well.
If there's a service that helps you support your network, we've tried to cover how to get it up and running with a basic configuration, as well as helping you to start securing that service. Along the way, we've tried to help you pick a Linux distribution, show you how to use Linux for troubleshooting, and introduce you to a few services that you maybe didn't know that you needed.
Hopefully, the journey we take in this book helps you add new services to your network, and maybe helps you understand your network a bit better along the way!
This book is meant for anyone tasked with administering network infrastructure of almost any kind. If you are interested in the nuts and bolts of how things work in your network, this book is for you! You'll also find our discussion interesting if you are often left wondering how you will deliver the various services on your network that your organization needs, but might not have the budget to pay for commercial products. We'll cover how each of the Linux services we discuss works, as well as how you might configure them in a typical environment.
Finally, if you are concerned with how attackers view your network assets, you'll find lots to interest you! We discuss how attackers and malware commonly attack various services on your network, and how to defend those services.
Since our focus in this book is on Linux, you'll find that the budget for both deploying and defending the services we cover is measured more in your enthusiasm and time for learning new and interesting things, rather than in dollars and cents!
Chapter 1, Welcome to the Linux Family, consists of a short history of Linux and a description of various Linux distributions. Also, we provide some advice for selecting a Linux distribution for your organization.
Chapter 2, Basic Linux Network Configuration and Operations – Working with Local Interfaces, discusses network interface configuration in Linux, which can be a real stumbling block for many administrators, especially when the decision has been made that a server doesn't need a GUI. In this chapter, we'll discuss how to configure various network interface parameters, all from the command line, as well as lots of the basics of IP and MAC layer lore.
Chapter 3, Using Linux and Linux Tools for Network Diagnostics, covers diagnosing and resolving network problems, which is a daily journey for almost all network administrators. In this chapter, we'll continue the exploration that we started in the previous chapter, layering on TCP and UDP basics. With that in hand, we'll discuss local and remote network diagnostics using native Linux commands, as well as common add-ons. We'll end this chapter with a discussion of assessing wireless networks.
Chapter 4, The Linux Firewall, explains that the Linux firewall can be a real challenge for many administrators, especially since there are multiple different "generations" of the iptables/ipchains firewall implementation. We'll discuss the evolution of the Linux firewall and implement it to protect specific services on Linux.
Chapter 5, Linux Security Standards with Real-Life Examples, covers securing your Linux host, which is always a moving target, depending on the services implemented on that host and the environment it's deployed to. We'll discuss these challenges, as well as various security standards that you can use to inform your security decisions. In particular, we'll discuss the Center for Internet Security (CIS) Critical Controls, and work through a few of the recommendations in a CIS Benchmark for Linux.
Chapter 6, DNS Services on Linux, explains how DNS works in different instances, and how to implement DNS services on Linux, both internally and internet-facing. We'll also discuss various attacks against DNS, and how to protect your server against them.
Chapter 7, DHCP Services on Linux, covers DHCP, which is used to issue IP addresses to client workstations, as well as to "push" a myriad of configuration options to client devices of all kinds. In this chapter, we'll illustrate how to implement this on Linux for traditional workstations, and discuss things you should consider for other devices, such as Voice over IP (VoIP) phones.
Chapter 8, Certificate Services on Linux, covers certificates, which are often viewed as "the bogeyman" in many network infrastructures. In this chapter, we try to demystify how they work, and how to implement a free certificate authority on Linux for your organization.
Chapter 9, RADIUS Services for Linux, explains how to use RADIUS on Linux as the authentication for various network devices and services.
Chapter 10, Load Balancer Services for Linux, explains that Linux makes a great load balancer, allowing "for free" load balancing services tied to each workload, rather than the traditional, expensive, and monolithic "per data center" load balancing solutions that we see so often.
Chapter 11, Packet Capture and Analysis in Linux, discusses using Linux as a packet capture host. This chapter covers how to make this happen network-wise, as well as exploring various filtering methods to get the information you need to solve problems. We use various attacks against a VoIP system to illustrate how to get this job done!
Chapter 12, Network Monitoring Using Linux, covers using Linux to centrally log traffic using syslog, as well as real-time alerting on keywords found in logs. We also have a discussion on logging network traffic flow patterns, using NetFlow and related protocols.
Chapter 13, Intrusion Prevention Systems on Linux, explains that Linux applications are used to alert on and block common attacks, as well as adding important metadata to traffic information. We explore two different solutions in this regard, and show how to apply various filters to uncover various patterns in traffic and attacks.
Chapter 14, Honeypot Services on Linux, covers using honeypots as "deception hosts" to distract and delay your attackers, while providing high-fidelity alerts to the defenders. We also discuss using honeypots for research into trends in malicious behavior on the public internet.
In this book, we'll base most of our examples and builds on a default installation of Ubuntu Linux. You can certainly install Ubuntu on "bare metal" hardware, but you may find that using a virtualization solution such as VMware (Workstation or ESXi), VirtualBox, or Proxmox can really benefit your learning experience (all of these except for VMware Workstation are free). Using virtualization options, you can take "snapshots" of your host at known good points along the way, which means that if you clobber something while experimenting with a tool or feature, it is very easy to just roll back that change and try it again.
Also, using virtualization allows you to make multiple copies of your host so that you can implement features or services in a logical way, rather than trying to put all the services we discuss in this book on the same host.
We use several Linux services in this book, mostly implemented on Ubuntu Linux version 20 (or newer). These services are summarized here:
In addition, we use or discuss several "add-on" Linux tools that you might not be familiar with:
Most of the tools and services referenced can be installed on a single Linux host as the book progresses. This works well for a lab setup, but in a real network you will, of course, split important services across different hosts.
Some tools we explore as part of a pre-built or pre-packaged distribution. In these cases, you can certainly install that same distribution in your hypervisor, but you can also simply follow along in that chapter to get a good appreciation for the concepts, approaches, and pitfalls as they are illustrated.
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781800202399_ColorImages.pdf.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Linux-for-Networking-Professionals. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at
https://github.com/PacktPublishing/. Check them out!
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "All three tools are free, and all can be installed with the standard apt-get install <package name> command."
Any command-line input or output is written as follows:
$ sudo kismet -c <wireless interface name>
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "In the Linux GUI, you could start by clicking the network icon on the top panel, then select Settings for your interface."
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
This section outlines the various Linux options available to the reader, and why they might select Linux to deliver various network functions or services. In addition, basic Linux network configuration is covered in some depth. This section sets the stage for all the subsequent chapters.
This part of the book comprises the following chapters:
Chapter 1, Welcome to the Linux Family
Chapter 2, Basic Linux Network Configuration and Operations – Working with Local Interfaces

This book explores the Linux platform and various Linux-based operating systems – in particular, how Linux can work well for networking services. We'll start by discussing some of the history of the operating system before looking at its basic configuration and troubleshooting. From there, we'll work through building various network-related services on Linux that you may commonly see in most organizations. As we progress, we'll build real services on real hosts, with an emphasis on securing and troubleshooting each service as we go. By the time we're done, you should be familiar enough with each of these services to start implementing some or all of them in your own organization. As they say, every journey begins with a single step, so let's take that step and start with a general discussion of the Linux platform.
In this chapter, we'll start our journey by exploring Linux as a family of operating systems. They're all related, but each is unique in its own way, with different strengths and features.
We'll cover the following topics:
Why Linux is a good fit for a networking team
Mainstream data center Linux
Specialty Linux distributions
Virtualization
Picking a Linux distribution for your organization

In this book, we'll explore how to support and troubleshoot your network using Linux and Linux-based tools, as well as how to securely deploy common networking infrastructure on Linux platforms.
Why would you want to use Linux for these purposes? To begin with, the architecture, history, and culture of Linux steers administrators toward scripting and automating processes. While carrying this to extremes can get people into funny situations, scripting routine tasks can be a real time-saver.
In fact, scripting non-routine tasks, such as something that needs doing once per year, can be a lifesaver as well – it means that administrators don't need to relearn how to do that thing they did 12 months ago.
Scripting routine tasks is an even bigger win. Over many years, Windows administrators have learned that doing one task hundreds of times in a Graphical User Interface (GUI) guarantees that we misclick at least a few times. Scripting tasks like that, on the other hand, guarantees consistent results. Not only that, but over a network, where administrators routinely perform operations for hundreds or thousands of stations, scripting is often the only way to accomplish tasks at larger scales.
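As a small illustration of scripting at that scale (the subnet here is just an example, not one of this book's lab networks), a three-line shell loop can sweep an entire /24 and report which addresses answer a ping:

# Hypothetical example - sweep 192.168.122.0/24 and report hosts that answer a single ping
for i in $(seq 1 254); do
  ping -c 1 -W 1 192.168.122.$i > /dev/null 2>&1 && echo "192.168.122.$i is up"
done

Doing the same thing by hand, host by host, in a GUI would take hours and invite exactly the misclicks described above.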
Another reason that network administrators prefer Linux platforms is that Linux (and before that, Unix) has been around since there have been networks to be part of. On the server side, the Linux (or Unix) implementations are what defined those services in the first place, while the matching Windows services are copies that have mostly grown to feature parity over time.
On the workstation side, if you need a tool to administer or diagnose something on your network, it's probably already installed. If the tool that you seek isn't installed, it's a one-line command to get it installed and running, along with any other tools, libraries, or dependencies required. And adding that tool does not require a license fee – both Linux and any tools installed on Linux are (almost without exception) free and open source.
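For example (the package names here are just illustrations), pulling a diagnostic tool onto a Debian- or Ubuntu-based host is a single command, with the Red Hat family using dnf or yum instead:

sudo apt install nmap       # Debian/Ubuntu and related distributions
sudo dnf install nmap       # Fedora/RHEL family (yum on older releases)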
Lastly, on both the server and desktop side, historically, Linux has been free. Even now, when for-profit companies have license fees for some of the main supported distributions (for instance, Red Hat and SUSE), those companies offer free versions of those distributions. Red Hat offers Fedora Linux and CentOS, both of which are free and, to one extent or another, act as test-bed versions for new features in Red Hat Enterprise Linux. openSUSE (free) and SUSE Linux (chargeable) are also very similar, with the SUSE distribution being more rigorously tested and seeing a more regular cadence for version upgrades. The enterprise versions are typically term-licensed, with that license granting the customer access to technical support and, in many cases, OS updates.
Many companies do opt for the licensed enterprise-ready versions of the OS, but many other companies choose to build their infrastructures on free versions of OpenSUSE, CentOS, or Ubuntu. The availability of free versions of Linux means that many organizations can operate with substantially lower IT costs, which has very much influenced where we have gone as an industry.
Over the years, one of the jokes in the information technology community is that next year was always going to be the year of the Linux desktop – where we'd all stop paying license fees for desktops and business applications, and everything would be free and open source.
Instead, what has happened is that Linux has been making steady inroads into the server and infrastructure side of many environments.
Linux has become a mainstay in most data centers, even if those organizations think they are a Windows-only environment. Many infrastructure components run Linux under the covers, with a nice web frontend to turn it into a vendor solution. If you have a Storage Area Network (SAN), it likely runs Linux, as do your load balancers, access points, and wireless controllers. Many routers and switches run Linux, as do pretty much all the new software-defined networking solutions.
Almost without fail, information security products are based on Linux. Traditional firewalls and next-generation firewalls, Intrusion Detection and Prevention Systems (IDS/IPS), Security Information and Event Management (SIEM) systems, and logging servers – Linux, Linux, Linux!
Why is Linux so pervasive? There are many reasons:
It is a mature operating system.
It has an integrated patching and updating system.
The basic features are simple to configure. The more complex features of the operating system can be more difficult to configure than on Windows, though. Look ahead to our chapters on DNS or DHCP for more information.
On the other hand, many features that might be for-sale products in a Windows environment are free to install on Linux. Since Linux is almost entirely file-based, it's fairly easy to keep it to a known baseline if you are a vendor who's basing their product on Linux.
You can build just about anything on top of Linux, given the right mix of (free and open source) packages, some scripting, and maybe some custom coding. If you pick the right distribution, the OS itself is free, which is a great motivator for a vendor trying to maximize profit or a customer trying to reduce their costs.
If the new Infrastructure as Code movement is what draws you, then you'll find that pretty much every coding language is represented in Linux and is seeing active development – from new languages such as Go and Rust, all the way back to Fortran and Cobol. Even PowerShell and .NET, which grew out of Windows, are completely supported on Linux. Most infrastructure orchestration engines (for instance, Ansible, Puppet, and Terraform) started on and supported Linux first.
On the cloud side of today's IT infrastructure, the fact that Linux is free has seen the cloud service providers push their clients toward that end of the spectrum almost from the start. If you've subscribed to any cloud service that is described as serverless or as a Service, behind the scenes, it's likely that that solution is almost all Linux.
Finally, now that we've seen the server and infrastructure side of IT move toward Linux, we should note that today's cell phones are steadily becoming the largest desktop platform in today's computing reality. In today's world, cell phones are generally either iOS- or Android-based, both of which are (you guessed it) Unix/Linux-based! So, the year of the Linux desktop has snuck up on us by changing the definition of desktop.
All of this makes Linux very important to today's networking or IT professionals. This book focuses on using Linux both as a desktop toolbox for the networking professional, as well as securely configuring and delivering various network services on a Linux platform.
To understand the origins of Linux, we must discuss the origins of Unix. Unix was developed in the late 1960s and early 1970s at Bell Labs. Dennis Ritchie and Ken Thompson were Unix's main developers. The name Unix was actually a pun based on the name Multics, an earlier operating system that inspired many of Unix's features.
In 1983, Richard Stallman and the Free Software Foundation started the GNU (a recursive acronym – GNU's Not Unix) project, which aspired to create a Unix-like operating system available to all for free. Out of this effort came the GNU Hurd kernel, which most would consider the precursor to today's Linux versions (the FSF would prefer we called them all GNU/Linux).
In 1991, Linus Torvalds released Linux, the first fully realized GNU kernel. It's important to note that mainstream Linux is normally considered to be a kernel that can be used to create an operating system, rather than an operating system on its own. Linux is still maintained with Linus Torvalds as the lead developer, but today, there is a much larger team of individuals and corporations acting as contributors. So, while technically Linux only refers to the kernel, in the industry, Linux generally refers to any of the operating systems that are built upon that kernel.
Since the early 1990s, hundreds of separate flavors of Linux have been released. Each of these is commonly called a distribution (or distro, for short). These are each based on the Linux kernel of the day, along with an installation infrastructure and a repository system for the OS and for updates. Most are unique in some way, either in the mix of base packages or the focus of the distro – some might be small in size to fit on smaller hardware platforms, some might focus on security, some might be intended as a general-purpose enterprise workhorse operating system, and so on.
Some distros have been "mainstream" for a period of time, and some have waned in popularity as time has gone by. The thing they all share is the Linux kernel, which they have each built upon to create their own distribution. Many distros have based their operating system on another distro, customizing that enough to justify calling their implementation a new distribution. This trend has given us the idea of a "Linux family tree" – where dozens of distributions can grow from a common "root." This is explored on the DistroWatch website at https://distrowatch.com/dwres.php?resource=family-tree.
An alternative to Linux, especially in the Intel/AMD/ARM hardware space, is Berkeley Software Distribution (BSD) Unix. BSD Unix is a descendent of the original Bell Labs Unix; it is not based on Linux at all. However, BSD and many of its derivatives are still free and share many characteristics (and a fair amount of code) with Linux.
To this day, the emphasis of both Linux and BSD Unix is that both are freely available operating systems. While commercial versions and derivatives are certainly available, almost all those commercial versions have matching free versions.
In this section, we looked at both the history and importance of Linux in the computing space. We understood how Linux emerged and how it found popularity in certain sections of the computing landscape. Now, we'll start looking at the different versions of Linux that are available to us. This will help us build on the information we need to make choices regarding which distro to use later in this chapter.
As we've discussed, Linux is not a monolithic "thing," but rather a varied or even splintered ecosystem of different distributions. Each Linux distribution is based on the same GNU/Linux kernel, but they are packaged into groups with different goals and philosophies, making for a wide variety of choices when an organization wants to start standardizing on their server and workstation platforms.
The main distributions that we commonly see in modern data centers are Red Hat, SUSE, and Ubuntu, with FreeBSD Unix being another alternative (albeit much less popular now than in the past). This is not to say that other distributions don't crop up on desktops or data centers, but these are the ones you'll see most often. These all have both desktop and server versions – the server versions often being more "stripped down," with their office productivity, media tools, and, often, the GUI removed.
Red Hat has recently been acquired by IBM (in 2019), but still maintains Fedora as one of its main projects. Fedora has both server and desktop versions, and remains freely available. The commercial version of Fedora is Red Hat Enterprise Linux (RHEL). RHEL is commercially licensed and has a formal support channel.
CentOS started as a free, community-supported version of Linux that was functionally compatible with the Red Hat Enterprise version. This made it very popular for server implementations in many organizations. In January 2014, Red Hat pulled CentOS into its fold, becoming a formal sponsor of the distro. In late 2020, it was announced that CentOS would no longer be maintained as a RHEL-compatible distribution but would rather "fit" somewhere between Fedora and RHEL – not so new as to be "bleeding edge," but not as stable as RHEL either. As part of this change, CentOS was renamed CentOS Stream.
Finally, Fedora is the distro that has the latest features and code, where new features get tried and tested. The CentOS Stream distro is more stable but is still "upstream" of RHEL. RHEL is a stable, fully tested operating system with formal support offerings.
Oracle/Scientific Linux is also seen in many data centers (and in Oracle's cloud offerings). Oracle Linux is based on Red Hat, and they advertise their product as being fully compatible with RHEL. Oracle Linux is free to download and use, but support from Oracle is subscription-based.
OpenSUSE is the community distribution that SUSE Linux is based on, similar to how Red Hat Enterprise Linux is based on Fedora.
SUSE Linux Enterprise Server (commonly called SLES) was, in the early days of Linux, the mainly European competitor for the US-based Red Hat distribution. Those days are in the past, however, and SUSE Linux is (almost) as likely to be found in Indiana as it is in Italy in modern data centers.
Similar to the relationship between Red Hat and CentOS, SUSE maintains both a desktop and a server version. In addition, they also maintain a "high-performance" version of the OS, which comes with optimizations and tools pre-installed for parallel computing. OpenSUSE occupies an "upstream" position to SLES, where changes can be introduced in a distro that is somewhat more "forgiving" of changes that might not always work out the first time. The OpenSUSE Tumbleweed distro has the newest features and versions, whereas OpenSUSE Leap is closer in versioning and stability to the SLE versions of the operating system. It is no accident that this model is similar to the Red Hat family of distros.
Ubuntu Linux is maintained by Canonical and is free to download, with no separate commercial or "upstream" options. It is based on Debian and has a unique release cycle. New versions of both the server and desktop versions are released every 6 months. A Long-Term Support (LTS) version is released every 2 years, with support for LTS versions of both the server and desktop running for 5 years from the release date. As with the other larger players, support is subscription-based, though free support from the community is a viable option as well.
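If you're ever unsure which release (and whether it's an LTS build) a given Ubuntu host is running, lsb_release will tell you; the version string below is just a representative example and will vary from host to host:

$ lsb_release -d
Description:    Ubuntu 20.04.3 LTS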
As you would expect, the server version of Ubuntu is focused more on the core OS, network, and data center services. The GUI is often de-selected during the installation of the server version. The desktop version, however, has several packages installed for office productivity, media creation, and conversion, as well as some simple games.
As we mentioned previously, the BSD "tree" of the family is derived from Unix rather than from the Linux kernel, but there is lots of shared code, especially once you look at the packages that aren't part of the kernel.
FreeBSD and OpenBSD were historically viewed as "more secure" than the earlier versions of Linux. Because of this, many firewalls and network appliances were built based on the BSD OS family, and remain on this OS to this day. One of the more "visible" BSD variants is Apple's commercial operating system OS X (now macOS). This is based on Darwin, which is, in turn, a fork of BSD.
As time marched on, however, Linux grew to have most of the same security capabilities as BSD, though for a time BSD arguably still shipped with more secure default settings than most Linux alternatives.
Linux now has security modules available that significantly increase its security posture. SELinux and AppArmor are the two main options that are available. SELinux was originally developed by the NSA and came to prominence in the Red Hat distros; it is fully implemented for SUSE, Debian, and Ubuntu as well. AppArmor is typically viewed as a simpler-to-implement option, with many (but not all) of the same features. AppArmor is available on Ubuntu, SUSE, and most other distros (with the notable exception of RHEL). Both options take a policy-based approach to significantly increase the overall security posture of the OS they are installed on.
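To check which (if either) of these modules is active on a host, each ships with a status utility; as a quick sketch (availability of these commands varies by distribution and installed packages):

sudo aa-status      # AppArmor status, for example on Ubuntu or SUSE
sestatus            # SELinux status, for example on the Red Hat family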
With the evolution of Linux to be more security focused, in particular with SELinux or AppArmor available (and recommended) for most modern Linux distributions, the "more secure" argument of BSD versus Linux is now mainly a historic perception rather than fact.
Aside from the mainstream Linux distributions, there are several distros that have been purpose-built for a specific set of requirements. They are all built on a more mainstream distro but are tailored to fit a specific set of needs. We'll describe a few here that you are most likely to see or use as a network professional.
Most commercial Network-attached Storage (NAS) and SAN providers are based on Linux or BSD. The front runners in open source NAS/SAN services, at the time of writing, seem to be TrueNAS (formerly FreeNAS) and XigmaNAS (formerly NAS4Free). Both have free and commercial offerings.
Networking and security companies offer a wide variety of firewall appliances, most of which are based on Linux or BSD. Many companies do offer free firewalls, some of the more popular being pfSense (free versions and pre-built hardware solutions available), OPNsense (freely available, with donations), and Untangle (which also has a commercial version). Smoothwall is another alternative, with both free and commercial versions available.
In this book, we'll explore using the on-board firewall in Linux to secure individual servers, or to secure a network perimeter.
Descended from BackTrack, and KNOPPIX before that, Kali Linux is a distribution based on Debian that is focused on information security. The underlying goal of this distribution is to collect as many useful penetration testing and ethical hacking tools as possible on one platform, and then ensure that they all work without interfering with each other. The newer versions of the distribution have focused on maintaining this tool interoperability as the OS and tools get updated (using the apt toolset).
SIFT is a distribution authored by the forensics team at the SANS Institute, focused on digital forensics and incident response tools and investigations. Similar to Kali, the goal of SIFT is to be a "one-stop shop" for free/open source tools in one field – Digital Forensics and Incident Response (DFIR). Historically, this was a distribution based on Ubuntu, but in recent years, this has changed – SIFT is now also distributed as a script that installs the tools on Ubuntu desktop or the Windows Subsystem for Linux (which is Ubuntu-based).
Security Onion is also similar to Kali Linux in that it contains several information security tools, but its focus is more from the defender's point of view. This distribution is centered on threat hunting, network security monitoring, and log management. Some of the tools in this distribution include Suricata, Zeek, and Wazuh, just to name a few.
Virtualization has played a major role in the adoption of Linux and the ability to work with multiple distributions at once. With a local hypervisor, a network professional can run dozens of different "machines" on their laptop or desktop computers. While VMware was the pioneer in this space (desktop and dedicated virtualization), they have since been joined by Xen, KVM, VirtualBox, and QEMU, just to name a few. While the VMware products are all commercial products (except for VMware Player), the other solutions listed are, at the time of writing, still free. VMware's flagship hypervisor, ESXi, is also available for free as a standalone product.
The increasing stability of Linux and the fact that virtualization is now mainstream has, in many ways, made our modern-day cloud ecosystems possible. Add to this the increasing capabilities of automation in deploying and maintaining backend infrastructure and the sophistication available to the developers of web applications and Application Programming Interfaces (APIs), and what we get is the cloud infrastructures of today. Some of the key features of this are as follows:
A multi-tenant infrastructure, where each customer maintains their own instances (virtual servers and virtual data centers) in the cloud.
Granular costing, either by month or, more commonly, by resources used over time.
Reliability that is as good as or better than many modern data centers (though recent outages have shown what happens when we put too many eggs in the same basket).
APIs that make automating your infrastructure relatively easy, so much so that for many companies, provisioning and maintaining their infrastructure has become a coding activity (often called Infrastructure as Code).
These APIs make it possible to scale up (or down) on capacity as needed, whether that is storage, computing, memory, session counts, or all four.

Cloud services are in business for a profit, though – any company that has decided to "forklift" their data center as is to a cloud service has likely found that all those small charges add up over time, eventually reaching or surpassing the costs of their on-premises data center. It's still often attractive on the dollars side, as those dollars are spent on operational expenses that can be directly attributed more easily than the on-premises capital expenditure model (commonly called Cap-Ex versus Op-Ex models).
As you can see, moving a data center to a cloud service brings lots of benefits that an organization likely wouldn't have the option of in an on-premises model. This only becomes more apparent as more cloud-only features are utilized.
In many ways, which distribution you select for your data center is not important – the main distributions all have similar functions, often have identical components, and often have similar vendor or community support options. However, because of the differences between these distros, what is important is that one distribution (or a set of similar distros) is selected.
The desired outcome is that your organization standardizes one distribution that your team can develop their expertise with. This also means that you can work with the same escalation team for more advanced support and troubleshooting, whether that is a consulting organization, a paid vendor support team, or a group of like-minded individuals on various internet forums. Many organizations purchase support contracts with one of "the big three" (Red Hat, SUSE, or Canonical, depending on their distribution).
Where you don't want to be is in the situation I've seen a few clients end up in. Having hired a person who is eager to learn, a year later, they found that each of the servers built that year was on a different Linux distribution, each built slightly differently. This is a short road to your infrastructure becoming the proverbial "science experiment" that never ends!
Contrast this with another client – their first server was a SUSE Linux for SAP, which is, as the name suggests, a SUSE Linux server, packaged with the SAP application that the client purchased (SAP HANA). As their Linux footprint grew with more services, they stuck with the SUSE platform, but went with the "real" SLES distribution. This kept them on a single operating system and, equally important for them, a single support license with SUSE. They were able to focus their training and expertise on SUSE. Another key benefit for them was that as they added more servers, they were able to apply a single "stream" of updates and patches with a phased approach. In each patch cycle, less critical servers got patched first, leaving the core business application servers to be patched a few days later, after their testing was complete.
The main advice in picking a distribution is to stick to one of the larger distributions. If people on your team have strong feelings about one of these, then definitely take that into consideration. For use within your organization, you will likely want to stay fairly close to one of the mainstream distributions – something that is regularly maintained and has a paid subscription model available for support. Even if you don't feel you need paid support today, that may not always be the case.
Now that we've discussed the history of Linux, along with several of the main distributions, I hope you are in a better position to appreciate the history and the central importance of the operating systems in our society. In particular, I hope that you have some good criteria to help you choose a distro for your infrastructure.
In this book, we'll choose Ubuntu as our distribution. It's a free distribution which, in its LTS version, gives us an OS that we can depend on being supported as we work through the various scenarios, builds, and examples that we'll discuss. It's also the distribution that is available natively on Windows (via the Windows Subsystem for Linux). This makes it an easy distro to become familiar with, even if you don't have server or workstation hardware to spare or even a virtualization platform to test with.
In the next chapter, we'll discuss getting your Linux server or workstation on the network. We'll illustrate working with the local interfaces and adding IP addresses, subnet masks, and any routes required to get your Linux host working in a new or existing network.
Windows Subsystem for Linux: https://docs.microsoft.com/en-us/windows/wsl/about
FreeBSD Unix: https://www.freebsd.org/
OpenBSD Unix: https://www.openbsd.org/
Linux/BSD differences: https://www.howtogeek.com/190773/htg-explains-whats-the-difference-between-linux-and-bsd/
TrueNAS: https://www.truenas.com/
XigmaNAS: https://www.xigmanas.com/
pfSense: https://www.pfsense.org/
OPNsense: https://opnsense.org/
Untangle: https://www.untangle.com/untangle
Kali Linux: https://www.kali.org/
SIFT: https://digital-forensics.sans.org/community/downloads; https://www.sans.org/webcasts/started-sift-workstation-106375
Security Onion: https://securityonionsolutions.com/software

In this chapter, we'll explore how to display and configure local interfaces and routes on your Linux host. As much as possible, we'll discuss both the new and legacy commands for performing these operations. This will include displaying and modifying IP addressing, local routes, and other interface parameters. Along the way, we'll discuss how IP addresses and subnet addresses are constructed using a binary approach.
This chapter should give you a solid foundation for the topics we cover in later chapters: troubleshooting networking problems, hardening our hosts, and installing secure services.
The topics covered in this chapter are as follows:
Working with your network settings – two sets of commands
Displaying interface IP information
IPv4 addresses and subnet masks
Assigning an IP address to an interface

In this and every other chapter, as we discuss various commands, you are encouraged to try them on your own computer. The commands in this book are all illustrated on Ubuntu Linux, version 20 (a Long-Term Support version), but should for the most part be identical or very similar on almost any Linux distribution.
For most of the Linux lifespan that people are familiar with, ifconfig (interface config) and related commands have been a mainstay of the Linux operating system, so much so that now that it's deprecated in most distributions, it still rolls off the fingers of many system and network administrators.
Why were these old network commands replaced? There are several reasons. Some new hardware (in particular, InfiniBand network adapters) are not well supported by the old commands. In addition, as the Linux kernel has changed over the years, the operation of the old commands has become less and less consistent over time, but pressure around backward compatibility made resolving this difficult.
The old commands are in the net-tools software package, and the new commands are in the iproute2 software package. New administrators should focus on the new commands, but familiarity with the old commands is still a good thing to maintain. It's still very common to find old computers running Linux, machines that might never be updated that still use the old commands. For this reason, we'll cover both toolsets.
The lesson to be learned from this is that in the Linux world, change is constant. The old commands are still available but are not installed by default.
To install the legacy commands, use this command:
robv@ubuntu:~$ sudo apt install net-tools
[sudo] password for robv:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
libfprint-2-tod1
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
net-tools
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/196 kB of archives.
After this operation, 864 kB of additional disk space will be used.
Selecting previously unselected package net-tools.
(Reading database ... 183312 files and directories currently installed.)
Preparing to unpack .../net-tools_1.60+git20180626.aebd88e-1ubuntu1_amd64.deb ...
Unpacking net-tools (1.60+git20180626.aebd88e-1ubuntu1) ...
Setting up net-tools (1.60+git20180626.aebd88e-1ubuntu1) ...
Processing triggers for man-db (2.9.1-1) ...
You may notice a few things in this install command and its output:
sudo: The sudo command was used – sudo essentially means do as the super user – so the command executes with root (administrator) privileges. This needs to be paired with the password of the user executing the command. In addition, that user needs to be properly entered in the configuration file /etc/sudoers. By default, in most distributions, the userid defined during the installation of the operating system is automatically included in that file. Additional users or groups can be added using the visudo command.

Why was sudo used? Installing software or changing network parameters and many other system operations require elevated rights – on a multi-user corporate system, you wouldn't want people who weren't administrators to be making these changes.
So, if sudo is so great, why don't we run everything as root? Mainly because this is a security issue. Of course, everything will work if you have root privileges. However, any mistakes and typos can have disastrous results. Also, if you are running with root privileges and happen to execute some malware, the malware will then have those same privileges, which is certainly less than ideal! If anyone asks, yes, Linux malware definitely exists and has sadly been with the operating system almost from the start.
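As a purely hypothetical illustration of the /etc/sudoers syntax mentioned above (the username is made up, and the file should only ever be edited through visudo):

# Example /etc/sudoers entry - grants the (hypothetical) user jsmith full sudo rights
jsmith    ALL=(ALL:ALL) ALL

# On Ubuntu, the same effect is usually achieved by adding the user to the sudo group:
sudo usermod -aG sudo jsmith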
apt: The apt command was used – apt stands for Advanced Package Tool, and installs not only the package requested, but also any required packages, libraries, or other dependencies required for that package to run. Not only that, but by default, it collects all of those components from online repositories (or repos). This is a welcome shortcut compared to the old process, where all the dependencies (at the correct versions) had to be collected, then installed in the correct order to make any new features work.

apt is the default installer on Ubuntu, Debian, and related distributions, but the package management application will vary between distributions. In addition to apt and its equivalents, installing from downloaded files is still supported. Debian, Ubuntu, and related distributions use deb files, while many other distributions use rpm files. This is summarized as follows:
So, now that we have a boatload of new commands to look at, how do we get more information on these? The man (for manual) command has documentation for most commands and operations in Linux. The man command for apt, for instance, can be printed using the man apt command; the output is as follows:
Figure 2.1 – apt man page
As we introduce new commands in this book, take a minute to review them using the man command – this book is meant more to guide you in your journey, not as a replacement for the actual operating system documentation.
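If you don't know the exact command name yet, man can also search page names and descriptions by keyword – the keyword route below is just an example:

man -k route        # list man pages whose name or description mentions "route"
apropos route       # equivalent to man -k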
Now that we've talked about the modern and legacy tools, and then installed the legacy net-tools commands, what are these commands, and what do they do?
Displaying interface information is a common task on a Linux workstation. This is especially true if your host adapter is set to be automatically configured, for instance using Dynamic Host Configuration Protocol (DHCP) or IPv6 autoconfiguration.
As we discussed, there are two sets of commands to do this. The ip command allows us to display or configure your host's network parameters on new operating systems. On old versions, you will find that the ifconfig command is used.
The ip command will allow us to display or update IP addresses, routing information, and other networking information. For instance, to display current IP address information, use the following command:
ip address
The ip command supports command completion, so ip addr or even ip a will give you the same results:
robv@ubuntu:~$ ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:33:2d:05 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.182/24 brd 192.168.122.255 scope global dynamic noprefixroute ens33
valid_lft 6594sec preferred_lft 6594sec
inet6 fe80::1ed6:5b7f:5106:1509/64 scope link noprefixroute
valid_lft forever preferred_lft forever
You'll see that even the simplest of commands will sometimes return more information than you might want. For instance, you'll see both IP version 4 (IPv4) and IPv6 information returned – we can limit this to only version 4 or 6 by adding -4 or -6 to the command-line options:
robv@ubuntu:~$ ip -4 ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.122.182/24 brd 192.168.122.255 scope global dynamic noprefixroute ens33
valid_lft 6386sec preferred_lft 6386sec
In this output, you'll see that the loopback interface (a logical, internal interface) has an IP address of 127.0.0.1, and the Ethernet interface ens33 has an IP address of 192.168.122.182.
Now would be an excellent time to type man ip and review the various operations that we can do with this command:
Figure 2.2 – ip man page
The ifconfig command has very similar functions to the ip command, but as we noted, it is seen mostly on old versions of Linux. The legacy commands have all grown organically, with features bolted on as needed. This has left us in a state where, as more complex things are displayed or configured, the syntax becomes less and less consistent. The more modern commands were designed from the ground up for consistency.
Let's duplicate our efforts using the legacy command; to display the interface IP, just type ifconfig:
robv@ubuntu:~$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1400
inet 192.168.122.22 netmask 255.255.255.0 broadcast 192.168.122.255
inet6 fe80::1ed6:5b7f:5106:1509 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:33:2d:05 txqueuelen 1000 (Ethernet)
RX packets 161665 bytes 30697457 (30.6 MB)
RX errors 0 dropped 910 overruns 0 frame 0
TX packets 5807 bytes 596427 (596.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 1030 bytes 91657 (91.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1030 bytes 91657 (91.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
As you can see, mostly the same information is displayed, in a slightly different format. If you review the man pages for both commands, you'll see that the options are more consistent in the ip command, and that ifconfig has less IPv6 support – for instance, you can't natively select an IPv4-only or IPv6-only display.
In the modern network commands, we'll use the exact same ip command to display our routing information. And, as you'd expect, the command is ip route, which can be shortened to as little as ip r:
robv@ubuntu:~$ ip route
default via 192.168.122.1 dev ens33 proto dhcp metric 100
169.254.0.0/16 dev ens33 scope link metric 1000
192.168.122.0/24 dev ens33 proto kernel scope link src 192.168.122.156 metric 100
robv@ubuntu:~$ ip r
default via 192.168.122.1 dev ens33 proto dhcp metric 100
169.254.0.0/16 dev ens33 scope link metric 1000
192.168.122.0/24 dev ens33 proto kernel scope link src 192.168.122.156 metric 100
From this output, we see that we have a default route pointing to 192.168.122.1. The default route is just that – if a packet is being sent to a destination that isn't in the routing table, the host will send that packet to its default gateway. The routing table will always prefer the "most specific" route – the route that most closely matches the destination IP. If there is no more specific match, the packet falls through to the default route, which covers 0.0.0.0 0.0.0.0 (in other words, the "if it doesn't match anything else" route). The host assumes that the default gateway IP belongs to a router, which will (hopefully) then know where to send that packet next.
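A handy way to see this route selection in action is ip route get, which asks the kernel which route, interface, and source address it would use for a specific destination. The destination below is just an example, and your output will differ, but it should look something like this:

robv@ubuntu:~$ ip route get 8.8.8.8
8.8.8.8 via 192.168.122.1 dev ens33 src 192.168.122.156 uid 1000
    cache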
We also see a route to 169.254.0.0/16. This is called a Link-Local Address, as defined in RFC 3927. RFC stands for Request for Comments, which serves as part of the informal peer review process that internet standards use as they are developed. The list of published RFCs is maintained by the IETF (Internet Engineering Task Force), at https://www.ietf.org/standards/rfcs/.