Description

Your one-stop guide to making the most of Bash programming

About This Book

  • From roots to leaves, learn how to program in Bash and automate daily tasks, pouring some spice in your scripts
  • Daemonize a script and make a real service of it, ensuring it's available at any time to process user-fed data or commands
  • This book provides functional examples that show you practical applications of commands

Who This Book Is For

If you're a power user or system administrator involved in writing Bash scripts to automate tasks, then this book is for you. This book is also ideal for advanced users who are engaged in complex daily tasks.

What You Will Learn

  • Understand Bash right from the basics and progress to an advanced level
  • Customise your environment and automate system routine tasks
  • Write structured scripts and create a command-line interface for your scripts
  • Understand arrays, menus, and functions
  • Securely execute remote commands using ssh
  • Write Nagios plugins to automate your infrastructure checks
  • Interact with web services, and a Slack notification script
  • Find out how to execute subshells and take advantage of parallelism
  • Explore inter-process communication and write your own daemon

In Detail

System administration is an everyday effort that involves a lot of tedious tasks and devious pitfalls. Knowing your environment is the key to unleashing the most powerful solution that will make your life easy as an administrator and show you the path to new heights. Bash is your Swiss Army knife for setting up your working or home environment the way you want, when you want.

This book will enable you to customize your system step by step, making a real, virtual home of your own out of it. The journey will take you swiftly from the basics of shell programming in Bash to more interesting and challenging tasks. You will be introduced to one of the most famous open source monitoring systems, Nagios, and write plugins for it. You'll see how to perform checks on your sites and applications.

Moving on, you'll discover how to write your own daemons so you can create your services and take advantage of inter-process communication to let your scripts talk to each other. So, despite these being everyday tasks, you'll have a lot of fun on the way. By the end of the book, you will have gained advanced knowledge of Bash that will help you automate routine tasks and manage your systems.

Style and approach

This book presents step-by-step instructions and expert advice on working with Bash and writing scripts. Starting from the basics, this book serves as a reference manual where you can find handy solutions and advice to make your scripts flexible and powerful.


Mastering Bash

Automate daily tasks with Bash

Giorgio Zarrelli

BIRMINGHAM - MUMBAI

Mastering Bash

Copyright © 2017 Packt Publishing

 

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: June 2017

 

Production reference: 1190617

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-78439-687-9

www.packtpub.com

Credits

Author

Giorgio Zarrelli

Copy Editors

Dipti Mankame

Yesha Gangani

Reviewer

Sebastian F. Colomar

 

Project Coordinator

Judie Jose

Commissioning Editor

Kartikey Pandey

Proofreader

Safis Editing

Acquisition Editor

Rahul Nair

Indexer

Rekha Nair

Content Development Editor

Abhishek Jadhav

Graphics

Kirk D'Penha

Technical Editor

Aditya Khadye

Production Coordinator

Melwyn Dsa

 

About the Author

Giorgio Zarrelli is a passionate GNU/Linux system administrator and Debian user, but has worked over the years with Windows, Mac, and OpenBSD, writing scripts, programming, installing and configuring services--whatever is required from an IT guy. He started tinkering seriously with servers back in his university days, when he took part in the Computational Philosophy Laboratory and was introduced to the Prolog language. As a young guy, he had fun being paid for playing games and writing about them in video game magazines. Then he grew up and worked as an IT journalist and Nagios architect, and recently moved over to the threat intelligence field, where a lot of interesting stuff is happening nowadays.

Over the years, he has worked for start-ups and well-established companies, among them In3 incubator and Onebip as a database and systems administrator, IBM as QRadar support, and Anomali as CSO, trying to find the best ways to help companies make the best out of IT.

Giorgio has written several books in Italian on different topics related to IT, from Windows security to Linux system administration, covering MySQL DB administration and Bash scripting.

 

At last, some acknowledgments since we cannot do much without the help of the people who make our lives better. Firstly, Ilaria, who had to go through all the weekends and the mornings I spent writing instead of strolling downtown. Then, mum and dad and my brother, Maurizio. Being Italian, my mum would kill me if I did not acknowledge her--and, by the way, they are such an important part of my life. Let’s keep it short, since I cannot thank all the people who enrich my life and have put some flourishes into this book. So let me thank my bosses at Anomali, Gabe and Mitul, for supporting me and letting me use a Mac (I do whatever needed to write a book, even if it is crazy) when my laptop broke and the replacement was stuck somewhere around the globe. Thanks to my editor, Abhishek, for being supportive, professional, and patient during the writing of this book. Finally, thank you, dear reader, for having a look at this book--sometimes IT can be boring; I've tried to make it fun.

About the Reviewer

Sebastian F. Colomar is a GNU/Linux system engineer specializing in the scripting, installation, configuration, and maintenance of Linux servers for better security and performance.

He is currently an infrastructure architect at Hanscan, having been a consultant for scripting and Linux administration for many companies, such as IBM, Indra, Thales, Accelya, Accenture, AXA, Cetelem, RTVCM, EMT, and ESA.

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1784396877.

 

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Preface

Bash is a common tool for everyday tasks that almost every Linux user relies on. Whatever you want to do, you have to log in to a shell, and most of the time it will be Bash. This book aims to explain how to get the most out of this tool: whether it be programming a plugin or a network client, or simply understanding why a double dot means what it means, we will dig a bit deeper than usual to become fully confident with our shell.

Starting from the basics, but with a different point of view, we will climb up step by step, focusing on the programming side of our environment, looking at how to prevent any issues in setting up our recurring tasks and ensuring that everything works fine. Make it once, take your time, debug, improve, and then fire and forget; as an old Linux saying states, "If it works, why change it?" Since we are dealing with sayings, we could stick to two other cornerstones: "KISS: Keep it simple, stupid" and "Do only one thing, but do it well." These are three principles around which Linux revolves: making something, not everything; making it simple and reliable; and taking your time to make it work well, so you do not have to modify it too often over time.

When something is focused and simple, it is easy to understand, well maintained, and safe. And that is our approach, since Bash is not only a tool but also the environment we spend a lot of time in; understanding it, making the best use of it, and keeping everything clean and tidy should be our daily aim.

What this book covers

Chapter 1, Let's Start Programming, is our first brush with the magic of Bash. We will use basic shell programming bits to write easy code that hints at the benefits of more advanced scripts.

Chapter 2, Operators, is where we perform some simple operations, such as checking whether something is greater, equal to, or less than something else and how to add, subtract, and fiddle with numbers. This is the first step toward imposing conditions on events dealt with in our scripts.

Chapter 3, Testing, explains how checking whether something fits into boundaries and certain conditions are met or not is fundamental to making our scripts able to react to events and to decide what to do based on real-time indicators coming from the system or from other programs.

Chapter 4, Quoting and Escaping, tells you how the shell has its own reserved words, which cannot be used without knowing exactly what they do. Furthermore, the variables hold values that must be preserved while we are working on them. This is where we'll learn to be cautious about what we are going to write.

Chapter 5, Menus, Arrays, and Functions, explores how to make the script interact with the user, for example, giving the user the chance to answer some questions and deal with the options highlighted. This involves the ability to create a command-line interface for the program itself and a way to store the data in a structure that will make it easy to retrieve that data. And that is what arrays are all about.

Chapter 6, Iterations, explains how iterations are fundamental to going over data and extracting and processing them based on some conditions while they last, for instance, or for some values we use as counters. We will learn how to use while and for loops.

Chapter 7, Plug into the Real World, introduces one of the most famous open source monitoring systems, Nagios, which is all about plugins. You can write complex programs in any language to perform whichever checks you want on your sites and applications. But some of the trickiest plugins I have used have been written in Bash, and nothing else.

Chapter 8, We Want to Chat, is about Slack, currently one of the most widely used messaging systems. Why not write a small fragment of code to send our thoughts over a Slack channel and, maybe, make a communication plugin out of it, enabling other scripts to send messages through the messaging system?

Chapter 9, Subshells, Signals, and Job Controls, discusses how sometimes a single process is not enough. Our script has to do many things at once, using a sort of raw parallelism to get to the desired outcome. Well, it's time to see what we can spawn inside a shell, how to control our jobs, and send signals.

Chapter 10, Let's Make a Process Chat, explores the topic of processes talking to each other, feeding each other data and sharing the burden of data elaboration. Pipes, redirections, process substitution, and a bit of netcat--this could open up new scenarios, and we'll see how.

Chapter 11, Living as a Daemon, explains how sometimes sending a script into the background is not enough. It will not survive long, but you can use some tricks such as double forking, setsid, and disowning to make it a bit devilish and survive until process death. Make it a daemon and let it wait for your orders.

Chapter 12, Remote Connections over SSH, tells you how scripts can be run locally, but they can do much more for you. They can log in remotely over a secure channel and issue commands on your behalf without you inputting any further instructions. Everything is stored in a key, which unlocks a whole bunch of new possibilities.

Chapter 13, It's Time for a Timer, discusses how to fully automate routine tasks. We have to have a method to run our scripts based on some conditions. The most common is based on time, such as hourly, daily, weekly, or monthly repetitions. Just think about a simple log rotation triggered on certain conditions, the most common being on a daily schedule.

Chapter 14, Time for Safety, explains how safety is a must in your working environment. Scripting often means access to remote servers and interacting with them, so learning some tricks to keep your server more secure will help you prevent intrusions and keep your job away from unwanted eyes.

What you need for this book

This book assumes a good level of experience with Linux operating systems and an intermediate knowledge of the Bash shell, and since there will be some chapters dealing with Nagios monitoring and Slack messaging, basic understanding of networking concepts is required.

A simple Linux installation is required with really low specifications, as even the Nagios plugin can be tested without requiring the actual installation of the monitoring system. So, this is the minimum configuration required:

CPU: single-core

Memory: 2 GB

Disk space: 20 GB

For this book, you will need the following software:

Linux operating system: Debian 8

Nagios Core 3.5.1

OpenSSH 6.7p1

rssh 2.3.4

Internet connectivity is required to install the necessary service packages and to try out some of the examples.

Who this book is for

This book is intended for advanced users who are engaged in complex daily tasks. Starting from the basics, this book aims to serve as a reference manual where one can find handy solutions and advice to make their scripts flexible and powerful.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "What is interesting here is that the value of real is slightly different between the two commands."

A block of code is set as follows:

#!/bin/bash

set -x

echo "The total disk allocation for this system is: "

echo -e "\n"

df -h

echo -e "\n"

set +x

df -h | grep /dm-0 | awk '{print "Space left on root partition: " $4}'

Any command-line input or output is written as follows:

gzarrelli:~$ time echo $0
/bin/bash

real 0m0.000s
user 0m0.000s
sys 0m0.000s

gzarrelli:~$

New terms and important words are shown in bold.

Warnings or important notes appear in a box like this.
Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book-what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.

2. Hover the mouse pointer on the SUPPORT tab at the top.

3. Click on Code Downloads & Errata.

4. Enter the name of the book in the Search box.

5. Select the book for which you're looking to download the code files.

6. Choose from the drop-down menu where you purchased this book from.

7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows

Zipeg / iZip / UnRarX for Mac

7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-Bash. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/MasteringBash_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

I/O redirection

As we saw in the previous pages, redirection is one of the last operations undertaken by Bash to parse and prepare the command line that will lead to the execution of a command. But what is a redirection? You can easily guess from your everyday experience: it means taking a stream that goes from one point to another and making it go somewhere else, like diverting the course of a river. In Linux and Unix it is much the same; just keep in mind the following two principles:

In Unix, each process, except for daemons, is supposed to be connected to a standard input, standard output, and standard error device

Every device in Unix is represented by a file

You can also think of these devices as streams:

  • Standard input, named stdin, is the intaking stream from which the process receives input data
  • Standard output, named stdout, is the outbound stream where the process writes its output data
  • Standard error, named stderr, is the stream where the process writes its error messages

These streams are also identified by a standard POSIX file descriptor, which is an integer used by the kernel as a handler to refer to them, as you can see in the following table:

Device   Mode    File descriptor
stdin    read    0
stdout   write   1
stderr   write   2

So, tinkering with the file descriptors for the three main streams means that we can redirect the flows between stdin and stdout, but also stderr, from one process to the other. In this way, we can make different processes communicate with each other, and this is actually a form of IPC, inter-process communication, which we will look at in more detail later in this book.
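As a quick illustration of the three streams at work before we meet the operators themselves, here is a minimal sketch (the filenames out.log and err.log are arbitrary names chosen for this example) that splits a command's normal output and its error messages into two separate files:

```shell
#!/bin/bash
# ls writes the listing of / to stdout (fd 1) and the complaint about the
# missing path to stderr (fd 2); each stream is sent to its own file.
# ls exits non-zero because of the missing path, hence the '|| true'.
ls / /nonexistent_path > out.log 2> err.log || true
echo "--- out.log ---"
cat out.log
echo "--- err.log ---"
cat err.log
```

Running it, the directory listing shows up only in out.log, while the error line about /nonexistent_path ends up in err.log.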

How do we redirect the Input/Output (I/O) from one process to another? We can reach this goal by making use of some special characters:

>

Let's start by stating that the default output of a process is usually the stdout. Whatever it returns is written to the stdout which, again usually, is the monitor or the terminal. Using the > character, we can divert this flow and make it go to a file. If the file does not exist, it is created; if it exists, it is truncated and its content is overwritten with the output stream of the process.

A simple example will clarify how the redirection to a file works:

gzarrelli:~$ echo "This is some content"

This is some content

We used the command echo to print a message on the stdout, and so we see the message written, in our case, to the text terminal that is usually connected to the shell:

gzarrelli:~$ ls -lah

total 0

drwxr-xr-x 2 zarrelli gzarrelli 68B 20 Jan 07:43 .

drwxr-xr-x+ 47 zarrelli gzarrelli 1.6K 20 Jan 07:43 ..

There is nothing on the filesystem, so the output went straight to the terminal, but the underlying directory was not affected. Now, time for a redirection:

gzarrelli:~$ echo "This is some content" > output_file.txt

Well, nothing to the screen; no output at all:

gzarrelli:~$ ls -lah

total 8

drwxr-xr-x 3 gzarrelli gzarrelli 102B 20 Jan 07:44 .

drwxr-xr-x+ 47 gzarrelli gzarrelli 1.6K 20 Jan 07:43 ..

-rw-r--r-- 1 gzarrelli gzarrelli 21B 20 Jan 07:44 output_file.txt

Actually, as you can see, the output did not vanish; it was simply redirected to a file on the current directory which got created and filled in:

gzarrelli:~$ cat output_file.txt

This is some content

Here we have something interesting. The cat command takes the content of the output_file.txt and sends it on the stdout. What we can see is that the output from the former command was redirected from the terminal and written to a file.

>>

This double mark answers a requirement we often face: how can we add more content coming from a process to a file without overwriting anything? Using this double character means that if no file is already in place, a new one is created; if it already exists, the new data is simply appended to it. Let's take the previous file and add some content to it:

gzarrelli:~$ echo "This is some other content" >> output_file.txt

gzarrelli:~$ cat output_file.txt

This is some content

This is some other content

Bingo, the file was not overwritten and the new content from the echo command was added to the old. Now, we know how to write to a file, but what about reading from somewhere else other than the stdin?

<

Just as the text terminal is the default stdout, the keyboard is the default stdin for a process, from where it expects some data. Again, we can divert the flow of data reading and make the process read from a file instead. For our example, we start by creating a file containing a set of unordered numbers:

gzarrelli:~$ echo -e '5\n9\n4\n1\n0\n6\n2' > to_sort

And let us verify its content, as follows:

gzarrelli:~$ cat to_sort

5

9

4

1

0

6

2

Now we can have the sort command read this file into its stdin, as follows:

gzarrelli:~$ sort < to_sort

0

1

2

4

5

6

9

Nice, our numbers are now in sequence, but we can do something more interesting:

gzarrelli:~$ sort < to_sort > sorted

What did we do? We simply gave the file to_sort to the command sort into its standard input, and at the same time, we concatenated a second redirection so that the output of sort is written into the file sorted:

gzarrelli:~$ cat sorted

0

1

2

4

5

6

9

So, we can concatenate multiple redirections and have some interesting results, but we can do something even trickier, that is, chaining together inputs and outputs, not on files but on processes, as we will see now.

|

The pipe character does exactly what its name suggests: it pipes a stream, be it the stdout or stderr, from one process to another, creating a simple inter-process communication facility:

gzarrelli:~$

ps aux | awk '{print $2, $3, $4}' | grep -v [A-Z] | sort -r -k 2 -g | head -n 3

95 0.0 0.0

94 0.0 0.0

93 0.0 0.0

In this example, we had a bit of fun. First, we got a list of processes, then piped the output to the awk utility, which printed only the second, third, and fourth fields of the output of the first command, giving us the process ID, CPU percentage, and memory percentage columns. Then, we got rid of the heading PID %CPU %MEM by piping the awk output to the input of grep, which performed a reverse pattern match, dropping any string containing an uppercase character rather than a number. In the next stage, we piped the output to the sort command, which reverse-ordered the data numerically based on the values in the second column. Finally, we wanted only three lines, and so we got the PIDs of the three processes with the heaviest CPU usage.
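To verify what each stage does without depending on the live ps output, the same chain can be rehearsed on a fixed, made-up listing (the sample lines below are invented for this sketch):

```shell
#!/bin/bash
# A fake two-process listing with a header: grep -v drops the header line
# (it contains uppercase letters), sort -r -k 2 -g orders the remaining
# lines numerically and in reverse on the CPU column, and head keeps the top one
printf 'PID CPU MEM\n12 0.5 1.0\n7 2.3 0.2\n' \
  | grep -v '[A-Z]' \
  | sort -r -k 2 -g \
  | head -n 1
```

The single line printed is the fake process with the highest CPU value, 7 2.3 0.2.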

Redirection can also be used for some kind of fun or useful stuff, as you can see in the following screenshot:

As you can see, there are two users on the same machine on different terminals; remember that each user has to be connected to a terminal. To be able to write to any user's terminal, you must be root or, as in this example, the same user on two different terminals. With the who command, we can identify which terminal (ttys) the user is connected to, that is, reading from, and we simply redirect the output of an echo command to his terminal. Because his session is connected to that terminal device, he will read whatever we send to it (hence, /dev/ttysXXX).

Everything in Unix is represented by a file, be it a device, a terminal, or anything we need access to. We also have some special files, such as /dev/null, which is a sinkhole - whatever you send to it gets lost:

gzarrelli:~$ echo "Hello" > /dev/null

gzarrelli:~$

And have a look at the following example too:

root:~$ ls

output_file.txt  sorted  to_sort

root:~$ mv output_file.txt /dev/null

root:~$ ls

to_sort
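A more common everyday use of /dev/null is silencing a command's error messages while still acting on its exit status; a minimal sketch:

```shell
#!/bin/bash
# The error message from ls is sunk into /dev/null, so nothing reaches the
# terminal; the || branch still sees the failure through the exit status
ls /nonexistent_path 2> /dev/null || echo "ls failed with status $?"
```

The error text never appears, but the failure itself is not hidden from the script.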

Great, there is enough to have fun, but it is just the beginning. There is a whole lot more to do with the file descriptors.

Messing around with stdin, stdout, and stderr

Well, if we tinker a little bit with the file descriptors and special characters we can have some nice, really nice, outcomes; let's see what we can do.

x< filename: This opens a file in read mode and assigns it the file descriptor x, whose value falls between 3 and 9. We can choose any of these descriptors, by means of which we can easily access the file content through the stdin.

1> filename: This redirects the standard output to filename. If it does not exist, it gets created; if it exists, the pre-existing data is overwritten.

1>> filename: This redirects the standard output to filename. If it does not exist, it is created; otherwise, the contents get appended to the pre-existing data.

2> filename: This redirects the standard error to filename. If it does not exist, it gets created; if it exists, the pre-existing data is overwritten.

2>> filename: This redirects the standard error to filename. If it does not exist, it is created; otherwise, the contents get appended to the pre-existing data.

&> filename: This redirects both the stdout and the stderr to filename. If it does not exist, it gets created; if it exists, the pre-existing data is overwritten.

2>&1: This redirects the stderr to the stdout. If you use this with a program, its error messages will be redirected to the stdout, that is, usually, the monitor.

y>&x: This redirects the file descriptor y to x, so that the output going to the file pointed to by descriptor y will be redirected to the file pointed to by descriptor x.

>&x: This redirects the file descriptor 1, which is associated with the stdout, to the file pointed to by the descriptor x, so whatever hits the standard output will be written to the file pointed to by x.

x<> filename: This opens a file in read/write mode and assigns the descriptor x to it. If the file does not exist, it is created, and if the descriptor is omitted, it defaults to 0, the stdin.

x<&-: This closes the file opened in read mode and associated with the descriptor x.

0<&- or <&-: This closes the file opened in read mode and associated with the descriptor 0, the stdin, which is then closed.

x>&-: This closes the file opened in write mode and associated with the descriptor x.

1>&- or >&-: This closes the file opened in write mode and associated with the descriptor 1, the stdout, which is then closed.
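As a sketch of 2>&1 in action (both.log is an arbitrary filename), note that the order of the redirections matters: stdout must be pointed at the file before stderr is duplicated onto it:

```shell
#!/bin/bash
# The group emits one line on stdout and one on stderr; '> both.log' sends
# fd 1 to the file first, then '2>&1' makes fd 2 a copy of fd 1
{ echo "to stdout"; echo "to stderr" >&2; } > both.log 2>&1
cat both.log
```

Both lines end up in both.log; had we written 2>&1 > both.log instead, stderr would have been duplicated onto the terminal before stdout was diverted, and the error line would have stayed on screen.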

If you want to see which file descriptors are associated with a process, you can explore the /proc directory and point to the following:

/proc/PID/fd

Under that path, replace PID with the ID of the process you want to explore; you will find all the file descriptors associated with it, as in the following example:

gzarrelli:~$ ls -lah /proc/15820/fd

total 0

dr-x------ 2 postgres postgres 0 Jan 20 17:59 .

dr-xr-xr-x 9 postgres postgres 0 Jan 20 09:59 ..

lr-x------ 1 postgres postgres 64 Jan 20 17:59 0 -> /dev/null (deleted)

l-wx------ 1 postgres postgres 64 Jan 20 17:59 1 -> /var/log/postgresql/postgresql-9.4-main.log

lrwx------ 1 postgres postgres 64 Jan 20 17:59 10 -> /var/lib/postgresql/9.4/main/base/16385/16587

lrwx------ 1 postgres postgres 64 Jan 20 17:59 11 -> socket:[13135]

lrwx------ 1 postgres postgres 64 Jan 20 17:59 12 -> socket:[1502010]

lrwx------ 1 postgres postgres 64 Jan 20 17:59 13 -> /var/lib/postgresql/9.4/main/base/16385/16591

lrwx------ 1 postgres postgres 64 Jan 20 17:59 14 -> /var/lib/postgresql/9.4/main/base/16385/16593

lrwx------ 1 postgres postgres 64 Jan 20 17:59 15 -> /var/lib/postgresql/9.4/main/base/16385/16634

lrwx------ 1 postgres postgres 64 Jan 20 17:59 16 -> /var/lib/postgresql/9.4/main/base/16385/16399

lrwx------ 1 postgres postgres 64 Jan 20 17:59 17 -> /var/lib/postgresql/9.4/main/base/16385/16406

lrwx------ 1 postgres postgres 64 Jan 20 17:59 18 -> /var/lib/postgresql/9.4/main/base/16385/16408

l-wx------ 1 postgres postgres 64 Jan 20 17:59 2 -> /var/log/postgresql/postgresql-9.4-main.log

lr-x------ 1 postgres postgres 64 Jan 20 17:59 3 -> /dev/urandom

l-wx------ 1 postgres postgres 64 Jan 20 17:59 4 -> /dev/null (deleted)

l-wx------ 1 postgres postgres 64 Jan 20 17:59 5 -> /dev/null (deleted)

lr-x------ 1 postgres postgres 64 Jan 20 17:59 6 -> pipe:[1502013]

l-wx------ 1 postgres postgres 64 Jan 20 17:59 7 -> pipe:[1502013]

lrwx------ 1 postgres postgres 64 Jan 20 17:59 8 -> /var/lib/postgresql/9.4/main/base/16385/11943

lr-x------ 1 postgres postgres 64 Jan 20 17:59 9 -> pipe:[13125]
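On Linux you can run the same check against your current shell, whose PID is held in the special parameter $$:

```shell
# List the file descriptors of the running shell itself;
# you should see at least 0, 1, and 2 (stdin, stdout, stderr)
ls -l /proc/$$/fd
```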

Nice, isn't it? So, let us do something that is really fun:

First, let's open a socket in read/write mode to the web server of a virtual machine created for this book and assign it the descriptor 9:

gzarrelli:~$ exec 9<> /dev/tcp/172.16.210.128/80 || exit 1

Then, let us write something to it; nothing complex:

gzarrelli:~$ printf 'GET /index2.html HTTP/1.1\nHost: 172.16.210.128\nConnection: close\n\n' >&9

We just requested a simple HTML file created for this example.

And now let us read the file descriptor 9:

gzarrelli:~$ cat <&9

HTTP/1.1 200 OK

Date: Sat, 21 Jan 2017 17:57:33 GMT

Server: Apache/2.4.10 (Debian)

Last-Modified: Sat, 21 Jan 2017 17:57:12 GMT

ETag: "f3-5469e7ef9e35f"

Accept-Ranges: bytes

Content-Length: 243

Vary: Accept-Encoding

Connection: close

Content-Type: text/html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"

"http://www.w3.org/TR/html4/strict.dtd">

<HTML>

<HEAD>

<TITLE>This is a test file</TITLE>

</HEAD>

<BODY>

<P>And we grabbed it through our descriptor!

</BODY>

</HTML>

That's it! We connected a file descriptor to a remote server through a socket, wrote a request to it, and read the response back, redirecting the streams over the network.
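The same read/write pattern can be tried without a remote server, using a named pipe as a stand-in endpoint — a sketch only, with an arbitrary temporary path:

```shell
# A local named pipe plays the role of the bidirectional endpoint
fifo=$(mktemp -u)    # just a fresh pathname, not yet created
mkfifo "$fifo"

# Opening a FIFO in read/write mode on one descriptor does not block
exec 9<> "$fifo"

printf 'ping\n' >&9   # write to the endpoint...
read -r reply <&9     # ...and read the data back from it
echo "$reply"         # ping

exec 9>&-             # close the descriptor
rm -f "$fifo"
```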

We have done a lot so far just with the command line, but if we want to go further, we have to see how to script all these commands and make the most out of them. It is time for our first script!

Time for the interpreter: the sha-bang

When the game gets tougher, a few concatenations on the command line are not enough to perform the tasks we are meant to accomplish. Too many bits on single lines get messy and we lose clarity, so it is better to store our commands or builtins in a file and have it executed.

When a script is executed, the system loader parses its first line looking for what is named the sha-bang or shebang, a two-character sequence:

#!

This will force the loader to treat the characters that follow as the path to the interpreter, plus its optional arguments, to be used to further parse the script; the script itself will then be passed as another argument to that interpreter. So, at the end, the interpreter will parse the script and, this time, ignore the sha-bang, since its first character is a hash, which usually marks a comment inside a script, and comments do not get executed. To go a little further, the sha-bang is what we call a two-byte magic number, a constant sequence of numbers or text values used in Unix to identify file or protocol types. So, 0x23 0x21 is actually the ASCII representation of #!.
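You can verify those two bytes yourself by dumping the start of a sha-bang line in hexadecimal with od, which is part of coreutils:

```shell
# Dump the first two bytes of a sha-bang line as hex:
# 0x23 is '#' and 0x21 is '!'
printf '#!/bin/sh\n' | head -c 2 | od -An -tx1
```

The output shows exactly the magic number pair, 23 21.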

So, let's make a little experiment and create a tiny one line script:

gzarrelli:~$ echo "echo \"This should go under the sha-bang\"" > test.sh

Just one line. Let's have a look:

gzarrelli:~$ cat test.sh

echo "This should go under the sha-bang"

Nice, everything is as we expected. Does Linux have something to say about our script? Let's ask:

gzarrelli:~$ file test.sh

test.sh: ASCII text

Well, the file utility says that it is plain ASCII text, and indeed it is a simple text file. Time for a nice trick:

gzarrelli:~$ sed -i '1s/^/#!\/bin\/sh\n/' test.sh

Nothing special; we just added a sha-bang pointing to /bin/sh:

gzarrelli:~$ cat test.sh

#!/bin/sh

echo "This should go under the sha-bang"

As expected, the sha-bang is there at the beginning of our file:

gzarrelli:~$ file test.sh

test.sh: POSIX shell script, ASCII text executable

No way, now it is a script! The file utility performs three different sets of tests to identify the type of file it is dealing with, in this order: filesystem tests, magic number tests, and language tests. In our case, it identified the magic number that represents the sha-bang, and so it reported the file as a script.

Now, a couple of final notes before moving on.

You can omit the

sha-bang

if your script does not use any shell

builtins

or shell internals

Pay attention to

/bin/sh

, not everything that looks like an innocent executable is what it seems:

gzarrelli:~$ ls -lah /bin/sh

lrwxrwxrwx 1 root root 4 Nov 8 2014 /bin/sh -> dash

On some systems, /bin/sh is a symbolic link to a different kind of interpreter, and if you are using some internals or builtins of Bash, your script could have unwanted or unexpected outcomes.
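If your script relies on Bash internals, one defensive sketch is to check at runtime which interpreter is actually running it; BASH_VERSION is a variable set only when Bash itself is the interpreter:

```shell
# Fail early if the interpreter is not actually Bash;
# BASH_VERSION is unset under dash and other plain POSIX shells
if [ -z "$BASH_VERSION" ]; then
    echo "This script needs Bash, not a plain POSIX shell" >&2
    exit 1
fi
echo "Running under Bash $BASH_VERSION"
```

Of course, the simpler cure is to point the sha-bang at /bin/bash explicitly instead of /bin/sh.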

Something went wrong, let's trace it

So, we have a new tiny script named disk.sh:

gzarrelli:~$ cat disk.sh

#!/bin/bash

echo "The total disk allocation for this system is: "

echo -e "\n"

df -h

echo -e "\n

df -h | grep /$ | awk '{print "Space left on root partition: " $4}'

Nothing special, a shebang, a couple of echoes on a new line just to have some vertical spacing, the output of df -h and the same command but parsed by awk to give us a meaningful message. Let's run it:

gzarrelli:~$ ./disk.sh

The total disk allocation for this system is:

Filesystem Size Used Avail Use% Mounted on

/dev/dm-0 19G 15G 3.0G 84% /

udev 10M 0 10M 0% /dev

tmpfs 99M 9.1M 90M 10% /run

tmpfs 248M 80K 248M 1% /dev/shm

tmpfs 5.0M 4.0K 5.0M 1% /run/lock

tmpfs 248M 0 248M 0% /sys/fs/cgroup

/dev/sda1 236M 33M 191M 15% /boot

tmpfs 50M 12K 50M 1% /run/user/1000

tmpfs 50M 0 50M 0% /run/user/0