Hadoop 2.x Administration Cookbook

Gurmukh Singh

Description

Hadoop enables the distributed storage and processing of large datasets across clusters of computers. Learning how to administer Hadoop is crucial to exploit its unique features. With this book, you will be able to overcome common problems encountered in Hadoop administration.
The book begins with laying the foundation by showing you the steps needed to set up a Hadoop cluster and its various nodes. You will get a better understanding of how to maintain Hadoop cluster, especially on the HDFS layer and using YARN and MapReduce. Further on, you will explore durability and high availability of a Hadoop cluster.
You’ll get a better understanding of the schedulers in Hadoop and how to configure and use them for your tasks. You will also get hands-on experience with the backup and recovery options and the performance tuning aspects of Hadoop. Finally, you will get a better understanding of troubleshooting, diagnostics, and best practices in Hadoop administration.
By the end of this book, you will have a proper understanding of working with Hadoop clusters and will also be able to secure, encrypt it, and configure auditing for your Hadoop clusters.




Table of Contents

Hadoop 2.x Administration Cookbook
Credits
About the Author
About the Reviewers
www.PacktPub.com
eBooks, discount offers, and more
Why subscribe?
Customer Feedback
Preface
What this book covers
What you need for this book
Who this book is for
Sections
Getting ready
How to do it…
How it works…
There's more…
See also
Conventions
Reader feedback
Customer support
Downloading the example code
Downloading the color images of this book
Errata
Piracy
Questions
1. Hadoop Architecture and Deployment
Introduction
Overview of Hadoop Architecture
Building and compiling Hadoop
Getting ready
How to do it...
How it works...
Installation methods
Getting ready
How to do it...
How it works...
Setting up host resolution
Getting ready
How to do it...
How it works...
Installing a single-node cluster - HDFS components
Getting ready
How to do it...
How it works...
There's more...
Setting up ResourceManager and NodeManager
Installing a single-node cluster - YARN components
Getting ready
How to do it...
How it works...
There's more...
See also
Installing a multi-node cluster
Getting ready
How to do it...
How it works...
Configuring the Hadoop Gateway node
Getting ready
How to do it...
How it works...
See also
Decommissioning nodes
Getting ready
How to do it...
How it works...
See also
Adding nodes to the cluster
Getting ready
How to do it...
How it works...
There's more...
2. Maintaining Hadoop Cluster HDFS
Introduction
Overview of HDFS
Configuring HDFS block size
Getting ready
How to do it...
How it works...
Setting up Namenode metadata location
Getting ready
How to do it...
How it works...
Loading data in HDFS
Getting ready
How to do it...
How it works...
Configuring HDFS replication
Getting ready
How to do it...
How it works...
See also
HDFS balancer
Getting ready
How to do it...
How it works...
Quota configuration
Getting ready
How to do it...
How it works...
HDFS health and FSCK
Getting ready
How to do it...
How it works...
See also
Configuring rack awareness
Getting ready
How to do it...
How it works...
See also
Recycle or trash bin configuration
Getting ready
How to do it...
How it works...
There's more...
Distcp usage
Getting ready
How to do it...
How it works...
Control block report storm
Getting ready
How to do it...
How it works...
Configuring Datanode heartbeat
Getting ready
How to do it...
How it works...
3. Maintaining Hadoop Cluster – YARN and MapReduce
Introduction
Running a simple MapReduce program
Getting ready
How to do it...
Hadoop streaming
Getting ready
How to do it...
How it works...
Configuring YARN history server
Getting ready
How to do it...
How it works...
There's more...
Job history web interface and metrics
Getting ready
How to do it...
How it works...
Configuring ResourceManager components
Getting ready
How to do it...
How it works...
There's more...
See also
YARN containers and resource allocations
Getting ready
How to do it...
How it works...
There's more...
See also
ResourceManager Web UI and JMX metrics
Getting ready
How to do it...
How it works...
Preserving ResourceManager states
Getting ready
How to do it...
How it works...
There's more...
4. High Availability
Introduction
Namenode HA using shared storage
Getting ready
How to do it...
How it works...
See also
ZooKeeper configuration
Getting ready
How to do it...
How it works...
Namenode HA using Journal node
Getting ready
How to do it...
How it works...
Resourcemanager HA using ZooKeeper
Getting ready
How to do it...
How it works…
Rolling upgrade with HA
Getting ready
How to do it...
How it works...
Configure shared cache manager
Getting ready
How to do it...
There's more...
See also
Configure HDFS cache
Getting ready
How to do it...
How it works...
See also
HDFS snapshots
Getting ready
How to do it...
How it works...
Configuring storage based policies
Getting ready
How to do it...
How it works...
Configuring HA for Edge nodes
Getting ready
How to do it...
How it works...
5. Schedulers
Introduction
Configuring users and groups
Getting ready
How to do it...
How it works...
See also
Fair Scheduler configuration
Getting ready
How to do it...
How it works...
Fair Scheduler pools
Getting ready
How to do it...
How it works...
Configuring job queues
Getting ready
How to do it...
How it works...
See also
Job queue ACLs
Getting ready
How to do it...
How it works...
See also
Configuring Capacity Scheduler
Getting ready
How to do it...
How it works...
See also
Queuing mappings in Capacity Scheduler
Getting ready
How to do it...
How it works...
YARN and Mapred commands
Getting ready
How to do it...
How it works...
YARN label-based scheduling
Getting ready
How to do it...
How it works...
YARN SLS
Getting ready
How to do it...
How it works...
6. Backup and Recovery
Introduction
Initiating Namenode saveNamespace
Getting ready
How to do it...
How it works...
Using HDFS Image Viewer
Getting ready
How to do it...
How it works...
Fetching parameters which are in-effect
Getting ready
How to do it...
How it works...
Configuring HDFS and YARN logs
Getting ready
How to do it...
How it works...
See also
Backing up and recovering Namenode
Getting ready
How to do it...
How it works...
See also
Configuring Secondary Namenode
Getting ready
How to do it...
How it works…
Promoting Secondary Namenode to Primary
Getting ready
How to do it...
How it works...
See also
Namenode recovery
Getting ready
How to do it...
How it works...
Namenode roll edits – online mode
Getting ready
How to do it...
How it works...
Namenode roll edits – offline mode
Getting ready
How to do it...
How it works...
Datanode recovery – disk full
Getting ready
How to do it...
How it works...
Configuring NFS gateway to serve HDFS
Getting ready
How to do it...
How it works...
Recovering deleted files
Getting ready
How to do it...
How it works...
7. Data Ingestion and Workflow
Introduction
Hive server modes and setup
Getting ready
How to do it...
How it works...
Using MySQL for Hive metastore
How to do it…
How it works...
Operating Hive with ZooKeeper
Getting ready
How to do it...
How it works...
Loading data into Hive
Getting ready
How to do it...
How it works...
See also
Partitioning and Bucketing in Hive
Getting ready
How to do it...
How it works...
See also
Hive metastore database
Getting ready
How to do it...
How it works...
See also
Designing Hive with credential store
Getting ready
How to do it...
How it works...
Configuring Flume
Getting ready
How to do it...
How it works...
Configure Oozie and workflows
Getting ready
How to do it...
How it works...
8. Performance Tuning
Tuning the operating system
Getting ready
How to do it...
How it works...
See also
Tuning the disk
Getting ready
How to do it...
How it works...
Tuning the network
Getting ready
How to do it...
How it works...
Tuning HDFS
Getting ready
How to do it...
How it works...
Tuning Namenode
Getting ready
How to do it...
There's more...
See also
Tuning Datanode
Getting ready
How to do it...
How it works...
See also
Configuring YARN for performance
Getting ready
How to do it...
How it works...
Configuring MapReduce for performance
Getting ready
How to do it...
How it works...
Hive performance tuning
Getting ready
How to do it...
There's more...
How it works...
Benchmarking Hadoop cluster
Getting ready
How to do it...
Benchmark 1 – Testing HDFS with TestDFSIO
Benchmark 2 – Stress testing Namenode
Benchmark 3 – MapReduce testing by generating small files
Benchmark 4 – TeraGen, TeraSort, and TeraValidate benchmarks
There's more...
How it works...
9. HBase Administration
Introduction
Setting up single node HBase cluster
Getting ready
How to do it...
How it works...
Setting up multi-node HBase cluster
Getting ready
How to do it...
How it works...
Inserting data into HBase
Getting ready
How to do it...
How it works...
Integration with Hive
Getting ready
How to do it...
How it works...
See also
HBase administration commands
Getting ready
How to do it...
How it works...
See also
HBase backup and restore
Getting ready
How to do it...
How it works...
Tuning HBase
Getting ready
How to do it...
How it works...
HBase upgrade
Getting ready
How to do it...
How it works...
Migrating data from MySQL to HBase using Sqoop
Getting ready
How to do it...
10. Cluster Planning
Introduction
Disk space calculations
Getting ready
How to do it...
How it works...
Nodes needed in the cluster
Getting ready
How to do it...
How it works...
See also
Memory requirements
Getting ready
How to do it...
How it works...
See also
Sizing the cluster as per SLA
Getting ready
How to do it...
How it works...
See also
Network design
Getting ready
How to do it...
How it works...
Estimating the cost of the Hadoop cluster
How to do it...
How it works...
Hardware and software options
How it works...
11. Troubleshooting, Diagnostics, and Best Practices
Introduction
Namenode troubleshooting
Getting ready
How to do it...
How it works...
See also
Datanode troubleshooting
Getting ready
How to do it...
How it works...
See also
Resourcemanager troubleshooting
Getting ready
How to do it…
How it works...
See also
Diagnose communication issues
Getting ready
How to do it...
How it works...
Parse logs for errors
Getting ready
How to do it...
How it works...
Hive troubleshooting
Getting ready
How to do it...
How it works...
See also
HBase troubleshooting
Getting ready
How to do it...
How it works...
Hadoop best practices
How it works...
12. Security
Introduction
Encrypting disk using LUKS
Getting ready
How to do it...
How it works...
See also
Configuring Hadoop users
Getting ready
How to do it...
How it works...
HDFS encryption at Rest
Getting ready
How to do it...
How it works...
Configuring SSL in Hadoop
Getting ready
How to do it...
How it works...
See also
In-transit encryption
Getting ready
How to do it...
There's more...
See also
Enabling service level authorization
Getting ready
How to do it...
How it works...
See also
Securing ZooKeeper
Getting ready
How to do it...
How it works...
Configuring auditing
Getting ready
How to do it...
How it works...
Configuring Kerberos server
Getting ready
How to do it...
How it works...
Configuring and enabling Kerberos for Hadoop
Getting ready
How to do it...
How it works...
Index

Hadoop 2.x Administration Cookbook

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: May 2017

Production reference: 1220517

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78712-673-2

www.packtpub.com

Credits

Author

Gurmukh Singh

Reviewers

Rajiv Tiwari

Wissem EL Khlifi

Commissioning Editor

Amey Varangaonkar

Acquisition Editor

Varsha Shetty

Content Development Editor

Deepti Thore

Technical Editor

Nilesh Sawakhande

Copy Editors

Laxmi Subramanian

Safis Editing

Project Coordinator

Shweta H Birwatkar

Proofreader

Safis Editing

Indexer

Francy Puthiry

Graphics

Tania Dutta

Production Coordinator

Nilesh Mohite

Cover Work

Nilesh Mohite

About the Author

Gurmukh Singh is a seasoned technology professional with 14+ years of industry experience in infrastructure design, distributed systems, performance optimization, and networks. He has worked in the big data domain for the last 5 years and provides consultancy and training on various technologies.

He has worked with companies such as HP, JP Morgan, and Yahoo.

He has authored Monitoring Hadoop, published by Packt Publishing (https://www.packtpub.com/big-data-and-business-intelligence/monitoring-hadoop).

I would like to thank my wife, Navdeep Kaur, and my lovely daughter, Amanat Dhillon, who have always supported me throughout the journey 
of this book.

About the Reviewers

Rajiv Tiwari is a freelance big data and cloud architect with over 17 years of experience across big data, analytics, and cloud computing for banks and other financial organizations. He is an electronics engineering graduate from IIT Varanasi, and has been working in England for the past 13 years, mostly in the financial city of London. Rajiv can be contacted on Twitter at @bigdataoncloud.

He is the author of the book Hadoop for Finance, an exclusive book for using Hadoop in banking and financial services.

I would like to thank my wife, Seema, and my son, Rivaan, for allowing me to spend their quota of time on reviewing this book.

Wissem El Khlifi is the first Oracle ACE in Spain and an Oracle Certified Professional DBA with over 12 years of IT experience.

He earned a degree in computer science engineering from FST Tunisia, a master's in computer science from UPC Barcelona, and a master's in big data science from UPC Barcelona.

His areas of interest include cloud architecture, big data architecture, and big data management and analysis.

His career has included the roles of Java analyst/programmer, senior Oracle DBA, and big data scientist. He currently works as a Senior Big Data and Cloud Architect for Schneider Electric / APC.

He writes numerous articles on his website, http://www.oracle-class.com, and is available on Twitter at @orawiss.

www.PacktPub.com

eBooks, discount offers, and more

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1787126730.

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Preface

Hadoop is a distributed system with a large ecosystem that is growing at an exponential rate, so it is important to understand how a Hadoop cluster functions in production. Whether you are new to Hadoop or a seasoned Hadoop specialist, this book contains recipes that take a deep dive into Hadoop cluster configuration and optimization.

What this book covers

Chapter 1, Hadoop Architecture and Deployment, covers Hadoop's architecture, its components, various installation modes and important daemons, and the services that make Hadoop a robust system. This chapter covers single-node and multi-node clusters.

Chapter 2, Maintaining Hadoop Cluster – HDFS, covers the HDFS storage layer: block size, replication, cluster health, quota configuration, rack awareness, and the communication channel between nodes.

Chapter 3, Maintaining Hadoop Cluster – YARN and MapReduce, talks about the processing layer in Hadoop and the YARN resource management framework. This chapter covers how to configure YARN components, submit jobs, and configure the job history server, along with YARN fundamentals.

Chapter 4, High Availability, covers high availability for a Namenode and Resourcemanager, ZooKeeper configuration, HDFS storage-based policies, HDFS snapshots, and rolling upgrades.

Chapter 5, Schedulers, talks about YARN schedulers such as the Fair Scheduler and Capacity Scheduler, with detailed recipes on configuring queues, queue ACLs, users and groups, and other queue administration commands.

Chapter 6, Backup and Recovery, covers Hadoop metastore, backup and restore procedures on a Namenode, configuration of a secondary Namenode, and various ways of recovering lost Namenodes. This chapter also talks about configuring HDFS and YARN logs for troubleshooting.

Chapter 7, Data Ingestion and Workflow, talks about Hive configuration and its various modes of operation. This chapter also covers setting up Hive with the credential store and highly available access using ZooKeeper. The recipes in this chapter give details about the process of loading data into Hive, partitioning, bucketing concepts, and configuration with an external metastore. It also covers Oozie installation and Flume configuration for log ingestion.

Chapter 8, Performance Tuning, covers the performance tuning aspects of HDFS, YARN containers, the operating system, and network parameters, as well as optimizing the cluster for production by comparing benchmarks for various configurations.

Chapter 9, HBase Administration, talks about HBase cluster configuration, best practices, HBase tuning, backup, and restore. It also covers migration of data from MySQL to HBase and the procedure to upgrade HBase to the latest release.

Chapter 10, Cluster Planning, covers Hadoop cluster planning and the best practices for designing clusters in terms of disk storage, network, servers, and placement policy. This chapter also covers costing and the impact of SLA-driven workloads on cluster planning.

Chapter 11, Troubleshooting, Diagnostics, and Best Practices, talks about troubleshooting steps for the Namenode and Datanode and diagnosing communication issues. It also covers details on logs and how to parse them for errors to extract the important points about the issues faced.

Chapter 12, Security, covers Hadoop security in terms of data encryption, in-transit encryption, SSL configuration, and, more importantly, configuring Kerberos for the Hadoop cluster. This chapter also covers auditing and a recipe on securing ZooKeeper.

What you need for this book

To go through the recipes in this book, users need any Linux distribution, which could be Ubuntu, CentOS, or any other flavor, as long as it supports running a JVM. We use CentOS in our recipes, as it is the most commonly used operating system for Hadoop clusters.

Hadoop runs on both virtualized and physical servers, so it is recommended to have at least 8 GB of RAM on the base system, on which about three virtual hosts can be set up. Users do not need to set up everything covered in this book at once; they can run only those daemons that are necessary for a particular recipe. This way, they can keep the resource requirements to a bare minimum. It is good to have at least four hosts to practice all the recipes in this book. These hosts could be virtual or physical.

In terms of software, users need at least JDK 1.7 and an SSH client, such as PuTTY on Windows or a terminal on Linux or macOS, to connect to the Hadoop nodes.

Who this book is for

If you are a system administrator with a basic understanding of Hadoop and you want to get into Hadoop administration, this book is for you. It's also ideal if you are a Hadoop administrator who wants a quick reference guide to all the Hadoop administration-related tasks and solutions to commonly occurring problems.

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).

To give clear instructions on how to complete a recipe, we use these sections as follows:

Getting ready

This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

How to do it…

This section contains the steps required to follow the recipe.

How it works…

This section usually consists of a detailed explanation of what happened in the previous section.

There's more…

This section consists of additional information about the recipe in order to make the reader more knowledgeable about it.

See also

This section provides helpful links to other useful information for the recipe.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "You will see a tarball under the hadoop-2.7.3-src/hadoop-dist/target/ folder."

A block of code is set as follows:

<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/excludes</value>
  <final>true</final>
</property>

Any command-line input or output is written as follows:

$ stop-yarn.sh

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hadoop-2.x-Administration-Cookbook. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from http://www.packtpub.com/sites/default/files/downloads/Hadoop2.xAdministrationCookbook_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

Chapter 1. Hadoop Architecture and Deployment

In this chapter, we will cover the following recipes:

Overview of Hadoop Architecture
Building and compiling Hadoop
Installation methods
Setting up host resolution
Installing a single-node cluster - HDFS components
Installing a single-node cluster - YARN components
Installing a multi-node cluster
Configuring the Hadoop Gateway node
Decommissioning nodes
Adding nodes to the cluster

Introduction

As Hadoop is a distributed system with many components and has a reputation for being quite complex, it is important to understand the basic architecture before we start with the deployments.

In this chapter, we will take a look at the architecture and the recipes for deploying a Hadoop cluster in various modes. This chapter will also cover recipes on commissioning and decommissioning nodes in a cluster.

The recipes in this chapter will primarily focus on deploying a cluster based on an Apache Hadoop distribution, as it is the best way to learn and explore Hadoop.

Note

While the recipes in this chapter will give you an overview of a typical configuration, we encourage you to adapt this design according to your needs. The deployment directory structure varies according to IT policies within an organization. All our deployments will be based on the Linux operating system, as it is the most commonly used platform for Hadoop in production. You can use any flavor of Linux; the recipes are very generic in nature and should work on all Linux flavors, with the appropriate changes in path and installation methods, such as yum or apt-get.

Overview of Hadoop Architecture

Hadoop is a framework, not a tool. It is a combination of various components, such as a filesystem, a processing engine, data ingestion tools, databases, workflow execution tools, and so on. Hadoop is based on a client-server architecture, with a master node for each of the storage and processing layers.

The Namenode is the master for Hadoop Distributed File System (HDFS) storage, and the ResourceManager is the master for YARN (Yet Another Resource Negotiator). The Namenode stores the file metadata, while the actual blocks/data reside on the slave nodes called Datanodes. All jobs are submitted to the ResourceManager, which then assigns tasks to its slaves, called NodeManagers. In a highly available cluster, we can have more than one Namenode and ResourceManager.

Each master is a single point of failure, which makes it a very critical component of the cluster, so care must be taken to make both masters highly available.

Although there are many concepts to learn, such as application masters, containers, schedulers, and so on, as this is a recipe book, we will keep the theory to a minimum.

Building and compiling Hadoop

The pre-built Hadoop binary available at www.apache.org is a 32-bit version and is not suitable for 64-bit hardware, as it will not be able to utilize the entire addressable memory. Although we can use the 32-bit version for lab purposes, it will keep giving warnings about the native library not being built for the platform, which can be safely ignored.

In production, we will always be running Hadoop on 64-bit hardware that can support larger amounts of memory. To properly utilize more than 4 GB of memory on any node, we need a 64-bit compiled version of Hadoop.

Getting ready

To step through the recipes in this chapter, or indeed the entire book, you will need at least one preinstalled Linux instance. You can use any distribution of Linux, such as Ubuntu, CentOS, or any other Linux flavor that the reader is comfortable with. The recipes are very generic and are expected to work with all distributions, although, as stated before, one may need to use distro-specific commands. For example, for package installation in CentOS we use yum package installer, or in Debian-based systems we use apt-get, and so on. The user is expected to know basic Linux commands and should know how to set up package repositories such as the yum repository. The user should also know how the DNS resolution is configured. No other prerequisites are required.

How to do it...

SSH to the Linux instance using any SSH client. If you are on Windows, you need PuTTY; if you are using a Mac or Linux, the default terminal can be used for SSH. Connect to the host with an IP of 10.0.0.4, changing it to whatever the IP is in your case.
Change to the user root or any other privileged user:
$ sudo su -
Install the dependencies to build Hadoop:
# yum install gcc gcc-c++ openssl-devel make cmake jdk-1.7u45(minimum)
Download and install Maven:
# wget mirrors.gigenet.com/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
Untar Maven:
# tar -zxf apache-maven-3.3.9-bin.tar.gz -C /opt/
Set up the Maven environment:
# cat /etc/profile.d/maven.sh
export JAVA_HOME=/usr/java/latest
export M3_HOME=/opt/apache-maven-3.3.9
export PATH=$JAVA_HOME/bin:/opt/apache-maven-3.3.9/bin:$PATH
Download and set up protobuf:
# wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz
# tar -xzf protobuf-2.5.0.tar.gz -C /opt/
# cd /opt/protobuf-2.5.0/
# ./configure
# make; make install
Download the latest Hadoop stable source code. At the time of writing, the latest Hadoop version is 2.7.3:
# wget apache.uberglobalmirror.com/hadoop/common/stable2/hadoop-2.7.3-src.tar.gz
# tar -xzf hadoop-2.7.3-src.tar.gz -C /opt/
# cd /opt/hadoop-2.7.3-src
# mvn package -Pdist,native -DskipTests -Dtar
You will see a tarball in the folder hadoop-2.7.3-src/hadoop-dist/target/.

How it works...

The tarball package created here will be used for the installation of Hadoop throughout the book. It is not mandatory to build Hadoop from source, but by default the binary packages provided by Apache Hadoop are 32-bit versions. For production, it is important to use a 64-bit version so as to fully utilize the memory beyond 4 GB and to gain other performance benefits.
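
As a quick, optional sanity check (not part of the original recipe), you can confirm that the build produced 64-bit native libraries. The path below assumes the default layout of the Maven build output under the source directory extracted earlier; adjust it if your paths differ:

# file /opt/hadoop-2.7.3-src/hadoop-dist/target/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0

The file command should report a 64-bit ELF shared object. Once Hadoop is installed and on the PATH, the built-in command hadoop checknative -a also lists which native libraries (zlib, snappy, and so on) were detected.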

Installation methods

Hadoop can be installed in multiple ways, either by using repository methods such as yum/apt-get or by extracting the tarball packages. The Apache Bigtop project (http://bigtop.apache.org/) provides packages for the Hadoop infrastructure, which can be used by creating a local repository of those packages.

All the steps are to be performed as the root user. It is expected that the user knows how to set up a yum repository and Linux basics.

Getting ready

You are going to need a Linux machine. You can either use the one which has been used in the previous task or set up a new node, which will act as repository server and host all the packages we need.

How to do it...

1. Connect to a Linux machine that has at least 5 GB of disk space to store the packages.
2. If you are on CentOS or a similar distribution, make sure you have the package yum-utils installed. This package provides the command reposync.
3. Create a file bigtop.repo under /etc/yum.repos.d/. Note that the file name can be anything; only the extension must be .repo.
4. Populate the file with the repository details; a sample of its contents is sketched just after these steps.
5. Execute the command reposync -r bigtop. It will create a directory named bigtop under the present working directory, with all the packages downloaded to it.
6. All the required Hadoop packages can be installed by configuring the repository we downloaded as a repository server.
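
The following is a rough sketch of what the bigtop.repo file can look like. The baseurl shown is an assumption for illustration; pick the URL matching your Bigtop release, operating system, and architecture from the Bigtop site, and enable GPG checking with the project's keys if required:

[bigtop]
name=Apache Bigtop
baseurl=http://repos.bigtop.apache.org/releases/1.1.0/centos/6/x86_64
enabled=1
gpgcheck=0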

How it works...

From step 2 to step 6, the user will be able to configure and use the Hadoop package repository. Setting up a yum repository is not required, but it makes things easier if we have to do installations on hundreds of nodes. In larger setups, configuration management systems such as Puppet or Chef are used to push configuration and packages to the nodes.
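
As a hedged sketch of how the synced directory can be turned into a repository server, the commands below assume an Apache web server and CentOS-style service management; the Bigtop package names shown (for example, hadoop-hdfs-namenode) are illustrative and can vary between Bigtop releases:

# yum install createrepo httpd
# mv bigtop /var/www/html/           # publish the synced packages over HTTP
# createrepo /var/www/html/bigtop    # generate the yum repository metadata
# service httpd start

On each cluster node, point a .repo file at http://<repo-server>/bigtop and install only the daemons that node needs, for example:

# yum install hadoop-hdfs-namenode hadoop-yarn-resourcemanager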

In this chapter, we will be using the tarball package that was built in the first section to perform installations. This is the best way of learning about directory structure and the configurations needed.

Setting up host resolution

Before we start with the installations, it is important to make sure that the host resolution is configured and working properly.

Getting ready

Choose any appropriate hostnames the user wants for his or her Linux machines. For example, the hostnames could be master1.cluster.com or rt1.cyrus.com or host1.example.com. The important thing is that the hostnames must resolve.

This resolution can be done using a DNS server or by configuring the /etc/hosts file on each node we use for our cluster setup.

The following steps will show you how to set up the resolution in the /etc/hosts file.

How to do it...

Connect to the Linux machine and change its hostname to master1.cyrus.com.
Edit the /etc/hosts file so that the new hostname maps to the machine's IP address; a sample entry is sketched at the end of these steps.
Make sure the resolution returns an IP address:
# getent hosts master1.cyrus.com
The other preferred method is to set up the DNS resolution so that we do not have to populate the hosts file on each node. In the example resolution shown here, the user can see that the DNS server is configured to answer the domain cyrus.com:
# nslookup master1.cyrus.com
Server:         10.0.0.2
Address:        10.0.0.2#53

Non-authoritative answer:
Name:    master1.cyrus.com
Address: 10.0.0.104
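
For reference, a minimal /etc/hosts entry for this setup could look like the following. The IP address is illustrative (taken from the example resolution above) and must match your own machine:

10.0.0.104   master1.cyrus.com   master1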

How it works...

Each Linux host has a resolver library that helps it resolve any hostname that is asked for. It consults the sources configured in /etc/nsswitch.conf, typically the /etc/hosts file first and then the DNS server. Users who are not Linux administrators can simply use the hosts file as a workaround to set up a Hadoop cluster. There are many resources available online that can help you set up a DNS server quickly if needed.

Once the resolution is in place, we will start with the installation of Hadoop on a single-node and then progress to multiple nodes.

Installing a single-node cluster - HDFS components

Usually the term cluster means a group of machines, but in this recipe, we will be installing various Hadoop daemons on a single node. The single machine will act as both the master and slave for the storage and processing layer.

Getting ready

You will need some information before stepping through this recipe.

Although Hadoop can be configured to run as root user, it is a good practice to run it as a non-privileged user. In this recipe, we are using the node name nn1.cluster1.com, preinstalled with CentOS 6.5.

Tip

Create a system user named hadoop and set a password for that user.

Install JDK, which will be used by the Hadoop services. The minimum recommended version of JDK is 1.7, but OpenJDK can also be used.

How to do it...

Log into the machine/host as root user and install jdk:
# yum install jdk -y
Or it can also be installed using the following command:
# rpm -ivh jdk-1.7u45.rpm
Once Java is installed, make sure Java is in PATH for execution. This can be done by setting JAVA_HOME and exporting it as an environment variable. The following screenshot shows the content of the directory where Java gets installed:
# export JAVA_HOME=/usr/java/latest
Now we need to copy the tarball hadoop-2.7.3.tar.gz, which was built in the Building and compiling Hadoop section earlier in this chapter, to the home directory of the user root. For this, the user needs to log in to the node where Hadoop was built and execute the following command:
# scp -r hadoop-2.7.3.tar.gz root@nn1.cluster1.com:~/
Create a directory named /opt/cluster to be used for Hadoop:
# mkdir -p /opt/cluster
Then untar hadoop-2.7.3.tar.gz into the directory created in the preceding step:
# tar -xzvf hadoop-2.7.3.tar.gz -C /opt/cluster/
Create a user named hadoop, if you haven't already, and set the password as hadoop:
# useradd hadoop
# echo hadoop | passwd --stdin hadoop
As the preceding steps were done by the root user, the directories and files under /opt/cluster will be owned by the root user. Change the ownership to the hadoop user:
# chown -R hadoop:hadoop /opt/cluster/
List the directory structure under /opt/cluster to confirm the extraction. Under /opt/cluster/hadoop-2.7.3, the listing shows etc, bin, sbin, and other directories.
The etc/hadoop directory contains the configuration files for the various Hadoop daemons. Some of the key files are core-site.xml, hdfs-site.xml, hadoop-env.sh, and mapred-site.xml, among others, which will be explained in later sections.
The bin and sbin directories contain executable binaries, which are used to start and stop Hadoop daemons and perform other operations such as filesystem listing, copying, deleting, and so on.
To execute a command such as /opt/cluster/hadoop-2.7.3/bin/hadoop, the complete path to the command needs to be specified. This can be cumbersome, and can be avoided by setting the environment variable HADOOP_HOME. Similarly, there are other variables that need to be set that point to the binaries and the configuration file locations; a sample environment file is sketched at the end of these steps.
The environment file is set up system-wide so that any user can use the commands. Once the hadoopenv.sh file is in place, execute the command to export the variables defined in it.
Change to the hadoop user using the command su - hadoop.
Change to the /opt/cluster directory and create a symlink named hadoop pointing to the hadoop-2.7.3 directory.
To verify that the preceding changes are in place, the user can execute either the which hadoop or which java commands, or execute the command hadoop directly without specifying the complete path.
In addition to setting the environment as discussed, the user has to add the JAVA_HOME variable to the hadoop-env.sh file.
The next thing is to set up the Namenode address, which specifies the host:port on which it will listen. This is done in the file core-site.xml, and the important property to keep in mind is fs.defaultFS.
The next thing that the user needs to configure is the location where the Namenode will store its metadata. This can be any location, but it is recommended that you always have a dedicated disk for it. This is configured in the file hdfs-site.xml; sample core-site.xml and hdfs-site.xml entries are sketched at the end of these steps.
The next step is to format the Namenode. This will create an HDFS filesystem:
$ hdfs namenode -format
Similarly, we have to add the property for the Datanode data directory under hdfs-site.xml. Nothing needs to be done to the core-site.xml file.
Then the services need to be started for the Namenode and Datanode:
$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode
The command jps can be used to check for running daemons; it should list the NameNode and DataNode processes.
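
For reference, minimal versions of the files referred to in the preceding steps are sketched below. These are illustrative, not the exact files from the book: the hostname, port, and paths (nn1.cluster1.com, port 9000, /data/namenode, /data/datanode) are assumptions and should be adapted to your environment, and the XML properties go inside the <configuration> element of their respective files.

/etc/profile.d/hadoopenv.sh:

export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/opt/cluster/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

core-site.xml:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://nn1.cluster1.com:9000</value>
</property>

hdfs-site.xml:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/datanode</value>
</property>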

How it works...

The master Namenode stores metadata and the slave node Datanode stores the blocks. When the Namenode is formatted, it creates a data structure that contains fsimage, edits, and VERSION. These are very important for the functioning of the cluster.

The parameters dfs.data.dir and dfs.datanode.data.dir serve the same purpose, but belong to different versions. The older parameters are deprecated in favor of the newer ones, but they will still work. Similarly, the parameter dfs.name.dir has been deprecated in favor of dfs.namenode.name.dir in Hadoop 2.x. The intention of showing both versions of the parameters is to bring to the user's notice that parameters are evolving and ever-changing, and care must be taken by referring to the release notes for each Hadoop version.
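
If you are unsure which value is actually in effect for a given parameter (for example, after mixing old and new names), the hdfs getconf utility can be used to query the configuration; the key below is just an example:

$ hdfs getconf -confKey dfs.namenode.name.dir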

There's more...

Setting up ResourceManager and NodeManager

In the preceding recipe, we set up the storage layer, that is, HDFS for storing data, but what about the processing layer? The data on HDFS needs to be processed to make meaningful decisions using MapReduce, Tez, Spark, or any other tool. To run MapReduce, Spark, or another processing framework, we need to have a ResourceManager and NodeManager.

Installing a single-node cluster - YARN components

In the previous recipe, we discussed how to set up Namenode and Datanode for HDFS. In this recipe, we will be covering how to set up YARN on the same node.

After completing this recipe, there will be four daemons running on the nn1.cluster1.com node, namely namenode, datanode, resourcemanager, and nodemanager daemons.

Getting ready

For this recipe, you will again use the same node on which we have already configured the HDFS layer.

All operations will be done by the hadoop user.

How to do it...

Log in to the node nn1.cluster1.com and change to the hadoop user.
Change to the /opt/cluster/hadoop/etc/hadoop directory and configure the files mapred-site.xml and yarn-site.xml; minimal samples of both are sketched at the end of these steps.
The file yarn-site.xml specifies the shuffle class, scheduler, and resource management components of the ResourceManager. You only need to specify yarn.resourcemanager.address; the rest are automatically picked up by the ResourceManager. You can also separate them into their independent components.
Once the configurations are in place, the resourcemanager and nodemanager daemons need to be started.
The environment variables that were defined by /etc/profile.d/hadoopenv.sh included YARN_HOME and YARN_CONF_DIR, which let the framework know the location of the YARN configurations.
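
The following is a minimal sketch rather than the book's exact configuration: port 8032 is the YARN default for yarn.resourcemanager.address, the shuffle handler properties are the standard Hadoop 2.x ones, and the start commands use the yarn-daemon.sh script shipped in sbin. As before, the XML snippets go inside the <configuration> element of their files.

mapred-site.xml:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

yarn-site.xml:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>nn1.cluster1.com:8032</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

Start the daemons as the hadoop user:

$ yarn-daemon.sh start resourcemanager
$ yarn-daemon.sh start nodemanager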

How it works...

The nn1.cluster1.com node is configured to run both HDFS and YARN components. Any file that is copied to HDFS will be split into blocks and stored on the Datanode, while the metadata of the file will be on the Namenode.
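
As a quick end-to-end check of the single-node setup (a sketch, not part of the original recipe), jps should now list the NameNode, DataNode, ResourceManager, and NodeManager processes, and a small file can be pushed into HDFS and inspected; the HDFS paths and the example jar version below are assumptions based on a 2.7.3 installation:

$ jps
$ hdfs dfs -mkdir -p /user/hadoop
$ hdfs dfs -put /etc/hosts /user/hadoop/
$ hdfs fsck /user/hadoop/hosts -files -blocks
$ yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10

The fsck output shows how the file was split into blocks and where they reside, and the sample pi job confirms that YARN can schedule containers on the NodeManager.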