HBase Administration Cookbook

Yifeng Jiang

Table of Contents

HBase Administration Cookbook
Credits
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Support files, eBooks, discount offers and more
Why Subscribe?
Free Access for Packt account holders
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Setting Up HBase Cluster
Introduction
Quick start
Getting ready
How to do it...
How it works...
Getting ready on Amazon EC2
Getting ready
How to do it...
How it works...
Setting up Hadoop
Getting ready
How to do it...
How it works...
Setting up ZooKeeper
Getting ready
How to do it...
How it works...
There's more...
Changing the kernel settings
Getting ready
How to do it...
How it works...
See also
Setting up HBase
Getting ready
How to do it...
How it works...
Basic Hadoop/ZooKeeper/HBase configurations
How to do it...
How it works...
See also
Setting up multiple High Availability (HA) masters
Getting ready
How to do it...
Install and configure Heartbeat and Pacemaker
Create and install a NameNode resource agent
Configure highly available NameNode
Start DataNode, HBase cluster, and backup HBase master
How it works...
There's more...
2. Data Migration
Introduction
Importing data from MySQL via single client
Getting ready
How to do it...
How it works...
Importing data from TSV files using the bulk load tool
Getting ready
How to do it...
How it works...
There's more...
Writing your own MapReduce job to import data
Getting ready
How to do it...
How it works...
There's more...
Generating HFile files in MapReduce
Important configurations affecting data migration
See also
Precreating regions before moving data into HBase
Getting ready
How to do it...
How it works...
See also
3. Using Administration Tools
Introduction
HBase Master web UI
Getting ready
How to do it...
How it works...
Using HBase Shell to manage tables
Getting ready
How to do it...
How it works...
There's more...
Using HBase Shell to access data in HBase
Getting ready
How to do it...
How it works...
See also
Using HBase Shell to manage the cluster
Getting ready
How to do it...
How it works...
See also
Executing Java methods from HBase Shell
Getting ready
How to do it...
How it works...
There's more...
Row counter
Getting ready
How to do it...
How it works...
There's more...
WAL tool—manually splitting and dumping WALs
Getting ready
How to do it...
How it works...
See also
HFile tool—viewing textualized HFile content
Getting ready
How to do it...
How it works...
There's more...
HBase hbck—checking the consistency of an HBase cluster
Getting ready
How to do it...
How it works...
See also
Hive on HBase—querying HBase using a SQL-like language
Getting ready
How to do it...
How it works...
4. Backing Up and Restoring HBase Data
Introduction
Full shutdown backup using distcp
Getting ready
How to do it...
How it works...
Using CopyTable to copy data from one table to another
Getting ready
How to do it...
How it works...
Exporting an HBase table to dump files on HDFS
Getting ready
How to do it...
How it works...
There's more...
See also
Restoring HBase data by importing dump files from HDFS
Getting ready
How to do it...
How it works...
There's more...
See also
Backing up NameNode metadata
Getting ready
How to do it...
How it works...
There's more...
Backing up region starting keys
Getting ready
How to do it...
How it works...
See also
Cluster replication
Getting ready
How to do it...
How it works...
There's more...
5. Monitoring and Diagnosis
Introduction
Showing the disk utilization of HBase tables
Getting ready
How to do it...
How it works...
There's more...
Setting up Ganglia to monitor an HBase cluster
Getting ready
How to do it...
How it works...
There's more...
See also
OpenTSDB—using HBase to monitor an HBase cluster
Getting ready
How to do it...
How it works...
There's more...
Setting up Nagios to monitor HBase processes
Getting ready
How to do it...
How it works...
There's more...
Using Nagios to check Hadoop/HBase logs
Getting ready
How to do it...
How it works...
There's more...
See also
Simple scripts to report the status of the cluster
Getting ready
How to do it...
How it works...
There's more...
See also
Hot region—write diagnosis
Getting ready
How to do it...
How it works...
There's more...
See also
6. Maintenance and Security
Introduction
Enabling HBase RPC DEBUG-level logging
Getting ready
How to do it...
How it works...
There's more...
Graceful node decommissioning
Getting ready
How to do it...
How it works...
There's more...
See also
Adding nodes to the cluster
Getting ready
How to do it...
How it works...
There's more...
Rolling restart
Getting ready
How to do it...
How it works...
There's more...
Simple script for managing HBase processes
Getting ready
How to do it...
How it works...
Simple script for making deployment easier
Getting ready
How to do it...
How it works...
There's more...
Kerberos authentication for Hadoop and HBase
Getting ready
How to do it...
How it works...
There's more...
See also
Configuring HDFS security with Kerberos
Getting ready
How to do it...
How it works...
There's more...
HBase security configuration
Getting ready
How to do it...
How it works...
There's more...
7. Troubleshooting
Introduction
Troubleshooting tools
Getting ready
How to do it...
How it works...
See also
Handling the XceiverCount error
Getting ready
How to do it...
How it works...
Handling the "too many open files" error
Getting ready
How to do it...
How it works...
There's more...
See also
Handling the "unable to create new native thread" error
Getting ready
How to do it...
How it works...
There's more...
See also
Handling the "HBase ignores HDFS client configuration" issue
Getting ready
How to do it...
How it works...
Handling the ZooKeeper client connection error
Getting ready
How to do it...
How it works...
There's more...
Handling the ZooKeeper session expired error
Getting ready
How to do it...
How it works...
See also
Handling the HBase startup error on EC2
Getting ready
How to do it...
How it works...
There's more...
See also
8. Basic Performance Tuning
Introduction
Setting up Hadoop to spread disk I/O
Getting ready
How to do it...
How it works...
There's more...
Using network topology script to make Hadoop rack-aware
Getting ready
How to do it...
How it works...
Mounting disks with noatime and nodiratime
Getting ready
How to do it...
How it works...
There's more...
Setting vm.swappiness to 0 to avoid swap
Getting ready
How it works...
See also
Java GC and HBase heap settings
Getting ready
How to do it...
How it works...
There's more...
See also
Using compression
Getting ready
How to do it...
How it works...
There's more...
Managing compactions
Getting ready
How to do it...
How it works...
There's more...
Managing a region split
Getting ready
How to do it...
How it works...
There's more...
See also
9. Advanced Configurations and Tuning
Introduction
Benchmarking HBase cluster with YCSB
Getting ready
How to do it...
How it works...
There's more...
Increasing region server handler count
Getting ready
How to do it...
How it works...
See also
Precreating regions using your own algorithm
Getting ready
How to do it...
How it works...
There's more...
See also
Avoiding update blocking on write-heavy clusters
Getting ready
How to do it...
How it works...
See also
Tuning memory size for MemStores
Getting ready
How to do it...
How it works...
There's more...
See also
Client-side tuning for low latency systems
Getting ready
How to do it...
How it works...
There's more...
Configuring block cache for column families
Getting ready
How to do it...
How it works...
There's more...
See also
Increasing block cache size on read-heavy clusters
Getting ready
How to do it...
How it works...
See also
Client side scanner setting
Getting ready
How to do it...
How it works...
There's more...
See also
Tuning block size to improve seek performance
Getting ready
How to do it...
How it works...
There's more...
See also
Enabling Bloom Filter to improve the overall throughput
Getting ready
How to do it...
How it works...
There's more...
Index

HBase Administration Cookbook

HBase Administration Cookbook

Copyright © 2012 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: August 2012

Production Reference: 1080812

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-84951-714-0

www.packtpub.com

Cover Image by Asher Wishkerman ( <[email protected]> )

Credits

Author

Yifeng Jiang

Reviewers

Masatake Iwasaki

Tatsuya Kawano

Michael Morello

Shinichi Yamashita

Acquisition Editor

Sarah Cullington

Lead Technical Editor

Pramila Balan

Technical Editors

Merin Jose

Kavita Raghavan

Manmeet Singh Vasir

Copy Editors

Brandt D'Mello

Insiya Morbiwala

Project Coordinator

Yashodhan Dere

Proofreader

Aaron Nash

Indexer

Hemangini Bari

Graphics

Manu Joseph

Valentina D'silva

Production Coordinator

Arvindkumar Gupta

Cover Work

Arvindkumar Gupta

About the Author

Yifeng Jiang is a Hadoop and HBase Administrator and Developer at Rakuten—the largest e-commerce company in Japan. After graduating from the University of Science and Technology of China with a B.S. in Information Management Systems, he started his career as a professional software engineer, focusing on Java development.

In 2008, he started looking over the Hadoop project. In 2009, he led the development of his previous company's display advertisement data infrastructure using Hadoop and Hive.

In 2010, he joined his current employer, where he designed and implemented the Hadoop- and HBase-based, large-scale item ranking system. He is also one of the members of the Hadoop team in the company, which operates several Hadoop/HBase clusters.

Acknowledgement

Little did I know, when I was first asked by Packt Publishing in September 2011 whether I would be interested in writing a book about HBase administration, how much work and stress (but also a lot of fun) it was going to be.

Now that the book is finally complete, I would like to thank those people without whom it would have been impossible to get done.

First, I would like to thank the HBase developers for giving us such a great piece of software. Thanks to all of the people on the mailing list providing good answers to my many questions, and all the people working on tickets and documents.

I would also like to thank the team at Packt Publishing for contacting me to get started with the writing of this book, and providing support, guidance, and feedback.

Many thanks to Rakuten, my employer, who provided me with the environment to work on HBase and the chance to write this book.

Thank you to Michael Stack for helping me with a quick review of the book.

Thank you to the book's reviewers—Michael Morello, Tatsuya Kawano, Kenichiro Hamano, Shinichi Yamashita, and Masatake Iwasaki.

To Yotaro Kagawa: Thank you for supporting me and my family from the very start and ever since.

To Xinping and Lingyin: Thank you for your support and all your patience—I love you!

About the Reviewers

Masatake Iwasaki is a Software Engineer at NTT DATA CORPORATION, providing technical consultation for open source software such as Hadoop, HBase, and PostgreSQL.

Tatsuya Kawano is an HBase contributor and evangelist in Japan. He has been helping the Japanese Hadoop and HBase community to grow since 2010.

He is currently working for Gemini Mobile Technologies as a Research & Development software engineer. He is also developing Cloudian, a fully S3 API-compliant cloud storage platform, and Hibari DB, an open source, distributed key-value store.

In 2012, he co-authored a Japanese book named "Basic Knowledge of NOSQL", which introduces 16 NoSQL products, such as HBase, Cassandra, Riak, MongoDB, and Neo4j, to novice readers.

He studied graphic design in New York in the late 1990s. He loves playing with 3D computer graphics as much as he loves developing high-availability, scalable storage systems.

Michael Morello holds a Masters degree in Distributed Computing and Artificial Intelligence. He is a Senior Java/JEE Developer with a strong Unix and Linux background. His areas of research are mostly related to large-scale systems and emerging technologies dedicated to solving scalability, performance, and high availability issues.

I would like to thank my wife and my little angel for their love and support.

Shinichi Yamashita is a Chief Engineer at the OSS Professional Service unit in NTT DATA Corporation in Japan. He has more than 7 years of experience in software and middleware (Apache, Tomcat, PostgreSQL, and the Hadoop ecosystem) engineering.

Shinichi has written a few books on Hadoop in Japan.

I would like to thank my colleagues.

www.PacktPub.com

Support files, eBooks, discount offers and more

You might want to visit www.PacktPub.com for support files and downloads related to your book.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

http://PacktLib.PacktPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books.

Why Subscribe?

Fully searchable across every book published by Packt
Copy and paste, print and bookmark content
On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.

Preface

As an open source, distributed big data store, HBase scales to billions of rows and millions of columns, and sits on top of clusters of commodity machines. If you are looking for a way to store and access a huge amount of data in real time, then look no further than HBase.

HBase Administration Cookbook provides practical examples and simple step-by-step instructions for you to administrate HBase with ease. The recipes cover a wide range of processes for managing a fully distributed, highly available HBase cluster on the cloud. Working with such a huge amount of data means that an organized and manageable process is key, and this book will help you to achieve that.

The recipes in this practical cookbook start with setting up a fully distributed HBase cluster and moving data into it. You will learn how to use all the tools for day-to-day administration tasks, as well as for efficiently managing and monitoring the cluster to achieve the best performance possible. Understanding the relationship between Hadoop and HBase will allow you to get the best out of HBase; so this book will show you how to set up Hadoop clusters, configure Hadoop to cooperate with HBase, and tune its performance.

What this book covers

Chapter 1, Setting Up HBase Cluster: This chapter explains how to set up an HBase cluster, from a basic standalone HBase instance to a fully distributed, highly available HBase cluster on Amazon EC2.

Chapter 2, Data Migration: In this chapter, we will start with the simple task of importing data from MySQL to HBase, using its Put API. We will then describe how to use the importtsv and bulk load tools to load TSV data files into HBase. We will also use a MapReduce sample to import data from other file formats. This includes putting data directly into an HBase table and writing to HFile format files on Hadoop Distributed File System (HDFS). The last recipe in this chapter explains how to precreate regions before loading data into HBase.

This chapter ships with several sample sources written in Java. It assumes that you have basic Java knowledge, so it does not explain how to compile and package the sample Java source in the recipes.

Chapter 3, Using Administration Tools: In this chapter, we describe the usage of various administration tools such as HBase web UI, HBase Shell, HBase hbck, and others. We explain what the tools are for, and how to use them to resolve a particular task.

Chapter 4, Backing Up and Restoring HBase Data: In this chapter, we will describe how to back up HBase data using various approaches, their pros and cons, and which approach to choose depending on your dataset size, resources, and requirements.

Chapter 5, Monitoring and Diagnosis: In this chapter, we will describe how to monitor and diagnose an HBase cluster with Ganglia, OpenTSDB, Nagios, and other tools. We will start with a simple task to show the disk utilization of HBase tables. We will install and configure Ganglia to monitor HBase metrics and show an example usage of Ganglia graphs. We will also set up OpenTSDB, which is similar to Ganglia, but more scalable, as it is built on top of HBase.

We will set up Nagios to check everything we want to check, including HBase-related daemon health, Hadoop/HBase logs, HBase inconsistencies, HDFS health, and space utilization.

In the last recipe, we will describe an approach to diagnose and fix the frequently asked hot spot region issue.

Chapter 6, Maintenance and Security: In the first six recipes of this chapter we will learn about the various HBase maintenance tasks, such as finding and correcting faults, changing cluster size, making configuration changes, and so on.

We will also look at security in this chapter. In the last three recipes, we will install Kerberos and then set up HDFS security with Kerberos, and finally set up secure HBase client access.

Chapter 7, Troubleshooting: In this chapter, we will look through several of the most commonly encountered issues. We will describe the error messages of these issues, why they happen, and how to fix them with the troubleshooting tools.

Chapter 8, Basic Performance Tuning: In this chapter, we will describe how to tune HBase to gain better performance. We will also include recipes for other tuning points, such as Hadoop configurations, JVM garbage collection settings, and OS kernel parameters.

Chapter 9, Advanced Configurations and Tuning: This is the book's second chapter on performance tuning. The previous chapter describes recipes to tune Hadoop, OS settings, Java, and HBase itself to improve the overall performance of the HBase cluster; those are general improvements for many use cases. In this chapter, we will describe more specific recipes, some of which are for write-heavy clusters, while others are aimed at improving the read performance of the cluster.

What you need for this book

Everything you need is listed in each recipe.

The basic list of software required for this book is as follows:

Debian 6.0.1 (squeeze)
Oracle JDK (Java Development Kit) SE 6
HBase 0.92.1
Hadoop 1.0.2
ZooKeeper 3.4.3

Who this book is for

This book is for HBase administrators and developers, and it will even help Hadoop administrators. You are not required to have HBase experience, but you are expected to have a basic understanding of Hadoop and MapReduce.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title through the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/support, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website, or added to any list of existing errata, under the Errata section of that title.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at <[email protected]> if you are having a problem with any aspect of the book, and we will do our best to address it.

Chapter 1. Setting Up HBase Cluster

In this chapter, we will cover:

Quick start
Getting ready on Amazon EC2
Setting up Hadoop
Setting up ZooKeeper
Changing the kernel settings
Setting up HBase
Basic Hadoop/ZooKeeper/HBase configurations
Setting up multiple High Availability (HA) masters

Introduction

This chapter explains how to set up an HBase cluster, from a basic standalone HBase instance to a fully distributed, highly available HBase cluster on Amazon EC2.

According to Apache HBase's home page:

HBase is the Hadoop database. Use HBase when you need random, real-time, read/write access to your Big Data. This project's goal is the hosting of very large tables—billions of rows X millions of columns—atop clusters of commodity hardware.

HBase can run against any filesystem. For example, you can run HBase on top of an EXT4 local filesystem, Amazon Simple Storage Service (Amazon S3), and Hadoop Distributed File System (HDFS), which is the primary distributed filesystem for Hadoop. In most cases, a fully distributed HBase cluster runs on an instance of HDFS, so we will explain how to set up Hadoop before proceeding.

Apache ZooKeeper is an open source software providing a highly reliable, distributed coordination service. A distributed HBase depends on a running ZooKeeper cluster.

HBase, which is a database that runs on Hadoop, keeps a lot of files open at the same time. We need to change some Linux kernel settings to run HBase smoothly.

A fully distributed HBase cluster has one or more master nodes (HMaster), which coordinate the entire cluster, and many slave nodes (RegionServer), which handle the actual data storage and requests. The following diagram shows a typical HBase cluster structure:

HBase can run multiple master nodes at the same time, and uses ZooKeeper to monitor the masters and fail over to a backup when the active master goes down. But as HBase uses HDFS as its underlying filesystem, if HDFS is down, HBase is down too. The master node of HDFS, called the NameNode, is the Single Point Of Failure (SPOF) of HDFS, so it is also the SPOF of an HBase cluster. However, the NameNode software itself is very robust and stable. Moreover, the HDFS team is working hard on a real HA NameNode, which is expected to be included in Hadoop's next major release.

The first seven recipes in this chapter explain how we can get HBase and all its dependencies working together, as a fully distributed HBase cluster. The last recipe explains an advanced topic on how to avoid the SPOF issue of the cluster.

We will start by setting up a standalone HBase instance, and then demonstrate setting up a distributed HBase cluster on Amazon EC2.

Quick start

HBase has two run modes—standalone mode and distributed mode. Standalone mode is the default mode of HBase. In standalone mode, HBase uses a local filesystem instead of HDFS, and runs all HBase daemons and an HBase-managed ZooKeeper instance, all in the same JVM.

This recipe describes the setup of a standalone HBase. It leads you through installing HBase, starting it in standalone mode, creating a table via HBase Shell, inserting rows, and then cleaning up and shutting down the standalone HBase instance.

Getting ready

You are going to need a Linux machine to run the stack. Running HBase on top of Windows is not recommended. We will use Debian 6.0.1 (Debian Squeeze) in this book, because we have several Hadoop/HBase clusters running on top of Debian in production at my company, Rakuten Inc., and 6.0.1 is the latest Amazon Machine Image (AMI) we have, at http://wiki.debian.org/Cloud/AmazonEC2Image.

As HBase is written in Java, you will need to have Java installed first. HBase runs on Oracle's JDK only, so do not use OpenJDK for the setup. Although Java 7 is available, we do not recommend using it yet, because it needs more time to be tested. You can download the latest Java SE 6 from the following link: http://www.oracle.com/technetwork/java/javase/downloads/index.html.

Execute the downloaded bin file to install Java SE 6. We will use /usr/local/jdk1.6 as JAVA_HOME in this book:

root# ln -s /your/java/install/directory /usr/local/jdk1.6
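
Before moving on, it can be worth confirming that the symbolic link points at a working JDK. A quick check (the version string printed will depend on your exact update level) is:

root# /usr/local/jdk1.6/bin/java -version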

We will add a user with the name hadoop, as the owner of all HBase/Hadoop daemons and files. We will have all HBase files and data stored under /usr/local/hbase:

root# useradd hadoop
root# mkdir /usr/local/hbase
root# chown hadoop:hadoop /usr/local/hbase

How to do it...

Get the latest stable HBase release from HBase's official site, http://www.apache.org/dyn/closer.cgi/hbase/. At the time of writing this book, the current stable release was 0.92.1.

You can set up a standalone HBase instance by following these instructions:

Download the tarball and decompress it to our root directory for HBase. We will set an HBASE_HOME environment variable to make the setup easier, by using the following commands:
root# su - hadoop
hadoop$ cd /usr/local/hbase
hadoop$ tar xfvz hbase-0.92.1.tar.gz
hadoop$ ln -s hbase-0.92.1 current
hadoop$ export HBASE_HOME=/usr/local/hbase/current
Set JAVA_HOME in HBase's environment setting file, by using the following command:
hadoop$ vi $HBASE_HOME/conf/hbase-env.sh
# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/local/jdk1.6
Create a directory for HBase to store its data, and set the path in the HBase configuration file (hbase-site.xml), between the <configuration> tags, by using the following commands:
hadoop$ mkdir -p /usr/local/hbase/var/hbase
hadoop$ vi /usr/local/hbase/current/conf/hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <value>file:///usr/local/hbase/var/hbase</value>
</property>
Start HBase in standalone mode by using the following command:
hadoop$ $HBASE_HOME/bin/start-hbase.sh
starting master, logging to /usr/local/hbase/current/logs/hbase-hadoop-master-master1.out
Connect to the running HBase via HBase Shell, using the following command:
hadoop$ $HBASE_HOME/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.1, r1298924, Fri Mar 9 16:58:34 UTC 2012
Verify HBase's installation by creating a table and then inserting some values. Create a table named test, with a single column family named cf1, as shown here:
hbase(main):001:0> create 'test', 'cf1'
0 row(s) in 0.7600 seconds

i. In order to list the newly created table, use the following command:

hbase(main):002:0> list
TABLE
test
1 row(s) in 0.0440 seconds

ii. In order to insert some values into the newly created table, use the following commands:

hbase(main):003:0> put 'test', 'row1', 'cf1:a', 'value1'
0 row(s) in 0.0840 seconds
hbase(main):004:0> put 'test', 'row1', 'cf1:b', 'value2'
0 row(s) in 0.0320 seconds
Verify the data we inserted into HBase by using the scan command:
hbase(main):003:0> scan 'test'
ROW     COLUMN+CELL
row1    column=cf1:a, timestamp=1320947312117, value=value1
row1    column=cf1:b, timestamp=1320947363375, value=value2
1 row(s) in 0.2530 seconds
Now clean up all that was done, by using the disable and drop commands:

i. In order to disable the table test, use the following command:

hbase(main):006:0> disable 'test'
0 row(s) in 7.0770 seconds

ii. In order to drop the table test, use the following command:

hbase(main):007:0> drop 'test'
0 row(s) in 11.1290 seconds
Exit from HBase Shell using the following command:
hbase(main):010:0> exit
Stop the HBase instance by executing the stop script:
hadoop$ /usr/local/hbase/current/bin/stop-hbase.sh
stopping hbase.......

How it works...

We installed HBase 0.92.1 on a single server. We have used a symbolic link named current for it, so that version upgrading in the future is easy to do.

In order to inform HBase where Java is installed, we will set JAVA_HOME in hbase-env.sh, which is the environment setting file of HBase. You will see some Java heap and HBase daemon settings in it too. We will discuss these settings in the last two chapters of this book.

We also created a directory on the local filesystem for HBase to store its data. For a fully distributed installation, HBase needs to be configured to use HDFS instead of a local filesystem. The HBase master daemon (HMaster) is started on the server where start-hbase.sh is executed. As we did not configure a region server here, HBase starts a single slave daemon (HRegionServer) in the same JVM too.

As we mentioned in the Introduction section, HBase depends on ZooKeeper as its coordination service. You may have noticed that we didn't start ZooKeeper in the previous steps. This is because HBase will start and manage its own ZooKeeper ensemble, by default.
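
If you later want HBase to use an external ZooKeeper ensemble instead of the managed one, this behavior is controlled by the HBASE_MANAGES_ZK switch in hbase-env.sh. A minimal sketch of the default (managed) setting looks like this:

hadoop$ vi $HBASE_HOME/conf/hbase-env.sh
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true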

Then we connected to HBase via HBase Shell. Using HBase Shell, you can manage your cluster, access data in HBase, and do many other jobs. Here, we just created a table called test, inserted data into it, scanned the test table, and then disabled and dropped it before exiting the shell.
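
Besides create, put, scan, disable, and drop, HBase Shell offers many more commands. A few illustrative ones (run them against an existing table, before it is disabled and dropped) are:

hbase> describe 'test'
hbase> get 'test', 'row1'
hbase> count 'test'
hbase> status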

HBase can be stopped using its stop-hbase.sh script. This script stops both HMaster and HRegionServer daemons.
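
A simple way to confirm that the daemons are really gone after the stop script finishes is to list the Java processes with the jps tool that ships with the JDK; the HMaster entry should no longer appear:

hadoop$ /usr/local/jdk1.6/bin/jps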

Getting ready on Amazon EC2

Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud. By using Amazon EC2, we can practice HBase in fully distributed mode easily, at low cost. All the servers that we will use to demonstrate HBase in this book are running on Amazon EC2.

This recipe describes the setup of the Amazon EC2 environment, as a preparation for the installation of HBase on it. We will set up a name server and client on Amazon EC2. You can also use other hosting services such as Rackspace, or real servers to set up your HBase cluster.

Getting ready

You will need to sign up, or create an Amazon Web Service (AWS) account at http://aws.amazon.com/.

We will use EC2 command-line tools to manage our instances. You can download and set up the tools by following the instructions available at the following page:

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?SettingUp_CommandLine.html.

You need a public/private key to log in to your EC2 instances. You can generate your key pairs and upload your public key to EC2, using these instructions:

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/generating-a-keypair.html.

Before you can log in to an instance, you must authorize access. The following link contains instructions for adding rules to the default security group:

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/adding-security-group-rules.html.

After all these steps are done, review the following checklist to make sure everything is ready:

X.509 certificates: Check if the X.509 certificates are uploaded. You can check this at your account's Security Credentials page.
EC2 key pairs: Check if EC2 key pairs are uploaded. You can check this at AWS Management Console | Amazon EC2 | NETWORK & SECURITY | Key Pairs.
Access: Check if the access has been authorized. This can be checked at AWS Management Console | Amazon EC2 | NETWORK & SECURITY | Security Groups | Inbound.
Environment variable settings: Check if the environment variable settings are done. As an example, the following snippet shows my settings; make sure you are using the right EC2_URL for your region:
$ cat ~/.bashrc
export EC2_HOME=~/opt/ec2-api-tools-1.4.4.2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=~/.ec2/pk-OWRHNWUG7UXIOPJXLOBC5UZTQBOBCVQY.pem
export EC2_CERT=~/.ec2/cert-OWRHNWUG7UXIOPJXLOBC5UZTQBOBCVQY.pem
export JAVA_HOME=/Library/Java/Home
export EC2_URL=https://ec2.us-west-1.amazonaws.com

We need to import our EC2 key pairs to manage EC2 instances via EC2 command-line tools:

$ ec2-import-keypair your-key-pair-name --public-key-file ~/.ssh/id_rsa.pub

Verify the settings by typing the following command:

$ ec2-describe-instances

If everything has been set up properly, the command will run without errors and list any instances you have, confirming that the tools and your credentials are configured correctly.


The last preparation is to find a suitable AMI. An AMI is a preconfigured operating system and software, which is used to create a virtual machine within EC2. We can find a registered Debian AMI at http://wiki.debian.org/Cloud/AmazonEC2Image.

For the purpose of practicing HBase, a 32-bit, EBS-backed AMI is the most cost-effective AMI to use. Make sure you are choosing AMIs for your region. As we are using US-West (us-west-1) for this book, the AMI ID for us is ami-77287b32. This is a 32-bit, small instance of EC2. A small instance is good for practicing HBase on EC2 because it is cheap. For production, we recommend using at least a High-Memory Extra Large instance with EBS, or a real server.
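
If you are working in a different region, you can look up the details of a candidate AMI with the EC2 command-line tools before launching anything; for example, for the us-west-1 image used in this book:

$ ec2-describe-images ami-77287b32 --region us-west-1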

How to do it...

Follow these instructions to get your EC2 instances ready for HBase. We will start two EC2 instances; one is a DNS/NTP server, and the other one is the client:

Start a micro instance for the DNS/NTP server. We will use ns1.hbase-admin-cookbook.com (ns1) as its Fully Qualified Domain Name (FQDN) in later sections of this book:
$ ec2-run-instances ami-77287b32 -t t1.micro -k your-key-pair
Start a small instance for the client. We will use client1.hbase-admin-cookbook.com (client1) as its FQDN, later in this book:
$ ec2-run-instances ami-77287b32 -t m1.small -k your-key-pair
Verify the startup from AWS Management Console, or by typing the following command:
$ ec2-describe-instances
You should see two instances in the output of the command. From the output of the ec2-describe-instances command, or from AWS Management Console, you can find the public DNS of the instances that have already started. The DNS shows a value such as ec2-xx-xx-xxx-xx.us-west-1.compute.amazonaws.com.
Log in to the instances via SSH.
Update the package index files before we install packages on the server, by using the following command:
root# apt-get update
Change your instances' time zone to your local timezone, using the following command:
root# dpkg-reconfigure tzdata
Install the NTP server daemon on the DNS server, using the following command:
root@ns# apt-get install ntp ntp-server ntpdate
Install the NTP client on the client/server, using the following command:
root@client1# apt-get install ntp ntpdate
Configure /etc/ntp.conf on ns1 to run as an NTP server, and client1 to run as an NTP client, using ns1 as its server.

Because there is no HBase-specific configuration for the NTP setup, we will skip the details. You can find sample ntp.conf files for both the server and the client in the sample source shipped with this book.
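
As a rough illustration only (the book's own sample files may differ), the client side of such a setup can be as small as a single server line pointing at ns1, while ns1 keeps a couple of public pool servers as its upstream time sources:

root@client1# vi /etc/ntp.conf
server ns1.hbase-admin-cookbook.com iburst

root@ns# vi /etc/ntp.conf
server 0.debian.pool.ntp.org iburst
server 1.debian.pool.ntp.org iburst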

Install BIND9 on ns1 to run as a DNS server, using the following command:
root@ns# apt-get install bind9
You will need to configure BIND9 to run as a primary master server for internal lookup, and run as a caching server for external lookup. You also need to configure the DNS server, to allow other EC2 instances to update their record on the DNS server.

We will skip the details as this is out of the scope of this book. For sample BIND9 configuration, please refer to the source, shipped with this book.
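
For orientation only (again, the shipped sample configuration may differ), a BIND9 zone entry in /etc/bind/named.conf.local that accepts dynamic updates signed with the key used later in this recipe could look roughly like this; the key name here is an assumption derived from the key file name used in the update script below:

zone "hbase-admin-cookbook.com" {
  type master;
  file "/etc/bind/db.hbase-admin-cookbook.com";
  allow-update { key "user.hbase-admin-cookbook.com."; };
};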

For client1, just set it up using ns1 as its DNS server:
root@client1# vi /etc/resolv.conf
nameserver 10.160.49.250 #private IP of ns
search hbase-admin-cookbook.com #domain name
Update the DNS hostname automatically. Set the hostname in the client EC2 instance's user data: from the My Instances page of AWS Management Console, select client1 from the instances list, stop it, and then click Instance Actions | View | Change User Data; enter the hostname of the instance you want to use (here client1) in the pop-up page.
Create a script to update the client's record on the DNS server, using the user data:
root@client1# vi ec2-hostname.sh
#!/bin/bash
#you will need to set up your DNS server to allow update from this key
DNS_KEY=/root/etc/Kuser.hbase-admin-cookbook.com.+157+44141.private
DOMAIN=hbase-admin-cookbook.com
USER_DATA=`/usr/bin/curl -s http://169.254.169.254/latest/user-data`
HOSTNAME=`echo $USER_DATA`
#set also the hostname to the running instance
hostname $HOSTNAME
#we only need to update for local IP
LOCIP=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/local-ipv4`
cat<<EOF | /usr/bin/nsupdate -k $DNS_KEY -v
server ns.$DOMAIN
zone $DOMAIN
update delete $HOSTNAME.$DOMAIN A
update add $HOSTNAME.$DOMAIN 60 A $LOCIP
send
EOF
Finally, to run this at boot time from rc.local, add the following script to the rc.local file:
root@client1# vi /etc/rc.local
sh /root/bin/ec2-hostname.sh
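
After a reboot (or after running the script by hand), you can verify that the record was registered by querying the DNS server directly; dig is available in the dnsutils package on Debian, and the answer should be the instance's current private IP:

root@client1# dig @ns.hbase-admin-cookbook.com client1.hbase-admin-cookbook.com +short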

How it works...

First, we started two instances: a micro instance for the DNS/NTP server, and a small one for the client. To provide a name service to other instances, the DNS name server has to be kept running. Using a micro instance can reduce your EC2 costs.

In step 3, we set up the NTP server and client. We will run our own NTP server on the same DNS server, and NTP clients on all other servers.

Note

Make sure that the clocks on the HBase cluster members are in basic alignment.

EC2 instances can be started and stopped on demand; we don't need to pay for stopped instances. However, restarting an EC2 instance changes its IP address, which makes it difficult to run HBase. We can resolve this issue by running a DNS server to provide a name service to all EC2 instances in our HBase cluster, and updating the name records on the DNS server every time an EC2 instance is restarted.

That's exactly what we have done in the last few steps. The DNS setup itself is a normal BIND9 configuration. We then stored the instance name in its user data property, so that when the instance is restarted, we can get it back using the EC2 API. We also get the private IP address of the instance via the EC2 API. With this data, we can send a DNS update command to our DNS server every time the instance is restarted. As a result, we can always use a fixed hostname to access the instance.

We will keep only the DNS instance running constantly. You can stop all other instances whenever you do not need to run your HBase cluster.
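
For example, with the EC2 command-line tools, stopping and later restarting the client instance is a single command each way; the instance ID below is a placeholder for whatever ec2-describe-instances reports for client1:

$ ec2-stop-instances i-xxxxxxxx
$ ec2-start-instances i-xxxxxxxx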

Setting up Hadoop

A fully distributed HBase runs on top of HDFS. In a fully distributed HBase cluster installation, the HBase master daemon (HMaster) typically runs on the same server as the master node of HDFS (NameNode), while the HBase slave daemon (HRegionServer) runs on the same server as an HDFS slave node, which is called a DataNode.

Hadoop MapReduce is not required by HBase, and the MapReduce daemons do not need to be started. We will cover the setup of MapReduce in this recipe too, in case you would like to run MapReduce on HBase. For a small Hadoop cluster, we usually have the MapReduce master daemon (JobTracker) run on the NameNode server, and the MapReduce slave daemons (TaskTracker) run on the DataNode servers.

This recipe describes the setup of Hadoop. We will have one master node (master1) run NameNode and JobTracker on it. We will set up three slave nodes (slave1 to slave3), each of which will run a DataNode and a TaskTracker.

Getting ready

You will need four small EC2 instances, which can be obtained by using the following command:

$ ec2-run-instances ami-77287b32 -t m1.small -n 4 -k your-key-pair

All these instances must be set up properly, as described in the previous recipe, Getting ready on Amazon EC2. Besides the NTP and DNS setups, Java installation is required by all servers too.

We will use the hadoop user as the owner of all Hadoop daemons and files. All Hadoop files and data will be stored under /usr/local/hadoop. Add the hadoop user and create a /usr/local/hadoop directory on all the servers, in advance.

We will set up one Hadoop client node as well. We will use client1, which we set up in the previous recipe. Therefore, the Java installation, hadoop user, and directory should be prepared on client1 too.

How to do it...

Here are the steps to set up a fully distributed Hadoop cluster:

In order to SSH log in to all nodes of the cluster, generate the hadoop user's public key on the master node:
hadoop@master1$ ssh-keygen -t rsa -N ""
This command will create a public key for the hadoop user on the master node, at ~/.ssh/id_rsa.pub.
On all slave and client nodes, add the hadoop user's public key to allow SSH login from the master node:
hadoop@slave1$ mkdir ~/.ssh
hadoop@slave1$ chmod 700 ~/.ssh
hadoop@slave1$ cat >> ~/.ssh/authorized_keys
Copy the hadoop user's public key that you generated in the previous step, and paste it into ~/.ssh/authorized_keys. Then, change its permissions as follows:
hadoop@slave1$ chmod 600 ~/.ssh/authorized_keys
Get the latest, stable, HBase-supported Hadoop release from Hadoop's official site, http://www.apache.org/dyn/closer.cgi/hadoop/common/. While this chapter was being written, the latest HBase-supported, stable Hadoop release was 1.0.2. Download the tarball and decompress it to our root directory for Hadoop, then add a symbolic link, and an environment variable:
hadoop@master1$ ln -s hadoop-1.0.2 current
hadoop@master1$ export HADOOP_HOME=/usr/local/hadoop/current
Create the following directories on the master node:
hadoop@master1$ mkdir -p /usr/local/hadoop/var/dfs/name
hadoop@master1$ mkdir -p /usr/local/hadoop/var/dfs/data
hadoop@master1$ mkdir -p /usr/local/hadoop/var/dfs/namesecondary
You can skip the following step if you don't use MapReduce:
hadoop@master1$ mkdir -p /usr/local/hadoop/var/mapred
Set up JAVA_HOME in Hadoop's environment setting file (hadoop-env.sh):
hadoop@master1$ vi $HADOOP_HOME/conf/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.6
Add the hadoop.tmp.dir property to core-site.xml:
hadoop@master1$ vi $HADOOP_HOME/conf/core-site.xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/var</value>
</property>
Add the fs.default.name property to core-site.xml:
hadoop@master1$ vi $HADOOP_HOME/conf/core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master1:8020</value>
</property>
If you need MapReduce, add the mapred.job.tracker property to mapred-site.xml:
hadoop@master1$ vi $HADOOP_HOME/conf/mapred-site.xml
<property>
  <name>mapred.job.tracker</name>
  <value>master1:8021</value>
</property>
Add a slave server list to the slaves file:
hadoop@master1$ vi $HADOOP_HOME/conf/slaves
slave1
slave2
slave3
Sync all Hadoop files from the master node, to client and slave nodes. Don't sync ${hadoop.tmp.dir} after the initial installation:
hadoop@master1$ rsync -avz /usr/local/hadoop/ client1:/usr/local/hadoop/
hadoop@master1$ for i in 1 2 3
do
  rsync -avz /usr/local/hadoop/ slave$i:/usr/local/hadoop/
  sleep 1
done
You need to format NameNode before starting Hadoop. Do it only for the initial installation:
hadoop@master1$ $HADOOP_HOME/bin/hadoop namenode -format
Start HDFS from the master node:
hadoop@master1$ $HADOOP_HOME/bin/start-dfs.sh
You can access your HDFS by typing the following command:
hadoop@master1$ $HADOOP_HOME/bin/hadoop fs -ls /
You can also view your HDFS admin page from the browser. Make sure the 50070 port is opened. The HDFS admin page can be viewed at http://master1:50070/dfshealth.jsp.
Start MapReduce from the master node, if needed:
hadoop@master1$ $HADOOP_HOME/bin/start-mapred.sh
Now you can access your MapReduce admin page from the browser. Make sure the 50030 port is opened. The MapReduce admin page can be viewed at http://master1:50030/jobtracker.jsp.
To stop HDFS, execute the following command from the master node:
hadoop@master1$ $HADOOP_HOME/bin/stop-dfs.sh
To stop MapReduce, execute the following command from the master node:
hadoop@master1$ $HADOOP_HOME/bin/stop-mapred.sh

How it works...

To start and stop daemons on remote slaves from the master node, a passwordless SSH login for the hadoop user is required; we set this up in the first steps of this recipe.
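
A quick way to confirm that the passwordless login works before starting any daemons is to run a harmless remote command from the master node; it should return the slave's hostname without prompting for a password:

hadoop@master1$ ssh slave1 hostname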

HBase must run on an HDFS that supports a durable sync implementation. If HBase runs on an HDFS that has no durable sync implementation, it may lose data if its slave servers go down. Hadoop versions later than 0.20.205, including Hadoop 1.0.2, which we have chosen, support this feature.
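
Depending on the exact release you deploy, you may also want to double-check the append/sync support flag in hdfs-site.xml. The property name below exists in Hadoop 1.x, but whether you need to set it explicitly depends on your version, so treat this purely as a hedged sketch and consult the release notes for your Hadoop version:

hadoop@master1$ vi $HADOOP_HOME/conf/hdfs-site.xml
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>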

HDFS and MapReduce use the local filesystem to store their data. We created the directories required by Hadoop earlier in this recipe, and pointed Hadoop at them via the hadoop.tmp.dir property in core-site.xml.
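
Once HDFS is up, a convenient way to confirm that all three DataNodes have registered with the NameNode is the dfsadmin report, which prints the configured capacity and the list of live nodes:

hadoop@master1$ $HADOOP_HOME/bin/hadoop dfsadmin -report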