Apache Oozie Essentials

Jagat Singh

Description

As more and more organizations discover the power of big data analytics, interest in platforms that provide storage, computation, and analytic capabilities is booming. This calls for data management, and Hadoop caters to that need. Oozie fulfils the need for a Hadoop job scheduler by acting as a cron, so that data can be analyzed on a regular schedule.

Apache Oozie Essentials starts off with the basics, right from installing and configuring Oozie from source code on your Hadoop cluster to managing complex clusters. You will learn how to create data ingestion and machine learning workflows.
This book is sprinkled with examples and exercises to help you take your big data learning to the next level. You will discover how to write workflows to run your MapReduce, Pig, Hive, and Sqoop scripts and schedule them to run at a specific time or for a specific business requirement using a Coordinator. Engaging real-life exercises and examples get you into the thick of things. Lastly, you'll get a grip on how to embed Spark jobs, which can be used to run your machine learning models on Hadoop.
By the end of the book, you will have a good working knowledge of Apache Oozie. You will be capable of using Oozie to handle large Hadoop workflows and even improve the availability of your Hadoop environment.




Table of Contents

Apache Oozie Essentials
Credits
About the Author
About the Reviewers
www.PacktPub.com
Support files, eBooks, discount offers, and more
Why subscribe?
Free access for Packt account holders
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Setting up Oozie
Configuring Oozie in Hortonworks distribution
Installing Oozie using tar ball
Creating a test virtual machine
Building Oozie source code
Summary of the build script
Codehaus Maven move
Download dependency jars
Preparing to create a WAR file
Create a WAR file
Configure Oozie MySQL database
Configure the shared library
Start server testing and verification
Summary
2. My First Oozie Job
Installing and configuring Hue
Oozie concepts
Workflows
Coordinator
Bundles
Book case study
Running our first Oozie job
Types of nodes
Control flow nodes
Action nodes
Oozie web console
The Oozie command line
Summary
3. Oozie Fundamentals
Chapter case study
The Decision node
The Email action
Expression Language functions
Basic EL constants
Basic EL functions
Workflow EL functions
Hadoop EL constants
HDFS EL functions
Email action configuration
Job property file
Submission from the command line
Workflow states
Summary
4. Running MapReduce Jobs
Chapter case study
Running MapReduce jobs from Oozie
The job.properties file
Running the job
Running Oozie MapReduce job
Coordinators
Datasets
Frequency and time
Cron syntax for frequency
Timezone
The <done-flag> tag
Initial instance
My first Coordinator
Coordinator v1 definition
job.properties v1 definition
Coordinator v2 definition
job.properties v2 definition
Checking the job log
Running a MapReduce streaming job
Summary
5. Running Pig Jobs
Chapter case study
The Pig command line
The config-default.xml file
Pig action
Pig Coordinator job v2
Parameters in the Dataset's input and output events
current(int n)
hoursInDay(int n)
daysInMonth(int n)
latest(int n)
Coordinator controls
Pig Coordinator job v3
Summary
6. Running Hive Jobs
Chapter case study
Running a Hive job from the command line
Hive action
Validating Oozie Workflow
Hive 2 action
Parameterization of Coordinator jobs
dateOffset(String baseDate, int instance, String timeUnit)
dateTzOffset(String baseDate, String timezone)
formatTime(String timeStamp, String format)
Summary
7. Running Sqoop Jobs
Chapter case study
Running Sqoop command line
Sqoop action
HCatalog
HCatalog datasets
HCatalog EL functions
HCatalog Coordinator functions
Pig script
The job.properties file
The Sqoop action Coordinator
Running the job
Checking data in the Hive table
Summary
8. Running Spark Jobs
Spark action
Bundles
Data pipelines
Summary
9. Running Oozie in Production
Packaging and continuous delivery
Oozie in secured cluster
Rerun
Rerun Workflow
Rerun Coordinator
Rerun Bundle
Summary
Index

Apache Oozie Essentials

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2015

Production reference: 1011215

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78588-038-4

www.packtpub.com

Credits

Author

Jagat Jasjit Singh

Reviewers

Siva Prakash

Rahul Tekchandani

Commissioning Editor

Dipika Gaonkar

Acquisition Editor

Tushar Gupta

Content Development Editor

Preeti Singh

Technical Editor

Dhiraj Chandanshive

Copy Editor

Roshni Banerjee

Project Coordinator

Shweta H Birwatkar

Proofreader

Safis Editing

Indexer

Priya Sane

Production Coordinator

Melwyn Dsa

Cover Work

Melwyn Dsa

About the Author

Jagat Jasjit Singh works for one of the largest telecom companies in Melbourne, Australia, as a big data architect. He has over 10 years of experience in total and has been working with the Hadoop ecosystem for more than 5 years. He is skilled in Hadoop, Spark, Oozie, Hive, Pig, Scala, machine learning, HBase, Falcon, Kafka, GraphX, Flume, Knox, Sqoop, Mesos, Marathon, Chronos, OpenStack, and Java, and has worked on a variety of Australian and European customer implementations. He actively writes about Big Data and IoT technologies on his personal blog (http://jugnu.life). Jugnu (a Punjabi word) is a firefly that glows at night and illuminates the world with its tiny light. Jagat believes in the same philosophy of sharing knowledge to make the world a better place. You can connect with him on LinkedIn at https://au.linkedin.com/in/jagatsingh.

All the author-side earnings from this book will go to charity. If you have not purchased this book directly, please consider donating at http://www.pingalwara.net/donations.html. You can donate with your PayPal account or credit card.

This book is dedicated to Almighty God, who gave me everything, my parents, and the wonderful people from the Omnia project at Commonwealth Bank of Australia (https://github.com/CommBank). I would like to acknowledge the help of Tushar Gupta, Dhiraj Chandanshive, Roshni Banerjee, and Preeti Singh from Packt Publishing in writing this book.

About the Reviewers

Siva Prakash has been working in the field of software development for the last 7 years. He is currently with Cisco in Bangalore. He has extensive development experience in desktop-, mobile-, and web-based applications across the ERP, telecom, and digital media industries, and has worked on big data technologies for the digital media industry. He has a passion for learning new technologies and sharing the knowledge he gains with others. He loves trekking, travelling, music, reading books, and blogging.

He is available on LinkedIn at https://www.linkedin.com/in/techsivam.

Rahul Tekchandani is a Hadoop software developer who specializes in building Hadoop data platforms for large financial institutions. With experience in software design, development, and support, he has engineered robust, data-driven applications using Cloudera's Hadoop distribution. Rahul has also worked as an information architect to support data sanitization and data governance.

Prior to his career in software development, he completed his master's degree in Management Information Systems at the University of Arizona and worked on academic projects for top tech and banking companies.

He currently lives in Charlotte, North Carolina. Visit his developer's blog at www.rahultekchandani.com to see what he is currently exploring, and to learn more about him.

www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Free access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.

Preface

With the increasing popularity of Big Data in the enterprise, more and more workloads are being shifted to Hadoop every day.

To run those regular processing jobs on Hadoop, we need a scheduler that can act as a cron for all data pipelines. Oozie plays this role in the Big Data world.

This book introduces you to the world of Oozie using a step-by-step, case-study-based approach.

What this book covers

Chapter 1, Setting up Oozie, covers how to install and configure Oozie in a Hadoop cluster. We will also learn how to install Oozie from the source code.

Chapter 2, My First Oozie Job, covers running a "Hello World" equivalent first Oozie job. It also introduces the concepts of Workflows, Coordinators, and Bundles.

Chapter 3, Oozie Fundamentals, introduces the fundamental concepts of control nodes, the Expression Language, and the web console, and covers running Oozie jobs from Hue.

Chapter 4, Running MapReduce Jobs, teaches how to run MapReduce jobs from Oozie and explores the concepts of Coordinators, Datasets, and cron-based frequency schedules.

Chapter 5, Running Pig Jobs, teaches how to run Pig jobs from Oozie. We will also cover the concept of parameterization of Datasets and Coordinator controls.

Chapter 6, Running Hive Jobs, introduces how to run Hive jobs and discusses the concepts of parameterization of Coordinator actions.

Chapter 7, Running Sqoop Jobs, shows how to run Sqoop jobs from Oozie and introduces the concept of HCatalog Datasets and EL functions.

Chapter 8, Running Spark Jobs, shows how to run Spark jobs. It also introduces the concept of Bundles and how they are used to group a set of Coordinator jobs.

Chapter 9, Running Oozie in Production, covers how to package the code for production deployments and how to rerun the jobs that have failed.

What you need for this book

To follow the tutorials and code examples in this book, you need access to a Hadoop cluster, or you can configure a single-node virtual machine-based cluster. You should have a good laptop or desktop, preferably with a Linux operating system, or Windows with VirtualBox installed.

Who this book is for

This book is for anyone who is familiar with the basics of Hadoop and Hive and now wants to automate data and machine learning pipelines using Apache Oozie.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Open the job.properties file and set the nameNode property."

Most of the code in the book is XML. A block of code is set as follows:

<workflow-app name="My_first_Workflow" xmlns="uri:oozie:workflow:0.5">
    <start to="fs-2178"/>
    <kill name="Kill">
        <message>Action failed</message>
    </kill>
    <action name="fs-2178">
        <fs>
            <delete path='${nameNode}/user/hue'/>
        </fs>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>

Any command-line input or output is written as follows:

$ hadoop fs -ls /user/hue/learn_oozie

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Go to Settings | Networking | Port Forwarding, and click on Add new port forwarding."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

Chapter 1. Setting up Oozie

Oozie is a workflow scheduler system to run Apache Hadoop jobs. Oozie Workflow jobs are Directed Acyclic Graphs (DAGs) of actions; more information on DAGs can be found at https://en.wikipedia.org/wiki/Directed_acyclic_graph. Actions define the work to be done in a job. Oozie supports running jobs of various types, such as Java, MapReduce, Pig, Hive, Sqoop, Spark, and DistCp. The output of one action can be consumed by the next action to create a chained sequence.
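
As a minimal sketch of such a chain (the Workflow name, action names, and HDFS paths here are illustrative assumptions, not taken from the book), a Workflow in which the second filesystem action runs only after the first one succeeds could look like this:

<!-- A minimal two-action chain: make-dir runs first, and its ok
     transition hands control to cleanup, so the actions execute in
     sequence. The paths under ${nameNode} are assumptions. -->
<workflow-app name="chain_example" xmlns="uri:oozie:workflow:0.5">
    <start to="make-dir"/>
    <action name="make-dir">
        <fs>
            <mkdir path="${nameNode}/user/learn_oozie/output"/>
        </fs>
        <ok to="cleanup"/>
        <error to="fail"/>
    </action>
    <action name="cleanup">
        <fs>
            <delete path="${nameNode}/user/learn_oozie/output/_tmp"/>
        </fs>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Action failed</message>
    </kill>
    <end name="end"/>
</workflow-app>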

Oozie has a client-server architecture: we install the server, which stores and runs the jobs, and we use the client to submit our jobs to the server.
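
For example, assuming the Oozie server is reachable on its default port 11000 on localhost (an assumption for this sketch), a typical client-side submission with the oozie command-line tool looks like this:

$ export OOZIE_URL=http://localhost:11000/oozie
$ oozie job -config job.properties -run

The -config option points at the job's property file, and -run submits and starts the job in one step; the server replies with a job ID that you can later pass to oozie job -info to check the job's status.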

In this chapter, we will learn how to install Oozie both for learning purposes and for production. For learning purposes, we will build Oozie from the source code, and for production we will use the Hortonworks Hadoop distribution. Throughout the book, we will use the Hortonworks single node virtual machine. If you are using a different Hadoop distribution, you should not worry at all; every distribution packages the same Oozie software, which is made by the Apache community (http://oozie.apache.org).

After reading this chapter, we will be able to:

Configure Oozie in the Hortonworks distribution using Ambari
Install Oozie using the source code provided as a tarball on the Apache Oozie website

Configuring Oozie in Hortonworks distribution

In this section, we will learn how to configure Oozie inside the Hortonworks Hadoop distribution using Ambari. We will configure the Oozie server to use a MySQL database instead of the default Derby database to store all job information.
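
Whichever way you apply it, this database setting ends up in oozie-site.xml. As a sketch, assuming a local MySQL instance with a database and user both named oozie (these names and credentials are assumptions, not values from the book), the relevant properties look like this:

<!-- oozie-site.xml: point Oozie's JPA service at MySQL instead of the
     default Derby database. Host, database name, and credentials here
     are assumptions for this sketch. -->
<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://localhost:3306/oozie</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>oozie</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>oozie</value>
</property>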

We will use a virtual machine to learn how to configure Oozie in the Hortonworks Hadoop distribution. Most other distributions, such as Cloudera and Pivotal, involve similar steps.

Let's start with the following steps:

1. If you don't have VirtualBox on your machine, download and install it from https://www.virtualbox.org/wiki/Downloads.
2. Download the Hortonworks single node virtual machine from http://hortonworks.com/hdp/downloads/. It will take 1-2 hours depending upon your Internet connection speed.

Tip

It is always good to store virtual machine images in a common folder. For example, I have a folder on my machine, ~/dev/vm/, which makes virtual machine image management easier.

After the download is complete, open VirtualBox and click on File | Import Appliance:

Import appliance