Getting Started with Greenplum for Big Data Analytics

Sunila Gollapudi

Description

Organizations are leveraging data and analytics to gain a competitive advantage over their rivals, and are therefore quickly becoming more and more data-driven. With the advent of Big Data, existing Data Warehousing and Business Intelligence solutions are becoming obsolete, and the need for new, agile platforms that cover all aspects of Big Data has become inevitable. From loading/integrating data to presenting analytical visualizations and reports, the new Big Data platforms like Greenplum do it all. It is now the user's mindset that needs tuning to put these solutions to work.

"Getting Started with Greenplum for Big Data Analytics" is a practical, hands-on guide to learning and implementing Big Data Analytics using the Greenplum Integrated Analytics Platform. From processing structured and unstructured data to presenting the results/insights to key business stakeholders, this book explains it all.

"Getting Started with Greenplum for Big Data Analytics" discusses the key characteristics of Big Data and its impact on current Data Warehousing platforms. It will take you through the standard Data Science project lifecycle and will lay down the key requirements for an integrated analytics platform. It then explores the various software and appliance components of Greenplum and discusses the relevance of each component at every level in the Data Science lifecycle.

You will also learn Big Data architectural patterns and recap some key advanced analytics techniques in detail. The book will also take a look at programming with R and its integration with Greenplum for implementing analytics. Additionally, you will explore MADlib and advanced SQL techniques in Greenplum for analytics. The book also elaborates on the physical architecture aspects of Greenplum, with guidance on handling high availability, backup, and recovery.




Table of Contents

Getting Started with Greenplum for Big Data Analytics
Credits
Foreword
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Support files, eBooks, discount offers and more
Why Subscribe?
Free Access for Packt account holders
Instant Updates on New Packt Books
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Errata
Piracy
Questions
1. Big Data, Analytics, and Data Science Life Cycle
Enterprise data
Classification
Features
Big Data
So, what is Big Data?
Multi-structured data
Data analytics
Data science
Data science life cycle
Phase 1 – state business problem
Phase 2 – set up data
Phase 3 – explore/transform data
Phase 4 – model
Phase 5 – publish insights
Phase 6 – measure effectiveness
References/Further reading
Summary
2. Greenplum Unified Analytics Platform (UAP)
Big Data analytics – platform requirements
Greenplum Unified Analytics Platform (UAP)
Core components
Greenplum Database
Hadoop (HD)
Chorus
Command Center
Modules
Database modules
HD modules
Data Integration Accelerator (DIA) modules
Core architecture concepts
Data warehousing
Column-oriented databases
Parallel versus distributed computing/processing
Shared nothing, massive parallel processing (MPP) systems, and elastic scalability
Shared disk data architecture
Shared memory data architecture
Shared nothing data architecture
Data loading patterns
Greenplum UAP components
Greenplum Database
The Greenplum Database physical architecture
The Greenplum high-availability architecture
High-speed data loading using external tables
External table types
Polymorphic data storage and historic data management
Data distribution
Hadoop (HD)
Hadoop Distributed File System (HDFS)
Hadoop MapReduce
Chorus
Greenplum Data Computing Appliance (DCA)
Greenplum Data Integration Accelerator (DIA)
References/Further reading
Summary
3. Advanced Analytics – Paradigms, Tools, and Techniques
Analytic paradigms
Descriptive analytics
Predictive analytics
Prescriptive analytics
Analytics classified
Classification
Forecasting or prediction or regression
Clustering
Optimization
Simulations
Modeling methods
Decision trees
Association rules
The Apriori algorithm
Linear regression
Logistic regression
The Naive Bayesian classifier
K-means clustering
Text analysis
R programming
Weka
In-database analytics using MADlib
References/Further reading
Summary
4. Implementing Analytics with Greenplum UAP
Data loading for Greenplum Database and HD
Greenplum data loading options
External tables
gpfdist
gpload
Hadoop (HD) data loading options
Sqoop 2
Greenplum BulkLoader for Hadoop
Using external ETL to load data into Greenplum
Extraction, Load, and Transformation (ELT) and Extraction, Transformation, Load, and Transformation (ETLT)
Greenplum target configuration
Sourcing large volumes of data from Greenplum
Unsupported Greenplum data types
Push Down Optimization (PDO)
Greenplum table distribution and partitioning
Distribution
Data skew and performance
Optimizing the broadcast or redistribution motion for data co-location
Partitioning
Querying Greenplum Database and HD
Querying Greenplum Database
Analyzing and optimizing queries
The ANALYZE function
The EXPLAIN function
Dynamic Pipelining in Greenplum
Querying HDFS
Hive
Pig
Data communication between Greenplum Database and Hadoop (using external tables)
Data Computing Appliance (DCA)
Storage design, disk protection, and fault tolerance
Master server RAID configurations
Segment server RAID configurations
Monitoring DCA
Greenplum Database management
In-database analytics options (Greenplum-specific)
Window functions
The PARTITION BY clause
The ORDER BY clause
The OVER (ORDER BY…) clause
Creating, modifying, and dropping functions
User-defined aggregates
Using R with Greenplum
DBI Connector for R
PL/R
Using Weka with Greenplum
Using MADlib with Greenplum
Using Greenplum Chorus
Pivotal
References/Further reading
Summary
Index

Getting Started with Greenplum for Big Data Analytics

Copyright © 2013 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: October 2013

Production Reference: 1171013

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78217-704-3

www.packtpub.com

Cover Image by Aniket Sawant (<[email protected]>)

Credits

Author

Sunila Gollapudi

Reviewers

Brian Feeny

Scott Kahler

Alan Koskelin

Tuomas Nevanranta

Acquisition Editor

Kevin Colaco

Commissioning Editor

Deepika Singh

Technical Editors

Kanhucharan Panda

Vivek Pillai

Project Coordinator

Amey Sawant

Proofreader

Bridget Braund

Indexer

Mariammal Chettiyar

Graphics

Valentina D'silva

Ronak Dhruv

Abhinash Sahu

Production Coordinator

Adonia Jones

Cover Work

Adonia Jones

Foreword

In the last decade, we have seen the impact of exponential advances in technology on the way we work, shop, communicate, and think. At the heart of this change is our ability to collect and gain insights from data; comments like "Data is the new oil" or "we have a Data Revolution" only amplify the importance of data in our lives.

Tim Berners-Lee, inventor of the World Wide Web, said, "Data is a precious thing and will last longer than the systems themselves." IBM recently stated that people create a staggering 2.5 quintillion bytes of data every day (roughly equivalent to over half a billion HD movie downloads). This information is generated from a huge variety of sources including social media posts, digital pictures, videos, retail transactions, and even the GPS tracking functions of mobile phones.

This data explosion has rapidly moved the term "Big Data" from an industry buzzword to practically a household term. Harnessing Big Data to extract insights is not an easy task; the potential rewards for finding these patterns are huge, but it will require technologists and data scientists working together to solve these problems.

The book written by Sunila Gollapudi, Getting Started with Greenplum for Big Data Analytics, has been carefully crafted to address the needs of both technologists and data scientists.

Sunila starts by providing excellent background to the Big Data problem and why new thinking and skills are required. Along with a deep dive into advanced analytic techniques, she brings out the difference in thinking between the "new" Big Data science and traditional "Business Intelligence"; this is especially useful in understanding and bridging the skill gap.

She moves on to discuss the computing side of the equation: handling scale, complexity of data sets, and rapid response times. The key here is to eliminate the "noise" in data early in the data science life cycle. Here, she talks about how to use one of the industry's leading product platforms, Greenplum, to build Big Data solutions, explaining the need for a unified platform that brings the essential software components (commercial and open source) together, backed by a hardware appliance.

She then puts the two together to get the desired result—how to get meaning out of Big Data. In the process, she also brings out the capabilities of the R programming language, which is mainly used in the area of statistical computing, graphics, and advanced analytics.

Her easy-to-read, practical style of writing with real examples shows her depth of understanding of this subject. The book will be very useful both for data scientists, who need to understand the computing side and the technologies, and for those who aspire to learn data science.

V. Laxmikanth

Managing Director

Broadridge Financial Solutions (India) Private Limited

www.broadridge.com

About the Author

Sunila Gollapudi works as a Technology Architect for Broadridge Financial Solutions Private Limited. She has over 13 years of experience in developing, designing, and architecting data-driven solutions, with a focus on the banking and financial services domain for around eight of those years. She drives the Big Data and data science practice for Broadridge. Her key roles have been Solutions Architect, Technical Leader, Big Data Evangelist, and Mentor.

Sunila has a Master's degree in Computer Applications, and her passion for mathematics drew her into data and analytics. She worked on Java and distributed architecture, and was a SOA consultant and integration specialist before she embarked on her data journey. She is a strong follower of open source technologies and believes in the innovation that the open source revolution brings.

She has been a speaker at various conferences and meetups on Java and Big Data. Her current Big Data and data science specialties include Hadoop, Greenplum, R, Weka, MADlib, advanced analytics, machine learning, and data integration tools such as Pentaho and Informatica.

With a unique blend of technology and domain expertise, Sunila has been instrumental in conceptualizing architectural patterns and providing reference architecture for Big Data problems in the financial services domain.

Acknowledgement

It was a pleasure to work with Packt Publishing on this project. Packt has been most accommodating, extremely quick, and responsive to all requests.

I am deeply grateful to Broadridge for providing me the platform to explore and build expertise in Big Data technologies. My greatest gratitude to Laxmikanth V. (Managing Director, Broadridge) and Niladri Ray (Executive Vice President, Broadridge) for all the trust, freedom, and confidence in me.

Thanks to my parents for having relentlessly encouraged me to explore any and every subject that interested me.

Authors usually thank their spouses for their "patience and support" or words to that effect. Unless one has lived through the actual experience, one cannot fully comprehend how true this is. Over the last ten years, Kalyan has endured what must have seemed like a nearly continuous stream of whining punctuated by occasional outbursts of exhilaration and grandiosity, all of it against the backdrop of the self-absorbed attitude of a typical author. His patience and support were unfailing.

Last but not least, my love, my daughter, my angel, Nikita, who has been my continuous drive. Without her being as accommodative as she was, this book wouldn't have been possible.

About the Reviewers

Brian Feeny is a technologist/evangelist working with many Big Data technologies such as analytics, visualization, data mining, machine learning, and statistics. He is a graduate student in Software Engineering at Harvard University, primarily focused on data science, where he gets to work on interesting data problems using some of the latest methods and technology.

Brian works for Presidio Networked Solutions, where he helps businesses with their Big Data challenges and helps them understand how to make best use of their data.

I would like to thank my wife, Scarlett, for her tolerance of my busy schedule. I would like to thank Presidio, my employer, for investing in our Big Data practice. Lastly, I would like to thank EMC and Pivotal for the excellent training and support they have given Presidio and myself.

Scott Kahler started down the path in the mid-80s when he disconnected the power LED on his Commodore 64. In this fashion he could run his handwritten Dungeons and Dragons random character generator, and his parents wouldn't complain about the computer being on all night. Since then, Scott has been involved in technology and data.

His ability to get his hands on truly large datasets came after the year 2000 failed to end technology as we know it. Scott joined up with a bunch of talented people to launch uclick.com (now gocomics.com), playing a role as a jack-of-all-trades: Programmer, DBA, and System Administrator. It was there that he first dealt with datasets that needed to be distributed across multiple nodes to be parsed and churned in a relatively quick amount of time. A decade later, he joined Adknowledge and helped implement their Greenplum and Hadoop infrastructures, taking roles as their Big Data Architect and managing IT Operations. Scott now works for Pivotal as a field engineer, spreading the gospel of the next technology paradigm: scalable distributed storage and compute.

I would first and foremost like to thank my wife, Kate. She is the primary reason I am able to do what I do. She provides strength when I run into barriers and stability when life is hectic.

Alan Koskelin is a software developer living in the Madison, Wisconsin area. He has worked in many industries including biotech, healthcare, and online retail. The software he develops is often data-centric, and his personal interests lean towards ecological, environmental, and biological data.

Alan currently works for a nonprofit organization dedicated to improving reading instruction in the primary grades.

Tuomas Nevanranta is a Business Intelligence professional in Helsinki, Finland. He has an M.Sc. in Economics and Business Administration and a B.Sc. in Business Information Technology. He is currently working in a Finnish company called Rongo.

Rongo is a leading Finnish Information Management consultancy company. Rongo helps its customers to manage, refine, and utilize information in their businesses. Rongo creates added value by offering market-leading Business Intelligence solutions containing Big Data solutions, data warehousing, master data management, reporting, and scorecards.

www.PacktPub.com

Support files, eBooks, discount offers and more

You might want to visit www.PacktPub.com for support files and downloads related to your book.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

http://PacktLib.PacktPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books.

Why Subscribe?

Fully searchable across every book published by Packt
Copy and paste, print and bookmark content
On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.

Instant Updates on New Packt Books

Get notified! Find out when new books are published by following @PacktEnterprise on Twitter, or the Packt Enterprise Facebook page.

Preface

Big Data started off as a technology buzzword and rapidly grew into the headline agenda of several corporate strategies across industry verticals. With the amount of structured and unstructured data available to organizations exploding, analysis of these large data sets is increasingly becoming a key basis of competition, productivity growth, and, more importantly, product innovation.

Most technology approaches to Big Data appear to come across as linear deployments of new technology stacks on top of existing databases or data warehouses. A Big Data strategy is partly about solving the "computational" challenge that comes with exponentially growing data, and more importantly about "uncovering the patterns" and trends lying hidden in these large data sets. Also, with changing data storage and processing challenges, existing data warehousing and business intelligence solutions need a face-lift, and the need for new, agile platforms addressing all aspects of Big Data has become inevitable. From loading/integrating data to presenting analytical visualizations and reports, the new Big Data platforms like Greenplum do it all. Very evidently, we now need to address this opportunity with a combination of the "art of data science" and the "related tools/technologies".

This book is meant to serve as a practical, hands-on guide to learning and implementing Big Data analytics using Greenplum and other related tools and frameworks such as Hadoop, R, MADlib, and Weka. Some key Big Data architectural patterns are covered, with detail on a few relevant advanced analytics techniques. The book includes the details required to onboard readers to all the concepts, tools, and frameworks needed to implement a data analytics project.

R, Weka, MADlib, advanced SQL functions, and window functions are covered for in-database analytics implementation. Infrastructure and hardware aspects of Greenplum are covered, along with some detail on configuration and tuning.
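
To give a flavor of how that in-database work is driven from R, the following is a minimal sketch using the DBI-based RPostgreSQL driver (Greenplum speaks the PostgreSQL wire protocol). The host name, credentials, and the sales table with its region and amount columns are hypothetical placeholders, not objects defined in this book:

# Minimal sketch: querying Greenplum from R via DBI/RPostgreSQL.
# Connection details and the "sales" table are hypothetical placeholders.
library(RPostgreSQL)

drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv,
                 host     = "gp-master.example.com",   # Greenplum master host (placeholder)
                 port     = 5432,
                 dbname   = "analytics",
                 user     = "gpadmin",
                 password = "changeme")

# Push a window-function query down to the database; only the result comes back to R
result <- dbGetQuery(con, "
  SELECT region,
         amount,
         rank() OVER (PARTITION BY region ORDER BY amount DESC) AS amount_rank
  FROM   sales")

head(result)        # inspect the first few ranked rows
dbDisconnect(con)

The point of the pattern is that the heavy lifting (the window function) runs inside Greenplum, and R only receives the ranked result set.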

Overall, from processing structured and unstructured data to presenting the results/insights to key business stakeholders, this book introduces all the key aspects of the technology and science.

Note

Greenplum UAP is currently being repositioned by Pivotal. The modules and components are being rebranded to include the "Pivotal" tag and are being packaged under PivotalOne. A few of the VMware products, such as GemFire and SQLFire, are being included in the Pivotal Solution Suite along with RabbitMQ. Additionally, support/integration with Complex Event Processing (CEP) for real-time analytics has been added. The Hadoop (HD) distribution, now called Pivotal HD, includes a new framework, HAWQ, that provides SQL-like querying capabilities for Hadoop data (similar to Impala from the open source distribution). However, the current features and capabilities of the Greenplum UAP detailed in this book will continue to exist.

What this book covers

Chapter 1, Big Data, Analytics, and Data Science Life Cycle, defines and introduces the readers to the core aspects of Big Data and standard analytical techniques. It covers the philosophy of data science with a detailed overview of standard life cycle and steps in business context.

Chapter 2, Greenplum Unified Analytics Platform (UAP), elaborates on the architecture and application of the Greenplum Unified Analytics Platform (UAP) in the Big Data analytics context. It covers both the appliance and the software parts of the platform. Greenplum UAP combines the capabilities to process structured and unstructured data with a productivity engine and a social network engine that break down the barriers between data science teams. Tools and frameworks such as R, Weka, and MADlib that integrate into the platform are elaborated on.

Chapter 3, Advanced Analytics – Paradigms, Tools, and Techniques, introduces standard analytic paradigms with a deep dive into some core data mining techniques such as simulations, clustering, text analytics, decision trees, association rules, linear and logistic regression, and so on. R programming, Weka, and in-database analytics using MADlib are introduced in this chapter.
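
To ground two of the techniques named above, here is a small sketch in base R using the built-in mtcars and iris datasets (illustrative code, not an example from the book), showing a linear regression and a k-means clustering:

# Linear regression: model fuel efficiency (mpg) from weight and horsepower
fit <- lm(mpg ~ wt + hp, data = mtcars)
summary(fit)                            # coefficients, R-squared, significance tests

# K-means clustering: group iris flowers into three clusters
# using the four numeric measurements (the species labels are ignored)
set.seed(42)                            # fixed seed for a reproducible assignment
clusters <- kmeans(iris[, 1:4], centers = 3)
table(clusters$cluster, iris$Species)   # compare clusters against the true species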

Chapter 4, Implementing Analytics with Greenplum UAP, covers the implementation aspects of a data science project using the Greenplum analytics platform. It provides a detailed guide to loading and unloading structured and unstructured data into Greenplum and HD, along with the approach to integrating Informatica PowerCenter, R, Hadoop, Weka, and MADlib with Greenplum. A note on Chorus and other Greenplum-specific in-database analytic options is also included.

What you need for this book

As a prerequisite, this book assumes that readers have a basic knowledge of distributed and parallel computing, an understanding of core analytic techniques, and basic exposure to programming.

In this book, readers will see selective detailing of some implementation aspects of a data science project using the Greenplum analytics platform (which includes the Greenplum Database, HD, and in-database analytics utilities such as the PL/XXX packages and MADlib), R, and Weka.

Who this book is for

This book is meant for data scientists (or aspiring data scientists) and solution and data architects who are looking to implement analytic solutions for Big Data using the Greenplum integrated analytics platform. The book gives the right mix of detail on the technology, tools, and frameworks, and on the science part of analytics.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text are shown as follows: "Use runif to generate multiple random numbers uniformly between two numbers."

A block of code is set as follows:

runif(1, 2, 3)       # one random number uniformly distributed between 2 and 3
runif(10, 5.0, 7.5)  # ten random numbers uniformly distributed between 5.0 and 7.5
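
As a general R usage note (not a convention from the book), fixing the random seed makes such examples reproducible:

set.seed(123)        # fix the random number generator state
runif(1, 2, 3)       # returns the same value on every run after the same seed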

New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes, for example, appear in the text like this: "The following screenshot shows an object browser window in Greenplum's pgAdminIII, a client tool to manage database elements".

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title via the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at <[email protected]> if you are having a problem with any aspect of the book, and we will do our best to address it.