
Simplify real-time data processing by leveraging the power of Apache Kafka 1.0

About This Book

  • Use Kafka 1.0 features such as the Confluent Platform and Kafka Streams to build efficient streaming data applications to handle and process your data
  • Integrate Kafka with other Big Data tools such as Apache Hadoop, Apache Spark, and more
  • Hands-on recipes to help you design, operate, maintain, and secure your Apache Kafka cluster with ease

Who This Book Is For

This book is for developers and Kafka administrators who are looking for quick, practical solutions to problems encountered while operating, managing or monitoring Apache Kafka. If you are a developer, some knowledge of Scala or Java will help, while for administrators, some working knowledge of Kafka will be useful.

What You Will Learn

  • Install and configure Apache Kafka 1.0 to get optimal performance
  • Create and configure Kafka Producers and Consumers
  • Operate your Kafka clusters efficiently by implementing the mirroring technique
  • Work with the new Confluent Platform and Kafka Streams, and achieve high availability with Kafka
  • Monitor Kafka using tools such as Graphite and Ganglia
  • Integrate Kafka with third-party tools such as Elasticsearch, Logstash, Apache Hadoop, Apache Spark, and more

In Detail

Apache Kafka provides a unified, high-throughput, low-latency platform to handle real-time data feeds. This book will show you how to use Kafka efficiently, and contains practical solutions to the common problems that developers and administrators usually face while working with it.

This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka Consumers and Producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming aspect covered in the book will teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focusing on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite.

If you're looking to become the go-to person in your organization when it comes to working with Apache Kafka, this book is the only resource you need to have.

Style and approach

Following a cookbook recipe-based approach, we'll teach you how to solve everyday difficulties and struggles you encounter using Kafka through hands-on examples.




Apache Kafka 1.0 Cookbook

Over 100 practical recipes on using distributed enterprise messaging to handle real-time data

Raúl Estrada

BIRMINGHAM - MUMBAI

Apache Kafka 1.0 Cookbook

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2017

Production reference: 1211217

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-78728-684-9

www.packtpub.com

Credits

Author: Raúl Estrada

Reviewers: Sandeep Khurana, Brian Gatt

Commissioning Editor: Amey Varangaonkar

Acquisition Editor: Varsha Shetty

Content Development Editor: Cheryl Dsa

Technical Editor: Dinesh Pawar

Copy Editor: Safis Editing

Project Coordinator: Nidhi Joshi

Proofreader: Safis Editing

Indexer: Tejal Daruwale Soni

Graphics: Tania Dutta

Production Coordinator: Aparna Bhagat

About the Author

Raúl Estrada has been a programmer since 1996 and a Java developer since 2001. He loves functional languages like Scala, Elixir, Clojure, and Haskell. He also loves all topics related to computer science. With more than 14 years of experience in high availability and enterprise software, he has designed and implemented architectures since 2003. His specialization is in systems integration and he has participated in projects mainly related to the financial sector. He has been an enterprise architect for BEA Systems and Oracle Inc., but he also enjoys mobile programming and game development. He considers himself a programmer before an architect, engineer, or developer.

Raúl is a supporter of free software, and enjoys experimenting with new technologies, frameworks, languages, and methods.

I want to say thanks to my editors Cheryl Dsa and Dinesh Pawar. Without their effort and patience, it would not have been possible to write this book. I also thank my acquisition editor, Varsha Shetty, who believed in this project from the beginning. And finally, I want to thank all the heroes who contribute (often anonymously and without profit) to open source projects, specifically Apache Kafka. An honorable mention for those who build the connectors of this technology, and especially the Confluent Inc. crew.

About the Reviewers

Sandeep Khurana is an early proponent of big data and analytics, starting during his days at Yahoo! (the originator of Hadoop). He has been part of many other industry leaders in the same domain, such as IBM Software Lab, Oracle, Yahoo!, Nokia, and VMware, as well as an array of startups, where he was instrumental in architecting, designing, and building multiple petabyte-scale big data processing systems that have stood the test of industry rigor. He is completely in his element coding in all the big data technologies, such as MapReduce, Spark, Pig, Hive, ZooKeeper, Flume, Oozie, HBase, and Kafka. With the wealth of experience arising from 21 years in the industry, he has developed a unique trait of solving the most complicated and critical architectural issues with the simplest and most efficient means. As an early entrant in the industry, he worked in all aspects of Java/JEE-based technologies and frameworks, such as Spring, Hibernate, JPA, EJB, security, and Struts, before he delved into the big data domain. Some of his other present areas of interest are OAuth2, OIDC, microservices frameworks, artificial intelligence, and machine learning. He is quite active on LinkedIn (/skhurana333) with his tech talks.


Brian Gatt is a software developer who holds a bachelor's degree in computer science and artificial intelligence from the University of Malta, and a master's degree in computer games and entertainment from Goldsmiths, University of London. In his spare time, he likes to keep up with the latest in programming, specifically native C++ programming and game development techniques.

 

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1787286843.

If you'd like to join our team of regular reviewers, you can email us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

This book is dedicated to my mom, who loves cookbooks

Table of Contents

Preface

What this book covers

What you need for this book

Who this book is for

Sections

Getting ready

How to do it…

How it works…

There's more…

See also

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

Configuring Kafka

Introduction

Installing Kafka

Getting ready

How to do it...

Installing Java in Linux

Installing Scala in Linux

Installing Kafka in Linux

There's more...

See also

Running Kafka

Getting ready

How to do it...

There's more...

See also

Configuring Kafka brokers

Getting ready

How to do it...

How it works...

There's more...

See also

Configuring Kafka topics

Getting ready

How to do it...

How it works...

There's more…

Creating a message console producer

Getting ready

How to do it...

How it works...

There's more…

Creating a message console consumer

Getting ready

How to do it...

How it works...

There's more...

Configuring the broker settings

Getting ready

How to do it...

How it works…

There's more…

Configuring threads and performance

Getting ready

How to do it...

How it works…

There's more...

Configuring the log settings

Getting ready

How to do it...

How it works…

There's more…

See also

Configuring the replica settings

Getting ready

How to do it...

How it works…

There's more...

Configuring the ZooKeeper settings

Getting ready

How to do it…

How it works…

See also

Configuring other miscellaneous parameters

Getting ready

How to do it...

How it works…

See also

Kafka Clusters

Introduction

Configuring a single-node single-broker cluster – SNSB

Getting ready

How to do it...

Starting ZooKeeper

Starting the broker

How it works...

There's more...

See also

SNSB – creating a topic, producer, and consumer

Getting ready

How to do it...

Creating a topic

Starting the producer

Starting the consumer

How it works...

There's more...

Configuring a single-node multiple-broker cluster – SNMB

Getting ready

How to do it...

How it works...

There's more...

See also

SNMB – creating a topic, producer, and consumer

Getting ready

How to do it...

Creating a topic

Starting a producer

Starting a consumer

How it works...

There's more...

See also

Configuring a multiple-node multiple-broker cluster – MNMB

Getting ready

How to do it...

How it works...

See also

Message Validation

Introduction

Modeling the events

Getting ready

How to do it...

How it works...

There's more...

See also

Setting up the project

Getting ready

How to do it...

How it works...

There's more...

See also

Reading from Kafka

Getting ready

How to do it...

How it works...

There's more...

See also

Writing to Kafka

Getting ready

How to do it...

How it works...

There's more...

See also

Running ProcessingApp

Getting ready

How to do it...

How it works...

There's more...

See also

Coding the validator

Getting ready

How to do it...

There's more...

See also

Running the validator

Getting ready

How to do it...

How it works...

There's more...

See also

Message Enrichment

Introduction

Geolocation extractor

Getting ready

How to do it...

How it works...

There's more...

See also

Geolocation enricher

Getting ready

How to do it...

How it works...

There's more...

See also

Currency price extractor

Getting ready

How to do it...

How it works...

There's more...

See also

Currency price enricher

Getting ready

How to do it...

How it works...

There's more...

See also

Running the currency price enricher

Getting ready

How to do it...

How it works...

Modeling the events

Getting ready

How to do it...

How it works...

There's more...

See also

Setting up the project

Getting ready

How to do it...

How it works...

There's more...

See also

Open weather extractor

Getting ready

How to do it...

How it works...

There's more...

See also

Location temperature enricher

Getting ready

How to do it...

How it works...

There's more...

See also

Running the location temperature enricher

Getting ready

How to do it...

How it works...

The Confluent Platform

Introduction

Installing the Confluent Platform

Getting ready

How to do it...

There's more...

See also

Using Kafka operations

Getting ready

How to do it...

There's more...

See also

Monitoring with the Confluent Control Center

Getting ready

How to do it...

How it works...

There's more...

Using the Schema Registry

Getting ready

How to do it...

See also

Using the Kafka REST Proxy

Getting ready

How to do it...

There's more...

See also

Using Kafka Connect

Getting ready

How to do it...

There's more...

See also

Kafka Streams

Introduction

Setting up the project

Getting ready

How to do it...

How it works...

Running the streaming application

Getting ready

How to do it...

Managing Kafka

Introduction

Managing consumer groups

Getting ready

How to do it...

How it works...

Dumping log segments

Getting ready

How to do it...

How it works...

Importing ZooKeeper offsets

Getting ready

How to do it...

How it works...

Using the GetOffsetShell

Getting ready

How to do it...

How it works...

Using the JMX tool

Getting ready

How to do it...

How it works...

There's more...

Using the MirrorMaker tool

Getting ready

How to do it...

How it works...

There's more...

See also

Replaying log producer

Getting ready

How to do it...

How it works...

Using state change log merger

Getting ready

How to do it...

How it works...

Operating Kafka

Introduction

Adding or removing topics

Getting ready

How to do it...

How it works...

There's more...

See also

Modifying message topics

Getting ready

How to do it...

How it works...

There's more...

See also

Implementing a graceful shutdown

Getting ready

How to do it...

How it works...

Balancing leadership

Getting ready

How to do it...

How it works...

There's more...

Expanding clusters

Getting ready

How to do it...

How it works...

There's more...

Increasing the replication factor

Getting ready

How to do it...

How it works...

There's more...

Decommissioning brokers

Getting ready

How to do it...

How it works...

Checking the consumer position

Getting ready

How to do it...

How it works...

Monitoring and Security

Introduction

Monitoring server statistics

Getting ready

How to do it...

How it works...

See also

Monitoring producer statistics

Getting ready

How to do it...

How it works...

See also

Monitoring consumer statistics

Getting ready

How to do it...

How it works...

See also

Connecting with the help of Graphite

Getting ready

How to do it...

How it works...

See also

Monitoring with the help of Ganglia

Getting ready

How to do it...

How it works...

See also

Implementing authentication using SSL

How to do it...

See also

Implementing authentication using SASL/Kerberos

How to do it...

See also

Third-Party Tool Integration

Introduction

Moving data between Kafka nodes with Flume

Getting ready

How to do it...

How it works...

See also

Writing to an HDFS cluster with Gobblin

Getting ready

How to do it...

How it works...

See also

Moving data from Kafka to Elastic with Logstash

Getting ready

How to do it...

How it works...

There's more...

See also

Connecting Spark streams and Kafka

Getting ready

How to do it...

How it works...

There's more...

Ingesting data from Kafka to Storm

Getting ready

How to do it...

How it works...

There's more...

See also

Pushing data from Kafka to Elastic

Getting ready

How to do it...

How it works...

See also

Inserting data from Kafka to SolrCloud

Getting ready

How to do it...

How it works...

See also

Building a Kafka producer with Akka

Getting ready

How to do it...

How it works...

There's more...

Building a Kafka consumer with Akka

Getting ready

How to do it...

Storing data in Cassandra

Getting ready

How to do it...

How it works...

Running Kafka on Mesos

Getting ready

How to do it...

How it works...

There's more...

Reading Kafka with Apache Beam

Getting ready

How to do it...

How it works...

There's more...

See also

Writing to Kafka from Apache Beam

Getting ready

How to do it...

How it works...

There's more...

See also

Preface

Since 2011, Kafka's growth has exploded. More than one-third of all Fortune 500 companies use Apache Kafka. These companies include the top 10 travel companies, 7 of the top 10 banks, 8 of the top 10 insurance companies, and 9 of the top 10 telecom companies.

LinkedIn, Uber, Twitter, Spotify, PayPal, and Netflix all process data with Apache Kafka, each handling a four-comma total (1,000,000,000,000 — a trillion) of messages in a single day.

Nowadays, Apache Kafka is used for real-time data streaming, to collect data, or to do real-time data analyses. In other contexts, Kafka is used in microservice architectures to improve durability. It can also be used to feed events to Complex Event Processing (CEP) architectures and IoT automation systems.

Today we live in the middle of a war, a streaming war. Several competitors (Kafka Streams, Spark Streaming, Akka Streaming, Apache Flink, Apache Storm, Apache Beam, Amazon Kinesis, and so on) are immersed in a competition where there are many factors to evaluate, but the winner is mainly the one with the best performance.

Much of the current adoption of Apache Kafka is due to its ease of use. Kafka is easy to implement, easy to learn, and easy to maintain. Unlike most of its competitors, the learning curve is not so steep.

This book is practical; it is focused on hands-on recipes and doesn't stop at theoretical or architectural explanations about Apache Kafka. This book is a cookbook, a compendium of practical recipes that are solutions to everyday problems faced in the implementation of a streaming architecture with Apache Kafka. The first part of the book is about programming, and the second part is about Apache Kafka administration.

What this book covers

Chapter 1, Configuring Kafka, explains the basic recipes used to get started with Apache Kafka. It discusses how to install, configure, and run Kafka. It also discusses how to do basic operations with a Kafka broker.

Chapter 2, Kafka Clusters, covers how to make three types of clusters: single-node single-broker cluster, single-node multiple-broker cluster, and multiple-node multiple-broker cluster.

Chapter 3, Message Validation, covers one of the tasks of an enterprise service bus: data validation, that is, filtering some events from an input message stream. This chapter is about programming this validation.
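As a hedged sketch of the kind of validation Chapter 3 programs (the class name and required fields here are hypothetical illustrations, not the book's code), a validator might drop events from the stream when they lack required fields:

```java
import java.util.Map;

// Hypothetical sketch: decide whether an event may continue down the
// message stream. The required fields are illustrative assumptions.
class ValidationSketch {

    // An event is considered valid only if it carries both an "event"
    // type and a "timestamp" entry; anything else is filtered out.
    public static boolean isValidEvent(Map<String, String> event) {
        return event.containsKey("event") && event.containsKey("timestamp");
    }
}
```

A real recipe would read each message from a Kafka topic, apply a check like this, and write valid events to one topic and invalid ones to another.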

Chapter 4, Message Enrichment, details how the next task of an enterprise service bus is related to message enrichment, which means having an individual message, obtaining additional information, and incorporating it into the message stream.
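To suggest the flavor of enrichment before Chapter 4 develops it properly (the class, field names, and lookup table below are hypothetical, not the book's code), an enricher takes an individual message, obtains additional information, and incorporates it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: enrich a message with additional information
// (here, a country looked up from the client's IP address).
class EnrichmentSketch {

    public static Map<String, String> enrichWithLocation(
            Map<String, String> message, Map<String, String> ipToCountry) {
        Map<String, String> enriched = new HashMap<>(message);
        String country = ipToCountry.get(message.get("clientIp"));
        if (country != null) {
            enriched.put("country", country);  // incorporate the extra data
        }
        return enriched;  // original fields plus the enrichment
    }
}
```

In the book's recipes the extra information comes from external sources such as a geolocation database or a currency price feed rather than an in-memory map.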

Chapter 5, The Confluent Platform, shows how to operate and monitor a Kafka system with the Confluent Platform. It also explains how to use the Schema Registry, the Kafka REST Proxy, and Kafka Connect.

Chapter 6, Kafka Streams, explains how to obtain information about a group of messages (a message stream) and additional information such as aggregation and composition of messages using Kafka Streams.
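To give a feel for the aggregation Chapter 6 performs, here is a plain-Java sketch of counting messages per key over an in-memory list. This is only an illustration of the idea; Kafka Streams expresses it over topics with its own DSL, not with code like this:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of stream aggregation: count occurrences per key.
// Kafka Streams provides the equivalent over topics; this in-memory
// version only illustrates the concept.
class AggregationSketch {

    public static Map<String, Long> countByKey(List<String> keys) {
        Map<String, Long> counts = new HashMap<>();
        for (String key : keys) {
            counts.merge(key, 1L, Long::sum);  // increment this key's count
        }
        return counts;
    }
}
```

The difference in Kafka Streams is that the input is unbounded, so the counts are maintained continuously as new messages arrive rather than computed once.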

Chapter 7, Managing Kafka, talks about the command-line tools developed by the authors of Kafka to make a sysadmin team's life easier when debugging, testing, and running a Kafka cluster.

Chapter 8, Operating Kafka, explains the different operations that can be done on a Kafka cluster. These tools are not used daily, but they help the DevOps team manage Kafka clusters.

Chapter 9, Monitoring and Security, has a first half that talks about various statistics, how they are exposed, and how to monitor them with tools such as Graphite and Ganglia. Its second part is about security—in a nutshell, how to implement SSL authentication, SASL/Kerberos authentication, and SASL/plain authentication.

Chapter 10, Third-Party Tool Integration, talks about other real-time data processing tools and how to use Apache Kafka to make a data processing pipeline with them. Tools such as Hadoop, Flume, Gobblin, Elastic, Logstash, Spark, Storm, Solr, Akka, Cassandra, Mesos, and Beam are covered in this chapter.

What you need for this book

The reader should have some experience in programming with Java and some experience in Linux/Unix operating systems.

The minimum configuration needed to execute the recipes in this book is: an Intel® Core i3 processor, 4 GB of RAM, and 128 GB of disk space. It is recommended to use Linux or Mac OS; Windows is not fully supported.

Who this book is for

This book is for software developers, data architects, and data engineers looking for practical Kafka recipes.

The first half of this cookbook is about programming; this is introductory material for those with no previous knowledge of Apache Kafka. As the book progresses, the difficulty level increases.

The second half of this cookbook is about configuration; this is advanced material for those who want to improve existing Apache Kafka systems or want to better administer current Kafka deployments.

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it…, How it works…, There's more…, and See also). To give clear instructions on how to complete a recipe, we use these sections as follows.

Getting ready

This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

How to do it…

This section contains the steps required to follow the recipe.

How it works…

This section usually consists of a detailed explanation of what happened in the previous section.

There's more…

This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information for the recipe.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Finally, run the apt-get update to install the Confluent Platform."

A block of code is set as follows:

consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor

Any command-line input or output is written as follows:

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic SNSBTopic

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "From Kafka Connect, click on the SINKS button and then on the New sink button."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important for us, as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. You can download the code files by following these steps:

Log in or register to our website using your e-mail address and password.