ElasticSearch Cookbook - Alberto Paro - E-Book

Description

ElasticSearch is one of the most promising NoSQL technologies available and is built to provide a scalable search solution with built-in support for near real-time search and multi-tenancy.

This practical guide is a complete reference for using ElasticSearch and covers the whole ElasticSearch ecosystem. We get started by showing you how to choose the correct transport layer, communicate with the server, and create custom internal actions for tailored needs.

Starting with the basics of the ElasticSearch architecture and how to efficiently index, search, and execute analytics on it, you will learn how to extend ElasticSearch by scripting and monitoring its behaviour.

Step by step, this book will help you improve your ability to manage data: indexing with tailored mappings, searching, and executing analytics with facets. The topics explored in the book also cover how to integrate ElasticSearch with Python and Java applications.

This comprehensive guide will allow you to master storing, searching, and analyzing data with ElasticSearch.




Table of Contents

ElasticSearch Cookbook
Credits
About the Author
About the Reviewers
www.PacktPub.com
Support files, eBooks, discount offers and more
Why Subscribe?
Free Access for Packt account holders
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Getting Started
Introduction
Understanding node and cluster
Getting ready
How it works...
There's more...
See also
Understanding node services
Getting ready
How it works...
Managing your data
Getting ready
How it works...
There's more...
Best practice
See also
Understanding cluster, replication, and sharding
Getting ready
How it works...
Best practice
There's more…
How to solve the yellow status
Best practice
How to solve the red status
Best practice
See also
Communicating with ElasticSearch
Getting ready
How it works…
Using the HTTP protocol
Getting ready
How to do it…
How it works…
There's more…
Using the Native protocol
Getting ready
How to do it…
How it works...
There's more…
See also
Using the Thrift protocol
Getting ready
How to do it…
How it works…
There's more...
See also
2. Downloading and Setting Up ElasticSearch
Introduction
Downloading and installing ElasticSearch
Getting ready
How to do it...
How it works...
There's more...
Networking setup
Getting ready
How to do it...
How it works...
See also
Setting up a node
Getting ready
How to do it...
How it works...
There's more...
See also
Setting up ElasticSearch for Linux systems (advanced)
Getting ready
How to do it...
How it works...
There's more...
Setting up different node types (advanced)
Getting ready
How to do it...
How it works...
Installing a plugin
Getting ready
How to do it...
How it works...
There's more...
See also
Installing a plugin manually
Getting ready
How to do it...
How it works...
Removing a plugin
Getting ready
How to do it...
How it works...
Changing logging settings (advanced)
Getting ready
How to do it...
How it works...
3. Managing Mapping
Introduction
Using explicit mapping creation
Getting ready
How to do it...
How it works...
There's more...
Mapping base types
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping arrays
Getting ready
How to do it...
How it works...
Mapping an object
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping a document
Getting ready
How to do it...
How it works...
See also
Using dynamic templates in document mapping
Getting ready
How to do it...
How it works...
There's more...
See also
Managing nested objects
Getting ready
How to do it...
How it works...
There's more...
See also
Managing a child document
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping a multifield
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping a GeoPoint field
Getting ready
How to do it...
How it works...
There's more...
Mapping a GeoShape field
Getting ready
How to do it...
How it works...
See also
Mapping an IP field
Getting ready
How to do it...
How it works...
Mapping an attachment field
Getting ready
How to do it...
How it works...
There's more...
See also
Adding generic data to mapping
Getting ready
How to do it...
How it works...
Mapping different analyzers
Getting ready
How to do it...
How it works...
See also
4. Standard Operations
Introduction
Creating an index
Getting ready
How to do it...
How it works...
There's more…
See also
Deleting an index
Getting ready
How to do it...
How it works...
See also
Opening/closing an index
Getting ready
How to do it...
How it works...
See also
Putting a mapping in an index
Getting ready
How to do it...
How it works...
See also
Getting a mapping
Getting ready
How to do it...
How it works...
See also
Deleting a mapping
Getting ready
How to do it...
How it works...
See also
Refreshing an index
Getting ready
How to do it...
How it works...
See also
Flushing an index
Getting ready
How to do it...
How it works...
See also
Optimizing an index
Getting ready
How to do it...
How it works...
There's more…
See also
Checking if an index or type exists
Getting ready
How to do it...
How it works...
Managing index settings
Getting ready
How to do it...
How it works...
There's more…
See also
Using index aliases
Getting ready
How to do it...
How it works...
There's more…
Indexing a document
Getting ready
How to do it...
How it works...
There's more…
See also
Getting a document
Getting ready
How to do it...
How it works...
There's more…
See also
Deleting a document
Getting ready
How to do it...
How it works...
See also
Updating a document
Getting ready
How to do it...
How it works...
See also
Speeding up atomic operations (bulk)
Getting ready
How to do it...
How it works...
Speeding up GET
Getting ready
How to do it...
How it works...
See also...
5. Search, Queries, and Filters
Introduction
Executing a search
Getting ready
How to do it...
How it works...
There's more...
See also
Sorting a search
Getting ready
How to do it...
How it works...
There's more...
See also
Highlighting results
Getting ready
How to do it...
How it works...
See also
Executing a scan query
Getting ready
How to do it...
How it works...
See also
Suggesting a correct query
Getting ready
How to do it...
How it works...
See also
Counting
Getting ready
How to do it...
How it works...
See also
Deleting by query
Getting ready
How to do it...
How it works...
See also
Matching all the documents
Getting ready
How to do it...
How it works...
See also
Querying/filtering for term
Getting ready
How to do it...
How it works...
There's more…
See also
Querying/filtering for terms
Getting ready
How to do it...
How it works…
There's more…
See also
Using a prefix query/filter
Getting ready
How to do it...
How it works…
See also
Using a Boolean query/filter
Getting ready
How to do it...
How it works…
See also
Using a range query/filter
Getting ready
How to do it...
How it works...
Using span queries
Getting ready
How to do it...
How it works...
See also
Using the match query
Getting ready
How to do it...
How it works...
See also
Using the IDS query/filter
Getting ready
How to do it...
How it works...
See also
Using the has_child query/filter
Getting ready
How to do it...
How it works...
See also
Using the top_children query
Getting ready
How to do it...
How it works...
See also
Using the has_parent query/filter
Getting ready
How to do it...
How it works...
See also
Using a regexp query/filter
Getting ready
How to do it...
How it works...
See also
Using exists and missing filters
Getting ready
How to do it...
How it works...
Using and/or/not filters
Getting ready
How to do it...
How it works...
Using the geo_bounding_box filter
Getting ready
How to do it...
How it works...
See also
Using the geo_polygon filter
Getting ready
How to do it...
How it works...
See also
Using the geo_distance filter
Getting ready
How to do it...
How it works...
There's more...
See also
6. Facets
Introduction
Executing facets
Getting ready
How to do it...
How it works...
See also
Executing terms facets
Getting ready
How to do it...
How it works...
There's more...
See also
Executing range facets
Getting ready
How to do it...
How it works...
See also
Executing histogram facets
Getting ready
How to do it...
How it works...
There's more...
See also
Executing date histogram facets
Getting ready
How to do it...
How it works...
There's more...
Executing filter/query facets
Getting ready
How to do it...
How it works...
See also
Executing statistical facets
Getting ready
How to do it...
How it works...
There's more...
Executing term statistical facets
Getting ready
How to do it...
How it works...
See also
Executing geo distance facets
Getting ready
How to do it...
How it works...
There's more...
See also
7. Scripting
Introduction
Installing additional script plugins
Getting ready
How to do it...
How it works...
There's more...
Sorting using script
Getting ready
How to do it...
How it works...
There's more...
Computing return fields with scripting
Getting ready
How to do it...
How it works...
See also
Filtering a search via scripting
Getting ready
How to do it...
How it works...
There's more...
See also
Updating with scripting
Getting ready
How to do it...
How it works...
There's more...
8. Rivers
Introduction
Managing a river
Getting ready
How to do it...
How it works...
There's more…
See also
Using the CouchDB river
Getting ready
How to do it...
How it works...
There's more…
See also
Using the MongoDB river
Getting ready
How to do it...
How it works...
See also
Using the RabbitMQ river
Getting ready
How to do it...
How it works...
There's more…
See also
Using the JDBC river
Getting ready
How to do it...
How it works...
See also
Using the Twitter river
Getting ready
How to do it...
How it works...
There's more…
See also
9. Cluster and Nodes Monitoring
Introduction
Controlling cluster health via API
Getting ready
How to do it…
How it works…
There's more…
See also
Controlling cluster state via API
Getting ready
How to do it…
How it works…
There's more…
See also
Getting nodes information via API
Getting ready
How to do it…
How it works…
There's more…
See also
Getting node statistics via API
Getting ready
How to do it…
How it works…
There's more…
See also
Installing and using BigDesk
Getting ready
How to do it…
How it works…
There's more…
Installing and using ElasticSearch-head
Getting ready
How to do it…
How it works…
There's more…
Installing and using Sematext SPM
Getting ready
How to do it…
How it works…
See also
10. Java Integration
Introduction
Creating an HTTP client
Getting ready
How to do it...
How it works...
There's more...
See also
Creating a native client
Getting ready
How to do it...
How it works...
There's more...
See also
Managing indices with the native client
Getting ready
How to do it...
How it works...
See also
Managing mappings
Getting ready
How to do it...
How it works...
There's more...
See also
Managing documents
Getting ready
How to do it...
How it works...
See also
Managing bulk action
Getting ready
How to do it...
How it works...
See also
Creating a query
Getting ready
How to do it...
How it works...
There's more...
See also
Executing a standard search
Getting ready
How to do it...
How it works...
See also
Executing a facet search
Getting ready
How to do it...
How it works...
See also
Executing a scroll/scan search
Getting ready
How to do it...
How it works...
There's more...
See also
11. Python Integration
Introduction
Creating a client
Getting ready
How to do it...
How it works...
There's more…
See also
Managing indices
Getting ready
How to do it...
How it works...
See also
Managing mappings
Getting ready
How to do it...
How it works...
There's more…
See also
Managing documents
Getting ready
How to do it...
How it works...
There's more…
See also
Executing a standard search
Getting ready
How to do it...
How it works...
There's more…
See also
Executing a facet search
Getting ready
How to do it...
How it works...
There's more…
See also
12. Plugin Development
Introduction
Creating a site plugin
Getting ready
How to do it...
How it works...
There's more…
See also
Creating a simple plugin
Getting ready
How to do it...
How it works...
There's more...
Creating a REST plugin
Getting ready
How to do it...
How it works...
There's more…
See also
Creating a cluster action
Getting ready
How to do it...
How it works...
See also
Creating an analyzer plugin
Getting ready
How to do it...
How it works...
Creating a river plugin
Getting ready
How to do it...
How it works...
There's more…
See also
Index

ElasticSearch Cookbook

Copyright © 2013 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2013

Production Reference: 1171213

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78216-662-7

www.packtpub.com

Cover Image by John M. Quick (<[email protected]>)

Credits

Author

Alberto Paro

Reviewers

Jettro Coenradie

Henrik Lindström

Richard Louapre

Christian Pietsch

Acquisition Editor

Kevin Colaco

Lead Technical Editor

Arun Nadar

Technical Editors

Pragnesh Bilimoria

Iram Malik

Krishnaveni Haridas

Shruti Rawool

Project Coordinator

Amey Sawant

Proofreader

Bridget Braund

Indexer

Priya Subramani

Graphics

Yuvraj Mannari

Production Coordinator

Pooja Chiplunkar

Cover Work

Pooja Chiplunkar

About the Author

Alberto Paro is an engineer, a project manager, and a software developer. He currently works as the CTO at The Net Planet Europe and as a freelance software-engineering consultant on Big Data and NoSQL solutions. He loves studying emerging solutions and applications, mainly related to Big Data processing, NoSQL, natural language processing, and neural networks. He started programming in BASIC on a Sinclair Spectrum when he was eight years old, and over the years he has gained a lot of experience with different operating systems, applications, and programming languages.

In 2000, he completed a degree in computer science engineering at Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He then worked as a teaching assistant at the university for about a year. After coming into contact with The Net Planet company and falling in love with their ideas on innovation, he started working on knowledge management solutions and advanced data-mining products.

In his spare time, when he is not playing with his children, he likes working on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends for Django, targeting MongoDB (django-mongodb-engine). In 2010, he started using ElasticSearch to provide search capabilities for some Django e-commerce sites and developed PyES (a Pythonic client for ElasticSearch) and the initial part of the ElasticSearch MongoDB river.

I would like to thank my wife and my children for their support. I am indebted to my editors and reviewers for guiding this book to completion. Their professionalism, courtesy, good judgment, and passion for books are much appreciated.

About the Reviewers

Jettro Coenradie likes to try out new stuff. That is why he got his motorcycle driver's license. On a motorbike, you tend to explore different routes to get the best out of your bike and have fun while doing the things you need to do, such as going from A to B. When exploring new technologies, he also likes to explore new routes to find better and more interesting ways to accomplish his goal. Jettro rides an all-terrain bike; he does not like riding on the same ground over and over again. The same is valid for his technical interests: he knows about the backend (ElasticSearch, MongoDB, Spring Data, and Spring Integration) as well as the frontend (AngularJS, Sass, and Less) and mobile development (iOS and Sencha Touch).

Henrik Lindström has worked with enterprise search for the last 10 years, and for the last two years mainly with ElasticSearch. He was one of the founders of 200 OK AB and of the Truffler search service that ran on top of ElasticSearch. In 2013, 200 OK was acquired by EPiServer AB; he joined EPiServer at that time and is currently working on their cloud services, mainly the search service EPiServer Find. When Henrik isn't coding or spending time with his family, you might find him in the backcountry with skis on his feet in the winter or with a fly rod in his hand in the summer.

Richard Louapre is a technical consultant with 12 years of experience in content management. He is passionate about exploring new IT technologies, particularly in the fields of NoSQL, search engines, and MVC JavaScript frameworks. He applied those concepts in the open source MongoDB River Plugin for ElasticSearch (https://github.com/richardwilly98/elasticsearch-river-mongodb).

Christian Pietsch is a computational linguist with a degree from Saarland University, Germany. His work experience has been mostly research-related. At the Open University, England, he worked as a Java programmer within the Natural Language Generation group. As a Junior Researcher at the Center of Excellence in Cognitive Interaction Technology (CITEC), Germany, he analyzed linguistic data collections using Python and R, and even tried to build a human-like virtual receptionist with his colleagues.

Currently, at the Library Technology and Knowledge Management (LibTec) department of Bielefeld University Library, Germany, his duties include handling bibliographic metadata and research data. For this, his preferred toolkit is the open source modern Perl framework Catmandu that among other things provides easy-to-use wrappers for document stores and search engines such as ElasticSearch. Refer to http://librecat.org/ for more information about Catmandu.

www.PacktPub.com

Support files, eBooks, discount offers and more

You might want to visit www.PacktPub.com for support files and downloads related to your book.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

http://PacktLib.PacktPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books. 

Why Subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.

To Giulia and Andrea, my extraordinary children.

Preface

One of the main requirements of today's applications is search capability. On the market we can find a lot of solutions that answer this need, both in the commercial and in the open source world. One of the most frequently used libraries for searching is Apache Lucene. This library is the basis of a large number of search solutions such as Apache Solr, IndexTank, and ElasticSearch.

ElasticSearch is one of the younger solutions, written with the cloud and distributed computing in mind. Its main author, Shay Banon, famous for having developed Compass (http://www.compass-project.org), released the first version of ElasticSearch in March 2010.

While the main scope of ElasticSearch is to be a search engine, it also provides a lot of features that allow it to be used as a data store and as an analytics engine via facets.

ElasticSearch contains a lot of innovative features: it is JSON and REST based, natively distributed with a map/reduce approach, easy to set up, and extensible with plugins. In this book, we will study these features in depth, along with many others available in ElasticSearch.

Before ElasticSearch, only Apache Solr was able to provide some of these functionalities, but it was not designed for the cloud and did not use a JSON REST API. This situation changed a bit with the release of SolrCloud in 2012. For users who want a deeper comparison between these two products, I suggest reading the posts by Rafal Kuc available at http://blog.sematext.com/2012/08/23/solr-vs-elasticsearch-part-1-overview/.

ElasticSearch is also a product in continuous evolution, and new functionalities are released both by the ElasticSearch company (the company founded by Shay Banon to provide commercial support for ElasticSearch) and by ElasticSearch users as plugins (mainly available on GitHub).

In my opinion, ElasticSearch is probably one of the most powerful and easy-to-use search solutions on the market. In writing this book and these recipes, the book reviewers and I have tried to pass on our knowledge, our passion, and the best practices for managing it well.

What this book covers

Chapter 1, Getting Started, gives the reader an overview of the basic concepts of ElasticSearch and the ways to communicate with it.

Chapter 2, Downloading and Setting Up ElasticSearch, covers the basic steps to start using ElasticSearch, from a simple install to cloud installations.

Chapter 3, Managing Mapping, covers the correct definition of the data fields to improve both indexing and searching quality.

Chapter 4, Standard Operations, teaches the most common actions that are required to ingest data in ElasticSearch and to manage it.

Chapter 5, Search, Queries, and Filters, talks about Search DSL—the core of the search functionalities of ElasticSearch. It is the only way to execute queries in ElasticSearch.

Chapter 6, Facets, covers another capability of ElasticSearch—the possibility of executing analytics on search results to both improve the user experience and drill down into the information contained in ElasticSearch.

Chapter 7, Scripting, shows how to customize ElasticSearch with scripting in different languages.

Chapter 8, Rivers, covers rivers, which extend ElasticSearch with the ability to pull data from different sources such as databases, NoSQL solutions, or data streams.

Chapter 9, Cluster and Nodes Monitoring, shows how to analyze the behavior of a cluster/node to understand common pitfalls.

Chapter 10, Java Integration, describes how to integrate ElasticSearch into Java applications using both the REST and Native protocols.

Chapter 11, Python Integration, covers the usage of the official ElasticSearch Python client and the Pythonic PyES library.

Chapter 12, Plugin Development, describes how to create the different types of plugins: site and native. Some examples show the plugin skeletons, the setup process, and their building.

What you need for this book

For this book you will need a computer, of course. In terms of software, you don't have to worry: all the components we use are open source and available for every platform.

For all the REST examples, the cURL tool (http://curl.haxx.se/) is used to simulate a command from the command line. It comes preinstalled on Linux and Mac OS X operating systems. For Windows, it can be downloaded from its site and placed in a path that can be called from the command line.

For Chapter 10, Java Integration, and Chapter 12, Plugin Development, the Maven build tool (http://maven.apache.org/) is required; it is a standard for managing builds, packaging, and deployment in Java. It is natively supported in Java IDEs such as Eclipse and IntelliJ IDEA.

Chapter 11, Python Integration, requires a Python interpreter to be installed. By default it is available on Linux and Mac OS X. For Windows, it can be downloaded from the official Python site (http://www.python.org). The current examples use version 2.x.

Who this book is for

This book is for developers who want to start using ElasticSearch and, at the same time, improve their ElasticSearch knowledge. The book covers all aspects of using ElasticSearch and provides solutions and hints for everyday usage. The recipes are kept simple in complexity so that the reader can focus on the ElasticSearch aspect being discussed and easily memorize the ElasticSearch functionalities.

The later chapters, which discuss ElasticSearch integration in Java and Python, show the reader how to integrate the power of ElasticSearch into their applications.

The last chapter talks about advanced usage of ElasticSearch and extending its core, so some solid Java know-how is required.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Open the config/elasticsearch.yml file with an editor of your choice."

A block of code is set as follows:

path.conf: /opt/data/es/conf
path.data: /opt/data/es/data1,/opt2/data/data2
path.work: /opt/data/work
path.logs: /opt/data/logs
path.plugins: /opt/data/plugins

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

{
  "order": {
    "_uid": {"store": "yes"},
    "_id": {"path": "order_id"},
    "properties": {
      "order_id": {
        "type": "string",
        "store": "yes",
        "index": "not_analyzed"
      },

Any command-line input or output is written as follows:

bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/1.9.0

New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "The Any Request [+] tab allows executing a custom query. On the left-hand side there are the following options:".

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title via the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at <[email protected]> if you are having a problem with any aspect of the book, and we will do our best to address it.

Chapter 1. Getting Started

In this chapter, we will cover the following topics:

Understanding node and cluster
Understanding node services
Managing your data
Understanding cluster, replication, and sharding
Communicating with ElasticSearch
Using the HTTP protocol
Using the Native protocol
Using the Thrift protocol

Introduction

In order to efficiently use ElasticSearch, it is very important to understand how it works. The goal of this chapter is to give the reader an overview of the basic concepts of ElasticSearch such as node, index, shard, type, records, and fields.

ElasticSearch can be used both as a search engine and as a data store. A brief description of the ElasticSearch logic helps the user to improve the performance and quality, and decide when and how to invest in infrastructure to improve scalability and availability. Some details about data replications and base node communication processes are also explained. At the end of this chapter the protocols used to manage ElasticSearch are also discussed.

Understanding node and cluster

Every instance of ElasticSearch is called a node. Several nodes are grouped into a cluster. This is the basis of the cloud nature of ElasticSearch.

Getting ready

To better understand the upcoming sections, some knowledge of the basic concepts of nodes and clusters is required.

How it works...

One or more ElasticSearch nodes can be set up on a physical or a virtual server, depending on available resources such as RAM, CPUs, and disk space. A default node allows data to be stored in it and processes requests and responses. (In Chapter 2, Downloading and Setting Up ElasticSearch, we'll see details on how to set up different nodes and cluster topologies.) When a node is started, several actions take place during its startup:

- The configuration is read from the environment variables and from the elasticsearch.yml configuration file
- A node name is set by the config file or chosen from a list of built-in random names
- Internally, the ElasticSearch engine initializes all the modules and plugins that are available in the current installation

After node startup, the node searches for other cluster members and checks its indices and shards status. In order to join two or more nodes in a cluster, the following rules must be matched:

- The version of ElasticSearch must be the same (0.20, 0.90, and so on), otherwise the join is rejected
- The cluster name must be the same
- The network must be configured to support multicast (the default) and the nodes must be able to communicate with each other

Refer to the Networking setup recipe in the next chapter.

A common approach in cluster management is to have a master node, which is the main reference for all cluster-level actions, and the other nodes, called secondary or slave nodes, which replicate the master's data and actions. All the update actions are first committed in the master node and then replicated to the secondary ones.

In a cluster with multiple nodes, if a master node dies, a secondary one is elected to be the new master; this approach allows automatic failover to be set up in an ElasticSearch cluster.

There's more...

There are two important behaviors in an ElasticSearch node, namely the arbiter and the data container.

The arbiter nodes are able to process the REST responses and all the other search operations. During every action execution, ElasticSearch generally follows a MapReduce approach: the arbiter is responsible for distributing the action to the underlying shards (map) and for collecting/aggregating the shard results (reduce) into a final response. Arbiters may use a huge amount of RAM due to operations such as facets, collecting hits, and caching (for example, scan queries).

Data nodes store data. They contain the index shards that store the indexed documents as Lucene indices. Every standard node is both an arbiter and a data container.

In big cluster architectures, having some nodes as simple arbiters with a lot of RAM with no data reduces the resources required by data nodes and improves performance in search using the local memory cache of arbiters.

See also

Setting up a node and Setting up different node types (advanced) recipes in the next chapter

Understanding node services

When a node is running, many services are managed by its instance. Services provide additional functionalities to a node, covering different behaviors such as networking, indexing, analyzing, and so on.

Getting ready

Every ElasticSearch server that is running provides services.

How it works...

ElasticSearch natively provides a large set of functionalities that can be extended with additional plugins. During a node startup, a lot of required services are automatically started. The most important are as follows:

- Cluster services: These manage the cluster state plus intra-node communication and synchronization
- Indexing service: This manages all the index operations, initializing all active indices and shards
- Mapping service: This manages the document types stored in the cluster (we'll discuss mapping in Chapter 3, Managing Mapping)
- Network services: Such as the HTTP REST service (default on port 9200), the internal ES protocol (on port 9300), and the Thrift server (on port 9500, if the thrift plugin is installed)
- Plugin service: Discussed in Chapter 2, Downloading and Setting Up ElasticSearch, for installation and Chapter 12, Plugin Development, for detailed usage
- River service: Covered in Chapter 8, Rivers
- Language scripting services: These allow adding new language scripting support to ElasticSearch

Note

Throughout the book, we'll see recipes that interact with ElasticSearch services. Every base functionality or extended functionality is managed in ElasticSearch as a service.

Managing your data

Whether you are using ElasticSearch as a search engine or as a distributed data store, it's important to understand how ElasticSearch stores and manages your data.

Getting ready

To work with ElasticSearch data, a user must know the basic concepts of data management and of JSON, which is the "lingua franca" for working with ElasticSearch data and services.

How it works...

Our main data container is called index (plural indices) and it can be considered as a database in the traditional SQL world. In an index, the data is grouped in data types called mappings in ElasticSearch. A mapping describes how the records are composed (called fields).

Every record that must be stored in ElasticSearch must be a JSON object.

Natively, ElasticSearch is a schema-less datastore. When you insert records, it processes them, splits them into fields, and updates the schema to manage the inserted data.
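For example, a record could be the following JSON object; the field names and values here are hypothetical, chosen only to show how fields are derived at insert time:

```python
import json

# A hypothetical order record: ElasticSearch accepts any JSON object
# and derives the field types (string, number, date, ...) on first insert.
order = {
    "customer": "John Doe",          # becomes a string field
    "total": 130.50,                 # becomes a numeric field
    "date": "2013-11-05T12:00:00",   # becomes a date field
    "items": ["book", "pen"],        # arrays are supported natively
}

# This is the body that would be sent to ElasticSearch when indexing the record.
body = json.dumps(order)
print(body)
```

Every top-level key of the object becomes a field of the mapping; nested objects and arrays are handled natively without any extra configuration.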

To manage huge volumes of records, ElasticSearch uses the common approach to split an index into many shards so that they can be spread on several nodes. The shard management is transparent in usage—all the common record operations are managed automatically in the ElasticSearch application layer.

Every record is stored in only one shard. The sharding algorithm is based on record ID, so many operations that require loading and changing of records can be achieved without hitting all the shards.

The following schema compares ElasticSearch structure with SQL and MongoDB ones:

| ElasticSearch        | SQL             | MongoDB              |
|----------------------|-----------------|----------------------|
| Index (Indices)      | Database        | Database             |
| Shard                | Shard           | Shard                |
| Mapping/Type         | Table           | Collection           |
| Field                | Field           | Field                |
| Record (JSON object) | Record (Tuples) | Record (BSON object) |

There's more...

Internally, ElasticSearch enforces rigid rules on how operations are executed, to guarantee safe handling of indices, mappings, and records. In ElasticSearch, the operations are divided as follows:

- Cluster operations: At the cluster level, all write operations are locked; they are applied first to the master node and then to the secondary ones. The read operations are typically broadcasted.
- Index management operations: These operations follow the cluster pattern.
- Record operations: These operations are executed on single documents at the shard level.

When a record is saved in ElasticSearch, the destination shard is chosen based on the following factors:

- The ID (unique identifier) of the record. If the ID is missing, it is autogenerated by ElasticSearch.
- The routing or parent parameters (covered while learning the parent/child mapping), if defined; the correct shard is then chosen by the hash of these parameters.
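As a simplified illustration (using Python's built-in hash as a stand-in for ElasticSearch's actual hash function), the shard selection can be sketched as a hash-modulo computation:

```python
def select_shard(doc_id, num_shards, routing=None):
    """Pick the destination shard for a record.

    Simplified sketch: the routing value is hashed if given,
    otherwise the record ID, and taken modulo the number of shards.
    """
    key = routing if routing is not None else doc_id
    # Python's built-in hash stands in for ElasticSearch's internal hash.
    return abs(hash(key)) % num_shards

# All records sharing the same routing value land on the same shard,
# so an operation restricted to that routing value hits only one shard.
shard_a = select_shard("doc-1", 5, routing="customer-42")
shard_b = select_shard("doc-2", 5, routing="customer-42")
print(shard_a == shard_b)
```

This is why operations that load or change a record by ID can be served without hitting all the shards: the destination shard is fully determined by the key.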

Splitting an index into shards allows you to store your data in different nodes, because ElasticSearch tries to do shard balancing.

Every shard can contain up to 2^32 records (about 4.2 billion records), so the real limit to shard size is its storage size.

Shards contain your data and during search process all the shards are used to calculate and retrieve results. ElasticSearch performance in big data scales horizontally with the number of shards.

All native records operations (such as index, search, update, and delete) are managed in shards.

The shard management is completely transparent to the user. Only advanced users tend to change the default shard routing and management to cover custom scenarios. A common custom scenario is the requirement to put a customer's data in the same shard to speed up his/her operations (search/index/analytics).

Best practice

It's best practice to avoid shards that are too big (over 10 GB), as they lead to poor indexing performance due to the continuous merging and resizing of index segments.

It's also bad practice to oversize the number of shards: distributed search works like MapReduce, so too many shards degrade search performance, and a huge number of empty shards in an index only consumes memory.

See also

Shard on Wikipedia http://en.wikipedia.org/wiki/Shard_(database_architecture)

Understanding cluster, replication, and sharding

Related to shard management, there is the key concept of replication and cluster status.

Getting ready

You need one or more nodes running to have a cluster. To test an effective cluster you need at least two nodes (they can be on the same machine).

How it works...

An index can have one or more replicas—the shards are called primary if they are part of the master index and secondary if they are part of replicas.

To maintain consistency in write operations the following workflow is executed:

1. The write is first executed in the primary shard.
2. If the primary write succeeds, it is propagated simultaneously to all the secondary shards.
3. If a primary shard dies, a secondary one is elected as primary (if available) and the flow is re-executed.
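The write workflow above can be sketched as follows (a toy in-memory simulation for illustration, not real ElasticSearch code):

```python
class Shard:
    """A toy shard: just a named dictionary of documents."""
    def __init__(self, name):
        self.name = name
        self.docs = {}

    def write(self, doc_id, doc):
        self.docs[doc_id] = doc
        return True

def replicated_write(primary, replicas, doc_id, doc):
    # Step 1: the write is executed on the primary shard first.
    if not primary.write(doc_id, doc):
        return False
    # Step 2: on success, it is propagated to all the secondary shards.
    for replica in replicas:
        replica.write(doc_id, doc)
    return True

primary = Shard("p0")
replicas = [Shard("r0"), Shard("r1")]
replicated_write(primary, replicas, "1", {"title": "test"})
print(all("1" in s.docs for s in [primary] + replicas))
```

After a successful write, every copy of the shard holds the document, which is what allows a secondary shard to be promoted to primary without data loss.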

During search operations, a valid set of shards is chosen randomly between primary and secondary ones to improve performance.

The following figure shows an example of a possible shard configuration:

Best practice

In order to prevent data loss and to have High Availability, it's good to have at least one replica so that your system can survive a node failure without downtime and without loss of data.

There's more…

Related to the concept of replication is the cluster health indicator, which reports the status of your cluster.

It can be in one of three states:

- Green: Everything is ok.
- Yellow: Something is missing, but you can work.
- Red: "Houston, we have a problem." Some primary shards are missing.

How to solve the yellow status

A yellow status is mainly due to shards that are not allocated. If your cluster is in recovery status, just wait, provided there is enough space on the nodes for your shards.

If your cluster is still in the yellow state even after recovery, it means you don't have enough nodes to hold your replicas, so you can either reduce the number of replicas or add the required number of nodes.
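You can inspect the health status with the _cluster/health REST endpoint. A minimal sketch using only the Python standard library (it assumes a node listening on localhost:9200; the sample response below is illustrative):

```python
import json
from urllib.request import urlopen

def parse_health(raw):
    """Extract the fields relevant to diagnosing a yellow/red status."""
    health = json.loads(raw)
    return {
        "status": health["status"],
        "unassigned_shards": health.get("unassigned_shards", 0),
    }

def check_cluster(host="localhost", port=9200):
    # Assumes a running node on the given host/port.
    with urlopen("http://%s:%d/_cluster/health" % (host, port)) as resp:
        return parse_health(resp.read())

# Example of a response from a cluster with unallocated replica shards:
sample = '{"cluster_name":"elasticsearch","status":"yellow","unassigned_shards":5}'
print(parse_health(sample))
```

A non-zero unassigned_shards count together with a yellow status typically points at replicas that have no node to live on.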

Best practice

The total number of nodes must be at least the number of replicas plus one; otherwise some replica shards remain unassigned.

How to solve the red status

When you have lost data (that is, one or more shard is missing), you need to try restoring the node(s) that are missing. If your nodes restart and the system goes back to yellow or green status you are safe. Otherwise, you have lost data and your cluster is not usable. In this case, delete the index/indices and restore them from backup (if you have it) or from other sources.

Best practice

To prevent data loss, I suggest always having at least two nodes and a replica count of 1.

Tip

Having one or more replicas on different nodes on different machines allows you to have a live backup of your data, always updated.

See also

Replica and shard management in this chapter.

Communicating with ElasticSearch

You can communicate with your ElasticSearch server with several protocols. In this recipe we will look at some main protocols.

Getting ready

You need a working ElasticSearch cluster.

How it works…

ElasticSearch is designed to be used as a RESTful server, so the main protocol is HTTP, usually on port 9200 and above. It also supports other protocols, such as the native and Thrift ones. Many others, such as the memcached one, are available as extension plugins, but they are seldom used.

Every protocol has strong and weak points; it's important to choose the correct one depending on the kind of application you are developing. If you are in doubt, choose the HTTP protocol layer, which is the most standard and easiest to use.
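As a sketch of the HTTP protocol in action, the following uses only the Python standard library to index a document with a PUT request (it assumes a node on localhost:9200; the 'library' index, 'book' type, and document are hypothetical):

```python
import json
from urllib.request import Request, urlopen

def index_document(index, doc_type, doc_id, doc, host="localhost", port=9200):
    """Index a document over HTTP: PUT /{index}/{type}/{id}.

    Assumes a running ElasticSearch node on the given host/port.
    """
    url = "http://%s:%d/%s/%s/%s" % (host, port, index, doc_type, doc_id)
    req = Request(url, data=json.dumps(doc).encode("utf-8"), method="PUT")
    with urlopen(req) as resp:
        return json.loads(resp.read())

# The URL and body that would be sent for a hypothetical record:
url = "http://localhost:9200/library/book/1"
body = json.dumps({"title": "ElasticSearch Cookbook"})
print(url)
print(body)
```

Because the API is plain HTTP plus JSON, the same call works from curl, a browser plugin, or any language with an HTTP client, which is why HTTP is the safest default choice.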

Choosing the right protocol depends on several factors, mainly architectural and performance related. The following table summarizes the advantages and disadvantages of each protocol. If you use the official clients to communicate with ElasticSearch, switching from one protocol to another is generally a simple setting in the client initialization:

| Protocol | Advantages | Disadvantages | Type |
|----------|------------|---------------|------|
| HTTP | The most often used. API safe and generally compatible across different ES versions. Suggested. JSON. | HTTP overhead. | Text |
| Native | Fast network layer. Programmatic. Best for massive indexing operations. | API changes can break applications. Requires the same version as the ES server. | Binary |
| Thrift | Same as HTTP. | Requires the thrift plugin. | Binary |

Chapter 2. Downloading and Setting Up ElasticSearch

In this chapter we will cover the following topics:

- Downloading and installing ElasticSearch
- Networking setup
- Setting up a node
- Setting up ElasticSearch for Linux systems (advanced)
- Setting up different node types (advanced)
- Installing a plugin
- Installing a plugin manually
- Removing a plugin
- Changing logging settings (advanced)

Introduction

There are different options in installing ElasticSearch and setting up a working environment for development and production.

This chapter explains the installation process and the configuration from a single developer machine to a big cluster, giving hints on how to improve the performance and skip misconfiguration errors.

The setup step is very important, because a bad configuration can lead to bad results and poor performance, and can even kill your server.

In this chapter, the management of ElasticSearch plugins is also discussed: installing, configuring, updating, and removing plugins.

Downloading and installing ElasticSearch

ElasticSearch has an active community and the release cycles are very fast.

Because ElasticSearch depends on many common Java libraries (Lucene, Guice, and Jackson are the most famous ones), the ElasticSearch community tries to keep them updated and fix bugs that are discovered in them and in ElasticSearch core.

If possible, the best practice is to use the latest available release (usually the most stable one).

Getting ready

A supported operating system (Linux/Mac OS X/Windows) with Java JVM 1.6 or above installed is required. A web browser is required to download the ElasticSearch binary release.

How to do it...

For downloading and installing an ElasticSearch server, we will perform the steps given as follows:

Download ElasticSearch from the Web.

The latest version is always downloadable from the web address http://www.elasticsearch.org/download/.

There are versions available for different operating systems:

- elasticsearch-{version-number}.zip: This is for both Linux/Mac OS X and Windows operating systems
- elasticsearch-{version-number}.tar.gz: This is for Linux/Mac
- elasticsearch-{version-number}.deb: This is for Debian-based Linux distributions (this also covers the Ubuntu family)

These packages contain everything to start ElasticSearch.

At the time of writing this book, the latest and most stable version of ElasticSearch was 0.90.7. To check out whether this is the latest available or not, please visit http://www.elasticsearch.org/download/.

Extract the binary content.

After downloading the correct release for your platform, the installation consists of expanding the archive in a working directory.

Choose a working directory that is free of charset problems and doesn't have a long path, to prevent problems when ElasticSearch creates its directories to store the index data.

On Windows, a good directory could be c:\es; on Unix and Mac OS X, /opt/es.

To run ElasticSearch, you need a Java Virtual Machine 1.6 or above installed. For better performance, I suggest you use Sun/Oracle 1.7 version.

We start ElasticSearch to check if everything is working.

To start your ElasticSearch server, just go in the install directory and type:

# bin/elasticsearch -f (for Linux and Mac OS X)

or

# bin\elasticsearch.bat -f (for Windows)

Now your server should start as shown in the following screenshot:

How it works...

The ElasticSearch package contains three directories:

- bin: This contains the scripts to start and manage ElasticSearch. The most important ones are:
  - elasticsearch(.bat): the main script to start ElasticSearch
  - plugin(.bat): a script to manage plugins
- config: This contains the ElasticSearch configuration files. The most important ones are:
  - elasticsearch.yml: the main config file for ElasticSearch
  - logging.yml: the logging config file
- lib: This contains all the libraries required to run ElasticSearch

There's more...

During ElasticSearch startup a lot of events happen:

- A node name is chosen automatically (that is, Akenaten in the example) if not provided in elasticsearch.yml.
- A node name hash is generated for this node (that is, whqVp_4zQGCgMvJ1CXhcWQ).
- If there are plugins (internal or site ones), they are loaded. In the previous example there are no plugins.
- If not configured otherwise, ElasticSearch automatically binds two ports on all available addresses:
  - 9300: internal, intra-node communication, used for discovering other nodes
  - 9200: HTTP REST API port
- After starting, if indices are available, they are checked and put in online mode to be used.

There are more events which are fired during ElasticSearch startup. We'll see them in detail in other recipes.

Networking setup

Correctly setting up networking is very important for your node and cluster.

As there are many different installation scenarios and networking issues, in this recipe we will cover two kinds of networking setup:

- Standard installation with a working autodiscovery configuration
- Forced IP configuration, used when autodiscovery cannot be used

Getting ready

You need a working ElasticSearch installation and to know your current networking configuration (that is, IP).

How to do it...

For configuring networking, we will perform the steps as follows:

Open the ElasticSearch configuration file with your favorite text editor.

With the standard ElasticSearch configuration file (config/elasticsearch.yml), your node is configured to bind on all your machine's interfaces and performs autodiscovery by broadcasting events; that is, it sends "signals" to every machine on the current LAN and waits for a response. If a node responds, they can join to form a cluster.
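As a sketch, an elasticsearch.yml fragment for a node that joins a cluster via multicast autodiscovery could look like the following (the cluster and node names are hypothetical; multicast is already the default and is shown only for clarity):

```yaml
# config/elasticsearch.yml (fragment)
cluster.name: es-cookbook            # must be identical on every node that joins
node.name: "Node1"                   # optional; a random built-in name is used if omitted
discovery.zen.ping.multicast.enabled: true   # multicast autodiscovery (the default)
```

Remember that nodes only join a cluster when the cluster name and ElasticSearch version match, so this fragment must be consistent across all machines.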