ElasticSearch is one of the most promising NoSQL technologies available and is built to provide a scalable search solution with built-in support for near real-time search and multi-tenancy.
This practical guide is a complete reference for using ElasticSearch and covers 360 degrees of the ElasticSearch ecosystem. We will get started by showing you how to choose the correct transport layer, communicate with the server, and create custom internal actions to meet tailored needs.
Starting with the basics of the ElasticSearch architecture and how to efficiently index, search, and execute analytics on it, you will learn how to extend ElasticSearch by scripting and monitoring its behaviour.
Step by step, this book will help you improve your ability to manage indexed data with more tailored mappings, along with searching and executing analytics with facets. The topics explored in the book also cover how to integrate ElasticSearch with Python and Java applications.
This comprehensive guide will allow you to master storing, searching, and analyzing data with ElasticSearch.
Copyright © 2013 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: December 2013
Production Reference: 1171213
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78216-662-7
www.packtpub.com
Cover Image by John M. Quick (<[email protected]>)
Author
Alberto Paro
Reviewers
Jettro Coenradie
Henrik Lindström
Richard Louapre
Christian Pietsch
Acquisition Editor
Kevin Colaco
Lead Technical Editor
Arun Nadar
Technical Editors
Pragnesh Bilimoria
Iram Malik
Krishnaveni Haridas
Shruti Rawool
Project Coordinator
Amey Sawant
Proofreader
Bridget Braund
Indexer
Priya Subramani
Graphics
Yuvraj Mannari
Production Coordinator
Pooja Chiplunkar
Cover Work
Pooja Chiplunkar
Alberto Paro is an engineer, a project manager, and a software developer. He currently works as CTO at The Net Planet Europe and as a freelance software engineering consultant on Big Data and NoSQL solutions. He loves studying emerging solutions and applications, mainly related to Big Data processing, NoSQL, natural language processing, and neural networks. He started programming in BASIC on a Sinclair Spectrum when he was eight years old, and over the years he has gained a lot of experience with different operating systems, applications, and programming languages.
In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multi-user and multi-device web applications. He then worked as a teaching assistant at the university for about a year. After coming into contact with The Net Planet company and falling in love with its innovative ideas, he started working on knowledge management solutions and advanced data-mining products.
In his spare time, when he is not playing with his children, he likes working on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends for Django, targeting MongoDB (django-mongodb-engine). In 2010, he started using ElasticSearch to provide search capabilities for some Django e-commerce sites and developed PyES (a Pythonic client for ElasticSearch) as well as the initial part of the ElasticSearch MongoDB river.
I would like to thank my wife and my children for their support. I am indebted to my editors and reviewers for guiding this book to completion. Their professionalism, courtesy, good judgment, and passion for books are much appreciated.
Jettro Coenradie likes to try out new stuff; that is why he got his motorcycle driver's license. On a motorbike, you tend to explore different routes to get the best out of your bike and have fun while doing the things you need to do, such as going from A to B. When exploring new technologies, he also likes to explore new routes to find better and more interesting ways to accomplish his goal. Jettro rides an all-terrain bike; he does not like riding over the same ground again and again. The same is valid for his technical interests: he knows about the backend (ElasticSearch, MongoDB, Spring Data, and Spring Integration) as well as the frontend (AngularJS, Sass, and Less) and mobile development (iOS and Sencha Touch).
Henrik Lindström has worked with enterprise search for the last 10 years, and for the last two years mainly with ElasticSearch. He was one of the founders of 200 OK AB and the Truffler search service that ran on top of ElasticSearch. In 2013, 200 OK was acquired by EPiServer AB; he joined EPiServer at that time and is currently working on their cloud services, mainly the search service EPiServer Find. When Henrik isn't coding or spending time with his family, you might find him in the backcountry with skis on his feet during the winter, or with a fly rod in his hand in the summer.
Richard Louapre is a technical consultant with 12 years of experience in content management. He is passionate about exploring new IT technologies, particularly in the fields of NoSQL, search engines, and MVC JavaScript frameworks. He applied these concepts in the open source MongoDB river plugin for ElasticSearch (https://github.com/richardwilly98/elasticsearch-river-mongodb).
Christian Pietsch is a computational linguist with a degree from Saarland University, Germany. His work experience has been mostly research-related. At the Open University, England, he worked as a Java programmer within the Natural Language Generation group. As a Junior Researcher at the Center of Excellence in Cognitive Interaction Technology (CITEC), Germany, he analyzed linguistic data collections using Python and R, and even tried to build a human-like virtual receptionist with his colleagues.
Currently, at the Library Technology and Knowledge Management (LibTec) department of Bielefeld University Library, Germany, his duties include handling bibliographic metadata and research data. For this, his preferred toolkit is the open source modern Perl framework Catmandu that among other things provides easy-to-use wrappers for document stores and search engines such as ElasticSearch. Refer to http://librecat.org/ for more information about Catmandu.
You might want to visit www.PacktPub.com for support files and downloads related to your book.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
http://PacktLib.PacktPub.com
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books.
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.
To Giulia and Andrea, my extraordinary children.
One of the main requirements of today's applications is search capability. The market offers a lot of solutions to answer this need, both in the commercial and in the open source world. One of the most frequently used libraries for searching is Apache Lucene. This library is the base of a large number of search solutions such as Apache Solr, IndexTank, and ElasticSearch.
ElasticSearch is one of the younger solutions, written with the cloud and distributed computing in mind. Its main author, Shay Banon, famous for having developed Compass (http://www.compass-project.org), released the first version of ElasticSearch in March 2010.
While the main scope of ElasticSearch is to be a search engine, it also provides a lot of features that allow it to be used as a data store and as an analytic engine via facets.
ElasticSearch contains a lot of innovative features: a JSON REST-based API, native distribution following a map/reduce approach, easy setup, and extensibility via plugins. In this book, we will study these features and many others available in ElasticSearch in depth.
Before ElasticSearch, only Apache Solr was able to provide some of these functionalities, but it was not designed for the cloud and did not use a JSON REST API. This situation has changed somewhat with the release of SolrCloud in 2012. For users who want a deeper comparison between these two products, I suggest reading the posts by Rafal Kuc available at http://blog.sematext.com/2012/08/23/solr-vs-elasticsearch-part-1-overview/.
ElasticSearch is also a product in continuous evolution, and new functionalities are released both by the ElasticSearch company (the company founded by Shay Banon to provide commercial support for ElasticSearch) and by ElasticSearch users as plugins (mainly available on GitHub).
In my opinion, ElasticSearch is probably one of the most powerful and easy-to-use search solutions on the market. In writing this book and these recipes, the book reviewers and I have tried to pass on our knowledge, our passion, and the best practices for managing it well.
Chapter 1, Getting Started, gives the reader an overview of the basic concepts of ElasticSearch and the ways to communicate with it.
Chapter 2, Downloading and Setting Up ElasticSearch, covers the basic steps to start using ElasticSearch, from a simple install to cloud-based ones.
Chapter 3, Managing Mapping, covers the correct definition of the data fields to improve both indexing and searching quality.
Chapter 4, Standard Operations, teaches the most common actions that are required to ingest data in ElasticSearch and to manage it.
Chapter 5, Search, Queries, and Filters, talks about Search DSL—the core of the search functionalities of ElasticSearch. It is the only way to execute queries in ElasticSearch.
Chapter 6, Facets, covers another capability of ElasticSearch—the possibility of executing analytics on search results to improve the user experience and to drill down into the information contained in ElasticSearch.
Chapter 7, Scripting, shows how to customize ElasticSearch with scripting in different languages.
Chapter 8, Rivers, covers rivers, which extend ElasticSearch with the ability to pull data from different sources such as databases, NoSQL solutions, or data streams.
Chapter 9, Cluster and Nodes Monitoring, shows how to analyze the behavior of a cluster/node to understand common pitfalls.
Chapter 10, Java Integration, describes how to integrate ElasticSearch in Java applications using both the REST and native protocols.
Chapter 11, Python Integration, covers the usage of the official ElasticSearch Python client and the Pythonic PyES library.
Chapter 12, Plugin Development, describes how to create the different types of plugins: site and native. Some examples show the plugin skeletons, the setup process, and how to build them.
For this book you will need a computer, of course. In terms of the software required, you don't have to worry: all the components we use are open source and available for every platform.
For all the REST examples, the cURL tool (http://curl.haxx.se/) is used to issue commands from the command line. It is commonly preinstalled on Linux and Mac OS X operating systems. For Windows, it can be downloaded from its site and put in a path that can be called from the command line.
For Chapter 10, Java Integration, and Chapter 12, Plugin Development, the Maven build tool (http://maven.apache.org/) is required; it is a standard for managing builds, packaging, and deployment in Java. It is natively supported in Java IDEs such as Eclipse and IntelliJ IDEA.
Chapter 11, Python Integration, requires a Python interpreter. By default it's available on Linux and Mac OS X. For Windows, it can be downloaded from the official Python site (http://www.python.org). The examples use version 2.x.
This book is for developers who want to start using ElasticSearch while improving their ElasticSearch knowledge. The book covers all aspects of using ElasticSearch and provides solutions and hints for everyday usage. The recipes are kept simple so that the reader can focus on the discussed ElasticSearch aspect and easily memorize the ElasticSearch functionalities.
The latter chapters, which discuss ElasticSearch integration in Java and Python, show the user how to integrate the power of ElasticSearch in their applications.
The last chapter talks about advanced usage of ElasticSearch and its core extension, so some solid Java know-how is required.
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Open the config/elasticsearch.yml file with an editor of your choice."
A block of code is set as follows:
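For example, an illustrative fragment of config/elasticsearch.yml (the values shown here are placeholders):

    cluster.name: elasticsearch
    node.name: "node1"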
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
Any command-line input or output is written as follows:
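For example, an illustrative command:

    curl -XGET 'http://127.0.0.1:9200/'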
New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "The Any Request [+] tab allows executing custom queries. On the left-hand side there are the following options:".
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.
To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title via the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.
Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors, and our ability to bring you valuable content.
You can contact us at <[email protected]> if you are having a problem with any aspect of the book, and we will do our best to address it.
In this chapter, we will cover the following topics:
In order to use ElasticSearch efficiently, it is very important to understand how it works. The goal of this chapter is to give the reader an overview of the basic concepts of ElasticSearch, such as node, index, shard, type, record, and field.
ElasticSearch can be used both as a search engine and as a data store. A brief description of the ElasticSearch logic helps the user to improve performance and quality, and to decide when and how to invest in infrastructure to improve scalability and availability. Some details about data replication and basic node communication processes are also explained. At the end of this chapter, the protocols used to manage ElasticSearch are also discussed.
Every instance of ElasticSearch is called a node. Several nodes are grouped in a cluster. This is the base of the cloud nature of ElasticSearch.
To better understand the upcoming sections, some knowledge of the basic concepts of application nodes and clusters is required.
One or more ElasticSearch nodes can be set up on a physical or a virtual server, depending on available resources such as RAM, CPUs, and disk space. A default node allows you to store data in it and to process requests and responses. (In Chapter 2, Downloading and Setting Up ElasticSearch, we'll see details about how to set up different nodes and cluster topologies.) When a node is started, several actions take place during its startup:
After node startup, the node searches for other cluster members and checks the status of its indices and shards. In order to join two or more nodes in a cluster, the following rules must be matched:
Refer to the Networking setup recipe in the next chapter.
A common approach in cluster management is to have a master node, which is the main reference for all cluster-level actions, and other nodes, called secondaries or slaves, that replicate the master's data and actions. All update actions are first committed on the master node and then replicated to the secondary ones.
In a cluster with multiple nodes, if the master node dies, a secondary one is elected as the new master; this approach allows automatic failover to be set up in an ElasticSearch cluster.
There are two important behaviors of an ElasticSearch node, namely the arbiter and the data container.
Arbiter nodes are able to process REST requests and responses and all the other search operations. During every action execution, ElasticSearch generally works with a MapReduce approach: the arbiter is responsible for distributing the action to the underlying shards (map) and for collecting/aggregating the shard results (reduce) into a final response. Arbiters may use a huge amount of RAM due to operations such as facets, collecting hits, and caching (for example, scan queries).
Data nodes are able to store data. They contain the index shards that store the indexed documents as Lucene indices. All standard nodes are both arbiters and data containers.
In big cluster architectures, having some nodes act as simple arbiters, with a lot of RAM and no data, reduces the resources required by data nodes and improves search performance by using the arbiters' local memory cache.
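As a minimal sketch, a node can be restricted to the arbiter role through config/elasticsearch.yml (the setting names below are those of the 0.90.x line):

    # The node can be elected master and will route and aggregate requests...
    node.master: true
    # ...but it will not hold any shard data
    node.data: false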
When a node is running, a lot of services are managed by its instance. Services provide additional functionalities to a node and they cover different behaviors such as networking, indexing, analyzing, and so on.
Every ElasticSearch server that is running provides services.
ElasticSearch natively provides a large set of functionalities that can be extended with additional plugins. During a node startup, a lot of required services are automatically started. The most important are as follows:
Throughout the book, we'll see recipes that interact with ElasticSearch services. Every base functionality or extended functionality is managed in ElasticSearch as a service.
Whether you are using ElasticSearch as a search engine or as a distributed data store, it's important to understand how ElasticSearch stores and manages your data.
To work with ElasticSearch data, a user must know the basic concepts of data management and of JSON, which is the "lingua franca" for working with ElasticSearch data and services.
Our main data container is called an index (plural: indices), and it can be considered the equivalent of a database in the traditional SQL world. In an index, the data is grouped into data types called mappings in ElasticSearch. A mapping describes how the records are composed (their fields).
Every record stored in ElasticSearch must be a JSON object.
Natively, ElasticSearch is a schema-less data store: when you insert records, it processes them, splits them into fields, and updates the schema to manage the inserted data.
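For example, here is a minimal cURL sketch (the index name myindex, the type mytype, and the field values are illustrative): indexing a JSON document into an index that doesn't exist yet lets ElasticSearch create the index, split the record into the name and age fields, and infer their types in the mapping:

    curl -XPUT 'http://127.0.0.1:9200/myindex/mytype/1' -d '{
        "name": "John Doe",
        "age": 30
    }'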
To manage huge volumes of records, ElasticSearch uses the common approach of splitting an index into many shards so that they can be spread over several nodes. Shard management is transparent in usage: all common record operations are managed automatically in the ElasticSearch application layer.
Every record is stored in only one shard. The sharding algorithm is based on the record ID, so many operations that require loading or changing a record can be achieved without hitting all the shards.
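Conceptually, the routing works like the following sketch (the real hash function is an internal detail that can change between versions):

    shard_number = hash(record_id) % number_of_primary_shards

Because the result depends only on the record ID and the number of primary shards, a get or update by ID can be routed straight to a single shard; this is also the reason why the number of primary shards of an index cannot be changed after its creation.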
The following table compares the ElasticSearch structure with the SQL and MongoDB ones:
ElasticSearch        | SQL             | MongoDB
Index (Indices)      | Database        | Database
Shard                | Shard           | Shard
Mapping/Type         | Table           | Collection
Field                | Field           | Field
Record (JSON object) | Record (tuple)  | Record (BSON object)
ElasticSearch internally has rigid rules about how to execute operations so as to ensure safe operations on indices, mappings, and records. In ElasticSearch, the operations are divided as follows:
When a record is saved in ElasticSearch, the destination shard is chosen based on the following factors:
Splitting an index into shards allows you to store your data on different nodes, because ElasticSearch tries to balance the shards across the available nodes.
Every shard can contain up to 2^31 documents (about 2.1 billion, the Lucene per-index limit), so the real limit to shard size is its storage size.
Shards contain your data, and during the search process all the shards are used to calculate and retrieve results. ElasticSearch performance on big data thus scales horizontally with the number of shards.
All native record operations (such as index, search, update, and delete) are managed in shards.
Shard management is completely transparent to the user. Only advanced users tend to change the default shard routing and management to cover custom scenarios. A common custom scenario is the requirement to put a customer's data in the same shard to speed up that customer's operations (search/index/analytics).
It's best practice not to have too big a shard (over 10 GB), to avoid poor indexing performance due to continuous merging and resizing of index segments.
It's also not good to oversize the number of shards, to avoid poor search performance due to the native distributed search (it works like MapReduce). Having a huge number of empty shards in an index only consumes memory.
Related to shard management, there is the key concept of replication and cluster status.
You need one or more nodes running to have a cluster. To test an effective cluster you need at least two nodes (they can be on the same machine).
An index can have one or more replicas; the shards are called primary if they are part of the master index and secondary if they are part of a replica.
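The number of shards and replicas can be defined at index creation time; for example, with cURL (a minimal sketch; the index name and the values are illustrative):

    curl -XPUT 'http://127.0.0.1:9200/myindex' -d '{
        "settings": {
            "number_of_shards": 5,
            "number_of_replicas": 1
        }
    }'

This creates an index with five primary shards and one replica of each, for a total of ten shards.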
To maintain consistency in write operations the following workflow is executed:
During search operations, a valid set of shards is chosen randomly between primary and secondary shards to improve performance.
The following figure shows an example of a possible shard configuration:
In order to prevent data loss and to have high availability, it's good to have at least one replica, so that your system can survive a node failure without downtime and without loss of data.
Related to the concept of replication, there is an indicator of the health of your cluster.
It can have three different states: green (all primary and replica shards are allocated), yellow (all primary shards are allocated, but some replicas are not), and red (some primary shards are not allocated).
A yellow status is mainly due to some shards not being allocated. If your cluster is in recovery status, just wait, provided there is enough space on the nodes for your shards.
If your cluster is still in the yellow state even after recovery, it means you don't have enough nodes to contain your replicas; you can either reduce the number of replicas or add the required number of nodes.
The total number of nodes must not be lower than the maximum number of replicas plus one.
When you have lost data (that is, one or more shards are missing), you need to try to restore the missing node(s). If your nodes restart and the system goes back to a yellow or green status, you are safe. Otherwise, you have lost data and your cluster is not usable: delete the index/indices and restore them from backups (if you have them) or from other sources.
To prevent data loss, I suggest always having at least two nodes and the number of replicas set to 1.
Having one or more replicas on different nodes on different machines gives you a live backup of your data, always kept up to date.
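You can check the health of your cluster at any time with the cluster health API; for example:

    curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty'

The response reports, among other fields, the status value (green, yellow, or red) and the number of active, relocating, initializing, and unassigned shards.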
You can communicate with your ElasticSearch server using several protocols. In this recipe, we will look at the main ones.
You need a working ElasticSearch cluster.
ElasticSearch is designed to be used as a RESTful server, so the main protocol is HTTP, usually on port 9200 and above. It also allows the use of other protocols, such as the native and Thrift ones. Many others are available as extension plugins, but they are seldom used (the memcached one, for example).
Every protocol has strong and weak points; it's important to choose the correct one depending on the kind of application you are developing. If you are in doubt, choose the HTTP protocol layer, which is the most standard and easy-to-use one.
Choosing the right protocol depends on several factors, mainly architectural and performance-related. With the official clients, switching from one protocol to another is generally a simple setting in the client initialization. Refer to the following table, which shows the protocols with their advantages, disadvantages, and types:
Protocol | Advantages | Disadvantages | Type
HTTP     | The most often used; API safe and generally compatible across different ES versions; JSON; suggested. | HTTP overhead. | Text
Native   | Fast network layer; programmatic; best for massive indexing operations. | API changes can break applications; depends on the same version of the ES server. | Binary
Thrift   | Same as HTTP. | Requires the thrift plugin. | Binary
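As a quick sketch of the HTTP protocol in action, any HTTP client can query the root of a node (assuming the default port 9200 on the local machine):

    curl -XGET 'http://127.0.0.1:9200/'

The server answers with a small JSON object that includes, among other things, the node name and the ElasticSearch version.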
In this chapter we will cover the following topics:
There are different options for installing ElasticSearch and setting up a working environment for development and production.
This chapter explains the installation process and the configuration, from a single developer machine to a big cluster, giving hints on how to improve performance and avoid misconfiguration errors.
The setup step is very important: a bad configuration can bring bad results and poor performance, and can even kill your server.
In this chapter, the management of ElasticSearch plugins is also discussed: installing, configuring, updating, and removing plugins.
ElasticSearch has an active community and the release cycles are very fast.
Because ElasticSearch depends on many common Java libraries (Lucene, Guice, and Jackson are the most famous ones), the ElasticSearch community tries to keep them updated and to fix the bugs that are discovered in them and in the ElasticSearch core.
If possible, the best practice is to use the latest available release (usually the most stable one).
A supported operating system (Linux/Mac OS X/Windows) with a Java JVM 1.6 or above installed is required. A web browser is required to download the ElasticSearch binary release.
For downloading and installing an ElasticSearch server, we will perform the steps given as follows:
The latest version is always downloadable from the web address http://www.elasticsearch.org/download/.
There are versions available for different operating systems:
These packages contain everything to start ElasticSearch.
At the time of writing this book, the latest and most stable version of ElasticSearch was 0.90.7. To check whether it is still the latest available, please visit http://www.elasticsearch.org/download/.
Extract the binary content. After downloading the correct release for your platform, the installation consists of expanding the archive into a working directory.
Choose a working directory that is safe from charset problems and doesn't have a long path, to prevent problems when ElasticSearch creates its directories to store the index data.
On the Windows platform, a good directory could be c:\es; on Unix and Mac OS X, /opt/es.
To run ElasticSearch, you need a Java Virtual Machine 1.6 or above installed. For better performance, I suggest you use the Sun/Oracle 1.7 version.
We start ElasticSearch to check if everything is working. To start your ElasticSearch server, just go into the install directory and type the startup command for your platform, as shown below.
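For the 0.90.x releases covered in this book, the startup commands should look like the following (the -f flag keeps the server running in the foreground; the exact flags may vary between versions).

On Linux and Mac OS X:

    bin/elasticsearch -f

On Windows:

    bin\elasticsearch.bat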
Now your server should start as shown in the following screenshot:
The ElasticSearch package contains three directories: bin, which contains the scripts to start and manage ElasticSearch; config, which contains the configuration files, such as elasticsearch.yml; and lib, which contains the Java libraries.
During ElasticSearch startup a lot of events happen:
There are more events fired during ElasticSearch startup. We'll see them in detail in other recipes.
Correctly setting up networking is very important for your node and cluster.
As there are a lot of different install scenarios and networking issues, in this recipe we will cover two kinds of networking setups:
You need a working ElasticSearch installation, and you need to know your current networking configuration (that is, your IP address).
For configuring networking, we will perform the steps as follows:
Using the standard ElasticSearch configuration file (config/elasticsearch.yml), your node is configured to bind to all your machine's interfaces and performs autodiscovery by broadcasting events; that is, it sends "signals" to every machine on the current LAN and waits for a response. If a node responds to it, they can join in a cluster.
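As a sketch, a static (unicast) setup in config/elasticsearch.yml could look like the following (the cluster name, node name, and IP addresses are illustrative; the setting names are those of the 0.90.x line):

    cluster.name: escluster
    node.name: "esnode1"
    # Bind to a specific address instead of all interfaces
    network.host: 192.168.1.10
    # Disable multicast autodiscovery...
    discovery.zen.ping.multicast.enabled: false
    # ...and list the other cluster members explicitly
    discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11"]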