Elasticsearch is a Lucene-based distributed search server that allows users to index and search unstructured content at petabyte scale. This book is your one-stop guide to mastering the complete Elasticsearch ecosystem.
We'll guide you through comprehensive recipes on what's new in Elasticsearch 5.x, showing you how to create complex queries and analytics and how to perform index mapping, aggregation, and scripting. Further on, you will explore the cluster and node monitoring modules and see ways to back up and restore a snapshot of an index.
You will understand how to install Kibana to monitor a cluster and how to extend Kibana with plugins. Finally, you will see how you can integrate your Java, Scala, Python, and big data applications, such as Apache Spark and Pig, with Elasticsearch, and add enhanced functionalities with custom plugins.
By the end of this book, you will have an in-depth knowledge of the implementation of the Elasticsearch architecture and will be able to manage data efficiently and effectively with Elasticsearch.
Author: Alberto Paro
Copy Editor: Safis Editing
Reviewer: Marcelo Ochoa
Project Coordinator: Shweta H Birwatkar
Commissioning Editor: Amey Varangaonkar
Proofreader: Safis Editing
Acquisition Editor: Divya Poojari
Indexer: Rekha Nair
Content Development Editor: Amrita Noronha
Production Coordinator: Arvindkumar Gupta
Technical Editor: Deepti Tuscano
Alberto Paro is an engineer, project manager, and software developer. He currently works as a freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications, mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.
In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015, mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch, and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSQL datastores (Elasticsearch, HBase, and Accumulo), and related technologies (Scala, Akka, and Play Framework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).
In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDB-engine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server - Second Edition, Learning Scala Web Development, and the video course Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.
It would have been difficult for me to complete this book without the support of a large number of people.
First, I would like to thank my wife, my children and the rest of my family for their support.
A personal thanks to my best friends, Mauro and Michele, and to all the people that helped me and my family.
I'd like to express my gratitude to everyone at Packt Publishing involved in the development and production of this book. I'd like to thank Amrita Noronha for guiding this book to completion, and Deepti Tuscano and Marcelo Ochoa for patiently going through the first draft and providing their valuable feedback. Their professionalism, courtesy, good judgment, and passion for books are much appreciated.
Marcelo Ochoa works at the system laboratory of Facultad de Ciencias Exactas of the Universidad Nacional del Centro de la Provincia de Buenos Aires and is the CTO at Scotas.com, a company that specializes in near real-time search solutions using Apache Solr and Oracle. He divides his time between university jobs and external projects related to Oracle and big data technologies. He has worked on several Oracle-related projects, such as the translation of Oracle manuals and multimedia CBTs. His background is in database, network, web, and Java technologies. In the XML world, he is known as the developer of the DB Generator for the Apache Cocoon project. He has worked on the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration using the Oracle JVM Directory implementation, and the Restlet.org project, where he worked on the Oracle XDB Restlet Adapter, which is an alternative to writing native REST web services inside a database-resident JVM.
Since 2006, he has been part of the Oracle ACE program. Oracle ACEs are known for their strong credentials as Oracle community enthusiasts and advocates, with candidates nominated by ACEs in the Oracle technology and applications communities.
He has coauthored Oracle Database Programming using Java and Web Services by Digital Press and Professional XML Databases by Wrox Press, and has been the technical reviewer for several books by Packt Publishing such as Apache Solr 4 Cookbook and ElasticSearch Server.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
Thank you for purchasing this Packt book. We take our commitment to improving our content and products to meet your needs seriously; that's why your feedback is so valuable. Whatever your feelings about your purchase, please consider leaving a review on this book's Amazon page. Not only will this help us; more importantly, it will also help others in the community make an informed decision about the resources they invest in to learn.
You can also review for us on a regular basis by joining our reviewers' club. If you're interested in joining, or would like to learn more about the benefits we offer, please contact us: [email protected].
To Giulia and Andrea, my extraordinary children.
Search and analytics capabilities are among the most common requirements of today's standard applications. On the market, we can find a lot of solutions that answer these needs, both in the commercial and in the open source world. One of the most used libraries for searching is Apache Lucene. This library is the basis of a large number of search solutions, such as Apache Solr, Indextank, and Elasticsearch.
Elasticsearch is one of the most powerful of these solutions, written with the cloud and distributed computing in mind. Its main author, Shay Banon, famous for having developed Compass (http://www.compass-project.org), released the first version of Elasticsearch in March 2010.
While the main scope of Elasticsearch is to be a search engine, it also provides a lot of features that allow it to be used as a data store and as an analytics engine via its aggregation framework.
Elasticsearch contains a lot of innovative features: it is JSON/REST based, natively distributed in a map/reduce approach for both search and analytics, easy to set up, and extensible with plugins. From 2010, when its development started, to the latest version (5.x), the product has evolved greatly, becoming one of the most used datastores in a lot of markets. In this book, we will go in depth into these changes and features, and into many other capabilities available in Elasticsearch.
Elasticsearch is also a product in continuous evolution: new functionalities are released both by Elastic (the company founded by Shay Banon to provide commercial support for Elasticsearch) and by Elasticsearch users as plugins (mainly available on GitHub). Today, a lot of the major players in the IT industry (see some use cases at https://www.elastic.co/use-cases) use Elasticsearch for its simplicity and advanced features.
In my opinion, Elasticsearch is probably one of the most powerful and easy-to-use search solutions on the market. In writing this book and these recipes, the reviewers and I have tried to transmit our knowledge, our passion, and best practices for managing it well.
Chapter 1, Getting Started, gives the reader an overview of the basic concepts of Elasticsearch and of the ways to communicate with it.
Chapter 2, Downloading and Setup, covers the basic steps to start using Elasticsearch, from a simple install to a cloud one.
Chapter 3, Managing Mappings, covers the correct definition of the data fields to improve both indexing and searching quality.
Chapter 4, Basic Operations, teaches the most common actions that are required to ingest data in Elasticsearch and to manage it.
Chapter 5, Search, talks about executing searches, sorting, and the related API calls; the APIs discussed in this chapter are the main entry points for searching in Elasticsearch.
Chapter 6, Text and Numeric Queries, covers the part of the search DSL that works on text and numeric fields: the core of the search functionality of Elasticsearch.
Chapter 7, Relationships and Geo Queries, talks about queries that work on related documents (child/parent, nested) and on geo-located fields.
Chapter 8, Aggregations, covers another capability of Elasticsearch: the possibility to execute analytics on search results to improve the user experience and to drill down into the information contained in Elasticsearch.
Chapter 9, Scripting, shows how to customize Elasticsearch with scripting and how to use the scripting capabilities in different parts of Elasticsearch (search, aggregation, and ingest) using different languages. The chapter mainly focuses on Painless, the new scripting language developed by the Elastic team.
Chapter 10, Managing Clusters and Nodes, shows how to analyze the behavior of a cluster/node to understand common pitfalls.
Chapter 11, Backup and Restore, covers one of the most important components in managing data: backup. It shows how to manage the distributed backup and restore of snapshots.
Chapter 12, User Interfaces, describes two of the most common user interfaces for Elasticsearch 5.x: Cerebro, mainly used for admin activities, and Kibana with X-Pack as a common UI extension for Elasticsearch.
Chapter 13, Ingest, talks about the new ingest functionality introduced in Elasticsearch 5.x to import data in Elasticsearch via an ingestion pipeline.
Chapter 14, Java Integration, describes how to integrate Elasticsearch into Java applications using both the REST and native protocols.
Chapter 15, Scala Integration, describes how to integrate Elasticsearch into Scala using elastic4s: an advanced, type-safe, feature-rich Scala library based on the native Java API.
Chapter 16, Python Integration, covers the usage of the official Elasticsearch Python client.
Chapter 17, Plugin Development, describes how to create native plugins to extend Elasticsearch functionalities. Some examples show the plugin skeletons, the setup process, and their building.
Chapter 18, Big Data Integration, covers how to integrate Elasticsearch with common big data tools, such as Apache Spark and Apache Pig.
For this book you will need a computer, of course. In terms of software, you don't have to worry: all the components we use are open source and available for every platform.
For all the REST examples, the curl tool (http://curl.haxx.se/) is used to execute commands from the command line. It comes preinstalled on Linux and Mac OS X operating systems. For Windows, it can be downloaded from its website and put in a directory on the PATH so that it can be called from the command line.
For Chapter 14, Java Integration, and Chapter 17, Plugin Development, the Maven build tool (http://maven.apache.org/) is required; it is a standard for managing builds, packaging, and deployment in Java. It is natively supported in Java IDEs such as Eclipse and IntelliJ IDEA.
For Chapter 15, Scala Integration, SBT (http://www.scala-sbt.org/) is required to compile Scala projects, but they can also be managed with IDEs that support Scala, such as Eclipse and IntelliJ IDEA.
Chapter 16, Python Integration, requires an installed Python interpreter. It is available by default on Linux and Mac OS X; for Windows, it can be downloaded from the official Python site (http://www.python.org). The current examples use version 2.x.
This book is for developers who want to start using Elasticsearch and, at the same time, improve their Elasticsearch knowledge. The book covers all aspects of using Elasticsearch and provides solutions and hints for everyday usage. The recipes are kept small in complexity, to let the reader focus on the discussed Elasticsearch aspect and to make the Elasticsearch functionalities easy to memorize.
The latter chapters, which discuss Elasticsearch integration with Java, Scala, Python, and big data tools, show the user how to integrate the power of Elasticsearch in their applications.
The chapter that covers plugin development shows advanced usage of Elasticsearch and its core extension points, so some solid Java know-how is required.
In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
Getting ready
This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.
How to do it
This section contains the steps required to follow the recipe.
How it works
This section usually consists of a detailed explanation of what happened in the previous section.
There's more
This section consists of additional information about the recipe in order to make the reader more knowledgeable about it.
See also
This section provides helpful links to other useful information for the recipe.
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text are shown as follows: "After the name and the type parameters, usually a river requires an extra configuration that can be passed in the _meta property."
A block of code is set as follows:
cluster.name: elasticsearch
node.name: "My wonderful server"
network.host: 192.168.0.1
discovery.zen.ping.unicast.hosts: ["192.168.0.2","192.168.0.3[9300-9400]"]

Any command-line input or output is written as follows:
curl -XDELETE 'http://127.0.0.1:9200/_river/my_river/'

Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.
To send us general feedback, simply send an e-mail to [email protected], and mention the book title via the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on http://www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
You can download the code files from your account page by selecting the book and clicking on Code Download. You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of WinRAR / 7-Zip for Windows, Zipeg / iZip / UnRarX for Mac, or 7-Zip / PeaZip for Linux.
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Elasticsearch-5x-Cookbook-Third-Edition. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.
Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at [email protected] with a link to the suspected pirated material.
We appreciate your help in protecting our authors, and our ability to bring you valuable content.
You can contact us at [email protected] if you are having a problem with any aspect of the book, and we will do our best to address the problem.
In this chapter, we will cover the following recipes:
- Understanding node and cluster
- Understanding node services
- Managing your data
- Understanding cluster, replication, and sharding
- Communicating with Elasticsearch
To use Elasticsearch efficiently, it is very important to understand its design and how it works.
The goal of this chapter is to give readers an overview of the basic concepts of Elasticsearch and to serve as a quick reference for them. Understanding these concepts is essential to avoid the common pitfalls caused by a lack of know-how about Elasticsearch's architecture and internals.
The key concepts that we will see in this chapter are node, index, shard, type/mapping, document, and field.
Elasticsearch can be used in several ways: as a search engine, as a data store, and as an analytics engine.
A brief description of the Elasticsearch logic helps the user to improve performance and search quality, and to decide when and how to optimize the infrastructure for better scalability and availability. Some details about data replication and basic node communication processes are also explained in the upcoming section, Understanding cluster, replication, and sharding.
At the end of this chapter, the protocols used to manage Elasticsearch are also discussed.
Every instance of Elasticsearch is called a node. Several nodes are grouped into a cluster. This is the basis of the cloud nature of Elasticsearch.
To better understand the following sections, knowledge of basic concepts, such as node and cluster, is required.
One or more Elasticsearch nodes can be set up on a physical or virtual server, depending on the available resources such as RAM, CPUs, and disk space.
A default node allows us to store data in it and to process requests and responses. (In Chapter 2, Downloading and Setup, we will see details on how to set up different nodes and cluster topologies).
When a node is started, several actions take place during its startup, such as reading the configuration and starting the required internal services.
After node startup, the node searches for other cluster members and checks its index and shard status.
To join two or more nodes in a cluster, the following rules must be matched:
- The version of Elasticsearch must be the same, otherwise the join is rejected.
- The cluster name must be the same.
- The network must be configured so that the nodes can discover and communicate with each other. (Refer to the networking setup recipe in Chapter 2, Downloading and Setup.)
A common approach in cluster management is to have one or more master nodes, which act as the main reference for all cluster-level actions, and other nodes, called secondaries, that replicate the master's data and actions.
To keep write operations consistent, all the update actions are first committed in the master node and then replicated to the secondary ones.
In a cluster with multiple nodes, if a master node dies, a master-eligible node is elected to be the new master. This approach allows automatic failover to be set up in an Elasticsearch cluster.
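As a quick check (a sketch only, assuming a node listening on the default HTTP port 9200 of localhost), the _cat APIs let you see the cluster members and which node is currently the elected master:

curl 'http://127.0.0.1:9200/_cat/nodes?v'
curl 'http://127.0.0.1:9200/_cat/master?v'

The first command lists all the nodes with their roles; the second returns the ID, address, and name of the current master.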
In Elasticsearch, we have four kinds of nodes:
- Master nodes, which manage the cluster state
- Data nodes, which store the data
- Ingest nodes, which preprocess documents via ingestion pipelines before indexing
- Client (coordinating-only) nodes, which route requests and aggregate results without holding data
In big cluster architectures, having some nodes act as simple client nodes, with a lot of RAM and no data, reduces the resources required by the data nodes and improves performance in search by using their local memory cache.
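As a minimal sketch of how these roles are assigned, the node.* flags in elasticsearch.yml control whether a node is master-eligible, holds data, or runs ingest pipelines; the following values describe a client (coordinating-only) node:

node.master: false
node.data: false
node.ingest: false

A node with all three flags set to false only routes requests and aggregates results, which is exactly the client role described above.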
When a node is running, a lot of services are managed by its instance. Services provide additional functionalities to a node and they cover different behaviors such as networking, indexing, analyzing, and so on.
When an Elasticsearch node starts, a lot of output is printed; this output is produced while its services start up. Every running Elasticsearch server provides services.
Elasticsearch natively provides a large set of functionalities that can be extended with additional plugins.
During a node startup, a lot of required services are automatically started; the most important ones cover cluster management, indexing, mapping, networking, plugins, and scripting.
Throughout the book, we'll see recipes that interact with Elasticsearch services. Every base functionality or extended functionality is managed in Elasticsearch as a service.
If you want to use Elasticsearch as a search engine or a distributed data store, it's important to understand how Elasticsearch stores and manages your data.
To work with Elasticsearch data, a user must have basic knowledge of data management and of the JSON (https://en.wikipedia.org/wiki/JSON) data format, which is the lingua franca for working with Elasticsearch data and services.
Our main data container is called an index (plural indices), and it can be considered similar to a database in the traditional SQL world. In an index, the data is grouped into data types called mappings in Elasticsearch. A mapping describes how the records are composed (their fields). Every record that must be stored in Elasticsearch must be a JSON object.
Natively, Elasticsearch is a schema-less data store: when you put records in it, during insert it processes them, splits them into fields, and updates the schema to manage the inserted data.
To manage huge volumes of records, Elasticsearch uses the common approach of splitting an index into multiple parts (shards) so that they can be spread over several nodes. Shard management is transparent to the user: all common record operations are managed automatically in the Elasticsearch application layer.
Every record is stored in only one shard; the sharding algorithm is based on the record ID, so many operations that require loading and changing records/objects can be achieved without hitting all the shards, but only the shard (and its replicas) that contains the object.
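For example (a sketch only; the myindex index and test type names are placeholders, and a local server on the default port 9200 is assumed), storing a record and fetching it back by ID hits only the shard that owns that ID:

curl -XPUT 'http://127.0.0.1:9200/myindex/test/1' -d '{"name":"John","age":30}'
curl -XGET 'http://127.0.0.1:9200/myindex/test/1'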
The following schema compares the Elasticsearch structure with the SQL and MongoDB ones:

Elasticsearch        | SQL            | MongoDB
Index (indices)      | Database       | Database
Shard                | Shard          | Shard
Mapping/Type         | Table          | Collection
Field                | Column         | Field
Object (JSON object) | Record (tuple) | Record (BSON object)
The following screenshot is a conceptual representation of an Elasticsearch cluster with three nodes, one index with four shards and replica set to 1 (primary shards are in bold):
To ensure safe operations on indices/mappings/objects, Elasticsearch internally has rigid rules about how to execute operations.
In Elasticsearch, the operations are divided into read operations, such as get and search, and write operations, such as index, update, and delete.
When a record is saved in Elasticsearch, the destination shard is chosen based on the following:
- The unique identifier (ID) of the record; if the ID is missing, it is autogenerated by Elasticsearch
- The routing or parent value, if defined, which overrides the default ID-based routing
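Conceptually, the destination shard is computed with the standard routing formula, where the routing value defaults to the record ID (internally, Elasticsearch uses a Murmur3 hash):

shard_num = hash(_routing) % num_primary_shards

This is also why the number of primary shards cannot be changed after index creation: doing so would invalidate the placement of every already-stored record.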
Splitting an index into shards allows you to store your data on different nodes, because Elasticsearch tries to balance the shard distribution over all the available nodes.
Every shard can contain up to 2^32 records (about 4.3 billion), so the real limit to shard size is the storage size.
Shards contain your data, and during the search process all the shards are used to calculate and retrieve results: Elasticsearch performance on big data therefore scales horizontally with the number of shards.
All native record operations (that is, index, search, update, and delete) are managed in shards.
Shard management is completely transparent to the user. Only advanced users tend to change the default shard routing and management to cover custom scenarios; for example, if there is a requirement to put all of a customer's data in the same shard to speed up their operations (search/index/analytics).
It's best practice not to have shards that are too big in size (over 10 GB), to avoid poor indexing performance due to the continuous merging and resizing of index segments.
While indexing (a record update is equal to indexing a new element), Lucene, the Elasticsearch engine, writes the indexed documents in blocks (segments/files) to speed up the write process. Over time, the small segments are deleted and their contents are merged into a new, bigger segment. Having big segments, due to big shards with a lot of data, slows down indexing performance.
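To observe this behavior on a running server, the segments of every shard can be inspected via the _cat API (assuming, as before, a local node on the default port 9200):

curl 'http://127.0.0.1:9200/_cat/segments?v'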
It is also not good to over-allocate the number of shards: this leads to poor search performance, because Elasticsearch works in a map/reduce way due to its natively distributed search. The shards are the workers that do the indexing/searching job, while the master/client nodes do the reduce part (collecting the results from the shards and computing the result to be sent to the user). Having a huge number of empty shards in indices consumes memory and increases search times due to the overhead on the network and result-aggregation phases.
Related to shard management, there are the key concepts of replication and cluster status.
You need one or more nodes running to have a cluster. To test an effective cluster, you need at least two nodes (that can be on the same machine).
An index can have one or more replicas (full copies of your data, automatically managed by Elasticsearch): the shards are called primary if they belong to the primary copy, and secondary if they belong to the other replicas.
To maintain consistency in write operations, the following workflow is executed:
- The write operation is first executed on the primary shard.
- If the primary write is successful, it is propagated simultaneously to all the replica shards.
During search operations, if there are replicas, a valid set of shards is chosen randomly between the primary and secondary ones to improve performance. Elasticsearch has several allocation algorithms to better distribute shards over the nodes. For reliability, replicas are allocated in such a way that if a single node becomes unavailable, there is always at least one replica of each shard still available on the remaining nodes.
The following figure shows some examples of possible shard and replica configurations:
Replicas have a cost: they increase the indexing time due to data node synchronization, which includes the time spent propagating the message to the secondaries (mainly in an asynchronous way).
To prevent data loss and to have high availability, it's good to have at least one replica; so, your system can survive a node failure without downtime and without loss of data.
A typical approach for scaling search performance when the number of users grows is to increase the replica number.
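Because the replica number can be changed at runtime, scaling reads is often just an index settings update; a sketch, again with myindex as a placeholder index name:

curl -XPUT 'http://127.0.0.1:9200/myindex/_settings' -d '{"index": {"number_of_replicas": 2}}'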
Related to the concept of replication, there is the cluster status, an indicator of the health of your cluster.
It can cover three different states:
- Green: Everything is okay; all the primary and replica shards are allocated.
- Yellow: All the primary shards are allocated, but some replicas are missing or not yet allocated.
- Red: Some primary shards are missing, so part of the data is not available; searches return partial results, and indexing can fail for the missing shards.
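The current status can be checked via the cluster health API; for example, on a local node:

curl 'http://127.0.0.1:9200/_cluster/health?pretty'

The returned JSON contains a status field with one of the three values above, plus counters such as the number of nodes and of active, relocating, and unassigned shards.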
The total number of nodes must not be lower than the maximum number of replicas.
To prevent data loss, I suggest always having at least two nodes and a replica set to 1.
Having one or more replicas on different nodes on different machines gives you a live backup of your data that is always kept up to date.
In Elasticsearch 5.x, there are only two ways to communicate with the server: using the HTTP protocol or using the native one. In this recipe, we will take a look at these main protocols.
The standard installation of Elasticsearch provides access via its web services on port 9200 for HTTP and on port 9300 for the native Elasticsearch protocol. By simply starting an Elasticsearch server, you can communicate with it on these ports.
Elasticsearch is designed to be used as a RESTful server, so the main protocol is HTTP, usually on port 9200 and above. This is the only protocol that can be used by programming languages that don't run on a Java virtual machine (JVM).
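A quick way to verify that the HTTP layer is up is to call the server root, which returns a small JSON document with the node name and the Elasticsearch version; for example, on a default local installation:

curl 'http://127.0.0.1:9200/'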
Every protocol has advantages and disadvantages. It's important to choose the correct one depending on the kind of applications you are developing. If you are in doubt, choose the HTTP protocol layer, which is the most standard and easiest to use.
Choosing the right protocol depends on several factors, mainly architectural and performance-related. The following schema summarizes the advantages and disadvantages of each:
Protocol | Advantages | Disadvantages | Type
HTTP | It is the most frequently used protocol. It is API safe and has general compatibility across different ES versions (suggested; JSON). It is easy to proxy and to balance with HTTP balancers. | It has HTTP overhead. HTTP clients don't know the cluster topology, so they require more hops to access data. | Text
Native | It is a fast network layer, it is programmatic, and it is best for massive index operations. It is more compact due to its binary nature, and faster because the clients know the cluster topology. The native serializer/deserializer are more efficient than the JSON ones. | The API changes and breaks applications. It depends on the same version of the ES server. It works only on the JVM. | Binary