ElasticSearch Cookbook - Second Edition - Alberto Paro - E-Book

Description


    Book Description

    If you are a developer who implements ElasticSearch in your web applications and want to sharpen your understanding of the core elements and applications, this is the book for you. It is assumed that you’ve got working knowledge of JSON and, if you want to extend ElasticSearch, of Java and related technologies.

    What you will learn

    • Make ElasticSearch work for you by choosing the best cloud topology and powering it with plugins
    • Develop tailored mapping to take full control of index steps
    • Build complex queries through managing indices and documents
    • Optimize search results through executing analytics aggregations
    • Manage rivers (SQL, NoSQL, and web-based) to synchronize and populate cross-source data
    • Develop web interfaces to execute key tasks
    • Monitor the performance of the cluster and nodes

    Who this book is for

    Page count: 531

    Publication year: 2015




    Table of Contents

    ElasticSearch Cookbook Second Edition
    Credits
    About the Author
    Acknowledgments
    About the Reviewers
    www.PacktPub.com
    Support files, eBooks, discount offers, and more
    Why subscribe?
    Free access for Packt account holders
    Preface
    What this book covers
    What you need for this book
    Who this book is for
    Sections
    Getting ready
    How to do it…
    How it works…
    There's more…
    See also
    Conventions
    Reader feedback
    Customer support
    Downloading the example code
    Errata
    Piracy
    Questions
    1. Getting Started
    Introduction
    Understanding nodes and clusters
    Getting ready
    How it works...
    There's more...
    See also
    Understanding node services
    Getting ready
    How it works...
    Managing your data
    Getting ready
    How it works...
    There's more...
    Best practices
    See also
    Understanding clusters, replication, and sharding
    Getting ready
    How it works...
    There's more...
    Solving the yellow status
    Solving the red status
    See also
    Communicating with ElasticSearch
    Getting ready
    How it works...
    Using the HTTP protocol
    Getting ready
    How to do it...
    How it works...
    There's more...
    Using the native protocol
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Using the Thrift protocol
    Getting ready
    How to do it...
    There's more...
    See also
    2. Downloading and Setting Up
    Introduction
    Downloading and installing ElasticSearch
    Getting ready
    How to do it…
    How it works...
    There's more...
    Setting up networking
    Getting ready
    How to do it...
    How it works...
    See also
    Setting up a node
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Setting up for Linux systems
    Getting ready
    How to do it...
    How it works...
    Setting up different node types
    Getting ready
    How to do it...
    How it works...
    Installing plugins in ElasticSearch
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Installing a plugin manually
    Getting ready
    How to do it...
    How it works...
    Removing a plugin
    Getting ready
    How to do it...
    How it works...
    Changing logging settings
    Getting ready
    How to do it...
    How it works...
    3. Managing Mapping
    Introduction
    Using explicit mapping creation
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Mapping base types
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Mapping arrays
    Getting ready
    How to do it...
    How it works...
    Mapping an object
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Mapping a document
    Getting ready
    How to do it...
    How it works...
    See also
    Using dynamic templates in document mapping
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Managing nested objects
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Managing a child document
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Adding a field with multiple mappings
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Mapping a geo point field
    Getting ready
    How to do it...
    How it works...
    There's more...
    Mapping a geo shape field
    Getting ready
    How to do it...
    How it works...
    See also
    Mapping an IP field
    Getting ready
    How to do it...
    How it works...
    Mapping an attachment field
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Adding metadata to a mapping
    Getting ready
    How to do it...
    How it works...
    Specifying a different analyzer
    Getting ready
    How to do it...
    How it works...
    See also
    Mapping a completion suggester
    Getting ready
    How to do it...
    How it works...
    See also
    4. Basic Operations
    Introduction
    Creating an index
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Deleting an index
    Getting ready
    How to do it...
    How it works...
    See also
    Opening/closing an index
    Getting ready
    How to do it...
    How it works...
    See also
    Putting a mapping in an index
    Getting ready
    How to do it...
    How it works...
    See also
    Getting a mapping
    Getting ready
    How to do it...
    How it works...
    See also
    Deleting a mapping
    Getting ready
    How to do it...
    How it works...
    See also
    Refreshing an index
    Getting ready
    How to do it...
    How it works...
    See also
    Flushing an index
    Getting ready
    How to do it...
    How it works...
    See also
    Optimizing an index
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Checking if an index or type exists
    Getting ready
    How to do it...
    How it works...
    Managing index settings
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Using index aliases
    Getting ready
    How to do it...
    How it works...
    There's more…
    Indexing a document
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Getting a document
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Deleting a document
    Getting ready
    How to do it...
    How it works...
    See also
    Updating a document
    Getting ready
    How to do it...
    How it works...
    See also
    Speeding up atomic operations (bulk operations)
    Getting ready
    How to do it...
    How it works...
    See also
    Speeding up GET operations (multi GET)
    Getting ready
    How to do it...
    How it works...
    See also
    5. Search, Queries, and Filters
    Introduction
    Executing a search
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Sorting results
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Highlighting results
    Getting ready
    How to do it...
    How it works...
    See also
    Executing a scan query
    Getting ready
    How to do it...
    How it works...
    See also
    Suggesting a correct query
    Getting ready
    How to do it...
    How it works...
    See also
    Counting matched results
    Getting ready
    How to do it...
    How it works...
    See also
    Deleting by query
    Getting ready
    How to do it...
    How it works...
    See also
    Matching all the documents
    Getting ready
    How to do it...
    How it works...
    See also
    Querying/filtering for a single term
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Querying/filtering for multiple terms
    Getting ready
    How to do it...
    How it works…
    There's more...
    See also
    Using a prefix query/filter
    Getting ready
    How to do it...
    How it works…
    See also
    Using a Boolean query/filter
    Getting ready
    How to do it...
    How it works…
    See also
    Using a range query/filter
    Getting ready
    How to do it...
    How it works...
    There's more...
    Using span queries
    Getting ready
    How to do it...
    How it works...
    See also
    Using a match query
    Getting ready
    How to do it...
    How it works...
    See also
    Using an ID query/filter
    Getting ready
    How to do it...
    How it works...
    See also
    Using a has_child query/filter
    Getting ready
    How to do it...
    How it works...
    See also
    Using a top_children query
    Getting ready
    How to do it...
    How it works...
    See also
    Using a has_parent query/filter
    Getting ready
    How to do it...
    How it works...
    See also
    Using a regexp query/filter
    Getting ready
    How to do it...
    How it works...
    See also
    Using a function score query
    Getting ready
    How to do it...
    How it works...
    See also
    Using exists and missing filters
    Getting ready
    How to do it...
    How it works...
    Using and/or/not filters
    Getting ready
    How to do it...
    How it works...
    Using a geo bounding box filter
    Getting ready
    How to do it...
    How it works...
    See also
    Using a geo polygon filter
    Getting ready
    How to do it...
    How it works...
    See also
    Using geo distance filter
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Using a QueryString query
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Using a template query
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    6. Aggregations
    Introduction
    Executing an aggregation
    Getting ready
    How to do it...
    How it works...
    See also
    Executing the stats aggregation
    Getting ready
    How to do it...
    How it works...
    See also
    Executing the terms aggregation
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing the range aggregation
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing the histogram aggregation
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing the date histogram aggregation
    Getting ready
    How to do it...
    How it works...
    See also
    Executing the filter aggregation
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing the global aggregation
    Getting ready
    How to do it...
    How it works...
    Executing the geo distance aggregation
    Getting ready
    How to do it...
    How it works...
    See also
    Executing nested aggregation
    Getting ready
    How to do it...
    How it works...
    There's more…
    Executing the top hit aggregation
    Getting ready
    How to do it...
    How it works...
    See also
    7. Scripting
    Introduction
    Installing additional script plugins
    Getting ready
    How to do it...
    How it works...
    There's more...
    Managing scripts
    Getting ready
    How to do it...
    How it works...
    See also
    Sorting data using script
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Computing return fields with scripting
    Getting ready
    How to do it...
    How it works...
    See also
    Filtering a search via scripting
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Updating a document using scripts
    Getting ready
    How to do it...
    How it works...
    There's more...
    8. Rivers
    Introduction
    Managing a river
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Using the CouchDB river
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Using the MongoDB river
    Getting ready
    How to do it...
    How it works...
    See also
    Using the RabbitMQ river
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Using the JDBC river
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Using the Twitter river
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    9. Cluster and Node Monitoring
    Introduction
    Controlling cluster health via the API
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Controlling cluster state via the API
    Getting ready
    How to do it...
    How it works...
    There's more...
    See also
    Getting cluster node information via the API
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Getting node statistics via the API
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Managing repositories
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing a snapshot
    Getting ready
    How to do it...
    How it works...
    There's more…
    Restoring a snapshot
    Getting ready
    How to do it...
    How it works...
    Installing and using BigDesk
    Getting ready
    How to do it...
    How it works...
    There's more…
    Installing and using ElasticSearch Head
    Getting ready
    How to do it...
    How it works...
    There's more…
    Installing and using SemaText SPM
    Getting ready
    How to do it...
    How it works...
    See also
    Installing and using Marvel
    Getting ready
    How to do it...
    How it works...
    See also
    10. Java Integration
    Introduction
    Creating an HTTP client
    Getting ready
    How to do it...
    How it works...
    There's more
    See also
    Creating a native client
    Getting ready
    How to do it...
    How it works...
    There's more
    See also
    Managing indices with the native client
    Getting ready
    How to do it...
    How it works...
    See also
    Managing mappings
    Getting ready
    How to do it...
    How it works...
    There's more
    See also
    Managing documents
    Getting ready
    How to do it...
    How it works...
    See also
    Managing bulk actions
    Getting ready
    How to do it...
    How it works...
    See also
    Building a query
    Getting ready
    How to do it...
    How it works...
    There's more
    See also
    Executing a standard search
    Getting ready
    How to do it...
    How it works...
    See also
    Executing a search with aggregations
    Getting ready
    How to do it...
    How it works...
    See also
    Executing a scroll/scan search
    Getting ready
    How to do it...
    How it works...
    There's more
    See also
    11. Python Integration
    Introduction
    Creating a client
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Managing indices
    Getting ready
    How to do it...
    How it works...
    See also
    Managing mappings
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Managing documents
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing a standard search
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Executing a search with aggregations
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    12. Plugin Development
    Introduction
    Creating a site plugin
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Creating a native plugin
    Getting ready
    How to do it...
    How it works...
    There's more…
    Creating a REST plugin
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Creating a cluster action
    Getting ready
    How to do it...
    How it works...
    See also
    Creating an analyzer plugin
    Getting ready
    How to do it...
    How it works...
    Creating a river plugin
    Getting ready
    How to do it...
    How it works...
    There's more…
    See also
    Index

    ElasticSearch Cookbook Second Edition


    Copyright © 2015 Packt Publishing

    All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

    Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

    Packt Publishing has endeavored to provide trademark information about all the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

    First published: December 2013

    Second edition: January 2015

    Production reference: 1230115

    Published by Packt Publishing Ltd.

    Livery Place

    35 Livery Street

    Birmingham B3 2PB, UK.

    ISBN 978-1-78355-483-6

    www.packtpub.com

    Credits

    Author

    Alberto Paro

    Reviewers

    Florian Hopf

    Wenhan Lu

    Suvda Myagmar

    Dan Noble

    Philip O'Toole

    Acquisition Editor

    Rebecca Youé

    Content Development Editor

    Amey Varangaonkar

    Technical Editors

    Prajakta Mhatre

    Rohith Rajan

    Copy Editors

    Hiral Bhat

    Dipti Kapadia

    Neha Karnani

    Shambhavi Pai

    Laxmi Subramanian

    Ashwati Thampi

    Project Coordinator

    Leena Purkait

    Proofreaders

    Ting Baker

    Samuel Redman Birch

    Stephen Copestake

    Ameesha Green

    Lauren E. Harkins

    Indexer

    Hemangini Bari

    Graphics

    Valentina D'silva

    Production Coordinator

    Manu Joseph

    Cover Work

    Manu Joseph

    About the Author

    Alberto Paro is an engineer, project manager, and software developer. He currently works as CTO at Big Data Technologies and as a freelance consultant on software engineering for Big Data and NoSQL solutions. He loves to study emerging solutions and applications, mainly related to Big Data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was 8 years old and, to date, has collected a lot of experience using different operating systems, applications, and programming languages.

    In 2000, he graduated in computer science engineering at Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a Big Data technologies company, where he currently works mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for Big Data, machine learning, and ElasticSearch.

    In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDB-engine). In 2010, he began using ElasticSearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for ElasticSearch), as well as the initial part of the ElasticSearch MongoDB river. He is the author of ElasticSearch Cookbook, a technical reviewer of Elasticsearch Server, Second Edition, and of the video course Building a Search Server with ElasticSearch, all of which are published by Packt Publishing.

    Acknowledgments

    It would have been difficult for me to complete this book without the support of a large number of people.

    First, I would like to thank my wife, my children, and the rest of my family for their valuable support.

    On a more personal note, I'd like to thank my friend, Mauro Gallo, for his patience.

    I'd like to express my gratitude to everyone at Packt Publishing who was involved in the development and production of this book. I'd like to thank Amey Varangaonkar for guiding this book to completion, and Florian Hopf, Philip O'Toole, and Suvda Myagmar for patiently going through the first drafts and providing valuable feedback. Their professionalism, courtesy, good judgment, and passion for this book are much appreciated.

    About the Reviewers

    Florian Hopf works as a freelance software developer and consultant in Karlsruhe, Germany. He familiarized himself with Lucene-based search while working with different content management systems on the Java platform. He is responsible for small and large search systems, on both the Internet and intranet, for web content and application-specific data based on Lucene, Solr, and ElasticSearch. He helps to organize the local Java User Group as well as the Search Meetup in Karlsruhe, and he blogs at http://blog.florian-hopf.de.

    Wenhan Lu is currently pursuing his master's degree in computer science at Carnegie Mellon University. He has worked for Amazon.com, Inc. as a software engineering intern. Wenhan has more than 7 years of experience in Java programming. Today, his interests include distributed systems, search engineering, and NoSQL databases.

    Suvda Myagmar currently works as a technical lead at a San Francisco-based start-up called Expect Labs, where she builds developer APIs and tunes ranking algorithms for intelligent voice-driven, content-discovery applications. She is the co-founder of Piqora, a company that specializes in social media analytics and content management solutions for online retailers. Prior to working for start-ups, she worked as a software engineer at Yahoo! Search and Microsoft Bing.

    Dan Noble is a software engineer from Washington, D.C. who has been a big fan of ElasticSearch since 2011. He's the author of the Python ElasticSearch driver called rawes, available at https://github.com/humangeo/rawes. Dan focuses his efforts on the development of web application design, data visualization, and geospatial applications.

    Philip O'Toole has developed software and led software development teams for more than 15 years for a variety of applications, including embedded software, networking appliances, web services, and SaaS infrastructure. His most recent work with ElasticSearch includes leading infrastructure design and development of Loggly's log analytics SaaS platform, whose core component is ElasticSearch. He is based in the San Francisco Bay Area and can be found online at http://www.philipotoole.com.

    www.PacktPub.com

    Support files, eBooks, discount offers, and more

    For support files and downloads related to your book, please visit www.PacktPub.com.

    Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com, and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

    At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

    https://www2.packtpub.com/books/subscription/packtlib

    Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

    Why subscribe?

    • Fully searchable across every book published by Packt
    • Copy and paste, print, and bookmark content
    • On demand and accessible via a web browser

    Free access for Packt account holders

    If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.

    To Giulia and Andrea, my extraordinary children.

    Preface

    One of the main requirements of today's applications is search capability. The market offers a lot of solutions that answer this need, in both the commercial and the open source worlds. One of the most used libraries for searching is Apache Lucene. This library is the base of a large number of search solutions, such as Apache Solr, Indextank, and ElasticSearch.

    ElasticSearch is written with both cloud and distributed computing in mind. Its main author, Shay Banon, who is famous for having developed Compass (http://www.compass-project.org), released the first version of ElasticSearch in March 2010.

    While the main scope of ElasticSearch is to be a search engine, it also provides a lot of features that allow you to use it as a data store and as an analytic engine via aggregations.

    ElasticSearch contains a lot of innovative features: it is JSON/REST-based, natively distributed in a Map/Reduce approach, easy to set up, and extensible with plugins. In this book, we will go into the details of these features and many others available in ElasticSearch.

    Before ElasticSearch, only Apache Solr was able to provide some of these functionalities, but it was not designed for the cloud and did not use a JSON/REST API. In the last few years, this situation has changed a bit with the release of SolrCloud in 2012. For users who want to compare these two products more thoroughly, I suggest you read the posts by Rafał Kuć, available at http://blog.sematext.com/2012/08/23/solr-vs-elasticsearch-part-1-overview/.

    ElasticSearch is a product that is in a state of continuous evolution, and new functionalities are released by both the ElasticSearch company (the company founded by Shay Banon to provide commercial support for ElasticSearch) and ElasticSearch users as plugins (mainly available on GitHub).

    Founded in 2012, the ElasticSearch company has raised a total of USD 104 million in funding. ElasticSearch's success can best be described by the words of Steven Schuurman, the company's cofounder and CEO:

    It's incredible to receive this kind of support from our investors over such a short period of time. This speaks to the importance of what we're doing: businesses are generating more and more data—both user- and machine-generated—and it has become a strategic imperative for them to get value out of these assets, whether they are starting a new data-focused project or trying to leverage their current Hadoop or other Big data investments.

    ElasticSearch has an impressive track record for its search product, powering customers such as Foursquare (which indexes over 50 million venues), the online music distribution platform SoundCloud, StumbleUpon, and the enterprise social network Xing, which has 14 million members. It also powers GitHub, which searches 20 terabytes of data and 1.3 billion files, and Loggly, which uses ElasticSearch as a key-value store to index clusters of data for rapid analytics of log files.

    In my opinion, ElasticSearch is probably one of the most powerful and easy-to-use search solutions on the market. Throughout this book and these recipes, the book's reviewers and I have sought to transmit our knowledge, passion, and best practices to help readers better manage ElasticSearch.

    What this book covers

    Chapter 1, Getting Started, gives you an overview of the basic concepts of ElasticSearch and the ways to communicate with it.

    Chapter 2, Downloading and Setting Up, shows the basic steps to start using ElasticSearch, from the simple installation to running multiple nodes.

    Chapter 3, Managing Mapping, covers the correct definition of data fields to improve both the indexing and search quality.

    Chapter 4, Basic Operations, shows you the common operations that are required to both ingest and manage data in ElasticSearch.

    Chapter 5, Search, Queries, and Filters, covers the core search functionalities in ElasticSearch. The search DSL is the only way to execute queries in ElasticSearch.
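    To give a flavor of that DSL, a query is itself a JSON document sent to the _search endpoint. The following sketch builds a term query (the "status" field and "active" value are hypothetical, for illustration only):

```python
import json

# A term query in the ElasticSearch query DSL: match documents whose
# "status" field contains the exact term "active".
search_request = {
    "query": {
        "term": {"status": "active"}
    },
    "size": 10,  # return at most ten hits
}

# On a running node, this body would be sent to POST /{index}/_search.
body = json.dumps(search_request)
print(body)
```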

    Chapter 6, Aggregations, covers another capability of ElasticSearch: the possibility to execute analytics on search results in order to improve the user experience and drill down the information.
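    For instance, an aggregation rides along with a search request as an extra "aggs" section in the body. This sketch uses the 1.x terms aggregation syntax; the "tag" field and the aggregation name are hypothetical:

```python
import json

# A terms aggregation bucketing documents by the distinct values of a
# hypothetical "tag" field, attached to a match_all query.
request = {
    "query": {"match_all": {}},
    "aggs": {
        "tags": {                       # aggregation name, chosen by the caller
            "terms": {"field": "tag"}   # bucket by distinct values of "tag"
        }
    },
    "size": 0,  # hits are not needed; only the aggregation result is
}

body = json.dumps(request)
print(body)
```

    Setting "size" to 0 is a common way to skip the hit list when only the analytics are of interest.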

    Chapter 7, Scripting, shows you how to customize ElasticSearch with scripting in different programming languages.

    Chapter 8, Rivers, extends ElasticSearch to give you the ability to pull data from different sources such as databases, NoSQL solutions, and data streams.

    Chapter 9, Cluster and Node Monitoring, shows you how to analyze the behavior of a cluster/node to understand common pitfalls.

    Chapter 10, Java Integration, describes how to integrate ElasticSearch in a Java application using both REST and native protocols.

    Chapter 11, Python Integration, covers the usage of the official ElasticSearch Python client and the Pythonic PyES library.

    Chapter 12, Plugin Development, describes how to create the different types of plugins: site and native plugins. Some examples show the plugin skeletons, the setup process, and their build.

    What you need for this book

    For this book, you will need a computer running a Windows OS, Macintosh OS, or Linux distribution. In terms of the additional software required, you don't have to worry, as all the components you will need are open source and available for every major OS platform.

    For all the REST examples, the cURL tool (http://curl.haxx.se/) will be used to simulate commands from the command line. It comes preinstalled on Linux and Mac OS X operating systems. For Windows, it can be downloaded from its site and added to a directory in the PATH so that it can be called from the command line.

    Chapter 10, Java Integration, and Chapter 12, Plugin Development, require the Maven build tool (http://maven.apache.org/), which is a standard tool to manage builds, packaging, and deploying in Java. It is natively supported on most of the Java IDEs, such as Eclipse and IntelliJ IDEA.

    Chapter 11, Python Integration, requires the Python interpreter to be installed on your computer. It's available by default on Linux and Mac OS X. For Windows, it can be downloaded from the official Python website (http://www.python.org). The examples in this chapter have been tested using version 2.x.

    Who this book is for

    This book is for developers and users who want to begin using ElasticSearch or want to improve their knowledge of ElasticSearch. This book covers all the aspects of using ElasticSearch and provides solutions and hints for everyday usage. The recipes are kept simple so that it is easy for readers to focus on the discussed ElasticSearch aspect and fully understand its functionalities.

    The chapters toward the end of the book discuss ElasticSearch integration with Java and Python programming languages; this shows the users how to integrate the power of ElasticSearch into their Java- and Python-based applications.

    Chapter 12, Plugin Development, talks about the advanced use of ElasticSearch and its core extensions, so you will need some prior Java knowledge to understand this chapter fully.

    Sections

    This book contains the following sections:

    Getting ready

    This section tells us what to expect in the recipe, and describes how to set up any software or any preliminary settings needed for the recipe.

    How to do it…

    This section contains the steps required to follow the recipe.

    How it works…

    This section usually consists of a detailed explanation of what happened in the previous section.

    There's more…

    This section consists of additional information about the recipe in order to make the reader more knowledgeable about it.

    See also

    This section provides helpful links to other useful information for the recipe.

    Conventions

    In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

    Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "After the name and type parameters, usually a river requires an extra configuration that can be passed in the _meta property."

    A block of code is set as follows:

    cluster.name: elasticsearch
    node.name: "My wonderful server"
    network.host: 192.168.0.1
    discovery.zen.ping.unicast.hosts: ["192.168.0.2","192.168.0.3[9300-9400]"]

    When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

    cluster.name: elasticsearch
    node.name: "My wonderful server"
    network.host: 192.168.0.1
    discovery.zen.ping.unicast.hosts: ["192.168.0.2","192.168.0.3[9300-9400]"]

    Any command-line input or output is written as follows:

    curl -XDELETE 'http://127.0.0.1:9200/_river/my_river/'

    New terms and important words are shown in bold. Words you see on the screen, in menus or dialog boxes, for example, appear in the text like this: "If you don't see the cluster statistics, put your node address to the left and click on the connect button."

    Note

    Warnings or important notes appear in a box like this.

    Tip

    Tips and tricks appear like this.

    Reader feedback

    Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles you really get the most out of.

    To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title via the subject of your message.

    If there is a topic you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

    Customer support

    Now that you are the proud owner of a Packt book, we have a number of things to help you get the most from your purchase.

    Downloading the example code

    You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. The code bundle is also available on GitHub at https://github.com/aparo/elasticsearch-cookbook-second-edition.

    Errata

    Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

    To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

    Piracy

    Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so we can pursue a remedy.

    Please contact us at <[email protected]> with a link to the suspected pirated material.

    We appreciate your help in protecting our authors, and our ability to bring you valuable content.

    Questions

    If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

    Chapter 1. Getting Started

    In this chapter, we will cover:

    Understanding nodes and clusters
    Understanding node services
    Managing your data
    Understanding clusters, replication, and sharding
    Communicating with ElasticSearch
    Using the HTTP protocol
    Using the native protocol
    Using the Thrift protocol

    Introduction

    To efficiently use ElasticSearch, it is very important to understand how it works.

    The goal of this chapter is to give readers an overview of the basic concepts of ElasticSearch and to serve as a quick reference. It's essential to understand the basics so that you don't fall into common pitfalls about how ElasticSearch works and how to use it.

    The key concepts that we will see in this chapter are: node, index, shard, mapping/type, document, and field.

    ElasticSearch can be used both as a search engine and as a data store.

    A brief description of the ElasticSearch logic helps the user to improve performance, search quality, and decide when and how to optimize the infrastructure to improve scalability and availability.

    Some details on data replications and base node communication processes are also explained.

    At the end of this chapter, the protocols used to manage ElasticSearch are also discussed.

    Understanding nodes and clusters

    Every instance of ElasticSearch is called a node. Several nodes are grouped in a cluster. This is the base of the cloud nature of ElasticSearch.

    Getting ready

    To better understand the following sections, some basic knowledge about the concepts of application, node, and cluster is required.

    How it works...

    One or more ElasticSearch nodes can be set up on a physical or a virtual server depending on the available resources such as RAM, CPU, and disk space.

    A default node allows you to store data in it and to process requests and responses. (In Chapter 2, Downloading and Setting Up, we'll see details about how to set up different nodes and cluster topologies.)

    When a node is started, several actions take place during its startup, such as:

    The configuration is read from the environment variables and the elasticsearch.yml configuration file
    A node name is set by the configuration file or is chosen from a list of built-in random names
    Internally, the ElasticSearch engine initializes all the modules and plugins that are available in the current installation
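    The startup steps above can be sketched as follows. This is an illustrative simplification, not the real server code: the configuration keys mirror elasticsearch.yml, and the fallback name list is invented (the real server ships its own built-in list of random names).

```python
import random

# Illustrative fallback names; the real server ships its own built-in list.
RANDOM_NODE_NAMES = ["Phoenix", "Wolverine", "Storm", "Cyclops"]

def init_node(config, env=None, rng=random):
    """Sketch of node startup: merge environment-variable overrides into
    the elasticsearch.yml settings, then pick a random node name if the
    configuration did not set one."""
    settings = dict(config)
    settings.update(env or {})  # environment variables win over the file
    if "node.name" not in settings:
        settings["node.name"] = rng.choice(RANDOM_NODE_NAMES)
    return settings

node = init_node({"cluster.name": "elasticsearch"},
                 env={"network.host": "192.168.0.1"})
print(node["cluster.name"], node["node.name"])
```

    An explicitly configured node.name is always kept; the random choice only fills the gap.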


    After the node startup, the node searches for other cluster members and checks its index and shard status.

    To join two or more nodes in a cluster, the following rules must be observed:

    The version of ElasticSearch must be the same (v0.20, v0.90, v1.4, and so on) or the join is rejected
    The cluster name must be the same
    The network must be configured to support broadcast discovery (it is configured so by default) so that the nodes can communicate with each other (see the Setting up networking recipe in Chapter 2, Downloading and Setting Up)

    A common approach in cluster management is to have a master node, which is the main reference for all cluster-level actions, and the other nodes, called secondary nodes, that replicate the master data and its actions.

    To be consistent in the write operations, all the update actions are first committed in the master node and then replicated in the secondary nodes.

    In a cluster with multiple nodes, if a master node dies, a master-eligible node is elected to be the new master node. This approach allows automatic failover to be set up in an ElasticSearch cluster.

    There's more...

    There are two important behaviors in an ElasticSearch node: the non-data node (or arbiter) and the data container behavior.

    Non-data nodes are able to process REST requests and all the other search operations. During every action execution, ElasticSearch generally uses a map/reduce approach: the non-data node is responsible for distributing the actions to the underlying shards (map) and collecting/aggregating the shard results (reduce) in order to send a final response. Non-data nodes may use a huge amount of RAM due to operations such as facets, aggregations, collecting hits, and caching (for example, scan/scroll queries).

    Data nodes are able to store data. They contain the index shards that store the indexed documents as Lucene (the internal ElasticSearch engine) indices.

    Using the standard configuration, a node is both an arbiter and a data container.

    In big cluster architectures, having some nodes as simple arbiters with a lot of RAM, with no data, reduces the resources required by data nodes and improves performance in searches using the local memory cache of arbiters.
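    As a sketch, a node can be restricted to the arbiter role in elasticsearch.yml with the following flags (these are the standard ElasticSearch 1.x node settings; a pure data node would invert them):

```yaml
# Arbiter / coordinating node: eligible as master, stores no data
node.master: true
node.data: false
```

    With node.master: false and node.data: true, the same mechanism yields a dedicated data node.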

    See also

    The Setting up different node types recipe in Chapter 2, Downloading and Setting Up.

    Understanding node services

    When a node is running, a lot of services are managed by its instance. These services provide additional functionalities to a node and they cover different behaviors such as networking, indexing, analyzing and so on.

    Getting ready

    Every ElasticSearch server that is running provides services.

    How it works...

    ElasticSearch natively provides a large set of functionalities that can be extended with additional plugins.

    During a node startup, a lot of required services are automatically started. The most important are:

    Cluster services: These manage the cluster state, intra-node communication, and synchronization
    Indexing service: This manages all indexing operations, initializing all active indices and shards
    Mapping service: This manages the document types stored in the cluster (we'll discuss mapping in Chapter 3, Managing Mapping)
    Network services: These are services such as the HTTP REST service (default on port 9200), the internal ES protocol (port 9300), and the Thrift server (port 9500, applicable only if the Thrift plugin is installed)
    Plugin service: This enables us to enhance the basic ElasticSearch functionality in a customizable manner (it's discussed in Chapter 2, Downloading and Setting Up, for installation and Chapter 12, Plugin Development, for detailed usage)
    River service: This is a pluggable service running within the ElasticSearch cluster, pulling data (or being pushed with data) that is then indexed into the cluster (we'll see it in Chapter 8, Rivers)
    Language scripting services: These allow you to add new language scripting support to ElasticSearch

    Note

    Throughout this book, we'll see recipes that interact with ElasticSearch services. Every base functionality or extended functionality is managed in ElasticSearch as a service.

    Managing your data

    If you are going to use ElasticSearch as a search engine or a distributed data store, it's important to understand concepts of how ElasticSearch stores and manages your data.

    Getting ready

    To work with ElasticSearch data, a user needs basic knowledge of data management and of the JSON data format, which is the lingua franca for working with ElasticSearch data and services.

    How it works...

    Our main data container is called index (plural indices) and it can be considered as a database in the traditional SQL world. In an index, the data is grouped into data types called mappings in ElasticSearch. A mapping describes how the records are composed (fields).

    Every record that must be stored in ElasticSearch must be a JSON object.

    Natively, ElasticSearch is a schema-less data store: when you insert records, it processes them, splits them into fields, and updates the schema to manage the inserted data.
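    As a sketch of this dynamic behavior, the following snippet infers a field type for each value of a JSON record, much as ElasticSearch does when it updates a mapping on insert. This is a rough illustration, not the real ElasticSearch mapper, and the type names are only a subset of the real ones.

```python
def infer_mapping(record, prefix=""):
    """Walk a JSON object and guess a field type per leaf,
    mimicking (very roughly) ElasticSearch dynamic mapping."""
    mapping = {}
    for key, value in record.items():
        name = prefix + key
        if isinstance(value, dict):
            # Nested objects become dotted field paths
            mapping.update(infer_mapping(value, name + "."))
        elif isinstance(value, bool):   # bool before int: bool is an int subclass
            mapping[name] = "boolean"
        elif isinstance(value, int):
            mapping[name] = "long"
        elif isinstance(value, float):
            mapping[name] = "double"
        else:
            mapping[name] = "string"
    return mapping

doc = {"title": "ElasticSearch Cookbook", "pages": 531,
       "meta": {"in_print": True}}
print(infer_mapping(doc))
```

    Indexing a second document with new fields would simply add new entries to the inferred mapping, which is exactly how the schema "grows" on insert.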

    To manage huge volumes of records, ElasticSearch uses the common approach of splitting an index into multiple shards so that they can be spread over several nodes. Shard management is transparent to the users; all common record operations are managed automatically in the ElasticSearch application layer.

    Every record is stored in only one shard; the sharding algorithm is based on the record ID, so many operations that require loading and changing records/objects can be achieved without hitting all the shards, but only the shard (and its replica) that contains your object.

    The following schema compares the ElasticSearch structure with the SQL and MongoDB ones:

    ElasticSearch           SQL                MongoDB
    Index (Indices)         Database           Database
    Shard                   Shard              Shard
    Mapping/Type            Table              Collection
    Field                   Field              Field
    Object (JSON Object)    Record (Tuples)    Record (BSON Object)

    There's more...

    To ensure safe operations on index/mapping/objects, ElasticSearch internally has rigid rules about how to execute operations.

    In ElasticSearch, the operations are divided into:

    Cluster/index operations: All clusters/indices with active writes are locked; changes are first applied to the master node and then to the secondary ones. Read operations are typically broadcast to all the nodes.
    Document operations: All write actions are locked only for the single hit shard. Read operations are balanced on all the shard replicas.

    When a record is saved in ElasticSearch, the destination shard is chosen based on:

    The id (unique identifier) of the record; if the id is missing, it is autogenerated by ElasticSearch
    If routing or parent parameters are defined (we'll see them in the parent/child mapping), the correct shard is chosen by the hash of these parameters
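    The routing described above boils down to hashing a key modulo the number of primary shards. The following sketch uses CRC32 as a stand-in hash; the real ElasticSearch routing uses its own internal hash function, so the shard numbers here are illustrative only.

```python
import zlib

def shard_for(doc_id, num_shards, routing=None):
    """Pick the destination shard: use the routing value if given,
    otherwise the document id (hash modulo primary shard count)."""
    key = routing if routing is not None else doc_id
    # zlib.crc32 stands in for ElasticSearch's internal hash function
    return zlib.crc32(key.encode("utf-8")) % num_shards

# The same id always lands on the same shard...
assert shard_for("order-42", 5) == shard_for("order-42", 5)
# ...and an explicit routing value overrides the id entirely
print(shard_for("order-42", 5, routing="customer-7"))
```

    This is why a get/update by id can hit a single shard, and why documents sharing a routing value (for example, one customer) end up colocated.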

    Splitting an index into shards allows you to store your data on different nodes, because ElasticSearch tries to balance the shard distribution across all the available nodes.

    Every shard can contain up to 2^32 records (about 4.3 billion), so the real limit to a shard's size is its storage size.

    Shards contain your data, and during the search process all the shards are used to calculate and retrieve results; ElasticSearch performance on big data therefore scales horizontally with the number of shards.

    All native records operations (such as index, search, update, and delete) are managed in shards.

    Shard management is completely transparent to the user. Only advanced users tend to change the default shard routing and management to cover custom scenarios. A common custom scenario is the requirement to put a customer's data in the same shard to speed up that customer's operations (search/index/analytics).

    Best practices

    It's best practice not to have shards that are too big in size (over 10 GB), to avoid poor indexing performance due to continuous merging and resizing of index segments.

    It is also not good to over-allocate the number of shards to avoid poor search performance due to native distributed search (it works as map and reduce). Having a huge number of empty shards in an index will consume memory and increase the search times due to an overhead on network and results aggregation phases.

    See also

    Shard on Wikipedia http://en.wikipedia.org/wiki/Shard_(database_architecture)

    Understanding clusters, replication, and sharding

    Related to shard management, there is the key concept of replication and cluster status.

    Getting ready

    You need one or more nodes running to have a cluster. To test an effective cluster, you need at least two nodes (that can be on the same machine).

    How it works...

    An index can have one or more replicas; the shards are called primary if they are part of the primary replica, and secondary ones if they are part of replicas.

    To maintain consistency in write operations, the following workflow is executed:

    The write operation is first executed in the primary shard
    If the primary write is successful, it is propagated simultaneously to all the secondary shards
    If a primary shard becomes unavailable, a secondary one is elected as primary (if available) and then the flow is re-executed
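    The write workflow above can be sketched as a toy model. Here shards are plain dicts and shard failure is simulated with an "unavailable" flag; the real cluster-state machinery is far more involved.

```python
def replicated_write(primary, replicas, key, value):
    """Write to the primary first; only on success propagate the same
    change to every replica. If the primary is down, promote a replica
    and re-execute the flow (toy consistency model)."""
    if primary.get("unavailable"):
        if not replicas:
            raise RuntimeError("no shard copy available")
        # Elect the first replica as the new primary and retry
        primary, replicas = replicas[0], replicas[1:]
    primary[key] = value
    for replica in replicas:
        replica[key] = value
    return primary

primary, r1, r2 = {}, {}, {}
replicated_write(primary, [r1, r2], "doc1", {"title": "test"})
print(r2["doc1"])  # the write reached every replica
```

    The promotion branch is what makes a one-replica setup survive the loss of its primary without losing the write path.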

    During search operations, if there are some replicas, a valid set of shards is chosen randomly between primary and secondary to improve its performance. ElasticSearch has several allocation algorithms to better distribute shards on nodes. For reliability, replicas are allocated in a way that if a single node becomes unavailable, there is always at least one replica of each shard that is still available on the remaining nodes.

    The following figure shows some examples of possible shards and replica configuration:

    Replication has a cost: it increases the indexing time due to data node synchronization, which is the time spent propagating the changes to the secondary shards (mainly in an asynchronous way).

    Note

    To prevent data loss and to have high availability, it's good to have at least one replica; your system can then survive a node failure without downtime and without loss of data.

    There's more...

    Related to the concept of replication, there is the cluster status indicator that will show you information on the health of your cluster. It can cover three different states:

    Green: This shows that everything is okay
    Yellow: This means that some replica shards are missing, but you can still work on your cluster
    Red: This indicates a problem, as some primary shards are missing
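    A minimal sketch of how such a status could be derived from shard allocation. The data model here is hypothetical (each shard reports whether its primary is assigned and how many replicas are allocated); it is not the real cluster-state structure, only an illustration of the green/yellow/red logic.

```python
def cluster_status(shards):
    """Derive green/yellow/red from a list of shard allocation states.
    Each shard is a dict like:
      {"primary": True, "replicas_assigned": 1, "replicas_expected": 1}
    (an illustrative model of the cluster health rules)."""
    if any(not s["primary"] for s in shards):
        return "red"      # a primary is missing: some data is unavailable
    if any(s["replicas_assigned"] < s["replicas_expected"] for s in shards):
        return "yellow"   # all primaries ok, but some replicas unassigned
    return "green"

shards = [{"primary": True, "replicas_assigned": 0, "replicas_expected": 1}]
print(cluster_status(shards))  # → yellow
```

    Note the precedence: a single missing primary makes the whole cluster red, regardless of how healthy the other shards are.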

    Solving the yellow status

    Mainly, yellow status is due to some shards that are not allocated.

    If your cluster is in the recovery status (meaning that it's starting up and checking the shards before they are online), you need to wait until the shards' startup process ends.

    After the recovery has finished, if your cluster is still in the yellow state, you may not have enough nodes to contain your replicas (for example, the number of replicas may be bigger than the number of your nodes). To fix this, you can reduce the number of replicas or add the required number of nodes. A good practice is to ensure that the total number of nodes is not lower than the maximum number of replicas present plus one.
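    The rule of thumb above can be stated as a small check: with N nodes, at most N - 1 replicas per shard can be allocated, because a replica never shares a node with its primary. The function names below are illustrative, not an ElasticSearch API.

```python
def max_allocatable_replicas(num_nodes):
    """A replica cannot live on the same node as its primary,
    so at most num_nodes - 1 replicas per shard can be allocated."""
    return max(num_nodes - 1, 0)

def will_stay_yellow(num_nodes, num_replicas):
    """True if the cluster cannot allocate all the configured replicas
    and will therefore remain yellow after recovery."""
    return num_replicas > max_allocatable_replicas(num_nodes)

print(will_stay_yellow(num_nodes=1, num_replicas=1))  # → True
print(will_stay_yellow(num_nodes=2, num_replicas=1))  # → False
```

    This is why a single-node cluster with the default of one replica sits permanently in yellow: adding a second node (or setting replicas to 0) turns it green.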

    Solving the red status

    This means that you have lost data: one or more primary shards are missing.

    To fix this, try to restore the missing node(s). If your node restarts and the system goes back to the yellow or green status, you are safe. Otherwise, you have lost data and your cluster is not usable; in this case, delete the index/indices and restore them from backups or snapshots (if you have made them) or from other sources. To prevent data loss, I suggest always having at least two nodes and the number of replicas set to 1 as good practice.

    Note

    Having one or more replicas on different nodes on different machines allows you to have a live backup of your data, which always stays updated.

    See also

    The Setting up different node types recipe in Chapter 2, Downloading and Setting Up.

    Communicating with ElasticSearch

    You can communicate with your ElasticSearch server using several protocols. In this recipe, we will take a look at the main ones.

    Getting ready

    You will need a working instance of the ElasticSearch cluster.

    How it works...

    ElasticSearch is designed to be used as a RESTful server, so the main protocol is HTTP, usually on port 9200 and above. It also supports other protocols, such as the native and Thrift ones.

    Many others are available as extension plugins, but they are seldom used, such as memcached, couchbase, and WebSocket. (If you want to find out more about the transport layers, search for elasticsearch transport on GitHub.)

    Every protocol has advantages and disadvantages. It's important to choose the correct one depending on the kind of application you are developing. If you are in doubt, choose the HTTP protocol, which is the standard protocol and is easy to use.

    Choosing the right protocol depends on several factors, mainly architectural and performance related. The following schema summarizes the advantages and disadvantages of each protocol. If you use the official ElasticSearch clients, switching from one protocol to another is generally a simple setting in the client initialization.

    Protocol: HTTP (type: text)

    Advantages: It is frequently used; the API is safe and has general compatibility across different versions of ES (although JSON is suggested)

    Disadvantages: HTTP overhead

    Protocol: Native (type: binary)

    Advantages: It is a fast network layer; it is programmatic; it is best for massive indexing operations

    Disadvantages: If the API changes, it can break applications; it requires the same version as the ES server; it is available only on the JVM

    Protocol: Thrift (type: binary)

    Advantages: Similar to HTTP

    Disadvantages: It depends on the Thrift plugin

    Chapter 2. Downloading and Setting Up

    In this chapter, we will cover the following topics:

    Downloading and installing ElasticSearch
    Setting up networking
    Setting up a node
    Setting up for Linux systems
    Setting up different node types
    Installing plugins in ElasticSearch
    Installing a plugin manually
    Removing a plugin
    Changing logging settings

    Introduction

    This chapter explains how to install and configure ElasticSearch, from a single developer machine to a big cluster, giving you hints on how to improve performance and skip misconfiguration errors.

    There are different options to install ElasticSearch and set up a working environment for development and production.

    When testing ElasticSearch in a development cluster, the standard configuration does not require any changes. However, when moving to production, it is important to configure the cluster properly based on your data and use cases. The setup step is very important because a bad configuration can lead to bad results and poor performance, and it can even kill your server.

    In this chapter, the management of ElasticSearch plugins is also discussed: installing, configuring, updating, and removing.

    Downloading and installing ElasticSearch

    ElasticSearch has an active community and the release cycles are very fast.

    Because ElasticSearch depends on many common Java libraries (Lucene, Guice, and Jackson are the most famous), the ElasticSearch community tries to keep them updated and fixes bugs that are discovered in them and the ElasticSearch core. The large user base is also a source of new ideas and features to improve ElasticSearch use cases.

    For these reasons, if possible, the best practice is to use the latest available release (usually the most stable one with the fewest bugs).

    Getting ready

    You need an ElasticSearch supported operating system (Linux / Mac OS X / Windows) with JVM 1.7 or above installed. A web browser is required to download the ElasticSearch binary release.

    How to do it…

    In order to download and install an ElasticSearch server, we will perform the following steps:

    Download ElasticSearch from the web. The latest version is always downloadable at http://www.elasticsearch.org/download/. Different versions are available for different operating systems:
    elasticsearch-{version-number}.zip: This is used for both Linux (or Mac OS X) and Windows operating systems
    elasticsearch-{version-number}.tar.gz: This is used for Linux and Mac operating systems
    elasticsearch-{version-number}.deb: This is used for Debian-based Linux distributions (this also covers the Ubuntu family); it can be installed with the command dpkg -i elasticsearch-{version-number}.deb
    elasticsearch-{version-number}.rpm: This is used for Red Hat-based Linux distributions (this also covers the CentOS family); it can be installed with the command rpm -i elasticsearch-{version-number}.rpm

    Note

    These packages contain everything to start using ElasticSearch. At the time of writing this book, the latest and most stable version of ElasticSearch is 1.4.0. To check whether this is the latest available version, please visit http://www.elasticsearch.org/download/.

    Extract the binary content:
    After downloading the correct release for your platform, the installation consists of extracting the archive to a working directory.

    Note

    Choose a working directory that is safe from charset problems and does not have a long path name, in order to prevent problems when ElasticSearch creates its directories to store index data.

    For the Windows platform, a good directory can be c:\es, while on Unix and Mac OS X, you can use /opt/es.