Mastering Kibana 6.x - Anurag Srivastava - E-Book


Description

Get to grips with Kibana and its advanced functions to create interactive visualizations and dashboards

Key Features



  • Explore visualizations and perform histograms, stats, and map analytics
  • Unleash X-Pack and Timelion, and learn alerting, monitoring, and reporting features
  • Manage dashboards with Beats and create machine learning jobs for faster analytics

Book Description



Kibana is one of the most popular tools among data enthusiasts for slicing and dicing large datasets and uncovering Business Intelligence (BI) with the help of its rich and powerful visualizations.



To begin with, Mastering Kibana 6.x quickly introduces you to the features of Kibana 6.x, before teaching you how to create smart dashboards in no time. You will explore metric analytics and graph exploration, followed by understanding how to quickly customize Kibana dashboards. In addition to this, you will learn advanced analytics such as maps, hits, and list analytics. All this will help you enhance your skills in running and comparing multiple queries and filters, influencing your data visualization skills at scale.



With Kibana’s Timelion feature, you can analyze time series data with histograms and stats analytics. By the end of this book, you will have created a speedy machine learning job using X-Pack capabilities.

What you will learn



  • Create unique dashboards with various intuitive data visualizations
  • Visualize Timelion expressions with added histograms and stats analytics
  • Integrate X-Pack with your Elastic Stack in simple steps
  • Extract data from Elasticsearch for advanced analysis and anomaly detection using dashboards
  • Build dashboards from web applications for application logs
  • Create monitoring and alerting dashboards using Beats

Who this book is for



Mastering Kibana 6.x is for you if you are a big data engineer, DevOps engineer, or data scientist aspiring to go beyond data visualization at scale and gain maximum insights from your large datasets. Basic knowledge of the Elastic Stack will be an added advantage, although it is not mandatory.



Page count: 297

Year of publication: 2018




Mastering Kibana 6.x

 

 

Visualize your Elastic Stack data with histograms, maps, charts, and graphs

 

 

 

 

 

 

 

 

 

 

Anurag Srivastava

 

 

 

 

 

 

 

 

 

 

 

BIRMINGHAM - MUMBAI

Mastering Kibana 6.x

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Pravin Dhandre
Acquisition Editor: Viraj Madhav
Content Development Editor: Karan Thakkar
Technical Editor: Sagar Sawant
Copy Editors: Dhanya Baburaj, Shaila Kusanale, Dipti Mankame, Laxmi Subramanian
Project Coordinator: Nidhi Joshi
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Graphics: Jisha Chirayil
Production Coordinator: Nilesh Mohite

First published: July 2018

Production reference: 1310718

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78883-103-1

www.packtpub.com

 
mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

PacktPub.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Contributors

About the author

Anurag Srivastava has been a senior technical lead for 11 years at a multinational software company, working on web-based application development. He has led teams and handled clients for 7 years of his professional career. Proficient in the design and deployment of scalable applications, he holds multiple certifications in machine learning and data science using Python. He is well experienced with the Elastic Stack (Elasticsearch, Logstash, and Kibana), creating dashboards from system metrics data, log data, application data, and relational databases.

 

 

 

About the reviewers

Saurabh Chhajed is a certified Spark and Hadoop developer with 8 years of professional experience in enterprise application development and big data analytics, using the latest frameworks, tools, and design patterns. He has extensive experience of working with Agile and Scrum methodologies, and enjoys acting as an evangelist for various big data frameworks and machine learning. When not working, he enjoys traveling and sharing his experiences on his blog, SAURZCODE.

 

 

Sharath Kumar M N is the author of Learning Elastic Stack 6.0, which was named one of the Best Elasticsearch Books of All Time by BookAuthority (bookauthority.org). He completed his master's in computer science at the University of Texas at Dallas, USA. He is currently working as a big data architect at CA Technologies. An avid speaker, he has given several tech talks at conferences such as the Oracle Code event. His new interests are DevOps and AIOps.

 

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Mastering Kibana 6.x

Packt Upsell

Why subscribe?

PacktPub.com

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the color images

Conventions used

Get in touch

Reviews

Revising the ELK Stack

What is ELK Stack?

Elasticsearch

Logstash

Kibana

Beats

Installing the ELK Stack

Elasticsearch

Installing Elasticsearch using a TAR file

Installing Elasticsearch with Homebrew

Installing Elasticsearch with MSI Windows Installer

Installing Elasticsearch with the Debian package

Installing Elasticsearch with the RPM package

Logstash

Using apt package repositories

Using yum package repositories

Kibana

Installing Kibana using .tar.gz

Installing Kibana using the Debian package

Installing Kibana using rpm

Installing Kibana on Windows

Beats

Packetbeat

Metricbeat

Filebeat

Winlogbeat

Heartbeat

ELK use cases

Log management

Security monitoring and alerting

Web scraping

E-commerce search solutions

Full text search

Visualizing data

Summary

Setting Up and Customizing the Kibana Dashboard

Setting up the stage

Configuring Logstash to fetch data from the Apache log file

Outputting the log data into Elasticsearch

Configuring Kibana to read the Elasticsearch index

Creating demo visualizations with Apache log data

Creating the dashboard

Customizing the dashboard

Editing the visualization

Changing the title by customizing the panel

Moving the visualization to full screen

Deleting the visualization from the dashboard

Changing the colors of the visualization

Dragging and dropping visualizations on a desired location on the dashboard

Resizing the visualization as per our requirements

Exporting CSV data from the visualization

Getting the Elasticsearch request, response, and statistics

Summary

Exploring Your Data

Kibana Discover

Discovering data using Kibana Discover

Configuring Packetbeat to push packet data into Elasticsearch

Configuring Kibana to read the Elasticsearch index with packet logs

Exploring Kibana Discover to access packet data

Showing the required fields

Applying the time filter

Elasticsearch query DSL

Filter

Saving and opening searches

Saving the result

Opening the result

Sharing results

Field data statistics

Summary

Visualizing the Data

Creating visualizations

Basic charts

Data

Maps

Time series

Other

Pie charts

Metric aggregation

Bucket aggregation

Creating a pie chart

Adding another dimension to the pie chart

Bar charts

Metric aggregation

Bucket aggregation

Creating a bar chart

Area charts

Creating an area chart

Data metrics

Creating a data metric

Data tables

Creating the data table

Tag clouds

Creating a tag cloud

Markdown

Creating a markdown visualization

Sharing visualizations

Summary

Dashboarding to Showcase Key Performance Indicators

Creating the dashboard

Arranging visualizations

Moving visualizations

Resizing visualizations

Removing visualizations

Showing in full screen

Showing visualization data

Modifying the visualization

Saving the dashboard

Sharing the dashboard

Sharing the saved dashboard

Sharing the snapshot

Cloning the dashboard

Exploring the dashboard

The search query

Adding filters

Applying the time filter

Clicking on visualizations

Summary

Handling Time Series Data with Timelion

Timelion interface

Timelion expression

.es function parameters

Chainable methods

.sum()

.avg()

.min()

.max()

.log()

.abs()

.divide()

.multiply()

.derivative()

.bars()

.color()

.label()

.legend()

.movingaverage()

.trend()

.range()

.precision()

Data source functions

Elasticsearch

Static/value

World bank

Setting the offset for data sources

Saving Timelion graph

Timelion sheet option

Deleting Timelion sheet

Timelion help

Function reference

Keyboard tips

Timelion auto-refresh

Summary

Interact with Your Data Using Dev Tools

Console

Copy as cURL

Auto indent

Multiple requests in console

Profiling queries

Query profile

Aggregation profile

Grok debugger

Summary

Tweaking Your Configuration with Kibana Management

Index pattern

Creating the index pattern

Setting the default index pattern

Refreshing index pattern fields

Deleting an index pattern

Managing fields

String

Dates

Geographic point field

Numbers

Saved objects

Dashboards

Searches

Visualizations

Advanced settings

xPack:defaultAdminEmail

search:queryLanguage

search:queryLanguage:switcher:enable

dateFormat

dateFormat:tz

dateFormat:dow

defaultIndex

Reporting

Security

Roles

Users

Watcher

Creating the watch

Threshold alert

Advanced watch

Deleting the watch

Summary

Understanding X-Pack Features

Installing X-Pack

Installing X-Pack into Elasticsearch

Installing X-Pack into Kibana

Features of X-Pack

Monitoring

Elasticsearch monitoring

Kibana monitoring

Security settings

Users

Roles

Machine learning

Other options of X-Pack

Application Performance Monitoring

Logging

Apache logs

MySQL logs

Nginx logs

System logs

Metrics

Apache metrics

Docker metrics

Kubernetes metrics

MySQL metrics

Nginx metrics

Redis metrics

System metrics

Summary 

Machine Learning with Kibana

Machine learning jobs

Single metric Jobs

Multi-metric jobs

Population Jobs

Advanced Jobs

Create a machine learning job

Data visualizer

Single metric Job

Managing jobs

Job settings

Job config

Datafeed

Counts

JSON

Job messages

Datafeed preview

Anomaly explorer

Single metric viewer

Multi metric job

Explore multi metric job result

Population job

Summary 

Create Super Cool Dashboard from a Web Application

JDBC input plugin

Scheduling

Maintaining the last SQL value 

Fetch size

Configuring Logstash for database input

Creating a dashboard using MySQL data

Creating visualizations

Total blog and top blog count

Blogger-wise blog counts

Tag cloud for blog categories

Blogger name-category-views-blog pie chart

Tabular view of blog details

Create dashboard

Summary

Different Use Cases of Kibana

Time-series data handling

Conditional formatting

Tracking trends

A visual builder for handling time series data

GeoIP for Elastic Stack

Ingest node

GeoIP with Packetbeat data

Summary

Creating Monitoring Dashboards Using Beats

Configuring the Beats

Filebeat

Configuring Filebeat

Metricbeat

Configuring Metricbeat

Enabling the modules using the metricbeat.yml file

Enabling the modules from the modules.d directory

Packetbeat

Configuring Packetbeat

Creating visualizations using Beat data

Visualization using Filebeat

Visualization using Metricbeat

Visualization using Packetbeat

Creating the dashboard

Importing Beat dashboards

Importing dashboards in Filebeat

Importing dashboards in Metricbeat

Importing dashboards in Packetbeat

Summary

Best Practices

Requirement of test environment

Picking the right time filter field

Avoiding large document indexing

Avoiding sparsity

Avoiding unrelated data in the same index

Normalizing the document

Avoiding types in Indices

Avoiding wildcard searches

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

Kibana is a powerful visualization tool that can be used to solve different types of problems. Its basic use is log management, and it is mostly used for log management because it is quite difficult to handle logs without a proper tool that helps us explore, filter, search, and visualize them. We can also use Kibana in many other areas, such as security monitoring and alerting, where we use the tool to figure out any suspicious activity or attack. Machine learning is another important feature; it was introduced in Kibana 5.4, and it gives us the luxury of applying machine learning algorithms directly to index pattern data without any other software dependency.

The objective of this book is first to introduce the reader to the basics of Kibana, such as installation, core functionality, and log management; then to explain some complex topics, such as Timelion and machine learning; and finally to provide practical walkthroughs of dashboard setup, creating dashboards using Beats and then using RDBMS data. So, we can say that this book is a complete package that covers almost every aspect of Kibana.

Who this book is for

This book is for system admins, data analysts, programmers, and anyone who needs a powerful dashboard built on any sort of data. If you want complete insight into Kibana and how we can use it to solve data exploration problems, you can refer to this book. This book is not a Kibana manual, but a solution-oriented approach through which readers can get ideas for solving the problems at hand after learning the basics of Kibana. No prior Kibana knowledge is required for this book.

What this book covers

Chapter 1, Revising the ELK Stack, explains the details of the ELK Stack, which is now known as the Elastic Stack. Although they have all been built to work exceptionally well together, each one is a separate project driven by the open source vendor Elastic. Through this chapter, the reader will get a complete picture of these software components and will be able to figure out how we can combine them to achieve different use cases.

Chapter 2, Setting Up and Customizing the Kibana Dashboard, shows how to customize Kibana visualizations by adding titles, resizing panels, changing colors and opacity, modifying legends, and so on. It also explains how we can embed the dashboard in an existing application. By tweaking these features, we can create more meaningful and impactful dashboards.

Chapter 3, Exploring Your Data, introduces the Discover tab functionalities, such as the search bar, time filter, field selector, data histogram, and log view. The Discover option provides us with a way to search and select the required fields from our dataset. It gives us a complete picture of the Elasticsearch data that is loaded into Kibana.

Chapter 4, Visualizing the Data, covers the Kibana Visualize page, where we can create, modify, and view our own custom visualizations. There are different types of visualizations, ranging from vertical bar and pie charts to tile maps and data tables, and all of them can be created using the Kibana Visualize option. Visualizations can also be shared with other users who have access to the Kibana instance. In this chapter, the reader will learn how to create various types of data visualization, such as vertical bars, pie charts, tile maps, data tables, and tag clouds.

Chapter 5, Dashboarding to Showcase Key Performance Indicators, shows how, with a dashboard, we can combine multiple visualizations onto a single page. We can filter them by providing a search query or by selecting filters by clicking elements in the visualization. Dashboards are useful when we want to get an overview of logs and make correlations among various visualizations and logs. We can also export CSV data from Kibana's data tables.

Chapter 6, Handling Time Series Data with Timelion, introduces Timelion, a time series visualization plugin for Kibana that enables us to combine independent data sources within the same visualization. As with normal visualizations in Kibana, we can visualize Timelion expressions from the Visualize tab. It provides various features, such as function chaining, analyzing trends, data formatting, and performing basic calculations.

Chapter 7, Interact with Your Data Using Dev Tools, covers Dev Tools, which contains development tools that we can use to interact with data in Kibana. The Console plugin of Kibana Dev Tools provides a UI to interact with the REST API of Elasticsearch. Console has two main areas: the editor, where we can compose requests to Elasticsearch, and the response pane, which displays the responses to those requests.

Chapter 8, Tweaking Your Configuration with Kibana Management, covers the Kibana Management interface, which is used to perform the runtime configuration of Kibana: the initial setup and ongoing configuration of index patterns, advanced settings that tweak the behavior of Kibana itself, and the various objects that we can save throughout Kibana, such as searches, visualizations, and dashboards.

Chapter 9, Understanding X-Pack Features, shows how to set up X-Pack and use its different features, such as security, alerting, monitoring, reporting, and machine learning. A default ELK setup does not include these features, and to use X-Pack we need to purchase a license. X-Pack gives us the means to secure the ELK Stack with user roles and permissions.

Chapter 10, Machine Learning with Kibana, introduces machine learning, the science of getting computers to act without being explicitly programmed. Applying machine learning to a dataset normally requires a programming language such as R or Python, but Kibana, with X-Pack, provides a tab for creating and managing machine learning jobs. We can apply machine learning to any time-based dataset and get the output in the Kibana UI. Using machine learning, we can detect anomalies, find the root cause of a problem, easily forecast future trends, and find many answers in our data.

Chapter 11, Create Super Cool Dashboard from a Web Application, covers how we can create a super cool dashboard from an existing web application through a practical example. Here, I will walk through the application data flow from the database to Kibana, and then from Kibana visualizations to the dashboard. The dashboard can be used independently, or we can embed it in our web application.

Chapter 12, Different Use Cases of Kibana, covers different important use cases of Kibana, such as handling time series data, where we will cover conditional formatting, tracking trends, and so on. After that, we will cover how to work with the visual builder to handle time series data, and then GeoIP for the Elastic Stack and how we can plot data on maps.

Chapter 13, Creating Monitoring Dashboards Using Beats, introduces Beats, which work as data shippers, and explains how to create a quick monitoring dashboard using them. We will get to know the different types of Beats, such as Metricbeat, Packetbeat, and Filebeat. I will cover each step, from Beats configuration to dashboard creation, so that by the end of the chapter the reader will be able to create a quick monitoring dashboard using Beats.

Chapter 14, Best Practices, covers the different best practices that we need to follow while working with the Elastic Stack. By following these best practices, we can get optimum performance from our Elastic Stack setup.

To get the most out of this book

Although it is not required, it would be beneficial if you have a basic knowledge of charts.

You should have access to a system where you can install the Elastic Stack and follow the instructions given in the book.

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/MasteringKibana6x_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, path names, dummy URLs, user input, and Twitter handles. Here is an example: "To run Logstash, we need to install Logstash and edit the configuration file logstash.conf."

A block of code is set as follows:

input { file { path => "/var/log/apache2/access.log" } }

Any command-line input or output is written as follows:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.tar.gz

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "To get the statistics, we need to select Statistics from the dropdown."

 

Warnings or important notes appear like this.
Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.

Revising the ELK Stack

Although this book is about Kibana, it doesn't make any sense if we are not aware of the complete Elastic Stack (ELK Stack), including Elasticsearch, Kibana, Logstash, and Beats. In this chapter, you are going to learn the basic concepts of each of these software components, their installation, and their use cases. We cannot use Kibana to its full strength unless we know how to get proper data, filter it, and store it in a format that we can easily use in Kibana.

Elasticsearch is a search engine built on top of Apache Lucene, which is mainly used for storing schemaless data and searching it quickly. Logstash is a data pipeline that can take data from practically any source and send it to any destination; we can also filter that data as per our requirements. Beats are single-purpose data shippers that run on individual servers and send data to a Logstash server or directly to an Elasticsearch server. Finally, Kibana uses the data stored in Elasticsearch to create beautiful dashboards using different types of visualization options, such as graphs, charts, histograms, word tags, and data tables.

In this chapter, we will be covering the following topics:

What is ELK Stack?

The installation of Elasticsearch, Logstash, Kibana, and Beats

ELK use cases

What is ELK Stack?

ELK Stack is a stack of three different open source software products—Elasticsearch, Logstash, and Kibana. Elasticsearch is a search engine developed on top of Apache Lucene. Logstash is basically used for data pipelining, where we can get data from any data source as an input, transform it if required, and send it to any destination as an output. In general, we use Logstash to push data into Elasticsearch. Kibana is a dashboard or visualization tool, which can be configured with Elasticsearch to generate charts, graphs, and dashboards using our data.

We can use ELK Stack for different use cases, the most common being log analysis. Other than that, we can use it for business intelligence, application security and compliance, web analytics, fraud management, and so on.

In the following subsections, we are going to be looking at ELK Stack's components.

Elasticsearch

Elasticsearch is a full text search engine that can be used as a NoSQL database and as an analytics engine. It is easy to scale, schemaless, and near real time, and it provides a RESTful interface for different operations. It uses inverted indexes for data storage. There are different language clients available for Elasticsearch, as follows:

Java

PHP

Perl

Python

.NET

Ruby

JavaScript

Groovy
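As a toy illustration of the inverted index storage mentioned above, here is a sketch for intuition only (not how Elasticsearch is implemented internally): each term maps to the set of documents that contain it, so a term lookup immediately yields the matching documents.

```python
# Toy inverted index: term -> set of document IDs containing the term.
# A sketch for intuition only, not Elasticsearch's actual implementation.
docs = {
    1: "kibana visualizes elasticsearch data",
    2: "logstash ships data to elasticsearch",
}

inverted = {}
for doc_id, text in docs.items():
    for term in text.split():
        inverted.setdefault(term, set()).add(doc_id)

# Looking up a term returns the matching document IDs directly.
print(sorted(inverted["elasticsearch"]))  # prints [1, 2]
```

This is why full text search over an inverted index is fast: the engine intersects precomputed term-to-document sets instead of scanning every document.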

The basic components of Elasticsearch are as follows:

Cluster

Node

Index

Type

Document

Shard

Logstash

Logstash is basically used for data pipelining, through which we can take input from different sources and output to different data sources. Using Logstash, we can clean the data through filter options and mutate the input data before sending it to the output source. Logstash has different adapters to handle different applications, such as for MySQL or any other relational database connection. We have a JDBC input plugin through which we can connect to MySQL server, run queries, and take the table data as the input in Logstash. For Elasticsearch, there is a connector in Logstash that gives us the option to seamlessly transfer data from Logstash to Elasticsearch.
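As a sketch of what such a JDBC input might look like, the following fragment reads a MySQL table into Logstash. The connection string, credentials, driver path, and query here are placeholders for illustration, not values from the book:

```conf
input {
  jdbc {
    # Placeholder connection details; adjust for your environment.
    jdbc_connection_string => "jdbc:mysql://localhost:3306/appdb"
    jdbc_user => "dbuser"
    jdbc_password => "dbpassword"
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM blogs"
  }
}
```

Each row returned by the query becomes one Logstash event, which can then be filtered and sent to Elasticsearch like any other input.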

To run Logstash, we need to install it and edit the configuration file, logstash.conf, which consists of input, filter, and output sections. We need to tell Logstash where it should get its input from, through the input block; what it should do with the input, through the filter block; and where it should send the output, through the output block. In the following example, I am reading an Apache access log and sending the output to Elasticsearch:

input {
  file {
    path => "/var/log/apache2/access.log"
  }
}
filter {
  grok {
    match => { message => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "logs_apache"
    document_type => "logs"
  }
}

The input block shows a file plugin whose path key is set to /var/log/apache2/access.log, which is Apache's log file; this means that we are reading our input from that file. The filter block applies the grok filter, which converts unstructured data into structured data by parsing it.

There are different patterns that we can apply for the Logstash filter. Here, we are parsing the Apache logs, but we can filter different things, such as email, IP addresses, and dates.
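To get a feel for what grok produces, here is a rough Python illustration (not Logstash itself; the regex is a simplified stand-in for the COMBINEDAPACHELOG pattern) that turns an unstructured Apache log line into named, structured fields:

```python
import re

# Simplified stand-in for Logstash's COMBINEDAPACHELOG grok pattern.
# The real pattern matches more variations; this only illustrates how
# one unstructured log line becomes named fields.
APACHE_LOG = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-)'
)

line = '127.0.0.1 - - [10/Jul/2018:12:30:45 +0000] "GET /index.html HTTP/1.1" 200 2326'
fields = APACHE_LOG.match(line).groupdict()
print(fields["verb"], fields["request"], fields["response"])  # prints GET /index.html 200
```

Once the line is structured like this, each field (client IP, HTTP verb, response code, and so on) can be indexed and queried individually in Elasticsearch.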

Kibana

Kibana is the open source dashboarding tool of the ELK Stack, and it is very good for creating different visualizations, charts, maps, and histograms; by integrating different visualizations together, we can create dashboards. As it is part of the ELK Stack, it can read Elasticsearch data quite easily, and it does not require any programming skills: Kibana has a beautiful UI for creating different types of visualizations and dashboards.

When we use Beats, Kibana provides different inbuilt dashboards, automatically creating multiple visualizations that we can customize into a useful dashboard, such as one for CPU and memory usage.

Beats

Beats are basically data shippers that are grouped to do single-purpose jobs. For example, Metricbeat is used to collect metrics for memory usage, CPU usage, and disk space, whereas Filebeat is used to send file data such as logs. They can be installed as agents on different servers to send data from different sources to a central Logstash or Elasticsearch cluster. They are written in Go; they work on a cross-platform environment; and they are lightweight in design. Before Beats, it was very difficult to get data from different machines as there was no single-purpose data shipper, and we had to do some tweaking to get the desired data from servers.

For example, if I am running a web application on the Apache web server and want to run it smoothly, then there are two things that need to be monitored—first, all of the errors from the application, and second, the server's performance, such as memory usage, CPU usage, and disk space. So, in order to collect this information, we need to install the following two Beats on our machine:

Filebeat: This is used to collect log data from the Apache web server incrementally. Filebeat runs on the server and periodically checks for changes in the Apache log file. When the log file changes, Filebeat sends the new entries to Logstash. Logstash receives the data, executes its filter to find the errors, and then saves the filtered data into Elasticsearch.

Metricbeat: This is used to collect server metrics, such as memory usage, CPU usage, and disk space, and it saves this data into Elasticsearch. Because Metricbeat sends a predefined set of metrics that needs no parsing, it sends data directly to Elasticsearch instead of going through Logstash first.

To visualize this data, we can use Kibana to create meaningful dashboards through which we can get complete control of our data.
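For illustration, a minimal filebeat.yml for shipping the Apache log to Logstash in the scenario above might look like this sketch (the paths and the Logstash host are assumptions for a local setup; filebeat.prospectors is the 6.x configuration key):

```yaml
# filebeat.yml - a minimal sketch for shipping Apache logs to Logstash
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/apache2/access.log

output.logstash:
  # assumes Logstash is listening with its beats input on the default port
  hosts: ["localhost:5044"]
```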

Installing the ELK Stack

For a complete installation of the ELK Stack, we first need to install its individual components, which are explained one by one in the following sections.

Elasticsearch

Elasticsearch 6.0 requires Java 8 or later. Before you proceed with the installation of Elasticsearch, please check which version of Java is present on your system by executing the following commands:

java -version

echo $JAVA_HOME

Once Java is set up, we can go ahead and install Elasticsearch. You can find the binaries at www.elastic.co/downloads.

Installing Elasticsearch using a TAR file

First, we will download the Elasticsearch 6.1.3 .tar.gz archive, as shown in the following code block:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.tar.gz

Then, extract it as follows:

tar -xvf elasticsearch-6.1.3.tar.gz

You will then see that a bunch of files and folders have been created. We can now proceed to the bin directory, as follows:

cd elasticsearch-6.1.3/bin

We are now ready to start our single-node cluster:

./elasticsearch
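Once Elasticsearch is up, you can verify that the node is running from another terminal (the default HTTP port is 9200):

```
# Query the root endpoint; a running node replies with JSON
# containing its name, cluster_name, and version
curl -XGET 'http://localhost:9200'
```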

Installing Elasticsearch with Homebrew

You can also install Elasticsearch on macOS through Homebrew, as follows:

brew install elasticsearch

Installing Elasticsearch with MSI Windows Installer

For Windows, we recommend the MSI installer package. It includes a graphical user interface (GUI) that guides you through the installation process.

First, download the Elasticsearch 6.1.3 MSI from https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.msi.

Launch the GUI by double-clicking on the downloaded file. On the first screen, select the deployment directories.

Installing Elasticsearch with the Debian package

On Debian, before you can proceed with the installation process, you may need to install the apt-transport-https package first:

sudo apt-get install apt-transport-https

Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

You can install the elasticsearch Debian package with the following command:

sudo apt-get update && sudo apt-get install elasticsearch

Installing Elasticsearch with the RPM package

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a file named elasticsearch.repo in the /etc/yum.repos.d/ directory for Red Hat-based distributions or in the /etc/zypp/repos.d/ directory for openSUSE-based distributions, containing the following code:

[elasticsearch-6.x]

name=Elasticsearch repository for 6.x packages

baseurl=https://artifacts.elastic.co/packages/6.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

Your repository is now ready for use. You can now install Elasticsearch with one of the following commands:

You can use yum on CentOS and older Red Hat-based distributions:

sudo yum install elasticsearch

You can use dnf on Fedora and other newer Red Hat distributions:

sudo dnf install elasticsearch

You can use zypper on openSUSE-based distributions:

sudo zypper install elasticsearch

Elasticsearch can be started and stopped using the service command:

sudo -i service elasticsearch start

sudo -i service elasticsearch stop
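Note that on distributions running systemd (for example, CentOS 7 and later, or recent openSUSE releases), the equivalent systemctl commands are used instead:

```
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
```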

Logstash

Logstash requires at least Java 8. Before you go ahead with the installation of Logstash, please check the version of Java on your system by running the following commands:

java -version

echo $JAVA_HOME

Using apt package repositories

Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

You may need to install the apt-transport-https package on Debian before proceeding, as follows:

sudo apt-get install apt-transport-https

Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list, as follows:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

After running sudo apt-get update, the repository will be ready for use. You can install Logstash as follows:

sudo apt-get update && sudo apt-get install logstash

Using yum package repositories

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix (for example, logstash.repo):

[logstash-6.x]

name=Elastic repository for 6.x packages

baseurl=https://artifacts.elastic.co/packages/6.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

Your repository is now ready for use. You can install Logstash with the following command:

sudo yum install logstash

Kibana

Starting with version 6.0.0, Kibana only supports 64-bit operating systems.

Installing Kibana using .tar.gz

The Linux archive for Kibana v6.1.3 can be downloaded and installed as follows:

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.1.3-linux-x86_64.tar.gz

Compare the SHA produced by sha1sum or shasum with the published SHA, then extract the archive:

sha1sum kibana-6.1.3-linux-x86_64.tar.gz

tar -xzf kibana-6.1.3-linux-x86_64.tar.gz

The extracted directory is known as $KIBANA_HOME:

cd kibana-6.1.3-linux-x86_64/
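Before starting Kibana with bin/kibana, you can point it at your Elasticsearch node in $KIBANA_HOME/config/kibana.yml. The following sketch shows the relevant settings; the values are the defaults for a local setup:

```yaml
# config/kibana.yml - minimal settings for a local setup
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
```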

Installing Kibana using the Debian package

Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

You may need to install the apt-transport-https package on Debian before proceeding:

sudo apt-get install apt-transport-https

Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

You can install the Kibana Debian package with the following:

sudo apt-get update && sudo apt-get install kibana

Installing Kibana using rpm

Download and install the public signing key, as follows:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a file named kibana.repo in the /etc/yum.repos.d/ directory for Red Hat-based distributions, or in the /etc/zypp/repos.d/ directory for openSUSE-based distributions, containing the following code:

[kibana-6.x]

name=Kibana repository for 6.x packages

baseurl=https://artifacts.elastic.co/packages/6.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

Your repository is now ready for use. You can now install Kibana with one of the following commands:

You can use yum on CentOS and older Red Hat-based distributions:

sudo yum install kibana

You can use dnf on Fedora and other newer Red Hat distributions:

sudo dnf install kibana

You can use zypper on openSUSE-based distributions:

sudo zypper install kibana

Installing Kibana on Windows

Download the .zip Windows archive for Kibana v6.1.3 from https://artifacts.elastic.co/downloads/kibana/kibana-6.1.3-windows-x86_64.zip.

Unzipping it will create a folder named kibana-6.1.3-windows-x86_64, which we will refer to as $KIBANA_HOME. In your Terminal, CD to the $KIBANA_HOME directory; for instance:

CD c:\kibana-6.1.3-windows-x86_64
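Kibana can then be started from $KIBANA_HOME by running the batch script in the bin directory:

```
.\bin\kibana.bat
```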