Design, implement, and publish custom Splunk applications by following best practices
This book is for administrators, developers, and search ninjas who have been using Splunk for some time. Its comprehensive coverage makes this book great for Splunk veterans and newbies alike.
This book will give you an edge through insights that help in day-to-day situations. Working with data from various sources in Splunk and performing analysis on that data can be tricky. With this book, you will learn the best practices for working with Splunk.
You'll learn about tools and techniques that will ease your life with Splunk and ultimately save you time. In some cases, it will adjust your thinking about what Splunk is, and what it can and cannot do.
To start with, you'll get to know the best practices for getting data into Splunk, analyzing data, and packaging apps for distribution. Next, you'll discover the best practices in logging, operations, knowledge management, searching, and reporting. To finish off, we will teach you how to troubleshoot Splunk searches, as well as deployment, testing, and development with Splunk.
If you're stuck or want to find a better way to work with your Splunk environment, this book will come in handy. This easy-to-follow, insightful book contains step-by-step instructions, examples, and scenarios that you can relate to.
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: September 2016
Production reference: 1150916
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78528-139-6
www.packtpub.com
Author: Travis Marlette
Copy Editor: Safis Editing
Reviewer: Chris Ladd
Project Coordinator: Ulhas Kambali
Commissioning Editor: Veena Pagare
Proofreader: Safis Editing
Acquisition Editor: Tushar Gupta
Indexer: Tejal Daruwale Soni
Content Development Editor: Prashanth G Rao
Production Coordinator: Melwyn Dsa
Technical Editor: Murtaza Tinwala
Cover Work: Melwyn Dsa
Travis Marlette has been working with Splunk since Splunk 4.0, and has over 7 years of statistical and analytical experience leveraging both Splunk and other technologies. He cut his teeth in the securities and equities division of the finance industry, routing stock market data and performing transactional analysis on stock market trading, as well as reporting security metrics for SEC and other federal audits.
His specialty is IT operational intelligence, which makes up the lion's share of Splunk work at many major companies. Reporting on security, system-specific, and proprietary application metrics is always a challenge for any company, and with the growth of IT in the modern day, having a specialist like this will become more and more prominent.
Working in finance, Travis gained experience integrating Splunk with some of the newest and most complex technologies, such as:
Travis holds a series of Microsoft, Juniper, Cisco, Splunk, and network security certifications. His knowledge and experience are truly his most valued currency, as demonstrated by every organization that has worked with him to reach its goals.
He has worked with Splunk installations that ingest anywhere from 80 to 150 GB daily, as well as 6 TB daily, and has provided value with each of the installations he's created for the companies he's worked with. In addition, he knows when a project sponsor or manager requires more information about Splunk, and helps them understand what Splunk is and how it can best bring value to their organization without over-committing.
According to Travis, "Splunk is not a 'crystal ball' made of unicorn tears and bottled rainbows, granting wishes and immediate gratification to the person who possesses it. It's an IT platform that requires good resources supporting it, and is limited only by the knowledge and imagination of those resources." With the right resources, that's a good limitation for a company to have.
Splunk acts as a 'Rosetta Stone' of sorts for machines. It takes thousands of machines, all speaking totally different languages at the same time, and translates that into something a human can understand. This, by itself, is powerful.
His passion for innovating new solutions and overcoming challenges by leveraging Splunk and other data science tools has been exercised every day in each of his roles. Those roles span industries, ranging from Bank of New York and Barclays Capital to the Federal Government. Thus far, he and the teams he has worked with have taken each of these organizations further than they have ever been on their Splunk journey. While he continues to bring visibility, add value, consolidate tools, share work, perform predictions, and implement cost savings, he is also often mentioned as the most resourceful, reliable, and goofy person in the organization. Travis says, "A new Splunk implementation is like asking your older brother to turn on a fire hose so you can get a drink of water. Once it's on, just remember to breathe."
Chris Ladd is a staff sales engineer at Splunk. He has been with Splunk for three years and has been a sales engineer for more than a decade. He has earned degrees from Southwestern University and the University of Houston. He resides in Chicago.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
Within the working world of technology, there are hundreds of thousands of different applications, all (usually) logging in different formats. As a Splunk expert, our job is to make all those logs speak human, which is often an impossible task. With third-party applications that provide support, sometimes log formatting is out of our control. Take, for instance, Cisco or Juniper, or any other leading manufacturer.
These devices submit structured data, specific to the manufacturer. There are also applications that we have more influence on, which are usually custom applications built for a specific purpose by the development staff of your organization. These are usually referred to as 'proprietary', 'in-house', or 'home-grown' applications, all of which mean the same thing.
The logs I am referencing belong to proprietary in-house (a.k.a. home-grown) applications that are often part of the middleware, and usually control some of the most mission-critical services an organization can provide.
Proprietary applications can be written in anything, but logging is usually left up to the developers for troubleshooting, and up until now the process of manually scraping log files to troubleshoot quality assurance issues and system outages has been very developer-specific. By that I mean that usually the developer(s) are the only people who truly understand what those log messages mean.
That being said, developers often write their logs in a way that they can understand them, because ultimately it will be them doing the troubleshooting / code fixing when something severe breaks.
As an IT community, we haven't really started looking at the way we log things; instead we have tried to limit the confusion to developers, and then have them help other SMEs who provide operational support understand what is actually happening.
This method has been successful, but time-consuming, and the true value of any SME is reducing a system's MTTR and increasing uptime. With any system, more transactions processed means a larger scale, and after about 20 machines, troubleshooting begins to get more complex and time-consuming with a manual process.
The goal of this book is to give you some techniques to build a bridge in your organization. We will assume you have a basic understanding of what Splunk does, so that we can provide a few tools to make your day-to-day life with Splunk easier without getting bogged down in the vast array of SDKs, matching languages, and APIs. These tools range from intermediate to expert level. My hope is that at least one person can take at least one concept from this book to make their life easier.
Chapter 1, Application Logging, discusses where application data comes from, how that data gets into Splunk, and how Splunk reacts to it. You will develop applications or scripts, and also learn how to adjust Splunk to handle some non-standardized logging. Splunk is only as turnkey as the data you put into it. This means that if you have a 20-year-old application that logs unstructured data in debug mode only, your Splunk instance will not be turnkey. With a system such as Splunk, we can quote some data science experts in saying "garbage in, garbage out".
Chapter 2, Data Inputs, moves on to the kinds of data inputs Splunk uses to bring data in. We see how to enable Splunk to use the input methods it provides, and you will get a brief introduction to configuring data inputs in Splunk.
Chapter 3, Data Scrubbing, discusses how to format all incoming data into a Splunk-friendly format before indexing, in order to ease search querying and knowledge management going forward.
Chapter 4, Knowledge Management, explains some techniques for managing the data coming into your Splunk indexers, the basics of leveraging knowledge objects to enhance search performance, and the pros and cons of pre- and post-indexing field extraction.
Chapter 5, Alerting, discusses the growing importance of Splunk alerting and the different levels at which it can be done. In the current corporate environment, intelligent alerting and alert 'noise' reduction are becoming more important due to machine sprawl, both horizontal and vertical. Later, we will discuss how to create intelligent alerts and manage them effectively, as well as some methods of 'self-healing' I've used in the past and the successes and consequences of those methods, in order to help set expectations.
Chapter 6, Searching and Reporting, talks about the anatomy of a search and some key techniques that help in real-world scenarios. Many people understand search syntax; however, using it effectively (that is, becoming a search ninja) is something much more elusive and continuous. We will also look at real-world use cases to get the point across, such as merging two datasets at search time and making the result sets of two searches match each other in time.
Chapter 7, Form-Based Dashboards, discusses how to create form-based dashboards, leveraging $foo$ variables as selectors to pass information to another search or another dashboard, and how to create an effective drill-down effect.
Chapter 8, Search Optimization, shows how to optimize dashboards to increase performance, which ultimately affects how quickly dashboards load results. We do that by adjusting search queries and leveraging summary indexes, the KV Store, accelerated searches, and data models, to name a few.
Chapter 9, App Creation and Consolidation, discusses how to take a series of apps from Splunkbase, as well as any user-created dashboards, and consolidate them into a single Splunk app for ease of use. We also talk about how to adjust the navigation XML to ease user navigation of such an app.
Chapter 10, Advanced Data Routing, discusses something that is becoming more commonplace in the enterprise. As more people use big data platforms such as Splunk to move data around their networks, things such as firewalls, data stream loss, and sourcetype renaming by environment can become administratively expensive.
You will need at least a distributed deployment of an on-premises Splunk installation for this book, collecting both Linux and Windows information, as well as a heavy forwarder. We will use all of these pieces to show you techniques that add value.
This book is for administrators, developers, and search ninjas who have been using Splunk for some time. Its comprehensive coverage makes this book great for Splunk veterans and newbies alike.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
You can download the code files by following these steps:
You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Splunk-Best-Practices. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/SplunkBestPractices_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at [email protected] with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.
Within the working world of technology, there are hundreds of thousands of different applications, all (usually) logging in different formats. As Splunk experts, our job is to make all those logs speak human, which is often an impossible task. With third-party applications that provide support, sometimes log formatting is out of our control. Take, for instance, Cisco or Juniper, or any other leading application manufacturer. We won't be discussing these kinds of logs in this chapter, but we'll discuss the logs that we do have some control over.
The logs I am referencing belong to proprietary in-house (also known as "home grown") applications that are often part of middleware, and usually they control some of the most mission-critical services an organization can provide.
Proprietary applications can be written in any language. However, logging is usually left up to the developers for troubleshooting, and up until now the process of manually scraping log files to troubleshoot quality assurance issues and system outages has been very developer-specific. By that I mean that usually the developer(s) are the only people who truly understand what those log messages mean.
That being said, oftentimes developers write their logs in a way that they can understand them, because ultimately it will be them doing the troubleshooting/code fixing when something breaks severely.
As an IT community, we haven't really started looking at the way we log things; instead we have tried to limit the confusion to developers, and then have them help other SMEs who provide operational support understand what is actually happening.
This method is successful; however, it is slow, and the true value of any SME lies in reducing a system's MTTR and increasing uptime.
With any system, more transactions processed means a larger scale, which means that, after about 20 machines, troubleshooting begins to get more complex and time-consuming with a manual process.
This is where something like Splunk can be extremely valuable. However, Splunk is only as good as the information that comes into it.
I will say this phrase for the people who haven't heard it yet: "garbage in... garbage out".
There are some ways to turn proprietary logging into a powerful tool, and I have personally seen the value of these kinds of logs. After formatting them for Splunk, they turn into a huge asset in an organization's software life cycle.
I'm not here to tell you this is easy, but I am here to give you some good practices about how to format proprietary logs.
To do that I'll start by helping you appreciate a very silent and critical piece of the application stack.
To developers, the logging mechanism is a very important part of the stack, and the log itself is mission critical. What we haven't spent much time thinking about, before log analyzers, is how to make log events/messages/exceptions more machine-friendly so that we can socialize the information in a system like Splunk, and start to bridge the knowledge gap between development and operations.
The nicer we format the logs, the faster Splunk can reveal information about our systems, saving everyone time and headaches.
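As a loose illustration of what machine-friendly formatting can look like, here is a minimal Python sketch that writes events as key=value pairs on a single timestamped line, a layout that Splunk's default search-time field extraction generally picks up without extra configuration. The logger name, the helper function, and the fields (order_id, status, duration_ms) are hypothetical, chosen only to show the shape of the event, not a prescribed standard.

import logging

# Hypothetical application logger; the name is illustrative only.
logger = logging.getLogger("order_service")
handler = logging.StreamHandler()
# Timestamp first, then the level and the event body on a single line.
handler.setFormatter(logging.Formatter("%(asctime)s level=%(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(message, **fields):
    # Append key="value" pairs so Splunk can extract them as fields at search time.
    pairs = " ".join('{0}="{1}"'.format(k, v) for k, v in fields.items())
    logger.info("%s %s", message, pairs)

# One event, one line, one timestamp, explicit field names.
log_event("order processed", order_id=1138, status="success", duration_ms=42)

An event written this way carries its own field names, so an operations team can search on a status value or chart duration_ms without waiting for a developer to explain the format.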
In this chapter we are briefly going to look at the following topics:
Here I will give some very high-level information on loggers. My intention is not to recommend logging tools, but simply to raise awareness of their existence for those who are not in development, and to allow for independent research into what they do. With the right developer and the right Splunker, the logger turns into something immensely valuable to an organization.
There is an array of different loggers in the IT universe, and I'm only going to touch on a couple of them here. Keep in mind that I only reference these due to the ease of development I've seen from personal experience, and experiences do vary.
I'm only going to touch on three loggers and then move on to formatting, as there are tons of logging mechanisms and the preference truly depends on the developer.
I'm going to take some very broad strokes with the following explanations in order to familiarize you, the Splunk administrator, with the development version of 'the logger'. Each language has its own version of 'the logger', which is really just a function written in that language that writes application-relevant messages to a log file. If you would like to learn more, please either seek out a developer to help you understand the logic better, or get some education on development and logging through independent study.
There are some pretty basic components to logging that we need to understand to learn which type of data we are looking at. I'll start with the four most common ones:
These exceptions can print a huge amount of information into the log, depending on the developer and the framework. The format itself is not easy to control, and in some cases it is not even possible for a developer to manage.
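To make that concrete, the hedged Python sketch below shows how an exception typically lands in a log: the message itself is one line, but the framework appends a multiline traceback to it. The service name and function are invented purely for illustration.

import logging

logging.basicConfig(format="%(asctime)s level=%(levelname)s %(message)s")
logger = logging.getLogger("payment_service")  # hypothetical logger name

def charge_card(amount):
    # Simulate a failure deep inside the application.
    raise ValueError("invalid amount: {0}".format(amount))

try:
    charge_card(-10)
except ValueError:
    # logger.exception() logs at ERROR level and appends the full traceback,
    # so this single logical event spans several physical lines in the file.
    logger.exception("charge failed")

When events like this are brought into Splunk, the event breaking has to keep the traceback attached to its message, for example by breaking on the leading timestamp; otherwise each line of the stack trace shows up as its own meaningless event.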
This is an open source logger that is often used in middleware applications.
