Learning Elastic Stack 7.0

Distributed search, analytics, and visualization using Elasticsearch, Logstash, Beats, and Kibana

Product type: Paperback
Published: May 2019
Publisher: Packt
ISBN-13: 9781789954395
Length: 474 pages
Edition: 2nd Edition
Authors (2): Sharath Kumar, Pranav Shukla
Table of Contents (17 chapters)

Preface
1. Section 1: Introduction to Elastic Stack and Elasticsearch
2. Introducing Elastic Stack
3. Getting Started with Elasticsearch
4. Section 2: Analytics and Visualizing Data
5. Searching - What is Relevant
6. Analytics with Elasticsearch
7. Analyzing Log Data
8. Building Data Pipelines with Logstash
9. Visualizing Data with Kibana
10. Section 3: Elastic Stack Extensions
11. Elastic X-Pack
12. Section 4: Production and Server Infrastructure
13. Running Elastic Stack in Production
14. Building a Sensor Data Analytics Application
15. Monitoring Server Infrastructure
16. Other Books You May Enjoy

The Logstash architecture

The Logstash event processing pipeline has three stages: Inputs, Filters, and Outputs. A Logstash pipeline has two required elements, input and output, and one optional element, the filter:

Inputs create events, Filters modify the input events, and Outputs ship them to the destination. Inputs and outputs support codecs, which let you encode or decode the data as it enters or exits the pipeline, without having to use a separate filter.
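To make these stages concrete, here is a minimal, illustrative pipeline configuration (not one of this book's examples): it reads lines from stdin, decodes them with the json codec, adds a field in the filter stage, and prints the result to stdout. The processed_by field name is arbitrary and only for illustration.

    # Minimal Logstash pipeline: input -> filter -> output
    input {
      stdin {
        codec => json                    # decode each incoming line as a JSON event
      }
    }

    filter {
      mutate {
        add_field => { "processed_by" => "logstash" }   # illustrative field name
      }
    }

    output {
      stdout {
        codec => rubydebug               # pretty-print events as they leave the pipeline
      }
    }

You could run such a configuration with bin/logstash -f <config file>; the same three-block structure carries over from this toy example to real pipelines.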

Logstash uses in-memory bounded queues between pipeline stages (Input to Filter and Filter to Output) by default to buffer events. If Logstash terminates unsafely, any events stored in memory will be lost. To prevent data loss, you can enable Logstash to persist in-flight events to disk by using persistent queues.
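As a sketch of what this looks like in the Logstash settings file (logstash.yml), assuming the queue.* settings available in Logstash 7.x:

    # logstash.yml -- buffer in-flight events on disk instead of only in memory
    queue.type: persisted      # default is "memory"
    queue.max_bytes: 1gb       # upper bound on the disk space the queue may use

With queue.type set to persisted, events that have been read from an input but not yet acknowledged by an output can survive a Logstash restart.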

Persistent queues can be enabled by setting the queue...