How to Pre-Process Logs with Logstash: Part III of “Scalable and Robust Logging for Web Applications”


Header image by Andreas Beer via flickr

 

This article is an introduction to pre-processing logs from multiple sources in logstash before storing them in a data store or analyzing them in real time. Common use cases are unifying time formats across different log sources, anonymizing data, extracting only the interesting information from the logs, as well as tagging and selective distribution.

The first two parts of the series “Scalable and Robust Logging for Web Applications” described how to improve the default Ruby on Rails logger with Log4r and how to transport logs to a central location. This is the third post in a series whose goal is to develop a robust system for logging, monitoring and collection of metrics that can easily scale in terms of throughput – i.e. by adding more application servers – but is also easy to expand to new types of log data – e.g. by adding a new database or importing external data sources – and makes it easy to modify and analyze the data exactly as you need.

 


Scalable and Robust Logging for Web Applications

  1. Log4r for Ruby on Rails
  2. Log Transport and Distribution with Kafka
  3. How to Pre-Process Logs with Logstash
  4. Real Time Analytics with Storm
  5. Server Monitoring with Sensu
  6. Metrics with Graphite
  7. Storing Metrics and Logs
  8. Visualization

 

 

Why Pre-Process Logs?

Why not just pipe everything into a big HDFS cluster and run MapReduce jobs on it? There are a few reasons why pre-processing is essential to make the actual data analysis easier and faster; one of the biggest is data cleanup and unification. Not only is it faster to process the log data once and then store it in a unified format, it is also more secure, as there is only one codebase that needs to be tested instead of every single query that performs standard data manipulation.

 

A big problem with heterogeneous logs is the different format of timestamps. Even worse, some formats may omit the timezone or even the year. Timestamps also sit in different positions within the log lines, which makes sorting, selecting and comparing by time much harder. Just counting the logs between last Monday and Friday requires five different parsers, one per line format, when using the data from the examples above.

 

Standard Rails logs show another big issue with certain kinds of logs. While we got rid of the verbose Rails log using Log4r, other programs might not allow this kind of customization of the logging output. Finding information in logs where each log event stretches across multiple lines is much harder than in well-formatted single-line logs. Furthermore, logs might contain sensitive information that should be kept out of the stored logs, like passwords and user-identifiable data.

Logstash has been built to solve exactly these and many more problems with ease:

 

Introducing Logstash

Logstash is a JRuby-based tool, running on the JVM, that allows pre-processing logs. It has four basic phases – input, decode, filter, output – in which the logs can be annotated, trimmed, unified and modified in many other ways through corresponding plugins. Logstash already comes with a very comprehensive set of default plugins and extending it is very simple due to its modular structure. Another outstanding feature is grok, a “write once, combine everywhere” approach to regexes, which also has a great online interpreter to help with debugging.

Jordan Sissel, the creator of logstash, recently joined Elasticsearch and will continue working on logstash to further “Elasticsearch as the real time analytics and search engine”.

 

Workflow

Input

There are multiple ways to preprocess logs depending on the use-cases and the available resources. The most common ones are:

  1. Logstash is used to process the logs on each server and sends the results directly to the storage.
    + Less data to transport
  2. Each server sends the logs directly to a central logstash instance for processing
    + Central place to make configuration changes.
  3. Each server sends the logs to a log aggregator or pub/sub system. Logstash subscribes and processes logs. The results are written to a data store or back to the pub/sub.
    + Scalability
    + Allows data to be accessed by multiple systems
  4. Each server sends the logs to a storage server, like Hadoop. A subset of the logs is sent to logstash for processing and distribution.
    + Good for very high throughput and use-cases where not all logs need preprocessing or are used for analytics

This shows the versatility of logstash and how it can be used in many different stages of the log processing. This series will focus on option three.

 

Decoding

Logs can come in many different forms and shapes. Not only do logs use different patterns to store their data, some might even arrive already in a structured form like JSON. In this step, structured data is extracted into variables that can later be manipulated and stored.

 

Filter

In this part the data in the variables can be modified, combined and parsed. Common use-cases are unifying date formats and converting timezones, tagging logs based on source or content, anonymizing data, creating checksums, extracting or converting numbers, decoding JSON or XML data as well as creating simple metrics.

 

Output

After extracting, unifying and manipulating the data, logstash supports a vast number of outputs, like databases, TCP and UDP, as well as IRC or PagerDuty. And if the necessary output does not come with logstash, the community is very active and has most likely already built an unofficial output. Worst case, outputs are just Ruby code, so creating a custom output is very easy.

 

 

This series will demonstrate reading input from Kafka, decoding and pre-processing different types of logs, and then outputting the results back to Kafka as well as to Storm, to allow for real-time analysis, to ElasticSearch, to allow querying and visualization via Kibana, and to Hadoop, for long-term storage and large-scale analytics.

 

Getting Started with Logstash

Setting Up Logstash

After downloading the latest version* of logstash, the next – and only – step is to create a config file that tells logstash where to get the data from and what to do with it.

The config file is written in a JSON-like syntax and contains three top-level sections: input, filter and output. Next is a very basic example that reads from stdin and prints to stdout in the rubydebug format:
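The filter section can simply be omitted when no processing is needed:

    input {
      stdin { }
    }

    output {
      # rubydebug pretty-prints each event as a Ruby-style hash
      stdout { codec => rubydebug }
    }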

After unpacking and configuring logstash, the following command starts it:
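Assuming the config from above was saved as logstash.conf in the directory logstash was unpacked to:

    # start the logstash agent with the given config file
    bin/logstash agent -f logstash.conf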

You can now type something like Hello World and press Enter. Logstash should return a JSON Hash/Map with your message and some other useful information like the timestamp and the host:
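The exact values will differ, but the output should look roughly like this:

    {
           "message" => "Hello World",
          "@version" => "1",
        "@timestamp" => "2014-04-23T09:32:01.123Z",
              "host" => "my-laptop"
    }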

 

In the remainder of this section we will set up logstash to consume data from Kafka and process it in multiple ways.

* at the time of this writing the latest version was 1.4.0

 

 

Consume From Kafka 0.8

In order to get data from Kafka, a third-party plugin has to be used. The only one that supports Kafka 0.8 at the moment is logstash-kafka, which is based on jruby-kafka. It requires logstash to be rebuilt with the plugin included, but the author provides a makefile that automates this step.

Building the plugin, however, requires some dependencies like JRuby, Scala, Kafka and jruby-kafka to be available. Furthermore, the plugin does not have any tests. If this seems too much of a risk, an alternative solution is to write a small Java program that uses the default consumer that comes with Kafka and sends the data to logstash via TCP/UDP. Following is a sample logstash.conf where logstash-kafka is used to input data from Kafka 0.8.
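Option names depend on the plugin version; the following sketch follows the logstash-kafka README at the time of writing and assumes a local ZooKeeper and a topic called logs:

    input {
      kafka {
        # ZooKeeper ensemble the Kafka 0.8 brokers are registered with
        zk_connect => "localhost:2181"
        # consumer group this logstash instance joins
        group_id   => "logstash"
        # topic to consume the raw logs from (placeholder name)
        topic_id   => "logs"
      }
    }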

 

 

Parse Logs

As soon as the Kafka messages come in, they have to be converted into key-value pairs. If the messages are in a supported format, they can be converted automatically by using a codec. Supported codecs include JSON, Graphite and multiline, which allows combining multiple lines, like stack traces, into one message, among others.
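For example, if every message on the topic is already a JSON document, attaching the json codec to the input (option names as assumed above) turns it into fields automatically:

    input {
      kafka {
        zk_connect => "localhost:2181"
        topic_id   => "logs"
        # decode each message from JSON into individual fields
        codec      => "json"
      }
    }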

If logs are not in one of these formats, the grok filter is a great way to split up the logs into key-value pairs based on regexes. break_on_match => true will break out of the current grok filter as soon as a matching regex is found. This helps performance, as matching regexes is a pretty expensive operation. Without this flag every regex in the filter would be executed on each line every time, even if a previous one already matched.
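A sketch of a grok filter that tries an Apache access log pattern first and falls back to a syslog line pattern; both patterns ship with logstash's default pattern set, and real deployments will use patterns matching their own log formats:

    filter {
      grok {
        # stop after the first pattern that matches
        break_on_match => true
        match => {
          "message" => [ "%{COMBINEDAPACHELOG}", "%{SYSLOGLINE}" ]
        }
      }
    }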

 

 

Access Data in Logstash

After the input has been parsed, data can be accessed by using square brackets. Assume the parsed data contains a field headers, which is a map of the fields that were sent to the server in the request header. To access the referrer field that was sent in a request, the syntax is [headers][referrer]. Or take a look at another example on the logstash website.
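As an illustration, the nested value could be copied into a top-level field with a mutate filter; the headers and referrer names are just the hypothetical fields from above:

    filter {
      mutate {
        # %{[headers][referrer]} is the sprintf form of the field reference
        add_field => { "referrer" => "%{[headers][referrer]}" }
      }
    }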

When outputting data it is sometimes necessary to name an output based on the value of a field. To, for example, send a message to statsd denoting which response code the request had, the following code can be used. It will send an increment notice for apache.500 when the request that is currently being processed resulted in a 500 Internal Server Error response code.
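A sketch, assuming the grok stage stored the status code in a field named response (as the stock Apache pattern does):

    output {
      statsd {
        # becomes e.g. apache.500 for a request with a 500 response code
        increment => "apache.%{response}"
      }
    }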

 

 

Adding Tags and Using Conditions

Logstash allows tagging certain types of input as well as applying filters only when certain conditions are met.

The first way to add tags as well as types is during the input phase. Types are in essence the same as tags, however there is only one type per input, and it is mainly used for filter activation.
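For example, a Kafka input carrying Rails logs could be marked like this (topic, type and tag names are placeholders, Kafka options as assumed earlier):

    input {
      kafka {
        zk_connect => "localhost:2181"
        topic_id   => "rails"
        # exactly one type per input
        type       => "rails"
        # any number of additional tags
        tags       => [ "production" ]
      }
    }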

 

Logstash also supports the usual conditionals if, else if and else, and can use tags, types and any fields to selectively apply filters.
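A sketch that routes events based on the type and tags assigned above (the tag and field names are again placeholders):

    filter {
      if [type] == "rails" and "production" in [tags] {
        # only production Rails logs get marked for indexing
        mutate { add_tag => [ "index" ] }
      } else if [type] == "apache" {
        mutate { add_tag => [ "metrics" ] }
      } else {
        mutate { add_tag => [ "unknown_source" ] }
      }
    }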

 

 

Anonymize Data from Logs

Some legislations, especially in the EU, require logs to be anonymized to a certain degree, e.g. the raw IP address cannot be stored without consent. Logstash makes this easy by providing an anonymize filter. It takes the fields that should be anonymized as input and hashes them. It also has a special mode for IPv4 addresses that truncates them based on the subnet prefix length.
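A sketch with placeholder field names, hashing an e-mail address and truncating client IPs to their /24 network:

    filter {
      anonymize {
        # replace the value with a salted SHA1 hash
        fields    => [ "user_email" ]
        algorithm => "SHA1"
        key       => "some-static-salt"
      }
      anonymize {
        # keep only the first 24 bits of the IPv4 address
        fields    => [ "clientip" ]
        algorithm => "IPV4_NETWORK"
        key       => "24"
      }
    }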

 

 

Create Simple Metrics

Logstash also allows creating simple metrics from fields. It supports two modes, called meter and timer. Meter counts the occurrences of a field and outputs sliding windows of the rate (events per second) for the last 1, 5 and 15 minutes. Timer is used for getting averages as well as percentiles over the value of a field.

Metrics are flushed separately every 5 seconds, or whatever value is set using flush_interval. The percentiles and counts are reset based on the value of clear_interval, which should always be a multiple of the flush interval to prevent inaccuracies.
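A sketch that meters response codes and times a hypothetical request_time field; the generated metric events get a tag so the outputs can route them separately:

    filter {
      metrics {
        # one rate (1m/5m/15m windows) per response code, e.g. apache.200
        meter          => [ "apache.%{response}" ]
        # averages and percentiles over the request duration
        timer          => [ "request_time", "%{request_time}" ]
        # mark generated metric events for the output stage
        add_tag        => [ "metric" ]
        flush_interval => 5
        clear_interval => 60
      }
    }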

 

There are also other services built with the sole purpose of handling stats and helping to analyze them, for example statsd and graphite. Sending data to these services can be done by outputting only specific fields, as shown in this example for apache logs:
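A sketch, again assuming Apache logs parsed by grok with the status code in a field named response:

    output {
      if [type] == "apache" {
        statsd {
          host      => "localhost"
          namespace => "logstash"
          # one counter per status code (apache.response.200, apache.response.404, ...)
          increment => "apache.response.%{response}"
        }
      }
    }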

 

Save Output to Multiple Destinations

After the data is pre-processed, it needs to be transported to the next stage, be it final storage, indexing, real-time processing or other applications.

Logstash does this in its outputs and has a wide variety of standard outputs as well as many plugins to extend this selection. This article will demonstrate pushing the pre-processed data to ElasticSearch, Hadoop, Kafka and Storm. Outputs can also filter which data they process by matching on tags and fields, as described above in Adding Tags and Using Conditions.

 

ElasticSearch

Save all messages that have the tag index to be indexed by ElasticSearch:
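A minimal sketch; host and index layout have to match the actual cluster setup:

    output {
      if "index" in [tags] {
        elasticsearch {
          host  => "localhost"
          # the common one-index-per-day layout
          index => "logstash-%{+YYYY.MM.dd}"
        }
      }
    }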

This is just a very basic example; newer versions of logstash and ElasticSearch in particular add more features, like templates. The parameters also have to match how the indexes in ElasticSearch are set up. Consult the full documentation of the ElasticSearch output before sending data to an existing cluster.

 

Apache Hadoop / HDFS

Saving data to Hadoop can be accomplished either through elasticsearch-hadoop, if it is already used to index and analyze the same data, or via the logstash-hdfs plugin.

 

Apache Kafka

Kafka is amazing at distributing messages to multiple consumers, which also makes it a great destination for the pre-processed data. Writing back to Kafka uses the same plugin as mentioned above, logstash-kafka.
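A sketch with placeholder broker and topic names; option names follow the logstash-kafka README at the time of writing:

    output {
      kafka {
        # comma-separated list of brokers to bootstrap from
        broker_list => "localhost:9092"
        # topic the pre-processed logs are published to
        topic_id    => "processed-logs"
      }
    }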

 

Apache Storm

There is no predefined way to get data directly into Storm. Two approaches are either using Kafka as an intermediary queue and creating a Kafka consumer spout (spout being the name for data sources in Storm), or using a TCP spout that logstash sends data to directly. Considering a use-case of real-time data analysis, Kafka would introduce unnecessary latency but can help with scaling.
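For the TCP variant, a sketch of the logstash side could look like this, with host and port of the receiving spout as placeholders:

    output {
      tcp {
        host  => "storm.example.com"
        port  => 9999
        mode  => "client"
        # one JSON document per line is easy to parse in the spout
        codec => "json_lines"
      }
    }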

 

 

This concludes a simple introduction to logstash. The logstash documentation has much more detail on each topic and the community is always willing to help, either via email (logstash-users@googlegroups.com or Google’s web interface for the group) or via IRC in #logstash on irc.freenode.net.

 

The following lists some amazing tools that the logstash community built to ease its use, as well as generally useful tools for log processing:

 

Tools

There is a community managed logstash cookbook that contains several useful tips and tricks.

Many more plugins that are not part of the core logstash distribution can be found in the contrib plugins github project.

Using ElasticSearch

ElasticSearch is logstash’s favorite storage/indexing engine, and therefore there are a few nice plugins that work very well with this combination and make it easier to use.

A small web client is shipped with logstash and can be started by adding web to the logstash command, like bin/logstash agent -f logstash.conf web, which will launch the agent and the web client at the same time. This is a quick way to get started, especially when using the embedded ElasticSearch via output { elasticsearch { embedded => true } }.

For more sophisticated applications, elasticsearch-kopf and Kibana are tools to know. Kopf is a tool more focused on administrating the ElasticSearch cluster whereas Kibana is optimized for data analysis and displaying the gained insights.

(Screenshots: elasticsearch-kopf, the elasticsearch-kopf REST query interface, and Kibana displaying data from ElasticSearch.)

 

Using Grok

Writing grok patterns is sometimes as complicated as writing plain old regular expressions. Luckily there are amazing online tools available for both.

Grok Debugger helps with writing grok patterns and also has a nice list of common patterns. For full-blown regular expressions in Ruby, rubular is essential. It has a small cheat-sheet at the bottom, allows evaluating full regexes against test data, and immediately shows matches and captures.

 

Logstash is extendable via JRuby and the community is very supportive of beginners. If there is a tool, plugin or feature missing, mention it on the mailing list or just code away and create a solution yourself.

 

 

I hope this article gave you a good introduction to using logstash to pre-process your logs. The next post in this series will take a look at how to process these logs in real time using Storm.

What do you think? Do you have any comments, improvements, questions, or did you just enjoy the read? Write a comment below, on reddit or HackerNews, and I will try to respond as quickly as possible :)

Help others discover this article as well: Share it with your connections and upvote it on reddit and HackerNews!

 

Subscribe to the blog to get informed immediately when the next post in this series is published!

 


13 thoughts on “How to Pre-Process Logs with Logstash: Part III of ‘Scalable and Robust Logging for Web Applications’”

  1. Joe Lawson Sep 22, 2014 7:23 PM

    Just an update for your article, the kafka-logstash plugin does have tests now and has been in production use for nearly a year at roomkey.com shuffling thousands of logs per second as needed. Further, the plugin is being introduced into the mainline of logstash beginning with version 1.5.

    Thanks for the interest in my plugin and the great writeup as to getting stuff going. I’m sure people find it informative.

  2. Rajesh Antony Oct 21, 2014 10:21 AM

    Thank you, Joe. The nice article is very much appreciated.

  3. Karan GM Oct 28, 2014 5:21 AM

    Hi,

    I am new to the ELK stack. Following is the list of things I want to do.

    We have a distributed setup

    1. I want to centralize logs from all the machines at one place.

    2. I need a UI from where i can view these logs.

    3. I also need to calculate metrics and display them in graph, store the metric datapoints in database.

    Every article I have seen online suggests using Redis (or an equivalent like ZeroMQ or RabbitMQ) to act as a buffer. Firstly, do you really require Redis in the pipeline? Can’t we directly index it from the client itself (or push it to Amazon S3 or Mongo, or store it in a database)? The reason mentioned in the groups is that indexing is CPU intensive and you need a central place where you configure what you want to do with your log file, i.e. outputs.

    If you are OK with spending a few CPU cycles, and you have a configuration file and want to change what you do with the output, you can come up with a script which will either download the file from a central location each time it runs or read the config file from a GFS location. So what I am not able to get here is what real purpose Redis is serving.

    Secondly, I am concerned with Redis scalability. In the docs they have spoken about high availability and the failover mechanism of Redis. This is fine. But don’t you think Redis will become a bottleneck? Is there anything like having a cluster? (I am not OK with having a different master Redis server for each set of clients to scale out – difficult to maintain.) Can you do clustering?

    Thirdly, I want to do monitoring – DB writes/sec, DB reads/sec, number of messages in the queue, the CPU % on machine1, machine2 and so on. I want to store all this data in a DB like MySQL or Mongo and also want to display it in a graph. Is this possible or do I have to use something like Nagios or Sensu?

    Please help me out.

    Regards,
    Karan GM

  4. Patrick Kaddu Mar 20, 2015 6:41 AM

    Great article. Thank you.

  5. Bob Brown Jun 10, 2015 4:35 PM

    Karan,

    ELK can help you centralize logs, Kibana will help you to view the log data and search for it using Elastic Search as the data repository. Logstash has a number of plugins to allow you to gather metrics and ship them to where you like for a special view or special processing.

    The broker [Redis, RabbitMQ, ZeroMQ, etc.] isn’t required, but recommended if you want to be able to scale or handle a significant amount of data. There are plugins to allow you to use AWS SQS.

    The purpose of the broker is to act as a buffer for the data going into logstash. With high volumes of data, without a broker of some kind, you may have contention for the indexing versus the incoming IO. This could create a condition where messages could be lost. Using a broker will allow the data to hang around for a time while you fix the problem or wait until the indexer is available to grab more data.

    Yes, you could certainly write a script to download the file to process it. But then we are defeating the purpose and reason for setting up ELK in the first place. It will do all that for you.

    Yes, normally in production we would set up the broker in a cluster for both failover and high availability. Or technically, AWS SQS would allow Amazon to handle cluster issues in the background and you could just read and write to it through the plugin.

    While we could use ELK to do monitoring, monitoring is usually done through log files. Probably not the best use case. However, you can certainly use logstash-forwarder to forward any data you give to it to statsd, nagios or wherever you want.

    Bob

  6. Bob Brown Jun 10, 2015 4:37 PM

    Typo: monitoring is NOT usually done through log files

  7. Sergio Jun 24, 2015 12:51 AM

    Some kafka output plugin configuration options are deprecated or not supported.
    It would be nice to mark those.

    I realized this while reading the logstash 1.5 documentation:
    https://www.elastic.co/guide/en/logstash/current/plugins-outputs-kafka.html#plugins-outputs-kafka-broker_list

    Regards and thank you for your posts!

  8. Balthasar Schopman Aug 10, 2015 2:23 AM

    A small note on the Kafka output example: the colons at the start of the keys must be removed, e.g. not :broker_list, but broker_list.

  9. shubhi Sep 27, 2016 12:03 AM

    I am new to this. Can I use logstash as a data processing tool in a distributed environment? And how can we implement it?

    • Markus Lachinger Jan 27, 2017 7:16 PM

      Yes, you can either use it on every server and process the logs there or ship the full logs to a central location and then process them through logstash at that point.
