Logging for the masses

(I really need to update the blog template 🙁 )

Problem: there are several sources of logs that you want to consult and search in a centralized way. Those logs should also be correlated to detect events and raise alerts.

At first glance there are two alternatives: Splunk, probably the market leader among logging systems, and ArcSight Logger, already installed in the Poland RDC.

The former is ridiculously expensive (at least for my miserable budget) and the latter is a bureaucratic hell.

Both are expensive, proprietary and closed solutions, so sometimes it pays off to look for an inexpensive, free (as in speech) alternative.

The free solution involves Logstash, Elasticsearch and Kibana for collecting, storing and presenting the logs.

Web Server Logging

We have about 80 log feeds from 15 web applications and 30 servers; the goal is to log everything and be able to search by app, server, date, IP…

The good news is that all those logs follow the same pattern.

The architecture follows this scheme (the configuration files are sanitized):

[Architecture diagram: logstash-forwarder on each server → Logstash (shipper) → Redis → Logstash (indexer) → Elasticsearch → Kibana]

Logstash-forwarder: formerly known as Lumberjack. It is an application that tails logs and sends them over a secure channel to Logstash on a TCP port, keeping an offset for each log.

Logstash (as shipper): receives all the log streams and stores them in a Redis data store.

Redis: it works here as a message queue between shipper and indexer. A thousand times easier to set up than ActiveMQ.

Logstash (as indexer): pops entries from the Redis queue and processes the data: it parses, maps and stores them in an Elasticsearch database.

ElasticSearch: the database where logs are stored, indexed and made searchable.

Kibana: a PHP frontend for ES that allows the creation and customization of dashboards, queries and filters.


Logstash works as both shipper and indexer, so why split those functions into two different processes?

  • Because we don’t want to lose data.
  • Because the indexer can do some serious, CPU intensive tasks per entry.
  • Because the shipper and indexer throughput are different and not synchronized.
  • Because the logs can be unstructured and the matching rules can fail, reporting null pointers and eventually running out of memory, killing the process or turning it into a zombie (as happened when I tried to add some JBoss log4j logs).

For those reasons there is a queue between shipper and indexer, so the infrastructure is resilient to downtimes and the indexer isn’t saturated by the shipper throughput.

Logstash-forwarder configuration

A JSON config file declaring the shipper host, a certificate (shared with the shipper) and which paths are forwarded.

One instance per server

{
  "network": {
    "servers": [ "<SHIPPER>:5000" ],
    "ssl certificate": "/opt/logstash-forwarder/logstash.pub",
    "ssl key": "/opt/logstash-forwarder/logstash.key",
    "ssl ca": "/opt/logstash-forwarder/logstash.pub",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/opt/httpd/logs/App1/access.log",
        "/opt/httpd-sites/logs/App2/access_ssl.log"
      ],
      "fields": { "type": "apache", "app": "App1" }
    },
    {
      "paths": [
        "/opt/httpd/logs/App2/access.log",
        "/opt/httpd/logs/App2/access_ssl.log"
      ],
      "fields": { "type": "apache", "app": "App2" }
    }
  ]
}
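The certificate referenced above is shared between the forwarder and the shipper. As a rough sketch (the CN, the paths and the config location are assumptions, not taken from the real setup), the pair can be generated with openssl and the forwarder started pointing at its JSON config:

# self-signed certificate/key pair, valid 10 years; the CN should match
# the name the forwarder uses to reach the shipper
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -subj "/CN=<SHIPPER>" \
  -keyout /opt/logstash-forwarder/logstash.key \
  -out /opt/logstash-forwarder/logstash.pub

# start the forwarder with the JSON config above (config path is an assumption)
/opt/logstash-forwarder/bin/logstash-forwarder -config /etc/logstash-forwarder.conf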

Logstash as shipper

Another config file, this time in Logstash's own configuration format: it accepts the log streams and stores them in the Redis datastore.

input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/ssl/logstash.pub"
    ssl_key => "/etc/ssl/logstash.key"
    codec => json
  }
}

output {
  stdout { codec => rubydebug }
  redis { host => "localhost" data_type => "list" key => "logstash" }
}
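With Logstash 1.x the shipper is started with the agent command; the install path and config location below are assumptions, adjust them to your layout:

# run Logstash in shipper mode with the config above
/opt/logstash/bin/logstash agent -f /etc/logstash/shipper.conf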


Redis

I think Redis itself is out of the scope of this blog entry; it is really dead easy and the default config was enough. It would need scaling depending on the throughput.
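One thing worth keeping an eye on is the backlog: if the indexer falls behind, the "logstash" list keeps growing. A quick check with redis-cli (the host placeholder matches the indexer config below):

# number of entries waiting in the queue between shipper and indexer
redis-cli -h <REDIS_HOST> llen logstash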

Logstash as indexer

Here the input is the output of the shipper and the output is the ES database; in between sits the filter section where we process the entries (we map them, drop the health checks from the F5 balancers and tag the entries with 503 errors). Yes, the output can be multiple here too: not only do we store the matched entries, but the 503s are also sent to a zabbix output, which in turn sends them to our Zabbix server.

input {
  redis {
    host => "<REDIS_HOST>"
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}

filter {
  grok {
    match => [ "message", "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response:int} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:jsessionid} %{QS:bigippool} %{NUMBER:reqtimes:int}/%{NUMBER:reqtimems:int}" ]
  }
}

filter {
  if [request] == "/f5.txt" {
    drop { }
  }
}

filter {
  if [response] == "503" {
    alter {
      add_tag => [ "zabbix-sender" ]
    }
  }
}

output {
  stdout { }

  elasticsearch {
    cluster => "ES_WEB_DMZ"
  }

  zabbix {
    # only process events with this tag
    tags => "zabbix-sender"

    # specify the hostname or ip of your zabbix server
    # (defaults to localhost)
    host => "<ZABBIX_SERVER>"

    # specify the port to connect to (default 10051)
    port => "10051"

    # specify the path to zabbix_sender
    # (defaults to "/usr/local/bin/zabbix_sender")
    zabbix_sender => "/usr/bin/zabbix_sender"
  }
}
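For reference, this is a hypothetical access-log line (made up for illustration, not taken from a real server) in the format the grok pattern above expects: client IP, identd, user, timestamp, request, status, bytes, then the quoted referrer, user agent, JSESSIONID and BIG-IP pool, and finally the two request-time counters separated by a slash:

10.0.0.15 - - [10/Feb/2014:10:27:32 +0100] "GET /App1/index.do HTTP/1.1" 503 4523 "https://portal.example.com/home" "Mozilla/5.0" "JSESSIONID=0000abcd" "pool_app1_http" 0/125

Since the request is not /f5.txt and the response is a 503, this entry would be stored in Elasticsearch and also tagged and forwarded to Zabbix.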


ElasticSearch

The configuration file for a basic service is easy. Depending on the needs, the throughput and how many searches per second you expect, it gets complicated (shards, masters, nodes, …), but for very occasional use this single line is enough:

cluster.name: ES_WEB_DMZ
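A quick sanity check that the node answers and the cluster name matches (standard Elasticsearch REST API):

# cluster status (green/yellow/red), number of nodes, shards, …
curl "http://<ES_HOST>:9200/_cluster/health?pretty"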

Kibana

Another easy configuration: it only needs to know the ES address, “http://<ES_HOST>:9200”, and that's all. Dashboards and queries are saved in the ES database. The PHP files and directories can be read-only.

This post was originally published on my company intranet and showed two dashboards/screenshots that I can't reproduce here:

  1. A simple dashboard showing how the logs are distributed per application and server, how many entries there are and their response times. Each facet can be inspected to drill down further.
  2. A dashboard showing the application errors (error codes 5XX)

