
Warlogs

One of the first things you need in a migration is a way to handle a lot of logs without having to check each one manually.

If you don’t have one, you need it. In the final state maybe the application’s own log is enough, but during a migration you will need to watch firewall logs, security logs, … and across multiple servers, because usually your services should be clustered or at least load balanced. Multiply that by the number of applications and you get the idea.

There may already be an official centralized log system, but sometimes realpolitik or strict rules of use (in format, size, origin, use, …) advise against it.

A temporary solution is to use logstash; temporary like scaffolding. A permanent solution needs a lot of capacity planning and a serious logging strategy, which is out of the scope of this entry (maybe another day).

You need a server with a few GB of space, but don’t fret about it: you can always delete the saved logs. Remember, the goal isn’t keeping a history, it’s detecting errors, usually while testing. One more reason to view it as temporary, or as a demo of things to come.

The first thing is to install a standalone elasticsearch server: the database that keeps the logs. Carlos Spitzer, a Red Hat engineer (whom I met on a previous project, by the way), explains how to create an RPM for RHEL/CentOS, and for Debian the Internet is full of examples. Actually, all these applications are quite basic and simple to install.
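As a sanity check once it is installed: elasticsearch answers HTTP on port 9200. A minimal sketch, assuming a Debian host and a downloaded .deb package (package name and version are placeholders):

```
# Install the package and start the service (name/version are illustrative)
sudo dpkg -i elasticsearch-1.0.0.deb
sudo service elasticsearch start

# Elasticsearch listens on port 9200 by default; this should return a JSON banner
curl http://localhost:9200/
```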

Logstash receives the logs and records them in the elasticsearch database. It’s a Java application, so you need a JRE, but at least installation, configuration and maintenance are as simple as swapping the underlying jar. Its configuration file has three sections: input, filter (the transformation step) and output; a sketch of a full config follows the list below.

  1. Input section: forget about running a logstash per server; just one is enough. You need to configure a lumberjack input and an SSL certificate, and that’s all.
  2. Filter (transform) section: patterns, matches, the transformation process. The most complex section; I haven’t exploited all its capabilities yet. If I had one piece of advice to give, it would be: use a grok debugger, the best way to test grok regex patterns.
  3. Output section: just don’t use the embedded elasticsearch; point it to the standalone elasticsearch server from before.
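Putting the three sections together, a minimal sketch of the config file, assuming a logstash 1.x flat jar; the port, hostnames, certificate paths and the grok pattern are placeholders to adapt:

```
# /etc/logstash/central.conf -- one central logstash for all servers
input {
  lumberjack {
    # Port the lumberjack agents connect to, secured with SSL
    port            => 5000
    ssl_certificate => "/etc/logstash/logstash.crt"
    ssl_key         => "/etc/logstash/logstash.key"
  }
}

filter {
  # Example: parse apache-style access logs; try patterns in a grok debugger first
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}

output {
  # Not the embedded one: point to the standalone elasticsearch server
  elasticsearch {
    host => "es.example.com"
  }
}
```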

One last tip: the latest logstash releases come with two modes, agent (for log processing; this mode needs the config file) and web (which starts an embedded Kibana server). You would have two logstash processes running.
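Starting both could look like this (a sketch; the jar name and version are illustrative):

```
# Agent mode: reads the config file and processes the logs
java -jar logstash-1.2.2-flatjar.jar agent -f /etc/logstash/central.conf

# Web mode: serves the embedded Kibana, by default on port 9292
java -jar logstash-1.2.2-flatjar.jar web
```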

Lumberjack is the log feeder. You will need to download it from GitHub and compile it with Go, but after that the executable file is all you’ll need. No libs, no dependencies, no runtimes. Just the executable, a JSON config file declaring the logstash server and port, the SSL cert and which log files to track, and it’s ready. If the log files don’t live long you may need to restart the lumberjack process after they are recreated, but in general it manages how many entries to send, and if the logstash server is down it keeps them until the server is online again, so no logs are lost. The executable, the certs and the config file are the only files you’ll need to deploy on each server; a fabric task, or as we have it, installing them with a script for automatic app deployment, is enough.
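A minimal sketch of that JSON config, following the lumberjack (logstash-forwarder) format; the server name, cert path and log paths are placeholders:

```
{
  "network": {
    "servers": [ "logstash.example.com:5000" ],
    "ssl ca": "/etc/lumberjack/logstash.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/myapp/*.log" ],
      "fields": { "type": "myapp" }
    }
  ]
}
```

The "servers" entry must match the port of the lumberjack input configured above, and the "ssl ca" cert is the one the logstash server presents.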

To access the logs I use Kibana, the embedded one that comes with logstash. Sorry, I don’t like Ruby platforms; well, I don’t like Python eggs or Perl CPAN either. A production server is no place for compilers: they always mean extra work, so a flat jar on a JRE runtime is a good trade-off for me.