
Risk management

In a migration there are three risks that must be controlled, no matter what.

In the development environment, the risk is using an original service while thinking you are using a migrated one. In that scenario the tests will work flawlessly, and the go-live day will be plagued with errors because the migrated system hasn’t been properly tested. Actually, it hasn’t been tested at all!

In production there are two: corrupting the original live production environment with test data, and, after the go-live, saving production data into already-migrated or development systems.

What solutions can mitigate or avoid the problems behind those risks?

The first one is simple, cosmetic, but it works wonders. Everybody remembers OMG Ponies! So choose an official RDC migration colour, the camper the better, and change the background colour on the migration branch of each app. Nobody will be able to say it was a mistake to run tests against a real app thinking they were attacking its migrated counterpart. Also, if by error someone promotes an app with real properties (from a source code branch different from the migration one), it’ll be detected instantly.
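For a web app the trick can be as cheap as a one-line stylesheet override on the migration branch; the colour here is a made-up example:

    /* migration-branch override: a hypothetical example, use your
       official migration colour (the camper the better) */
    body { background-color: #ff00ff; }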

The second is like putting a prophylactic over your systems. Yes, internal firewalls. Netfilter/iptables for the win. Using Windows? Sorry. Twice (the first one for having to use it). Raw iptables is a bit rough, so I prefer Shorewall: a collection of scripts for configuring iptables using policies and high-level objects.

The different systems will be in different networks (if they aren’t, either the migration is trivial or there are more serious issues to think about… like abandoning the boat as fast as possible). In the development environment, rejecting (and logging) egress connections to the original network range is usually enough. If the connection is dropped instead of rejected you get timeouts; it’s usually better to see a fast log entry with a connection reset in it than to wait minutes not knowing what’s happening. Production’s rules have a different scope: all outward connections are blocked by default, and only selected network ranges are opened explicitly.
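A minimal Shorewall sketch of both setups; it assumes zones named orig (the original network) and net are already declared in /etc/shorewall/zones, and the range and port are made-up examples:

    # /etc/shorewall/policy -- development environment (sketch)
    #SOURCE   DEST   POLICY   LOG LEVEL
    $FW       orig   REJECT   info      # reject and log egress to the original range
    $FW       all    ACCEPT

    # /etc/shorewall/policy -- production environment (sketch)
    #SOURCE   DEST   POLICY   LOG LEVEL
    $FW       all    REJECT   info      # everything outward blocked by default

    # /etc/shorewall/rules -- production, explicit openings (sketch)
    #ACTION   SOURCE   DEST               PROTO   DPORT
    ACCEPT    $FW      net:10.20.0.0/16   tcp     1521   # example: the migrated DB range

Rejecting with a log level of info is what turns every stray connection into a searchable event instead of a silent timeout.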

Derivative accesses are still a risk, even more so if they are managed by other departments with other standards. Some can be tracked per application… others are a matter of faith.

Tracking all those rejected connections can be done easily in Kibana. Just add, to the automated process that configures shorewall/iptables on all the servers (I use fabric), the option of relaying the shorewall log to lumberjack/logstash (see Warlogs).
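A minimal fabric (1.x API) sketch of that idea; the host names, file paths and service name are illustrative assumptions, not the real setup:

    # fabfile.py -- sketch only; hosts, paths and service names are made up
    from fabric.api import env, put, sudo

    env.hosts = ['app1.example.com', 'app2.example.com']  # hypothetical servers

    def configure_firewall():
        """Push the shorewall policy/rules and reload the firewall."""
        put('shorewall/policy', '/etc/shorewall/policy', use_sudo=True)
        put('shorewall/rules', '/etc/shorewall/rules', use_sudo=True)
        sudo('shorewall restart')

    def relay_shorewall_log():
        """Make lumberjack track the shorewall log (see Warlogs)."""
        # assumes shorewall logs via syslog to /var/log/shorewall.log and that
        # the pushed lumberjack config lists that path in its "files" section
        put('lumberjack/lumberjack.conf', '/etc/lumberjack.conf', use_sudo=True)
        sudo('service lumberjack restart')

Running fab configure_firewall relay_shorewall_log then applies both steps to every server in one go.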


Warlogs

One of the first things to do in a migration is to be able to handle a lot of logs without having to check each one manually.

If you don’t have a centralized log system, you need one. In a steady state maybe the log of an application is enough, but during a migration you will need to watch firewall logs, security logs,… on multiple servers, because your services should usually be clustered or at least balanced. Multiply that by the number of applications and you get the idea.

There may already be an official centralized log system, but sometimes realpolitik or strict rules of use (format, size, origin, usage,…) advise against using it.

A temporary solution is logstash; temporary in the sense of scaffolding. A permanent solution needs a lot of capacity planning and a serious logging strategy, out of the scope of this entry (maybe another day).

You need a server with a few GB of space, but don’t fret about it: you can always delete the saved logs. Remember, the goal isn’t keeping a history, it’s detecting errors, usually while testing. Which is one more reason to view it as temporary, or as a demo of the things to come.

The first thing is to install a standalone elasticsearch server, the database that keeps the logs. Carlos Spitzer, a Red Hat engineer (whom I met in a previous project, btw), explains how to create an RPM for RHEL/CentOS, and for Debian the Internet is full of examples. Actually, all these applications are very basic and simple to install.
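For the standalone server, a couple of lines in its config file are enough to give it a name; a minimal sketch, with illustrative names:

    # config/elasticsearch.yml -- minimal standalone sketch; names are made up
    cluster.name: migration-logs
    node.name: logs-01

Giving it its own cluster.name also prevents other elasticsearch nodes on the network from accidentally joining it.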

Logstash receives the logs and records them in the elasticsearch database. It’s a Java application, so you need a JRE, but at least installation, configuration and maintenance are as simple as swapping the underlying jar. Its configuration file has three sections, input, transformation (filter) and output; a minimal sketch follows the list below.

  1. Input section: forget about running one logstash per server, just one is enough. You need to configure a lumberjack input and an SSL certificate, and that’s all.
  2. Transform (filter) section: patterns, matches, transformation processes. It’s the most complex section and I haven’t exploited all its capabilities yet. If I could give only one piece of advice it would be this: use a grok debugger, the best way to get grok regex patterns matching.
  3. Output section: just don’t use the embedded elasticsearch, point it to the previously installed elasticsearch server.
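A minimal sketch of such a config file; the port, certificate paths, grok pattern and elasticsearch host are illustrative assumptions:

    # logstash.conf -- sketch only; port, paths, pattern and host are made up
    input {
      lumberjack {
        port            => 5043
        ssl_certificate => "/etc/ssl/logstash.crt"
        ssl_key         => "/etc/ssl/logstash.key"
      }
    }

    filter {
      grok {
        # an example syslog-style pattern; build yours with a grok debugger
        match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{GREEDYDATA:msg}" ]
      }
    }

    output {
      # not the embedded one: point to the standalone elasticsearch server
      elasticsearch {
        host => "es.example.com"
      }
    }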

A last tip: the latest logstash releases come with two modes, agent (for logging; this mode needs the config file) and web (which starts an embedded Kibana server). You would have two logstash processes running.
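Assuming the flat jar distribution of that era (the jar name here is illustrative), starting them could look something like this:

    # the log-collecting agent (needs the config file)
    java -jar logstash.jar agent -f logstash.conf

    # the embedded Kibana web interface, a second process
    java -jar logstash.jar web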

Lumberjack, the log feeder. You need to download it from GitHub and compile it with Go, but after that, the executable is all you’ll need. No libs, no dependencies, no runtimes. Just the executable and a JSON config file declaring the logstash server port, the SSL cert and which log files to track, and it’s ready. If the log files are rotated or recreated often, you may need to restart the lumberjack process after the logs are refreshed, but in general it manages how many entries to send, and if the logstash server is down it keeps them until it is online again, so no logs are lost. The executable, the certs and the config file are the only files you’ll need to deploy on each server; a fabric task or, as we do, installing it with the automatic app-deployment script is enough.
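A minimal sketch of that JSON config file; the server name, certificate path and tracked files are made-up assumptions (JSON allows no comments, so the hedging goes here):

    {
      "network": {
        "servers": [ "logstash.example.com:5043" ],
        "ssl ca": "/etc/ssl/logstash.crt",
        "timeout": 15
      },
      "files": [
        { "paths": [ "/var/log/shorewall.log" ], "fields": { "type": "shorewall" } },
        { "paths": [ "/var/log/app/*.log" ],     "fields": { "type": "app" } }
      ]
    }

The type field is what you later filter on in the logstash transform section and in Kibana.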

For accessing the logs I use Kibana, the embedded one that comes with logstash. Sorry, I don’t like Ruby platforms; well, I don’t like Python eggs or Perl CPAN either. A production server is no place for compilers: they always bring extra work, so a flat jar and a JRE runtime is a good trade-off for me.

The Arsenal

When you only have a hammer, everything looks like a nail. But a good professional, without having to renounce the hammer (one of the differences between a pro and a real pro is that the latter knows when to use it), has a toolbox with a collection of chosen utils, right for the job or at least for any contingency (yes, the hammer is for that “any”).

These are tools I use that are usually needed in any migration. They are Linux-oriented; there should be alternatives for other platforms.

  • An office suite. LibreOffice could be enough for most things, but MS Project is still king. A tool for planning tasks and resources is critical. We could even argue that managing the project shouldn’t be our job, it’s a full job in itself: not only managing the technical details but also the follow-ups and side issues of all the people involved… But alas, Spain is different.
  • An issue/ticket management system and a wiki. Redmine or Jira.
  • A firewall rules management software. Shorewall.
  • A DNS proxy/resolver. Dnsmasq.
  • Software for centralizing logs. Logstash, elasticsearch and kibana. Explained in more detail in Warlogs.
  • A monitoring strategy and software. Zabbix.
  • Networking tools (tcpdump, netcat, iproute2, wireshark,…).
  • Software for installing packages/updates, with support for channels.
  • To be continued…

In following posts I’ll explain how I use them and why: what the motives are and what I try to accomplish.