Category Archives: Cloud Computing

Stream Data Platform or Global Logging with Apache Kafka

A logging platform is something I’ve been looking for for some time. In Logging for the masses I explained how I built an ELK platform for accessing/searching our web logs. Elasticsearch and Kibana are great, but Logstash is the weak link: it’s not well designed for parallel processing (cloud/multiple nodes). I had to split the Logstash service in two, adding a Redis server, just to get some HA and not lose logs.

Logging is also a deficit of, or a requisite for, any dockerized app. Most of the issues I talked about in Docker: production usefulness are still valid; some have been tackled by Kubernetes, OpenShift v3,… (those relative to managing Docker images and fleet/project management), but on monitoring and logging the jury is still out.

Apache Kafka is a solution to both. Actually, it’s a solution for a lot of things:

  • Messaging: Kafka works well as a replacement for a more traditional message broker. In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

  • Website Activity Tracking: The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

  • Metrics: Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

  • Log Aggregation: Many people use Kafka as a replacement for a log aggregation solution. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption (see the first sketch after this list).

  • Stream Processing: Many users end up doing stage-wise processing of data where data is consumed from topics of raw data and then aggregated, enriched, or otherwise transformed into new Kafka topics for further consumption. Storm and Samza are popular frameworks for implementing these kinds of transformations (see the second sketch after this list).

  • Event Sourcing: Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka’s support for very large stored log data makes it an excellent backend for an application built in this style.

  • Commit Log: Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data.
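
Since log aggregation is the use case that brought me to Kafka, here is a minimal sketch of shipping log lines into a topic with the Java producer client. To be clear, the broker address (kafka1:9092), the weblogs topic and the host key are hypothetical placeholders, not anything from a real setup:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogShipper {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka1:9092"); // hypothetical broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            // acks=all waits for the full in-sync replica set:
            // the "don't lose logs" setting.
            props.put("acks", "all");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Key by host so each host's lines keep their order
                // (ordering is per partition; same key -> same partition).
                producer.send(new ProducerRecord<>("weblogs", "web01",
                        "GET /index.html 200 12ms"));
            } // close() flushes any pending records
        }
    }

Compared with my Logstash + Redis workaround, the buffering and HA here come from Kafka itself: topics are partitioned and replicated across brokers, and consumers can fall behind and catch up without data loss.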
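
And for the stream-processing bullet, a sketch of the same stage-wise idea without Storm or Samza: a plain consume-transform-produce loop that reads the raw topic, filters it and publishes to a derived topic. Again the broker, group id and topic names (weblogs, weblogs-errors) are made up for illustration, and poll(Duration) assumes a recent Java client:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ErrorFilter {
        public static void main(String[] args) {
            Properties cprops = new Properties();
            cprops.put("bootstrap.servers", "kafka1:9092"); // hypothetical broker
            // Instances sharing a group.id split the topic's partitions
            // among themselves: the parallel consumption Logstash lacked.
            cprops.put("group.id", "error-filter");
            cprops.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            cprops.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            Properties pprops = new Properties();
            pprops.put("bootstrap.servers", "kafka1:9092");
            pprops.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            pprops.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cprops);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(pprops)) {
                consumer.subscribe(Collections.singletonList("weblogs"));
                while (true) { // runs until the process is stopped
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // Derive a new topic from the raw one: keep only the
                        // lines with an HTTP 500 status.
                        if (record.value().contains(" 500 ")) {
                            producer.send(new ProducerRecord<>("weblogs-errors",
                                    record.key(), record.value()));
                        }
                    }
                }
            }
        }
    }

Running several copies of this process with the same group.id spreads the partitions among them, scaling the stage horizontally.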

What is Kafka? Where does the name come from?
It’s explained in http://blog.confluent.io/2015/02/25/stream-data-platform-1/, a great blog entry by the way and a must-read for understanding Kafka:

We built Apache Kafka at LinkedIn with a specific purpose in mind: to serve as a central repository of data streams.

For a long time we didn’t really have a name for what we were doing (we just called it “Kafka stuff” or “the global commit log thingy”) but over time we came to call this kind of data “stream data”, and the concept of managing this centrally a “stream data platform”

 

[Figure: data-flow-ugly, the LinkedIn platform before and after developing and implementing Kafka]

In this blog entry from “engineering.linkedin.com”, there is another technical explanation:

[Figure: stream-centric-architecture1, a stream-centric architecture]

The Log: What every software engineer should know about real-time data’s unifying abstraction


I learnt about it thanks to Javi Roman (@javiromanrh), a Red Hat engineer who talks about Big Data; for several weeks his tweets always had some Kafka in them. It was so appealing that I had to research it myself to verify that it really deserved to enter my priority list.

Some links tweeted by Javi Roman to get a glimpse of Apache Kafka:

Potential and imaginary roadmap for 2015/16

I’ll talk about our Spanish middleware platform, dedicated to offering web services (and sites, not just web services).

It’s an exercise in a potential roadmap for the next year.
The platform is, mainly:

  • Several RHEL servers in DEV, UAT and PROD; LAN and external devices (DMZ),…
  • Several JBoss App Servers, Apache servers, F5 VIPs and support services (LDAP, MQ,…)
  • All the infrastructure that supports CI/CD: Subversion servers, Artifactory,…

Migrate Spanish CI/CD to X-Forge

That would mean migrating our source code from our Subversion servers to our overlords’ SVN repository (let’s call it X-Forge, a dev portal with JIRA, Confluence, etc.), our Artifactory server to Nexus, and our build and deploy scripts to Bamboo, Sonar to SonarQube,… It seems viable because the technology involved is compatible, and since the build (Maven + headless Eclipse) and deploy (ssh + Perl + Bash) scripts are invoked from QuickBuild (if it were Jenkins there wouldn’t be any difference), we ought to be able to migrate them to Bamboo.

It isn’t in the plans for 2015, so at the least it won’t happen before 2016. Although the technical part seems viable at first glance, there would still be a lot of details to be ironed out with our overlords. Actually, the underlying organization of the code (as Eclipse PSFs) and the inter-dependencies between repositories make phasing the project quite difficult (if not impossible). Seamlessly migrating our CI/CD infrastructure from Spain to Poland (while maintaining both for almost 2 years) was maybe the major feat of that project in complexity terms: of the 200 servers I built in the new data center, the SVN server was the second most complex, and the configuration manager the first. Nobody was impacted, so it went unnoticed (and nobody assessed the difficulty of doing it right at the first try).

Revamp J2EE platform

The platform design is showing its age. I designed it 4 or 5 years ago and it hasn’t been updated since then; it isn’t keeping pace. Recent moves to agile procedures have shown a lack of flexibility that the powers that be have decided to overcome with what is called “multicontext”: instead of creating several environments on demand (changing the platform), the solution is to use the same environments to install the same J2EE apps multiple times, changing their contexts (I’m a ‘rara avis’: I’m the only one who thinks this is a problem and should be considered heresy).

Although most or all of our apps could run on a Tomcat/Jetty server, I’d keep the JBoss App Server due to its modularity, lightweight footprint, excellent support and synergies with other Red Hat products like FUSE (although we use another service bus as our ESB, JBoss Fuse is elastic and looks great for cloud platforms) or BRMS (Drools).

Solution 1: Migration of our applications to Pivotal CF/CloudFoundry.

Overall, maybe the best solution, as it’s being deployed by our overlords. It’d allow the developers to have control of their environments and create sandboxes/environments on demand.
Of course there are problems to be addressed: it’s a pre-requisite that our CI/CD be already migrated to X-Forge, and the applications would have to be revised and refactored to eliminate some tightly coupled functionality. Performance and stress tests are critical since it’s a shared platform. Maybe the solution is to use it only for development in a first phase.

Solution 2: OpenShift V3

Maybe the best technical solution (IMHO, and without knowing Pivotal in detail) for a J2EE platform like ours.

“OpenShift adds developer and operational centric tools on top of Kubernetes and Docker”.

It can be used for dev and prod environments and gives a lot of elasticity. Our apps would need to be dockerized, but that is also a plus. The drawbacks: as with Pivotal, the apps would need to be refactored, and it isn’t a technology that’s in our overlords’ roadmap (while Pivotal CF seems to be their option of choice). Also, dockerizing the apps will need some type of registry, access permissions and change management procedures of its own.
Dockerizing JBoss will also involve upgrading the JBoss Domain Controller and a better workflow (a preliminary PoC was tested in Docker: Attack on Wildfly).
This solution doesn’t require the migration to X-Forge.

Solution 3: Upgrading our JBoss servers

Upgrading our JBoss servers and changing the devops procedures for the JBoss domain. It’s Solution 2 without OpenShift, Kubernetes and Docker: no cloud ability, elasticity, etc., just the improvements of a better workflow and a better refactoring (with Ansible or Salt) of the devops procedures (creating environments, creating apps, deleting, cloning,…).
This solution doesn’t require the migration to X-Forge.

Solution 4: Refactoring the devops procedures.

Most of the complaints are related to the lack of flexibility for creating new apps and environments. That’s really an automation problem that could be resolved anytime (if I had the time). The actual design can be scaled just by adding extra capacity (and maybe licenses). I’ve already done preliminary work with Ansible and Salt; our overlords’ red lines are a concern. This solution doesn’t require the migration to X-Forge.

Solution 5a: Not doing anything.

Keep multicontext and the same workflow. (I don’t know why I detailed the other options… they have no chance against this one!)

Solution 5b:  Run.

Cloud Computing & Conway’s Law

In a previous post, PaaS as change enabler, I mentioned Conway’s law as a difficulty to be addressed when implementing a cloud solution.

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations

Recently, Gartner analyst Thomas J. Bittman (Problems Encountered by 95% of Private Clouds) researched which problems his clients were suffering with their private clouds and found that 95% of them had problems with their solutions.

Amazon AWS (Private Clouds are things of the past) and other cloud providers/evangelists will parrot that “private clouds are inherently broken”, but I can’t follow the logic behind those claims.

The problems encountered are related to the use of the technology, not the technology itself, and most of them will occur when implementing any cloud, whether public, private or hybrid.

If I had to extract a title it’d be “Companies don’t fully understand Cloud Computing”: they map their expertise, their knowledge and their organizational model onto a cloud paradigm (hence Conway’s law) without fully committing to, or understanding, all the consequences of a cloud model.

Anyway, those detected problems are critical. Addressing them marks the difference between a successful project and a failure for any company, and it usually involves a change of mindset, which is what cloud computing is really about: a change of paradigm without which there may be no real difference from advanced virtualization.