A logging platform is something I have been looking for, for quite some time. In Logging for the masses I explained how I built an ELK platform for accessing and searching our web logs. Elasticsearch and Kibana are great, but Logstash is the weak link: it is not well designed for parallel processing (cloud/multiple nodes). I had to split the Logstash service in two, adding a Redis server in between, just to get some HA and avoid losing logs.
Logging is also a weak spot, and a requisite, for any dockerized app. Most of the issues I talked about in Docker: production usefulness are still valid; some have been tackled by Kubernetes, OpenShift v3, … (those related to managing Docker images and to fleet/project management), but for monitoring and logging the jury is still out.
Messaging: Kafka works well as a replacement for a more traditional message broker. In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.
Website Activity Tracking: The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.
Metrics: Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.
Log Aggregation: Many people use Kafka as a replacement for a log aggregation solution. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption.
Stream Processing: Many users end up doing stage-wise processing of data where data is consumed from topics of raw data and then aggregated, enriched, or otherwise transformed into new Kafka topics for further consumption. Storm and Samza are popular frameworks for implementing these kinds of transformations.
Event Sourcing: Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka’s support for very large stored log data makes it an excellent backend for an application built in this style.
Commit Log: Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data.
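All of those use cases rest on the same core abstraction: a topic made of partitioned, append-only logs that consumers read by offset. As a toy, in-memory sketch of that idea (plain Python, deliberately not the real Kafka client or broker, which persists to disk and replicates across nodes):

```python
# Toy in-memory model of Kafka's core abstraction: a topic made of
# partitioned, append-only logs, read by offset. Illustration only.

class Topic:
    def __init__(self, name, partitions=2):
        self.name = name
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Messages with the same key always land in the same partition,
        # so per-key ordering is preserved.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p, len(self.partitions[p]) - 1  # (partition, offset)

    def consume(self, partition, offset):
        # Consumers pull from an offset they track themselves; re-reading
        # from an old offset is how a failed node re-syncs its state
        # (the "commit log" and "event sourcing" use cases above).
        return self.partitions[partition][offset:]

topic = Topic("pageviews")
topic.produce("user-1", "/home")
topic.produce("user-1", "/search?q=kafka")
p, _ = topic.produce("user-1", "/logout")
print(topic.consume(p, 0))  # all of user-1's events, in order
```

The point of the sketch is the contract, not the implementation: producers only append, the broker never tracks who has read what, and consumers can rewind at will.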
What is Kafka? Where does the name come from?
It’s explained in http://blog.confluent.io/2015/02/25/stream-data-platform-1/ (a great blog entry, by the way, and a must-read for understanding Kafka).
We built Apache Kafka at LinkedIn with a specific purpose in mind: to serve as a central repository of data streams.
For a long time we didn’t really have a name for what we were doing (we just called it “Kafka stuff” or “the global commit log thingy”) but over time we came to call this kind of data “stream data”, and the concept of managing this centrally a “stream data platform”
The LinkedIn platform, before and after developing and implementing Kafka.
In this blog entry from “engineering.linkedin.com”, there is another technical explanation:
I learnt about it thanks to Javi Roman (@javiromanrh), a Red Hat engineer who tweets about Big Data; for several weeks his tweets always had some Kafka in them. It looked so appealing that I had to research it myself and verify whether it really deserved a place on my priority list.
Some links tweeted by Javi Roman to get a glimpse of Apache Kafka:
- BOTTLED WATER: REAL-TIME INTEGRATION OF POSTGRESQL AND KAFKA.
Uses PostgreSQL as a real-time stream of changes. Similar to Oracle GoldenGate, the MySQL binlog, the MongoDB oplog, or the CouchDB changes feed.
- Building a new trends experience. Explains how Manhattan, Kafka, HDFS et al. come together.
- Building a Stream Data Platform with apachekafka by ConfluentInc
- Running Kafka at Scale. We use apachekafka for moving every type of data around between systems … @bonkoif @LinkedInEng
- A real-time processing revival. Stream processing and data management, a good state-of-the-art article by Radar.
- Rsyslog Kafka plugin producer: a great new feature in rsyslog since v8.7.0, by @rgerhards: the omkafka plug-in, an @apachekafka producer.
- Things that apachekafka does are very confusing to new users
- TURNING THE DATABASE INSIDE-OUT WITH APACHE SAMZA
Rethinking the relationship between databases and application architecture using @apachekafka and @samzastream, by @martinkl.
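As a taste of the rsyslog omkafka plugin mentioned above, a minimal config sketch could look like the following. The broker address and topic name are assumptions; check the omkafka documentation for your rsyslog version before relying on the parameter names.

```
# Load the Kafka output module (available since rsyslog v8.7.0)
module(load="omkafka")

# Forward all messages to a Kafka topic; broker address and topic
# name here are placeholders -- adjust to your own cluster.
action(type="omkafka"
       broker=["localhost:9092"]
       topic="syslog")
```

With something like this in place, rsyslog becomes a Kafka producer, which fits nicely into the log aggregation use case described earlier.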