jm + kafka   47

Exactly-once Support in Apache Kafka – Jay Kreps
If you’re one of the people who think [exactly-once support is impossible], I’d ask you to take an actual look at what we actually know to be possible and impossible, and what has been built in Kafka, and hopefully come to a more informed opinion. So let’s address this in two parts. First, is exactly-once a theoretical impossibility? Second, how does Kafka support it?
exactly-once-delivery  distributed  kafka  distcomp  jay-kreps  coding  broadcast 
11 weeks ago by jm
Exactly-once Semantics is Possible: Here's How Apache Kafka Does it
How does this feature work? Under the covers it works in a way similar to TCP; each batch of messages sent to Kafka will contain a sequence number which the broker will use to dedupe any duplicate send. Unlike TCP, though—which provides guarantees only within a transient in-memory connection—this sequence number is persisted to the replicated log, so even if the leader fails, any broker that takes over will also know if a resend is a duplicate. The overhead of this mechanism is quite low: it’s just a few extra numeric fields with each batch of messages. As you will see later in this article, this feature adds negligible performance overhead over the non-idempotent producer.
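A toy sketch of the dedupe idea described above — class and method names are invented for illustration, this is not Kafka's broker code: the broker remembers the highest sequence number appended per producer and ignores resends at or below it. On the client side, this behaviour is what enable.idempotence=true switches on in the Java producer from 0.11 onwards.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy illustration of per-producer sequence-number dedupe (not Kafka's
 * actual broker code). In the real broker this state is persisted along
 * with the replicated log, so a newly elected leader can still detect
 * duplicate resends.
 */
public class DedupeLog {
    /** Highest sequence number appended so far, per producer id. */
    private final Map<Long, Integer> lastSeq = new HashMap<>();

    /** Append a batch unless this (producerId, seq) has already been seen. */
    public boolean append(long producerId, int seq, byte[] batch) {
        int last = lastSeq.getOrDefault(producerId, -1);
        if (seq <= last) {
            return false;           // duplicate resend: ack it, but don't re-append
        }
        // ... write the batch to the log here ...
        lastSeq.put(producerId, seq);
        return true;                // first time this batch has been seen
    }
}
```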
kafka  sequence-numbers  dedupe  deduplication  unique  architecture  distcomp  streaming  idempotence 
11 weeks ago by jm
Delivering Billions of Messages Exactly Once · Segment Blog
holy crap, this is exactly the wrong way to build a massive-scale deduplication system -- with a monster random-access "is this random UUID in the db" lookup
deduping  architecture  horror  segment  messaging  kafka 
11 weeks ago by jm
Kafka Streams - Scaling up or down
this is a nice zero-config scaling story -- good work Kafka Streams
scaling  scalability  architecture  kafka  streams  ops 
october 2016 by jm
Counting with domain specific databases — The Smyte Blog — Medium
whoa, pretty heavily engineered scalable counting system with Kafka, RocksDB and Kubernetes
kafka  rocksdb  kubernetes  counting  databases  storage  ops 
april 2016 by jm
Apache Kafka, Purgatory, and Hierarchical Timing Wheels
In the new design, we use Hierarchical Timing Wheels for the timeout timer and a DelayQueue of timer buckets to advance the clock on demand. Completed requests are removed from the timer queue immediately, at O(1) cost. The buckets remain in the delay queue; however, the number of buckets is bounded. And, in a healthy system, most requests are satisfied before their timeout, and many of the buckets become empty before being pulled out of the delay queue. Thus, the timer should rarely have buckets of the lower intervals. The advantage of this design is that the number of requests in the timer queue is exactly the number of pending requests at any time. This allows us to estimate the number of requests that need to be purged, so we can avoid unnecessary purge operations of the watcher lists. As a result we achieve higher scalability in terms of request rate, with much better CPU usage.
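A much-simplified, single-level sketch of the bucket-in-a-DelayQueue idea (names invented for illustration; Kafka's real purgatory adds the hierarchical wheels, watcher lists, and proper thread-safety). The two key points it shows: completed requests are removed from their bucket in O(1), and the DelayQueue only ever holds buckets, not individual requests.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class ToyTimer {
    static final long TICK_MS = 100;               // bucket width

    /** A timer bucket: the tasks expiring in the same tick. */
    static class Bucket implements Delayed {
        final long expirationMs;
        final Set<Runnable> tasks = ConcurrentHashMap.newKeySet();
        Bucket(long expirationMs) { this.expirationMs = expirationMs; }
        public long getDelay(TimeUnit unit) {
            return unit.convert(expirationMs - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    private final DelayQueue<Bucket> delayQueue = new DelayQueue<>();
    private final ConcurrentHashMap<Long, Bucket> buckets = new ConcurrentHashMap<>();

    /** Register a timeout; returns the bucket so completion can remove it in O(1). */
    public Bucket add(Runnable onTimeout, long timeoutMs) {
        long expiration = ((System.currentTimeMillis() + timeoutMs) / TICK_MS) * TICK_MS;
        Bucket bucket = buckets.computeIfAbsent(expiration, exp -> {
            Bucket b = new Bucket(exp);
            delayQueue.offer(b);     // the queue holds buckets, so its size stays bounded
            return b;
        });
        bucket.tasks.add(onTimeout);
        return bucket;
    }

    /** Request completed before its timeout: O(1) removal from the bucket. */
    public void complete(Bucket bucket, Runnable onTimeout) {
        bucket.tasks.remove(onTimeout);
    }

    /** Advance the clock on demand: fire whatever is left in an expired bucket. */
    public void advance(long waitMs) throws InterruptedException {
        Bucket expired = delayQueue.poll(waitMs, TimeUnit.MILLISECONDS);
        if (expired != null) {
            buckets.remove(expired.expirationMs);
            expired.tasks.forEach(Runnable::run);  // only requests that actually timed out
        }
    }
}
```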
algorithms  timers  kafka  scheduling  timing-wheels  delayqueue  queueing 
october 2015 by jm
SuperChief: From Apache Storm to In-House Distributed Stream Processing
Another sorry tale of Storm issues:
Storm has been successful at Librato, but we experienced many of the limitations cited in the Twitter Heron: Stream Processing at Scale paper and outlined here by Adrian Colyer, including:
Inability to isolate, reason about, or debug performance issues due to the worker/executor/task paradigm. This led to building and configuring clusters specifically designed to attempt to mitigate these problems (i.e., separate clusters per topology, only running one worker per server), which added additional complexity to development and operations and also led to over-provisioning.
Ability of tasks to move around led to difficult-to-trace performance problems.
Storm’s work provisioning logic led to some tasks serving more Kafka partitions than others. This in turn created latency and performance issues that were difficult to reason about. The initial solution was to over-provision in an attempt to get a better hashing/balancing of work, but eventually we just replaced the work allocation logic.
Due to Storm’s architecture, it was very difficult to get a stack trace or heap dump because the processes that managed workers (Storm supervisor) would often forcefully kill a Java process while it was being investigated in this way.
The propensity for unexpected and subsequently unhandled exceptions to take down an entire worker led to additional defensive, verbose error handling everywhere.
This nasty bug STORM-404 coupled with the aforementioned fact that a single exception can take down a worker led to several cascading failures in production, taking down entire topologies until we upgraded to 0.9.4.
Additionally, we found the performance we were getting from Storm for the amount of money we were spending on infrastructure was not in line with our expectations. Much of this is due to the fact that, depending upon how your topology is designed, a single tuple may make multiple hops across JVMs, and this is very expensive. For example, in our time series aggregation topologies a single tuple may be serialized/deserialized and shipped across the wire 3-4 times as it progresses through the processing pipeline.
scalability  storm  kafka  librato  architecture  heron  ops 
october 2015 by jm
SQL on Kafka using PipelineDB
this is quite nice. PipelineDB allows direct hookup of a Kafka stream, and will ingest durably and reliably, and provide SQL views computed over a sliding window of the stream.
logging  sql  kafka  pipelinedb  streaming  sliding-window  databases  search  querying 
september 2015 by jm
Evolution of Babbel’s data pipeline on AWS: from SQS to Kinesis
Good "here's how we found it" blog post:

Our new data pipeline with Kinesis in place allows us to plug in new consumers without causing any damage to the current system, so it’s possible to rewrite all Queue Workers one by one and replace them with Kinesis Workers. In general, the transition to Kinesis was smooth and there were not many tricky parts.
Another outcome was significantly reduced costs – handling almost the same amount of data as SQS, Kinesis appeared to be many times cheaper than SQS.
aws  kinesis  kafka  streaming  data-pipelines  streams  sqs  queues  architecture  kcl 
september 2015 by jm
jgc on Cloudflare's log pipeline
Cloudflare are running a 40-machine, 50TB Kafka cluster, ingesting at 15 Gbps, for log processing. Also: Go producers/consumers, capnproto as wire format, and CitusDB/Postgres to store rolled-up analytics output. Also using the Space-Saving (top-k) and HyperLogLog (cardinality counting) estimation algorithms.
logs  cloudflare  kafka  go  capnproto  architecture  citusdb  postgres  analytics  streaming 
june 2015 by jm
Why Loggly loves Apache Kafka
Some good factoids about Loggly's Kafka usage and scale
scalability  logging  loggly  kafka  queueing  ops  reliabilty 
may 2015 by jm
RADStack - an open source Lambda Architecture built on Druid, Kafka and Samza
'In this paper we presented the RADStack, a collection of complementary technologies that can be used together to power interactive analytic applications. The key pieces of the stack are Kafka, Samza, Hadoop, and Druid. Druid is designed for exploratory analytics and is optimized for low latency data exploration, aggregation, and ingestion, and is well suited for OLAP workflows. Samza and Hadoop complement Druid and add data processing functionality, and Kafka enables high throughput event delivery.'
druid  samza  kafka  streaming  cep  lambda-architecture  architecture  hadoop  big-data  olap 
april 2015 by jm
Kafka best practices
This is the second part of our guide on streaming data and Apache Kafka. In part one I talked about the uses for real-time data streams and explained our idea of a stream data platform. The remainder of this guide will contain specific advice on how to go about building a stream data platform in your organization.


tl;dr: limit the number of Kafka clusters; use Avro.
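For the "use Avro" part, a minimal sketch of what that typically looks like in the Java producer, assuming Confluent's schema registry and the kafka-avro-serializer artifact on the classpath; broker and registry addresses are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

// Sketch only: a producer wired up for Avro via Confluent's schema registry.
public class AvroProducerConfig {
    public static KafkaProducer<Object, Object> build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");          // placeholder
        props.put("key.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://schema-registry:8081"); // placeholder
        return new KafkaProducer<>(props);
    }
}
```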
architecture  kafka  storage  streaming  event-processing  avro  schema  confluent  best-practices  tips 
march 2015 by jm
What's confusing about Kafka: a list
On a recent call, Neha said “The most confusing behavior we have is how producing to a topic can return errors for a few seconds after the topic was already created”. As she said that, I remembered that indeed, this was once very confusing, but then I got used to it. Which got us thinking: what other things that Kafka does are very confusing to new users, but we got so used to them that we no longer even see the issue?
kafka  messaging  tips  gotchas 
march 2015 by jm
The official REST Proxy for Kafka
The REST Proxy is an open source HTTP-based proxy for your Kafka cluster. The API supports many interactions with your cluster, including producing and consuming messages and accessing cluster metadata such as the set of topics and mapping of partitions to brokers. Just as with Kafka, it can work with arbitrary binary data, but also includes first-class support for Avro and integrates well with Confluent’s Schema Registry. And it is scalable, designed to be deployed in clusters and work with a variety of load balancing solutions.

We built the REST Proxy first and foremost to meet the growing demands of many organizations that want to use Kafka, but also want more freedom to select languages beyond those for which stable native clients exist today. However, it also includes functionality beyond traditional clients, making it useful for building tools for managing your Kafka cluster. See the documentation for a more detailed description of the included features.
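A minimal sketch of producing a JSON message through the proxy over plain HTTP; the proxy address and topic are placeholders, and the content type shown is the v2 JSON format, so check the docs for the proxy version you actually run:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProduce {
    public static void main(String[] args) throws Exception {
        // One record with a JSON value, wrapped in the proxy's envelope format.
        String body = "{\"records\":[{\"value\":{\"user\":\"alice\",\"action\":\"pageview\"}}]}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://rest-proxy:8082/topics/pageviews"))   // placeholder host/topic
            .header("Content-Type", "application/vnd.kafka.json.v2+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```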
kafka  rest  proxies  http  confluent  queues  messaging  streams  architecture 
march 2015 by jm
Twitter's Answers architecture
Twitter's mobile-device analytics service architecture, with Kafka and Storm in full Lambda-Architecture mode
twitter  lambda-architecture  storm  kafka  architecture 
february 2015 by jm
yahoo/kafka-manager
A tool for managing Apache Kafka. It supports the following:

Manage multiple clusters;
Easy inspection of cluster state (topics, brokers, replica distribution, partition distribution);
Run preferred replica election;
Generate partition assignments (based on current state of cluster);
Run reassignment of partitions (based on generated assignments)
yahoo  kafka  ops  tools 
february 2015 by jm
Comparing Message Queue Architectures on AWS
A good overview -- I like the summary table. tl;dr:
If you are light on DevOps and not latency sensitive use SQS for job management and Kinesis for event stream processing. If latency is an issue, use ELB or 2 RabbitMQs (or 2 beanstalkds) for job management and Redis for event stream processing.
amazon  architecture  aws  messaging  queueing  elb  rabbitmq  beanstalk  kinesis  sqs  redis  kafka 
february 2015 by jm
sclasen/event-shuttle
UNIX system service [jmason: ie a sidecar] that collects events and reliably delivers them to kafka, relieving other services on the same system from having to do so. Journals events through bolt-db so that in the event of a kafka outage, events can still be accepted, and will be delivered when kafka becomes available.
kafka  messaging  ruby  go  events  fault-tolerance  queueing 
november 2014 by jm
DynamoDB Streams
This is pretty awesome. All changes to a DynamoDB table can be streamed to a Kinesis stream, MySQL-replication-style.

The nice bit is that it has a solid way to ensure readers won't get overwhelmed by the stream volume (since ddb tables are IOPS-rate-limited), and Kinesis has a solid way to read missed updates (since it's a Kafka-style windowed persistent stream). With this you have a pretty reliable way to ensure you're not going to suffer data loss.
iops  dynamodb  aws  kinesis  reliability  replication  multi-az  multi-region  failover  streaming  kafka 
november 2014 by jm
Announcing Confluent, A Company for Apache Kafka And Realtime Data
Jay Kreps, Neha Narkhede, and Jun Rao are leaving LinkedIn to form a Kafka-oriented realtime event processing company
realtime  event-processing  logs  kafka  streaming  open-source  jay-kreps  jun-rao  confluent 
november 2014 by jm
Why We Didn’t Use Kafka for a Very Kafka-Shaped Problem
A good story of when Kafka _didn't_ fit the use case:
We came up with a complicated process of app-level replication for our messages into two separate Kafka clusters. We would then do end-to-end checking of the two clusters, detecting dropped messages in each cluster based on messages that weren’t in both.

It was ugly. It was clearly going to be fragile and error-prone. It was going to be a lot of app-level replication and horrible heuristics to see when we were losing messages and at least alert us, even if we couldn’t fix every failure case.

Despite us building a Kafka prototype for our ETL — having an existing investment in it — it just wasn’t going to do what we wanted. And that meant we needed to leave it behind, rewriting the ETL prototype.
cassandra  java  kafka  scala  network-partitions  availability  multi-region  multi-az  aws  replication  onlive 
november 2014 by jm
Load testing Apache Kafka on AWS
This is a very solid benchmarking post, examining Kafka in good detail. Nicely done. Bottom line:
I basically spend 2/3 of my work time torture testing and operationalizing distributed systems in production. There's some that I'm not so pleased with (posts pending in draft forever) and some that have attributes that I really love. Kafka is one of those systems that I pretty much enjoy every bit of, and the fact that it performs predictably well is only a symptom of the reason and not the reason itself: the authors really know what they're doing. Nothing about this software is an accident. Performance, everything in this post, is only a fraction of what's important to me and what matters when you run these systems for real. Kafka represents everything I think good distributed systems are about: that thorough and explicit design decisions win.
testing  aws  kafka  ec2  load-testing  benchmarks  performance 
october 2014 by jm
[KAFKA-1555] provide strong consistency with reasonable availability
Major improvements for Kafka consistency coming in 0.8.2; replication to multiple in-sync replicas, controlled by a new "min.isr" setting
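The setting shipped as min.insync.replicas; combined with acks=all on the producer it gives the "strong consistency with reasonable availability" trade-off. A minimal sketch of the producer side, with a placeholder broker address (the topic would be created with replication factor >= 3 and min.insync.replicas=2 via the usual tooling):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch only: producer that waits for the full in-sync replica set to ack.
public class ConsistentProducer {
    public static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("acks", "all");                         // ack from all in-sync replicas
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        return new KafkaProducer<>(props);
    }
}
```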
kafka  replication  cap  consistency  streams 
october 2014 by jm
FelixGV/tehuti
Felix says:

'Like I said, I'd like to move it to a more general / non-personal repo in the future, but haven't had the time yet. Anyway, you can still browse the code there for now. It is not a big code base so not that hard to wrap one's mind around it.

It is Apache licensed and both Kafka and Voldemort are using it, so I would say it is pretty self-contained (although Kafka has not moved to Tehuti proper, it is essentially the same code they're using, minus a few small fixes that we added).

Tehuti is a bit lower level than CodaHale (i.e.: you need to choose exactly which stats you want to measure and the boundaries of your histograms), but this is the type of stuff you would build a wrapper for and then re-use within your code base. For example: the Voldemort RequestCounter class.'
asl2  apache  open-source  tehuti  metrics  percentiles  quantiles  statistics  measurement  latency  kafka  voldemort  linkedin 
october 2014 by jm
Tehuti
An embryonic metrics library for Java/Scala from Felix GV at LinkedIn, extracted from Kafka's metric implementation and in the new Voldemort release. It fixes the major known problems with the Meter/Timer implementations in Coda-Hale/Dropwizard/Yammer Metrics.

'Regarding Tehuti: it has been extracted from Kafka's metric implementation. The code was originally written by Jay Kreps, and then maintained and improved by some Kafka and Voldemort devs, so it definitely is not the work of just one person. It is in my repo at the moment but I'd like to put it in a more generally available (git and maven) repo in the future. I just haven't had the time yet...

As for comparing with CodaHale/Yammer, there were a few concerns with it, but the main one was that we didn't like the exponentially decaying histogram implementation. While that implementation is very appealing in terms of (low) memory usage, it has several misleading characteristics (a lack of incoming data points makes old measurements linger longer than they should, and there's also a fairly high possibility of losing interesting outlier data points). This makes the exp decaying implementation robust in high-throughput, fairly constant workloads, but unreliable in sparse or spiky workloads. The Tehuti implementation provides semantics that we find easier to reason about, along with a small code footprint (which we consider a plus in terms of maintainability). Of course, it is still a fairly young project, so it could be improved further.'

More background at the kafka-dev thread: http://mail-archives.apache.org/mod_mbox/kafka-dev/201402.mbox/%3C131A7649-ED57-45CB-B4D6-F34063267664@linkedin.com%3E
kafka  metrics  dropwizard  java  scala  jvm  timers  ewma  statistics  measurement  latency  sampling  tehuti  voldemort  linkedin  jay-kreps 
october 2014 by jm
Integrating Kafka and Spark Streaming: Code Examples and State of the Game
Spark Streaming has been getting some attention lately as a real-time data processing tool, often mentioned alongside Apache Storm. [...] I added an example Spark Streaming application to kafka-storm-starter that demonstrates how to read from Kafka and write to Kafka, using Avro as the data format and Twitter Bijection for handling the data serialization. In this post I will explain this Spark Streaming example in further detail and also shed some light on the current state of Kafka integration in Spark Streaming. All this with the disclaimer that this happens to be my first experiment with Spark Streaming.
spark  kafka  realtime  architecture  queues  avro  bijection  batch-processing 
october 2014 by jm
Apache Kafka 0.8 basic training
This is a pretty voluminous and authoritative presentation about getting started with Kafka; wish this was around when we started using it for 0.7. (We use our own homegrown realtime system nowadays, due to better partitioning, monitoring and operability.)
storm  kafka  presentations  documentation  ops 
august 2014 by jm
All Data Are Belong to AWS: Streaming upload via Fluentd
Fluentd looks like a decent foundation for tailing/streaming event processing in Ruby, supporting batched output to S3 and a bunch of other AWS services, Kafka, and RabbitMQ for output. Claims to have ok performance, despite its Rubbitude. However, its high-availability story is shite, so not to be used where availability is important
ruby  rabbitmq  kafka  tail  event-streaming  cep  event-processing  s3  aws  sqs  fluentd 
august 2014 by jm
Real time analytics with Netty, Storm, Kafka
Arch of a fairly typical Kafka/Storm realtime ad-tracking setup, from eClick/mc2ads, via Trustin Lee
via:trustinlee  kafka  storm  netty  architecture  ad-tracking  ads  realtime 
august 2014 by jm
Pinterest Secor
Today we’re open sourcing Secor, a zero data loss log persistence service whose initial use case was to save logs produced by our monetization pipeline. Secor persists Kafka logs to long-term storage such as Amazon S3. It’s not affected by S3’s weak eventual consistency model, incurs no data loss, scales horizontally, and optionally partitions data based on date.
pinterest  hadoop  secor  storm  kafka  architecture  s3  logs  archival 
may 2014 by jm
Answer to How many topics (queues) can be created in Apache Kafka? - Quora
Good to know:
'As far as I understand (this was true as of 2013, when I last looked into this issue) there's at least one Apache ZooKeeper znode per topic in Kafka. While there is no hard limitation in Kafka itself (Kafka is linearly scalable), it does mean that the maximum number of znodes comfortably supported by ZooKeeper (on the order of about ten thousand) is the upper limit of Kafka's scalability as far as the number of topics goes.'
kafka  queues  zookeeper  znodes  architecture 
march 2014 by jm
Big, Small, Hot or Cold - Your Data Needs a Robust Pipeline
'(Examples [of big-data B-I crunching pipelines] from Stripe, Tapad, Etsy & Square)'
stripe  tapad  etsy  square  big-data  analytics  kafka  impala  hadoop  hdfs  parquet  thrift 
february 2014 by jm
Grape
a realtime processing engine, built on a persistent queue and a set of workers. 'The main goal is data availability and persistency. We created grape for those who cannot afford losing data'. It does this by allowing infinite expansion of the pending queue in Elliptics, their Dynamo-like horizontally-scaled storage backend.
kafka  queue  queueing  storage  realtime  fault-tolerance  grape  cep  event-processing 
november 2013 by jm
Call me maybe: Kafka
Aphyr takes a look at Kafka 0.8's replication with the Jepsen test suite. It doesn't go great. Jay Kreps responds here: http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen
jay-kreps  kafka  replication  distributed-systems  distcomp  networking  reliability  fault-tolerance  jepsen 
september 2013 by jm
Behind the Screens at Loggly
Boost ASIO at the front end (!), Kafka 0.8, Storm, and ElasticSearch
boost  scalability  loggly  logging  ingestion  cep  stream-processing  kafka  storm  architecture  elasticsearch 
september 2013 by jm
DataSift Architecture: Realtime Datamining at 120,000 Tweets Per Second
250 million tweets per day, 30-node HBase cluster, 400TB of storage, Kafka and 0mq.

This is from 2011, hence this dated line: 'for a distributed application they thought AWS was too limited, especially in the network. AWS doesn’t do well when nodes are connected together and they need to talk to each other. Not low enough latency network. Their customers care about latency.' (Nowadays, it would be damn hard to build a lower-latency network than that attached to a cc2.8xlarge instance.)
datasift  architecture  scalability  data  twitter  firehose  hbase  kafka  zeromq 
april 2013 by jm
Running a Multi-Broker Apache Kafka 0.8 Cluster on a Single Node
an excellent writeup on Kafka 0.8's use and operation, including details of the new replication features
kafka  replication  queueing  distributed  ops 
april 2013 by jm
Kafka 0.8 Producer Performance
Great benchmarking from Piotr Kozikowski at the LiveRamp team, into performance of the upcoming Kafka 0.8 release
performance  kafka  apache  benchmarks  ops  queueing 
april 2013 by jm
Compression in Kafka: GZIP or Snappy ?
With Ack: in this mode, as far as compression is concerned, the data gets compressed at the producer, decompressed and compressed on the broker before it sends the ack to the producer. The producer throughput with Snappy compression was roughly 22.3MB/s as compared to 8.9MB/s of the GZIP producer. Producer throughput is 150% higher with Snappy as compared to GZIP.

No ack, similar to Kafka 0.7 behavior: In this mode, the data gets compressed at the producer and it doesn’t wait for the ack from the broker. The producer throughput with Snappy compression was roughly 60.8MB/s as compared to 18.5MB/s of the GZIP producer. Producer throughput is 228% higher with Snappy as compared to GZIP. The higher compression savings in this test are due to the fact that the producer does not wait for the leader to re-compress and append the data; it simply compresses messages and fires away. Since Snappy has very high compression speed and low CPU usage, a single producer is able to compress the same amount of messages much faster as compared to GZIP.
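On the modern Java producer the codec choice is a single config, compression.type (the older Scala producer exposed it as compression.codec). A sketch, with a placeholder broker address:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

// Sketch only: pick the batch compression codec on the producer.
public class CompressedProducer {
    public static KafkaProducer<byte[], byte[]> build(String codec /* "snappy" or "gzip" */) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("compression.type", codec);             // batches compressed on the producer
        props.put("acks", "1");                           // the "with ack" mode discussed above
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());
        return new KafkaProducer<>(props);
    }
}
```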
gzip  snappy  compression  kafka  streaming  ops 
april 2013 by jm
TCP Tune
These notes are intended to help users and system administrators maximize TCP/IP performance on their computer systems. They summarize all of the end-system (computer system) network tuning issues including a tutorial on TCP tuning, easy configuration checks for non-experts, and a repository of operating system specific instructions for getting the best possible network performance on these platforms.


Some tips for maximizing HPC network performance for the intra-DC case; recommended by the LinkedIn Kafka operations page.
tuning  network  tcp  sysadmin  performance  ops  kafka  ec2 
april 2013 by jm
Tumblr Architecture - 15 Billion Page Views A Month And Harder To Scale Than Twitter
Buckets of details on Tumblr's innards. Fans of Finagle and Kafka, notably.
tumblr  scalability  web  finagle  redis  kafka 
november 2012 by jm
Autometrics: Self-service metrics collection
how LinkedIn built a service-metrics collection and graphing infrastructure using Kafka and Zookeeper, writing to RRD files, handling 8.8k metrics per datacenter per second
kafka  zookeeper  linkedin  sysadmin  service-metrics 
february 2012 by jm
Apache Kafka
'Kafka provides a publish-subscribe solution that can handle all activity stream data and processing on a consumer-scale web site. This kind of activity (page views, searches, and other user actions) is a key ingredient in many of the social features on the modern web. This data is typically handled by "logging" and ad hoc log aggregation solutions due to the throughput requirements. This kind of ad hoc solution is viable for providing logging data to an offline analysis system like Hadoop, but is very limiting for building real-time processing. Kafka aims to unify offline and online processing by providing a mechanism for parallel load into Hadoop as well as the ability to partition real-time consumption over a cluster of machines.' neat
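A minimal sketch of that publish-subscribe model using the modern Java clients (which postdate this early description); broker address, topic, and group id are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ActivityStreamExample {
    public static void main(String[] args) {
        // Publish one activity event.
        Properties p = new Properties();
        p.put("bootstrap.servers", "broker1:9092");        // placeholder
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("pageviews", "user-123", "/index.html"));
        }

        // Subscribe and read it back (a single short poll may return nothing in practice).
        Properties c = new Properties();
        c.put("bootstrap.servers", "broker1:9092");        // placeholder
        c.put("group.id", "analytics");                    // placeholder
        c.put("auto.offset.reset", "earliest");
        c.put("key.deserializer", StringDeserializer.class.getName());
        c.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("pageviews"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.key() + " -> " + r.value());
            }
        }
    }
}
```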
kafka  linkedin  apache  distributed  messaging  pubsub  queue  incubator  scaling 
february 2012 by jm

