jm + snappy   4

How eBay’s Shopping Cart used compression techniques to solve network I/O bottlenecks
Compressing data written to MongoDB using LZ4_HIGH dropped oplog write rates from 150GB/hour to 11GB/hour; Snappy and Gzip didn't fare as well by comparison.
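A minimal sketch of the idea (not eBay's code): compress a JSON payload before storing it in MongoDB, comparing LZ4 at a high-compression setting against Snappy and gzip. Assumes the Python `lz4` and `python-snappy` packages; the document shape and codec field names are made up.

```python
# Compare LZ4 (high compression), Snappy and gzip on a JSON blob before it is
# written to MongoDB. Illustrative only -- the document shape is hypothetical.
import gzip
import json

import lz4.frame
import snappy

doc = json.dumps({
    "cart_id": "abc123",
    "items": [{"sku": f"SKU-{i}", "qty": 1, "price": 9.99} for i in range(200)],
}).encode("utf-8")

candidates = {
    "lz4-high": lz4.frame.compress(doc, compression_level=lz4.frame.COMPRESSIONLEVEL_MINHC),
    "snappy": snappy.compress(doc),
    "gzip": gzip.compress(doc),
}

for name, blob in candidates.items():
    print(f"{name:9s} {len(blob):6d} bytes ({len(blob) / len(doc):.1%} of original)")

# The compressed bytes (plus the codec name) could then be stored as a binary field,
# e.g. collection.insert_one({"cart_id": "abc123", "codec": "lz4", "payload": blob}),
# so only the small blob hits the oplog.
```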
lz4  compression  gzip  json  snappy  scaling  ebay  mongodb 
10 weeks ago by jm
Announcing Snappy Ubuntu
Awesome! I was completely unaware this was coming down the pipeline.
A new, transactionally updated Ubuntu for the cloud. Ubuntu Core is a new rendition of Ubuntu for the cloud with transactional updates. Ubuntu Core is a minimal server image with the same libraries as today’s Ubuntu, but applications are provided through a simpler mechanism. The snappy approach is faster, more reliable, and lets us provide stronger security guarantees for apps and users — that’s why we call them “snappy” applications.

Snappy apps and Ubuntu Core itself can be upgraded atomically and rolled back if needed — a bulletproof approach to systems management that is perfect for container deployments. It’s called “transactional” or “image-based” systems management, and we’re delighted to make it available on every Ubuntu certified cloud.
ubuntu  linux  packaging  snappy  ubuntu-core  transactional-updates  apt  docker  ops 
december 2014 by jm
Compression in Kafka: GZIP or Snappy ?
With ack: in this mode, as far as compression is concerned, the data gets compressed at the producer, then decompressed and re-compressed on the broker before it sends the ack back to the producer. The producer throughput with Snappy compression was roughly 22.3MB/s as compared to 8.9MB/s of the GZIP producer. Producer throughput is 150% higher with Snappy as compared to GZIP.

No ack, similar to Kafka 0.7 behavior: in this mode, the data gets compressed at the producer and it doesn't wait for the ack from the broker. The producer throughput with Snappy compression was roughly 60.8MB/s as compared to 18.5MB/s of the GZIP producer. Producer throughput is 228% higher with Snappy as compared to GZIP. The larger throughput gain in this test is due to the fact that the producer does not wait for the leader to re-compress and append the data; it simply compresses messages and fires away. Since Snappy has very high compression speed and low CPU usage, a single producer is able to compress the same amount of messages much faster as compared to GZIP.
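A quick sketch of the two modes using kafka-python (an assumption; the original benchmark used the Java producer). `compression_type` selects gzip or snappy, `acks=1` waits for the broker, and `acks=0` is the fire-and-forget "no ack" mode; broker address and topic name are hypothetical.

```python
# Toggle codec and ack behaviour on a Kafka producer (kafka-python bindings).
from kafka import KafkaProducer

def make_producer(codec: str, wait_for_ack: bool) -> KafkaProducer:
    return KafkaProducer(
        bootstrap_servers="localhost:9092",   # hypothetical broker address
        compression_type=codec,               # "gzip" or "snappy"
        acks=1 if wait_for_ack else 0,        # 0 = don't wait for the broker
        batch_size=64 * 1024,                 # bigger batches compress better
        linger_ms=50,
    )

producer = make_producer("snappy", wait_for_ack=False)
for i in range(10_000):
    producer.send("test-topic", f"message-{i}".encode("utf-8"))
producer.flush()
```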
gzip  snappy  compression  kafka  streaming  ops 
april 2013 by jm
snappy - A fast compressor/decompressor
'On a single core of a Core i7 processor in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more. (These numbers are for the slowest inputs in our benchmark suite; others are much faster.) In our tests, Snappy usually is faster than algorithms in the same class (e.g. LZO, LZF, FastLZ, QuickLZ, etc.) while achieving comparable compression ratios.'  Apache-licensed, from Google
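A tiny round-trip sketch via the python-snappy bindings (an assumption; the project itself is a C++ library). It times compress/uncompress on a repetitive buffer; absolute numbers will obviously differ from Google's benchmark suite.

```python
# Round-trip a compressible buffer through Snappy and report size and time.
import time

import snappy

data = b"some fairly repetitive payload " * 4096  # ~128 KB of compressible input

start = time.perf_counter()
compressed = snappy.compress(data)
restored = snappy.uncompress(compressed)
elapsed = time.perf_counter() - start

assert restored == data
print(f"{len(data)} -> {len(compressed)} bytes, round trip in {elapsed * 1000:.2f} ms")
```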
snappy  google  compression  speed  from delicious
march 2011 by jm
