jm + redis   39

Learn redis the hard way (in production) · trivago techblog
oh god this is pretty awful. this just reads like "don't try to use Redis at scale" to me
redis  scalability  ops  architecture  horror  trivago  php 
march 2017 by jm
How to do distributed locking
Martin Kleppmann's critique of the "Redlock" distributed-locking algorithm built on Redis. antirez responds here: http://antirez.com/news/101 ; summary of the follow-up discussion: https://storify.com/martinkl/redlock-discussion
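For context, the lock primitive under discussion is just SET NX PX plus a compare-and-delete release; here's a minimal single-instance sketch in Python with redis-py (names are illustrative -- the full Redlock algorithm repeats this acquire against a majority of independent masters within a time budget):

```python
import uuid
import redis

def acquire_lock(client: redis.Redis, resource: str, ttl_ms: int) -> str | None:
    """Try to take the lock; returns the token on success, None otherwise."""
    token = uuid.uuid4().hex
    # SET resource token NX PX ttl -- only succeeds if the key doesn't already exist
    if client.set(resource, token, nx=True, px=ttl_ms):
        return token
    return None

def release_lock(client: redis.Redis, resource: str, token: str) -> None:
    # Compare-and-delete so we never release a lock someone else has since acquired
    script = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    end
    return 0
    """
    client.eval(script, 1, resource, token)

# Kleppmann's point is that even with the multi-master dance, this is unsafe
# without fencing tokens if clients can pause (GC, packet delay) past the TTL.
```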
distributed  locking  redis  algorithms  coding  distcomp  redlock  martin-kleppman  zookeeper 
february 2016 by jm
"What the hell have you built"
cut out and keep PNG for many occasions! "Why is Redis talking to MongoDB?"
mongodb  redis  funny  architecture  gifs  png  reactiongifs 
january 2016 by jm
Nchan
Nchan is a scalable, flexible pub/sub server for the modern web, built as a module for the Nginx web server. It can be configured as a standalone server, or as a shim between your application and tens, thousands, or millions of live subscribers. It can buffer messages in memory, on-disk, or via Redis. All connections are handled asynchronously and distributed among any number of worker processes. It can also scale to many nginx server instances with Redis. Messages are published to channels with HTTP POST requests or websockets, and subscribed also through websockets, long-polling, EventSource (SSE), old-fashioned interval polling, and more. Each subscriber can listen to up to 255 channels per connection, and can be optionally authenticated via a custom application url. An events meta channel is also available for debugging.


Also now supports HTTP/2. This used to be called the Nginx HTTP Push Module, and I used it with great results in that form. This is the way to do HTTP push in all its forms....
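For flavour, a hedged sketch of driving Nchan from Python -- the /pub and /sub locations and the channel-id query argument are assumptions about how you've configured nginx, not fixed Nchan paths:

```python
import json
import requests

BASE = "http://localhost:8080"   # assumed nginx listen address
CHANNEL = "demo"                 # assumed channel id wired up via nchan_channel_id

# Publish: Nchan accepts messages as plain HTTP POSTs to a publisher location
requests.post(f"{BASE}/pub", params={"id": CHANNEL},
              data=json.dumps({"hello": "world"}),
              headers={"Content-Type": "application/json"})

# Subscribe via EventSource (SSE): stream the response and print each data line
with requests.get(f"{BASE}/sub", params={"id": CHANNEL},
                  headers={"Accept": "text/event-stream"}, stream=True) as resp:
    for line in resp.iter_lines():
        if line.startswith(b"data:"):
            print(line.decode())
```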
nginx  pubsub  websockets  sse  http  http-push  http2  redis  long-polling  nchan 
january 2016 by jm
Scaling Analytics at Amplitude
Good blog post on Amplitude's lambda architecture setup, based on S3 and a custom "real-time set database" they wrote themselves.

antirez' comment from a Redis angle on the set database: http://antirez.com/news/92

HN thread: https://news.ycombinator.com/item?id=10118413
lambda-architecture  analytics  via:hn  redis  set-storage  storage  databases  architecture  s3  realtime 
august 2015 by jm
Transparent huge pages implicated in Redis OOM
A nasty real-world prod error scenario worsened by THPs:
jemalloc(3) extensively uses madvise(2) to notify the operating system that it's done with a range of memory which it had previously malloc'ed. The page size on this machine is 2MB because transparent huge pages are in use. As such, a lot of the memory which is being marked with madvise(..., MADV_DONTNEED) is within substantially smaller ranges than 2MB. This means that the operating system never was able to evict pages which had ranges marked as MADV_DONTNEED because the entire page has to be unneeded to allow a page to be reused. Despite initially looking like a leak, the operating system itself was unable to free memory because of madvise(2) and transparent huge pages. This led to sustained memory pressure on the machine and redis-server eventually getting OOM killed.
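A rough illustration of the mechanism (not the OOM itself), assuming Python 3.8+ on Linux where mmap objects expose madvise():

```python
import mmap

HUGE = 2 * 1024 * 1024          # 2MB: the transparent-huge-page size on x86-64
buf = mmap.mmap(-1, 4 * HUGE)   # anonymous mapping, a candidate for THP backing
buf.write(b"x" * len(buf))      # touch every page so the memory is actually committed

# An allocator like jemalloc frees small sub-ranges this way. With 4KB pages the
# kernel can drop them immediately; with a 2MB huge page backing the range, nothing
# is reclaimable until the *entire* huge page is unneeded -- which is how "freed"
# memory stayed resident and redis-server eventually got OOM-killed.
buf.madvise(mmap.MADV_DONTNEED, 0, 64 * 1024)
```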
oom-killer  oom  linux  ops  thp  jemalloc  huge-pages  madvise  redis  memory 
march 2015 by jm
Vaurien, the Chaos TCP Proxy — Vaurien 1.8 documentation
Vaurien is basically a Chaos Monkey for your TCP connections. Vaurien acts as a proxy between your application and any backend. You can use it in your functional tests or even on a real deployment through the command-line.

Vaurien is a TCP proxy that simply reads data sent to it and passes it to a backend, and vice-versa. It has built-in protocols: TCP, HTTP, Redis & Memcache. The TCP protocol is the default one and just sucks data on both sides and passes it along.

Having higher-level protocols is mandatory in some cases, when Vaurien needs to read a specific amount of data in the sockets, or when you need to be aware of the kind of response you’re waiting for, and so on.

Vaurien also has behaviors. A behavior is a class that's going to be invoked every time Vaurien proxies a request. That's how you can impact the behavior of the proxy. For instance, adding a delay or degrading the response can be implemented in a behavior.

Both protocols and behaviors are plugins, allowing you to extend Vaurien by adding new ones.

Last (but not least), Vaurien provides a couple of APIs you can use to change the behavior of the proxy live. That’s handy when you are doing functional tests against your server: you can for instance start to add big delays and see how your web application reacts.
proxy  tcp  vaurien  chaos-monkey  testing  functional-testing  failures  sockets  redis  memcache  http 
february 2015 by jm
Comparing Message Queue Architectures on AWS
A good overview -- I like the summary table. tl;dr:
If you are light on DevOps and not latency sensitive use SQS for job management and Kinesis for event stream processing. If latency is an issue, use ELB or 2 RabbitMQs (or 2 beanstalkds) for job management and Redis for event stream processing.
amazon  architecture  aws  messaging  queueing  elb  rabbitmq  beanstalk  kinesis  sqs  redis  kafka 
february 2015 by jm
Testing fork time on AWS/Xen infrastructure
Redis uses forking to perform persistence flushes, which means that once every 30 minutes it performs like crap (and kills the 99th percentile latency). Given this, various Redis people have been benchmarking fork() times on various Xen platforms, since Xen has a crappy fork() implementation.
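The benchmark itself is tiny -- roughly the sketch below, where the parent's heap size is the interesting variable, since fork() has to copy the page tables for it:

```python
import os
import time

# Stand-in for the Redis dataset: ~1GB resident in the parent process.
heap = bytearray(1024 * 1024 * 1024)
for i in range(0, len(heap), 4096):
    heap[i] = 1            # touch each page so it's actually mapped

start = time.perf_counter()
pid = os.fork()
if pid == 0:
    os._exit(0)            # child: exit immediately, we only care about fork() cost
elapsed = time.perf_counter() - start   # parent resumes once fork() has returned
os.waitpid(pid, 0)
print(f"fork() took {elapsed * 1000:.1f} ms with ~1GB of parent heap")
```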
fork  xen  redis  bugs  performance  latency  p99 
october 2014 by jm
Game Day Exercises at Stripe: Learning from `kill -9`
We’ve started running game day exercises at Stripe. During a recent game day, we tested failing over a Redis cluster by running kill -9 on its primary node, and ended up losing all data in the cluster. We were very surprised by this, but grateful to have found the problem in testing. This result and others from this exercise convinced us that game days like these are quite valuable, and we would highly recommend them for others.


Excellent post. Game days are a great idea. Also: massive Redis clustering fail
game-days  redis  testing  stripe  outages  ops  kill-9  failover 
october 2014 by jm
Roshiak
a Riak-based clone of Roshi, the CRDT server built on top of Redis. some day I'll write up the CRDT we use on top of Voldemort in $work.

Comments: https://lobste.rs/s/tim5xc
riak  roshi  crdt  redis  storage  time-series-data 
october 2014 by jm
How Twitter Uses Redis to Scale
'105TB RAM, 39MM QPS, 10,000+ instances.' Notes from a talk given by Yao Yu of Twitter's Cache team, where she's worked for 4 years. Lots of interesting insights into large-scale Redis caching usage -- as in, large enough to max out the cluster hosts' network bandwidth.
twitter  redis  caching  memcached  yao-yu  scaling 
september 2014 by jm
Sirius by Comcast
At Comcast, our applications need convenient, low-latency access to important reference datasets. For example, our XfinityTV websites and apps need to use entertainment-related data to serve almost every API or web request to our datacenters: information like what year Casablanca was released, or how many episodes were in Season 7 of Seinfeld, or when the next episode of the Voice will be airing (and on which channel!).

We traditionally managed this information with a combination of relational databases and RESTful web services but yearned for something simpler than the ORM, HTTP client, and cache management code our developers dealt with on a daily basis. As main memory sizes on commodity servers continued to grow, however, we asked ourselves: How can we keep this reference data entirely in RAM, while ensuring it gets updated as needed and is easily accessible to application developers?

The Sirius distributed system library is our answer to that question, and we're happy to announce that we've made it available as an open source project. Sirius is written in Scala and uses the Akka actor system under the covers, but is easily usable by any JVM-based language.

Also includes a Paxos implementation with "fast follower" read-only slave replication. ASL2-licensed open source.

The only thing I can spot to be worried about is speed of startup; they note that apps need to replay a log at startup to rebuild state, which can be slow if unoptimized in my experience.

Update: in a twitter conversation at https://twitter.com/jon_moore/status/459363751893139456 , Jon Moore indicated they haven't had problems with this even with 'datasets consuming 10-20GB of heap', and have 'benchmarked a 5-node Sirius ingest cluster up to 1k updates/sec write throughput.' That's pretty solid!
open-source  comcast  paxos  replication  read-only  datastores  storage  memory  memcached  redis  sirius  scala  akka  jvm  libraries 
april 2014 by jm
Huge Redis rant
I want to emphasize that if you use redis as intended (as a slightly-persistent, not-HA cache), it's great. Unfortunately, more and more shops seem to be thinking that Redis is a full-service database and, as someone who's had to spend an inordinate amount of time maintaining such a setup, it's not. If you're writing software and you're thinking "hey, it would be easy to just put a SET key value in this code and be done," please reconsider. There are lots of great products out there that are better for the overwhelming majority of use cases.


Ouch. (via Aphyr)
redis  storage  architecture  memory  caching  ha  databases 
february 2014 by jm
Kelly "kellabyte" Sommers on Redis' "relaxed CP" approach to the CAP theorem

Similar to ACID properties, if you partially provide properties it means the user has to _still_ consider in their application that the property doesn't exist, because sometimes it doesn't. In your fsync example, if fsync is relaxed and there are no replicas, you cannot consider the database durable, just like you can't consider Redis a CP system. It can't be counted on for guarantees to be delivered. This is why I say these systems are hard for users to reason about. Systems that partially offer guarantees require in-depth knowledge of the nuances to properly use the tool. Systems that explicitly make the trade-offs in the designs are easier to reason about because it is more obvious and _predictable_.
kellabyte  redis  cp  ap  cap-theorem  consistency  outages  reliability  ops  database  storage  distcomp 
december 2013 by jm
Benchmarking Redis on AWS ElastiCache
good data points, but could do with latency percentiles
latency  redis  measurement  benchmarks  ec2  elasticache  aws  storage  tests 
september 2013 by jm
Twilio Billing Incident Post-Mortem
At 1:35 AM PDT on July 18, a loss of network connectivity caused all billing redis-slaves to simultaneously disconnect from the master. This caused all redis-slaves to reconnect and request full synchronization with the master at the same time. Receiving full sync requests from each redis-slave caused the master to suffer extreme load, resulting in performance degradation of the master and timeouts from redis-slaves to redis-master.
By 2:39 AM PDT the host’s load became so extreme, services relying on redis-master began to fail. At 2:42 AM PDT, our monitoring system alerted our on-call engineering team of a failure in the Redis cluster. Observing extreme load on the host, the redis process on redis-master was misdiagnosed as requiring a restart to recover. This caused redis-master to read an incorrect configuration file, which in turn caused Redis to attempt to recover from a non-existent AOF file, instead of the binary snapshot. As a result of that failed recovery, redis-master dropped all balance data. In addition to forcing recovery from a non-existent AOF, an incorrect configuration also caused redis-master to boot as a slave of itself, putting it in read-only mode and preventing the billing system from updating account balances.

See also http://antirez.com/news/60 for antirez' response.

Here's the takeaways I'm getting from it:

1. network partitions happen in production, and cause cascading failures. this is a great demo of that.

2. don't store critical data in Redis. this was the case for Twilio -- as far as I can tell they were using Redis as a front-line cache for billing data -- but it's worth saying anyway. ;)

3. Twilio were just using Redis as a cache, but a bug in their code meant that the writes to the backing SQL store were not being *read*, resulting in repeated billing and customer impact. In other words, it turned a (fragile) cache into the authoritative store.

4. they should probably have designed their code so that write failures would not result in repeated billing for customers -- that's a bad failure path.

Good post-mortem anyway, and I'd say their customers are a good deal happier to see this published, even if it contains details of the mistakes they made along the way.
redis  caching  storage  networking  network-partitions  twilio  postmortems  ops  billing  replication 
july 2013 by jm
Pinterest's follower graph store, built on Redis
This is a good, high-availability Redis configuration; sharded by userid across 8192 shards, with a Redis master/slave pair of instances for each set of N shards. I like their use of two redundancy systems -- hot slave and backup snapshots:
We run our cluster in a Redis master-slave configuration, and the slaves act as hot backups. Upon a master failure, we failover the slave as the new master and either bring up a new slave or reuse the old master as the new slave. We rely on ZooKeeper to make this as quick as possible.

Each master Redis instance (and slave instance) is configured to write to AOF on Amazon EBS. This ensures that if the Redis instances terminate unexpectedly then the loss of data is limited to 1 second of updates. The slave Redis instances also perform BGsave hourly which is then loaded to a more permanent store (Amazon S3). This copy is also used by Map Reduce jobs for analytics.

As a production system, we need many failure modes to guard ourselves. As mentioned, if the master host is down, we will manually failover to slave. If a single master Redis instance reboots, monit restart restores from AOF, implying a 1 second window of data loss on the shards on that instance. If the slave host goes down, we bring up a replacement. If a single slave Redis instance goes down, we rely on monit to restart using the AOF data. Because we may encounter AOF or BGsave file corruption, we BGSave and copy hourly backups to S3. Note that large file sizes can cause BGsave induced delays but in our cluster this is mitigated by smaller Redis data due to the sharding scheme.
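A hedged sketch of the sharding scheme as described -- a fixed shard count derived from the user id, with contiguous shard ranges mapped to master/slave Redis pairs. The hostnames, modulo hashing, and key layout here are illustrative, not Pinterest's actual values:

```python
import redis

NUM_SHARDS = 8192

# Each entry owns a contiguous range of shards and points at a master/slave pair.
SHARD_MAP = [
    {"range": range(0, 4096),    "master": "redis-a1:6379", "slave": "redis-a2:6379"},
    {"range": range(4096, 8192), "master": "redis-b1:6379", "slave": "redis-b2:6379"},
]

def shard_for(user_id: int) -> int:
    return user_id % NUM_SHARDS

def client_for(user_id: int, read_only: bool = False) -> redis.Redis:
    shard = shard_for(user_id)
    for entry in SHARD_MAP:
        if shard in entry["range"]:
            host, port = (entry["slave"] if read_only else entry["master"]).split(":")
            return redis.Redis(host=host, port=int(port))
    raise ValueError(f"no instance owns shard {shard}")

# e.g. followers for user 42 live on whichever pair owns shard_for(42):
# client_for(42).smembers(f"followers:{shard_for(42)}:42")
```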
graph  redis  architecture  ha  high-availability  design  redundancy  sharding 
july 2013 by jm
The Architecture Twitter Uses to Deal with 150M Active Users, 300K QPS, a 22 MB/S Firehose, and Send Tweets in Under 5 Seconds
Good read.
Twitter is primarily a consumption mechanism, not a production mechanism. 300K QPS are spent reading timelines and only 6000 requests per second are spent on writes.


* their approach of precomputing the timeline for the non-search case is a good example of optimizing for the more frequently-exercised path.

* MySQL and Redis are the underlying stores. Redis is acting as a front-line in-RAM cache. they're pretty happy with it: https://news.ycombinator.com/item?id=6011254

* these further talks go into more detail, apparently (haven't watched them yet):

http://www.infoq.com/presentations/Real-Time-Delivery-Twitter
http://www.infoq.com/presentations/Twitter-Timeline-Scalability
http://www.infoq.com/presentations/Timelines-Twitter

* funny thread of comments on HN, from a big-iron fan: https://news.ycombinator.com/item?id=6008228
scale  architecture  scalability  twitter  high-scalability  redis  mysql 
july 2013 by jm
Instagram: Making the Switch to Cassandra from Redis, a 75% 'Insta' Savings
shifting data out of RAM and onto SSDs -- unsurprisingly, big savings.
a 12 node cluster of EC2 hi1.4xlarge instances; we store around 1.2TB of data across this cluster. At peak, we're doing around 20,000 writes per second to that specific cluster and around 15,000 reads per second. We've been really impressed with how well Cassandra has been able to drop into that role.
ram  ssd  cassandra  databases  nosql  redis  instagram  storage  ec2 
june 2013 by jm
Reducing MongoDB traffic by 78% with Redis | Crashlytics Blog
One for @roflscaletips. Crashlytics reduce MongoDB load by hacking some hand-coded caching into their Rails app, instead of just using a front-line HTTP cache to reduce Rails *and* db load. duh. (via Oisin)
crashlytics  fail  roflscale  rails  caching  redis  ruby  via:oisin 
may 2013 by jm
Call me maybe: Carly Rae Jepsen and the perils of network partitions
Kyle "aphyr" Kingsbury expands on his slides demonstrating the real-world failure scenarios that arise during some kinds of partitions (specifically, the TCP-hang, no clear routing failure, network partition scenario). Great set of blog posts clarifying CAP
distributed  network  databases  cap  nosql  redis  mongodb  postgresql  riak  crdt  aphyr 
may 2013 by jm
High Scalability - Scaling Pinterest - From 0 to 10s of Billions of Page Views a Month in Two Years
wow, Pinterest have a pretty hardcore architecture. Sharding to the max. This is scary stuff for me:
a [Cassandra-style] Cluster Management Algorithm is a SPOF. If there’s a bug it impacts every node. This took them down 4 times.


yeah, so, eek ;)
clustering  sharding  architecture  aws  scalability  scaling  pinterest  via:matt-sergeant  redis  mysql  memcached 
april 2013 by jm
aaw/hyperloglog-redis - GitHub
'This gem is a pure Ruby implementation of the HyperLogLog algorithm for estimating cardinalities of sets observed via a stream of events. A Redis instance is used for storing the counters.'
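The gem predates Redis's native HyperLogLog support (PFADD/PFCOUNT, added in Redis 2.8.9); for comparison, a minimal sketch of the same cardinality-estimation idea using the built-in commands from redis-py (key names are made up):

```python
import redis

r = redis.Redis()

# Feed a stream of events into an HLL counter; memory stays roughly constant per key
for user_id in ("u1", "u2", "u3", "u1", "u2"):
    r.pfadd("daily:2013-01-15:visitors", user_id)

print(r.pfcount("daily:2013-01-15:visitors"))   # ~3 -- an estimate, not an exact count

# Counters can be merged, e.g. rolling daily keys up into a weekly one
r.pfmerge("weekly:2013-w03:visitors",
          "daily:2013-01-15:visitors", "daily:2013-01-16:visitors")
```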
cardinality  sets  redis  algorithms  ruby  gems  hyperloglog 
january 2013 by jm
Scaling Crashlytics: Building Analytics on Redis 2.6
How one analytics/metrics co is using Redis on the backend
analytics  redis  presentation  metrics 
january 2013 by jm
Special encoding of small aggregate data types in Redis
Nice performance trick in Redis on hash storage:

'In theory, in order to guarantee that we perform lookups in constant time (also known as O(1) in big O notation) there is the need to use a data structure with a constant time complexity in the average case, like a hash table. But many times hashes contain just a few fields. When hashes are small we can instead just encode them in an O(N) data structure, like a linear array with length-prefixed key value pairs. Since we do this only when N is small, the amortized time for HGET and HSET commands is still O(1): the hash will be converted into a real hash table as soon as the number of elements it contains grows too large (you can configure the limit in redis.conf). This works well not just from the point of view of time complexity, but also from the point of view of constant factors, since a linear array of key value pairs happens to play very well with the CPU cache (it has better cache locality than a hash table).'
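You can watch the conversion happen; a small sketch with redis-py (the compact encoding is reported as 'ziplist' on Redis versions of this era, 'listpack' on recent ones, and the default limit is hash-max-ziplist-entries = 128):

```python
import redis

r = redis.Redis()
r.delete("h")

r.hset("h", "field1", "value1")
print(r.object("encoding", "h"))   # b'ziplist' (or b'listpack'): compact linear encoding

# Exceed the configured entry limit and Redis converts to a real hash table
for i in range(200):
    r.hset("h", f"field{i}", f"value{i}")
print(r.object("encoding", "h"))   # b'hashtable'
```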
memory  redis  performance  big-o  hash-tables  storage  coding  cache  arrays 
november 2012 by jm
Tumblr Architecture - 15 Billion Page Views A Month And Harder To Scale Than Twitter
Buckets of details on Tumblr's innards. fans of Finagle and Kafka, notably
tumblr  scalability  web  finagle  redis  kafka 
november 2012 by jm
Scaling: It's Not What It Used To Be
skamille's top 5 scaling apps. "1. Redis. I was at a NoSQL meetup last night when someone asked "if you could put a million dollars behind one of the solutions presented here tonight, which one would you choose?" And the answer that one of the participants gave was "None of the above. I would choose Redis. Everyone uses one of these products and Redis."
2. Nginx. Your ops team probably already loves it. It's simple, it scales fabulously, and you don't have to be a programmer to understand how to run it.
3. HAProxy. Because if you're going to have hundreds or thousands of servers, you'd better have good load balancing.
4. Memcached. Redis can act as a cache but using a real caching product for such a purpose is probably a better call.
And finally:
5. Cloud hardware. Imagine trying to grow out to millions of users if you had to buy, install, and admin every piece of hardware you would need to do such a thing."
scaling  nginx  memcached  haproxy  redis 
april 2012 by jm
feedback loop n-gram analyzer
'a simple parser of ARF compliant FBL complaints, which normalizes the email complaints and generates a 6-tuple n-gram version of the message. These n-grams are stored in a Redis database, keyed by the file in which they can be found. An inverse index also exists that allow you to find all messages containing a particular n-gram word.'
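A hedged sketch of the data model described -- a forward index keyed by file and an inverse index keyed by n-gram, both as Redis sets. The tokenisation below is illustrative, not the tool's actual normalisation:

```python
import redis

r = redis.Redis()

def ngrams(text: str, n: int = 6):
    """Yield word-level n-grams (6-tuples by default) from a normalized message."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def index_complaint(filename: str, body: str) -> None:
    for gram in ngrams(body):
        r.sadd(f"file:{filename}:ngrams", gram)   # forward index: file -> n-grams
        r.sadd(f"ngram:{gram}:files", filename)   # inverse index: n-gram -> files

def files_containing(gram: str) -> set:
    return {f.decode() for f in r.smembers(f"ngram:{gram}:files")}
```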
anti-spam  spam  fbl  feedback  filtering  n-grams  similarity  hashing  redis  searching 
september 2011 by jm
How we use Redis at Bump
via Simon Willison. some nice ideas here, particularly using a replication slave to handle the potentially latency-impacting disk writes in AOF mode
queueing  redis  nosql  databases  storage  via:simonw  replication  bump 
july 2011 by jm
GitHub scheduled maintenance due to Redis upgrade
good comments on the processes useful for large-scale Redis upgrades
upgrades  redis  spof  nosql  databases  github  deployment  from delicious
may 2010 by jm
A fast, fuzzy, full-text index using Redis
quite easy, using a Metaphone sound-like indexing scheme to provide the fuzz
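Roughly the trick being described, assuming the third-party jellyfish library for the Metaphone encoding: index each word under its phonetic code, so sound-alike queries land on the same Redis set.

```python
import redis
import jellyfish   # assumed dependency, provides metaphone()

r = redis.Redis()

def index_document(doc_id: str, text: str) -> None:
    for word in text.lower().split():
        # Words that *sound* alike collapse to the same key -- that's the fuzz
        r.sadd(f"metaphone:{jellyfish.metaphone(word)}", doc_id)

def fuzzy_search(query_word: str) -> set:
    return {d.decode() for d in r.smembers(f"metaphone:{jellyfish.metaphone(query_word)}")}

index_document("doc1", "Redis makes a fast fuzzy index")
print(fuzzy_search("redies"))   # should phonetically match "redis" -> {'doc1'}
```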
metaphone  sounds-like  indexing  python  redis  search  full-text  fuzzy  from delicious
may 2010 by jm
Why I like Redis
Simon Willison plugs Redis as a good datastore for quick-hack scripts with requirements for lots of fast, local data storage -- the kind of thing I'd often use a DB_File for
python  storage  databases  schemaless  nosql  redis  simon-willison  data-store  from delicious
october 2009 by jm

