mpm / queuing: 6 bookmarks

The Amazon Builders' Library
The Amazon Builders’ Library is a collection of living articles that describe how Amazon develops, architects, releases, and operates technology.
architecture  queuing  availability  load-balancing 
4 days ago by mpm
Fail at Scale
Our services process requests using adaptive LIFO. During normal operating conditions, requests are processed in FIFO order, but when a queue starts to form, the server switches to LIFO mode.
queuing  scalability 
6 weeks ago by mpm
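The adaptive LIFO idea from "Fail at Scale" can be sketched in a few lines: serve oldest-first while the queue is short, but once a backlog forms, serve newest-first, since the newest requests belong to clients most likely still waiting. This is a minimal single-threaded illustration; the class name and the threshold value are assumptions for the example, not details from the article.

```python
from collections import deque

class AdaptiveLifoQueue:
    """Sketch of adaptive LIFO: FIFO while the queue is short,
    LIFO once a backlog forms. Threshold is illustrative."""

    def __init__(self, lifo_threshold=10):
        self._items = deque()
        self._threshold = lifo_threshold

    def put(self, request):
        self._items.append(request)

    def get(self):
        if len(self._items) > self._threshold:
            return self._items.pop()      # LIFO: serve the newest request
        return self._items.popleft()      # FIFO: serve the oldest request

q = AdaptiveLifoQueue(lifo_threshold=2)
for r in ["a", "b", "c", "d"]:
    q.put(r)
# Backlog has formed (4 > 2), so the newest request "d" is served first.
print(q.get())  # d
```

Under sustained overload this keeps fresh requests fast at the cost of letting stale ones (whose callers have likely timed out anyway) wait.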
Erlang/OTP 21's new logger
With OTP 21 came a new logging library in Erlang called logger. It is an attempt to offer a built-in alternative to long-running, successful projects such as lager, which have seen years of battle testing and performance tweaking. I've seen few articles written about the new logger and what it can do, so I've decided to write one.
erlang  logging  queuing 
december 2018 by mpm
Telling Stories About Little's Law
I like Little's Law as a mathematical tool, but also as a narrative tool. It provides a powerful way to frame stories about system behavior.
july 2018 by mpm
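Little's Law itself is just L = λW: the average number of items in a system equals the arrival rate times the average time each item spends in the system. A quick worked example with illustrative numbers (the rate and latency below are assumptions, chosen for the arithmetic):

```python
# Little's Law: L = lambda * W
arrival_rate = 100.0   # requests per second (illustrative)
avg_latency = 0.05     # average seconds each request spends in the system
avg_in_flight = arrival_rate * avg_latency
print(avg_in_flight)   # 5.0 requests in the system on average
```

The narrative power comes from being able to solve for any one of the three quantities when you can measure the other two.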
Things we (finally) know about network queues
How big should your queue be, and what should you do when it fills up? Many times, we implement or even deploy a networking system before we have answered those questions. Luckily, recent research has given us some guidelines. Here's what we know.
queuing  protocol 
august 2017 by mpm
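The two questions above, how big and what to do when full, show up concretely in any bounded queue. A minimal sketch of one common answer, tail drop (reject the newest arrival when full); head drop, which evicts the oldest entry instead, is a common alternative. The class and capacity below are illustrative assumptions, not recommendations from the article:

```python
from collections import deque

class BoundedQueue:
    """Bounded FIFO with tail drop: a full queue rejects newcomers.
    Capacity is an arbitrary illustration."""

    def __init__(self, capacity):
        self._items = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self._items) >= self.capacity:
            self.dropped += 1   # tail drop: reject the new arrival
            return False
        self._items.append(pkt)
        return True

    def dequeue(self):
        return self._items.popleft() if self._items else None

q = BoundedQueue(capacity=2)
q.enqueue("p1")
q.enqueue("p2")
q.enqueue("p3")        # queue full: dropped
print(q.dropped)       # 1
```

The size/drop-policy trade-off is the core tension: a larger queue absorbs bursts but adds latency for everything behind the backlog.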
The SprayList: A Scalable Relaxed Priority Queue
High-performance concurrent priority queues are essential for applications such as task scheduling and discrete event simulation. Unfortunately, even the best performing implementations do not scale past a number of threads in the single digits. This is because of the sequential bottleneck in accessing the elements at the head of the queue in order to perform a DeleteMin operation. In this paper, we present the SprayList, a scalable priority queue with relaxed ordering semantics. Starting from a nonblocking SkipList, the main innovation behind our design is that the DeleteMin operations avoid a sequential bottleneck by “spraying” themselves onto the head of the SkipList in a coordinated fashion. The spraying is implemented using a carefully designed random walk, so that DeleteMin always returns an element among the first O(p polylog p) in the list, where p is the number of threads. We prove that the expected running time of a DeleteMin operation is poly-logarithmic in p, independent of the size of the list, and also provide analytic upper bounds on the number of possible priority inversions for an element. Our experiments show that the relaxed semantics allow the data structure to scale for very high thread counts, comparable to a classic unordered SkipList. Furthermore, we observe that, for reasonably parallel workloads, the scalability benefits of relaxation considerably outweigh the additional work due to out-of-order execution.
concurrency  performance  queuing 
november 2014 by mpm
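The relaxed semantics the abstract describes can be illustrated sequentially: DeleteMin returns *some* element near the head rather than the exact minimum, which in the real concurrent structure lets p threads avoid contending on a single head node. This toy sketch replaces the paper's random walk over a lock-free SkipList with a uniform choice over a sorted Python list; the class name and spray width are assumptions for the example:

```python
import bisect
import random

class RelaxedPriorityQueue:
    """Toy illustration of SprayList-style relaxed semantics:
    delete_min returns some element among the smallest spray_width
    keys, not necessarily the exact minimum."""

    def __init__(self, spray_width=4):
        self._items = []        # kept sorted ascending
        self._width = spray_width

    def insert(self, key):
        bisect.insort(self._items, key)

    def delete_min(self):
        if not self._items:
            raise IndexError("delete_min from empty queue")
        k = min(self._width, len(self._items))
        idx = random.randrange(k)   # "spray" uniformly into the head region
        return self._items.pop(idx)

pq = RelaxedPriorityQueue(spray_width=3)
for key in [5, 1, 4, 2, 3]:
    pq.insert(key)
# Returns one of the three smallest keys: 1, 2, or 3.
print(pq.delete_min())
```

The paper sets the head-region width to O(p polylog p) so that, with p concurrent threads, sprays rarely collide while every returned element is still close to the true minimum.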
