jm + concurrency (48 bookmarks)

MemC3: Compact and concurrent Memcache with dumber caching and smarter hashing
Presents an improved hashing algorithm called optimistic cuckoo hashing, plus a CLOCK-based eviction algorithm that works in tandem with it. Evaluated in the context of Memcached, where combined they give up to a 30% reduction in memory usage and up to a 3x improvement in queries per second over the default Memcached implementation on read-heavy workloads with small objects (as typified by Facebook's workloads).
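For flavour, a minimal single-threaded sketch of plain two-choice cuckoo hashing, the structure MemC3 builds its optimistic, tag-based variant on; the class, hash mixing, and sizing here are illustrative, not MemC3's code:

    import java.util.Objects;

    // Each key has exactly two candidate buckets; lookups probe at most both,
    // which is what lets MemC3 bound read cost and make reads optimistic.
    public class CuckooHash<K, V> {
        private static final int MAX_KICKS = 32;
        private final Object[] keys;
        private final Object[] vals;
        private final int mask;

        public CuckooHash(int capacityPow2) {  // capacity must be a power of two
            keys = new Object[capacityPow2];
            vals = new Object[capacityPow2];
            mask = capacityPow2 - 1;
        }

        private int h1(K k) { return (k.hashCode() & 0x7fffffff) & mask; }
        private int h2(K k) { return ((k.hashCode() * 0x9E3779B9) >>> 16) & mask; }

        @SuppressWarnings("unchecked")
        public V get(K k) {
            int i1 = h1(k), i2 = h2(k);
            if (Objects.equals(keys[i1], k)) return (V) vals[i1];
            if (Objects.equals(keys[i2], k)) return (V) vals[i2];
            return null;
        }

        // Insert "kicks" residents along a cuckoo path until a slot frees up.
        // (A sketch: no update-in-place for existing keys, no resize.)
        public boolean put(K k, V v) {
            Object curK = k, curV = v;
            int idx = h1(k);
            for (int n = 0; n < MAX_KICKS; n++) {
                if (keys[idx] == null) { keys[idx] = curK; vals[idx] = curV; return true; }
                Object evictedK = keys[idx], evictedV = vals[idx];
                keys[idx] = curK; vals[idx] = curV;
                curK = evictedK; curV = evictedV;
                @SuppressWarnings("unchecked") K ek = (K) curK;
                idx = (idx == h1(ek)) ? h2(ek) : h1(ek);  // move to alternate bucket
            }
            return false;  // path too long: a real table would rehash or grow
        }
    }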
memcached  performance  key-value-stores  storage  databases  cuckoo-hashing  algorithms  concurrency  caching  cache-eviction  memory  throughput 
november 2016 by jm
How to Quantify Scalability
good page on the Universal Scalability Law and how to apply it
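The law itself is compact; a hedged Java illustration with invented coefficients (alpha models contention, beta models coherency cost -- fit them from real measurements):

    // Universal Scalability Law: C(N) = N / (1 + a(N-1) + bN(N-1))
    public class Usl {
        static double capacity(double n, double alpha, double beta) {
            return n / (1 + alpha * (n - 1) + beta * n * (n - 1));
        }

        public static void main(String[] args) {
            double alpha = 0.05, beta = 0.001;  // illustrative, not measured
            for (int n : new int[] {1, 8, 32, 128}) {
                System.out.printf("N=%d -> relative capacity %.1f%n",
                        n, capacity(n, alpha, beta));
            }
        }
    }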
usl  performance  scalability  concurrency  capacity  measurement  excel  equations  metrics 
september 2016 by jm
ztellman/dirigiste
'centrally-planned object and thread pools' for java.

'In the default JVM thread pools, once a thread is created it will only be retired when it hasn't performed a task in the last minute. In practice, this means that there are as many threads as the peak historical number of concurrent tasks handled by the pool, forever. These thread pools are also poorly instrumented, making it difficult to tune their latency or throughput. Dirigiste provides a fast, richly instrumented version of a java.util.concurrent.ExecutorService, and provides a means to feed that instrumentation into a control mechanism that can grow or shrink the pool as needed. Default implementations that optimize the pool size for thread utilization are provided. It also provides an object pool mechanism that uses a similar feedback mechanism to resize itself, and is significantly simpler than the Apache Commons object pool implementation.'

Great metric support, too.
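A hedged usage sketch per the project README (the utilizationExecutor factory is dirigiste's documented entry point; verify against the release you use):

    import io.aleph.dirigiste.Executors;
    import java.util.concurrent.ExecutorService;

    public class DirigisteDemo {
        public static void main(String[] args) {
            // target 90% thread utilization, capped at 64 threads; the pool
            // resizes itself from its own utilization instrumentation
            ExecutorService pool = Executors.utilizationExecutor(0.9, 64);
            pool.submit(() -> System.out.println("hello from dirigiste"));
            pool.shutdown();
        }
    }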
async  jvm  dirigiste  java  threadpools  concurrency  utilization  capacity  executors  object-pools  object-pooling  latency 
june 2016 by jm
Conversant ConcurrentQueue and Disruptor BlockingQueue
'Disruptor is the highest performing inter-thread transfer mechanism available in Java. Conversant Disruptor is the highest performing implementation of this type of ring buffer queue because it has almost no overhead and it exploits a particularly simple design.

Conversant has been using this in production since 2012 and the performance is excellent. The BlockingQueue implementation is very stable, although we continue to tune and improve it. The latest release, 1.2.4, is 100% production ready.

Although we have been working on it for a long time, we decided to open source our BlockingQueue this year to contribute something back to the community. ... it's a drop-in for BlockingQueue, so it's a very easy test. Conversant Disruptor will crush ArrayBlockingQueue and LinkedTransferQueue for thread-to-thread transfers.

In our system, we noticed a 10-20% reduction in overall system load and latency when we introduced it.'
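Since it is a drop-in BlockingQueue, trying it is a one-line change; a hedged sketch (class and package per the project's docs -- verify against your release):

    import com.conversantmedia.util.concurrent.DisruptorBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ConversantDemo {
        public static void main(String[] args) throws InterruptedException {
            // bounded ring-buffer-backed queue, standard BlockingQueue contract
            BlockingQueue<String> q = new DisruptorBlockingQueue<>(1024);
            q.put("event");                // blocks when full, like ArrayBlockingQueue
            System.out.println(q.take());  // prints "event"
        }
    }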
disruptor  blocking-queues  queues  queueing  data-structures  algorithms  java  conversant  concurrency  performance 
march 2016 by jm
The Importance of Tuning Your Thread Pools
Excellent blog post on thread pools, backpressure, Little's Law, and other Hystrix-related topics (PS: use Hystrix)
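As a back-of-envelope illustration of the Little's Law sizing argument the post makes (the numbers here are invented):

    // Little's Law: L = lambda * W (tasks in flight = arrival rate x latency)
    public class PoolSizing {
        public static void main(String[] args) {
            double lambda = 200.0;  // tasks per second the pool must absorb
            double w = 0.050;       // mean task latency in seconds (50 ms)
            double l = lambda * w;  // ~10 tasks in flight on average
            // size somewhat above L for bursts; excess work should be rejected
            // (backpressure) rather than queued without bound
            int poolSize = (int) Math.ceil(l * 1.5);
            System.out.printf("L = %.1f, suggested pool size = %d%n", l, poolSize);
        }
    }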
hystrix  threadpools  concurrency  java  jvm  backpressure  littles-law  capacity 
january 2016 by jm
The Injector: A new Executor for Java
This honestly fits a narrow niche, but one that is gaining in popularity. If your messages take > 100μs to process, or your worker threads are consistently saturated, the standard ThreadPoolExecutor is likely perfectly adequate for your needs. If, on the other hand, you’re able to engineer your system to operate with one application thread per physical core you are probably better off looking at an approach like the LMAX Disruptor. However, if you fall into the crack between these two scenarios, or are seeing a significant portion of time spent in futex calls and need a drop-in ExecutorService to take the edge off, the injector may well be worth a look.
performance  java  executor  concurrency  disruptor  algorithms  coding  threads  threadpool  injector 
may 2015 by jm
ben-manes/caffeine
'Caffeine is a Java 8 based concurrency library that provides specialized data structures, such as a high performance cache.'
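Minimal usage, following the project's README (the loader function is a placeholder):

    import com.github.benmanes.caffeine.cache.Caffeine;
    import com.github.benmanes.caffeine.cache.LoadingCache;

    public class CaffeineDemo {
        public static void main(String[] args) {
            LoadingCache<String, String> cache = Caffeine.newBuilder()
                    .maximumSize(10_000)               // size-based eviction
                    .build(key -> key.toUpperCase());  // placeholder loader
            System.out.println(cache.get("hello"));   // loads, caches, prints "HELLO"
        }
    }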
cache  java8  java  guava  caching  concurrency  data-structures  coding 
march 2015 by jm
JCTools
Java Concurrency Tools for the JVM. This project aims to offer some concurrent data structures currently missing from the JDK (a usage sketch follows the list):

Bounded lock free queues
SPSC/MPSC/SPMC/MPMC variations for concurrent queues
Alternative interfaces for queues (experimental)
Offheap concurrent ring buffer for ITC/IPC purposes (experimental)
Executor (planned)
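A hedged sketch of the SPSC variant (org.jctools.queues.SpscArrayQueue; the contract is exactly one producer thread and one consumer thread):

    import org.jctools.queues.SpscArrayQueue;

    public class JcToolsDemo {
        public static void main(String[] args) throws InterruptedException {
            SpscArrayQueue<Integer> q = new SpscArrayQueue<>(1024);  // bounded, lock-free

            Thread producer = new Thread(() -> {
                for (int i = 0; i < 100; i++) {
                    while (!q.offer(i)) Thread.yield();  // spin when full
                }
            });
            Thread consumer = new Thread(() -> {
                for (int seen = 0; seen < 100; ) {
                    Integer v = q.poll();                // null when empty
                    if (v != null) seen++;
                }
            });
            producer.start(); consumer.start();
            producer.join(); consumer.join();
            System.out.println("transferred 100 messages");
        }
    }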
concurrency  lock-free  data-structures  queues  jvm  java 
january 2015 by jm
If Eventual Consistency Seems Hard, Wait Till You Try MVCC
ex-Percona MySQL wizard Baron Schwartz, noting that MVCC as implemented in common SQL databases is not all that simple or reliable compared to big bad NoSQL Eventual Consistency:
Since I am not ready to assert that there’s a distributed system I know to be better and simpler than eventually consistent datastores, and since I certainly know that InnoDB’s MVCC implementation is full of complexities, for right now I am probably in the same position most of my readers are: the two viable choices seem to be single-node MVCC and multi-node eventual consistency. And I don’t think MVCC is the simpler paradigm of the two.
nosql  concurrency  databases  mysql  riak  voldemort  eventual-consistency  reliability  storage  baron-schwartz  mvcc  innodb  postgresql 
december 2014 by jm
Hermitage: Testing the "I" in ACID
[Hermitage is] a test suite for databases which probes for a variety of concurrency issues, and thus allows a fair and accurate comparison of isolation levels. Each test case simulates a particular kind of race condition that can happen when two or more transactions concurrently access the same data. Each test can pass (if the database’s implementation of isolation prevents the race condition from occurring) or fail (if the race condition does occur).
acid  architecture  concurrency  databases  nosql 
november 2014 by jm
ExecutorService - 10 tips and tricks
Excellent advice from Tomasz Nurkiewicz's blog for anyone using java.util.concurrent.ExecutorService regularly. The whole blog is full of great posts btw
concurrency  java  jvm  threading  threads  executors  coding 
november 2014 by jm
WriterReaderPhaser
A nice new concurrency primitive from Gil Tene:
Have you ever had a need for logging or analyzing data that is actively being updated? Have you ever wanted to do that without stalling the writers (recorders) in any way? If so, then WriterReaderPhaser is for you.  I'm not talking about logging messages or text lines here.  I'm talking about data.  Data larger than one word of memory.  Data that holds actual interesting state. Data that keeps being updated, but needs to be viewed in a stable and coherent way for analysis or logging.  Data like frame buffers. Data like histograms.  Data like usage counts. Data that changes.


see also Left-Right: http://concurrencyfreaks.blogspot.ie/2013/12/left-right-concurrency-control.html
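The canonical idiom, per Gil Tene's write-up, using the class as shipped in HdrHistogram (org.HdrHistogram.WriterReaderPhaser); the double-buffered histogram arrangement below is the standard example:

    import org.HdrHistogram.Histogram;
    import org.HdrHistogram.WriterReaderPhaser;

    public class IntervalRecorder {
        private final WriterReaderPhaser phaser = new WriterReaderPhaser();
        private volatile Histogram active = new Histogram(3);
        private Histogram inactive = new Histogram(3);

        // hot path: wait-free for writers
        public void recordValue(long value) {
            long criticalValue = phaser.writerCriticalSectionEnter();
            try {
                active.recordValue(value);
            } finally {
                phaser.writerCriticalSectionExit(criticalValue);
            }
        }

        // reader: swap buffers, then wait for in-flight writers to drain
        public synchronized Histogram getIntervalHistogram() {
            phaser.readerLock();
            try {
                inactive.reset();
                Histogram temp = inactive;
                inactive = active;
                active = temp;       // new writes land in the fresh buffer
                phaser.flipPhase();  // blocks until straggling writers exit
                return inactive;     // now stable and coherent for analysis
            } finally {
                phaser.readerUnlock();
            }
        }
    }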
phasers  data-structures  concurrency  primitives  algorithms  performance  wait-free 
november 2014 by jm
"Left-Right: A Concurrency Control Technique with Wait-Free Population Oblivious Reads" [pdf]
'In this paper, we describe a generic concurrency control technique with Blocking write operations and Wait-Free Population Oblivious read operations, which we named the Left-Right technique. It is of particular interest for real-time applications with dedicated Reader threads, due to its wait-free property that gives strong latency guarantees and, in addition, there is no need for automatic Garbage Collection.
The Left-Right pattern can be applied to any data structure, allowing concurrent access to it similarly to a Reader-Writer lock, but in a non-blocking manner for reads. We present several variations of the Left-Right technique, with different versioning mechanisms and state machines. In addition, we constructed an optimistic approach that can reduce synchronization for reads.'

See also http://concurrencyfreaks.blogspot.ie/2013/12/left-right-concurrency-control.html for java implementation code.
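A much-simplified sketch of the classic Left-Right variant (per-side reader counters standing in for the paper's read-indicator options; writers serialize on a lock and pay all the waiting cost, readers never block):

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.function.Consumer;
    import java.util.function.Function;

    public class LeftRight<T> {
        private final T[] inst;                 // two copies of the structure
        private volatile int leftRight = 0;     // which copy readers read
        private volatile int versionIndex = 0;  // which indicator readers arrive at
        private final AtomicInteger[] readers =
                { new AtomicInteger(), new AtomicInteger() };

        @SuppressWarnings("unchecked")
        public LeftRight(T a, T b) { inst = (T[]) new Object[] { a, b }; }

        // wait-free for readers (population oblivious, given a wait-free indicator)
        public <R> R read(Function<T, R> fn) {
            int vi = versionIndex;
            readers[vi].incrementAndGet();       // arrive
            try {
                return fn.apply(inst[leftRight]);
            } finally {
                readers[vi].decrementAndGet();   // depart
            }
        }

        public synchronized void write(Consumer<T> fn) {
            fn.accept(inst[1 - leftRight]);      // mutate the idle copy
            leftRight = 1 - leftRight;           // new readers switch over

            int prev = versionIndex, next = 1 - prev;
            while (readers[next].get() != 0) Thread.yield();
            versionIndex = next;                 // new arrivals use 'next'
            while (readers[prev].get() != 0) Thread.yield();

            fn.accept(inst[1 - leftRight]);      // replay on the old copy
        }
    }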
left-right  concurrency  multithreading  wait-free  blocking  realtime  gc  latency  reader-writer  locking  synchronization  java 
september 2014 by jm
ThreadSanitizer
Google's purify/valgrind-like concurrency checking tool:

'As a bonus, ThreadSanitizer finds some other types of bugs: thread leaks, deadlocks, incorrect uses of mutexes, malloc calls in signal handlers, and more. It also natively understands atomic operations and thus can find bugs in lock-free algorithms. [...] The tool is supported by both Clang and GCC compilers (only on Linux/Intel64). Using it is very simple: you just need to add a -fsanitize=thread flag during compilation and linking. For Go programs, you simply need to add a -race flag to the go tool (supported on Linux, Mac and Windows).'
concurrency  bugs  valgrind  threadsanitizer  threading  deadlocks  mutexes  locking  synchronization  coding  testing 
june 2014 by jm
Why Disqus made the Python->Go switchover
for their realtime component, from the horse's mouth:
at higher contention, the CPU was choking everything. Switching over to Go removed that contention for us, which was the primary issue that we were seeing.
python  languages  concurrency  go  threading  gevent  scalability  disqus  realtime  hn 
may 2014 by jm
Notes On Concurrent Ring Buffer Queue Mechanics
great notes from Nitsan Wakart, who's been hacking on ringbuffers a lot in JAQ
jaq  nitsanw  atomic  concurrency  data-structures  ring-buffers  queueing  queues  algorithms 
april 2014 by jm
Scalable Atomic Visibility with RAMP Transactions
Great new distcomp protocol work from Peter Bailis et al:
We’ve developed three new algorithms—called Read Atomic Multi-Partition (RAMP) Transactions—for ensuring atomic visibility in partitioned (sharded) databases: either all of a transaction’s updates are observed, or none are. [...]

How they work: RAMP transactions allow readers and writers to proceed concurrently. Operations race, but readers autonomously detect the races and repair any non-atomic reads. The write protocol ensures readers never stall waiting for writes to arrive.

Why they scale: Clients can’t cause other clients to stall (via synchronization independence) and clients only have to contact the servers responsible for items in their transactions (via partition independence). As a consequence, there’s no mutual exclusion or synchronous coordination across servers.

The end result: RAMP transactions outperform existing approaches across a variety of workloads, and, for a workload of 95% reads, RAMP transactions scale to over 7 million ops/second on 100 servers at less than 5% overhead.
scale  synchronization  databases  distcomp  distributed  ramp  transactions  scalability  peter-bailis  protocols  sharding  concurrency  atomic  partitions 
april 2014 by jm
MICA: A Holistic Approach To Fast In-Memory Key-Value Storage [paper]
Very interesting new approach to building a scalable in-memory K/V store. As Rajiv Kurian notes on the mechanical-sympathy list:

'The basic idea is that each core is responsible for a portion of the key-space and requests are forwarded to the right core, avoiding multiple-writer scenarios. This is opposed to designs like memcache which uses locks and shared memory.

Some of the things I found interesting: The single writer design is taken to an extreme. Clients assist the partitioning of requests, by calculating hashes before submitting GET requests. It uses Intel DPDK instead of sockets to forward packets to the right core, without processing the packet on any core. Each core is paired with a dedicated RX/TX queue. The design for a lossy cache is simple but interesting. It does things like replacing a hash slot (instead of chaining) etc. to take advantage of the lossy nature of caches. There is a lossless design too. A bunch of tricks to optimize for memory performance. This includes pre-allocation, design of the hash indexes, prefetching tricks etc. There are some other concurrency tricks that were interesting. Handling dangling pointers was one of them.'

Source code here: https://github.com/efficient/mica
mica  in-memory  memory  ram  key-value-stores  storage  smp  dpdk  multicore  memcached  concurrency 
april 2014 by jm
Flock for Cron jobs
good blog post writing up the 'flock -n -c' trick to ensure single-concurrent-process locking for cron jobs
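For reference, the idiom looks like this in a crontab (paths are illustrative):

    # run every 5 minutes, but skip this run if the previous one still
    # holds the lock (-n = fail immediately instead of blocking)
    */5 * * * * flock -n /var/lock/myjob.lock -c '/usr/local/bin/myjob'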
cron  concurrency  unix  linux  flock  locking  ops 
december 2013 by jm
[JavaSpecialists 215] - StampedLock Idioms
a demo of Doug Lea's latest concurrent data structure in Java 8
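The signature idiom is the optimistic read: sample a stamp, read, then validate, falling back to a real read lock only if a writer intervened. This sketch follows the shape of the JDK 8 javadoc example:

    import java.util.concurrent.locks.StampedLock;

    public class Point {
        private final StampedLock sl = new StampedLock();
        private double x, y;

        public void move(double dx, double dy) {
            long stamp = sl.writeLock();
            try { x += dx; y += dy; }
            finally { sl.unlockWrite(stamp); }
        }

        public double distanceFromOrigin() {
            long stamp = sl.tryOptimisticRead();  // no blocking, no CAS
            double cx = x, cy = y;
            if (!sl.validate(stamp)) {            // a writer got in: retry safely
                stamp = sl.readLock();
                try { cx = x; cy = y; }
                finally { sl.unlockRead(stamp); }
            }
            return Math.sqrt(cx * cx + cy * cy);
        }
    }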
doug-lea  concurrency  coding  java-8  java  threads 
december 2013 by jm
SPSC revisited part III - FastFlow + Sparse Data
holy moly. This is some heavily-optimized mechanical-sympathy Java code. By using a sparse data structure, cache-aligned fields, and wait-free low-level CAS concurrency primitives via sun.misc.Unsafe, a single-producer/single-consumer queue implementation goes pretty damn fast compared to the current state of the art
nitsanw  optimization  concurrency  java  jvm  cas  spsc  queues  data-structures  algorithms 
october 2013 by jm
Lock-Based vs Lock-Free Concurrent Algorithms
An excellent post from Martin Thompson showing a new JSR166 concurrency primitive, StampedLock, compared against a number of alternatives in a simple microbenchmark.
The most interesting thing for me is how much the lock-free, AtomicReference.compareAndSet()-based approach blows away all the lock-based approaches -- even in the 1-reader-1-writer case. Its code is extremely simple, too: https://github.com/mjpt777/rw-concurrency/blob/master/src/LockFreeSpaceship.java
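The shape of that approach, as an illustration (an immutable snapshot republished via compareAndSet; this shows the pattern, it is not the benchmark's code):

    import java.util.concurrent.atomic.AtomicReference;

    public class LockFreePosition {
        static final class Pos {
            final int x, y;
            Pos(int x, int y) { this.x = x; this.y = y; }
        }

        private final AtomicReference<Pos> pos = new AtomicReference<>(new Pos(0, 0));

        public void move(int dx, int dy) {
            Pos cur, next;
            do {
                cur = pos.get();                       // immutable current state
                next = new Pos(cur.x + dx, cur.y + dy);
            } while (!pos.compareAndSet(cur, next));   // retry if a racer won
        }

        public Pos read() { return pos.get(); }        // reads never block
    }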
concurrency  java  threads  lock-free  locking  compare-and-set  cas  atomic  jsr166  microbenchmarks  performance 
august 2013 by jm
Java Concurrent Counters By Numbers
threadsafe counters in the JVM compared: AtomicLong, Doug Lea's LongAdder, a ThreadLocal counter, and an int counter field on the Thread object (via Darach Ennis). Nitsan's posts on concurrency are fantastic
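The trade-off between the first two in a nutshell (both are JDK 8 APIs; LongAdder originated in Doug Lea's JSR-166 extras):

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    public class Counters {
        private final AtomicLong atomic = new AtomicLong();  // one hot cache line
        private final LongAdder adder = new LongAdder();     // striped cells

        public void onEvent() {
            atomic.incrementAndGet();  // contended CAS: every thread hits one word
            adder.increment();         // cheap under contention
        }

        public long total() {
            return adder.sum();        // folds the cells; a moving-target total,
                                       // not an atomic snapshot
        }
    }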
counters  concurrency  threads  java  jvm  atomic 
june 2013 by jm
Functional Reactive Programming in the Netflix API with RxJava
Hmm, this seems nifty as a compositional building block for Java code to enable concurrency without thread-safety and sync problems.
Functional reactive programming offers efficient execution and composition by providing a collection of operators capable of filtering, selecting, transforming, combining and composing Observables.

The Observable data type can be thought of as a "push" equivalent to Iterable which is "pull". With an Iterable, the consumer pulls values from the producer and the thread blocks until those values arrive. By contrast with the Observable type, the producer pushes values to the consumer whenever values are available. This approach is more flexible, because values can arrive synchronously or asynchronously.
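A tiny pipeline in the 1.x-era API (rx.Observable; operator names moved around in later major versions):

    import java.util.Arrays;
    import rx.Observable;

    public class RxDemo {
        public static void main(String[] args) {
            Observable.from(Arrays.asList(1, 2, 3, 4, 5))
                    .filter(n -> n % 2 == 1)     // select odd values
                    .map(n -> n * n)             // transform
                    .subscribe(sq -> System.out.println("got " + sq));
            // the consumer code is identical whether values arrive
            // synchronously or asynchronously -- the appeal of "push"
        }
    }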
concurrency  java  jvm  threads  thread-safety  coding  rx  frp  fp  functional-programming  reactive  functional  async  observable 
april 2013 by jm
CRDTs - Commutative Replicated Data Types [pdf]

Shared read-only data is easy to scale by using well-understood replication techniques. However, sharing mutable data at a large scale is a difficult problem, because of the CAP impossibility result [5]. Two approaches dominate in practice. One ensures scalability by giving up consistency guarantees, for instance using the Last-Writer-Wins (LWW) approach [7]. The alternative guarantees consistency by serialising all updates, which does not scale beyond a small cluster [12]. Optimistic replication allows replicas to diverge, eventually resolving conflicts either by LWW-like methods or by serialisation [11].

In some (limited) cases, a radical simplification is possible. If concurrent updates to some datum commute, and all of its replicas execute all updates in causal order, then the replicas converge. We call this a Commutative Replicated Data Type (CRDT). The CRDT approach ensures that there are no conflicts, hence, no need for consensus-based concurrency control. CRDTs are not a universal solution, but, perhaps surprisingly, we were able to design highly useful CRDTs. This new research direction is promising as it ensures consistency in the large scale at a low cost, at least for some applications.
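The simplest concrete example is a grow-only counter (G-Counter): increments commute, and merge is an elementwise max, so replicas converge regardless of delivery order. A hedged single-process sketch:

    import java.util.Arrays;

    public class GCounter {
        private final long[] counts;  // one slot per replica
        private final int myId;       // this replica's slot

        public GCounter(int replicas, int myId) {
            this.counts = new long[replicas];
            this.myId = myId;
        }

        public void increment() { counts[myId]++; }  // only touch our own slot

        public long value() { return Arrays.stream(counts).sum(); }

        // commutative, associative, and idempotent, so replicas can exchange
        // state in any order, repeatedly, and still converge
        public void merge(GCounter other) {
            for (int i = 0; i < counts.length; i++) {
                counts[i] = Math.max(counts[i], other.counts[i]);
            }
        }
    }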
consistency  algorithms  concurrency  crdts  distcomp  data 
april 2013 by jm
JPL Institutional Coding Standard for the Java Programming Language
From JPL's Laboratory for Reliable Software (LaRS). Great reference; there's some really useful recommendations here, and good explanations of familiar ones like "prefer composition over inheritance". Many are supported by FindBugs, too.

Here's the full list:

compile with checks turned on;
apply static analysis;
document public elements;
write unit tests;
use the standard naming conventions;
do not override field or class names;
make imports explicit;
do not have cyclic package and class dependencies;
obey the contract for equals();
define both equals() and hashCode();
define equals when adding fields;
define equals with parameter type Object;
do not use finalizers;
do not implement the Cloneable interface;
do not call nonfinal methods in constructors;
select composition over inheritance;
make fields private;
do not use static mutable fields;
declare immutable fields final;
initialize fields before use;
use assertions;
use annotations;
restrict method overloading;
do not assign to parameters;
do not return null arrays or collections;
do not call System.exit;
have one concept per line;
use braces in control structures;
do not have empty blocks;
use breaks in switch statements;
end switch statements with default;
terminate if-else-if with else;
restrict side effects in expressions;
use named constants for non-trivial literals;
make operator precedence explicit;
do not use reference equality;
use only short-circuit logic operators;
do not use octal values;
do not use floating point equality;
use one result type in conditional expressions;
do not use string concatenation operator in loops;
do not drop exceptions;
do not abruptly exit a finally block;
use generics;
use interfaces as types when available;
use primitive types;
do not remove literals from collections;
restrict numeric conversions;
program against data races;
program against deadlocks;
do not rely on the scheduler for synchronization;
wait and notify safely;
reduce code complexity
nasa  java  reference  guidelines  coding-standards  jpl  reliability  software  coding  oo  concurrency  findbugs  bugs 
march 2013 by jm
Are volatile reads really free?
Marc Brooker with some good test data:
It appears as though reads to volatile variables are not free in Java on x86, or at least on the tested setup. It's true that the difference isn't so huge (especially for the read-only case) that it'll make a difference in any but the more performance sensitive case, but that's a different statement from free.
volatile  concurrency  jvm  performance  java  marc-brooker 
february 2013 by jm
java - Given that HashMaps in jdk1.6 and above cause problems with multi-threading, how should I fix my code - Stack Overflow
Massive Java concurrency fail in recent 1.6 and 1.7 JDK releases -- the java.util.HashMap type now spin-locks on an AtomicLong in its constructor.

Here's the response from the author: 'I'll acknowledge right up front that the initialization of hashSeed is a bottleneck but it is not one we expected to be a problem since it only happens once per Hash Map instance. For this code to be a bottleneck you would have to be creating hundreds or thousands of hash maps per second. This is certainly not typical. Is there really a valid reason for your application to be doing this? How long do these hash maps live?'

Oh dear. Assumptions of "typical" like this are not how you design a fundamental data structure. fail. For now there is a hacky reflection-based workaround, but this is lame and needs to be fixed as soon as possible. (Via cscotta)
java  hashmap  concurrency  bugs  fail  security  hashing  jdk  via:cscotta 
february 2013 by jm
A Non-Blocking HashTable by Dr. Cliff Click : programming
Proggit discovers the NonBlockingHashMap. This comment from Boundary's cscotta is particularly interesting: "The code is intricate and curiously-formatted, but NBHM is quite excellent. The majority of our analytics platform is backed by NBHMs updated rapidly in parallel. Cliff's a great, friendly, approachable guy; if you have any specific questions about the approaches or implementation, he may be happy to answer."
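Usage is deliberately boring, since it implements ConcurrentMap; a hedged sketch against the high-scale-lib packaging (verify the artifact/package for your build):

    import java.util.concurrent.ConcurrentMap;
    import org.cliffc.high_scale_lib.NonBlockingHashMap;

    public class NbhmDemo {
        public static void main(String[] args) {
            // drop-in for ConcurrentHashMap via the ConcurrentMap interface
            ConcurrentMap<String, Long> hits = new NonBlockingHashMap<>();
            hits.putIfAbsent("page", 0L);
            hits.replace("page", 0L, 1L);          // CAS-style conditional update
            System.out.println(hits.get("page"));  // 1
        }
    }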
data-structures  algorithms  non-blocking  concurrency  threading  multicore  cliff-click  azul  maps  java  boundary 
january 2013 by jm
Cliff Click in "A JVM Does What?"
interesting YouTubed presentation from Azul's Cliff Click on some java/JVM innards
presentation  concurrency  jvm  video  java  youtube  cliff-click 
december 2012 by jm
Memory Barriers/Fences
Martin Thompson with a good description of the x86 memory barrier model and how it interacts with Java's JSR-133 memory model
architecture  hardware  programming  java  concurrency  volatile  jsr-133 
november 2012 by jm
Cliff Click's 2008 JavaOne talk about the NonBlockingHashTable
I'm a bit late to this data structure -- highly scalable, nearly lock-free, benchmarks very well (except with the G1 GC): http://edwwang.com/blog/2012/02/10/concurrent-hashmap-benchmark/ .

Having said that, it doesn't cope well with frequently-changing unique keys: http://sourceforge.net/tracker/?func=detail&aid=3563980&group_id=194172&atid=948362 .

More background at: http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable and http://www.azulsystems.com/blog/cliff/2007-04-01-non-blocking-hashtable-part-2

This was used in Cassandra for a while, although I think the above bug may have caused its removal?
nonblockinghashtable  data-structures  hashmap  concurrency  scaling  java  jvm 
october 2012 by jm
SnapTree benchmarks
nice concurrent Map data structure for the JVM; beats out ConcurrentHashMap, ConcurrentLinkedHashMap from guava, ConcurrentSkipListMap under both CMS and G1 garbage collectors.
concurrency  benchmarks  hashmap  map  data-structures  java  jvm  snaptree 
september 2012 by jm
Locks & Condition Variables - Latency Impact

Firstly, this is 3 orders of magnitude greater latency than what I illustrated in the previous article using just memory barriers to signal between threads. This cost comes about because the kernel needs to get involved to arbitrate between the threads for the lock, and then manage the scheduling for the threads to awaken when the condition is signalled. The one-way latency to signal a change is pretty much the same as what is considered current state of the art for network hops between nodes via a switch. It is possible to get ~1µs latency with InfiniBand and less than 5µs with 10GigE and user-space IP stacks.

Secondly, the impact is clear when letting the OS choose what CPUs the threads get scheduled on rather than pinning them manually. I've observed this same issue across many use cases whereby Linux, in default configuration for its scheduler, will greatly impact the performance of a low-latency system by scheduling threads on different cores resulting in cache pollution. Windows by default seems to make a better job of this.
locking  concurrency  java  jvm  signalling  locks  linux  threading 
september 2012 by jm
Martin "Disruptor" Thompson's Single Writer Principle
Contains these millisecond estimates for highly-contended inter-thread signalling when incrementing a 64-bit counter in Java:

One Thread: 300
One Thread with Memory Barrier: 4,700
One Thread with CAS: 5,700
Two Threads with CAS: 18,000
One Thread with Lock: 10,000
Two Threads with Lock: 118,000


Undoubtedly not realistic for a lot of cases, but it's still useful for order-of-magnitude estimates of locking cost. Bottom line: don't lock if you can avoid it; even 'volatile' writes and AtomicFoo types carry a real cost.
java  jvm  performance  coding  concurrency  threading  cas  locking 
september 2012 by jm
Striped (Guava: Google Core Libraries for Java 13.0.1 API)
Nice piece of Guava concurrency infrastructure in the latest release:
A striped Lock/Semaphore/ReadWriteLock. This offers the underlying lock striping similar to that of ConcurrentHashMap in a reusable form, and extends it for semaphores and read-write locks. Conceptually, lock striping is the technique of dividing a lock into many stripes, increasing the granularity of a single lock and allowing independent operations to lock different stripes and proceed concurrently, instead of creating contention for a single lock.

The guarantee provided by this class is that equal keys lead to the same lock (or semaphore), i.e. if (key1.equals(key2)) then striped.get(key1) == striped.get(key2) (assuming Object.hashCode() is correctly implemented for the keys). Note that if key1 is not equal to key2, it is not guaranteed that striped.get(key1) != striped.get(key2); the elements might nevertheless be mapped to the same lock. The lower the number of stripes, the higher the probability of this happening.

Prior to this class, one might be tempted to use Map<K, Lock>, where K represents the task. This maximizes concurrency by having each unique key mapped to a unique lock, but also maximizes memory footprint. On the other extreme, one could use a single lock for all tasks, which minimizes memory footprint but also minimizes concurrency. Instead of choosing either of these extremes, Striped allows the user to trade between required concurrency and memory footprint. For example, if a set of tasks are CPU-bound, one could easily create a very compact Striped<Lock> of availableProcessors() * 4 stripes, instead of possibly thousands of locks which could be created in a Map<K, Lock> structure.
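Typical use, with the real Guava API (com.google.common.util.concurrent.Striped):

    import com.google.common.util.concurrent.Striped;
    import java.util.concurrent.locks.Lock;

    public class StripedDemo {
        // 64 stripes shared across all keys: bounded memory, decent concurrency
        private static final Striped<Lock> stripes = Striped.lock(64);

        public static void withKeyLock(String key, Runnable action) {
            Lock lock = stripes.get(key);  // equal keys map to the same lock
            lock.lock();
            try {
                action.run();
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) {
            withKeyLock("user:42", () -> System.out.println("critical section"));
        }
    }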
locking  concurrency  java  guava  semaphores  coding  via:twitter 
september 2012 by jm
1024cores
Some good algorithms and notes by Dmitry Vyukov on 'lockfree, waitfree, obstruction-free synchronization algorithms and data structures, scalability-oriented architecture, multicore/multiprocessor design patterns, high-performance computing, threading technologies and libraries (OpenMP, TBB, PPL), message-passing systems and related topics.' The catalog of lock-free queue implementations is particularly extensive (via Sergio Bossa)
algorithms  concurrency  articles  dmitry-vyukov  go  c++  coding  via:sergio-bossa 
august 2012 by jm
Ask For Forgiveness Programming - Or How We'll Program 1000 Cores
Nifty concept from IBM Research's David Ungar -- "race-and-repair". Simply put, allow lock-free lossy/inconsistent calculation, and backfill later, using concepts like "freshener" threads, to reconcile inconsistencies. This is a familiar concept in distributed computing nowadays thanks to CAP, but I hadn't heard it being applied to single-host multicore parallel programming before -- I can already think of an application in our codebase...
race-and-repair  concurrency  coding  ibm  parallelism  parallel  david-ungar  cap  multicore 
april 2012 by jm
Fault Tolerance in a High Volume, Distributed System
Netflix's "DependencyCommand", a resiliency system for SOA inter-service network calls, offering builtin support for threadpools, timeouts, retries and graceful failover. Very nice
netflix  architecture  concurrency  distributed  failover  ha  resiliency  fail-fast  failsafe  soa  fault-tolerance 
march 2012 by jm
How does LMAX's disruptor pattern work? - Stack Overflow
LMAX's "Disruptor" concurrent-server pattern, claiming to be a higher-throughput, lower-latency, and lock-free alternative to the SEDA pattern using a massive ring buffer. Good discussion here at SO. (via Filippo)
via:filippo  servers  seda  queueing  concurrency  disruptor  patterns  latency  trading  performance  ring-buffers 
november 2011 by jm
Akka
'platform for event-driven, scalable, and fault-tolerant architectures on the JVM' .. Actor-based, 'let-it-crash', Apache-licensed, Java and Scala APIs, remote Actors, transactional memory -- looks quite nice
scala  java  concurrency  scalability  apache  akka  actors  erlang  fault-tolerance  events  from delicious
march 2011 by jm
Project Middleman
another concurrency shell command; interesting approach to dashboarding the results, with the "mdm.screen" utility provided
mdm  unix  concurrency  shell  linux  forking  background  xargs  parallelism  from delicious
october 2010 by jm
How do we kick our synchronous addiction?
great post on the hazards of programming in an async framework, and how damn hard it is. good comments thread too (via jzawodny)
via:jzawodny  coding  python  javascript  scalability  ruby  concurrency  erlang  async  node.js  twisted  from delicious
february 2010 by jm
pigz
'A parallel implementation of gzip for modern multi-processor, multi-core machines', by Mark Adler, no less
adler  pigz  gzip  compression  performance  concurrency  shell  parallel  multicore  zip  software  from delicious
october 2009 by jm

related tags

acid  actors  adler  akka  algorithms  apache  architecture  articles  async  atomic  azul  background  backpressure  baron-schwartz  benchmarks  blocking  blocking-queues  boundary  bugs  c++  cache  cache-eviction  caching  cap  capacity  cas  cassandra  cliff-click  coding  coding-standards  compare-and-set  compression  concurrency  consistency  conversant  counters  crdts  cron  cuckoo-hashing  data  data-structures  databases  david-ungar  deadlocks  dirigiste  disqus  disruptor  distcomp  distributed  dmitry-vyukov  doug-lea  dpdk  equations  erlang  events  eventual-consistency  excel  exception-handling  executor  executors  fail  fail-fast  failover  failsafe  failure  fault-tolerance  findbugs  flock  forking  fp  frp  functional  functional-programming  gc  gevent  gnu  go  guava  guidelines  gzip  ha  hardware  hashing  hashmap  hbase  hdfs  hn  hyperdex  hyperleveldb  hystrix  ibm  in-memory  injector  innodb  jaq  java  java-8  java8  javascript  jdk  job  jpl  jsr-133  jsr166  jvm  key-value-stores  languages  latency  left-right  leveldb  linux  littles-law  lock-free  locking  locks  map  mapreduce  maps  marc-brooker  mdm  measurement  memcached  memory  metrics  mica  microbenchmarks  multicore  multithreading  mutexes  mvcc  mysql  nasa  netflix  nitsanw  node.js  non-blocking  nonblockinghashtable  nosql  object-pooling  object-pools  observable  oo  ops  optimization  p99  papers  parallel  parallelism  partitions  patterns  performance  persistence  peter-bailis  phasers  pigz  postgresql  presentation  primitives  programming  protocols  python  queueing  queues  race-and-repair  race-conditions  ram  ramp  reactive  reader-writer  realtime  redis  reference  reliability  resiliency  riak  ring-buffers  rocksdb  ruby  rx  scala  scalability  scale  scaling  scripting  security  seda  semaphores  servers  sharding  shell  signalling  smp  snaptree  soa  software  speed  spsc  startup  storage  synchronization  testing  thread-safety  threading  threadpool  threadpools  threads  threadsanitizer  throughput  trading  transactions  twisted  unix  usl  utilization  valgrind  via:cscotta  via:fanf  via:filippo  via:jzawodny  via:kellabyte  via:sergio-bossa  via:twitter  video  volatile  voldemort  wait-free  xargs  youtube  zip 
