jm + gc   33

_Blade: a Data Center Garbage Collector_
Essentially, add a central GC scheduler to improve tail latencies in a cluster, by taking instances out of the pool to perform slow GC activity instead of letting them impact live operations. I've been toying with this idea for a while; nice to see a solid paper about it
gc  latency  tail-latencies  papers  blade  go  java  scheduling  clustering  load-balancing  low-latency  performance 
yesterday by jm
Optimizing Java CMS garbage collections, its difficulties, and using JTune as a solution | LinkedIn Engineering
I like the sound of this -- automated Java CMS GC tuning, kind of like a free version of JClarity's Censum (via Miguel Ángel Pastor)
java  jvm  tuning  gc  cms  linkedin  performance  ops 
6 days ago by jm
The Four Month Bug: JVM statistics cause garbage collection pauses
Ugh, tying GC safepoints to disk I/O? bad idea:
The JVM by default exports statistics by mmap-ing a file in /tmp (hsperfdata). On Linux, modifying a mmap-ed file can block until disk I/O completes, which can be hundreds of milliseconds. Since the JVM modifies these statistics during garbage collection and safepoints, this causes pauses that are hundreds of milliseconds long. To reduce worst-case pause latencies, add the -XX:+PerfDisableSharedMem JVM flag to disable this feature. This will break tools that read this file, like jstat.
bugs  gc  java  jvm  disk  mmap  latency  ops  jstat 
22 days ago by jm
JClarity's Illuminate
Performance-diagnosis-as-a-service. Cool.
Users download and install an Illuminate Daemon using a simple installer which starts up a small stand-alone Java process. The Daemon sits quietly unless it is asked to start gathering SLA data and/or to trigger a diagnosis. Users can set SLAs via the dashboard and can opt to collect latency measurements of their transactions manually (using our library) or by asking Illuminate to automatically instrument their code (Servlet and JDBC based transactions are currently supported).

SLA latency data for transactions is collected on a short cycle. When the moving average of latency measurements goes above the SLA value (e.g. 150ms), a diagnosis is triggered. The diagnosis is very quick, gathering key data from O/S, JVM(s), virtualisation and other areas of the system. The data is then run through the machine learned algorithm which will quickly narrow down the possible causes and gather a little extra data if needed.

Once Illuminate has determined the root cause of the performance problem, the diagnosis report is sent back to the dashboard and an alert is sent to the user. That alert contains a link to the result of the diagnosis which the user can share with colleagues. Illuminate has all sorts of backoff strategies to ensure that users don’t get too many alerts of the same type in rapid succession!
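The trigger described above (a moving average of latency samples crossing an SLA threshold) can be sketched like this; a toy illustration, not Illuminate's actual code, with an arbitrary window size:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of an SLA trigger: keep a moving average of recent latency
// samples and signal a diagnosis when it crosses the SLA threshold.
class SlaMonitor {
    private final Deque<Long> window = new ArrayDeque<>();
    private final int windowSize;
    private final long slaMillis;
    private long sum = 0;

    SlaMonitor(int windowSize, long slaMillis) {
        this.windowSize = windowSize;
        this.slaMillis = slaMillis;
    }

    /** Record one latency sample; returns true when a diagnosis should trigger. */
    boolean record(long latencyMillis) {
        window.addLast(latencyMillis);
        sum += latencyMillis;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();   // slide the window
        }
        return window.size() == windowSize
                && (sum / (double) window.size()) > slaMillis;
    }
}
```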
illuminate  jclarity  java  jvm  scala  latency  gc  tuning  performance 
7 weeks ago by jm
Rust borrow and lifetimes
How Rust avoids GC overhead using its "borrow" system:
Rust achieves memory safety without GC by using a sophisticated borrow system. For any resource (stack memory, heap memory, file handles and so on), there is exactly one owner which takes care of its resource deallocation, if needed. You may create new bindings to refer to the resource using & or &mut, which is called a borrow or mutable borrow. The compiler ensures all owners and borrowers behave correctly.
languages  rust  gc  borrow  lifecycle  stack  heap  allocation 
november 2014 by jm
Troubleshooting Production JVMs with jcmd
remotely trigger GCs, finalization, heap dumps etc. Handy
jvm  jcmd  debugging  ops  java  gc  heap  troubleshooting 
september 2014 by jm
"Left-Right: A Concurrency Control Technique with Wait-Free Population Oblivious Reads" [pdf]
'In this paper, we describe a generic concurrency control technique with Blocking write operations and Wait-Free Population Oblivious read operations, which we named the Left-Right technique. It is of particular interest for real-time applications with dedicated Reader threads, due to its wait-free property that gives strong latency guarantees and, in addition, there is no need for automatic Garbage Collection.
The Left-Right pattern can be applied to any data structure, allowing concurrent access to it similarly to a Reader-Writer lock, but in a non-blocking manner for reads. We present several variations of the Left-Right technique, with different versioning mechanisms and state machines. In addition, we constructed an optimistic approach that can reduce synchronization for reads.'

See also for java implementation code.
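For a feel of the mechanics, here's a drastically simplified Java sketch: the paper's versioned read-indicators are replaced by per-instance arrive/depart counters plus a retry, so this illustrates the shape of the technique rather than the full algorithm:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Consumer;
import java.util.function.Function;

// Simplified Left-Right sketch: two copies of the data structure. Readers
// use the copy named by leftRightIndex; a writer mutates the other copy,
// toggles the index, waits for readers on the old copy to drain, then
// replays the mutation. Writers block each other; readers never block.
class LeftRightMap<K, V> {
    private final Map<K, V>[] instances =
            new Map[] { new HashMap<>(), new HashMap<>() };
    private volatile int leftRightIndex = 0;
    private final LongAdder[] ingress = { new LongAdder(), new LongAdder() };
    private final LongAdder[] egress  = { new LongAdder(), new LongAdder() };

    public <R> R read(Function<Map<K, V>, R> readOp) {
        int idx;
        while (true) {
            idx = leftRightIndex;
            ingress[idx].increment();
            if (idx == leftRightIndex) break;  // still current: proceed
            egress[idx].increment();           // raced with a toggle: retry
        }
        try {
            return readOp.apply(instances[idx]);
        } finally {
            egress[idx].increment();
        }
    }

    public synchronized void write(Consumer<Map<K, V>> writeOp) {
        int readIdx = leftRightIndex;
        int writeIdx = 1 - readIdx;
        writeOp.accept(instances[writeIdx]);   // mutate the unused copy
        leftRightIndex = writeIdx;             // new readers see the fresh copy
        while (ingress[readIdx].sum() != egress[readIdx].sum()) {
            Thread.yield();                    // drain readers from the old copy
        }
        writeOp.accept(instances[readIdx]);    // replay on the other copy
    }
}
```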
left-right  concurrency  multithreading  wait-free  blocking  realtime  gc  latency  reader-writer  locking  synchronization  java 
september 2014 by jm
Java tip: optimizing memory consumption
Good tips on how to tell if object allocation rate is a bottleneck in your JVM-based code
yourkit  memory  java  jvm  allocation  gc  bottlenecks  performance 
august 2014 by jm
Dynamic Tuple Performance On the JVM
More JVM off-heap storage from Boundary:
generates heterogeneous collections of primitive values and ensures as best it can that they will be laid out adjacently in memory. The individual values in the tuple can either be accessed from a statically bound interface, via an indexed accessor, or via reflective or other dynamic invocation techniques. FastTuple is designed to deal with a large number of tuples therefore it will also attempt to pool tuples such that they do not add significantly to the GC load of a system. FastTuple is also capable of allocating the tuple value storage entirely off-heap, using Java’s direct memory capabilities.
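The off-heap allocation trick can be illustrated with a plain direct ByteBuffer (a bare-bones sketch of the idea, not FastTuple's actual API):

```java
import java.nio.ByteBuffer;

// Sketch: a (long, int, short) "tuple" laid out adjacently in memory
// allocated outside the GC-managed heap via a direct ByteBuffer.
class OffHeapTuple {
    private final ByteBuffer buf = ByteBuffer.allocateDirect(8 + 4 + 2);

    void set(long a, int b, short c) {
        buf.putLong(0, a);       // field 1 at offset 0
        buf.putInt(8, b);        // field 2 at offset 8
        buf.putShort(12, c);     // field 3 at offset 12
    }

    long first()   { return buf.getLong(0); }
    int second()   { return buf.getInt(8); }
    short third()  { return buf.getShort(12); }
}
```

The GC still tracks the small ByteBuffer object itself, but the 14 bytes of field storage never enter the heap, so a large pool of such tuples adds almost nothing to GC load.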
jvm  java  gc  off-heap  storage  boundary  memory 
may 2014 by jm
Garbage Collection Optimization for High-Throughput and Low-Latency Java Applications
LinkedIn talk about the GC opts they used to optimize the Feed. good detail
performance  optimization  linkedin  java  jvm  gc  tuning 
april 2014 by jm
Cassandra: tuning the JVM for read heavy workloads
The cluster we tuned is hosted on AWS and is comprised of 6 hi1.4xlarge EC2 instances, with 2 1TB SSDs raided together in a raid 0 configuration. The cluster’s dataset is growing steadily. At the time of this writing, our dataset is 341GB, up from less than 200GB a few months ago, and is growing by 2-3GB per day. The workload on this cluster is very read heavy, with quorum reads making up 99% of all operations.

Some careful GC tuning here. Probably not applicable to anyone else, but good approach in general.
java  performance  jvm  scaling  gc  tuning  cassandra  ops 
january 2014 by jm
What drives JVM full GC duration
Interesting empirical results using JDK 7u21:
Full GC duration depends on the number of objects allocated and the locality of their references. It does not depend that much on actual heap size.

Reference locality has a surprisingly high effect.
java  jvm  data  gc  tuning  performance  cms  g1 
october 2013 by jm
Voldemort on Solid State Drives [paper]
'This paper and talk was given by the LinkedIn Voldemort Team at the Workshop on Big Data Benchmarking (WBDB May 2012).'

With SSD, we find that garbage collection will become a very significant bottleneck, especially for systems which have little control over the storage layer and rely on Java memory management. Big heapsizes make the cost of garbage collection expensive, especially the single threaded CMS Initial mark. We believe that data systems must revisit their caching strategies with SSDs. In this regard, SSD has provided an efficient solution for handling fragmentation and moving towards predictable multitenancy.
voldemort  storage  ssd  disk  linkedin  big-data  jvm  tuning  ops  gc 
september 2013 by jm
Censum
[JVM] GC is a difficult, specialised area that can be very frustrating for busy developers or devops folks to deal with. The JVM has a number of garbage collectors and a bewildering array of switches that can alter the behaviour of each collector. Censum does all of the parsing, number crunching and statistical analysis for you, so you don't have to go and get that PhD in Computer Science in order to solve your GC performance problem. Censum gives you straight answers as opposed to a ton of raw data, can eat any GC log you care to throw at it, and is easy to install and use.

Commercial software, UKP 495 per license.
censum  gc  tuning  ops  java  jvm  commercial 
july 2013 by jm
Tuning and benchmarking Java 7's Garbage Collectors: Default, CMS and G1
Rüdiger Möller runs through a typical GC-tuning session, in exhaustive detail
java  gc  tuning  jvm  cms  g1  ops 
july 2013 by jm
Java Garbage Collection Distilled
a great summary of the state of JVM garbage collection from Martin Thompson
jvm  java  gc  garbage-collection  tuning  memory  performance  martin-thompson 
july 2013 by jm
Java Garbage Collection Distilled
Martin Thompson lays it out:
Serial, Parallel, Concurrent, CMS, G1, Young Gen, New Gen, Old Gen, Perm Gen, Eden, Tenured, Survivor Spaces, Safepoints, and the hundreds of JVM start-up flags. Does this all baffle you when trying to tune the garbage collector while trying to get the required throughput and latency from your Java application? If it does then don’t worry, you are not alone. Documentation describing garbage collection feels like man pages for an aircraft. Every knob and dial is detailed and explained but nowhere can you find a guide on how to fly. This article will attempt to explain the tradeoffs when choosing and tuning garbage collection algorithms for a particular workload.
gc  java  garbage-collection  coding  cms  g1  jvm  optimization 
june 2013 by jm
Building a Modern Website for Scale (QCon NY 2013) [slides]
some great scalability ideas from LinkedIn. Particularly interesting are the best practices suggested for scaling web services:

1. store client-call timeouts and SLAs in Zookeeper for each REST endpoint;
2. isolate backend calls using async/threadpools;
3. cancel work on failures;
4. avoid sending requests to GC'ing hosts;
5. rate limits on the server.

#4 is particularly cool. They do this using a "GC scout" request before every "real" request; a cheap TCP request to a dedicated "scout" Netty port, which replies near-instantly. If it comes back with a 1-packet response within 1 millisecond, send the real request, else fail over immediately to the next host in the failover set.

There's still a potential race condition: the GC-scout response can come back quickly, and a GC then starts just before the "real" request is issued. But the incidence of a GC blocking a request is probably massively reduced.

It also helps against packet loss on the rack or server host, since packet loss will cause the drop of one of the TCP packets, and the TCP retransmit timeout will certainly be higher than 1ms, causing the deadline to be missed. (UDP would probably work just as well, for this reason.) However, in the case of packet loss in the client's network vicinity, it will be vital to still attempt to send the request to the final host in the failover set regardless of a GC-scout failure, otherwise all requests may be skipped.

The GC-scout system also helps balance request load off heavily-loaded hosts, or hosts with poor performance for other reasons; they'll fail to achieve their 1 msec deadline and the request will be shunted off elsewhere.

For service APIs with real low-latency requirements, this is a great idea.
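The scout round-trip can be sketched roughly like so; a toy version, not LinkedIn's implementation, with the deadline parameterised (their production deadline is the 1ms described above):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of a "GC scout" probe: a trivial server replies with one byte
// near-instantly; the client treats any reply slower than the deadline
// as "host busy, fail over to the next host in the failover set".
class GcScout {
    /** Answer a single scout probe with a 1-byte response. */
    static Thread serveOnce(ServerSocket server) {
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 OutputStream out = s.getOutputStream()) {
                out.write(1);
                out.flush();
            } catch (IOException ignored) {}
        });
        t.start();
        return t;
    }

    /** Returns true if the scout replied within deadlineMillis. */
    static boolean probe(String host, int port, int deadlineMillis) {
        try (Socket s = new Socket(host, port)) {
            s.setSoTimeout(deadlineMillis);   // bound the read, not the world
            return s.getInputStream().read() == 1;
        } catch (IOException e) {
            return false;   // timeout or refusal: fail over immediately
        }
    }
}
```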
gc-scout  gc  java  scaling  scalability  linkedin  qcon  async  threadpools  rest  slas  timeouts  networking  distcomp  netty  tcp  udp  failover  fault-tolerance  packet-loss 
june 2013 by jm
Hey Judy, don't make it bad
GitHub gets good results using Judy arrays to replace a Ruby hash. However, the whole blog post is a bit dodgy to me. It feels like there are much better ways to fix the problem:

1. the big one: don't do GC-heavy activity in the front-end web servers. Split that language-classification code into a separate service. Write its results to a cache and don't re-query needlessly.
2. why isn't this benchmarked against a C/C++ hash? it's only 36000 entries, loaded once at startup. lookups against that should be blisteringly fast even with the basic data structures, and that would also be outside the Ruby heap so avoid the GC overhead. Feels like the use of a Judy array was a "because I want to" decision.
3. personally, I'd have preferred they spend time fixing their uptime problems....

See also for more kvetching.
ruby  github  gc  judy-arrays  linguist  hashes  data-structures 
may 2013 by jm
"This project aims at creating a simple, efficient building block for "Big Data" libraries, applications and frameworks; a thing that can be used as an in-memory, bounded queue with opaque values (sequences of JDK primitive values; insertions at tail, removal from head, single-entry peeks), and that has minimal garbage collection overhead. Insertions and removals are as individual entries, which are sub-sequences of the full buffer.

GC overhead minimization is achieved by use of direct ByteBuffers (memory allocated outside of the GC-prone heap); and the bounded nature by only supporting storage of simple primitive-value (byte, long) sequences where size is explicitly known.

Conceptually, memory buffers are just simple circular buffers (ring buffers) that hold a sequence of primitive values, a bit like arrays, but in a way that allows dynamic automatic resizings of the underlying storage. The library supports efficient reusing and sharing of underlying segments for sets of buffers, although for many use cases a single buffer suffices."
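The core idea (a bounded ring buffer of primitives held in a direct ByteBuffer) can be sketched as follows; a fixed-size toy, without the library's resizing or segment sharing:

```java
import java.nio.ByteBuffer;

// Sketch: a bounded circular buffer of longs backed by a direct
// ByteBuffer, so the entries live outside the GC-managed heap.
class OffHeapLongRing {
    private final ByteBuffer buf;
    private final int capacity;
    private int head = 0, size = 0;

    OffHeapLongRing(int capacity) {
        this.capacity = capacity;
        this.buf = ByteBuffer.allocateDirect(capacity * Long.BYTES);
    }

    boolean offer(long v) {                  // insertion at tail
        if (size == capacity) return false;  // bounded: reject when full
        buf.putLong(((head + size) % capacity) * Long.BYTES, v);
        size++;
        return true;
    }

    long poll() {                            // removal from head
        if (size == 0) throw new IllegalStateException("empty");
        long v = buf.getLong(head * Long.BYTES);
        head = (head + 1) % capacity;
        size--;
        return v;
    }

    int size() { return size; }
}
```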
gc  java  jvm  bytebuffer 
december 2012 by jm
Everything I Ever Learned About JVM Performance Tuning @Twitter
presentation by Attila Szegedi of Twitter from last year. Some good tips here, well-presented
tuning  jvm  java  gc  cms  presentations  slides  twitter 
december 2012 by jm
_The Pauseless GC Algorithm_ [pdf]
Paper from USENIX VEE '05, by Cliff Click, Gil Tene, and Michael Wolf of Azul Systems, describing some details of the Azul secret sauce (via b3n)
via:b3n  azul  gc  jvm  java  usenix  papers 
december 2012 by jm
HotSpot JVM garbage collection options cheat sheet (v2)
'In this article I have collected a list of options related to GC tuning in JVM. This is not a comprehensive list, I have only collected options which I use in practice (or at least understand why I may want to use them).
Compared to the previous version, a few useful diagnostic options were added. Additionally, a section for G1-specific options was introduced.'
hotspot  jvm  coding  gc  java  performance 
september 2012 by jm
twitter/jvmgcprof - GitHub
'gcprof is a simple utility for profiling allocation and garbage collection activity in the JVM. The gcprof command runs a java command under profiling. Allocation and collection statistics are printed periodically. If -n or -no are provided, statistics are also reported in terms of the given application metric. Total allocation, allocation rate, and a survival histogram are given. The intended use for this tool is twofold: (1) monitor and test garbage allocation and GC behavior, and (2) inform GC tuning.'
gc  java  performance  twitter  jvm  tools 
february 2012 by jm
Avoiding Full GCs in HBase with MemStore-Local Allocation Buffers
Fascinating. Evading the Java GC by reimplementing a slab allocator, basically
memory  allocation  java  gc  jvm  hbase  memstore  via:dehora  slab-allocator 
october 2011 by jm
Why WeakHashMap Sucks
'SoftReferences are the cheap, crappy caching mechanism [...] perfect for when you'd like your cache to be cleared at random times and in random order.'
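A quick demonstration of the complaint: a WeakHashMap entry vanishes as soon as the GC notices the key is otherwise unreferenced (timing is collector-dependent, hence the retry loop in this sketch):

```java
import java.util.Map;
import java.util.WeakHashMap;

// Sketch: WeakHashMap as a "cache" loses its entry once the only strong
// reference to the key is dropped and a GC cycle runs.
class WeakCacheDemo {
    static boolean entrySurvivesGc() throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "cached value");
        key = null;                        // drop the only strong reference
        // Nudge the collector; give it a few tries since clearing of
        // weak references is not guaranteed on any particular cycle.
        for (int i = 0; i < 10 && !cache.isEmpty(); i++) {
            System.gc();
            Thread.sleep(10);
        }
        return !cache.isEmpty();
    }
}
```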
softreferences  weakreferences  weak  references  gc  java  jvm  caching  hash  memory  collections  vm  weakhashmap  via:spyced 
september 2009 by jm
