Accidentally Quadratic — Rust hash iteration+reinsertion
It was recently discovered that some surprising operations on Rust’s standard hash table types could go quadratic.


Quite a nice accidental detour into O(n^2).
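The pathological pattern is easy to state: iterate over one HashMap and reinsert each entry into a second one. A minimal sketch of that pattern, assuming a toolchain from the era of the bug (before the 1.36 hashbrown rewrite, when both maps could share a per-thread SipHash key and the table used Robin Hood probing); on current Rust the timings stay linear:

```rust
use std::collections::HashMap;
use std::time::Instant;

// Iterate one map and reinsert into a second. On the affected Rust
// versions (circa 2016) both maps shared the same per-thread SipHash
// keys, so `src`'s iteration order equals hash order in `dst`, and
// Robin Hood probing then pushes each insert toward O(n) -- O(n^2)
// total. Current std::collections::HashMap does not exhibit this.
fn copy_via_iteration(n: u64) -> std::time::Duration {
    let src: HashMap<u64, u64> = (0..n).map(|i| (i, i)).collect();

    let start = Instant::now();
    let mut dst = HashMap::new();
    for (&k, &v) in &src {
        dst.insert(k, v);
    }
    assert_eq!(dst.len() as u64, n);
    start.elapsed()
}

fn main() {
    // Doubling n should roughly double the time on a healthy table;
    // on the quadratic implementations it roughly quadruples.
    for n in [100_000u64, 200_000, 400_000] {
        println!("n = {:>7}: {:?}", n, copy_via_iteration(n));
    }
}
```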
big-o  hashing  robin-hood-hashing  siphash  algorithms  hashtables  rust 
november 2016 by jm
Frogsort
Frogsort as an exam question (via qwghlm)
via:qwghlm  frogsort  sorting  big-o  algorithms  funny  comics  smbc 
august 2014 by jm
Efficient substring searching
This is a couple of years old, but I like it:
Turbo Boyer-Moore is disappointing; its name doesn’t do it justice. In academia constant overhead doesn’t matter, but here we see that it matters a lot in practice. Turbo Boyer-Moore’s inner loop is so complex that we think we’re better off using the original Boyer-Moore.


A good demo of how an O(n) algorithm with a large constant factor can be slower than an O(mn) one with a small constant.
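To make the constant-factor point concrete, here is a sketch (in Rust, my choice rather than the article's) of Boyer-Moore-Horspool, the stripped-down Boyer-Moore variant whose tiny inner loop is exactly the kind of thing that wins in practice:

```rust
/// Boyer-Moore-Horspool: a simplified Boyer-Moore. The whole search is
/// one small loop plus a 256-entry shift table, which is why it tends
/// to do well on real hardware despite a worse theoretical worst case.
fn horspool(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    let (n, m) = (haystack.len(), needle.len());
    if m == 0 {
        return Some(0);
    }
    if m > n {
        return None;
    }

    // Bad-character table: how far the pattern may slide when the
    // haystack byte aligned with its last position mismatches.
    let mut shift = [m; 256];
    for (i, &b) in needle[..m - 1].iter().enumerate() {
        shift[b as usize] = m - 1 - i;
    }

    let mut pos = 0;
    while pos + m <= n {
        if &haystack[pos..pos + m] == needle {
            return Some(pos);
        }
        pos += shift[haystack[pos + m - 1] as usize];
    }
    None
}

fn main() {
    let hay = b"hello, efficient substring searching";
    assert_eq!(horspool(hay, b"substring"), Some(17));
    assert_eq!(horspool(hay, b"zebra"), None);
}
```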
algorithms  search  strings  coding  big-o  string-search  searching 
march 2014 by jm
Excellent Rob Pike quote about algorithmic complexity
'Fancy algorithms are slow when n is small, and n is usually small.' -- Rob Pike


Been there, bought the t-shirt ;)
rob-pike  quotes  algorithms  big-o  complexity  coding 
september 2013 by jm
Scala 2.8 Collections API -- Performance Characteristics
Wow. Every library vending a set of collection types should have a page like this.
collections  scala  performance  reference  complexity  big-o  coding 
january 2013 by jm
Special encoding of small aggregate data types in Redis
Nice performance trick in Redis's hash storage:

'In theory, in order to guarantee that we perform lookups in constant time (also known as O(1) in big O notation), we need a data structure with constant time complexity in the average case, like a hash table. But many times hashes contain just a few fields. When hashes are small we can instead just encode them in an O(N) data structure, like a linear array with length-prefixed key-value pairs. Since we do this only when N is small, the amortized time for HGET and HSET commands is still O(1): the hash will be converted into a real hash table as soon as the number of elements it contains grows too large (you can configure the limit in redis.conf). This works well not just from the point of view of time complexity, but also from the point of view of constant factors, since a linear array of key-value pairs happens to play very well with the CPU cache (it has better cache locality than a hash table).'
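A minimal sketch of the same trick in Rust (hypothetical types, not Redis's actual C code; the 128 cutoff stands in for the configurable redis.conf limit):

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Keep key/value pairs in a flat, cache-friendly vector while the map
/// is small, and promote to a real hash table once it grows too large.
enum SmallMap<K, V> {
    Flat(Vec<(K, V)>),
    Hashed(HashMap<K, V>),
}

impl<K: Hash + Eq, V> SmallMap<K, V> {
    // Stand-in for redis.conf's hash-max-*-entries limit (assumption).
    const PROMOTE_AT: usize = 128;

    fn new() -> Self {
        SmallMap::Flat(Vec::new())
    }

    fn get(&self, key: &K) -> Option<&V> {
        match self {
            // O(N) scan, but N is small and the layout is contiguous.
            SmallMap::Flat(pairs) => {
                pairs.iter().find(|(k, _)| k == key).map(|(_, v)| v)
            }
            SmallMap::Hashed(map) => map.get(key),
        }
    }

    fn insert(&mut self, key: K, value: V) {
        if let SmallMap::Flat(pairs) = self {
            if let Some(slot) = pairs.iter_mut().find(|(k, _)| *k == key) {
                slot.1 = value;
                return;
            }
            if pairs.len() < Self::PROMOTE_AT {
                pairs.push((key, value));
                return;
            }
            // Grew past the limit: convert into a real hash table.
            let map: HashMap<K, V> = pairs.drain(..).collect();
            *self = SmallMap::Hashed(map);
        }
        if let SmallMap::Hashed(map) = self {
            map.insert(key, value);
        }
    }
}

fn main() {
    let mut m: SmallMap<String, u32> = SmallMap::new();
    for i in 0..200u32 {
        m.insert(format!("field{i}"), i); // crosses the threshold at 128
    }
    assert_eq!(m.get(&"field7".to_string()), Some(&7));
}
```

Amortized reads stay O(1) because the linear scan only ever runs over at most PROMOTE_AT entries.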
memory  redis  performance  big-o  hash-tables  storage  coding  cache  arrays 
november 2012 by jm
Avoiding Hash Lookups in a Ruby Implementation
'If I were to sum up the past 6 years I've spent optimizing JRuby it would be with the following phrase: Get Rid Of Hash Lookups.'

This has been a particular theme of some recent optimization hacks I've been working on. Hashes may be O(1) to read, on average, but that doesn't necessarily mean they're the right tool for performance...

(via Declan McGrath)
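An illustrative Rust micro-benchmark of the underlying point (nothing to do with JRuby's internals): hash lookups and array indexing are both O(1), but the constant factors differ a lot. Build with --release:

```rust
use std::collections::HashMap;
use std::time::Instant;

fn main() {
    const N: u32 = 1_000_000;
    // The same data reachable two ways: keyed lookups vs direct indexing.
    let map: HashMap<u32, u64> = (0..N).map(|i| (i, u64::from(i))).collect();
    let vec: Vec<u64> = (0..u64::from(N)).collect();

    let t = Instant::now();
    let mut sum = 0u64;
    for i in 0..N {
        sum += map[&i]; // hash the key + probe the table on every access
    }
    println!("hash lookups:   {:?} (sum = {sum})", t.elapsed());

    let t = Instant::now();
    let mut sum = 0u64;
    for i in 0..N {
        sum += vec[i as usize]; // one bounds check and a load
    }
    println!("array indexing: {:?} (sum = {sum})", t.elapsed());
}
```

On typical hardware the indexed loop comes out several times faster, which is the whole argument for replacing hash lookups with precomputed indices where the key set is known up front.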
via:declanmcgrath  hash  optimization  ruby  performance  jruby  hashing  data-structures  big-o  optimisation 
september 2012 by jm