jm + ranking

tdunning/t-digest
A new data structure for accurate on-line accumulation of rank-based statistics such as quantiles and trimmed means. The t-digest algorithm is also very parallel friendly, making it useful in map-reduce and parallel streaming applications.

The t-digest construction algorithm uses a variant of 1-dimensional k-means clustering to produce a data structure that is related to the Q-digest. This t-digest data structure can be used to estimate quantiles or compute other rank statistics. The advantage of the t-digest over the Q-digest is that the t-digest can handle floating point values while the Q-digest is limited to integers. With small changes, the t-digest can handle any values from any ordered set that has something akin to a mean. The accuracy of quantile estimates produced by t-digests can be orders of magnitude more accurate than those produced by Q-digests in spite of the fact that t-digests are more compact when stored on disk.


A super-nice feature is that it's mergeable, so it's amenable to parallel usage across multiple hosts if required. Java implementation, ASL licensing.
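Rough usage sketch of the build-merge-query pattern (I'm assuming the TDigest factory/add/quantile/cdf/centroids method names from the project README -- worth double-checking against the version you actually pull in):

import com.tdunning.math.stats.Centroid;
import com.tdunning.math.stats.TDigest;
import java.util.Random;

public class TDigestSketch {
    public static void main(String[] args) {
        Random rng = new Random(42);

        // Each host builds a digest over its own local stream of values.
        TDigest shardA = TDigest.createMergingDigest(100); // 100 = compression parameter
        TDigest shardB = TDigest.createMergingDigest(100);
        for (int i = 0; i < 100_000; i++) {
            shardA.add(rng.nextGaussian());
            shardB.add(rng.nextGaussian());
        }

        // Merge the per-host digests (e.g. in a reduce step) by folding each
        // shard's weighted centroids into a fresh digest.
        TDigest merged = TDigest.createMergingDigest(100);
        for (TDigest shard : new TDigest[] {shardA, shardB}) {
            for (Centroid c : shard.centroids()) {
                merged.add(c.mean(), c.count());
            }
        }

        // Rank statistics from the merged digest.
        System.out.printf("median ~ %.3f%n", merged.quantile(0.5));
        System.out.printf("p99    ~ %.3f%n", merged.quantile(0.99));
        System.out.printf("P(x<0) ~ %.3f%n", merged.cdf(0.0));
    }
}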
data-structures  algorithms  java  t-digest  statistics  quantiles  percentiles  aggregation  digests  estimation  ranking 
december 2016 by jm
What Are the Worst Airports in the World?
this is a great resource when picking a stopover for a 2-stop flight. Pity "best kids play area" isn't a criterion
airports  comparison  via:boingboing  flying  travel  ranking  world  skytrax 
september 2015 by jm
Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird
Interesting that GOOG are still doing these big-bang releases -- I guess crunching the data to come up with new weights/rules is a heavyweight, time-consuming process
google  search  ranking  releases  panda  penguin  hummingbird  weighting 
may 2015 by jm
StackShare
'Discover and discuss the best dev tools and cloud infrastructure services' -- fun!
stackshare  architecture  stack  ops  software  ranking  open-source 
april 2015 by jm
Box Tech Blog » A Tale of Postmortems
How Box introduced COE-style dev/ops outage postmortems, and got them working. This PIE metric sounds really useful to head off the dreaded "it'll all have to come out missus" action item:
The picture was getting clearer, and we decided to look into individual postmortems and action items and see what was missing. As it was, action items were wasting away with no owners. Digging deeper, we noticed that many action items entailed massive refactorings or vague requirements like “make system X better” (i.e. tasks that realistically were unlikely to be addressed). At a higher level, postmortem discussions often devolved into theoretical debates without a clear outcome. We needed a way to lower and focus the postmortem bar and a better way to categorize our action items and our technical debt.

Out of this need, PIE (“Probability of recurrence * Impact of recurrence * Ease of addressing”) was born. By ranking each factor from 1 (“low”) to 5 (“high”), PIE provided us with two critical improvements:

1. A way to police our postmortem discussions. I.e. a low probability, low impact, hard to implement solution was unlikely to get prioritized and was better suited to a discussion outside the context of the postmortem. Using this ranking helped deflect almost all theoretical discussions.
2. A straightforward way to prioritize our action items.

What’s better is that once we embraced PIE, we also applied it to existing tech debt work. This was critical because we could now prioritize postmortem action items alongside existing work. Postmortem action items became part of normal operations just like any other high-priority work.
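To make the arithmetic concrete, here's a toy sketch of sorting action items by PIE score -- the item names, scores and the descending sort are my own illustration of the idea, not Box's actual tooling:

import java.util.Comparator;
import java.util.List;

public class PieRanking {
    // PIE = Probability of recurrence * Impact of recurrence * Ease of addressing,
    // each ranked 1 (low) to 5 (high); higher scores get prioritised first.
    record ActionItem(String title, int probability, int impact, int ease) {
        int pie() { return probability * impact * ease; }
    }

    public static void main(String[] args) {
        List<ActionItem> items = List.of(
                new ActionItem("Add retries to the queue consumer", 4, 5, 4),  // PIE 80
                new ActionItem("Rewrite the sharding layer", 3, 4, 1),         // PIE 12
                new ActionItem("\"Make system X better\"", 2, 3, 1));          // PIE 6

        items.stream()
             .sorted(Comparator.comparingInt(ActionItem::pie).reversed())
             .forEach(i -> System.out.printf("PIE=%2d  %s%n", i.pie(), i.title()));
    }
}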
postmortems  action-items  outages  ops  devops  pie  metrics  ranking  refactoring  prioritisation  tech-debt 
august 2014 by jm
Microsoft CEO Steve Ballmer retires: A firsthand account of the company’s employee-ranking system
LOL MS. Sadly, this talk of "core competencies" and "visibility" is pretty reminiscent of Amazon's review season, too:
This illustrated another problem with [stack ranking]: It destroyed trust between individual contributors and management, because the stack rank required that all lower-level managers systematically lie to their reports. Why? Because for years Microsoft did not admit the existence of the stack rank to nonmanagers. Knowledge of the process gradually leaked out, becoming a recurrent complaint on the much-loathed (by Microsoft) Mini-Microsoft blog, where a high-up Microsoft manager bitterly complained about organizational dysfunction and was joined in by a chorus of hundreds of employees. The stack rank finally made it into a Vanity Fair article in 2012, but for many years it was not common knowledge, inside or outside Microsoft. It was presented to the individual contributors as a system of objective assessment of “core competencies,” with each person being judged in isolation.
When review time came, and programmers would fill out a short self-assessment talking about their achievements, strengths, and weaknesses, only some of them knew that their ratings had been more or less already foreordained at the stack rank. [...] If you did know about the stack rank, you weren’t supposed to admit it. So you went through the pageantry of the performance review anyway, arguing with your manager in the rhetoric of “core competencies.” The managers would respond in kind. Since the managers had little control over the actual score and attendant bonus and raise (if any), their job was to write a review to justify the stack rank in the language of absolute merit. (“Higher visibility” was always a good catch-all: Sure, you may be a great coder and work 80 hours a week, but not enough people have heard of you!)
amazon  stack-ranking  employees  ranking  work  microsoft  core-competencies 
august 2013 by jm
How Kaggle Is Changing How We Work - Thomas Goetz - The Atlantic

Founded in 2010, Kaggle is an online platform for data-mining and predictive-modeling competitions. A company arranges with Kaggle to post a dump of data with a proposed problem, and the site's community of computer scientists and mathematicians -- known these days as data scientists -- take on the task, posting proposed solutions.

[...] On one level, of course, Kaggle is just another spin on crowdsourcing, tapping the global brain to solve a big problem. That stuff has been around for a decade or more, at least back to Wikipedia (or farther back, Linux, etc). And companies like TaskRabbit and oDesk have thrown jobs to the crowd for several years. But I think Kaggle, and other online labor markets, represent more than that, and I'll offer two arguments. First, Kaggle doesn't incorporate work from all levels of proficiency, professionals to amateurs. Participants are experts, and they aren't working for benevolent reasons alone: they want to win, and they want to get better to improve their chances of winning next time. Second, Kaggle doesn't just create the incidental work product, it creates a new marketplace for work, a deeper disruption in a professional field. Unlike traditional temp labor, these aren't bottom of the totem pole jobs. Kagglers are on top. And that disruption is what will kill Joy's Law.

Because here's the thing: the Kaggle ranking has become an essential metric in the world of data science. Employers like American Express and the New York Times have begun listing a Kaggle rank as an essential qualification in their help wanted ads for data scientists. It's not just a merit badge for the coders; it's a more significant, more valuable, indicator of capability than our traditional benchmarks for proficiency or expertise. In other words, your Ivy League diploma and IBM resume don't matter so much as my Kaggle score. It's flipping the resume, where your work is measurable and metricized and your value in the marketplace is more valuable than the place you work.
academia  datamining  economics  data  kaggle  data-science  ranking  work  competition  crowdsourcing  contracting 
april 2013 by jm
Reddit’s ranking algorithms
so Reddit uses the Wilson score confidence interval approach, it turns out; more details here (via Toby diPasquale)
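For reference, the Wilson lower bound itself is easy to compute; a minimal standalone sketch of the formula (Reddit's real implementation is in its Python codebase -- this just shows the maths, and the z value is an example confidence parameter):

public class WilsonScore {
    /**
     * Lower bound of the Wilson score confidence interval for a Bernoulli
     * parameter -- the "confidence sort" used for ranking by up/down votes.
     *
     * @param ups   number of positive votes
     * @param downs number of negative votes
     * @param z     standard-normal quantile (e.g. 1.96 for ~95% confidence)
     */
    static double wilsonLowerBound(long ups, long downs, double z) {
        long n = ups + downs;
        if (n == 0) {
            return 0.0;
        }
        double phat = (double) ups / n;
        double z2 = z * z;
        return (phat + z2 / (2 * n)
                - z * Math.sqrt((phat * (1 - phat) + z2 / (4 * n)) / n))
               / (1 + z2 / n);
    }

    public static void main(String[] args) {
        // 6/4 over ten votes ranks below 550/450 over a thousand:
        // more evidence gives a tighter interval and a higher lower bound.
        System.out.println(wilsonLowerBound(6, 4, 1.96));     // ~0.31
        System.out.println(wilsonLowerBound(550, 450, 1.96)); // ~0.52
    }
}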
ranking  rating  algorithms  popularity  python  wilson-score-interval  sorting  statistics  confidence-sort 
january 2013 by jm
