How The Copyright Industry Pushed For Internet Surveillance | TorrentFreak
Rick Falkvinge with a good point:
The reason for the copyright industry to push for surveillance is simple: any digital communications channel can be used for private conversation, but it can also be used to share culture and knowledge that is under copyright monopoly. In order to tell which communications is which, you must sort all of it – and to do that, you must look at all of it. In other words, if enforcing the copyright monopoly is your priority, you need to kill privacy, and specifically anonymity and secrecy of correspondence.


This was exactly my biggest worry -- a side-effect of effective copyright filtering is the creation of infrastructure for online oppression by the state.
copyright  privacy  state  data-protection  rick-falkvinge  copyfight  internet  filtering  surveillance  anonymity 
july 2013
Google Translate of "Lorem ipsum"
The perils of unsupervised machine learning... here's what GTranslate reckons "lorem ipsum" translates to:
We will be sure to post a comment. Add tomato sauce, no tank or a traditional or online. Until outdoor environment, and not just any competition, reduce overall pain. Cisco Security, they set up in the throat develop the market beds of Cura; Employment silently churn-class by our union, very beginner himenaeos. Monday gate information. How long before any meaningful development. Until mandatory functional requirements to developers. But across the country in the spotlight in the notebook. The show was shot. Funny lion always feasible, innovative policies hatred assured. Information that is no corporate Japan
lorem-ipsum  boilerplate  machine-learning  translation  google  translate  probabilistic  tomato-sauce  cisco  funny 
june 2013
Latest leak of EU Data Protection Regulation makes fines impossible
Well, isn't this convenient. The leaked proposed regulation document from the Irish EU presidency contains the following changes from current law:
what is new is a set of prescriptive conditions which, if adopted, appears to make a Monetary Penalty Notice (MPN) almost impracticable to serve. This is because the [Data Protection] Commissioner would have to consider a dozen factors (many of which will no doubt give rise to appeal). [...]

In addition, the fines in the Regulation require consideration of the actual damage caused; this compares unfavourably with the current MPN where large fines have been contingent on grave security errors on the part of the data controller (i.e. the MPN of the UK DPA does not need damage to data subjects – only the likelihood of substantial distress or damage which should have been preventable/foreseeable).
data-protection  law  eu  ec  ireland  privacy  fines  regulation  mpn 
june 2013
_Measuring Mobile Web Performance_ [slides]
The notable slide is #13, which graphs HSDPA packet RTTs measured from a train. Max RTT gets up to 20,266ms. Ouch.
rtt  packets  latency  hsdpa  mobile  internet  trains  packet-loss 
june 2013
_Bolt-On Causal Consistency_ [slides]
SIGMOD 2013 presentation from Peter Bailis, Ali Ghodsi, Joseph M. Hellerstein, Ion Stoica -- adding consistency to an eventually-consistent store by tracking dependencies
eventual-consistency  state  cap-theorem  storage  peter-bailis 
june 2013
My email to Irish Times Editor, sent 25th June
Daragh O'Brien noting 3 stories on 3 consecutive days voicing dangerously skewed misinformation about data protection and privacy law in Ireland:
There is a worrying pattern in these stories. The first two decry the Data Protection legislation (current and future) as being dangerous to children and damaging to the genealogy trade. The third sets up an industry “self-regulation” straw man and heralds it as progress (when it is decidedly not, serving only to further confuse consumers about their rights).

If I was a cynical person I would find it hard not to draw the conclusion that the Irish Times, the “paper of record” has been stooged by organisations who are resistant to the defence of and validation of fundamental rights to privacy as enshrined in the Data Protection Acts and EU Treaties, and in the embryonic Data Protection Regulation. That these stories emerge hot on the heels of the pendulum swing towards privacy concerns that the NSA/Prism revelations have triggered is, I must assume, a co-incidence. It cannot be the case that the Irish Times blindly publishes press releases without conducting cursory fact checking on the stories contained therein?

Three stories over three days is insufficient data to plot a definitive trend, but the emphasis is disconcerting. Is it the Irish Times’ editorial position that Data Protection legislation and the protection of fundamental rights is a bad thing and that industry self-regulation that operates in ignorance of legislation is the appropriate model for the future? It surely cannot be that press releases are regurgitated as balanced fact and news by the Irish Times without fact checking and verification? If I was to predict a “Data Protection killed my Puppy” type headline for tomorrow’s edition or another later this week would I be proved correct?
daragh-obrien  irish-times  iab  bias  advertising  newspapers  press-releases  journalism  data-protection  privacy  ireland 
june 2013
Boundary's Early Warnings alarm
Anomaly detection on network throughput metrics, alarming if throughputs on selected flows deviate by 1, 2, or 3 standard deviations from a historical baseline.
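The idea fits in a few lines; here's a minimal sketch in Python (a generic illustration, not Boundary's actual implementation):

```python
import statistics

def alarm_level(history, current):
    """Return 0-3: how many standard deviations the current throughput
    sits from its historical baseline. Purely illustrative."""
    baseline = statistics.mean(history)
    sd = statistics.stdev(history)
    if sd == 0:
        return 0
    deviations = abs(current - baseline) / sd
    return min(3, int(deviations))
```

A real system would also need to handle seasonality in the baseline (e.g. comparing against the same hour last week), which the sketch ignores.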
network-monitoring  throughput  boundary  service-metrics  alarming  ops  statistics 
june 2013
Locally Repairable Codes
Facebook’s new erasure coding algorithm (via High Scalability).
Disk I/O and network traffic were reduced by half compared to RS codes.
The LRC required 14% more storage than RS (i.e. an overhead of 60% of the data size).
Repair times were much lower thanks to the local repair codes.
Much greater reliability thanks to fast repairs.
Reduced network traffic makes them suitable for geographic distribution.
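The storage numbers above hang together if you assume the usual Reed-Solomon (10,4) layout (an assumption on my part; the paper has the actual configuration):

```python
# Assumed RS(10,4): 10 data blocks protected by 4 parity blocks.
data_blocks, rs_parity = 10, 4
rs_total = (data_blocks + rs_parity) / data_blocks   # 1.4x raw data size
lrc_total = rs_total * 1.14                          # "14% more than RS"
print(lrc_total)                                     # ≈ 1.6x, i.e. ~60% overhead
```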
erasure-coding  facebook  redundancy  repair  algorithms  papers  via:highscalability  data  storage  fault-tolerance 
june 2013
how RAID fits in with Riak
Write heavy, high performance applications should probably use RAID 0 or avoid RAID altogether and consider using a larger n_val and cluster size. Read heavy applications have more options, and generally demand more fault tolerance with the added benefit of easier hardware replacement procedures.


Good to see official guidance on this (via Bill de hOra)
via:dehora  riak  cluster  fault-tolerance  raid  ops 
june 2013
gnuplot's dumb terminal
Turns out gnuplot has a pretty readable ASCII terminal rendering mode; combined with 'watch' it makes for a nifty graphing one-liner
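One possible incantation, for the record ('metrics.dat' is a hypothetical whitespace-delimited data file, and the terminal size is arbitrary):

```shell
# static ASCII plot:
gnuplot -e 'set term dumb; plot sin(x)'

# live-updating graph of the 2nd column of a data file, every 2 seconds:
watch -n2 'gnuplot -e "set term dumb 120 30; plot \"metrics.dat\" using 2 with lines notitle"'
```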
gnuplot  plotting  charts  graphs  cli  command-line  unix  gnu  hacks  dataviz  visualization  ascii 
june 2013
Facebook announce Wormhole
Over the last couple of years, we have built and deployed a reliable publish-subscribe system called Wormhole. Wormhole has become a critical part of Facebook's software infrastructure. At a high level, Wormhole propagates changes issued in one system to all systems that need to reflect those changes – within and across data centers.


Facebook's Kafka-alike, basically, although with some additional low-latency guarantees. FB appear to be using it for multi-region and multi-AZ replication. Proprietary.
pub-sub  scalability  facebook  realtime  low-latency  multi-region  replication  multi-az  wormhole 
june 2013
Sketch of the Day: K-Minimum Values
Another sketching algorithm -- this one supports set union and intersection operations more easily than HyperLogLog when there are more than 2 sets
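For intuition, here's a toy KMV estimator in Python (my own sketch of the textbook algorithm, not taken from the linked post): keep the k smallest distinct hash values, normalised into [0,1); the k-th smallest tells you how densely hashes are packed, and hence the cardinality.

```python
import hashlib

def kmv_estimate(stream, k=1024):
    """K-Minimum Values cardinality estimate. Illustrative only."""
    kept = set()
    threshold = 1.0
    for item in stream:
        # hash each item to a float uniformly distributed in [0, 1)
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) / 2.0 ** 160
        if len(kept) >= k and h >= threshold:
            continue                    # not among the k smallest seen
        kept.add(h)                     # set membership dedupes for free
        if len(kept) > k:
            kept.remove(max(kept))      # evict the largest retained value
            threshold = max(kept)
    if len(kept) < k:
        return float(len(kept))         # fewer than k distinct items: exact
    return (k - 1) / max(kept)          # classic KMV estimator
```

Union of two KMV sketches is just the k smallest values of the combined sets, which is why set operations compose so naturally here.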
algorithms  coding  space-saving  cardinality  streams  stream-processing  estimation  sets  sketching 
june 2013
js-hll
Good UI for exploration of HyperLogLog set intersections and unions.
One of the first things that we wanted to do with HyperLogLog when we first started playing with it was to support and expose it natively in the browser. The thought of allowing users to directly interact with these structures -- perform arbitrary unions and intersections on effectively unbounded sets all on the client -- was exhilarating to us. [...] we are pleased to announce the open-source release of AK’s HyperLogLog implementation for JavaScript, js-hll. We are releasing this code under the Apache License, Version 2.0.

We knew that we couldn’t just release a bunch of JavaScript code without allowing you to see it in action — that would be a crime. We passed a few ideas around and the one that kept bubbling to the top was a way to kill two birds with one stone. We wanted something that would showcase what you can do with HLL in the browser and give us a tool for explaining HLLs. It is typical for us to explain how HLL intersections work using a Venn diagram. You draw some overlapping circles with a border that represents the error and you talk about how if that border is close to or larger than the intersection then you can’t say much about the size of that intersection. This works just ok on a whiteboard but what you really want is to just build a visualization that allows you to select from some sets and see the overlap. Maybe even play with the precision a little bit to see how that changes the result. Well, we did just that!
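For intuition about what the js-hll visualization is showing, here's a from-scratch toy HLL in Python with union and inclusion-exclusion intersection (nothing to do with js-hll's actual code; register count and constants are the textbook ones):

```python
import hashlib
import math

class HLL:
    """Minimal HyperLogLog sketch, for illustration only."""
    def __init__(self, p=12):
        self.p, self.m = p, 1 << p
        self.reg = [0] * self.m          # m = 2^p 6-bit-ish registers

    def add(self, item):
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) & ((1 << 64) - 1)
        idx = h >> (64 - self.p)                      # first p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)         # remaining 64-p bits
        rho = (64 - self.p) - rest.bit_length() + 1   # leading zeros + 1
        self.reg[idx] = max(self.reg[idx], rho)

    def union(self, other):
        out = HLL(self.p)                # lossless: register-wise max
        out.reg = [max(a, b) for a, b in zip(self.reg, other.reg)]
        return out

    def count(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)
        est = alpha * self.m ** 2 / sum(2.0 ** -r for r in self.reg)
        zeros = self.reg.count(0)
        if est <= 2.5 * self.m and zeros:             # small-range correction
            est = self.m * math.log(self.m / zeros)
        return est

def intersection(a, b):
    # inclusion-exclusion: |A ∩ B| = |A| + |B| - |A ∪ B| -- note the errors
    # of all three estimates stack up, which is exactly the "border close to
    # the intersection" problem the Venn-diagram explanation describes.
    return a.count() + b.count() - a.union(b).count()
```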
javascript  ui  hll  hyperloglog  algorithms  sketching  js  sets  intersection  union  apache  open-source 
june 2013
'If I was your cloud provider, I'd never let you down'
This is the thing that's put me off Joyent. They make claims like this one from October 2012:
We’ve given our other partners 99.9999% uptime.


This despite a 10-day outage of their BingoDisk and Strongspace storage services in January 2008, 1734 days previously (http://www.datacenterknowledge.com/archives/2008/01/21/joyent-services-back-after-8-day-outage/).

If you assume that is the only outage they've had since then, that works out as 99.4% uptime. Quite a few nines fewer...
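The arithmetic, assuming that single 10-day outage:

```python
outage_days = 10
window_days = 1734 + 10            # the gap quoted above, plus the outage itself
uptime = 100 * (1 - outage_days / window_days)
print(round(uptime, 1))            # 99.4
```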
joyent  marketing  uptime  two-nines  fail  strongdisk 
june 2013
shades
A command-line utility in Ruby to perform (a) OLAP cubing and (b) histogramming, given whitespace-delimited line data
ruby  olap  number-crunching  data  histograms  cli 
june 2013
Liberty issues claim against British Intelligence Services over PRISM and Tempora privacy scandal
James Welch, Legal Director for Liberty, said:
 
“Those demanding the Snoopers’ Charter seem to have been indulging in out-of-control snooping even without it – exploiting legal loopholes and help from Uncle Sam.
“No-one suggests a completely unpoliced internet but those in power cannot swap targeted investigations for endless monitoring of the entire globe.”


Go Liberty! Take note, ICCL, this is how a civil liberties group engages with internet issues.
prism  nsa  gchq  surveillance  liberty  civil-liberties  internet  snooping 
june 2013
Setting up Perfect Forward Secrecy for nginx or stud
Matt Sergeant writes up a pretty solid HOWTO:

There has been a lot of discussion recently about Perfect Forward Secrecy (PFS) and the benefits it can bring you, especially in terms of any kind of traffic sniffing attack. Unfortunately setting this up I found very few guides telling you exactly what you need to do. The downside to PFS [via ECDHE] is that it uses more CPU power than other ciphers. This is a trade-off between security and cost.
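The end state of such a HOWTO is typically an nginx server block along these lines (cipher list abbreviated and illustrative; consult current guidance rather than copying this):

```nginx
# Prefer ECDHE suites so each session gets an ephemeral key (forward secrecy).
ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers               ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:HIGH:!aNULL:!MD5;
```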
ecdhe  elliptic-curve  crypto  pfs  ssl  tls  howto  nginx  stud 
june 2013
Accuweather long-range forecast accuracy questionable
"questionable" is putting it mildly:

Now to the point: Are the 25-day forecasts any good? In a word, no. Specifically, after running this data, I would not trust a forecast high temperature more than a week out. I’d rather look at the normal (historical average) temperature for that day than the forecast. Similarly, I would not even look at a precipitation forecast more than 6 days in advance, and I wouldn’t start to trust it for anything important until about 3 days ahead of time.
accuweather  accuracy  fail  graphs  data  weather  forecasting  philadelphia 
june 2013
Skype's principal architect explains why they no longer have end-to-end crypto
Mobile devices can't handle the CPU and constantly-online requirements, and there's an increased reliance on dedicated routing supernodes to avoid Windows-client monoculture and p2p network fragility

(via the IP list, via kragen)
skype  p2p  mobile  architecture  networking  internet  snooping  crypto  via:ip  via:kragen  phones  windows 
june 2013
McLibel leaflet was co-written by undercover police officer Bob Lambert | UK news | guardian.co.uk

The true identity of one of the authors of the "McLibel leaflet" is Bob Lambert, a police officer who used the alias Bob Robinson in his five years infiltrating the London Greenpeace group. [...]

McDonald's famously sued green campaigners over the roughly typed leaflet, in a landmark three-year high court case, that was widely believed to have been a public relations disaster for the corporation. Ultimately the company won a libel battle in which it spent millions on lawyers.

Lambert was deployed by the special demonstration squad (SDS) – a top-secret Metropolitan police unit that targeted political activists between 1968 and 2008, when it was disbanded. He co-wrote the defamatory six-page leaflet in 1986 – and his role in its production has been the subject of an internal Scotland Yard investigation for several months.

At no stage during the civil legal proceedings brought by McDonald's in the 1990s was it disclosed that a police infiltrator helped author the leaflet.
infiltration  police  mcdonalds  libel  greenpeace  bob-lambert  undercover  1980s  uk-politics 
june 2013
SSL/TLS overhead
'The TLS handshake has multiple variations, but let’s pick the most common one – anonymous client and authenticated server (the connections browsers use most of the time).' Works out to 4 packets, in addition to the TCP handshake's 3, and about 6.5k bytes on average.
network  tls  ssl  performance  latency  speed  networking  internet  security  packets  tcp  handshake 
june 2013
hlld
a high-performance C server which is used to expose HyperLogLog sets and operations over them to networked clients. It uses a simple ASCII protocol which is human readable, and similar to memcached.

HyperLogLogs are a relatively new sketching data structure. They are used to estimate cardinality, i.e. the unique number of items in a set. They are based on the observation that any bit in a "good" hash function is independent of any other bit and that the probability of getting a string of N bits all set to the same value is 1/(2^N). There is a lot more in the math, but that is the basic intuition. What is even more incredible is that the storage required to do the counting is log(log(N)). So with a 6 bit register, we can count well into the trillions. For more information, it's best to read the papers referenced at the end. TL;DR: HyperLogLogs enable you to have a set with about 1.6% variance, using 3280 bytes, and estimate sizes in the trillions.


(via:cscotta)
hyper-log-log  hlld  hll  data-structures  memcached  daemons  sketching  estimation  big-data  cardinality  algorithms  via:cscotta 
june 2013
Java Concurrent Counters By Numbers
thread-safe counters on the JVM, compared: AtomicLong, Doug Lea's LongAdder, a ThreadLocal counter, and an int counter field on the Thread object (via Darach Ennis). Nitsan's posts on concurrency are fantastic
counters  concurrency  threads  java  jvm  atomic 
june 2013
stuff Google has learned from their hiring data
A. On the hiring side, we found that [interview] brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

Instead, what works well are structured behavioral interviews, where you have a consistent rubric for how you assess people, rather than having each interviewer just make stuff up. Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, “Give me an example of a time when you solved an analytically difficult problem.” The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable “meta” information you get about the candidate is a sense of what they consider to be difficult.

This makes sense, and matches what I learned in Amazon. Bad news for Microsoft though! (Correction: Adam Shostack got in touch to note that MS haven't done this for 10+ years either.)

Also, I like this:

A. One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything. What’s interesting is the proportion of people without any college education at Google has increased over time as well. So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.
google  hiring  interviewing  interviews  brainteasers  gpa  microsoft  star  amazon 
june 2013
rendering pcm with simulated phosphor persistence
This is something readily applicable to display of sampled time-series metric data -- it really makes regular patterns visible (and is nicely retro to boot).
When PCM waveforms and similar function plots are displayed on screen, computational speed is often preferred over beauty and information content. For example, Audacity only draws the local maximum envelope amplitude and (what appears to be) RMS power when zoomed out, and when zoomed in, displays a very straightforward linear interpolation between samples.

Analogue oscilloscopes, on the other hand, do things differently. An electron beam scans a phosphor screen at a constant X velocity, lighting a dot everywhere it hits. The dot brightness is proportional to the time the electron beam was directed at it. Because the X speed of the beam is constant and the Y position is modulated by the waveform, brightness gives information about the local derivative of the function. Now how cool is that? It looks like an X-ray of the signal. We can see right away that the beep is roughly a square wave, because there's light on top and bottom of the oscillation envelope but mostly darkness in between. Minute changes in the harmonic content are also visible as interesting banding and ribbons.
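The dwell-time trick is easy to imitate in software. Here's a toy Python renderer (my own sketch, unrelated to the article's code) that accumulates "beam time" per character cell, so steep fast-moving segments come out dim and the flat tops and bottoms of a waveform come out bright:

```python
def phosphor_render(samples, width=60, height=20, oversample=200):
    """Render samples as ASCII art with simulated phosphor persistence:
    cell brightness is proportional to how long the 'beam' dwelt there."""
    grid = [[0.0] * width for _ in range(height)]
    n = len(samples)
    lo, hi = min(samples), max(samples)
    for i in range(n - 1):
        for t in range(oversample):
            # interpolate along the segment; steep segments spread the same
            # number of dwell points over more rows, so each cell stays dim
            frac = t / oversample
            y = samples[i] + (samples[i + 1] - samples[i]) * frac
            x = int((i + frac) / (n - 1) * (width - 1))
            row = int((y - lo) / (hi - lo + 1e-9) * (height - 1))
            grid[height - 1 - row][x] += 1
    shades = " .:-=+*#%@"
    peak = max(max(r) for r in grid)
    return "\n".join(
        "".join(shades[min(9, int(9 * c / peak))] for c in row) for row in grid
    )
```

Feed it a square-ish wave and you get exactly the effect described: bright bands at the top and bottom, near-darkness in between.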


(via an _amazing_ kragen post on ghetto electronics)
via:kragen  pcm  waveforms  oscilloscopes  analog  analogue  dataviz  time-series  waves  ui  phosphor  retro 
june 2013
Project Voldemort: measuring BDB space consumption
HOWTO measure this using the BDB-JE command-line tools. This is exposed through JMX as the CleanerBacklog metric, too, I think, but good to bookmark just in case
voldemort  cleaner  bdb  ops  space  storage  monitoring  debug 
june 2013
3-D Printer Brings Dexterity To Children With No Fingers
'A South African man who lost part of his hand in a home carpentry accident and an American puppeteer he met via YouTube have teamed up to make 3D-printable hands for children who have no fingers. So far, over 100 children have been given "robohands" for free, and a simplified version released just yesterday snaps together like LEGO bricks and costs just $5 in materials.'

This is incredible. Check out the video of Liam and his robohand in action: http://www.youtube.com/watch?v=kB53-D_N8Uc
3d-printing  3d  makers  robohands  hands  prosthetics  future  youtube  via:gruverja 
june 2013
DRI needs your help
Appalled by mass surveillance scandals? So are we. We’re doing something about it – and you can too.

In 2006 we started a case challenging Irish and European laws that require your mobile phone company and ISP to monitor your location, your calls, your texts and your emails and to store that information for up to two years. That case has now made it to the European Court of Justice and will be heard on July 9th. If we are successful, it will strike down these laws for all of Europe and will declare illegal this type of mass surveillance of the entire population.

Here’s where you come in. You can take part by: making a donation to help us pay for the expenses we incur; following our updates and keeping abreast of the issues; spreading the word on social media.

With your help, we can strike a blow for the privacy of all citizens.
activism  privacy  politics  ireland  dri  digital-rights  data-protection  data-retention 
june 2013
Java Garbage Collection Distilled
Martin Thompson lays it out:
Serial, Parallel, Concurrent, CMS, G1, Young Gen, New Gen, Old Gen, Perm Gen, Eden, Tenured, Survivor Spaces, Safepoints, and the hundreds of JVM start-up flags. Does this all baffle you when trying to tune the garbage collector while trying to get the required throughput and latency from your Java application? If it does then don’t worry, you are not alone. Documentation describing garbage collection feels like man pages for an aircraft. Every knob and dial is detailed and explained but nowhere can you find a guide on how to fly. This article will attempt to explain the tradeoffs when choosing and tuning garbage collection algorithms for a particular workload.
gc  java  garbage-collection  coding  cms  g1  jvm  optimization 
june 2013
The Cold Hard Facts of Freezing to Death
an amazing account of near-death from hypothermia (via Dor)
via:dor  hypothermia  cold  medicine  science  non-fiction 
june 2013
Verified by Visa and MasterCard SecureCode kill 10-12% of your business
As Chris Shiflett noted: not only are they bad for security, they're bad for business too.
12 percent of users consider abandoning [an online shopping transaction] when they see either the Verified by Visa or the American Express SafeKey logos, while 10 percent will consider abandoning when they see the MasterCard Secure card logo.
ecommerce  vbv  online-shopping  mastercard  visa  securecode  security  fail 
june 2013
Open Rights Group - EU Commission caved to US demands to drop anti-PRISM privacy clause
Reports this week revealed that the US successfully pressed the European Commission to drop sections of the Data Protection Regulation that would, as the Financial Times explains, “have nullified any US request for technology and telecoms companies to hand over data on EU citizens.

The article [...] would have prohibited transfers of personal information to a third country under a legal request, for example the one used by the NSA for their PRISM programme, unless “expressly authorized by an international agreement or provided for by mutual legal assistance treaties or approved by a supervisory authority.”

The Article was deleted from the draft Regulation proper, which was published shortly afterwards in January 2012. The reports suggest this was due to intense pressure from the US. Commission Vice-President Viviane Reding favoured keeping the clause, but other Commissioners seemingly did not grasp the significance of the article.
org  privacy  us  surveillance  fisaaa  viviane-reding  prism  nsa  ec  eu  data-protection 
june 2013
Schneier on Security: Blowback from the NSA Surveillance
Unintended consequences on US-focused governance of the internet and cloud computing:
Writing about the new Internet nationalism, I talked about the ITU meeting in Dubai last fall, and the attempt of some countries to wrest control of the Internet from the US. That movement just got a huge PR boost. Now, when countries like Russia and Iran say the US is simply too untrustworthy to manage the Internet, no one will be able to argue. We can't fight for Internet freedom around the world, then turn around and destroy it back home. Even if we don't see the contradiction, the rest of the world does.
internet  freedom  cloud-computing  amazon  google  hosting  usa  us-politics  prism  nsa  surveillance 
june 2013
Persuading David Simon (Pinboard Blog)
Maciej Ceglowski with a strongly-argued rebuttal of David Simon's post about the NSA's PRISM. This point in particular is key:
The point is, you don't need human investigators to find leads, you can have the algorithms do it [based on the call graph or network of who-calls-who]. They will find people of interest, assemble the watch lists, and flag whomever you like for further tracking. And since the number of actual terrorists is very, very, very small, the output of these algorithms will consist overwhelmingly of false positives.
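The base-rate arithmetic behind that point, with invented but generous numbers:

```python
# Invented, deliberately generous numbers: a classifier that's right 99% of
# the time, and one actual terrorist per million people in the population.
prevalence = 1e-6       # P(bad)
sensitivity = 0.99      # P(flagged | bad)
false_pos_rate = 0.01   # P(flagged | good)

p_flagged = sensitivity * prevalence + false_pos_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_flagged   # Bayes: P(bad | flagged)
print(ppv)   # ≈ 1e-4: well over 99.9% of flagged people are innocent
```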
false-positives  maciej  privacy  security  nsa  prism  david-simon  accuracy  big-data  filtering  anti-spam 
june 2013
Why I won’t give the European Parliament the data protection analysis it wanted
Holy crap. Simon Davies rips into the EU data-protection reform disaster with gusto:
The situation was an utter disgrace. The advertising industry even gave an award to an Irish Minister for destroying some of the rights in the regulation while the UK managed to force a provision that would make the direct marketing industry a “legitimate” processing operation in its own right, putting it on the same level of lawful processing as fraud prevention. Things got to the point where even the most senior data protection officials in Europe stopped trying to influence events and had told me “let the chips fall as they may”.
[...]

But let’s take a step back for a moment from this travesty. Out on the streets – while most may not know what data protection is – people certainly know what it is supposed to protect. People value their privacy and they will be vocal about attempts to destroy it.
I had said as much to the joint parliamentary meeting, observing “the one element that has been left out of all these efforts is the public”. However, as the months rolled on, the only message being sent to the public was that data protection is an anachronism stitched together with self interest and impracticality.
[...]

I wasn’t aware at the time that there was a vast stitch-up to kill the reforms. I cannot bring myself to present a temperate report with measured wording that pretends this is all just normal business. It isn’t normal business, and it should never be normal business in any civilized society. How does one talk in measured tones about such endemic hypocrisy and deception? If you want to know who the real enemy of privacy is, don’t just look to the American agencies. The real enemy is right here in the European Parliament in the guise of MEPs who have knowingly sold our rights away to maintain powerful relationships. I’d like to say they were merely hoodwinked into supporting the vandalism, but many are smart people who knew exactly what they were doing.


Nice work, Irish presidency! His bottom line:
Is there a way forward? I believe so. First, governments should yield to common decency and scrap the illegitimate and poisoned Irish Council draft and hand the task to the Lithuanian Presidency that commences next month. Second, the Irish and British governments should be infinitely more transparent about their cooperation with intrusive interests that fuelled the deception.
ireland  eu  europe  reform  law  data-protection  privacy  simon-davies  meps  iab 
june 2013
Building a Modern Website for Scale (QCon NY 2013) [slides]
some great scalability ideas from LinkedIn. Particularly interesting are the best practices suggested for scaling web services:

1. store client-call timeouts and SLAs in Zookeeper for each REST endpoint;
2. isolate backend calls using async/threadpools;
3. cancel work on failures;
4. avoid sending requests to GC'ing hosts;
5. rate limits on the server.

#4 is particularly cool. They do this using a "GC scout" request before every "real" request; a cheap TCP request to a dedicated "scout" Netty port, which replies near-instantly. If it comes back with a 1-packet response within 1 millisecond, send the real request, else fail over immediately to the next host in the failover set.

There's still a potential race condition where the "GC scout" response arrives quickly, then a GC starts just before the "real" request is issued. But the incidence of a GC blocking a request is probably massively reduced.

It also helps against packet loss on the rack or server host, since packet loss will cause the drop of one of the TCP packets, and the TCP retransmit timeout will certainly be higher than 1ms, causing the deadline to be missed. (UDP would probably work just as well, for this reason.) However, in the case of packet loss in the client's network vicinity, it will be vital to still attempt to send the request to the final host in the failover set regardless of a GC-scout failure, otherwise all requests may be skipped.

The GC-scout system also helps balance request load off heavily-loaded hosts, or hosts with poor performance for other reasons; they'll fail to achieve their 1 msec deadline and the request will be shunted off elsewhere.
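The client-side selection logic might look roughly like this (the scout port, probe byte, and function names are all invented for illustration; LinkedIn's real implementation sits on Netty):

```python
import socket

def scout_ok(host, deadline=0.001, port=7171):
    """Probe the host's dedicated scout port (port number invented here);
    a 1-byte reply within the deadline suggests the host isn't mid-GC."""
    try:
        with socket.create_connection((host, port), timeout=deadline) as s:
            s.settimeout(deadline)
            s.sendall(b"?")
            return len(s.recv(1)) == 1
    except OSError:
        return False

def pick_host(hosts, probe=scout_ok):
    """Try each candidate's scout first; fall back to the final host
    unconditionally, so packet loss near the client can't strand the
    request (per the caveat above)."""
    for host in hosts[:-1]:
        if probe(host):
            return host
    return hosts[-1]
```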

For service APIs with real low-latency requirements, this is a great idea.
gc-scout  gc  java  scaling  scalability  linkedin  qcon  async  threadpools  rest  slas  timeouts  networking  distcomp  netty  tcp  udp  failover  fault-tolerance  packet-loss 
june 2013
Record companies to target 20 more pirate sites after court ruling - Independent.ie
Looks like IRMA are following the lead of the UK's BPI, by chasing the proxy sites next:
Up to 20 internet sites are to be targeted by an organisation representing record companies in a move to stamp out the illegal pirating of music and other copyright material. The Irish Recorded Music Association (IRMA) said it would be immediately moving against the 20 "worst offenders" to "take out" internet sites involved in the illegal downloading of copyright work.


However, looks like this will involve more court time:
Last night IRMA director general, Dick Doyle said the High Court ruling was only the first step in "taking out many internet sites involved in illegally downloading music. "We will be back in court very shortly to take out five to 10 other sites. We have already selected a total of 20 of the worst offender sites and we will go after the next five in the very near future," he said.


That's not going to be cheap!
courts  ireland  law  irma  piracy  pirate-bay  bpi  proxies  filesharing  copyright 
june 2013
CloudFlare, PRISM, and Securing SSL Ciphers
Matthew Prince of CloudFlare has an interesting theory on the NSA's capabilities:
It is not inconceivable that the NSA has data centers full of specialized hardware optimized for SSL key breaking. According to data shared with us from a survey of SSL keys used by various websites, the majority of web companies were using 1024-bit SSL ciphers and RSA-based encryption through 2012. Given enough specialized hardware, it is within the realm of possibility that the NSA could within a reasonable period of time reverse engineer 1024-bit SSL keys for certain web companies. If they'd been recording the traffic to these web companies, they could then use the broken key to go back and decrypt all the transactions.

While this seems like a compelling theory, ultimately, we remain skeptical this is how the PRISM program described in the slides actually works. Cracking 1024-bit keys would be a big deal and likely involve some cutting-edge cryptography and computational power, even for the NSA. The largest SSL key that is known to have been broken to date is 768 bits long. While that was 4 years ago, and the NSA undoubtedly has some of the best cryptographers in the world, it's still a considerable distance from 768 bits to 1024 bits -- especially given the slide suggests Microsoft's key would have to had been broken back in 2007.

Moreover, the slide showing the dates on which "collection began" for various companies also puts the cost of the program at $20M/year. That may sound like a lot of money, but it is not for an undertaking like this. Just the power necessary to run the server farm needed to break a 1024-bit key would likely cost in excess of $20M/year. While the NSA may have broken 1024-bit SSL keys as part of some other program, if the slide is accurate and complete, we think it's highly unlikely they did so as part of the PRISM program. A not particularly glamorous alternative theory is that the NSA didn't break the SSL key but instead just cajoled rogue employees at firms with access to the private keys -- whether the companies themselves, partners they'd shared the keys with, or the certificate authorities who issued the keys in the first place -- to turn them over. That very well may be possible on a budget of $20M/year.

[....]
Google is a notable anomaly. The company uses a 1024-bit key, but, unlike all the other companies listed above, rather than using a default cipher suite based on the RSA encryption algorithm, they instead prefer the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) cipher suites. Without going into the technical details, a key difference of ECDHE is that it uses a different private key for each user's session. This means that if the NSA, or anyone else, is recording encrypted traffic, they cannot break one private key and read all historical transactions with Google. The NSA would have to break the private key generated for each session, which, in Google's case, is unique to each user and regenerated at least every 28 hours.

While ECDHE arguably already puts Google at the head of the pack for web transaction security, to further augment security Google has publicly announced that they will be increasing their key length to 2048-bit by the end of 2013. Assuming the company continues to prefer the ECDHE cipher suites, this will put Google at the cutting edge of web transaction security.


2048-bit ECDHE sounds like the way to go, and CloudFlare now support that too.
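The forward-secrecy property described above is easy to demonstrate with Python's standard ssl module -- a minimal client-context sketch (the cipher-string syntax is OpenSSL's; this only shows the preference mechanism, not Google's actual server configuration):

```python
import ssl

# A client-side TLS context that prefers ephemeral ECDH key exchange.
# Even if the server's long-term RSA key leaks later, previously
# recorded sessions stay unreadable, because each session handshake
# generated its own throwaway key pair.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM")

# Every enabled TLS 1.2 suite now uses ECDHE key exchange.
for cipher in ctx.get_ciphers():
    print(cipher["name"], cipher.get("kea"))
```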
prism  security  nsa  cloudflare  ssl  tls  ecdhe  elliptic-curve  crypto  rsa  key-lengths 
june 2013
Announcing Zuul: Edge Service in the Cloud
Netflix's library to implement "edge services" -- i.e. a front end to their API, web servers, and streaming servers. Some interesting features: dynamic filtering using Groovy scripts; Hystrix for software load balancing, fault tolerance, and error handling for originated HTTP requests; fine-grained service metrics; Archaius for configuration; and canary requests to detect overload risks. Pretty complex, though.
edge-services  api  netflix  zuul  archaius  canary-requests  http  groovy  hystrix  load-balancing  fault-tolerance  error-handling  configuration 
june 2013
Paper: "Root Cause Detection in a Service-Oriented Architecture" [pdf]
LinkedIn have implemented an automated root-cause detection system:

This paper introduces MonitorRank, an algorithm that can reduce the time, domain knowledge, and human effort required to find the root causes of anomalies in such service-oriented architectures. In the event of an anomaly, MonitorRank provides a ranked order list of possible root causes for monitoring teams to investigate. MonitorRank uses the historical and current time-series metrics of each sensor as its input, along with the call graph generated between sensors to build an unsupervised model for ranking. Experiments on real production outage data from LinkedIn, one of the largest online social networks, shows a 26% to 51% improvement in mean average precision in finding root causes compared to baseline and current state-of-the-art methods.


This is a topic close to my heart after working on something similar for 3 years in Amazon!

Looks interesting, although (a) I would have liked to see more case studies and examples of "real world" outages it helped with; and (b) it's very much a machine-learning paper rather than a systems one, and there is no discussion of fault tolerance in the design of the detection system, which would leave me worried that in the case of a large-scale outage event, the system itself will disappear when its help is most vital. (This was a major design influence on our team's work.)

Overall, particularly given those 2 issues, I suspect it's not in production yet. Ours certainly was ;)
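The core intuition -- rank candidate upstream services by how well their metrics track the anomalous frontend metric -- can be sketched in a few lines. To be clear, this is not MonitorRank itself (which combines metric similarity with a random walk over the call graph); it's just the correlation half, with made-up series, as an illustration:

```python
import statistics

def rank_candidates(frontend_series, candidate_series):
    """Rank upstream services by Pearson correlation of their metric
    with the anomalous frontend metric: a crude stand-in for the
    similarity component of a MonitorRank-style root-cause ranker."""
    def pearson(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0
    scores = {svc: pearson(frontend_series, series)
              for svc, series in candidate_series.items()}
    return sorted(scores, key=scores.get, reverse=True)

frontend = [10, 11, 10, 40, 42, 41]      # frontend latency spikes at t=3
candidates = {
    "db":    [5, 5, 6, 30, 33, 31],      # spikes in lockstep -> likely cause
    "cache": [7, 8, 7, 8, 7, 8],         # unaffected
}
print(rank_candidates(frontend, candidates))  # ['db', 'cache']
```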
linkedin  soa  root-cause  alarming  correlation  service-metrics  machine-learning  graphs  monitoring 
june 2013
Introducing Kale « Code as Craft
Etsy have implemented a tool to perform auto-correlation of service metrics, and detection of deviation from historic norms:
at Etsy, we really love to make graphs. We graph everything! Anywhere we can slap a StatsD call, we do. As a result, we’ve found ourselves with over a quarter million distinct metrics. That’s far too many graphs for a team of 150 engineers to watch all day long! And even if you group metrics into dashboards, that’s still an awful lot of dashboards if you want complete coverage. Of course, if a graph isn’t being watched, it might misbehave and no one would know about it. And even if someone caught it, lots of other graphs might be misbehaving in similar ways, and chances are low that folks would make the connection.

We’d like to introduce you to the Kale stack, which is our attempt to fix both of these problems. It consists of two parts: Skyline and Oculus. We first use Skyline to detect anomalous metrics. Then, we search for that metric in Oculus, to see if any other metrics look similar. At that point, we can make an informed diagnosis and hopefully fix the problem.


It'll be interesting to see if they can get this working well. I've found it can be tricky to get working with low false positives, without massive volume to "smooth out" spikes caused by normal activity. Amazon had one particularly successful version driving severity-1 order drop alarms, but it used massive event volumes and still had periodic false positives. Skyline looks like it will alarm on a single anomalous data point, and in the comments Abe notes "our algorithms err on the side of noise and so alerting would be very noisy."
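For a sense of why single-data-point alarming gets noisy: the classic deviation-from-historic-norm check is "flag anything more than three standard deviations from the mean". Skyline actually runs an ensemble of such algorithms and takes a consensus; this is just one illustrative member, with invented numbers:

```python
import statistics

def is_anomalous(history, latest, tolerance=3.0):
    """Flag `latest` if it deviates from the historical mean by more
    than `tolerance` standard deviations. Alarming on a single such
    point is exactly what makes this style of detector noisy."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) > tolerance * stdev

steady = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]
print(is_anomalous(steady, 100))  # False: within the normal band
print(is_anomalous(steady, 160))  # True: a large spike
```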
etsy  monitoring  service-metrics  alarming  deviation  correlation  data  search  graphs  oculus  skyline  kale  false-positives 
june 2013
On Scala
great, comprehensive review of the language, its pros and misfeatures, from Bill de hOra
scala  languages  coding  fp  reviews 
june 2013
Possible ban on 'factory food' in French restaurants
I am very much in favour of this in Ireland, too. The pre-prepared food thing makes for crappy food:
In an attempt to crack down on the proliferation of restaurants serving boil-in-a-bag or microwave-ready meals, which could harm France’s reputation for good food, MP Daniel Fasquelle is putting a new law to parliament this month. [...] The proposed law would limit the right to use the word “restaurant” to eateries where food is prepared on site using raw ingredients, either fresh or frozen. Exceptions would be made for some prepared products, such as bread, charcuterie and ice cream.
restaurants  food  france  cuisine  boil-in-the-bag  microwave  cooking  daniel-fasquelle 
june 2013
Atelier olschinsky - "Cities III 05"
Fine Art Print on Hahnemuehle Photo Rag Bright White, 310g: 40x50cm up to 70x100cm. Some great art based on decayed urban landscape shots, from a Vienna-based design studio. See also http://english.mashkulture.net/2011/10/17/atelier-olschinsky-cities-iii/ , http://www.mascontext.com/tag/atelier-olschinsky/
olschinsky  cities  urban  decay  landscape  art  prints  want 
june 2013
UK ISPs Secretly Start Blocking Torrent Site Proxies | TorrentFreak
The next step of cat-and-mouse. Let's see what the pirate sites do next...
The blocking orders are intended to deter online piracy and were requested by the music industry group BPI on behalf of a variety of major labels. Thus far they’ve managed to block access to The Pirate Bay, Kat.ph, H33T and Fenopy, and preparations are being made to add many others.

The effectiveness of these initial measures has been called into doubt, as they are relatively easy to bypass. For example, in response to the blockades hundreds of proxy sites popped up, allowing subscribers to reach the prohibited sites via a detour.
However, as of this week these proxies are also covered by the same blocklist they aim to circumvent, without a new court ruling.

The High Court orders give music industry group BPI the authority to add sites to the blocklist without oversight. Until now some small changes have been made, mostly in response to The Pirate Bay’s domain hopping endeavors, but with the latest blocklist update a whole new range of websites is being targeted.
bittorrent  blocking  filesharing  copyright  bpi  piracy  pirate-bay  proxies  fenopy  kat.ph  h33t  filtering  uk 
june 2013
EU unlocks a great new source of online innovation
Today the European Parliament voted to formally agree new rules on open data – effectively making a reality of the proposal which I first put forward just over 18 months ago, and making it easier to open up huge amounts of public sector data.


Great news -- wonder how it'll affect the Ordnance Survey of Ireland?
osi  mapping  open-data  open  data  europe  eu  neelie-kroes 
june 2013
Big Memory, Part 4
good microbenchmarking of a bunch of Java collections libraries: Trove, fastutil, PCJ, mahout-collections, HPPC
java  collections  benchmarks  performance  speed  coding  data-structures  optimization 
june 2013
fastutil
fastutil extends the Java™ Collections Framework by providing type-specific maps, sets, lists and queues with a small memory footprint and fast access and insertion; provides also big (64-bit) arrays, sets and lists, and fast, practical I/O classes for binary and text files. It is free software distributed under the Apache License 2.0. It requires Java 6 or newer.


used by Facebook (along with Apache Giraph, Netty, and Unsafe) to cut "weekend-long" Hive jobs down to "coffee break" length. http://www.slideshare.net/nitayj/2013-0603-berlin-buzzwords
via:highscalability  facebook  giraph  optimization  java  speed  fastutil  collections  data-structures 
june 2013
Former NSA Boss: We Don't Data Mine Our Giant Data Collection, We Just Ask It Questions
'Well, that's - no, we're going to use it. But we're not going to use it in the way that some people fear. You put these records, you store them, you have them. It's kind of like, I've got the haystack now. And now let's try to find the needle. And you find the needle by asking that data a question. I'm sorry to put it that way, but that's fundamentally what happens. All right. You don't troll through the data looking for patterns or anything like that. The data is set aside. And now I go into that data with a question that - a question that is based on articulable, arguable, predicate to a terrorist nexus.'


Yep, that's data mining.
data-mining  questions  haystack  needle  nsa  usa  politics  privacy  data-protection  michael-hayden 
june 2013
graphite-metrics
metric collectors for various stuff not (or poorly) handled by other monitoring daemons

Core of the project is a simple daemon (harvestd), which collects metric values and sends them to graphite carbon daemon (and/or other configured destinations) once per interval. Includes separate data collection components ("collectors") for processing of:

/proc/slabinfo for useful-to-watch values, not everything (configurable).
/proc/vmstat and /proc/meminfo in a consistent way.
/proc/stat for irq, softirq, forks.
/proc/buddyinfo and /proc/pagetypeinfo (memory fragmentation).
/proc/interrupts and /proc/softirqs.
Cron log to produce start/finish events and duration for each job into a separate metrics, adapts jobs to metric names with regexes.
Per-system-service accounting using systemd and its cgroups.
sysstat data from sadc logs (use something like sadc -F -L -S DISK -S XDISK -S POWER 60 to have more stuff logged there) via the sadf binary and its JSON export (sadf -j, supported since sysstat-10.0.something, iirc).
iptables rule "hits" packet and byte counters, taken from ip{,6}tables-save, mapped via separate "table chain_name rule_no metric_name" file, which should be generated along with firewall rules (I use this script to do that).


Pretty exhaustive list of system metrics -- could have some interesting ideas for Linux OS-level metrics to monitor in future.
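The basic shape of such a collector is simple: parse a /proc file into name/value pairs, format them in Carbon's plaintext protocol ("<path> <value> <timestamp>"), and write them to the carbon daemon over TCP. A minimal sketch (the hostname `carbon.example.com` is a placeholder, and this parses only the simple `Key: value kB` lines of /proc/meminfo):

```python
import time

def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  12345 kB' lines into a dict of ints (kB)."""
    metrics = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            metrics[key] = int(fields[0])
    return metrics

def carbon_lines(metrics, prefix="system.meminfo", now=None):
    """Format metrics in Carbon's plaintext protocol: '<path> <value> <ts>'."""
    now = int(now if now is not None else time.time())
    return ["%s.%s %d %d" % (prefix, k, v, now) for k, v in metrics.items()]

sample = "MemTotal:  16303428 kB\nMemFree:  8123004 kB"
for line in carbon_lines(parse_meminfo(sample), now=0):
    print(line)

# Sending is then a plain TCP write to the carbon daemon (port 2003 by default):
#   sock = socket.create_connection(("carbon.example.com", 2003))
#   sock.sendall(("\n".join(lines) + "\n").encode())
```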
graphite  monitoring  metrics  unix  linux  ops  vm  iptables  sysadmin 
june 2013
Lawsuit Filed To Prove Happy Birthday Is In The Public Domain; Demands Warner Pay Back Millions Of License Fees | Techdirt
The issue [...] is that it's just not cost effective for anyone to actually stand up and challenge Warner Music, who has strong financial incentive to pretend the copyright is still valid. Well, apparently, someone is pissed off enough to try. The creatively named Good Morning to You Productions, a documentary film company planning a film about the song Happy Birthday, has now filed a lawsuit concerning the copyright of Happy Birthday and are seeking to force Warner/Chappell to return the millions of dollars it has collected over the years. That's going to make this an interesting case.
music  copyright  law  via:bwalsh  public-domain  happy-birthday  songs  warner-music  lawsuits 
june 2013
There's a map for that
'Not long ago, we began rendering 3D models on GitHub. Today we're excited to announce the latest addition to the visualization family - geographic data. Any .geojson file in a GitHub repository will now be automatically rendered as an interactive, browsable map, annotated with your geodata.'

As this HN comment (https://news.ycombinator.com/item?id=5875693) notes: 'I'd much rather Github cleaned up the UI for existing features than added these little flourishes that I can't imagine even 1% of users use.' Something is seriously wrong in how GitHub decides product direction if this kind of wankology (and that Judy-array crap) is what gets prioritised. :(

(via Marc O'Morain)
via:marc  github  mapping  maps  geojson  hacking  product-management  ui  pull-requests 
june 2013
Spamalot reigns: the spoils of Ireland’s EU kingship | The Irish Times - Thu, Jun 13, 2013
The spam presidency. As European citizens are made the miserable targets of unimpeded “direct marketing”, that may be how Ireland’s stint in the EU presidency seat is recalled for years to come.
Under the guiding hand of Minister for Justice Alan Shatter, the Council of the European Union has submitted proposals for amendments to a proposed new data protection regulation, all of which overwhelmingly favour business and big organisations, not citizens.
The most obviously repugnant and surprising element in the amendments is a watering down of existing protections for EU citizens against the willy-nilly marketing Americans are forced to endure. In the US there are few meaningful restrictions on what businesses can do with people’s personal information when pitching products and services at them.
In the EU, this has always been strictly controlled; information gathered for one purpose cannot be used by a business to sell whatever it wants – unless you have opted in to receive such solicitations. This means you are not constantly bombarded by emails and junk mail, nor do you get non-stop phone calls from telemarketers.
Under the proposed amendments to the draft data protection regulation, direct marketing would become a legal form of data processing. In effect, this would legitimise spam email, junk print mail and marketing calls. This unexpected provision signals just how successful powerful corporate lobbyists have been in convincing ministers that business matters more than privacy or giving citizens reasonable control over their personal information.
Far worse is contained in other amendments, which in effect turn the original draft of the regulation upside down.


Fantastic article from Karlin Lillington in today's Times on the terrible amendments proposed for the EU's data protection law.
eu  law  prism  data-protection  privacy  ireland  ec  marketing  spam  anti-spam  email 
june 2013
Labour TD ignores tough questions on web case
I [Tom Murphy] have asked [Sean Sherlock] a question: Does he have any comment about the lawsuit between EMI and UPC (and a raft of other ISPs too btw) which is using his SI to attempt to block PirateBay? A court case he said would not happen. Now, I am blocked from following him on Twitter. This is not how a proper political system works.
politics  ireland  twitter  sean-sherlock  tom-murphy  boards  devore  copyright 
june 2013
Music firms secure orders blocking access to Pirate Bay - Crime & Law News from Ireland & Abroad | The Irish Times - Wed, Jun 12, 2013
Four major music companies have secured court orders requiring six internet service providers to block access by subscribers to various Pirate Bay websites within some 30 days in a bid to prevent illegal downloading of copyright music and other material. [...]

Today, Mr Justice Brian McGovern said he was satisfied to make the order in circumstances including that new copyright laws here and in the EU permitted such orders to be made. He said he fully agreed with a previous High Court judge who had said he would make such blocking orders if the law permitted and noted the law now allowed for such orders. The form of the orders means the music companies will not have to make fresh applications to court if Pirate Bay changes its location on the internet.
pirate-bay  blocking  filtering  internet  ireland  upc  eircom  vodafone  digiweb  three  imagine  o2  copyright 
june 2013
Rapid Response: The NSA Prism Leak
'The biggest leak in the history of US security or nothing to worry about? A breach of trust and a data protection issue or a necessary secret project to protect American interests? [Tomorrow] lunchtime Science Gallery Rapid Response event [sic] will pick through the jargon, examine the minutiae of the National Security Agency's PRISM project and the whistle blower Edward Snowden's revelations, and discuss what it means for you and everyone. And we'll look at the bigger picture too. Journalist Una Mullally will chair a panel of guests on the story that everyone is talking about. '
science-gallery  panel-discussions  dublin  nsa  prism  panel 
june 2013
Vagrant and Chef to provision dev test environments
We have recently switched from a manually configured development environment to a nearly fully automated one using Vagrant, Chef, and a few other tools. With this transition, we’ve moved to an environment where data on the dev boxes is considered disposable and only what’s checked into the SCM is “real”. This is where we’ve always wanted to be, but without the ability to easily rebuild the dev environment from scratch, it’s hard to internalize this behavior pattern.
dev  osx  chef  vagrant  testing  vms  coding 
june 2013
PRISM explains the wider lobbying issues surrounding EU data protection reform | EDRI
The US has very successfully and expertly lobbied against the [EU] data protection package directly, it has mobilised and supported US industry lobbying. US industry has lobbied in its own name and mobilised malleable European trade associations to lobby on their behalf to amplify their message, “independent” “think tanks” have been created to amplify their message again. The result is not just the biggest lobbying effort that Brussels has ever seen, but also the broadest.

Compliant Members of the European Parliament (MEPs) and EU Member States [...] have been imposing a “death by a thousand cuts” on the Regulation. Where previously there was a clear obligation to collect the “minimum necessary” data for any given service, the vague requirement to retain “not excessive” data is now preferred. Where previously companies could only use data for purposes that were “compatible” with the original reason for collecting the data, the Irish EU Presidency (pdf) has proposed a comical definition of “compatible” based on five elements, only one of which is related to the dictionary definition of the word.

Members of the European Parliament and EU Member States are falling over themselves to ensure that the EU does not maintain its strategic advantage over the US. In addition to dismantling the proposed Regulation, countries like the UK desperately seek to delay the whole process and subsume it into the EU-US free trade agreement (the so-called “investment partnership” TTIP/TAFTA), which would subordinate a fundamental rights discussion in a trade negotiation. The UK government is even prepared to humiliate itself by arguing in favour of the US position on the basis that two and a half years (see Communication from 2010, pdf) of discussion is too fast!
edri  data-protection  eu  ec  ireland  politics  usa  meps  privacy  uk  free-trade 
june 2013
Microsoft admits US government can access EU-based cloud data
interesting point from an MS Q&A back in 2011, quite relevant nowadays:
Q: Can Microsoft guarantee that EU-stored data, held in EU based datacenters, will not leave the European Economic Area under any circumstances — even under a request by the Patriot Act?

A: Frazer explained that, as Microsoft is a U.S.-headquartered company, it has to comply with local laws (the United States, as well as any other location where one of its subsidiary companies is based). Though he said that "customers would be informed wherever possible," he could not provide a guarantee that they would be informed — if a gagging order, injunction or U.S. National Security Letter permits it. He said: "Microsoft cannot provide those guarantees. Neither can any other company." While it has been suspected for some time, this is the first time Microsoft, or any other company, has given this answer. Any data which is housed, stored or processed by a company, which is a U.S. based company or is wholly owned by a U.S. parent company, is vulnerable to interception and inspection by U.S. authorities. 
microsoft  privacy  cloud-computing  eu  data-centers  data-protection  nsa  fisa  usa 
june 2013
LobbyPlag
wow, great view of which MEPs are eviscerating the EU's data protection regime:
Currently the EU is negotiating about new data privacy laws. This new EU Regulation will replace all existing national laws on data privacy. Here you can see a general overview which Members of the European Parliament (MEPs) are pushing for more or less data privacy. Choose a country, a political group or a MEP from the “Top 10” list to find out more.
europe  eu  privacy  data-protection  datap  ec  regulation  meps 
june 2013
Council of the European Union Releases Draft Compromise Text on the Proposed EU Data Protection Regulation
Oh god. this sounds like an impending privacy and anti-spam disaster. "business-focussed":
Overall, the [Irish EC Presidency’s] draft compromise text can be seen as a more business-focused, pragmatic approach. For example, the Presidency has drafted an additional recital (Recital 3a), clarifying the right to data protection as a qualified right, highlighting the principle of proportionality and importance of other competing fundamental rights, including the freedom to conduct a business.


and some pretty serious relaxation of how consent for use of personal data is measured:

The criterion for valid consent is amended from “explicit” to “unambiguous,” except in the case of processing special categories of data (i.e., sensitive personal data) (Recital 25 and Article 9(2)). This reverts to the current position under the Data Protection Directive and is a concession to the practical difficulty of obtaining explicit consent in all cases.

The criteria for valid consent are further relaxed by the ability to obtain consent in writing, orally or in an electronic manner, and where technically feasible and effective, valid consent can be given using browser settings and other technical solutions. Further, the requirement that the controller bear the burden of proof that valid consent was obtained is limited to a requirement that the controller be able to “demonstrate” that consent was obtained (Recital 32 and Article 7(1)). The need for “informed” consent is also relaxed from the requirement to provide the full information requirements laid out in Article 14 to the minimal requirements that the data subject “at least” be made aware of: (1) the identity of the data controller, and (2) the purpose(s) of the processing of their personal data (Recitals 33 and 48).
anti-spam  privacy  data-protection  spam  ireland  eu  ec  regulation 
june 2013
Instagram: Making the Switch to Cassandra from Redis, a 75% 'Insta' Savings
shifting data out of RAM and onto SSDs -- unsurprisingly, big savings.
a 12 node cluster of EC2 hi1.4xlarge instances; we store around 1.2TB of data across this cluster. At peak, we're doing around 20,000 writes per second to that specific cluster and around 15,000 reads per second. We've been really impressed with how well Cassandra has been able to drop into that role.
ram  ssd  cassandra  databases  nosql  redis  instagram  storage  ec2 
june 2013
seeing into the UV spectrum after Cataract Surgery with Crystalens
I've been very happy so far with the Crystalens implant for Cataract Surgery [...] one unexpected/interesting aspect is I see a violet glow that others do not - perhaps I'm more sensitive to the low end of the visible light spectrum.


(via Tony Finch)
via:fanf  science  perception  augmentation  uv  light  sight  cool  cataracts  surgery  lens  eyes 
june 2013
The CAP FAQ by henryr
No subject appears to be more controversial to distributed systems engineers than the oft-quoted, oft-misunderstood CAP theorem. The purpose of this FAQ is to explain what is known about CAP, so as to help those new to the theorem get up to speed quickly, and to settle some common misconceptions or points of disagreement.
database  distributed  nosql  cap  consistency  cap-theorem  faqs 
june 2013
IAB Europe awards MEP Sean Kelly for standing up for data privacy rights (video) - Ireland’s CIO and strategy news and reports service – Siliconrepublic.com
Irish MEP serving as a rapporteur on reform of the EU data protection regime, was given an award by an advertising trade group last month:
Sean Kelly, Fine Gael MEP for Ireland South [who serves as the EU’s Industry Committee Rapporteur for the General Data Protection Regulation], has been selected to receive the prestigious IAB Europe Award for Leadership and Excellence for his approach to dealing with privacy concerns over shortcomings in the European Commission’s data protection proposal.
IAB Europe represents more than 5,500 online advertising media, research and analytics organisations.
iab-europe  awards  spam  sean-kelly  ireland  meps  politics  eu  data-protection  privacy  ec 
june 2013
EDRI's comments on EU proposals to reform privacy law
Amendments 762, 764 and 765 in particular seem to move portions of the law from "confirmed opt-in required" to "opt-out is ok" -- which sounds like a risk where spam and unsolicited actions on a person's data are concerned
law  privacy  anti-spam  eu  spam  edri 
june 2013
EU Council deals killer blow to privacy reforms
'In an extraordinary result for corporate lobbying, direct marketing would by default be considered a legitimate data process and would therefore – by default – be lawful.'
eu  politics  data-protection  privacy  anti-spam  spam  eu-council  direct-marketing 
june 2013
HyperLevelDB: A High-Performance LevelDB Fork
'HyperLevelDB improves on LevelDB in two key ways:
Improved parallelism: HyperLevelDB uses more fine-grained locking internally to provide higher throughput for multiple writer threads.
Improved compaction: HyperLevelDB uses a different method of compaction that achieves higher throughput for write-heavy workloads, even as the database grows.'
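"More fine-grained locking" usually means lock striping: shard state across N locks so writer threads rarely contend on the same one. A toy Python illustration of the idea (HyperLevelDB's actual implementation is C++ and internal to its memtable/log paths; this is only the general technique):

```python
import threading

class StripedCounterMap:
    """Lock striping: instead of one global lock, hash keys across N
    locks so concurrent writers to different keys don't serialize."""
    def __init__(self, stripes=16):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._maps = [{} for _ in range(stripes)]

    def incr(self, key, delta=1):
        i = hash(key) % len(self._locks)
        with self._locks[i]:
            self._maps[i][key] = self._maps[i].get(key, 0) + delta

    def get(self, key):
        i = hash(key) % len(self._locks)
        with self._locks[i]:
            return self._maps[i].get(key, 0)

m = StripedCounterMap()
threads = [threading.Thread(target=lambda: [m.incr("hits") for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(m.get("hits"))  # 4000: no lost updates, despite 4 writer threads
```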
leveldb  storage  key-value-stores  persistence  unix  libraries  open-source 
june 2013
EC2Instances.info
'Easy Amazon EC2 Instance Comparison'. A nice UI on the various EC2 instance types on offer, with their key attributes. Misses out on EBS-optimized instance availability, though
amazon  ec2  aws  comparison  pricing 
june 2013
the infamous 2008 S3 single-bit-corruption outage
Neat, I didn't realise this was publicly visible. A single corrupted bit infected the S3 gossip network, taking down the whole S3 service in (iirc) one region:
We've now determined that message corruption was the cause of the server-to-server communication problems. More specifically, we found that there were a handful of messages on Sunday morning that had a single bit corrupted such that the message was still intelligible, but the system state information was incorrect. We use MD5 checksums throughout the system, for example, to prevent, detect, and recover from corruption that can occur during receipt, storage, and retrieval of customers' objects. However, we didn't have the same protection in place to detect whether [gossip state] had been corrupted. As a result, when the corruption occurred, we didn't detect it and it spread throughout the system causing the symptoms described above. We hadn't encountered server-to-server communication issues of this scale before and, as a result, it took some time during the event to diagnose and recover from it.

During our post-mortem analysis we've spent quite a bit of time evaluating what happened, how quickly we were able to respond and recover, and what we could do to prevent other unusual circumstances like this from having system-wide impacts. Here are the actions that we're taking: (a) we've deployed several changes to Amazon S3 that significantly reduce the amount of time required to completely restore system-wide state and restart customer request processing; (b) we've deployed a change to how Amazon S3 gossips about failed servers that reduces the amount of gossip and helps prevent the behavior we experienced on Sunday; (c) we've added additional monitoring and alarming of gossip rates and failures; and, (d) we're adding checksums to proactively detect corruption of system state messages so we can log any such messages and then reject them.


This is why you checksum all the things ;)
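Action (d) above -- checksumming state messages so corrupt ones get logged and rejected rather than applied -- looks roughly like this in miniature (an illustration only, not S3's actual message format):

```python
import hashlib

def wrap(payload: bytes) -> bytes:
    """Prefix a message with its MD5 digest so receivers can verify it."""
    return hashlib.md5(payload).digest() + payload

def unwrap(message: bytes) -> bytes:
    """Verify and strip the checksum; reject (rather than apply) corrupt state."""
    digest, payload = message[:16], message[16:]
    if hashlib.md5(payload).digest() != digest:
        raise ValueError("corrupt message rejected")
    return payload

msg = wrap(b"server-42 status=healthy")
assert unwrap(msg) == b"server-42 status=healthy"

# Flip a single bit in the payload -- the kind of corruption that took S3 down.
corrupted = bytearray(msg)
corrupted[20] ^= 0x01
try:
    unwrap(bytes(corrupted))
except ValueError as e:
    print(e)  # corrupt message rejected
```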
s3  aws  post-mortems  network  outages  failures  corruption  grey-failures  amazon  gossip 
june 2013
Low-latency stock trading "jumps the gun" due to default NTP configuration settings
On June 3, 2013, trading in SPY exploded at 09:59:59.985, which is 15 milliseconds before the ISM's Manufacturing number released at 10:00:00. Activity in the eMini (traded in Chicago), exploded at 09:59:59.992, which is 8 milliseconds before the news release, but 7 milliseconds after SPY. Note how SPY and the eMini traded within a millisecond for the Consumer Confidence release last week, but the eMini lagged SPY by about 7 milliseconds for the ISM Manufacturing release. The simultaneous trading on Consumer Confidence is because that number is released at the same time in both NYC and Chicago.

The ISM Manufacturing number is probably released on a low latency feed in NYC, and then takes 5-7 milliseconds, due to the speed of light, to reach Chicago. Either the clock used to release the ISM number was 15 milliseconds fast, or someone (correctly) jumped the gun.

Update: [...] The clock used to release the ISM was indeed, 15 milliseconds fast. This could be from using the default setting of many NTP clients, which allows the clock to drift up to about 16 milliseconds before adjusting time.
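The 5-7 ms figure is consistent with back-of-the-envelope physics, assuming roughly 1,150 km NYC-to-Chicago and light travelling at about two-thirds of c in optical fiber (both numbers are approximations, not from the article):

```python
# Speed-of-light sanity check on the NYC -> Chicago latency figure.
distance_km = 1150          # rough NYC-Chicago great-circle distance (assumed)
c_vacuum = 299_792.458      # km/s
c_fiber = c_vacuum / 1.5    # light travels ~2/3 c in optical fiber

latency_vacuum_ms = distance_km / c_vacuum * 1000
latency_fiber_ms = distance_km / c_fiber * 1000
print(f"vacuum: {latency_vacuum_ms:.1f} ms, fiber: {latency_fiber_ms:.1f} ms")
# ~3.8 ms in vacuum, ~5.8 ms in fiber -- so the observed 5-7 ms gap is
# about what one-way fiber transit predicts, and well under the ~16 ms
# of drift the article says default NTP client settings can allow.
```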
ntp  time  synchronization  spy  trading  stocks  low-latency  clocks  internet 
june 2013
Care and Feeding of Large Scale Graphite Installations [slides]
good docs for large-scale graphite use: 'Tip and tricks of using and scaling graphite. First presented at DevOpsDays Austin Texas 2013-05-01'
graphite  devops  ops  metrics  dashboards  sysadmin 
june 2013
Cities 05
from Atelier Olschinsky. 'Fine Art Print on Hahnemuehle Photo Rag Bright White 310g; Limited Edition / Numbered and signed by the artist'
art  graphics  cities  prints  want  via:bdif 
june 2013
The network is reliable
Aphyr and Peter Bailis collect an authoritative list of known network partition and outage cases from published post-mortem data:

This post is meant as a reference point -- to illustrate that, according to a wide range of accounts, partitions occur in many real-world environments. Processes, servers, NICs, switches, local and wide area networks can all fail, and the resulting economic consequences are real. Network outages can suddenly arise in systems that are stable for months at a time, during routine upgrades, or as a result of emergency maintenance. The consequences of these outages range from increased latency and temporary unavailability to inconsistency, corruption, and data loss. Split-brain is not an academic concern: it happens to all kinds of systems -- sometimes for days on end. Partitions deserve serious consideration.


I honestly cannot understand people who didn't think this was the case. 3 years reading (and occasionally auto-cutting) Amazon's network-outage tickets as part of AWS network monitoring will do that to you I guess ;)
networking  outages  partition  cap  failure  fault-tolerance 
june 2013
Don’t Overuse Mocks
hooray, sanity from the Google Testing blog. this has been a major cause of pain in the past, dealing with tricky rewrites of mock-heavy unit test code
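The blog's point is easy to illustrate: assert on outcomes rather than on call sequences, using a simple fake instead of a mock. A toy sketch with a hypothetical `InMemoryUserStore` (my example, not from the post):

```python
class InMemoryUserStore:
    """A fake: real behaviour, trivial implementation -- survives refactoring,
    unlike interaction-based mocks that encode the exact call sequence."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

def rename_user(store, user_id, new_name):
    """Code under test: rename a user if they exist."""
    if store.get(user_id) is not None:
        store.save(user_id, new_name)

# State-based test: we check what the system ended up doing, not how.
store = InMemoryUserStore()
store.save(1, "alice")
rename_user(store, 1, "bob")
assert store.get(1) == "bob"
```

A mock-based version of the same test would typically verify that `get` was called before `save` with specific arguments, and break the moment `rename_user` is refactored.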
mocking  testing  tests  google  mocks  unit-testing 
may 2013
Hermetic Servers
'What is a Hermetic Server? The short definition would be a “server in a box”. If you can start up the entire server on a single machine that has no network connection AND the server works as expected, you have a hermetic server! This is a special case of the more general “hermetic” concept which applies to an isolated system not necessarily on a single machine.

Why is it useful to have a hermetic server? Because if your entire [system under test] is composed of hermetic servers, it could all be started on a single machine for testing; no network connection necessary! The single machine could be a physical or virtual machine.'

These also qualify as "fakes", using the terminology Martin Fowler suggests at http://martinfowler.com/bliki/TestDouble.html , I think
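A minimal sketch of the idea in Python: an in-process fake backend bound to localhost, so the whole system under test runs on one machine with no external network. (The handler and endpoint here are my invention, purely illustrative.)

```python
import http.server
import threading
import urllib.request

class FakeBackend(http.server.BaseHTTPRequestHandler):
    """A tiny in-process stand-in for a real backend service."""
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port -- no fixed-port collisions between tests.
server = http.server.HTTPServer(("127.0.0.1", 0), FakeBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = resp.read()
server.shutdown()
print(payload)
```

Everything the test touches lives in one process: start it, hit it, tear it down -- a "server in a box".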
google  testing  hermetic-servers  test  test-doubles  unit-testing 
may 2013
incompetent error-handling code in the mongo-java-driver project
an unexplained invocation of Math.random() in the exception handling block of this MongoDB java driver class causes roflscale lols in the github commit notes. http://stackoverflow.com/a/16833798 has more explanation.
github  commits  mongodb  webscale  roflscale  random  daily-wtf  wtf 
may 2013
_Dynamic Histograms: Capturing Evolving Data Sets_ [pdf]

Currently, histograms are static structures: they are created from scratch periodically and their creation is based on looking at the entire data distribution as it exists each time. This creates problems, however, as data stored in DBMSs usually varies with time. If new data arrives at a high rate and old data is likewise deleted, a histogram’s accuracy may deteriorate fast as the histogram becomes older, and the optimizer’s effectiveness may be lost. Hence, how often a histogram is reconstructed becomes very critical, but choosing the right period is a hard problem, as the following trade-off exists: If the period is too long, histograms may become outdated. If the period is too short, updates of the histogram may incur a high overhead.

In this paper, we propose what we believe is the most elegant solution to the problem, i.e., maintaining dynamic histograms within given limits of memory space. Dynamic histograms are continuously updateable, closely tracking changes to the actual data. We consider two of the best static histograms proposed in the literature [9], namely V-Optimal and Compressed, and modify them. The new histograms are naturally called Dynamic V-Optimal (DVO) and Dynamic Compressed (DC). In addition, we modified V-Optimal’s partition constraint to create the Static Average-Deviation Optimal (SADO) and Dynamic Average-Deviation Optimal (DADO) histograms.


(via d2fn)
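To make the "continuously updateable within a memory budget" idea concrete, here's a toy streaming histogram that splits its heaviest bucket and merges the lightest adjacent pair to stay under a fixed bucket count. This is loosely in the spirit of the paper, not the actual DVO/DC algorithms:

```python
class DynamicHistogram:
    """Toy streaming histogram under a fixed bucket budget: split overfull
    buckets, merge the lightest adjacent pair when over budget."""

    def __init__(self, lo, hi, max_buckets=8):
        # each bucket is [left_edge, right_edge, count]
        self.buckets = [[lo, hi, 0]]
        self.max_buckets = max_buckets

    def insert(self, x):
        for i, b in enumerate(self.buckets):
            if b[0] <= x < b[1]:
                b[2] += 1
                if b[2] > 2 * self._target():
                    self._split(i)
                return

    def _target(self):
        total = sum(b[2] for b in self.buckets)
        return max(1, total / self.max_buckets)

    def _split(self, i):
        # halve the bucket at its midpoint, assuming a roughly uniform interior
        b = self.buckets[i]
        mid = (b[0] + b[1]) / 2
        self.buckets[i:i + 1] = [[b[0], mid, b[2] // 2],
                                 [mid, b[1], b[2] - b[2] // 2]]
        if len(self.buckets) > self.max_buckets:
            # merge the adjacent pair with the smallest combined count
            j = min(range(len(self.buckets) - 1),
                    key=lambda k: self.buckets[k][2] + self.buckets[k + 1][2])
            l, r = self.buckets[j], self.buckets[j + 1]
            self.buckets[j:j + 2] = [[l[0], r[1], l[2] + r[2]]]
```

Splitting assumes values are spread evenly inside a bucket -- exactly the kind of approximation the paper's partition constraints are designed to do better than.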
via:d2fn  histograms  streaming  big-data  data  dvo  dc  sado  dado  dynamic-histograms  papers  toread 
may 2013
Videos from the Continuous Delivery track at QCon SF 2012
Think we'll be watching some of these in work soon -- Jez Humble's talk (the last one) in particular looks good:

Amazon, Etsy, Google and Facebook are all primarily software development shops which command enormous amounts of resources. They are, to use Christopher Little’s metaphor, unicorns. How can the rest of us adopt continuous delivery? That’s the subject of my talk, which describes four case studies of organizations that adopted continuous delivery, with varying degrees of success.

One of my favourites – partly because it’s embedded software, not a website – is the story of HP’s LaserJet Firmware team, who re-architected their software around the principles of continuous delivery. People always want to know the business case for continuous delivery: the FutureSmart team provide one in the book they wrote that discusses how they did it.
continuous-integration  continuous-delivery  build  release  process  dev  deployment  videos  qcon  towatch  hp 
may 2013
Casalattico - Wikipedia, the free encyclopedia
How weird. Many of the well-known chippers in Ireland are run by families from the same comune in Italy.
In the late 19th and early 20th century a significant number of young people left Casalattico to work in Ireland, with many founding chip shops there. Most second, third and fourth generation Irish-Italians can trace their lineage back to the municipality, with names such as Magliocco, Fusco, Marconi, Borza, Macari, Rosato and Forte being the most common. Although the Forte family actually originates from the village of Mortale, renamed Mon Forte due to the achievements of the Forte family. It is believed that up to 8,000 Irish-Italians have ancestors from Casalattico. The village is home to an Irish festival every summer to celebrate the many families that moved from there to Ireland.


(via JK)
rome  lazio  italy  ireland  chip-shops  chippers  history  emigration  casalattico  work  irish-italians  via:jk 
may 2013