jm + cloudflare   10

How and why the leap second affected Cloudflare DNS
The root cause of the bug that affected our DNS service was the belief that time cannot go backwards. In our case, some code assumed that the difference between two times would always be, at worst, zero. RRDNS is written in Go and uses Go’s time.Now() function to get the time. Unfortunately, this function does not guarantee monotonicity. Go currently doesn’t offer a monotonic time source.


So the clock went "backwards": s1 - s2 returned a negative duration, and the code couldn't handle it (a little-known and infrequent failure case).
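
A negative delta is poisonous if it ends up somewhere like rand.Int63n, which panics on any non-positive argument. A minimal defensive sketch in Go (hypothetical code, not RRDNS's actual fix):

    package main

    import (
        "math/rand"
        "time"
    )

    // safeElapsed returns the time since start, clamped at zero so a
    // wall-clock step (e.g. an un-smeared leap second) can never
    // produce a negative duration.
    func safeElapsed(start time.Time) time.Duration {
        elapsed := time.Now().Sub(start)
        if elapsed < 0 {
            elapsed = 0 // clock went "backwards"; don't propagate it
        }
        return elapsed
    }

    func main() {
        start := time.Now()
        // ... do some work ...
        if d := safeElapsed(start); d > 0 {
            _ = rand.Int63n(int64(d)) // never called with a non-positive value
        }
    }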

Part of the root cause here is cultural -- Google has solved the leap-second problem internally through leap smearing, and Go seems to be fundamentally a Google product at heart.

The easiest fix in general in the "outside world" is "ntpd -x", which slews the clock instead of stepping it -- a form of smearing. It looks like AWS are leap smearing internally (https://aws.amazon.com/blogs/aws/look-before-you-leap-the-coming-leap-second-and-aws/), but it is a shame they aren't making this a standard part of services running on top of AWS and a feature of the AWS NTP fleet.
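
For illustration, the arithmetic behind a linear smear -- a toy sketch only, not ntpd's or Google's exact algorithm, and the 24-hour window below is an arbitrary assumption:

    package main

    import (
        "fmt"
        "time"
    )

    // smearOffset returns how much of an inserted leap second has been
    // absorbed at time t, phased in linearly over a window ending at the
    // leap. A smearing clock reports t minus this offset, so it slews
    // smoothly instead of stepping back a full second.
    func smearOffset(t, leap time.Time, window time.Duration) time.Duration {
        elapsed := t.Sub(leap.Add(-window))
        switch {
        case elapsed <= 0:
            return 0 // before the smear window: unadjusted
        case elapsed >= window:
            return time.Second // after the leap: the full second is absorbed
        default:
            return time.Duration(int64(time.Second) * int64(elapsed) / int64(window))
        }
    }

    func main() {
        leap := time.Date(2017, 1, 1, 0, 0, 0, 0, time.UTC)
        t := leap.Add(-6 * time.Hour)
        fmt.Println(smearOffset(t, leap, 24*time.Hour)) // 750ms: 18h into a 24h smear
    }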
ntp  time  leap-seconds  fail  cloudflare  rrdns  go  golang  dns  leap-smearing  ntpd  aws 
january 2017 by jm
Cloudflare on Tor
quite a reasonable position, I think
tor  cloudflare  abuse  anonymity  captchas 
march 2016 by jm
jgc on Cloudflare's log pipeline
Cloudflare are running a 40-machine, 50TB Kafka cluster, ingesting at 15 Gbps, for log processing. Also: Go producers/consumers, Cap'n Proto as the wire format, and CitusDB/Postgres to store rolled-up analytics output. Also using the Space-Saving (top-k) and HyperLogLog (cardinality estimation) algorithms.
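
For reference, the Space-Saving idea in a few lines of Go -- a minimal sketch of the algorithm itself, not Cloudflare's code (real implementations use a stream-summary structure instead of this O(k) eviction scan):

    package main

    import "fmt"

    // SpaceSaver tracks approximate counts for the top-k items in a
    // stream using at most k counters.
    type SpaceSaver struct {
        k      int
        counts map[string]int64
    }

    func NewSpaceSaver(k int) *SpaceSaver {
        return &SpaceSaver{k: k, counts: make(map[string]int64, k)}
    }

    func (s *SpaceSaver) Observe(item string) {
        if _, ok := s.counts[item]; ok {
            s.counts[item]++
            return
        }
        if len(s.counts) < s.k {
            s.counts[item] = 1
            return
        }
        // Evict the current minimum; the newcomer inherits min+1,
        // which bounds how far any count can be overestimated.
        minItem, minCount := "", int64(-1)
        for it, c := range s.counts {
            if minCount < 0 || c < minCount {
                minItem, minCount = it, c
            }
        }
        delete(s.counts, minItem)
        s.counts[item] = minCount + 1
    }

    func main() {
        ss := NewSpaceSaver(2)
        for _, w := range []string{"a", "b", "a", "c", "a", "c"} {
            ss.Observe(w)
        }
        fmt.Println(ss.counts) // approximate top-2 counts
    }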
logs  cloudflare  kafka  go  capnproto  architecture  citusdb  postgres  analytics  streaming 
june 2015 by jm
CFSSL
Cloudflare's open-source CA/PKI toolkit
cloudflare  pki  ca  ssl  tls  ops 
june 2015 by jm
How to receive a million packets per second on Linux

To sum up, if you want perfect performance, you need to:
Ensure traffic is distributed evenly across many RX queues and SO_REUSEPORT processes (see the sketch after this list). In practice, the load is usually well distributed as long as there are a large number of connections (or flows).
You need to have enough spare CPU capacity to actually pick up the packets from the kernel.
To make things harder, both RX queues and receiver processes should be on a single NUMA node.
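
The SO_REUSEPORT piece looks roughly like this in today's Go (net.ListenConfig arrived in Go 1.11, well after this article, and the port is arbitrary) -- run N copies and the kernel spreads incoming flows across them:

    package main

    import (
        "context"
        "log"
        "net"
        "syscall"

        "golang.org/x/sys/unix"
    )

    func main() {
        lc := net.ListenConfig{
            Control: func(network, address string, c syscall.RawConn) error {
                var sockErr error
                if err := c.Control(func(fd uintptr) {
                    // SO_REUSEPORT lets many processes bind the same UDP
                    // port; the kernel hashes incoming flows across them.
                    sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET,
                        unix.SO_REUSEPORT, 1)
                }); err != nil {
                    return err
                }
                return sockErr
            },
        }
        conn, err := lc.ListenPacket(context.Background(), "udp", ":4321")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        buf := make([]byte, 65536)
        for {
            n, _, err := conn.ReadFrom(buf)
            if err != nil {
                log.Fatal(err)
            }
            _ = buf[:n] // process the packet
        }
    }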
linux  networking  performance  cloudflare  packets  numa  so_reuseport  sockets  udp 
june 2015 by jm
A Tour Inside CloudFlare's Latest Generation Servers
great transparency from CloudFlare! Looking at their current 4th-gen rackmount server buildout -- now with HP after Dell and ZT. Shitloads of SSDs for lower power and greater predictability in failure rates. 128GB RAM. Consistent hashing to address stores instead of RAID. Sandy Bridge CPUs. Solarflare SFC9020 10Gbps network cards. This is really impressive openness for a high-scale custom datacenter server platform...
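
The "consistent hashing instead of RAID" point is the interesting design choice: keys map to positions on a hash ring, so losing a disk only remaps the keys that lived on it. A generic sketch of the technique (hypothetical disk names; CloudFlare's actual scheme isn't described in the post):

    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
    )

    type Ring struct {
        points []uint32
        disks  map[uint32]string
    }

    func hash32(s string) uint32 {
        h := fnv.New32a()
        h.Write([]byte(s))
        return h.Sum32()
    }

    // NewRing places vnodes points per disk on the ring so the keyspace
    // is spread evenly and a failed disk's share is split among the rest.
    func NewRing(disks []string, vnodes int) *Ring {
        r := &Ring{disks: make(map[uint32]string)}
        for _, d := range disks {
            for i := 0; i < vnodes; i++ {
                p := hash32(fmt.Sprintf("%s#%d", d, i))
                r.points = append(r.points, p)
                r.disks[p] = d
            }
        }
        sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
        return r
    }

    // Locate returns the disk owning key: the first ring point at or
    // after the key's hash, wrapping around at the end.
    func (r *Ring) Locate(key string) string {
        h := hash32(key)
        i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
        if i == len(r.points) {
            i = 0
        }
        return r.disks[r.points[i]]
    }

    func main() {
        ring := NewRing([]string{"ssd0", "ssd1", "ssd2"}, 64)
        fmt.Println(ring.Locate("some-cached-object"))
    }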
datacenter  cloudflare  hardware  rackmount  ssds  intel 
july 2013 by jm
CloudFlare, PRISM, and Securing SSL Ciphers
Matthew Prince of CloudFlare has an interesting theory on the NSA's capabilities:
It is not inconceivable that the NSA has data centers full of specialized hardware optimized for SSL key breaking. According to data shared with us from a survey of SSL keys used by various websites, the majority of web companies were using 1024-bit SSL ciphers and RSA-based encryption through 2012. Given enough specialized hardware, it is within the realm of possibility that the NSA could within a reasonable period of time reverse engineer 1024-bit SSL keys for certain web companies. If they'd been recording the traffic to these web companies, they could then use the broken key to go back and decrypt all the transactions.

While this seems like a compelling theory, ultimately, we remain skeptical this is how the PRISM program described in the slides actually works. Cracking 1024-bit keys would be a big deal and likely involve some cutting-edge cryptography and computational power, even for the NSA. The largest SSL key that is known to have been broken to date is 768 bits long. While that was 4 years ago, and the NSA undoubtedly has some of the best cryptographers in the world, it's still a considerable distance from 768 bits to 1024 bits -- especially given the slide suggests Microsoft's key would have to have been broken back in 2007.

Moreover, the slide showing the dates on which "collection began" for various companies also puts the cost of the program at $20M/year. That may sound like a lot of money, but it is not for an undertaking like this. Just the power necessary to run the server farm needed to break a 1024-bit key would likely cost in excess of $20M/year. While the NSA may have broken 1024-bit SSL keys as part of some other program, if the slide is accurate and complete, we think it's highly unlikely they did so as part of the PRISM program. A not particularly glamorous alternative theory is that the NSA didn't break the SSL key but instead just cajoled rogue employees at firms with access to the private keys -- whether the companies themselves, partners they'd shared the keys with, or the certificate authorities who issued the keys in the first place -- to turn them over. That very well may be possible on a budget of $20M/year.

[....]
Google is a notable anomaly. The company uses a 1024-bit key, but, unlike all the other companies listed above, rather than using a default cipher suite based on the RSA encryption algorithm, they instead prefer the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) cipher suites. Without going into the technical details, a key difference of ECDHE is that it uses a different private key for each user's session. This means that if the NSA, or anyone else, is recording encrypted traffic, they cannot break one private key and read all historical transactions with Google. The NSA would have to break the private key generated for each session, which, in Google's case, is unique to each user and regenerated for each user at least every 28 hours.

While ECDHE arguably already puts Google at the head of the pack for web transaction security, to further augment security Google has publicly announced that they will be increasing their key length to 2048-bit by the end of 2013. Assuming the company continues to prefer the ECDHE cipher suites, this will put Google at the cutting edge of web transaction security.


ECDHE with 2048-bit keys sounds like the way to go, and CloudFlare now support that too.
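
In practice, "prefer ECDHE" on a server looks something like this in Go's crypto/tls (an illustrative sketch, not Google's or CloudFlare's actual configuration):

    package main

    import "crypto/tls"

    func main() {
        cfg := &tls.Config{
            MinVersion:               tls.VersionTLS12,
            PreferServerCipherSuites: true,
            // Put ECDHE suites first: each session gets an ephemeral key,
            // so recorded traffic can't be decrypted later by breaking
            // the certificate's long-term RSA key.
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            },
            CurvePreferences: []tls.CurveID{tls.CurveP256},
        }
        _ = cfg // pass to tls.Listen or http.Server{TLSConfig: cfg}
    }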
prism  security  nsa  cloudflare  ssl  tls  ecdhe  elliptic-curve  crypto  rsa  key-lengths 
june 2013 by jm
One of CloudFlare's upstream providers on the "death of the internet" scare-mongering
Having a bad day on the Internet is nothing new. These are the types
of events we deal with on a regular basis, and most large network
operators are very good at responding quickly to deal with situations like
this. In our case, we worked with Cloudflare to quickly identify the
attack profile, rolled out global filters on our network to limit the
attack traffic without adversely impacting legitimate users, and worked
with our other partner networks (like NTT) to do the same. If the attacks
had stopped here, nobody in the "mainstream media" would have noticed, and
it would have been just another fun day for a few geeks on the Internet.

The next part is where things got interesting, and is the part that nobody
outside of extremely technical circles has actually bothered to try and
understand yet. After attacking Cloudflare and their upstream Internet
providers directly stopped having the desired effect, the attackers turned
to any other interconnection point they could find, and stumbled upon
Internet Exchange Points like LINX (in London), AMS-IX (in Amsterdam), and
DE-CIX (in Frankfurt), three of the largest IXPs in the world. An IXP is
an "interconnection fabric", or essentially just a large switched LAN,
which acts as a common meeting point for different networks to connect and
exchange traffic with each other. One downside to the way this
architecture works is that there is a single big IP block used at each of
these IXPs, where every network that interconnects is given 1 IP address,
and this IP block CAN be globally routable. When the attackers stumbled
upon this, probably by accident, it resulted in a lot of bogus traffic
being injected into the IXP fabrics in an unusual way, until the IXP
operators were able to work with everyone to make certain the IXP IP
blocks weren't being globally re-advertised.

Note that the vast majority of global Internet traffic does NOT travel
over IXPs, but rather goes via direct private interconnections between
specific networks. The IXP traffic represents more of the "long tail" of
Internet traffic exchange, a larger number of smaller networks, which
collectively still adds up to be a pretty big chunk of traffic. So, what
you actually saw in this attack was a larger number of smaller networks
being affected by something which was a completely unrelated and
unintended side-effect of the actual attacks, and thus *poof* you have the
recipe for a lot of people talking about it. :)

Hopefully that clears up a bit of the situation.
bandwidth  internet  gizmodo  traffic  cloudflare  ddos  hacking 
march 2013 by jm
