
[net-next,14/14] tcp_bbr: add BBR congestion control
This commit implements a new TCP congestion control algorithm: BBR
(Bottleneck Bandwidth and RTT). A detailed description of BBR will be
published in ACM Queue, Vol. 14 No. 5, September-October 2016, as
"BBR: Congestion-Based Congestion Control".

BBR has significantly increased throughput and reduced latency for
connections on Google's internal backbone networks and google.com and
YouTube Web servers.

BBR requires only changes on the sender side, not in the network or
the receiver side. Thus it can be incrementally deployed on today's
Internet, or in datacenters. [....]

Signed-off-by: Van Jacobson <vanj@google.com>
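The patch adds BBR as a pluggable congestion-control module (net/ipv4/tcp_bbr.c). As a rough sketch of the model the name refers to, here is some illustrative TypeScript: a windowed max filter over measured delivery rate and a windowed min filter over measured RTT, whose product gives the bandwidth-delay product the sender paces around. This is not the kernel implementation, which also cycles pacing gains through STARTUP/DRAIN/PROBE_BW/PROBE_RTT states; the names and window sizes below are illustrative only.

```typescript
// Illustrative model only: BBR-style estimators for bottleneck bandwidth
// (max-filtered delivery rate) and propagation RTT (min-filtered RTT).
interface Sample {
  deliveredBps: number; // delivery rate measured over one round trip, bits/s
  rttMs: number;        // RTT measured for the same interval, ms
}

class BbrEstimator {
  private bwSamples: number[] = [];
  private rttSamples: number[] = [];

  // Keep a sliding window of recent samples (window size is illustrative).
  update(s: Sample, window = 10): void {
    this.bwSamples.push(s.deliveredBps);
    this.rttSamples.push(s.rttMs);
    if (this.bwSamples.length > window) this.bwSamples.shift();
    if (this.rttSamples.length > window) this.rttSamples.shift();
  }

  // Bottleneck bandwidth: take the max, so transient slowdowns
  // don't drag the estimate down.
  get btlBw(): number { return Math.max(...this.bwSamples); }

  // Propagation RTT: take the min, so queueing delay doesn't inflate it.
  get rtProp(): number { return Math.min(...this.rttSamples); }

  // Bandwidth-delay product in bytes: roughly the amount of data in flight
  // that keeps the pipe full without building a queue at the bottleneck.
  get bdpBytes(): number {
    return (this.btlBw / 8) * (this.rtProp / 1000);
  }
}
```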
google  linux  tcp  ip  congestion-control  bufferbloat  patches  algorithms  rtt  bandwidth  youtube  via:bradfitz 
september 2016 by jm
toxy
toxy is a fully programmatic and hackable HTTP proxy for simulating server failure scenarios and unexpected network conditions. It was mainly designed for fuzzing/evil testing purposes, where toxy becomes particularly useful for exercising the fault-tolerance and resiliency capabilities of a system, especially in service-oriented architectures, where it can act as an intermediate proxy between services.

toxy lets you plug in poisons, optionally filtered by rules, which can intercept and alter the HTTP flow as you need, performing multiple evil actions in the middle of that process, such as limiting bandwidth, delaying TCP packets, injecting network jitter latency, or replying with a custom error or status code.
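A minimal sketch of what that looks like in practice (TypeScript using require(), since toxy ships no type definitions; the poison and rule names follow the project README, but the exact option fields should be treated as assumptions, and the upstream URL is hypothetical):

```typescript
// toxy has no bundled type definitions, so require() it untyped.
const toxy = require('toxy');

const proxy = toxy();

// Forward incoming traffic to the service under test (hypothetical URL).
proxy.forward('http://localhost:8080');

// Global poison: add up to 500 ms of jittered latency, applied to roughly
// 25% of requests so failures are intermittent rather than constant.
proxy
  .poison(toxy.poisons.latency({ jitter: 500 }))
  .rule(toxy.rules.probability(25));

// Route-scoped poison: throttle download responses to ~1 KB/s.
proxy
  .get('/download/*')
  .poison(toxy.poisons.bandwidth({ bps: 1024 }));

// Match the remaining routes and start the proxy.
proxy.all('/*');
proxy.listen(3000);
```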
toxy  proxies  proxy  http  mitm  node.js  soa  network  failures  latency  slowdown  jitter  bandwidth  tcp 
august 2015 by jm
Having Your Cake and Eating It Too: Jointly Optimal Erasure Codes for I/O, Storage, and Network-bandwidth | USENIX
Erasure codes, such as Reed-Solomon (RS) codes, are increasingly being deployed as an alternative to data-replication for fault tolerance in distributed storage systems. While RS codes provide significant savings in storage space, they can impose a huge burden on the I/O and network resources when reconstructing failed or otherwise unavailable data. A recent class of erasure codes, called minimum-storage-regeneration (MSR) codes, has emerged as a superior alternative to the popular RS codes, in that it minimizes network transfers during reconstruction while also being optimal with respect to storage and reliability. However, existing practical MSR codes do not address the increasingly important problem of I/O overhead incurred during reconstructions, and are, in general, inferior to RS codes in this regard. In this paper, we design erasure codes that are simultaneously optimal in terms of I/O, storage, and network bandwidth. Our design builds on top of a class of powerful practical codes, called the product-matrix-MSR codes. Evaluations show that our proposed design results in a significant reduction in the number of I/Os consumed during reconstructions (a 5× reduction for typical parameters), while retaining optimality with respect to storage, reliability, and network bandwidth.
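For intuition on the repair cost the paper targets, here is a rough comparison under assumed parameters (illustrative only, not the paper's evaluation setup):

```typescript
// Illustrative repair-cost arithmetic (assumed parameters, not from the paper).
const blockMB = 256;        // size of one stored block
const k = 10;               // data blocks per stripe, e.g. RS(14,10)
// (4 parity blocks would give 1.4x storage overhead vs 3x for replication)

// 3-way replication: rebuilding a lost block means copying one surviving replica.
const replicationRepairMB = blockMB;              // 256 MB read and transferred

// Classic Reed-Solomon: rebuilding one block means reading k whole blocks
// from other nodes and recomputing, i.e. a k-fold I/O and network blow-up.
const rsRepairMB = k * blockMB;                   // 2560 MB read and transferred

console.log(`replication repair: ${replicationRepairMB} MB`);
console.log(`RS repair: ${rsRepairMB} MB (${rsRepairMB / replicationRepairMB}x)`);
// MSR codes bring the *network* transfer close to the information-theoretic
// minimum; this paper's contribution is doing that while also minimizing disk I/O.
```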
erasure-coding  reed-solomon  compression  reliability  reconstruction  replication  fault-tolerance  storage  bandwidth  usenix  papers 
february 2015 by jm
TCP incast
a catastrophic TCP throughput collapse that occurs as the number of storage servers sending data to a client increases past the ability of an Ethernet switch to buffer packets. In a clustered file system, for example, a client application requests a data block striped across several storage servers, issuing the next data block request only when all servers have responded with their portion (Figure 1). This synchronized request workload can result in packets overfilling the buffers on the client's port on the switch, resulting in many losses. Under severe packet loss, TCP can experience a timeout that lasts a minimum of 200ms, determined by the TCP minimum retransmission timeout (RTOmin).
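A back-of-the-envelope illustration of why a 200 ms RTOmin is so damaging here (the numbers below are assumptions for illustration, not from the article):

```typescript
// Assumed scenario: 32 servers each return one 256 KB block of a striped
// request over a 1 Gbps client link; datacenter RTTs are well under 1 ms.
const servers = 32;
const blockBytes = 256 * 1024;
const linkBps = 1e9;

const totalBits = servers * blockBytes * 8;
const transferMs = (totalBits / linkBps) * 1000;   // ~67 ms if nothing is lost
const rtoMinMs = 200;                              // TCP minimum retransmission timeout

// If even one server's tail packets are dropped at the switch, the whole
// barrier stalls for RTOmin before the client can issue the next block request.
const goodputGbps = totalBits / ((transferMs + rtoMinMs) / 1000) / 1e9;
console.log(`ideal link: 1 Gbps; with one timeout per barrier: ~${goodputGbps.toFixed(2)} Gbps`);
```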
incast  networking  performance  tcp  bandwidth  buffering  switch  ethernet  capacity 
november 2014 by jm
TCP incast vs Riak
An extremely congested local network segment causes the "TCP incast" throughput collapse problem -- packet loss occurs, and TCP throughput collapses as a side effect. So far, this is pretty unsurprising, and anyone designing a service needs to keep bandwidth requirements in mind.

However it gets worse with Riak. Due to a bug, this becomes a serious issue for all clients: the Erlang network distribution port buffers fill up in turn, and the Riak KV vnode process (in its entirety) will be descheduled and 'cannot answer any more queries until the A-to-B network link becomes uncongested.'

This is where EC2's fully-uncontended-1:1-network compute cluster instances come in handy, btw. ;)
incast  tcp  networking  bandwidth  riak  architecture  erlang  buffering  queueing 
february 2014 by jm
Why YouTube buffers: The secret deals that make -- and break -- online video
Should ISPs be required to ensure they have sufficient upstream bandwidth to video sites like YouTube and Netflix?
"Verizon has chosen to sell its customers a product [Netflix] that they hope those customers don't actually use," Schaeffer said. "And when customers use it and request movies, they have not ensured there is adequate connectivity to get that video content back to their customers."
netflix  youtube  streaming  video  isps  net-neutrality  peering  comcast  bandwidth  upstream 
july 2013 by jm
Netflix ISP Speed Index for Ireland
Via Mulley. Magnet doing well, with UPC coming second; UPC have dropped a fair bit in the past month. Would love to see it broken down by region...
upc  ireland  isps  speed  bandwidth  netflix  broadband  magnet  eircom 
april 2013 by jm
One of CloudFlare's upstream providers on the "death of the internet" scare-mongering
Having a bad day on the Internet is nothing new. These are the types
of events we deal with on a regular basis, and most large network
operators are very good at responding quickly to deal with situations like
this. In our case, we worked with Cloudflare to quickly identify the
attack profile, rolled out global filters on our network to limit the
attack traffic without adversely impacting legitimate users, and worked
with our other partner networks (like NTT) to do the same. If the attacks
had stopped here, nobody in the "mainstream media" would have noticed, and
it would have been just another fun day for a few geeks on the Internet.

The next part is where things got interesting, and is the part that nobody
outside of extremely technical circles has actually bothered to try and
understand yet. After attacking Cloudflare and their upstream Internet
providers directly stopped having the desired effect, the attackers turned
to any other interconnection point they could find, and stumbled upon
Internet Exchange Points like LINX (in London), AMS-IX (in Amsterdam), and
DEC-IX (in Frankfurt), three of the largest IXPs in the world. An IXP is
an "interconnection fabric", or essentially just a large switched LAN,
which acts as a common meeting point for different networks to connect and
exchange traffic with each other. One downside to the way this
architecture works is that there is a single big IP block used at each of
these IXPs, where every network who interconnects is given 1 IP address,
and this IP block CAN be globally routable. When the attackers stumbled
upon this, probably by accident, it resulted in a lot of bogus traffic
being injected into the IXP fabrics in an unusual way, until the IXP
operators were able to work with everyone to make certain the IXP IP
blocks weren't being globally re-advertised.

Note that the vast majority of global Internet traffic does NOT travel
over IXPs, but rather goes via direct private interconnections between
specific networks. The IXP traffic represents more of the "long tail" of
Internet traffic exchange, a larger number of smaller networks, which
collectively still adds up to be a pretty big chunk of traffic. So, what
you actually saw in this attack was a larger number of smaller networks
being affected by something which was a completely unrelated and
unintended side-effect of the actual attacks, and thus *poof* you have the
recipe for a lot of people talking about it. :)

Hopefully that clears up a bit of the situation.
bandwidth  internet  gizmodo  traffic  cloudflare  ddos  hacking 
march 2013 by jm
Netflix Beats BitTorrent’s Bandwidth
'For perhaps the first time in the internet’s history, the largest percentage of the net’s traffic is content that is paid for.' A great demo of how *good*, legit, for-pay services can beat out less usable, dodgy, but free ones (via Waxy)
via:waxy  piracy  bandwidth  bittorrent  internet  netflix  filesharing 
may 2011 by jm
