jm + packets

Spammergate: The Fall of an Empire
Featuring this interesting reactive-block evasion tactic:
In that screenshot, an RCM co-conspirator describes a technique in which the spammer seeks to open as many connections as possible between themselves and a Gmail server. This is done by deliberately configuring their own machine to send response packets extremely slowly, and in a fragmented manner, while constantly requesting more connections.
Then, when the Gmail server is almost ready to give up and drop all connections, the spammer suddenly sends as many emails as possible through the pile of connection tunnels. The receiving side is then overwhelmed with data and will quickly block the sender, but not before processing a large load of emails.


(via Tony Finch)
via:fanf  spam  antispam  gmail  blocklists  packets  tcp  networking 
march 2017 by jm
How both TCP and Ethernet checksums fail
At Twitter, a team had an unusual failure where corrupt data ended up in memcache. The root cause appears to have been a switch that was corrupting packets. Most packets were being dropped and the throughput was much lower than normal, but some were still making it through. The hypothesis is that occasionally the corrupt packets had valid TCP and Ethernet checksums. One "lucky" packet stored corrupt data in memcache. Even after the switch was replaced, the errors continued until the cache was cleared.


Yet another occurrence of this bug. When it happens, it tends to _really_ screw things up, because it's so rare -- we had monitoring for this in Amazon, and when it occurred, it was overwhelmingly due to host-level kernel/libc/RAM issues rather than anything in the network. Amazon design principles were to add app-level checksumming throughout, which of course catches the lot.
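A minimal sketch of that app-level checksumming pattern -- the wrap/unwrap helpers are illustrative, not Amazon's or Twitter's actual code. TCP's 16-bit checksum lets roughly one in 65,536 randomly-corrupted packets through undetected, so the idea is to wrap each value in a stronger CRC before it crosses the network and verify it on the way out:

```python
import struct
import zlib

def wrap(payload: bytes) -> bytes:
    # Prefix the payload with a CRC32 so end-to-end corruption is detectable.
    return struct.pack(">I", zlib.crc32(payload)) + payload

def unwrap(blob: bytes) -> bytes:
    # Verify and strip the checksum; fail loudly instead of caching garbage.
    expected = struct.unpack(">I", blob[:4])[0]
    payload = blob[4:]
    if zlib.crc32(payload) != expected:
        raise ValueError("app-level checksum mismatch: corrupt payload")
    return payload
```

On a cache write you'd store wrap(value) and unwrap() on read, so a corrupted blob raises an error instead of silently poisoning downstream consumers.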
networking  tcp  ip  twitter  ethernet  checksums  packets  memcached 
october 2015 by jm
How to receive a million packets per second on Linux

To sum up, if you want perfect performance, you need to:
- Ensure traffic is distributed evenly across many RX queues and SO_REUSEPORT processes (see the sketch after this list). In practice, the load is usually well distributed as long as there are a large number of connections (or flows).
- Have enough spare CPU capacity to actually pick up the packets from the kernel.
- To make things harder, keep both the RX queues and the receiver processes on a single NUMA node.
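A minimal sketch of the SO_REUSEPORT piece, assuming Linux and Python in place of the post's C test rig; the port, worker count, and CPU numbering are made up:

```python
import os
import socket

PORT = 4321            # hypothetical test port
NUM_WORKERS = 4        # ideally one per RX queue, all on the NIC's NUMA node

def worker(cpu: int) -> None:
    # Pin the process so the receiver and its RX queue share a NUMA node.
    os.sched_setaffinity(0, {cpu})
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_REUSEPORT lets all workers bind the same UDP port; the kernel
    # then hashes incoming flows across the bound sockets.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", PORT))
    while True:
        data, peer = sock.recvfrom(2048)  # the real test batches via recvmmsg()

for cpu in range(NUM_WORKERS):
    if os.fork() == 0:
        worker(cpu)
```

Because the kernel hashes whole flows to sockets, the load only spreads evenly once there are many distinct source address/port pairs -- exactly the caveat in the first bullet.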
linux  networking  performance  cloudflare  packets  numa  so_reuseport  sockets  udp 
june 2015 by jm
VPC Flow Logs
we are introducing Flow Logs for the Amazon Virtual Private Cloud.  Once enabled for a particular VPC, VPC subnet, or Elastic Network Interface (ENI), relevant network traffic will be logged to CloudWatch Logs for storage and analysis by your own applications or third-party tools.

You can create alarms that will fire if certain types of traffic are detected; you can also create metrics to help you identify trends and patterns. The captured information covers allowed and denied traffic (based on security group and network ACL rules), along with source and destination IP addresses, ports, the IANA protocol number, packet and byte counts, the time interval during which the flow was observed, and an action (ACCEPT or REJECT).
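A hedged sketch of enabling this with boto3 -- the VPC ID, log group name, and IAM role ARN are all placeholders to substitute with your own:

```python
import boto3

ec2 = boto3.client("ec2")

# Log ALL traffic (ACCEPT and REJECT) for a hypothetical VPC to CloudWatch Logs.
resp = ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234"],
    ResourceType="VPC",                 # or "Subnet" / "NetworkInterface"
    TrafficType="ALL",                  # ACCEPT, REJECT, or ALL
    LogGroupName="my-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
print(resp["FlowLogIds"])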
ec2  aws  vpc  logging  tracing  ops  flow-logs  network  tcpdump  packets  packet-capture 
june 2015 by jm
The Discovery of Apache ZooKeeper's Poison Packet - PagerDuty
Excellent deep dive into a production issue. Root causes: crappy error-handling code in ZooKeeper; lack of bounds checking in ZK; and a nasty kernel bug.
zookeeper  bugs  error-handling  bounds-checking  oom  poison-packets  pagerduty  packets  tcpdump  xen  aes  linux  kernel 
may 2015 by jm
Chris Baus: TCP_CORK: More than you ever wanted to know
Even with buffered streams, for optimal performance the application must be able to instruct the OS to forward all pending data when the stream has been flushed. The application does not know where packet boundaries reside, hence buffer flushes might not align on packet boundaries. TCP_CORK can pack data more effectively, because it has direct access to the TCP/IP layer. [..]

If you do use an application-level buffering and streaming mechanism (as Apache does), I highly recommend applying the TCP_NODELAY socket option, which disables Nagle's algorithm. All calls to write() will then result in immediate transfer of data.
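A minimal sketch of the corked-write pattern, assuming Linux (TCP_CORK is Linux-specific); the host and raw HTTP bytes are just for illustration:

```python
import socket

sock = socket.create_connection(("example.com", 80))

# Cork the socket: the kernel queues partial writes instead of sending them,
# so several small writes get packed into full-sized segments.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
sock.sendall(b"GET / HTTP/1.1\r\n")
sock.sendall(b"Host: example.com\r\nConnection: close\r\n\r\n")

# Uncork: flush everything queued above in as few packets as possible.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)

# The Apache-style alternative from the quote: buffer in userspace, then set
# TCP_NODELAY so Nagle's algorithm never delays the eventual write():
# sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```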
networking  tcp  via:nmaurer  performance  ip  tcp_cork  linux  syscalls  writev  tcp_nodelay  nagle  packets 
september 2014 by jm
DNS results now being manipulated in Turkey
Deep-packet inspection and rewriting of DNS packets destined for Google and OpenDNS servers. VPNs and DNSSEC up next!
turkey  twitter  dpi  dns  opendns  google  networking  filtering  surveillance  proxying  packets  udp 
march 2014 by jm
_Measuring Mobile Web Performance_ [slides]
The notable slide is #13, displaying a graph of HSDPA packet RTTs measured from a train; max RTT reaches 20,266ms. ouch
rtt  packets  latency  hsdpa  mobile  internet  trains  packet-loss 
june 2013 by jm
SSL/TLS overhead
'The TLS handshake has multiple variations, but let’s pick the most common one – anonymous client and authenticated server (the connections browsers use most of the time).' Works out to 4 packets, in addition to the TCP handshake's 3, and about 6.5k bytes on average.
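A rough way to see those extra round trips yourself, assuming Python's ssl module and any reachable TLS host; note the article predates TLS 1.3, which cuts the handshake to a single round trip:

```python
import socket
import ssl
import time

HOST = "example.com"  # any TLS host will do

# Plain TCP connect: the 3-packet SYN / SYN-ACK / ACK exchange.
t0 = time.perf_counter()
raw = socket.create_connection((HOST, 443))
tcp_done = time.perf_counter()

# TLS handshake on top: the extra packets the article counts.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(raw, server_hostname=HOST)
tls_done = time.perf_counter()

print(f"TCP handshake:  {(tcp_done - t0) * 1000:.1f} ms")
print(f"TLS handshake: {(tls_done - tcp_done) * 1000:.1f} ms")
tls.close()
```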
network  tls  ssl  performance  latency  speed  networking  internet  security  packets  tcp  handshake 
june 2013 by jm
