datacenter   4775


Tupperware: Efficient, reliable cluster management - Facebook Code
facebook  container  architecture  datacenter 
4 days ago by summerwind
Stop by our newly launched International Day website and celebrate the profession with us. Host a tour,…
Datacenter  from twitter_favs
8 days ago by tolkien
Inside Azure datacenter architecture with Mark Russinovich - BRK3060 - YouTube
Things like CosmosDB's multi-interface support are very typical of today's Microsoft — interesting.
microsoft  architecture  datacenter  technology  presentation  video 
23 days ago by summerwind
Don't Base Your Design on Vendor Marketing « blog
I’ve seen tons of STP- or MLAG-induced data center meltdowns. The first thing I would want to do in a new data center design would be to get rid of MLAG as much as possible. Most hypervisors work just fine without MLAG, and bare-metal Linux or Windows servers need MLAG only if you want to fully utilize all server uplinks. WAN edge routers should use routing with the fabric, and in some cases you can use the same trick with network services appliances.

End result: you MIGHT need MLAG to connect network services boxes that use static routing. Connect all of them to a single pair of ToR switches and get rid of MLAG everywhere else.

Even worse, an MLAG-based design limits scalability. Most data center switching vendors support at most two switches in an MLAG cluster, limiting an MLAG+STP fabric to two spine switches.
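The two-spine ceiling can be made concrete with some back-of-the-envelope math. A minimal sketch, assuming illustrative switch sizes (32 × 100G spine ports) — the numbers are not from the quoted post:

```python
# Sketch of the MLAG scalability ceiling: with MLAG+STP the "spine" layer
# is the MLAG pair, so fabric capacity is capped at two switches; a routed
# (ECMP) leaf-spine fabric scales by simply adding spines.
# Switch sizes below are illustrative assumptions (32 x 100G ports).

def fabric_capacity_tbps(num_spines: int, spine_ports: int = 32,
                         link_gbps: int = 100) -> float:
    """Aggregate leaf-to-spine capacity of a two-tier fabric, in Tbps."""
    return num_spines * spine_ports * link_gbps / 1000

mlag_cap = fabric_capacity_tbps(2)    # MLAG pair ceiling: 6.4 Tbps
ecmp_cap = fabric_capacity_tbps(16)   # routed fabric, 16 spines: 51.2 Tbps
print(mlag_cap, ecmp_cap)
```

Same leaf switches, same link speeds — the routed design's capacity grows linearly with spine count while the MLAG design is stuck at two.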

Regardless of how you implement them, large layer-2 fabrics are a disaster waiting to happen. With a VXLAN-over-IP fabric you at least have a stable L3-only transport fabric and keep the crazy bits at the network edge — the way the Internet has worked for ages.

Interestingly, most networking vendors have seen the light, dropped their proprietary or standard L2 fabrics and replaced them with VXLAN+EVPN. Raw VXLAN is not the best DCI technology.

Deep buffers are not a panacea. When Arista started promoting deep-buffer switches (they were the first vendor to ship the Jericho chipset; now you can buy it from Cisco as well), I asked a number of people familiar with real-life data center designs, ASIC internals, and TCP behavior whether you really need deep-buffer switches in data centers.

While the absolutely correct answer is always "it depends", in this particular case we got to "mostly NO". You need deep buffers when going from a low-latency/high-bandwidth environment to a high-latency/low-bandwidth one (the data center WAN edge); in the core of a data center fabric they do more harm than good. Another reason to connect DCI links at the fabric edge.
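The latency argument above follows from the bandwidth-delay product: the buffer needed to absorb a full-rate TCP flow grows with round-trip time. A rough sketch with illustrative numbers (not measurements from the post):

```python
# Bandwidth-delay product (BDP): roughly how many bytes a port must buffer
# to keep one full-rate TCP flow busy across a given round-trip time.
# The bandwidth/RTT figures below are illustrative assumptions.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes in flight for one flow at full rate: bandwidth * RTT / 8."""
    return bandwidth_bps * rtt_s / 8

# Intra-fabric hop: 100 Gbps link, ~10 microsecond RTT
fabric = bdp_bytes(100e9, 10e-6)   # ~125 KB -> shallow on-chip buffers suffice

# WAN edge: 10 Gbps uplink, ~50 ms RTT across a continent
wan = bdp_bytes(10e9, 50e-3)       # ~62.5 MB -> deep buffers actually help

print(f"fabric core BDP: {fabric / 1e3:.0f} KB")
print(f"WAN edge BDP: {wan / 1e6:.1f} MB")
```

The three-orders-of-magnitude gap is why deep buffers pay off at the WAN edge but mostly add queueing delay inside the fabric.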
datacenter  network  design 
27 days ago by some_hren
RFC 7938 - Use of BGP for Routing in Large-Scale Data Centers
Some network operators build and operate data centers that support over one hundred thousand servers. In this document, such data centers are referred to as "large-scale" to differentiate them from smaller infrastructures. Environments of this scale have a unique set of network requirements with an emphasis on operational simplicity and network stability. This document summarizes operational experience in designing and operating large-scale data centers using BGP as the only routing protocol. The intent is to report on a proven and stable routing design that could be leveraged by others in the industry.
network  leaf  spine  clos  design  architecture  datacenter 
6 weeks ago by curiousstranger
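The design RFC 7938 describes runs eBGP as the only routing protocol, with a distinct private ASN per ToR switch and a shared ASN per upper tier, drawn from the 32-bit private-use range (4200000000–4294967294). A minimal sketch of that numbering scheme — the specific ASN values and topology sizes are illustrative assumptions, not taken from the RFC:

```python
# Hedged sketch of an RFC 7938-style eBGP-only fabric: every leaf (ToR)
# gets a unique private 32-bit ASN, spines share one ASN for their tier,
# and each leaf peers with every spine. Concrete ASN values are
# illustrative; only the 4200000000+ private range comes from the RFCs.

SPINE_ASN = 4_200_000_000           # one ASN shared by the spine tier
LEAF_ASN_BASE = 4_200_000_100       # leaves numbered upward from here

def leaf_asn(rack: int) -> int:
    """Unique private ASN per ToR switch, one rack per leaf."""
    return LEAF_ASN_BASE + rack

def fabric_sessions(num_leaves: int, num_spines: int):
    """All eBGP sessions: (leaf ASN, spine ASN, spine index) tuples."""
    return [(leaf_asn(l), SPINE_ASN, s)
            for l in range(num_leaves)
            for s in range(num_spines)]

sessions = fabric_sessions(4, 2)    # 4 leaves x 2 spines -> 8 eBGP sessions
print(len(sessions))
```

Giving each ToR its own ASN makes BGP's built-in AS-path loop prevention do the work a dedicated IGP would otherwise handle.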
What’s cooler than being cool? Ice cold archive storage
We will be rolling out an entirely new archive class of Cloud Storage designed for long-term data retention. Available later this year at price points starting from $0.0012 per GB per month ($1.23 per TB per month), the archive class is intended for data that would probably otherwise be stored in tape archives.
archiving  preservation  storage  datacenter  google  blog-posts 
9 weeks ago by mikael
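The quoted prices are consistent with each other once you note the per-TB figure uses binary terabytes (1 TB = 1024 GB). A quick check:

```python
# Sanity-check the archive-class pricing quoted above: $0.0012 per
# GB-month, with 1 TB taken as 1024 GB, comes to about $1.23 per TB-month.

PRICE_PER_GB_MONTH = 0.0012
price_per_tb_month = PRICE_PER_GB_MONTH * 1024   # binary TB
print(f"${price_per_tb_month:.2f} per TB per month")
```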

