paxos   821


How Your Data is Stored, or, The Laws of the Imaginary Greeks
If you don’t work in computers, you probably haven’t spent much time thinking about how data gets stored on computers or in the cloud. I’m not talking about the physical ways that hard disks or…
algorithms  consensus  paxos  distributed  interesting 
6 days ago by djhworld
TLA+ in Practice and Theory, Part 1: The Principles of TLA+
programming  paxos 
25 days ago by geetarista
OpenReplica
OpenReplica provides availability, reliability and fault-tolerance in distributed systems. It is designed to maintain long-lived, critical state (such as configuration information) and to synchronize distributed components. It works as follows: you define a Python object that encapsulates the state you want replicated, along with methods that update it and synchronize the threads that access it. You give it to OpenReplica, your object gets geographically distributed automatically, and you receive a proxy through which multiple clients can access the replicated object transparently. To the rest of your application, your replicated object appears as a regular Python object when you use the provided proxy (a sketch of this pattern follows below).
distributed  synchronization  paxos  python 
4 weeks ago by euler
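The usage pattern described above is easy to picture in code. The sketch below is a minimal illustration of that pattern, not OpenReplica's actual API: the `Counter` class and the `openreplica_proxy` call are hypothetical names standing in for "define a plain object, hand it over, get a transparent proxy back".

```python
# A minimal sketch of the replicated-object pattern described above.
# NOTE: `openreplica_proxy` is a hypothetical stand-in, not OpenReplica's
# real API; consult the project's documentation for actual usage.

class Counter:
    """A plain Python object whose state would be replicated."""

    def __init__(self):
        self.value = 0

    def increment(self, amount=1):
        self.value += amount
        return self.value

    def read(self):
        return self.value

# Hypothetically, you hand the class to the system and get a proxy back:
#
#   counter = openreplica_proxy(Counter)
#   counter.increment()     # method calls are replicated under the hood
#   counter.read()          # the proxy behaves like a local object
```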
How Your Data is Stored, or, The Laws of the Imaginary Greeks
If you have made it this far, you have just learned some of the most challenging topics in distributed computing. Nearly every problem in datacenter- or planet-scale computing boils down to these issues: how do you get a bunch of computers, often distant from one another, connected via unreliable links, and prone to going down at unpredictable intervals, to nonetheless agree on what information they store?
In practice, four methods are commonly used:
Single data stores (the Pseudemoxian Hermit), where a single computer keeps its own copy, everyone wishing to use it must take turns, and the system is vulnerable to a single disaster; however, the system is strongly consistent, dead-simple, and all other systems are built on top of it.
Eventually consistent replication (the Fotan system), where each participant has their own (strongly consistent) store, everyone changes and reads their own copy, and updates are distributed to all of their fellows later on. This has the advantage of speed and simplicity, as well as robustness to many kinds of disaster, but lacks the strong-consistency guarantee that once you write, all future readers will know about it. This system is very useful in cases where that guarantee isn’t needed, such as distributing copies of images (or other bulky data) that will never change after they are written, and where freshness isn’t really required.
Quorum decisions (the Paxon system — and unlike the other examples, this one is actually called “Paxos” in normal CS conversations), where reads and writes involve getting a majority of the participants to agree. This provides strong consistency and robustness, but can be very slow, especially when spread out over a wide area (a toy quorum sketch follows this entry).
Master election (the Siranon system), where an expensive, strongly-consistent store is used to decide who is in charge of any subject for a time, and then that responsible party uses their own, smaller, strongly-consistent store to maintain the laws on that subject.
distributed  synchronization  concurrency  paxos 
4 weeks ago by euler
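The quorum method is the easiest of the four to miniaturize. Below is a toy sketch of majority-quorum reads and writes, with replicas modeled as in-memory dictionaries; it shows the overlap argument (any two majorities share at least one replica) but deliberately ignores the proposer coordination that full Paxos adds on top. All names here are mine, not the article's.

```python
# Toy majority-quorum reads and writes. Replicas are plain dicts; a
# real system replaces them with networked stores and adds Paxos-style
# coordination so concurrent writers can't pick conflicting versions.

REPLICAS = [dict() for _ in range(5)]
MAJORITY = len(REPLICAS) // 2 + 1          # 3 of 5

def quorum_write(key, value, version):
    """Store (version, value) on at least a majority of replicas."""
    acks = 0
    for replica in REPLICAS:
        stored_version = replica.get(key, (-1, None))[0]
        if version > stored_version:       # only newer versions land
            replica[key] = (version, value)
            acks += 1
    return acks >= MAJORITY

def quorum_read(key):
    """Read from a majority; the highest version wins, because any
    majority overlaps the majority that acknowledged the last write."""
    responses = [r[key] for r in REPLICAS[:MAJORITY] if key in r]
    if not responses:
        return None
    return max(responses, key=lambda vv: vv[0])[1]

quorum_write("law", "taxes are due in spring", version=1)
print(quorum_read("law"))                  # -> "taxes are due in spring"
```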
In search of a simple consensus algorithm
This post (1) covers an availability limitation of the Raft protocol, (2) demonstrates that modern implementations of Raft are subject to it, (3) describes an existing simpler approach to the problem of consensus, and (4) shows that a toy 500-line implementation of it has performance similar to etcd but doesn't suffer from Raft's performance penalty.
consensus  paxos  availability  actors 
8 weeks ago by mpm
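The "existing simpler approach" the post describes appears to be a single-decree, leaderless Paxos variant (CASPaxos is the usual name for this family), where a proposer reads the freshest accepted value from a majority, applies a change function, and writes the result back under a ballot number. The sketch below is my own toy rendering of that two-phase core; networking, retries, and persistence are omitted, and all names are illustrative.

```python
# A toy rendering of the single-decree core: prepare/promise, then
# accept, with a change function applied to the latest accepted value.
# Networking, retries, and persistence are omitted; names are mine.

class Acceptor:
    def __init__(self):
        self.promised = 0           # highest ballot promised so far
        self.accepted = (0, None)   # (ballot, value) last accepted

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return ("promise", self.accepted)
        return ("nack", self.promised)

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return ("accepted", None)
        return ("nack", self.promised)

def propose(acceptors, ballot, change):
    """One round: read the freshest accepted value from a majority,
    apply `change`, and write the result back under `ballot`."""
    majority = len(acceptors) // 2 + 1
    promises = [a.prepare(ballot) for a in acceptors]
    granted = [p[1] for p in promises if p[0] == "promise"]
    if len(granted) < majority:
        return None                 # lost the round; retry higher
    current = max(granted, key=lambda bv: bv[0])[1]
    new_value = change(current)
    acks = [a.accept(ballot, new_value) for a in acceptors]
    if sum(1 for a in acks if a[0] == "accepted") >= majority:
        return new_value
    return None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, ballot=1, change=lambda v: (v or 0) + 1))  # -> 1
```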
Multileader WAN Paxos: Ruling the Archipelago with Fast Consensus
We present WPaxos, a multileader wide area network (WAN) Paxos protocol that achieves low-latency, high-throughput consensus across WAN deployments. WPaxos dynamically partitions the global object-space across multiple concurrent leaders that are deployed strategically using flexible quorums. This partitioning and emphasis on local operations allow our protocol to significantly outperform leaderless approaches, such as EPaxos, while maintaining the same consistency guarantees. Unlike statically partitioned multiple Paxos deployments, WPaxos adapts dynamically to changing access locality through adaptive object stealing. The ability to quickly react to changing access locality not only speeds up the protocol, but also enables support for mini-transactions.
paxos  scaling  consensus 
11 weeks ago by mpm
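The abstract's "adaptive object stealing" can be caricatured in a few lines: each object maps to an owning leader, and when accesses to an object keep arriving at some other leader, ownership migrates there. The sketch below is my own caricature of that routing table plus a made-up locality threshold; the quorum machinery that makes each transfer safe in WPaxos itself is the hard part and is elided here.

```python
# A caricature of adaptive object stealing: route each object to its
# owning leader, and migrate ownership toward observed access locality.
# The real protocol makes every transfer safe via its quorum machinery;
# here it is just a dictionary update. The threshold is made up.

from collections import defaultdict

STEAL_THRESHOLD = 3                      # hypothetical locality trigger

owner = {}                               # object -> current leader
remote_hits = defaultdict(int)           # (object, leader) -> count

def access(obj, local_leader):
    """Record an access arriving at `local_leader`; steal once locality
    has clearly shifted, and return the (possibly new) owner."""
    current = owner.setdefault(obj, local_leader)
    if current == local_leader:
        return current                   # fast path: local commit
    remote_hits[(obj, local_leader)] += 1
    if remote_hits[(obj, local_leader)] >= STEAL_THRESHOLD:
        owner[obj] = local_leader        # the "steal"
        del remote_hits[(obj, local_leader)]
    return owner[obj]

owner["user:42"] = "us-leader"
for _ in range(3):
    access("user:42", "eu-leader")       # European traffic builds up
print(owner["user:42"])                  # -> "eu-leader"
```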


