jm + synchronization (17 bookmarks)

AWS Greengrass
AWS Greengrass is software that lets you run local compute, messaging & data caching for connected devices in a secure way. With AWS Greengrass, connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely – even when not connected to the Internet. Using AWS Lambda, Greengrass ensures your IoT devices can respond quickly to local events, operate with intermittent connections, and minimize the cost of transmitting IoT data to the cloud.

AWS Greengrass seamlessly extends AWS to devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. With Greengrass, you can use familiar languages and programming models to create and test your device software in the cloud, and then deploy it to your devices. AWS Greengrass can be programmed to filter device data and only transmit necessary information back to the cloud. AWS Greengrass authenticates and encrypts device data at all points of connection using AWS IoT’s security and access management capabilities. This way, whether devices are talking to each other or to the cloud, data is never exchanged without proven identity.
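To make the "filter at the edge" idea concrete, here's a hedged sketch of a Java Lambda that could run on a Greengrass core. RequestHandler is the standard AWS Lambda Java interface, but the telemetry shape, field names, and threshold are invented for illustration, and republishing the result to AWS IoT (via the Greengrass SDK) is omitted:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// Hypothetical edge filter: routine readings stay on the device; only
// out-of-range readings are passed along for forwarding to the cloud.
public class TelemetryFilter implements RequestHandler<Map<String, Object>, Map<String, Object>> {
    private static final double ALERT_THRESHOLD = 90.0; // invented for illustration

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> reading, Context ctx) {
        double temp = ((Number) reading.getOrDefault("temperatureC", 0)).doubleValue();
        // Forward only the anomalies; everything else never leaves the device,
        // which is where the transmission-cost savings come from.
        return temp > ALERT_THRESHOLD ? reading : null;
    }
}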
aws  cloud  iot  lambda  devices  offline  synchronization  architecture 
april 2017 by jm
Hybrid Logical Clocks
A neat substitute for physical-time clocks for synchronization and ordering in distributed systems, based on Lamport's Logical Clocks and Google's TrueTime.

'HLC captures the causality relationship like LC, and enables easy identification of consistent snapshots in distributed systems. Dually, HLC can be used in lieu of PT clocks since it maintains its logical clock to be always close to the PT clock.'
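The paper's per-event update rules are compact enough to sketch directly. A hedged Java rendering (names mine; System.currentTimeMillis() stands in for the PT clock, and timestamps are (l, c) pairs):

// Hybrid Logical Clock: l tracks the max physical time witnessed so far,
// c is a logical counter that breaks ties within the same l.
final class HybridLogicalClock {
    private long l = 0;
    private long c = 0;

    // For a local or send event.
    synchronized long[] tick() {
        long pt = System.currentTimeMillis();
        long prev = l;
        l = Math.max(prev, pt);
        c = (l == prev) ? c + 1 : 0;
        return new long[] { l, c };
    }

    // On receiving a message stamped (ml, mc).
    synchronized long[] receive(long ml, long mc) {
        long pt = System.currentTimeMillis();
        long prev = l;
        l = Math.max(Math.max(prev, ml), pt);
        if (l == prev && l == ml)  c = Math.max(c, mc) + 1;
        else if (l == prev)        c = c + 1;
        else if (l == ml)          c = mc + 1;
        else                       c = 0;
        return new long[] { l, c };
    }
}

As long as physical clocks are roughly synchronized, l stays close to PT while comparisons of (l, c) preserve causality, which is the whole trick.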
hlc  clocks  logical-clocks  time  synchronization  ordering  events  logs  papers  algorithms  truetime  distcomp 
june 2015 by jm
Five different ways to handle leap seconds with NTP
Short of switching to chronyd, ntpd -x sounds like the least-bad option:
With ntpd, the kernel backward step is used by default. With ntpd versions before 4.2.6, or 4.2.6 and later patched for this bug, the -x option (added to /etc/sysconfig/ntpd) can be used to disable the kernel leap second correction and ignore the leap second as far as the local clock is concerned. The one-second error gained after the leap second will be measured and corrected later by slewing in normal operation using NTP servers which already corrected their local clocks.


It's all pretty messy though :(
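For reference, a minimal sketch of that -x workaround on a RHEL-style system; exact default options vary by distro, so this only shows where -x goes:

# /etc/sysconfig/ntpd -- keep your distro's existing options, append -x
OPTIONS="-g -x"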
ntpd  ntp  chronyd  clocks  time  synchronization  via:fanf  linux  leap-seconds 
june 2015 by jm
Biased Locking in HotSpot (David Dice's Weblog)
This is pretty nuts. If biased locking in the HotSpot JVM is causing performance issues, it can be turned off:
You can avoid biased locking on a per-object basis by calling System.identityHashCode(o). If the object is already biased, assigning an identity hashCode will result in revocation; otherwise, the assignment of a hashCode() will make the object ineligible for subsequent biased locking.
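A minimal sketch of that per-object opt-out: the identity hash and the bias state compete for the same object header word, so forcing the hash leaves no room for bias.

public class UnbiasedLock {
    private final Object lock = new Object();

    public UnbiasedLock() {
        // Forces an identity hash into the object header, making the lock
        // object ineligible for biased locking (or revoking an existing bias).
        System.identityHashCode(lock);
    }

    public void doWork() {
        synchronized (lock) {
            // contended paths now go straight to ordinary thin/inflated locking
        }
    }
}

The JVM-wide equivalent is the -XX:-UseBiasedLocking flag.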
hashcode  jvm  java  biased-locking  locking  mutex  synchronization  locks  performance 
march 2015 by jm
"Left-Right: A Concurrency Control Technique with Wait-Free Population Oblivious Reads" [pdf]
'In this paper, we describe a generic concurrency control technique with Blocking write operations and Wait-Free Population Oblivious read operations, which we named the Left-Right technique. It is of particular interest for real-time applications with dedicated Reader threads, due to its wait-free property that gives strong latency guarantees and, in addition, there is no need for automatic Garbage Collection.
The Left-Right pattern can be applied to any data structure, allowing concurrent access to it similarly to a Reader-Writer lock, but in a non-blocking manner for reads. We present several variations of the Left-Right technique, with different versioning mechanisms and state machines. In addition, we constructed an optimistic approach that can reduce synchronization for reads.'

See also http://concurrencyfreaks.blogspot.ie/2013/12/left-right-concurrency-control.html for Java implementation code.
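For a feel of the pattern, a much-simplified Java sketch; the paper's variants use cleverer read-indicators than these two counters, so treat this as illustrative only:

import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;
import java.util.function.Function;

// Two instances of the data structure; reads never block, and the single
// writer waits for readers to drain from a side before reusing it.
final class LeftRight<T> {
    private final T[] instances;                      // [left, right]
    private volatile int leftRight = 0;               // side readers currently use
    private volatile int versionIndex = 0;            // which counter new readers bump
    private final AtomicLong[] readers = { new AtomicLong(), new AtomicLong() };

    @SuppressWarnings("unchecked")
    LeftRight(T left, T right) { instances = (T[]) new Object[] { left, right }; }

    <R> R read(Function<T, R> op) {
        int vi = versionIndex;
        readers[vi].incrementAndGet();                // arrive
        try {
            return op.apply(instances[leftRight]);    // wait-free
        } finally {
            readers[vi].decrementAndGet();            // depart
        }
    }

    synchronized void write(Consumer<T> op) {         // writers serialize here
        int lr = leftRight;
        op.accept(instances[1 - lr]);                 // 1. modify the unused side
        leftRight = 1 - lr;                           // 2. switch readers over
        int vi = versionIndex, nvi = 1 - vi;
        while (readers[nvi].get() != 0) Thread.yield();
        versionIndex = nvi;                           // 3. toggle the version...
        while (readers[vi].get() != 0) Thread.yield();
        op.accept(instances[lr]);                     // 4. ...then patch the old side
    }
}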
left-right  concurrency  multithreading  wait-free  blocking  realtime  gc  latency  reader-writer  locking  synchronization  java 
september 2014 by jm
ThreadSanitizer
Google's Purify/Valgrind-like concurrency-checking tool:

'As a bonus, ThreadSanitizer finds some other types of bugs: thread leaks, deadlocks, incorrect uses of mutexes, malloc calls in signal handlers, and more. It also natively understands atomic operations and thus can find bugs in lock-free algorithms. [...] The tool is supported by both Clang and GCC compilers (only on Linux/Intel64). Using it is very simple: you just need to add a -fsanitize=thread flag during compilation and linking. For Go programs, you simply need to add a -race flag to the go tool (supported on Linux, Mac and Windows).'
concurrency  bugs  valgrind  threadsanitizer  threading  deadlocks  mutexes  locking  synchronization  coding  testing 
june 2014 by jm
"Taking the hotdog"
a.k.a. lock acquisition. Ex-Amazon-Dublin lingo, observed in the wild ;)
language  hotdog  archie-mcphee  amazon  dublin  intercom  coding  locks  synchronization 
may 2014 by jm
Scalable Atomic Visibility with RAMP Transactions
Great new distcomp protocol work from Peter Bailis et al:
We’ve developed three new algorithms—called Read Atomic Multi-Partition (RAMP) Transactions—for ensuring atomic visibility in partitioned (sharded) databases: either all of a transaction’s updates are observed, or none are. [...]

How they work: RAMP transactions allow readers and writers to proceed concurrently. Operations race, but readers autonomously detect the races and repair any non-atomic reads. The write protocol ensures readers never stall waiting for writes to arrive.

Why they scale: Clients can’t cause other clients to stall (via synchronization independence) and clients only have to contact the servers responsible for items in their transactions (via partition independence). As a consequence, there’s no mutual exclusion or synchronous coordination across servers.

The end result: RAMP transactions outperform existing approaches across a variety of workloads, and, for a workload of 95% reads, RAMP transactions scale to over 7 million ops/second on 100 servers at less than 5% overhead.
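To make "readers detect the races and repair" concrete, here's a hedged, single-process Java sketch of the RAMP-Fast flavour: in-memory maps stand in for partitions, and transaction timestamps are assumed globally unique.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Writers install versions tagged with a timestamp plus the set of sibling
// keys in the same transaction; readers use that metadata to spot and
// repair fractured reads with one targeted second round.
final class RampFastSketch {
    static final class Version {
        final long ts; final Object value; final Set<String> siblings;
        Version(long ts, Object value, Set<String> siblings) {
            this.ts = ts; this.value = value; this.siblings = siblings;
        }
    }

    private final ConcurrentHashMap<String, ConcurrentHashMap<Long, Version>> versions = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Long> latest = new ConcurrentHashMap<>();

    // PUT_ALL: prepare every version first, only then move the commit pointers.
    void putAll(Map<String, Object> writes, long ts) {
        for (Map.Entry<String, Object> e : writes.entrySet()) {          // phase 1: prepare
            Set<String> siblings = new HashSet<>(writes.keySet());
            siblings.remove(e.getKey());
            versions.computeIfAbsent(e.getKey(), k -> new ConcurrentHashMap<>())
                    .put(ts, new Version(ts, e.getValue(), siblings));
        }
        for (String k : writes.keySet())                                 // phase 2: commit
            latest.merge(k, ts, Math::max);
    }

    // GET_ALL: one round of reads, then repair any fractured reads.
    Map<String, Object> getAll(Set<String> keys) {
        Map<String, Version> got = new HashMap<>();
        for (String k : keys) {
            Long ts = latest.get(k);
            if (ts != null) got.put(k, versions.get(k).get(ts));
        }
        // From sibling metadata, work out the newest ts each key should be at.
        Map<String, Long> required = new HashMap<>();
        for (Version v : got.values())
            for (String sib : v.siblings)
                if (keys.contains(sib)) required.merge(sib, v.ts, Math::max);
        // Second round: fetch the exact missing versions; the prepare phase
        // guarantees they exist even if their commit hasn't landed yet.
        for (Map.Entry<String, Long> e : required.entrySet()) {
            Version cur = got.get(e.getKey());
            if (cur == null || cur.ts < e.getValue())
                got.put(e.getKey(), versions.get(e.getKey()).get(e.getValue()));
        }
        Map<String, Object> out = new HashMap<>();
        got.forEach((k, v) -> out.put(k, v.value));
        return out;
    }
}

Note how neither putAll nor getAll ever waits on another client, which is the synchronization-independence property the scalability numbers rest on.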
scale  synchronization  databases  distcomp  distributed  ramp  transactions  scalability  peter-bailis  protocols  sharding  concurrency  atomic  partitions 
april 2014 by jm
Safe cross-thread publication of a non-final variable in the JVM
Scary, but potentially useful in future, so worth bookmarking. By carefully orchestrating memory accesses using volatile and non-volatile fields, one can ensure that a non-volatile, non-synchronized field's value is safely visible to all threads after that point due to JMM barrier semantics.

What you are looking to do is enforce a barrier between your initializing stores and your publishing store, without that publishing store being made to a volatile field. This can be done by using volatile access to other fields in the publication path, without using those variables in the later access paths to the published object.
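A conservative Java sketch of the piggy-backing idea. Note this version still does a volatile read on the access path, which is the ordinary JMM-safe form; the post's variant goes further and keeps the volatile out of the read path entirely, which is what makes it scary:

// Plain stores made before a volatile store become visible to any thread
// that subsequently reads that volatile field (happens-before).
final class Publication {
    private String payload;             // non-volatile, non-final
    private volatile boolean ready;     // barrier field, used only to publish

    void publish(String value) {
        payload = value;                // 1. initializing store (plain)
        ready = true;                   // 2. volatile store publishes it
    }

    String tryGet() {
        return ready ? payload : null;  // volatile load pairs with the store
    }
}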
volatile  atomic  java  jvm  gil-tene  synchronization  performance  threading  jmm  memory-barriers 
january 2014 by jm
Asynchronous logging versus Memory Mapped Files
Interesting article on using mmap'd files from Java via RandomAccessFile.getChannel().map(), which allows them to be accessed directly as a ByteBuffer. Together with atomic lazySet() operations, this gives excellent performance on low-latency writes to disk. See also: http://psy-lob-saw.blogspot.ie/2012/12/atomiclazyset-is-performance-win-for.html
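Roughly, the pattern looks like this -- a hedged sketch with invented names, a single writer assumed, and bounds checks and file rollover omitted:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.atomic.AtomicLong;

// Append log records into a memory-mapped file; publish the write position
// with lazySet(), an ordered store that skips the full fence of a volatile
// write on the hot path.
final class MmapLog implements AutoCloseable {
    private final RandomAccessFile file;
    private final MappedByteBuffer buffer;
    private final AtomicLong writePos = new AtomicLong(0);

    MmapLog(String path, int capacity) throws Exception {
        file = new RandomAccessFile(path, "rw");
        buffer = file.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, capacity);
    }

    void append(byte[] record) {
        int pos = (int) writePos.get();      // safe: single writer
        buffer.position(pos);
        buffer.putInt(record.length);
        buffer.put(record);
        // Readers polling writePos may observe the new value slightly late,
        // but never reordered ahead of the record bytes written above.
        writePos.lazySet(pos + 4 + record.length);
    }

    @Override public void close() throws Exception { file.close(); }
}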
atomic  lazyset  putordered  jmm  java  synchronization  randomaccessfile  bytebuffers  performance  optimization  memory  disk  queues 
november 2013 by jm
Low-latency stock trading "jumps the gun" due to default NTP configuration settings
On June 3, 2013, trading in SPY exploded at 09:59:59.985, which is 15 milliseconds before the ISM's Manufacturing number released at 10:00:00. Activity in the eMini (traded in Chicago), exploded at 09:59:59.992, which is 8 milliseconds before the news release, but 7 milliseconds after SPY. Note how SPY and the eMini traded within a millisecond for the Consumer Confidence release last week, but the eMini lagged SPY by about 7 milliseconds for the ISM Manufacturing release. The simultaneous trading on Consumer Confidence is because that number is released at the same time in both NYC and Chicago.

The ISM Manufacturing number is probably released on a low latency feed in NYC, and then takes 5-7 milliseconds, due to the speed of light, to reach Chicago. Either the clock used to release the ISM number was 15 milliseconds fast, or someone (correctly) jumped the gun.

Update: [...] The clock used to release the ISM was indeed, 15 milliseconds fast. This could be from using the default setting of many NTP clients, which allows the clock to drift up to about 16 milliseconds before adjusting time.
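The speed-of-light figure checks out, roughly: NYC to Chicago is about 1,150 km in a straight line, and light in optical fiber propagates at about two-thirds of c, i.e. ~200,000 km/s. A realistic fiber route of ~1,200 km therefore costs about 1,200 / 200,000 = 6 ms one way, squarely within the observed 5-7 ms gap.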
ntp  time  synchronization  spy  trading  stocks  low-latency  clocks  internet 
june 2013 by jm
BitTorrent’s Secure Dropbox Alternative Goes Public
As kragen says, 'a decentralized way to sync a folder of large files, using BitTorrent instead of an untrustworthy central server'. Windows, OS X, and Linux are supported.
bittorrent  dropbox  cloud  storage  filesharing  sharing  sync  synchronization 
april 2013 by jm
Dropbox Sync API
Give your app its own private Dropbox client and leave the syncing to us.
apps  dropbox  synchronization  sync  ios  android  api 
march 2013 by jm
Mindblowing Python GIL
'presentation about how the Python GIL actually works and why it's even worse than most people even imagine.' A good chunk of it, btw, could be rephrased as 'pthreads is worse than most people imagine'. Pretty awful data, though.
python  gil  locking  synchronization  ouch  performance  tuning  coding  interpreters  threads  pthreads  from delicious
february 2010 by jm
lsyncd
'Lsyncd uses rsync to synchronize local directories with a remote machine running rsyncd. Lsyncd watches multiple directories trees through inotify. The first step after adding the watches is to rsync all directories with the remote host, and then sync single file by collecting the inotify events. So lsyncd is a light-weight live mirror solution that should be easy to install and use while blending well with your system.' (via adulau)
via:adulau  lsyncd  mirroring  linux  inotify  backup  sysadmin  synchronization  sync  dropbox  from delicious
december 2009 by jm

