jm + ops   300

Elasticsearch and data loss
"@alexbfree @ThijsFeryn [ElasticSearch is] fine as long as data loss is acceptable. We lose ~1% of all writes on average."
elasticsearch  data-loss  reliability  data  search  aphyr  jepsen  testing  distributed-systems  ops 
3 hours ago by jm
Træfɪk
Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends (Docker, Mesos/Marathon, Consul, Etcd, Rest API, file...) to manage its configuration automatically and dynamically.

Hot-reloading is notably much easier than with nginx/haproxy.
proxy  http  proxying  reverse-proxy  traefik  go  ops 
8 days ago by jm
Chaos Engineering Upgraded
some details on Netflix's Chaos Monkey, Chaos Kong and other aspects of their availability/failover testing
architecture  aws  netflix  ops  chaos-monkey  chaos-kong  testing  availability  failover  ha 
8 days ago by jm
Byteman
A tool which simplifies tracing and testing of Java programs. Byteman allows you to insert extra Java code into your application, either as it is loaded during JVM startup or even after it has already started running. The injected code is allowed to access any of your data and call any application methods, including where they are private. You can inject code almost anywhere you want, and there is no need to prepare the original source code in advance, nor do you have to recompile, repackage or redeploy your application. In fact you can remove injected code and reinstall different code while the application continues to execute.

The simplest use of Byteman is to install code which traces what your application is doing. This can be used for monitoring or debugging live deployments as well as for instrumenting code under test so that you can be sure it has operated correctly. By injecting code at very specific locations you can avoid the overheads which often arise when you switch on debug or product trace. Also, you decide what to trace when you run your application rather than when you write it, so you don't need 100% hindsight to be able to obtain the information you need.
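Byteman rules are written in a small DSL rather than plain Java; a minimal tracing rule script (a .btm file) might look like the following — the class and method names here are hypothetical, purely for illustration:

```
RULE trace entry to placeOrder
CLASS com.example.OrderService
METHOD placeOrder
AT ENTRY
IF TRUE
DO traceln("placeOrder called with: " + $1)
ENDRULE
```

Loaded via the Byteman agent, this prints a trace line with the first method argument every time the method is entered, without touching the application's source.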
tracing  java  byteman  injection  jvm  ops  debugging  testing 
13 days ago by jm
httpry
A specialized packet sniffer designed for displaying and logging HTTP traffic. It is not intended to perform analysis itself, but to capture, parse, and log the traffic for later analysis. It can be run in real-time displaying the traffic as it is parsed, or as a daemon process that logs to an output file. It is written to be as lightweight and flexible as possible, so that it can be easily adaptable to different applications.

via Eoin Brazil
via:eoinbrazil  httpry  http  networking  tools  ops  testing  tcpdump  tracing 
16 days ago by jm
Anatomy of a Modern Production Stack
Interesting post, but I think it falls into a common trap for the xoogler or ex-Amazonian -- assuming that all the BigCo mod cons are required to operate, when some are luxuries that can be skipped for a few years to get some real products built.
architecture  ops  stack  docker  containerization  deployment  containers  rkt  coreos  prod  monitoring  xooglers 
18 days ago by jm
You're probably wrong about caching
Excellent cut-out-and-keep guide to why you shouldn't add a caching layer. I've been following this practice for the past few years, after I realised that #6 (recovering from a failed cache is hard) is a killer -- I've seen a few large-scale outages where a production system had gained enough scale that it required a cache to operate, and once that cache was damaged, bringing the system back online required a painful rewarming protocol. Better to design for the non-cached case if possible.
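The "design for the non-cached case" principle can be sketched as a read path that treats the cache as a pure optimization — the backing store must be able to serve every read, and a damaged cache never fails a request. Class and variable names here are illustrative:

```python
class ReadThroughStore:
    """Read path that survives a cold, damaged or missing cache."""

    def __init__(self, backing, cache):
        self.backing = backing  # source of truth, e.g. a database
        self.cache = cache      # optional accelerator; may be empty or broken

    def get(self, key):
        # Try the cache, but never let a cache failure fail the read.
        try:
            value = self.cache.get(key)
            if value is not None:
                return value
        except Exception:
            pass
        # Fall back to the backing store -- it must be provisioned to
        # handle full load, or a cache outage becomes a site outage.
        value = self.backing[key]
        try:
            self.cache[key] = value  # rewarm opportunistically
        except Exception:
            pass
        return value
```

The key design choice is that the cache miss path and the cache failure path are the same code, so a damaged cache degrades performance rather than availability.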
architecture  caching  coding  design  caches  ops  production  scalability 
4 weeks ago by jm
Docker image creation, tagging and traceability in Shippable
this is starting to look quite impressive as a well-integrated Docker-meets-CI model; Shippable is basing its builds off Docker baselines and is automatically cutting Docker images of the post-CI stage. Must take another look
shippable  docker  ci  ops  dev  continuous-integration 
6 weeks ago by jm
Call me Maybe: Chronos
Chronos (the Mesos distributed scheduler) comes out looking pretty crappy here
aphyr  mesos  chronos  cron  scheduling  outages  ops  jepsen  testing  partitions  cap 
6 weeks ago by jm
Cleanup old/obsolete Docker images in a repo.
disk-space  ops  docker  cleanup  cron 
7 weeks ago by jm
rwasa
Our full-featured, high performance, scalable web server designed to compete with the likes of nginx. It has been built from the ground up, with no external library dependencies, entirely in x86_64 assembly language, and is the result of many years' experience with high-volume web environments. In addition to all of the common things you'd expect a modern web server to do, we also include assembly language function hooks ready-made to facilitate Rapid Web Application Server (in Assembler) development.
assembly  http  performance  https  ssl  x86_64  web  ops  rwasa  tls 
7 weeks ago by jm
A collection of postmortems
A well-maintained list with a potted description of each one (via HN)
postmortems  ops  uptime  reliability 
8 weeks ago by jm
Introducing Nurse: Auto-Remediation at LinkedIn
Interesting to hear about auto-remediation in prod -- we built a (very targeted) auto-remediation system in Amazon on the Network Monitoring team, but this is much bigger in scope
nurse  auto-remediation  outages  linkedin  ops  monitoring 
9 weeks ago by jm
danilop/runjop · GitHub
RunJOP (Run Just Once Please) is a distributed execution framework to run a command (i.e. a job) only once in a group of servers [built using AWS DynamoDB and S3].

nifty! Distributed cron is pretty easy when you've got Dynamo doing the heavy lifting.
dynamodb  cron  distributed-cron  scheduling  runjop  danilop  hacks  aws  ops 
9 weeks ago by jm
Why Docker is Not Yet Succeeding Widely in Production
Spot-on points which Docker needs to address. It's still production-ready and _should_ be used there; it just has significant rough edges...
docker  containers  devops  deployment  releases  linux  ops 
9 weeks ago by jm
Taming Complexity with Reversibility
This is a great post from Kent Beck, putting a lot of recent deployment/rollout patterns in a clear context -- that of supporting "reversibility":
Development servers. Each engineer has their own copy of the entire site. Engineers can make a change, see the consequences, and reverse the change in seconds without affecting anyone else.
Code review. Engineers can propose a change, get feedback, and improve or abandon it in minutes or hours, all before affecting any people using Facebook.
Internal usage. Engineers can make a change, get feedback from thousands of employees using the change, and roll it back in an hour.
Staged rollout. We can begin deploying a change to a billion people and, if the metrics tank, take it back before problems affect most people using Facebook.
Dynamic configuration. If an engineer has planned for it in the code, we can turn off an offending feature in production in seconds. Alternatively, we can dial features up and down in tiny increments (i.e. only 0.1% of people see the feature) to discover and avoid non-linear effects.
Correlation. Our correlation tools let us easily see the unexpected consequences of features so we know to turn them off even when those consequences aren't obvious.
IRC. We can roll out features potentially affecting our ability to communicate internally via Facebook because we have uncorrelated communication channels like IRC and phones.
Right hand side units. We can add a little bit of functionality to the website and turn it on and off in seconds, all without interfering with people's primary interaction with NewsFeed.
Shadow production. We can experiment with new services under real load, from a tiny trickle to the whole flood, without affecting production.
Frequent pushes. Reversing some changes requires a code change. On the website we are never more than eight hours from the next scheduled code push (minutes if a fix is urgent and you are willing to compensate Release Engineering). The time frame for code reversibility on the mobile applications is longer, but the downward trend is clear: from six weeks to four to (currently) two.
Data-informed decisions. (Thanks to Dave Cleal) Data-informed decisions are inherently reversible (with the exceptions noted below). "We expect this feature to affect this metric. If it doesn't, it's gone."
Advance countries. We can roll a feature out to a whole country, generate accurate feedback, and roll it back without affecting most of the people using Facebook.
Soft launches. When we roll out a feature or application with a minimum of fanfare it can be pulled back with a minimum of public attention.
Double write/bulk migrate/double read. Even as fundamental a decision as storage format is reversible if we follow this format: start writing all new data to the new data store, migrate all the old data, then start reading from the new data store in parallel with the old.

We do a bunch of these in work, and the rest are on the to-do list. +1 to these!
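The double write/bulk migrate/double read item is the most mechanical of these, and can be sketched as a small state machine over two stores — an in-memory toy, with names invented for illustration:

```python
class MigratingStore:
    """Sketch of a reversible storage migration: at every step either
    store can serve reads, so the change can be rolled back."""

    def __init__(self, old, new):
        self.old, self.new = old, new
        self.double_write = False  # step 1: write to both stores
        self.read_new = False      # step 3: prefer the new store on reads

    def write(self, key, value):
        self.old[key] = value
        if self.double_write:
            self.new[key] = value

    def bulk_migrate(self):
        # Step 2: backfill old data; setdefault avoids clobbering rows
        # already double-written with fresher values.
        for key, value in self.old.items():
            self.new.setdefault(key, value)

    def read(self, key):
        if self.read_new:
            value = self.new.get(key)
            if value is not None:
                return value
        return self.old[key]  # fallback keeps reads correct mid-migration
```

Reversibility comes from the ordering: each flag flip is independently revertable, and the old store stays authoritative until you choose to decommission it.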
software  deployment  complexity  systems  facebook  reversibility  dark-releases  releases  ops  cd  migration 
10 weeks ago by jm
Benchmarking GitHub Enterprise - GitHub Engineering
Walkthrough of debugging connection timeouts in a load test. Nice graphs (using matplotlib)
github  listen-backlog  tcp  debugging  timeouts  load-testing  benchmarking  testing  ops  linux 
10 weeks ago by jm
Mikhail Panchenko's thoughts on the July 2015 CircleCI outage
an excellent followup operational post on CircleCI's "database is not a queue" outage
database-is-not-a-queue  mysql  sql  databases  ops  outages  postmortems 
11 weeks ago by jm
From Zero to Docker: Migrating to the Whale
nicely detailed writeup of how New Relic are dockerizing
docker  ops  deployment  packaging  new-relic 
12 weeks ago by jm
Outlier Detection at Netflix | Hacker News
Excellent HN thread re automated anomaly detection in production, Q&A with the dev team
machine-learning  ml  remediation  anomaly-detection  netflix  ops  time-series  clustering 
12 weeks ago by jm
sjk
A command line tool for JVM diagnostic troubleshooting and profiling.
java  jvm  monitoring  commandline  jmx  sjk  tools  ops 
june 2015 by jm
Cloudflare's open source CA/PKI infrastructure app
cloudflare  pki  ca  ssl  tls  ops 
june 2015 by jm
Docker at Shopify: From This-Looks-Fun to Production
Pragmatic evolution story, adding Docker as a packaging/deploy format for an existing production Capistrano/Rails fleet
docker  ops  deployment  packaging  shopify  slides 
june 2015 by jm
Google Cloud Platform announces new Container Registry
Yay. Sensible Docker registry pricing at last. Given the high prices, rough edges and slow performance of the other registry offerings, I'm quite happy to see this.
Google Container Registry helps make it easy for you to store your container images in a private and encrypted registry, built on Cloud Platform. Pricing for storing images in Container Registry is simple: you only pay Google Cloud Storage costs. Pushing images is free, and pulling Docker images within a Google Cloud Platform region is free (Cloud Storage egress cost when outside of a region).

Container Registry is now ready for production use:

* Encrypted and Authenticated - Your container images are encrypted at rest, and access is authenticated using Cloud Platform OAuth and transmitted over SSL
* Fast - Container Registry is fast and can handle the demands of your application, because it is built on Cloud Storage and Cloud Networking.
* Simple - If you’re using Docker, just tag your image with a tag and push it to the registry to get started.  Manage your images in the Google Developers Console.
* Local - If your cluster runs in Asia or Europe, you can now store your images in ASIA or EU specific repositories using and tags.
docker  registry  google  gcp  containers  cloud-storage  ops  deployment 
june 2015 by jm
Automated Nginx Reverse Proxy for Docker
Nice hack. An automated nginx reverse proxy which regenerates as the Docker containers update
nginx  reverse-proxy  proxies  web  http  ops  docker 
june 2015 by jm
Google Cloud Platform Blog: A look inside Google’s Data Center Networks
We used three key principles in designing our datacenter networks:
We arrange our network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.
We use a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.
We build our own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.
clos-networks  google  data-centers  networking  sdn  gcp  ops 
june 2015 by jm
Why I dislike systemd
Good post, and hard to disagree.
One of the "features" of systemd is that it allows you to boot a system without needing a shell at all. This seems like such a senseless manoeuvre that I can't help but think of it as a knee-jerk reaction to the perception of Too Much Shell in sysv init scripts.
In exactly which universe is it reasonable to assume that you have a running D-Bus service (or kdbus) and a filesystem containing unit files, all the binaries they refer to, all the libraries they link against, and all the configuration files any of them reference, but that you lack that most ubiquitous of UNIX binaries, /bin/sh?
history  linux  unix  systemd  bsd  system-v  init  ops  dbus 
june 2015 by jm
VPC Flow Logs
we are introducing Flow Logs for the Amazon Virtual Private Cloud.  Once enabled for a particular VPC, VPC subnet, or Elastic Network Interface (ENI), relevant network traffic will be logged to CloudWatch Logs for storage and analysis by your own applications or third-party tools.

You can create alarms that will fire if certain types of traffic are detected; you can also create metrics to help you to identify trends and patterns. The information captured includes information about allowed and denied traffic (based on security group and network ACL rules). It also includes source and destination IP addresses, ports, the IANA protocol number, packet and byte counts, a time interval during which the flow was observed, and an action (ACCEPT or REJECT).
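A flow log record is a single space-separated line carrying the fields listed above. A minimal parser might look like this (field order per the default format at launch; treat this as a sketch):

```python
# Default VPC Flow Log record: 14 space-separated fields.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Parse one flow log line into a dict, with numeric fields as ints."""
    record = dict(zip(FIELDS, line.split()))
    for field in ("srcport", "dstport", "protocol", "packets",
                  "bytes", "start", "end"):
        record[field] = int(record[field])
    return record
```

For example, an inbound SSH flow (protocol 6 is TCP in the IANA numbering) parses into source/destination addresses and ports, byte counts, the capture window, and the ACCEPT/REJECT action.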
ec2  aws  vpc  logging  tracing  ops  flow-logs  network  tcpdump  packets  packet-capture 
june 2015 by jm
How We Moved Our API From Ruby to Go and Saved Our Sanity
Parse on their ditching-Rails story. I haven't heard a nice thing about Ruby or Rails as an operational, production-quality platform in a long time :(
go  ruby  rails  ops  parse  languages  platforms 
june 2015 by jm
etcd Clustering in AWS
'a fully-automated solution to build auto-scaling etcd clusters in AWS'
aws  cluster  docker  etcd  asg  autoscaling  ops 
june 2015 by jm
1172401 – Add Amazon root certificates
Well, well -- looks like AWS is about to disrupt PKI, and about time too. If they come up with a Plex-style "provision a cert" API, it'll be revolutionary
pki  ssl  tls  amazon  aws  apis  web-services  ops 
june 2015 by jm
Eric Brewer interview on Kubernetes
What is the relationship between Kubernetes, Borg and Omega (the two internal resource-orchestration systems Google has built)?

I would say, kind of by definition, there’s no shared code but there are shared people.

You can think of Kubernetes — especially some of the elements around pods and labels — as being lessons learned from Borg and Omega that are, frankly, significantly better in Kubernetes. There are things that are going to end up being the same as Borg — like the way we use IP addresses is very similar — but other things, like labels, are actually much better than what we did internally.

I would say that’s a lesson we learned the hard way.
google  architecture  kubernetes  docker  containers  borg  omega  deployment  ops 
may 2015 by jm
Deploy a registry - Docker Documentation
Looks like it's pretty feasible to run a private Docker registry on every host, backed by S3 (according to the ECS team's AMA). SPOF-free -- handy
docker  registry  ops  deployment  s3 
may 2015 by jm
Migration to, Expectations, and Advanced Tuning of G1GC
Bookmarking for future reference. Recommended by one of the GC experts; I can't recall exactly who ;)
gc  g1gc  jvm  java  tuning  performance  ops  migration 
may 2015 by jm
Patterns for building a resilient and scalable microservices platform on AWS
Some good details from Boyan Dimitrov at Hailo on the orchestration, deployment and provisioning infra they've built
deployment  ops  devops  hailo  microservices  platform  patterns  slides 
may 2015 by jm
Why Loggly loves Apache Kafka
Some good factoids about Loggly's Kafka usage and scales
scalability  logging  loggly  kafka  queueing  ops  reliabilty 
may 2015 by jm
Cassandra moving to using G1 as the default recommended GC implementation
This is a big indicator that G1 is ready for primetime. CMS has long been the go-to GC for production usage, but requires careful, complex hand-tuning -- if G1 is getting to a stage where it's just a case of giving it enough RAM, that'd be great.

Also, looks like it'll be the JDK9 default:
cassandra  tuning  ops  g1gc  cms  gc  java  jvm  production  performance  memory 
april 2015 by jm
KeyBox
A web-based SSH console that centrally manages administrative access to systems. Web-based administration is combined with management and distribution of users' public SSH keys. Key management and administration is based on profiles assigned to defined users.

Administrators can login using two-factor authentication with FreeOTP or Google Authenticator. From there they can create and manage public SSH keys or connect to their assigned systems through a web-shell. Commands can be shared across shells to make patching easier and eliminate redundant command execution.
keybox  owasp  security  ssh  tls  ssl  ops 
april 2015 by jm
StackShare
'Discover and discuss the best dev tools and cloud infrastructure services' -- fun!
stackshare  architecture  stack  ops  software  ranking  open-source 
april 2015 by jm
Kubernetes compared to Borg
'Here are four Kubernetes features that came from our experiences with Borg.'
google  ops  kubernetes  borg  containers  docker  networking 
april 2015 by jm
Cluster-Based Architectures Using Docker and Amazon EC2 Container Service
In this post, we’re going to take a deeper dive into the architectural concepts underlying cluster computing using container management frameworks such as ECS. We will show how these frameworks effectively abstract the low-level resources such as CPU, memory, and storage, allowing for highly efficient usage of the nodes in a compute cluster. Building on some of the concepts detailed in the earlier posts, we will discover why containers are such a good fit for this type of abstraction, and how the Amazon EC2 Container Service fits into the larger ecosystem of cluster management frameworks.
docker  aws  ecs  ec2  ops  hosting  containers  mesos  clusters 
april 2015 by jm
Amazon EC2 Container Service team AmA
A few answers here -- mostly people pointing out shortcomings, and the team asking them to start a thread on their forum, though :(
ec2  ecs  docker  aws  ops  ama  reddit 
april 2015 by jm
Etsy's Release Management process
Good info on how Etsy use their Deployinator tool, end-to-end.

Slide 11: git SHA is visible for each env, allowing easy verification of what code is deployed.

Slide 14: Code is deployed to "princess" staging env while CI tests are running; no need to wait for unit/CI tests to complete.

Slide 23: smoke tests of pre-prod "princess" (complete after 8 mins elapsed).

Slide 31: dashboard link for deployed code is posted during deploy; post-release prod smoke tests are run by Jenkins. (short ones! they complete in 42 seconds)
deployment  etsy  deploy  deployinator  princess  staging  ops  testing  devops  smoke-tests  production  jenkins 
april 2015 by jm
'Continuous Deployment: The Dirty Details'
Good slide deck from Etsy's Mike Brittain regarding their CD setup. Some interesting little-known details:

Slide 41: database schema changes are not CD'd -- they go out on "Schema change Thursdays".

Slide 44: only the webapp is CD'd -- PHP, Apache, memcache components (support and back-office tools, the developer API, gearman async worker queues). The external "services" are not -- databases, Solr/JVM search (rolling restarts), photo storage (filters, proxy cache, S3), payments (PCI-DSS, controlled access).

They avoid schema changes and breaking changes using an approach they call "non-breaking expansions" -- expose new version in a service interface; support multiple versions in the consumer. Example from slides 50-63, based around a database schema migration.
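A "non-breaking expansion" of a service interface can be sketched like this — the field and function names are invented for illustration, not taken from Etsy's code:

```python
# Old response shape (v1) and expanded shape (v2) coexist during rollout.

def get_user_v1(user_id):
    return {"name": "Ada Lovelace"}  # original single-field shape

def get_user_v2(user_id):
    # Expansion: new fields added, nothing removed yet.
    return {"first_name": "Ada", "last_name": "Lovelace"}

def display_name(user):
    """Consumer supports both versions; dropping v1 support is a later,
    separate (and independently reversible) change."""
    if "first_name" in user:
        return f"{user['first_name']} {user['last_name']}"
    return user["name"]
```

Because the consumer handles both shapes, the producer can be rolled forward or back at any time without a coordinated deploy — the same property the slides exploit for schema migrations.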

Slide 66: "dev flags" (rollout oriented) are promoted to "feature flags" (long lived degradation control).

Slide 71: some architectural philosophies: deploying is cheap; releasing is cheap; gathering data should be cheap too; treat first iterations as experiments.

Slide 102: "Canary pools". They have multiple pools of users for testing in production -- the staff pool, users who have opted in to see prototypes/beta stuff, 0-100% gradual phased rollout.
cd  deploy  etsy  slides  migrations  database  schema  ops  ci  version-control  feature-flags 
april 2015 by jm
Internet Scale Services Checklist
good aspirational checklist, inspired heavily by James Hamilton's seminal 2007 paper, "On Designing And Deploying Internet-Scale Services"
james-hamilton  checklists  ops  internet-scale  architecture  operability  monitoring  reliability  availability  uptime  aspirations 
april 2015 by jm
Pinball
Pinterest's Hadoop workflow manager; 'scalable, reliable, simple, extensible', apparently. Hopefully it allows upgrades of a workflow component without breaking an existing run in progress, as LinkedIn's Azkaban does :(
python  pinterest  hadoop  workflows  ops  pinball  big-data  scheduling 
april 2015 by jm
Keywhiz
'A secret management and distribution service [from Square] that is now available for everyone. Keywhiz helps us with infrastructure secrets, including TLS certificates and keys, GPG keyrings, symmetric keys, database credentials, API tokens, and SSH keys for external services — and even some non-secrets like TLS trust stores. Automation with Keywhiz allows us to seamlessly distribute and generate the necessary secrets for our services, which provides a consistent and secure environment, and ultimately helps us ship faster. [...]

Keywhiz has been extremely useful to Square. It’s supported both widespread internal use of cryptography and a dynamic microservice architecture. Initially, Keywhiz use decoupled many amalgamations of configuration from secret content, which made secrets more secure and configuration more accessible. Over time, improvements have led to engineers not even realizing Keywhiz is there. It just works. Please check it out.'
square  security  ops  keys  pki  key-distribution  key-rotation  fuse  linux  deployment  secrets  keywhiz 
april 2015 by jm
Yelp Product & Engineering Blog | True Zero Downtime HAProxy Reloads
Using tc and qdisc to delay SYNs while haproxy restarts. Definitely feels like on-host NAT between 2 haproxy processes would be cleaner and easier though!
linux  networking  hacks  yelp  haproxy  uptime  reliability  tcp  tc  qdisc  ops 
april 2015 by jm
Optimizing Java CMS garbage collections, its difficulties, and using JTune as a solution | LinkedIn Engineering
I like the sound of this -- automated Java CMS GC tuning, kind of like a free version of JClarity's Censum (via Miguel Ángel Pastor)
java  jvm  tuning  gc  cms  linkedin  performance  ops 
april 2015 by jm
Gruffalo
An asynchronous Netty-based graphite proxy. It protects Graphite from the herds of clients by minimizing context switches and interrupts, and by batching and aggregating metrics. Gruffalo also allows you to replicate metrics between Graphite installations for DR scenarios, for example.

Gruffalo can easily handle a massive amount of traffic, and thus increase your metrics delivery system availability. At Outbrain, we currently handle over 1700 concurrent connections, and over 2M metrics per minute per instance.
graphite  backpressure  metrics  outbrain  netty  proxies  gruffalo  ops 
april 2015 by jm
Introducing Vector: Netflix's On-Host Performance Monitoring Tool
It gives pinpoint real-time performance metric visibility to engineers working on specific hosts -- basically sending back system-level performance data to their browser, where a client-side renderer turns it into a usable dashboard. Essentially the idea is to replace having to ssh onto instances and run "top", sysstat, iostat, and so on.
vector  netflix  performance  monitoring  sysstat  top  iostat  netstat  metrics  ops  dashboards  real-time  linux 
april 2015 by jm
Gil Tene's "usual suspects" to reduce system-level hiccups/latency jitters in a Linux system
Based on empirical evidence (across many tens of sites thus far) and note-comparing with others, I use a list of "usual suspects" that I blame whenever they are not set to my liking and system-level hiccups are detected. Getting these settings right from the start often saves a bunch of playing around (and no, there is no "priority" to this - you should set them all right before looking for more advice...).
performance  latency  hiccups  gil-tene  tuning  mechanical-sympathy  hyperthreading  linux  ops 
april 2015 by jm
Outages, PostMortems, and Human Error 101
Good basic pres from John Allspaw, covering the basics of tier-one tech incident response -- defining the 5 severity levels; root cause analysis techniques (to Five-Whys or not); and the importance of service metrics
devops  monitoring  ops  five-whys  allspaw  slides  etsy  codeascraft  incident-response  incidents  severity  root-cause  postmortems  outages  reliability  techops  tier-one-support 
april 2015 by jm
Cassandra remote code execution hole (CVE-2015-0225)
Ah now lads.
Under its default configuration, Cassandra binds an unauthenticated
JMX/RMI interface to all network interfaces. As RMI is an API for the
transport and remote execution of serialized Java, anyone with access
to this interface can execute arbitrary code as the running user.
cassandra  jmx  rmi  java  ops  security 
april 2015 by jm
How We Scale VividCortex's Backend Systems - High Scalability
Excellent post from Baron Schwartz about their large-scale, 1-second-granularity time series database storage system
time-series  tsd  storage  mysql  sql  baron-schwartz  ops  performance  scalability  scaling  go 
march 2015 by jm
The Four Month Bug: JVM statistics cause garbage collection pauses
Ugh, tying GC safepoints to disk I/O? bad idea:
The JVM by default exports statistics by mmap-ing a file in /tmp (hsperfdata). On Linux, modifying a mmap-ed file can block until disk I/O completes, which can be hundreds of milliseconds. Since the JVM modifies these statistics during garbage collection and safepoints, this causes pauses that are hundreds of milliseconds long. To reduce worst-case pause latencies, add the -XX:+PerfDisableSharedMem JVM flag to disable this feature. This will break tools that read this file, like jstat.
bugs  gc  java  jvm  disk  mmap  latency  ops  jstat 
march 2015 by jm
Transparent huge pages implicated in Redis OOM
A nasty real-world prod error scenario worsened by THPs:
jemalloc(3) extensively uses madvise(2) to notify the operating system that it's done with a range of memory which it had previously malloc'ed. The page size on this machine is 2MB because transparent huge pages are in use. As such, a lot of the memory which is being marked with madvise(..., MADV_DONTNEED) is within substantially smaller ranges than 2MB. This means that the operating system never was able to evict pages which had ranges marked as MADV_DONTNEED because the entire page has to be unneeded to allow a page to be reused. Despite initially looking like a leak, the operating system itself was unable to free memory because of madvise(2) and transparent huge pages. This led to sustained memory pressure on the machine and redis-server eventually getting OOM killed.
oom-killer  oom  linux  ops  thp  jemalloc  huge-pages  madvise  redis  memory 
march 2015 by jm
tcpcopy
"Tees" all TCP traffic from one server to another. "Widely used by companies in China"!
testing  benchmarking  performance  tcp  ip  tcpcopy  tee  china  regression-testing  stress-testing  ops 
march 2015 by jm
Heka
An open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as:

Loading and parsing log files from a file system.
Accepting statsd type metrics data for aggregation and forwarding to upstream time series data stores such as graphite or InfluxDB.
Launching external processes to gather operational data from the local system.
Performing real time analysis, graphing, and anomaly detection on any data flowing through the Heka pipeline.
Shipping data from one location to another via the use of an external transport (such as AMQP) or directly (via TCP).
Delivering processed data to one or more persistent data stores.

Via feylya on twitter. Looks potentially nifty
heka  mozilla  monitoring  metrics  via:feylya  ops  statsd  graphite  stream-processing 
march 2015 by jm
Large Hadron Migrator
The Large Hadron Migrator is a tool to perform live database migrations in a Rails app without locking.

The basic idea is to perform the migration online while the system is live, without locking the table. In contrast to OAK and the facebook tool, we only use a copy table and triggers. The Large Hadron is a test driven Ruby solution which can easily be dropped into an ActiveRecord or DataMapper migration. It presumes a single auto incremented numerical primary key called id as per the Rails convention. Unlike the twitter solution, it does not require the presence of an indexed updated_at column.
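The copy-table-plus-triggers idea can be sketched end to end with sqlite3 from the standard library (real LHM targets MySQL/ActiveRecord; the table and column names here are purely illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# 1. Create the copy table with the new schema (an extra column).
db.execute(
    "CREATE TABLE lhm_users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# 2. A trigger mirrors live writes into the copy table while we migrate.
db.execute("""CREATE TRIGGER lhm_ins AFTER INSERT ON users
              BEGIN
                INSERT INTO lhm_users (id, name) VALUES (NEW.id, NEW.name);
              END""")

# 3. Backfill existing rows (LHM does this in chunks; one chunk here).
db.execute(
    "INSERT OR IGNORE INTO lhm_users (id, name) SELECT id, name FROM users")

# A write arriving mid-migration is captured by the trigger, not lost.
db.execute("INSERT INTO users (name) VALUES ('carol')")

# 4. Swap: retire the old table and rename the copy into place.
db.execute("DROP TABLE users")
db.execute("ALTER TABLE lhm_users RENAME TO users")
rows = db.execute("SELECT name FROM users ORDER BY id").fetchall()
```

The point of the trigger is that the original table never needs a long table lock: writes continue throughout the backfill, and only the final rename is a brief metadata operation.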
migrations  database  sql  ops  mysql  rails  ruby  lhm  soundcloud  activerecord 
march 2015 by jm
uselessd
A project to reduce systemd to a base initd, process supervisor and transactional dependency system, while minimizing intrusiveness and isolationism. Basically, it’s systemd with the superfluous stuff cut out, a (relatively) coherent idea of what it wants to be, support for non-glibc platforms and an approach that aims to minimize complicated design. uselessd is still in its early stages and it is not recommended for regular use or system integration.

This may be the best option to evade the horrors of systemd.
init  linux  systemd  unix  ops  uselessd 
march 2015 by jm
kellabyte  kernel  key-distribution  key-rotation  keybox  keys  keywhiz  kill-9  knife  kubernetes  lambda  languages  laptops  latency  legacy  leveldb  lhm  libc  lifespan  limits  linden  linkedin  links  linode  linux  listen-backlog  live  load  load-balancers  load-balancing  load-testing  locking  logentries  logging  loggly  loose-coupling  lsb  lsof  lsx  luks  lxc  mac  machine-learning  macosx  madvise  mail  maintainance  mandos  manta  map-reduce  mapreduce  measurements  mechanical-sympathy  memory  mesos  metrics  mfa  microservices  microsoft  migration  migrations  mincore  mirroring  mit  ml  mmap  mongodb  monit  monitorama  monitoring  movies  mozilla  mtbf  multiplexing  mysql  nagios  namespaces  nannies  nas  natwest  nerve  netflix  netstat  netty  network  network-monitoring  network-partitions  networking  networks  new-relic  nginx  nix  nixos  nixpkgs  node.js  nosql  notification  notifications  npm  ntp  ntpd  nurse  obama  omega  omniti  oom  oom-killer  open-source  openjdk  operability  operations  ops  optimization  organisations  os  oss  osx  ouch  out-of-band  outage  outages  outbrain  outsourcing  overlayfs  owasp  packaging  packet-capture  packets  page-cache  pager-duty  pagerduty  pages  paging  papers  parse  partition  partitions  passenger  patterns  paxos  pbailis  pcp  pcp2graphite  pdf  peering  percona  performance  phusion  pie  pillar  pinball  pinterest  piops  pixar  pki  platform  platforms  post-mortems  postgres  postmortems  presentations  pricing  princess  prioritisation  procedures  process  processes  procfs  prod  production  profiling  programming  provisioning  proxies  proxy  proxying  pty  puppet  pv  python  qa  qdisc  questions  queueing  rabbitmq  race-conditions  rafe-colburn  raid  rails  rami-rosen  randomization  ranking  rant  rate-limiting  rbs  rc3  rdbms  rds  real-time  recovery  red-hat  reddit  redis  redshift  refactoring  reference  registry  regression-testing  reinvent  release  
releases  reliability  reliabilty  remediation  replicas  replication  resiliency  restarting  restoring  reverse-proxy  reversibility  reviews  rewrites  riak  riemann  risks  rkt  rm-rf  rmi  rocket  rollback  root-cause  root-causes  route53  routing  rspec  ruby  runbooks  runit  runjop  rvm  rwasa  s3  s3funnel  s3ql  safety  sanity-checks  scala  scalability  scale  scaling  scheduler  scheduling  schema  scripts  sdd  sdn  seagate  search  secrets  security  sensu  serf  serialization  server  servers  serverspec  service-discovery  service-metrics  service-registry  services  ses  sev1  severity  sharding  shippable  shodan  shopify  shorn-writes  silos  sjk  slashdot  sleep  slew  slides  smartstack  smoke-tests  smtp  snappy  snapshots  sns  soa  sockets  software  solaris  soundcloud  south-pole  space  spark  spdy  speculative-execution  split-brain  spot-instances  spotify  sql  square  sre  ssd  ssh  ssl  stack  stack-size  stackshare  staging  startup  statistics  stats  statsd  statsite  stephanie-dean  stepping  storage  storm  strace  stream-processing  streaming  stress-testing  strider  stripe  supervision  supervisord  support  survey  svctm  syadmin  synapse  sysadmin  sysadvent  sysdig  syslog  sysstat  system  system-testing  system-v  systemd  systems  tahoe-lafs  talks  tc  tcp  tcpcopy  tcpdump  tdd  teams  tech  tech-debt  techops  tee  telefonica  telemetry  testing  thp  threadpools  threads  throughput  thundering-herd  tier-one-support  tildeslash  time  time-machine  time-series  time-synchronization  timeouts  tips  tls  tools  top  tos  trace  tracer-requests  tracing  trading  traefik  training  transactional-updates  transparent-huge-pages  troubleshooting  tsd  tuning  turing-complete  twilio  twisted  twitter  two-factor-authentication  uat  ubuntu  ubuntu-core  ui  ulster-bank  ultradns  unicorn  unit-testing  unit-tests  unix  upgrades  upstart  uptime  uselessd  usenix  vagrant  vector  version-control  versioning  
via:aphyr  via:bill-dehora  via:chughes  via:codeslinger  via:dave-doran  via:dehora  via:eoinbrazil  via:fanf  via:feylya  via:filippo  via:jk  via:kragen  via:lusis  via:martharotter  via:nelson  via:pdolan  via:pixelbeat  virtualisation  virtualization  visualisation  vm  vms  voldemort  vpc  web  web-services  webmail  weighting  whats-my-ip  wiki  wipac  work  workflows  x86_64  xen  xooglers  yahoo  yammer  yelp  zfs  zipkin  zonify  zookeeper  zooko 