jm + kubernetes (13)

Capturing all the flags in BSidesSF CTF by pwning Kubernetes/Google Cloud
good exploration of the issues with running a CTF challenge (or any other security-sensitive infrastructure!) atop Kubernetes and a cloud platform like GCE
gce  google-cloud  kubernetes  security  docker  containers  gke  ctf  hacking  exploits 
8 weeks ago by jm
10 Most Common Reasons Kubernetes Deployments Fail
some real-world failure cases and how to fix them
kubernetes  docker  ops 
february 2017 by jm
CoreOS and Prometheus: Building monitoring for the next generation of cluster infrastructure
Ooh, this is a great plan. :applause:
Enabling GIFEE — Google Infrastructure for Everyone Else — is a primary mission at CoreOS, and open source is key to that goal. [....]

Prometheus was initially created to handle monitoring and alerting in modern microservice architectures. It steadily grew to fit the wider idea of cloud native infrastructure. Though it was not intentional in the original design, Prometheus and Kubernetes conveniently share the key concept of identifying entities by labels, making the semantics of monitoring Kubernetes clusters simple. As we discussed previously on this blog, Prometheus metrics formed the basis of our analysis of Kubernetes scheduler performance, and led directly to improvements in that code. Metrics are essential not just to keep systems running, but also to analyze and improve application behavior.

All things considered, Prometheus was an obvious choice for the next open source project CoreOS wanted to support and improve with internal developers committed to the code base.
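
(Not from the post, but a rough sketch of that shared label model: in Go, a Prometheus metric carries the same kind of arbitrary key/value labels that Kubernetes uses to identify pods. The metric name and label keys below are made up for illustration.)

    package main

    import (
    	"net/http"

    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // requestsTotal is keyed by the same kind of key/value labels
    // (namespace, pod) that Kubernetes itself uses to identify entities.
    var requestsTotal = prometheus.NewCounterVec(
    	prometheus.CounterOpts{
    		Name: "app_requests_total",
    		Help: "Requests handled, labelled like a Kubernetes object.",
    	},
    	[]string{"namespace", "pod"},
    )

    func main() {
    	prometheus.MustRegister(requestsTotal)

    	// Increment the counter for one (namespace, pod) label set; a
    	// query can then select series with PromQL label matchers,
    	// e.g. app_requests_total{namespace="prod"}.
    	requestsTotal.WithLabelValues("prod", "web-7d9f").Inc()

    	// Expose the metrics endpoint that Prometheus scrapes.
    	http.Handle("/metrics", promhttp.Handler())
    	http.ListenAndServe(":8080", nil)
    }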
monitoring  coreos  prometheus  metrics  clustering  ops  gifee  google  kubernetes 
may 2016 by jm
Counting with domain specific databases — The Smyte Blog — Medium
whoa, pretty heavily engineered scalable counting system with Kafka, RocksDB and Kubernetes
kafka  rocksdb  kubernetes  counting  databases  storage  ops 
april 2016 by jm
A Decade Of Container Control At Google
The big thing that can be gleaned from the latest paper out of Google on its container controllers is that the shift from bare metal to containers is a profound one, something that may not be obvious to everyone adopting containers as a better (and, we think, cheaper) way of doing server virtualization and driving server utilization higher. Everything becomes application-centric rather than machine-centric, which is the nirvana that IT shops have been searching for. The workload schedulers, cluster managers, and container controllers work together to get the right capacity to the application when it needs it, whether it is a latency-sensitive job or a batch job that has some slack in it. All that the site reliability engineers and developers care about is how the application is performing, and they can easily see that because all of the APIs and metrics collect data at the application level, not on a per-machine basis. Doing this means adopting containers, period. There is no bare metal at Google; let that be a lesson to HPC shops, other hyperscalers, and cloud builders that think they need to run in bare-metal mode.
google  containers  kubernetes  borg  bare-metal  ops 
april 2016 by jm
Why We Chose Kubernetes Over ECS
Three months ago, when we at nanit.com came to evaluate which Docker orchestration framework to use, we gave ECS first priority: we were already familiar with AWS services, and since our whole infrastructure was already there, it was the default choice. After testing the service for a while we felt it was not mature enough and was missing some key features we needed (more on that later), so we went on to test another orchestration framework: Kubernetes. We were glad to discover that Kubernetes is far more comprehensive and had almost all the features we required. For us, Kubernetes beat ECS on ECS's own home court: AWS.
kubernetes  ecs  docker  containers  aws  ec2  ops 
december 2015 by jm
Eric Brewer interview on Kubernetes
What is the relationship between Kubernetes, Borg and Omega (the two internal resource-orchestration systems Google has built)?

I would say, kind of by definition, there’s no shared code but there are shared people.

You can think of Kubernetes — especially some of the elements around pods and labels — as being lessons learned from Borg and Omega that are, frankly, significantly better in Kubernetes. There are things that are going to end up being the same as Borg — like the way we use IP addresses is very similar — but other things, like labels, are actually much better than what we did internally.

I would say that’s a lesson we learned the hard way.
google  architecture  kubernetes  docker  containers  borg  omega  deployment  ops 
may 2015 by jm
Intel speeds up etcd throughput using ADR Xeon-only hardware feature
To reduce the latency impact of storing to disk, Weaver's team looked to buffering as a means to absorb the writes and sync them to disk periodically, rather than syncing each entry. The tradeoff? They knew memory buffers would help, but smaller clusters could run into trouble if the buffering violated Raft's stable-storage requirement.

Instead, they turned to Intel's silicon architects about features available in the Xeon line. After describing the core problem, they found out it had been solved in other areas with ADR (Asynchronous DRAM Refresh). After some work to prove out Linux OS support for this use, they were confident they had a best-of-both-worlds angle. And it worked. As Weaver detailed in his CoreOS Fest discussion, the response time proved stable. On power loss, ADR can take a designated section of memory and persist it to disk, then restore it to the buffer when power returns. In short, ADR:

- makes small (<100MB) segments of memory "stable" enough for Raft log entries, with no need for battery-backed memory;
- can be orchestrated using Linux or Windows OS libraries;
- lets you define the target memory region and determine where to recover;
- can be exposed directly to libraries for runtimes like Golang;
- uses silicon features that are accessible on current Intel servers.
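
(A minimal sketch of the buffering tradeoff described above, using ordinary buffered writes rather than the ADR mechanism itself; the types and file name are made up. One fsync per batch amortizes the cost, at the price of a window of entries not yet on stable storage, which is exactly the gap ADR closes:)

    package main

    import "os"

    // logBuffer accumulates Raft log entries in memory and fsyncs them
    // periodically, instead of paying one fsync per appended entry.
    type logBuffer struct {
    	f       *os.File
    	pending [][]byte
    }

    func (b *logBuffer) append(entry []byte) {
    	// Fast path: buffer in memory only. Without something like ADR
    	// making this buffer survive power loss, entries in this window
    	// are what violates Raft's stable-storage requirement.
    	b.pending = append(b.pending, entry)
    }

    func (b *logBuffer) flush() error {
    	for _, e := range b.pending {
    		if _, err := b.f.Write(e); err != nil {
    			return err
    		}
    	}
    	b.pending = b.pending[:0]
    	// One fsync per batch amortizes the dominant latency cost.
    	return b.f.Sync()
    }

    func main() {
    	f, err := os.OpenFile("raft.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	b := &logBuffer{f: f}
    	b.append([]byte("entry-1\n"))
    	b.append([]byte("entry-2\n"))

    	// A real implementation would flush on a timer or when the
    	// buffer fills; here we flush once at the end.
    	if err := b.flush(); err != nil {
    		panic(err)
    	}
    }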
kubernetes  coreos  adr  performance  intel  raft  etcd  hardware  linux  persistence  disk  storage  xeon 
may 2015 by jm
Kubernetes compared to Borg
'Here are four Kubernetes features that came from our experiences with Borg.'
google  ops  kubernetes  borg  containers  docker  networking 
april 2015 by jm
