Venti: a new approach to archival storage
This paper describes a network storage system, called
Venti, intended for archival data. In this system, a
unique hash of a block’s contents acts as the block
identifier for read and write operations. This approach
enforces a write-once policy, preventing accidental or
malicious destruction of data. In addition, duplicate
copies of a block can be coalesced, reducing the
consumption of storage and simplifying the
implementation of clients. Venti is a building block for
constructing a variety of storage applications such as
logical backup, physical backup, and snapshot file
systems.
We have built a prototype of the system and present
some preliminary performance results. The system uses
magnetic disks as the storage technology, resulting in
an access time for archival data that is comparable to
non-archival data. The feasibility of the write-once
model for storage is demonstrated using data from over
a decade’s use of two Plan 9 file systems.
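The mechanism is small enough to sketch. A toy, in-memory version of the hash-addressed, write-once idea in Go (illustrative only, not Venti's actual network protocol; Venti uses SHA-1 hashes, called scores, as block addresses):

    package main

    import (
        "crypto/sha1"
        "fmt"
    )

    // Score is a block's content hash. Identical blocks hash to the
    // same score, so duplicates coalesce into a single stored copy.
    type Score [sha1.Size]byte

    type Store struct{ blocks map[Score][]byte }

    // Write stores a block under the hash of its contents and returns
    // the score. Rewriting identical bytes is a no-op, which is what
    // makes the store effectively write-once: a score, once issued,
    // always refers to the same data.
    func (s *Store) Write(data []byte) Score {
        sc := Score(sha1.Sum(data))
        if _, ok := s.blocks[sc]; !ok {
            s.blocks[sc] = append([]byte(nil), data...)
        }
        return sc
    }

    // Read returns the block named by a score.
    func (s *Store) Read(sc Score) ([]byte, error) {
        b, ok := s.blocks[sc]
        if !ok {
            return nil, fmt.Errorf("unknown score %x", sc)
        }
        return b, nil
    }

    func main() {
        st := &Store{blocks: map[Score][]byte{}}
        a := st.Write([]byte("hello"))
        b := st.Write([]byte("hello")) // duplicate block coalesces
        fmt.Println(a == b)            // true
    }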
os 
5 days ago
The 64-bit Standalone Plan 9 File Server
This paper is a revision of Thompson’s The Plan 9 File Server, and
describes the structure and the operation of the new 64-bit Plan 9 file
servers. Some specifics apply to the 32-bit Plan 9 file server Emelie,
whose code is also the basis for the user-level file server kfs.
In 2004, Collyer created a 64-bit version of Thompson’s 32-bit file
server, updating all file offsets, sizes and block numbers to 64 bits. In
addition, triple- and quadruple-indirect blocks were implemented. File
name components were extended from 27 to 55 bytes. This code is also
the basis for the user-level file server cwfs(4).
os 
5 days ago
Security in Plan 9
The security architecture of the Plan 9" operating system has
recently been redesigned to address some technical shortcomings. This
redesign provided an opportunity also to make the system more convenient to use securely. Plan 9 has thus improved in two ways not usually
seen together: it has become more secure and easier to use.
The central component of the new architecture is a per-user selfcontained agent called factotum. Factotum securely holds a copy of
the users keys and negotiates authentication protocols, on behalf of the
user, with secure services around the network. Concentrating security
code in a single program offers several advantages including: ease of
update or repair to broken security software and protocols; the ability to
run secure services at a lower privilege level; uniform management of
keys for all services; and an opportunity to provide single sign on, even
to unchanged legacy applications. Factotum has an unusual architecture: it is implemented as a Plan 9 file server.
os  security  sandbox 
5 days ago
The Organization of Networks in Plan 9
In a distributed system networks are of paramount importance. This
paper describes the implementation, design philosophy, and organization
of network support in Plan 9. Topics include network requirements for
distributed systems, our kernel implementation, network naming, user
interfaces, and performance. We also observe that much of this organization is relevant to current systems.
os  networking 
5 days ago
The Use of Name Spaces in Plan 9
Plan 9 is a distributed system built at the Computing Sciences
Research Center of AT&T Bell Laboratories (now Lucent Technologies, Bell
Labs) over the last few years. Its goal is to provide a production-quality
system for software development and general computation using heterogeneous hardware and minimal software. A Plan 9 system comprises CPU
and file servers in a central location connected together by fast networks.
Slower networks fan out to workstation-class machines that serve as user
terminals. Plan 9 argues that given a few carefully implemented abstractions it is possible to produce a small operating system that provides
support for the largest systems on a variety of architectures and networks. The foundations of the system are built on two ideas: a per-process name space and a simple message-oriented file system protocol.
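The per-process name space is assembled with ordinary commands rather than privileged system configuration. For example, bind(1) can graft one directory onto another for the current process group only (paths here are illustrative):

    % bind -a /usr/glenda/bin /bin    # union a private bin directory after /bin
    % bind /net.alt /net              # use an alternate network stack, just here

Because the change is confined to the process's own name space, two programs on the same machine can see entirely different file trees.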
os  sandbox 
5 days ago
Exploring container security: Using Cloud Security Command Center (and five partner tools) to detect and manage an attack | Google Cloud Blog
If you suspect that a container has been compromised, what do you do? In today’s blog post on container security, we’re focusing in on container runtime security—how to detect, respond to, and mitigate suspected threats for containers running in production. There’s no one way to respond to an attack, but there are best practices that you can follow, and in the event of a compromise, we want to make it easy for you to do the right thing.
security  containers  kubernetes 
23 days ago
Exploring container security: Running a tight ship with Kubernetes Engine 1.10 | Google Cloud Blog
It’s only been a few months since we last spoke about securing Google Kubernetes Engine, but a lot has changed since then. Our security team has been working to further harden Kubernetes Engine, so that you can deploy sensitive containerized applications on the platform with confidence. Today we’ll walk through the latest best practices for hardening your Kubernetes Engine cluster, with updates for new features in Kubernetes Engine versions 1.9 and 1.10.
security  containers  kubernetes 
23 days ago
Exploring container security: Protecting and defending your Kubernetes Engine network | Google Cloud Blog
Security is a crucial factor in deciding which public cloud provider to move to—if at all. Containers have become the standard way to deploy applications both in the public cloud and on-premises, and Google Kubernetes Engine implements several best practices to ensure the security and privacy of your deployments. In this post, we’ll answer some of your questions related to container networking security of Kubernetes Engine, and how it differs from traditional VM networking security.
security  containers  kubernetes 
23 days ago
Exploring container security: Digging into Grafeas container image metadata | Google Cloud Blog
The great thing about containers is how easy they are to create, modify and share. But that also raises the question of whether or not they're safe to deploy to production. One way to answer that is to track metadata about your container, for example, who worked on it, where it's stored, and whether it has any known vulnerabilities.
security  containers  kubernetes 
23 days ago
Exploring container security: Node and container operating systems | Google Cloud Blog
When deploying containers, your container images should be free of known vulnerabilities, and have a bare minimum of functionality. This reduces the attack surface, preventing bad actors from taking advantage of unnecessary openings in your infrastructure.
security  containers  kubernetes 
23 days ago
Exploring container security: Isolation at different layers of the Kubernetes stack | Google Cloud Blog
To conclude our blog series on container security, today’s post covers isolation, and when containers are appropriate for actually, well... containing. While containers bring great benefits to your development pipeline and provide some resource separation, they were not designed to provide a strong security boundary.
security  containers  kubernetes 
23 days ago
Exploring container security: An overview | Google Cloud Blog
Containers are increasingly being used to deploy applications, and with good reason, given their portability, simple scalability and lower management burden. However, the security of containerized applications is still not well understood. How does container security differ from that of traditional VMs? How can we use the features of container management platforms to improve security?
security  containers  kubernetes 
23 days ago
Best Practices for Building Containers  |  Architectures  |  Google Cloud
This article describes a set of best practices for building containers. These practices cover a wide range of goals, from shortening the build time, to creating smaller and more resilient images, with the aim of making containers easier to build (for example, with Cloud Build), and easier to run in Google Kubernetes Engine (GKE).
containers 
23 days ago
Best Practices for Operating Containers  |  Architectures  |  Google Cloud
This article describes a set of best practices for making containers easier to operate. These practices cover a wide range of topics, from security to monitoring and logging. Their aim is to make applications easier to run in Google Kubernetes Engine and in containers in general. Many of the practices discussed here were inspired by the twelve-factor methodology, which is a great resource for building cloud-native applications.
containers 
23 days ago
Open-sourcing gVisor, a sandboxed container runtime | Google Cloud Blog
Containers have revolutionized how we develop, package, and deploy applications. However, the system surface exposed to containers is broad enough that many security experts don't recommend them for running untrusted or potentially malicious applications.

A growing desire to run more heterogeneous and less trusted workloads has created a new interest in sandboxed containers—containers that help provide a secure isolation boundary between the host OS and the application running inside the container.

To that end, we’d like to introduce gVisor, a new kind of sandbox that helps provide secure isolation for containers, while being more lightweight than a virtual machine (VM). gVisor integrates with Docker and Kubernetes, making it simple and easy to run sandboxed containers in production environments.
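With the runsc runtime installed and registered with Docker, running a container under gVisor is a one-flag change (per the project's documentation):

    docker run --runtime=runsc hello-world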
security  sandbox  containers 
23 days ago
Understanding and Hardening Linux Containers
Operating System virtualization is an attractive feature for efficiency, speed and modern application deployment, amid questionable security. Recent advancements of
the Linux kernel have coalesced for simple yet powerful OS virtualization via Linux
Containers, as implemented by LXC, Docker, and CoreOS Rkt among others. Recent
container focused start-ups such as Docker have helped push containers into the
limelight. Linux containers offer native OS virtualization, segmented by kernel namespaces, limited through process cgroups and restricted through reduced root capabilities, Mandatory Access Control and user namespaces. This paper discusses these
container features, as well as exploring various security mechanisms. Also included is
an examination of attack surfaces, threats, and related hardening features in order to
properly evaluate container security. Finally, this paper contrasts different container
defaults and enumerates strong security recommendations to counter deployment
weaknesses, helping support and explain methods for building high-security Linux
containers. Are Linux containers the future or merely a fad or fantasy? This paper
attempts to answer that question.
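As a concrete taste of the primitives the paper surveys, the sketch below starts a shell in fresh UTS, PID, and mount namespaces using Go's clone-flag support (Linux only; needs root or a user namespace; real runtimes layer cgroups, capability drops, seccomp, and MAC policy on top):

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // New UTS, PID, and mount namespaces: the child gets its own
        // hostname, sees itself as PID 1, and has a private mount table.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS |
                syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }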
security  containers 
23 days ago
GoogleContainerTools/distroless: 🥑 Language focused docker images, minus the operating system.
"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.
google  containers 
23 days ago
Designing Distributed Systems with TLA+ • Hillel Wayne
Concurrency is hard. How do you test your system when it’s spread across three services and four languages? Unit testing and type systems only take us so far. At some point we need new tools.

Enter TLA+. TLA+ is a specification language that describes your system and the properties you want. This makes it a fantastic complement to testing: not only can you check your code, you can check your design, too! TLA+ is especially effective for testing concurrency problems, like stalling, race conditions, and dropped messages.

This talk will introduce the ideas behind TLA+ and how it works, with a focus on practical examples. We’ll also show how it caught complex bugs in our systems, as well as how you can start applying it to your own work.

(This is the exact same abstract as the last TLA+ talk, despite the two having just one slide in common. Writing abstracts is not my strong suit.)
methods  distributedsystems 
4 weeks ago
Genode - Genode Operating System Framework
The Genode OS Framework is a tool kit for building highly secure special-purpose operating systems. It scales from embedded systems with as little as 4 MB of memory to highly dynamic general-purpose workloads.

Genode is based on a recursive system structure. Each program runs in a dedicated sandbox and gets granted only those access rights and resources that are needed for its specific purpose. Programs can create and manage sub-sandboxes out of their own resources, thereby forming hierarchies where policies can be applied at each level. The framework provides mechanisms to let programs communicate with each other and trade their resources, but only in strictly-defined manners. Thanks to this rigid regime, the attack surface of security-critical functions can be reduced by orders of magnitude compared to contemporary operating systems.

The framework aligns the construction principles of L4 with Unix philosophy. In line with Unix philosophy, Genode is a collection of small building blocks, out of which sophisticated systems can be composed. But unlike Unix, those building blocks include not only applications but also all classical OS functionalities including kernels, device drivers, file systems, and protocol stacks.
os  hypervisor 
4 weeks ago
hafnium - Git at Google
Hafnium is a type-1 hypervisor, initially supporting aarch64 (64-bit ARMv8 CPUs), with a focus on security and isolation.
hypervisor 
4 weeks ago
Reasoning about Object Capabilities with Logical Relations and Effect Parametricity
Object capabilities are a technique for fine-grained
privilege separation in programming languages and systems,
with important applications in security. However, current formal characterisations do not fully capture capability-safety of
a programming language and are not sufficient for verifying
typical applications. Using state-of-the-art techniques from
programming languages research, we define a logical relation
for a core calculus of JavaScript that better characterises
capability-safety. The relation is powerful enough to reason
about typical capability patterns and supports evolvable invariants on shared data structures, capabilities with restricted
authority over them and isolated components with restricted
communication channels. We use a novel notion of effect
parametricity for deriving properties about effects. Our results
imply memory access bounds that have previously been used
to characterise capability-safety.
security  auth 
5 weeks ago
Log(Graph): A Near-Optimal High-Performance Graph Representation
Today’s graphs used in domains such as machine learning or
social network analysis may contain hundreds of billions of
edges. Yet, they are not necessarily stored efficiently, and standard
graph representations such as adjacency lists waste a
significant number of bits while graph compression schemes
such as WebGraph often require time-consuming decompression.
To address this, we propose Log(Graph): a graph representation
that combines high compression ratios with very
low-overhead decompression to enable cheaper and faster
graph processing. The key idea is to encode a graph so that
the parts of the representation approach or match the respective
storage lower bounds. We call our approach “graph
logarithmization” because these bounds are usually logarithmic.
Our high-performance Log(Graph) implementation
based on modern bitwise operations and state-of-the-art succinct
data structures achieves high compression ratios as well
as performance. For example, compared to the tuned Graph
Algorithm Processing Benchmark Suite (GAPBS), it reduces
graph sizes by 20-35% while matching GAPBS’ performance
or even delivering speedups due to reducing amounts of
transferred data. It approaches the compression ratio of the
established WebGraph compression library while enabling
speedups of up to more than 2×. Log(Graph) can improve
the design of various graph processing engines or libraries on
single NUMA nodes as well as distributed-memory systems.
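To make the logarithmization idea concrete: with n vertices, an adjacency entry needs only ceil(log2 n) bits rather than a 32- or 64-bit word. A toy fixed-width bit-packing sketch in Go (illustrative only; the paper's representations are considerably more refined):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // packAdjacency stores each vertex ID in ceil(log2(n)) bits.
    func packAdjacency(neighbors []uint64, n uint64) ([]uint64, uint) {
        w := uint(bits.Len64(n - 1)) // bits per vertex ID
        out := make([]uint64, (uint(len(neighbors))*w+63)/64)
        for i, v := range neighbors {
            bit := uint(i) * w
            out[bit/64] |= v << (bit % 64)
            if bit%64+w > 64 { // entry straddles a word boundary
                out[bit/64+1] |= v >> (64 - bit%64)
            }
        }
        return out, w
    }

    // get extracts the i-th packed entry.
    func get(packed []uint64, w uint, i int) uint64 {
        bit := uint(i) * w
        v := packed[bit/64] >> (bit % 64)
        if bit%64+w > 64 {
            v |= packed[bit/64+1] << (64 - bit%64)
        }
        return v & (1<<w - 1)
    }

    func main() {
        // 1,000,000 vertices: 20 bits per ID instead of 64.
        packed, w := packAdjacency([]uint64{3, 999999, 42}, 1000000)
        fmt.Println(w, get(packed, w, 1)) // 20 999999
    }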
algorithms  databases  graphs  scale  papers 
september 2018
BeyondCorp 6 - Building a Healthy Fleet
Any security capability is inherently only as secure as the other systems
it trusts. The BeyondCorp project helped Google clearly define
and make access decisions around the platforms we trust, shifting
our security strategy from protecting services to protecting trusted platforms.
Previous BeyondCorp articles discussed the tooling Google uses to
confidently ascertain the provenance of a device, but we have not yet covered
the mechanics behind how we trust these devices.
enterprise  security  scale 
september 2018
Scaling Backend Authentication at Facebook
Secure authentication and authorization within Facebook’s infrastructure play important
roles in protecting people using Facebook’s services. Enforcing security while maintaining a
flexible and performant infrastructure can be challenging at Facebook’s scale, especially in the
presence of varying layers of trust among our servers. Providing authentication and encryption
on a per-connection basis is certainly necessary, but also insufficient for securing more complex
flows involving multiple services or intermediaries at lower levels of trust.
To handle these more complicated scenarios, we have developed two token-based mechanisms
for authentication. The first type is based on certificates and allows for flexible verification due
to its public-key nature. The second type, known as “crypto auth tokens”, is symmetric-key
based, and hence more restrictive, but also much more scalable to a high volume of requests.
Crypto auth tokens rely on pseudorandom functions to generate independently-distributed keys
for distinct identities.
Finally, we provide (mock) examples which illustrate how both of our token primitives can be
used to authenticate real-world flows within our infrastructure, and how a token-based approach
to authentication can be used to handle security more broadly in other infrastructures which
have strict performance requirements and where relying on TLS alone is not enough.
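A minimal sketch of the PRF-based derivation idea in Go, using HMAC-SHA256 as the PRF (names and framing are illustrative, not Facebook's wire format):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "fmt"
    )

    // deriveKey derives an independent per-identity key from a master
    // secret via a PRF, so a service holding only its own derived key
    // cannot forge tokens for any other identity.
    func deriveKey(master []byte, identity string) []byte {
        m := hmac.New(sha256.New, master)
        m.Write([]byte(identity))
        return m.Sum(nil)
    }

    // mintToken authenticates a payload under an identity's derived key.
    func mintToken(master []byte, identity string, payload []byte) []byte {
        m := hmac.New(sha256.New, deriveKey(master, identity))
        m.Write(payload)
        return m.Sum(nil)
    }

    func main() {
        master := []byte("demo-master-secret")
        tok := mintToken(master, "service/web", []byte("request-context"))
        fmt.Printf("%x\n", tok[:8])
    }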
facebook  security  auth 
september 2018
Sincronia: Near-Optimal Network Design for Coflows
We present Sincronia, a near-optimal network design for coflows that can be implemented on top of any transport layer (for flows) that supports priority scheduling. Sincronia achieves this using a key technical result --- we show that given a "right" ordering of coflows, any per-flow rate allocation mechanism achieves average coflow completion time within 4X of the optimal as long as (co)flows are prioritized with respect to the ordering.

Sincronia uses a simple greedy mechanism to periodically order all unfinished coflows; each host sets priorities for its flows using corresponding coflow order and offloads the flow scheduling and rate allocation to the underlying priority-enabled transport layer. We evaluate Sincronia over a real testbed comprising 16 servers and commodity switches, and using simulations across a variety of workloads. Evaluation results suggest that Sincronia not only admits a practical, near-optimal design but also improves upon state-of-the-art network designs for coflows (sometimes by as much as 8X).
networking  papers  datacenter 
august 2018
B4 and After: Managing Hierarchy, Partitioning, and Asymmetry for Availability and Scale in Google’s Software-Defined WAN
Private WANs are increasingly important to the operation of
enterprises, telecoms, and cloud providers. For example, B4,
Google’s private software-defined WAN, is larger and growing
faster than our connectivity to the public Internet. In this
paper, we present the five-year evolution of B4. We describe
the techniques we employed to incrementally move from
offering best-effort content-copy services to carrier-grade
availability, while concurrently scaling B4 to accommodate
100x more traffic. Our key challenge is balancing the tension
introduced by hierarchy required for scalability, the partitioning
required for availability, and the capacity asymmetry
inherent to the construction and operation of any large-scale
network. We discuss our approach to managing this tension:
i) we design a custom hierarchical network topology for both
horizontal and vertical software scaling, ii) we manage inherent
capacity asymmetry in hierarchical topologies using
a novel traffic engineering algorithm without packet encapsulation,
and iii) we re-architect switch forwarding rules
via two-stage matching/hashing to deal with asymmetric
network failures at scale.
scale  sdn  networking  papers 
august 2018
Causal Inference Book | Miguel Hernan | Harvard T.H. Chan School of Public Health
My colleague Jamie Robins and I are working on a book that provides a cohesive presentation of concepts of, and methods for, causal inference. Much of this material is currently scattered across journals in several disciplines or confined to technical articles. We expect that the book will be of interest to anyone interested in causal inference, e.g., epidemiologists, statisticians, psychologists, economists, sociologists, political scientists, computer scientists… The book is divided into three parts of increasing difficulty: causal inference without models, causal inference with models, and causal inference from complex longitudinal data.
statistics 
july 2018
How to scale a distributed system - Henry Robinson
What is this, and who’s it for?
Lessons learned from the trenches building distributed systems for 8+ years at Cloudera and in
open source communities.
scale  distributedsystems 
july 2018
An Illustrated Proof of the CAP Theorem
The CAP Theorem is a fundamental theorem in distributed systems that states that any distributed system can have at most two of the following three properties:

Consistency
Availability
Partition tolerance
This guide will summarize Gilbert and Lynch's specification and proof of the CAP Theorem with pictures!
distributedsystems 
july 2018
What is a zero-knowledge proof? | Zero-Knowledge Proofs
What are they, how do they work, and are they fast yet?
algorithms  crypto 
july 2018
Survivable Key Compromise in Software Update Systems
Today’s software update systems have little or no defense
against key compromise. As a result, key compromises have
put millions of software update clients at risk. Here we identify
three classes of information whose authenticity and integrity
are critical for secure software updates. Analyzing
existing software update systems with our framework, we
find their ability to communicate this information securely
in the event of a key compromise to be weak or nonexistent.
We also find that the security problems in current software
update systems are compounded by inadequate trust revocation
mechanisms. We identify core security principles that
allow software update systems to survive key compromise.
Using these ideas, we design and implement TUF, a software
update framework that increases resilience to key compromise.
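One such principle is requiring a threshold of role keys, so no single compromised key can sign valid metadata on its own. A hedged sketch of threshold checking with Ed25519 in Go (TUF's real metadata is signed JSON with role delegations; this only shows the counting logic):

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    // thresholdValid reports whether at least `threshold` of the trusted
    // role keys produced a valid signature over the metadata.
    func thresholdValid(keys []ed25519.PublicKey, sigs [][]byte, metadata []byte, threshold int) bool {
        valid := 0
        for i, key := range keys {
            if i < len(sigs) && ed25519.Verify(key, metadata, sigs[i]) {
                valid++
            }
        }
        return valid >= threshold
    }

    func main() {
        metadata := []byte(`{"targets": "..."}`)
        var keys []ed25519.PublicKey
        var sigs [][]byte
        for i := 0; i < 3; i++ {
            pub, priv, _ := ed25519.GenerateKey(rand.Reader)
            keys = append(keys, pub)
            if i < 2 { // only two of the three keys sign
                sigs = append(sigs, ed25519.Sign(priv, metadata))
            } else {
                sigs = append(sigs, nil)
            }
        }
        fmt.Println(thresholdValid(keys, sigs, metadata, 2)) // true
    }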
security  softwareengineering  crypto  papers  deployment 
july 2018
DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning
Anomaly detection is a critical step towards building a secure and
trustworthy system. The primary purpose of a system log is to
record system states and significant events at various critical points
to help debug system failures and perform root cause analysis. Such
log data is universally available in nearly all computer systems.
Log data is an important and valuable resource for understanding
system status and performance issues; therefore, the various system
logs are naturally an excellent source of information for online
monitoring and anomaly detection. We propose DeepLog, a deep
neural network model utilizing Long Short-Term Memory (LSTM),
to model a system log as a natural language sequence. This allows
DeepLog to automatically learn log patterns from normal execution,
and detect anomalies when log patterns deviate from the model
trained from log data under normal execution. In addition, we
demonstrate how to incrementally update the DeepLog model in
an online fashion so that it can adapt to new log patterns over time.
Furthermore, DeepLog constructs workflows from the underlying
system log so that once an anomaly is detected, users can diagnose
the detected anomaly and perform root cause analysis effectively.
Extensive experimental evaluations over large log data have shown
that DeepLog has outperformed other existing log-based anomaly
detection methods based on traditional data mining methodologies.
machinelearning  deeplearning  security  papers 
july 2018
A Calculus for Access Control in Distributed Systems
We study some of the concepts, protocols, and algorithms for access control
in distributed systems, from a logical perspective. We account for how a
principal may come to believe that another principal is making a request,
either on his own or on someone else's behalf. We also provide a logical
language for access control lists, and theories for deciding whether requests
should be granted.
auth  crypto  security 
july 2018
Authentication in Distributed Systems: Theory and Practice
We describe a theory of authentication and a system that implements it. Our theory is based on
the notion of principal and a ‘speaks for’ relation between principals. A simple principal either
has a name or is a communication channel; a compound principal can express an adopted role or
delegated authority. The theory shows how to reason about a principal’s authority by deducing
the other principals that it can speak for; authenticating a channel is one important application.
We use the theory to explain many existing and proposed security mechanisms. In particular, we
describe the system we have built. It passes principals efficiently as arguments or results of remote
procedure calls, and it handles public and shared key encryption, name lookup in a large
name space, groups of principals, program loading, delegation, access control, and revocation.
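The heart of the theory is the "speaks for" relation between principals, written B ⇒ A for "B speaks for A", together with a hand-off rule by which authority is delegated; stated informally:

    if A says (B ⇒ A)  then  B ⇒ A

That is, a principal can delegate simply by saying that another speaks for it, after which statements by B are also attributed to A.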
auth  crypto  security 
july 2018
ACLs don’t
The ACL model is unable to make correct access decisions for interactions involving more than
two principals, since required information is not retained across message sends. Though this
deficiency has long been documented in the published literature, it is not widely understood. This
logic error in the ACL model is exploited by both the clickjacking and Cross-Site Request
Forgery attacks that affect many Web applications.
auth  crypto  security 
july 2018
Access Control (Capabilities)
Access control is central to computer security. Traditionally, we wish to restrict
the user to exactly what he should be able to do, no more and no less.
You might think that this only applies to legitimate users: where do attackers
fit into this worldview? Of course, an attacker is a user whose access should be
limited just like any other. Increasingly, of course, computers expose services
that are available to anyone – in other words, anyone can be a legitimate user.
As well as users there are also programs we would like to control. For
example, the program that keeps the clock correctly set on my machine should
be allowed to set the clock and talk to other time-keeping programs on the
Internet, and probably nothing else.
Increasingly we are moving towards an environment where users choose what
is installed on their machines, where their trust in what is installed is highly
variable and where “installation” of software is an increasingly fluid concept,
particularly in the context of the Web, where merely viewing a page can cause
code to run.
In this paper I explore an alternative to the traditional mechanisms of roles
and access control lists. Although I focus on the use case of web pages, mashups
and gadgets, the technology is applicable to all access control.
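In capability terms, authority travels with an unforgeable reference rather than being checked against an identity at the point of use. A minimal Go rendering of the clock example above, with the capability modeled as a value you must have been handed:

    package main

    import (
        "fmt"
        "time"
    )

    // SetClock is a capability: holding the value is the authority.
    type SetClock func(t time.Time)

    // newClock gives the read side to everyone and returns a set
    // capability that the creator hands only to the time daemon.
    func newClock() (read func() time.Time, set SetClock) {
        now := time.Now()
        read = func() time.Time { return now }
        set = func(t time.Time) { now = t }
        return
    }

    func main() {
        read, set := newClock()
        // Only code explicitly passed `set` can change the clock;
        // code given only `read` cannot. No ACL is consulted.
        set(time.Unix(0, 0))
        fmt.Println(read())
    }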
auth  crypto  security 
july 2018
The Web SSO Standard OpenID Connect: In-Depth Formal Security Analysis and Security Guidelines
Web-based single sign-on (SSO) services such as
Google Sign-In and Log In with Paypal are based on the OpenID
Connect protocol. This protocol enables so-called relying parties
to delegate user authentication to so-called identity providers.
OpenID Connect is one of the newest and most widely deployed
single sign-on protocols on the web. Despite its importance, it has
not received much attention from security researchers so far, and
in particular, has not undergone any rigorous security analysis.
In this paper, we carry out the first in-depth security analysis
of OpenID Connect. To this end, we use a comprehensive generic
model of the web to develop a detailed formal model of OpenID
Connect. Based on this model, we then precisely formalize and
prove central security properties for OpenID Connect, including
authentication, authorization, and session integrity properties.
In our modeling of OpenID Connect, we employ security
measures in order to avoid attacks on OpenID Connect that
have been discovered previously and new attack variants that we
document for the first time in this paper. Based on these security
measures, we propose security guidelines for implementors of
OpenID Connect. Our formal analysis demonstrates that these
guidelines are in fact effective and sufficient.
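For orientation, the flow being modeled starts with the relying party redirecting the browser to the identity provider's authorization endpoint; a typical authorization code flow request looks like this (endpoint and values are illustrative):

    https://idp.example.com/authorize?response_type=code
        &client_id=rp-client-id
        &redirect_uri=https%3A%2F%2Frp.example.com%2Fcallback
        &scope=openid%20profile
        &state=af0ifjsldkj
        &nonce=n-0S6_WzA2Mj

The state and nonce values are exactly the kinds of session-integrity anchors whose correct handling the paper's model checks.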
security  auth  crypto 
july 2018
Cryptographic Security of Macaroon Authorization Credentials
Macaroons, recently introduced by Birgisson et al. [BPUE+14], are authorization credentials that
provide support for controlled sharing in decentralized systems. Macaroons are similar to cookies in that
they are bearer credentials, but unlike cookies, macaroons include caveats that attenuate and contextually
confine when, where, by whom, and for what purpose authorization should be granted.
In this work, we formally study the cryptographic security of macaroons. We define macaroon schemes,
introduce corresponding security definitions and provide several constructions. In particular, the MAC-based
and certificate-based constructions outlined in [BPUE+14] can be seen as instantiations of our
definitions. We also present a new construction that is privately-verifiable (similar to the MAC-based
construction) but where the verifying party does not learn the intermediate keys of the macaroon, a problem
already observed in [BPUE+14].
We also formalize the notion of a protocol for “discharging” third-party caveats and present a security
definition for such a protocol. The encryption-based protocol outlined by Birgisson et al. [BPUE+14] can
be seen as an instantiation of our definition, and we also present a new signature-based construction.
Finally, we formally prove the security of all constructions in the given security models.
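A minimal sketch of the MAC-based construction under study: the macaroon's signature is an HMAC chain, and appending a caveat re-MACs under the previous signature, so holders can add caveats but never remove them (Go; caveat predicate checking is omitted):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "fmt"
    )

    func mac(key, msg []byte) []byte {
        m := hmac.New(sha256.New, key)
        m.Write(msg)
        return m.Sum(nil)
    }

    type Macaroon struct {
        ID      []byte
        Caveats [][]byte
        Sig     []byte
    }

    // Mint creates a macaroon whose signature chains from the root key.
    func Mint(rootKey, id []byte) Macaroon {
        return Macaroon{ID: id, Sig: mac(rootKey, id)}
    }

    // AddCaveat attenuates the macaroon: the new signature is a MAC of
    // the caveat under the old signature, so authority can only narrow.
    func (m Macaroon) AddCaveat(caveat []byte) Macaroon {
        m.Caveats = append(m.Caveats, caveat)
        m.Sig = mac(m.Sig, caveat)
        return m
    }

    // Verify recomputes the chain; only the minting service, which
    // knows rootKey, can do this.
    func Verify(rootKey []byte, m Macaroon) bool {
        sig := mac(rootKey, m.ID)
        for _, c := range m.Caveats {
            sig = mac(sig, c)
        }
        return hmac.Equal(sig, m.Sig)
    }

    func main() {
        root := []byte("root-key")
        m := Mint(root, []byte("user = alice")).AddCaveat([]byte("expires < 2018-07-01"))
        fmt.Println(Verify(root, m)) // true
    }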
crypto  auth 
june 2018
For Good Measure: Remember the Recall
Sherlock’s statement is most often quoted to imply that uncommon
scenarios can all be explained away by reason and logic. This is missing
the point. The quote’s power is in the elimination of the impossible
before engaging in such reasoning. The present authors seek to expose
a similar misapplication of methodology as it exists throughout information
security and offer a framework by which to elevate the common Watson.
organization  security 
june 2018
PROCHLO: Strong Privacy for Analytics in the Crowd
The large-scale monitoring of computer users’ software
activities has become commonplace, e.g., for application
telemetry, error reporting, or demographic profiling. This
paper describes a principled systems architecture—Encode,
Shuffle, Analyze (ESA)—for performing such monitoring
with high utility while also protecting user privacy. The ESA
design, and its PROCHLO implementation, are informed by
our practical experiences with an existing, large deployment
of privacy-preserving software monitoring.
With ESA, the privacy of monitored users’ data is guaranteed
by its processing in a three-step pipeline. First, the data
is encoded to control scope, granularity, and randomness.
Second, the encoded data is collected in batches subject to
a randomized threshold, and blindly shuffled, to break linkability
and to ensure that individual data items get “lost in the
crowd” of the batch. Third, the anonymous, shuffled data is
analyzed by a specific analysis engine that further prevents
statistical inference attacks on analysis results.
ESA extends existing best-practice methods for sensitive-data
analytics, by using cryptography and statistical techniques
to make explicit how data is elided and reduced in
precision, how only common-enough, anonymous data is analyzed,
and how this is done for only specific, permitted purposes.
As a result, ESA remains compatible with the established
workflows of traditional database analysis.
Strong privacy guarantees, including differential privacy,
can be established at each processing step to defend
against malice or compromise at one or more of those steps.
PROCHLO develops new techniques to harden those steps,
including the Stash Shuffle, a novel scalable and efficient
oblivious-shuffling algorithm based on Intel’s SGX, and new
applications of cryptographic secret sharing and blinding.
We describe ESA and PROCHLO, as well as experiments
that validate their ability to balance utility and privacy.
privacy  machinelearning  scale  google 
june 2018
Ubiq: A Scalable and Fault-tolerant Log Processing Infrastructure
Most of today’s Internet applications generate vast amounts
of data (typically, in the form of event logs) that needs to be processed
and analyzed for detailed reporting, enhancing user experience and increasing
monetization. In this paper, we describe the architecture of
Ubiq, a geographically distributed framework for processing continuously
growing log files in real time with high scalability, high availability and
low latency. The Ubiq framework fully tolerates infrastructure degradation
and data center-level outages without any manual intervention. It
also guarantees exactly-once semantics for application pipelines to process
logs as a collection of multiple events. Ubiq has been in production
for Google’s advertising system for many years and has served as a critical
log processing framework for several dozen pipelines. Our production
deployment demonstrates linear scalability with machine resources, extremely
high availability even with underlying infrastructure failures, and
an end-to-end latency of under a minute.
scale  infrastructure  distributedsystems  google 
june 2018
A Comprehensive Formal Security Analysis of OAuth 2.0
The OAuth 2.0 protocol is one of the most widely deployed authorization/single sign-on (SSO) protocols
and also serves as the foundation for the new SSO standard OpenID Connect. Despite the popularity
of OAuth, so far analysis efforts were mostly targeted at finding bugs in specific implementations and
were based on formal models which abstract from many web features or did not provide a formal treatment
at all.
In this paper, we carry out the first extensive formal analysis of the OAuth 2.0 standard in an expressive
web model. Our analysis aims at establishing strong authorization, authentication, and session
integrity guarantees, for which we provide formal definitions. In our formal analysis, all four OAuth
grant types (authorization code grant, implicit grant, resource owner password credentials grant, and
the client credentials grant) are covered. They may even run simultaneously in the same and different
relying parties and identity providers, where malicious relying parties, identity providers, and browsers
are considered as well. Our modeling and analysis of the OAuth 2.0 standard assumes that security
recommendations and best practices are followed in order to avoid obvious and known attacks.
When proving the security of OAuth in our model, we discovered four attacks which break the security
of OAuth. The vulnerabilities can be exploited in practice and are present also in OpenID Connect.
We propose fixes for the identified vulnerabilities, and then, for the first time, actually prove the
security of OAuth in an expressive web model. In particular, we show that the fixed version of OAuth
(with security recommendations and best practices in place) provides the authorization, authentication,
and session integrity properties we specify.
security  auth 
june 2018
Cross Origin Infoleaks
Browsers do their best to enforce a hard security boundary on an origin-by-origin basis. To vastly
oversimplify, applications hosted at distinct origins must not be able to read each other's data or
take action on each other’s behalf in the absence of explicit cooperation. Generally speaking,
browsers have done a reasonably good job at this; bugs crop up from time to time, but they're
well-understood to be bugs by browser vendors and developers, and they're addressed promptly.
The web platform, however, is designed to encourage both cross-origin communication and
inclusion. These design decisions weaken the borders that browsers place around origins, creating
opportunities for side-channel attacks (pixel perfect, resource timing, etc.) and server-side
confusion about the provenance of requests (CSRF, cross-site search). Spectre and related attacks
based on speculative execution make the problem worse by allowing attackers to read more
memory than they're supposed to, which may contain sensitive cross-origin responses fetched by
documents in the same process. Spectre is a powerful attack technique, but it should be seen as a
(large) iterative improvement over the platform's existing side-channels.
This document reviews the known classes of cross-origin information leakage, and uses this
categorization to evaluate some of the mitigations that have recently been proposed (CORB,
From-Origin, Sec-Metadata / Sec-Site, SameSite cookies and Cross-Origin-Isolate). We attempt to
survey their applicability to each class of attack, and to evaluate developers' ability to deploy them
properly in real-world applications. Ideally, we'll be able to settle on mitigation techniques which
are both widely deployable, and broadly scoped.
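As a flavor of these mitigations, marking a cookie SameSite keeps it off cross-site subresource requests, blunting CSRF and several cross-site search vectors (header sketch):

    Set-Cookie: SID=31d4d96e407aad42; Secure; HttpOnly; SameSite=Lax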
security  web 
may 2018
Andromeda: Performance, Isolation, and Velocity at Scale in Cloud Network Virtualization
This paper presents our design and experience with Andromeda,
Google Cloud Platform’s network virtualization
stack. Our production deployment poses several challenging
requirements, including performance isolation among
customer virtual networks, scalability, rapid provisioning
of large numbers of virtual hosts, bandwidth and latency
largely indistinguishable from the underlying hardware,
and high feature velocity combined with high availability.
Andromeda is designed around a flexible hierarchy of
flow processing paths. Flows are mapped to a programming
path dynamically based on feature and performance
requirements. We introduce the Hoverboard programming
model, which uses gateways for the long tail of low bandwidth
flows, and enables the control plane to program
network connectivity for tens of thousands of VMs in
seconds. The on-host dataplane is based around a high-performance
OS bypass software packet processing path.
CPU-intensive per packet operations with higher latency
targets are executed on coprocessor threads. This architecture
allows Andromeda to decouple feature growth from
fast path performance, as many features can be implemented
solely on the coprocessor path. We demonstrate
that the Andromeda datapath achieves performance that is
competitive with hardware while maintaining the flexibility
and velocity of a software-based architecture.
google  networking  sdn  papers 
may 2018
PREDATOR: Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration
Miscreants register thousands of new domains every day to launch
Internet-scale attacks, such as spam, phishing, and drive-by downloads.
Quickly and accurately determining a domain’s reputation
(association with malicious activity) provides a powerful tool for mitigating
threats and protecting users. Yet, existing domain reputation
systems work by observing domain use (e.g., lookup patterns, content
hosted)—often too late to prevent miscreants from reaping benefits of
the attacks that they launch.
As a complement to these systems, we explore the extent to which
features evident at domain registration indicate a domain’s subsequent
use for malicious activity. We develop PREDATOR, an approach that
uses only time-of-registration features to establish domain reputation.
We base its design on the intuition that miscreants need to obtain
many domains to ensure profitability and attack agility, leading to
abnormal registration behaviors (e.g., burst registrations, textually
similar names). We evaluate PREDATOR using registration logs of
second-level .com and .net domains over five months. PREDATOR
achieves a 70% detection rate with a false positive rate of 0.35%, thus
making it an effective—and early—first line of defense against the
misuse of DNS domains. It predicts malicious domains when they
are registered, which is typically days or weeks earlier than existing
DNS blacklists.
phishing  security  papers 
april 2018
Fuchsia is not Linux
This document is a collection of articles describing the Fuchsia operating system, organized around particular subsystems. Sections will be populated over time.
os 
april 2018
Systematic Generation of Fast Elliptic Curve Cryptography Implementations
Widely used implementations of cryptographic primitives
employ number-theoretic optimizations specific to large
prime numbers used as moduli of arithmetic. These optimizations
have been applied manually by a handful of experts,
using informal rules of thumb. We present the first
automatic compiler that applies these optimizations, starting
from straightforward modular-arithmetic-based algorithms
and producing code around 5X faster than with off-the-shelf
arbitrary-precision integer libraries for C. Furthermore, our
compiler is implemented in the Coq proof assistant; it produces
not just C-level code but also proofs of functional
correctness. We evaluate the compiler on several key primitives
from elliptic curve cryptography.
crypto  papers 
april 2018
Beyond Corp 1 - A New Approach to Enterprise Security
Virtually every company today uses firewalls to enforce perimeter
security. However, this security model is problematic because, when
that perimeter is breached, an attacker has relatively easy access to a
company’s privileged intranet. As companies adopt mobile and cloud technologies,
the perimeter is becoming increasingly difficult to enforce. Google
is taking a different approach to network security. We are removing the
requirement for a privileged intranet and moving our corporate applications
to the Internet.
security  enterprise  google  beyondcorp 
april 2018
Beyond Corp 2 - Design to Deployment
The goal of Google’s BeyondCorp initiative is to improve our security
with regard to how employees and devices access internal applications.
Unlike the conventional perimeter security model, BeyondCorp
doesn’t gate access to services and tools based on a user’s physical location
or the originating network; instead, access policies are based on information
about a device, its state, and its associated user. BeyondCorp considers both
internal networks and external networks to be completely untrusted, and
gates access to applications by dynamically asserting and enforcing levels, or
“tiers,” of access.
We present an overview of how Google transitioned from traditional security infrastructure
to the BeyondCorp model and the challenges we faced and the lessons we learned in the process.
For an architectural discussion of BeyondCorp, see [1].
google  security  enterprise  beyondcorp 
april 2018
Canary Analysis Service - ACM Queue
In 1913, Scottish physiologist John Scott Haldane proposed the idea of bringing a caged canary into a mine to detect dangerous gases. More than 100 years later, Haldane's canary-in-the-coal-mine approach is also applied in software testing.

In this article, the term canarying refers to a partial and time-limited deployment of a change in a service, followed by an evaluation of whether the service change is safe. The production change process may then roll forward, roll back, alert a human, or do something else. Effective canarying involves many decisions—for example, how to deploy the partial service change or choose meaningful metrics—and deserves a separate discussion.

Google has deployed a shared centralized service called CAS (Canary Analysis Service) that offers automatic (and often autoconfigured) analysis of key metrics during a production change. CAS is used to analyze new versions of binaries, configuration changes, data-set changes, and other production changes. CAS evaluates hundreds of thousands of production changes every day at Google.
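A deliberately naive illustration of the core evaluation step (not CAS's actual statistics): compare a metric from the canary population against the control population and decide pass/fail against a tolerance.

    package main

    import "fmt"

    func mean(xs []float64) float64 {
        var s float64
        for _, x := range xs {
            s += x
        }
        return s / float64(len(xs))
    }

    // canaryPasses: the canary's mean error rate may exceed the
    // control's by at most tolerance. CAS automates the hard parts:
    // choosing metrics, populations, and thresholds.
    func canaryPasses(control, canary []float64, tolerance float64) bool {
        return mean(canary)-mean(control) <= tolerance
    }

    func main() {
        control := []float64{0.010, 0.012, 0.011}
        canary := []float64{0.013, 0.014}
        fmt.Println(canaryPasses(control, canary, 0.005)) // true: roll forward
    }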
scale  deployment 
march 2018
wg-serverless/whitepaper at master · cncf/wg-serverless
CNCF defines 'serverless' as including functions as a service (FaaS) like Lambda and backend as a service (BaaS) like BigQuery.
cloud  deployment 
february 2018
TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection
Fuzz testing has proven successful in finding
security vulnerabilities in large programs. However, traditional
fuzz testing tools have a well-known common drawback: they
are ineffective if most generated malformed inputs are rejected
in the early stage of program running, especially when target
programs employ checksum mechanisms to verify the integrity
of inputs. In this paper, we present TaintScope, an automatic
fuzzing system using dynamic taint analysis and symbolic
execution techniques, to tackle the above problem. TaintScope
has several novel contributions: 1) TaintScope is the first
checksum-aware fuzzing tool to the best of our knowledge. It
can identify checksum fields in input instances, accurately locate
checksum-based integrity checks by using branch profiling
techniques, and bypass such checks via control flow alteration.
2) TaintScope is a directed fuzzing tool working at x86 binary
level (on both Linux and Windows). Based on fine-grained
dynamic taint tracing, TaintScope identifies which bytes in a
well-formed input are used in security-sensitive operations (e.g.,
invoking system/library calls) and then focuses on modifying
such bytes. Thus, generated inputs are more likely to trigger
potential vulnerabilities. 3) TaintScope is fully automatic, from
detecting checksums and directed fuzzing to repairing crashed
samples. It can fix checksum values in generated inputs using
combined concrete and symbolic execution techniques.
We evaluate TaintScope on a number of large real-world
applications. Experimental results show that TaintScope can
accurately locate the checksum checks in programs and dramatically
improve the effectiveness of fuzz testing. TaintScope
has already found 27 previously unknown vulnerabilities in
several widely used applications, including Adobe Acrobat,
Google Picasa, Microsoft Paint, and ImageMagick. Most of
these severe vulnerabilities have been confirmed by Secunia
and oCERT, and assigned CVE identifiers (such as CVE-2009-1882,
CVE-2009-2688). Corresponding patches from vendors
are released or in progress based on our reports.
fuzzing  security 
february 2018
Distributed Authorization with Distributed Grammars
While groups are generally helpful for the definition of authorization
policies, their use in distributed systems is not straightforward.
This paper describes a design for authorization in distributed systems
that treats groups as formal languages. The design supports forms of
delegation and negative clauses in authorization policies. It also considers
the wish for privacy and efficiency in group-membership checks, and
the possibility that group definitions may not all be available and may
contain cycles.
auth  security  computerscience 
february 2018
AES-VCM, An AES-GCM Construction Using An Integer-Based Universal Hash Function
We give a framework for construction and composition of universal
hash functions. Using this framework, we propose to swap out AES-GCM’s
F_{2^128}-based universal hash function for one based on VMAC, which uses integer
arithmetic. For architectures having AES acceleration but where either
F_{2^128} acceleration is absent or exists on the same execution unit as AES acceleration,
an integer-based variant of AES-GCM may offer a performance
advantage, while offering identical security.
crypto  tls 
january 2018
A Vendor Agnostic Root of Trust for Measurement
We report the success of a project that Google performed as a proof-of-concept for increasing
confidence in first-instruction integrity across a variety of server and peripheral environments. We
begin by motivating the problem of first-instruction integrity and share the lessons learned from
our proof-of-concept implementation. Our goal in sharing this information is to increase industry
support and engagement for similar designs. Notable features include a vendor-agnostic capability
to interpose on the SPI peripheral bus (from which bootstrap firmware is loaded upon power-on in a
wide variety of devices today) without negatively impacting the efficacy of any existing vendor- or
device-specific integrity mechanisms, thereby providing additional defense-in-depth.
security  hardware  crypto 
january 2018
152 Simple Steps to Stay Safe Online: Security Advice for Non-Tech-Savvy Users
Users often don’t follow expert advice for staying secure online, but the reasons for users’ noncompliance
are only partly understood. More than 200 security experts were asked for the top three pieces of advice
they would give non-tech-savvy users. The results suggest that, although individual experts give thoughtful,
reasonable answers, the expert community as a whole lacks consensus.
security  usability 
january 2018
Implementing and Proving the TLS 1.3 Record Layer - Microsoft Research
The record layer is the main bridge between TLS applications and internal sub-protocols. Its core functionality is an elaborate form of authenticated encryption: streams of messages for each sub-protocol (handshake, alert, and application data) are fragmented, multiplexed, and encrypted with optional padding to hide their lengths. Conversely, the sub-protocols may provide fresh keys or signal stream termination to the record layer. Compared to prior versions, TLS 1.3 discards obsolete schemes in favor of a common construction for Authenticated Encryption with Associated Data (AEAD), instantiated with algorithms such as AES-GCM and ChaCha20-Poly1305. It differs from TLS 1.2 in its use of padding, associated data and nonces. It also encrypts the content-type used to multiplex between sub-protocols. New protocol features such as early application data (0-RTT and 0.5-RTT) and late handshake messages require additional keys and a more general model of stateful encryption.

We build and verify a reference implementation of the TLS record layer and its cryptographic algorithms in F*, a dependently typed language where security and functional guarantees can be specified as pre- and post-conditions. We reduce the high-level security of the record layer to cryptographic assumptions on its ciphers. Each step in the reduction is verified by typing an F* module; for each step that involves a cryptographic assumption, this module precisely captures the corresponding game. We first verify the functional correctness and injectivity properties of our implementations of one-time MAC algorithms (Poly1305 and GHASH) and provide a generic proof of their security given these two properties. We show the security of a generic AEAD construction built from any secure one-time MAC and PRF. We extend AEAD, first to stream encryption, then to length-hiding, multiplexed encryption. Finally, we build a security model of the record layer against an adversary that controls the TLS sub-protocols.

We compute concrete security bounds for the AES_128_GCM, AES_256_GCM, and CHACHA20_POLY1305 ciphersuites, and derive recommended limits on sent data before re-keying. We plug our implementation of the record layer into the miTLS library, confirm that they interoperate with Chrome and Firefox, and report initial performance results. Combining our functional correctness, security, and experimental results, we conclude that the new TLS record layer (as described in RFCs and cryptographic standards) is provably secure, and we provide its first verified implementation.
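For reference, the AEAD interface the record layer is built on, as exposed by Go's standard library for AES-GCM (usage sketch; TLS 1.3 derives each record's nonce from a sequence number rather than drawing it at random):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
    )

    func main() {
        key := make([]byte, 16) // AES-128-GCM
        if _, err := rand.Read(key); err != nil {
            panic(err)
        }
        block, _ := aes.NewCipher(key)
        aead, _ := cipher.NewGCM(block)

        nonce := make([]byte, aead.NonceSize())
        rand.Read(nonce)

        // Associated data is authenticated but not encrypted;
        // TLS uses the record header here.
        header := []byte{0x17, 0x03, 0x03}
        ct := aead.Seal(nil, nonce, []byte("application data"), header)

        pt, err := aead.Open(nil, nonce, ct, header)
        fmt.Println(string(pt), err)
    }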
security  crypto  tls 
january 2018
Application Layer Transport Security (LOAS)
Production systems at Google consist of a constellation of microservices that collectively issue O(10^10) Remote Procedure Calls (RPCs) per second. When a Google engineer schedules a production workload, any RPCs issued or received by that workload are protected by default with Google’s Application Layer Transport Security (ALTS), an automatic, zero-configuration protection. In addition to the automatic protections conferred on RPCs, ALTS also facilitates easy service replication, load balancing, and rescheduling across production machines. This paper describes ALTS and explores its deployment over Google’s production infrastructure.
security  auth  infrastructure  scale 
january 2018
Retpoline: a software construct for preventing branch-target-injection - Google Help
“Retpoline” sequences are a software construct which allow indirect branches to be isolated from speculative execution. This may be applied to protect sensitive binaries (such as operating system or hypervisor implementations) from branch target injection attacks against their indirect branches.

The name “retpoline” is a portmanteau of “return” and “trampoline.” It is a trampoline built using return operations, which also figuratively ensures that any associated speculative execution will “bounce” endlessly.

(If it brings you any amusement: imagine speculative execution as an overly energetic 7-year old that we must now build a warehouse of trampolines around.)
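The construct itself is only a few instructions. An indirect call through %rax becomes a call to a thunk along these lines (GNU assembler syntax, following the published sequence; the pause/lfence loop pens in the speculative path while the ret sends real execution to the intended target):

    retpoline_rax:
            call    .Lset_up_target    # pushes return address; speculation
    .Lcapture_spec:                    # resumes here and is trapped
            pause
            lfence
            jmp     .Lcapture_spec
    .Lset_up_target:
            mov     %rax, (%rsp)       # overwrite return address with the
            ret                        # real target; ret jumps there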
security  compilers  via:micktwomey 
january 2018
Logic in Access Control (Tutorial Notes)
Abstract. Access control is central to security in computer systems. Over the
years, there have been many efforts to explain and to improve access control,
sometimes with logical ideas and tools. This paper is a partial survey and discussion
of the role of logic in access control. It considers logical foundations
for access control and their applications, in particular in languages for security
policies. It focuses on some specific logics and their properties. It is intended
as a written counterpart to a tutorial given at the 2009 International School on
Foundations of Security Analysis and Design.
logic  auth  security 
january 2018
Logic in Access Control
Access control is central to security in computer systems.
Over the years, there have been many efforts to explain and
to improve access control, sometimes with logical ideas and
tools. This paper is a partial survey and discussion of the
role of logic in access control. It considers logical foundations
for access control and their applications, in particular
in languages for programming security policies.
logic  auth  security 
january 2018
Fleet management at scale
Google's employees are spread across the globe, and with job functions
ranging from software engineers to financial analysts, they require a broad
spectrum of technology to get their jobs done. As a result, we manage a fleet
of nearly a quarter-million computers (workstations and laptops) across four
operating systems (macOS, Windows, Linux, and Chrome OS).
Our colleagues often ask how we're able to manage such a diverse fleet. Do
we have access to unlimited resources? Impose draconian security policies
on users? Shift the maintenance burden to our support staff?
The truth is that the bigger we get, the more we look for ways to increase
efficiency without sacrificing security or user productivity. We scale our
engineering teams by relying on reviewable, repeatable, and automated
backend processes and minimizing GUI-based configuration tools. Using and
developing open-source software saves money and provides us with a level
of flexibility that's often missing from proprietary software and closed
systems. And we strike a careful balance between user uptime and security
by giving users freedom to get their work done while preventing them from
doing harm, like installing malware or exposing Google data.
This paper describes some of the tools and systems that we use to image,
manage, and secure our varied inventory of workstations and laptops. Some
tools were built by third parties—sometimes with our own modifications to
make them work for us. We also created several tools to meet our own
enterprise needs, often open sourcing them later for wider use. By sharing
this information, we hope to help others navigate some of the challenges
we've faced—and ultimately overcame—throughout our enterprise fleet
management journey.
enterprise  security  scale 
december 2017
A Mathematical Theory of Communication
The recent development of various methods of modulation such as PCM and PPM which exchange
bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A
basis for such a theory is contained in the important papers of Nyquist and Hartley on this subject. In the
present paper we will extend the theory to include a number of new factors, in particular the effect of noise
in the channel, and the savings possible due to the statistical structure of the original message and due to the
nature of the final destination of the information.
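The paper's central quantity is the entropy of the source, which bounds both the compression achievable and the capacity required; for a source emitting symbol i with probability p_i,

    H = -\sum_i p_i \log_2 p_i   (bits per symbol)

so the statistical structure of messages translates directly into savings.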
papers  computerscience 
november 2017
Design patterns for container-based distributed systems
In the late 1980s and early 1990s, object-oriented programming revolutionized software development, popularizing the approach of building applications as collections of modular components. Today we are seeing a similar revolution in distributed system development, with the increasing popularity of microservice architectures built from containerized software components. Containers [15] [22] [1] [2] are particularly well-suited as the fundamental “object” in distributed systems by virtue of the walls they erect at the container boundary. As this architectural style matures, we are seeing the emergence of design patterns, much as we did for object-oriented programs, and for the same reason: thinking in terms of objects (or containers) abstracts away the low-level details of code, eventually revealing higher-level patterns that are common to a variety of applications and algorithms.

This paper describes three types of design patterns that we have observed emerging in container-based distributed systems: single-container patterns for container management, single-node patterns of closely cooperating containers, and multi-node patterns for distributed algorithms. Like object-oriented patterns before them, these patterns for distributed computation encode best practices, simplify development, and make the systems where they are used more reliable.
google  distributedsystems  containers  cloud  datacenter 
november 2017
TensorFlow Agents: Efficient Batched Reinforcement Learning in TensorFlow
We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.
machinelearning 
november 2017
Taking the Edge off with Espresso: Scale, Reliability and Programmability for Global Internet Peering
We present the design of Espresso, Google’s SDN-based Internet peering edge routing infrastructure. This architecture grew out of a need to exponentially scale the Internet edge cost-effectively and to enable application-aware routing at Internet-peering scale. Espresso utilizes commodity switches and host-based routing/packet processing to implement a novel fine-grained traffic engineering capability. Overall, Espresso provides Google a scalable peering edge that is programmable, reliable, and integrated with global traffic systems. Espresso also greatly accelerated deployment of new networking features at our peering edge. Espresso has been in production for two years and serves over 22% of Google’s total traffic to the Internet.
networking  sdn  scale 
november 2017
BeyondCorp 5 - The User Experience
Previous articles in the BeyondCorp series discuss aspects of the
technical challenges we solved along the way [1–3]. Beyond its purely
technical features, the migration also had a human element: it was
vital to keep our users constantly in mind throughout this process. Our goal
was to keep the end user experience as seamless as possible. When things
did go wrong, we wanted users to know exactly how to proceed and where to
go for help. This article describes the experience of Google employees as they
work within the BeyondCorp model, from onboarding new employees and
setting up new devices, to what happens when users run into issues.
enterprise  security  beyondcorp 
november 2017
Beyond Corp 4 - Migrating Peck et al.pdf
If you’re familiar with the articles about Google’s BeyondCorp network
security model published in ;login: [1-3] over the past two years, you
may be thinking, “That all sounds good, but how does my organization
move from where we are today to a similar model? What do I need to do?
And what’s the potential impact on my company and my employees?” This
article discusses how we moved from our legacy network to the BeyondCorp model—changing the fundamentals of network access—without reducing the company’s productivity.
enterprise  security  google  beyondcorp 
june 2017