nhaliday + constraint-satisfaction   20

Two Performance Aesthetics: Never Miss a Frame and Do Almost Nothing - Tristan Hume
I’ve noticed when I think about performance nowadays that I think in terms of two different aesthetics. One aesthetic, which I’ll call Never Miss a Frame, comes from the world of game development and is focused on writing code that has good worst-case performance by making good use of the hardware. The other aesthetic, which I’ll call Do Almost Nothing, comes from a more academic world and is focused on algorithmically minimizing the work that needs to be done, to the extent that there’s barely any work left, paying attention to performance at all scales.

[ed.: Neither of these exactly matches TCS performance PoV but latter is closer (the focus on diffs is kinda weird).]


Never Miss a Frame

In game development the most important performance criterion is that your game doesn’t miss frame deadlines. You have a target frame rate, and if you miss the deadline for the screen to draw a new frame your users will notice the jank. This leads to focusing on the worst-case scenario and often having fixed maximum limits for various quantities. This property can also be important in areas other than game development, like other graphical applications, real-time audio, safety-critical systems and many embedded systems. A similar dynamic occurs in distributed systems where one server needs to query 100 others and combine the results: you’ll wait for the slowest of the 100 every time, so speeding up some of them doesn’t make the query faster, and queries occasionally taking longer (e.g. because of garbage collection) will impact almost every request!


In this kind of domain you’ll often run into situations where in the worst case you can’t avoid processing a huge number of things. This means you need to focus your effort on making the best use of the hardware by writing code at a low level and paying attention to properties like cache size and memory bandwidth.

Projects with inviolable deadlines need to adjust factors other than speed if the code runs too slowly. For example, a game might decrease the size of a level or use a more efficient but less pretty rendering technique.

Aesthetically: Data should be tightly packed, fixed size, and linear. Transcoding data to and from different formats is wasteful. Strings and their variable lengths and inefficient operations must be avoided. Only use tools that allow you to work at a low level, even if they’re annoying, because that’s the only way you can avoid piles of fixed costs making everything slow. Understand the machine and what your code does to it.

Personally I identify this aesthetic most with Jonathan Blow. He has a very strong personality and I’ve watched enough videos of him that I find asking “What would Jonathan Blow say?” a good way to tap into this aesthetic. My favourite articles about designs following this aesthetic are on the Our Machinery Blog.


Do Almost Nothing

Sometimes, it’s important to be as fast as you can in all cases and not just orient around one deadline. The most common case is when you simply have to do something that’s going to take an amount of time noticeable to a human, and if you can make that time shorter in some situations that’s great. Alternatively, each operation could be fast, but you may run a server that handles tons of them, and you’ll save on server costs if you can decrease the load of some requests. Another important case is when you care about power use, for example your text editor not rapidly draining a laptop’s battery; in this case you want to do the least work you possibly can.

A key technique for this approach is to never recompute something from scratch when it’s possible to re-use or patch an old result. This often involves caching: keeping a store of recent results in case the same computation is requested again.

The ultimate realization of this aesthetic is for the entire system to deal only in differences between the new state and the previous state, updating data structures with only the newly needed data and discarding data that’s no longer needed. This way each part of the system does almost no work because ideally the difference from the previous state is very small.
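A toy sketch of the difference-driven style (my illustration, not from the article): maintain a derived value and patch it per change, rather than recomputing over the whole input:

```python
# Keep a derived value (here, a running total) updated by applying
# only the difference each change introduces.
class RunningSum:
    def __init__(self, items):
        self.items = list(items)
        self.total = sum(self.items)  # computed from scratch exactly once

    def update(self, index, new_value):
        # O(1) patch: apply only the delta instead of a full O(n) re-sum
        self.total += new_value - self.items[index]
        self.items[index] = new_value

s = RunningSum([3, 1, 4, 1, 5])
s.update(2, 10)
print(s.total)  # 20
```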

Aesthetically: Data must be in whatever structure scales best for the way it is accessed, lots of trees and hash maps. Computations are graphs of inputs and results so we can use all our favourite graph algorithms to optimize them! Designing optimal systems is hard so you should use whatever tools you can to make it easier, any fixed cost they incur will be made negligible when you optimize away all the work they need to do.

Personally I identify this aesthetic most with my friend Raph Levien and his articles about the design of the Xi text editor, although Raph also appreciates the other aesthetic and taps into it himself sometimes.


_I’m conflating the axes of deadline-oriented vs time-oriented and low-level vs algorithmic optimization, but part of my point is that while they are different, I think these axes are highly correlated._


Text Editors

Sublime Text is a text editor that mostly follows the Never Miss a Frame approach. ...

The Xi Editor solves this problem by being designed from the ground up to grapple with the fact that some operations, especially those interacting with slow compilers written by other people, can’t be made instantaneous. It does this using a fancy asynchronous plugin model and lots of fancy data structures.



Jonathan Blow’s Jai compiler is clearly designed with the Never Miss a Frame aesthetic. It’s written to be extremely fast at every level, and the language doesn’t have any features that necessarily lead to slow compiles. The LLVM backend wasn’t fast enough to hit his performance goals so he wrote an alternative backend that directly writes x86 code to a buffer without doing any optimizations. Jai compiles something like 100,000 lines of code per second. Designing both the language and compiler to not do anything slow led to clean build performance 10-100x faster than other commonly-used compilers. Jai is so fast that its clean builds are faster than most compilers’ incremental builds on common project sizes, due to limitations in how incremental the other compilers are.

However, Jai’s compiler is still O(n) in the codebase size, whereas incremental compilers can be O(n) in the size of the change. Some compilers, like the work-in-progress rust-analyzer and I think also Roslyn for C#, take a different approach and focus incredibly hard on making everything fully incremental. For small changes (the common case) this can let them beat Jai and respond in milliseconds on arbitrarily large projects, even if they’re slower on clean builds.

I find both of these aesthetics appealing, but I also think there are real trade-offs that incentivize leaning one way or the other for a given project. I think people having different performance aesthetics, often because one aesthetic really is better suited for their domain, is the source of a lot of online arguments about making fast systems. The different aesthetics also require different bases of knowledge to pursue, like knowledge of data-oriented programming in C++ vs knowledge of abstractions for incrementality like Adapton, so different people may find that one approach seems way easier and better for them than the other.

I try to choose how to dedicate my effort between the two aesthetics on a per-project basis by trying to predict how effort in each direction would help. For some projects I know that if I code them efficiently they will always hit the performance deadline; for others I know a way to drastically cut down on work by investing time in algorithmic design; some projects need a mix of both. Personally I find it helpful to think of different programmers where I have a good sense of their aesthetic and ask myself how they’d solve the problem. One reason I like Rust is that it can do both low-level optimization and also has a good ecosystem and type system for algorithmic optimization, so I can more easily mix approaches in one project. In the end the best approach depends not only on the task, but also on your skills or the skills of the team working on it, as well as how much time you have to work towards an ambitious design that may take longer for a better result.
techtariat  reflection  things  comparison  aesthetics  lens  programming  engineering  cracker-prog  carmack  games  performance  big-picture  system-design  constraint-satisfaction  metrics  telos-atelos  worst-case  distributed  incentives  concurrency  cost-benefit  tradeoffs  systems  metal-to-virtual  latency-throughput  abstraction  marginal  caching  editors  strings  ideas  ui  common-case  examples  applications  flux-stasis  nitty-gritty  ends-means  thinking  summary  correlation  degrees-of-freedom  c(pp)  rust  interface  integration-extension 
15 days ago by nhaliday
Surveil things, not people – The sideways view
Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.

Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.


The idea
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).

Heavy technology is subject to two rules:

1. You can’t use heavy technology in a way that is unacceptably destructive.
2. You can’t use heavy technology to undermine the machinery that enforces these two rules.

To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.


This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.

This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
ratty  acmtariat  clever-rats  risk  existence  futurism  technology  policy  alt-inst  proposal  government  intel  authoritarianism  orwellian  tricks  leviathan  security  civilization  ai  ai-control  arms  defense  cybernetics  institutions  law  unintended-consequences  civil-liberty  volo-avolo  power  constraint-satisfaction  alignment 
april 2018 by nhaliday
Flows With Friction
To see how the no-slip condition arises, and how the no-slip condition and the fluid viscosity lead to frictional stresses, we can examine the conditions at a solid surface on a molecular scale. When a fluid is stationary, its molecules are in a constant state of motion with a random velocity v. For a gas, v is equal to the speed of sound. When a fluid is in motion, there is superimposed on this random velocity a mean velocity V, sometimes called the bulk velocity, which is the velocity at which fluid moves from one place to another. At the interface between the fluid and the surface, there exists an attraction between the molecules or atoms that make up the fluid and those that make up the solid. This attractive force is strong enough to reduce the bulk velocity of the fluid to zero. So the bulk velocity of the fluid must change from whatever its value is far away from the wall to a value of zero at the wall (figure 7). This is called the no-slip condition.

The fluid property responsible for the no-slip condition and the development of the boundary layer is viscosity.
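The frictional stress the excerpt alludes to is quantified by Newton’s law of viscosity: for a Newtonian fluid, wall shear stress is the viscosity times the velocity gradient at the wall, τ = μ·dV/dy (a standard result, not stated in the excerpt). A tiny numeric sketch with illustrative values:

```python
# Newton's law of viscosity: tau = mu * dV/dy evaluated at the wall.
# The velocity gradient below is an assumed illustrative value.
mu_water = 1.0e-3     # dynamic viscosity of water, Pa*s (approx., 20 C)
dV_dy_wall = 100.0    # velocity gradient at the wall, 1/s (assumed)

tau_wall = mu_water * dV_dy_wall  # wall shear stress, Pa
print(tau_wall)  # 0.1
```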
org:junk  org:edu  physics  mechanics  h2o  identity  atoms  constraint-satisfaction  volo-avolo  flux-stasis  chemistry  stat-mech  nibble  multi  q-n-a  reddit  social  discussion  dirty-hands  pdf  slides  lectures  qra  fluid  local-global  explanation 
september 2017 by nhaliday
Whole Health Source: Palatability, Satiety and Calorie Intake
The more palatable the food, the less filling per calorie, and the relationship was quite strong for a study of this nature. This is consistent with the evidence that highly palatable foods shut down the mechanisms in the brain that constrain food intake. Croissants had the lowest satiety index (SI) at 47, while potatoes had the highest at 323. Overall, baked goods and candy had the lowest SI. They didn't test sweet potatoes, but I suspect they would have been similar to potatoes. Other foods with a high SI include meat/fish, whole grain foods, fruit and porridge.
taubes-guyenet  org:health  fitsci  health  embodied  food  diet  nutrition  metabolic  constraint-satisfaction  wire-guided  correlation  emotion 
july 2017 by nhaliday
- the genetic book of the dead [Dawkins]
- complementarity [Frank Wilczek]
- relative information
- effective theory [Lisa Randall]
- affordances [Dennett]
- spontaneous symmetry breaking
- relatedly, equipoise [Nicholas Christakis]
- case-based reasoning
- population reasoning (eg, common law)
- criticality [Cesar Hidalgo]
- Haldane's law of the right size (!SCALE!)
- polygenic scores
- non-ergodic
- ansatz
- state [Aaronson]: http://www.scottaaronson.com/blog/?p=3075
- transfer learning
- effect size
- satisficing
- scaling
- the breeder's equation [Greg Cochran]
- impedance matching

- reciprocal altruism
- life history [Plomin]
- intellectual honesty [Sam Harris]
- coalitional instinct (interesting claim: building coalitions around "rationality" actually makes it more difficult to update on new evidence as it makes you look like a bad person, eg, the Cathedral)
basically same: https://twitter.com/ortoiseortoise/status/903682354367143936

more: https://www.edge.org/conversation/john_tooby-coalitional-instincts

interesting timing. how woke is this dude?
org:edge  2017  technology  discussion  trends  list  expert  science  top-n  frontier  multi  big-picture  links  the-world-is-just-atoms  metameta  🔬  scitariat  conceptual-vocab  coalitions  q-n-a  psychology  social-psych  anthropology  instinct  coordination  duty  power  status  info-dynamics  cultural-dynamics  being-right  realness  cooperate-defect  westminster  chart  zeitgeist  rot  roots  epistemic  rationality  meta:science  analogy  physics  electromag  geoengineering  environment  atmosphere  climate-change  waves  information-theory  bits  marginal  quantum  metabuch  homo-hetero  thinking  sapiens  genetics  genomics  evolution  bio  GT-101  low-hanging  minimum-viable  dennett  philosophy  cog-psych  neurons  symmetry  humility  life-history  social-structure  GWAS  behavioral-gen  biodet  missing-heritability  ergodic  machine-learning  generalization  west-hunter  population-genetics  methodology  blowhards  spearhead  group-level  scale  magnitude  business  scaling-tech  tech  business-models  optimization  effect-size  aaronson  state  bare-hands  problem-solving  politics 
may 2017 by nhaliday
Lecture 11
In which we prove that the Edmonds-Karp algorithm for maximum flow is a strongly polynomial time algorithm, and we begin to talk about the push-relabel approach.
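The algorithm the notes analyze can be sketched compactly. This is a minimal Edmonds-Karp implementation (my illustration, not taken from the lecture notes), using an adjacency-matrix capacity representation:

```python
from collections import deque

# Edmonds-Karp: Ford-Fulkerson with BFS (shortest) augmenting paths,
# which is what yields the strongly polynomial O(V * E^2) bound,
# independent of the capacity values.
def edmonds_karp(capacity, source, sink):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    max_flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return max_flow  # no augmenting path left: flow is maximum
        # find the bottleneck residual capacity along the path
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # augment along the path (negative entries act as residual edges)
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        max_flow += bottleneck

# small example network: max flow from node 0 to node 3 is 5
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(edmonds_karp(cap, 0, 3))  # 5
```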
pdf  lecture-notes  exposition  optimization  algorithms  linear-programming  graphs  stanford  luca-trevisan  nibble  direction  stock-flow  tcs  constraint-satisfaction  tcstariat 
january 2017 by nhaliday
Lecture 16
In which we define a multi-commodity flow problem, and we see that its dual is the relaxation of a useful graph partitioning problem. The relaxation can be rounded to yield an approximate graph partitioning algorithm.
pdf  lecture-notes  exposition  optimization  linear-programming  graphs  graph-theory  algorithms  duality  rounding  stanford  approximation  rand-approx  luca-trevisan  relaxation  nibble  stock-flow  constraint-satisfaction  tcs  tcstariat 
january 2017 by nhaliday
Hyperbolic discounting - Wikipedia
Individuals using hyperbolic discounting reveal a strong tendency to make choices that are inconsistent over time – they make choices today that their future self would prefer not to have made, despite using the same reasoning. This dynamic inconsistency happens because the discount rate implied by hyperbolic discounting declines with delay: rewards in the near future are discounted steeply while additional delay far in the future costs relatively little, so preferences between two dated rewards can reverse as both draw nearer – something that never happens under exponential discounting.
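A small sketch of the preference reversal (my illustration: V = A/(1+kD) is the standard hyperbolic discount curve, but k = 1 and the dollar amounts and delays are assumed):

```python
# Hyperbolic discounting: present value of amount A at delay D
# is V = A / (1 + k*D). With the same k, compare a smaller-sooner
# reward ($50) against a larger-later one ($100, 5 periods later).
def hyperbolic(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

far = (hyperbolic(50, 10), hyperbolic(100, 15))  # both rewards far away
near = (hyperbolic(50, 0), hyperbolic(100, 5))   # both rewards near

print(far[0] < far[1])    # True: from a distance, prefer larger-later
print(near[0] > near[1])  # True: up close, the preference reverses
```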
psychology  cog-psych  behavioral-econ  values  time-preference  wiki  reference  concept  models  distribution  time  uncertainty  decision-theory  decision-making  sequential  stamina  neurons  akrasia  contradiction  self-control  patience  article  formal-values  microfoundations  constraint-satisfaction  additive  long-short-run 
january 2017 by nhaliday

