[1804.04268] Incomplete Contracting and AI Alignment
We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.
nibble  preprint  org:mat  papers  ai  ai-control  alignment  coordination  contracts  law  economics  interests  culture  institutions  number  context  behavioral-econ  composition-decomposition  rent-seeking  whole-partial-many 
april 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machine elements faster: ~5 ms from neuron potential to neuron potential, vs. ~10^-3 ms switching time for vacuum tubes
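A quick sketch of the arithmetic these notes imply (assuming, purely for illustration, one switching event per element per cycle):

```python
# Fermi estimate from von Neumann's numbers (illustrative assumptions only).
neurons = 1e10          # neurons in the brain (his estimate)
tubes = 1e4             # vacuum tubes in the largest machine of the era
neuron_cycle_s = 5e-3   # ~5 ms from neuron potential to neuron potential
tube_cycle_s = 1e-6     # ~10^-3 ms per vacuum-tube switch

brain_events = neurons / neuron_cycle_s   # ~2e12 elementary events/s
machine_events = tubes / tube_cycle_s     # ~1e10 elementary events/s

print(f"brain:   ~{brain_events:.0e} events/s")
print(f"machine: ~{machine_events:.0e} events/s")
# Tubes are ~5000x faster per element, but the brain's ~10^6-fold edge in
# component count still wins by a factor of ~200 in aggregate.
```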

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
[1410.0369] The Universe of Minds
kinda dumb, don't think this guy is anywhere close to legit (e.g., he claims set of mind designs is countable, but gives no actual reason to believe that)
papers  preprint  org:mat  ratty  miri-cfar  ai  intelligence  philosophy  logic  software  cs  computation  the-self 
march 2018 by nhaliday
[1509.02504] Electric charge in hyperbolic motion: The early history and other geometrical aspects
We revisit the early work of Minkowski and Sommerfeld concerning hyperbolic motion, and we describe some geometrical aspects of the electrodynamic interaction. We discuss the advantages of a time symmetric formulation in which the material points are replaced by infinitesimal length elements.

SPACE AND TIME: An annotated, illustrated edition of Hermann Minkowski's revolutionary essay: http://web.mit.edu/redingtn/www/netadv/SP20130311.html
nibble  preprint  papers  org:mat  physics  electromag  relativity  exposition  history  mostly-modern  pre-ww2  science  the-trenches  discovery  intricacy  classic  explanation  einstein  giants  plots  manifolds  article  multi  liner-notes  org:junk  org:edu  absolute-relative 
november 2017 by nhaliday
Stability of the Solar System - Wikipedia
The stability of the Solar System is a subject of much inquiry in astronomy. Though the planets have been stable when historically observed, and will be in the short term, their weak gravitational effects on one another can add up in unpredictable ways. For this reason (among others) the Solar System is chaotic,[1] and even the most precise long-term models for the orbital motion of the Solar System are not valid over more than a few tens of millions of years.[2]

The Solar System is stable in human terms, and far beyond, given that it is unlikely any of the planets will collide with each other or be ejected from the system in the next few billion years,[3] and the Earth's orbit will be relatively stable.[4]

Since Newton's law of gravitation (1687), mathematicians and astronomers (such as Laplace, Lagrange, Gauss, Poincaré, Kolmogorov, Vladimir Arnold and Jürgen Moser) have searched for evidence for the stability of the planetary motions, and this quest led to many mathematical developments, and several successive 'proofs' of stability of the Solar System.[5]

...

The planets' orbits are chaotic over longer timescales, such that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years.[3] In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer becomes uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical.[7]
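A rough sketch of what a Lyapunov time buys you: uncertainty grows like e^(t/τ), so the prediction horizon is only logarithmic in the initial precision. (τ = 5 Myr below is an assumed value inside the quoted 2–230 Myr range; the other numbers are illustrative.)

```python
import math

tau_myr = 5.0                # assumed Lyapunov time, within the 2-230 Myr range
initial_error_m = 1.0        # suppose Earth's position is known to 1 meter
orbit_scale_m = 1.5e11       # ~1 AU; errors this big mean "position unknown"

# error(t) ~ error(0) * exp(t / tau)  =>  horizon = tau * ln(scale / error0)
efoldings = math.log(orbit_scale_m / initial_error_m)         # ~25.7
print(f"prediction horizon: ~{tau_myr * efoldings:.0f} Myr")  # ~130 Myr

# A thousandfold better initial measurement buys only tau * ln(1000) ~ 35 Myr
# more, which is why even the best models fail beyond tens of Myr.
```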

Is the Solar System Stable?: https://www.ias.edu/ideas/2011/tremaine-solar-system

Is the Solar System Stable?: https://arxiv.org/abs/1209.5996
nibble  wiki  reference  article  physics  mechanics  space  gravity  flux-stasis  uncertainty  robust  perturbation  math  dynamical  math.DS  volo-avolo  multi  org:edu  org:inst  papers  preprint  time  data  org:mat 
november 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seem to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value, and then he correctly gives the upper tail area of a χ² with 12 degrees of freedom beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
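A minimal sketch of the statistic being described (the helper below is illustrative, not Pearson's notation; scipy supplies the tail area he computed by hand):

```python
import numpy as np
from scipy.stats import chi2

def pearson_chi2(observed, expected):
    """Goodness-of-fit statistic: sum of squared standardized deviations."""
    o, e = np.asarray(observed, float), np.asarray(expected, float)
    return ((o - e) ** 2 / e).sum()

# Sanity check of the tail area quoted above: P(chi^2 with 12 d.f. > 43.87).
print(chi2.sf(43.87, df=12))   # ~1.6e-05, matching Pearson's 0.000016
```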
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits 
october 2017 by nhaliday
[1709.06560] Deep Reinforcement Learning that Matters
https://twitter.com/WAWilsonIV/status/912505885565452288
I’ve been experimenting w/ various kinds of value function approaches to RL lately, and it’s striking how primitive and bad things seem to be
At first I thought it was just that my code sucks, but then I played with the OpenAI baselines and nope, it’s the children that are wrong.
And now, what comes across my desk but this fantastic paper: https://arxiv.org/abs/1709.06560. How long until the replication crisis hits AI?

https://twitter.com/WAWilsonIV/status/911318326504153088
Seriously I’m not blown away by the PhDs’ records over the last 30 years. I bet you’d get better payoff funding eccentrics and amateurs.
There are essentially zero fundamentally new ideas in AI, the papers are all grotesquely hyperparameter tuned, nobody knows why it works.

Deep Reinforcement Learning Doesn't Work Yet: https://www.alexirpan.com/2018/02/14/rl-hard.html
Once, on Facebook, I made the following claim.

Whenever someone asks me if reinforcement learning can solve their problem, I tell them it can’t. I think this is right at least 70% of the time.
papers  preprint  machine-learning  acm  frontier  speedometer  deep-learning  realness  replication  state-of-art  survey  reinforcement  multi  twitter  social  discussion  techtariat  ai  nibble  org:mat  unaffiliated  ratty  acmtariat  liner-notes  critique  sample-complexity  cost-benefit  todo 
september 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Rank aggregation basics: Local Kemeny optimisation | David R. MacIver
This turns our problem from a global search to a local one: Basically we can start from any point in the search space and search locally by swapping adjacent pairs until we hit a minimum. This turns out to be quite easy to do. _We basically run insertion sort_: At step n we have the first n items in a locally Kemeny optimal order. Swap the n+1th item backwards until the majority think its predecessor is < it. This ensures all adjacent pairs are in the majority order, so swapping them would result in a greater than or equal Kemeny score K. This is of course an O(n^2) algorithm. In fact, the problem of merely finding a locally Kemeny optimal solution can be done in O(n log(n)) (for much the same reason as you can sort better than insertion sort): you just take the directed graph of majority votes and find a Hamiltonian path. The nice thing about the above version of the algorithm is that it gives you a lot of control over where you start your search.
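A sketch of the insertion-sort procedure described above, assuming ballots are given as full rankings (lists); the names and data are illustrative, not from the post:

```python
def majority_prefers(a, b, ballots):
    """True iff a strict majority of ballots rank a before b."""
    wins = sum(1 for r in ballots if r.index(a) < r.index(b))
    return wins > len(ballots) / 2

def locally_kemeny_optimal(items, ballots):
    """Insertion sort with the majority relation as comparator: afterwards
    every adjacent pair is in majority order, so any single adjacent swap
    gives a greater-than-or-equal Kemeny score."""
    order = []
    for item in items:
        order.append(item)
        i = len(order) - 1
        # swap backwards while the majority prefers the new item to its predecessor
        while i > 0 and majority_prefers(item, order[i - 1], ballots):
            order[i - 1], order[i] = order[i], order[i - 1]
            i -= 1
    return order

ballots = [["a", "b", "c"], ["b", "c", "a"], ["a", "c", "b"]]
print(locally_kemeny_optimal(["a", "b", "c"], ballots))  # ['a', 'b', 'c']
```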
techtariat  liner-notes  papers  tcs  algorithms  machine-learning  acm  optimization  approximation  local-global  orders  graphs  graph-theory  explanation  iteration-recursion  time-complexity  nibble 
september 2017 by nhaliday
Fermat's Library | Cassini, Rømer and the velocity of light annotated/explained version.
Abstract: The discovery of the finite nature of the velocity of light is usually attributed to Rømer. However, a text at the Paris Observatory confirms the minority opinion according to which Cassini was first to propose the ‘successive motion’ of light, while giving a rather correct order of magnitude for the duration of its propagation from the Sun to the Earth. We examine this question, and discuss why, in spite of the criticisms of Halley, Cassini abandoned this hypothesis while leaving Rømer free to publish it.
liner-notes  papers  essay  history  early-modern  europe  the-great-west-whale  giants  the-trenches  mediterranean  nordic  science  innovation  discovery  physics  electromag  space  speed  nibble  org:sci  org:mat 
september 2017 by nhaliday
Correlated Equilibria in Game Theory | Azimuth
Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes the Nash equilibrium prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:
baez  org:bleg  nibble  mathtariat  commentary  summary  news  org:mag  org:sci  popsci  equilibrium  GT-101  game-theory  acm  conceptual-vocab  concept  definition  thinking  signaling  coordination  tcs  complexity  communication-complexity  lower-bounds  no-go  liner-notes  big-surf  papers  research  algorithmic-econ  volo-avolo 
july 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary 
july 2017 by nhaliday
Why is the Lin and Tegmark paper 'Why does deep and cheap learning work so well?' important? - Quora
To take the analogy further than I probably should, the resolution to the magic key problem might be that the key is magical, or that the locks are particularly simple. For deep learning, my guess is that it’s a bit of both.
q-n-a  qra  papers  liner-notes  deep-learning  off-convex  machine-learning  explanation  nibble  big-picture  explanans 
february 2017 by nhaliday
[1604.03640] Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such a RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset.
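A tiny numpy sketch of the equivalence claimed in the first sentence (dimensions and nonlinearity are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 8))      # one weight matrix, shared by all layers
x = rng.normal(size=8)
f = lambda h: np.maximum(W @ h, 0.0)   # residual branch: ReLU(W h)

# "Very deep ResNet with weight sharing": layer t computes h <- h + f(h).
h_resnet = x
for _layer in range(10):
    h_resnet = h_resnet + f(h_resnet)

# "Shallow RNN": one cell h <- h + f(h), unrolled for 10 time steps.
h_rnn = x
for _step in range(10):
    h_rnn = h_rnn + f(h_rnn)

# Identical computation, but the RNN stores W once instead of 10 times.
assert np.allclose(h_resnet, h_rnn)
```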
papers  preprint  neuro  biodet  interdisciplinary  deep-learning  model-class  identity  machine-learning  nibble  org:mat  computer-vision 
february 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the size of the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
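A small check of that claim, assuming the usual form of the transform, D'(x,y) = 2·D(x,y) / (D(x,a) + D(y,a) + D(x,y)) with base point a: taking D = size of the symmetric difference and a = the empty set gives |XΔY| / |X∪Y|, i.e. the Jaccard distance.

```python
from itertools import combinations

def sym_diff(x, y):
    # base metric D: size of the symmetric difference of two sets
    return len(x ^ y)

def steinhaus(d, a):
    # Steinhaus transform of metric d with base point a
    def d2(x, y):
        denom = d(x, a) + d(y, a) + d(x, y)
        return 0.0 if denom == 0 else 2 * d(x, y) / denom
    return d2

jaccard = steinhaus(sym_diff, frozenset())

sets = [frozenset(s) for s in ({1, 2}, {2, 3}, {1, 2, 3}, {4})]
for x, y in combinations(sets, 2):
    # matches the Jaccard distance computed directly
    assert abs(jaccard(x, y) - len(x ^ y) / len(x | y)) < 1e-12
```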
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
Paperscape
- includes physics, cs, etc.
- CS is _a lot_ smaller, or at least has much lower citation counts
- size = number of citations, placement = citation network structure
papers  publishing  science  meta:science  data  visualization  network-structure  big-picture  dynamic  exploratory  🎓  physics  cs  math  hi-order-bits  survey  visual-understanding  preprint  aggregator  database  search  maps  zooming  metameta  scholar-pack  🔬  info-dynamics  scale  let-me-see  chart 
february 2017 by nhaliday
The Brunn-Minkowski Inequality | The n-Category Café
For instance, this happens in the plane when A is a horizontal line segment and B is a vertical line segment. There’s obviously no hope of getting an equation for Vol(A+B) in terms of Vol(A) and Vol(B). But this example suggests that we might be able to get an inequality, stating that Vol(A+B) is at least as big as some function of Vol(A) and Vol(B).

The Brunn-Minkowski inequality does this, but it’s really about linearized volume, Vol^{1/n}, rather than volume itself. If length is measured in metres then so is Vol^{1/n}.
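For reference, the inequality under discussion (standard statement, not quoted from the post): for nonempty compact A, B ⊆ R^n,

Vol(A + B)^{1/n} ≥ Vol(A)^{1/n} + Vol(B)^{1/n}, where A + B = {a + b : a ∈ A, b ∈ B}.

In the two-segments example, both segments have volume 0 while A + B is a square of volume 1, so the inequality holds with room to spare.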

...

Nice post, Tom. To readers whose background isn’t in certain areas of geometry and analysis, it’s not obvious that the Brunn–Minkowski inequality is more than a curiosity, the proof of the isoperimetric inequality notwithstanding. So let me add that Brunn–Minkowski is an absolutely vital tool in many parts of geometry, analysis, and probability theory, with extremely diverse applications. Gardner’s survey is a great place to start, but by no means exhaustive.

I’ll also add a couple remarks about regularity issues. You point out that Brunn–Minkowski holds “in the vast generality of measurable sets”, but it may not be initially obvious that this needs to be interpreted as “when A, B, and A+B are all Lebesgue measurable”, since A+B need not be measurable when A and B are (although you can modify the definition of A+B to work for arbitrary measurable A and B; this is discussed by Gardner).
mathtariat  math  estimate  exposition  geometry  math.MG  measure  links  regularity  survey  papers  org:bleg  nibble  homogeneity  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday