
Surveil things, not people – The sideways view
Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.

Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.

...

The idea
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).

Heavy technology is subject to two rules:

1. You can’t use heavy technology in a way that is unacceptably destructive.
2. You can’t use heavy technology to undermine the machinery that enforces these two rules.

To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.

...

This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.

This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
ratty  acmtariat  clever-rats  risk  existence  futurism  technology  policy  alt-inst  proposal  government  intel  authoritarianism  orwellian  tricks  leviathan  security  civilization  ai  ai-control  arms  defense  cybernetics  institutions  law  unintended-consequences  civil-liberty  volo-avolo  power  constraint-satisfaction  alignment 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most or all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason I can see to expect relatively local AI progress is that AI progress might be unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many small packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen the same in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
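As a concrete illustration of the kind of indicator Hanson has in mind, here is a minimal sketch of one simple lumpiness measure: the share of all citations captured by the top 1% of papers. The data below are toy lognormal stand-ins, not real citation counts, and the choice of measure is my own for illustration, not the one from the Science paper.

```python
import numpy as np

def top_share(citations, top_frac=0.01):
    """Share of all citations received by the most-cited top_frac of papers --
    one simple proxy for citation 'lumpiness'."""
    c = np.sort(np.asarray(citations, dtype=float))[::-1]
    k = max(1, int(len(c) * top_frac))
    return c[:k].sum() / c.sum()

# Toy stand-ins for two fields' citation counts (same shape, different scale);
# a scale-free concentration measure comes out roughly the same for both.
rng = np.random.default_rng(1)
field_a = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)
field_b = rng.lognormal(mean=2.0, sigma=1.5, size=10_000)
print(round(top_share(field_a), 3), round(top_share(field_b), 3))
```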

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
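The parenthetical claim can be checked with a toy simulation (my own sketch, not Hanson's): give each person independent module abilities, let each task average a random subset of modules, and the shared modules alone produce positively correlated task scores, i.e. a common factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks, modules_per_task = 1000, 50, 20, 10

# Module abilities vary independently across people: no general factor is built in.
ability = rng.normal(size=(n_people, n_modules))

# Each task averages a random subset of modules.
scores = np.empty((n_people, n_tasks))
for t in range(n_tasks):
    used = rng.choice(n_modules, size=modules_per_task, replace=False)
    scores[:, t] = ability[:, used].mean(axis=1)

# Overlapping modules alone induce positive inter-task correlations, a "g"-like factor.
corr = np.corrcoef(scores, rowvar=False)
print("mean inter-task correlation:", corr[~np.eye(n_tasks, dtype=bool)].mean().round(2))
```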

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Sequence Modeling with CTC
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
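For a sense of what using CTC looks like in practice, here is a minimal sketch assuming PyTorch's torch.nn.CTCLoss; the tensor sizes and random inputs are placeholders for illustration, not part of the guide.

```python
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 20, 10   # time steps, batch size, classes (incl. blank), target length

# Placeholder per-frame class log-probabilities; in practice these come from an RNN/CNN.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)

# Integer label sequences; index 0 is reserved for the CTC blank symbol.
targets = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)    # dynamic programming over all alignments of target to input
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()              # gradients flow back into the network producing log_probs
```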
acmtariat  techtariat  org:bleg  nibble  better-explained  machine-learning  deep-learning  visual-understanding  visualization  analysis  let-me-see  research  sequential  audio  classification  model-class  exposition  language  acm  approximation  comparison  markov  iteration-recursion  concept  atoms  distribution  orders  DP  heuristic  optimization  trees  greedy  matching  gradient-descent 
december 2017 by nhaliday
[1709.06560] Deep Reinforcement Learning that Matters
https://twitter.com/WAWilsonIV/status/912505885565452288
I’ve been experimenting w/ various kinds of value function approaches to RL lately, and it’s striking how primitive and bad things seem to be
At first I thought it was just that my code sucks, but then I played with the OpenAI baselines and nope, it’s the children that are wrong.
And now, what comes across my desk but this fantastic paper: https://arxiv.org/abs/1709.06560 How long until the replication crisis hits AI?

https://twitter.com/WAWilsonIV/status/911318326504153088
Seriously I’m not blown away by the PhDs’ records over the last 30 years. I bet you’d get better payoff funding eccentrics and amateurs.
There are essentially zero fundamentally new ideas in AI, the papers are all grotesquely hyperparameter tuned, nobody knows why it works.

Deep Reinforcement Learning Doesn't Work Yet: https://www.alexirpan.com/2018/02/14/rl-hard.html
Once, on Facebook, I made the following claim.

Whenever someone asks me if reinforcement learning can solve their problem, I tell them it can’t. I think this is right at least 70% of the time.
papers  preprint  machine-learning  acm  frontier  speedometer  deep-learning  realness  replication  state-of-art  survey  reinforcement  multi  twitter  social  discussion  techtariat  ai  nibble  org:mat  unaffiliated  ratty  acmtariat  liner-notes  critique  sample-complexity  cost-benefit  todo 
september 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
How to Escape Saddle Points Efficiently – Off the convex path
A core, emerging problem in nonconvex optimization involves the escape of saddle points. While recent research has shown that gradient descent (GD) generically escapes saddle points asymptotically (see Rong Ge’s and Ben Recht’s blog posts), the critical open problem is one of efficiency — is GD able to move past saddle points quickly, or can it be slowed down significantly? How does the rate of escape scale with the ambient dimensionality? In this post, we describe our recent work with Rong Ge, Praneeth Netrapalli and Sham Kakade, that provides the first provable positive answer to the efficiency question, showing that, rather surprisingly, GD augmented with suitable perturbations escapes saddle points efficiently; indeed, in terms of rate and dimension dependence it is almost as if the saddle points aren’t there!
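A stripped-down sketch of the perturbed gradient descent idea described above: run ordinary GD, and when the gradient is small (possibly a saddle point), add a small random kick. The step sizes, thresholds, and schedule here are simplified placeholders of my own, not the paper's actual parameters.

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.01, g_thresh=1e-3, radius=1e-2,
                 perturb_every=50, n_steps=500, seed=0):
    """Gradient descent plus an occasional random kick near flat regions --
    the basic mechanism behind perturbed GD (constants simplified)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    last_perturb = -perturb_every
    for t in range(n_steps):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh and t - last_perturb >= perturb_every:
            # Tiny gradient: could be a minimum or a saddle; a small random
            # perturbation lets the iterate slide off a saddle's unstable direction.
            x += rng.uniform(-radius, radius, size=x.shape)
            last_perturb = t
        else:
            x -= eta * g
    return x

# f(x, y) = x^2 - y^2 has a saddle at the origin; plain GD started there is stuck,
# while the perturbed version escapes along the -y^2 direction.
grad_f = lambda z: np.array([2 * z[0], -2 * z[1]])
print(perturbed_gd(grad_f, [0.0, 0.0]))
```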
acmtariat  org:bleg  nibble  liner-notes  machine-learning  acm  optimization  gradient-descent  local-global  off-convex  time-complexity  random  perturbation  michael-jordan  iterative-methods  research  learning-theory  math.DS  iteration-recursion 
july 2017 by nhaliday
Unsupervised learning, one notion or many? – Off the convex path
(Task A) Learning a distribution from samples. (Examples: Gaussian mixtures, topic models, variational autoencoders, ...)

(Task B) Understanding latent structure in the data. This is not the same as (Task A); for example principal component analysis, clustering, manifold learning etc. identify latent structure but don’t learn a distribution per se.

(Task C) Feature Learning. Learn a mapping from datapoint → feature vector such that classification tasks are easier to carry out on feature vectors rather than datapoints. For example, unsupervised feature learning could help lower the amount of labeled samples needed for learning a classifier, or be useful for domain adaptation.

Task B is often a subcase of Task C, as the intended users of “structure found in data” are humans (scientists) who pore over the representation of data to gain some intuition about its properties, and these “properties” can often be phrased as a classification task.

This post explains the relationship between Tasks A and C, and why they get mixed up in students’ minds. We hope there is also some food for thought here for experts, namely, our discussion about the fragility of the usual “perplexity” definition of unsupervised learning. It explains why Task A doesn’t in practice lead to a good enough solution for Task C. For example, it has been believed for many years that for deep learning, unsupervised pretraining should help supervised training, but this has been hard to show in practice.
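One way to see the Task A / Task C distinction concretely is that they are scored differently. The sketch below (assuming scikit-learn, with arbitrary model and dataset choices of my own) evaluates a fitted distribution by held-out log-likelihood and learned features by downstream classification accuracy.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Task A: learn a distribution over the data; scored by held-out log-likelihood,
# the labels never enter.
gmm = GaussianMixture(n_components=10, covariance_type="diag", random_state=0).fit(X_tr)
print("held-out log-likelihood per sample:", gmm.score(X_te))

# Task C: learn features that make a downstream classifier easier to train;
# scored by that classifier's accuracy, not by likelihood.
pca = PCA(n_components=20).fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)
print("accuracy on PCA features:", clf.score(pca.transform(X_te), y_te))
```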
acmtariat  org:bleg  nibble  machine-learning  acm  thinking  clarity  unsupervised  conceptual-vocab  concept  explanation  features  bayesian  off-convex  deep-learning  latent-variables  generative  intricacy  distribution  sampling 
june 2017 by nhaliday
Prékopa–Leindler inequality | Academically Interesting
Consider the following statements:
1. The shape with the largest volume enclosed by a given surface area is the n-dimensional sphere.
2. A marginal or sum of log-concave distributions is log-concave.
3. Any Lipschitz function of a standard n-dimensional Gaussian distribution concentrates around its mean.
What do these all have in common? Despite being fairly non-trivial and deep results, they all can be proved in less than half of a page using the Prékopa–Leindler inequality.

i.e., Brunn–Minkowski
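For reference, the standard statement and its connection to Brunn–Minkowski: for $0 < \lambda < 1$ and nonnegative measurable $f, g, h : \mathbb{R}^n \to [0,\infty)$ satisfying
$$h\big(\lambda x + (1-\lambda) y\big) \;\ge\; f(x)^{\lambda}\, g(y)^{1-\lambda} \quad \text{for all } x, y \in \mathbb{R}^n,$$
the Prékopa–Leindler inequality gives
$$\int_{\mathbb{R}^n} h \;\ge\; \Big(\int_{\mathbb{R}^n} f\Big)^{\lambda} \Big(\int_{\mathbb{R}^n} g\Big)^{1-\lambda}.$$
Taking $f = \mathbf{1}_A$, $g = \mathbf{1}_B$, $h = \mathbf{1}_{\lambda A + (1-\lambda)B}$ yields $\mathrm{vol}(\lambda A + (1-\lambda)B) \ge \mathrm{vol}(A)^{\lambda}\,\mathrm{vol}(B)^{1-\lambda}$, the multiplicative form of the Brunn–Minkowski inequality noted above.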
acmtariat  clever-rats  ratty  math  acm  geometry  measure  math.MG  estimate  distribution  concentration-of-measure  smoothness  regularity  org:bleg  nibble  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday
Predicting with confidence: the best machine learning idea you never heard of | Locklin on science
The advantages of conformal prediction are manifold. These ideas assume very little about the thing you are trying to forecast, the tool you’re using to forecast or how the world works, and they still produce a pretty good confidence interval. Even if you’re an unrepentant Bayesian, using some of the machinery of conformal prediction, you can tell when things have gone wrong with your prior. The learners work online, and with some modifications and considerations, with batch learning. One of the nice things about calculating confidence intervals as a part of your learning process is that they can actually lower error rates or be used in semi-supervised learning as well. Honestly, I think this is the best bag of tricks since boosting; everyone should know about and use these ideas.

The essential idea is that a “conformity function” exists. Effectively you are constructing a sort of multivariate cumulative distribution function for your machine learning gizmo using the conformity function. Such CDFs exist for classical stuff like ARIMA and linear regression under the correct circumstances; CP brings the idea to machine learning in general, and to models like ARIMA when the standard parametric confidence intervals won’t work. Within the framework, the conformity function, whatever it may be, when used correctly can be guaranteed to give confidence intervals to within a probabilistic tolerance. The original proofs and treatments of conformal prediction, defined for sequences, are extremely computationally inefficient. The conditions can be relaxed in many cases, and the conformity function is in principle arbitrary, though good ones will produce narrower confidence regions. Somewhat confusingly, these good conformity functions are referred to as “efficient”, though they may not be computationally efficient.
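A hedged sketch of the simplest modern variant, split conformal prediction with absolute residuals as the conformity function; this is a generic illustration under exchangeability assumptions, not the sequential construction the post describes, and the usage line with a random forest is hypothetical.

```python
import numpy as np

def split_conformal_interval(model, X_train, y_train, X_cal, y_cal, X_new, alpha=0.1):
    """Split conformal prediction with absolute residuals as the conformity score.
    Any regressor with fit/predict works; coverage is ~1 - alpha under exchangeability."""
    model.fit(X_train, y_train)
    scores = np.abs(y_cal - model.predict(X_cal))   # conformity scores on a held-out calibration set
    n = len(scores)
    # Finite-sample corrected quantile of the calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = model.predict(X_new)
    return pred - q, pred + q

# Usage sketch (hypothetical data and model):
# from sklearn.ensemble import RandomForestRegressor
# lo, hi = split_conformal_interval(RandomForestRegressor(), X_tr, y_tr, X_cal, y_cal, X_new)
```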
techtariat  acmtariat  acm  machine-learning  bayesian  stats  exposition  research  online-learning  probability  decision-theory  frontier  unsupervised  confidence 
february 2017 by nhaliday
Unlearning descriptive statistics | Hacker News
For readers who are OK with some math, I recommend John Myles White's eye-opening post about means, medians, and modes: http://www.johnmyleswhite.com/notebook/2013/03/22/modes-medians-and-means-an-unifying-perspective/. He describes these summary descriptive stats in terms of what penalty function they minimize: mean minimizes L2, median minimizes L1, mode minimizes L0.
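A small numerical check of that unifying view (a toy example of my own): minimize each penalty over a grid of candidate centers and compare against the usual statistics.

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
grid = np.linspace(0, 12, 1201)             # candidate "centers" c

# Each summary statistic minimizes a different penalty over c:
l2 = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)          # squared error  -> mean
l1 = np.abs(x[:, None] - grid[None, :]).sum(axis=0)           # absolute error -> median
l0 = (np.abs(x[:, None] - grid[None, :]) > 1e-9).sum(axis=0)  # mismatch count -> mode
                                                              # (tolerance avoids float equality)
print(round(grid[l2.argmin()], 2), np.mean(x))     # ~3.6 vs 3.6
print(round(grid[l1.argmin()], 2), np.median(x))   # ~2.0 vs 2.0
print(round(grid[l0.argmin()], 2))                 # 2.0, the mode of x
```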
hn  commentary  techtariat  acmtariat  data-science  explanation  multi  norms  org:bleg  nibble  scitariat  expectancy 
february 2017 by nhaliday
Thinking Outside One’s Paradigm | Academically Interesting
I think that as a scientist (or really, even as a citizen) it is important to be able to see outside one’s own paradigm. I currently think that I do a good job of this, but it seems to me that there’s a big danger of becoming more entrenched as I get older. Based on the above experiences, I plan to use the following test: When someone asks me a question about my field, how often have I not thought about it before? How tempted am I to say, “That question isn’t interesting”? If these start to become more common, then I’ll know something has gone wrong.
ratty  clever-rats  academia  science  interdisciplinary  lens  frontier  thinking  rationality  meta:science  curiosity  insight  scholar  innovation  reflection  acmtariat  water  biases  heterodox  🤖  🎓  aging  meta:math  low-hanging  big-picture  hi-order-bits  flexibility  org:bleg  nibble  the-trenches  wild-ideas  metameta  courage  s:**  discovery  context  embedded-cognition  endo-exo  near-far  🔬  info-dynamics  allodium  ideas  questions  within-without 
january 2017 by nhaliday