nhaliday + clever-rats   80

Surveil things, not people – The sideways view
Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.

Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.

...

The idea
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).

Heavy technology is subject to two rules:

1. You can’t use heavy technology in a way that is unacceptably destructive.
2. You can’t use heavy technology to undermine the machinery that enforces these two rules.

To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.

...

This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.

This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
ratty  acmtariat  clever-rats  risk  existence  futurism  technology  policy  alt-inst  proposal  government  intel  authoritarianism  orwellian  tricks  leviathan  security  civilization  ai  ai-control  arms  defense  cybernetics  institutions  law  unintended-consequences  civil-liberty  volo-avolo  power  constraint-satisfaction  alignment 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. Like them, I expect rapid change once AI is powerful enough to replace almost all human workers, but I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason I know of to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen the same pattern in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers whom they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
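
a quick sketch of what such a lumpiness measure could look like (my own illustration, not the Science paper's method; `top_share` and the Pareto-tail parameters are hypothetical choices): compare the share of citations captured by the top 1% of papers across fields.

```python
import numpy as np

def top_share(citations, top_frac=0.01):
    """Share of all citations captured by the top `top_frac` of papers."""
    c = np.sort(np.asarray(citations, dtype=float))[::-1]
    k = max(1, int(len(c) * top_frac))
    return c[:k].sum() / c.sum()

# Synthetic citation counts for two hypothetical fields (illustration only):
# a heavier-tailed field is "lumpier", concentrating citations at the top.
rng = np.random.default_rng(0)
lumpy_field = rng.pareto(1.2, size=10_000) + 1
smooth_field = rng.pareto(3.0, size=10_000) + 1

print(f"lumpy field:  top 1% of papers get {top_share(lumpy_field):.0%} of citations")
print(f"smooth field: top 1% of papers get {top_share(smooth_field):.0%} of citations")
```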

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and will eventually be better at pretty much all of them. What I’ve found hard to accept is a “local explosion”: a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
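
the parenthetical claim (a common "g" factor can emerge even with fully independent module variation, just because tasks share modules) checks out in a toy simulation; this is my own sketch, not Hanson's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks, modules_per_task = 2000, 50, 20, 10

# Each person's module strengths vary independently (no built-in "g").
modules = rng.normal(size=(n_people, n_modules))

# Each task draws on a random subset of modules; performance is their average.
tasks = np.column_stack([
    modules[:, rng.choice(n_modules, size=modules_per_task, replace=False)].mean(axis=1)
    for _ in range(n_tasks)
])

# Tasks end up positively correlated, and one dominant factor appears anyway.
corr = np.corrcoef(tasks, rowvar=False)
off_diag = (corr.sum() - n_tasks) / (n_tasks * (n_tasks - 1))
top_eig = np.linalg.eigvalsh(corr)[-1]
print(f"mean inter-task correlation: {off_diag:.2f}")
print(f"variance share of first factor: {top_eig / n_tasks:.2f}")
```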

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that in a short time it becomes vastly better at software innovation: better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances rather than by innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than the rest of the world could.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Unaligned optimization processes as a general problem for society
TL;DR: There are lots of systems in society which seem to fit the pattern of “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.”

...

Here are some more places where this idea could come into play:

- Marketing—we try to buy things that will make our lives better, but our process for determining this is imperfect. A more powerful optimization process produces extremely good advertising to sell us things that aren’t actually going to make our lives better.
- Politics—we get extremely effective demagogues who turn us against our own better values.
- Lobbying—as industries get bigger, the optimization process that chooses great lobbyists for them gets stronger, but the process that makes regulators robust doesn’t strengthen correspondingly. So regulatory capture gets worse and worse, and rent-seeking gets more and more significant.
- Online content—on a weaker internet, sites couldn’t be addictive except by being good content. On the modern internet, people can feel addicted to things they wish they weren’t addicted to. We didn’t use to have the social expertise to make clickbait nearly as well as we do today.
- News—hyperpartisan news sources become much more viable as distribution gets cheaper and the market gets bigger. News sources get an advantage from being truthful, but as society gets bigger, this advantage gets proportionally smaller.

...

For these reasons, I think it’s quite plausible that humans are fundamentally unable to have a “good” society with a population greater than some threshold, particularly if all these people have access to modern technology. Humans don’t have the rigidity to maintain social institutions in the face of that kind of optimization process. I think it is unlikely but possible (10%?) that this threshold population is smaller than the current population of the US, and that the US will crumble due to the decay of these institutions in the next fifty years if nothing totally crazy happens.
ratty  thinking  metabuch  reflection  metameta  big-yud  clever-rats  ai-control  ai  risk  scale  quality  ability-competence  network-structure  capitalism  randy-ayndy  civil-liberty  marketing  institutions  economics  political-econ  politics  polisci  advertising  rent-seeking  government  coordination  internet  attention  polarization  media  truth  unintended-consequences  alt-inst  efficiency  altruism  society  usa  decentralized  rhetoric  prediction  population  incentives  intervention  criminal-justice  property-rights  redistribution  taxes  externalities  science  monetary-fiscal  public-goodish  zero-positive-sum  markets  cost-benefit  regulation  regularizer  order-disorder  flux-stasis  shift  smoothness  phase-transition  power  definite-planning  optimism  pessimism  homo-hetero  interests  eden-heaven  telos-atelos  threat-modeling  alignment 
february 2018 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it (“Prosaic AGI” [1]). He didn’t think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He’s also less impressed with deep learning than he was before he started working in it: in his experience it’s a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he’s often found that they depend on a bunch of parameters being in just the right range, and without that the systems don’t perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today’s hardware and data had existed in the mid-1950s, researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being “ten years away” to “done” within a couple of years, but they said it wasn’t like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn’t have happened earlier because the hardware wasn’t there yet, and was only brought forward by a massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't be. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Prékopa–Leindler inequality | Academically Interesting
Consider the following statements:
1. The shape with the largest volume enclosed by a given surface area is the n-dimensional sphere.
2. A marginal or sum of log-concave distributions is log-concave.
3. Any Lipschitz function of a standard n-dimensional Gaussian distribution concentrates around its mean.
What do these all have in common? Despite being fairly non-trivial and deep results, they can all be proved in less than half a page using the Prékopa–Leindler inequality.

i.e., Brunn–Minkowski
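
for reference, the inequality itself (standard statement, with the Brunn–Minkowski specialization the note alludes to):

```latex
\textbf{Prékopa--Leindler.} Fix $\lambda \in (0,1)$ and measurable
$f, g, h : \mathbb{R}^n \to [0,\infty)$ such that
\[
  h(\lambda x + (1-\lambda) y) \;\ge\; f(x)^{\lambda}\, g(y)^{1-\lambda}
  \qquad \text{for all } x, y \in \mathbb{R}^n .
\]
Then
\[
  \int_{\mathbb{R}^n} h \;\ge\; \Bigl(\int_{\mathbb{R}^n} f\Bigr)^{\!\lambda}
  \Bigl(\int_{\mathbb{R}^n} g\Bigr)^{\!1-\lambda}.
\]
% Taking f = 1_A, g = 1_B, h = 1_{\lambda A + (1-\lambda)B} yields
% |\lambda A + (1-\lambda)B| \ge |A|^{\lambda} |B|^{1-\lambda},
% the multiplicative (dimension-free) form of Brunn--Minkowski.
```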
acmtariat  clever-rats  ratty  math  acm  geometry  measure  math.MG  estimate  distribution  concentration-of-measure  smoothness  regularity  org:bleg  nibble  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday
Thinking Outside One’s Paradigm | Academically Interesting
I think that as a scientist (or really, even as a citizen) it is important to be able to see outside one’s own paradigm. I currently think that I do a good job of this, but it seems to me that there’s a big danger of becoming more entrenched as I get older. Based on the above experiences, I plan to use the following test: When someone asks me a question about my field, how often have I not thought about it before? How tempted am I to say, “That question isn’t interesting”? If these start to become more common, then I’ll know something has gone wrong.
ratty  clever-rats  academia  science  interdisciplinary  lens  frontier  thinking  rationality  meta:science  curiosity  insight  scholar  innovation  reflection  acmtariat  water  biases  heterodox  🤖  🎓  aging  meta:math  low-hanging  big-picture  hi-order-bits  flexibility  org:bleg  nibble  the-trenches  wild-ideas  metameta  courage  s:**  discovery  context  embedded-cognition  endo-exo  near-far  🔬  info-dynamics  allodium  ideas  questions  within-without  meta:research 
january 2017 by nhaliday
Intelligent Agent Foundations Forum | Online Learning 1: Bias-detecting online learners
apparently can maybe be used to shave an exponent off the bound in Christiano's manipulation-resistant reputation system paper
ratty  clever-rats  online-learning  acm  research  ai-control  miri-cfar 
november 2016 by nhaliday
The best kind of discrimination – The sideways view
I think it would be nice if the world had more price discrimination; we would produce more goods, and those goods would be available to more people. As a society we could enable price discrimination by providing more high-quality signals for price discriminators to use. The IRS is in a particularly attractive position to offer such signals, since income is an especially useful one. But realistically I think such a proposal would require coordination to get consumers’ consent to make the data available (and to ensure that only upper bounds were available); the total gains are probably not large enough to justify the amount of coordination and complexity required.

apparently about 1/3 of income goes to capital-holders, and 2/3 to workers (wonder what the source for that is, and how consistent it is across industries)
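
a toy version of the efficiency claim (my own sketch, not from the post; all values and counts are made up): a zero-marginal-cost monopolist facing two consumer types serves more buyers under perfect price discrimination than under a single posted price.

```python
# Toy model: zero marginal cost, two consumer types (numbers for illustration).
values = {"high": 10.0, "low": 3.0}
counts = {"high": 500, "low": 500}

# Single posted price: the seller picks the revenue-maximizing uniform price,
# which here excludes the low-value buyers entirely.
revenue = {p: p * sum(n for t, n in counts.items() if values[t] >= p)
           for p in values.values()}
best_price = max(revenue, key=revenue.get)
served_uniform = sum(n for t, n in counts.items() if values[t] >= best_price)

# Perfect price discrimination: each type pays its own value, so everyone
# with positive value gets served and total output is higher.
served_discriminating = sum(counts.values())

print(f"uniform price {best_price:.0f}: {served_uniform} buyers served")
print(f"price discrimination:  {served_discriminating} buyers served")
```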
clever-rats  ratty  alt-inst  economics  proposal  policy  markets  arbitrage  hmm  efficiency  street-fighting  analysis  gray-econ  🤖  acmtariat  compensation  distribution  objektbuch  capital  labor  cost-benefit  capitalism  ideas  discrimination  supply-demand  micro 
november 2016 by nhaliday
Overcoming Bias : Lognormal Jobs
could be the case that exponential tech improvement -> linear job replacement, as long as the distribution of jobs across automatability is log-normal (I don't entirely follow the argument)

Paul Christiano has objection (to premise not argument) in the comments
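
a quick numerical check of the claimed shape (my own sketch of the argument, not Hanson's; the Normal(0, 5) difficulty distribution, starting capability, and doubling time are made-up assumptions):

```python
from math import erf, sqrt, log

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Assume log required capability per job ~ Normal(0, sigma) (lognormal jobs),
# and machine capability doubles every 2 years, so log-capability grows
# linearly in time.
sigma = 5.0
for year in range(0, 41, 8):
    log_capability = -10.0 + year * (log(2.0) / 2.0)
    share = normal_cdf(log_capability / sigma)
    print(f"year {year:2d}: {share:5.1%} of jobs automatable")
# Around the median, the normal CDF is nearly linear, so exponential progress
# in capability shows up as roughly linear growth in the automated share.
```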
hanson  thinking  street-fighting  futurism  automation  labor  economics  ai  prediction  🎩  gray-econ  regularizer  contrarianism  c:*  models  distribution  marginal  2016  meta:prediction  discussion  clever-rats  ratty  speedometer  ideas  neuro  additive  multiplicative  magnitude  iteration-recursion 
november 2016 by nhaliday
A Fervent Defense of Frequentist Statistics - Less Wrong
Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be “Myth 5” below, which describes the online learning framework as a response to the claim that frequentist methods need to make strong modeling assumptions. Among other things, online learning allows me to perform the following remarkable feat: if I’m betting on horses, and I get to place bets after watching other people bet but before seeing which horse wins the race, then I can guarantee that after a relatively small number of races, I will do almost as well overall as the best other person, even if the number of other people is very large (say, 1 billion), and their performance is correlated in complicated ways.
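
the feat described is the standard regret guarantee of the exponential-weights (Hedge) online learner; a minimal sketch of it (my own, under the usual bounded-loss assumption, not code from the essay):

```python
import numpy as np

def hedge(loss_matrix, eta=None):
    """Exponential-weights (Hedge) learner over T rounds and N experts.

    loss_matrix: array of shape (T, N) with losses in [0, 1].
    Returns (learner's total expected loss, best single expert's total loss).
    """
    T, N = loss_matrix.shape
    if eta is None:
        eta = np.sqrt(2.0 * np.log(N) / T)  # standard tuning for known horizon
    log_w = np.zeros(N)
    total = 0.0
    for losses in loss_matrix:
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                 # bet proportionally to current weights
        total += p @ losses          # suffer the weighted (expected) loss
        log_w -= eta * losses        # exponentially downweight bad experts
    return total, loss_matrix.sum(axis=0).min()

# Regret grows like sqrt(T log N): negligible per round, even for huge N.
rng = np.random.default_rng(0)
ours, best = hedge(rng.random((2000, 100)))
print(f"learner {ours:.1f} vs best expert {best:.1f} (regret {ours - best:.1f})")
```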

If you’re only going to understand two points, then also read about the frequentist version of Solomonoff induction, which is described in “Myth 6”.

...

If you are like me from, say, two years ago, you are firmly convinced that Bayesian methods are superior and that you have knockdown arguments in favor of this. If this is the case, then I hope this essay will give you an experience that I myself found life-altering: the experience of having a way of thinking that seemed unquestionably true slowly dissolve into just one of many imperfect models of reality. This experience helped me gain more explicit appreciation for the skill of viewing the world from many different angles, and of distinguishing between a very successful paradigm and reality.

If you are not like me, then you may have had the experience of bringing up one of many reasonable objections to normative Bayesian epistemology, and having it shot down by one of many “standard” arguments that seem wrong but not for easy-to-articulate reasons. I hope to lend some reprieve to those of you in this camp, by providing a collection of “standard” replies to these standard arguments.
bayesian  philosophy  stats  rhetoric  advice  debate  critique  expert  lesswrong  commentary  discussion  regularizer  essay  exposition  🤖  aphorism  spock  synthesis  clever-rats  ratty  hi-order-bits  top-n  2014  acmtariat  big-picture  acm  iidness  online-learning  lens  clarity  unit  nibble  frequentist  s:**  expert-experience  subjective-objective  grokkability-clarity 
september 2016 by nhaliday
Risk Arbitrage | Ordinary Ideas
People have different risk profiles, and different beliefs about the future. But it seems to me like these differences should probably get washed out in markets, so that as a society we pursue investments if and only if they have good returns using some particular beliefs (call them the market’s beliefs) and with respect to some particular risk profile (call it the market’s risk profile).

As it turns out, if we idealize the world hard enough these two notions collapse, yielding a single probability distribution P which has the following property: on the margins, every individual should make an investment if and only if it has a positive expected value with respect to P. This probability distribution tends to be somewhat pessimistic: because people care about wealth more in worlds where wealth is scarce (being risk averse), events like a complete market collapse receive higher probability under P than under the “real” probability distribution over possible futures.
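
a toy numerical version of the pessimism claim (my own sketch of the standard stochastic-discount-factor idea, not code from the post; states, probabilities, and log utility are made-up assumptions): reweight the "real" probabilities by marginal utility of wealth.

```python
import numpy as np

# Three stylized futures with "real" probabilities and aggregate wealth levels
# (all numbers made up for illustration).
states = ["boom", "normal", "collapse"]
p_real = np.array([0.30, 0.65, 0.05])
wealth = np.array([2.0, 1.0, 0.1])

# With log utility, marginal utility u'(w) = 1/w: an extra dollar matters
# most exactly in the worlds where wealth is scarce.
marginal_utility = 1.0 / wealth

# The market's distribution P reweights real probabilities by marginal utility.
p_market = p_real * marginal_utility
p_market /= p_market.sum()

for s, pr, pm in zip(states, p_real, p_market):
    print(f"{s:8s}  real {pr:.3f}  ->  market {pm:.3f}")
# "collapse" carries far more weight under P than under the real distribution;
# on the margin, make an investment iff its expected value under P is positive.
```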
insight  thinking  hanson  rationality  explanation  finance  🤖  alt-inst  spock  confusion  prediction-markets  markets  ratty  decision-theory  clever-rats  pre-2013  acmtariat  outcome-risk  info-econ  info-dynamics 
september 2016 by nhaliday
What is up with carbon dioxide and cognition? An offer - Less Wrong Discussion
study: http://ehp.niehs.nih.gov/1104789/
n=22, p-values < .001 generally, no multiple comparisons or anything, right?
chart: http://ehp.niehs.nih.gov/wp-content/uploads/2012/11/ehp.1104789.g002.png
- note it's CO2 not oxygen that's relevant
- some interesting debate in comments about whether you would find similar effects for similar levels of variation in oxygen, implications for high-altitude living, etc.
- CO2 levels can reach quite high values indoors (~1500 ppm, and even ~7000 ppm in some of Gwern's experiments); this seems to be enough to impact cognition to a significant degree
- outdoor air quality often better than indoor even in urban areas (see other studies)

the solution: houseplants, http://lesswrong.com/lw/nk0/what_is_up_with_carbon_dioxide_and_cognition_an/d956

https://twitter.com/menangahela/status/965167009083379712
https://archive.is/k0I0U
except that environmental instability tends to be harder on more 'complex' adaptations, and CO2 ppm directly correlates with decreased effectiveness of cognition-enhancing traits via chronic low-grade acidosis
productivity  study  gotchas  workflow  money-for-time  neuro  gwern  embodied  hypochondria  hmm  lesswrong  🤖  spock  nootropics  embodied-cognition  evidence-based  ratty  clever-rats  atmosphere  rat-pack  psychology  cog-psych  🌞  field-study  multi  c:**  2016  human-study  acmtariat  embodied-street-fighting  biodet  objective-measure  decision-making  s:*  embodied-pack  intervention  iq  environmental-effects  branches  unintended-consequences  twitter  social  discussion  backup  gnon  mena4  land  🐸  environment  climate-change  intelligence  structure 
may 2016 by nhaliday
Deliberate Grad School | Andrew Critch
- find a flexible program (math, stats, TCS)
- high-impact topic
- teach
- use freedom to visibly accomplish things
- organize seminar
- get exposure to experts
- learn how productive researchers work
- remember you don't have to stay in academia
academia  grad-school  advice  phd  reflection  expert  long-term  🎓  high-variance  aphorism  hi-order-bits  top-n  tactics  strategy  ratty  core-rats  multi  success  flexibility  metameta  s:*  s-factor  clever-rats  expert-experience  cs  math  stats  machine-learning 
may 2016 by nhaliday
paulfchristiano/dwimmer
dwimmer means sorcery, but idk what this is otherwise, maybe a logic programming repl?

relevant?: https://ai-alignment.com/learning-and-logic-e96bd41b1ab5
rationality  programming  tools  thinking  idk  worrydream  repo  clever-rats  ratty  multi  org:med  acmtariat  ai-control 
april 2016 by nhaliday
