nhaliday + hardware (46 bookmarks)

Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future.

Estimates of size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- ~10^10 neurons in the brain vs. ~10^4 vacuum tubes in the largest computer at the time
- machines are faster: ~5 ms from neuron potential to neuron potential vs. ~10^-3 ms for vacuum tubes
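The two ratios are worth making explicit. A quick back-of-the-envelope check in Python (figures taken from the estimates above):

```python
# Fermi check of von Neumann's size/speed comparison,
# using the figures quoted in the notes above.
neurons = 1e10           # neurons in the human brain
tubes = 1e4              # vacuum tubes in the largest machine of the era
neuron_cycle_ms = 5.0    # ms, neuron potential to neuron potential
tube_cycle_ms = 1e-3     # ms, vacuum-tube switching time

print(f"size ratio (brain/machine):  {neurons / tubes:.0e}")                  # 1e+06
print(f"speed ratio (machine/brain): {neuron_cycle_ms / tube_cycle_ms:.0e}")  # 5e+03
```

So natural automata were roughly a million times larger while artificial ones were thousands of times faster, which is exactly the size/speed discrepancy the lecture goes on to interpret.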

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to be correct. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 1 February 2018; status: finished; confidence: likely; importance: 10
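One of the abstract's points, nonlinear returns to small intelligence advantages, is easy to illustrate with a toy model (my sketch, not Gwern's; the logistic contest model and the `beta` sharpness parameter are assumptions):

```python
# Toy model: capability has strongly diminishing (logarithmic) returns to
# compute, yet a modest constant-factor compute edge still wins most
# head-to-head contests, because contests amplify small capability gaps.
import math, random

random.seed(0)

def capability(compute):        # diminishing returns: log of resources
    return math.log(compute)

def win_prob(gap, beta=8.0):    # assumed logistic contest model
    return 1 / (1 + math.exp(-beta * gap))

gap = capability(1.5e6) - capability(1e6)    # a 1.5x compute advantage
wins = sum(random.random() < win_prob(gap) for _ in range(10_000))
print(f"capability gap:          {gap:.3f}")              # ~0.405, small
print(f"win rate with 1.5x edge: {wins / 10_000:.1%}")    # ~96%
```

Under these assumptions the asymptotic story (logarithmic returns) and the competitive story (near-total dominance from a small edge) point in opposite directions, which is the shape of Gwern's objection.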
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
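The parenthetical claim is easy to verify by simulation. A minimal sketch (mine; the module counts and subset sizes are made up): give each person independent module qualities, score each task as the mean of a random subset of modules, and a positive manifold with a dominant first factor appears anyway:

```python
# Independent module variation still yields a common "g" factor,
# because tasks share overlapping subsets of modules.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks = 2000, 100, 8

# Each person's module qualities vary independently.
modules = rng.normal(size=(n_people, n_modules))

# Each task uses a random subset of 30 modules; performance is their mean.
tasks = np.stack(
    [modules[:, rng.choice(n_modules, size=30, replace=False)].mean(axis=1)
     for _ in range(n_tasks)], axis=1)

corr = np.corrcoef(tasks, rowvar=False)
print(corr.round(2))                # all off-diagonal correlations positive

eigvals = np.linalg.eigvalsh(corr)  # ascending order
print(f"variance share of first factor: {eigvals[-1] / eigvals.sum():.0%}")
```

With 30-of-100 module overlap, any two tasks share about 9 modules on average, so task performances correlate at roughly 0.3 and the first factor carries a large share of the variance, with no underlying unitary intelligence required.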

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-game match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better at developing algorithms and computing power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
Is the keyboard faster than the mouse?
Conclusion

It’s entirely possible that the mysterious studies Tog’s org spent $50M on prove that the mouse is faster than the keyboard for all tasks other than raw text input, but there doesn’t appear to be enough information to tell what the actual studies were. There are many public studies on user input, but I couldn’t find any that are relevant to whether or not I should use the mouse more or less at the margin.

When I look at various tasks myself, the results are mixed, and they’re mixed in the way that most programmers I polled predicted. This result is so boring that it would barely be worth mentioning if not for the large groups of people who believe that either the keyboard is always faster than the mouse or vice versa.

Please let me know if there are relevant studies on this topic that I should read! I’m not familiar with the relevant fields, so it’s possible that I’m searching with the wrong keywords and reading the wrong papers.
techtariat  dan-luu  engineering  programming  productivity  workflow  hci  hardware  working-stiff  benchmarks 
november 2017 by nhaliday
Two theories of home heat control - ScienceDirect
People routinely develop their own theories to explain the world around them. These theories can be useful even when they contradict conventional technical wisdom. Based on in-depth interviews about home heating and thermostat setting behavior, the present study presents two theories people use to understand and adjust their thermostats. The two theories are here called the feedback theory and the valve theory. The valve theory is inconsistent with engineering knowledge, but is estimated to be held by 25% to 50% of Americans. Predictions of each of the theories are compared with the operations normally performed in home heat control. This comparison suggests that the valve theory may be highly functional in normal day-to-day use. Further data is needed on the ways this theory guides behavior in natural environments.
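The behavioral difference between the theories is small enough to put in code. A toy simulation (my construction, not from the paper): under the feedback theory the furnace is simply on or off, so raising the setpoint does not warm the room any faster; it only changes when the furnace later shuts off, contrary to what the valve theory predicts:

```python
# Bang-bang thermostat (feedback theory): time to reach a target
# temperature is independent of how high the setpoint is cranked.
def minutes_to_reach(target_c, setpoint_c, start_c=15.0, furnace_c=60.0,
                     k=0.05, dt_min=1.0):
    t, temp = 0.0, start_c
    while temp < target_c:
        heating = temp < setpoint_c            # thermostat is on/off only
        source = furnace_c if heating else start_c
        temp += k * (source - temp) * dt_min   # Newtonian heating/cooling
        t += dt_min
    return t

print(minutes_to_reach(20, setpoint_c=20))  # same answer...
print(minutes_to_reach(20, setpoint_c=30))  # ...with a higher setpoint
```

A valve theory, where the setpoint controls the rate of heat flow, would predict the second call returns a smaller number; the identical outputs are what the feedback theory implies.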
study  hci  ux  hardware  embodied  engineering  dirty-hands  models  thinking  trivia  cocktail  map-territory  realness  neurons  psychology  cog-psych  social-psych  error  usa  poll  descriptive  temperature  protocol 
september 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid-1950s, researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.
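The backprop example is apt because the algorithm itself is tiny relative to the compute it demands. A standard sketch (not their code) of a two-layer net trained by backprop on XOR, the kind of multi-layer training understood long before hardware caught up:

```python
# Minimal backprop: a 2-layer sigmoid net learning XOR with numpy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                   # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)        # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

The point of the anecdote is that code like this was, conceptually, available decades before the hardware existed to scale it to anything useful.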

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Overcoming Bias : A Future Of Pipes
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.
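The even hardware/energy split can be recovered from a simple optimization (my reconstruction of the standard reversible-computing argument; Hanson's own model may differ). In adiabatic operation, energy dissipated per operation scales roughly linearly with speed, while hardware rental per operation scales inversely with speed:

\[
\text{cost per op}(s) = \frac{a}{s} + b\,s,
\qquad
\frac{d}{ds}\!\left(\frac{a}{s} + b\,s\right) = -\frac{a}{s^2} + b = 0
\;\Rightarrow\;
s^* = \sqrt{a/b}.
\]

At the optimum, \(a/s^* = b\,s^* = \sqrt{ab}\): the hardware term and the energy term are equal, so half the budget goes to each.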

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
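In symbols (a restatement of the two paragraphs above, with \(L\) the city's linear scale):

\[
A \propto L^2, \quad V \propto L^3
\;\Rightarrow\;
\frac{V}{A} \propto L,
\]

so naive surface cooling falls behind linearly in scale, while a fractal (H-tree-style) distribution network pays only a constant extra cost per doubling of the city, giving total overhead proportional to \(\log V\) rather than to \(L\).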
hanson  futurism  prediction  street-fighting  essay  len:short  ratty  computation  hardware  thermo  structure  composition-decomposition  complex-systems  magnitude  analysis  urban-rural  power-law  phys-energy  detail-architecture  efficiency  economics  supply-demand  labor  planning  long-term  physics  temperature  flux-stasis  fluid  measure  technology  frontier  speedometer  career  cost-benefit  identity  stylized-facts  objektbuch  data  trivia  cocktail 
august 2016 by nhaliday
Cyborg Nest
North Sense implant to give you an internal compass
cool  hardware  bio  futurism  toys  neuro  ui  cocktail  hmm  wtf  electromag 
june 2016 by nhaliday

