nhaliday + research   233

The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace nearly all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few large packages rather than in the usual many small packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers whom they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
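One simple way to make "citation lumpiness" concrete is a concentration measure such as the Gini coefficient of a field's citation counts. The sketch below is my own illustration of the idea, not the paper's actual method: a heavy-tailed (lognormal) citation distribution scores as lumpier than an evenly spread (Poisson) one.

```python
import numpy as np

def gini(x):
    """Gini coefficient: 0 = perfectly even, approaching 1 = one item has everything."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2 * (i * x).sum() / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(1)
even = rng.poisson(20, size=10_000)                      # citations spread fairly evenly
lumpy = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)  # heavy-tailed: a few big hits
print(gini(even) < gini(lumpy))  # True: the heavy-tailed field is "lumpier"
```

Hanson's proposed indicator is then whether recent ML citation data departs from the cross-field constancy of this kind of measure.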

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
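Hanson's parenthetical — that a common "g" factor would emerge even if modules varied independently — can be illustrated with a toy simulation (my own sketch, not from the post). If each task draws on a random subset of many independently varying modules, scores on different tasks still correlate positively, because tasks share modules:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks = 2000, 50, 20

# Each person has an independent ability level on each mental module.
abilities = rng.normal(size=(n_people, n_modules))

# Each task uses a random subset of modules; score = mean ability on that subset.
scores = np.empty((n_people, n_tasks))
for t in range(n_tasks):
    used = rng.choice(n_modules, size=10, replace=False)
    scores[:, t] = abilities[:, used].mean(axis=1)

# Tasks correlate positively (here around 0.2 on average) purely because
# they overlap in modules, even though module abilities are independent.
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tasks, dtype=bool)]
print(round(off_diag.mean(), 2))
```

Positive correlations across all task pairs are exactly the pattern from which a single general factor is extracted, so no shared "intelligence module" is needed to explain one.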

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract in ways that promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Information Processing: Mathematical Theory of Deep Neural Networks (Princeton workshop)
"Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems."
hsu  scitariat  commentary  links  research  research-program  workshop  events  princeton  sanjeev-arora  deep-learning  machine-learning  ai  generalization  explanans  off-convex  nibble  frontier  speedometer  state-of-art  big-surf  announcement 
january 2018 by nhaliday
Sequence Modeling with CTC
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
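The core of CTC decoding — merge repeated symbols in the frame-level path, then drop the blank token — fits in a few lines. This is an illustrative sketch of the collapse rule, not code from the guide:

```python
from itertools import groupby

BLANK = "-"  # the CTC blank symbol (notation assumed for this sketch)

def ctc_collapse(path):
    """Map a frame-level CTC path to an output string:
    first merge runs of repeated symbols, then remove blanks."""
    merged = [sym for sym, _ in groupby(path)]
    return "".join(s for s in merged if s != BLANK)

print(ctc_collapse("hh-e-ll-lloo"))  # "hello"
print(ctc_collapse("aa--a"))         # "aa" — blanks let CTC emit genuine repeats
```

The second example shows why the blank exists: without it, "aa" would collapse to "a", so the blank is what lets the network distinguish a repeated character from a long-held one.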
acmtariat  techtariat  org:bleg  nibble  better-explained  machine-learning  deep-learning  visual-understanding  visualization  analysis  let-me-see  research  sequential  audio  classification  model-class  exposition  language  acm  approximation  comparison  markov  iteration-recursion  concept  atoms  distribution  orders  DP  heuristic  optimization  trees  greedy  matching  gradient-descent 
december 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and could only be brought forward by a massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Correlated Equilibria in Game Theory | Azimuth
Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes the Nash equilibrium prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:
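Aumann's concept can be made concrete with the classic traffic-light story in the game of Chicken (my illustration; the payoff numbers and checker code below are assumptions for this sketch, not from the post). A distribution over joint actions is a correlated equilibrium if, whenever the mediator recommends an action, no player gains in expectation by deviating from it:

```python
# Chicken: action 0 = Dare, 1 = Swerve. payoff[(a1, a2)] = (p1's payoff, p2's payoff).
payoff = {(0, 0): (0, 0), (0, 1): (7, 2), (1, 0): (2, 7), (1, 1): (6, 6)}

# Candidate correlated equilibrium: the mediator never recommends (Dare, Dare).
dist = {(0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3}

def is_correlated_eq(dist, payoff, tol=1e-9):
    """Check that no player gains by swapping any recommended action for another."""
    for player in (0, 1):
        for rec in (0, 1):      # action the mediator recommends
            for dev in (0, 1):  # action the player might play instead
                gain = 0.0
                for prof, p in dist.items():
                    if prof[player] != rec:
                        continue
                    other = prof[1 - player]
                    swap = (dev, other) if player == 0 else (other, dev)
                    gain += p * (payoff[swap][player] - payoff[prof][player])
                if gain > tol:
                    return False
    return True

print(is_correlated_eq(dist, payoff))  # True
```

Note this distribution gives each player expected payoff 5, better than the symmetric mixed Nash equilibrium can deliver, which is why correlated equilibria are attractive in practice.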
baez  org:bleg  nibble  mathtariat  commentary  summary  news  org:mag  org:sci  popsci  equilibrium  GT-101  game-theory  acm  conceptual-vocab  concept  definition  thinking  signaling  coordination  tcs  complexity  communication-complexity  lower-bounds  no-go  liner-notes  big-surf  papers  research  algorithmic-econ  volo-avolo 
july 2017 by nhaliday
How to Escape Saddle Points Efficiently – Off the convex path
A core, emerging problem in nonconvex optimization involves the escape of saddle points. While recent research has shown that gradient descent (GD) generically escapes saddle points asymptotically (see Rong Ge’s and Ben Recht’s blog posts), the critical open problem is one of efficiency — is GD able to move past saddle points quickly, or can it be slowed down significantly? How does the rate of escape scale with the ambient dimensionality? In this post, we describe our recent work with Rong Ge, Praneeth Netrapalli and Sham Kakade, that provides the first provable positive answer to the efficiency question, showing that, rather surprisingly, GD augmented with suitable perturbations escapes saddle points efficiently; indeed, in terms of rate and dimension dependence it is almost as if the saddle points aren’t there!
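The mechanism can be seen on the simplest possible saddle, f(x, y) = x² − y². Plain gradient descent started exactly on the saddle's stable direction converges to the saddle and stalls; adding tiny random perturbations pushes the iterate onto the unstable direction, which then grows geometrically. This is a toy sketch of the idea, not the paper's actual PGD algorithm:

```python
import numpy as np

def grad(p):
    # f(x, y) = x^2 - y^2; (0, 0) is a saddle point
    x, y = p
    return np.array([2 * x, -2 * y])

def gd(p, steps=300, lr=0.05, noise=0.0, rng=None):
    p = np.array(p, dtype=float)
    for _ in range(steps):
        p -= lr * grad(p)
        if noise and rng is not None:
            p += noise * rng.normal(size=2)  # small isotropic perturbation
    return p

rng = np.random.default_rng(0)
stuck = gd([1.0, 0.0])                       # starts on the stable manifold: y stays 0
escaped = gd([1.0, 0.0], noise=1e-3, rng=rng)
print(abs(stuck[1]) < 1e-6, abs(escaped[1]) > 1.0)  # True True
```

The paper's contribution is quantitative — the escape is fast, with near-dimension-free rates — but the qualitative effect is just this: noise breaks the measure-zero event of sitting exactly on the stable manifold.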
acmtariat  org:bleg  nibble  liner-notes  machine-learning  acm  optimization  gradient-descent  local-global  off-convex  time-complexity  random  perturbation  michael-jordan  iterative-methods  research  learning-theory  math.DS  iteration-recursion 
july 2017 by nhaliday
A Unified Theory of Randomness | Quanta Magazine
Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects shared in common.

Yet in work over the past few years, Sheffield and his frequent collaborator, Jason Miller, a professor at the University of Cambridge, have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness.
news  org:mag  org:sci  math  research  probability  profile  structure  geometry  random  popsci  nibble  emergent  org:inst 
february 2017 by nhaliday
Predicting with confidence: the best machine learning idea you never heard of | Locklin on science
The advantages of conformal prediction are manifold. These ideas assume very little about the thing you are trying to forecast, the tool you’re using to forecast or how the world works, and they still produce a pretty good confidence interval. Even if you’re an unrepentant Bayesian, using some of the machinery of conformal prediction, you can tell when things have gone wrong with your prior. The learners work online, and with some modifications and considerations, with batch learning. One of the nice things about calculating confidence intervals as a part of your learning process is that they can actually lower error rates, or be used in semi-supervised learning as well. Honestly, I think this is the best bag of tricks since boosting; everyone should know about and use these ideas.

The essential idea is that a “conformity function” exists. Effectively you are constructing a sort of multivariate cumulative distribution function for your machine learning gizmo using the conformity function. Such CDFs exist for classical stuff like ARIMA and linear regression under the correct circumstances; CP brings the idea to machine learning in general, and to models like ARIMA when the standard parametric confidence intervals won’t work. Within the framework, the conformity function, whatever it may be, when used correctly can be guaranteed to give confidence intervals to within a probabilistic tolerance. The original proofs and treatments of conformal prediction, defined for sequences, are extremely computationally inefficient. The conditions can be relaxed in many cases, and the conformity function is in principle arbitrary, though good ones will produce narrower confidence regions. Somewhat confusingly, these good conformity functions are referred to as “efficient” — though they may not be computationally efficient.
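The simplest practical instance of these ideas is split (inductive) conformal prediction, where the conformity function is just the absolute residual on a held-out calibration set. Under exchangeability the resulting interval covers new points with probability at least 1 − α, regardless of the underlying model. A minimal sketch, with toy data I made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy regression data: y = 2x + noise
x = rng.uniform(0, 10, 400)
y = 2 * x + rng.normal(0, 1, 400)

# Split the data: fit the model on one half, calibrate on the other.
fit_x, fit_y = x[:200], y[:200]
cal_x, cal_y = x[200:], y[200:]
slope, intercept = np.polyfit(fit_x, fit_y, 1)
predict = lambda t: slope * t + intercept

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(cal_y - predict(cal_x))
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction interval for a new point: point prediction +/- q.
x_new = 5.0
print(predict(x_new) - q, predict(x_new) + q)
```

Nothing here depends on the model being a line — swap `predict` for any black-box regressor and the same calibration step yields valid intervals, which is the "bag of tricks" quality the post is praising.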
techtariat  acmtariat  acm  machine-learning  bayesian  stats  exposition  research  online-learning  probability  decision-theory  frontier  unsupervised  confidence 
february 2017 by nhaliday
(Gil Kalai) The weak epsilon-net problem | What's new
This is a problem in discrete and convex geometry. It seeks to quantify the intuitively obvious fact that large convex bodies are so “fat” that they cannot avoid “detection” by a small number of observation points.
gowers  mathtariat  tcstariat  tcs  math  concept  rounding  linear-programming  research  open-problems  geometry  math.CO  magnitude  probabilistic-method  math.MG  discrete  nibble  org:bleg  questions  curvature  pigeonhole-markov  convexity-curvature 
january 2017 by nhaliday