nhaliday + liner-notes (79 bookmarks)

[1509.02504] Electric charge in hyperbolic motion: The early history and other geometrical aspects
We revisit the early work of Minkowski and Sommerfeld concerning hyperbolic motion, and we describe some geometrical aspects of the electrodynamic interaction. We discuss the advantages of a time symmetric formulation in which the material points are replaced by infinitesimal length elements.

SPACE AND TIME: An annotated, illustrated edition of Hermann Minkowski's revolutionary essay: http://web.mit.edu/redingtn/www/netadv/SP20130311.html
nibble  preprint  papers  org:mat  physics  electromag  relativity  exposition  history  mostly-modern  pre-ww2  science  the-trenches  discovery  intricacy  classic  explanation  einstein  giants  plots  manifolds  article  multi  liner-notes  org:junk  org:edu  absolute-relative 
november 2017 by nhaliday
[1709.06560] Deep Reinforcement Learning that Matters
https://twitter.com/WAWilsonIV/status/912505885565452288
I’ve been experimenting w/ various kinds of value function approaches to RL lately, and it’s striking how primitive and bad things seem to be
At first I thought it was just that my code sucks, but then I played with the OpenAI baselines and nope, it’s the children that are wrong.
And now, what comes across my desk but this fantastic paper: https://arxiv.org/abs/1709.06560 How long until the replication crisis hits AI?

https://twitter.com/WAWilsonIV/status/911318326504153088
Seriously I’m not blown away by the PhDs’ records over the last 30 years. I bet you’d get better payoff funding eccentrics and amateurs.
There are essentially zero fundamentally new ideas in AI, the papers are all grotesquely hyperparameter tuned, nobody knows why it works.

Deep Reinforcement Learning Doesn't Work Yet: https://www.alexirpan.com/2018/02/14/rl-hard.html
Once, on Facebook, I made the following claim.

Whenever someone asks me if reinforcement learning can solve their problem, I tell them it can’t. I think this is right at least 70% of the time.
papers  preprint  machine-learning  acm  frontier  speedometer  deep-learning  realness  replication  state-of-art  survey  reinforcement  multi  twitter  social  discussion  techtariat  ai  nibble  org:mat  unaffiliated  ratty  acmtariat  liner-notes  critique  sample-complexity  cost-benefit  todo 
september 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
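
(For reference, the bottleneck objective itself — standard in Tishby's papers, not quoted in the article: compress the representation T of the input X while keeping what predicts the label Y.)

```latex
% Information bottleneck Lagrangian: T is a stochastic encoding of X;
% beta trades compression, I(X;T), against prediction, I(T;Y).
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

The autoencoder comparison is close but not exact: an autoencoder compresses to reconstruct X itself, while the IB objective keeps only the label-relevant bits.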
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Rank aggregation basics: Local Kemeny optimisation | David R. MacIver
This turns our problem from a global search to a local one: Basically we can start from any point in the search space and search locally by swapping adjacent pairs until we hit a minimum. This turns out to be quite easy to do. _We basically run insertion sort_: At step n we have the first n items in a locally Kemeny optimal order. Swap the n+1th item backwards until the majority think its predecessor is < it. This ensures all adjacent pairs are in the majority order, so swapping them would result in a greater than or equal Kemeny score. This is of course an O(n^2) algorithm. In fact, the problem of merely finding a locally Kemeny optimal solution can be done in O(n log(n)) (for much the same reason as you can sort better than insertion sort). You just take the directed graph of majority votes and find a Hamiltonian Path. The nice thing about the above version of the algorithm is that it gives you a lot of control over where you start your search.
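
A minimal sketch of that insertion-sort pass (function names and the toy ballots are mine, not from the post):

```python
def majority_prefers(a, b, rankings):
    """True if a strict majority of the rankings place a before b."""
    votes = sum(1 if r.index(a) < r.index(b) else -1 for r in rankings)
    return votes > 0

def local_kemeny(items, rankings):
    """Insertion sort with the majority relation as comparator: every
    adjacent pair in the output is in majority order, i.e. the ranking
    is locally Kemeny optimal."""
    result = []
    for x in items:
        result.append(x)
        i = len(result) - 1
        # Swap x backwards until the majority prefers its predecessor.
        while i > 0 and majority_prefers(result[i], result[i - 1], rankings):
            result[i], result[i - 1] = result[i - 1], result[i]
            i -= 1
    return result

# Three voters ranking four items; the majority relation can contain
# cycles in general, but adjacent pairs of the output follow the majority.
rankings = [["a", "b", "c", "d"], ["b", "c", "a", "d"], ["a", "c", "d", "b"]]
print(local_kemeny(["a", "b", "c", "d"], rankings))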
techtariat  liner-notes  papers  tcs  algorithms  machine-learning  acm  optimization  approximation  local-global  orders  graphs  graph-theory  explanation  iteration-recursion  time-complexity  nibble 
september 2017 by nhaliday
Fermat's Library | Cassini, Rømer and the velocity of light annotated/explained version.
Abstract: The discovery of the finite nature of the velocity of light is usually attributed to Rømer. However, a text at the Paris Observatory confirms the minority opinion according to which Cassini was first to propose the ‘successive motion’ of light, while giving a rather correct order of magnitude for the duration of its propagation from the Sun to the Earth. We examine this question, and discuss why, in spite of the criticisms of Halley, Cassini abandoned this hypothesis while leaving Rømer free to publish it.
liner-notes  papers  essay  history  early-modern  europe  the-great-west-whale  giants  the-trenches  mediterranean  nordic  science  innovation  discovery  physics  electromag  space  speed  nibble  org:sci  org:mat 
september 2017 by nhaliday
Correlated Equilibria in Game Theory | Azimuth
Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes the Nash equilibrium prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:
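
Part of why correlated equilibria are the practical alternative: the incentive constraints are linear in the joint distribution, so the set of correlated equilibria is a polytope and a good one can be found by plain linear programming — in contrast to the communication-complexity barrier for Nash above. A toy sketch on the game of Chicken (payoffs and setup are my illustration, not from the post):

```python
import numpy as np
from scipy.optimize import linprog

# Toy payoffs for Chicken. Row = player 1's action, column = player 2's;
# 0 = Dare, 1 = Swerve. Symmetric game, so U2 is the transpose of U1.
U1 = np.array([[0, 7],
               [2, 6]])
U2 = U1.T

n = 2
idx = lambda a1, a2: a1 * n + a2  # flatten joint action -> variable index

A_ub, b_ub = [], []
# Incentive constraints: a player recommended action a must not gain by
# deviating to a_dev, conditional on the recommendation:
#   sum_b p(a, b) * (U[a, b] - U[a_dev, b]) >= 0   (negated into <= 0 form)
for a in range(n):
    for a_dev in range(n):
        if a_dev != a:
            row = np.zeros(n * n)
            for b in range(n):
                row[idx(a, b)] = U1[a_dev, b] - U1[a, b]  # player 1
            A_ub.append(row); b_ub.append(0.0)
for b in range(n):
    for b_dev in range(n):
        if b_dev != b:
            row = np.zeros(n * n)
            for a in range(n):
                row[idx(a, b)] = U2[a, b_dev] - U2[a, b]  # player 2
            A_ub.append(row); b_ub.append(0.0)

# p is a probability distribution; maximize total payoff (linprog minimizes).
res = linprog(c=-(U1 + U2).flatten(),
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, n * n)), b_eq=[1.0],
              bounds=[(0, 1)] * (n * n))
print(res.x.reshape(n, n))  # welfare-maximizing correlated equilibrium
```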
baez  org:bleg  nibble  mathtariat  commentary  summary  news  org:mag  org:sci  popsci  equilibrium  GT-101  game-theory  acm  conceptual-vocab  concept  definition  thinking  signaling  coordination  tcs  complexity  communication-complexity  lower-bounds  no-go  liner-notes  big-surf  papers  research  algorithmic-econ  volo-avolo 
july 2017 by nhaliday
How to Escape Saddle Points Efficiently – Off the convex path
A core, emerging problem in nonconvex optimization involves the escape of saddle points. While recent research has shown that gradient descent (GD) generically escapes saddle points asymptotically (see Rong Ge’s and Ben Recht’s blog posts), the critical open problem is one of efficiency — is GD able to move past saddle points quickly, or can it be slowed down significantly? How does the rate of escape scale with the ambient dimensionality? In this post, we describe our recent work with Rong Ge, Praneeth Netrapalli and Sham Kakade, that provides the first provable positive answer to the efficiency question, showing that, rather surprisingly, GD augmented with suitable perturbations escapes saddle points efficiently; indeed, in terms of rate and dimension dependence it is almost as if the saddle points aren’t there!
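
A toy rendition of the idea (my simplification, not the authors' actual algorithm or constants: ordinary gradient steps, plus a small random kick whenever the gradient is nearly zero, so the iterate can't sit on a saddle):

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.05, g_thresh=1e-3, r=1e-2,
                 t_noise=10, steps=60, seed=0):
    """Sketch of perturbed gradient descent: plain GD, plus a small
    uniform-ball perturbation whenever the gradient is tiny (a stationary
    point that may be a saddle) and we haven't perturbed recently."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_kick = -np.inf
    for t in range(steps):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_kick > t_noise:
            xi = rng.normal(size=x.shape)
            xi *= r * rng.random() ** (1 / x.size) / np.linalg.norm(xi)
            x = x + xi          # random kick, uniform in an r-ball
            last_kick = t
        else:
            x = x - eta * g     # ordinary gradient step
    return x

# f(x, y) = x^2 - y^2 has a saddle at the origin. Plain GD started there
# never moves; the perturbed version drifts off along the -y^2 direction.
grad_f = lambda v: np.array([2 * v[0], -2 * v[1]])
print(perturbed_gd(grad_f, [0.0, 0.0]))
```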
acmtariat  org:bleg  nibble  liner-notes  machine-learning  acm  optimization  gradient-descent  local-global  off-convex  time-complexity  random  perturbation  michael-jordan  iterative-methods  research  learning-theory  math.DS  iteration-recursion 
july 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
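
The 10^30 figure presumably traces back to the Landauer bound — erasing a bit costs at least kT ln 2, so bit operations per joule scale as 1/T, and waiting for the background temperature to fall from today's ~3 K to a far-future ~3×10⁻³⁰ K (the rough de Sitter floor; my numbers, see the paper for theirs) buys the factor directly:

```latex
% Landauer: minimum energy to erase one bit at temperature T,
% hence achievable bit operations per joule scale as 1/T.
E_{\min} = kT \ln 2, \qquad
\frac{\mathrm{ops}_{\mathrm{future}}}{\mathrm{ops}_{\mathrm{now}}}
  \approx \frac{T_{\mathrm{now}}}{T_{\mathrm{future}}}
  \approx \frac{3\,\mathrm{K}}{3\times10^{-30}\,\mathrm{K}} = 10^{30}
```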

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied together, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
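
The point is easy to reproduce in a toy Monte Carlo (the factor ranges below are mine, purely illustrative): the product of point estimates predicts a crowded galaxy, while the same uncertainty propagated as distributions leaves a big chunk of probability on an empty one — no paradox left to explain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Seven Drake-style factors, each with a wide plausible range.
ranges = [(1, 100), (0.1, 1), (0.1, 5), (1e-3, 1),
          (1e-3, 1), (1e-2, 1), (1e2, 1e9)]

# Point-estimate route: plug in one number per factor and multiply.
point = np.prod([np.sqrt(lo * hi) for lo, hi in ranges])

# Distribution route: sample each factor log-uniformly, then multiply.
samples = np.prod([np.exp(rng.uniform(np.log(lo), np.log(hi), N))
                   for lo, hi in ranges], axis=0)

print(f"point estimate: ~{point:.0f} civilizations")
print(f"P(N < 1), i.e. we're alone: {(samples < 1).mean():.2f}")
```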

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Trust, Trolleys and Social Dilemmas: A Replication Study
Overall, the present studies clearly confirmed the main finding of Everett et al., that deontologists are more trusted than consequentialists in social dilemma games. Study 1 replicates Everett et al.’s effect in the context of trust games. Study 2 generalizes the effect to public goods games, thus demonstrating that it is not specific to the type of social dilemma game used in Everett et al. Finally, both studies build on these results by demonstrating that the increased trust in deontologists may sometimes, but not always, be warranted: deontologists displayed increased cooperation rates but only in the public goods game and not in trust games.

The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability: https://sci-hub.tw/http://link.springer.com/article/10.1007/s40806-016-0080-6
Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike for targets whose decisions were consistent with utilitarianism than men. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.

Inference of Trustworthiness From Intuitive Moral Judgments: https://sci-hub.tw/10.1037/xge0000165

Exposure to moral relativism compromises moral behavior: https://sci-hub.tw/http://www.sciencedirect.com/science/article/pii/S0022103113001339

Is utilitarian sacrifice becoming more morally permissible?: http://cushmanlab.fas.harvard.edu/docs/Hannikainanetal_2017.pdf

Disgust and Deontology: http://journals.sagepub.com/doi/abs/10.1177/1948550617732609
Trait Sensitivity to Contamination Promotes a Preference for Order, Hierarchy, and Rule-Based Moral Judgment

We suggest that a synthesis of these two literatures points to one specific emotion (disgust) that reliably predicts one specific type of moral judgment (deontological). In all three studies, we found that trait disgust sensitivity predicted more extreme deontological judgment.

The Influence of (Dis)belief in Free Will on Immoral Behavior: https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00020/full

Beyond Sacrificial Harm: A Two-Dimensional Model of Utilitarian Psychology.: http://psycnet.apa.org/record/2017-57422-001
Recent research has relied on trolley-type sacrificial moral dilemmas to study utilitarian versus nonutilitarian modes of moral decision-making. This research has generated important insights into people’s attitudes toward instrumental harm—that is, the sacrifice of an individual to save a greater number. But this approach also has serious limitations. Most notably, it ignores the positive, altruistic core of utilitarianism, which is characterized by impartial concern for the well-being of everyone, whether near or far. Here, we develop, refine, and validate a new scale—the Oxford Utilitarianism Scale—to dissociate individual differences in the ‘negative’ (permissive attitude toward instrumental harm) and ‘positive’ (impartial concern for the greater good) dimensions of utilitarian thinking as manifested in the general population. We show that these are two independent dimensions of proto-utilitarian tendencies in the lay population, each exhibiting a distinct psychological profile. Empathic concern, identification with the whole of humanity, and concern for future generations were positively associated with impartial beneficence but negatively associated with instrumental harm; and although instrumental harm was associated with subclinical psychopathy, impartial beneficence was associated with higher religiosity. Importantly, although these two dimensions were independent in the lay population, they were closely associated in a sample of moral philosophers. Acknowledging this dissociation between the instrumental harm and impartial beneficence components of utilitarian thinking in ordinary people can clarify existing debates about the nature of moral psychology and its relation to moral philosophy as well as generate fruitful avenues for further research.

A breakthrough in moral psychology: https://nintil.com/2017/12/28/a-breakthrough-in-moral-psychology/

Gender Differences in Responses to Moral Dilemmas: A Process Dissociation Analysis: https://www.ncbi.nlm.nih.gov/pubmed/25840987
The principle of deontology states that the morality of an action depends on its consistency with moral norms; the principle of utilitarianism implies that the morality of an action depends on its consequences. Previous research suggests that deontological judgments are shaped by affective processes, whereas utilitarian judgments are guided by cognitive processes. The current research used process dissociation (PD) to independently assess deontological and utilitarian inclinations in women and men. A meta-analytic re-analysis of 40 studies with 6,100 participants indicated that men showed a stronger preference for utilitarian over deontological judgments than women when the two principles implied conflicting decisions (d = 0.52). PD further revealed that women exhibited stronger deontological inclinations than men (d = 0.57), while men exhibited only slightly stronger utilitarian inclinations than women (d = 0.10). The findings suggest that gender differences in moral dilemma judgments are due to differences in affective responses to harm rather than cognitive evaluations of outcomes.
study  psychology  social-psych  morality  ethics  things  trust  GT-101  coordination  hmm  adversarial  cohesion  replication  cooperate-defect  formal-values  public-goodish  multi  evopsych  gender  gender-diff  philosophy  values  decision-making  absolute-relative  universalism-particularism  intervention  pdf  piracy  deep-materialism  new-religion  stylized-facts  🌞  🎩  honor  trends  phalanges  age-generation  religion  theos  sanctity-degradation  correlation  order-disorder  egalitarianism-hierarchy  volo-avolo  organizing  impro  dimensionality  patho-altruism  altruism  exploratory  matrix-factorization  ratty  unaffiliated  commentary  summary  haidt  scitariat  reason  emotion  randy-ayndy  liner-notes  latent-variables  nature  autism  👽  focus  systematic-ad-hoc  analytical-holistic  expert-experience  economics  markets  civil-liberty  capitalism  personality  psych-architecture  cog-psych  psychometrics  tradition  left-wing  right-wing  ideology  politics  environment  big-peeps  old-anglo  good-evil  ends-means  nietzschean  effe 
march 2017 by nhaliday
Why is the Lin and Tegmark paper 'Why does deep and cheap learning work so well?' important? - Quora
To take the analogy further than I probably should, the resolution to the magic key problem might be that the key is magical, but that the locks are particularly magical. For deep learning, my guess is that it’s a bit of both.
q-n-a  qra  papers  liner-notes  deep-learning  off-convex  machine-learning  explanation  nibble  big-picture  explanans 
february 2017 by nhaliday
Information Processing: Search results for compressed sensing
https://www.unz.com/jthompson/the-hsu-boundary/
http://infoproc.blogspot.com/2017/09/phase-transitions-and-genomic.html
Added: Here are comments from "Donoho-Student":
Donoho-Student says:
September 14, 2017 at 8:27 pm GMT

The Donoho-Tanner transition describes the noise-free (h2=1) case, which has a direct analog in the geometry of polytopes.

The n = 30s result from Hsu et al. (specifically the value of the coefficient, 30, when p is the appropriate number of SNPs on an array and h2 = 0.5) is obtained via simulation using actual genome matrices, and is original to them. (There is no simple formula that gives this number.) The D-T transition had only been established in the past for certain classes of matrices, like random matrices with specific distributions. Those results cannot be immediately applied to genomes.

The estimate that s is (order of magnitude) 10k is also a key input.

I think Hsu refers to n = 1 million instead of 30 * 10k = 300k because the effective SNP heritability of IQ might be less than h2 = 0.5 — there is noise in the phenotype measurement, etc.

Donoho-Student says:
September 15, 2017 at 11:27 am GMT

Lasso is a common statistical method but most people who use it are not familiar with the mathematical theorems from compressed sensing. These results give performance guarantees and describe phase transition behavior, but because they are rigorous theorems they only apply to specific classes of sensor matrices, such as simple random matrices. Genomes have correlation structure, so the theorems do not directly apply to the real world case of interest, as is often true.

What the Hsu paper shows is that the exact D-T phase transition appears in the noiseless (h2 = 1) problem using genome matrices, and a smoothed version appears in the problem with realistic h2. These are new results, as is the prediction for how much data is required to cross the boundary. I don’t think most gwas people are familiar with these results. If they did understand the results they would fund/design adequately powered studies capable of solving lots of complex phenotypes, medical conditions as well as IQ, that have significant h2.

Most people who use lasso, as opposed to people who prove theorems, are not even aware of the D-T transition. Even most people who prove theorems have followed the Candes-Tao line of attack (restricted isometry property) and don’t think much about D-T. Although D eventually proved some things about the phase transition using high dimensional geometry, it was initially discovered via simulation using simple random matrices.
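
The flavor of the transition is easy to see in a toy lasso experiment — i.i.d. Gaussian design, i.e. exactly the theorem-friendly matrix class and exactly what genomes are not, per the comments above (parameters mine):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, s = 2000, 20                  # predictors, true nonzero effects
beta = np.zeros(p)
beta[rng.choice(p, s, replace=False)] = rng.normal(size=s)

def recovery_error(n):
    X = rng.normal(size=(n, p))  # i.i.d. Gaussian design: the easy,
                                 # theorem-covered case (genomes are not i.i.d.)
    y = X @ beta                 # noiseless, i.e. the h2 = 1 analog
    fit = Lasso(alpha=0.01, max_iter=10_000).fit(X, y)
    return np.linalg.norm(fit.coef_ - beta) / np.linalg.norm(beta)

# Error stays O(1) below a threshold ~ s * log(p/s) and drops sharply above it.
for n in (100, 200, 400, 800):
    print(n, round(recovery_error(n), 3))
```

The Hsu et al. claim quoted above is that this same sharp behavior survives when the simple random matrix is replaced by actual genome matrices, with a smoothed version at realistic h2.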
hsu  list  stream  genomics  genetics  concept  stats  methodology  scaling-up  scitariat  sparsity  regression  biodet  bioinformatics  norms  nibble  compressed-sensing  applications  search  ideas  multi  albion  behavioral-gen  iq  state-of-art  commentary  explanation  phase-transition  measurement  volo-avolo  regularization  levers  novelty  the-trenches  liner-notes  clarity  random-matrices  innovation  high-dimension  linear-models 
november 2016 by nhaliday
ShortScience.org - Making Science Accessible!
crowdsourced liner notes
heavy machine learning bias (altho I don't know if TCS papers warrant liner notes all that frequently)
init  aggregator  papers  research  summary  explanation  machine-learning  thinking  skunkworks  liner-notes  academia  hmm  org:mat  organization 
june 2016 by nhaliday
The News on Auto-tuning – arg min blog
bayesian optimization is not obviously better than randomized search on all fronts
critique  bayesian  optimization  machine-learning  expert  hmm  liner-notes  rhetoric  debate  acmtariat  ben-recht  mrtz  gwern  random  org:bleg  nibble  expert-experience 
june 2016 by nhaliday
