nhaliday + simulation   53

Finders, keepers - Wikipedia
Finders, keepers is an English adage with the premise that when something is unowned or abandoned, whoever finds it first can claim it. This idiom relates to an ancient Roman law of similar meaning and has been expressed in various ways over the centuries.[1] Of particular difficulty is how best to define when exactly something is unowned or abandoned, which can lead to legal or ethical disputes.


In the field of social simulation, Rosaria Conte and Cristiano Castelfranchi have used "finders, keepers" as a case study for simulating the evolution of norms in simple societies.[2]
concept  heuristic  law  leviathan  wiki  reference  aphorism  metabuch  philosophy  canon  history  iron-age  mediterranean  the-classics  anglosphere  conquest-empire  civil-liberty  social-norms  social-structure  universalism-particularism  axioms  ethics  simulation  egalitarianism-hierarchy  inequality  power  models  GT-101  EGT  new-religion  deep-materialism  parallax 
april 2018 by nhaliday
My March 28 talk at MIT - Marginal REVOLUTION
What happens when a simulated system becomes more real than the system itself?  Will the internet become “more real” than the world of ideas it is mirroring? Do we academics live in a simulacra?  If the “alt right” exists mainly on the internet, does that make it more or less powerful?  Do all innovations improve system quality, and if so why is a lot of food worse than before and home design was better in 1910-1930?  How does the world of ideas fit into this picture?
econotariat  marginal-rev  links  quotes  presentation  hmm  simulation  realness  internet  academia  gnon  🐸  subculture  innovation  food  trends  architecture  history  mostly-modern  pre-ww2 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point - that intelligent life arose on our planet - is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
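Bostrom's net is easy to check numerically. A minimal sketch (the fish lengths and pond size are invented; only the 3-inch cutoff and the catch of 100 come from the quote):

```python
import random

random.seed(0)

# Bostrom's selection effect, numerically: the catch can never reveal
# fish longer than the net can hold, so the naive max estimate is biased.
pond = [random.expovariate(1 / 4) for _ in range(10_000)]   # mean ~4 inches
catch = [f for f in pond if f <= 3.0][:100]   # net only holds fish <= 3 in
naive_estimate = max(catch)                   # capped at 3 in by construction
true_largest = max(pond)
print(f"largest in catch: {naive_estimate:.1f} in, "
      f"largest in pond: {true_largest:.1f} in")
```

However many fish you catch, the naive estimate stays under the net's limit.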
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
Team *Decorations Until Epiphany* on Twitter: "@RoundSqrCupola maybe just C https://t.co/SFPXb3qrAE"
Remember ‘BRICs’? Now it’s just ICs.
maybe just C
Solow predicts that if 2 countries have the same TFP, then the poorer nation should grow faster. But poorer India grows more slowly than China.

Solow thinking leads one to suspect India has substantially lower TFP.

Recent growth is great news, but alas 5 years isn't the long run!

FWIW under Solow conditional convergence assumptions--historically robust--the fact that a country as poor as India grows only a few % faster than the world average is a sign they'll end up poorer than S Europe.

see his spreadsheet here: http://mason.gmu.edu/~gjonesb/SolowForecast.xlsx
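The back-of-the-envelope behind the tweetstorm can be sketched with a toy conditional-convergence rule (the convergence speed and growth numbers below are conventional placeholders, not taken from Jones's spreadsheet):

```python
import math

# Toy conditional convergence: growth above the world average
# ~= lam * ln(y_star / y), where y_star is the steady-state income
# implied by TFP and y is current income.
lam = 0.02           # ~2%/yr convergence speed, a standard estimate
world_growth = 0.02

def growth(y, y_star):
    return world_growth + lam * math.log(y_star / y)

# A country at 1/10 of its steady state converges fast:
print(growth(1.0, 10.0) - world_growth)       # ~0.046 extra growth per year

# Invert: a very poor country growing only ~3 points above average
# implies a steady state only ~4.5x its current income.
extra = 0.03
y_star_implied = math.exp(extra / lam)        # in units of current income
print(round(y_star_implied, 2))
```

That inversion is the logic behind "a sign they'll end up poorer than S Europe": modest excess growth from a very low base implies a low steady state.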
spearhead  econotariat  garett-jones  unaffiliated  twitter  social  discussion  india  asia  china  economics  macro  growth-econ  econ-metrics  wealth  wealth-of-nations  convergence  world  developing-world  trends  time-series  cjones-like  prediction  multi  backup  the-bones  long-short-run  europe  mediterranean  comparison  simulation  econ-productivity  great-powers  thucydides  broad-econ  pop-diff  microfoundations  🎩  marginal  hive-mind  rindermann-thompson  hari-seldon  tools  calculator  estimate 
december 2017 by nhaliday
Why do stars twinkle?
According to many astronomers and educators, twinkle (stellar scintillation) is caused by atmospheric structure that works like ordinary lenses and prisms. Pockets of variable temperature - and hence index of refraction - randomly shift and focus starlight, perceived by eye as changes in brightness. Pockets also disperse colors like prisms, explaining the flashes of color often seen in bright stars. Stars appear to twinkle more than planets because they are points of light, whereas the twinkling points on planetary disks are averaged to a uniform appearance. Below, figure 1 is a simulation in glass of the kind of turbulence structure posited in the lens-and-prism theory of stellar scintillation, shown over the Penrose tile floor to demonstrate the random lensing effects.

However appealing and ubiquitous on the internet, this popular explanation is wrong, and my aim is to debunk the myth. This research is mostly about showing that the lens-and-prism theory just doesn't work, but I also have a stellar list of references that explain the actual cause of scintillation, starting with two classic papers by C.G. Little and S. Chandrasekhar.
nibble  org:junk  space  sky  visuo  illusion  explanans  physics  electromag  trivia  cocktail  critique  contrarianism  explanation  waves  simulation  experiment  hmm  magnitude  atmosphere  roots  idk 
december 2017 by nhaliday
Use and Interpretation of LD Score Regression
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies: https://sci-hub.bz/10.1038/ng.3211
- Po-Ru Loh, Nick Patterson, et al.


Both polygenicity (i.e. many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield inflated distributions of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from bias and true signal from polygenicity. We have developed an approach that quantifies the contributions of each by examining the relationship between test statistics and linkage disequilibrium (LD). We term this approach LD Score regression. LD Score regression provides an upper bound on the contribution of confounding bias to the observed inflation in test statistics and can be used to estimate a more powerful correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size.
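The core identity is simple enough to simulate. A toy version of the regression (all parameters invented; this is a sketch of the idea, not the ldsc software):

```python
import numpy as np

rng = np.random.default_rng(0)

# LDSC model: E[chi2_j] = (N*h2/M) * l_j + N*a + 1, where l_j is SNP j's
# LD Score and N*a is confounding. Polygenic signal scales with LD Score;
# confounding does not -- so the slope recovers h2 and the intercept
# bounds the confounding contribution.
M, N = 10_000, 50_000          # SNPs, GWAS sample size
h2, a = 0.5, 1e-5              # heritability, confounding term
ld = rng.uniform(1, 200, M)                    # LD Scores
mean_chi2 = (N * h2 / M) * ld + N * a + 1
chi2 = mean_chi2 * rng.chisquare(1, M)         # noisy test statistics
slope, intercept = np.polyfit(ld, chi2, 1)
h2_est = slope * M / N
print(f"h2 estimate ~ {h2_est:.2f}, intercept ~ {intercept:.1f}")
```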

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n3/extref/ng.3211-S1.pdf

An atlas of genetic correlations across human diseases
and traits: https://sci-hub.bz/10.1038/ng.3406


Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n11/extref/ng.3406-S1.pdf

ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. ldsc also computes LD Scores.
nibble  pdf  slides  talks  bio  biodet  genetics  genomics  GWAS  genetic-correlation  correlation  methodology  bioinformatics  concept  levers  🌞  tutorial  explanation  pop-structure  gene-drift  ideas  multi  study  org:nat  article  repo  software  tools  libraries  stats  hypothesis-testing  biases  confounding  gotchas  QTL  simulation  survey  preprint  population-genetics 
november 2017 by nhaliday
Your Sky
Welcome to Your Sky, the interactive planetarium of the Web. You can produce maps in the forms described below for any time and date, viewpoint, and observing location. If you enter the orbital elements of an asteroid or comet, Your Sky will compute its current position and plot it on the map. Each map is accompanied by an ephemeris for the Sun, Moon, planets, and any tracked asteroid or comet. A control panel permits customisation of which objects are plotted, limiting magnitudes, colour scheme, image size, and other parameters; each control is linked to its description in the help file.
nibble  tools  calculator  simulation  space  sky  navigation  time  objektbuch  data  visualization  trivia 
september 2017 by nhaliday
Global determinants of navigation ability | bioRxiv
Using a mobile-based virtual reality navigation task, we measured spatial navigation ability in more than 2.5 million people globally. Using a clustering approach, we find that navigation ability is not smoothly distributed globally but clustered into five distinct yet geographically related groups of countries. Furthermore, the economic wealth of a nation (Gross Domestic Product per capita) was predictive of the average navigation ability of its inhabitants and gender inequality (Gender Gap Index) was predictive of the size of performance difference between males and females.

- Figure 1 has the meat
- gender gap larger in richer/better-performing countries
- Anglo and Nordic countries do best (Finnish supremacy wins the day again)
- surprised China doesn't do better, probably a matter of development
- Singapore is close behind the Anglo-Nords tho
- speculation that practice of orienteering (originally Swedish) may be related to Nords doing well
- somewhat weird pattern wrt age
study  bio  preprint  psychology  cog-psych  iq  psychometrics  spatial  navigation  pop-diff  gender  gender-diff  egalitarianism-hierarchy  correlation  wealth  wealth-of-nations  econ-metrics  data  visualization  maps  world  developing-world  marginal  europe  the-great-west-whale  nordic  britain  anglo  usa  anglosphere  china  asia  sinosphere  polis  demographics  age-generation  aging  EU  group-level  regional-scatter-plots  games  simulation 
september 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

This post should have been entitled “Zombies who only think of their next cool IV fix”
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. This I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
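Their placebo exercise reproduces in miniature. A sketch with invented AR(1) state panels (none of the CPS specifics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placebo DID: state outcomes follow an AR(1), the "law" is pure noise,
# yet standard errors that pretend observations are iid reject far more
# often than the nominal 5%.
S, T, rho, reps = 20, 20, 0.8, 500
reject = 0
for _ in range(reps):
    eps = rng.normal(size=(S, T))
    y = np.zeros((S, T))
    y[:, 0] = eps[:, 0]
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + eps[:, t]   # serial correlation
    treated = rng.permutation(S) < S // 2          # placebo "law" states
    post = np.arange(T) >= T // 2
    dd = (y[treated][:, post].mean() - y[treated][:, ~post].mean()
          - y[~treated][:, post].mean() + y[~treated][:, ~post].mean())
    n_cell = (S // 2) * (T // 2)                   # obs per cell
    naive_se = y.std() * np.sqrt(4 / n_cell)       # pretends obs are iid
    reject += abs(dd / naive_se) > 1.96
print(f"placebo rejection rate: {reject / reps:.2f} (nominal 0.05)")
```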

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing just one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI exclude the OLS estimate.
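The dispersion point shows up even in a tiny simulation. A sketch with an almost-irrelevant instrument (all numbers invented, not Young's design):

```python
import numpy as np

rng = np.random.default_rng(2)

# Weak-instrument pathology: z barely moves x, so 2SLS estimates are
# wildly dispersed, while OLS is only modestly biased by the confounder u.
n, reps, beta = 500, 2000, 1.0
ols, tsls = [], []
for _ in range(reps):
    z = rng.normal(size=n)
    u = rng.normal(size=n)                        # unobserved confounder
    x = 0.05 * z + u + rng.normal(size=n)         # instrument barely relevant
    y = beta * x + u + rng.normal(size=n)
    ols.append(np.cov(x, y)[0, 1] / np.var(x, ddof=1))
    tsls.append(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])
ols, tsls = np.array(ols), np.array(tsls)

def iqr(v):
    hi, lo = np.percentile(v, [75, 25])
    return hi - lo

print(f"OLS  median {np.median(ols):.2f}, IQR {iqr(ols):.2f}")
print(f"2SLS median {np.median(tsls):.2f}, IQR {iqr(tsls):.2f}")
```

OLS is biased away from the true beta of 1 but tightly clustered; 2SLS is orders of magnitude more dispersed, which is the "sensitive to outliers" story in a nutshell.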

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated 
june 2017 by nhaliday
Cultural group selection plays an essential role in explaining human cooperation: A sketch of the evidence
Pursuing Darwin’s curious parallel: Prospects for a science of cultural evolution: http://www.pnas.org/content/early/2017/07/18/1620741114.full

Axelrod model: http://ncase.me/trust/

Peer punishment promotes enforcement of bad social norms: https://www.nature.com/articles/s41467-017-00731-0
Social norms are an important element in explaining how humans achieve very high levels of cooperative activity. It is widely observed that, when norms can be enforced by peer punishment, groups are able to resolve social dilemmas in prosocial, cooperative ways. Here we show that punishment can also encourage participation in destructive behaviours that are harmful to group welfare, and that this phenomenon is mediated by a social norm. In a variation of a public goods game, in which the return to investment is negative for both group and individual, we find that the opportunity to punish led to higher levels of contribution, thereby harming collective payoffs. A second experiment confirmed that, independently of whether punishment is available, a majority of subjects regard the efficient behaviour of non-contribution as socially inappropriate. The results show that simply providing a punishment opportunity does not guarantee that punishment will be used for socially beneficial ends, because the social norms that influence punishment behaviour may themselves be destructive.

Peer punishment can stabilize anything, both good and bad norms. This is why you need group selection to select good social norms.
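A cartoon of their negative-return game (all parameters invented; the study ran lab experiments with humans, not imitation dynamics):

```python
import random

random.seed(4)

# Public-goods game with a NEGATIVE return, so contributing hurts everyone,
# yet fines from contributors keep contribution the stable norm.
N, rounds, r, fine = 50, 2000, -0.5, 1.5

def payoff(contributes, c):
    pot_share = c * (1 + r) / N           # everyone's cut of the (bad) pot
    if contributes:
        return pot_share - 1               # paid in 1, group got back 0.5
    return pot_share - fine * c / N        # fined by each contributor

agents = [random.random() < 0.9 for _ in range(N)]   # True = contributor
start_share = sum(agents) / N
for _ in range(rounds):
    c = sum(agents)
    i, j = random.sample(range(N), 2)      # i imitates j if j did better
    if payoff(agents[j], c) > payoff(agents[i], c):
        agents[i] = agents[j]
final_share = sum(agents) / N
print(f"contributors: {start_share:.2f} -> {final_share:.2f}")
```

With enough contributors around to do the fining, defecting pays worse than contributing, so the destructive norm locks in.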
pdf  study  article  survey  sociology  anthropology  sapiens  cultural-dynamics  🌞  cooperate-defect  GT-101  EGT  deep-materialism  group-selection  coordination  religion  theos  social-norms  morality  coalitions  s:**  turchin  decision-making  microfoundations  multi  better-explained  techtariat  visualization  dynamic  worrydream  simulation  operational  let-me-see  trust  garett-jones  polarization  media  internet  zero-positive-sum  axelrod  eden  honor  org:nat  unintended-consequences  public-goodish  broad-econ  twitter  social  commentary  summary  slippery-slope  selection  competition  organizing  war  henrich  evolution  darwinian  tribalism  hari-seldon  cybernetics  reinforcement  ecology  sociality 
june 2017 by nhaliday
Virtual revenge is sweet in Bangladesh | 1843
A bloodthirsty video game set during the war of independence – and sponsored by the government – is proving popular with young Bangladeshis
news  org:mag  org:anglo  org:biz  india  asia  MENA  politics  tribalism  war  internet  accelerationism  simulation  current-events  populism 
march 2017 by nhaliday
Religion, fertility and genes: a dual inheritance model | Proceedings of the Royal Society of London B: Biological Sciences
The paper considers the effect of religious defections and exogamy on the religious and genetic composition of society. Defections reduce the ultimate share of the population with religious allegiance and slow down the spread of the religiosity gene. However, provided the fertility differential persists, and people with a religious allegiance mate mainly with people like themselves, the religiosity gene will eventually predominate despite a high rate of defection. This is an example of ‘cultural hitch-hiking’, whereby a gene spreads because it is able to hitch a ride with a high-fitness cultural practice.
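A one-locus caricature of the hitch-hiking dynamic (all rates invented; the paper's model is richer): religious couples out-reproduce seculars by enough to outpace defection, and every defector carries the gene with them into the secular pool.

```python
# Religious group is fully endogamous and all its members carry the gene;
# each child defects to the secular group with probability d.
fert_r, fert_s, d = 2.2, 1.8, 0.15   # fertilities, per-child defection rate
R, S = 0.05, 0.95                    # population shares
gene_in_S = 0.0                      # gene frequency among seculars
for _ in range(200):
    born_R, born_S = R * fert_r, S * fert_s
    new_R = born_R * (1 - d)                    # stayers (all carriers)
    new_S = born_S + born_R * d                 # seculars plus defectors
    gene_in_S = (born_S * gene_in_S + born_R * d) / new_S
    R, S = new_R / (new_R + new_S), new_S / (new_R + new_S)
gene_freq = R * 1.0 + S * gene_in_S
print(f"religious share {R:.2f}, gene frequency {gene_freq:.3f}")
```

The religious share equilibrates well below 100%, yet the gene approaches fixation: the high rate of defection is exactly what exports it everywhere.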
study  org:nat  bio  sapiens  evolution  biodet  genetics  population-genetics  coordination  group-selection  culture  religion  models  🌞  fertility  correlation  simulation  institutions  EGT  dynamical  GT-101  theos  the-bones  ecology 
march 2017 by nhaliday
There’s good eating on one of those | West Hunter
Recently, Y.-H. Percival Zhang and colleagues demonstrated a method of converting cellulose into starch and glucose. Zhang thinks that it can be scaled up into an effective industrial process, one that could produce a thousand calories of starch for less than a dollar from cellulosic waste. This would be a good thing. It’s not just that there are 7 billion people – the problem is that we have hardly any food reserves (about 74 days at last report).

Prepare for Nuclear Winter: http://www.overcomingbias.com/2017/09/prepare-for-nuclear-winter.html
If a 1km asteroid were to hit the Earth, the dust it kicked up would block most sunlight over most of the world for 3 to 10 years. There’s only a one in a million chance of that happening per year, however. Whew. However, there’s a ten times bigger chance that a super volcano, such as the one hiding under Yellowstone, might explode, for a similar result. And I’d put the chance of a full scale nuclear war at ten to one hundred times larger than that: one in ten thousand to one in a thousand per year. Over a century, that becomes a one to ten percent chance. Not whew; grimace instead.
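Hanson's per-year odds compound as stated; checking the arithmetic:

```python
# Per-year chance of full-scale nuclear war between one in ten thousand
# and one in a thousand, compounded over a century.
for p_year in (1e-4, 1e-3):
    p_century = 1 - (1 - p_year) ** 100
    print(f"{p_year:.4f}/yr -> {p_century:.1%} per century")
```

That gives roughly 1% to 10% per century, matching his "one to ten percent chance."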

There is a substantial chance that a full scale nuclear war would produce a nuclear winter, with a similar effect: sunlight is blocked for 3-10 years or more. Yes, there are good criticisms of the more extreme forecasts, but there’s still a big chance the sun gets blocked in a full scale nuclear war, and there’s even a substantial chance of the same result in a mere regional war, where only 100 nukes explode (the world now has 15,000 nukes).


Yeah, probably a few people live on, and so humanity doesn’t go extinct. But the only realistic chance most of us have of surviving in this scenario is to use our vast industrial and scientific abilities to make food. We actually know of many plausible ways to make more than enough food to feed everyone for ten years, even with no sunlight. And even if big chunks of the world economy are in shambles. But for that to work, we must preserve enough social order to make use of at least the core of key social institutions.


Nuclear War Survival Skills: http://oism.org/nwss/nwss.pdf
Updated and Expanded 1987 Edition

Nuclear winter: https://en.wikipedia.org/wiki/Nuclear_winter

Yellowstone supervolcano may blow sooner than thought — and could wipe out life on the planet: https://www.usatoday.com/story/news/nation/2017/10/12/yellowstone-supervolcano-may-blow-sooner-than-thought-could-wipe-out-life-planet/757337001/
west-hunter  discussion  study  commentary  bio  food  energy-resources  technology  risk  the-world-is-just-atoms  agriculture  wild-ideas  malthus  objektbuch  threat-modeling  scitariat  scale  biophysical-econ  allodium  nihil  prepping  ideas  dirty-hands  magnitude  multi  ratty  hanson  planning  nuclear  arms  deterrence  institutions  alt-inst  securities  markets  pdf  org:gov  white-paper  survival  time  earth  war  wiki  reference  environment  sky  news  org:lite  hmm  idk  org:biz  org:sci  simulation  maps  usa  geoengineering 
march 2017 by nhaliday
Redistributing from Capitalists to Workers: An Impossibility Theorem, Garett Jones | EconLog | Library of Economics and Liberty
org:econlib  econotariat  spearhead  garett-jones  economics  policy  rhetoric  thinking  analysis  no-go  redistribution  labor  taxes  cracker-econ  multi  piketty  news  org:lite  org:biz  pdf  links  political-econ  capital  simulation  operational  dynamic  explanation  time-preference  patience  wonkish  study  science-anxiety  externalities  long-short-run  models  map-territory  stylized-facts  s:*  broad-econ  chart  article  🎩  randy-ayndy  envy  bootstraps  inequality  absolute-relative  X-not-about-Y  volo-avolo  ideas  status  capitalism  nationalism-globalism  metabuch  optimate  aristos  open-closed  macro  government  proofs  equilibrium 
february 2017 by nhaliday
The infinitesimal model | bioRxiv
Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. We first review the long history of the infinitesimal model in quantitative genetics. Then we provide a definition of the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, ... We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. In each case, by conditioning on the pedigree relating individuals in the population, we incorporate arbitrary selection and population structure. We suppose that we can observe the pedigree up to the present generation, together with all the ancestral traits, and we show, in particular, that the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order M^{-1/2}. Simulations suggest that in particular cases the convergence may be as fast as 1/M.

published version:
The infinitesimal model: Definition, derivation, and implications: https://sci-hub.tw/10.1016/j.tpb.2017.06.001

Commentary: Fisher’s infinitesimal model: A story for the ages: http://www.sciencedirect.com/science/article/pii/S0040580917301508?via%3Dihub
This commentary distinguishes three nested approximations, referred to as “infinitesimal genetics,” “Gaussian descendants” and “Gaussian population,” each plausibly called “the infinitesimal model.” The first and most basic is Fisher’s “infinitesimal” approximation of the underlying genetics – namely, many loci, each making a small contribution to the total variance. As Barton et al. (2017) show, in the limit as the number of loci increases (with enough additivity), the distribution of genotypic values for descendants approaches a multivariate Gaussian, whose variance–covariance structure depends only on the relatedness, not the phenotypes, of the parents (or whether their population experiences selection or other processes such as mutation and migration). Barton et al. (2017) call this rigorously defensible “Gaussian descendants” approximation “the infinitesimal model.” However, it is widely assumed that Fisher’s genetic assumptions yield another Gaussian approximation, in which the distribution of breeding values in a population follows a Gaussian — even if the population is subject to non-Gaussian selection. This third “Gaussian population” approximation, is also described as the “infinitesimal model.” Unlike the “Gaussian descendants” approximation, this third approximation cannot be rigorously justified, except in a weak-selection limit, even for a purely additive model. Nevertheless, it underlies the two most widely used descriptions of selection-induced changes in trait means and genetic variances, the “breeder’s equation” and the “Bulmer effect.” Future generations may understand why the “infinitesimal model” provides such useful approximations in the face of epistasis, linkage, linkage disequilibrium and strong selection.
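the core claim — offspring genetic values Gaussian around the midparent, with a segregation variance that depends only on parental heterozygosity, not on parental trait values — is easy to check numerically. a minimal Mendelian sketch (loci count, effect scaling, and sample sizes are my own illustrative choices, not from Barton et al.):

```python
import numpy as np

rng = np.random.default_rng(42)
M = 1000                     # number of loci (arbitrary; the model's limit is M -> infinity)
a = 1.0 / np.sqrt(M)         # per-allele effect, scaled so trait variance stays O(1)

# diploid parents: two haplotypes of M biallelic loci
mom = rng.integers(0, 2, size=(2, M))
dad = rng.integers(0, 2, size=(2, M))

def genetic_value(geno):
    # purely additive trait: sum of allele effects over both haplotypes
    return a * geno.sum()

def gametes(parent, n):
    # Mendelian segregation: each offspring inherits one random allele per locus
    pick = rng.integers(0, 2, size=(n, M))
    return np.where(pick == 0, parent[0], parent[1])

n = 20000
kids = gametes(mom, n) + gametes(dad, n)        # offspring allele counts, shape (n, M)
kid_vals = a * kids.sum(axis=1)

midparent = 0.5 * (genetic_value(mom) + genetic_value(dad))
# segregation variance depends only on parental heterozygosity, not on trait values
pred_var = (a**2 / 4) * ((mom[0] != mom[1]).sum() + (dad[0] != dad[1]).sum())

print(kid_vals.mean(), midparent)   # offspring mean centred on the midparent value
print(kid_vals.var(), pred_var)     # variance matches the heterozygosity prediction
```

repeating this for the same parents but different trait values (e.g. different allele signs) leaves pred_var unchanged — the "variance independent of parental traits" property in the abstract.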
study  exposition  bio  evolution  population-genetics  genetics  methodology  QTL  preprint  models  unit  len:long  nibble  linearity  nonlinearity  concentration-of-measure  limits  applications  🌞  biodet  oscillation  fisher  perturbation  stylized-facts  chart  ideas  article  pop-structure  multi  pdf  piracy  intricacy  map-territory  kinship  distribution  simulation  ground-up  linear-models  applicability-prereqs  bioinformatics 
january 2017 by nhaliday
Stockpile Stewardship | West Hunter
A lot of our nuclear weapons are old, and it’s not clear that they still work. If we still did underground tests, we’d know for sure (and could fix any problems) – but we don’t do that. We have a program called stockpile stewardship, that uses simulation programs and the data from laser-fusion experiments in an attempt to predict weapon efficacy.

I talked to some old friends who know as much about the nuclear stockpile as anyone: neither believes that stockpile stewardship will do the job. There are systems that you can simulate with essentially perfect accuracy and confidence, Newtonian gravitational mechanics for example: this isn't one of them.

You had two approaches to a problem that was vital to the security of the United States: option A was absolutely sure to work, option B might possibly work.

The Feds picked B.

interesting: https://westhunt.wordpress.com/2015/01/13/stockpile-stewardship/#comment-65553
Can’t they stick a warhead on a space launcher, loop it around the moon followed by some compact instrumentation and detonate it there, out of view? And keep mum about it.

How hard would it be for radioastronomers to notice a nuclear blast on the other side of the Moon? Would reflected light over interplanetary distances be even detectable?

I once brought this up to a bomb-designer friend: people have in fact worried about this.

They signed a treaty against that. http://en.wikipedia.org/wiki/Outer_Space_Treaty

The Soviets signed a treaty against developing germ warfare too, but they did it anyhow. Do you think that the Galactic Overlords automatically vaporize treaty violators?

People working in US intelligence may well have opinions, but they don’t know jack about nuclear weapons. I once said that Iraq couldn’t possibly have a live nuclear weapons program, given their lack of resources and the fact that we hadn’t detected any sign of it – in part, a ‘capacity’ argument. I later heard that the whole CIA had at most one guy who knew enough to do that casual, back-of-the-envelope analysis correctly, and he was working on something else.

west-hunter  rant  nuclear  policy  foreign-policy  deterrence  realpolitik  meta:war  scitariat  arms  defense  error  leadership  simulation  prudence  great-powers  war  kumbaya-kult  cynicism-idealism  peace-violence  counter-revolution  multi  gnon  isteveish  albion  org:junk  korea  current-events  paleocon  russia  communism  biotech  parasites-microbiome  intel  iraq-syria  elite  usa  government  poast  gedanken  space  being-right  info-dynamics  track-record  wiki  law  stories  volo-avolo  no-go  street-fighting  ability-competence  offense-defense 
december 2016 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday
Information Processing: Evidence for (very) recent natural selection in humans
height (+), infant head circumference (+), some biomolecular stuff, female hip size (+), male BMI (-), age of menarche (+, !!), and birth weight (+)

Strong selection in the recent past can cause allele frequencies to change significantly. Consider two different SNPs, which today have equal minor allele frequency (for simplicity, let this be equal to one half). Assume that one SNP was subject to strong recent selection, and another (neutral) has had approximately zero effect on fitness. The advantageous version of the first SNP was less common in the far past, and rose in frequency recently (e.g., over the last 2k years). In contrast, the two versions of the neutral SNP have been present in roughly the same proportion (up to fluctuations) for a long time. Consequently, in the total past breeding population (i.e., going back tens of thousands of years) there have been many more copies of the neutral alleles (and the chunks of DNA surrounding them) than of the positively selected allele. Each of the chunks of DNA around the SNPs we are considering is subject to a roughly constant rate of mutation.

Looking at the current population, one would then expect a larger variety of mutations in the DNA region surrounding the neutral allele (both versions) than near the favored selected allele (which was rarer in the population until very recently, and whose surrounding region had fewer chances to accumulate mutations). By comparing the difference in local mutational diversity between the two versions of the neutral allele (should be zero modulo fluctuations, for the case MAF = 0.5), and between the (+) and (-) versions of the selected allele (nonzero, due to relative change in frequency), one obtains a sensitive signal for recent selection. See figure at bottom for more detail. In the paper what I call mutational diversity is measured by looking at distance distribution of singletons, which are rare variants found in only one individual in the sample under study.
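the frequency dynamics behind this signal can be sketched with a toy Wright–Fisher simulation; the selection coefficient, population size, and time span below are my own illustrative picks (chosen to echo the ~2% over ~100 generations regime the paper quotes), not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher(p0, s, N, generations):
    # allele frequency trajectory in a population of N diploids:
    # deterministic selection step, then binomial sampling (genetic drift)
    p, traj = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = rng.binomial(2 * N, p) / (2 * N)
        traj.append(p)
    return np.array(traj)

selected = wright_fisher(0.05, 0.02, 10_000, 100)  # ~2% advantage over 100 generations
neutral = wright_fisher(0.05, 0.00, 10_000, 100)
print(selected[-1], neutral[-1])  # selected allele rises sharply; neutral one only drifts
```

the selected allele's recent rarity is what leaves its surrounding haplotype short on accumulated singletons relative to the neutral case.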

The 2,000 year selection of the British: http://www.unz.com/gnxp/the-2000-year-selection-of-the-british/

Detection of human adaptation during the past 2,000 years: http://www.biorxiv.org/content/early/2016/05/07/052084

The key idea is that recent selection distorts the ancestral genealogy of sampled haplotypes at a selected site. In particular, the terminal (tip) branches of the genealogy tend to be shorter for the favored allele than for the disfavored allele, and hence, haplotypes carrying the favored allele will tend to carry fewer singleton mutations (Fig. 1A-C and SOM).

To capture this effect, we use the sum of distances to the nearest singleton in each direction from a test SNP as a summary statistic (Fig. 1D).

Figure 1. Illustration of the SDS method.

Figure 2. Properties of SDS.

Based on a recent model of European demography [25], we estimate that the mean tip length for a neutral sample of 3,000 individuals is 75 generations, or roughly 2,000 years (Fig. 2A). Since SDS aims to measure changes in tip lengths of the genealogy, we conjectured that it would be most likely to detect selection approximately within this timeframe.

Indeed, in simulated sweep models with samples of 3,000 individuals (Fig. 2B,C and fig. S2), we find that SDS focuses specifically on very recent time scales, and has equal power for hard and soft sweeps within this timeframe. At individual loci, SDS is powered to detect ~2% selection over 100 generations. Moreover, SDS has essentially no power to detect older selection events that stopped >100 generations before the present. In contrast, a commonly-used test for hard sweeps, iHS [12], integrates signal over much longer timescales (>1,000 generations), has no specificity to the more recent history, and has essentially no power for the soft sweep scenarios.

Catching evolution in the act with the Singleton Density Score: http://www.molecularecologist.com/2016/05/catching-evolution-in-the-act-with-the-singleton-density-score/
The Singleton Density Score (SDS) is a measure based on the idea that changes in allele frequencies induced by recent selection can be observed in a sample’s genealogy as differences in the branch length distribution.
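a toy version of the raw summary statistic — the real SDS pipeline goes on to infer tip lengths from these distances and standardize against a neutral expectation, which this sketch omits:

```python
import numpy as np

def singleton_distance_score(test_pos, singleton_positions):
    # sum of distances from a test SNP to the nearest singleton on each side;
    # recently favored alleles sit on shorter tip branches, so they carry fewer
    # nearby singletons and score larger distances than neutral alleles do
    pos = np.asarray(singleton_positions)
    left = pos[pos < test_pos]
    right = pos[pos > test_pos]
    d_left = test_pos - left.max() if left.size else np.inf
    d_right = right.min() - test_pos if right.size else np.inf
    return d_left + d_right

print(singleton_distance_score(100, [40, 90, 130, 200]))  # 10 + 30 = 40
```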

You don’t need a weatherman: https://westhunt.wordpress.com/2016/05/08/you-dont-need-a-weatherman/
You can do a million cool things with this method. Since the effective time scale goes inversely with sample size, you could look at evolution in England over the past 1000 years or the past 500. Differencing, over the period 1-1000 AD. Since you can look at polygenic traits, you can see whether the alleles favoring higher IQs have increased or decreased in frequency over various stretches of time. You can see if Greg Clark’s proposed mechanism really happened. You can (soon) tell if creeping Pinkerization is genetic, or partly genetic.

You could probably find out if the Middle Easterners really have gotten slower, and when it happened.

Looking at IQ alleles, you could not only show whether the Ashkenazi Jews really are biologically smarter but if so, when it happened, which would give you strong hints as to how it happened.

We know that IQ-favoring alleles are going down (slowly) right now (not counting immigration, which of course drastically speeds it up). Soon we will know if this was true while Russia was under the Mongol yoke – we’ll know how smart Periclean Athenians were and when that boost occurred. And so on. And on!


“The pace has been so rapid that humans have changed significantly in body and mind over recorded history."

bicameral mind: https://westhunt.wordpress.com/2016/05/08/you-dont-need-a-weatherman/#comment-78934

Chinese, Koreans, Japanese and Ashkenazi Jews all have high levels of myopia. Australian Aborigines have almost none, I think.

I expect that the fall of all great empires is based on long term dysgenic trends. There is no logical reason why so many empires and civilizations throughout history could grow so big and then not simply keep growing, except for dysgenics.
I can think of about twenty other possible explanations off the top of my head, but dysgenics is a possible cause.
I agree with DataExplorer. The largest factor in the decay of civilizations is dysgenics. The discussion by R. A. Fisher 1930 p. 193 is very cogent on this matter. Soon we will know for sure.
Sometimes it can be rapid. Assume that the upper classes are mostly urban, and somewhat sharper than average. Then the Mongols arrive.
sapiens  study  genetics  evolution  hsu  trends  data  visualization  recent-selection  methodology  summary  GWAS  2016  scitariat  britain  commentary  embodied  biodet  todo  control  multi  gnxp  pop-diff  stat-power  mutation  hypothesis-testing  stats  age-generation  QTL  gene-drift  comparison  marginal  aDNA  simulation  trees  time  metrics  density  measurement  conquest-empire  pinker  population-genetics  aphorism  simler  dennett  👽  the-classics  iron-age  mediterranean  volo-avolo  alien-character  russia  medieval  spearhead  gregory-clark  bio  preprint  domestication  MENA  iq  islam  history  poast  west-hunter  scale  behavioral-gen  gotchas  cost-benefit  genomics  bioinformatics  stylized-facts  concept  levers  🌞  pop-structure  nibble  explanation  ideas  usa  dysgenics  list  applicability-prereqs  cohesion  judaism  visuo  correlation  china  asia  japan  korea  civilization  gibbon  rot  roots  fisher  giants  books  old-anglo  selection  agri-mindset  hari-seldon 
august 2016 by nhaliday
Information Processing: Bear baiting is dangerous
Oliver Stone confronts Idiocracy: http://infoproc.blogspot.com/2017/06/oliver-stone-confronts-idiocracy.html
Is there ever any reason to PPP-adjust aggregate GDP? I have not been able to come up with a single one. (Other than illegitimate reasons, like having more numbers to cherry-pick, or more opportunities to celebrate benchmarks.)

US is looking at huge estimated expenses to ensure safety/reliability of our stockpile. Everyone has these problems.

Because of test ban treaty the aging of nuclear weapons can only be studied indirectly through simulations, complex materials modeling, etc. Dangerous!

it's like opposite of Kissinger triangular diplomacy, alienate Russia over some inconsequential matter like Ukraine,
foreign-policy  russia  asia  world  usa  hsu  rhetoric  realpolitik  scitariat  wonkish  geopolitics  great-powers  multi  video  interview  commentary  economics  econ-metrics  china  heavy-industry  military  defense  scale  nuclear  poast  kumbaya-kult  simulation  twitter  social  discussion  econotariat  broad-econ  pseudoE  kissinger  real-nominal 
july 2016 by nhaliday
Guess the Correlation
some basic rules?
- more trouble w/ high than low end (maybe because I'm just guessing slope/omitting outliers?)
- should try out w/ correlated Gaussians to get some intuition
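one quick way to generate that practice data (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_gaussians(rho, n=200):
    # bivariate normal via the Cholesky factor of [[1, rho], [rho, 1]]
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    return x, y

for rho in (0.2, 0.5, 0.9):
    x, y = correlated_gaussians(rho)
    print(rho, np.corrcoef(x, y)[0, 1])  # sample r scatters around the target rho
```

scatter-plotting x vs y at a few rho values gives the eyeball calibration the game tests.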
games  learning  stats  intuition  thinking  hmm  street-fighting  correlation  instinct  mental-math  nitty-gritty  simulation  operational  todo  spock  quantitative-qualitative  dependence-independence 
july 2016 by nhaliday
