nhaliday + smoothness   39

Reconsidering epistemological scepticism – Dividuals
I blogged before about how I consider epistemological scepticism fully compatible with being conservative/reactionary. By epistemological scepticism I mean the worldview where concepts, categories, names, classes aren’t considered real but are entirely mental constructs, just useful ways to categorize phenomena, basically just tools. I think you can call this nominalism as well; the nominalism-realism debate was certainly about this. What follows is the pro-empirical worldview where logic and reasoning are considered highly fallible: hence you don’t think and don’t argue too much, you actually look and check things instead. You rely on experience, not reasoning.

...

Anyhow, the argument is that there are classes, which are indeed artificial, and there are kinds, which are products of natural forces, products of causality.

...

And the deeper – Darwinian – argument, unspoken but obvious, is that any being with a model of reality that does not conform to such real clumps, gets eaten by a grue.

This is impressive. It seems I have to extend my one-variable epistemology to a two-variable epistemology.

My former epistemology was that we generally categorize things according to their uses or dangers for us. So “chair” is – very roughly – defined as “anything we can sit on”. Similarly, we can categorize “predator” as “something that eats us or the animals that are useful for us”.

The unspoken argument against this is that the universe or the biosphere exists neither for us nor against us. A fox can eat your rabbits and a lion can eat you, but they don’t exist just for the sake of making your life difficult.

Hence, if you interpret phenomena only from the viewpoint of their uses or dangers for humans, you get only half the picture right. The other half is what it really is and where it came from.

Copying is everything: https://dividuals.wordpress.com/2015/12/14/copying-is-everything/
Philosophy professor Ruth Millikan’s insight is that everything that gets copied from an ancestor has a proper function or teleofunction: it is whatever feature or function made it and its ancestor selected for copying, in competition with all the other similar copiable things. This would mean Aristotelean teleology is correct within the field of copyable things, replicators, i.e. within biology, although in physics it is still obviously incorrect.

Darwinian Reactionary drew attention to it two years ago and I still don’t understand why it didn’t generate a bigger buzz. It is an extremely important insight.

I mean, this is what we were waiting for, a proper synthesis of science and philosophy, and a proper way to rescue Aristotelean teleology, which leads to such excellent common-sense predictions that intuitively it cannot be very wrong, yet modern philosophy always denied it.

The result of that is the bridging of the fact-value gap and the burying of the naturalistic fallacy: we CAN derive values from facts: a thing is good if it is well suited to its natural purpose, teleofunction or proper function, which is the purpose it was selected and copied for, the purpose (and the suitability for that purpose) that made the ancestors of this thing selected for copying, instead of all the other potential, similar ancestors.

...

What was humankind selected for? I am afraid the answer is kind of ugly.

Men were selected to compete between groups, to cooperate within groups (largely to coordinate for the sake of this competition), and to have a low-key competition inside the groups as well, for status and leadership. I am afraid intelligence is all about organizing elaborate tribal raids: “coalitionary arms races”. The most civilized, least brutal but still expensive case is arms races in prestige status, not dominance status: as when Ancient Athens built pretty buildings, modern France built the TGV, and America sent a man to the Moon in order to gain “gloire”, i.e. the prestige type of respect and status amongst the nations, the larger groups of mankind. If you are the type who doesn’t like blood, you should probably focus on these kinds of civilized, prestige-project competitions.

Women were selected for bearing children, for having strong and intelligent sons and therefore for having these heritable traits themselves (HBD kind of contradicts the more radically anti-woman aspects of RedPillery: marry a weak and stupid but attractive silly-blondie type of woman and your sons won’t be that great either), for pleasuring men, and in some rarer but existing cases, for being true companions and helpers of their husbands.

https://en.wikipedia.org/wiki/Four_causes
- Matter: a change or movement's material cause is the aspect of the change or movement which is determined by the material that composes the moving or changing things. For a table, that might be wood; for a statue, that might be bronze or marble.
- Form: a change or movement's formal cause is a change or movement caused by the arrangement, shape or appearance of the thing changing or moving. Aristotle says for example that the ratio 2:1, and number in general, is the cause of the octave.
- Agent: a change or movement's efficient or moving cause consists of things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a boy is a father.
- End or purpose: a change or movement's final cause is that for the sake of which a thing is what it is. For a seed, it might be an adult plant. For a sailboat, it might be sailing. For a ball at the top of a ramp, it might be coming to rest at the bottom.

https://en.wikipedia.org/wiki/Proximate_and_ultimate_causation
A proximate cause is an event which is closest to, or immediately responsible for causing, some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause) which is usually thought of as the "real" reason something occurred.

...

- Ultimate causation explains traits in terms of evolutionary forces acting on them.
- Proximate causation explains biological function in terms of immediate physiological or environmental factors.
gnon  philosophy  ideology  thinking  conceptual-vocab  forms-instances  realness  analytical-holistic  bio  evolution  telos-atelos  distribution  nature  coarse-fine  epistemic  intricacy  is-ought  values  duplication  nihil  the-classics  big-peeps  darwinian  deep-materialism  selection  equilibrium  subjective-objective  models  classification  smoothness  discrete  schelling  optimization  approximation  comparison  multi  peace-violence  war  coalitions  status  s-factor  fashun  reputation  civilization  intelligence  competition  leadership  cooperate-defect  within-without  within-group  group-level  homo-hetero  new-religion  causation  direct-indirect  ends-means  metabuch  physics  axioms  skeleton  wiki  reference  concept  being-becoming  essence-existence  logos  real-nominal 
july 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
The first ethical revolution – Gene Expression
Fifty years ago Julian Jaynes published The Origin of Consciousness in the Breakdown of the Bicameral Mind. Seventy years ago Karl Jaspers introduced the concept of the Axial Age. Both point to the same dynamic historically.

Something happened in the centuries around 500 BCE all around the world. Great religions and philosophies arose. The Indian religious traditions, the Chinese philosophical-political ones, and the roots of what we can recognize as Judaism. In Greece, the precursors of many modern philosophical streams emerged formally, along with a variety of political systems.

The next few centuries saw some more innovation. Rabbinical Judaism transformed a ritualistic tribal religion into an ethical one, and Christianity universalized Jewish religious thought, as well as infusing it with Greek systematic concepts. Meanwhile, Indian and Chinese thought continued to evolve, often due to interactions with each other (it is hard to imagine certain later developments in Confucianism without the Buddhist stimulus). Finally, in the 7th century, Islam emerged as the last great world religion.

...

Living in large complex societies with social stratification posed challenges. A religion such as Christianity was not a coincidence; something like its broad outlines may have been inevitable. Universal, portable, ethical, and infused with transcendence and coherency. Similarly, god-kings seem to have universally transformed themselves into the human who binds heaven to earth in some fashion.

The second wave of social-ethical transformation occurred in the early modern period, starting in Europe. My own opinion is that economic growth triggered by innovation and gains in productivity loosened constraints which had dampened further transformations in the domain of ethics. But the new developments ultimately were simply extensions and modifications of the earlier “source code” (e.g., whereas for nearly two thousand years Christianity had had to make peace with the existence of slavery, in the 19th century anti-slavery activists began marshaling Christian language against the institution).
gnxp  scitariat  discussion  reflection  religion  christianity  theos  judaism  china  asia  sinosphere  orient  india  the-great-west-whale  occident  history  antiquity  iron-age  mediterranean  the-classics  canon  philosophy  morality  ethics  universalism-particularism  systematic-ad-hoc  analytical-holistic  confucian  big-peeps  innovation  stagnation  technology  economics  biotech  enhancement  genetics  bio  flux-stasis  automation  ai  low-hanging  speedometer  time  distribution  smoothness  shift  dennett  simler  volo-avolo  👽  mystic  marginal  farmers-and-foragers  wealth  egalitarianism-hierarchy  values  formal-values  ideology  good-evil 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
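
side note, mine rather than the article's: a minimal round-robin sketch of the iterated PD with memory-one strategies, including a standard extortionate ZD strategy for the usual payoffs (the payoff values, initial state, and round count are my own assumptions). It illustrates the intuition quoted above: extortioners score poorly when paired with their own kind, while reciprocators do well against theirs.

```python
import itertools
import random

# standard PD payoffs (an assumption; any T > R > P > S with 2R > T + S works)
R, S, T, P = 3, 0, 5, 1
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

# memory-one strategies: P(cooperate | my last move, their last move)
STRATEGIES = {
    'ALLD':    {('C', 'C'): 0, ('C', 'D'): 0, ('D', 'C'): 0, ('D', 'D'): 0},
    'TFT':     {('C', 'C'): 1, ('C', 'D'): 0, ('D', 'C'): 1, ('D', 'D'): 0},
    'WSLS':    {('C', 'C'): 1, ('C', 'D'): 0, ('D', 'C'): 0, ('D', 'D'): 1},
    # an extortionate zero-determinant strategy (chi = 3, phi = 1/26) for the
    # payoffs above, in the sense of Press & Dyson 2012
    'EXTORT3': {('C', 'C'): 11/13, ('C', 'D'): 1/2, ('D', 'C'): 7/26, ('D', 'D'): 0},
}

def play(p_a, p_b, rounds=200_000, seed=0):
    """Average per-round payoffs of two memory-one strategies against each other."""
    rng = random.Random(seed)
    a, b = 'C', 'C'  # initial state of the Markov chain (a modeling choice)
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = ('C' if rng.random() < p_a[(a, b)] else 'D',
                'C' if rng.random() < p_b[(b, a)] else 'D')
        total_a += PAYOFF[(a, b)]
        total_b += PAYOFF[(b, a)]
    return total_a / rounds, total_b / rounds

if __name__ == '__main__':
    for x, y in itertools.combinations_with_replacement(STRATEGIES, 2):
        sa, sb = play(STRATEGIES[x], STRATEGIES[y])
        print(f'{x:>8} vs {y:<8}  {sa:5.2f} / {sb:5.2f}')
```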

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing, because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, and if you are silenced the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l 
march 2018 by nhaliday
Unaligned optimization processes as a general problem for society
TL;DR: There are lots of systems in society which seem to fit the pattern of “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.”

...

Here are some more places where this idea could come into play:

- Marketing—humans try to buy things that will make our lives better, but our process for determining this is imperfect. A more powerful optimization process produces extremely good advertising to sell us things that aren’t actually going to make our lives better.
- Politics—we get extremely effective demagogues who pit us against our essential good values.
- Lobbying—as industries get bigger, the optimization process to choose great lobbyists for industries gets larger, but the process to make regulators robust doesn’t get correspondingly stronger. So regulatory capture gets worse and worse. Rent-seeking gets more and more significant.
- Online content—in a weaker internet, sites can’t be addictive except via being good content. In the modern internet, people can feel addicted to things that they wish they weren’t addicted to. We didn’t use to have the social expertise to make clickbait nearly as well as we do it today.
- News—Hyperpartisan news sources are much more worth it if distribution is cheaper and the market is bigger. News sources get an advantage from being truthful, but as society gets bigger, this advantage gets proportionally smaller.

...

For these reasons, I think it’s quite plausible that humans are fundamentally unable to have a “good” society with a population greater than some threshold, particularly if all these people have access to modern technology. Humans don’t have the rigidity to maintain social institutions in the face of that kind of optimization process. I think it is unlikely but possible (10%?) that this threshold population is smaller than the current population of the US, and that the US will crumble due to the decay of these institutions in the next fifty years if nothing totally crazy happens.
ratty  thinking  metabuch  reflection  metameta  big-yud  clever-rats  ai-control  ai  risk  scale  quality  ability-competence  network-structure  capitalism  randy-ayndy  civil-liberty  marketing  institutions  economics  political-econ  politics  polisci  advertising  rent-seeking  government  coordination  internet  attention  polarization  media  truth  unintended-consequences  alt-inst  efficiency  altruism  society  usa  decentralized  rhetoric  prediction  population  incentives  intervention  criminal-justice  property-rights  redistribution  taxes  externalities  science  monetary-fiscal  public-goodish  zero-positive-sum  markets  cost-benefit  regulation  regularizer  order-disorder  flux-stasis  shift  smoothness  phase-transition  power  definite-planning  optimism  pessimism  homo-hetero  interests  eden-heaven  telos-atelos  threat-modeling  alignment 
february 2018 by nhaliday
Genetics allows the dead to speak from the grave - The Unz Review
It is a running joke of mine on Twitter that the genetics of white people is one of those fertile areas of research that seems to never end. Is it a surprise that the ancient DNA field has first elucidated the nature of this obscure foggy continent, before rich histories of the untold billions of others? It’s funny, and yet these stories, true tales, do I think tell us a great deal about how modern human populations came to be in the last 10,000 years. The lessons of Europe can be generalized. We don’t have the rich stock of ancient DNA from China, the Middle East, or India. At least not enough to do population genomics, which requires larger sample sizes than a few. But, climate permitting, we may.

...

At about the same time the evidence for Neanderthal admixture came out, Luke Jostins posted results which showed that other human lineages were also undergoing encephalization, before their trajectory was cut short. That is, their brains were getting bigger before they went extinct. To me this suggested that the broader Homo lineage was undergoing a process of nearly inevitable change due to a series of evolutionary events very deep in our history, perhaps ancestral on the order of millions of years. Along with the evidence for admixture it made me reconsider my priors. Perhaps some Homo lineage was going to expand outward and do what we did, and perhaps it wasn’t inevitable that it was going to be us. Perhaps the Neanderthal Parallax scenario is not as fantastical as we might think?
gnxp  scitariat  books  review  reflection  sapiens  genetics  genomics  pop-structure  recommendations  history  antiquity  iron-age  the-classics  mediterranean  medieval  MENA  archaeology  gene-flow  migration  big-picture  deep-materialism  world  aDNA  methodology  🌞  europe  roots  gavisti  the-great-west-whale  culture  cultural-dynamics  smoothness  asia  india  summary  archaics  discussion  conquest-empire  canon  shift  eden  traces 
may 2017 by nhaliday
Barrier function - Wikipedia
In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the feasible region of an optimization problem.[1] Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle.
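
side note, mine rather than the article's: the standard concrete instance is the logarithmic barrier, sketched below.

```latex
% replace the constrained problem
%   \min_x f(x) \quad \text{s.t.} \quad c_i(x) \le 0, \; i = 1, \dots, m
% by an unconstrained family indexed by \mu > 0:
\min_{x} \; f(x) \;-\; \mu \sum_{i=1}^{m} \log\bigl(-c_i(x)\bigr)
% the barrier term tends to +\infty as any c_i(x) \to 0^- (i.e. as x approaches
% the boundary of the feasible region), and letting \mu \to 0 recovers the
% original problem in the limit (the idea behind interior-point methods).
```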
math  acm  concept  optimization  singularity  smoothness  relaxation  wiki  reference  regularization  math.CA  nibble 
february 2017 by nhaliday
Sobolev space - Wikipedia
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
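
side note, mine rather than the article's: the norm in question, for reference.

```latex
% Sobolev norm on W^{k,p}(\Omega): combine the L^p norms of f and of its weak
% derivatives D^\alpha f up to order k (multi-index notation):
\| f \|_{W^{k,p}(\Omega)}
  = \Bigl( \sum_{|\alpha| \le k} \| D^{\alpha} f \|_{L^{p}(\Omega)}^{p} \Bigr)^{1/p},
  \qquad 1 \le p < \infty
```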
math  concept  math.CA  math.FA  differential  inner-product  wiki  reference  regularity  smoothness  norms  nibble  zooming 
february 2017 by nhaliday
Prékopa–Leindler inequality | Academically Interesting
Consider the following statements:
1. The shape with the largest volume enclosed by a given surface area is the n-dimensional sphere.
2. A marginal or sum of log-concave distributions is log-concave.
3. Any Lipschitz function of a standard n-dimensional Gaussian distribution concentrates around its mean.
What do these all have in common? Despite being fairly non-trivial and deep results, they all can be proved in less than half of a page using the Prékopa–Leindler inequality.

ie, Brunn-Minkowski
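
statement for reference (standard form, not quoted from the post):

```latex
% Prekopa-Leindler: fix 0 < \lambda < 1 and nonnegative measurable f, g, h on \mathbb{R}^n with
%   h(\lambda x + (1 - \lambda) y) \ge f(x)^{\lambda} g(y)^{1 - \lambda} for all x, y. Then:
\int_{\mathbb{R}^n} h
  \;\ge\; \Bigl( \int_{\mathbb{R}^n} f \Bigr)^{\lambda}
          \Bigl( \int_{\mathbb{R}^n} g \Bigr)^{1 - \lambda}
% applying it to the indicator functions of sets A, B and \lambda A + (1 - \lambda) B
% yields a dimension-free multiplicative form of Brunn-Minkowski, hence the note above.
```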
acmtariat  clever-rats  ratty  math  acm  geometry  measure  math.MG  estimate  distribution  concentration-of-measure  smoothness  regularity  org:bleg  nibble  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday
measure theory - Continuous function a.e. - Mathematics Stack Exchange
- note: Riemann integrable (for bounded f on a compact interval) iff continuous a.e. (see Wheeden-Zygmund 5.54)
- equal a.e. to continuous f, but not continuous a.e.: characteristic function of rationals
- continuous a.e., but not equal a.e. to continuous f: step function
- continuous a.e., w/ uncountably many discontinuities: characteristic function of Cantor set
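
spelling out the second item (my note, not part of the answer):

```latex
% the Dirichlet function
\chi_{\mathbb{Q}}(x) =
  \begin{cases} 1 & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q} \end{cases}
% equals the continuous function 0 almost everywhere (since \mathbb{Q} is a null set),
% yet it is discontinuous at every point; so "equal a.e. to a continuous function"
% does not imply "continuous a.e."
```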
q-n-a  overflow  math  math.CA  counterexample  list  measure  smoothness  singularity  nibble  integral 
january 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.

...

The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi 
january 2017 by nhaliday
ca.analysis and odes - Why do functions in complex analysis behave so well? (as opposed to functions in real analysis) - MathOverflow
Well, real-valued analytic functions are just as rigid as their complex-valued counterparts. The true question is why complex smooth (or complex differentiable) functions are automatically complex analytic, whilst real smooth (or real differentiable) functions need not be real analytic.
q-n-a  overflow  math  math.CA  math.CV  synthesis  curiosity  gowers  oly  mathtariat  tcstariat  comparison  rigidity  smoothness  singularity  regularity  nibble 
january 2017 by nhaliday
Cantor function - Wikipedia
- uniformly continuous but not absolutely continuous
- derivative zero almost everywhere but not constant
- see also: http://mathoverflow.net/questions/31603/why-do-probabilists-take-random-variables-to-be-borel-and-not-lebesgue-measura/31609#31609 (the exercise mentioned uses c(x)+x for c the Cantor function)
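
one standard way to write it down (my note, not from the article):

```latex
% self-similar definition of the Cantor function c : [0,1] \to [0,1]; equivalently,
% truncate the base-3 expansion of x at the first digit 1, replace every 2 by 1,
% and read the result in base 2:
c(x) =
  \begin{cases}
    \tfrac{1}{2}\, c(3x)                    & 0 \le x \le \tfrac{1}{3} \\
    \tfrac{1}{2}                            & \tfrac{1}{3} \le x \le \tfrac{2}{3} \\
    \tfrac{1}{2} + \tfrac{1}{2}\, c(3x - 2) & \tfrac{2}{3} \le x \le 1
  \end{cases}
% c climbs from 0 to 1 while c'(x) = 0 off the (null) Cantor set, which is exactly
% how it is uniformly continuous yet not absolutely continuous.
```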
math  math.CA  counterexample  wiki  reference  multi  math.FA  atoms  measure  smoothness  singularity  nibble 
january 2017 by nhaliday
real analysis - Proof of "every convex function is continuous" - Mathematics Stack Exchange
bound above by secant and below by tangent, so graph of function is constrained to a couple triangles w/ common vertex at (x, f(x))
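
spelled out a bit (my expansion of the note above, using a chord rather than a tangent so differentiability isn't assumed):

```latex
% for interior points a < x_0 < x < b of the domain, convexity (monotone chord slopes) gives
f(x_0) + \frac{f(x_0) - f(a)}{x_0 - a}\,(x - x_0)
  \;\le\; f(x) \;\le\;
  f(x_0) + \frac{f(b) - f(x_0)}{b - x_0}\,(x - x_0)
% the upper bound is the secant through (x_0, f(x_0)) and (b, f(b)); the lower bound
% is the chord through (a, f(a)) and (x_0, f(x_0)) extended past x_0. Both pinch to
% f(x_0) as x \to x_0^+, the mirror-image bounds handle x \to x_0^-, so f is
% continuous at every interior point.
```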
tidbits  math  math.CA  q-n-a  visual-understanding  acm  overflow  proofs  smoothness  nibble  curvature  convexity-curvature 
november 2016 by nhaliday
Evolution of human intelligence: the roles of brain size and mental construction. - PubMed - NCBI
Two competing philosophical paradigms characterize approaches to the evolution of the human mind. One postulates continuity between animal and human behavioral capacities. The other assumes that humans and animals are separated by major qualitative behavioral and mental gaps. This paper presents a continuity model that suggests that expanded human mental capacities primarily reflect the increased information processing capacities of the enlarged human brain including the enlarged neocortex, cerebellum, and basal ganglia. These increased information processing capacities enhance human abilities to combine and recombine highly differentiated actions, perceptions, and concepts in order to construct larger, more complex, and highly variable behavioral units in a variety of behavioral domains including language, social intelligence, tool-making, and motor sequences.
study  sapiens  eden  evolution  neuro  intelligence  evopsych  neuro-nitgrit  models  bare-hands  shift  smoothness 
november 2016 by nhaliday
The Hyborian Age | West Hunter
I was contemplating Conan the Barbarian, and remembered the essay that Robert E. Howard wrote about the background of those stories – The Hyborian Age. I think that the flavor of Howard’s pseudo-history is a lot more realistic than the picture of the human past academics preferred over the past few decades.

In Conan’s world, it’s never surprising to find a people that once mixed with some ancient prehuman race. Happens all the time. Until very recently, the vast majority of workers in human genetics and paleontology were sure that this never occurred – and only changed their minds when presented with evidence that was both strong (ancient DNA) and too mathematically sophisticated for them to understand or challenge (D-statistics).

Conan’s history was shaped by the occasional catastrophe. Most academics (particularly geologists) don’t like catastrophes, but they have grudgingly come to admit their importance – things like the Thera and Toba eruptions, or the K/T asteroid strike and the Permo-Triassic crisis.

Between the time when the oceans drank Atlantis, and the rise of the sons of Aryas, evolution seems to have run pretty briskly, but without any pronounced direction. Men devolved into ape-men when the environment pushed in that direction (Flores ?) and shifted right back when the environment favored speech and tools. Culture shaped evolution, and evolution shaped culture. An endogamous caste of snake-worshiping priests evolved in a strange direction. Although their IQs were considerably higher than average, they remained surprisingly vulnerable to sword-bearing barbarians.

...

Most important, Conan, unlike the typical professor, knew what was best in life.
west-hunter  sapiens  antiquity  aphorism  gavisti  martial  scitariat  nietzschean  archaeology  kumbaya-kult  peace-violence  conquest-empire  nihil  death  gene-flow  archaics  aDNA  flux-stasis  smoothness  shift  history  age-of-discovery  latin-america  farmers-and-foragers  migration  anthropology  embodied  straussian  scifi-fantasy  gnosis-logos  god-man-beast-victim 
november 2016 by nhaliday
The Day Before Forever | West Hunter
Yesterday, I was discussing the possibilities concerning slowing, or reversing aging – why it’s obviously possible, although likely a hard engineering problem. Why partial successes would be valuable, why making use of the evolutionary theory of senescence should help, why we should look at whales and porcupines as well as Jeanne Calment, etc., etc. I talked a long time – it’s a subject that has interested me for many years.

But there’s one big question: why are the powers that be utterly uninterested?

https://www.facebook.com/ISIInc/videos/vb.267919097102/641005449680861/?type=2&theater
The Intercollegiate Studies Institute and the Abigail Adams Institute host a debate between Peter Thiel and William Hurlbut. Resolved: Technology Should Treat Death as an Enemy

https://westhunt.wordpress.com/2017/07/03/the-best-things-in-life-are-cheap-today/
What if you could buy an extra year of youth for a million bucks (real cost). Clearly this country ( or any country) can’t afford that for everyone. Some people could: and I think it would stick in many people’s craw. Even worse if they do it by harvesting the pineal glands of children and using them to manufacture a waxy nodule that forfends age.

This is something like the days of old, pre-industrial times. Back then, the expensive, effective life-extender was food in a famine year.

https://westhunt.wordpress.com/2017/04/11/the-big-picture/
Once upon a time, I wrote a long spiel on life extension – before it was cool, apparently. I sent it off to an interested friend [a science fiction editor] who was at that time collaborating on a book with a certain politician. That politician – Speaker of the House, but that could be anyone of thousands of guys, right? – ran into my spiel and read it. His immediate reaction was that greatly extending the healthy human life span would be horrible – it would bankrupt Social Security ! Nice to know that guys running the show always have the big picture in mind.

Reminds me of a sf story [Trouble with Lichen] in which something of that sort is invented and denounced by the British trade unions, as a plot to keep them working forever & never retire.

https://westhunt.wordpress.com/2015/04/16/he-still-has-that-hair/
He’s got the argument backward: sure, natural selection has not favored perfect repair, so says the evolutionary theory of senescence. If it had, then we could perhaps conclude that perfect repair was very hard to achieve, since we don’t see it, at least not in complex animals.* But since it was not favored, since natural selection never even tried, it may not be that difficult.

Any cost-free longevity gene that made you live to be 120 would have had a small payoff, since various hazards were fairly likely to get you by then anyway… And even if it would have been favored, a similar gene that cost a nickel would not have been. Yet we can afford a nickel.

There are useful natural examples: we don’t have to start from scratch. Bowhead whales live over 200 years: I’m not too proud to learn from them.

Lastly, this would take a lot of work. So what?

*Although we can invent things that evolution can’t – we don’t insist that all the intermediate stages be viable.

https://westhunt.wordpress.com/2013/12/09/aging/
https://westhunt.wordpress.com/2014/09/22/suspicious-minds/

doesn't think much of Aubrey de Grey: https://westhunt.wordpress.com/2013/07/21/of-mice-and-men/#comment-15832
I wouldn’t rely on Aubrey de Grey.

It might be easier to fix if we invested more than a millionth of a percent of GNP on longevity research. It’s doable, but hardly anyone is interested. I doubt if most people, including most MDs and biologists, even know that it’s theoretically possible.

I suppose I should do something about it. Some of our recent work ( Henry and me) suggests that people of sub-Saharan African descent might offer some clues – their funny pattern of high paternal age probably causes the late-life mortality crossover, it couldn’t hurt to know the mechanisms involved.

Make Room! Make Room!: https://westhunt.wordpress.com/2015/06/24/make-room-make-room/
There is a recent article in Phys Rev Letters (“Programed Death is Favored by Natural Selection in Spatial Systems”) arguing that aging is an adaptation – natural selection has favored mechanisms that get rid of useless old farts. I can think of other people that have argued for this – some pretty smart cookies (August Weismann, for example, although he later abandoned the idea) and at the other end of the spectrum utter loons like Martin Blaser.

...

There might be mutations that significantly extended lifespan but had consequences that were bad for fitness, at least in past environments – but that isn’t too likely if mutational accumulation and antagonistic pleiotropy are the key drivers of senescence in humans. As I said, we’ve never seen any.

more on Martin Blaser:
https://westhunt.wordpress.com/2013/01/22/nasty-brutish-but-not-that-short/#comment-7514
This is off topic, but I just read Germs Are Us and was struck by the quote from Martin Blaser: “[causing nothing but harm] isn’t how evolution works” […] “H. pylori is an ancestral component of humanity.”
That seems to be the assumption that the inevitable trend is toward symbiosis that I recall from Ewald’s “Plague Time”. My recollection is that it’s false if the pathogen can easily jump to another host. The bulk of the New Yorker article reminded me of Seth Roberts.

I have corresponded at length with Blaser. He’s a damn fool, not just on this. Speaking of, would there be general interest in listing all the damn fools in public life? Of course making the short list would be easier.

https://westhunt.wordpress.com/2013/01/18/dirty-old-men/#comment-64117
enhancement  longevity  aging  discussion  west-hunter  scitariat  multi  thermo  death  money  big-picture  reflection  bounded-cognition  info-dynamics  scifi-fantasy  food  pinker  thinking  evolution  genetics  nature  oceans  inequality  troll  lol  chart  model-organism  shift  smoothness  🌞  🔬  track-record  low-hanging  aphorism  ideas  speculation  complex-systems  volo-avolo  poast  people  paternal-age  life-history  africa  natural-experiment  mutation  genetic-load  questions  study  summary  critique  org:nat  commentary  parasites-microbiome  disease  elite  tradeoffs  homo-hetero  contrarianism  history  medieval  lived-experience  EEA  modernity  malthus  optimization  video  facebook  social  debate  thiel  barons 
november 2016 by nhaliday
Overcoming Bias : In Praise of Low Needs
We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

...

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leap is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human like creature per atom.
hanson  contrarianism  stagnation  trends  values  farmers-and-foragers  essay  rhetoric  new-religion  ratty  spreading  phalanges  malthus  formal-values  flux-stasis  economics  growth-econ  status  fashun  signaling  anthropic  fermi  nihil  death  risk  futurism  hierarchy  ranking  discipline  temperance  threat-modeling  existence  wealth  singularity  smoothness  discrete  scale  magnitude  population  physics  estimate  uncertainty  flexibility  rigidity  capitalism  heavy-industry  the-world-is-just-atoms  nature  corporation  institutions  coarse-fine 
october 2016 by nhaliday
Noise: dinosaurs, syphilis, and all that | West Hunter
Generally speaking, I thought the paleontologists were a waste of space: innumerate, ignorant about evolution, and simply not very smart.

None of them seemed to understand that a sharp, short unpleasant event is better at causing a mass extinction, since it doesn’t give flora and fauna time to adapt.

Most seemed to think that gradual change caused by slow geological and erosion forces was ‘natural’, while extraterrestrial impact was not. But if you look at the Moon, or Mars, or the Kirkwood gaps in the asteroids, or think about the KAM theorem, it is apparent that Newtonian dynamics implies that orbits will be perturbed, and that sometimes there will be catastrophic cosmic collisions. Newtonian dynamics is as ‘natural’ as it gets: paleontologists not studying it in school and not having much math hardly makes it ‘unnatural’.

One of the more interesting general errors was not understanding how to deal with noise – incorrect observations. There’s a lot of noise in the paleontological record. Dinosaur bones can be eroded and redeposited well after their lifetimes – well after the extinction of all dinosaurs. The fossil record is patchy: if a species is rare, it can easily look as if it went extinct well before it actually did. This means that the data we have is never going to agree with a perfectly correct hypothesis – because some of the data is always wrong. Particularly true if the hypothesis is specific and falsifiable. If your hypothesis is vague and imprecise – not even wrong – it isn’t nearly as susceptible to noise. As far as I can tell, a lot of paleontologists [along with everyone in the social sciences] think of unfalsifiability as a strength.
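
A toy simulation of this point (my own illustration, not from the post): suppose every species really does go extinct at the same instant, the fossil record is sparse, and a small fraction of finds are redeposited into younger strata. The apparent last occurrences then disagree with the true, sharp hypothesis in both directions. All parameters are arbitrary assumptions.

```python
# Toy illustration: a perfectly correct hypothesis (simultaneous extinction at
# time T) still conflicts with the raw data, because the record is patchy and
# some observations are simply wrong (redeposited, hence mis-dated).
import random

random.seed(0)
T = 100.0            # true, simultaneous extinction time
N_SPECIES = 20
ERROR_RATE = 0.05    # chance a find gets redeposited and mis-dated

def apparent_last_occurrence(find_rate):
    """Latest apparent fossil date for one species with the given find rate."""
    finds, t = [], 0.0
    while True:
        t += random.expovariate(find_rate)      # sparser record for rarer species
        if t > T:
            break
        obs = t + random.uniform(0, 30) if random.random() < ERROR_RATE else t
        finds.append(obs)
    return max(finds) if finds else None

for find_rate in (2.0, 0.1):                    # common vs rare species
    lasts = [x for x in (apparent_last_occurrence(find_rate)
                         for _ in range(N_SPECIES)) if x]
    print(f"find rate {find_rate}: apparent extinctions span "
          f"{min(lasts):.0f}-{max(lasts):.0f} (truth: all at {T:.0f})")
```

With a dense record the apparent last occurrences hug the true date; with a sparse one they scatter well before it, and the occasional redeposited find lands well after it.
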

Done Quickly: https://westhunt.wordpress.com/2011/12/03/done-quickly/
I’ve never seen anyone talk about it much, but when you think about mass extinctions, you also have to think about rates of change

You can think of a species occupying a point in a many-dimensional space, where each dimension represents some parameter that influences survival and/or reproduction: temperature, insolation, nutrient concentrations, oxygen partial pressure, toxin levels, yada yada yada. That point lies within a zone of habitability – the set of environmental conditions that the species can survive. Mass extinction occurs when environmental changes are so large that many species are outside their comfort zone.

The key point is that, with gradual change, species adapt. In just a few generations, you can see significant heritable responses to a new environment. Frogs have evolved much greater tolerance of acidification in 40 years (about 15 generations). Some plants in California have evolved much greater tolerance of copper in just 70 years.

As this happens, the boundaries of the comfort zone move. Extinctions occur when the rate of environmental change is greater than the rate of adaptation, or when the amount of environmental change exceeds the limit of feasible adaptation. There are such limits: bar-headed geese fly over Mt. Everest, where the oxygen partial pressure is about a third of that at sea level, but I’m pretty sure that no bird could survive on the Moon.
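
The same point in a toy one-dimensional model (a sketch of my own under stated assumptions, not from the post): hold the total environmental change fixed and only vary how fast it arrives.

```python
# One-dimensional toy version of the comfort-zone picture. The optimum can
# shift by at most ADAPT_RATE per generation; the species dies once the
# environment is more than TOLERANCE away from its optimum. Both values are
# arbitrary assumptions chosen for illustration.
ADAPT_RATE = 0.05    # max heritable shift per generation
TOLERANCE = 1.0      # half-width of the zone of habitability

def survives(env_path):
    optimum = 0.0
    for env in env_path:
        step = max(-ADAPT_RATE, min(ADAPT_RATE, env - optimum))   # bounded adaptation
        optimum += step
        if abs(env - optimum) > TOLERANCE:
            return False
    return True

total_change = 5.0
gradual = [total_change * g / 500 for g in range(1, 501)]   # spread over 500 generations
abrupt = [total_change] * 500                               # same change, all at once

print("gradual change:", "survives" if survives(gradual) else "extinct")
print("abrupt change: ", "survives" if survives(abrupt) else "extinct")
```

The same total change is survivable when the rate of change stays below the rate of adaptation, and fatal when it arrives at once.
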

...

Paleontologists prefer gradualist explanations for mass extinctions, but they must be wrong, for the most part.
disease  science  critique  rant  history  thinking  regularizer  len:long  west-hunter  thick-thin  occam  social-science  robust  parasites-microbiome  early-modern  parsimony  the-trenches  bounded-cognition  noise-structure  signal-noise  scitariat  age-of-discovery  sex  sexuality  info-dynamics  alt-inst  map-territory  no-go  contradiction  dynamical  math.DS  space  physics  mechanics  archaeology  multi  speed  flux-stasis  smoothness  evolution  environment  time  shift  death  nihil  inference  apollonian-dionysian  error  explanation  spatial  discrete  visual-understanding  consilience  traces  evidence  elegance 
september 2016 by nhaliday

