nhaliday + ems   53

Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.
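A hedged aside on the error-rate point (assuming independent per-step errors; the N and p below are purely illustrative): the probability that a procedure of N steps runs to completion with no error is

  P(no error) = (1 - p)^N ≈ exp(-p·N)

so a computation of roughly N ≈ 10^10 elementary operations needs a per-operation error probability far below 10^-10 to finish reliably without checking, which is why the outline ends with compensation by checking and self-correcting features.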

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
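A back-of-the-envelope on the two bullets above (a sketch using only the quoted figures, not modern estimates):

# von Neumann's size vs. speed comparison, with the figures quoted above
neurons = 1e10        # elements in the human central nervous system
tubes = 1e4           # vacuum tubes in the largest computer of the time
neuron_ms = 5.0       # ms from neuron potential to neuron potential
tube_ms = 1e-3        # ms per vacuum-tube operation

size_ratio = neurons / tubes        # ~1e6: the brain has ~a million times more elements
speed_ratio = neuron_ms / tube_ms   # ~5e3: tubes are ~5000x faster per operation

# crude aggregate "switchings per second", ignoring architecture entirely
brain_rate = neurons / (neuron_ms / 1000)   # ~2e12 per second
machine_rate = tubes / (tube_ms / 1000)     # ~1e10 per second

print(f"size ratio ~ {size_ratio:.0e}, speed ratio ~ {speed_ratio:.0e}")
print(f"raw switchings/sec: brain ~ {brain_rate:.0e}, machine ~ {machine_rate:.0e}")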

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
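A minimal sketch of that picture, analog on the inside, all-or-nothing on the outside: a toy leaky integrate-and-fire neuron (parameter values here are made up for illustration, not a claim about real neurons):

import random

v = 0.0           # membrane potential: a continuous, analog-like quantity
threshold = 1.0   # fire when the potential crosses this level
leak = 0.9        # fraction of potential retained each time step

spikes = []
for t in range(60):
    v = leak * v + random.uniform(0.0, 0.25)   # noisy analog input current
    if v >= threshold:
        spikes.append(1)   # all-or-nothing pulse: "spike"
        v = 0.0            # reset after firing
    else:
        spikes.append(0)   # "no spike"

print("".join("|" if s else "." for s in spikes))   # spike train, e.g. "......|.....|....|"

The internal state is graded and noisy, but the only thing transmitted downstream is the binary spike train, which is the sense in which the answer calls the brain neither analog nor digital.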
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
Harnessing Evolution - with Bret Weinstein | Virtual Futures Salon - YouTube
- ways to get out of Malthusian conditions: expansion to new frontiers, new technology, redistribution/theft
- some discussion of existential risk
- wants to change humanity's "purpose" to one that would be safe in the long run; important thing is it has to be ESS (maybe he wants a singleton?)
- not too impressed by transhumanism (wouldn't identify with a brain emulation)
video  interview  thiel  expert-experience  evolution  deep-materialism  new-religion  sapiens  cultural-dynamics  anthropology  evopsych  sociality  ecology  flexibility  biodet  behavioral-gen  self-interest  interests  moloch  arms  competition  coordination  cooperate-defect  frontier  expansionism  technology  efficiency  thinking  redistribution  open-closed  zero-positive-sum  peace-violence  war  dominant-minority  hypocrisy  dignity  sanctity-degradation  futurism  environment  climate-change  time-preference  long-short-run  population  scale  earth  hidden-motives  game-theory  GT-101  free-riding  innovation  leviathan  malthus  network-structure  risk  existence  civil-liberty  authoritarianism  tribalism  us-them  identity-politics  externalities  unintended-consequences  internet  social  media  pessimism  universalism-particularism  energy-resources  biophysical-econ  politics  coalitions  incentives  attention  epistemic  biases  blowhards  teaching  education  emotion  impetus  comedy  expression-survival  economics  farmers-and-foragers  ca 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
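Hanson's parenthetical, that a common "g" factor appears even with independent module variation, is easy to sanity-check in simulation (a sketch; the module counts, task weights, and noise level are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks = 2000, 50, 20

# each person's modules vary independently: no general factor is built in
modules = rng.normal(size=(n_people, n_modules))

# each task draws on many modules, with random nonnegative weights
weights = rng.uniform(size=(n_modules, n_tasks))
scores = modules @ weights + rng.normal(scale=1.0, size=(n_people, n_tasks))

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tasks, dtype=bool)]
eigvals = np.linalg.eigvalsh(corr)[::-1]

print(f"mean inter-task correlation: {off_diag.mean():.2f}")        # positive manifold
print(f"first eigenvalue / total:    {eigvals[0] / n_tasks:.2f}")   # one dominant factor

Because every task taps an overlapping set of modules, task scores come out positively correlated and a single dominant factor falls out of the correlation matrix, which is exactly the point being made.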

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
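In the same street-fighting spirit (a sketch with assumed numbers; the probe mass, cruise speed, and captured-power fraction below are illustrative, not the paper's actual figures, and propulsion losses are ignored):

import math

c = 3.0e8                 # m/s
sun_luminosity = 3.8e26   # W
probe_mass = 1.0e3        # kg, assumed dry mass of one self-replicating probe
v = 0.5 * c               # assumed cruise speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
ke_per_probe = (gamma - 1.0) * probe_mass * c ** 2   # relativistic kinetic energy, J

# energy available if a civilization captures even 1% of one star's output for a year
budget = 0.01 * sun_luminosity * 3.15e7              # J

print(f"KE per probe:             {ke_per_probe:.1e} J")
print(f"1% of a star-year:        {budget:.1e} J")
print(f"probes launchable / year: {budget / ke_per_probe:.1e}")

On these assumptions the energy cost per probe is tiny next to what a star-spanning civilization can collect, which is the sense in which the abstract calls the project "relatively simple".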
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
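Bostrom's fish-net case is the ordinary selection effect; a tiny simulation makes the bias concrete (all numbers here are made up for illustration):

import random

random.seed(0)
pond = [random.uniform(0.5, 12.0) for _ in range(10_000)]   # true fish lengths, inches
net_cutoff = 3.0                                            # the net only retains small fish

catch = [f for f in pond if f <= net_cutoff][:100]          # the hundred fish you examine

print(f"biggest fish caught:  {max(catch):.1f} in")   # ~3.0, what you observe
print(f"biggest fish in pond: {max(pond):.1f} in")    # ~12, what you would wrongly rule out

The observation selection effect has the same structure, except that the "net" is the requirement that an observer exists at all to take the measurement.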
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Defection – quas lacrimas peperere minoribus nostris!
https://quaslacrimas.wordpress.com/2017/06/28/discussion-of-defection/

Kindness Against The Grain: https://srconstantin.wordpress.com/2017/06/08/kindness-against-the-grain/
I’ve heard from a number of secular-ish sources (Carse, Girard, Arendt) that the essential contribution of Christianity to human thought is the concept of forgiveness. (Ribbonfarm also has a recent post on the topic of forgiveness.)

I have never been a Christian and haven’t even read all of the New Testament, so I’ll leave it to commenters to recommend Christian sources on the topic.

What I want to explore is the notion of kindness without a smooth incentive gradient.

The Social Module: https://bloodyshovel.wordpress.com/2015/10/09/the-social-module/
Now one could propose that the basic principle of human behavior is to raise the SP number. Sure there’s survival and reproduction. Most people would forget all their socialization if left hungry and thirsty for days in the jungle. But more often than not, survival and reproduction depend on being high status; having a good name among your peers is the best way to get food, housing and hot mates.

The way to raise one’s SP number depends on thousands of different factors. We could grab most of them and call them “culture”. In China having 20 teenage mistresses as an old man raises your SP; in Western polite society it is social death. In the West making a fuss about disobeying one’s parents raises your SP, everywhere else it lowers it a great deal. People know that; which is why bureaucrats in China go to great lengths to acquire a stash of young women (who they seldom have time to actually enjoy), while teenagers in the West go to great lengths to be annoying to their parents for no good reason.

...

It thus shouldn’t surprise us that something as completely absurd as Progressivism is the law of the land in most of the world today, even though it denies obvious reality. It is not the case that most people know that progressive points are all bogus, but obey because of fear or cowardice. No, an average human brain has many more neurons being used to scan the social climate and see how SP are allotted than neurons being used to analyze patterns in reality to ascertain the truth. Surely your brain does care a great deal about truth in some very narrow areas of concern to you. Remember Conquest’s first law: Everybody is Conservative about what he knows best. You have to know the truth about what you do, if you are to do it effectively.

But you don’t really care about truth anywhere else. And why would you? It takes time and effort you can’t really spare, and it’s not really necessary. As long as you have some area of specialization where you can make a living, all the rest you must do to achieve survival and reproduction is to raise your SP so you don’t get killed and your guts sacrificed to the mountain spirits.

SP theory (I accept suggestions for a better name) can also explain the behavior of leftists. Many conservatives of a medium level of enlightenment point out the paradox that leftists historically have held completely different ideas. Leftism used to be about the livelihood of industrial workers, now they agitate about the environment, or feminism, or foreigners. Some people would say that’s just historical change, or pull a No True Scotsman about this or that group not being really leftists. But that’s transparent bullshit; very often we see a single person shifting from agitating about Communism and worker rights, to agitating about global warming or rape culture.

...

The leftist strategy could be defined as “psychopathic SP maximization”. Leftists attempt to destroy social equilibrium so that they can raise their SP number. If humans are, in a sense, programmed to constantly raise their status, well high status people by definition can’t raise it anymore (though they can squabble against each other for marginal gains), their best strategy is to freeze society in place so that they can enjoy their superiority. High status people by definition have power, and thus social hierarchy during human history tends to be quite stable.

This goes against the interests of many. First of all the lower status people, who, well, want to raise their status, but can’t manage to do so. And it also goes against the interests of the particularly annoying members of the upper class who want to raise their status on the margin. Conservative people can be defined as those who, no matter the absolute level, are in general happy with it. This doesn’t mean they don’t want higher status (by definition all humans do), but the output of other brain modules may conclude that attempts to raise SP might threaten one’s survival and reproduction; or just that the chances of raising one’s individual SP is hopeless, so one might as well stay put.

...

You can’t blame people for being logically inconsistent; because they can’t possibly know anything about all these issues. Few have any experience or knowledge about evolution and human races, or about the history of black people to make an informed judgment on HBD. Few have time to learn about sex differences, and stuff like the climate is as close to unknowable as there is. Opinions about anything but a very narrow area of expertise are always output of your SP module, not any judgment of fact. People don’t know the facts. And even when they know; I mean most people have enough experience with sex differences and black dysfunction to be quite confident that progressive ideas are false. But you can never be sure. As Hume said, the laws of physics are a judgment of habit; who is to say that a genie isn’t going to change all you know the next morning? At any rate, you’re always better off toeing the line, following the conventional wisdom, and keeping your dear SP. Perhaps you can even raise them a bit. And that is very nice. It is niceness itself.

Leftism is just an easy excuse: https://bloodyshovel.wordpress.com/2015/03/01/leftism-is-just-an-easy-excuse/
Unless you’re not the only defector. You need a way to signal your intention to defect, so that other disloyal fucks such as yourself (and they’re bound to be others) can join up, thus reducing the likely costs of defection. The way to signal your intention to defect is to come up with a good excuse. A good excuse to be disloyal becomes a rallying point through which other defectors can coordinate and cover their asses so that the ruling coalition doesn’t punish them. What is a good excuse?

Leftism is a great excuse. Claiming that the ruling coalition isn’t leftist enough, isn’t holy enough, not inclusive enough of women, of blacks, of gays, or gorillas, of pedophiles, of murderous Salafists, is the perfect way of signalling your disloyalty towards the existing power coalition. By using the existing ideology and pushing its logic just a little bit, you ensure that the powerful can’t punish you. At least not openly. And if you’re lucky, the mass of disloyal fucks in the ruling coalition might join your banner, and use your exact leftist point to jump ship and outflank the powerful.

...

The same dynamic fuels the flattery inflation one sees in monarchical or dictatorial systems. In Mao China, if you want to defect, you claim to love Mao more than your boss. In Nazi Germany, you proclaim your love for Hitler and the great insight of his plan to take Stalingrad. In the Roman Empire, you claimed that Caesar is a God, son of Hercules, and those who deny it are treacherous bastards. In Ancient Persia you loudly proclaimed your faith in the Shah being the brother of the Sun and the Moon and King of all Kings on Earth. In Reformation Europe you proclaimed that you have discovered something new in the Bible and everybody else is damned to hell. Predestined by God!

...

And again: the precise content of the ideological point doesn’t matter. Your human brain doesn’t care about ideology. Humans didn’t evolve to care about Marxist theory of class struggle, or about LGBTQWERTY theories of social identity. You just don’t know what it means. It’s all abstract points you’ve been told in a classroom. It doesn’t actually compute. Nothing that anybody ever said in a political debate ever made any actual, concrete sense to a human being.

So why do we care so much about politics? What’s the point of ideology? Ideology is just the water you swim in. It is a structured database of excuses, to be used to signal your allegiance or defection to the existing ruling coalition. Ideology is just the feed of the rationalization Hamster that runs incessantly in that corner of your brain. But it is immaterial, and in most cases actually inaccessible to the logical modules in your brain.

Nobody ever acts on their overt ideological claims if they can get away with it. Liberals proclaim their faith in the potential of black children while clustering in all white suburbs. Communist party members loudly talk about the proletariat while being hedonistic spenders. Al Gore talks about Global Warming while living in a lavish mansion. Cognitive dissonance, you say? No; those cognitive systems are not connected in the first place.

...

And so, every little step in the way, power-seekers moved the consensus to the left. And open societies, democratic systems are by their decentralized nature, and by the size of their constituencies, much more vulnerable to this sort of signalling attacks. It is but impossible to appraise and enforce the loyalty of every single individual involved in a modern state. There’s too many of them. A Medieval King had a better chance of it; hence the slow movement of ideological innovation in those days. But the bigger the organization, the harder it is to gather accurate information of the loyalty of the whole coalition; and hence the ideological movement accelerates. And there is no stopping it.

Like the Ancients, We Have Gods. They’ll Get Greater: http://www.overcomingbias.com/2018/04/like-the-ancients-we-have-gods-they-may-get… [more]
gnon  commentary  critique  politics  polisci  strategy  tactics  thinking  GT-101  game-theory  cooperate-defect  hypocrisy  institutions  incentives  anthropology  morality  ethics  formal-values  ideology  schelling  equilibrium  multi  links  debate  ethnocentrism  cultural-dynamics  decision-making  socs-and-mops  anomie  power  info-dynamics  propaganda  signaling  axelrod  organizing  impetus  democracy  antidemos  duty  coalitions  kinship  religion  christianity  theos  n-factor  trust  altruism  noble-lie  japan  asia  cohesion  reason  scitariat  status  fashun  history  mostly-modern  world-war  west-hunter  sulla  unintended-consequences  iron-age  china  sinosphere  stories  leviathan  criminal-justice  peace-violence  nihil  wiki  authoritarianism  egalitarianism-hierarchy  cocktail  ssc  parable  open-closed  death  absolute-relative  justice  management  explanans  the-great-west-whale  occident  orient  courage  vitality  domestication  revolution  europe  pop-diff  alien-character  diversity  identity-politics  westminster  kumbaya-kult  cultu 
june 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
Discovering Limits to Growth | Do the Math
https://en.wikipedia.org/wiki/The_Limits_to_Growth
http://www.unz.com/akarlin/review-limits-to-growth-meadows/
https://foundational-research.org/the-future-of-growth-near-zero-growth-rates/
One may of course be skeptical that this general trend will also apply to the growth of our technology and economy at large, as innovation seems to continually postpone our clash with the ceiling, yet it seems inescapable that it must. For in light of what we know about physics, we can conclude that exponential growth of the kinds we see today, in technology in particular and in our economy more generally, must come to an end, and do so relatively soon.
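That claim is easy to rerun numerically (a sketch; the ~2.3%/yr growth rate, ~18 TW current consumption, and solar figures are rounded stock values assumed here, not taken from the linked post):

import math

growth = 0.023            # ~2.3%/yr energy growth, roughly 10x per century
current = 18e12           # W, rough current world power consumption
solar_on_earth = 1.7e17   # W, total sunlight intercepted by Earth
sun_total = 3.8e26        # W, total solar luminosity

def years_until(target, start=current, rate=growth):
    # solve start * (1 + rate)**t = target for t
    return math.log(target / start) / math.log(1.0 + rate)

print(f"all sunlight hitting Earth: ~{years_until(solar_on_earth):.0f} years")   # ~400
print(f"the entire Sun's output:    ~{years_until(sun_total):.0f} years")        # ~1350

A few centuries of business-as-usual exponential growth runs into hard astrophysical ceilings, which is the physical core of the argument quoted above.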
scitariat  prediction  hmm  economics  growth-econ  biophysical-econ  world  energy-resources  the-world-is-just-atoms  books  summary  quotes  commentary  malthus  models  dynamical  🐸  mena4  demographics  environment  org:bleg  nibble  regularizer  science-anxiety  deep-materialism  nihil  the-bones  whiggish-hegelian  multi  tetlock  trends  wiki  macro  pessimism  eh  primitivism  new-religion  cynicism-idealism  gnon  review  recommendations  long-short-run  futurism  ratty  miri-cfar  effective-altruism  hanson  econ-metrics  ems  magnitude  street-fighting  nitty-gritty  physics  data  phys-energy  🔬  multiplicative  iteration-recursion 
march 2017 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

https://www.jetpress.org/volume7/simulation.htm
In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

https://www.gwern.net/Simulation-inferences
lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday

