nhaliday + existence   43

An adaptability limit to climate change due to heat stress
Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.

Trajectories of the Earth System in the Anthropocene: http://www.pnas.org/content/early/2018/07/31/1810141115
We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be.
study  org:nat  environment  climate-change  humanity  existence  risk  futurism  estimate  physics  thermo  prediction  temperature  nature  walls  civilization  flexibility  rigidity  embodied  multi  manifolds  plots  equilibrium  phase-transition  oscillation  comparison  complex-systems  earth 
august 2018 by nhaliday
Dividuals – The soul is not an indivisible unit and has no unified will
Towards A More Mature Atheism: https://dividuals.wordpress.com/2015/09/17/towards-a-more-mature-atheism/
Human intelligence evolved as a social intelligence, for the purposes of social cooperation, social competition and social domination. It evolved to make us efficient at cooperating at removing obstacles, especially the kinds of obstacles that tend to fight back, i.e. at warfare. If you ever studied strategy or tactics, or just played really good board games, you have probably found your brain seems to be strangely well suited for specifically this kind of intellectual activity. It’s not necessarily easier than studying physics, and yet it somehow feels more natural. Physics is like swimming, strategy and tactics is like running. The reason for that is that our brains are truly evolved to be strategic, tactical, diplomatic computers, not physics computers. The question our brains are REALLY good at finding the answer for is “Just what does this guy really want?”

...

Thus, a very basic failure mode of the human brain is to overdetect agency.

I think this is partially what SSC wrote about in Mysticism And Pattern-Matching too. But instead of mystical experiences, my focus is on our brains claiming to detect agency where there is none. Thus my view is closer to Richard Carrier’s definition of the supernatural: it is the idea that some mental things cannot be reduced to nonmental things.

...

Meaning actually means will and agency. It took me a while to figure that one out. When we look for the meaning of life, a meaning in life, or a meaningful life, we look for a will or agency generally outside our own.

...

I am a double oddball – kind of autistic, but still far more interested in human social dynamics, such as history, than in natural sciences or technology. As a result, I do feel a calling to religion – the human world, as opposed to outer space, the human city, the human history, is such a perfect fit for a view like that of Catholicism! The reason for that is that Catholicism is the pinnacle of human intellectual efforts dealing with human agency. Ideas like Augustine’s three failure modes of the human brain: greed, lust and desire for power and status, are just about the closest to forming correct psychological theories far earlier than the scientific method was discovered. Just read your Chesterbelloc and Lewis. And of course because the agency radars of Catholics run at full burst, they overdetect it and thus believe in a god behind the universe. My brain, due to my deep interest in human agency and its consequences, also would like to be religious: wouldn’t it be great if the universe was made by something we could talk to, like, everything else that I am interested in, from field generals to municipal governments are entities I could talk to?

...

I also dislike that atheists often refuse to propose a falsifiable theory because they claim the burden of proof is not on them. Strictly speaking it can be true, but it is still good form to provide one.

Since I am something like an “nontheistic Catholic” anyway (e.g. I believe in original sin from the practical, political angle, I just think it has natural, not supernatural causes: evolution, the move from hunting-gathering to agriculture etc.), all one would need to do to make me fully so is to plug a God concept in my mind.

If you can convince me that my brain is not actually overdetecting agency when I feel a calling to religion, if you can convince me that my brain and most human brains detect agency just about right, there will be no reason for me to not believe in God. Because if there were any sort of agency behind the universe, the smartest bet would be that this agency would be the God of Thomas Aquinas’ Summa. That guy was plain simply a genius.

How to convince me my brain is not overdetecting agency? The simplest way is to convince me that magic, witchcraft, or superstition in general is real, and real in the supernatural sense (I do know Wiccans who cast spells and claim they are natural, not supernatural: divination spells make the brain more aware of hidden details, healing spells recruit the healing processes of the body etc.) You see, Catholics generally do believe in magic and witchcraft, as in: “These really do something, and they do something bad, so never practice them.”

The Strange Places the “God of the Gaps” Takes You: https://dividuals.wordpress.com/2018/05/25/the-strange-places-the-god-of-the-gaps-takes-you/
I assume people are familiar with the God of the Gaps argument. Well, it is usually just an accusation, but Newton for instance really pulled one.

But natural science is inherently different from the humanities, because in natural science you build a predictive model of which you are not part. You are just a point-like, neutral observer.

You cannot do that with other human minds because you just don’t have the computing power to simulate a roughly similarly intelligent mind and have enough left over to actually work with your model. So you put yourself into the predictive model; you make yourself a part of the model itself. You use a certain empathic kind of understanding, a “what would I do in that guy’s shoes?”, and generate your predictions that way.

...

Which means that while natural science is relatively new, and strongly correlates with technological progress, this empathic, self-programming model of the humanities could be practiced millennia ago as well; you don’t need math or tools for it, and you probably cannot expect anything like straight-line progress. Maybe some wisdoms people figure out this way are really timeless and we just keep on rediscovering them.

So imagine, say, Catholicism as a large set of humanities. Sociology, social psychology, moral philosophy in the pragmatic, scientific sense (“What morality makes a society not collapse and actually prosper?”), life wisdom and all that. Basically just figuring out how people tick, how societies tick and how to make them tick well.

...

What do? Well, the obvious move is to pull a Newton and inject a God of the Gaps into your humanities. We tick like that because God. We must do so and so to tick well because God.

...

What I am saying is that we are at some point probably going to prove pretty much all of the this-worldly, pragmatic (moral, sociological, psychological etc.) aspects of Catholicism correct by something like evolutionary psychology.

And I am saying that while it will dramatically increase our respect for religion, this will also be probably a huge blow to theism. I don’t want that to happen, but I think it will. Because eliminating God from the gaps of natural science does not hurt faith much. But eliminating God from the gaps of the humanities and yes, religion itself?

My Kind of Atheist: http://www.overcomingbias.com/2018/08/my-kind-of-athiest.html
I think I’ve mentioned somewhere in public that I’m now an atheist, even though I grew up in a very Christian family, and I even joined a “cult” at a young age (against disapproving parents). The proximate cause of my atheism was learning physics in college. But I don’t think I’ve ever clarified in public what kind of an “atheist” or “agnostic” I am. So here goes.

The universe is vast and most of it is very far away in space and time, making our knowledge of those distant parts very thin. So it isn’t at all crazy to think that very powerful beings exist somewhere far away out there, or far before us or after us in time. In fact, many of us hope that we now can give rise to such powerful beings in the distant future. If those powerful beings count as “gods”, then I’m certainly open to the idea that such gods exist somewhere in space-time.

It also isn’t crazy to imagine powerful beings that are “closer” in space and time, but far away in causal connection. They could be in parallel “planes”, in other dimensions, or in “dark” matter that doesn’t interact much with our matter. Or they might perhaps have little interest in influencing or interacting with our sort of things. Or they might just “like to watch.”

But to most religious people, a key emotional appeal of religion is the idea that gods often “answer” prayer by intervening in their world. Sometimes intervening in their head to make them feel different, but also sometimes responding to prayers about their test tomorrow, their friend’s marriage, or their aunt’s hemorrhoids. It is these sort of prayer-answering “gods” in which I just can’t believe. Not that I’m absolutely sure they don’t exist, but I’m sure enough that the term “atheist” fits much better than the term “agnostic.”

These sort of gods supposedly intervene in our world millions of times daily to respond positively to particular prayers, and yet they do not noticeably intervene in world affairs. Not only can we find no physical trace of any machinery or system by which such gods exert their influence, even though we understand the physics of our local world very well, but the history of life and civilization shows no obvious traces of their influence. They know of terrible things that go wrong in our world, but instead of doing much about those things, these gods instead prioritize not leaving any clear evidence of their existence or influence. And yet for some reason they don’t mind people believing in them enough to pray to them, as they often reward such prayers with favorable interventions.
gnon  blog  stream  politics  polisci  ideology  institutions  thinking  religion  christianity  protestant-catholic  history  medieval  individualism-collectivism  n-factor  left-wing  right-wing  tribalism  us-them  cohesion  sociality  ecology  philosophy  buddhism  gavisti  europe  the-great-west-whale  occident  germanic  theos  culture  society  cultural-dynamics  anthropology  volo-avolo  meaningness  coalitions  theory-of-mind  coordination  organizing  psychology  social-psych  fashun  status  nationalism-globalism  models  power  evopsych  EEA  deep-materialism  new-religion  metameta  social-science  sociology  multi  definition  intelligence  science  comparison  letters  social-structure  existence  nihil  ratty  hanson  intricacy  reflection  people  physics  paganism 
june 2018 by nhaliday
Contingent, Not Arbitrary | Truth is contingent on what is, not on what we wish to be true.
A vital attribute of a value system of any kind is that it works. I consider this a necessary (but not sufficient) condition for goodness. A value system, when followed, should contribute to human flourishing and not produce results that violate its core ideals. This is a pragmatic, I-know-it-when-I-see-it definition. I may refine it further if the need arises.

I think that the prevailing Western values fail by this standard. I will not spend much time arguing this; many others have already. If you reject this premise, this blog may not be for you.

I consider old traditions an important source of wisdom: they have proven their worth over centuries of use. Where they agree, we should listen. Where they disagree, we should figure out why. Where modernity departs from tradition, we should be wary of the new.

Tradition has one nagging problem: it was abandoned by the West. How and why did that happen? I consider this a central question. I expect the reasons to be varied and complex. Understanding them seems necessary if we are to fix what may have been broken.

In short, I want to answer these questions:

1. How do values spread and persist? An ideology does no good if no one holds it.
2. Which values do good? Sounding good is worse than useless if it leads to ruin.

The ultimate hope would be to find a way to combine the two. Many have tried and failed. I don’t expect to succeed either, but I hope I’ll manage to clarify the questions.

Christianity Is The Schelling Point: https://contingentnotarbitrary.com/2018/02/22/christianity-is-the-schelling-point/
Restoring true Christianity is both necessary and sufficient for restoring civilization. The task is neither easy nor simple but that’s what it takes. It is also our best chance of weathering the collapse if that’s too late to avoid.

Christianity is the ultimate coordination mechanism: it unites us with a higher purpose, aligns us with the laws of reality and works on all scales, from individuals to entire civilizations. Christendom took over the world and then lost it when its faith faltered. Historically and culturally, Christianity is the unique Schelling point for the West – or it would be if we could agree on which church (if any) was the true one.

Here are my arguments for true Christianity as the Schelling point. I hope to demonstrate these points in subsequent posts; for now I’ll just list them.

- A society of saints is the most powerful human arrangement possible. It is united in purpose, ideologically stable and operates in harmony with natural law. This is true independent of scale and organization: from military hierarchy to total decentralization, from persecuted minority to total hegemony. Even democracy works among saints – that’s why it took so long to fail.
- There is such a thing as true Christianity. I don’t know how to pinpoint it but it does exist; that holds from both secular and religious perspectives. Our task is to converge on it the best we can.
- Don’t worry too much about the existence of God. I’m proof that you don’t need that assumption in order to believe – it helps but isn’t mandatory.

Pascal’s Wager never sat right with me. Now I know why: it’s a sucker bet. Let’s update it.

If God exists, we must believe because our souls and civilization depend on it. If He doesn’t exist, we must believe because civilization depends on it.

Morality Should Be Adaptive: http://www.overcomingbias.com/2012/04/morals-should-be-adaptive.html
I agree with this
gnon  todo  blog  stream  religion  christianity  theos  morality  ethics  formal-values  philosophy  truth  is-ought  coordination  cooperate-defect  alignment  tribalism  cohesion  nascent-state  counter-revolution  epistemic  civilization  rot  fertility  intervention  europe  the-great-west-whale  occident  telos-atelos  multi  ratty  hanson  big-picture  society  culture  evolution  competition  🤖  rationality  rhetoric  contrarianism  values  water  embedded-cognition  ideology  deep-materialism  moloch  new-religion  patho-altruism  darwinian  existence  good-evil  memetics  direct-indirect  endogenous-exogenous  tradition  anthropology  cultural-dynamics  farmers-and-foragers  egalitarianism-hierarchy  organizing  institutions  protestant-catholic  enlightenment-renaissance-restoration-reformation  realness  science  empirical  modernity  revolution  inference  parallax  axioms  pragmatic  zeitgeist  schelling  prioritizing  ends-means  degrees-of-freedom  logic  reason  interdisciplinary  exegesis-hermeneutics  o 
april 2018 by nhaliday
Surveil things, not people – The sideways view
Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.

Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.

...

The idea
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).

Heavy technology is subject to two rules:

1. You can’t use heavy technology in a way that is unacceptably destructive.
2. You can’t use heavy technology to undermine the machinery that enforces these two rules.

To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.

...

This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.

This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
ratty  acmtariat  clever-rats  risk  existence  futurism  technology  policy  alt-inst  proposal  government  intel  authoritarianism  orwellian  tricks  leviathan  security  civilization  ai  ai-control  arms  defense  cybernetics  institutions  law  unintended-consequences  civil-liberty  volo-avolo  power  constraint-satisfaction  alignment 
april 2018 by nhaliday
High male sexual investment as a driver of extinction in fossil ostracods | Nature
Sexual selection favours traits that confer advantages in the competition for mates. In many cases, such traits are costly to produce and maintain, because the costs help to enforce the honesty of these signals and cues [1]. Some evolutionary models predict that sexual selection also produces costs at the population level, which could limit the ability of populations to adapt to changing conditions and thus increase the risk of extinction [2,3,4].
study  org:nat  bio  evolution  selection  sex  competition  cost-benefit  unintended-consequences  signaling  existence  gender  gender-diff  empirical  branches  rot  modernity  fertility  intervention  explanans  humility  status  matching  ranking  ratty  hanson 
april 2018 by nhaliday
Harnessing Evolution - with Bret Weinstein | Virtual Futures Salon - YouTube
- ways to get out of Malthusian conditions: expansion to new frontiers, new technology, redistribution/theft
- some discussion of existential risk
- wants to change humanity's "purpose" to one that would be safe in the long run; important thing is it has to be ESS (maybe he wants a singleton?)
- not too impressed by transhumanism (wouldn't identify with a brain emulation)
video  interview  thiel  expert-experience  evolution  deep-materialism  new-religion  sapiens  cultural-dynamics  anthropology  evopsych  sociality  ecology  flexibility  biodet  behavioral-gen  self-interest  interests  moloch  arms  competition  coordination  cooperate-defect  frontier  expansionism  technology  efficiency  thinking  redistribution  open-closed  zero-positive-sum  peace-violence  war  dominant-minority  hypocrisy  dignity  sanctity-degradation  futurism  environment  climate-change  time-preference  long-short-run  population  scale  earth  hidden-motives  game-theory  GT-101  free-riding  innovation  leviathan  malthus  network-structure  risk  existence  civil-liberty  authoritarianism  tribalism  us-them  identity-politics  externalities  unintended-consequences  internet  social  media  pessimism  universalism-particularism  energy-resources  biophysical-econ  politics  coalitions  incentives  attention  epistemic  biases  blowhards  teaching  education  emotion  impetus  comedy  expression-survival  economics  farmers-and-foragers  ca 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI. Even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it, and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
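
A toy simulation (mine, not Bostrom's; the fish-length distribution and the 3-inch net cutoff are illustrative assumptions) of the plain selection effect he starts from: sampling with an instrument that can only catch small fish makes the biggest fish you see a badly biased estimate of the biggest fish there is.

```python
import random

random.seed(0)

# Hypothetical pond: 1,000 fish with lengths (inches) from an arbitrary distribution.
pond = [random.lognormvariate(1.0, 0.6) for _ in range(1000)]

# A net that can only catch fish up to 3 inches long (the selection effect).
catchable = [f for f in pond if f <= 3.0]
catch = random.sample(catchable, min(100, len(catchable)))

print(f"true largest fish in pond:      {max(pond):.1f} in")
print(f"largest fish in a 100-fish net: {max(catch):.1f} in")
# The net sample suggests "nothing much over 3 inches", purely because of how we sampled.
```

The observation-selection version swaps the net for the requirement that an observer exists at all, but the logic of the bias is the same.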
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
Centers of gravity in non-uniform fields - Wikipedia
In physics, a center of gravity of a material body is a point that may be used for a summary description of gravitational interactions. In a uniform gravitational field, the center of mass serves as the center of gravity. This is a very good approximation for smaller bodies near the surface of Earth, so there is no practical need to distinguish "center of gravity" from "center of mass" in most applications, such as engineering and medicine.

In a non-uniform field, gravitational effects such as potential energy, force, and torque can no longer be calculated using the center of mass alone. In particular, a non-uniform gravitational field can produce a torque on an object, even about an axis through the center of mass. The center of gravity seeks to explain this effect. Formally, a center of gravity is an application point of the resultant gravitational force on the body. Such a point may not exist, and if it exists, it is not unique. One can further define a unique center of gravity by approximating the field as either parallel or spherically symmetric.

The concept of a center of gravity as distinct from the center of mass is rarely used in applications, even in celestial mechanics, where non-uniform fields are important. Since the center of gravity depends on the external field, its motion is harder to determine than the motion of the center of mass. The common method to deal with gravitational torques is a field theory.
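
A worked statement of the definition in the last paragraph (standard rigid-body mechanics, written out here for concreteness rather than copied from the article): the center of gravity is any application point \(\mathbf{r}_{cg}\) at which the single resultant force reproduces the body's total gravitational torque,

\[
\mathbf{F} = \int_B \mathbf{g}(\mathbf{r})\,\rho(\mathbf{r})\,dV,
\qquad
\mathbf{r}_{cg} \times \mathbf{F} = \int_B \mathbf{r} \times \mathbf{g}(\mathbf{r})\,\rho(\mathbf{r})\,dV .
\]

If \(\mathbf{g}\) is uniform, the right-hand side factors as \(\bigl(\int_B \mathbf{r}\,\rho\,dV\bigr) \times \mathbf{g} = M\,\mathbf{r}_{cm} \times \mathbf{g}\), so the center of mass serves as a center of gravity; in a non-uniform field the torque equation may have no solution or many, matching the "may not exist, and if it exists, it is not unique" caveat.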
nibble  wiki  reference  physics  mechanics  intricacy  atoms  expectancy  spatial  direction  ground-up  concept  existence  uniqueness  homo-hetero  gravity  gotchas 
september 2017 by nhaliday
Subgradients - S. Boyd and L. Vandenberghe
If f is convex and x ∈ int dom f, then ∂f(x) is nonempty and bounded. To establish that ∂f(x) ≠ ∅, we apply the supporting hyperplane theorem to the convex set epi f at the boundary point (x, f(x)), ...
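
A standard worked example for the excerpt above (not from the Boyd–Vandenberghe notes themselves, just the usual illustration): the absolute value at its kink. For \(f(x) = |x|\) on \(\mathbf{R}\),

\[
\partial f(x) =
\begin{cases}
\{-1\} & x < 0,\\
[-1,\,1] & x = 0,\\
\{+1\} & x > 0,
\end{cases}
\]

since \(g \in \partial f(0)\) means \(|y| \ge g\,y\) for all \(y\), i.e. \(-1 \le g \le 1\); each such \(g\) gives a supporting line to \(\operatorname{epi} f\) at \((0, 0)\), consistent with the nonemptiness claim at interior points of \(\operatorname{dom} f\).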
pdf  nibble  lecture-notes  acm  optimization  curvature  math.CA  estimate  linearity  differential  existence  proofs  exposition  atoms  math  marginal  convexity-curvature 
august 2017 by nhaliday
Lecture 6: Nash Equilibrum Existence
pf:
- For mixed strategy profile p ∈ Δ(A), let g_ij(p) = gain for player i to switch to pure strategy j.
- Define y: Δ(A) -> Δ(A) by y_ij(p) ∝ p_ij + g_ij(p) (normalizing constant = 1 + ∑_k g_ik(p)).
- Look at fixed point of y.
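
The same map written out (my transcription of the standard Nash/Brouwer argument, with the gain taken as a positive part; the notes may use slightly different notation):

\[
g_{ij}(p) = \max\bigl(0,\; u_i(j,\, p_{-i}) - u_i(p)\bigr),
\qquad
y_{ij}(p) = \frac{p_{ij} + g_{ij}(p)}{1 + \sum_k g_{ik}(p)} .
\]

\(y\) is a continuous map from the compact convex product of simplices to itself, so Brouwer's theorem gives a fixed point \(p^*\). At a fixed point every \(g_{ij}(p^*) = 0\) (otherwise the denominator exceeds 1, and some pure strategy in the support of \(p^*_i\) does no better than average, so its probability would strictly shrink, contradicting fixedness), hence no player gains by deviating to any pure strategy and \(p^*\) is a Nash equilibrium.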
pdf  nibble  lecture-notes  exposition  acm  game-theory  proofs  math  topology  existence  fixed-point  simplex  equilibrium  ground-up 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is the gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
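
A minimal Monte Carlo sketch of the point being made (my own illustration, not the Sandberg–Drexler–Ord code; the log-uniform ranges below are placeholder assumptions): multiplying point estimates through the Drake equation can suggest a crowded galaxy, while propagating wide distributions through the same product leaves a large probability that N < 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Classic optimistic point estimates (roughly textbook Drake-equation numbers).
classic = [10, 0.5, 2, 1.0, 0.1, 0.1, 1e4]   # R*, fp, ne, fl, fi, fc, L
print(f"N from point estimates: {np.prod(classic):,.0f}")

def loguniform(lo, hi):
    """Log-uniform samples between lo and hi (illustrative priors, not the paper's)."""
    return np.exp(rng.uniform(np.log(lo), np.log(hi), n))

R  = loguniform(1, 100)       # star formation rate (stars / year)
fp = loguniform(0.1, 1)       # fraction of stars with planets
ne = loguniform(0.1, 10)      # habitable planets per planetary system
fl = loguniform(1e-30, 1)     # fraction of those developing life (huge uncertainty)
fi = loguniform(1e-3, 1)      # fraction of those developing intelligence
fc = loguniform(1e-2, 1)      # fraction becoming detectable
L  = loguniform(1e2, 1e8)     # years a civilization stays detectable

N = R * fp * ne * fl * fi * fc * L        # detectable civilizations in the galaxy
print(f"mean N:   {N.mean():.3g}")
print(f"median N: {np.median(N):.3g}")
print(f"P(N < 1): {(N < 1).mean():.0%}")  # substantial even though the mean is large
```

Fed point estimates, the product says on the order of a thousand civilizations; fed wide distributions, the single most likely outcome is an empty galaxy. That is the shape of the "dissolving" argument, independent of the particular ranges chosen here.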

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
The Inexorable Progress of Science: Archaeology | West Hunter
I was noting something from Mario Alinei (an advocate of a model in which nobody ever invaded Europe, probably including Omaha Beach). He blames ideology:

“Surprisingly, although the archaeological research of the last few decennia has provided more and more evidence that no large-scale invasion took place in Europe in the Chalcolithic, Indoeuropean linguistics has stubbornly held to its strong invasionist assumption, and has continued to produce more and more variations on the old theme.

Clearly, the answer is ideological. For the invasion model was first advanced in the nineteenth century, when archaeology and related sciences were dominated by the ideology of colonialism, as recent historical research has shown. The successive generations of linguists and archaeologists have been strongly inspired by the racist views that stemmed out of colonialism. Historians of archaeology (e.g. Daniel 1962, Trigger 1989) have repeatedly shown the importance of ideology in shaping archaeological theories as well as theories of human origins, while, unfortunately, linguistics has not followed the same course, and thus strongly believes in its own innocence.”

You know, he may have a point.

With a very limited set of clues, smart guys managed to get key facts about European prehistory roughly correct almost 90 years ago. With tremendously better tools, better methods, vastly more money, more data, etc., archaeologists (most of them) drifted farther and farther from the truth.

https://westhunt.wordpress.com/2013/12/02/ancestral-journeys/
It is a refreshing antidote to previous accounts based on the pots-not-people fad that originated back in the 1960s, like so many other bad things. Once upon a time, when the world was young, archaeologists would find a significant transition in artifact types, see a simultaneous change in skeletons, and deduce that new tenants had arrived, for example with advent of the Bell Beaker culture. This became unfashionable: archaeologists were taught to think that invasions and Völkerwanderungs were never the explanation, even though history records many events of this kind. I suppose the work Franz Boas published back in 1912, falsely claiming that environment controlled skull shape rather than genetics, had something to do with it. And surely some archaeologists went overboard with migration, suggesting that New Coke cans were a sign of barbarian takeover. The usual explanation though, is that archaeologists began to find the idea of prehistoric population replacement [of course you know that means war – war means fighting, and fighting means killing] distasteful and concluded that therefore it must not have happened. Which meant that they were total loons, but that seems to happen a lot.

...

I mean, when the first farmers were settling Britain, about 4000 BC, they built ditched and palisaded enclosures. Some of these camps are littered with human bones – so, naturally, Brian Fagan, in a popular prehistory textbook, suggests that “perhaps these camps were places where the dead were exposed for months before their bones were deposited in nearby communal burials.” (!) We also find thousands of flint arrowheads in extensive investigations of some of these enclosures, concentrated along the palisade and especially at the gates. Sounds a lot like Fort Apache, to me.

more in interview here: https://pinboard.in/u:nhaliday/b:9ab84243b967

interesting comment about archaeological traces of wars:
https://twitter.com/gcochran99/status/1106295127167778816
https://archive.is/3EsG8
Most wars known to have happened in historical times haven't left much of an archaeological record.

The same archaeologists were, a few years ago, sure that migrations and population replacements didn't play a big role in European prehistory.

possibly relevant for historicity of Exodus/OT?
west-hunter  discussion  rant  gnon  history  anthropology  antiquity  social-science  error  epistemic  sapiens  europe  gavisti  culture-war  quotes  westminster  migration  agriculture  language  bounded-cognition  mostly-modern  ideology  crooked  clown-world  realness  being-right  scitariat  info-dynamics  track-record  zeitgeist  truth  archaeology  kumbaya-kult  peace-violence  multi  books  review  summary  recommendations  stories  death  war  is-ought  conquest-empire  academia  the-trenches  alt-inst  risk  fashun  cold-war  rot  aphorism  traces  twitter  social  commentary  backup  existence  nihil  comparison  linguistics 
february 2017 by nhaliday
Existence of the moment generating function and variance - Cross Validated
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).

In the answer below, we do the following:
1. Show that if the mgf is finite for at least one (strictly) positive value and one negative value, then all positive moments of X are finite (including nonintegral moments).
2. Prove that the condition in the first item above is equivalent to the distribution of X having exponentially bounded tails. In other words, the tails of X fall off at least as fast as those of an exponential random variable Z (up to a constant); a numerical sketch of this bound follows the list.
3. Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.
4. Explore some examples and counterexamples to aid our intuition and, particularly, to show that we should not read undue importance into the lack of finiteness of the mgf.
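A minimal numerical sketch of item 2 (my own illustration under stated assumptions, not code from the linked answer): when M(t) = E[exp(tX)] is finite for some t > 0, Markov's inequality applied to exp(tX) gives the exponential tail bound P(X >= x) <= M(t) * exp(-t*x). The check below uses X ~ Exponential(1), whose mgf M(t) = 1/(1 - t) is finite for 0 < t < 1.

# Sketch only: compare the empirical tail of X ~ Exp(1) against the
# Chernoff-style bound M(t) * exp(-t*x) implied by a finite mgf.
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=1_000_000)

t = 0.5                   # any 0 < t < 1 keeps M(t) finite
M_t = 1.0 / (1.0 - t)     # closed-form mgf of Exp(1) at t

for x in (2.0, 5.0, 10.0):
    empirical = (X >= x).mean()
    bound = M_t * np.exp(-t * x)
    print(f"x={x:4.1f}  P(X>=x) ~ {empirical:.2e}  bound = {bound:.2e}")
# The empirical tail sits below the bound at every x, i.e. the tails fall off
# at least exponentially fast -- the content of item 2.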
q-n-a  overflow  math  stats  acm  probability  characterization  concept  moments  distribution  examples  counterexample  tails  rigidity  nibble  existence  s:null  convergence  series 
january 2017 by nhaliday
pr.probability - When are probability distributions completely determined by their moments? - MathOverflow
Roughly speaking, if the sequence of moments doesn't grow too quickly, then the distribution is determined by its moments. One sufficient condition: if the moment generating function of a random variable has a positive radius of convergence, then that random variable is determined by its moments.
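The standard cautionary example, sketched here from memory rather than quoted from the thread: the lognormal has all moments finite but a moment generating function with zero radius of convergence, and it is indeed not determined by its moments.

% Sketch of the classic counterexample (stated from memory, not from the linked answer).
% The standard lognormal density
\[
  f(x) \;=\; \frac{1}{x\sqrt{2\pi}} \exp\!\Big(-\tfrac{(\ln x)^2}{2}\Big), \qquad x > 0,
\]
% has every moment finite, $\mathbb{E}[X^n] = e^{n^2/2}$, yet $\mathbb{E}[e^{tX}] = \infty$
% for all $t > 0$, so the mgf has zero radius of convergence. Heyde's perturbed densities
\[
  f_a(x) \;=\; f(x)\,\bigl[1 + a \sin(2\pi \ln x)\bigr], \qquad |a| \le 1,
\]
% are genuine densities with exactly the same moments for every $a$, so the lognormal
% is not determined by its moments --- consistent with its failing the sufficient
% condition above.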
q-n-a  overflow  math  acm  probability  characterization  tidbits  moments  rigidity  nibble  existence  convergence  series 
january 2017 by nhaliday
Overcoming Bias : In Praise of Low Needs
We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

...

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leaps is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
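A quick sanity check of that arithmetic (my own sketch; the constants are just the rough orders of magnitude quoted above, not Hanson's code):

leap = 10**7            # humanity's growth factor so far
humans_now = 10**10     # current human population, order of magnitude
stars = 10**24          # stars in the observable universe, order of magnitude
atoms = 10**80          # atoms in the observable universe, order of magnitude

# Three leaps (10^21): one in a thousand stars, each holding an Earth's worth of rich humans.
assert (stars // 1000) * humans_now == leap**3 * humans_now

# Ten leaps (10^70): one human-like creature per atom.
assert atoms == leap**10 * humans_now
print("both back-of-the-envelope claims check out")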
hanson  contrarianism  stagnation  trends  values  farmers-and-foragers  essay  rhetoric  new-religion  ratty  spreading  phalanges  malthus  formal-values  flux-stasis  economics  growth-econ  status  fashun  signaling  anthropic  fermi  nihil  death  risk  futurism  hierarchy  ranking  discipline  temperance  threat-modeling  existence  wealth  singularity  smoothness  discrete  scale  magnitude  population  physics  estimate  uncertainty  flexibility  rigidity  capitalism  heavy-industry  the-world-is-just-atoms  nature  corporation  institutions  coarse-fine 
october 2016 by nhaliday
Overcoming Bias : Beware General Visible Prey
So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes, there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to carefully map and track all of them.
hanson  risk  prediction  futurism  speculation  pessimism  war  ratty  space  big-picture  fermi  threat-modeling  equilibrium  slippery-slope  anthropic  chart  deep-materialism  new-religion  ideas  bio  nature  plots  expansionism  malthus  marginal  convexity-curvature  humanity  farmers-and-foragers  diversity  entropy-like  homo-hetero  existence  volo-avolo  technology  frontier  intel  travel  time-preference  communication  civilization  egalitarianism-hierarchy  peace-violence  ecology  cooperate-defect  dimensionality  whole-partial-many  temperance  patience  thinking  long-short-run  prepping  offense-defense 
october 2016 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

https://www.jetpress.org/volume7/simulation.htm
In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

https://www.gwern.net/Simulation-inferences
lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday
