nhaliday + singularity   48

Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 1 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Harnessing Evolution - with Bret Weinstein | Virtual Futures Salon - YouTube
- ways to get out of Malthusian conditions: expansion to new frontiers, new technology, redistribution/theft
- some discussion of existential risk
- wants to change humanity's "purpose" to one that would be safe in the long run; the important thing is that it has to be an ESS, i.e. an evolutionarily stable strategy (maybe he wants a singleton?)
- not too impressed by transhumanism (wouldn't identify with a brain emulation)
video  interview  thiel  expert-experience  evolution  deep-materialism  new-religion  sapiens  cultural-dynamics  anthropology  evopsych  sociality  ecology  flexibility  biodet  behavioral-gen  self-interest  interests  moloch  arms  competition  coordination  cooperate-defect  frontier  expansionism  technology  efficiency  thinking  redistribution  open-closed  zero-positive-sum  peace-violence  war  dominant-minority  hypocrisy  dignity  sanctity-degradation  futurism  environment  climate-change  time-preference  long-short-run  population  scale  earth  hidden-motives  game-theory  GT-101  free-riding  innovation  leviathan  malthus  network-structure  risk  existence  civil-liberty  authoritarianism  tribalism  us-them  identity-politics  externalities  unintended-consequences  internet  social  media  pessimism  universalism-particularism  energy-resources  biophysical-econ  politics  coalitions  incentives  attention  epistemic  biases  blowhards  teaching  education  emotion  impetus  comedy  expression-survival  economics  farmers-and-foragers  ca 
april 2018 by nhaliday
Ultimate fate of the universe - Wikipedia
The fate of the universe is determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below.[8] However, observations are not conclusive, and alternative models are still possible.[9]

Big Freeze or heat death
Main articles: Future of an expanding universe and Heat death of the universe
The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature.[10] This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis.[11] It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation.[12] Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations,[13][14] and the fluctuation theorem.[15][16]

A related scenario is heat death, which states that the universe goes to a state of maximum entropy in which everything is evenly distributed and there are no gradients—which are needed to sustain information processing, one form of which is life. The heat death scenario is compatible with any of the three spatial models, but requires that the universe reach an eventual temperature minimum.[17]
physics  big-picture  world  space  long-short-run  futurism  singularity  wiki  reference  article  nibble  thermo  temperature  entropy-like  order-disorder  death  nihil  bio  complex-systems  cybernetics  increase-decrease  trends  computation  local-global  prediction  time  spatial  spreading  density  distribution  manifolds  geometry  janus 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:


In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
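
A quick way to see the parenthetical claim is by simulation. This is a minimal sketch (parameters are illustrative, not from the post): give each person independently varying module qualities, score each task as the average of a random subset of modules, and check that distinct tasks still correlate positively, which is what a common "g" factor looks like.

    # Sketch: a common "g" can emerge even when module qualities vary
    # independently, because each task's score mixes many shared modules.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_modules, n_tasks, per_task = 2000, 50, 20, 10

    # Independent module variation: no latent general factor built in.
    modules = rng.normal(size=(n_people, n_modules))

    # Each task averages a random subset of modules.
    tasks = np.stack(
        [modules[:, rng.choice(n_modules, per_task, replace=False)].mean(axis=1)
         for _ in range(n_tasks)],
        axis=1)

    corr = np.corrcoef(tasks, rowvar=False)
    off_diag = corr[~np.eye(n_tasks, dtype=bool)]
    print(f"mean correlation between distinct tasks: {off_diag.mean():.2f}")  # > 0

Tasks correlate because any two random subsets of modules overlap on average, so the positive manifold appears without any built-in general factor.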

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.


In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?


In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.


Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it, and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue, but he also feels AGI is very distant and is hence less worried about it than Musk.

I recommend the rest of the lecture as well; it's a good summary of "Zero to One", with a good Q&A afterwards.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "thing you know is true that not everyone agrees on?"
"learning from failure overrated"
cleantech a huge market, hard to compete
software makes for easy monopolies (zero marginal costs, network effects, etc.)
for most of history inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Superintelligence Risk Project Update II

For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid-1950s, researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't be. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts
Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

study  preprint  science  meta:science  technology  ai  automation  labor  ai-control  risk  futurism  poll  expert  usa  asia  trends  hmm  idk  definite-planning  frontier  ideas  prediction  innovation  china  sinosphere  multi  reddit  social  commentary  ssc  speedometer  flux-stasis  ratty  expert-experience  org:mat  singularity  optimism  pessimism  the-bones 
may 2017 by nhaliday
Barrier function - Wikipedia
In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the feasible region of an optimization problem.[1] Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle.
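
A minimal numerical sketch of the idea (the toy problem is mine, not from the article): minimize x^2 subject to x >= 1 by replacing the constraint with a logarithmic barrier term -mu*log(x - 1) and shrinking mu; the unconstrained minimizers approach the constrained optimum x = 1. This uses scipy.optimize.minimize_scalar.

    # Log-barrier sketch: the term -mu*log(x - 1) -> infinity as x -> 1+,
    # so every unconstrained minimizer stays strictly feasible.
    import math
    from scipy.optimize import minimize_scalar

    def barrier_objective(x, mu):
        if x <= 1:
            return math.inf  # infeasible: infinite penalty
        return x**2 - mu * math.log(x - 1)

    for mu in [1.0, 0.1, 0.01, 0.001]:
        res = minimize_scalar(lambda x: barrier_objective(x, mu),
                              bounds=(1.0 + 1e-9, 10.0), method="bounded")
        print(f"mu = {mu:<6} x* = {res.x:.4f}")  # x* -> 1 as mu -> 0
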
math  acm  concept  optimization  singularity  smoothness  relaxation  wiki  reference  regularization  math.CA  nibble 
february 2017 by nhaliday
measure theory - Continuous function a.e. - Mathematics Stack Exchange
- note: a bounded function on a compact interval is Riemann integrable iff it is continuous a.e. (see Wheeden-Zygmund 5.54)
- equal a.e. to a continuous f, but not continuous a.e.: characteristic function of the rationals (in symbols below)
- continuous a.e., but not equal a.e. to continuous f: step function
- continuous a.e., w/ uncountably many discontinuities: characteristic function of Cantor set
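
The second bullet in symbols, as a sketch (standard notation; λ is Lebesgue measure):

    % Dirichlet function: equal a.e. to the continuous function 0,
    % yet continuous at no point.
    \[
      \chi_{\mathbb{Q}}(x) =
      \begin{cases}
        1, & x \in \mathbb{Q},\\
        0, & x \notin \mathbb{Q},
      \end{cases}
      \qquad
      \lambda(\mathbb{Q}) = 0
      \;\Longrightarrow\;
      \chi_{\mathbb{Q}} = 0 \ \text{a.e.}
    \]

Every interval contains both rationals and irrationals, so \chi_{\mathbb{Q}} is discontinuous at every point even though it agrees with 0 outside a null set.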
q-n-a  overflow  math  math.CA  counterexample  list  measure  smoothness  singularity  nibble  integral 
january 2017 by nhaliday
ca.analysis and odes - Why do functions in complex analysis behave so well? (as opposed to functions in real analysis) - MathOverflow
Well, real-valued analytic functions are just as rigid as their complex-valued counterparts. The true question is why complex smooth (or complex differentiable) functions are automatically complex analytic, whilst real smooth (or real differentiable) functions need not be real analytic.
q-n-a  overflow  math  math.CA  math.CV  synthesis  curiosity  gowers  oly  mathtariat  tcstariat  comparison  rigidity  smoothness  singularity  regularity  nibble 
january 2017 by nhaliday
Cantor function - Wikipedia
- uniformly continuous but not absolutely continuous
- derivative zero almost everywhere but not constant (see the numerical sketch below)
- see also: http://mathoverflow.net/questions/31603/why-do-probabilists-take-random-variables-to-be-borel-and-not-lebesgue-measura/31609#31609 (the exercise mentioned uses c(x)+x for c the Cantor function)
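
A numerical sketch of the Cantor function via its standard digit description (read ternary digits up to the first 1, map 2 to 1, interpret in binary); the function name and cutoff depth here are mine:

    # Cantor function c(x) on [0, 1]: monotone, continuous, c'(x) = 0 a.e.,
    # yet it climbs from c(0) = 0 to c(1) = 1.
    def cantor(x, depth=40):
        if not 0.0 <= x <= 1.0:
            raise ValueError("x must lie in [0, 1]")
        if x == 1.0:
            return 1.0
        total, scale = 0.0, 0.5
        for _ in range(depth):
            x *= 3
            digit = int(x)      # next ternary digit of x: 0, 1, or 2
            x -= digit
            if digit == 1:      # first ternary digit 1: binary expansion ends
                return total + scale
            total += (digit // 2) * scale   # map ternary 2 -> binary 1
            scale /= 2
        return total

    print(cantor(0.0), cantor(1.0), cantor(1/3), cantor(0.5))  # 0.0 1.0 0.5 0.5
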
math  math.CA  counterexample  wiki  reference  multi  math.FA  atoms  measure  smoothness  singularity  nibble 
january 2017 by nhaliday
Overcoming Bias : In Praise of Low Needs
We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.


Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leaps is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
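
The addendum's arithmetic, checked explicitly (a back-of-the-envelope sketch using Hanson's round numbers):

    # Hanson's round numbers: leap factor 10^7, 10^24 stars, 10^10 humans,
    # 10^80 atoms in the observable universe.
    leap, stars, humans, atoms = 10**7, 10**24, 10**10, 10**80

    three_leaps = leap**3                 # 10^21-fold growth
    future_pop = three_leaps * humans     # 10^31 people
    stars_needed = future_pop // humans   # at Earth's current population per star
    print(f"fraction of stars needed: 1/{stars // stars_needed}")  # 1/1000

    ten_leaps = leap**10                  # 10^70-fold growth
    future_pop = ten_leaps * humans       # 10^80 creatures
    print(f"creatures per atom: {future_pop // atoms}")            # 1
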
hanson  contrarianism  stagnation  trends  values  farmers-and-foragers  essay  rhetoric  new-religion  ratty  spreading  phalanges  malthus  formal-values  flux-stasis  economics  growth-econ  status  fashun  signaling  anthropic  fermi  nihil  death  risk  futurism  hierarchy  ranking  discipline  temperance  threat-modeling  existence  wealth  singularity  smoothness  discrete  scale  magnitude  population  physics  estimate  uncertainty  flexibility  rigidity  capitalism  heavy-industry  the-world-is-just-atoms  nature  corporation  institutions  coarse-fine 
october 2016 by nhaliday
