nhaliday + speed   49

The Open Steno Project | Hacker News
https://web.archive.org/web/20170315133208/http://www.danieljosephpetersen.com/posts/programming-and-stenography.html
I think at the end of the day, the Plover guys are trying to solve the wrong problem. Stenography is a dying field. I don’t wish anyone to lose their livelihood, but realistically speaking, the job should not exist once text to speech technology advances far enough. I’m not claiming that the field will be replaced by it, but I also don’t love the idea of people having to learn such an inane and archaic system.
hn  commentary  keyboard  speed  efficiency  writing  language  maker  homepage  project  multi  techtariat  cost-benefit  critique  expert-experience  programming  backup  contrarianism 
6 days ago by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To be rating 1900, the following skills are needed:
- You know and can use major algorithms like these: brute force, DP, DFS, BFS, Dijkstra, binary indexed tree, nCr/nPr, mod inverse, bitmasks, binary search (a minimal sketch of one of these, the binary indexed tree, follows below)
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)
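
A minimal binary indexed tree (Fenwick tree) template, one of the algorithms on that list, might look like the following C++ sketch (illustrative only, not code from the guide):

```cpp
#include <vector>

// Fenwick tree (binary indexed tree): point update and prefix-sum query, both O(log n).
struct Fenwick {
    std::vector<long long> t;
    explicit Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long v) {            // add v at 1-based index i
        for (; i < (int)t.size(); i += i & -i) t[i] += v;
    }
    long long sum(int i) const {              // sum of positions 1..i
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    long long sum(int l, int r) const {       // sum of positions l..r
        return sum(r) - sum(l - 1);
    }
};
```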

If you are not good at fast-coding and fast-debugging, you should solve AtCoder problems. Actually, statistically, many Japanese coders are relatively good at fast-coding while not so good at solving difficult problems. I think that’s because of AtCoder.

I recommend solving problems C and D of AtCoder Beginner Contests. On average, if you can solve problem C of an AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can solve the typical problems but are not good at solving problems above R1500 on Codeforces, you should begin TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not good at Div.1+Div.2 combined or Div.1+Div.2 separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since TopCoder has a lot of problems which use them (and are also light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1 Easy problems of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before you attempt one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ to find this information.

1900-2200:
To be rating 2200, the following skills are needed:
- You know and can use the 10 algorithms stated above (p. 11 of the PDF) plus segment trees, including lazy propagation (a minimal segment tree sketch follows after this list)
- You can solve problems very fast: for example, 5 minutes for R1100, 10 minutes for R1500, 15 minutes for R1800, 40 minutes for R2000
- You have decent skills for mathematical thinking and analyzing problems
- You have a strong mentality: you can keep thinking about a solution for more than an hour, and you don’t give up even if you are below average in Div1 in the middle of a contest
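
A minimal segment tree template, as referenced in the list above, might look like the C++ sketch below. This is a plain (non-lazy) version answering range-minimum queries with point updates; lazy propagation adds pending-update bookkeeping on top of the same structure. Illustrative only, not code from the guide.

```cpp
#include <vector>
#include <algorithm>
#include <climits>

// Iterative segment tree: point update and range-minimum query, both O(log n).
struct SegTree {
    int n;
    std::vector<long long> t;
    explicit SegTree(int n) : n(n), t(2 * n, LLONG_MAX) {}
    void update(int i, long long v) {          // set position i (0-based) to v
        for (t[i += n] = v; i > 1; i >>= 1)
            t[i >> 1] = std::min(t[i], t[i ^ 1]);
    }
    long long query(int l, int r) const {      // min over the half-open range [l, r)
        long long res = LLONG_MAX;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = std::min(res, t[l++]);
            if (r & 1) res = std::min(res, t[--r]);
        }
        return res;
    }
};
```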

This is only my way to practice, but I did many virtual contests when I was rating 2000. In this guide, a virtual contest does not mean “Virtual Participation” on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rating 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/. On this website, you can make virtual contests from problems on many online judges (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...).

If you cannot solve a problem within the virtual contest and were not able to find the solution during the contest, you should read the editorial. Google it (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search “Codeforces Round #556 editorial” on Google). There is one more important thing you need to gain rating on Codeforces. To solve problems fast, you should equip yourself with a coding library (or template code). For example, I think that having segment tree libraries, lazy segment tree libraries, a modint library, an FFT library, a geometry library, etc. is very effective.
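
A modint library of the kind mentioned above might look roughly like this in C++. This is a minimal sketch: the modulus 998244353 is just a common contest prime, not something the guide specifies, and a real template would usually add more operators.

```cpp
#include <cstdint>

// Modular-arithmetic wrapper: every operation keeps its value reduced mod MOD (MOD must be prime for inv()).
template <std::int64_t MOD>
struct ModInt {
    std::int64_t v;
    ModInt(std::int64_t x = 0) : v((x % MOD + MOD) % MOD) {}
    ModInt operator+(ModInt o) const { return ModInt(v + o.v); }
    ModInt operator-(ModInt o) const { return ModInt(v - o.v); }
    ModInt operator*(ModInt o) const { return ModInt(v * o.v); }
    ModInt pow(std::int64_t e) const {                 // binary exponentiation, O(log e)
        ModInt base = *this, res = 1;
        for (; e > 0; e >>= 1, base = base * base)
            if (e & 1) res = res * base;
        return res;
    }
    ModInt inv() const { return pow(MOD - 2); }        // inverse via Fermat's little theorem
    ModInt operator/(ModInt o) const { return *this * o.inv(); }
};
using mint = ModInt<998244353>;
```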

2200 to 2400:
Rating 2200 and rating 2400 are actually very different ...

To be rating 2400, the following skills are needed:
- You should have the skills stated in the previous section (rating 2200)
- You should be able to solve difficult problems which are solved by fewer than 100 people in Div1 contests

...

First, there are a lot of educational problems on AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contests, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between “thinking” and “typical” problems, but sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are too biased toward thinking problems in my opinion, so I don’t recommend them if your goal is to gain rating on Codeforces. (Though if you want to gain a rating above 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because the start was early).

If you cannot solve a problem, I recommend giving up and reading the editorial after the following amounts of time, depending on the problem’s point value (and the roughly corresponding CF rating):
- 600 points (≈ CF R2000): 40 min
- 700 points (≈ CF R2200): 50 min
- 800 points (≈ CF R2400): 60 min
- 900 points (≈ CF R2600): 70 min
- 1000+ points (≈ CF R2800): 80 min

If you solve AtCoder’s educational problems, your competitive programming skills will increase. But there is one more problem: without practical contest skills, your rating won’t increase. So you should do 50+ virtual participations (especially Div.1) on Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to apply the skills you learned on AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems except the too-difficult ones (e.g. fewer than 30 people solved it in the contest) after each virtual contest. I also recommend writing down reflections about strategy, lessons learned, and improvements in a notebook after reading the editorials.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 on Codeforces) for a couple of hours. If you cannot reach the solution after a couple of hours of thinking, I recommend reading the editorial, because you can learn a lot. Solving high-level problems may give you a chance to gain over 100 rating points in a single contest, and it can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After the iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility 
july 2019 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
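Back-of-envelope, those figures say the brain wins on component count by 10^10 / 10^4 = 10^6, while the machine wins on switching speed by 5 ms / 10^-3 ms = 5000; the brain's advantage at the time was parallelism and sheer size, not speed. (My arithmetic on the numbers above, not von Neumann's wording.)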

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained homonin identified from recently discovered fossils in South Africa, appears to have hung around way later that you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticized Nicholas Wade, for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson, for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle Ages – white-collar job specialization and a high degree of endogamy – were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after having pointed out that race as colloquially used is pretty reasonable, there’s no reason pops can’t be different, people that said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, Aryans conquered Europe and India, while we’re tied to the train tracks with scary genetic results coming straight at us. I don’t care: he’s being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he’s supposed to say, or just stop caring. Maybe he already has… I’m pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has traditionally been considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

Meanwhile, work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations, a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers, farmers and herders from the Middle East settled North Africa, Egypt, and northeast Africa, while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions further back in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes oppose him even looking at the question of ancestry (African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossinna couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT (Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea 
march 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density 
march 2018 by nhaliday
Is the speed of light really constant?
So what if the speed of light isn’t the same when moving toward or away from us? Are there any observable consequences? Not to the limits of observation so far. We know, for example, that any one-way speed of light is independent of the motion of the light source to 2 parts in a billion. We know it has no effect on the color of the light emitted to a few parts in 10^20. Aspects such as polarization and interference are also indistinguishable from standard relativity. But that’s not surprising, because you don’t need to assume isotropy for relativity to work. In the 1970s, John Winnie and others showed that all the results of relativity could be modeled with anisotropic light so long as the two-way speed was a constant. The “extra” assumption that the speed of light is a uniform constant doesn’t change the physics, but it does make the mathematics much simpler. Since Einstein’s relativity is the simpler of two equivalent models, it’s the model we use. You could argue that it’s the right one citing Occam’s razor, or you could take Newton’s position that anything untestable isn’t worth arguing over.
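
A quick way to see why only the two-way speed is testable (my gloss using the standard Reichenbach parametrization, not wording from the article): let the outbound and return one-way speeds over a length L be c+ = c/(2e) and c- = c/(2(1-e)) for any synchronization parameter 0 < e < 1. The round-trip time is then L/c+ + L/c- = 2eL/c + 2(1-e)L/c = 2L/c, independent of e, so every clock-convention-free measurement returns the same two-way speed c; Einstein's convention is just the symmetric choice e = 1/2.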

SPECIAL RELATIVITY WITHOUT ONE-WAY VELOCITY ASSUMPTIONS:
https://sci-hub.bz/https://www.jstor.org/stable/186029
https://sci-hub.bz/https://www.jstor.org/stable/186671
nibble  scitariat  org:bleg  physics  relativity  electromag  speed  invariance  absolute-relative  curiosity  philosophy  direction  gedanken  axioms  definition  models  experiment  space  science  measurement  volo-avolo  synchrony  uniqueness  multi  pdf  piracy  study  article 
november 2017 by nhaliday
GPS and Relativity
The nominal GPS configuration consists of a network of 24 satellites in high orbits around the Earth, but up to 30 or so satellites may be on station at any given time. Each satellite in the GPS constellation orbits at an altitude of about 20,000 km from the ground, and has an orbital speed of about 14,000 km/hour (the orbital period is roughly 12 hours - contrary to popular belief, GPS satellites are not in geosynchronous or geostationary orbits). The satellite orbits are distributed so that at least 4 satellites are always visible from any point on the Earth at any given instant (with up to 12 visible at one time). Each satellite carries with it an atomic clock that "ticks" with a nominal accuracy of 1 nanosecond (1 billionth of a second). A GPS receiver in an airplane determines its current position and course by comparing the time signals it receives from the currently visible GPS satellites (usually 6 to 12) and trilaterating on the known positions of each satellite[1]. The precision achieved is remarkable: even a simple hand-held GPS receiver can determine your absolute position on the surface of the Earth to within 5 to 10 meters in only a few seconds. A GPS receiver in a car can give accurate readings of position, speed, and course in real-time!

More sophisticated techniques, like Differential GPS (DGPS) and Real-Time Kinematic (RTK) methods, deliver centimeter-level positions with a few minutes of measurement. Such methods allow use of GPS and related satellite navigation system data to be used for high-precision surveying, autonomous driving, and other applications requiring greater real-time position accuracy than can be achieved with standard GPS receivers.

To achieve this level of precision, the clock ticks from the GPS satellites must be known to an accuracy of 20-30 nanoseconds. However, because the satellites are constantly moving relative to observers on the Earth, effects predicted by the Special and General theories of Relativity must be taken into account to achieve the desired 20-30 nanosecond accuracy.

Because an observer on the ground sees the satellites in motion relative to them, Special Relativity predicts that we should see their clocks ticking more slowly (see the Special Relativity lecture). Special Relativity predicts that the on-board atomic clocks on the satellites should fall behind clocks on the ground by about 7 microseconds per day because of the slower ticking rate due to the time dilation effect of their relative motion [2].

Further, the satellites are in orbits high above the Earth, where the curvature of spacetime due to the Earth's mass is less than it is at the Earth's surface. A prediction of General Relativity is that clocks closer to a massive object will seem to tick more slowly than those located further away (see the Black Holes lecture). As such, when viewed from the surface of the Earth, the clocks on the satellites appear to be ticking faster than identical clocks on the ground. A calculation using General Relativity predicts that the clocks in each GPS satellite should get ahead of ground-based clocks by 45 microseconds per day.

The combination of these two relativistic effects means that the clocks on-board each satellite should tick faster than identical clocks on the ground by about 38 microseconds per day (45-7=38)! This sounds small, but the high precision required of the GPS system demands nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time.
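
A back-of-envelope check of those numbers with rounded constants (my arithmetic, not from the article): an uncorrected 38 microsecond-per-day clock offset corresponds to a ranging error of c × 38 μs ≈ (3×10^8 m/s) × (3.8×10^-5 s) ≈ 11 km accumulating per day, consistent with the ~10 km/day figure. For the special-relativistic term, the orbital speed v ≈ 3.9 km/s gives v^2/(2c^2) ≈ 8×10^-11, or about 7 μs per 86,400-second day (slow). For the general-relativistic term, the potential difference GM(1/R_Earth - 1/r)/c^2 ≈ 5×10^-10 at r ≈ 26,600 km, or roughly 45 μs per day (fast).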
nibble  org:junk  org:edu  explanation  trivia  cocktail  physics  gravity  relativity  applications  time  synchrony  speed  space  navigation  technology 
november 2017 by nhaliday
Fermat's Library | Cassini, Rømer and the velocity of light annotated/explained version.
Abstract: The discovery of the finite nature of the velocity of light is usually attributed to Rømer. However, a text at the Paris Observatory confirms the minority opinion according to which Cassini was first to propose the ‘successive motion’ of light, while giving a rather correct order of magnitude for the duration of its propagation from the Sun to the Earth. We examine this question, and discuss why, in spite of the criticisms of Halley, Cassini abandoned this hypothesis while leaving Rømer free to publish it.
liner-notes  papers  essay  history  early-modern  europe  the-great-west-whale  giants  the-trenches  mediterranean  nordic  science  innovation  discovery  physics  electromag  space  speed  nibble  org:sci  org:mat 
september 2017 by nhaliday
Population Growth and Technological Change: One Million B.C. to 1990
The nonrivalry of technology, as modeled in the endogenous growth literature, implies that high population spurs technological change. This paper constructs and empirically tests a model of long-run world population growth combining this implication with the Malthusian assumption that technology limits population. The model predicts that over most of history, the growth rate of population will be proportional to its level. Empirical tests support this prediction and show that historically, among societies with no possibility for technological contact, those with larger initial populations have had faster technological change and population growth.
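
One way to unpack that prediction (my gloss, not the paper's notation): if the per-capita growth rate is proportional to the level, (dP/dt)/P = aP, then dP/dt = aP^2, whose solution P(t) = P_0/(1 - aP_0 t) is hyperbolic, i.e. growth keeps accelerating, which is the qualitative shape the long-run world population series shows over most of history.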

Table I gives the gist (population growth rate scales w/ tech innovation). Note how the Mongol invasions + reverberations stand out.

https://jasoncollins.org/2011/08/15/more-people-more-ideas-in-the-long-run/
pdf  study  economics  growth-econ  broad-econ  cliometrics  anthropology  cjones-like  population  demographics  scale  innovation  technology  ideas  deep-materialism  stylized-facts  correlation  speed  flux-stasis  history  antiquity  iron-age  medieval  early-modern  mostly-modern  piracy  garett-jones  spearhead  big-picture  density  iteration-recursion  magnitude  econotariat  multi  commentary  summary  🎩  path-dependence  pop-diff  malthus  time-series  data  world  microfoundations  hari-seldon  conquest-empire  disease  parasites-microbiome  spreading  gavisti  asia  war  death  nihil  trends 
august 2017 by nhaliday
Reading | West Hunter
Reading speed and comprehension interest me, but I don’t have as much information as I would like.  I would like to see the distribution of reading speeds ( in the general population, and also in college graduates).  I have looked a bit at discussions of this, and there’s something wrong.  Or maybe a lot wrong.  Researchers apparently say that nobody reads 900 words a minute with full comprehension, but I’ve seen it done.  I would also like to know if anyone has statistically validated methods that  increase reading speed.

On related topics, I wonder how many serious readers  there are, here and also in other countries.  Are they as common in Japan or China, with their very different scripts?   Are reading speeds higher or lower there?

How many people have  their houses really, truly stuffed with books?  Here and elsewhere?  Last time I checked we had about 5000 books around the house: I figure that’s serious, verging on the pathological.

To what extent do people remember what they read?  Judging from the general results of  adult knowledge studies, not very much of what they took in school, but maybe voluntary reading is different.

https://westhunt.wordpress.com/2012/06/05/reading/#comment-3187
The researchers claim that the range of high-comprehension reading speed doesn’t go up anywhere near 900 wpm. But my daughter routinely reads at that speed. In high school, I took a reading speed test and scored a bit over 1000 wpm, with perfect comprehension.

I have suggested that the key to high reading speed is the experience of trying to finish an entire science fiction paperback in a drugstore before the proprietor tells you to buy the damn thing or get out. Helps if you can hide behind the bookrack.

https://westhunt.wordpress.com/2019/03/31/early-reading/
There are a few small children, mostly girls, that learn to read very early. You read stories to them and before you know it they’re reading by themselves. By very early, I mean age 3 or 4.

Does this happen in China ?

hmm:
Beijingers' average daily reading time exceeds an hour: report: http://www.chinadaily.com.cn/a/201712/07/WS5a293e1aa310fcb6fafd44c0.html

Free Speed Reading Test by AceReader: http://www.freereadingtest.com/
time+comprehension

http://www.readingsoft.com/
claims: 1000 wpm with 85% comprehension at top 1%, 200 wpm at 60% for average

https://www.wsj.com/articles/speed-reading-returns-1395874723
http://projects.wsj.com/speedread/

https://news.ycombinator.com/item?id=929753
Take a look at "Reading Rate: A Review of Research and Theory" by Ronald P. Carver
http://www.amazon.com/Reading-Rate-Review-Research-Theory/dp...
The conclusion is, basically, that speed reading courses don't work.
You can teach people to skim at a faster rate than they'd read with maximum comprehension and retention. And you can teach people study skills, such as how to summarize salient points, and take notes.
But all these skills are not at all the same as what speed reading usually promises, which is to drastically increase the rate at which you read with full comprehension and retention. According to Carver's book, it can't be done, at least not drastically past about the rate you'd naturally read at the college level.
west-hunter  scitariat  discussion  speculation  ideas  rant  critique  learning  studying  westminster  error  realness  language  japan  china  asia  sinosphere  retention  foreign-lang  info-foraging  scale  speed  innovation  explanans  creative  multi  data  urban-rural  time  time-use  europe  the-great-west-whale  occident  orient  people  track-record  trivia  books  number  knowledge  poll  descriptive  distribution  tools  quiz  neurons  anglo  hn  poast  news  org:rec  metrics  density  writing  meta:reading  thinking 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
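A minimal sketch of the distinction drawn here, contrasting a mean-reverting parameter with one that drifts as a trend plus a random walk (my illustration; parameter values are arbitrary):

```python
import random

def mean_reverting(steps, pull=0.2, noise=1.0):
    """Shocks get pulled back toward the long-run mean of 0."""
    x, path = 0.0, []
    for _ in range(steps):
        x += -pull * x + random.gauss(0, noise)
        path.append(x)
    return path

def trend_plus_random_walk(steps, drift=0.1, noise=1.0):
    """Shocks accumulate and there is no pull back, so values drift far over time."""
    x, path = 0.0, []
    for _ in range(steps):
        x += drift + random.gauss(0, noise)
        path.append(x)
    return path

# Over long horizons the first process stays near its mean, so extreme values are
# informative; the second wanders arbitrarily far, so big long-run change is the norm.
```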

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts in in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
Bloggingheads.tv: Gregory Cochran (The 10,000 Year Explosion) and Razib Khan (Unz Foundation, Gene Expression)
http://bloggingheads.tv/videos/1999
one interesting tidbit: doesn't think Homo sapiens smart enough for agriculture during previous interglacial period
https://westhunt.wordpress.com/2014/06/04/the-time-before/
Although we’re still in an ice age, we are currently in an interglacial period. That’s a good thing, since glacial periods are truly unpleasant – dry, cold, low biological productivity, high variability. Low CO2 concentrations made plants more susceptible to drought. Peter Richerson and Robert Boyd have suggested that the development of agriculture was impossible in glacial periods, due to these factors.

There was an earlier interglacial period that began about 130,000 years ago and ended about 114,000 years ago. It was a bit warmer than the current interglacial (the Holocene).

The most interesting events in the Eemian are those that didn’t happen. In the Holocene, humans developed agriculture, which led to all kinds of interesting trouble. They did it more than once, possibly as many as seven times independently. Back in the Eemian, nichevo (nothing). Neanderthals moved farther north as the glaciers melted, AMH moved up into the Middle East, but nobody did much of anything new. Populations likely increased, as habitable area expanded and biological productivity went up, but without any obvious consequences. Anatomically modern humans weren’t yet up to displacing archaic groups like the Neanderthals.

So, it is fair to say that everybody back then, including AMH, lacked capabilities that some later humans had. We could, if we wished, call these new abilities ‘behavioral modernity’.

The Bushmen are the most divergent of all human populations, and probably split off earliest. They are farther from the Bantu (in genetic distance) than the French or Chinese are.

According to some models, this split (between the Bushmen and other populations of sub-Saharan Africa) occurred more than 100,000 years ago. Recent direct measurements of mutations show much lower rates than previously thought, which tends to place such splits even farther back in time.
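A back-of-envelope showing why lower measured mutation rates push split estimates back (my numbers: the divergence value and generation time are hypothetical, and the two rates stand in for the older phylogenetic estimate versus the newer direct measurements):

```python
def split_time_years(divergence_per_bp, mu_per_bp_per_gen, gen_years=29):
    # divergence accumulates on both lineages: d ~ 2 * mu * t
    generations = divergence_per_bp / (2 * mu_per_bp_per_gen)
    return generations * gen_years

d = 3e-4  # hypothetical autosomal divergence per bp between the two populations
print(round(split_time_years(d, 2.5e-8)))   # ~174,000 years with the older, higher rate
print(round(split_time_years(d, 1.25e-8)))  # ~348,000 years with the lower measured rate
```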

The question is whether they split off before the development of practical behavioral modernity.

https://westhunt.wordpress.com/2016/04/08/the-long-count/
They are anatomically modern: they have chins, etc. Behaviorally modern? There have been only a few attempts to measure their intelligence: what has been done indicates that they have very low IQs. They definitely talk, tell stories, sing songs: does that imply that they could, given the right environment, have developed the Antikythera mechanism or a clipper ship?

This means that language is older than some had thought, a good deal older. It also means that people with language are quite capable of going a quarter of a million years without generating much technological advance – without developing the ability to push aside archaic humans, for example. Of course, people with Williams syndrome have language, and you can’t send them into the kitchen and rely on them to bring back a fork. Is the sophistication of Bushman language – this means the concepts they can and do convey, not the complexity of the grammar – comparable with that of other populations? I don’t know. As far as I can see, one of the major goals of modern anthropology is to make sure that nobody knows. Or that they know things that aren’t so.

...

Some have suggested that the key to technological development is higher population: that produces more intellects past a high threshold, sure. I don’t think that’s the main factor. Eskimos have a pretty advanced technology, but there were never very many of them. On the other hand, they have the highest IQ of any existing hunter-gatherer population: that’s got to help. Populations must have gone up in the Eemian, the previous interglacial period, but nothing much got invented back then. It would seem that agriculture would have been possible in the Eemian, but as far as we know it didn’t happen. Except for Valusia of course. With AMH going back at least 300,000 years, we have to start thinking about even earlier interglacial periods, like Mindel-Riss (424-374 k years ago).

https://en.wikipedia.org/wiki/Interglacial

https://westhunt.wordpress.com/2017/08/28/same-old/
We now know ( from ancient DNA) that Bushmen split off from the rest of humanity (or we from them) at least a quarter of a million years ago. Generally, when you see a complex trait in sister groups, you can conclude that it existed in the common ancestor. Since both Bushmen and (everybody else) have complex language, one can conclude that complex language existed at least a quarter million years ago, in our common ancestor. You should also suspect that unique features of Bushmen language, namely those clicks, are not necessarily superficial: there has been time enough for real, baked-in, biologically rooted language differences to evolve. It also shows that having complex language isn’t enough, in itself, to generate anything very interesting. Cf Williams syndrome. Certainly technological change was very slow back then. Interglacial periods came and went without AMH displacing archaics in Eurasia or developing agriculture.

Next, the ability to generate rapid cultural change, invent lots of stuff, improvise effective bullshit didn’t exist in the common ancestor of extant humanity, since change was very slow back then.

Therefore it is not necessarily the case that every group has it today, or has it to the same extent. Psychic unity of mankind is unlikely. It’s also denied by every measurement ever made, but I guess invoking data, or your lying eyes, would be cheating.

https://westhunt.wordpress.com/2017/08/28/bushmen-palate/
“it has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge.”

https://westhunt.wordpress.com/2013/02/17/unchanging-essence/
John Shea is a professor of anthropology at Stony Brook, specializing in ancient archaeology. He’s been making the argument that ‘behavioral modernity’ is a flawed concept, which it is. Naturally, he wants to replace it with something even worse. Not only are all existing human populations intellectually equal, as most anthropologists affirm – all are ‘behaviorally modern’ – all past populations of anatomically modern humans were too! The idea that our ancestors circa 150,000 B.C. might not be quite as sharp as people today is just like the now-discredited concept of race. And you know, he’s right. They’re both perfectly natural consequences of neodarwinism.

Behavioral modernity is a silly concept. As he says, it’s a typological concept: hominids are either behaviorally modern or they’re not. Now why would this make sense? Surely people vary in smarts, for example: it’s silly to say that they are either smart or not smart. We can usefully make much finer distinctions. We could think in terms of distributions – we might say that you score in the top quarter of intelligence for your population. We could analyze smarts in terms of thresholds: what is the most complex task that a given individual can perform? What fraction of the population can perform tasks of that complexity or greater? Etc. That would be a more reasonable way of looking at smarts, and this is of course what psychometrics does.

It’s also a group property. If even a few members of a population do something that anthropologists consider a sign of behavioral modernity – like making beads – everyone in that population must be behaviorally modern. By the same argument, if anyone can reach the top shelf, we are all tall.

The notion of behavioral modernity has two roots. The first is that if you go back far enough, it’s obvious that our distant ancestors were pretty dim. Look at Oldowan tools – they’re not much more than broken rocks. And they stayed that way for a million years – change was inhumanly slow back then. That’s evidence. The second is not. Anthropologists want to say that all living populations are intellectually equal – which is not what the psychometric evidence shows. Or what population differences in brain size suggest. So they conjured up a quality – behavioral modernity – that all living people possess, but that homo erectus did not, rather than talk about quantitative differences.
west-hunter  video  interview  sapiens  agriculture  history  antiquity  technology  disease  parasites-microbiome  europe  medieval  early-modern  age-of-discovery  gavisti  gnxp  scitariat  multi  behavioral-gen  archaeology  conquest-empire  eden  intelligence  iq  evolution  archaics  africa  farmers-and-foragers  🌞  speculation  ideas  usa  discussion  roots  population  density  critique  language  cultural-dynamics  anthropology  innovation  aDNA  climate-change  speed  oscillation  aphorism  westminster  wiki  reference  environment  atmosphere  temperature  deep-materialism  pop-diff  speaking  embodied  attaq  psychometrics  biodet  domestication  gene-flow 
april 2017 by nhaliday
Evolution Runs Faster on Short Timescales | Quanta Magazine
But if more splashes of paint appear on a wall, they will gradually conceal some of the original color beneath new layers. Similarly, evolution and natural selection write over the initial mutations that appear over short timescales. Over millions of years, an A in the DNA may become a T, but in the intervening time it may be a C or a G for a while. Ho believes that this mutational saturation is a major cause of what he calls the time-dependent rate phenomenon.

“Think of it like the stock market,” he said. Look at the hourly or daily fluctuations of Standard & Poor’s 500 index, and it will appear wildly unstable, swinging this way and that. Zoom out, however, and the market appears much more stable as the daily shifts start to average out. In the same way, the forces of natural selection weed out the less advantageous and more deleterious mutations over time.
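A sketch of the saturation effect described above, using the simple Jukes-Cantor model (my illustration of the mechanism, not Ho's analysis, and it ignores the selection component): the observed fraction of differing sites flattens out as later substitutions overwrite earlier ones, so apparent rates look higher over short timescales.

```python
import math

def observed_fraction_different(rate_per_site_per_year, years):
    """Expected fraction of sites that *look* different under Jukes-Cantor (JC69)."""
    return 0.75 * (1 - math.exp(-4.0 * rate_per_site_per_year * years / 3.0))

mu = 1e-8  # hypothetical substitution rate per site per year
for years in (1e6, 1e7, 1e8, 1e9):
    actual = mu * years  # substitutions that really happened per site
    seen = observed_fraction_different(mu, years)
    print(f"{years:.0e} yr: actual {actual:.2f}, observed {seen:.2f}")
# Short timescales: observed ~ actual. Long timescales: observed saturates near 0.75.
```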
news  org:mag  org:sci  evolution  bio  nature  mutation  selection  time  methodology  stylized-facts  genetics  population-genetics  genomics  speed  pigeonhole-markov  bits  nibble  org:inst 
march 2017 by nhaliday
how big was the edge? | West Hunter
One consideration in the question of what drove the Great Divergence [when Europe’s power and wealth came to greatly exceed that of the far East] is the extent to which Europe was already ahead in science, mathematics, and engineering. As I have said, at the highest levels Europe was already much more intellectually sophisticated than China. I have a partial list of such differences, but am interested in what my readers can come up with.

What were the European advantages in science, mathematics, and technology circa 1700? And, while we’re at it, in what areas did China/Japan/Korea lead at that point in time?

https://westhunt.wordpress.com/2017/03/10/how-big-was-the-edge/#comment-89299
Before 1700, Ashkenazi Jews did not contribute to the growth of mathematics, science, or technology in Europe. As for the idea that they played a crucial financial role in this period – not so. Medicis, Fuggers.

https://westhunt.wordpress.com/2017/03/10/how-big-was-the-edge/#comment-89287
I’m not so sure about China being behind in agricultural productivity.
--
Nor canal building. Miles ahead on that, I’d have thought.

China also had eyeglasses.
--
Well after they were invented in Italy.

https://westhunt.wordpress.com/2017/03/10/how-big-was-the-edge/#comment-89289
I would say that although the Chinese discovered and invented many things, they never developed science, any more than they developed axiomatic mathematics.

https://westhunt.wordpress.com/2017/03/10/how-big-was-the-edge/#comment-89300
I believe Chinese steel production led the world until late in the 18th century, though I haven’t found any references to support that.
--
Probably true in the late Sung period, but not later. [ed.: So 1200s AD.]

https://westhunt.wordpress.com/2017/03/10/how-big-was-the-edge/#comment-89382
I confess I’m skeptical of your statement that the literacy rate in England in 1650 was 50%. Perhaps it was in London, but for the entire population?
--
More like 30%, for men, lower for women.

https://westhunt.wordpress.com/2017/03/10/how-big-was-the-edge/#comment-89322
They did pretty well, considering that they were just butterflies dreaming that they were men.

But… there is a real sense in which the Elements, or the New Astronomy, or the Principia, are more sophisticated than anything Confucius ever said.

They’re not just complicated – they’re correct.
--
Tell me how to distinguish good speculative metaphysics from bad speculative metaphysics.

random side note:
- dysgenics running at -.5-1 IQ/generation in NW Europe since ~1800 and China by ~1960
- gap between east asians and europeans typically a bit less than .5 SD (or .3 SD if you look at mainland chinese not asian-americans?), similar variances
- 160/30 * 1/15 = .36, so could explain most of gap depending on when exactly dysgenics started (checked in the sketch below)
- maybe Europeans were just smarter back then? still seems like you need additional cultural/personality and historical factors. could be parasite load too.
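A quick reproduction of the note's arithmetic, under its own stated assumptions (the 1-point-per-generation figure is the upper end of the quoted range):

```python
years_of_dysgenics = 160        # roughly 1800 onward, per the note
generation_years = 30
iq_points_lost_per_generation = 1.0
sd_per_iq_point = 1 / 15        # 15 IQ points per standard deviation

generations = years_of_dysgenics / generation_years
loss_in_sd = generations * iq_points_lost_per_generation * sd_per_iq_point
print(round(loss_in_sd, 2))     # ~0.36 SD, comparable to the quoted ~0.3-0.5 SD gap
```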

https://westhunt.wordpress.com/2019/09/07/wheel-in-the-sky/
[ed.: responding to a claim that India and China were more advanced scientifically than Europe.] Nonsense, of course. Hellenistic science was more advanced than that of India and China in 1700! Although it makes me wonder the extent to which they’re teaching false history of science and technology in schools today – there’s apparently demand to blot out white guys from the story, which wouldn’t leave much.

Europe, back then, could be ridiculously sophisticated, at the highest levels. There had been no simple, accurate way of determining longitude – important in navigation, but also in mapmaking.

...

In the course of playing with this technique, the Danish astronomer Ole Rømer noted some discrepancies in the timing of those eclipses – they were farther apart when Earth and Jupiter were moving away from each other, closer together when the two planets were approaching each other. From which he deduced that light had a finite speed, and calculated the approximate value.
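A rough version of Rømer's inference with modern values (a sketch I've added; the delay figure is of the right order of magnitude, not his actual measurement): the eclipse-timing discrepancy accumulated across Earth's orbit is just the light travel time over the orbit's diameter.

```python
AU_m = 1.496e11           # astronomical unit in meters (modern value)
delay_s = 1000            # assumed accumulated timing discrepancy, ~16-17 minutes

c_estimate = 2 * AU_m / delay_s   # diameter of Earth's orbit / delay
print(f"{c_estimate:.2e} m/s")    # ~3e8 m/s: light has a finite, measurable speed
```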

https://westhunt.wordpress.com/2019/09/07/wheel-in-the-sky/#comment-138328
“But have you noticed having a better memory than other smart people you respect?”

Oh yes.
--
I think some people have a stronger meta-memory than others, which can work as a multiplier of their intelligence. For some, their memory is a well ordered set of pointers to where information exists. It’s meta-data, rather than data itself. For most people, their memory is just a list of data, loosely organized by subject. Mixed in may be some meta-data, but otherwise it is a closed container.

I suspect sociopaths and politicians have a strong meta-data layer.
west-hunter  discussion  history  early-modern  science  innovation  comparison  asia  china  divergence  the-great-west-whale  culture  society  technology  civilization  europe  frontier  arms  military  agriculture  discovery  coordination  literature  sinosphere  roots  anglosphere  gregory-clark  spearhead  parasites-microbiome  dysgenics  definite-planning  reflection  s:*  big-picture  🔬  track-record  scitariat  broad-econ  info-dynamics  chart  prepping  zeitgeist  rot  wealth-of-nations  cultural-dynamics  ideas  enlightenment-renaissance-restoration-reformation  occident  modernity  microfoundations  the-trenches  marginal  summary  orient  speedometer  the-world-is-just-atoms  gnon  math  geometry  defense  architecture  hari-seldon  multi  westminster  culture-war  identity-politics  twitter  social  debate  speed  space  examples  physics  old-anglo  giants  nordic  geography  navigation  maps  aphorism  poast  retention  neurons  thinking  finance  trivia  pro-rata  data  street-fighting  protocol-metadata  context  oceans 
march 2017 by nhaliday
Orthogonal — Greg Egan
In Yalda’s universe, light has no universal speed and its creation generates energy.

On Yalda’s world, plants make food by emitting their own light into the dark night sky.
greg-egan  fiction  gedanken  physics  electromag  differential  geometry  thermo  space  cool  curiosity  reading  exposition  init  stat-mech  waves  relativity  positivity  unit  wild-ideas  speed  gravity  big-picture  🔬  xenobio  ideas  scifi-fantasy  signum 
february 2017 by nhaliday
Sir Ronald Aylmer Fisher | West Hunter
In 1930 he published The Genetical Theory of Natural Selection, which completed the fusion of Darwinian natural selection with Mendelian inheritance. James Crow said that it was ‘arguably the deepest and most influential book on evolution since Darwin’. In it, Fisher analyzed sexual selection, mimicry, and sex ratios, where he made some of the first arguments using game theory. The book touches on many other topics. As was the case with his other works, The Genetical Theory is a dense book, not easy for most people to understand. Fisher’s tendency to leave out mathematical steps that he deemed obvious (a leftover from his early training in mental mathematics) frustrates many readers.

The Genetical Theory is of particular interest to us because Fisher there lays out his ideas on how population size can speed up evolution. As we explain elsewhere, more individuals mean there will be more mutations, including favorable mutations, and so Fisher expected more rapid evolution in larger populations. This idea was originally suggested, in a nonmathematical way, in Darwin’s Origin of Species.
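A minimal sketch of Fisher's point using standard textbook approximations (not his notation): in a diploid population, roughly 2N new copies of a beneficial mutation arise per generation for a given mutational opportunity, and each fixes with probability about 2s, so the rate of adaptive substitution scales linearly with population size.

```python
def adaptive_substitutions_per_generation(N, mu_beneficial, s):
    """Diploid population of size N: ~2*N*mu new beneficial copies, each fixing with prob ~2s."""
    return 2 * N * mu_beneficial * 2 * s

for N in (10_000, 100_000, 1_000_000):
    rate = adaptive_substitutions_per_generation(N, mu_beneficial=1e-9, s=0.01)
    print(N, rate)  # ten times the people, ten times the rate of favorable substitutions
```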

Although Fisher was fiercely loyal to friends and could be very charming, he had a quick temper and was a fine hater. The same uncompromising spirit that fostered his originality led to constant conflict with authority. He had a long conflict with Karl Pearson, who had also played an important part in the development of mathematical statistics. In this case, Pearson was more at fault, resisting the advent of a more talented competitor, as well as being an eminently hateable person in general. Over time Fisher also became increasingly angry at Sewall Wright (another one of the founders of population genetics) due to scientific disagreements – and this was just wrong, because Wright was a sweetheart.

Fisher’s personality decreased his potential influence. He was not a school-builder, and was impatient with administrators. He expected to find some form of war-work in the Second World War, but his characteristics had alienated too many people, and thus his team dispersed to other jobs during the war. He returned to Rothamsted for the duration. This was a difficult time for him: his marriage disintegrated and his oldest son, an RAF pilot, was killed in the war.

...

Fisher’s ideas in genetics have taken an odd path. The Genetical Theory was not widely read, sold few copies, and has never been translated. Only gradually did its ideas find an audience. Of course, that audience included people like Bill Hamilton, the greatest mathematical biologist of the last half of the 20th century, who was strongly influenced by Fisher’s work. Hamilton said “By the time of my ultimate graduation, will I have understood all that is true in this book and will I get a First? I doubt it. In some ways some of us have overtaken Fisher; in many, however, this brilliant, daring man is still far in front.”

In fact, over the past generation, much of Fisher’s work has been neglected – in the sense that interest in population genetics has decreased (particularly interest in selection) and fewer students are exposed to his work in genetics in any way. Ernst Mayr didn’t even mention Fisher in his 1991 book One Long Argument: Charles Darwin and the Genesis of Modern Evolutionary Thought, while Stephen Jay Gould, in The Structure of Evolutionary Theory, gave Fisher 6 pages out of 1433. Of course Mayr and Gould were both complete chuckleheads.

Fisher’s work affords continuing insight, including important implications concerning human evolution that have emerged more than 50 years after his death. We strongly discourage other professionals from learning anything about his ideas.
west-hunter  history  bio  evolution  genetics  population-genetics  profile  giants  people  mostly-modern  the-trenches  innovation  novelty  britain  fisher  mental-math  narrative  scitariat  old-anglo  world-war  pre-ww2  scale  population  pop-structure  books  classic  speed  correlation  mutation  personality 
january 2017 by nhaliday
Psychological comments: Does Age make us sage or sag?
Khan on Twitter: "figure on right from @tuckerdrob lab is depressing (the knowledge plateau). do i read in vain??? https://t.co/DZzBD8onEv": https://twitter.com/razibkhan/status/809439911627493377
- reasoning rises then declines after age ~20
- knowledge plateaus by age 35-40
- different interpretation provided by study authors w/ another graph (renewal)
- study (can't find the exact graph anywhere): http://www.iapsych.com/wj3ewok/LinkedDocuments/McArdle2002.pdf

School’s out: https://westhunt.wordpress.com/2016/12/29/schools-out/
I saw a note by Razib Khan, in which he mentioned that psychometric research suggests that people plateau in their knowledge base as adults. I could believe it. But I’m not sure it’s true in my case. One might estimate total adult knowledge in terms of BS equivalents…

Age-related IQ decline is reduced markedly after adjustment for the Flynn effect: https://www.ncbi.nlm.nih.gov/m/pubmed/20349385/
Twenty-year-olds outperform 70-year-olds by as much as 2.3 standard deviations (35 IQ points) on subtests of the Wechsler Adult Intelligence Scale (WAIS). We show that most of the difference can be attributed to an intergenerational rise in IQ known as the Flynn effect.

...

For these verbal subtests, the Flynn effect masked a modest increase in ability as individuals grow older.

Predictors of ageing-related decline across multiple cognitive functions: http://www.sciencedirect.com/science/article/pii/S0160289616302707
Cognitive ageing is likely a process with few large-effect predictors

A strong link between speed of visual discrimination and cognitive ageing: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4123160/
Results showed a moderate correlation (r = 0.460) between inspection time performance and intelligence, and a strong correlation between change in inspection time and change in intelligence from 70 to 76 (r = 0.779). These results support the processing speed theory of cognitive ageing. They go beyond cross-sectional correlation to show that cognitive change is accompanied by changes in basic visual information processing as we age.
albion  psychology  cog-psych  psychometrics  aging  iq  objektbuch  long-term  longitudinal  study  summary  variance-components  scitariat  multi  gnxp  learning  metabuch  twitter  social  discussion  pic  data  planning  tradeoffs  flux-stasis  volo-avolo  west-hunter  studying  knowledge  age-generation  flexibility  rigidity  plots  manifolds  universalism-particularism  being-becoming  essence-existence  intelligence  stock-flow  large-factor  psych-architecture  visuo  correlation  time  speed  short-circuit  roots  flynn  trends  dysgenics  language  explanans  direction  chart 
december 2016 by nhaliday
Faster than Fisher | West Hunter
There’s a simple model of the spread of an advantageous allele:  You take σ, the typical  distance people move in one generation, and s,  the selective advantage: the advantageous allele spreads as a nonlinear wave at speed  σ * √(2s).  The problem is, that’s slow.   Suppose that s = 0.10 (a large advantage), σ = 10 kilometers, and a generation time of 30 years: the allele would take almost 7,000 years to expand out 1000 kilometers.
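Checking the quoted numbers with the post's own formula (a sketch; the rounding is mine):

```python
import math

sigma_km = 10           # typical distance moved per generation
s = 0.10                # selective advantage
generation_years = 30
distance_km = 1000

speed_km_per_gen = sigma_km * math.sqrt(2 * s)            # ~4.47 km per generation
years = distance_km / speed_km_per_gen * generation_years
print(round(years))     # ~6700 years - "almost 7,000 years" to spread 1000 km
```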

...

This big expansion didn’t just happen from peasants marrying the girl next door: it required migrations and conquests. This one looks as if it rode with the Indo-European expansion: I’ll bet it started out in a group that had domesticated only horses.

The same processes, migration and conquest, must explain the wide distribution of many geographically widespread selective sweeps and partial sweeps. They were adaptive, all right, but expanded much faster than possible from purely local diffusion. We already have reason to think that SLC24A5 was carried to Europe by Middle Eastern farmers; the same is probably true for the haplotype that carries the high-activity ergothioniene transporter and the 35delG connexin-26/GJB2 deafness mutation. The Indo-Europeans probably introduced the T-13910 LCT mutation and the delta-F508 cystic fibrosis mutation, so we should see delta-F508 in northwest India and Pakistan – and we do !

https://westhunt.wordpress.com/2014/11/22/faster-than-fisher/#comment-63067
To entertain a (possibly mistaken) physical analogy, it sounds like you’re suggesting a sort of genetic convection through space, as opposed to conduction. I.e., entire masses of folks, carrying a new selected variant, are displacing others – as opposed to the slow gene flow process of “girl-next-door.” Is that about right? (Hopefully I haven’t revealed my ignorance of basic thermodynamics here…)

Has there been any attempt to estimate sigma from these time periods?

Genetic Convection: https://westhunt.wordpress.com/2015/02/22/genetic-convection/
People are sometimes interested in estimating the point of origin of a sweeping allele: this is probably effectively impossible even if diffusion were the only spread mechanism, since the selective advantage might well vary in both time and space. But that’s ok, since population movements – genetic convection – are real and very important. This means that the difficulties in estimating the origin of a Fisher wave are totally insignificant, compared to the difficulties of estimating the effects of past colonizations, conquests and Völkerwanderungs. So when Yuval Itan and Mark Thomas estimated that 13,910 T LCT allele originated in central Europe, in the early Neolithic, they didn’t just go wrong because of failing to notice that the same allele is fairly common in northern India: no, their whole notion was unsound in the first place. We’re talking turbulence on steroids. Hari Seldon couldn’t figure this one out from the existing geographic distribution.
west-hunter  genetics  population-genetics  street-fighting  levers  evolution  gavisti  🌞  selection  giants  nibble  fisher  speed  gene-flow  scitariat  stylized-facts  methodology  archaeology  waves  frontier  agri-mindset  analogy  visual-understanding  physics  thermo  interdisciplinary  spreading  spatial  geography  poast  multi  volo-avolo  accuracy  estimate  order-disorder  time  homo-hetero  branches  trees  distribution  data  hari-seldon  aphorism  cliometrics  aDNA  mutation  lexical 
november 2016 by nhaliday
The 10,000 Year Explosion - Parting of the Ways
There are plenty of other challenges that humans of that era (~100,000 years ago) never met: for example they never colonized the high Arctic, the Americas, or Australia/New Guinea. Even though Neanderthals and Africans had brains that were as large as or larger than those of modern humans, even though humans in Africa were reasonably modern-looking, modern behavioral capacities did not yet exist. They didn't yet have the spark. Come to think of it, most people today still don't. We'll have more to say on that in a moment.

...

The Neanderthals had big brains (averaging about 1500 cubic centimeters, noticeably larger than those of modern people) and a technology like that of their anatomically modern contemporaries in Africa, but were quite different in a number of ways: different physically, but also socially and ecologically. Neanderthals were cold-adapted, with relatively short arms and legs in order to reduce heat loss - something like Arctic peoples today, only much more so. Considering that the climate the Neanderthals experienced was considerably milder than the high Arctic (more like Wisconsin), their pronounced cold adaptation suggests that they may have relied more on physical than cultural changes. Of course they spent at least six times as many generations in the cold as any modern human population has, and that may have had something to do with it as well.

...

Like other early humans, Neanderthals were relatively uncreative; their tools changed very slowly and they show no signs of art, symbolism, or trade. Their brains were large and had grown larger over time, in parallel with humans in Africa, but we really have no idea what they did with them. Since brains are metabolically expensive, natural selection wouldn’t have favored an increase in brain size unless it increased fitness, but we don’t know what function those big brains served. Usually people explain that those big brains are not as impressive as they seem, since the brain-to-body weight ratio is what’s really important, and Neanderthals were heavier than modern humans of the same height.

You may wonder why we normalize brain size by body weight. We wonder as well.

Among less intelligent creatures, such as amphibians and reptiles, most of the brain is busy dealing with a flood of sensory data. You’d expect that brain size would have to increase with body size in some way in order to keep up. If you assume that the key is how much surface the animal has, in order to monitor what’s causing that nagging itch and control all the muscles needed for movement, brain size should scale as the 2/3rds power of weight. If an animal has a brain that’s bigger than predicted by that 2/3rds power scaling law, then maybe it’s smarter than average. That argument works reasonably well for a wide range of species, but it can’t make sense for animals with big brains. In particular it can’t make sense for primates, since in that case we know that most of the brain is used for purposes other than muscle control and immediate reaction to sensation. Look at it this way - if dividing brain volume by weight is a valid approach, Nero Wolfe must be really, really stupid.
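The geometric step assumed above, spelled out (a sketch): for geometrically similar bodies, length scales as weight^(1/3), so surface area scales as weight^(2/3), and a brain sized to service the body surface should follow the same exponent.

```python
def surface_ratio(weight_ratio):
    """Surface area ratio for geometrically similar bodies: area ~ length**2 ~ weight**(2/3)."""
    return weight_ratio ** (2 / 3)

print(surface_ratio(8))     # 8x the weight -> only 4x the surface to monitor and move
print(surface_ratio(1000))  # 1000x the weight -> ~100x the surface
```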

We think that Neanderthal brains really were large, definitely larger than those of people today. This doesn’t necessarily mean that they were smarter, at least not as a culture. The archaeological record certainly indicates that they were not, since their material culture was definitely simpler than that of their successors. In fact, they may have been relatively unintelligent, even with their big brains. Although brain size certainly is correlated with intelligence in modern humans, it is not the only factor that affects intelligence. By the way, you may have read somewhere (The Mismeasure of Man) that brain volume has no relationship to intelligence, but that’s just a lie.

One paradoxical possibility is that Neanderthals lacked complex language and so had to be smart as individuals in order to learn their culture and technology, while that same lack severely limited their societal achievements. Complex language of the type we see in modern humans makes learning a lot easier: without it, learning to create even Mousterian tools may have been difficult. In that case, individuals would have to repeatedly re-invent the wheel (so to speak) while there would have been little societal progress.

It could also be that Neanderthal brains were less powerful than you’d expect because there just weren’t enough Neanderthals. That may sound obscure, but bear with us. The problem is that evolution is less efficient in small populations, in the same way that any statistical survey – polls, for example – becomes less accurate with fewer samples.

...

Our favorite hypothesis is that Neanderthals and other archaic humans had a fundamentally different kind of learning than moderns. One of the enduring puzzles is the near-stasis of tool kits in early humans - as we have said before, the Acheulean hand-axe tradition lasted for almost a million years and extended from the Cape of Good Hope to Germany, while the Mousterian lasted for a quarter of a million years. Somehow these early humans were capable of transmitting a simple material culture for hundreds of thousands of years with little change. More information was transmitted to the next generation than in chimpanzees, but not as much as in modern humans. At the same time, that information was transmitted with surprisingly high accuracy. This must be the case, since random errors in transmission would have caused changes in those tool traditions, resulting in noticeable variation over space and time – which we do not see.

It looks to us as if toolmaking in those populations was, to some extent, innate: genetically determined. Just as song birds are born with a rough genetic template that constrains what songs are learned, early humans may have been born with genetically determined behavioral tendencies that resulted in certain kinds of tools. Genetic transmission of that information has the characteristics required to explain this pattern of simple, near-static technology, since only a limited amount of information can be acquired through natural selection, while the information that is acquired is transmitted with very high accuracy.

...

Starting 70,000 or 80,000 years ago, we begin to see some signs of increased cultural complexity in Africa. There is evidence of long-distance transport of tool materials (obsidian) in Ethiopia, which could be the first signs of trade. A set of pierced snail shells (~75,000 years old) in Blombos Cave in South Africa seem, judging from wear, to be the remains of a necklace, although there is no evidence that tools were used to pierce the shells. In that same site, researchers found pieces of ochre with a crosshatched pattern inscribed. We have found manufactured ostrich-egg beads in Kenya that are about 50,000 years old, the first clear examples of artificial decorative or symbolic (that is to say, useless) objects. We see a new kind of small stone points that must have been used on darts that were considerably smaller than previous spears. Although it would seem likely that such darts would have been propelled by atlatls, no atlatls have yet been found that date anywhere near that far back. There are reports of 90,000 year-old bone fish spears from central Africa which, if correct, would be evidence of a significant advance in tool complexity. However, since no other similar tools found in Africa are older than 30,000 years, those fish spears are roughly as anomalous as a Neanderthal-era thumb drive, and we have our doubts about that date. On the whole, the African archeological data of this period furnishes examples of new technology and simple symbolic objects, but the evidence is patchy, and it seems that some innovations appeared and then faded away for reasons that we don’t understand.

A note on behavioral modernity: the consensus seems to be that any clear evidence of a population making symbolic or decorative objects establishes their behavioral modernity, defined as cultural creativity and reliance on abstract thought. For some reason, anthropologists treat behavioral modernity as a qualitative character: an ancient population either had it or not, just as women are pregnant or not, never a ‘little bit pregnant’. It’s treated as a Boolean variable. Like so many basic notions in anthropology, this makes no sense. The components of ‘behavioral modernity’ had to be evolved traits with heritable variation, subject to natural selection – how else would they have come into existence at all? Surely ancient individuals and populations varied in their capacity for abstract thought and cultural innovation – behavioral modernity must be more like height than pregnancy.

...

The fact that the ability to learn complex new ideas and transmit them to the next generation is universal in modern humans suggests that natural selection favored that kind of receptivity. On the other hand, the rarity of individual creativity suggests that the trait itself was not favored by selection in the past, but is instead a rare side effect.

We think that the archaeological record in Africa before the expansion of modern humans shows a gradual but slow increase in such abilities, which is the usual pattern for a trait favored by selection. On the other hand, the rate of change in the European Upper Paleolithic seems faster, almost discontinuous – but there is a well-understood biological pattern that may explain that as well.

The most dramatic evidence of some kind of significant change is the fact that anatomically modern humans expanded out of Africa about 50,000 years ago.
antiquity  sapiens  len:long  essay  west-hunter  spearhead  archaics  migration  gene-flow  scitariat  eden  intelligence  neuro  neuro-nitgrit  brain-scan  🌞  article  speculation  ideas  flux-stasis  pop-structure  population  population-genetics  technology  innovation  time  history  creative  discovery  cjones-like  shift  speed  gene-drift  archaeology  measure  explanans 
september 2016 by nhaliday
Noise: dinosaurs, syphilis, and all that | West Hunter
Generally speaking, I thought the paleontologists were a waste of space: innumerate, ignorant about evolution, and simply not very smart.

None of them seemed to understand that a sharp, short unpleasant event is better at causing a mass extinction, since it doesn’t give flora and fauna time to adapt.

Most seemed to think that gradual change caused by slow geological and erosion forces was ‘natural’, while extraterrestrial impact was not. But if you look at the Moon, or Mars, or the Kirkwood gaps in the asteroids, or think about the KAM theorem, it is apparent that Newtonian dynamics implies that orbits will be perturbed, and that sometimes there will be catastrophic cosmic collisions. Newtonian dynamics is as ‘natural’ as it gets: paleontologists not studying it in school and not having much math hardly makes it ‘unnatural’.

One of the more interesting general errors was not understanding how to deal with noise – incorrect observations. There’s a lot of noise in the paleontological record. Dinosaur bones can be eroded and redeposited well after their life times – well after the extinction of all dinosaurs. The fossil record is patchy: if a species is rare, it can easily look as if it went extinct well before it actually did. This means that the data we have is never going to agree with a perfectly correct hypothesis – because some of the data is always wrong. Particularly true if the hypothesis is specific and falsifiable. If your hypothesis is vague and imprecise – not even wrong – it isn’t nearly as susceptible to noise. As far as I can tell, a lot of paleontologists [along with everyone in the social sciences] think of unfalsifiability as a strength.

Done Quickly: https://westhunt.wordpress.com/2011/12/03/done-quickly/
I’ve never seen anyone talk about it much, but when you think about mass extinctions, you also have to think about rates of change

You can think of a species occupying a point in a many-dimensional space, where each dimension represents some parameter that influences survival and/or reproduction: temperature, insolation, nutrient concentrations, oxygen partial pressure, toxin levels, yada yada yada. That point lies within a zone of habitability – the set of environmental conditions that the species can survive. Mass extinction occurs when environmental changes are so large that many species are outside their comfort zone.

The key point is that, with gradual change, species adapt. In just a few generations, you can see significant heritable responses to a new environment. Frogs have evolved much greater tolerance of acidification in 40 years (about 15 generations). Some plants in California have evolved much greater tolerance of copper in just 70 years.

As this happens, the boundaries of the comfort zone move. Extinctions occur when the rate of environmental change is greater than the rate of adaptation, or when the amount of environmental change exceeds the limit of feasible adaptation. There are such limits: bar-headed geese fly over Mt. Everest, where the oxygen partial pressure is about a third of that at sea level, but I’m pretty sure that no bird could survive on the Moon.

...

Paleontologists prefer gradualist explanations for mass extinctions, but they must be wrong, for the most part.
disease  science  critique  rant  history  thinking  regularizer  len:long  west-hunter  thick-thin  occam  social-science  robust  parasites-microbiome  early-modern  parsimony  the-trenches  bounded-cognition  noise-structure  signal-noise  scitariat  age-of-discovery  sex  sexuality  info-dynamics  alt-inst  map-territory  no-go  contradiction  dynamical  math.DS  space  physics  mechanics  archaeology  multi  speed  flux-stasis  smoothness  evolution  environment  time  shift  death  nihil  inference  apollonian-dionysian  error  explanation  spatial  discrete  visual-understanding  consilience  traces  evidence  elegance 
september 2016 by nhaliday
Latency Numbers Every Programmer Should Know
systems  networking  performance  programming  os  engineering  tech  paste  cheatsheet  objektbuch  street-fighting  🖥  techtariat  big-picture  caching  magnitude  nitty-gritty  scaling-tech  let-me-see  quantitative-qualitative  chart  reference  nibble  career  interview-prep  time  scale  measure  comparison  metal-to-virtual  multi  sequential  visualization  trends  multiplicative  speed  web  dynamic  q-n-a  stackex  estimate  accuracy  org:edu  org:junk  visual-understanding  benchmarks  latency-throughput  client-server  thinking  howto  explanation  crosstab  within-group  usa  geography  maps  urban-rural  correlation 
may 2016 by nhaliday

