nhaliday + measurement   178

Skin turgor: MedlinePlus Medical Encyclopedia
To check for skin turgor, the health care provider grasps the skin between two fingers so that it is tented up. The lower arm or abdomen is commonly checked. The skin is held for a few seconds and then released.

Skin with normal turgor snaps rapidly back to its normal position. Skin with poor turgor takes time to return to its normal position.
tip-of-tongue  prepping  fluid  embodied  trivia  survival  howto  medicine  safety  measurement 
11 weeks ago by nhaliday
Advantages and disadvantages of building a single page web application - Software Engineering Stack Exchange
Advantages
- All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
- More responsive application - since the data loaded after the initial page load is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

Disadvantages
- Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and on the client side in JavaScript.
- Business logic in JavaScript - I can't give any concrete examples of why this would be bad, but it just doesn't feel right to me to have business logic in JavaScript that anyone can read.
- JavaScript memory leaks - since the page never reloads, JavaScript memory leaks can happen, and I would not even know where to begin to debug them.

--

Disadvantages I often see with Single Page Web Applications:
- Inability to link to a specific part of the site; there's often only one entry point.
- Dysfunctional back and forward buttons.
- The use of tabs is limited or non-existent.
(especially mobile:)
- Take very long to load.
- Don't function at all.
- Can't reload a page; a sudden loss of network takes you back to the start of the site.

This answer is outdated. Most single-page application frameworks have a way to deal with the issues above. – Luis May 27 '14 at 1:41
@Luis while the technology is there, too often it isn't used. – Pieter B Jun 12 '14 at 6:53

https://softwareengineering.stackexchange.com/questions/201838/building-a-web-application-that-is-almost-completely-rendered-by-javascript-whi

https://softwareengineering.stackexchange.com/questions/143194/what-advantages-are-conferred-by-using-server-side-page-rendering
Server-side HTML rendering:
- Fastest browser rendering
- Page caching is possible as a quick-and-dirty performance boost
- For "standard" apps, many UI features are pre-built
- Sometimes considered more stable because components are usually subject to compile-time validation
- Leans on backend expertise
- Sometimes faster to develop*
*When UI requirements fit the framework well.

Client-side HTML rendering:
- Lower bandwidth usage
- Slower initial page render. May not even be noticeable in modern desktop browsers. If you need to support IE6-7, or many mobile browsers (mobile webkit is not bad) you may encounter bottlenecks.
- Building API-first means the client can just as easily be a proprietary app, thin client, another web service, etc.
- Leans on JS expertise
- Sometimes faster to develop**
**When the UI is largely custom, with more interesting interactions. Also, I find coding in the browser with interpreted code noticeably speedier than waiting for compiles and server restarts.

https://softwareengineering.stackexchange.com/questions/237537/progressive-enhancement-vs-single-page-apps

https://stackoverflow.com/questions/21862054/single-page-application-advantages-and-disadvantages
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites.
2. With SPA we don't need extra queries to the server to download pages.
3. Maybe there are other advantages? I haven't heard of any others.

=== DISADVANTAGES ===
1. Client must enable javascript.
2. Only one entry point to the site.
3. Security.

https://softwareengineering.stackexchange.com/questions/287819/should-you-write-your-back-end-as-an-api
focused on .NET

https://softwareengineering.stackexchange.com/questions/337467/is-it-normal-design-to-completely-decouple-backend-and-frontend-web-applications
A SPA comes with a few issues associated with it. Here are just a few that pop into my mind now:
- it's mostly JavaScript. One error in a section of your application might prevent other sections of the application from working because of that JavaScript error.
- CORS.
- SEO.
- separate front-end application means separate projects, deployment pipelines, extra tooling, etc;
- security is harder to do when all the code is on the client;

- completely interact in the front-end with the user and only load data as needed from the server. So better responsiveness and user experience;
- depending on the application, some processing done on the client means you spare the server of those computations.
- have a better flexibility in evolving the back-end and front-end (you can do it separately);
- if your back-end is essentially an API, you can have other clients in front of it like native Android/iPhone applications;
- the separation might make it easier for front-end developers to do CSS/HTML without needing to have a server application running on their machine.

Create your own dysfunctional single-page app: https://news.ycombinator.com/item?id=18341993
I think there are three broadly assumed user benefits of single-page apps:
1. Improved user experience.
2. Improved perceived performance.
3. It’s still the web.

5 mistakes to create a dysfunctional single-page app
Mistake 1: Under-estimate long-term development and maintenance costs
Mistake 2: Use the single-page app approach unilaterally
Mistake 3: Under-invest in front end capability
Mistake 4: Use naïve dev practices
Mistake 5: Surf the waves of framework hype

The disadvantages of single page applications: https://news.ycombinator.com/item?id=9879685
You probably don't need a single-page app: https://news.ycombinator.com/item?id=19184496
https://news.ycombinator.com/item?id=20384738
MPA advantages:
- Stateless requests
- The browser knows how to deal with a traditional architecture
- Fewer, more mature tools
- SEO for free

When to go for the single page app:
- Core functionality is real-time (e.g Slack)
- Rich UI interactions are core to the product (e.g Trello)
- Lots of state shared between screens (e.g. Spotify)

Hybrid solutions
...
Github uses this hybrid approach.
...

Ask HN: Is it ok to use traditional server-side rendering these days?: https://news.ycombinator.com/item?id=13212465

https://www.reddit.com/r/webdev/comments/cp9vb8/are_people_still_doing_ssr/
https://www.reddit.com/r/webdev/comments/93n60h/best_javascript_modern_approach_to_multi_page/
https://www.reddit.com/r/webdev/comments/aax4k5/do_you_develop_solely_using_spa_these_days/
The SEO issues with SPAs are a persistent concern you hear about a lot, yet nobody ever quantifies the issues. That is because search engines keep the operation of their crawler bots and indexing secret. I have read into it some, and it seems the problem used to exist, somewhat, but is more or less gone now. Bots can deal with SPAs fine.
--
I try to avoid building a SPA nowadays if possible. Not because of SEO (there are now server-side solutions to help with that), but because a SPA increases the complexity of the code base by an order of magnitude. State management with Redux... Async this and that... URL routing... And don't forget to manage page history.

How about just render pages with templates and be done?

If I need a highly dynamic UI for a particular feature, then I'd probably build an embeddable JS widget for it.
q-n-a  stackex  programming  engineering  tradeoffs  system-design  design  web  frontend  javascript  cost-benefit  analysis  security  state  performance  traces  measurement  intricacy  code-organizing  applicability-prereqs  multi  comparison  smoothness  shift  critique  techtariat  chart  ui  coupling-cohesion  interface-compatibility  hn  commentary  best-practices  discussion  trends  client-server  api  composition-decomposition  cycles  frameworks  ecosystem  degrees-of-freedom  dotnet  working-stiff  reddit  social  project-management 
october 2019 by nhaliday
CppCon 2015: Chandler Carruth "Tuning C++: Benchmarks, and CPUs, and Compilers! Oh My!" - YouTube
- very basics of benchmarking
- Q: why does preemptively calling reserve() speed up push_back by 10x?
- favorite tool is Linux perf
- callgraph profiling
- important option: -fno-omit-frame-pointer (keep frame pointers so perf can walk the call stack)
- perf has nice interface ('a' = "annotate") for reading assembly (good display of branches/jumps)
- A: optimized to no-op
- how to turn off optimizer
- profilers aren't infallible. a lot of the time samples are misattributed to neighboring ops
- fast mod example
- branch prediction hints (#define UNLIKELY(x), __builtin_expect, etc.; see the sketch below)
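
A minimal sketch of the last two points, not code from the talk: the reserve-vs-no-reserve timing question and an UNLIKELY macro built on the GCC/Clang __builtin_expect builtin. Caveat straight from the talk: without keeping the result observable, the optimizer may delete the work entirely ("optimized to no-op").

    // toy micro-benchmark: push_back with and without a preemptive reserve()
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    #define UNLIKELY(x) __builtin_expect(!!(x), 0)   // hint: this branch is cold

    static double fill_ms(std::size_t n, bool reserve_first) {
      auto t0 = std::chrono::steady_clock::now();
      std::vector<int> v;
      if (reserve_first) v.reserve(n);               // avoids repeated reallocation + copying
      for (std::size_t i = 0; i < n; ++i) v.push_back(static_cast<int>(i));
      auto t1 = std::chrono::steady_clock::now();
      if (UNLIKELY(v.size() != n)) std::fprintf(stderr, "unexpected size\n");  // cold error path
      return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
      const std::size_t n = 10000000;
      std::printf("no reserve: %.1f ms\n", fill_ms(n, false));  // printing keeps the work live
      std::printf("reserve:    %.1f ms\n", fill_ms(n, true));
    }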
video  presentation  c(pp)  pls  programming  unix  heavyweights  cracker-prog  benchmarks  engineering  best-practices  working-stiff  systems  expert-experience  google  llvm  common-case  stories  libraries  measurement  linux  performance  traces  graphs  static-dynamic  ui  assembly  compilers  methodology  techtariat 
october 2019 by nhaliday
"Performance Matters" by Emery Berger - YouTube
Stabilizer is a tool that enables statistically sound performance evaluation, making it possible to understand the impact of optimizations and conclude things like the fact that the -O2 and -O3 optimization levels are indistinguishable from noise (sadly true).

Since compiler optimizations have run out of steam, we need better profiling support, especially for modern concurrent, multi-threaded applications. Coz is a new "causal profiler" that lets programmers optimize for throughput or latency, and which pinpoints and accurately predicts the impact of optimizations.

- randomize extraneous factors like code layout and stack size to avoid spurious speedups
- simulate speedup of component of concurrent system (to assess effect of optimization before attempting) by slowing down the complement (all but that component); see the sketch after this list
- latency vs. throughput, Little's law
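
A hedged sketch of how a program gets instrumented for Coz (assuming the coz.h header and COZ_PROGRESS macro from the plasma-umass/coz repository; typically built with -g and run under something like `coz run --- ./a.out`). Coz virtually "speeds up" one region by slowing everything else down and watches how the rate of progress points responds.

    #include <coz.h>    // provides COZ_PROGRESS (assumption: header from the coz repo is on the include path)
    #include <cmath>
    #include <cstdio>

    static double work_item(double x) {
      double s = 0;                          // stand-in for one unit of real application work
      for (int i = 1; i <= 1000; ++i) s += std::sin(x / i);
      return s;
    }

    int main() {
      double acc = 0;
      for (int i = 0; i < 200000; ++i) {
        acc += work_item(i);
        COZ_PROGRESS;                        // throughput progress point; Coz reports predicted speedup in these per second
      }
      std::printf("%f\n", acc);
    }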
video  presentation  programming  engineering  nitty-gritty  performance  devtools  compilers  latency-throughput  concurrency  legacy  causation  wire-guided  let-me-see  manifolds  pro-rata  tricks  endogenous-exogenous  control  random  signal-noise  comparison  marginal  llvm  systems  hashing  computer-memory  build-packaging  composition-decomposition  coupling-cohesion  local-global  dbs  direct-indirect  symmetry  research  models  metal-to-virtual  linux  measurement  simulation  magnitude  realness  hypothesis-testing  techtariat 
october 2019 by nhaliday
Measures of cultural distance - Marginal REVOLUTION
A new paper with many authors — most prominently Joseph Henrich — tries to measure the cultural gaps between different countries.  I am reproducing a few of their results (see pp.36-37 for more), noting that higher numbers represent higher gaps:

...

Overall the numbers show much greater cultural distance of other nations from China than from the United States, a significant and under-discussed problem for China. For instance, the United States is about as culturally close to Hong Kong as China is.

[ed.: Japan is closer to the US than China. Interesting. I'd like to see some data based on something other than self-reported values though.]

the study:
Beyond WEIRD Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259613
We present a new tool that provides a means to measure the psychological and cultural distance between two societies and create a distance scale with any population as the point of comparison. Since psychological data is dominated by samples drawn from the United States or other WEIRD nations, this tool provides a “WEIRD scale” to assist researchers in systematically extending the existing database of psychological phenomena to more diverse and globally representative samples. As the extreme WEIRDness of the literature begins to dissolve, the tool will become more useful for designing, planning, and justifying a wide range of comparative psychological projects. We have made our code available and developed an online application for creating other scales (including the “Sino scale” also presented in this paper). We discuss regional diversity within nations showing the relative homogeneity of the United States. Finally, we use these scales to predict various psychological outcomes.
econotariat  marginal-rev  henrich  commentary  study  summary  list  data  measure  metrics  similarity  culture  cultural-dynamics  sociology  things  world  usa  anglo  anglosphere  china  asia  japan  sinosphere  russia  developing-world  canada  latin-america  MENA  europe  eastern-europe  germanic  comparison  great-powers  thucydides  foreign-policy  the-great-west-whale  generalization  anthropology  within-group  homo-hetero  moments  exploratory  phalanges  the-bones  🎩  🌞  broad-econ  cocktail  n-factor  measurement  expectancy  distribution  self-report  values  expression-survival  uniqueness 
september 2019 by nhaliday
Friends with malefit. The effects of keeping dogs and cats, sustaining animal-related injuries and Toxoplasma infection on health and quality of life | bioRxiv
The main problem of many studies was the autoselection – participants were informed about the aims of the study during recruitment and later likely described their health and wellbeing according to their personal beliefs and wishes, not according to their real status. To avoid this source of bias, we did not mention pets during participant recruitment and hid the pet-related questions among many hundreds of questions in an 80-minute Internet questionnaire. Results of our study performed on a sample of 10,858 subjects showed that liking cats and dogs has a weak positive association with quality of life. However, keeping pets, especially cats, and even more being injured by pets, were strongly negatively associated with many facets of quality of life. Our data also confirmed that infection by the cat parasite Toxoplasma had a very strong negative effect on quality of life, especially on mental health. However, the infection was not responsible for the observed negative effects of keeping pets, as these effects were much stronger in 1,527 Toxoplasma-free subjects than in the whole population. Any cross-sectional study cannot discriminate between a cause and an effect. However, because of the large and still growing popularity of keeping pets, the existence and nature of the reverse pet phenomenon deserve the utmost attention.
study  bio  preprint  wut  psychology  social-psych  nature  regularizer  cost-benefit  emotion  sentiment  poll  methodology  sampling-bias  confounding  happy-sad  intervention  sociology  disease  parasites-microbiome  correlation  contrarianism  branches  increase-decrease  measurement  internet  weird  🐸 
august 2019 by nhaliday
The Reason Why | West Hunter
There are odd things about the orbits of trans-Neptunian objects that suggest (to some) that there might be an undiscovered super-Earth-sized planet a few hundred AU from the Sun.

We haven’t seen it, but then it would be very hard to see. The underlying reason is simple enough, but I have never seen anyone mention it: the signal from such objects drops as the fourth power of distance from the Sun.   Not the second power, as is the case with luminous objects like stars, or faraway objects that are close to a star.  We can image close-in planets of other stars that are light-years distant, but it’s very difficult to see a fair-sized planet a few hundred AU out.
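
Spelling out the standard reflected-light scaling (my arithmetic, not the post's): sunlight arriving at the planet falls off as $1/d^2$, and the reflected light falls off as roughly $1/d^2$ again on the way back to us, so

F_{\rm obs} \;\propto\; \frac{L_\odot}{4\pi d^2}\cdot\frac{A\,\pi R_p^2}{4\pi d^2} \;\propto\; \frac{1}{d^4},

whereas a self-luminous source only pays the $1/d^2$ factor once.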
--
interesting little fun fact
west-hunter  scitariat  nibble  tidbits  scale  magnitude  visuo  electromag  spatial  space  measurement  paradox  physics 
july 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After that iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility 
july 2019 by nhaliday
Why is Google Translate so bad for Latin? A longish answer. : latin
hmm:
> All it does is correlate sequences of up to five consecutive words in texts that have been manually translated into two or more languages.
That sort of system ought to be perfect for a dead language, though. Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.

We're not exactly inundated with brand new Latin to translate.
--
> Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.
What makes you think that the Google folks haven't done so and used that to create the language models they use?
> That sort of system ought to be perfect for a dead language, though.
Perhaps. But it will be bad at translating novel English sentences to Latin.
foreign-lang  reddit  social  discussion  language  the-classics  literature  dataset  measurement  roots  traces  syntax  anglo  nlp  stackex  links  q-n-a  linguistics  lexical  deep-learning  sequential  hmm  project  arrows  generalization  state-of-art  apollonian-dionysian  machine-learning  google 
june 2019 by nhaliday
c++ - Debugging template instantiations - Stack Overflow
Yes, there is a template metaprogramming debugger: Templight.

https://github.com/mikael-s-persson/templight
--
Seems to be dead now, though :( [ed.: Partially true. They've merged pull requests recently tho.]
--
Metashell is still in active development though: github.com/metashell/metashell
q-n-a  stackex  nitty-gritty  pls  types  c(pp)  debugging  devtools  tools  programming  howto  advice  checklists  multi  repo  wire-guided  static-dynamic  compilers  performance  measurement  time  latency-throughput 
may 2019 by nhaliday
unix - How can I profile C++ code running on Linux? - Stack Overflow
If your goal is to use a profiler, use one of the suggested ones.

However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.

Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.
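
In case the arithmetic helps (a standard independence assumption, not part of the answer itself): if a code path accounts for a fraction $f$ of wall-clock time, the chance it appears in at least one of $n$ random pauses is

P = 1 - (1 - f)^n,

so even a 20% culprit ($f = 0.2$) shows up in ten pauses with probability $1 - 0.8^{10} \approx 0.89$, and anything seen on two or more samples is very unlikely to be noise.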

You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.

Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because they don't summarize at the instruction level, and they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.

http://poormansprofiler.org/

gprof, Valgrind and gperftools - an evaluation of some tools for application level CPU profiling on Linux: http://gernotklingler.com/blog/gprof-valgrind-gperftools-evaluation-tools-application-level-cpu-profiling-linux/
gprof is the dinosaur among the evaluated profilers - its roots go back into the 1980’s. It seems it was widely used and a good solution during the past decades. But its limited support for multi-threaded applications, the inability to profile shared libraries and the need for recompilation with compatible compilers and special flags that produce a considerable runtime overhead, make it unsuitable for use in today’s real-world projects.

Valgrind delivers the most accurate results and is well suited for multi-threaded applications. It’s very easy to use and there is KCachegrind for visualization/analysis of the profiling data, but the slow execution of the application under test disqualifies it for larger, longer running applications.

The gperftools CPU profiler has very little runtime overhead, provides some nice features like selectively profiling certain areas of interest, and has no problem with multi-threaded applications. KCachegrind can be used to analyze the profiling data. Like all sampling-based profilers, it suffers from statistical inaccuracy and therefore the results are not as accurate as with Valgrind, but practically that’s usually not a big problem (you can always increase the sampling frequency if you need more accurate results). I’m using this profiler on a large code-base and from my personal experience I can definitely recommend using it.
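
A minimal sketch of wiring in the gperftools CPU profiler discussed above (my example, not the article's; assumes gperftools is installed and the binary is linked with -lprofiler; alternatively just set CPUPROFILE=out.prof at run time and analyze with pprof or KCachegrind).

    #include <gperftools/profiler.h>   // ProfilerStart / ProfilerStop
    #include <cmath>
    #include <cstdio>

    int main() {
      ProfilerStart("hotloop.prof");   // begin sampling; samples are written to hotloop.prof
      double s = 0;
      for (int i = 1; i < 50000000; ++i) s += std::sqrt(static_cast<double>(i));
      ProfilerStop();                  // stop sampling and flush the profile
      std::printf("%f\n", s);          // print the result so the loop isn't optimized away
      return 0;
    }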
q-n-a  stackex  programming  engineering  performance  devtools  tools  advice  checklists  hacker  nitty-gritty  tricks  lol  multi  unix  linux  techtariat  analysis  comparison  recommendations  software  measurement  oly-programming  concurrency  debugging  metabuch 
may 2019 by nhaliday
Does left-handedness occur more in certain ethnic groups than others?
Yes. There are some aboriginal tribes in Australia in which about 70% of the population is left-handed. It’s also more than 50% for some South American tribes.

The reason is the same in both cases: a recent past of extreme aggression with other tribes. Left-handedness is caused by recessive genes, but being left-handed is an advantage in hand-to-hand combat against a right-handed opponent (who has usually trained extensively against other right-handers, since that disposition is genetically dominant and right-handers are the majority in most human populations, and so lacks experience against a left-hander). Should a particular tribe go through too many periods of war, its proportion of left-handers will naturally rise. As the enemy tribe’s proportion of left-handed people rises as well, there is a point at which the natural advantage in fighting dissipates, and it can only climb higher if they continuously find new groups to fight, who are also majority right-handed.

...

So the natural question is: given their advantages in 1-on-1 combat, why doesn’t the percentage grow all the way up to 50% or slightly higher? Because there are COSTS associated with being left-handed, as apparently our neural network is pre-wired towards right-handedness, which shows up as a reduced life expectancy for lefties. So a mathematical model was proposed to explain their distribution among different societies.
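
A minimal frequency-dependence sketch in the spirit of the fighting hypothesis (my notation and functional form, not the paper's): let $p$ be the frequency of left-handers, let the fighting payoff $b$ scale with the chance of facing an opponent of the other handedness, and let $c > 0$ be the fixed physiological cost of left-handedness:

w_L(p) = 1 + b\,(1 - p) - c, \qquad w_R(p) = 1 + b\,p.

Setting $w_L = w_R$ gives the stable polymorphism $p^{*} = (b - c)/(2b)$, which stays below 50% whenever the cost is positive and rises as the payoff to fighting $b$ grows, matching the wartime pattern described above.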

THE FIGHTING HYPOTHESIS: STABILITY OF POLYMORPHISM IN HUMAN HANDEDNESS

http://gepv.univ-lille1.fr/downl...

Further, it appears the average rate of left-handedness in humans (~10%) hasn’t changed in thousands of years (judging by paintings of hands in caves).

Frequency-dependent maintenance of left handedness in humans.

Handedness frequency over more than 10,000 years

[ed.: Compare with Julius Evola's "left-hand path".]
q-n-a  qra  trivia  cocktail  farmers-and-foragers  history  antiquity  race  demographics  bio  EEA  evolution  context  peace-violence  war  ecology  EGT  unintended-consequences  game-theory  equilibrium  anthropology  cultural-dynamics  sapiens  data  database  trends  cost-benefit  strategy  time-series  art  archaeology  measurement  oscillation  pro-rata  iteration-recursion  gender  male-variability  cliometrics  roots  explanation  explanans  correlation  causation  branches 
july 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
Sex, Drugs, and Bitcoin: How Much Illegal Activity Is Financed Through Cryptocurrencies? by Sean Foley, Jonathan R. Karlsen, Tālis J. Putniņš :: SSRN
Cryptocurrencies are among the largest unregulated markets in the world. We find that approximately one-quarter of bitcoin users and one-half of bitcoin transactions are associated with illegal activity. Around $72 billion of illegal activity per year involves bitcoin, which is close to the scale of the US and European markets for illegal drugs. The illegal share of bitcoin activity declines with mainstream interest in bitcoin and with the emergence of more opaque cryptocurrencies. The techniques developed in this paper have applications in cryptocurrency surveillance. Our findings suggest that cryptocurrencies are transforming the way black markets operate by enabling “black e-commerce.”
study  economics  law  leviathan  bitcoin  cryptocurrency  crypto  impetus  scale  markets  civil-liberty  randy-ayndy  crime  criminology  measurement  estimate  pro-rata  money  monetary-fiscal  crypto-anarchy  drugs  internet  tradecraft  opsec  security  intel 
february 2018 by nhaliday
Frontiers | Can We Validate the Results of Twin Studies? A Census-Based Study on the Heritability of Educational Achievement | Genetics
As for most phenotypes, the amount of variance in educational achievement explained by SNPs is lower than the amount of additive genetic variance estimated in twin studies. Twin-based estimates may however be biased because of self-selection and differences in cognitive ability between twins and the rest of the population. Here we compare twin registry based estimates with a census-based heritability estimate, sampling from the same Dutch birth cohort population and using the same standardized measure for educational achievement. Including important covariates (i.e., sex, migration status, school denomination, SES, and group size), we analyzed 893,127 scores from primary school children from the years 2008–2014. For genetic inference, we used pedigree information to construct an additive genetic relationship matrix. Corrected for the covariates, this resulted in an estimate of 85%, which is even higher than based on twin studies using the same cohort and same measure. We therefore conclude that the genetic variance not tagged by SNPs is not an artifact of the twin method itself.
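
For reference, the pedigree-based estimate described here is typically obtained from a linear mixed ("animal") model of roughly this form (a sketch of the standard approach, not the paper's exact specification):

y = X\beta + g + e, \qquad g \sim N(0,\, A\,\sigma^2_g), \qquad e \sim N(0,\, I\,\sigma^2_e), \qquad h^2 = \frac{\sigma^2_g}{\sigma^2_g + \sigma^2_e},

where $A$ is the additive genetic relationship matrix built from the pedigree and $X$ holds the covariates (sex, migration status, school denomination, SES, group size).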
study  biodet  behavioral-gen  iq  psychometrics  psychology  cog-psych  twin-study  methodology  variance-components  state-of-art  🌞  developmental  age-generation  missing-heritability  biases  measurement  sampling-bias  sib-study 
december 2017 by nhaliday
galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange
Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
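
For reference, the usual optically thin estimate behind this is

M_{\rm dust} = \frac{S_\nu\, d^2}{\kappa_\nu\, B_\nu(T_{\rm dust})}, \qquad B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/kT} - 1},

with flux density $S_\nu$, distance $d$, and dust opacity $\kappa_\nu$; the assumed grain composition and size distribution enter through $\kappa_\nu$, which is exactly why the answer calls the result assumption-sensitive.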
nibble  q-n-a  overflow  space  measurement  measure  estimate  physics  electromag  visuo  methodology 
december 2017 by nhaliday
How do you measure the mass of a star? (Beginner) - Curious About Astronomy? Ask an Astronomer
Measuring the mass of stars in binary systems is easy. Binary systems are sets of two or more stars in orbit about each other. By measuring the size of the orbit, the stars' orbital speeds, and their orbital periods, we can determine exactly what the masses of the stars are. We can take that knowledge and then apply it to similar stars not in multiple systems.

We also can easily measure the luminosity and temperature of any star. A plot of luminosity versus temperature for a set of stars is called a Hertzsprung-Russell (H-R) diagram, and it turns out that most stars lie along a thin band in this diagram known as the Main Sequence. Stars arrange themselves by mass on the Main Sequence, with massive stars being hotter and brighter than their small-mass brethren. If a star falls on the Main Sequence, we therefore immediately know its mass.

In addition to these methods, we also have an excellent understanding of how stars work. Our models of stellar structure are excellent predictors of the properties and evolution of stars. As it turns out, the mass of a star determines its life history from day 1, for all times thereafter, not only when the star is on the Main Sequence. So actually, the position of a star on the H-R diagram is a good indicator of its mass, regardless of whether it's on the Main Sequence or not.
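
The underlying relation for the binary case is Kepler's third law for the relative orbit: with period $P$ and semi-major axis $a$,

M_1 + M_2 = \frac{4\pi^2 a^3}{G\,P^2},

and the individual masses follow from the ratio $M_1/M_2 = a_2/a_1 = v_2/v_1$ measured from the two stars' orbital sizes or speeds.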
nibble  q-n-a  org:junk  org:edu  popsci  space  physics  electromag  measurement  mechanics  gravity  cycles  oscillation  temperature  visuo  plots  correlation  metrics  explanation  measure  methodology 
december 2017 by nhaliday
Is the speed of light really constant?
So what if the speed of light isn’t the same when moving toward or away from us? Are there any observable consequences? Not to the limits of observation so far. We know, for example, that any one-way speed of light is independent of the motion of the light source to 2 parts in a billion. We know it has no effect on the color of the light emitted to a few parts in 10^20. Aspects such as polarization and interference are also indistinguishable from standard relativity. But that’s not surprising, because you don’t need to assume isotropy for relativity to work. In the 1970s, John Winnie and others showed that all the results of relativity could be modeled with anisotropic light so long as the two-way speed was a constant. The “extra” assumption that the speed of light is a uniform constant doesn’t change the physics, but it does make the mathematics much simpler. Since Einstein’s relativity is the simpler of two equivalent models, it’s the model we use. You could argue that it’s the right one citing Occam’s razor, or you could take Newton’s position that anything untestable isn’t worth arguing over.
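
A compact way to see the Winnie-style point, using Reichenbach's $\varepsilon$ parametrization (a sketch, not the papers' notation): take anisotropic one-way speeds $c_+ = c/(2\varepsilon)$ outbound and $c_- = c/(2(1-\varepsilon))$ inbound for any $\varepsilon \in (0,1)$. The measured round-trip speed over a length $L$ is then

\bar{c} = \frac{2L}{L/c_+ + L/c_-} = \frac{2L}{2\varepsilon L/c + 2(1-\varepsilon)L/c} = c,

independent of $\varepsilon$; $\varepsilon = 1/2$ recovers Einstein's isotropic convention, the "simpler of two equivalent models" the post refers to.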

SPECIAL RELATIVITY WITHOUT ONE-WAY VELOCITY ASSUMPTIONS:
https://sci-hub.bz/https://www.jstor.org/stable/186029
https://sci-hub.bz/https://www.jstor.org/stable/186671
nibble  scitariat  org:bleg  physics  relativity  electromag  speed  invariance  absolute-relative  curiosity  philosophy  direction  gedanken  axioms  definition  models  experiment  space  science  measurement  volo-avolo  synchrony  uniqueness  multi  pdf  piracy  study  article 
november 2017 by nhaliday
general relativity - What if the universe is rotating as a whole? - Physics Stack Exchange
To find out whether the universe is rotating, in principle the most straightforward test is to watch the motion of a gyroscope relative to the distant galaxies. If it rotates at an angular velocity -ω relative to them, then the universe is rotating at angular velocity ω. In practice, we do not have mechanical gyroscopes with small enough random and systematic errors to put a very low limit on ω. However, we can use the entire solar system as a kind of gyroscope. Solar-system observations put a model-independent upper limit of 10^-7 radians/year on the rotation,[Clemence 1957] which is an order of magnitude too lax to rule out the Gödel metric.
nibble  q-n-a  overflow  physics  relativity  gedanken  direction  absolute-relative  big-picture  space  experiment  measurement  volo-avolo 
november 2017 by nhaliday
The Science of Roman History: Biology, Climate, and the Future of the Past (Hardcover and eBook) | Princeton University Press
Forthcoming April 2018

How the latest cutting-edge science offers a fuller picture of life in Rome and antiquity
This groundbreaking book provides the first comprehensive look at how the latest advances in the sciences are transforming our understanding of ancient Roman history. Walter Scheidel brings together leading historians, anthropologists, and geneticists at the cutting edge of their fields, who explore novel types of evidence that enable us to reconstruct the realities of life in the Roman world.

Contributors discuss climate change and its impact on Roman history, and then cover botanical and animal remains, which cast new light on agricultural and dietary practices. They exploit the rich record of human skeletal material--both bones and teeth—which forms a bio-archive that has preserved vital information about health, nutritional status, diet, disease, working conditions, and migration. Complementing this discussion is an in-depth analysis of trends in human body height, a marker of general well-being. This book also assesses the contribution of genetics to our understanding of the past, demonstrating how ancient DNA is used to track infectious diseases, migration, and the spread of livestock and crops, while the DNA of modern populations helps us reconstruct ancient migrations, especially colonization.

Opening a path toward a genuine biohistory of Rome and the wider ancient world, The Science of Roman History offers an accessible introduction to the scientific methods being used in this exciting new area of research, as well as an up-to-date survey of recent findings and a tantalizing glimpse of what the future holds.

Walter Scheidel is the Dickason Professor in the Humanities, Professor of Classics and History, and a Kennedy-Grossman Fellow in Human Biology at Stanford University. He is the author or editor of seventeen previous books, including The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton).
books  draft  todo  broad-econ  economics  anthropology  genetics  genomics  aDNA  measurement  volo-avolo  environment  climate-change  archaeology  history  iron-age  mediterranean  the-classics  demographics  health  embodied  labor  migration  walter-scheidel  agriculture  frontier  malthus  letters  gibbon  traces 
november 2017 by nhaliday
Global Evidence on Economic Preferences
- Benjamin Enke et al

This paper studies the global variation in economic preferences. For this purpose, we present the Global Preference Survey (GPS), an experimentally validated survey dataset of time preference, risk preference, positive and negative reciprocity, altruism, and trust from 80,000 individuals in 76 countries. The data reveal substantial heterogeneity in preferences across countries, but even larger within-country heterogeneity. Across individuals, preferences vary with age, gender, and cognitive ability, yet these relationships appear partly country specific. At the country level, the data reveal correlations between preferences and bio-geographic and cultural variables such as agricultural suitability, language structure, and religion. Variation in preferences is also correlated with economic outcomes and behaviors. Within countries and subnational regions, preferences are linked to individual savings decisions, labor market choices, and prosocial behaviors. Across countries, preferences vary with aggregate outcomes ranging from per capita income, to entrepreneurial activities, to the frequency of armed conflicts.

...

This paper explores these questions by making use of the core features of the GPS: (i) coverage of 76 countries that represent approximately 90 percent of the world population; (ii) representative population samples within each country for a total of 80,000 respondents, (iii) measures designed to capture time preference, risk preference, altruism, positive reciprocity, negative reciprocity, and trust, based on an ex ante experimental validation procedure (Falk et al., 2016) as well as pre-tests in culturally heterogeneous countries, (iv) standardized elicitation and translation techniques through the pre-existing infrastructure of a global polling institute, Gallup. Upon publication, the data will be made publicly available online. The data on individual preferences are complemented by a comprehensive set of covariates provided by the Gallup World Poll 2012.

...

The GPS preference measures are based on twelve survey items, which were selected in an initial survey validation study (see Falk et al., 2016, for details). The validation procedure involved conducting multiple incentivized choice experiments for each preference, and testing the relative abilities of a wide range of different question wordings and formats to predict behavior in these choice experiments. The particular items used to construct the GPS preference measures were selected based on optimal performance out of menus of alternative items (for details see Falk et al., 2016). Experiments provide a valuable benchmark for selecting survey items, because they can approximate the ideal choice situations, specified in economic theory, in which individuals make choices in controlled decision contexts. Experimental measures are very costly, however, to implement in a globally representative sample, whereas survey measures are much less costly.⁴ Selecting survey measures that can stand in for incentivized revealed preference measures leverages the strengths of both approaches.

The Preference Survey Module: A Validated Instrument for Measuring Risk, Time, and Social Preferences: http://ftp.iza.org/dp9674.pdf

Table 1: Survey items of the GPS

Figure 1: World maps of patience, risk taking, and positive reciprocity.
Figure 2: World maps of negative reciprocity, altruism, and trust.

Figure 3: Gender coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting gender coefficients as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.

Figure 4: Cognitive ability coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting coefficients on subjective math skills as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.
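
Written out, the per-country regression behind Figures 3 and 4 (as described in the captions) is

\text{preference}_i = \beta_0 + \beta_1\,\text{gender}_i + \beta_2\,\text{age}_i + \beta_3\,\text{age}_i^2 + \beta_4\,\text{subjective math skills}_i + \varepsilon_i,

with each preference standardized to a z-score within country before estimation, and $\beta_1$ (Figure 3) or $\beta_4$ (Figure 4) plotted for each country.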

Figure 5: Age profiles by OECD membership.

Table 6: Pairwise correlations between preferences and geographic and cultural variables

Figure 10: Distribution of preferences at individual level.
Figure 11: Distribution of preferences at country level.

interesting digression:
D Discussion of Measurement Error and Within- versus Between-Country Variation
study  dataset  data  database  let-me-see  economics  growth-econ  broad-econ  microfoundations  anthropology  cultural-dynamics  culture  psychology  behavioral-econ  values  🎩  pdf  piracy  world  spearhead  general-survey  poll  group-level  within-group  variance-components  🌞  correlation  demographics  age-generation  gender  iq  cooperate-defect  time-preference  temperance  labor  wealth  wealth-of-nations  entrepreneurialism  outcome-risk  altruism  trust  patience  developing-world  maps  visualization  n-factor  things  phalanges  personality  regression  gender-diff  pop-diff  geography  usa  canada  anglo  europe  the-great-west-whale  nordic  anglosphere  MENA  africa  china  asia  sinosphere  latin-america  self-report  hive-mind  GT-101  realness  long-short-run  endo-exo  signal-noise  communism  japan  korea  methodology  measurement  org:ngo  white-paper  endogenous-exogenous  within-without  hari-seldon 
october 2017 by nhaliday
Frontier Culture: The Roots and Persistence of “Rugged Individualism” in the United States∗
In a classic 1893 essay, Frederick Jackson Turner argued that the American frontier promoted individualism. We revisit the Frontier Thesis and examine its relevance at the subnational level. Using Census data and GIS techniques, we track the frontier throughout the 1790–1890 period and construct a novel, county-level measure of historical frontier experience. We document the distinctive demographics of frontier locations during this period—disproportionately male, prime-age adult, foreign-born, and illiterate—as well as their higher levels of individualism, proxied by the share of infrequent names among children. Many decades after the closing of the frontier, counties with longer historical frontier experience exhibit more prevalent individualism and opposition to redistribution and regulation. We take several steps towards a causal interpretation, including an instrumental variables approach that exploits variation in the speed of westward expansion induced by prior national immigration inflows. Using linked historical Census data, we identify mechanisms giving rise to a persistent frontier culture. Greater individualism on the frontier was not driven solely by selective migration, suggesting that frontier conditions may have shaped behavior and values. We provide evidence suggesting that rugged individualism may be rooted in its adaptive advantage on the frontier and the opportunities for upward mobility through effort.

https://twitter.com/whyvert/status/921900860224897024
https://archive.is/jTzSe

The Origins of Cultural Divergence: Evidence from a Developing Country.: http://economics.handels.gu.se/digitalAssets/1643/1643769_37.-hoang-anh-ho-ncde-2017-june.pdf
Cultural norms diverge substantially across societies, often even within the same country. In this paper, we test the voluntary settlement hypothesis, proposing that individualistic people tend to self-select into migrating out of reach from collectivist states towards the periphery and that such patterns of historical migration are reflected even in the contemporary distribution of norms. For more than one thousand years during the first millennium CE, northern Vietnam was under an exogenously imposed Chinese rule. From the eleventh to the eighteenth centuries, ancient Vietnam gradually expanded its territory through various waves of southward conquest. We demonstrate that areas being annexed earlier into ancient Vietnam are nowadays more (less) prone to collectivist (individualist) culture. We argue that the southward out-migration of individualist people was the main mechanism behind this finding. The result is consistent across various measures obtained from an extensive household survey and robust to various control variables as well as to different empirical specifications, including an instrumental variable estimation. A lab-in-the-field experiment also confirms the finding.
pdf  study  economics  broad-econ  cliometrics  path-dependence  evidence-based  empirical  stylized-facts  values  culture  cultural-dynamics  anthropology  usa  frontier  allodium  the-west  correlation  individualism-collectivism  measurement  politics  ideology  expression-survival  redistribution  regulation  political-econ  government  migration  history  early-modern  pre-ww2  things  phalanges  🎩  selection  polisci  roots  multi  twitter  social  commentary  scitariat  backup  gnon  growth-econ  medieval  china  asia  developing-world  shift  natural-experiment  endo-exo  endogenous-exogenous  hari-seldon 
october 2017 by nhaliday
Genetics: CHROMOSOMAL MAPS AND MAPPING FUNCTIONS
Any particular gene has a specific location (its "locus") on a particular chromosome. For any two genes (or loci) alpha and beta, we can ask "What is the recombination frequency between them?" If the genes are on different chromosomes, the answer is 50% (independent assortment). If the two genes are on the same chromosome, the recombination frequency will be somewhere in the range from 0 to 50%. The "map unit" (1 cM) is the genetic map distance that corresponds to a recombination frequency of 1%. In large chromosomes, the cumulative map distance may be much greater than 50cM, but the maximum recombination frequency is 50%. Why? In large chromosomes, there is enough length to allow for multiple cross-overs, so we have to ask what result we expect for random multiple cross-overs.

1. How is it that random multiple cross-overs give the same result as independent assortment?

Figure 5.12 shows how the various double cross-over possibilities add up, resulting in gamete genotype percentages that are indistinguishable from independent assortment (50% parental type, 50% non-parental type). This is a very important figure. It provides the explanation for why genes that are far apart on a very large chromosome sort out in crosses just as if they were on separate chromosomes.

2. Is there a way to measure how close together two crossovers can occur involving the same two chromatids? That is, how could we measure whether there is spatial "interference"?

Figure 5.13 shows how a measurement of the gamete frequencies resulting from a "three point cross" can answer this question. If we were to get a "lower than expected" occurrence of recombinant genotypes aCb and AcB, it would suggest that there is some hindrance to the two cross-overs occurring this close together. Crosses of this type in Drosophila have shown that, in this organism, double cross-overs do not occur at distances of less than about 10 cM between the two cross-over sites. (Textbook, page 196.)

3. How does all of this lead to the "mapping function", the mathematical (graphical) relation between the observed recombination frequency (percent non-parental gametes) and the cumulative genetic distance in map units?

Figure 5.14 shows the result for the two extremes of "complete interference" and "no interference". The situation for real chromosomes in real organisms is somewhere between these extremes, such as the curve labelled "interference decreasing with distance".
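
A standard closed form for the "no interference" curve in Figure 5.14 is Haldane's mapping function (assuming crossovers occur as a Poisson process; the intermediate "interference decreasing with distance" curve is not captured by it): with map distance $d$ in Morgans, the observed recombination fraction is

r = \tfrac{1}{2}\left(1 - e^{-2d}\right), \qquad d = -\tfrac{1}{2}\ln(1 - 2r),

which rises linearly ($r \approx d$) for short distances and saturates at the 50% ceiling for long ones, while "complete interference" gives simply $r = d$ capped at $\tfrac{1}{2}$.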
org:junk  org:edu  explanation  faq  nibble  genetics  genomics  bio  ground-up  magnitude  data  flux-stasis  homo-hetero  measure  orders  metric-space  limits  measurement 
october 2017 by nhaliday
Tax Evasion and Inequality
This paper attempts to estimate the size and distribution of tax evasion in rich countries. We combine stratified random audits—the key source used to study tax evasion so far—with new micro-data leaked from two large offshore financial institutions, HSBC Switzerland (“Swiss leaks”) and Mossack Fonseca (“Panama Papers”). We match these data to population-wide wealth records in Norway, Sweden, and Denmark. We find that tax evasion rises sharply with wealth, a phenomenon that random audits fail to capture. On average about 3% of personal taxes are evaded in Scandinavia, but this figure rises to about 30% in the top 0.01% of the wealth distribution, a group that includes households with more than $40 million in net wealth. A simple model of the supply of tax evasion services can explain why evasion rises steeply with wealth. Taking tax evasion into account increases the rise in inequality seen in tax data since the 1970s markedly, highlighting the need to move beyond tax data to capture income and wealth at the top, even in countries where tax compliance is generally high. We also find that after reducing tax evasion—by using tax amnesties—tax evaders do not legally avoid taxes more. This result suggests that fighting tax evasion can be an effective way to collect more tax revenue from the ultra-wealthy.

Figure 1

America’s unreported economy: measuring the size, growth and determinants of income tax evasion in the U.S.: https://link.springer.com/article/10.1007/s10611-011-9346-x
This study empirically investigates the extent of noncompliance with the tax code and examines the determinants of federal income tax evasion in the U.S. Employing a refined version of Feige’s (Staff Papers, International Monetary Fund 33(4):768–881, 1986, 1989) General Currency Ratio (GCR) model to estimate a time series of unreported income as our measure of tax evasion, we find that 18–23% of total reportable income may not properly be reported to the IRS. This gives rise to a 2009 “tax gap” in the range of $390–$540 billion. As regards the determinants of tax noncompliance, we find that federal income tax evasion is an increasing function of the average effective federal income tax rate, the unemployment rate, the nominal interest rate, and per capita real GDP, and a decreasing function of the IRS audit rate. Despite important refinements of the traditional currency ratio approach for estimating the aggregate size and growth of unreported economies, we conclude that the sensitivity of the results to different benchmarks, imperfect data sources and alternative specifying assumptions precludes obtaining results of sufficient accuracy and reliability to serve as effective policy guides.
pdf  study  economics  micro  evidence-based  data  europe  nordic  scale  class  compensation  money  monetary-fiscal  political-econ  redistribution  taxes  madisonian  inequality  history  mostly-modern  natural-experiment  empirical  🎩  cocktail  correlation  models  supply-demand  GT-101  crooked  elite  vampire-squid  nationalism-globalism  multi  pro-rata  usa  time-series  trends  world-war  cold-war  government  todo  planning  long-term  trivia  law  crime  criminology  estimate  speculation  measurement  labor  macro  econ-metrics  wealth  stock-flow  time  density  criminal-justice  frequency  dark-arts  traces  evidence 
october 2017 by nhaliday
Does Learning to Read Improve Intelligence? A Longitudinal Multivariate Analysis in Identical Twins From Age 7 to 16
Stuart Ritchie, Bates, Plomin

SEM: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354297/figure/fig03/

The variance explained by each path in the diagrams included here can be calculated by squaring its path weight. To take one example, reading differences at age 12 in the model shown in Figure 3 explain 7% of intelligence differences at age 16 (.26²). However, since our measures are of differences, they are likely to include substantial amounts of noise: Measurement error may produce spurious differences. To remove this error variance, we can take an estimate of the reliability of the measures (generally high, since our measures are normed, standardized tests), which indicates the variance expected purely by the reliability of the measure, and subtract it from the observed variance between twins in our sample. Correcting for reliability in this way, the effect size estimates are somewhat larger; to take the above example, the reliability-corrected effect size of age 12 reading differences on age 16 intelligence differences is around 13% of the “signal” variance. It should be noted that the age 12 reading differences themselves are influenced by many previous paths from both reading and intelligence, as illustrated in Figure 3.
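The arithmetic in that example, as I read it (the signal-share figure below is only back-solved from the quoted ~13%, not taken from the paper):

```python
path_weight = 0.26
variance_explained = path_weight ** 2      # ~0.068, i.e. the ~7% quoted above

# If only a fraction `signal_share` of the observed twin-difference variance is true
# signal (difference scores are much less reliable than the underlying tests), the
# effect as a share of signal variance is correspondingly larger. 0.52 is back-solved
# from the ~13% the authors report; the paper derives it from measure reliabilities.
signal_share = 0.52
corrected = variance_explained / signal_share   # ~0.13
print(round(variance_explained, 3), round(corrected, 2))
```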

...

The present study provided compelling evidence that improvements in reading ability, themselves caused purely by the nonshared environment, may result in improvements in both verbal and nonverbal cognitive ability, and may thus be a factor increasing cognitive diversity within families (Plomin, 2011). These associations are present at least as early as age 7, and are not—to the extent we were able to test this possibility—driven by differences in reading exposure. Since reading is a potentially remediable ability, these findings have implications for reading instruction: Early remediation of reading problems might not only aid in the growth of literacy, but may also improve more general cognitive abilities that are of critical importance across the life span.

Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change: http://sci-hub.tw/10.1111/cdev.12669
Results from a state–trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development.
study  albion  scitariat  spearhead  psychology  cog-psych  psychometrics  iq  intelligence  eden  language  psych-architecture  longitudinal  twin-study  developmental  environmental-effects  studying  🌞  retrofit  signal-noise  intervention  causation  graphs  graphical-models  flexibility  britain  neuro-nitgrit  effect-size  variance-components  measurement  multi  sequential  time  composition-decomposition  biodet  behavioral-gen  direct-indirect  systematic-ad-hoc  debate  hmm  pdf  piracy  flux-stasis 
september 2017 by nhaliday
Caught in the act | West Hunter
The fossil record is sparse. Let me try to explain that. We have at most a few hundred Neanderthal skeletons, most in pretty poor shape. How many Neanderthals ever lived? I think their population varied in size quite a bit – lowest during glacial maxima, probably highest in interglacials. Their degree of genetic diversity suggests an effective population size of ~1000, but that would be dominated by the low points (harmonic average). So let’s say 50,000 on average, over their whole range (Europe, central Asia, the Levant, perhaps more). Say they were around for 300,000 years, with a generation time of 30 years – 10,000 generations, for a total of five hundred million Neanderthals over all time. So one in a million Neanderthals ends up in a museum: one every 20 generations. Low time resolution!

So if anatomically modern humans rapidly wiped out Neanderthals, we probably couldn’t tell. In much the same way, you don’t expect to find the remains of many dinosaurs killed by the Cretaceous meteor impact (at most one millionth of one generation, right?), or of Columbian mammoths killed by a wave of Amerindian hunters. Sometimes invaders leave a bigger footprint: a bunch of cities burning down with no rebuilding tells you something. But even when you know that population A completely replaced population B, it can be hard to prove just how it happened. After all, population A could have all committed suicide just before B showed up. Stranger things have happened – but not often.
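The back-of-the-envelope numbers from the first paragraph, spelled out (all inputs are the post's own guesses):

```python
avg_population   = 50_000    # Neanderthals alive at once, averaged over their whole range
years_around     = 300_000
generation_time  = 30        # years
museum_skeletons = 500       # "a few hundred", taken as an order of magnitude

generations = years_around / generation_time      # 10,000 generations
ever_lived  = avg_population * generations        # 5e8: five hundred million
print(museum_skeletons / ever_lived)              # ~1e-6: one in a million preserved
print(generations / museum_skeletons)             # ~20: one museum skeleton per 20 generations
```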
west-hunter  scitariat  discussion  ideas  data  objektbuch  scale  magnitude  estimate  population  sapiens  archaics  archaeology  pro-rata  history  antiquity  methodology  volo-avolo  measurement  pop-structure  density  time  frequency  apollonian-dionysian  traces  evidence 
september 2017 by nhaliday
Gimbal lock - Wikipedia
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.

The word lock is misleading: no gimbal is restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes there is no gimbal available to accommodate rotation along one axis.

https://blender.stackexchange.com/questions/469/could-someone-please-explain-gimbal-lock
https://computergraphics.stackexchange.com/questions/4436/how-to-achieve-gimbal-lock-with-euler-angles
Now this is where most people stop thinking about the issue and move on with their life. They just conclude that Euler angles are somehow broken. This is also where a lot of misunderstandings happen so it's worth investigating the matter slightly further than what causes gimbal lock.

It is important to understand that this is only problematic if you interpolate in Euler angles! In a real physical gimbal this is a given - you have no other choice. In computer graphics you have many other choices, from normalized matrix, axis-angle, or quaternion interpolation. Gimbal lock has a much more dramatic implication for designing control systems than it has for 3d graphics. Which is why a mechanical engineer, for example, will have a very different take on gimbal locking.

You don't have to give up using Euler angles to get rid of gimbal locking, just stop interpolating values in Euler angles. Of course, this means that you can now no longer drive a rotation by doing direct manipulation of one of the channels. But as long as you key the 3 angles simultaneously you have no problems and you can internally convert your interpolation target to something that has less problems.

Using Euler angles is simply more intuitive to think in for most cases. And indeed Euler never claimed they were good for interpolation, just that they can model all possible orientations in space. So Euler angles are just fine for setting orientations, which is what they were meant to do. Incidentally, Euler angles also have the benefit of being able to model multi-turn rotations, which the other representations cannot do sanely.
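A small numerical illustration of the lock itself (my own sketch, using the common Z-Y-X yaw-pitch-roll convention rather than anything from the linked answers): once pitch is 90°, yaw and roll rotate about the same effective axis, so only their difference matters and one degree of freedom disappears.

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def euler_zyx(yaw, pitch, roll):
    return Rz(yaw) @ Ry(pitch) @ Rx(roll)

# Different yaw/roll pairs with the same (roll - yaw) give the identical rotation
# when pitch = 90 degrees: the system is "locked".
A = euler_zyx(np.radians(10), np.radians(90), np.radians(30))
B = euler_zyx(np.radians(40), np.radians(90), np.radians(60))
print(np.allclose(A, B))   # True
```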
nibble  dirty-hands  physics  mechanics  robotics  degrees-of-freedom  measurement  gotchas  volo-avolo  duplication  wiki  reference  multi  q-n-a  stackex  graphics  spatial  direction  dimensionality  sky 
september 2017 by nhaliday
GALILEO'S STUDIES OF PROJECTILE MOTION
During the Renaissance, the focus, especially in the arts, was on representing as accurately as possible the real world whether on a 2 dimensional surface or a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
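In modern notation (not Galileo's), the two ingredients he combined are uniform horizontal motion and the times-squared law of free fall; together they force the path to be a parabola, as a quick check shows:

```python
g = 9.8   # m/s^2 (modern value; Galileo only needed the proportionalities)
v = 2.0   # horizontal speed as the ball leaves the table edge, m/s

for t in (0.1, 0.2, 0.3, 0.4):
    x = v * t                 # uniform horizontal motion (inertia)
    y = 0.5 * g * t ** 2      # times-squared law of free fall
    print(round(x, 2), round(y, 3), round(y / x**2, 3))
# y / x^2 is the same at every instant (g / 2v^2), i.e. y is proportional to x^2: a parabola.
```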

https://www.scientificamerican.com/author/stillman-drake/

Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Stillman Drake

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated.1 Settle's paper replied to an earlier attempt by Alexandre Koyre to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyre's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis.2 And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo-one which has always seemed highly exaggerated and which was also rejected by Koyre with withering sarcasm.3 In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)

...

(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)

...

Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.

...

As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth.5

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607-that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation.6

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen.7 This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere.8)
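In modern notation (my gloss, not in the text), the mean-proportional rule for distances $d_1 < d_2$ fallen from rest is equivalent to the times-squared law:

$$\frac{t_1}{t_2} \;=\; \frac{d_1}{\sqrt{d_1 d_2}} \;=\; \sqrt{\frac{d_1}{d_2}} \quad\Longleftrightarrow\quad d \propto t^2 .$$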

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds.9 But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci 
august 2017 by nhaliday
Mainspring - Wikipedia
A mainspring is a spiral torsion spring of metal ribbon—commonly spring steel—used as a power source in mechanical watches, some clocks, and other clockwork mechanisms. Winding the timepiece, by turning a knob or key, stores energy in the mainspring by twisting the spiral tighter. The force of the mainspring then turns the clock's wheels as it unwinds, until the next winding is needed. The adjectives wind-up and spring-powered refer to mechanisms powered by mainsprings, which also include kitchen timers, music boxes, wind-up toys and clockwork radios.

torque basically follows Hooke's Law
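Spelled out (standard ideal torsion-spring relations, not from the article): for a mainspring wound through angle $\theta$ with torsion coefficient $\kappa$,

$$\tau = -\kappa\,\theta, \qquad E_{\text{stored}} = \tfrac{1}{2}\,\kappa\,\theta^{2}.$$

A real mainspring's torque falls off non-ideally as it unwinds, hence the "basically".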
nibble  wiki  reference  physics  mechanics  spatial  diy  jargon  trivia  concept  time  technology  dirty-hands  history  medieval  early-modern  europe  the-great-west-whale  measurement 
august 2017 by nhaliday
Demography of the Roman Empire - Wikipedia
There are few recorded population numbers for the whole of antiquity, and those that exist are often rhetorical or symbolic. Unlike the contemporaneous Han Dynasty, no general census survives for the Roman Empire. The late period of the Roman Republic provides a small exception to this general rule: serial statistics for Roman citizen numbers, taken from census returns, survive for the early Republic through the 1st century CE.[41] Only the figures for periods after the mid-3rd century BCE are reliable, however. Fourteen figures are available for the 2nd century BCE (from 258,318 to 394,736). Only four figures are available for the 1st century BCE, and these feature a large break between 70/69 BCE (910,000) and 28 BCE (4,063,000). The interpretation of the later figures—the Augustan censuses of 28 BCE, 8 BCE, and 14 CE—is therefore controversial.[42] Alternate interpretations of the Augustan censuses (such as those of E. Lo Cascio[43]) produce divergent population histories across the whole imperial period.[44]

Roman population size: the logic of the debate: https://www.princeton.edu/~pswpc/pdfs/scheidel/070706.pdf
- Walter Scheidel (cited in book by Vaclav Smil, "Why America is Not a New Rome")

Our ignorance of ancient population numbers is one of the biggest obstacles to our understanding of Roman history. After generations of prolific scholarship, we still do not know how many people inhabited Roman Italy and the Mediterranean at any given point in time. When I say ‘we do not know’ I do not simply mean that we lack numbers that are both precise and safely known to be accurate: that would surely be an unreasonably high standard to apply to any pre-modern society. What I mean is that even the appropriate order of magnitude remains a matter of intense dispute.

Historical urban community sizes: https://en.wikipedia.org/wiki/Historical_urban_community_sizes

World population estimates: https://en.wikipedia.org/wiki/World_population_estimates
As a general rule, the confidence of estimates on historical world population decreases for the more distant past. Robust population data only exists for the last two or three centuries. Until the late 18th century, few governments had ever performed an accurate census. In many early attempts, such as in Ancient Egypt and the Persian Empire, the focus was on counting merely a subset of the population for purposes of taxation or military service.[3] Published estimates for the 1st century ("AD 1") suggest an uncertainty of the order of 50% (estimates range between 150 and 330 million). Some estimates extend their timeline into deep prehistory, to "10,000 BC", i.e. the early Holocene, when world population estimates range roughly between one and ten million (with an uncertainty of up to an order of magnitude).[4][5]

Estimates for yet deeper prehistory, into the Paleolithic, are of a different nature. At this time human populations consisted entirely of non-sedentary hunter-gatherer populations, with anatomically modern humans existing alongside archaic human varieties, some of which are still ancestral to the modern human population due to interbreeding with modern humans during the Upper Paleolithic. Estimates of the size of these populations are a topic of paleoanthropology. A late human population bottleneck is postulated by some scholars at approximately 70,000 years ago, during the Toba catastrophe, when Homo sapiens population may have dropped to as low as between 1,000 and 10,000 individuals.[6][7] For the time of speciation of Homo sapiens, some 200,000 years ago, an effective population size of the order of 10,000 to 30,000 individuals has been estimated, with an actual "census population" of early Homo sapiens of roughly 100,000 to 300,000 individuals.[8]
history  iron-age  mediterranean  the-classics  demographics  fertility  data  europe  population  measurement  volo-avolo  estimate  wiki  reference  article  conquest-empire  migration  canon  scale  archaeology  multi  broad-econ  pdf  study  survey  debate  uncertainty  walter-scheidel  vaclav-smil  urban  military  economics  labor  time-series  embodied  health  density  malthus  letters  urban-rural  database  list  antiquity  medieval  early-modern  mostly-modern  time  sequential  MENA  the-great-west-whale  china  asia  sinosphere  occident  orient  japan  britain  germanic  gallic  summary  big-picture  objektbuch  confidence  sapiens  anthropology  methodology  farmers-and-foragers  genetics  genomics  chart 
august 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
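For concreteness (a textbook special case, not something Kling specifies), the usual parametric form of that production function is Cobb-Douglas:

```python
def cobb_douglas(K, L, A=1.0, alpha=0.3):
    """Y = A * K^alpha * L^(1-alpha); A and alpha here are purely illustrative."""
    return A * K**alpha * L**(1 - alpha)

print(cobb_douglas(K=100.0, L=50.0))   # the "GDP factory" output for given capital and labor
```

Kling's point is that aggregates like K and L are becoming harder to measure and less explanatory, whatever functional form you pick.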
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
The Determinants of Trust
Both individual experiences and community characteristics influence how much people trust each other. Using data drawn from US localities we find that the strongest factors that reduce trust are: i) a recent history of traumatic experiences, even though the passage of time reduces this effect fairly rapidly; ii) belonging to a group that historically felt discriminated against, such as minorities (black in particular) and, to a lesser extent, women; iii) being economically unsuccessful in terms of income and education; iv) living in a racially mixed community and/or in one with a high degree of income disparity. Religious beliefs and ethnic origins do not significantly affect trust. The latter result may be an indication that the American melting pot at least up to a point works, in terms of homogenizing attitudes of different cultures, even though racial cleavages leading to low trust are still quite high.

Understanding Trust: http://www.nber.org/papers/w13387
In this paper we resolve this puzzle by recognizing that trust has two components: a belief-based one and a preference-based one. While the sender's behavior reflects both, we show that WVS-like measures capture mostly the belief-based component, while questions on past trusting behavior are better at capturing the preference component of trust.

MEASURING TRUST: http://scholar.harvard.edu/files/laibson/files/measuring_trust.pdf
We combine two experiments and a survey to measure trust and trustworthiness— two key components of social capital. Standard attitudinal survey questions about trust predict trustworthy behavior in our experiments much better than they predict trusting behavior. Trusting behavior in the experiments is predicted by past trusting behavior outside of the experiments. When individuals are closer socially, both trust and trustworthiness rise. Trustworthiness declines when partners are of different races or nationalities. High status individuals are able to elicit more trustworthiness in others.

What is Social Capital? The Determinants of Trust and Trustworthiness: http://www.nber.org/papers/w7216
Using a sample of Harvard undergraduates, we analyze trust and social capital in two experiments. Trusting behavior and trustworthiness rise with social connection; differences in race and nationality reduce the level of trustworthiness. Certain individuals appear to be persistently more trusting, but these people do not say they are more trusting in surveys. Survey questions about trust predict trustworthiness not trust. Only children are less trustworthy. People behave in a more trustworthy manner towards higher status individuals, and therefore status increases earnings in the experiment. As such, high status persons can be said to have more social capital.

Trust and Cheating: http://www.nber.org/papers/w18509
We find that: i) both parties to a trust exchange have implicit notions of what constitutes cheating even in a context without promises or messages; ii) these notions are not unique - the vast majority of senders would feel cheated by a negative return on their trust/investment, whereas a sizable minority defines cheating according to an equal split rule; iii) these implicit notions affect the behavior of both sides to the exchange in terms of whether to trust or cheat and to what extent. Finally, we show that individual's notions of what constitutes cheating can be traced back to two classes of values instilled by parents: cooperative and competitive. The first class of values tends to soften the notion while the other tightens it.

Nationalism and Ethnic-Based Trust: Evidence from an African Border Region: https://u.osu.edu/robinson.1012/files/2015/12/Robinson_NationalismTrust-1q3q9u1.pdf
These results offer microlevel evidence that a strong and salient national identity can diminish ethnic barriers to trust in diverse societies.

One Team, One Nation: Football, Ethnic Identity, and Conflict in Africa: http://conference.nber.org/confer//2017/SI2017/DEV/Durante_Depetris-Chauvin.pdf
Do collective experiences that prime sentiments of national unity reduce interethnic tensions and conflict? We examine this question by looking at the impact of national football teams’ victories in sub-Saharan Africa. Combining individual survey data with information on over 70 official matches played between 2000 and 2015, we find that individuals interviewed in the days after a victory of their country’s national team are less likely to report a strong sense of ethnic identity and more likely to trust people of other ethnicities than those interviewed just before. The effect is sizable and robust and is not explained by generic euphoria or optimism. Crucially, national victories do not only affect attitudes but also reduce violence. Indeed, using plausibly exogenous variation from close qualifications to the Africa Cup of Nations, we find that countries that (barely) qualified experience significantly less conflict in the following six months than countries that (barely) did not. Our findings indicate that, even where ethnic tensions have deep historical roots, patriotic shocks can reduce inter-ethnic tensions and have a tangible impact on conflict.

Why Does Ethnic Diversity Undermine Public Goods Provision?: http://www.columbia.edu/~mh2245/papers1/HHPW.pdf
We identify three families of mechanisms that link diversity to public goods provision—–what we term “preferences,” “technology,” and “strategy selection” mechanisms—–and run a series of experimental games that permit us to compare the explanatory power of distinct mechanisms within each of these three families. Results from games conducted with a random sample of 300 subjects from a slum neighborhood of Kampala, Uganda, suggest that successful public goods provision in homogenous ethnic communities can be attributed to a strategy selection mechanism: in similar settings, co-ethnics play cooperative equilibria, whereas non-co-ethnics do not. In addition, we find evidence for a technology mechanism: co-ethnics are more closely linked on social networks and thus plausibly better able to support cooperation through the threat of social sanction. We find no evidence for prominent preference mechanisms that emphasize the commonality of tastes within ethnic groups or a greater degree of altruism toward co-ethnics, and only weak evidence for technology mechanisms that focus on the impact of shared ethnicity on the productivity of teams.

does it generalize to first world?

Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner's Dilemma: https://ideas.repec.org/p/iza/izadps/dp8499.html
The initial cooperation rates are similar; they increase in the groups with higher intelligence to reach almost full cooperation, while declining in the groups with lower intelligence. The difference is produced by the cumulation of small but persistent differences in the response to past cooperation of the partner. In higher intelligence subjects, cooperation after the initial stages is immediate and becomes the default mode, while defection requires more time. For lower intelligence groups this difference is absent. Cooperation of higher intelligence subjects is payoff sensitive, and thus not automatic: in a treatment with lower continuation probability there is no difference between intelligence groups.

Why societies cooperate: https://voxeu.org/article/why-societies-cooperate
Three attributes are often suggested to generate cooperative behaviour – a good heart, good norms, and intelligence. This column reports the results of a laboratory experiment in which groups of players benefited from learning to cooperate. It finds overwhelming support for the idea that intelligence is the primary condition for a socially cohesive, cooperative society. Warm feelings towards others and good norms have only a small and transitory effect.

individual payoff, etc.:

Trust, Values and False Consensus: http://www.nber.org/papers/w18460
Trust beliefs are heterogeneous across individuals and, at the same time, persistent across generations. We investigate one mechanism yielding these dual patterns: false consensus. In the context of a trust game experiment, we show that individuals extrapolate from their own type when forming trust beliefs about the same pool of potential partners - i.e., more (less) trustworthy individuals form more optimistic (pessimistic) trust beliefs - and that this tendency continues to color trust beliefs after several rounds of game-play. Moreover, we show that one's own type/trustworthiness can be traced back to the values parents transmit to their children during their upbringing. In a second closely-related experiment, we show the economic impact of mis-calibrated trust beliefs stemming from false consensus. Miscalibrated beliefs lower participants' experimental trust game earnings by about 20 percent on average.

The Right Amount of Trust: http://www.nber.org/papers/w15344
We investigate the relationship between individual trust and individual economic performance. We find that individual income is hump-shaped in a measure of intensity of trust beliefs. Our interpretation is that highly trusting individuals tend to assume too much social risk and to be cheated more often, ultimately performing less well than those with a belief close to the mean trustworthiness of the population. On the other hand, individuals with overly pessimistic beliefs avoid being cheated, but give up profitable opportunities, therefore underperforming. The cost of either too much or too little trust is comparable to the income lost by forgoing college.

...

This framework allows us to show that income-maximizing trust typically exceeds the trust level of the average person as well as to estimate the distribution of income lost to trust mistakes. We find that although a majority of individuals has well calibrated beliefs, a non-trivial proportion of the population (10%) has trust beliefs sufficiently poorly calibrated to lower income by more than 13%.

Do Trust and … [more]
study  economics  alesina  growth-econ  broad-econ  trust  cohesion  social-capital  religion  demographics  race  diversity  putnam-like  compensation  class  education  roots  phalanges  general-survey  multi  usa  GT-101  conceptual-vocab  concept  behavioral-econ  intricacy  composition-decomposition  values  descriptive  correlation  harvard  field-study  migration  poll  status  🎩  🌞  chart  anthropology  cultural-dynamics  psychology  social-psych  sociology  cooperate-defect  justice  egalitarianism-hierarchy  inequality  envy  n-factor  axelrod  pdf  microfoundations  nationalism-globalism  africa  intervention  counter-revolution  tribalism  culture  society  ethnocentrism  coordination  world  developing-world  innovation  econ-productivity  government  stylized-facts  madisonian  wealth-of-nations  identity-politics  public-goodish  s:*  legacy  things  optimization  curvature  s-factor  success  homo-hetero  higher-ed  models  empirical  contracts  human-capital  natural-experiment  endo-exo  data  scale  trade  markets  time  supply-demand  summary 
august 2017 by nhaliday
How to estimate distance using your finger | Outdoor Herbivore Blog
1. Hold your right arm out directly in front of you, elbow straight, thumb upright.
2. Align your thumb with one eye closed so that it covers (or aligns) the distant object. Point marked X in the drawing.
3. Do not move your head, arm or thumb, but switch eyes, so that your open eye is now closed and the other eye is open. Observe closely where the object now appears with the other open eye. Your thumb should appear to have moved to some other point: no longer in front of the object. This new point is marked as Y in the drawing.
4. Estimate this displacement XY, by equating it to the estimated size of something you are familiar with (height of a tree, building width, length of a car, power line poles, distance between nearby objects). In this case, the distant barn is estimated to be 100′ wide. It appears 5 barn widths could fit this displacement, or 500 feet. Now multiply that figure by 10 (the ratio of the length of your arm to the distance between your eyes), and you get the distance between you and the thicket of blueberry bushes — 5000 feet away (about 1 mile).

- Basically uses parallax (similar triangles) with each eye.
- When they say to compare the apparent shift to a known distance, won't that shift scale with the unknown distance? The example uses the width of an object at the point whose distance is being estimated, which is what makes the multiply-by-10 step work.

per here: https://www.trails.com/how_26316_estimate-distances-outdoors.html
Select a distant object whose width can be accurately determined. For example, use a large rock outcropping. Estimate the width of the rock. Use 200 feet wide as an example here.
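The similar-triangles arithmetic behind the method, as a sketch (the 10:1 arm-length to eye-spacing ratio is the article's rule of thumb):

```python
ARM_TO_EYE_RATIO = 10   # arm length is roughly 10x the distance between your eyes

def estimate_distance(apparent_shift):
    """apparent_shift: how far the thumb appears to jump, expressed in real units by
    comparing it to an object of known size near the target (e.g. barn widths)."""
    return apparent_shift * ARM_TO_EYE_RATIO

print(estimate_distance(5 * 100))   # 5 barn-widths of ~100 ft each -> ~5000 ft, about a mile
```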
outdoors  human-bean  embodied  embodied-pack  visuo  spatial  measurement  lifehack  howto  navigation  prepping  survival  objektbuch  multi  measure  estimate 
august 2017 by nhaliday