Inference of ecological and social drivers of human brain-size evolution | Nature
"The human brain is unusually large. It has tripled in size from Australopithecines to modern humans1 and has become almost six times larger than expected for a placental mammal of human size2. Brains incur high metabolic costs3 and accordingly a long-standing question is why the large human brain has evolved4. The leading hypotheses propose benefits of improved cognition for overcoming ecological5,6,7, social8,9,10 or cultural11,12,13,14 challenges. However, these hypotheses are typically assessed using correlative analyses, and establishing causes for brain-size evolution remains difficult15,16. Here we introduce a metabolic approach that enables causal assessment of social hypotheses for brain-size evolution. Our approach yields quantitative predictions for brain and body size from formalized social hypotheses given empirical estimates of the metabolic costs of the brain. Our model predicts the evolution of adult Homo sapiens-sized brains and bodies when individuals face a combination of 60% ecological, 30% cooperative and 10% between-group competitive challenges, and suggests that between-individual competition has been unimportant for driving human brain-size evolution. Moreover, our model indicates that brain expansion in Homo was driven by ecological rather than social challenges, and was perhaps strongly promoted by culture. Our metabolic approach thus enables causal assessments that refine, refute and unify hypotheses of brain-size evolution."

--- I have no idea how they could possibly validate such a model (it's not like there are comparative cases!), so the last tag applies with force.
to:NB  to_read  human_evolution  to_be_shot_after_a_fair_trial 
Large-scale kernel methods for independence testing | SpringerLink
"Representations of probability measures in reproducing kernel Hilbert spaces provide a flexible framework for fully nonparametric hypothesis tests of independence, which can capture any type of departure from independence, including nonlinear associations and multivariate interactions. However, these approaches come with an at least quadratic computational cost in the number of observations, which can be prohibitive in many applications. Arguably, it is exactly in such large-scale datasets that capturing any type of dependence is of interest, so striking a favourable trade-off between computational efficiency and test performance for kernel independence tests would have a direct impact on their applicability in practice. In this contribution, we provide an extensive study of the use of large-scale kernel approximations in the context of independence testing, contrasting block-based, Nyström and random Fourier feature approaches. Through a variety of synthetic data experiments, it is demonstrated that our large-scale methods give comparable performance with existing methods while using significantly less computation time and memory."
to:NB  computational_statistics  dependence_measures  hypothesis_testing  statistics  kernel_methods  hilbert_space  gretton.arthur 
3 days ago
Bootstrap bias corrections for ensemble methods | SpringerLink
"This paper examines the use of a residual bootstrap for bias correction in machine learning regression methods. Accounting for bias is an important obstacle in recent efforts to develop statistical inference for machine learning. We demonstrate empirically that the proposed bootstrap bias correction can lead to substantial improvements in both bias and predictive accuracy. In the context of ensembles of trees, we show that this correction can be approximated at only double the cost of training the original ensemble. Our method is shown to improve test set accuracy over random forests by up to 70% on example problems from the UCI repository."
to:NB  ensemble_methods  prediction  bootstrap  hooker.giles  statistics 
3 days ago
The stochastic topic block model for the clustering of vertices in networks with textual edges | SpringerLink
"Due to the significant increase of communications between individuals via social media (Facebook, Twitter, Linkedin) or electronic formats (email, web, e-publication) in the past two decades, network analysis has become an unavoidable discipline. Many random graph models have been proposed to extract information from networks based on person-to-person links only, without taking into account information on the contents. This paper introduces the stochastic topic block model, a probabilistic model for networks with textual edges. We address here the problem of discovering meaningful clusters of vertices that are coherent from both the network interactions and the text contents. A classification variational expectation-maximization algorithm is proposed to perform inference. Simulated datasets are considered in order to assess the proposed approach and to highlight its main features. Finally, we demonstrate the effectiveness of our methodology on two real-word datasets: a directed communication network and an undirected co-authorship network."
to:NB  text_mining  network_data_analysis  statistics  to_teach:baby-nets 
3 days ago
Experimenter’s regress argument, empiricism, and the calibration of the large hadron collider | SpringerLink
"H. Collins has challenged the empiricist understanding of experimentation by identifying what he thinks constitutes the experimenter’s regress: an instrument is deemed good because it produces good results, and vice versa. The calibration of an instrument cannot alone validate the results: the regressive circling is broken by an agreement essentially external to experimental procedures. In response, A. Franklin has argued that calibration is a key reasonable strategy physicists use to validate production of results independently of their interpretation. The physicists’ arguments about the merits of calibration are not coextensive with the interpretation of results, and thus an objective validation of results is possible. I argue, however, that the in-situ calibrating and measurement procedures and parameters at the Large Hadron Collider are closely and systematically interrelated. This requires empiricists to question their insistence on the independence of calibration from the outcomes of the experiment and rethink their position. Yet this does not leave the case of in-situ calibration open to the experimenter’s regress argument; it is predicated on too crude a view of the relationship between calibration and measurement that fails to capture crucial subtleties of the case."
to:NB  philosophy_of_science  particle_physics 
3 days ago
Boosting conditional probability estimators | SpringerLink
"In the standard agnostic multiclass model, <instance, label > pairs are sampled independently from some underlying distribution. This distribution induces a conditional probability over the labels given an instance, and our goal in this paper is to learn this conditional distribution. Since even unconditional densities are quite challenging to learn, we give our learner access to <instance, conditional distribution > pairs. Assuming a base learner oracle in this model, we might seek a boosting algorithm for constructing a strong learner. Unfortunately, without further assumptions, this is provably impossible. However, we give a new boosting algorithm that succeeds in the following sense: given a base learner guaranteed to achieve some average accuracy (i.e., risk), we efficiently construct a learner that achieves the same level of accuracy with arbitrarily high probability. We give generalization guarantees of several different kinds, including distribution-free accuracy and risk bounds. None of our estimates depend on the number of boosting rounds and some of them admit dimension-free formulations."
to:NB  boosting  density_estimation  kith_and_kin  kontorovich.aryeh  statistics 
3 days ago
Economic Science Fictions | The MIT Press
"From the libertarian economics of Ayn Rand to Aldous Huxley's consumerist dystopias, economics and science fiction have often orbited each other. In Economic Science Fictions, editor William Davies has deliberately merged the two worlds, asking how we might harness the power of the utopian imagination to revitalize economic thinking.
"Rooted in the sense that our current economic reality is no longer credible or viable, this collection treats our economy as a series of fictions and science fiction as a means of anticipating different economic futures. It asks how science fiction can motivate new approaches to economics and provides surprising new syntheses, merging social science with fiction, design with politics, scholarship with experimental forms. With an opening chapter from Ha-Joon Chang as well as theory, short stories, and reflections on design, this book from Goldsmiths Press challenges and changes the notion that economics and science fiction are worlds apart. The result is a wealth of fresh and unusual perspectives for anyone who believes the economy is too important to be left solely to economists."

--- Re "no longer credible", I am irresistible reminded of an ancient joke.
Preacher: Sir, do you believe in infant baptism?
Layman: Believe in it? Why, Reverend, I've seen it done!
Still, this looks right up my alley.
to:NB  books:noted  economics  science_fiction 
8 days ago
Defending Hierarchy from the Moon to the Indian Ocean: Symbolic Capital and Political Dominance in Early Modern China and the Cold War | International Organization | Cambridge Core
"Why do leading actors invest in costly projects that they expect will not yield appreciable military or economic benefits? We identify a causal process in which concerns about legitimacy produce attempts to secure dominance in arenas of high symbolic value by investing wealth and labor into unproductive (in direct military and economic terms) goods and performances. We provide evidence for our claims through a comparative study of the American Project Apollo and the Ming Dynasty's treasure fleets. We locate our argument within a broader constructivist and practice-theoretic understanding of hierarchy and hegemony. We build on claims that world politics is a sphere of complex social stratification by viewing constituent hierarchies in terms of social fields. Our specific theory and broader framework, we contend, provide tools for understanding the workings of power politics beyond military and economic competition."
to:NB  comparative_history  space_exploration  ming_dynasty 
8 days ago
[1804.07203] The Hardness of Conditional Independence Testing and the Generalised Covariance Measure
"It is a common saying that testing for conditional independence, i.e., testing whether X is independent of Y, given Z, is a hard statistical problem if Z is a continuous random variable. In this paper, we prove that conditional independence is indeed a particularly difficult hypothesis to test for. Statistical tests are required to have a size that is smaller than a predefined significance level, and different tests usually have power against a different class of alternatives. We prove that a valid test for conditional independence does not have power against any alternative.
"Given the non-existence of a uniformly valid conditional independence test, we argue that tests must be designed so their suitability for a particular problem setting may be judged easily. To address this need, we propose in the case where X and Y are univariate to nonlinearly regress X on Z, and Y on Z and then compute a test statistic based on the sample covariance between the residuals, which we call the generalised covariance measure (GCM). We prove that validity of this form of test relies almost entirely on the weak requirement that the regression procedures are able to estimate the conditional means X given Z, and Y given Z, at a slow rate. We extend the methodology to handle settings where X and Y may be multivariate or even high-dimensional.
"While our general procedure can be tailored to the setting at hand by combining it with any regression technique, we develop the theoretical guarantees for kernel ridge regression. A simulation study shows that the test based on GCM is competitive with state of the art conditional independence tests. Code will be available as an R package."
to:NB  independence_testing  hypothesis_testing  statistics  causal_discovery  heard_the_talk  to_read  peters.jonas  nonparametrics  have_skimmed 
9 days ago
[1706.08576] Invariant Causal Prediction for Nonlinear Models
"An important problem in many domains is to predict how a system will respond to interventions. This task is inherently linked to estimating the system's underlying causal structure. To this end, 'invariant causal prediction' (ICP) (Peters et al., 2016) has been proposed which learns a causal model exploiting the invariance of causal relations using data from different environments. When considering linear models, the implementation of ICP is relatively straight-forward. However, the nonlinear case is more challenging due to the difficulty of performing nonparametric tests for conditional independence. In this work, we present and evaluate an array of methods for nonlinear and nonparametric versions of ICP for learning the causal parents of given target variables. We find that an approach which first fits a nonlinear model with data pooled over all environments and then tests for differences between the residual distributions across environments is quite robust across a large variety of simulation settings. We call this procedure "Invariant residual distribution test". In general, we observe that the performance of all approaches is critically dependent on the true (unknown) causal structure and it becomes challenging to achieve high power if the parental set includes more than two variables. As a real-world example, we consider fertility rate modelling which is central to world population projections. We explore predicting the effect of hypothetical interventions using the accepted models from nonlinear ICP. The results reaffirm the previously observed central causal role of child mortality rates."
to:NB  causal_inference  causal_discovery  statistics  regression  prediction  peters.jonas  meinshausen.nicolai  to_read  heard_the_talk  to_teach:undergrad-ADA  re:ADAfaEPoV 
9 days ago
[1501.01332] Causal inference using invariant prediction: identification and confidence intervals
"What is the difference of a prediction that is made with a causal model and a non-causal model? Suppose we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (for example various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the example of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large-scale gene perturbation experiments."
to:NB  to_read  causal_inference  causal_discovery  statistics  prediction  regression  buhlmann.peter  meinshausen.nicolai  peters.jonas  heard_the_talk  re:ADAfaEPoV  to_teach:undergrad-ADA 
9 days ago
Social Learning Strategies: Bridge-Building between Fields: Trends in Cognitive Sciences
"While social learning is widespread, indiscriminate copying of others is rarely beneficial. Theory suggests that individuals should be selective in what, when, and whom they copy, by following ‘social learning strategies’ (SLSs). The SLS concept has stimulated extensive experimental work, integrated theory, and empirical findings, and created impetus to the social learning and cultural evolution fields. However, the SLS concept needs updating to accommodate recent findings that individuals switch between strategies flexibly, that multiple strategies are deployed simultaneously, and that there is no one-to-one correspondence between psychological heuristics deployed and resulting population-level patterns. The field would also benefit from the simultaneous study of mechanism and function. SLSs provide a useful vehicle for bridge-building between cognitive psychology, neuroscience, and evolutionary biology."
to:NB  social_learning  cultural_transmission  cultural_evolution  cognitive_science  social_influence  re:do-institutions-evolve 
9 days ago
Poldrack, R.: The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts (Hardcover) | Princeton University Press
"The ability to read minds has long been the stuff of science fiction, but revolutionary new brain-imaging methods are bringing it closer to scientific reality. The New Mind Readers provides a compelling look at the origins, development, and future of these extraordinary tools, revealing how they are increasingly being used to decode our thoughts and experiences—and how this raises sometimes troubling questions about their application in domains such as marketing, politics, and the law.
"Russell Poldrack takes readers on a journey of scientific discovery, telling the stories of the visionaries behind these breakthroughs. Along the way, he gives an insider’s perspective on what is perhaps the single most important technology in cognitive neuroscience today—functional magnetic resonance imaging, or fMRI, which is providing astonishing new insights into the contents and workings of the mind. He highlights both the amazing power and major limitations of these techniques and describes how applications outside the lab often exceed the bounds of responsible science. Poldrack also details the unique and sometimes disorienting experience of having his own brain scanned more than a hundred times as part of a landmark study of how human brain function changes over time.
"Written by one of the world’s leading pioneers in the field, The New Mind Readers cuts through the hype and misperceptions surrounding these emerging new methods, offering needed perspective on what they can and cannot do—and demonstrating how they can provide new answers to age-old questions about the nature of consciousness and what it means to be human."

--- Poldrack is great and has his head screwed on straight (you should pardon the expression), so I really look forward to this.
to:NB  books:noted  fmri  neuroscience  cognitive_science  poldrack.russell_a.  popular_science 
9 days ago
Steven Pinker’s book Enlightenment Now is a huge hit. Too bad it gets the Enlightenment wrong. - Vox
This hints at an interesting study to be written, about how intellectuals freeze their views of disciplines other than their own at an early age, and indeed keep propagating ideas they picked up, perhaps without realizing it, in college or graduate school, long after they've been corrected in their home disciplines. In this case, I suspect that a lot of what Pinker (Ph.D., 1979) says is extrapolated from conditions of the 1970s...
enlightenment  history_of_ideas  why_oh_why_cant_we_have_a_better_intelligentsia  pinker.steven 
9 days ago
Comfort history | Review: Enlightenment Now by Steven Pinker
The estimable David Wootton (who wrote a book advocating a literally Whiggish interpretation of the Scientific Revolution) takes a turn bashing his head against the wall.
book_reviews  enlightenment  pinker.steven  wootton.david  why_oh_why_cant_we_have_a_better_intelligentsia 
9 days ago
137 ancient human genomes from across the Eurasian steppes | Nature
"For thousands of years the Eurasian steppes have been a centre of human migrations and cultural change. Here we sequence the genomes of 137 ancient humans (about 1× average coverage), covering a period of 4,000 years, to understand the population history of the Eurasian steppes after the Bronze Age migrations. We find that the genetics of the Scythian groups that dominated the Eurasian steppes throughout the Iron Age were highly structured, with diverse origins comprising Late Bronze Age herders, European farmers and southern Siberian hunter-gatherers. Later, Scythians admixed with the eastern steppe nomads who formed the Xiongnu confederations, and moved westward in about the second or third century BC, forming the Hun traditions in the fourth–fifth century AD, and carrying with them plague that was basal to the Justinian plague. These nomads were further admixed with East Asian groups during several short-term khanates in the Medieval period. These historical events transformed the Eurasian steppes from being inhabited by Indo-European speakers of largely West Eurasian ancestry to the mostly Turkic-speaking groups of the present day, who are primarily of East Asian ancestry."

--- Slightly trollish summary: Genetics proves that Aryans are losers.
to:NB  central_asia  historical_genetics 
9 days ago
Industry and Intelligence - Contemporary Art Since 1820 | Columbia University Press
"The history of modern art is often told through aesthetic breakthroughs that sync well with cultural and political change. From Courbet to Picasso, from Malevich to Warhol, it is accepted that art tracks the disruptions of industrialization, fascism, revolution, and war. Yet filtering the history of modern art only through catastrophic events cannot account for the subtle developments that lead to the profound confusion at the heart of contemporary art.
"In Industry and Intelligence, the artist Liam Gillick writes a nuanced genealogy to help us appreciate contemporary art's engagement with history even when it seems apathetic or blind to current events. Taking a broad view of artistic creation from 1820 to today, Gillick follows the response of artists to incremental developments in science, politics, and technology. The great innovations and dislocations of the nineteenth and twentieth centuries have their place in this timeline, but their traces are alternately amplified and diminished as Gillick moves through artistic reactions to liberalism, mass manufacturing, psychology, nuclear physics, automobiles, and a host of other advances. He intimately ties the origins of contemporary art to the social and technological adjustments of modern life, which artists struggled to incorporate truthfully into their works."

--- Dr. Giedion, Dr. Sigfried Giedion, please call your office. (Despite my everything-old-is-new-again snark, this book does sound interesting...)
to:NB  books:noted  industrial_revolution  modernism  art_history 
10 days ago
Minds Make Societies | Yale University Press
"“There is no good reason why human societies should not be described and explained with the same precision and success as the rest of nature.” Thus argues evolutionary psychologist Pascal Boyer in this uniquely innovative book.
"Integrating recent insights from evolutionary biology, genetics, psychology, economics, and other fields, Boyer offers precise models of why humans engage in social behaviors such as forming families, tribes, and nations, or creating gender roles. In fascinating, thought-provoking passages, he explores questions such as, Why is there conflict between groups? Why do people believe low-value information such as rumors? Why are there religions? What is social justice? What explains morality? Boyer provides a new picture of cultural transmission that draws on the pragmatics of human communication, the constructive nature of memory in human brains, and human motivation for group formation and cooperation."
to:NB  books:noted  social_theory  epidemiology_of_representations  evolutionary_psychology  anthropology  boyer.pascal  evolution_of_cooperation  cultural_transmission  re:do-institutions-evolve 
10 days ago
Altered Brain Activity in Unipolar Depression Revisited: Meta-analyses of Neuroimaging Studies | Depressive Disorders | JAMA Psychiatry | JAMA Network
"Importance During the past 20 years, numerous neuroimaging experiments have investigated aberrant brain activation during cognitive and emotional processing in patients with unipolar depression (UD). The results of those investigations, however, vary considerably; moreover, previous meta-analyses also yielded inconsistent findings.
"Objective To readdress aberrant brain activation in UD as evidenced by neuroimaging experiments on cognitive and/or emotional processing.
"Data Sources Neuroimaging experiments published from January 1, 1997, to October 1, 2015, were identified by a literature search of PubMed, Web of Science, and Google Scholar using different combinations of the terms fMRI (functional magnetic resonance imaging), PET (positron emission tomography), neural, major depression, depression, major depressive disorder, unipolar depression, dysthymia, emotion, emotional, affective, cognitive, task, memory, working memory, inhibition, control, n-back, and Stroop.
"Study Selection Neuroimaging experiments (using fMRI or PET) reporting whole-brain results of group comparisons between adults with UD and healthy control individuals as coordinates in a standard anatomic reference space and using an emotional or/and cognitive challenging task were selected.
"Data Extraction and Synthesis Coordinates reported to show significant activation differences between UD and healthy controls during emotional or cognitive processing were extracted. By using the revised activation likelihood estimation algorithm, different meta-analyses were calculated.
"Main Outcomes and Measures Meta-analyses tested for brain regions consistently found to show aberrant brain activation in UD compared with controls. Analyses were calculated across all emotional processing experiments, all cognitive processing experiments, positive emotion processing, negative emotion processing, experiments using emotional face stimuli, experiments with a sex discrimination task, and memory processing. All meta-analyses were calculated across experiments independent of reporting an increase or decrease of activity in major depressive disorder. For meta-analyses with a minimum of 17 experiments available, separate analyses were performed for increases and decreases.
"Results In total, 57 studies with 99 individual neuroimaging experiments comprising in total 1058 patients were included; 34 of them tested cognitive and 65 emotional processing. Overall analyses across cognitive processing experiments (P > .29) and across emotional processing experiments (P > .47) revealed no significant results. Similarly, no convergence was found in analyses investigating positive (all P > .15), negative (all P > .76), or memory (all P > .48) processes. Analyses that restricted inclusion of confounds (eg, medication, comorbidity, age) did not change the results.
"Conclusions and Relevance Inconsistencies exist across individual experiments investigating aberrant brain activity in UD and replication problems across previous neuroimaging meta-analyses. For individual experiments, these inconsistencies may relate to use of uncorrected inference procedures, differences in experimental design and contrasts, or heterogeneous clinical populations; meta-analytically, differences may be attributable to varying inclusion and exclusion criteria or rather liberal statistical inference approaches."
fmri  neuroscience  depression  meta-analysis  re:neutral_model_of_inquiry  to_be_shot_after_a_fair_trial 
10 days ago
SocArXiv Papers | Fake News: Status Threat Does Not Explain the 2016 Presidential Vote
"The April 2018 article of Diana Mutz, "Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote," was published in the Proceedings of the National Academy of Sciences and contradicts prior sociological research on the 2016 election. Mutz's article received widespread media coverage because of the strength of its primary conclusion, declaimed in its title. The current article is a critical reanalysis of the models offered by Mutz, using the data files released along with her article. Contrary to her conclusions, this article demonstrates that material interests and status threat are deeply entangled in her cross-sectional data and, together, do not enable a definitive analysis of their relative importance. In addition, her panel-data model of candidate thermometer ratings has a specification that does not reveal the causal effects that she claims to have effectively estimated. Her panel-data model of votes, which she represents as a fixed-effect logit model, is, in fact, a generic pooled logit model. It is plagued by the same weaknesses as her thermometer ratings model, but also by more generic confounding from fixed individual-level predictors of vote choice that are not specified, such as self-identified race and level of education completed. In contrast, the sociological literature has offered more careful interpretations, and as such provides a more credible interpretation of the 2016 election."

--- This is from the co-author of the first decent modern book on observational causal inference (http://bactra.org/weblog/algae-2010-07.html#morgan-winship), so I will actually read it.
to:NB  us_politics  causal_inference 
15 days ago
[1708.03579] Self-exciting point processes with spatial covariates: modeling the dynamics of crime
"Crime has both varying patterns in space, related to features of the environment, economy, and policing, and patterns in time arising from criminal behavior, such as retaliation. Serious crimes may also be presaged by minor crimes of disorder. We demonstrate that these spatial and temporal patterns are generally confounded, requiring analyses to take both into account, and propose a spatio-temporal self-exciting point process model which incorporates spatial features, near-repeat and retaliation effects, and triggering. We develop inference methods and diagnostic tools, such as residual maps, for this model, and through extensive simulation and crime data obtained from Pittsburgh, Pennsylvania, demonstrate its properties and usefulness."
to:NB  spatio-temporal_statistics  point_processes  prediction  statistics  crime  kith_and_kin  reinhart.alex  greenhouse.joel  on_the_thesis_committee  to_teach:data_over_space_and_time 
15 days ago
[1708.02647] A Review of Self-Exciting Spatio-Temporal Point Processes and Their Applications
"Self-exciting spatio-temporal point process models predict the rate of events as a function of space, time, and the previous history of events. These models naturally capture triggering and clustering behavior, and have been widely used in fields where spatio-temporal clustering of events is observed, such as earthquake modeling, infectious disease, and crime. In the past several decades, advances have been made in estimation, inference, simulation, and diagnostic tools for self-exciting point process models. In this review, I describe the basic theory, survey related estimation and inference techniques from each field, highlight several key applications, and suggest directions for future research."
to:NB  spatio-temporal_statistics  point_processes  statistics  kith_and_kin  reinhart.alex  on_the_thesis_committee  to_teach:data_over_space_and_time 
15 days ago
SocArXiv Papers | A systematic assessment of 'Axial Age' proposals using global comparative historical evidence
"Proponents of the Axial Age contend that parallel cultural developments between 800 and 200 BCE in what is today China, Greece, India, Iran, and Israel-Palestine constitute the global historical turning point towards modernity. While the Axial Age concept is well-known and influential, deficiencies in the historical evidence and sociological analysis available have thwarted efforts to evaluate the Axial Age concept’s major global contentions. As a result, the Axial Age concept remains controversial. Seshat: Global History Databank provides new tools for examining this topic in social formations across Afro-Eurasia during the first two millennia BCE and first millennium CE, allowing scholars to empirically evaluate the many varied— and contrasting—claims put forward about this period. Our systematic investigation undercuts the notion of a specific 'age' of axiality limited to a specific geo-temporal localization. Critical traits offered as evidence of an axial transformation by proponents of the Axial Age concept are shown to have appeared across Afro-Eurasia hundreds and in some cases thousands of years prior to the proposed Axial Age. Our analysis raises important questions for future evaluations of this period and points the way towards empirically-led, historical-sociological investigations of the ideological and institutional foundations of complex societies."
to:NB  ancient_history  comparative_history  world_history 
17 days ago
Moving toward Integration — Richard H. Sander, Yana A. Kucheva, Jonathan M. Zasloff | Harvard University Press
"Reducing residential segregation is the best way to reduce racial inequality in the United States. African American employment rates, earnings, test scores, even longevity all improve sharply as residential integration increases. Yet far too many participants in our policy and political conversations have come to believe that the battle to integrate America’s cities cannot be won. Richard Sander, Yana Kucheva, and Jonathan Zasloff write that the pessimism surrounding desegregation in housing arises from an inadequate understanding of how segregation has evolved and how policy interventions have already set many metropolitan areas on the path to integration.
"Scholars have debated for decades whether America’s fair housing laws are effective. Moving toward Integration provides the most definitive account to date of how those laws were shaped and implemented and why they had a much larger impact in some parts of the country than others. It uses fresh evidence and better analytic tools to show when factors like exclusionary zoning and income differences between blacks and whites pose substantial obstacles to broad integration, and when they do not.
"Through its interdisciplinary approach and use of rich new data sources, Moving toward Integration offers the first comprehensive analysis of American housing segregation. It explains why racial segregation has been resilient even in an increasingly diverse and tolerant society, and it demonstrates how public policy can align with demographic trends to achieve broad housing integration within a generation."
to:NB  books:noted  the_american_dilemma 
17 days ago
The Rise of the Working-Class Shareholder — David Webber | Harvard University Press
"When Steven Burd, CEO of the supermarket chain Safeway, cut wages and benefits, starting a five-month strike by 59,000 unionized workers, he was confident he would win. But where traditional labor action failed, a novel approach was more successful. With the aid of the California Public Employees’ Retirement System, a $300 billion pension fund, workers led a shareholder revolt that unseated three of Burd’s boardroom allies.
"In The Rise of the Working-Class Shareholder: Labor’s Last Best Weapon, David Webber uses cases such as Safeway’s to shine a light on labor’s most potent remaining weapon: its multitrillion-dollar pension funds. Outmaneuvered at the bargaining table and under constant assault in Washington, state houses, and the courts, worker organizations are beginning to exercise muscle through markets. Shareholder activism has been used to divest from anti-labor companies, gun makers, and tobacco; diversify corporate boards; support Occupy Wall Street; force global warming onto the corporate agenda; create jobs; and challenge outlandish CEO pay. Webber argues that workers have found in labor’s capital a potent strategy against their exploiters. He explains the tactic’s surmountable difficulties even as he cautions that corporate interests are already working to deny labor’s access to this powerful and underused tool.
"The Rise of the Working-Class Shareholder is a rare good-news story for American workers, an opportunity hiding in plain sight. Combining legal rigor with inspiring narratives of labor victory, Webber shows how workers can wield their own capital to reclaim their strength."

--- OK, but I remember reading versions of this story in the 1980s. This makes me wonder if "rise" is really the right direction...
to:NB  books:noted  class_struggles_in_america  economics  labor 
17 days ago
Sporadic sampling, not climatic forcing, drives observed early hominin diversity | PNAS
"The role of climate change in the origin and diversification of early hominins is hotly debated. Most accounts of early hominin evolution link observed fluctuations in species diversity to directional shifts in climate or periods of intense climatic instability. None of these hypotheses, however, have tested whether observed diversity patterns are distorted by variation in the quality of the hominin fossil record. Here, we present a detailed examination of early hominin diversity dynamics, including both taxic and phylogenetically corrected diversity estimates. Unlike past studies, we compare these estimates to sampling metrics for rock availability (hominin-, primate-, and mammal-bearing formations) and collection effort, to assess the geological and anthropogenic controls on the sampling of the early hominin fossil record. Taxic diversity, primate-bearing formations, and collection effort show strong positive correlations, demonstrating that observed patterns of early hominin taxic diversity can be explained by temporal heterogeneity in fossil sampling rather than genuine evolutionary processes. Peak taxic diversity at 1.9 million years ago (Ma) is a sampling artifact, reflecting merely maximal rock availability and collection effort. In contrast, phylogenetic diversity estimates imply peak diversity at 2.4 Ma and show little relation to sampling metrics. We find that apparent relationships between early hominin diversity and indicators of climatic instability are, in fact, driven largely by variation in suitable rock exposure and collection effort. Our results suggest that significant improvements in the quality of the fossil record are required before the role of climate in hominin evolution can be reliably determined."
to:NB  human_evolution  paleontology  data_analysis 
17 days ago
The Matthew effect in science funding | PNAS
"A classic thesis is that scientific achievement exhibits a “Matthew effect”: Scientists who have previously been successful are more likely to succeed again, producing increasing distinction. We investigate to what extent the Matthew effect drives the allocation of research funds. To this end, we assembled a dataset containing all review scores and funding decisions of grant proposals submitted by recent PhDs in a €2 billion granting program. Analyses of review scores reveal that early funding success introduces a growing rift, with winners just above the funding threshold accumulating more than twice as much research funding (€180,000) during the following eight years as nonwinners just below it. We find no evidence that winners’ improved funding chances in subsequent competitions are due to achievements enabled by the preceding grant, which suggests that early funding itself is an asset for acquiring later funding. Surprisingly, however, the emergent funding gap is partly created by applicants, who, after failing to win one grant, apply for another grant less often."
to:NB  sociology_of_science  science_as_a_social_process  matthew_effect 
17 days ago
On the existence of thermodynamically stable rigid solids | PNAS
"Customarily, crystalline solids are defined to be rigid since they resist changes of shape determined by their boundaries. However, rigid solids cannot exist in the thermodynamic limit where boundaries become irrelevant. Particles in the solid may rearrange to adjust to shape changes eliminating stress without destroying crystalline order. Rigidity is therefore valid only in the metastable state that emerges because these particle rearrangements in response to a deformation, or strain, are associated with slow collective processes. Here, we show that a thermodynamic collective variable may be used to quantify particle rearrangements that occur as a solid is deformed at zero strain rate. Advanced Monte Carlo simulation techniques are then used to obtain the equilibrium free energy as a function of this variable. Our results lead to a unique view on rigidity: While at zero strain a rigid crystal coexists with one that responds to infinitesimal strain by rearranging particles and expelling stress, at finite strain the rigid crystal is metastable, associated with a free energy barrier that decreases with increasing strain. The rigid phase becomes thermodynamically stable when an external field, which penalizes particle rearrangements, is switched on. This produces a line of first-order phase transitions in the field–strain plane that intersects the origin. Failure of a solid once strained beyond its elastic limit is associated with kinetic decay processes of the metastable rigid crystal deformed with a finite strain rate. These processes can be understood in quantitative detail using our computed phase diagram as reference."
to:NB  statistical_mechanics  physics 
17 days ago
Race on the Brain - What Implicit Bias Gets Wrong About the Struggle for Racial Justice | Columbia University Press
"Of the many obstacles to racial justice in America, none has received more recent attention than the one that lurks in our subconscious. As social movements and policing scandals have shown how far from being “postracial” we are, the concept of implicit bias has taken center stage in the national conversation about race. Millions of Americans have taken online tests purporting to show the deep, invisible roots of their own prejudice. A recent Oxford study that claims to have found a drug that reduces implicit bias is only the starkest example of a pervasive trend. But what do we risk when we seek the simplicity of a technological diagnosis—and solution—for racism? What do we miss when we locate racism in our biology and our brains rather than in our history and our social practices?
"In Race on the Brain, Jonathan Kahn argues that implicit bias has grown into a master narrative of race relations—one with profound, if unintended, negative consequences for law, science, and society. He emphasizes its limitations, arguing that while useful as a tool to understand particular types of behavior, it is only one among several tools available to policy makers. An uncritical embrace of implicit bias, to the exclusion of power relations and structural racism, undermines wider civic responsibility for addressing the problem by turning it over to experts. Technological interventions, including many tests for implicit bias, are premised on a color-blind ideal and run the risk of erasing history, denying present reality, and obscuring accountability. Kahn recognizes the significance of implicit social cognition but cautions against seeing it as a panacea for addressing America’s longstanding racial problems. A bracing corrective to what has become a common-sense understanding of the power of prejudice, Race on the Brain challenges us all to engage more thoughtfully and more democratically in the difficult task of promoting racial justice."
to:NB  books:noted  racism  psychology  implicit_association_test  mental_testing 
18 days ago
Not Enough — Samuel Moyn | Harvard University Press
"The age of human rights has been kindest to the rich. Even as state violations of political rights garnered unprecedented attention due to human rights campaigns, a commitment to material equality disappeared. In its place, market fundamentalism has emerged as the dominant force in national and global economies. In this provocative book, Samuel Moyn analyzes how and why we chose to make human rights our highest ideals while simultaneously neglecting the demands of a broader social and economic justice.
"In a pioneering history of rights stretching back to the Bible, Not Enough charts how twentieth-century welfare states, concerned about both abject poverty and soaring wealth, resolved to fulfill their citizens’ most basic needs without forgetting to contain how much the rich could tower over the rest. In the wake of two world wars and the collapse of empires, new states tried to take welfare beyond its original European and American homelands and went so far as to challenge inequality on a global scale. But their plans were foiled as a neoliberal faith in markets triumphed instead.
"Moyn places the career of the human rights movement in relation to this disturbing shift from the egalitarian politics of yesterday to the neoliberal globalization of today. Exploring why the rise of human rights has occurred alongside enduring and exploding inequality, and why activists came to seek remedies for indigence without challenging wealth, Not Enough calls for more ambitious ideals and movements to achieve a humane and equitable world."
to:NB  books:noted  human_rights  inequality  globalization  political_philosophy 
18 days ago
From global scaling to the dynamics of individual cities | PNAS
"Scaling has been proposed as a powerful tool to analyze the properties of complex systems and in particular for cities where it describes how various properties change with population. The empirical study of scaling on a wide range of urban datasets displays apparent nonlinear behaviors whose statistical validity and meaning were recently the focus of many debates. We discuss here another aspect, which is the implication of such scaling forms on individual cities and how they can be used for predicting the behavior of a city when its population changes. We illustrate this discussion in the case of delay due to traffic congestion with a dataset of 101 US cities in the years 1982–2014. We show that the scaling form obtained by agglomerating all of the available data for different cities and for different years does display a nonlinear behavior, but which appears to be unrelated to the dynamics of individual cities when their population grows. In other words, the congestion-induced delay in a given city does not depend on its population only, but also on its previous history. This strong path dependency prohibits the existence of a simple scaling form valid for all cities and shows that we cannot always agglomerate the data for many different systems. More generally, these results also challenge the use of transversal data for understanding longitudinal series for cities."
to:NB  cities  data_analysis  re:urban_scaling_what_urban_scaling 
18 days ago
Cooperation, clustering, and assortative mixing in dynamic networks | PNAS
"Humans’ propensity to cooperate is driven by our embeddedness in social networks. A key mechanism through which networks promote cooperation is clustering. Within clusters, conditional cooperators are insulated from exploitation by noncooperators, allowing them to reap the benefits of cooperation. Dynamic networks, where ties can be shed and new ties formed, allow for the endogenous emergence of clusters of cooperators. Although past work suggests that either reputation processes or network dynamics can increase clustering and cooperation, existing work on network dynamics conflates reputations and dynamics. Here we report results from a large-scale experiment (total n = 2,675) that embedded participants in clustered or random networks that were static or dynamic, with varying levels of reputational information. Results show that initial network clustering predicts cooperation in static networks, but not in dynamic ones. Further, our experiment shows that while reputations are important for partner choice, cooperation levels are driven purely by dynamics. Supplemental conditions confirmed this lack of a reputation effect. Importantly, we find that when participants make individual choices to cooperate or defect with each partner, as opposed to a single decision that applies to all partners (as is standard in the literature on cooperation in networks), cooperation rates in static networks are as high as cooperation rates in dynamic networks. This finding highlights the importance of structured relations for sustained cooperation, and shows how giving experimental participants more realistic choices has important consequences for whether dynamic networks promote higher levels of cooperation than static networks."
to:NB  experimental_sociology  evolution_of_cooperation  social_networks 
19 days ago
Pr0nbots2: Revenge Of The Pr0nbots | News from the Lab
I'd _probably_ not get in trouble if I used this data in an assignment, but perhaps better not to take the chance...
network_data_analysis  community_discovery  networked_life  fraud  practices_relating_to_the_transmission_of_genetic_information  pr0n 
19 days ago
Forecasting the spatial transmission of influenza in the United States | PNAS
"Recurrent outbreaks of seasonal and pandemic influenza create a need for forecasts of the geographic spread of this pathogen. Although it is well established that the spatial progression of infection is largely attributable to human mobility, difficulty obtaining real-time information on human movement has limited its incorporation into existing infectious disease forecasting techniques. In this study, we develop and validate an ensemble forecast system for predicting the spatiotemporal spread of influenza that uses readily accessible human mobility data and a metapopulation model. In retrospective state-level forecasts for 35 US states, the system accurately predicts local influenza outbreak onset,—i.e., spatial spread, defined as the week that local incidence increases above a baseline threshold—up to 6 wk in advance of this event. In addition, the metapopulation prediction system forecasts influenza outbreak onset, peak timing, and peak intensity more accurately than isolated location-specific forecasts. The proposed framework could be applied to emergent respiratory viruses and, with appropriate modifications, other infectious diseases."
to:NB  epidemic_models  influenza  contagion  prediction  statistics 
19 days ago
Reproducibility of research: Issues and proposed remedies | PNAS
"Reproducibility has been one of the major tools science has used to help establish the validity and importance of scientific findings since Philosophical Transactions of the Royal Society was established in 1665 (1). Since that time the process of discovery has evolved to make use of new technologies and methods in a changing regulatory and social environment. The Sackler Colloquium “Reproducibility of Research: Issues and Proposed Remedies,” which took place on March 8–10, 2017, convened a wide range of research community stakeholders to address our understanding of transparency and reproducibility in the modern research context with two related questions: what does reproducibility mean in different research contexts, and what remedies increase reproducibility and transparency?
"We approach the topic of reproducibility with sensitivity to its complexity, spanning a wide range of issues from data collection and reporting to communication of scientific findings by scientists and nonscientists alike. The Colloquium was organized by David Allison, Richard Shiffrin, Victoria Stodden, and Stephen Fienberg. Before the Colloquium our esteemed and respected friend and colleague, Stephen Fienberg, unfortunately died and could not witness the outcome of his vision..."
science_as_a_social_process  why_oh_why_cant_we_have_a_better_academic_publishing_system 
19 days ago
Individuals, institutions, and innovation in the debates of the French Revolution | PNAS
"The French Revolution brought principles of “liberty, equality, fraternity” to bear on the day-to-day challenges of governing what was then the largest country in Europe. Its experiments provided a model for future revolutions and democracies across the globe, but this first modern revolution had no model to follow. Using reconstructed transcripts of debates held in the Revolution’s first parliament, we present a quantitative analysis of how this body managed innovation. We use information theory to track the creation, transmission, and destruction of word-use patterns across over 40,000 speeches and a thousand speakers. The parliament as a whole was biased toward the adoption of new patterns, but speakers’ individual qualities could break these overall trends. Speakers on the left innovated at higher rates, while speakers on the right acted to preserve prior patterns. Key players such as Robespierre (on the left) and Abbé Maury (on the right) played information-processing roles emblematic of their politics. Newly created organizational functions—such as the Assembly president and committee chairs—had significant effects on debate outcomes, and a distinct transition appears midway through the parliament when committees, external to the debate process, gained new powers to “propose and dispose.” Taken together, these quantitative results align with existing qualitative interpretations, but also reveal crucial information-processing dynamics that have hitherto been overlooked. Great orators had the public’s attention, but deputies (mostly on the political left) who mastered the committee system gained new powers to shape revolutionary legislation."
to:NB  text_mining  french_revolution  information_theory 
19 days ago
The role of obesity in exceptionally slow US mortality improvement | PNAS
"Recent studies have described a reduction in the rate of improvement in American mortality. The pace of improvement is also slow by international standards. This paper attempts to identify the extent to which rising body mass index (BMI) is responsible for reductions in the rate of mortality improvement in the United States. The data for this study were obtained from subsequent cohorts of the National Health and Nutrition Examination Survey (NHANES III, 1988–1994; NHANES continuous, 1999–2010) and from the NHANES linked mortality files, which include follow-up into death records through December 2011. The role of BMI was estimated using Cox models comparing mortality trends in the presence and absence of adjustment for maximum lifetime BMI (Max BMI). Introducing Max BMI into a Cox model controlling for age and sex raised the annual rate of mortality decline by 0.54% (95% confidence interval 0.45–0.64%). Results were robust to the inclusion of other variables in the model, to differences in how Max BMI was measured, and to how trends were evaluated. The effect of rising Max BMI is large relative to international mortality trends and to alternative mortality futures simulated by the Social Security Administration. The increase in Max BMI over the period 1988–2011 is estimated to have reduced life expectancy at age 40 by 0.9 years in 2011 (95% confidence interval 0.7–1.1 years) and accounted for 186,000 excess deaths that year. Rising levels of BMI have prevented the United States from enjoying the full benefits of factors working to improve mortality."

--- Contributed, so who knows?
to:NB  obesity  whats_gone_wrong_with_america 
19 days ago
Social norm complexity and past reputations in the evolution of cooperation | Nature
"Indirect reciprocity is the most elaborate and cognitively demanding1 of all known cooperation mechanisms2, and is the most specifically human1,3 because it involves reputation and status. By helping someone, individuals may increase their reputation, which may change the predisposition of others to help them in future. The revision of an individual’s reputation depends on the social norms that establish what characterizes a good or bad action and thus provide a basis for morality3. Norms based on indirect reciprocity are often sufficiently complex that an individual’s ability to follow subjective rules becomes important4,5,6, even in models that disregard the past reputations of individuals, and reduce reputations to either ‘good’ or ‘bad’ and actions to binary decisions7,8. Here we include past reputations in such a model and identify the key pattern in the associated norms that promotes cooperation. Of the norms that comply with this pattern, the one that leads to maximal cooperation (greater than 90 per cent) with minimum complexity does not discriminate on the basis of past reputation; the relative performance of this norm is particularly evident when we consider a ‘complexity cost’ in the decision process. This combination of high cooperation and low complexity suggests that simple moral principles can elicit cooperation even in complex environments."
to:NB  evolution_of_cooperation  evolutionary_game_theory 
19 days ago
An empirical analysis of journal policy effectiveness for computational reproducibility | PNAS
"A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluates the effectiveness of journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy—author remission of data and code postpublication upon request—an improvement over no policy, but currently insufficient for reproducibility."
to:NB  why_oh_why_cant_we_have_a_better_academic_publishing_system  heard_the_talk  kith_and_kin  stodden.victoria 
19 days ago
Empirical confidence interval calibration for population-level effect estimation studies in observational healthcare data | PNAS
"Observational healthcare data, such as electronic health records and administrative claims, offer potential to estimate effects of medical products at scale. Observational studies have often been found to be nonreproducible, however, generating conflicting results even when using the same database to answer the same question. One source of discrepancies is error, both random caused by sampling variability and systematic (for example, because of confounding, selection bias, and measurement error). Only random error is typically quantified but converges to zero as databases become larger, whereas systematic error persists independent from sample size and therefore, increases in relative importance. Negative controls are exposure–outcome pairs, where one believes no causal effect exists; they can be used to detect multiple sources of systematic error, but interpreting their results is not always straightforward. Previously, we have shown that an empirical null distribution can be derived from a sample of negative controls and used to calibrate P values, accounting for both random and systematic error. Here, we extend this work to calibration of confidence intervals (CIs). CIs require positive controls, which we synthesize by modifying negative controls. We show that our CI calibration restores nominal characteristics, such as 95% coverage of the true effect size by the 95% CI. We furthermore show that CI calibration reduces disagreement in replications of two pairs of conflicting observational studies: one related to dabigatran, warfarin, and gastrointestinal bleeding and one related to selective serotonin reuptake inhibitors and upper gastrointestinal bleeding. We recommend CI calibration to improve reproducibility of observational studies."
to:NB  statistics  confidence_sets  madigan.david  calibration 
19 days ago
Altruism in a volatile world | Nature
"The evolution of altruism—costly self-sacrifice in the service of others—has puzzled biologists1 since The Origin of Species. For half a century, attempts to understand altruism have developed around the concept that altruists may help relatives to have extra offspring in order to spread shared genes2. This theory—known as inclusive fitness—is founded on a simple inequality termed Hamilton’s rule2. However, explanations of altruism have typically not considered the stochasticity of natural environments, which will not necessarily favour genotypes that produce the greatest average reproductive success3,4. Moreover, empirical data across many taxa reveal associations between altruism and environmental stochasticity5,6,7,8, a pattern not predicted by standard interpretations of Hamilton’s rule. Here we derive Hamilton’s rule with explicit stochasticity, leading to new predictions about the evolution of altruism. We show that altruists can increase the long-term success of their genotype by reducing the temporal variability in the number of offspring produced by their relatives. Consequently, costly altruism can evolve even if it has a net negative effect on the average reproductive success of related recipients. The selective pressure on volatility-suppressing altruism is proportional to the coefficient of variation in population fitness, and is therefore diminished by its own success. Our results formalize the hitherto elusive link between bet-hedging and altruism4,9,10,11, and reveal missing fitness effects in the evolution of animal societies."
to:NB  evolutionary_biology  evolutionary_game_theory  evolution_of_cooperation 
19 days ago
Coevolution of landesque capital intensive agriculture and sociopolitical hierarchy | PNAS
"One of the defining trends of the Holocene has been the emergence of complex societies. Two essential features of complex societies are intensive resource use and sociopolitical hierarchy. Although it is widely agreed that these two phenomena are associated cross-culturally and have both contributed to the rise of complex societies, the causality underlying their relationship has been the subject of longstanding debate. Materialist theories of cultural evolution tend to view resource intensification as driving the development of hierarchy, but the reverse order of causation has also been advocated, along with a range of intermediate views. Phylogenetic methods have the potential to test between these different causal models. Here we report the results of a phylogenetic study that modeled the coevolution of one type of resource intensification—the development of landesque capital intensive agriculture—with political complexity and social stratification in a sample of 155 Austronesian-speaking societies. We found support for the coevolution of landesque capital with both political complexity and social stratification, but the contingent and nondeterministic nature of both of these relationships was clear. There was no indication that intensification was the “prime mover” in either relationship. Instead, the relationship between intensification and social stratification was broadly reciprocal, whereas political complexity was more of a driver than a result of intensification. These results challenge the materialist view and emphasize the importance of both material and social factors in the evolution of complex societies, as well as the complex and multifactorial nature of cultural evolution."
to:NB  cultural_evolution  inequality  historical_materialism  phylogenetics 
19 days ago
Inequality and redistribution behavior in a give-or-take game | PNAS
"Political polarization and extremism are widely thought to be driven by the surge in economic inequality in many countries around the world. Understanding why inequality persists depends on knowing the causal effect of inequality on individual behavior. We study how inequality affects redistribution behavior in a randomized “give-or-take” experiment that created equality, advantageous inequality, or disadvantageous inequality between two individuals before offering one of them the opportunity to either take from or give to the other. We estimate the causal effect of inequality in representative samples of German and American citizens (n = 4,966) and establish two main findings. First, individuals imperfectly equalize payoffs: On average, respondents transfer 12% of the available endowments to realize more equal wealth distributions. This means that respondents tolerate a considerable degree of inequality even in a setting in which there are no costs to redistribution. Second, redistribution behavior in response to disadvantageous and advantageous inequality is largely asymmetric: Individuals who take from those who are richer do not also tend to give to those who are poorer, and individuals who give to those who are poorer do not tend to take from those who are richer. These behavioral redistribution types correlate in meaningful ways with support for heavy taxes on the rich and the provision of welfare benefits for the poor. Consequently, it seems difficult to construct a majority coalition willing to back the type of government interventions needed to counter rising inequality."
to:NB  experimental_economics  inequality 
19 days ago
Extracting neuronal functional network dynamics via adaptive Granger causality analysis | PNAS
"Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior."
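
--- The core of Granger causality, before any of the paper's adaptive, sparse, point-process machinery, is just a pair of nested predictive regressions. A minimal static sketch on invented toy series (plain OLS; nothing below is from the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def granger_f(x, y, p=2):
    """F-statistic for 'lags of y help predict x', via ordinary least squares."""
    n = len(x)
    target = x[p:]
    # lag matrices: own lags only, then own plus cross lags
    X_own = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    X_both = np.column_stack([X_own] + [y[p - k:n - k] for k in range(1, p + 1)])
    def rss(M):
        M1 = np.column_stack([np.ones(len(M)), M])
        beta, *_ = np.linalg.lstsq(M1, target, rcond=None)
        resid = target - M1 @ beta
        return resid @ resid
    rss_restricted, rss_full = rss(X_own), rss(X_both)
    df2 = len(target) - 2 * p - 1
    return ((rss_restricted - rss_full) / p) / (rss_full / df2)

# toy data: y drives x with a one-step lag, but x does not drive y
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

print(granger_f(x, y))  # large: lags of y sharply reduce prediction error for x
print(granger_f(y, x))  # near 1: lags of x add nothing for y
```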
to:NB  neural_data_analysis  time_series  functional_connectivity  statistics 
19 days ago
How much has wealth concentration grown in the United States? A re-examination of data from 2001-2013
"Well known research based on capitalized income tax data shows robust growth in wealth concentration in the late 2000s. We show that these robust growth estimates rely on an assumption—homogeneous rates of return across the wealth distribution—that is not supported by data. When the capitalization model incorporates heterogeneous rates of return (on just interest-bearing assets), wealth concentration estimates in 2011 fall from 40.5% to 33.9%. These estimates are consistent in levels and trend with other micro wealth data and show that wealth concentration increases until the Great Recession, then declines before increasing again."
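
--- The arithmetic of the critique fits in a few lines. All numbers below are invented for illustration; only the logic — capitalizing observed income flows at a single aggregate rate when true rates of return differ across groups — comes from the abstract:

```python
# Two groups with equal interest-income flows but different rates of return.
top_income, rest_income = 50.0, 50.0   # observed income flows (toy numbers)
r_top, r_rest = 0.05, 0.01             # true, heterogeneous rates of return

# True wealth is income / rate, so the low-return group is much wealthier.
w_top, w_rest = top_income / r_top, rest_income / r_rest        # 1000, 5000
true_top_share = w_top / (w_top + w_rest)                       # ~0.17

# Homogeneous capitalization scales both flows by the same factor,
# so the estimated wealth share is just the income share.
est_top_share = top_income / (top_income + rest_income)         # 0.50

print(true_top_share, est_top_share)
```

The homogeneous-rate estimate triples the top group's apparent wealth share, which is the direction of the 40.5% vs. 33.9% revision in the abstract.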
to:NB  economics  inequality  heavy_tails  class_struggles_in_america 
19 days ago
A Kernel Embedding–Based Approach for Nonstationary Causal Model Inference | Neural Computation | MIT Press Journals
"Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding–based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables of which observations correspond to the kernel embeddings of the cause-and-effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear nongaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods."
to:NB  causal_inference  causal_discovery  kernel_methods  to_be_shot_after_a_fair_trial 
19 days ago
Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks | Neural Computation | MIT Press Journals
"Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSP's planar four-color graph coloring, maximum independent set, and sudoku on this substrate and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits."
to:NB  neural_networks  constraint-satisfaction  to_be_shot_after_a_fair_trial 
19 days ago
Slowness as a Proxy for Temporal Predictability: An Empirical Comparison | Neural Computation | MIT Press Journals
"The computational principles of slowness and predictability have been proposed to describe aspects of information processing in the visual system. From the perspective of slowness being a limited special case of predictability we investigate the relationship between these two principles empirically. On a collection of real-world data sets we compare the features extracted by slow feature analysis (SFA) to the features of three recently proposed methods for predictable feature extraction: forecastable component analysis, predictable feature analysis, and graph-based predictable feature analysis. Our experiments show that the predictability of the learned features is highly correlated, and, thus, SFA appears to effectively implement a method for extracting predictable features according to different measures of predictability."
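
--- Linear SFA itself is just whitening plus an eigenproblem on temporal differences, which makes the slowness-versus-predictability comparison easy to poke at. A toy sketch of my own construction, not the paper's experiments:

```python
import numpy as np

# two mixtures of a slow source and a fast source
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])

# 1) center and whiten the observed mixtures
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
Z = Xc @ evecs @ np.diag(evals ** -0.5)

# 2) the slowest feature is the whitened direction whose temporal
#    differences have the least variance (smallest eigenvector)
dZ = np.diff(Z, axis=0)
devals, devecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))
slow_feature = Z @ devecs[:, 0]

# the extracted feature recovers the slow source (up to sign)
print(abs(np.corrcoef(slow_feature, slow)[0, 1]))  # close to 1
```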
to:NB  time_series  prediction  statistics 
19 days ago
A Micro-Level Data-Calibrated Agent-Based Model: The Synergy between Microsimulation and Agent-Based Modeling | Artificial Life | MIT Press Journals
"Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or “soft,” aspects of ALife and prepare a framework for scientists and modelers to be able to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrate how ALife techniques can be used by researchers in relation to social issues and policies."
to:NB  agent-based_models  statistics  simulation 
19 days ago
Renewing Felsenstein’s phylogenetic bootstrap in the era of big data | Nature
"Felsenstein’s application of the bootstrap method to evolutionary trees is one of the most cited scientific papers of all time. The bootstrap method, which is based on resampling and replications, is used extensively to assess the robustness of phylogenetic inferences. However, increasing numbers of sequences are now available for a wide variety of species, and phylogenies based on hundreds or thousands of taxa are becoming routine. With phylogenies of this size Felsenstein’s bootstrap tends to yield very low supports, especially on deep branches. Here we propose a new version of the phylogenetic bootstrap in which the presence of inferred branches in replications is measured using a gradual ‘transfer’ distance rather than the binary presence or absence index used in Felsenstein’s original version. The resulting supports are higher and do not induce falsely supported branches. The application of our method to large mammal, HIV and simulated datasets reveals their phylogenetic signals, whereas Felsenstein’s bootstrap fails to do so."
to:NB  statistics  evolutionary_biology  phylogenetics  bootstrap 
19 days ago
Robust Regression on Stationary Time Series: A Self‐Normalized Resampling Approach - Akashi - 2018 - Journal of Time Series Analysis - Wiley Online Library
"This article extends the self‐normalized subsampling method of Bai et al. (2016) to the M‐estimation of linear regression models, where the covariate and the noise are stationary time series which may have long‐range dependence or heavy tails. The method yields an asymptotic confidence region for the unknown coefficients of the linear regression. The determination of these regions does not involve unknown parameters such as the intensity of the dependence or the heaviness of the distributional tail of the time series. Additional simulations can be found in a supplement. The computer codes are available from the authors."
to:NB  time_series  statistics  linear_regression  heavy_tails  long-range_dependence 
19 days ago
Estimating MA Parameters through Factorization of the Autocovariance Matrix and an MA‐Sieve Bootstrap - McMurry - 2018 - Journal of Time Series Analysis - Wiley Online Library
"A new method to estimate the moving‐average (MA) coefficients of a stationary time series is proposed. The new approach is based on the modified Cholesky factorization of a consistent estimator of the autocovariance matrix. Convergence rates are established, and the new estimates are used to implement an MA‐type sieve bootstrap. Finite‐sample simulations corroborate the good performance of the proposed methodology."
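
--- The factorization idea is easy to see on a known MA(1), where the autocovariances are available in closed form. A sketch using population values to isolate the idea (the paper, of course, factorizes a consistent estimate of the autocovariance matrix built from data):

```python
import numpy as np

theta, sigma2, n = 0.6, 1.0, 200

# population autocovariances of x_t = e_t + theta * e_{t-1}
gamma = np.zeros(n)
gamma[0] = (1 + theta ** 2) * sigma2
gamma[1] = theta * sigma2

# banded Toeplitz autocovariance matrix and its Cholesky factor
Gamma = np.array([[gamma[abs(i - j)] for j in range(n)] for i in range(n)])
C = np.linalg.cholesky(Gamma)

# the normalized entries of the factor's last rows converge
# to the MA coefficients (the innovations-algorithm limit)
theta_hat = C[-1, -2] / C[-2, -2]
print(theta_hat)  # -> 0.6 up to numerical precision
```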
to:NB  time_series  bootstrap  statistics 
19 days ago
Low agreement among reviewers evaluating the same NIH grant applications | PNAS
"Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers’ ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers “translated” a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review."
to:NB  sociology_of_science  peer_review  why_oh_why_cant_we_have_a_better_intelligentsia  science_as_a_social_process 
19 days ago
Reconstructing the genetic history of late Neanderthals | Nature
"Although it has previously been shown that Neanderthals contributed DNA to modern humans1,2, not much is known about the genetic diversity of Neanderthals or the relationship between late Neanderthal populations at the time at which their last interactions with early modern humans occurred and before they eventually disappeared. Our ability to retrieve DNA from a larger number of Neanderthal individuals has been limited by poor preservation of endogenous DNA3 and contamination of Neanderthal skeletal remains by large amounts of microbial and present-day human DNA3,4,5. Here we use hypochlorite treatment6 of as little as 9 mg of bone or tooth powder to generate between 1- and 2.7-fold genomic coverage of five Neanderthals who lived around 39,000 to 47,000 years ago (that is, late Neanderthals), thereby doubling the number of Neanderthals for which genome sequences are available. Genetic similarity among late Neanderthals is well predicted by their geographical location, and comparison to the genome of an older Neanderthal from the Caucasus2,7 indicates that a population turnover is likely to have occurred, either in the Caucasus or throughout Europe, towards the end of Neanderthal history. We find that the bulk of Neanderthal gene flow into early modern humans originated from one or more source populations that diverged from the Neanderthals that were studied here at least 70,000 years ago, but after they split from a previously sequenced Neanderthal from Siberia2 around 150,000 years ago. Although four of the Neanderthals studied here post-date the putative arrival of early modern humans into Europe, we do not detect any recent gene flow from early modern humans in their ancestry."
to:NB  genetics  human_genetics  human_evolution  historical_genetics 
19 days ago
A randomized controlled design reveals barriers to citizenship for low-income immigrants | PNAS
"Citizenship endows legal protections and is associated with economic and social gains for immigrants and their communities. In the United States, however, naturalization rates are relatively low. Yet we lack reliable knowledge as to what constrains immigrants from applying. Drawing on data from a public/private naturalization program in New York, this research provides a randomized controlled study of policy interventions that address these constraints. The study tested two programmatic interventions among low-income immigrants who are eligible for citizenship. The first randomly assigned a voucher that covers the naturalization application fee among immigrants who otherwise would have to pay the full cost of the fee. The second randomly assigned a set of behavioral nudges, similar to outreach efforts used by service providers, among immigrants whose incomes were low enough to qualify them for a federal waiver that eliminates the application fee. Offering the fee voucher increased naturalization application rates by about 41%, suggesting that application fees act as a barrier for low-income immigrants who want to become US citizens. The nudges to encourage the very poor to apply had no discernible effect, indicating the presence of nonfinancial barriers to naturalization."

--- Or those particular nudges (and perhaps outreach efforts in general?) are just ineffective.
to:NB  experimental_sociology  us_politics  political_science  re:anti-nudge 
19 days ago
Global spectral clustering in dynamic networks | PNAS
"Community detection is challenging when the network structure is estimated with uncertainty. Dynamic networks present additional challenges but also add information across time periods. We propose a global community detection method, persistent communities by eigenvector smoothing (PisCES), that combines information across a series of networks, longitudinally, to strengthen the inference for each period. Our method is derived from evolutionary spectral clustering and degree correction methods. Data-driven solutions to the problem of tuning parameter selection are provided. In simulations we find that PisCES performs better than competing methods designed for a low signal-to-noise ratio. Recently obtained gene expression data from rhesus monkey brains provide samples from finely partitioned brain regions over a broad time span including pre- and postnatal periods. Of interest is how gene communities develop over space and time; however, once the data are divided into homogeneous spatial and temporal periods, sample sizes are very small, making inference quite challenging. Applying PisCES to medial prefrontal cortex in monkey rhesus brains from near conception to adulthood reveals dense communities that persist, merge, and diverge over time and others that are loosely organized and short lived, illustrating how dynamic community detection can yield interesting insights into processes such as brain development."
to:NB  community_discovery  network_data_analysis  dynamical_systems  spectral_clustering  kith_and_kin  liu.fuchen  roeder.kathryn  choi.david  sat_on_the_thesis_committee  to_teach:baby-nets 
19 days ago
Probabilistic switching circuits in DNA | PNAS
"A natural feature of molecular systems is their inherent stochastic behavior. A fundamental challenge related to the programming of molecular information processing systems is to develop a circuit architecture that controls the stochastic states of individual molecular events. Here we present a systematic implementation of probabilistic switching circuits, using DNA strand displacement reactions. Exploiting the intrinsic stochasticity of molecular interactions, we developed a simple, unbiased DNA switch: An input signal strand binds to the switch and releases an output signal strand with probability one-half. Using this unbiased switch as a molecular building block, we designed DNA circuits that convert an input signal to an output signal with any desired probability. Further, this probability can be switched between 2n different values by simply varying the presence or absence of n distinct DNA molecules. We demonstrated several DNA circuits that have multiple layers and feedback, including a circuit that converts an input strand to an output strand with eight different probabilities, controlled by the combination of three DNA molecules. These circuits combine the advantages of digital and analog computation: They allow a small number of distinct input molecules to control a diverse signal range of output molecules, while keeping the inputs robust to noise and the outputs at precise values. Moreover, arbitrarily complex circuit behaviors can be implemented with just a single type of molecular building block."
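
--- This is the molecular version of the classic trick for simulating any dyadic probability from fair coin flips; the presence or absence of the n control molecules supplies the binary digits of the target probability. A software analogue (my analogy, not the authors' chemistry):

```python
import random

def fire(bits, rng):
    """Return True with probability 0.b1b2...bn (binary), using only
    fair flips -- the software analogue of the unbiased half-probability switch."""
    for b in bits:
        coin = rng.random() < 0.5          # one unbiased switch
        if coin and b == 0:
            return False                   # generated value exceeds target
        if not coin and b == 1:
            return True                    # generated value is below target
    return False                           # exact tie: exclude, keeping p exact

rng = random.Random(42)
bits = [1, 0, 1]                           # three "control molecules": p = 5/8
hits = sum(fire(bits, rng) for _ in range(100_000))
print(hits / 100_000)                      # ~ 0.625
```

With n digits this selects among 2^n output probabilities, matching the abstract's three-molecule, eight-probability circuit.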
to:NB  computation  biochemical_networks  biological_computers  molecular_biology 
19 days ago
Carrot-Ginger Dressing Recipe | SAVEUR
1 cup vegetable oil
1⁄2 cup rice vinegar
1⁄4 cup soy sauce
1 tbsp. sugar
1 1⁄2 tsp. finely grated ginger
2 medium carrots (about 8 oz.), peeled and roughly chopped
1⁄2 medium yellow onion (about 6 oz.), roughly chopped
Kosher salt and freshly ground black pepper, to taste
1 head (about 1 lb.) iceberg lettuce, trimmed and cut into bite-sized pieces, for serving

Combine oil, vinegar, soy sauce, sugar, ginger, carrots, and onion in a food processor, and process until smooth; season with salt and pepper. Combine dressing and lettuce in a bowl, and toss until evenly coated; serve immediately. Unused dressing will keep for up to two weeks in the refrigerator.
food  recipes  have_made 
20 days ago
Fundamental limits on dynamic inference from single-cell snapshots | PNAS
"Single-cell expression profiling reveals the molecular states of individual cells with unprecedented detail. Because these methods destroy cells in the process of analysis, they cannot measure how gene expression changes over time. However, some information on dynamics is present in the data: the continuum of molecular states in the population can reflect the trajectory of a typical cell. Many methods for extracting single-cell dynamics from population data have been proposed. However, all such attempts face a common limitation: for any measured distribution of cell states, there are multiple dynamics that could give rise to it, and by extension, multiple possibilities for underlying mechanisms of gene regulation. Here, we describe the aspects of gene expression dynamics that cannot be inferred from a static snapshot alone and identify assumptions necessary to constrain a unique solution for cell dynamics from static snapshots. We translate these constraints into a practical algorithmic approach, population balance analysis (PBA), which makes use of a method from spectral graph theory to solve a class of high-dimensional differential equations. We use simulations to show the strengths and limitations of PBA, and then apply it to single-cell profiles of hematopoietic progenitor cells (HPCs). Cell state predictions from this analysis agree with HPC fate assays reported in several papers over the past two decades. By highlighting the fundamental limits on dynamic inference faced by any method, our framework provides a rigorous basis for dynamic interpretation of a gene expression continuum and clarifies best experimental designs for trajectory reconstruction from static snapshot measurements."

--- This is probably too ambitious for the last tag, but some illustrations of the general issue wouldn't be amiss...
to:NB  identifiability  molecular_biology  statistics  to_teach:data_over_space_and_time 
21 days ago
Fast flow-based algorithm for creating density-equalizing map projections | PNAS
"Cartograms are maps that rescale geographic regions (e.g., countries, districts) such that their areas are proportional to quantitative demographic data (e.g., population size, gross domestic product). Unlike conventional bar or pie charts, cartograms can represent correctly which regions share common borders, resulting in insightful visualizations that can be the basis for further spatial statistical analysis. Computer programs can assist data scientists in preparing cartograms, but developing an algorithm that can quickly transform every coordinate on the map (including points that are not exactly on a border) while generating recognizable images has remained a challenge. Methods that translate the cartographic deformations into physics-inspired equations of motion have become popular, but solving these equations with sufficient accuracy can still take several minutes on current hardware. Here we introduce a flow-based algorithm whose equations of motion are numerically easier to solve compared with previous methods. The equations allow straightforward parallelization so that the calculation takes only a few seconds even for complex and detailed input. Despite the speedup, the proposed algorithm still keeps the advantages of previous techniques: With comparable quantitative measures of shape distortion, it accurately scales all areas, correctly fits the regions together, and generates a map projection for every point. We demonstrate the use of our algorithm with applications to the 2016 US election results, the gross domestic products of Indian states and Chinese provinces, and the spatial distribution of deaths in the London borough of Kensington and Chelsea between 2011 and 2014."
to:NB  cartograms  visual_display_of_quantitative_information  kith_and_kin  gastner.michael  to_teach:data_over_space_and_time 
21 days ago
Functional circuit architecture underlying parental behaviour | Nature
"Parenting is essential for the survival and wellbeing of mammalian offspring. However, we lack a circuit-level understanding of how distinct components of this behaviour are coordinated. Here we investigate how galanin-expressing neurons in the medial preoptic area (MPOAGal) of the hypothalamus coordinate motor, motivational, hormonal and social aspects of parenting in mice. These neurons integrate inputs from a large number of brain areas and the activation of these inputs depends on the animal’s sex and reproductive state. Subsets of MPOAGal neurons form discrete pools that are defined by their projection sites. While the MPOAGal population is active during all episodes of parental behaviour, individual pools are tuned to characteristic aspects of parenting. Optogenetic manipulation of MPOAGal projections mirrors this specificity, affecting discrete parenting components. This functional organization, reminiscent of the control of motor sequences by pools of spinal cord neurons, provides a new model for how discrete elements of a social behaviour are generated at the circuit level."
to:NB  neuroscience 
21 days ago
Global rise in emerging alien species results from increased accessibility of new source pools | PNAS
"Our ability to predict the identity of future invasive alien species is largely based upon knowledge of prior invasion history. Emerging alien species—those never encountered as aliens before—therefore pose a significant challenge to biosecurity interventions worldwide. Understanding their temporal trends, origins, and the drivers of their spread is pivotal to improving prevention and risk assessment tools. Here, we use a database of 45,984 first records of 16,019 established alien species to investigate the temporal dynamics of occurrences of emerging alien species worldwide. Even after many centuries of invasions the rate of emergence of new alien species is still high: One-quarter of first records during 2000–2005 were of species that had not been previously recorded anywhere as alien, though with large variation across taxa. Model results show that the high proportion of emerging alien species cannot be solely explained by increases in well-known drivers such as the amount of imported commodities from historically important source regions. Instead, these dynamics reflect the incorporation of new regions into the pool of potential alien species, likely as a consequence of expanding trade networks and environmental change. This process compensates for the depletion of the historically important source species pool through successive invasions. We estimate that 1–16% of all species on Earth, depending on the taxonomic group, qualify as potential alien species. These results suggest that there remains a high proportion of emerging alien species we have yet to encounter, with future impacts that are difficult to predict."
to:NB  ecology  environmental_management  globalization 
21 days ago
Within- and across-trial dynamics of human EEG reveal cooperative interplay between reinforcement learning and working memory | PNAS
"Learning from rewards and punishments is essential to survival and facilitates flexible human behavior. It is widely appreciated that multiple cognitive and reinforcement learning systems contribute to decision-making, but the nature of their interactions is elusive. Here, we leverage methods for extracting trial-by-trial indices of reinforcement learning (RL) and working memory (WM) in human electro-encephalography to reveal single-trial computations beyond that afforded by behavior alone. Neural dynamics confirmed that increases in neural expectation were predictive of reduced neural surprise in the following feedback period, supporting central tenets of RL models. Within- and cross-trial dynamics revealed a cooperative interplay between systems for learning, in which WM contributes expectations to guide RL, despite competition between systems during choice. Together, these results provide a deeper understanding of how multiple neural systems interact for learning and decision-making and facilitate analysis of their disruption in clinical populations."
to:NB  neuroscience  cognitive_science  reinforcement_learning 
21 days ago
Algorithms in the historical emergence of word senses | PNAS
"Human language relies on a finite lexicon to express a potentially infinite set of ideas. A key result of this tension is that words acquire novel senses over time. However, the cognitive processes that underlie the historical emergence of new word senses are poorly understood. Here, we present a computational framework that formalizes competing views of how new senses of a word might emerge by attaching to existing senses of the word. We test the ability of the models to predict the temporal order in which the senses of individual words have emerged, using an historical lexicon of English spanning the past millennium. Our findings suggest that word senses emerge in predictable ways, following an historical path that reflects cognitive efficiency, predominantly through a process of nearest-neighbor chaining. Our work contributes a formal account of the generative processes that underlie lexical evolution."
to:NB  cognitive_science  linguistics 
21 days ago
Rainfall statistics, stationarity, and climate change | PNAS
"There is a growing research interest in the detection of changes in hydrologic and climatic time series. Stationarity can be assessed using the autocorrelation function, but this is not yet common practice in hydrology and climate. Here, we use a global land-based gridded annual precipitation (hereafter P) database (1940–2009) and find that the lag 1 autocorrelation coefficient is statistically significant at around 14% of the global land surface, implying nonstationary behavior (90% confidence). In contrast, around 76% of the global land surface shows little or no change, implying stationary behavior. We use these results to assess change in the observed P over the most recent decade of the database. We find that the changes for most (84%) grid boxes are within the plausible bounds of no significant change at the 90% CI. The results emphasize the importance of adequately accounting for natural variability when assessing change."

--- They really do seem to be saying that because _independent, identically distributed_ random variables have 0 autocorrelation, all autocorrelated time series are non-stationary. This is so unbelievably stupid that I am going to have to read it again very carefully before banging my head into my desk.
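--- A ten-line simulation makes the complaint concrete: a perfectly stationary AR(1) series, at roughly the paper's sample size and confidence level, gets flagged as "nonstationary" by a lag-1 autocorrelation test almost every time. (Only the 70-year length and the 90% level mimic the paper; the rest is invented, and the burn-in is omitted for brevity.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi, trials = 70, 0.5, 1000        # 70 annual values, as in 1940-2009
flagged = 0
for _ in range(trials):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()   # stationary AR(1)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    if abs(r1) > 1.645 / np.sqrt(n):  # 90% white-noise band for lag-1 autocorr
        flagged += 1
print(flagged / trials)  # most replicates of a *stationary* series "reject"
```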
to:NB  to_read  bad_data_analysis  time_series  statistics  to_teach:data_over_space_and_time  to_be_shot_after_a_fair_trial 
21 days ago
What Do Trade Agreements Really Do? (Rodrik)
"Economists have a tendency to associate "free trade agreements" all too closely with "free trade." They may be unaware of some of the new (and often problematic) beyond-the-border features of current trade agreements. As trade agreements have evolved and gone beyond import tariffs and quotas into regulatory rules and harmonization—intellectual property, health and safety rules, labor standards, investment measures, investor-state dispute settlement procedures, and others—they have become harder to fit into received economic theory. It is possible that rather than neutralizing the protectionists, trade agreements may empower a different set of rent-seeking interests and politically well-connected firms—international banks, pharmaceutical companies, and multinational firms. Trade agreements could still result in freer, mutually beneficial trade, through exchange of market access. They could result in the global upgrading of regulations and standards, for labor, say, or the environment. But they could also produce purely redistributive outcomes under the guise of "freer trade." As trade agreements become less about tariffs and nontariff barriers at the border and more about domestic rules and regulations, economists might do well to worry more about the latter possibility."
to:NB  economics  political_economy  class_struggles_in_america  rodrik.dani 
23 days ago
Trust your gut: using physiological states as a source of information is almost as effective as optimal Bayesian learning
"Approaches to understanding adaptive behaviour often assume that animals have perfect information about environmental conditions or are capable of sophisticated learning. If such learning abilities are costly, however, natural selection will favour simpler mechanisms for controlling behaviour when faced with uncertain conditions. Here, we show that, in a foraging context, a strategy based only on current energy reserves often performs almost as well as a Bayesian learning strategy that integrates all previous experiences to form an optimal estimate of environmental conditions. We find that Bayesian learning gives a strong advantage only if fluctuations in the food supply are very strong and reasonably frequent. The performance of both the Bayesian and the reserve-based strategy are more robust to inaccurate knowledge of the temporal pattern of environmental conditions than a strategy that has perfect knowledge about current conditions. Studies assuming Bayesian learning are often accused of being unrealistic; our results suggest that animals can achieve a similar level of performance to Bayesians using much simpler mechanisms based on their physiological state. More broadly, our work suggests that the ability to use internal states as a source of information about recent environmental conditions will have weakened selection for sophisticated learning and decision-making systems."

--- Slightly astonishing to see only one reference to Gigerenzer...
to:NB  psychology  ethology  adaptive_behavior  bayesianism  heuristics  via:? 
25 days ago
Elements of Surprise — Vera Tobin | Harvard University Press
"Why do some surprises delight—the endings of Agatha Christie novels, films like The Sixth Sense, the flash awareness that Pip’s benefactor is not (and never was!) Miss Havisham? Writing at the intersection of cognitive science and narrative pleasure, Vera Tobin explains how our brains conspire with stories to produce those revelatory plots that define a “well-made surprise.”
"By tracing the prevalence of surprise endings in both literary fiction and popular literature and showing how they exploit our mental limits, Tobin upends two common beliefs. The first is cognitive science’s tendency to consider biases a form of moral weakness and failure. The second is certain critics’ presumption that surprise endings are mere shallow gimmicks. The latter is simply not true, and the former tells at best half the story. Tobin shows that building a good plot twist is a complex art that reflects a sophisticated understanding of the human mind.
"Reading classic, popular, and obscure literature alongside the latest research in cognitive science, Tobin argues that a good surprise works by taking advantage of our mental limits. Elements of Surprise describes how cognitive biases, mental shortcuts, and quirks of memory conspire with stories to produce wondrous illusions, and also provides a sophisticated how-to guide for writers. In Tobin’s hands, the interactions of plot and cognition reveal the interdependencies of surprise, sympathy, and sense-making. The result is a new appreciation of the pleasures of being had."
to:NB  books:noted  literary_criticism  narrative  psychology  surprise 
4 weeks ago
Trust Me, I'm Lying by Ryan Holiday | PenguinRandomHouse.com
"Hailed as “astonishing and disturbing” by the Financial Times and “essential reading” by TechCrunch at its original publication, former American Apparel marketing director Ryan Holiday’s first book sounded a prescient alarm about the dangers of fake news. It’s all the more relevant today. 
"Trust Me, I’m Lying was the first book to blow the lid off the speed and force at which rumors travel online—and get “traded up” the media ecosystem until they become real headlines and generate real responses in the real world. The culprit? Marketers and professional media manipulators, encouraged by the toxic economics of the news business.
"Whenever you see a malicious online rumor cost a company millions, politically motivated fake news driving elections, a product or celebrity zooming from total obscurity to viral sensation, or anonymously sourced articles becoming national conversation, someone is behind it. Often someone like Ryan Holiday.
"As he explains, “I wrote this book to explain how media manipulators work, how to spot their fingerprints, how to fight them, and how (if you must) to emulate their tactics. Why am I giving away these secrets? Because I’m tired of a world where trolls hijack debates, marketers help write the news, opinion masquerades as fact, algorithms drive everything to extremes, and no one is accountable for any of it. I’m pulling back the curtain because it’s time the public understands how things really work. What you choose to do with this information is up to you.”"
to:NB  books:noted  advertising  networked_life  natural_history_of_truthiness  deceiving_us_has_become_an_industrial_process  re:democratic_cognition  to_be_shot_after_a_fair_trial 
4 weeks ago
Artificial Intelligence — The Revolution Hasn’t Happened Yet
Unsurprisingly, Michael Jordan talks sense.

(Trivial and unrelated rant: What on Earth is the point of using Medium? It takes a post which is about 24k of text and actual formatting, and bloats it to over 150k, to do, so far as I can see, absolutely nothing of value to readers.)
artificial_intelligence  debunking  machine_learning  jordan.michael_i. 
4 weeks ago
Word embeddings quantify 100 years of gender and ethnic stereotypes | PNAS
"Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts—e.g., the women’s movement in the 1960s and Asian immigration into the United States—and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science."
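The measurement idea is, roughly, to compare how close an occupation or adjective vector sits to vectors for one group versus another. A toy sketch in that spirit, using made-up 3-d vectors rather than real trained embeddings (the words and values here are illustrative only, not the paper's data or exact metric):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy stand-ins for trained word vectors (illustrative values only).
emb = {
    "she":   [0.9, 0.1, 0.2],
    "he":    [0.1, 0.9, 0.2],
    "nurse": [0.8, 0.2, 0.3],
}

# Positive bias: the occupation vector sits closer to "she" than to "he".
bias = cosine(emb["nurse"], emb["she"]) - cosine(emb["nurse"], emb["he"])
print(f"gender association of 'nurse': {bias:+.3f}")
```

Tracking such a difference across embeddings trained on successive decades of text is what lets the paper turn stereotype change into a time series.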
to:NB  text_mining  sociology  sexism  racism  history_of_ideas  time_series  to_teach:data-mining  to_teach:data_over_space_and_time 
5 weeks ago
Multiscale mixing patterns in networks | PNAS
"Assortative mixing in networks is the tendency for nodes with the same attributes, or metadata, to link to each other. It is a property often found in social networks, manifesting as a higher tendency of links occurring between people of the same age, race, or political belief. Quantifying the level of assortativity or disassortativity (the preference of linking to nodes with different attributes) can shed light on the organization of complex networks. It is common practice to measure the level of assortativity according to the assortativity coefficient, or modularity in the case of categorical metadata. This global value is the average level of assortativity across the network and may not be a representative statistic when mixing patterns are heterogeneous. For example, a social network spanning the globe may exhibit local differences in mixing patterns as a consequence of differences in cultural norms. Here, we introduce an approach to localize this global measure so that we can describe the assortativity, across multiple scales, at the node level. Consequently, we are able to capture and qualitatively evaluate the distribution of mixing patterns in the network. We find that, for many real-world networks, the distribution of assortativity is skewed, overdispersed, and multimodal. Our method provides a clearer lens through which we can more closely examine mixing patterns in networks."

--- More descriptive statistics.
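The global statistic the paper sets out to localize, Newman's assortativity coefficient for categorical metadata, fits in a few lines: r = (tr e − Σ a_i b_i) / (1 − Σ a_i b_i), where e is the mixing matrix of edge-end types. A sketch on a toy graph (my example, not the paper's code):

```python
from collections import defaultdict

def attribute_assortativity(edges, attr):
    """Newman's assortativity coefficient for a categorical node attribute."""
    e = defaultdict(float)
    m = 2 * len(edges)  # each undirected edge contributes two directed ends
    for u, v in edges:
        e[(attr[u], attr[v])] += 1 / m
        e[(attr[v], attr[u])] += 1 / m
    types = {t for pair in e for t in pair}
    a = {t: sum(e[(t, s)] for s in types) for t in types}  # marginals
    trace = sum(e[(t, t)] for t in types)
    sum_ab = sum(a[t] ** 2 for t in types)  # rows == columns for undirected graphs
    return (trace - sum_ab) / (1 - sum_ab)

attr = {0: "A", 1: "A", 2: "B", 3: "B"}
edges = [(0, 1), (2, 3), (1, 2)]  # two within-group ties, one between
r = attribute_assortativity(edges, attr)
print(f"{r:.3f}")  # → 0.333
```

The paper's contribution is replacing this single network-wide average with a per-node, multiscale version, so heterogeneous mixing (assortative here, disassortative there) stops being washed out.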
to:NB  network_data_analysis  homophily  to_teach:baby-nets 
5 weeks ago
The 4-color map theory is bunk. It is very easy to create an artificial map that... | Hacker News
The best part is when he (pretty sure it's a "he") uploads a picture of his supposed counter-example, and it's 4-colored within minutes by another poster.
mathematics  utter_stupidity  networked_life  via:? 
5 weeks ago
The Slave Trade and British Capital Formation in the Eighteenth Century: A Comment on the Williams Thesis* | Business History Review | Cambridge Core
"Professor Engerman constructs estimates of relevant data in order to test the assertion that profits from the slave trade provided the capital which financed the Industrial Revolution in England."

--- The last tag is tentative, but La Historienne has convinced me to at least explore using the Williams Thesis as a teaching example in the new class...
to:NB  to_read  industrial_revolution  capitalism  slavery  economic_history  time_series  to_teach:data_over_space_and_time 
5 weeks ago