Identifying and Estimating Neighborhood Effects
"Residential segregation by race and income are enduring features of urban America. Understanding the effects of residential segregation on educational attainment, labor market outcomes, criminal activity, and other outcomes has been a leading project of the social sciences for over half a century. This paper describes techniques for measuring the effects of neighborhood of residence on long-run life outcomes."

--- Last tag very tentative, since I've not read the paper _and_ the class won't be that advanced
to:NB  statistics  econometrics  spatial_statistics  identifiability  inequality  the_american_dilemma  to_teach:data_over_space_and_time 
6 days ago
Notes towards a Theory of Ideology - Persée
One of Gellner's classics; collected in _Spectacles and Predicaments_.
have_read  ideology  sociology  gellner.ernest 
8 days ago
[1406.2661] Generative Adversarial Networks
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."

--- Kind of astonishing that the phrases "actor-critic" or "co-evolution" do not appear anywhere in the paper. The bit about KL divergence is nice, though. (But even then, it doesn't involve multi-layer perceptrons at all, it's all at the level of probability distributions and could apply to any learning system.)
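--- The fixed point claimed in the abstract is easy to verify directly at the level of probability distributions, per the remark above. A minimal numerical check, with made-up three-point distributions:

```python
import numpy as np

# Two discrete distributions on a common support: "data" and "generator".
p_data = np.array([0.5, 0.3, 0.2])
p_gen = np.array([0.2, 0.3, 0.5])

def optimal_D(p_data, p_gen):
    # The discriminator maximizing E_data[log D] + E_gen[log(1 - D)]
    # is D*(x) = p_data(x) / (p_data(x) + p_gen(x)).
    return p_data / (p_data + p_gen)

def game_value(p_data, p_gen):
    D = optimal_D(p_data, p_gen)
    return np.sum(p_data * np.log(D)) + np.sum(p_gen * np.log(1 - D))

# At the unique solution p_gen = p_data, D* is 1/2 everywhere and the
# value is -log 4, the global minimum; any mismatch raises the value.
print(optimal_D(p_data, p_data))  # [0.5 0.5 0.5]
print(np.isclose(game_value(p_data, p_data), -np.log(4)))  # True
print(game_value(p_data, p_gen) > -np.log(4))  # True
```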
to:NB  have_read  neural_networks  machine_learning  computational_statistics  statistics 
11 days ago
Escaping Malthus: Economic Growth and Fertility Change in the Developing World
"Following mid-twentieth century predictions of Malthusian catastrophe, fertility in the developing world more than halved, while living standards more than doubled. We analyze how fertility change related to economic growth during this episode, using data on 2.3 million women from 255 household surveys. We find different responses to fluctuations and long-run growth, both heterogeneous over the life cycle. Fertility was procyclical but declined and delayed with long-run growth; fluctuations late (but not early) in the reproductive period affected lifetime fertility. The results are consistent with models of the escape from the Malthusian trap, extended with a life cycle and liquidity constraints."
to:NB  demography  economics  great_transformation 
11 days ago
Investment Strategy and Selection Bias: An Equilibrium Perspective on Overoptimism
"Investors implement projects based on idiosyncratic signal observations, without knowing how signals and returns are jointly distributed. The following heuristic is studied: investors collect information on previously implemented projects with the same signal realization and invest if the associated mean return exceeds the cost. The corresponding steady states result in suboptimal investments, due to selection bias and the heterogeneity of signals across investors. When higher signals are associated with higher returns, investors are overoptimistic, resulting in overinvestment. Rational investors increase the overoptimism of sampling investors, thereby illustrating a negative externality imposed by rational investors."
to:NB  economics  market_failures_in_everything 
12 days ago
Material Signals: A Historical Sociology of High-Frequency Trading | American Journal of Sociology: Vol 123, No 6
"Drawing on interviews with 194 market participants (including 54 practitioners of high-frequency trading or HFT), this article first identifies the main classes of “signals” (patterns of data) that influence how HFT algorithms buy and sell shares and interact with each other. Second, it investigates historically the processes that have led to three of the most important categories of these signals, finding that they arise from three features of U.S. share trading that are the result of episodes of meso-level conflict. Third, the article demonstrates the contingency of these features by briefly comparing HFT in share trading to HFT in futures, Treasurys, and foreign exchange. The article thus argues that how HFT algorithms act and interact is a specific, contingent product not just of the current but also of the past interaction of people, organizations, algorithms, and machines."
to:NB  finance  sociology  prediction 
12 days ago
[1805.12462] On GANs and GMMs
"A longstanding problem in machine learning is to find unsupervised methods that can learn the statistical structure of high dimensional signals. In recent years, GANs have gained much attention as a possible solution to the problem, and in particular have shown the ability to generate remarkably realistic high resolution sampled images. At the same time, many authors have pointed out that GANs may fail to model the full distribution ("mode collapse") and that using the learned models for anything other than generating samples may be very difficult. In this paper, we examine the utility of GANs in learning statistical models of images by comparing them to perhaps the simplest statistical model, the Gaussian Mixture Model. First, we present a simple method to evaluate generative models based on relative proportions of samples that fall into predetermined bins. Unlike previous automatic methods for evaluating models, our method does not rely on an additional neural network nor does it require approximating intractable computations. Second, we compare the performance of GANs to GMMs trained on the same datasets. While GMMs have previously been shown to be successful in modeling small patches of images, we show how to train them on full sized images despite the high dimensionality. Our results show that GMMs can generate realistic samples (although less sharp than those of GANs) but also capture the full distribution, which GANs fail to do. Furthermore, GMMs allow efficient inference and explicit representation of the underlying statistical structure. Finally, we discuss how a pix2pix network can be used to add high-resolution details to GMM samples while maintaining the basic diversity."

--- I wonder if I need a "your favorite deep learning technique/architecture sucks" tag.
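--- The bin-proportion evaluation is simple enough to caricature in one dimension. (The paper builds its bins from the training data in high dimension; the quantile bins and the toy "mode collapse" below are stand-ins for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n):
    # Equal mixture of N(-2, 1) and N(2, 1).
    comp = rng.integers(2, size=n)
    return rng.normal(np.where(comp == 0, -2.0, 2.0), 1.0)

real = sample_mixture(20_000)
good = sample_mixture(20_000)             # generator matching the full distribution
collapsed = rng.normal(2.0, 1.0, 20_000)  # "mode collapse": one mode only

# Predetermined bins: deciles of the real sample.
edges = np.quantile(real, np.linspace(0, 1, 11))
edges[0], edges[-1] = -np.inf, np.inf

def bin_proportions(x):
    counts, _ = np.histogram(x, bins=edges)
    return counts / counts.sum()

p_real = bin_proportions(real)

def bin_tv(x):
    # Total-variation distance between binned proportions.
    return 0.5 * np.abs(bin_proportions(x) - p_real).sum()

print(bin_tv(good))       # small: full distribution captured
print(bin_tv(collapsed))  # large: the missing mode shows up immediately
```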
to:NB  neural_networks  mixture_models  high-dimensional_statistics  to_be_shot_after_a_fair_trial  via:arsyed 
13 days ago
Homicide database: Mapping unsolved murders in major U.S. cities - Washington Post
I've bookmarked the Pittsburgh sub-page, for teaching purposes, but the whole thing looks great (for insanely depressing values of "great").
crime  violence  visual_display_of_quantitative_information  to_teach:data_over_space_and_time 
14 days ago
Distinct encoding of decision confidence in human medial prefrontal cortex | PNAS
"Our confidence in a choice and the evidence pertaining to a choice appear to be inseparable. However, an emerging computational consensus holds that the brain should maintain separate estimates of these quantities for adaptive behavioral control. We have devised a psychophysical task to decouple confidence in a perceptual decision from both the reliability of sensory evidence and the relation of such evidence with respect to a choice boundary. Using human fMRI, we found that an area in the medial prefrontal cortex, the perigenual anterior cingulate cortex (pgACC), tracked expected performance, an aggregate signature of decision confidence, whereas neural areas previously proposed to encode decision confidence instead tracked sensory reliability (posterior parietal cortex and ventral striatum) or boundary distance (presupplementary motor area). Supporting that information encoded by pgACC is central to a subjective sense of decision confidence, we show that pgACC activity does not simply covary with expected performance, but is also linked to within-subject and between-subject variation in explicit confidence estimates. Our study is consistent with the proposal that the brain maintains choice-dependent and choice-independent estimates of certainty, and sheds light on why dysfunctional confidence often emerges following prefrontal lesions and/or degeneration."
to:NB  neuroscience  decision-making  psychology 
14 days ago
Field studies of psychologically targeted ads face threats to internal validity | PNAS
"The paper uses Facebook’s standard ad platform to compare how different versions of ads perform. However, this process does not create a randomized experiment: users are not randomly assigned to different ads, and individuals may even receive multiple ad types (e.g., both extroverted and introverted ads). Furthermore, ad platforms like Facebook optimize campaign performance by showing ads to users whom the platform expects are more likely to fulfill the campaign’s objective..."
to:NB  advertising  experimental_psychology  causal_inference  eckles.dean  facebook 
14 days ago
blogdown: Creating Websites with R Markdown
Since I am, as a loyal reader informs me, the last human being still using Blosxom...
blogged  R  to_read 
14 days ago
Solomon's Secret Arts | Yale University Press
"The late seventeenth and eighteenth centuries are known as the Age of Enlightenment, a time of science and reason. But in this illuminating book, Paul Monod reveals the surprising extent to which Newton, Boyle, Locke, and other giants of rational thought and empiricism also embraced the spiritual, the magical, and the occult.
"Although public acceptance of occult and magical practices waxed and waned during this period, they survived underground, experiencing a considerable revival in the mid-eighteenth century with the rise of new antiestablishment religious denominations. The occult spilled over into politics with the radicalism of the French Revolution and into literature in early Romanticism. Even when official disapproval was at its strongest, the evidence points to a growing audience for occult publications as well as to subversive popular enthusiasm. Ultimately, finds Monod, the occult was not discarded in favor of “reason” but was incorporated into new forms of learning. In that sense, the occult is part of the modern world, not simply a relic of an unenlightened past, and is still with us today."
to:NB  books:noted  history_of_ideas  psychoceramics  enlightenment  magic 
14 days ago
Leveraging Advances in Social Network Thinking for National Security: Proceedings of a Workshop | The National Academies Press
"Beginning in October 2017, the National Academies of Sciences, Engineering, and Medicine organized a set of workshops designed to gather information for the Decadal Survey of Social and Behavioral Sciences for Applications to National Security. The third workshop focused on advances in social network thinking, and this publication summarizes the presentations and discussions from this workshop."
to:NB  books:noted  social_networks  us_military  to_be_shot_after_a_fair_trial 
15 days ago
Choke Points: Logistics Workers Disrupting the Global Supply Chain, Alimahomed-Wilson, Ness
"The global economy seems indomitable. Goods travel all over the globe, supplying just-in-time retail stocks, keeping consumers satisfied and businesses profitable.
"But there are vulnerabilities, and Choke Points reveals them—and the ways that workers are finding ways to make use of the power that those choke points afford them. Exploring a number of case studies around the world, this book uncovers a little-known network of resistance by logistics workers worldwide who are determined to contest their exploitation by the forces of global capital. Through close accounts of wildcat strikes, roadblocks, and boycotts, from South China to Southern California, the contributors build a picture of a movement that flies under the radar, but carries the potential to force dramatic change."

--- Dividing through for the hopeful "The ancient prophecy will be fulfilled!" rhetoric, this is a very interesting phenomenon, and the question to my mind is why it isn't more common.
to:NB  books:noted  logistics  globalization  labor 
15 days ago
Hinterland: America’s New Landscape of Class and Conflict, Neel
"Over the last forty years, the human landscape of the United States has been fundamentally transformed. The metamorphosis is partially visible in the ascendance of glittering, coastal hubs for finance, infotech, and the so-called creative class. But this is only the tip of an economic iceberg, the bulk of which lies in the darkness of the declining heartland or on the dimly lit fringe of sprawling cities. This is America’s hinterland, populated by towering grain threshers and hunched farmworkers, where laborers drawn from every corner of the world crowd into factories and “fulfillment centers” and where cold storage trailers are filled with fentanyl-bloated corpses when the morgues cannot contain the dead.
"Urgent and unsparing, this book opens our eyes to America’s new heart of darkness. Driven by an ever-expanding socioeconomic crisis, America’s class structure is recomposing itself in new geographies of race, poverty, and production. The center has fallen. Riots ricochet from city to city led by no one in particular. Anarchists smash financial centers as a resurgent far right builds power in the countryside. Drawing on his direct experience of recent popular unrest, from the Occupy movement to the wave of riots and blockades that began in Ferguson, Missouri, Phil A. Neel provides a close-up view of this landscape in all its grim but captivating detail. Inaugurating the new Field Notes series, published in association with the Brooklyn Rail, Neel’s book tells the intimate story of a life lived within America’s hinterland."
to:NB  books:noted  something_about_america  whats_gone_wrong_with_america  travelers'_tales  class_struggles_in_america 
15 days ago
Demonising The Other: The Criminalisation of Morality, Whitehead
"Throughout history, societies have established “others”—groups, often defined through differences of culture, race, gender, or class, that have been demonized by the majority. In this book, Philip Whitehead challenges the idea that such demonization is an inevitable fact of life. He lays out the historical criminalization of the other and looks closely at modern attempts to prevent it through changes to criminal justice systems, ultimately questioning whether such approaches can be effective at altering the conditions of existence that are responsible for the creation of the other."
to:NB  books:noted  moral_psychology  crime  history_of_morals 
15 days ago
The Increasingly United States: How and Why American Political Behavior Nationalized, Hopkins
"In a campaign for state or local office these days, you’re as likely to hear accusations that an opponent advanced Obamacare or supported Donald Trump as you are to hear about issues affecting the state or local community. This is because American political behavior has become substantially more nationalized. American voters are far more engaged with and knowledgeable about what’s happening in Washington, DC, than in their own states and localities, and they hear similar messages whether they are in the South, the Northeast, or the Midwest. Gone are the days when all politics was local.
"With The Increasingly United States, Daniel J. Hopkins explores this trend and its implications for the American political system. The change is significant in part because it works against a key rationale of America’s federalist system, which was built on the assumption that citizens would be more strongly attached to their states and localities. It also has profound implications for how voters are represented. If voters are well informed about state politics, for example, the governor has an incentive to deliver what voters—or at least a pivotal segment of them—want. But if voters are likely to back the same party in gubernatorial as in presidential elections irrespective of the governor’s actions in office, governors may instead come to see their ambitions as tethered more closely to their status in the national party."
to:NB  books:noted  us_politics  political_science 
15 days ago
Culture and the Course of Human Evolution, Tomlinson
"The rapid evolutionary development of modern Homo sapiens over the past 200,000 years is a topic of fevered interest in numerous disciplines. How did humans, while undergoing few physical changes from their first arrival, so quickly develop the capacities to transform their world? Gary Tomlinson’s Culture and the Course of Human Evolution is aimed at both scientists and humanists, and it makes the case that neither side alone can answer the most important questions about our origins. 
"Tomlinson offers a new model for understanding this period in our emergence, one based on analysis of advancing human cultures in an evolution that was simultaneously cultural and biological—a biocultural evolution. He places front and center the emergence of culture and the human capacities to create it, in a fashion that expands the conceptual framework of recent evolutionary theory. His wide-ranging vision encompasses arguments on the development of music, modern technology, and metaphysics. At the heart of these developments, he shows, are transformations in our species’ particular knack for signmaking. With its innovative synthesis of humanistic and scientific ideas, this book will be an essential text."
to:NB  books:noted  human_evolution  cultural_evolution  cultural_transmission_of_cognitive_tools 
15 days ago
The misunderstanding of memes: Biography of an unscientific object, 1976–1999 | Perspectives on Science | MIT Press Journals
"When the “meme” was introduced in 1976, it was as a metaphor intended to illuminate an evolutionary argument. By the late-1980s, however, we see from its use in major US newspapers that this original meaning had become obscured. The meme became a virus of the mind. (In the UK, this occurred slightly later.) It is also now clear that this becoming involved complex sustained interactions between scholars, journalists, and the letter-writing public. We must therefore read the “meme” through lenses provided by its popularization. The results are in turn suggestive of the processes of meaning-construction in scholarly communication more generally."
to:NB  to_read  cultural_evolution  epidemiology_of_representations 
16 days ago
Regressus and Empiricism in the Controversy about Galileo’s Lunar Observations | Perspectives on Science | MIT Press Journals
"This paper defends a version of J. H. Randall’s thesis that modern empiricism is rooted in the Scholastic regressus method epitomized by Jacopo Zabarella in De Regressu (1578). Randall’s critics note that the empirical practice of Galileo and his contemporaries does not follow Zabarella. However, Zabarella’s account of the regressus is imprecise, which permitted an interpretation introducing empirical hypothesis testing into the framework. The discourse surrounding Galileo’s lunar observations in Sidereus Nuncius (1610) suggests that both Galileo and his interlocutors amended the regressus method in this way, such that a developmental narrative links Scholastic logic to Galilean science."
to:NB  history_of_science  history_of_ideas  philosophy_of_science  scientific_revolution  galileo 
16 days ago
Quantifying Selection Pressure | Evolutionary Computation | MIT Press Journals
"Selection is an essential component of any evolutionary system and analysing this fundamental force in evolution can provide relevant insights into the evolutionary development of a population. The 1990s and early 2000s saw a substantial number of publications that investigated selection pressure through methods such as takeover time and Markov chain analysis. Over the last decade, however, interest in the analysis of selection in evolutionary computing has waned. The established methods for analysis of selection pressure provide little insight when selection is based on more than comparison-of-fitness values. This can, for instance, be the case in coevolutionary systems, when measures unrelated to fitness affect the selection process (e.g., niching) or in systems that lack a crisply defined objective function. This article proposes two metrics that holistically consider the statistics of the evolutionary process to quantify selection pressure in evolutionary systems and so can be applied where traditionally used methods fall short. The metrics are based on a statistical analysis of the relation between reproductive success and a quantifiable trait: one method builds on an estimate of the probability that this relation is random; the other uses a correlation measure. These metrics provide convenient tools to analyse selection pressure and so allow researchers to better understand this crucial component of evolutionary systems. Both metrics are straightforward to implement and can be used in post-hoc analyses as well as during the evolutionary process, for example, to inform parameter control mechanisms. A number of case studies and a critical analysis show that the proposed metrics provide relevant and reliable measures of selection pressure."
to:NB  evolutionary_optimization 
16 days ago
Kingman : Subadditive Ergodic Theory
"It is now ten years since Hammersley and Welsh discovered (or invented) subadditive stochastic processes. Since then the theory has developed and deepened, new fields of application have been explored, and further challenging problems have arisen. This paper is a progress report on the last decade."
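--- A standard concrete example: the length of the longest common subsequence of two random strings is superadditive under concatenation, so the theory gives almost-sure convergence of L_n/n to a constant (the Chvátal-Sankoff constant, still not known exactly). A quick simulation; the string lengths are arbitrary:

```python
import random

random.seed(3)

def lcs_length(a, b):
    # Standard dynamic program for the longest common subsequence length.
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, start=1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

ratios = []
for n in (250, 500, 1000):
    a = [random.randint(0, 1) for _ in range(n)]
    b = [random.randint(0, 1) for _ in range(n)]
    r = lcs_length(a, b) / n
    ratios.append(r)
    print(n, round(r, 3))  # ratios stabilize (the limit is roughly 0.8 for binary strings)
```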
in_NB  stochastic_processes  ergodic_theory  have_read  re:almost_none 
18 days ago
Genetic instrumental variable regression: Explaining socioeconomic and health outcomes in nonexperimental data | PNAS
"Identifying causal effects in nonexperimental data is an enduring challenge. One proposed solution that recently gained popularity is the idea to use genes as instrumental variables [i.e., Mendelian randomization (MR)]. However, this approach is problematic because many variables of interest are genetically correlated, which implies the possibility that many genes could affect both the exposure and the outcome directly or via unobserved confounding factors. Thus, pleiotropic effects of genes are themselves a source of bias in nonexperimental data that would also undermine the ability of MR to correct for endogeneity bias from nongenetic sources. Here, we propose an alternative approach, genetic instrumental variable (GIV) regression, that provides estimates for the effect of an exposure on an outcome in the presence of pleiotropy. As a valuable byproduct, GIV regression also provides accurate estimates of the chip heritability of the outcome variable. GIV regression uses polygenic scores (PGSs) for the outcome of interest which can be constructed from genome-wide association study (GWAS) results. By splitting the GWAS sample for the outcome into nonoverlapping subsamples, we obtain multiple indicators of the outcome PGSs that can be used as instruments for each other and, in combination with other methods such as sibling fixed effects, can address endogeneity bias from both pleiotropy and the environment. In two empirical applications, we demonstrate that our approach produces reasonable estimates of the chip heritability of educational attainment (EA) and show that standard regression and MR provide upwardly biased estimates of the effect of body height on EA."

--- I don't see how this deals with environmental endogeneity, but I just skimmed it. For that matter, I don't really see how it can be a valid instrument. The classic instrument needs a graphical model like (forgive the ASCII art)
Y <-- X <-- V
i.e., there's a back-door path linking X and Y, but V saves us because it's an ancestor of Y and the _only_ path from V to Y goes through X. If I divide the genome into chunks, make one score calculated on chunk 1 my X, and make another score calculated on chunk 2 my V, how does V only get to cause Y through first affecting X?
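--- For concreteness, the textbook version of that graph, with an unobserved U creating the back-door path between X and Y, is easy to simulate, and shows the instrument doing its job (all coefficients and noise scales below are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

U = rng.normal(size=n)                      # unobserved confounder
V = rng.normal(size=n)                      # instrument: reaches Y only through X
X = V + U + rng.normal(size=n)
Y = 2.0 * X + 3.0 * U + rng.normal(size=n)  # true causal effect of X on Y is 2

# OLS is biased by the back-door path through U:
cxy = np.cov(X, Y)
beta_ols = cxy[0, 1] / cxy[0, 0]

# The IV (Wald) estimator uses only the variation in X induced by V:
beta_iv = np.cov(V, Y)[0, 1] / np.cov(V, X)[0, 1]

print(round(beta_ols, 2))  # ~3, pulled away from 2 by the confounding
print(round(beta_iv, 2))   # ~2, the true effect
```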
to:NB  to_read  causal_inference  human_genetics  instrumental_variables  statistics  heritability  to_be_shot_after_a_fair_trial 
20 days ago
Weak Galilean invariance as a selection principle for coarse-grained diffusive models | PNAS
"How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac–Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call “weak Galilean invariance.” Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data."
to:NB  stochastic_processes  macro_from_micro  hydrodynamics  physics  statistical_mechanics  classical_mechanics 
20 days ago
Lead pollution recorded in Greenland ice indicates European emissions tracked plagues, wars, and imperial expansion during antiquity | PNAS
"Lead pollution in Arctic ice reflects midlatitude emissions from ancient lead–silver mining and smelting. The few reported measurements have been extrapolated to infer the performance of ancient economies, including comparisons of economic productivity and growth during the Roman Republican and Imperial periods. These studies were based on sparse sampling and inaccurate dating, limiting understanding of trends and specific linkages. Here we show, using a precisely dated record of estimated lead emissions between 1100 BCE and 800 CE derived from subannually resolved measurements in Greenland ice and detailed atmospheric transport modeling, that annual European lead emissions closely varied with historical events, including imperial expansion, wars, and major plagues. Emissions rose coeval with Phoenician expansion, accelerated during expanded Carthaginian and Roman mining primarily in the Iberian Peninsula, and reached a maximum under the Roman Empire. Emissions fluctuated synchronously with wars and political instability particularly during the Roman Republic, and plunged coincident with two major plagues in the second and third centuries, remaining low for >500 years. Bullion in silver coinage declined in parallel, reflecting the importance of lead–silver mining in ancient economies. Our results indicate sustained economic growth during the first two centuries of the Roman Empire, terminated by the second-century Antonine plague."
to:NB  climatology  ancient_history  roman_empire 
20 days ago
An elementary proof of a theorem of Johnson and Lindenstrauss - Dasgupta - 2003 - Random Structures & Algorithms - Wiley Online Library
"A result of Johnson and Lindenstrauss [13] shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ϵ²)-dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ϵ). In this note, we prove this theorem using elementary probabilistic techniques."

Ungated: http://cseweb.ucsd.edu/~dasgupta/papers/jl.pdf
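The theorem is also easy to check empirically with a scaled Gaussian random projection; the target dimension below uses the paper's bound, and the point set is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, eps = 50, 10_000, 0.25
# Target dimension O(log n / eps^2), with the constant from the paper's bound.
k = int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

X = rng.normal(size=(n, d))                # n arbitrary points in R^d
R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian projection
Y = X @ R

def pdist_sq(A):
    # Matrix of squared pairwise Euclidean distances.
    G = A @ A.T
    s = np.diag(G)
    return s[:, None] + s[None, :] - 2.0 * G

mask = ~np.eye(n, dtype=bool)
ratios = pdist_sq(Y)[mask] / pdist_sq(X)[mask]

print(k, d)                        # projected dimension is far below d
print(ratios.min(), ratios.max())  # concentrated near 1 (within 1 +/- eps w.h.p.)
```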
to:NB  random_projections  geometry  dimension_reduction  have_read  re:ADAfaEPoV 
20 days ago
"Correlates of homicide: new space/time interaction tests for spatiotem" by Seth R. Flaxman, Daniel B. Neill et al.
"Statistical inference on spatiotemporal data often proceeds by focusing on the temporal aspect of the data, ignoring space, or the spatial aspect, ignoring time. In this paper, we explicitly focus on the interaction between space and time. Using a geocoded, time-stamped dataset from Chicago of almost 9 million calls to 911 between 2007 and 2010, we ask whether any of these call types are associated with shootings or homicides. Standard correlation techniques do not produce meaningful results in the spatiotemporal setting because of two confounds: purely spatial effects (i.e. “bad” neighborhoods) and purely temporal effects (i.e. more crimes in the summer) could introduce spurious correlations. To address this issue, a handful of statistical tests for space-time interaction have been proposed, which explicitly control for separable spatial and temporal dependencies. Yet these classical tests each have limitations. We propose a new test for space-time interaction, using a Mercer kernel-based statistic for measuring the distance between probability distributions. We compare our new test to existing tests on simulated and real data, where it performs comparably to or better than the classical tests. For the application we consider, we find a number of interesting and significant space-time interactions between 911 call types and shootings/homicides."
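--- The classical tests alluded to include Knox's: count event pairs close in both space and time, and build a null distribution by permuting times against locations, which preserves purely spatial and purely temporal clustering while destroying any interaction. A sketch on synthetic data (all scales, thresholds, and counts invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Events with genuine space-time interaction: "outbreaks" localized in
# both space and time, plus uniform background noise.
centers = rng.uniform(0, 10, size=(5, 3))  # (x, y, t) per outbreak
burst = centers[rng.integers(5, size=150)] + rng.normal(0, 0.3, size=(150, 3))
noise = rng.uniform(0, 10, size=(100, 3))
events = np.vstack([burst, noise])
xy, t = events[:, :2], events[:, 2]

space_close = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) < 1.0

def knox(t, dt=1.0):
    # Number of pairs close in BOTH space and time (self-pairs excluded).
    both = space_close & (np.abs(t[:, None] - t[None, :]) < dt)
    return (both.sum() - len(t)) // 2

observed = knox(t)
# Null: shuffle times against locations, breaking the interaction only.
null = np.array([knox(rng.permutation(t)) for _ in range(199)])
p = (1 + (null >= observed).sum()) / (1 + len(null))
print(p)  # small: the space-time interaction is detected
```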
in_NB  point_processes  crime  spatio-temporal_statistics  statistics  dependence_measures  flaxman.seth  smola.alex 
21 days ago
[1610.08623] Poisson intensity estimation with reproducing kernels
"Despite the fundamental nature of the inhomogeneous Poisson process in the theory and application of stochastic processes, and its attractive generalizations (e.g. Cox process), few tractable nonparametric modeling approaches of intensity functions exist, especially when observed points lie in a high-dimensional space. In this paper we develop a new, computationally tractable Reproducing Kernel Hilbert Space (RKHS) formulation for the inhomogeneous Poisson process. We model the square root of the intensity as an RKHS function. Whereas RKHS models used in supervised learning rely on the so-called representer theorem, the form of the inhomogeneous Poisson process likelihood means that the representer theorem does not apply. However, we prove that the representer theorem does hold in an appropriately transformed RKHS, guaranteeing that the optimization of the penalized likelihood can be cast as a tractable finite-dimensional problem. The resulting approach is simple to implement, and readily scales to high dimensions and large-scale datasets."
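--- Simulating from an inhomogeneous Poisson process, e.g. to test such estimators, is straightforward via Lewis-Shedler thinning; a sketch with an assumed intensity, written as a square to match the paper's square-root parameterization:

```python
import numpy as np

rng = np.random.default_rng(7)

T = 10.0

def lam(t):
    # Intensity written as f(t)^2, with f(t) = sqrt(2.5) * (1 + sin t).
    return (np.sqrt(2.5) * (1.0 + np.sin(t))) ** 2

lam_max = 10.0  # global bound: 2.5 * (1 + sin t)^2 <= 10

# Lewis-Shedler thinning: draw a homogeneous process at rate lam_max on
# [0, T], keep each candidate point s with probability lam(s) / lam_max.
n = rng.poisson(lam_max * T)
candidates = rng.uniform(0, T, size=n)
keep = rng.uniform(size=n) < lam(candidates) / lam_max
points = np.sort(candidates[keep])

# The retained count is Poisson with mean equal to the integral of lam
# over [0, T] (about 46 for this intensity).
print(len(points))
```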
in_NB  point_processes  statistics  nonparametrics  hilbert_space  flaxman.seth 
21 days ago
[1611.06713] Is Gun Violence Contagious?
"Existing theories of gun violence predict stable spatial concentrations and contagious diffusion of gun violence into surrounding areas. Recent empirical studies have reported confirmatory evidence of such spatiotemporal diffusion of gun violence. However, existing tests cannot readily distinguish spatiotemporal clustering from spatiotemporal diffusion. This leaves as an open question whether gun violence actually is contagious or merely clusters in space and time. Compounding this problem, gun violence is subject to considerable measurement error with many nonfatal shootings going unreported to police. Using point process data from an acoustical gunshot locator system and a combination of Bayesian spatiotemporal point process modeling and space/time interaction tests, this paper demonstrates that contemporary urban gun violence does diffuse, but only slightly, suggesting that a disease model for infectious spread of gun violence is a poor fit for the geographically stable and temporally stochastic process observed."
in_NB  crime  spatio-temporal_statistics  point_processes  social_influence  flaxman.seth 
21 days ago
Quantitative Methods in Archaeology Using R | Archaeological theory and methods | Cambridge University Press
"Quantitative Methods in Archaeology Using R is the first hands-on guide to using the R statistical computing system written specifically for archaeologists. It shows how to use the system to analyze many types of archaeological data. Part I includes tutorials on R, with applications to real archaeological data showing how to compute descriptive statistics, create tables, and produce a wide variety of charts and graphs. Part II addresses the major multivariate approaches used by archaeologists, including multiple regression (and the generalized linear model); multiple analysis of variance and discriminant analysis; principal components analysis; correspondence analysis; distances and scaling; and cluster analysis. Part III covers specialized topics in archaeology, including intra-site spatial analysis, seriation, and assemblage diversity."

--- This looks like it might be an interesting source of teaching examples. (OTOH, I'm not sure how many of The Kids would get it...)
to:NB  books:noted  archaeology  statistics  R 
21 days ago
[1805.10204] Adversarial examples from computational constraints
"Why are classifiers in high dimension vulnerable to "adversarial" perturbations? We show that it is likely not due to information theoretic limitations, but rather it could be due to computational constraints.
"First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give a particular classification task where learning a robust classifier is computationally intractable. More precisely we construct a binary classification task in high dimensional space which is (i) information theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet is not efficiently robustly learnable, even for small perturbations, by any algorithm in the statistical query (SQ) model. This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms."
in_NB  adversarial_examples  computational_complexity  machine_learning  classifiers  have_read  bubeck.sebastien 
22 days ago
[1805.09966] Prestige drives epistemic inequality in the diffusion of scientific ideas
"The spread of ideas in the scientific community is often viewed as a competition, in which good ideas spread further because of greater intrinsic fitness. As a result, it is commonly believed that publication venue and citation counts correlate with importance and impact. However, relatively little is known about how structural factors influence the spread of ideas, and specifically how where an idea originates can influence how it spreads. Here, we investigate the role of faculty hiring networks, which embody the set of researcher transitions from doctoral to faculty institutions, in shaping the spread of ideas in computer science, and the importance of where in the network an idea originates. We consider comprehensive data on the hiring events of 5,032 faculty at all 205 Ph.D.-granting departments of computer science in the U.S. and Canada, and on the timing and titles of 200,476 associated publications. Analyzing three popular research topics, we show empirically that faculty hiring plays a significant role in driving the spread of ideas across the community. We then use epidemic models to simulate the generic spread of research ideas and quantify the consequences of where an idea originates on its longterm diffusion across the network. We find that research from prestigious institutions spreads more quickly and completely than work of similar quality originating from less prestigious institutions. Our analyses establish the theoretical trade-offs between university prestige and the quality of ideas necessary for efficient circulation. These results suggest a lower bound for epistemic inequality, identify a mechanism for the persistent epistemic advantage observed for elite institutions, and highlight limitations for meritocratic ideals."
to:NB  have_read  sociology_of_science  science_as_a_social_process  epidemiology_of_ideas  kith_and_kin  clauset.aaron  re:do-institutions-evolve  social_networks  epidemic_models  to_teach:baby-nets  academia 
22 days ago
One Parameter Is Always Enough
This is very cute, and deserves some attention.
Being me, I'll grump a bit. As he says at the end, this is just an elaboration of the point that we get a class of _infinite_ VC dimension by thresholding sin(kx), i.e., we can correctly classify any arbitrarily large set of points by tweaking k appropriately. Since this was known in the 1970s (proved by V&C themselves, if memory serves), it's been kind of insane that people continue to count parameters and pat themselves on the back about Occam. (See http://bactra.org/weblog/921.html et seq. for more.) Still, the elephant is nice.
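To make the sin(kx) point concrete, here is the classical shattering construction, reconstructed from memory (so treat the details as mine): on the points x_i = 2^{-i}, a single frequency k realizes any desired labeling, because the labels are encoded in the binary digits of k/pi.

```python
import math

def shattering_k(labels):
    # Encode the labels in the binary expansion of k / pi:
    # b_i = 1 exactly when the i-th label should be -1.
    b = [(1 - y) // 2 for y in labels]
    return math.pi * (1 + sum(bi * 2 ** (i + 1) for i, bi in enumerate(b)))

def classify(k, x):
    return 1 if math.sin(k * x) > 0 else -1

labels = [1, -1, -1, 1, -1, 1, 1, -1]        # any labeling works
xs = [2.0 ** -(i + 1) for i in range(len(labels))]
k = shattering_k(labels)
assert [classify(k, x) for x in xs] == labels
```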
model_selection  approximation  dynamical_systems  chaos  have_read  to:blog 
22 days ago
[1804.10611] On the Estimation of Latent Distances Using Graph Distances
"We are given the adjacency matrix of a geometric graph and the task of recovering the latent positions. We study one of the most popular approaches which consists in using the graph distances and derive error bounds under various assumptions on the link function. In the simplest case where the link function is an indicator function, the bound is (nearly) optimal as it (nearly) matches an information lower bound."
to:NB  network_data_analysis  statistics  re:hyperbolic_networks  via:ale  estimation  inference_to_latent_objects  to_read 
24 days ago
Inference of ecological and social drivers of human brain-size evolution | Nature
"The human brain is unusually large. It has tripled in size from Australopithecines to modern humans1 and has become almost six times larger than expected for a placental mammal of human size2. Brains incur high metabolic costs3 and accordingly a long-standing question is why the large human brain has evolved4. The leading hypotheses propose benefits of improved cognition for overcoming ecological5,6,7, social8,9,10 or cultural11,12,13,14 challenges. However, these hypotheses are typically assessed using correlative analyses, and establishing causes for brain-size evolution remains difficult15,16. Here we introduce a metabolic approach that enables causal assessment of social hypotheses for brain-size evolution. Our approach yields quantitative predictions for brain and body size from formalized social hypotheses given empirical estimates of the metabolic costs of the brain. Our model predicts the evolution of adult Homo sapiens-sized brains and bodies when individuals face a combination of 60% ecological, 30% cooperative and 10% between-group competitive challenges, and suggests that between-individual competition has been unimportant for driving human brain-size evolution. Moreover, our model indicates that brain expansion in Homo was driven by ecological rather than social challenges, and was perhaps strongly promoted by culture. Our metabolic approach thus enables causal assessments that refine, refute and unify hypotheses of brain-size evolution."

--- I have no idea how they could possibly validate such a model (it's not like there are comparative cases!), so the last tag applies with force.
to:NB  to_read  human_evolution  to_be_shot_after_a_fair_trial 
26 days ago
Teotihuacan - Edited by Matthew Robb - Hardcover - University of California Press
"Founded in the first century BCE near a set of natural springs in an otherwise dry northeastern corner of the Valley of Mexico, the ancient metropolis of Teotihuacan was on a symbolic level a city of elements. With a multiethnic population of perhaps one hundred thousand, at its peak in 400 CE, it was the cultural, political, economic, and religious center of ancient Mesoamerica. A devastating fire in the city center led to a rapid decline after the middle of the sixth century, but Teotihuacan was never completely abandoned or forgotten; the Aztecs revered the city and its monuments, giving many of them the names we still use today.
"Teotihuacan: City of Water, City of Fire examines new discoveries from the three main pyramids at the site—the Sun Pyramid, the Moon Pyramid, and, at the center of the Ciudadela complex, the Feathered Serpent Pyramid—which have fundamentally changed our understanding of the city’s history. With illustrations of the major objects from Mexico City’s Museo Nacional de Antropología and from the museums and storage facilities of the Zona de Monumentos Arqueológicos de Teotihuacan, along with selected works from US and European collections, the catalogue examines these cultural artifacts to understand the roles that offerings of objects and programs of monumental sculpture and murals throughout the city played in the lives of Teotihuacan’s citizens. "
to:NB  books:noted  ancient_history  art_history  native_american_history 
28 days ago
Large-scale kernel methods for independence testing | SpringerLink
"Representations of probability measures in reproducing kernel Hilbert spaces provide a flexible framework for fully nonparametric hypothesis tests of independence, which can capture any type of departure from independence, including nonlinear associations and multivariate interactions. However, these approaches come with an at least quadratic computational cost in the number of observations, which can be prohibitive in many applications. Arguably, it is exactly in such large-scale datasets that capturing any type of dependence is of interest, so striking a favourable trade-off between computational efficiency and test performance for kernel independence tests would have a direct impact on their applicability in practice. In this contribution, we provide an extensive study of the use of large-scale kernel approximations in the context of independence testing, contrasting block-based, Nyström and random Fourier feature approaches. Through a variety of synthetic data experiments, it is demonstrated that our large-scale methods give comparable performance with existing methods while using significantly less computation time and memory."
to:NB  computational_statistics  dependence_measures  hypothesis_testing  statistics  kernel_methods  hilbert_space  gretton.arthur 
28 days ago
Bootstrap bias corrections for ensemble methods | SpringerLink
"This paper examines the use of a residual bootstrap for bias correction in machine learning regression methods. Accounting for bias is an important obstacle in recent efforts to develop statistical inference for machine learning. We demonstrate empirically that the proposed bootstrap bias correction can lead to substantial improvements in both bias and predictive accuracy. In the context of ensembles of trees, we show that this correction can be approximated at only double the cost of training the original ensemble. Our method is shown to improve test set accuracy over random forests by up to 70% on example problems from the UCI repository."
to:NB  ensemble_methods  prediction  bootstrap  hooker.giles  statistics 
28 days ago
The stochastic topic block model for the clustering of vertices in networks with textual edges | SpringerLink
"Due to the significant increase of communications between individuals via social media (Facebook, Twitter, Linkedin) or electronic formats (email, web, e-publication) in the past two decades, network analysis has become an unavoidable discipline. Many random graph models have been proposed to extract information from networks based on person-to-person links only, without taking into account information on the contents. This paper introduces the stochastic topic block model, a probabilistic model for networks with textual edges. We address here the problem of discovering meaningful clusters of vertices that are coherent from both the network interactions and the text contents. A classification variational expectation-maximization algorithm is proposed to perform inference. Simulated datasets are considered in order to assess the proposed approach and to highlight its main features. Finally, we demonstrate the effectiveness of our methodology on two real-word datasets: a directed communication network and an undirected co-authorship network."
in_NB  text_mining  network_data_analysis  statistics  to_teach:baby-nets 
28 days ago
Bootstrap methods for stationary functional time series | SpringerLink
"Bootstrap methods for estimating the long-run covariance of stationary functional time series are considered. We introduce a versatile bootstrap method that relies on functional principal component analysis, where principal component scores can be bootstrapped by maximum entropy. Two other bootstrap methods resample error functions, after the dependence structure being modeled linearly by a sieve method or nonlinearly by a functional kernel regression. Through a series of Monte-Carlo simulation, we evaluate and compare the finite-sample performances of these three bootstrap methods for estimating the long-run covariance in a functional time series. Using the intraday particulate matter ( PM10PM10 ) dataset in Graz, the proposed bootstrap methods provide a way of constructing the distribution of estimated long-run covariance for functional time series."
to:NB  bootstrap  time_series  functional_data_analysis  statistics 
28 days ago
Experimenter’s regress argument, empiricism, and the calibration of the large hadron collider | SpringerLink
"H. Collins has challenged the empiricist understanding of experimentation by identifying what he thinks constitutes the experimenter’s regress: an instrument is deemed good because it produces good results, and vice versa. The calibration of an instrument cannot alone validate the results: the regressive circling is broken by an agreement essentially external to experimental procedures. In response, A. Franklin has argued that calibration is a key reasonable strategy physicists use to validate production of results independently of their interpretation. The physicists’ arguments about the merits of calibration are not coextensive with the interpretation of results, and thus an objective validation of results is possible. I argue, however, that the in-situ calibrating and measurement procedures and parameters at the Large Hadron Collider are closely and systematically interrelated. This requires empiricists to question their insistence on the independence of calibration from the outcomes of the experiment and rethink their position. Yet this does not leave the case of in-situ calibration open to the experimenter’s regress argument; it is predicated on too crude a view of the relationship between calibration and measurement that fails to capture crucial subtleties of the case."
to:NB  philosophy_of_science  particle_physics 
28 days ago
Boosting conditional probability estimators | SpringerLink
"In the standard agnostic multiclass model, <instance, label > pairs are sampled independently from some underlying distribution. This distribution induces a conditional probability over the labels given an instance, and our goal in this paper is to learn this conditional distribution. Since even unconditional densities are quite challenging to learn, we give our learner access to <instance, conditional distribution > pairs. Assuming a base learner oracle in this model, we might seek a boosting algorithm for constructing a strong learner. Unfortunately, without further assumptions, this is provably impossible. However, we give a new boosting algorithm that succeeds in the following sense: given a base learner guaranteed to achieve some average accuracy (i.e., risk), we efficiently construct a learner that achieves the same level of accuracy with arbitrarily high probability. We give generalization guarantees of several different kinds, including distribution-free accuracy and risk bounds. None of our estimates depend on the number of boosting rounds and some of them admit dimension-free formulations."
to:NB  boosting  density_estimation  kith_and_kin  kontorovich.aryeh  statistics 
28 days ago
Advancing Concepts and Models for Measuring Innovation: Proceedings of a Workshop | The National Academies Press
"Because of the role of innovation as a driver of economic productivity and growth and as a mechanism for improving people's well-being in other ways, understanding the nature,determinants, and impacts of innovation has become increasingly important to policy makers.
"To be effective, investment in innovation requires this understanding, which, in turn, requires measurement of the underlying inputs and subsequent outcomes of innovation processes. In May 2016, at the request of the National Center for Science and Engineering Statistics of the National Science Foundation, the Committee on National Statistics of the National Academies of Sciences, Engineering, and Medicine convened a workshop - bringing together academic researchers, private and public sector experts, and representatives from public policy agencies - to develop strategies for broadening and modernizing innovation information systems.This publication summarizes the presentation and discussion of the event."
to:NB  books:noted  innovation  social_measurement 
28 days ago
Innovations in Federal Statistics: Combining Data Sources While Protecting Privacy | The National Academies Press
"Federal government statistics provide critical information to the country and serve a key role in a democracy. For decades, sample surveys with instruments carefully designed for particular data needs have been one of the primary methods for collecting data for federal statistics. However, the costs of conducting such surveys have been increasing while response rates have been declining, and many surveys are not able to fulfill growing demands for more timely information and for more detailed information at state and local levels.
"Innovations in Federal Statistics examines the opportunities and risks of using government administrative and private sector data sources to foster a paradigm shift in federal statistical programs that would combine diverse data sources in a secure manner to enhance federal statistics. This first publication of a two-part series discusses the challenges faced by the federal statistical system and the foundational elements needed for a new paradigm."
to:NB  books:noted  statistics  record_linkage 
28 days ago
Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide | The National Academies Press
"The social cost of carbon (SC-CO2) is an economic metric intended to provide a comprehensive estimate of the net damages - that is, the monetized value of the net impacts, both negative and positive - from the global climate change that results from a small (1-metric ton) increase in carbon-dioxide (CO2) emissions. Under Executive Orders regarding regulatory impact analysis and as required by a court ruling, the U.S. government has since 2008 used estimates of the SC-CO2 in federal rulemakings to value the costs and benefits associated with changes in CO2 emissions. In 2010, the Interagency Working Group on the Social Cost of Greenhouse Gases (IWG) developed a methodology for estimating the SC-CO2 across a range of assumptions about future socioeconomic and physical earth systems.
"Valuing Climate Changes examines potential approaches, along with their relative merits and challenges, for a comprehensive update to the current methodology. This publication also recommends near- and longer-term research priorities to ensure that the SC- CO2 estimates reflect the best available science."
to:NB  books:noted  climate_change  economics  carbon_pricing 
28 days ago
Haunted | Yale University Press
"Leo Braudy, a finalist for both the National Book Award and the National Book Critics Circle Award, has won accolades for revealing the complex and constantly shifting history behind seemingly unchanging ideas of fame, war, and masculinity.
"Continuing his interest in the history of emotion, this book explores how fear has been shaped into images of monsters and monstrosity. From the Protestant Reformation to contemporary horror films and fiction, he explores four major types: the monster from nature (King Kong), the created monster (Frankenstein), the monster from within (Mr. Hyde), and the monster from the past (Dracula). Drawing upon deep historical and literary research, Braudy discusses the lasting presence of fearful imaginings in an age of scientific progress, viewing the detective genre as a rational riposte to the irrational world of the monstrous. Haunted is a compelling and incisive work by a writer at the height of his powers."
to:NB  books:noted  history_of_ideas  literary_history  horror 
28 days ago
[1805.06005] Reconstructing mesoscale network structures
"When facing complex mesoscale network structures, it is generally believed that (null) models encoding the modular organization of nodes must be employed. The present paper focuses on two block structures that characterize the mesoscale organization of many real-world networks, i.e. the bow-tie and the core-periphery ones. Our analysis shows that constraining the network degree sequence is often enough to reproduce such structures, as confirmed by model selection criteria as AIC or BIC. As a byproduct, our paper enriches the toolbox for the analysis of bipartite networks - still far from being complete. The aforementioned structures, in fact, partition the networks into asymmetric blocks characterized by binary, directed connections, thus calling for the extension of a recently-proposed method to randomize undirected, bipartite networks to the directed case."
to:NB  network_data_analysis  statistics 
28 days ago
Economic Science Fictions | The MIT Press
"From the libertarian economics of Ayn Rand to Aldous Huxley's consumerist dystopias, economics and science fiction have often orbited each other. In Economic Science Fictions, editor William Davies has deliberately merged the two worlds, asking how we might harness the power of the utopian imagination to revitalize economic thinking.
"Rooted in the sense that our current economic reality is no longer credible or viable, this collection treats our economy as a series of fictions and science fiction as a means of anticipating different economic futures. It asks how science fiction can motivate new approaches to economics and provides surprising new syntheses, merging social science with fiction, design with politics, scholarship with experimental forms. With an opening chapter from Ha-Joon Chang as well as theory, short stories, and reflections on design, this book from Goldsmiths Press challenges and changes the notion that economics and science fiction are worlds apart. The result is a wealth of fresh and unusual perspectives for anyone who believes the economy is too important to be left solely to economists."

--- Re "no longer credible", I am irresistible reminded of an ancient joke.
Preacher: Sir, do you believe in infant baptism?
Layman: Believe in it? Why, Reverend, I've seen it done!
Still, this looks right up my alley.
to:NB  books:noted  economics  science_fiction 
4 weeks ago
Defending Hierarchy from the Moon to the Indian Ocean: Symbolic Capital and Political Dominance in Early Modern China and the Cold War | International Organization | Cambridge Core
"Why do leading actors invest in costly projects that they expect will not yield appreciable military or economic benefits? We identify a causal process in which concerns about legitimacy produce attempts to secure dominance in arenas of high symbolic value by investing wealth and labor into unproductive (in direct military and economic terms) goods and performances. We provide evidence for our claims through a comparative study of the American Project Apollo and the Ming Dynasty's treasure fleets. We locate our argument within a broader constructivist and practice-theoretic understanding of hierarchy and hegemony. We build on claims that world politics is a sphere of complex social stratification by viewing constituent hierarchies in terms of social fields. Our specific theory and broader framework, we contend, provide tools for understanding the workings of power politics beyond military and economic competition."
to:NB  comparative_history  space_exploration  ming_dynasty 
4 weeks ago
[1804.07203] The Hardness of Conditional Independence Testing and the Generalised Covariance Measure
"It is a common saying that testing for conditional independence, i.e., testing whether X is independent of Y, given Z, is a hard statistical problem if Z is a continuous random variable. In this paper, we prove that conditional independence is indeed a particularly difficult hypothesis to test for. Statistical tests are required to have a size that is smaller than a predefined significance level, and different tests usually have power against a different class of alternatives. We prove that a valid test for conditional independence does not have power against any alternative.
"Given the non-existence of a uniformly valid conditional independence test, we argue that tests must be designed so their suitability for a particular problem setting may be judged easily. To address this need, we propose in the case where X and Y are univariate to nonlinearly regress X on Z, and Y on Z and then compute a test statistic based on the sample covariance between the residuals, which we call the generalised covariance measure (GCM). We prove that validity of this form of test relies almost entirely on the weak requirement that the regression procedures are able to estimate the conditional means X given Z, and Y given Z, at a slow rate. We extend the methodology to handle settings where X and Y may be multivariate or even high-dimensional.
"While our general procedure can be tailored to the setting at hand by combining it with any regression technique, we develop the theoretical guarantees for kernel ridge regression. A simulation study shows that the test based on GCM is competitive with state of the art conditional independence tests. Code will be available as an R package."
to:NB  independence_testing  hypothesis_testing  statistics  causal_discovery  heard_the_talk  to_read  peters.jonas  nonparametrics  have_skimmed 
4 weeks ago
[1706.08576] Invariant Causal Prediction for Nonlinear Models
"An important problem in many domains is to predict how a system will respond to interventions. This task is inherently linked to estimating the system's underlying causal structure. To this end, 'invariant causal prediction' (ICP) (Peters et al., 2016) has been proposed which learns a causal model exploiting the invariance of causal relations using data from different environments. When considering linear models, the implementation of ICP is relatively straight-forward. However, the nonlinear case is more challenging due to the difficulty of performing nonparametric tests for conditional independence. In this work, we present and evaluate an array of methods for nonlinear and nonparametric versions of ICP for learning the causal parents of given target variables. We find that an approach which first fits a nonlinear model with data pooled over all environments and then tests for differences between the residual distributions across environments is quite robust across a large variety of simulation settings. We call this procedure "Invariant residual distribution test". In general, we observe that the performance of all approaches is critically dependent on the true (unknown) causal structure and it becomes challenging to achieve high power if the parental set includes more than two variables. As a real-world example, we consider fertility rate modelling which is central to world population projections. We explore predicting the effect of hypothetical interventions using the accepted models from nonlinear ICP. The results reaffirm the previously observed central causal role of child mortality rates."
to:NB  causal_inference  causal_discovery  statistics  regression  prediction  peters.jonas  meinshausen.nicolai  to_read  heard_the_talk  to_teach:undergrad-ADA  re:ADAfaEPoV 
4 weeks ago
[1501.01332] Causal inference using invariant prediction: identification and confidence intervals
"What is the difference of a prediction that is made with a causal model and a non-causal model? Suppose we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (for example various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the example of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large-scale gene perturbation experiments."
to:NB  to_read  causal_inference  causal_discovery  statistics  prediction  regression  buhlmann.peter  meinshausen.nicolai  peters.jonas  heard_the_talk  re:ADAfaEPoV  to_teach:undergrad-ADA 
4 weeks ago
Social Learning Strategies: Bridge-Building between Fields: Trends in Cognitive Sciences
"While social learning is widespread, indiscriminate copying of others is rarely beneficial. Theory suggests that individuals should be selective in what, when, and whom they copy, by following ‘social learning strategies’ (SLSs). The SLS concept has stimulated extensive experimental work, integrated theory, and empirical findings, and created impetus to the social learning and cultural evolution fields. However, the SLS concept needs updating to accommodate recent findings that individuals switch between strategies flexibly, that multiple strategies are deployed simultaneously, and that there is no one-to-one correspondence between psychological heuristics deployed and resulting population-level patterns. The field would also benefit from the simultaneous study of mechanism and function. SLSs provide a useful vehicle for bridge-building between cognitive psychology, neuroscience, and evolutionary biology."
to:NB  social_learning  cultural_transmission  cultural_evolution  cognitive_science  social_influence  re:do-institutions-evolve 
4 weeks ago
Poldrack, R.: The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts (Hardcover) | Princeton University Press
"The ability to read minds has long been the stuff of science fiction, but revolutionary new brain-imaging methods are bringing it closer to scientific reality. The New Mind Readers provides a compelling look at the origins, development, and future of these extraordinary tools, revealing how they are increasingly being used to decode our thoughts and experiences—and how this raises sometimes troubling questions about their application in domains such as marketing, politics, and the law.
"Russell Poldrack takes readers on a journey of scientific discovery, telling the stories of the visionaries behind these breakthroughs. Along the way, he gives an insider’s perspective on what is perhaps the single most important technology in cognitive neuroscience today—functional magnetic resonance imaging, or fMRI, which is providing astonishing new insights into the contents and workings of the mind. He highlights both the amazing power and major limitations of these techniques and describes how applications outside the lab often exceed the bounds of responsible science. Poldrack also details the unique and sometimes disorienting experience of having his own brain scanned more than a hundred times as part of a landmark study of how human brain function changes over time.
"Written by one of the world’s leading pioneers in the field, The New Mind Readers cuts through the hype and misperceptions surrounding these emerging new methods, offering needed perspective on what they can and cannot do—and demonstrating how they can provide new answers to age-old questions about the nature of consciousness and what it means to be human."

--- Poldrack is great and has his head screwed on straight (you should pardon the expression), so I really look forward to this.
to:NB  books:noted  fmri  neuroscience  cognitive_science  poldrack.russell_a.  popular_science 
4 weeks ago
Steven Pinker’s book Enlightenment Now is a huge hit. Too bad it gets the Enlightenment wrong. - Vox
This hints at an interesting study to be written, about how intellectuals freeze their views of disciplines other than their own at an early age, and indeed keep propagating ideas they picked up, perhaps without realizing it, in college or graduate school, long after those ideas have been corrected in their home disciplines. In this case, I suspect that a lot of what Pinker (Ph.D., 1979) says extrapolates from conditions of the 1970s...

enlightenment  history_of_ideas  why_oh_why_cant_we_have_a_better_intelligentsia  pinker.steven 
4 weeks ago
Comfort history | Review: Enlightenment Now by Steven Pinker
The estimable David Wootton (who wrote a book advocating a literally Whiggish interpretation of the Scientific Revolution) takes a turn bashing his head against the wall.
book_reviews  enlightenment  pinker.steven  wootton.david  why_oh_why_cant_we_have_a_better_intelligentsia 
4 weeks ago
137 ancient human genomes from across the Eurasian steppes | Nature
"For thousands of years the Eurasian steppes have been a centre of human migrations and cultural change. Here we sequence the genomes of 137 ancient humans (about 1× average coverage), covering a period of 4,000 years, to understand the population history of the Eurasian steppes after the Bronze Age migrations. We find that the genetics of the Scythian groups that dominated the Eurasian steppes throughout the Iron Age were highly structured, with diverse origins comprising Late Bronze Age herders, European farmers and southern Siberian hunter-gatherers. Later, Scythians admixed with the eastern steppe nomads who formed the Xiongnu confederations, and moved westward in about the second or third century BC, forming the Hun traditions in the fourth–fifth century AD, and carrying with them plague that was basal to the Justinian plague. These nomads were further admixed with East Asian groups during several short-term khanates in the Medieval period. These historical events transformed the Eurasian steppes from being inhabited by Indo-European speakers of largely West Eurasian ancestry to the mostly Turkic-speaking groups of the present day, who are primarily of East Asian ancestry."

--- Slightly trollish summary: Genetics proves that Aryans are losers.
to:NB  central_asia  historical_genetics 
4 weeks ago
Industry and Intelligence - Contemporary Art Since 1820 | Columbia University Press
"The history of modern art is often told through aesthetic breakthroughs that sync well with cultural and political change. From Courbet to Picasso, from Malevich to Warhol, it is accepted that art tracks the disruptions of industrialization, fascism, revolution, and war. Yet filtering the history of modern art only through catastrophic events cannot account for the subtle developments that lead to the profound confusion at the heart of contemporary art.
"In Industry and Intelligence, the artist Liam Gillick writes a nuanced genealogy to help us appreciate contemporary art's engagement with history even when it seems apathetic or blind to current events. Taking a broad view of artistic creation from 1820 to today, Gillick follows the response of artists to incremental developments in science, politics, and technology. The great innovations and dislocations of the nineteenth and twentieth centuries have their place in this timeline, but their traces are alternately amplified and diminished as Gillick moves through artistic reactions to liberalism, mass manufacturing, psychology, nuclear physics, automobiles, and a host of other advances. He intimately ties the origins of contemporary art to the social and technological adjustments of modern life, which artists struggled to incorporate truthfully into their works."

--- Dr. Giedion, Dr. Sigfried Giedion, please call your office. (Despite my everything-old-is-new-again snark, this book does sound interesting...)
to:NB  books:noted  industrial_revolution  modernism  art_history 
5 weeks ago
Minds Make Societies | Yale University Press
"“There is no good reason why human societies should not be described and explained with the same precision and success as the rest of nature.” Thus argues evolutionary psychologist Pascal Boyer in this uniquely innovative book.
"Integrating recent insights from evolutionary biology, genetics, psychology, economics, and other fields, Boyer offers precise models of why humans engage in social behaviors such as forming families, tribes, and nations, or creating gender roles. In fascinating, thought-provoking passages, he explores questions such as, Why is there conflict between groups? Why do people believe low-value information such as rumors? Why are there religions? What is social justice? What explains morality? Boyer provides a new picture of cultural transmission that draws on the pragmatics of human communication, the constructive nature of memory in human brains, and human motivation for group formation and cooperation."
to:NB  books:noted  social_theory  epidemiology_of_representations  evolutionary_psychology  anthropology  boyer.pascal  evolution_of_cooperation  cultural_transmission  re:do-institutions-evolve 
5 weeks ago
Altered Brain Activity in Unipolar Depression Revisited: Meta-analyses of Neuroimaging Studies | Depressive Disorders | JAMA Psychiatry | JAMA Network
"Importance During the past 20 years, numerous neuroimaging experiments have investigated aberrant brain activation during cognitive and emotional processing in patients with unipolar depression (UD). The results of those investigations, however, vary considerably; moreover, previous meta-analyses also yielded inconsistent findings.
"Objective To readdress aberrant brain activation in UD as evidenced by neuroimaging experiments on cognitive and/or emotional processing.
"Data Sources Neuroimaging experiments published from January 1, 1997, to October 1, 2015, were identified by a literature search of PubMed, Web of Science, and Google Scholar using different combinations of the terms fMRI (functional magnetic resonance imaging), PET (positron emission tomography), neural, major depression, depression, major depressive disorder, unipolar depression, dysthymia, emotion, emotional, affective, cognitive, task, memory, working memory, inhibition, control, n-back, and Stroop.
"Study Selection Neuroimaging experiments (using fMRI or PET) reporting whole-brain results of group comparisons between adults with UD and healthy control individuals as coordinates in a standard anatomic reference space and using an emotional or/and cognitive challenging task were selected.
"Data Extraction and Synthesis Coordinates reported to show significant activation differences between UD and healthy controls during emotional or cognitive processing were extracted. By using the revised activation likelihood estimation algorithm, different meta-analyses were calculated.
"Main Outcomes and Measures Meta-analyses tested for brain regions consistently found to show aberrant brain activation in UD compared with controls. Analyses were calculated across all emotional processing experiments, all cognitive processing experiments, positive emotion processing, negative emotion processing, experiments using emotional face stimuli, experiments with a sex discrimination task, and memory processing. All meta-analyses were calculated across experiments independent of reporting an increase or decrease of activity in major depressive disorder. For meta-analyses with a minimum of 17 experiments available, separate analyses were performed for increases and decreases.
"Results In total, 57 studies with 99 individual neuroimaging experiments comprising in total 1058 patients were included; 34 of them tested cognitive and 65 emotional processing. Overall analyses across cognitive processing experiments (P > .29) and across emotional processing experiments (P > .47) revealed no significant results. Similarly, no convergence was found in analyses investigating positive (all P > .15), negative (all P > .76), or memory (all P > .48) processes. Analyses that restricted inclusion of confounds (eg, medication, comorbidity, age) did not change the results.
"Conclusions and Relevance Inconsistencies exist across individual experiments investigating aberrant brain activity in UD and replication problems across previous neuroimaging meta-analyses. For individual experiments, these inconsistencies may relate to use of uncorrected inference procedures, differences in experimental design and contrasts, or heterogeneous clinical populations; meta-analytically, differences may be attributable to varying inclusion and exclusion criteria or rather liberal statistical inference approaches."
fmri  neuroscience  depression  meta-analysis  re:neutral_model_of_inquiry  to_be_shot_after_a_fair_trial 
5 weeks ago
SocArXiv Papers | Fake News: Status Threat Does Not Explain the 2016 Presidential Vote
"The April 2018 article of Diana Mutz, "Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote," was published in the Proceedings of the National Academy of Sciences and contradicts prior sociological research on the 2016 election. Mutz's article received widespread media coverage because of the strength of its primary conclusion, declaimed in its title. The current article is a critical reanalysis of the models offered by Mutz, using the data files released along with her article. Contrary to her conclusions, this article demonstrates that material interests and status threat are deeply entangled in her cross-sectional data and, together, do not enable a definitive analysis of their relative importance. In addition, her panel-data model of candidate thermometer ratings has a specification that does not reveal the causal effects that she claims to have effectively estimated. Her panel-data model of votes, which she represents as a fixed-effect logit model, is, in fact, a generic pooled logit model. It is plagued by the same weaknesses as her thermometer ratings model, but also by more generic confounding from fixed individual-level predictors of vote choice that are not specified, such as self-identified race and level of education completed. In contrast, the sociological literature has offered more careful interpretations, and as such provides a more credible interpretation of the 2016 election."

--- This is from the co-author of the first decent modern book on observational causal inference (http://bactra.org/weblog/algae-2010-07.html#morgan-winship), so I will actually read it.
to:NB  us_politics  causal_inference 
5 weeks ago
[1708.03579] Self-exciting point processes with spatial covariates: modeling the dynamics of crime
"Crime has both varying patterns in space, related to features of the environment, economy, and policing, and patterns in time arising from criminal behavior, such as retaliation. Serious crimes may also be presaged by minor crimes of disorder. We demonstrate that these spatial and temporal patterns are generally confounded, requiring analyses to take both into account, and propose a spatio-temporal self-exciting point process model which incorporates spatial features, near-repeat and retaliation effects, and triggering. We develop inference methods and diagnostic tools, such as residual maps, for this model, and through extensive simulation and crime data obtained from Pittsburgh, Pennsylvania, demonstrate its properties and usefulness."
in_NB  spatio-temporal_statistics  point_processes  prediction  statistics  crime  kith_and_kin  reinhart.alex  greenhouse.joel  on_the_thesis_committee  to_teach:data_over_space_and_time 
5 weeks ago
[1708.02647] A Review of Self-Exciting Spatio-Temporal Point Processes and Their Applications
"Self-exciting spatio-temporal point process models predict the rate of events as a function of space, time, and the previous history of events. These models naturally capture triggering and clustering behavior, and have been widely used in fields where spatio-temporal clustering of events is observed, such as earthquake modeling, infectious disease, and crime. In the past several decades, advances have been made in estimation, inference, simulation, and diagnostic tools for self-exciting point process models. In this review, I describe the basic theory, survey related estimation and inference techniques from each field, highlight several key applications, and suggest directions for future research."
in_NB  spatio-temporal_statistics  point_processes  statistics  kith_and_kin  reinhart.alex  on_the_thesis_committee  to_teach:data_over_space_and_time 
5 weeks ago
SocArXiv Papers | A systematic assessment of 'Axial Age' proposals using global comparative historical evidence
"Proponents of the Axial Age contend that parallel cultural developments between 800 and 200 BCE in what is today China, Greece, India, Iran, and Israel-Palestine constitute the global historical turning point towards modernity. While the Axial Age concept is well-known and influential, deficiencies in the historical evidence and sociological analysis available have thwarted efforts to evaluate the Axial Age concept’s major global contentions. As a result, the Axial Age concept remains controversial. Seshat: Global History Databank provides new tools for examining this topic in social formations across Afro-Eurasia during the first two millennia BCE and first millennium CE, allowing scholars to empirically evaluate the many varied— and contrasting—claims put forward about this period. Our systematic investigation undercuts the notion of a specific 'age' of axiality limited to a specific geo-temporal localization. Critical traits offered as evidence of an axial transformation by proponents of the Axial Age concept are shown to have appeared across Afro-Eurasia hundreds and in some cases thousands of years prior to the proposed Axial Age. Our analysis raises important questions for future evaluations of this period and points the way towards empirically-led, historical-sociological investigations of the ideological and institutional foundations of complex societies."
to:NB  ancient_history  comparative_history  world_history 
6 weeks ago
Moving toward Integration — Richard H. Sander, Yana A. Kucheva, Jonathan M. Zasloff | Harvard University Press
"Reducing residential segregation is the best way to reduce racial inequality in the United States. African American employment rates, earnings, test scores, even longevity all improve sharply as residential integration increases. Yet far too many participants in our policy and political conversations have come to believe that the battle to integrate America’s cities cannot be won. Richard Sander, Yana Kucheva, and Jonathan Zasloff write that the pessimism surrounding desegregation in housing arises from an inadequate understanding of how segregation has evolved and how policy interventions have already set many metropolitan areas on the path to integration.
"Scholars have debated for decades whether America’s fair housing laws are effective. Moving toward Integration provides the most definitive account to date of how those laws were shaped and implemented and why they had a much larger impact in some parts of the country than others. It uses fresh evidence and better analytic tools to show when factors like exclusionary zoning and income differences between blacks and whites pose substantial obstacles to broad integration, and when they do not.
"Through its interdisciplinary approach and use of rich new data sources, Moving toward Integration offers the first comprehensive analysis of American housing segregation. It explains why racial segregation has been resilient even in an increasingly diverse and tolerant society, and it demonstrates how public policy can align with demographic trends to achieve broad housing integration within a generation."
to:NB  books:noted  the_american_dilemma 
6 weeks ago
The Rise of the Working-Class Shareholder — David Webber | Harvard University Press
"When Steven Burd, CEO of the supermarket chain Safeway, cut wages and benefits, starting a five-month strike by 59,000 unionized workers, he was confident he would win. But where traditional labor action failed, a novel approach was more successful. With the aid of the California Public Employees’ Retirement System, a $300 billion pension fund, workers led a shareholder revolt that unseated three of Burd’s boardroom allies.
"In The Rise of the Working-Class Shareholder: Labor’s Last Best Weapon, David Webber uses cases such as Safeway’s to shine a light on labor’s most potent remaining weapon: its multitrillion-dollar pension funds. Outmaneuvered at the bargaining table and under constant assault in Washington, state houses, and the courts, worker organizations are beginning to exercise muscle through markets. Shareholder activism has been used to divest from anti-labor companies, gun makers, and tobacco; diversify corporate boards; support Occupy Wall Street; force global warming onto the corporate agenda; create jobs; and challenge outlandish CEO pay. Webber argues that workers have found in labor’s capital a potent strategy against their exploiters. He explains the tactic’s surmountable difficulties even as he cautions that corporate interests are already working to deny labor’s access to this powerful and underused tool.
"The Rise of the Working-Class Shareholder is a rare good-news story for American workers, an opportunity hiding in plain sight. Combining legal rigor with inspiring narratives of labor victory, Webber shows how workers can wield their own capital to reclaim their strength."

--- OK, but I remember reading versions of this story in the 1980s. This makes me wonder if "rise" is really the right direction...
to:NB  books:noted  class_struggles_in_america  economics  labor 
6 weeks ago
Sporadic sampling, not climatic forcing, drives observed early hominin diversity | PNAS
"The role of climate change in the origin and diversification of early hominins is hotly debated. Most accounts of early hominin evolution link observed fluctuations in species diversity to directional shifts in climate or periods of intense climatic instability. None of these hypotheses, however, have tested whether observed diversity patterns are distorted by variation in the quality of the hominin fossil record. Here, we present a detailed examination of early hominin diversity dynamics, including both taxic and phylogenetically corrected diversity estimates. Unlike past studies, we compare these estimates to sampling metrics for rock availability (hominin-, primate-, and mammal-bearing formations) and collection effort, to assess the geological and anthropogenic controls on the sampling of the early hominin fossil record. Taxic diversity, primate-bearing formations, and collection effort show strong positive correlations, demonstrating that observed patterns of early hominin taxic diversity can be explained by temporal heterogeneity in fossil sampling rather than genuine evolutionary processes. Peak taxic diversity at 1.9 million years ago (Ma) is a sampling artifact, reflecting merely maximal rock availability and collection effort. In contrast, phylogenetic diversity estimates imply peak diversity at 2.4 Ma and show little relation to sampling metrics. We find that apparent relationships between early hominin diversity and indicators of climatic instability are, in fact, driven largely by variation in suitable rock exposure and collection effort. Our results suggest that significant improvements in the quality of the fossil record are required before the role of climate in hominin evolution can be reliably determined."
to:NB  human_evolution  paleontology  data_analysis 
6 weeks ago
The Matthew effect in science funding | PNAS
"A classic thesis is that scientific achievement exhibits a “Matthew effect”: Scientists who have previously been successful are more likely to succeed again, producing increasing distinction. We investigate to what extent the Matthew effect drives the allocation of research funds. To this end, we assembled a dataset containing all review scores and funding decisions of grant proposals submitted by recent PhDs in a €2 billion granting program. Analyses of review scores reveal that early funding success introduces a growing rift, with winners just above the funding threshold accumulating more than twice as much research funding (€180,000) during the following eight years as nonwinners just below it. We find no evidence that winners’ improved funding chances in subsequent competitions are due to achievements enabled by the preceding grant, which suggests that early funding itself is an asset for acquiring later funding. Surprisingly, however, the emergent funding gap is partly created by applicants, who, after failing to win one grant, apply for another grant less often."
to:NB  sociology_of_science  science_as_a_social_process  matthew_effect 
6 weeks ago
On the existence of thermodynamically stable rigid solids | PNAS
"Customarily, crystalline solids are defined to be rigid since they resist changes of shape determined by their boundaries. However, rigid solids cannot exist in the thermodynamic limit where boundaries become irrelevant. Particles in the solid may rearrange to adjust to shape changes eliminating stress without destroying crystalline order. Rigidity is therefore valid only in the metastable state that emerges because these particle rearrangements in response to a deformation, or strain, are associated with slow collective processes. Here, we show that a thermodynamic collective variable may be used to quantify particle rearrangements that occur as a solid is deformed at zero strain rate. Advanced Monte Carlo simulation techniques are then used to obtain the equilibrium free energy as a function of this variable. Our results lead to a unique view on rigidity: While at zero strain a rigid crystal coexists with one that responds to infinitesimal strain by rearranging particles and expelling stress, at finite strain the rigid crystal is metastable, associated with a free energy barrier that decreases with increasing strain. The rigid phase becomes thermodynamically stable when an external field, which penalizes particle rearrangements, is switched on. This produces a line of first-order phase transitions in the field–strain plane that intersects the origin. Failure of a solid once strained beyond its elastic limit is associated with kinetic decay processes of the metastable rigid crystal deformed with a finite strain rate. These processes can be understood in quantitative detail using our computed phase diagram as reference."
to:NB  statistical_mechanics  physics 
6 weeks ago
Race on the Brain - What Implicit Bias Gets Wrong About the Struggle for Racial Justice | Columbia University Press
"Of the many obstacles to racial justice in America, none has received more recent attention than the one that lurks in our subconscious. As social movements and policing scandals have shown how far from being “postracial” we are, the concept of implicit bias has taken center stage in the national conversation about race. Millions of Americans have taken online tests purporting to show the deep, invisible roots of their own prejudice. A recent Oxford study that claims to have found a drug that reduces implicit bias is only the starkest example of a pervasive trend. But what do we risk when we seek the simplicity of a technological diagnosis—and solution—for racism? What do we miss when we locate racism in our biology and our brains rather than in our history and our social practices?
"In Race on the Brain, Jonathan Kahn argues that implicit bias has grown into a master narrative of race relations—one with profound, if unintended, negative consequences for law, science, and society. He emphasizes its limitations, arguing that while useful as a tool to understand particular types of behavior, it is only one among several tools available to policy makers. An uncritical embrace of implicit bias, to the exclusion of power relations and structural racism, undermines wider civic responsibility for addressing the problem by turning it over to experts. Technological interventions, including many tests for implicit bias, are premised on a color-blind ideal and run the risk of erasing history, denying present reality, and obscuring accountability. Kahn recognizes the significance of implicit social cognition but cautions against seeing it as a panacea for addressing America’s longstanding racial problems. A bracing corrective to what has become a common-sense understanding of the power of prejudice, Race on the Brain challenges us all to engage more thoughtfully and more democratically in the difficult task of promoting racial justice."
to:NB  books:noted  racism  psychology  implicit_association_test  mental_testing 
6 weeks ago
Not Enough — Samuel Moyn | Harvard University Press
"The age of human rights has been kindest to the rich. Even as state violations of political rights garnered unprecedented attention due to human rights campaigns, a commitment to material equality disappeared. In its place, market fundamentalism has emerged as the dominant force in national and global economies. In this provocative book, Samuel Moyn analyzes how and why we chose to make human rights our highest ideals while simultaneously neglecting the demands of a broader social and economic justice.
"In a pioneering history of rights stretching back to the Bible, Not Enough charts how twentieth-century welfare states, concerned about both abject poverty and soaring wealth, resolved to fulfill their citizens’ most basic needs without forgetting to contain how much the rich could tower over the rest. In the wake of two world wars and the collapse of empires, new states tried to take welfare beyond its original European and American homelands and went so far as to challenge inequality on a global scale. But their plans were foiled as a neoliberal faith in markets triumphed instead.
"Moyn places the career of the human rights movement in relation to this disturbing shift from the egalitarian politics of yesterday to the neoliberal globalization of today. Exploring why the rise of human rights has occurred alongside enduring and exploding inequality, and why activists came to seek remedies for indigence without challenging wealth, Not Enough calls for more ambitious ideals and movements to achieve a humane and equitable world."
to:NB  books:noted  human_rights  inequality  globalization  political_philosophy 
6 weeks ago
From global scaling to the dynamics of individual cities | PNAS
"Scaling has been proposed as a powerful tool to analyze the properties of complex systems and in particular for cities where it describes how various properties change with population. The empirical study of scaling on a wide range of urban datasets displays apparent nonlinear behaviors whose statistical validity and meaning were recently the focus of many debates. We discuss here another aspect, which is the implication of such scaling forms on individual cities and how they can be used for predicting the behavior of a city when its population changes. We illustrate this discussion in the case of delay due to traffic congestion with a dataset of 101 US cities in the years 1982–2014. We show that the scaling form obtained by agglomerating all of the available data for different cities and for different years does display a nonlinear behavior, but which appears to be unrelated to the dynamics of individual cities when their population grows. In other words, the congestion-induced delay in a given city does not depend on its population only, but also on its previous history. This strong path dependency prohibits the existence of a simple scaling form valid for all cities and shows that we cannot always agglomerate the data for many different systems. More generally, these results also challenge the use of transversal data for understanding longitudinal series for cities."
to:NB  cities  data_analysis  re:urban_scaling_what_urban_scaling 
6 weeks ago
Cooperation, clustering, and assortative mixing in dynamic networks | PNAS
"Humans’ propensity to cooperate is driven by our embeddedness in social networks. A key mechanism through which networks promote cooperation is clustering. Within clusters, conditional cooperators are insulated from exploitation by noncooperators, allowing them to reap the benefits of cooperation. Dynamic networks, where ties can be shed and new ties formed, allow for the endogenous emergence of clusters of cooperators. Although past work suggests that either reputation processes or network dynamics can increase clustering and cooperation, existing work on network dynamics conflates reputations and dynamics. Here we report results from a large-scale experiment (total n = 2,675) that embedded participants in clustered or random networks that were static or dynamic, with varying levels of reputational information. Results show that initial network clustering predicts cooperation in static networks, but not in dynamic ones. Further, our experiment shows that while reputations are important for partner choice, cooperation levels are driven purely by dynamics. Supplemental conditions confirmed this lack of a reputation effect. Importantly, we find that when participants make individual choices to cooperate or defect with each partner, as opposed to a single decision that applies to all partners (as is standard in the literature on cooperation in networks), cooperation rates in static networks are as high as cooperation rates in dynamic networks. This finding highlights the importance of structured relations for sustained cooperation, and shows how giving experimental participants more realistic choices has important consequences for whether dynamic networks promote higher levels of cooperation than static networks."
to:NB  experimental_sociology  evolution_of_cooperation  social_networks 
6 weeks ago