The Centered Mind - Hardcover - Peter Carruthers - Oxford University Press
"The Centered Mind offers a new view of the nature and causal determinants of both reflective thinking and, more generally, the stream of consciousness. Peter Carruthers argues that conscious thought is always sensory-based, relying on the resources of the working-memory system. This system has been much studied by cognitive scientists. It enables sensory images to be sustained and manipulated through attentional signals directed at midlevel sensory areas of the brain. When abstract conceptual representations are bound into these images, we consciously experience ourselves as making judgments or arriving at decisions. Thus one might hear oneself as judging, in inner speech, that it is time to go home, for example. However, our amodal (non-sensory) propositional attitudes are never actually among the contents of this stream of conscious reflection. Our beliefs, goals, and decisions are only ever active in the background of consciousness, working behind the scenes to select the sensory-based imagery that occurs in working memory. They are never themselves conscious.
"Drawing on extensive knowledge of the scientific literature on working memory and related topics, Carruthers builds an argument that challenges the central assumptions of many philosophers. In addition to arguing that non-sensory propositional attitudes are never conscious, he also shows that they are never under direct intentional control. Written with his usual clarity and directness, The Centered Mind will be essential reading for all philosophers and cognitive scientists interested in the nature of human thought processes."
in_NB  cognitive_science  memory  consciousness  philosophy_of_mind  books:noted 
7 days ago
Globalists — Quinn Slobodian | Harvard University Press
"Neoliberals hate the state. Or do they? In the first intellectual history of neoliberal globalism, Quinn Slobodian follows a group of thinkers from the ashes of the Habsburg Empire to the creation of the World Trade Organization to show that neoliberalism emerged less to shrink government and abolish regulations than to redeploy them at a global level.
"Slobodian begins in Austria in the 1920s. Empires were dissolving and nationalism, socialism, and democratic self-determination threatened the stability of the global capitalist system. In response, Austrian intellectuals called for a new way of organizing the world. But they and their successors in academia and government, from such famous economists as Friedrich Hayek and Ludwig von Mises to influential but lesser-known figures such as Wilhelm Röpke and Michael Heilperin, did not propose a regime of laissez-faire. Rather they used states and global institutions—the League of Nations, the European Court of Justice, the World Trade Organization, and international investment law—to insulate the markets against sovereign states, political change, and turbulent democratic demands for greater equality and social justice.
"Far from discarding the regulatory state, neoliberals wanted to harness it to their grand project of protecting capitalism on a global scale. It was a project, Slobodian shows, that changed the world, but that was also undermined time and again by the inequality, relentless change, and social injustice that accompanied it."
to:NB  books:noted  neo-liberalism  political_economy  history_of_ideas  hayek.f.a._von  intellectuals_in_politics  via:henry_farrell 
14 days ago
Rigatoni with Kale Pesto Recipe - Cooking Light
Ingredients
2 bunches lacinato kale, stemmed
2 tablespoons pine nuts, toasted
2 garlic cloves, chopped
1/2 cup extra-virgin olive oil
1 tablespoon freshly grated lemon rind
2 ounces fresh pecorino Romano cheese, finely grated (about 1/2 cup)
1/2 teaspoon kosher salt
1/2 teaspoon freshly ground black pepper
Dash of Aleppo pepper (optional)
1 pound rigatoni pasta

Step 1
Bring a large pot of water to a boil. Fill a large bowl with ice water. Working in two or three batches, submerge kale in boiling water for 15 seconds until wilted and deep green; remove, and immediately plunge into ice water for 15 seconds. Drain kale well; place on layers of paper towels, and press to remove excess liquid. Place kale, pine nuts, and garlic in a food processor or blender. Pulse until coarsely chopped. Slowly add oil with motor running; blend until pesto is smooth. Place pesto in a medium bowl; stir in rind, cheese, salt, and peppers.

Step 2
Cook pasta according to package instructions, omitting salt and fat. When pasta is al dente, drain, reserving 1 cup pasta cooking liquid. Return pasta to pan over medium-low heat; stir in pesto. Add pasta cooking liquid, 1 tablespoon at a time, until sauce reaches desired consistency.

--- Notes after making this:
- 1/2 cup walnuts for pine nuts
- Ordinary kale works just as well.
- Obviously so does any small pasta
- And parmesan for romano cheese too
- Blanching is essential
- Adding all the pesto ingredients to the blender at once works just fine, and is much easier
food  recipes  have_made 
16 days ago
Author Summary: Political Stability in the Open Society – American Journal of Political Science
"In “Political Stability in the Open Society,” we argue that Rawls’s model of a well-ordered society—as an account of a realistic utopia—is defective for two reasons. First, the well-ordered society model wrongly excludes the deep disagreement and diversity that we find in contemporary political life from figuring into a model of liberal order. Second, when deep disagreement and diversity are integrated into the model, discovery becomes an important part of modeling a stable liberal order. A liberal society is one where people are free to experiment with different approaches to the good life and justice given that we know much less than we might about how to live together.
"If we are committed to recognizing deep diversity and the need for social discovery in modeling a stable liberal order, we must modify the idea of a well-ordered society and the ideas most closely associated with it in a liberal theory of justice. In particular, a more dynamic notion of stability for the right reasons is required for a new model that we call an open society. An open society is a liberal society that allows for deep disagreement about the good and justice and which sustains institutions that can adapt to new discoveries about what justice requires.
"Our goal is to explain the idea of stability appropriate for an open society. The challenge is that, given the importance of respecting diversity and openness to social change, stability for the right reasons now seems to have a cost; stable rules are hard to replace with better rules. On the other hand, some rules need to remain stable to support productive social change and experimentation.
"Given these challenges, we distinguish two different kinds of stability that apply at different levels of social organization. The first kind of stability applies to constitutional rules that set out the general legal rules within which our lower-level institutional rules operate. These constitutional rules must remain in equilibrium despite challenges and threats in order to preserve the social conditions that foster experimentation. But we reject a similar form of stability for lower-level legal and institutional rules. Experimentation at that level can be productive in ways that constitutional experimentation is not. Instead, lower level legal and institutional rules need to be robust in the sense that, when challenged, old rules can be replaced by stable new rules without undermining the system of rules as a whole.
"One important implication of our analysis is that, in the open society, a shared conception of justice is less important than a stable constitutional framework where many aspects of the open society, including justice, are open for debate. Rather than focusing on the particular principles of justice that are most reasonable for a well-ordered society, theorists should focus on the properties of different constitutional orders that encourage productive social evolution and experimentation. A second implication of our analysis is that open societies may turn out to be substantially different from one another. There will likely be no single type of social order that suits any given open society. This is all to the good because these diverse orders can learn from each other’s experiments."
to:NB  political_philosophy  re:democratic_cognition  via:henry_farrell 
20 days ago
Eigenvalues, Multiplicities and Graphs | Discrete mathematics, information theory and coding | Cambridge University Press
"The arrangement of nonzero entries of a matrix, described by the graph of the matrix, limits the possible geometric multiplicities of the eigenvalues, which are far more limited by this information than algebraic multiplicities or the numerical values of the eigenvalues. This book gives a unified development of how the graph of a symmetric matrix influences the possible multiplicities of its eigenvalues. While the theory is richest in cases where the graph is a tree, work on eigenvalues, multiplicities and graphs has provided the opportunity to identify which ideas have analogs for non-trees, and those for which trees are essential. It gathers and organizes the fundamental ideas to allow students and researchers to easily access and investigate the many interesting questions in the subject."
to:NB  books:noted  graph_theory  mathematics  markov_models 
21 days ago
Non-homogeneous Random Walks: Lyapunov Function Methods for Near-Critical Stochastic Systems | Probability theory and stochastic processes | Cambridge University Press
"Stochastic systems provide powerful abstract models for a variety of important real-life applications: for example, power supply, traffic flow, data transmission. They (and the real systems they model) are often subject to phase transitions, behaving in one way when a parameter is below a certain critical value, then switching behaviour as soon as that critical value is reached. In a real system, we do not necessarily have control over all the parameter values, so it is important to know how to find critical points and to understand system behaviour near these points. This book is a modern presentation of the 'semimartingale' or 'Lyapunov function' method applied to near-critical stochastic systems, exemplified by non-homogeneous random walks. Applications treat near-critical stochastic systems and range across modern probability theory from stochastic billiards models to interacting particle systems. Spatially non-homogeneous random walks are explored in depth, as they provide prototypical near-critical systems."
to:NB  books:noted  probability  stochastic_processes  phase_transitions  lyapunov_functions  interacting_particle_systems 
21 days ago
Newton on Islandworld: Ontic-Driven Explanations of Scientific Method | Perspectives on Science | MIT Press Journals
"Philosophers and scientists often cite ontic factors when explaining the methods and success of scientific inquiry. That is, the adoption of a method or approach (and its subsequent success or otherwise) is explained in reference to the kind of system in which the scientist is interested: these are explanations of why scientists do what they do, that appeal to properties of their target systems. We present a framework for understanding such “ontic-driven” explanations, and illustrate it using a toy-case, the biogeography of “Islandworld.” We then put our view to historical work, comparing Isaac Newton’s Opticks to his Principia. Newton’s optical work is largely experiment-driven, while the Principia is primarily mathematical, so usually, each work is taken to exemplify a different kind of science. However, Newton himself often presented them in terms of a largely consistent method. We use our framework to articulate an original and plausible position: that the differences between the Opticks and the Principia are due to the kinds of systems targeted. That is, we provide an ontic-driven explanation of methodological differences. We suspect that ontic factors should have a more prominent role in historical explanations of scientific method and development."
to:NB  philosophy_of_science  scientific_method  newton.isaac 
21 days ago
Fictionalism, Semantics, and Ontology | Perspectives on Science | MIT Press Journals
"This article expands upon the argument of a previous work which defended a variational account of scientific fictions. Specifically, I show that this understanding of scientific fictions can provide guidance for realist interpretations of scientific theories and models. Depending on a model’s variational properties, different ontological commitments are appropriate, providing a principled way for a realist to moderate her views according to the structural properties of a given model. This reasoning is then applied the Lee-Yang theory and Kubo-Martin-Schwinger statistics, two foundational models in quantum statistical mechanics. The Lee-Yang theory is analyzed in a way that permits a robust realist interpretation, whereas KMS statistics is shown to involve a use of fictions that shields the theory from confirmation and makes it inappropriate for strongly realist interpretation, without contradicting broadly realist commitments."
to:NB  philosophy_of_science 
21 days ago
Asking the Right Questions About AI – Yonatan Zunger – Medium
Pretty good, reading "machine learning", or even "statistical modeling", for "artificial intelligence" throughout (as he more or less admits up front). Worth teaching in particular for the black-faces-as-gorillas disaster.
machine_learning  data_mining  to_teach:data-mining 
21 days ago
Wald : Estimation of a Parameter When the Number of Unknown Parameters Increases Indefinitely with the Number of Observations (1948)
"Necessary and sufficient conditions are given for the existence of a uniformly consistent estimate of an unknown parameter θ when the successive observations are not necessarily independent and the number of unknown parameters involved in the joint distribution of the observations increases indefinitely with the number of observations. In analogy with R. A. Fisher's information function, the amount of information contained in the first n observations regarding θ is defined. A sufficient condition for the non-existence of a uniformly consistent estimate of θ is given in section 3 in terms of the information function. Section 4 gives a simplified expression for the amount of information when the successive observations are independent."
in_NB  have_read  fisher_information  estimation  information_theory  nonparametrics  statistics  wald.abraham 
23 days ago
The Gist of Reading | Andrew Elfenbein
"What happens to books as they live in our long-term memory? Why do we find some books entertaining and others not? And how does literary influence work on writers in different ways? Grounded in the findings of empirical psychology, this book amends classic reader-response theory and attends to neglected aspects of reading that cannot be explained by traditional literary criticism.
"Reading arises from a combination of two kinds of mental work: automatic and controlled processes. Automatic processes, such as the ability to see visual symbols as words, are the result of constant practice; controlled processes, such as predicting what might occur next in a story, arise from readers' conscious use of skills and background knowledge. When we read, automatic and controlled processes work together to create the "gist" of reading, the constant interplay between these two kinds of processes. Andrew Elfenbein not only explains how we read today, but also uses current knowledge about reading to consider readers of past centuries, arguing that understanding gist is central to interpreting the social, psychological, and political impact of literary works. The result is the first major revisionary account of reading practices in literary criticism since the 1970s."

--- It's probably reading too much into the first sentence to hope for a theory of the Suck Fairy.
to:NB  books:noted  psychology  literary_criticism  reading  criticism_of_criticism_of_criticism 
23 days ago
Stumbling and Mumbling: Why economists should look at horses
The quoted bits from Clark are remarkably stupid.
Re Cobb-Douglas, surely this argument was pretty much settled, in the negative, by Herbert Simon in the 1960s? (http://doi.library.cmu.edu/10.1184/pmc/simon/box00064/fld04934/bdl0001/doc0001 , and cf. http://www.jstor.org/stable/40326423 )
economics  have_read 
23 days ago
[1801.05667] Innateness, AlphaZero, and Artificial Intelligence
"The concept of innateness is rarely discussed in the context of artificial intelligence. When it is discussed, or hinted at, it is often the context of trying to reduce the amount of innate machinery in a given system. In this paper, I consider as a test case a recent series of papers by Silver et al (Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that a "even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance", "starting tabula rasa."
"I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial intelligence needs greater attention to innateness, and I point to some proposals about what that innateness might look like."

--- Yet another occasion on which the deep learning discussion makes me feel trapped in the 1990s... (cf. http://bostonreview.net/archives/BR28.6/marcus.html )
to:NB  cognitive_science  artificial_intelligence  neural_networks  marcus.gary  have_read 
23 days ago
The problem with “critical” studies | In Due Course
Since this aligns very strongly with my prejudices, I _should_ try to pick holes in it. (E.g., maybe the book he holds out as an unfortunate example [kudos for actually giving a concrete example!] _also_ makes a reasonable case, _as well as_ engaging in the maneuvers Heath complains about? Or: How much of this is just territory-marking on behalf of a Habermas scholar, who grew up on the idea of "critical theory" as a specific project of the Frankfurt School, finding a beloved buzz-word deployed by other scholars from other traditions just close enough to be irritating?) But life is short and these recommendation letters won't write themselves...
academia  humanities  progressive_forces  heath.joseph  have_read 
23 days ago
[1708.02890] Asymptotic equivalence of probability measures and stochastic processes
"Let Pn and Qn be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let Mn be a random variable representing a 'macrostate' or 'global observable' of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of Pn and Qn, for the set of typical values of Mn obtained relative to Pn to be the same as the set of typical values obtained relative to Qn in the limit n→∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model."

--- Spoiler: the sufficient condition is that the normalized log likelihood ratio should tend to 0 (in probability) under both sequences of measures. (Equivalently, the KL divergence rate should tend to zero.) This is no doubt true, but it seems a bit like over-kill.
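
--- In symbols, the paper's sufficient condition as summarized above:

    \frac{1}{n}\log\frac{dP_n}{dQ_n} \;\xrightarrow{\;P\;}\; 0 \quad \text{under both } P_n \text{ and } Q_n, \qquad \text{equivalently} \quad \frac{1}{n} D(P_n \| Q_n) \to 0 .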
in_NB  have_read  probability  large_deviations  stochastic_processes  touchette.hugo  re:almost_none  via:rvenkat 
23 days ago
Ten years in, nobody has come up with a use for blockchain
"Everyone says the blockchain, the technology underpinning cryptocurrencies such as bitcoin, is going to change EVERYTHING. And yet, after years of tireless effort and billions of dollars invested, nobody has actually come up with a use for the blockchain—besides currency speculation and illegal transactions.
"Each purported use case — from payments to legal documents, from escrow to voting systems—amounts to a set of contortions to add a distributed, encrypted, anonymous ledger where none was needed. What if there isn’t actually any use for a distributed ledger at all? What if, ten years after it was invented, the reason nobody has adopted a distributed ledger at scale is because nobody wants it?"
have_read  debunking  blockchain  bitcoin  distributed_systems  institutions 
24 days ago
Homemade Poultry Seasoning Recipe - Genius Kitchen
2 teaspoons ground sage
1 1⁄2 teaspoons ground thyme
1 teaspoon ground marjoram
3⁄4 teaspoon ground rosemary
1⁄2 teaspoon nutmeg
1⁄2 teaspoon finely ground black pepper
food  recipes  have_made 
24 days ago
When the Revolution Came for Amy Cuddy - The New York Times
Morals under the "to teach" tag:
1. Don't do science like this.
2. Don't be a jerk when criticizing others for doing bad science.
(I realize that I am one to talk about #2.)
have_read  social_science_methodology  social_psychology  psychology  replication_crisis  gelman.andrew  popular_social_science  data_analysis  to_teach:undergrad-research 
24 days ago
Most of what you think you know about human reasoning is wrong. Here’s why. - The Washington Post
I feel like with a little effort, the title could have become a small masterpiece of meta-click-bait. (A good interview about important work.)
have_read  cognitive_science  collective_cognition  heuristics  social_life_of_the_mind  sperber.dan  mercier.hugo  farrell.henry  kith_and_kin 
24 days ago
Cognitive Gadgets — Cecilia Heyes | Harvard University Press
"How did human minds become so different from those of other animals? What accounts for our capacity to understand the way the physical world works, to think ourselves into the minds of others, to gossip, read, tell stories about the past, and imagine the future? These questions are not new: they have been debated by philosophers, psychologists, anthropologists, evolutionists, and neurobiologists over the course of centuries. One explanation widely accepted today is that humans have special cognitive instincts. Unlike other living animal species, we are born with complicated mechanisms for reasoning about causation, reading the minds of others, copying behaviors, and using language.
"Cecilia Heyes agrees that adult humans have impressive pieces of cognitive equipment. In her framing, however, these cognitive gadgets are not instincts programmed in the genes but are constructed in the course of childhood through social interaction. Cognitive gadgets are products of cultural evolution, rather than genetic evolution. At birth, the minds of human babies are only subtly different from the minds of newborn chimpanzees. We are friendlier, our attention is drawn to different things, and we have a capacity to learn and remember that outstrips the abilities of newborn chimpanzees. Yet when these subtle differences are exposed to culture-soaked human environments, they have enormous effects. They enable us to upload distinctively human ways of thinking from the social world around us."
to:NB  books:noted  human_evolution  cultural_transmission_of_cognitive_tools  cultural_transmission  psychology  cognitive_development 
24 days ago
Elements of Causal Inference: Foundations and Learning Algorithms | The MIT Press
"The mathematization of causality is a relatively recent development, and has become increasingly important in data science and machine learning. This book offers a self-contained and concise introduction to causal models and how to learn them from data.
"After explaining the need for causal models and discussing some of the principles underlying causal inference, the book teaches readers how to use causal models: how to compute intervention distributions, how to infer causal models from observational and interventional data, and how causal ideas could be exploited for classical machine learning problems. All of these topics are discussed first in terms of two variables and then in the more general multivariate case. The bivariate case turns out to be a particularly hard problem for causal learning because there are no conditional independences as used by classical methods for solving multivariate cases. The authors consider analyzing statistical asymmetries between cause and effect to be highly instructive, and they report on their decade of intensive research into this problem.
"The book is accessible to readers with a background in machine learning or statistics, and can be used in graduate courses or as a reference for researchers. The text includes code snippets that can be copied and pasted, exercises, and an appendix with a summary of the most important technical concepts."
in_NB  books:noted  coveted  causal_inference  graphical_models  causal_discovery  statistics  janzing.dominik 
25 days ago
[1702.04690] Simple rules for complex decisions
"From doctors diagnosing patients to judges setting bail, experts often base their decisions on experience and intuition rather than on statistical models. While understandable, relying on intuition over models has often been found to result in inferior outcomes. Here we present a new method, select-regress-and-round, for constructing simple rules that perform well for complex decisions. These rules take the form of a weighted checklist, can be applied mentally, and nonetheless rival the performance of modern machine learning algorithms. Our method for creating these rules is itself simple, and can be carried out by practitioners with basic statistics knowledge. We demonstrate this technique with a detailed case study of judicial decisions to release or detain defendants while they await trial. In this application, as in many policy settings, the effects of proposed decision rules cannot be directly observed from historical data: if a rule recommends releasing a defendant that the judge in reality detained, we do not observe what would have happened under the proposed action. We address this key counterfactual estimation problem by drawing on tools from causal inference. We find that simple rules significantly outperform judges and are on par with decisions derived from random forests trained on all available features. Generalizing to 22 varied decision-making domains, we find this basic result replicates. We conclude with an analytical framework that helps explain why these simple decision rules perform as well as they do."
to:NB  to_read  decision-making  classifiers  fast-and-frugal_heuristics  heuristics  clinical-vs-actuarial_prediction  prediction  crime  bail  via:vaguery 
26 days ago
[1703.10651] Reliable Decision Support using Counterfactual Models
"Making a good decision involves considering the likely outcomes under each possible action. For example, would drug A or drug B lead to a better outcome for this patient? Ideally, we answer these questions using an experiment, but this is not always possible (e.g., it may be unethical). As an alternative, we can use non-experimental data to learn models that make counterfactual predictions of what we would observe had we run an experiment. To learn such models for decision-making problems, we propose the use of counterfactual objectives in lieu of classical supervised learning objectives. We implement this idea in a challenging and frequently occurring context, and propose the counterfactual GP (CGP), a counterfactual model of continuous-time trajectories (time series) under sequences of actions taken in continuous-time. We develop our model within the potential outcomes framework of Neyman and Rubin. The counterfactual GP is trained using a joint maximum likelihood objective that adjusts for dependencies between observed actions and outcomes in the training data. We report two sets of experimental results. First, we show that the CGP's predictions are reliable; they are stable to changes in certain characteristics of the training data that are not relevant to the decision-making problem. Predictive models trained using classical supervised learning objectives, however, are not stable to such perturbations. In the second experiment, we use data from a real intensive care unit (ICU) and qualitatively demonstrate how the CGP's ability to answer "What if?" questions offers medical decision-makers a powerful new tool for planning treatment."

--- Not sure how this differs from e.g. Lok's work decades ago...
to:NB  causal_inference  statistics  stochastic_processes 
26 days ago
Why is pop culture obsessed with battles between good and evil? | Aeon Essays
It is somewhat astonishing that this never, ever talks about the Bible, millenarianism, or monotheistic religion generally. If you want stories where the conflict is defined between two sets of values, well, thus spoke Zarathustra. Characters switching sides when they change values --- could it be... SATAN? Etc. To the extent that there is a real change here at all, and not just selection bias, surely the explanation isn't _nationalism_, it's education that finally made the mass of people in Christendom take their professed religion seriously --- the Reformation and the Counter-Reformation.
have_read  folklore  cultural_criticism  literary_history  literary_criticism  out_of_their_depth  norman_cohn_died_for_your_sins  via:absfac 
29 days ago
PsyArXiv Preprints | Experimental Design and the Reliability of Priming Effects: Reconsidering the "Train Wreck"
"Recent failures to replicate high-profile priming effects have raised questions about the reliability of priming phenomena. The studies at the center of the discussion have been labeled “social priming,” and have raised the question as to whether priming phenomena generated in the social psychological literature are particularly problematic. However, the effects identified as “social priming” differ from other priming effects in multiple ways. In the present research, we examine one important difference: whether the effect has been demonstrated with a within- or between-subjects experimental design. To examine the significance of this design feature, we assess the reliability of four well-known priming effects from the cognitive and social psychological literatures using both between- and within-subjects designs and analyses. All four priming effects, both cognitive and social, are reliable when tested using a within-subjects approach. In contrast, only one priming effect reaches that statistical threshold when using the between-subjects approach. These results indicate that the key difference between priming effects identified as more and less reliable is the type of experimental design used to demonstrate the effect, rather than the content domain in which the effect has been demonstrated. This demonstration also serves as a salient illustration of the underappreciated importance of experimental design in considering power and reliability of priming effects."

--- Fascinating if true.
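--- The design point is easy to check in simulation (a toy sketch of mine, with made-up numbers): the same priming effect is much easier to detect within subjects, where each subject is their own control, than between subjects, where it must show up over stable individual differences.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, effect, subj_sd, trial_sd = 50, 15.0, 100.0, 20.0   # RTs in ms, made up

    def one_experiment():
        base = rng.normal(600, subj_sd, size=2 * n)         # stable subject baselines
        primed   = base[:n] + effect + rng.normal(0, trial_sd, n)
        unprimed = base[:n]          + rng.normal(0, trial_sd, n)
        control  = base[n:]          + rng.normal(0, trial_sd, n)
        return (stats.ttest_rel(primed, unprimed).pvalue,   # within-subjects
                stats.ttest_ind(primed, control).pvalue)    # between-subjects

    p = np.array([one_experiment() for _ in range(2000)])
    print("power, within-subjects :", (p[:, 0] < 0.05).mean())  # high, ~0.95
    print("power, between-subjects:", (p[:, 1] < 0.05).mean())  # low, ~0.1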
to:NB  experimental_design  experimental_psychology  statistics 
4 weeks ago
The social genome of friends and schoolmates in the National Longitudinal Study of Adolescent to Adult Health
"Humans tend to form social relationships with others who resemble them. Whether this sorting of like with like arises from historical patterns of migration, meso-level social structures in modern society, or individual-level selection of similar peers remains unsettled. Recent research has evaluated the possibility that unobserved genotypes may play an important role in the creation of homophilous relationships. We extend this work by using data from 5,500 adolescents from the National Longitudinal Study of Adolescent to Adult Health (Add Health) to examine genetic similarities among pairs of friends. Although there is some evidence that friends have correlated genotypes, both at the whole-genome level as well as at trait-associated loci (via polygenic scores), further analysis suggests that meso-level forces, such as school assignment, are a principal source of genetic similarity between friends. We also observe apparent social–genetic effects in which polygenic scores of an individual’s friends and schoolmates predict the individual’s own educational attainment. In contrast, an individual’s height is unassociated with the height genetics of peers."

--- Contributed, hence the last tag.
to:NB  sociology  re:homophily_and_confounding  human_genetics  social_networks  homophily  to_be_shot_after_a_fair_trial 
4 weeks ago
Ecological and evolutionary dynamics of interconnectedness and modularity
"In this contribution, we develop a theoretical framework for linking microprocesses (i.e., population dynamics and evolution through natural selection) with macrophenomena (such as interconnectedness and modularity within an ecological system). This is achieved by developing a measure of interconnectedness for population distributions defined on a trait space (generalizing the notion of modularity on graphs), in combination with an evolution equation for the population distribution. With this contribution, we provide a platform for understanding under what environmental, ecological, and evolutionary conditions ecosystems evolve toward being more or less modular. A major contribution of this work is that we are able to decompose the overall driver of changes at the macro level (such as interconnectedness) into three components: (i) ecologically driven change, (ii) evolutionarily driven change, and (iii) environmentally driven change."

--- Contributed.
to:NB  ecology  community_discovery  to_be_shot_after_a_fair_trial 
4 weeks ago
Predicting tipping points in mutualistic networks through dimension reduction
"Complex networked systems ranging from ecosystems and the climate to economic, social, and infrastructure systems can exhibit a tipping point (a “point of no return”) at which a total collapse of the system occurs. To understand the dynamical mechanism of a tipping point and to predict its occurrence as a system parameter varies are of uttermost importance, tasks that are hindered by the often extremely high dimensionality of the underlying system. Using complex mutualistic networks in ecology as a prototype class of systems, we carry out a dimension reduction process to arrive at an effective 2D system with the two dynamical variables corresponding to the average pollinator and plant abundances. We show, using 59 empirical mutualistic networks extracted from real data, that our 2D model can accurately predict the occurrence of a tipping point, even in the presence of stochastic disturbances. We also find that, because of the lack of sufficient randomness in the structure of the real networks, weighted averaging is necessary in the dimension reduction process. Our reduced model can serve as a paradigm for understanding and predicting the tipping point dynamics in real world mutualistic networks for safeguarding pollinators, and the general principle can be extended to a broad range of disciplines to address the issues of resilience and sustainability."
to:NB  ecology  dimension_reduction 
4 weeks ago
Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities | Neural Computation | MIT Press Journals
"Sufficient dimension reduction (SDR) is aimed at obtaining the low-rank projection matrix in the input space such that information about output data is maximally preserved. Among various approaches to SDR, a promising method is based on the eigendecomposition of the outer product of the gradient of the conditional density of output given input. In this letter, we propose a novel estimator of the gradient of the logarithmic conditional density that directly fits a linear-in-parameter model to the true gradient under the squared loss. Thanks to this simple least-squares formulation, its solution can be computed efficiently in a closed form. Then we develop a new SDR method based on the proposed gradient estimator. We theoretically prove that the proposed gradient estimator, as well as the SDR solution obtained from it, achieves the optimal parametric convergence rate. Finally, we experimentally demonstrate that our SDR method compares favorably with existing approaches in both accuracy and computational efficiency on a variety of artificial and benchmark data sets."
to:NB  dimension_reduction  sufficiency  density_estimation  linear_regression  statistics 
4 weeks ago
Terminal Pleistocene Alaskan genome reveals first founding population of Native Americans | Nature
"Despite broad agreement that the Americas were initially populated via Beringia, the land bridge that connected far northeast Asia with northwestern North America during the Pleistocene epoch, when and how the peopling of the Americas occurred remains unresolved1,2,3,4,5. Analyses of human remains from Late Pleistocene Alaska are important to resolving the timing and dispersal of these populations. The remains of two infants were recovered at Upward Sun River (USR), and have been dated to around 11.5 thousand years ago (ka)6. Here, by sequencing the USR1 genome to an average coverage of approximately 17 times, we show that USR1 is most closely related to Native Americans, but falls basal to all previously sequenced contemporary and ancient Native Americans1,7,8. As such, USR1 represents a distinct Ancient Beringian population. Using demographic modelling, we infer that the Ancient Beringian population and ancestors of other Native Americans descended from a single founding population that initially split from East Asians around 36 ± 1.5 ka, with gene flow persisting until around 25 ± 1.1 ka. Gene flow from ancient north Eurasians into all Native Americans took place 25–20 ka, with Ancient Beringians branching off around 22–18.1 ka. Our findings support a long-term genetic structure in ancestral Native Americans, consistent with the Beringian ‘standstill model’9. We show that the basal northern and southern Native American branches, to which all other Native Americans belong, diverged around 17.5–14.6 ka, and that this probably occurred south of the North American ice sheets. We also show that after 11.5 ka, some of the northern Native American populations received gene flow from a Siberian population most closely related to Koryaks, but not Palaeo-Eskimos1, Inuits or Kets10, and that Native American gene flow into Inuits was through northern and not southern Native American groups1. Our findings further suggest that the far-northern North American presence of northern Native Americans is from a back migration that replaced or absorbed the initial founding population of Ancient Beringians."
to:NB  historical_genetics 
4 weeks ago
How Do Hours Worked Vary with Income? Cross-Country Evidence and Implications
"This paper builds a new internationally comparable database of hours worked to measure how hours vary with income across and within countries. We document that average hours worked per adult are substantially higher in low-income countries than in high-income countries. The pattern of decreasing hours with aggregate income holds for both men and women, for adults of all ages and education levels, and along both the extensive and intensive margin. Within countries, hours worked per worker are also decreasing in the individual wage for most countries, though in the richest countries, hours worked are flat or increasing in the wage. One implication of our findings is that aggregate productivity and welfare differences across countries are larger than currently thought."

--- Last tag depends on availability of replication data.
to:NB  economics  labor  to_teach:undergrad-ADA 
4 weeks ago
The Design and Price of Information - American Economic Association
"A data buyer faces a decision problem under uncertainty. He can augment his initial private information with supplemental data from a data seller. His willingness to pay for supplemental data is determined by the quality of his initial private information. The data seller optimally offers a menu of statistical experiments. We establish the properties that any revenue-maximizing menu of experiments must satisfy. Every experiment is a non-dispersed stochastic matrix, and every menu contains a fully informative experiment. In the cases of binary states and actions, or binary types, we provide an explicit construction of the optimal menu of experiments."
to:NB  decision_theory  economics 
4 weeks ago
Extending the Range of Validity of the Autoregressive (Sieve) Bootstrap - Fragkeskou - 2017 - Journal of Time Series Analysis - Wiley Online Library
"Two modifications of the autoregressive-sieve and of the autoregressive bootstrap are proposed. The first modification replaces the classical i.i.d. resampling scheme applied to the residuals of the autoregressive fit by the generation of i.i.d. wild pseudo-innovations that appropriately mimic to the appropriate extent, also the fourth-order moment structure of the true innovations driving the underlying linear process. This modification extends the validity of the autoregressive-sieve bootstrap to classes of statistics for which the classical residual-based autoregressive-sieve bootstrap fails. In the second modification, an autoregressive bootstrap applied to an appropriately transformed time series is proposed, which, together with a dependent wild-type generation of pseudo-innovations, delivers a bootstrap procedure that is valid for large classes of statistics and for stochastic processes satisfying quite general, weak, dependent conditions. A fully data-driven selection of the bootstrap parameters involved in both modifications is proposed, and extensive simulations, including comparisons with alternative bootstrap methods, show a good finite sample performance of the proposed bootstrap procedures."
to:NB  bootstrap  time_series  statistics 
4 weeks ago
Model selection and prediction: Normal regression | SpringerLink
"This paper discusses the topic of model selection for finite-dimensional normal regression models. We compare model selection criteria according to prediction errors based upon prediction with refitting, and prediction without refitting. We provide a new lower bound for prediction without refitting, while a lower bound for prediction with refitting was given by Rissanen. Moreover, we specify a set of sufficient conditions for a model selection criterion to achieve these bounds. Then the achievability of the two bounds by the following selection rules are addressed: Rissanen's accumulated prediction error criterion (APE), his stochastic complexity criterion, AIC, BIC and the FPE criteria. In particular, we provide upper bounds on overfitting and underfitting probabilities needed for the achievability. Finally, we offer a brief discussion on the issue of finite-dimensional vs. infinite-dimensional model assumptions.'
in_NB  have_read  model_selection  statistics  information_criteria  yu.bin 
4 weeks ago
[1801.02858] Scalable high-resolution forecasting of sparse spatiotemporal events with kernel methods: a winning solution to the NIJ "Real-Time Crime Forecasting Challenge"
"This article describes Team Kernel Glitches' solution to the National Institute of Justice's (NIJ) Real-Time Crime Forecasting Challenge. The goal of the NIJ Real-Time Crime Forecasting Competition was to maximize two different crime hotspot scoring metrics for calls-for-service to the Portland Police Bureau (PPB) in Portland, Oregon during the period from March 1, 2017 to May 31, 2017. Our solution to the challenge is a spatiotemporal forecasting model combining scalable randomized Reproducing Kernel Hilbert Space (RKHS) methods for approximating Gaussian processes with autoregressive smoothing kernels in a regularized supervised learning framework. Our model can be understood as an approximation to the popular log-Gaussian Cox Process model: we discretize the spatiotemporal point pattern and learn a log intensity function using the Poisson likelihood and highly efficient gradient-based optimization methods. Model hyperparameters including quality of RKHS approximation, spatial and temporal kernel lengthscales, number of autoregressive lags, bandwidths for smoothing kernels, as well as cell shape, size, and rotation, were learned using crossvalidation. Resulting predictions exceeded baseline KDE estimates by 0.157. Performance improvement over baseline predictions were particularly large for sparse crimes over short forecasting horizons."

--- There seem to be some substantial improvements here over Seth's Ph.D. thesis...
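
--- The core trick, as I read the abstract (a sketch of mine; names and step sizes are illustrative): random Fourier features reduce the log-Gaussian-Cox-style intensity to a finite-dimensional Poisson regression over grid cells.

    import numpy as np

    def random_fourier_features(X, n_feat=200, lengthscale=1.0, seed=0):
        """Approximate an RBF-kernel GP: k(x, x') ≈ z(x) · z(x')."""
        rng = np.random.default_rng(seed)
        W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], n_feat))
        b = rng.uniform(0.0, 2 * np.pi, size=n_feat)
        return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

    def fit_poisson_intensity(Z, counts, l2=1.0, n_iter=500, lr=0.1):
        """Gradient ascent on the ridge-penalized Poisson log-likelihood,
        with log-intensity Z @ w."""
        w = np.zeros(Z.shape[1])
        for _ in range(n_iter):
            lam = np.exp(Z @ w)
            w += lr * (Z.T @ (counts - lam) - l2 * w) / len(counts)
        return w

    # cells: (n_cells, 3) array of (x, y, t) cell centers; counts: events per cell
    # Z = random_fourier_features(cells); w = fit_poisson_intensity(Z, counts)
    # predicted intensity surface: np.exp(Z @ w)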
in_NB  to_read  spatio-temporal_statistics  point_processes  statistics  prediction  crime  flaxman.seth 
4 weeks ago
[1801.02774] Adversarial Spheres
"State of the art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input. In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image. Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved. We hypothesize that this counter intuitive behavior is a naturally occurring result of the high dimensional geometry of the data manifold. As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres. For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error. In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size O(1/d‾‾√). Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound. As a result of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples."
to:NB  to_read  adversarial_examples  classifiers 
4 weeks ago
[1711.07137] Nonparametric Double Robustness
"Use of nonparametric techniques (e.g., machine learning, kernel smoothing, stacking) are increasingly appealing because they do not require precise knowledge of the true underlying models that generated the data under study. Indeed, numerous authors have advocated for their use with standard methods (e.g., regression, inverse probability weighting) in epidemiology. However, when used in the context of such singly robust approaches, nonparametric methods can lead to suboptimal statistical properties, including inefficiency and no valid confidence intervals. Using extensive Monte Carlo simulations, we show how doubly robust methods offer improvements over singly robust approaches when implemented via nonparametric methods. We use 10,000 simulated samples and 50, 100, 200, 600, and 1200 observations to investigate the bias and mean squared error of singly robust (g Computation, inverse probability weighting) and doubly robust (augmented inverse probability weighting, targeted maximum likelihood estimation) estimators under four scenarios: correct and incorrect model specification; and parametric and nonparametric estimation. As expected, results show best performance with g computation under correctly specified parametric models. However, even when based on complex transformed covariates, double robust estimation performs better than singly robust estimators when nonparametric methods are used. Our results suggest that nonparametric methods should be used with doubly instead of singly robust estimation techniques."
to:NB  statistics  causal_inference  estimation  nonparametrics  to_teach:undergrad-ADA  kith_and_kin 
4 weeks ago
Quick and Simple Pasta Pronto! Swiss Chard and Walnut Pesto Recipe
Ingredients
25g butter
bunch of Swiss chard (about 250g to 300g)
3 to 4 cloves garlic (peeled)
50g walnuts (toasted)
100g freshly grated Parmesan cheese (or Grana Padano cheese)
2 tablespoons olive oil
salt and black pepper

Directions
Step 1 Wash the chard thoroughly and trim the ends off the white stalks. Chop the chard roughly, separating the green leaves from the white stalks.
Step 2 Melt the butter in a large saucepan (with lid) and add the garlic and white chard stalks, then replace the lid and sweat over a medium heat for 5 to 6 minutes. After 5 to 6 minutes, add the (green) chard leaves and cook for a further 2 to 3 minutes, until the chard leaves have wilted.
Step 3 Place the cooked chard (leaves and stalks), garlic, walnuts, Parmesan cheese and olive oil into a blender, and pulse the ingredients until they are finely chopped and look like pesto sauce. Alternatively, pulse them all together with an immersion blender, until smooth.

--- Note: if anything, needs more garlic and olive oil.
have_made  recipes  food 
4 weeks ago
[1506.04956] The Scope and Limits of Simulation in Cognitive Models
"It has been proposed that human physical reasoning consists largely of running "physics engines in the head" in which the future trajectory of the physical system under consideration is computed precisely using accurate scientific theories. In such models, uncertainty and incomplete knowledge is dealt with by sampling probabilistically over the space of possible trajectories ("Monte Carlo simulation"). We argue that such simulation-based models are too weak, in that there are many important aspects of human physical reasoning that cannot be carried out this way, or can only be carried out very inefficiently; and too strong, in that humans make large systematic errors that the models cannot account for. We conclude that simulation-based reasoning makes up at most a small part of a larger system that encompasses a wide range of additional cognitive processes."
to:NB  simulation  mental_models  cognitive_science  marcus.gary 
4 weeks ago
Comparing the axiomatic and ecological approaches to rationality: fundamental agreement theorems in SCOP | SpringerLink
"There are two prominent viewpoints regarding the nature of rationality and how it should be evaluated in situations of interest: the traditional axiomatic approach and the newer ecological rationality. An obstacle to comparing and evaluating these seemingly opposite approaches is that they employ different language and formalisms, ask different questions, and are at different stages of development. I adapt a formal framework known as SCOP to address this problem by providing a comprehensive common framework in which both approaches may be defined and compared. The main result is that the axiomatic and ecological approaches are in far greater agreement on fundamental issues than has been appreciated; this is supported by a pair of theorems to the effect that they will make accordant rationality judgements when forced to evaluate the same situation. I conclude that ecological rationality has some subtle advantages, but that we should move past the issues currently dominating the discussion of rationality."
to:NB  rationality  heuristics 
4 weeks ago
[1711.11561] Measuring the tendency of CNNs to Learn Surface Statistical Regularities
"Deep CNNs are known to exhibit the following peculiarity: on the one hand they generalize extremely well to a test set, while on the other hand they are extremely sensitive to so-called adversarial perturbations. The extreme sensitivity of high performance CNNs to adversarial examples casts serious doubt that these networks are learning high level abstractions in the dataset. We are concerned with the following question: How can a deep CNN that does not learn any high level semantics of the dataset manage to generalize so well? The goal of this article is to measure the tendency of CNNs to learn surface statistical regularities of the dataset. To this end, we use Fourier filtering to construct datasets which share the exact same high level abstractions but exhibit qualitatively different surface statistical regularities. For the SVHN and CIFAR-10 datasets, we present two Fourier filtered variants: a low frequency variant and a randomly filtered variant. Each of the Fourier filtering schemes is tuned to preserve the recognizability of the objects. Our main finding is that CNNs exhibit a tendency to latch onto the Fourier image statistics of the training dataset, sometimes exhibiting up to a 28% generalization gap across the various test sets. Moreover, we observe that significantly increasing the depth of a network has a very marginal impact on closing the aforementioned generalization gap. Thus we provide quantitative evidence supporting the hypothesis that deep CNNs tend to learn surface statistical regularities in the dataset rather than higher-level abstract concepts."
to:NB  adversarial_examples  bengio.yoshua 
5 weeks ago
[1712.09665] Adversarial Patch
"We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class."
to:NB  to_read  adversarial_examples 
5 weeks ago
The Republic of Arabic Letters — Alexander Bevilacqua | Harvard University Press
"In the seventeenth and eighteenth centuries, a pioneering community of Christian scholars laid the groundwork for the modern Western understanding of Islamic civilization. These men produced the first accurate translation of the Qur’an into a European language, mapped the branches of the Islamic arts and sciences, and wrote Muslim history using Arabic sources. The Republic of Arabic Letters reconstructs this process, revealing the influence of Catholic and Protestant intellectuals on the secular Enlightenment understanding of Islam and its written traditions.
"Drawing on Arabic, English, French, German, Italian, and Latin sources, Alexander Bevilacqua’s rich intellectual history retraces the routes—both mental and physical—that Christian scholars traveled to acquire, study, and comprehend Arabic manuscripts. The knowledge they generated was deeply indebted to native Muslim traditions, especially Ottoman ones. Eventually the translations, compilations, and histories they produced reached such luminaries as Voltaire and Edward Gibbon, who not only assimilated the factual content of these works but wove their interpretations into the fabric of Enlightenment thought.
"The Republic of Arabic Letters shows that the Western effort to learn about Islam and its religious and intellectual traditions issued not from a secular agenda but from the scholarly commitments of a select group of Christians. These authors cast aside inherited views and bequeathed a new understanding of Islam to the modern West."
history_of_ideas  islamic_civilization  18th_century_history  enlightenment  in_NB 
6 weeks ago
Quantitative historical analysis uncovers a single dimension of complexity that structures global variation in human social organization
"Do human societies from around the world exhibit similarities in the way that they are structured, and show commonalities in the ways that they have evolved? These are long-standing questions that have proven difficult to answer. To test between competing hypotheses, we constructed a massive repository of historical and archaeological information known as “Seshat: Global History Databank.” We systematically coded data on 414 societies from 30 regions around the world spanning the last 10,000 years. We were able to capture information on 51 variables reflecting nine characteristics of human societies, such as social scale, economy, features of governance, and information systems. Our analyses revealed that these different characteristics show strong relationships with each other and that a single principal component captures around three-quarters of the observed variation. Furthermore, we found that different characteristics of social complexity are highly predictable across different world regions. These results suggest that key aspects of social organization are functionally related and do indeed coevolve in predictable ways. Our findings highlight the power of the sciences and humanities working together to rigorously test hypotheses about general rules that may have shaped human history."

--- Contributed (i.e., submitted through PNAS's member-contributed track, where the contributing Academy member arranges the review), so the last tag applies very forcefully.
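--- For a sense of what the headline claim amounts to, here is a toy version of the analysis: simulate a 414-by-51 matrix driven by a single latent factor plus noise, and check that the first principal component absorbs about three-quarters of the variance. The numbers are synthetic stand-ins of mine, not Seshat data; note that a one-factor-plus-noise model is one way, but not the only way, to get such a spectrum.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    latent = rng.normal(size=(414, 1))                 # one shared "complexity" score
    loadings = rng.uniform(0.5, 1.0, size=(1, 51))     # how each variable tracks it
    X = latent @ loadings + 0.45 * rng.normal(size=(414, 51))

    pca = PCA().fit(X)
    print(f"share of variance on PC1: {pca.explained_variance_ratio_[0]:.2f}")
    # ~0.74 with these settings, i.e., "around three-quarters"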
to:NB  to_read  comparative_history  complexity_measures  principal_components  to_teach:undergrad-ADA  to_be_shot_after_a_fair_trial 
6 weeks ago
Federal Statistics, Multiple Data Sources, and Privacy Protection: Next Steps | The National Academies Press
"The environment for obtaining information and providing statistical data for policy makers and the public has changed significantly in the past decade, raising questions about the fundamental survey paradigm that underlies federal statistics. New data sources provide opportunities to develop a new paradigm that can improve timeliness, geographic or subpopulation detail, and statistical efficiency. It also has the potential to reduce the costs of producing federal statistics.
"The panel's first report described federal statistical agencies’ current paradigm, which relies heavily on sample surveys for producing national statistics, and challenges agencies are facing; the legal frameworks and mechanisms for protecting the privacy and confidentiality of statistical data and for providing researchers access to data, and challenges to those frameworks and mechanisms; and statistical agencies access to alternative sources of data. The panel recommended a new approach for federal statistical programs that would combine diverse data sources from government and private sector sources and the creation of a new entity that would provide the foundational elements needed for this new approach, including legal authority to access data and protect privacy.
"This second of the panel's two reports builds on the analysis, conclusions, and recommendations in the first one. This report assesses alternative methods for implementing a new approach that would combine diverse data sources from government and private sector sources, including describing statistical models for combining data from multiple sources; examining statistical and computer science approaches that foster privacy protections; evaluating frameworks for assessing the quality and utility of alternative data sources; and various models for implementing the recommended new entity. Together, the two reports offer ideas and recommendations to help federal statistical agencies examine and evaluate data from alternative sources and then combine them as appropriate to provide the country with more timely, actionable, and useful information for policy makers, businesses, and individuals."
to:NB  books:noted  to_read  data_analysis  data_collection  social_measurement  statistics  record_linkage  privacy 
6 weeks ago
Quinn, J.: In Search of the Phoenicians (Hardcover and eBook) | Princeton University Press
"The Phoenicians traveled the Mediterranean long before the Greeks and Romans, trading, establishing settlements, and refining the art of navigation. But who these legendary sailors really were has long remained a mystery. In Search of the Phoenicians makes the startling claim that the “Phoenicians” never actually existed. Taking readers from the ancient world to today, this monumental book argues that the notion of these sailors as a coherent people with a shared identity, history, and culture is a product of modern nationalist ideologies—and a notion very much at odds with the ancient sources.
"Josephine Quinn shows how the belief in this historical mirage has blinded us to the compelling identities and communities these people really constructed for themselves in the ancient Mediterranean, based not on ethnicity or nationhood but on cities, family, colonial ties, and religious practices. She traces how the idea of “being Phoenician” first emerged in support of the imperial ambitions of Carthage and then Rome, and only crystallized as a component of modern national identities in contexts as far-flung as Ireland and Lebanon.
"In Search of the Phoenicians delves into the ancient literary, epigraphic, numismatic, and artistic evidence for the construction of identities by and for the Phoenicians, ranging from the Levant to the Atlantic, and from the Bronze Age to late antiquity and beyond. A momentous scholarly achievement, this book also explores the prose, poetry, plays, painting, and polemic that have enshrined these fabled seafarers in nationalist histories from sixteenth-century England to twenty-first century Tunisia."
in_NB  books:noted  ancient_history  phoenicians  debunking  nationalism  uses_of_the_past  coveted 
6 weeks ago
Rogan, T.: The Moral Economists: R. H. Tawney, Karl Polanyi, E. P. Thompson, and the Critique of Capitalism (Hardcover and eBook) | Princeton University Press
"What’s wrong with capitalism? Answers to that question today focus on material inequality. Led by economists and conducted in utilitarian terms, the critique of capitalism in the twenty-first century is primarily concerned with disparities in income and wealth. It was not always so. The Moral Economists reconstructs another critical tradition, developed across the twentieth century in Britain, in which material deprivation was less important than moral or spiritual desolation.
"Tim Rogan focuses on three of the twentieth century’s most influential critics of capitalism—R. H. Tawney, Karl Polanyi, and E. P. Thompson. Making arguments about the relationships between economics and ethics in modernity, their works commanded wide readerships, shaped research agendas, and influenced public opinion. Rejecting the social philosophy of laissez-faire but fearing authoritarianism, these writers sought out forms of social solidarity closer than individualism admitted but freer than collectivism allowed. They discovered such solidarities while teaching economics, history, and literature to workers in the north of England and elsewhere. They wrote histories of capitalism to make these solidarities articulate. They used makeshift languages of “tradition” and “custom” to describe them until Thompson patented the idea of the “moral economy.” Their program began as a way of theorizing everything economics left out, but in challenging utilitarian orthodoxy in economics from the outside, they anticipated the work of later innovators inside economics.
"Examining the moral cornerstones of a twentieth-century critique of capitalism, The Moral Economists explains why this critique fell into disuse, and how it might be reformulated for the twenty-first century."
to:NB  books:noted  history_of_ideas  history_of_economics  economics  capitalism  cultural_criticism 
6 weeks ago
Patient Zero and the Making of the AIDS Epidemic, McKay
"The search for a “patient zero”—popularly understood to be the first person infected in an epidemic—has been key to media coverage of major infectious disease outbreaks for more than three decades. Yet the term itself did not exist before the emergence of the HIV/AIDS epidemic in the 1980s. How did this idea so swiftly come to exert such a strong grip on the scientific, media, and popular consciousness? In Patient Zero, Richard A. McKay interprets a wealth of archival sources and interviews to demonstrate how this seemingly new concept drew upon centuries-old ideas—and fears—about contagion and social disorder.
"McKay presents a carefully documented and sensitively written account of the life of Gaétan Dugas, a gay man whose skin cancer diagnosis in 1980 took on very different meanings as the HIV/AIDS epidemic developed—and who received widespread posthumous infamy when he was incorrectly identified as patient zero of the North American outbreak. McKay shows how investigators from the US Centers for Disease Control inadvertently created the term amid their early research into the emerging health crisis; how an ambitious journalist dramatically amplified the idea in his determination to reframe national debates about AIDS; and how many individuals grappled with the notion of patient zero—adopting, challenging and redirecting its powerful meanings—as they tried to make sense of and respond to the first fifteen years of an unfolding epidemic. With important insights for our interconnected age, Patient Zero untangles the complex process by which individuals and groups create meaning and allocate blame when faced with new disease threats. What McKay gives us here is myth-smashing revisionist history at its best."
to:NB  books:noted  history_of_ideas  contagion  AIDS  american_history  outbreak_narrative 
6 weeks ago