Fear and Loathing Across Party Lines
"When defined in terms of social identity and affect toward copartisans and opposing partisans, the polarization of the
American electorate has dramatically increased. We document the scope and consequences of affective polarization of
partisans using implicit, explicit, and behavioral indicators. Our evidence demonstrates that hostile feelings for the opposing
party are ingrained or automatic in voters’ minds, and that affective polarization based on party is just as strong as
polarization based on race. We further show that party cues exert powerful effects on nonpolitical judgments and behaviors.
Partisans discriminate against opposing partisans, doing so to a degree that exceeds discrimination based on race. We note
that the willingness of partisans to display open animus for opposing partisans can be attributed to the absence of norms
governing the expression of negative sentiment and that increased partisan affect provides an incentive for elites to engage
in confrontation rather than cooperation."
to:NB  public_opinion  political_science  us_politics  re:democratic_cognition  via:gabriel_rossman 
2 hours ago
Belief Network Analysis
"Many accounts of political belief systems conceive of them as networks of
interrelated opinions, in which some beliefs are central and others peripheral. We formally show
how such structural features can be used to construct direct measures of belief centrality in a
network of correlations. We apply this method to the 2000 ANES data, which have been used to
argue that political beliefs are organized around parenting schemas. Our structural approach
instead yields results consistent with the central role of political identity, which individuals may
use as the organizing heuristic to filter information from the political field. We search for
population heterogeneity in this organizing logic first by comparing 44 demographic
subpopulations, and then with inductive techniques. Contra recent accounts of belief system
heterogeneity, we find that belief systems of different groups vary in the amount of organization,
but not in the logic which organizes them."
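
--- The structural move is simple enough to sketch (my own toy, not the authors' code; the synthetic items and the use of mean absolute correlation as the centrality measure are my assumptions):

    # Belief centrality in a correlation network: opinion items are nodes,
    # |correlation| between opinions are edge weights, and a belief's
    # centrality is its mean tie strength to the other beliefs.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 1000, 5                    # respondents, opinion items
    ident = rng.normal(size=n)        # latent identity driving most items
    X = np.column_stack([ident + rng.normal(size=n) for _ in range(k - 1)]
                        + [rng.normal(size=n)])  # last item is peripheral

    C = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(C, 0.0)
    centrality = C.sum(axis=1) / (k - 1)   # mean |corr| with other items
    print(np.round(centrality, 2))         # identity-driven items score high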
to:NB  political_science  public_opinion  networks  cognitive_science  re:democratic_cognition  via:gabriel_rossman 
2 hours ago
Where is the epistemic community? On democratisation of science and social accounts of objectivity | SpringerLink
"This article focuses on epistemic challenges related to the democratisation of scientific knowledge production, and to the limitations of current social accounts of objectivity. A process of ’democratisation’ can be observed in many scientific and academic fields today. Collaboration with extra-academic agents and the use of extra-academic expertise and knowledge has become common, and researchers are interested in promoting socially inclusive research practices. As this development is particularly prevalent in policy-relevant research, it is important that the new, more democratic forms of research be objective. In social accounts of objectivity only epistemic communities are taken to be able to produce objective knowledge, or the entity whose objectivity is to be assessed is precisely such a community. As I argue, these accounts do not allow for situations where it is not easy to identify the relevant epistemic community. Democratisation of scientific knowledge production can lead to such situations. As an example, I discuss attempts to link indigenous oral traditions to floods and tsunamis that happened hundreds or even thousands of years ago."
to:NB  science_as_a_social_process  re:democratic_cognition 
2 days ago
Different motivations, similar proposals: objectivity in scientific community and democratic science policy | SpringerLink
"The aim of the paper is to discuss some possible connections between philosophical proposals about the social organisation of science and developments towards a greater democratisation of science policy. I suggest that there are important similarities between one approach to objectivity in philosophy of science—Helen Longino’s account of objectivity as freedom from individual biases achieved through interaction of a variety of perspectives—and some ideas about the epistemic benefits of wider representation of various groups’ perspectives in science policy, as analysed by Mark Brown. Given these similarities, I suggest that they allow one to approach developments in science policy as if one of their aims were epistemic improvement that can be recommended on the basis of the philosophical account; analyses of political developments inspired by these ideas about the benefits of inclusive dialogue can then be used for understanding the possibility to implement a philosophical proposal for improving the objectivity of science in practice. Outlining this suggestion, I also discuss the possibility of important differences between the developments in the two spheres and show how the concern about the possible divergence of politically motivated and epistemically motivated changes may be mitigated. In order to substantiate further the suggestion I make, I discuss one example of a development where politically motivated and epistemically motivated changes converge in practice—the development of professional ethics in American archaeology as analysed by Alison Wylie. I suggest that analysing such specific developments and getting involved with them can be one of the tasks for philosophy of science. In the concluding part of the paper I discuss how this approach to philosophy of science is related to a number of arguments about a more politically relevant philosophy of science"
to:NB  re:democratic_cognition 
2 days ago
What was primitive accumulation? Reconstructing the origin of a critical concept | European Journal of Political Theory - William Clare Roberts, 2017
"The ongoing critical redeployment of primitive accumulation proceeds under two premises. First, it is argued that Marx, erroneously, confined primitive accumulation to the earliest history of capitalism. Second, Marx is supposed to have teleologically justified primitive accumulation as a necessary precondition for socialist development. This article argues that reading Marx’s account of primitive accumulation in the context of contemporaneous debates about working class and socialist strategy rebuts both of these criticisms. Marx’s definition of primitive accumulation as the ‘prehistory of capital’ does not deny its contemporaneity, but marks the distinction between the operations of capital and those of other agencies – especially the state – which are necessary, but also external, to capital itself. This same distinction between capital, which accumulates via the exploitation of labour-power, and the state, which becomes dependent upon capitalist accumulation for its own existence, recasts the historical necessity of primitive accumulation. Marx characterizes the modern state as the armed and servile agent of capital, willing to carry out primitive accumulation wherever the conditions of capitalist accumulation are threatened. Hence, the recent reconstructions risk obliterating Marx’s key insights into the specificity of a) capital as a form of wealth and b) capital’s relationship to the state."

--- Footnote 1 is especially relevant to my interests.
to:NB  have_read  marx.karl  political_economy  re:reading_capital  via:? 
9 days ago
[1706.02744] Avoiding Discrimination through Causal Reasoning
"Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.
"Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them."
to:NB  to_read  causality  algorithmic_fairness  prediction  machine_learning  janzing.dominik  re:ADAfaEPoV  via:arsyed 
9 days ago
Consistency without Inference: Instrumental Variables in Practical Application
"I use the bootstrap to study a comprehensive sample of 1400 instrumental
variables regressions in 32 papers published in the journals of the American
Economic Association. IV estimates are more often found to be falsely significant
and more sensitive to outliers than OLS, while having a higher mean squared error
around the IV population moment. There is little evidence that OLS estimates are
substantively biased, while IV instruments often appear to be irrelevant. In
addition, I find that established weak instrument pre-tests are largely
uninformative and weak instrument robust methods generally perform no better or
substantially worse than 2SLS."
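
--- For intuition about the mean-squared-error point, a toy weak-instrument simulation (mine, not Young's bootstrap design):

    # With a weak first stage, just-identified 2SLS is nearly unbiased
    # but wildly dispersed, while OLS is modestly biased but stable.
    import numpy as np

    rng = np.random.default_rng(2)
    beta, n, reps = 1.0, 500, 2000
    ols, tsls = [], []
    for _ in range(reps):
        z = rng.normal(size=n)
        u = rng.normal(size=n)                  # unobserved confounder
        x = 0.05 * z + u + rng.normal(size=n)   # weak instrument
        y = beta * x + u + rng.normal(size=n)
        ols.append((x @ y) / (x @ x))
        tsls.append((z @ y) / (z @ x))          # just-identified IV
    print("OLS  bias, sd:", np.mean(ols) - beta, np.std(ols))
    print("2SLS bias, sd:", np.mean(tsls) - beta, np.std(tsls))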
to:NB  to_read  re:ADAfaEPoV  to_teach:undergrad-ADA  instrumental_variables  causal_inference  regression  statistics  econometrics  via:kjhealy 
9 days ago
Democracy by mistake
"How does democracy emerge from authoritarian rule? Influential theories contend that incumbents deliberately choose to share or surrender power. They do so to prevent revolution, motivate citizens to fight wars, incentivize governments to provide public goods, outbid elite rivals, or limit factional violence. Examining the history of all democratizations since 1800, I show that such deliberate choice arguments may help explain up to one third of cases. In about two thirds, democratization occurred not because incumbent elites chose it but because, in trying to prevent it, they made mistakes that weakened their hold on power. Common mistakes include: calling elections or starting military conflicts, only to lose them; ignoring popular unrest and being overthrown; initiating limited reforms that get out of hand; and selecting a covert democrat as leader. These mistakes reflect well-known cognitive biases such as overconfidence and the illusion of control."
to:NB  to_read  democracy  re:democratic_cognition  history  institutions  political_science  via:henry_farrell 
10 days ago
Geismer, L.: Don't Blame Us: Suburban Liberals and the Transformation of the Democratic Party (Hardcover, Paperback and eBook) | Princeton University Press
"Don't Blame Us traces the reorientation of modern liberalism and the Democratic Party away from their roots in labor union halls of northern cities to white-collar professionals in postindustrial high-tech suburbs, and casts new light on the importance of suburban liberalism in modern American political culture. Focusing on the suburbs along the high-tech corridor of Route 128 around Boston, Lily Geismer challenges conventional scholarly assessments of Massachusetts exceptionalism, the decline of liberalism, and suburban politics in the wake of the rise of the New Right and the Reagan Revolution in the 1970s and 1980s. Although only a small portion of the population, knowledge professionals in Massachusetts and elsewhere have come to wield tremendous political leverage and power. By probing the possibilities and limitations of these suburban liberals, this rich and nuanced account shows that—far from being an exception to national trends—the suburbs of Massachusetts offer a model for understanding national political realignment and suburban politics in the second half of the twentieth century."
to:NB  books:noted  us_politics  20th_century_history  class_struggles_in_america 
13 days ago
The Zombie Diseases of Climate Change - The Atlantic
I would pay very good money to see what Linda Nagata or Paul McAuley could do with this.
climate_change  plagues_and_peoples  via:?  have_read 
14 days ago
Community and the Crime Decline: The Causal Effect of Local Nonprofits on Violent Crime | American Sociological Review - Patrick Sharkey, Gerard Torrats-Espinosa, Delaram Takyar, 2017
"Largely overlooked in the theoretical and empirical literature on the crime decline is a long tradition of research in criminology and urban sociology that considers how violence is regulated through informal sources of social control arising from residents and organizations internal to communities. In this article, we incorporate the “systemic” model of community life into debates on the U.S. crime drop, and we focus on the role that local nonprofit organizations played in the national decline of violence from the 1990s to the 2010s. Using longitudinal data and a strategy to account for the endogeneity of nonprofit formation, we estimate the causal effect on violent crime of nonprofits focused on reducing violence and building stronger communities. Drawing on a panel of 264 cities spanning more than 20 years, we estimate that every 10 additional organizations focusing on crime and community life in a city with 100,000 residents leads to a 9 percent reduction in the murder rate, a 6 percent reduction in the violent crime rate, and a 4 percent reduction in the property crime rate."

--- Last tag conditional on replication data.
to:NB  causal_inference  crime  institutions  via:rvenkat  to_teach:undergrad-ADA 
14 days ago
[1711.02834] Bootstrapping Generalization Error Bounds for Time Series
"We consider the problem of finding confidence intervals for the risk of forecasting the future of a stationary, ergodic stochastic process, using a model estimated from the past of the process. We show that a bootstrap procedure provides valid confidence intervals for the risk, when the data source is sufficiently mixing, and the loss function and the estimator are suitably smooth. Autoregressive (AR(d)) models estimated by least squares obey the necessary regularity conditions, even when mis-specified, and simulations show that the finite- sample coverage of our bounds quickly converges to the theoretical, asymptotic level. As an intermediate step, we derive sufficient conditions for asymptotic independence between empirical distribution functions formed by splitting a realization of a stochastic process, of independent interest."
in_NB  time_series  bootstrap  statistics  self-promotion  to:blog 
14 days ago
[1711.00867] The (Un)reliability of saliency methods
"Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step ---adding a constant shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution."
to:NB  neural_networks  machine_learning  credit_attribution  via:? 
15 days ago
[1705.07809] Information-theoretic analysis of generalization capability of learning algorithms
"We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou."
to:NB  to_read  learning_theory  information_theory  raginsky.maxim 
15 days ago
[1709.09702] Projective Sparse Latent Space Network Models
"In typical latent-space network models, nodes have latent positions, which are all drawn independently from a common distribution. As a consequence, the number of edges in a network scales quadratically with the number of nodes, resulting in a dense graph sequence as the number of nodes grows. We propose an adjustment to latent-space network models which allows the number edges to scale linearly with the number of nodes, to scale quadratically, or at any intermediate rate. Our models also form projective families, making statistical inference and prediction well-defined. Built through point processes, our models are related to both the Poisson random connection model and the graphex framework."
in_NB  network_data_analysis  networks  graph_limits  point_processes  stochastic_processes  self-promotion  to:blog  to_teach:graphons 
15 days ago
[1711.02123] Consistency of Maximum Likelihood for Continuous-Space Network Models
"Network analysis needs tools to infer distributions over graphs of arbitrary size from a single graph. Assuming the distribution is generated by a continuous latent space model which obeys certain natural symmetry and smoothness properties, we establish three levels of consistency for non-parametric maximum likelihood inference as the number of nodes grows: (i) the estimated locations of all nodes converge in probability on their true locations; (ii) the distribution over locations in the latent space converges on the true distribution; and (iii) the distribution over graphs of arbitrary size converges."
in_NB  network_data_analysis  statistics  self-promotion  to:blog  re:network_differences  re:hyperbolic_networks  to_teach:baby-nets 
15 days ago
[1609.00494] Publication bias and the canonization of false facts
"In the process of scientific inquiry, certain claims accumulate enough support to be established as facts. Unfortunately, not every claim accorded the status of fact turns out to be true. In this paper, we model the dynamic process by which claims are canonized as fact through repeated experimental confirmation. The community's confidence in a claim constitutes a Markov process: each successive published result shifts the degree of belief, until sufficient evidence accumulates to accept the claim as fact or to reject it as false. In our model, publication bias --- in which positive results are published preferentially over negative ones --- influences the distribution of published results. We find that when readers do not know the degree of publication bias and thus cannot condition on it, false claims often can be canonized as facts. Unless a sufficient fraction of negative results are published, the scientific process will do a poor job at discriminating false from true claims. This problem is exacerbated when scientists engage in p-hacking, data dredging, and other behaviors that increase the rate at which false positives are published. If negative results become easier to publish as a claim approaches acceptance as a fact, however, true and false claims can be more readily distinguished. To the degree that the model accurately represents current scholarly practice, there will be serious concern about the validity of purported facts in some areas of scientific research."
to:NB  science_as_a_social_process  collective_cognition  natural_history_of_truthiness  bergstrom.carl  to_read  via:?  kith_and_kin  re:democratic_cognition 
17 days ago
Fortifications and Democracy in the Ancient Greek World by Josiah Ober, Barry R. Weingast :: SSRN
"In the modern world, access-limiting fortification walls are not typically regarded as promoting democracy. But in Greek antiquity, increased investment in fortifications was correlated with the prevalence and stability of democracy. This paper sketches the background conditions of the Greek city-state ecology, analyzes a passage in Aristotle’s Politics, and assesses the choices of Hellenistic kings, Greek citizens, and urban elites, as modeled in a simple game. The paper explains how city walls promoted democracy and helps to explain several other puzzles: why Hellenistic kings taxed Greek cities at lower than expected rates; why elites in Greek cities supported democracy; and why elites were not more heavily taxed by democratic majorities. The relationship between walls, democracy, and taxes promoted continued economic growth into the late classical and Hellenistic period (4th-2nd centuries BCE), and ultimately contributed to the survival of Greek culture into the Roman era, and thus modernity. We conclude with a consideration of whether the walls-democracy relationship holds in modernity."
to:NB  ancient_history  institutions  democracy  war  ober.josiah  via:henry_farrell 
17 days ago
Universalism without Uniformity: Explorations in Mind and Culture, Cassaniti, Menon
"One of the major questions of cultural psychology is how to take diversity seriously while acknowledging our shared humanity. This collection, edited by Julia L. Cassaniti and Usha Menon, brings together leading scholars in the field to reconsider that question and explore the complex mechanisms that connect culture and the human mind.
"The contributors to Universalism without Uniformity offer tools for bridging silos that have historically separated anthropology’s attention to culture and psychology’s interest in universal mental processes. Throughout, they seek to answer intricate yet fundamental questions about why we are motivated to find meaning in everything around us and, in turn, how we constitute the cultural worlds we inhabit through our intentional involvement in them. Laying bare entrenched disciplinary blind spots, this book offers a trove of insights on issues such as morality, emotional functioning, and conceptions of the self across cultures. Filled with impeccable empirical research coupled with broadly applicable theoretical reflections on taking psychological diversity seriously, Universalism without Uniformity breaks new ground in the study of mind and culture. "
to:NB  books:noted  psychology  cultural_differences  cultural_transmission_of_cognitive_tools 
17 days ago
[1711.00813] Bootstrapping Exchangeable Random Graphs
"We introduce two new bootstraps for exchangeable random graphs. One, the "empirical graphon", is based purely on resampling, while the other, the "histogram stochastic block model", is a model-based "sieve" bootstrap. We show that both of them accurately approximate the sampling distributions of motif densities, i.e., of the normalized counts of the number of times fixed subgraphs appear in the network. These densities characterize the distribution of (infinite) exchangeable networks. Our bootstraps therefore give, for the first time, a valid quantification of uncertainty in inferences about fundamental network statistics, and so of parameters identifiable from them."
in_NB  network_data_analysis  statistics  bootstrap  graph_limits  nonparametrics  self-promotion  to:blog 
17 days ago
[1710.02773] Baseline Mixture Models for Social Networks
"Continuous mixtures of distributions are widely employed in the statistical literature as models for phenomena with highly divergent outcomes; in particular, many familiar heavy-tailed distributions arise naturally as mixtures of light-tailed distributions (e.g., Gaussians), and play an important role in applications as diverse as modeling of extreme values and robust inference. In the case of social networks, continuous mixtures of graph distributions can likewise be employed to model social processes with heterogeneous outcomes, or as robust priors for network inference. Here, we introduce some simple families of network models based on continuous mixtures of baseline distributions. While analytically and computationally tractable, these models allow more flexible modeling of cross-graph heterogeneity than is possible with conventional baseline (e.g., Bernoulli or U|man distributions). We illustrate the utility of these baseline mixture models with application to problems of multiple-network ERGMs, network evolution, and efficient network inference. Our results underscore the potential ubiquity of network processes with nontrivial mixture behavior in natural settings, and raise some potentially disturbing questions regarding the adequacy of current network data collection practices."
to:NB  network_data_analysis  exponential_family_random_graphs  mixture_models  statistics  butts.carter 
17 days ago
[1707.07397] Synthesizing Robust Adversarial Examples
"Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems.
"We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems."
to:NB  neural_networks  adversarial_examples 
20 days ago
[1710.11304] Characterizing the structural diversity of complex networks across domains
"The structure of complex networks has been of interest in many scientific and engineering disciplines over the decades. A number of studies in the field have been focused on finding the common properties among different kinds of networks such as heavy-tail degree distribution, small-worldness and modular structure and they have tried to establish a theory of structural universality in complex networks. However, there is no comprehensive study of network structure across a diverse set of domains in order to explain the structural diversity we observe in the real-world networks. In this paper, we study 986 real-world networks of diverse domains ranging from ecological food webs to online social networks along with 575 networks generated from four popular network models. Our study utilizes a number of machine learning techniques such as random forest and confusion matrix in order to show the relationships among network domains in terms of network structure. Our results indicate that there are some partitions of network categories in which networks are hard to distinguish based purely on network structure. We have found that these partitions of network categories tend to have similar underlying functions, constraints and/or generative mechanisms of networks even though networks in the same partition have different origins, e.g., biological processes, results of engineering by human being, etc. This suggests that the origin of a network, whether it's biological, technological or social, may not necessarily be a decisive factor of the formation of similar network structure. Our findings shed light on the possible direction along which we could uncover the hidden principles for the structural diversity of complex networks."
to:NB  network_data_analysis  statistics  classifiers  clauset.aaron  kith_and_kin  to_read  to_teach:baby-nets 
20 days ago
Minimally Sufficient Conditions for the Evolution of Social Learning and the Emergence of Non-Genetic Evolutionary Systems | Artificial Life | MIT Press Journals
"Social learning, defined as the imitation of behaviors performed by others, is recognized as a distinctive characteristic in humans and several other animal species. Previous work has claimed that the evolutionary fixation of social learning requires decision-making cognitive abilities that result in transmission bias (e.g., discriminatory imitation) and/or guided variation (e.g., adaptive modification of behaviors through individual learning). Here, we present and analyze a simple agent-based model that demonstrates that the transition from instinctive actuators (i.e., non-learning agents whose behavior is hardcoded in their genes) to social learners (i.e., agents that imitate behaviors) can occur without invoking such decision-making abilities. The model shows that the social learning of a trait may evolve and fix in a population if there are many possible behavioral variants of the trait, if it is subject to strong selection pressure for survival (as distinct from reproduction), and if imitation errors occur at a higher rate than genetic mutation. These results demonstrate that the (sometimes implicit) assumption in prior work that decision-making abilities are required is incorrect, thus allowing a more parsimonious explanation for the evolution of social learning that applies to a wider range of organisms. Furthermore, we identify genotype-phenotype disengagement as a signal for the imminent fixation of social learners, and explain the way in which this disengagement leads to the emergence of a basic form of cultural evolution (i.e., a non-genetic evolutionary system)."
to:NB  cultural_evolution  agent-based_models  bullock.seth  re:do-institutions-evolve 
21 days ago
Austral
"The great geoengineering projects have failed.
"The world is still warming, sea levels are still rising, and the Antarctic  Peninsula is home to Earth's newest nation, with life quickened by ecopoets spreading across valleys and fjords exposed by the retreat of the ice.
"Austral Morales Ferrado, a child of the last generation of ecopoets, is a husky: an edited person adapted to the unforgiving climate of the far south, feared and despised by most of its population. She's been a convict, a corrections officer in a labour camp, and consort to a criminal, and now, out of desperation, she has committed the kidnapping of the century. But before she can collect the ransom and make a new life elsewhere, she must find a place of safety amongst the peninsula's forests and icy plateaus, and evade a criminal gang that has its own plans for the teenage girl she's taken hostage.
"Blending the story of Austral's flight with the fractured history of her family and its role in the colonisation of Antarctica, Austral is a vivid portrayal of a treacherous new world created by climate change, and shaped by the betrayals and mistakes of the past."

--- Why is this book, which I want with great intensity, not available in the US?
to:NB  books:noted  coveted  science_fiction  climate_change  antarctica  mcauley.paul 
21 days ago
Philosophy Within Its Proper Bounds - Edouard Machery - Oxford University Press
"In Philosophy Within Its Proper Bounds, Edouard Machery argues that resolving many traditional and contemporary philosophical issues is beyond our epistemic reach and that philosophy should re-orient itself toward more humble, but ultimately more important intellectual endeavors. Any resolution to many of these contemporary issues would require an epistemic access to metaphysical possibilities and necessities, which, Machery argues, we do not have. In effect, then, Philosophy Within Its Proper Bounds defends a form of modal skepticism. The book assesses the main philosophical method for acquiring the modal knowledge that the resolution of modally immodest philosophical issues turns on: the method of cases, that is, the consideration of actual or hypothetical situations (which cases or thought experiments describe) in order to determine what facts hold in these situations. Canvassing the extensive work done by experimental philosophers over the last 15 years, Edouard Machery shows that the method of cases is unreliable and should be rejected. Importantly, the dismissal of modally immodest philosophical issues is no cause for despair - many important philosophical issues remain within our epistemic reach. In particular, reorienting the course of philosophy would free time and resources for bringing back to prominence a once-central intellectual endeavor: conceptual analysis."

--- Giving a talk today in town, which I will have to miss.

--- ETA: I will be disappointed if he doesn't quote Pope:
Know then thyself, presume not God to scan;
The proper study of mankind is man.
to:NB  books:noted  philosophy  epistemology  skepticism 
21 days ago
Science organizations troubled by Rand Paul bill targeting peer review
In plainer words, the proposal is to add two political commissars to every peer-review panel.
peer_review  to:blog  us_politics  utter_stupidity 
21 days ago
Salganik, M.: Bit by Bit: Social Research in the Digital Age (Hardcover and eBook) | Princeton University Press
"In just the past several years, we have witnessed the birth and rapid spread of social media, mobile phones, and numerous other digital marvels. In addition to changing how we live, these tools enable us to collect and process data about human behavior on a scale never before imaginable, offering entirely new approaches to core questions about social behavior. Bit by Bit is the key to unlocking these powerful methods—a landmark book that will fundamentally change how the next generation of social scientists and data scientists explores the world around us.
"Bit by Bit is the essential guide to mastering the key principles of doing social research in this fast-evolving digital age. In this comprehensive yet accessible book, Matthew Salganik explains how the digital revolution is transforming how social scientists observe behavior, ask questions, run experiments, and engage in mass collaborations. He provides a wealth of real-world examples throughout and also lays out a principles-based approach to handling ethical challenges.
"Bit by Bit is an invaluable resource for social scientists who want to harness the research potential of big data and a must-read for data scientists interested in applying the lessons of social science to tomorrow’s technologies."
to:NB  books:noted  social_science_methodology  data_mining  salganik.matthew  experimental_sociology  experimental_design  sociology  networked_life  coveted 
21 days ago
Population Control Policies and Fertility Convergence
"Rapid population growth in developing countries in the middle of the 20th century led to fears of a population explosion and motivated the inception of what effectively became a global population- control program. The initiative, propelled in its beginnings by intellectual elites in the United States, Sweden, and some developing countries, mobilized resources to enact policies aimed at reducing fertility by widening contraception provision and changing family-size norms. In the following five decades, fertility rates fell dramatically, with a majority of countries converging to a fertility rate just above two children per woman, despite large cross-country differences in economic variables such as GDP per capita, education levels, urbanization, and female labor force participation. The fast decline in fertility rates in developing economies stands in sharp contrast with the gradual decline experienced earlier by more mature economies. In this paper, we argue that population-control policies likely played a central role in the global decline in fertility rates in recent decades and can explain some patterns of that fertility decline that are not well accounted for by other socioeconomic factors."
to:NB  demography  demographic_transition  public_policy  20th_century_history
22 days ago
Difficult People: Who Is Perceived to Be Demanding in Personal Networks and Why Are They There? | American Sociological Review - Shira Offer, Claude S. Fischer, 2017
"Why do people maintain ties with individuals whom they find difficult? Standard network theories imply that such alters are avoided or dropped. Drawing on a survey of over 1,100 diverse respondents who described over 12,000 relationships, we examined which among those ties respondents nominated as a person whom they “sometimes find demanding or difficult.” Those so listed composed about 15 percent of all alters in the network. After holding ego and alter traits constant, close kin, especially women relatives and aging parents, were especially likely to be named as difficult alters. Non-kin described as friends were less likely, and those described as co-workers more likely, to be listed only as difficult alters. These results suggest that normative and institutional constraints may force people to retain difficult and demanding alters in their networks. We also found that providing support to alters, but not receiving support from those alters, was a major source of difficulty in these relationships. Furthermore, the felt burden of providing support was not attenuated by receiving assistance, suggesting that alters involved in reciprocated exchanges were not less often labeled difficult than were those in unreciprocated ones. This study underlines the importance of constraints in personal networks."
to:NB  social_networks  sociology 
22 days ago
[1706.04692] Bias and high-dimensional adjustment in observational studies of peer effects
"Peer effects, in which the behavior of an individual is affected by the behavior of their peers, are posited by multiple theories in the social sciences. Other processes can also produce behaviors that are correlated in networks and groups, thereby generating debate about the credibility of observational (i.e. nonexperimental) studies of peer effects. Randomized field experiments that identify peer effects, however, are often expensive or infeasible. Thus, many studies of peer effects use observational data, and prior evaluations of causal inference methods for adjusting observational data to estimate peer effects have lacked an experimental "gold standard" for comparison. Here we show, in the context of information and media diffusion on Facebook, that high-dimensional adjustment of a nonexperimental control group (677 million observations) using propensity score models produces estimates of peer effects statistically indistinguishable from those from using a large randomized experiment (220 million observations). Naive observational estimators overstate peer effects by 320% and commonly used variables (e.g., demographics) offer little bias reduction, but adjusting for a measure of prior behaviors closely related to the focal behavior reduces bias by 91%. High-dimensional models adjusting for over 3,700 past behaviors provide additional bias reduction, such that the full model reduces bias by over 97%. This experimental evaluation demonstrates that detailed records of individuals' past behavior can improve studies of social influence, information diffusion, and imitation; these results are encouraging for the credibility of some studies but also cautionary for studies of rare or new behaviors. More generally, these results show how large, high-dimensional data sets and statistical learning techniques can be used to improve causal inference in the behavioral sciences."
to:NB  to_read  re:homophily_and_confounding  causal_inference  network_data_analysis  eckles.dean  bakshy.eytan  experimental_sociology 
23 days ago
Language Log » Blue Cell Dyslexia
"At first I was hesitant to evaluate the study because I’m not a vision scientist, but then I realized that hadn’t prevented the authors from publishing it. Albert Le Floch and Guy Ropars are affiliated with the Université de Rennes, France. Their primary area of expertise appears to be laser physics."
dyslexia  why_oh_why_cant_we_have_a_better_academic_publishing_system  linguistics  psychology 
23 days ago
Professors like me can’t stay silent about this extremist moment on campuses - The Washington Post
This is the first I've heard of this, and of course one has to wonder if it's an accurate account of the situation, but if it is, it's incredibly outrageous.
us_politics  academic_freedom  academia  reed  have_read  wtf?  circular_firing_squad 
25 days ago
Evolutionary dynamics of language systems
"Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact."

--- I would be very curious to know what historical linguists make of this.
to:NB  linguistics  language_history  cultural_evolution  phylogenetics 
28 days ago
Direct measurement of weakly nonequilibrium system entropy is consistent with Gibbs–Shannon form
"Stochastic thermodynamics extends classical thermodynamics to small systems in contact with one or more heat baths. It can account for the effects of thermal fluctuations and describe systems far from thermodynamic equilibrium. A basic assumption is that the expression for Shannon entropy is the appropriate description for the entropy of a nonequilibrium system in such a setting. Here we measure experimentally this function in a system that is in local but not global equilibrium. Our system is a micron-scale colloidal particle in water, in a virtual double-well potential created by a feedback trap. We measure the work to erase a fraction of a bit of information and show that it is bounded by the Shannon entropy for a two-state system. Further, by measuring directly the reversibility of slow protocols, we can distinguish unambiguously between protocols that can and cannot reach the expected thermodynamic bounds."
to:NB  thermodynamics  statistics  non-equilibrium  physics 
28 days ago
The Power of Bias in Economics Research - Ioannidis - 2017 - The Economic Journal - Wiley Online Library
"We investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias. We survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies. Half of the research areas have nearly 90% of their results under-powered. The median statistical power is 18%, or less. A simple weighted average of those reported results that are adequately powered (power ≥ 80%) reveals that nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more."
to:NB  economics  statistics  hypothesis_testing  bad_data_analysis  bad_science_journalism  re:neutral_model_of_inquiry  via:d-squared  to_read 
28 days ago
Gresham's Law of Model Averaging
"A decision maker doubts the stationarity of his environment. In response, he uses two models, one with time-varying parameters, and another with constant parameters. Forecasts are then based on a Bayesian model averaging strategy, which mixes forecasts from the two models. In reality, structural parameters are constant, but the (unknown) true model features expectational feedback, which the reduced-form models neglect. This feedback permits fears of parameter instability to become self-confirming. Within the context of a standard asset-pricing model, we use the tools of large deviations theory to show that even though the constant parameter model would converge to the rational expectations equilibrium if considered in isolation, the mere presence of an unstable alternative drives it out of consideration."
to:NB  to_read  model_selection  ensemble_methods  self-fulfilling_prophecies  large_deviations  statistics 
28 days ago
What Facebook Did to American Democracy - The Atlantic
This is very good. Some (small) points of critique:
(1) Zeynep Tufekci deserves much more than a parenthetical name-drop on these issues.
(2) The initial 2012 experiment by Fowler et al suffers from a very serious design flaw, which means it confounds _being exposed to a lot of social influence via Facebook_ with _being the kind of person who has many Facebook ties_. I am not aware of subsequent experiments which correct the flaw, though it could be done.
facebook  social_media  networked_life  us_politics  re:democratic_cognition  via:?  have_read 
29 days ago
The molecular genetics of participation in the Avon Longitudinal Study of Parents and Children | bioRxiv
"Background: It is often assumed that selection (including participation and dropout) does not represent an important source of bias in genetic studies. However, there is little evidence to date on the effect of genetic factors on participation. Methods: Using data on mothers (N=7,486) and children (N=7,508) from the Avon Longitudinal Study of Parents and Children, we 1) examined the association of polygenic risk scores for a range of socio-demographic, lifestyle characteristics and health conditions related to continued participation, 2) investigated whether associations of polygenic scores with body mass index (BMI; derived from self-reported weight and height) and self-reported smoking differed in the largest sample with genetic data and a sub-sample who participated in a recent follow-up and 3) determined the proportion of variation in participation explained by common genetic variants using genome-wide data. Results: We found evidence that polygenic scores for higher education, agreeableness and openness were associated with higher participation and polygenic scores for smoking initiation, higher BMI, neuroticism, schizophrenia, ADHD and depression were associated with lower participation. Associations between the polygenic score for education and self-reported smoking differed between the largest sample with genetic data (OR for ever smoking per SD increase in polygenic score:0.85, 95% CI:0.81,0.89) and sub-sample (OR:0.95, 95% CI:0.88,1.02). In genome-wide analysis, single nucleotide polymorphism based heritability explained 17-31% of variability in participation. Conclusions: Genetic association studies, including Mendelian randomization, can be biased by selection, including loss to follow-up. Genetic risk for dropout should be considered in all analyses of studies with selective participation."

!!!
to:NB  to_read  to_be_shot_after_a_fair_trial  human_genetics  heritability 
4 weeks ago
The Scientific Buddha: His Short and Happy Life | Yale University Press
"This book tells the story of the Scientific Buddha, "born" in Europe in the 1800s but commonly confused with the Buddha born in India 2,500 years ago. The Scientific Buddha was sent into battle against Christian missionaries, who were proclaiming across Asia that Buddhism was a form of superstition. He proved the missionaries wrong, teaching a dharma that was in harmony with modern science. And his influence continues. Today his teaching of "mindfulness" is heralded as the cure for all manner of maladies, from depression to high blood pressure.
"In this potent critique, a well-known chronicler of the West's encounter with Buddhism demonstrates how the Scientific Buddha's teachings deviate in crucial ways from those of the far older Buddha of ancient India. Donald Lopez shows that the Western focus on the Scientific Buddha threatens to bleach Buddhism of its vibrancy, complexity, and power, even as the superficial focus on "mindfulness" turns Buddhism into merely the latest self-help movement. The Scientific Buddha has served his purpose, Lopez argues. It is now time for him to pass into nirvana. This is not to say, however, that the teachings of the ancient Buddha must be dismissed as mere cultural artifacts. They continue to present a potent challenge, even to our modern world."
in_NB  books:noted  buddhism  history_of_religion 
4 weeks ago
The Global Transformation of Time — Vanessa Ogle | Harvard University Press
"As new networks of railways, steamships, and telegraph communications brought distant places into unprecedented proximity, previously minor discrepancies in local time-telling became a global problem. Vanessa Ogle’s chronicle of the struggle to standardize clock times and calendars from 1870 to 1950 highlights the many hurdles that proponents of uniformity faced in establishing international standards.
"Time played a foundational role in nineteenth-century globalization. Growing interconnectedness prompted contemporaries to reflect on the annihilation of space and distance and to develop a global consciousness. Time—historical, evolutionary, religious, social, and legal—provided a basis for comparing the world’s nations and societies, and it established hierarchies that separated “advanced” from “backward” peoples in an age when such distinctions underwrote European imperialism.
"Debates and disagreements on the varieties of time drew in a wide array of observers: German government officials, British social reformers, colonial administrators, Indian nationalists, Arab reformers, Muslim scholars, and League of Nations bureaucrats. Such exchanges often heightened national and regional disparities. The standardization of clock times therefore remained incomplete as late as the 1940s, and the sought-after unification of calendars never came to pass. The Global Transformation of Time reveals how globalization was less a relentlessly homogenizing force than a slow and uneven process of adoption and adaptation that often accentuated national differences."
in_NB  books:noted  modernity  imperialism  time-keeping  history_of_technology  history  the_present_before_it_was_widely_distributed 
4 weeks ago
[1710.05468] Generalization in Deep Learning
"This paper explains why deep learning can generalize well, despite large capacity and possible algorithmic instability, nonrobustness, and sharp minima, effectively addressing an open problem in the literature. Based on our theoretical insight, this paper also proposes a family of new regularization methods. Its simplest member was empirically shown to improve base models and achieve state-of-the-art performance on MNIST and CIFAR-10 benchmarks. Moreover, this paper presents both data-dependent and data-independent generalization guarantees with improved convergence rates. Our results suggest several new open areas of research."
to:NB  learning_theory  to_read 
5 weeks ago
A Pottery Barn rule for scientific journals – The Hardest Science
"Proposed: Once a journal has published a study, it becomes responsible for publishing direct replications of that study. Publication is subject to editorial review of technical merit but is not dependent on outcome. Replications shall be published as brief reports in an online supplement, linked from the electronic version of the original."

--- I like this proposal very much.
why_oh_why_cant_we_have_a_better_academic_publishing_system  science_as_a_social_process  have_read 
5 weeks ago
How Much Should We Trust Ideal Point Estimates from Surveys?
Ans.: Not at all.
Commentary, in no particular order:
(1) Justin Gross, in his 2010 Ph.D. thesis, looked at the stability of NOMINATE scores in Senate roll-call voting, across various sub-sets of votes. His main interest there was in getting at uncertainty in statements like "the 32nd most conservative senator", but some of his findings would also apply to CV for that context. (I don't know if Justin ever published that separately.)
(2) I suspect there is something funny about the way they are doing CV, because when they simulate from 1D ideal-point models, it doesn't favor 1 dimension! I think it might be better to make the validation set be distributed across both questions and respondents (as in Dabbs & Junker on network CV, https://arxiv.org/abs/1605.03000), but I am not sure.
(3) Regressing changes in average likelihood to see which sub-groups benefit most from an ideal point model is just weird. At the very least, I'd use log-likelihood, and there should be some way of getting at this by estimating a joint model, with both covariates and latent ideal points.
to:NB  public_opinion  inference_to_latent_objects  statistics  have_read  via:henry_farrell  cross-validation 
5 weeks ago
When and how to satisfice: an experimental investigation | SpringerLink
"This paper is about satisficing behaviour. Rather tautologically, this is when decision-makers are satisfied with achieving some objective, rather than in obtaining the best outcome. The term was coined by Simon (Q J Econ 69:99–118, 1955), and has stimulated many discussions and theories. Prominent amongst these theories are models of incomplete preferences, models of behaviour under ambiguity, theories of rational inattention, and search theories. Most of these, however, seem to lack an answer to at least one of two key questions: when should the decision-maker (DM) satisfice; and how should the DM satisfice. In a sense, search models answer the latter question (in that the theory tells the DM when to stop searching), but not the former; moreover, usually the question as to whether any search at all is justified is left to a footnote. A recent paper by Manski (Theory Decis. doi:10.1007/s11238-017-9592-1, 2017) fills the gaps in the literature and answers the questions: when and how to satisfice? He achieves this by setting the decision problem in an ambiguous situation (so that probabilities do not exist, and many preference functionals can therefore not be applied) and by using the Minimax Regret criterion as the preference functional. The results are simple and intuitive. This paper reports on an experimental test of his theory. The results show that some of his propositions (those relating to the ‘how’) appear to be empirically valid while others (those relating to the ‘when’) are less so."

--- I am continually impressed by economists' resistance to Simon's (very simple) points about computational complexity. (But: two cheers for actually doing something empirical.)
to:NB  decision-making  decision_theory  bounded_rationality 
6 weeks ago
Block Bootstrap for the Empirical Process of Long-Range Dependent Data - Tewes - 2017 - Journal of Time Series Analysis - Wiley Online Library
"We consider the bootstrapped empirical process of long-range dependent data. It is shown that this process converges to a semi-degenerate limit, where the random part of this limit is always Gaussian. Thus the bootstrap might fail when the original empirical process accomplishes a noncentral limit theorem. However, even in this case our results can be used to estimate a nuisance parameter that appears in the limit of many nonparametric tests under long memory. Moreover, we develop a new resampling procedure for goodness-of-fit tests and a test for monotonicity of transformations."
to:NB  stochastic_processes  time_series  statistics  bootstrap  empirical_processes  long-range_dependence 
6 weeks ago
Automated conjecturing III | SpringerLink
"Discovery in mathematics is a prototypical intelligent behavior, and an early and continuing goal of artificial intelligence research. We present a heuristic for producing mathematical conjectures of a certain typical form and demonstrate its utility. Our program conjectures relations that hold between properties of objects (property-relation conjectures). These objects can be of a wide variety of types. The statements are true for all objects known to the program, and are the simplest statements which are true of all these objects. The examples here include new conjectures for the hamiltonicity of a graph, a well-studied property of graphs. While our motivation and experiments have been to produce mathematical conjectures—and to contribute to mathematical research—other kinds of interesting property-relation conjectures can be imagined, and this research may be more generally applicable to the development of intelligent machinery."
to:NB  heuristics  artificial_intelligence  mathematics 
6 weeks ago
Live Work Work Work Die | Corey Pein | Macmillan
"At the height of the startup boom, journalist Corey Pein set out for Silicon Valley with little more than a smartphone and his wits. His goal: to learn how such an overhyped industry could possibly sustain itself as long as it has. Determined to cut through the clichés of big tech—the relentless optimism, the mandatory enthusiasm, and the earnest, incessant repetition of vacuous buzzwords—Pein decided that he would need to take an approach as unorthodox as the companies he would soon be covering. To truly understand the delirious reality of a Silicon Valley entrepreneur, he knew, he would have to inhabit that perspective—he would have to become an entrepreneur. Thus he begins his journey—skulking through gimmicky tech conferences, pitching his over-the-top business ideas to investors, and interviewing a cast of outrageous characters: cyborgs and con artists, Teamsters and transhumanists, jittery hackers and naive upstart programmers whose entire lives are managed by their employers—who work endlessly and obediently, never thinking to question their place in the system.
"In showing us this frantic world, Pein challenges the positive, feel-good self-image that the tech tycoons have crafted—as nerdy and benevolent creators of wealth and opportunity—revealing their self-justifying views and their insidious visions for the future. Indeed, as Pein shows, Silicon Valley is awash in disreputable ideas: Google executive and futurist Raymond Kurzweil has a side business peddling dietary supplements and has for years pushed the outlandish notion that human beings are destined to merge with computers and live forever in some kind of digital cosmic hive mind. Peter Thiel, the billionaire venture capitalist affiliated with PayPal and Facebook, is now an important advisor to President Trump and has subsidized a prolific blogger known by the pen name Mencius Moldbug who writes approvingly of ideas like eugenics and dictatorship. And Moldbug is not alone. There is, in fact, a small but influential—and growing—group of techies with similarly absurd and extremist beliefs who call themselves the “neoreactionary” vanguard of a “Dark Enlightenment.”
"Vivid and incisive, Live Work Work Work Die is a troubling portrait of a self-obsessed industry bent on imposing its disturbing visions on the rest of us."

--- By the author of the instant-classic "Mouthbreathing Machiavellis Dream of a Silicon Reich" (https://thebaffler.com/latest/mouthbreathing-machiavellis).
to:NB  books:noted  coveted  the_wired_ideology  nerdworld  running_dogs_of_reaction 
6 weeks ago
Envisioning the Data Science Discipline: The Undergraduate Perspective: Interim Report | The National Academies Press
"The need to manage, analyze, and extract knowledge from data is pervasive across industry, government, and academia. Scientists, engineers, and executives routinely encounter enormous volumes of data, and new techniques and tools are emerging to create knowledge out of these data, some of them capable of working with real-time streams of data. The nation’s ability to make use of these data depends on the availability of an educated workforce with necessary expertise. With these new capabilities have come novel ethical challenges regarding the effectiveness and appropriateness of broad applications of data analyses.
"The field of data science has emerged to address the proliferation of data and the need to manage and understand it. Data science is a hybrid of multiple disciplines and skill sets, draws on diverse fields (including computer science, statistics, and mathematics), encompasses topics in ethics and privacy, and depends on specifics of the domains to which it is applied. Fueled by the explosion of data, jobs that involve data science have proliferated and an array of data science programs at the undergraduate and graduate levels have been established. Nevertheless, data science is still in its infancy, which suggests the importance of envisioning what the field might look like in the future and what key steps can be taken now to move data science education in that direction.
"This study will set forth a vision for the emerging discipline of data science at the undergraduate level. This interim report lays out some of the information and comments that the committee has gathered and heard during the first half of its study, offers perspectives on the current state of data science education, and poses some questions that may shape the way data science education evolves in the future. The study will conclude in early 2018 with a final report that lays out a vision for future data science education."
to:NB  to_read  books:noted  statistics  education  to_be_shot_after_a_fair_trial 
6 weeks ago
Assessing benefits, costs, and disparate racial impacts of confrontational proactive policing
"Effective policing in a democratic society must balance the sometime conflicting objectives of public safety and community trust. This paper uses a formal model of optimal policing to explore how society might reasonably resolve the tension between these two objectives as well as evaluate disparate racial impacts. We do so by considering the social benefits and costs of confrontational types of proactive policing, such as stop, question, and frisk. Three features of the optimum that are particularly relevant to policy choices are explored: (i) the cost of enforcement against the innocent, (ii) the baseline level of crime rate without confrontational enforcement, and (iii) differences across demographic groups in the optimal rate of enforcement."
to:NB  police  political_economy  to_be_shot_after_a_fair_trial  to_read 
6 weeks ago
Trumpism and American Democracy: History, Comparison, and the Predicament of Liberal Democracy in the United States by Robert C. Lieberman, Suzanne Mettler, Thomas B. Pepinsky, Kenneth M. Roberts, Richard Valelly :: SSRN
"In the eyes of many citizens, activists, pundits, and scholars, American democracy appears under threat. Concern about President Trump and the future of American politics may be found among both conservatives and progressives; among voters, activists, and elites; and among many scholars and analysts of American and comparative politics. What is the nature of the Trumpism as a political phenomenon? And how much confidence should we have at present in the capacity of American institutions to withstand this threat?
"In this essay, we argue that answering these questions and understanding what is uniquely threatening to democracy at the present moment requires looking beyond the contemporary particulars of Donald Trump and his presidency. Instead, it demands a historical and comparative perspective on American politics. Drawing on a range of insights from the fields of comparative politics and American political development, we argue that President Trump’s election in 2016 represents the intersection of three streams in American politics: polarized two-party presidentialism; a polity fundamentally divided over membership and status in the political community, in ways structured by race and economic inequality; and the erosion of democratic norms at the elite and mass levels. The current political circumstance is an existential threat to American democratic order because of the interactive effects of institutions, identity, and norm-breaking in American politics."
us_politics  democracy  institutions  political_science  via:?  trump.donald 
6 weeks ago
[1607.00653] node2vec: Scalable Feature Learning for Networks
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks."
to:NB  to_read  network_data_analysis  leskovec.jure 
6 weeks ago
Large deviation principle for epidemic models | Journal of Applied Probability | Cambridge Core
"We consider a general class of epidemic models obtained by applying the random time changes of Ethier and Kurtz (2005) to a collection of Poisson processes and we show the large deviation principle for such models. We generalise the approach followed by Dolgoarshinnykh (2009) in the case of the SIR epidemic model. Thanks to an additional assumption which is satisfied in many examples, we simplify the recent work of Kratz and Pardoux (2017)."
to:NB  large_deviations  epidemic_models  stochastic_processes  to_read  re:almost_none  re:do-institutions-evolve 
6 weeks ago
[1609.04212] Formalizing Neurath's Ship: Approximate Algorithms for Online Causal Learning
"Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of four experiments."

--- This sounds interesting, but does nothing to alleviate my usual issue with Griffiths's stuff, which is that I don't see what _Bayesianism_ adds, that you couldn't just get from some sort of evolutionary optimization (cf. http://bactra.org/weblog/601.html and especially http://bactra.org/weblog/796.html) --- but then we're back in the world of Holland et al.'s _Induction_ (not that that's bad: http://bactra.org/reviews/hhnt-induction/).
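
--- As I read the abstract, the algorithmic core is something like a Metropolis-style local search that carries a single graph hypothesis and amends it one edge at a time. A speculative sketch only, and the scoring function log_posterior is purely hypothetical:

    import math, random

    def neurath_step(edges, nodes, log_posterior, rng):
        """One local-update step: the learner keeps a single causal-graph
        hypothesis and stochastically accepts a one-edge amendment to it."""
        i, j = rng.sample(nodes, 2)
        proposal = set(edges) ^ {(i, j)}          # toggle the edge i -> j
        log_ratio = log_posterior(proposal) - log_posterior(edges)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            return proposal
        return edges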
to:NB  to_read  causal_inference  via:vaguery  to_be_shot_after_a_fair_trial 
6 weeks ago
Biological Clocks, Rhythms, and Oscillations: The Theory of Biological Timekeeping | The MIT Press
"All areas of biology and medicine contain rhythms, and these behaviors are best understood through mathematical tools and techniques. This book offers a survey of mathematical, computational, and analytical techniques used for modeling biological rhythms, gathering these methods for the first time in one volume. Drawing on material from such disciplines as mathematical biology, nonlinear dynamics, physics, statistics, and engineering, it presents practical advice and techniques for studying biological rhythms, with a common language.
"The chapters proceed with increasing mathematical abstraction. Part I, on models, highlights the implicit assumptions and common pitfalls of modeling, and is accessible to readers with basic knowledge of differential equations and linear algebra. Part II, on behaviors, focuses on simpler models, describing common properties of biological rhythms that range from the firing properties of squid giant axon to human circadian rhythms. Part III, on mathematical techniques, guides readers who have specific models or goals in mind. Sections on “frontiers” present the latest research; “theory” sections present interesting mathematical results using more accessible approaches than can be found elsewhere. Each chapter offers exercises. Commented MATLAB code is provided to help readers get practical experience.
"The book, by an expert in the field, can be used as a textbook for undergraduate courses in mathematical biology or graduate courses in modeling biological rhythms and as a reference for researchers."
to:NB  books:noted  mathematics  biology 
6 weeks ago
Perturbations, Optimization, and Statistics | The MIT Press
"In nearly all machine learning, decisions must be made given current knowledge. Surprisingly, making what is believed to be the best decision is not always the best strategy, even when learning in a supervised learning setting. An emerging body of work on learning under different rules applies perturbations to decision and learning procedures. These methods provide simple and highly efficient learning rules with improved theoretical guarantees. This book describes perturbation-based methods developed in machine learning to augment novel optimization methods with strong statistical guarantees, offering readers a state-of-the-art overview.
"Chapters address recent modeling ideas that have arisen within the perturbations framework, including Perturb & MAP, herding, and the use of neural networks to map generic noise to distribution over highly structured data. They describe new learning procedures for perturbation models, including an improved EM algorithm and a learning algorithm that aims to match moments of model samples to moments of data. They discuss understanding the relation of perturbation models to their traditional counterparts, with one chapter showing that the perturbations viewpoint can lead to new algorithms in the traditional setting. And they consider perturbation-based regularization in neural networks, offering a more complete understanding of dropout and studying perturbations in the context of deep neural networks."
to:NB  books:noted  computational_statistics  optimization  stochastic_approximation  learning_theory  statistics 
6 weeks ago
Coding Literacy | The MIT Press
"The message from educators, the tech community, and even politicians is clear: everyone should learn to code. To emphasize the universality and importance of computer programming, promoters of coding for everyone often invoke the concept of “literacy,” drawing parallels between reading and writing code and reading and writing text. In this book, Annette Vee examines the coding-as-literacy analogy and argues that it can be an apt rhetorical frame. The theoretical tools of literacy help us understand programming beyond a technical level, and in its historical, social, and conceptual contexts. Viewing programming from the perspective of literacy and literacy from the perspective of programming, she argues, shifts our understandings of both. Computer programming becomes part of an array of communication skills important in everyday life, and literacy, augmented by programming, becomes more capacious.
"Vee examines the ways that programming is linked with literacy in coding literacy campaigns, considering the ideologies that accompany this coupling, and she looks at how both writing and programming encode and distribute information. She explores historical parallels between writing and programming, using the evolution of mass textual literacy to shed light on the trajectory of code from military and government infrastructure to large-scale businesses to personal use. Writing and coding were institutionalized, domesticated, and then established as a basis for literacy. Just as societies demonstrated a “literate mentality” regardless of the literate status of individuals, Vee argues, a “computational mentality” is now emerging even though coding is still a specialized skill."
to:NB  books:noted  literacy  programming  to_be_shot_after_a_fair_trial 
7 weeks ago
Invisible Mind | The MIT Press
"In Invisible Mind, Lasana Harris takes a social neuroscience approach to explaining the worst of human behavior. How can a person take part in racially motivated violence and then tenderly cradle a baby or lovingly pet a puppy? Harris argues that our social cognition—the ability to infer the mental states of another agent—is flexible. That is, we can either engage or withhold social cognition. If we withhold social cognition, we dehumanize the other person. Integrating theory from a range of disciplines—social, developmental, and cognitive psychology, evolutionary anthropology, philosophy, economics, and law—with neuroscience data, Harris explores how and why we engage or withhold social cognition. He examines research in these different disciplines and describes biological processes that underlie flexible social cognition, including brain, genetic, hormonal, and physiological mechanisms.
"After laying out the philosophical and theoretical terrain, Harris explores examples of social cognitive ability in nonhumans and explains the evolutionary staying power of this trait. He addresses two motives for social cognition—prediction and explanation—and reviews cases of anthropomorphism (extending social cognition to entities without mental states) and dehumanization (withholding it from people with mental states). He discusses the relation of social cognition to the human/nonhuman distinction and to the evolution of sociality. He considers the importance of social context and, finally, he speculates about the implications of flexible social cognition in such arenas for human interaction as athletic competition and international disputes."
to:NB  books:noted  social_psychology  moral_psychology 
7 weeks ago
[1406.0423] Targeted Maximum Likelihood Estimation using Exponential Families
"Targeted maximum likelihood estimation (TMLE) is a general method for estimating parameters in semiparametric and nonparametric models. Each iteration of TMLE involves fitting a parametric submodel that targets the parameter of interest. We investigate the use of exponential families to define the parametric submodel. This implementation of TMLE gives a general approach for estimating any smooth parameter in the nonparametric model. A computational advantage of this approach is that each iteration of TMLE involves estimation of a parameter in an exponential family, which is a convex optimization problem for which software implementing reliable and computationally efficient methods exists. We illustrate the method in three estimation problems, involving the mean of an outcome missing at random, the parameter of a median regression model, and the causal effect of a continuous exposure, respectively. We conduct a simulation study comparing different choices for the parametric submodel, focusing on the first of these problems. To the best of our knowledge, this is the first study investigating robustness of TMLE to different specifications of the parametric submodel. We find that the choice of submodel can have an important impact on the behavior of the estimator in finite samples."
to:NB  statistics  estimation  nonparametrics  causal_inference  exponential_families 
7 weeks ago
Is Capitalism Obsolete? — Giacomo Corneo | Harvard University Press
"After communism collapsed in the former Soviet Union, capitalism seemed to many observers like the only game in town, and questioning it became taboo for academic economists. But the financial crisis, chronic unemployment, and the inexorable rise of inequality have resurrected the question of whether there is a feasible and desirable alternative to capitalism. Against this backdrop of growing disenchantment, Giacomo Corneo presents a refreshingly antidogmatic review of economic systems, taking as his launching point a fictional argument between a daughter indignant about economic injustice and her father, a professor of economics.
"Is Capitalism Obsolete? begins when the daughter’s angry complaints prompt her father to reply that capitalism cannot responsibly be abolished without an alternative in mind. He invites her on a tour of tried and proposed economic systems in which production and consumption obey noncapitalistic rules. These range from Plato’s Republic to diverse modern models, including anarchic communism, central planning, and a stakeholder society. Some of these alternatives have considerable strengths. But daunting problems arise when the basic institutions of capitalism—markets and private property—are suppressed. Ultimately, the father argues, all serious counterproposals to capitalism fail to pass the test of economic feasibility. Then the story takes an unexpected turn. Father and daughter jointly come up with a proposal to gradually transform the current economic system so as to share prosperity and foster democratic participation."
to:NB  books:noted  progressive_forces  capitalism  socialism  economics 
7 weeks ago
Rhododendron, Milne
"Has ever a plant inspired such love and such hatred as the rhododendron? Its beauty is inarguable; it can clothe whole hillsides and gardens with a blanket of vibrant color. The rhododendron has a propensity towards sexual infidelity, making it very popular with horticultural breeding programs. And it can also be used as an herbal remedy for an astonishing range of ailments.
"But there is a darker side to these gorgeous flowers. Daphne du Maurier used the red rhododendron as a symbol of blood in her best-selling novel Rebecca, and numerous Chinese folktales link the plant with tragedy and death. It can poison livestock and intoxicate humans, and its narcotic honey has been used as a weapon of war. Rhododendron ponticum has run riot across the British countryside, but the full story of this implacable invader contains many fascinating surprises.
"In this beautifully illustrated volume, Richard Milne explores the many ways in which the rhododendron has influenced human societies, relating this to the extraordinary story of the plant’s evolution. Over one thousand species of the plant exist, ranging from rugged trees on Himalayan slopes to rock-hugging alpines, and delicate plants perched on rainforest branches. Milne relays tales of mythical figures, intrepid collectors, and eccentric plant breeders. However much you may think you know about the rhododendron, this charming book will offer something new."
books:noted  plants 
7 weeks ago
A History of the Silk Road, Clements
"The Silk Road is not a place, but a journey, a route from the edges of the Mediterranean to the central plains of China, through high mountains and inhospitable deserts. For thousands of years its history has been a traveler’s history, of brief encounters in desert towns, snowbound passes and nameless forts. It was the conduit that first brought Buddhism, Christianity and Islam into China, and the site of much of the “Great Game” between 19th-century empires. Today, its central section encompasses several former Soviet republics, and the Chinese Autonomous Region of Xinjiang. The ancient trade route controversially crosses the sites of several forgotten kingdoms, buried in sand and only now revealing their secrets.
"A History of the Silk Road not only offers the reader a chronological outline of the region’s development, but also provides an invaluable introduction to its languages, literature, and arts. It takes a comprehensive and illuminating look at the rich history of this dynamic and little known region, and provides an easy-to-use reference source. Jonathan Clements pays particular attention to the fascinating historical sites which feature on any visitor’s itinerary and also gives special emphasis to the writings and reactions of travelers through the centuries."
to:NB  books:noted  ancient_history  eurasian_history  silk_road 
7 weeks ago
Technosystem — Andrew Feenberg | Harvard University Press
"We live in a world of technical systems designed in accordance with technical disciplines and operated by personnel trained in those disciplines. This is a unique form of social organization that largely determines our way of life, but the actions of individuals and social protest still play a role in developing and purposing these rational systems. In Technosystem, Andrew Feenberg builds a theory of both the threats of technocratic modernity and the potential for democratic change.
"Feenberg draws on the tradition of radical social criticism represented by Herbert Marcuse and the Frankfurt School, which recognized the social effects of instrumental rationality but did not advance a convincing alternative to the new forms of domination imposed by rational systems. That is where the fine-grained analyses of Science, Technology, and Society (STS) studies can contribute. Feenberg uses these approaches to reconcile the claims of rationality with the agency of a public increasingly mobilized to intervene in technically based decisions. The resulting social theory recognizes emerging forms of resistance, such as protests and hacking, as essential expressions of public life in the “rational society.”
"Combining the most salient insights from critical theory with the empirical findings of STS, Technosystem advances the philosophical debate over the nature and practice of reason in modern society."
to:NB  books:noted  philosophy  critical_theory  technology 
7 weeks ago
Test Score Measurement and the Black-White Test Score Gap | The Review of Economics and Statistics | MIT Press Journals
"Research as to the size of the black-white test score gap often comes to contradictory conclusions. Recent literature has affirmed that the source of these contradictions and other controversies in education economics may be due to the fact that test scores contain only ordinal information. In this paper, I propose a normalization of test scores that is invariant to monotonic transformations. Under fairly weak assumptions, this metric has interval properties and thus solves the ordinality problem. The measure can serve as a valuable robustness check to ensure that any results are not simply statistical artifacts from the choice of scale."
mental_testing  standardized_testing  re:g_paper  to:NB 
7 weeks ago
Optimistic realism about scientific progress | SpringerLink
"Scientific realists use the “no miracle argument” to show that the empirical and pragmatic success of science is an indicator of the ability of scientific theories to give true or truthlike representations of unobservable reality. While antirealists define scientific progress in terms of empirical success or practical problem-solving, realists characterize progress by using some truth-related criteria. This paper defends the definition of scientific progress as increasing truthlikeness or verisimilitude. Antirealists have tried to rebut realism with the “pessimistic metainduction”, but critical realists turn this argument into an optimistic view about progressive science."
to:NB  philosophy_of_science  epistemology 
8 weeks ago
On the Frequentist Properties of Bayesian Nonparametric Methods | Annual Review of Statistics and Its Application
"In this paper, I review the main results on the asymptotic properties of the posterior distribution in nonparametric or high-dimensional models. In particular, I explain how posterior concentration rates can be derived and what we learn from such analysis in terms of the impact of the prior distribution on high-dimensional models. These results concern fully Bayes and empirical Bayes procedures. I also describe some of the results that have been obtained recently in semiparametric models, focusing mainly on the Bernstein–von Mises property. Although these results are theoretical in nature, they shed light on some subtle behaviors of the prior models and sharpen our understanding of the family of functionals that can be well estimated for a given prior model."
to:NB  bayesian_consistency  bayesianism  statistics  re:bayes_as_evol 
8 weeks ago