2426
Data Science for Undergraduates: Opportunities and Options | The National Academies Press
Data science is emerging as a field that is revolutionizing science and industries alike. Work across nearly all domains is becoming more data driven, affecting both the jobs that are available and the skills that are required. As more data and ways of analyzing them become available, more aspects of the economy, society, and daily life will become dependent on data. It is imperative that educators, administrators, and students begin today to consider how to best prepare for and keep pace with this data-driven era of tomorrow. Undergraduate teaching, in particular, offers a critical link in offering more data science exposure to students and expanding the supply of data science talent.

Data Science for Undergraduates: Opportunities and Options offers a vision for the emerging discipline of data science at the undergraduate level. This report outlines some considerations and approaches for academic institutions and others in the broader data science communities to help guide the ongoing transformation of this field.
nap  report  statistics  machine_learning  pedagogy  for_friends 
16 hours ago
Bayesian reverse-engineering considered as a research strategy for cognitive science | SpringerLink
Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly pragmatic in character and are often deployed unsystematically, Bayesian reverse-engineering avoids several important worries that have been raised about the explanatory credentials of Bayesian cognitive science: the worry that the lower levels of analysis are being ignored altogether; the challenge that the mathematical models being developed are unfalsifiable; and the charge that the terms ‘optimal’ and ‘rational’ have lost their customary normative force. But while Bayesian reverse-engineering is therefore a viable and productive research strategy, it is also no fool-proof recipe for explanatory success.
bayesian  cognition  cognitive_science  philosophy_of_science 
3 days ago
Suboptimality in Perceptual Decision Making | bioRxiv
Human perceptual decisions are often described as optimal. This view reflects recent successes of Bayesian approaches to both cognition and perception. However, claims regarding optimality have been strongly criticized for their excessive flexibility and lack of explanatory power. Rebuttals from Bayesian theorists in turn claim that critics unfairly pick on select few papers. To resolve the issue regarding the role of optimality in perceptual decision making, we review the vast literature on suboptimal performance in perceptual tasks. Specifically, we discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria, inadequate tradeoff between speed and accuracy, inappropriate confidence ratings, misweightings in cue combination, and findings related to various perceptual illusions and biases. We then extract the proposed explanations for the suboptimal behavior seen in each type of task. Critically, we show that these explanations naturally fit within an overarching Bayesian framework. Specifically, each suboptimality can be explained by alternative likelihood functions, priors, cost functions, or decision rules (LPCDs). We argue that unless the observer's likelihood functions, priors, and cost functions are known, statements about the optimality or suboptimality of decision rules are meaningless. Further, the very definition of optimal behavior is debatable and may ultimately require appeals to evolutionary history beyond the current scope of perceptual science. The field should therefore shift its focus away from optimality. We propose a "LPCD approach" to perceptual decision making that focuses exclusively on uncovering the LPCD components, without debating whether the uncovered LPCDs are "optimal" or not.

-- To appear in BBS. Wonder if Tenenbaum's group has concerted response to this paper.
cognitive_science  bayesian  cognition  rationality  critique 
3 days ago
[1810.03579] Long ties accelerate noisy threshold-based contagions
"Changes to network structure can substantially affect when and how widely new ideas, products, and conventions are adopted. In models of biological contagion, interventions that randomly rewire edges (making them "longer") accelerate spread. However, there are other models relevant to social contagion, such as those motivated by myopic best-response in games with strategic complements, in which individual's behavior is described by a threshold number of adopting neighbors above which adoption occurs (i.e., complex contagions). Recent work has argued that highly clustered, rather than random, networks facilitate spread of these complex contagions. Here we show that minor modifications of prior analyses, which make them more realistic, reverse this result. The modification is that we allow very rarely below threshold adoption, i.e., very rarely adoption occurs, where there is only one adopting neighbor. To model the trade-off between long and short edges we consider networks that are the union of cycle-power-k graphs and random graphs on n nodes. We study how the time to global spread changes as we replace the cycle edges with (random) long ties. Allowing adoptions below threshold to occur with order 1/n‾√ probability is enough to ensure that random rewiring accelerates spread. Simulations illustrate the robustness of these results to other commonly-posited models for noisy best-response behavior. We then examine empirical social networks, where we find that hypothetical interventions that (a) randomly rewire existing edges or (b) add random edges reduce time to spread compared with the original network or addition of "short", triad-closing edges, respectively. This substantially revises conclusions about how interventions change the spread of behavior, suggesting that those wanting to increase spread should induce formation of long ties, rather than triad-closing ties."
via:cshalizi  networks  contagion  teaching  for_friends 
5 days ago
Trump, the 2016 Election, and Expressions of Sexism
The amount of prejudice that people express in social situations, in private conversations, or even on public opinion surveys is not a direct reflection of their views, but rather the result of a process of suppression and justification. Accordingly, the expression of prejudice can be influenced both by a change in one’s internal cognitive calculations and also by a change in how one perceives the norms of their social environment. In this paper, I examine how the 2016 election influenced the expression of sexist viewpoints among Republicans. Specifically, I find that partisan motivated reasoning made Republicans more willing to express tolerance for sexist rhetoric when it came from Trump rather than from another source. Additionally, I show that Republicans became more willing to endorse sexist statements after the 2016 election, likely due to the fact that Trump’s victory changed their perceptions about the prevalence of sexist attitudes in American society. This increase in expressed sexism has persisted into 2018.

--For better or worse, * studies terms have infiltrated social science literature. Now, the hard part of figuring out if these terms intangible concepts can be objectively quantified as easily as the recent scholarly work suggests it can be.
us_politics  political_psychology  gender  2016  via:nyhan  epidemiology_of_representations  political_science 
10 days ago
[1809.10756] An Introduction to Probabilistic Programming
This document is designed to be a first-year graduate-level introduction to probabilistic programming. It not only provides a thorough background for anyone wishing to use a probabilistic programming system, but also introduces the techniques needed to design and build these systems. It is aimed at people who have an undergraduate-level understanding of either or, ideally, both probabilistic machine learning and programming languages.
We start with a discussion of model-based reasoning and explain why conditioning as a foundational computation is central to the fields of probabilistic machine learning and artificial intelligence. We then introduce a simple first-order probabilistic programming language (PPL) whose programs define static-computation-graph, finite-variable-cardinality models. In the context of this restricted PPL we introduce fundamental inference algorithms and describe how they can be implemented in the context of models denoted by probabilistic programs.
In the second part of this document, we introduce a higher-order probabilistic programming language, with a functionality analogous to that of established programming languages. This affords the opportunity to define models with dynamic computation graphs, at the cost of requiring inference methods that generate samples by repeatedly executing the program. Foundational inference algorithms for this kind of probabilistic programming language are explained in the context of an interface between program executions and an inference controller.
This document closes with a chapter on advanced topics which we believe to be, at the time of writing, interesting directions for probabilistic programming research; directions that point towards a tight integration with deep neural network research and the development of systems for next-generation artificial intelligence applications.
probabilistic_programming  machine_learning  tutorial  review  via:droy 
10 days ago
[1810.01605] $mathbf{h_alpha}$: An index to quantify an individual's scientific leadership
The α person is the dominant person in a group. We define the α-author of a paper as the author of the paper with the highest h-index among all the coauthors, and an α-paper of a scientist as a paper authored or coauthored by the scientist where he/she is the α-author. For most but not all papers in the literature there is only one α-author. We define the hα index of a scientist as the number of papers in the h-core of the scientist (i.e. the set of papers that contribute to the h-index of the scientist) where this scientist is the α-author. We also define the h′α index of a scientist as the number of α-papers of this scientist that have ≥ h′α citations. hα and h′α contain similar information, while h′α is conceptually more appealing it is harder to obtain from existing databases, hence of less current practical interest. We propose that the hα and/or h′α indices, or other variants discussed in the paper, are useful complements to the h-index of a scientist to quantify his/her scientific achievement, that rectify an inherent drawback of the h-index, its inability to distinguish between authors with different coauthorships patterns. A high h index in conjunction with a high hα/h ratio is a hallmark of scientific leadership.
sociology_of_science  bibliometry  networks 
10 days ago
Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online - Cuihua Shen, Mona Kasra, Wenjing Pan, Grace A Bassett, Yining Malloch, James F O’Brien, 2018
Fake or manipulated images propagated through the Web and social media have the capacity to deceive, emotionally distress, and influence public opinions and actions. Yet few studies have examined how individuals evaluate the authenticity of images that accompany online stories. This article details a 6-batch large-scale online experiment using Amazon Mechanical Turk that probes how people evaluate image credibility across online platforms. In each batch, participants were randomly assigned to 1 of 28 news-source mockups featuring a forged image, and they evaluated the credibility of the images based on several features. We found that participants’ Internet skills, photo-editing experience, and social media use were significant predictors of image credibility evaluation, while most social and heuristic cues of online credibility (e.g. source trustworthiness, bandwagon, intermediary trustworthiness) had no significant impact. Viewers’ attitude toward a depicted issue also positively influenced their credibility evaluation.
media_studies  misinformation  disinformation  online_experiments  amazon_turk  judgment_decision-making  political_psychology  via:nyhan 
16 days ago
Models and mechanisms in psychological explanation | SpringerLink
Mechanistic explanation has an impressive track record of advancing our understanding of complex, hierarchically organized physical systems, particularly biological and neural systems. But not every complex system can be understood mechanistically. Psychological capacities are often understood by providing cognitive models of the systems that underlie them. I argue that these models, while superficially similar to mechanistic models, in fact have a substantially more complex relation to the real underlying system. They are typically constructed using a range of techniques for abstracting the functional properties of the system, which may not coincide with its mechanistic organization. I describe these techniques and show that despite being non-mechanistic, these cognitive models can satisfy the normative constraints on good explanations.
explanation  philosophy_of_biology  cognitive_science  philosophy_of_science 
17 days ago
Kavanaugh is lying. His upbringing explains why. - The Washington Post
--Surely, as the chair of sociology at Columbia, the author could have crafted a better article instead of virtue signaling and pandering to the *good* people.
distrust_of_elites  institutions  sociology  misguided  WaPo 
18 days ago
Network Propaganda - Paperback - Yochai Benkler; Robert Faris; Hal Roberts - Oxford University Press
Is social media destroying democracy? Are Russian propaganda or "Fake news" entrepreneurs on Facebook undermining our sense of a shared reality? A conventional wisdom has emerged since the election of Donald Trump in 2016 that new technologies and their manipulation by foreign actors played a decisive role in his victory and are responsible for the sense of a "post-truth" moment in which disinformation and propaganda thrives.

Network Propaganda challenges that received wisdom through the most comprehensive study yet published on media coverage of American presidential politics from the start of the election cycle in April 2015 to the one year anniversary of the Trump presidency. Analysing millions of news stories together with Twitter and Facebook shares, broadcast television and YouTube, the book provides a comprehensive overview of the architecture of contemporary American political communications. Through data analysis and detailed qualitative case studies of coverage of immigration, Clinton scandals, and the Trump Russia investigation, the book finds that the right-wing media ecosystem operates fundamentally differently than the rest of the media environment.

The authors argue that longstanding institutional, political, and cultural patterns in American politics interacted with technological change since the 1970s to create a propaganda feedback loop in American conservative media. This dynamic has marginalized centre-right media and politicians, radicalized the right wing ecosystem, and rendered it susceptible to propaganda efforts, foreign and domestic. For readers outside the United States, the book offers a new perspective and methods for diagnosing the sources of, and potential solutions for, the perceived global crisis of democratic politics.

-- Open Access Title
book  yochai.benkler  misinformation  disinformation  media_studies  social_networks  political_science 
20 days ago
[1808.06581] The Deconfounded Recommender: A Causal Inference Approach to Recommendation
The goal of a recommender system is to show its users items that they will like. In forming its prediction, the recommender system tries to answer: "what would the rating be if we 'forced' the user to watch the movie?" This is a question about an intervention in the world, a causal question, and so traditional recommender systems are doing causal inference from observational data. This paper develops a causal inference approach to recommendation. Traditional recommenders are likely biased by unobserved confounders, variables that affect both the "treatment assignments" (which movies the users watch) and the "outcomes" (how they rate them). We develop the deconfounded recommender, a strategy to leverage classical recommendation models for causal predictions. The deconfounded recommender uses Poisson factorization on which movies users watched to infer latent confounders in the data; it then augments common recommendation models to correct for potential confounding bias. The deconfounded recommender improves recommendation and it enjoys stable performance against interventions on test sets.
causal_inference  machine_learning  david.blei  computaional_advertising 
21 days ago
Physical Computation - Paperback - Gualtiero Piccinini - Oxford University Press
Gualtiero Piccinini articulates and defends a mechanistic account of concrete, or physical, computation. A physical system is a computing system just in case it is a mechanism one of whose functions is to manipulate vehicles based solely on differences between different portions of the vehicles according to a rule defined over the vehicles. Physical Computation discusses previous accounts of computation and argues that the mechanistic account is better. Many kinds of computation are explicated, such as digital vs. analog, serial vs. parallel, neural network computation, program-controlled computation, and more. Piccinini argues that computation does not entail representation or information processing although information processing entails computation. Pancomputationalism, according to which every physical system is computational, is rejected. A modest version of the physical Church-Turing thesis, according to which any function that is physically computable is computable by Turing machines, is defended.
book  philosophy  computation  dynamical_system 
22 days ago
[1809.08937] Networks and the Resilience and Fall of Empires: a Macro-Comparison of the Imperium Romanum and Imperial China
This paper proposes to proceed from a rather metaphorical application of network terminology on polities and imperial formations of the past to an actual use of tools and concepts of network science. For this purpose, a well established network model of the route system in the Roman Empire and a newly created network model of the infrastructural web of Imperial China are visualised and analysed with regard to their structural properties. Findings indicate that these systems could be understood as large scale complex networks with pronounced differences in centrality and connectivity among places and a hierarchical sequence of clusters across spatial scales from the overregional to the local level. Such properties in turn would influence the cohesion and robustness of imperial networks, as is demonstrated with two tests on vulnerability to node failure and to the collapse of longdistance connectivity. Tentatively, results can be connected with actual historical dynamics and thus hint at underlying network mechanisms of large scale integration and disintegration of political formations.
networks  history  spatial_statistics  network_data_analysis  geography  for_friends  teaching  via:noahpinion 
23 days ago
Diversifying the picture of explanations in biological sciences: ways of combining topology with mechanisms | SpringerLink
Besides mechanistic explanations of phenomena, which have been seriously investigated in the last decade, biology and ecology also include explanations that pinpoint specific mathematical properties as explanatory of the explanandum under focus. Among these structural explanations, one finds topological explanations, and recent science pervasively relies on them. This reliance is especially due to the necessity to model large sets of data with no practical possibility to track the proper activities of all the numerous entities. The paper first defines topological explanations and then explains why topological explanations and mechanisms are different in principle. Then it shows that they are pervasive both in the study of networks—whose importance has been increasingly acknowledged at each level of the biological hierarchy—and in contexts where the notion of selective neutrality is crucial; this allows me to capture the difference between mechanisms and topological explanations in terms of practical modelling practices. The rest of the paper investigates how in practice mechanisms and topologies are combined. They may be articulated in theoretical structures and explanatory strategies, first through a relation of constraint, second in interlevel theories (Sect. 3), or they may condition each other (Sect. 4). Finally, I explore how a particular model can integrate mechanistic informations, by focusing on the recent practice of merging networks in ecology and its consequences upon multiscale modelling (Sect. 5).
philosophy_of_biology  explanation  networks 
24 days ago
The New Mechanical Philosophy - Stuart Glennan - Oxford University Press
The New Mechanical Philosophy argues for a new image of nature and of science--one that understands both natural and social phenomena to be the product of mechanisms, and that casts the work of science as an effort to discover and understand those mechanisms. Drawing on an expanding literature on mechanisms in physical, life, and social sciences, Stuart Glennan offers an account of the nature of mechanisms and of the models used to represent them. A key quality of mechanisms is that they are particulars - located at different places and times, with no one just like another. The crux of the scientist's challenge is to balance the complexity and particularity of mechanisms with our need for representations of them that are abstract and general.

This volume weaves together metaphysical and methodological questions about mechanisms. Metaphysically, it explores the implications of the mechanistic framework for our understanding of classical philosophical questions about the nature of objects, properties, processes, events, causal relations, natural kinds and laws of nature. Methodologically, the book explores how scientists build models to represent and understand phenomena and the mechanisms responsible for them. Using this account of representation, Glennan offers a scheme for characterizing the enormous diversity of things that scientists call mechanisms, and explores the scope and limits of mechanistic explanation.
book  philosophy_of_science  philosophy_of_biology 
4 weeks ago
Explicating Top-Down Causation Using Networks and Dynamics | Philosophy of Science: Vol 84, No 2
In many fields in the life sciences investigators refer to downward or top-down causal effects. Craver and I defended the view that such cases should be understood in terms of a constitution relation between levels in a mechanism and intralevel causal relations (occurring at any level). We did not, however, specify when entities constitute a higher-level mechanism. In this article I appeal to graph-theoretic representations of networks, now widely employed in systems biology and neuroscience, and associate mechanisms with modules that exhibit high clustering. As a result of interconnections within clusters, mechanisms often exhibit complex dynamic behaviors that constrain how individual components respond to external inputs, a central feature of top-down causation.
philosophy_of_science  networks  social_networks  dynamics  explanation  causality 
4 weeks ago
Rethinking Causality in Biological and Neural Mechanisms: Constraints and Control | SpringerLink
Existing accounts of mechanistic causation are not suited for understanding causation in biological and neural mechanisms because they do not have the resources to capture the unique causal structure of control heterarchies. In this paper, we provide a new account on which the causal powers of mechanisms are grounded by time-dependent, variable constraints. Constraints can also serve as a key bridge concept between the mechanistic approach to explanation and underappreciated work in theoretical biology that sheds light on how biological systems channel energy to actively respond to the environment in adaptive ways, perform work, and fulfill the requirements to maintain themselves far from equilibrium. We show how the framework applies to several concrete examples of control in simple organisms as well as the nervous system of complex organisms.
philosophy_of_biology  causality 
4 weeks ago
[1809.04578] Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability
Algorithmic predictions are increasingly used to aid, or in some cases supplant, human decision-making, and this development has placed new demands on the outputs of machine learning procedures. To facilitate human interaction, we desire that they output prediction functions that are in some fashion simple or interpretable. And because they influence consequential decisions, we also desire equitable prediction functions, ones whose allocations benefit (or at the least do not harm) disadvantaged groups.
We develop a formal model to explore the relationship between simplicity and equity. Although the two concepts appear to be motivated by qualitatively distinct goals, our main result shows a fundamental inconsistency between them. Specifically, we formalize a general framework for producing simple prediction functions, and in this framework we show that every simple prediction function is strictly improvable: there exists a more complex prediction function that is both strictly more efficient and also strictly more equitable. Put another way, using a simple prediction function both reduces utility for disadvantaged groups and reduces overall welfare. Our result is not only about algorithms but about any process that produces simple models, and as such connects to the psychology of stereotypes and to an earlier economics literature on statistical discrimination.
sendhil.mullainathan  algorithmic_fairness  machine_learning 
4 weeks ago
Freedom: The Holberg Lecture, 2018 by Cass R. Sunstein :: SSRN
If people have freedom of choice, do their lives go better? Under what conditions? By what criteria? Consider three distinct problems. (1) In countless situations, human beings face a serious problem of “navigability”; they do not know how to get to their preferred destination, whether the issue involves health, education, employment, or well-being in general. This problem is especially challenging for people who live under conditions of severe deprivation, but it can be significant for all of us. (2) Many of us face problems of self-control, and our decisions today endanger our own future. What we want, right now, hurts us, next year. (3) In some cases, we would actually be happy or well-off with two or more different outcomes, whether the issue involves our jobs, our diets, our city, or even our friends and partners, and the real question, on which good answers are increasingly available, is what most promotes our welfare. The evaluative problem, in such cases, is especially challenging if a decision would alter people’s identity, values, or character. Private and public institutions -- including small companies, large companies, governments – can help people to have better lives, given (1), (2), and (3). This Essay, the text of the Holberg Lecture 2018, is the basis for a different, thicker, and more elaborate treatment in a book.

-- Optimal foraging theory rediscovered by a law professor.
political_science  law  political_philosophy  book  cass.sunstein 
4 weeks ago
Nervous system-like signaling in plant defense | Science
The ability to initiate a rapid defense against biotic attacks and mechanical damage is critical for all organisms. Multicellular organisms have developed mechanisms to systemically communicate the occurrence of a wound to help them escape or defend themselves from predators. Because plants are stationary and cannot escape herbivory, they must respond with chemical defenses to deter herbivores and repair damaged tissue. On page 1112 of this issue, Toyota et al. (1) report long-distance calcium ion signaling in the model plant Arabidopsis thaliana in response to caterpillar herbivory or mechanical wounding (see the image). They uncover long-distance calcium signals that require glutamate-like receptor (GLR) channels for signal propagation. These channels are activated by extracellular glutamate, a well-known mammalian neurotransmitter and a more recently uncovered developmental signal in plants (2). In mammals, glutamate receptors are central to fast excitatory neurotransmission, which is an intriguing parallel to their role as long-distance signals in wounding and defense in plants.

-- Don't slime molds and biofilms display similar response?
comparative  bio-physics  behavior  plant_biology  bio-chemistry 
4 weeks ago
On Waste Plastics at Sea, Maria-Luiza Pedrotti Finds Unique Microbial Multitudes | Quanta Magazine
-- the scientific content in the article just squashed a core premise of one of my manuscripts but oh well,...

-- the most interesting ecosystem since I came across biofilms.
quanta_mag  microbiology  philosophy_of_biology  track_down_references  microbiome 
4 weeks ago
How Imports INCREASE GDP – Econlib
-- contains links to original sources, works, etc...and comments
globalization  macroeconomics  via:wolfers 
4 weeks ago
now publishers - Does Rape Culture Predict Rape? Evidence from U.S. Newspapers, 2000–2013
We offer the first quantitative analysis of rape culture in the United States. Observers have long worried that biased news coverage of rape — which blames victims, empathizes with perpetrators, implies consent, and questions victims' credibility — may deter victims from coming forward, and ultimately increase the incidence of rape. We present a theory of how rape culture might shape the preferences and choices of perpetrators, victims and law enforcement, and test this theory with data on news stories about rape published in U.S. newspapers between 2000 and 2013. We find that rape culture in the media predicts both the frequency of rape and its pursuit through the local criminal justice system. In jurisdictions where rape culture was more prevalent, there were more documented rape cases, but authorities were less vigilant in pursuing them.

--very strong latent causal claims, especially using an intangible variable which is really a gender studies concept. Given the status of such *found data* research, a more subdued claim should have been made.
causal_inference  gender_studies  media_studies  contemporary_culture  i_remain_skeptical  via:nyhan 
5 weeks ago
Economic losers and political winners: The rise of the radical right in Sweden | TSE
https://drive.google.com/file/d/115uMhYnCNqt_sb48R38uU4Tn3gldeaeq/view

We study the rise of the Sweden Democrats, a radical-right party that rose from negligible size in 2002 to Swedenís third largest party in 2014. We use comprehensive data to study both its politicians (supply side) and voters (demand side). All political candidates for the party can be identiÖed in register data, which also lets us aggregate individual social and economic conditions in municipalities or voting districts and relate them to the partyís vote share. We take a starting point in two key economic events: (i) a series of policy reforms in 2006-2011 that signiÖcantly widened the disposable- income gap between ìinsidersîand ìoutsidersîin the labor market, and (ii) the Önancial-crisis recession that doubled the job-loss risk for ìvulnerableî vs ìsecureîinsiders. On the supply side, the Sweden Democrats over-represent both losing groups relative to the population, whereas all other parties under-represent them, results which also hold when we disaggregate across time, subgroups, and municipalities. On the demand side, the local increase in the insider-outsider income gap, as well as the share of vulnerable insiders, are systematically associated with larger electoral gains for the Sweden Democrats. These Öndings can be given a citizen-candidate interpretation: economic losers (as we demonstrate) decrease their trust in established parties and institutions. As a result, some economic losers became Sweden-Democrat candidates, and many more supported the party electorally to obtain greater descriptive representation. This way, Swedish politics became potentially more inclusive. But the politicians elected for the Sweden Democrats score lower on expertise, moral values, and social trust ñas do their voters which made local political selection less valence oriented.

--A more traditional racial resentment PoV
https://www.nytimes.com/2018/09/06/opinion/how-the-far-right-conquered-sweden.html

-- It is plausible that both resentment of out-groups and economic factors simultaneously contributed to third party success. I can see parallels with what happened in India in the 1990s with the emergence of anti-establishment parties, which in turn can be traced to J.P. Narayan's socialist movement back in th3 70s. All in all, a nice way to analyze emergence and eventual success of third-parties in democracies.
political_economy  right-wing_populism  european_politics  via:nyhan 
5 weeks ago
The Theory Is Predictive, but Is It Complete? An Application to Human Perception of Randomness by Jon Kleinberg, Annie Liang, Sendhil Mullainathan :: SSRN
When testing a theory, we should ask not just whether its predictions match what we see in the data, but also about its “completeness”: how much of the predictable variation in the data does the theory capture? Defining completeness is conceptually challenging, but we show how methods based on machine learning can provide tractable measures of completeness. We also identify a model domain—the human perception and generation of randomness — where measures of completeness can be feasibly analyzed; from these measures we discover there is significant structure in the problem that existing theories have yet to capture.

-- I can think of other domains (e.g. neuroscience, cosmology, behavioral genetics?) where such analysis might be possible. Also, I am curious if philosophers of science (and statistics) have discussed anything similar or _superior_.
randomness  statistics  machine_learning  prediction  model_selection  ?  cognitive_science  sendhil.mullainathan 
5 weeks ago
Re-Engineering Philosophy for Limited Beings — William C. Wimsatt | Harvard University Press
Analytic philosophers once pantomimed physics: they tried to understand the world by breaking it down into the smallest possible bits. Thinkers from the Darwinian sciences now pose alternatives to this simplistic reductionism.

In this intellectual tour—essays spanning thirty years—William C. Wimsatt argues that scientists seek to atomize phenomena only when necessary in the search to understand how entities, events, and processes articulate at different levels. Evolution forms the natural world not as Laplace’s all-seeing demon but as a backwoods mechanic fixing and re-fashioning machines out of whatever is at hand. W. V. Quine’s lost search for a “desert ontology” leads instead to Wimsatt’s walk through a tropical rain forest.

This book offers a philosophy for error-prone humans trying to understand messy systems in the real world. Against eliminative reductionism, Wimsatt pits new perspectives to deal with emerging natural and social complexities. He argues that our philosophy should be rooted in heuristics and models that work in practice, not only in principle. He demonstrates how to do this with an analysis of the strengths, the limits, and a recalibration of our reductionistic and analytic methodologies. Our aims are changed and our philosophy is transfigured in the process.

https://link.springer.com/article/10.1007/s10539-011-9260-8

https://link.springer.com/article/10.1007/s10539-010-9202-x

https://link.springer.com/article/10.1007/s10539-010-9199-1
book  philosophy_of_biology 
6 weeks ago
Developing Scaffolds in Evolution, Culture, and Cognition | The MIT Press
"Scaffolding" is a concept that is becoming widely used across disciplines. This book investigates common threads in diverse applications of scaffolding, including theoretical biology, cognitive science, social theory, science and technology studies, and human development. Despite its widespread use, the concept of scaffolding is often given short shrift; the contributors to this volume, from a range of disciplines, offer a more fully developed analysis of scaffolding that highlights the role of temporal and temporary resources in development, broadly conceived, across concepts of culture, cognition, and evolution.

The book emphasizes reproduction, repeated assembly, and entrenchment of heterogeneous relations, parts, and processes as a complement to neo-Darwinism in the developmentalist tradition of conceptualizing evolutionary change. After describing an integration of theoretical perspectives that can accommodate different levels of analysis and connect various methodologies, the book discusses multilevel organization; differences (and reciprocality) between individuals and institutions as units of analysis; and perspectives on development that span brains, careers, corporations, and cultural cycles.

--link to a book review
https://link.springer.com/article/10.1007/s10441-014-9230-z

-- thoughts based on the introductory essay
Herb Simon's _Architecture of Complexity_ meets Developmental Biology. Stuff Andy Clark missed. S.J. Gould's first book philosophized.
book  philosophy_of_biology  philosophy_of_technology  cultural_evolution 
6 weeks ago
Mechanisms and the nature of causation | SpringerLink
In this paper I offer an analysis of causation based upon a theory of mechanisms-complex systems whose “internal” parts interact to produce a system's “external” behavior. I argue that all but the fundamental laws of physics can be explained by reference to mechanisms. Mechanisms provide an epistemologically unproblematic way to explain the necessity which is often taken to distinguish laws from other generalizations. This account of necessity leads to a theory of causation according to which events are causally related when there is a mechanism that connects them. I present reasons why the lack of an account of fundamental physical causation does not undermine the mechanical account.
philosophy_of_biology  causality 
6 weeks ago
The Explanatory Power of Network Models | Philosophy of Science: Vol 83, No 5
Network analysis is increasingly used to discover and represent the organization of complex systems. Focusing on examples from neuroscience in particular, I argue that whether network models explain, how they explain, and how much they explain cannot be answered for network models generally but must be answered by specifying an explanandum, by addressing how the model is applied to the system, and by specifying which kinds of relations count as explanatory.
philosophy_of_science  neuroscience  connectome  social_networks  ? 
6 weeks ago
In Search of Mechanisms: Discoveries across the Life Sciences, Craver, Darden
Neuroscientists investigate the mechanisms of spatial memory. Molecular biologists study the mechanisms of protein synthesis and the myriad mechanisms of gene regulation. Ecologists study nutrient cycling mechanisms and their devastating imbalances in estuaries such as the Chesapeake Bay. In fact, much of biology and its history involves biologists constructing, evaluating, and revising their understanding of mechanisms.

With In Search of Mechanisms, Carl F. Craver and Lindley Darden offer both a descriptive and an instructional account of how biologists discover mechanisms. Drawing on examples from across the life sciences and through the centuries, Craver and Darden compile an impressive toolbox of strategies that biologists have used and will use again to reveal the mechanisms that produce, underlie, or maintain the phenomena characteristic of living things. They discuss the questions that figure in the search for mechanisms, characterizing the experimental, observational, and conceptual considerations used to answer them, all the while providing examples from the history of biology to highlight the kinds of evidence and reasoning strategies employed to assess mechanisms. At a deeper level, Craver and Darden pose a systematic view of what biology is, of how biology makes progress, of how biological discoveries are and might be made, and of why knowledge of biological mechanisms is important for the future of the human species.
book  philosophy_of_biology 
6 weeks ago
Explaining the Brain - Hardcover - Carl F. Craver - Oxford University Press
What distinguishes good explanations in neuroscience from bad? Carl F. Craver constructs and defends standards for evaluating neuroscientific explanations that are grounded in a systematic view of what neuroscientific explanations are: descriptions of multilevel mechanisms. In developing this approach, he draws on a wide range of examples in the history of neuroscience (e.g. Hodgkin and Huxleys model of the action potential and LTP as a putative explanation for different kinds of memory), as well as recent philosophical work on the nature of scientific explanation. Readers in neuroscience, psychology, the philosophy of mind, and the philosophy of science will find much to provoke and stimulate them in this book.
book  philosophy_of_biology  neuroscience 
6 weeks ago
John Matthewson & Brett Calcott, Mechanistic models of population-level phenomena - PhilPapers
This paper is about mechanisms and models, and how they interact. In part, it is a response to recent discussion in philosophy of biology regarding whether natural selection is a mechanism. We suggest that this debate is indicative of a more general problem that occurs when scientists produce mechanistic models of populations and their behaviour. We can make sense of claims that there are mechanisms that drive population-level phenomena such as macroeconomics, natural selection, ecology, and epidemiology. But talk of mechanisms and mechanistic explanation evokes objects with well-defined and localisable parts which interact in discrete ways, while models of populations include parts and interactions that are neither local nor discrete in any actual populations. This apparent tension can be resolved by carefully distinguishing between the properties of a model and those of the system it represents. To this end, we provide an analysis that recognises the flexible relationship between a mechanistic model and its target system. In turn, this reveals a surprising feature of mechanistic representation and explanation: it can occur even when there is a mismatch between the mechanism of the model and that of its target. Our analysis reframes the debate, providing an alternative way to interpret scientists’ mechanism-talk , which initially motivated the issue. We suggest that the relevant question is not whether any population-level phenomenon such as natural selection is a mechanism, but whether it can be usefully modelled as though it were a particular type of mechanism
philosophy_of_biology 
6 weeks ago
Three kinds of new mechanism | SpringerLink
I distinguish three theses associated with the new mechanistic philosophy—concerning causation, explanation and scientific methodology. Advocates of each thesis are identified and relationships among them are outlined. I then look at some recent work on natural selection and mechanisms. Framing that debate in terms of different kinds of New Mechanism significantly affects what is at stake.
philosophy_of_biology 
6 weeks ago
Rethinking Mechanistic Explanation | Philosophy of Science: Vol 69, No S3
Philosophers of science typically associate the causal‐mechanical view of scientific explanation with the work of Railton and Salmon. In this paper I shall argue that the defects of this view arise from an inadequate analysis of the concept of mechanism. I contrast Salmon’s account of mechanisms in terms of the causal nexus with my own account of mechanisms, in which mechanisms are viewed as complex systems. After describing these two concepts of mechanism, I show how the complex‐systems approach avoids certain objections to Salmon’s account of causal‐mechanical explanation. I conclude by discussing how mechanistic explanations can provide understanding by unification.
philosophy_of_biology 
6 weeks ago
Discovering Complexity | The MIT Press
In Discovering Complexity, William Bechtel and Robert Richardson examine two heuristics that guided the development of mechanistic models in the life sciences: decomposition and localization. Drawing on historical cases from disciplines including cell biology, cognitive neuroscience, and genetics, they identify a number of "choice points" that life scientists confront in developing mechanistic explanations and show how different choices result in divergent explanatory models. Describing decomposition as the attempt to differentiate functional and structural components of a system and localization as the assignment of responsibility for specific functions to specific structures, Bechtel and Richardson examine the usefulness of these heuristics as well as their fallibility—the sometimes false assumption underlying them that nature is significantly decomposable and hierarchically organized.

When Discovering Complexity was originally published in 1993, few philosophers of science perceived the centrality of seeking mechanisms to explain phenomena in biology, relying instead on the model of nomological explanation advanced by the logical positivists (a model Bechtel and Richardson found to be utterly inapplicable to the examples from the life sciences in their study). Since then, mechanism and mechanistic explanation have become widely discussed. In a substantive new introduction to this MIT Press edition of their book, Bechtel and Richardson examine both philosophical and scientific developments in research on mechanistic models since 1993.
philosophy_of_biology  book 
6 weeks ago
Thinking about Mechanisms | Philosophy of Science: Vol 67, No 1
The concept of mechanism is analyzed in terms of entities and activities, organized such that they are productive of regular changes. Examples show how mechanisms work in neurobiology and molecular biology. Thinking in terms of mechanisms provides a new framework for addressing many traditional philosophical issues: causality, laws, explanation, reduction, and scientific change.
philosophy_of_biology 
6 weeks ago
Robert Skipper & Roberta Millstein, Thinking about evolutionary mechanisms: Natural selection - PhilPapers
This paper explores whether natural selection, a putative evolutionary mechanism, and a main one at that, can be characterized on either of the two dominant conceptions of mechanism, due to Glennan and the team of Machamer, Darden, and Craver, that constitute the “new mechanistic philosophy.” The results of the analysis are that neither of the dominant conceptions of mechanism adequately captures natural selection. Nevertheless, the new mechanistic philosophy possesses the resources for an understanding of natural selection under the rubric.
philosophy_of_biology 
6 weeks ago
I Worked With Avital Ronell. I Believe Her Accuser. - The Chronicle of Higher Education
https://bullybloggers.wordpress.com/2018/08/18/the-full-catastrophe/

https://www.newyorker.com/news/our-columnists/an-nyu-sexual-harassment-case-has-spurred-a-necessary-conversation-about-metoo

-- links too many to list, containing opinions of Zizek, Butler defending Ronell. I will be charitable and say,"Nice magic trick, guys!". Mostly a waste of time; useful if you are into learning argot of 21st century post-modernist, gender, queer and intersectional thought.
academia  education  university  bureaucracy  via:cottom 
6 weeks ago
Fifty Inventions That Shaped the Modern Economy: Harford, Tim: Hardcover: 9780735216136: Powell's Books
Fifty Inventions That Shaped the Modern Economy paints an epic picture of change in an intimate way by telling the stories of the tools, people, and ideas that had far-reaching consequences for all of us. From the plough to artificial intelligence, from Gillette's disposable razor to IKEA's Billy bookcase, bestselling author and Financial Times columnist Tim Harford recounts each invention's own curious, surprising, and memorable story.

Invention by invention, Harford reflects on how we got here and where we might go next. He lays bare often unexpected connections: how the bar code undermined family corner stores, and why the gramophone widened inequality. In the process, he introduces characters who developed some of these inventions, profited from them, and were ruined by them, as he traces the principles that helped explain their transformative effects. The result is a wise and witty book of history, economics, and biography.
book  economics  technology  via:wolfers 
6 weeks ago
Godfrey-Smith, P.: Philosophy of Biology (Hardcover, Paperback and eBook) | Princeton University Press
This is a concise, comprehensive, and accessible introduction to the philosophy of biology written by a leading authority on the subject. Geared to philosophers, biologists, and students of both, the book provides sophisticated and innovative coverage of the central topics and many of the latest developments in the field. Emphasizing connections between biological theories and other areas of philosophy, and carefully explaining both philosophical and biological terms, Peter Godfrey-Smith discusses the relation between philosophy and science; examines the role of laws, mechanistic explanation, and idealized models in biological theories; describes evolution by natural selection; and assesses attempts to extend Darwin's mechanism to explain changes in ideas, culture, and other phenomena. Further topics include functions and teleology, individuality and organisms, species, the tree of life, and human nature. The book closes with detailed, cutting-edge treatments of the evolution of cooperation, of information in biology, and of the role of communication in living systems at all scales.

-- I have major issues with the last chapter of the book but is an adequate book. Also, the lack of chapters on cybernetics and absence of adequate discussions on self-organization and emergence. Also missing are literate discussions of non-equilibrium statistical mechanics and its role in the genesis of biological complexity. Overall, the book left me with the feeling of "where is the other half of the book?". But with a suitable collection of articles on missing topics, the book can serve as a text for a first course.
book  peter.godfrey-smith  philosophy_of_biology  teaching 
7 weeks ago
Opinion | The Religion of Whiteness Becomes a Suicide Cult - The New York Times
Mishra is frustratingly repetitive, in his arguments, style, rhetoric and personal attacks.
post-colonialism  right-wing_populism  global_politics  NYTimes 
7 weeks ago
Rapid-onset gender dysphoria in adolescents and young adults: A study of parental reports
Methods

Recruitment information with a link to a 90-question survey, consisting of multiple-choice, Likert-type and open-ended questions, was placed on three websites where parents had reported rapid onsets of gender dysphoria. Website moderators and potential participants were encouraged to share the recruitment information and link to the survey with any individuals or communities that they thought might include eligible participants to expand the reach of the project through snowball sampling techniques. Data were collected anonymously via SurveyMonkey. Quantitative findings are presented as frequencies, percentages, ranges, means and/or medians. Open-ended responses from two questions were targeted for qualitative analysis of themes.

Results

There were 256 parent-completed surveys that met study criteria. The adolescent and young adult (AYA) children described were predominantly female sex at birth (82.8%) with a mean age of 16.4 years. Forty-one percent of the AYAs had expressed a non-heterosexual sexual orientation before identifying as transgender. Many (62.5%) of the AYAs had been diagnosed with at least one mental health disorder or neurodevelopmental disability prior to the onset of their gender dysphoria (range of the number of pre-existing diagnoses 0–7). In 36.8% of the friendship groups described, the majority of the members became transgender-identified. The most likely outcomes were that AYA mental well-being and parent-child relationships became worse since AYAs “came out”. AYAs expressed a range of behaviors that included: expressing distrust of non-transgender people (22.7%); stopping spending time with non-transgender friends (25.0%); trying to isolate themselves from their families (49.4%), and only trusting information about gender dysphoria from transgender sources (46.6%).

Conclusion

Rapid-onset gender dysphoria (ROGD) describes a phenomenon where the development of gender dysphoria is observed to begin suddenly during or after puberty in an adolescent or young adult who would not have met criteria for gender dysphoria in childhood. ROGD appears to represent an entity that is distinct from the gender dysphoria observed in individuals who have previously been described as transgender. The worsening of mental well-being and parent-child relationships and behaviors that isolate AYAs from their parents, families, non-transgender friends and mainstream sources of information are particularly concerning. More research is needed to better understand this phenomenon, its implications and scope.

--- and the administrative and bureaucratic stupidity that followed
https://news.brown.edu/articles/2018/08/gender
debates  nature-nurture  social_influence  contagion  social_networks  sociology_of_science  gender_studies  social_construction_of_ignorance  university  academia  bureaucracy 
7 weeks ago
[1803.02047] Observation of topological phenomena in a programmable lattice of 1,800 qubits
The celebrated work of Berezinskii, Kosterlitz and Thouless in the 1970s revealed exotic phases of matter governed by topological properties of low-dimensional materials such as thin films of superfluids and superconductors. Key to this phenomenon is the appearance and interaction of vortices and antivortices in an angular degree of freedom---typified by the classical XY model---due to thermal fluctuations. In the 2D Ising model this angular degree of freedom is absent in the classical case, but with the addition of a transverse field it can emerge from the interplay between frustration and quantum fluctuations. Consequently a Kosterlitz-Thouless (KT) phase transition has been predicted in the quantum system by theory and simulation. Here we demonstrate a large-scale quantum simulation of this phenomenon in a network of 1,800 in situ programmable superconducting flux qubits arranged in a fully-frustrated square-octagonal lattice. Essential to the critical behavior, we observe the emergence of a complex order parameter with continuous rotational symmetry, and the onset of quasi-long-range order as the system approaches a critical temperature. We use a simple but previously undemonstrated approach to statistical estimation with an annealing-based quantum processor, performing Monte Carlo sampling in a chain of reverse quantum annealing protocols. Observations are consistent with classical simulations across a range of Hamiltonian parameters. We anticipate that our approach of using a quantum processor as a programmable magnetic lattice will find widespread use in the simulation and development of exotic materials.

rdcu.be/4ZH0

--Wow, 1800 qubits!Did not know one could do this?
7 weeks ago
Complexity revisited | SpringerLink
I look back at my 1996 book Complexity and the Function of Mind in Nature, responding to papers by Pamela Lyon, Fred Keijzer and Argyris Arnellos, and Matt Grove.

-- A revised and more subdued take on his _environmental complexity thesis_. I like this better.
peter.godfrey-smith  philosophy_of_biology  cognition 
8 weeks ago
Fanning the Flames of Hate: Social Media and Hate Crime by Karsten Müller, Carlo Schwarz :: SSRN
This paper investigates the link between social media and hate crime using Facebook data. We study the case of Germany, where the recently emerged right-wing party Alternative für Deutschland (AfD) has developed a major social media presence. We show that right-wing anti-refugee sentiment on Facebook predicts violent crimes against refugees in otherwise similar municipalities with higher social media usage. To further establish causality, we exploit exogenous variation in major internet and Facebook outages, which fully undo the correlation between social media and hate crime. We further find that the effect decreases with distracting news events; increases with user network interactions; and does not hold for posts unrelated to refugees. Our results suggest that social media can act as a propagation mechanism between online hate speech and real-life violent crime.
media_studies  social_influence  platform_studies  violence  mediation_analysis  ?  causal_inference  i_remain_skeptical 
8 weeks ago
Changing climates of conflict: A social network experiment in 56 schools | PNAS
Theories of human behavior suggest that individuals attend to the behavior of certain people in their community to understand what is socially normative and adjust their own behavior in response. An experiment tested these theories by randomizing an anticonflict intervention across 56 schools with 24,191 students. After comprehensively measuring every school’s social network, randomly selected seed groups of 20–32 students from randomly selected schools were assigned to an intervention that encouraged their public stance against conflict at school. Compared with control schools, disciplinary reports of student conflict at treatment schools were reduced by 30% over 1 year. The effect was stronger when the seed group contained more “social referent” students who, as network measures reveal, attract more student attention. Network analyses of peer-to-peer influence show that social referents spread perceptions of conflict as less socially normative.
social_influence  social_networks  social_psychology  norms  intervention  causal_inference 
8 weeks ago
[1805.10615] A Local Information Criterion for Dynamical Systems
Encoding a sequence of observations is an essential task with many applications. The encoding can become highly efficient when the observations are generated by a dynamical system. A dynamical system imposes regularities on the observations that can be leveraged to achieve a more efficient code. We propose a method to encode a given or learned dynamical system. Apart from its application for encoding a sequence of observations, we propose to use the compression achieved by this encoding as a criterion for model selection. Given a dataset, different learning algorithms result in different models. But not all learned models are equally good. We show that the proposed encoding approach can be used to choose the learned model which is closer to the true underlying dynamics. We provide experiments for both encoding and model selection, and theoretical results that shed light on why the approach works.
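
-- The selection criterion is essentially two-part MDL: prefer the learned dynamics whose one-step residuals compress best. A toy stand-in for their encoder (Gaussian residual code; the paper's actual scheme is more careful):

import numpy as np

def code_length_bits(step_fn, xs, n_params, param_bits=32):
    """Two-part code length: bits to describe the model's parameters plus
    bits for the one-step prediction residuals under a Gaussian residual
    code (a crude stand-in for the paper's encoding of the dynamics)."""
    xs = np.asarray(xs, dtype=float)
    resid = xs[1:] - np.array([step_fn(x) for x in xs[:-1]])
    sigma = resid.std() + 1e-12
    nll_nats = np.sum(0.5 * np.log(2 * np.pi * sigma**2) + resid**2 / (2 * sigma**2))
    return n_params * param_bits + nll_nats / np.log(2)

# model selection: prefer the learned dynamics with the shorter total code, e.g.
# best = min(models, key=lambda m: code_length_bits(m.step, xs, m.n_params))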
model_selection  dynamical_system  machine_learning  bernhard.schölkopf 
8 weeks ago
[1611.06221] Theoretical Aspects of Cyclic Structural Causal Models
Structural causal models (SCMs), also known as (non-parametric) structural equation models (SEMs), are widely used for causal modeling purposes. A large body of theoretical results is available for the special case in which cycles are absent (i.e., acyclic SCMs, also known as recursive SEMs). However, in many application domains cycles are abundantly present, for example in the form of feedback loops. In this paper, we provide a general and rigorous theory of cyclic SCMs. The paper consists of two parts: the first part gives a rigorous treatment of structural causal models, dealing with measure-theoretic and other complications that arise in the presence of cycles. In contrast with the acyclic case, in cyclic SCMs solutions may no longer exist, or if they exist, they may no longer be unique, or even measurable in general. We give several sufficient and necessary conditions for the existence of (unique) measurable solutions. We show how causal reasoning proceeds in these models and how this differs from the acyclic case. Moreover, we give an overview of the Markov properties that hold for cyclic SCMs. In the second part, we address the question of how one can marginalize an SCM (possibly with cycles) to a subset of the endogenous variables. We show that under a certain condition, one can effectively remove a subset of the endogenous variables from the model, leading to a more parsimonious marginal SCM that preserves the causal and counterfactual semantics of the original SCM on the remaining variables. Moreover, we show how the marginalization relates to the latent projection and to latent confounders, i.e. latent common causes.
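
-- The existence/uniqueness issue is already visible in the smallest linear cycle: X = aY + E_X, Y = bX + E_Y has the unique solution X = (E_X + a*E_Y)/(1 - ab) exactly when ab != 1. A quick numerical check (toy example of mine, not from the paper):

import numpy as np

def solve_linear_scm(B, E):
    """Solve X = B X + E for a linear SCM with coefficient matrix B.
    A unique solution exists iff (I - B) is invertible; in the
    two-variable cycle above, det(I - B) = 1 - ab."""
    I = np.eye(len(E))
    if abs(np.linalg.det(I - B)) < 1e-12:
        raise ValueError("no unique solution: the cyclic SCM is degenerate")
    return np.linalg.solve(I - B, E)

a, b = 0.5, 0.5
X = solve_linear_scm(np.array([[0, a], [b, 0]]), np.array([1.0, 2.0]))
# with ab = 1 (e.g. a = b = 1) the same call raises: solutions fail to exist or are non-unique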
graphical_models  causal_inference  bernhard.schölkopf 
8 weeks ago
[1703.06856] Counterfactual Fairness
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
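
-- One way to get a predictor satisfying the definition, loosely in the spirit of the paper's additive-error construction (the assumption X = g(A) + U with latent U independent of A is mine, for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

def counterfactually_fair_fit(X, A, y):
    """Abduction: estimate the latent U as the residual of X after
    regressing out the protected attribute A; then predict y from U
    alone, so the prediction does not change under counterfactual
    interventions on A (valid only under the additive assumption)."""
    A = np.asarray(A, dtype=float).reshape(-1, 1)
    U = X - LinearRegression().fit(A, X).predict(A)
    return LinearRegression().fit(U, y), U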
algorithmic_fairness  causal_inference  machine_learning 
8 weeks ago
[1706.02744] Avoiding Discrimination through Causal Reasoning
Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.
Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.
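
-- Their "proxy discrimination" criterion asks that the distribution of the prediction be invariant under interventions on the proxy. A toy SCM (mine, not the paper's) where the check fails:

import numpy as np

rng = np.random.default_rng(0)

def predict(n, do_proxy=None):
    """Toy linear SCM: A -> P (proxy) -> X (feature) -> prediction.
    The criterion asks that the prediction's distribution be unchanged
    under do(P = p) for every value p."""
    A = rng.integers(0, 2, n)
    P = A + 0.1 * rng.normal(size=n) if do_proxy is None else np.full(n, float(do_proxy))
    X = 2.0 * P + rng.normal(size=n)
    return 0.7 * X  # a fixed predictor that uses X, hence the proxy

# mean prediction shifts between do(P=0) and do(P=1): proxy discrimination
print(predict(10_000, do_proxy=0).mean(), predict(10_000, do_proxy=1).mean())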
algorithmic_fairness  causal_inference  machine_learning  bernhard.schölkopf 
8 weeks ago
[1806.02380] Causal Interventions for Fairness
Most approaches in algorithmic fairness constrain machine learning methods so the resulting predictions satisfy one of several intuitive notions of fairness. While this may help private companies comply with non-discrimination laws or avoid negative publicity, we believe it is often too little, too late. By the time the training data is collected, individuals in disadvantaged groups have already suffered from discrimination and lost opportunities due to factors out of their control. In the present work we focus instead on interventions such as a new public policy, and in particular, how to maximize their positive effects while improving the fairness of the overall system. We use causal methods to model the effects of interventions, allowing for potential interference--each individual's outcome may depend on who else receives the intervention. We demonstrate this with an example of allocating a budget of teaching resources using a dataset of schools in New York City.
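
-- A caricature of the allocation problem with interference: each unit's gain from treatment spills over to its untreated neighbours, and a fixed budget is assigned greedily. Purely illustrative (the linear interference model and greedy rule are mine, not the paper's estimator):

import numpy as np

def greedy_allocate(adj, direct, spill, budget):
    """Greedy budget allocation under interference: treating unit i yields
    its direct effect plus a spillover to each currently untreated
    neighbour (adj = adjacency matrix, direct = per-unit direct effects)."""
    treated = np.zeros(len(direct), dtype=bool)
    for _ in range(budget):
        untreated_nbrs = adj @ (~treated).astype(float)
        gains = np.where(treated, -np.inf, direct + spill * untreated_nbrs)
        treated[int(np.argmax(gains))] = True
    return treated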
algorithmic_fairness  causal_inference  machine_learning 
8 weeks ago
[1808.00023] The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes---like race, gender, and their proxies---are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.
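
-- The two outcome-based criteria in their taxonomy are cheap to check on held-out data; anti-classification is a property of the model's inputs and has to be audited separately. A minimal sketch (function and argument names are mine):

import numpy as np

def fairness_report(y, yhat, a, risk, n_bins=10):
    """Classification parity: compare false-positive rates across groups.
    Calibration: compare outcome rates within each risk-score bin across
    groups. (y, yhat: binary arrays; risk in [0, 1]; a: group labels.)"""
    bins = np.minimum((np.asarray(risk) * n_bins).astype(int), n_bins - 1)
    report = {}
    for g in np.unique(a):
        m = np.asarray(a) == g
        neg = m & (y == 0)
        report[g] = {
            "fpr": yhat[neg].mean() if neg.any() else float("nan"),
            "calibration": [y[m & (bins == b)].mean() if (m & (bins == b)).any()
                            else float("nan") for b in range(n_bins)],
        }
    return report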
machine_learning  algorithms  bias  ethics  privacy  review  for_friends 
8 weeks ago