The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism | SpringerLink
"Science strives for coherence. For example, the findings from climate science form a highly coherent body of knowledge that is supported by many independent lines of evidence: greenhouse gas (GHG) emissions from human economic activities are causing the global climate to warm and unless GHG emissions are drastically reduced in the near future, the risks from climate change will continue to grow and major adverse consequences will become unavoidable. People who oppose this scientific body of knowledge because the implications of cutting GHG emissions—such as regulation or increased taxation—threaten their worldview or livelihood cannot provide an alternative view that is coherent by the standards of conventional scientific thinking. Instead, we suggest that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions (or any other body of well-established scientific knowledge) oppose whatever inconvenient finding they are confronting in piece-meal fashion, rather than systematically, and without considering the implications of this rejection to the rest of the relevant scientific theory and findings. Hence, claims that the globe “is cooling” can coexist with claims that the “observed warming is natural” and that “the human influence does not matter because warming is good for us.” Coherence between these mutually contradictory opinions can only be achieved at a highly abstract level, namely that “something must be wrong” with the scientific evidence in order to justify a political position against climate change mitigation. This high-level coherence accompanied by contradictory subordinate propositions is a known attribute of conspiracist ideation, and conspiracism may be implicated when people reject well-established scientific propositions."
to:NB  philosophy_of_science  conspiracy_theories  climate_change  deceiving_us_has_become_an_industrial_process 
Network representation and complex systems | SpringerLink
"In this article, network science is discussed from a methodological perspective, and two central theses are defended. The first is that network science exploits the very properties that make a system complex. Rather than using idealization techniques to strip those properties away, as is standard practice in other areas of science, network science brings them to the fore, and uses them to furnish new forms of explanation. The second thesis is that network representations are particularly helpful in explaining the properties of non-decomposable systems. Where part-whole decomposition is not possible, network science provides a much-needed alternative method of compressing information about the behavior of complex systems, and does so without succumbing to problems associated with combinatorial explosion. The article concludes with a comparison between the uses of network representation analyzed in the main discussion, and an entirely distinct use of network representation that has recently been discussed in connection with mechanistic modeling."
to:NB  philosophy_of_science  networks  complexity  to_be_shot_after_a_fair_trial 
Muhanna, E.: The World in a Book: Al-Nuwayri and the Islamic Encyclopedic Tradition (Hardcover and eBook) | Princeton University Press
"Shihab al-Din al-Nuwayri was a fourteenth-century Egyptian polymath and the author of one of the greatest encyclopedias of the medieval Islamic world—a thirty-one-volume work entitled The Ultimate Ambition in the Arts of Erudition. A storehouse of knowledge, this enormous book brought together materials on nearly every conceivable subject, from cosmology, zoology, and botany to philosophy, poetry, ethics, statecraft, and history. Composed in Cairo during the golden age of Islamic encyclopedic activity, the Ultimate Ambition was one of hundreds of large-scale compendia, literary anthologies, dictionaries, and chronicles produced at this time—an effort that was instrumental in organizing the archive of medieval Islamic thought.
"In the first study of this landmark work in a European language, Elias Muhanna explores its structure and contents, sources and influences, and reception and impact in the Islamic world and Europe. He sheds new light on the rise of encyclopedic literature in the learned cities of the Mamluk Empire and situates this intellectual movement alongside other encyclopedic traditions in the ancient, medieval, Renaissance, and Enlightenment periods. He also uncovers al-Nuwayri’s world: a scene of bustling colleges, imperial chanceries, crowded libraries, and religious politics."
to:NB  books:noted  history_of_ideas  islamic_civilization  medieval_eurasian_history  encyclopedias 
3 days ago
Harper, K.: The Fate of Rome: Climate, Disease, and the End of an Empire (Hardcover and eBook) | Princeton University Press
"Here is the monumental retelling of one of the most consequential chapters of human history: the fall of the Roman Empire. The Fate of Rome is the first book to examine the catastrophic role that climate change and infectious diseases played in the collapse of Rome’s power—a story of nature’s triumph over human ambition.
"Interweaving a grand historical narrative with cutting-edge climate science and genetic discoveries, Kyle Harper traces how the fate of Rome was decided not just by emperors, soldiers, and barbarians but also by volcanic eruptions, solar cycles, climate instability, and devastating viruses and bacteria. He takes readers from Rome’s pinnacle in the second century, when the empire seemed an invincible superpower, to its unraveling by the seventh century, when Rome was politically fragmented and materially depleted. Harper describes how the Romans were resilient in the face of enormous environmental stress, until the besieged empire could no longer withstand the combined challenges of a “little ice age” and recurrent outbreaks of bubonic plague.
"A poignant reflection on humanity’s intimate relationship with the environment, The Fate of Rome provides a sweeping account of how one of history’s greatest civilizations encountered and endured, yet ultimately succumbed to the cumulative burden of nature’s violence. The example of Rome is a timely reminder that climate change and germ evolution have shaped the world we inhabit—in ways that are surprising and profound."

--- Suspiciously few reviews noted from classicists.
to:NB  books:noted  ancient_history  roman_empire  plagues_and_peoples  climate_change 
3 days ago
Haskel, J. and Westlake, S.: Capitalism without Capital: The Rise of the Intangible Economy (Hardcover and eBook) | Princeton University Press
"Early in the twenty-first century, a quiet revolution occurred. For the first time, the major developed economies began to invest more in intangible assets, like design, branding, R&D, and software, than in tangible assets, like machinery, buildings, and computers. For all sorts of businesses, from tech firms and pharma companies to coffee shops and gyms, the ability to deploy assets that one can neither see nor touch is increasingly the main source of long-term success.
"But this is not just a familiar story of the so-called new economy. Capitalism without Capital shows that the growing importance of intangible assets has also played a role in some of the big economic changes of the last decade. The rise of intangible investment is, Jonathan Haskel and Stian Westlake argue, an underappreciated cause of phenomena from economic inequality to stagnating productivity.
"Haskel and Westlake bring together a decade of research on how to measure intangible investment and its impact on national accounts, showing the amount different countries invest in intangibles, how this has changed over time, and the latest thinking on how to assess this. They explore the unusual economic characteristics of intangible investment, and discuss how these features make an intangible-rich economy fundamentally different from one based on tangibles."
to:NB  books:noted  economics  economic_history  market_failures_in_everything 
3 days ago
Lending, M.: Plaster Monuments: Architecture and the Power of Reproduction (Hardcover) | Princeton University Press
"We are taught to believe in originals. In art and architecture in particular, original objects vouch for authenticity, value, and truth, and require our protection and preservation. The nineteenth century, however, saw this issue differently. In a culture of reproduction, plaster casts of building fragments and architectural features were sold throughout Europe and America and proudly displayed in leading museums. The first comprehensive history of these full-scale replicas, Plaster Monuments examines how they were produced, marketed, sold, and displayed, and how their significance can be understood today.
"Plaster Monuments unsettles conventional thinking about copies and originals. As Mari Lending shows, the casts were used to restore wholeness to buildings that in reality lay in ruin, or to isolate specific features of monuments to illustrate what was typical of a particular building, style, or era. Arranged in galleries and published in exhibition catalogues, these often enormous objects were staged to suggest the sweep of history, synthesizing structures from vastly different regions and time periods into coherent narratives. While architectural plaster casts fell out of fashion after World War I, Lending brings the story into the twentieth century, showing how Paul Rudolph incorporated historical casts into the design for the Yale Art and Architecture building, completed in 1963.
"Drawing from a broad archive of models, exhibitions, catalogues, and writings from architects, explorers, archaeologists, curators, novelists, and artists, Plaster Monuments tells the fascinating story of a premodernist aesthetic and presents a new way of thinking about history’s artifacts."
to:NB  books:noted  art_history  the_work_of_art_in_the_age_of_mechanical_reproduction 
3 days ago
Machine learning, social learning and the governance of self-driving cars --- Social Studies of Science - Jack Stilgoe, 2017
"Self-driving cars, a quintessentially ‘smart’ technology, are not born smart. The algorithms that control their movements are learning as the technology emerges. Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’ Focusing on the successes and failures of social learning around the much-publicized crash of a Tesla Model S in 2016, I argue that trajectories and rhetorics of machine learning in transport pose a substantial governance challenge. ‘Self-driving’ or ‘autonomous’ cars are misnamed. As with other technologies, they are shaped by assumptions about social needs, solvable problems, and economic opportunities. Governing these technologies in the public interest means improving social learning by constructively engaging with the contingencies of machine learning."

--- The fact that I could almost have written the abstract from just the journal and the title suggests there is little new here, but the last tag applies.
to:NB  machine_learning  robots_and_robotics  sociology  to_be_shot_after_a_fair_trial 
4 days ago
Stationary subspace analysis of nonstationary processes - Sundararajan - 2017 - Journal of Time Series Analysis - Wiley Online Library
"Stationary subspace analysis (SSA) is a recent technique for finding linear transformations of nonstationary processes that are stationary in the limited sense that the first two moments or means and lag-0 covariances are time-invariant. It finds a matrix that projects the nonstationary data onto a stationary subspace by minimizing a Kullback–Leibler divergence between Gaussian distributions measuring the nonconstancy of the means and covariances across several segments. We propose an SSA procedure for general multivariate, second-order nonstationary processes. It relies on the asymptotic uncorrelatedness of the discrete Fourier transform of a stationary time series to define a measure of departure from stationarity, which is then minimized to find the stationary subspace. The dimension of the subspace is estimated using a sequential testing procedure, and its asymptotic properties are discussed. We illustrate the broader applicability and better performance of our method in comparison to existing SSA methods through simulations and discuss an application in analyzing electroencephalogram (EEG) data from brain–computer interface (BCI) experiments."
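--- The segment-based idea behind classical SSA (which this paper replaces with a Fourier-based measure) is simple enough to sketch. A toy illustration, with everything (mixing angle, drift, segment count) invented: score a candidate projection by how much its segment means and variances vary, and search for the projection minimizing that score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D signal: source 0 is stationary, source 1 has a drifting mean;
# both are then mixed by a rotation through theta_true.
n, n_seg = 3000, 10
src = np.column_stack([
    rng.normal(0.0, 1.0, n),                     # stationary source
    rng.normal(np.linspace(-3.0, 3.0, n), 1.0),  # nonstationary source
])
theta_true = 0.7
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
x = src @ R.T

def nonstationarity(w):
    """How much the first two moments of the projection w.x vary
    across contiguous segments (the quantity SSA tries to zero out)."""
    proj = x @ w
    segs = np.array_split(proj, n_seg)
    means = np.array([s.mean() for s in segs])
    variances = np.array([s.var() for s in segs])
    return means.var() + variances.var()

# One-dimensional stationary subspace: grid-search the projection angle.
angles = np.linspace(0.0, np.pi, 1800, endpoint=False)
scores = [nonstationarity(np.array([np.cos(a), np.sin(a)])) for a in angles]
best = angles[int(np.argmin(scores))]
w_hat = np.array([np.cos(best), np.sin(best)])
```

In this parameterization the stationary direction sits at the mixing angle itself, so `best` recovers roughly 0.7; real SSA replaces the grid search with optimization over projection matrices.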
to:NB  time_series  stochastic_processes  non-stationarity  fourier_analysis 
5 days ago
The impossibility of intelligence explosion – François Chollet – Medium
Sensible, but disappointing that it has to be said.
(Even shorter: I. J. Good apparently forgot that a monotonically increasing sequence can have a finite limit!)
artificial_intelligence  debunking  rapture_for_nerds 
7 days ago
Knysh, A.: Sufism: A New History of Islamic Mysticism | Princeton University Press
"After centuries as the most important ascetic-mystical strand of Islam, Sufism saw a sharp decline in the twentieth century, only to experience a stunning revival in recent decades. In this comprehensive new history of Sufism from the earliest centuries of Islam to today, Alexander Knysh, a leading expert on the subject, reveals the tradition in all its richness.
"Knysh explores how Sufism has been viewed by both insiders and outsiders since its inception. He examines the key aspects of Sufism, from definitions and discourses to leadership, institutions, and practices. He devotes special attention to Sufi approaches to the Qur’an, drawing parallels with similar uses of scripture in Judaism and Christianity. He traces how Sufism grew from a set of simple moral-ethical precepts into a sophisticated tradition with professional Sufi masters (shaykhs) who became powerful players in Muslim public life but whose authority was challenged by those advocating the equality of all Muslims before God. Knysh also examines the roots of the ongoing conflict between the Sufis and their fundamentalist critics, the Salafis—a major fact of Muslim life today."
to:NB  books:noted  islam  islamic_civilization  history_of_religion  sufism  mysticism 
10 days ago
Rohling, E.: The Oceans: A Deep History (Hardcover and eBook) | Princeton University Press
"It has often been said that we know more about the moon than we do about our own oceans. In fact, we know a great deal more about the oceans than many people realize. Scientists know that our actions today are shaping the oceans and climate of tomorrow—and that if we continue to act recklessly, the consequences will be dire. In this timely and accessible book, Eelco Rohling traces the 4.4 billion-year history of Earth’s oceans while also shedding light on the critical role they play in our planet’s climate system.
"Beginning with the formation of primeval Earth and the earliest appearance of oceans, Rohling takes readers on a journey through prehistory to the present age, vividly describing the major events in the ocean’s evolution—from snowball and greenhouse Earth to the end-Permian mass extinction, the breakup of the Pangaea supercontinent, and the changing climate of today. Along the way, he explores the close interrelationships of the oceans, climate, solid Earth processes, and life, using the context of Earth and ocean history to provide perspective on humankind’s impacts on the health and habitability of our planet—and on what the future may hold for us."
to:NB  books:noted  popular_science  oceans  geology 
10 days ago
Mulgan, G.: Big Mind: How Collective Intelligence Can Change Our World (Hardcover and eBook) | Princeton University Press
"A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies.
"Geoff Mulgan explores how collective intelligence has to be consciously organized and orchestrated in order to harness its powers. He looks at recent experiments mobilizing millions of people to solve problems, and at groundbreaking technology like Google Maps and Dove satellites. He also considers why organizations full of smart people and machines can make foolish mistakes—from investment banks losing billions to intelligence agencies misjudging geopolitical events—and shows how to avoid them.
"Highlighting differences between environments that stimulate intelligence and those that blunt it, Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we’ll need radically new professions, institutions, and ways of thinking."
to:NB  books:noted  collective_cognition  popular_social_science  re:democratic_cognition  to_be_shot_after_a_fair_trial 
10 days ago
Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models | Neural Computation | MIT Press Journals
"A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a set of linear filters and a point nonlinearity and are conditionally Poisson spiking. They have desirable statistical properties for fitting and have been widely used to analyze spike trains from electrophysiological recordings. However, the dynamical repertoire of GLMs has not been systematically compared to that of real neurons. Here we show that GLMs can reproduce a comprehensive suite of canonical neural response behaviors, including tonic and phasic spiking, bursting, spike rate adaptation, type I and type II excitation, and two forms of bistability. GLMs can also capture stimulus-dependent changes in spike timing precision and reliability that mimic those observed in real neurons, and can exhibit varying degrees of stochasticity, from virtually deterministic responses to greater-than-Poisson variability. These results show that Poisson GLMs can exhibit a wide range of dynamic spiking behaviors found in real neurons, making them well suited for qualitative dynamical as well as quantitative statistical studies of single-neuron and population response properties."
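--- The model class is concrete enough to simulate in a few lines. A minimal sketch of one Poisson GLM neuron (filter shapes and rates invented for illustration, not fit to any recording): a stimulus filter plus a negative post-spike filter feed an exponential nonlinearity, giving conditionally Poisson spiking with refractoriness.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                                  # 1 ms bins
T = 5000
stim = rng.normal(0.0, 1.0, T)              # white-noise stimulus

# Illustrative (made-up) GLM ingredients:
k = 0.5 * np.exp(-np.arange(20) / 5.0)      # stimulus filter, 20 ms
h = -8.0 * np.exp(-np.arange(10) / 2.0)     # post-spike filter: refractoriness
b = np.log(20.0)                            # baseline ~20 spikes/s

spikes = np.zeros(T, dtype=int)
for t in range(T):
    s = stim[max(0, t - len(k)):t][::-1]    # most recent stimulus sample first
    sp = spikes[max(0, t - len(h)):t][::-1] # most recent spike bins first
    rate = np.exp(b + k[:len(s)] @ s + h[:len(sp)] @ sp)  # intensity (Hz)
    spikes[t] = rng.poisson(rate * dt)

# The negative post-spike filter should suppress immediate re-spiking:
p_after = (spikes[1:][spikes[:-1] > 0] > 0).mean()
p_overall = (spikes > 0).mean()
```

Swapping in other (made-up) filter shapes for `h` is how the paper's "dynamical repertoire" arises: a delayed positive lobe gives bursting, a slow negative lobe gives spike-rate adaptation, and so on.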
to:NB  neural_data_analysis  statistics  to_teach:undergrad-ADA  pillow.jonathan 
10 days ago
Bozeman, B. and Youtie, J.: The Strength in Numbers: The New Science of Team Science (Hardcover and eBook) | Princeton University Press
"Once upon a time, it was the lone scientist who achieved brilliant breakthroughs. No longer. Today, science is done in teams of as many as hundreds of researchers who may be scattered across continents and represent a range of hierarchies. These collaborations can be powerful, but they demand new ways of thinking about scientific research. When three hundred people make a discovery, who gets credit? How can all collaborators’ concerns be adequately addressed? Why do certain STEM collaborations succeed while others fail?
"Focusing on the nascent science of team science, The Strength in Numbers synthesizes the results of the most far-reaching study to date on collaboration among university scientists to provide answers to such questions. Drawing on a national survey with responses from researchers at more than one hundred universities, anonymous web posts, archival data, and extensive interviews with active scientists and engineers in over a dozen STEM disciplines, Barry Bozeman and Jan Youtie set out a framework to characterize different types of collaboration and their likely outcomes. They also develop a model to define research effectiveness, which assesses factors internal and external to collaborations. They advance what they have found to be the gold standard of science collaborations: consultative collaboration management. This strategy—which codifies methods of consulting all team members on a study’s key points and incorporates their preferences and values—empowers managers of STEM collaborations to optimize the likelihood of their effectiveness."
to:NB  books:noted  science_as_a_social_process  collective_cognition  sociology_of_science  re:democratic_cognition 
10 days ago
Oracle Properties, Bias Correction, and Bootstrap Inference for Adaptive Lasso for Time Series M-Estimators - Audrino - 2017 - Journal of Time Series Analysis - Wiley Online Library
"We derive new theoretical results on the properties of the adaptive least absolute shrinkage and selection operator (adaptive lasso) for possibly nonlinear time series models. In particular, we investigate the question of how to conduct inference on the parameters given an adaptive lasso model. Central to this study is the test of the hypothesis that a given adaptive lasso parameter equals zero, which therefore tests for a false positive. To this end, we introduce a recentered bootstrap procedure and show, theoretically and empirically through extensive Monte Carlo simulations, that the adaptive lasso can combine efficient parameter estimation, variable selection, and inference in one step. Moreover, we analytically derive a bias correction factor that is able to significantly improve the empirical coverage of the test on the active variables. Finally, we apply the adaptive lasso and the recentered bootstrap procedure to investigate the relation between the short rate dynamics and the economy, thereby providing a statistical foundation (from a model choice perspective) for the classic Taylor rule monetary policy model."
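--- For readers who want the estimator itself, a bare-bones numpy sketch (toy i.i.d. data, no bootstrap, no bias correction, none of the time-series machinery): adaptive-lasso weights from a pilot OLS fit, then proximal gradient descent (ISTA) on the weighted l1 objective.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse linear model: only the first two coefficients are active.
n, p = 200, 10
X = rng.normal(0.0, 1.0, (n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + rng.normal(0.0, 1.0, n)

# Adaptive weights from a pilot OLS fit: w_j = 1 / |beta_ols_j| (gamma = 1).
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / np.abs(beta_ols)

def adaptive_lasso_ista(X, y, w, lam, n_iter=3000):
    """Proximal gradient for 0.5*||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)                # gradient of the smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft-threshold
    return b

beta_hat = adaptive_lasso_ista(X, y, w, lam=10.0)
```

The adaptive weights are what give the oracle property the title refers to: coefficients with small pilot estimates face a heavy penalty and are driven exactly to zero, while the active coefficients are barely shrunk.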

--- The bit about the Taylor rule is just dumb, but let it slide.
to:NB  statistics  time_series  hypothesis_testing  bootstrap  lasso 
10 days ago
Orthogonal Samples for Estimators in Time Series - Rao - 2017 - Journal of Time Series Analysis - Wiley Online Library
"Inference for statistics of a stationary time series often involves nuisance parameters and sampling distributions that are difficult to estimate. In this paper, we propose the method of orthogonal samples, which can be used to address some of these issues. For a broad class of statistics, an orthogonal sample is constructed through a slight modification of the original statistic such that it shares similar distributional properties as the centralized statistic of interest. We use the orthogonal sample to estimate nuisance parameters of the weighted average periodogram estimators and L2-type spectral statistics. Further, the orthogonal sample is utilized to estimate the finite sampling distribution of various test statistics under the null hypothesis. The proposed method is simple and computationally fast to implement. The viability of the method is illustrated with various simulations."
to:NB  statistics  time_series 
10 days ago
Scientists’ Conceptions of Good Research Practice | Perspectives on Science | MIT Press Journals
"This study examines how working scientists themselves understand, conceptualize, apply, and communicate norms and standards for good research practice. Drawing on semi-structured, detailed narrative interviews with more than 80 scientists, we highlight various topics of concern, including tensions between methodological requirements for good research practice and individual career goals, uncertainty about how exactly certain acknowledged methodological imperatives—such as replication—should be interpreted and turned into practice, and the delegation of the responsibility for ensuring good practice."
to:NB  sociology_of_science  science_as_a_social_process  methodology  philosophy_of_science  spontaneous_philosophy_of_scientists 
10 days ago
Neural Decoding: A Predictive Viewpoint | Neural Computation | MIT Press Journals
"Decoding in the context of brain-machine interface is a prediction problem, with the aim of retrieving the most accurate kinematic predictions attainable from the available neural signals. While selecting models that reduce the prediction error is done to various degrees, decoding has not received the attention that the fields of statistics and machine learning have lavished on the prediction problem in the past two decades. Here, we take a more systematic approach to the decoding prediction problem and search for risk-optimized reverse regression, optimal linear estimation (OLE), and Kalman filter models within a large model space composed of several nonlinear transformations of neural spike counts at multiple temporal lags. The reverse regression decoding framework is a standard prediction problem, where penalized methods such as ridge regression or Lasso are routinely used to find minimum risk models. We argue that minimum risk reverse regression is always more efficient than OLE and also happens to be 44% more efficient than a standard Kalman filter in a particular application of offline reconstruction of arm reaches of a rhesus macaque monkey. Yet model selection for tuning curves–based decoding models such as OLE and Kalman filtering is not a standard statistical prediction problem, and no efficient method exists to identify minimum risk models. We apply several methods to build low-risk models and show that in our application, a Kalman filter that includes multiple carefully chosen observation equations per neural unit is 67% more efficient than a standard Kalman filter, but with the drawback that finding such a model is computationally very costly."
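--- The "reverse regression" half of this really is just a penalized-prediction problem, and fits in a few lines. A toy sketch on simulated data (sinusoidal kinematics, Poisson units, square-root counts at a few lags; everything here is invented, nothing like the actual monkey recordings): ridge-regress kinematics on lagged neural features.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented session: 2-D kinematics drive 30 Poisson spiking units.
T, n_units, n_lags = 1200, 30, 3
t = np.arange(T)
kin = np.column_stack([np.sin(2 * np.pi * t / 200),
                       np.cos(2 * np.pi * t / 321)])
tuning = rng.normal(0.0, 1.0, (n_units, 2))
counts = rng.poisson(np.exp(0.5 + kin @ tuning.T))

# Features: square-root-transformed counts at several temporal lags.
feats = np.sqrt(counts)
X = np.column_stack([np.roll(feats, lag, axis=0) for lag in range(n_lags)])
X, y = X[n_lags:], kin[n_lags:]            # drop rows contaminated by wrap-around
X_tr, X_te, y_tr, y_te = X[:900], X[900:], y[:900], y[900:]

def ridge_fit(X, y, alpha):
    """Penalized least squares; one column of B per kinematic output."""
    Xc, yc = X - X.mean(0), y - y.mean(0)
    B = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ yc)
    return B, X.mean(0), y.mean(0)

B, xm, ym = ridge_fit(X_tr, y_tr, alpha=10.0)
pred = (X_te - xm) @ B + ym
r2 = 1 - ((y_te - pred) ** 2).sum() / ((y_te - y_te.mean(0)) ** 2).sum()
```

Held-out R^2 is the "risk" being optimized here; the paper's point is that tuning-curve-based decoders (OLE, Kalman) lack an equally routine model-selection recipe.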
to:NB  have_read  neural_coding_and_decoding  statistics  model_selection  state-space_models  kith_and_kin  ventura.valerie  todorova.sonia 
10 days ago
Optimized Pre-Processing for Discrimination Prevention
"Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy."
to:NB  classifiers  machine_learning  optimization  re:prediction-without-prejudice 
11 days ago
[1709.06560] Deep Reinforcement Learning that Matters
"In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted."
to:NB  reinforcement_learning  neural_networks  reproducibility 
12 days ago
[1711.10337] Are GANs Created Equal? A Large-Scale Study
"Generative adversarial networks (GAN) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted large-scale empirical study on state-of-the art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise from a higher computational budget and tuning more than fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures. Finally, we did not find evidence that any of the tested algorithms consistently outperforms the original one."
to:NB  neural_networks  machine_learning  optimization 
12 days ago
[1707.05589] On the State of the Art of Evaluation in Neural Language Models
"Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset."
to:NB  natural_language_processing  statistics  machine_learning  neural_networks 
12 days ago
Sovereignty, International Relations, and the Westphalian Myth on JSTOR
"The 350th anniversary of the Peace of Westphalia in 1998 was largely ignored by the discipline of international relations (IR), despite the fact that it regards that event as the beginning of the international system with which it has traditionally dealt. By contrast, there has recently been much debate about whether the "Westphalian system" is about to end. This debate necessitates, or at least implies, historical comparisons. I contend that IR, unwittingly, in fact judges current trends against the backdrop of a past that is largely imaginary, a product of the nineteenth- and twentieth-century fixation on the concept of sovereignty. I discuss how what I call the ideology of sovereignty has hampered the development of IR theory. I suggest that the historical phenomena I analyze in this article—the Thirty Years' War and the 1648 peace treaties as well as the post-1648 Holy Roman Empire and the European system in which it was embedded—may help us to gain a better understanding of contemporary international politics."
to:NB  early_modern_european_history  international_relations  debunking 
12 days ago
Poincaré Embeddings for Learning Hierarchical Representations
"Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability."
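--- The geometric point is compact enough to put in code. A sketch of the standard Poincaré-ball distance (not the paper's training loop), illustrating why hierarchies embed well: the same Euclidean separation costs far more hyperbolic distance near the boundary (where leaves live) than near the origin (where roots live).

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance between two points of the open unit ball."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv) + eps))

# Two pairs with the same Euclidean separation (0.18):
near_origin = poincare_dist(np.array([0.00, 0.0]), np.array([0.18, 0.00]))
near_boundary = poincare_dist(np.array([0.90, 0.0]), np.array([0.90, 0.18]))
```

Distances blow up toward the boundary, so a ball of fixed radius holds exponentially many well-separated leaves; the paper then optimizes embeddings under this metric with Riemannian SGD, rescaling Euclidean gradients by the inverse metric factor (1 - ||θ||²)² / 4.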
to:NB  hyperbolic_geometry  dimension_reduction  hierarchical_structure  statistics  data_analysis 
17 days ago
Robust Conditional Probabilities
"Conditional probabilities are a core concept in machine learning. For example, optimal prediction of a label Y given an input X corresponds to maximizing the conditional probability of Y given X. A common approach to inference tasks is learning a model of conditional probabilities. However, these models are often based on strong assumptions (e.g., log-linear models), and hence their estimate of conditional probabilities is not robust and is highly dependent on the validity of their assumptions. Here we propose a framework for reasoning about conditional probabilities without assuming anything about the underlying distributions, except knowledge of their second order marginals, which can be estimated from data. We show how this setting leads to guaranteed bounds on conditional probabilities, which can be calculated efficiently in a variety of settings, including structured-prediction. Finally, we apply them to semi-supervised deep learning, obtaining results competitive with variational autoencoders."
to:NB  statistics  density_estimation  probability 
17 days ago
The power of absolute discounting: all-dimensional distribution estimation
"Categorical models are the natural fit for many problems. When learning the distribution of categories from samples, high-dimensionality may dilute the data. Minimax optimality is too pessimistic to remedy this issue. A serendipitously discovered estimator, absolute discounting, corrects empirical frequencies by subtracting a constant from observed categories, which it then redistributes among the unobserved. It outperforms classical estimators empirically, and has been used extensively in natural language modeling. In this paper, we rigorously explain the prowess of this estimator using less pessimistic notions. We show (1) that absolute discounting recovers classical minimax KL-risk rates, (2) that it is \emph{adaptive} to an effective dimension rather than the true dimension, (3) that it is strongly related to the Good-Turing estimator and inherits its \emph{competitive} properties. We use power-law distributions as the corner stone of these results. We validate the theory via synthetic data and an application to the Global Terrorism Database."
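
--- The estimator itself is one line of arithmetic. A toy reconstruction from the abstract (the discount d = 0.75 is my arbitrary choice, and the paper surely handles details this sketch ignores):

```python
from collections import Counter

def absolute_discounting(samples, k, d=0.75):
    """Estimate a distribution over k categories labeled 0..k-1.

    Subtract a constant discount d from each observed category's count,
    then spread the freed probability mass uniformly over the unobserved
    categories (assumes 0 < d < 1, so discounted counts stay positive).
    """
    n = len(samples)
    counts = Counter(samples)
    unobserved = k - len(counts)
    freed = d * len(counts) / n  # total mass freed by discounting
    probs = {}
    for x in range(k):
        if x in counts:
            probs[x] = (counts[x] - d) / n
        else:
            probs[x] = freed / unobserved if unobserved else 0.0
    return probs

# Six draws from 5 categories; categories 3 and 4 are never seen but
# still get positive probability from the redistributed discount.
probs = absolute_discounting([0, 0, 0, 1, 1, 2], k=5)
```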
to:NB  to_read  density_estimation  statistics  heavy_tails 
17 days ago
Jinnealogy: Time, Islam, and Ecological Thought in the Medieval Ruins of Delhi | Anand Vivek Taneja
"In the ruins of a medieval palace in Delhi, a unique phenomenon occurs: Indians of all castes and creeds meet to socialize and ask the spirits for help. The spirits they entreat are Islamic jinns, and they write out requests as if petitioning the state. At a time when a Hindu right wing government in India is committed to normalizing a view of the past that paints Muslims as oppressors, Anand Vivek Taneja's Jinnealogy provides a fresh vision of religion, identity, and sacrality that runs counter to state-sanctioned history.
"The ruin, Firoz Shah Kotla, is an unusually democratic religious space, characterized by freewheeling theological conversations, DIY rituals, and the sanctification of animals. Taneja observes the visitors, who come mainly from the Muslim and Dalit neighborhoods of Delhi, and uses their conversations and letters to the jinns as an archive of voices so often silenced. He finds that their veneration of the jinns recalls pre-modern religious traditions in which spiritual experience was inextricably tied to ecological surroundings. In this enchanted space, Taneja encounters a form of popular Islam that is not a relic of bygone days, but a vibrant form of resistance to state repression and post-colonial visions of India."

--- I suspect the writing will be somewhat dreadful, but this sounds truly interesting.
to:NB  books:noted  anthropology  religion  delhi  islam  jinn  cultural_exchange 
17 days ago
Fear and Loathing Across Party Lines
"When defined in terms of social identity and affect toward copartisans and opposing partisans, the polarization of the American electorate has dramatically increased. We document the scope and consequences of affective polarization of partisans using implicit, explicit, and behavioral indicators. Our evidence demonstrates that hostile feelings for the opposing party are ingrained or automatic in voters’ minds, and that affective polarization based on party is just as strong as polarization based on race. We further show that party cues exert powerful effects on nonpolitical judgments and behaviors. Partisans discriminate against opposing partisans, doing so to a degree that exceeds discrimination based on race. We note that the willingness of partisans to display open animus for opposing partisans can be attributed to the absence of norms governing the expression of negative sentiment and that increased partisan affect provides an incentive for elites to engage in confrontation rather than cooperation."
to:NB  public_opinion  political_science  us_politics  re:democratic_cognition  via:gabriel_rossman 
18 days ago
Belief Network Analysis
"Many accounts of political belief systems conceive of them as networks of interrelated opinions, in which some beliefs are central and others peripheral. We formally show how such structural features can be used to construct direct measures of belief centrality in a network of correlations. We apply this method to the 2000 ANES data, which have been used to argue that political beliefs are organized around parenting schemas. Our structural approach instead yields results consistent with the central role of political identity, which individuals may use as the organizing heuristic to filter information from the political field. We search for population heterogeneity in this organizing logic first by comparing 44 demographic subpopulations, and then with inductive techniques. Contra recent accounts of belief system heterogeneity, we find that belief systems of different groups vary in the amount of organization, but not in the logic which organizes them."
to:NB  political_science  public_opinion  networks  cognitive_science  re:democratic_cognition  via:gabriel_rossman 
18 days ago
Scientific revolutions and the explosion of scientific evidence | SpringerLink
"Scientific realism, the position that successful theories are likely to be approximately true, is threatened by the pessimistic induction according to which the history of science is full of successful, but false theories. I aim to defend scientific realism against the pessimistic induction. My main thesis is that our current best theories each enjoy a very high degree of predictive success, far higher than was enjoyed by any of the refuted theories. I support this thesis by showing that the amount and quality of scientific evidence has increased enormously in the recent past, resulting in a big boost of success for the best theories."
to:NB  philosophy_of_science 
20 days ago
Where is the epistemic community? On democratisation of science and social accounts of objectivity | SpringerLink
"This article focuses on epistemic challenges related to the democratisation of scientific knowledge production, and to the limitations of current social accounts of objectivity. A process of ’democratisation’ can be observed in many scientific and academic fields today. Collaboration with extra-academic agents and the use of extra-academic expertise and knowledge has become common, and researchers are interested in promoting socially inclusive research practices. As this development is particularly prevalent in policy-relevant research, it is important that the new, more democratic forms of research be objective. In social accounts of objectivity only epistemic communities are taken to be able to produce objective knowledge, or the entity whose objectivity is to be assessed is precisely such a community. As I argue, these accounts do not allow for situations where it is not easy to identify the relevant epistemic community. Democratisation of scientific knowledge production can lead to such situations. As an example, I discuss attempts to link indigenous oral traditions to floods and tsunamis that happened hundreds or even thousands of years ago."
to:NB  science_as_a_social_process  re:democratic_cognition 
20 days ago
Different motivations, similar proposals: objectivity in scientific community and democratic science policy | SpringerLink
"The aim of the paper is to discuss some possible connections between philosophical proposals about the social organisation of science and developments towards a greater democratisation of science policy. I suggest that there are important similarities between one approach to objectivity in philosophy of science—Helen Longino’s account of objectivity as freedom from individual biases achieved through interaction of a variety of perspectives—and some ideas about the epistemic benefits of wider representation of various groups’ perspectives in science policy, as analysed by Mark Brown. Given these similarities, I suggest that they allow one to approach developments in science policy as if one of their aims were epistemic improvement that can be recommended on the basis of the philosophical account; analyses of political developments inspired by these ideas about the benefits of inclusive dialogue can then be used for understanding the possibility to implement a philosophical proposal for improving the objectivity of science in practice. Outlining this suggestion, I also discuss the possibility of important differences between the developments in the two spheres and show how the concern about the possible divergence of politically motivated and epistemically motivated changes may be mitigated. In order to substantiate further the suggestion I make, I discuss one example of a development where politically motivated and epistemically motivated changes converge in practice—the development of professional ethics in American archaeology as analysed by Alison Wylie. I suggest that analysing such specific developments and getting involved with them can be one of the tasks for philosophy of science. In the concluding part of the paper I discuss how this approach to philosophy of science is related to a number of arguments about a more politically relevant philosophy of science"
to:NB  re:democratic_cognition 
20 days ago
What was primitive accumulation? Reconstructing the origin of a critical concept | European Journal of Political Theory - William Clare Roberts, 2017
"The ongoing critical redeployment of primitive accumulation proceeds under two premises. First, it is argued that Marx, erroneously, confined primitive accumulation to the earliest history of capitalism. Second, Marx is supposed to have teleologically justified primitive accumulation as a necessary precondition for socialist development. This article argues that reading Marx’s account of primitive accumulation in the context of contemporaneous debates about working class and socialist strategy rebuts both of these criticisms. Marx’s definition of primitive accumulation as the ‘prehistory of capital’ does not deny its contemporaneity, but marks the distinction between the operations of capital and those of other agencies – especially the state – which are necessary, but also external, to capital itself. This same distinction between capital, which accumulates via the exploitation of labour-power, and the state, which becomes dependent upon capitalist accumulation for its own existence, recasts the historical necessity of primitive accumulation. Marx characterizes the modern state as the armed and servile agent of capital, willing to carry out primitive accumulation wherever the conditions of capitalist accumulation are threatened. Hence, the recent reconstructions risk obliterating Marx’s key insights into the specificity of a) capital as a form of wealth and b) capital’s relationship to the state."

--- Footnote 1 is especially relevant to my interests.
to:NB  have_read  marx.karl  political_economy  re:reading_capital  via:? 
27 days ago
[1706.02744] Avoiding Discrimination through Causal Reasoning
"Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.
"Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them."
to:NB  to_read  causality  algorithmic_fairness  prediction  machine_learning  janzing.dominik  re:ADAfaEPoV  via:arsyed 
27 days ago
Consistency without Inference: Instrumental Variables in Practical Application
"I use the bootstrap to study a comprehensive sample of 1400 instrumental variables regressions in 32 papers published in the journals of the American Economic Association. IV estimates are more often found to be falsely significant and more sensitive to outliers than OLS, while having a higher mean squared error around the IV population moment. There is little evidence that OLS estimates are substantively biased, while IV instruments often appear to be irrelevant. In addition, I find that established weak instrument pre-tests are largely uninformative and weak instrument robust methods generally perform no better or substantially worse than 2SLS."
to:NB  have_read  re:ADAfaEPoV  to_teach:undergrad-ADA  instrumental_variables  causal_inference  regression  statistics  econometrics  via:kjhealy 
27 days ago
Democracy by mistake
"How does democracy emerge from authoritarian rule? Influential theories contend that incumbents deliberately choose to share or surrender power. They do so to prevent revolution, motivate citizens to fight wars, incentivize governments to provide public goods, outbid elite rivals, or limit factional violence. Examining the history of all democratizations since 1800, I show that such deliberate choice arguments may help explain up to one third of cases. In about two thirds, democratization occurred not because incumbent elites chose it but because, in trying to prevent it, they made mistakes that weakened their hold on power. Common mistakes include: calling elections or starting military conflicts, only to lose them; ignoring popular unrest and being overthrown; initiating limited reforms that get out of hand; and selecting a covert democrat as leader. These mistakes reflect well-known cognitive biases such as overconfidence and the illusion of control."
to:NB  to_read  democracy  re:democratic_cognition  history  institutions  political_science  via:henry_farrell 
28 days ago
Geismer, L.: Don't Blame Us: Suburban Liberals and the Transformation of the Democratic Party (Hardcover, Paperback and eBook) | Princeton University Press
"Don't Blame Us traces the reorientation of modern liberalism and the Democratic Party away from their roots in labor union halls of northern cities to white-collar professionals in postindustrial high-tech suburbs, and casts new light on the importance of suburban liberalism in modern American political culture. Focusing on the suburbs along the high-tech corridor of Route 128 around Boston, Lily Geismer challenges conventional scholarly assessments of Massachusetts exceptionalism, the decline of liberalism, and suburban politics in the wake of the rise of the New Right and the Reagan Revolution in the 1970s and 1980s. Although only a small portion of the population, knowledge professionals in Massachusetts and elsewhere have come to wield tremendous political leverage and power. By probing the possibilities and limitations of these suburban liberals, this rich and nuanced account shows that—far from being an exception to national trends—the suburbs of Massachusetts offer a model for understanding national political realignment and suburban politics in the second half of the twentieth century."
to:NB  books:noted  us_politics  20th_century_history  class_struggles_in_america 
4 weeks ago
Learnability of latent position network models - IEEE Conference Publication
"The latent position model is a well known model for social network analysis which has also found application in other fields, such as analysis of marketing and e-commerce data. In such applications, the data sets are increasingly massive and only partially observed, giving rise to the possibility of overfitting by the model. Using tools from statistical learning theory, we bound the VC dimension of the latent position model, leading to bounds on the overfit of the model. We find that the overfit can decay to zero with increasing network size even if only a vanishing fraction of the total network is observed. However, the amount of observed data on a per-node basis should increase with the size of the graph."
to:NB  have_read  learning_theory  network_data_analysis  choi.david  kith_and_kin 
4 weeks ago
The Zombie Diseases of Climate Change - The Atlantic
I would pay very good money to see what Linda Nagata or Paul McAuley could do with this.
climate_change  plagues_and_peoples  via:?  have_read 
4 weeks ago
Community and the Crime Decline: The Causal Effect of Local Nonprofits on Violent Crime | American Sociological Review - Patrick Sharkey, Gerard Torrats-Espinosa, Delaram Takyar, 2017
"Largely overlooked in the theoretical and empirical literature on the crime decline is a long tradition of research in criminology and urban sociology that considers how violence is regulated through informal sources of social control arising from residents and organizations internal to communities. In this article, we incorporate the “systemic” model of community life into debates on the U.S. crime drop, and we focus on the role that local nonprofit organizations played in the national decline of violence from the 1990s to the 2010s. Using longitudinal data and a strategy to account for the endogeneity of nonprofit formation, we estimate the causal effect on violent crime of nonprofits focused on reducing violence and building stronger communities. Drawing on a panel of 264 cities spanning more than 20 years, we estimate that every 10 additional organizations focusing on crime and community life in a city with 100,000 residents leads to a 9 percent reduction in the murder rate, a 6 percent reduction in the violent crime rate, and a 4 percent reduction in the property crime rate."

--- Last tag conditional on replication data.
to:NB  causal_inference  crime  institutions  via:rvenkat  to_teach:undergrad-ADA 
4 weeks ago
[1711.02834] Bootstrapping Generalization Error Bounds for Time Series
"We consider the problem of finding confidence intervals for the risk of forecasting the future of a stationary, ergodic stochastic process, using a model estimated from the past of the process. We show that a bootstrap procedure provides valid confidence intervals for the risk, when the data source is sufficiently mixing, and the loss function and the estimator are suitably smooth. Autoregressive (AR(d)) models estimated by least squares obey the necessary regularity conditions, even when mis-specified, and simulations show that the finite- sample coverage of our bounds quickly converges to the theoretical, asymptotic level. As an intermediate step, we derive sufficient conditions for asymptotic independence between empirical distribution functions formed by splitting a realization of a stochastic process, of independent interest."
in_NB  time_series  bootstrap  statistics  self-promotion  to:blog 
4 weeks ago
[1711.00867] The (Un)reliability of saliency methods
"Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step ---adding a constant shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution."
to:NB  neural_networks  machine_learning  credit_attribution  via:? 
4 weeks ago
Cats in Art, Morris
"The cat—that most graceful, stubborn, and agile of animals—has been a favorite subject of artists the world over from prehistory to the modern day. A spectacular 7,000-year-old engraving in Libya depicts a catfight. Figures modeled by the Babylonians remind us of their belief that the souls of priests were escorted to paradise by a helpful cat. Pablo Picasso was known to have loved cats and famously portrayed them as savage predators. In Victorian times, cats were depicted in loving family groups with mothers caring for their playful kittens. Today, the cat is one of the most popular domestic pets on the planet, and feline art is a hugely popular theme across the world.
"In his latest eye-catching book, best-selling author Desmond Morris tells the compelling story of cats in art. He explores feline art in its many forms, tracing its history from ancient rock paintings and spectacular Egyptian art to the work of old masters, avant-garde representations, and the depiction of cats in cartoons. Morris discusses the various ways in which artists have approached the subject throughout history, weaving illuminating stories with rarely seen images. The result is a beautifully illustrated book that will delight anyone with a Kitty, Max, or Tigger in their life."
to:NB  books:noted  cats  art_history 
4 weeks ago
[1705.07809] Information-theoretic analysis of generalization capability of learning algorithms
"We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou."
to:NB  to_read  learning_theory  information_theory  raginsky.maxim 
4 weeks ago
[1709.09702] Projective Sparse Latent Space Network Models
"In typical latent-space network models, nodes have latent positions, which are all drawn independently from a common distribution. As a consequence, the number of edges in a network scales quadratically with the number of nodes, resulting in a dense graph sequence as the number of nodes grows. We propose an adjustment to latent-space network models which allows the number of edges to scale linearly with the number of nodes, to scale quadratically, or at any intermediate rate. Our models also form projective families, making statistical inference and prediction well-defined. Built through point processes, our models are related to both the Poisson random connection model and the graphex framework."
in_NB  network_data_analysis  networks  graph_limits  point_processes  stochastic_processes  self-promotion  to:blog  to_teach:graphons 
4 weeks ago
[1711.02123] Consistency of Maximum Likelihood for Continuous-Space Network Models
"Network analysis needs tools to infer distributions over graphs of arbitrary size from a single graph. Assuming the distribution is generated by a continuous latent space model which obeys certain natural symmetry and smoothness properties, we establish three levels of consistency for non-parametric maximum likelihood inference as the number of nodes grows: (i) the estimated locations of all nodes converge in probability on their true locations; (ii) the distribution over locations in the latent space converges on the true distribution; and (iii) the distribution over graphs of arbitrary size converges."
in_NB  network_data_analysis  statistics  self-promotion  to:blog  re:network_differences  re:hyperbolic_networks  to_teach:baby-nets 
4 weeks ago
Recipe: Roasted Radishes and Carrots with Turmeric
1½ pounds organic radishes, washed with ends trimmed
12 ounces organic baby carrots, washed
2 tablespoons organic grape seed oil (or melted coconut oil)
½ of a lemon, juiced
2 teaspoons dried organic parsley flakes
1 teaspoon organic ground turmeric root
1 teaspoon ground black pepper
½ teaspoon real salt or sea salt
Oven 450 degrees
Cut your prepared radishes in half, larger radishes may need to be quartered.
Place the cut radishes and carrots in a large bowl, and drizzle with the oil and lemon juice. Toss well.
Mix the spices together, then sprinkle the mixture over the vegetables and toss until the vegetables are evenly coated.
Spread the vegetables out in a single layer on a baking sheet (stoneware works well for this). Roast for 20-25 minutes or until fork-tender (start checking after 15 minutes...mine needed 25 minutes of roasting).
Remove from oven, and transfer the radishes and carrots to a serving bowl. Serve right away for the best flavor!
food  recipes  have_made 
4 weeks ago
[1609.00494] Publication bias and the canonization of false facts
"In the process of scientific inquiry, certain claims accumulate enough support to be established as facts. Unfortunately, not every claim accorded the status of fact turns out to be true. In this paper, we model the dynamic process by which claims are canonized as fact through repeated experimental confirmation. The community's confidence in a claim constitutes a Markov process: each successive published result shifts the degree of belief, until sufficient evidence accumulates to accept the claim as fact or to reject it as false. In our model, publication bias --- in which positive results are published preferentially over negative ones --- influences the distribution of published results. We find that when readers do not know the degree of publication bias and thus cannot condition on it, false claims often can be canonized as facts. Unless a sufficient fraction of negative results are published, the scientific process will do a poor job at discriminating false from true claims. This problem is exacerbated when scientists engage in p-hacking, data dredging, and other behaviors that increase the rate at which false positives are published. If negative results become easier to publish as a claim approaches acceptance as a fact, however, true and false claims can be more readily distinguished. To the degree that the model accurately represents current scholarly practice, there will be serious concern about the validity of purported facts in some areas of scientific research."
to:NB  science_as_a_social_process  collective_cognition  natural_history_of_truthiness  bergstrom.carl  to_read  via:?  kith_and_kin  re:democratic_cognition 
5 weeks ago
Fortifications and Democracy in the Ancient Greek World by Josiah Ober, Barry R. Weingast :: SSRN
"In the modern world, access-limiting fortification walls are not typically regarded as promoting democracy. But in Greek antiquity, increased investment in fortifications was correlated with the prevalence and stability of democracy. This paper sketches the background conditions of the Greek city-state ecology, analyzes a passage in Aristotle’s Politics, and assesses the choices of Hellenistic kings, Greek citizens, and urban elites, as modeled in a simple game. The paper explains how city walls promoted democracy and helps to explain several other puzzles: why Hellenistic kings taxed Greek cities at lower than expected rates; why elites in Greek cities supported democracy; and why elites were not more heavily taxed by democratic majorities. The relationship between walls, democracy, and taxes promoted continued economic growth into the late classical and Hellenistic period (4th-2nd centuries BCE), and ultimately contributed to the survival of Greek culture into the Roman era, and thus modernity. We conclude with a consideration of whether the walls-democracy relationship holds in modernity."
to:NB  ancient_history  institutions  democracy  war  ober.josiah  via:henry_farrell 
5 weeks ago
Universalism without Uniformity: Explorations in Mind and Culture, Cassaniti, Menon
"One of the major questions of cultural psychology is how to take diversity seriously while acknowledging our shared humanity. This collection, edited by Julia L. Cassaniti and Usha Menon, brings together leading scholars in the field to reconsider that question and explore the complex mechanisms that connect culture and the human mind.
"The contributors to Universalism without Uniformity offer tools for bridging silos that have historically separated anthropology’s attention to culture and psychology’s interest in universal mental processes. Throughout, they seek to answer intricate yet fundamental questions about why we are motivated to find meaning in everything around us and, in turn, how we constitute the cultural worlds we inhabit through our intentional involvement in them. Laying bare entrenched disciplinary blind spots, this book offers a trove of insights on issues such as morality, emotional functioning, and conceptions of the self across cultures. Filled with impeccable empirical research coupled with broadly applicable theoretical reflections on taking psychological diversity seriously, Universalism without Uniformity breaks new ground in the study of mind and culture. "
to:NB  books:noted  psychology  cultural_differences  cultural_transmission_of_cognitive_tools 
5 weeks ago
[1711.00813] Bootstrapping Exchangeable Random Graphs
"We introduce two new bootstraps for exchangeable random graphs. One, the "empirical graphon", is based purely on resampling, while the other, the "histogram stochastic block model", is a model-based "sieve" bootstrap. We show that both of them accurately approximate the sampling distributions of motif densities, i.e., of the normalized counts of the number of times fixed subgraphs appear in the network. These densities characterize the distribution of (infinite) exchangeable networks. Our bootstraps therefore give, for the first time, a valid quantification of uncertainty in inferences about fundamental network statistics, and so of parameters identifiable from them."
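
--- As I read the "empirical graphon" bootstrap, one resample is just: draw n nodes with replacement, take the induced adjacency, recompute your motif densities. A sketch of that single step (my gloss, with details like the treatment of repeated nodes glossed over; the paper's procedure may differ):

```python
import random

def empirical_graphon_resample(adj, rng=None):
    """One bootstrap replicate: sample n node indices with replacement and
    return the adjacency matrix of the induced (multi-)selection of nodes.

    adj is an n x n list-of-lists of 0/1 entries.
    """
    rng = rng or random.Random(0)
    n = len(adj)
    idx = [rng.randrange(n) for _ in range(n)]
    return [[adj[i][j] for j in idx] for i in idx]

# One replicate from a small triangle graph; repeating this many times and
# recomputing a motif density on each replicate gives the bootstrap
# distribution used for the uncertainty quantification described above.
boot = empirical_graphon_resample([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```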
in_NB  network_data_analysis  statistics  bootstrap  graph_limits  nonparametrics  self-promotion  to:blog 
5 weeks ago
[1710.02773] Baseline Mixture Models for Social Networks
"Continuous mixtures of distributions are widely employed in the statistical literature as models for phenomena with highly divergent outcomes; in particular, many familiar heavy-tailed distributions arise naturally as mixtures of light-tailed distributions (e.g., Gaussians), and play an important role in applications as diverse as modeling of extreme values and robust inference. In the case of social networks, continuous mixtures of graph distributions can likewise be employed to model social processes with heterogeneous outcomes, or as robust priors for network inference. Here, we introduce some simple families of network models based on continuous mixtures of baseline distributions. While analytically and computationally tractable, these models allow more flexible modeling of cross-graph heterogeneity than is possible with conventional baseline (e.g., Bernoulli or U|man distributions). We illustrate the utility of these baseline mixture models with application to problems of multiple-network ERGMs, network evolution, and efficient network inference. Our results underscore the potential ubiquity of network processes with nontrivial mixture behavior in natural settings, and raise some potentially disturbing questions regarding the adequacy of current network data collection practices."
to:NB  network_data_analysis  exponential_family_random_graphs  mixture_models  statistics  butts.carter 
5 weeks ago
[1707.07397] Synthesizing Robust Adversarial Examples
"Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems.
"We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems."
to:NB  neural_networks  adversarial_examples 
5 weeks ago
[1710.11304] Characterizing the structural diversity of complex networks across domains
"The structure of complex networks has been of interest in many scientific and engineering disciplines over the decades. A number of studies in the field have been focused on finding the common properties among different kinds of networks such as heavy-tail degree distribution, small-worldness and modular structure and they have tried to establish a theory of structural universality in complex networks. However, there is no comprehensive study of network structure across a diverse set of domains in order to explain the structural diversity we observe in the real-world networks. In this paper, we study 986 real-world networks of diverse domains ranging from ecological food webs to online social networks along with 575 networks generated from four popular network models. Our study utilizes a number of machine learning techniques such as random forest and confusion matrix in order to show the relationships among network domains in terms of network structure. Our results indicate that there are some partitions of network categories in which networks are hard to distinguish based purely on network structure. We have found that these partitions of network categories tend to have similar underlying functions, constraints and/or generative mechanisms of networks even though networks in the same partition have different origins, e.g., biological processes, results of engineering by human being, etc. This suggests that the origin of a network, whether it's biological, technological or social, may not necessarily be a decisive factor of the formation of similar network structure. Our findings shed light on the possible direction along which we could uncover the hidden principles for the structural diversity of complex networks."
to:NB  network_data_analysis  statistics  classifiers  clauset.aaron  kith_and_kin  to_read  to_teach:baby-nets 
5 weeks ago
Minimally Sufficient Conditions for the Evolution of Social Learning and the Emergence of Non-Genetic Evolutionary Systems | Artificial Life | MIT Press Journals
"Social learning, defined as the imitation of behaviors performed by others, is recognized as a distinctive characteristic in humans and several other animal species. Previous work has claimed that the evolutionary fixation of social learning requires decision-making cognitive abilities that result in transmission bias (e.g., discriminatory imitation) and/or guided variation (e.g., adaptive modification of behaviors through individual learning). Here, we present and analyze a simple agent-based model that demonstrates that the transition from instinctive actuators (i.e., non-learning agents whose behavior is hardcoded in their genes) to social learners (i.e., agents that imitate behaviors) can occur without invoking such decision-making abilities. The model shows that the social learning of a trait may evolve and fix in a population if there are many possible behavioral variants of the trait, if it is subject to strong selection pressure for survival (as distinct from reproduction), and if imitation errors occur at a higher rate than genetic mutation. These results demonstrate that the (sometimes implicit) assumption in prior work that decision-making abilities are required is incorrect, thus allowing a more parsimonious explanation for the evolution of social learning that applies to a wider range of organisms. Furthermore, we identify genotype-phenotype disengagement as a signal for the imminent fixation of social learners, and explain the way in which this disengagement leads to the emergence of a basic form of cultural evolution (i.e., a non-genetic evolutionary system)."
to:NB  cultural_evolution  agent-based_models  bullock.seth  re:do-institutions-evolve 
5 weeks ago
"The great geoengineering projects have failed.
"The world is still warming, sea levels are still rising, and the Antarctic Peninsula is home to Earth's newest nation, with life quickened by ecopoets spreading across valleys and fjords exposed by the retreat of the ice.
"Austral Morales Ferrado, a child of the last generation of ecopoets, is a husky: an edited person adapted to the unforgiving climate of the far south, feared and despised by most of its population. She's been a convict, a corrections officer in a labour camp, and consort to a criminal, and now, out of desperation, she has committed the kidnapping of the century. But before she can collect the ransom and make a new life elsewhere, she must find a place of safety amongst the peninsula's forests and icy plateaus, and evade a criminal gang that has its own plans for the teenage girl she's taken hostage.
"Blending the story of Austral's flight with the fractured history of her family and its role in the colonisation of Antarctica, Austral is a vivid portrayal of a treacherous new world created by climate change, and shaped by the betrayals and mistakes of the past."

--- Why is this book, which I want with great intensity, not available in the US?
to:NB  books:noted  coveted  science_fiction  climate_change  antarctica  mcauley.paul 
5 weeks ago
Philosophy Within Its Proper Bounds - Edouard Machery - Oxford University Press
"In Philosophy Within Its Proper Bounds, Edouard Machery argues that resolving many traditional and contemporary philosophical issues is beyond our epistemic reach and that philosophy should re-orient itself toward more humble, but ultimately more important intellectual endeavors. Any resolution to many of these contemporary issues would require an epistemic access to metaphysical possibilities and necessities, which, Machery argues, we do not have. In effect, then, Philosophy Within Its Proper Bounds defends a form of modal skepticism. The book assesses the main philosophical method for acquiring the modal knowledge that the resolution of modally immodest philosophical issues turns on: the method of cases, that is, the consideration of actual or hypothetical situations (which cases or thought experiments describe) in order to determine what facts hold in these situations. Canvassing the extensive work done by experimental philosophers over the last 15 years, Edouard Machery shows that the method of cases is unreliable and should be rejected. Importantly, the dismissal of modally immodest philosophical issues is no cause for despair - many important philosophical issues remain within our epistemic reach. In particular, reorienting the course of philosophy would free time and resources for bringing back to prominence a once-central intellectual endeavor: conceptual analysis."

--- Giving a talk today in town, which I will have to miss.

--- ETA: I will be disappointed if he doesn't quote Pope:
Know then thyself, presume not God to scan;
The proper study of mankind is man.
to:NB  books:noted  philosophy  epistemology  skepticism 
5 weeks ago
Science organizations troubled by Rand Paul bill targeting peer review
In plainer words, the proposal is to add two political commissars to every peer-review panel.
peer_review  to:blog  us_politics  utter_stupidity 
5 weeks ago
Salganik, M.: Bit by Bit: Social Research in the Digital Age (Hardcover and eBook) | Princeton University Press
"In just the past several years, we have witnessed the birth and rapid spread of social media, mobile phones, and numerous other digital marvels. In addition to changing how we live, these tools enable us to collect and process data about human behavior on a scale never before imaginable, offering entirely new approaches to core questions about social behavior. Bit by Bit is the key to unlocking these powerful methods—a landmark book that will fundamentally change how the next generation of social scientists and data scientists explores the world around us.
"Bit by Bit is the essential guide to mastering the key principles of doing social research in this fast-evolving digital age. In this comprehensive yet accessible book, Matthew Salganik explains how the digital revolution is transforming how social scientists observe behavior, ask questions, run experiments, and engage in mass collaborations. He provides a wealth of real-world examples throughout and also lays out a principles-based approach to handling ethical challenges.
"Bit by Bit is an invaluable resource for social scientists who want to harness the research potential of big data and a must-read for data scientists interested in applying the lessons of social science to tomorrow’s technologies."
to:NB  books:noted  social_science_methodology  data_mining  salganik.matthew  experimental_sociology  experimental_design  sociology  networked_life  coveted 
5 weeks ago
Population Control Policies and Fertility Convergence
"Rapid population growth in developing countries in the middle of the 20th century led to fears of a population explosion and motivated the inception of what effectively became a global population-control program. The initiative, propelled in its beginnings by intellectual elites in the United States, Sweden, and some developing countries, mobilized resources to enact policies aimed at reducing fertility by widening contraception provision and changing family-size norms. In the following five decades, fertility rates fell dramatically, with a majority of countries converging to a fertility rate just above two children per woman, despite large cross-country differences in economic variables such as GDP per capita, education levels, urbanization, and female labor force participation. The fast decline in fertility rates in developing economies stands in sharp contrast with the gradual decline experienced earlier by more mature economies. In this paper, we argue that population-control policies likely played a central role in the global decline in fertility rates in recent decades and can explain some patterns of that fertility decline that are not well accounted for by other socioeconomic factors."
to:NB  demography  demographic_transition  public_policy  20th_century_history 
5 weeks ago
Difficult People: Who Is Perceived to Be Demanding in Personal Networks and Why Are They There? - American Sociological Review - Shira Offer, Claude S. Fischer, 2017
"Why do people maintain ties with individuals whom they find difficult? Standard network theories imply that such alters are avoided or dropped. Drawing on a survey of over 1,100 diverse respondents who described over 12,000 relationships, we examined which among those ties respondents nominated as a person whom they “sometimes find demanding or difficult.” Those so listed composed about 15 percent of all alters in the network. After holding ego and alter traits constant, close kin, especially women relatives and aging parents, were especially likely to be named as difficult alters. Non-kin described as friends were less likely, and those described as co-workers more likely, to be listed only as difficult alters. These results suggest that normative and institutional constraints may force people to retain difficult and demanding alters in their networks. We also found that providing support to alters, but not receiving support from those alters, was a major source of difficulty in these relationships. Furthermore, the felt burden of providing support was not attenuated by receiving assistance, suggesting that alters involved in reciprocated exchanges were not less often labeled difficult than were those in unreciprocated ones. This study underlines the importance of constraints in personal networks."
to:NB  social_networks  sociology 
5 weeks ago
[1706.04692] Bias and high-dimensional adjustment in observational studies of peer effects
"Peer effects, in which the behavior of an individual is affected by the behavior of their peers, are posited by multiple theories in the social sciences. Other processes can also produce behaviors that are correlated in networks and groups, thereby generating debate about the credibility of observational (i.e. nonexperimental) studies of peer effects. Randomized field experiments that identify peer effects, however, are often expensive or infeasible. Thus, many studies of peer effects use observational data, and prior evaluations of causal inference methods for adjusting observational data to estimate peer effects have lacked an experimental "gold standard" for comparison. Here we show, in the context of information and media diffusion on Facebook, that high-dimensional adjustment of a nonexperimental control group (677 million observations) using propensity score models produces estimates of peer effects statistically indistinguishable from those from using a large randomized experiment (220 million observations). Naive observational estimators overstate peer effects by 320% and commonly used variables (e.g., demographics) offer little bias reduction, but adjusting for a measure of prior behaviors closely related to the focal behavior reduces bias by 91%. High-dimensional models adjusting for over 3,700 past behaviors provide additional bias reduction, such that the full model reduces bias by over 97%. This experimental evaluation demonstrates that detailed records of individuals' past behavior can improve studies of social influence, information diffusion, and imitation; these results are encouraging for the credibility of some studies but also cautionary for studies of rare or new behaviors. More generally, these results show how large, high-dimensional data sets and statistical learning techniques can be used to improve causal inference in the behavioral sciences."
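--- The logic of the adjustment is easy to see in a toy simulation (entirely my own construction, not the paper's data or code): a confounder, "prior behavior," drives both peer exposure and adoption, so the naive exposed-vs-unexposed contrast badly overstates the true peer effect (0.05 here), while subclassifying on the confounder, a crude stand-in for propensity-score adjustment, comes close to recovering it:

```python
import random

rng = random.Random(0)
n = 20000
people = []
for _ in range(n):
    prior = rng.random()               # prior related activity (confounder)
    exposed = rng.random() < prior     # active people see more peer activity
    adopt = rng.random() < 0.1 + 0.5 * prior + (0.05 if exposed else 0.0)
    people.append((prior, exposed, adopt))

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

# Naive estimate: raw difference in adoption between exposed and unexposed.
naive = (mean([a for p, e, a in people if e]) -
         mean([a for p, e, a in people if not e]))

# Adjusted estimate: compare within bins of the confounder, then take a
# size-weighted average (subclassification on the propensity's driver).
bins, strata = 10, []
for b in range(10):
    lo, hi = b / bins, (b + 1) / bins
    cell = [(e, a) for p, e, a in people if lo <= p < hi]
    treated = [a for e, a in cell if e]
    control = [a for e, a in cell if not e]
    if treated and control:
        strata.append((mean(treated) - mean(control), len(cell)))
total = sum(w for _, w in strata)
adjusted = sum(eff * w for eff, w in strata) / total
```

With this setup the naive estimate lands around four times the true effect, while the adjusted one sits near 0.05, the same qualitative pattern as the paper's 320%-overstated naive estimator.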
to:NB  to_read  re:homophily_and_confounding  causal_inference  network_data_analysis  eckles.dean  bakshy.eytan  experimental_sociology 
5 weeks ago
Language Log » Blue Cell Dyslexia
"At first I was hesitant to evaluate the study because I’m not a vision scientist, but then I realized that hadn’t prevented the authors from publishing it. Albert Le Floch and Guy Ropars are affiliated with the Université de Rennes, France. Their primary area of expertise appears to be laser physics."
dyslexia  why_oh_why_cant_we_have_a_better_academic_publishing_system  linguistics  psychology 
5 weeks ago
Professors like me can’t stay silent about this extremist moment on campuses - The Washington Post
This is the first I've heard of this, and of course one has to wonder whether it's an accurate account of the situation, but if it is, it's incredibly outrageous.
us_politics  academic_freedom  academia  reed  have_read  wtf?  circular_firing_squad 
6 weeks ago
Evolutionary dynamics of language systems
"Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact."

--- I would be very curious to know what historical linguists make of this.
to:NB  linguistics  language_history  cultural_evolution  phylogenetics 
6 weeks ago
Direct measurement of weakly nonequilibrium system entropy is consistent with Gibbs–Shannon form
"Stochastic thermodynamics extends classical thermodynamics to small systems in contact with one or more heat baths. It can account for the effects of thermal fluctuations and describe systems far from thermodynamic equilibrium. A basic assumption is that the expression for Shannon entropy is the appropriate description for the entropy of a nonequilibrium system in such a setting. Here we measure experimentally this function in a system that is in local but not global equilibrium. Our system is a micron-scale colloidal particle in water, in a virtual double-well potential created by a feedback trap. We measure the work to erase a fraction of a bit of information and show that it is bounded by the Shannon entropy for a two-state system. Further, by measuring directly the reversibility of slow protocols, we can distinguish unambiguously between protocols that can and cannot reach the expected thermodynamic bounds."
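--- The bound being tested is the Gibbs-Shannon/Landauer one: erasing information costs at least k_B T ln 2 joules per bit of entropy reduction. A quick back-of-the-envelope calculation (my own sketch, standard physics rather than anything specific to the paper's apparatus):

```python
import math

def shannon_entropy_bits(p):
    """Entropy of a two-state system with occupation probability p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def erasure_work_bound(p_initial, p_final, temp_kelvin=300.0):
    """Minimum work (joules) to compress a two-state system's entropy,
    per the Landauer bound W >= k_B * T * ln(2) * (H_initial - H_final)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    delta_h = shannon_entropy_bits(p_initial) - shannon_entropy_bits(p_final)
    return k_b * temp_kelvin * math.log(2) * delta_h

# Fully erasing one bit (H: 1 -> 0) at room temperature, vs. erasing only
# a fraction of a bit (H: 1 -> ~0.5), as in the experiment's partial erasures.
w_full = erasure_work_bound(0.5, 1.0)
w_half = erasure_work_bound(0.5, 0.89)
```

At 300 K the full-bit bound comes out to about 2.9e-21 J, and partial erasure is bounded proportionally by the fraction of a bit removed.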
to:NB  thermodynamics  statistics  non-equilibrium  physics 
6 weeks ago
The Power of Bias in Economics Research - Ioannidis - 2017 - The Economic Journal - Wiley Online Library
"We investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias. We survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies. Half of the research areas have nearly 90% of their results under-powered. The median statistical power is 18%, or less. A simple weighted average of those reported results that are adequately powered (power ≥ 80%) reveals that nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more."
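--- The link between low power and exaggerated published effects is easy to simulate (my own sketch, in the spirit of Gelman and Carlin's "Type M error," not the paper's code): condition on statistical significance, and the surviving estimates systematically overstate the truth:

```python
import random

def power_and_exaggeration(true_effect, se, sims=20000, seed=0):
    """Monte Carlo: draw an estimate ~ Normal(true_effect, se); power is the
    share of draws that are 'significant' (|estimate| > 1.96*se), and the
    exaggeration factor is how much the significant draws overstate the
    true effect on average."""
    rng = random.Random(seed)
    crit = 1.96 * se
    significant = []
    for _ in range(sims):
        est = rng.gauss(true_effect, se)
        if abs(est) > crit:
            significant.append(abs(est))
    power = len(significant) / sims
    exaggeration = sum(significant) / len(significant) / true_effect
    return power, exaggeration

# se chosen (by normal-power arithmetic) so that power is roughly the
# paper's median of 18%.
power, exag = power_and_exaggeration(true_effect=1.0, se=0.96)
```

At 18% power the significant estimates run well over twice the true effect, which is exactly the "exaggerated by a factor of two" pattern the survey reports.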
to:NB  economics  statistics  hypothesis_testing  bad_data_analysis  bad_science_journalism  re:neutral_model_of_inquiry  via:d-squared  to_read 
6 weeks ago
Gresham's Law of Model Averaging
"A decision maker doubts the stationarity of his environment. In response, he uses two models, one with time-varying parameters, and another with constant parameters. Forecasts are then based on a Bayesian model averaging strategy, which mixes forecasts from the two models. In reality, structural parameters are constant, but the (unknown) true model features expectational feedback, which the reduced-form models neglect. This feedback permits fears of parameter instability to become self-confirming. Within the context of a standard asset-pricing model, we use the tools of large deviations theory to show that even though the constant parameter model would converge to the rational expectations equilibrium if considered in isolation, the mere presence of an unstable alternative drives it out of consideration."
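--- The model-averaging mechanics (though not the self-confirming feedback, which needs the full equilibrium model) can be sketched as follows; this is my own toy, assuming Gaussian predictive densities for the two forecast models:

```python
import math

def bma_weights(data, models, prior=None):
    """Sequential Bayesian model averaging: each model is a (mean, sd) pair
    for its Gaussian one-step-ahead forecast; weights are updated in
    proportion to each model's predictive likelihood of the new datum."""
    if prior is None:
        prior = [1.0 / len(models)] * len(models)
    w = list(prior)
    for y in data:
        liks = [math.exp(-0.5 * ((y - m) / s) ** 2) /
                (s * math.sqrt(2 * math.pi)) for m, s in models]
        w = [wi * li for wi, li in zip(w, liks)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

# A "constant parameter" model (tight around the truth) vs. a "time-varying"
# model (same mean, inflated variance): on data actually generated by a
# stable process, the constant-parameter model's weight grows.
constant = (0.0, 1.0)
time_varying = (0.0, 3.0)
data = [0.3, -0.5, 0.8, -0.2, 0.1, 0.4, -0.7, 0.2]
w = bma_weights(data, [constant, time_varying])
```

The paper's point is the reverse dynamic: once the models feed back into the data-generating process, the unstable alternative can drive the good model out, a twist this simple update rule cannot capture by itself.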
to:NB  to_read  model_selection  ensemble_methods  self-fulfilling_prophecies  large_deviations  statistics 
6 weeks ago
What Facebook Did to American Democracy - The Atlantic
This is very good. Some (small) points of critique:
(1) Zeynep Tufekci deserves much more than a parenthetical name-drop on these issues.
(2) The initial 2012 experiment by Fowler et al. suffers from a very serious design flaw, which means it confounds _being exposed to a lot of social influence via Facebook_ with _being the kind of person who has many Facebook ties_. I am not aware of subsequent experiments which correct the flaw, though it could be done.
facebook  social_media  networked_life  us_politics  re:democratic_cognition  via:?  have_read 
6 weeks ago