[1406.0112] Marcus versus Stratonovich for Systems with Jump Noise
The famous Itô-Stratonovich dilemma arises when one examines a dynamical system with multiplicative white noise. In the physics literature, this dilemma is often resolved in favour of the Stratonovich prescription because of its two characteristic properties valid for systems driven by Brownian motion: (i) it allows physicists to treat stochastic integrals in the same way as conventional integrals, and (ii) it appears naturally as the result of a small-correlation-time limit procedure. On the other hand, the Marcus prescription [IEEE Trans. Inform. Theory 24, 164 (1978); Stochastics 4, 223 (1981)] should be used to retain (i) and (ii) for systems driven by a Poisson process, Lévy flights or more general jump processes. In the present communication we give an in-depth comparison of the Itô, Stratonovich, and Marcus equations for systems with multiplicative jump noise. Using the examples of a real-valued linear system and a complex oscillator with noisy frequency (the Kubo-Anderson oscillator), we compare solutions obtained with the three prescriptions.
probability  stochastic_processes  jump-diffusion
4 days ago
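A minimal numerical illustration of the Brownian-motion half of the dilemma discussed above (my sketch, not from the paper): for the linear SDE dX = μX dt + σX dW, the Euler-Maruyama scheme converges to the Itô solution while a Heun predictor-corrector converges to the Stratonovich one, and the two exact solutions differ by the σ²/2 drift correction.

```python
import numpy as np

# For dX = mu*X dt + sigma*X dW the two interpretations give different
# exact solutions along the same Brownian path W:
#   Ito:          X_T = X0 * exp((mu - sigma^2/2) T + sigma W_T)
#   Stratonovich: X_T = X0 * exp(mu T + sigma W_T)
rng = np.random.default_rng(0)
mu, sigma, X0, T, n = 0.5, 0.8, 1.0, 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = dW.cumsum()

x_ito, x_str = X0, X0
for k in range(n):
    # Euler-Maruyama -> Ito solution
    x_ito = x_ito + mu * x_ito * dt + sigma * x_ito * dW[k]
    # Heun predictor-corrector -> Stratonovich solution
    pred = x_str + mu * x_str * dt + sigma * x_str * dW[k]
    x_str = (x_str + 0.5 * mu * (x_str + pred) * dt
                   + 0.5 * sigma * (x_str + pred) * dW[k])

exact_ito = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W[-1])
exact_str = X0 * np.exp(mu * T + sigma * W[-1])
```

Running both schemes on the same noise increments makes the prescription dependence concrete: the schemes differ only in how the diffusion term is evaluated, yet they converge to different solutions.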
[1710.05787] Information geometry and the renormalization group
We use information geometry, in which the local distance between models measures their distinguishability from data, to quantify the flow of information under the renormalization group. We show that information about relevant parameters is preserved, with distances along relevant directions maintained under flow. By contrast, irrelevant parameters become less distinguishable under the flow, with distances along irrelevant directions contracting according to renormalization group exponents. We develop a covariant formalism to understand the contraction of the model manifold. We then apply our tools to understand the emergence of the diffusion equation and more general statistical systems described by a free energy. Our results give an information-theoretic justification of universality in terms of the flow of the model manifold under coarse graining.
information_theory  renormalization  differential_geometry  statistical_mechanics  phase_transition  james.sethna
8 days ago
[1805.12316] Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
We present a probabilistic framework for studying adversarial attacks on discrete data. Based on this framework, we derive a perturbation-based method, Greedy Attack, and a scalable learning-based method, Gumbel Attack, that illustrate various tradeoffs in the design of attacks. We demonstrate the effectiveness of these methods using both quantitative metrics and human evaluation on various state-of-the-art models for text classification, including a word-based CNN, a character-based CNN and an LSTM. As an example of our results, we show that the accuracy of character-based convolutional networks drops to the level of random selection by modifying only five characters through Greedy Attack.
9 days ago
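The greedy idea above can be sketched in a few lines against a stand-in scorer (the keyword-counting "classifier" below is entirely my toy, not one of the paper's models): at each step, try every single-character substitution and keep the one that most reduces the model's score.

```python
# Hypothetical toy scorer standing in for a real model's class score.
def score(text):
    return sum(text.count(w) for w in ("good", "great", "fine"))

def greedy_attack(text, budget=5, alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Greedily change up to `budget` characters to minimize score(text)."""
    text = list(text)
    for _ in range(budget):
        best = (score("".join(text)), None, None)
        for i, old in enumerate(text):
            for c in alphabet:
                if c == old:
                    continue
                text[i] = c
                s = score("".join(text))
                if s < best[0]:
                    best = (s, i, c)
            text[i] = old          # undo the trial substitution
        if best[1] is None:        # no single change lowers the score; stop
            break
        text[best[1]] = best[2]
    return "".join(text)

adv = greedy_attack("a good and great and fine movie")
```

With this toy scorer, three single-character edits suffice to drive the score to zero, mirroring the paper's observation that very few character changes can defeat character-level models.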
[1605.00316] Directional Statistics in Machine Learning: a Brief Review
The modern data analyst must cope with data encoded in various forms: vectors, matrices, strings, graphs, and more. Consequently, statistical and machine learning models tailored to different data encodings are important. We focus on data encoded as normalized vectors, so that their "direction" is more important than their magnitude. Specifically, we consider high-dimensional vectors that lie either on the surface of the unit hypersphere or on the real projective plane. For such data, we briefly review common mathematical models prevalent in machine learning, while also outlining some technical aspects, software, applications, and open mathematical challenges.
review  statistics  machine_learning  differential_geometry
9 days ago
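A small sketch of the directional-statistics workhorse mentioned above (standard estimators, not code from the review; the sampling scheme below is a crude stand-in for a proper von Mises-Fisher sampler): the maximum-likelihood mean direction is the normalized sample mean, and Banerjee et al.'s approximation κ ≈ r(d - r²)/(1 - r²) estimates concentration from the mean resultant length r.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 5000
true_mu = np.array([1.0, 0.0, 0.0])

# crude vMF-like sample: Gaussian bump around true_mu, projected to the sphere
x = true_mu + 0.3 * rng.normal(size=(n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)

resultant = x.mean(axis=0)
r = np.linalg.norm(resultant)            # mean resultant length in (0, 1)
mu_hat = resultant / r                   # MLE of the mean direction
kappa_hat = r * (d - r**2) / (1 - r**2)  # approximate concentration
```

The point of the example is the geometry: the estimators live on the sphere, so the "average" must be renormalized rather than used directly.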
Complex Spreading Phenomena in Social Systems - Influence and Contagion in Real-World Social Networks | Sune Lehmann | Springer
This text is about spreading of information and influence in complex networks. Although previously considered similar and modeled in parallel approaches, there is now experimental evidence that epidemic and social spreading work in subtly different ways. While previously explored through modeling, there is currently an explosion of work on revealing the mechanisms underlying complex contagion based on big data and data-driven approaches.

This volume consists of four parts. Part 1 is an Introduction, providing an accessible summary of the state of the art. Part 2 provides an overview of the central theoretical developments in the field. Part 3 describes the empirical work on observing spreading processes in real-world networks. Finally, Part 4 goes into detail with recent and exciting new developments: dedicated studies designed to measure specific aspects of the spreading processes, often using randomized control trials to isolate the network effect from confounders, such as homophily.

Each contribution is authored by leading experts in the field. This volume, though based on technical selections of the most important results on complex spreading, remains quite accessible to the newly interested. The main benefit to the reader is that the topics are carefully structured to take the novice to the level of expert on the topic of social spreading processes. This book will be of great importance to a wide field: from researchers in physics, computer science, and sociology to professionals in public policy and public health.

https://socialcontagionbook.github.io/
networks  epidemics  contagion  social_networks  teaching
11 days ago
[1804.04622] Causal Inference via Kernel Deviance Measures
Discovering the causal structure among a set of variables is a fundamental problem in many areas of science. In this paper, we propose Kernel Conditional Deviance for Causal Inference (KCDC) a fully nonparametric causal discovery method based on purely observational data. From a novel interpretation of the notion of asymmetry between cause and effect, we derive a corresponding asymmetry measure using the framework of reproducing kernel Hilbert spaces. Based on this, we propose three decision rules for causal discovery. We demonstrate the wide applicability of our method across a range of diverse synthetic datasets. Furthermore, we test our method on real-world time series data and the real-world benchmark dataset Tubingen Cause-Effect Pairs where we outperform existing state-of-the-art methods.
causal_inference  kernel  machine_learning
15 days ago
Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review
The pathway from evidence generation to consumption contains many steps which can lead to overstatement or misinformation. The proliferation of internet-based health news may encourage selection of media and academic research articles that overstate strength of causal inference. We investigated the state of causal inference in health research as it appears at the end of the pathway, at the point of social media consumption.
causal_inference  sociology_of_science  critique  meta-analysis  media_studies  epidemiology_of_representations
17 days ago
[1805.10204] Adversarial examples from computational constraints
Why are classifiers in high dimension vulnerable to "adversarial" perturbations? We show that it is likely not due to information theoretic limitations, but rather it could be due to computational constraints.
First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give a particular classification task where learning a robust classifier is computationally intractable. More precisely we construct a binary classification task in high dimensional space which is (i) information theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet is not efficiently robustly learnable, even for small perturbations, by any algorithm in the statistical query (SQ) model. This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.
21 days ago
Freedom Rising: Human Empowerment and the Quest for Emancipation | Comparative politics | Cambridge University Press
This book presents a comprehensive theory of why human freedom gave way to increasing oppression since the invention of states – and why this trend began to reverse itself more recently, leading to a rapid expansion of universal freedoms and democracy. Drawing on a massive body of evidence, the author tests various explanations of the rise of freedom, providing convincing support of a well-reasoned theory of emancipation. The study demonstrates multiple trends toward human empowerment, which converge to give people control over their lives. Most important among these trends is the spread of “emancipative values,” which emphasize free choice and equal opportunities. The author identifies the desire for emancipation as the origin of the human empowerment trend and shows when and why this desire grows strong; why it is the source of democracy; and how it vitalizes civil society, feeds humanitarian norms, enhances happiness, and helps redirect modern civilization toward sustainable development.
book  history  comparative  political_science  development_economics  the_civilizing_process  democracy  debates
21 days ago
On the history of the transportation and maximum flow problems
We review two papers that are of historical interest for combinatorial optimization: an article of A.N. Tolstoi from 1930, in which the transportation problem is studied and a negative cycle criterion is developed and applied to solve a (for that time) large-scale (10×68) transportation problem to optimality; and an, until recently secret, RAND report of T.E. Harris and F.S. Ross from 1955, that Ford and Fulkerson mention as motivation to study the maximum flow problem. The papers have in common that they both apply their methods to the Soviet railway network.
networks  combinatorics  optimization  algorithms
25 days ago
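The maximum flow problem that the Harris-Ross report motivated is now textbook material; a compact Edmonds-Karp variant (BFS augmenting paths, my sketch, not anything from the reviewed papers) fits in a few dozen lines:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict-of-dicts; returns the max s-t flow."""
    # residual capacities, with reverse edges initialized to 0
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, find its bottleneck, and push flow along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

caps = {"s": {"a": 10, "b": 5}, "a": {"b": 15, "t": 10}, "b": {"t": 10}, "t": {}}
```

On the tiny network above the algorithm saturates both source edges, giving a maximum flow of 15.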
Algorithmic Fairness
Concerns that algorithms may discriminate against certain groups have led to numerous efforts to 'blind' the algorithm to race. We argue that this intuitive perspective is misleading and may do harm. Our primary result is exceedingly simple, yet often overlooked. A preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (e.g., different threshold for different groups) but the function itself should not change. We show in an empirical example for college admissions that the inclusion of variables such as race can increase both equity and efficiency.
econometrics  algorithms  ethics  machine_learning  sendhil.mullainathan
4 weeks ago
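The paper's central point — one prediction function for everyone, with equity preferences entering only in how its output is used — can be illustrated with group-specific thresholds (all numbers below are my illustrative choices, not the paper's empirical example):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)            # 0/1 group membership
skill = rng.normal(0, 1, n)
score = skill + rng.normal(0, 0.5, n)    # SAME estimator applied to both groups

# Equity preferences change the decision rule, not the prediction function:
thresholds = {0: 1.0, 1: 0.5}
admit = score > np.where(group == 1, thresholds[1], thresholds[0])

admit_rate_0 = admit[group == 0].mean()
admit_rate_1 = admit[group == 1].mean()
```

Lowering the threshold for one group raises its admission rate without refitting or "blinding" the predictor, which is exactly the separation the abstract argues for.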
Internal Colonialism, Core-Periphery Contrasts and Devolution: An Integrative Comment | JSTOR
The idea of internal colonialism is presented as a framework for examining regional deprivation, especially in distinct cultural environments, and is considered in the light of the devolution debate.
economics  political_science  networks  economic_geography  economic_sociology  teaching
5 weeks ago
Optimization Methods for Large-Scale Machine Learning | SIAM Review | Vol. 60, No. 2 | Society for Industrial and Applied Mathematics
This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
review  optimization  machine_learning
5 weeks ago
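The "straightforward, yet versatile SG algorithm" at the heart of the review is easy to sketch on least squares (my minimal version with a diminishing step size, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = np.arange(1.0, d + 1.0)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
step = 0.05
for t in range(20_000):
    i = rng.integers(n)                       # sample one data point
    grad = (X[i] @ w - y[i]) * X[i]           # gradient of 0.5*(x_i.w - y_i)^2
    w -= step / (1 + t / 2000) * grad         # diminishing step size

err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

Each step costs O(d) regardless of n, which is the property that makes SG the default in the large-scale setting the review describes; the diminishing step size controls the gradient noise that full-batch methods do not have.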
Configuring Random Graph Models with Fixed Degree Sequences | SIAM Review | Vol. 60, No. 2 | Society for Industrial and Applied Mathematics
Random graph null models have found widespread application in diverse research communities analyzing network datasets, including social, information, and economic networks, as well as food webs, protein-protein interactions, and neuronal networks. The most popular random graph null models, called configuration models, are defined as uniform distributions over a space of graphs with a fixed degree sequence. Commonly, properties of an empirical network are compared to properties of an ensemble of graphs from a configuration model in order to quantify whether empirical network properties are meaningful or whether they are instead a common consequence of the particular degree sequence. In this work we study the subtle but important decisions underlying the specification of a configuration model, and we investigate the role these choices play in graph sampling procedures and a suite of applications. We place particular emphasis on the importance of specifying the appropriate graph labeling---stub-labeled or vertex-labeled---under which to consider a null model, a choice that closely connects the study of random graphs to the study of random contingency tables. We show that the choice of graph labeling is inconsequential for studies of simple graphs, but can have a significant impact on analyses of multigraphs or graphs with self-loops. The importance of these choices is demonstrated through a series of three in-depth vignettes, analyzing three different network datasets under many different configuration models and observing substantial differences in study conclusions under different models. We argue that in each case, only one of the possible configuration models is appropriate. While our work focuses on undirected static networks, it aims to guide the study of directed networks, dynamic networks, and all other network contexts that are suitably studied through the lens of random graph null models.
networks  review  simulation
5 weeks ago
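The stub-labeled configuration model discussed above has a one-screen construction (the standard stub-matching procedure, not the authors' code): attach deg(v) stubs to each vertex, shuffle, and pair them up. As the paper stresses, this can produce self-loops and multi-edges, which is precisely where the labeling choices start to matter.

```python
import random

def stub_matching(degrees, seed=0):
    """Sample a stub-labeled configuration-model (multi)graph."""
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    rng = random.Random(seed)
    rng.shuffle(stubs)
    # pair consecutive stubs; may yield self-loops and repeated edges
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

degrees = [3, 3, 2, 2, 1, 1]
edges = stub_matching(degrees)

# stub matching preserves the degree sequence exactly
realized = [0] * len(degrees)
for u, v in edges:
    realized[u] += 1
    realized[v] += 1
```

Uniformity here is over stub-labeled pairings; conditioning on simple graphs, or switching to vertex-labeled spaces, changes the distribution, which is the subtlety the vignettes in the paper explore.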
Subjective randomness as statistical inference - ScienceDirect
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences – which have been the focus of much of the previous work on subjective randomness – but also to binary matrices and spatial clustering.

https://cocosci.berkeley.edu/papers/RandomnessAsInference.pdf
bayesian  cognition  statistics  machine_learning  joshua.tenenbaum
6 weeks ago
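The inference view above can be made concrete with a toy two-hypothesis version (my simplification, in the spirit of the paper rather than its models): compare the likelihood of a binary sequence under a fair-coin "random" process against a simple "repeat the last symbol" regularity.

```python
from math import log2

def randomness_log_odds(seq, p_repeat=0.9):
    """log2 P(seq | random) - log2 P(seq | repeat-regularity)."""
    ll_random = -len(seq)                     # (1/2)^n under a fair coin
    ll_regular = -1.0                         # first symbol is 1/2
    for prev, cur in zip(seq, seq[1:]):
        ll_regular += log2(p_repeat if cur == prev else 1 - p_repeat)
    return ll_random - ll_regular             # > 0 means "looks random"

looks_random = randomness_log_odds("01101000101")
looks_regular = randomness_log_odds("11111111111")
```

Eight heads in a row gets strongly negative log-odds — the evidence favors a regular generating process — matching the intuition the abstract opens with; richer hypothesis sets (simple automata, restricted distribution families) play this role in the paper.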
A critical period for second language acquisition: Evidence from 2/3 million English speakers
Children learn language more easily than adults, though when and why this ability declines have been obscure for both empirical reasons (underpowered studies) and conceptual reasons (measuring the ultimate attainment of learners who started at different ages cannot by itself reveal changes in underlying learning ability). We address both limitations with a dataset of unprecedented size (669,498 native and non-native English speakers) and a computational model that estimates the trajectory of underlying learning ability by disentangling current age, age at first exposure, and years of experience. This allows us to provide the first direct estimate of how grammar-learning ability changes with age, finding that it is preserved almost to the crux of adulthood (17.4 years old) and then declines steadily. This finding held not only for “difficult” syntactic phenomena but also for “easy” syntactic phenomena that are normally mastered early in acquisition. The results support the existence of a sharply-defined critical period for language acquisition, but the age of offset is much later than previously speculated. The size of the dataset also provides novel insight into several other outstanding questions in language acquisition.
psychology  linguistics  cognition  cognitive_science  joshua.tenenbaum  steven.pinker
6 weeks ago
[1708.06401] A Tutorial on Hawkes Processes for Events in Social Media
This chapter provides an accessible introduction to point processes, and especially Hawkes processes, for modeling discrete, inter-dependent events over continuous time. We start by reviewing the definitions and the key concepts in point processes. We then introduce the Hawkes process, its event intensity function, as well as schemes for event simulation and parameter estimation. We also describe a practical example drawn from social media data - we show how to model retweet cascades using a Hawkes self-exciting process. We present a design of the memory kernel, and results on estimating parameters and predicting popularity. The code and sample event data are available as an online appendix.
point_process  tutorial  networks  dynamics  teaching
7 weeks ago
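The event-simulation scheme the tutorial covers is classically Ogata's thinning algorithm; a minimal version with the exponential kernel φ(t) = αβe^(-βt) looks like this (standard algorithm, my implementation):

```python
import random
from math import exp

def simulate_hawkes(mu, alpha, beta, T, seed=42):
    """Ogata thinning for a Hawkes process with exponential kernel."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < T:
        # intensity is non-increasing until the next event, so the current
        # value is a valid upper bound for the thinning step
        lam_bar = mu + sum(alpha * beta * exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)          # candidate from bounding process
        if t >= T:
            break
        lam_t = mu + sum(alpha * beta * exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:    # accept w.p. lam(t)/lam_bar
            events.append(t)
    return events

events = simulate_hawkes(mu=0.5, alpha=0.5, beta=1.0, T=50.0)
```

Here α is the branching ratio (the kernel integrates to α), so α < 1 keeps the process stationary with long-run rate μ/(1-α) — the quantity that drives the retweet-cascade popularity predictions in the tutorial.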
[1406.6766] Smoothness of marginal log-linear parameterizations
We provide results demonstrating the smoothness of some marginal log-linear parameterizations for distributions on multi-way contingency tables. First we give an analytical relationship between log-linear parameters defined within different margins, and use this to prove that some parameterizations are equivalent to ones already known to be smooth. Second we construct an iterative method for recovering joint probability distributions from marginal log-linear pieces, and prove its correctness in particular cases. Finally we use Markov chain theory to prove that certain cyclic conditional parameterizations are also smooth. These results are applied to show that certain conditional independence models are curved exponential families.
log-linear_model  graphical_models  differential_geometry
8 weeks ago
[1501.02103] Margins of discrete Bayesian networks
Bayesian network models with latent variables are widely used in statistics and machine learning. In this paper we provide a complete algebraic characterization of Bayesian network models with latent variables when the observed variables are discrete and no assumption is made about the state-space of the latent variables. We show that it is algebraically equivalent to the so-called nested Markov model, meaning that the two are the same up to inequality constraints on the joint probabilities. In particular these two models have the same dimension. The nested Markov model is therefore the best possible description of the latent variable model that avoids consideration of inequalities, which are extremely complicated in general. A consequence of this is that the constraint finding algorithm of Tian and Pearl (UAI 2002, pp519-527) is complete for finding equality constraints.
Latent variable models suffer from difficulties of unidentifiable parameters and non-regular asymptotics; in contrast the nested Markov model is fully identifiable, represents a curved exponential family of known dimension, and can easily be fitted using an explicit parameterization.
bayesian_network  latent_variable  graphical_models  causal_inference
8 weeks ago
Phys. Rev. Lett. 120, 024102 (2018) - Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach
We demonstrate the effectiveness of using machine learning for model-free prediction of spatiotemporally chaotic systems of arbitrarily large spatial extent and attractor dimension purely from observations of the system’s past evolution. We present a parallel scheme with an example implementation based on the reservoir computing paradigm and demonstrate the scalability of our scheme using the Kuramoto-Sivashinsky equation as an example of a spatiotemporally chaotic system.

-- first, deep learning for detecting phase transitions; now, reservoir computing for predictions of spatiotemporal chaos. Did the physicists of yore miss something, or is there something not right with these works?
chaos  nonlinear_dynamics  prediction  machine_learning
8 weeks ago
[1710.07313] Using Machine Learning to Replicate Chaotic Attractors and Calculate Lyapunov Exponents from Data
We use recent advances in the machine learning area known as 'reservoir computing' to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a 'reservoir'. After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the 'output weights'. The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's 'climate'. Since the reservoir equations and output weights are known, we can compute derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system, and the Kuramoto-Sivashinsky (KS) equation. In particular, we use the Lorenz system to show that achieving climate reproduction may require tuning of the reservoir parameters. For the case of the KS equation, we note that as the system's spatial size is increased, the number of Lyapunov exponents increases, thus yielding a challenging test of our method, which we find the method successfully passes.
chaos  nonlinear_dynamics  prediction  machine_learning
8 weeks ago
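The reservoir-computing recipe behind both papers above is simple enough to sketch (my toy, vastly smaller than the papers' setups, fitting a sine rather than a chaotic system): a fixed random recurrent network is driven by the signal, and only a linear ridge-regression readout is trained.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 200, 2000
u = np.sin(0.1 * np.arange(T + 1))            # driving signal

W_in = rng.uniform(-0.5, 0.5, N)              # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # rescale to spectral radius 0.9

r = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    r = np.tanh(W @ r + W_in * u[t])          # reservoir update
    states[t] = r

# train only the linear readout: state at t -> u[t+1], after a washout
wash = 100
S, y = states[wash:], u[wash + 1 : T + 1]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
rmse = np.sqrt(np.mean((S @ W_out - y) ** 2))
```

The papers' twist is to then feed the readout's prediction back as input, turning the trained system into an autonomous surrogate whose ergodic properties (the "climate") can approximate those of the original dynamics.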
[1804.03665] An information-theoretic, all-scales approach to comparing networks
As network research becomes more sophisticated, it is more common than ever for researchers to find themselves not studying a single network but needing to analyze sets of networks. An important task when working with sets of networks is network comparison, developing a similarity or distance measure between networks so that meaningful comparisons can be drawn. The best means to accomplish this task remains an open area of research. Here we introduce a new measure to compare networks, the Portrait Divergence, that is mathematically principled, incorporates the topological characteristics of networks at all structural scales, and is general-purpose and applicable to all types of networks. An important feature of our measure that enables many of its useful properties is that it is based on a graph invariant, the network portrait. We test our measure on both synthetic graphs and real world networks taken from protein interaction data, neuroscience, and computational social science applications. The Portrait Divergence reveals important characteristics of multilayer and temporal networks extracted from data.
graph_theory  network_data_analysis  information_theory  network_comparison  two-sample  temporal_networks  networks
9 weeks ago
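A simplified stand-in for the comparison above (my toy: the actual Portrait Divergence compares the full portraits B_{l,k}, the counts of nodes with k others at shortest-path distance l, while this version compares only the aggregated path-length distributions via Jensen-Shannon divergence):

```python
from collections import deque
from math import log2

def bfs_dists(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dist_distribution(adj):
    """Distribution of shortest-path lengths over all ordered node pairs."""
    counts = {}
    for s in adj:
        for v, d in bfs_dists(adj, s).items():
            if v != s:
                counts[d] = counts.get(d, 0) + 1
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def js_div(p, q):
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    def kl(a, b):
        return sum(a[k] * log2(a[k] / b[k]) for k in a)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph on 4 nodes
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # star graph on 4 nodes
d_same = js_div(dist_distribution(path4), dist_distribution(path4))
d_diff = js_div(dist_distribution(path4), dist_distribution(star4))
```

Even this reduced version distinguishes a path from a star of the same size, because it aggregates structure at all distance scales rather than relying on one summary statistic.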
[1508.01303] Modern temporal network theory: A colloquium
The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that there are not so many methods available, partly because temporal networks are a relatively young field, partly because it is more difficult to develop such methods than for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions, rumors, in social networks; information packets in computer networks; various types of signaling in biology, and more. We also discuss future directions.
temporal_networks  review  networks  teaching
9 weeks ago
Current CRISPR gene drive systems are likely to be highly invasive in wild populations | bioRxiv
Recent reports have suggested that CRISPR-based gene drives are unlikely to invade wild populations due to drive-resistant alleles that prevent cutting. Here we develop mathematical models based on existing empirical data to explicitly test this assumption. We show that although resistance prevents drive systems from spreading to fixation in large populations, even the least effective systems reported to date are highly invasive. Releasing a small number of organisms often causes invasion of the local population, followed by invasion of additional populations connected by very low gene flow rates. Examining the effects of mitigating factors including standing variation, inbreeding, and family size revealed that none of these prevent invasion in realistic scenarios. Highly effective drive systems are predicted to be even more invasive. Contrary to the National Academies report on gene drive, our results suggest that standard drive systems should not be developed nor field-tested in regions harboring the host organism.

Pop version here
Models from this paper
gene_editing  evolutionary_biology  population_biology  technology  risk_assessment  quanta_mag
9 weeks ago
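A toy deterministic recursion in the spirit of the paper's models (entirely my simplification: no resistance alleles, no population structure; parameters are illustrative) shows why even a costly drive invades from low frequency — heterozygotes are converted to drive homozygotes with efficiency c before reproduction:

```python
def drive_frequency(p0, c=0.8, s=0.3, generations=50):
    """Drive allele frequency under conversion efficiency c, homozygote cost s."""
    p = p0
    traj = [p]
    for _ in range(generations):
        q = 1 - p
        # random-mating genotype frequencies after germline conversion
        DD = p * p + 2 * p * q * c       # converted heterozygotes join DD
        Dd = 2 * p * q * (1 - c)
        dd = q * q
        w = DD * (1 - s) + Dd + dd       # mean fitness; DD pays cost s
        p = (DD * (1 - s) + 0.5 * Dd) / w
        traj.append(p)
    return traj

traj = drive_frequency(p0=0.05)
```

Starting from a 5% release, the drive allele climbs toward fixation despite a 30% homozygote fitness cost; the paper's point is that adding realistic resistance caps the frequency below fixation but does not prevent this kind of invasion.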
Covering Ground: Movement Patterns and Random Walk Behavior in Aquilonastra anomala Sea Stars | The Biological Bulletin: Vol 231, No 2
The paths animals take while moving through their environments affect their likelihood of encountering food and other resources; thus, models of foraging behavior abound. To collect movement data appropriate for comparison with these models, we used time-lapse photography to track movements of a small, hardy, and easy-to-obtain organism, Aquilonastra anomala sea stars. We recorded the sea stars in a tank over many hours, with and without a food cue. With food present, they covered less distance, as predicted by theory; this strategy would allow them to remain near food. We then compared the paths of the sea stars to three common models of animal movement: Brownian motion, Lévy walks, and correlated random walks; we found that the sea stars’ movements most closely resembled a correlated random walk. Additionally, we compared the search performance of models of Brownian motion, a Lévy walk, and a correlated random walk to that of a model based on the sea stars’ movements. We found that the behavior of the modeled sea star walk was similar to that of the modeled correlated random walk and the Brownian motion model, but that the sea star walk was slightly more likely than the other walks to find targets at intermediate distances. While organisms are unlikely to follow an idealized random walk in all details, our data suggest that comparing the effectiveness of an organism’s paths to those from theory can give insight into the organism’s actual movement strategy. Finally, automated optical tracking of invertebrates proved feasible, and A. anomala was revealed to be a tractable, 2D-movement study system.
models_of_behavior  markov_models  data_analysis
9 weeks ago
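The models compared above differ in how headings decorrelate; a minimal simulation (my sketch, not the study's code) contrasts a Brownian-like walk with a correlated random walk, whose persistent headings carry it farther for the same number of unit-length steps:

```python
import numpy as np

rng = np.random.default_rng(11)

def mean_net_displacement(n_steps, turn_sd, n_walkers=500):
    # turn_sd large -> headings decorrelate immediately (Brownian-like);
    # turn_sd small -> correlated random walk with persistent heading
    turns = rng.normal(0, turn_sd, (n_walkers, n_steps))
    headings = np.cumsum(turns, axis=1)
    x = np.cos(headings).sum(axis=1)
    y = np.sin(headings).sum(axis=1)
    return np.hypot(x, y).mean()      # average over independent walkers

disp_brownian = mean_net_displacement(200, turn_sd=np.pi)
disp_crw = mean_net_displacement(200, turn_sd=0.2)
```

Fitting the turning-angle distribution of tracked paths against such simulations is one way to classify observed movement as Brownian, Lévy, or correlated random walk, as the study does for the sea stars.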
[1803.08823] A high-bias, low-variance introduction to Machine Learning for physicists
Machine Learning (ML) is one of the most exciting and dynamic areas of modern research and application. The purpose of this review is to provide an introduction to the core concepts and tools of machine learning in a manner easily understood and intuitive to physicists. The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, and generalization before moving on to more advanced topics in both supervised and unsupervised learning. Topics covered in the review include ensemble models, deep learning and neural networks, clustering and data visualization, energy-based models (including MaxEnt models and Restricted Boltzmann Machines), and variational methods. Throughout, we emphasize the many natural connections between ML and statistical physics. A notable aspect of the review is the use of Python notebooks to introduce modern ML/statistical packages to readers using physics-inspired datasets (the Ising Model and Monte-Carlo simulations of supersymmetric decays of proton-proton collisions). We conclude with an extended outlook discussing possible uses of machine learning for furthering our understanding of the physical world as well as open problems in ML where physicists may be able to contribute. (Notebooks are available at this https URL )
tutorial  review  machine_learning  deep_learning  statistical_mechanics  physics  python  teaching
9 weeks ago
[1506.08237] Dyadic data analysis with amen
Dyadic data on pairs of objects, such as relational or social network data, often exhibit strong statistical dependencies. Certain types of second-order dependencies, such as degree heterogeneity and reciprocity, can be well-represented with additive random effects models. Higher-order dependencies, such as transitivity and stochastic equivalence, can often be represented with multiplicative effects. The "amen" package for the R statistical computing environment provides estimation and inference for a class of additive and multiplicative random effects models for ordinal, continuous, binary and other types of dyadic data. The package also provides methods for missing, censored and fixed-rank nomination data, as well as longitudinal dyadic data. This tutorial illustrates the "amen" package via example statistical analyses of several of these different data types.
network_data_analysis  time_series  tensor_regression  networks  social_networks  r  packages
10 weeks ago
[1611.00460] Inferential Approaches for Network Analyses: AMEN for Latent Factor Models
There is growing interest in the study of political networks. Network analysis allows scholars to move away from focusing on individual observations to the interrelationships among observations. Many network approaches have been developed in descriptive fashion, but attention to inferential approaches to network analysis has been growing. We introduce a new approach that models interdependencies among observations using additive and multiplicative effects (AME). This approach can be applied to binary, ordinal, and continuous network data, and provides a set of tools for inference from longitudinal networks as well. We review this approach and compare it to those examined in the recent survey by Cranmer et al. (2016). The AME approach is shown to be a) easy to implement; b) interpretable in a general linear model framework; c) computationally straightforward; d) not prone to degeneracy; e) able to capture 1st-, 2nd-, and 3rd-order network dependencies; and f) able to notably outperform multiple regression quadratic assignment procedures, exponential random graph models, and alternative latent space approaches on a variety of metrics and in an out-of-sample context. In summary, AME offers a straightforward way to undertake nuanced, principled inferential network analysis for a wide range of social science questions.
network_data_analysis  time_series  tensor_regression  networks  social_networks
10 weeks ago
[1712.02497] Multiplicative Coevolution Regression Models for Longitudinal Networks and Nodal Attributes
We introduce a simple and extendable coevolution model for the analysis of longitudinal network and nodal attribute data. The model features parameters that describe three phenomena: homophily, contagion and autocorrelation of the network and nodal attribute process. Homophily here describes how changes to the network may be associated with between-node similarities in terms of their nodal attributes. Contagion refers to how node-level attributes may change depending on the network. The model we present is based upon a pair of intertwined autoregressive processes. We obtain least-squares parameter estimates for continuous-valued fully-observed network and attribute data. We also provide methods for Bayesian inference in several other cases, including ordinal network and attribute data, and models involving latent nodal attributes. These model extensions are applied to an analysis of international relations data and to data from a study of teen delinquency and friendship networks.
network_data_analysis  time_series  tensor_regression  networks  social_networks
10 weeks ago
[1412.0048] Multilinear tensor regression for longitudinal relational data
A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point are parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity.
network_data_analysis  time_series  tensor_regression  networks  social_networks
10 weeks ago
[1803.09123] Equation Embeddings
We present an unsupervised approach for discovering semantic representations of mathematical equations. Equations are challenging to analyze because each is unique, or nearly unique. Our method, which we call equation embeddings, finds good representations of equations by using the representations of their surrounding words. We used equation embeddings to analyze four collections of scientific articles from the arXiv, covering four computer science domains (NLP, IR, AI, and ML) and ∼98.5k equations. Quantitatively, we found that equation embeddings provide better models when compared to existing word embedding approaches. Qualitatively, we found that equation embeddings provide coherent semantic representations of equations and can capture semantic similarity to other equations and to words.

--Seems like a good first step, but it feels like a reinvention of some of the work by Simon and Langley from an M.I. Jordan perspective.
machine_learning  science_of_science  scientometry  ?  natural_language_processing  david.blei
10 weeks ago
Technical Perspective: Expressive Probabilistic Models and Scalable Method of Moments | April 2018 | Communications of the ACM
Across diverse fields, investigators face problems and opportunities involving data. Scientists, scholars, engineers, and other analysts seek new methods to ingest data, extract salient patterns, and then use the results for prediction and understanding. These methods come from machine learning (ML), which is quickly becoming core to modern technological systems, modern scientific workflow, and modern approaches to understanding data.

The classical approach to solving a problem with ML follows the "cookbook" approach, one where the scientist shoehorns her data and problem to match the inputs and outputs of a reliable ML method. This strategy has been successful in many domains—examples include spam filtering, speech recognition, and movie recommendation—but it can only take us so far. The cookbook focuses on prediction at the expense of explanation, and thus values generic and flexible methods. In contrast, many modern ML applications require interpretable methods that both form good predictions and suggest good reasons for them. Further, as data becomes more complex and ML problems become more varied, it becomes more difficult to shoehorn our diverse problems into a simple ML set-up.
david.blei  automation  explanation  machine_learning  artificial_intelligence
10 weeks ago
[1706.09072] Influence Networks in International Relations
Measuring influence and determining what drives it are persistent questions in political science and in network analysis more generally. Herein we focus on the domain of international relations. Our major substantive question is: How can we determine what characteristics make an actor influential? To address the topic of influence, we build on a multilinear tensor regression framework (MLTR) that captures influence relationships using a tensor generalization of a vector autoregression model. Influence relationships in that approach are captured in a pair of n x n matrices and provide measurements of how the network actions of one actor may influence the future actions of another. A limitation of the MLTR and earlier latent space approaches is that there are no direct mechanisms through which to explain why a certain actor is more or less influential than others. Our new framework, social influence regression, provides a way to statistically model the influence of one actor on another as a function of characteristics of the actors. Thus we can move beyond just estimating that an actor influences another to understanding why. To highlight the utility of this approach, we apply it to studying monthly-level conflictual events between countries as measured through the Integrated Crisis Early Warning System (ICEWS) event data project.

--Convert this to a class example or HW in a future Part II of this course?

-- Data available at Dataverse but requires some preparation. Involve others (JF,PG)?

https://doi.org/10.7910/DVN/28075

-- for students in political science and international relations and ...
political_science  international_affairs  networks  teaching  network_data_analysis
10 weeks ago
Empathy and well-being correlate with centrality in different social networks | PNAS
Individuals benefit from occupying central roles in social networks, but little is known about the psychological traits that predict centrality. Across four college freshman dorms (n = 193), we characterized individuals with a battery of personality questionnaires and also asked them to nominate dorm members with whom they had different types of relationships. This revealed several social networks within dorm communities with differing characteristics. In particular, additional data showed that networks varied in the degree to which nominations depend on (i) trust and (ii) shared fun and excitement. Networks more dependent upon trust were further defined by fewer connections than those more dependent on fun. Crucially, network and personality features interacted to predict individuals’ centrality: people high in well-being (i.e., life satisfaction and positive emotion) were central to networks characterized by fun, whereas people high in empathy were central to networks characterized by trust. Together, these findings provide network-based corroboration of psychological evidence that well-being is socially attractive, whereas empathy supports close relationships. More broadly, these data highlight how an individual’s personality relates to the roles that they play in sustaining their community.

--this one, Clauset et al hiring inequality, and Watts et al ATurk study for centrality discussion? (See also Newman and M.O.Jackson papers for theoretical discussions)
networks  teaching  network_data_analysis  matthew.jackson
10 weeks ago
A Network Formation Model Based on Subgraphs by Arun G. Chandrasekhar, Matthew O. Jackson :: SSRN
We develop a new class of random-graph models for the statistical estimation of network formation that allow for substantial correlation in links. Various subgraphs (e.g., links, triangles, cliques, stars) are generated and their union results in a network. We provide estimation techniques for recovering the rates at which the underlying subgraphs were formed. We illustrate the models via a series of applications, including testing for incentives to form cross-caste relationships in rural India, testing whether network structure is used to enforce risk-sharing, and testing whether networks change in response to a community's exposure to microcredit; and we show that these models significantly outperform stochastic block models in matching observed network characteristics. We also establish asymptotic properties of the models and various estimators, which requires proving a new Central Limit Theorem for correlated random variables.
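-- The generative side of the model is straightforward to simulate: draw each candidate subgraph (here just links and triangles) independently at its own rate and take the union of the resulting edges. A minimal sketch (the rates and the `sample_sugm` name are illustrative, not the authors' code):

```python
import itertools
import random

def sample_sugm(n, p_link, p_tri, seed=0):
    """Sketch of a subgraph generation model (SUGM): links and triangles
    form independently, and the observed network is the union of their edges."""
    rng = random.Random(seed)
    edges = set()
    # Each pair forms a direct link independently with probability p_link.
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p_link:
            edges.add((u, v))
    # Each triple forms a triangle independently with probability p_tri.
    for u, v, w in itertools.combinations(range(n), 3):
        if rng.random() < p_tri:
            edges.update({(u, v), (u, w), (v, w)})
    return edges

g = sample_sugm(30, p_link=0.02, p_tri=0.002)
# Edges are stored as unordered pairs with u < v.
assert all(u < v for u, v in g)
```

Even this toy version produces far more triangles at a given edge density than an Erdős–Rényi graph, which is the clustering the estimation techniques in the paper are built to recover.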
networks  dynamics  teaching  social_networks  matthew.jackson
10 weeks ago
[1405.0843] MuxViz: A Tool for Multilayer Analysis and Visualization of Networks
Multilayer relationships among entities and information about entities must be accompanied by the means to analyze, visualize, and obtain insights from such data. We present open-source software (muxViz) that contains a collection of algorithms for the analysis of multilayer networks, which are an important way to represent a large variety of complex systems throughout science and engineering. We demonstrate the ability of muxViz to analyze and interactively visualize multilayer data using empirical genetic, neuronal, and transportation networks. Our software is available at this https URL
networks  teaching  network_data_analysis  visualization  packages
10 weeks ago
[1505.06989] A Hitting Time Formula for the Discrete Green's Function
The discrete Green's function (without boundary) G is a pseudo-inverse of the combinatorial Laplace operator of a graph G=(V,E). We reveal the intimate connection between Green's function and the theory of exact stopping rules for random walks on graphs. We give an elementary formula for Green's function in terms of state-to-state hitting times of the underlying graph. Namely, G(i,j) = π_j ( ∑_{k∈V} π_k H(k,j) − H(i,j) ), where π_i is the stationary distribution at vertex i and H(i,j) is the expected hitting time for a random walk starting from vertex i to first reach vertex j. This formula also holds for the digraph Laplace operator.
The most important characteristics of a stopping rule are its exit frequencies, which are the expected number of exits of a given vertex before the rule halts the walk. We show that Green's function is, in fact, a matrix of exit frequencies plus a rank one matrix. In the undirected case, we derive spectral formulas for Green's function and for some mixing measures arising from stopping rules. Finally, we further explore the exit frequency matrix point-of-view, and discuss a natural generalization of Green's function for any distribution τ defined on the vertex set of the graph.
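-- A quick numerical check of the hitting-time formula (a sketch: π is taken as the walk's stationary distribution π_i = d_i/2m, and the matrix the formula builds is verified to act as a pseudo-inverse of the random-walk Laplacian I − P; the paper's exact normalization of the Laplace operator may differ):

```python
import numpy as np

# Small connected undirected graph: a 4-cycle with one chord.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = len(A)
deg = A.sum(axis=1)
pi = deg / deg.sum()          # stationary distribution of the random walk
P = A / deg[:, None]          # transition matrix of the walk

# Hitting times H(i, j): solve the first-step equations for each target j,
# (I - P restricted to the non-target states) h = 1, with H(j, j) = 0.
H = np.zeros((n, n))
for j in range(n):
    others = [i for i in range(n) if i != j]
    M = np.eye(n - 1) - P[np.ix_(others, others)]
    H[others, j] = np.linalg.solve(M, np.ones(n - 1))

# Green's function via the hitting-time formula from the abstract:
# G(i,j) = pi_j * (sum_k pi_k H(k,j) - H(i,j)).
G = pi[None, :] * (pi @ H - H)

# G acts as a pseudo-inverse of the random-walk Laplacian I - P:
# G (I - P) = I - 1 pi^T, and pi spans G's left null space.
assert np.allclose(G @ (np.eye(n) - P), np.eye(n) - np.outer(np.ones(n), pi))
assert np.allclose(pi @ G, np.zeros(n))
```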
networks  graph_theory  probability  random_walk  combinatorics
10 weeks ago
[1505.00297] Two-Dimensional Pursuit-Evasion in a Compact Domain with Piecewise Analytic Boundary
In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.
mathematics  combinatorics  differential_geometry  game_theory
10 weeks ago
Lifetime-preserving reference models for characterizing spreading dynamics on temporal networks | Scientific Reports
To study how a certain network feature affects processes occurring on a temporal network, one often compares properties of the original network against those of a randomized reference model that lacks the feature in question. The randomly permuted times (PT) reference model is widely used to probe how temporal features affect spreading dynamics on temporal networks. However, PT implicitly assumes that edges and nodes are continuously active during the network sampling period – an assumption that does not always hold in real networks. We systematically analyze a recently-proposed restriction of PT that preserves node lifetimes (PTN), and a similar restriction (PTE) that also preserves edge lifetimes. We use PT, PTN, and PTE to characterize spreading dynamics on (i) synthetic networks with heterogeneous edge lifespans and tunable burstiness, and (ii) four real-world networks, including two in which nodes enter and leave the network dynamically. We find that predictions of spreading speed can change considerably with the choice of reference model. Moreover, the degree of disparity in the predictions reflects the extent of node/edge turnover, highlighting the importance of using lifetime-preserving reference models when nodes or edges are not continuously present in the network.
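-- The PT baseline itself is one line of shuffling: keep who-contacts-whom fixed and permute the timestamps among events. A minimal sketch (the event format and the `pt_reference` name are illustrative, not from the paper's code):

```python
import random

def pt_reference(events, seed=0):
    """Permuted-times (PT) null model: shuffle timestamps across events,
    keeping the list of (u, v) contacts and the multiset of times fixed."""
    rng = random.Random(seed)
    times = [t for (_, _, t) in events]
    rng.shuffle(times)
    return [(u, v, t) for (u, v, _), t in zip(events, times)]

# Temporal contacts as (node_u, node_v, timestamp).
events = [(0, 1, 1), (1, 2, 3), (0, 2, 4), (2, 3, 7), (1, 3, 9)]
shuffled = pt_reference(events)

# PT preserves the static structure and the global activity pattern,
# but destroys temporal correlations such as burstiness.
assert sorted(t for (_, _, t) in shuffled) == sorted(t for (_, _, t) in events)
assert [(u, v) for (u, v, _) in shuffled] == [(u, v) for (u, v, _) in events]
```

The PTN and PTE restrictions add the constraint that permuted times stay within each node's (or edge's) observed lifetime, which can be enforced by restricting or rejecting permutations on top of this shuffle.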
networks  dynamics  epidemics  temporal_networks  teaching
10 weeks ago
Multiscale mixing patterns in networks | PNAS
Assortative mixing in networks is the tendency for nodes with the same attributes, or metadata, to link to each other. It is a property often found in social networks, manifesting as a higher tendency of links occurring between people of the same age, race, or political belief. Quantifying the level of assortativity or disassortativity (the preference of linking to nodes with different attributes) can shed light on the organization of complex networks. It is common practice to measure the level of assortativity according to the assortativity coefficient, or modularity in the case of categorical metadata. This global value is the average level of assortativity across the network and may not be a representative statistic when mixing patterns are heterogeneous. For example, a social network spanning the globe may exhibit local differences in mixing patterns as a consequence of differences in cultural norms. Here, we introduce an approach to localize this global measure so that we can describe the assortativity, across multiple scales, at the node level. Consequently, we are able to capture and qualitatively evaluate the distribution of mixing patterns in the network. We find that, for many real-world networks, the distribution of assortativity is skewed, overdispersed, and multimodal. Our method provides a clearer lens through which we can more closely examine mixing patterns in networks.
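-- For reference, the global coefficient that this paper localizes is Newman's categorical assortativity, r = (tr e − Σ_g a_g b_g)/(1 − Σ_g a_g b_g), computed from the mixing matrix e of edge-end attribute pairs. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def categorical_assortativity(edges, attr):
    """Newman's assortativity coefficient for categorical node attributes:
    r = (tr(e) - sum(e @ e)) / (1 - sum(e @ e)),
    where e[g, h] is the fraction of edge ends joining groups g and h."""
    groups = sorted({attr[u] for u, v in edges} | {attr[v] for u, v in edges})
    idx = {g: i for i, g in enumerate(groups)}
    e = np.zeros((len(groups), len(groups)))
    for u, v in edges:
        e[idx[attr[u]], idx[attr[v]]] += 1
        e[idx[attr[v]], idx[attr[u]]] += 1   # undirected: count both ends
    e /= e.sum()
    s = (e @ e).sum()                        # equals sum_g a_g * b_g
    return (np.trace(e) - s) / (1 - s)

# Perfectly assortative toy graph: two cliques, no cross-group edges.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
attr = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
assert abs(categorical_assortativity(edges, attr) - 1.0) < 1e-12
```

r is +1 for perfectly assortative mixing and negative for disassortative mixing; the paper's point is that this single global average can hide strongly heterogeneous, even multimodal, local mixing.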
networks  homophily  spatial_statistics  social_networks
10 weeks ago
[1803.09007] Quantifying Surveillance in the Networked Age: Node-based Intrusions and Group Privacy
From the "right to be left alone" to the "right to selective disclosure", privacy has long been thought of as the control individuals have over the information they share and reveal about themselves. However, in a world that is more connected than ever, the choices of the people we interact with increasingly affect our privacy. This forces us to rethink our definition of privacy. We here formalize and study, as local and global node- and edge-observability, Bloustein's concept of group privacy. We prove edge-observability to be independent of the graph structure, while node-observability depends only on the degree distribution of the graph. We show on synthetic datasets that, for attacks spanning several hops such as those implemented by social networks and current US laws, the presence of hubs increases node-observability while a high clustering coefficient decreases it, at fixed density. We then study the edge-observability of a large real-world mobile phone dataset over a month and show that, even under the restricted two-hops rule, compromising as little as 1% of the nodes leads to observing up to 46% of all communications in the network. More worrisome, we also show that on average 36% of each person's communications would be locally edge-observable under the same rule. Finally, we use real sensing data to show how people living in cities are vulnerable to distributed node-observability attacks. Using a smartphone app to compromise 1% of the population, an attacker could monitor the location of more than half of London's population. Taken together, our results show that the current individual-centric approach to privacy and data protection does not encompass the realities of modern life. This makes us---as a society---vulnerable to large-scale surveillance attacks which we need to develop protections against.
networks  privacy  networked_life  social_networks
11 weeks ago
We survey a variety of possible explications of the term “Individual Risk.” These in turn are based on a variety of interpretations of “Probability,” including classical, enumerative, frequency, formal, metaphysical, personal, propensity, chance and logical conceptions of probability, which we review and compare. We distinguish between “groupist” and “individualist” understandings of probability, and explore both “group to individual” and “individual to group” approaches to characterising individual risk. Although in the end that concept remains subtle and elusive, some pragmatic suggestions for progress are made.
philip.dawid  risk_assessment  ethics  machine_learning  statistics
11 weeks ago
probability of causation | Law, Probability and Risk | Oxford Academic
Many legal cases require decisions about causality, responsibility or blame, and these may be based on statistical data. However, causal inferences from such data are beset by subtle conceptual and practical difficulties, and in general it is, at best, possible to identify the ‘probability of causation’ as lying between certain empirically informed limits. These limits can be refined and improved if we can obtain additional information, from statistical or scientific data, relating to the internal workings of the causal processes. In this article we review and extend recent work in this area, where additional information may be available on covariate and/or mediating variables.
philip.dawid  law  causality
11 weeks ago
[1803.10637] Objective measures for sentinel surveillance in network epidemiology
The problem of optimizing sentinel surveillance in networks is to find the nodes where an emerging disease outbreak can be discovered early or reliably. Whether the emphasis should be on early or reliable detection depends on the scenario in question. We investigate three objective measures quantifying the performance of nodes in sentinel surveillance: the time to detection or extinction, the time to detection, and the frequency of detection. As a basis for the comparison, we use the susceptible-infectious-recovered (SIR) model on static and temporal networks of human contacts. We show that, for some regions of parameter space, the three objective measures can rank the nodes very differently. As opposed to other problems in network epidemiology, we find rather similar results for the static and temporal networks. Furthermore, we do not find one network structure that predicts the objective measures---that depends both on the data set and the SIR parameters.
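-- The detection-time measure is easy to prototype: run an SIR outbreak from a random seed node and record when (if ever) the sentinel is reached. A minimal sketch (discrete-time SIR; the function and parameter names are illustrative, not the paper's code):

```python
import random

def time_to_detection(adj, sentinel, beta=0.5, mu=0.2, seed=0):
    """Run a discrete-time SIR outbreak from a uniformly random seed node.
    Return the step at which the sentinel is first infected, or None if
    the outbreak goes extinct before reaching the sentinel."""
    rng = random.Random(seed)
    nodes = list(adj)
    status = {v: "S" for v in nodes}
    status[rng.choice(nodes)] = "I"
    t = 0
    while True:
        if status[sentinel] != "S":
            return t                      # sentinel reached: detection
        if all(s != "I" for s in status.values()):
            return None                   # outbreak extinct, undetected
        nxt = dict(status)
        for v in nodes:
            if status[v] == "I":
                for u in adj[v]:          # infect susceptible neighbours
                    if status[u] == "S" and rng.random() < beta:
                        nxt[u] = "I"
                if rng.random() < mu:     # recover
                    nxt[v] = "R"
        status = nxt
        t += 1

# Star graph with the hub as the sentinel node.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
times = [time_to_detection(adj, sentinel=0, seed=s) for s in range(200)]
detected = [t for t in times if t is not None]
```

Averaging `detected` over many runs gives the time-to-detection measure, the fraction of non-`None` outcomes gives the frequency of detection, and folding the extinct runs back in gives the time-to-detection-or-extinction variant.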
networks  epidemics  contagion  signal_processing  temporal_networks
11 weeks ago
Organization of feed-forward loop motifs reveals architectural principles in natural and engineered networks | Science Advances
Network motifs are significantly overrepresented subgraphs that have been proposed as building blocks for natural and engineered networks. Detailed functional analysis has been performed for many types of motif in isolation, but less is known about how motifs work together to perform complex tasks. To address this issue, we measure the aggregation of network motifs via methods that extract precisely how these structures are connected. Applying this approach to a broad spectrum of networked systems and focusing on the widespread feed-forward loop motif, we uncover striking differences in motif organization. The types of connection are often highly constrained, differ between domains, and clearly capture architectural principles. We show how this information can be used to effectively predict functionally important nodes in the metabolic network of Escherichia coli. Our findings have implications for understanding how networked systems are constructed from motif parts and elucidate constraints that guide their evolution.
i_remain_skeptical  networks  teaching  prediction  network_data_analysis
11 weeks ago
[1704.06279] Mutual Information, Neural Networks and the Renormalization Group
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at low energies. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains "slow" degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine learning (ML) algorithm capable of identifying the relevant degrees of freedom without any prior knowledge about the system. We introduce an artificial neural network, based on a model-independent, information-theoretic characterization of a real-space RG procedure, that performs this task. We apply the algorithm to classical statistical physics problems in two dimensions.

--Surely, the connection between computational complexity and statistical mechanical systems would ensure that their methods are not applicable to more general systems. Something's off here....

Now a *Nature paper*; I'm really surprised that the reviewers did not push them to make connections to epsilon machines and other methods that already exist.
renormalization  phase_transition  statistical_mechanics  deep_learning  i_remain_skeptical
11 weeks ago
Spectral simplicity of apparent complexity. I. The nondiagonalizable metadynamics of prediction: Chaos: An Interdisciplinary Journal of Nonlinear Science: Vol 28, No 3
Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question—correlation, predictability, predictive cost, observer synchronization, and the like—induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the spectral projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II [P. M. Riechers and J. P. Crutchfield, Chaos 28, 033116 (2018)], to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic.
complexity  markov_process  jim.crutchfield
11 weeks ago
Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra: Chaos: An Interdisciplinary Journal of Nonlinear Science: Vol 28, No 3
The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.
complexity  markov_process  jim.crutchfield
11 weeks ago
Maintained by environmental fluxes, biological systems are thermodynamic processes that operate far from equilibrium without detailed-balanced dynamics. Yet, they often exhibit well defined nonequilibrium steady states (NESSs). More importantly, critical thermodynamic functionality arises directly from transitions among their NESSs, driven by environmental switching. Here, we identify the constraints on excess heat and dissipated work necessary to control a system that is kept far from equilibrium by background, uncontrolled “housekeeping” forces. We do this by extending the Crooks fluctuation theorem to transitions among NESSs, without invoking an unphysical dual dynamics. This and corresponding integral fluctuation theorems determine how much work must be expended when controlling systems maintained far from equilibrium. This generalizes thermodynamic feedback control theory, showing that Maxwellian Demons can leverage mesoscopic-state information to take advantage of the excess energetics in NESS transitions. We also generalize an approach recently used to determine the work dissipated when driving between functionally relevant configurations of an active energy-consuming complex system. Altogether, these results highlight universal thermodynamic laws that apply to the accessible degrees of freedom within the effective dynamic at any emergent level of hierarchical organization. By way of illustration, we analyze a voltage-gated sodium ion channel whose molecular conformational dynamics play a critical functional role in propagating action potentials in mammalian neuronal membranes.
non-equilibrium  thermodynamics  statistical_mechanics  self_organization  control_theory  jim.crutchfield
11 weeks ago
The Book of Why – Hachette Book Group
-- I anticipate outcry that *his* vision is very limited, ...... and all that.
book  causality  causal_inference  judea.pearl
11 weeks ago
Peer review: the end of an error?
--Timothy Gowers's call for an alternative system built on the foundations of arXiv culture.
sociology_of_science  scientific_publishing_complex  critique  institutions  norms  democracy
11 weeks ago
The hot hand is back! – Department of Economics
-- Sanjurjo's compilation of blogs, articles and press featuring the Hot-hand-fallacy fallacy

-- I liked this one in particular
https://arxiv.org/pdf/1512.08773.pdf

maybe because it was LaTeX-ed :)
dmce  teaching
11 weeks ago
Built
Imagine you woke up one morning, and everything that engineers had created disappeared. What would you see?

No cars, no houses; no phones, bridges or roads. No tunnels under tidal rivers, no soaring skyscrapers. Engineering is an intrinsic and intimate part of our existence, shaping the spaces in which we reside. We cannot live without it.

In BUILT, structural engineer Roma Agrawal takes a unique look at how construction has evolved from the mud huts of our ancestors to towers of steel that reach into the sky. She unearths how humans have tunnelled through kilometres of solid mountain, bridged the widest and deepest of rivers, and tamed Nature’s precious – and elusive – water resources. She tells vivid tales of the pioneers behind landmark builds such as the Brooklyn Bridge and the Burj Khalifa, and examines, from an engineering perspective, tragedies like the collapse of the Twin Towers. She reveals how she designs a building so it will stand strong – even in the face of gales, fire, earthquakes and explosions.

With colourful stories of her life-long fascination with buildings – and her own hand-drawn illustrations – Roma uncovers the extraordinary secret lives of structures.
book  engineering  design  philosophy_of_technology  ?
11 weeks ago
Roberts, M.E.: Censored: Distraction and Diversion Inside China's Great Firewall (Hardcover and eBook) | Princeton University Press
As authoritarian governments around the world develop sophisticated technologies for controlling information, many observers have predicted that these controls would be ineffective because they are easily thwarted and evaded by savvy Internet users. In Censored, Margaret Roberts demonstrates that even censorship that is easy to circumvent can still be enormously effective. Taking advantage of digital data harvested from the Chinese Internet and leaks from China's Propaganda Department, this important book sheds light on how and when censorship influences the Chinese public.

Roberts finds that much of censorship in China works not by making information impossible to access but by requiring those seeking information to spend extra time and money for access. By inconveniencing users, censorship diverts the attention of citizens and powerfully shapes the spread of information. When Internet users notice blatant censorship, they are willing to compensate for better access. But subtler censorship, such as burying search results or introducing distracting information on the web, is more effective because users are less aware of it. Roberts challenges the conventional wisdom that online censorship is undermined when it is incomplete and shows instead how censorship's porous nature is used strategically to divide the public.

Drawing parallels between censorship in China and the way information is manipulated in the United States and other democracies, Roberts reveals how Internet users are susceptible to control even in the most open societies. Demonstrating how censorship travels across countries and technologies, Censored gives an unprecedented view of how governments encroach on the media consumption of citizens.

http://www.margaretroberts.net

-- book based on a set of papers co-authored with Gary King, extensively discussed in Tufekci's book. Her current projects include work with Grimmer on causal inference of political attitudes from text-mined data.
book  surveillance  authoritarianism  censorship  civil_rights  platform_studies  china  governance  social_media  political_science
11 weeks ago
[1803.08491] Influence of fake news in Twitter during the 2016 US presidential election

Other work
https://arxiv.org/abs/1707.01594
https://www.nature.com/articles/nphys1746

-- The absence of citations to other careful analyses that are available makes me distrust these papers.
social_networks  influence  causal_inference  i_remain_skeptical
12 weeks ago
News Attention in a Mobile Era | Journal of Computer-Mediated Communication | Oxford Academic
Mobile access to the Internet is changing the way people consume information, yet we know little about the effects of this shift on news consumption. Consuming news is key to democratic citizenship, but is attention to news the same in a mobile environment? We argue that attention to news on mobile devices such as tablets and smartphones is not the same as attention to news for those on computers. Our research uses eye tracking in two lab experiments to capture the effects of mobile device use on news attention. We also conduct a large-scale study of web traffic data to provide further evidence that news attention is significantly different across computers and mobile devices.
media_studies  dmce  teaching
12 weeks ago
SocArXiv Papers | Exposure to Opposing Views can Increase Political Polarization: Evidence from a Large-Scale Field Experiment on Social Media
There is mounting concern that social media sites contribute to political polarization by creating "echo chambers" that insulate people from opposing views about current events. We surveyed a large sample of Democrats and Republicans who visit Twitter at least three times each week about a range of social policy issues. One week later, we randomly assigned respondents to a treatment condition in which they were offered financial incentives to follow a Twitter bot for one month that exposed them to messages produced by elected officials, organizations, and other opinion leaders with opposing political ideologies. Respondents were re-surveyed at the end of the month to measure the effect of this treatment, and at regular intervals throughout the study period to monitor treatment compliance. We find that Republicans who followed a liberal Twitter bot became substantially more conservative post-treatment, and Democrats who followed a conservative Twitter bot became slightly more liberal post-treatment. These findings have important implications for the interdisciplinary literature on political polarization as well as the emerging field of computational social science.
political_psychology  cultural_cognition  bias  public_opinion  opinion_dynamics  dmce  teaching  via:nyhan
12 weeks ago
Modernizing Crime Statistics: Report 2: New Systems for Measuring Crime | The National Academies Press
To derive statistics about crime -- to estimate its levels and trends, assess its costs to and impacts on society, and inform law enforcement approaches to prevent it -- a conceptual framework for defining and thinking about crime is virtually a prerequisite. Developing and maintaining such a framework is no easy task, because the mechanics of crime are ever evolving, tied to shifts and developments in technology, society, and legislation.

Interest in understanding crime surged in the 1920s, which proved to be a pivotal decade for the collection of nationwide crime statistics. Now established as a permanent agency, the Census Bureau commissioned the drafting of a manual for preparing crime statistics—intended for use by the police, corrections departments, and courts alike. The new manual sought to solve a perennial problem by suggesting a standard taxonomy of crime. Shortly after the Census Bureau issued its manual, the International Association of Chiefs of Police in convention adopted a resolution to create a Committee on Uniform Crime Records —to begin the process of describing what a national system of data on crimes known to the police might look like.

Report 1 performed a comprehensive reassessment of what is meant by crime in U.S. crime statistics and recommended a new classification of crime to organize measurement efforts. This second report examines methodological and implementation issues and presents a conceptual blueprint for modernizing crime statistics.
crime  policing  data  statistics  nap  report  for_friends
12 weeks ago
Does Machine Learning Automate Moral Hazard and Error?
Machine learning tools are beginning to be deployed en masse in health care. While the statistical underpinnings of these techniques have been questioned with regard to causality and stability, we highlight a different concern here, relating to measurement issues. A characteristic feature of health data, unlike other applications of machine learning, is that neither y nor x is measured perfectly. Far from a minor nuance, this can undermine the power of machine learning algorithms to drive change in the health care system--and indeed, can cause them to reproduce and even magnify existing errors in human judgment.
machine_learning  statistics  prediction  judgment_decision-making  human-machine_error  for_friends
12 weeks ago
Investigator Characteristics and Respondent Behavior in Online Surveys | Journal of Experimental Political Science | Cambridge Core
Prior research demonstrates that responses to surveys can vary depending on the race, gender, or ethnicity of the investigator asking the question. We build upon this research by empirically testing how information about researcher identity in online surveys affects subject responses. We do so by conducting an experiment on Amazon’s Mechanical Turk in which we vary the name of the researcher in the advertisement for the experiment and on the informed consent page in order to cue different racial and gender identities. We fail to reject the null hypothesis that there is no difference in how respondents answer questions when assigned to a putatively black/white or male/female researcher.
online_experiments  amazon_turk  survey  race  gender  bias  sociology_of_science  social_psychology  via:nyhan
12 weeks ago
[1706.00394] Multiscale unfolding of real networks by geometric renormalization
Multiple scales coexist in complex networks. However, the small world property makes them strongly entangled. This makes the elucidation of length scales and symmetries a challenging task. Here, we define a geometric renormalization group for complex networks and use the technique to investigate networks as viewed at different scales. We find that real networks embedded in a hidden metric space show geometric scaling, in agreement with the renormalizability of the underlying geometric model. This allows us to unfold real scale-free networks in a self-similar multilayer shell which unveils the coexisting scales and their interplay. The multiscale unfolding offers a basis for a new approach to explore critical phenomena and universality in complex networks, and affords us immediate practical applications, like high-fidelity smaller-scale replicas of large networks and a multiscale navigation protocol in hyperbolic space which boosts the success of single-layer versions.
networks  hyperbolic_geometry  phase_transition  renormalization  network_data_analysis  ?
12 weeks ago
The Intellectual We Deserve | Current Affairs
-- Profuse thanks to the author for trudging through the source material.

-- Would our side (if I am still acceptable as part of the tribe) welcome such long-form public _idea shaming_ of the out-there post-modernist, intersectionalist gender and queer studies thinking? And would we celebrate this in the public sphere?
via:henryfarrell  contemporary_culture  critique
12 weeks ago
[1803.01422] DAGs with NO TEARS: Smooth Optimization for Structure Learning
Estimating the structure of directed acyclic graphs (DAGs, also known as Bayesian networks) is a challenging problem since the search space of DAGs is combinatorial and scales superexponentially with the number of nodes. Existing approaches rely on various local heuristics for enforcing the acyclicity constraint and are not well-suited to general purpose optimization packages for their solution. In this paper, we introduce a fundamentally different strategy: We formulate the structure learning problem as a smooth, constrained optimization problem over real matrices that avoids this combinatorial constraint entirely. This is achieved by a novel characterization of acyclicity that is not only smooth but also exact. The resulting nonconvex, constrained program involves smooth functions whose gradients are easy to compute and only involve elementary matrix operations. By using existing black-box optimization routines, our method uses global search to find an optimal DAG, can be implemented in about 50 lines of Python, and outperforms existing methods without imposing any structural constraints.
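-- The smooth acyclicity characterization the abstract refers to is h(W) = tr(e^{W∘W}) - d, which is zero exactly when the weighted adjacency matrix W encodes a DAG. A minimal sketch of just that measure (not the full NOTEARS optimizer), with function and variable names of my own choosing:

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W: np.ndarray) -> float:
    """NOTEARS acyclicity measure h(W) = tr(exp(W * W)) - d.

    h(W) = 0 exactly when the weighted adjacency matrix W is a DAG;
    h is smooth in W, so it can serve as an equality constraint in a
    continuous optimizer instead of a combinatorial acyclicity check.
    """
    d = W.shape[0]
    # W * W is the elementwise (Hadamard) square, making entries nonnegative.
    return float(np.trace(expm(W * W)) - d)

# A strictly upper-triangular adjacency matrix is a DAG: h = 0.
dag = np.array([[0.0, 1.5], [0.0, 0.0]])
# A 2-cycle contributes positive weight to tr(exp(.)): h > 0.
cyclic = np.array([[0.0, 1.0], [1.0, 0.0]])
```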
bayesian_network  graphical_models  machine_learning
12 weeks ago
Ordinal Graphical Models: A Tale of Two Approaches
Undirected graphical models or Markov random fields (MRFs) are widely used for modeling multivariate probability distributions. Much of the work on MRFs has focused on continuous variables, and nominal variables (that is, unordered categorical variables). However, data from many real world applications involve ordered categorical variables, also known as ordinal variables, e.g., movie ratings on Netflix which can be ordered from 1 to 5 stars. With respect to univariate ordinal distributions, as we detail in the paper, there are two main categories of distributions; while there have been efforts to extend these to multivariate ordinal distributions, the resulting distributions are typically very complex, with either a large number of parameters or with non-convex likelihoods. While there has been some work on tractable approximations, these do not come with strong statistical guarantees, and moreover are relatively computationally expensive. In this paper, we theoretically investigate two classes of graphical models for ordinal data, corresponding to the two main categories of univariate ordinal distributions. In contrast to previous work, our theoretical developments allow us to provide correspondingly two classes of estimators that are not only computationally efficient but also have strong statistical guarantees.
graphical_models
12 weeks ago
[1802.04397] Identifiability of Nonparametric Mixture Models and Bayes Optimal Clustering
Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable by introducing a novel framework for clustering overfitted \emph{parametric} (i.e. misspecified) mixture models. These conditions generalize existing conditions in the literature, and are flexible enough to include for example mixtures of Gaussian mixtures. In contrast to the recent literature on estimating nonparametric mixtures, we allow for general nonparametric mixture components, and instead impose regularity assumptions on the underlying mixing measure. As our primary application, we apply these results to partition-based clustering, generalizing the well-known notion of a Bayes optimal partition from classical model-based clustering to nonparametric settings. Furthermore, this framework is constructive in that it yields a practical algorithm for learning identified mixtures, which is illustrated through several examples. The key conceptual device in the analysis is the convex, metric geometry of probability distributions on metric spaces and its connection to optimal transport and the Wasserstein convergence of mixing measures. The result is a flexible framework for nonparametric clustering with formal consistency guarantees.
identifiability  nonparametric  statistics  machine_learning
12 weeks ago
Conflict and convention in dynamic networks | Journal of The Royal Society Interface
An important way to resolve games of conflict (snowdrift, hawk-dove, chicken) involves adopting a convention: a correlated equilibrium that avoids any conflict between aggressive strategies. Dynamic networks allow individuals to resolve conflict via their network connections rather than changing their strategy. Exploring how behavioural strategies coevolve with social networks reveals new dynamics that can help explain the origins and robustness of conventions. Here, we model the emergence of conventions as correlated equilibria in dynamic networks. Our results show that networks have the tendency to break the symmetry between the two conventional solutions in a strongly biased way. Rather than the correlated equilibrium associated with ownership norms (play aggressive at home, not away), we usually see the opposite host-guest norm (play aggressive away, not at home) evolve on dynamic networks, a phenomenon common to human interaction. We also show that learning to avoid conflict can produce realistic network structures in a way different from preferential attachment models.

-- Network formation using game theory. (M.O. Jackson has a paper on a related approach.) More generally, it is network formation using optimization principles.
social_networks  dynamics  norms  game_theory  networks
12 weeks ago
In Sweden’s Preschools, Boys Learn to Dance and Girls Learn to Yell - The New York Times
-- This story is perfect propaganda for the alt-right, the hard right, conservatives and their ilk.

-- Looking forward to seeing what data has to say
nature-nurture  experiments  social_science  gender  evolutionary_psychology  NYTimes  via:clairlemon
12 weeks ago
Why French Kids Don't Have ADHD | Psychology Today
In the United States, at least 9 percent of school-aged children have been diagnosed with ADHD, and are taking pharmaceutical medications. In France, the percentage of kids diagnosed and medicated for ADHD is less than 0.5 percent. How has the epidemic of ADHD, firmly established in the U.S., almost completely passed over children in France?

-- where did they get this number from?
-- also interesting is the fact that France has its version of DSM
-- The rest of the article is pretty preachy and redundant
social_psychology  DSM  comparative  evolutionary_psychology  ?  via:?  track_down_references
12 weeks ago
Infectious Disease Modeling of Social Contagion in Networks
Many behavioral phenomena have been found to spread interpersonally through social networks, in a manner similar to infectious diseases. An important difference between social contagion and traditional infectious diseases, however, is that behavioral phenomena can be acquired by non-social mechanisms as well as through social transmission. We introduce a novel theoretical framework for studying these phenomena (the SISa model) by adapting a classic disease model to include the possibility for 'automatic' (or 'spontaneous') non-social infection. We provide an example of the use of this framework by examining the spread of obesity in the Framingham Heart Study Network. The interaction assumptions of the model are validated using longitudinal network transmission data. We find that the current rate of becoming obese is 2% per year and increases by 0.5 percentage points for each obese social contact. The rate of recovering from obesity is 4% per year, and does not depend on the number of non-obese contacts. The model predicts a long-term obesity prevalence of approximately 42%, and can be used to evaluate the effect of different interventions on steady-state obesity. Model predictions quantitatively reproduce the actual historical time course for the prevalence of obesity. We find that since the 1970s, the rate of recovery from obesity has remained relatively constant, while the rates of both spontaneous infection and transmission have steadily increased over time. This suggests that the obesity epidemic may be driven by increasing rates of becoming obese, both spontaneously and transmissively, rather than by decreasing rates of losing weight. A key feature of the SISa model is its ability to characterize the relative importance of social transmission by quantitatively comparing rates of spontaneous versus contagious infection.
It provides a theoretical framework for studying the interpersonal spread of any state that may also arise spontaneously, such as emotions, behaviors, health states, ideas or diseases with reservoirs.
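-- The steady-state prevalence follows from balancing infection against recovery. In a mean-field sketch of the SISa dynamics (my simplification, not the paper's network-based estimate), susceptibles become infected at rate a + beta*k*i and recover at rate g; the rates below are the abstract's (a = 2%/yr spontaneous, beta = 0.5 pp per obese contact, g = 4%/yr recovery), while the mean number of contacts k is an assumed illustrative value:

```python
import math

def sisa_steady_state(a: float, beta: float, g: float, k: float) -> float:
    """Mean-field steady-state infected fraction i* for the SISa model.

    Balance condition: (a + beta*k*i) * (1 - i) = g * i,
    i.e. spontaneous-plus-social infection of the susceptible
    fraction equals recovery of the infected fraction. This is a
    quadratic in i; we take the root in (0, 1).
    """
    # Expanding: beta*k*i^2 + (a + g - beta*k)*i - a = 0
    A = beta * k
    B = a + g - beta * k
    C = -a
    return (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)

# Rates per year from the abstract; k = 4 contacts is assumed for illustration.
prevalence = sisa_steady_state(a=0.02, beta=0.005, g=0.04, k=4)
```

With these illustrative numbers the mean-field fixed point lands near the ~42% prevalence the paper reports, though the paper's figure comes from the network model, not this simplification.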

-----------------------

"It has recently been suggested that certain, particular types of latent homophily, in which an unobservable trait influences both which friends one chooses and current and future behavior, may be impossible to distinguish from contagion in observational studies and hence may bias estimates of contagion and homophily [50]. The circumstances under which this is likely to be a serious source of bias (e.g., whether people, empirically, behave in these sorts of ways), and what (if anything) might be done about it (absent experimental data of the kind that some new networks studies are providing [22]) merits further study. Observational data invariably pose problems for causal inference, and require one set of assumptions or another to analyze; the plausibility of these assumptions (even of standard ones that are widely used) warrants constant review.
"The SISa model as presented here assumes that all individuals have the same probability of changing state (though not everyone will actually change state within their lifetime). It is clearly possible, however, that there is heterogeneity between individuals in these rates. We do not have sufficient data on obesity in the Framingham dataset to explore this issue, which would require observing numerous transitions between states for each individual. Exploring individual differences in acquisition rate empirically is a very interesting topic for future research, as is extending the theoretical framework we introduce to take into account individual differences."

--- For "suggested", read "proved"; the second paragraph amounts to saying "Let's just agree to ignore this".
social_networks  contagion  homophily  simulation  epidemics  networks  teaching
12 weeks ago
SocioPatterns.org
-- More datasets for students, in case they are interested in dynamics, especially epidemics on networks. Includes data on temporal networks.
data_sets  contagion  epidemiology  social_networks  networks  teaching
12 weeks ago