
Useful Enemies - Noel Malcolm - Oxford University Press
"From the fall of Constantinople in 1453 until the eighteenth century, many Western European writers viewed the Ottoman Empire with almost obsessive interest. Typically they reacted to it with fear and distrust; and such feelings were reinforced by the deep hostility of Western Christendom towards Islam. Yet there was also much curiosity about the social and political system on which the huge power of the sultans was based. In the sixteenth century, especially, when Ottoman territorial expansion was rapid and Ottoman institutions seemed particularly robust, there was even open admiration.
"In this path-breaking book Noel Malcolm ranges through these vital centuries of East-West interaction, studying all the ways in which thinkers in the West interpreted the Ottoman Empire as a political phenomenon - and Islam as a political religion. Useful Enemies shows how the concept of 'oriental despotism' began as an attempt to turn the tables on a very positive analysis of Ottoman state power, and how, as it developed, it interacted with Western debates about monarchy and government. Noel Malcolm also shows how a negative portrayal of Islam as a religion devised for political purposes was assimilated by radical writers, who extended the criticism to all religions, including Christianity itself.
"Examining the works of many famous thinkers (including Machiavelli, Bodin, and Montesquieu) and many less well-known ones, Useful Enemies illuminates the long-term development of Western ideas about the Ottomans, and about Islam. Noel Malcolm shows how these ideas became intertwined with internal Western debates about power, religion, society, and war. Discussions of Islam and the Ottoman Empire were thus bound up with mainstream thinking in the West on a wide range of important topics. These Eastern enemies were not just there to be denounced. They were there to be made use of, in arguments which contributed significantly to the development of Western political thought"
in_NB  books:noted  history_of_ideas  early_modern_european_history  ottoman_empire  orientalism  via:auerbach 
6 days ago by cshalizi
World ordering: social theory, cognitive evolution | International relations and international organisations | Cambridge University Press
"Drawing on evolutionary epistemology, process ontology, and a social-cognition approach, this book suggests cognitive evolution, an evolutionary-constructivist social and normative theory of change and stability of international social orders. It argues that practices and their background knowledge survive preferentially, communities of practice serve as their vehicle, and social orders evolve. As an evolutionary theory of world ordering, which does not borrow from the natural sciences, it explains why certain configurations of practices organize and govern social orders epistemically and normatively, and why and how these configurations evolve from one social order to another. Suggesting a multiple and overlapping international social orders' approach, the book uses three running cases of contested orders - Europe's contemporary social order, the cyberspace order, and the corporate order - to illustrate the theory. Based on the concepts of common humanity and epistemological security, the author also submits a normative theory of better practices and of bounded progress."
in_NB  books:noted  cultural_evolution  institutions  social_evolution  via:auerbach  re:do-institutions-evolve 
6 days ago by cshalizi
Stochastic stability of differential equations in abstract spaces | Differential and integral equations, dynamical systems and control | Cambridge University Press
"The stability of stochastic differential equations in abstract, mainly Hilbert, spaces receives a unified treatment in this self-contained book. It covers basic theory as well as computational techniques for handling the stochastic stability of systems from mathematical, physical and biological problems. Its core material is divided into three parts devoted respectively to the stochastic stability of linear systems, non-linear systems, and time-delay systems. The focus is on stability of stochastic dynamical processes affected by white noise, which are described by partial differential equations such as the Navier–Stokes equations. A range of mathematicians and scientists, including those involved in numerical computation, will find this book useful. It is also ideal for engineers working on stochastic systems and their control, and researchers in mathematical physics or biology."
in_NB  stochastic_processes  stochastic_differential_equations  dynamical_systems  hilbert_space  books:noted  re:almost_none 
9 days ago by cshalizi
Statistical modelling with exponential families | Statistical theory and methods | Cambridge University Press
"This book is a readable, digestible introduction to exponential families, encompassing statistical models based on the most useful distributions in statistical theory, including the normal, gamma, binomial, Poisson, and negative binomial. Strongly motivated by applications, it presents the essential theory and then demonstrates the theory's practical potential by connecting it with developments in areas like item response analysis, social network models, conditional independence and latent variable structures, and point process models. Extensions to incomplete data models and generalized linear models are also included. In addition, the author gives a concise account of the philosophy of Per Martin-Löf in order to connect statistical modelling with ideas in statistical physics, including Boltzmann's law. Written for graduate students and researchers with a background in basic statistical inference, the book includes a vast set of examples demonstrating models for applications and exercises embedded within the text as well as at the ends of chapters."
in_NB  exponential_families  statistics  books:noted 
10 days ago by cshalizi
Planning Without Prices (G. M. Heal, 1969)
Yet Another Lange-ian Central Planning Board:

The CPB sets a utility function in terms of levels of final goods. It also allocates raw materials and intermediate goods. Every firm must report to the CPB the marginal productivity of every resource for making every good; the CPB re-allocates goods towards firms with above-average productivity --- basically gradient ascent. (There is a slight complication here to avoid negative allocations.) This converges to a stationary point of the utility function. The claimed innovations over Lange are (a) no prices, just quantities (except that the CPB needs to use partial derivatives of the utility function that act just like prices for its internal work), (b) could handle non-convexity [sort of --- it'll converge to local maxima very happily], (c) along the path to the stationary point, we always stay inside the feasible set, and (d) the utility function is increasing along the path. The author sets the most store by (c) and (d), and so I'd characterize it as kin to an interior-point method, though without (say) a constraint-enforcing barrier penalty. The informational advantage over Kantorovich-style central planning is that the CPB doesn't have to know all the production functions, it just (!) needs to know every firm's marginal productivity for each possible input, which the firm will report honestly because reasons. (The computational and political difficulties of deciding on an economy-wide utility function are as usual unaddressed.)

--- N.B., the last tag (and my emphasis on what's _not_ here) is because someone pointed me at this (and an earlier paper by Malinvaud, cited by Heal) as disposing of everything I wrote about the difficulties of central planning.
have_read  economics  optimization  distributed_systems  re:in_soviet_union_optimization_problem_solves_you  shot_after_a_fair_trial  in_NB 
4 weeks ago by cshalizi
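The iteration described above is simple enough to sketch. This is my toy reconstruction, not Heal's notation: one raw material with total stock R, firms with outputs f_i(r_i) = a_i * r_i**b_i (the a_i drop out of the marginals), and CPB utility U = sum_i log f_i(r_i). Each round the CPB shifts the resource toward firms reporting above-average marginal utility-productivity, which keeps the allocation feasible and makes U increase along the path.

```python
import numpy as np

# Toy version of the quantity-based planning iteration sketched above.
# One raw material, total stock R, allocated across firms with output
# elasticities b_i; CPB utility U = sum_i b_i * log(r_i) up to constants.

R = 10.0
b = np.array([0.3, 0.5, 0.2, 0.6, 0.4])  # firms' output elasticities
r = np.full(len(b), R / len(b))          # feasible starting allocation

for _ in range(5000):
    g = b / r                            # firms report dU/dr_i = b_i / r_i
    step = 0.05 * (g - g.mean())         # reallocate toward above-average firms
    step = np.maximum(step, -0.9 * r)    # crude guard against negative allocations
    r += step - step.sum() / len(b)      # re-project onto sum(r) = R

# At a stationary point all reported marginals are equal, i.e. b_i / r_i is
# constant, so r_i is proportional to b_i.
print(r)
```

Note that the update stays inside the feasible set at every step (points (c) and (d) above), but nothing stops it from settling at a merely local maximum for a non-concave U.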
The relationship between external variables and common factors | SpringerLink
"A theorem is presented which gives the range of possible correlations between a common factor and an external variable (i.e., a variable not included in the test battery factor analyzed). Analogous expressions for component (and regression component) theory are also derived. Some situations involving external correlations are then discussed which dramatize the theoretical differences between components and common factors."
in_NB  have_read  factor_analysis  inference_to_latent_objects  psychometrics  statistics  re:g_paper 
7 weeks ago by cshalizi
Factor indeterminacy in the 1930's and the 1970's: some interesting parallels | SpringerLink
"The issue of factor indeterminacy, and its meaning and significance for factor analysis, has been the subject of considerable debate in recent years. Interestingly, the identical issue was discussed widely in the literature of the late 1920's and early 1930's, but this early discussion was somehow lost or forgotten during the development and popularization of multiple factor analysis. There are strong parallels between the arguments in the early literature, and those which have appeared in recent papers. Here I review the history of this early literature, briefly survey the more recent work, and discuss these parallels where they are especially illuminating."
in_NB  psychometrics  factor_analysis  inference_to_latent_objects  have_read  a_long_time_ago  re:g_paper 
7 weeks ago by cshalizi
Some new results on factor indeterminacy | SpringerLink
"Some relations between maximum likelihood factor analysis and factor indeterminacy are discussed. Bounds are derived for the minimum average correlation between equivalent sets of correlated factors which depend on the latent roots of the factor intercorrelation matrix ψ. Empirical examples are presented to illustrate some of the theory and indicate the extent to which it can be expected to be relevant in practice."
in_NB  have_read  a_long_time_ago  factor_analysis  low-rank_approximation  statistics  re:g_paper 
7 weeks ago by cshalizi
Alquier, Marie: Matrix factorization for multivariate time series analysis
"Matrix factorization is a powerful data analysis tool. It has been used in multivariate time series analysis, leading to the decomposition of the series in a small set of latent factors. However, little is known on the statistical performances of matrix factorization for time series. In this paper, we extend the results known for matrix estimation in the i.i.d setting to time series. Moreover, we prove that when the series exhibit some additional structure like periodicity or smoothness, it is possible to improve on the classical rates of convergence."
in_NB  low-rank_approximation  time_series  factor_analysis  statistics  to_read  to_teach:data_over_space_and_time 
7 weeks ago by cshalizi
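A toy sketch in the spirit of the abstract (not the paper's estimator): factor a multivariate series X (T x d) as X ~ U @ V.T, with a temporal smoothness penalty lam * ||D U||^2 on the latent factors (D = first differences), fit by alternating gradient steps. All parameter values here are illustrative.

```python
import numpy as np

# Rank-k matrix factorization of a multivariate time series with a
# smoothness penalty on the latent factor paths.

rng = np.random.default_rng(1)
T, d, k, lam, lr = 200, 10, 2, 5.0, 0.01
t = np.arange(T)
true_F = np.c_[np.sin(2 * np.pi * t / 50), np.cos(2 * np.pi * t / 50)]
true_L = rng.normal(size=(d, k))
X = true_F @ true_L.T + 0.1 * rng.normal(size=(T, d))

U = rng.normal(scale=0.1, size=(T, k))
V = rng.normal(scale=0.1, size=(d, k))
D = np.diff(np.eye(T), axis=0)           # (T-1) x T first-difference operator

for _ in range(3000):
    resid = X - U @ V.T
    U += lr * (resid @ V - lam * (D.T @ (D @ U)))  # fit term + smoothness term
    V += lr * (resid.T @ U)

mse = np.mean((X - U @ V.T) ** 2)
print(mse)                               # small relative to Var(X) ~ 1
```

The smoothness penalty is one way of exploiting the "additional structure" the abstract mentions; for genuinely smooth factors it barely biases the fit while shrinking the noise the factors can absorb.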
[1906.00001] Functional Adversarial Attacks
"We propose functional adversarial attacks, a novel class of threat models for crafting adversarial examples to fool machine learning models. Unlike a standard ℓp-ball threat model, a functional adversarial threat model allows only a single function to be used to perturb input features to produce an adversarial example. For example, a functional adversarial attack applied on colors of an image can change all red pixels simultaneously to light red. Such global uniform changes in images can be less perceptible than perturbing pixels of the image individually. For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments. We show that functional threat models can be combined with existing additive (ℓp) threat models to generate stronger threat models that allow both small, individual perturbations and large, uniform changes to an input. Moreover, we prove that such combinations encompass perturbations that would not be allowed in either constituent threat model. In practice, ReColorAdv can significantly reduce the accuracy of a ResNet-32 trained on CIFAR-10. Furthermore, to the best of our knowledge, combining ReColorAdv with other attacks leads to the strongest existing attack even after adversarial training. An implementation of ReColorAdv is available at this https URL ."
in_NB  adversarial_examples 
11 weeks ago by cshalizi
[1910.13427] Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications
"We develop techniques to quantify the degree to which a given (training or testing) example is an outlier in the underlying distribution. We evaluate five methods to score examples in a dataset by how well-represented the examples are, for different plausible definitions of "well-represented", and apply these to four common datasets: MNIST, Fashion-MNIST, CIFAR-10, and ImageNet. Despite being independent approaches, we find all five are highly correlated, suggesting that the notion of being well-represented can be quantified. Among other uses, we find these methods can be combined to identify (a) prototypical examples (that match human expectations); (b) memorized training examples; and, (c) uncommon submodes of the dataset. Further, we show how we can utilize our metrics to determine an improved ordering for curriculum learning, and impact adversarial robustness. We release all metric values on training and test sets we studied."

--- It would be interesting to see whether they engage with the earlier literature on outliers at all.
in_NB  outliers  adversarial_examples  statistics 
11 weeks ago by cshalizi
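A minimal illustration of "how well-represented is this example?" scoring (a generic density proxy, not one of the paper's five methods): score each point by the distance to its k-th nearest neighbor, so larger scores mean more outlying.

```python
import numpy as np

# k-NN distance as a crude outlier / "well-representedness" score.

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))             # dense cluster of typical points
X = np.vstack([X, [[8.0, 8.0]]])          # one planted outlier

def knn_score(X, k=10):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]                        # column 0 is the self-distance 0

scores = knn_score(X)
print(scores.argmax())                    # index of the planted outlier
```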
[1909.06137] Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix
"We propose a scheme for defending against adversarial attacks by suppressing the largest eigenvalue of the Fisher information matrix (FIM). Our starting point is one explanation of the rationale of adversarial examples. Based on the idea that the difference between a benign sample and its adversarial example is measured by the Euclidean norm, while the difference between their classification probability densities at the last (softmax) layer of the network could be measured by the Kullback-Leibler (KL) divergence, the explanation shows that the output difference is a quadratic form of the input difference. If the eigenvalue of this quadratic form (a.k.a. FIM) is large, the output difference becomes large even when the input difference is small, which explains the adversarial phenomenon. This makes adversarial defense possible by controlling the eigenvalues of the FIM. Our solution is adding one term representing the trace of the FIM to the loss function of the original network, as the largest eigenvalue is bounded by the trace. Our defensive scheme is verified by experiments using a variety of common attacking methods on typical deep neural networks, e.g. LeNet, VGG and ResNet, with datasets MNIST, CIFAR-10, and German Traffic Sign Recognition Benchmark (GTSRB). Our new network, after adopting the novel loss function and retraining, has an effective and robust defensive capability, as it decreases the fooling ratio of the generated adversarial examples, and retains the classification accuracy of the original network."
in_NB  adversarial_examples  fisher_information  to_be_shot_after_a_fair_trial 
11 weeks ago by cshalizi
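For a linear softmax classifier the trace penalty has a closed form, which makes the idea concrete (this is a sketch of the principle, not the paper's deep-network implementation): with p = softmax(Wx), the gradient of log p_c with respect to x is W_c minus the probability-weighted average row, so trace(FIM) = sum_c p_c ||W_c - p @ W||^2, and that trace upper-bounds the largest eigenvalue.

```python
import numpy as np

# Trace-of-FIM regularizer for a linear softmax classifier.

rng = np.random.default_rng(3)
n_classes, dim = 3, 5
W = rng.normal(size=(n_classes, dim))     # weight matrix, rows W_c
x = rng.normal(size=dim)                  # one input
y = 1                                     # its label

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

p = softmax(W @ x)
# grad_x log p_c = W_c - sum_k p_k W_k, hence:
Wbar = p @ W
fim_trace = sum(p[c] * np.sum((W[c] - Wbar) ** 2) for c in range(n_classes))

lam = 0.1                                 # illustrative penalty weight
loss = -np.log(p[y]) + lam * fim_trace    # regularized training loss
print(loss)
```

Since the largest eigenvalue of the (positive semi-definite) FIM is at most its trace, shrinking the trace shrinks the worst-case quadratic growth of the output KL divergence in the input perturbation.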
[1910.12227] EdgeFool: An Adversarial Image Enhancement Filter
"Adversarial examples are intentionally perturbed images that mislead classifiers. These images can, however, be easily detected using denoising algorithms, when high-frequency spatial perturbations are used, or can be noticed by humans, when perturbations are large. In this paper, we propose EdgeFool, an adversarial image enhancement filter that learns structure-aware adversarial perturbations. EdgeFool generates adversarial images with perturbations that enhance image details via training a fully convolutional neural network end-to-end with a multi-task loss function. This loss function accounts for both image detail enhancement and class misleading objectives. We evaluate EdgeFool on three classifiers (ResNet-50, ResNet-18 and AlexNet) using two datasets (ImageNet and Private-Places365) and compare it with six adversarial methods (DeepFool, SparseFool, Carlini-Wagner, SemanticAdv, Non-targeted and Private Fast Gradient Sign Methods)."
in_NB  adversarial_examples 
11 weeks ago by cshalizi
[1910.12196] Open the Boxes of Words: Incorporating Sememes into Textual Adversarial Attack
"Adversarial attacks are carried out to reveal the vulnerability of deep neural networks. Word substitution is a class of effective textual adversarial attack methods, which has been extensively explored. However, all existing studies utilize word embeddings or thesauruses to find substitutes. In this paper, we incorporate sememes, the minimum semantic units, into adversarial attack. We propose an efficient sememe-based word substitution strategy and integrate it into a genetic attack algorithm. In experiments, we employ our attack method to attack LSTM and BERT on both Chinese and English sentiment analysis as well as natural language inference benchmark datasets. Experimental results demonstrate our model achieves better attack success rates and less modification than the baseline methods based on word embedding or synonym. Furthermore, we find our attack model can bring more robustness enhancement to the target model with adversarial training."
in_NB  adversarial_examples 
11 weeks ago by cshalizi
[1910.12163] Understanding and Quantifying Adversarial Examples Existence in Linear Classification
"State-of-art deep neural networks (DNN) are vulnerable to attacks by adversarial examples: a carefully designed small perturbation to the input, that is imperceptible to human, can mislead DNN. To understand the root cause of adversarial examples, we quantify the probability of adversarial example existence for linear classifiers. Previous mathematical definition of adversarial examples only involves the overall perturbation amount, and we propose a more practical relevant definition of strong adversarial examples that separately limits the perturbation along the signal direction also. We show that linear classifiers can be made robust to strong adversarial examples attack in cases where no adversarial robust linear classifiers exist under the previous definition. The quantitative formulas are confirmed by numerical experiments using a linear support vector machine (SVM) classifier. The results suggest that designing general strong-adversarial-robust learning systems is feasible but only through incorporating human knowledge of the underlying classification problem."
in_NB  adversarial_examples  classifiers 
11 weeks ago by cshalizi
[1907.11684] On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
"Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations. Despite the long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific settings of threat models (e.g., single distortion metric and restrictive assumption on target model's feedback to queries) and/or suffer from prohibitively high query complexity. To push for further advances in this field, we introduce a general framework based on an operator splitting method, the alternating direction method of multipliers (ADMM) to devise efficient, robust black-box attacks that work with various distortion metrics and feedback settings without incurring high query complexity. Due to the black-box nature of the threat model, the proposed ADMM solution framework is integrated with zeroth-order (ZO) optimization and Bayesian optimization (BO), and thus is applicable to the gradient-free regime. This results in two new black-box adversarial attack generation methods, ZO-ADMM and BO-ADMM. Our empirical evaluations on image classification datasets show that our proposed approaches have much lower function query complexities compared to state-of-the-art attack methods, but achieve very competitive attack success rates."
in_NB  adversarial_examples  optimization 
11 weeks ago by cshalizi
[1910.09821] Structure Matters: Towards Generating Transferable Adversarial Images
"Recent works on adversarial examples for image classification focus on directly modifying pixels with minor perturbations. The small perturbation requirement is imposed to ensure the generated adversarial examples being natural and realistic to humans, which, however, puts a curb on the attack space thus limiting the attack ability and transferability especially for systems protected by a defense mechanism. In this paper, we propose the novel concepts of structure patterns and structure-aware perturbations that relax the small perturbation constraint while still keeping images natural. The key idea of our approach is to allow perceptible deviation in adversarial examples while keeping structure patterns that are central to a human classifier. Built upon these concepts, we propose a structure-preserving attack (SPA) for generating natural adversarial examples with extremely high transferability. Empirical results on the MNIST and the CIFAR10 datasets show that SPA adversarial images can easily bypass strong PGD-based adversarial training and are still effective against SPA-based adversarial training. Further, they transfer well to other target models with little or no loss of successful attack rate, thus exhibiting competitive black-box attack performance."
in_NB  adversarial_examples 
12 weeks ago by cshalizi
[1910.10106] Cross-Representation Transferability of Adversarial Perturbations: From Spectrograms to Audio Waveforms
"This paper shows the susceptibility of spectrogram-based audio classifiers to adversarial attacks and the transferability of such attacks to audio waveforms. Some common adversarial attacks on images have been applied to Mel-frequency and short-time Fourier transform spectrograms, and such perturbed spectrograms are able to fool a 2D convolutional neural network (CNN) for music genre classification with a high fooling rate and high confidence. Such attacks produce perturbed spectrograms that are visually imperceptible by humans. Experimental results on a dataset of western music have shown that the 2D CNN achieves up to 81.87% of mean accuracy on legitimate examples and such a performance drops to 12.09% on adversarial examples. Furthermore, the audio signals reconstructed from the adversarial spectrograms produce audio waveforms that perceptually resemble the legitimate audio."
in_NB  adversarial_examples 
12 weeks ago by cshalizi
[1910.09841] Quasi Maximum Likelihood Estimation of Non-Stationary Large Approximate Dynamic Factor Models
"This paper considers estimation of large dynamic factor models with common and idiosyncratic trends by means of the Expectation Maximization algorithm, implemented jointly with the Kalman smoother. We show that, as the cross-sectional dimension n and the sample size T diverge to infinity, the common component for a given unit estimated at a given point in time is min(√n, √T)-consistent. The case of local levels and/or local linear trends is also considered. By means of a Monte Carlo simulation exercise, we compare our approach with estimators based on principal component analysis."
in_NB  factor_analysis  time_series  spatio-temporal_statistics  to_teach:data_over_space_and_time  high-dimensional_statistics 
12 weeks ago by cshalizi
Phys. Rev. E 100, 042306 (2019) - Backbone reconstruction in temporal networks from epidemic data
"Many complex systems are characterized by time-varying patterns of interactions. These interactions comprise strong ties, driven by dyadic relationships, and weak ties, based on node-specific attributes. The interplay between strong and weak ties plays an important role on dynamical processes that could unfold on complex systems. However, seldom do we have access to precise information about the time-varying topology of interaction patterns. A particularly elusive question is to distinguish strong from weak ties, on the basis of the sole node dynamics. Building upon analytical results, we propose a statistically-principled algorithm to reconstruct the backbone of strong ties from data of a spreading process, consisting of the time series of individuals' states. Our method is numerically validated over a range of synthetic datasets, encapsulating salient features of real-world systems. Motivated by compelling evidence, we propose the integration of our algorithm in a targeted immunization strategy that prioritizes influential nodes in the inferred backbone. Through Monte Carlo simulations on synthetic networks and a real-world case study, we demonstrate the viability of our approach."
in_NB  network_data_analysis  statistics  epidemics_on_networks 
october 2019 by cshalizi
[1910.07629] A New Defense Against Adversarial Images: Turning a Weakness into a Strength
"Natural images are virtually surrounded by low-density misclassified regions that can be efficiently discovered by gradient-guided search --- enabling the generation of adversarial images. While many techniques for detecting these attacks have been proposed, they are easily bypassed when the adversary has full knowledge of the detection mechanism and adapts the attack strategy accordingly. In this paper, we adopt a novel perspective and regard the omnipresence of adversarial perturbations as a strength rather than a weakness. We postulate that if an image has been tampered with, these adversarial directions either become harder to find with gradient methods or have substantially higher density than for natural images. We develop a practical test for this signature characteristic to successfully detect adversarial attacks, achieving unprecedented accuracy under the white-box setting where the adversary is given full knowledge of our detection mechanism."
in_NB  adversarial_examples 
october 2019 by cshalizi
[1910.07067] On adversarial patches: real-world attack on ArcFace-100 face recognition system
"Recent works showed the vulnerability of image classifiers to adversarial attacks in the digital domain. However, the majority of attacks involve adding small perturbation to an image to fool the classifier. Unfortunately, such procedures can not be used to conduct a real-world attack, where adding an adversarial attribute to the photo is a more practical approach. In this paper, we study the problem of real-world attacks on face recognition systems. We examine security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, and propose a simple method to attack it in the physical world. The method suggests creating an adversarial patch that can be printed, added as a face attribute and photographed; the photo of a person with such attribute is then passed to the classifier such that the classifier's recognized class changes from correct to the desired one. Proposed generating procedure allows projecting adversarial patches not only on different areas of the face, such as nose or forehead but also on some wearable accessory, such as eyeglasses."
in_NB  adversarial_examples 
october 2019 by cshalizi
[1910.06943] The Local Elasticity of Neural Networks
"This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x' is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x' in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on real-life and synthetic datasets, whereas this is not observed in linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn corroborates the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss some implications of local elasticity to shed light on several intriguing aspects of deep neural networks."
in_NB  adversarial_examples  neural_networks  your_favorite_deep_neural_network_sucks  clustering  statistics 
october 2019 by cshalizi
[1910.05870] Network Modularity Controls the Speed of Information Diffusion
"The rapid diffusion of information and the adoption of ideas are of critical importance in situations as diverse as emergencies, collective actions, or advertising and marketing. Although the dynamics of large cascades have been extensively studied in various contexts, few have examined the mechanisms that govern the efficiency of information diffusion. Here, by employing the linear threshold model on networks with communities, we demonstrate that a prominent network feature---the modular structure---strongly affects the speed of information diffusion. Our simulation results show that, when global cascades are enabled, there exists an optimal network modularity for the most efficient information spreading process. Beyond this critical value, either a stronger or a weaker modular structure actually hinders the speed of global cascades. These results are further confirmed by predictions using an analytical approach. Our findings have practical implications in disciplines from marketing to epidemics, from neuroscience to engineering, where the understanding of the structural design of complex systems focuses on the efficiency of information propagation."
in_NB  information_cascades  community_discovery  network_data_analysis  epidemics_on_networks  re:do-institutions-evolve 
october 2019 by cshalizi
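A toy linear-threshold cascade on a two-community random graph makes the setup concrete (illustration only; the paper's networks, seeding, and parameter sweeps are more careful). Varying p_out relative to p_in changes the modularity, and with it how quickly a cascade seeded in one community engulfs the other.

```python
import numpy as np

# Linear threshold model on a two-block (planted-partition) random graph.

rng = np.random.default_rng(4)

def two_block_adjacency(n=200, p_in=0.1, p_out=0.01):
    labels = np.arange(n) >= n // 2               # community membership
    P = np.where(np.equal.outer(labels, labels), p_in, p_out)
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)                             # keep one triangle,
    return A + A.T                                # then symmetrize

def cascade(A, theta=0.1, n_seeds=5):
    n = len(A)
    active = np.zeros(n, bool)
    active[:n_seeds] = True                       # seed inside community 0
    deg = np.maximum(A.sum(axis=1), 1)
    for step in range(1, n + 1):
        frac = (A @ active) / deg                 # active-neighbour fraction
        newly = (~active) & (frac >= theta)       # threshold rule
        if not newly.any():
            return step, active.mean()
        active |= newly
    return n, active.mean()

steps, final_frac = cascade(two_block_adjacency())
print(steps, final_frac)
```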
[1910.04618] Universal Adversarial Perturbation for Text Classification
"Given a state-of-the-art deep neural network text classifier, we show the existence of a universal and very small perturbation vector (in the embedding space) that causes natural text to be misclassified with high probability. Unlike images on which a single fixed-size adversarial perturbation can be found, text is of variable length, so we define the "universality" as "token-agnostic", where a single perturbation is applied to each token, resulting in different perturbations of flexible sizes at the sequence level. We propose an algorithm to compute universal adversarial perturbations, and show that the state-of-the-art deep neural networks are highly vulnerable to them, even though they keep the neighborhood of tokens mostly preserved. We also show how to use these adversarial perturbations to generate adversarial text samples. The surprising existence of universal "token-agnostic" adversarial perturbations may reveal important properties of a text classifier."
in_NB  adversarial_examples 
october 2019 by cshalizi
[1910.03821] Quasi Maximum Likelihood Estimation and Inference of Large Approximate Dynamic Factor Models via the EM algorithm
"This paper studies Quasi Maximum Likelihood estimation of dynamic factor models for large panels of time series. Specifically, we consider the case in which the autocorrelation of the factors is explicitly accounted for and therefore the factor model has a state-space form. Estimation of the factors and their loadings is implemented by means of the Expectation Maximization algorithm, jointly with the Kalman smoother. We prove that, as both the dimension of the panel n and the sample size T diverge to infinity, the estimated loadings, factors, and common components are min(√n, √T)-consistent and asymptotically normal. Although the model is estimated under the unrealistic constraint of independent idiosyncratic errors, this mis-specification does not affect consistency. Moreover, we give conditions under which the derived asymptotic distribution can still be used for inference even in case of mis-specifications. Our results are confirmed by a Monte Carlo simulation exercise where we compare the performance of our estimators with Principal Components."
in_NB  factor_analysis  statistics  time_series  to_teach:data_over_space_and_time 
october 2019 by cshalizi
[1910.04221] Likelihood-based Inference for Partially Observed Epidemics on Dynamic Networks
"We propose a generative model and an inference scheme for epidemic processes on dynamic, adaptive contact networks. Network evolution is formulated as a link-Markovian process, which is then coupled to an individual-level stochastic SIR model, in order to describe the interplay between epidemic dynamics on a network and network link changes. A Markov chain Monte Carlo framework is developed for likelihood-based inference from partial epidemic observations, with a novel data augmentation algorithm specifically designed to deal with missing individual recovery times under the dynamic network setting. Through a series of simulation experiments, we demonstrate the validity and flexibility of the model as well as the efficacy and efficiency of the data augmentation inference scheme. The model is also applied to a recent real-world dataset on influenza-like-illness transmission with high-resolution social contact tracking records."
in_NB  epidemics_on_networks  state-space_models  statistical_inference_for_stochastic_processes  statistics 
october 2019 by cshalizi
[1910.00164] Entropy Penalty: Towards Generalization Beyond the IID Assumption
"It has been shown that instead of learning actual object features, deep networks tend to exploit non-robust (spurious) discriminative features that are shared between training and test sets. Therefore, while they achieve state of the art performance on such test sets, they achieve poor generalization on out of distribution (OOD) samples where the IID (independent, identical distribution) assumption breaks and the distribution of non-robust features shifts. Through theoretical and empirical analysis, we show that this happens because maximum likelihood training (without appropriate regularization) leads the model to depend on all the correlations (including spurious ones) present between inputs and targets in the dataset. We then show evidence that the information bottleneck (IB) principle can address this problem. To do so, we propose a regularization approach based on IB, called Entropy Penalty, that reduces the model's dependence on spurious features-- features corresponding to such spurious correlations. This allows deep networks trained with Entropy Penalty to generalize well even under distribution shift of spurious features. As a controlled test-bed for evaluating our claim, we train deep networks with Entropy Penalty on a colored MNIST (C-MNIST) dataset and show that it is able to generalize well on vanilla MNIST, MNIST-M and SVHN datasets in addition to an OOD version of C-MNIST itself. The baseline regularization methods we compare against fail to generalize on this test-bed. Our code is available at this https URL."
in_NB  information_bottleneck  adversarial_examples  your_favorite_deep_neural_network_sucks  to_be_shot_after_a_fair_trial 
october 2019 by cshalizi
[1906.00555] Adversarially Robust Generalization Just Requires More Unlabeled Data
"Neural network robustness has recently been highlighted by the existence of adversarial examples. Many previous works show that the learned networks do not perform well on perturbed test data, and significantly more labeled data is required to achieve adversarially robust generalization. In this paper, we theoretically and empirically show that with just more unlabeled data, we can learn a model with better adversarially robust generalization. The key insight of our results is based on a risk decomposition theorem, in which the expected robust risk is separated into two parts: the stability part which measures the prediction stability in the presence of perturbations, and the accuracy part which evaluates the standard classification accuracy. As the stability part does not depend on any label information, we can optimize this part using unlabeled data. We further prove that for a specific Gaussian mixture problem, adversarially robust generalization can be almost as easy as the standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided. Inspired by the theoretical findings, we further show that a practical adversarial training algorithm that leverages unlabeled data can improve adversarial robust generalization on MNIST and Cifar-10."
in_NB  adversarial_examples  to_be_shot_after_a_fair_trial 
october 2019 by cshalizi
[1906.00945] Adversarial Robustness as a Prior for Learned Representations
"An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations. Our code and models for reproducing these results is available at this https URL ."
in_NB  optimization  adversarial_examples 
october 2019 by cshalizi
[1909.11786] Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection
"We present a principled approach for detecting out-of-distribution (OOD) and adversarial samples in deep neural networks. Our approach consists in modeling the outputs of the various layers (deep features) with parametric probability distributions once training is completed. At inference, the likelihoods of the deep features w.r.t the previously learnt distributions are calculated and used to derive uncertainty estimates that can discriminate in-distribution samples from OOD samples. We explore the use of two classes of multivariate distributions for modeling the deep features - Gaussian and Gaussian mixture - and study the trade-off between accuracy and computational complexity. We demonstrate benefits of our approach on image features by detecting OOD images and adversarially-generated images, using popular DNN architectures on MNIST and CIFAR10 datasets. We show that more precise modeling of the feature distributions result in significantly improved detection of OOD and adversarial samples; up to 12 percentage points in AUPR and AUROC metrics. We further show that our approach remains extremely effective when applied to video data and associated spatio-temporal features by detecting adversarial samples on activity classification tasks using UCF101 dataset, and the C3D network. To our knowledge, our methodology is the first one reported for reliably detecting white-box adversarial framing, a state-of-the-art adversarial attack for video classifiers."
in_NB  adversarial_examples  uncertainty_for_neural_networks 
october 2019 by cshalizi
[1909.11835] GAMIN: An Adversarial Approach to Black-Box Model Inversion
"Recent works have demonstrated that machine learning models are vulnerable to model inversion attacks, which lead to the exposure of sensitive information contained in their training dataset. While some model inversion attacks have been developed in the past in the black-box attack setting, in which the adversary does not have direct access to the structure of the model, few of these have been conducted so far against complex models such as deep neural networks. In this paper, we introduce GAMIN (for Generative Adversarial Model INversion), a new black-box model inversion attack framework achieving significant results even against deep models such as convolutional neural networks at a reasonable computing cost. GAMIN is based on the continuous training of a surrogate model for the target model under attack and a generator whose objective is to generate inputs resembling those used to train the target model. The attack was validated against various neural networks used as image classifiers. In particular, when attacking models trained on the MNIST dataset, GAMIN is able to extract recognizable digits for up to 60% of labels produced by the target. Attacks against skin classification models trained on the pilot parliament dataset also demonstrated the capacity to extract recognizable features from the targets."
in_NB  adversarial_examples  inverse_problems  statistics  machine_learning  to_read 
october 2019 by cshalizi
[1904.04334] A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
"Due to the lack of enough training data and high computational cost to train a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications. A commonly-used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset. This approach, while efficient and widely used, imposes a security vulnerability because the pre-trained model used in transfer learning are usually available publicly to everyone, including potential attackers. In this paper, we show that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. We assume that the attacker does not have access to any target-specific information, including samples from target classes, re-trained model, and probabilities assigned by Softmax to each class, and thus called target-agnostic attack. These assumptions render all previous attacks impractical, to the best of our knowledge. To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack. Our work sheds light on a fundamental security challenge of the Softmax layer when used in transfer learning settings."
in_NB  adversarial_examples 
september 2019 by cshalizi
[1909.09695] Epidemic spreading on modular networks: the fear to declare a pandemic
"In the last decades, the frequency of pandemics has been increased due to the growth of urbanization and mobility among countries. Since a disease spreading in one country could become a pandemic with a potential worldwide humanitarian and economic impact, it is important to develop models to estimate the probability of a worldwide pandemic. In this paper, we propose a model of disease spreading in a modular complex network (having communities) and study how the number of bridge nodes n that connect communities affects the disease spreading. We find that our model can be described at a global scale as an infectious transmission process between communities with infectious and recovery time distributions that depend on the internal structure of each community and n. At the steady state, we find that near the critical point as the number of bridge nodes increases, the disease could reach all the communities but with a small fraction of recovered nodes in each community. In addition, we obtain that in this limit, the probability of a pandemic increases abruptly at the critical point. This scenario could make more difficult the decision to launch or not a pandemic alert. Finally, we show that link percolation theory can be used at a global scale to estimate the probability of a pandemic."
in_NB  epidemics_on_networks 
september 2019 by cshalizi
[1909.06872] Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
"Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks, which are small perturbations added to their input images to mislead their prediction. Detection of adversarial examples is, therefore, a fundamental requirement for robust classification frameworks. In this work, we present a method for detecting such adversarial attacks, which is suitable for any pre-trained neural network classifier. We use influence functions to measure the impact of every training sample on the validation set data. From the influence scores, we find the most supportive training samples for any given validation example. A k-nearest neighbor (k-NN) model fitted on the DNN's activation layers is employed to search for the ranking of these supporting training samples. We observe that these samples are highly correlated with the nearest neighbors of the normal inputs, while this correlation is much weaker for adversarial inputs. We train an adversarial detector using the k-NN ranks and distances and show that it successfully distinguishes adversarial examples, getting state-of-the-art results on four attack methods with three datasets."
in_NB  adversarial_examples 
september 2019 by cshalizi
[1907.12392] A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment
"Empowerment is an information-theoretic method that can be used to intrinsically motivate learning agents. It attempts to maximize an agent's control over the environment by encouraging visiting states with a large number of reachable next states. Empowered learning has been shown to lead to complex behaviors, without requiring an explicit reward signal. In this paper, we investigate the use of empowerment in the presence of an extrinsic reward signal. We hypothesize that empowerment can guide reinforcement learning (RL) agents to find good early behavioral solutions by encouraging highly empowered states. We propose a unified Bellman optimality principle for empowered reward maximization. Our empowered reward maximization approach generalizes both Bellman's optimality principle as well as recent information-theoretical extensions to it. We prove uniqueness of the empowered values and show convergence to the optimal solution. We then apply this idea to develop off-policy actor-critic RL algorithms for high-dimensional continuous domains. We experimentally validate our methods in robotics domains (MuJoCo). Our methods demonstrate improved initial and competitive final performance compared to model-free state-of-the-art techniques."

--- Seems kinda ad-hoc at first glance, look more later in copious spare time...
in_NB  reinforcement_learning  information_theory 
september 2019 by cshalizi
[1908.04358] Graph hierarchy and spread of infections
"Trophic levels and hence trophic coherence can be defined only on networks with well defined sources, trophic analysis of networks had been restricted to the ecological domain until now. Trophic coherence, a measure of a network's hierarchical organisation, has been shown to be linked to a network's structural and dynamical aspects. In this paper we introduce hierarchical levels, which is a generalisation of trophic levels, that can be defined on any simple graph and we interpret it as a network influence metric. We discuss how our generalisation relates to the previous definition and what new insights our generalisation shines on the topological and dynamical aspects of networks. We also show that the mean of hierarchical differences correlates strongly with the topology of the graph. Finally, we model an epidemiological dynamics and show how the statistical properties of hierarchical differences relate to the incidence rate and how it affects the spreading process in a SIS model."
in_NB  epidemics_on_networks  re:do-institutions-evolve  have_read  shot_after_a_fair_trial 
september 2019 by cshalizi
[1908.01297] A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models
"With the great success of graph embedding model on both academic and industry area, the robustness of graph embedding against adversarial attack inevitably becomes a central problem in graph learning domain. Regardless of the fruitful progress, most of the current works perform the attack in a white-box fashion: they need to access the model predictions and labels to construct their adversarial loss. However, the inaccessibility of model predictions in real systems makes the white-box attack impractical to real graph learning system. This paper promotes current frameworks in a more general and flexible sense -- we demand to attack various kinds of graph embedding model with black-box driven. To this end, we begin by investigating the theoretical connections between graph signal processing and graph embedding models in a principled way and formulate the graph embedding model as a general graph signal process with corresponding graph filter. As such, a generalized adversarial attacker: GF-Attack is constructed by the graph filter and feature matrix. Instead of accessing any knowledge of the target classifiers used in graph embedding, GF-Attack performs the attack only on the graph filter in a black-box attack fashion. To validate the generalization of GF-Attack, we construct the attacker on four popular graph embedding models. Extensive experimental results validate the effectiveness of our attacker on several benchmark datasets. Particularly by using our attack, even small graph perturbations like one-edge flip is able to consistently make a strong attack in performance to different graph embedding models."
in_NB  network_data_analysis  adversarial_examples 
september 2019 by cshalizi
[1904.08554] Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks
"Deep neural networks are vulnerable to adversarial attacks. Numerous efforts have focused on defenses that either try to patch `holes' in trained models or try to make it difficult or costly to compute adversarial examples exploiting these holes. In our work, we explore a counter-intuitive approach of constructing "adversarial trapdoors. Unlike prior works that try to patch or disguise vulnerable points in the manifold, we intentionally inject `trapdoors,' artificial weaknesses in the manifold that attract optimized perturbation into certain pre-embedded local optima. As a result, the adversarial generation functions naturally gravitate towards our trapdoors, producing adversarial examples that the model owner can recognize through a known neuron activation signature. In this paper, we introduce trapdoors and describe an implementation of trapdoors using similar strategies to backdoor/Trojan attacks. We show that by proactively injecting trapdoors into the models (and extracting their neuron activation signature), we can detect adversarial examples generated by the state of the art attacks (Projected Gradient Descent, Optimization based CW, and Elastic Net) with high detection success rate and negligible impact on normal inputs. These results also generalize across multiple classification domains (image recognition, face recognition and traffic sign recognition). We explore different properties of trapdoors, and discuss potential countermeasures (adaptive attacks) and mitigations."
in_NB  adversarial_examples 
september 2019 by cshalizi
[1907.11932] Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
"Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective---it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving---it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient---it generates adversarial text with computational complexity linear to the text length."
in_NB  adversarial_examples  text_mining 
september 2019 by cshalizi
On Whorfian Socioeconomics by Thomas B. Pepinsky :: SSRN
"Whorfian socioeconomics is an emerging interdisciplinary field of study that holds that linguistic structures explain differences in beliefs, values, and opinions across communities. Its core empirical strategy is to document a correlation between the presence or absence of a linguistic feature in a survey respondent’s language, and her/his responses to survey questions. This essay demonstrates — using the universe of linguistic features from the World Atlas of Language Structures and a wide array of responses from the World Values Survey — that such an approach produces highly statistically significant correlations in a majority of analyses, irrespective of the theoretical plausibility linking linguistic features to respondent beliefs. These results raise the possibility that correlations between linguistic features and survey responses are actually spurious. The essay concludes by showing how two simple and well-understood statistical fixes can more accurately reflect uncertainty in these analyses, reducing the temptation for analysts to create implausible Whorfian theories to explain spurious linguistic correlations."
in_NB  linguistics  economics  social_science_methodology  pepinsky.thomas_b.  debunking  evisceration  have_read  to_teach:linear_models  have_sent_gushing_fanmail  to:blog  to_teach:data_over_space_and_time 
september 2019 by cshalizi
[1909.04495] Natural Adversarial Sentence Generation with Gradient-based Perturbation
"This work proposes a novel algorithm to generate natural language adversarial input for text classification models, in order to investigate the robustness of these models. It involves applying gradient-based perturbation on the sentence embeddings that are used as the features for the classifier, and learning a decoder for generation. We employ this method to a sentiment analysis model and verify its effectiveness in inducing incorrect predictions by the model. We also conduct quantitative and qualitative analysis on these examples and demonstrate that our approach can generate more natural adversaries. In addition, it can be used to successfully perform black-box attacks, which involves attacking other existing models whose parameters are not known. On a public sentiment analysis API, the proposed method introduces a 20% relative decrease in average accuracy and 74% relative increase in absolute error."
in_NB  adversarial_examples 
september 2019 by cshalizi
[1909.02436] Are Adversarial Robustness and Common Perturbation Robustness Independant Attributes ?
"Neural Networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, rotations, etc. They are also vulnerable to some artificial malicious corruptions called adversarial examples. The adversarial examples study has recently become very popular and it sometimes even reduces the term "adversarial robustness" to the term "robustness". Yet, we do not know to what extent the adversarial robustness is related to the global robustness. Similarly, we do not know if a robustness to various common perturbations such as translations or contrast losses for instance, could help with adversarial corruptions. We intend to study the links between the robustnesses of neural networks to both perturbations. With our experiments, we provide one of the first benchmark designed to estimate the robustness of neural networks to common perturbations. We show that increasing the robustness to carefully selected common perturbations, can make neural networks more robust to unseen common perturbations. We also prove that adversarial robustness and robustness to common perturbations are independent. Our results make us believe that neural network robustness should be addressed in a broader sense."
in_NB  adversarial_examples 
september 2019 by cshalizi
[1706.06083] Towards Deep Learning Models Resistant to Adversarial Attacks
"Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples---inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL."
in_NB  adversarial_examples 
september 2019 by cshalizi
Uncivil Agreement: How Politics Became Our Identity, Mason
"Political polarization in America is at an all-time high, and the conflict has moved beyond disagreements about matters of policy. For the first time in more than twenty years, research has shown that members of both parties hold strongly unfavorable views of their opponents. This is polarization rooted in social identity, and it is growing. The campaign and election of Donald Trump laid bare this fact of the American electorate, its successful rhetoric of “us versus them” tapping into a powerful current of anger and resentment.
"With Uncivil Agreement, Lilliana Mason looks at the growing social gulf across racial, religious, and cultural lines, which have recently come to divide neatly between the two major political parties. She argues that group identifications have changed the way we think and feel about ourselves and our opponents. Even when Democrats and Republicans can agree on policy outcomes, they tend to view one other with distrust and to work for party victory over all else. Although the polarizing effects of social divisions have simplified our electoral choices and increased political engagement, they have not been a force that is, on balance, helpful for American democracy. Bringing together theory from political science and social psychology, Uncivil Agreement clearly describes this increasingly “social” type of polarization in American politics and will add much to our understanding of contemporary politics."
in_NB  books:noted  us_politics  identity_group_formation 
september 2019 by cshalizi
The genetic history of admixture across inner Eurasia | Nature Ecology & Evolution
"The indigenous populations of inner Eurasia—a huge geographic region covering the central Eurasian steppe and the northern Eurasian taiga and tundra—harbour tremendous diversity in their genes, cultures and languages. In this study, we report novel genome-wide data for 763 individuals from Armenia, Georgia, Kazakhstan, Moldova, Mongolia, Russia, Tajikistan, Ukraine and Uzbekistan. We furthermore report additional damage-reduced genome-wide data of two previously published individuals from the Eneolithic Botai culture in Kazakhstan (~5,400 BP). We find that present-day inner Eurasian populations are structured into three distinct admixture clines stretching between various western and eastern Eurasian ancestries, mirroring geography. The Botai and more recent ancient genomes from Siberia show a decrease in contributions from so-called ‘ancient North Eurasian’ ancestry over time, which is detectable only in the northern-most ‘forest-tundra’ cline. The intermediate ‘steppe-forest’ cline descends from the Late Bronze Age steppe ancestries, while the ‘southern steppe’ cline further to the south shows a strong West/South Asian influence. Ancient genomes suggest a northward spread of the southern steppe cline in Central Asia during the first millennium BC. Finally, the genetic structure of Caucasus populations highlights a role of the Caucasus Mountains as a barrier to gene flow and suggests a post-Neolithic gene flow into North Caucasus populations from the steppe."
in_NB  central_asia  historical_genetics 
september 2019 by cshalizi
[1606.01200] Simple and Honest Confidence Intervals in Nonparametric Regression
"We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we use critical values that take into account the possible bias of the estimator upon which the CIs are based. We show that this approach leads to CIs that are more efficient than conventional CIs that achieve coverage by undersmoothing or subtracting an estimate of the bias. We give sharp efficiency bounds of using different kernels, and derive the optimal bandwidth for constructing honest CIs. We show that using the bandwidth that minimizes the maximum mean-squared error results in CIs that are nearly efficient and that in this case, the critical value depends only on the rate of convergence. For the common case in which the rate of convergence is n−2/5, the appropriate critical value for 95% CIs is 2.18, rather than the usual 1.96 critical value. We illustrate our results in a Monte Carlo analysis and an empirical application."
in_NB  confidence_sets  nonparametrics  statistics 
august 2019 by cshalizi
Public Capitalism: The Political Authority of Corporate Executives on JSTOR
"In modern capitalist societies, the executives of large, profit-seeking corporations have the power to shape the collective life of the communities, local and global, in which they operate. Corporate executives issue directives to employees, who are normally prepared to comply with them, and impose penalties such as termination on those who fail to comply. The decisions made by corporate executives also affect people outside the corporation: investors, customers, suppliers, the general public. What can justify authority with such a broad reach? Political philosopher Christopher McMahon argues that the social authority of corporate executives is best understood as a form of political authority. Although corporations are privately owned, they must be managed in a way that promotes the public good. Public Capitalism begins with this claim and explores its implications for issues including corporate property rights, the moral status of corporations, the permissibility of layoffs and plant closings, and the legislative role played by corporate executives. Corporate executives acquire the status of public officials of a certain kind, who can be asked to work toward social goods in addition to prosperity. Public Capitalism sketches a new framework for discussion of the moral and political issues faced by corporate executives."
in_NB  downloaded  books:noted  corporations  political_philosophy  management  capitalism  democracy 
august 2019 by cshalizi
[1805.07820] Targeted Adversarial Examples for Black Box Audio Systems
"The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems have focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task. We achieve a 89.25% targeted attack similarity after 3000 generations while maintaining 94.6% audio file similarity."
in_NB  adversarial_examples 
august 2019 by cshalizi
[1908.07125] Universal Adversarial Triggers for NLP
"Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of "why" questions in SQuAD to be answered "to kill american people", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models."
in_NB  adversarial_examples  natural_language_processing 
august 2019 by cshalizi
[1908.06133] A model of discrete choice based on reinforcement learning under short-term memory
"A family of models of individual discrete choice is constructed by means of statistical averaging of choices made by a subject in a reinforcement learning process, where the subject has a short, k-term memory span. The choice probabilities in these models combine in a non-trivial, non-linear way the initial learning bias and the experience gained through learning. The properties of such models are discussed and, in particular, it is shown that the probabilities deviate from Luce's Choice Axiom, even if the initial bias adheres to it. Moreover, we show that the latter property is recovered as the memory span becomes large.
"Two applications in utility theory are considered. In the first, we use the discrete choice model to generate a binary preference relation on simple lotteries. We show that the preferences violate the transitivity and independence axioms of expected utility theory. Furthermore, we establish the dependence of the preferences on frames, with risk aversion for gains and risk seeking for losses. Based on these findings, we next propose a parametric model of choice based on the probability maximization principle, as a model for deviations from the expected utility principle. To illustrate the approach, we apply it to the classical problem of demand for insurance."
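
--- A crude simulation of a k-term-memory chooser (my own toy dynamics, not the authors' model): the subject weights each option by its remembered average payoff, defaulting to weight 1 for options absent from memory.

```python
import numpy as np
from collections import deque

def choice_freqs(payoffs, k=3, n_steps=20000, seed=2):
    # The subject remembers only the last k (choice, payoff) pairs and picks
    # each option with probability proportional to its remembered average
    # payoff; forgotten options get the default weight 1.
    rng = np.random.default_rng(seed)
    n = len(payoffs)
    memory = deque(maxlen=k)
    counts = np.zeros(n)
    for _ in range(n_steps):
        weights = np.ones(n)
        for i in range(n):
            recalled = [p for (c, p) in memory if c == i]
            if recalled:
                weights[i] = max(float(np.mean(recalled)), 1e-6)
        choice = rng.choice(n, p=weights / weights.sum())
        memory.append((choice, payoffs[choice] + rng.normal(0.0, 0.1)))
        counts[choice] += 1
    return counts / n_steps
```

Because the weights depend on the recent history, the long-run choice frequencies need not satisfy Luce's Choice Axiom, even though each instantaneous choice is a Luce choice over the current weights.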
in_NB  reinforcement_learning  econometrics 
august 2019 by cshalizi
[1902.09286] Adversarial attacks hidden in plain sight
"Convolutional neural networks have been used to achieve a string of successes during recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. Several defensive approaches increase robustness against adversarial attacks, demanding attacks of greater magnitude, which lead to visible artifacts. By considering human visual perception, we compose a technique that allows such adversarial attacks to be hidden in regions of high complexity, such that they are imperceptible even to an astute observer. We carry out a user study on classifying adversarially modified images to validate the perceptual quality of our approach and find significant evidence for its concealment with regard to human visual perception."
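
--- The masking idea in miniature, with local standard deviation as a crude stand-in for whatever perceptual-complexity measure they actually use: scale the perturbation so it concentrates in textured regions and vanishes on flat ones.

```python
import numpy as np

def local_complexity(img, w=3):
    # Local standard deviation over a w-by-w window: high in textured
    # regions, exactly zero on flat ones.
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + w, j:j + w].std()
    return out

def masked_perturbation(img, delta, w=3):
    # Scale the adversarial perturbation delta by normalized local
    # complexity, hiding it where texture conceals it from human observers.
    c = local_complexity(img, w)
    return img + delta * c / (c.max() + 1e-12)
```

A real attack would alternate this masking with the adversarial optimization so the attack stays effective; the sketch only shows the spatial reweighting.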
in_NB  adversarial_examples  perception 
august 2019 by cshalizi
[1908.06456] Harmonic Analysis of Symmetric Random Graphs
"Following Ressel (1985, 2008), this note attempts to understand graph limits (Lovasz and Szegedy 2006) in terms of harmonic analysis on semigroups (Berg et al. 1984), thereby providing an alternative derivation of de Finetti's theorem for random exchangeable graphs."

--- SL has been hinting about this for years (it's the natural combination of his 70s--80s work on "extremal point" models, sufficiency, and semi-groups with his recent interest in graph limits and graphons), so I'm very excited to read this.

--- ETA after reading: It's everything one might hope; isomorphism classes of graphs show up as the natural sufficient statistics in a generalized exponential family, etc.
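
--- For concreteness, the exchangeable-graph version of de Finetti (the Aldous--Hoover representation) in executable form: i.i.d. latent uniforms plus a graphon. The product graphon below is just my example.

```python
import numpy as np

def sample_exchangeable_graph(W, n, seed=3):
    # Aldous-Hoover / de Finetti representation: draw i.i.d. latent uniforms
    # U_i, then include each edge i~j independently with prob. W(U_i, U_j).
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=n)
    P = W(U[:, None], U[None, :])
    upper = np.triu((rng.uniform(size=(n, n)) < P).astype(int), 1)
    return upper + upper.T      # symmetric adjacency matrix, zero diagonal

def product_graphon(u, v):
    return u * v                # example graphon; expected edge density 1/4
```

Any vertex permutation leaves the distribution invariant, which is exactly the exchangeability that the isomorphism-class sufficient statistics exploit.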
in_NB  have_read  graph_limits  analysis  probability  lauritzen.steffen 
august 2019 by cshalizi
