cshalizi + re:adafaepov   71

Cheng, Chen: Nonparametric inference via bootstrapping the debiased estimator
"In this paper, we propose to construct confidence bands by bootstrapping the debiased kernel density estimator (for density estimation) and the debiased local polynomial regression estimator (for regression analysis). The idea of using a debiased estimator was recently employed by Calonico et al. (2018b) to construct a confidence interval of the density function (and regression function) at a given point by explicitly estimating stochastic variations. We extend their ideas of using the debiased estimator and further propose a bootstrap approach for constructing simultaneous confidence bands. This modified method has an advantage that we can easily choose the smoothing bandwidth from conventional bandwidth selectors and the confidence band will be asymptotically valid. We prove the validity of the bootstrap confidence band and generalize it to density level sets and inverse regression problems. Simulation studies confirm the validity of the proposed confidence bands/sets. We apply our approach to an Astronomy dataset to show its applicability."
to:NB  to_read  statistics  bootstrap  confidence_sets  regression  density_estimation  re:ADAfaEPoV 
7 weeks ago by cshalizi
The Real Gold Standard: Measuring Counterfactual Worlds That Matter Most to Social Science and Policy | Annual Review of Criminology
"The randomized experiment has achieved the status of the gold standard for estimating causal effects in criminology and the other social sciences. Although causal identification is indeed important and observational data present numerous challenges to causal inference, we argue that conflating causality with the method used to identify it leads to a cognitive narrowing that diverts attention from what ultimately matters most—the difference between counterfactual worlds that emerge as a consequence of their being subjected to different treatment regimes applied to all eligible population members over a sustained period of time. To address this system-level and long-term challenge, we develop an analytic framework for integrating causality and policy inference that accepts the mandate of causal rigor but is conceptually rather than methodologically driven. We then apply our framework to two substantive areas that have generated high-visibility experimental research and that have considerable policy influence: (a) hot-spots policing and (b) the use of housing vouchers to reduce concentrated disadvantage and thereby crime. After reviewing the research in these two areas in light of our framework, we propose a research path forward and conclude with implications for the interplay of theory, data, and causal understanding in criminology and other social sciences."
to:NB  causal_inference  causality  social_science_methodology  statistics  nagin.dan  kith_and_kin  re:ADAfaEPoV 
may 2019 by cshalizi
Identification and Extrapolation of Causal Effects with Instrumental Variables | Annual Review of Economics
"Instrumental variables (IV) are widely used in economics to address selection on unobservables. Standard IV methods produce estimates of causal effects that are specific to individuals whose behavior can be manipulated by the instrument at hand. In many cases, these individuals are not the same as those who would be induced to treatment by an intervention or policy of interest to the researcher. The average causal effect for the two groups can differ significantly if the effect of the treatment varies systematically with unobserved factors that are correlated with treatment choice. We review the implications of this type of unobserved heterogeneity for the interpretation of standard IV methods and for their relevance to policy evaluation. We argue that making inferences about policy-relevant parameters typically requires extrapolating from the individuals affected by the instrument to the individuals who would be induced to treatment by the policy under consideration. We discuss a variety of alternatives to standard IV methods that can be used to rigorously perform this extrapolation. We show that many of these approaches can be nested as special cases of a general framework that embraces the possibility of partial identification."

--- Memo to self: Read this before revising the IV sections of ADAfaEPoV.
to:NB  causal_inference  instrumental_variables  partial_identification  statistics  re:ADAfaEPoV  to_read 
may 2019 by cshalizi
Murray: Multiple Imputation: A Review of Practical and Theoretical Findings
"Multiple imputation is a straightforward method for handling missing data in a principled fashion. This paper presents an overview of multiple imputation, including important theoretical results and their practical implications for generating and using multiple imputations. A review of strategies for generating imputations follows, including recent developments in flexible joint modeling and sequential regression/chained equations/fully conditional specification approaches. Finally, we compare and contrast different methods for generating imputations on a range of criteria before identifying promising avenues for future research."
to:NB  statistics  missing_data  multiple_imputation  re:ADAfaEPoV 
may 2019 by cshalizi
On the Interpretation of do(x) : Journal of Causal Inference
"This paper provides empirical interpretation of the do(x) operator when applied to non-manipulable variables such as race, obesity, or cholesterol level. We view do(x) as an ideal intervention that provides valuable information on the effects of manipulable variables and is thus empirically testable. We draw parallels between this interpretation and ways of enabling machines to learn effects of untried actions from those tried. We end with the conclusion that researchers need not distinguish manipulable from non-manipulable variables; both types are equally eligible to receive the do(x) operator and to produce useful information for decision makers."
to:NB  causality  pearl.judea  re:ADAfaEPoV  to_read 
may 2019 by cshalizi
Partial Identification of the Average Treatment Effect Using Instrumental Variables: Review of Methods for Binary Instruments, Treatments, and Outcomes: Journal of the American Statistical Association: Vol 113, No 522
"Several methods have been proposed for partially or point identifying the average treatment effect (ATE) using instrumental variable (IV) type assumptions. The descriptions of these methods are widespread across the statistical, economic, epidemiologic, and computer science literature, and the connections between the methods have not been readily apparent. In the setting of a binary instrument, treatment, and outcome, we review proposed methods for partial and point identification of the ATE under IV assumptions, express the identification results in a common notation and terminology, and propose a taxonomy that is based on sets of identifying assumptions. We further demonstrate and provide software for the application of these methods to estimate bounds. Supplementary materials for this article are available online."
to:NB  instrumental_variables  causal_inference  nonparametrics  statistics  re:ADAfaEPoV 
april 2019 by cshalizi
Nonparametric Estimation of Triangular Simultaneous Equations Models on JSTOR
"This paper presents a simple two-step nonparametric estimator for a triangular simultaneous equation model. Our approach employs series approximations that exploit the additive structure of the model. The first step comprises the nonparametric estimation of the reduced form and the corresponding residuals. The second step is the estimation of the primary equation via nonparametric regression with the reduced form residuals included as a regressor. We derive consistency and asymptotic normality results for our estimator, including optimal convergence rates. Finally we present an empirical example, based on the relationship between the hourly wage rate and annual hours worked, which illustrates the utility of our approach."
to:NB  nonparametrics  instrumental_variables  causal_inference  statistics  regression  econometrics  re:ADAfaEPoV 
april 2019 by cshalizi
AEA Web - American Economic Review - 103(3):550 - Abstract
"n many economic models, objects of interest are functions which satisfy conditional moment restrictions. Economics does not restrict the functional form of these models, motivating nonparametric methods. In this paper we review identification results and describe a simple nonparametric instrumental variables (NPIV) estimator. We also consider a simple method of inference. In addition we show how the ability to uncover nonlinearities with conditional moment restrictions is related to the strength of the instruments. We point to applications where important nonlinearities can be found with NPIV and applications where they cannot."
to:NB  nonparametrics  instrumental_variables  regression  causal_inference  statistics  econometrics  re:ADAfaEPoV 
april 2019 by cshalizi
Applied Nonparametric Instrumental Variables Estimation - Horowitz - 2011 - Econometrica - Wiley Online Library
"Instrumental variables are widely used in applied econometrics to achieve identification and carry out estimation and inference in models that contain endogenous explanatory variables. In most applications, the function of interest (e.g., an Engel curve or demand function) is assumed to be known up to finitely many parameters (e.g., a linear model), and instrumental variables are used to identify and estimate these parameters. However, linear and other finite‐dimensional parametric models make strong assumptions about the population being modeled that are rarely if ever justified by economic theory or other a priori reasoning and can lead to seriously erroneous conclusions if they are incorrect. This paper explores what can be learned when the function of interest is identified through an instrumental variable but is not assumed to be known up to finitely many parameters. The paper explains the differences between parametric and nonparametric estimators that are important for applied research, describes an easily implemented nonparametric instrumental variables estimator, and presents empirical examples in which nonparametric methods lead to substantive conclusions that are quite different from those obtained using standard, parametric estimators."
to:NB  nonparametrics  instrumental_variables  causal_inference  econometrics  statistics  inverse_problems  re:ADAfaEPoV 
april 2019 by cshalizi
Hall, Horowitz: Nonparametric methods for inference in the presence of instrumental variables
"We suggest two nonparametric approaches, based on kernel methods and orthogonal series to estimating regression functions in the presence of instrumental variables. For the first time in this class of problems, we derive optimal convergence rates, and show that they are attained by particular estimators. In the presence of instrumental variables the relation that identifies the regression function also defines an ill-posed inverse problem, the “difficulty” of which depends on eigenvalues of a certain integral operator which is determined by the joint density of endogenous and instrumental variables. We delineate the role played by problem difficulty in determining both the optimal convergence rate and the appropriate choice of smoothing parameter."
to:NB  nonparametrics  instrumental_variables  causal_inference  econometrics  statistics  inverse_problems  re:ADAfaEPoV 
april 2019 by cshalizi
Nonparametric Instrumental Regression - Darolles - 2011 - Econometrica - Wiley Online Library
"The focus of this paper is the nonparametric estimation of an instrumental regression function ϕ defined by conditional moment restrictions that stem from a structural econometric model E[Y−ϕ(Z)|W]=0, and involve endogenous variables Y and Z and instruments W. The function ϕ is the solution of an ill‐posed inverse problem and we propose an estimation procedure based on Tikhonov regularization. The paper analyzes identification and overidentification of this model, and presents asymptotic properties of the estimated nonparametric instrumental regression function."
to:NB  nonparametrics  instrumental_variables  causal_inference  statistics  inverse_problems  regression  econometrics  re:ADAfaEPoV 
april 2019 by cshalizi
A Note on Parametric and Nonparametric Regression in the Presence of Endogenous Control Variables by Markus Frölich :: SSRN
"This note argues that nonparametric regression not only relaxes functional form assumptions vis-a-vis parametric regression, but that it also permits endogenous control variables. To control for selection bias or to make an exclusion restriction in instrumental variables regression valid, additional control variables are often added to a regression. If any of these control variables is endogenous, OLS or 2SLS would be inconsistent and would require further instrumental variables. Nonparametric approaches are still consistent, though. A few examples are examined and it is found that the asymptotic bias of OLS can indeed be very large."
to:NB  causal_inference  instrumental_variables  nonparametrics  regression  statistics  re:ADAfaEPoV 
april 2019 by cshalizi
Nonparametric Instrumental Regression
"The focus of the paper is the nonparametric estimation of an instrumental regression function P defined by conditional moment restrictions stemming from a structural econometric model : E[Y-P(Z)|W]=0 and involving endogenous variables Y and Z and instruments W. The function P is the solution of an ill-posed inverse problem and we propose an estimation procedure based on Tikhonov regularization. The paper analyses identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function."

--- Was this ever published? It definitely seems like the most elegant approach to nonparametric IVs I've seen (French econometricians!).
to:NB  have_read  regression  instrumental_variables  nonparametrics  inverse_problems  causal_inference  re:ADAfaEPoV  econometrics 
april 2019 by cshalizi
Analysis of a complex of statistical variables into principal components.
"The problem is stated in detail, a method of analysis is derived and its geometrical meaning shown, methods of solution are illustrated and certain derivative problems are discussed. (To be concluded in October issue.) "

--- In which Harold Hotelling re-invents principal components analysis, 32 years after Karl Pearson. (Part 2: http://dx.doi.org/10.1037/h0070888)
to:NB  have_read  principal_components  data_analysis  hotelling.harold  re:ADAfaEPoV 
september 2018 by cshalizi
On lines and planes of closest fit to systems of points in space (K. Pearson, 1901)
In which Karl Pearson invents principal components analysis, with the entirely sensible objective of finding low-dimensional approximations to high-dimensional data. (i.e., basically the way I teach it!)
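
--- Pearson's framing survives intact: the leading principal components give the best low-dimensional approximation in mean squared error, which today is one SVD away. A sketch on toy data (my simulation):

```python
import numpy as np

rng = np.random.default_rng(4)
# 200 points scattered near a one-dimensional line in 3-d, plus noise
t = rng.normal(size=200)
X = np.outer(t, [1.0, 2.0, -1.0]) + 0.1 * rng.normal(size=(200, 3))
Xc = X - X.mean(axis=0)                       # center, as Pearson's fit requires

U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
# Rank-1 reconstruction = projection onto the first principal component;
# this is the best rank-1 approximation in squared error
X1 = np.outer(U[:, 0] * sv[0], Vt[0])
mse = ((Xc - X1) ** 2).mean()
var_explained = sv[0] ** 2 / (sv ** 2).sum()
```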
to:NB  principal_components  data_analysis  pearson.karl  re:ADAfaEPoV  have_read 
september 2018 by cshalizi
Parzen: On Estimation of a Probability Density Function and Mode
In which Parzen introduces kernel density estimation, three years after Rosenblatt introduced it _in the same journal_.
to:NB  statistics  density_estimation  have_read  parzen.emanuel  re:ADAfaEPoV 
september 2018 by cshalizi
Rosenblatt: Remarks on Some Nonparametric Estimates of a Density Function (1956)
"This note discusses some aspects of the estimation of the density function of a univariate probability distribution. All estimates of the density function satisfying relatively mild conditions are shown to be biased. The asymptotic mean square error of a particular class of estimates is evaluated."

--- In which Rosenblatt introduces kernel density estimation.
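
--- The bias is easy to exhibit: a fixed-bandwidth kernel estimate of the standard normal density at its mode is systematically too low, because smoothing flattens the peak. (Simulation mine, not Rosenblatt's.)

```python
import math
import random

random.seed(0)

def kde_at(x0, data, h):
    # Gaussian-kernel density estimate at a single point
    return sum(math.exp(-0.5 * ((x0 - xi) / h) ** 2)
               for xi in data) / (len(data) * h * math.sqrt(2 * math.pi))

true_peak = 1 / math.sqrt(2 * math.pi)       # N(0,1) density at 0, ~0.3989
h, n, reps = 0.5, 500, 200
est = [kde_at(0.0, [random.gauss(0, 1) for _ in range(n)], h)
       for _ in range(reps)]
mean_est = sum(est) / reps                   # sits visibly below true_peak
```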
to:NB  statistics  density_estimation  have_read  rosenblatt.murray  re:ADAfaEPoV 
september 2018 by cshalizi
An elementary proof of a theorem of Johnson and Lindenstrauss - Dasgupta - 2003 - Random Structures & Algorithms - Wiley Online Library
"A result of Johnson and Lindenstrauss [13] shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ϵ2)‐dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ϵ). In this note, we prove this theorem using elementary probabilistic techniques."

Ungated: http://cseweb.ucsd.edu/~dasgupta/papers/jl.pdf
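
--- The statement is concrete enough to check numerically: project Gaussian points from 1000 dimensions down to a few hundred with a random matrix and watch pairwise distances barely move. Dimensions and sample sizes below are my choices, picked to make the distortion tame:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 50, 1000, 300
X = rng.normal(size=(n, d))

# Random Gaussian projection, scaled so squared norms are preserved on average
R = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ R

def pdist2(A):
    # Matrix of pairwise squared Euclidean distances
    G = A @ A.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2 * G

mask = ~np.eye(n, dtype=bool)
ratio = pdist2(Y)[mask] / pdist2(X)[mask]
max_distortion = np.abs(ratio - 1).max()
```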
to:NB  random_projections  geometry  dimension_reduction  have_read  re:ADAfaEPoV 
may 2018 by cshalizi
[1706.08576] Invariant Causal Prediction for Nonlinear Models
"An important problem in many domains is to predict how a system will respond to interventions. This task is inherently linked to estimating the system's underlying causal structure. To this end, 'invariant causal prediction' (ICP) (Peters et al., 2016) has been proposed which learns a causal model exploiting the invariance of causal relations using data from different environments. When considering linear models, the implementation of ICP is relatively straight-forward. However, the nonlinear case is more challenging due to the difficulty of performing nonparametric tests for conditional independence. In this work, we present and evaluate an array of methods for nonlinear and nonparametric versions of ICP for learning the causal parents of given target variables. We find that an approach which first fits a nonlinear model with data pooled over all environments and then tests for differences between the residual distributions across environments is quite robust across a large variety of simulation settings. We call this procedure "Invariant residual distribution test". In general, we observe that the performance of all approaches is critically dependent on the true (unknown) causal structure and it becomes challenging to achieve high power if the parental set includes more than two variables. As a real-world example, we consider fertility rate modelling which is central to world population projections. We explore predicting the effect of hypothetical interventions using the accepted models from nonlinear ICP. The results reaffirm the previously observed central causal role of child mortality rates."
to:NB  causal_inference  causal_discovery  statistics  regression  prediction  peters.jonas  meinshausen.nicolai  to_read  heard_the_talk  to_teach:undergrad-ADA  re:ADAfaEPoV 
may 2018 by cshalizi
[1501.01332] Causal inference using invariant prediction: identification and confidence intervals
"What is the difference of a prediction that is made with a causal model and a non-causal model? Suppose we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (for example various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the example of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large-scale gene perturbation experiments."
to:NB  to_read  causal_inference  causal_discovery  statistics  prediction  regression  buhlmann.peter  meinshausen.nicolai  peters.jonas  heard_the_talk  re:ADAfaEPoV  to_teach:undergrad-ADA 
may 2018 by cshalizi
A Powerful Test for Changing Trends in Time Series Models - Wu - 2018 - Journal of Time Series Analysis - Wiley Online Library
"We propose a non-parametric test for trend specification with improved properties. Many existing tests in the literature exhibit non-monotonic power. To deal with this problem, Juhl and Xiao 2005 proposed a non-parametric test with good power by detrending the data non-parametrically. However, their test is developed for smooth changing trends and is constructed under the assumption of correct specification in the dynamics. In addition, their test suffers from size distortion in finite samples and imposes restrictive assumptions on the variance structure. The current article tries to address these issues. First, the proposed test allows for both abrupt breaks and smooth structural changes in deterministic trends. Second, the test employs a sieve approach to avoid the misspecification problem. Third, the extended test can be applied to the data with conditional heteroskedasticity and time-varying variance. Fourth, the power properties under alternatives are also investigated. Finally, a partial plug-in method is proposed to alleviate size distortion. Monte Carlo simulations show that the new test not only has good size but also has monotonic power in finite samples."
to:NB  time_series  change-point_problem  non-stationarity  hypothesis_testing  statistics  re:ADAfaEPoV 
march 2018 by cshalizi
[1802.03426] UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
"UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP as described has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning"
to:NB  via:vaguery  manifold_learning  dimension_reduction  data_analysis  data_mining  to_teach:data-mining  re:ADAfaEPoV 
march 2018 by cshalizi
[1706.02744] Avoiding Discrimination through Causal Reasoning
"Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.
"Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them."
to:NB  to_read  causality  algorithmic_fairness  prediction  machine_learning  janzing.dominik  re:ADAfaEPoV  via:arsyed 
november 2017 by cshalizi
Consistency without Inference: Instrumental Variables in Practical Application
"I use the bootstrap to study a comprehensive sample of 1400 instrumental
variables regressions in 32 papers published in the journals of the American
Economic Association. IV estimates are more often found to be falsely significant
and more sensitive to outliers than OLS, while having a higher mean squared error
around the IV population moment. There is little evidence that OLS estimates are
substantively biased, while IV instruments often appear to be irrelevant. In
addition, I find that established weak instrument pre-tests are largely
uninformative and weak instrument robust methods generally perform no better or
substantially worse than 2SLS. "
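
--- The basic pathology is easy to reproduce: pairs-bootstrap a just-identified IV estimate with a weak instrument and compare its spread to OLS's. The simulation design below is mine:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
z = rng.normal(size=n)                     # weak-ish instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.15 * z + u + rng.normal(size=n)      # endogenous regressor
y = 1.0 * x + u + rng.normal(size=n)       # true effect = 1; OLS is biased up

def iv(y, x, z):
    return (z @ y) / (z @ x)               # just-identified IV estimate

def ols(y, x):
    return (x @ y) / (x @ x)

B = 1000
idx = rng.integers(0, n, size=(B, n))      # pairs bootstrap
iv_boot = np.array([iv(y[i], x[i], z[i]) for i in idx])
ols_boot = np.array([ols(y[i], x[i]) for i in idx])

lo, hi = np.quantile(iv_boot, [0.05, 0.95])
iv_spread = hi - lo
lo, hi = np.quantile(ols_boot, [0.05, 0.95])
ols_spread = hi - lo
# OLS is precise but biased; IV is centered better but wildly dispersed
```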
to:NB  have_read  re:ADAfaEPoV  to_teach:undergrad-ADA  instrumental_variables  causal_inference  regression  statistics  econometrics  via:kjhealy 
november 2017 by cshalizi
Virtual Classrooms: How Online College Courses Affect Student Success
"Online college courses are a rapidly expanding feature of higher education, yet little research identifies their effects relative to traditional in-person classes. Using an instrumental variables approach, we find that taking a course online, instead of in-person, reduces student success and progress in college. Grades are lower both for the course taken online and in future courses. Students are less likely to remain enrolled at the university. These estimates are local average treatment effects for students with access to both online and in-person options; for other students, online classes may be the only option for accessing college-level courses."

--- I will be very curious about their instrument, and whether it's at all plausible.
to:NB  education  instrumental_variables  causal_inference  statistics  re:ADAfaEPoV 
september 2017 by cshalizi
Testing Local Average Treatment Effect Assumptions | The Review of Economics and Statistics | MIT Press Journals
"In this paper, we propose an easy-to-implement procedure to test the key conditions for the identification and estimation of the local average treatment effect (LATE; Imbens & Angrist, 1994). We reformulate the testable implications of LATE assumptions as two conditional inequalities, which can be tested in the intersection bounds framework of Chernozhukov, Lee, and Rosen (2013) and easily implemented using the Stata package of Chernozhukov et al. (2015). We apply the proposed tests to the draft eligibility instrument in Angrist (1991), the college proximity instrument in Card (1993), and the same-sex instrument in Angrist and Evans (1998)."
to:NB  causal_inference  statistics  re:ADAfaEPoV  to_be_shot_after_a_fair_trial 
august 2017 by cshalizi
Lectures on the Nearest Neighbor Method | SpringerLink
"This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods."
in_NB  books:noted  nearest-neighbors  density_estimation  regression  classifiers  statistics  devroye.luc  biau.gerard  nonparametrics  re:ADAfaEPoV  entropy_estimation 
august 2017 by cshalizi
Additive Component Analysis – Calvin Murdock
"Principal component analysis (PCA) is one of the most versatile tools for unsupervised learning with applications ranging from dimensionality reduction to exploratory data analysis and visualization. While much effort has been devoted to encouraging meaningful representations through regularization (e.g. non-negativity or sparsity), underlying linearity assumptions can limit their effectiveness. To address this issue, we propose Additive Component Analysis (ACA), a novel nonlinear extension of PCA. Inspired by multivariate nonparametric regression with additive models, ACA fits a smooth manifold to data by learning an explicit mapping from a low-dimensional latent space to the input space, which trivially enables applications like denoising. Furthermore, ACA can be used as a drop-in replacement in many algorithms that use linear component analysis methods as a subroutine via the local tangent space of the learned manifold. Unlike many other nonlinear dimensionality reduction techniques, ACA can be efficiently applied to large datasets since it does not require computing pairwise similarities or storing training data during testing. Multiple ACA layers can also be composed and learned jointly with essentially the same procedure for improved representational power, demonstrating the encouraging potential of nonparametric deep learning. We evaluate ACA on a variety of datasets, showing improved robustness, reconstruction performance, and interpretability."
to:NB  dimension_reduction  manifold_learning  additive_models  principal_components  statistics  to_read  re:ADAfaEPoV 
august 2017 by cshalizi
FFTrees: A toolbox to create, visualize, and evaluate fast-and-frugal decision trees
"Fast-and-frugal trees (FFTs) are simple algorithms that facilitate efficient and accurate decisions based on limited information. But despite their successful use in many applied domains, there is no widely available toolbox that allows anyone to easily create, visualize, and evaluate FFTs. We fill this gap by introducing the R package FFTrees. In this paper, we explain how FFTs work, introduce a new class of algorithms called fan for constructing FFTs, and provide a tutorial for using the FFTrees package. We then conduct a simulation across ten real-world datasets to test how well FFTs created by FFTrees can predict data. Simulation results show that FFTs created by FFTrees can predict data as well as popular classification algorithms such as regression and random forests, while remaining simple enough for anyone to understand and use."

--- I am skeptical about that "simple enough for anyone to understand and use".
to:NB  have_read  decision_trees  heuristics  cognitive_science  R  to_teach:undergrad-ADA  re:ADAfaEPoV 
august 2017 by cshalizi
[1603.00738] Mandelbrot's 1/f fractional renewal models of 1963-67: The non-ergodic missing link between change points and long range dependence
"The problem of 1/f noise has been with us for about a century. Because it is so often framed in Fourier spectral language, the most famous solutions have tended to be the stationary long range dependent (LRD) models such as Mandelbrot's fractional Gaussian noise. In view of the increasing importance to physics of non-ergodic fractional renewal models, I present preliminary results of my research into the history of Mandelbrot's very little known work in that area from 1963-67. I speculate about how the lack of awareness of this work in the physics and statistics communities may have affected the development of complexity science, and I discuss the differences between the Hurst effect, 1/f noise and LRD, concepts which are often treated as equivalent."
to_read  time_series  stochastic_processes  long-range_dependence  fractals  mandelbrot.benoit  watkins.nicholas  history_of_science  change-point_problem  in_NB  kith_and_kin  re:ADAfaEPoV  to_teach:data_over_space_and_time 
july 2017 by cshalizi
Imai, K.: Quantitative Social Science: An Introduction. (eBook, Paperback and Hardcover)
"Quantitative analysis is an increasingly essential skill for social science research, yet students in the social sciences and related areas typically receive little training in it—or if they do, they usually end up in statistics classes that offer few insights into their field. This textbook is a practical introduction to data analysis and statistics written especially for undergraduates and beginning graduate students in the social sciences and allied fields, such as economics, sociology, public policy, and data science.
"Quantitative Social Science engages directly with empirical analysis, showing students how to analyze data using the R programming language and to interpret the results—it encourages hands-on learning, not paper-and-pencil statistics. More than forty data sets taken directly from leading quantitative social science research illustrate how data analysis can be used to answer important questions about society and human behavior.
"Proven in the classroom, this one-of-a-kind textbook features numerous additional data analysis exercises and interactive R programming exercises, and also comes with supplementary teaching materials for instructors.
"Written especially for students in the social sciences and allied fields, including economics, sociology, public policy, and data science
"Provides hands-on instruction using R programming, not paper-and-pencil statistics
"Includes more than forty data sets from actual research for students to test their skills on
"Covers data analysis concepts such as causality, measurement, and prediction, as well as probability and statistical tools
"Features a wealth of supplementary exercises, including additional data analysis exercises and interactive programming exercises
"Offers a solid foundation for further study
"Comes with additional course materials online, including notes, sample code, exercises and problem sets with solutions, and lecture slides"
to:NB  books:noted  social_science_methodology  economics  statistics  econometrics  causal_inference  re:ADAfaEPoV 
june 2017 by cshalizi
Evaluations | The Abdul Latif Jameel Poverty Action Lab
"Search our database of 841 randomized evaluations conducted by our affiliates in 80 countries. To browse summaries of key policy recommendations from a subset of these evaluations, visit the Policy Publications tab."
to:NB  causal_inference  experimental_economics  experimental_sociology  statistics  re:ADAfaEPoV  to_teach:undergrad-ADA  economics 
june 2017 by cshalizi
Nuisance Variables and the Ex Post Facto Design
To mention when talking about what "controlled for", or "matched on", actually means. (However, the concern about matching leading to unrepresentative sub-populations can be allayed if the distributions have the same support, so that we can always find a match. Alternately, we limit our claim to a local effect, for the region of overlap.)
to:NB  statistics  causal_inference  social_science_methodology  re:ADAfaEPoV  meehl.paul 
march 2017 by cshalizi
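The caveat above about common support can be made concrete: restrict matching to treated units whose covariate values fall inside the overlap of the two distributions, and treat the estimate as local to that region. A minimal sketch in Python on simulated data; the interval-overlap check and nearest-neighbor rule are illustrative choices of mine, not anything from the linked piece:

```python
import numpy as np

def match_on_overlap(x_treat, x_ctrl):
    """Nearest-neighbor matching on a scalar covariate, restricted to the
    region where the treated and control covariate ranges overlap."""
    lo = max(x_treat.min(), x_ctrl.min())
    hi = min(x_treat.max(), x_ctrl.max())
    keep = (x_treat >= lo) & (x_treat <= hi)  # treated units with a plausible match
    matches = {}
    for i in np.where(keep)[0]:
        j = int(np.argmin(np.abs(x_ctrl - x_treat[i])))  # closest control
        matches[i] = j
    return matches, (lo, hi)

rng = np.random.default_rng(0)
x_t = rng.normal(1.0, 1.0, 50)    # treated covariates, shifted distribution
x_c = rng.normal(0.0, 1.0, 200)   # control covariates
m, region = match_on_overlap(x_t, x_c)
```

Treated units outside `region` are simply dropped, which is exactly why the resulting effect estimate is only claimed for the region of overlap.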
[1511.01844] A note on the evaluation of generative models
"Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria---average log-likelihood, Parzen window estimates, and visual fidelity of samples---are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided."
to:NB  simulation  stochastic_models  model_checking  statistics  via:vaguery  to_read  re:ADAfaEPoV  re:phil-of-bayes_paper 
december 2016 by cshalizi
Janzing, Balduzzi, Grosse-Wentrup, Schölkopf: Quantifying causal influences
"Many methods for causal inference generate directed acyclic graphs (DAGs) that formalize causal relations between n variables. Given the joint distribution on all these variables, the DAG contains all information about how intervening on one variable changes the distribution of the other n−1 variables. However, quantifying the causal influence of one variable on another one remains a nontrivial question.
"Here we propose a set of natural, intuitive postulates that a measure of causal strength should satisfy. We then introduce a communication scenario, where edges in a DAG play the role of channels that can be locally corrupted by interventions. Causal strength is then the relative entropy distance between the old and the new distribution.
"Many other measures of causal strength have been proposed, including average causal effect, transfer entropy, directed information, and information flow. We explain how they fail to satisfy the postulates on simple DAGs of ≤3 nodes. Finally, we investigate the behavior of our measure on time-series, supporting our claims with experiments on simulated data."
to:NB  graphical_models  time_series  causality  statistics  information_theory  to_read  re:ADAfaEPoV  to_teach:undergrad-ADA 
december 2016 by cshalizi
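The communication scenario can be illustrated on the smallest possible DAG, X → Y with binary variables: "corrupting" the single edge feeds Y an independent copy of X, and causal strength is the relative entropy between the intact and corrupted joint distributions. A toy sketch with made-up numbers; note that with only one edge into Y, the corrupted joint is the product of marginals, so the measure reduces to the mutual information I(X;Y):

```python
import numpy as np

# Toy 2-node DAG X -> Y with binary variables (probabilities are made up).
pX = np.array([0.5, 0.5])
pY_given_X = np.array([[0.9, 0.1],   # P(Y | X=0)
                       [0.1, 0.9]])  # P(Y | X=1)

joint = pX[:, None] * pY_given_X     # intact joint p(x, y)
pY = joint.sum(axis=0)               # marginal of Y

# Corrupt the edge: Y's channel is fed an independent copy of X,
# so Y becomes independent of the actual X.
cut = pX[:, None] * pY[None, :]

# Causal strength of X -> Y: KL divergence between intact and cut joints (nats).
strength = float(np.sum(joint * np.log(joint / cut)))
```

With more parents or longer paths the cut distribution is no longer a simple product, which is where this measure departs from transfer entropy and the other alternatives the paper discusses.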
[1311.5828] The Splice Bootstrap
"This paper proposes a new bootstrap method to compute predictive intervals for nonlinear autoregressive time series model forecasts. We call this method the splice bootstrap, as it involves splicing the last p values of a given series to a suitably simulated series. This ensures that each simulated series will have the same set of p time series values in common, a necessary requirement for computing conditional predictive intervals. Using simulation studies we show the method gives 90% intervals similar to those expected from theory for simple linear and SETAR models driven by normal and non-normal noise. Furthermore, we apply the method to some economic data and demonstrate the intervals compare favourably with cross-validation based intervals."
to:NB  bootstrap  time_series  statistics  prediction  to_teach:undergrad-ADA  re:ADAfaEPoV  to_read 
december 2016 by cshalizi
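The splicing idea is simple enough to sketch: make every bootstrap path share the observed last p values, so forecasts from all paths are conditional on the same history. A hedged illustration with an AR(1) (p = 1) fitted by least squares and residual resampling; this is my reconstruction of the idea, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) series to play the role of the observed data
n, phi = 200, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Fit AR(1) by least squares and collect residuals
phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
resid = x[1:] - phi_hat * x[:-1]

# Splice bootstrap with p = 1: every bootstrap path ends at the observed
# x[-1], so the one-step-ahead forecasts are all conditional on that value.
B = 500
fc = np.empty(B)
for b in range(B):
    fc[b] = phi_hat * x[-1] + rng.choice(resid)

lo, hi = np.percentile(fc, [5, 95])   # 90% conditional predictive interval
```

For p > 1 or multi-step forecasts one would simulate whole series, overwrite their last p values with the observed ones, and iterate the fitted recursion forward; the one-step case above keeps the mechanics visible.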
How Multiple Imputation Makes a Difference
"Political scientists increasingly recognize that multiple imputation represents a superior strategy for analyzing missing data to the widely used method of listwise deletion. However, there has been little systematic investigation of how multiple imputation affects existing empirical knowledge in the discipline. This article presents the first large-scale examination of the empirical effects of substituting multiple imputation for listwise deletion in political science. The examination focuses on research in the major subfield of comparative and international political economy (CIPE) as an illustrative example. Specifically, I use multiple imputation to reanalyze the results of almost every quantitative CIPE study published during a recent five-year period in International Organization and World Politics, two of the leading subfield journals in CIPE. The outcome is striking: in almost half of the studies, key results “disappear” (by conventional statistical standards) when reanalyzed."
to:NB  have_skimmed  re:ADAfaEPoV  missing_data  statistics  political_science  via:henry_farrell 
august 2016 by cshalizi
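The mechanism behind such reversals is easy to demonstrate: when missingness depends on an observed covariate, listwise deletion biases even a simple mean, while imputing from a regression on that covariate (with residual noise, pooled over several imputations) does not. A toy sketch, not the article's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)      # true mean of y is 2.0

# y is missing at random given x: more likely missing when x is large
miss = rng.random(n) < 1 / (1 + np.exp(-x))
y_obs = np.where(miss, np.nan, y)

# Listwise deletion: biased, since the observed cases over-represent small x
mean_lw = np.nanmean(y_obs)

# Multiple-imputation sketch: regress y on x among the observed cases,
# impute with draws that add residual noise, then pool across M imputations
obs = ~miss
beta = np.polyfit(x[obs], y_obs[obs], 1)
sd = np.std(y_obs[obs] - np.polyval(beta, x[obs]))
M, means = 20, []
for _ in range(M):
    y_imp = y_obs.copy()
    y_imp[miss] = np.polyval(beta, x[miss]) + rng.normal(0, sd, miss.sum())
    means.append(y_imp.mean())
mean_mi = float(np.mean(means))
```

Here `mean_lw` lands well below 2 while `mean_mi` recovers it, which is the one-variable version of results "disappearing" or reappearing under reanalysis.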
Seasonal Adjustment Methods and Real Time Trend-Cycle | Estela Bee Dagum | Springer
"This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies.  Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature.
"Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportation, and consumers in general to decide on appropriate action.
"This book appeals to practitioners in government institutions, finance and business, macroeconomists, and other professionals who use economic data as well as academic researchers in time series analysis, seasonal adjustment methods, filtering and signal extraction. It is also useful for graduate and final-year undergraduate courses in econometrics and time series with a good understanding of linear regression and matrix algebra, as well as ARIMA modelling."
to:NB  books:noted  time_series  statistics  re:ADAfaEPoV 
july 2016 by cshalizi
Cause and Correlation in Biology: A User's Guide to Path Analysis, Structural Equations and Causal Inference with R | Ecology and Conservation | Cambridge University Press
"Many problems in biology require an understanding of the relationships among variables in a multivariate causal context. Exploring such cause-effect relationships through a series of statistical methods, this book explains how to test causal hypotheses when randomised experiments cannot be performed. This completely revised and updated edition features detailed explanations for carrying out statistical methods using the popular and freely available R statistical language. Sections on d-sep tests, latent constructs that are common in biology, missing values, phylogenetic constraints, and multilevel models are also an important feature of this new edition. Written for biologists and using a minimum of statistical jargon, the concept of testing multivariate causal hypotheses using structural equations and path analysis is demystified. Assuming only a basic understanding of statistical analysis, this new edition is a valuable resource for both students and practising biologists."
to:NB  books:noted  causal_inference  graphical_models  statistics  re:ADAfaEPoV 
june 2016 by cshalizi
Hardle, Marron: Bootstrap Simultaneous Error Bars for Nonparametric Regression
"Simultaneous error bars are constructed for nonparametric kernel estimates of regression functions. The method is based on the bootstrap, where resampling is done from a suitably estimated residual distribution. The error bars are seen to give asymptotically correct coverage probabilities uniformly over any number of gridpoints. Applications to an economic problem are given and comparison to both pointwise and Bonferroni-type bars is presented through a simulation study."
to:NB  to_read  bootstrap  confidence_sets  regression  nonparametrics  statistics  to_teach:undergrad-ADA  re:ADAfaEPoV 
april 2016 by cshalizi
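The recipe is concrete enough to sketch: estimate the regression with a kernel smoother, resample centered residuals, re-estimate on each bootstrap sample, and take a high quantile of the supremum deviation over a grid as the half-width of a simultaneous band. A minimal illustration with a Nadaraya-Watson smoother and a fixed bandwidth; the grid, bandwidth, and number of replicates are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def nw(x0, x, y, h=0.08):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.05, 0.95, 40)
m_hat = nw(grid, x, y)
fitted = nw(x, x, y)
resid = y - fitted
resid = resid - resid.mean()          # centre the residuals before resampling

# Bootstrap the supremum deviation over the grid for a simultaneous band
B = 200
sup = np.empty(B)
for b in range(B):
    y_star = fitted + rng.choice(resid, n, replace=True)
    sup[b] = np.max(np.abs(nw(grid, x, y_star) - m_hat))

c = np.quantile(sup, 0.95)
lower, upper = m_hat - c, m_hat + c   # simultaneous (not pointwise) band
```

Because the band's half-width comes from the distribution of the supremum rather than a pointwise quantile, it covers the whole curve at once, which is exactly the contrast with pointwise and Bonferroni bars in the paper's simulation study.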
Statistically controlling for confounding constructs is harder than you think
"Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity."
to:NB  have_read  measurement  social_measurement  social_science_methodology  psychometrics  econometrics  graphical_models  statistics  to_teach:undergrad-ADA  re:ADAfaEPoV  yarkoni.tal  to:blog 
march 2016 by cshalizi
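The core simulation is easy to reproduce: measure one construct twice with imperfect reliability, let the outcome depend only on the construct, and regress on both measures; at large n the second measure looks "incrementally valid" nearly always. A sketch of that Monte Carlo with my own parameterization, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

def spurious_rate(n, rel=0.7, sims=200):
    """Fraction of simulations in which a second, redundant measure of a
    construct is 'significant' at the nominal 5% level."""
    hits = 0
    lam = np.sqrt(rel)
    for _ in range(sims):
        T = rng.normal(size=n)                      # true construct
        x1 = lam * T + rng.normal(0, np.sqrt(1 - rel), n)   # noisy measure 1
        x2 = lam * T + rng.normal(0, np.sqrt(1 - rel), n)   # noisy measure 2
        y = T + rng.normal(size=n)                  # outcome depends only on T
        X = np.column_stack([np.ones(n), x1, x2])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (n - 3)
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
        if abs(beta[2] / se) > 1.96:                # "x2 adds incremental validity"
            hits += 1
    return hits / sims

rate = spurious_rate(2000)
```

The intuition is that x1 controls for the construct only imperfectly, so x2 retains a genuine partial association with y at the measurement level even though it carries no incremental construct-level information; larger samples just detect that residual association more reliably.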
[1602.01192] Regression with network cohesion
"Prediction problems typically assume the training data are independent samples, but in many modern applications samples come from individuals connected by a network. For example, in adolescent health studies of risk-taking behaviors, information on the subjects' social networks is often available and plays an important role through network cohesion, the empirically observed phenomenon of friends behaving similarly. Taking cohesion into account in prediction models should allow us to improve their performance. Here we propose a regression model with a network-based penalty on individual node effects to encourage network cohesion, and show that it performs better than traditional models both theoretically and empirically when network cohesion is present. The framework is easily extended to other models, such as the generalized linear model and Cox's proportional hazard model. Applications to predicting levels of recreational activity and marijuana usage among teenagers based on both demographic covariates and their friendship networks are discussed in detail and demonstrate the effectiveness of our approach."
to:NB  statistics  network_data_analysis  regression  levina.liza  re:ADAfaEPoV 
february 2016 by cshalizi
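The penalty described here is a graph-Laplacian ridge on individual node effects, which can be written down and solved in closed form for a toy network. A hedged sketch with two artificial friend groups; the joint normal equations for the node effects and a single slope are my simplification of the general model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy network: two friend groups with different baseline behaviour
n = 60
A = np.zeros((n, n))
for g in (range(0, 30), range(30, 60)):
    for i in g:
        for j in g:
            if i < j and rng.random() < 0.3:
                A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian

x = rng.normal(size=n)                          # one covariate
alpha = np.r_[np.ones(30), -np.ones(30)]        # cohesive node effects
y = alpha + 0.5 * x + rng.normal(0, 0.5, n)

# Penalized least squares: minimize ||y - x*beta - a||^2 + lam * a' L a.
# Stationarity gives (I + lam*L) a + beta x = y and x'a + beta x'x = x'y,
# which we solve jointly for (a, beta).
lam = 5.0
M = np.zeros((n + 1, n + 1))
M[:n, :n] = np.eye(n) + lam * L
M[:n, n] = x
M[n, :n] = x
M[n, n] = x @ x
sol = np.linalg.solve(M, np.r_[y, x @ y])
a_hat, beta_hat = sol[:n], sol[n]
```

The Laplacian only penalizes differences between connected nodes, so the fit shrinks friends' effects together within each group while leaving the two groups' levels free, which is the "cohesion" being exploited.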
Does data splitting improve prediction? - Springer
"Data splitting divides data into two parts. One part is reserved for model selection. In some applications, the second part is used for model validation but we use this part for estimating the parameters of the chosen model. We focus on the problem of constructing reliable predictive distributions for future observed values. We judge the predictive performance using log scoring. We compare the full data strategy with the data splitting strategy for prediction. We show how the full data score can be decomposed into model selection, parameter estimation and data reuse costs. Data splitting is preferred when data reuse costs are high. We investigate the relative performance of the strategies in four simulation scenarios. We introduce a hybrid estimator that uses one part for model selection but both parts for estimation. We argue that a split data analysis is preferred to a full data analysis for prediction with some exceptions."

--- Ungated: http://arxiv.org/abs/1301.2983
statistics  regression  prediction  model_selection  faraway.j.j.  re:ADAfaEPoV  to_teach:undergrad-ADA  have_read  to_teach:linear_models  in_NB 
january 2016 by cshalizi
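The split strategy can be sketched end to end: choose a model on the first half, estimate its parameters on the second half only, and judge the result by the average predictive log density on fresh data. An illustration with polynomial regression, degree chosen by AIC on the selection half; all the specifics are mine, not Faraway's:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_score(coef, sigma, x, y):
    """Average Gaussian predictive log density under a polynomial mean."""
    mu = np.polyval(coef, x)
    return float(np.mean(-0.5 * np.log(2 * np.pi * sigma ** 2)
                         - (y - mu) ** 2 / (2 * sigma ** 2)))

# True curve is quadratic; candidates are polynomials of degree 1..6
n = 400
x = rng.uniform(-1, 1, n)
y = 1 + x - 2 * x ** 2 + rng.normal(0, 0.5, n)
x_new = rng.uniform(-1, 1, 1000)
y_new = 1 + x_new - 2 * x_new ** 2 + rng.normal(0, 0.5, 1000)

half = n // 2
# Model selection on the first half (here by AIC) ...
best_deg, best_aic = None, np.inf
for d in range(1, 7):
    c = np.polyfit(x[:half], y[:half], d)
    rss = np.sum((y[:half] - np.polyval(c, x[:half])) ** 2)
    aic = half * np.log(rss / half) + 2 * (d + 1)
    if aic < best_aic:
        best_deg, best_aic = d, aic

# ... parameter estimation on the second half only, scoring on new data
c2 = np.polyfit(x[half:], y[half:], best_deg)
s2 = np.std(y[half:] - np.polyval(c2, x[half:]))
score_split = log_score(c2, s2, x_new, y_new)
```

The paper's "data reuse cost" is what this scheme avoids by never letting the estimation half influence which model gets estimated; its hybrid estimator would instead refit `c2` on all n points after selecting on the first half.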
Statistical Modeling: A Fresh Approach
"Statistical Modeling: A Fresh Approach introduces and illuminates the statistical reasoning used in modern research throughout the natural and social sciences, medicine, government, and commerce. It emphasizes the use of models to untangle and quantify variation in observed data. By a deft and concise use of computing coupled with an innovative geometrical presentation of the relationship among variables, A Fresh Approach reveals the logic of statistical inference and empowers the reader to use and understand techniques such as analysis of covariance that appear widely in published research but are hardly ever found in introductory texts.
"Recognizing the essential role the computer plays in modern statistics, A Fresh Approach provides a complete and self-contained introduction to statistical computing using the powerful (and free) statistics package R."
in_NB  books:noted  statistics  regression  R  re:ADAfaEPoV 
december 2015 by cshalizi
Vector Generalized Linear and Additive Models - With an | Thomas W. Yee | Springer
"This book presents a greatly enlarged statistical framework compared to generalized linear models (GLMs) with which to approach regression modelling. Comprising about half a dozen major classes of statistical models, and fortified with necessary infrastructure to make the models more fully operable, the framework allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole.
"Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. However, GLMs are not flexible enough to cope with the demands of practical data analysis. And data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. The methodology here and accompanying software (the extensive VGAM R package) are directed at these limitations and are described comprehensively for the first time in one volume. This book treats distributions and classical models as generalized regression models, and the result is a much broader application base for GLMs and GAMs.
"The book can be used in senior undergraduate or first-year postgraduate courses on GLMs or categorical data analysis and as a methodology resource for VGAM users. In the second part of the book, the R package VGAM allows readers to grasp immediately applications of the methodology. R code is integrated in the text, and datasets are used throughout. Potential applications include ecology, finance, biostatistics, and social sciences. The methodological contribution of this book stands alone and does not require use of the VGAM package."

--- Hopefully this means the VGAM package is less user-hostile than it was...
to:NB  books:noted  additive_models  linear_regression  regression  statistics  re:ADAfaEPoV  in_wishlist 
october 2015 by cshalizi
This is a perfectly nice example. So how sexist am I that I am not going to swap out the cars-and-trucks one in my chapter on PCA for this? (I guess I should at least mention it.)

--- Python code at https://github.com/graceavery/Eigenstyle but apparently not the original set of images
data_mining  principal_components  fashion  have_read  to:blog  to_teach:data-mining  re:ADAfaEPoV  via:absfac 
august 2015 by cshalizi
Learning from Pairwise Marginal Independencies
"We consider graphs that represent pairwise marginal independencies amongst a set of variables (for instance, the zero entries of a covariance matrix for normal data). We characterize the directed acyclic graphs (DAGs) that faithfully explain a given set of independencies, and derive algorithms to efficiently enumerate such structures. Our results map out the space of faithful causal models for a given set of pairwise marginal independence relations. This allows us to show the extent to which causal inference is possible without using conditional independence tests."
to:NB  graphical_models  causal_discovery  statistics  re:ADAfaEPoV 
july 2015 by cshalizi
A complete generalized adjustment criterion
"Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes, latent confounding, or partial knowledge of the causal structure; however, their diversity is confusing and some of them are only sufficient, but not necessary. In this paper, we present a criterion that is necessary and sufficient for four different classes of graphical causal models: directed acyclic graphs (DAGs), maximal ancestral graphs (MAGs), completed partially directed acyclic graphs (CPDAGs), and partial ancestral graphs (PAGs). Our criterion subsumes the existing ones and in this way unifies adjustment set construction for a large set of graph classes."

--- Also http://arxiv.org/abs/1507.01524
have_read  causal_inference  statistics  graphical_models  kalisch.markus  re:ADAfaEPoV  in_NB  maathuis.marloes_h. 
july 2015 by cshalizi
Do-calculus when the true graph is unknown
"One of the basic tasks of causal discovery is to estimate the causal effect of some set of variables on another given a statistical data set. In this article we bridge the gap between causal structure discovery and the do-calculus by proposing a method for the identification of causal effects on the basis of arbitrary (equivalence) classes of semi-Markovian causal models. The approach uses a general logical representation of the equivalence class of graphs obtained from a causal structure discovery algorithm, the properties of which can then be queried by procedures implementing the do-calculus inference for causal effects. We show that the method is more efficient than determining causal effects using a naive enumeration of graphs in the equivalence class. Moreover, the method is complete with respect to the identifiability of causal effects for settings in which extant methods that do not require knowledge of the true graph offer only incomplete results. The method is entirely modular and easily adapted for different background settings."

(Last tag is just a to-mention.)
in_NB  heard_the_talk  to_read  causal_inference  causal_discovery  graphical_models  statistics  eberhardt.frederick  kith_and_kin  re:ADAfaEPoV 
july 2015 by cshalizi
Missing Data as a Causal and Probabilistic Problem
"Causal inference is often phrased as a missing data problem – for every unit, only the response to observed treatment assignment is known, the response to other treatment assignments is not. In this paper, we extend the converse approach of [7] of representing missing data problems to causal models where only interventions on missingness indicators are allowed. We further use this representation to leverage techniques developed for the problem of identification of causal effects to give a general criterion for cases where a joint distribution containing missing variables can be recovered from data actually observed, given assumptions on missingness mechanisms. This criterion is significantly more general than the commonly used “missing at random” (MAR) criterion, and generalizes past work which also exploits a graphical representation of missingness. In fact, the relationship of our criterion to MAR is not unlike the relationship between the ID algorithm for identification of causal effects [22, 18], and conditional ignorability [13]."
statistics  graphical_models  causal_inference  missing_data  pearl.judea  shpitser.ilya  to_read  re:ADAfaEPoV  heard_the_talk  in_NB 
july 2015 by cshalizi
[1503.03515] Bi-cross-validation for factor analysis
"Factor analysis is over a century old, but it is still problematic to choose the number of factors for a given data set. The scree test is popular but subjective. The best performing objective methods are recommended on the basis of simulations. We introduce a method based on bi-cross-validation, using randomly held-out submatrices of the data to choose the number of factors. We find it performs better than the leading methods of parallel analysis (PA) and Kaiser's rule. Our performance criterion is based on recovery of the underlying factor-loading (signal) matrix rather than identifying the true number of factors. Like previous comparisons, our work is simulation based. Recent advances in random matrix theory provide principled choices for the number of factors when the noise is homoscedastic, but not for the heteroscedastic case. The simulations we choose are designed using guidance from random matrix theory. In particular, we include factors too small to detect, factors large enough to detect but not large enough to improve the estimate, and two classes of factors large enough to be useful. Much of the advantage of bi-cross-validation comes from cases with factors large enough to detect but too small to be well estimated. We also find that a form of early stopping regularization improves the recovery of the signal matrix."

--- Published version: https://doi.org/10.1214/15-STS539
in_NB  model_selection  factor_analysis  cross-validation  owen.art  statistics  re:ADAfaEPoV 
may 2015 by cshalizi
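The held-out-submatrix idea can be sketched directly: partition the data matrix into blocks [[A, B], [C, D]], reconstruct the held-out block A as B times a rank-k pseudoinverse of D times C, and pick the k minimizing the reconstruction error. A toy version with one fixed hold-out block rather than the paper's repeated random splits:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated data with 3 factors plus homoscedastic noise
n, p, k_true = 120, 40, 3
Y = rng.normal(size=(n, k_true)) @ rng.normal(size=(k_true, p)) \
    + rng.normal(0, 1.0, (n, p))

def bcv_error(Y, k, n_hold=30, p_hold=10):
    """Bi-cross-validation error of a k-factor fit: hold out the top-left
    block and reconstruct it from the other three blocks."""
    A = Y[:n_hold, :p_hold]
    B = Y[:n_hold, p_hold:]
    C = Y[n_hold:, :p_hold]
    D = Y[n_hold:, p_hold:]
    if k == 0:
        return float(np.mean(A ** 2))
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    D_pinv_k = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T  # rank-k pseudoinverse
    A_hat = B @ D_pinv_k @ C
    return float(np.mean((A - A_hat) ** 2))

errs = {k: bcv_error(Y, k) for k in range(0, 8)}
k_hat = min(errs, key=errs.get)
```

The error typically drops sharply up to the number of usefully estimable factors and then flattens or rises, which matches the paper's point that recovery of the signal matrix, not the "true" number of factors, is the right target.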
[1505.01163] Stationarity Tests for Time Series -- What Are We Really Testing?
"Traditionally stationarity refers to shift invariance of the distribution of a stochastic process. In this paper, we rediscover stationarity as a path property instead of a distributional property. More precisely, we characterize a set of paths denoted as A, which corresponds to the notion of stationarity. On one hand, the set A is shown to be large enough, so that for any stationary process, almost all of its paths are in A. On the other hand, we prove that any path in A will behave in the optimal way under any stationarity test satisfying some mild conditions. The results justify our intuition about what a "typical" stationary process should look like, and potentially lead to new families of stationarity tests."

--- The "set A" is basically "paths where time averages behave nicely"; this is very close to Furstenberg's old book, which they cite at one point but don't really draw out. It's also close to what some authors call the set of "ergodic points".
time_series  ergodic_theory  statistics  statistical_inference_for_stochastic_processes  have_read  re:almost_none  re:ADAfaEPoV  in_NB 
may 2015 by cshalizi
Spirtes, Zhang: A Uniformly Consistent Estimator of Causal Effects under the $k$-Triangle-Faithfulness Assumption
"Spirtes, Glymour and Scheines [Causation, Prediction, and Search (1993) Springer] described a pointwise consistent estimator of the Markov equivalence class of any causal structure that can be represented by a directed acyclic graph for any parametric family with a uniformly consistent test of conditional independence, under the Causal Markov and Causal Faithfulness assumptions. Robins et al. [Biometrika 90 (2003) 491–515], however, proved that there are no uniformly consistent estimators of Markov equivalence classes of causal structures under those assumptions. Subsequently, Kalisch and Bühlmann [J. Mach. Learn. Res. 8 (2007) 613–636] described a uniformly consistent estimator of the Markov equivalence class of a linear Gaussian causal structure under the Causal Markov and Strong Causal Faithfulness assumptions. However, the Strong Faithfulness assumption may be false with high probability in many domains. We describe a uniformly consistent estimator of both the Markov equivalence class of a linear Gaussian causal structure and the identifiable structural coefficients in the Markov equivalence class under the Causal Markov assumption and the considerably weaker k-Triangle-Faithfulness assumption."
to:NB  causal_discovery  graphical_models  statistics  spirtes.peter  to_read  re:ADAfaEPoV 
may 2015 by cshalizi
Make - GNU Project - Free Software Foundation
My book needs a makefile, which means I need to figure out how to really write one, with the source files spread across a gazillion sub-directories...
programming  re:ADAfaEPoV 
february 2015 by cshalizi
On the Interpretation of Instrumental Variables in the Presence of Specification Errors
"The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist."

--- I think they are too quick to dismiss non-parametric IV; if what one wants is consistent estimates of the partial derivatives at a given point, you _can_ get that by (e.g.) splines or locally linear regression. Need to think through this in terms of Pearl's graphical definition of IVs.
in_NB  instrumental_variables  misspecification  regression  linear_regression  causal_inference  statistics  econometrics  via:jbdelong  have_read  to_teach:undergrad-ADA  re:ADAfaEPoV 
february 2015 by cshalizi
Chernozhukov, Chetverikov, Kato: Anti-concentration and honest, adaptive confidence bands
"Modern construction of uniform confidence bands for nonparametric densities (and other functions) often relies on the classical Smirnov–Bickel–Rosenblatt (SBR) condition; see, for example, Giné and Nickl [Probab. Theory Related Fields 143 (2009) 569–596]. This condition requires the existence of a limit distribution of an extreme value type for the supremum of a studentized empirical process (equivalently, for the supremum of a Gaussian process with the same covariance function as that of the studentized empirical process). The principal contribution of this paper is to remove the need for this classical condition. We show that a considerably weaker sufficient condition is derived from an anti-concentration property of the supremum of the approximating Gaussian process, and we derive an inequality leading to such a property for separable Gaussian processes. We refer to the new condition as a generalized SBR condition. Our new result shows that the supremum does not concentrate too fast around any value.
"We then apply this result to derive a Gaussian multiplier bootstrap procedure for constructing honest confidence bands for nonparametric density estimators (this result can be applied in other nonparametric problems as well). An essential advantage of our approach is that it applies generically even in those cases where the limit distribution of the supremum of the studentized empirical process does not exist (or is unknown). This is of particular importance in problems where resolution levels or other tuning parameters have been chosen in a data-driven fashion, which is needed for adaptive constructions of the confidence bands. Finally, of independent interest is our introduction of a new, practical version of Lepski’s method, which computes the optimal, nonconservative resolution levels via a Gaussian multiplier bootstrap method."

--- Ungated version: http://arxiv.org/abs/1303.7152
in_NB  confidence_sets  bootstrap  density_estimation  nonparametrics  statistics  regression  to_read  re:ADAfaEPoV 
february 2015 by cshalizi
r - knitr - How to align code and plot side by side - Stack Overflow
Could this be modified to put figure on top, then code, then caption?
R  knitr  re:ADAfaEPoV
january 2015 by cshalizi
Knitr with Latex
This is going to be more annoying than I thought.
--- ETA: I came to this page after going over most of the knitr website, and while I don't pretend to have read every single page there, I am pretty sure that this is the first place I saw a clear explanation of how .Rnw files relate to .tex files, viz., that you write an .Rnw, which compiles to .tex with knitr, which compiles to pdf or ps or dvi or whatever with latex as usual.
latex  knitr  re:ADAfaEPoV 
january 2015 by cshalizi
[1406.6018] A brief history of long memory
"Long memory plays an important role, determining the behaviour and predictability of systems, in many fields; for instance, climate, hydrology, finance, networks and DNA sequencing. In particular, it is important to test if a process is exhibiting long memory since that impacts the confidence with which one may predict future events on the basis of a small amount of historical data. A major force in the development and study of long memory was the late Benoit B. Mandelbrot. Here we discuss the original motivation of the development of long memory and Mandelbrot's influence on this fascinating field. We will also elucidate the contrasting approaches to long memory in the physics and statistics communities with an eye towards their influence on modern practice in these fields."
have_read  long-range_dependence  time_series  statistics  history_of_science  watkins.nicholas  kith_and_kin  in_NB  to:blog  re:ADAfaEPoV  to_teach:data_over_space_and_time 
july 2014 by cshalizi
[1307.6701] Iterative Estimation of Solutions to Noisy Nonlinear Operator Equations in Nonparametric Instrumental Regression
"This paper discusses the solution of nonlinear integral equations with noisy integral kernels as they appear in nonparametric instrumental regression. We propose a regularized Newton-type iteration and establish convergence and convergence rate results. A particular emphasis is on instrumental regression models where the usual conditional mean assumption is replaced by a stronger independence assumption. We demonstrate for the case of a binary instrument that our approach allows the correct estimation of regression functions which are not identifiable with the standard model. This is illustrated in computed examples with simulated data."
in_NB  inverse_problems  optimization  instrumental_variables  regression  causal_inference  statistics  econometrics  re:ADAfaEPoV 
july 2013 by cshalizi
Susan Cohen, Professional Book Indexer: Home Page
Since my options are to either roll my own index or hire a professional...
writing  indexing  re:ADAfaEPoV 
july 2013 by cshalizi
LaTeX Templates » Books
One that looks like J. Random Freshman Textbook, one that looks like Tufte. Try these for ADAfaEPoV? Or will the publisher just decide?
latex  writing  via:?  re:ADAfaEPoV 
june 2013 by cshalizi
