confidence_sets   103

Cheng, Chen: Nonparametric inference via bootstrapping the debiased estimator
"In this paper, we propose to construct confidence bands by bootstrapping the debiased kernel density estimator (for density estimation) and the debiased local polynomial regression estimator (for regression analysis). The idea of using a debiased estimator was recently employed by Calonico et al. (2018b) to construct a confidence interval of the density function (and regression function) at a given point by explicitly estimating stochastic variations. We extend their ideas of using the debiased estimator and further propose a bootstrap approach for constructing simultaneous confidence bands. This modified method has an advantage that we can easily choose the smoothing bandwidth from conventional bandwidth selectors and the confidence band will be asymptotically valid. We prove the validity of the bootstrap confidence band and generalize it to density level sets and inverse regression problems. Simulation studies confirm the validity of the proposed confidence bands/sets. We apply our approach to an Astronomy dataset to show its applicability."
to:NB  to_read  statistics  bootstrap  confidence_sets  regression  density_estimation  re:ADAfaEPoV 
12 days ago by cshalizi
[1702.03377] Uniform confidence bands for nonparametric errors-in-variables regression
"This paper develops a method to construct uniform confidence bands for a nonparametric regression function where a predictor variable is subject to a measurement error. We allow for the distribution of the measurement error to be unknown, but assume the availability of validation data or repeated measurements on the latent predictor variable. The proposed confidence band builds on the deconvolution kernel estimation and a novel application of the multiplier bootstrap method. We establish asymptotic validity of the proposed confidence band. To our knowledge, this is the first paper to derive asymptotically valid uniform confidence bands for nonparametric errors-in-variables regression."
to:NB  regression  confidence_sets  nonparametrics  statistics  errors-in-variables 
28 days ago by cshalizi
Confidence intervals: not a very strong property - Biased and Inefficient
Cute. (The "Gygax intervals" in paragraph 2 are what I use in teaching to say that coverage, while essential, isn't _enough_.)
statistics  confidence_sets  lumley.thomas  to_teach 
4 weeks ago by cshalizi
Computer model calibration with confidence and consistency
"The paper proposes and examines a calibration method for inexact models. The method produces a confidence set on the parameters that includes the best parameter with a desired probability under any sample size. Additionally, this confidence set is shown to be consistent in that it excludes suboptimal parameters in large sample environments. The method works and the results hold with few assumptions; the ideas are maintained even with discrete input spaces or parameter spaces. Computation of the confidence sets and approximate confidence sets is discussed. The performance is illustrated in a simulation example as well as two real data examples."
to:NB  simulation  statistics  confidence_sets  misspecification 
4 weeks ago by cshalizi
[1906.05349] Permutation-based uncertainty quantification about a mixing distribution
"Nonparametric estimation of a mixing distribution based on data coming from a mixture model is a challenging problem. Beyond estimation, there is interest in uncertainty quantification, e.g., confidence intervals for features of the mixing distribution. This paper focuses on estimation via the predictive recursion algorithm, and here we take advantage of this estimator's seemingly undesirable dependence on the data ordering to obtain a permutation-based approximation of the sampling distribution which can be used to quantify uncertainty. Theoretical and numerical results confirm that the proposed method leads to valid confidence intervals, at least approximately."
to:NB  mixture_models  confidence_sets  statistics 
4 weeks ago by cshalizi
[1905.10634] Adaptive, Distribution-Free Prediction Intervals for Deep Neural Networks
"This paper addresses the problem of assessing the variability of predictions from deep neural networks. There is a growing literature on using and improving the predictive accuracy of deep networks, but a concomitant improvement in the quantification of their uncertainty is lacking. We provide a prediction interval network (PI-Network) which is a transparent, tractable modification of the standard predictive loss used to train deep networks. The PI-Network outputs three values instead of a single point estimate and optimizes a loss function inspired by quantile regression. We go beyond merely motivating the construction of these networks and provide two prediction interval methods with provable, finite sample coverage guarantees without any assumptions on the underlying distribution from which our data is drawn. We only require that the observations are independent and identically distributed. Furthermore, our intervals adapt to heteroskedasticity and asymmetry in the conditional distribution of the response given the covariates. The first method leverages the conformal inference framework and provides average coverage. The second method provides a new, stronger guarantee by conditioning on the observed data. Lastly, our loss function does not compromise the predictive accuracy of the network like other prediction interval methods. We demonstrate the ease of use of the PI-Network as well as its improvements over other methods on both simulated and real data. As the PI-Network can be used with a host of deep learning methods with only minor modifications, its use should become standard practice, much like reporting standard errors along with mean estimates."
to:NB  prediction  confidence_sets  neural_networks  regression  leeb.hannes  statistics 
6 weeks ago by cshalizi
[1904.04276] On assumption-free tests and confidence intervals for causal effects estimated by machine learning
"For many causal effect parameters ψ of interest doubly robust machine learning estimators ψˆ1 are the state-of-the-art, incorporating the benefits of the low prediction error of machine learning algorithms; the decreased bias of doubly robust estimators; and.the analytic tractability and bias reduction of cross fitting. When the potential confounders is high dimensional, the associated (1−α) Wald intervals may still undercover even in large samples, because the bias may be of the same or even larger order than its standard error. In this paper, we introduce tests that can have the power to detect whether the bias of ψˆ1 is of the same or even larger order than its standard error of order n−1/2, can provide a lower confidence limit on the degree of under coverage of the interval and strikingly, are valid under essentially no assumptions. We also introduce an estimator with bias generally less than that of ψˆ1, yet whose standard error is not much greater than ψˆ1's. The tests, as well as the estimator ψˆ2, are based on a U-statistic that is the second-order influence function for the parameter that encodes the estimable part of the bias of ψˆ1. Our impressive claims need to be tempered in several important ways. First no test, including ours, of the null hypothesis that the ratio of the bias to its standard error can be consistent [without making additional assumptions that may be incorrect]. Furthermore the above claims only apply to parameters in a particular class. For the others, our results are less sharp and require more careful interpretation."

--- The old joke about "the part where you say it, and the part where you take it back" usually does not apply to the abstract. But JMR is always worth attending to.
to:NB  statistics  confidence_sets  hypothesis_testing  causal_inference  nonparametrics  robins.james 
9 weeks ago by cshalizi
[1904.01383] Can we trust Bayesian uncertainty quantification from Gaussian process priors with squared exponential covariance kernel?
"We investigate the frequentist coverage properties of credible sets resulting in from Gaussian process priors with squared exponential covariance kernel. First we show that by selecting the scaling hyper-parameter using the maximum marginal likelihood estimator in the (slightly modified) squared exponential covariance kernel the corresponding credible sets will provide overconfident, misleading uncertainty statements for a large, representative subclass of the functional parameters in context of the Gaussian white noise model. Then we show that by either blowing up the credible sets with a logarithmic factor or modifying the maximum marginal likelihood estimator with a logarithmic term one can get reliable uncertainty statement and adaptive size of the credible sets under some additional restriction. Finally we demonstrate on a numerical study that the derived negative and positive results extend beyond the Gaussian white noise model to the nonparametric regression and classification models for small sample sizes as well."
to:NB  bayesian_consistency  nonparametrics  statistics  confidence_sets 
april 2019 by cshalizi
Empirical confidence interval calibration for population-level effect estimation studies in observational healthcare data | PNAS
"Observational healthcare data, such as electronic health records and administrative claims, offer potential to estimate effects of medical products at scale. Observational studies have often been found to be nonreproducible, however, generating conflicting results even when using the same database to answer the same question. One source of discrepancies is error, both random caused by sampling variability and systematic (for example, because of confounding, selection bias, and measurement error). Only random error is typically quantified but converges to zero as databases become larger, whereas systematic error persists independent from sample size and therefore, increases in relative importance. Negative controls are exposure–outcome pairs, where one believes no causal effect exists; they can be used to detect multiple sources of systematic error, but interpreting their results is not always straightforward. Previously, we have shown that an empirical null distribution can be derived from a sample of negative controls and used to calibrate P values, accounting for both random and systematic error. Here, we extend this work to calibration of confidence intervals (CIs). CIs require positive controls, which we synthesize by modifying negative controls. We show that our CI calibration restores nominal characteristics, such as 95% coverage of the true effect size by the 95% CI. We furthermore show that CI calibration reduces disagreement in replications of two pairs of conflicting observational studies: one related to dabigatran, warfarin, and gastrointestinal bleeding and one related to selective serotonin reuptake inhibitors and upper gastrointestinal bleeding. We recommend CI calibration to improve reproducibility of observational studies."
to:NB  statistics  confidence_sets  madigan.david  calibration 
may 2018 by cshalizi
p-Values: The Insight to Modern Statistical Inference | Annual Review of Statistics and Its Application
"I introduce a p-value function that derives from the continuity inherent in a wide range of regular statistical models. This provides confidence bounds and confidence sets, tests, and estimates that all reflect model continuity. The development starts with the scalar-variable scalar-parameter exponential model and extends to the vector-parameter model with scalar interest parameter, then to general regular models, and then references for testing vector interest parameters are available. The procedure does not use sufficiency but applies directly to general models, although it reproduces sufficiency-based results when sufficiency is present. The emphasis is on the coherence of the full procedure, and technical details are not emphasized."
to:NB  p-values  hypothesis_testing  confidence_sets  statistics  fraser.d.a.s. 
september 2017 by cshalizi
[1311.4555] Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife
"We study the variability of predictions made by bagged learners and random forests, and show how to estimate standard errors for these methods. Our work builds on variance estimates for bagging proposed by Efron (1992, 2012) that are based on the jackknife and the infinitesimal jackknife (IJ). In practice, bagged predictors are computed using a finite number B of bootstrap replicates, and working with a large B can be computationally expensive. Direct applications of jackknife and IJ estimators to bagging require B on the order of n^{1.5} bootstrap replicates to converge, where n is the size of the training set. We propose improved versions that only require B on the order of n replicates. Moreover, we show that the IJ estimator requires 1.7 times less bootstrap replicates than the jackknife to achieve a given accuracy. Finally, we study the sampling distributions of the jackknife and IJ variance estimates themselves. We illustrate our findings with multiple experiments and simulation studies."
to:NB  bootstrap  confidence_sets  ensemble_methods  random_forests  decision_trees  statistics  nonparametrics  efron.bradley  hastie.trevor 
december 2016 by cshalizi
[1212.6788] Local and global asymptotic inference in smoothing spline models
"This article studies local and global inference for smoothing spline estimation in a unified asymptotic framework. We first introduce a new technical tool called functional Bahadur representation, which significantly generalizes the traditional Bahadur representation in parametric models, that is, Bahadur [Ann. Inst. Statist. Math. 37 (1966) 577-580]. Equipped with this tool, we develop four interconnected procedures for inference: (i) pointwise confidence interval; (ii) local likelihood ratio testing; (iii) simultaneous confidence band; (iv) global likelihood ratio testing. In particular, our confidence intervals are proved to be asymptotically valid at any point in the support, and they are shorter on average than the Bayesian confidence intervals proposed by Wahba [J. R. Stat. Soc. Ser. B Stat. Methodol. 45 (1983) 133-150] and Nychka [J. Amer. Statist. Assoc. 83 (1988) 1134-1143]. We also discuss a version of the Wilks phenomenon arising from local/global likelihood ratio testing. It is also worth noting that our simultaneous confidence bands are the first ones applicable to general quasi-likelihood models. Furthermore, issues relating to optimality and efficiency are carefully addressed. As a by-product, we discover a surprising relationship between periodic and nonperiodic smoothing splines in terms of inference."
to:NB  nonparametrics  splines  regression  confidence_sets  statistics 
december 2016 by cshalizi
[1209.1508] Confidence sets in sparse regression
"The problem of constructing confidence sets in the high-dimensional linear model with n response variables and p parameters, possibly p≥n, is considered. Full honest adaptive inference is possible if the rate of sparse estimation does not exceed n−1/4, otherwise sparse adaptive confidence sets exist only over strict subsets of the parameter spaces for which sparse estimators exist. Necessary and sufficient conditions for the existence of confidence sets that adapt to a fixed sparsity level of the parameter vector are given in terms of minimal ℓ2-separation conditions on the parameter space. The design conditions cover common coherence assumptions used in models for sparsity, including (possibly correlated) sub-Gaussian designs."
to:NB  confidence_sets  regression  high-dimensional_statistics  linear_regression  statistics  sparsity  van_de_geer.sara  nickl.richard 
november 2016 by cshalizi
[1607.07705] Delineating Parameter Unidentifiabilities in Complex Models
"Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher Information Matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the Likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale, benchmark Systems Biology model of NF-κB, uncovering previously unknown unidentifiabilities."
to:NB  to_read  identifiability  via:vaguery  statistics  fisher_information  confidence_sets 
november 2016 by cshalizi
Estimation and Testing Under Sparsity | Sara van de Geer | Springer
"Taking the Lasso method as its starting point, this book describes the main ingredients needed to study general loss functions and sparsity-inducing regularizers. It also provides a semi-parametric approach to establishing confidence intervals and tests. Sparsity-inducing methods have proven to be very useful in the analysis of high-dimensional data. Examples include the Lasso and group Lasso methods, and the least squares method with other norm-penalties, such as the nuclear norm. The illustrations provided include generalized linear models, density estimation, matrix completion and sparse principal components. Each chapter ends with a problem section. The book can be used as a textbook for a graduate or PhD course."
to:NB  books:noted  statistics  sparsity  high-dimensional_statistics  lasso  hypothesis_testing  confidence_sets  van_de_geer.sara  to_read  empirical_processes 
july 2016 by cshalizi
Härdle, Marron: Bootstrap Simultaneous Error Bars for Nonparametric Regression
"Simultaneous error bars are constructed for nonparametric kernel estimates of regression functions. The method is based on the bootstrap, where resampling is done from a suitably estimated residual distribution. The error bars are seen to give asymptotically correct coverage probabilities uniformly over any number of gridpoints. Applications to an economic problem are given and comparison to both pointwise and Bonferroni-type bars is presented through a simulation study."
to:NB  to_read  bootstrap  confidence_sets  regression  nonparametrics  statistics  to_teach:undergrad-ADA  re:ADAfaEPoV 
april 2016 by cshalizi
[1601.00934] Confidence Intervals for Projections of Partially Identified Parameters
"This paper proposes a bootstrap-based procedure to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models. The extreme points of our confidence interval are obtained by maximizing/minimizing the value of the component (or function) of interest subject to the sample analog of the moment (in)equality conditions properly relaxed. The novelty is that the amount of relaxation, or critical level, is computed so that the component of θ, instead of θ itself, is uniformly asymptotically covered with prespecified probability. Calibration of the critical level is based on repeatedly checking feasibility of linear programming problems, rendering it computationally attractive. Computation of the extreme points of the confidence interval is based on a novel application of the response surface method for global optimization, which may prove of independent interest also for applications of other methods of inference in the moment (in)equalities literature. The critical level is by construction smaller (in finite sample) than the one used if projecting confidence regions designed to cover the entire parameter vector θ. Hence, our confidence interval is weakly shorter than the projection of established confidence sets (Andrews and Soares, 2010), if one holds the choice of tuning parameters constant. We provide simple conditions under which the comparison is strict. Our inference method controls asymptotic coverage uniformly over a large class of data generating processes. Our assumptions and those used in the leading alternative approach (a profiling based method) are not nested. We explain why we employ some restrictions that are not required by other methods and provide examples of models for which our method is uniformly valid but profiling based methods are not."
to:NB  statistics  confidence_sets  partial_identification  bootstrap 
february 2016 by cshalizi
[1602.00359] Confidence intervals for means under constrained dependence
"We develop a general framework for conducting inference on the mean of dependent random variables given constraints on their dependency graph. We establish the consistency of an oracle variance estimator of the mean when the dependency graph is known, along with an associated central limit theorem. We derive an integer linear program for finding an upper bound for the estimated variance when the graph is unknown, but topological and degree-based constraints are available. We develop alternative bounds, including a closed-form bound, under an additional homoskedasticity assumption. We establish a basis for Wald-type confidence intervals for the mean that are guaranteed to have asymptotically conservative coverage. We apply the approach to inference from a social network link-tracing study and provide statistical software implementing the approach."
to:NB  network_data_analysis  graphical_models  estimation  statistics  confidence_sets 
february 2016 by cshalizi
[1507.02061] Honest confidence regions and optimality in high-dimensional precision matrix estimation
"We propose methodology for estimation of sparse precision matrices and statistical inference for their low-dimensional parameters in a high-dimensional setting where the number of parameters p can be much larger than the sample size. We show that the novel estimator achieves minimax rates in supremum norm and the low-dimensional components of the estimator have a Gaussian limiting distribution. These results hold uniformly over the class of precision matrices with row sparsity of small order n‾‾√/logp and spectrum uniformly bounded, under sub-Gaussian tail assumption on the margins of the true underlying distribution. Consequently, our results lead to uniformly valid confidence regions for low-dimensional parameters of the precision matrix. Thresholding the estimator leads to variable selection without imposing irrepresentability conditions. The performance of the method is demonstrated in a simulation study."
to:NB  confidence_sets  estimation  high-dimensional_statistics  statistics  van_de_geer.sara 
august 2015 by cshalizi
[1507.05315] Confidence Sets Based on the Lasso Estimator
"In a linear regression model with fixed dimension, we construct confidence sets for the unknown parameter vector based on the Lasso estimator in finite samples as well as in an asymptotic setup, thereby quantifying estimation uncertainty of this estimator. In finite samples with Gaussian errors and asymptotically in the case where the Lasso estimator is tuned to perform conservative model-selection, we derive formulas for computing the minimal coverage probability over the entire parameter space for a large class of shapes for the confidence sets, thus enabling the construction of valid confidence sets based on the Lasso estimator in these settings. The choice of shape for the confidence sets and comparison with the confidence ellipse based on the least-squares estimator is also discussed. Moreover, in the case where the Lasso estimator is tuned to enable consistent model-selection, we give a simple confidence set with minimal coverage probability converging to one."
to:NB  lasso  regression  confidence_sets  model_selection  variable_selection  statistics 
august 2015 by cshalizi

related tags

asymptotics  bayesian_consistency  bayesianism  benedikt  bernstein-von_mises  bickel.david_r.  books:noted  bootstrap  buhlmann.peter  cai.t._tony  calibration  causal_inference  classifiers  concentration_of_measure  convexity  cross-validation  curve_fitting  decision_trees  density_estimation  dynamical_systems  econometrics  economics  efron.bradley  empirical_processes  ensemble_methods  ergodic_theory  errors-in-variables  estimation  evolutionary_biology  fisher_information  foundations_of_statistics  fraser.d.a.s.  functional_data_analysis  gaussian_processes  gelman.andrew  geometry  graphical_models  haavelmo.trygve  hansen.bruce  hastie.trevor  have_read  heard_the_talk  heavy_tails  high-dimensional_statistics  hoff.peter  holmes.susan  hypothesis_testing  identifiability  in_nb  information_criteria  information_theory  kernel_estimators  kernel_methods  kith_and_kin  lahiri.s.n.  lasso  learning_theory  leeb.hannes  lei.jing  linear_regression  lumley.thomas  machine_learning  macroeconomics  madigan.david  methodology  minimax  misspecification  mixture_models  model_selection  monte_carlo  multiple_testing  natural_language_processing  network_data_analysis  neural_networks  neyman.jerzy  nickl.richard  nonparametrics  oracle_property  owen.art  p-values  partial_identification  phylogenetics  potscher  prediction  propagation_of_error_is_your_friend  random_forests  re:adafaepov  re:aos_project  re:growing_ensemble_project  re:network_differences  re:pli-r  re:xv_for_mixing  re:your_favorite_dsge_sucks  regression  resampling  rinaldo.alessandro  ritov.ya'acov  robins.james  sampling  self-similarity  sequential_decisions  shrinkage  simulation  social_science_methodology  sparsity  spatial_statistics  spatio-temporal_statistics  splines  statistical_inference_for_stochastic_processes  statistics  stochastic_approximation  surveys  systems_identification  tibshirani.robert  tibshirani.ryan  time_series  to:blog  to:nb  to_read  to_teach  to_teach:data-mining  to_teach:undergrad-ada  van_de_geer.sara  van_der_vaart.aad  variable_selection  variance_estimation  visual_display_of_quantitative_information  wasserman.larry 
