cshalizi + re:your_favorite_dsge_sucks   112

A Composite Likelihood Framework for Analyzing Singular DSGE Models | The Review of Economics and Statistics | MIT Press Journals
"This paper builds on the composite likelihood concept of Lindsay (1988) to develop a framework for parameter identification, estimation, inference, and forecasting in dynamic stochastic general equilibrium (DSGE) models allowing for stochastic singularity. The framework consists of four components. First, it provides a necessary and sufficient condition for parameter identification, where the identifying information is provided by the first- and second-order properties of nonsingular submodels. Second, it provides a procedure based on Markov Chain Monte Carlo for parameter estimation. Third, it delivers confidence sets for structural parameters and impulse responses that allow for model misspecification. Fourth, it generates forecasts for all the observed endogenous variables, irrespective of the number of shocks in the model. The framework encompasses the conventional likelihood analysis as a special case when the model is nonsingular. It enables the researcher to start with a basic model and then gradually incorporate more shocks and other features, meanwhile confronting all the models with the data to assess their implications. The methodology is illustrated using both small- and medium-scale DSGE models. These models have numbers of shocks ranging between 1 and 7."
to:NB  state-space_models  economics  time_series  macroeconomics  statistics  likelihood  re:your_favorite_dsge_sucks 
january 2019 by cshalizi
Evolution of Modern Business Cycle Models: Accounting for the Great Recession
"Modern business cycle theory focuses on the study of dynamic stochastic general equilibrium (DSGE) models that generate aggregate fluctuations similar to those experienced by actual economies. We discuss how these modern business cycle models have evolved across three generations, from their roots in the early real business cycle models of the late 1970s through the turmoil of the Great Recession four decades later. The first generation models were real (that is, without a monetary sector) business cycle models that primarily explored whether a small number of shocks, often one or two, could generate fluctuations similar to those observed in aggregate variables such as output, consumption, investment, and hours. These basic models disciplined their key parameters with micro evidence and were remarkably successful in matching these aggregate variables. A second generation of these models incorporated frictions such as sticky prices and wages; these models were primarily developed to be used in central banks for short-term forecasting purposes and for performing counterfactual policy experiments. A third generation of business cycle models incorporate the rich heterogeneity of patterns from the micro data. A defining characteristic of these models is not the heterogeneity among model agents they accommodate nor the micro-level evidence they rely on (although both are common), but rather the insistence that any new parameters or feature included be explicitly disciplined by direct evidence. We show how two versions of this latest generation of modern business cycle models, which are real business cycle models with frictions in labor and financial markets, can account, respectively, for the aggregate and the cross-regional fluctuations observed in the United States during the Great Recession."
to:NB  macroeconomics  re:your_favorite_dsge_sucks  financial_crisis_of_2007--  economics 
august 2018 by cshalizi
On DSGE Models
"The outcome of any important macroeconomic policy change is the net effect of forces operating on different parts of the economy. A central challenge facing policymakers is how to assess the relative strength of those forces. Economists have a range of tools that can be used to make such assessments. Dynamic stochastic general equilibrium (DSGE) models are the leading tool for making such assessments in an open and transparent manner. We review the state of mainstream DSGE models before the financial crisis and the Great Recession. We then describe how DSGE models are estimated and evaluated. We address the question of why DSGE modelers—like most other economists and policymakers—failed to predict the financial crisis and the Great Recession, and how DSGE modelers responded to the financial crisis and its aftermath. We discuss how current DSGE models are actually used by policymakers. We then provide a brief response to some criticisms of DSGE models, with special emphasis on criticism by Joseph Stiglitz, and offer some concluding remarks."
to:NB  macroeconomics  re:your_favorite_dsge_sucks 
august 2018 by cshalizi
Identification in Macroeconomics
"This paper discusses empirical approaches macroeconomists use to answer questions like: What does monetary policy do? How large are the effects of fiscal stimulus? What caused the Great Recession? Why do some countries grow faster than others? Identification of causal effects plays two roles in this process. In certain cases, progress can be made using the direct approach of identifying plausibly exogenous variation in a policy and using this variation to assess the effect of the policy. However, external validity concerns limit what can be learned in this way. Carefully identified causal effects estimates can also be used as moments in a structural moment matching exercise. We use the term "identified moments" as a short-hand for "estimates of responses to identified structural shocks," or what applied microeconomists would call "causal effects." We argue that such identified moments are often powerful diagnostic tools for distinguishing between important classes of models (and thereby learning about the effects of policy). To illustrate these notions we discuss the growing use of cross-sectional evidence in macroeconomics and consider what the best existing evidence is on the effects of monetary policy."
to:NB  causal_inference  macroeconomics  economics  re:your_favorite_dsge_sucks 
august 2018 by cshalizi
The Non-Existence of Representative Agents by Matthew O. Jackson, Leeat Yariv :: SSRN
"We characterize environments in which there exists a representative agent: an agent who inherits the structure of preferences of the population that she represents. The existence of such a representative agent imposes strong restrictions on individual utility functions -- requiring them to be linear in the allocation and additively separable in any parameter that characterizes agents' preferences (e.g., a risk aversion parameter, a discount factor, etc.). Commonly used classes of utility functions (exponentially discounted utility functions, CRRA or CARA utility functions, logarithmic functions, etc.) do not admit a representative agent."
in_NB  economics  macroeconomics  macro_from_micro  aggregation  jackson.matthew_o.  re:your_favorite_dsge_sucks  have_read 
july 2018 by cshalizi
Learning Theory Estimates with Observations from General Stationary Stochastic Processes | Neural Computation | MIT Press Journals
"This letter investigates the supervised learning problem with observations drawn from certain general stationary stochastic processes. Here by general, we mean that many stationary stochastic processes can be included. We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment on analyzing the learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established. The obtained oracle inequality is then applied to derive convergence rates for several learning schemes such as empirical risk minimization (ERM), least squares support vector machines (LS-SVMs) using given generic kernels, and SVMs using gaussian kernels for both least squares and quantile regression. It turns out that for independent and identically distributed (i.i.d.) processes, our learning rates for ERM recover the optimal rates. For non-i.i.d. processes, including geometrically α-mixing Markov processes, geometrically α-mixing processes with restricted decay, ϕ-mixing processes, and (time-reversed) geometrically C-mixing processes, our learning rates for SVMs with gaussian kernels match, up to some arbitrarily small extra term in the exponent, the optimal rates. For the remaining cases, our rates are at least close to the optimal rates. As a by-product, the assumed generalized Bernstein-type inequality also provides an interpretation of the so-called effective number of observations for various mixing processes."
in_NB  stochastic_processes  learning_theory  dependence_measures  mixing  ergodic_theory  statistics  re:XV_for_mixing  re:your_favorite_dsge_sucks 
november 2016 by cshalizi
Herbst, E.P. and Schorfheide, F.: Bayesian Estimation of DSGE Models (eBook and Hardcover).
"Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations."
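--- The particle-filter approximation to the likelihood that the book covers for nonlinear DSGE models is easy to illustrate on a toy example. Below is a minimal sketch of my own (not the book's code, and all tuning constants are arbitrary): a bootstrap particle filter for a linear-Gaussian state-space model, chosen precisely because the exact Kalman-filter likelihood is then available as a check.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear-Gaussian state space (so the exact likelihood exists):
#   s_t = phi * s_{t-1} + w_t,   y_t = s_t + v_t,   w_t, v_t ~ N(0, 1)
phi, T = 0.8, 200
s = np.zeros(T)
for t in range(1, T):
    s[t] = phi * s[t - 1] + rng.normal()
y = s + rng.normal(size=T)

def kalman_loglik(y, phi, q=1.0, r=1.0):
    # exact log-likelihood via the Kalman filter (stationary prior)
    m, P, ll = 0.0, q / (1 - phi ** 2), 0.0
    for yt in y:
        S = P + r                                  # one-step predictive variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m) ** 2 / S)
        K = P / S
        m, P = m + K * (yt - m), (1 - K) * P       # measurement update
        m, P = phi * m, phi ** 2 * P + q           # time update
    return ll

def particle_loglik(y, phi, n_part=5000):
    # bootstrap particle filter: weight by the observation density,
    # resample, propagate; log-likelihood is the sum of log mean weights
    part = rng.normal(scale=np.sqrt(1.0 / (1 - phi ** 2)), size=n_part)
    ll = 0.0
    for yt in y:
        w = np.exp(-0.5 * (yt - part) ** 2) / np.sqrt(2 * np.pi)
        ll += np.log(w.mean())
        part = part[rng.choice(n_part, size=n_part, p=w / w.sum())]
        part = phi * part + rng.normal(size=n_part)
    return ll

ll_kf = kalman_loglik(y, phi)
ll_pf = particle_loglik(y, phi)
print(ll_kf, ll_pf)   # the two should agree closely
```

In a DSGE application the transition comes from the solved model and is nonlinear, which is exactly when the Kalman filter is unavailable and the particle approximation earns its keep.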
to:NB  books:noted  econometrics  macroeconomics  time_series  estimation  statistics  re:your_favorite_dsge_sucks 
january 2016 by cshalizi
Reductionism in Economics: Intentionality and Eschatological Justification in the Microfoundations of Macroeconomics
"Macroeconomists overwhelmingly believe that macroeconomics requires microfoundations, typically understood as a strong eliminativist reductionism. Microfoundations aims to recover intentionality. In the face of technical and data constraints macroeconomists typically employ a representative-agent model, in which a single agent solves the microeconomic optimization problem for the whole economy, and take it to be microfoundationally adequate. The characteristic argument for the representative-agent model holds that the possibility of the sequential elaboration of the model to cover any number of individual agents justifies treating the policy conclusions of the single-agent model as practically relevant. This eschatological justification is examined and rejected."
in_NB  have_read  economics  reductionism  macroeconomics  social_science_methodology  philosophy_of_science  re:your_favorite_dsge_sucks  via:jbdelong 
august 2015 by cshalizi
[1410.3192] Learning without Concentration for General Loss Functions
"We study prediction and estimation problems using empirical risk minimization, relative to a general convex loss function. We obtain sharp error rates even when concentration is false or is very restricted, for example, in heavy-tailed scenarios. Our results show that the error rate depends on two parameters: one captures the intrinsic complexity of the class, and essentially leads to the error rate in a noise-free (or realizable) problem; the other measures interactions between class members, the target, and the loss, and is dominant when the problem is far from realizable. We also explain how one may deal with outliers by choosing the loss in a way that is calibrated to the intrinsic complexity of the class and to the noise-level of the problem (the latter is measured by the distance between the target and the class)."
to:NB  learning_theory  heavy_tails  statistics  to_read  re:your_favorite_dsge_sucks 
january 2015 by cshalizi
How good are out-of-sample forecasting tests? | VOX, CEPR’s Policy Portal
"Out-of-sample forecasting tests are increasingly used to establish the quality of macroeconomic models. This column discusses recent research that assesses what these tests can establish with confidence about macroeconomic models’ specification and forecasting ability. Using a Monte Carlo experiment on a widely used macroeconomic model, the authors find that out-of-sample forecasting tests have weak power against misspecification and forecasting performance. However, an in-sample indirect inference test can be used to establish reliably both the model’s specification quality and its forecasting capacity."

--- Except they don't run tests with _mis-specification_, they run tests with _changes in the parameters_. I am not at all surprised that the forecasts of the Smets-Wouters DSGE are fairly insensitive to the parameters. But to see the power of out-of-sample forecasting to detect mis-specification, you'd need to do runs where the data-generating process wasn't a Smets-Wouters DSGE under any parameter setting at all.
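To make the complaint concrete, here is a toy version of the missing experiment (my own sketch, with nothing of the actual Smets-Wouters model in it): fit a deliberately misspecified AR(1) to data from a threshold autoregression, and compare its out-of-sample errors to those of the oracle forecaster that knows the true DGP. The gap between the two is what an out-of-sample test would need the power to detect.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_threshold_ar(T):
    # DGP outside the fitted class: a threshold AR, so no AR(1)
    # parameter setting reproduces it
    x = np.zeros(T)
    for t in range(1, T):
        phi = 0.9 if x[t - 1] > 0 else -0.5
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def one_step_rmse(preds, actual):
    return np.sqrt(np.mean((actual - preds) ** 2))

x = simulate_threshold_ar(2000)
train, test_prev, test_next = x[:1000], x[999:-1], x[1000:]

# misspecified model: AR(1) fit by OLS on the training window
phi_hat = (train[:-1] @ train[1:]) / (train[:-1] @ train[:-1])
rmse_mis = one_step_rmse(phi_hat * test_prev, test_next)

# oracle: forecasts from the true DGP; its RMSE is the innovation s.d.
phi_true = np.where(test_prev > 0, 0.9, -0.5)
rmse_oracle = one_step_rmse(phi_true * test_prev, test_next)

print(round(rmse_mis, 2), round(rmse_oracle, 2))
# the gap between these two numbers is the signal a forecasting test has
# to detect; the VOX exercise, by construction, never generates such a gap
```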
to:NB  track_down_references  economics  macroeconomics  econometrics  prediction  hypothesis_testing  re:your_favorite_dsge_sucks  have_read  via:djm1107 
january 2015 by cshalizi
Forecasting Nonstationary Time Series: From Theory to Algorithms
"Generalization bounds for time series prediction and other non-i.i.d. learning scenarios that can be found in the machine learning and statistics literature assume that observations come from a (strictly) stationary distribution. The first bounds for the completely non-stationary setting were proved in [6]. In this work we present an extension of these results and derive novel algorithms for forecasting non-stationary time series. Our experimental results show that our algorithms significantly outperform standard autoregressive models commonly used in practice."

--- Assumes mixing but not stationarity.
to:NB  have_read  mixing  learning_theory  re:your_favorite_dsge_sucks  re:XV_for_mixing  time_series 
december 2014 by cshalizi
When economic models are unable to fit the data | VOX, CEPR’s Policy Portal
Shorter: if your model claims to include all the relevant variables and throwing more covariates into your regression improves your fit, you have a problem. (But I would be shocked if they are really doing an adequate job of accounting for specification-search and model-selection issues here.)
track_down_references  economics  model_selection  misspecification  goodness-of-fit  econometrics  statistics  baby_steps  to:blog  re:your_favorite_dsge_sucks 
november 2014 by cshalizi
Forecasting economic time series using flexible versus fixed specification and linear versus nonlinear econometric models
"Nine macroeconomic variables are forecast in a real-time scenario using a variety of flexible specification, fixed specification, linear, and nonlinear econometric models. All models are allowed to evolve through time, and our analysis focuses on model selection and performance. In the context of real-time forecasts, flexible specification models (including linear autoregressive models with exogenous variables and nonlinear artificial neural networks) appear to offer a useful and viable alternative to less flexible fixed specification linear models for a subset of the economic variables which we examine, particularly at forecast horizons greater than 1-step ahead. We speculate that one reason for this result is that the economy is evolving (rather slowly) over time. This feature cannot easily be captured by fixed specification linear models, however, and manifests itself in the form of evolving coefficient estimates. We also provide additional evidence supporting the claim that models which ‘win’ based on one model selection criterion (say a squared error measure) do not necessarily win when an alternative selection criterion is used (say a confusion rate measure), thus highlighting the importance of the particular cost function which is used by forecasters and ‘end-users’ to evaluate their models. A wide variety of different model selection criteria and statistical tests are used to illustrate our findings."
to:NB  economics  macroeconomics  prediction  white.halbert  to_read  re:your_favorite_dsge_sucks 
september 2014 by cshalizi
Angry Bear » Comment on Del Negro, Giannoni & Schorfheide (2014)
"My objection is that, since in practice all deviations between micro founded models and an ad hoc aggregate models are bugs not features, what possible use could there ever be in micro founding models."
macroeconomics  financial_crisis_of_2007--  economics  dsges  re:your_favorite_dsge_sucks  social_science_methodology  have_read 
july 2014 by cshalizi
[1406.2462] Empirical risk minimization for heavy-tailed losses
"The purpose of this paper is to discuss empirical risk minimization when the losses are not necessarily bounded and may have a distribution with heavy tails. In such situations usual empirical averages may fail to provide reliable estimates and empirical risk minimization may provide large excess risk. However, some robust mean estimators proposed in the literature may be used to replace empirical means. In this paper we investigate empirical risk minimization based on a robust estimate proposed by Catoni. We develop performance bounds based on chaining arguments tailored to Catoni's mean estimator."
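--- Catoni's mean estimator is simple enough to sketch in a few lines. Below is my own toy implementation, not the paper's code; the tuning constant alpha is an arbitrary fixed guess here, whereas the theory chooses it from a variance bound.

```python
import numpy as np

def catoni_psi(x):
    # Catoni's "narrowest" influence function:
    # psi(x) = log(1 + x + x^2/2) for x >= 0, = -psi(-x) for x < 0
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x ** 2)

def catoni_mean(x, alpha=0.1, tol=1e-8):
    # Solve sum_i psi(alpha * (x_i - mu)) = 0 for mu by bisection;
    # the sum is strictly decreasing in mu, so the root is unique.
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if catoni_psi(alpha * (x - mid)).sum() > 0:
            lo = mid           # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
sample = rng.standard_t(df=2.5, size=2000) + 5.0   # heavy tails, true mean 5
print(catoni_mean(sample))
```

The logarithmic growth of psi caps the influence any single heavy-tailed draw can have, which is the whole point relative to the empirical average.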
in_NB  learning_theory  heavy_tails  statistics  re:your_favorite_dsge_sucks 
july 2014 by cshalizi
[1406.1037] Bootstrapping High Dimensional Time Series
"We focus on the problem of conducting inference for high dimensional weakly dependent time series. Our results are motivated by the applications in modern high dimensional inference including (1) constructing uniform confidence band for high dimensional mean vector and (2) specification testing on the second order property of high dimensional time series such as white noise testing and testing for bandedness of covariance matrix. In theory, we derive a Gaussian approximation result for the maximum of a sum of weakly dependent vectors by adapting Stein's method, where the dimension of the vectors is allowed to be exponentially larger than the sample size. Our result reveals an interesting phenomenon arising from the interplay between the dependence and dimensionality: the more dependent of the data vectors, the slower diverging rate of the dimension is allowed for obtaining valid statistical inference. Building on the Gaussian approximation result, we propose a blockwise multiplier (wild) bootstrap that is able to capture the dependence amongst and within the data vectors and thus provides high-quality distributional approximation to the distribution of the maximum of vector sum in the high dimensional context."
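--- A stripped-down version of the idea, as I read it (my sketch, not the authors' exact procedure; block length and all constants are picked arbitrarily): give each block of consecutive observations a single Gaussian multiplier, so the within-block dependence survives into the bootstrap distribution of the max statistic.

```python
import numpy as np

def block_multiplier_boot(X, block_len, n_boot=2000, rng=None):
    # Blockwise multiplier bootstrap for max_j |sqrt(T) * mean of column j|.
    # One N(0,1) multiplier per block of consecutive rows.
    rng = np.random.default_rng(rng)
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    n_blocks = T // block_len
    # block partial sums, shape (n_blocks, p)
    S = Xc[: n_blocks * block_len].reshape(n_blocks, block_len, p).sum(axis=1)
    e = rng.standard_normal((n_boot, n_blocks))      # multipliers
    stats = np.abs(e @ S) / np.sqrt(T)               # (n_boot, p)
    return stats.max(axis=1)                         # bootstrap max-statistics

# AR(1) columns give weak dependence; test H0: all p means are zero
rng = np.random.default_rng(3)
T, p = 500, 50
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = 0.3 * X[t - 1] + rng.normal(size=p)
observed = np.abs(np.sqrt(T) * X.mean(axis=0)).max()
crit = np.quantile(block_multiplier_boot(X, block_len=10, rng=4), 0.95)
print(observed, crit)   # under H0, observed falls below crit most of the time
```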
have_read  bootstrap  time_series  high-dimensional_statistics  statistics  re:your_favorite_dsge_sucks  in_NB 
july 2014 by cshalizi
Unpredictability in economic analysis, econometric modeling and forecasting
"Unpredictability arises from intrinsic stochastic variation, unexpected instances of outliers, and unanticipated extrinsic shifts of distributions. We analyze their properties, relationships, and different effects on the three arenas in the title, which suggests considering three associated information sets. The implications of unanticipated shifts for forecasting, economic analyses of efficient markets, conditional expectations, and inter-temporal derivations are described. The potential success of general-to-specific model selection in tackling location shifts by impulse-indicator saturation is contrasted with the major difficulties confronting forecasting."
to:NB  prediction  non-stationarity  econometrics  statistics  re:your_favorite_dsge_sucks  re:growing_ensemble_project  via:djm1107  to_read 
june 2014 by cshalizi
[0707.0322] Consistency of support vector machines for forecasting the evolution of an unknown ergodic dynamical system from observations with unknown noise
"We consider the problem of forecasting the next (observable) state of an unknown ergodic dynamical system from a noisy observation of the present state. Our main result shows, for example, that support vector machines (SVMs) using Gaussian RBF kernels can learn the best forecaster from a sequence of noisy observations if (a) the unknown observational noise process is bounded and has a summable α-mixing rate and (b) the unknown ergodic dynamical system is defined by a Lipschitz continuous function on some compact subset of ℝ^d and has a summable decay of correlations for Lipschitz continuous functions. In order to prove this result we first establish a general consistency result for SVMs and all stochastic processes that satisfy a mixing notion that is substantially weaker than α-mixing."
in_NB  dynamical_systems  mixing  ergodic_theory  nonparametrics  statistics  prediction  support-vector_machines  steinwart.ingo  time_series  statistical_inference_for_stochastic_processes  re:your_favorite_dsge_sucks  re:XV_for_mixing  to_read  entableted 
march 2014 by cshalizi
[1403.0740] On the Information-theoretic Limits of Graphical Model Selection for Gaussian Time Series
"We consider the problem of inferring the conditional independence graph (CIG) of a multivariate stationary discrete-time Gaussian random process based on a finite length observation. Using information-theoretic methods, we derive a lower bound on the error probability of any learning scheme for the underlying process CIG. This bound, in turn, yields a minimum required sample-size which is necessary for any algorithm, regardless of its computational complexity, to reliably select the true underlying CIG. Furthermore, by analysis of a simple selection scheme, we show that the information-theoretic limits can be achieved for a subclass of processes having sparse CIG. We do not assume a parametric model for the observed process, but require it to have a sufficiently smooth spectral density matrix (SDM)."
to:NB  graphical_models  conditional_independence  information_theory  learning_theory  re:your_favorite_dsge_sucks  time_series  statistics  to_read 
march 2014 by cshalizi
AER (104,2) p. 379 - A Macroeconomic Model with a Financial Sector
"This article studies the full equilibrium dynamics of an economy with financial frictions. Due to highly nonlinear amplification effects, the economy is prone to instability and occasionally enters volatile crisis episodes. Endogenous risk, driven by asset illiquidity, persists in crisis even for very low levels of exogenous risk. This phenomenon, which we call the volatility paradox, resolves the Kocherlakota (2000) critique. Endogenous leverage determines the distance to crisis. Securitization and derivatives contracts that improve risk sharing may lead to higher leverage and more frequent crises."
to:NB  economics  macroeconomics  financial_crisis_of_2007--  re:your_favorite_dsge_sucks 
february 2014 by cshalizi
Noahpinion: The equation at the core of modern macro
In defense of the Euler equation (*): why assume that the Fed funds rate / risk-free loan interest rate is the rate of time preference, or even closely correlated with the rate of time preference? Surely the r.o.t.p. is at most one component of even the risk-free interest rate. (I believe I am stealing this argument from J. W. Mason.) --- The point about checking intermediate parts of the model is however entirely sound (and not handled just by doing a generalized-method-of-moments estimate for each equation).
economics  macroeconomics  re:your_favorite_dsge_sucks  social_science_methodology  model_checking 
january 2014 by cshalizi
[1305.4825] Learning subgaussian classes : Upper and minimax bounds
"We obtain sharp oracle inequalities for the empirical risk minimization procedure in the regression model under the assumption that the target Y and the model $\mathcal{F}$ are subgaussian. The bound we obtain is sharp in the minimax sense if $\mathcal{F}$ is convex. Moreover, under mild assumptions on $\mathcal{F}$, the error rate of ERM remains optimal even if the procedure is allowed to perform with constant probability. A part of our analysis is a new proof of minimax results for the gaussian regression model."
in_NB  regression  learning_theory  to_read  re:your_favorite_dsge_sucks 
january 2014 by cshalizi
[1401.0304] Learning without Concentration
"We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without any boundedness assumptions on class members or on the target. Rather than resorting to a concentration-based argument, the method relies on a `small-ball' assumption and thus holds for heavy-tailed sampling and heavy-tailed targets. Moreover, the resulting estimates scale correctly with the `noise'. When applied to the classical, bounded scenario, the method always improves the known estimates."
in_NB  learning_theory  re:your_favorite_dsge_sucks  re:XV_for_mixing  have_read 
january 2014 by cshalizi
AER (104,1) p. 27 - Risk Shocks
"We augment a standard monetary dynamic general equilibrium model to include a Bernanke-Gertler-Gilchrist financial accelerator mechanism. We fit the model to US data, allowing the volatility of cross-sectional idiosyncratic uncertainty to fluctuate over time. We refer to this measure of volatility as risk. We find that fluctuations in risk are the most important shock driving the business cycle."
to:NB  macroeconomics  economics  re:your_favorite_dsge_sucks 
january 2014 by cshalizi
Joint Estimation of Multiple Graphical Models from High Dimensional Time Series
"In this manuscript the problem of jointly estimating multiple graphical models in high dimensions is considered. It is assumed that the data are collected from n subjects, each of which consists of m non-independent observations. The graphical models of subjects vary, but are assumed to change smoothly corresponding to a measure of the closeness between subjects. A kernel based method for jointly estimating all graphical models is proposed. Theoretically, under a double asymptotic framework, where both (m,n) and the dimension d can increase, the explicit rate of convergence in parameter estimation is provided, thus characterizing the strength one can borrow across different individuals and impact of data dependence on parameter estimation. Empirically, experiments on both synthetic and real resting state functional magnetic resonance imaging (rs-fMRI) data illustrate the effectiveness of the proposed method."
to:NB  to_read  graphical_models  time_series  high-dimensional_statistics  kernel_estimators  liu.han  re:your_favorite_dsge_sucks  fmri 
january 2014 by cshalizi
Dynamic Hierarchical Factor Models
"This paper uses multilevel factor models to characterize within- and between-block variations as well as idiosyncratic noise in large dynamic panels. Block-level shocks are distinguished from genuinely common shocks, and the estimated block-level factors are easy to interpret. The framework achieves dimension reduction and yet explicitly allows for heterogeneity between blocks. The model is estimated using an MCMC algorithm that takes into account the hierarchical structure of the factors. The importance of block-level variations is illustrated in a four-level model estimated on a panel of 445 series related to different categories of real activity in the United States."
in_NB  time_series  inference_to_latent_objects  economics  macroeconomics  factor_analysis  hierarchical_statistical_models  statistics  re:your_favorite_dsge_sucks 
january 2014 by cshalizi
[1312.1473] Oracle Properties and Finite Sample Inference of the Adaptive Lasso for Time Series Regression Models
"We derive new theoretical results on the properties of the adaptive least absolute shrinkage and selection operator (adaptive lasso) for time series regression models. In particular, we investigate the question of how to conduct finite sample inference on the parameters given an adaptive lasso model for some fixed value of the shrinkage parameter. Central in this study is the test of the hypothesis that a given adaptive lasso parameter equals zero, which therefore tests for a false positive. To this end we construct a simple testing procedure and show, theoretically and empirically through extensive Monte Carlo simulations, that the adaptive lasso combines efficient parameter estimation, variable selection, and valid finite sample inference in one step. Moreover, we analytically derive a bias correction factor that is able to significantly improve the empirical coverage of the test on the active variables. Finally, we apply the introduced testing procedure to investigate the relation between the short rate dynamics and the economy, thereby providing a statistical foundation (from a model choice perspective) to the classic Taylor rule monetary policy model."
in_NB  lasso  time_series  variable_selection  statistics  re:your_favorite_dsge_sucks 
december 2013 by cshalizi
NON-PARAMETRIC ESTIMATION UNDER STRONG DEPENDENCE - Zhao - 2013 - Journal of Time Series Analysis - Wiley Online Library
"We study non-parametric regression function estimation for models with strong dependence. Compared with short-range dependent models, long-range dependent models often result in slower convergence rates. We propose a simple differencing-sequence based non-parametric estimator that achieves the same convergence rate as if the data were independent. Simulation studies show that the proposed method has good finite sample performance."

- The trick here is to only estimate the dependence on an observed _and_ IID covariate, i.e., the model is Y(t) = m(X(t)) + g(t) + \epsilon_t, where X(t) is the IID covariate, g(t) is an (unknown) time-trend, and \epsilon_t is the long-range-dependent innovation sequence. Differencing gives Y(t) - Y(t-1) = m(X(t)) - m(X(t-1)) + stuff which averages away rapidly, so one can learn the m() function, up to an overall constant, at the usual rate. Worth mentioning in the ADA notes, in the sense of "don't solve hard problems you don't have to", but not a fundamental advance.
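The differencing step is simple enough to demonstrate. Here is a toy version (mine, not the paper's) with a discrete-valued IID covariate, a linear time trend, and random-walk-ish noise standing in for the long-range dependence; m() is recovered by least squares on the differenced responses, pinned down by setting m at the first level to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
m_vals = np.array([1.0, 3.0, 2.0])      # true m() at the three covariate levels
idx = rng.integers(0, 3, size=n)        # IID discrete covariate X_t
g = 0.01 * np.arange(n)                 # unknown smooth time trend g(t)
eps = np.cumsum(rng.normal(scale=0.01, size=n))  # strongly dependent noise (toy)
y = m_vals[idx] + g + eps

# First differences kill the slowly varying terms:
# Y(t) - Y(t-1) = m(X(t)) - m(X(t-1)) + small increments
d = np.diff(y)
A = np.zeros((n - 1, 3))
A[np.arange(n - 1), idx[1:]] += 1.0     # +1 on the current level
A[np.arange(n - 1), idx[:-1]] -= 1.0    # -1 on the previous level
# m() is identified only up to a constant: drop column 0, i.e. pin m(level 0) = 0
theta, *_ = np.linalg.lstsq(A[:, 1:], d, rcond=None)
print(theta)   # estimates of m(1) - m(0) and m(2) - m(0); truth is 2.0 and 1.0
```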
to:NB  time_series  statistical_inference_for_stochastic_processes  statistics  re:your_favorite_dsge_sucks  have_read  nonparametrics  kernel_estimators  to_teach:undergrad-ADA 
december 2013 by cshalizi
JEL (51,4) p. 1120 - Facts and Challenges from the Great Recession for Forecasting and Macroeconomic Modeling
"This paper provides a survey of business cycle facts, updated to take account of recent data. Emphasis is given to the Great Recession, which was unlike most other postwar recessions in the United States in being driven by deleveraging and financial market factors. We document how recessions with financial market origins are different from those driven by supply or monetary policy shocks. This helps explain why economic models and predictors that work well at some times do poorly at other times. We discuss challenges for forecasters and empirical researchers in light of the updated business cycle facts."
to:NB  economics  macroeconomics  financial_crisis_of_2007--  time_series  re:your_favorite_dsge_sucks 
december 2013 by cshalizi
[1311.4175] Estimation in High-dimensional Vector Autoregressive Models
"Vector Autoregression (VAR) is a widely used method for learning complex interrelationship among the components of multiple time series. Over the years it has gained popularity in the fields of control theory, statistics, economics, finance, genetics and neuroscience. We consider the problem of estimating stable VAR models in a high-dimensional setting, where both the number of time series and the VAR order are allowed to grow with sample size. In addition to the "curse of dimensionality" introduced by a quadratically growing dimension of the parameter space, VAR estimation poses considerable challenges due to the temporal and cross-sectional dependence in the data. Under a sparsity assumption on the model transition matrices, we establish estimation and prediction consistency of ℓ1-penalized least squares and likelihood based methods. Exploiting spectral properties of stationary VAR processes, we develop novel theoretical techniques that provide deeper insight into the effect of dependence on the convergence rates of the estimates. We study the impact of error correlations on the estimation problem and develop fast, parallelizable algorithms for penalized likelihood based VAR estimates."
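--- The ℓ1-penalized least-squares estimator amounts to a lasso of each coordinate on the lagged vector. A bare-bones sketch (my code, using plain ISTA rather than the authors' fast parallel algorithms; the penalty level and all other constants are arbitrary):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    # Plain ISTA for the lasso: min_b ||y - X b||^2 / (2n) + lam * ||b||_1
    n, p = X.shape
    b = np.zeros(p)
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()   # 1 / Lipschitz const.
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft-threshold
    return b

# Simulate a sparse, stable VAR(1): x_t = A x_{t-1} + noise
rng = np.random.default_rng(2)
p, T = 10, 400
A = 0.5 * np.eye(p)                     # sparse transition: diagonal only
x = np.zeros((T, p))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(size=p)

# Row-by-row lasso regression of x_t on x_{t-1}
A_hat = np.vstack([lasso_ista(x[:-1], x[1:, j], lam=0.05) for j in range(p)])
print(np.round(np.diag(A_hat), 2))      # somewhat below 0.5: penalty shrinkage
print(np.abs(A_hat[~np.eye(p, dtype=bool)]).max())  # off-diagonals near zero
```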
in_NB  time_series  sparsity  statistics  re:your_favorite_dsge_sucks 
november 2013 by cshalizi
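--- A back-of-the-envelope sketch of the ℓ1-penalized VAR estimator via a toy proximal-gradient (ISTA) loop; the simulated sparse VAR(1), the penalty level, and the step size are all my illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable, sparse VAR(1): x_t = A x_{t-1} + eps_t
p, T = 8, 500
A = 0.5 * np.eye(p)                      # sparse (diagonal) transition matrix
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.standard_normal(p)

Y, Z = X[1:], X[:-1]                     # responses and lagged design
n = len(Y)

# l1-penalized least squares via ISTA (proximal gradient descent)
lam = 0.1                                # penalty level (illustrative choice)
step = n / np.linalg.norm(Z.T @ Z, 2)    # 1 / Lipschitz constant of the gradient
B = np.zeros((p, p))
for _ in range(500):
    grad = (Z.T @ (Z @ B.T - Y)).T / n   # gradient of the squared-error term
    B = B - step * grad
    B = np.sign(B) * np.maximum(np.abs(B) - step * lam, 0.0)  # soft threshold

# The estimate should recover the sparse diagonal structure of A
print(np.round(B, 2))
```

The soft-thresholding step is what enforces sparsity in the estimated transition matrix; with the penalty set too high the dynamics get zeroed out entirely, so the level has to be tuned to the scale of the data.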
Expectations and Economic Fluctuations: An Analysis Using Survey Data
"Using survey-based measures of future U.S. economic activity from the Livingston Survey and the Survey of Professional Forecasters, we study how changes in expectations and their interaction with monetary policy contribute to fluctuations in macroeconomic aggregates. We find that changes in expected future economic activity are a quantitatively important driver of economic fluctuations: a perception that good times are ahead typically leads to a significant rise in current measures of economic activity and inflation. We also find that the short-term interest rate rises in response to expectations of good times as monetary policy tightens."
to:NB  economics  macroeconomics  re:your_favorite_dsge_sucks 
october 2013 by cshalizi
[1309.1007] Concentration in unbounded metric spaces and algorithmic stability
"We prove an extension of McDiarmid's inequality for metric spaces with unbounded diameter. To this end, we introduce the notion of the {\em subgaussian diameter}, which is a distribution-dependent refinement of the metric diameter. Our technique provides an alternative approach to that of Kutin and Niyogi's method of weakly difference-bounded functions, and yields nontrivial, dimension-free results in some interesting cases where the former does not. As an application, we give apparently the first generalization bound in the algorithmic stability setting that holds for unbounded loss functions. We furthermore extend our concentration inequality to strongly mixing processes."
in_NB  have_read  concentration_of_measure  stability_of_learning  learning_theory  probability  kontorovich.aryeh  kith_and_kin  re:XV_for_mixing  re:your_favorite_dsge_sucks 
september 2013 by cshalizi
Taylor & Francis Online :: Oracally Efficient Two-Step Estimation of Generalized Additive Model - Journal of the American Statistical Association - Volume 108, Issue 502
"The generalized additive model (GAM) is a multivariate nonparametric regression tool for non-Gaussian responses including binary and count data. We propose a spline-backfitted kernel (SBK) estimator for the component functions and the constant, which are oracally efficient under weak dependence. The SBK technique is both computationally expedient and theoretically reliable, thus usable for analyzing high-dimensional time series. Inference can be made on component functions based on asymptotic normality. Simulation evidence strongly corroborates the asymptotic theory. The method is applied to estimate insolvent probability and to obtain higher accuracy ratio than a previous study."
to:NB  time_series  additive_models  statistics  high-dimensional_statistics  smoothing  to_read  re:your_favorite_dsge_sucks  to_teach:undergrad-ADA  regression  nonparametrics  hardle.wolfgang 
july 2013 by cshalizi
[1305.5882] Limit theorems for kernel density estimators under dependent samples
"In this paper, we construct a moment inequality for mixing dependent random variables, it is of independent interest. As applications, the consistency of the kernel density estimation is investigated. Several limit theorems are established: First, the central limit theorems for the kernel density estimator $f_{n,K}(x)$ and its distribution function are constructed. Also, the convergence rates of $\|f_{n,K}(x)-Ef_{n,K}(x)\|_{p}$ in sup-norm loss and integral $L^{p}$-norm loss are proved. Moreover, the a.s. convergence rates of the supremum of $|f_{n,K}(x)-Ef_{n,K}(x)|$ over a compact set and the whole real line are obtained. It is showed, under suitable conditions on the mixing rates, the kernel function and the bandwidths, that the optimal rates for i.i.d. random variables are also optimal for dependent ones."

--- The "to_teach" is really "to_mention"
in_NB  kernel_estimators  density_estimation  statistical_inference_for_stochastic_processes  statistics  time_series  to_teach:undergrad-ADA  re:your_favorite_dsge_sucks 
may 2013 by cshalizi
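--- A minimal numerical illustration of the consistency claim: a Gaussian kernel density estimator applied to a mixing (AR(1)) sample recovers the stationary density. The AR(1) process, Gaussian innovations, and Silverman's bandwidth rule are my choices for the sketch, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1): X_t = 0.5 X_{t-1} + eps_t; stationary law is N(0, 1/(1 - 0.25))
phi, n, burn = 0.5, 5000, 500
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + rng.standard_normal()
x = x[burn:]                              # drop burn-in to approximate stationarity

def kde(sample, x0, h):
    """Gaussian kernel density estimate f_{n,K}(x0) with bandwidth h."""
    u = (x0 - sample) / h
    return np.exp(-0.5 * u ** 2).sum() / (len(sample) * h * np.sqrt(2 * np.pi))

h = 1.06 * x.std() * n ** (-1 / 5)        # Silverman's rule of thumb
true_sd = 1 / np.sqrt(1 - phi ** 2)
f_true = 1 / (np.sqrt(2 * np.pi) * true_sd)   # true stationary density at 0

print(kde(x, 0.0, h), f_true)             # the two should be close
```

The point of the entry's limit theorems is that the usual iid rates carry over here despite the serial dependence, provided the mixing coefficients decay fast enough.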
Structural Risk Minimization over Data-Dependent Hierarchies
"The paper introduces some generalizations of Vapnik’s method of structural risk min- imisation (SRM). As well as making explicit some of the details on SRM, it provides a result that allows one to trade off errors on the training sample against improved general- ization performance. It then considers the more general case when the hierarchy of classes is chosen in response to the data. A result is presented on the generalization performance of classifiers with a “large margin”. This theoretically explains the impressive generaliza- tion performance of the maximal margin hyperplane algorithm of Vapnik and co-workers (which is the basis for their support vector machines). The paper concludes with a more general result in terms of “luckiness” functions, which provides a quite general way for ex- ploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets. Four examples are given of such functions, including the VC dimension measured on the sample."
in_NB  learning_theory  structural_risk_minimization  classifiers  vc-dimension  re:your_favorite_dsge_sucks  have_read  to:blog 
april 2013 by cshalizi
Noahpinion: The swamps of DSGE despair
Shorter Noah: With notably rare exceptions, economics is a progressive scientific discipline.
economics  macroeconomics  re:your_favorite_dsge_sucks 
march 2013 by cshalizi
Transforming Modern Macroeconomics: Exploring Disequilibrium Microfoundations, 1956–2003
"This book tells the story of the search for disequilibrium micro-foundations for macroeconomic theory, from the disequilibrium theories of Patinkin, Clower, and Leijonhufvud to recent dynamic stochastic general equilibrium models with imperfect competition. Placing this search against the background of wider developments in macroeconomics, the authors contend that this was never a single research program, but involved economists with very different aims who developed the basic ideas about quantity constraints, spillover effects, and coordination failures in different ways. The authors contrast this with the equilibrium, market-clearing approach of Phelps and Lucas, arguing that equilibrium theories simply assumed away the problems that had motivated the disequilibrium literature. Although market-clearing models came to dominate macroeconomics, disequilibrium theories never went away and continue to exert an important influence on the subject. Although this book focuses on one strand in modern macroeconomics, it is crucial to understanding the origins of modern macroeconomic theory."
in_NB  books:noted  economics  macroeconomics  re:your_favorite_dsge_sucks 
february 2013 by cshalizi
Zhao , Li : Inference for modulated stationary processes
"We study statistical inferences for a class of modulated stationary processes with time-dependent variances. Due to non-stationarity and the large number of unknown parameters, existing methods for stationary, or locally stationary, time series are not applicable. Based on a self-normalization technique, we address several inference problems, including a self-normalized central limit theorem, a self-normalized cumulative sum test for the change-point problem, a long-run variance estimation through blockwise self-normalization, and a self-normalization-based wild bootstrap. Monte Carlo simulation studies show that the proposed self-normalization-based methods outperform stationarity-based alternatives. We demonstrate the proposed methodology using two real data sets: annual mean precipitation rates in Seoul from 1771–2000, and quarterly U.S. Gross National Product growth rates from 1947–2002."
to:NB  to_read  time_series  statistics  non-stationarity  change-point_problem  re:your_favorite_dsge_sucks  re:growing_ensemble_project 
january 2013 by cshalizi
[1212.5796] On the method of typical bounded differences
"Concentration inequalities are fundamental tools in probabilistic combinatorics and theoretical computer science for proving that random functions are near their means. Of particular importance is the case where f(X) is a function of independent random variables X=(X_1, ..., X_n). Here the well known bounded differences inequality (also called McDiarmid's or Hoeffding-Azuma inequality) establishes sharp concentration if the function f does not depend too much on any of the variables. One attractive feature is that it relies on a very simple Lipschitz condition (L): it suffices to show that |f(X)-f(X')| leq c_k whenever X,X' differ only in X_k. While this is easy to check, the main disadvantage is that it considers worst-case changes c_k, which often makes the resulting bounds too weak to be useful.
"In this paper we prove a variant of the bounded differences inequality which can be used to establish concentration of functions f(X) where (i) the typical changes are small although (ii) the worst case changes might be very large. One key aspect of this inequality is that it relies on a simple condition that (a) is easy to check and (b) coincides with heuristic considerations why concentration should hold. Indeed, given an event Gamma that holds with very high probability, we essentially relax the Lipschitz condition (L) to situations where Gamma occurs. The point is that the resulting typical changes c_k are often much smaller than the worst case ones.
"To illustrate its application we consider the reverse H-free process, where H is 2-balanced. We prove that the final number of edges in this process is concentrated, and also determine its likely value up to constant factors. This answers a question of Bollob'as and ErdH{o}s."
in_NB  to_read  probability  concentration_of_measure  re:almost_none  re:your_favorite_dsge_sucks  re:XV_for_mixing  re:XV_for_networks  deviation_inequalities 
december 2012 by cshalizi
[1212.0463] Time series forecasting: model evaluation and selection using nonparametric risk bounds
"We derive generalization error bounds --- bounds on the expected inaccuracy of the predictions --- for traditional time series forecasting models. Our results hold for many standard forecasting tools including autoregressive models, moving average models, and, more generally, linear state-space models. These bounds allow forecasters to select among competing models and to guarantee that with high probability, their chosen model will perform well without making strong assumptions about the data generating process or appealing to asymptotic theory. We motivate our techniques with and apply them to standard economic and financial forecasting tools --- a GARCH model for predicting equity volatility and a dynamic stochastic general equilibrium model (DSGE), the standard tool in macroeconomic forecasting. We demonstrate in particular how our techniques can aid forecasters and policy makers in choosing models which behave well under uncertainty and mis-specification."
in_NB  learning_theory  self-promotion  statistics  statistical_inference_for_stochastic_processes  economics  time_series  macroeconomics  re:your_favorite_dsge_sucks 
december 2012 by cshalizi
De Grauwe, P.: Lectures on Behavioral Macroeconomics.
"In mainstream economics, and particularly in New Keynesian macroeconomics, the booms and busts that characterize capitalism arise because of large external shocks. The combination of these shocks and the slow adjustments of wages and prices by rational agents leads to cyclical movements. In this book, Paul De Grauwe argues for a different macroeconomics model--one that works with an internal explanation of the business cycle and factors in agents' limited cognitive abilities. By creating a behavioral model that is not dependent on the prevailing concept of rationality, De Grauwe is better able to explain the fluctuations of economic activity that are an endemic feature of market economies. This new approach illustrates a richer macroeconomic dynamic that provides for a better understanding of fluctuations in output and inflation.
"De Grauwe shows that the behavioral model is driven by self-fulfilling waves of optimism and pessimism, or animal spirits. Booms and busts in economic activity are therefore natural outcomes of a behavioral model. The author uses this to analyze central issues in monetary policies, such as output stabilization, before extending his investigation into asset markets and more sophisticated forecasting rules. He also examines how well the theoretical predictions of the behavioral model perform when confronted with empirical data."
in_NB  books:noted  economics  macroeconomics  re:your_favorite_dsge_sucks 
october 2012 by cshalizi
Learning Bounds for Importance Weights
"This paper presents an analysis of importance weighting for learning from finite samples and gives a series of theoretical and algorithmic results. We point out simple cases where importance weighting can fail, which suggests the need for an analysis of the properties of this technique. We then give both upper and lower bounds for generalization with bounded importance weights and, more signifi- cantly, give learning guarantees for the more common case of unbounded impor- tance weights under the weak assumption that the second moment is bounded, a condition related to the Re ́nyi divergence of the training and test distributions. These results are based on a series of novel and general bounds we derive for un- bounded loss functions, which are of independent interest. We use these bounds to guide the definition of an alternative reweighting algorithm and report the results of experiments demonstrating its benefits. Finally, we analyze the properties of normalized importance weights which are also commonly used."

(For the generalization bounds with unbounded losses.)
in_NB  have_read  learning_theory  re:your_favorite_dsge_sucks  mohri.meryar 
september 2012 by cshalizi
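--- The bounded-second-moment condition in the abstract can be made concrete with a toy self-normalized importance-sampling estimate; the Gaussian source/target pair, the sample size, and the Kish effective-sample-size diagnostic are my illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Source distribution p = N(0,1); target distribution q = N(0.5,1).
# The importance weight w(x) = q(x)/p(x) = exp(0.5*x - 0.125) is unbounded,
# but its second moment under p, E_p[w^2] = exp(0.25), is finite.
n = 20000
x = rng.standard_normal(n)
w = np.exp(0.5 * x - 0.125)

est_unnorm = np.mean(w * x)               # unnormalized IS estimate of E_q[X]
est_norm = np.sum(w * x) / np.sum(w)      # self-normalized IS estimate
ess = w.sum() ** 2 / (w ** 2).sum()       # Kish effective sample size

print(est_unnorm, est_norm, ess / n)      # both estimates should be near 0.5
```

The effective-sample-size ratio ess/n converges to 1/E_p[w²] = exp(-0.25) ≈ 0.78 here, which is the sense in which a bounded second moment keeps the unbounded weights usable.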
Is Modern Macro or 1978-era Macro More Relevant to the Understanding of the Current Economic Crisis?
"This paper differs from other recent critiques of “modern macro” based on DSGE models. It goes beyond criticizing these models for their assumptions of complete and efficient markets by proposing an alternative macroeconomic paradigm that is more suitable for tracing the links between financial bubbles and the commodity and labor markets of the real economy.
"The paper provides a fundamental critique of DSGE and the related core assumptions of modern business cycle macroeconomics. By attempting to combine sticky Calvo‐like prices in a theoretical setting that otherwise assumes that markets clear, DSGE macro becomes tangled in a web of contradictions. Once prices are sticky, markets fail to clear. Once markets fail to clear, workers are not moving back and forth on their voluntary labor supply curves, so the elasticity of such curves is irrelevant. Once markets fail to clear, firms are not sliding back and forth on their labor demand curves, and so it is irrelevant whether the price‐cost markup (i.e., slope of the labor demand curve) is negative or positive.
"The paper resurrects “1978‐era” macroeconomics that combines non‐market‐clearing aggregate demand based on incomplete price adjustment, together with a supply‐side invented in the mid‐1970s that _recognizes the co‐existence of flexible auction‐market prices for commodities like oil and sticky prices for the remaining non‐oil economy_. As combined in 1978‐era theories, empirical work, and pioneering intermediate macro textbooks, this merger of demand and supply resulted in a well‐articulated dynamic aggregate demand‐supply model that has stood the test of time in explaining both the multiplicity of links between the financial and real economies, as well as why inflation and unemployment can be both negatively and positively correlated.
"Along the way, the paper goes beyond most recent accounts of the worldwide economic crisis by pointing out numerous similarities between the leverage cycles of 1927‐29 and 2003‐06, particularly parallel regulatory failings in both episodes, and it links tightly the empirical lack of realism in the demand and supply sides of modern DSGE models with the empirical reality that has long been built into the 1978‐era paradigm resurrected here."
in_NB  economics  macroeconomics  re:your_favorite_dsge_sucks  financial_crisis_of_2007-- 
june 2012 by cshalizi
"This paper surveys the theoretical literature on aggregation of production functions. The objective is to make neoclassical economists aware of the insurmountable aggregation problems and their implications. We refer to both the Cambridge capital controversies and the aggregation conditions. The most salient results are summarized, and the problems that economists should be aware of from incorrect aggregation are discussed. The most important conclusion is that the conditions under which a well-behaved aggregate production function can be derived from micro production functions are so stringent that it is difficult to believe that actual economies satisfy them. Therefore, aggregate production functions do not have a sound theoretical foundation. For practical purposes this means that while generating GDP, for example, as the sum of the components of aggregate demand (or through the production or income sides of the economy) is correct, thinking of GDP as GDP = F(K, L), where K and L are aggregates of capital and labor, respectively, and F(•) is a well-defined neoclassical function, is most likely incorrect. Likewise, thinking of aggregate investment as a well-defined addition to
‘capital’ in production is also a mistake. The paper evaluates the standard reasons given by economists for continuing to use aggregate production functions in theoretical and applied work, and concludes that none of them provides a valid argument."

--- They are not altogether fair to the instrumentalist, it-works-doesn't-it, defense. (I'm not saying that defense is right, just that they don't really treat it fairly, which would involve looking into how aggregate production functions are supposed to work, and assessing the evidence that they do in fact, do those jobs well.)
in_NB  economics  macro_from_micro  re:your_favorite_dsge_sucks  via:crooked_timber  econometrics  cobb-douglas_production_functions  have_read  fisher.franklin_m. 
june 2012 by cshalizi
Introduction to Computable General Equilibrium Models - Academic and Professional Books - Cambridge University Press
"Computable general equilibrium (CGE) models are widely used by governmental organizations and academic institutions to analyze the economy-wide effects of events such as climate change, tax policies, and immigration. This book provides a practical, how-to guide to CGE models suitable for use at the undergraduate college level. Its introductory level distinguishes it from other available books and articles on CGE models. The book provides intuitive and graphical explanations of the economic theory that underlies a CGE model and includes many examples and hands-on modeling exercises. It may be used in courses on economics principles, microeconomics, macroeconomics, public finance, environmental economics, and international trade and finance, because it shows students the role of theory in a realistic model of an economy. The book is also suitable for courses on general equilibrium models and research methods, and for professionals interested in learning how to use CGE models."

- The mathematical and conceptual level here is shockingly low.
economics  simulation  re:your_favorite_dsge_sucks  re:computational_lens  have_read 
june 2012 by cshalizi
Lecué , Mendelson : General nonexact oracle inequalities for classes with a subexponential envelope
"We show that empirical risk minimization procedures and regularized empirical risk minimization procedures satisfy nonexact oracle inequalities in an unbounded framework, under the assumption that the class has a subexponential envelope function. The main novelty, in addition to the boundedness assumption free setup, is that those inequalities can yield fast rates even in situations in which exact oracle inequalities only hold with slower rates.
"We apply these results to show that procedures based on $ell_{1}$ and nuclear norms regularization functions satisfy oracle inequalities with a residual term that decreases like $1/n$ for every $L_{q}$-loss functions ($qgeq2$), while only assuming that the tail behavior of the input and output variables are well behaved. In particular, no RIP type of assumption or “incoherence condition” are needed to obtain fast residual terms in those setups. We also apply these results to the problems of convex aggregation and model selection."

This looks awesome.
in_NB  to_read  learning_theory  model_selection  statistics  re:your_favorite_dsge_sucks  re:XV_for_mixing  ensemble_methods  lecue.guillaume  mendelson.shahar 
june 2012 by cshalizi
Using Internet Data for Economic Research
"The data used by economists can be broadly divided into two categories. First, structured datasets arise when a government agency, trade association, or company can justify the expense of assembling records. The Internet has transformed how economists interact with these datasets by lowering the cost of storing, updating, distributing, finding, and retrieving this information. Second, some economic researchers affirmatively collect data of interest. For researcher-collected data, the Internet opens exceptional possibilities both by increasing the amount of information available for researchers to gather and by lowering researchers' costs of collecting information. In this paper, I explore the Internet's new datasets, present methods for harnessing their wealth, and survey a sampling of the research questions these data help to answer. The first section of this paper discusses "scraping" the Internet for data—that is, collecting data on prices, quantities, and key characteristics that are already available on websites but not yet organized in a form useful for economic research. A second part of the paper considers online experiments, including experiments that the economic researcher observes but does not control (for example, when Amazon or eBay alters site design or bidding rules); and experiments in which a researcher participates in design, including those conducted in partnership with a company or website, and online versions of laboratory experiments. Finally, I discuss certain limits to this type of data collection, including both "terms of use" restrictions on websites and concerns about privacy and confidentiality."
to:NB  economics  data_sets  web  re:your_favorite_dsge_sucks 
may 2012 by cshalizi
[1202.4294] Prediction of quantiles by statistical learning and application to GDP forecasting
"In this paper, we tackle the problem of prediction and confidence intervals for time series using a statistical learning approach and quantile loss functions. In a first time, we show that the Gibbs estimator (also known as Exponentially Weighted aggregate) is able to predict as well as the best predictor in a given family for a wide set of loss functions. In particular, using the quantile loss function of Koenker and Bassett (1978), this allows to build confidence intervals. We apply these results to the problem of prediction and confidence regions for the French Gross Domestic Product (GDP) growth, with promising results."
in_NB  to_read  prediction  confidence_sets  learning_theory  re:your_favorite_dsge_sucks  re:growing_ensemble_project 
february 2012 by cshalizi
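--- A sketch of the exponentially weighted aggregate under the quantile (pinball) loss of Koenker and Bassett; the pool of constant quantile predictors, the learning rate, and the Gaussian data stream are all my illustrative assumptions, not the paper's GDP application:

```python
import numpy as np

rng = np.random.default_rng(3)

tau = 0.9                                 # target quantile level
thetas = np.linspace(-1.0, 3.0, 81)       # pool of constant quantile predictors
eta = 0.5                                 # learning rate (illustrative choice)
cum_loss = np.zeros_like(thetas)

def pinball(y, q):
    """Quantile (pinball) loss of Koenker and Bassett."""
    return np.maximum(tau * (y - q), (tau - 1) * (y - q))

preds = []
for _ in range(2000):
    w = np.exp(-eta * (cum_loss - cum_loss.min()))   # Gibbs / EWA weights
    w /= w.sum()
    preds.append(w @ thetas)              # aggregated quantile prediction
    y = rng.standard_normal()             # observe the next data point
    cum_loss += pinball(y, thetas)        # update every predictor's loss

# For N(0,1) data the 0.9-quantile is about 1.2816
print(preds[-1])
```

The weights concentrate exponentially fast on the predictors with low cumulative pinball loss, which is exactly what the oracle inequality in the abstract guarantees up to a remainder term.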
[1202.4283] Fast rates in learning with dependent observations
"In this paper we tackle the problem of fast rates in time series forecasting from a statistical learning perspective. In a serie of papers (e.g. Meir 2000, Modha and Masry 1998, Alquier and Wintenberger 2012) it is shown that the main tools used in learning theory with iid observations can be extended to the prediction of time series. The main message of these papers is that, given a family of predictors, we are able to build a new predictor that predicts the series as well as the best predictor in the family, up to a remainder of order $1/sqrt{n}$. It is known that this rate cannot be improved in general. In this paper, we show that in the particular case of the least square loss, and under a strong assumption on the time series (phi-mixing) the remainder is actually of order $1/n$. Thus, the optimal rate for iid variables, see e.g. Tsybakov 2003, and individual sequences, see cite{lugosi} is, for the first time, achieved for uniformly mixing processes. We also show that our method is optimal for aggregating sparse linear combinations of predictors."

--- Assumes observations are in the interval [-B,B] and gets a bound which is O(B^3), and so useless for our purposes.
in_NB  learning_theory  mixing  ergodic_theory  re:your_favorite_dsge_sucks  re:XV_for_mixing  have_read 
february 2012 by cshalizi
The Asymmetric Business Cycle
"The business cycle is a fundamental yet elusive concept in macroeconomics. In this paper, we consider the problem of measuring the business cycle. First, we argue for the output-gap view that the business cycle corresponds to transitory deviations in economic activity away from a permanent, or trend, level. Then we investigate the extent to which a general model-based approach to estimating trend and cycle for the U.S. economy leads to measures of the business cycle that reflect models versus the data. We find empirical support for a nonlinear time series model that produces a business cycle measure with an asymmetric shape across NBER expansion and recession phases. Specifically, this business cycle measure suggests that recessions are periods of relatively large and negative transitory fluctuations in output. However, several close competitors to the nonlinear model produce business cycle measures of widely differing shapes and magnitudes. Given this model-based uncertainty, we construct a model-averaged measure of the business cycle. This measure also displays an asymmetric shape and is closely related to other measures of economic slack such as the unemployment rate and capacity utilization."
--- Worthy, but at the same time makes me want to lock them in a room with a copy of Li and Racine's _Nonparametric Econometrics_, or even _The Elements of Statistical Learning_, and not let them out until they understand it.
in_NB  time_series  statistics  economics  macroeconomics  inference_to_latent_objects  re:your_favorite_dsge_sucks  morley.james  have_read  ensemble_methods  model_selection 
february 2012 by cshalizi
[1111.3404] Estimated VC dimension for risk bounds
"Vapnik-Chervonenkis (VC) dimension is a fundamental measure of the generalization capacity of learning algorithms. However, apart from a few special cases, it is hard or impossible to calculate analytically. Vapnik et al. [10] proposed a technique for estimating the VC dimension empirically. While their approach behaves well in simulations, it could not be used to bound the generalization risk of classifiers, because there were no bounds for the estimation error of the VC dimension itself. We rectify this omission, providing high probability concentration results for the proposed estimator and deriving corresponding generalization bounds."
self-promotion  learning_theory  vc-dimension  machine_learning  re:your_favorite_dsge_sucks 
november 2011 by cshalizi
A Bernstein type inequality and moderate deviations for weakly dependent sequences
"In this paper we present a Bernstein-type tail inequality for the maximum of partial sums of a weakly dependent sequence of random variables that is not necessarily bounded. The class considered includes geometrically and subgeometrically strongly mixing sequences. The result is then used to derive asymptotic moderate deviation results. Applications are given for classes of Markov chains, iterated Lipschitz models and functions of linear processes with absolutely regular innovations." Also: http://arxiv.org/abs/0902.0582
in_NB  to_read  re:XV_for_mixing  re:your_favorite_dsge_sucks  concentration_of_measure  mixing  ergodic_theory  stochastic_processes  moderate_deviations  deviation_inequalities 
november 2011 by cshalizi
[1110.2529] The Generalization Ability of Online Algorithms for Dependent Data
"We study the generalization performance of arbitrary online learning algorithms trained on samples coming from a dependent source of data. We show that the generalization error of any stable online algorithm concentrates around its regret--an easily computable statistic of the online performance of the algorithm--when the underlying ergodic process is $beta$- or $phi$-mixing. We show high probability error bounds assuming the loss function is convex, and we also establish sharp convergence rates and deviation bounds for strongly convex losses and several linear prediction problems such as linear and logistic regression, least-squares SVM, and boosting on dependent data. In addition, our results have straightforward applications to stochastic optimization with dependent data, and our analysis requires only martingale convergence arguments; we need not rely on more powerful statistical tools such as empirical process theory."
in_NB  learning_theory  individual_sequence_prediction  ergodic_theory  mixing  re:growing_ensemble_project  re:XV_for_mixing  stability_of_learning  concentration_of_measure  have_read  re:your_favorite_dsge_sucks 
october 2011 by cshalizi
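--- The "regret as an easily computable statistic" idea can be illustrated with online gradient descent for linear one-step prediction on a mixing (AR(1)) series; the step-size schedule and the data-generating process here are my illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# Dependent data: AR(1) with phi = 0.5; predict y_t = x_{t+1} from x_t
n = 5000
x = np.zeros(n + 1)
for t in range(1, n + 1):
    x[t] = 0.5 * x[t - 1] + 0.5 * rng.standard_normal()
feats, ys = x[:-1], x[1:]

# Online gradient descent on the squared loss with step eta_t = c / sqrt(t)
theta, c = 0.0, 0.1
ogd_loss = 0.0
for t in range(n):
    pred = theta * feats[t]
    ogd_loss += (pred - ys[t]) ** 2
    theta -= (c / np.sqrt(t + 1)) * 2 * (pred - ys[t]) * feats[t]

# Best fixed linear predictor in hindsight (ordinary least squares)
theta_star = (feats @ ys) / (feats @ feats)
best_loss = ((theta_star * feats - ys) ** 2).sum()

regret = ogd_loss - best_loss
print(regret / n)                         # average regret should be small
```

The paper's point is that for mixing data this computable regret also controls the out-of-sample generalization error of the online iterates, without empirical-process machinery.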
[1110.0356] Asymptotic properties of the maximum likelihood estimation in misspecified Hidden Markov models
"Let $(Y_k)$ be a stationary sequence on a probability space taking values in a standard Borel space. Consider the associated maximum likelihood estimator with respect to a parametrized family of Hidden Markov models such that the law of the observations $(Y_k)$ is not assumed to be described by any of the Hidden Markov models of this family. In this paper we investigate the consistency of this estimator in such mispecified models under mild assumptions."
statistical_inference_for_stochastic_processes  markov_models  state-space_models  re:your_favorite_dsge_sucks  in_NB  to_read  misspecification  randal.douc  moulines.eric 
october 2011 by cshalizi
Estimating a Function from Ergodic Samples with Additive Noise [Nobel and Adams]
"We study the problem of estimating an unknown function from ergodic samples corrupted by additive noise. It is shown that one can consistently recover an unknown measurable function in this setting, if the one-dimensional (1-D) distribution of the samples is comparable to a known reference distribution, and the noise is independent of the samples and has known mixing rates. The estimates are applied to deterministic sampling schemes, in which successive samples are obtained by repeatedly applying a fixed map to a given initial vector, and it is then shown how the estimates can be used to reconstruct an ergodic transformation from one of its trajectories"
statistics  estimation  regression  ergodic_theory  via:ded-maxim  in_NB  re:your_favorite_dsge_sucks  dynamical_systems  state-space_reconstruction 
september 2011 by cshalizi
How Useful are Estimated DSGE Model Forecasts? by Rochelle Edge, Refet Gurkaynak :: SSRN
The methodological ideas here are suspect.  It is true that there is not much to predict about an in-control system, and what is happening is largely random and so unpredictable, so that even the true model would show low forecasting ability.  The question however is why we are supposed to think that the DSGE _does_ give us good information about counterfactuals.  If you could show that it had much better predictive performance than baselines like constants or random walks during _out-of-control_ periods, that would be something; but they don't.
re:your_favorite_dsge_sucks  dsges  prediction  economics  macroeconomics  time_series  statistics  in_NB  have_read  to:blog 
july 2011 by cshalizi
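The point about baselines is easy to make concrete. In this simulated sketch the data really are a persistent AR(1), yet the no-change (random-walk) forecast comes within a few percent of the true model's out-of-sample RMSE; small forecast-error gaps over baselines are a very weak certificate for a model.

```python
import numpy as np

# Sketch of the baseline point: for a persistent, "in-control" series,
# even the TRUE model barely beats a no-change (random-walk) forecast
# out of sample, so beating such baselines narrowly proves little.
rng = np.random.default_rng(2)
phi, n = 0.95, 50_000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]   # AR(1): the "true" data process

rmse = lambda f: np.sqrt(np.mean((y[1:] - f) ** 2))
rmse_true = rmse(phi * y[:-1])              # forecast from the true model
rmse_walk = rmse(y[:-1])                    # random walk: predict no change
rmse_const = rmse(np.full(n - 1, y.mean())) # constant baseline

print(rmse_true, rmse_walk, rmse_const)
# the random walk is within a few percent of the true model;
# only the unconditional-mean forecast is clearly worse
```

Analytically, the one-step RMSE ratio of random walk to true model is $\sqrt{2/(1+\phi)}$, about 1.013 at $\phi = 0.95$.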
[1107.4353] Upper Bounds for Markov Approximations of Ergodic Processes
"Chains of infinite order are generalizations of Markov chains that constitute a flexible class of models in the general theory of stochastic processes. These processes can be naturally studied using approximating Markov chains. Here we derive new upper bounds for the $\bar{d}$-distance and the K"ullback-Leibler divergence between chains of infinite order and their canonical $k$-step Markov approximations. In contrast to the bounds available in the literature our results apply to chains of infinite order compatible with general classes of probability kernels. In particular, we allow kernels with discontinuities and null transition probabilities."  (Pedantry: Pretty sure Kullback did not spell his name with an umlaut!)
markov_models  stochastic_processes  re:AoS_project  to_read  in_NB  approximation  re:your_favorite_dsge_sucks 
july 2011 by cshalizi
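For intuition about the quantity being bounded, a sketch (my construction, not from the paper): the Kullback-Leibler divergence rate between a binary chain of order 2 and its canonical 1-step Markov approximation, estimated from a single long trajectory.

```python
import numpy as np

# Sketch of the object being bounded: KL divergence rate between an
# order-2 binary chain and its canonical 1-step Markov approximation,
# everything estimated empirically from one long sample path.
rng = np.random.default_rng(3)

# P(X_t = 1 | X_{t-2} = a, X_{t-1} = b): a genuinely order-2 kernel
ker = {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.7, (1, 1): 0.4}

n = 200_000
x = np.zeros(n, dtype=int)
for t in range(2, n):
    x[t] = rng.random() < ker[(x[t - 2], x[t - 1])]

# empirical transitions for the order-2 chain and its 1-step shadow
c2 = np.zeros((2, 2, 2))
for a, b, c in zip(x[:-2], x[1:-1], x[2:]):
    c2[a, b, c] += 1
p2 = c2 / c2.sum(axis=2, keepdims=True)   # p(c | a, b)
c1 = c2.sum(axis=0)
q1 = c1 / c1.sum(axis=1, keepdims=True)   # p(c | b): the approximation
pi2 = c2.sum(axis=2) / c2.sum()           # stationary law of the pair (a, b)

# KL divergence rate D(order-2 chain || 1-step Markov approximation)
kl = sum(pi2[a, b] * p2[a, b, c] * np.log(p2[a, b, c] / q1[b, c])
         for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(kl)   # strictly positive: one step of memory is not enough here
```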
Cross-Validation and Mean-Square Stability
It's a little boggling that they don't cite any of the modern (2000--) work on theoretical properties of CV, but oh well...
cross-validation  learning_theory  stability_of_learning  statistics  re:your_favorite_dsge_sucks  re:XV_for_mixing  re:XV_for_networks  to_read  via:nikete 
march 2011 by cshalizi
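For reference, the object under discussion in its plainest iid form: a from-scratch K-fold CV sketch for choosing a polynomial degree. (None of the dependent-data subtleties behind re:XV_for_mixing appear here.)

```python
import numpy as np

# Minimal K-fold cross-validation from scratch: pick a polynomial
# degree by averaged held-out mean squared error, plain iid case.
rng = np.random.default_rng(4)
n, K = 300, 5
x = rng.uniform(-1, 1, n)
y = 1 + 2 * x - 3 * x ** 3 + 0.3 * rng.standard_normal(n)  # true degree 3

folds = np.arange(n) % K     # assign each point to one of K folds
rng.shuffle(folds)

def cv_mse(degree):
    errs = []
    for k in range(K):
        train, test = folds != k, folds == k
        coef = np.polyfit(x[train], y[train], degree)  # fit on K-1 folds
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return np.mean(errs)     # average held-out error

scores = {d: cv_mse(d) for d in range(1, 9)}
best = min(scores, key=scores.get)
print(best, scores[best])    # CV favors a degree near the true 3
```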
Learnability, Stability, and Uniform Convergence
"characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. In this paper, we consider the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases. We show that in this setting, there are non-trivial learning problems where uniform convergence does not hold, empirical risk minimization fails, and yet they are learnable using alternative mechanisms. Instead of uniform convergence, we identify stability as the key necessary and sufficient condition for learnability. ... the conditions for learnability in the general setting are significantly more complex than in supervised classification and regression."
learning_theory  stability_of_learning  have_read  re:your_favorite_dsge_sucks 
november 2010 by cshalizi
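The stability notion can be made concrete in a toy: compare how far the learned coefficient vector moves when one training label is replaced, for ridge regression (uniformly stable, with sensitivity shrinking in the regularization weight) versus bare least squares. This illustrates the concept only; it is not the paper's General Learning Setting.

```python
import numpy as np

# Sketch of the stability notion: how far the learned hypothesis moves
# when a single training label is replaced. Ridge regression is
# uniformly stable (sensitivity shrinks with the regularization
# weight); bare least squares on a small sample is not.
rng = np.random.default_rng(5)
n, d = 10, 5
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def fit(targets, lam):
    # ridge solution (lam -> 0 recovers ordinary least squares)
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ targets)

y_pert = y.copy()
y_pert[0] += 10.0   # replace one example's label with an outlier

def shift(lam):
    return np.linalg.norm(fit(y_pert, lam) - fit(y, lam))

shift_ols = shift(1e-12)    # (near-)unregularized least squares
shift_ridge = shift(10.0)
print(shift_ols, shift_ridge)   # the regularized rule moves less
```

The ordering is not an accident of the seed: each eigencomponent of the coefficient change scales as $1/(s_j + \lambda)$, so the ridge shift is smaller for any design matrix.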
Monetary Economics: An Integrated Approach to Credit, Money, Income, Production and Wealth by Wynne Godley - Powell's Books
"challenges the mainstream paradigm, which is based on the inter-temporal optimisation of welfare by individual agents. It introduces a new methodology for studying how it is institutions which create flows of income, expenditure and production together with stocks of assets (including money) and liabilities, thereby determining how whole economies evolve through time. Starting with extremely simple stock flow consistent (SFC) models, the text describes a succession of increasingly complex models. Solutions of these models are used to illustrate ways in which whole economies evolve when shocked in various ways. Readers will be able to download all the models and explore their properties for themselves. A major conclusion is that economies require management via fiscal and monetary policy if full employment without inflation is to be achieved."  In library.
books:noted  macroeconomics  economics  re:your_favorite_dsge_sucks 
july 2010 by cshalizi
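The "extremely simple" end of that ladder fits in a few lines. Below is a sketch of what I believe is the book's simplest model (model SIM, as commonly presented), with illustrative parameter values; the downloadable models mentioned in the blurb are the authoritative versions.

```python
# Sketch of the simplest stock-flow consistent model (Godley &
# Lavoie's "model SIM", as commonly presented): government spends G,
# taxes income at rate theta, households consume out of disposable
# income and accumulated money balances H. Every flow comes from
# somewhere and goes somewhere, so the stock H evolves consistently
# and the economy converges to the steady state Y* = G / theta.
alpha1, alpha2 = 0.6, 0.4   # consumption out of income / out of wealth
theta, G = 0.2, 20.0        # tax rate, government spending
H = 0.0                     # households start with no money
for _ in range(200):
    # solve the period's simultaneous equations for Y given last H:
    #   Y = C + G,  C = alpha1*(1 - theta)*Y + alpha2*H
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    YD = (1 - theta) * Y            # disposable income
    C = alpha1 * YD + alpha2 * H    # consumption
    H = H + YD - C                  # money stock: unspent income accumulates
print(Y, H)   # converges to Y* = G/theta = 100 and H* = 80
```

With these parameters the adjustment is geometric (factor about 0.85 per period), so 200 periods is far more than enough for convergence.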