By: | Monica Billio; Roberto Casarin; Luca Rossini |
Abstract: | Seemingly unrelated regression (SUR) models are used to study the interactions among economic variables of interest. In a high-dimensional setting, and when applied to large panels of time series, these models have a large number of parameters to be estimated and suffer from inferential problems. We propose a Bayesian nonparametric hierarchical model for multivariate time series that avoids overparametrization and overfitting and allows for shrinkage toward multiple prior means with unknown location, scale and shape parameters. We propose a two-stage hierarchical prior distribution. The first stage of the hierarchy consists of a lasso-type conditionally independent prior distribution of the Normal-Gamma family for the SUR coefficients. The second stage is a random mixture distribution for the Normal-Gamma hyperparameters, which allows for parameter parsimony through two components. The first is a random Dirac point-mass distribution, which induces sparsity in the SUR coefficients; the second is a Dirichlet process prior, which allows for clustering of the SUR coefficients. We provide a Gibbs sampler for posterior approximation based on the introduction of auxiliary variables. Simulated examples show the efficiency of the proposed approach. We study the effectiveness of our model and inference approach in an application to macroeconomics. |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1608.02740&r=ecm |
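To make the two-stage prior above concrete, here is a minimal numpy sketch that simulates from a prior of this general shape: a Dirac point mass at zero (sparsity) mixed with a truncated Dirichlet process over Normal-Gamma hyperparameters (clustering). All parameter values (spike probability, DP concentration, atom distributions) are illustrative choices of ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights for a Dirichlet process."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    weights = betas * np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
    return weights / weights.sum()

def draw_sur_coefficients(n_coef, p_spike=0.3, dp_alpha=1.0, n_atoms=20, rng=rng):
    # Stage 2: hyperparameters come from a mixture of a Dirac point mass
    # at zero (exact sparsity) and a Dirichlet process (clustering).
    w = stick_breaking(dp_alpha, n_atoms, rng)
    atoms = np.column_stack([rng.gamma(2.0, 1.0, n_atoms),   # gamma shape
                             rng.gamma(2.0, 1.0, n_atoms)])  # gamma rate
    coefs = np.zeros(n_coef)
    for j in range(n_coef):
        if rng.random() < p_spike:
            continue                       # spike: coefficient exactly zero
        k = rng.choice(n_atoms, p=w)       # pick a DP cluster of hyperparams
        shape, rate = atoms[k]
        # Stage 1: Normal-Gamma (lasso-type) conditionally independent prior
        psi = rng.gamma(shape, 1.0 / rate)          # local variance
        coefs[j] = rng.normal(0.0, np.sqrt(psi))
    return coefs

print(draw_sur_coefficients(10))
```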
By: | Beltran, Daniel O.; Draper, David |
Abstract: | Central banks have long used dynamic stochastic general equilibrium (DSGE) models, which are typically estimated using Bayesian techniques, to inform key policy decisions. This paper offers an empirical strategy that quantifies the information content of the data relative to that of the prior distribution. Using an off-the-shelf DSGE model applied to quarterly Euro Area data from 1970:3 to 2009:4, we show how Monte Carlo simulations can reveal parameters for which the model's structure obscures identification. By integrating out components of the likelihood function and conducting a Bayesian sensitivity analysis, we uncover parameters that are weakly informed by the data. The weak identification of some key structural parameters in our comparatively simple model should raise a red flag to researchers trying to draw valid inferences from, and to base policy upon, complex large-scale models featuring many parameters. |
Keywords: | Bayesian estimation ; Econometric modeling ; Kalman filter ; Likelihood ; Local identification ; Euro Area ; MCMC ; Policy-relevant parameters ; Prior-versus-posterior comparison ; Sensitivity analysis |
JEL: | C11 C18 F41 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgif:1175&r=ecm |
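A toy illustration of the prior-versus-posterior comparison the paper above uses: in a model where only the sum of two parameters enters the likelihood, the marginal posterior of each parameter barely tightens relative to its prior. The model and priors below are our own stand-in, not the paper's DSGE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y ~ N(theta1 + theta2, 1), so only the sum is identified
# and the data are weakly informative about each theta separately.
y = rng.normal(1.0, 1.0, size=200)

grid = np.linspace(-3, 3, 201)
t1, t2 = np.meshgrid(grid, grid, indexing="ij")
log_prior = -0.5 * (t1**2 + t2**2)              # independent N(0, 1) priors
m = t1 + t2
log_lik = -0.5 * len(y) * (m - y.mean()) ** 2   # up to a constant in m
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

# prior-versus-posterior comparison: a posterior sd close to the prior
# sd flags a weakly informed parameter
marg1 = post.sum(axis=1)
mean1 = (grid * marg1).sum()
sd1 = np.sqrt(((grid - mean1) ** 2 * marg1).sum())
print(f"prior sd = 1.00, posterior sd of theta1 = {sd1:.2f}")
```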
By: | Veiga, Helena; Ruiz, Esther; González-Rivera, Gloria; Gonçalves Mazzeu, Joao Henrique |
Abstract: | We propose an extension of the Generalized Autocontour (G-ACR) tests (González-Rivera and Sun, 2015) for dynamic specifications of conditional densities (in-sample) and of forecast densities (out-of-sample). The new tests are based on probability integral transforms (PITs) computed from bootstrap conditional densities, so that no assumption on the functional form of the density is needed. The proposed bootstrap procedure generates predictive densities that incorporate parameter uncertainty. In addition, the bootstrapped G-ACR tests enjoy standard asymptotic distributions. This approach is particularly useful for evaluating multi-step predictive densities whose functional form is unknown or difficult to obtain, even in cases where the conditional density of the model is known. |
Keywords: | PIT; Parameter Uncertainty; Model Evaluation; Distribution Uncertainty |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:23457&r=ecm |
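A rough sketch of the PIT-from-bootstrap idea on an AR(1): each predictive draw re-estimates the parameter on resampled data (here a simple pairs bootstrap, a simplification of the authors' procedure) and adds a resampled residual, so the PITs reflect both parameter and distribution uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) series, then compute one-step out-of-sample PITs
# from a bootstrap predictive density that carries parameter uncertainty.
phi_true, n, n_oos, B = 0.6, 300, 100, 200
y = np.zeros(n + n_oos)
for t in range(1, n + n_oos):
    y[t] = phi_true * y[t - 1] + rng.normal()

pits = []
for t in range(n, n + n_oos):
    ys = y[:t]
    phi_hat = (ys[:-1] @ ys[1:]) / (ys[:-1] @ ys[:-1])
    resid = ys[1:] - phi_hat * ys[:-1]
    draws = np.empty(B)
    for b in range(B):
        idx = rng.integers(1, t, size=t - 1)       # pairs bootstrap of (lag, lead)
        phi_b = (ys[idx - 1] @ ys[idx]) / (ys[idx - 1] @ ys[idx - 1])
        draws[b] = phi_b * ys[-1] + rng.choice(resid)
    pits.append((draws < y[t]).mean())             # PIT via empirical CDF

# under a correct predictive density the PITs are approximately U(0, 1)
print("PIT mean/std:", np.mean(pits), np.std(pits))
```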
By: | Rubi Tonantzin Gutiérrez Villanueva (Division of Economics, CIDE) |
Abstract: | This study has two major purposes: (1) to identify causal inference in time series using Granger causality tests and Convergent Cross Mapping (Sugihara et al., 2012) and (2) to investigate the relation between economic growth and government expenditure in Mexico. Convergent Cross Mapping (CCM) has shown high potential for causal inference in complex and non-linear systems and has been used as an alternative approach to Granger causality. However, we show that CCM fails to infer the direction of causality in linear time series and in time series with structural breaks. In contrast, we demonstrate that the Toda-Yamamoto test (Toda and Yamamoto, 1995) successfully detects causal relations in linear systems and in systems with structural breaks. Finally, we evaluate the causal relation between government expenditure and economic growth in Mexico and assess the validity of Wagner's law and the Keynesian view. The empirical results suggest that Wagner's law holds for Mexico over the period 1980 to 2010. |
Keywords: | Causality, Convergent Cross Mapping, Granger Causality Tests, Government Expenditure, Economic Growth, Mexico. |
JEL: | C12 C15 C18 C22 |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:emc:thgrad:tesg008&r=ecm |
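A minimal implementation of the Toda-Yamamoto procedure referenced above: estimate in levels with p + d_max lags, then Wald-test only the first p lags of the putative cause. The data-generating process is a made-up example of ours.

```python
import numpy as np
import statsmodels.api as sm

def toda_yamamoto(y, x, p, d_max):
    """Wald test that x does not Granger-cause y: estimate in levels
    with k = p + d_max lags, but restrict only the first p lags of x
    (Toda and Yamamoto, 1995)."""
    k, T = p + d_max, len(y)
    lags = [y[k - i:T - i] for i in range(1, k + 1)] + \
           [x[k - i:T - i] for i in range(1, k + 1)]
    X = sm.add_constant(np.column_stack(lags))
    res = sm.OLS(y[k:], X).fit()
    R = np.zeros((p, X.shape[1]))
    for i in range(p):
        R[i, 1 + k + i] = 1.0        # skip constant and the k lags of y
    return res.wald_test(R, scalar=True)

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=300))              # I(1) driver
y = np.zeros(300)
y[1:] = 0.5 * x[:-1] + rng.normal(size=299)      # x causes y with one lag
print(toda_yamamoto(y, x, p=2, d_max=1))
```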
By: | Weihua An (Departments of Sociology and Statistics, Indiana University) |
Abstract: | Currently, panel data analysis relies largely on parametric models (such as random effects and fixed effects models). These models make strong assumptions in order to draw causal inference, while in reality any of these assumptions may not hold. Compared to parametric models, matching does not make strong parametric assumptions and helps provide focused inference on the effect of a particular cause. However, matching has typically been used in cross-sectional data analysis. In this paper, we extend matching to panel data analysis. In the spirit of the difference-in-differences method, we first difference the outcomes to remove the fixed effects. Then we apply matching to the differenced outcomes at each wave (except the first one). The results can be used to examine whether treatment effects vary across time. The estimates from the separate waves can also be combined to provide an overall estimate of the treatment effects. In doing so, we present a variance estimator for the overall treatment effects that can account for complicated sequential dependence in the data. We demonstrate the method through empirical examples and show its efficacy in comparison to previous methods. We also outline a Stata add-on, "DIDMatch", that we are creating to implement the method. |
Date: | 2016–08–10 |
URL: | http://d.repec.org/n?u=RePEc:boc:scon16:21&r=ecm |
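A toy numpy version of the difference-then-match idea above: first-difference the outcome to sweep out unit fixed effects, then nearest-neighbor match treated to control units on a covariate at each wave. Everything here (covariate, effect size, matching rule) is an illustrative simplification of ours, not the DIDMatch implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy panel: n units, 3 waves; treatment effect tau accrues each wave.
n, waves, tau = 400, 3, 1.0
alpha = rng.normal(size=n)                       # unit fixed effects
xcov = rng.normal(size=n)                        # matching covariate
treat = (xcov + rng.normal(size=n) > 0).astype(int)
y = np.stack([alpha + 0.5 * xcov * w + tau * w * treat + rng.normal(size=n)
              for w in range(waves)])

controls = np.where(treat == 0)[0]
effects = []
for w in range(1, waves):
    dy = y[w] - y[w - 1]                         # differencing removes alpha
    # nearest-neighbor match each treated unit to a control on xcov
    att = [dy[i] - dy[controls[np.argmin(np.abs(xcov[controls] - xcov[i]))]]
           for i in np.where(treat == 1)[0]]
    effects.append(np.mean(att))
print("per-wave ATT estimates:", np.round(effects, 2),
      "overall:", round(float(np.mean(effects)), 2))
```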
By: | Kun Ho Kim; Wolfgang K. Härdle; Shih-Kang Chao |
Abstract: | In this paper, we analyze the nonparametric part of a partially linear model when the covariates in the parametric and non-parametric parts are subject to measurement errors. Based on a two-stage semi-parametric estimate, we construct a uniform confidence surface of the multivariate function for simultaneous inference. The developed methodology is applied to perform inference for U.S. gasoline demand, where the income and price variables are measured with errors. The empirical results strongly suggest that linearity of U.S. gasoline demand is rejected. |
Keywords: | Measurement error, Partially linear model, Regression calibration, Non-parametric function, Semi-parametric regression, Uniform confidence surface, Simultaneous inference, U.S. Gasoline demand, Non-linearity |
JEL: | C12 C13 C14 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-024&r=ecm |
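A small sketch of regression calibration, one standard correction named in the keywords above: replace the mismeasured covariate by its best linear predictor given the observation. We assume, for illustration only, that the measurement-error variance is known.

```python
import numpy as np

rng = np.random.default_rng(5)

# W = X + U is observed instead of the true covariate X.
n, sigma_u = 2000, 0.8
x = rng.normal(size=n)                     # true covariate (unobserved)
w = x + rng.normal(scale=sigma_u, size=n)  # observed with measurement error
y = 1.5 * x + rng.normal(size=n)

# naive OLS slope is attenuated toward zero
naive = np.cov(w, y)[0, 1] / np.cov(w, y)[0, 0]

# calibration: E[X|W] = mu + lam * (W - mu), lam = var(X) / var(W),
# with the error variance sigma_u**2 treated as known
lam = (np.var(w) - sigma_u**2) / np.var(w)
x_hat = w.mean() + lam * (w - w.mean())
calibrated = np.cov(x_hat, y)[0, 1] / np.cov(x_hat, y)[0, 0]

print(f"naive slope {naive:.2f}, calibrated slope {calibrated:.2f} (true 1.5)")
```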
By: | Francesco Agostinelli; Matthew Wiswall |
Abstract: | A recent and growing area of research applies latent factor models to study the development of children's skills. Some normalization is required in these models because the latent variables have no natural units and no known location or scale. We show that the standard practice of “re-normalizing” the latent variables each period is over-identifying and restrictive when used simultaneously with common skill production technologies that already have a known location and scale (KLS). The KLS class of functions includes the Constant Elasticity of Substitution (CES) production technologies that several papers use in their estimation. Because these KLS production functions already have a known location and scale (which does not need to be identified and estimated), further restricting location and scale by re-normalizing the model each period is unnecessary and over-identifying. The most common type of re-normalization restriction imposes that latent skills are mean log-stationary, which restricts the class of CES technologies to the log-linear (Cobb-Douglas) subclass and does not allow for more general forms of complementarities. Even when a mean log-stationary model is correctly assumed, re-normalization can further bias the estimates of the skill production function. We support our analytic results through a series of Monte Carlo exercises. We show that in typical cases, estimators based on “re-normalizations” are biased, and that simple alternative estimators, which do not impose these restrictions, can recover the underlying primitive parameters of the production technology. |
JEL: | C38 J13 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:22441&r=ecm |
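For orientation, a schematic CES skill technology of the kind the abstract above discusses, together with its Cobb-Douglas limit. The notation is ours and omits TFP and shock terms such models typically include.

```latex
% Next-period skill as a CES aggregate of current skill and investment,
% with share parameters summing to one (so location and scale are known):
\theta_{i,t+1} = \left[\gamma\,\theta_{i,t}^{\sigma}
               + (1-\gamma)\,I_{i,t}^{\sigma}\right]^{1/\sigma},
\qquad \gamma \in (0,1),\ \sigma \le 1.
% The log-linear (Cobb--Douglas) subclass is the sigma -> 0 limit:
\ln\theta_{i,t+1} = \gamma\,\ln\theta_{i,t} + (1-\gamma)\,\ln I_{i,t},
% which, per the abstract, is the only CES member compatible with
% mean log-stationary re-normalization each period.
```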
By: | Victor Chernozhukov (Institute for Fiscal Studies and MIT); Juan Carlos Escanciano (Institute for Fiscal Studies); Hidehiko Ichimura (Institute for Fiscal Studies and University of Tokyo); Whitney K. Newey (Institute for Fiscal Studies) |
Abstract: | This paper shows how to construct locally robust semiparametric GMM estimators, meaning that the moment conditions have zero derivative with respect to the first step or, equivalently, that the first step does not affect the asymptotic variance. They are constructed by adding to the moment functions the adjustment term for first-step estimation. Locally robust estimators have several advantages. They are vital for valid inference with machine learning in the first step, see Belloni et al. (2012, 2014), and are less sensitive to the specification of the first step. They are doubly robust for affine moment functions, in that the moment conditions continue to hold when one first-step component is incorrect. Locally robust moment conditions also have smaller bias that is flatter as a function of first-step smoothing, leading to improved small-sample properties. Series first-step estimators confer local robustness on any moment conditions and are doubly robust for affine moments, in the direction of the series approximation. Many new locally and doubly robust estimators are given here, including for economic structural models. We give simple asymptotic theory for estimators that use cross-fitting in the first step, including machine learning. |
Keywords: | Local robustness, double robustness, semiparametric estimation, bias, GMM |
JEL: | C13 C14 C21 D24 |
Date: | 2016–08–02 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:31/16&r=ecm |
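The classic concrete instance of a locally/doubly robust moment is the augmented inverse-propensity (AIPW) moment for an average treatment effect: the sketch below adds the first-step adjustment term to the plug-in moment and uses two-fold cross-fitting, as the abstract describes. It is a textbook special case, not the paper's general construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(6)

# Simulated observational data with a known ATE of 1.0.
n = 4000
x = rng.normal(size=(n, 3))
d = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
y = d * 1.0 + x[:, 0] + rng.normal(size=n)

psi = np.empty(n)
fold = rng.integers(0, 2, size=n)       # 2-fold cross-fitting
for k in (0, 1):
    tr, te = fold != k, fold == k
    # first-step nuisances estimated on the other fold
    ps = LogisticRegression().fit(x[tr], d[tr]).predict_proba(x[te])[:, 1]
    m1 = LinearRegression().fit(x[tr][d[tr] == 1], y[tr][d[tr] == 1]).predict(x[te])
    m0 = LinearRegression().fit(x[tr][d[tr] == 0], y[tr][d[tr] == 0]).predict(x[te])
    # orthogonal moment: plug-in difference + inverse-propensity adjustment
    psi[te] = (m1 - m0
               + d[te] * (y[te] - m1) / ps
               - (1 - d[te]) * (y[te] - m0) / (1 - ps))

print(f"AIPW ATE estimate: {psi.mean():.2f} (true 1.0)")
```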
By: | Escribano, Álvaro; Blazsek, Szabolcs |
Abstract: | This paper suggests new Dynamic Conditional Score (DCS) count panel data models. We compare the statistical performance of the static model, the finite distributed lag model, the exponential feedback model, and different DCS count panel data models. For DCS we consider random walk and quasi-autoregressive (QAR) formulations of the dynamics. We use panel data for a large cross section of United States firms for the period 1979 to 2000. We estimate the models using the Poisson quasi-maximum likelihood estimator with fixed effects. The estimation results and diagnostic tests suggest that the statistical performance of DCS-QAR is superior to that of the alternative models. |
Keywords: | quasi-maximum likelihood; dynamic conditional score; count panel data; research and development |
JEL: | O3 C52 C51 C35 C33 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:23458&r=ecm |
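A minimal score-driven (DCS) Poisson recursion, as a sketch of the model class above: the log intensity is updated each period by the score of the Poisson log-likelihood, here in a QAR-style mean-reverting formulation. Parameter values are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(7)

# f_t is the log intensity; the score of the Poisson log-likelihood
# with respect to f_t is (y_t - exp(f_t)).
omega, beta, kappa, T = 0.5, 0.9, 0.1, 200
f = np.empty(T)
y = np.empty(T, dtype=int)
f[0] = omega
for t in range(T):
    lam = np.exp(f[t])
    y[t] = rng.poisson(lam)
    score = y[t] - lam
    if t + 1 < T:
        # QAR-style update: mean reversion to omega plus scaled score
        f[t + 1] = omega * (1 - beta) + beta * f[t] + kappa * score
print("simulated counts:", y[:10])
```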
By: | Mikkel Bennedsen (Aarhus University and CREATES) |
Abstract: | Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on the fractal index, a, of a time series. Our setup encompasses a large class of Gaussian processes and we show how to extend it to a large class of non-Gaussian models as well. It is proved that the asymptotic distribution of the estimator of a does not depend on the specifics of the data generating process for the observations, but only on the value of a and a “heteroscedasticity” factor. Using this, we propose a simulation-based approach to inference, which is easily implemented and is valid more generally than asymptotic analysis. We detail how the methods can be applied to test whether a stochastic process is a non-semimartingale. Finally, the methods are illustrated in two empirical applications motivated from finance. We study time series of log-prices and log-volatility from 29 individual US stocks; no evidence of non-semimartingality in asset prices is found, but we do find evidence of non-semimartingality in volatility. This confirms a recently proposed conjecture that stochastic volatility processes of financial assets are rough (Gatheral et al., 2014). |
Keywords: | Fractal index, Monte Carlo simulation, roughness, semimartingality, fractional Brownian motion, stochastic volatility |
JEL: | C12 C22 C63 G12 |
MSC 2010: | 60G10, 60G15, 60G17, 60G22, 62M07, 62M09, 65C05 |
Date: | 2016–08–04 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2016-21&r=ecm |
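A simple variogram-regression sketch of the kind of roughness estimation the paper above studies: regress the log empirical variogram on the log lag and read off the scaling exponent. The paper's estimator and its asymptotics differ; this shows only the basic idea.

```python
import numpy as np

rng = np.random.default_rng(8)

def scaling_exponent(x, lags=(1, 2, 4, 8, 16)):
    """OLS slope of log variogram on log lag; for a Gaussian process
    with variogram ~ c|h|^(2H) the slope recovers 2H."""
    gamma = [np.mean((x[h:] - x[:-h]) ** 2) for h in lags]
    slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
    return slope / 2.0          # H, the Hurst/roughness index

# Brownian motion (H = 0.5) as a sanity check
bm = np.cumsum(rng.normal(size=100_000))
print(f"estimated H for Brownian motion: {scaling_exponent(bm):.2f}")
```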
By: | Iskrev, Nikolay; Ritto, Joao |
Abstract: | In a recent article, Canova et al. (2014) study the optimal choice of variables to use in the estimation of a simplified version of the Smets and Wouters (2007) model. In this comment we examine their conclusions by applying a different methodology to the same model. Our results call into question most of the conclusions of Canova et al. (2014). |
Keywords: | DSGE models, Observables, Identification, Information matrix, Cramér-Rao lower bounds |
JEL: | C1 C9 E32 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:72870&r=ecm |
By: | Stephen J. Redding; David E. Weinstein |
Abstract: | The measurement of price changes, economic welfare, and demand parameters is currently based on three disjoint approaches: macroeconomic models derived from time-invariant utility functions, microeconomic estimation based on time-varying utility (demand) systems, and actual price and real output data constructed using formulas that differ from either approach. The inconsistencies are so deep that the same assumptions that form the foundation of demand-system estimation can be used to prove that standard price indexes are incorrect, and the assumptions underlying standard exact and superlative price indexes invalidate demand-system estimation. In other words, we show that extant micro and macro welfare estimates are biased and inconsistent with each other as well as the data. We develop a unified approach to demand and price measurement that exactly rationalizes observed micro data on prices and expenditure shares while permitting exact aggregation and meaningful macro comparisons of welfare over time. We show that all standard price indexes are special cases of our approach for particular values of the elasticity of substitution, constant preferences for each good, and a constant set of goods. In contrast to these standard index numbers, our approach allows us to compute changes in the cost of living that take into account both changes in the preferences for individual goods and the entry and exit of goods over time. Using barcode data for the U.S. consumer goods industry, we show that allowing for the entry and exit of products, changing preferences for individual goods, and a value for the elasticity of substitution estimated from the data yields substantially different conclusions for changes in the cost of living from standard index numbers. |
Keywords: | elasticity of substitution, price index, consumer valuation bias, new goods, welfare |
JEL: | D11 D12 E01 E31 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:cep:cepdps:dp1445&r=ecm |
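For reference, the standard CES building blocks the abstract above contrasts with: a Sato-Vartia price index over common goods multiplied by the Feenstra (1994) entry/exit adjustment. The numbers are made up; the paper's unified approach additionally allows preferences for individual goods to change.

```python
import numpy as np

def sato_vartia(p0, p1, s0, s1):
    """Sato-Vartia price index over goods available in both periods.
    (Weights are undefined when s1 == s0; a full implementation
    handles that limiting case.)"""
    w = (s1 - s0) / (np.log(s1) - np.log(s0))
    w = w / w.sum()
    return np.exp((w * np.log(p1 / p0)).sum())

def variety_adjustment(lam0, lam1, sigma):
    """Feenstra (1994) entry/exit adjustment: lam_t is the expenditure
    share on common goods in period t, sigma the elasticity of subst."""
    return (lam1 / lam0) ** (1.0 / (sigma - 1.0))

# two common goods plus entry/exit captured by the common-goods shares
p0, p1 = np.array([1.0, 2.0]), np.array([1.1, 1.9])
s0, s1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])
print(sato_vartia(p0, p1, s0, s1) * variety_adjustment(0.9, 0.95, sigma=4.0))
```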
By: | Ying Chen; Wolfgang K. Härdle; Wee Song Chua |
Abstract: | The limit order book contains comprehensive information on liquidity on the bid and ask sides. We propose a Vector Functional AutoRegressive (VFAR) model to describe the dynamics of the limit order book and demand curves and utilize the fitted model to predict the joint evolution of the liquidity demand and supply curves. In the VFAR framework, we derive a closed-form maximum likelihood estimator under sieves and provide the asymptotic consistency of the estimator. In an application to limit order book records of 12 NASDAQ stocks traded from 2 Jan 2015 to 6 Mar 2015, the VFAR model shows strong predictability in liquidity curves, with R2 values as high as 98.5 percent for in-sample estimation and 98.2 percent in out-of-sample forecast experiments. It produces accurate 5-, 25- and 50-minute forecasts, with root mean squared error as low as 0.09 to 0.58 and mean absolute percentage error as low as 0.3 to 4.5 percent. |
Keywords: | Limit order book, Liquidity risk, multiple functional time series |
JEL: | C13 C32 C53 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-025&r=ecm |
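A bare-bones sketch of a functional autoregression in the spirit of the paper above: project each curve onto a small polynomial basis, fit a VAR(1) on the basis coefficients by least squares, and map the forecast back to a curve. The basis, dimensions, and least-squares fit are our simplifications of the VFAR/sieve machinery.

```python
import numpy as np

rng = np.random.default_rng(9)

# Curves observed on a grid; each curve lives in a 4-dim polynomial basis.
grid = np.linspace(0, 1, 50)
basis = np.column_stack([np.ones_like(grid), grid, grid**2, grid**3])

# Fake curves whose basis coefficients follow a stationary VAR(1).
T, k = 300, basis.shape[1]
A = 0.5 * np.eye(k)
C = np.empty((T, k))
C[0] = rng.normal(size=k)
for t in range(1, T):
    C[t] = A @ C[t - 1] + 0.1 * rng.normal(size=k)
curves = C @ basis.T

# Recover coefficients by least-squares projection, then fit the VAR(1).
C_hat = curves @ basis @ np.linalg.inv(basis.T @ basis)
A_hat = np.linalg.lstsq(C_hat[:-1], C_hat[1:], rcond=None)[0].T
forecast_curve = (A_hat @ C_hat[-1]) @ basis.T
print("one-step-ahead curve forecast, first 5 points:", forecast_curve[:5])
```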
By: | Lunsford, Kurt Graden (Federal Reserve Bank of Cleveland); Jentsch, Carsen (University of Mannheim) |
Abstract: | Proxy structural vector autoregressions (SVARs) identify structural shocks in vector autoregressions (VARs) with external proxy variables that are correlated with the structural shocks of interest but uncorrelated with other structural shocks. We provide asymptotic theory for proxy SVARs when the VAR innovations and proxy variables are jointly α-mixing. We also prove the asymptotic validity of a residual-based moving block bootstrap (MBB) for inference on statistics that depend jointly on estimators for the VAR coefficients and for covariances of the VAR innovations and proxy variables. These statistics include structural impulse response functions (IRFs). Conversely, wild bootstraps are invalid, even when innovations and proxy variables are either independent and identically distributed or martingale difference sequences, and simulations show that their coverage rates for IRFs can be badly mis-sized. Using the MBB to re-estimate confidence intervals for the IRFs in Mertens and Ravn (2013), we show that inferences cannot be made about the effects of tax changes on output, labor, or investment. |
Keywords: | fiscal policy; mixing; residual-based moving block bootstrap; structural vector autoregression; tax shocks; wild bootstrap; |
JEL: | C15 C32 E62 H24 H25 H3 H31 |
Date: | 2016–07–19 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwp:1619&r=ecm |
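A minimal residual-based moving block bootstrap on a univariate AR(1), as a stand-in for the proxy-SVAR setting above: overlapping blocks of residuals are resampled so that serial dependence within blocks is preserved, which a wild bootstrap would not preserve.

```python
import numpy as np

rng = np.random.default_rng(10)

def moving_block_resample(u, block_len, rng):
    """Resample overlapping blocks of (centered) residuals."""
    n = len(u)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    ub = np.concatenate([u[s:s + block_len] for s in starts])[:n]
    return ub - ub.mean()

# AR(1) example with heavy-tailed innovations
y = np.zeros(500)
e = rng.standard_t(5, size=500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + e[t]
phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
resid = y[1:] - phi_hat * y[:-1]

phis = []
for _ in range(500):
    ub = moving_block_resample(resid, block_len=12, rng=rng)
    yb = np.zeros(len(ub) + 1)
    for t in range(1, len(yb)):
        yb[t] = phi_hat * yb[t - 1] + ub[t - 1]
    phis.append((yb[:-1] @ yb[1:]) / (yb[:-1] @ yb[:-1]))
print("95% MBB interval for phi:", np.percentile(phis, [2.5, 97.5]))
```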
By: | Stephen J. Redding; David E. Weinstein |
Abstract: | The measurement of price changes, economic welfare, and demand parameters is currently based on three disjoint approaches: macroeconomic models derived from time-invariant utility functions, microeconomic estimation based on time-varying utility (demand) systems, and actual price and real output data constructed using formulas that differ from either approach. The inconsistencies are so deep that the same assumptions that form the foundation of demand-system estimation can be used to prove that standard price indexes are incorrect, and the assumptions underlying standard exact and superlative price indexes invalidate demand-system estimation. In other words, we show that extant micro and macro welfare estimates are biased and inconsistent with each other as well as the data. We develop a unified approach that exactly rationalizes observed micro data on prices and expenditure shares while permitting exact aggregation and meaningful macro comparisons of welfare over time. Using barcode data for the U.S. consumer goods industry, we show that allowing for the entry and exit of products, changing preferences for individual goods, and a value for the elasticity of substitution estimated from the data yields substantially different conclusions for changes in the cost of living from standard index numbers. |
JEL: | D11 D12 E01 E31 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:22479&r=ecm |
By: | Søren Johansen (University of Copenhagen and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES) |
Abstract: | In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that the associated tests are chi-squared distributed. |
Keywords: | Additive formulation, cointegration, deterministic terms, extended model, likelihood inference, VAR model |
JEL: | C32 |
Date: | 2016–07–24 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2016-22&r=ecm |
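A compact sketch of the reduced rank regression step for a CVAR without deterministic terms: form the product-moment matrices and solve the eigenvalue problem whose leading eigenvectors estimate the cointegrating vectors (Johansen's procedure). The extended model with deterministic regressors adds terms to this regression; the DGP below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulate a 3-dim CVAR(1) with one cointegrating relation x1 - x2.
T, k = 500, 3
beta_true = np.array([[1.0], [-1.0], [0.0]])
alpha_true = np.array([[-0.3], [0.1], [0.0]])
X = np.zeros((T, k))
for t in range(1, T):
    X[t] = X[t - 1] + (alpha_true @ beta_true.T @ X[t - 1]) \
           + 0.5 * rng.normal(size=k)

# Reduced rank regression: eigenproblem for S11^{-1} S10 S00^{-1} S01.
dX, Xlag = np.diff(X, axis=0), X[:-1]
S00 = dX.T @ dX / (T - 1)
S01 = dX.T @ Xlag / (T - 1)
S11 = Xlag.T @ Xlag / (T - 1)
M = np.linalg.solve(S11, S01.T) @ np.linalg.solve(S00, S01)
eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(eigvals.real)[::-1]
beta_hat = eigvecs.real[:, order[:1]]        # leading eigenvector
print("estimated cointegrating vector (up to scale):",
      np.round(beta_hat[:, 0] / beta_hat[0, 0], 2))
```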