
on Econometrics 
By:  Monica Billio; Roberto Casarin; Luca Rossini 
Abstract:  Seemingly unrelated regression (SUR) models are used in studying the interactions among economic variables of interest. In a high-dimensional setting, and when applied to large panels of time series, these models have a large number of parameters to be estimated and suffer from inferential problems. We propose a Bayesian nonparametric hierarchical model for multivariate time series in order to avoid the overparametrization and overfitting issues and to allow for shrinkage toward multiple prior means with unknown location, scale and shape parameters. We propose a two-stage hierarchical prior distribution. The first stage of the hierarchy consists of a lasso conditionally independent prior distribution of the Normal-Gamma family for the SUR coefficients. The second stage is given by a random mixture distribution for the Normal-Gamma hyperparameters, which allows for parameter parsimony through two components. The first is a random Dirac point-mass distribution, which induces sparsity in the SUR coefficients; the second is a Dirichlet process prior, which allows for clustering of the SUR coefficients. We provide a Gibbs sampler for posterior approximation based on the introduction of auxiliary variables. Simulated examples show the efficiency of the proposed approach. We study the effectiveness of our model and inference approach with an application to macroeconomics. 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1608.02740&r=ecm 
By:  Beltran, Daniel O.; Draper, David 
Abstract:  Central banks have long used dynamic stochastic general equilibrium (DSGE) models, which are typically estimated using Bayesian techniques, to inform key policy decisions. This paper offers an empirical strategy that quantifies the information content of the data relative to that of the prior distribution. Using an off-the-shelf DSGE model applied to quarterly Euro Area data from 1970:3 to 2009:4, we show how Monte Carlo simulations can reveal parameters for which the model's structure obscures identification. By integrating out components of the likelihood function and conducting a Bayesian sensitivity analysis, we uncover parameters that are weakly informed by the data. The weak identification of some key structural parameters in our comparatively simple model should raise a red flag to researchers trying to draw valid inferences from, and to base policy upon, complex large-scale models featuring many parameters. 
Keywords:  Bayesian estimation ; Econometric modeling ; Kalman filter ; Likelihood ; Local identification ; Euro Area ; MCMC ; Policy-relevant parameters ; Prior-versus-posterior comparison ; Sensitivity analysis 
JEL:  C11 C18 F41 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:1175&r=ecm 
By:  Veiga, Helena; Ruiz, Esther; González-Rivera, Gloria; Gonçalves Mazzeu, Joao Henrique 
Abstract:  We propose an extension of the Generalized Autocontour (G-ACR) tests (González-Rivera and Sun, 2015) for dynamic specifications of conditional densities (in-sample) and of forecast densities (out-of-sample). The new tests are based on probability integral transforms (PITs) computed from bootstrap conditional densities, so that no assumption on the functional form of the density is needed. The proposed bootstrap procedure generates predictive densities that incorporate parameter uncertainty. In addition, the bootstrapped G-ACR tests enjoy standard asymptotic distributions. This approach is particularly useful to evaluate multi-step predictive densities whose functional form is unknown or difficult to obtain, even in cases where the conditional density of the model is known. 
Keywords:  PIT; Parameter Uncertainty; Model Evaluation; Distribution Uncertainty 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:23457&r=ecm 
By:  Rubi Tonantzin Gutiérrez Villanueva (Division of Economics, CIDE) 
Abstract:  This study has two major purposes: (1) to identify causal relations in time series using Granger causality tests and Convergent Cross Mapping (Sugihara et al., 2012) and (2) to investigate the relation between economic growth and government expenditure in Mexico. Convergent Cross Mapping (CCM) has shown high potential for causal inference in complex and nonlinear systems and has been used as an alternative approach to Granger causality. However, we show that CCM fails to infer the direction of causality in linear time series and in time series with structural breaks. In contrast, we demonstrate that the Toda-Yamamoto test (Toda and Yamamoto, 1995) successfully detects causal relations in linear systems and in systems with structural breaks. Finally, we evaluate the causal relation between government expenditure and economic growth in Mexico and then assess the validity of Wagner's law and the Keynesian view. The empirical results suggest that Wagner's law holds for Mexico for the period 1980 to 2010. 
Keywords:  Causality, Convergent Cross Mapping, Granger Causality Tests, Government Expenditure, Economic Growth, Mexico. 
JEL:  C12 C15 C18 C22 
Date:  2016–06 
URL:  http://d.repec.org/n?u=RePEc:emc:thgrad:tesg008&r=ecm 
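The Granger side of the comparison above can be sketched with a minimal F-test asking whether lags of x improve the prediction of y. This is an illustrative implementation on simulated data, not the test battery used in the paper:

```python
import numpy as np

def granger_f_stat(x, y, lags=2):
    """F-statistic for the null that lags of x do not help predict y,
    comparing a restricted (y lags only) and an unrestricted regression."""
    n = len(y)
    rows = n - lags
    Y = y[lags:]
    y_lags = [y[lags - k:n - k] for k in range(1, lags + 1)]
    x_lags = [x[lags - k:n - k] for k in range(1, lags + 1)]
    Xr = np.column_stack([np.ones(rows)] + y_lags)   # restricted model
    Xf = np.column_stack([Xr] + x_lags)              # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = rows - Xf.shape[1]
    return (rss(Xr) - rss(Xf)) / lags / (rss(Xf) / df)

# Simulated system where x drives y but not vice versa
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

In this design the F-statistic for "x Granger-causes y" is large while the reverse statistic stays near its null distribution, which is the asymmetry such tests exploit.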
By:  Weihua An (Departments of Sociology and Statistics, Indiana University) 
Abstract:  Currently, panel data analysis largely relies on parametric models (such as random effects and fixed effects models). These models make strong assumptions in order to draw causal inference, while in reality any of these assumptions may fail to hold. Compared to parametric models, matching does not make strong parametric assumptions and also helps provide focused inference on the effect of a particular cause. However, matching has typically been used in cross-sectional data analysis. In this paper, we extend matching to panel data analysis. In the spirit of the difference-in-differences method, we first difference the outcomes to remove the fixed effects. Then we apply matching to the differenced outcomes at each wave (except the first one). The results can be used to examine whether treatment effects vary across time. The estimates from the separate waves can also be combined to provide an overall estimate of the treatment effects. In doing so, we present a variance estimator for the overall treatment effects that can account for complicated sequential dependence in the data. We demonstrate the method through empirical examples and show its efficacy in comparison to previous methods. We also outline a Stata add-on, "DIDMatch", that we are creating to implement the method. 
Date:  2016–08–10 
URL:  http://d.repec.org/n?u=RePEc:boc:scon16:21&r=ecm 
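A minimal sketch of the difference-then-match idea: first-difference the panel to remove unit fixed effects, then nearest-neighbor match treated to control units on a covariate at each wave. The data-generating process and the matching rule are illustrative assumptions, not the authors' DIDMatch implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, waves = 200, 3
x = rng.standard_normal(n)                    # time-invariant covariate
alpha = rng.standard_normal(n)                # unit fixed effects
treat = (x + rng.standard_normal(n) > 0).astype(int)
# Outcome: fixed effect + covariate trend + treatment effect of 2.0 from wave 1 on
y = np.stack([alpha + 0.5 * x * t + 2.0 * treat * (t >= 1)
              + 0.2 * rng.standard_normal(n) for t in range(waves)])

def did_match(y, x, treat):
    """First-difference outcomes to remove fixed effects, then nearest-
    neighbor match each treated unit to a control on x at each wave."""
    dy = np.diff(y, axis=0)                   # differencing removes alpha
    controls = np.where(treat == 0)[0]
    effects = []
    for w in range(dy.shape[0]):
        gaps = [dy[w, i] - dy[w, controls[np.argmin(np.abs(x[controls] - x[i]))]]
                for i in np.where(treat == 1)[0]]
        effects.append(float(np.mean(gaps)))
    return effects

eff = did_match(y, x, treat)   # per-wave treatment effect estimates
```

Here the effect enters between waves 0 and 1 and is constant afterwards, so the first differenced-wave estimate is near 2.0 and the second near 0 — exactly the kind of across-time variation the paper's per-wave estimates are meant to reveal.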
By:  Kun Ho Kim; Wolfgang K. Härdle; ShihKang Chao 
Abstract:  In this paper, we analyze the nonparametric part of a partially linear model when the covariates in the parametric and nonparametric parts are subject to measurement errors. Based on a two-stage semiparametric estimate, we construct a uniform confidence surface of the multivariate function for simultaneous inference. The developed methodology is applied to perform inference for the U.S. gasoline demand, where the income and price variables are measured with errors. The empirical results strongly suggest that the linearity of the U.S. gasoline demand is rejected. 
Keywords:  Measurement error, Partially linear model, Regression calibration, Nonparametric function, Semiparametric regression, Uniform confidence surface, Simultaneous inference, U.S. Gasoline demand, Nonlinearity 
JEL:  C12 C13 C14 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016024&r=ecm 
By:  Francesco Agostinelli; Matthew Wiswall 
Abstract:  A recent and growing area of research applies latent factor models to study the development of children's skills. Some normalization is required in these models because the latent variables have no natural units and no known location or scale. We show that the standard practice of “renormalizing” the latent variables each period is overidentifying and restrictive when used simultaneously with common skill production technologies that already have a known location and scale (KLS). The KLS class of functions includes the Constant Elasticity of Substitution (CES) production technologies several papers use in their estimation. We show that these KLS production functions are already restricted in the sense that their location and scale is known (does not need to be identified and estimated), and therefore further restricting location and scale by renormalizing the model each period is unnecessary and overidentifying. The most common type of renormalization restriction imposes that latent skills are mean log-stationary, which restricts the class of CES technologies to the log-linear (Cobb-Douglas) subclass and does not allow for more general forms of complementarities. Even when a mean log-stationary model is correctly assumed, renormalization can further bias the estimates of the skill production function. We support our analytic results through a series of Monte Carlo exercises. We show that in typical cases, estimators based on “renormalizations” are biased, and simple alternative estimators, which do not impose these restrictions, can recover the underlying primitive parameters of the production technology. 
JEL:  C38 J13 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:22441&r=ecm 
By:  Victor Chernozhukov (Institute for Fiscal Studies and MIT); Juan Carlos Escanciano (Institute for Fiscal Studies); Hidehiko Ichimura (Institute for Fiscal Studies and University of Tokyo); Whitney K. Newey (Institute for Fiscal Studies) 
Abstract:  This paper shows how to construct locally robust semiparametric GMM estimators, meaning, equivalently, that the moment conditions have zero derivative with respect to the first step and that the first step does not affect the asymptotic variance. They are constructed by adding to the moment functions the adjustment term for first-step estimation. Locally robust estimators have several advantages. They are vital for valid inference with machine learning in the first step, see Belloni et al. (2012, 2014), and are less sensitive to the specification of the first step. They are doubly robust for affine moment functions, where moment conditions continue to hold when one first-step component is incorrect. Locally robust moment conditions also have smaller bias that is flatter as a function of first-step smoothing, leading to improved small-sample properties. Series first-step estimators confer local robustness on any moment conditions and are doubly robust for affine moments, in the direction of the series approximation. Many new locally and doubly robust estimators are given here, including for economic structural models. We give simple asymptotic theory for estimators that use cross-fitting in the first step, including machine learning. 
Keywords:  Local robustness, double robustness, semiparametric estimation, bias, GMM 
JEL:  C13 C14 C21 D24 
Date:  2016–08–02 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:31/16&r=ecm 
By:  Escribano, Álvaro; Blazsek, Szabolcs 
Abstract:  This paper suggests new Dynamic Conditional Score (DCS) count panel data models. We compare the statistical performance of a static model, a finite distributed lag model, an exponential feedback model, and different DCS count panel data models. For DCS we consider random walk and quasi-autoregressive (QAR) formulations of the dynamics. We use panel data for a large cross section of United States firms for the period 1979 to 2000. We estimate the models using the Poisson quasi-maximum likelihood estimator with fixed effects. The estimation results and diagnostic tests suggest that the statistical performance of DCS-QAR is superior to that of the alternative models. 
Keywords:  quasi-maximum likelihood; dynamic conditional score; count panel data; research and development 
JEL:  O3 C52 C51 C35 C33 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:23458&r=ecm 
By:  Mikkel Bennedsen (Aarhus University and CREATES) 
Abstract:  Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on the fractal index, a, of a time series. Our setup encompasses a large class of Gaussian processes and we show how to extend it to a large class of non-Gaussian models as well. It is proved that the asymptotic distribution of the estimator of a does not depend on the specifics of the data generating process for the observations, but only on the value of a and a “heteroscedasticity” factor. Using this, we propose a simulation-based approach to inference, which is easily implemented and is valid more generally than asymptotic analysis. We detail how the methods can be applied to test whether a stochastic process is a non-semimartingale. Finally, the methods are illustrated in two empirical applications motivated from finance. We study time series of log-prices and log-volatility from 29 individual US stocks; no evidence of non-semimartingality in asset prices is found, but we do find evidence of non-semimartingality in volatility. This confirms a recently proposed conjecture that stochastic volatility processes of financial assets are rough (Gatheral et al., 2014). 
Keywords:  Fractal index, Monte Carlo simulation, roughness, semimartingality, fractional Brownian motion, stochastic volatility 
JEL:  C12 C22 C63 G12 
MSC 2010:  60G10, 60G15, 60G17, 60G22, 62M07, 62M09, 65C05 
Date:  2016–08–04 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201621&r=ecm 
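The scaling idea behind fractal-index estimation can be sketched with a log-log regression of the empirical variogram on the lag. This assumes the common parameterization E|X(t+h) - X(t)|^2 ∝ h^(2a+1), under which Brownian motion has a = 0; it is an illustration, not the paper's exact estimator or its simulation-based inference:

```python
import numpy as np

def fractal_index(x, max_lag=5):
    """Estimate the fractal index a from variogram scaling: a log-log
    regression of E|X(t+h) - X(t)|^2 on h has slope 2a + 1."""
    lags = np.arange(1, max_lag + 1)
    gamma = [np.mean((x[h:] - x[:-h]) ** 2) for h in lags]
    slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
    return (slope - 1) / 2

rng = np.random.default_rng(2)
bm = np.cumsum(rng.standard_normal(100_000))  # Brownian motion: a = 0
a_hat = fractal_index(bm)
```

Negative estimates of a on volatility series are what "rough volatility" refers to: paths rougher than Brownian motion.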
By:  Iskrev, Nikolay; Ritto, Joao 
Abstract:  In a recent article, Canova et al. (2014) study the optimal choice of variables to use in the estimation of a simplified version of the Smets and Wouters (2007) model. In this comment we examine their conclusions by applying a different methodology to the same model. Our results call into question most of the conclusions of Canova et al. (2014). 
Keywords:  DSGE models, Observables, Identification, Information matrix, CramérRao lower bounds 
JEL:  C1 C9 E32 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:72870&r=ecm 
By:  Stephen J. Redding; David E. Weinstein 
Abstract:  The measurement of price changes, economic welfare, and demand parameters is currently based on three disjoint approaches: macroeconomic models derived from time-invariant utility functions, microeconomic estimation based on time-varying utility (demand) systems, and actual price and real output data constructed using formulas that differ from either approach. The inconsistencies are so deep that the same assumptions that form the foundation of demand-system estimation can be used to prove that standard price indexes are incorrect, and the assumptions underlying standard exact and superlative price indexes invalidate demand-system estimation. In other words, we show that extant micro and macro welfare estimates are biased and inconsistent with each other as well as the data. We develop a unified approach to demand and price measurement that exactly rationalizes observed micro data on prices and expenditure shares while permitting exact aggregation and meaningful macro comparisons of welfare over time. We show that all standard price indexes are special cases of our approach for particular values of the elasticity of substitution, constant preferences for each good, and a constant set of goods. In contrast to these standard index numbers, our approach allows us to compute changes in the cost of living that take into account both changes in the preferences for individual goods and the entry and exit of goods over time. Using barcode data for the U.S. consumer goods industry, we show that allowing for the entry and exit of products, changing preferences for individual goods, and a value for the elasticity of substitution estimated from the data yields substantially different conclusions for changes in the cost of living from standard index numbers. 
Keywords:  elasticity of substitution, price index, consumer valuation bias, new goods, welfare 
JEL:  D11 D12 E01 E31 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:cep:cepdps:dp1445&r=ecm 
By:  Ying Chen; Wolfgang K. Härdle; Wee Song Chua 
Abstract:  The limit order book contains comprehensive information about liquidity on the bid and ask sides. We propose a Vector Functional AutoRegressive (VFAR) model to describe the dynamics of the limit order book and demand curves and utilize the fitted model to predict the joint evolution of the liquidity demand and supply curves. In the VFAR framework, we derive a closed-form maximum likelihood estimator under sieves and provide the asymptotic consistency of the estimator. In an application to limit order book records of 12 stocks traded on NASDAQ from 2 Jan 2015 to 6 Mar 2015, the VFAR model shows strong predictability for the liquidity curves, with R2 values as high as 98.5 percent for in-sample estimation and 98.2 percent in out-of-sample forecast experiments. It produces accurate 5-, 25- and 50-minute forecasts, with root mean squared error as low as 0.09 to 0.58 and mean absolute percentage error as low as 0.3 to 4.5 percent. 
Keywords:  Limit order book, Liquidity risk, multiple functional time series 
JEL:  C13 C32 C53 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016025&r=ecm 
By:  Lunsford, Kurt Graden (Federal Reserve Bank of Cleveland); Jentsch, Carsen (University of Mannheim) 
Abstract:  Proxy structural vector autoregressions (SVARs) identify structural shocks in vector autoregressions (VARs) with external proxy variables that are correlated with the structural shocks of interest but uncorrelated with other structural shocks. We provide asymptotic theory for proxy SVARs when the VAR innovations and proxy variables are jointly α-mixing. We also prove the asymptotic validity of a residual-based moving block bootstrap (MBB) for inference on statistics that depend jointly on estimators for the VAR coefficients and for covariances of the VAR innovations and proxy variables. These statistics include structural impulse response functions (IRFs). Conversely, wild bootstraps are invalid, even when innovations and proxy variables are either independent and identically distributed or martingale difference sequences, and simulations show that their coverage rates for IRFs can be badly mis-sized. Using the MBB to re-estimate confidence intervals for the IRFs in Mertens and Ravn (2013), we show that inferences cannot be made about the effects of tax changes on output, labor, or investment. 
Keywords:  fiscal policy; mixing; residual-based moving block bootstrap; structural vector autoregression; tax shocks; wild bootstrap 
JEL:  C15 C32 E62 H24 H25 H3 H31 
Date:  2016–07–19 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwp:1619&r=ecm 
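The moving block bootstrap can be illustrated on a single persistent series standing in for VAR residuals: overlapping blocks are resampled so that serial dependence within a block is preserved. The AR design and block length below are illustrative choices, not the paper's residual-based procedure for proxy SVARs:

```python
import numpy as np

def moving_block_bootstrap(u, block_len, rng):
    """Draw one bootstrap series by concatenating randomly chosen
    overlapping blocks of length block_len, then trimming to len(u)."""
    n = len(u)
    n_blocks = -(-n // block_len)                 # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([u[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(3)
# Persistent AR(1) series: positive autocorrelation inflates the
# variance of the sample mean relative to the iid formula
u = np.empty(2000)
u[0] = 0.0
for t in range(1, 2000):
    u[t] = 0.6 * u[t - 1] + rng.standard_normal()

boot_means = [moving_block_bootstrap(u, 50, rng).mean() for _ in range(500)]
mbb_se = float(np.std(boot_means))            # dependence-aware SE
iid_se = float(u.std() / np.sqrt(len(u)))     # ignores autocorrelation
```

Because the blocks carry the autocorrelation along, the MBB standard error exceeds the naive iid one — the same mechanism that lets the MBB deliver valid IRF confidence intervals where a wild bootstrap, which destroys this dependence, does not.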
By:  Stephen J. Redding; David E. Weinstein 
Abstract:  The measurement of price changes, economic welfare, and demand parameters is currently based on three disjoint approaches: macroeconomic models derived from time-invariant utility functions, microeconomic estimation based on time-varying utility (demand) systems, and actual price and real output data constructed using formulas that differ from either approach. The inconsistencies are so deep that the same assumptions that form the foundation of demand-system estimation can be used to prove that standard price indexes are incorrect, and the assumptions underlying standard exact and superlative price indexes invalidate demand-system estimation. In other words, we show that extant micro and macro welfare estimates are biased and inconsistent with each other as well as the data. We develop a unified approach that exactly rationalizes observed micro data on prices and expenditure shares while permitting exact aggregation and meaningful macro comparisons of welfare over time. Using barcode data for the U.S. consumer goods industry, we show that allowing for the entry and exit of products, changing preferences for individual goods, and a value for the elasticity of substitution estimated from the data yields substantially different conclusions for changes in the cost of living from standard index numbers. 
JEL:  D11 D12 E01 E31 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:22479&r=ecm 
By:  Søren Johansen (University of Copenhagen and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES) 
Abstract:  In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed. 
Keywords:  Additive formulation, cointegration, deterministic terms, extended model, likelihood inference, VAR model 
JEL:  C32 
Date:  2016–07–24 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201622&r=ecm 
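Reduced rank regression, the estimation device mentioned for the extended model, can be sketched for a plain multivariate regression with identity weighting on simulated data; the CVAR likelihood version in the paper is more involved:

```python
import numpy as np

def reduced_rank_regression(Y, X, rank):
    """Rank-restricted least squares: take the OLS coefficient matrix
    and project it onto the leading right singular vectors of the
    fitted values, enforcing the rank constraint."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                  # leading directions in outcome space
    return B_ols @ V @ V.T           # rank-constrained coefficient matrix

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 4))
B_true = np.outer(rng.standard_normal(4), rng.standard_normal(3))  # rank 1
Y = X @ B_true + 0.1 * rng.standard_normal((500, 3))
B_rr = reduced_rank_regression(Y, X, rank=1)
```

The rank constraint is the regression analogue of the cointegrating rank: the coefficient matrix factors into a small number of common directions, which is exactly what the rank tests in the abstract decide on.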