
on Econometrics 
By:  Li, Yong (Renmin University of China); Yu, Jun (School of Economics, Singapore Management University); Zeng, Tao (Zhejiang University) 
Abstract:  Deviance information criterion (DIC) has been widely used for Bayesian model comparison, especially after Markov chain Monte Carlo (MCMC) is used to estimate candidate models. This paper studies the problem of using DIC to compare latent variable models after the models are estimated by MCMC together with the data augmentation technique. Our contributions are twofold. First, we show that when MCMC is used with data augmentation, it undermines the theoretical underpinnings of DIC. As a result, the widely used way of constructing DIC from the conditional likelihood, which treats latent variables as parameters and facilitates computation, should not be used. Second, we propose two versions of integrated DIC (IDIC) to compare latent variable models without treating latent variables as parameters. The large sample properties of IDIC are studied and an asymptotic justification of IDIC is provided. Some popular algorithms, such as the EM, Kalman filter, and particle filter algorithms, are introduced to compute IDIC for latent variable models. IDIC is illustrated using asset pricing models, dynamic factor models, and stochastic volatility models. 
Keywords:  AIC; DIC; Latent variable models; Markov Chain Monte Carlo. 
JEL:  C11 C12 G12 
Date:  2018–02–25 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2018_006&r=ecm 
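The DIC construction at issue can be sketched numerically. Below is a minimal, illustrative computation of the standard decomposition DIC = D̄ + p_D for a toy normal-mean model with no latent variables; all data and draws are made up. The paper's point is that substituting a conditional likelihood over augmented latent variables into this construction breaks its theoretical justification, which the IDIC proposal avoids.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=200)
n = len(y)

# Posterior of mu under a flat prior is N(ybar, 1/n); draw MCMC-style samples.
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=5000)

def deviance(mu):
    # D(mu) = -2 * log-likelihood of the N(mu, 1) model
    return np.sum((y - mu) ** 2) + n * np.log(2 * np.pi)

D_bar = np.mean([deviance(m) for m in mu_draws])   # posterior mean deviance
D_at_mean = deviance(mu_draws.mean())              # deviance at posterior mean
p_D = D_bar - D_at_mean                            # effective number of parameters
DIC = D_bar + p_D
```

With one free parameter, p_D should come out close to 1, up to Monte Carlo error.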
By:  Frank Windmeijer 
Abstract:  This paper develops the links between overidentification tests, underidentification tests, score tests and the Cragg-Donald (1993, 1997) and Kleibergen-Paap (2006) rank tests in linear instrumental variables (IV) models. This general framework shows that standard underidentification tests are (robust) score tests for overidentification in an auxiliary linear model, x_1 = X_2 δ + ε_1, where X = [x_1 X_2] are the endogenous explanatory variables in the original model, estimated by IV estimation methods using the same instruments as for the original model. This simple structure makes it possible to establish valid robust underidentification tests for linear IV models where these have not been proposed or used before, such as clustered dynamic panel data models estimated by GMM. The framework also applies to general tests of rank, including the I test of Arellano, Hansen and Sentana (2012), and, outside the IV setting, to tests of rank of parameter matrices estimated by OLS. Invariant rank tests are based on LIML or continuously updated GMM estimators of the first-stage parameters. This insight leads to the proposal of a new two-step invariant asymptotically efficient GMM estimator, and a new iterated GMM estimator that converges to the continuously updated GMM estimator. 
Keywords:  Overidentification, Underidentification, Rank tests, Dynamic Panel Data Models, Asset Pricing Models. 
JEL:  C12 C13 C23 C26 
Date:  2018–03–19 
URL:  http://d.repec.org/n?u=RePEc:bri:uobdis:18/696&r=ecm 
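The central observation, that an underidentification test amounts to an overidentification test in the auxiliary regression x_1 = X_2 δ + ε_1, can be illustrated by simulation. The sketch below uses made-up data with a deliberately rank-deficient first stage and a plain (non-robust) Sargan statistic, rather than the robust score versions the paper develops.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
Z = rng.normal(size=(n, 3))                 # instruments
pi2 = np.array([1.0, 0.5, -0.5])
v = rng.normal(size=(n, 2))
x2 = Z @ pi2 + v[:, 0]
# Rank-deficient first stage: x1's reduced form is proportional to x2's.
x1 = 0.7 * (Z @ pi2) + v[:, 1]

X2 = x2[:, None]
PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto the instruments

# 2SLS estimate of delta in the auxiliary regression x1 = X2*delta + e
delta = np.linalg.solve(X2.T @ PZ @ X2, X2.T @ PZ @ x1)
e = x1 - X2 @ delta

# Sargan overidentification statistic in the auxiliary model: small values
# are consistent with underidentification of the original model.
sargan = n * (e @ PZ @ e) / (e @ e)
```

Under the rank-deficient design above the statistic is approximately chi-squared with 2 degrees of freedom (3 instruments, 1 auxiliary regressor), so it stays small; a full-rank first stage would blow it up.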
By:  Nikolay Iskrev 
Abstract:  We propose two measures of the impact of calibration on the estimation of macroeconomic models. The first quantifies the amount of information introduced with respect to each estimated parameter as a result of fixing the value of one or more calibrated parameters. The second is a measure of the sensitivity of parameter estimates to perturbations in the calibration values. The purpose of the measures is to show researchers how much and in what way calibration affects their estimation results – by shifting the location and reducing the spread of the marginal posterior distributions of the estimated parameters. This type of analysis is often appropriate since macroeconomists do not always agree on whether and how to calibrate structural parameters in macroeconomic models. The methodology is illustrated using the models estimated in Smets and Wouters (2007) and Schmitt-Grohé and Uribe (2012). 
Keywords:  DSGE models, information content, calibration 
JEL:  C32 C51 C52 E32 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:ise:remwps:wp0342018&r=ecm 
By:  Ryo Okui; Takahide Yanagi 
Abstract:  This paper proposes nonparametric kernel-smoothing estimation for panel data to examine the degree of heterogeneity across cross-sectional units. Our procedure is model-free and easy to implement, and provides useful visual information, which enables us to understand intuitively the properties of heterogeneity. We first estimate the sample mean, autocovariances, and autocorrelations for each unit and then apply kernel smoothing to compute estimates of their density and cumulative distribution functions. The kernel estimators are consistent and asymptotically normal under double asymptotics, i.e., when both the cross-sectional and time series sample sizes tend to infinity. However, as these estimators exhibit biases due to the incidental parameter problem and the nonlinearity of the kernel function, we propose jackknife methods to alleviate the bias. We also develop bandwidth selection methods and bootstrap inferences based on the asymptotic properties. Lastly, we illustrate our procedure with Monte Carlo simulations and an empirical application to the dynamics of US prices. 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1802.08825&r=ecm 
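A stylized version of the procedure, on illustrative simulated data: estimate unit-specific sample means, kernel-smooth them into a density estimate, and apply a half-panel jackknife-style bias correction. The paper's actual jackknife and bandwidth constructions are more involved; this only sketches the idea.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 300, 100
# Heterogeneous panel: each unit i has its own mean mu_i ~ N(0, 1).
mu = rng.normal(0.0, 1.0, size=N)
y = mu[:, None] + rng.normal(0.0, 1.0, size=(N, T))

mu_hat = y.mean(axis=1)                     # unit-specific sample means

def kde(x_grid, data, h):
    """Gaussian kernel density estimate on a grid."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

h = 1.06 * mu_hat.std() * N ** (-1 / 5)     # rule-of-thumb bandwidth
grid = np.linspace(-3, 3, 61)
f_hat = kde(grid, mu_hat, h)

# Half-panel jackknife sketch: average the estimates from the two half panels
# and extrapolate to remove the leading incidental-parameter bias.
m1 = y[:, : T // 2].mean(axis=1)
m2 = y[:, T // 2 :].mean(axis=1)
f_half = 0.5 * (kde(grid, m1, h) + kde(grid, m2, h))
f_bc = 2 * f_hat - f_half
```

The uncorrected estimate targets the density of the estimated means, which is over-dispersed relative to the density of the true mu_i when T is small; the jackknife combination reduces that bias.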
By:  Tomás del Barrio Castro (Universitat de les Illes Balears); Paulo M.M. Rodrigues (Banco de Portugal); A. M. Robert Taylor (University of Essex) 
Abstract:  In this paper we investigate the implications that temporally aggregating, either by average sampling or systematic sampling, a seasonal process has on the integration properties of the resulting series at both the zero and seasonal frequencies. Our results extend the existing literature in three ways. First, they demonstrate the implications of temporal aggregation for a general seasonally integrated process with S seasons. Second, rather than only considering the aggregation of seasonal processes with exact unit roots at some or all of the zero and seasonal frequencies, we consider the case where these roots are local-to-unity (which includes exact unit roots as a special case) such that the original series is near-integrated at some or all of the zero and seasonal frequencies. These results show, among other things, that systematic sampling, although not average sampling, can impact the non-seasonal unit root properties of the data; for example, even where an exact zero frequency unit root holds in the original data it need not necessarily hold in the systematically sampled data. Moreover, the systematically sampled data could be near-integrated at the zero frequency even where the original data is not. Third, the implications of aggregation for the deterministic kernel of the series are explored. 
Keywords:  Aggregation, systematic sampling, average sampling, seasonal (near) unit roots, demodulation 
JEL:  C12 C22 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:ubi:deawps:86&r=ecm 
By:  Nikolay Iskrev 
Abstract:  Standard economic intuition suggests that asset prices are more sensitive to news than other economic aggregates. This has led many researchers to conclude that asset price data would be very useful for the estimation of business cycle models containing news shocks. This paper shows how to formally evaluate the information content of observed variables with respect to unobserved shocks in structural macroeconomic models. The proposed methodology is applied to two different real business cycle models with news shocks. The contribution of asset prices is found to be relatively small. The methodology is general and can be used to measure the informational importance of observables with respect to latent variables in DSGE models. Thus, it provides a framework for systematic treatment of such issues, which are usually discussed in an informal manner in the literature. 
Keywords:  DSGE models, News Shocks, Asset prices, Information, Identification 
JEL:  C32 C51 C52 E32 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:ise:remwps:wp0332018&r=ecm 
By:  Vikas Ramachandra 
Abstract:  In this paper, we propose deep learning techniques for econometrics, specifically for causal inference and for estimating individual as well as average treatment effects. The contribution of this paper is twofold: 1. For generalized neighbor matching to estimate individual and average treatment effects, we analyze the use of autoencoders for dimensionality reduction while maintaining the local neighborhood structure among the data points in the embedding space. This deep-learning-based technique is shown to perform better than simple k-nearest-neighbor matching for estimating treatment effects, especially when the data points have several features/covariates but reside in a low dimensional manifold in high dimensional space. We also observe better performance than manifold learning methods for neighbor matching. 2. Propensity score matching is one specific and popular way to perform matching in order to estimate average and individual treatment effects. We propose the use of deep neural networks (DNNs) for propensity score matching, and present a network called PropensityNet for this. This is a generalization of the logistic regression technique traditionally used to estimate propensity scores, and we show empirically that DNNs perform better than logistic regression at propensity score matching. Code for both methods will be made available shortly on GitHub at: https://github.com/vikas84bf 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.00149&r=ecm 
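The logistic-regression baseline that PropensityNet generalizes can be sketched in a few lines; the DNN variant itself is not reproduced here. Data, coefficients, and the plain gradient-ascent fit below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 1000, 3
X = rng.normal(size=(n, p))
true_beta = np.array([1.0, -1.0, 0.5])
p_treat = 1 / (1 + np.exp(-(X @ true_beta)))
D = rng.binomial(1, p_treat)                # treatment assignment

# Logistic regression by gradient ascent: the baseline model whose place a
# DNN ("PropensityNet") takes in the paper.
beta = np.zeros(p)
for _ in range(2000):
    score = 1 / (1 + np.exp(-(X @ beta)))
    beta += 0.1 * X.T @ (D - score) / n     # ascend the log-likelihood

e_hat = 1 / (1 + np.exp(-(X @ beta)))       # estimated propensity scores

# One-to-one nearest-neighbor matching on the propensity score (sketch).
treated = np.where(D == 1)[0]
control = np.where(D == 0)[0]
gaps = np.abs(e_hat[treated][:, None] - e_hat[control][None, :])
matches = control[np.argmin(gaps, axis=1)]
```

Each treated unit is paired with the control unit whose estimated propensity score is closest; outcome differences across these pairs would then estimate treatment effects.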
By:  Sam Asher; Paul Novosad; Charlie Rafkin 
Abstract:  A conditional expectation function (CEF) can at best be partially identified when the conditioning variable is interval censored. When the number of bins is small, existing methods often yield minimally informative bounds. We propose three innovations that make meaningful inference possible in interval data contexts. First, we prove novel nonparametric bounds for contexts where the distribution of the censored variable is known. Second, we show that a class of measures that describe the conditional mean across a fixed interval of the conditioning space can often be bounded tightly even when the CEF itself cannot. Third, we show that a constraint on CEF curvature can either tighten bounds or can substitute for the monotonicity assumption often made in interval data applications. We derive analytical bounds that use the first two innovations, and develop a numerical method to calculate bounds under the third. We show the performance of the method in simulations and then present two applications. First, we resolve a known problem in the estimation of mortality as a function of education: because individuals with high school or less are a smaller and thus more negatively selected group over time, estimates of their mortality change are likely to be biased. Our method makes it possible to hold education rank bins constant over time, revealing that current estimates of rising mortality for less educated women are biased upward in some cases by a factor of three. Second, we apply the method to the estimation of intergenerational mobility, where researchers frequently use coarsely measured education data in the many contexts where matched parent-child income data are unavailable. Conventional measures like the rank-rank correlation may be uninformative once interval censoring is taken into account; CEF interval-based measures of mobility are bounded tightly. 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1802.10490&r=ecm 
By:  Áureo de Paula (University College London); Imran Rasul (University College London); Pedro Souza (PUC-Rio) 
Abstract:  It is almost self-evident that social interactions can determine economic behavior and outcomes. Yet, information on social ties does not exist in most publicly available and widely used datasets. We present methods to recover information on the entire structure of social networks from observational panel data that contains no information on social ties between individuals. In the context of a canonical social interactions model, we provide sufficient conditions under which the social interactions matrix, endogenous and exogenous social effect parameters are all globally identified. We describe how high dimensional estimation techniques can be used to estimate the model based on the Adaptive Elastic Net GMM method. We showcase our method in Monte Carlo simulations using two stylized and two real-world network structures. Finally, we employ our method to study tax competition across US states. We find the identified network structure of tax competition differs markedly from the common assumption of tax competition between geographically neighboring states. We analyze the identified social interactions matrix to provide novel insights into the longstanding debate on the relative roles of factor mobility and yardstick competition in driving tax setting behavior across states. Most broadly, our method shows how the analysis of social interactions can be usefully extended to economic realms where no network data exists. 
Keywords:  social interactions, panel data, high dimensional estimation, GMM, adaptive elastic net 
JEL:  C18 C31 D85 H71 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:hka:wpaper:2018013&r=ecm 
By:  Matt Goldman; David M. Kaplan 
Abstract:  When comparing two distributions, it is often helpful to learn at which quantiles or values there is a statistically significant difference. This provides more information than the binary "reject" or "do not reject" decision of a global goodness-of-fit test. Framing our question as multiple testing across the continuum of quantiles $\tau\in(0,1)$ or values $r\in\mathbb{R}$, we show that the Kolmogorov-Smirnov test (interpreted as a multiple testing procedure) achieves strong control of the familywise error rate. However, its well-known flaw of low sensitivity in the tails remains. We provide an alternative method that retains such strong control of familywise error rate while also having even sensitivity, i.e., equal pointwise type I error rates at each of $n\to\infty$ order statistics across the distribution. Our one-sample method computes instantly, using our new formula that also instantly computes goodness-of-fit $p$-values and uniform confidence bands. To improve power, we also propose step-down and pre-test procedures that maintain control of the asymptotic familywise error rate. One-sample and two-sample cases are considered, as well as extensions to regression discontinuity designs and conditional distributions. Simulations, empirical examples, and code are provided. 
Date:  2017–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1708.04658&r=ecm 
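The reinterpretation of the Kolmogorov-Smirnov test as a multiple testing procedure over values r can be sketched directly: reject the pointwise hypothesis F_x(r) = F_y(r) wherever the ECDF gap exceeds the single KS band. This illustrative simulation uses the asymptotic two-sample critical constant, not the paper's finite-sample method, and the paper's complaint applies here: the uniform band is insensitive in the tails.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(0.0, 1.0, size=n)
y = rng.normal(0.5, 1.0, size=n)            # location-shifted second sample

grid = np.sort(np.concatenate([x, y]))
Fx = np.searchsorted(np.sort(x), grid, side="right") / n   # ECDF of x
Fy = np.searchsorted(np.sort(y), grid, side="right") / n   # ECDF of y

# KS as a multiple testing procedure: reject H0: F_x(r) = F_y(r) at every
# value r where the ECDF gap exceeds one uniform critical band.
alpha = 0.05
c = np.sqrt(-0.5 * np.log(alpha / 2))       # asymptotic KS critical constant
band = c * np.sqrt(2 / n)                   # two-sample band, equal sizes
rejected = np.abs(Fx - Fy) > band

ks_stat = np.abs(Fx - Fy).max()
```

The boolean vector `rejected` marks the values at which the difference is significant, which is more informative than the single global reject/accept verdict.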
By:  Nadia Boussaha; Faycal Hamdi; Saïd Souam 
Abstract:  The contribution of this paper is twofold. In a first step, we propose the so-called Periodic Multivariate Autoregressive Stochastic Volatility (PVARSV) model, which allows for Granger causality in volatility and captures periodicity in the stochastic conditional variance. After a thorough discussion, we provide some probabilistic properties of this class of models. We then propose two methods for the estimation problem, one based on the periodic Kalman filter and the other on the particle filter and smoother combined with the Expectation-Maximization (EM) algorithm. In a second step, we present an empirical application modeling oil price and three exchange rate time series. It turns out that our model gives very accurate results and performs well in volatility forecasting. 
Keywords:  Multivariate periodic stochastic volatility; periodic stationarity; periodic Kalman filter; particle filtering; exchange rates; Saharan Blend oil. 
JEL:  C32 C53 F31 G17 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:drm:wpaper:201814&r=ecm 
By:  AMBA OYON, Claude Marius; Mbratana, Taoufiki 
Abstract:  This paper develops estimators for simultaneous equations with spatial autoregressive or spatial moving average error components. We derive a limited information estimator and a full information estimator. We provide simultaneous generalized method of moments estimators for each component of the variance-covariance matrix of the disturbances, in both the spatial autoregressive and the spatial moving average case. The results of our Monte Carlo experiments suggest that our estimators are consistent. When estimating the coefficient of spatial dependence, it seems better to use an instrumental variables estimator that takes simultaneity into account. We also apply this set of estimators to real data. 
Keywords:  Simultaneous; GMM; Panel data; SAR; SMA 
JEL:  C13 C33 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:84746&r=ecm 
By:  Jan Pablo Burgard; Patricia Dörr 
Abstract:  Regression analysis aims at the revelation of interdependencies and causalities between variables observed in the population. That is, a structure between regressors and regressands that causes the realization of the finite population is assumed, the so-called data generating process or superpopulation model. When data points occur in an inherent clustering, mixed models are a natural modelling approach. Given the finite population realization, a consistent estimation of the superpopulation parameters is possible. However, regression analysis seldom takes place at the level of the finite population. Rather, a survey is conducted on the population and the analyst has to use the sample for regression modeling. Under a correct regression setup, the derived estimators are consistent given that the sample is non-informative. However, these conditions are hard to verify, especially when the survey design is complex, employing clustering and unequal selection probabilities. The use of sampling weights may reduce the consequent estimation bias, as they can contain additional information about the sampling process, conditional on which the data generating process of the sampled units becomes closer to that of the whole population. Common estimation procedures that allow for survey weights in generalized linear mixed models require one unique survey weight per sampling stage, where the stages are consequently nested and correspond to the random effects analyzed in the regression. However, the data-inherent clustering (e.g. students in classes in schools) possibly does not correspond to the sampling stages (e.g. blocks of houses where the students' families live). Or the analyst has no access to the detailed sample design due to disclosure risk, or the selection of units follows an unequal sampling probability scheme, or the survey weights vary within clusters due to calibration. 
Therefore, we propose an estimation procedure that allows for unit-specific survey weights: the Monte Carlo EM (MCEM) algorithm, whose complete-data log-likelihood leads to a single-level modeling problem that allows unit-specific weighting. In the E-step, the random effects are considered to be missing data. The expected (weighted) log-likelihood is approximated via Monte Carlo integration and maximized with respect to the regression parameters. The method's performance is evaluated in a model-based simulation study with finite populations. 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:trr:wpaper:201801&r=ecm 
By:  ChiaLin Chang (National Chung Hsing University); Michael McAleer (Asia University, University of Sydney Business School, Erasmus University Rotterdam) 
Abstract:  The purpose of the paper is to (i) show that univariate GARCH is not a special case of multivariate GARCH, specifically the Full BEKK model, except under parametric restrictions on the off-diagonal elements of the random coefficient autoregressive coefficient matrix that are not consistent with Full BEKK, and (ii) provide the regularity conditions that arise from the underlying random coefficient autoregressive process, for which the (quasi) maximum likelihood estimates (QMLE) have valid asymptotic properties under the appropriate parametric restrictions. The paper provides a discussion of the stochastic processes that lead to the alternative specifications, regularity conditions, and asymptotic properties of the univariate and multivariate GARCH models. It is shown that the Full BEKK model, which in empirical practice is estimated almost exclusively compared with Diagonal BEKK (DBEKK), has no underlying stochastic process that leads to its specification, regularity conditions, or asymptotic properties, as compared with DBEKK. An empirical illustration shows the differences in the QMLE of the parameters of the conditional means and conditional variances for the univariate, DBEKK and Full BEKK specifications. 
Keywords:  Random coefficient stochastic process; Off-diagonal parametric restrictions; Diagonal BEKK; Full BEKK; Regularity conditions; Asymptotic properties; Conditional volatility; Univariate and multivariate models; Fossil fuels and carbon emissions. 
JEL:  C22 C32 C52 C58 
Date:  2018–03–07 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20180023&r=ecm 
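The Diagonal BEKK recursion that anchors the comparison can be written down in a few lines. The parameter values below are illustrative, not estimates, and satisfy the usual stationarity-type restriction a_ii^2 + b_ii^2 < 1 for each series.

```python
import numpy as np

rng = np.random.default_rng(5)
T, k = 500, 2
eps = rng.normal(size=(T, k))               # illustrative shock series

# Diagonal BEKK recursion:
#   H_t = C C' + A eps_{t-1} eps_{t-1}' A' + B H_{t-1} B'
# with diagonal A and B (the restriction that gives DBEKK its name).
C = np.array([[0.3, 0.0], [0.1, 0.3]])      # lower-triangular intercept factor
A = np.diag([0.3, 0.25])
B = np.diag([0.9, 0.92])

H = np.empty((T, k, k))
H[0] = np.eye(k)
for t in range(1, T):
    e = eps[t - 1][:, None]
    H[t] = C @ C.T + A @ (e @ e.T) @ A.T + B @ H[t - 1] @ B.T
```

Because each term is positive semi-definite and C C' is positive definite, every conditional covariance matrix H_t is positive definite by construction, which is part of the appeal of the BEKK form.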
By:  Chambers, Marcus J; Taylor, AM Robert 
Abstract:  We consider models for both deterministic one-time and continuous stochastic parameter change in a continuous time autoregressive model around a deterministic trend function. For the latter we focus on the case where the autoregressive parameter itself follows a first-order autoregression. Exact discrete time analogue models are detailed in each case and compared to corresponding parameter change models adopted in the discrete time literature. The relationships between the parameters in the continuous time models and their discrete time analogues are also explored. For the one-time parameter change model the discrete time models used in the literature can be justified by the corresponding continuous time model, with only a minor modification needed for the (most likely) case where the change-point does not coincide with one of the discrete time observation points. For the stochastic parameter change model considered we show that the resulting discrete time model is characterised by an autoregressive parameter the logarithm of which follows an ARMA(1,1) process. We discuss how this relates to models which have been proposed in the discrete time stochastic unit root literature. The implications of our results for a number of extant discrete time models and testing procedures are discussed. 
Keywords:  Time-varying parameters, continuous and discrete time, autoregression, trend break, unit root, persistence change, explosive bubbles, random coefficient models 
Date:  2018–03–01 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:21684&r=ecm 
By:  David M. Kaplan (University of Missouri); Longhao Zhuo (Bank of America) 
Abstract:  Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (e.g., Gaussian). For the corresponding limit experiment, we characterize the frequentist size of a certain Bayesian hypothesis test of (possibly nonlinear) inequalities. If the null hypothesis is that the (possibly infinite-dimensional) parameter lies in a certain half-space, then the Bayesian test's size is $\alpha$; if the null hypothesis is a subset of a half-space, then size is above $\alpha$ (sometimes strictly); and in other cases, size may be above, below, or equal to $\alpha$. Two examples illustrate our results: testing stochastic dominance and testing curvature of a translog cost function. 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1607.00393&r=ecm 
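The cleanest case, that a half-space null in the Gaussian limit experiment yields frequentist size exactly alpha, can be checked by simulation. This is a stylized one-dimensional sketch with a flat prior, not the paper's general, possibly infinite-dimensional setting.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

rng = np.random.default_rng(6)
alpha = 0.05
reps = 100_000
theta = 0.0                                  # boundary of the half-space null H0: theta <= 0

z = rng.normal(theta, 1.0, size=reps)        # limit-experiment observation, z ~ N(theta, 1)
# Flat-prior posterior is N(z, 1); posterior probability of H0 is Phi(-z).
post_null = np.array([Phi(-zi) for zi in z])
reject = post_null < alpha                   # Bayesian test: reject if P(H0 | z) < alpha

size = reject.mean()                         # frequentist rejection rate at the boundary
```

Rejecting when the posterior probability of the null drops below alpha is equivalent here to rejecting when z exceeds the (1 - alpha) normal quantile, so the rejection rate at the boundary point matches alpha.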
By:  Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics, Singapore Management University) 
Abstract:  This paper extends the asymptotic theory for the fractional Vasicek model developed in Xiao and Yu (2018) from the case where H ∈ (1/2, 1) to the case where H ∈ (0, 1/2). It is found that the asymptotic theory for the estimator of the persistence parameter (k) critically depends on the sign of k. Moreover, if k > 0, the asymptotic distribution for the estimator of k is different when H ∈ (0, 1/2) from that when H ∈ (1/2, 1). 
Keywords:  Least squares; Roughness; Strong consistency; Asymptotic distribution 
JEL:  C15 C22 C32 
Date:  2018–03–19 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2018_007&r=ecm 
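For intuition, here is the least-squares estimator of the persistence parameter in the ordinary Brownian benchmark H = 1/2, on an illustrative Euler-discretized path. The paper's rough case H ∈ (0, 1/2) requires a fractional Brownian driver and has different asymptotics, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(9)
k_true, mu, sigma = 2.0, 1.0, 0.3
dt, n = 0.01, 20_000

# Euler-discretized Vasicek path: dX = k (mu - X) dt + sigma dW.
X = np.empty(n)
X[0] = mu
dW = rng.normal(0.0, np.sqrt(dt), size=n - 1)
for t in range(n - 1):
    X[t + 1] = X[t] + k_true * (mu - X[t]) * dt + sigma * dW[t]

# Least squares: regress increments on (1, X_t); the slope recovers -k*dt.
dX = np.diff(X)
Z = np.column_stack([np.ones(n - 1), X[:-1]])
coef, *_ = np.linalg.lstsq(Z, dX, rcond=None)
k_hat = -coef[1] / dt
mu_hat = coef[0] / (k_hat * dt)
```

With k > 0 the process is mean-reverting and the estimator is consistent as the time span grows; the paper's contribution concerns how the limit distribution changes with the roughness parameter H.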
By:  Victor Chernozhukov; Whitney Newey; James Robins 
Abstract:  We provide adaptive inference methods for linear functionals of sparse linear approximations to the conditional expectation function. Examples of such functionals include average derivatives, policy effects, average treatment effects, and many others. The construction relies on building Neyman-orthogonal equations that are approximately invariant to perturbations of the nuisance parameters, including the Riesz representer for the linear functionals. We use L1-regularized methods to learn approximations to the regression function and the Riesz representer, and construct the estimator for the linear functionals as the solution to the orthogonal estimating equations. We establish that under weak assumptions the estimator concentrates in a 1/√n neighborhood of the target with deviations controlled by the normal laws, and the estimator attains the semiparametric efficiency bound in many cases. In particular, either the approximation to the regression function or the approximation to the Riesz representer can be "dense" as long as one of them is sufficiently "sparse". Our main results are nonasymptotic and imply asymptotic uniform validity over large classes of models. 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1802.08667&r=ecm 
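The simplest member of this framework is the partialling-out estimator of the coefficient in a partially linear model, with cross-fitting. The sketch below uses ridge regression in place of the paper's L1-regularized learners, on simulated data, so it illustrates the orthogonalization idea rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 1000, 50
X = rng.normal(size=(n, p))
g = X[:, 0] - X[:, 1]                        # sparse outcome nuisance g(X)
m = 0.5 * X[:, 0]                            # sparse treatment nuisance m(X)
D = m + rng.normal(size=n)
theta = 2.0
Y = D * theta + g + rng.normal(size=n)

def ridge(Xm, y, lam=1.0):
    """Ridge regression coefficients (stand-in for an L1-regularized learner)."""
    pdim = Xm.shape[1]
    return np.linalg.solve(Xm.T @ Xm + lam * np.eye(pdim), Xm.T @ y)

# Neyman-orthogonal (partialling-out) moment with 2-fold cross-fitting:
# residualize Y and D on X using the other fold, then regress residual on residual.
idx = rng.permutation(n)
folds = [idx[: n // 2], idx[n // 2 :]]
num = den = 0.0
for k in (0, 1):
    tr, te = folds[k], folds[1 - k]
    bY = ridge(X[tr], Y[tr])
    bD = ridge(X[tr], D[tr])
    rY = Y[te] - X[te] @ bY
    rD = D[te] - X[te] @ bD
    num += rD @ rY
    den += rD @ rD
theta_hat = num / den
```

Because the moment is orthogonal to the nuisance estimates, first-stage regularization bias enters the estimate of theta only at second order, which is what drives the 1/√n concentration result.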
By:  Breitmoser, Yves (HU Berlin) 
Abstract:  Multinomial logit is the canonical model of discrete choice but is widely criticized for resting on functional form assumptions. The present paper shows that logit is behaviorally founded without such assumptions. Logit's functional form obtains if relative choice probabilities are independent of irrelevant alternatives and invariant to utility translation, to relabeling options (presentation independence), and to changing utilities of third options (context independence). Reviewing the behavioral evidence, presentation and context independence seem to be violated in typical experiments, though not IIA and translation invariance. Relaxing context independence yields contextual logit (Wilcox, 2011); relaxing presentation independence makes it possible to capture the "focality" of options. 
Keywords:  stochastic choice; logit; axiomatic foundation; behavioral evidence; utility estimation 
JEL:  D03 C13 
Date:  2018–03–07 
URL:  http://d.repec.org/n?u=RePEc:rco:dpaper:78&r=ecm 
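Two of the invariance properties at issue can be verified in a few lines: IIA (relative choice probabilities are unaffected by third options) and translation invariance of utilities. The utility values below are made up for illustration.

```python
import numpy as np

def logit_probs(u):
    """Multinomial logit choice probabilities (softmax of utilities)."""
    e = np.exp(u - u.max())                 # subtract the max for numerical stability
    return e / e.sum()

u = np.array([1.0, 0.5, -0.2])
p3 = logit_probs(u)                         # choice set of three options
p2 = logit_probs(u[:2])                     # third option removed

# IIA: the odds of option 1 versus option 2 do not depend on option 3.
ratio_3 = p3[0] / p3[1]
ratio_2 = p2[0] / p2[1]

# Translation invariance: adding a constant to all utilities changes nothing.
p_shift = logit_probs(u + 5.0)
```

Both ratios equal exp(u_1 - u_2), which is exactly the functional-form property the paper derives from the behavioral axioms rather than assuming.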
By:  Christian Dreger; Konstantin A. Kholodilin 
Abstract:  The European debt crisis has revealed serious deficiencies in, and risks to, the proper functioning of the monetary union. Against this backdrop, early warning systems are of crucial importance. In this study, which focuses on euro area member states, the robustness of early warning systems for predicting government debt crises is evaluated. Robustness is captured along several dimensions, such as the chronology of past crises, econometric methods, and the selection of indicators in forecast combinations. The chosen approach is shown to be crucial for the results. Therefore, the construction of early warning systems should be based on a wide set of variables and methods in order to draw reliable conclusions. 
Keywords:  Sovereign debt crises, multiple bubbles, signal approach, logit, panel data model 
JEL:  C23 C25 H63 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1724&r=ecm 
By:  Max Löffler; Andreas Peichl; Sebastian Siegloch 
Abstract:  There is still considerable dispute about the magnitude of labor supply elasticities. While differences in estimates, especially between micro and macro models, have recently been attributed to frictions and adjustment costs, we show that the variation in elasticities derived from structural labor supply models can also be explained by modeling assumptions. Specifically, we estimate 3,456 different models on the same data, each representing a plausible combination of frequently made choices. While many modeling assumptions do not systematically affect labor supply elasticities, our controlled meta-analysis shows that results are very sensitive to the treatment of hourly wages in the estimation. For example, different (sensible) choices concerning the modeling of the underlying wage distribution, and especially the imputation of (missing) wages, lead to point estimates of elasticities between 0.2 and 0.65. We hence conclude that researchers should pay more attention to the robustness of their estimations with respect to the wage treatment. 
Keywords:  Labor supply, elasticity, random utility models, wages 
JEL:  C25 C52 H31 J22 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:ces:ifowps:_259&r=ecm 
By:  L. Bauwens; E. Otranto 
Abstract:  New parameterizations of the dynamic conditional correlation (DCC) model and of the regime-switching dynamic correlation (RSDC) model are introduced, such that these models provide specific dynamics for each correlation. They imply a nonlinear autoregressive form of dependence on lagged correlations and are based on properties of the Hadamard exponential matrix. The new models are applied to a data set of twenty stock market indices and compared to the classical DCC and RSDC models. The empirical results show that the new models improve on their classical versions in terms of several criteria. 
Keywords:  dynamic conditional correlations; regime-switching dynamic correlations; Hadamard exponential matrix 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:cns:cnscwp:201803&r=ecm 
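For reference, the classical scalar DCC correlation recursion of Engle (2002) that the new parameterizations generalize with correlation-specific dynamics. Shock series and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
T, k = 500, 3
eps = rng.normal(size=(T, k))               # standardized return residuals (illustrative)

# Classical scalar DCC recursion:
#   Q_t = (1 - a - b) * S + a * eps_{t-1} eps_{t-1}' + b * Q_{t-1}
S = np.corrcoef(eps.T)                      # unconditional correlation target
a, b = 0.05, 0.90                           # one (a, b) pair for all correlations; a + b < 1
Q = np.empty((T, k, k))
R = np.empty((T, k, k))
Q[0] = S.copy()
R[0] = S.copy()
for t in range(1, T):
    e = eps[t - 1][:, None]
    Q[t] = (1 - a - b) * S + a * (e @ e.T) + b * Q[t - 1]
    d = 1 / np.sqrt(np.diag(Q[t]))
    R[t] = Q[t] * np.outer(d, d)            # rescale Q_t into a correlation matrix
```

The scalar parameters a and b impose the same dynamics on every pair of series; the paper's point is to replace this with a parameterization that gives each correlation its own dynamics while keeping R_t a valid correlation matrix.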