
on Econometrics 
By:  Griffin, Jim; Liu, Jia; Maheu, John M 
Abstract:  Variance estimation is central to many questions in finance and economics. Until now ex-post variance estimation has been based on infill asymptotic assumptions that exploit high-frequency data. This paper offers a new exact finite sample approach to estimating ex-post variance using Bayesian nonparametric methods. In contrast to the classical counterpart, the proposed method exploits pooling over high-frequency observations with similar variances. Bayesian nonparametric variance estimators under no noise, heteroskedastic and serially correlated microstructure noise are introduced and discussed. Monte Carlo simulation results show that the proposed approach can increase the accuracy of variance estimation. Applications to equity data and comparison with realized variance and realized kernel estimators are included. 
Keywords:  pooling, microstructure noise, slice sampling 
JEL:  C11 C22 C58 G1 
Date:  2016–05–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:71220&r=ecm 
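The pooling idea behind the abstract above can be conveyed with a toy sketch. This is not the paper's Bayesian nonparametric estimator (which uses an infinite mixture and slice sampling); it merely groups squared high-frequency returns by a simple 2-means clustering and estimates each observation's spot variance by its group mean, which smooths the noisy squared-return path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-frequency returns whose spot variance switches between two levels.
sig2 = np.concatenate([np.full(500, 1e-4), np.full(500, 4e-4)])
r = rng.normal(0.0, np.sqrt(sig2))

# Classical realized variance: the sum of squared returns.
rv = (r ** 2).sum()

# Crude stand-in for nonparametric pooling: 2-means clustering (Lloyd's
# algorithm) on the squared returns, then estimate each observation's spot
# variance by the mean of its cluster.
x = r ** 2
centers = np.array([x.min(), x.max()])
for _ in range(100):
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    for k in range(2):
        if np.any(labels == k):
            centers[k] = x[labels == k].mean()
spot_pooled = centers[labels]

mse_raw = ((x - sig2) ** 2).mean()
mse_pooled = ((spot_pooled - sig2) ** 2).mean()
print(rv, mse_pooled < mse_raw)   # pooling sharpens the spot-variance path
```

In this simulated two-regime example the pooled spot-variance path has a lower mean squared error than the raw squared returns, while the realized variance stays close to the true integrated variance of 0.25.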
By:  Andrew J. Buck (Department of Economics, Temple University); George M. Lady (Department of Economics, Temple University) 
Abstract:  It is common econometric practice to propose a system of equations, termed the "structure," estimate each endogenous variable in the structure via a linear regression with all of the exogenous variables as arguments, and then employ one of a variety of regression techniques to recapture the coefficients in the (Jacobian) arrays of the structure. A recent literature, e.g., Lady and Buck (2015), has shown that a qualitative analysis of a model's structural and estimated reduced form arrays can provide a robust procedure for assessing whether a model's hypothesized structure has been falsified. This paper shows that the even weaker statement of the model's structure provided by zero restrictions on the structural arrays can be falsified, independent of the proposed nonzero entries. When this takes place, multi-stage least squares, or any procedure for estimating the structural arrays with the zero restrictions imposed, will present estimates that could not possibly have generated the data upon which the estimated reduced form is based. The examples given in the paper are based upon a Monte Carlo sampling procedure. 
Keywords:  Estimation, Falsified Model, reduced form, qualitative analysis 
JEL:  C15 C18 C51 C52 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:tem:wpaper:1601&r=ecm 
By:  Georgiev, Iliyan; Harvey, David I; Leybourne, Stephen J; Taylor, A M Robert 
Abstract:  We examine how the familiar spurious regression problem can manifest itself in the context of recently proposed predictability tests. For these tests to provide asymptotically valid inference, account has to be taken of the degree of persistence of the putative predictors. Failure to do so can lead to spurious over-rejections of the no predictability null hypothesis. A number of methods have been developed to achieve this. However, these approaches all make an underlying assumption that any predictability in the variable of interest is purely attributable to the predictors under test, rather than to any unobserved persistent latent variables, themselves uncorrelated with the predictors being tested. We show that where this assumption is violated, something that could very plausibly happen in practice, sizeable (spurious) rejections of the null can occur in cases where the variables under test are not valid predictors. In response, we propose a screening test for predictive regression invalidity based on a stationarity testing approach. In order to allow for an unknown degree of persistence in the putative predictors, and for both conditional and unconditional heteroskedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We establish the asymptotic validity of this bootstrap test, which entails establishing a conditional invariance principle along with its bootstrap counterpart, both of which appear to be new to the literature and are likely to have important applications beyond the present context. We also show how our bootstrap test can be used, in conjunction with extant predictability tests, to deliver a two-step feasible procedure. Monte Carlo simulations suggest that our proposed bootstrap methods work well in finite samples. An illustration employing U.S. stock returns data demonstrates the practical usefulness of our procedures. 
Keywords:  Predictive regression; causality; persistence; spurious regression; stationarity test; fixed regressor wild bootstrap; conditional distribution. 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:16666&r=ecm 
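The fixed regressor wild bootstrap named above can be sketched in a few lines. The KPSS-type statistic on OLS residuals and the Rademacher weights below are illustrative choices under simplifying assumptions, not the paper's exact test:

```python
import numpy as np

rng = np.random.default_rng(1)

def kpss_stat(u):
    # KPSS-type statistic: scaled sum of squared partial sums of residuals.
    T = len(u)
    s = np.cumsum(u - u.mean())
    return (s ** 2).sum() / (T ** 2 * u.var())

# Toy data: y regressed on a persistent putative predictor x;
# the null of no predictability holds by construction.
T = 200
x = np.cumsum(rng.normal(size=T)) * 0.1      # persistent regressor
y = rng.normal(size=T)

# OLS of y on (constant, x) and the residual-based stationarity statistic.
X = np.column_stack([np.ones(T), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta
stat = kpss_stat(u)

# Fixed regressor wild bootstrap: keep x fixed, resample y* = u_t * w_t with
# Rademacher weights w_t, re-estimate, and recompute the statistic.
B = 499
boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=T)
    yb = u * w
    bb = np.linalg.lstsq(X, yb, rcond=None)[0]
    boot[b] = kpss_stat(yb - X @ bb)
pval = (boot >= stat).mean()
print(stat, pval)
```

The bootstrap p-value is the fraction of bootstrap statistics at least as large as the observed one; holding the regressors fixed is what accommodates their unknown degree of persistence.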
By:  Gouriéroux, Christian; Zakoian, Jean-Michel 
Abstract:  The noncausal autoregressive process with heavy-tailed errors possesses nonlinear causal dynamics, which allow for unit roots, local explosions or asymmetric cycles often observed in economic and financial time series. It provides a new model for multiple local explosions in a strictly stationary framework. The causal predictive distribution displays surprising features, such as the existence of higher moments than for the marginal distribution, or the presence of a unit root in the Cauchy case. Aggregating such models can yield complex dynamics with local and global explosions as well as variation in the rate of explosion. The asymptotic behavior of a vector of sample autocorrelations is studied in a semiparametric noncausal AR(1) framework with Pareto-like tails, and diagnostic tests are proposed. Empirical results based on the Nasdaq composite price index are provided. 
Keywords:  Causal innovation; Explosive bubble; Heavy-tailed errors; Noncausal process; Stable process 
JEL:  C13 C22 C52 
Date:  2016–05–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:71105&r=ecm 
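A strictly stationary noncausal AR(1) with heavy tails is easy to simulate, since its solution x_t = sum_{j>=0} psi^j eps_{t+j} is a causal AR(1) run backwards in time. The Student-t errors below are an illustrative stand-in for the stable or Pareto-tailed innovations discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noncausal AR(1): x_t = psi * x_{t+1} + eps_t with |psi| < 1 and
# heavy-tailed eps.  Simulate the causal recursion, then reverse time.
psi, T, burn = 0.8, 500, 500
eps = rng.standard_t(1.5, size=T + burn)   # heavy (infinite-variance) tails
z = np.zeros(T + burn)
for t in range(1, T + burn):
    z[t] = psi * z[t - 1] + eps[t]
x = z[burn:][::-1]                          # reversed path is noncausal
print(x[:5])
```

In the reversed path, a large shock produces an episode that builds up gradually and collapses abruptly, the "local explosion" pattern the abstract refers to.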
By:  Ignace De Vos; Gerdie Everaert 
Abstract:  This paper extends the Common Correlated Effects Pooled (CCEP) estimator designed by Pesaran (2006) to dynamic homogeneous models. For static panels, this estimator is consistent as the number of cross-sections (N) goes to infinity, irrespective of the time series dimension (T). However, it suffers from a large bias in dynamic models when T is fixed (Everaert and De Groote, 2016). We develop a bias-corrected CCEP estimator based on an asymptotic bias expression that is valid for a multifactor error structure provided that a sufficient number of cross-sectional averages, and lags thereof, are added to the model. We show that the resulting CCEPbc estimator is consistent as N tends to infinity, both for T fixed and for T growing large, and derive its limiting distribution. Monte Carlo experiments show that our bias correction performs very well. It is nearly unbiased, even when T and/or N are small, and hence offers a strong improvement over the severely biased CCEP estimator. CCEPbc is also found to be superior to alternative bias correction methods available in the literature in terms of bias, variance and inference. 
Keywords:  Dynamic panel data, bias, bias correction, common correlated effects, unobserved common factors, cross-section dependence, lagged dependent variable 
JEL:  C23 C13 C15 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:16/920&r=ecm 
By:  Rinke, Saskia 
Abstract:  In this paper the performance of information criteria and a test against SETAR nonlinearity for outlier-contaminated time series is investigated. Additive outliers can seriously influence the properties of the underlying time series and hence of linearity tests, resulting in spurious test decisions of nonlinearity. Using simulation studies, the performance of the information criteria SIC and WIC as an alternative to linearity tests is assessed in time series with different degrees of persistence and different outlier magnitudes. For uncontaminated series and a small sample size the performance of SIC and WIC is similar to the performance of the linearity test at the 5% and 10% significance level, respectively. For an increasing number of observations the size of SIC and WIC tends to zero. In contaminated series the size of the test and of the information criteria increases with the outlier magnitude and the degree of persistence. SIC and WIC clearly outperform the test in larger samples and for larger outlier magnitudes. The power of the test and of the information criteria depends on the sample size and on the difference between the regimes. The more distinct the regimes and the larger the sample, the higher the power. Additive outliers decrease the power in distinct regimes in small samples and in intermediate regimes in large samples, but increase the power in similar regimes. Due to their higher robustness in terms of size, information criteria are a valuable alternative to linearity tests for outlier-contaminated time series. 
Keywords:  Additive Outliers, Nonlinear Time Series, Information Criteria, Linearity Test, Monte Carlo 
JEL:  C15 C22 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp575&r=ecm 
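Using SIC to choose between a linear AR model and a SETAR alternative, as studied above, can be sketched as follows. This is an illustrative implementation on clean simulated data, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a linear AR(1); fit an AR(1) and a 2-regime SETAR by OLS and
# compare SIC = n*log(sigma2_hat) + k*log(n) (smaller is better).
T = 300
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()

y1, y0 = y[1:], y[:-1]
n = len(y1)

# Linear AR(1) fit with intercept.
X = np.column_stack([np.ones(n), y0])
res = y1 - X @ np.linalg.lstsq(X, y1, rcond=None)[0]
sic_ar = n * np.log(res.var()) + 2 * np.log(n)

# SETAR(2; 1, 1): grid over threshold c on y_{t-1}, OLS in each regime.
best_rss = np.inf
for c in np.quantile(y0, np.linspace(0.15, 0.85, 30)):
    rss = 0.0
    for mask in (y0 <= c, y0 > c):
        Xr, yr = X[mask], y1[mask]
        rr = yr - Xr @ np.linalg.lstsq(Xr, yr, rcond=None)[0]
        rss += (rr ** 2).sum()
    best_rss = min(best_rss, rss)
sic_setar = n * np.log(best_rss / n) + 5 * np.log(n)  # 4 coefs + threshold

print(sic_ar, sic_setar)
```

The split regression can only lower the residual sum of squares, so the comparison hinges on whether the fit improvement outweighs the SIC penalty for the extra regime parameters and the threshold.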
By:  Hampf, Benjamin 
Abstract:  In this paper we propose a stochastic formulation of the materials balance condition which imposes physical constraints on production technologies. The estimation of the model involves a composed error term structure that is commonly applied in the literature on stochastic frontier analysis of productive efficiency. Moreover, we discuss how OLS, maximum likelihood and Bayesian methods can be used to estimate the proposed model. In contrast to previous approaches, our model makes it possible to estimate the physical limitations to production possibilities in the presence of statistical noise, and it depends on substantially weaker data requirements. We demonstrate the applicability of our new approach by estimating the materials balance condition for SO2 and CO2 using a sample of fossil-fueled power plants in the United States. 
Keywords:  Materials balance condition, Abatement efficiency, Stochastic frontier analysis, Laws of thermodynamics, Applied econometrics, Environmental economics 
JEL:  Q53 C51 Q40 D24 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:zbw:darddp:226&r=ecm 
By:  Victor Aguirregabiria; Erhao Xie 
Abstract:  This paper studies the identification of players' preferences and beliefs in empirical applications of discrete choice games using experimental data. The experiment comprises a set of games with similar features (e.g., two-player coordination games) where each game has different values for the players' monetary payoffs. Each game can be interpreted as an experimental treatment group. The researcher randomly assigns subjects to play these games and observes the outcome of each game as described by the vector of players' actions. Data from this experiment can be described in terms of the empirical distribution of players' actions conditional on the treatment group. The researcher is interested in the nonparametric identification of players' preferences (utility function of money) and players' beliefs about the expected behavior of other players, without imposing restrictions such as unbiased or rational beliefs or a particular functional form for the utility of money. We show that the hypothesis of unbiased/rational beliefs is testable and propose a test of this null hypothesis. We apply our method to two sets of experiments conducted by Goeree and Holt (2001) and Heinemann, Nagel and Ockenfels (2009). Our empirical results suggest that in the matching pennies game, a player is able to correctly predict the other player's behavior. In the public good coordination game, our test can reject the null hypothesis of unbiased beliefs when the payoff of the noncooperative action is relatively low. 
Keywords:  Testing biased beliefs; Multiple equilibria; Strategic uncertainty; Coordination game 
JEL:  C57 C72 
Date:  2016–05–12 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa560&r=ecm 
By:  Stelios D. Bekiros; Alessia Paccagnini 
Abstract:  Although policymakers and practitioners are particularly interested in dynamic stochastic general equilibrium (DSGE) models, these are typically too stylized to be applied directly to the data and often yield weak prediction results. Very recently, hybrid DSGE models have become popular for dealing with some of the model misspecifications. Major advances in estimation methodology could allow these models to outperform well-known time series models and effectively deal with more complex real-world problems as richer sources of data become available. In this study we introduce a Bayesian approach to estimate a novel factor-augmented DSGE model that extends the model of Consolo et al. [Consolo, A., Favero, C.A., and Paccagnini, A., 2009. On the Statistical Identification of DSGE Models. Journal of Econometrics, 150, 99–115]. We perform a comparative predictive evaluation of point and density forecasts for many different specifications of estimated DSGE models and various classes of VAR models, using datasets from the US economy including real-time data. Simple and hybrid DSGE models are implemented, such as DSGE-VAR, and tested against standard, Bayesian and factor-augmented VARs. The results can be useful for macro-forecasting and monetary policy analysis. 
Keywords:  Density forecasting; Marginal data density; DSGE-FAVAR; Real-time data 
JEL:  C32 C11 C15 C53 D58 
Date:  2014–10 
URL:  http://d.repec.org/n?u=RePEc:ucn:oapubs:10197/7588&r=ecm 
By:  Bazen, Stephen (AixMarseille University); Joutard, Xavier (AixMarseille University); Magdalou, Brice (University of Montpellier 1) 
Abstract:  The widely used Oaxaca decomposition applies to linear models. Extending it to commonly used nonlinear models such as duration models is not straightforward. This paper shows that the original decomposition that uses a linear model can also be obtained by an application of the mean value theorem. By extension, this basis provides a means of obtaining a decomposition formula which applies to nonlinear models which are continuous functions. The detailed decomposition of the explained component is expressed in terms of what are usually referred to as marginal effects. Explicit formulae are provided for the decomposition of some nonlinear models commonly used in applied econometrics including binary choice, duration and Box‐Cox models. 
Keywords:  Oaxaca decomposition, nonlinear models, duration models, binary choice, Box‐Cox transformation 
JEL:  C10 C18 C21 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp9909&r=ecm 
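The linear Oaxaca decomposition that the paper generalizes can be stated in a few lines. In this simulated two-group example the identity holds exactly, because OLS with an intercept makes each group mean equal to the fitted value at the mean characteristics:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two groups with different characteristics and different coefficients.
nA = nB = 2000
xA = rng.normal(1.0, 1.0, nA)
xB = rng.normal(0.5, 1.0, nB)
yA = 1.0 + 0.8 * xA + rng.normal(0, 0.5, nA)
yB = 0.5 + 0.6 * xB + rng.normal(0, 0.5, nB)

def ols(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

bA, bB = ols(xA, yA), ols(xB, yB)
mA = np.array([1.0, xA.mean()])
mB = np.array([1.0, xB.mean()])

gap = yA.mean() - yB.mean()
explained = bB @ (mA - mB)           # differences in characteristics
unexplained = (bA - bB) @ mA         # differences in coefficients
print(gap, explained + unexplained)  # the two sides match exactly
```

The paper's contribution is to extend exactly this kind of decomposition, via the mean value theorem and marginal effects, to continuous nonlinear models such as binary choice, duration and Box-Cox models.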
By:  Maarten van Oordt; Chen Zhou 
Abstract:  This paper considers the problem of estimating a linear model between two heavy-tailed variables when the explanatory variable has an extremely low (or high) value. We propose an estimator for the model coefficient by exploiting the tail dependence between the two variables and prove its asymptotic properties. Simulations show that our estimation method yields a lower mean squared error than regressions conditional on tail observations. In an empirical application we illustrate the better performance of our approach relative to the conditional regression approach in projecting the losses of industry-specific stock portfolios in the event of a market crash. 
Keywords:  Econometric and statistical methods, Financial markets 
JEL:  C14 G01 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:1622&r=ecm 
By:  Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Shu Shen (Department of Economics, University of California, Davis) 
Abstract:  Treatment effect heterogeneity is frequently studied in regression discontinuity (RD) applications. This paper is the first to propose tests for treatment effect heterogeneity under the RD setup. The proposed tests study whether a policy treatment 1) is beneficial for at least some subpopulations defined by covariate values, 2) has any impact on at least some subpopulations, and 3) has a heterogeneous impact across subpopulations. Compared with other methods currently adopted in applied RD studies, such as the subsample regression method and the interaction term method, our tests have the advantage of being fully nonparametric, robust to weak inference and powerful. Monte Carlo simulations show that our tests perform very well in small samples. We apply the tests to study the impact of attending a better high school and discover interesting patterns of treatment effect heterogeneity that were neglected by classic mean RD analyses. JEL Classification: C21, C31 
Keywords:  Sharp regression discontinuity, fuzzy regression discontinuity, treatment effect heterogeneity. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:sin:wpaper:16a005&r=ecm 
By:  Robert Kollmann 
Abstract:  This paper discusses a tractable approach for computing the likelihood function of nonlinear Dynamic Stochastic General Equilibrium (DSGE) models that are solved using second- and third-order accurate approximations. In contrast to particle filters, no stochastic simulations are needed; the method is hence much faster and thus suitable for the estimation of medium-scale models. The method assumes that the number of exogenous innovations equals the number of observables. Given an assumed vector of initial states, the exogenous innovations can then be inferred recursively from the observables, which makes the likelihood function easy to compute. Initial states and model parameters are estimated by maximizing the likelihood function. Numerical examples suggest that the method provides reliable estimates of model parameters and of latent state variables, even for highly nonlinear economies with big shocks. 
Keywords:  likelihoodbased estimation of nonlinear DSGE models; higherorder approximations; pruning; latent state variables 
JEL:  C63 C68 E37 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/228887&r=ecm 
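The inversion idea can be illustrated with a minimal one-dimensional example. The model below is hypothetical and far simpler than a DSGE economy, but it has the key feature: with as many shocks as observables and an assumed initial state, each innovation is recovered recursively and the likelihood is evaluated without any simulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear state-space model with one shock per observable:
#   s_t = a * s_{t-1} + e_t,   y_t = exp(s_t)   (observed without error).
# Given s_0, every e_t can be inverted from y_t.
a, T, s0 = 0.9, 200, 0.0
s = np.zeros(T + 1); s[0] = s0
e = rng.normal(0, 0.1, T)
for t in range(T):
    s[t + 1] = a * s[t] + e[t]
y = np.exp(s[1:])

def loglik(a, s0, y):
    ll, s_prev = 0.0, s0
    for yt in y:
        st = np.log(yt)                    # invert the observation equation
        et = st - a * s_prev               # invert the innovation
        # Gaussian density of e_t plus the Jacobian |de_t/dy_t| = 1/y_t.
        ll += (-0.5 * np.log(2 * np.pi * 0.1 ** 2)
               - et ** 2 / (2 * 0.1 ** 2) - np.log(yt))
        s_prev = st
    return ll

# The likelihood should peak near the true parameter a = 0.9.
grid = [0.5, 0.7, 0.9]
vals = [loglik(g, s0, y) for g in grid]
print(grid[int(np.argmax(vals))])
```

Maximizing this recursively built likelihood over parameters and the initial state is the estimation strategy the abstract describes; the change-of-variables Jacobian term is what makes the density of the observables, not just of the innovations, correct.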
By:  Mario Forni; Luca Gambetti; Luca Sala 
Abstract:  A shock of interest can be recovered, either exactly or to a good approximation, by means of standard VAR techniques even when the structural MA representation is non-invertible or non-fundamental. We propose a measure of how informative a VAR model is for a specific shock of interest. We show how to use this measure to validate the transmission mechanism of shocks in DSGE models through VARs. In an application, we validate a theory of news shocks. The theory does remarkably well for all variables, but understates the long-run effects of technology news on TFP. 
Keywords:  invertibility, non-fundamentalness, news shocks, DSGE model validation, structural VAR 
JEL:  C32 E32 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:mod:recent:119&r=ecm 
By:  Louis Paulot 
Abstract:  Monte Carlo simulations of diffusion processes often introduce bias in the final result, due to time discretization. Using an auxiliary Poisson process, it is possible to run simulations which are unbiased. In this article, we propose such a Monte Carlo scheme which converges to the exact value. We manage to keep the simulation variance finite in all cases, so that the strong law of large numbers guarantees the convergence. Moreover, the simulation noise is a decreasing function of the Poisson process intensity. Our method handles multidimensional processes with non-constant drifts and non-constant variance-covariance matrices. It also encompasses stochastic interest rates. 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1605.01998&r=ecm 
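The simplest version of debiasing with an auxiliary Poisson variable is the unbiased series estimator below. It targets e^x rather than a diffusion functional, so it only conveys the flavor of the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(6)

# Randomized-truncation trick: e^x = sum_k x^k / k!.  Draw N ~ Poisson(mu)
# and weight the N-th term by the inverse of its sampling probability:
#   Z = (x^N / N!) / P(N = n) = e^mu * (x / mu)^N,
# so E[Z] = sum_k x^k / k! = e^x exactly, for any intensity mu > 0.
x, mu, n = 1.0, 1.5, 200_000
N = rng.poisson(mu, size=n)
Z = np.exp(mu) * (x / mu) ** N
print(Z.mean())   # close to e^1 = 2.71828...
```

No term of the infinite series is truncated deterministically, which is why no discretization-style bias appears; the intensity mu only affects the variance, echoing the abstract's remark that the simulation noise decreases with the Poisson intensity.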
By:  Du Nguyen 
Abstract:  We derive an explicit formula for the likelihood function of a Gaussian VARMA model conditioned on initial observables where the moving-average (MA) coefficients are scalar. For fixed MA coefficients the likelihood function is optimized in the autoregressive parameters $\Phi$ by a closed-form formula generalizing the regression calculation of the VAR model, with the introduction of an inner product defined by the MA coefficients. We show that the assumption of scalar MA coefficients is not restrictive and that this formulation of the VARMA model shares many nice features of the VAR and MA models. The gradient and Hessian can be computed analytically. The likelihood function is preserved under the root inversion maps of the MA coefficients. We discuss constraints on the gradient of the likelihood function with moving average unit roots. With the help of the FFT, the likelihood function can be computed in $O((kp+1)^2T + ckT\log(T))$ time. Numerical calibration is required for the scalar MA variables only. The approach can be generalized to include additional drifts as well as integrated components. We discuss a relationship with the Borodin-Okounkov formula and the case of infinitely many MA components. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1604.08677&r=ecm 
By:  Marco Marini 
Abstract:  Benchmarking methods can be used to extrapolate (or "nowcast") low-frequency benchmarks on the basis of available high-frequency indicators. Quarterly national accounts are a typical example, where a number of monthly and quarterly indicators of economic activity are used to calculate preliminary annual estimates of GDP. Using both simulated and real-life national accounts data, this paper aims at assessing the prediction accuracy of three benchmarking methods widely used in national accounts compilation: the proportional Denton method, the proportional Cholette-Dagum method with first-order autoregressive error, and the regression-based Chow-Lin method. The results show that the Cholette-Dagum method provides the most accurate extrapolations when the indicator and the annual benchmarks move along the same trend. However, the Denton and Chow-Lin methods can prevail in real-life cases when the quarterly indicator temporarily deviates from the target series. 
Keywords:  National accounts; Indicators of economic activity; Data collection; Time series; Data analysis; Benchmarking, Extrapolation, Quarterly National Accounts 
Date:  2016–03–18 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:16/71&r=ecm 
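A minimal proportional Denton implementation (the first-difference variant, without special treatment of the initial period) solves an equality-constrained least squares problem in the benchmark-to-indicator ratios. The numbers below are made up for illustration:

```python
import numpy as np

# Proportional Denton benchmarking: choose the quarterly series x_t so the
# benchmark-to-indicator ratios r_t = x_t / i_t move as smoothly as possible
# while each year of x sums to the annual benchmark.
def denton_pfd(indicator, benchmarks, per_year=4):
    i = np.asarray(indicator, float)
    b = np.asarray(benchmarks, float)
    T, Y = len(i), len(b)
    # First-difference matrix D: penalise changes in r_t.
    D = np.zeros((T - 1, T))
    for t in range(T - 1):
        D[t, t], D[t, t + 1] = -1.0, 1.0
    # Aggregation constraints: sum_{t in year y} i_t * r_t = b_y.
    A = np.zeros((Y, T))
    for y in range(Y):
        A[y, y * per_year:(y + 1) * per_year] = i[y * per_year:(y + 1) * per_year]
    # Equality-constrained least squares solved through its KKT system.
    K = np.block([[D.T @ D, A.T], [A, np.zeros((Y, Y))]])
    rhs = np.concatenate([np.zeros(T), b])
    r = np.linalg.solve(K, rhs)[:T]
    return i * r

indicator = np.array([98., 100., 102., 101., 99., 103., 104., 102.])
benchmarks = np.array([410., 420.])
xq = denton_pfd(indicator, benchmarks)
print(xq.reshape(2, 4).sum(axis=1))   # annual sums equal the benchmarks
```

Extrapolation beyond the last benchmark works by extending the indicator columns while leaving the final quarters unconstrained; the Cholette-Dagum and Chow-Lin methods compared in the paper differ mainly in how the ratio (or regression error) is assumed to evolve in that unconstrained region.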
By:  Annastiina Silvennoinen (QUT); Timo Terasvirta (CREATES) 
Abstract:  The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test, and the size distortion is minimal. The test has reasonable power even in very short time series. It can thus serve as a test of constant variance in conditional mean models. An application to exchange rate returns is included. 
Keywords:  autoregressive conditional heteroskedasticity, modelling volatility, testing parameter constancy, time-varying GARCH 
JEL:  C32 C52 
Date:  2015–10–28 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2015_06&r=ecm 
By:  Neil Shephard; Justin J Yang 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:360986&r=ecm 
By:  Clegg, Matthew; Krauss, Christopher 
Abstract:  Partial cointegration is a weakening of cointegration that allows the "cointegrating" process to contain a random walk and a mean-reverting component. We derive its representation in state space, provide a maximum likelihood based estimation routine, and a suitable likelihood ratio test. We then explore the use of partial cointegration as a means of identifying promising pairs and generating buy and sell signals. Specifically, we benchmark partial cointegration against several classical pairs trading variants from 1990 until 2015, on a survivor-bias-free data set of the S&P 500 constituents. We find annualized returns of more than 12 percent after transaction costs. These results can only partially be explained by common sources of systematic risk and are clearly superior to classical distance-based or cointegration-based pairs trading variants on our data set. 
Keywords:  statistical arbitrage, pairs trading, quantitative strategies, cointegration, partial cointegration 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:zbw:iwqwdp:052016&r=ecm 
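The state-space representation described above, an observed spread equal to a random walk plus a mean-reverting AR(1), can be evaluated with a standard Kalman filter. This sketch assumes the variances are known and is not the authors' estimation routine:

```python
import numpy as np

rng = np.random.default_rng(7)

# Partially cointegrated spread: random walk M_t plus mean-reverting R_t.
T, rho, s_m, s_r = 500, 0.8, 0.1, 0.5
M = np.cumsum(rng.normal(0, s_m, T))
R = np.zeros(T)
for t in range(1, T):
    R[t] = rho * R[t - 1] + rng.normal(0, s_r)
x = M + R

def kalman_loglik(x, rho, s_m, s_r):
    # State (M_t, R_t); observation x_t = M_t + R_t with no extra noise.
    F = np.array([[1.0, 0.0], [0.0, rho]])
    Q = np.diag([s_m ** 2, s_r ** 2])
    H = np.array([1.0, 1.0])
    a = np.array([x[0], 0.0])                   # rough initialisation
    P = np.diag([10.0, s_r ** 2 / (1 - rho ** 2)])
    ll = 0.0
    for t in range(1, len(x)):
        a, P = F @ a, F @ P @ F.T + Q           # predict
        v = x[t] - H @ a                        # innovation
        S = H @ P @ H
        K = P @ H / S
        ll += -0.5 * (np.log(2 * np.pi * S) + v ** 2 / S)
        a, P = a + K * v, P - np.outer(K, H @ P)  # update
    return ll

# The likelihood should prefer the true mean-reversion parameter.
print(kalman_loglik(x, 0.8, s_m, s_r) > kalman_loglik(x, 0.0, s_m, s_r))
```

Maximizing this likelihood over the model parameters, and comparing it against a pure random walk via a likelihood ratio test, is the spirit of the estimation and testing routine the abstract describes.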