
NEP: New Economics Papers on Econometrics 
By:  Saikkonen, Pentti (Department of Mathematics and Statistics, University of Helsinki); Sandberg, Rickard (Department of Economics, Center for Economic Statistics, Stockholm School of Economics) 
Abstract:  This work develops likelihood-based unit root tests in the noncausal autoregressive (NCAR) model formulated by Lanne and Saikkonen (2011, Journal of Time Series Econometrics 3, Iss. 3, Article 2). The possible unit root is assumed to appear in the causal autoregressive polynomial, and for reasons of identification the error term of the model is supposed to be non-Gaussian. In order to derive the tests, asymptotic properties of the maximum likelihood estimators are established under the unit root hypothesis. The limiting distributions of the proposed tests depend on a nuisance parameter determined by the distribution of the error term of the model. A simple procedure to handle this nuisance parameter dependence in applications is proposed. Finite sample properties of the tests are examined by means of Monte Carlo simulations. The results show that the size properties of the tests are satisfactory and the power against stationary NCAR alternatives is significantly higher than the power of conventional Dickey-Fuller tests and the M-tests of Lucas (1995, Econometric Theory 11, 331-346). In an empirical application to a Finnish interest rate series, evidence in favour of a stationary NCAR model with leptokurtic errors is found. 
Keywords:  maximum likelihood estimation; noncausal autoregressive model; non-Gaussian time series; unit root 
JEL:  C01 C12 C22 
Date:  2013–11–02 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2013_026&r=ecm 
By:  James Davidson (Department of Economics, University of Exeter); Andreea G. Halunga (Department of Economics, University of Exeter) 
Abstract:  This paper proposes a consistent model specification test that can be applied to a wide class of models and estimators, including all variants of quasi-maximum likelihood and generalized method of moments. Our framework is independent of the form of the model and generalizes Bierens' (1982, 1990) approach. It has particular applications in new cases such as heteroskedastic errors and discrete data models, but the chief appeal of our approach is that it provides a "one size fits all" test. We specify a test based on a linear combination of individual components of the indicator vector that can be computed routinely, does not need to be tailored to the particular model, and is expected to have power against a wide class of alternatives. Although primarily envisaged as a test of functional form, this type of moment test can also be extended to testing for omitted variables. 
Keywords:  specification testing; quasi-maximum likelihood estimators; generalized method of moments estimators. 
JEL:  C12 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:exe:wpaper:1312&r=ecm 
By:  Xiaohu Wang (Chinese University of Hong Kong); Jun Yu (Singapore Management University, School of Economics) 
Abstract:  Large sample properties are studied for a first-order autoregression (AR(1)) with a root greater than unity. It is shown that, contrary to the AR coefficient, the least squares (LS) estimator of the intercept and its t-statistic are asymptotically normal without requiring a Gaussian error distribution, and hence an invariance principle applies. While the invariance principle does not apply to the asymptotic distribution of the LS estimator of the AR coefficient, we show explicitly how it depends on the initial condition and the intercept. Also established are the asymptotic independence between the LS estimators of the intercept and the AR coefficient and the asymptotic independence between their t-statistics. Asymptotic theory for explosive processes is compared to that for unit root AR(1) processes and stationary AR(1) processes. The coefficient-based test and the t-test have better power for testing the hypothesis of zero intercept in the explosive process than in the stationary process. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:082013&r=ecm 
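As a hedged numerical illustration of the setting in this abstract (my own sketch, not the authors' code; all function names are invented), least-squares estimation of the intercept and AR coefficient in an explosive AR(1) can be carried out as follows. With a root above unity the AR coefficient estimate converges very quickly, while the intercept estimate behaves like an ordinary sample-size-root-n estimator:

```python
import numpy as np

def simulate_ar1(rho, intercept, n, seed=0):
    """Simulate y_t = intercept + rho * y_{t-1} + e_t with standard normal errors."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n + 1)
    for t in range(1, n + 1):
        y[t] = intercept + rho * y[t - 1] + rng.standard_normal()
    return y

def ls_ar1(y):
    """Least-squares estimates of (intercept, rho) from regressing y_t on (1, y_{t-1})."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta  # (intercept_hat, rho_hat)

# Explosive root rho = 1.05: the LS estimate of rho is extremely precise,
# because the regressor y_{t-1} grows geometrically.
y = simulate_ar1(rho=1.05, intercept=0.5, n=200)
intercept_hat, rho_hat = ls_ar1(y)
```

This is only a sketch of the estimator studied in the abstract, not of its asymptotic theory.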
By:  Anton Skrobotov (Gaidar Institute for Economic Policy) 
Abstract:  In a recent study, Harvey et al. (2012) investigated procedures for unit root testing that employ break detection methods under a local break in trend. We apply this methodology to analyze the asymptotic and finite sample behavior of procedures under a local break when testing the null hypothesis of stationarity, local to a unit root, against the alternative hypothesis of a unit root. We extend the GLS-based stationarity test proposed by Harris et al. (2007) to the case of a structural break and obtain its asymptotic properties under a local trend break. Two procedures are considered. The first procedure uses a with-break stationarity test, but with adaptive critical values. The second procedure utilizes an intersection-of-rejections testing strategy containing tests with and without a break. Application of these approaches helps to prevent serious size distortions for small break magnitudes that are otherwise undetectable. Additionally, in an approach similar to Harvey et al. (2013) and Busetti and Harvey (2001), we propose a test based on minimizing the sequence of GLS-based stationarity test statistics over all possible break dates. This infimum test, in contrast to Busetti and Harvey (2001), does not require an additional assumption about a faster rate of convergence of the break magnitude. Asymptotic and finite sample simulations show that under local-to-zero behavior of the trend break the asymptotic analysis provides a good approximation of the finite sample behavior of the proposed procedures. The proposed procedures can be used for confirmatory analysis together with the tests of Harvey et al. (2012) and Harvey et al. (2013). 
Keywords:  Stationarity tests, KPSS tests, local break in trend, size distortions, intersection of rejection decision rule. 
JEL:  C12 C22 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:gai:wpaper:0074&r=ecm 
By:  Peter M Robinson; Carlos Velasco 
Abstract:  A dynamic panel data model is considered that contains possibly stochastic individual components and a common fractional stochastic time trend. We propose four different ways of coping with the individual effects so as to estimate the fractional parameter. Like models with autoregressive dynamics, ours nests a unit root, but unlike the nonstandard asymptotics in the autoregressive case, estimates of the fractional parameter can be asymptotically normal. Establishing this property is made difficult by bias caused by the individual effects, or by the consequences of eliminating them, and requires the number of time series observations T to increase, while the cross-sectional size, N, can either remain fixed or increase with T. The biases in the central limit theorem are asymptotically negligible only under stringent conditions on the growth of N relative to T, but these can be relaxed by bias correction. For three of the estimates the biases depend only on the fractional parameter. In hypothesis testing, bias correction of the estimates is readily carried out. We evaluate the biases numerically for a range of T and parameter values, develop and justify feasible bias-corrected estimates, and briefly discuss implied but less effective corrections. A Monte Carlo study of finite-sample performance is included. 
Keywords:  Panel data, Fractional time series, Estimation, Testing, Bias correction 
JEL:  C12 C13 C23 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/567&r=ecm 
By:  Brewer, Mike (ISER, University of Essex); Crossley, Thomas F. (University of Essex); Joyce, Robert (Institute for Fiscal Studies, London) 
Abstract:  A growing literature on inference in difference-in-differences (DiD) designs with grouped errors has been pessimistic about obtaining hypothesis tests of the correct size, particularly with few groups. We provide Monte Carlo evidence for three points: (i) it is possible to obtain tests of the correct size even with few groups, and in many settings very straightforward methods will achieve this; (ii) the main problem in DiD designs with grouped errors is instead low power to detect real effects; and (iii) feasible GLS estimation combined with robust inference can increase power considerably whilst maintaining correct test size – again, even with few groups. 
Keywords:  difference in differences, hypothesis test, power, cluster robust, feasible GLS 
JEL:  C12 C13 C21 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp7742&r=ecm 
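For readers unfamiliar with the design being discussed, the canonical 2x2 difference-in-differences estimator reduces to a contrast of group/time cell means. The sketch below is my own minimal illustration (not the authors' code, and it omits the grouped-error inference that is the paper's actual subject):

```python
import numpy as np

def did_estimate(y, treated, post):
    """Canonical 2x2 difference-in-differences: (treated post - treated pre)
    minus (control post - control pre), computed from cell means."""
    y, treated, post = (np.asarray(a) for a in (y, treated, post))
    def cell(g, t):
        return y[(treated == g) & (post == t)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

# Controls rise from mean 1 to mean 3 (common trend +2); treated units rise
# from 2 to 6, so the implied treatment effect is (6 - 2) - (3 - 1) = 2.
y       = [1, 1, 3, 3, 2, 2, 6, 6]
treated = [0, 0, 0, 0, 1, 1, 1, 1]
post    = [0, 0, 1, 1, 0, 0, 1, 1]
effect = did_estimate(y, treated, post)
```

In practice the same estimate is obtained from a regression of y on treated, post, and their interaction, which is where cluster-robust or feasible GLS inference enters.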
By:  Jason R. Blevins (Department of Economics, Ohio State University) 
Abstract:  When a continuous time model is sampled only at equally spaced intervals, a priori restrictions on the parameters can provide natural identifying restrictions which serve to rule out otherwise observationally equivalent parameter values. Specifically, we consider identification of the parameter matrix in a linear system of first-order stochastic differential equations, a setting which is general enough to include many common continuous time models in economics and finance. We derive a new characterization of the identification problem under a fully general class of linear restrictions on the parameter matrix and establish conditions under which only floor(n/2) restrictions are sufficient for identification when only the discrete time process is observable. Restrictions of the required kind are typically implied by economic theory and include zero restrictions that arise when some variables are excluded from an equation. We also consider identification of the intensity matrix of a discretely-sampled finite Markov jump process, a related special case where we show that only floor((n-1)/2) restrictions are required. We demonstrate our results by applying them to two example models from economics and finance: a continuous time regression model with three equations and a continuous-time model of credit rating dynamics. 
Keywords:  stochastic differential equations, identification, continuous time regression, Markov jump process, matrix exponential, matrix logarithm 
JEL:  C32 C51 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:osu:osuewp:1301&r=ecm 
By:  Guo, Shaojun; Ling, Shiqing; Zhu, Ke 
Abstract:  Testing causality-in-mean and causality-in-variance has been studied extensively. However, none of the existing tests can detect causality-in-mean and causality-in-variance simultaneously. In this article, we introduce a factor double autoregressive (FDAR) model. Based on this model, a score test is proposed to detect causality-in-mean and causality-in-variance simultaneously. Furthermore, strong consistency and asymptotic normality of the quasi-maximum likelihood estimator (QMLE) for the FDAR model are established. A small simulation study shows good performance of the QMLE and the score test in finite samples. A real data example on the causal relationship between the Hong Kong and US stock markets is given. 
Keywords:  Asymptotic Normality; Causality-in-mean; Causality-in-variance; Factor DAR model; Instantaneous causality; Score test; Strong consistency. 
JEL:  C1 C12 C5 
Date:  2013–11–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:51570&r=ecm 
By:  Eleanor Sanderson; Frank Windmeijer (Institute for Fiscal Studies and University of Bristol) 
Abstract:  We consider testing for weak instruments in a model with multiple endogenous variables. Unlike Stock and Yogo (2005), who considered a weak instruments problem where the rank of the matrix of reduced form parameters is near zero, here we consider a weak instruments problem of a near rank reduction of one in the matrix of reduced form parameters. For example, in a two-variable model, we consider weak instrument asymptotics of the form pi1 = delta pi2 + c / √n, where pi1 and pi2 are the parameters in the two reduced-form equations, c is a vector of constants and n is the sample size. We investigate the use of a conditional first-stage F-statistic along the lines of the proposal by Angrist and Pischke (2009) and show that, unless delta = 0, the variance in the denominator of their F-statistic needs to be adjusted in order to get a correct asymptotic distribution when testing the hypothesis H0: pi1 = delta pi2. We show that a corrected conditional F-statistic is equivalent to the Cragg and Donald (1993) minimum eigenvalue rank test statistic, and is informative about the maximum total relative bias of the 2SLS estimator and the size distortions of the Wald tests. When delta = 0 in the two-variable model, or when there are more than two endogenous variables, further information over and above the Cragg-Donald statistic can be obtained about the nature of the weak instrument problem by computing the conditional first-stage F-statistics. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:58/13&r=ecm 
By:  Oellerer, Viktoria; Croux, Christophe; Alfons, Andreas 
Abstract:  To perform regression analysis in high dimensions, lasso and ridge estimation are common choices. However, it has been shown that these methods are not robust to outliers. Therefore, alternatives such as penalized M-estimation and the sparse least trimmed squares (LTS) estimator have been proposed. The robustness of these regression methods can be measured with the influence function, which quantifies the effect of infinitesimal perturbations in the data. Furthermore, it can be used to compute the asymptotic variance and the mean squared error. In this paper we compute the influence function, the asymptotic variance and the mean squared error for penalized M-estimators and the sparse LTS estimator. The asymptotic biasedness of the estimators makes the calculations nonstandard. We show that only M-estimators whose loss function has a bounded derivative are robust against regression outliers. In particular, the lasso has an unbounded influence function. 
Keywords:  Influence function; Lasso; Least trimmed squares; Penalized M-regression; Sparseness; 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/425563&r=ecm 
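The abstract's claim that the lasso has an unbounded influence function can be illustrated numerically. Below is my own toy sketch (simplified to the univariate case, where the lasso has a closed-form soft-thresholding solution; it is not the authors' influence-function calculation): a single gross outlier shifts the estimate in proportion to its size.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_1d(x, y, lam):
    """Univariate lasso: argmin_b (1/2n)||y - x*b||^2 + lam*|b| (closed form)."""
    n = len(y)
    return soft_threshold(x @ y / n, lam) / (x @ x / n)

x = np.arange(1.0, 11.0)     # fixed design, 10 points
y_clean = 2.0 * x            # true slope 2, no noise
y_dirty = y_clean.copy()
y_dirty[0] += 100.0          # one gross vertical outlier

b_clean = lasso_1d(x, y_clean, lam=0.5)
b_dirty = lasso_1d(x, y_dirty, lam=0.5)
# b_dirty - b_clean grows linearly with the outlier's magnitude:
# the effect of a single contaminated point is unbounded.
```

Making the outlier arbitrarily large moves the estimate arbitrarily far, which is the unboundedness the paper formalizes via the influence function.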
By:  Anton Skrobotov (Gaidar Institute for Economic Policy) 
Abstract:  In this paper we propose tests based on GLS-detrending for testing the null hypothesis of deterministic seasonality. Unlike existing tests for deterministic seasonality, our tests do not suffer from asymptotic size distortions under near integration. We also investigate the behavior of the proposed tests when the initial condition is not asymptotically negligible. 
Keywords:  Stationarity tests, KPSS test, seasonality, seasonal unit roots, deterministic seasonality, size distortion, GLS-detrending. 
JEL:  C12 C22 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:gai:wpaper:0073&r=ecm 
By:  Oellerer, Viktoria; Alfons, Andreas; Croux, Christophe 
Abstract:  To perform multiple regression, the least squares estimator is commonly used. However, this estimator is not robust to outliers. Therefore, robust methods such as MM-estimation have been proposed. These estimators flag any observation with a large residual as an outlier and downweight it in the further procedure. This is also the case if the large residual is caused by only one component of the observation, which results in a loss of information. Therefore, we propose the shooting S-estimator, a regression estimator that is especially designed for situations where a large number of observations suffer from contamination in a small number of predictor variables. The shooting S-estimator combines the ideas of the coordinate descent algorithm with simple S-regression, which makes it robust against componentwise contamination. 
Keywords:  Cellwise outliers; Componentwise contamination; Shooting algorithm; Coordinate descent algorithm; Regression S-estimation; 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/425555&r=ecm 
By:  Peter M Robinson; Francesca Rossi 
Abstract:  For testing lack of correlation against spatial autoregressive alternatives, Lagrange multiplier tests enjoy their usual computational advantages, but the first-order chi-squared asymptotic approximation to critical values can be poor in small samples. We develop refined tests for lack of spatial error correlation in regressions, based on Edgeworth expansion. In Monte Carlo simulations these tests, and bootstrap ones, generally significantly outperform the chi-squared-based tests. 
Keywords:  Spatial autocorrelation, Lagrange multiplier test, Edgeworth expansion, bootstrap, finitesample corrections. 
JEL:  C29 
Date:  2013–10 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/566&r=ecm 
By:  Wilms, Ines; Croux, Christophe 
Abstract:  Canonical correlation analysis (CCA) describes the associations between two sets of variables by maximizing the correlation between linear combinations of the variables in each data set. However, in high-dimensional settings where the number of variables exceeds the sample size, or when the variables are highly correlated, traditional CCA is no longer appropriate. This paper proposes a method for sparse CCA. Sparse estimation produces linear combinations of only a subset of variables from each data set, thereby increasing the interpretability of the canonical variates. We consider the CCA problem from a predictive point of view and recast it into a multivariate regression framework. By combining a multivariate alternating regression approach with a lasso penalty, we induce sparsity in the canonical vectors. We compare the performance with other sparse CCA techniques in different simulation settings and illustrate its usefulness on a genomic data set. 
Keywords:  Canonical correlation analysis; Genomic data; Lasso; Multivariate regression; Sparsity; 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/425573&r=ecm 
By:  Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Iván FernándezVal (Institute for Fiscal Studies and Boston University); Christian Hansen (Institute for Fiscal Studies and Chicago GSB) 
Abstract:  We consider estimation of policy relevant treatment effects in a data-rich environment where there may be many more control variables available than there are observations. In addition to allowing many control variables, the setting we consider allows heterogeneous treatment effects, endogenous receipt of treatment, and function-valued outcomes. To make informative inference possible, we assume that reduced form predictive relationships are approximately sparse. That is, we require that the relationship between the covariates and the outcome, treatment status, and instrument status can be captured up to a small approximation error using a small number of controls whose identities are unknown to the researcher. This condition allows estimation and inference for a wide variety of treatment parameters to proceed after selection of an appropriate set of control variables, formed by selecting controls separately for each reduced form relationship and then appropriately combining this set of reduced form predictive models and associated selected controls. We provide conditions under which post-selection inference is uniformly valid across a wide range of models, and show that a key condition underlying the uniform validity of post-selection inference allowing for imperfect model selection is the use of approximately unbiased estimating equations. We illustrate the use of the proposed treatment effect estimation methods with an application to estimating the effect of 401(k) participation on accumulated assets. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:57/13&r=ecm 
By:  Chorus, Caspar 
Abstract:  This paper presents, discusses and tests a generalized Random Regret Minimization (GRRM) model. The GRRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model by a regret-weight variable. Depending on the value of the regret-weights, the GRRM model generates predictions that equal those of, respectively, the canonical linear-in-parameters Random Utility Maximization (RUM) model, the conventional Random Regret Minimization (RRM) model, and hybrid RUM-RRM specifications. When the regret-weight variable is written as a binary logit function, the GRRM model can be estimated on choice data using conventional software packages. As an empirical proof of concept, the GRRM model is estimated on a stated route choice dataset, and its outcomes are compared with RUM and RRM counterparts. 
Keywords:  Random Utility Maximization; Random Regret Minimization; Choice model; Unified approach; Generalized Random Regret Minimization 
JEL:  C5 M30 R41 
Date:  2013–11–21 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:51637&r=ecm 
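The construction described in the abstract — replacing the fixed constant in the attribute-level regret function by a regret-weight — can be sketched directly. The rendering below is my own (parameter names are assumed, and the full model sums such terms over attributes and competing alternatives): with regret-weight gamma = 0 the term reduces to a linear RUM-style utility difference, and with gamma = 1 it is the conventional RRM regret.

```python
import numpy as np

def grrm_attribute_regret(x_i, x_j, beta, gamma):
    """Generalized attribute-level regret of chosen alternative i against
    competitor j: ln(gamma + exp(beta * (x_j - x_i)))."""
    return np.log(gamma + np.exp(beta * (x_j - x_i)))

# gamma = 0: ln(exp(z)) = z, i.e. the linear-in-parameters RUM-style term
rum_term = grrm_attribute_regret(1.0, 3.0, beta=0.5, gamma=0.0)
# gamma = 1: the conventional RRM regret ln(1 + exp(z))
rrm_term = grrm_attribute_regret(1.0, 3.0, beta=0.5, gamma=1.0)
```

Intermediate values of gamma (or gamma written as a logit function of covariates, as in the abstract) interpolate between the two behavioral rules.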
By:  Kyungsub Lee 
Abstract:  We study the probabilistic and statistical properties of the variation-based realized third and fourth moments of financial returns. The realized moments of the return are unbiased and relatively efficient estimators of the actual moments of the return distribution under a martingale condition on the return process. For the estimation of a stochastic volatility model, we employ a simple method of estimation and a generalized method of moments estimation based on the realized second and third moments. The conditional thin-tail property of the return distribution given the quadratic variation of the return is discussed. We explain the structure of moment variation swaps and analyze the thin-tail property of the portfolio return hedged by the third moment variation swap. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1311.5036&r=ecm 
By:  Antonio Lijoi (Department of Economics and Management, University of Pavia and Collegio Carlo Alberto); Bernardo Nipoti (University of Turin and Collegio Carlo Alberto) 
Abstract:  Mixture models for hazard rate functions are widely used tools for the statistical analysis of survival data subject to a censoring mechanism. The present paper introduces a new class of vectors of random hazard rate functions that are expressed as kernel mixtures of dependent completely random measures. This leads to the definition of dependent nonparametric prior processes that are suitably tailored to drawing inferences in the presence of heterogeneous observations. Besides its flexibility, an important appealing feature of our proposal is its analytical tractability: we are, indeed, able to determine some relevant distributional properties and a posterior characterization that is also the key to devising an efficient MCMC sampler. For illustrative purposes, we specialize our general results to a class of dependent extended gamma processes. We finally display a few numerical examples, including both simulated and real two-sample datasets: these allow us to identify the effect of a borrowing-strength phenomenon and provide evidence of the effectiveness of the prior in dealing with datasets for which the proportional hazards assumption does not hold true. 
Keywords:  Bayesian nonparametrics; Completely random measures; Dependent processes; Extended gamma processes; Partial exchangeability. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0059&r=ecm 
By:  Ando, Michihito (Uppsala Center for Fiscal Studies) 
Abstract:  In a Regression Kink (RK) design with a finite sample, a confounding smooth nonlinear relationship between the assignment variable and the outcome variable around the threshold can be spuriously picked up as a kink and result in a biased estimate. In order to investigate how well RK designs handle such confounding nonlinearity, I first implement Monte Carlo simulations and then study the effect of fiscal equalization grants on local expenditure using an RK design. The results suggest that RK estimation with a confounding nonlinearity often suffers from bias or imprecision, and that estimates are credible only when relevant covariates are controlled for. 
Keywords:  Regression Kink Design; Endogenous regressors; Intergovernmental grants; Flypaper effect 
JEL:  C13 C21 H71 H72 H77 
Date:  2013–11–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:uufswp:2013_015&r=ecm 
By:  Harding, Matthew (Stanford University); Lamarche, Carlos (University of Kentucky) 
Abstract:  This paper proposes new ℓ1-penalized quantile regression estimators for panel data, which explicitly allow for individual heterogeneity associated with covariates. We conduct Monte Carlo simulations to assess the small sample performance of the new estimators and provide comparisons of new and existing penalized estimators in terms of quadratic loss. We apply the techniques to two empirical studies. First, the new method is applied to the estimation of labor supply elasticities, and we find evidence that positive substitution effects dominate negative wealth effects at the middle of the conditional distribution of hours. The overall effect tends to be larger at the lower tail, which suggests that changes in taxes have different effects across the response distribution. Second, we estimate consumer preferences for nutrients from a demand model using a large scanner dataset of household food purchases. We show that preferences for nutrients vary across the conditional distribution of expenditure and across genders, and emphasize the importance of fully capturing consumer heterogeneity in demand modeling. Both applications highlight the importance of estimating individual heterogeneity when designing economic policy. 
Keywords:  shrinkage, panel data, quantile regression, labor supply, scanner data 
JEL:  C21 C23 J22 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp7741&r=ecm 
By:  Geon Ho Choe; Kyungsub Lee 
Abstract:  In an asset return series there is a conditional asymmetric dependence between the current return and past volatility, depending on the current return's sign. To take this conditional asymmetry into account, we introduce new models for asset return dynamics in which the frequencies of the up and down movements of the asset price have conditionally independent Poisson distributions with stochastic intensities. The intensities are assumed to follow stochastic recurrence equations of the GARCH type in order to capture volatility clustering and the leverage effect. We provide an important linkage between our model and existing GARCH models, explain how maximum likelihood estimation can be applied to determine the parameters of the intensity model, and show empirical results for the S&P 500 index return series. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1311.4977&r=ecm 
By:  Christoph Aistleitner; Markus Hofer; Robert Tichy 
Abstract:  We consider the problem of estimating $\mathbb{E}[f(U^1, \ldots, U^d)]$, where $(U^1, \ldots, U^d)$ denotes a random vector with uniformly distributed marginals. In general, Latin hypercube sampling (LHS) is a powerful tool for solving this kind of high-dimensional numerical integration problem. In the case of dependent components of the random vector $(U^1, \ldots, U^d)$, one can achieve more accurate results by using Latin hypercube sampling with dependence (LHSD). We state a central limit theorem for the $d$-dimensional LHSD estimator, thereby generalising a result of Packham and Schmidt. Furthermore, we give conditions on the function $f$ and the distribution of $(U^1, \ldots, U^d)$ under which a reduction of variance can be achieved. Finally, we compare the effectiveness of Monte Carlo and LHSD estimators numerically in exotic basket option pricing problems. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1311.4698&r=ecm 
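As background to this abstract, here is a brief sketch (mine, not the authors') of plain Latin hypercube sampling for the independent-marginals case. The LHSD variant studied in the paper additionally couples the ranks to reproduce the dependence structure, which is omitted here:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0,1]^d with each margin stratified into n equal bins,
    exactly one point per bin (independent permutations across dimensions)."""
    u = np.empty((n, d))
    for j in range(d):
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

def lhs_estimate(f, n, d, seed=0):
    """LHS estimator of E[f(U^1, ..., U^d)] for independent uniform marginals."""
    rng = np.random.default_rng(seed)
    return float(np.mean(f(latin_hypercube(n, d, rng))))

# E[U1 * U2] = 1/4 for independent uniforms; stratification of each margin
# reduces the estimator's variance relative to plain Monte Carlo.
est = lhs_estimate(lambda u: u[:, 0] * u[:, 1], n=10_000, d=2)
```

The variance-reduction conditions on $f$ and on the dependence structure are exactly what the abstract's central limit theorem addresses.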
By:  Andreas Ortman (School of Economics, Australian School of Business, the University of New South Wales); Le Zhang (School of Banking and Finance, Australian School of Business, the University of New South Wales) 
Abstract:  Null Hypothesis Significance Testing has been widely used in the experimental economics literature. Typically, attention is restricted to type I errors. We demonstrate that not taking type II errors into account is problematic. We also provide evidence, for one prominent area in experimental economics (dictator game experiments), that most studies are severely underpowered, suggesting that their findings are questionable. We then illustrate with several examples how poor (or no) power planning can lead to questionable results. 
Keywords:  Null Hypothesis Significance Testing, Type I errors, Type II errors, Significance level, Statistical power 
JEL:  A10 B23 C12 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:201332&r=ecm 
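To make the type II error point concrete, a standard back-of-the-envelope power calculation for a two-sided two-sample comparison of means looks as follows. This is my own sketch using the normal approximation, not the authors' power analysis; `effect` is the mean difference and `sigma` the common standard deviation:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(effect, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a mean
    difference `effect` with n_per_group observations per arm."""
    z_crit = 1.959963984540054            # Phi^{-1}(1 - alpha/2) for alpha = 0.05
    se = sigma * sqrt(2.0 / n_per_group)  # s.e. of the difference in means
    z = effect / se
    return normal_cdf(z - z_crit) + normal_cdf(-z - z_crit)

def required_n(effect, sigma, target=0.8, alpha=0.05):
    """Smallest per-group n reaching the target power."""
    n = 2
    while two_sample_power(effect, sigma, n, alpha) < target:
        n += 1
    return n

# A medium effect of 0.5 s.d. needs roughly 63 subjects per group for 80% power;
# many dictator-game cells are far smaller, which is the underpowering at issue.
n_needed = required_n(effect=0.5, sigma=1.0)
```

Running the planning step before data collection, rather than relying solely on the 5% significance threshold, is precisely the practice the abstract advocates.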
By:  R\'emy Chicheportiche; Anirban Chakraborti 
Abstract:  We review ideas on temporal dependences and recurrences in discrete time series from several areas of the natural and social sciences. We revisit existing studies and redefine the relevant observables in the language of copulas (joint laws of the ranks). We propose that copulas provide an appropriate mathematical framework to study nonlinear time dependences and related concepts, like aftershocks, the Omori law, recurrences, and waiting times. Using this global approach, we also argue critically that previous phenomenological attempts involving only a long-ranged autocorrelation function lacked complexity in that they were essentially monoscale. 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1311.5101&r=ecm 
By:  Alexander Chudik; Kamiar Mohaddes; M. Hashem Pesaran; Mehdi Raissi 
Abstract:  This paper investigates the long-run effects of public debt and inflation on economic growth. Our contribution is both theoretical and empirical. On the theoretical side, we develop a cross-sectionally augmented distributed lag (CSDL) approach to the estimation of long-run effects in dynamic heterogeneous panel data models with cross-sectionally dependent errors. The relative merits of the CSDL approach and other existing approaches in the literature are discussed and illustrated with small sample evidence obtained by means of Monte Carlo simulations. On the empirical side, using data on a sample of 40 countries over the 1965-2010 period, we find significant negative long-run effects of public debt and inflation on growth. Our results indicate that, if the debt to GDP ratio is raised and this increase turns out to be permanent, then it will have negative effects on economic growth in the long run. But if the increase is temporary, then there are no long-run growth effects so long as debt to GDP is brought back to its normal level. We do not find a universally applicable threshold effect in the relationship between public debt and growth. We only find statistically significant threshold effects in the case of countries with rising debt to GDP ratios. 
Keywords:  Long-run relationships, estimation and inference, large dynamic heterogeneous panels, cross-section dependence, debt, inflation and growth, debt overhang. 
JEL:  C23 E62 F34 H6 
Date:  2013–11–21 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1350&r=ecm 