nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒02‒29
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Unified quasi-maximum likelihood estimation theory for stable and unstable Markov bilinear processes By Aknouche, Abdelhakim
  2. Numerical implementation of the QuEST function By Olivier Ledoit; Michael Wolf
  3. Structural Gravity with Dummies Only By Egger, Peter; Nigai, Sergey
  4. The lasso for high-dimensional regression with a possible change-point By Sokbae (Simon) Lee; Myung Hwan Seo; Youngki Shin
  5. The W Matrix in Network and Spatial Econometrics: Issues Relating to Specification and Estimation By Luisa Corrado; Bernard Fingleton
  6. Speeding Up MCMC by Efficient Data Subsampling By Quiroz, Matias; Villani, Mattias; Kohn, Robert
  7. The Davies Problem: A New Test for Random Slope in the Hierarchical Linear Model By van Oest, R.D.; Franses, Ph.H.B.F.
  8. The specification of dynamic discrete-time two-state panel data models By Tue Gørgens; Dean Hyslop
  9. Bayesian inference in generalized true random-effects model and Gibbs sampling By Makieła, Kamil
  10. A New Class of Tests for Overidentifying Restrictions in Moment Condition Models By Wang, Xuexin
  11. Models of Financial Return With Time-Varying Zero Probability By Sucarrat, Genaro; Grønneberg, Steffen
  12. A Note on “Continuous Invertibility and Stable QML Estimation of the EGARCH(1,1) Model” By Francisco Blasques; Paolo Gorgi; Siem Jan Koopman; Olivier Wintenberger
  13. Invariant tests based on M-estimators, estimating functions, and the generalized method of moments By Jean-Marie Dufour; Alain Trognon; Purevdorj Tuvaandorj
  14. Beyond dimension two: A test for higher-order tail risk By Bormann, Carsten; Schaumburg, Julia; Schienle, Melanie
  15. Estimation of NAIRU with Inflation Expectation Data By Wei Cui; Wolfgang K. Härdle; Weining Wang;
  16. Likelihood Evaluation of High-Dimensional Spatial Latent Gaussian Models with Non-Gaussian Response Variables By Jean-François Richard
  17. Kernel density estimation based on Ripley’s correction By Arthur Charpentier; Ewen Gallic
  18. Likelihood Inference for Exponential-Trawl Processes By Neil Shephard; Justin Yang; Mark Podolskij; Robert Stelzer; S Thorbjornsen
  19. Bootstrap for Value at Risk Prediction By Rjiba, Meriem; Tsagris, Michail; Mhalla, Hedi
  20. A Tractable Framework for Analyzing a Class of Nonstationary Markov Models By Lilia Maliar; Serguei Maliar; John B. Taylor; Inna Tsener

  1. By: Aknouche, Abdelhakim
    Abstract: A unified quasi-maximum likelihood (QML) estimation theory for stationary and nonstationary simple Markov bilinear (SMBL) models is proposed. Such models may be seen as generalized random coefficient autoregressions (GRCA) in which the innovation and the random coefficient processes are fully correlated. It is shown that the QML estimate (QMLE) for the SMBL model is always asymptotically Gaussian without assuming strict stationarity, meaning that there is no knife edge effect. The asymptotic variance of the QMLE is different in the stationary and nonstationary cases but is consistently estimated using the same estimator. A perhaps surprising result is that in the nonstationary domain, all SMBL parameters are consistently estimated in contrast with unstable GARCH and GRCA models where the QMLE of the conditional variance intercept is inconsistent. As a result, strict stationarity testing for the SMBL is studied. Simulation experiments and a real application to strict stationarity testing for some financial stock returns illustrate the theory in finite samples.
    Keywords: Markov bilinear process, random coefficient process, stability, instability, Quasi-maximum likelihood, knife edge effect, strict stationarity testing.
    JEL: C10 C13 C18 C19
    Date: 2015
  2. By: Olivier Ledoit; Michael Wolf
    Abstract: This paper deals with certain estimation problems involving the covariance matrix in large dimensions. Due to the breakdown of finite-dimensional asymptotic theory when the dimension is not negligible with respect to the sample size, it is necessary to resort to an alternative framework known as large-dimensional asymptotics. Recently, Ledoit and Wolf (2015) have proposed an estimator of the eigenvalues of the population covariance matrix that is consistent according to a mean-square criterion under large-dimensional asymptotics. It requires numerical inversion of a multivariate nonrandom function which they call the QuEST function. The present paper explains how to numerically implement the QuEST function in practice through a series of six successive steps. It also provides an algorithm to compute the Jacobian analytically, which is necessary for numerical inversion by a nonlinear optimizer. Monte Carlo simulations document the effectiveness of the code.
    Keywords: Large-dimensional asymptotics, numerical optimization, random matrix theory, spectrum estimation
    JEL: C13 C61 C87
    Date: 2016–01
  3. By: Egger, Peter; Nigai, Sergey
    Abstract: The measurement of trade costs and their effects on outcomes is at the heart of a large quantitative literature in international economics. The majority of the recent significant contributions on the matter assumes that trade consists of a product of exporter-time-specific factors, importer-time-specific factors, and country-pair-time-specific trade costs, and that log trade costs are additively composed of a parameterized part and a residual part. We demonstrate that residual trade costs are relatively important and that the parameters on observable trade-cost measures as well as the structural country-time-specific variables or parameters are inevitably biased, whether models are estimated by ordinary least squares or by exponential-family methods. The reason is that the country-specific variables are endogenous to the residual trade costs, regardless of whether they are captured by iteratively-solved structural terms or by country-time fixed effects. As a result, quantifications of effects of trade costs and comparative static results are also biased. Apart from diagnosing this problem, the paper provides remedies for it. All of the proposed remedies involve binary indicator variables (fixed effects) only, and they are nonlinear in both variables and parameters. We therefore dub these approaches constrained analysis of variance (CANOVA) of bilateral exports or imports. We propose saturated as well as unsaturated versions of the CANOVA approach for both cross-section and panel data. The saturated approach uses up all degrees of freedom and estimates as many parameters as there are observations on bilateral exports or imports, providing an exact decomposition of bilateral trade into trade costs and country-specific parameters. The unsaturated approaches do not use up all degrees of freedom and estimate fewer parameters than there are observations on bilateral exports or imports, providing an approximate decomposition of bilateral trade into trade costs and country-specific parameters. We demonstrate that with panel data an unsaturated model with exporter-time, importer-time and country-pair effects works quite well relative to both the saturated model and models with parameterized trade-cost functions. The conclusions of the CANOVA models regarding the importance of trade costs for trade turn out to be substantially different from the ones implied by the conventional parameterized trade-cost-function models.
    Keywords: fixed effects estimation; gravity models; panel econometrics; structural general equilibrium models
    JEL: C23 F14
    Date: 2015–02
  4. By: Sokbae (Simon) Lee (Institute for Fiscal Studies); Myung Hwan Seo (Institute for Fiscal Studies); Youngki Shin (Institute for Fiscal Studies)
    Abstract: We consider a high-dimensional regression model with a possible change-point due to a covariate threshold and develop the Lasso estimator of regression coefficients as well as the threshold parameter. Our Lasso estimator not only selects covariates but also selects a model between linear and threshold regression models. Under a sparsity assumption, we derive non-asymptotic oracle inequalities for both the prediction risk and the l1 estimation loss for regression coefficients. Since the Lasso estimator selects variables simultaneously, we show that oracle inequalities can be established without pretesting the existence of the threshold effect. Furthermore, we establish conditions under which the estimation error of the unknown threshold parameter can be bounded by a nearly n^{-1} factor even when the number of regressors can be much larger than the sample size (n). We illustrate the usefulness of our proposed estimation method via Monte Carlo simulations and an application to real data.
    Date: 2014–05
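The estimator described in this abstract can be sketched in a few lines: fit a Lasso on the augmented design [X, X·1{q > τ}] over a grid of candidate thresholds τ and keep the threshold with the smallest penalized objective. This is only an illustrative sketch, not the authors' code: the Lasso is solved by plain proximal gradient (ISTA), and the simulated data, penalty level, and threshold grid are invented for the demo.

```python
import numpy as np

def soft_threshold(z, t):
    # elementwise soft-thresholding, the proximal map of the l1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    # minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

def threshold_lasso(X, y, q, lam, taus):
    # fit a Lasso on [X, X*1{q > tau}] for each candidate tau,
    # keep the tau with the smallest penalized objective
    best_obj, best_tau, best_beta = np.inf, None, None
    for tau in taus:
        Z = np.hstack([X, X * (q[:, None] > tau)])
        b = lasso_ista(Z, y, lam)
        obj = 0.5 * np.mean((y - Z @ b) ** 2) + lam * np.abs(b).sum()
        if obj < best_obj:
            best_obj, best_tau, best_beta = obj, tau, b
    return best_tau, best_beta

rng = np.random.default_rng(0)
n, p = 400, 5
X = rng.normal(size=(n, p))
q = rng.uniform(-1, 1, n)             # threshold covariate, true change-point at 0
beta0 = np.array([1.0, 0, 0, 0, 0])   # sparse baseline coefficients
delta0 = np.array([2.0, 0, 0, 0, 0])  # sparse coefficient shift above the threshold
y = X @ beta0 + (X * (q[:, None] > 0.0)) @ delta0 + 0.1 * rng.normal(size=n)

tau_hat, beta_hat = threshold_lasso(X, y, q, lam=0.05,
                                    taus=np.linspace(-0.5, 0.5, 11))
```

With a strong shift, the grid search recovers a threshold near the true value and the Lasso picks out the two active coefficients.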
  5. By: Luisa Corrado (DEF and CEIS, Università di Roma "Tor Vergata" and University of Cambridge); Bernard Fingleton (University of Cambridge, Department of Land Economy)
    Abstract: Network and spatial econometric models commonly embody a so-called W matrix which defines the connectivity between nodes of a network. The reason for the existence of W is that it facilitates parsimonious parametrization of inter-nodal interaction which would otherwise be very difficult to achieve from a practical modelling perspective. The problem considered in this paper is the effect of misspecifying W. The paper demonstrates the effect in the context of two types of model, the dynamic spatial autoregressive panel model and the multilevel spatial autoregressive panel model, both of which include W as part of the model specification and use W in estimation. Monte Carlo results are presented showing the impact on bias and RMSE of misspecification of W. The paper highlights the need for careful attention to the correct structure of W in spatial econometric and network modelling.
    Keywords: Networks, Multilevel Modelling, Fixed Effects, Dynamic Spatial Autoregressive Panel Model, Multilevel Spatial Autoregressive Panel Model
    Date: 2016–02–12
  6. By: Quiroz, Matias (Research Department, Central Bank of Sweden); Villani, Mattias (Linköping University); Kohn, Robert (University of New South Wales)
    Abstract: We propose a generic Markov Chain Monte Carlo (MCMC) algorithm to speed up computations for datasets with many observations. A key feature of our approach is the use of the highly efficient difference estimator from the survey sampling literature to estimate the log-likelihood accurately using only a small fraction of the data. Our algorithm improves on the O(n) complexity of regular MCMC by operating over local data clusters instead of the full sample when computing the likelihood. The likelihood estimate is used in a pseudo-marginal framework to sample from a perturbed posterior which is within O(m^-1/2) of the true posterior, where m is the subsample size. The method is applied to a logistic regression model to predict firm bankruptcy for a large data set. We document a significant speed-up in comparison to the standard MCMC on the full dataset.
    Keywords: Bayesian inference; Markov Chain Monte Carlo; Pseudo-marginal MCMC; estimated likelihood; GLM for large data.
    JEL: C11 C13 C15 C83
    Date: 2015–08–01
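The core idea — estimate the full-data log-likelihood from a subsample, control its variance, and plug it into a pseudo-marginal Metropolis-Hastings step — can be illustrated in a toy setting. This is a hedged sketch, not the paper's algorithm: instead of its survey-sampling difference estimator over data clusters, the variance reduction below uses an exact second-order Taylor control variate around a fixed reference point, and the Student-t location model and all tuning constants are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, nu = 20_000, 500, 5.0          # full-data size, subsample size, t d.o.f.
data = 2.0 + rng.standard_t(nu, n)   # location model with Student-t noise

c = (nu + 1) / 2.0
def ll(mu, x):
    # per-observation log-likelihood, up to an additive constant
    return -c * np.log1p((x - mu) ** 2 / nu)

# control variate: exact sums for a second-order Taylor expansion around mu0
mu0 = np.median(data)
r0 = data - mu0
S0 = ll(mu0, data).sum()
S1 = (2 * c * r0 / (nu + r0 ** 2)).sum()                   # first derivative sum
S2 = (2 * c * (r0 ** 2 - nu) / (nu + r0 ** 2) ** 2).sum()  # second derivative sum

def loglik_hat(mu, idx):
    # difference-type estimator: exact Taylor sum + subsample-estimated remainder
    d = mu - mu0
    taylor_i = (ll(mu0, data[idx])
                + (2 * c * r0[idx] / (nu + r0[idx] ** 2)) * d
                + 0.5 * (2 * c * (r0[idx] ** 2 - nu) / (nu + r0[idx] ** 2) ** 2) * d ** 2)
    remainder = (n / m) * (ll(mu, data[idx]) - taylor_i).sum()
    return S0 + S1 * d + 0.5 * S2 * d ** 2 + remainder

# pseudo-marginal Metropolis-Hastings with a flat prior:
# the subsample is refreshed jointly with each parameter proposal
mu = mu0
ll_cur = loglik_hat(mu, rng.integers(0, n, m))
draws = []
for _ in range(4_000):
    mu_prop = mu + 0.01 * rng.standard_normal()
    ll_prop = loglik_hat(mu_prop, rng.integers(0, n, m))
    if np.log(rng.uniform()) < ll_prop - ll_cur:
        mu, ll_cur = mu_prop, ll_prop
    draws.append(mu)
post_mean = float(np.mean(draws[1_000:]))
```

Because the Taylor remainder is tiny near mu0, the subsampled log-likelihood estimate has negligible variance and the chain behaves almost like exact MCMC while touching only m of the n observations per iteration.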
  7. By: van Oest, R.D.; Franses, Ph.H.B.F.
    Abstract: Crucial inference for the hierarchical linear model concerns the null hypothesis of no random slope. We argue that the usually applied statistical test suffers from the so-called Davies problem, that is, a nuisance parameter is only identified under the alternative. We propose an easy-to-implement methodology that exploits this property. We provide the relevant critical values and demonstrate through simulations that our new methodology has better power properties.
    Keywords: hierarchical linear model, random effects, slope variance, Davies problem
    Date: 2015–01–05
  8. By: Tue Gørgens (Australian National University); Dean Hyslop (Motu Economic and Public Policy Research)
    Abstract: This paper examines dynamic binary response and multi-spell duration model approaches to analyzing longitudinal discrete-time binary outcomes. Prototypical dynamic binary response models specify low-order Markovian state dependence and restrict the effects of observed and unobserved heterogeneity on the probability of transitioning into and out of a state to have the same magnitude and opposite signs. In contrast, multi-spell duration models typically allow for state-specific duration dependence, and allow the probability of entry into and exit from a state to vary flexibly. We show that both of these approaches are special cases within a general framework. We compare specific dynamic binary response and multi-spell duration models empirically using a case study of poverty transitions. In this example, both the specification of state dependence and the restrictions on the state-specific transition probabilities imposed by the simpler dynamic binary response models are severely rejected against the more flexible multi-spell duration models. Consistent with recent literature, we conclude that the standard dynamic binary response model is unacceptably restrictive in this context.
    Keywords: Panel data, transition data, binary response, duration analysis, event history analysis, initial conditions, random effects.
    JEL: C33 C35 C41 C51
    Date: 2016–02
  9. By: Makieła, Kamil
    Abstract: The paper investigates a Bayesian approach to estimating the generalized true random-effects model (GTRE) via Gibbs sampling. Simulation results show that, under properly defined priors for the transient and persistent inefficiency components, the posterior characteristics of the GTRE model are well approximated using a simple Gibbs sampling procedure. No model reparametrization is required, and if one is made it leads to much lower numerical efficiency. The new model allows us to make more reasonable assumptions about the prior inefficiency distribution and appears more reliable, especially in handling nuisance datasets. An empirical application furthers the research into stochastic frontier analysis using the GTRE by examining the relationship between inefficiency terms in the GTRE, true random-effects (TRE), generalized stochastic frontier and standard stochastic frontier models.
    Keywords: generalized true random-effects model, stochastic frontier analysis, Bayesian inference, cost efficiency, firm heterogeneity, transient and persistent efficiency
    JEL: C11 C23 C51 D24
    Date: 2016–01–19
  10. By: Wang, Xuexin
    Abstract: In this paper, we propose a new class of tests for overidentifying restrictions in moment condition models. The tests in this new class are quite easy to compute: they avoid the complicated saddle point problem in generalized empirical likelihood (GEL) estimation, and only a √n-consistent estimator, where n is the sample size, is needed. In addition to discussing their first-order properties, we establish that under some regularity conditions these tests share the same higher-order properties as GEL overidentifying tests, given proper consistent estimators. A Monte Carlo simulation study shows that the new class of tests of overidentifying restrictions has better finite-sample performance than the two-step GMM overidentification test, and compares well to several potential alternatives in terms of overall performance.
    Keywords: Generalized Empirical Likelihood (GEL); Tests for Overidentifying Restrictions; C(α)-Type Tests; Higher-Order Equivalence
    JEL: C12 C20
    Date: 2016–01–24
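For reference, the two-step GMM overidentification (Hansen J) test that this paper benchmarks against takes only a few lines for a linear moment condition model. The instrument-regression design below is purely illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2_000
z = rng.normal(size=(n, 3))                      # 3 instruments for 1 parameter
x = z @ np.array([1.0, 0.5, 0.25]) + rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)                 # all moments E[z_i (y_i - b x_i)] = 0 hold

def two_step_gmm_j(y, x, z):
    # sample moments: gbar(b) = a - b*c with a = mean(z*y), c = mean(z*x)
    a = (z * y[:, None]).mean(0)
    c = (z * x[:, None]).mean(0)
    b1 = (c @ a) / (c @ c)                       # step 1: identity weight matrix
    g = z * (y - b1 * x)[:, None]                # moment contributions at b1
    W = np.linalg.inv(g.T @ g / len(y))          # efficient weight matrix
    b2 = (c @ W @ a) / (c @ W @ c)               # step 2: efficient GMM estimate
    gbar = a - b2 * c
    return b2, len(y) * gbar @ W @ gbar          # J ~ chi2(2) under valid moments

b_hat, j_stat = two_step_gmm_j(y, x, z)
```

With valid moments the J statistic stays well below the chi-squared(2) critical values, and the two-step estimate is close to the true coefficient of 2.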
  11. By: Sucarrat, Genaro; Grønneberg, Steffen
    Abstract: The probability of an observed financial return being equal to zero is not necessarily zero. This can be due to price discreteness or rounding error, liquidity issues (e.g. low trading volume), market closures, data issues (e.g. data imputation due to missing values), characteristics specific to the market, and so on. Moreover, the zero probability may change and depend on market conditions. In standard models of return volatility, however, e.g. ARCH, SV and continuous time models, the zero probability is zero, constant or both. We propose a new class of models that allows for a time-varying zero probability, and which can be combined with standard models of return volatility: They are nested and obtained as special cases when the zero probability is constant and equal to zero. Another attraction is that the return properties of the new class (e.g. volatility, skewness, kurtosis, Value-at-Risk, Expected Shortfall) are obtained as functions of the underlying volatility model. The new class allows for autoregressive conditional dynamics in both the zero probability and volatility specifications, and for additional covariates. Simulations show parameter and risk estimates are biased if zeros are not appropriately handled, and an application illustrates that risk-estimates can be substantially biased in practice if the time-varying zero probability is not accommodated.
    Keywords: Financial return, volatility, zero-inflated return, GARCH, log-GARCH, ACL
    JEL: C01 C22 C32 C51 C52 C58
    Date: 2016–01–17
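The bias the authors warn about is easy to reproduce in a stripped-down static setting: if zeros arrive with probability p, the naive sample volatility estimates σ√(1-p) rather than σ. The numbers below are invented for illustration; the paper's models are dynamic and far richer than this static sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, p_zero = 50_000, 1.5, 0.2
# zero-inflated returns: with probability p_zero the observed return is exactly 0
r = rng.normal(0.0, sigma, n) * (rng.uniform(size=n) >= p_zero)

sd_naive = r.std()                 # ignores zeros: biased towards sigma*sqrt(1 - p)
p_hat = (r == 0.0).mean()          # zeros are exactly identifiable in this toy setup
sd_adjusted = sd_naive / np.sqrt(1.0 - p_hat)
```

The naive estimate sits near 1.34 (= 1.5·√0.8) while the zero-adjusted estimate recovers the true σ = 1.5, mirroring the paper's point that risk estimates are biased if zeros are not handled.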
  12. By: Francisco Blasques (VU University Amsterdam, the Netherlands); Paolo Gorgi (VU University Amsterdam, the Netherlands, University of Padua, Italy); Siem Jan Koopman (VU University Amsterdam, the Netherlands, Aarhus University, Denmark); Olivier Wintenberger (University of Copenhagen, Denmark, Sorbonne Universités, UPMC University Paris, France)
    Abstract: We revisit Wintenberger (2013) on the continuous invertibility of the EGARCH(1,1) model. We note that the definition of continuous invertibility adopted in Wintenberger (2013) may not always be sufficient to deliver strong consistency of the QMLE. We also take the opportunity to provide other small clarifications and additions.
    Keywords: invertibility, quasi-maximum likelihood estimator, volatility models
    JEL: C01 C22 C51
    Date: 2015–12–11
  13. By: Jean-Marie Dufour; Alain Trognon; Purevdorj Tuvaandorj
    Abstract: We study the invariance properties of various test criteria which have been proposed for hypothesis testing in the context of incompletely specified models, such as models which are formulated in terms of estimating functions (Godambe, 1960, Ann. Math. Stat.) or moment conditions and are estimated by generalized method of moments (GMM) procedures (Hansen, 1982, Econometrica), and models estimated by pseudo-likelihood (Gouriéroux, Monfort and Trognon, 1984, Econometrica) and M-estimation methods. The invariance properties considered include invariance to (possibly nonlinear) hypothesis reformulations and reparameterizations. The test statistics examined include Wald-type, LR-type, LM-type, score-type, and C(α)-type criteria. Extending the approach used in Dagenais and Dufour (1991, Econometrica), we show first that all these test statistics except the Wald-type ones are invariant to equivalent hypothesis reformulations (under usual regularity conditions), but that none of the five is generally invariant to model reparameterizations, including measurement unit changes in nonlinear models. In other words, testing two equivalent hypotheses in the context of equivalent models may lead to completely different inferences. For example, this may occur after an apparently innocuous rescaling of some model variables. Then, in view of avoiding such undesirable properties, we study restrictions that can be imposed on the objective functions used for pseudo-likelihood (or M-estimation) as well as the structure of the test criteria used with estimating functions and GMM procedures to obtain invariant tests. In particular, we show that using linear exponential pseudo-likelihood functions allows one to obtain invariant score-type and C(α)-type test criteria, while in the context of estimating function (or GMM) procedures it is possible to modify a LR-type statistic proposed by Newey and West (1987, Int. Econ. Rev.) to obtain a test statistic that is invariant to general reparameterizations. The invariance associated with linear exponential pseudo-likelihood functions is interpreted as a strong argument for using such pseudo-likelihood functions in empirical work.
    Keywords: Testing, invariance, hypothesis reformulation, reparameterization, measurement unit, estimating function, generalized method of moments (GMM), pseudo-likelihood, M-estimator, linear exponential model, nonlinear model, Wald test, likelihood ratio test, score test, Lagrange multiplier test, C(α) test
    JEL: C3 C12
    Date: 2015–06–24
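The paper's headline point about Wald-type statistics — that algebraically equivalent null hypotheses can yield different test values — can be checked numerically in the classic textbook example: H0: θ = 1 and the equivalent H0: θ³ = 1 produce different Wald statistics for the same data. The simulated sample below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(1.3, 1.0, 50)              # estimate a mean theta by x-bar
theta = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))

w1 = ((theta - 1.0) / se) ** 2            # Wald statistic for H0: theta = 1
# delta method: se of theta^3 is |3*theta^2| * se
w2 = ((theta ** 3 - 1.0) / (3 * theta ** 2 * se)) ** 2  # Wald for H0: theta^3 = 1
```

The two statistics differ by the factor ((θ² + θ + 1)/(3θ²))², which equals one only at θ = 1, so the same data can reject one formulation of the null and not the other.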
  14. By: Bormann, Carsten; Schaumburg, Julia; Schienle, Melanie
    Abstract: In practice, multivariate dependencies between extreme risks are often only assessed in a pairwise way. We propose a test for detecting situations when such pairwise measures are inadequate and give incomplete results. This occurs when a significant portion of the multivariate dependence structure in the tails is of higher dimension than two. Our test statistic is based on a decomposition of the stable tail dependence function describing multivariate tail dependence. The asymptotic properties of the test are provided and a bootstrap based finite sample version of the test is proposed. A simulation study documents good size and power properties of the test including settings with time-series components and factor models. In an application to stock indices for non-crisis times, pairwise tail models seem appropriate for global markets while the test finds them not admissible for the tightly interconnected European market. From 2007/08 on, however, higher order dependencies generally increase and require a multivariate tail model in all cases.
    Keywords: decomposition of multivariate tail dependence, multivariate extreme values, stable tail dependence function, extreme dependence modeling
    JEL: C01 C46 C58
    Date: 2016
  15. By: Wei Cui; Wolfgang K. Härdle; Weining Wang;
    Abstract: Estimating the natural rate of unemployment (NAIRU) is important for understanding the joint dynamics of unemployment, inflation, and inflation expectation. However, the existing literature falls short in endogenizing inflation expectation together with the NAIRU in a model-consistent way. We develop and estimate a structural model with forward- and backward-looking Phillips curves. Inflation expectation is treated as a function of state variables, and we use survey data as its observations. We find that the estimated NAIRU using our methodology tracks the unemployment process closely, except for the high-inflation period around 1970. Moreover, the estimated Bayesian credible sets are narrower, and our model leads to better inflation and unemployment forecasts. These results suggest that monetary policy was very effective during the sample periods and there was not much room for policy improvement.
    Keywords: NAIRU; Inflation Expectation; Targeting
    JEL: C32 E23 E24
    Date: 2015–02
  16. By: Jean-François Richard
    Abstract: We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of Efficient Importance Sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus Maximum Likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
    Date: 2015–01
  17. By: Arthur Charpentier (Université du Québec à Montréal - UQAM (CANADA), CREM - Centre de Recherche en Economie et Management - UR1 - Université de Rennes 1 - Université de Caen Basse-Normandie - CNRS - Centre National de la Recherche Scientifique); Ewen Gallic (CREM - Centre de Recherche en Economie et Management - UR1 - Université de Rennes 1 - Université de Caen Basse-Normandie - CNRS - Centre National de la Recherche Scientifique)
    Abstract: In this paper, we investigate a technique inspired by Ripley’s circumference method to correct the bias of density estimation at the edges (or frontiers) of regions. The original method was theoretical and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to efficiently compute weights that correct border bias on frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of this technique to visualize hot spots of car accidents and campsite locations, as well as locations of bike thefts.
    Keywords: visualization,spatial process,GIS,Kernel density estimation,polygons,Ripley’s circumference method,Border bias,edge correction,frontier
    Date: 2015
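The border-bias problem and its Gaussian-kernel correction can be sketched in one dimension: near the edge of the support, part of each kernel's mass leaks outside the region, so the raw estimate is biased downward; reweighting each kernel by the share of its mass inside the region (in the spirit of the edge corrections discussed above) restores a proper density. The uniform data and bandwidth below are invented for illustration.

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def gaussian_kde(grid, data, h, support=None):
    # Gaussian KDE on `grid`; if support = (a, b) is given, each kernel is
    # reweighted by the fraction of its mass inside [a, b] (edge correction)
    est = np.zeros_like(grid, dtype=float)
    for xi in data:
        k = np.exp(-0.5 * ((grid - xi) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
        if support is not None:
            a, b = support
            k /= phi_cdf((b - xi) / h) - phi_cdf((a - xi) / h)
        est += k
    return est / len(data)

def trapezoid(fvals, xs):
    # simple trapezoid-rule integral of tabulated values
    return float(((fvals[1:] + fvals[:-1]) * 0.5 * np.diff(xs)).sum())

rng = np.random.default_rng(2)
data = rng.uniform(0.0, 1.0, 500)     # true density: uniform on [0, 1]
grid = np.linspace(0.0, 1.0, 201)

f_naive = gaussian_kde(grid, data, h=0.1)
f_corr = gaussian_kde(grid, data, h=0.1, support=(0.0, 1.0))
mass_naive = trapezoid(f_naive, grid)  # < 1: mass leaks outside the edges
mass_corr = trapezoid(f_corr, grid)    # ~ 1 by construction
```

The uncorrected estimate integrates to roughly 0.92 on [0, 1] with this bandwidth, while the corrected estimate integrates to one, which is the essence of the border-bias correction.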
  18. By: Neil Shephard; Justin Yang; Mark Podolskij; Robert Stelzer; S Thorbjornsen
  19. By: Rjiba, Meriem; Tsagris, Michail; Mhalla, Hedi
    Abstract: We evaluate the predictive performance of a variety of value-at-risk (VaR) models for a portfolio consisting of five assets. Traditional VaR models, such as historical simulation with bootstrap and filtered historical simulation methods, are considered. We suggest a new method for estimating Value at Risk: the filtered historical simulation GJR-GARCH method based on bootstrapping the standardized GJR-GARCH residuals. The predictive performance is evaluated in terms of three criteria, namely the tests of unconditional coverage, independence, and conditional coverage, and in terms of a quadratic loss function. The results show that classical methods are inefficient under moderate departures from normality and that the new method produces the most accurate forecasts of extreme losses.
    Keywords: Value at Risk, bootstrap, GARCH
    JEL: C15 G17
    Date: 2015
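The filtered historical simulation idea — filter returns with a volatility model, bootstrap the standardized residuals, and rescale them by the one-step-ahead volatility forecast — can be sketched as follows. This is a hedged illustration only: it uses a plain GARCH(1,1) filter with fixed, made-up parameters on simulated returns, whereas the paper estimates a GJR-GARCH on real portfolio data.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate a toy return series with volatility clustering (GARCH(1,1))
T = 2_000
omega, alpha, beta = 0.05, 0.08, 0.90
r = np.empty(T)
s2 = omega / (1.0 - alpha - beta)     # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

def fhs_var(returns, level=0.01, n_boot=10_000, params=(0.05, 0.08, 0.90)):
    # filtered historical simulation VaR: run the GARCH variance recursion,
    # bootstrap the standardized residuals, rescale by the one-step forecast
    w, a, b = params
    s2 = np.empty(len(returns))
    s2[0] = returns.var()
    for t in range(1, len(returns)):
        s2[t] = w + a * returns[t - 1] ** 2 + b * s2[t - 1]
    z = returns / np.sqrt(s2)                          # standardized residuals
    s2_next = w + a * returns[-1] ** 2 + b * s2[-1]    # one-step-ahead variance
    boot = np.sqrt(s2_next) * rng.choice(z, size=n_boot, replace=True)
    return -np.quantile(boot, level)                   # VaR reported as a positive loss

var99 = float(fhs_var(r))
```

In practice the filter parameters would be estimated (by QML, as in the paper) rather than fixed; the bootstrap step is what lets the tail of the residual distribution, rather than a normality assumption, drive the VaR forecast.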
  20. By: Lilia Maliar; Serguei Maliar; John B. Taylor (Stanford University); Inna Tsener
    Abstract: We study a class of infinite-horizon nonlinear dynamic economic models in which preferences, technology and laws of motion for exogenous variables can change over time either deterministically or stochastically, according to a Markov process with time-varying transition probabilities, or both. The studied models are nonstationary in the sense that the decision and value functions are time-dependent, and they cannot be generally solved by conventional solution methods. We introduce a quantitative framework, called extended function path (EFP), for calibrating, solving, simulating and estimating such models. We apply EFP to analyze a collection of challenging applications that do not admit stationary Markov equilibria, including growth models with anticipated parameter shifts and drifts, unbalanced growth under capital-augmenting technological progress, anticipated regime switches, deterministically time-varying volatility and seasonal fluctuations. Also, we show an example of estimation and calibration of parameters in an unbalanced growth model using data on the U.S. economy. Examples of MATLAB code are provided.
    Keywords: nonstationary models, unbalanced growth, time-varying transition probabilities, time-varying parameters, anticipated shock, shooting method, parameter shift, parameter drift, regime switch, stochastic volatility, capital-augmenting technological progress, seasonality, Fair and Taylor, extended path, Smolyak method
    JEL: C61 C63 C68 E31 E52
    Date: 2015–03

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.