nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒01‒22
Nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Parsimonious modeling with information filtering networks By Wolfram Barfuss; Guido Previde Massara; T. Di Matteo; Tomaso Aste
  2. A Simple R-Estimation Method for Semiparametric Duration Models By Marc Hallin; Davide La Vecchia
  3. The Impact of Integrated Measurement Errors on Modelling Long-run Macroeconomic Time Series By James Duffy; David Hendry
  4. Nonparametric Identification of a Time-Varying Frailty Model By Effraimidis, Georgios
  5. Three-stage estimation method for non-linear multiple time-series By Dominique Guegan; Giovanni De Luca; Giorgia Rivieccio
  6. Testing for breaks in the weighting matrix By Ana Angulo; Peter Burridge; Jesús Mur
  7. Alternative GMM estimators for spatial regression models By Jörg Breitung; Christoph Wigger
  8. Regression Discontinuity Design with Continuous Measurement Error in the Running Variable By Davezies, Laurent; Le Barbanchon, Thomas
  9. Inference in Partially Identified Heteroskedastic Simultaneous Equations Models By Helmut Lütkepohl; George Milunovich; Minxian Yang
  10. Measuring the Distributions of Public Inflation Perceptions and Expectations in the UK By Murasawa, Yasutomo
  11. Detecting outlying studies in meta-regression models using a forward search algorithm By Dimitris Mavridis; Irini Moustaki; Melanie Wall; Georgia Salanti
  12. Rating Transition Probability Models and CCAR Stress Testing: Methodologies and implementations By Yang, Bill Huajian; Du, Zunwei
  13. On the tail behavior of a class of multivariate conditionally heteroskedastic processes By Rasmus Pedersen; Olivier Wintenberger
  14. Can forbidden zones for the expectation explain noise influence in behavioral economics and decision sciences? By Harin, Alexander
  15. Identification and inference on regressions with missing covariate data By Esteban M. Aucejo; Federico A. Bugni; V. Joseph Hotz
  16. Measuring the output gap using stochastic model specification search By Joshua C C Chan; Angelia L Grant
  17. New insights into the stochastic ray production frontier By Arne Henningsen; Matěj Bělín; Géraldine Henningsen
  18. A Flexible Specification of Space–Time AutoRegressive Models By M. Mucciardi; E. Otranto
  19. System Priors for Econometric Time Series By Michal Andrle; Miroslav Plašil

  1. By: Wolfram Barfuss; Guido Previde Massara; T. Di Matteo; Tomaso Aste
    Abstract: We introduce a methodology to construct parsimonious probabilistic models. This method makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance from a simple sum of local inverse covariances computed on small subparts of the network. Being based on local and low-dimensional inversions, this method is computationally very efficient and statistically robust, even for the estimation of inverse covariances of high-dimensional, noisy, and short time series. Applied to financial data, our method is computationally more efficient than state-of-the-art methodologies such as Glasso, producing, in a fraction of the computation time, models with equivalent or better performance but a sparser inference structure. We also discuss performance with sparse factor models, where we notice that relative performance decreases with the number of factors. The local nature of this approach allows us to perform computations in parallel and provides a tool for dynamical adaptation by partial updating when the properties of some variables change, without the need to recompute the whole model. This makes the approach particularly suitable for handling big data sets with large numbers of variables. Examples of practical application to forecasting, stress testing, and risk allocation in financial systems are also provided. (A schematic sketch of the local-inversion idea follows this entry.)
    JEL: F3 G3
    Date: 2016–12–13
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:68860&r=ecm
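    A schematic sketch of the local-inversion idea, assuming a given clique/separator decomposition (hard-coded here for illustration; the paper obtains the decomposition from an information filtering network):

        import numpy as np

        def sparse_precision(X, cliques, separators):
            """Assemble a global sparse precision matrix from local inversions.

            X          : (T, p) data matrix
            cliques    : small, overlapping sets of variable indices
            separators : overlaps between adjacent cliques
            """
            p = X.shape[1]
            S = np.cov(X, rowvar=False)        # sample covariance
            J = np.zeros((p, p))               # global sparse precision estimate
            for c in cliques:                  # add local inverse covariances
                idx = np.ix_(c, c)
                J[idx] += np.linalg.inv(S[idx])
            for s in separators:               # subtract the double-counted overlaps
                idx = np.ix_(s, s)
                J[idx] -= np.linalg.inv(S[idx])
            return J

        # toy example: 4 variables, two 3-cliques sharing a 2-separator
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 4))
        J = sparse_precision(X, cliques=[(0, 1, 2), (1, 2, 3)], separators=[(1, 2)])
        print(J.round(2))

    Each inversion involves only a small submatrix, which is what makes the approach cheap, parallelizable, and updatable clique by clique.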
  2. By: Marc Hallin; Davide La Vecchia
    Abstract: Modeling nonnegative financial variables (e.g. durations between trades, traded volumes or asset volatilities) is central to a number of studies across financial econometrics and, despite these efforts, still poses several statistical challenges, among them the efficiency aspects of semiparametric estimation. In this paper, we concentrate on estimation problems in Autoregressive Conditional Duration (ACD) models with unspecified innovation densities. Exponential quasi-maximum likelihood estimators (QMLEs) are the usual practice in that context. The efficiency of those QMLEs (the only Fisher-consistent QMLEs) unfortunately deteriorates rapidly away from the reference exponential density, a phenomenon emphasized earlier by Drost and Werker (2003), who propose various semiparametrically efficient procedures to palliate it. Those procedures rely on a general semiparametric approach that typically requires kernel estimation of the underlying innovation density. We propose rank-based estimators (R-estimators) as a substitute. Just as the QMLE, R-estimators remain root-n consistent irrespective of the underlying density, and rely on the choice of a reference density under which they achieve semiparametric efficiency; that density, however, need not be the exponential one. Contrary to the semiparametric estimators proposed by Drost and Werker (2003), R-estimators require neither tangent space calculations nor kernel-based density estimation. Numerical results moreover indicate that R-estimators based on exponential reference densities uniformly outperform the exponential QMLE under such families of innovations as the Weibull or Burr densities. A real-data example on modeling the price range of the Swiss stock market index concludes the paper. (A sketch of the exponential QMLE baseline follows this entry.)
    Keywords: efficiency; irregularly spaced data; quasi-likelihood; ranks; local asymptotic normality
    JEL: C01 C14 C41 C50
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/243446&r=ecm
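    For context, a minimal sketch of the exponential QMLE baseline that the R-estimators are compared against, for an ACD(1,1) model (parameter values and the Weibull innovation choice are purely illustrative):

        import numpy as np
        from scipy.optimize import minimize

        def acd_psi(params, x):
            """Conditional durations: psi_t = w + a*x_{t-1} + b*psi_{t-1}."""
            w, a, b = params
            psi = np.empty_like(x)
            psi[0] = x.mean()
            for t in range(1, len(x)):
                psi[t] = w + a * x[t - 1] + b * psi[t - 1]
            return psi

        def neg_exp_qll(params, x):
            """Negative exponential quasi-log-likelihood: sum(log psi_t + x_t/psi_t)."""
            psi = acd_psi(params, x)
            return np.inf if np.any(psi <= 0) else np.sum(np.log(psi) + x / psi)

        # simulate ACD(1,1) durations with unit-mean Weibull innovations
        rng = np.random.default_rng(1)
        T, w0, a0, b0 = 5000, 0.1, 0.2, 0.7
        eps = rng.weibull(1.5, T)
        eps /= eps.mean()
        x, psi = np.empty(T), 1.0
        for t in range(T):
            x[t] = psi * eps[t]
            psi = w0 + a0 * x[t] + b0 * psi

        fit = minimize(neg_exp_qll, x0=[0.05, 0.1, 0.8], args=(x,), method="Nelder-Mead")
        print(fit.x)   # roughly (0.1, 0.2, 0.7)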
  3. By: James Duffy; David Hendry
    Abstract: Data spanning long time periods, such as those over 1860–2012 for the UK, seem likely to have substantial errors of measurement that may even be integrated of order one, but which are probably cointegrated for cognate variables. We analyze and simulate the impacts of such measurement errors on parameter estimates and tests in a bivariate cointegrated system with trends and location shifts that reflect the many major turbulent events that have occurred historically. When trends or shifts therein are large, cointegration analysis is not much affected by such measurement errors; when there are no offsetting shifts or trends, the conventional stationary attenuation biases, which depend on the measurement-error variance, apply. (A small simulation sketch follows this entry.)
    Keywords: Integrated Measurement Errors; Location Shifts; Long-run Data; Cointegration
    JEL: C51 C22
    Date: 2017–01–17
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:818&r=ecm
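    A small simulation sketch of the setting described above: a bivariate cointegrated pair observed with integrated, mutually cointegrated measurement errors (all magnitudes are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(2)
        T = 2000

        # true system: x is a random walk with drift (trend); y cointegrates with x
        x = np.cumsum(0.05 + rng.standard_normal(T))
        y = 1.0 + 0.5 * x + rng.standard_normal(T)       # cointegrating slope 0.5

        # I(1) measurement errors sharing a common stochastic trend (cointegrated)
        common = np.cumsum(0.3 * rng.standard_normal(T))
        x_obs = x + common + 0.2 * rng.standard_normal(T)
        y_obs = y + 0.5 * common + 0.2 * rng.standard_normal(T)

        # static OLS of the cointegrating regression, clean vs. mismeasured data
        print("beta, clean data      :", np.polyfit(x, y, 1)[0])
        print("beta, mismeasured data:", np.polyfit(x_obs, y_obs, 1)[0])

    With a large drift the trend dominates the measurement error and the slope estimate barely moves; shrinking the drift toward zero reproduces the attenuation the abstract describes.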
  4. By: Effraimidis, Georgios (COHERE)
    Abstract: In duration analysis, the Mixed Proportional Hazard model is the most common choice among practitioners for the specification of the underlying hazard rate. One major drawback of this model is that the value of the frailty term (i.e. unobserved factors) is time-invariant. This paper introduces a new model, the Mixed Random Hazard (MRH) model, which allows the frailty term to be time-varying. We provide sufficient conditions under which the new model is nonparametrically identified. Moreover, a theoretical framework is proposed for testing whether the true model is MRH. We conclude this paper with a discussion of how the arguments for the univariate MRH model can be extended to various multivariate problems.
    Keywords: Competing risks model; Duration analysis; Mixed random hazard; Time-varying frailty
    JEL: C14 C31 C41
    Date: 2016–07–25
    URL: http://d.repec.org/n?u=RePEc:hhs:sduhec:2016_006&r=ecm
  5. By: Dominique Guegan (Centre d'Economie de la Sorbonne); Giovanni De Luca (Parthenope University of Naples); Giorgia Rivieccio (Parthenope University of Naples)
    Abstract: We present a three-stage pseudo maximum likelihood estimation method that reduces the computational burden when a copula-based model is applied to multiple time series in high dimensions. The method applies to general stationary Markov time series, under assumptions that include a time-invariant copula as well as time-invariant marginal distributions, extending the results of Yi and Liao [2010]. We explore, via simulated and real data, the performance of the model compared to the classical vector autoregressive model, giving the implications of misspecified assumptions for the margins and/or the joint distribution and providing tail dependence measures of the economic variables involved in the analysis. (A schematic of the three-stage logic follows this entry.)
    Keywords: Copula function; Three stage estimator; Multiple time series
    JEL: C1
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:17001&r=ecm
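    A schematic of the three-stage logic in its simplest (i.i.d., bivariate) form; the paper's setting is stationary Markov time series, and the t margins and Gaussian copula here are illustrative choices:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n = 2000

        # toy data: two dependent variables with Student-t margins
        z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
        x1 = stats.t.ppf(stats.norm.cdf(z[:, 0]), df=5)
        x2 = stats.t.ppf(stats.norm.cdf(z[:, 1]), df=8)

        # stage 1: estimate each margin separately by ML
        df1, loc1, sc1 = stats.t.fit(x1)
        df2, loc2, sc2 = stats.t.fit(x2)

        # stage 2: probability-integral transform with the fitted margins
        u1 = stats.t.cdf(x1, df1, loc1, sc1)
        u2 = stats.t.cdf(x2, df2, loc2, sc2)

        # stage 3: estimate the copula parameter from the pseudo-observations
        # (Gaussian copula: correlation of the normal scores)
        rho = np.corrcoef(stats.norm.ppf(u1), stats.norm.ppf(u2))[0, 1]
        print(f"estimated copula correlation: {rho:.3f}")   # near 0.6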
  6. By: Ana Angulo (University of Zaragoza); Peter Burridge (University of York); Jesús Mur (University of Zaragoza)
    Abstract: The weighting matrix is a key element in the specification of a spatial model. Typically, this matrix is fixed a priori by the researcher, which is not always satisfactory. Theoretical justification for the chosen matrix tends to be very vague, and the selection problem is seldom reconsidered. However, several recent proposals advocate a more data-driven approach. In fact, with panel data the weighting matrix can be estimated from the data, which facilitates the development of statistical procedures for testing various hypotheses of interest. In this paper, we focus on the assumption of stability of this matrix through time, adapting a collection of covariance matrix stability tests developed in a multivariate context. The tests are compared in a Monte Carlo study; two examples illustrate the proposal.
    Keywords: Weights matrix; Estimation of W; Structural breaks; Tests of equality
    JEL: C4 C5 R1
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:zar:wpaper:dt2017-01&r=ecm
  7. By: Jörg Breitung; Christoph Wigger
    Abstract: Using approximations of the score of the log-likelihood function, we derive optimal moment conditions for estimating spatial regression models. Our approach results in computationally simple and robust estimators. The moment conditions resemble those proposed by Kelejian & Prucha (1999); hence we provide an intuitive interpretation of their estimator as a second-order approximation to the log-likelihood function. Furthermore, we propose simplified and efficient GMM estimators based on a convenient modification of the moment conditions. Heteroskedasticity-robust versions of our estimators are also provided. Finally, a first-order approximation for the spatial lag model is considered. Monte Carlo results suggest that a simple just-identified estimator, based on a quadratic moment derived from a first-order approximation of the score, performs similarly to the GMM estimator proposed by Kelejian & Prucha (2010).
    Keywords: Spatial Econometrics, Spatial error correlation, GMM-estimation
    JEL: C01 C13 C31
    Date: 2017–01–12
    URL: http://d.repec.org/n?u=RePEc:kls:series:0089&r=ecm
  8. By: Davezies, Laurent; Le Barbanchon, Thomas
    Abstract: Since the late 1990s, Regression Discontinuity (RD) designs have been widely used to estimate Local Average Treatment Effects (LATE). When the running variable is observed with continuous measurement error, identification fails. Assuming nondifferential measurement error, we propose a consistent nonparametric estimator of the LATE when the discrepancy between the true running variable and its noisy measure is observed in an auxiliary sample of treated individuals, and when there are treated individuals at any value of the true running variable (two-sided fuzzy designs). We apply our method to estimate the effect of receiving unemployment benefits. (A sketch of the naive local-linear baseline follows this entry.)
    Keywords: Measurement error; regression discontinuity design; Unemployment insurance
    JEL: C14 C21 C51
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11775&r=ecm
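    For reference, a minimal local-linear RD estimate computed on the observed running variable; this is the naive estimator whose breakdown under continuous measurement error motivates the paper (bandwidth, kernel and data are illustrative):

        import numpy as np

        def rd_estimate(r, y, cutoff=0.0, h=0.5):
            """Sharp-RD jump: difference of local-linear intercepts at the cutoff."""
            side = {}
            for key, mask in (("+", r >= cutoff), ("-", r < cutoff)):
                sel = mask & (np.abs(r - cutoff) <= h)     # rectangular kernel
                X = np.column_stack([np.ones(sel.sum()), r[sel] - cutoff])
                side[key] = np.linalg.lstsq(X, y[sel], rcond=None)[0][0]
            return side["+"] - side["-"]

        rng = np.random.default_rng(4)
        n = 5000
        r = rng.uniform(-2, 2, n)                          # true running variable
        y = 1.0 * (r >= 0) + 0.3 * r + 0.2 * rng.standard_normal(n)  # true jump = 1
        r_noisy = r + 0.3 * rng.standard_normal(n)         # continuous measurement error

        print("true running variable :", round(float(rd_estimate(r, y)), 3))
        print("noisy running variable:", round(float(rd_estimate(r_noisy, y)), 3))

    The second estimate is biased toward zero, which is exactly why the auxiliary information on the error distribution is needed.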
  9. By: Helmut Lütkepohl; George Milunovich; Minxian Yang
    Abstract: Identification through heteroskedasticity in heteroskedastic simultaneous equations models (HSEMs) is considered. The possibility that heteroskedasticity identifies the structural parameters only partially is explicitly allowed for. The asymptotic properties of the identified parameters are derived. Moreover, tests for identification through heteroskedasticity are developed and their asymptotic distributions are derived. Monte Carlo simulations are used to explore the small-sample properties of the asymptotically valid methods. Finally, the approach is applied to investigate the relation between the extent of economic openness and inflation.
    Keywords: Heteroskedasticity, simultaneous equations models, testing for identification, Davies' problem
    JEL: C30
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1632&r=ecm
  10. By: Murasawa, Yasutomo
    Abstract: The Bank of England/GfK NOP Inflation Attitudes Survey asks individuals about their inflation perceptions and expectations in eight ordered categories with known boundaries, except for an indifference limen. With enough categories for identification, one can fit a mixture distribution to such data, which can be multi-modal. Thus Bayesian analysis of a normal mixture model for interval data with an indifference limen is of interest. This paper applies the No-U-Turn Sampler (NUTS) for Bayesian computation, and estimates the distributions of public inflation perceptions and expectations in the UK during 2001Q1–2015Q4. The estimated means are useful for measuring information rigidity. (A simplified likelihood sketch follows this entry.)
    Keywords: Bayesian, Indifference limen, Information rigidity, Interval data, Normal mixture, No-U-turn sampler
    JEL: C11 C25 C46 C82 E31
    Date: 2017–01–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76244&r=ecm
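    A simplified maximum-likelihood analogue of the interval-data likelihood (the paper does full Bayesian estimation with NUTS and treats the indifference limen as an unknown boundary; here the bin edges and counts are invented and a two-component normal mixture is fitted by ML):

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        edges = np.array([-np.inf, 0, 1, 2, 3, 4, 5, np.inf])   # category boundaries (%)
        counts = np.array([30, 80, 200, 260, 190, 120, 60])     # responses per category

        def neg_loglik(theta):
            m1, s1, m2, s2, lw = theta
            if min(s1, s2) <= 0:
                return np.inf
            w = 1 / (1 + np.exp(-lw))                           # mixture weight in (0,1)
            cdf = (w * stats.norm.cdf(edges, m1, s1)
                   + (1 - w) * stats.norm.cdf(edges, m2, s2))   # mixture CDF at edges
            p = np.diff(cdf)                                    # category probabilities
            return -np.sum(counts * np.log(np.clip(p, 1e-300, None)))

        fit = minimize(neg_loglik, x0=[1.5, 1.0, 3.5, 1.0, 0.0], method="Nelder-Mead")
        m1, s1, m2, s2, lw = fit.x
        w = 1 / (1 + np.exp(-lw))
        print(f"N({m1:.2f},{s1:.2f}) w={w:.2f}  +  N({m2:.2f},{s2:.2f}) w={1 - w:.2f}")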
  11. By: Dimitris Mavridis; Irini Moustaki; Melanie Wall; Georgia Salanti
    Abstract: When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding studies to the set one by one, each chosen as the study closest to the model fitted to the existing set. As each study is added, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement, adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example illustrates the steps of the proposed methodology, and a small-scale simulation study evaluates the performance of the proposed method. (A compact sketch of the forward search loop follows this entry.)
    Keywords: backward methods; Cook’s distance; masking; meta-analysis; swamping; outliers
    JEL: C1
    Date: 2016–01–09
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:64337&r=ecm
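    A compact sketch of the forward search loop for the simplest (fixed-effect, intercept-only) meta-analysis; the distance measure and the data are illustrative stand-ins for the paper's meta-regression setting:

        import numpy as np

        def forward_search(y, v, n_start=5):
            """Order studies by entry into the forward search; track the pooled mean."""
            k = len(y)
            # start from studies with the smallest deviations from the median
            current = list(np.argsort(np.abs(y - np.median(y)))[:n_start])
            path = []
            while len(current) < k:
                w = 1 / v[current]
                mu = np.sum(w * y[current]) / np.sum(w)   # fit on the current set
                path.append(mu)
                outside = [i for i in range(k) if i not in current]
                # add the study closest to the currently fitted model
                current.append(min(outside, key=lambda i: abs(y[i] - mu) / np.sqrt(v[i])))
            return current, np.array(path)

        rng = np.random.default_rng(5)
        y = np.concatenate([rng.normal(0.3, 0.1, 24), [1.5, 1.8]])   # two planted outliers
        v = np.full(y.size, 0.01)
        order, path = forward_search(y, v)
        print("last entrants (suspect outliers):", order[-2:])
        # sharp jumps at the end of `path` flag the outliers entering the set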
  12. By: Yang, Bill Huajian; Du, Zunwei
    Abstract: Rating transition probability models, under the asymptotic single risk factor model framework, are widely used in the industry for stress testing and multi-period scenario loss projection. For a risk-rated portfolio, it is commonly believed that borrowers with higher risk ratings are more sensitive and vulnerable to adverse shocks. This means the asset correlation needs to be differentiated between ratings and fully reflected in all aspects of model fitting. In this paper, we introduce a risk component, called the credit index, representing the part of the portfolio's systematic risk explained by a list of macroeconomic variables. We show that the transition probability, conditional on a list of macroeconomic variables, can be formulated analytically using the credit index and the rating-level sensitivity with respect to this credit index. Approaches are proposed for parameter estimation by maximum likelihood on observed historical rating transition frequencies, in the presence of rating-level asset correlation. The proposed models and approaches are validated on a commercial portfolio, where we estimate the parameters for the conditional transition probability models and project the loss for the baseline, adverse and severely adverse supervisory scenarios provided by the Federal Reserve for the period 2016Q1-2018Q1. The paper explicitly demonstrates how Miu and Ozdemir's original methodology ([5]) on transition probability models can be structured and implemented with rating-specific asset correlation, and it extends Yang and Du's earlier work on this subject ([9]). We believe that the models and approaches proposed in this paper provide practitioners with an effective tool for the use of transition probability models. (The single-factor conditioning formula is sketched after this entry.)
    Keywords: CCAR stress testing, multi-period scenario, loss projection, credit index, risk sensitivity, asset correlation, transition frequency, transition probability, through-the-cycle, maximum likelihood
    JEL: C13 C5 C51 C58 G32 G38
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76270&r=ecm
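    The single-factor conditioning at the core of such models can be written in a few lines: under a Vasicek-type asymptotic single risk factor model, an unconditional transition probability p with rating-specific asset correlation rho becomes, given a credit index z (a sketch of the standard formula, not the authors' full implementation):

        from scipy.stats import norm

        def conditional_tp(p, rho, z):
            """Transition probability conditional on a standard-normal credit index z."""
            return norm.cdf((norm.ppf(p) - rho ** 0.5 * z) / (1 - rho) ** 0.5)

        # a 2% unconditional downgrade probability under an adverse index z = -2:
        for rho in (0.05, 0.15, 0.30):    # higher-risk ratings: higher correlation
            print(rho, round(conditional_tp(0.02, rho, -2.0), 4))

    Differentiating rho across ratings, as the paper argues, makes low-quality ratings visibly more sensitive to the same adverse scenario.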
  13. By: Rasmus Pedersen (University of Copenhagen, LSTA); Olivier Wintenberger (University of Copenhagen, LSTA)
    Abstract: Conditions for geometric ergodicity of multivariate ARCH processes, with the so-called BEKK parametrization, are considered. We show for a class of BEKK-ARCH processes that the invariant distribution is regularly varying. In order to account for the possibility of different tail indices of the marginals, we consider the notion of vector scaling regular variation, in the spirit of Perfekt (1997). The characterization of the tail behavior of the processes is used to derive the asymptotic distribution of the sample covariance matrices. (A univariate tail-index illustration follows this entry.)
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1701.05091&r=ecm
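    A univariate illustration of the heavy-tail phenomenon characterized in the paper (the paper's object is the multivariate BEKK process; here an ARCH(1) with Gaussian innovations, whose stationary distribution is regularly varying, and a Hill estimate of its tail index):

        import numpy as np

        rng = np.random.default_rng(6)
        T, omega, alpha = 200_000, 0.5, 0.9   # alpha near 1: heavy tails

        x = np.zeros(T)
        for t in range(1, T):                 # x_t = sqrt(omega + alpha*x_{t-1}^2)*e_t
            x[t] = np.sqrt(omega + alpha * x[t - 1] ** 2) * rng.standard_normal()

        def hill(data, k=2000):
            """Hill estimator of the tail index from the k largest |observations|."""
            top = np.sort(np.abs(data))[-(k + 1):]
            return k / np.sum(np.log(top[1:] / top[0]))

        print(f"Hill tail-index estimate: {hill(x):.2f}")   # finite, despite Gaussian shocks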
  14. By: Harin, Alexander
    Abstract: The present article is devoted to discrete random variables that take a limited number of values in finite closed intervals. I prove that if non-zero lower bounds exist for the variances of the variables, then non-zero bounds, or forbidden zones, exist for their expectations near the boundaries of the intervals. This article is motivated by the need for rigorous theoretical support for the analysis of the influence of scattering and noise on data in behavioral economics and decision sciences. (A numerical check of the bound follows this entry.)
    Keywords: probability; dispersion; variance; noise; economics; utility theory; prospect theory; behavioral economics; decision sciences
    JEL: C02 C1 D8 D81 D84
    Date: 2017–01–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76240&r=ecm
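    The bound behind the result can be checked numerically: any random variable on [a, b] satisfies Var(X) <= (E[X] - a)(b - E[X]), so Var(X) >= s2min forces E[X] to stay at least s2min/(b - a) away from each endpoint (a quick simulation over random discrete distributions on [0, 1]):

        import numpy as np

        rng = np.random.default_rng(7)
        a, b, s2min = 0.0, 1.0, 0.04
        margin = s2min / (b - a)              # claimed forbidden-zone width

        lo, hi = b, a                         # extreme means attained so far
        for _ in range(100_000):
            x = rng.uniform(a, b, 4)          # random 4-point support
            p = rng.dirichlet(np.ones(4))     # random probabilities
            m = p @ x
            if p @ (x - m) ** 2 >= s2min:     # keep only distributions with Var >= s2min
                lo, hi = min(lo, m), max(hi, m)

        print(f"means stayed in [{lo:.3f}, {hi:.3f}]; "
              f"forbidden zones: [0, {margin}] and [{1 - margin}, 1]")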
  15. By: Esteban M. Aucejo; Federico A. Bugni; V. Joseph Hotz
    Abstract: This paper examines the problem of identification and inference in a conditional moment condition model with missing data, with special focus on the case when the conditioning covariates are missing. We impose no assumption on the distribution of the missing data and confront the missing data problem using a worst-case-scenario approach. We characterize the sharp identified set and argue that this set is usually too complex to compute or to use for inference. Given this difficulty, we consider the construction of outer identified sets (i.e., supersets of the identified set) that are easier to compute and can still characterize the parameter of interest. Two different outer identification strategies are proposed. Both are shown to have non-trivial identifying power and are relatively easy to use and combine for inferential purposes. (A toy illustration of the worst-case logic follows this entry.)
    Keywords: missing data; missing covariate data; partial identification; outer identified sets; inference; confidence sets
    JEL: C01 C10 C20 C25
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:62524&r=ecm
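    A toy illustration of the worst-case logic (far simpler than the paper's conditional moment framework): bounding the mean outcome among units with a binary covariate X = 1 when X is sometimes missing, by letting the missing units enter the X = 1 group in the least and most favorable ways:

        import numpy as np

        rng = np.random.default_rng(8)
        n = 1000
        x = rng.integers(0, 2, n)                      # binary covariate
        y = 1.0 + 0.5 * x + rng.standard_normal(n)     # outcome (always observed)
        miss = rng.random(n) < 0.3                     # 30% of covariates missing

        y1 = y[(x == 1) & ~miss]                       # units known to have X = 1
        ym = np.sort(y[miss])                          # outcomes of missing-X units

        def prefix_means(base, candidates):
            """Group means when adding each prefix of the candidate outcomes."""
            s = np.cumsum(np.concatenate([[base.sum()], candidates]))
            c = np.arange(base.size, base.size + candidates.size + 1)
            return s / c

        lo = prefix_means(y1, ym).min()                # add smallest outcomes first
        hi = prefix_means(y1, ym[::-1]).max()          # add largest outcomes first
        print(f"bounds on E[Y | X = 1]: [{lo:.3f}, {hi:.3f}]")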
  16. By: Joshua C C Chan; Angelia L Grant
    Abstract: It is well known that different specification choices can give starkly different output gap estimates. To account for model uncertainty, we average estimates over a wide variety of popular specifications using stochastic model specification search. In particular, we consider three types of specification choices: the sets of variables used in the analysis, output trend specifications and distributional assumptions. Using US data, we find that the unemployment gap is useful in estimating the output gap but, conditional on the unemployment gap, the inflation gap no longer depends on the output gap. Our results show a steady decline in trend output growth throughout the sample, and the estimate at the end of our sample is only about 1%. Moreover, the data favor t-distributed over Gaussian innovations, suggesting the relatively frequent occurrence of extreme events.
    Keywords: model averaging, trend inflation, potential output, NAIRU, Okun’s law, Phillips curve
    JEL: C11 C52 E32
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2017-02&r=ecm
  17. By: Arne Henningsen (Department of Food and Resource Economics, University of Copenhagen); Matěj Bělín (Center for Economic Research and Graduate Education, Economics Institute, Czech Republic); Géraldine Henningsen (Department of Management Engineering, Technical University of Denmark)
    Abstract: The stochastic ray production frontier was developed as an alternative to the traditional output distance function for modeling production processes with multiple inputs and multiple outputs. Its main advantage over the traditional approach is that it can be used when some output quantities of some observations are zero. In this paper, we briefly discuss, and partly refute, a few existing criticisms of the stochastic ray production frontier. Furthermore, we discuss some shortcomings of the stochastic ray production frontier that have not yet been addressed in the literature and that we consider more important than the existing criticisms: taking logarithms of the polar coordinate angles, non-invariance to units of measurement, and the ordering of the outputs. We also give some practical advice on how to address the newly raised issues. (The polar-coordinate transformation is sketched after this entry.)
    Keywords: Stochastic Ray Production Frontier, Distance Function, Multiple Outputs, Primal Approach, Zero Output Quantities
    JEL: C13 C51 D22
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2017_01&r=ecm
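    The polar-coordinate device at the heart of the specification, for two outputs (a sketch; the unit-of-measurement issue raised above is visible directly, since rescaling one output changes the angles):

        import numpy as np

        def to_ray(y1, y2):
            """Two-output polar transform: norm and angle of the output vector."""
            r = np.hypot(y1, y2)            # Euclidean norm of the output vector
            theta = np.arctan2(y2, y1)      # angle in [0, pi/2] for nonnegative outputs
            return r, theta

        y1 = np.array([3.0, 5.0, 2.0])
        y2 = np.array([4.0, 0.0, 2.0])
        print(to_ray(y1, y2))               # a zero output gives theta = 0, still defined

        # non-invariance to units: measuring y2 in different units changes theta
        print(to_ray(y1, 1000 * y2)[1])     # different angles, hence a different model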
  18. By: M. Mucciardi; E. Otranto
    Abstract: The Space–Time Autoregressive (STAR) model is one of the most widely used models to represent the dynamics of a variable recorded at several locations at the same time, capturing both their temporal and spatial relationships. Its advantages are often discussed in terms of parsimony with respect to space-time VAR structures, because it considers a single coefficient for each time and spatial lag over the full time span and the full location set. This hypothesis can be very strong; the presence of groups of locations with similar dynamics makes a group-wise specification more realistic. In this work we add a degree of flexibility to the STAR model, allowing coefficients to vary across groups of locations, and propose a new class of flexible STAR models. The groups are detected by means of a clustering algorithm. The new class of models is compared to the classical STAR and the space-time VAR in simulation experiments and a practical application.
    Keywords: spatial weight matrix, space–time models, forecasting, clustering
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:201608&r=ecm
  19. By: Michal Andrle; Miroslav Plašil
    Abstract: The paper introduces “system priors”, their use in Bayesian analysis of econometric time series, and provides a simple and illustrative application. System priors were devised by Andrle and Benes (2013) as a tool to incorporate prior knowledge into an economic model. Unlike priors about individual parameters, system priors offer a simple and efficient way of formulating well-defined and economically meaningful priors about high-level model properties. The generality of system priors is illustrated using an AR(2) process with a prior that most of its dynamics comes from business-cycle frequencies. (A numerical sketch of this example follows this entry.)
    Keywords: Econometric models; vector autoregression; time series; economic theory; system priors; Bayesian analysis
    Date: 2016–11–17
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:16/231&r=ecm
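    The paper's illustrative prior can be made concrete: for an AR(2), the share of variance at business-cycle frequencies follows from the spectral density, and a system prior can be placed directly on that share (a numerical sketch; the 6-32 quarter band is a common convention, not necessarily the authors'):

        import numpy as np

        def bc_share(phi1, phi2, n=10_000):
            """Share of AR(2) variance in the business-cycle band (6-32 quarters)."""
            w = np.linspace(1e-6, np.pi, n)                      # frequency grid
            e1, e2 = np.exp(-1j * w), np.exp(-2j * w)
            spec = 1.0 / np.abs(1 - phi1 * e1 - phi2 * e2) ** 2  # spectrum, up to scale
            band = (w >= 2 * np.pi / 32) & (w <= 2 * np.pi / 6)  # periods of 6-32 quarters
            return np.trapz(spec[band], w[band]) / np.trapz(spec, w)

        print(bc_share(1.66, -0.81))   # complex roots: spectral peak near a 16-quarter cycle
        print(bc_share(0.95, 0.0))     # persistent AR(1)-like: mass at low frequencies

    A system prior then weights (phi1, phi2) pairs by how close bc_share(phi1, phi2) is to the desired high value, instead of placing independent priors on phi1 and phi2.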

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.