nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒02‒05
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A Dirichlet Process Mixture Model of Discrete Choice By Rico Krueger; Akshay Vij; Taha H. Rashidi
  2. ARDL model as a remedy for spurious regression: problems, performance and prospectus By Ghouse, Ghulam; Khan, Saud Ahmed; Rehman, Atiq Ur
  3. Measuring financial interdependence in asset returns with an application to euro zone equities By Renée Fry-McKibbin; Cody Yu-Ling Hsiao; Vance L. Martin
  4. Predicting crypto-currencies using sparse non-Gaussian state space models By Christian Hotz-Behofsits; Florian Huber; Thomas O. Zörner
  5. Outliers and misleading leverage effect in asymmetric GARCH-type models By M. Angeles Carnero Fernández; Ana Pérez Espartero
  6. Bayesian Analysis of Realized Matrix-Exponential GARCH Models By Manabu Asai; Michael McAleer
  7. Estimation and Inference in Functional-Coefficient Spatial Autoregressive Panel Data Models with Fixed Effects By Sun, Yiguo; Malikov, Emir
  8. Structural Scenario Analysis with SVARs By Antolin-Diaz, Juan; Petrella, Ivan; Rubio-Ramírez, Juan Francisco
  9. Nonparametric Estimation and Inference for Panel Data Models By Christopher F. Parmeter; Jeffrey S. Racine
  10. Testing for Common Breaks in a Multiple Equations System By Tatsushi Oka; Pierre Perron
  11. Improving Forecast Accuracy of Financial Vulnerability: Partial Least Squares Factor Model Approach By Hyeongwoo Kim; Kyunghwan Ko
  12. Exact Likelihood Estimation and Probabilistic Forecasting in Higher-order INAR(p) Models By Lu, Yang
  13. Forecaster’s utility and forecasts coherence By Emilio Zanetti Chini
  14. Simple Tests for Social Interaction Models with Network Structures By Dogan, Osman; Taspinar, Suleyman; Bera, Anil K.
  15. Markov Switching Panel with Network Interaction Effects By Komla Mawulom Agudze; Monica Billio; Roberto Casarin; Francesco Ravazzolo
  16. Critically assessing estimated DSGE models: A case study of a multi-sector model By X. Liu; A.R. Pagan; T. Robinson
  17. Synthetic Control and Inference By Ruoyao Shi; Jinyong Hahn
  18. An Averaging GMM Estimator Robust to Misspecification By Ruoyao Shi; Zhipeng Liao

  1. By: Rico Krueger; Akshay Vij; Taha H. Rashidi
    Abstract: We present a mixed multinomial logit (MNL) model, which leverages the truncated stick-breaking process representation of the Dirichlet process as a flexible nonparametric mixing distribution. The proposed model is a Dirichlet process mixture model and accommodates discrete representations of heterogeneity, like a latent class MNL model. Yet, unlike a latent class MNL model, the proposed discrete choice model does not require the analyst to fix the number of mixture components prior to estimation, as the complexity of the discrete mixing distribution is inferred from the evidence. For posterior inference in the proposed Dirichlet process mixture model of discrete choice, we derive an expectation maximisation algorithm. In a simulation study, we demonstrate that the proposed model framework can flexibly capture differently-shaped taste parameter distributions. Furthermore, we empirically validate the model framework in a case study on motorists' route choice preferences and find that the proposed Dirichlet process mixture model of discrete choice outperforms a latent class MNL model and mixed MNL models with common parametric mixing distributions in terms of both in-sample fit and out-of-sample predictive ability. Compared to extant modelling approaches, the proposed discrete choice model substantially abbreviates specification searches, as it relies on less restrictive parametric assumptions and does not require the analyst to specify the complexity of the discrete mixing distribution prior to estimation.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.06296&r=ecm
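    The truncated stick-breaking representation that the abstract leverages can be sketched as follows (an illustrative sketch of the standard construction, not the authors' code; the truncation level K and concentration parameter alpha are choices of the analyst):

```python
import numpy as np

def truncated_stick_breaking(alpha, K, rng):
    """Draw mixture weights from a truncated stick-breaking process.

    alpha: DP concentration parameter; K: truncation level.
    The weights are pi_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha),
    and v_K is set to 1 so the truncated weights sum exactly to one.
    """
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0  # truncation: the last stick takes all remaining mass
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
w = truncated_stick_breaking(alpha=2.0, K=25, rng=rng)
print(w.sum())  # sums to 1 by construction
```

    A smaller alpha concentrates mass on the first few sticks, which is how the effective number of mixture components is inferred from the data rather than fixed in advance.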
  2. By: Ghouse, Ghulam; Khan, Saud Ahmed; Rehman, Atiq Ur
    Abstract: Spurious regression has played a vital role in the construction of contemporary time series econometrics and has motivated many of the tools employed in applied macroeconomics. Conventional econometrics has limitations in the treatment of spurious regression in non-stationary time series. While reviewing the well-established study of Granger and Newbold (1974), we realized that the experiments conducted in that paper lacked lag dynamics, thus leading to spurious regression. As a consequence of that paper, unit root and cointegration analysis have become the standard ways in conventional econometrics to circumvent spurious regression. These procedures are, however, equally capricious because of specification decisions such as the choice of the deterministic part, structural breaks, autoregressive lag length and the distribution of the innovation process. This study explores an alternative treatment for spurious regression. We conclude that missing variables (lagged values) are the major cause of spurious regression; an alternative way to look at the problem therefore takes us back to the missing variables, which in turn leads to the ARDL model. The study relies mainly on Monte Carlo simulations. The results provide justification that the ARDL model can be used as an alternative tool to avoid the spurious regression problem.
    Keywords: Spurious regression, misspecification, Stationarity, unit root, cointegration and ARDL
    JEL: B41 C4 C5 C53
    Date: 2018–01–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83973&r=ecm
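    The paper's point can be reproduced in a minimal Monte Carlo sketch (assumed parameter choices; not the authors' code): a static regression of one random walk on an independent one rejects the zero-coefficient null far too often, while an ARDL(1,1) regression that restores the missing lags brings the test back towards nominal size.

```python
import numpy as np

def tstat_last(y, X):
    """OLS t-statistic on the last column of X (an intercept is prepended)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = resid @ resid / dof
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(1)
T, reps, crit = 200, 500, 1.96
static_rej = ardl_rej = 0
for _ in range(reps):
    x = np.cumsum(rng.standard_normal(T))  # two independent random walks
    y = np.cumsum(rng.standard_normal(T))
    # static regression y_t on x_t: spurious rejections
    if abs(tstat_last(y[1:], x[1:, None])) > crit:
        static_rej += 1
    # ARDL(1,1): y_t on y_{t-1}, x_{t-1}, x_t; t-test on x_t
    X = np.column_stack([y[:-1], x[:-1], x[1:]])
    if abs(tstat_last(y[1:], X)) > crit:
        ardl_rej += 1
print(static_rej / reps, ardl_rej / reps)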
  3. By: Renée Fry-McKibbin; Cody Yu-Ling Hsiao; Vance L. Martin
    Abstract: A general procedure is proposed to identify changes in asset return interdependence over time using entropy theory. The approach provides a decomposition of interdependence in terms of comoments including coskewness, cokurtosis and covolatility as well as more traditional measures based on second order moments such as correlations. A new diagnostic test of independence is also developed which incorporates these higher order comoments. The properties of the entropy interdependence measure are demonstrated using a number of simulation experiments, as well as applying the methodology to euro zone equity markets over the period 1990 to 2017.
    Keywords: Entropy theory, generalized exponential family, higher order comoment decomposition, independence testing
    JEL: C12 F30
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2018-05&r=ecm
  4. By: Christian Hotz-Behofsits; Florian Huber; Thomas O. Zörner
    Abstract: In this paper we forecast daily returns of crypto-currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non-normality of the measurement errors and sharply increasing trends, we develop a time-varying parameter VAR with t-distributed measurement errors and stochastic volatility. To control for overparameterization, we rely on the Bayesian literature on shrinkage priors that enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data we perform a real-time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we moreover run a simple trading exercise.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.06373&r=ecm
  5. By: M. Angeles Carnero Fernández (Universidad de Alicante); Ana Pérez Espartero (Dpto. Economía Aplicada)
    Abstract: This paper illustrates how outliers can affect both the estimation and testing of the leverage effect by focusing on the TGARCH model. Three estimation methods are compared through Monte Carlo experiments: Gaussian Quasi-Maximum Likelihood, Quasi-Maximum Likelihood based on the Student's t likelihood and the Least Absolute Deviation method. The empirical behavior of the t-ratio and the Likelihood Ratio tests for the significance of the leverage parameter is also analyzed. Our results highlight the unreliability of Gaussian Quasi-Maximum Likelihood methods in the presence of outliers. In particular, we show that one isolated outlier can hide a true leverage effect, whereas two consecutive outliers bias the estimated leverage coefficient in a direction that crucially depends on the sign of the first outlier and could lead to wrongly rejecting the null of no leverage effect or to estimating asymmetries of the wrong sign. By contrast, we highlight the good performance of the robust estimators in the presence of an isolated outlier. However, when there are patches of outliers, our findings suggest that the sizes and powers of the tests, as well as the estimated parameters based on robust methods, may still be distorted in some cases. We illustrate these results with two series of daily returns, namely the Spanish IGBM Consumer Goods index and natural gas futures contracts.
    Keywords: Conditional heteroscedasticity, QMLE, Robust estimators, TGARCH, AVGARCH
    JEL: C22 G10 Q40
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:ivi:wpasad:2018-01&r=ecm
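    A minimal simulation sketch of a TGARCH process with leverage illustrates the mechanism the abstract discusses. This uses one common parameterisation, in which the conditional standard deviation responds asymmetrically to the sign of the lagged shock; the paper's exact specification and parameter values may differ, and the outlier-injection device is purely illustrative:

```python
import numpy as np

def tgarch_simulate(omega, a_pos, a_neg, beta, T, rng, outlier=None):
    """Simulate returns from a TGARCH model for the conditional std. deviation:
    sigma_t = omega + a_pos*max(eps_{t-1},0) - a_neg*min(eps_{t-1},0)
              + beta*sigma_{t-1}.
    Leverage effect: a_neg > a_pos, so negative shocks raise volatility more.
    `outlier`, if given, is a (time, value) pair injected into the returns."""
    sigma = np.empty(T)
    eps = np.empty(T)
    # start near the unconditional level (E|eps_t| = sigma_t * sqrt(2/pi))
    sigma[0] = omega / (1.0 - 0.5 * (a_pos + a_neg) * np.sqrt(2 / np.pi) - beta)
    for t in range(T):
        eps[t] = sigma[t] * rng.standard_normal()
        if outlier is not None and t == outlier[0]:
            eps[t] = outlier[1]
        if t + 1 < T:
            sigma[t + 1] = (omega + a_pos * max(eps[t], 0.0)
                            - a_neg * min(eps[t], 0.0) + beta * sigma[t])
    return eps, sigma

rng = np.random.default_rng(0)
eps, sigma = tgarch_simulate(0.05, 0.05, 0.15, 0.8, 1000, rng)
```

    Injecting a single large positive versus negative outlier at the same date and comparing next-period volatility makes the asymmetry, and hence the channel through which outliers can distort the estimated leverage parameter, directly visible.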
  6. By: Manabu Asai (Soka University, Japan); Michael McAleer (Asia University, Taiwan; University of Sydney Business School, Australia; Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands; Complutense University of Madrid, Spain; Yokohama National University, Japan)
    Abstract: The paper develops a new realized matrix-exponential GARCH (MEGARCH) model, which uses the information of returns and realized measure of co-volatility matrix simultaneously. The paper also considers an alternative multivariate asymmetric function to develop news impact curves. We consider Bayesian MCMC estimation to allow non-normal posterior distributions. For three US financial assets, we compare the realized MEGARCH models with existing multivariate GARCH class models. The empirical results indicate that the realized MEGARCH models outperform the other models regarding in-sample and out-of-sample performance. The news impact curves based on the posterior densities provide reasonable results.
    JEL: C11 C32
    Date: 2018–01–17
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180005&r=ecm
  7. By: Sun, Yiguo; Malikov, Emir
    Abstract: This paper develops an innovative way of estimating a functional-coefficient spatial autoregressive panel data model with unobserved individual effects which can accommodate (multiple) time-invariant regressors in the model with a large number of cross-sectional units and a fixed number of time periods. The methodology we propose removes unobserved fixed effects from the model by transforming the latter into a semiparametric additive model, the estimation of which however does not require the use of backfitting or marginal integration techniques. We derive the consistency and asymptotic normality results for the proposed kernel and sieve estimators. We also construct a consistent nonparametric test to test for spatial endogeneity in the data. A small Monte Carlo study shows that our proposed estimators and the test statistic exhibit good finite-sample performance.
    Keywords: First Difference, Fixed Effects, Hypothesis Testing, Local Linear Regression, Nonparametric GMM, Sieve Estimator, Spatial Autoregressive, Varying Coefficient
    JEL: C12 C13 C14 C23
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83671&r=ecm
  8. By: Antolin-Diaz, Juan; Petrella, Ivan; Rubio-Ramírez, Juan Francisco
    Abstract: In the context of vector autoregressions, conditional forecasts are typically constructed by specifying the future path of one or more variables while remaining silent about the structural shocks that might have caused the path. However, in many cases, researchers may be interested in identifying a structural vector autoregression and choosing which structural shock is driving the path of the conditioning variables. This would allow researchers to create a ''structural scenario'' that can be given an economic interpretation. In this paper we show how to construct structural scenarios and develop efficient algorithms to implement our methods. We show how structural scenario analysis can lead to results that are very different from, but complementary to, those of the traditional conditional forecasting exercises. We also propose an approach to assess and compare the plausibility of alternative scenarios. We illustrate our methods by applying them to two examples: comparing alternative monetary policy options and stress testing the reaction of bank profitability to an economic recession.
    Keywords: Bayesian methods; Conditional forecasts; probability distribution; SVARs
    JEL: C32 C53 E47
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:12579&r=ecm
  9. By: Christopher F. Parmeter; Jeffrey S. Racine
    Abstract: This chapter surveys nonparametric methods for estimation and inference in a panel data setting. Methods surveyed include profile likelihood, kernel smoothers, as well as series and sieve estimators. The practical application of nonparametric panel-based techniques is less prevalent than, say, that of nonparametric density and regression techniques. It is our hope that the material covered in this chapter will prove useful and facilitate the adoption of these methods by practitioners.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2018-02&r=ecm
  10. By: Tatsushi Oka; Pierre Perron
    Abstract: The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations. Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size whether within or across equations. The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results about the rate of convergence of the estimates when searching over all possible partitions subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size, allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite sample properties. We also provide an application to issues related to level shifts and persistence for various measures of inflation to illustrate its usefulness.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1606.00092&r=ecm
  11. By: Hyeongwoo Kim (Department of Economics, Auburn University); Kyunghwan Ko (Economic Research Team, Jeju Branch, The Bank of Korea)
    Abstract: We present a factor augmented forecasting model for assessing financial vulnerability in Korea. Dynamic factor models often extract latent common factors from a large panel of time series data via the method of principal components (PC). Instead, we employ the partial least squares (PLS) method, which estimates target-specific common factors by utilizing covariances between the predictors and the target variable. Applying PLS to 198 monthly macroeconomic time series variables and the Bank of Korea's Financial Stress Index (KFSTI), our PLS factor augmented forecasting models consistently outperformed the random walk benchmark model in out-of-sample prediction exercises at all forecast horizons we considered. Our models also outperformed the autoregressive benchmark model at short forecast horizons. We expect that our models will provide useful early warning signs of the emergence of systemic risks in Korea's financial markets.
    Keywords: Partial least squares, Principal component analysis, Financial stress index, Out-of-sample forecast, RRMSPE, DMW statistics
    JEL: C38 C53 E44 E47 G01 G17
    Date: 2017–05–02
    URL: http://d.repec.org/n?u=RePEc:bok:wpaper:1714&r=ecm
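    The contrast with principal components can be illustrated with the first PLS factor, whose weights are proportional to the covariances between the predictors and the target. This is an illustrative one-component sketch on simulated data (all names and parameters are assumptions, not the authors' implementation):

```python
import numpy as np

def first_pls_factor(X, y):
    """Extract the first PLS factor: weights proportional to Cov(X, y),
    so the factor is tailored to the forecast target (unlike PCA, which
    ignores the target entirely)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc           # covariance-based weights
    w /= np.linalg.norm(w)
    return Xc @ w           # target-specific common factor

# simulated panel: 30 predictors all loading on one latent factor
rng = np.random.default_rng(2)
f_true = rng.standard_normal(120)
X = np.outer(f_true, rng.standard_normal(30)) + rng.standard_normal((120, 30))
y = f_true + 0.5 * rng.standard_normal(120)
factor = first_pls_factor(X, y)
print(abs(np.corrcoef(factor, f_true)[0, 1]))  # typically close to 1
```

    Subsequent PLS factors would be extracted the same way after deflating X, which is where a full implementation departs from this sketch.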
  12. By: Lu, Yang
    Abstract: The computation of the likelihood function and of the term structure of probabilistic forecasts in higher-order INAR(p) models is generally regarded as numerically intractable, and the literature has considered various approximations. Using the notion of compound autoregressive process, we propose an exact and fast algorithm for both quantities. We find that existing approximation schemes induce significant errors for forecasting.
    Keywords: compound autoregressive process, probabilistic forecast of counts, matrix arithmetic.
    JEL: C22 C25
    Date: 2018–01–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83682&r=ecm
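    For context, an INAR(p) count process is built from binomial thinning of its own lags plus a count-valued innovation. A minimal simulation sketch (with assumed Poisson innovations and illustrative parameter values; the paper concerns exact likelihood evaluation, which this does not attempt):

```python
import numpy as np

def simulate_inar(alphas, lam, T, rng):
    """Simulate an INAR(p) count series
    X_t = sum_i alpha_i o X_{t-i} + eps_t,
    where 'o' is binomial thinning (a o X ~ Binomial(X, a))
    and the innovations eps_t are Poisson(lam)."""
    p = len(alphas)
    x = np.zeros(T + p, dtype=int)
    for t in range(p, T + p):
        survivors = sum(rng.binomial(x[t - i - 1], a)
                        for i, a in enumerate(alphas))
        x[t] = survivors + rng.poisson(lam)
    return x[p:]

rng = np.random.default_rng(3)
series = simulate_inar([0.4, 0.2], lam=1.5, T=300, rng=rng)  # an INAR(2) path
```

    Because each X_t mixes binomial survivals from p lags with a new count innovation, the likelihood involves a p-fold convolution at every date, which is the computational burden the paper's compound-autoregressive algorithm addresses exactly.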
  13. By: Emilio Zanetti Chini (Department of Economics and Management, University of Pavia)
    Abstract: I provide a general frequentist framework to elicit the forecaster's expected utility, based on a Lagrange Multiplier-type test for the null of locality of the scoring rules associated with the probabilistic forecast. These are assumed to be observed transition variables in a nonlinear autoregressive model in order to ease the statistical inference. A simulation study reveals that the test behaves consistently with the requirements of the theoretical literature. The locality of the scoring rule is fundamental for setting dating algorithms to measure and forecast the probability of recession in the US business cycle. An investigation of the Bank of Norway's forecasts on output growth leads us to conclude that forecasts are often suboptimal with respect to some simplistic benchmark if the forecaster's reward is not properly evaluated.
    Keywords: Business Cycle, Evaluation, Locality Testing, Nonlinear Time Series, Predictive Density, Scoring Rules, Scoring Structures.
    JEL: C12 C22 C44 C53
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0145&r=ecm
  14. By: Dogan, Osman; Taspinar, Suleyman; Bera, Anil K.
    Abstract: We consider an extended spatial autoregressive model that can incorporate possible endogenous interactions, exogenous interactions, unobserved group fixed effects and correlation of unobservables. In the generalized method of moments (GMM) and the maximum likelihood (ML) frameworks, we introduce simple gradient based tests that can be used to test the presence of endogenous effects, the correlation of unobservables and the contextual effects. We show the asymptotic distributions of tests, and formulate robust tests that have central chi-square distributions under both the null and local misspecification. The proposed tests are easy to compute and only require the estimates from a transformed linear regression model. We carry out an extensive Monte Carlo study to investigate the size and power properties of the proposed tests. Our results show that the proposed tests have good finite sample properties and are useful for testing the presence of endogenous effects, correlation of unobservables and contextual effects in a social interaction model.
    Keywords: Social interactions, Endogenous effects, Spatial dependence, GMM inference, LM tests, Robust LM test, Local misspecification.
    JEL: C13 C21 C31
    Date: 2017–08–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:82828&r=ecm
  15. By: Komla Mawulom Agudze; Monica Billio; Roberto Casarin; Francesco Ravazzolo
    Abstract: The paper introduces a new dynamic panel model for large data sets of time series, each of them characterized by a series-specific Markov switching process. By introducing a neighbourhood system based on a network structure, the model accounts for local and global interactions among the switching processes. We develop an efficient Markov Chain Monte Carlo (MCMC) algorithm for the posterior approximation based on the Metropolis adjusted Langevin sampling method. We study the efficiency and convergence of the proposed MCMC algorithm through several simulation experiments. In the empirical application, we deal with US state coincident indices, produced by the Federal Reserve Bank of Philadelphia, and find evidence that local interactions of state-level cycles through geographic and economic networks play a substantial role in the common movements of US regional business cycles.
    Keywords: Bayesian inference, interacting Markov chains, Metropolis adjusted Langevin, panel Markov-switching.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0059&r=ecm
  16. By: X. Liu; A.R. Pagan; T. Robinson
    Abstract: We describe methods for assessing estimated Dynamic Stochastic General Equilibrium (DSGE) models. One involves the computation of alternative impulse responses from models constrained to have an identical likelihood and the same contemporaneous signs as responses in the DSGE model. Others ask how well the model matches the data generating process; whether there is weak identification; the consequences of including measurement error with growth rates of non-stationary variables; and whether the model can reproduce features of the data that involve combinations of moments. The methods are applied to a large-scale small-open economy DSGE model, typical of those used at policy institutions.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2018-04&r=ecm
  17. By: Ruoyao Shi (Department of Economics, University of California Riverside); Jinyong Hahn (UCLA Economics)
    Abstract: We examine properties of permutation tests in the context of synthetic control. Permutation tests are frequently used methods of inference for synthetic control when the number of potential control units is small. We analyze the permutation tests from a repeated sampling perspective and show that the size of permutation tests may be distorted. Several alternative methods are discussed.
    Keywords: synthetic control, permutation test, symmetry
    JEL: C12
    Date: 2016–10
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:201802&r=ecm
  18. By: Ruoyao Shi (Department of Economics, University of California Riverside); Zhipeng Liao (UCLA Economics)
    Abstract: This paper studies the averaging GMM estimator that combines a conservative GMM estimator based on valid moment conditions and an aggressive GMM estimator based on both valid and possibly misspecified moment conditions, where the weight is the sample analog of an infeasible optimal weight. We establish asymptotic theory on uniform approximation of the upper and lower bounds of the finite-sample truncated risk difference between any two estimators, which is used to compare the averaging GMM estimator and the conservative GMM estimator. Under some sufficient conditions, we show that the asymptotic lower bound of the truncated risk difference between the averaging estimator and the conservative estimator is strictly less than zero, while the asymptotic upper bound is zero uniformly over any degree of misspecification. Extending seminal results on the James-Stein estimator, this uniform dominance is established in non-Gaussian semiparametric nonlinear models. The simulation results support our theoretical findings.
    Keywords: asymptotic risk, finite-sample risk, generalized shrinkage estimator, GMM, misspecification, model averaging, non-standard estimator, uniform approximation
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:201803&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.