nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒10‒13
seven papers chosen by
Sune Karlsson
Örebro universitet

  1. Modified QML Estimation of Spatial Autoregressive Models with Unknown Heteroskedasticity and Nonnormality By Shew Fan Liu; Zhenlin Yang
  2. Finite sample properties of power-law cross-correlations estimators By Ladislav Kristoufek
  3. Stochastic Volatility Demand Systems By Apostolos Serletis; Maksim Isakin
  4. Money-Income Granger-Causality in Quantiles By Tae-Hwy Lee; Weiping Yang
  5. How did we get to where we are now? Reflections on 50 years of macroeconomic and financial econometrics By Michael Wickens
  6. Asymmetry and Leverage in Conditional Volatility Models By Michael McAleer
  7. Updating poverty estimates at frequent intervals in the absence of consumption data : methods and illustration with reference to a middle-income country By Dang, Hai-Anh H.; Lanjouw, Peter F.; Serajuddin, Umar

  1. By: Shew Fan Liu (School of Economics, Singapore Management University, Singapore, 178903); Zhenlin Yang (School of Economics, Singapore Management University, Singapore, 178903)
    Abstract: In the presence of heteroskedasticity, Lin and Lee (2010) show that the quasi maximum likelihood (QML) estimators of spatial autoregressive (SAR) models can be inconsistent, as a ‘necessary’ condition for consistency can be violated, and they propose robust GMM estimators for the model. In this paper, we first show that this condition may hold in many practical situations, and when it does the regular QML estimators can be consistent. In cases where the condition is violated, we propose a modified QML estimation method that is robust against heteroskedasticity of unknown form. In both cases, asymptotic distributions of the estimators are derived and methods for estimating robust variances are given, leading to robust inferences for the model. Extensive Monte Carlo results show that the modified QML estimator outperforms the GMM estimators, and even the regular QML estimator when the latter is consistent. The proposed robust inference methods are also easy to apply.
    Keywords: Spatial dependence; Unknown heteroskedasticity; Nonnormality; Modified QML estimator; Robust standard error
    JEL: C10 C13 C15 C21
    Date: 2014–09
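    A SAR(1) model with heteroskedastic errors, as discussed in the abstract above, can be simulated from its reduced form y = (I − λW)⁻¹(Xβ + ε). The sketch below is illustrative only, not the authors' modified QML estimator; the line-graph weight matrix, parameter values, and the variance function tied to the regressor are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
lam, beta = 0.4, 2.0

# Row-normalized nearest-neighbour weight matrix on a line graph (assumed
# structure for illustration; any row-normalized spatial W would do)
W = np.zeros((n, n))
for i in range(n):
    if i > 0:
        W[i, i - 1] = 1.0
    if i < n - 1:
        W[i, i + 1] = 1.0
W /= W.sum(axis=1, keepdims=True)

x = rng.normal(size=n)
sigma = 0.5 + np.abs(x)            # heteroskedasticity of unknown form, tied to the regressor
eps = rng.normal(scale=sigma)
# Reduced form: y = (I - lam*W)^{-1} (x*beta + eps)
y = np.linalg.solve(np.eye(n) - lam * W, beta * x + eps)
```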
  2. By: Ladislav Kristoufek
    Abstract: We study finite-sample properties of estimators of power-law cross-correlations -- detrended cross-correlation analysis (DCCA), height cross-correlation analysis (HXA) and detrending moving-average cross-correlation analysis (DMCA) -- with a special focus on short-term memory bias as well as power-law coherency. The broad Monte Carlo simulation study presented here covers different time series lengths, method-specific parameter settings, and memory strength. We find that each method is best suited to different time series dynamics, so there is no clear winner among the three. Method selection should therefore be based on the observed dynamic properties of the analyzed series.
    Date: 2014–09
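    For readers unfamiliar with these estimators, the DCCA coefficient can be sketched as follows: integrate both series into profiles, detrend them linearly in non-overlapping boxes, and form the ratio of the detrended cross-covariance to the detrended variances. This is a minimal single-scale illustration; the synthetic series, box size, and detrending order are assumptions for the example, not the paper's simulation design.

```python
import numpy as np

def dcca_fluctuations(x, y, s):
    """Detrended cross-covariance and variances at box size s (order-1 detrending)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # integrated profiles
    n_boxes = len(X) // s
    t = np.arange(s)
    f_xy = f_xx = f_yy = 0.0
    for k in range(n_boxes):
        xb = X[k * s:(k + 1) * s]
        yb = Y[k * s:(k + 1) * s]
        # residuals from a linear fit within each box
        rx = xb - np.polyval(np.polyfit(t, xb, 1), t)
        ry = yb - np.polyval(np.polyfit(t, yb, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx ** 2)
        f_yy += np.mean(ry ** 2)
    return f_xy / n_boxes, f_xx / n_boxes, f_yy / n_boxes

rng = np.random.default_rng(1)
z = rng.normal(size=1000)
x = z + rng.normal(scale=0.5, size=1000)   # two series sharing a common component
y = z + rng.normal(scale=0.5, size=1000)
f_xy, f_xx, f_yy = dcca_fluctuations(x, y, s=20)
rho = f_xy / np.sqrt(f_xx * f_yy)          # DCCA cross-correlation coefficient
```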
  3. By: Apostolos Serletis (University of Calgary); Maksim Isakin
    Abstract: We address the estimation of stochastic volatility demand systems. In particular, we relax the homoscedasticity assumption and instead assume that the covariance matrix of the errors of demand systems is time-varying. Since most economic and financial time series are nonlinear, we achieve superior modeling using parametric nonlinear demand systems in which the unconditional variance is constant but the conditional variance, like the conditional mean, is a random variable depending on current and past information. We also prove an important practical result: the maximum likelihood estimator is invariant with respect to the choice of equation eliminated from a singular demand system. An empirical application is provided, using the BEKK specification to model the conditional covariance matrix of the errors of the basic translog demand system.
    Date: 2014–09–29
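    The BEKK specification mentioned in the abstract can be illustrated with the standard BEKK(1,1) covariance recursion H_t = CC′ + A′ε_{t−1}ε′_{t−1}A + B′H_{t−1}B, which keeps H_t symmetric positive definite by construction. The 2×2 parameter matrices below are illustrative values for a sketch, not estimates from the paper.

```python
import numpy as np

C = np.array([[0.3, 0.0], [0.1, 0.2]])   # lower-triangular intercept factor
A = np.array([[0.3, 0.05], [0.05, 0.3]])
B = np.array([[0.9, 0.0], [0.0, 0.9]])

rng = np.random.default_rng(2)
T = 200
H = C @ C.T                              # start from the intercept term
eps = np.zeros(2)
paths = []
for _ in range(T):
    # BEKK(1,1): H_t = C C' + A' e e' A + B' H B
    H = C @ C.T + A.T @ np.outer(eps, eps) @ A + B.T @ H @ B
    # draw the next error vector with conditional covariance H
    eps = np.linalg.cholesky(H) @ rng.normal(size=2)
    paths.append(H.copy())
```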
  4. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Weiping Yang (University of California, Riverside)
    Abstract: The causal relationship between money and income (output) has been an important topic and has been extensively studied. However, the empirical studies are almost entirely on Granger-causality in the conditional mean. Compared to the conditional mean, conditional quantiles give a broader picture of an economy in various scenarios. In this paper, we explore whether forecasting conditional quantiles of output growth can be improved using money growth information. We compare the check loss values of quantile forecasts of output growth with and without using past information on money growth, and assess the statistical significance of the loss differentials. Using U.S. monthly series of real personal income or industrial production for income and output, and M1 or M2 for money, we find that out-of-sample quantile forecasting for output growth is significantly improved by accounting for past money growth information, particularly in the tails of the conditional distribution of output growth. On the other hand, money-income Granger-causality in the conditional mean is quite weak and unstable. These empirical findings have not been observed in the money-income literature. The new results have an important implication for monetary policy, because they imply that the effectiveness of monetary policy has been underestimated by merely testing Granger-causality in the conditional mean. Money Granger-causes income more strongly than previously known, and information on money growth therefore can (and should) be better utilized in implementing monetary policy.
    Keywords: Money-income Granger-causality, Conditional mean, Conditional quantiles, Conditional distribution
    JEL: C32 C5 E4 E5
    Date: 2014–09
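    The check loss used to compare quantile forecasts in the abstract above is the standard pinball loss ρ_τ(u) = u(τ − 1{u < 0}), minimized in expectation at the true τ-quantile. The sketch below evaluates it for two stand-in forecasts of a lower-tail quantile; the data and forecasts are synthetic and do not use the paper's money/income series.

```python
import numpy as np

def check_loss(y, q, tau):
    """Average check (pinball) loss of quantile forecasts q for outcomes y at level tau."""
    u = y - q
    return np.mean(u * (tau - (u < 0)))

rng = np.random.default_rng(3)
y = rng.normal(size=500)
tau = 0.1

# A forecast near the true 10% quantile vs. one that ignores tail information
q_good = np.full(500, np.quantile(rng.normal(size=10_000), tau))
q_bad = np.zeros(500)

loss_good = check_loss(y, q_good, tau)
loss_bad = check_loss(y, q_bad, tau)
```

Comparing `loss_good` with `loss_bad` mimics the paper's with/without-money comparison: the forecast that captures the lower tail attains the smaller average check loss.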
  5. By: Michael Wickens
    Abstract: This lecture is about how best to evaluate economic theories in macroeconomics and finance, and the lessons that can be learned from the past use and misuse of evidence. It is argued that all macro/finance models are ‘false’ and so should not be judged solely on the realism of their assumptions. The role of theory is to explain the data; models should therefore be judged by their ability to do this. Data mining will often improve the statistical properties of a model, but it does not improve economic understanding. These propositions are illustrated with examples from the last fifty years of macro and financial econometrics.
    Keywords: Theory and evidence in economics, DSGE modelling, time series modelling, asset price modelling
    JEL: B1 C1 E1 G1
    Date: 2014–09
  6. By: Michael McAleer (University of Canterbury)
    Abstract: The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1992), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as in capturing asymmetry, which is the different effect on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which is the negative correlation between return shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as about which asymmetric models are purported to be able to capture leverage, the purpose of the paper is two-fold, namely: (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; and (2) to show that leverage is not possible in these univariate conditional volatility models.
    Keywords: Conditional volatility models, random coefficient autoregressive processes, random coefficient complex nonlinear moving average process, asymmetry, leverage
    JEL: C22 C52 C58 G32
    Date: 2014–09–25
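    The GJR (threshold GARCH) model discussed above has the well-known conditional variance recursion h_t = ω + (α + γ·1{ε_{t−1} < 0})ε²_{t−1} + βh_{t−1}, in which γ > 0 makes negative shocks raise next-period volatility more than positive shocks of the same size — the asymmetry (as distinct from leverage) that the abstract emphasizes. A minimal simulation sketch, with illustrative parameter values:

```python
import numpy as np

omega, alpha, gamma, beta = 0.05, 0.05, 0.10, 0.85

rng = np.random.default_rng(4)
T = 1000
h = np.empty(T)
eps = np.empty(T)

# Start at the unconditional variance omega / (1 - alpha - gamma/2 - beta)
# (gamma/2 because the indicator is active half the time under symmetric shocks)
h[0] = omega / (1 - alpha - gamma / 2 - beta)
eps[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    # GJR recursion: extra gamma term only after a negative shock
    h[t] = omega + (alpha + gamma * (eps[t - 1] < 0)) * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
```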
  7. By: Dang, Hai-Anh H.; Lanjouw, Peter F.; Serajuddin, Umar
    Abstract: Obtaining consistent estimates on poverty over time as well as monitoring poverty trends on a timely basis is a priority concern for policy makers. However, these objectives are not readily achieved in practice when household consumption data are neither frequently collected, nor constructed using consistent and transparent criteria. This paper develops a formal framework for survey-to-survey poverty imputation in an attempt to overcome these obstacles, and to elevate the discussion of these methods beyond the largely ad-hoc efforts in the existing literature. The framework introduced here imposes few restrictive assumptions, works with simple variance formulas, provides guidance on the selection of control variables for model building, and can be generally applied to imputation either from one survey to another survey with the same design, or to another survey with a different design. Empirical results analyzing the Household Expenditure and Income Survey and the Unemployment and Employment Survey in Jordan are quite encouraging, with imputation-based poverty estimates closely tracking the direct estimates of poverty.
    Keywords: Rural Poverty Reduction, Statistical & Mathematical Sciences, Achieving Shared Growth, E-Business
    Date: 2014–09–01
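    Survey-to-survey imputation of the kind described above can be sketched in its simplest form: fit a log-consumption model on a survey that records consumption, impute consumption (with a residual noise draw) into a second survey sharing the covariates, and compute a poverty headcount from the imputed values. The data-generating process, OLS model, and poverty line below are all synthetic and illustrative; the paper's formal framework and variance formulas are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# "Survey 1": records both covariates and log consumption
n1 = 2000
x1 = rng.normal(size=(n1, 2))
true_beta = np.array([0.5, -0.3])
logc1 = 1.0 + x1 @ true_beta + rng.normal(scale=0.4, size=n1)

# Fit the consumption model by OLS on survey 1
X1 = np.column_stack([np.ones(n1), x1])
beta_hat, *_ = np.linalg.lstsq(X1, logc1, rcond=None)
resid_sd = np.std(logc1 - X1 @ beta_hat)

# "Survey 2": covariates only; impute log consumption with a noise draw
n2 = 1500
x2 = rng.normal(size=(n2, 2))
X2 = np.column_stack([np.ones(n2), x2])
logc2_imp = X2 @ beta_hat + rng.normal(scale=resid_sd, size=n2)

poverty_line = np.exp(0.5)                     # illustrative threshold
headcount = np.mean(np.exp(logc2_imp) < poverty_line)
```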

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.