nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒04‒17
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. Symmetric Jackknife Instrumental Variable Estimation By Bekker, Paul A.; Crudu, Federico
  2. Bayesian Unit Root Testing in Stochastic Volatility Models Using INLA By Márcio Laurini; Márcio Alves Diniz
  3. Robust Test for Spatial Error Model: Considering Changes of Spatial Layouts and Distribution Misspecification By Guo, Penghui; Liu, Lihu
  4. Time irreversible copula-based Markov Models By Beare, Brendan K.; Seo, Juwon
  5. Testing the concavity of an ordinal dominance curve By Beare, Brendan K.; Moon, Jong-Myun
  6. Empirical pricing kernel estimation using a functional gradient descent algorithm based on splines By Audrino, Francesco; Meier, Pirmin
  7. Realized Wavelet Jump-GARCH model: Can wavelet decomposition of volatility improve its forecasting? By Jozef Barunik; Lukas Vacha
  8. Double Asymptotics for Explosive Continuous Time Models By Xiaohu Wang; Jun Yu
  9. Test Measurement Error and Inference from Value-Added Models By Cory Koedel; Rebecca Leatherman; Eric Parsons
  10. Estimating VAR-MGARCH models in multiple steps By M. Angeles Carnero Fernández; M. Hakan Eratalay
  11. Learning and Model Validation By In-Koo Cho; Ken Kasa
  12. Modelling and Forecasting Yield Differentials in the euro area. A non-linear Global VAR model By Carlo A. Favero
  13. Applying approximate entropy (ApEn) to speculative bubble in the stock market By Saumitra, Bhaduri
  14. Consistent single- and multi-step sampling of multivariate arrival times: A characterization of self-chaining copulas By Damiano Brigo; Kyriakos Chourdakis
  15. Robust Ranking of Multivariate GARCH Models by Problem Dimension By Massimiliano Caporin; Michael McAleer

  1. By: Bekker, Paul A.; Crudu, Federico
    Abstract: This paper gives a new jackknife estimator for instrumental variable inference with unknown heteroskedasticity. The estimator is derived by using a method of moments approach similar to the one that produces LIML in case of homoskedasticity. The estimator is symmetric in the endogenous variables including the dependent variable. Many instruments and many weak instruments asymptotic distributions are derived using high-level assumptions that allow for the simultaneous presence of weak and strong instruments for different explanatory variables. Standard errors are formulated compactly. We review briefly known estimators and show in particular that the symmetric jackknife estimator performs well when compared to the HLIM and HFUL estimators of Hausman et al. (2011) in Monte Carlo experiments.
    Keywords: Instrumental Variables; Heteroskedasticity; many Instruments; Jackknife
    JEL: C13 C12 C23
    Date: 2012–04–05
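The jackknife idea behind such estimators can be illustrated with a minimal sketch. This is not the paper's symmetric estimator: it is the basic JIVE-style construction, in which the own-observation (diagonal) terms of the instrument projection matrix are removed, breaking the correlation between fitted regressor and error that biases 2SLS when instruments are many. The simulated design and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 20                          # many instruments
Z = rng.normal(size=(n, k))             # instrument matrix
e = rng.normal(size=n)                  # structural error
x = Z @ np.full(k, 0.2) + 0.5 * e + rng.normal(size=n)   # endogenous regressor
y = 1.0 * x + e                         # true coefficient is 1

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
P_jack = P - np.diag(np.diag(P))        # jackknife: delete own-observation terms

beta_2sls = (x @ P @ y) / (x @ P @ x)
beta_jive = (x @ P_jack @ y) / (x @ P_jack @ x)
```

Zeroing the diagonal of P removes every term in the quadratic forms that pairs an observation's regressor with its own error, which is the source of the many-instrument bias of 2SLS.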
  2. By: Márcio Laurini (IBMEC Business School); Márcio Alves Diniz (Department of Statistics - UFSCAR)
    Abstract: This article discusses the use of Integrated Nested Laplace Approximations (INLA) in inference procedures and the construction of unit root tests in stochastic volatility models. This approach yields accurate analytical approximations for the parameters and latent volatilities, representing an alternative to methods based on Markov Chain Monte Carlo.
    Keywords: Unit Roots, Stochastic Volatility, Integrated Nested Laplace Approximations
    JEL: C11 C12 C22
    Date: 2012–04–04
  3. By: Guo, Penghui; Liu, Lihu
    Abstract: This paper suggests a robust LM (Lagrange Multiplier) test for the spatial error model which not only greatly reduces the influence of spatial lag dependence, but is also robust to changes of spatial layouts and to distribution misspecification. Monte Carlo simulation results imply that existing LM tests suffer serious size and power distortion in the presence of spatial lag dependence, group interaction or non-normal distributions, whereas the robust LM test of this paper performs well.
    Keywords: LM test; Spatial Layouts; Distribution Misspecification; Robustness
    JEL: C1
    Date: 2011–11
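For context, the classical (non-robust) LM test for spatial error dependence that such robust variants build on can be sketched as follows. The ring-shaped weight matrix and simulated residuals are purely illustrative, and the statistic shown is the standard LM error statistic, not the robust test proposed in the paper.

```python
import numpy as np

def lm_error_test(e, W):
    """Classical (non-robust) LM test for spatial error dependence.

    Under the null of no spatial dependence the statistic is
    asymptotically chi-squared with one degree of freedom.
    """
    n = len(e)
    s2 = e @ e / n                          # residual variance estimate
    T = np.trace(W.T @ W + W @ W)
    return float((e @ W @ e / s2) ** 2 / T)

# toy row-standardised weight matrix: two neighbours on a ring
n = 200
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

rng = np.random.default_rng(1)
stat_null = lm_error_test(rng.normal(size=n), W)      # independent residuals
u = rng.normal(size=n)
e_spatial = np.linalg.solve(np.eye(n) - 0.6 * W, u)   # spatially correlated residuals
stat_alt = lm_error_test(e_spatial, W)
```

With independent residuals the statistic stays near its chi-squared(1) range, while spatially correlated residuals push it far into the rejection region.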
  4. By: Beare, Brendan K.; Seo, Juwon
    Abstract: Economic and financial time series frequently exhibit time irreversible dynamics. For instance, there is considerable evidence of asymmetric fluctuations in many macroeconomic and financial variables, and certain game theoretic models of price determination predict asymmetric cycles in price series. In this paper we make two primary contributions to the econometric literature on time reversibility. First, we propose a new test of time reversibility, applicable to stationary Markov chains. Compared to existing tests, our test has the advantage of being consistent against arbitrary violations of reversibility. Second, we explain how a circulation density function may be used to characterize the nature of time irreversibility when it is present. We propose a copula-based estimator of the circulation density, and verify that it is well behaved asymptotically under suitable regularity conditions. We illustrate the use of our time reversibility test and circulation density estimator by applying them to five years of Canadian gasoline price markup data.
    Keywords: Econometrics and Quantitative Economics, Markov chains, time irreversible dynamics, economic time series
    Date: 2012–04–08
  5. By: Beare, Brendan K.; Moon, Jong-Myun
    Abstract: We study the asymptotic properties of a class of statistics used for testing the null hypothesis that an ordinal dominance curve is concave. The statistics are based on the Lp-distance between the empirical ordinal dominance curve and its least concave majorant, with 1 ≤ p ≤ ∞. We formally establish the limit distribution of the statistics when the true ordinal dominance curve is concave. Further, we establish that, when 1 ≤ p ≤ 2, the limit distribution is stochastically largest when the true ordinal dominance curve is the 45-degree line. When p > 2, this is not the case, and in fact the limit distribution diverges to infinity along a suitably chosen sequence of concave ordinal dominance curves. Our results serve to clarify, extend and amend assertions appearing previously in the literature for the cases p = 1 and p = ∞.
    Keywords: Econometrics and Quantitative Economics, null hypothesis
    Date: 2012–04–02
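The key object in such tests, the least concave majorant of an empirical curve, can be computed at the sample points with a standard upper-convex-hull pass. The sketch below is illustrative, not the authors' code, and uses two toy curves to show the two extremes.

```python
import math

def least_concave_majorant(x, y):
    """Evaluate the least concave majorant of the points (x_i, y_i) at each x_i."""
    hull = []  # upper convex hull, built left to right
    for p in zip(x, y):
        # drop the last hull point while it lies on or below the new chord
        while len(hull) >= 2:
            (x0, y0), (x1, y1) = hull[-2], hull[-1]
            if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    # linear interpolation of the hull back onto the original grid
    out, j = [], 0
    for xi in x:
        while j + 1 < len(hull) and hull[j + 1][0] < xi:
            j += 1
        (x0, y0), (x1, y1) = hull[j], hull[min(j + 1, len(hull) - 1)]
        out.append(y0 if x1 == x0 else y0 + (y1 - y0) * (xi - x0) / (x1 - x0))
    return out

x = [i / 10 for i in range(11)]
concave = [math.sqrt(v) for v in x]   # already concave: majorant coincides with the curve
kinked = [abs(v - 0.5) for v in x]    # convex: majorant is the chord between the endpoints
d1 = max(abs(a - b) for a, b in zip(least_concave_majorant(x, concave), concave))
d2 = max(abs(a - b) for a, b in zip(least_concave_majorant(x, kinked), kinked))
```

The sup-distance d1 is zero for the concave curve and positive for the kinked one; the test statistics in the paper replace the sup with an Lp-distance.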
  6. By: Audrino, Francesco; Meier, Pirmin
    Abstract: We propose a new methodology to estimate the empirical pricing kernel implied from option data. In contrast to most of the studies in the literature that use an indirect approach, i.e. first estimating the physical and risk-neutral densities and obtaining the pricing kernel in a second step, we follow a direct approach. Departing from an adequate parametric and economically motivated pricing kernel, we apply a functional gradient descent (FGD) algorithm based on B-splines. This approach allows us to locally modify the initial pricing kernel and hence to improve the final estimate. We empirically illustrate the estimation properties of the method and test its predictive power on S&P 500 option data, comparing it as well with other recent approaches introduced in the empirical pricing kernel literature.
    Keywords: Empirical pricing kernel, function gradient descent, B-splines, option pricing.
    JEL: C13 C14 C51 C53 C58 C63
    Date: 2012–04
  7. By: Jozef Barunik; Lukas Vacha
    Abstract: In this paper, we propose a forecasting model for volatility based on its decomposition into several investment horizons and jumps. As a forecasting tool, we utilize the Realized GARCH framework of Hansen et al. (2011), which jointly models returns and realized measures of volatility. For the decomposition, we use the jump wavelet two-scale realized volatility estimator (JWTSRV) of Barunik and Vacha (2012). The main advantage of our time-frequency estimator is that it provides a realized volatility measure robust to noise as well as a consistent estimate of jumps, while also allowing us to decompose volatility into several investment horizons. On currency futures data covering the period of the recent financial crisis, we compare forecasts from the Realized GARCH(1,1) model using several measures, namely realized volatility, bipower variation, two-scale realized volatility, the realized kernel and our jump wavelet two-scale realized volatility. We find that the in-sample as well as out-of-sample performance of the model differs significantly depending on the realized measure used; when the JWTSRV estimator is used, the model produces the best forecasts. We also utilize jumps to build a Realized Jump-GARCH model. Finally, utilizing the decomposition obtained by our estimator, we build a Realized Wavelet-Jump GARCH model, which uses the estimated jumps as well as volatility at several investment horizons. Our Realized Wavelet-Jump GARCH model further improves the volatility forecasts. We conclude that realized volatility measurement in the time-frequency domain and the inclusion of jumps improve volatility forecasting considerably.
    Date: 2012–04
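Two of the realized measures compared above are simple to compute from intraday returns. A toy sketch with simulated one-minute returns and one artificially injected jump (this illustrates the generic realized-variance/bipower-variation split, not the JWTSRV estimator itself):

```python
import math
import random

random.seed(7)
r = [random.gauss(0.0, 0.01) for _ in range(390)]   # one day of 1-minute returns
r[195] += 0.2                                       # inject a single large jump

# realized variance picks up the jump; bipower variation is robust to it
rv = sum(x * x for x in r)
bv = (math.pi / 2) * sum(abs(a) * abs(b) for a, b in zip(r, r[1:]))
jump = max(rv - bv, 0.0)                            # nonparametric jump component
```

Comparing rv and bv isolates the jump contribution, the same logic the JWTSRV estimator refines in the time-frequency domain.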
  8. By: Xiaohu Wang (School of Economics and Sim Kee Boon Institute for Financial Economics, Singapore Management University); Jun Yu (Sim Kee Boon Institute for Financial Economics, School of Economics and Lee Kong Chian School of Business)
    Abstract: This paper develops a double asymptotic limit theory for the persistence parameter (k) in explosive continuous time models driven by Lévy processes, with a long time span (N) and a small sampling interval (h). The simultaneous double asymptotic theory is derived using a technique in the same spirit as in Phillips and Magdalinos (2007) for the mildly explosive discrete time model. Both the intercept term and the initial condition appear in the limiting distribution. In the special case of explosive continuous time models driven by Brownian motion, we develop the limit theory that allows for the joint limits where N → ∞ and h → 0 simultaneously, the sequential limits where N → ∞ is followed by h → 0, and the sequential limits where h → 0 is followed by N → ∞. All three asymptotic distributions are the same.
    Keywords: Explosive, Continuous Time, Lévy Process, Invariance Principle, Double Asymptotics
    JEL: C13 C22 G13
    Date: 2012–01
  9. By: Cory Koedel (Department of Economics, University of Missouri-Columbia); Rebecca Leatherman (Department of Economics, University of Missouri-Columbia); Eric Parsons (Department of Economics, University of Missouri-Columbia)
    Abstract: It is widely known that standardized tests are noisy measures of student learning, but value added models (VAMs) rarely take direct account of measurement error in student test scores. We examine the extent to which modifying VAMs to include information about test measurement error (TME) can improve inference. Our analysis is divided into two parts – one based on simulated data and the other based on administrative micro data from Missouri. In the simulations we control the data generating process, which ensures that we obtain accurate TME metrics with which to modify our value-added models. In the real-data portion of our analysis we use estimates of TME provided by a major test publisher. We find that inference from VAMs is improved by making simple TME adjustments to the models. This is a notable result because the improvement can be had at zero cost.
    Keywords: value added models, value added, teacher value added, test measurement error, teacher evaluation
    JEL: I20
    Date: 2012–01–30
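One simple TME adjustment of the kind such models can incorporate is reliability-based shrinkage of noisy scores toward the mean. The variance numbers below are hypothetical, and the paper's actual modifications may differ:

```python
# hypothetical variances: observed score variance and publisher-reported
# test-measurement-error (TME) variance -- illustrative numbers only
var_observed = 1.00
var_error = 0.20
reliability = 1 - var_error / var_observed   # share of observed variance that is signal

def shrink(score, mean=0.0):
    """Empirical-Bayes style shrinkage of a noisy test score toward the group mean."""
    return mean + reliability * (score - mean)

adjusted = shrink(1.5)   # a score 1.5 SD above the mean is pulled toward 0
```

The lower the test's reliability, the more an extreme observed score is attributed to measurement error rather than true achievement.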
  10. By: M. Angeles Carnero Fernández (Universidad de Alicante); M. Hakan Eratalay (Dpto. Fundamentos del Análisis Económico)
    Date: 2012–03
  11. By: In-Koo Cho; Ken Kasa (Simon Fraser University)
    Abstract: This paper studies adaptive learning with multiple models. An agent operating in a self-referential environment is aware of potential model misspecification, and tries to detect it, in real-time, using an econometric specification test. If the current model passes the test, it is used to construct an optimal policy. If it fails the test, a new model is selected from a fixed set of models. As the rate of coefficient updating decreases, one model becomes dominant, and is used 'almost always'. Dominant models can be characterized using the tools of large deviations theory. The analysis is applied to Sargent's (1999) Phillips Curve model.
    Keywords: Learning; Model validation
    JEL: C12 E59
    Date: 2012–04
  12. By: Carlo A. Favero
    Abstract: Instability in the comovement among bond spreads in the euro area is an important feature for dynamic econometric modelling and forecasting. This paper proposes a non-linear GVAR approach to spreads in the euro area in which the changing interdependence among these variables is modelled by making each country's spread a function of a global variable determined by fiscal fundamentals with a time-varying composition. The model naturally accommodates the possibility of multiple equilibria in the relation between default premia and local fiscal fundamentals. The estimation reveals a significant non-linear relation between spreads and fiscal fundamentals that generates time-varying impulse responses of local spreads to shocks in other euro area countries' spreads. The GVAR framework is then applied to the analysis of the dynamic effects of fiscal stabilization packages on the cost of government borrowing and to the evaluation of the importance of potential contagion effects determining a significant increase in cross-market linkages after a shock to a group of countries.
    Date: 2012
  13. By: Saumitra, Bhaduri
    Abstract: In contrast to the traditional duration dependence test, the paper introduces an order statistic known as Approximate Entropy to investigate the presence of speculative bubbles in a cross-country sample. Using Approximate Entropy, the article examines four major crashes in the US, Japan, Hong Kong and India. In addition, the paper also investigates the 1997 Asian crisis using weekly data for seven major Asian indices, covering Hong Kong, Malaysia, Singapore, Korea, Taiwan, Indonesia and Japan. The results confirm that there are strong “tell-tale” signs, characterized by low Approximate Entropy (ApEn) levels, during many of these crash events. All the evidence, using yearly as well as time series data (both discrete and rolling window analysis), points to a substantially lower level of ApEn during the crash.
    Keywords: Approximate Entropy, Bubble, India, Stock Market
    JEL: G0 G01
    Date: 2012–04–10
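Approximate entropy itself is straightforward to compute. A minimal sketch follows; the parameter choices m = 2 and r = 0.2 are conventional defaults, not necessarily those of the paper:

```python
import math
import random

def apen(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r); low values indicate regular, predictable dynamics."""
    n = len(series)

    def phi(mm):
        templates = [series[i:i + mm] for i in range(n - mm + 1)]
        total = 0.0
        for t1 in templates:
            # count templates within Chebyshev distance r (self-match included)
            matches = sum(
                max(abs(a - b) for a, b in zip(t1, t2)) <= r for t2 in templates
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

regular = [0.0, 1.0] * 50                       # perfectly repetitive series
random.seed(3)
noisy = [random.random() for _ in range(100)]   # irregular series
```

A repetitive series scores near zero while an irregular one scores much higher, which is the property the paper exploits: bubble-prone periods show abnormally low ApEn.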
  14. By: Damiano Brigo; Kyriakos Chourdakis
    Abstract: This paper deals with dependence across marginally-exponentially distributed arrival times, such as default times in financial modeling or inter-failure times in reliability theory. We explore the relationship between dependence and the possibility of sampling final multivariate survival in a long time interval as a sequence of iterations of local multivariate survivals along a partition of the total time interval. We find that this is possible under a form of multivariate lack of memory that is linked to a property of the survival times copula. This property defines a "self-chaining copula", and we show that this coincides with the characterization of extreme value copulas. The self-chaining condition is satisfied by the Gumbel-Hougaard copula, which yields a full characterization of self-chaining copulas in the Archimedean family, and by the Marshall-Olkin copula. We present a homogeneity characterization of the self-chaining condition. The result has important practical implications for consistent single-step and multi-step simulation of multivariate arrival times in a way that does not destroy dependence through iterations, as happens when inconsistently iterating a Gaussian copula.
    Date: 2012–04
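The self-chaining property for the Gumbel-Hougaard copula can be verified numerically: it is the max-stability identity C(u^t, v^t) = C(u, v)^t, which is what allows a long-horizon survival probability to be built by chaining identical short steps without distorting dependence. A small check (parameter values illustrative):

```python
import math

def gumbel(u, v, theta=2.0):
    """Gumbel-Hougaard copula C(u, v) with dependence parameter theta >= 1."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

# self-chaining / max-stability: C(u**t, v**t) == C(u, v)**t for all t > 0
u, v, t = 0.7, 0.4, 3.0
lhs = gumbel(u ** t, v ** t)
rhs = gumbel(u, v) ** t
```

A Gaussian copula fails this identity, which is why naively iterating it across sub-periods degrades the dependence structure, as the abstract notes.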
  15. By: Massimiliano Caporin; Michael McAleer (University of Canterbury)
    Abstract: During the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. We provide an empirical comparison of alternative MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC), CCC, OGARCH, Exponentially Weighted Moving Average, and covariance shrinking, using historical data for 89 US equities. We contribute to the literature in several directions. First, we consider a wide range of models, including the recent cDCC and covariance shrinking models. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Model Confidence Set. Third, we examine how the robust model rankings are influenced by the cross-sectional dimension of the problem.
    Keywords: Covariance forecasting; model confidence set; robust model ranking; MGARCH; robust model comparison
    JEL: C18 C81 Y10
    Date: 2012–04–01
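Of the forecasters compared, the Exponentially Weighted Moving Average is the simplest to sketch. The recursion below uses a RiskMetrics-style decay of 0.94 on simulated bivariate returns; both the decay value and the data-generating process are illustrative choices, not the paper's setup.

```python
import random

random.seed(4)
lam = 0.94                            # decay parameter (RiskMetrics-style choice)
cov = [[1e-4, 0.0], [0.0, 1e-4]]      # initial covariance guess

for _ in range(500):
    z = random.gauss(0.0, 0.01)       # common factor makes the two returns correlate
    r = [z + random.gauss(0.0, 0.005), z + random.gauss(0.0, 0.005)]
    # EWMA update: Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r r'
    for i in range(2):
        for j in range(2):
            cov[i][j] = lam * cov[i][j] + (1.0 - lam) * r[i] * r[j]

corr = cov[0][1] / (cov[0][0] * cov[1][1]) ** 0.5   # implied correlation estimate
```

The recursion guarantees a positive semi-definite covariance forecast at every step, one reason EWMA remains a common benchmark against richer MGARCH specifications.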

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.