nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒08‒09
twenty papers chosen by
Sune Karlsson
Örebro University

  1. A Pairwise Difference Estimator for Partially Linear Spatial Autoregressive Models By Zhengyu Zhang
  2. Smoothing parameter selection for penalized spline estimators By Tatyana Krivobokova
  3. Generic Results for Establishing the Asymptotic Size of Confidence Sets and Tests By Donald W.K. Andrews; Xu Cheng; Patrik Guggenberger
  4. Algebraic Theory of Identification in Parametric Models By Andrzej Kociecki
  5. Forecasting Under Structural Break Uncertainty By Jing Tian; Heather M. Anderson
  6. A Conditional-Heteroskedasticity-Robust Confidence Interval for the Autoregressive Parameter By Donald W.K. Andrews; Patrik Guggenberger
  7. Estimation and Inference in Predictive Regressions By Eiji Kurozumi; Kohei Aono
  8. Dynamic copula-based Markov chains at work: Theory, testing and performance in modeling daily stock returns By Tinkl, Fabian; Reichert, Katja
  9. A Bayesian Model of Sample Selection with a Discrete Outcome Variable: Detecting Depression in Older Adults By Maksym Obrizan
  10. Bayesian Analysis of Time-Varying Parameter Vector Autoregressive Model with the Ordering of Variables for the Japanese Economy and Monetary Policy By Jouchi Nakajima; Toshiaki Watanabe
  11. A sieve bootstrap range test for poolability in dependent cointegrated panels By Francesca Di Iorio; Stefano Fachin
  12. Estimation of Forward-Looking Relationships in Closed Form: An Application to the New Keynesian Phillips Curve By Barnes, Michelle L.; Gumbau-Brisa, Fabià; Lie, Denny; Olivei, Giovanni P.
  13. Van Zwet ordering for Fechner asymmetry By Klein, Ingo
  14. Regional Indexes of Activity: Combining the Old with the New By Edda Claus; Chew Lian Chua; G. C. Lim
  15. Extreme value theory for finance: a survey By Marco Rocco
  16. A Stable and Robust Calibration Scheme of the Log-Periodic Power Law Model By Vladimir Filimonov; Didier Sornette
  17. Hedonic Predicted House Price Indices Using Time-Varying Hedonic Models with Spatial Autocorrelation By Alicia Rambaldi; Prasada Rao
  18. Quantile Forecasts of Financial Returns Using Realized GARCH Models By Toshiaki Watanabe
  19. A Response to Cogley and Sbordone's Comment on "Closed-Form Estimates of the New Keynesian Phillips Curve with Time-Varying Trend Inflation" By Gumbau-Brisa, Fabià; Lie, Denny; Olivei, Giovanni P.
  20. Detection of Crashes and Rebounds in Major Equity Markets By Wanfeng Yan; Reda Rebib; Ryan Woodard; Didier Sornette

  1. By: Zhengyu Zhang (Center for Econometric Study, Shanghai Academy of Social Sciences)
    Abstract: Su and Jin (2010) develop a profile quasi-maximum likelihood estimation procedure for the partially linear spatial autoregressive (PL-SAR) model. More recently, Su (2011) proposes a semiparametric GMM estimator for this model. However, both can be computationally challenging for applied researchers and are not easy to implement in practice. In this article, we propose a computationally simple estimator for the PL-SAR model in the presence of either heteroscedastic or spatially correlated error terms. This estimator blends the essential features of both the GMM estimator for the linear SAR model and the pairwise difference estimator for the conventional partially linear model. The limiting distribution of the proposed estimator is established and a consistent estimator for its asymptotic covariance matrix is provided. Monte Carlo studies indicate that our estimator is attractive, particularly when one is interested in estimating the finite-dimensional parameters in the model.
    Keywords: Spatial autoregression, Partially linear model, Pairwise difference
    JEL: C13 C14 C21
    Date: 2011–07
  2. By: Tatyana Krivobokova (Georg-August-University Göttingen)
    Abstract: There are two popular smoothing parameter selection methods for spline smoothing. First, criteria that approximate the average mean squared error of the estimator (e.g. generalized cross validation) are widely used. Alternatively, the maximum likelihood paradigm can be employed under the assumption that the underlying function to be estimated is a realization of some stochastic process. In this article the asymptotic properties of both smoothing parameter estimators are studied and compared in the frequentist and stochastic framework for penalized spline smoothing. Consistency and asymptotic normality of the estimators are proved and small sample properties are discussed. A simulation study and a real data example illustrate the theoretical findings.
    Keywords: Maximum likelihood; Mean squared error minimizer; Penalized splines; Smoothing splines
    Date: 2011–08–02
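    [Editor's note: a minimal sketch of the first criterion the abstract mentions, generalized cross validation, for a penalized spline written as a linear smoother. The basis, knot placement, and penalty below are illustrative assumptions, not the paper's setup.]

```python
import numpy as np

def gcv_score(y, B, P, lam):
    """Generalized cross validation score for the linear smoother y_hat = H y."""
    n = len(y)
    H = B @ np.linalg.solve(B.T @ B + lam * P, B.T)   # hat matrix of the penalized fit
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

# Illustrative setup: degree-1 truncated power basis, ridge penalty on the
# knot coefficients only (these choices are assumptions for the sketch).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)
knots = np.linspace(0.0, 1.0, 12)[1:-1]
B = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots])
P = np.diag([0.0, 0.0] + [1.0] * len(knots))

lams = 10.0 ** np.arange(-6.0, 3.0)                   # candidate smoothing parameters
best_lam = min(lams, key=lambda lam: gcv_score(y, B, P, lam))
```

    The score minimized here approximates the average mean squared error of the fit; the stochastic (maximum likelihood) alternative the abstract compares it with would instead treat the penalty as a prior on the spline coefficients.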
  3. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Xu Cheng (Dept. of Economics, University of Pennsylvania); Patrik Guggenberger (Dept. of Economics, UCSD)
    Abstract: This paper provides a set of results that can be used to establish the asymptotic size and/or similarity in a uniform sense of confidence sets and tests. The results are generic in that they can be applied to a broad range of problems. They are most useful in scenarios where the pointwise asymptotic distribution of a test statistic has a discontinuity in its limit distribution. The results are illustrated in three examples. These are: (i) the conditional likelihood ratio test of Moreira (2003) for linear instrumental variables models with instruments that may be weak, extended to the case of heteroskedastic errors; (ii) the grid bootstrap confidence interval of Hansen (1999) for the sum of the AR coefficients in a k-th order autoregressive model with unknown innovation distribution, and (iii) the standard quasi-likelihood ratio test in a nonlinear regression model where identification is lost when the coefficient on the nonlinear regressor is zero.
    Keywords: Asymptotically similar, Asymptotic size, Autoregressive model, Confidence interval, Nonlinear regression, Test, Weak instruments
    JEL: C12 C22
    Date: 2011–08
  4. By: Andrzej Kociecki (Narodowy Bank Polski)
    Abstract: The paper presents the problem of identification in parametric models from the algebraic point of view. We argue that this is not just another perspective but the proper one: it reveals the very nature of the identification problem, which is slightly different from that suggested in the literature. In practice it means that in many models we can unambiguously estimate parameters that have been thought to be unidentifiable. This is illustrated in the case of the Simultaneous Equations Model (SEM), where our analysis leads to the conclusion that existing identification conditions, although correct, are based on an inappropriate premise: that only the structural parameters in one-to-one correspondence with the reduced form parameters are identified. We show that this is not true. In fact there are other structural parameters which are identified but cannot be uniquely recovered from the reduced form parameters. Although we apply our theory only to the SEM, it can be used in many standard econometric models.
    Keywords: Identification; Group theory; Orbits; Orbit representatives; Simultaneous Equations Model; Maximal Invariant
    JEL: C10 C30
    Date: 2011
  5. By: Jing Tian; Heather M. Anderson
    Abstract: This paper proposes two new weighting schemes that average forecasts using different estimation windows to account for structural change. The first lets the weights reflect the probability of each time point being the most recent break point, using reversed ordered CUSUM test statistics to capture this intuition. The second simply imposes heavier weights on forecasts that use more recent information. The proposed combination forecasts are evaluated using Monte Carlo techniques and compared with forecasts based on other methods that account for structural change, including average forecasts weighted by past forecasting performance and techniques that first estimate a break point and then forecast using the post-break data. Simulation results show that our proposed weighting methods often outperform the others in the presence of structural breaks. An empirical application based on a NAIRU Phillips curve model for the United States indicates that it is possible to outperform the random walk forecasting model when we employ forecasting methods that account for break uncertainty.
    Keywords: Forecasting with structural breaks, Parameter shifts, Break uncertainty, Structural break tests, Choice of estimation sample, Forecast combinations, NAIRU Phillips curve
    JEL: C22 C53 E37
    Date: 2011–07
  6. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCSD)
    Abstract: This paper introduces a new confidence interval (CI) for the autoregressive parameter (AR) in an AR(1) model that allows for conditional heteroskedasticity of general form and AR parameters that are less than or equal to unity. The CI is a modification of Mikusheva's (2007a) modification of Stock's (1991) CI that employs the least squares estimator and a heteroskedasticity-robust variance estimator. The CI is shown to have correct asymptotic size and to be asymptotically similar (in a uniform sense). It does not require any tuning parameters. No existing procedures have these properties. Monte Carlo simulations show that the CI performs well in finite samples in terms of coverage probability and average length, for innovations with and without conditional heteroskedasticity.
    Keywords: Asymptotically similar, Asymptotic size, Autoregressive model, Conditional heteroskedasticity, Confidence interval, Hybrid test, Subsampling test, Unit root
    JEL: C12 C15 C22
    Date: 2011–08
  7. By: Eiji Kurozumi; Kohei Aono
    Abstract: This paper proposes new point estimates for predictive regressions. Our estimates are easily obtained by the least squares and the instrumental variable methods. Our estimates, called the plug-in estimates, have nice asymptotic properties such as median unbiasedness and the approximated normality of the associated t-statistics. In addition, the plug-in estimates are shown to have good finite sample properties via Monte Carlo simulations. Using the new estimates, we investigate U.S. stock returns and find that some variables, which have not been statistically detected as useful predictors in the literature, are able to predict stock returns. Because of their nice properties, our methods complement the existing statistical tests for predictability to investigate the relations between stock returns and economic variables.
    Keywords: unit root, near unit root, bias, median unbiased, stock return
    JEL: C13 C22
    Date: 2011–05
  8. By: Tinkl, Fabian; Reichert, Katja
    Abstract: We generalize the score test for time-varying copula parameters proposed by Abegaz & Naik-Nimbalkar (2008) to a setting where copulas with more than one parameter can be tested for time variation in at least one parameter. We then model the daily log returns of the Commerzbank stock using copula-based Markov chain models. We find evidence that copula-based Markov chain models perform worse than standard GARCH models when fitted to daily stock returns. Thus we see no advantage of this model type when daily returns from financial data are modeled.
    Keywords: Dynamic copula models, Markov chains, Score test, GARCH models
    Date: 2011
  9. By: Maksym Obrizan (Kyiv School of Economics, Kyiv Economic Institute)
    Abstract: Depression as a major mental illness among older adults has attracted a lot of research attention. However, the problem of sample selection, inevitable in most health surveys, has been largely ignored. To fill this gap, this paper formally models selection into the sample jointly with a discrete outcome variable for depression. A Bayesian model of sample selection is developed from a multivariate probit by (i) allowing missing depression status for nonselected respondents, and (ii) using a Cholesky factorization of the inverse variance matrix to avoid a Metropolis-Hastings step in the Gibbs sampler. Non-selected respondents are less likely to suffer from depression.
    Keywords: Multivariate probit model; Sample selection; Bayesian methods; Gibbs sampler
    JEL: C11 C35 I1
    Date: 2011–07
  10. By: Jouchi Nakajima; Toshiaki Watanabe
    Abstract: This paper applies the time-varying parameter vector autoregressive model to the Japanese economy. Both the parameters and the volatilities, which are assumed to follow a random-walk process, are estimated using a Bayesian method with MCMC. A recursive structure is assumed for identification, and reversible jump MCMC is used for the ordering of variables. The empirical result reveals the time-varying structure of the Japanese economy and monetary policy during the period from 1981 to 2008 and provides evidence that the order of variables may have changed with the introduction of the zero interest rate policy.
    Keywords: Bayesian inference, Monetary policy, Reversible jump Markov chain Monte Carlo, Stochastic volatility, Time-varying parameter VAR
    JEL: C11 C15 E52
    Date: 2011–07
  11. By: Francesca Di Iorio (Universita' di Napoli Federico II); Stefano Fachin (Universita' di Roma "La Sapienza")
    Abstract: We develop a sieve bootstrap range test for poolability of cointegrating regressions in dependent panels and evaluate its performance by simulation. Although slightly undersized, the test has good power even when only a single unit of the panel is heterogeneous.
    Keywords: Poolability, Panel cointegration, Sieve bootstrap
    JEL: C23 C15 E2
    Date: 2011–07
  12. By: Barnes, Michelle L.; Gumbau-Brisa, Fabià; Lie, Denny; Olivei, Giovanni P.
    Abstract: We illustrate the importance of placing model-consistent restrictions on expectations in the estimation of forward-looking Euler equations. In two-stage limited-information settings where first-stage estimates are used to proxy for expectations, parameter estimates can differ substantially, depending on whether these restrictions are imposed or not. This is shown in an application to the New Keynesian Phillips Curve (NKPC), first in a Monte Carlo exercise, and then on actual data. The closed-form (CF) estimates require by construction that expectations of inflation be model-consistent at all points in time, while the difference-equation (DE) estimates impose no model discipline on expectations. Between those two polar extremes there is a wide range of alternative DE specifications, based on the same dynamic relationship, that explicitly impose model restrictions on expectations for a finite number of periods. In our application, these last estimates quickly converge to the CF estimates, and illustrate that the DE estimates in Cogley and Sbordone (2008) are not robust to imposing modest model requirements on expectations. In particular, our estimates show that the NKPC is not purely forward-looking, and thus that time-varying trend inflation is insufficient to explain inflation persistence.
    Keywords: time-varying trend inflation; forward-looking Euler equation; New Keynesian Phillips curve; model-consistent expectations; closed form
    Date: 2011–06
  13. By: Klein, Ingo
    Abstract: There are several procedures for constructing a skewed distribution. One of these splits the value of a scale parameter between the two halves of a symmetric distribution. Fechner proposed this procedure in his famous book Kollektivmaßlehre (1897), p. 295ff. A similar proposal comes from Fernandez et al. (1995). We consider the very general approach of Arellano-Valle et al. (2005) of splitting a scale parameter and show that this technique of generating skewed distributions incorporates a well-defined parameter of skewness. It is well-defined in the sense that the parameter of skewness is compatible with the ordering <2 of van Zwet (1964), which is the strongest ordering in the hierarchy of orderings discussed by Oja (1981). For this family of skewed distributions it is shown that the measure proposed by Arnold & Groeneveld (1995) is a measure of skewness in the sense of Oja (1981). In the special case considered by Fechner (1897) this measure and the skewness parameter coincide.
    Keywords: Skewness, Skewness to the right, Skewness ordering, Measure of skewness
    Date: 2011
  14. By: Edda Claus (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne); Chew Lian Chua (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne); G. C. Lim (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne)
    Abstract: This paper proposes a framework to construct indexes of activity which links two strands of the index literature – the traditional business cycle analysis and the latent variable approach. To illustrate the method, we apply the framework to Australian regional data, namely to two resource-rich and two service-based states. The results reveal differences in the evolution and drivers of economic activity across the four states. We also demonstrate the value of the Index in a broader context by using a structural vector autoregression (SVAR) approach to analyse the effects of shocks from the US and from China. This Index-SVAR approach facilitates a richer analysis because the unique feature of the index method proposed here allows impulse responses to be traced back to the components.
    Keywords: Regional economic activity, coincident indicators, dynamic latent factor model
    JEL: C43 E32
    Date: 2011–06
  15. By: Marco Rocco (Banca d'Italia)
    Abstract: Extreme value theory is concerned with the asymptotic distribution of extreme events, that is, events which are rare in frequency and large in magnitude relative to the majority of observations. Statistical methods derived from this theory have been increasingly employed in finance, especially in the context of risk measurement. The aim of the present study is twofold. The first part delivers a critical review of the theoretical underpinnings of extreme value theory. The second part provides a survey of some major applications of extreme value theory to finance, namely its use to test different distributional assumptions for the data, Value-at-Risk and Expected Shortfall calculations, asset allocation under safety-first type constraints, and the study of contagion and dependence across markets under stress conditions.
    Keywords: extreme value theory, risk management, fat-tailed distributions, Value-at-Risk, systemic risk, asset allocation
    JEL: C10 C16 G10 G20 G21
    Date: 2011–07
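    [Editor's note: a minimal sketch of the Value-at-Risk application the survey covers, using the standard peaks-over-threshold tail estimator. The moment-based GPD fit, threshold choice, and simulated losses are illustrative assumptions, not results from the paper.]

```python
import numpy as np

def gpd_moment_fit(exceedances):
    # Method-of-moments estimates for the generalized Pareto distribution
    # (valid when the true shape xi < 1/2, so the variance exists).
    m, s2 = exceedances.mean(), exceedances.var()
    xi = 0.5 * (1.0 - m * m / s2)
    beta = 0.5 * m * (m * m / s2 + 1.0)
    return xi, beta

def evt_var(losses, u, p):
    # Peaks-over-threshold: fit a GPD to exceedances over threshold u,
    # then invert the tail estimator for the p-quantile of the losses.
    exc = losses[losses > u] - u
    xi, beta = gpd_moment_fit(exc)
    n, n_u = len(losses), len(exc)
    return u + (beta / xi) * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=5000)   # heavy-tailed pseudo-losses
u = np.quantile(losses, 0.95)              # threshold at the 95th percentile
var99 = evt_var(losses, u, 0.99)           # 99% Value-at-Risk estimate
```

    In practice maximum likelihood is usually preferred for the GPD fit, and the threshold is chosen by diagnostics such as the mean excess plot.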
  16. By: Vladimir Filimonov; Didier Sornette
    Abstract: We present a simple transformation of the log-periodic power law formula of the Johansen-Ledoit-Sornette model of financial bubbles that reduces it to a function of only three nonlinear parameters. The transformation significantly decreases the complexity of the fitting procedure and greatly improves its stability, because the modified cost function has good smoothness properties with, in general, a single minimum when the model is appropriate to the empirical data. We complement the approach with an additional subordination procedure that slaves two of the nonlinear parameters to what can be considered the most crucial nonlinear parameter, the critical time $t_c$, defined as the end of the bubble and the most probable time for a crash to occur. This further decreases the complexity of the search and provides an intuitive representation of the calibration results. With our proposed methodology, metaheuristic searches are no longer necessary and one can rely solely on rigorous, controlled local search algorithms, leading to a dramatic increase in efficiency. Empirical tests on the Shanghai Composite index (SSE) from January 2007 to March 2008 illustrate our findings.
    Date: 2011–07
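    [Editor's note: a minimal sketch of the reduction the abstract describes — expanding the log-periodic term of the JLS formula so that, for fixed nonlinear parameters (t_c, m, omega), the four remaining parameters enter linearly and are concentrated out by least squares. The synthetic trajectory and parameter values below are illustrative assumptions.]

```python
import numpy as np

def lppl_cost(t, logp, tc, m, w):
    # For fixed nonlinear parameters (tc, m, w), write
    #   log p(t) = A + B*(tc-t)^m + C1*(tc-t)^m*cos(w*ln(tc-t)) + C2*(tc-t)^m*sin(w*ln(tc-t))
    # and concentrate A, B, C1, C2 out by ordinary least squares.
    dt = tc - t
    f = dt ** m
    X = np.column_stack([
        np.ones_like(t),             # A
        f,                           # B
        f * np.cos(w * np.log(dt)),  # C1
        f * np.sin(w * np.log(dt)),  # C2
    ])
    coef, *_ = np.linalg.lstsq(X, logp, rcond=None)
    return np.sum((logp - X @ coef) ** 2), coef

# Synthetic noiseless bubble trajectory (parameter values are illustrative).
t = np.linspace(0.0, 0.95, 200)
tc, m, w = 1.0, 0.5, 8.0
dt = tc - t
logp = 1.0 - 0.5 * dt**m + 0.1 * dt**m * np.cos(w * np.log(dt))
sse, coef = lppl_cost(t, logp, tc, m, w)   # recovers A=1, B=-0.5, C1=0.1, C2=0
```

    The outer optimization then searches only over (tc, m, w), which is what makes controlled local search feasible in place of metaheuristics.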
  17. By: Alicia Rambaldi (School of Economics, The University of Queensland); Prasada Rao (School of Economics, The University of Queensland)
    Abstract: Hedonic housing price indices are computed from estimated hedonic pricing models. The commonly used time dummy hedonic model and the rolling window hedonic model fail to account for changing consumer preferences over hedonic characteristics, and typically these models do not account for the presence of spatial correlation in prices reflecting the role of locational characteristics. This paper develops a class of models with time-varying hedonic coefficients and spatially correlated errors, provides an assessment of their predictive performance compared to the commonly used hedonic models, and constructs and compares corresponding price index series. Alternative weighting systems, plutocratic versus democratic, are considered for the class of hedonic imputed price indices. Accounting for seasonality in house sales data, monthly chained indices and annual chained indices based on averages of year-on-year monthly indexes are presented. The empirical results are based on property sales data for Brisbane, Australia over the period 1985 to 2005. On the basis of the root mean square prediction error criterion, the time-varying parameter model with spatial errors is found to be the best-performing model and the rolling-window model the worst.
    Date: 2011
  18. By: Toshiaki Watanabe
    Abstract: This article applies the realized GARCH model, which combines the GARCH model with realized volatility (RV), to quantile forecasts of financial returns such as Value-at-Risk and expected shortfall. This model has certain advantages for quantile forecasting because it can adjust the bias of RV caused by microstructure noise and non-trading hours, and it enables us to estimate the parameters of the return distribution jointly with the other parameters. Student's t and skewed Student's t distributions as well as the normal distribution are used for the return distribution. The EGARCH model is used for comparison. The main results for the S&P 500 stock index are: (1) the realized GARCH model with the skewed Student's t-distribution performs better than that with the normal and Student's t-distributions and than the EGARCH model using daily returns only, and (2) the performance does not improve if the realized kernel, which takes account of microstructure noise, is used instead of the plain realized volatility, implying that the realized GARCH model can adjust the bias of RV caused by microstructure noise.
    Keywords: Expected shortfall, GARCH, Realized volatility, Skewed Student's t-distribution, Value-at-Risk
    JEL: C52 C53
    Date: 2011–07
  19. By: Gumbau-Brisa, Fabià; Lie, Denny; Olivei, Giovanni P.
    Abstract: In their 2010 comment (which we refer to as CS10), Cogley and Sbordone argue that: (i) our estimates are not entirely closed form, and hence are arbitrary; (ii) we cannot guarantee that our estimates are valid, while their estimates (Cogley and Sbordone 2008, henceforth CS08) always are; and (iii) the estimates in CS08, in terms of goodness of fit, are just as good as other, much different estimates in our paper. We show in this reply that the exact closed-form estimates are virtually the same as the "quasi" closed-form estimates. Our estimates are consistent with the implicit assumptions underlying the first-stage VAR used to form expectations, while the estimates in CS08 are not. As a result, the estimates in CS08 point towards model misspecification. We also rebut the goodness of fit comparisons in CS10, and provide a more credible exercise that illustrates that our estimates outperform CS08's estimates.
    Keywords: time-varying trend inflation; forward-looking Euler equation; New Keynesian Phillips curve; model-consistent expectations; closed form
    Date: 2011–06
  20. By: Wanfeng Yan; Reda Rebib; Ryan Woodard; Didier Sornette
    Abstract: Financial markets are well known for their dramatic dynamics and consequences that affect much of the world's population. Consequently, much research has aimed at understanding, identifying and forecasting crashes and rebounds in financial markets. The Johansen-Ledoit-Sornette (JLS) model provides an operational framework to understand and diagnose financial bubbles from rational expectations and was recently extended to negative bubbles and rebounds. Using the JLS model, we develop an alarm index based on an advanced pattern recognition method with the aim of detecting bubbles and performing forecasts of market crashes and rebounds. Testing our methodology on 10 major global equity markets, we show quantitatively that our developed alarm performs much better than chance in forecasting market crashes and rebounds. We use the derived signal to develop elementary trading strategies that produce statistically better performances than a simple buy and hold strategy.
    Date: 2011–07

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.