nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒09‒24
eleven papers chosen by
Sune Karlsson
Orebro University

  1. Local linear regression for functional predictor and scalar response By Amparo Baillo; Aurea Grane
  2. Large Panels with Common Factors and Spatial Correlations By Pesaran, M.H.; Tosetti, E.
  3. Aggregation of regional economic time series with different spatial correlation structures By Giuseppe Arbia; Marco Bee; Giuseppe Espa
  4. Identification and Estimation in an Incoherent Model of Contagion By Massacci, D.
  5. Averaging forecasts from VARs with uncertain instabilities By Todd E. Clark; Michael W. McCracken
  6. Wald Tests of I(1) against I(d) alternatives: some new properties and an extension to processes with trending components By Juan Jose Dolado; Jesús Gonzalo; Laura Mayoral
  7. Forecasting with small macroeconomic VARs in the presence of instabilities By Todd E. Clark; Michael W. McCracken
  8. Volatility Proxies for Discrete Time Models By de Vilder, Robin G.; Visser, Marcel P.
  9. Do Instrumental Variables Belong in Propensity Scores? By Jay Bhattacharya; William B. Vogt
  10. Global Identification In Nonlinear Semiparametric Models By Ivana Komunjer
  11. A specification analysis of discrete-time no-arbitrage term structure models with observable and unobservable factors By Marcello, Pericoli; Marco, Taboga

  1. By: Amparo Baillo; Aurea Grane
    Abstract: The aim of this work is to introduce a new nonparametric regression technique in the context of functional covariate and scalar response. We propose a local linear regression estimator and study its asymptotic behaviour. Its finite-sample performance is compared with a Nadaraya-Watson type kernel regression estimator via a Monte Carlo study and the analysis of two real data sets. In all the scenarios considered, the local linear regression estimator performs better than the kernel one, in the sense that the mean squared prediction error and its standard deviation are lower.
    Date: 2007–08
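    For readers unfamiliar with the Nadaraya-Watson benchmark used in the comparison, here is a minimal sketch for a functional covariate, with each curve sampled on a common grid. The distance, kernel, and bandwidth choices are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def nw_functional(X_train, y_train, x_new, h):
        """Nadaraya-Watson estimate for a functional covariate.

        X_train: (n, p) array, each row a curve sampled on a common grid.
        y_train: (n,) scalar responses.
        x_new:   (p,) new curve; h: bandwidth (illustrative choice).
        """
        # L2-type distance between the new curve and each training curve
        d = np.sqrt(((X_train - x_new) ** 2).mean(axis=1))
        w = np.exp(-0.5 * (d / h) ** 2)          # Gaussian kernel weights
        return float(np.dot(w, y_train) / w.sum())
    ```

    The local linear estimator studied in the paper extends this by fitting a weighted linear (rather than constant) approximation around the new curve.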
  2. By: Pesaran, M.H.; Tosetti, E.
    Abstract: This paper considers the statistical analysis of large panel data sets where even after conditioning on common observed effects the cross section units might remain dependently distributed. This could arise when the cross section units are subject to unobserved common effects and/or if there are spillover effects due to spatial or other forms of local dependencies. The paper provides an overview of the literature on cross section dependence, introduces the concepts of time-specific weak and strong cross section dependence and shows that the commonly used spatial models are examples of weak cross section dependence. It is then established that the Common Correlated Effects (CCE) estimator of panel data models with a multifactor error structure, recently advanced by Pesaran (2006), continues to provide consistent estimates of the slope coefficient, even in the presence of spatial error processes. Small sample properties of the CCE estimator under various patterns of cross section dependence, including spatial forms, are investigated by Monte Carlo experiments. Results show that the CCE approach works well in the presence of weak and/or strong cross sectionally correlated errors. We also explore the role of certain characteristics of spatial processes in determining the performance of CCE estimators, such as the form and intensity of spatial dependence, and the sparseness of the spatial weight matrix.
    Keywords: Panels, Common Correlated Effects, Strong and Weak Cross Section Dependence
    JEL: C10 C31 C33
    Date: 2007–08
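    The CCE idea is to use cross-section averages of the observables as proxies for the unobserved common factors and partial them out. A minimal sketch of the pooled CCE slope for a single regressor (array shapes and the simulation design are illustrative, not from the paper):

    ```python
    import numpy as np

    def cce_pooled(Y, X):
        """Pooled CCE slope (Pesaran 2006), one regressor.

        Y, X: (N, T) arrays (units by time). The cross-section averages
        of y and x proxy the unobserved common factors; they are
        partialled out, then the slope comes from pooled OLS on the
        residuals (Frisch-Waugh).
        """
        N, T = Y.shape
        # intercept plus cross-section averages at each date t
        H = np.column_stack([np.ones(T), Y.mean(axis=0), X.mean(axis=0)])
        M = np.eye(T) - H @ np.linalg.pinv(H)    # annihilator of H
        num = sum(X[i] @ M @ Y[i] for i in range(N))
        den = sum(X[i] @ M @ X[i] for i in range(N))
        return num / den
    ```

    In a simulated one-factor panel with heterogeneous loadings, this recovers the common slope even though the factor is never observed.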
  3. By: Giuseppe Arbia; Marco Bee; Giuseppe Espa
    Abstract: In this paper we compare the relative efficiency of different forecasting methods of space-time series when variables are spatially and temporally correlated. We consider the case of a space-time series aggregated into a single time series and the more general instance of a space-time series aggregated into a coarser spatial partition. We extend in various directions the outcomes found in the literature by including the consideration of larger datasets and the treatment of edge effects and of negative spatial correlation. The outcomes obtained provide operational suggestions on how to choose between alternative forecasting methods in empirical circumstances.
    Keywords: Spatial correlation, Aggregation, Forecast efficiency, Space–time models, Edge effects, Negative spatial correlation.
    JEL: C15 C21 C43 C53
    Date: 2007
  4. By: Massacci, D.
    Abstract: This paper deals with the issues of identification and estimation in the canonical model of contagion advanced in Pesaran and Pick (2007). The model is a two-equation nonlinear simultaneous equations system with endogenous dummy variables; it also represents an extension of univariate threshold autoregressive (TAR) models to a simultaneous equations framework. For a range of economic fundamentals, the model produces multiple (i.e. two) equilibria, and the choice of the equilibrium is modelled as being driven by a Bernoulli process; further, the presence of multiple equilibria leads to an incoherent econometric specification. The coherency issue is then reflected in the analytical expression for the likelihood function derived in the paper. It is proved that neither identification nor Full Information Maximum Likelihood (FIML) estimation of the model require knowledge of the Bernoulli process driving the solution choice in the multiple equilibria region. Monte Carlo experiments show that the FIML estimator performs better than the GIVE estimators proposed in Pesaran and Pick (2007). Finally, an empirical illustration based on stock market returns is provided.
    Keywords: Contagion, Identification, Estimation, Coherent Models, Threshold Models
    JEL: C10 C13 C15 C32 G10 G15
    Date: 2007–08
  5. By: Todd E. Clark; Michael W. McCracken
    Abstract: A body of recent work suggests commonly-used VAR models of output, inflation, and interest rates may be prone to instabilities. In the face of such instabilities, a variety of estimation or forecasting methods might be used to improve the accuracy of forecasts from a VAR. These methods include using different approaches to lag selection, different observation windows for estimation, (over-) differencing, intercept correction, stochastically time-varying parameters, break dating, discounted least squares, Bayesian shrinkage, and detrending of inflation and interest rates. Although each individual method could be useful, the uncertainty inherent in any single representation of instability could mean that combining forecasts from the entire range of VAR estimates will further improve forecast accuracy. Focusing on models of U.S. output, prices, and interest rates, this paper examines the effectiveness of combination in improving VAR forecasts made with real-time data. The combinations include simple averages, medians, trimmed means, and a number of weighted combinations, based on: Bates-Granger regressions, factor model estimates, regressions involving forecast quartiles, Bayesian model averaging, and predictive least squares-based weighting. Our goal is to identify those approaches that, in real time, yield the most accurate forecasts of these variables. We use forecasts from simple univariate time series models as benchmarks.
    Date: 2007
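    The simplest of the combination schemes compared in the paper — equal-weighted average, median, and trimmed mean — can be sketched directly. The trimming fraction here is an illustrative choice, not the paper's:

    ```python
    import numpy as np

    def combine_forecasts(forecasts, trim=0.2):
        """Simple combinations of m competing forecasts of one target.

        forecasts: sequence of m point forecasts.
        trim: fraction removed from each tail for the trimmed mean
              (illustrative value).
        """
        F = np.sort(np.asarray(forecasts, dtype=float))
        k = int(len(F) * trim)                 # observations cut per tail
        trimmed = F[k:len(F) - k]
        return {"mean": float(F.mean()),
                "median": float(np.median(F)),
                "trimmed_mean": float(trimmed.mean())}
    ```

    The trimmed mean and median discard outlying model forecasts, which is one reason such simple combinations can be hard to beat in the presence of instabilities.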
  6. By: Juan Jose Dolado; Jesús Gonzalo; Laura Mayoral
    Abstract: This paper analyses the power properties, under fixed alternatives, of a Wald-type test, i.e., the (Efficient) Fractional Dickey-Fuller (EFDF) test of I(1) against I(d), d&lt;1, relative to LM tests. Further, it extends the implementation of the EFDF test to the presence of deterministic trending components in the DGP. Tests of these hypotheses are important in many macroeconomic applications where it is crucial to distinguish between permanent and transitory shocks, because shocks die out in I(d) processes with d&lt;1. We show that the EFDF test is simple to implement in these situations and argue that, under fixed alternatives, it has better power properties than LM tests. Finally, an empirical application is provided where the EFDF approach allowing for deterministic components is used to test for long memory in the per-capita GDP of several OECD countries, an issue that has important consequences for discriminating between alternative growth theories.
    Date: 2007–06
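    The building block of fractional Dickey-Fuller type tests is the fractional difference filter (1 - L)^d, applied via its binomial expansion. A minimal sketch using the standard recursion for the expansion weights (truncated at the available sample, an illustrative simplification):

    ```python
    import numpy as np

    def frac_diff(x, d):
        """Apply (1 - L)^d to a series via the truncated binomial
        expansion; weights follow the recursion
        w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
        n = len(x)
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        # expanding-window convolution: use only the observed history
        return np.array([np.dot(w[:t + 1][::-1], x[:t + 1])
                         for t in range(n)])
    ```

    With d = 1 the filter reduces to first differences, which is a quick sanity check; the EFDF regression then tests I(1) against I(d) by regressing the first difference on a fractionally differenced lag.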
  7. By: Todd E. Clark; Michael W. McCracken
    Abstract: Small-scale VARs are widely used in macroeconomics for forecasting U.S. output, prices, and interest rates. However, recent work suggests these models may exhibit instabilities. As such, a variety of estimation or forecasting methods might be used to improve their forecast accuracy. These include using different observation windows for estimation, intercept correction, time-varying parameters, break dating, Bayesian shrinkage, model averaging, etc. This paper compares the effectiveness of such methods in real time forecasting. We use forecasts from univariate time series models, the Survey of Professional Forecasters and the Federal Reserve Board's Greenbook as benchmarks.
    Date: 2007
  8. By: de Vilder, Robin G.; Visser, Marcel P.
    Abstract: Discrete time volatility models typically employ a latent scale factor to represent volatility. High frequency data may be used to construct proxies for these scale factors. Examples are the intraday high-low range and the realized volatility. This paper develops a method for ranking and optimizing volatility proxies. It is possible to outperform the quadratic variation as a proxy for the discrete time scale factor. For the S&P 500 index data over the years 1988-2006 this is achieved by a proxy which puts, among other things, more weight on the highs than on the lows over intraday intervals.
    Keywords: volatility proxy; realized volatility; quadratic variation; scale factor; arch/garch/stochastic volatility; intraday seasonality
    JEL: C65 C52 C22
    Date: 2007–09–14
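    Two of the standard proxies mentioned in the abstract — realized volatility from squared intraday returns and the high-low range (here in its Parkinson-scaled form, an illustrative normalization) — can be computed in a few lines:

    ```python
    import numpy as np

    def realized_vol(intraday_returns):
        """Realized volatility: square root of the sum of squared
        intraday returns, a proxy for the daily scale factor."""
        r = np.asarray(intraday_returns, dtype=float)
        return float(np.sqrt((r ** 2).sum()))

    def parkinson_range(high, low):
        """High-low range proxy, scaled by sqrt(4 ln 2) (Parkinson)
        so it is unbiased for Brownian motion."""
        return float((np.log(high) - np.log(low)) / np.sqrt(4 * np.log(2)))
    ```

    The paper's point is that such proxies can themselves be ranked and optimized; for example, weighting highs and lows asymmetrically can beat the quadratic-variation proxy.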
  9. By: Jay Bhattacharya; William B. Vogt
    Abstract: Propensity score matching is a popular way to make causal inferences about a binary treatment in observational data. The validity of these methods depends on which variables are used to predict the propensity score. We ask: "Absent strong ignorability, what would be the effect of including an instrumental variable in the predictor set of a propensity score matching estimator?" In the case of linear adjustment, using an instrumental variable as a predictor variable for the propensity score yields greater inconsistency than the naive estimator. This additional inconsistency is increasing in the predictive power of the instrument. In the case of stratification, with a strong instrument, propensity score matching yields greater inconsistency than the naive estimator. Since the propensity score matching estimator with the instrument in the predictor set is both more biased and more variable than the naive estimator, it is conceivable that the confidence intervals for the matching estimator would have greater coverage rates. In a Monte Carlo simulation, we show that this need not be the case. Our results are further illustrated with two empirical examples: one, the Tennessee STAR experiment, with a strong instrument and the other, the Connors' (1996) Swan-Ganz catheterization dataset, with a weak instrument.
    JEL: C1 I1 I2
    Date: 2007–09
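    The bias-amplification result for the linear-adjustment case can be illustrated with a small simulation. The data-generating process below is a made-up example, not the paper's design: the instrument z moves treatment but not the outcome directly, and the confounder u is unobserved.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50_000
    u = rng.normal(size=n)                  # unobserved confounder
    z = rng.normal(size=n)                  # instrument: affects treatment only
    t = (z + u + rng.normal(size=n) > 0).astype(float)
    y = 1.0 * t + u + rng.normal(size=n)    # true treatment effect = 1.0

    def ols_slope(y, X):
        """OLS coefficient on the first column of X (intercept appended)."""
        X = np.column_stack([X, np.ones(len(y))])
        return float(np.linalg.lstsq(X, y, rcond=None)[0][0])

    tau_naive = y[t == 1].mean() - y[t == 0].mean()   # naive estimator
    tau_adj = ols_slope(y, np.column_stack([t, z]))   # adjusts for the instrument
    # With this DGP both estimates are biased upward, and the one that
    # conditions on the instrument z is biased further from 1.0.
    ```

    Intuitively, residualizing treatment on z shrinks the variance of treatment without shrinking its covariance with the confounder, so the confounding bias is amplified.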
  10. By: Ivana Komunjer (Department of Economics, University of California - San Diego)
    Abstract: This note derives primitive conditions for global identification in nonlinear simultaneous equations systems. Identification is semiparametric in the sense that the latent structural disturbance is only known to satisfy a number of orthogonality restrictions with respect to observed instruments. Our contribution to the literature on identification in a semiparametric context is twofold. First, we derive a set of unconditional moment restrictions on the observables that are the starting point for identification in nonlinear structural systems. Second, we provide primitive conditions under which a parameter value that solves those restrictions is unique.
    Keywords: identification, structural systems, multiple equilibria, semiparametric models
    Date: 2007–07–01
  11. By: Pericoli, Marcello; Taboga, Marco
    Abstract: We derive a canonical representation for the no-arbitrage discrete-time term structure models with both observable and unobservable state variables, popularized by Ang and Piazzesi (2003). We conduct a specification analysis based on this canonical representation. We show that some of the restrictions commonly imposed in the literature, most notably that of independence between observable and unobservable variables, are not necessary for identification and are rejected by formal statistical tests. Furthermore, we show that there are important differences between the estimated risk premia, impulse response functions and variance decomposition of unrestricted models, parametrized according to our canonical representation, and those of models with overidentifying restrictions.
    Keywords: Term structure; canonical models
    JEL: G12
    Date: 2005–03

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.