nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒08‒02
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Simple Robust Tests for the Specification of High-Frequency Predictors of a Low-Frequency Series By J. Isaac Miller
  2. Testing the Equality of Two Positive-Definite Matrices with Application to Information Matrix Testing By JIN SEO CHO; HALBERT WHITE
  3. A note on approximating moments of least squares estimators By Liu-Evans, Gareth
  4. An Empirical Model of Network Formation: Detecting Homophily When Agents Are Heterogenous By Bryan S. Graham
  5. Deviance Information Criterion for Comparing VAR Models By Tao Zeng; Yong Li; Jun Yu
  6. Finite Population Causal Standard Errors By Alberto Abadie; Susan Athey; Guido W. Imbens; Jeffrey M. Wooldridge
  7. On the Invertibility of EGARCH By Guillaume Gaetan Martinet; Michael McAleer
  8. Density forecasts with MIDAS models By Knut Are Aastveit; Claudia Foroni; Francesco Ravazzolo
  9. Probabilistic load forecasting via Quantile Regression Averaging of independent expert forecasts By Tao Hong; Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron
  10. Estimation and model selection for left-truncated and right-censored lifetime data with application to electric power transformers analysis By Emura, Takeshi; Shiu, Shau-Kai
  11. Broken or Fixed Effects? By Charles E. Gibbons; Juan Carlos Suárez Serrato; Michael B. Urbancic
  12. Measuring spatial effects in presence of institutional constraints: the case of Italian Local Health Authority expenditure By Vincenzo Atella; Federico Belotti; Domenico Depalo; Andrea Piano Mortari
  13. Bayesian Analysis of Bubbles in Asset Prices By Andras Fulop; Jun Yu

  1. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: I propose two simple variable addition test statistics for three tests of the specification of high-frequency predictors in a model to forecast a series observed at a lower frequency. The first is similar to existing test statistics and I show that it is robust to biased forecasts, integrated and cointegrated predictors, and deterministic trends, while it is feasible and consistent even if estimation is not feasible under the alternative. It is not robust to biased forecasts with integrated predictors under the null of a fully aggregated predictor, and size distortion may be severe in this case. The second test statistic proposed is an easily implemented modification of the first that sacrifices some power in small samples but is also robust to this case.
    Keywords: temporal aggregation, mixed-frequency model, MIDAS, variable addition test, forecasting model comparison
    JEL: C12 C22
    Date: 2014–07–14
  2. By: JIN SEO CHO (Yonsei University); HALBERT WHITE (University of California, San Diego)
    Abstract: We provide a new characterization of the equality of two positive-definite matrices A and B, and we use this to propose several new computationally convenient statistical tests for the equality of two unknown positive-definite matrices. Our primary focus is on testing the information matrix equality (e.g., White, 1982, 1994). We characterize the asymptotic behavior of our new trace-determinant information matrix test statistics under the null and the alternative and investigate their finite-sample performance for a variety of models: linear regression, exponential duration, probit, and Tobit. The parametric bootstrap suggested by Horowitz (1994) delivers critical values that provide admirable level behavior, even in samples as small as n = 50. Our new tests often have better power than the parametric-bootstrap version of the traditional IMT; when they do not, they nevertheless perform respectably.
    Keywords: Matrix equality; Information matrix test; Eigenvalues; Trace; Determinant; Eigenspectrum test; Parametric Bootstrap.
    JEL: C01 C12 C52
    Date: 2014–07
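The trace-determinant idea behind the abstract above can be illustrated numerically: for positive-definite A and B, A = B exactly when C = B⁻¹A has tr(C) = n and det(C) = 1, since AM-GM equality then forces every eigenvalue of C to one. A minimal 2×2 sketch (illustrative only, not the authors' test statistic, which handles estimated matrices and sampling error):

```python
def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace_det_equal(A, B, tol=1e-10):
    """For positive-definite A, B:  A == B  iff  C = B^{-1} A
    satisfies tr(C) = 2 and det(C) = 1 (AM-GM equality case)."""
    C = matmul2(inv2(B), A)
    tr = C[0][0] + C[1][1]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return abs(tr - 2) < tol and abs(det - 1) < tol

A = [[2.0, 0.5], [0.5, 1.0]]
B = [[2.0, 0.0], [0.0, 1.0]]
print(trace_det_equal(A, A))  # True
print(trace_det_equal(A, B))  # False: tr(C) = 2 but det(C) = 0.875
```

The second call shows why both statistics are needed: the trace condition alone is satisfied even though A ≠ B.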
  3. By: Liu-Evans, Gareth
    Abstract: Results are presented for approximating the moments of least squares estimators, particularly those of the OLS estimator, and the methodology is illustrated using a simple dynamic model.
    Keywords: asymptotic approximation, bias, least squares, time series, simultaneity
    JEL: C10 C13
    Date: 2014–07–24
  4. By: Bryan S. Graham
    Abstract: I formalize a widely-used empirical model of network formation. The model allows for assortative matching on observables (homophily) as well as unobserved agent level heterogeneity in link surplus (degree heterogeneity). The joint distribution of observed and unobserved agent-level characteristics is left unrestricted. Inferences about homophily do not depend upon untestable assumptions about this distribution. The model is non-standard since the dimension of the heterogeneity parameter grows with the number of agents, and hence network size. Nevertheless, under certain conditions, a joint maximum likelihood (ML) procedure, which simultaneously estimates the common and agent-level parameters governing link formation, is consistent. Although the asymptotic sampling distribution of the common parameter is Normal, it (i) contains a bias term and (ii) its variance does not coincide with the inverse of Fisher's information matrix. Standard ML asymptotic inference procedures are invalid. Forming confidence intervals with a bias-corrected maximum likelihood estimate, and appropriate standard error estimates, results in correct coverage. I assess the value of these results for understanding finite sample behavior via a set of Monte Carlo experiments and through an empirical analysis of risk-sharing links in a rural Tanzanian village (cf., De Weerdt, 2004).
    JEL: C31 C35
    Date: 2014–07
  5. By: Tao Zeng (Singapore Management University); Yong Li (Renmin University of China); Jun Yu (Singapore Management University, School of Economics)
    Abstract: Vector Autoregression (VAR) has been a standard empirical tool in macroeconomics and finance. In this paper we discuss how to compare alternative VAR models after they are estimated by Bayesian MCMC methods. In particular, we apply a robust version of the deviance information criterion (RDIC), recently developed in Li et al. (2014b), to determine the best candidate model. RDIC is a better information criterion than the widely used deviance information criterion (DIC) when latent variables are involved in candidate models. Empirical analysis using US data shows that the optimal model selected by RDIC can differ from that selected by DIC.
    Keywords: Bayes factor; DIC; VAR models; Markov chain Monte Carlo
    JEL: C11 C12 G12
    Date: 2014–06
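For readers unfamiliar with the criterion that RDIC refines: DIC is computed directly from posterior draws as DIC = D(θ̄) + 2·p_D, where D(θ) = −2·log p(y|θ) and p_D = D̄ − D(θ̄). A stdlib-only sketch for a toy Gaussian-mean model — not the VAR setting, and not the robust RDIC of Li et al.:

```python
import math
import random

def log_lik(theta, y):
    """Gaussian log-likelihood of y with mean theta, known unit variance."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (yi - theta) ** 2
               for yi in y)

def dic(draws, y):
    """DIC = D(theta_bar) + 2 * p_D, with p_D = mean(D) - D(theta_bar);
    p_D is the 'effective number of parameters'."""
    theta_bar = sum(draws) / len(draws)
    d_bar = sum(-2.0 * log_lik(t, y) for t in draws) / len(draws)
    d_at_mean = -2.0 * log_lik(theta_bar, y)
    p_d = d_bar - d_at_mean
    return d_at_mean + 2.0 * p_d, p_d

random.seed(0)
y = [random.gauss(0.3, 1.0) for _ in range(50)]
# stand-in posterior draws: flat prior implies theta | y ~ N(ybar, 1/n)
ybar = sum(y) / len(y)
draws = [random.gauss(ybar, 1.0 / math.sqrt(len(y))) for _ in range(2000)]
dic_value, p_d = dic(draws, y)   # p_d should be close to 1 here
```

With one free parameter, p_D comes out near 1, which is the sanity check the construction invites; the RDIC of the paper modifies this recipe when the candidate models contain latent variables.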
  6. By: Alberto Abadie; Susan Athey; Guido W. Imbens; Jeffrey M. Wooldridge
    Abstract: When a researcher estimates the parameters of a regression function using information on all 50 states in the United States, or information on all visits to a website, what is the interpretation of the standard errors? Researchers typically report standard errors that are designed to capture sampling variation, based on viewing the data as a random sample drawn from a large population of interest, even in applications where it is difficult to articulate what that population of interest is and how it differs from the sample. In this paper we explore alternative interpretations for the uncertainty associated with regression estimates. As a leading example we focus on the case where some parameters of the regression function are intended to capture causal effects. We derive standard errors for causal effects using a generalization of randomization inference. Intuitively, these standard errors capture the fact that even if we observe outcomes for all units in the population of interest, there are for each unit missing potential outcomes for the treatment levels the unit was not exposed to. We show that our randomization-based standard errors in general are smaller than the conventional robust standard errors, and provide conditions under which they agree with them. More generally, correct statistical inference requires precise characterizations of the population of interest, the parameters that we aim to estimate within such population, and the sampling process. Estimation of causal parameters is one example where appropriate inferential methods may differ from conventional practice, but there are others.
    JEL: C01 C18
    Date: 2014–07
  7. By: Guillaume Gaetan Martinet; Michael McAleer (University of Canterbury)
    Abstract: Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions. A limitation in the development of asymptotic properties of the QMLE for EGARCH is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH model can be derived from a stochastic process, for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH parameters.
    Keywords: Leverage, asymmetry, existence, stochastic process, asymptotic properties, invertibility
    JEL: C22 C52 C58 G32
    Date: 2014–07–26
  8. By: Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Claudia Foroni (Norges Bank (Central Bank of Norway)); Francesco Ravazzolo (Norges Bank (Central Bank of Norway))
    Abstract: In this paper we derive a general parametric bootstrapping approach to compute density forecasts for various types of mixed-data sampling (MIDAS) regressions. We consider both classical and unrestricted MIDAS regressions, with and without an autoregressive component. First, we compare the forecasting performance of the different MIDAS models in Monte Carlo simulation experiments. We find that the results in terms of point and density forecasts are coherent. Moreover, the results do not clearly indicate a superior performance of any one of the models under scrutiny when the persistence of the low-frequency variable is low. Differences are more evident when the persistence is high, in which case the AR-MIDAS and AR-U-MIDAS models produce better forecasts. Second, in an empirical exercise we evaluate density forecasts for quarterly US output growth, exploiting information from typical monthly series. We find that MIDAS models applied to survey data provide accurate and timely density forecasts.
    Keywords: Mixed data sampling, Density forecasts, Nowcasting
    JEL: C11 C53 E37
    Date: 2014–07–18
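The parametric-bootstrap idea — simulate the one-step-ahead predictive density by re-drawing errors at the fitted scale — can be sketched for a plain AR(1) standing in for the (AR-)MIDAS regressions of the paper; the data-generating process and function names below are invented for illustration:

```python
import math
import random

def fit_ar1(y):
    """OLS fit of y_t = a + b*y_{t-1} + e_t; returns a, b, residual sd."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    b = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
         / sum((xi - mx) ** 2 for xi in x))
    a = mz - b * mx
    resid = [zi - a - b * xi for xi, zi in zip(x, z)]
    sigma = math.sqrt(sum(r * r for r in resid) / n)
    return a, b, sigma

def bootstrap_density(y, n_boot=2000, seed=1):
    """One-step-ahead predictive draws: re-simulate the fitted AR(1)
    with Gaussian errors at the estimated residual scale."""
    a, b, sigma = fit_ar1(y)
    rng = random.Random(seed)
    return [a + b * y[-1] + rng.gauss(0.0, sigma) for _ in range(n_boot)]

random.seed(42)
y = [0.0]
for _ in range(200):
    y.append(0.2 + 0.5 * y[-1] + random.gauss(0.0, 1.0))
draws = sorted(bootstrap_density(y))
lo, hi = draws[100], draws[1899]   # roughly a 90% predictive interval
```

The empirical quantiles of the bootstrap draws then serve as the density forecast; the paper's contribution is working this recipe out for the mixed-frequency regressors that an AR(1) lacks.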
  9. By: Tao Hong; Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron
    Abstract: Probabilistic load forecasting is becoming crucial in today's power systems planning and operations. We propose a novel methodology to compute interval forecasts of electricity demand, which applies a Quantile Regression Averaging (QRA) technique to a set of independent expert point forecasts. We demonstrate the effectiveness of the proposed methodology using data from the hierarchical load forecasting track of the Global Energy Forecasting Competition 2012. The results show that the new method is able to provide better prediction intervals than four benchmark models for the majority of the load zones and the aggregated level.
    Keywords: Electric load; Probabilistic forecasting; Prediction interval; Quantile regression; Forecast combination; Expert forecast
    JEL: C22 C32 C38 C53 Q47
    Date: 2014–07–15
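Quantile Regression Averaging regresses the observed load on the experts' point forecasts at a chosen quantile level. A stdlib-only sketch that replaces the usual linear-programming quantile-regression solver with a coarse grid search over two expert weights; the load series and expert behavior are invented for illustration:

```python
import math
import random

def pinball(q, y, yhat):
    """Pinball (check) loss at quantile level q."""
    total = 0.0
    for yi, fi in zip(y, yhat):
        u = yi - fi
        total += q * u if u >= 0 else (q - 1.0) * u
    return total

def qra(q, y, f1, f2, step=0.05, top=1.5):
    """Quantile Regression Averaging by brute-force grid search:
    weights (w1, w2) on two expert forecasts minimising pinball loss."""
    grid = [i * step for i in range(int(round(top / step)) + 1)]
    best = None
    for w1 in grid:
        for w2 in grid:
            fc = [w1 * a + w2 * b for a, b in zip(f1, f2)]
            loss = pinball(q, y, fc)
            if best is None or loss < best[0]:
                best = (loss, w1, w2)
    return best

random.seed(11)
n = 200
s = [100 + 10 * math.sin(t / 10) for t in range(n)]   # underlying load level
y = [si + random.gauss(0, 5) for si in s]             # actual load
f1 = [si + random.gauss(0, 2) for si in s]            # expert 1: unbiased
f2 = [0.95 * si + random.gauss(0, 3) for si in s]     # expert 2: biased low
loss, w1, w2 = qra(0.9, y, f1, f2)
upper = [w1 * a + w2 * b for a, b in zip(f1, f2)]
coverage = sum(yi <= ui for yi, ui in zip(y, upper)) / n  # roughly q in sample
```

Fitting one such combination per quantile level (e.g. q = 0.05 and q = 0.95) yields the interval forecasts the abstract describes; in practice the weights would be estimated by a proper quantile-regression routine rather than a grid.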
  10. By: Emura, Takeshi; Shiu, Shau-Kai
    Abstract: In the lifetime analysis of electric transformers, maximum likelihood estimation has been proposed using the EM algorithm. However, it is not clear whether the EM algorithm offers a better solution than the simpler Newton-Raphson algorithm. In this paper, the first objective is a systematic comparison of the EM algorithm with the Newton-Raphson algorithm in terms of convergence performance. The second objective is to examine, via simulations, the performance of Akaike's information criterion (AIC) for selecting a suitable distribution among candidate models. The methods are illustrated using the electric power transformer dataset.
    Keywords: Akaike's information criterion; EM algorithm; lognormal distribution; Newton-Raphson algorithm; Weibull distribution; Reliability
    JEL: C34
    Date: 2014–07–24
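The Newton-Raphson side of the comparison is easy to make concrete in the simplest censored model, where the estimator also has a closed form to check against. A sketch for a right-censored exponential lifetime — deliberately simpler than the left-truncated lognormal/Weibull models of the paper:

```python
import random

def nr_exponential_mle(times, events, tol=1e-12, max_iter=100):
    """Newton-Raphson MLE of the exponential rate under right censoring.
    log L(lam) = d * log(lam) - lam * T,  d = #events, T = total time."""
    d = sum(events)
    T = sum(times)
    lam = len(times) / T          # starting value (converges for lam < 2d/T)
    for _ in range(max_iter):
        score = d / lam - T       # first derivative of log L
        hess = -d / lam ** 2      # second derivative of log L
        step = score / hess
        lam -= step
        if abs(step) < tol:
            break
    return lam

random.seed(7)
true_rate, censor_at = 0.5, 3.0
raw = [random.expovariate(true_rate) for _ in range(500)]
times = [min(t, censor_at) for t in raw]
events = [1 if t < censor_at else 0 for t in raw]
lam_hat = nr_exponential_mle(times, events)
# this model has the closed form d / T, so Newton-Raphson can be verified
```

In richer models such as the lognormal or Weibull no closed form exists, and the choice between Newton-Raphson and EM — the paper's subject — becomes a genuine question of convergence behavior.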
  11. By: Charles E. Gibbons; Juan Carlos Suárez Serrato; Michael B. Urbancic
    Abstract: This paper provides empirical evidence of an established theoretical result: in the presence of heterogeneous treatment effects, OLS is generally not a consistent estimator of the sample-weighted average treatment effect (SWE). We propose two alternative estimators that do recover the SWE in the presence of group-specific heterogeneity, and we derive tests to detect heterogeneous treatment effects and to distinguish between the OLS and SWE estimates. By extending eight influential papers, we document that heterogeneous treatment effects are common and that the SWE is often statistically and economically different from the OLS estimate. In all but one paper, there is statistically significant treatment effect heterogeneity; in five, the SWE is statistically different from the OLS estimate; and in five, the SWE and OLS estimates are economically different.
    JEL: C18 C21
    Date: 2014–07
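The wedge between OLS and the sample-weighted effect arises because fixed-effects OLS weights each group's effect by its within-group treatment variance, not by its share of the sample. A stdlib-only simulation with toy data (not the authors' estimators or tests):

```python
import random

def fe_ols(y, d, g):
    """Fixed-effects OLS slope on treatment d, controlling for group
    dummies, via within-group demeaning (Frisch-Waugh)."""
    dd, yy = [], []
    for grp in set(g):
        idx = [i for i in range(len(g)) if g[i] == grp]
        md = sum(d[i] for i in idx) / len(idx)
        my = sum(y[i] for i in idx) / len(idx)
        dd.extend(d[i] - md for i in idx)
        yy.extend(y[i] - my for i in idx)
    return sum(a * b for a, b in zip(dd, yy)) / sum(a * a for a in dd)

random.seed(3)
y, d, g = [], [], []
# group A: effect 1, half treated; group B: effect 5, one in ten treated
for grp, effect, p_treat in [("A", 1.0, 0.5), ("B", 5.0, 0.1)]:
    for _ in range(400):
        ti = 1 if random.random() < p_treat else 0
        y.append(effect * ti + random.gauss(0.0, 0.5))
        d.append(ti)
        g.append(grp)
swe = 0.5 * 1.0 + 0.5 * 5.0   # true sample-weighted effect, equal group sizes
ols = fe_ols(y, d, g)         # OLS weights groups by treatment variance,
                              # so it is pulled toward group A's effect
```

With these numbers OLS lands well below 3 because group A's treatment variance (0.25) dominates group B's (0.09), which is exactly the inconsistency the abstract documents.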
  12. By: Vincenzo Atella (Department of Economics and Finance and CEIS Tor Vergata, CHP-PCOR); Federico Belotti (CEIS Tor Vergata); Domenico Depalo (Bank of Italy); Andrea Piano Mortari (CEIS Tor Vergata)
    Abstract: Spatial econometric models are now an established tool for measuring spillover effects between geographical entities. However, when entities share common borders but are subject to different institutional frameworks, the conclusions may be misleading unless this is taken into account. Under these circumstances, where institutional arrangements play a role, we should expect to find spatial effects mainly among entities within the same institutional setting, while the effect across different institutional settings should be small or nil even where the entities share a common border. In this case, factoring in only geographical proximity will produce biased estimates, due to the combination of two distinct effects. To avoid these problems, we derive a methodology that partitions the standard contiguity matrix into within-contiguity and between-contiguity matrices, allowing separate estimation of the two spatial correlation coefficients and simple tests for the existence of institutional constraints. We then apply this methodology to Italian Local Health Authority expenditures, using spatial panel techniques. We find a high and significant spatial coefficient only for the within-contiguity effect, confirming the validity of our approach.
    Keywords: spatial, health expenditures, institutional setting, panel data
    JEL: H72 H51 C31
    Date: 2014–07
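The matrix partition the abstract describes is mechanical: links between units in the same institutional region go into the within matrix, links crossing a regional border into the between matrix, and the two sum back to the original contiguity matrix. A toy four-unit sketch:

```python
def partition_contiguity(W, region):
    """Split contiguity matrix W into within-region and between-region
    parts, so that W = W_within + W_between element-wise."""
    n = len(W)
    W_within = [[W[i][j] if region[i] == region[j] else 0
                 for j in range(n)] for i in range(n)]
    W_between = [[W[i][j] - W_within[i][j] for j in range(n)]
                 for i in range(n)]
    return W_within, W_between

# four units on a line (1-2-3-4); units 1, 2 in region "r1", units 3, 4 in "r2"
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
W_within, W_between = partition_contiguity(W, ["r1", "r1", "r2", "r2"])
# only the 2-3 border link survives in W_between
```

The spatial model can then carry one lag per matrix, so that the two correlation coefficients are estimated separately and their equality can be tested, as in the paper.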
  13. By: Andras Fulop (ESSEC Business School, Paris-Singapore); Jun Yu (Singapore Management University, School of Economics)
    Abstract: We develop a new asset price model where the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period, where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long-run mean; the latter is allowed to account for possible smooth structural change. The second regime reflects the bubble period, with explosive behavior. Stochastic switches between the two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and, at the same time, to learn about the states and the parameters sequentially, allowing for real-time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has better power for detecting bubbles than existing alternative procedures. Empirical analysis using price/dividend ratios of the S&P 500 highlights the advantages of our method.
    Keywords: Parameter Learning, Markov Switching, MCMC
    JEL: C11 C13 C32 G12
    Date: 2014–07

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.