
on Econometrics 
By:  J. Isaac Miller (Department of Economics, University of Missouri-Columbia) 
Abstract:  I propose two simple variable addition test statistics for three tests of the specification of high-frequency predictors in a model to forecast a series observed at a lower frequency. The first is similar to existing test statistics, and I show that it is robust to biased forecasts, integrated and cointegrated predictors, and deterministic trends, while it is feasible and consistent even if estimation is not feasible under the alternative. It is not robust to biased forecasts with integrated predictors under the null of a fully aggregated predictor, and size distortion may be severe in this case. The second test statistic is an easily implemented modification of the first that sacrifices some power in small samples but is robust to this case as well. 
Keywords:  temporal aggregation, mixed-frequency model, MIDAS, variable addition test, forecasting model comparison 
JEL:  C12 C22 
Date:  2014–07–14 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:1412&r=ecm 
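A generic variable addition test of this kind can be sketched as follows: regress the low-frequency series on the aggregated predictor, then add the disaggregated high-frequency components and compare residual sums of squares. The DGP, the flat aggregation weights, and the F-form below are illustrative assumptions, not the paper's actual statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 quarters of y, each driven by 3 monthly values of x.
n, m = 200, 3
X = rng.normal(size=(n, m))            # monthly observations within each quarter
x_agg = X.mean(axis=1)                 # fully aggregated (flat-weight) predictor
y = 0.8 * x_agg + rng.normal(size=n)   # null: flat aggregation is correct

def rss(y, Z):
    """Residual sum of squares from OLS of y on Z (with intercept)."""
    Z = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ beta
    return e @ e

# Restricted model: y on the aggregated predictor only.
rss_r = rss(y, x_agg)
# Unrestricted model: the individual monthly values (variable addition);
# x_agg lies in the column space of X, so the models are nested.
rss_u = rss(y, X)
q = m - 1                              # number of extra restrictions tested
k = m + 1                              # parameters in the unrestricted model
F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
```

Under the null the added monthly components carry no extra information, so F should be unremarkable; under misspecified aggregation it grows with the sample.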
By:  JIN SEO CHO (Yonsei University); HALBERT WHITE (University of California, San Diego) 
Abstract:  We provide a new characterization of the equality of two positive-definite matrices A and B, and we use this to propose several new computationally convenient statistical tests for the equality of two unknown positive-definite matrices. Our primary focus is on testing the information matrix equality (e.g., White, 1982, 1994). We characterize the asymptotic behavior of our new trace-determinant information matrix test statistics under the null and the alternative and investigate their finite-sample performance for a variety of models: linear regression, exponential duration, probit, and Tobit. The parametric bootstrap suggested by Horowitz (1994) delivers critical values that provide admirable level behavior, even in samples as small as n = 50. Our new tests often have better power than the parametric-bootstrap version of the traditional IMT; when they do not, they nevertheless perform respectably. 
Keywords:  Matrix equality; Information matrix test; Eigenvalues; Trace; Determinant; Eigenspectrum test; Parametric Bootstrap. 
JEL:  C01 C12 C52 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2014rwp67&r=ecm 
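The trace and determinant of M = B^{-1}A characterize equality of positive-definite matrices: the eigenvalues of M are real and positive, so by the AM-GM inequality tr(M)/n >= det(M)^{1/n}, with equality iff all eigenvalues coincide; combined with det(M) = 1 this forces M = I, i.e. A = B. A minimal numerical sketch of this idea (not the authors' test statistics, whose exact form is in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_det_gap(A, B):
    """Return (tr(M)/n - det(M)^{1/n}, det(M)) for M = B^{-1} A.
    Both quantities equal (0, 1) iff A = B, since M is similar to the
    symmetric positive-definite matrix B^{-1/2} A B^{-1/2}."""
    n = A.shape[0]
    M = np.linalg.solve(B, A)
    gap = np.trace(M) / n - np.linalg.det(M) ** (1.0 / n)
    return gap, np.linalg.det(M)

def random_spd(n):
    Q = rng.normal(size=(n, n))
    return Q @ Q.T + 0.5 * np.eye(n)   # well-conditioned SPD matrix

A = random_spd(4)
gap_eq, det_eq = trace_det_gap(A, A.copy())   # A = B: gap ~ 0, det ~ 1
B = random_spd(4)
gap_ne, det_ne = trace_det_gap(A, B)          # A != B: gap strictly positive
```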
By:  Liu-Evans, Gareth 
Abstract:  Results are presented for approximating the moments of least squares estimators, particularly those of the OLS estimator, and the methodology is illustrated using a simple dynamic model. 
Keywords:  asymptotic approximation, bias, least squares, time series, simultaneity 
JEL:  C10 C13 
Date:  2014–07–24 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:57543&r=ecm 
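As a concrete illustration of why such approximations matter, OLS is biased in finite samples even in the simplest dynamic model. The sketch below checks the Monte Carlo bias of the AR(1) coefficient against the familiar Kendall-type first-order approximation -(1+3*rho)/T for the model with intercept; the DGP and sample size are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

rho, T, reps = 0.5, 50, 4000
est = np.empty(reps)
for r in range(reps):
    eps = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + eps[t]
    # OLS with intercept: regress y_t on (1, y_{t-1})
    Z = np.column_stack([np.ones(T - 1), y[:-1]])
    b, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    est[r] = b[1]

mean_bias = est.mean() - rho           # negative: OLS underestimates rho
approx_bias = -(1 + 3 * rho) / T       # first-order asymptotic approximation
```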
By:  Bryan S. Graham 
Abstract:  I formalize a widely used empirical model of network formation. The model allows for assortative matching on observables (homophily) as well as unobserved agent-level heterogeneity in link surplus (degree heterogeneity). The joint distribution of observed and unobserved agent-level characteristics is left unrestricted. Inferences about homophily do not depend upon untestable assumptions about this distribution. The model is nonstandard since the dimension of the heterogeneity parameter grows with the number of agents, and hence with network size. Nevertheless, under certain conditions, a joint maximum likelihood (ML) procedure, which simultaneously estimates the common and agent-level parameters governing link formation, is consistent. Although the asymptotic sampling distribution of the common parameter is Normal, it (i) contains a bias term and (ii) its variance does not coincide with the inverse of Fisher's information matrix. Standard ML asymptotic inference procedures are invalid. Forming confidence intervals with a bias-corrected maximum likelihood estimate, and appropriate standard error estimates, results in correct coverage. I assess the value of these results for understanding finite-sample behavior via a set of Monte Carlo experiments and through an empirical analysis of risk-sharing links in a rural Tanzanian village (cf. De Weerdt, 2004). 
JEL:  C31 C35 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:20341&r=ecm 
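A model of this type can be sketched as a dyadic logit with a homophily regressor and additive agent-level effects. Everything below (the binary trait, the parameter values, the logistic link) is an illustrative assumption, not necessarily the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(3)

N, beta = 30, 1.0
A = rng.normal(scale=0.5, size=N)            # agent-level degree heterogeneity
traits = rng.integers(0, 2, size=N)          # observable binary trait
# Homophily regressor: 1 if the two agents share the trait
Xdyad = (traits[:, None] == traits[None, :]).astype(float)

# Link probability: logistic in the dyadic regressor plus both agents' effects
logit = beta * Xdyad + A[:, None] + A[None, :]
p = 1.0 / (1.0 + np.exp(-logit))

U = rng.random((N, N))
D = (U < p).astype(int)
D = np.triu(D, 1)                            # undirected: draw each dyad once
D = D + D.T                                  # symmetric adjacency, zero diagonal
degree = D.sum(axis=1)                       # dimension grows with N, hence the
                                             # incidental-parameter problem
```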
By:  Tao Zeng (Singapore Management University); Yong Li (Renmin University of China); Jun Yu (Singapore Management University, School of Economics) 
Abstract:  Vector Autoregression (VAR) has been a standard empirical tool used in macroeconomics and finance. In this paper we discuss how to compare alternative VAR models after they are estimated by Bayesian MCMC methods. In particular we apply a robust version of deviance information criterion (RDIC) recently developed in Li et al. (2014b) to determine the best candidate model. RDIC is a better information criterion than the widely used deviance information criterion (DIC) when latent variables are involved in candidate models. Empirical analysis using US data shows that the optimal model selected by RDIC can be different from that by DIC. 
Keywords:  Bayes factor; DIC; VAR models; Markov chain Monte Carlo 
JEL:  C11 C12 G12 
Date:  2014–06 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:012014&r=ecm 
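For reference, plain DIC is computed from posterior draws as DIC = D_bar + p_D with p_D = D_bar - D(theta_bar), where D(theta) = -2 log p(y|theta). A minimal sketch on a conjugate normal-mean model (an illustrative stand-in; RDIC modifies this construction for latent-variable models as in Li et al., 2014b):

```python
import numpy as np

rng = np.random.default_rng(4)

y = rng.normal(loc=1.0, size=100)   # data from N(mu, 1), sigma known
n = len(y)

# Posterior for mu under a flat prior is N(ybar, 1/n); draw MCMC-style samples
draws = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(n), size=5000)

def deviance(mu):
    # D(mu) = -2 log p(y | mu) for the N(mu, 1) likelihood
    return n * np.log(2 * np.pi) + ((y - mu) ** 2).sum()

D_draws = np.array([deviance(m) for m in draws])
D_bar = D_draws.mean()              # posterior mean deviance
D_at_mean = deviance(draws.mean())  # deviance at the posterior mean
p_D = D_bar - D_at_mean             # effective number of parameters (~1 here)
DIC = D_bar + p_D                   # smaller is better across candidate models
```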
By:  Alberto Abadie; Susan Athey; Guido W. Imbens; Jeffrey M. Wooldridge 
Abstract:  When a researcher estimates the parameters of a regression function using information on all 50 states in the United States, or information on all visits to a website, what is the interpretation of the standard errors? Researchers typically report standard errors that are designed to capture sampling variation, based on viewing the data as a random sample drawn from a large population of interest, even in applications where it is difficult to articulate what that population of interest is and how it differs from the sample. In this paper we explore alternative interpretations for the uncertainty associated with regression estimates. As a leading example we focus on the case where some parameters of the regression function are intended to capture causal effects. We derive standard errors for causal effects using a generalization of randomization inference. Intuitively, these standard errors capture the fact that, even if we observe outcomes for all units in the population of interest, there are, for each unit, missing potential outcomes for the treatment levels the unit was not exposed to. We show that our randomization-based standard errors are in general smaller than the conventional robust standard errors, and we provide conditions under which they agree. More generally, correct statistical inference requires precise characterizations of the population of interest, the parameters that we aim to estimate within such population, and the sampling process. Estimation of causal parameters is one example where appropriate inferential methods may differ from conventional practice, but there are others. 
JEL:  C01 C18 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:20325&r=ecm 
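The intuition can be sketched in the Neyman potential-outcomes setup: the conventional variance estimator s1^2/n1 + s0^2/n0 is conservative because the correction term involving the variance of the unit-level effects, S_tau^2/n, is not identified (we never observe both potential outcomes for the same unit). The simulation below is an illustrative sketch with invented numbers, not the paper's general derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Finite population of n units with potential outcomes under control/treatment
n = 1000
y0 = rng.normal(size=n)
tau = 2.0 + 0.5 * rng.normal(size=n)   # heterogeneous unit-level causal effects
y1 = y0 + tau

# One randomized assignment; only one potential outcome observed per unit
treat = rng.random(n) < 0.5
y_obs = np.where(treat, y1, y0)

n1, n0 = treat.sum(), (~treat).sum()
diff = y_obs[treat].mean() - y_obs[~treat].mean()   # estimates tau.mean()

s1 = y_obs[treat].var(ddof=1)
s0 = y_obs[~treat].var(ddof=1)
# Conventional (conservative) Neyman variance estimator
v_neyman = s1 / n1 + s0 / n0
# Infeasible adjustment: subtract the effect-heterogeneity term S_tau^2 / n.
# It uses tau, which is never observed in practice; this is why the feasible
# estimator above is conservative.
s_tau = tau.var(ddof=1)
v_adjusted = v_neyman - s_tau / n
```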
By:  Guillaume Gaetan Martinet; Michael McAleer (University of Canterbury) 
Abstract:  Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between the returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions. A limitation in the development of asymptotic properties of the QMLE for EGARCH is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH model can be derived from a stochastic process, for which the invertibility conditions can be stated simply and explicitly. This will be useful in reinterpreting the existing properties of the QMLE of the EGARCH parameters. 
Keywords:  Leverage, asymmetry, existence, stochastic process, asymptotic properties, invertibility 
JEL:  C22 C52 C58 G32 
Date:  2014–07–26 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:14/21&r=ecm 
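For concreteness, a standard EGARCH(1,1) recursion (one common parameterization; the parameter values below are illustrative) can be simulated as follows. The invertibility question concerns the reverse direction: recovering the shocks z_t from observed returns.

```python
import numpy as np

rng = np.random.default_rng(6)

T = 500
omega, beta, alpha, gamma = -0.1, 0.95, 0.2, -0.1   # gamma < 0: leverage
log_s2 = np.empty(T)                  # log conditional variance
ret = np.empty(T)                     # returns
log_s2[0] = omega / (1 - beta)        # start at the unconditional level
z = rng.normal(size=T)                # i.i.d. standardized shocks
for t in range(T):
    sigma = np.exp(0.5 * log_s2[t])
    ret[t] = sigma * z[t]
    if t + 1 < T:
        # log-variance reacts to the size (alpha) and sign (gamma) of z_{t-1}
        log_s2[t + 1] = (omega + beta * log_s2[t]
                         + alpha * (abs(z[t]) - np.sqrt(2 / np.pi))
                         + gamma * z[t])
```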
By:  Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Claudia Foroni (Norges Bank (Central Bank of Norway)); Francesco Ravazzolo (Norges Bank (Central Bank of Norway)) 
Abstract:  In this paper we derive a general parametric bootstrapping approach to compute density forecasts for various types of mixed-data sampling (MIDAS) regressions. We consider both classical and unrestricted MIDAS regressions with and without an autoregressive component. First, we compare the forecasting performance of the different MIDAS models in Monte Carlo simulation experiments. We find that the results in terms of point and density forecasts are coherent. Moreover, the results do not clearly indicate a superior performance of one of the models under scrutiny when the persistence of the low-frequency variable is low. Some differences are instead more evident when the persistence is high, for which the AR-MIDAS and the AR-U-MIDAS produce better forecasts. Second, in an empirical exercise we evaluate density forecasts for quarterly US output growth, exploiting information from typical monthly series. We find that MIDAS models applied to survey data provide accurate and timely density forecasts. 
Keywords:  Mixed data sampling, Density forecasts, Nowcasting 
JEL:  C11 C53 E37 
Date:  2014–07–18 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2014_10&r=ecm 
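An unrestricted (U-)MIDAS regression is simply OLS of the low-frequency variable on the individual high-frequency lags, and a density forecast can then be built by bootstrapping residuals. The sketch below uses a plain residual bootstrap rather than the paper's parametric bootstrap, and the DGP is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: quarterly y driven by the 3 monthly values of x
n, m = 120, 3
X = rng.normal(size=(n, m))
theta = np.array([0.5, 0.3, 0.1])      # declining monthly weights
y = X @ theta + 0.5 * rng.normal(size=n)

# U-MIDAS: unrestricted OLS on each monthly lag (plus intercept),
# holding out the last quarter for the forecast
Z = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Z[:-1], y[:-1], rcond=None)
resid = y[:-1] - Z[:-1] @ beta

# Residual bootstrap density forecast for the held-out quarter
B = 2000
point = Z[-1] @ beta
density = point + rng.choice(resid, size=B, replace=True)
q05, q95 = np.quantile(density, [0.05, 0.95])   # 90% interval forecast
```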
By:  Tao Hong; Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron 
Abstract:  Probabilistic load forecasting is becoming crucial in today's power systems planning and operations. We propose a novel methodology to compute interval forecasts of electricity demand, which applies a Quantile Regression Averaging (QRA) technique to a set of independent expert point forecasts. We demonstrate the effectiveness of the proposed methodology using data from the hierarchical load forecasting track of the Global Energy Forecasting Competition 2012. The results show that the new method is able to provide better prediction intervals than four benchmark models for the majority of the load zones and the aggregated level. 
Keywords:  Electric load; Probabilistic forecasting; Prediction interval; Quantile regression; Forecasts combination; Expert forecast 
JEL:  C22 C32 C38 C53 Q47 
Date:  2014–07–15 
URL:  http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1410&r=ecm 
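QRA fits quantile regressions of the realized series on a panel of expert point forecasts. Below is a sketch using the standard linear-programming formulation of pinball-loss minimization; the data and expert accuracies are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)

# Hypothetical data: realized load y and point forecasts from 3 experts
n, k = 300, 3
y = 100 + 10 * rng.normal(size=n)
experts = y[:, None] + rng.normal(scale=[3.0, 5.0, 8.0], size=(n, k))
X = np.column_stack([np.ones(n), experts])

def quantile_regression(X, y, tau):
    """Pinball-loss quantile regression as a linear program:
    min tau*1'u + (1-tau)*1'v  s.t.  X(b+ - b-) + u - v = y,  u, v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p), np.full(n, tau), np.full(n, 1 - tau)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:2 * p]   # recover the unrestricted coefficients

# QRA interval forecast: combine the experts at the 10% and 90% quantiles
b10 = quantile_regression(X, y, 0.10)
b90 = quantile_regression(X, y, 0.90)
lower, upper = X @ b10, X @ b90
coverage = ((y >= lower) & (y <= upper)).mean()   # should be near 0.80
```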
By:  Emura, Takeshi; Shiu, Shau-Kai 
Abstract:  In lifetime analysis of electric transformers, maximum likelihood estimation has been proposed in combination with the EM algorithm. However, it is not clear whether the EM algorithm offers a better solution than the simpler Newton-Raphson algorithm. In this paper, the first objective is a systematic comparison of the EM algorithm with the Newton-Raphson algorithm in terms of convergence performance. The second objective is to examine, via simulations, the performance of Akaike's information criterion (AIC) for selecting a suitable distribution among candidate models. These methods are illustrated on an electric power transformer dataset. 
Keywords:  Akaike's information criterion; EM algorithm; log-normal distribution; Newton-Raphson algorithm; Weibull distribution; reliability 
JEL:  C34 
Date:  2014–07–24 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:57528&r=ecm 
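The mechanics of the Newton-Raphson alternative can be sketched on the simplest censored-lifetime case, a right-censored exponential model, where the MLE d/T is available in closed form to check convergence. (The paper's transformer application uses richer distributions such as the Weibull and log-normal; the parameters below are invented.)

```python
import random

random.seed(9)

# Hypothetical right-censored lifetimes: exponential rate lam_true, censoring at c
lam_true, c, n = 0.5, 3.0, 500
times, events = [], []
for _ in range(n):
    t = random.expovariate(lam_true)
    times.append(min(t, c))          # observed time (possibly censored)
    events.append(t <= c)            # True if failure observed before censoring

d = sum(events)                      # number of observed failures
T = sum(times)                       # total time at risk
# Log-likelihood l(lam) = d*log(lam) - lam*T; score d/lam - T; Hessian -d/lam^2
lam = n / T                          # naive starting value (ignores censoring)
for _ in range(50):
    score = d / lam - T
    hess = -d / lam ** 2
    step = score / hess
    lam -= step                      # Newton-Raphson update
    if abs(step) < 1e-12:
        break

closed_form = d / T                  # exact MLE, for comparison
```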
By:  Charles E. Gibbons; Juan Carlos Suárez Serrato; Michael B. Urbancic 
Abstract:  This paper provides empirical evidence of an established theoretical result: in the presence of heterogeneous treatment effects, OLS is generally not a consistent estimator of the sample-weighted average treatment effect (SWE). We propose two alternative estimators that do recover the SWE in the presence of group-specific heterogeneity. We derive tests to detect the presence of heterogeneous treatment effects and to distinguish between the OLS and SWE estimates. By extending eight influential papers, we document that heterogeneous treatment effects are common and that the SWE is often statistically and economically different from the OLS estimate. In all but one paper, there is statistically significant treatment effect heterogeneity; in five, the SWE is statistically different from the OLS estimator; and in five, the SWE and OLS estimators are economically different. 
JEL:  C18 C21 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:20342&r=ecm 
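The core point can be reproduced in a few lines: with group-specific effects and group-specific treatment probabilities, pooled OLS with group fixed effects weights groups by the within-group variance of treatment, not by sample shares. The numbers below are illustrative assumptions, and the group-by-group estimator stands in for the paper's proposed estimators.

```python
import numpy as np

rng = np.random.default_rng(10)

# Two groups of equal size with different treatment probabilities and effects
n_g = 10000
tau = {0: 1.0, 1: 3.0}        # group-specific treatment effects (SWE = 2.0)
p = {0: 0.1, 1: 0.5}          # group-specific treatment probabilities

g = np.repeat([0, 1], n_g)
D = np.concatenate([rng.random(n_g) < p[0],
                    rng.random(n_g) < p[1]]).astype(float)
y = np.where(g == 0, tau[0], tau[1]) * D + rng.normal(size=2 * n_g)

# Pooled OLS with group fixed effects via within-group demeaning (FWL):
# weights each group by n_g * p_g * (1 - p_g), not by its sample share
y_t = y - np.where(g == 0, y[g == 0].mean(), y[g == 1].mean())
D_t = D - np.where(g == 0, D[g == 0].mean(), D[g == 1].mean())
ols = (D_t @ y_t) / (D_t @ D_t)

# Sample-weighted average of group-specific effects
def group_effect(mask):
    yg, Dg = y[mask], D[mask]
    return yg[Dg == 1].mean() - yg[Dg == 0].mean()

swe = 0.5 * group_effect(g == 0) + 0.5 * group_effect(g == 1)
```

Here OLS overweights group 1 (where treatment varies more), so it lands well above the sample-weighted effect of 2.0.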
By:  Vincenzo Atella (Department of Economics and Finance and CEIS Tor Vergata, CHPPCOR); Federico Belotti (CEIS Tor Vergata); Domenico Depalo (Bank of Italy); Andrea Piano Mortari (CEIS Tor Vergata) 
Abstract:  Spatial econometric models are now an established tool for measuring spillover effects between geographical entities. When entities share common borders but are subject to different institutional frameworks, however, the conclusions may be misleading unless this is taken into account. In fact, under these circumstances, where institutional arrangements play a role, we should expect to find spatial effects mainly among entities within the same institutional setting, while the effect across different institutional settings should be small or nil even where the entities share a common border. In this case, factoring in only geographical proximity will produce biased estimates, due to the combination of two distinct effects. To avoid these problems, we derive a methodology that partitions the standard contiguity matrix into within-contiguity and between-contiguity matrices, allowing separate estimation of the two spatial correlation coefficients and simple tests for the existence of institutional constraints. We then apply this methodology to Italian Local Health Authority expenditures, using spatial panel techniques. We find a high and significant spatial coefficient only for the within-contiguity effect, confirming the validity of our approach. 
Keywords:  spatial, health expenditures, institutional setting, panel data 
JEL:  H72 H51 C31 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_967_14&r=ecm 
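The partition of the contiguity matrix is mechanical: zero out links between entities in different institutional settings to get the within matrix, and take the remainder as the between matrix. A minimal sketch with an invented six-region example:

```python
import numpy as np

# Hypothetical: 6 regions on a line, adjacent neighbours share a border
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

# Institutional setting of each region (e.g. two different health systems)
regime = np.array([0, 0, 0, 1, 1, 1])

same = regime[:, None] == regime[None, :]
W_within = np.where(same, W, 0.0)   # neighbours under the same institutions
W_between = W - W_within            # neighbours across the institutional border
```

Each matrix then enters the spatial model with its own correlation coefficient, so the cross-border spillover can be tested separately.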
By:  Andras Fulop (ESSEC Business School, Paris-Singapore); Jun Yu (Singapore Management University, School of Economics) 
Abstract:  We develop a new asset price model where the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period, where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long-run mean. The latter is allowed to account for possible smooth structural change. The second regime reflects the bubble period, with explosive behavior. Stochastic switches between the two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and, at the same time, to learn about the states and the parameters sequentially, allowing for real-time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has better power for detecting bubbles than existing alternative procedures. Empirical analysis using price-dividend ratios of the S&P 500 highlights the advantages of our method. 
Keywords:  Parameter Learning, Markov Switching, MCMC 
JEL:  C11 C13 C32 G12 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:042014&r=ecm 