nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒01‒16
sixteen papers chosen by
Sune Karlsson
Stockholm School of Economics

  1. User-Friendly Parallel Econometric Computations: Monte Carlo, Maximum Likelihood, and GMM By Michael Creel
  2. The Optimal Prediction Simultaneous Equations Selection By Gorobets, A.
  3. A Copula-Based Autoregressive Conditional Dependence Model of International Stock Markets By Rob van den Goorbergh
  4. Higher Power Tests for Bilateral Failure of PPP after 1973 By Graham Elliott; Elena Pesavento
  5. Residuals Based Tests for the Null of No Cointegration: an Analytical Comparison By Elena Pesavento
  6. Nonparametric Tests for Common Values in First-Price Sealed-Bid Auctions By Philip A. Haile; Han Hong; Matthew Shum
  7. Meese-Rogoff Redux: Micro-Based Exchange Rate Forecasting By Martin D. D. Evans; Richard K. Lyons
  8. Testing Structural Hypotheses on Cointegration Vectors: A Monte Carlo Study By Eriksson, Åsa
  9. Separating Uncertainty from Heterogeneity in Life Cycle Earnings By Flavio Cunha; James J. Heckman; Salvador Navarro
  10. Forecasting Austrian Inflation By Gabriel Moser; Fabio Rumler; Johann Scharler
  11. Estimating State-Contingent Production Frontiers By Chris O’Donnell; W.E. Griffiths
  12. Curvature-Constrained Estimates of Technical Efficiency and Returns to Scale for U.S. Electric Utilities By Supawat Rungsuriyawiboon; Chris O’Donnell
  13. Forecasting Realized Volatility Using a Long Memory Stochastic Volatility Model: Estimation, Prediction and Seasonal Adjustment By Rohit Deo; Clifford Hurvich; Yi Lu
  14. The Variance Ratio Statistic at Large Horizons By Willa Chen; Rohit Deo
  15. Estimation of mis-specified long memory models By Willa Chen; Rohit Deo
  16. Tracing the Source of Long Memory in Volatility By Rohit Deo; Mengchen Hsieh; Clifford Hurvich

  1. By: Michael Creel
    Abstract: This paper shows how MPITB for GNU Octave may be used to perform Monte Carlo simulation and estimation by maximum likelihood and GMM in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. Three example problems show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
    Keywords: parallel computing; Monte Carlo; maximum likelihood; GMM
    JEL: C13 C15 C63 C87
    Date: 2005–01–10
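Monte Carlo simulation is embarrassingly parallel: replications can be split across workers that only need independent random seeds. A rough sketch of that master–worker pattern follows — in Python with threads standing in for MPI ranks, not the paper's actual GNU Octave/MPITB implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def mc_chunk(args):
    """Run one worker's share of the Monte Carlo replications.

    Each worker gets its own seed so the random streams are independent;
    here each replication just estimates the mean of 100 U(0,1) draws.
    """
    seed, n_reps = args
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(100)) / 100 for _ in range(n_reps)]

def parallel_monte_carlo(n_reps=1000, n_workers=4):
    # Split the replications evenly across workers; with MPITB the map
    # below would be an MPI scatter/gather over cluster nodes instead.
    per_worker = n_reps // n_workers
    tasks = [(seed, per_worker) for seed in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        chunks = pool.map(mc_chunk, tasks)
    return [r for chunk in chunks for r in chunk]

results = parallel_monte_carlo()
```

The investigator only calls `parallel_monte_carlo`; all knowledge of worker management stays inside it, which is the usability point the abstract makes.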
  2. By: Gorobets, A. (Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam)
    Abstract: This paper presents a method for selecting the optimal simultaneous equation system from a set of nested models in small samples. The purpose of the selection is to identify the model with the best prognostic possibilities. Multivariate AIC, BIC and AICC are used as the selection criteria. The selection properties of this method are investigated by Monte Carlo simulations.
    Keywords: Simultaneous equations; selection; criteria; simulation
    Date: 2005–01–03
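The criteria named in the abstract have standard definitions, which can be sketched as follows (the log-likelihoods and parameter counts below are hypothetical illustrations, not the paper's models):

```python
import math

def aic(loglik, k):
    # Akaike information criterion: -2 ln L + 2k
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    # Bayesian (Schwarz) information criterion: -2 ln L + k ln n
    return -2 * loglik + k * math.log(n)

def aicc(loglik, k, n):
    # Small-sample corrected AIC; the correction matters when n/k is small
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical nested candidates: (maximized log-likelihood, no. of parameters)
candidates = {"M1": (-120.4, 3), "M2": (-118.9, 5), "M3": (-118.7, 8)}
n = 40  # small sample, where AICC's extra penalty bites
best_by_aicc = min(candidates, key=lambda m: aicc(*candidates[m], n))
```

In this toy comparison the richer models' likelihood gains do not pay for their extra parameters, so the most parsimonious model is selected.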
  3. By: Rob van den Goorbergh
    Abstract: This paper investigates the level and development of cross-country stock market dependence using daily returns on stock indices. The use of copulas allows us to build flexible models of the joint distribution of stock index returns. In particular, we apply univariate AR(p)-GARCH(1,1) models to the margins with possibly skewed and fat tailed return innovations, while modelling the dependence between markets using parametric families of copulas which offer various alternatives to the commonly assumed normal dependence structure. Moreover, the dependence across stock markets is allowed to vary over time through a GARCH-like autoregressive conditional copula model. Using synchronous daily returns on U.S., U.K., and French stock indices, we find strong evidence that the conditional dependence between pairs of each of these markets varies over time. All market pairs show high levels of dependence persistence. The performance of the copula-based approach is compared with Engle's (2002) dynamic conditional correlation model and found to be superior.
    Date: 2004–12
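One basic ingredient of the copula approach — generating uniform marginals whose joint dependence comes from a Gaussian copula — can be sketched as below. This is only the simplest building block; the paper's model additionally uses non-normal copula families and makes the dependence parameter time-varying:

```python
import math
import random

def gaussian_copula_pair(rho, rng):
    """Draw (u1, u2): uniform marginals whose joint dependence is a
    Gaussian copula with correlation parameter rho."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    # Probability integral transform: the normal CDF maps each draw to U(0,1)
    cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return cdf(z1), cdf(z2)

rng = random.Random(0)
u1, u2 = zip(*(gaussian_copula_pair(0.6, rng) for _ in range(4000)))

# The margins are uniform, but the pairs are dependent:
m1, m2 = sum(u1) / 4000, sum(u2) / 4000
corr = (sum((a - m1) * (b - m2) for a, b in zip(u1, u2))
        / math.sqrt(sum((a - m1) ** 2 for a in u1)
                    * sum((b - m2) ** 2 for b in u2)))
```

Any marginal distributions (e.g. the skewed, fat-tailed GARCH innovations of the abstract) can then be obtained by feeding `u1`, `u2` through the desired inverse CDFs, which is what makes copulas a flexible way to separate margins from dependence.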
  4. By: Graham Elliott; Elena Pesavento
    Abstract: Whilst point estimates for mean reversion in real exchange rates suggest reasonable (but long) half-lives to shocks, it remains uncomfortable that models without any mean reversion at all are often compatible with individual country-pair data from the floating period. Studies with data over longer periods find mean reversion, but at the cost of mixing in data from earlier exchange rate arrangements. Pooling the floating period data for a number of countries also finds evidence of mean reversion, but at the expense of potentially mixing in country pairs with and without mean reversion. We examine tests for mean reversion for individual country pairs, gaining greater power against close alternatives by modeling other economic variables jointly with the real exchange rate. Our results are broadly consistent with other methods of improving the power of tests for unit roots in real exchange rates, finding support for the mean reversion hypothesis.
    Date: 2005–01
  5. By: Elena Pesavento
    Abstract: This paper computes the asymptotic distribution of five residuals-based tests for the null of no cointegration under a local alternative when the tests are computed using both OLS and GLS detrended variables. The local asymptotic power of the tests is shown to be a function of Brownian Motion and Ornstein-Uhlenbeck processes, depending on a single nuisance parameter, which is determined by the correlation at frequency zero of the errors of the cointegration regression with the shocks to the right-hand variables. The tests are compared in terms of power in large and small samples. It is shown that, while no significant improvement can be achieved by using different unit root tests than the OLS detrended t-test originally proposed by Engle and Granger (1987), the power of GLS residuals tests can be higher than the power of system tests for some values of the nuisance parameter.
    Date: 2005–01
  6. By: Philip A. Haile (Yale University); Han Hong (Duke University); Matthew Shum (Johns Hopkins University)
    Abstract: We develop tests for common values at first-price sealed-bid auctions. Our tests are nonparametric, require observation only of the bids submitted at each auction, and are based on the fact that the “winner’s curse” arises only in common values auctions. The tests build on recently developed methods for using observed bids to estimate each bidder’s conditional expectation of the value of winning the auction. Equilibrium behavior implies that in a private values auction these expectations are invariant to the number of opponents each bidder faces, while with common values they are decreasing in the number of opponents. This distinction forms the basis of our tests. We consider both exogenous and endogenous variation in the number of bidders. Monte Carlo experiments show that our tests can perform well in samples of moderate sizes. We apply our tests to two different types of U.S. Forest Service timber auctions. For unit-price (“scaled”) sales often argued to fit a private values model, our tests consistently fail to find evidence of common values. For “lump-sum” sales, where a priori arguments for common values appear stronger, our tests yield mixed evidence against the private values hypothesis.
    Keywords: First-price auctions, Common values, Private values, Nonparametric testing, Winner’s curse, Stochastic dominance, Endogenous participation, Timber auctions
    JEL: C14 D44
    Date: 2004–12
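The monotonicity the tests exploit — that with common values the winner's signal overstates the true value more severely as the number of opponents grows — is easy to see in a toy simulation (a hypothetical signal structure for illustration, not the authors' nonparametric estimator):

```python
import random

def expected_overbid(n_bidders, n_auctions=20000, seed=0):
    """Average amount by which the highest signal exceeds the common value.

    Each auction has one common value v and noisy private signals of it;
    the bidder with the largest signal 'wins', and conditioning on winning
    biases that signal upward -- the winner's curse.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_auctions):
        v = rng.random()                                   # common value
        signals = [v + rng.gauss(0, 0.1) for _ in range(n_bidders)]
        total += max(signals) - v                          # winner's error
    return total / n_auctions

curse_2 = expected_overbid(2)   # mild curse with few opponents
curse_5 = expected_overbid(5)   # stronger curse with more opponents
```

Under private values the analogous conditional expectation would not move with the number of opponents, which is exactly the contrast the abstract's tests are built on.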
  7. By: Martin D. D. Evans (Georgetown University and NBER); Richard K. Lyons (U.C. Berkeley and NBER, Haas School of Business) (Department of Economics, Georgetown University)
    Abstract: This paper compares the true, ex-ante forecasting performance of a micro-based model against both a standard macro model and a random walk. In contrast to the existing literature, which focuses on longer-horizon forecasting, we examine forecasting over horizons from one day to one month (the one-month horizon being where micro and macro analysis begin to overlap). Over our 3-year forecasting sample, we find that the micro-based model consistently outperforms both the random walk and the macro model. Micro-based forecasts account for almost 16 per cent of the sample variance in monthly spot rate changes. These results provide a level of empirical validation as yet unattained by other models. Though our micro-based model outperforms the macro model, this does not imply that past macro analysis has overlooked key fundamentals: our structural interpretation using a fundamentals-based model shows that our findings are consistent with exchange rates being driven by standard fundamentals.
    JEL: F3 F4 G1
    Keywords: Exchange rates, forecasting, Meese and Rogoff, microstructure, order flow
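Out-of-sample comparisons of this kind are typically summarized by the ratio of a model's root mean squared forecast error to the random walk's, whose forecast of the spot-rate change is always zero. A minimal sketch with made-up numbers (not the paper's data or results):

```python
import math

def rmse(actual, forecast):
    """Root mean squared forecast error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast))
                     / len(actual))

# Hypothetical monthly spot-rate changes and model forecasts; the random
# walk benchmark predicts no change, so its errors are the changes themselves.
changes = [0.3, -0.1, 0.2, -0.4, 0.1]
model_fc = [0.2, -0.2, 0.1, -0.3, 0.0]
rw_fc = [0.0] * len(changes)

ratio = rmse(changes, model_fc) / rmse(changes, rw_fc)  # < 1: model beats RW
```

A ratio below one at a given horizon is the sense in which a model "out-performs the random walk" in the abstract.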
  8. By: Eriksson, Åsa (Department of Economics, Lund University)
    Abstract: In this paper, two tests for structural hypotheses on cointegration vectors are evaluated in a Monte Carlo study. The tests are the likelihood ratio test proposed by Johansen (1991) and the test for stationarity proposed by Kwiatkowski et al (1992). The analysis of the likelihood ratio test is extended with the inclusion of a Bartlett correction factor. Under circumstances common in empirical applications, all tests suffer from large size distortions and have low power to detect a false cointegration vector, but the Johansen (1991) test fares slightly better than the Kwiatkowski et al (1992) test. Applying a Bartlett correction factor substantially improves the small-sample performance of the likelihood ratio test.
    Keywords: Cointegration; Structural hypothesis; Monte Carlo simulation
    JEL: C12 C15 C22
    Date: 2004–12–17
  9. By: Flavio Cunha; James J. Heckman; Salvador Navarro
    Abstract: This paper develops and applies a method for decomposing cross section variability of earnings into components that are forecastable at the time students decide to go to college (heterogeneity) and components that are unforecastable. About 60% of variability in returns to schooling is forecastable. This has important implications for using measured variability to price risk and predict college attendance.
    JEL: C33 D84 I21
    Date: 2005–01
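The decomposition the abstract describes rests on the law of total variance: total earnings variability splits exactly into the variance of the component forecastable at decision time (heterogeneity) plus the average variance of what remains (uncertainty). A toy numerical check — the groups and magnitudes below are invented, and the 60% figure above is the paper's estimate, not reproduced here:

```python
import random
from statistics import mean, pvariance

rng = random.Random(0)

# Hypothetical agents: a forecastable component theta (known when the
# schooling decision is made) plus an unforecastable earnings shock.
thetas = [0.0, 0.5, 1.0]
earnings = {t: [t + rng.gauss(0, 0.3) for _ in range(5000)] for t in thetas}
all_e = [e for es in earnings.values() for e in es]

# Between-group variance = heterogeneity; average within-group = uncertainty.
between = pvariance([mean(es) for es in earnings.values()])
within = mean([pvariance(es) for es in earnings.values()])
total = pvariance(all_e)

forecastable_share = between / total  # analogue of the paper's "about 60%"
```

With equal-sized groups the identity `total = between + within` holds exactly, which is what lets measured cross-section variability be split into priced risk versus known heterogeneity.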
  10. By: Gabriel Moser (Oesterreichische Nationalbank, Foreign Research Department, Otto-Wagner Platz 3, POB 61, A-1011 Vienna); Fabio Rumler (Oesterreichische Nationalbank, Economic Analysis Division); Johann Scharler (Oesterreichische Nationalbank, Economic Analysis Division)
    Abstract: In this paper we apply factor models proposed by Stock and Watson [18] as well as VAR and ARIMA models to generate 12-month out-of-sample forecasts of Austrian HICP inflation and its subindices: processed food, unprocessed food, energy, industrial goods and services price inflation. A sequential forecast model selection procedure tailored to this specific task is applied. It turns out that factor models possess the highest predictive accuracy for several subindices, and that predictive accuracy can be further improved for some indices by combining the information contained in factor and VAR models. With respect to forecasting HICP inflation, our analysis suggests favoring the aggregation of subindex forecasts. Furthermore, the subindex forecasts are used as a tool to give a more detailed picture of the determinants of HICP inflation from both an ex-ante and an ex-post perspective.
    Keywords: Inflation Forecasting, Forecast Model selection, Aggregation
    JEL: C52 C53 E31
    Date: 2004–10–04
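Combining the information in competing models, as in the abstract's combined factor/VAR forecasts, is often done by weighting each forecast inversely to its historical mean squared error. This sketches one common combination scheme, not necessarily the authors' procedure:

```python
def combine_forecasts(forecasts, mses):
    """Weighted combination of competing forecasts, with weights
    proportional to the inverse of each model's historical MSE
    (so more accurate models get more weight)."""
    weights = [1.0 / m for m in mses]
    s = sum(weights)
    weights = [w / s for w in weights]
    return sum(w * f for w, f in zip(weights, forecasts))

# Two hypothetical inflation forecasts; the second model has been three
# times less accurate, so the combination leans toward the first.
combined = combine_forecasts([2.0, 4.0], [1.0, 3.0])
```

Equal historical accuracy reduces this to a simple average, which is itself a surprisingly hard benchmark to beat in forecast combination.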
  11. By: Chris O’Donnell; W.E. Griffiths (CEPA - School of Economics, The University of Queensland)
    Abstract: Chambers and Quiggin (2000) advocate the use of state-contingent production technologies to represent risky production and establish important theoretical results concerning producer behaviour under uncertainty. Unfortunately, perceived problems in the estimation of state-contingent models have limited the usefulness of the approach in policy formulation. We show that fixed and random effects state-contingent production frontiers can be conveniently estimated in a finite mixtures framework. An empirical example is provided. Compared to standard estimation approaches, we find that estimating production frontiers in a state-contingent framework produces significantly different estimates of elasticities, firm technical efficiencies and other quantities of economic interest.
    Date: 2004–07
  12. By: Supawat Rungsuriyawiboon; Chris O’Donnell (CEPA - School of Economics, The University of Queensland)
    Abstract: We estimate an input distance function for U.S. electric utilities under the assumption that non-negative variables associated with technical inefficiency are time-invariant. We use Bayesian methodology to impose curvature restrictions implied by microeconomic theory and obtain exact finite-sample results for nonlinear functions of the parameters (e.g. technical efficiency scores). We find that Bayesian point estimates of elasticities are more plausible than maximum likelihood estimates, that technical efficiency scores from a random effects specification are higher than those obtained from a fixed effects model, and that there is evidence of increasing returns to scale in the industry.
    Date: 2004–09
  13. By: Rohit Deo (New York University); Clifford Hurvich (New York University); Yi Lu (New York University)
    Abstract: We study the modeling of large data sets of high frequency returns using a long memory stochastic volatility (LMSV) model. Issues pertaining to estimation and forecasting of large data sets using the LMSV model are studied in detail. Furthermore, a new method of de-seasonalizing the volatility in high frequency data is proposed that allows for slowly varying seasonality. Using both simulated and real data, we compare the forecasting performance of the LMSV model for realized volatility to that of a linear long memory model fit to the log realized volatility. The performance of the new seasonal adjustment is also compared to a recently proposed procedure using real data.
    Keywords: Realized Volatility, Long Memory Stochastic Volatility Model, High Frequency Data, Seasonal Adjustment
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–01–07
  14. By: Willa Chen (Texas A&M University); Rohit Deo (New York University)
    Abstract: We make three contributions to the use of the variance ratio statistic at large horizons. First, allowing for general heteroscedasticity in the data, we obtain the asymptotic distribution of the statistic when the horizon k increases with the sample size n but at a slower rate, so that k/n → 0. The test is shown to be consistent against a variety of relevant mean-reverting alternatives when k/n → 0. This is in contrast to the case k/n → δ > 0, where the statistic has recently been shown to be inconsistent against such alternatives. Second, we provide and justify a simple power transformation of the statistic which yields almost perfectly normally distributed statistics in finite samples, solving the well-known right-skewness problem. Third, we provide a more powerful way of pooling information from different horizons to test for mean-reverting alternatives. Monte Carlo simulations illustrate the theoretical improvements provided.
    Keywords: Mean reversion, frequency domain, power transformations
    JEL: C12 C22
    Date: 2005–01–11
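The statistic itself has a simple sample form: the variance of overlapping k-period sums divided by k times the one-period variance. A minimal sketch of the base statistic (none of the paper's refinements, such as the power transformation or pooling):

```python
import random
from statistics import pvariance

def variance_ratio(returns, k):
    """Sample variance ratio VR(k).

    Near 1 for serially uncorrelated returns (a random walk in levels);
    below 1 under mean reversion, since k-period sums then vary less
    than k times the one-period variance.
    """
    n = len(returns)
    k_sums = [sum(returns[i:i + k]) for i in range(n - k + 1)]  # overlapping
    return pvariance(k_sums) / (k * pvariance(returns))

rng = random.Random(1)
iid = [rng.gauss(0, 1) for _ in range(5000)]
vr_rw = variance_ratio(iid, 10)          # close to 1: no mean reversion

# A strongly mean-reverting series (overdifferenced noise) drives VR down:
md = [iid[i] - iid[i - 1] for i in range(1, 5000)]
vr_mr = variance_ratio(md, 10)           # far below 1
```

The skewness and consistency issues the abstract addresses arise precisely when k is large relative to n, where the distribution of this simple statistic departs badly from normality.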
  15. By: Willa Chen (Texas A&M University); Rohit Deo (New York University)
    Abstract: We study the asymptotic behaviour of frequency domain maximum likelihood estimators of mis-specified models of long memory Gaussian series. We show that even if the long memory structure of the time series is correctly specified, mis-specification of the short memory dynamics may result in estimators of both long- and short-memory parameters that are slower than √n-consistent for the pseudo-true parameter values, which in general differ from the true values. The conditions under which this happens are provided, and the asymptotic distribution of the estimators is shown to be non-Gaussian. Conditions under which estimators of the parameters of the mis-specified model have the standard √n-consistency for the pseudo-true values and are asymptotically normal are also provided.
    Keywords: long memory, model mis-specification
    JEL: C13 C22
    Date: 2005–01–11
  16. By: Rohit Deo (New York University); Mengchen Hsieh (New York University); Clifford Hurvich (New York University)
    Abstract: We study the effects of trade duration properties on dependence in counts (number of transactions) and thus on dependence in volatility of returns. A return model is established to link counts and volatility. We present theorems as well as a conjecture relating properties of durations to long memory in counts and thus in volatility. We then apply several parametric duration models to empirical trade durations and discuss our findings in the light of the theorems and conjecture.
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–01–13

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.