nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒03‒22
thirty-two papers chosen by
Sune Karlsson
Orebro University

  1. Testing for Cross-Sectional Dependence in Fixed Effects Panel Data Models By Badi H. Baltagi; Qu Feng; Chihwa Kao
  2. Essays on the econometric theory of rank regressions By Subbotin, Viktor
  3. Instrumental Variables Quantile Regression for Panel Data with Measurement Errors By Antonio F. Galvao, Jr.; Gabriel V. Montes-Rojas
  4. Threshold Quantile Autoregressive Models By Antonio F. Galvao, Jr.; Gabriel V. Montes-Rojas; Jose Olmo
  5. Blaming the exogenous environment? Conditional efficiency estimation with continuous and discrete exogenous variables By De Witte, Kristof; Kortelainen, Mika
  6. Model selection, estimation and forecasting in VAR models with short-run and long-run restrictions By George Athanasopoulos; Osmani T. de C. Guillén; João V. Issler; Farshid Vahid
  7. Endogeneity in Panel Data Models with Time-Varying and Time-Fixed Regressors: To IV or not IV? By Timo Mitze
  8. To Bridge, to Warp or to Wrap? By David Ardia; Lennart Hoogerheide; Herman K. van Dijk
  9. Model selection criteria for factor-augmented regressions By Jan J. J. Groen; George Kapetanios
  10. A Computationally Practical Simulation Estimation Algorithm for Dynamic Panel Data Models with Unobserved Endogenous State Variables By Keane, Michael P.; Sauer, Robert M.
  11. Diagnostic checking using subspace methods By Alfredo García-Hiernaux
  12. Beyond point forecasting: evaluation of alternative prediction intervals for tourist arrivals By Jae H. Kim; Haiyan Song; Kevin Wong; George Athanasopoulos; Shen Liu
  13. Two-Step Extremum Estimation with Estimated Single-Indices By Kyungchul Song
  14. Dynamic binary outcome models with maximal heterogeneity By Martin Browning; Jesus M. Carro
  15. Rainbow plots, Bagplots and Boxplots for Functional Data By Rob J. Hyndman; Han Lin Shang
  16. "Customer Lifetime Value and RFM Data: Accounting Your Customers: One by One" By Makoto Abe
  17. The tourism forecasting competition By George Athanasopoulos; Rob J Hyndman; Haiyan Song; Doris C Wu
  18. Efficient Estimation of Average Treatment Effects under Treatment-Based Sampling By Kyungchul Song
  19. On the efficacy of techniques for evaluating multivariate volatility forecasts By Adam Clements; Mark Doolan; Stan Hurn; Ralf Becker
  20. Quantile Autoregressive Distributed Lag Model with an Application to House Price Returns By Antonio F. Galvao, Jr.; Gabriel V. Montes-Rojas; Sung Y. Park
  21. A Generalized Spatial Panel Data Model with Random Effects By Badi H. Baltagi; Peter Egger; Michael Pfaffermayr
  22. Testing the Fixed Effects Restrictions? A Monte Carlo Study of Chamberlain's Minimum Chi-Squared Test By Badi H. Baltagi; Georges Bresson; Alain Pirotte
  23. A View of Damped Trend as Incorporating a Tracking Signal into a State Space Model By Ralph D. Snyder; Anne B. Koehler
  24. Combining Non-Cointegration Tests By Bayer Christian; Hanck Christoph
  25. Do We Really Need Both BEKK and DCC? A Tale of Two Covariance Models By Massimiliano Caporin; Michael McAleer
  26. A Scientific Classification of Volatility Models By Massimiliano Caporin; Michael McAleer
  27. Forecasting linear dynamical systems using subspace methods By Alfredo García-Hiernaux
  28. Thresholds, News Impact Surfaces and Dynamic Asymmetric Multivariate GARCH By Massimiliano Caporin; Michael McAleer
  29. Unit Roots in White Noise By Onatski, Alexei; Uhlig, Harald
  30. Comparing Population Distributions from bin-Aggregated Sample Data: an Application to Historical Height Data from France By Jean-Yves Duclos; Josée Leblanc; David Sahn
  31. A New Look at Copper Markets: A Regime-Switching Jump Model By Chan, Wing Hong; Young, Denise
  32. Density forecasting for long-term peak electricity demand By Rob J Hyndman; Shu Fan

  1. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Qu Feng (http://www-cpr.maxwell.syr.edu); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020)
    Abstract: This paper proposes a new test for cross-sectional dependence in fixed effects panel data models. It is well known that ignoring cross-sectional dependence leads to incorrect statistical inference. In the panel data literature, attempts to account for cross-sectional dependence include factor models and spatial correlation. In most cases, strong assumptions on the covariance matrix are imposed. Attempts at avoiding ad hoc specifications rely on the sample covariance matrix. Unfortunately, when the dimension of this variance-covariance matrix is large, the sample covariance matrix turns out to be an inconsistent estimator of the population covariance matrix. This is especially relevant for micro panels with a large number of cross-sectional units observed over a short time series span. This fact undermines existing tests based directly on the sample covariance matrix. This paper uses the Random Matrix Theory-based approach of Ledoit and Wolf (2002) to test for cross-sectional dependence of the error terms in large linear panel models with a comparable number of cross-sectional units and time series observations. Since the errors are unobservable, the residuals from the fixed effects regression are used. As shown in the paper, this difference cannot be ignored asymptotically, and the limiting distribution of the proposed test statistic is derived. Additionally, its finite sample properties are examined and compared to traditional tests for cross-sectional dependence using Monte Carlo simulations.
    Keywords: Cross-sectional dependence, panel data, fixed effects, John test
    JEL: C13 C33
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:112&r=ecm
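    A minimal Python sketch of the test's core ingredient, assuming a balanced panel whose fixed effects residuals sit in a T x N array: it computes the raw Ledoit-Wolf (2002) John-type sphericity statistic, omitting the residual-based bias correction and the exact asymptotic centering derived in the paper.

      import numpy as np

      def john_statistic(E):
          # E: T x N matrix of within (fixed effects) residuals.
          # Raw John-type sphericity statistic of Ledoit and Wolf (2002);
          # the paper's correction for using residuals instead of errors,
          # and the centering/scaling giving a normal limit, are omitted.
          T, N = E.shape
          S = E.T @ E / T                     # N x N residual covariance
          return np.trace(S @ S) / N / (np.trace(S) / N) ** 2 - 1.0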
  2. By: Subbotin, Viktor
    Abstract: Several semiparametric estimators recently developed in the econometrics literature are based on the rank correlation between the dependent and explanatory variables. Examples include the maximum rank correlation estimator (MRC) of Han [1987], the monotone rank estimator (MR) of Cavanagh and Sherman [1998], the pairwise-difference rank estimators (PDR) of Abrevaya [2003], and others. These estimators apply to various monotone semiparametric single-index models, such as the binary choice models, the censored regression models, the nonlinear regression models, and the transformation and duration models, among others, without imposing functional form restrictions on the unknown functions and distributions. This work provides several new results on the theory of rank-based estimators. In Chapter 2 we prove that the quantiles and the variances of their asymptotic distributions can be consistently estimated by the nonparametric bootstrap. In Chapter 3 we investigate the accuracy of inference based on the asymptotic normal and bootstrap approximations, and provide bounds on the associated error. In the case of MRC and MR, the bound is a function of the sample size of order close to n^(-1/6). The PDR estimators, however, belong to a special subclass of rank estimators for which the bound is vanishing at a rate close to n^(-1/2). In Chapter 4 we study the efficiency properties of rank estimators and propose weighted rank estimators that improve efficiency. We show that the optimally weighted MR attains the semiparametric efficiency bound in the nonlinear regression model and the binary choice model. Optimally weighted MRC has the asymptotic variance close to the semiparametric efficiency bound in single-index models under independence when the distribution of the errors is close to normal, and is consistent under practically relevant deviations from the single index assumption. Under moderate nonlinearities and nonsmoothness in the data, the efficiency gains from weighting are likely to be small for MRC in the transformation model and for MRC and MR in the binary choice model, and can be large for MRC and MR in the monotone regression model. Throughout, the theoretical results are illustrated with Monte Carlo experiments and real data examples.
    Keywords: Semiparametric models; Bootstrap; Maximum rank correlation estimator; Monotone rank estimator; Efficiency; U-processes; U-statistics; Maximal Inequalities; Econometric theory; Rank regressions
    JEL: C1 C2
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14086&r=ecm
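    For concreteness, a toy sketch of Han's (1987) maximum rank correlation objective (one of the estimators studied in the thesis) in a binary choice design, maximized over a grid under the usual scale normalization; the thesis's bootstrap and weighting results are not implemented.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      X = rng.normal(size=(n, 2))
      # Binary choice: a monotone transformation of a single index
      y = (X @ np.array([1.0, -0.5]) + rng.logistic(size=n) > 0).astype(float)

      def mrc_objective(b2):
          # Fraction of concordant pairs between y and the index x'b,
          # with beta normalized to (1, b2) for scale identification
          idx = X[:, 0] + b2 * X[:, 1]
          return np.mean((y[:, None] > y[None, :]) & (idx[:, None] > idx[None, :]))

      grid = np.linspace(-2, 2, 201)
      b2_hat = grid[np.argmax([mrc_objective(b) for b in grid])]
      print("estimated beta2/beta1:", b2_hat)     # should be near -0.5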
  3. By: Antonio F. Galvao, Jr. (galvao@illinois.edu); Gabriel V. Montes-Rojas (Department of Economics, City University, London)
    Abstract: This paper develops an instrumental variables estimator for quantile regression in panel data with fixed effects. Asymptotic properties of the instrumental variables estimator are studied for large N and T when N^a/T → 0, for some a > 0. Wald and Kolmogorov-Smirnov type tests for general linear restrictions are developed. The estimator is applied to the problem of measurement errors in variables, which induces endogeneity and as a result bias in the model. We derive an approximation to the bias in the quantile regression fixed effects estimator in the presence of measurement error and show its connection to similar effects in standard least squares models. Monte Carlo simulations are conducted to evaluate the finite sample properties of the estimator in terms of bias and root mean squared error. Finally, the methods are applied to a model of firm investment. The results show interesting heterogeneity in the Tobin’s q and cash flow sensitivities of investment. In both cases, the sensitivities are monotonically increasing along the quantiles.
    Keywords: Quantile Regression, Panel Data, Measurement Errors, Instrumental Variables
    JEL: C14 C23
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cty:dpaper:0906&r=ecm
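    A small simulation of the problem this estimator is built to correct: classical measurement error in a regressor biases the quantile regression slope toward zero. The sketch below uses plain cross-sectional data and statsmodels, not the paper's panel IV estimator.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(1)
      n = 2000
      x_true = rng.normal(size=n)
      y = 1.0 + 2.0 * x_true + rng.normal(size=n)
      x_obs = x_true + rng.normal(size=n)         # classical measurement error

      for x in (x_true, x_obs):
          slope = QuantReg(y, sm.add_constant(x)).fit(q=0.5).params[1]
          print(slope)   # the slope from x_obs is attenuated toward zero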
  4. By: Antonio F. Galvao, Jr. (galvao@illinois.edu); Gabriel V. Montes-Rojas (Department of Economics, City University, London); Jose Olmo (Department of Economics, City University, London)
    Abstract: We study in this article threshold quantile autoregressive processes. In particular, we propose estimation and inference of the parameters in nonlinear quantile processes when the threshold parameter defining nonlinearities is known for each quantile, and also when the parameter vector is estimated consistently. We derive the asymptotic properties of the nonlinear threshold quantile autoregressive estimator. In addition, we develop hypothesis tests for detecting threshold nonlinearities in the quantile process when the threshold parameter vector is not identified under the null hypothesis. In this case we propose to approximate the asymptotic distribution of the composite test using a p-value transformation. This test contributes to the literature on nonlinearity tests by extending Hansen’s (Econometrica 64, 1996, pp. 413-430) methodology for the conditional mean process to the entire quantile process. We apply the proposed methodology to model the dynamics of US unemployment growth after the Second World War. The results show evidence of important heterogeneity associated with unemployment, and strong asymmetric persistence in unemployment growth.
    Keywords: Nonlinear models; quantile regression; threshold models
    JEL: C14 C22 C32 C50
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cty:dpaper:0905&r=ecm
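    A stripped-down sketch of a two-regime threshold quantile autoregression of order one, with the threshold profiled by grid search on the check-loss objective; the identification-robust tests developed in the paper are not implemented.

      import numpy as np
      from statsmodels.regression.quantile_regression import QuantReg

      def fit_tqar1(y, tau=0.5, n_grid=50):
          # Separate intercept and slope below/above a threshold in y_{t-1};
          # the threshold is chosen to minimize the quantile (check) loss
          y_lag, y_cur = y[:-1], y[1:]
          best = None
          for g in np.quantile(y_lag, np.linspace(0.15, 0.85, n_grid)):
              lo = (y_lag <= g).astype(float)
              X = np.column_stack([lo, lo * y_lag, 1 - lo, (1 - lo) * y_lag])
              theta = QuantReg(y_cur, X).fit(q=tau).params
              u = y_cur - X @ theta
              loss = np.sum(u * (tau - (u < 0)))
              if best is None or loss < best[0]:
                  best = (loss, g, theta)
          return best

      rng = np.random.default_rng(2)
      y = np.zeros(1000)                 # SETAR(1): slope 0.8 below 0, 0.2 above
      for t in range(1, 1000):
          y[t] = (0.8 if y[t-1] <= 0 else 0.2) * y[t-1] + rng.normal()
      loss, g_hat, theta = fit_tqar1(y)
      print("threshold:", g_hat, "regime parameters:", theta)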
  5. By: De Witte, Kristof; Kortelainen, Mika
    Abstract: This paper proposes a fully nonparametric framework to estimate the relative efficiency of entities while accounting for a mixed set of continuous and discrete (both ordered and unordered) exogenous variables. Using robust partial frontier techniques, the probabilistic and conditional characterization of the production process, as well as insights from recent developments in nonparametric econometrics, we present a generalized approach for conditional efficiency measurement. To do so, we utilize a tailored mixed kernel function with data-driven bandwidth selection. So far, only descriptive analysis has been suggested for studying the effect of heterogeneity in conditional efficiency estimation. We show how to use and interpret nonparametric bootstrap-based significance tests in a generalized conditional efficiency framework. This allows us to study the statistical significance of continuous and discrete exogenous variables for the production process. The proposed approach is illustrated using simulated examples as well as a sample of British pupils from the OECD PISA data set. The results of the empirical application show that several discrete exogenous factors have a statistically significant effect on the educational process.
    Keywords: Nonparametric estimation; Conditional efficiency measures; Exogenous factors; Generalized kernel function; Education
    JEL: C14 I21 C25
    Date: 2009–03–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14034&r=ecm
  6. By: George Athanasopoulos; Osmani T. de C. Guillén; João V. Issler; Farshid Vahid
    Abstract: We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag-length and rank, relative to the commonly used procedure of selecting the lag-length only and then testing for cointegration.
    Keywords: Reduced rank models, model selection criteria, forecasting accuracy
    JEL: C32 C53
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2009-2&r=ecm
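    For orientation, the conventional two-step procedure that the paper's joint selection is benchmarked against can be sketched with statsmodels: pick the lag length by an information criterion, then run a Johansen trace test for the cointegrating rank (simulated data; settings are illustrative).

      import numpy as np
      from statsmodels.tsa.api import VAR
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      rng = np.random.default_rng(3)
      # Two I(1) series driven by one common stochastic trend => rank 1
      trend = np.cumsum(rng.normal(size=500))
      data = np.column_stack([trend + rng.normal(size=500),
                              0.5 * trend + rng.normal(size=500)])

      p = VAR(data).select_order(maxlags=8).selected_orders["hqic"]
      jres = coint_johansen(data, det_order=0, k_ar_diff=max(p - 1, 1))
      print("trace statistics:", jres.lr1)   # compare with jres.cvt (90/95/99%)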
  7. By: Timo Mitze
    Abstract: We analyse the problem of parameter inconsistency in panel data econometrics due to the correlation of exogenous variables with the error term. A common solution in this setting is to use instrumental-variable (IV) estimation in the spirit of Hausman-Taylor (1981). However, some potential shortcomings of the latter approach recently gave rise to the use of non-IV two-step estimators. Given their growing number of empirical applications, we aim to systematically compare the performance of IV and non-IV approaches in the presence of time-fixed variables and right-hand-side endogeneity using Monte Carlo simulations, where we explicitly control for the problem of IV selection in the Hausman-Taylor case. The simulation results show that the Hausman-Taylor model with perfect knowledge about the underlying data structure (instrument orthogonality) has on average the smallest bias. However, compared to the empirically relevant specification with imperfect knowledge and instruments chosen by statistical criteria, the non-IV rival performs equally well or even better, especially in terms of estimating the coefficients of time-fixed regressors. Moreover, the non-IV method tends to have a smaller root mean square error (RMSE) than the Hausman-Taylor models with either perfect or imperfect knowledge about the underlying correlation between right-hand-side variables and the residual term, indicating that it is generally more efficient. The results are roughly robust to various combinations of the time and cross-section dimensions of the data.
    Keywords: Endogeneity, instrumental variables, two-step estimators, Monte Carlo simulations
    JEL: C15 C23 C52
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:rwi:repape:0083&r=ecm
  8. By: David Ardia (University of Fribourg, Switzerland); Lennart Hoogerheide (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: Important choices for efficient and accurate evaluation of marginal likelihoods by means of Monte Carlo simulation methods are studied for the case of highly non-elliptical posterior distributions. We focus on the situation where one makes use of importance sampling or the independence chain Metropolis-Hastings algorithm for posterior analysis. A comparative analysis is presented of possible advantages and limitations of different simulation techniques; of possible choices of candidate distributions and choices of target or warped target distributions; and finally of numerical standard errors. The importance of a robust and flexible estimation strategy is demonstrated where the complete posterior distribution is explored. In this respect, the adaptive mixture of Student-t distributions of Hoogerheide et al. (2007) works particularly well. Given an appropriately yet quickly tuned candidate, straightforward importance sampling provides the most efficient estimator of the marginal likelihood in the cases investigated in this paper, which include a non-linear regression model of Ritter and Tanner (1992) and a conditional normal distribution of Gelman and Meng (1991). A poor choice of candidate density may lead to a huge loss of efficiency where the numerical standard error may be highly unreliable.
    Keywords: marginal likelihood; Bayes factor; importance sampling; Markov chain Monte Carlo; bridge sampling; adaptive mixture of Student-t distributions
    JEL: C11 C15 C52
    Date: 2009–02–26
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20090017&r=ecm
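    A minimal importance-sampling estimator of a marginal likelihood with a Student-t candidate, in the spirit of the comparison here, on a toy conjugate model where the answer is available in closed form; the adaptive mixture-of-t candidate of Hoogerheide et al. (2007) is not implemented.

      import numpy as np
      from scipy import stats

      y = 1.3              # one observation; y|theta ~ N(theta,1), theta ~ N(0,1)
      true_ml = stats.norm.pdf(y, 0, np.sqrt(2))   # marginal likelihood N(y; 0, 2)

      # Student-t candidate centered at the posterior mean y/2 (posterior sd 1/sqrt(2))
      q = stats.t(df=5, loc=y / 2, scale=1 / np.sqrt(2))
      theta = q.rvs(size=100_000, random_state=np.random.default_rng(4))
      w = stats.norm.pdf(y, theta, 1) * stats.norm.pdf(theta) / q.pdf(theta)

      print("IS estimate:", w.mean(), "truth:", true_ml)
      print("numerical standard error:", w.std(ddof=1) / np.sqrt(w.size))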
  9. By: Jan J. J. Groen; George Kapetanios
    Abstract: In a factor-augmented regression, the forecast of a variable depends on a few factors estimated from a large number of predictors. But how does one determine the appropriate number of factors relevant for such a regression? Existing work has focused on criteria that can consistently estimate the appropriate number of factors in a large-dimensional panel of explanatory variables. However, not all of these factors are necessarily relevant for modeling a specific dependent variable within a factor-augmented regression. This paper develops a number of theoretical conditions that selection criteria must fulfill in order to provide a consistent estimate of the factor dimension relevant for a factor-augmented regression. Our framework takes into account factor estimation error and does not depend on a specific factor estimation methodology. It also provides, as a by-product, a template for developing selection criteria for regressions that include standard generated regressors. The conditions make it clear that standard model selection criteria do not provide a consistent estimate of the factor dimension in a factor-augmented regression. We propose alternative criteria that do fulfill our conditions. These criteria essentially modify standard information criteria so that the corresponding penalty function for dimensionality also penalizes factor estimation error. We show through Monte Carlo and empirical applications that these modified information criteria are useful in determining the appropriate dimensions of factor-augmented regressions.
    Keywords: Regression analysis; Econometric models; Time-series analysis; Forecasting
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:363&r=ecm
  10. By: Keane, Michael P. (University of Technology, Sydney); Sauer, Robert M. (University of Bristol)
    Abstract: This paper develops a simulation estimation algorithm that is particularly useful for estimating dynamic panel data models with unobserved endogenous state variables. The new approach can easily deal with the commonly encountered and widely discussed "initial conditions problem," as well as the more general problem of missing state variables during the sample period. Repeated sampling experiments on dynamic probit models with serially correlated errors indicate that the estimator has good small sample properties. We apply the estimator to a model of married women's labor force participation decisions. The results show that the rarely used Polya model, which is very difficult to estimate given missing data problems, fits the data substantially better than the popular Markov model. The Polya model implies far less state dependence in employment status than the Markov model. It also implies that observed heterogeneity in education, the presence of young children, and husband's income are much more important determinants of participation, while race is much less important.
    Keywords: initial conditions, missing data, simulation, female labor force participation
    JEL: C15 C23 C25 J13 J21
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4054&r=ecm
  11. By: Alfredo García-Hiernaux
    Abstract: The problem of diagnostic checking is tackled from the perspective of subspace methods. Two statistics are presented and their asymptotic distributions are derived under the null. The procedures generalize the Box-Pierce statistic for univariate series and Hosking's statistic in the multivariate case. The performance of the proposals is illustrated via Monte Carlo simulations and an example with real data.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:0901&r=ecm
  12. By: Jae H. Kim; Haiyan Song; Kevin Wong; George Athanasopoulos; Shen Liu
    Abstract: This paper evaluates the performance of prediction intervals generated from alternative time series models, in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state-space models for exponential smoothing, and Harvey's structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and to Australia. The mean coverage rate and length of alternative prediction intervals are evaluated in an empirical setting. It is found that the prediction intervals from all models show satisfactory performance, except for those from the autoregressive model. In particular, those based on the bias-corrected bootstrap in general perform best, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
    Keywords: Automatic forecasting, Bootstrapping, Interval forecasting
    JEL: C22 C52 C53
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2008-11&r=ecm
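    The flavor of the bootstrap intervals evaluated here, sketched for an AR(1) fitted by OLS: resample centered residuals, propagate them along simulated future paths, and read off percentile intervals. The bias correction of the AR coefficients studied in the paper is omitted.

      import numpy as np

      def ar1_bootstrap_interval(y, h=12, B=2000, level=0.95, seed=5):
          rng = np.random.default_rng(seed)
          X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
          c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]   # OLS AR(1) fit
          resid = y[1:] - X @ np.array([c, phi])
          resid -= resid.mean()
          paths = np.empty((B, h))
          for b in range(B):                 # simulate forward, feeding in
              cur = y[-1]                    # resampled residuals
              for j in range(h):
                  cur = c + phi * cur + rng.choice(resid)
                  paths[b, j] = cur
          a = (1 - level) / 2
          return np.quantile(paths, [a, 1 - a], axis=0)       # lower, upper

      rng = np.random.default_rng(6)
      y = np.zeros(200)
      for t in range(1, 200):
          y[t] = 2.0 + 0.7 * y[t-1] + rng.normal()
      lo, hi = ar1_bootstrap_interval(y)
      print(lo[:3], hi[:3])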
  13. By: Kyungchul Song (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies two-step extremum estimation that involves the first step estimation of nonparametric functions of single-indices. First, this paper finds that under certain regularity conditions for conditional measures, linear functionals of conditional expectations are insensitive to the first order perturbation of the parameters in the conditioning variable. Applying this result to symmetrized nearest neighborhood estimation of the nonparametric functions, this paper shows that the influence of the estimated single-indices on the estimator of main interest is asymptotically negligible even when the estimated single-indices follow cube root asymptotics. As a practical use of this finding, this paper proposes a bootstrap method for conditional moment restrictions that are asymptotically valid in the presence of cube root-converging single-index estimators. Some results from Monte Carlo simulations are presented and discussed.
    Keywords: two-step extremum estimation, single-index restrictions, cube root asymptotics, bootstrap
    JEL: C12 C14 C51
    Date: 2009–02–16
    URL: http://d.repec.org/n?u=RePEc:pen:papers:09-012&r=ecm
  14. By: Martin Browning; Jesus M. Carro
    Abstract: Most econometric schemes to allow for heterogeneity in micro behaviour have two drawbacks: they do not fit the data and they rule out interesting economic models. In this paper we consider the time homogeneous first order Markov (HFOM) model that allows for maximal heterogeneity. That is, the modelling of the heterogeneity does not impose anything on the data (except the HFOM assumption for each agent) and it allows for any theory model (that gives a HFOM process for an individual observable variable). `Maximal' means that the joint distribution of initial values and the transition probabilities is unrestricted. We establish necessary and sufficient conditions for the point identification of our heterogeneity structure and show how it depends on the length of the panel. A feasible ML estimation procedure is developed. Tests for a variety of subsidiary hypotheses such as the assumption that marginal dynamic effects are homogeneous are developed. We apply our techniques to a long panel of Danish workers who are very homogeneous in terms of observables. We show that individual unemployment dynamics are very heterogeneous, even for such a homogeneous group. We also show that the impact of cyclical variables on individual unemployment probabilities differs widely across workers. Some workers have unemployment dynamics that are independent of the cycle whereas others are highly sensitive to macro shocks.
    Keywords: Discrete choice, Markov processes, Nonparametric identification, Unemployment dynamics
    JEL: C23 C24 J64
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we091710&r=ecm
  15. By: Rob J. Hyndman; Han Lin Shang
    Abstract: We propose new tools for visualizing large numbers of functional data in the form of smooth curves or surfaces. The proposed tools include functional versions of the bagplot and boxplot, and make use of the first two robust principal component scores, Tukey's data depth and highest density regions. By-products of our graphical displays are outlier detection methods for functional data. We compare these new outlier detection methods with existing methods for detecting outliers in functional data and show that our methods are better able to identify the outliers.
    Keywords: Highest density regions, Robust principal component analysis, Kernel density estimation, Outlier detection, Tukey's halfspace depth
    JEL: C14 C80
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2008-9&r=ecm
  16. By: Makoto Abe (Faculty of Economics, University of Tokyo)
    Abstract: A customer behavior model that permits the estimation of customer lifetime value (CLV) from standard RFM data in a "non-contractual" setting is developed by extending the hierarchical Bayes (HB) framework of the Pareto/NBD model (Abe 2008). The model relates customer characteristics to frequency, dropout and spending behavior, which, in turn, is linked to CLV to provide useful insight into acquisition. The proposed model (1) relaxes the assumption of independently distributed parameters for frequency, dropout and spending processes across customers, (2) accommodates the inclusion of covariates through hierarchical modeling, (3) allows easy estimation of latent variables at the individual level, which could be useful for CRM, and (4) provides the correct measure of errors. Using FSP data from a department store and a CD chain, the HB model is shown to perform well on calibration and holdout samples both at the aggregate and disaggregate levels in comparison with the benchmark Pareto/NBD-based model. Several substantive issues are uncovered. First, both of our datasets exhibit correlation between frequency and spending parameters, violating the assumption of the existing Pareto/NBD-based CLV models. The direction of the correlation is found to be data dependent. Second, useful insight into acquisition is gained by decomposing the effect of a change in covariates on CLV into three components: frequency, dropout and spending. The three components can exert influences in opposite directions, thereby partially canceling one another and producing a smaller total effect on CLV. Third, not accounting for uncertainty in parameter estimates can cause large bias in measures such as CLV and elasticity; ignoring it can have serious consequences for managerial decision making.
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2009cf616&r=ecm
  17. By: George Athanasopoulos; Rob J Hyndman; Haiyan Song; Doris C Wu
    Abstract: We evaluate the performance of various methods for forecasting tourism demand. The data used include 380 monthly series, 427 quarterly series and 530 yearly series, all supplied to us by tourism bodies or by academics from previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate time series approaches, and also econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate only on tourism demand data; (ii) we include econometric approaches; (iii) we evaluate forecast interval coverage as well as point forecast accuracy; (iv) we observe the effect of temporal aggregation on forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure.
    Keywords: Tourism forecasting, ARIMA, Exponential smoothing, Time varying parameter model, Autoregressive distributed lag model, Vector autoregression
    JEL: C22 C52 C53
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2008-10&r=ecm
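    Point (v) refers to the mean absolute scaled error of Hyndman and Koehler; a short sketch of the standard definition, with the seasonal period m supplied by the user (m=1 gives the non-seasonal version).

      import numpy as np

      def mase(y_train, y_test, y_pred, m=12):
          # Out-of-sample MAE scaled by the in-sample MAE of the
          # seasonal naive forecast y_t = y_{t-m}
          scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
          return np.mean(np.abs(y_test - y_pred)) / scale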
  18. By: Kyungchul Song (Department of Economics, University of Pennsylvania)
    Abstract: Nonrandom sampling schemes are often used in program evaluation settings to improve the quality of inference. This paper considers what we call treatment-based sampling, a type of standard stratified sampling in which some of the strata are based on treatments. This paper first establishes semiparametric efficiency bounds for estimators of weighted average treatment effects and average treatment effects on the treated. In doing so, this paper illuminates the role of information about the aggregate shares from the original data set. This paper also develops an optimal design of treatment-based sampling that yields the best semiparametric efficiency bound. Lastly, this paper finds that adapting the efficient estimators of Hirano, Imbens, and Ridder (2003) to treatment-based sampling does not always lead to an efficient estimator. This paper proposes different estimators that are efficient in such a situation.
    Keywords: treatment-based sampling, semiparametric efficiency, treatment effects.
    JEL: C12 C14 C52
    Date: 2009–03–06
    URL: http://d.repec.org/n?u=RePEc:pen:papers:09-011&r=ecm
  19. By: Adam Clements (QUT); Mark Doolan (QUT); Stan Hurn (QUT); Ralf Becker (University of Manchester)
    Abstract: The performance of techniques for evaluating univariate volatility forecasts is well understood. In the multivariate setting, however, the efficacy of the evaluation techniques is less well established. Multivariate forecasts are often evaluated within an economic application, such as a portfolio optimisation context. This paper aims to evaluate the efficacy of such techniques, along with traditional statistical methods. It is found that utility-based methods perform poorly in terms of identifying optimal forecasts, whereas statistical methods are more effective.
    Keywords: Multivariate volatility, forecasts, forecast evaluation, Model confidence set
    JEL: C22 G00
    Date: 2009–02–23
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2009_50&r=ecm
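    One statistical criterion of the kind compared in the paper can be sketched directly: rank competing covariance forecasts by their average squared Frobenius distance to the outer product of realized returns, a standard (noisy but unbiased) proxy for the latent covariance. The model confidence set machinery itself is not implemented.

      import numpy as np

      def frobenius_loss(H, returns):
          # H: (T, N, N) covariance forecasts; returns: (T, N) realized returns
          proxy = np.einsum("ti,tj->tij", returns, returns)   # r_t r_t' proxies
          return np.mean(np.sum((H - proxy) ** 2, axis=(1, 2)))

    The forecast with the lower average loss would be preferred; the paper's question is how reliably such rankings, statistical versus utility-based, identify the truly better model.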
  20. By: Antonio F. Galvao, Jr. (galvao@illinois.edu); Gabriel V. Montes-Rojas (Department of Economics, City University, London); Sung Y. Park (Wang Yanan Institute for Studies in Economics (WISE), Xiamen University)
    Abstract: This paper studies quantile regression in an autoregressive dynamic framework with exogenous stationary covariates. Hence, we develop a quantile autoregressive distributed lag model (QADL). We show that the estimators are consistent and asymptotically normal. Inference based on Wald and Kolmogorov-Smirnov tests for general linear restrictions is proposed. An extensive Monte Carlo simulation is conducted to evaluate the properties of the estimators. We demonstrate the potential of the QADL model with an application to house price returns in the United Kingdom. The results show that house price returns present a heterogeneous autoregressive behavior across the quantiles. Real GDP growth and interest rates also have an asymmetric impact on house price variations.
    Keywords: quantile autoregression, distributed lag model, autoregressive model
    JEL: C14 C32
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cty:dpaper:0904&r=ecm
  21. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Peter Egger; Michael Pfaffermayr
    Abstract: This paper proposes a generalized panel data model with random effects and first-order spatially autocorrelated residuals that encompasses two previously suggested specifications. The first one is described in Anselin's (1988) book and the second one by Kapoor, Kelejian, and Prucha (2007). Our encompassing specification allows us to test for these models as restricted specifications. In particular, we derive three LM and LR tests that restrict our generalized model to obtain (i) the Anselin model, (ii) the Kapoor, Kelejian, and Prucha model, and (iii) the simple random effects model that ignores the spatial correlation in the residuals. For two of these three tests, we obtain closed form solutions and we derive their large sample distributions. Our Monte Carlo results show that the suggested tests are powerful in testing for these restricted specifications even in small and medium sized samples.
    Keywords: Panel data, spatially autocorrelated residuals, maximum-likelihood estimation, Lagrange multiplier, likelihood ratio
    JEL: C12 C23
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:113&r=ecm
  22. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Georges Bresson; Alain Pirotte
    Abstract: Chamberlain (1982) showed that the fixed effects (FE) specification imposes testable restrictions on the coefficients from regressions of all leads and lags of dependent variables on all leads and lags of independent variables. Angrist and Newey (1991) suggested computing this test statistic as the degrees of freedom times the R2 from a regression of within residuals on all leads and lags of the exogenous variables. Despite the simplicity of these tests, they are not commonly used in practice. Instead, a Hausman (1978) test is used based on a contrast of the fixed and random effects specifications. We advocate the use of the Chamberlain (1982) test if the researcher wants to settle on the FE specification. We check this test's performance using Monte Carlo experiments and apply it to the crime example of Cornwell and Trumbull (1994).
    Keywords: Panel data, fixed effects (FE), random effects (RE), Chamberlain test, minimum chi-squared (MCS), Angrist-Newey test
    JEL: C23
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:115&r=ecm
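    A sketch of the Angrist-Newey computation exactly as described above, for a balanced panel with a single regressor; the degrees-of-freedom convention used below (NT) is only illustrative, so consult the paper for the exact value and reference distribution.

      import numpy as np

      def angrist_newey_stat(y, X):
          # y, X: (N, T) balanced panel, one regressor.
          N, T = y.shape
          yw = y - y.mean(axis=1, keepdims=True)   # within transformation
          Xw = X - X.mean(axis=1, keepdims=True)
          beta = (Xw * yw).sum() / (Xw ** 2).sum() # FE (within) estimator
          u = (yw - beta * Xw).ravel()             # stacked within residuals
          # Regress residuals on all leads and lags of the regressor:
          # each observation (i,t) gets the full vector X_i1,...,X_iT
          Z = np.column_stack([np.ones(N * T), np.repeat(X, T, axis=0)])
          fit, *_ = np.linalg.lstsq(Z, u, rcond=None)
          r2 = 1 - ((u - Z @ fit) ** 2).sum() / ((u - u.mean()) ** 2).sum()
          return N * T * r2                        # df * R^2 (df illustrative)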
  23. By: Ralph D. Snyder; Anne B. Koehler
    Abstract: Damped trend exponential smoothing has previously been established as an important forecasting method. Here, it is shown to have close links to simple exponential smoothing with a smoothed error tracking signal. A special case of damped trend exponential smoothing emerges from our analysis, one that is more parsimonious because it effectively relies on one less parameter. This special case is compared with its traditional counterpart in an application to the annual data from the M3 competition and is shown to be quite competitive.
    Keywords: Exponential smoothing, monitoring forecasts, structural change, adjusting forecasts, state space models, damped trend
    JEL: C32 C44 C53
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2008-7&r=ecm
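    For reference, the standard damped trend method is a one-liner in statsmodels (parameter key names follow recent statsmodels releases); the paper's more parsimonious tracking-signal special case is not implemented here.

      import numpy as np
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(7)
      y = 100 + np.cumsum(rng.normal(0.5, 1.0, size=60))   # drifting series

      fit = ExponentialSmoothing(y, trend="add", damped_trend=True).fit()
      print(fit.params["damping_trend"])                   # estimated phi
      print(fit.forecast(8))                               # damped-trend forecasts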
  24. By: Bayer Christian; Hanck Christoph (METEOR)
    Abstract: The local asymptotic power of many popular non-cointegration tests has recently been shown to depend on a certain nuisance parameter. Depending on the value of that parameter, different tests perform best. This paper suggests combination procedures with the aim of providing meta tests that maintain high power across the range of the nuisance parameter. The local asymptotic power of the new meta tests is in general almost as high as that of the more powerful of the underlying tests. When the underlying tests have similar power, the meta tests are even more powerful than the best underlying test. At the same time, our new meta tests avoid the arbitrary decision of which test to use when single test results conflict. They also avoid the size distortion inherent in separately applying multiple tests for cointegration to the same data set. We apply our tests to 159 data sets from published cointegration studies. There, in one third of all cases single tests give conflicting results, whereas our meta tests provide an unambiguous test decision.
    Keywords: macroeconomics
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2009012&r=ecm
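    The simplest combination devices in this spirit are easy to sketch: Bonferroni and Fisher rules applied to the p-values of K underlying non-cointegration tests. The Fisher rule below assumes independent tests, which tests applied to the same data are not; the paper's contribution is precisely to derive critical values that respect that dependence.

      import numpy as np
      from scipy import stats

      def combine_pvalues(pvals, alpha=0.05):
          p = np.asarray(pvals)
          k = p.size
          bonferroni = p.min() < alpha / k            # valid under dependence
          fisher = -2 * np.log(p).sum() > stats.chi2.ppf(1 - alpha, 2 * k)
          return {"bonferroni_reject": bonferroni,    # fisher assumes
                  "fisher_reject": fisher}            # independent tests

      print(combine_pvalues([0.03, 0.20]))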
  25. By: Massimiliano Caporin (Department of Economic Sciences University of Padova); Michael McAleer (Universidad Complutense de Madrid.Department of Quantitative Economics)
    Abstract: Large and very large portfolios of financial assets are routine for many individuals and organizations. The two most widely used models of conditional covariances and correlations are BEKK and DCC. BEKK suffers from the archetypal "curse of dimensionality", whereas DCC does not. We argue that this is a misleading interpretation of the suitability of the two models for practical use. The primary purposes of the paper are: to define targeting as an aid in estimating matrices associated with large numbers of financial assets; to analyze the similarities and dissimilarities between BEKK and DCC, both with and without targeting, on the basis of structural derivation, the analytical forms of the sufficient conditions for the existence of moments, the sufficient conditions for consistency and asymptotic normality, and computational tractability for very large (that is, ultra-high) numbers of financial assets; to present a consistent two-step estimation method for the DCC model; and to determine whether BEKK or DCC should be preferred in practical applications.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:0904&r=ecm
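    Targeting, the first theme of the paper, is easiest to see in the scalar BEKK case: the intercept matrix is pinned to the sample covariance, leaving only two scalars to estimate regardless of the number of assets. A filtering sketch with assumed parameter values:

      import numpy as np

      def scalar_bekk_filter(returns, a=0.05, b=0.90):
          # H_t = (1 - a - b) * S + a * r_{t-1} r_{t-1}' + b * H_{t-1},
          # with S the sample covariance (the "target"); a, b are assumed
          # here and would be estimated by (quasi-)ML in practice
          T, N = returns.shape
          S = np.cov(returns, rowvar=False)
          H = np.empty((T, N, N))
          H[0] = S
          for t in range(1, T):
              r = returns[t - 1]
              H[t] = (1 - a - b) * S + a * np.outer(r, r) + b * H[t - 1]
          return H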
  26. By: Massimiliano Caporin (Department of Economic Sciences University of Padova); Michael McAleer (Universidad Complutense de Madrid.Department of Quantitative Economics)
    Abstract: Modeling volatility, or “predictable changes” over time and space in a variable, is crucial in the natural and social sciences. Life can be volatile, and anything that matters, and which changes over time and space, involves volatility. Without volatility, many temporal and spatial variables would simply be constants. Our purpose is to propose a scientific classification of the alternative volatility models and approaches that are available in the literature, following the Linnaean taxonomy. This scientific classification is used because the literature has evolved as a living organism, with the birth of numerous new species of models.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:0905&r=ecm
  27. By: Alfredo García-Hiernaux (Universidad Complutense de Madrid. Department of Quantitative Economics)
    Abstract: A new procedure to predict with subspace methods is presented in this paper. It is based on combining multiple forecasts obtained from setting a range of values for a specific parameter that is typically fixed by the user in the subspace methods literature. An algorithm to compute these predictions and to obtain a suitable number of combinations is provided. The procedure is illustrated by forecasting the German gross domestic product.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:0902&r=ecm
  28. By: Massimiliano Caporin (Department of Economic Sciences University of Padova); Michael McAleer (Universidad Complutense de Madrid.Department of Quantitative Economics)
    Abstract: DAMGARCH is a new model that extends the VARMA-GARCH model of Ling and McAleer (2003) by introducing multiple thresholds and time-dependent structure in the asymmetry of the conditional variances. Analytical expressions for the news impact surface implied by the new model are also presented. DAMGARCH models the shocks affecting the conditional variances on the basis of an underlying multivariate distribution. It is possible to model explicitly asset-specific shocks and common innovations by partitioning the multivariate density support. This paper presents the model structure, describes the implementation issues, and provides the conditions for the existence of a unique stationary solution, and for consistency and asymptotic normality of the quasi-maximum likelihood estimators. The paper also presents an empirical example to highlight the usefulness of the new model.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:0911&r=ecm
  29. By: Onatski, Alexei; Uhlig, Harald
    Abstract: We show that the empirical distribution of the roots of the vector auto-regression of order n fitted to T observations of a general stationary or non-stationary process converges to the uniform distribution over the unit circle on the complex plane, when both T and n tend to infinity so that (ln T)/n → 0 and n^3/T → 0. In particular, even if the process is a white noise, the roots of the estimated vector auto-regression will converge in absolute value to unity.
    Keywords: unit roots, unit root, white noise, asymptotics, autoregression, Granger and Jeon, clustering of roots
    JEL: C32 C22 C01
    Date: 2009–03–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14057&r=ecm
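    The headline result is easy to see numerically in the univariate case: fit a long autoregression to pure white noise and inspect the moduli of the companion-matrix eigenvalues (a rough sketch; the formal result concerns the joint limit of n and T).

      import numpy as np

      rng = np.random.default_rng(8)
      T, n = 20000, 20
      y = rng.normal(size=T)                        # white noise

      # OLS fit of an AR(n): column j holds lag j+1
      X = np.column_stack([y[n - j - 1:T - j - 1] for j in range(n)])
      phi, *_ = np.linalg.lstsq(X, y[n:], rcond=None)

      C = np.zeros((n, n))                          # companion matrix
      C[0] = phi
      C[1:, :-1] = np.eye(n - 1)
      moduli = np.abs(np.linalg.eigvals(C))
      print(np.sort(moduli).round(2))               # crowd toward 1 as n grows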
  30. By: Jean-Yves Duclos; Josée Leblanc; David Sahn
    Abstract: This paper develops a methodology to estimate entire population distributions from bin-aggregated sample data. We do this through the estimation of the parameters of mixtures of distributions that allow for maximal parametric flexibility. The statistical approach we develop enables comparisons of the full distributions of height data from potential army conscripts across France's 88 departments for most of the nineteenth century. These comparisons are made by testing for differences of means and for stochastic dominance. Corrections for possible measurement errors are also devised by taking advantage of the richness of the data sets. Our methodology is of interest to researchers working on historical as well as contemporary bin-aggregated or histogram-type data, a format that is still widespread since much of the publicly available information comes in that form, often owing to political sensitivity and/or confidentiality concerns.
    Keywords: Health, health inequality, aggregate data, 19th century France, welfare
    JEL: C14 C81 D3 D63 I1 N3
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0910&r=ecm
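    The core computation can be sketched with a single normal distribution in place of the paper's flexible mixtures: maximize a multinomial likelihood whose cell probabilities are CDF differences across the bin edges (the edges and counts below are made up for illustration, and mass outside the outer edges is ignored).

      import numpy as np
      from scipy import optimize, stats

      edges = np.array([150, 155, 160, 165, 170, 175, 180, 190])   # heights, cm
      counts = np.array([14, 60, 180, 310, 270, 130, 36])          # per bin

      def neg_loglik(params):
          mu, log_sigma = params
          p = np.diff(stats.norm.cdf(edges, mu, np.exp(log_sigma)))
          return -np.sum(counts * np.log(np.clip(p, 1e-12, None)))

      res = optimize.minimize(neg_loglik, x0=[168.0, np.log(6.0)],
                              method="Nelder-Mead")
      print("mu:", res.x[0], "sigma:", np.exp(res.x[1]))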
  31. By: Chan, Wing Hong (Wilfrid Laurier University); Young, Denise (University of Alberta, Department of Economics)
    Abstract: GARCH-jump models of metal price returns, while allowing for sudden movements (jumps), apply the same specification of the jump component in both 'bear' and 'bull' markets. As a result, the more frequent but relatively small jumps that occur in both bear and bull markets dominate the characterization of the jump process. Given that large jumps, although less frequent, are still quite common in copper (and other metal) markets, this is a potential shortcoming of current models. More flexibility can be added to the modeling process by allowing for regime-switching. In this paper we specify a model that allows for switching across two separate regimes, with the possibility of different jump sizes and frequencies under each regime, along with a regime-specific GARCH process for the conditional variance. This model is applied to daily copper futures prices over the period of January 2, 1980 through the end of July 2007. The model is estimated both with and without factors such as interest and exchange rate movements entering into the specification of the state-dependent mean of the conditional jump size. In some respects, a regime-switching GARCH-jump model performs well when applied to the copper returns data. The results are mixed in terms of whether or not variations of the model that allow jump sizes to be a function of interest or exchange rates offer much of an advantage over a pure time series approach to the modeling of copper returns over the past three decades.
    Keywords: regime switching; Poisson jump; GARCH volatility; copper futures
    JEL: C32 C53 G13
    Date: 2009–03–17
    URL: http://d.repec.org/n?u=RePEc:ris:albaec:2009_013&r=ecm
  32. By: Rob J Hyndman; Shu Fan
    Abstract: Long-term electricity demand forecasting plays an important role in planning for future generation facilities and transmission augmentation. In a long-term context, planners must adopt a probabilistic view of potential peak demand levels; density forecasts (providing estimates of the full probability distributions of possible future demand values) are therefore more helpful than point forecasts, and are necessary for utilities to evaluate and hedge the financial risk accrued by demand variability and forecasting uncertainty. This paper proposes a new methodology to forecast the density of long-term peak electricity demand. Peak electricity demand in a given season is subject to a range of uncertainties, including underlying population growth, changing technology, economic conditions, prevailing weather conditions (and the timing of those conditions), as well as the general randomness inherent in individual usage. It is also subject to some known calendar effects due to the time of day, day of week, time of year, and public holidays. We describe a comprehensive forecasting solution in this paper. First, we use semiparametric additive models to estimate the relationships between demand and the driver variables, including temperatures, calendar effects and some demographic and economic variables. Then we forecast the demand distributions using a mixture of temperature simulation, assumed future economic scenarios, and residual bootstrapping. The temperature simulation is implemented through a new seasonal bootstrapping method with variable blocks. The proposed methodology has been used to forecast the probability distribution of annual and weekly peak electricity demand for South Australia since 2007. We evaluate the performance of the methodology by comparing the forecast results with the actual demand of the summer of 2007/08.
    Keywords: Long-term demand forecasting, density forecast, time series, simulation
    JEL: C14 C15 C52 C53 L94
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2008-6&r=ecm
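    The temperature simulation step rests on a seasonal bootstrap with variable-length blocks. A stripped-down sketch, assuming the history is a (years x 365) matrix of daily temperatures: walk through the calendar, repeatedly copying a geometric-length block from the same calendar window of a randomly chosen year.

      import numpy as np

      def seasonal_block_bootstrap(temps, mean_block=14, seed=None):
          # temps: (n_years, 365) historical daily temperatures.
          # Returns one synthetic year preserving the seasonal pattern
          # and short-range dependence; block lengths are geometric.
          rng = np.random.default_rng(seed)
          n_years, n_days = temps.shape
          out = np.empty(n_days)
          d = 0
          while d < n_days:
              L = min(rng.geometric(1 / mean_block), n_days - d)
              out[d:d + L] = temps[rng.integers(n_years), d:d + L]
              d += L
          return out

      rng = np.random.default_rng(9)
      history = (20 + 8 * np.sin(2 * np.pi * np.arange(365) / 365)
                 + rng.normal(scale=2, size=(30, 365)))
      print(seasonal_block_bootstrap(history, seed=9)[:7])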

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.