nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒03‒28
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Moment-based estimation of smooth transition regression models with endogenous variables By Areosa, W.D.; McAleer, M.; Medeiros, M.C.
  2. Bayesian near-boundary analysis in basic macroeconomic time series models By Pooter, M.D. de; Ravazzolo, F.; Segers, R.; Dijk, H.K. van
  3. Testing for seasonal unit roots in monthly panels of time series By Kunst, R.M.; Franses, Ph.H.B.F.
  4. A Correction Function Approach to Solve the Incidental Parameter Problem By Li, GuangJie; Leon-Gonzalez, Roberto
  5. Forecast evaluation of small nested model sets. By Kirstin Hubrich; Kenneth D. West
  6. Back to square one: identification issues in DSGE models By Canova, Fabio; Sala, Luca
  7. A State Space Approach to Estimating the Integrated Variance and Microstructure Noise Component By Daisuke Nagakura; Toshiaki Watanabe
  8. A Bayesian approach to two-mode clustering By Dijk, A. van; Rosmalen, J.M. van; Paap, R.
  9. Optimal Prediction Pools. By John Geweke; Gianni Amisano
  10. Forecasting Random Walks under Drift Instability By M. Hashem Pesaran; Andreas Pick
  11. Consistent Estimation, Model Selection and Averaging of Dynamic Panel Data Models with Fixed Effect By Li, GuangJie
  12. Adjusted empirical likelihood estimation of the Youden index and associated threshold for the bigamma model By Emilio Leton; Elisa M. Molanes
  13. Expert opinion versus expertise in forecasting By Franses, Ph.H.B.F.; McAleer, M.; Legerstee, R.
  14. Growth Regressions, Principal Components and Frequentist Model Averaging By Wagner, Martin; Hlouskova, Jaroslava
  15. Outliers and judgemental adjustment of time series forecasts. By Franses, Ph.H.B.F.
  16. Model selection for forecast combination By Franses, Ph.H.B.F.
  17. On Directional Multiple-Output Quantile Regression By Davy Paindaveine; Miroslav Siman
  18. Measuring weekly consumer confidence. By Segers, R.; Franses, Ph.H.B.F.
  19. Range-based covariance estimation using high-frequency data: The realized co-range By Bannouh, K.; Dijk, D.J.C. van; Martens, M.P.E.
  20. Spatial Point Pattern Analysis and Industry Concentration By Reinhold Kosfeld; Hans-Friedrich Eckey; Jørgen Lauridsen
  21. Biased estimates in discrete choice models: the appropriate inclusion of psychometric data into the valuation of recycled wastewater By Gibson, Fiona L.; Burton, Michael
  22. Aggregation of Linear Models for Panel Data By Alexandre Petkovic; David Veredas

  1. By: Areosa, W.D.; McAleer, M.; Medeiros, M.C. (Erasmus Econometric Institute)
    Abstract: Nonlinear regression models have been widely used in practice for a variety of time series and cross-section datasets. For purposes of analyzing univariate and multivariate time series data, in particular, Smooth Transition Regression (STR) models have been shown to be very useful for representing and capturing asymmetric behavior. Most STR models have been applied to univariate processes, and have made a variety of assumptions, including stationary or cointegrated processes, uncorrelated, homoskedastic or conditionally heteroskedastic errors, and weakly exogenous regressors. Under the assumption of exogeneity, the standard method of estimation is nonlinear least squares. The primary purpose of this paper is to relax the assumption of weakly exogenous regressors and to discuss moment based methods for estimating STR models. The paper analyzes the properties of the STR model with endogenous variables by providing a diagnostic test of linearity of the underlying process under endogeneity, developing an estimation procedure and a misspecification test for the STR model, presenting the results of Monte Carlo simulations to show the usefulness of the model and estimation method, and providing an empirical application for inflation rate targeting in Brazil. We show that STR models with endogenous variables can be specified and estimated by a straightforward application of existing results in the literature.
    Keywords: smooth transition;nonlinear models;nonlinear instrumental variables;generalized method of moments;endogeneity;inflation targeting
    Date: 2008–12–16
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765014154&r=ecm
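    The logistic smooth transition regression (STR) structure discussed in this abstract can be sketched as follows. This is an illustrative Python sketch of the standard two-regime STR form, not the authors' code; all names are assumptions.

    ```python
    import numpy as np

    def logistic_transition(s, gamma, c):
        """Logistic transition function G(s; gamma, c) taking values in (0, 1):
        G = 1 / (1 + exp(-gamma * (s - c))). gamma governs the speed of the
        transition between regimes; c is the location of the transition."""
        return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

    def str_fitted(x, s, beta, theta, gamma, c):
        """Fitted values of a two-regime STR model:
        y_t = x_t'beta + (x_t'theta) * G(s_t; gamma, c) + error."""
        return x @ beta + (x @ theta) * logistic_transition(s, gamma, c)
    ```

    As gamma grows large the model approaches a two-regime threshold model; endogeneity of the regressors or of the transition variable s is what motivates the moment-based (GMM-type) estimation the paper develops.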
  2. By: Pooter, M.D. de; Ravazzolo, F.; Segers, R.; Dijk, H.K. van (Erasmus Econometric Institute)
    Abstract: Several lessons learnt from a Bayesian analysis of basic macroeconomic time series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
    Keywords: Gibbs sampler;MCMC;autocorrelation;nonstationarity;reduced rank models;state space models;error correction models;random effects panel data models;Bayesian model averaging
    Date: 2008–08–25
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765013055&r=ecm
  3. By: Kunst, R.M.; Franses, Ph.H.B.F. (Erasmus Econometric Institute)
    Abstract: We consider the problem of testing for seasonal unit roots in monthly panel data. To this aim, we generalize the quarterly CHEGY test to the monthly case. This parametric test is contrasted with a new nonparametric test, which is the panel counterpart to the univariate RURS test that relies on counting extrema in time series. All methods are applied to an empirical data set on tourism in Austrian provinces. The power properties of the tests are evaluated in simulation experiments that are tuned to the tourism data.
    Keywords: seasonality;nonparametric test;unit roots;panel;tourism
    Date: 2009–02–19
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765014861&r=ecm
  4. By: Li, GuangJie (Cardiff Business School); Leon-Gonzalez, Roberto
    Abstract: Following Lancaster (2002), we propose a strategy to solve the incidental parameter problem. The method is demonstrated under a simple panel Poisson count model. We also extend the strategy to accommodate cases when information orthogonality is unavailable, such as the linear AR(p) panel model. For the AR(p) model, there exists a correction function to fix the incidental parameter problem when the model is stationary with strictly exogenous regressors. MCMC algorithms are developed for parameter estimation and model comparison. The results based on the simulated data sets suggest that our method could achieve consistency in both parameter estimation and model selection.
    Keywords: dynamic panel data model with fixed effect; incidental parameter problem; consistency in estimation; model selection; Bayesian model averaging; Markov chain Monte Carlo (MCMC)
    JEL: C52 C11 C12 C13 C15
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2009/6&r=ecm
  5. By: Kirstin Hubrich (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.); Kenneth D. West (University of Wisconsin, Madison, Department of Economics,  1180 Observatory Drive,  Madison, WI 53706, USA.)
    Abstract: We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require reestimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic, and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation. JEL Classification: C32, C53, E37.
    Keywords: Out-of-sample, prediction, testing, multiple model comparisons, inflation forecasting.
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:200901030&r=ecm
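    The Clark-West (2007) MSPE adjustment and the maximum t-statistic described in this abstract can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions (equal-weight variance estimate, no HAC correction), not the authors' implementation.

    ```python
    import numpy as np

    def clark_west_adjusted_diff(y, f_bench, f_alt):
        """Clark-West (2007) adjusted MSPE differential for one nested pair.
        y: realized values; f_bench: benchmark forecasts; f_alt: forecasts
        from a nesting alternative. Returns the per-period adjusted loss."""
        e_b = y - f_bench
        e_a = y - f_alt
        # The adjustment term removes the extra noise the larger model
        # introduces when the null (benchmark) model generates the data.
        return e_b ** 2 - (e_a ** 2 - (f_bench - f_alt) ** 2)

    def max_t_statistic(y, f_bench, alt_forecasts):
        """Maximum t-statistic across a small set of nested alternatives."""
        tstats = []
        for f_alt in alt_forecasts:
            d = clark_west_adjusted_diff(y, f_bench, f_alt)
            # Plain t-statistic; a serial-correlation-robust (HAC) variance
            # would be needed in practice for multi-step-ahead forecasts.
            tstats.append(d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))
        return max(tstats)
    ```

    Comparing all alternatives through the maximum t-statistic (rather than pairwise, sequentially) is what the paper's simulations find to deliver accurate size and the best power.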
  6. By: Canova, Fabio; Sala, Luca
    Abstract: We investigate identification issues in DSGE models and their consequences for parameter estimation and model evaluation when the objective function measures the distance between estimated and model-based impulse responses. Observational equivalence, partial and weak identification problems are widespread and typically produced by an ill-behaved mapping between the structural parameters and the coefficients of the solution. Different objective functions affect identification and small samples interact with parameters identification. Diagnostics to detect identification deficiencies are provided and applied to a widely used model.
    Keywords: DSGE models; Identification; Impulse Responses; small samples.
    JEL: C10 C52 E32 E50
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:7234&r=ecm
  7. By: Daisuke Nagakura (Institute for Monetary and Economic Studies, Bank of Japan (E-mail: daisuke.nagakura@boj.or.jp)); Toshiaki Watanabe (Professor, Institute of Economic Research, Hitotsubashi University, and Institute for Monetary and Economic Studies, Bank of Japan (E-mail: watanabe@hit-u.ac.jp, toshiaki.watanabe@boj.or.jp))
    Abstract: We call the realized variance (RV) calculated with observed prices contaminated by microstructure noises (MNs) the noise-contaminated RV (NCRV) and refer to the component in the NCRV associated with the MNs as the MN component. This paper develops a method for estimating the integrated variance (IV) and MN component simultaneously, extending the state space method proposed by Barndorff-Nielsen and Shephard (2002). Our extension is based on the result obtained in Meddahi (2003), namely, when the true log-price process follows a general class of continuous-time stochastic volatility (SV) models, the IV follows an ARMA process. We represent the NCRV by a state space form and show that the state space form parameters are not identifiable; however, they can be expressed as functions of fewer identifiable parameters. We illustrate how to estimate these parameters. The proposed method is applied to yen/dollar exchange rate data. We find that the magnitude of the MN component is, on average, about 21%–48% of the NCRV, depending on the sampling frequency.
    Keywords: Realized Variance, Integrated Variance, Microstructure Noise
    JEL: C0 G0
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:ime:imedps:09-e-11&r=ecm
  8. By: Dijk, A. van; Rosmalen, J.M. van; Paap, R. (Erasmus Econometric Institute)
    Abstract: We develop a new Bayesian approach to estimate the parameters of a latent-class model for the joint clustering of both modes of two-mode data matrices. Posterior results are obtained using a Gibbs sampler with data augmentation. Our Bayesian approach has three advantages over existing methods. First, we are able to do statistical inference on the model parameters, which would not be possible using frequentist estimation procedures. In addition, the Bayesian approach allows us to provide statistical criteria for determining the optimal numbers of clusters. Finally, our Gibbs sampler has fewer problems with local optima in the likelihood function and empty classes than the EM algorithm used in a frequentist approach. We apply the Bayesian estimation method of the latent-class two-mode clustering model to two empirical data sets. The first data set is the Supreme Court voting data set of Doreian, Batagelj, and Ferligoj (2004). The second data set comprises the roll call votes of the United States House of Representatives in 2007. For both data sets, we show how two-mode clustering can provide useful insights.
    Keywords: two-mode data;model-based clustering;latent-class model;MCMC
    Date: 2009–03–16
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765015112&r=ecm
  9. By: John Geweke (Departments of Statistics and Economics, University of Iowa, Iowa City, IA, USA.); Gianni Amisano (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: A prediction model is any statement of a probability distribution for an outcome not yet observed. This study considers the properties of weighted linear combinations of n prediction models, or linear pools, evaluated using the conventional log predictive scoring rule. The log score is a concave function of the weights and, in general, an optimal linear combination will include several models with positive weights despite the fact that exactly one model has limiting posterior probability one. The paper derives several interesting formal results: for example, a prediction model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with prediction models from the ARCH, stochastic volatility and Markov mixture families. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools, and these pools substantially outperform their best components. JEL Classification: C11, C53.
    Keywords: forecasting, GARCH, log scoring, Markov mixture, model combination, S&P 500 returns, stochastic volatility.
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:200901017&r=ecm
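    The optimal linear pool this abstract describes maximizes the log predictive score of a weighted combination of predictive densities over the simplex. A minimal Python sketch, assuming the n predictive densities have already been evaluated at the realized outcomes (a softmax gradient-ascent optimizer stands in for whatever numerical method the authors use):

    ```python
    import numpy as np

    def log_score(weights, densities):
        """Log predictive score of a linear pool.
        densities: (T, n) array of predictive densities p_i(y_t) evaluated at
        the realized outcomes; weights: length-n, nonnegative, summing to 1."""
        return np.sum(np.log(densities @ weights))

    def optimal_pool_weights(densities, iters=2000, lr=0.1):
        """Maximize the (concave-in-weights) log score over the simplex via
        gradient ascent in a softmax parametrization of the weights."""
        T, n = densities.shape
        theta = np.zeros(n)
        for _ in range(iters):
            w = np.exp(theta)
            w /= w.sum()
            pool = densities @ w                               # pooled density per period
            grad_w = (densities / pool[:, None]).sum(axis=0)   # dL/dw
            theta += lr / T * w * (grad_w - w @ grad_w)        # chain rule through softmax
        w = np.exp(theta)
        return w / w.sum()
    ```

    Because the log score is concave in the weights, the optimum is global; the paper's key point is that several models can retain positive weight even when one model would receive posterior probability one asymptotically.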
  10. By: M. Hashem Pesaran; Andreas Pick
    Abstract: This paper considers forecast averaging when the same model is used but estimation is carried out over different estimation windows. It develops theoretical results for random walks when their drift and/or volatility are subject to one or more structural breaks. It is shown that compared to using forecasts based on a single estimation window, averaging over estimation windows leads to a lower bias and to a lower root mean square forecast error for all but the smallest of breaks. Similar results are also obtained when observations are exponentially down-weighted, although in this case the performance of forecasts based on exponential down-weighting critically depends on the choice of the weighting coefficient. The forecasting techniques are applied to 20 weekly series of stock market futures and it is found that average forecasting methods in general perform better than using forecasts based on a single estimation window. 
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:207&r=ecm
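    The idea of averaging forecasts across estimation windows can be sketched in a few lines for the random-walk-with-drift case the paper studies. This is an illustrative equal-weight sketch, not the authors' procedure; window lengths are arbitrary.

    ```python
    import numpy as np

    def window_average_forecast(y, windows):
        """One-step-ahead forecast of a random walk with drift, averaged
        with equal weights over several estimation windows.
        y: observed series; windows: list of window lengths (numbers of
        recent observations) used to estimate the drift."""
        forecasts = []
        for w in windows:
            # drift estimate = mean change over the last w increments
            drift = np.mean(np.diff(y[-(w + 1):]))
            forecasts.append(y[-1] + drift)
        return np.mean(forecasts)
    ```

    Averaging over windows hedges against an unknown break date: short windows adapt quickly after a break while long windows are more precise when there is none, which is why the paper finds the average beats any single-window forecast for all but the smallest breaks.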
  11. By: Li, GuangJie (Cardiff Business School)
    Abstract: In the context of an autoregressive panel data model with fixed effect, we examine the relationship between consistent parameter estimation and consistent model selection. Consistency in parameter estimation is achieved by using the transformation of the fixed effect proposed by Lancaster (2002). We find that such transformation does not necessarily lead to consistent estimation of the autoregressive coefficient when the wrong set of exogenous regressors are included. To estimate our model consistently and to measure its goodness of fit, we argue for comparing different model specifications using the Bayes factor rather than the Bayesian information criterion based on the biased maximum likelihood estimates. When the model uncertainty is substantial, we recommend the use of Bayesian Model Averaging. Finally, we apply our method to study the relationship between financial development and economic growth. Our findings reveal that stock market development is positively related to economic growth, while the effect of bank development is not as significant as the classical literature suggests.
    Keywords: dynamic panel data model with fixed effect; incidental parameter problem; consistency in estimation; model selection; Bayesian Model Averaging; finance and growth
    JEL: C52 C11 C13 C15
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2009/5&r=ecm
  12. By: Emilio Leton; Elisa M. Molanes
    Abstract: The Youden index is a widely used measure in the framework of medical diagnostics, where the effectiveness of a biomarker (screening marker or predictor) for classifying a disease status is studied. When the biomarker is continuous, it is important to determine the threshold or cut-off point to be used in practice for the discrimination between diseased and healthy populations. We introduce a new method, based on adjusted empirical likelihood for quantiles, to estimate the Youden index and its associated threshold. We also include bootstrap-based confidence intervals for both of them. In the simulation study, we compare this method with a recent approach based on the delta method under the bigamma scenario. Finally, a real example of prostatic cancer, well known in the literature, is analyzed to provide the reader with a better understanding of the new method.
    Keywords: Confidence interval, Empirical likelihood, Optimal cut-off point, ROC curve, Youden index
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws091907&r=ecm
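    The quantity being estimated here is the Youden index J = max over cut-offs c of {sensitivity(c) + specificity(c) - 1}, together with its maximizing threshold. A naive empirical version (not the paper's adjusted-empirical-likelihood estimator) can be sketched as:

    ```python
    import numpy as np

    def empirical_youden(healthy, diseased):
        """Empirical Youden index and its maximizing threshold, assuming
        higher marker values indicate disease."""
        cutoffs = np.unique(np.concatenate([healthy, diseased]))
        best_j, best_c = -1.0, None
        for c in cutoffs:
            sens = np.mean(diseased > c)   # true positive rate at cut-off c
            spec = np.mean(healthy <= c)   # true negative rate at cut-off c
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_c = j, c
        return best_j, best_c
    ```

    The paper's contribution is to estimate these quantities through adjusted empirical likelihood for quantiles, which also yields bootstrap confidence intervals; the sketch above only shows what the index and threshold mean.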
  13. By: Franses, Ph.H.B.F.; McAleer, M.; Legerstee, R. (Erasmus Econometric Institute)
    Abstract: Expert opinion is an opinion given by an expert, and it can have significant value in forecasting key policy variables in economics and finance. Expert forecasts can either be expert opinions, or forecasts based on an econometric model. An expert forecast that is based on an econometric model is replicable, and can be defined as a replicable expert forecast (REF), whereas an expert opinion that is not based on an econometric model can be defined as a non-replicable expert forecast (Non-REF). Both replicable and non-replicable expert forecasts may be made available by an expert regarding a policy variable of interest. In this paper we develop a model to generate replicable expert forecasts, and compare REF with Non-REF. A method is presented to compare REF and Non-REF using efficient estimation methods, and a direct test of expertise on expert opinion is given. The latter serves the purpose of investigating whether expert adjustment improves the model-based forecasts. Illustrations for forecasting pharmaceutical SKUs, where the econometric model is of (variations of) the ARIMA type, show the relevance of the new methodology proposed in the paper. In particular, experts possess significant expertise, and expert forecasts are significant in explaining actual sales.
    Keywords: direct test;efficient estimation;expert opinion;replicable expert forecast;generated regressors;non-replicable expert forecast
    Date: 2008–11–24
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765013902&r=ecm
  14. By: Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria); Hlouskova, Jaroslava (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: This paper offers two innovations for empirical growth research. First, the paper discusses principal components augmented regressions to take into account all available information in well-behaved regressions. Second, the paper proposes a frequentist model averaging framework as an alternative to Bayesian model averaging approaches. The proposed methodology is applied to three data sets, including the Sala-i-Martin et al. (2004) and Fernandez et al. (2001) data as well as a data set of the European Union member states' regions. Key economic variables are found to be significantly related to economic growth. The findings highlight the relevance of the proposed methodology for empirical economic growth research.
    Keywords: Frequentist model averaging, Growth regressions, Principal components
    JEL: C31 C52 O11 O18 O47
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:236&r=ecm
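    A principal-components-augmented regression of the kind this abstract describes keeps a few focus regressors and summarizes the remaining information with leading principal components. A minimal Python sketch under assumed conventions (standardized auxiliary block, components via SVD), not the authors' exact specification:

    ```python
    import numpy as np

    def pc_augmented_regression(y, X_focus, X_other, k):
        """OLS of y on an intercept, the focus regressors, and the first k
        principal components of the standardized remaining regressors."""
        Z = (X_other - X_other.mean(axis=0)) / X_other.std(axis=0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)  # principal directions
        pcs = Z @ Vt[:k].T                                # first k components
        D = np.column_stack([np.ones(len(y)), X_focus, pcs])
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        return beta
    ```

    The components condense the many candidate growth determinants into a well-behaved regression; the paper then averages over such specifications with frequentist rather than Bayesian weights.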
  15. By: Franses, Ph.H.B.F. (Erasmus Econometric Institute)
    Abstract: This paper links judgemental adjustment of model-based forecasts with the potential presence of exceptional observations in time series. Specific attention is given to current and future additive outliers, as these require most consideration. A brief illustration using a quarterly real GDP series demonstrates various issues. The main focus of the paper is on various testable propositions, which should facilitate the creation and the evaluation of judgemental adjustment of time series forecasts.
    Date: 2008–03–18
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765011705&r=ecm
  16. By: Franses, Ph.H.B.F. (Erasmus Econometric Institute)
    Abstract: This paper advocates selecting a model only if it significantly contributes to the accuracy of a combined forecast. Using hold-out-data forecasts of individual models and of the combined forecast, a useful test for equal forecast accuracy can be designed. An illustration for real-time forecasts of GDP in the Netherlands shows its ease of use.
    Keywords: forecast combination;model selection
    Date: 2008–06–01
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765012552&r=ecm
  17. By: Davy Paindaveine; Miroslav Siman
    Abstract: This paper sheds some new light on the multivariate (projectional) quantiles recently introduced in Kong and Mizera (2008). Contrary to the sophisticated set analysis used there, we adopt a more parametric approach and study the subgradient conditions associated with these quantiles. In this setup, we introduce Lagrange multipliers which can be interpreted in various interesting ways. We also link these quantiles with portfolio optimization and present an alternative proof that the resulting quantile regions coincide with the halfspace depth ones. Our proof makes the link between halfspace depth contours and univariate quantiles of projections more explicit and results in an exact computation of sample quantile regions (hence also of halfspace depth regions) from projectional quantiles. Throughout, we systematically consider the regression case, which was barely touched upon in Kong and Mizera (2008). Above all, we study the projectional regression quantile regions and compare them with those resulting from the approach considered in Hallin, Paindaveine, and Siman (2009). To gain in generality and to make the comparison between both concepts easier, we present a general framework for directional multivariate (regression) quantiles which includes both approaches as particular cases and is of interest in itself.
    Keywords: Multivariate quantile, quantile regression, multiple-output regression, halfspace depth, portfolio optimization, Value-at-Risk.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2009_011&r=ecm
  18. By: Segers, R.; Franses, Ph.H.B.F. (Erasmus Econometric Institute)
    Abstract: This paper puts forward a data collection method to measure weekly consumer confidence at the individual level. The data thus obtained allow one to statistically analyze the dynamic correlation of such a consumer confidence indicator and to draw inference on transition rates, which is not possible for currently available monthly data collected by statistical agencies on the basis of repeated cross-sections. An application of the method to various waves of data for the Netherlands shows its merits. Upon temporal aggregation we also show the resemblance of our data with those collected by Statistics Netherlands.
    Keywords: consumer confidence;randomized sampling
    Date: 2008–03–31
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765011892&r=ecm
  19. By: Bannouh, K.; Dijk, D.J.C. van; Martens, M.P.E. (Erasmus Econometric Institute)
    Abstract: We introduce the realized co-range, utilizing intraday high-low price ranges to estimate asset return covariances. Using simulations we find that for plausible levels of bid-ask bounce and infrequent and non-synchronous trading the realized co-range improves upon the realized covariance, which uses cross-products of intraday returns. One advantage of the co-range is that in an ideal world it is five times more efficient than the realized covariance when sampling at the same frequency. The second advantage is that the upward bias due to bid-ask bounce and the downward bias due to infrequent and non-synchronous trading partially offset each other. In a volatility timing strategy for S&P 500, bond and gold futures we find that the co-range estimates are less noisy, as evidenced by lower transaction costs and higher Sharpe ratios when placing more weight on recent data for predicting covariances.
    Keywords: realized covariance;realized co-range;high-frequency data;market microstructure noise;bias-correction
    Date: 2008–01–15
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765010904&r=ecm
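    One common way to build a range-based covariance, which this sketch assumes (it is illustrative, not necessarily the authors' exact estimator, and omits the bias corrections the paper studies), combines Parkinson range-based variances of the sum and difference processes through the polarization identity Cov(X, Y) = [Var(X+Y) - Var(X-Y)] / 4:

    ```python
    import numpy as np

    def parkinson_var(high, low):
        """Parkinson range-based variance, summed over intraday intervals:
        sum (high - low)^2 / (4 ln 2)."""
        return np.sum((high - low) ** 2) / (4.0 * np.log(2.0))

    def realized_corange(path_x, path_y):
        """Range-based covariance via the polarization identity, using
        intraday high-low ranges of the sum and difference processes.
        path_x, path_y: (n_intervals, m) arrays of log-prices observed
        within each intraday interval."""
        s, d = path_x + path_y, path_x - path_y
        v_sum = parkinson_var(s.max(axis=1), s.min(axis=1))
        v_diff = parkinson_var(d.max(axis=1), d.min(axis=1))
        return (v_sum - v_diff) / 4.0
    ```

    Using ranges rather than squared returns is the source of the efficiency gain the abstract cites; the offsetting biases from bid-ask bounce and non-synchronous trading are what make the estimator attractive with real high-frequency data.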
  20. By: Reinhold Kosfeld (Institute of Economics, University of Kassel, 34109 Kassel, Germany); Hans-Friedrich Eckey (Institute of Economics, University of Kassel, 34109 Kassel, Germany); Jørgen Lauridsen (Institute of Public Health, University of Southern Denmark, Odense M 5230, Denmark)
    Abstract: Traditional measures of spatial industry concentration are restricted to given areal units. They do not make allowance for the fact that concentration may be differently pronounced at various geographical levels. Methods of spatial point pattern analysis allow one to measure industry concentration at a continuum of spatial scales. While common distance-based methods are well applicable for sub-national study areas, they become inefficient in measuring concentration at various levels within industrial countries. This particularly applies in testing for conditional concentration where overall manufacturing is used as a reference population. Using Ripley's K function approach to second-order analysis, we propose a subsample similarity test as a feasible testing approach for establishing conditional clustering or dispersion at different spatial scales. For measuring the extent of clustering and dispersion, we introduce a concentration index in the style of Besag's (1977) L function. In contrast to Besag's L function, the new index can be employed to measure deviations of observed from general spatial point patterns. The K function approach is applied illustratively to measuring and testing industry concentration in Germany.
    Keywords: Spatial concentration, clustering, dispersion, spatial point pattern analysis, K function
    JEL: C46 L60 L70 R12
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:mar:magkse:200916&r=ecm
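    The K function at the heart of this approach counts, for each distance r, the pairs of points lying within r of each other, scaled by the point intensity. A naive Python sketch (no edge correction, which any serious application would need; not the authors' implementation):

    ```python
    import numpy as np

    def ripley_k(points, r, area):
        """Naive estimate of Ripley's K at distance r for a 2-D point
        pattern observed in a region of the given area."""
        n = len(points)
        # pairwise Euclidean distances between all points
        d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        pairs = (d < r).sum() - n     # exclude the zero self-distances
        lam = n / area                # estimated intensity
        return pairs / (lam * n)
    ```

    Under complete spatial randomness K(r) is approximately pi * r^2; values above that indicate clustering and values below indicate dispersion, which is what the paper's subsample similarity test formalizes at different spatial scales.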
  21. By: Gibson, Fiona L.; Burton, Michael
    Abstract: Train et al. (1987) identified the measurement bias introduced into parameter estimates of non-linear discrete choice models as a result of using factor analysis. They found that including factor scores, used to represent relationships amongst like variables, in subsequent discrete choice models introduces measurement bias because the measurement error associated with each factor score is excluded. This is an issue for non-market valuation given the increase in popularity of including psychometric data, such as primitive beliefs, attitudes and motivations, in willingness to pay estimates. This study explores the relationship between willingness to pay and primitive beliefs through a case study eliciting Perth community values for drinking recycled wastewater. The standard discrete decision model, with sequential inclusion of factor scores, is compared to an equivalent discrete decision model that corrects for the measurement bias by simultaneously estimating the underlying latent variables using a measurement model. Previous research has focused on the issue of biased parameters. Here we also consider the implications for willingness to pay estimates.
    Keywords: discrete choice models, attitudes, factor analysis, measurement models, recycled wastewater
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ags:aare09:47943&r=ecm
  22. By: Alexandre Petkovic; David Veredas
    Abstract: We study the impact of individual and temporal aggregation in linear static and dynamic models for panel data in terms of model specification and efficiency of the estimated parameters. Model-wise we find that i) individual aggregation does not affect the model structure but temporal aggregation may introduce residual autocorrelation, and ii) individual aggregation entails heteroskedasticity while temporal aggregation does not. Estimation-wise we find that i) in the static model, estimation by least squares with the aggregated data entails a decrease in the efficiency of the estimated parameters but we cannot rank different aggregation schemes in terms of efficiency, and ii) in the dynamic model, estimation by GMM does not necessarily entail a decrease in the efficiency of the estimated parameters under individual aggregation and no analytic comparison can be established for temporal aggregation, though simulations suggest that temporal aggregation deteriorates the accuracy of the estimates.
    Keywords: Panel data, individual aggregation, temporal aggregation, model specification, efficiency.
    JEL: C23 C51 C52
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2009_012&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.