nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒11‒14
25 papers chosen by
Sune Karlsson
Orebro University

  1. Estimating and Forecasting with a Dynamic Spatial Panel Data Model By Badi H. Baltagi; Bernard Fingleton; Alain Pirotte
  2. Optimal Rank-Based Tests for Common Principal Components By Marc Hallin; Davy Paindaveine; Thomas Verdebout
  3. Testing instrument validity for LATE identification based on inequality moment constraints By Huber, Martin; Mellace, Giovanni
  4. Parameter spaces for stationary DGPs in spatial econometric modelling By Matthias Koch
  5. Bayesian Estimation of Generalized Hyperbolic Skewed Student GARCH Models By Philippe J. Deschamps
  6. Prediction intervals in conditionally heteroscedastic time series with stochastic components. By Espasa, Antoni; Pellegrini, Santiago; Ruiz, Esther
  7. Volatility Activity: Specification and Estimation By Viktor Todorov; George Tauchen; Iaryna Grynkiv
  8. Bayesian Inference for the Mixed-Frequency VAR Model By Paul Viefers
  9. Calibration of selfdecomposable Lévy models By Mathias Trabs
  10. Selecting the W Matrix. Parametric vs Nonparametric Approaches By Jesus Mur; Marcos Herrera; Manuel Ruiz
  11. Inverse Realized Laplace Transforms for Nonparametric Volatility Estimation in Jump-Diffusions By Viktor Todorov; George Tauchen
  12. High Dimensional Low Rank and Sparse Covariance Matrix Estimation via Convex Minimization By Xi Luo
  13. An information theoretic approach to ecological inference in presence of spatial heterogeneity and dependence By ROSA BERNARDINI PAPALIA
  14. Local weighting or the necessity of flexibility By Jesus Mur; Antonio Paez
  15. Agent Teams and Evolutionary Computation: Optimizing Semi-Parametric Spatial Autoregressive Models By Tamás Krisztin; Matthias Koch
  16. Estimation error reduction in portfolio optimization with Conditional Value-at-Risk By Noureddine El Karoui; Andrew E. B. Lim; Gah-Yi Vahn
  17. Testing Conditional Factor Models By Andrew Ang; Dennis Kristensen
  18. Identifying Demand with Multidimensional Unobservables: A Random Functions Approach By Jeremy T. Fox; Amit Gandhi
  19. Which Impulse Response Function? By Ronayne, David
  20. A Cautionary Note on Tests for Overidentifying Restrictions By Paulo M.D.C. Parente; Joao M.C. Santos Silva
  21. Evaluating density forecasts: model combination strategies versus the RBNZ By Chris McDonald; Leif Anders Thorsrud
  22. Nowcasting GDP in real-time: A density combination approach By Knut Are Aastveit; Karsten R. Gerdrup; Anne Sofie Jore; Leif Anders Thorsrud
  23. Time series and spatial interaction: An alternative method to detect converging clusters By Stilianos Alexiadis; Matthias Koch; Tamás Krisztin
  24. Generalized empirical likelihood for a continuum of moment conditions By Pierre Chaussé
  25. An Application of the Disequilibrium Adjustment Framework to Small Area Forecasting and Impact Analysis By Geoffrey Hewings; Jae Hong Kim

  1. By: Badi H. Baltagi; Bernard Fingleton; Alain Pirotte
    Abstract: This paper focuses on the estimation and predictive performance of several estimators for the dynamic and autoregressive spatial lag panel data model with spatially correlated disturbances. In the spirit of Arellano and Bond (1991) and Mutl (2006), a dynamic spatial GMM estimator is proposed based on Kapoor, Kelejian and Prucha (2007) for the Spatial AutoRegressive (SAR) error model. The main idea is to mix non-spatial and spatial instruments to obtain consistent estimates of the parameters. Then, a linear predictor of this spatial dynamic model is derived. Using Monte Carlo simulations, we compare the performance of the GMM spatial estimator to that of spatial and non-spatial estimators and illustrate our approach with an application to new economic geography.
    Keywords: Panel data, spatial lag, error components, linear predictor, GMM, spatial autocorrelation
    JEL: C33
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:cep:sercdp:0095&r=ecm
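    A generic member of the model class studied above is a dynamic spatial lag panel with spatially autocorrelated error components; in illustrative notation (ours, not necessarily the authors'):

      y_{it} = \gamma\, y_{i,t-1} + \lambda \sum_j w_{ij}\, y_{jt} + x_{it}'\beta + u_{it}, \qquad u_t = \rho\, W u_t + \varepsilon_t, \qquad \varepsilon_{it} = \mu_i + \nu_{it},

    where W = (w_{ij}) is the spatial weights matrix, \mu_i a random individual effect and \nu_{it} an idiosyncratic shock. Both y_{i,t-1} and the spatial lag are endogenous, which is why the proposed GMM estimator mixes time-lagged (Arellano-Bond type) instruments with spatially lagged ones.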
  2. By: Marc Hallin; Davy Paindaveine; Thomas Verdebout
    Abstract: This paper provides optimal testing procedures for the m-sample null hypothesis of Common Principal Components (CPC) under possibly non-Gaussian and heterogeneous elliptical densities. We first establish, under very mild assumptions that do not require finite moments of order four, the local asymptotic normality (LAN) of the model. Based on that result, we show that the pseudo-Gaussian test proposed in Hallin et al. (2010a) is locally and asymptotically optimal under Gaussian densities. We also show how to compute its local powers and asymptotic relative efficiencies (AREs). A numerical evaluation of those AREs, however, reveals that, while remaining valid, this test is poorly efficient away from the Gaussian. Moreover, it still requires finite moments of order four. We therefore propose rank-based procedures that remain valid under any possibly heterogeneous m-tuple of elliptical densities, irrespective of any moment assumptions—in elliptical families, indeed, principal components naturally can be based on the scatter matrices characterizing the density contours, hence do not require finite variances. Those rank-based tests are not only validity-robust in the sense that they survive arbitrary elliptical population densities: we show that they also are efficiency-robust, in the sense that their local powers do not deteriorate under non-Gaussian alternatives. In the homogeneous case, the normal-score version of our tests uniformly dominates, in the Pitman sense, the optimal pseudo-Gaussian test. Theoretical results are obtained via a nonstandard application of Le Cam's methodology in the context of curved LAN experiments. The finite-sample properties of the proposed tests are investigated through simulations.
    Keywords: Common Principal Components; Rank-Based Methods; Local Asymptotic Normality; Robustness
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/101786&r=ecm
  3. By: Huber, Martin; Mellace, Giovanni
    Abstract: This paper proposes bootstrap tests for the validity of instrumental variables (IV) in just identified treatment effect models with endogeneity. We demonstrate that the IV assumptions required for the identification of the local average treatment effect (LATE) allow us to both point identify and bound the mean potential outcomes (i) of the always takers (those treated irrespective of the instrument) under treatment and (ii) of the never takers (never treated irrespective of the instrument) under non-treatment. The point identified means must lie within their respective bounds, which provides four testable inequality moment constraints for IV validity. Furthermore, we show that a similar logic applies to testing the assumptions needed to identify distributional features (e.g., local quantile treatment effects). Finally, we discuss how testing power can be increased by imposing dominance/equality assumptions on the potential outcome distributions of different subpopulations.
    Keywords: specification test, instrument, treatment effects, LATE, inequality moment constraints.
    JEL: C12 C15 C21
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2011:43&r=ecm
  4. By: Matthias Koch
    Abstract: Unlike the time series literature, the spatial econometric literature has not really dealt with the issue of the parameter space. This paper shows that current parameter space concepts for spatial econometric DGPs are inadequate. It proves that the parameter space proposed by Kelejian and Prucha (2008) can result in nonstationary DGPs, while the parameter space proposed by Lee and Liu (2010) can be too restrictive in applied cases. Furthermore, it argues that the practice of row standardizing lacks a mathematical foundation. Given these problems with the current parameter space concepts, the paper provides a new definition of the spatial econometric parameter space and shows which assumptions are necessary to give row standardizing the required mathematical foundation. Finally, two additional applications of the new parameter space definition are provided, concerning models with group interaction and panels with fixed cross-section sample size. Both applications result in parameter spaces that are substantially larger than the ones the literature has so far considered to be stationary.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1147&r=ecm
  5. By: Philippe J. Deschamps (Department of Quantitative Economics)
    Abstract: Efficient posterior simulators for two GARCH models with generalized hyperbolic disturbances are presented. The first model, GHt-GARCH, is a threshold GARCH with a skewed and heavy-tailed error distribution; in this model, the latent variables that account for skewness and heavy tails are identically and independently distributed. The second model, ODLV-GARCH, is formulated in terms of observation-driven latent variables; it automatically incorporates a risk premium effect. Both models nest the ordinary threshold t-GARCH as a limiting case. The GHt-GARCH and ODLV-GARCH models are compared with each other and with the threshold t-GARCH using five publicly available asset return data sets, by means of Bayes factors, information criteria, and classical forecast evaluation tools. The GHt-GARCH and ODLV-GARCH models both strongly dominate the threshold t-GARCH, and the Bayes factors generally favor GHt-GARCH over ODLV-GARCH. A Markov switching extension of GHt-GARCH is also presented. This extension is found to be an empirical improvement over the single-regime model for one of the five data sets.
    Keywords: Autoregressive conditional heteroskedasticity; Markov chain Monte Carlo; bridge sampling; heavy-tailed skewed distributions; generalized hyperbolic distribution; generalized inverse Gaussian distribution
    JEL: C11 C16 C53
    Date: 2011–10–28
    URL: http://d.repec.org/n?u=RePEc:fri:dqewps:wp0016&r=ecm
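    Both models nest the ordinary threshold t-GARCH, whose single-regime skeleton can be written schematically as (our notation; the generalized hyperbolic skewed Student error distribution and the latent variables described above are layered on top of this):

      r_t = \mu + \sigma_t \varepsilon_t, \qquad \sigma_t^2 = \omega + \big(\alpha + \gamma\, \mathbf{1}\{\varepsilon_{t-1} < 0\}\big)\, \sigma_{t-1}^2 \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2,

    with \varepsilon_t Student-t in the baseline threshold t-GARCH and generalized hyperbolic skewed Student in the GHt-GARCH case.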
  6. By: Espasa, Antoni; Pellegrini, Santiago; Ruiz, Esther
    Abstract: Differencing is a very popular stationary transformation for series with stochastic trends. Moreover, when the differenced series is heteroscedastic, authors commonly model it using an ARMA-GARCH model. The corresponding ARIMA-GARCH model is then used to forecast future values of the original series. However, the heteroscedasticity observed in the stationary transformation should be generated by the transitory and/or the long-run component of the original data. In the former case, the shocks to the variance are transitory and the prediction intervals should converge to homoscedastic intervals with the prediction horizon. We show that, in this case, the prediction intervals constructed from the ARIMA-GARCH models could be inadequate because they never converge to homoscedastic intervals. All of the results are illustrated using simulated and real time series with stochastic levels.
    Keywords: ARIMA-GARCH models; Local level model; Nonlinear time series; State space models; Unobserved component models;
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ner:carlos:info:hdl:10016/12257&r=ecm
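    A minimal example of the mechanism, using the local level model (a schematic special case; the paper covers more general unobserved-component models):

      y_t = \mu_t + \varepsilon_t, \qquad \mu_t = \mu_{t-1} + \eta_t, \qquad \Delta y_t = \eta_t + \varepsilon_t - \varepsilon_{t-1},

    so the differenced series has an IMA(1,1) reduced form. If only the transitory component \varepsilon_t is conditionally heteroscedastic, variance shocks die out and intervals for y_{T+h} should become homoscedastic as h grows, whereas intervals built from an ARIMA-GARCH model fitted to \Delta y_t keep the GARCH effect alive at every horizon.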
  7. By: Viktor Todorov; George Tauchen; Iaryna Grynkiv
    Abstract: The paper examines volatility activity and its asymmetry and undertakes further specification analysis of volatility models based on it. We develop new nonparametric statistics using high-frequency option-based VIX data to test for asymmetry in volatility jumps. We also develop methods to estimate and evaluate, using price data alone, a general encompassing model for volatility dynamics where volatility activity is unrestricted. The nonparametric application to VIX data, along with model estimation for S&P Index returns, suggests that volatility moves are best captured by an infinite-variation pure-jump martingale with a symmetric jump distribution. The latter provides a parsimonious generalization of the jump-diffusions commonly used for volatility modeling.
    Keywords: Asymmetric Volatility Activity, High-Frequency Data, Laplace Transform, Signed Power Variation, Specification Testing, Stochastic Volatility, Volatility Jumps
    JEL: C51 C52 G12
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:11-23&r=ecm
  8. By: Paul Viefers
    Abstract: In this paper a mixed-frequency VAR à la Mariano & Murasawa (2004) with Markov regime switching in the parameters is estimated by Bayesian inference. Unlike earlier studies, which used the pseudo-EM algorithm of Dempster, Laird & Rubin (1977) to estimate the model, this paper describes how to make use of recent advances in Bayesian inference on mixture models. This way, one is able to surmount some well-known issues connected to inference on mixture models, e.g. the label switching problem. The paper features a numerical simulation study to gauge the model's performance in terms of convergence to true parameter values and a small empirical example involving US business cycles.
    Keywords: Markov mixture models, Label switching, Bayesian VAR, Mixed frequencies
    JEL: C32 E32 E37 E51
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1172&r=ecm
  9. By: Mathias Trabs
    Abstract: We study the nonparametric calibration of exponential, self-decomposable Lévy models whose jump density can be characterized by the k-function, which is typically nonsmooth at zero. On the one hand, the estimation of the drift, the activity measure alpha := k(0+) + k(0-), and analogous parameters for the derivatives is considered; on the other hand, we estimate the k-function outside a neighborhood of zero. Minimax convergence rates are derived, which depend on alpha. We therefore construct estimators that adapt to this unknown parameter. Our estimation method is based on spectral representations of the observed option prices and on regularization by cutting off high frequencies. Finally, the procedure is applied to simulations and real data.
    Keywords: adaptation, European option, infinite activity jump process, minimax rates, nonlinear inverse problem, self-decomposability
    JEL: C14 G13
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-073&r=ecm
  10. By: Jesus Mur; Marcos Herrera; Manuel Ruiz
    Abstract: In spatial econometrics, it is customary to specify a weighting matrix, the so-called W matrix, by simply choosing one matrix from the different types a user is considering (Anselin, 2002). In general, this selection is made a priori, depending on the user's judgment. The decision is extremely important: if W is misspecified in some way, parameter estimates are likely to be biased and will be inconsistent in models that contain a spatial lag, and for models without spatial lags but with spatially autocorrelated errors, robust standard error estimates will be incorrect. Goodness-of-fit tests may be used to choose between alternative specifications of W, although, in practice, most users impose a certain W matrix without testing the restrictions that the selected spatial operator implies. In this paper, we aim to establish a nonparametric procedure in which W is chosen by objective criteria. Our proposal is directly related to information theory: the selection criterion we propose is based on objective information in the data, which does not depend on the investigator's subjectivity, namely a measure of conditional entropy. We compare the performance of our criterion against alternatives such as the J test of Davidson and MacKinnon or a likelihood ratio obtained in a maximum likelihood framework.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1055&r=ecm
  11. By: Viktor Todorov; George Tauchen
    Abstract: We develop a nonparametric estimator of the stochastic volatility density of a discretely-observed Ito semimartingale in the setting of an increasing time span and finer mesh of the observation grid. There are two steps. The first is aggregating the high-frequency increments into the realized Laplace transform, which is a robust nonparametric estimate of the underlying volatility Laplace transform. The second step is using a regularized kernel to invert the realized Laplace transform. The two steps are relatively quick and easy to compute, so the nonparametric estimator is practicable. We derive bounds for the mean squared error of the estimator. The regularity conditions are sufficiently general to cover empirically important cases such as level jumps and possible dependencies between volatility moves and either diffusive or jump moves in the semimartingale. Monte Carlo work indicates that the nonparametric estimator is reliable and reasonably accurate in realistic estimation contexts. An empirical application to 5-minute data for three large-cap stocks, 1997-2010, reveals the importance of big short-term volatility spikes in generating high levels of stock price variability over and above that induced by price jumps. The application also shows how to trace out the dynamic response of the volatility density to both positive and negative jumps in the stock price.
    Keywords: Laplace transform, stochastic volatility, ill-posed problems, regularization, nonparametric density estimation, high-frequency data
    JEL: C51 C52 G12
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:11-21&r=ecm
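    The first step uses the realized Laplace transform; in one standard formulation, for increments \Delta_i^n X sampled on a grid with mesh \Delta_n over [0,T],

      \widehat{\mathcal{L}}_T(u) = \Delta_n \sum_{i=1}^{\lfloor T/\Delta_n \rfloor} \cos\!\big(\sqrt{2u}\, \Delta_i^n X / \sqrt{\Delta_n}\big) \;\approx\; \int_0^T e^{-u\, \sigma_s^2}\, ds,

    i.e. an estimate of the Laplace transform of the occupation measure of spot variance; the second step inverts it with a regularized kernel to recover the volatility density.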
  12. By: Xi Luo
    Abstract: This paper introduces a general framework of covariance structures that can be verified in many popular statistical models, such as factor and random effect models. The new structure is a summation of low rank and sparse matrices. We propose a LOw Rank and sparsE Covariance estimator (LOREC) to exploit this general structure in the high-dimensional setting. Analysis of this estimator shows that it recovers exactly the rank and support of the two components respectively. Convergence rates under various norms are also presented. The estimator is computed efficiently using convex optimization. We propose an iterative algorithm, based on Nesterov's method, to solve the optimization criterion. The algorithm is shown to produce a solution within O(1/t^2) of the optimum after t iterations. Numerical performance is illustrated using simulated data and stock portfolio selection on S&P 100.
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1111.1133&r=ecm
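    A minimal numerical sketch of the low-rank-plus-sparse decomposition idea (this is not the authors' LOREC code or their Nesterov-based algorithm; the penalty levels and the plain proximal-gradient scheme below are illustrative choices):

      import numpy as np

      def soft_threshold(A, tau):
          """Entrywise soft-thresholding: proximal operator of the l1 penalty."""
          return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

      def svd_threshold(A, tau):
          """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def low_rank_plus_sparse(S_hat, lam=0.1, rho=0.05, step=0.45, n_iter=300):
          """Minimize 0.5*||S_hat - L - S||_F^2 + lam*||L||_* + rho*||S||_1
          by joint proximal-gradient steps on (L, S)."""
          L = np.zeros_like(S_hat)
          S = np.zeros_like(S_hat)
          for _ in range(n_iter):
              R = L + S - S_hat                      # gradient of the quadratic fit term
              L = svd_threshold(L - step * R, step * lam)
              S = soft_threshold(S - step * R, step * rho)
          return L, S

      # Toy data: two-factor (low-rank) structure plus a diagonal (sparse) part.
      rng = np.random.default_rng(0)
      B = rng.normal(size=(30, 2))
      Sigma = B @ B.T + np.diag(rng.uniform(0.5, 1.5, size=30))
      X = rng.multivariate_normal(np.zeros(30), Sigma, size=500)
      L, S = low_rank_plus_sparse(np.cov(X, rowvar=False))
      print(np.linalg.matrix_rank(L, tol=1e-6), np.count_nonzero(np.abs(S) > 1e-8))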
  13. By: ROSA BERNARDINI PAPALIA
    Abstract: This paper introduces information-theoretic methods for estimating a target variable in a set of small geographical areas, by exploring spatially heterogeneous relationships at the disaggregate level. Controlling for spatial effects means introducing models in which values in adjacent geographic locations are assumed to be linked by some form of underlying spatial relationship. The method offers a flexible framework for modeling the underlying variation in sub-group indicators by addressing the spatial dependency problem. A basic ecological inference problem, which allows for spatial heterogeneity and dependence, is presented with the aim of first estimating the model at the aggregate level, and then employing the estimated coefficients to obtain the sub-group level indicators. Information-theoretic formulations could be a useful means of including spatial and inter-temporal features in analyses of micro-level behavior, and of providing an effective, flexible way of reconciling micro and macro data. A unique optimum solution may be obtained even if there are more parameters to be estimated than available moment conditions and the problem is ill-posed. Additional non-sample information from theory and/or empirical evidence can be introduced in the form of known probabilities by means of the cross-entropy formalism. Consistent estimates in small samples can be computed in the presence of incomplete micro-level data, as well as in the presence of collinearity and endogeneity in the individual local models, without imposing strong distributional assumptions.
    Keywords: Generalized Cross Entropy Estimation, Ecological Inference, Spatial Heterogeneity
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p317&r=ecm
  14. By: Jesus Mur; Antonio Paez
    Abstract: Local estimation algorithms are well-known techniques in the current spatial econometric literature. Geographically weighted regressions are very popular for estimating static models locally, whereas the SALE or Zoom approaches are useful solutions in the case of dynamic models. These techniques are well founded from a methodological point of view and have interesting properties. However, Farber and Paez (2008) detect some inconsistencies in the behavior of some of these algorithms that call for further analysis. The point we study in this paper is the role of the bandwidth, which defines how many neighbors will be used in the estimation of the local parameters corresponding to each observation. Cross-validation is the most popular criterion for fixing the bandwidth, although there are several others in the literature. We think there is a basic problem with this approach: the objective of these algorithms is to relax the restriction of parameter homogeneity by allowing for local peculiarities, yet the definition of the local neighborhood is the same for every observation, whether it corresponds to an isolated and poorly connected region or to a central and highly connected point. In our view, this is a very restrictive decision that should be avoided. Specifically, we discuss the procedure for specifying the sequence of local weighting matrices used in the analysis, with the purpose that these matrices also reflect the local surroundings of each observation. We examine two strategies for constructing the local weighting matrices: the first is a parametric approach that involves the J test, as presented by Kelejian (2008), and the second is a nonparametric approach guided by symbolic entropy measures. The first part of the paper presents the overall problem, including a review of the literature; the second part discusses the solutions; and the third part consists of a Monte Carlo simulation.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p942&r=ecm
  15. By: Tamás Krisztin; Matthias Koch
    Abstract: Classical spatial autoregressive models share a weakness with classical linear regression models: it is not possible to estimate non-linear relationships between the dependent and independent variables. In classical linear regression, a semi-parametric approach can be used to address this issue. We therefore introduce an advanced semi-parametric modelling approach for spatial autoregressive models. Advanced semi-parametric modelling requires determining the best configuration of independent variable vectors and the number of spline knots and their positions. To solve this combinatorial optimization problem, an asynchronous multi-agent system based on genetic algorithms is utilized: three teams of agents each work on a subset of the problem and cooperate by sharing their best solutions. Through this system, more complex relationships between the dependent and independent variables can be derived, which could be better suited to the possibly non-linear real-world problems faced by applied spatial econometricians.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1687&r=ecm
  16. By: Noureddine El Karoui; Andrew E. B. Lim; Gah-Yi Vahn
    Abstract: We investigate two methods for reducing estimation error in portfolio optimization with Conditional Value-at-Risk (CVaR). The first method is nonparametric: penalize portfolios with large variances in mean and CVaR estimations. The penalized problem is solvable by a quadratically-constrained quadratic program, and can be interpreted as a chance-constrained program. We show the original and penalized solutions follow the Central Limit Theorem with computable covariance by extending M-estimation results from statistics. The second method is parametric: solve the empirical Markowitz problem instead if the log-return distribution is in the elliptical family (which includes Gaussian and $t$ distributions), as then the population frontiers of the Markowitz and mean-CVaR problems are equivalent. Numerical simulations show both methods improve upon the empirical mean-CVaR solution under an elliptical model, with the Markowitz solution dominating. The penalized solution dominates under a non-elliptical model with heavy one-sided loss.
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1111.2091&r=ecm
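    For reference, both methods start from the empirical mean-CVaR problem, which rests on the Rockafellar-Uryasev representation (our notation):

      \mathrm{CVaR}_\beta(\ell) = \min_{\alpha} \Big\{ \alpha + \tfrac{1}{1-\beta}\, \mathbb{E}\big[(\ell - \alpha)_+\big] \Big\},

    so with portfolio loss \ell = -w'R and n sampled returns r_1, ..., r_n the sample problem is a linear program in (w, \alpha) with auxiliary variables z_i \ge -w'r_i - \alpha, z_i \ge 0. The first method above penalizes the sampling variability of the estimated mean and CVaR, which turns this into the quadratically-constrained quadratic program mentioned in the abstract.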
  17. By: Andrew Ang; Dennis Kristensen
    Abstract: Using nonparametric techniques, we develop a methodology for estimating conditional alphas and betas and long-run alphas and betas, which are the averages of conditional alphas and betas, respectively, across time. The tests can be performed for a single asset or jointly across portfolios. The traditional Gibbons, Ross, and Shanken (1989) test arises as a special case of no time variation in the alphas and factor loadings and homoskedasticity. As applications of the methodology, we estimate conditional CAPM and multifactor models on book-to-market and momentum decile portfolios. We reject the null that long-run alphas are equal to zero even though there is substantial variation in the conditional factor loadings of these portfolios.
    JEL: C12 C13 C14 C32 G12
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:17561&r=ecm
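    A stripped-down illustration of kernel-weighted estimation of time-varying alphas and betas in the spirit of the entry above (a single factor, a Gaussian kernel in time and an arbitrary bandwidth are illustrative choices, not the authors' estimator):

      import numpy as np

      def conditional_alpha_beta(r, f, bandwidth=20.0):
          """Kernel-weighted least squares of excess returns r on a factor f at each date."""
          T = len(r)
          X = np.column_stack([np.ones(T), f])
          coefs = np.empty((T, 2))
          for t in range(T):
              w = np.exp(-0.5 * ((np.arange(T) - t) / bandwidth) ** 2)  # Gaussian kernel in time
              Xw = X * w[:, None]
              coefs[t] = np.linalg.solve(Xw.T @ X, Xw.T @ r)
          return coefs[:, 0], coefs[:, 1]  # conditional alphas and betas

      # Long-run alpha and beta are then the time averages of the conditional estimates.
      rng = np.random.default_rng(1)
      f = rng.normal(0.0, 0.01, 1000)
      beta_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, 4.0 * np.pi, 1000))
      r = beta_true * f + rng.normal(0.0, 0.005, 1000)
      alphas, betas = conditional_alpha_beta(r, f)
      print(alphas.mean(), betas.mean())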
  18. By: Jeremy T. Fox; Amit Gandhi
    Abstract: We explore the identification of nonseparable models without relying on the property that the model can be inverted in the econometric unobservables. In particular, we allow for infinite dimensional unobservables. In the context of a demand system, this allows each product to have multiple unobservables. We identify the distribution of demand both unconditional and conditional on market observables, which allows us to identify several quantities of economic interest such as the (conditional and unconditional) distributions of elasticities and the distribution of price effects following a merger. Our approach is based on a significant generalization of the linear in random coefficients model that only restricts the random functions to be analytic in the endogenous variables, which is satisfied by several standard demand models used in practice. We assume an (unknown) countable support for the distribution of the infinite dimensional unobservables.
    JEL: C0 L0
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:17557&r=ecm
  19. By: Ronayne, David (University of Warwick)
    Abstract: This paper compares standard and local projection techniques in the production of impulse response functions, both theoretically and empirically. Through careful selection of a structural decomposition, the comparison continues to an application of US data to the textbook IS-LM model. It is argued that local projection techniques offer a remedy to the bias of the conventional method, especially at horizons longer than the vector autoregression's lag length. The application highlights that the techniques can give different answers to important questions.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:971&r=ecm
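    A minimal sketch of the local projection approach to impulse responses discussed above (an already-identified scalar shock series, a fixed lag length and a simple AR data-generating process are illustrative assumptions):

      import numpy as np

      def local_projection_irf(y, shock, horizons=12, n_lags=4):
          """For each horizon h, regress y_{t+h} on shock_t plus lags of y; the shock coefficient is the IRF at h."""
          irf = np.empty(horizons + 1)
          for h in range(horizons + 1):
              rows = range(n_lags, len(y) - h)
              X = np.column_stack(
                  [np.ones(len(rows)), [shock[t] for t in rows]]
                  + [[y[t - l] for t in rows] for l in range(1, n_lags + 1)]
              )
              Y = np.array([y[t + h] for t in rows])
              irf[h] = np.linalg.lstsq(X, Y, rcond=None)[0][1]  # coefficient on shock_t
          return irf

      rng = np.random.default_rng(2)
      shock = rng.normal(size=500)
      y = np.zeros(500)
      for t in range(1, 500):
          y[t] = 0.8 * y[t - 1] + shock[t]      # true response at horizon h is 0.8**h
      print(local_projection_irf(y, shock)[:5])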
  20. By: Paulo M.D.C. Parente (Department of Economics, University of Exeter); Joao M.C. Santos Silva (University of Essex and CEMAPRE)
    Abstract: Tests of overidentifying restrictions are widely used in practice. However, there is often confusion about the nature of their null hypothesis and about the interpretation of their outcome. In this note we argue that these tests give little information on whether the instruments are correlated with the errors of the underlying economic model and on whether they identify parameters of interest.
    Keywords: GMM, Hansen's J-test, Instrumental variables, Sargan test.
    JEL: C12 C13 C51 C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:exe:wpaper:1111&r=ecm
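    For context, the statistic in question is Hansen's J (with the Sargan test as its homoskedastic IV special case): with m moment conditions, k parameters and an efficient weighting matrix,

      J = n\, \bar g_n(\hat\theta)'\, \hat\Omega^{-1}\, \bar g_n(\hat\theta) \;\xrightarrow{d}\; \chi^2_{m-k}

    under the joint null that all moment conditions hold. The note's point is that failing to reject this joint null says little about whether the instruments are uncorrelated with the errors of the underlying economic model, or about whether the parameters of interest are identified.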
  21. By: Chris McDonald; Leif Anders Thorsrud (Reserve Bank of New Zealand)
    Abstract: Forecasting the future path of the economy is essential for good monetary policy decisions. The recent financial crisis has highlighted the importance of tail events, and that assessing the central projection is not enough. The whole range of outcomes should be forecasted, evaluated and accounted for when making monetary policy decisions. As such, we construct density forecasts using the historical performance of the Reserve Bank of New Zealand's (RBNZ) published point forecasts. We compare these implied RBNZ densities to similarly constructed densities from a suite of empirical models. In particular, we compare the implied RBNZ densities to combinations of density forecasts from the models. Our results reveal that the combined densities are comparable in performance and sometimes better than the implied RBNZ densities across many different horizons and variables. We also find that the combination strategies typically perform better than relying on the best model in real time, that is, the selection strategy.
    JEL: C52 C53 E52
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:nzb:nzbdps:2011/03&r=ecm
  22. By: Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Karsten R. Gerdrup (Norges Bank (Central Bank of Norway)); Anne Sofie Jore (Norges Bank (Central Bank of Norway)); Leif Anders Thorsrud (BI Norwegian Business School and Norges Bank (Central Bank of Norway))
    Abstract: In this paper we use U.S. real-time vintage data and produce combined density nowcasts for quarterly GDP growth from a system of three commonly used model classes. The density nowcasts are combined in two steps. First, a wide selection of individual models within each model class are combined separately. Then, the nowcasts from the three model classes are combined into a single predictive density. We update the density nowcast for every new data release throughout the quarter, and highlight the importance of new information for the evaluation period 1990Q2-2010Q3. Our results show that the logarithmic score of the predictive densities for U.S. GDP increases almost monotonically as new information arrives during the quarter. While the best performing model class changes during the quarter, the density nowcasts from our combination framework always perform well, both in terms of logarithmic scores and calibration tests. The density combination approach is superior to a simple model selection strategy and also performs better in terms of point forecast evaluation than standard point forecast combinations.
    Keywords: Density combination, Forecast densities, Forecast evaluation, Monetary policy, Nowcasting, Real-time data
    JEL: C32 C52 E37 E52
    Date: 2011–09–28
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2011_11&r=ecm
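    A stripped-down sketch of recursive log-score weighting for density combination, the generic idea behind this kind of exercise (a simple linear opinion pool with a rolling window; the paper's two-step combination across and within model classes is richer than this):

      import numpy as np

      def combine_densities(pdf_at_outcome, window=20):
          """pdf_at_outcome: (T, M) array of each model's predictive density evaluated at the
          realized outcome. Returns the (T, M) combination weights used in each period."""
          T, M = pdf_at_outcome.shape
          weights = np.full((T, M), 1.0 / M)     # equal weights before any track record exists
          for t in range(1, T):
              lo = max(0, t - window)
              log_scores = np.sum(np.log(pdf_at_outcome[lo:t]), axis=0)
              w = np.exp(log_scores - log_scores.max())
              weights[t] = w / w.sum()           # normalized weights based on recent predictive performance
          return weights

      # Example with two hypothetical models; the first scores better on average.
      rng = np.random.default_rng(3)
      scores = np.column_stack([rng.uniform(0.2, 1.0, 100), rng.uniform(0.1, 0.8, 100)])
      print(combine_densities(scores)[-1])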
  23. By: Stilianos Alexiadis; Matthias Koch; Tamás Krisztin
    Abstract: In this paper an attempt is made to assess the hypothesis of regional club-convergence, using a spatial panel analysis combined with B-splines. In this context, a ‘convergence club’ is conceived as a group of regions that in the long run move towards a steady-state equilibrium, approximated in terms of average per-capita income. Using data for the US states over the period 1929-2005, a pattern of club-convergence is detected. The ‘cluster’ of converging states is rather limited and a strong spatial component is detected.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1678&r=ecm
  24. By: Pierre Chaussé (Department of Economics, University of Waterloo)
    Abstract: This paper extends the generalized empirical likelihood method to the case in which the moment conditions are defined on a continuum (CGEL). We show, for the iid case, that CGEL is asymptotically equivalent at the first order to the generalized method of moments for a continuum (CGMM) developed by Carrasco and Florens (2000). Because the system of equations that we need to solve becomes singular when the number of moment conditions converges to infinity, we treat CGEL as a nonlinear ill-posed problem and obtain the solution using the regularized Gauss-Newton method. This numerical algorithm is a fast and relatively easy way to compute the regularized Tikhonov solution to nonlinear ill-posed problems in function spaces. In order to compare the properties of CGEL and CGMM, we then perform a numerical study in which we estimate the parameters of a stable distribution using moment conditions based on the characteristic function. The results show that CGEL outperforms CGMM in most cases according to the root mean squared error criterion.
    JEL: C13 C30
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:1104&r=ecm
  25. By: Geoffrey Hewings; Jae Hong Kim
    Abstract: The disequilibrium adjustment framework, pioneered by Carlino & Mills (1987) and further extended by Boarnet (1994a), has been widely adopted in regional and intra-regional studies for 1) determining whether jobs follow people, people follow jobs, or both; 2) examining the determinants of growth or location decisions; and 3) investigating spread versus backwash effects. Beyond these traditional uses, this chapter presents the idea of using the model for small-area population and employment forecasting and impact analysis. An application using data for the Chicago metropolitan area reveals that the framework, capturing spatial population-employment interaction and adjustment processes, can be a powerful small-area forecasting and impact analysis tool when combined with a regional economic forecasting method. In particular, the spatial econometric specification of the model facilitates the integration of horizontal (across spatial units) as well as vertical (over the hierarchy; macro and sub-regional) dimensions into the analysis of change. This study also discusses some theoretical issues and methodological challenges in this type of application.
    Keywords: Small-area Forecasting, Spatial Adjustment, Econometric Input-Output Model
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1839&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.