
on Econometrics 
By:  Badi H. Baltagi; Bernard Fingleton; Alain Pirotte 
Abstract:  This paper focuses on the estimation and predictive performance of several estimators for the dynamic and autoregressive spatial lag panel data model with spatially correlated disturbances. In the spirit of Arellano and Bond (1991) and Mutl (2006), a dynamic spatial GMM estimator is proposed based on Kapoor, Kelejian and Prucha (2007) for the Spatial AutoRegressive (SAR) error model. The main idea is to mix nonspatial and spatial instruments to obtain consistent estimates of the parameters. Then, a linear predictor of this spatial dynamic model is derived. Using Monte Carlo simulations, we compare the performance of the GMM spatial estimator to that of spatial and nonspatial estimators and illustrate our approach with an application to new economic geography. 
Keywords:  Panel data, spatial lag, error components, linear predictor, GMM, spatial autocorrelation 
JEL:  C33 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:cep:sercdp:0095&r=ecm 
By:  Marc Hallin; Davy Paindaveine; Thomas Verdebout 
Abstract:  This paper provides optimal testing procedures for the m-sample null hypothesis of Common Principal Components (CPC) under possibly non-Gaussian and heterogeneous elliptical densities. We first establish, under very mild assumptions that do not require finite moments of order four, the local asymptotic normality (LAN) of the model. Based on that result, we show that the pseudo-Gaussian test proposed in Hallin et al. (2010a) is locally and asymptotically optimal under Gaussian densities. We also show how to compute its local powers and asymptotic relative efficiencies (AREs). A numerical evaluation of those AREs, however, reveals that, while remaining valid, this test is poorly efficient away from the Gaussian. Moreover, it still requires finite moments of order four. We therefore propose rank-based procedures that remain valid under any possibly heterogeneous m-tuple of elliptical densities, irrespective of any moment assumptions: in elliptical families, indeed, principal components naturally can be based on the scatter matrices characterizing the density contours, hence do not require finite variances. Those rank-based tests are not only validity-robust in the sense that they survive arbitrary elliptical population densities: we show that they also are efficiency-robust, in the sense that their local powers do not deteriorate under non-Gaussian alternatives. In the homogeneous case, the normal-score version of our tests uniformly dominates, in the Pitman sense, the optimal pseudo-Gaussian test. Theoretical results are obtained via a nonstandard application of Le Cam's methodology in the context of curved LAN experiments. The finite-sample properties of the proposed tests are investigated through simulations. 
Keywords:  Common Principal Components; Rank-Based Methods; Local Asymptotic Normality; Robustness 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/101786&r=ecm 
By:  Huber, Martin; Mellace, Giovanni 
Abstract:  This paper proposes bootstrap tests for the validity of instrumental variables (IV) in just-identified treatment effect models with endogeneity. We demonstrate that the IV assumptions required for the identification of the local average treatment effect (LATE) allow us to both point identify and bound the mean potential outcomes (i) of the always takers (those treated irrespective of the instrument) under treatment and (ii) of the never takers (never treated irrespective of the instrument) under non-treatment. The point-identified means must lie within their respective bounds, which provides four testable inequality moment constraints for IV validity. Furthermore, we show that a similar logic applies to testing the assumptions needed to identify distributional features (e.g., local quantile treatment effects). Finally, we discuss how testing power can be increased by imposing dominance/equality assumptions on the potential outcome distributions of different subpopulations. 
Keywords:  specification test, instrument, treatment effects, LATE, inequality moment constraints. 
JEL:  C12 C15 C21 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2011:43&r=ecm 
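The constraint described in the Huber-Mellace abstract above can be sketched numerically. The following is a minimal illustration (not the authors' bootstrap test): the always-takers' mean outcome is point identified from the (Z=0, D=1) cell, while trimming the (Z=1, D=1) cell — a mixture of always-takers and compliers — yields bounds it must lie within. All simulation parameters and population shares are hypothetical.

```python
import random
import statistics

def always_taker_bounds_check(y, d, z):
    """Check one pair of the testable constraints: the point-identified
    always-taker mean E[Y | D=1, Z=0] must lie within trimming bounds
    computed from the (Z=1, D=1) group (a mixture of always-takers
    and compliers)."""
    y11 = sorted(yi for yi, di, zi in zip(y, d, z) if di == 1 and zi == 1)
    y10 = [yi for yi, di, zi in zip(y, d, z) if di == 1 and zi == 0]
    p0 = len(y10) / sum(1 for zi in z if zi == 0)   # P(D=1 | Z=0)
    p1 = len(y11) / sum(1 for zi in z if zi == 1)   # P(D=1 | Z=1)
    q = p0 / p1                                     # always-taker share in (Z=1, D=1)
    k = max(1, round(q * len(y11)))
    lower = statistics.mean(y11[:k])                # mean of the lowest q-fraction
    upper = statistics.mean(y11[-k:])               # mean of the highest q-fraction
    point = statistics.mean(y10)                    # point-identified always-taker mean
    return lower, point, upper

# Simulate data in which the instrument is valid: the constraint should hold.
random.seed(1)
y, d, z = [], [], []
for _ in range(20000):
    typ = random.choices(["always", "never", "complier"], [0.3, 0.3, 0.4])[0]
    zi = random.randint(0, 1)
    di = 1 if typ == "always" or (typ == "complier" and zi == 1) else 0
    base = {"always": 1.0, "never": -1.0, "complier": 0.0}[typ]
    y.append(base + 0.5 * di + random.gauss(0, 1))
    d.append(di)
    z.append(zi)

lo, pt, hi = always_taker_bounds_check(y, d, z)
print(lo <= pt <= hi)  # no evidence against IV validity in this sample
```

A violation of `lo <= pt <= hi` beyond sampling error would signal an invalid instrument; the paper's contribution is turning this into a formal bootstrap test.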
By:  Matthias Koch 
Abstract:  Unlike the time series literature, the spatial econometric literature has not really dealt with the issue of the parameter space. This paper shows that current parameter space concepts for spatial econometric DGPs are inadequate. It proves that the parameter space proposed by Kelejian and Prucha (2008) can result in nonstationary DGPs, while the parameter space proposed by Lee and Liu (2010) can be too restrictive in applied cases. Furthermore, it is argued that the practice of row standardizing lacks a mathematical foundation. Because of these problems with the current parameter space concepts, this paper provides a new definition of the spatial econometric parameter space and shows which assumptions are necessary to give row standardizing the needed mathematical foundation. Finally, two additional applications of the new parameter space definition, concerning models with group interaction and panels with fixed cross-section sample size, are provided. Both applications result in parameter spaces that are substantially larger than the ones the literature has so far considered to be stationary. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1147&r=ecm 
By:  Philippe J. Deschamps (Department of Quantitative Economics) 
Abstract:  Efficient posterior simulators for two GARCH models with generalized hyperbolic disturbances are presented. The first model, GHt-GARCH, is a threshold GARCH with a skewed and heavy-tailed error distribution; in this model, the latent variables that account for skewness and heavy tails are identically and independently distributed. The second model, ODLV-GARCH, is formulated in terms of observation-driven latent variables; it automatically incorporates a risk premium effect. Both models nest the ordinary threshold t-GARCH as a limiting case. The GHt-GARCH and ODLV-GARCH models are compared with each other and with the threshold t-GARCH using five publicly available asset return data sets, by means of Bayes factors, information criteria, and classical forecast evaluation tools. The GHt-GARCH and ODLV-GARCH models both strongly dominate the threshold t-GARCH, and the Bayes factors generally favor GHt-GARCH over ODLV-GARCH. A Markov switching extension of GHt-GARCH is also presented. This extension is found to be an empirical improvement over the single-regime model for one of the five data sets. 
Keywords:  Autoregressive conditional heteroskedasticity; Markov chain Monte Carlo; bridge sampling; heavy-tailed skewed distributions; generalized hyperbolic distribution; generalized inverse Gaussian distribution 
JEL:  C11 C16 C53 
Date:  2011–10–28 
URL:  http://d.repec.org/n?u=RePEc:fri:dqewps:wp0016&r=ecm 
By:  Espasa, Antoni; Pellegrini, Santiago; Ruiz, Esther 
Abstract:  Differencing is a very popular stationary transformation for series with stochastic trends. Moreover, when the differenced series is heteroscedastic, authors commonly model it using an ARMA-GARCH model. The corresponding ARIMA-GARCH model is then used to forecast future values of the original series. However, the heteroscedasticity observed in the stationary transformation should be generated by the transitory and/or the long-run component of the original data. In the former case, the shocks to the variance are transitory and the prediction intervals should converge to homoscedastic intervals with the prediction horizon. We show that, in this case, the prediction intervals constructed from the ARIMA-GARCH models could be inadequate because they never converge to homoscedastic intervals. All of the results are illustrated using simulated and real time series with stochastic levels. 
Keywords:  ARIMA-GARCH models; Local level model; Nonlinear time series; State space models; Unobserved component models 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:carlos:info:hdl:10016/12257&r=ecm 
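The mechanics behind the Espasa-Pellegrini-Ruiz argument above can be sketched as follows. For an ARIMA(0,1,0)-GARCH(1,1) model, the per-step conditional variance forecast mean-reverts to the unconditional variance at rate (alpha+beta), and the h-step interval width for the level cumulates these per-step variances. This is only the standard interval construction the paper critiques, not the paper's unobserved-components alternative; all numeric parameter values are illustrative.

```python
def garch_variance_path(omega, alpha, beta, sigma2_next, horizon):
    """h-step-ahead GARCH(1,1) conditional variance forecasts:
    sigma2_{t+h|t} = sigma2_bar + (alpha+beta)**(h-1) * (sigma2_{t+1|t} - sigma2_bar),
    mean-reverting to the unconditional variance sigma2_bar = omega/(1-alpha-beta)."""
    sigma2_bar = omega / (1 - alpha - beta)
    return [sigma2_bar + (alpha + beta) ** (h - 1) * (sigma2_next - sigma2_bar)
            for h in range(1, horizon + 1)]

def random_walk_interval_width(path, z=1.96):
    """For an ARIMA(0,1,0)-GARCH level forecast, the h-step forecast variance
    is the cumulative sum of the per-step conditional variances."""
    total, widths = 0.0, []
    for s2 in path:
        total += s2
        widths.append(2 * z * total ** 0.5)
    return widths

# Illustrative parameters; start from a variance shock well above the mean.
path = garch_variance_path(omega=0.05, alpha=0.08, beta=0.9, sigma2_next=6.0, horizon=200)
widths = random_walk_interval_width(path)
print(abs(path[-1] - 2.5) < 0.1)  # per-step variance reverts toward 0.05/0.02 = 2.5
```

Note that even though each per-step variance reverts to its unconditional level, the starting shock permanently shifts the cumulated level-forecast variance, which is related to the inadequacy the paper documents.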
By:  Viktor Todorov; George Tauchen; Iaryna Grynkiv 
Abstract:  The paper examines volatility activity and its asymmetry and undertakes further specification analysis of volatility models based on it. We develop new nonparametric statistics using high-frequency option-based VIX data to test for asymmetry in volatility jumps. We also develop methods to estimate and evaluate, using price data alone, a general encompassing model for volatility dynamics where volatility activity is unrestricted. The nonparametric application to VIX data, along with model estimation for S&P Index returns, suggests that volatility moves are best captured by an infinite-variation pure-jump martingale with a symmetric jump distribution. The latter provides a parsimonious generalization of the jump-diffusions commonly used for volatility modeling. 
Keywords:  Asymmetric Volatility Activity, High-Frequency Data, Laplace Transform, Signed Power Variation, Specification Testing, Stochastic Volatility, Volatility Jumps 
JEL:  C51 C52 G12 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:duk:dukeec:1123&r=ecm 
By:  Paul Viefers 
Abstract:  In this paper a mixed-frequency VAR à la Mariano & Murasawa (2004) with Markov regime switching in the parameters is estimated by Bayesian inference. Unlike earlier studies, which used the pseudo-EM algorithm of Dempster, Laird & Rubin (1977) to estimate the model, this paper describes how to make use of recent advances in Bayesian inference on mixture models. In this way, one is able to surmount some well-known issues connected to inference on mixture models, e.g. the label switching problem. The paper features a numerical simulation study to gauge the model's performance in terms of convergence to true parameter values and a small empirical example involving US business cycles. 
Keywords:  Markov mixture models, Label switching, Bayesian VAR, Mixed frequencies 
JEL:  C32 E32 E37 E51 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1172&r=ecm 
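To make the label switching problem mentioned above concrete: mixture-model likelihoods are invariant to permuting component labels, so labels can swap between MCMC iterations. The simplest classical remedy — not the more recent methods the paper draws on — is an ordering (identifiability) constraint applied as post-processing, sketched here with hypothetical draws.

```python
def relabel_by_ordering(draws):
    """Post-process MCMC draws from a mixture model by imposing an ordering
    constraint: within each draw, sort components by their mean so that
    'component 1' refers to the same component in every iteration."""
    return [sorted(draw, key=lambda comp: comp["mean"]) for draw in draws]

# Two draws whose component labels have switched between iterations.
draws = [
    [{"mean": -1.0, "weight": 0.40}, {"mean": 2.0, "weight": 0.60}],
    [{"mean": 2.1, "weight": 0.58}, {"mean": -0.9, "weight": 0.42}],  # switched labels
]
fixed = relabel_by_ordering(draws)
print([d[0]["mean"] for d in fixed])  # → [-1.0, -0.9]: component 1 is consistent again
```

Ordering constraints can distort posteriors when components overlap, which is one reason permutation-based approaches are preferred in the recent literature the paper cites.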
By:  Mathias Trabs 
Abstract:  We study the nonparametric calibration of exponential, self-decomposable Lévy models whose jump density can be characterized by the k-function, which is typically nonsmooth at zero. On the one hand, the estimation of the drift, the activity measure alpha := k(0+) + k(0-), and analogous parameters for the derivatives is considered; on the other hand, we estimate the k-function outside of a neighborhood of zero. Minimax convergence rates are derived, which depend on alpha. Therefore, we construct estimators adapting to this unknown parameter. Our estimation method is based on spectral representations of the observed option prices and on regularization by cutting off high frequencies. Finally, the procedure is applied to simulations and real data. 
Keywords:  adaptation, European option, infinite activity jump process, minimax rates, nonlinear inverse problem, self-decomposability 
JEL:  C14 G13 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011073&r=ecm 
By:  Jesus Mur; Marcos Herrera; Manuel Ruiz 
Abstract:  In spatial econometrics, it is customary to specify a weighting matrix, the so-called W matrix, by just choosing one matrix from the different types of matrices a user is considering (Anselin, 2002). In general, this selection is made a priori, depending on the user's judgment. This decision is extremely important because, if matrix W is misspecified in some way, parameter estimates are likely to be biased and they will be inconsistent in models that contain a spatial lag. Also, for models without spatial lags but where the random terms are spatially autocorrelated, robust estimates of the standard errors will be incorrect if W is misspecified. Goodness-of-fit tests may be used to choose between alternative specifications of W, although, in practice, most users impose a certain W matrix without testing for the restrictions that the selected spatial operator implies. In this paper, we aim to establish a nonparametric procedure where W is chosen by objective criteria. Our proposal is directly related to the Theory of Information. Specifically, the selection criterion that we propose is based on objective information existing in the data, which does not depend on the investigator's subjectivity: it is a measure of conditional entropy. We compare the performance of our criterion against other alternatives such as the J test of Davidson and MacKinnon or a likelihood ratio obtained in a maximum likelihood framework. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1055&r=ecm 
By:  Viktor Todorov; George Tauchen 
Abstract:  We develop a nonparametric estimator of the stochastic volatility density of a discretely observed Itô semimartingale in the setting of an increasing time span and finer mesh of the observation grid. There are two steps. The first is aggregating the high-frequency increments into the realized Laplace transform, which is a robust nonparametric estimate of the underlying volatility Laplace transform. The second step is using a regularized kernel to invert the realized Laplace transform. The two steps are relatively quick and easy to compute, so the nonparametric estimator is practicable. We derive bounds for the mean squared error of the estimator. The regularity conditions are sufficiently general to cover empirically important cases such as level jumps and possible dependencies between volatility moves and either diffusive or jump moves in the semimartingale. Monte Carlo work indicates that the nonparametric estimator is reliable and reasonably accurate in realistic estimation contexts. An empirical application to 5-minute data for three large-cap stocks, 1997–2010, reveals the importance of big short-term volatility spikes in generating high levels of stock price variability over and above that induced by price jumps. The application also shows how to trace out the dynamic response of the volatility density to both positive and negative jumps in the stock price. 
Keywords:  Laplace transform, stochastic volatility, ill-posed problems, regularization, nonparametric density estimation, high-frequency data 
JEL:  C51 C52 G12 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:duk:dukeec:1121&r=ecm 
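The first step described above — aggregating high-frequency increments into the realized Laplace transform — has a simple form that can be sketched directly. The sanity check below uses simulated Brownian increments with constant volatility, for which the population value is exp(-u*sigma^2); the sampling scheme and parameter values are purely illustrative.

```python
import math
import random

def realized_laplace_transform(increments, delta, u):
    """Realized Laplace transform of volatility:
    (1/n) * sum_i cos(sqrt(2u) * dx_i / sqrt(delta)),
    a nonparametric estimate of the time average of exp(-u * sigma_t^2)."""
    a = math.sqrt(2.0 * u)
    root_delta = math.sqrt(delta)
    n = len(increments)
    return sum(math.cos(a * dx / root_delta) for dx in increments) / n

# With constant volatility sigma, increments are sigma*sqrt(delta)*Z, Z ~ N(0,1),
# and E[cos(sqrt(2u)*sigma*Z)] = exp(-u * sigma**2).
random.seed(0)
sigma, delta, u = 0.8, 1.0 / 78, 1.0
increments = [sigma * math.sqrt(delta) * random.gauss(0, 1) for _ in range(200000)]
est = realized_laplace_transform(increments, delta, u)
print(abs(est - math.exp(-u * sigma ** 2)) < 0.02)  # True
```

The cosine aggregation is what makes the estimate robust to price jumps, since a single large increment contributes a bounded term; the paper's second step (regularized inversion) is substantially more involved and is not sketched here.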
By:  Xi Luo 
Abstract:  This paper introduces a general framework of covariance structures that can be verified in many popular statistical models, such as factor and random effect models. The new structure is a summation of low rank and sparse matrices. We propose a LOw Rank and sparsE Covariance estimator (LOREC) to exploit this general structure in the high-dimensional setting. Analysis of this estimator shows that it recovers exactly the rank and support of the two components respectively. Convergence rates under various norms are also presented. The estimator is computed efficiently using convex optimization. We propose an iterative algorithm, based on Nesterov's method, to solve the optimization criterion. The algorithm is shown to produce a solution within O(1/t^2) of the optimal, after any finite t iterations. Numerical performance is illustrated using simulated data and stock portfolio selection on S&P 100. 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1111.1133&r=ecm 
By:  Rosa Bernardini Papalia 
Abstract:  This paper introduces Information Theoretic-based methods for estimating a target variable in a set of small geographical areas, by exploring spatially heterogeneous relationships at the disaggregate level. Controlling for spatial effects means introducing models whereby the assumption is that values in adjacent geographic locations are linked to each other by means of some form of underlying spatial relationship. This method offers a flexible framework for modeling the underlying variation in subgroup indicators, by addressing the spatial dependency problem. A basic ecological inference problem, which allows for spatial heterogeneity and dependence, is presented with the aim of first estimating the model at the aggregate level, and then of employing the estimated coefficients to obtain the subgroup level indicators. The Information Theoretic-based formulations could be a useful means of including spatial and intertemporal features in analyses of micro-level behavior, and of providing an effective, flexible way of reconciling micro and macro data. A unique optimum solution may be obtained even if there are more parameters to be estimated than available moment conditions and the problem is ill-posed. Additional nonsample information from theory and/or empirical evidence can be introduced in the form of known probabilities by means of the cross-entropy formalism. Consistent estimates in small samples can be computed in the presence of incomplete micro-level data as well as in the presence of problems of collinearity and endogeneity in the individual local models, without imposing strong distributional assumptions. 
Keywords:  Generalized Cross Entropy Estimation, Ecological Inference, Spatial Heterogeneity 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p317&r=ecm 
By:  Jesus Mur; Antonio Paez 
Abstract:  Local estimation algorithms are well-known techniques in the current spatial econometric literature. Geographically Weighted Regressions are very popular for estimating static models locally, whereas the SALE or Zoom approaches are useful solutions in the case of dynamic models. These techniques are well founded from a methodological point of view and present interesting properties. However, Farber and Paez (2008) detect some inconsistencies in the behavior of some of these algorithms that call for further analysis. The point that we want to study in this paper refers to the role of the bandwidth. This measure defines how many neighbors will be used in the estimation of the local parameters corresponding to each observation. Cross-validation is the most popular criterion for fixing the bandwidth, although there are several other criteria in the literature. We think that there is a basic problem with this approach. The objective of these algorithms is to relax the restriction of homogeneity of the parameters of the model by allowing for local peculiarities; however, the definition of the local neighborhood is the same for every observation. It does not matter whether the observation corresponds to an isolated and poorly communicated region or belongs to a central and highly connected point. In our view, this is a very restrictive decision that should be avoided. Specifically, we discuss the procedure of specifying the sequence of local weighting matrices that will be used in the analysis. Our purpose is to ensure that these matrices also reflect the local surroundings of each observation. We examine two different strategies for constructing the local weighting matrices. The first is a parametric approach which involves the J test, as presented by Kelejian (2008), and the second is a nonparametric approach that uses the guidance of symbolic entropy measures. 
The first part of the paper presents the overall problem, including a review of the literature; we discuss the solutions in the second part and the third part consists of a Monte Carlo simulation. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p942&r=ecm 
By:  Tamás Krisztin; Matthias Koch 
Abstract:  Classical spatial autoregressive models share the same weakness as classical linear regression models, namely that it is not possible to estimate nonlinear relationships between the dependent and independent variables. In the case of classical linear regression, a semiparametric approach can be used to address this issue. We therefore introduce an advanced semiparametric modelling approach for spatial autoregressive models. Advanced semiparametric modelling requires determining the best configuration of independent variable vectors and the number of spline knots and their positions. To solve this combinatorial optimization problem, an asynchronous multi-agent system based on genetic algorithms is utilized. Three teams of agents each work on a subset of the problem and cooperate by sharing their best solutions. Through this system, more complex relationships between the dependent and independent variables can be derived. These could be better suited to the possibly nonlinear real-world problems faced by applied spatial econometricians. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1687&r=ecm 
By:  Noureddine El Karoui; Andrew E. B. Lim; GahYi Vahn 
Abstract:  We investigate two methods for reducing estimation error in portfolio optimization with Conditional Value-at-Risk (CVaR). The first method is nonparametric: penalize portfolios with large variances in mean and CVaR estimations. The penalized problem is solvable by a quadratically constrained quadratic program, and can be interpreted as a chance-constrained program. We show the original and penalized solutions follow the Central Limit Theorem with computable covariance by extending M-estimation results from statistics. The second method is parametric: solve the empirical Markowitz problem instead if the log-return distribution is in the elliptical family (which includes Gaussian and $t$ distributions), as then the population frontiers of the Markowitz and mean-CVaR problems are equivalent. Numerical simulations show both methods improve upon the empirical mean-CVaR solution under an elliptical model, with the Markowitz solution dominating. The penalized solution dominates under a non-elliptical model with heavy one-sided loss. 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1111.2091&r=ecm 
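For readers unfamiliar with CVaR, the plug-in estimator whose sampling error the El Karoui-Lim-Vahn paper penalizes can be sketched in a few lines: the empirical CVaR at level alpha is the average of the worst alpha-fraction of losses. This is only the basic estimator, not the penalized optimization the paper proposes.

```python
def empirical_cvar(losses, alpha):
    """Empirical CVaR at level alpha: the average of the worst alpha-fraction
    of the observed losses (discrete version of the usual tail-expectation
    representation)."""
    n = len(losses)
    k = max(1, round(alpha * n))            # number of tail observations
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

losses = list(range(1, 101))                # illustrative losses 1..100
print(empirical_cvar(losses, 0.10))         # mean of 91..100 -> 95.5
```

Because only the k largest losses enter, the estimate has a high variance in small samples, which is precisely the estimation error the paper's penalization is designed to control.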
By:  Andrew Ang; Dennis Kristensen 
Abstract:  Using nonparametric techniques, we develop a methodology for estimating conditional alphas and betas and long-run alphas and betas, which are the averages of conditional alphas and betas, respectively, across time. The tests can be performed for a single asset or jointly across portfolios. The traditional Gibbons, Ross, and Shanken (1989) test arises as a special case of no time variation in the alphas and factor loadings and homoskedasticity. As applications of the methodology, we estimate conditional CAPM and multifactor models on book-to-market and momentum decile portfolios. We reject the null that long-run alphas are equal to zero even though there is substantial variation in the conditional factor loadings of these portfolios. 
JEL:  C12 C13 C14 C32 G12 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:17561&r=ecm 
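The core idea of the Ang-Kristensen abstract — a factor loading estimated locally in time with kernel weights — can be sketched as follows. This is a simplified locally constant estimator with a Gaussian kernel, not the authors' exact estimator; the bandwidth and simulation parameters are hypothetical.

```python
import math
import random

def conditional_beta(returns, factor, t0, bandwidth):
    """Kernel-weighted estimate of the conditional beta at time t0: a Gaussian
    kernel in time downweights observations far from t0, so the slope is a
    local, time-varying factor loading."""
    w = [math.exp(-0.5 * ((t - t0) / bandwidth) ** 2) for t in range(len(returns))]
    sw = sum(w)
    fbar = sum(wi * fi for wi, fi in zip(w, factor)) / sw
    rbar = sum(wi * ri for wi, ri in zip(w, returns)) / sw
    num = sum(wi * (fi - fbar) * (ri - rbar) for wi, fi, ri in zip(w, factor, returns))
    den = sum(wi * (fi - fbar) ** 2 for wi, fi in zip(w, factor))
    return num / den

# With a constant true beta of 1.5, the local estimate should recover it.
random.seed(2)
factor = [random.gauss(0, 1) for _ in range(2000)]
returns = [1.5 * f + random.gauss(0, 0.5) for f in factor]
beta_hat = conditional_beta(returns, factor, t0=1000, bandwidth=200)
print(abs(beta_hat - 1.5) < 0.1)  # True
```

Averaging such local estimates over t0 gives a long-run beta in the sense of the abstract; the paper's contribution is the joint inference theory for these quantities.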
By:  Jeremy T. Fox; Amit Gandhi 
Abstract:  We explore the identification of nonseparable models without relying on the property that the model can be inverted in the econometric unobservables. In particular, we allow for infinite dimensional unobservables. In the context of a demand system, this allows each product to have multiple unobservables. We identify the distribution of demand both unconditional and conditional on market observables, which allows us to identify several quantities of economic interest such as the (conditional and unconditional) distributions of elasticities and the distribution of price effects following a merger. Our approach is based on a significant generalization of the linear in random coefficients model that only restricts the random functions to be analytic in the endogenous variables, which is satisfied by several standard demand models used in practice. We assume an (unknown) countable support for the distribution of the infinite dimensional unobservables. 
JEL:  C0 L0 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:17557&r=ecm 
By:  Ronayne, David (University of Warwick) 
Abstract:  This paper compares standard and local projection techniques in the production of impulse response functions, both theoretically and empirically. Through careful selection of a structural decomposition, the comparison is extended to an application of US data to the textbook IS-LM model. It is argued that local projection techniques offer a remedy to the bias of the conventional method, especially at horizons longer than the vector autoregression's lag length. The application highlights that the two techniques can give different answers to important questions. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:wrk:warwec:971&r=ecm 
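The local projection technique compared above can be sketched in its simplest univariate form: for each horizon h, run a separate regression of y_{t+h} on y_t, rather than iterating a fitted VAR forward. This stripped-down version (no controls, no structural identification) is only meant to show the mechanics; for an AR(1) the population response at horizon h is rho^h, which the sketch recovers.

```python
import random

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x (with intercept)."""
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

def local_projection_irf(y, horizon):
    """Local projection impulse response: at each horizon h, the slope from a
    separate direct regression of y_{t+h} on y_t."""
    return [ols_slope(y[:len(y) - h], y[h:]) if h > 0 else 1.0
            for h in range(horizon + 1)]

# Simulate an AR(1) with coefficient rho; the true response at horizon h is rho**h.
random.seed(3)
rho, y = 0.5, [0.0]
for _ in range(20000):
    y.append(rho * y[-1] + random.gauss(0, 1))
irf = local_projection_irf(y, horizon=3)
print(abs(irf[3] - rho ** 3) < 0.03)  # True
```

The iterated-VAR alternative would compute rho_hat**h from a single estimated coefficient, compounding any bias in rho_hat at long horizons, which is the contrast the paper examines.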
By:  Paulo M.D.C. Parente (Department of Economics, University of Exeter); Joao M.C. Santos Silva (University of Essex and CEMAPRE) 
Abstract:  Tests of overidentifying restrictions are widely used in practice. However, there is often confusion about the nature of their null hypothesis and about the interpretation of their outcome. In this note we argue that these tests give little information on whether the instruments are correlated with the errors of the underlying economic model and on whether they identify parameters of interest. 
Keywords:  GMM, Hansen's J-test, Instrumental variables, Sargan test. 
JEL:  C12 C13 C51 C52 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:exe:wpaper:1111&r=ecm 
By:  Chris McDonald; Leif Anders Thorsrud (Reserve Bank of New Zealand) 
Abstract:  Forecasting the future path of the economy is essential for good monetary policy decisions. The recent financial crisis has highlighted the importance of tail events, and that assessing the central projection is not enough. The whole range of outcomes should be forecasted, evaluated and accounted for when making monetary policy decisions. As such, we construct density forecasts using the historical performance of the Reserve Bank of New Zealand's (RBNZ) published point forecasts. We compare these implied RBNZ densities to similarly constructed densities from a suite of empirical models. In particular, we compare the implied RBNZ densities to combinations of density forecasts from the models. Our results reveal that the combined densities are comparable in performance and sometimes better than the implied RBNZ densities across many different horizons and variables. We also find that the combination strategies typically perform better than relying on the best model in real time, that is, the selection strategy. 
JEL:  C52 C53 E52 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:nzb:nzbdps:2011/03&r=ecm 
By:  Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Karsten R. Gerdrup (Norges Bank (Central Bank of Norway)); Anne Sofie Jore (Norges Bank (Central Bank of Norway)); Leif Anders Thorsrud (BI Norwegian Business School and Norges Bank (Central Bank of Norway)) 
Abstract:  In this paper we use U.S. real-time vintage data and produce combined density nowcasts for quarterly GDP growth from a system of three commonly used model classes. The density nowcasts are combined in two steps. First, a wide selection of individual models within each model class are combined separately. Then, the nowcasts from the three model classes are combined into a single predictive density. We update the density nowcast for every new data release throughout the quarter, and highlight the importance of new information for the evaluation period 1990Q2–2010Q3. Our results show that the logarithmic score of the predictive densities for U.S. GDP increases almost monotonically as new information arrives during the quarter. While the best performing model class changes during the quarter, the density nowcasts from our combination framework always perform well, both in terms of logarithmic scores and calibration tests. The density combination approach is superior to a simple model selection strategy and also performs better in terms of point forecast evaluation than standard point forecast combinations. 
Keywords:  Density combination, Forecast densities, Forecast evaluation, Monetary policy, Nowcasting, Real-time data 
JEL:  C32 C52 E37 E52 
Date:  2011–09–28 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2011_11&r=ecm 
By:  Stilianos Alexiadis; Matthias Koch; Tamás Krisztin 
Abstract:  In this paper an attempt is made to assess the hypothesis of regional club-convergence, using a spatial panel analysis combined with B-splines. In this context, a 'convergence club' is conceived as a group of regions that in the long run move towards a steady-state equilibrium, approximated in terms of average per-capita income. Using data for the US states over the period 1929–2005, a pattern of club convergence is detected. The 'cluster' of converging states is rather limited and a strong spatial component is detected. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1678&r=ecm 
By:  Pierre Chaussé (Department of Economics, University of Waterloo) 
Abstract:  This paper extends the generalized empirical likelihood method to the case in which the moment conditions are defined on a continuum (CGEL). We show, for the iid case, that CGEL is asymptotically equivalent at the first order to the generalized method of moments for a continuum (CGMM) developed by Carrasco and Florens (2000). Because the system of equations that we need to solve becomes singular when the number of moment conditions converges to infinity, we treat CGEL as a nonlinear ill-posed problem and obtain the solution using the regularized Gauss-Newton method. This numerical algorithm is a fast and relatively easy way to compute the regularized Tikhonov solution to nonlinear ill-posed problems in function spaces. In order to compare the properties of CGEL and CGMM, we then perform a numerical study in which we estimate the parameters of a stable distribution using moment conditions based on the characteristic function. The results show that CGEL outperforms CGMM in most cases according to the root mean squared error criterion. 
JEL:  C13 C30 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:wat:wpaper:1104&r=ecm 
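The regularized Gauss-Newton iteration mentioned in the Chaussé abstract has the generic form theta <- theta + (J'J + reg*I)^(-1) J'r, where J is the Jacobian, r the residual vector, and reg the Tikhonov parameter. The scalar toy problem below (an exponential-decay fit, entirely hypothetical and unrelated to the paper's stable-distribution application) shows the mechanics of the update.

```python
import math

def regularized_gauss_newton(theta0, t_obs, g_obs, reg, iters):
    """Scalar regularized Gauss-Newton: fit m_i(theta) = exp(-theta * t_i) to
    observations g_i via theta <- theta + J'r / (J'J + reg), where reg plays
    the role of the Tikhonov regularization parameter."""
    theta = theta0
    for _ in range(iters):
        m = [math.exp(-theta * t) for t in t_obs]
        J = [-t * mi for t, mi in zip(t_obs, m)]     # d m_i / d theta
        r = [gi - mi for gi, mi in zip(g_obs, m)]    # residuals
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        theta += Jtr / (JtJ + reg)
    return theta

# Noiseless data generated with theta* = 2: the iteration should recover it.
t_obs = [0.1 * k for k in range(1, 11)]
g_obs = [math.exp(-2.0 * t) for t in t_obs]
theta_hat = regularized_gauss_newton(theta0=0.5, t_obs=t_obs, g_obs=g_obs,
                                     reg=1e-3, iters=100)
print(abs(theta_hat - 2.0) < 1e-3)  # True
```

In the paper's setting J'J is an operator on a function space and becomes singular as the continuum of moment conditions is approached, which is why the regularization term is essential rather than a mere numerical convenience.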
By:  Geoffrey Hewings; Jae Hong Kim 
Abstract:  The disequilibrium adjustment frameworks pioneered by Carlino & Mills (1987) and further extended by Boarnet (1994a) have been widely adopted by various regional and intraregional studies, 1) determining whether jobs follow people, people follow jobs, or both; 2) examining the determinants of growth or location decisions; and 3) investigating spread versus backwash effects. Beyond these traditional uses of the framework, this chapter presents the idea of using the model for small area population and employment forecasting and impact analysis. An application using data for the Chicago metropolitan area reveals that the framework, capturing spatial population-employment interaction and adjustment processes, can be a powerful small area forecasting and impact analysis tool when it is combined with a regional economic forecasting method. In particular, the spatial econometric specification of the model facilitates the integration of horizontal (across spatial units) as well as vertical (over the hierarchy; macro and subregional) dimensions into the analysis of change. This study also discusses some theoretical issues and methodological challenges in this type of application. 
Keywords:  Small-area Forecasting, Spatial Adjustment, Econometric Input-Output Model 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa11p1839&r=ecm 