nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒04‒06
twenty-one papers chosen by
Sune Karlsson
Orebro University

  1. Granger-Causal analysis of conditional mean and volatility models. By WOŹNIAK, Tomasz
  2. What do instrumental variable models deliver with discrete dependent variables? By Andrew Chesher; Adam Rosen
  3. Pivotal uniform inference in high-dimensional regression with random design in wide classes of models via linear programming By Eric Gautier; Alexandre Tsybakov
  4. Testing for homogeneity in mixture models By Jiaying Gu; Roger Koenker; Stanislav Volgushev
  5. On Size and Power of Heteroscedasticity and Autocorrelation Robust Tests By Preinerstorfer, David; Pötscher, Benedikt M.
  6. Generalized Least Squares Model Averaging By Qingfeng Liu; Ryo Okui; Arihiro Yoshimura
  7. Random coefficients in static games of complete information By Fabian Dunker; Stefan Hoderlein; Hiroaki Kaido
  8. Testing for Cointegration in the Presence of Moving Average Errors By Mallory, M.; Lence, Sergio H.
  9. A Simple and Successful Method to Shrink the Weight By Winfried Pohlmeier; Ruben R. Seiberlich; S. Derya Uysal
  10. Econometric Models for Mixed-Frequency Data. By FORONI, Claudia
  11. On smoothing macroeconomic time series using HP and modified HP filter By Choudhary, Ali; Hanif, Nadim; Iqbal, Javed
  12. An Information-Theoretic Test for Dependence with an Application to the Temporal Structure By Galen Sher; Pedro Vitoria
  13. Perturbation methods for Markov-switching DSGE models By Andrew Foerster; Juan Rubio-Ramírez; Daniel F. Waggoner; Tao Zha
  14. Dif-in-dif estimators of multiplicative treatment effects By Emanuele Ciani; Paul Fisher
  15. A new index of financial conditions By Gary Koop; Dimitris Korobilis
  16. Assortative matching and search with labour supply and home production By Nicolas Jacquemet; Jean-Marc Robin
  17. A model for ordinal responses with an application to policy interest rate By Andrei Sirchenko
  18. Nonlinear Dynamics and Recurrence Plots for Detecting Financial Crisis By Peter Martey Addo; Monica Billio; Dominique Guegan
  19. Forecasting with Non-spurious Factors in U.S. Macroeconomic Time Series By Yohei Yamamoto
  20. Modelling for the Wavelet Coefficients of ARFIMA Processes By Kei Nanamiya
  21. Agglomeration and firm-level productivity : a Bayesian spatial approach By Hashiguchi, Yoshihiro; Tanaka, Kiyoyasu

  1. By: WOŹNIAK, Tomasz
    Abstract: Recent economic developments have shown the importance of spillover and contagion effects in financial markets as well as in macroeconomic reality. Such effects are not limited to relations between the levels of variables but also affect volatilities and distributions. Granger causality in conditional means and conditional variances of time series is investigated in the framework of several popular multivariate econometric models. Bayesian inference is proposed as a method of assessment of the hypotheses of Granger noncausality. First, the family of ECCC-GARCH models is used in order to perform inference about Granger-causal relations in second conditional moments. The restrictions for second-order Granger noncausality between two vectors of variables are derived. Further, in order to investigate Granger causality in the conditional means and conditional variances of time series, VARMA-GARCH models are employed. Parametric restrictions for the hypothesis of noncausality in conditional variances between two groups of variables, when there are other variables in the system as well, are derived. These novel conditions are convenient for the analysis of potentially large systems of economic variables. Bayesian testing procedures applied to these two problems, Bayes factors and a Lindley-type test, make the testing possible regardless of the form of the restrictions on the parameters of the model. This approach also enables the assumptions about the existence of higher-order moments of the processes required by classical tests to be relaxed. Finally, a method of testing restrictions for Granger noncausality in mean, variance and distribution in the framework of Markov-switching VAR models is proposed. Due to the nonlinearity of the restrictions derived by Warne (2000), classical tests have limited use. Bayesian inference consists of a novel Block Metropolis-Hastings sampling algorithm for the estimation of the restricted models, and of standard methods of computing posterior odds ratios. The analysis may be applied to financial and macroeconomic time series with changes of parameter values over time and heteroskedasticity.
    Keywords: GARCH model; Bayesian statistical decision theory; Finance -- Econometric models
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ner:euiflo:urn:hdl:1814/25136&r=ecm
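    Sketch (Python): the posterior-odds logic can be previewed on a toy problem. The snippet below uses the Savage-Dickey density ratio to weigh a point restriction b = 0, the simplest analogue of a noncausality restriction, in a conjugate normal regression with known error variance; the paper's Block Metropolis-Hastings machinery for Markov-switching VARs is far more involved, and all names and prior choices here are illustrative assumptions, not the author's.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n, sigma2, tau2 = 200, 1.0, 1.0          # sample size, error variance, prior variance
      x = rng.normal(size=n)
      y = 0.0 * x + rng.normal(scale=np.sqrt(sigma2), size=n)   # truth: b = 0 (noncausality)

      # Conjugate posterior for b under the prior b ~ N(0, tau2), sigma2 known
      post_var = 1.0 / (1.0 / tau2 + x @ x / sigma2)
      post_mean = post_var * (x @ y / sigma2)

      # Savage-Dickey ratio: BF01 = posterior density at b = 0 / prior density at b = 0
      bf01 = norm.pdf(0.0, post_mean, np.sqrt(post_var)) / norm.pdf(0.0, 0.0, np.sqrt(tau2))
      print(f"Bayes factor in favour of the restriction b = 0: {bf01:.2f}")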
  2. By: Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and University College London)
    Abstract: We study models with discrete endogenous variables and compare the use of two stage least squares (2SLS) in a linear probability model with bounds analysis using a nonparametric instrumental variable model. 2SLS has the advantage of providing an easy-to-compute point estimator of a slope coefficient that can be interpreted as a local average treatment effect (LATE). However, the 2SLS estimator does not measure the value of other useful treatment effect parameters without invoking untenable restrictions. The nonparametric instrumental variable (IV) model has the advantage of being weakly restrictive, and so more generally applicable, but it usually delivers set identification. Nonetheless, it can be used to consistently estimate bounds on many parameters of interest including, for example, average treatment effects. We illustrate using data from Angrist & Evans (1998) and study the effect of family size on female employment.
    Keywords: discrete endogenous variables; endogeneity; incomplete models; instrumental variables; set identification; structural econometrics
    JEL: C10 C14 C50 C51
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:10/13&r=ecm
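    Sketch (Python): with one binary instrument and a binary treatment, the 2SLS slope the authors discuss collapses to the Wald ratio. A minimal simulated example (variable names and the data-generating process are mine, loosely echoing the family-size application):
      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      z = rng.integers(0, 2, n)                      # binary instrument
      u = rng.normal(size=n)                         # unobserved confounder
      d = (0.5 * z + u + rng.normal(size=n) > 0.5).astype(float)   # endogenous treatment
      y = -0.3 * d + u + rng.normal(size=n)          # outcome; treatment effect is -0.3

      # Wald / 2SLS estimate: cov(y, z) / cov(d, z), a LATE under monotonicity
      late = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]
      print(f"2SLS (Wald) estimate: {late:.3f}")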
  3. By: Eric Gautier (CREST - Centre de Recherche en Économie et Statistique - INSEE - École Nationale de la Statistique et de l'Administration Économique, ENSAE - École Nationale de la Statistique et de l'Administration Économique - ENSAE ParisTech); Alexandre Tsybakov (CREST - Centre de Recherche en Économie et Statistique - INSEE - École Nationale de la Statistique et de l'Administration Économique, ENSAE - École Nationale de la Statistique et de l'Administration Économique - ENSAE ParisTech)
    Abstract: We propose a new method of estimation in the high-dimensional linear regression model. It allows for very weak distributional assumptions, including heteroscedasticity, and does not require knowledge of the variance of the random errors. The method is based on linear programming only, so that its numerical implementation is faster than for previously known techniques using conic programs, and it allows one to deal with higher dimensional models. We provide upper bounds for the estimation and prediction errors of the proposed estimator, showing that it achieves the same rate as in the more restrictive situation of fixed design and i.i.d. Gaussian errors with known variance. Following Gautier and Tsybakov (2011), we obtain the results under weaker sensitivity assumptions than the restricted eigenvalue or assimilated conditions.
    Keywords: Heteroscedasticity, High-dimensional models, Linear models, Model selection, Non-Gaussian errors, Pivotal estimation
    Date: 2013–03–26
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00805556&r=ecm
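    Sketch (Python): not the authors' pivotal estimator, but the closely related Dantzig selector shows how an l1-based high-dimensional estimator reduces to a linear program; the tuning constant tau and all names are illustrative choices of mine.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(2)
      n, p, tau = 100, 50, 0.3
      X = rng.normal(size=(n, p))
      beta_true = np.zeros(p)
      beta_true[:3] = [2.0, -1.5, 1.0]
      y = X @ beta_true + rng.normal(size=n)

      # min ||b||_1  s.t.  ||X'(y - Xb)/n||_inf <= tau, as an LP in (b+, b-) >= 0
      G, c0 = X.T @ X / n, X.T @ y / n
      A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
      b_ub = np.concatenate([tau + c0, tau - c0])
      res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
      beta_hat = res.x[:p] - res.x[p:]
      print("indices of nonzero estimates:", np.flatnonzero(np.abs(beta_hat) > 1e-6))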
  4. By: Jiaying Gu; Roger Koenker (Institute for Fiscal Studies and University of Illinois); Stanislav Volgushev
    Abstract: Statistical models of unobserved heterogeneity are typically formalised as mixtures of simple parametric models and interest naturally focuses on testing for homogeneity versus general mixture alternatives. Many tests of this type can be interpreted as C(α) tests, as in Neyman (1959), and shown to be locally, asymptotically optimal. A unified approach to analysing the asymptotic behaviour of such tests will be described, employing a variant of the Le Cam LAN framework. These C(α) tests will be contrasted with a new approach to likelihood ratio testing for mixture models. The latter tests are based on estimation of general (nonparametric) mixture models using the Kiefer and Wolfowitz (1956) maximum likelihood method. Recent developments in convex optimisation are shown to dramatically improve upon earlier EM methods for computation of these estimators, and new results on the large sample behaviour of likelihood ratios involving such estimators yield a tractable form of asymptotic inference. We compare the performance of the two approaches, identifying circumstances in which each is preferred.
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:09/13&r=ecm
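    Sketch (Python): the Kiefer-Wolfowitz NPMLE can be approximated on a fixed grid of support points; the paper relies on modern convex solvers, but a plain EM iteration on the grid weights conveys the object, and the resulting likelihood ratio has the nonstandard null behaviour the paper's theory addresses. The grid, sample size and unit-variance normal kernel are my assumptions.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      x = rng.normal(size=500)                       # homogeneity holds: one N(0,1) component

      grid = np.linspace(-3, 3, 61)                  # candidate locations for the mixing measure
      F = norm.pdf(x[:, None], grid[None, :], 1.0)   # f(x_i | mu_k) with unit variance
      w = np.full(grid.size, 1.0 / grid.size)

      for _ in range(500):                           # EM updates of the mixing weights
          post = F * w
          post /= post.sum(axis=1, keepdims=True)
          w = post.mean(axis=0)

      loglik_npmle = np.log(F @ w).sum()
      loglik_null = norm.logpdf(x, x.mean(), 1.0).sum()    # homogeneous model, mean estimated
      print(f"LR statistic for homogeneity: {2 * (loglik_npmle - loglik_null):.3f}")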
  5. By: Preinerstorfer, David; Pötscher, Benedikt M.
    Abstract: Testing restrictions on regression coefficients in linear models often requires correcting the conventional F-test for potential heteroscedasticity or autocorrelation amongst the disturbances, leading to so-called heteroskedasticity and autocorrelation robust test procedures. These procedures have been developed with the purpose of attenuating size distortions and power deficiencies present for the uncorrected F-test. We develop a general theory to establish positive as well as negative finite-sample results concerning the size and power properties of a large class of heteroskedasticity and autocorrelation robust tests. Using these results we show that nonparametrically as well as parametrically corrected F-type tests in time series regression models with stationary disturbances have either size equal to one or nuisance-infimal power equal to zero under very weak assumptions on the covariance model and under generic conditions on the design matrix. In addition we suggest an adjustment procedure based on artificial regressors. This adjustment resolves the problem in many cases in that the so-adjusted tests do not suffer from size distortions. At the same time their power function is bounded away from zero. As a second application we discuss the case of heteroscedastic disturbances.
    Keywords: Size distortion, power deficiency, invariance, robustness, autocorrelation, heteroscedasticity, HAC, fixed-bandwidth, long-run-variance, feasible GLS
    JEL: C12 C20
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:45675&r=ecm
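    Sketch (Python): the class of tests studied includes the familiar kernel-based (Newey-West-type) corrected t-test, which statsmodels exposes through its HAC covariance option; the AR(1) design and bandwidth below are arbitrary illustrative choices.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n, rho = 200, 0.8
      u = np.zeros(n)
      for t in range(1, n):                          # strongly autocorrelated disturbances
          u[t] = rho * u[t - 1] + rng.normal()
      x = rng.normal(size=n)
      y = 1.0 + 0.0 * x + u                          # the null (zero slope) is true

      fit = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC", cov_kwds={"maxlags": 8})
      print(f"HAC t-statistic for the slope: {fit.tvalues[1]:.2f}")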
  6. By: Qingfeng Liu (Otaru University of Commerce); Ryo Okui (Institute of Economic Research, Kyoto University); Arihiro Yoshimura (Kyoto University)
    Abstract: This paper proposes a method of averaging generalized least squares (GLS) estimators for linear regression models with heteroskedastic errors. We derive two kinds of Mallows' Cp criteria, calculated from the estimates of the mean of the squared errors of the fitted value based on the averaged GLS estimators, for this class of models. The averaging weights are chosen by minimizing Mallows' Cp criterion. We show that this method achieves asymptotic optimality. It is also shown that the asymptotic optimality holds even when the variances of the error terms are estimated and the feasible generalized least squares (FGLS) estimators are averaged. Monte Carlo simulations demonstrate that averaging FGLS estimators yields an estimate that has a remarkably lower level of risk compared with averaging least squares estimators in the presence of heteroskedasticity, and it also works when heteroskedasticity is not present, in finite samples.
    Keywords: model averaging, GLS, FGLS, asymptotic optimality, Mallows' Cp
    JEL: C51 C52
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:855&r=ecm
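    Sketch (Python): the least-squares version of Mallows model averaging (Hansen, 2007) that the paper generalizes to GLS/FGLS fits can be written in a few lines; the nested candidate models and the constrained solver are my illustrative choices.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      n, p = 100, 6
      X = rng.normal(size=(n, p))
      y = X @ (1.0 / (1 + np.arange(p))) + rng.normal(size=n)

      fits, ks = [], []                              # nested models using the first m regressors
      for m in range(1, p + 1):
          Xm = X[:, :m]
          fits.append(Xm @ np.linalg.lstsq(Xm, y, rcond=None)[0])
          ks.append(m)
      fits, ks = np.column_stack(fits), np.array(ks)
      sigma2 = np.sum((y - fits[:, -1]) ** 2) / (n - p)   # error variance from largest model

      def cp(w):                                     # Mallows' Cp of the averaged fit
          resid = y - fits @ w
          return resid @ resid + 2.0 * sigma2 * (ks @ w)

      res = minimize(cp, np.full(p, 1.0 / p), bounds=[(0, 1)] * p,
                     constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
      print("Mallows weights:", np.round(res.x, 3))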
  7. By: Fabian Dunker; Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Hiroaki Kaido (Institute for Fiscal Studies and Boston University)
    Abstract: Individual players in a simultaneous equation binary choice model act differently in different environments in ways that are frequently not captured by observables and a simple additive random error. This paper proposes a random coefficient specification to capture this type of heterogeneity in behaviour, and discusses nonparametric identification and estimation of the distribution of random coefficients. We establish nonparametric point identification of the joint distribution of all random coefficients, except those on the interaction effects, provided the players behave competitively in all markets. Moreover, we establish set identification of the density of the coefficients on the interaction effects, and provide additional conditions that allow to point identify this density. Since our identification strategy is constructive throughout, it allows us to construct sample counterpart estimators. We analyse their asymptotic behaviour, and illustrate their finite sample behaviour in a numerical study. Finally, we discuss several extensions, like the semiparametric case, or correlated random coefficients.
    Keywords: Games, heterogeneity, nonparametric identification, random coefficients, inverse problems
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:12/13&r=ecm
  8. By: Mallory, M.; Lence, Sergio H.
    Abstract: This study explores the performance of the Johansen cointegration statistics on data containing negative moving average (NMA) errors. Monte Carlo experiments demonstrate that the asymptotic distributions of the statistics are sensitive to NMA parameters, and that using the standard 5% asymptotic critical values results in severe underestimation of the actual test sizes. We demonstrate that problems associated with NMA errors do not decrease as sample size increases; instead, they become more severe. Further, we examine evidence that many U.S. commodity prices are characterized by NMA errors. Pretesting data is recommended before using standard asymptotic critical values for Johansen's cointegration tests.
    Keywords: cointegration; Johansen cointegration test; moving average
    JEL: C22
    Date: 2012–12–31
    URL: http://d.repec.org/n?u=RePEc:isu:genres:36076&r=ecm
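    Sketch (Python): the size distortion is easy to reproduce. Below, two independent random walks (true cointegrating rank zero) are driven by NMA(1) innovations and the nominal 5% Johansen trace test is applied with statsmodels; theta, the sample size and the lag order are my choices.
      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      rng = np.random.default_rng(6)
      T, reps, theta, rej = 200, 500, -0.8, 0

      for _ in range(reps):
          e = rng.normal(size=(T + 1, 2))
          u = e[1:] + theta * e[:-1]                 # negative moving average errors
          y = np.cumsum(u, axis=0)                   # two independent I(1) series
          res = coint_johansen(y, det_order=0, k_ar_diff=1)
          rej += res.lr1[0] > res.cvt[0, 1]          # trace test of rank 0 at the 5% level

      print(f"empirical size of the nominal 5% test: {rej / reps:.3f}")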
  9. By: Winfried Pohlmeier (Department of Economics, University of Konstanz, Germany); Ruben R. Seiberlich (Department of Economics, University of Konstanz, Germany); S. Derya Uysal (Economics and Finance Department, Institute for Advanced Studies Vienna, Austria)
    Abstract: We propose a simple way to improve the efficiency of propensity-score-based estimators of the average treatment effect. As the weights become arbitrarily large for propensity scores close to one or zero, we propose to shrink the propensity scores away from these boundaries. Using a comprehensive Monte Carlo study, we show that this simple method substantially reduces the mean squared error of the estimators in finite samples.
    Keywords: Econometric evaluation, propensity score, penalizing, shrinkage, average treatment effect
    JEL: C14 C21
    Date: 2013–03–23
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1305&r=ecm
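    Sketch (Python): the paper's exact shrinkage rule is not reproduced here; a simple convex-combination shrinkage of estimated scores toward 0.5 conveys the mechanism, with the intensity lam as an arbitrary tuning choice of mine.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      n = 2000
      x = rng.normal(size=(n, 2))
      p_true = 1 / (1 + np.exp(-(2.0 * x[:, 0] + 2.0 * x[:, 1])))   # scores near 0/1 occur
      d = rng.binomial(1, p_true)
      y = 1.0 * d + x[:, 0] + rng.normal(size=n)                    # true ATE = 1

      p_hat = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

      def ipw_ate(p):                                # inverse-probability-weighting estimator
          return np.mean(d * y / p) - np.mean((1 - d) * y / (1 - p))

      lam = 0.05
      p_shrunk = (1 - lam) * p_hat + lam * 0.5       # pulls scores away from the boundaries
      print(f"IPW ATE, raw scores:    {ipw_ate(p_hat):.3f}")
      print(f"IPW ATE, shrunk scores: {ipw_ate(p_shrunk):.3f}")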
  10. By: FORONI, Claudia
    Abstract: This thesis addresses different issues related to the use of mixed-frequency data. In the first chapter, I review, discuss and compare the main approaches proposed so far in the literature to deal with mixed-frequency data, with ragged edges due to publication delays: aggregation, bridge-equations, mixed-data sampling (MIDAS) approach, mixed-frequency VAR and factor models. The second chapter, a joint work with Massimiliano Marcellino, compares the different approaches analyzed in the first chapter, in a detailed empirical application. We focus on now- and forecasting the quarterly growth rate of Euro Area GDP and its components, using a very large set of monthly indicators, with a wide number of forecasting methods, in a pseudo real-time framework. The results highlight the importance of monthly information, especially during the crisis periods. The third chapter, a joint work with Massimiliano Marcellino and Christian Schumacher, studies the performance of a variant of the MIDAS model, which does not resort to functional distributed lag polynomials. We call this approach unrestricted MIDAS (U-MIDAS). We discuss the pros and cons of unrestricted lag polynomials in MIDAS regressions. In Monte Carlo experiments and empirical applications, we compare U-MIDAS to MIDAS and show that U-MIDAS performs better than MIDAS for small differences in sampling frequencies. The fourth chapter, a joint work with Massimiliano Marcellino, focuses on the issues related to mixed-frequency data in structural models. We show analytically, with simulation experiments and with actual data that a mismatch between the time scale of a DSGE or structural VAR model and that of the time series data used for its estimation generally creates identification problems, introduces estimation bias and distorts the results of policy analysis. On the constructive side, we prove that the use of mixed-frequency data can alleviate the temporal aggregation bias, mitigate the identification issues, and yield more reliable policy conclusions.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ner:euiflo:urn:hdl:1814/23750&r=ecm
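    Sketch (Python): the U-MIDAS idea from the third chapter is just least squares of the quarterly variable on unrestricted monthly lags; the simulated design below (one indicator, current-quarter months only) is an assumption for illustration.
      import numpy as np

      rng = np.random.default_rng(8)
      nq = 120                                       # number of quarters
      xm = rng.normal(size=3 * nq)                   # one monthly indicator
      y = (0.6 * xm[0::3] + 0.3 * xm[1::3] + 0.1 * xm[2::3]    # months within each quarter
           + 0.2 * rng.normal(size=nq))

      # U-MIDAS: each monthly observation of the quarter enters as its own regressor
      Z = np.column_stack([np.ones(nq), xm[0::3], xm[1::3], xm[2::3]])
      coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
      print("unrestricted monthly-lag coefficients:", np.round(coef[1:], 2))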
  11. By: Choudhary, Ali; Hanif, Nadim; Iqbal, Javed
    Abstract: In business cycle research, smoothing data is an essential step in that it can influence the extent to which model-generated moments stand up to their empirical counterparts. To demonstrate this idea, we compare the results of McDermott’s (1997) modified HP-filter with the conventional HP-filter on the properties of simulated and actual macroeconomic series. Our simulations suggest that the modified HP-filter proxies better the true cyclical series. This is true for temporally aggregated data as well. Furthermore, we find that although the autoregressive properties of the smoothed observed series are immune to smoothing procedures, the multivariate analysis is not. As a result, we recommend and hence provide series-, country- and frequency specific smoothing parameters.
    Keywords: Business Cycles; Cross Country Comparisons; Smoothing Parameter; Time Aggregation
    JEL: C32 C43 E32
    Date: 2013–03–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:45630&r=ecm
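    Sketch (Python): statsmodels ships the standard HP filter; McDermott's (1997) modification essentially makes the smoothing parameter data-driven, which this snippet only gestures at by comparing two conventional values of lambda on a simulated series.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      t = np.arange(200)
      series = 0.02 * t + np.sin(t / 8) + 0.3 * rng.normal(size=t.size)  # trend + cycle + noise

      for lamb in (1600, 129600):                    # quarterly and monthly conventions
          cycle, trend = sm.tsa.filters.hpfilter(series, lamb=lamb)
          print(f"lambda={lamb}: cycle std = {cycle.std():.3f}")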
  12. By: Galen Sher; Pedro Vitoria
    Abstract: Information theory provides ideas for conceptualising information and measuring relationships between objects. It has found wide application in the sciences, but economics and finance have made surprisingly little use of it. We show that time series data can usefully be studied as information -- by noting the relationship between statistical redundancy and dependence, we are able to use the results of information theory to construct a test for joint dependence of random variables. The test is in the same spirit as those developed by Ryabko and Astola (2005, 2006b,a), but differs from these in that we add extra randomness to the original stochastic process. It uses data compression to estimate the entropy rate of a stochastic process, which allows it to measure dependence among sets of random variables, as opposed to the existing econometric literature that uses entropy and finds itself restricted to pairwise tests of dependence. We show how serial dependence may be detected in S&P500 and PSI20 stock returns over different sample periods and frequencies. We apply the test to synthetic data to judge its ability to recover known temporal dependence structures.
    Date: 2013–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1304.0353&r=ecm
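    Sketch (Python): the entropy-rate idea can be mimicked with a general-purpose compressor (not the authors' exact statistic): discretize the series, compress it, and compare with compressed lengths after random shuffling, which destroys temporal dependence while preserving the marginal distribution.
      import numpy as np
      import zlib

      rng = np.random.default_rng(10)
      n = 20000
      x = np.zeros(n)
      for t in range(1, n):                          # an AR(1) series: serially dependent
          x[t] = 0.9 * x[t - 1] + rng.normal()

      # discretize into 8 symbols so the compressor can exploit statistical redundancy
      codes = np.digitize(x, np.quantile(x, np.linspace(0, 1, 9)[1:-1])).astype(np.uint8)
      c_obs = len(zlib.compress(codes.tobytes(), 9))

      c_perm = [len(zlib.compress(rng.permutation(codes).tobytes(), 9)) for _ in range(50)]
      print(f"compressed bytes, observed: {c_obs}; shuffled mean: {np.mean(c_perm):.0f}")
      print("permutation p-value:", np.mean([c <= c_obs for c in c_perm]))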
  13. By: Andrew Foerster; Juan Rubio-Ramírez; Daniel F. Waggoner; Tao Zha
    Abstract: This paper develops a general perturbation methodology for constructing high-order approximations to the solutions of Markov-switching DSGE models. We introduce an important and practical idea of partitioning the Markov-switching parameter space so that a steady state is well defined. With this definition, we show that the problem of finding an approximation of any order can be reduced to solving a system of quadratic equations. We propose using the theory of Gröbner bases to search for all the solutions of the quadratic system. This approach allows us to obtain all the approximations and ascertain how many of them are stable. Our methodology is applied to three models to illustrate its feasibility and practicality.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2013-01&r=ecm
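    Sketch (Python): the role of Gröbner bases, turning a polynomial system into a triangular one whose solutions can all be enumerated, is visible on a toy quadratic system with sympy (nothing DSGE-specific is attempted here).
      from sympy import symbols, groebner, solve

      x, y = symbols("x y")
      eqs = [x**2 + y**2 - 4, x * y - 1]             # a small quadratic system

      # Under lex order the basis is triangular: its last polynomial is univariate,
      # so every solution of the system is recovered by back-substitution.
      G = groebner(eqs, x, y, order="lex")
      print(G.exprs)
      print(solve(eqs, [x, y]))                      # all four solutions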
  14. By: Emanuele Ciani; Paul Fisher
    Abstract: We consider a difference-in-differences setting with a continuous outcome, such as wages or expenditure. The standard practice is to take its logarithm and then interpret the results as an approximation of the multiplicative treatment effect on the original outcome. We argue that a researcher should instead focus on the non-transformed outcome when discussing causal inference. Furthermore, it is preferable to use a non-linear estimator, because running OLS on the log-linearized model might confound distributional and mean changes. We illustrate the argument with an original empirical analysis of the impact of the UK Educational Maintenance Allowance on households' expenditure.
    Date: 2013–03–27
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:725&r=ecm
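    Sketch (Python): one natural non-linear estimator for a multiplicative effect is Poisson pseudo-maximum likelihood, contrasted below with OLS on logs in a design where treatment shifts the error distribution while the true mean ratio is exp(0.2); the data-generating process is my assumption, not the authors' application.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(11)
      n = 4000
      df = pd.DataFrame({"treat": rng.integers(0, 2, n), "post": rng.integers(0, 2, n)})
      mu = np.exp(3 + 0.1 * df.treat + 0.1 * df.post + 0.2 * df.treat * df.post)
      s = (0.5 + 0.4 * df.treat * df.post).to_numpy()          # treatment also changes spread
      df["y"] = mu * rng.lognormal(-s**2 / 2, s)               # mean-one multiplicative error

      pois = smf.glm("y ~ treat*post", df, family=sm.families.Poisson()).fit(cov_type="HC1")
      ols = smf.ols("np.log(y) ~ treat*post", df).fit()
      print(f"Poisson QML interaction: {pois.params['treat:post']:.3f}   (targets 0.2)")
      print(f"log-OLS interaction:     {ols.params['treat:post']:.3f}   (confounded)")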
  15. By: Gary Koop; Dimitris Korobilis
    Abstract: We use factor augmented vector autoregressive models with time-varying coefficients to construct a financial conditions index (FCI). The time-variation in the parameters allows the weights attached to each financial variable in the index to evolve over time. Furthermore, we develop methods for dynamic model averaging or selection which allow the financial variables entering into the FCI to change over time. We discuss why such extensions of the existing literature are important and show them to be so in an empirical application involving a wide range of financial variables.
    Keywords: financial stress; dynamic model averaging; forecasting
    JEL: C11 C32 C52 C53
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2013_06&r=ecm
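    Sketch (Python): a constant-weight baseline FCI is the first principal component of standardized financial variables; the paper's contribution is precisely to let these weights, and the set of included variables, drift over time. Simulated data, names mine.
      import numpy as np

      rng = np.random.default_rng(12)
      T, k = 300, 10
      latent = np.cumsum(rng.normal(size=T))                   # latent financial conditions
      Xf = np.outer(latent, rng.uniform(0.5, 1.5, k)) + rng.normal(size=(T, k))

      Z = (Xf - Xf.mean(0)) / Xf.std(0)                        # standardize each variable
      fci = Z @ np.linalg.svd(Z, full_matrices=False)[2][0]    # first principal component
      print("corr(FCI, latent factor):",                       # sign of a PC is arbitrary
            np.round(abs(np.corrcoef(fci, latent)[0, 1]), 3))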
  16. By: Nicolas Jacquemet; Jean-Marc Robin (Institute for Fiscal Studies and Sciences Po)
    Abstract: We extend the search-matching model of the marriage market of Shimer and Smith (2000) to allow for labour supply and home production. We characterise the steady-state equilibrium when exogenous divorce is the only source of risk. We study nonparametric identification using cross-section data on wages and hours worked, and we develop a nonparametric estimator. The estimated matching probabilities that can be derived from the steady-state flow conditions are strongly increasing in male and female wages. We estimate the expected share of marriage surplus appropriated by each spouse as a function of wages. The model allows us to infer the specialisation of female spouses in home production from observations on wages and hours worked.
    Keywords: search-matching, sorting, assortative matching, collective labour supply, structural estimation
    JEL: C78 D83 J12 J22
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:07/13&r=ecm
  17. By: Andrei Sirchenko (European University Institute, Florence, Italy)
    Abstract: The decisions to reduce, leave unchanged, or increase (the price, rating, policy interest rate, etc.) are often characterized by abundant no-change outcomes that are generated by different processes. Moreover, the positive and negative responses can also be driven by distinct forces. To capture the unobserved heterogeneity, this paper develops a two-stage cross-nested model combining three ordered probit equations. In the policy rate setting context, the first stage, a policy inclination decision, determines a latent policy stance (loose, neutral or tight), whereas the two latent amount decisions, conditional on a loose or tight stance, fine-tune the rate at the second stage. The model allows for possible correlation among the three latent decisions. This approach identifies the driving factors and probabilities of three types of zeros: the "neutral" zeros, generated directly by the neutral policy stance, and two kinds of "offset" zeros, the "loose" and "tight" zeros, generated by the loose or tight stance and offset at the second stage. Monte Carlo experiments show good performance in small samples. Both the simulations and empirical applications to panel data on individual policymakers' votes for the interest rate demonstrate the superiority of this approach over conventional and two-part models. Only a quarter of the observed zeros appears to be generated by the neutral policy stance, suggesting a high degree of deliberate interest-rate smoothing by the central bank.
    Keywords: ordinal responses; zero-inflated outcomes; three-part model; cross-nested model; policy interest rate; MPC votes; real-time data; panel data
    JEL: C33 C35 E52
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:148&r=ecm
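    Sketch (Python): the single-equation ordered probit that the cross-nested model generalizes can be fit with statsmodels; the regressors, thresholds and outcome labels below are simulated stand-ins for the policy-vote application.
      import numpy as np
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(13)
      n = 1000
      x = rng.normal(size=(n, 2))                    # e.g. inflation and output-gap signals
      ystar = x @ np.array([1.0, 0.5]) + rng.normal(size=n)
      vote = np.digitize(ystar, [-0.5, 0.5])         # 0 = cut, 1 = hold, 2 = hike

      fit = OrderedModel(vote, x, distr="probit").fit(method="bfgs", disp=False)
      print(np.round(fit.params, 2))                 # two slopes plus two cutpoints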
  18. By: Peter Martey Addo (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, Università Ca' Foscari of Venice - Department of Economics, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Monica Billio (Università Ca' Foscari of Venice - Department of Economics); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: Identification of financial bubbles and crises is a topic of major concern since it is important to prevent collapses that can severely impact nations and economies. Our analysis deals with the use of the recently proposed "delay vector variance" (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential entropy based method using wavelet-based surrogates. We exploit the concept of recurrence plots to study the stock market, to locate hidden patterns and non-stationarity, and to examine the nature of these plots in events of financial crisis. In particular, the recurrence plots are employed to detect and characterize financial cycles. A comprehensive analysis of the feasibility of this approach is provided. We show that our methodology is useful in the diagnosis and detection of financial bubbles, which have significantly contributed to economic upheavals in the past few decades.
    Keywords: Nonlinearity analysis; surrogates; Delay vector variance (DVV) method; wavelets; financial bubbles; embedding parameters; recurrence plots
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00803450&r=ecm
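    Sketch (Python): a recurrence plot is a thresholded distance matrix of delay-embedded observations; the embedding dimension, delay and threshold below are arbitrary, whereas the paper selects them via a differential-entropy criterion.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(14)
      t = np.arange(600)
      x = np.sin(t / 10) + 0.2 * rng.normal(size=t.size)       # noisy cyclical series

      m, tau = 3, 5                                  # embedding dimension and delay
      L = t.size - (m - 1) * tau
      emb = np.column_stack([x[i * tau: i * tau + L] for i in range(m)])
      D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
      R = D < 0.3 * D.std()                          # recurrence matrix

      plt.imshow(R, cmap="binary", origin="lower")
      plt.title("Recurrence plot")
      plt.show()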
  19. By: Yohei Yamamoto
    Abstract: Time instability in factor loadings can induce an overfitting problem in forecasting analyses, since the structural change in factor loadings inflates the number of principal components and thus produces spurious factors. This paper proposes an algorithm to estimate non-spurious factors by identifying the set of observations with stable factor loadings, based on the recursive procedure suggested by Inoue and Rossi (2011). I find that 51 out of 132 U.S. macroeconomic time series of Stock and Watson (2005) have stable factor loadings. Although crude principal components provide eight or more factors, there are only one or two non-spurious factors. Forecasts using the non-spurious factors significantly improve out-of-sample performance.
    Keywords: dynamic factor model, principal components, structural change, spurious factors, out-of-sample forecasts, overfitting
    JEL: C12 C38 E17
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd12-280&r=ecm
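    Sketch (Python): the paper's recursive procedure is not reproduced, but its core diagnostic, instability of principal-components loadings, can be illustrated by estimating loadings on two halves of a panel with a mid-sample break.
      import numpy as np

      rng = np.random.default_rng(15)
      T, N = 200, 50
      f = rng.normal(size=T)                         # one true factor
      lam1 = rng.normal(size=N)
      lam2 = lam1.copy()
      lam2[:25] = rng.normal(size=25)                # half the loadings break mid-sample
      X = np.vstack([np.outer(f[:100], lam1), np.outer(f[100:], lam2)])
      X = X + rng.normal(size=(T, N))

      def loadings(Z):                               # principal-components loadings (1 factor)
          Z = (Z - Z.mean(0)) / Z.std(0)
          return np.linalg.svd(Z, full_matrices=False)[2][0]

      l1, l2 = loadings(X[:100]), loadings(X[100:])
      corr = abs(np.corrcoef(l1, l2)[0, 1])          # absolute value: PC signs are arbitrary
      print(f"loading correlation across subsamples: {corr:.2f}")   # low => instability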
  20. By: Kei Nanamiya
    Abstract: We consider a model for the discrete nonboundary wavelet coefficients of ARFIMA processes. Although many authors have explained the utility of the wavelet transform for long-range dependent processes in the semiparametric literature, there have been few studies in a parametric setting. In this paper, we restrict attention to the Daubechies wavelet filters to make clear the form of the (general) spectral density function of these coefficients.
    Keywords: discrete wavelet transform, long memory process, spectral density function
    JEL: C22
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd12-281&r=ecm
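    Sketch (Python): numerically, the level-by-level variance of Daubechies wavelet coefficients of a fractionally integrated series already displays the scale behaviour the paper characterizes analytically (growing roughly like 2^(2dj) for ARFIMA(0,d,0)); the truncated MA representation and PyWavelets are my implementation choices.
      import numpy as np
      import pywt

      rng = np.random.default_rng(16)
      d, n = 0.3, 4096
      k = np.arange(1, 1000)                         # ARFIMA(0,d,0) via (1-L)^(-d) weights
      psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])
      x = np.convolve(rng.normal(size=n + psi.size), psi, mode="valid")[:n]

      coeffs = pywt.wavedec(x, "db4", level=6)       # [cA6, cD6, cD5, ..., cD1]
      for j, cd in enumerate(reversed(coeffs[1:]), start=1):
          print(f"level {j}: coefficient variance = {cd.var():.3f}")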
  21. By: Hashiguchi, Yoshihiro; Tanaka, Kiyoyasu
    Abstract: This paper estimates the impact of industrial agglomeration on firm-level productivity in Chinese manufacturing sectors. To account for spatial autocorrelation across regions, we formulate a hierarchical spatial model at the firm level and develop a Bayesian estimation algorithm. A Bayesian instrumental-variables approach is used to address the endogeneity bias of agglomeration. After accounting for these potential biases, we find that agglomeration of the same industry (i.e. localization) has a productivity-boosting effect, but agglomeration of urban population (i.e. urbanization) has no such effect. Additionally, the localization effects increase with the educational levels of employees and the share of intermediate inputs in gross output. These results may suggest that agglomeration externalities occur through knowledge spillovers and input sharing among firms producing similar manufactures.
    Keywords: China, Industrial policy, Manufacturing industries, Productivity, Local economy, Agglomeration economies, Spatial autocorrelation, Bayes, Chinese firm-level data, GIS
    JEL: C21 C51 R10 R15
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:jet:dpaper:dpaper403&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.