nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒10‒10
34 papers chosen by
Sune Karlsson
Orebro University

  1. Nonparametric regression with spatially dependent data By Stefano Magrini; Margherita Gerolimetto
  2. Nonstandard Estimation of Inverse Conditional Density-Weighted Expectations By Chuan Goh
  3. Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data By Kim Christensen; Silja Kinnebrock; Mark Podolskij
  4. Finite-Sample Properties of the Maximum Likelihood Estimator for the Poisson Regression Model With Random Covariates By Qian Chen; David E. Giles
  5. Local polynomial Whittle estimation of perturbed fractional processes By Per Frederiksen; Frank S. Nielsen; Morten Ørregaard Nielsen
  6. Testing for cointegration in high-dimensional systems By Jorg Breitung; Gianluca Cubadda
  7. Testing for Unit Roots in Panel Time Series Models with Multiple Breaks By Westerlund, Joakim
  8. Robust Data-Driven Inference for Density-Weighted Average Derivatives By Matias D. Cattaneo; Richard K. Crump; Michael Jansson
  9. Efficient Semiparametric Detection of Changes in Trend By Chuan Goh
  10. A New Paradigm: A Joint Test of Structural and Correlation Parameters in Instrumental Variables Regression When Perfect Exogeneity is Violated By Caner, Mehmet; Sandler Morrill, Melinda
  11. Cross-section dependence in nonstationary panel models: a novel estimator By Eberhardt, Markus; Bond, Stephen
  12. Semiparametric Modelling and Estimation: A Selective Overview By Dennis Kristensen
  13. UK Macroeconomic Forecasting with Many Predictors: Which Models Forecast Best and When Do They Do So? By Gary Koop; Dimitris Korobilis
  14. Bias of the Maximum Likelihood Estimators of the Two-Parameter Gamma Distribution Revisited By David E. Giles; Hui Feng
  15. Reducing the Size Distortion of the KPSS Test By Eiji Kurozumi; Shinya Tanaka
  16. The Applications of Mixtures of Normal Distributions in Empirical Finance: A Selected Survey By Dinghai Xu
  17. Testing for Spatial Autocorrelation in a Fixed Effects Panel Data Model By Nicolas Debarsy; Cem Ertur
  18. The multivariate supOU stochastic volatility model By Ole Eiler Barndorff-Nielsen; Robert Stelzer
  19. Understanding limit theorems for semimartingales: a short survey By Mark Podolskij; Mathias Vetter
  20. Does the IV estimator establish causality? Re-examining Chinese fertility-growth relationship By Tilak Abeysinghe; Jiaying Gu
  21. Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data By Flores, Carlos A.; Mitnik, Oscar A.
  22. A Characterization of the Dickey-Fuller Distribution, With Some Extensions to the Multivariate Case By Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio
  23. Identification of Macroeconomic Factors in Large Panels By Lasse Bork; Hans Dewachter; Romain Houssa
  24. Benchmark Priors Revisited: On Adaptive Shrinkage and the Supermodel Effect in Bayesian Model Averaging By Stefan Zeugner; Martin Feldkircher
  25. How Much Can We Trust Causal Interpretations of Fixed-Effects Estimators in the Context of Criminality? By Bjerk, David
  26. Improving Unemployment Rate Forecasts Using Survey Data By Österholm, Pär
  27. A bayesian estimation of a DSGE model with financial frictions By Rossana Merola
  28. The perceived framework of a classical statistic: Is the non-invariance of a Wald statistic much ado about null thing? By Dastoor, Naorayex
  29. Financial Applications of Random Matrix Theory: a short review By J. P. Bouchaud; M. Potters
  30. Estimating Income Poverty in the Presence of Missing Data and Measurement Error By Cheti Nicoletti; Franco Peracchi; Francesca Foliano
  31. Incorporating Market Information into the Construction of the Fan Chart By Selim Elekdag; Prakash Kannan
  32. The Macroeconomic Performance of the Inflation Targeting Policy: An Approach Based on the Evolutionary Co-spectral Analysis By Zied Ftiti
  33. Experimental Tests of Survey Responses to Expenditure Questions By Comerford, David; Delaney, Liam; Harmon, Colm P.
  34. Econometric evaluation of EU Cohesion Policy: a survey By Hagen, Tobias; Mohl, Philipp

  1. By: Stefano Magrini (Department of Economics, University Of Venice Cà Foscari); Margherita Gerolimetto (University Of Venice Cà Foscari)
    Abstract: In this paper we present a new procedure for nonparametric regression with spatially dependent data. In particular, we extend the usual local linear regression (along the lines of Martins-Filho and Yao, 2009) and propose a two-step method in which information on the spatial dependence is incorporated in the error covariance matrix, estimated nonparametrically. The finite sample performance of the proposed procedure is then shown via Monte Carlo simulations for various data generating processes.
    Keywords: nonparametric smoothing, spatial dependence
    JEL: C14 C21
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2009_20&r=ecm
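    A minimal sketch of the two-step idea in Magrini and Gerolimetto's abstract: fit a local linear regression, estimate the error covariance from the residuals, pre-whiten, and refit. The exponential-decay covariance form and the fixed bandwidth h below are illustrative assumptions (the paper estimates the covariance nonparametrically).
```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = w[:, None] * X
    return np.linalg.solve(X.T @ WX, WX.T @ y)[0]

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 1, (n, 2))                 # spatial locations
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
x = rng.uniform(-2, 2, n)
e = np.linalg.cholesky(np.exp(-dist / 0.2)) @ rng.standard_normal(n)
y = np.sin(x) + 0.5 * e                            # spatially dependent errors

h = 0.4                                            # assumed bandwidth
m1 = np.array([local_linear(xi, x, y, h) for xi in x])  # step 1: usual fit

resid = y - m1                                     # step 2: model the errors
Sigma = np.var(resid) * np.exp(-dist / 0.2)        # assumed parametric form
y_star = np.linalg.solve(np.linalg.cholesky(Sigma), resid) + m1  # pre-whiten
m2 = np.array([local_linear(xi, x, y_star, h) for xi in x])      # refit
```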
  2. By: Chuan Goh
    Abstract: This paper is concerned with the semiparametric estimation of function means that are scaled by an unknown conditional density function. Parameters of this form arise naturally in the consideration of models where interest is focused on the expected value of an integral of a conditional expectation with respect to a continuously distributed “special regressor” with unbounded support. In particular, a consistent and asymptotically normal estimator of an inverse conditional density-weighted average is proposed whose validity does not require data-dependent trimming or the subjective choice of smoothing parameters. The asymptotic normality result is also rate adaptive in the sense that it allows for the formulation of the usual Wald-type inference procedures without knowledge of the estimator's actual rate of convergence, which depends in general on the tail behaviour of the conditional density weight. The theory developed in this paper exploits recent results of Goh & Knight (2009) concerning the behaviour of estimated regression-quantile residuals. Simulation experiments illustrating the applicability of the procedure proposed here to a semiparametric binary-choice model are suggestive of good small-sample performance.
    Keywords: Semiparametric, identification at infinity, special regressor, rate-adaptive, regression quantile
    JEL: C14 C21 C24 C25
    Date: 2009–09–30
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-374&r=ecm
  3. By: Kim Christensen (Aarhus University and CREATES); Silja Kinnebrock (Oxford-Man Institute of Quantitative Finance, Oxford University); Mark Podolskij (ETH Zürich, Switzerland and CREATES, Aarhus University)
    Abstract: In this paper, we show how simple pre-averaging can be applied to measure the ex-post covariance of high-frequency financial time series under market microstructure noise and non-synchronous trading. A modulated realised covariance based on pre-averaged data is proposed and studied in this setting, and we provide complete large sample asymptotics for this new estimator, including feasible central limit theorems for standard methods such as covariance, regression, and correlation analysis. We discuss several versions of the modulated realised covariance, which can be designed to possess an optimal rate of convergence or to guarantee positive semi-definite covariance matrix estimates. We also derive a pre-averaged version of the Hayashi-Yoshida estimator that can be applied directly to the noisy and nonsynchronous data without any prior alignment of prices. An empirical study illustrates how high-frequency covariances, regression coefficients, and correlations change through time.
    Keywords: Central limit theorem, Diffusion models, High-frequency data, Market microstructure noise, Non-synchronous trading, Pre-averaging, Realised covariance
    JEL: C10 C22 C80
    Date: 2009–09–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-45&r=ecm
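    For intuition on Christensen, Kinnebrock and Podolskij's construction, a univariate sketch of pre-averaging: returns are smoothed over overlapping windows of length k_n ≈ θ√n with the weight g(x) = min(x, 1−x), which damps the microstructure noise, and a correction removes the residual noise bias. The constants ψ1 = 1 and ψ2 = 1/12 are the usual large-k_n limits for this g, taken from the standard pre-averaging literature rather than the paper itself; the paper's modulated realised covariance extends this to the multivariate, non-synchronous case.
```python
import numpy as np

def preaveraged_rv(prices, theta=1.0):
    """Noise-robust realized variance via pre-averaging (univariate sketch)."""
    dY = np.diff(np.log(prices))
    n = dY.size
    kn = max(2, int(np.ceil(theta * np.sqrt(n))))      # window length
    j = np.arange(1, kn)
    g = np.minimum(j / kn, 1 - j / kn)                 # weight function
    Ybar = np.correlate(dY, g, mode="valid")           # pre-averaged returns
    psi1, psi2 = 1.0, 1.0 / 12.0                       # limits of weight sums
    mrv = (n / (n - kn + 2)) * np.sum(Ybar ** 2) / (kn * psi2)
    noise_bias = psi1 * np.sum(dY ** 2) / (2 * n * theta ** 2 * psi2)
    return mrv - noise_bias                            # can be < 0 in tiny samples
```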
  4. By: Qian Chen (School of Public Finance & Public Policy, Central University of Finance & Economics, People's Republic of China); David E. Giles (Department of Economics, University of Victoria)
    Abstract: We examine the small-sample behaviour of the maximum likelihood estimator for the Poisson regression model with random covariates. Analytic expressions for the first-order bias and second-order mean squared error for this estimator are derived, and we undertake some numerical evaluations to illustrate these results for the single covariate case. The properties of the bias-adjusted maximum likelihood estimator, constructed by subtracting the estimated first-order bias from the original estimator, are investigated in a Monte Carlo experiment. Correcting the estimator for its first-order bias is found to be effective in the cases considered, and we recommend its use when the Poisson regression model is estimated by maximum likelihood with small samples.
    Keywords: Poisson regression model, bias, mean squared error, bias correction, random covariates
    JEL: C01 C13 C25
    Date: 2009–09–22
    URL: http://d.repec.org/n?u=RePEc:vic:vicewp:0907&r=ecm
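    Chen and Giles derive the first-order bias analytically (Cox-Snell-type expressions); as a generic stand-in for that correction, the same idea can be mimicked with a parametric bootstrap that subtracts an estimated bias from the MLE. The sample size, coefficients, and covariate design below are illustrative assumptions.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40                                             # deliberately small sample
X = sm.add_constant(rng.normal(size=n))            # random covariate
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

B, boot = 999, []
for _ in range(B):                                 # parametric bootstrap
    yb = rng.poisson(np.exp(X @ fit.params))
    boot.append(sm.GLM(yb, X, family=sm.families.Poisson()).fit().params)
bias_hat = np.mean(boot, axis=0) - fit.params      # estimated first-order bias
beta_bc = fit.params - bias_hat                    # bias-corrected estimator
```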
  5. By: Per Frederiksen (Nordea Markets); Frank S. Nielsen (Aarhus University and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: We propose a semiparametric local polynomial Whittle with noise estimator of the memory parameter in long memory time series perturbed by a noise term which may be serially correlated. The estimator approximates the log-spectrum of the short-memory component of the signal as well as that of the perturbation by two separate polynomials. Including these polynomials we obtain a reduction in the order of magnitude of the bias, but also inflate the asymptotic variance of the long memory estimator by a multiplicative constant. We show that the estimator is consistent for d in (0,1), asymptotically normal for d in (0,3/4), and if the spectral density is sufficiently smooth near frequency zero, the rate of convergence can become arbitrarily close to the parametric rate, sqrt(n). A Monte Carlo study reveals that the proposed estimator performs well in the presence of a serially correlated perturbation term. Furthermore, an empirical investigation of the 30 DJIA stocks shows that this estimator indicates stronger persistence in volatility than the standard local Whittle (with noise) estimator.
    Keywords: Bias reduction, local Whittle, long memory, perturbed fractional process, semiparametric estimation, stochastic volatility
    JEL: C22
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1218&r=ecm
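    The baseline that Frederiksen, Nielsen and Nielsen refine is Robinson's local Whittle objective, implemented below; the paper's polynomial terms for the short-memory component and the perturbation are omitted here, and the bandwidth rule m = n^0.65 is an arbitrary assumption.
```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Plain local Whittle estimate of the memory parameter d (Robinson, 1995)."""
    n = x.size
    m = m or int(n ** 0.65)                        # assumed bandwidth choice
    w = np.fft.rfft(x - x.mean())
    I = np.abs(w[1 : m + 1]) ** 2 / (2 * np.pi * n)     # periodogram at the
    lam = 2 * np.pi * np.arange(1, m + 1) / n           # first m frequencies
    def R(d):                                      # concentrated Whittle objective
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x
```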
  6. By: Jorg Breitung (University of Bonn); Gianluca Cubadda (Faculty of Economics, University of Rome "Tor Vergata")
    Abstract: This paper considers cointegration tests for dynamic systems where the number of variables is large relative to the sample size. Typical examples include tests for unit roots in panels, where the units are linked by complicated dynamic relationships. It is well known that conventional cointegration tests based on a parametric (vector autoregressive) representation of the system break down if the number of variables approaches the number of time periods. To sidestep this difficulty we propose nonparametric cointegration tests based on eigenvalue problems that are asymptotically free of nuisance parameters. Furthermore, a nonparametric panel unit root test is suggested. It turns out that if the number of variables is large, the nonparametric tests outperform their parametric (likelihood-ratio based) counterparts by a clear margin.
    Date: 2009–09–30
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:148&r=ecm
  7. By: Westerlund, Joakim (Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: This paper proposes two new unit root tests that are appropriate in the presence of an unknown number of structural breaks. One is based on a single time series and the other on a panel of multiple series. For the estimation of the number of breaks and their locations, a simple procedure based on outlier detection is proposed. The limiting distributions of the tests are derived and evaluated in small samples using simulation experiments. The implementation of the tests is illustrated using purchasing power parity as an example.
    Keywords: Unit root test; Structural break; Outlier detection; Common factor; Purchasing power parity
    JEL: C12 C15 C22 F31
    Date: 2009–09–29
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0384&r=ecm
  8. By: Matias D. Cattaneo (Department of Economics, University of Michigan); Richard K. Crump (Federal Reserve Bank of New York); Michael Jansson (Department of Economics, UC Berkeley and CREATES)
    Abstract: This paper presents a new data-driven bandwidth selector compatible with the small bandwidth asymptotics developed in Cattaneo, Crump, and Jansson (2009) for density-weighted average derivatives. The new bandwidth selector is of the plug-in variety, and is obtained based on a mean squared error expansion of the estimator of interest. An extensive Monte Carlo experiment shows a remarkable improvement in performance when the bandwidth-dependent robust inference procedure proposed by Cattaneo, Crump, and Jansson (2009) is coupled with this new data-driven bandwidth selector. The resulting robust data-driven confidence intervals compare favorably to the alternative procedures available in the literature.
    Keywords: Average derivatives, Bandwidth selection, Robust inference, Small bandwidth asymptotics
    JEL: C12 C14 C21 C24
    Date: 2009–09–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-46&r=ecm
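    The object being tuned is the Powell-Stock-Stoker density-weighted average derivative, θ = −2E[f′(X)Y] for a scalar regressor. A sketch of that estimator with the bandwidth taken as given; the paper's actual contribution, the MSE-expansion plug-in choice of h compatible with small bandwidth asymptotics, is not reproduced here.
```python
import numpy as np

def dw_avg_derivative(x, y, h):
    """Powell-Stock-Stoker estimator: theta = -2 E[f'(x) y], scalar regressor,
    Gaussian kernel, leave-one-out density-derivative estimate."""
    n = x.size
    u = (x[:, None] - x[None, :]) / h
    Kp = -u * np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # phi'(u)
    np.fill_diagonal(Kp, 0.0)                              # leave one out
    f_prime = Kp.sum(axis=1) / ((n - 1) * h ** 2)
    return -2.0 * np.mean(f_prime * y)
```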
  9. By: Chuan Goh
    Abstract: This paper proposes a test for the correct specification of a dynamic time-series model that is taken to be stationary about a deterministic linear trend function with no more than a finite number of discontinuities in the vector of trend coefficients. The test avoids the consideration of explicit alternatives to the null of trend stability. The proposal also does not involve the detailed modelling of the data-generating process of the stochastic component, which is simply assumed to satisfy a certain strong invariance principle for stationary causal processes taking a general form. As such, the resulting inference procedure is effectively an omnibus specification test for segmented linear trend stationarity. The test is of Wald-type, and is based on an asymptotically linear estimator of the vector of total-variation norms of the trend parameters whose influence function coincides with the efficient influence function. Simulations illustrate the utility of this procedure to detect discrete breaks or continuous variation in the trend parameter as well as alternatives where the trend coefficients change randomly each period. This paper also includes an application examining the adequacy of a linear trend-stationary specification with infrequent trend breaks for the historical evolution of U.S. real output.
    Keywords: Structural change, trend-stationary processes, nonparametric regression, efficient influence function
    JEL: C12 C14 C22
    Date: 2009–09–30
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-373&r=ecm
  10. By: Caner, Mehmet; Sandler Morrill, Melinda
    Abstract: Currently, the commonly employed instrumental variables strategy relies on the knife-edge assumption of perfect exogeneity for valid inference. To make reliable inferences on the structural parameters under violations of exogeneity, one must know the true correlation between the structural error and the instruments. The main innovation in this paper is to identify an appropriate test in this context: a joint null hypothesis on the structural parameters and the correlation between the instruments and the structural error term. We introduce a new endogeneity-accounting test that combines inference on the structural parameters with a correction for the bias associated with non-exogeneity of the instrument. Significant contributions have been made in the recent literature on inference under violations of exogeneity by assuming some degree of non-exogeneity. A key advantage of our approach over that of the previous literature is that we do not need to make any assumptions about the degree of violation of exogeneity, either as possible values or as prior distributions. In particular, our method is not a form of sensitivity analysis. Since our test statistic is continuous and monotonic in the correlation, one can conduct inference for the structural parameters by a simple grid search over correlation values, which allows accurate inferences on the structural parameters. One can also build joint confidence intervals for the structural parameters and the correlation parameter by inverting the test statistic, using the null values of these parameters in the inversion. We also propose a new way of testing exclusion restrictions, even in the just-identified case.
    Keywords: identification; imperfect instruments
    JEL: C13 C12 C01
    Date: 2009–10–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:17689&r=ecm
  11. By: Eberhardt, Markus; Bond, Stephen
    Abstract: This paper uses Monte Carlo simulations to investigate the impact of nonstationarity, parameter heterogeneity and cross-section dependence on estimation and inference in macro panel data. We compare the performance of standard panel estimators with that of our own two-step method (the AMG) and the Pesaran (2006) Common Correlated Effects (CCE) estimators in time-series panels with arguably similar characteristics to those encountered in empirical applications using cross-country macro data. The empirical model adopted leads to an identification problem in standard estimation approaches in the case where the same unobserved common factors drive the evolution of both dependent and independent variables. We replicate the design of two recent Monte Carlo studies on the topic (Coakley et al, 2006; Kapetanios et al, 2009), with results confirming that the Pesaran (2006) CCE approach as well as our own AMG estimator solve this identification problem by accounting for the unobserved common factors in the regression equation. Our investigation, however, also indicates that simple augmentation with year dummies can remove most of the bias reported for standard pooled estimators --- a finding in stark contrast to the results from earlier empirical work we carried out using cross-country panel data for agriculture and manufacturing (Eberhardt & Teal, 2008; Eberhardt & Teal, 2009). We therefore introduce a number of additional Monte Carlo setups which lead to greater discrepancy in the results between standard (micro-)panel estimators and the novel approaches incorporating cross-section dependence. We further highlight the performance of the pooled OLS estimator with variables in first differences and speculate about the reasons for its favourable results.
    Keywords: Nonstationary Panel Econometrics; Common Factor Models; Empirical Analysis of Economic Development
    JEL: O11 C33
    Date: 2009–10–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:17692&r=ecm
  12. By: Dennis Kristensen (Columbia University and CREATES)
    Abstract: Semiparametric models are characterized by a finite-dimensional and an infinite-dimensional (functional) component. As such they allow for added flexibility over fully parametric models, while at the same time estimators of the parametric components can be developed that exhibit standard parametric convergence rates. These two features have made semiparametric models and estimators increasingly popular in applied economics. We give a partial overview of the literature on semiparametric modelling and estimation with particular emphasis on semiparametric regression models. The main focus is on developing two-step semiparametric estimators and deriving their asymptotic properties. We do, however, also briefly discuss sieve-based estimators and semiparametric efficiency.
    Keywords: efficiency, kernel estimation, regression, semiparametric, sieve, two-step estimation
    JEL: C13 C14 C51
    Date: 2009–09–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-44&r=ecm
  13. By: Gary Koop (Department of Economics, University of Strathclyde); Dimitris Korobilis (Department of Economics, University of Strathclyde)
    Abstract: Block factor methods offer an attractive approach to forecasting with many predictors. They extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
    Keywords: Bayesian, state space model, factor model, dynamic model averaging
    JEL: E31 E37 C11 C53
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:0917&r=ecm
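    The weight-updating mechanics behind dynamic model averaging can be stated compactly: a forgetting factor α flattens last period's model probabilities, and each model's one-step-ahead predictive likelihood then re-weights them (the Raftery-style scheme this literature builds on). The Gaussian predictive densities and α = 0.99 below are assumptions; Koop and Korobilis's implementation with factor-augmented models is richer.
```python
import numpy as np
from scipy.stats import norm

def dma_weights(y, means, sds, alpha=0.99):
    """Dynamic model averaging weights from one-step-ahead predictive
    means/sds of K models; y is (T,), means and sds are (T, K)."""
    T, K = means.shape
    w = np.full(K, 1.0 / K)
    path = np.empty((T, K))
    for t in range(T):
        w = w ** alpha                              # forgetting-factor prediction
        w /= w.sum()
        w *= norm.pdf(y[t], loc=means[t], scale=sds[t])  # likelihood update
        w /= w.sum()
        path[t] = w
    return path
```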
  14. By: David E. Giles (Department of Economics, University of Victoria); Hui Feng (Department of Economics, Business & Mathematics, King's College, University of Western Ontario)
    Abstract: We consider the quality of the maximum likelihood estimators for the parameters of the two-parameter gamma distribution in small samples. We show that the methodology suggested by Cox and Snell (1968) can be used very easily to bias-adjust these estimators. A simulation study shows that this analytic correction is frequently much more effective than bias-adjusting using the bootstrap – generally by an order of magnitude in percentage terms. The two bias-correction methods considered result in increased variability in small samples, and the original estimators and their bias-corrected counterparts all have similar percentage mean squared errors.
    Keywords: Maximum likelihood estimator; bias reduction; gamma distribution
    JEL: C13 C46
    Date: 2009–09–23
    URL: http://d.repec.org/n?u=RePEc:vic:vicewp:0908&r=ecm
  15. By: Eiji Kurozumi; Shinya Tanaka
    Abstract: This paper proposes a new stationarity test based on the KPSS test with less size distortion. We extend the boundary rule proposed by Sul, Phillips and Choi (2005) to the autoregressive spectral density estimator and parametrically estimate the long-run variance. We also derive the finite sample bias of the numerator of the test statistic up to the 1/T order and propose a correction to the bias term in the numerator. Finite sample simulations show that the correction term effectively reduces the bias in the numerator and that the finite sample size of our test is close to the nominal one as long as the long-run parameter in the model satisfies the boundary condition.
    Keywords: Stationarity test, size distortion, boundary rule, bias correction
    JEL: C12 C22
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd09-085&r=ecm
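    For reference, the KPSS ingredients with an autoregressive long-run variance: partial sums of demeaned data in the numerator, and an AR-based spectral estimate at frequency zero in the denominator. The AR(1) choice and the cap on the coefficient sum at 1 − 1/√T are assumptions standing in for the exact boundary rule and bias correction in Kurozumi and Tanaka's paper.
```python
import numpy as np
import statsmodels.api as sm

def kpss_ar(y, p=1):
    """KPSS level-stationarity statistic with an AR(p) long-run variance."""
    e = y - y.mean()
    T = e.size
    S = np.cumsum(e)                               # partial sums (numerator)
    ar = sm.tsa.AutoReg(e, lags=p, trend="n").fit()
    phi = min(ar.params.sum(), 1 - 1 / np.sqrt(T)) # assumed boundary rule
    lrv = np.mean(ar.resid ** 2) / (1 - phi) ** 2  # AR spectral estimate at 0
    return S @ S / (T ** 2 * lrv)
```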
  16. By: Dinghai Xu (Department of Economics, University of Waterloo)
    Abstract: This paper provides a selected review of the recent developments and applications of mixtures of normal (MN) distribution models in empirical finance. One attractive property of the MN model is that it is flexible enough to accommodate various shapes of continuous distributions and is able to capture the leptokurtic, skewed and multimodal characteristics of financial time series data. In addition, MN-based analysis fits well with the related regime-switching literature. The survey is conducted under two broad themes: (1) minimum-distance estimation methods, and (2) financial modeling and its applications.
    Keywords: Mixtures of Normal, Maximum Likelihood, Moment Generating Function, Characteristic Function, Switching Regression Model, (G)ARCH Model, Stochastic Volatility Model, Autoregressive Conditional Duration Model, Stochastic Duration Model, Value at Risk
    JEL: C01 C13
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:wat:wpaper:0904&r=ecm
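    The workhorse behind many of the applications Xu surveys is the EM algorithm for a finite mixture of normals; a two-component sketch (crude illustrative starting values, fixed iteration count, no convergence check):
```python
import numpy as np
from scipy.stats import norm

def em_mixture2(x, iters=200):
    """EM for a two-component mixture of normals; returns (weight, means, sds)."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude starting values
    sd = np.array([x.std(), x.std()])
    p = 0.5
    for _ in range(iters):
        # E-step: posterior probability each observation came from component 1
        d1 = p * norm.pdf(x, mu[0], sd[0])
        d2 = (1 - p) * norm.pdf(x, mu[1], sd[1])
        r = d1 / (d1 + d2)
        # M-step: responsibility-weighted weight, means and variances
        p = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                      np.average((x - mu[1]) ** 2, weights=1 - r)])
    return p, mu, sd
```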
  17. By: Nicolas Debarsy (CERPE - Centre de Recherches en Economie Régionale et Politique Economique - Université de Namur); Cem Ertur (LEO - Laboratoire d'économie d'Orleans - CNRS : UMR6221 - Université d'Orléans)
    Abstract: This paper derives several Lagrange Multiplier statistics and the corresponding likelihood ratio statistics to test for spatial autocorrelation in a fixed effects panel data model. These tests allow discriminating between the two main types of spatial autocorrelation which are relevant in empirical applications, namely endogenous spatial lag versus spatially autocorrelated errors. In this paper, five different statistics are suggested. The first one, the joint test, detects the presence of spatial autocorrelation whatever its type. Hence, it indicates whether specific econometric estimation methods should be implemented to account for the spatial dimension. In case they need to be implemented, the other four tests support the choice between the different specifications, i.e. endogenous spatial lag, spatially autocorrelated errors or both. The first two are simple hypothesis tests as they detect one kind of spatial autocorrelation assuming the other one is absent. The last two take into account the presence of one type of spatial autocorrelation when testing for the presence of the other one. We use the methodology developed in Lee and Yu (2008) to set up and estimate the general likelihood function. Monte Carlo experiments show the good performance of our tests. Finally, as an illustration, they are applied to the Feldstein-Horioka puzzle. They indicate a misspecification of the investment-saving regression due to the omission of spatial autocorrelation. The traditional saving-retention coefficient is shown to be upward biased. In contrast our results favor capital mobility.
    Keywords: Testing; Spatial autocorrelation; Fixed effects; Panel data model
    Date: 2009–07
    URL: http://d.repec.org/n?u=RePEc:hal:journl:halshs-00414133_v1&r=ecm
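    A simpler cross-sectional relative of these tests is Moran's I on regression residuals, which only signals that some spatial autocorrelation is present without discriminating between lag and error forms — precisely the gap Debarsy and Ertur's five statistics fill for the fixed-effects panel case.
```python
import numpy as np

def morans_I(resid, W):
    """Moran's I for residuals, with spatial weight matrix W (zero diagonal)."""
    n = resid.size
    return (n / W.sum()) * (resid @ W @ resid) / (resid @ resid)
```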
  18. By: Ole Eiler Barndorff-Nielsen (Thiele Centre, Department of Mathematical Sciences & CREATES, Aarhus University); Robert Stelzer (TUM Institute for Advanced Study & Zentrum Mathematik, Technische Universität München)
    Abstract: Using positive semidefinite supOU (superposition of Ornstein-Uhlenbeck type) processes to describe the volatility, we introduce a multivariate stochastic volatility model for financial data which is capable of modelling long range dependence effects. The finiteness of moments and the second order structure of the volatility, the log returns, as well as their “squares” are discussed in detail. Moreover, we give several examples in which long memory effects occur and study how the model as well as the simple Ornstein-Uhlenbeck type stochastic volatility model behave under linear transformations. In particular, the models are shown to be preserved under invertible linear transformations. Finally, we discuss how (sup)OU stochastic volatility models can be combined with a factor modelling approach.
    Keywords: factor modelling, Lévy bases, linear transformations, long memory, Ornstein-Uhlenbeck type process, second order moment structure, stochastic volatility
    JEL: C1 C5 G0 G1
    Date: 2009–09–17
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-42&r=ecm
  19. By: Mark Podolskij (ETH Zürich and CREATES); Mathias Vetter (Ruhr-University of Bochum)
    Abstract: This paper presents a short survey on limit theorems for certain functionals of semimartingales, which are observed at high frequency. Our aim is to explain the main ideas of the theory to a broader audience. We introduce the concept of stable convergence, which is crucial for our purpose. We show some laws of large numbers (for the continuous and the discontinuous case) that are the most interesting from a practical point of view, and demonstrate the associated stable central limit theorems. Moreover, we provide a simple sketch of the proofs and give some examples.
    Keywords: central limit theorem, high frequency observations, semimartingale, stable convergence
    JEL: C10 C13 C14
    Date: 2009–10–05
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-47&r=ecm
  20. By: Tilak Abeysinghe (Department of Economics, National University of Singapore); Jiaying Gu (Department of Economics, National University of Singapore)
    Abstract: The instrumental variable (IV) estimator in a cross-sectional or panel regression model is often taken to provide valid causal inference from contemporaneous correlations. In this exercise we point out that the IV estimator, like the OLS estimator, cannot be used effectively for causal inference without the aid of non-sample information. We present three possible cases (lack of identification, accounting identities, and temporal aggregation) where IV estimates could lead to misleading causal inference. In other words, a non-zero IV estimate does not necessarily indicate a causal effect, nor does it establish the causal direction. In this light, we re-examine the relationship between Chinese provincial birth rates and economic growth. This exercise highlights the potential pitfalls of using too much temporal averaging to compile the data for cross-sectional and panel regressions, and the importance of estimating both regressions (x on y and y on x) to avoid misleading causal inferences. The GMM-SYS results from dynamic panel regressions based on five-year averages show a strong negative relationship running both ways, from births to growth and growth to births. This outcome, however, changes to a more meaningful one-way relationship from births to growth if the panel analysis is carried out with the annual data. Although falling birth rates in China have enhanced the country’s growth performance, it is difficult to attribute this effect solely to the one-child policy implemented after 1978.
    Keywords: IV estimator and causality inference, identification, accounting identities, temporal aggregation, spurious causality, Chinese provincial growth and fertility relationship.
    JEL: C23 J13 O53
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:sca:scaewp:0902&r=ecm
  21. By: Flores, Carlos A. (University of Miami); Mitnik, Oscar A. (University of Miami)
    Abstract: This paper assesses the effectiveness of unconfoundedness-based estimators of mean effects for multiple or multivalued treatments in eliminating biases arising from nonrandom treatment assignment. We evaluate these multiple treatment estimators by simultaneously equalizing average outcomes among several control groups from a randomized experiment. We study linear regression estimators as well as partial mean and weighting estimators based on the generalized propensity score (GPS). We also study the use of the GPS in assessing the comparability of individuals among the different treatment groups, and propose a strategy to determine the overlap or common support region that is less stringent than those previously used in the literature. Our results show that in the multiple treatment setting there may be treatment groups for which it is extremely difficult to find valid comparison groups, and that the GPS plays a significant role in identifying those groups. In such situations, the estimators we consider perform poorly. However, their performance improves considerably once attention is restricted to those treatment groups with adequate overlap quality, with difference-in-difference estimators performing the best. Our results suggest that unconfoundedness-based estimators are a valuable econometric tool for evaluating multiple treatments, as long as the overlap quality is satisfactory.
    Keywords: multiple treatments, nonexperimental estimators, generalized propensity score
    JEL: C13 C14 C21
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4451&r=ecm
  22. By: Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio
    Abstract: This paper provides a theoretical functional representation of the density function related to the Dickey-Fuller random variable. The approach is extended to cover the multivariate case in two special frameworks: the independence and the perfect correlation of the series.
    Keywords: Dickey-Fuller distribution, unit root
    JEL: C12 C16 C22
    Date: 2009–09–28
    URL: http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp09055&r=ecm
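    The distribution Cerqueti, Costantini and Lupi characterize is easy to tabulate by simulation, which is a useful cross-check on any analytic representation: under the null, the t-ratio from regressing Δy on y_{t-1} (no constant) converges to the Dickey-Fuller distribution.
```python
import numpy as np

rng = np.random.default_rng(42)
T, reps = 500, 20_000
stats = np.empty(reps)
for r in range(reps):
    y = np.cumsum(rng.standard_normal(T))          # random walk under the null
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    s2 = np.mean((dy - rho * ylag) ** 2)
    stats[r] = rho / np.sqrt(s2 / (ylag @ ylag))   # Dickey-Fuller t-ratio
print(np.percentile(stats, [1, 5, 10]))            # approx -2.58, -1.95, -1.62
```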
  23. By: Lasse Bork (Finance Research Group, Aarhus School of Business, University of Aarhus and CREATES); Hans Dewachter (CES, University of Leuven, RSM Rotterdam and CESIFO.); Romain Houssa (CRED and CEREFIM, University of Namur, CES, University of Leuven)
    Abstract: This paper presents a dynamic factor model in which the extracted factors and shocks are given a clear economic interpretation. The economic interpretation of the factors is obtained by means of a set of over-identifying loading restrictions, while the structural shocks are estimated following standard practices in the SVAR literature. Estimators based on the EM algorithm are developed. We apply this framework to a large panel of US monthly macroeconomic series. In particular, we identify nine macroeconomic factors and discuss the economic impact of monetary policy shocks. The results are theoretically plausible and in line with other findings in the literature.
    Keywords: Monetary policy, Business Cycles, Factor Models, EM Algorithm
    JEL: E3 E43 C51 E52 C33
    Date: 2009–09–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-43&r=ecm
  24. By: Stefan Zeugner; Martin Feldkircher
    Abstract: Default prior choices fixing Zellner's g are predominant in the Bayesian Model Averaging literature, but tend to concentrate posterior mass on a tiny set of models. The paper demonstrates this supermodel effect and proposes to address it by a hyper-g prior, whose data-dependent shrinkage adapts posterior model distributions to data quality. Analytically, existing work on the hyper-g prior is complemented by posterior expressions essential to fully Bayesian analysis and to sound numerical implementation. A simulation experiment illustrates the implications for posterior inference. Furthermore, an application to determinants of economic growth identifies several covariates whose robustness differs considerably from previous results.
    Date: 2009–09–18
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:09/202&r=ecm
  25. By: Bjerk, David (Claremont McKenna College)
    Abstract: Researchers are often interested in estimating the causal effect of some treatment on individual criminality. For example, two recent relatively prominent papers have attempted to estimate the respective direct effects of marriage and gang participation on individual criminal activity. One difficulty to overcome is that the treatment is often largely the product of individual choice. This issue can cloud causal interpretations of correlations between the treatment and criminality, since those choosing the treatment (e.g. marriage or gang membership) may have differed in their criminality from those who did not, even in the absence of the treatment. To overcome this potential for selection bias, researchers have often used various forms of individual fixed-effects estimators. While such fixed-effects estimators may be an improvement on basic cross-sectional methods, they are still quite limited when it comes to uncovering a true causal effect of the treatment on individual criminality because they may fail to account for the possibility of dynamic selection. Using data from the NLSY97, I show that such dynamic selection can potentially be quite large when it comes to criminality, and may even be exacerbated when using more advanced fixed-effects methods such as Inverse Probability of Treatment Weighting (IPTW). Therefore substantial care must be taken when it comes to interpreting the results arising from fixed-effects methods.
    Keywords: fixed-effects, crime, marriage, gangs, smoking
    JEL: C12 K42
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4387&r=ecm
  26. By: Österholm, Pär (National Institute of Economic Research)
    Abstract: This paper investigates whether forecasts of the Swedish unemployment rate can be improved by using business and household survey data. We conduct an out-of-sample forecast exercise in which the performance of a Bayesian VAR model with only macroeconomic data is compared to that when the model also includes survey data. Results show that the forecasting performance at short horizons can be improved. The improvement is largest when forward-looking data from the manufacturing industry is employed.
    Keywords: Bayesian VAR; Labour market
    JEL: E17 E24 E27
    Date: 2009–06–01
    URL: http://d.repec.org/n?u=RePEc:hhs:nierwp:0112&r=ecm
  27. By: Rossana Merola (Université Catholique de Louvain-la-Neuve)
    Abstract: Episodes of crises that have recently plagued many emerging market economies have led to a widespread questioning of the two traditional generations of models of currency crises. Distressed banking systems and adverse credit-market conditions have been pointed to as sources of serious macroeconomic contractions, so introducing these imperfections into standard economic models can help to explain the more recent crises. This paper introduces financial frictions à la Bernanke, Gertler and Gilchrist in a two-sector small open economy, suited to the analysis of an emerging country. The model is estimated on simulated data applying both Bayesian techniques and the maximum likelihood method, and the results under the two estimation procedures are compared. First, I analyze the influence of the prior on the estimation outcomes. The results seem to confirm that one of the main advantages of the Bayesian approach is its ability to provide a framework for evaluating fundamentally mis-specified models. Second, I test the sensitivity of the estimation outcomes to the sample size, showing how, for large samples, results under Bayesian estimation converge asymptotically to those obtained applying maximum likelihood. A further extension would be to perform the estimation on historical data for an emerging economy that has recently experienced a financial crisis.
    Keywords: DSGE models, Bayesian estimation, financial accelerator
    JEL: E30 E44 F34 F41
    Date: 2009–10–01
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:149&r=ecm
  28. By: Dastoor, Naorayex (University of Alberta, Department of Economics)
    Abstract: The distinction between a nominal framework for the three classical statistics and a perceived framework for each classical statistic provides more ways to interpret these statistics, and intuitively explains as well as more easily shows some well-known results. In particular, each classical statistic can be viewed in terms of a length in each of four spaces and, since the classical procedures per se are equivalent in a perceived framework, two statistics are identical if their perceived frameworks are identical. This helps to integrate the normally separately treated issues of a reformulation of a null hypothesis and of locally equivalent alternatives. For example, a Wald statistic is not invariant if a reformulation changes its perceived framework, and an appropriate score statistic is invariant as its perceived framework is unaffected by considering a locally equivalent alternative. [During the thirty-four months this paper was under consideration at The Econometrics Journal, the Editor-in-charge (Professor Stephane Gregoir) did not reply to three (of the author's four) requests about the status of the submission, and provided neither a referee's report nor a first decision. Also, when asked to intervene by the author, the new Managing Editor (Professor Richard J Smith) offered the author the possibility of submitting the paper (as a new submission) to the new editorial regime, at which point, the author withdrew the paper.]
    Keywords: classical statistic; likelihood ratio statistic; nominal framework; perceived framework; score statistic; Wald statistic
    JEL: C12
    Date: 2009–08–01
    URL: http://d.repec.org/n?u=RePEc:ris:albaec:2009_025&r=ecm
  29. By: J. P. Bouchaud; M. Potters
    Abstract: We discuss the applications of Random Matrix Theory in the context of financial markets and econometric models, a topic to which a considerable number of papers have been devoted in the last decade. This mini-review is intended to guide the reader through various theoretical results (the Marcenko-Pastur spectrum and its various generalisations, random SVD, free matrices, largest eigenvalue statistics, etc.) as well as some concrete applications to portfolio optimisation and out-of-sample risk estimation.
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:0910.1205&r=ecm
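    The single most used result in this area, the Marcenko-Pastur law, is a two-line experiment: eigenvalues of a pure-noise correlation matrix should fill the interval [(1−√q)², (1+√q)²] with q = N/T, so sample eigenvalues outside it are candidates for genuine structure.
```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 1000, 250                                   # q = N / T = 0.25
R = rng.standard_normal((T, N))                    # pure-noise "returns"
eig = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))

q = N / T
print(eig.min(), eig.max())                        # empirical bulk edges
print((1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2)  # MP edges: 0.25, 2.25
```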
  30. By: Cheti Nicoletti (ISER, University of Essex Colchester); Franco Peracchi (Faculty of Economics, University of Rome "Tor Vergata"); Francesca Foliano (Faculty of Economics, University of Rome "Tor Vergata")
    Abstract: Reliable measures of poverty are an essential statistical tool for public policies aimed at reducing poverty. In this paper we consider the reliability of income poverty measures based on survey data which are typically plagued by missing data and measurement error. Neglecting these problems can bias the estimated poverty rates. We show how to derive upper and lower bounds for the population poverty rate using the sample evidence, an upper bound on the probability of misclassifying people into poor and non-poor, and instrumental or monotone instrumental variable assumptions. By using the European Community Household Panel, we compute bounds for the poverty rate in ten European countries and study the sensitivity of poverty comparisons across countries to missing data and measurement error problems. Supplemental materials for this article may be downloaded from the JBES website.
    Keywords: Misclassification error; Survey non-response; Partial identification.
    Date: 2009–09–30
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:145&r=ecm
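    The no-assumption starting point for bounds of this kind is the worst-case (Manski-type) interval under non-response: every missing observation could be poor, or none of them. Nicoletti, Peracchi and Foliano tighten this with a misclassification bound and (monotone) instrumental variable assumptions, which the sketch omits.
```python
import numpy as np

def poverty_rate_bounds(income, threshold):
    """Worst-case bounds on the poverty rate when income is missing (NaN)."""
    observed = ~np.isnan(income)
    p_resp = observed.mean()
    poor_given_resp = (income[observed] < threshold).mean()
    lower = poor_given_resp * p_resp               # no non-respondent is poor
    upper = lower + (1 - p_resp)                   # every non-respondent is poor
    return lower, upper
```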
  31. By: Selim Elekdag; Prakash Kannan
    Abstract: This paper develops a simple procedure for incorporating market-based information into the construction of fan charts. Using the International Monetary Fund (IMF)'s global growth forecast as a working example, the paper goes through the theoretical and practical considerations of this new approach. The resulting spreadsheet, which implements the approach, is available upon request from the authors.
    Keywords: Data analysis, Economic forecasting, Economic growth, Economic models, Forecasting models, Markets, World Economic Outlook
    Date: 2009–08–24
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:09/178&r=ecm
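    Fan charts of this kind are conventionally built from a two-piece normal around the modal forecast; whether Elekdag and Kannan use exactly this parameterisation is an assumption here. Its quantiles have a closed form, so the bands at each horizon come straight from the mode and the two standard deviations (which the market-based information would pin down).
```python
import numpy as np
from scipy.stats import norm

def two_piece_quantile(p, mode, s1, s2):
    """Quantiles of a two-piece normal: std s1 below the mode, s2 above."""
    p = np.asarray(p, dtype=float)
    cut = s1 / (s1 + s2)                           # mass to the left of the mode
    lo = mode + s1 * norm.ppf(np.clip(p * (s1 + s2) / (2 * s1), 0.0, 1.0))
    hi = mode + s2 * norm.ppf(np.clip(1 - (1 - p) * (s1 + s2) / (2 * s2), 0.0, 1.0))
    return np.where(p <= cut, lo, hi)

# one horizon: 3% modal growth with downside risk (s1 > s2)
bands = two_piece_quantile([0.05, 0.25, 0.50, 0.75, 0.95], 3.0, 1.2, 0.8)
```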
  32. By: Zied Ftiti (GATE-CNRS/ENS LSH, University of Lyon, France)
    Abstract: This paper proposes a new methodology to check the economic performance of a monetary policy, and in particular the inflation targeting policy (ITP). The main idea of this work is to consider the ITP as economically efficient when it generates a stable monetary environment. The latter is considered stable when a long-run equilibrium exists to which the paths of the economic variables (inflation rate, interest rate and GDP growth) converge. The convergence of the variables’ paths implies that these variables are more predictable, and hence implies a lower degree of uncertainty in the economic environment. To measure the degree of convergence between economic variables, we propose a dynamic time-varying measure, defined in the frequency domain, named cohesion. This variable is estimated from the evolutionary co-spectral theory as defined by Priestley and Tong (1973) and Priestley (1988-1996). We apply this theory to the measure of cohesion presented by Croux et al (2001) to obtain a dynamic time-varying measure. In the last step of the study, we apply the Bai and Perron test (1998-2003b) to determine changes in the cohesion path. The results show that the implementation of the ITP generates a high degree of convergence between the economic series, which implies less uncertainty in the monetary environment. We conclude that inflation targeting generates a stable monetary environment. This result allows us to conclude that the ITP is relevant in the case of industrialized countries.
    Keywords: segregation, Inflation Targeting, Co-Spectral Analysis, Cohesion, Stability Environment, Economic Performance and Structural Change
    JEL: C16 E52 E63
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:gat:wpaper:0918&r=ecm
  33. By: Comerford, David (University College Dublin); Delaney, Liam (University College Dublin); Harmon, Colm P. (University College Dublin)
    Abstract: This paper tests for a number of survey effects in the elicitation of expenditure items. In particular we examine the extent to which individuals use features of the expenditure question to construct their answers. We test whether respondents interpret question wording as researchers intend and examine the extent to which prompts, clarifications and seemingly arbitrary features of survey design influence expenditure reports. We find that over one quarter of respondents have difficulty distinguishing between “you” and “your household” when making expenditure reports; that respondents report higher pro-rata expenditure when asked to give responses on a weekly as opposed to monthly or annual time scale; that respondents give higher estimates when using a scale with a higher mid-point; and that respondents report higher aggregated expenditure when categories are presented in a disaggregated form. In summary, expenditure reports are constructed using convenient rules of thumb and available information, which will depend on the characteristics of the respondent, the expenditure domain and features of the survey question. It is crucial to further account for these features in ongoing surveys.
    Keywords: expenditure surveys, survey design, data experiments
    JEL: D12 C81 C93
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4389&r=ecm
  34. By: Hagen, Tobias; Mohl, Philipp
    Abstract: More than one third of the European Union's total budget is spent on so-called Cohesion Policy via the structural funds. Its main purpose is to promote the development of the EU and to support convergence between the levels of development of the various European regions. Investigating the impact of European Cohesion Policy on economic growth and convergence is a broad topic in applied econometric research. Nevertheless, the empirical evidence has provided mixed, if not contradictory, results. Against this background, the aim of this chapter is to provide a fundamental review of this topic. Taking fundamental methodological issues into account, we review the existing econometric evaluation studies, draw several conclusions and provide some remarks for future research.
    Keywords: Economic integration, regional growth, EU Cohesion Policy, panel data, spatial econometrics
    JEL: R10 R11 C21 C23
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:09052&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.