nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒11‒28
fourteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Skew-rotsymmetric Distributions on Unit Spheres and Related Efficient Inferential Procedures By Christophe Ley; Thomas Verdebout
  2. Specification Tests with Weak and Invalid Instruments By Firmin Doko Tchatoka
  3. "Prediction in Heteroscedastic Nested Error Regression Models with Random Dispersions" By Tatsuya Kubokawa; Shonosuke Sugasawa; Malay Ghosh; Sanjay Chaudhuri
  4. Estimating (Markov-Switching) VAR Models without Gibbs Sampling: A Sequential Monte Carlo Approach By Bognanni, Mark; Herbst, Edward
  5. An Efficient Parallel Simulation Method for Posterior Inference on Paths of Markov Processes By Matthias Held; Marcel Omachel
  6. Two-Sample Tests for High Dimensional Means with Thresholding and Data Transformation By Chen, Song Xi; Li, Jun; Zhong, Pingshou
  7. Measuring the Sensitivity of Parameter Estimates to Sample Statistics By Matthew Gentzkow; Jesse M. Shapiro
  8. The Modified R: A Robust Measure of Association for Time Series By Rehman, Atiq-ur-; Malik, Muhammad Irfan
  9. Modelling Heaped Duration Data: An Application to Neonatal Mortality By Arulampalam, Wiji; Corradi, Valentina; Gutknecht, Daniel
  10. Dynamic Factor Analysis for Short Panels: Estimating Performance Trajectories for Water Utilities By Zirogiannis, Nikolaos; Tripodis, Yorghos
  11. Growth determinants across time and space: A semiparametric panel data approach By Stolzenburg, Ulrich
  12. Determination of long-run and short-run dynamics in EC-VARMA models via canonical correlations By George Athanasopoulos; D.S. Poskitt; Farshid Vahid; Wenying Yao
  13. Give me strong moments and time: Combining GMM and SMM to estimate long-run risk asset pricing models By Grammig, Joachim; Schaub, Eva-Maria
  14. On the degree of homogeneity in dynamic heterogeneous panel data models By Offermanns, Christian J.

  1. By: Christophe Ley; Thomas Verdebout
    Keywords: directional statistics; flexible modelling; generating mechanism; rotationally symmetric distribution; tests for rotational symmetry
    Date: 2014–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/177101&r=ecm
  2. By: Firmin Doko Tchatoka (School of Economics, University of Adelaide)
    Abstract: We focus on the classical linear simultaneous equations model and study the sensitivity to instrument endogeneity of six alternative versions of the Durbin-Wu-Hausman (DWH) test of exogeneity. To address this issue, we consider two setups for instrument endogeneity: (i) fixed instrument endogeneity, i.e., the parameter that controls instrument invalidity is fixed (does not depend on the sample size); (ii) local-to-zero instrument endogeneity, i.e., this parameter goes to zero at rate n^(-1/2) as the sample size n increases [similar to Staiger and Stock (1997)]. In the first setup, we show that all tests have size converging to 1 as the sample size increases, no matter how weak the instruments are. In the second setup, we provide a characterization of the null limiting distributions of the statistics, which clearly shows that all statistics converge asymptotically to non-degenerate noncentral chi-square distributions under the null hypothesis. This means that all tests asymptotically have size greater than their nominal level if the usual chi-square critical values are used, despite the fact that instrument endogeneity vanishes as the sample size increases. We propose a size correction based on bootstrap techniques. Our analysis of the proposed bootstrap tests provides some new insights. More precisely, we show that even for moderate instrument endogeneity, the bootstrap provides a first-order approximation of the asymptotic size of the DWH tests, no matter how weak the instruments are. We present a Monte Carlo experiment that confirms our theoretical findings.
    Keywords: DWH tests; weak instruments; exclusion restrictions; instrument endogeneity; bootstrap.
    JEL: C3 C12 C15 C52
    Date: 2014–06
    URL: http://d.repec.org/n?u=RePEc:adl:wpaper:2014-05&r=ecm
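    For intuition, here is a minimal Python sketch of a DWH-type exogeneity statistic whose critical value is obtained by a parametric bootstrap imposing the null of exogeneity. The design (one regressor, one instrument, all parameter values) is an illustrative assumption, not the paper's six test versions or its specific bootstrap scheme.

```python
# A minimal sketch (not the paper's procedure): a Durbin-Wu-Hausman-type
# exogeneity test with a parametric bootstrap critical value.
# All simulation settings (rho, pi, n) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dwh_stat(y, x, z):
    """Contrast between OLS and 2SLS slope estimates (no intercept)."""
    b_ols = (x @ y) / (x @ x)
    xhat = z * ((z @ x) / (z @ z))           # first-stage fitted values
    b_iv = (xhat @ y) / (xhat @ x)
    u = y - x * b_iv                         # 2SLS residuals
    s2 = u @ u / len(y)
    v_iv = s2 / (xhat @ xhat)
    v_ols = s2 / (x @ x)
    return (b_iv - b_ols) ** 2 / max(v_iv - v_ols, 1e-12)

def simulate(n, rho, pi, rng):
    z = rng.standard_normal(n)
    v = rng.standard_normal(n)
    u = rho * v + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
    x = pi * z + v                           # x is endogenous when rho != 0
    y = 1.0 * x + u
    return y, x, z

n, B = 200, 499
y, x, z = simulate(n, rho=0.3, pi=0.2, rng=rng)   # weak-ish instrument
t_obs = dwh_stat(y, x, z)

# Parametric bootstrap imposing the null of exogeneity (rho = 0),
# with the first-stage strength re-estimated from the data.
pi_hat = (z @ x) / (z @ z)
t_boot = np.array([dwh_stat(*simulate(n, 0.0, pi_hat, rng)) for _ in range(B)])
print("bootstrap p-value:", np.mean(t_boot >= t_obs))
```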
  3. By: Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo); Shonosuke Sugasawa (Graduate School of Economics, The University of Tokyo); Malay Ghosh (Department of Economics, University of Florida); Sanjay Chaudhuri (Department of Statistics, National University of Singapore)
    Abstract: The paper concerns small-area estimation in the heteroscedastic nested error regression (HNER) model, which assumes that the within-area variances differ across areas. Although HNER is useful for analyzing data where the within-area variation changes from area to area, it is difficult to obtain good estimates of the error variances because of the small sample sizes in small areas. To address this difficulty, we suggest a random dispersion HNER model which assumes a prior distribution for the error variances. The resulting Bayes estimates of small-area means provide stable shrinkage estimates even for small sample sizes. Next, we propose an empirical Bayes procedure for estimating the small-area means. To measure the uncertainty of the empirical Bayes estimators, we use the conditional and unconditional mean squared errors (MSE) and derive their second-order approximations. It is interesting to note that the difference between the two MSEs appears in the first-order terms, whereas for classical normal linear mixed models it appears only in the second-order terms. Second-order unbiased estimators of the two MSEs are given, with an application to posted land price data.
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2014cf939&r=ecm
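    The shrinkage logic can be illustrated with a generic Fay-Herriot-style empirical-Bayes calculation. The toy sketch below (all settings assumed) conveys how noisy area means are pulled toward a synthetic mean; it is not the paper's random-dispersion HNER estimator.

```python
# Illustrative empirical-Bayes shrinkage of small-area means (a generic
# Fay-Herriot-style sketch, not the paper's random-dispersion HNER model).
import numpy as np

rng = np.random.default_rng(1)
m = 20                                   # number of small areas
n_i = rng.integers(3, 8, size=m)         # small within-area sample sizes
true_means = rng.normal(10.0, 2.0, size=m)
ybar = np.array([rng.normal(mu, 3.0, n).mean() for mu, n in zip(true_means, n_i)])
d_i = 3.0 ** 2 / n_i                     # sampling variances of the area means

# Method-of-moments estimate of the between-area variance A.
A_hat = max(np.var(ybar, ddof=1) - d_i.mean(), 0.0)

# Shrink each direct estimate toward the overall mean; the weight depends
# on how noisy the direct estimate is relative to the between-area spread.
gamma = A_hat / (A_hat + d_i)
eb = gamma * ybar + (1 - gamma) * ybar.mean()
print(np.round(eb[:5], 2))
```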
  4. By: Bognanni, Mark (Federal Reserve Bank of Cleveland); Herbst, Edward (Federal Reserve Board of Governors)
    Abstract: Vector autoregressions with Markov-switching parameters (MS-VARs) offer dramatically better data fit than their constant-parameter predecessors. However, computational complications, as well as negative results about the importance of switching in parameters other than shock variances, have caused MS-VARs to see only sparse usage. For our first contribution, we document the effectiveness of Sequential Monte Carlo (SMC) algorithms at estimating MS-VAR posteriors. Relative to multi-step, model-specific MCMC routines, SMC has the advantages of being simpler to implement, readily parallelizable, and unconstrained by reliance on convenient relationships between prior and likelihood. For our second contribution, we exploit SMC’s flexibility to demonstrate that the use of priors with superior data fit alters inference about the presence of time variation in macroeconomic dynamics. Using the same data as Sims, Waggoner, and Zha (2008), we provide evidence of recurrent episodes characterized by a flat Phillips Curve.
    Keywords: Vector Autoregressions; Sequential Monte Carlo; Regime-Switching Models; Bayesian Analysis
    JEL: C11 C15 C32 C52 E3 E4 E5
    Date: 2014–11–12
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:1427&r=ecm
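    A minimal likelihood-tempering SMC sketch for a toy scalar posterior shows the correction/selection/mutation structure of the generic algorithm. The model, prior, and tuning choices below are assumptions for illustration, not the authors' MS-VAR implementation.

```python
# A minimal likelihood-tempering SMC sketch for a toy posterior (normal
# mean with a normal prior); none of this is the authors' MS-VAR code.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=50)            # assumed toy data

def loglik(theta):
    return -0.5 * np.sum((data - theta) ** 2)   # sigma = 1 known

def logprior(theta):
    return -0.5 * theta ** 2 / 10.0             # N(0, 10) prior

N, stages = 1000, 25
phis = np.linspace(0.0, 1.0, stages + 1)        # tempering schedule
particles = rng.normal(0.0, np.sqrt(10.0), N)   # draw from the prior
logw = np.zeros(N)

for phi_prev, phi in zip(phis[:-1], phis[1:]):
    # Correction: reweight by the incremental tempered likelihood.
    ll = np.array([loglik(p) for p in particles])
    logw += (phi - phi_prev) * ll
    w = np.exp(logw - logw.max()); w /= w.sum()
    # Selection: multinomial resampling.
    idx = rng.choice(N, size=N, p=w)
    particles, logw = particles[idx], np.zeros(N)
    # Mutation: one random-walk MH step targeting the tempered posterior.
    prop = particles + 0.3 * rng.standard_normal(N)
    lp_new = np.array([phi * loglik(p) + logprior(p) for p in prop])
    lp_old = np.array([phi * loglik(p) + logprior(p) for p in particles])
    accept = np.log(rng.uniform(size=N)) < lp_new - lp_old
    particles = np.where(accept, prop, particles)

print("posterior mean ~", particles.mean())
```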
  5. By: Matthias Held (Faculty of Finance, WHU - Otto Beisheim School of Management); Marcel Omachel (Faculty of Finance, WHU - Otto Beisheim School of Management)
    Abstract: In this note, we propose a method for efficient simulation of paths of latent Markovian state processes in a Markov Chain Monte Carlo setting. Our method harnesses available parallel computing power by breaking the sequential nature of commonly encountered state simulation routines. We offer a worked example that highlights the computational merits of our approach.
    Keywords: Bayesian inference, Markov Chain Monte Carlo, Posterior path simulation
    JEL: C11 C15
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:mag:wpaper:140010&r=ecm
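    For context, the sketch below is the conventional sequential forward-filtering backward-sampling (FFBS) routine for a two-state hidden Markov chain, i.e., the kind of inherently sequential state simulation the note seeks to parallelize; the note's own parallel scheme is not reproduced, and all model settings are assumed.

```python
# Standard sequential FFBS for a two-state hidden Markov chain with
# Gaussian emissions -- the baseline whose sequentiality the note targets.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # assumed transition matrix
mu = np.array([-1.0, 1.0])                    # assumed state means
T = 200
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
y = rng.normal(mu[states], 1.0)

# Forward filter: p(s_t | y_{1:t}) up to normalization.
lik = np.exp(-0.5 * (y[:, None] - mu[None, :]) ** 2)
filt = np.zeros((T, 2))
filt[0] = lik[0] * np.array([0.5, 0.5])
filt[0] /= filt[0].sum()
for t in range(1, T):
    pred = filt[t - 1] @ P
    filt[t] = lik[t] * pred
    filt[t] /= filt[t].sum()

# Backward sampler: draw the whole path s_{1:T} from its smoothing law.
draw = np.zeros(T, dtype=int)
draw[-1] = rng.choice(2, p=filt[-1])
for t in range(T - 2, -1, -1):
    prob = filt[t] * P[:, draw[t + 1]]
    draw[t] = rng.choice(2, p=prob / prob.sum())

print("agreement with true states:", np.mean(draw == states))
```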
  6. By: Chen, Song Xi; Li, Jun; Zhong, Pingshou
    Abstract: We study two tests for the equality of two population mean vectors under high dimensionality and column-wise dependence, based on thresholding. They are designed for better power performance when the mean vectors of the two populations differ only in sparsely populated coordinates. The first test is constructed by thresholding to remove the dimensions that bear no signal. The second test combines data transformation and thresholding, first transforming the data with the precision matrix and then thresholding. The benefits of the thresholding and the data transformation are demonstrated in terms of the reduced variance of the test statistics and the improved power of the tests. Numerical analyses and an empirical study are performed to confirm the theoretical findings and to demonstrate the practical implementation.
    Keywords: Data Transformation; Large deviation; Large p small n; Sparse signals; Thresholding.
    JEL: C0 C1 C12
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59815&r=ecm
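    The thresholding idea can be sketched as follows: coordinate-wise t-type statistics are screened against a threshold before aggregation. In this sketch the null calibration is done by permutation for simplicity, which is an assumption of the sketch rather than the paper's asymptotic theory, and the threshold level 2*log(p) is one common choice.

```python
# Rough sketch of a thresholding two-sample mean test with sparse signals;
# calibration by permutation (not the paper's asymptotic analysis).
import numpy as np

rng = np.random.default_rng(4)
n1, n2, p = 40, 40, 500
X = rng.standard_normal((n1, p))
Y = rng.standard_normal((n2, p))
Y[:, :10] += 0.8                            # sparse mean shifts in 10 coordinates

def stat(A, B, lam):
    d = A.mean(0) - B.mean(0)
    se2 = A.var(0, ddof=1) / len(A) + B.var(0, ddof=1) / len(B)
    t2 = d ** 2 / se2                       # coordinate-wise squared t-statistics
    return np.sum((t2 - 1.0) * (t2 > lam))  # aggregate only strong coordinates

lam = 2.0 * np.log(p)                       # one common thresholding level
t_obs = stat(X, Y, lam)

# Null calibration by permutation.
Z = np.vstack([X, Y])
perm = np.empty(500)
for b in range(500):
    idx = rng.permutation(n1 + n2)
    perm[b] = stat(Z[idx[:n1]], Z[idx[n1:]], lam)
print("permutation p-value:", np.mean(perm >= t_obs))
```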
  7. By: Matthew Gentzkow; Jesse M. Shapiro
    Abstract: Empirical papers in economics often describe heuristically how their estimators map specific data features to parameters or other magnitudes of interest. We propose a formal, quantitative measure of this relationship that can be computed at negligible cost even for complex models. We show that our measure of sensitivity to particular sample statistics can be informative about the importance of particular identifying assumptions, providing one rationale for the attention that sensitivity receives in applied research. We apply our measure to empirical papers in industrial organization, macroeconomics, public economics, and finance.
    JEL: C1 C52
    Date: 2014–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:20673&r=ecm
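    A small numerical sketch of such a sensitivity object may help. Under the convention that theta-hat minimizes (s - m(theta))'W(s - m(theta)), the local map from the sample statistics s to the estimate is Lambda = (G'WG)^(-1) G'W, with G the Jacobian of the model moments m. The toy lognormal moment model below is an assumption for illustration, not one of the paper's applications.

```python
# Numerical sketch of a sensitivity matrix for a minimum-distance
# estimator: Lambda = (G'WG)^{-1} G'W, with G = dm/dtheta evaluated at
# the estimate. The lognormal moment model is an illustrative assumption.
import numpy as np

def m(theta):
    """Toy model moments: mean and variance of a lognormal(mu, sigma)."""
    mu, sigma = theta
    mean = np.exp(mu + sigma ** 2 / 2)
    var = (np.exp(sigma ** 2) - 1) * np.exp(2 * mu + sigma ** 2)
    return np.array([mean, var])

theta_hat = np.array([0.5, 0.8])
W = np.eye(2)

# Numerical Jacobian of the moment function at the estimate.
eps = 1e-6
G = np.column_stack([
    (m(theta_hat + eps * e) - m(theta_hat - eps * e)) / (2 * eps)
    for e in np.eye(2)
])

Lambda = np.linalg.solve(G.T @ W @ G, G.T @ W)
print(Lambda)   # row i: how theta_i responds to each sample statistic
```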
  8. By: Rehman, Atiq-ur-; Malik, Muhammad Irfan
    Abstract: Since the time of Yule (1926) it has been known that correlation between two time series can produce spurious results. Granger and Newbold (1974) traced the roots of spurious correlation to non-stationarity of the time series. However, Granger, Hyung and Jeon (2001) proved that spurious correlation also exists between stationary time series. These facts make the correlation coefficient an unreliable measure of association. This paper proposes the ‘Modified R’ as an alternative measure of association for time series. The Modified R is robust to the type of stationarity and the type of deterministic part in the time series. Its performance is illustrated via extensive Monte Carlo experiments.
    Keywords: Correlation Coefficient; Spurious Regression; Stationary Series
    JEL: C01 C15 C52 C63
    Date: 2014–04–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:60025&r=ecm
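    The spurious-correlation phenomenon motivating the paper is easy to reproduce: the Monte Carlo below correlates independent random walks. (The ‘Modified R’ statistic itself is defined in the paper and is not reproduced here.)

```python
# Monte Carlo illustration of spurious correlation: the Pearson
# correlation between two INDEPENDENT random walks is wildly dispersed
# rather than concentrated near zero.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 2000
corrs = np.empty(reps)
for r in range(reps):
    x = np.cumsum(rng.standard_normal(n))   # independent random walks
    y = np.cumsum(rng.standard_normal(n))
    corrs[r] = np.corrcoef(x, y)[0, 1]

print("share with |r| > 0.5:", np.mean(np.abs(corrs) > 0.5))
```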
  9. By: Arulampalam, Wiji (University of Warwick); Corradi, Valentina (University of Surrey); Gutknecht, Daniel (University of Oxford)
    Abstract: In 2005, the Indian Government launched a conditional cash-incentive program to encourage institutional delivery. This paper studies the effects of the program on neonatal mortality using district-level household survey data. We model mortality using survival analysis, paying special attention to the substantial heaping present in the data. The main objective of this paper is to provide a set of sufficient conditions for identification and consistent estimation of the baseline hazard, accounting for heaping and unobserved heterogeneity. Our identification strategy requires neither administrative data nor multiple measurements; it relies instead on one correctly reported duration point and the presence of flat segments in the baseline hazard, one of which includes this correctly reported duration point. We establish the asymptotic properties of the maximum likelihood estimator and provide a simple procedure to test whether the policy (uniformly) reduced mortality. While our empirical findings do not confirm such a reduction, they do indicate that accounting for heaping matters for the estimation of the baseline hazard.
    Keywords: discrete time duration model, heaping, measurement error, neonatal mortality, parameters on the boundary
    JEL: C12 C21 C24 C41
    Date: 2014–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp8493&r=ecm
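    Heaping is easy to visualize: reported durations pile up at round values such as 7, 14, and 30 days. The simulation below (all settings invented for illustration) shows the spikes such reporting creates; the paper's estimator is not reproduced.

```python
# Toy illustration of heaping in duration data: a fraction of reported
# durations is rounded to the nearest "round" value, creating spikes.
import numpy as np

rng = np.random.default_rng(6)
true_days = rng.geometric(0.08, size=2000)          # latent durations
heap_points = np.array([7, 14, 30])
heaped = true_days.copy()
mask = rng.uniform(size=len(heaped)) < 0.4          # 40% of reports heap
nearest = heap_points[np.argmin(np.abs(heaped[mask, None] - heap_points), axis=1)]
heaped[mask] = nearest

for d in (6, 7, 8, 13, 14, 15):
    print(d, "days:", np.sum(heaped == d))          # spikes at 7 and 14
```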
  10. By: Zirogiannis, Nikolaos; Tripodis, Yorghos
    Abstract: We develop a dynamic factor model for panel data with a short time dimension (i.e., n < 15). Unlike most of the work in the DFM literature, where one common factor is estimated for a group of cross-sectional units, our interest lies in the estimation of a latent variable for each cross-sectional unit at every point in time. This difference increases the computational challenges of the estimation process. To facilitate estimation we develop the “Two-Cycle Conditional Expectation-Maximization” (2CCEM) algorithm, a variant of the EM algorithm and its extensions (Dempster et al. 1977; Meng and Rubin 1993; Liu and Rubin 1994). The latent variable is estimated in the first cycle, and the dynamic component is incorporated into the estimation process in the second cycle; the estimates of each cycle are updated with information from the previous cycle until convergence is achieved. We provide simulation results demonstrating the consistency of our 2CCEM estimator. One advantage of this work is that the estimation strategy can accommodate many cross-sectional units with a short time dimension, and it is flexible enough to be used in different types of applications. We apply our model to a dataset of 853 water and sanitation utilities from 45 countries and use the 2CCEM algorithm to estimate performance trajectories for each utility.
    Keywords: Dynamic Factor Models, EM algorithm, Panel Data, State-Space models, Water utilities, IBNET, Environmental Economics and Policy, Research Methods/Statistical Methods
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:ags:aaea14:170592&r=ecm
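    The two-cycle idea can be conveyed with a schematic alternating-estimation skeleton: one cycle updates the unit-by-period latent scores given the measurement parameters, the other updates the dynamics and loadings given the scores, iterating until convergence. The toy below is loosely inspired by, and is emphatically not, the authors' 2CCEM algorithm; all data settings are assumed.

```python
# Schematic alternating estimation for a short-panel latent-score model:
# cycle 1 updates latent scores given loadings; cycle 2 updates the AR(1)
# dynamics and loadings given scores. NOT the authors' 2CCEM algorithm.
import numpy as np

rng = np.random.default_rng(10)
N, T, k = 50, 8, 3                        # many units, short panel
rho_true = 0.7
f = np.zeros((N, T))
for t in range(1, T):
    f[:, t] = rho_true * f[:, t - 1] + rng.standard_normal(N)
lam_true = np.array([1.0, 0.8, 0.6])
X = f[..., None] * lam_true + 0.3 * rng.standard_normal((N, T, k))

lam = np.ones(k)                          # initial loadings
for _ in range(50):
    # Cycle 1: latent score for each unit-period, given the loadings.
    f_hat = X @ lam / (lam @ lam)
    # Cycle 2: AR(1) coefficient and loadings, given the scores.
    rho = np.sum(f_hat[:, 1:] * f_hat[:, :-1]) / np.sum(f_hat[:, :-1] ** 2)
    lam_new = np.einsum('ntk,nt->k', X, f_hat) / np.sum(f_hat ** 2)
    lam_new /= lam_new[0]                 # normalization for identification
    if np.max(np.abs(lam_new - lam)) < 1e-8:
        break
    lam = lam_new

print("rho:", round(rho, 2), "loadings:", np.round(lam, 2))
```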
  11. By: Stolzenburg, Ulrich
    Abstract: A panel data set covering 145 countries between 1960 and 2010 is investigated using models of parameter heterogeneity. The Functional Coefficient Model (FCM) introduced by Cai, Fan and Yao (2000) allows the estimated parameters of growth determinants to vary as functions of one or two status variables. With the level of development, measured by initial per capita GDP, as the status variable, coefficients depend on where a country stands in the development process. In a two-dimensional setting, time is used as an additional status variable. First, the analysis is restricted to bivariate relationships between growth and one of its determinants at a time, dependent on one or both status variables in a local estimation. Afterwards, the well-known Solow (1956) model serves as a core set of control variables, while the functional dependence of additional explanatory variables is investigated. While some constraints of this modelling approach have to be kept in mind, functional specifications are a promising tool for investigating growth relationships, as well as their robustness and sensitivity. Finally, a simple variant of the FCM called local mean values provides a suitable way to visualize macroeconomic or demographic development patterns in a descriptive diagram.
    Keywords: economic growth,cross-country growth regression,functional coefficient model,varying parameter,parameter heterogeneity,kernel regression,panel data,local mean value
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:201411&r=ecm
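    A minimal functional-coefficient sketch: the slope on a growth determinant x is estimated by kernel-weighted least squares at each value of a status variable z, so the coefficient becomes a function beta(z). The data, bandwidth, and kernel below are illustrative assumptions, not the paper's specification.

```python
# Minimal functional-coefficient regression: the slope on x varies with a
# status variable z, estimated by Gaussian-kernel-weighted least squares.
import numpy as np

rng = np.random.default_rng(7)
n = 500
z = rng.uniform(0, 1, n)                     # status variable
x = rng.standard_normal(n)
beta = lambda s: 1.0 + 2.0 * s               # true varying coefficient
y = beta(z) * x + 0.5 * rng.standard_normal(n)

def local_beta(z0, h=0.1):
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones(n), x])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)
    return coef[1]                           # local slope on x

for z0 in (0.2, 0.5, 0.8):
    print(z0, round(local_beta(z0), 2), "true:", beta(z0))
```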
  12. By: George Athanasopoulos; D.S. Poskitt; Farshid Vahid; Wenying Yao
    Abstract: This article studies a simple, coherent approach for identifying and estimating error-correcting vector autoregressive moving average (EC-VARMA) models. Canonical correlation analysis is implemented both for determining the cointegrating rank, using a strongly consistent method, and for identifying the short-run VARMA dynamics, using the scalar component methodology. Finite-sample performance is evaluated via Monte Carlo simulations, and the approach is applied to model and forecast US interest rates. The results reveal that EC-VARMA models generate significantly more accurate out-of-sample forecasts than vector error correction models (VECMs), especially for short horizons.
    Keywords: Cointegration, Error correction, Scalar Component Model, Multivariate Time Series.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2014-22&r=ecm
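    A Johansen-flavored sketch of the rank step: the squared canonical correlations between the differences and the lagged levels separate cointegrating directions (eigenvalues away from zero) from unit-root directions (eigenvalues near zero). This illustrates only the canonical-correlation ingredient, not the authors' full EC-VARMA identification scheme; the bivariate system below is an assumed example with one built-in cointegrating relation.

```python
# Canonical correlations between Delta Y_t and Y_{t-1} as a gauge of
# cointegrating rank (a Johansen-flavored computation, for illustration).
import numpy as np

rng = np.random.default_rng(8)
T = 500
w = np.cumsum(rng.standard_normal(T + 1))        # common stochastic trend
Y = np.column_stack([w, 0.5 * w]) + rng.standard_normal((T + 1, 2))

dY = np.diff(Y, axis=0)                          # Delta Y_t
L = Y[:-1].copy()                                # Y_{t-1}
dY -= dY.mean(0)
L -= L.mean(0)

S00 = dY.T @ dY / T
S11 = L.T @ L / T
S01 = dY.T @ L / T

# Squared canonical correlations: eigenvalues of S00^-1 S01 S11^-1 S01'.
M = np.linalg.solve(S00, S01) @ np.linalg.solve(S11, S01.T)
eig = np.sort(np.linalg.eigvals(M).real)[::-1]
print("squared canonical correlations:", np.round(eig, 3))
# One eigenvalue clearly above zero and one near zero points to rank 1.
```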
  13. By: Grammig, Joachim; Schaub, Eva-Maria
    Abstract: The long-run consumption risk (LRR) model is a promising approach to resolving prominent asset pricing puzzles. The simulated method of moments (SMM) provides a natural framework to estimate its deep parameters, but caveats concern model solvability and weak identification. We propose a two-step estimation strategy that combines GMM and SMM, for which we elicit informative macroeconomic and financial moment matches from the LRR model structure. In particular, we exploit the persistent serial correlation of consumption and dividend growth and the equilibrium conditions for the market return and risk-free rate, as well as the model-implied predictability of the risk-free rate. We match analytical moments when possible and simulated moments when necessary, and determine the crucial factors required for both identification and reasonable estimation precision. A simulation study, the first in the context of long-run risk modeling, delineates the pitfalls associated with SMM estimation of a non-linear dynamic asset pricing model. Our study provides a blueprint for successful estimation of the LRR model.
    Keywords: asset pricing,long-run risk,simulated method of moments
    JEL: C58 G10 G12
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:cfswop:479&r=ecm
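    The two-step logic can be sketched on a toy AR(1) "growth" series: one parameter is recovered from analytical moments in closed form (the GMM step), and a remaining parameter by matching simulated moments with common random numbers (the SMM step). The model and the moments matched below are assumptions for illustration, not the LRR pricing model.

```python
# Stylized two-step GMM/SMM sketch on a toy AR(1) model (not the LRR
# model): rho from analytical autocovariance moments, sigma by SMM.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)

# "Data": an AR(1) growth series with true rho = 0.9, sigma = 0.5.
T = 2000
g = np.zeros(T)
for t in range(1, T):
    g[t] = 0.9 * g[t - 1] + 0.5 * rng.standard_normal()

# Step 1 (GMM, analytical moments): for an AR(1),
# rho = autocov(1) / autocov(0), so the moment match is closed-form.
c = np.cov(g[:-1], g[1:])
rho_hat = c[0, 1] / c[0, 0]

# Step 2 (SMM): recover sigma by matching the simulated variance of the
# series to the data variance, holding rho_hat fixed.
def smm_objective(sigma, S=10, Tsim=2000):
    sim_rng = np.random.default_rng(123)    # fixed seed: common random numbers
    vars_ = []
    for _ in range(S):
        x = np.zeros(Tsim)
        for t in range(1, Tsim):
            x[t] = rho_hat * x[t - 1] + sigma * sim_rng.standard_normal()
        vars_.append(x.var())
    return (np.mean(vars_) - g.var()) ** 2

res = minimize_scalar(smm_objective, bounds=(0.05, 2.0), method="bounded")
print("rho_hat:", round(rho_hat, 3), "sigma_hat:", round(res.x, 3))
```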
  14. By: Offermanns, Christian J.
    Abstract: We propose a semi-parametric approach to heterogeneous dynamic panel data modelling. The method generalizes existing approaches to model cross-section homogeneity within such panels. It allows for partial influence of other cross-section units on estimated coefficients, differentiating between short-run and long-run homogeneity, and determines the optimal degree of such homogeneity. The issue of cross-section homogeneity emerges as a special case of categorical conditioning. Applying our model to equilibrium exchange rate determination in a cross-country panel, we find evidence of largely heterogeneous adjustment and more homogeneous long-run coefficients across countries. The coefficient heterogeneity appears largely idiosyncratic and is not captured by simple categorizations like exchange rate regime classification.
    Keywords: dynamic panel data models,coefficient homogeneity,non-parametric estimation,equilibrium exchange rates
    JEL: C23 F31 C52
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:fubsbe:201425&r=ecm

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.