nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒02‒16
28 papers chosen by
Sune Karlsson
Örebro universitet

  1. Unbiased Instrumental Variables Estimation under Known First-Stage Sign By Isaiah Andrews; Timothy B. Armstrong
  2. Grouped Model Averaging for Finite Sample Size By Aman Ullah; Xinyu Zhang
  3. Small sample performance of indirect inference on DSGE models By Le, Vo Phuong Mai; Meenagh, David; Minford, Patrick; Wickens, Michael
  4. Unconditional Transformed Likelihood Estimation of Time-Space Dynamic Panel Data Models By Kripfganz, Sebastian
  5. On Bootstrap Validity for Subset Anderson-Rubin Test in IV Regressions By Firmin Doko Tchatoka
  6. Bootstrapping the portmanteau tests in weak auto-regressive moving average models By Zhu, Ke
  7. Empirical Likelihood for Robust Poverty Comparisons By Tabri, Rami Victor
  8. Average partial effects in multivariate probit models with latent heterogeneity: Monte Carlo experiments and an application to immigrants' ethnic identity and economic performance By Giovanni Bruno; Orietta Dessy
  9. What is the effect of sample and prior distributions on a Bayesian autoregressive linear model? An application to piped water consumption By Andrés Ramírez Hassan; Jhonatan Cardona Jiménez; Raul Pericchi Guerra
  10. Overidentification test in a nonparametric treatment model with unobserved heterogeneity By Sarnetzki, Florian; Dzemski, Andreas
  11. Spatial GAS models for systemic risk measurement By Schaumburg, Julia; Blasques, Francisco; Koopman, Siem Jan; Lucas, Andre
  12. PANICCA - PANIC on Cross-Section Averages By Reese, Simon; Westerlund, Joakim
  13. Unit Roots and Smooth Transitions: A Replication By Kulaksizoglu, Tamer
  14. Sign restrictions and statistical identification under volatility breaks -- Simulation based evidence and an empirical application to monetary policy analysis By Herwartz, Helmut; Plödt, Martin
  15. Two-way models for gravity By Koen Jochmans
  16. On Selection of Statistics for Approximate Bayesian Computing or the Method of Simulated Moments By Michael Creel
  17. Approximate Bayesian logistic regression via penalized likelihood estimation with data augmentation By Andrea Discacciati
  18. Unbalanced Regressions and the Predictive Equation By Daniela Osterrieder; Daniel Ventosa-Santaulària; J. Eduardo Vera-Valdés
  19. Estimating the Impact of Alternative Multiple Imputation Methods on Longitudinal Wealth Data By Grabka, Markus; Westermeier, Christian
  20. A Factor Analytical Approach to Price Discovery By Westerlund, Joakim; Reese, Simon; Narayan, Paresh
  21. Systemic Risk with Exchangeable Contagion: Application to the European Banking System By Umberto Cherubini; Sabrina Mulinacci
  22. Detecting financial contagion in a multivariate system By Manner, Hans; Blatt, Dominik; Candelon, Bertrand
  23. Imputation of missing expenditure information in standard household income surveys By Massimo Baldini; Daniele Pacifico; Federica Termini
  24. On the Assumption of Bivariate Normality in Selection Models: A Copula Approach Applied to Estimating HIV Prevalence By McGovern, Mark E.; Bärnighausen, Till; Giampiero Marra; Rosalba Radice
  25. Identification of DSGE Models - A Comparison of Methods and the Effect of Second Order Approximation By Mutschler, Willi
  26. A score-test on measurement errors in rating transition times By Weißbach, Rafael; Voß, Sebastian
  27. How Reliable Are the Geographical Spatial Weights Matrices? By Davide Fiaschi; Angela Parenti
  28. The estimation uncertainty of permanent-transitory decompositions in co-integrated systems By Schreiber, Sven

  1. By: Isaiah Andrews (Harvard Society of Fellows); Timothy B. Armstrong (Cowles Foundation, Yale University)
    Abstract: We derive mean-unbiased estimators for the structural parameter in instrumental variables models where the sign of one or more first stage coefficients is known. In the case with a single instrument, the unbiased estimator is unique. For cases with multiple instruments we propose a class of unbiased estimators and show that an estimator within this class is efficient when the instruments are strong while retaining unbiasedness in finite samples. We show numerically that unbiasedness does not come at a cost of increased dispersion: in the single instrument case, the unbiased estimator is less dispersed than the 2SLS estimator. Our finite-sample results apply to normal models with known variance for the reduced form errors, and imply analogous results under weak instrument asymptotics with an unknown error distribution.
    Keywords: Weak instruments, Unbiased estimation, Sign restrictions
    JEL: C26 C36
    Date: 2015–02
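The weak-instrument problem motivating this paper is easy to reproduce. The sketch below is illustrative only (it is not the authors' unbiased estimator): it simulates a single-instrument model with an endogenous regressor and compares the IV estimator under a strong and a weak first stage.

```python
import numpy as np

def iv_estimate(y, x, z):
    # With a single instrument, 2SLS reduces to the IV ratio (z'y)/(z'x)
    return (z @ y) / (z @ x)

def simulate(pi, beta=1.0, n=500, reps=200, seed=0):
    """Monte Carlo draws of the IV estimator; pi is the first-stage slope."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(reps):
        z = rng.standard_normal(n)
        # correlated structural/first-stage errors make x endogenous
        u, v = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n).T
        x = pi * z + v
        y = beta * x + u
        estimates.append(iv_estimate(y, x, z))
    return np.array(estimates)

strong = simulate(pi=1.0)   # concentration parameter n*pi^2 = 500
weak = simulate(pi=0.05)    # concentration parameter = 1.25
```

With a strong first stage the draws are centred on beta = 1; with a weak one they are widely dispersed and heavy-tailed, which is precisely the setting where imposing a known first-stage sign becomes valuable.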
  2. By: Aman Ullah (Department of Economics, University of California Riverside); Xinyu Zhang (Chinese Academy of Sciences)
    Abstract: This paper studies grouped model averaging methods in the finite sample situation. Sufficient conditions under which the grouped model averaging estimator dominates the ordinary least squares estimator are provided. A class of grouped model averaging estimators, the g-class, is introduced, and its dominance condition over the ordinary least squares estimator is established. All theoretical findings are verified by simulation examples. We also apply the methods to the analysis of grain output data from China.
    Keywords: Finite Sample Size, Mean Squared Error, Model Averaging, Sufficient Condition.
    JEL: C13 C21
    Date: 2015–02
  3. By: Le, Vo Phuong Mai (Cardiff Business School); Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School)
    Abstract: Using Monte Carlo experiments, we examine the performance of indirect inference tests of DSGE models in small samples, using various models in widespread use. We compare these with tests based on direct inference (using the Likelihood Ratio). We find that both tests have power so that a substantially false model will tend to be rejected by both; but that the power of the indirect inference test is by far the greater, necessitating re-estimation to ensure that the model is tested in its fullest sense. We also find that the small-sample bias with indirect estimation is around half of that with maximum likelihood estimation.
    Keywords: Bootstrap; DSGE; Indirect Inference; Likelihood Ratio; New Classical; New Keynesian; Wald statistic
    JEL: C12 C32 C52 E1
    Date: 2015–01
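The logic of an indirect-inference test can be sketched in a few lines. In the toy version below a hypothetical AR(1) stands in for both the structural model and the auxiliary model; the paper's actual setup uses DSGE models, a VAR auxiliary model, and bootstrapped Wald statistics.

```python
import numpy as np

def ar1_coef(y):
    # auxiliary model: AR(1) slope estimated by OLS (no intercept)
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(phi, n, rng):
    y = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    return y

rng = np.random.default_rng(3)
data = simulate_ar1(0.9, 400, rng)   # "observed" data, true phi = 0.9
b_data = ar1_coef(data)

def ii_distance(phi0, reps=200):
    # Wald-style distance: how atypical is the data's auxiliary coefficient
    # among coefficients computed on samples simulated from the candidate model?
    bs = np.array([ar1_coef(simulate_ar1(phi0, 400, rng)) for _ in range(reps)])
    return (b_data - bs.mean()) ** 2 / bs.var()
```

A candidate model near the truth (phi0 = 0.9) yields a small distance, while a substantially false one (phi0 = 0.3) yields a large distance and would be rejected, which is the power property the Monte Carlo experiments examine.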
  4. By: Kripfganz, Sebastian
    Abstract: I derive the unconditional transformed likelihood function and its derivatives for a fixed-effects panel data model with time lags, spatial lags, and spatial time lags that encompasses the pure time dynamic and pure space dynamic models as special cases. In addition, the model can accommodate spatial dependence in the error term. I demonstrate that the model-consistent representation of the initial-period distribution involves higher-order spatial lag polynomials. Their order is linked to the minimal polynomial of the spatial weights matrix and, in general, tends to infinity with increasing sample size. Consistent estimation requires an appropriate truncation of these lag polynomials unless the spatial weights matrix has a regular structure. The finite sample evidence from Monte Carlo simulations shows that a misspecification of the spatial structure for the initial observations results in considerable biases while the correctly specified estimator behaves well. As an application, I estimate a time-space dynamic wage equation allowing for peer effects within households.
    JEL: C13 C23 J31
    Date: 2014
  5. By: Firmin Doko Tchatoka (School of Economics, University of Adelaide)
    Abstract: This paper sheds new light on subset hypothesis testing in linear structural models in which instrumental variables (IVs) can be arbitrarily weak. For the first time, we investigate the validity of the bootstrap for Anderson-Rubin (AR) type tests of hypotheses specified on a subset of structural parameters, with or without identification. Our investigation focuses on two subset AR type statistics based on the plug-in principle. The first one uses the restricted limited information maximum likelihood (LIML) as the plug-in method, and the second exploits the restricted two-stage least squares (2SLS). We provide an analysis of the limiting distributions of both the standard and proposed bootstrap AR statistics under the subset null hypothesis of interest. Our results provide some new insights and extensions of earlier studies. In all cases, we show that when identification is strong and the number of instruments is fixed, the bootstrap provides a high-order approximation of the null limiting distributions of both plug-in subset statistics. However, the bootstrap is inconsistent when instruments are weak. This contrasts with the bootstrap of the AR statistic of the null hypothesis specified on the full vector of structural parameters, which remains valid even when identification is weak; see Moreira et al. (2009). We present a Monte Carlo experiment that confirms our theoretical findings.
    Keywords: Subset AR-test; bootstrap validity; bootstrap inconsistency; weak instruments; Edgeworth
    JEL: C12 C13 C36
    Date: 2015–01
  6. By: Zhu, Ke
    Abstract: This paper uses a random weighting (RW) method to bootstrap the critical values for the Ljung-Box/Monti portmanteau tests and weighted Ljung-Box/Monti portmanteau tests in weak ARMA models. Unlike the existing methods, no user-chosen parameter is needed to implement the RW method. As an application, these four tests are used to check model adequacy in power GARCH models. Simulation evidence indicates that the weighted portmanteau tests have a power advantage over other existing tests. A real example on the S&P 500 index illustrates the merits of our testing procedure. As an extension, the block-wise RW method is also studied.
    Keywords: Bootstrap method; Portmanteau test; Power GARCH models; Random weighting approach; Weak ARMA models; Weighted portmanteau test.
    JEL: C0 C01 C12
    Date: 2015–02–06
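The Ljung-Box statistic itself is straightforward to compute. The sketch below pairs it with a schematic random-weighting step (exponential weights on the centred series) that conveys the flavour of bootstrapping critical values without a user-chosen parameter; it is an illustration only, not Zhu's exact algorithm.

```python
import numpy as np

def ljung_box(x, m):
    # Q = n(n+2) * sum_{k=1}^{m} rho_k^2 / (n - k), rho_k = lag-k autocorrelation
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = xc @ xc
    q = sum((xc[k:] @ xc[:-k] / denom) ** 2 / (n - k) for k in range(1, m + 1))
    return n * (n + 2) * q

def rw_critical_value(x, m, B=500, alpha=0.05, seed=0):
    # Schematic random-weighting bootstrap: recompute the statistic on
    # exponentially reweighted centred observations (illustration only)
    rng = np.random.default_rng(seed)
    xc = np.asarray(x, float) - np.mean(x)
    stats = [ljung_box(rng.exponential(size=xc.size) * xc, m) for _ in range(B)]
    return float(np.quantile(stats, 1 - alpha))
```

On white noise the statistic is small relative to the bootstrapped critical value; on a strongly autocorrelated series it is orders of magnitude larger.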
  7. By: Tabri, Rami Victor
    Abstract: Robust rankings of poverty are ones that do not rely on a single poverty measure with a single poverty line. Mathematically, such robust rankings of two populations specify a continuum of unconditional moment inequality constraints. If these constraints could be imposed in estimation, a statistical test could be performed using an empirical likelihood-ratio (ELR) test, which is a nonparametric version of the likelihood-ratio test in parametric inference. While these constraints cannot be imposed exactly, we show that they can be imposed approximately, with the approximation disappearing asymptotically. We then propose a bootstrap test procedure that implements the resulting approximate ELR test. The paper derives the asymptotic properties of this test, presents Monte Carlo experiments that show improved power compared to existing tests such as that of Linton et al. (2010), and provides an empirical illustration using Canadian income distribution data. More generally, the bootstrap test procedure provides an asymptotically valid nonparametric test of a continuum of unconditional moment inequality constraints. The proofs exploit the fact that the constrained optimization problem is a concave semi-infinite programming problem.
    Date: 2015–01
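Robust poverty comparisons of this kind rest on one distribution function dominating another over a range of thresholds. A minimal, non-inferential check of first-order dominance between two samples using empirical CDFs looks like this; the paper's contribution is the formal ELR/bootstrap test of such inequality constraints, which this sketch does not attempt.

```python
import numpy as np

def max_dominance_violation(a, b):
    """Largest value of F_a(z) - F_b(z) over a grid of sample points.

    If the maximum is <= 0, the empirical CDF of `a` lies weakly below that
    of `b` everywhere, i.e. `a` first-order stochastically dominates `b`
    (loosely: `a` is unambiguously "less poor" for any monotone poverty measure).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    grid = np.union1d(a, b)
    F_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    F_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return float(np.max(F_a - F_b))
```

A sample shifted upward by a constant dominates the original, so the violation is non-positive in one direction and strictly positive in the other.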
  8. By: Giovanni Bruno (Università Commerciale Luigi Bocconi, Milan); Orietta Dessy (Ca' Foscari University of Venice)
    Abstract: We extend the univariate results in Wooldridge (2005) to multivariate probit models, proving the following. 1) Average partial effects (APEs) based on joint probabilities are consistently estimated by conventional multivariate probit models under general forms of conditionally independent latent heterogeneity (LH) as long as the only constraints beyond normalization, if any, are within-equation homogeneous restrictions. The normalization of choice is not neutral to consistency in models with cross-equation parameter restrictions beyond normalization, such as those implemented by Stata's asmprobit command or in the panel probit model: if the normalization is through an error covariance matrix in correlation form, consistency breaks down unless the LH components are truly homoskedastic. This matters because an error covariance matrix in correlation form is the only normalization permitted by Stata's biprobit and mvprobit commands or Limdep's BIVARIATE PROBIT and MPROBIT. Covariance restrictions beyond normalizations generally conflict with an arbitrary covariance matrix for the LH components. The multinomial probit model with i.i.d. errors, implemented by Stata's mprobit, is a case in point. 2) Conditional independence of the LH components is not generally sufficient for consistent estimation of APEs on conditional probabilities. Consistency is restored by maintaining an additional independence assumption. This holds true whether or not the response variables are used as regressors. 3) The dimensionality benefit observed by Mullahy (2011) in the estimation of partial effects extends to APEs. We exploit this feature in the design of a simple procedure for estimating APEs, which is both faster and more accurate than simulation-based codes, such as Stata's mvprobit and cmp. To demonstrate the finite-sample implications of our results, we carry out extensive Monte Carlo experiments with bivariate and trivariate probit models. Finally, we apply the procedure in (3) to Italian survey data on immigrants in order to estimate the APEs of a trivariate probit model of ethnic identity formation and economic performance.
    Date: 2014–11–13
  9. By: Andrés Ramírez Hassan; Jhonatan Cardona Jiménez; Raul Pericchi Guerra
    Abstract: In this paper we analyze the effect of four possible alternatives regarding the prior distributions in a linear model with autoregressive errors to predict piped water consumption: Normal-Gamma, Normal-Scaled Beta two, Studentized-Gamma and Student's t-Scaled Beta two. We show the effects of these prior distributions on the posterior distributions under different assumptions associated with the coefficient of variation of prior hyperparameters in a context where there is a conflict between the sample information and the elicited hyperparameters. We show that the posterior parameters are less affected by the prior hyperparameters when the Studentized-Gamma and Student's t-Scaled Beta two models are used. We show that the Normal-Gamma model obtains sensible outcomes in predictions when there is a small sample size. However, this property is lost when the experts overestimate the certainty of their knowledge. In the case that the experts greatly trust their beliefs, it is a good idea to use Student's t distribution as the prior distribution, because we obtain small posterior predictive errors. In addition, we find that the posterior predictive distributions using one of the versions of Student's t as prior are robust to the coefficient of variation of the prior parameters. Finally, it is shown that the Normal-Gamma model has a posterior distribution of the variance concentrated near zero when there is a high level of confidence in the experts' knowledge: this implies a narrow posterior predictive credibility interval, especially using small sample sizes.
    Keywords: Autoregressive model, Bayesian analysis, Forecast, Robust prior
    JEL: C11 C53
    Date: 2014–07–23
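The prior-versus-sample tension the paper studies is visible even in the basic conjugate update for a linear model. The minimal sketch below ignores the autoregressive error structure of the paper's model and uses only the Normal-Gamma case: a tight prior pulls the posterior mean toward the elicited values, a diffuse prior lets the data dominate.

```python
import numpy as np

def ng_posterior_mean(X, y, beta0, V0):
    # Normal-Gamma conjugate prior: beta | sigma^2 ~ N(beta0, sigma^2 * V0).
    # The posterior mean is a precision-weighted average of the prior mean
    # and the least-squares fit; a tight prior (small V0) pulls it to beta0.
    V0inv = np.linalg.inv(V0)
    Vn = np.linalg.inv(V0inv + X.T @ X)
    return Vn @ (V0inv @ beta0 + X.T @ y)

x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x                      # noiseless data with beta = (2, 3)
beta0 = np.zeros(2)

diffuse = ng_posterior_mean(X, y, beta0, V0=1e6 * np.eye(2))
tight = ng_posterior_mean(X, y, beta0, V0=1e-6 * np.eye(2))
```

When experts overstate the certainty of their knowledge (a very small V0), the posterior collapses onto the prior mean even though the sample speaks clearly, which is the conflict scenario the paper examines.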
  10. By: Sarnetzki, Florian; Dzemski, Andreas
    Abstract: We provide an overidentification test for a nonparametric treatment model where individuals are allowed to select into treatment based on unobserved gains. Our test can be used to test the validity of instruments in a framework with essential heterogeneity (Imbens and Angrist 1994). The essential ingredient is to assume that a binary and a continuous instrument are available. The testable restriction is closely related to the overidentification of the Marginal Treatment Effect. We suggest a test statistic and characterize its asymptotic distribution and behavior under local alternatives. In simulations, we investigate the validity and finite sample performance of an easy-to-implement wild bootstrap procedure. Finally, we illustrate the applicability of our method by studying two instruments from the literature on teenage pregnancies. This research is motivated by the observation that in the presence of essential heterogeneity classical GMM overidentification tests cannot be used to test for the validity of instruments (Heckman and Schmierer 2010). The test complements existing tests by considering for the first time the subpopulation of compliers. Our approach can be interpreted as a test of index sufficiency and is related to the test of the validity of the matching approach in Heckman et al. (1996, 1998). Conditional on covariates, the propensity score aggregates all information that the instruments provide about observed outcomes given that the model is correctly specified. The estimated propensity score enters our test statistic as a generated regressor. We quantify the effect of the first-stage estimation error and find that in order to have good power against local alternatives we have to reduce the bias from estimating the first stage nonparametrically, e.g., by fitting a higher-order local polynomial. For the second stage no bias reduction is necessary. Previous literature (Ying-Ying Lee 2013) establishes the validity of a multiplier bootstrap to conduct inference in a treatment model with nonparametrically estimated regressors. Our simulations illustrate that a much easier-to-implement naïve wild bootstrap procedure can have good properties. In our application we consider two instruments that have been used in the analysis of the effect of teenage child bearing on high-school graduation. For the binary instrument we use teenage miscarriage and for the continuous instrument we use age at first menstrual period. If teenage girls select into pregnancy based on some unobserved heterogeneity that is correlated with their likelihood of graduation, miscarriage does not constitute a valid instrument. Our test confirms this line of argument by rejecting that the treatment model is correctly specified.
    JEL: C21 C14 C12
    Date: 2014
  11. By: Schaumburg, Julia; Blasques, Francisco; Koopman, Siem Jan; Lucas, Andre
    Abstract: A new model for time-varying spatial dependencies is introduced. It forms an extension to the popular spatial lag model and can be estimated conveniently by maximum likelihood. The spatial dependence parameter is assumed to follow a generalized autoregressive score (GAS) process. The theoretical properties of the model are established and its satisfactory finite sample performance is shown in a small simulation study. In an empirical application, spatial dependencies between nine European sovereign CDS spreads are estimated for the time period from November 2008 until October 2013. The empirical model features a spatial weights matrix constructed from cross-border lending data and regressors including country-specific and Europe-wide risk factors. The estimation results indicate a high, time-varying degree of spatial spillovers in the spread data. A spatial GAS model with t-distributed errors provides the best fit. There is evidence for a downturn in spatial dependence after the Greek default in winter 2012, which can be explained economically by a change in bank regulation.
    JEL: C58 C23 G15
    Date: 2014
  12. By: Reese, Simon (Department of Economics, Lund University); Westerlund, Joakim (Department of Economics, Lund University)
    Abstract: The cross-section average (CA) augmentation approach of Pesaran (2007) and Pesaran et al. (2013), and the principal components-based panel analysis of non-stationarity in idiosyncratic and common components (PANIC) of Bai and Ng (2004, 2010) are among the most popular “second-generation” approaches for cross-section correlated panels. One feature of these approaches is that they have different strengths and weaknesses. The purpose of the current paper is to develop PANICCA, a combined approach that exploits the strengths of both CA and PANIC.
    Keywords: PANIC; cross-section average augmentation; unit root test; cross-section dependence; common factors
    JEL: C12 C13 C33 C36
    Date: 2014–10–20
  13. By: Kulaksizoglu, Tamer
    Abstract: This paper replicates Leybourne et al. (1998), who propose a Dickey-Fuller type unit root test that is most appropriate when there is reason to suspect the possibility of deterministic structural change in the series. We find that our replicated results are quite similar to the authors' results. We also make the Ox source code available.
    Keywords: Dickey-Fuller test, Integrated process, Nonlinear trend, Structural change
    JEL: C12 C15
    Date: 2015–02–04
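For readers replicating along, the regression underlying tests of this family is a few lines in matrix form. This hypothetical sketch computes the plain Dickey-Fuller t-statistic only, without the smooth-transition detrending that distinguishes the Leybourne et al. test.

```python
import numpy as np

def df_tstat(y):
    # Dickey-Fuller regression: diff(y_t) = alpha + rho * y_{t-1} + e_t;
    # returns the t-statistic on rho (compare with DF, not normal, critical values)
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(y.size - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (dy.size - 2)
    se_rho = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se_rho
```

A stationary AR(1) yields a strongly negative statistic, while a random walk stays well above the usual rejection region.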
  14. By: Herwartz, Helmut; Plödt, Martin
    Abstract: Apart from a priori assumptions on instantaneous or long run effects of structural shocks, sign restrictions have become a prominent means for structural vector autoregressive (SVAR) analysis. Moreover, second order heterogeneity of systems of time series can be fruitfully exploited for identification purposes in SVARs. We show by means of a Monte Carlo study that taking statistical information into account offers a more accurate quantification of the true structural relations. In contrast, resorting only to commonly used sign restrictions bears a higher risk of failing to recover these structural relations. As an empirical illustration we employ the statistical and the sign restriction approach in a stylized model of US monetary policy. By combining identifying information from both approaches we strive for improved insights into the effects of monetary policy on output. Our results point to a decline in real GDP after a monetary tightening at an intermediate horizon.
    JEL: C32 E47 C10
    Date: 2014
  15. By: Koen Jochmans (Département d'économie)
    Abstract: Empirical models for panel data frequently feature fixed effects in both directions of the panel. Settings where this is prevalent include student-teacher interaction, the allocation of workers to firms, and the import-export flows between countries. Estimation of such fixed-effect models is difficult. We derive moment conditions for models with multiplicative unobservables and fixed effects and use them to set up generalized method of moments estimators that have good statistical properties. We estimate a gravity equation with multilateral resistance terms as an application of our methods.
    Date: 2015–02
  16. By: Michael Creel
    Abstract: This paper presents a cross validation method for selection of statistics for Approximate Bayesian Computing, and for related estimation methods such as the Method of Simulated Moments. The method uses simulated annealing to minimize the cross validation criterion over a combinatorial search space that may contain many, many elements. An example, for which optimal statistics are known from theory, shows that the method is able to select optimal statistics out of a large set of candidate statistics.
    Keywords: Approximate Bayesian Computation; likelihood-free methods; selection of statistics; method of simulated moments
    JEL: E24 O41
    Date: 2015–01–22
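The role that summary statistics play in ABC, which is exactly what the selection method targets, is clearest in the basic rejection sampler. The toy location model below is hypothetical; the paper's cross-validation and simulated-annealing layer sits on top of a sampler like this one.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(obs_stats, stats_from, prior_draw, n_draws=2000, keep=0.05):
    # Basic ABC rejection: draw from the prior, simulate, keep the draws whose
    # summary statistics are closest (Euclidean distance) to the observed ones
    draws = np.array([prior_draw() for _ in range(n_draws)])
    dists = np.array([np.linalg.norm(stats_from(t) - obs_stats) for t in draws])
    return draws[dists <= np.quantile(dists, keep)]

def stats_from(mu):
    # summary statistics (mean, sd) of a simulated sample from N(mu, 1)
    x = rng.normal(mu, 1.0, size=100)
    return np.array([x.mean(), x.std()])

obs = stats_from(2.0)   # "observed" statistics, true mu = 2
kept = abc_rejection(obs, stats_from, lambda: rng.uniform(-5, 5))
```

With informative statistics the accepted draws concentrate around the true parameter; with poorly chosen statistics they would not, which is why statistic selection matters.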
  17. By: Andrea Discacciati (Karolinska Institute, Stockholm)
    Abstract: Data augmentation is a technique for conducting approximate Bayesian regression analysis. This technique is a form of penalized likelihood estimation where prior information, represented by one or more specific prior data records, generates a penalty function that imposes the desired priors on the regression coefficients. We present a new command, penlogit, that fits penalized logistic regression via data augmentation. We illustrate the command through an example using data from an epidemiological study.
    Date: 2014–11–13
  18. By: Daniela Osterrieder (Rutgers Business School and CREATES); Daniel Ventosa-Santaulària (Center for Research and Teaching in Economics); J. Eduardo Vera-Valdés (Aarhus University and CREATES)
    Abstract: Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest an instrumental variable approach and discuss issues of validity and relevance. Applying the procedure to the prediction of daily returns on the S&P 500, our empirical analysis confirms return predictability and a positive risk-return trade-off.
    JEL: G17 C22 C26 C58
    Date: 2015–01–29
  19. By: Grabka, Markus; Westermeier, Christian
    Abstract: Statistical analysis of survey data often faces missing data. As case-wise deletion and single imputation prove to have undesirable properties, multiple imputation remains the measure of choice to handle this problem. In a longitudinal study, where past or future data points might be available for some missing values, the question arises how to turn this advantage into better imputation models. In a simulation study the authors compare six combinations of cross-sectional and longitudinal imputation strategies for German wealth panel data (SOEP wealth module). The authors create simulation data sets by blanking out observed data points: they induce item non-response into the data by both a missing at random (MAR) and two separate missing not at random (MNAR) mechanisms. We test the performance of multiple imputation using chained equations (MICE), an imputation procedure for panel data known as the row-and-columns method, and a regression specification with correction for sample selection including a stochastic error term. The regression and MICE approaches serve as fallback methods when only cross-sectional data are available. Even though the regression approach omits certain stochastic components, and estimators based on its results are likely to underestimate the uncertainty of the imputation procedure, it performs only weakly compared with the MICE set-up. The row-and-columns method, a univariate method, performs well on both longitudinal and cross-sectional evaluation criteria. These results show that if the variables to be imputed are assumed to exhibit high state dependency, univariate imputation techniques such as the row-and-columns imputation should not be dismissed beforehand.
    JEL: C18 C83 C46
    Date: 2014
  20. By: Westerlund, Joakim (Department of Economics, Lund University); Reese, Simon (Department of Economics, Lund University); Narayan, Paresh (Centre for Research in Economics and Financial Econometrics, Deakin University)
    Abstract: Existing econometric approaches for studying price discovery presume that the number of markets is small, and their properties become suspect when this restriction is not met. They also require imposing identifying restrictions and are in many cases not suited to statistical inference. The current paper takes these shortcomings as a starting point to develop a factor analytical approach that makes use of the cross-sectional variation of the data, yet is very user-friendly in that it does not involve any identifying restrictions or obstacles to inference.
    Keywords: Price discovery; panel data; common factor models; cross-unit cointegration
    JEL: C12 C13 C33
    Date: 2014–12–30
  21. By: Umberto Cherubini; Sabrina Mulinacci
    Abstract: We propose a model and an estimation technique to distinguish systemic risk and contagion in credit risk. The main idea is to assume, for a set of $d$ obligors, a set of $d$ idiosyncratic shocks and a shock that triggers the default of all of them. All shocks are assumed to be linked by a dependence relationship, which in this paper is assumed to be exchangeable and Archimedean. This approach is able to encompass both systemic risk and contagion, with the Marshall-Olkin pure systemic risk model and the Archimedean contagion model as extreme cases. Moreover, we show that, assuming an affine structure for the intensities of idiosyncratic and systemic shocks and a Gumbel copula, the approach delivers a complete multivariate distribution with exponential marginal distributions. The model can be estimated by applying a moment matching procedure to the bivariate marginals. We also provide an easy visual check of the good specification of the model. The model is applied to a selected sample of banks from 8 European countries, assuming a common shock for every country. The model is found to be well specified for 4 of the 8 countries. We also provide the theoretical extension of the model to the non-exchangeable case and suggest possible avenues of research for its estimation.
    Date: 2015–02
  22. By: Manner, Hans; Blatt, Dominik; Candelon, Bertrand
    Abstract: This paper proposes an original three-part sequential testing procedure (STP) with which to test for contagion using a multivariate model. First, it identifies structural breaks in the volatility of a given set of countries. Then a structural break test is applied to the correlation matrix to identify and date the potential contagion mechanism. As a third element, the STP tests for the distinctiveness of the break dates previously found. Compared to traditional contagion tests in a bivariate set-up, the STP has high testing power and is able to locate the dates of contagion more precisely. Monte Carlo simulations underline the importance of separating variance and correlation break testing, the endogenous dating of the breakpoints and the usage of multi-dimensional data. The procedure is applied to the 1997 Asian Financial Crisis, revealing the chronological order of the crisis events.
    JEL: C32 G01 G15
    Date: 2014
  23. By: Massimo Baldini; Daniele Pacifico; Federica Termini
    Abstract: The aim of this paper is to present a new methodology for dealing with missing expenditure information in standard income surveys. Under given conditions, typical imputation procedures, such as statistical matching or regression-based models, can replicate well in the income survey both the unconditional density of household expenditure and its joint density with a set of socio-demographic variables that the two surveys have in common. However, standard imputation procedures may fail to capture the overall relation between income and expenditure, especially if the common control variables used for the imputation are weakly correlated with the missing information. The paper suggests a two-step imputation procedure that reproduces the joint relation between income and expenditure observed in external sources, while maintaining the advantages of traditional imputation methods. The proposed methodology is well suited to any empirical analysis that needs to relate income and consumption, such as the estimation of Engel curves or the evaluation of consumption taxes through micro-simulation models. An empirical application illustrates the potential of the technique for evaluating the distributive effects of consumption taxes and shows that common imputation methods may produce significantly biased results in terms of policy recommendations when the control variables used in the imputation procedure are weakly correlated with the missing variable.
    Keywords: expenditure imputation, matching, propensity score, tax incidence
    JEL: F14 F20 I23 J24
    Date: 2015–01
  24. By: McGovern, Mark E.; Bärnighausen, Till; Marra, Giampiero; Radice, Rosalba
    Abstract: Heckman-type selection models have been used to control HIV prevalence estimates for selection bias when participation in HIV testing and HIV status are associated after controlling for observed variables. These models typically rely on the strong assumption that the error terms in the participation and the outcome equations that comprise the model are distributed as bivariate normal. We introduce a novel approach for relaxing the bivariate normality assumption in selection models using copula functions. We apply this method to estimating HIV prevalence and new confidence intervals (CI) in the 2007 Zambia Demographic and Health Survey (DHS) by using interviewer identity as the selection variable that predicts participation (consent to test) but not the outcome (HIV status). We show in a simulation study that selection models can generate biased results when the bivariate normality assumption is violated. In the 2007 Zambia DHS, HIV prevalence estimates are similar irrespective of the structure of the association assumed between participation and outcome. For men, we estimate a population HIV prevalence of 21% (95% CI = 16%–25%) compared with 12% (11%–13%) among those who consented to be tested; for women, the corresponding figures are 19% (13%–24%) and 16% (15%–17%). Copula approaches to Heckman-type selection models are a useful addition to the methodological toolkit of HIV epidemiology and of epidemiology in general. We develop the use of this approach to systematically evaluate the robustness of HIV prevalence estimates based on selection models, both empirically and in a simulation study.
    Date: 2015–01
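A small simulation of the selection problem the paper addresses: when consent to test and HIV status share a latent (here bivariate normal) correlation, prevalence among consenters is biased. The correlation, thresholds, and sample size below are illustrative assumptions, not estimates from the Zambia DHS:

```python
import math
import random

random.seed(3)
n = 100_000
rho = -0.5   # latent correlation: HIV-positive people consent less often
cut = 0.84   # P(Z > 0.84) is about 0.20, so true prevalence is about 20%

tested = hiv_tested = hiv_all = 0
for _ in range(n):
    e_out = random.gauss(0.0, 1.0)
    # build a correlated selection error from the outcome error
    e_sel = rho * e_out + math.sqrt(1 - rho ** 2) * random.gauss(0.0, 1.0)
    hiv = e_out > cut
    consent = e_sel > 0.0
    hiv_all += hiv
    if consent:
        tested += 1
        hiv_tested += hiv

true_prev = hiv_all / n            # population prevalence
naive_prev = hiv_tested / tested   # prevalence among consenters only
print(round(true_prev, 3), round(naive_prev, 3))  # naive estimate is too low
```

This is the bivariate-normal benchmark case; the paper's contribution is to replace the joint normality assumption with copula functions so that the dependence structure itself can be varied.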
  25. By: Mutschler, Willi
    Abstract: Several formal methods have been proposed to check identification in DSGE models via (i) the autocovariogram (Iskrev 2010), (ii) the spectral density (Komunjer and Ng 2011; Qu and Tkachenko 2012), or (iii) Bayesian indicators (Koop et al. 2012). Even though the methods appear similar, there has been no study of the advantages and drawbacks of their implementations. The contribution of this paper is threefold. First, we derive all criteria in the same framework following Schmitt-Grohé and Uribe (2004). While Iskrev (2010) already uses analytical derivatives, Komunjer and Ng (2011) and Qu and Tkachenko (2012) rely on numerical methods. For a rigorous comparison we therefore show how to implement analytical derivatives in all criteria, and we argue in favor of analytical derivatives whenever feasible, because they are more robust and faster than numerical procedures. Second, we apply all methods to DSGE models that are known to lack identification. Our findings suggest that the methods usually reach the same conclusion; however, numerical errors due to nonlinearities and very large matrices may lead to unreliable or contradictory results. The example models show that evaluating different criteria also yields insight into the dynamic structure of the DSGE model. We argue that a thorough identification analysis requires awareness of the advantages and drawbacks of the different methods. Third, we extend the methods to higher-order approximations using the pruned state-space representation studied by Andreasen, Fernández-Villaverde and Rubio-Ramírez (2014). We argue that this can improve overall identification of a DSGE model by imposing additional restrictions on the mean and variance. In this way we are able to identify previously unidentified models.
    JEL: C10 E10 C50
    Date: 2014
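In their simplest form, the rank-based criteria compared in the paper ask whether the Jacobian of model-implied moments with respect to the structural parameters has full column rank. A toy two-parameter sketch, with invented moment functions rather than any actual DSGE model:

```python
def jacobian(f, theta, eps=1e-6):
    """Numerical (forward-difference) Jacobian of a vector function f at theta."""
    f0 = f(theta)
    J = []
    for i in range(len(f0)):
        row = []
        for j in range(len(theta)):
            bumped = list(theta)
            bumped[j] += eps
            row.append((f(bumped)[i] - f0[i]) / eps)
        J.append(row)
    return J

def det2(J):
    """Determinant of a 2x2 matrix; zero signals rank deficiency."""
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

# moments depend on the parameters only through their product:
# the Jacobian columns are proportional, so the model is not identified
not_identified = lambda th: [th[0] * th[1], (th[0] * th[1]) ** 2]
# moments separate the two parameters: full rank, locally identified
identified = lambda th: [th[0] + th[1], th[0] * th[1]]

theta = [1.0, 2.0]
print(abs(det2(jacobian(not_identified, theta))))  # ~0: rank deficient
print(abs(det2(jacobian(identified, theta))))      # ~1: full rank
```

The paper's point about analytical versus numerical derivatives shows up even here: the forward-difference Jacobian carries an O(eps) error, which in large ill-conditioned DSGE Jacobians can blur exactly this rank decision.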
  26. By: Weißbach, Rafael; Voß, Sebastian
    Abstract: We model credit rating histories as continuous-time discrete-state Markov processes. Infrequent monitoring of the debtors' solvency will result in erroneous observations of the rating transition times, and consequently in biased parameter estimates. We develop a score test against such measurement errors in the transition data that is independent of the error distribution. We derive the asymptotic chi-square distribution for the test statistic under the null by stochastic limit theory. The test is applied to an international corporate portfolio, while accounting for economic and debtor-specific covariates. The test indicates that measurement errors in the transition times are a real problem in practice.
    JEL: C41 C52 G33
    Date: 2014
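Under error-free continuous observation, the transition intensities of such a continuous-time Markov rating process have a simple maximum-likelihood estimator: the number of observed jumps out of a state divided by the total time spent in it. A two-state sketch with made-up intensities (the paper's score test against measurement error is not reproduced here):

```python
import random

random.seed(2)
TRUE_RATES = {1: 0.5, 2: 1.0}   # exit intensities of a two-state Markov chain
HORIZON = 5000.0                # total observation window

state, t = 1, 0.0
time_in = {1: 0.0, 2: 0.0}
jumps = {1: 0, 2: 0}
while t < HORIZON:
    # exponential holding time in the current state, truncated at the horizon
    hold = min(random.expovariate(TRUE_RATES[state]), HORIZON - t)
    time_in[state] += hold
    t += hold
    if t < HORIZON:
        jumps[state] += 1
        state = 3 - state   # flip between states 1 and 2

# MLE of each intensity: jumps out of the state / time spent in the state
q_hat = {s: jumps[s] / time_in[s] for s in (1, 2)}
print(q_hat)  # close to the true intensities 0.5 and 1.0
```

This estimator is the benchmark the paper starts from; its point is that infrequent monitoring distorts the observed jump times and hence biases these occupation-time ratios, which motivates the score test.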
  27. By: Davide Fiaschi; Angela Parenti
    Abstract: This paper shows how the interconnections between regions can be estimated with the connectedness matrix recently proposed by Diebold and Yilmaz (2014), and discusses how the connectedness matrix is closely related to the spatial weights matrix used in spatial econometrics. An empirical application using the growth-rate volatility of per capita GDP of 199 European NUTS2 regions (EU15) over the period 1981–2008 illustrates that our estimated connectedness matrix is not compatible with the most popular geographical weights matrices used in the literature.
    Keywords: First-order Contiguity, Distance-based Matrix, Connectedness Matrix, European Regions, Network.
    JEL: C23 R11 R12 O52
    Date: 2015–02–01
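A Diebold-Yilmaz connectedness table is built from forecast-error variance decompositions: entry (i, j) is the share of variable i's H-step forecast-error variance attributable to shocks to variable j. A minimal two-variable sketch, with an invented VAR(1) coefficient matrix and orthogonal unit-variance shocks (the paper works with 199 regions and an estimated model):

```python
def matmul2(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.5, 0.2],
     [0.1, 0.4]]   # illustrative VAR(1) coefficient matrix
H = 10             # forecast horizon

Psi = [[1.0, 0.0], [0.0, 1.0]]   # MA coefficients: Psi_h = A**h, starting at I
fev = [[0.0, 0.0], [0.0, 0.0]]   # cumulated squared MA coefficients
for _ in range(H):
    for i in range(2):
        for j in range(2):
            fev[i][j] += Psi[i][j] ** 2
    Psi = matmul2(A, Psi)

# row-normalise: share of i's forecast-error variance due to shocks to j
table = [[fev[i][j] / sum(fev[i]) for j in range(2)] for i in range(2)]
# total connectedness: average of the off-diagonal (cross-variable) shares
total = (table[0][1] + table[1][0]) / 2
print(table, round(total, 3))
```

The rows of `table` play the role of the (row-normalised) spatial weights the paper compares against: each row says how much of a region's forecast-error variance comes from shocks elsewhere.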
  28. By: Schreiber, Sven
    Abstract: The topic of this paper is the estimation uncertainty of the Stock-Watson and Gonzalo-Granger permanent-transitory decompositions in the framework of the co-integrated vector autoregression. We suggest an approach to construct the confidence interval of the transitory component estimate in a given period (e.g. the latest observation) by conditioning on the observed data in that period. To calculate asymptotically valid confidence intervals we use the delta method and two bootstrap variants. As an illustration we analyze the uncertainty of (US) output gap estimates in a system of output, consumption, and investment.
    JEL: C32 C15 E32
    Date: 2014

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.