
on Econometrics 
By:  Isaiah Andrews (Harvard Society of Fellows); Timothy B. Armstrong (Cowles Foundation, Yale University) 
Abstract:  We derive mean-unbiased estimators for the structural parameter in instrumental variables models where the sign of one or more first-stage coefficients is known. In the case with a single instrument, the unbiased estimator is unique. For cases with multiple instruments we propose a class of unbiased estimators and show that an estimator within this class is efficient when the instruments are strong while retaining unbiasedness in finite samples. We show numerically that unbiasedness does not come at a cost of increased dispersion: in the single-instrument case, the unbiased estimator is less dispersed than the 2SLS estimator. Our finite-sample results apply to normal models with known variance for the reduced-form errors, and imply analogous results under weak-instrument asymptotics with an unknown error distribution. 
Keywords:  Weak instruments, Unbiased estimation, Sign restrictions 
JEL:  C26 C36 
Date:  2015–02 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1984&r=ecm 
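The single-instrument construction rests on a classical normal-distribution identity: if Z ~ N(mu, sigma^2) and mu is known to be positive, then (1 - Phi(Z/sigma)) / (sigma * phi(Z/sigma)) is exactly unbiased for 1/mu. The Monte Carlo check below illustrates only this ingredient, not the authors' full estimator (which also accounts for the covariance between the reduced-form and first-stage estimates); all names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def unbiased_inverse_mean(z, sigma=1.0):
    """Unbiased estimator of 1/mu from z ~ N(mu, sigma^2) when mu > 0:
    E[(1 - Phi(z/sigma)) / (sigma * phi(z/sigma))] = 1/mu."""
    t = np.asarray(z) / sigma
    return norm.sf(t) / (sigma * norm.pdf(t))

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.0
z = rng.normal(mu, sigma, size=1_000_000)
est = unbiased_inverse_mean(z, sigma).mean()
print(est)   # close to 1/mu = 0.5
```

In the IV context, Z plays the role of the first-stage coefficient estimate; an unbiased estimate of its reciprocal is the key step towards an unbiased ratio (structural) estimate.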
By:  Aman Ullah (Department of Economics, University of California Riverside); Xinyu Zhang (Chinese Academy of Sciences) 
Abstract:  This paper studies grouped model averaging methods in finite-sample settings. Sufficient conditions under which the grouped model averaging estimator dominates the ordinary least squares estimator are provided. A class of grouped model averaging estimators, the g-class, is introduced, and its dominance condition over ordinary least squares is established. All theoretical findings are verified by simulation examples. We also apply the methods to the analysis of grain output data from China. 
Keywords:  Finite Sample Size, Mean Squared Error, Model Averaging, Sufficient Condition. 
JEL:  C13 C21 
Date:  2015–02 
URL:  http://d.repec.org/n?u=RePEc:ucr:wpaper:201501&r=ecm 
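The dominance result is in the spirit of Stein-type averaging. A minimal simulation, not the paper's g-class estimator, shows how even a fixed-weight average of a restricted and an unrestricted OLS fit can beat OLS in mean squared error when the restriction is nearly true; the design below is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 2000
beta = np.array([1.0, 0.2])        # second coefficient close to zero
mse_ols = mse_avg = 0.0
for _ in range(reps):
    X = rng.normal(size=(n, 2))
    y = X @ beta + rng.normal(size=n)
    b_full = np.linalg.lstsq(X, y, rcond=None)[0]    # unrestricted OLS
    b_restr = np.array([np.linalg.lstsq(X[:, :1], y, rcond=None)[0][0], 0.0])
    b_avg = 0.5 * (b_full + b_restr)                 # fixed-weight model average
    mse_ols += np.sum((b_full - beta) ** 2)
    mse_avg += np.sum((b_avg - beta) ** 2)
print(mse_avg / mse_ols)   # below one in this design: averaging dominates OLS
```

The paper's contribution is precisely to characterize when such dominance holds; here the weights and the near-true restriction are simply assumed.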
By:  Le, Vo Phuong Mai (Cardiff Business School); Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School) 
Abstract:  Using Monte Carlo experiments, we examine the performance of indirect inference tests of DSGE models in small samples, using various models in widespread use. We compare these with tests based on direct inference (using the Likelihood Ratio). We find that both tests have power so that a substantially false model will tend to be rejected by both; but that the power of the indirect inference test is by far the greater, necessitating re-estimation to ensure that the model is tested in its fullest sense. We also find that the small-sample bias with indirect estimation is around half of that with maximum likelihood estimation. 
Keywords:  Bootstrap; DSGE; Indirect Inference; Likelihood Ratio; New Classical; New Keynesian; Wald statistic 
JEL:  C12 C32 C52 E1 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:cdf:wpaper:2015/2&r=ecm 
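The logic of an indirect inference test can be sketched in a toy setting far simpler than a DSGE model: simulate from the candidate model, compute an auxiliary statistic (here an AR(1) coefficient), and reject when the observed statistic lies too far out in the model-implied distribution. The setup and tuning values below are illustrative assumptions, not the authors' design.

```python
import numpy as np

def ar1_coef(x):
    """Auxiliary-model statistic: OLS AR(1) coefficient."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

def simulate_model(rho, T, rng):
    """Candidate structural model: a zero-mean AR(1) with parameter rho."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(2)
T, S = 200, 499
data = simulate_model(0.9, T, rng)   # "observed" data; the true model has rho = 0.9

def ii_pvalue(rho0):
    """Indirect-inference p-value: share of model-simulated auxiliary statistics
    at least as far from their mean as the observed statistic is."""
    sims = np.array([ar1_coef(simulate_model(rho0, T, rng)) for _ in range(S)])
    return np.mean(np.abs(sims - sims.mean()) >= abs(ar1_coef(data) - sims.mean()))

p_true = ii_pvalue(0.9)    # true model: typically not rejected
p_false = ii_pvalue(0.3)   # substantially false model: rejected
print(p_true, p_false)
```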
By:  Kripfganz, Sebastian 
Abstract:  I derive the unconditional transformed likelihood function and its derivatives for a fixed-effects panel data model with time lags, spatial lags, and spatial time lags that encompasses the pure time dynamic and pure space dynamic models as special cases. In addition, the model can accommodate spatial dependence in the error term. I demonstrate that the model-consistent representation of the initial-period distribution involves higher-order spatial lag polynomials. Their order is linked to the minimal polynomial of the spatial weights matrix and, in general, tends to infinity with increasing sample size. Consistent estimation requires an appropriate truncation of these lag polynomials unless the spatial weights matrix has a regular structure. The finite-sample evidence from Monte Carlo simulations shows that a misspecification of the spatial structure for the initial observations results in considerable biases, while the correctly specified estimator behaves well. As an application, I estimate a time-space dynamic wage equation allowing for peer effects within households. 
JEL:  C13 C23 J31 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100604&r=ecm 
By:  Firmin Doko Tchatoka (School of Economics, University of Adelaide) 
Abstract:  This paper sheds new light on subset hypothesis testing in linear structural models in which instrumental variables (IVs) can be arbitrarily weak. For the first time, we investigate the validity of the bootstrap for Anderson-Rubin (AR) type tests of hypotheses specified on a subset of structural parameters, with or without identification. Our investigation focuses on two subset AR-type statistics based on the plug-in principle. The first uses the restricted limited information maximum likelihood (LIML) estimator as the plug-in method, and the second exploits the restricted two-stage least squares (2SLS) estimator. We provide an analysis of the limiting distributions of both the standard and proposed bootstrap AR statistics under the subset null hypothesis of interest. Our results provide some new insights and extensions of earlier studies. In all cases, we show that when identification is strong and the number of instruments is fixed, the bootstrap provides a higher-order approximation of the null limiting distributions of both plug-in subset statistics. However, the bootstrap is inconsistent when instruments are weak. This contrasts with the bootstrap of the AR statistic for the null hypothesis specified on the full vector of structural parameters, which remains valid even when identification is weak; see Moreira et al. (2009). We present a Monte Carlo experiment that confirms our theoretical findings. 
Keywords:  Subset AR test; bootstrap validity; bootstrap inconsistency; weak instruments; Edgeworth 
JEL:  C12 C13 C36 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:adl:wpaper:201501&r=ecm 
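For reference, the full-vector Anderson-Rubin statistic that the paper contrasts with its subset versions can be written down in a few lines. The sketch below is a generic textbook implementation on simulated data, not the authors' plug-in subset statistics; the data-generating numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def ar_statistic(y, Y, Z, beta0):
    """Full-vector Anderson-Rubin statistic for H0: beta = beta0; its level is
    robust to weak instruments (k * AR is asymptotically chi2(k) under H0)."""
    n, k = Z.shape
    u = y - Y @ beta0                                # structural residual under H0
    Pu = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]    # projection on the instruments
    return (u @ Pu / k) / ((u @ u - u @ Pu) / (n - k))

rng = np.random.default_rng(3)
n = 500
Z = rng.normal(size=(n, 2))                     # two instruments
v = rng.normal(size=n)                          # first-stage error
Y = (Z @ np.array([0.3, 0.2]) + v)[:, None]     # endogenous regressor
y = Y[:, 0] * 1.0 + 0.5 * v + rng.normal(size=n)   # true beta = 1
ar_true = ar_statistic(y, Y, Z, np.array([1.0]))
ar_false = ar_statistic(y, Y, Z, np.array([0.0]))
p_false = chi2.sf(2 * ar_false, df=2)           # false null is rejected
print(ar_true, ar_false, p_false)
```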
By:  Zhu, Ke 
Abstract:  This paper uses a random weighting (RW) method to bootstrap the critical values for the Ljung-Box/Monti portmanteau tests and weighted Ljung-Box/Monti portmanteau tests in weak ARMA models. Unlike existing methods, no user-chosen parameter is needed to implement the RW method. As an application, these four tests are used to check model adequacy in power GARCH models. Simulation evidence indicates that the weighted portmanteau tests have a power advantage over other existing tests. A real example on the S&P 500 index illustrates the merits of our testing procedure. As an extension, the blockwise RW method is also studied. 
Keywords:  Bootstrap method; Portmanteau test; Power GARCH models; Random weighting approach; Weak ARMA models; Weighted portmanteau test. 
JEL:  C0 C01 C12 
Date:  2015–02–06 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:61930&r=ecm 
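A minimal sketch of the statistics being bootstrapped: the standard Ljung-Box Q and one weighted variant that down-weights higher lags. The specific weighting below, in the style of Fisher and Gallagher (2012), is an illustrative assumption rather than necessarily the paper's weights, and the random-weighting bootstrap of critical values is omitted.

```python
import numpy as np
from scipy.stats import chi2

def acf(x, m):
    """Sample autocorrelations at lags 1..m."""
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, m + 1)])

def ljung_box(x, m):
    """Standard Ljung-Box Q statistic with m lags."""
    n = len(x)
    r = acf(x, m)
    return n * (n + 2) * np.sum(r**2 / (n - np.arange(1, m + 1)))

def weighted_ljung_box(x, m):
    """Weighted variant: lag k gets weight (m - k + 1)/m, down-weighting long lags."""
    n = len(x)
    r = acf(x, m)
    k = np.arange(1, m + 1)
    return n * (n + 2) * np.sum(((m - k + 1) / m) * r**2 / (n - k))

rng = np.random.default_rng(4)
wn = rng.normal(size=500)                 # white noise: no serial correlation
ar = np.empty(500)
ar[0] = 0.0
for t in range(1, 500):                   # strongly autocorrelated series
    ar[t] = 0.6 * ar[t - 1] + rng.normal()

q_wn = ljung_box(wn, 10)
p_ar = chi2.sf(ljung_box(ar, 10), df=10)  # near zero: adequacy rejected
print(q_wn, p_ar)
```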
By:  Tabri, Rami Victor 
Abstract:  Robust rankings of poverty are ones that do not rely on a single poverty measure with a single poverty line. Mathematically, such a robust ranking of two populations specifies a continuum of unconditional moment inequality constraints. If these constraints could be imposed in estimation, a statistical test could be performed using an empirical likelihood-ratio (ELR) test, a nonparametric version of the likelihood-ratio test in parametric inference. While these constraints cannot be imposed exactly, we show that they can be imposed approximately, with the approximation error vanishing asymptotically. We then propose a bootstrap test procedure that implements the resulting approximate ELR test. The paper derives the asymptotic properties of this test, presents Monte Carlo experiments that show improved power compared to existing tests such as that of Linton et al. (2010), and provides an empirical illustration using Canadian income distribution data. More generally, the bootstrap test procedure provides an asymptotically valid nonparametric test of a continuum of unconditional moment inequality constraints. The proofs exploit the fact that the constrained optimization problem is a concave semi-infinite programming problem. 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:syd:wpaper:201502&r=ecm 
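The "continuum of unconditional moment inequality constraints" can be made concrete: first-order dominance requires F_x(z) <= F_y(z) at every poverty line z, i.e., one inequality E[1{x<=z} - 1{y<=z}] <= 0 per z. The sketch below merely discretizes that continuum and measures the maximal violation on simulated data; it is not the paper's ELR bootstrap test, and all distributions are hypothetical.

```python
import numpy as np

def dominance_violation(x, y, grid):
    """Max violation of F_x(z) <= F_y(z) over a grid of poverty lines z,
    a discretized continuum of moment inequality constraints."""
    Fx = np.array([np.mean(x <= z) for z in grid])
    Fy = np.array([np.mean(y <= z) for z in grid])
    return np.max(Fx - Fy)

rng = np.random.default_rng(5)
rich = rng.lognormal(mean=0.5, sigma=0.6, size=2000)   # dominating income distribution
poor = rng.lognormal(mean=0.0, sigma=0.6, size=2000)
grid = np.linspace(0.1, 5.0, 100)
viol_ok = dominance_violation(rich, poor, grid)        # near zero: no violation
viol_bad = dominance_violation(poor, rich, grid)       # clearly positive
print(viol_ok, viol_bad)
```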
By:  Giovanni Bruno (Università Commerciale Luigi Bocconi, Milan); Orietta Dessy (Ca' Foscari University of Venice) 
Abstract:  We extend the univariate results in Wooldridge (2005) to multivariate probit models, proving the following. 1) Average partial effects (APEs) based on joint probabilities are consistently estimated by conventional multivariate probit models under general forms of conditionally independent latent heterogeneity (LH) as long as the only constraints beyond normalization, if any, are within-equation homogeneous restrictions. The normalization of choice is not neutral to consistency in models with cross-equation parameter restrictions beyond normalization, such as those implemented by Stata's asmprobit command or in the panel probit model: if the normalization is through an error covariance matrix in correlation form, consistency breaks down unless the LH components are truly homoskedastic. This is substantial because an error covariance matrix in correlation form is the only normalization permitted by Stata's biprobit and mvprobit commands or Limdep's BIVARIATE PROBIT and MPROBIT. Covariance restrictions beyond normalizations generally conflict with an arbitrary covariance matrix for the LH components. The multinomial probit model with i.i.d. errors, implemented by Stata's mprobit, is a case in point. 2) Conditional independence of the LH components is not generally sufficient for consistent estimation of APEs on conditional probabilities. Consistency is restored by maintaining an additional independence assumption. This holds true whether or not the response variables are used as regressors. 3) The dimensionality benefit observed by Mullahy (2011) in the estimation of partial effects extends to APEs. We exploit this feature in the design of a simple procedure for estimating APEs, which is both faster and more accurate than simulation-based codes, such as Stata's mvprobit and cmp. To demonstrate the finite-sample implications of our results, we carry out extensive Monte Carlo experiments with bivariate and trivariate probit models. 
Finally, we apply our procedure in (3) to Italian survey data of immigrants in order to estimate the APEs of a trivariate probit model of ethnic identity formation and economic performance. 
Date:  2014–11–13 
URL:  http://d.repec.org/n?u=RePEc:boc:isug14:10&r=ecm 
By:  Andrés Ramírez Hassan; Jhonatan Cardona Jiménez; Raul Pericchi Guerra 
Abstract:  In this paper we analyze the effect of four possible alternatives regarding the prior distributions in a linear model with autoregressive errors to predict piped water consumption: Normal-Gamma, Normal-Scaled Beta two, Studentized-Gamma and Student's t-Scaled Beta two. We show the effects of these prior distributions on the posterior distributions under different assumptions associated with the coefficient of variation of the prior hyperparameters in a context where there is a conflict between the sample information and the elicited hyperparameters. We show that the posterior parameters are less affected by the prior hyperparameters when the Studentized-Gamma and Student's t-Scaled Beta two models are used. We show that the Normal-Gamma model yields sensible predictions when the sample size is small. However, this property is lost when the experts overestimate the certainty of their knowledge. In the case that the experts greatly trust their beliefs, it is a good idea to use Student's t distribution as the prior distribution, because we obtain small posterior predictive errors. In addition, we find that the posterior predictive distributions using one of the versions of Student's t as the prior are robust to the coefficient of variation of the prior parameters. Finally, it is shown that the Normal-Gamma model has a posterior distribution of the variance concentrated near zero when there is a high level of confidence in the experts' knowledge: this implies a narrow posterior predictive credibility interval, especially using small sample sizes. 
Keywords:  Autoregressive model, Bayesian analysis, Forecast, Robust prior 
JEL:  C11 C53 
Date:  2014–07–23 
URL:  http://d.repec.org/n?u=RePEc:col:000122:012434&r=ecm 
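The prior-data conflict at the heart of the comparison can be reproduced in a one-parameter toy model: with a confident normal prior centered far from the data, the posterior mean compromises, whereas a heavy-tailed Student's t prior lets the data dominate. A grid-based sketch under hypothetical numbers, not the paper's autoregressive model:

```python
import numpy as np
from scipy.stats import norm, t as student_t

def posterior_mean(prior_pdf, ybar, se, grid):
    """Grid-approximated posterior mean of mu given one Gaussian summary ybar."""
    w = prior_pdf(grid) * norm.pdf(ybar, loc=grid, scale=se)
    w /= w.sum()
    return float(np.sum(grid * w))

grid = np.linspace(-10.0, 20.0, 6001)
ybar, se = 10.0, 1.0   # the conflict: prior centered at 0, data say 10
m_normal = posterior_mean(lambda m: norm.pdf(m, loc=0.0, scale=1.0), ybar, se, grid)
m_student = posterior_mean(lambda m: student_t.pdf(m, df=3, loc=0.0, scale=1.0), ybar, se, grid)
print(m_normal, m_student)   # normal prior compromises; t prior follows the data
```

With conjugate normal components the posterior mean is the precision-weighted average (here 5.0); under the t prior the posterior mean sits close to the data, the robustness property the abstract describes.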
By:  Sarnetzki, Florian; Dzemski, Andreas 
Abstract:  We provide an overidentification test for a nonparametric treatment model where individuals are allowed to select into treatment based on unobserved gains. Our test can be used to test the validity of instruments in a framework with essential heterogeneity (Imbens and Angrist 1994). The essential ingredient is to assume that a binary and a continuous instrument are available. The testable restriction is closely related to the overidentification of the Marginal Treatment Effect. We suggest a test statistic and characterize its asymptotic distribution and behavior under local alternatives. In simulations, we investigate the validity and finite-sample performance of an easy-to-implement wild bootstrap procedure. Finally, we illustrate the applicability of our method by studying two instruments from the literature on teenage pregnancies. This research is motivated by the observation that in the presence of essential heterogeneity classical GMM overidentification tests cannot be used to test for the validity of instruments (Heckman and Schmierer 2010). The test complements existing tests by considering for the first time the subpopulation of compliers. Our approach can be interpreted as a test of index sufficiency and is related to the test of the validity of the matching approach in Heckman et al. (1996, 1998). Conditional on covariates, the propensity score aggregates all information that the instruments provide about observed outcomes given that the model is correctly specified. The estimated propensity score enters our test statistic as a generated regressor. We quantify the effect of the first-stage estimation error and find that in order to have good power against local alternatives we have to reduce the bias from estimating the first stage nonparametrically, e.g., by fitting a higher-order local polynomial. For the second stage no bias reduction is necessary. 
Previous literature (Ying Ying Lee 2013) establishes the validity of a multiplier bootstrap to conduct inference in a treatment model with nonparametrically estimated regressors. Our simulations illustrate that a much easier-to-implement naïve wild bootstrap procedure can have good properties. In our application we consider two instruments that have been used in the analysis of the effect of teenage childbearing on high-school graduation. For the binary instrument we use teenage miscarriage, and for the continuous instrument we use age at first menstrual period. If teenage girls select into pregnancy based on some unobserved heterogeneity that is correlated with their likelihood of graduation, miscarriage does not constitute a valid instrument. Our test confirms this line of argument by rejecting that the treatment model is correctly specified. 
JEL:  C21 C14 C12 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100620&r=ecm 
By:  Schaumburg, Julia; Blasques, Francisco; Koopman, Siem Jan; Lucas, Andre 
Abstract:  A new model for time-varying spatial dependencies is introduced. It forms an extension to the popular spatial lag model and can be estimated conveniently by maximum likelihood. The spatial dependence parameter is assumed to follow a generalized autoregressive score (GAS) process. The theoretical properties of the model are established and its satisfactory finite-sample performance is shown in a small simulation study. In an empirical application, spatial dependencies between nine European sovereign CDS spreads are estimated for the period from November 2008 until October 2013. The empirical model features a spatial weights matrix constructed from cross-border lending data and regressors including country-specific and Europe-wide risk factors. The estimation results indicate a high, time-varying degree of spatial spillovers in the spread data. A spatial GAS model with t-distributed errors provides the best fit. There is evidence for a downturn in spatial dependence after the Greek default in winter 2012, which can be explained economically by a change in bank regulation. 
JEL:  C58 C23 G15 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100632&r=ecm 
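The GAS mechanism can be sketched in its simplest form, a time-varying Gaussian mean, where the score of the conditional log-density drives the parameter update. This is an illustrative reduction of the idea; the paper applies the same principle to a spatial dependence parameter, which is more involved.

```python
import numpy as np

def gas_filter(y, alpha):
    """GAS-style filter for a time-varying Gaussian mean mu_t with unit variance:
    the score of log N(y_t | mu_t, 1), namely y_t - mu_t, drives the update."""
    mu = np.zeros(len(y))
    for t in range(len(y) - 1):
        score = y[t] - mu[t]               # score of the Gaussian log-density
        mu[t + 1] = mu[t] + alpha * score  # score-driven parameter update
    return mu

rng = np.random.default_rng(6)
true_mu = np.where(np.arange(300) < 150, 0.0, 2.0)   # structural level shift
y = true_mu + rng.normal(size=300)
mu_hat = gas_filter(y, alpha=0.2)
print(mu_hat[140], mu_hat[299])   # tracks the level before and after the shift
```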
By:  Reese, Simon (Department of Economics, Lund University); Westerlund, Joakim (Department of Economics, Lund University) 
Abstract:  The cross-section average (CA) augmentation approach of Pesaran (2007) and Pesaran et al. (2013), and the principal-components-based panel analysis of nonstationarity in idiosyncratic and common components (PANIC) of Bai and Ng (2004, 2010) are among the most popular “second-generation” approaches for cross-sectionally correlated panels. One feature of these approaches is that they have different strengths and weaknesses. The purpose of the current paper is to develop PANICCA, a combined approach that exploits the strengths of both CA and PANIC. 
Keywords:  PANIC; cross-section average augmentation; unit root test; cross-section dependence; common factors 
JEL:  C12 C13 C33 C36 
Date:  2014–10–20 
URL:  http://d.repec.org/n?u=RePEc:hhs:lunewp:2015_003&r=ecm 
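The CA side of PANICCA can be illustrated with a Pesaran-style CADF regression: each unit's Dickey-Fuller regression is augmented with cross-section averages to absorb the common factors. The panel design below is a hypothetical sketch, not the paper's PANICCA procedure.

```python
import numpy as np

def cadf_tstat(y, ybar):
    """Pesaran-style CADF regression for one panel unit: Delta y_t on a constant,
    y_{t-1}, the lagged cross-section average, and its first difference;
    returns the t-statistic on y_{t-1}."""
    dy, dybar = np.diff(y), np.diff(ybar)
    X = np.column_stack([np.ones(len(dy)), y[:-1], ybar[:-1], dybar])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se

rng = np.random.default_rng(7)
T, N = 200, 30
loadings = rng.uniform(0.5, 1.5, N)

factor_ur = np.cumsum(rng.normal(size=T))    # nonstationary common factor
panel_ur = factor_ur[:, None] * loadings + np.cumsum(rng.normal(size=(T, N)), axis=0)
t_ur = np.mean([cadf_tstat(panel_ur[:, i], panel_ur.mean(axis=1)) for i in range(N)])

factor_st = np.zeros(T)                      # stationary common factor
for t in range(1, T):
    factor_st[t] = 0.5 * factor_st[t - 1] + rng.normal()
panel_st = factor_st[:, None] * loadings + rng.normal(size=(T, N))
t_st = np.mean([cadf_tstat(panel_st[:, i], panel_st.mean(axis=1)) for i in range(N)])
print(t_ur, t_st)   # stationary panel yields far more negative statistics
```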
By:  Kulaksizoglu, Tamer 
Abstract:  This paper replicates Leybourne et al. (1998), who propose a Dickey-Fuller-type test for a unit root that is most appropriate when there is reason to suspect the possibility of deterministic structural change in the series. We find that our replicated results are quite similar to the authors' results. We also make the Ox source code available. 
Keywords:  Dickey-Fuller test, Integrated process, Nonlinear trend, Structural change 
JEL:  C12 C15 
Date:  2015–02–04 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:61867&r=ecm 
By:  Herwartz, Helmut; Plödt, Martin 
Abstract:  Apart from a priori assumptions on instantaneous or long-run effects of structural shocks, sign restrictions have become a prominent means for structural vector autoregressive (SVAR) analysis. Moreover, second-order heterogeneity of systems of time series can be fruitfully exploited for identification purposes in SVARs. We show by means of a Monte Carlo study that taking statistical information into account offers a more accurate quantification of the true structural relations. In contrast, resorting only to commonly used sign restrictions bears a higher risk of failing to recover these structural relations. As an empirical illustration we employ the statistical and the sign restriction approaches in a stylized model of US monetary policy. By combining identifying information from both approaches we strive for improved insights into the effects of monetary policy on output. Our results point to a decline in real GDP after a monetary tightening at an intermediate horizon. 
JEL:  C32 E47 C10 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100326&r=ecm 
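The commonly used sign-restriction machinery can be sketched with the standard QR-rotation algorithm in the spirit of Rubio-Ramirez, Waggoner and Zha (2010): draw orthogonal rotations of the Cholesky factor of the reduced-form covariance and keep those whose impact responses satisfy the signs. The covariance matrix and restrictions below are illustrative.

```python
import numpy as np

def draw_sign_identified(sigma_u, check_signs, rng, max_tries=10_000):
    """Draw rotations of the Cholesky factor of the reduced-form covariance
    until the implied impact matrix satisfies the sign restrictions."""
    P = np.linalg.cholesky(sigma_u)
    for _ in range(max_tries):
        W = rng.normal(size=sigma_u.shape)
        Q, R = np.linalg.qr(W)
        Q = Q @ np.diag(np.sign(np.diag(R)))   # normalize for a uniform draw
        B = P @ Q                              # candidate impact matrix
        if check_signs(B):
            return B
    raise RuntimeError("no draw satisfied the sign restrictions")

rng = np.random.default_rng(8)
sigma_u = np.array([[1.0, 0.3], [0.3, 0.8]])
# restriction: shock 1 moves both variables up on impact
ok = lambda B: B[0, 0] > 0 and B[1, 0] > 0
B = draw_sign_identified(sigma_u, ok, rng)
print(B[:, 0])   # first column obeys the sign restrictions
```

By construction B @ B.T reproduces the reduced-form covariance for every accepted draw, which is why sign restrictions identify a set rather than a point.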
By:  Koen Jochmans (Département d'économie) 
Abstract:  Empirical models for panel data frequently feature fixed effects in both directions of the panel. Settings where this is prevalent include student-teacher interaction, the allocation of workers to firms, and the import-export flows between countries. Estimation of such fixed-effect models is difficult. We derive moment conditions for models with multiplicative unobservables and fixed effects and use them to set up generalized method of moments estimators that have good statistical properties. We estimate a gravity equation with multilateral resistance terms as an application of our methods. 
Date:  2015–02 
URL:  http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/75dbbb2hc596np6q8flqf6i79k&r=ecm 
By:  Michael Creel 
Abstract:  This paper presents a cross validation method for selecting statistics for Approximate Bayesian Computation, and for related estimation methods such as the Method of Simulated Moments. The method uses simulated annealing to minimize the cross validation criterion over a combinatorial search space that may contain a very large number of elements. An example, for which optimal statistics are known from theory, shows that the method is able to select the optimal statistics out of a large set of candidates. 
Keywords:  Approximate Bayesian Computation; likelihood-free methods; selection of statistics; method of simulated moments 
JEL:  E24 O41 
Date:  2015–01–22 
URL:  http://d.repec.org/n?u=RePEc:aub:autbar:950.15&r=ecm 
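The combinatorial search can be sketched as simulated annealing over binary inclusion vectors. The toy criterion below, with three "informative" statistics and pure-noise candidates, stands in for the paper's cross validation criterion and is purely illustrative.

```python
import numpy as np

def anneal_select(loss, n_stats, steps=2000, t0=1.0, rng=None):
    """Simulated annealing over binary inclusion vectors: flip one statistic in
    or out per step; accept worse moves with probability exp(-delta/temp)."""
    rng = rng or np.random.default_rng()
    s = rng.integers(0, 2, n_stats).astype(bool)
    cur_loss = loss(s)
    best, best_loss = s.copy(), cur_loss
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-3       # linear cooling schedule
        cand = s.copy()
        cand[rng.integers(n_stats)] ^= True      # flip one inclusion flag
        cl = loss(cand)
        if cl < cur_loss or rng.random() < np.exp((cur_loss - cl) / temp):
            s, cur_loss = cand, cl
            if cl < best_loss:
                best, best_loss = cand.copy(), cl
    return best, best_loss

# toy criterion: statistics 0-2 are informative, the other twelve are noise
target = np.array([True, True, True] + [False] * 12)
loss = lambda s: np.sum(s != target) + 0.01 * s.sum()
rng = np.random.default_rng(9)
sel, val = anneal_select(loss, 15, rng=rng)
print(sel, val)   # recovers the informative subset
```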
By:  Andrea Discacciati (Karolinska Institute, Stockholm) 
Abstract:  Data augmentation is a technique for conducting approximate Bayesian regression analysis. This technique is a form of penalized likelihood estimation where prior information, represented by one or more specific prior data records, generates a penalty function that imposes the desired priors on the regression coefficients. We present a new command, penlogit, which fits penalized logistic regression via data augmentation. We illustrate the command through an example using data from an epidemiological study. 
Date:  2014–11–13 
URL:  http://d.repec.org/n?u=RePEc:boc:isug14:03&r=ecm 
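The data augmentation idea is easy to demonstrate outside Stata: append prior data records to the data set and fit ordinary weighted logistic regression, which is then equivalent to penalized likelihood. The sketch below is a generic Python illustration with hypothetical prior records, not the penlogit command itself.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logit(X, y, w=None):
    """Weighted logistic regression by direct likelihood maximization."""
    w = np.ones(len(y)) if w is None else w
    nll = lambda b: -np.sum(w * (y * (X @ b) - np.logaddexp(0.0, X @ b)))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(10)
n = 100
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.2 + 2.5 * x)))).astype(float)

b_mle = fit_logit(X, y)
# Prior data records expressing "the slope is near zero": at covariate values
# +1 and -1, one success and one failure each (weight 2) act as a penalty.
X_aug = np.vstack([X, [[0.0, 1.0], [0.0, 1.0], [0.0, -1.0], [0.0, -1.0]]])
y_aug = np.concatenate([y, [1.0, 0.0, 1.0, 0.0]])
w_aug = np.concatenate([np.ones(n), np.full(4, 2.0)])
b_pen = fit_logit(X_aug, y_aug, w_aug)
print(b_mle[1], b_pen[1])   # the augmented fit shrinks the slope toward zero
```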
By:  Daniela Osterrieder (Rutgers Business School and CREATES); Daniel Ventosa-Santaulària (Center for Research and Teaching in Economics); J. Eduardo Vera-Valdés (Aarhus University and CREATES) 
Abstract:  Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, nonstandard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest an instrumental variable approach and discuss issues of validity and relevance. Applying the procedure to the prediction of daily returns on the S&P 500, our empirical analysis confirms return predictability and a positive risk-return trade-off. 
JEL:  G17 C22 C26 C58 
Date:  2015–01–29 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201509&r=ecm 
By:  Grabka, Markus; Westermeier, Christian 
Abstract:  Statistical analysis of survey data often faces missing data. As case-wise deletion and single imputation have undesirable properties, multiple imputation remains the standard remedy. In a longitudinal study, where past or future data points might be available for some missing values, the question arises how to turn this advantage into better imputation models. In a simulation study the authors compare six combinations of cross-sectional and longitudinal imputation strategies for German wealth panel data (SOEP wealth module). The authors create simulation data sets by blanking out observed data points: they induce item non-response into the data by both a missing at random (MAR) and two separate missing not at random (MNAR) mechanisms. We test the performance of multiple imputation using chained equations (MICE), an imputation procedure for panel data known as the row-and-columns method, and a regression specification with correction for sample selection including a stochastic error term. The regression and MICE approaches serve as fallback methods when only cross-sectional data is available. Even though the regression approach omits certain stochastic components and estimators based on its results are likely to underestimate the uncertainty of the imputation procedure, it performs weakly against the MICE setup. The row-and-columns method, a univariate method, performs well considering both longitudinal and cross-sectional evaluation criteria. These results show that if the variables to be imputed are assumed to exhibit high state dependence, univariate imputation techniques such as the row-and-columns imputation should not be dismissed beforehand. 
JEL:  C18 C83 C46 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100353&r=ecm 
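The row-and-columns idea can be sketched as an additive fill-in that pools a person's other waves with the current wave's cross-section. This is a deterministic toy variant on hypothetical data; actual survey implementations add a stochastic term, as the abstract notes for the regression approach.

```python
import numpy as np

def row_and_columns_impute(panel):
    """Row-and-columns imputation for a person-by-wave panel: a missing cell is
    filled with (person mean) + (wave mean) - (grand mean), combining
    longitudinal and cross-sectional information."""
    row_mean = np.nanmean(panel, axis=1, keepdims=True)
    col_mean = np.nanmean(panel, axis=0, keepdims=True)
    fill = row_mean + col_mean - np.nanmean(panel)
    return np.where(np.isnan(panel), fill, panel)

rng = np.random.default_rng(11)
person = rng.normal(0.0, 2.0, size=(200, 1))   # persistent person effect
wave = np.array([[0.0, 0.5, 1.0, 1.5]])        # common wave effect
data = person + wave + rng.normal(0.0, 0.5, size=(200, 4))
mask = rng.random(data.shape) < 0.2            # induce item non-response
mask[:, 0] = False                             # first wave fully observed
observed = np.where(mask, np.nan, data)
imputed = row_and_columns_impute(observed)
rmse = np.sqrt(np.mean((imputed[mask] - data[mask]) ** 2))
print(rmse)   # far below the cross-sectional spread of about 2
```

Because the fill exploits the person's own (highly state-dependent) history, the error is driven mostly by the transitory noise, which is the intuition behind the method's good performance in the study.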
By:  Westerlund, Joakim (Department of Economics, Lund University); Reese, Simon (Department of Economics, Lund University); Narayan, Paresh (Centre for Research in Economics and Financial Econometrics, Deakin University) 
Abstract:  Existing econometric approaches for studying price discovery presume that the number of markets is small, and their properties become suspect when this restriction is not met. They also require identifying restrictions and are in many cases not suitable for statistical inference. The current paper takes these shortcomings as a starting point to develop a factor analytical approach that makes use of the cross-sectional variation of the data, yet is very user-friendly in that it does not involve any identifying restrictions or obstacles to inference. 
Keywords:  Price discovery; panel data; common factor models; cross-unit cointegration 
JEL:  C12 C13 C33 
Date:  2014–12–30 
URL:  http://d.repec.org/n?u=RePEc:hhs:lunewp:2015_004&r=ecm 
By:  Umberto Cherubini; Sabrina Mulinacci 
Abstract:  We propose a model and an estimation technique to distinguish systemic risk and contagion in credit risk. The main idea is to assume, for a set of $d$ obligors, a set of $d$ idiosyncratic shocks and a shock that triggers the default of all of them. All shocks are assumed to be linked by a dependence relationship, which in this paper is assumed to be exchangeable and Archimedean. This approach is able to encompass both systemic risk and contagion, with the Marshall-Olkin pure systemic risk model and the Archimedean contagion model as extreme cases. Moreover, we show that assuming an affine structure for the intensities of idiosyncratic and systemic shocks and a Gumbel copula, the approach delivers a complete multivariate distribution with exponential marginal distributions. The model can be estimated by applying a moment matching procedure to the bivariate marginals. We also provide an easy visual check of the good specification of the model. The model is applied to a selected sample of banks for 8 European countries, assuming a common shock for every country. The model is found to be well specified for 4 of the 8 countries. We also provide the theoretical extension of the model to the non-exchangeable case and we suggest possible avenues of research for the estimation. 
Date:  2015–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1502.01918&r=ecm 
By:  Manner, Hans; Blatt, Dominik; Candelon, Bertrand 
Abstract:  This paper proposes an original three-part sequential testing procedure (STP) with which to test for contagion using a multivariate model. First, it identifies structural breaks in the volatility of a given set of countries. Then a structural break test is applied to the correlation matrix to identify and date the potential contagion mechanism. As a third element, the STP tests for the distinctiveness of the break dates previously found. Compared to traditional contagion tests in a bivariate setup, the STP has high testing power and is able to locate the dates of contagion more precisely. Monte Carlo simulations underline the importance of separating variance and correlation break testing, the endogenous dating of the breakpoints, and the usage of multidimensional data. The procedure is applied to the 1997 Asian Financial Crisis, revealing the chronological order of the crisis events. 
JEL:  C32 G01 G15 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100411&r=ecm 
By:  Massimo Baldini; Daniele Pacifico; Federica Termini 
Abstract:  The aim of this paper is to present a new methodology for dealing with missing expenditure information in standard income surveys. Under given conditions, typical imputation procedures, such as statistical matching or regression-based models, can replicate well in the income survey both the unconditional density of household expenditure and its joint density with a set of socio-demographic variables that the two surveys have in common. However, standard imputation procedures may fail to capture the overall relation between income and expenditure, especially if the common control variables used for the imputation are weakly correlated with the missing information. The paper suggests a two-step imputation procedure that reproduces the joint relation between income and expenditure observed from external sources, while maintaining the advantages of traditional imputation methods. The proposed methodology is well suited to any empirical analysis that needs to relate income and consumption, such as the estimation of Engel curves or the evaluation of consumption taxes through microsimulation models. An empirical application demonstrates the potential of such a technique for the evaluation of the distributive effects of consumption taxes and shows that common imputation methods may produce significantly biased results in terms of policy recommendations when the control variables used for the imputation procedure are weakly correlated with the missing variable. 
Keywords:  expenditure imputation, matching, propensity score, tax incidence 
JEL:  F14 F20 I23 J24 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:mod:cappmo:0116&r=ecm 
By:  McGovern, Mark E.; Bärnighausen, Till; Marra, Giampiero; Radice, Rosalba 
Abstract:  Heckman-type selection models have been used to control HIV prevalence estimates for selection bias when participation in HIV testing and HIV status are associated after controlling for observed variables. These models typically rely on the strong assumption that the error terms in the participation and the outcome equations that comprise the model are distributed as bivariate normal. We introduce a novel approach for relaxing the bivariate normality assumption in selection models using copula functions. We apply this method to estimating HIV prevalence and new confidence intervals (CI) in the 2007 Zambia Demographic and Health Survey (DHS) by using interviewer identity as the selection variable that predicts participation (consent to test) but not the outcome (HIV status). We show in a simulation study that selection models can generate biased results when the bivariate normality assumption is violated. In the 2007 Zambia DHS, HIV prevalence estimates are similar irrespective of the structure of the association assumed between participation and outcome. For men, we estimate a population HIV prevalence of 21% (95% CI = 16%–25%) compared with 12% (11%–13%) among those who consented to be tested; for women, the corresponding figures are 19% (13%–24%) and 16% (15%–17%). Copula approaches to Heckman-type selection models are a useful addition to the methodological toolkit of HIV epidemiology and of epidemiology in general. We develop the use of this approach to systematically evaluate the robustness of HIV prevalence estimates based on selection models, both empirically and in a simulation study. 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:199101&r=ecm 
By:  Mutschler, Willi 
Abstract:  Several formal methods have been proposed to check identification in DSGE models via (i) the autocovariogram (Iskrev 2010), (ii) the spectral density (Komunjer and Ng 2011; Qu and Tkachenko 2012), or (iii) Bayesian indicators (Koop et al. 2012). Even though all methods seem similar, there has been no study of the advantages and drawbacks of implementing the different methods. The contribution of this paper is threefold: First, we derive all criteria in the same framework following Schmitt-Grohé and Uribe (2004). While Iskrev (2010) already uses analytical derivatives, Komunjer and Ng (2011) and Qu and Tkachenko (2012) rely on numerical methods. For a rigorous comparison we thus show how to implement analytical derivatives in all criteria. We argue in favor of using analytical derivatives, whenever feasible, due to their robustness and greater speed compared with numerical procedures. Second, we apply all methods to DSGE models that are known to lack identification. Our findings suggest that most of the time the methods come to the same conclusion; however, numerical errors due to nonlinearities and very large matrices may lead to unreliable or contradictory conclusions. The example models show that by evaluating different criteria we also gain insight into the dynamic structure of the DSGE model. We argue that in order to thoroughly analyze identification, one has to be aware of the advantages and drawbacks of the different methods. Third, we extend the methods to higher-order approximations given the pruned state-space representation studied by Andreasen, Fernández-Villaverde and Rubio-Ramírez (2014). It is argued that this can improve overall identification of a DSGE model via imposing additional restrictions on the mean and variance. In this way we are able to identify previously unidentified models. 
JEL:  C10 E10 C50 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100598&r=ecm 
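The rank-based logic shared by these identification criteria can be sketched on a toy model: compute the Jacobian of the model-implied moments with respect to the parameters and check its rank. Below, an AR(1)'s autocovariances stand in for DSGE moments, and finite differences stand in for the analytical derivatives the paper advocates:

```python
import numpy as np

# Autocovariances of a stationary AR(1): gamma_j = sigma^2 * rho^j / (1 - rho^2).
# These stand in for the model-implied moments m(theta) of a DSGE model.
def autocovs(theta, n_lags):
    rho, sigma = theta
    gamma0 = sigma**2 / (1.0 - rho**2)
    return np.array([gamma0 * rho**j for j in range(n_lags)])

# Central finite differences; analytical derivatives would replace this step.
def jacobian(f, theta, eps=1e-6):
    theta = np.asarray(theta, dtype=float)
    cols = []
    for k in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        cols.append((f(tp) - f(tm)) / (2.0 * eps))
    return np.column_stack(cols)

theta0 = np.array([0.9, 0.5])
rank_full = np.linalg.matrix_rank(jacobian(lambda t: autocovs(t, 3), theta0))
rank_var  = np.linalg.matrix_rank(jacobian(lambda t: autocovs(t, 1), theta0))
print(rank_full)  # 2: both parameters locally identified from three autocovariances
print(rank_var)   # 1: rho and sigma not separately identified from gamma_0 alone
```

The rank-deficient second case mirrors the paper's point that the chosen moment set, not just the model, determines whether a criterion flags a lack of identification.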
By:  Weißbach, Rafael; Voß, Sebastian 
Abstract:  We model credit rating histories as continuous-time discrete-state Markov processes. Infrequent monitoring of the debtors' solvency will result in erroneous observations of the rating transition times, and consequently in biased parameter estimates. We develop a score test against such measurement errors in the transition data that is independent of the error distribution. We derive the asymptotic chi-square distribution of the test statistic under the null using stochastic limit theory. The test is applied to an international corporate portfolio, while accounting for economic and debtor-specific covariates. The test indicates that measurement errors in the transition times are a real problem in practice. 
JEL:  C41 C52 G33 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100532&r=ecm 
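For intuition, here is a minimal sketch of the baseline estimator the test guards: with a continuously observed (error-free) path, the MLE of each off-diagonal transition intensity is the transition count divided by the time spent in the origin state. The two-state generator below is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state rating process with known generator (intensities)
Q = np.array([[-0.5,  0.5],
              [ 0.2, -0.2]])

# Simulate one continuously observed path and collect sufficient statistics:
# N[i, j] = number of i -> j transitions, R[i] = total time spent in state i.
T_total, t, state = 5000.0, 0.0, 0
N = np.zeros((2, 2))
R = np.zeros(2)
while True:
    dwell = rng.exponential(1.0 / -Q[state, state])
    if t + dwell >= T_total:
        R[state] += T_total - t
        break
    R[state] += dwell
    t += dwell
    nxt = 1 - state                 # with two states there is only one jump target
    N[state, nxt] += 1
    state = nxt

Q_hat_off = N / R[:, None]          # MLE of the off-diagonal intensities
print(Q_hat_off)
```

The paper's concern is precisely that, when the path is only monitored infrequently, the observed transition times feeding N and R are mismeasured and this simple estimator becomes biased.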
By:  Davide Fiaschi; Angela Parenti 
Abstract:  This paper shows how the interconnections between regions can be estimated using the connectedness matrix recently proposed by Diebold and Yilmaz (2014), and discusses how the connectedness matrix is closely related to the spatial weights matrix used in spatial econometrics. An empirical application using the growth rate volatility of per capita GDP of 199 European NUTS2 regions (EU15) over the period 1981–2008 illustrates that our estimated connectedness matrix is not compatible with the most popular geographical weights matrices used in the literature. 
Keywords:  First-order Contiguity, Distance-based Matrix, Connectedness Matrix, European Regions, Network. 
JEL:  C23 R11 R12 O52 
Date:  2015–02–01 
URL:  http://d.repec.org/n?u=RePEc:pie:dsedps:2015/198&r=ecm 
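A connectedness matrix in the Diebold-Yilmaz sense can be sketched from a small VAR via the generalized forecast-error variance decomposition of Pesaran and Shin; the VAR coefficients and error covariance below are made-up placeholders for the regional volatility system:

```python
import numpy as np

# Illustrative 3-variable VAR(1), y_t = A y_{t-1} + e_t, Cov(e_t) = Sigma
A = np.array([[0.5, 0.1, 0.0],
              [0.2, 0.4, 0.1],
              [0.0, 0.3, 0.5]])
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
n, H = A.shape[0], 10

# MA(infinity) coefficients of the VAR(1), truncated at horizon H
Phi = [np.linalg.matrix_power(A, h) for h in range(H)]

# Generalized forecast-error variance decomposition (Pesaran-Shin),
# the building block of the Diebold-Yilmaz connectedness table
theta = np.zeros((n, n))
for i in range(n):
    e_i = np.eye(n)[:, i]
    denom = sum(e_i @ P @ Sigma @ P.T @ e_i for P in Phi)
    for j in range(n):
        e_j = np.eye(n)[:, j]
        theta[i, j] = sum((e_i @ P @ Sigma @ e_j) ** 2 for P in Phi) / (Sigma[j, j] * denom)

C = theta / theta.sum(axis=1, keepdims=True)   # row-normalize: rows sum to one
print(C.round(3))
```

Each row of `C` says how much of a region's forecast-error variance comes from shocks to every region, which is what invites the comparison with a spatial weights matrix: both are row-normalized descriptions of cross-regional linkages.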
By:  Schreiber, Sven 
Abstract:  The topic of this paper is the estimation uncertainty of the Stock-Watson and Gonzalo-Granger permanent-transitory decompositions in the framework of the cointegrated vector autoregression. We suggest an approach to constructing the confidence interval of the transitory component estimate in a given period (e.g. the latest observation) by conditioning on the observed data in that period. To calculate asymptotically valid confidence intervals we use the delta method and two bootstrap variants. As an illustration we analyze the uncertainty of (US) output gap estimates in a system of output, consumption, and investment. 
JEL:  C32 C15 E32 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:vfsc14:100582&r=ecm 
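As a toy version of the bootstrap idea (a univariate AR(1) gap rather than the paper's cointegrated VAR), the sketch below re-estimates the permanent level on residual-bootstrap paths while conditioning on the observed final observation; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the paper's setting: AR(1) level y_t = c + rho*y_{t-1} + e_t,
# "permanent" part mu = c / (1 - rho), "transitory" part y_T - mu.
n, c, rho = 300, 1.0, 0.8
y = np.empty(n)
y[0] = c / (1.0 - rho)
for t in range(1, n):
    y[t] = c + rho * y[t - 1] + rng.normal()

def ols_ar1(y):
    X = np.column_stack([np.ones(y.size - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0]

c_hat, rho_hat = ols_ar1(y)
resid = y[1:] - c_hat - rho_hat * y[:-1]
gap_hat = y[-1] - c_hat / (1.0 - rho_hat)   # transitory component at the last period

# Residual bootstrap: resample paths, re-estimate the permanent level,
# and keep the observed y_T fixed, mimicking the conditioning idea.
gaps = []
for _ in range(999):
    e = rng.choice(resid, size=n - 1, replace=True)
    yb = np.empty(n)
    yb[0] = y[0]
    for t in range(1, n):
        yb[t] = c_hat + rho_hat * yb[t - 1] + e[t - 1]
    cb, rb = ols_ar1(yb)
    gaps.append(y[-1] - cb / (1.0 - rb))

lo, hi = np.percentile(gaps, [2.5, 97.5])
print(f"gap estimate {gap_hat:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

The paper works instead with the Stock-Watson and Gonzalo-Granger decompositions of a cointegrated VAR and also provides delta-method intervals; this sketch only conveys why the transitory component inherits uncertainty from the estimated permanent part.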