
on Econometrics 
By:  Jiti Gao; Han Hong 
Abstract:  In this paper we study a statistical method of implementing quasi-Bayes estimators for nonlinear and nonseparable GMM models, motivated by the ideas proposed in Chernozhukov and Hong (2003) and Creel and Kristensen (2011), which combines simulation with nonparametric regression in the computation of GMM models. We provide formal conditions under which frequentist inference is asymptotically valid and demonstrate the validity of using posterior quantiles. We also show that in this setting, local linear kernel regression methods have theoretical advantages over local constant kernel methods that are also reflected in finite-sample simulation results. Our results apply to both exactly identified and over-identified models. These estimators do not need to rely on numerical optimization or Markov chain Monte Carlo simulations. They provide an effective complement to classical M-estimators and to MCMC methods, and can be applied to both likelihood-based models and method-of-moments-based models. 
Keywords:  M-estimators, Markov Chain Monte Carlo methods, Nonparametric Regressions. 
JEL:  C12 C15 C22 C52 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201424&r=ecm 
By:  Guerron-Quintana, Pablo; Inoue, Atsushi; Kilian, Lutz 
Abstract:  One of the leading methods of estimating the structural parameters of DSGE models is the VAR-based impulse response matching estimator. The existing asymptotic theory for this estimator does not cover situations in which the number of impulse response parameters exceeds the number of VAR model parameters. Situations in which this order condition is violated arise routinely in applied work. We establish the consistency of the impulse response matching estimator in this situation, we derive its asymptotic distribution, and we show how this distribution can be approximated by bootstrap methods. Our methods of inference remain asymptotically valid when the order condition is satisfied, regardless of whether the usual rank condition for the application of the delta method holds. Our analysis sheds new light on the choice of the weighting matrix and covers both weakly and strongly identified DSGE model parameters. We also show that under our assumptions special care is needed to ensure the asymptotic validity of Bayesian methods of inference. A simulation study suggests that the frequentist and Bayesian point and interval estimators we propose are reasonably accurate in finite samples. We also show that using these methods may affect the substantive conclusions in empirical work. 
Keywords:  structural estimation, DSGE, VAR, impulse response, nonstandard asymptotics, bootstrap, weak identification, robust inference 
JEL:  C32 C52 E30 E50 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:cfswop:498&r=ecm 
By:  Clifford Lam; Pedro Souza 
Abstract:  This paper proposes a model for estimating the underlying cross-sectional dependence structure of a large panel of time series. Owing to technical difficulties, such a structure has usually been assumed before further analysis. We propose to estimate it by penalizing the elements of the spatial weight matrices using the adaptive LASSO proposed by Zou (2006). Non-asymptotic oracle inequalities and the asymptotic sign consistency of the estimators are proved when the dimension of the time series can be larger than the sample size, with the two tending to infinity jointly. Asymptotic normality of the LASSO/adaptive LASSO estimator for the model's regression parameter is also presented. All the proofs involve non-standard analysis of LASSO/adaptive LASSO estimators, since our model, albeit resembling a standard regression, always has the response vector as one of the covariates. A block coordinate descent algorithm is introduced, with simulations and a real data analysis carried out to demonstrate the performance of our estimators. 
Keywords:  spatial econometrics, adaptive LASSO, sign consistency, asymptotic normality, non-asymptotic oracle inequalities, spatial weight matrices 
JEL:  C33 C4 C52 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/578&r=ecm 
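The adaptive-LASSO reweighting that Lam and Souza build on can be sketched in a few lines. This is an illustrative toy on made-up regression data, not their spatial-weight-matrix estimator: penalty weights 1/|β̂_init|^γ from a first-stage OLS fit, plugged into coordinate-descent LASSO (Zou, 2006).

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in coordinate-descent LASSO."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam, gamma=1.0, n_iter=200):
    """Adaptive LASSO via coordinate descent: penalty weights are
    1/|beta_init|^gamma with beta_init the OLS estimate (Zou, 2006)."""
    n, p = X.shape
    beta_init = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    beta = np.zeros(p)
    z = (X ** 2).sum(axis=0)                 # column squared norms
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r_j
            beta[j] = soft_threshold(rho, lam * w[j]) / z[j]
    return beta

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = adaptive_lasso(X, y, lam=20.0)
```

Because small first-stage estimates receive large penalty weights, the truly zero coefficients are set exactly to zero while the strong signals are barely shrunk — the sign-consistency property the abstract refers to.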
By:  Einmahl, J.H.J. (Tilburg University, Center For Economic Research); Kiriliouk, A.; Krajina, A. (Tilburg University, Center For Economic Research); Segers, J. (Tilburg University, Center For Economic Research) 
Abstract:  Tail dependence models for distributions attracted to a max-stable law are fitted using observations above a high threshold. To cope with spatial, high-dimensional data, a rank-based M-estimator is proposed relying on bivariate margins only. A data-driven weight matrix is used to minimize the asymptotic variance. Empirical process arguments show that the estimator is consistent and asymptotically normal. Its finite-sample performance is assessed in simulation experiments involving popular max-stable processes perturbed with additive noise. An analysis of wind speed data from the Netherlands illustrates the method. 
Keywords:  Brown-Resnick process; exceedances; multivariate extremes; ranks; spatial statistics; stable tail dependence function 
JEL:  C13 C14 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:2d5c1a3ba5f643298df2f7d0bb2f1243&r=ecm 
By:  Sermin Gungor; Richard Luger 
Abstract:  We propose double bootstrap methods to test the mean-variance efficiency hypothesis when multiple portfolio groupings of the test assets are considered jointly rather than individually. A direct test of the joint null hypothesis may not be possible with standard methods when the total number of test assets grows large relative to the number of available time-series observations, since the estimate of the disturbance covariance matrix eventually becomes singular. The suggested residual bootstrap procedures based on combining the individual group p-values avoid this problem while controlling the overall significance level. Simulation and empirical results illustrate the usefulness of the joint mean-variance efficiency tests. 
Keywords:  Asset Pricing, Econometric and statistical methods, Financial markets 
JEL:  C12 C14 C15 G12 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:1451&r=ecm 
By:  Jin Seo Cho (School of Economics, Yonsei University); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  We provide a new test for equality of covariance matrices that leads to a convenient mechanism for testing specification using the information matrix equality. The test relies on a new characterization of equality between two k-dimensional positive-definite matrices A and B: the traces of AB^{-1} and BA^{-1} are equal to k if and only if A = B. Using this criterion, we introduce a class of omnibus test statistics for equality of covariance matrices and examine their null, local, and global approximations under some mild regularity conditions. Monte Carlo experiments are conducted to explore the performance characteristics of the test criteria and provide comparisons with existing tests under the null hypothesis and local and global alternatives. The tests are applied to the classic empirical models for voting turnout investigated by Wolfinger and Rosenstone (1980) and Nagler (1991, 1994). Our tests show that all classic models for the 1984 presidential voting turnout are misspecified in the sense that the information matrix equality fails. 
Keywords:  Matrix equality, Trace, Determinant, Arithmetic mean, Geometric mean, Harmonic mean, Information matrix, Eigenvalues, Parametric bootstrap 
JEL:  C01 C12 C52 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1970&r=ecm 
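The trace characterization in the Cho-Phillips abstract is easy to verify numerically. A minimal numpy sketch on randomly drawn positive-definite matrices (the AM-GM-style reasoning in the comments is the standard eigenvalue argument, spelled out here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4

def random_spd(rng, k):
    """Draw a random symmetric positive-definite k x k matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

A = random_spd(rng, k)
B = random_spd(rng, k)

# When A = B, tr(A B^{-1}) = tr(B A^{-1}) = k exactly.
trace_equal = np.trace(A @ np.linalg.inv(A))

# When A != B: the eigenvalues lam_i of A B^{-1} are positive, and
# tr(A B^{-1}) + tr(B A^{-1}) = sum(lam_i + 1/lam_i) >= 2k, with equality
# only if every lam_i = 1, i.e. only if A = B.
sum_traces = np.trace(A @ np.linalg.inv(B)) + np.trace(B @ np.linalg.inv(A))
```

For distinct random draws the summed traces strictly exceed 2k, which is what makes the criterion usable as a test statistic.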
By:  Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  Limit theory is developed for the dynamic panel GMM estimator in the presence of an autoregressive root near unity. In the unit root case, Anderson-Hsiao lagged variable instruments satisfy orthogonality conditions but are well-known to be irrelevant. For a fixed time series sample size (T), GMM is inconsistent and approaches a shifted Cauchy-distributed random variate as the cross section sample size n approaches infinity. But when T approaches infinity, either for fixed n or as n approaches infinity, GMM is √T consistent and its limit distribution is a ratio of random variables that converges to twice a standard Cauchy as n approaches infinity. In this case, the usual instruments are uncorrelated with the regressor but irrelevance does not prevent consistent estimation. The same Cauchy limit theory holds sequentially and jointly as (n,T) approaches infinity with no restriction on the divergence rates of n and T. When the common autoregressive root is ρ = 1 + c/√T, the panel comprises a collection of mildly integrated time series. In this case, the GMM estimator is √n consistent for fixed T and √(nT) consistent with limit distribution N(0,4) when n, T approach infinity sequentially or jointly. These results are robust for common roots of the form ρ = 1 + c/T^γ for all γ ∈ (0,1), and joint convergence holds. Limit normality holds but the variance changes when γ = 1. When γ > 1, joint convergence fails and sequential limits differ with different rates of convergence. These findings reveal the fragility of conventional Gaussian GMM asymptotics to persistence in dynamic panel regressions. 
Keywords:  Cauchy limit theory, Dynamic panel, GMM estimation, Instrumental variable, Irrelevant instruments, Panel unit roots, Persistence 
JEL:  C23 C36 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1962&r=ecm 
By:  Lee, Mei-Yu 
Abstract:  This paper demonstrates the impact of particular factors – such as a non-normal error distribution, constraints on the residuals, sample size, multicollinear values of the independent variables and the autocorrelation coefficient – on the distributions of errors and residuals. It explains how residuals tend increasingly toward a normal distribution as the number of linear constraints that the linear regression method imposes on them grows. Conversely, fewer linear requirements allow the shape of the error distribution to show through more clearly in the residuals. We find that if the errors follow a normal distribution, then the residuals do as well. However, if the errors follow a U-quadratic distribution, then the residuals follow a mixture of the error distribution and a normal distribution, owing to the interaction of the linear requirements and the sample size. Thus, increasing the constraints on the residuals by adding independent variables pushes the residuals toward a normal distribution, yielding a poor estimator of the errors when the errors are non-normally distributed. Only when the sample size is large enough to eliminate the effects of these linear requirements and of multicollinearity can the residuals be viewed as an estimator of the errors. 
Keywords:  Time series; Autoregressive model; Computer simulation; Non-normal distribution 
JEL:  C15 C32 C63 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60362&r=ecm 
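The mechanism Lee describes – residuals as constrained linear combinations of the errors – comes from the residual-maker matrix M = I − X(X'X)⁻¹X'. A small numpy sketch (toy data, not the paper's simulation design) showing that the residuals satisfy exactly the k linear restrictions X'r = 0 regardless of the error distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 4
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])

# Residual-maker matrix M = I - X (X'X)^{-1} X'. Residuals r = M e are
# linear combinations of the errors, which is why they drift toward
# normality as the number of linear restrictions X'r = 0 grows.
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

e = rng.uniform(-1.0, 1.0, n)    # a deliberately non-normal error draw
r = M @ e
```

M is idempotent (M @ M = M), so each residual is a fixed weighted sum of all n errors – the averaging that, per the abstract, masks a non-normal error shape unless n is large relative to k.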
By:  Jan F. KIVIET (Division of Economics, School of Humanities and Social Sciences, Nanyang Technological University. Address: 14 Nanyang Drive, Singapore, 637332.); Qu FENG (Division of Economics, School of Humanities and Social Sciences, Nanyang Technological University. Address: 14 Nanyang Drive, Singapore, 637332.) 
Abstract:  While coping with non-sphericality of the disturbances, standard GMM suffers from a blind spot for exploiting the most effective instruments when these are obtained directly from unconditional rather than conditional moment assumptions. For instance, standard GMM counteracts the use of exogenous regressors as their own optimal instruments. This is easily seen after transmuting GMM for linear models into IV in terms of transformed variables. It is demonstrated that modified GMM (MGMM), exploiting straightforward modifications of the instruments, can achieve substantial efficiency gains and bias reductions, even under mild heteroskedasticity. Feasible MGMM implementations and their standard error estimates are examined and compared with standard GMM and IV for a range of typical models for cross-section data, both by simulation and by empirical illustration. 
Keywords:  efficiency, generalized method of moments, instrument strength, nonspherical disturbances, (un)conditional moment assumptions 
JEL:  C01 C13 C26 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:nan:wpaper:1413&r=ecm 
By:  Klein, Torsten L. 
Abstract:  In applied statistics and computational econometrics, a key task for researchers is to bring the sizable but unstructured body of numeric evidence, for example from Monte Carlo simulation, into a form ready for introduction into scientific dialogue. At their disposal they find established means of arrangement: narrative text, tables, and graphs. Judged by classical principles of communication, graphical devices seem optimal: they absorb large quantities of data and organize content into a productive tool. Graphs confirm this advantage when put to work in a standard simulation exercise. However, theory and application contrast with the norm observed in peer-reviewed journals – by a wide margin and with considerable persistence, researchers prefer tables. 
Keywords:  econometric and statistical methods, Monte Carlo, bivariate probit model, exogeneity testing, modes of communication, data visualization, economics of science 
JEL:  A14 C10 C15 C35 C52 Y10 
Date:  2014–12–11 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60514&r=ecm 
By:  Klein, Torsten L. 
Abstract:  This brief note serves as a companion paper to Klein (2014). Small multiples incorporate graphical frameworks such as P-value plots with ease, and thus facilitate visualizing quantitative data that record parameter changes from simulation experiments. Pitfalls in layout may be avoided by observing elementary design principles. To illustrate how they work, these principles are applied to revise a small multiple that collects simulation results on the empirical size of procedures testing exogeneity in the bivariate probit model. 
Keywords:  Monte Carlo, bivariate probit model, exogeneity testing, data visualization 
JEL:  C10 C15 C35 C52 Y10 
Date:  2014–12–11 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60521&r=ecm 
By:  Dalibor Stevanovic 
Abstract:  In this paper we study the selection of the number of primitive shocks in exact and approximate factor models in the presence of structural instability. The empirical analysis shows that the estimated number of factors varies substantially across several selection methods and over the last 30 years in standard large macroeconomic and financial panels. Using Monte Carlo simulations, we suggest that the structural instability, in terms of both time-varying factor loadings and nonlinear factor representations, can alter the estimation of the number of factors and therefore provides an explanation for the empirical findings. 
Keywords:  Factor model, Number of factors, Large panels, Monte Carlo simulations 
JEL:  C12 C38 
Date:  2014–12–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2014s44&r=ecm 
By:  Kleijnen, Jack P.C. (Tilburg University, Center For Economic Research) 
Abstract:  This chapter first summarizes Response Surface Methodology (RSM), which started with Box and Wilson's article in 1951 on RSM for real, non-simulated systems. RSM is a stepwise heuristic that uses first-order polynomials to approximate the response surface locally. An estimated polynomial metamodel gives an estimated local gradient, which RSM uses in steepest ascent (or descent) to decide on the next local experiment. When RSM approaches the optimum, the latest first-order polynomial is replaced by a second-order polynomial. The fitted second-order polynomial enables the estimation of the optimum. Furthermore, this chapter focuses on simulated systems, which may violate the assumptions of constant variance and independence. The chapter also summarizes a variant of RSM that is proven to converge to the true optimum under specific conditions. The chapter presents an adapted steepest ascent that is scale-independent. Moreover, the chapter generalizes RSM to multiple random responses, selecting one response as the goal variable and the other responses as the constrained variables. This generalized RSM is combined with mathematical programming to estimate a better search direction than the steepest ascent direction. To test whether the estimated solution is indeed optimal, bootstrapping may be used. Finally, the chapter discusses robust optimization of the decision variables, while accounting for uncertainties in the environmental variables. 
Keywords:  simulation; optimization; regression; robustness; risk 
JEL:  C0 C1 C9 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:7f9f17eedb7f4041a686d41f32a78539&r=ecm 
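The basic RSM step Kleijnen summarizes – fit a first-order polynomial locally, then move along its estimated gradient – can be sketched on a toy response surface. Everything below (the test function, step size, design) is made up for illustration:

```python
import numpy as np

def response(x):
    """Toy deterministic response surface with maximum at (2, -1)."""
    return -(x[0] - 2.0) ** 2 - (x[1] + 1.0) ** 2

rng = np.random.default_rng(3)
x0 = np.array([0.0, 0.0])

# Local experiment: evaluate the response at design points around x0 and
# fit a first-order polynomial y = b0 + b1*x1 + b2*x2 by least squares.
pts = x0 + 0.1 * rng.standard_normal((20, 2))
y = np.array([response(p) for p in pts])
Z = np.column_stack([np.ones(len(pts)), pts])
coef = np.linalg.lstsq(Z, y, rcond=None)[0]

# Steepest ascent: take a step along the estimated gradient (b1, b2).
grad = coef[1:]
x1 = x0 + 0.5 * grad / np.linalg.norm(grad)
```

In full RSM this step is repeated, with a second-order polynomial fitted once the gradient becomes weak near the optimum; with random simulation responses, replications and common random numbers would be layered on top.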
By:  Pettenuzzo, Davide; Timmermann, Allan G; Valkanov, Rossen 
Abstract:  We propose a new approach to predictive density modeling that allows for MIDAS effects in both the first and second moments of the outcome and develop Gibbs sampling methods for Bayesian estimation in the presence of stochastic volatility dynamics. When applied to quarterly U.S. GDP growth data, we find strong evidence that models that feature MIDAS terms in the conditional volatility generate more accurate forecasts than conventional benchmarks. Finally, we find that forecast combination methods such as the optimal predictive pool of Geweke and Amisano (2011) produce consistent gains in out-of-sample predictive performance. 
Keywords:  Bayesian estimation; GDP growth; MIDAS regressions; out-of-sample forecasts; stochastic volatility 
JEL:  C11 C32 C53 E37 
Date:  2014–09 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10160&r=ecm 
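A MIDAS term aggregates high-frequency observations into a low-frequency regressor through a parsimonious weight function. The abstract does not state which weighting scheme the authors use; the exponential Almon polynomial below is one common choice, shown with made-up parameter values:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag polynomial: w_k proportional to
    exp(theta1*k + theta2*k^2), normalized to sum to one over K lags."""
    k = np.arange(1, K + 1)
    raw = np.exp(theta1 * k + theta2 * k ** 2)
    return raw / raw.sum()

# Aggregate K high-frequency observations (e.g. within one quarter)
# into a single low-frequency MIDAS regressor.
w = exp_almon_weights(0.1, -0.05, K=12)
x_high_freq = np.linspace(1.0, 2.0, 12)
x_midas = w @ x_high_freq
```

Two parameters generate the whole weight profile, which is what keeps MIDAS regressions feasible when the frequency mismatch is large.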
By:  Mateescu, George Daniel (Institute for Economic Forecasting, Romanian Academy) 
Abstract:  In a previous article, we introduced the technique of multiple regression points. This technique extends regression methods to the case of multi-valued functions. Specifically, we study time series in which each value is an interval. This is the case, for example, for exchange-rate values within one day, which vary between a minimum and a maximum. Similarly, we can study the data series associated with stock indicators, which vary continuously throughout the day. In this paper we present the extension of this technique to nonlinear regression. 
Keywords:  multiple points regression 
JEL:  C02 C32 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:ror:seince:141118&r=ecm 
By:  Dagsvik, John K. (Research Department, Statistics Norway, and the Frisch Centre of Economic Research); Jia, Zhiyang (Statistics Norway) 
Abstract:  This paper discusses aspects of a framework for modeling labor supply where the notion of job choice is fundamental. In this framework, workers are assumed to have preferences over latent job opportunities belonging to worker-specific choice sets from which they choose their preferred job. The observed hours of work and wage are interpreted as the job-specific hours and wage of the chosen job. The main contribution of this paper is an analysis of the identification problem of this framework under various conditions, when conventional cross-section microdata are applied. The modeling framework is applied to analyze labor supply behavior for married/cohabiting couples using Norwegian micro data. Specifically, we estimate two model versions within the general framework. Based on the empirical results, we discuss further qualitative properties of the model versions. Finally, we apply the preferred model version to conduct a simulation experiment of a counterfactual policy reform. 
Keywords:  Labor supply; nonpecuniary job attributes; latent choice sets; random utility models; identification 
JEL:  C51 J22 
Date:  2014–09–15 
URL:  http://d.repec.org/n?u=RePEc:hhs:osloec:2014_022&r=ecm 
By:  David Hendry; Jurgen A. Doornik 
Abstract:  Big Data offer potential benefits for statistical modelling, but confront problems like an excess of false positives, mistaking correlations for causes, ignoring sampling biases, and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem); using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem); testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem); and using a viable approach that resolves the computational problem of immense numbers of possible models. 
Keywords:  Big Data, Model Selection, Location Shifts, Autometrics 
JEL:  C51 C22 
Date:  2014–12–09 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:735&r=ecm 
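The false-positives problem and the case for tight significance levels are easy to illustrate: screening p irrelevant candidate variables at level α retains about α·p of them by chance. A small simulation (toy data, not Autometrics itself, which searches jointly rather than variable by variable):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 100
X = rng.standard_normal((n, p))      # 100 irrelevant candidate regressors
y = rng.standard_normal(n)           # outcome unrelated to every column of X

# t-statistic of each univariate regression slope; under the null it is
# approximately N(0,1), so a 5% test retains about 0.05*p variables.
t = np.empty(p)
for j in range(p):
    b = X[:, j] @ y / (X[:, j] @ X[:, j])
    resid = y - X[:, j] * b
    se = np.sqrt(resid @ resid / (n - 2) / (X[:, j] @ X[:, j]))
    t[j] = b / se

loose = np.sum(np.abs(t) > 1.96)     # 5% level: ~5 false positives expected
tight = np.sum(np.abs(t) > 3.29)     # 0.1% level: ~0.1 expected
```

Tightening the level from 5% to 0.1% cuts the expected spurious retentions from about five to about a tenth of a variable, at a modest cost in power for genuinely relevant regressors.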
By:  Canova, Fabio; Pérez Forero, Fernando J. 
Abstract:  This paper provides a general procedure to estimate structural VARs. The algorithm can be used in constant or time-varying coefficient models, and in the latter case, the law of motion of the coefficients can be linear or nonlinear. It can deal in a unified way with just-identified (recursive or non-recursive) or over-identified systems where identification restrictions are of linear or of nonlinear form. We study the transmission of monetary policy shocks in models with time-varying and time-invariant parameters. 
Keywords:  Identification restrictions; Metropolis algorithm; Monetary transmission mechanism; Time-varying coefficient structural VAR models 
JEL:  C11 E51 E52 
Date:  2014–06 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10022&r=ecm 
By:  Kreif, N.; Grieve, R.; Díaz, I.; Harrison, D. 
Abstract:  When the treatment under evaluation is continuous rather than binary, the marginal causal effect can be reported from the estimated dose-response function. Here, regression methods can be employed that specify a model for the endpoint, given the treatment and covariates. An alternative is to estimate the generalised propensity score (GPS), which can adjust by the conditional density of the treatment, given the covariates. With either regression or GPS approaches, model misspecification can lead to biased estimates. This paper introduces a machine learning approach, the "Super Learner", to estimate both the GPS and the dose-response function. The Super Learner selects the convex combination of candidate estimation algorithms, to create new estimators. We take a two-stage estimation approach whereby the Super Learner selects a GPS, and then a dose-response function conditional on the GPS. We compare this approach to parametric implementations of the GPS and regression methods. We contrast the methods in the Risk Adjustment In Neurocritical care (RAIN) cohort study, in which we estimate the marginal causal effects of increasing transfer time from emergency departments to specialised neuroscience centres, for patients with traumatic brain injury. With parametric models for the outcome we find that dose-response curves differ according to the choice of parametric specification. With the Super Learner approach to both regression and the GPS, we find that transfer time does not have a statistically significant marginal effect on the outcome. 
Keywords:  program evaluation; generalised propensity score; machine learning 
JEL:  C1 C5 
Date:  2014–08 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:14/19&r=ecm 
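The core Super Learner idea – pick the convex combination of candidate estimators that minimizes prediction error – can be sketched with two toy candidates. This is a deliberately simplified in-sample version on made-up data; a real Super Learner would weight cross-validated predictions from a richer library:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.uniform(0.0, 3.0, n)
y = np.sin(x) + 0.3 * rng.standard_normal(n)

# Two candidate estimators of E[y|x]: a linear fit and a cubic fit.
pred_lin = np.polyval(np.polyfit(x, y, 1), x)
pred_cub = np.polyval(np.polyfit(x, y, 3), x)

# Super-Learner-style step: choose the convex combination
# w*pred_lin + (1-w)*pred_cub minimizing squared error over a grid.
grid = np.linspace(0.0, 1.0, 101)
losses = [np.mean((y - (w * pred_lin + (1 - w) * pred_cub)) ** 2)
          for w in grid]
w_star = grid[int(np.argmin(losses))]
best = min(losses)
```

Because the pure candidates (w = 0 and w = 1) sit inside the feasible set, the combined estimator can never do worse than the best single candidate on the criterion it optimizes – the property that motivates stacking.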
By:  Stacy, Brian 
Abstract:  This paper examines the effect of measurement error in the dependent variable on quantile regression, because unlike OLS regression, even classical measurement error can generate bias. I examine the pattern and size of the bias using both simulation and an empirical example. The simulations indicate that classical error can cause bias and that nonclassical measurement error, particularly heteroskedastic measurement error, has the potential to produce substantial bias. Also, the size and direction of the bias depends on the amount of heterogeneity in the effects across quantiles and the regression error distribution. Using restricted access Health and Retirement Study data containing matched IRS W2 earnings records, I examine whether estimates of the returns to education statistically differ using a precisely measured and mismeasured earnings variable. I find that returns to education are overstated by roughly 1 percentage point at the median and 75th percentile using earnings reported by survey respondents. 
Keywords:  Quantile Regression, Dependent Variable Measurement Error, Returns to Education 
JEL:  C01 C21 C31 J24 
Date:  2014–12–08 
URL:  http://d.repec.org/n?u=RePEc:zbw:esprep:104744&r=ecm 
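Stacy's central point – classical measurement error in the outcome leaves mean effects intact but biases quantile effects when effects are heterogeneous – can be shown without any regression machinery, by comparing group means and group quantiles on simulated data (the design below is made up, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
x = rng.integers(0, 2, n)                  # binary regressor (e.g. education)
e = rng.standard_normal(n)
y_true = x * (1.0 + e) + e                 # effect 1 + e: heterogeneous across quantiles
y_obs = y_true + rng.standard_normal(n)    # classical measurement error in the outcome

def qte(y, x, tau):
    """Quantile 'treatment effect': Q_tau(y | x=1) - Q_tau(y | x=0)."""
    return np.quantile(y[x == 1], tau) - np.quantile(y[x == 0], tau)

mean_true = y_true[x == 1].mean() - y_true[x == 0].mean()
mean_obs = y_obs[x == 1].mean() - y_obs[x == 0].mean()
q90_true = qte(y_true, x, 0.9)
q90_obs = qte(y_obs, x, 0.9)
```

The mean contrast survives the noise (classical error averages out), but the 90th-percentile contrast shrinks: adding error widens both conditional distributions and compresses the quantile-specific heterogeneity, the bias pattern the simulations in the paper document.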
By:  Andreas Steinmayr 
Abstract:  A key problem in the literature on the economics of migration is how emigration of an individual affects households left behind. Answers to this question must confront a problem I refer to as invisible sample selection: when entire households migrate, no information about them remains in their source country. Since estimation is typically based on source country data, invisible sample selection yields biased estimates if all-move households differ from households that send only a subset of their members abroad. I address this identification problem and derive nonparametric bounds within a principal stratification framework. Instrumental variables estimates are biased, even if all-move households do not differ in their potential outcomes. For this case, I derive a corrected instrumental variables estimator. I illustrate the approach using individual and household data from widely cited, recent studies. Potential bias from invisible sample selection can be large, but transparent assumptions regarding behaviors of household members and selectivity of migrants allow identification of informative bounds. 
Keywords:  Sample selection, migration, selectivity, principal stratification 
JEL:  C21 F22 J61 O15 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:kie:kieliw:1975&r=ecm 
By:  Manfred M Fischer; James P. LeSage 
Abstract:  Spatial interaction models represent a class of models that are used for modelling origin-destination flow data. The focus of this paper is on the lognormal version of the model. In this context, we consider spatial econometric specifications that can be used to accommodate two types of dependence scenarios, one involving endogenous interaction and the other exogenous interaction. These model specifications replace the conventional assumption of independence between origin-destination flows with formal approaches that allow for two different types of spatial dependence in magnitudes. Endogenous interaction reflects situations where there is a reaction to feedback regarding flow magnitudes from regions neighbouring origin and destination regions. This type of interaction can be modelled using specifications proposed by LeSage and Pace (2008), who use spatial lags of the dependent variable to quantify the magnitude and extent of the feedback effects, hence the term endogenous interaction. Exogenous interaction represents a situation where spillovers arise from nearby (or perhaps even distant) regions, and these need to be taken into account when modelling observed variations in flows across the network of regions. In contrast to endogenous interaction, these contextual effects do not generate reactions to the spillovers, leading to a model specification that can be interpreted without considering changes in the long-run equilibrium state of the system of flows. As in the case of social networks, contextual effects are modelled using spatial lags of the explanatory variables that represent characteristics of neighbouring (or more generally connected) regions, but not spatial lags of the dependent variable, hence the term exogenous interaction. 
In addition to setting forth expressions for the true partial derivatives of non-spatial and endogenous spatial interaction models and associated scalar summary measures from Thomas-Agnan and LeSage (2014), we propose new scalar summary measures for the exogenous spatial interaction specification introduced here. An illustration applies the exogenous spatial interaction model to a flow matrix of teacher movements between 67 school districts in the state of Florida. 
Keywords:  Lognormal spatial interaction model; spatial dependence among OD flows; exogenous spatial interaction specifications; endogenous spatial interaction specifications; interpreting estimates 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa14p716&r=ecm 
By:  Freitag L. (GSBE) 
Abstract:  This paper examines the relationship between sovereign credit default swaps (CDS) and sovereign rating changes of European countries. To this aim, a new estimator is introduced which merges mixed data sampling (MIDAS) with probit regression. Simulations show that the estimator has good properties in finite samples. Also, I investigate a bootstrap procedure introduced by Ghysels et al. (2007), which should be able to handle significance testing in a MIDAS setting. The bootstrap has good size but low power. For the empirical analysis I use sovereign CDS data for 22 EU countries, trying to correlate sovereign downgrades with sovereign CDS premiums. Overall, the CDS data and the ratings are in most cases significantly positively correlated. Therefore, credit rating agencies (CRAs) and financial markets generally agree on the implied default probability of sovereign nations. Also, CDS prices anticipate downgrades in the majority of investigated datasets. However, this does not mean that a default probability can be extracted from raw CDS prices. Instead, by using a MIDAS estimator, I significantly reduce the amount of noise in the data. Therefore, CRAs are still providing important information to financial markets. 
Keywords:  Single Equation Models; Single Variables: Discrete Regression and Qualitative Choice Models; Discrete Regressors; Proportions; Model Construction and Estimation; Investment Banking; Venture Capital; Brokerage; Ratings and Ratings Agencies 
JEL:  C25 C51 G24 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:unm:umagsb:2014038&r=ecm 
By:  Galina Besstremyannaya (Stanford University) 
Abstract:  The paper proposes a combination of finite mixture models and matching estimators to account for heterogeneous and nonlinear effects of the coinsurance rate on healthcare expenditure. We use a log-linear model and generalized linear models with different distribution families, and measure the conditional average treatment effect of a rise in the coinsurance rate in each component of the model. The estimations with panel data for adult Japanese consumers in 2008–2010 and for female consumers in 2000–2010 demonstrate the presence of subpopulations with high, medium and low healthcare expenditure, and subpopulation membership is explained by lifestyle variables. Generalized linear models provide adequate fit compared to the log-linear model. Conditional average treatment effect estimations reveal the existence of nonlinear effects of the coinsurance rate in the subpopulation with high expenditure. 
Keywords:  finite mixture model; generalized linear model; matching estimators 
JEL:  C44 C61 I13 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:sip:dpaper:14014&r=ecm 
By:  Mario Forni; Luca Gambetti; Marco Lippi; Luca Sala 
Abstract:  We investigate the role of "noise" shocks as a source of business cycle fluctuations. To do so we set up a simple model of imperfect information and derive restrictions for identifying the noise shock in a VAR model. The novelty of our approach is that identification is reached by means of dynamic rotations of the reduced form residuals. We find that noise shocks generate hump-shaped responses of GDP, consumption and investment and account for quite a sizable fraction of their prediction error variance at business cycle horizons. 
Keywords:  Nonfundamentalness, SVAR, Imperfect Information, News, Noise, Business cycles 
JEL:  C32 E32 E62 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:igi:igierp:531&r=ecm 
By:  Laurent Callot (VU University Amsterdam, the Tinbergen Institute and CREATES); Niels Haldrup (Aarhus University and CREATES); Malene Kallestrup-Lamb (Aarhus University and CREATES) 
Abstract:  The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age-specific drift components. But restricting the adjustment mechanism of the stochastic and linear trend components to be identical may be too strong a simplification. In fact, the presence of a stochastic trend component may itself result from a bias induced by properly fitting the linear trend that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics, and we suggest separating the deterministic and stochastic time series components, with the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee-Carter model will otherwise overestimate the reduction of mortality for the younger age groups and underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee-Carter model should be formulated not as a one-factor model but as a two- (or several-)factor model, where one factor is deterministic and the other factors are stochastic. This feature generalizes to the range of models that extend the Lee-Carter model in various directions. 
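In symbols, the restriction discussed above and its proposed relaxation can be sketched as follows (a standard rendering of the Lee-Carter model; the two-factor specification is our illustrative reading of the abstract, not the authors' exact equations):

```latex
% Classical Lee-Carter (1992): one factor loads on both drift and stochastic dynamics.
\log m_{x,t} \;=\; a_x + b_x\, k_t + \varepsilon_{x,t},
\qquad k_t \;=\; k_{t-1} + d + \eta_t,
% so the age-specific loading b_x applies with identical weight to the deterministic
% drift d and to the stochastic innovations \eta_t.
% Two-factor generalization: separate deterministic and stochastic trends, each with
% its own age-specific loading:
\log m_{x,t} \;=\; a_x + \beta_x\, t + b_x\, \tilde{k}_t + \varepsilon_{x,t},
% where \beta_x t is the deterministic trend component and \tilde{k}_t is a driftless
% stochastic factor.
```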
Keywords:  Mortality modelling, factor models, principal components, stochastic and deterministic trends 
JEL:  C2 C23 J1 J11 
Date:  2014–11–19 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201444&r=ecm 
By:  Alberto Bagnai (Department of Economics, Gabriele d'Annunzio University); Christian Alexander Mongeau Ospina (Italian Association for the Study of Economic Asymmetries) 
Abstract:  We develop a medium-sized annual macroeconometric model of the Italian economy. The theoretical framework is the usual AS/AD model, where the demand side is specified along Keynesian lines and the supply side adopts a standard neoclassical technology with Harrod-neutral technological progress. The empirical specification consists of 140 equations, of which 29 are stochastic, with 55 exogenous variables. The model structure presents some distinct features, among which the disaggregation of the foreign trade block into seven trade partner regions (thus representing the bilateral import and export flows as a function of regional GDP and of the bilateral real exchange rates), and the explicit modelling of the impact of labour market reforms on the wage setting mechanism (which explains the shift in the Phillips curve observed over the last two decades). The model is conceived for the analysis of the medium- to long-run developments of the Italian economy, and as such it adopts econometric methods that allow the researcher to quantify the structural long-run parameters. The equations are estimated over a large sample of annual data (1960-2013), using cointegration techniques that take into account the possible presence of structural breaks in the model parameters. The model's overall tracking performance is good. We perform some standard policy experiments in order to show the model's response to usual shocks: an increase in public expenditure, an exchange rate devaluation, a slowdown in world demand, and an increase in oil prices. The shocks are evaluated by ex post simulation and their impact is tracked over a five-year span. The dynamic multipliers appear to be consistent with economic intuition. 
Keywords:  Model construction and estimation, Simulation methods, Quantitative policy modeling, Keynesian model, Fiscal policy, Empirical studies of trade, Open economy macroeconomics, Macroeconomic issues of monetary unions, Forecasting and simulation. 
JEL:  C51 C53 C54 E12 E62 F14 F41 F47 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:ais:wpaper:1405&r=ecm 