nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒09‒18
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Causal latent Markov model for the comparison of multiple treatments in observational longitudinal studies By Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
  2. Tests of Equal Accuracy for Nested Models with Estimated Factors By Goncalves, Silvia; McCracken, Michael W.; Perron, Benoit
  3. Simpler Bootstrap Estimation of the Asymptotic Variance of U-statistic Based Estimators By Honore, Bo E.; Hu, Luojia
  4. Adding Flexibility to Markov Switching Models By E. Otranto
  5. Evolutionary Sequential Monte Carlo Samplers for Change-point Models By Arnaud Dufays
  6. Moment Estimation of the Probit Model with an Endogenous Continuous Regressor By Daiji Kawaguchi; Yukitoshi Matsushita; Hisahiro Naito
  7. Efficient Inference in the Classical IV Regression Model with Weak Identification: Asymptotic Power Against Arbitrarily Large Deviations from the Null Hypothesis By Marmer, Vadim; Yu, Zhengfei
  8. Modeling financial sector joint tail risk in the euro area By Lucas, André; Schwaab, Bernd; Zhang, Xin
  9. Generalised partially linear regression with misclassied data and an application to labour market transitions By Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf A.
  10. GMM estimation of fiscal rules: Monte Carlo experiments and empirical tests By I. Mammi
  11. Inequality Constrained State Space Models By Qian, Hang
  12. Forecasting with Temporal Hierarchies By Athanasopoulos, George; Hyndman, Rob J.; Kourentzes, Nikolaos; Petropoulos, Fotios
  13. Approximating time varying structural models with time invariant structures By Canova, Fabio; Ferroni, Filippo; Matthes, Christian
  14. Kernel density estimation for heaped data By Groß, Marcus; Rendtel, Ulrich
  15. Seasonal adjustment with and without revisions: A comparison of X-13ARIMA-SEATS and CAMPLET By Barend Abeln; Jan P. A. M. Jacobs

  1. By: Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
    Abstract: We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza et al. (2013) to estimate the causal effect of a treatment. The proposed approach permits the evaluation of the effect of multiple treatments on subpopulations of individuals from a dynamic perspective, as it relies on a Latent Markov (LM) model that is estimated taking into account propensity score weights based on individual pre-treatment covariates. These weights are involved in the expression of the likelihood function of the LM model and allow us to balance the groups receiving different treatments. This likelihood function is maximized through a modified version of the traditional expectation-maximization algorithm, while standard errors for the parameter estimates are obtained by a non-parametric bootstrap method. We study in detail the asymptotic properties of the causal effect estimator based on the maximization of this likelihood function and we illustrate its finite sample properties through a series of simulations showing that the estimator has the expected behavior. As an illustration, we consider an application aimed at assessing the relative effectiveness of certain degree programs on the basis of three ordinal response variables when the work path of a graduate is considered as the manifestation of his/her human capital level across time.
    Keywords: Causal inference, Expectation-Maximization algorithm, Hidden Markov models, Multiple treatments, Policy evaluation, Propensity score.
    JEL: C1 C52 C53 C54 I23 J44
    Date: 2015–08
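As a minimal illustration of the weighting idea described in the abstract (not the authors' LM estimator), the sketch below shows how inverse-propensity weights rebalance pre-treatment covariates across treatment groups. The logistic propensity and all numbers are illustrative assumptions.

```python
import random
import math

random.seed(0)

# Simulate subjects whose probability of receiving treatment 1 rises with
# a covariate x, so the raw treated/untreated means of x are unbalanced.
n = 10000
data = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    p = 1.0 / (1.0 + math.exp(-x))               # propensity score P(T=1|x)
    t = 1 if random.random() < p else 0
    w = 1.0 / p if t == 1 else 1.0 / (1.0 - p)   # inverse-propensity weight
    data.append((x, t, w))

def group_mean(group, weighted):
    num = sum((w if weighted else 1.0) * x for x, t, w in data if t == group)
    den = sum((w if weighted else 1.0) for x, t, w in data if t == group)
    return num / den

raw_gap = abs(group_mean(1, False) - group_mean(0, False))
weighted_gap = abs(group_mean(1, True) - group_mean(0, True))
# Weighting shrinks the covariate imbalance between the treatment groups.
assert weighted_gap < raw_gap
```

In the paper these weights multiply individual contributions to the LM likelihood; here they only reweight a group mean to show the balancing effect.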
  2. By: Goncalves, Silvia (Western University, Canada); McCracken, Michael W. (Federal Reserve Bank of St. Louis); Perron, Benoit (Université de Montréal, Canada)
    Abstract: In this paper we develop asymptotics for tests of equal predictive ability between nested models when factor-augmented regression models are used to forecast. We provide conditions under which the estimation of the factors does not affect the asymptotic distributions developed in Clark and McCracken (2001) and McCracken (2007). This enables researchers to use the existing tabulated critical values when conducting inference. As an intermediate result, we derive the asymptotic properties of the principal components estimator over recursive windows. We provide simulation evidence on the finite sample effects of factor estimation and apply the tests to the case of forecasting excess returns to the S&P 500 Composite Index.
    Keywords: factor model; out-of-sample forecasts; recursive estimation
    JEL: C12 C32 C38 C52
    Date: 2015–09–14
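The recursive-window scheme the abstract refers to can be sketched as follows: at each forecast origin the model is re-estimated on all data up to that point and used to forecast one step ahead. The "model" here is just the in-sample mean, a deliberately trivial stand-in for the factor-augmented regressions studied in the paper.

```python
import random
random.seed(1)

# Recursive (expanding-window) out-of-sample exercise on simulated noise.
y = [random.gauss(0.0, 1.0) for _ in range(200)]
first_origin = 100

errors = []
for t in range(first_origin, len(y) - 1):
    window = y[:t + 1]                    # expanding estimation sample
    forecast = sum(window) / len(window)  # re-estimate at every origin
    errors.append(y[t + 1] - forecast)

mspe = sum(e * e for e in errors) / len(errors)
```

Tests of equal predictive ability of the kind tabulated in Clark and McCracken (2001) are built from sequences of such out-of-sample errors for two nested models.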
  3. By: Honore, Bo E. (Princeton University); Hu, Luojia (Federal Reserve Bank of Chicago)
    Abstract: The bootstrap is a popular and useful tool for estimating the asymptotic variance of complicated estimators. Ironically, the fact that the estimators are complicated can make the standard bootstrap computationally burdensome because it requires repeated re-calculation of the estimator. In Honoré and Hu (2015), we propose a computationally simpler bootstrap procedure based on repeated re-calculation of one-dimensional estimators. The applicability of that approach is quite general. In this paper, we propose an alternative method which is specific to extremum estimators based on U-statistics. The contribution here is that rather than repeatedly re-calculating the U-statistic-based estimator, we can recalculate a related estimator based on single sums. A simulation study suggests that the approach leads to a good approximation to the standard bootstrap, and that if this is the goal, then our approach is superior to numerical derivative methods.
    Keywords: U-statistics; bootstrap; inference; numerical derivatives
    JEL: C10 C18
    Date: 2015–09–15
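A hedged sketch of the baseline the paper seeks to cheapen: the standard nonparametric bootstrap applied to a U-statistic, here the Gini mean difference (the average absolute pairwise difference), whose full re-calculation on every resample is the O(n²) burden the authors replace with single-sum recalculations.

```python
import random
random.seed(2)

def gini_mean_diff(xs):
    # U-statistic with kernel |x_i - x_j| over all pairs.
    n = len(xs)
    total = sum(abs(xs[i] - xs[j]) for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))

x = [random.gauss(0.0, 1.0) for _ in range(50)]
theta_hat = gini_mean_diff(x)

# Standard bootstrap: re-calculate the full U-statistic on each resample.
B = 200
boots = []
for _ in range(B):
    resample = [random.choice(x) for _ in x]
    boots.append(gini_mean_diff(resample))

mean_b = sum(boots) / B
boot_var = sum((b - mean_b) ** 2 for b in boots) / (B - 1)
```

Each bootstrap replication above costs O(n²); the paper's point is that a related single-sum estimator can be recalculated instead, at O(n) per replication.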
  4. By: E. Otranto
    Abstract: Very often time series are subject to abrupt changes in the level, which are generally represented by Markov Switching (MS) models, hypothesizing that the level is constant within a given state (regime). This is not a realistic framework, because the level may also change within a regime, with jumps smaller than those accompanying a change of state; this is typical of many economic time series, such as the Gross Domestic Product or the volatility of financial markets. We propose to make the state flexible, introducing a very general model in which the level of the time series oscillates within each state of the MS model; these movements are driven by a forcing variable. The flexibility of the model allows extreme jumps to be accommodated parsimoniously (even in the simplest 2-state case), without adopting a larger number of regimes; moreover, this model improves the interpretability and the fit of the data with respect to the analogous MS model. This approach can be applied in several fields, also with unobservable data. We show its advantages in three distinct applications, involving macroeconomic variables, volatilities of financial markets and conditional correlations.
    Keywords: abrupt changes, goodness of fit, Hamilton filter, smoothed changes, time–varying parameters
    JEL: C22 C32 C5
    Date: 2015
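For orientation, here is a sketch of the rigid baseline the paper generalizes: a 2-state Markov switching mean that is constant within each regime. (The proposed model instead lets the level oscillate within a state, driven by a forcing variable.) All parameter values are illustrative assumptions.

```python
import random
random.seed(3)

mu = {0: 0.0, 1: 5.0}          # regime-specific levels, constant per regime
p_stay = {0: 0.95, 1: 0.90}    # probability of staying in the current regime

state, series, states = 0, [], []
for _ in range(300):
    if random.random() > p_stay[state]:
        state = 1 - state      # abrupt change of level at a regime switch
    states.append(state)
    series.append(mu[state] + random.gauss(0.0, 1.0))

# Within each regime the level only wiggles by the noise around mu[state].
mean_regime_1 = sum(s for s, st in zip(series, states) if st == 1) / states.count(1)
mean_regime_0 = sum(s for s, st in zip(series, states) if st == 0) / states.count(0)
```

In the flexible version described above, `mu[state]` would itself move smoothly within a regime as a function of the forcing variable.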
  5. By: Arnaud Dufays
    Abstract: Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. Nevertheless, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. SMC algorithms not only draw posterior distributions of static or dynamic parameters but additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in the paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are already available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare change-point GARCH models through their marginal likelihoods over time.
    Keywords: Bayesian inference, Sequential Monte Carlo, Annealed Importance sampling, Change-point models, Differential Evolution, GARCH models
    JEL: C11 C15 C22 C58
    Date: 2015
  6. By: Daiji Kawaguchi; Yukitoshi Matsushita; Hisahiro Naito
    Abstract: We propose a GMM estimator with optimal instruments for a probit model that includes a continuous endogenous regressor. This GMM estimator incorporates the probit error and the heteroscedasticity of the error term in the first-stage equation to construct the optimal instruments. The estimator estimates the structural equation and the first-stage equation jointly and, based on this joint moment condition, is efficient within the class of GMM estimators. To estimate the heteroscedasticity of the error term of the first-stage equation, we use the k-nearest neighbor (k-nn) non-parametric estimation procedure. A Monte Carlo simulation then shows that in the presence of heteroscedasticity and endogeneity, our GMM estimator outperforms the two-stage conditional maximum likelihood (2SCML) estimator. Our results suggest that in the presence of heteroscedasticity in the first-stage equation, the proposed GMM estimator with optimal instruments is a useful option for researchers.
    Date: 2015–08
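The k-nn step mentioned in the abstract can be sketched in isolation: estimate the conditional variance of a first-stage error at a point z0 by averaging squared residuals over the k observations whose z is closest to z0. The data-generating process and tuning values below are illustrative assumptions, not the paper's.

```python
import random
random.seed(4)

# First-stage error u whose standard deviation increases in an exogenous z.
n, k = 500, 50
data = []
for _ in range(n):
    z = random.uniform(0.0, 1.0)
    u = random.gauss(0.0, 0.5 + z)   # error s.d. rises with z
    data.append((z, u))

def knn_variance(z0):
    # Average squared residual over the k nearest neighbours of z0.
    nearest = sorted(data, key=lambda row: abs(row[0] - z0))[:k]
    return sum(u * u for _, u in nearest) / k

low, high = knn_variance(0.1), knn_variance(0.9)
# The estimated conditional variance is larger where the true s.d. is larger.
assert low < high
```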
  7. By: Marmer, Vadim; Yu, Zhengfei
    Abstract: This paper considers efficient inference for the coefficient on the endogenous variable in linear regression models with weak instrumental variables (Weak-IV) and homoskedastic errors. We focus on the alternative hypothesis determined by an arbitrarily large deviation from the null hypothesis. The efficient rotation-invariant and asymptotically similar test turns out to be infeasible as it depends on the unknown correlation between structural and first-stage errors (the degree of endogeneity). We compare the asymptotic power properties of popular Weak-IV-robust tests, focusing on the Anderson-Rubin (AR) and the Conditional Likelihood Ratio (CLR) tests. We find that their relative power performance depends on the degree of endogeneity in the model and the number of IVs. Unexpectedly, the AR test outperforms the CLR when the degree of endogeneity is small and the number of IVs is large. We also describe a test that is optimal when IVs are strong and, when IVs are weak, has the same asymptotic power as the AR test against arbitrarily large deviations from the null.
    Keywords: weak instruments; arbitrarily large deviations; power envelope; power comparisons
    Date: 2015–09–01
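To fix ideas on the Anderson-Rubin (AR) test discussed above, here is a single-instrument sketch: under H0: beta = beta0, the restricted residual y - beta0*x should be uncorrelated with the instrument z, so the test checks z's significance in a regression of that residual on z. The simulated design and all coefficients are illustrative assumptions.

```python
import random
random.seed(5)

# IV model y = beta*x + error with endogenous x (structural and first-stage
# errors share the component v) and one instrument z.
n, beta_true = 400, 1.0
rows = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    v = random.gauss(0.0, 1.0)
    x = 0.5 * z + v                                      # first stage
    y = beta_true * x + 0.8 * v + random.gauss(0.0, 1.0)
    rows.append((z, x, y))

def ar_stat(beta0):
    # F-type statistic for z in a no-intercept regression of the
    # restricted residual y - beta0*x on z.
    e = [y - beta0 * x for _, x, y in rows]
    szz = sum(z * z for z, _, _ in rows)
    sze = sum(z * ei for (z, _, _), ei in zip(rows, e))
    explained = sze * sze / szz
    residual = sum(ei * ei for ei in e) - explained
    return (n - 1) * explained / residual

stat_true = ar_stat(beta_true)         # small when H0 holds
stat_false = ar_stat(beta_true + 2.0)  # large under a big deviation
```

The statistic's validity does not depend on instrument strength, which is why the AR test features in the paper's power comparisons against arbitrarily large deviations from the null.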
  8. By: Lucas, André; Schwaab, Bernd; Zhang, Xin
    Abstract: We develop a novel high-dimensional non-Gaussian modeling framework to infer measures of conditional and joint default risk for numerous financial sector firms. The model is based on a dynamic Generalized Hyperbolic Skewed-t block-equicorrelation copula with time-varying volatility and dependence parameters that naturally accommodates asymmetries, heavy tails, as well as non-linear and time-varying default dependence. We apply a conditional law of large numbers in this setting to define joint and conditional risk measures that can be evaluated quickly and reliably. We apply the modeling framework to assess the joint risk from multiple defaults in the euro area during the 2008-2012 financial and sovereign debt crisis. We document unprecedented tail risks in 2011-2012, as well as their steep decline following subsequent policy actions.
    Keywords: dynamic equicorrelation, generalized hyperbolic distribution, large portfolio approximation, law of large numbers
    JEL: C32 G21
    Date: 2015–08
  9. By: Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf A.
    Abstract: We consider the semiparametric generalised linear regression model, which has mainstream empirical models such as the (partially) linear mean regression, logistic and multinomial regression as special cases. As an extension to the related literature, we allow a misclassified covariate to be interacted with a nonparametric function of a continuous covariate. This model is tailor-made to address known data quality issues of administrative labour market data. Using a sample of 20m observations from Germany, we estimate the determinants of labour market transitions and illustrate the role of considerable misclassification in the educational status on estimated transition probabilities and marginal effects.
    Keywords: IAB-Datensatz Arbeiten und Lernen, IAB-Beschäftigtenstichprobe, data quality, errors, regression analysis, methodological literature
    Date: 2015–09–04
  10. By: I. Mammi
    Abstract: This paper focuses on the estimation of fiscal response functions for advanced economies and on the performance of alternative specifications of the Generalized Method of Moments (GMM) estimator for the rule’s parameters. We first estimate the parameters on simulated data through Monte Carlo experiments; we then run an empirical test on data for the European Monetary Union (EMU). We estimate both the Cyclically-adjusted primary balance (CAPB) and the Primary balance (PB) models, and check the robustness of the estimates to different specifications of the GMM estimator and to alternative settings of the parameters. We also compare alternative instrument reduction strategies in a context where several endogenous variables enter the model. We find that the system GMM estimator performs best in this framework and that the high instrument count turns out not to be problematic. We also make the algebraic links between the parameters in the CAPB and in the PB models explicit, suggesting an effective strategy to estimate the discretionary fiscal response from the coefficients of the PB model. In the empirical application on a dataset for EMU countries, we find that the evidence of acyclicality of discretionary policies is robust to all the specifications of the GMM estimator.
    JEL: C15 C33 E62 H60
    Date: 2015–09
  11. By: Qian, Hang
    Abstract: The standard Kalman filter cannot handle inequality constraints imposed on the state variables, as state truncation induces a non-linear and non-Gaussian model. We propose a Rao-Blackwellised particle filter with the optimal importance function for forward filtering and the likelihood function evaluation. The particle filter effectively enforces the state constraints when the Kalman filter violates them. We find substantial Monte Carlo variance reduction by using the optimal importance function and Rao-Blackwellisation, in which the Gaussian linear sub-structure is exploited at both the cross-sectional and temporal levels.
    Keywords: Rao-Blackwellisation, Kalman filter, Particle filter, Sequential Monte Carlo
    JEL: C32 C53
    Date: 2015–09–03
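The constraint-enforcement idea can be illustrated with a much cruder scheme than the paper's Rao-Blackwellised optimal-proposal filter: a nonnegative random-walk state observed with Gaussian noise, where proposed particles violating s >= 0 simply receive zero weight. All noise scales below are illustrative assumptions.

```python
import math
import random
random.seed(6)

T, N = 50, 500
true_state, obs = [], []
s = 1.0
for _ in range(T):
    s = max(0.0, s + random.gauss(0.0, 0.3))    # truth respects s >= 0
    true_state.append(s)
    obs.append(s + random.gauss(0.0, 0.5))

particles = [abs(random.gauss(1.0, 1.0)) for _ in range(N)]
estimates = []
for y in obs:
    moved = [p + random.gauss(0.0, 0.3) for p in particles]
    # Gaussian measurement likelihood; zero weight if the constraint fails.
    weights = [math.exp(-0.5 * ((y - p) / 0.5) ** 2) if p >= 0.0 else 0.0
               for p in moved]
    total = sum(weights)
    estimates.append(sum(p * w for p, w in zip(moved, weights)) / total)
    particles = random.choices(moved, weights=weights, k=N)  # resample
```

Because constrained particles carry zero weight, every filtered estimate satisfies the inequality, which is exactly where a plain Kalman filter can fail.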
  12. By: Athanasopoulos, George; Hyndman, Rob J.; Kourentzes, Nikolaos; Petropoulos, Fotios
    Abstract: This paper introduces the concept of Temporal Hierarchies for time series forecasting. A temporal hierarchy can be constructed for any time series by means of non-overlapping temporal aggregation. Predictions constructed at all aggregation levels are combined with the proposed framework to result in temporally reconciled, accurate and robust forecasts. The implied combination mitigates modelling uncertainty, while the reconciled nature of the forecasts results in a unified prediction that supports aligned decisions at different planning horizons: from short-term operational up to long-term strategic planning. The proposed methodology is independent of forecasting models. It can embed high level managerial forecasts that incorporate complex and unstructured information with lower level statistical forecasts. Our results show that forecasting with temporal hierarchies increases accuracy over conventional forecasting, particularly under increased modelling uncertainty. We discuss organisational implications of the temporally reconciled forecasts using a case study of Accident & Emergency departments.
    Keywords: Hierarchical forecasting, temporal aggregation, reconciliation, forecast combination
    JEL: C44 C53
    Date: 2015–08–28
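A minimal sketch of the temporal-hierarchy idea, under strong simplifying assumptions: a monthly series is aggregated (non-overlapping) to an annual series, both levels are forecast separately, and the monthly forecasts are then reconciled so they sum exactly to the annual forecast. The proportional rescaling here is a toy stand-in for the paper's reconciliation framework.

```python
# Three years of a monthly series with a fixed seasonal pattern (made up).
monthly = [10, 12, 11, 13, 12, 14, 15, 13, 12, 11, 10, 12] * 3

# Non-overlapping temporal aggregation to the annual level.
annual = [sum(monthly[i:i + 12]) for i in range(0, len(monthly), 12)]

# Naive forecasts at each level of the hierarchy.
monthly_fc = monthly[-12:]        # seasonal-naive monthly forecasts
annual_fc = annual[-1] * 1.02     # annual model foresees 2% growth

# Reconcile: rescale the monthly path so it sums to the annual forecast.
scale = annual_fc / sum(monthly_fc)
reconciled = [m * scale for m in monthly_fc]
```

The reconciled monthly path now carries the annual model's growth signal while keeping the monthly seasonal shape, so decisions at both horizons use one coherent forecast.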
  13. By: Canova, Fabio; Ferroni, Filippo; Matthes, Christian
    Abstract: The paper studies how parameter variation affects the decision rules of a DSGE model and structural inference. We provide diagnostics to detect parameter variations and to ascertain whether they are exogenous or endogenous. Identification and inferential distortions when a constant parameter model is incorrectly assumed are examined. Likelihood and VAR-based estimates of the structural dynamics when parameter variations are neglected are compared. Time variations in the financial frictions of a Gertler and Karadi (2010) model are studied.
    Keywords: endogenous variations; misspecification; Structural model; time varying coefficients
    JEL: C10 E27 E32
    Date: 2015–09
  14. By: Groß, Marcus; Rendtel, Ulrich
    Abstract: Self-reported data typically exhibit a phenomenon called 'heaping', i.e. survey participants round the values of their income, weight or height to some degree. Additionally, respondents may be more prone to round off or up due to social desirability. Ignoring the heaping process introduces a severe bias, in the form of spikes and bumps, when kernel density methods are applied naively to the rounded data. A generalized Stochastic Expectation Maximization (SEM) approach accounting for heaping with potentially asymmetric rounding behaviour in univariate kernel density estimation is presented in this work. The introduced methods are applied to survey data of the German Socio-Economic Panel and exhibit very good performance in simulations.
    Keywords: heaping, survey data, measurement error, self-reported data, kernel density estimation, rounded data
    Date: 2015
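The spikes-and-bumps problem the abstract describes is easy to reproduce: income-like data rounded to the nearest 100 pile up on multiples of 100, so a naive Gaussian kernel density estimate with a small bandwidth shows spikes there and near-zero density in between. The simulated income distribution and bandwidth are illustrative assumptions.

```python
import math
import random
random.seed(7)

raw = [random.gauss(1000.0, 200.0) for _ in range(2000)]
heaped = [round(v / 100.0) * 100.0 for v in raw]   # self-reported, rounded

def kde(xs, x0, h):
    # Gaussian kernel density estimate at point x0 with bandwidth h.
    return sum(math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs) / (
        len(xs) * h * math.sqrt(2.0 * math.pi))

h = 10.0                               # bandwidth small vs the rounding grid
at_heap = kde(heaped, 1000.0, h)       # density on a heaping point: spike
between = kde(heaped, 1050.0, h)       # halfway between heaping points: dip
assert at_heap > 5.0 * between
```

An SEM-type correction, as in the paper, treats the unrounded values as latent and smooths these artefacts out.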
  15. By: Barend Abeln; Jan P. A. M. Jacobs
    Abstract: Seasonality in macroeconomic time series can obscure movements of other components in a series that are operationally more important for economic and econometric analyses. Indeed, in practice one often prefers to work with seasonally adjusted data to assess the current state of the economy and its future course. Recently, the two most widely used seasonal adjustment methods, Census X-12-ARIMA and TRAMO-SEATS, merged into X-13ARIMA-SEATS to become the new industry standard. In this paper, we compare and contrast X-13ARIMA-SEATS with a seasonal adjustment program called CAMPLET, an acronym of its tuning parameters. CAMPLET consists of a simple adaptive procedure which separates the seasonal component and the non-seasonal component from an observed time series. Once this process has been carried out, there is no need to revise these components at a later stage when more observations become available, in contrast with other seasonal adjustment methods. The paper briefly reviews X-13ARIMA-SEATS and describes the main features of CAMPLET. We evaluate the outcomes of both methods in a controlled simulation framework using a variety of processes. Finally, we apply X-13ARIMA-SEATS and CAMPLET to three time series: U.S. non-farm payroll employment, operational income of Ahold, and real GDP in the Netherlands.
    Keywords: seasonal adjustment, real-time, seasonal pattern, simulations, employment, operational income, real GDP
    JEL: C22 E24 E32 E37
    Date: 2015–07–31
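For readers unfamiliar with seasonal adjustment, the simplest possible version of the decomposition both programs perform can be sketched as follows: estimate each month's average deviation from the overall mean and subtract it. X-13ARIMA-SEATS and CAMPLET are of course far more sophisticated; this only illustrates separating a seasonal from a non-seasonal component on a made-up periodic series.

```python
# Four years of a perfectly periodic monthly series (illustrative data).
series = [5, 3, 4, 6, 8, 10, 12, 11, 9, 7, 6, 4] * 4

overall = sum(series) / len(series)
seasonal = []
for m in range(12):
    vals = series[m::12]                       # all observations of month m
    seasonal.append(sum(vals) / len(vals) - overall)

# Non-seasonal component: original series minus its seasonal component.
adjusted = [y - seasonal[t % 12] for t, y in enumerate(series)]
```

On this perfectly periodic input, the adjusted series is flat; on real data the adjusted series retains trend and irregular movements, and the revision question compared in the paper is whether `seasonal` changes when new observations arrive.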

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.