nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒11‒17
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Joint Bayesian Analysis of Parameters and States in Nonlinear, Non-Gaussian State Space Models By István Barra; Lennart Hoogerheide; Siem Jan Koopman; André Lucas
  2. Empirical Bayes Methods for Dynamic Factor Models By Siem Jan Koopman; Geert Mesters
  3. Forecasting Medium and Large Datasets with Vector Autoregressive Moving Average (VARMA) Models By Gustavo Fruet Dias; George Kapetanios
  4. Finite Sample Properties of Tests Based on Prewhitened Nonparametric Covariance Estimators By Preinerstorfer, David
  5. Information Theoretic Optimality of Observation Driven Time Series Models By Francisco Blasques; Siem Jan Koopman; André Lucas
  6. Maximum likelihood estimation of the Markov chain model with macro data and the ecological inference model By Arie ten Cate
  7. Regularized Regression Incorporating Network Information: Simultaneous Estimation of Covariate Coefficients and Connection Signs By Matthias Weber; Martin Schumacher; Harald Binder
  8. Regularized Extended Skew-Normal Regression By Shutes, Karl; Adcock, Chris
  9. Estimation of Ergodic Agent-Based Models by Simulated Minimum Distance By Jakob Grazzini; Matteo Richiardi
  10. Qualitative variables and their reduction possibility. Application to time series models By Ciuiu, Daniel
  11. Decomposition of Gender or Racial Inequality with Endogenous Intervening Covariates: An extension of the DiNardo-Fortin-Lemieux method By YAMAGUCHI Kazuo
  12. Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data By Nalan Basturk; Pinar Ceyhan; Herman K. van Dijk
  13. Modelling cross-border systemic risk in the European banking sector: a copula approach By Raffaella Calabrese; Silvia Osmetti
  14. Model Averaging in Markov-Switching Models: Predicting National Recessions with Regional Data By Guérin, Pierre; Leiva-Leon, Danilo
  15. A practitioners' guide to gravity models of international migration By Michel Beine; Simone Bertoli; Jesús Fernández-Huertas Moraga
  16. Generalized Autocontours: Evaluation of Multivariate Density Models By Gloria Gonzalez-Rivera; Yingying Sun
  17. TENET: Tail-Event driven NETwork risk By Wolfgang Karl Härdle; Natalia Sirotko-Sibirskaya; Weining Wang
  18. Improving Density Forecasts and Value-at-Risk Estimates by Combining Densities By Anne Opschoor; Dick van Dijk; Michel van der Wel
  19. Asymptotic Properties of Imputed Hedonic Price Indices By Olivier Schöni

  1. By: István Barra (VU University Amsterdam, Duisenberg School of Finance, the Netherlands); Lennart Hoogerheide (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); André Lucas (VU University Amsterdam, the Netherlands)
    Abstract: We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear non-Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis-Hastings algorithm. A particular feature of our approach is that smoothed estimates of the states and the marginal likelihood are obtained directly as an output of the algorithm. Our method provides a computationally efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic volatility and stochastic intensity models. For our empirical study, we analyse the performance of our method for stock returns and corporate default panel data. (This paper is an updated version of the paper that appeared earlier as Barra, I., Hoogerheide, L.F., Koopman, S.J., and Lucas, A. (2013) "Joint Independent Metropolis-Hastings Methods for Nonlinear Non-Gaussian State Space Models". TI Discussion Paper 13-050/III. Amsterdam: Tinbergen Institute.)
    Keywords: Bayesian inference, importance sampling, Monte Carlo estimation, Metropolis-Hastings algorithm, mixture of Student's t-distributions
    JEL: C11 C15 C22 C32 C58
    Date: 2014–09–02
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140118&r=ecm
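    A minimal Python sketch of the independent Metropolis-Hastings step that the abstract describes: candidates are drawn from a fixed proposal and accepted with a ratio that multiplies the target ratio by the reversed proposal ratio. The toy target below is illustrative, not the authors' state space model, and a single Student's t stands in for their mixture-of-t proposal.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def log_target(x):
          # toy skewed "posterior", known only up to a normalizing constant
          return x - 0.5 * np.exp(x)

      proposal = stats.t(df=5, loc=0.1, scale=1.2)   # fixed proposal (independent MH)

      x, draws = 0.0, []
      for _ in range(20000):
          cand = proposal.rvs(random_state=rng)
          # independent MH: target ratio times reversed proposal ratio
          log_alpha = (log_target(cand) - log_target(x)
                       + proposal.logpdf(x) - proposal.logpdf(cand))
          if np.log(rng.uniform()) < log_alpha:
              x = cand
          draws.append(x)

      print("posterior mean estimate:", np.mean(draws[2000:]))
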
  2. By: Siem Jan Koopman; Geert Mesters (VU University Amsterdam)
    Abstract: We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkage-based estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods.
    Keywords: Importance sampling, Kalman filtering, Likelihood-based analysis, Posterior modes, Rao-Blackwellization, Shrinkage
    JEL: C32 C43
    Date: 2014–05–23
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140061&r=ecm
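    A minimal Python sketch of the empirical Bayes shrinkage idea in a plain normal-means setting: noisy maximum likelihood estimates are shrunk toward zero with a prior variance estimated from the data themselves. The dynamic factor machinery of the paper is not reproduced; the sketch only illustrates why shrinkage lowers quadratic loss relative to the ML estimates.

      import numpy as np

      rng = np.random.default_rng(1)
      n, s2 = 200, 1.0                       # number of "loadings", known noise variance
      theta = rng.normal(0.0, 0.5, n)        # true coefficients
      z = theta + rng.normal(0.0, np.sqrt(s2), n)   # raw (ML) estimates

      tau2 = max(np.mean(z**2) - s2, 1e-8)   # empirical Bayes estimate of prior variance
      theta_eb = tau2 / (tau2 + s2) * z      # posterior-mean shrinkage

      print("quadratic loss, ML :", np.mean((z - theta) ** 2))
      print("quadratic loss, EB :", np.mean((theta_eb - theta) ** 2))
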
  3. By: Gustavo Fruet Dias (Aarhus University and CREATES); George Kapetanios (Queen Mary University of London)
    Abstract: We address the issue of modelling and forecasting macroeconomic variables using medium and large datasets, by adopting VARMA models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (IOLS) estimator. We establish the consistency and asymptotic distribution of the estimator for strong and weak VARMA(p,q) models. Monte Carlo results show that IOLS is consistent and feasible for large systems, outperforming the MLE and other linear regression based efficient estimators under alternative scenarios. Our empirical application shows that VARMA models outperform the AR(1), VAR(p) and factor models, considering different model dimensions.
    Keywords: VARMA, weak VARMA, weak ARMA, Forecasting, Large datasets, Iterative ordinary least squares (IOLS) estimator, Asymptotic contraction mapping
    JEL: C13 C32 C53 C63 E0
    Date: 2014–10–23
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-37&r=ecm
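    A minimal Python sketch of the iterative OLS (IOLS) idea for a univariate ARMA(1,1); the paper's VARMA case stacks the analogous regression across equations. Starting residuals come from a long autoregression, and the moving-average regressor is rebuilt from the latest fit until the coefficients settle. All tuning values below are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      T, phi, theta = 2000, 0.6, 0.3
      e = rng.standard_normal(T)
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

      # step 0: residuals from a long AR approximation
      p = 10
      X = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])
      resid = y[p:] - X @ np.linalg.lstsq(X, y[p:], rcond=None)[0]
      ehat = np.concatenate([np.zeros(p), resid])

      beta = np.zeros(2)
      for _ in range(50):                          # IOLS iterations
          Z = np.column_stack([y[p:T - 1], ehat[p:T - 1]])
          beta_new = np.linalg.lstsq(Z, y[p + 1:], rcond=None)[0]
          ehat[p + 1:] = y[p + 1:] - Z @ beta_new  # refresh the MA regressor
          if np.max(np.abs(beta_new - beta)) < 1e-8:
              break
          beta = beta_new

      print("IOLS estimates (phi, theta):", beta)
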
  4. By: Preinerstorfer, David
    Abstract: We analytically investigate size and power properties of a popular family of procedures for testing linear restrictions on the coefficient vector in a linear regression model with temporally dependent errors. The tests considered are autocorrelation-corrected F-type tests based on prewhitened nonparametric covariance estimators that possibly incorporate a data-dependent bandwidth parameter, e.g., estimators as considered in Andrews and Monahan (1992), Newey and West (1994), or Rho and Shao (2013). For design matrices that are generic in a measure theoretic sense we prove that these tests either suffer from extreme size distortions or from strong power deficiencies. Despite this negative result we demonstrate that a simple adjustment procedure based on artificial regressors can often resolve this problem.
    Keywords: Autocorrelation robustness, HAC test, fixed-b test, prewhitening, size distortion, power deficiency, artificial regressors.
    JEL: C12 C32
    Date: 2014–08–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:58333&r=ecm
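    A minimal Python sketch of a prewhitened Newey-West long-run variance estimate for a scalar series, in the spirit of Andrews and Monahan (1992) as cited above: fit an AR(1), estimate the Bartlett-kernel variance of the whitened series, then recolour. The tests studied in the paper would plug such an estimate into an F-type statistic; lag length and sample are illustrative.

      import numpy as np

      def newey_west(v, L):
          """Bartlett-kernel long-run variance with L lags."""
          v = v - v.mean()
          T = len(v)
          omega = v @ v / T
          for j in range(1, L + 1):
              gamma = v[j:] @ v[:-j] / T
              omega += 2 * (1 - j / (L + 1)) * gamma
          return omega

      def prewhitened_lrv(v, L=4):
          v = v - v.mean()
          rho = (v[1:] @ v[:-1]) / (v[:-1] @ v[:-1])   # AR(1) prewhitening fit
          rho = np.clip(rho, -0.97, 0.97)              # guard against near-unit roots
          w = v[1:] - rho * v[:-1]                     # whitened series
          return newey_west(w, L) / (1.0 - rho) ** 2   # recolour

      rng = np.random.default_rng(3)
      T = 1000
      u = np.zeros(T)
      for t in range(1, T):                            # AR(1) errors with rho = 0.8
          u[t] = 0.8 * u[t - 1] + rng.standard_normal()

      # true long-run variance = 1 / (1 - 0.8)^2 = 25
      print("prewhitened NW estimate:", prewhitened_lrv(u))
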
  5. By: Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam, the Netherlands, and CREATES, Aarhus University, Denmark); André Lucas (VU University Amsterdam)
    Abstract: We investigate the information theoretic optimality properties of the score function of the predictive likelihood as a device to update parameters in observation driven time-varying parameter models. The results provide a new theoretical justification for the class of generalized autoregressive score models, which covers the GARCH model as a special case. Our main contribution is to show that only parameter updates based on the score always reduce the local Kullback-Leibler divergence between the true conditional density and the model implied conditional density. This result holds irrespective of the severity of model misspecification. We also show that the use of the score leads to a considerably smaller global Kullback-Leibler divergence in empirically relevant settings. We illustrate the theory with an application to time-varying volatility models. We show that the reduction in Kullback-Leibler divergence across a range of different settings can be substantial in comparison to updates based on, for example, squared lagged observations.
    Keywords: generalized autoregressive models, information theory, optimality, Kullback-Leibler distance, volatility models
    JEL: C12 C22
    Date: 2014–04–11
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140046&r=ecm
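    A minimal Python sketch of a score-driven (GAS) volatility recursion for a normal density with time-varying variance f_t: with inverse-information scaling the scaled score is y_t^2 - f_t, so the recursion is a reparameterization of GARCH(1,1), the special case the abstract mentions. Parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      omega, alpha, beta = 0.1, 0.05, 0.9
      T = 1000

      f = np.empty(T + 1)
      f[0] = omega / (1.0 - beta)       # unconditional variance, since E[s_t] = 0
      y = np.empty(T)
      for t in range(T):
          y[t] = np.sqrt(f[t]) * rng.standard_normal()   # y_t | f_t ~ N(0, f_t)
          s = y[t] ** 2 - f[t]          # scaled score of the normal log-density w.r.t. f
          f[t + 1] = omega + alpha * s + beta * f[t]     # GAS(1,1) update

      # with this scaling the recursion equals GARCH(1,1) with
      # coefficients (omega, alpha, beta - alpha)
      print("last filtered variance:", round(f[-1], 3))
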
  6. By: Arie ten Cate
    Abstract: This CPB Discussion Paper merges two isolated bodies of literature: the Markov chain model with macro data (MacRae, 1977) and the ecological inference model (Robinson, 1950). Both are choice models. They have the same likelihood function and the same regression equation. Decades ago, this likelihood function was computationally demanding. This has led to the use of several approximate methods. Due to the improvement in computer hardware and software since 1977, exact maximum likelihood should now be the preferred estimation method.
    JEL: C21 C22 C25 J64
    Date: 2014–09
    URL: http://d.repec.org/n?u=RePEc:cpb:discus:284&r=ecm
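    A minimal Python sketch of the aggregate-data Markov chain setting: state shares evolve as y_t ≈ y_{t-1} P, so transition probabilities can be recovered from the regression equation the abstract refers to. For brevity the sketch uses bounded least squares rather than the exact maximum likelihood the author advocates; the chain and sample sizes are illustrative.

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(5)
      P = np.array([[0.9, 0.1],
                    [0.3, 0.7]])                 # true transition matrix (rows sum to 1)

      T, N = 200, 5000                           # periods, individuals behind the aggregates
      state = rng.integers(0, 2, N)
      shares = np.empty((T, 2))
      for t in range(T):
          shares[t] = np.bincount(state, minlength=2) / N
          stay = np.where(state == 0, P[0, 0], P[1, 1])
          state = np.where(rng.random(N) < stay, state, 1 - state)

      # regression equation: today's share of state 0 on yesterday's shares;
      # the coefficients are the first column of P
      fit = lsq_linear(shares[:-1], shares[1:, 0], bounds=(0.0, 1.0))
      p00, p10 = fit.x
      print("estimated P:\n", np.array([[p00, 1 - p00], [p10, 1 - p10]]))
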
  7. By: Matthias Weber (University of Amsterdam, the Netherlands); Martin Schumacher (University Medical Center, Freiburg); Harald Binder (University Medical Center, Mainz, Germany)
    Abstract: We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or of a repressing type). For the coefficient estimation steps an additional penalty is set on top of the lasso penalty, similarly to Li and Li (2008). We develop a fast implementation for the new method based on coordinate descent. Furthermore, we show how the new methods can be applied to time-to-event data. The new method yields good results in simulation studies concerning sensitivity and specificity of non-zero covariate coefficients, estimation of network connection signs, and prediction performance. We also apply the new method to two microarray time-to-event data sets from patients with ovarian cancer and diffuse large B-cell lymphoma. The new method performs very well in both cases. The main application of this new method is of biomedical nature, but it may also be useful in other fields where network data is available.
    Keywords: high-dimensional data, gene expression data, pathway information, penalized regression
    JEL: C13 C41
    Date: 2014–07–16
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140089&r=ecm
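    A minimal Python sketch of network-regularized regression in the spirit of the Li and Li (2008) penalty cited above, solved by proximal gradient descent: a lasso penalty plus a quadratic penalty b'Lb that pulls coefficients of connected covariates together. The sign-estimation step of the new method is not reproduced; the Laplacian below assumes known, activating connections, and all data are simulated.

      import numpy as np

      rng = np.random.default_rng(6)
      n, p = 100, 10
      X = rng.standard_normal((n, p))
      beta_true = np.array([2.0, 2.0, 2.0, 0, 0, 0, 0, 0, 0, 0])
      y = X @ beta_true + rng.standard_normal(n)

      # network: covariates 0-1-2 form a chain; Laplacian L = D - A
      A = np.zeros((p, p))
      for i, j in [(0, 1), (1, 2)]:
          A[i, j] = A[j, i] = 1.0
      L = np.diag(A.sum(axis=1)) - A

      lam1, lam2 = 5.0, 10.0
      H = X.T @ X + lam2 * L
      eta = 1.0 / np.linalg.eigvalsh(H).max()          # step size ensuring convergence

      b = np.zeros(p)
      for _ in range(5000):
          grad = X.T @ (X @ b - y) + lam2 * (L @ b)
          z = b - eta * grad
          b = np.sign(z) * np.maximum(np.abs(z) - eta * lam1, 0.0)  # soft-threshold

      print("estimated coefficients:", np.round(b, 2))
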
  8. By: Shutes, Karl; Adcock, Chris
    Abstract: This paper considers the impact of using regularisation techniques in the analysis of the extended skew-normal distribution. The approach is estimated using a number of techniques and compared to OLS-based LASSO and ridge regressions, in addition to unconstrained skew-normal regression.
    Keywords: Skew-normal; LASSO; l1 regression
    JEL: C1 C13 C16 C46
    Date: 2013–11–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:58445&r=ecm
  9. By: Jakob Grazzini (Catholic University of Milan, Dept of Economics and Finance); Matteo Richiardi (Institute for New Economic Thinking, Nuffield College, Oxford and Collegio Carlo Alberto)
    Abstract: Two difficulties arise in the estimation of agent-based (AB) models: (i) the criterion function has no simple analytical expression, and (ii) the aggregate properties of the model cannot be analytically understood. In this paper we show how to circumvent these difficulties and under which conditions ergodic models can be consistently estimated by simulated minimum distance techniques, both in a long-run equilibrium and during an adjustment phase.
    Keywords: Agent-based Models, Consistent Estimation, Method of Simulated Moments.
    JEL: C15 C63
    Date: 2014–10–21
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1407&r=ecm
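    A minimal Python sketch of simulated minimum distance for an ergodic toy model: one behavioural parameter is recovered by matching simulated long-run moments to observed ones. The model below is an illustrative AR(1)-style adjustment process, not an actual agent-based model from the paper; common random numbers keep the objective function smooth.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def simulate(rho, T=5000, seed=7):
          rng = np.random.default_rng(seed)           # common random numbers
          x = np.zeros(T)
          for t in range(1, T):
              x[t] = rho * x[t - 1] + rng.standard_normal()
          return x[500:]                              # drop the adjustment phase

      def moments(x):
          return np.array([np.var(x), np.corrcoef(x[1:], x[:-1])[0, 1]])

      m_obs = moments(simulate(0.7, seed=123))        # "observed" data, rho = 0.7

      def distance(rho):
          d = moments(simulate(rho)) - m_obs
          return d @ d                                # identity weighting matrix

      fit = minimize_scalar(distance, bounds=(0.0, 0.95), method="bounded")
      print("estimated rho:", round(fit.x, 3))
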
  10. By: Ciuiu, Daniel
    Abstract: In this paper we study the influence of qualitative variables on unit root tests for stationarity. The implied assumption of the linear regressions involved is that they are not influenced by such qualitative variables. For this reason, after introducing such variables, we first check whether some of them can be removed from the model. The qualitative variables are classified according to the coefficient they affect (the intercept, the coefficient of X_{t-1}, or the coefficient of t) and according to the groups built by taking into account the characteristics of the time moments.
    Keywords: Qualitative variables, Dickey-Fuller, ARIMA, GDP, homogeneity.
    JEL: C52 C58
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59284&r=ecm
  11. By: YAMAGUCHI Kazuo
    Abstract: This paper first clarifies that, unlike propensity-score weighting in Rubin's causal model where confounding covariates can be endogenous, propensity-score weighting in the DiNardo-Fortin-Lemieux (DFL) decomposition analysis may generate biased estimates for the decomposition of inequality into "direct" and "indirect" components when intervening variables are endogenous. The paper also clarifies that the Blinder-Oaxaca method confounds the modeling of two distinct counterfactual situations: one where the covariate effects of the first group become equal to those of the second group, and the other where the covariate distribution of the second group becomes equal to that of the first group. The paper shows that the DFL method requires a distinct condition to provide an unbiased decomposition of inequality that remains under each counterfactual situation. The paper then introduces a combination of the DFL method with Heckman's two-step method as a way of testing and eliminating bias in the DFL estimate when some intervening covariates are endogenous. The paper also intends to bring gender and race back into the center of statistical causal analysis. An application focuses on the decomposition of gender inequality in earned income among white-collar regular employees in Japan.
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:14061&r=ecm
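    A minimal Python sketch of the DFL reweighting step that the paper builds on: group-0 outcomes are reweighted so that the group's covariate distribution matches group 1's, producing the counterfactual against which the decomposition is computed. The Heckman-type endogeneity correction that this paper adds is not shown; the data and the simple logit are illustrative.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(8)
      n = 5000
      g = rng.integers(0, 2, n)                        # group indicator (e.g. gender)
      x = rng.normal(loc=0.5 * g, scale=1.0)           # covariate differs by group
      y = 1.0 + 0.8 * x + 0.4 * g + rng.standard_normal(n)   # outcome

      # propensity of belonging to group 1 given the covariate
      ps = LogisticRegression().fit(x.reshape(-1, 1), g).predict_proba(x.reshape(-1, 1))[:, 1]

      # DFL weights for group-0 observations: match group 1's covariate distribution
      w = (ps / (1 - ps)) * ((1 - g.mean()) / g.mean())

      mean0 = y[g == 0].mean()
      mean0_cf = np.average(y[g == 0], weights=w[g == 0])  # counterfactual group-0 mean
      mean1 = y[g == 1].mean()

      print("raw gap        :", mean1 - mean0)
      print("explained by x :", mean0_cf - mean0)
      print("residual gap   :", mean1 - mean0_cf)
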
  12. By: Nalan Basturk (Maastricht University, the Netherlands); Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam, the Netherlands)
    Abstract: Time varying patterns in US growth are analyzed using various univariate model structures, starting from a naive structure where all features change every period and moving to a model where slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction; survey data on expected growth are included in order to strengthen the information in the model. A simulation-based Bayesian inferential method is used to determine the forecasting performance of the various model specifications. The extension of a basic growth model with a constant mean to models including time variation in the mean and variance requires careful investigation of possible identification issues of the parameters and of existence conditions of the posterior under a diffuse prior. The use of diffuse priors leads to a focus on the likelihood function and enables a researcher and policy adviser to evaluate the scientific information contained in model and data. Empirical results indicate that incorporating time variation in mean growth rates as well as in volatility is important in order to improve the predictive performance of growth models. Furthermore, using data information on growth expectations is important for forecasting growth in specific periods, such as the recession periods around 2000 and 2008.
    Keywords: Growth, Time varying parameters, Expectations data
    JEL: C11 C22 E17
    Date: 2014–09–01
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140119&r=ecm
  13. By: Raffaella Calabrese; Silvia Osmetti
    Abstract: We propose a new methodology based on the Marshall-Olkin (MO) copula to model cross-border systemic risk. The proposed framework estimates the impact of the systematic and idiosyncratic components on systemic risk. First, we propose a maximum-likelihood method to estimate the parameter of the MO copula. In order to use the data on non-distressed banks for these estimates, we consider times to bank failures as censored samples. Hence, we propose an estimation procedure for the MO copula on censored data. The empirical evidence from European banks shows that the proposed censored model avoids possible underestimation of the contagion risk.
    Date: 2014–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1411.1348&r=ecm
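    A minimal Python sketch of the Marshall-Olkin mechanism behind the copula used above: two banks' times to failure share a common systematic shock, which produces joint failures with positive probability. Intensities and the censoring horizon are illustrative; the paper's ML estimator on censored samples is not reproduced.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 100000
      lam1, lam2, lam12 = 0.5, 0.7, 0.3     # idiosyncratic and systematic intensities

      z1 = rng.exponential(1 / lam1, n)     # idiosyncratic shock, bank 1
      z2 = rng.exponential(1 / lam2, n)     # idiosyncratic shock, bank 2
      z12 = rng.exponential(1 / lam12, n)   # common systematic shock

      t1, t2 = np.minimum(z1, z12), np.minimum(z2, z12)

      print("P(simultaneous failure):", np.mean(t1 == t2))   # positive mass
      print("theoretical value      :", lam12 / (lam1 + lam2 + lam12))

      # censoring: banks still alive at the end of the observation window
      c = 2.0
      print("bank 1 censored at t=2 :", np.mean(t1 > c))
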
  14. By: Guérin, Pierre; Leiva-Leon, Danilo
    Abstract: This paper estimates and forecasts U.S. business cycle turning points with state-level data. The probabilities of recession are obtained from univariate and multivariate regime-switching models based on a pairwise combination of national and state-level data. We use two classes of combination schemes to summarize the information from these models: Bayesian Model Averaging and Dynamic Model Averaging. In addition, we suggest the use of combination schemes based on the past predictive ability of a given model to estimate regimes. Both simulation and empirical exercises underline the utility of such combination schemes. Moreover, our best specification provides timely updates of the U.S. business cycle. In particular, the estimated turning points from this specification largely precede the announcements of business cycle turning points from the NBER business cycle dating committee, and compare favorably with competing models.
    Keywords: Markov-switching; Nowcasting; Forecasting; Business Cycles; Forecast combination.
    JEL: C53 E32 E37
    Date: 2014–10–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59361&r=ecm
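    A minimal Python sketch of one combination scheme the abstract mentions: recession probabilities from several models are averaged with weights proportional to each model's recent (Bernoulli) predictive likelihood over a rolling window. All probabilities and outcomes are simulated for illustration.

      import numpy as np

      rng = np.random.default_rng(10)
      T, M = 300, 3
      truth = (rng.random(T) < 0.2).astype(float)            # recession indicator
      skill = np.array([0.8, 0.6, 0.5])                      # models of varying quality
      noise = rng.random((T, M))
      probs = np.clip(skill * truth[:, None] + (1 - skill) * noise, 0.01, 0.99)

      window = 20                                            # rolling evaluation window
      combined = np.full(T, np.nan)
      for t in range(window, T):
          p, yv = probs[t - window:t], truth[t - window:t, None]
          loglik = np.sum(yv * np.log(p) + (1 - yv) * np.log(1 - p), axis=0)
          w = np.exp(loglik - loglik.max())
          w /= w.sum()                                       # past-predictive-ability weights
          combined[t] = probs[t] @ w

      print("final model weights:", np.round(w, 3))
      print("combined recession probability, last period:", round(combined[-1], 3))
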
  15. By: Michel Beine (CREA, Université de Luxembourg); Simone Bertoli (CERDI, University of Auvergne and CNRS); Jesús Fernández-Huertas Moraga (FEDEA and IAE, CSIC)
    Abstract: The use of bilateral data for the analysis of international migration is at the same time a blessing and a curse. It is a blessing since the dyadic dimension of the data allows researchers to address a number of previously unanswered questions, but it is also a curse for the various analytical challenges it gives rise to. This paper presents the theoretical foundations of the estimation of gravity models of international migration, and the main difficulties that have to be tackled in the econometric analysis, such as the nature of migration data, how to account for multilateral resistance to migration or endogeneity. We also review some empirical evidence that has considered these issues.
    Keywords: Gravity equation; discrete choice models; international migration
    JEL: F22 C23
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:luc:wpaper:14-24&r=ecm
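    A minimal Python sketch of estimating a gravity equation for bilateral flows by Poisson pseudo-maximum likelihood (PPML), a workhorse estimator in this literature that handles the zero flows and heteroskedasticity issues raised by guides of this kind. All variables are simulated; in practice multilateral resistance would be absorbed by origin and destination fixed effects.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n = 500                                        # origin-destination pairs
      log_dist = rng.normal(7.0, 1.0, n)
      log_pop_o = rng.normal(15.0, 1.0, n)
      mu = np.exp(-5 + 1.0 * log_pop_o - 1.2 * log_dist)
      flows = rng.poisson(mu)                        # zero flows are fine for PPML

      X = sm.add_constant(np.column_stack([log_pop_o, log_dist]))
      ppml = sm.GLM(flows, X, family=sm.families.Poisson()).fit()
      print(ppml.params)                             # approx (-5, 1.0, -1.2)
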
  16. By: Gloria Gonzalez-Rivera (Department of Economics, University of California Riverside); Yingying Sun
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:201431&r=ecm
  17. By: Wolfgang Karl Härdle; Natalia Sirotko-Sibirskaya; Weining Wang
    Abstract: We propose a semiparametric measure to estimate systemic interconnectedness across financial institutions based on tail-driven spill-over effects in an ultra-high dimensional framework. Methodologically, we employ a variable selection technique in a time series setting in the context of a single-index model for a generalized quantile regression framework. We can thus include more financial institutions in the analysis, measure their interdependencies in tails and, at the same time, take into account non-linear relationships between them. An empirical application on a set of 200 publicly traded U.S. financial institutions provides useful rankings of systemic exposure and systemic contribution at various stages of the financial crisis. Analysis of the network, its behaviour and its dynamics, allows us to characterize the role of each sector in the financial crisis and yields a new perspective on the U.S. financial market over 2007-2012.
    Keywords: Systemic Risk, Systemic Risk Network, Generalized Quantile, Quantile Single-Index Regression, Value at Risk, CoVaR, Lasso
    JEL: G01 G18 G32 G38 C21 C51 C63
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014-066&r=ecm
  18. By: Anne Opschoor (VU University Amsterdam); Dick van Dijk (Erasmus University Rotterdam); Michel van der Wel (Erasmus University Rotterdam)
    Abstract: We investigate the added value of combining density forecasts for asset return prediction in a specific region of support. We develop a new technique that takes into account model uncertainty by assigning weights to individual predictive densities using a scoring rule based on the censored likelihood. We apply this approach in the context of recently developed univariate volatility models (including HEAVY and Realized GARCH models), using daily returns from the S&P 500, DJIA, FTSE and Nikkei stock market indexes from 2000 until 2013. The results show that combined density forecasts based on the censored likelihood scoring rule significantly outperform pooling based on the log scoring rule and individual density forecasts. The same result, albeit less strong, holds when compared to combined density forecasts based on equal weights. In addition, VaR estimates improve at the short horizon, in particular when compared to estimates based on equal weights or to the VaR estimates of the individual models.
    Keywords: Density forecast evaluation, Volatility modeling, Censored likelihood, Value-at-Risk
    JEL: C53 C58 G17
    Date: 2014–07–21
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140090&r=ecm
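    A minimal Python sketch in the spirit of the censored likelihood scoring rule used above: inside the region of interest (here the left tail) the density itself is scored, outside only the total tail mass matters. Two normal candidate densities are combined, and the weight maximizing the average censored score on simulated fat-tailed "returns" is found by grid search; all distributions are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(12)
      y = stats.t(df=4).rvs(2000, random_state=rng)        # fat-tailed "returns"
      r = np.quantile(y, 0.15)                             # left-tail region y <= r
      in_tail = y <= r

      f1, f2 = stats.norm(0, 1.0), stats.norm(0, 1.5)      # candidate density forecasts

      def censored_score(w):
          pdf = w * f1.pdf(y) + (1 - w) * f2.pdf(y)        # combined density
          mass = w * f1.cdf(r) + (1 - w) * f2.cdf(r)       # combined tail mass
          return np.mean(np.where(in_tail, np.log(pdf), np.log(1 - mass)))

      grid = np.linspace(0, 1, 101)
      scores = [censored_score(w) for w in grid]
      print("optimal weight on N(0,1):", grid[int(np.argmax(scores))])
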
  19. By: Olivier Schöni
    Abstract: Hedonic price indices are currently considered the state-of-the-art approach to computing constant-quality price indices. In particular, hedonic price indices based on imputed prices have become popular among both practitioners and researchers for analyzing price changes at an aggregate level. Although widely employed, little research has been conducted to investigate their asymptotic properties and the influence of the econometric model on the parameters estimated by these price indices. The present paper therefore fills this gap by analyzing the asymptotic properties of the most commonly used imputed hedonic price indices in the case of linear and linearizable models. The obtained results are used to gauge the impact of bias-adjusted predictions on hedonic imputed indices in the case of log-linear hedonic functions with normally distributed errors.
    Keywords: Price indices, hedonic regression, imputation, asymptotic theory
    JEL: C21 C43 C53 C58
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:cep:sercdp:0166&r=ecm
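    A minimal Python sketch of a double-imputation hedonic index with a log-linear hedonic function: both periods' prices are imputed at the base-period characteristics, applying the normal bias adjustment exp(s^2/2) to the exponentiated predictions, the adjustment discussed in the abstract. Data and coefficients are simulated for illustration.

      import numpy as np

      rng = np.random.default_rng(13)

      def fit_loglinear(x, logp):
          Xc = np.column_stack([np.ones(len(x)), x])
          beta = np.linalg.lstsq(Xc, logp, rcond=None)[0]
          s2 = np.sum((logp - Xc @ beta) ** 2) / (len(x) - Xc.shape[1])
          return beta, s2

      def predict_price(x, beta, s2):
          Xc = np.column_stack([np.ones(len(x)), x])
          return np.exp(Xc @ beta + s2 / 2.0)        # bias-adjusted prediction

      n = 1000
      x0, x1 = rng.normal(4, 1, n), rng.normal(4.2, 1, n)   # characteristics per period
      logp0 = 1.0 + 0.5 * x0 + rng.normal(0, 0.2, n)
      logp1 = 1.1 + 0.5 * x1 + rng.normal(0, 0.2, n)        # pure price rise: 10 log points

      b0, s20 = fit_loglinear(x0, logp0)
      b1, s21 = fit_loglinear(x1, logp1)

      # Laspeyres-type imputed index: both periods evaluated at period-0 goods
      index = predict_price(x0, b1, s21).mean() / predict_price(x0, b0, s20).mean()
      print("imputed hedonic index:", round(index, 4))      # approx exp(0.1)
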

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.