Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Issue of 2014-11-17, edited by Sune Karlsson

Joint Bayesian Analysis of Parameters and States in Nonlinear, Non-Gaussian State Space Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140118&r=ecm
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear non-Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis-Hastings algorithm. A particular feature of our approach is that smoothed estimates of the states and the marginal likelihood are obtained directly as an output of the algorithm. Our method provides a computationally efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic volatility and stochastic intensity models. For our empirical study, we analyse the performance of our method for stock returns and corporate default panel data. (This paper is an updated version of the paper that appeared earlier as Barra, I., Hoogerheide, L.F., Koopman, S.J., and Lucas, A. (2013) "Joint Independent Metropolis-Hastings Methods for Nonlinear Non-Gaussian State Space Models". TI Discussion Paper 13-050/III. Amsterdam: Tinbergen Institute.)
István Barra, Lennart Hoogerheide, Siem Jan Koopman, André Lucas (2014-09-02)
Keywords: Bayesian inference, importance sampling, Monte Carlo estimation, Metropolis-Hastings algorithm, mixture of Student's t-distributions

Empirical Bayes Methods for Dynamic Factor Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140061&r=ecm
We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkage-based estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods.
Siem Jan Koopman, Geert Mesters (2014-05-23)
Keywords: Importance sampling, Kalman filtering, Likelihood-based analysis, Posterior modes, Rao-Blackwellization, Shrinkage

Forecasting Medium and Large Datasets with Vector Autoregressive Moving Average (VARMA) Models
http://d.repec.org/n?u=RePEc:aah:create:2014-37&r=ecm
We address the issue of modelling and forecasting macroeconomic variables using medium and large datasets, by adopting VARMA models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (IOLS) estimator. We establish the consistency and asymptotic distribution of the estimator for strong and weak VARMA(p,q) models. Monte Carlo results show that IOLS is consistent and feasible for large systems, outperforming the MLE and other linear regression based efficient estimators under alternative scenarios. Our empirical application shows that VARMA models outperform the AR(1), VAR(p) and factor models, considering different model dimensions.
Gustavo Fruet Dias, George Kapetanios (2014-10-23)
Keywords: VARMA, weak VARMA, weak ARMA, Forecasting, Large datasets, Iterative ordinary least squares (IOLS) estimator, Asymptotic contraction mapping

Finite Sample Properties of Tests Based on Prewhitened Nonparametric Covariance Estimators
http://d.repec.org/n?u=RePEc:pra:mprapa:58333&r=ecm
We analytically investigate size and power properties of a popular family of procedures for testing linear restrictions on the coefficient vector in a linear regression model with temporally dependent errors. The tests considered are autocorrelation-corrected F-type tests based on prewhitened nonparametric covariance estimators that possibly incorporate a data-dependent bandwidth parameter, e.g., estimators as considered in Andrews and Monahan (1992), Newey and West (1994), or Rho and Shao (2013). For design matrices that are generic in a measure theoretic sense we prove that these tests either suffer from extreme size distortions or from strong power deficiencies. Despite this negative result we demonstrate that a simple adjustment procedure based on artificial regressors can often resolve this problem.
David Preinerstorfer (2014-08-20)
Keywords: Autocorrelation robustness, HAC test, fixed-b test, prewhitening, size distortion, power deficiency, artificial regressors

Information Theoretic Optimality of Observation Driven Time Series Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140046&r=ecm
We investigate the information theoretic optimality properties of the score function of the predictive likelihood as a device to update parameters in observation driven time-varying parameter models. The results provide a new theoretical justification for the class of generalized autoregressive score models, which covers the GARCH model as a special case. Our main contribution is to show that only parameter updates based on the score always reduce the local Kullback-Leibler divergence between the true conditional density and the model implied conditional density. This result holds irrespective of the severity of model misspecification. We also show that the use of the score leads to a considerably smaller global Kullback-Leibler divergence in empirically relevant settings. We illustrate the theory with an application to time-varying volatility models. We show that the reduction in Kullback-Leibler divergence across a range of different settings can be substantial in comparison to updates based on, for example, squared lagged observations.
Francisco Blasques, Siem Jan Koopman, André Lucas (2014-04-11)
Keywords: generalized autoregressive models, information theory, optimality, Kullback-Leibler distance, volatility models

Maximum likelihood estimation of the Markov chain model with macro data and the ecological inference model
http://d.repec.org/n?u=RePEc:cpb:discus:284&r=ecm
This CPB Discussion Paper merges two isolated bodies of literature: the Markov chain model with macro data (MacRae, 1977) and the ecological inference model (Robinson, 1950). Both are choice models. They have the same likelihood function and the same regression equation. Decades ago, this likelihood function was computationally demanding. This has led to the use of several approximate methods. Due to the improvement in computer hardware and software since 1977, exact maximum likelihood should now be the preferred estimation method.
Arie ten Cate (2014-09)

Regularized Regression Incorporating Network Information: Simultaneous Estimation of Covariate Coefficients and Connection Signs
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140089&r=ecm
We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or of a repressing type). For the coefficient estimation steps an additional penalty is set on top of the lasso penalty, similarly to Li and Li (2008). We develop a fast implementation for the new method based on coordinate descent. Furthermore, we show how the new methods can be applied to time-to-event data. The new method yields good results in simulation studies concerning sensitivity and specificity of non-zero covariate coefficients, estimation of network connection signs, and prediction performance. We also apply the new method to two microarray time-to-event data sets from patients with ovarian cancer and diffuse large B-cell lymphoma. The new method performs very well in both cases. The main application of this new method is of biomedical nature, but it may also be useful in other fields where network data is available.
Matthias Weber, Martin Schumacher, Harald Binder (2014-07-16)
Keywords: high-dimensional data, gene expression data, pathway information, penalized regression

Regularized Extended Skew-Normal Regression
http://d.repec.org/n?u=RePEc:pra:mprapa:58445&r=ecm
This paper considers the impact of using regularisation techniques in the analysis of the extended skew-normal distribution. The model is estimated using a number of techniques and compared to OLS-based LASSO and ridge regressions, in addition to unconstrained skew-normal regression.
Karl Shutes, Chris Adcock (2013-11-24)
Keywords: Skew-normal; LASSO; l1 regression

Estimation of Ergodic Agent-Based Models by Simulated Minimum Distance
http://d.repec.org/n?u=RePEc:nuf:econwp:1407&r=ecm
Two difficulties arise in the estimation of agent-based (AB) models: (i) the criterion function has no simple analytical expression, and (ii) the aggregate properties of the model cannot be analytically understood. In this paper we show how to circumvent these difficulties and under which conditions ergodic models can be consistently estimated by simulated minimum distance techniques, both in a long-run equilibrium and during an adjustment phase.
Jakob Grazzini, Matteo Richiardi (2014-10-21)
Keywords: Agent-based Models, Consistent Estimation, Method of Simulated Moments

Qualitative variables and their reduction possibility. Application to time series models
http://d.repec.org/n?u=RePEc:pra:mprapa:59284&r=ecm
In this paper we study the influence of qualitative variables on unit root tests for stationarity. The linear regressions involved carry the implicit assumption that they are not influenced by such qualitative variables. For this reason, after introducing such variables, we first check whether some of them can be removed from the model. The qualitative variables considered are classified according to the corresponding coefficient (the intercept, the coefficient of X(t-1), and the coefficient of t), and according to the different groups built by taking into account the characteristics of the time moments.
Daniel Ciuiu (2013-06)
Keywords: Qualitative variables, Dickey-Fuller, ARIMA, GDP, homogeneity

Decomposition of Gender or Racial Inequality with Endogenous Intervening Covariates: An extension of the DiNardo-Fortin-Lemieux method
http://d.repec.org/n?u=RePEc:eti:dpaper:14061&r=ecm
This paper first clarifies that, unlike propensity-score weighting in Rubin's causal model where confounding covariates can be endogenous, propensity-score weighting in the DiNardo-Fortin-Lemieux (DFL) decomposition analysis may generate biased estimates for the decomposition of inequality into "direct" and "indirect" components when intervening variables are endogenous. The paper also clarifies that the Blinder-Oaxaca method confounds the modeling of two distinct counterfactual situations: one where the covariate effects of the first group become equal to those of the second group, and the other where the covariate distribution of the second group becomes equal to that of the first group. The paper shows that the DFL method requires a distinct condition to provide an unbiased decomposition of inequality that remains under each counterfactual situation. The paper then introduces a combination of the DFL method with Heckman's two-step method as a way of testing and eliminating bias in the DFL estimate when some intervening covariates are endogenous. The paper also intends to bring gender and race back into the center of statistical causal analysis. An application focuses on the decomposition of gender inequality in earned income among white-collar regular employees in Japan.
YAMAGUCHI Kazuo (2014-10)

Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140119&r=ecm
Time varying patterns in US growth are analyzed using various univariate model structures, ranging from a naive specification in which all features change every period to a model in which the slow variation in the conditional mean and the changes in the conditional variance are specified jointly, together with their interaction; survey data on expected growth are included in order to strengthen the information in the model. A simulation-based Bayesian inferential method is used to determine the forecasting performance of the various model specifications. Extending a basic growth model with a constant mean to models that include time variation in the mean and variance requires careful investigation of possible identification issues of the parameters and of existence conditions for the posterior under a diffuse prior. The use of diffuse priors leads to a focus on the likelihood function, and it enables a researcher and policy adviser to evaluate the scientific information contained in model and data. Empirical results indicate that incorporating time variation in mean growth rates as well as in volatility is important in order to improve the predictive performance of growth models. Furthermore, using data information on growth expectations is important for forecasting growth in specific periods, such as the recession periods around 2000 and 2008.
Nalan Basturk, Pinar Ceyhan, Herman K. van Dijk (2014-09-01)
Keywords: Growth, Time varying parameters, Expectations data

Modelling cross-border systemic risk in the European banking sector: a copula approach
http://d.repec.org/n?u=RePEc:arx:papers:1411.1348&r=ecm
We propose a new methodology based on the Marshall-Olkin (MO) copula to model cross-border systemic risk. The proposed framework estimates the impact of the systematic and idiosyncratic components on systemic risk. Initially, we propose a maximum-likelihood method to estimate the parameter of the MO copula. In order to use the data on non-distressed banks in these estimates, we consider times to bank failure as censored samples. Hence, we propose an estimation procedure for the MO copula on censored data. The empirical evidence from European banks shows that the proposed censored model avoids possible underestimation of the contagion risk.
Raffaella Calabrese, Silvia Osmetti (2014-11)

Model Averaging in Markov-Switching Models: Predicting National Recessions with Regional Data
http://d.repec.org/n?u=RePEc:pra:mprapa:59361&r=ecm
This paper estimates and forecasts U.S. business cycle turning points with state-level data. The probabilities of recession are obtained from univariate and multivariate regime-switching models based on a pairwise combination of national and state-level data. We use two classes of combination schemes to summarize the information from these models: Bayesian Model Averaging and Dynamic Model Averaging. In addition, we suggest the use of combination schemes based on the past predictive ability of a given model to estimate regimes. Both simulation and empirical exercises underline the utility of such combination schemes. Moreover, our best specification provides timely updates of the U.S. business cycle. In particular, the estimated turning points from this specification largely precede the announcements of business cycle turning points from the NBER business cycle dating committee, and compare favorably with competing models.
Pierre Guérin, Danilo Leiva-Leon (2014-10-17)
Keywords: Markov-switching; Nowcasting; Forecasting; Business Cycles; Forecast combination

A practitioners' guide to gravity models of international migration
http://d.repec.org/n?u=RePEc:luc:wpaper:14-24&r=ecm
The use of bilateral data for the analysis of international migration is at the same time a blessing and a curse. It is a blessing since the dyadic dimension of the data allows researchers to address a number of previously unanswered questions, but it is also a curse for the various analytical challenges it gives rise to. This paper presents the theoretical foundations of the estimation of gravity models of international migration, and the main difficulties that have to be tackled in the econometric analysis, such as the nature of migration data, how to account for multilateral resistance to migration, and endogeneity. We also review some of the empirical literature that has addressed these issues.
Michel Beine, Simone Bertoli, Jesús Fernández-Huertas Moraga (2014)
Keywords: Gravity equation; discrete choice models; international migration

Generalized Autocontours: Evaluation of Multivariate Density Models
http://d.repec.org/n?u=RePEc:ucr:wpaper:201431&r=ecm
Gloria Gonzalez-Rivera, Yingying Sun (2014-03)

TENET: Tail-Event driven NETwork risk
http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014-066&r=ecm
We propose a semiparametric measure to estimate systemic interconnectedness across financial institutions based on tail-driven spill-over effects in an ultra-high dimensional framework. Methodologically, we employ a variable selection technique in a time series setting in the context of a single-index model for a generalized quantile regression framework. We can thus include more financial institutions in the analysis, measure their interdependencies in the tails and, at the same time, take into account non-linear relationships between them. An empirical application to a set of 200 publicly traded U.S. financial institutions provides useful rankings of systemic exposure and systemic contribution at various stages of the financial crisis. Network analysis of the system's behaviour and dynamics allows us to characterize the role of each sector in the financial crisis and yields a new perspective on the U.S. financial market over 2007-2012.
Wolfgang Karl Härdle, Natalia Sirotko-Sibirskaya, Weining Wang (2014-12)
Keywords: Systemic Risk, Systemic Risk Network, Generalized Quantile, Quantile Single-Index Regression, Value at Risk, CoVaR, Lasso

Improving Density Forecasts and Value-at-Risk Estimates by Combining Densities
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140090&r=ecm
We investigate the added value of combining density forecasts for asset return prediction in a specific region of support. We develop a new technique that takes into account model uncertainty by assigning weights to individual predictive densities using a scoring rule based on the censored likelihood. We apply this approach in the context of recently developed univariate volatility models (including HEAVY and Realized GARCH models), using daily returns from the S&P 500, DJIA, FTSE and Nikkei stock market indexes from 2000 until 2013. The results show that combined density forecasts based on the censored likelihood scoring rule significantly outperform pooling based on the log scoring rule and individual density forecasts. The same result, albeit less strong, holds when compared to combined density forecasts based on equal weights. In addition, VaR estimates improve at the short horizon, in particular when compared to estimates based on equal weights or to the VaR estimates of the individual models.
Anne Opschoor, Dick van Dijk, Michel van der Wel (2014-07-21)
Keywords: Density forecast evaluation, Volatility modeling, Censored likelihood, Value-at-Risk

Asymptotic Properties of Imputed Hedonic Price Indices
http://d.repec.org/n?u=RePEc:cep:sercdp:0166&r=ecm
Hedonic price indices are currently considered to be the state-of-the-art approach to computing constant-quality price indices. In particular, hedonic price indices based on imputed prices have become popular both among practitioners and researchers to analyze price changes at an aggregate level. Although widely employed, little research has been conducted to investigate their asymptotic properties and the influence of the econometric model on the parameters estimated by these price indices. The present paper therefore tries to fill this knowledge gap by analyzing the asymptotic properties of the most commonly used imputed hedonic price indices in the case of linear and linearizable models. The obtained results are used to gauge the impact of bias-adjusted predictions on imputed hedonic indices in the case of log-linear hedonic functions with normally distributed errors.
Olivier Schöni (2014-10)
Keywords: Price indices, hedonic regression, imputation, asymptotic theory
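The bias adjustment mentioned in the last abstract rests on a standard property of log-linear models: if log p = x'b + e with e ~ N(0, s^2), then E[p | x] = exp(x'b + s^2/2), so simply exponentiating the fitted log price understates the conditional mean price by the factor exp(s^2/2). A minimal sketch on simulated data (the variable names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: log price is linear in one quality characteristic.
n, beta0, beta1, sigma = 5000, 1.0, 0.5, 0.4
x = rng.uniform(0.0, 2.0, n)
log_p = beta0 + beta1 * x + rng.normal(0.0, sigma, n)

# OLS fit of the log-linear hedonic function.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, log_p, rcond=None)
resid = log_p - X @ coef
s2 = resid @ resid / (n - X.shape[1])  # unbiased estimate of sigma^2

# Naive back-transformation vs. bias-adjusted imputed prices.
naive = np.exp(X @ coef)             # understates E[p | x]
adjusted = np.exp(X @ coef + s2 / 2)  # adds the exp(sigma^2/2) correction

true_mean = np.exp(beta0 + beta1 * x + sigma**2 / 2)
print(np.mean(naive / true_mean), np.mean(adjusted / true_mean))
```

With sigma = 0.4 the naive imputations fall short of the conditional mean by roughly exp(-0.08), about 8 percent, which feeds directly into any index built from them; the adjusted imputations are approximately unbiased.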