New Economics Papers on Econometrics
By: | István Barra (VU University Amsterdam, Duisenberg School of Finance, the Netherlands); Lennart Hoogerheide (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); André Lucas (VU University Amsterdam, the Netherlands) |
Abstract: | We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear non-Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis-Hastings algorithm. A particular feature of our approach is that smoothed estimates of the states and the marginal likelihood are obtained directly as an output of the algorithm. Our method provides a computationally efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic volatility and stochastic intensity models. For our empirical study, we analyse the performance of our method for stock returns and corporate default panel data. A generic sketch of the independence Metropolis-Hastings step follows this entry. (This paper is an updated version of the paper that appeared earlier as Barra, I., Hoogerheide, L.F., Koopman, S.J., and Lucas, A. (2013) "Joint Independent Metropolis-Hastings Methods for Nonlinear Non-Gaussian State Space Models". TI Discussion Paper 13-050/III. Amsterdam: Tinbergen Institute.) |
Keywords: | Bayesian inference, importance sampling, Monte Carlo estimation, Metropolis-Hastings algorithm, mixture of Student's t-distributions |
JEL: | C11 C15 C22 C32 C58 |
Date: | 2014–09–02 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140118&r=ecm |
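The engine behind the entry above is the independence Metropolis-Hastings update: candidates are drawn from a fixed proposal density and accepted with a probability determined by importance-sampling weights. Below is a minimal sketch of that generic step only, not the paper's method; the construction of the flexible mixture-of-Student's-t proposal is the paper's contribution and is not reproduced, and `log_target`, `proposal_sample` and `proposal_logpdf` are placeholder callables.

```python
import numpy as np

def independence_mh(log_target, proposal_sample, proposal_logpdf, n_draws, seed=0):
    """Generic independence Metropolis-Hastings sampler.

    log_target:      log posterior density (up to a constant)
    proposal_sample: draws one candidate from the fixed proposal
    proposal_logpdf: log density of the proposal
    """
    rng = np.random.default_rng(seed)
    current = proposal_sample()
    log_w_current = log_target(current) - proposal_logpdf(current)
    draws, accepted = [], 0
    for _ in range(n_draws):
        candidate = proposal_sample()
        log_w_candidate = log_target(candidate) - proposal_logpdf(candidate)
        # accept with probability min(1, w_candidate / w_current)
        if np.log(rng.uniform()) < log_w_candidate - log_w_current:
            current, log_w_current = candidate, log_w_candidate
            accepted += 1
        draws.append(current)
    return np.array(draws), accepted / n_draws
```

Because candidates are drawn independently of the current state, the acceptance rate directly measures how well the proposal matches the posterior, which is why flexible proposal densities matter so much here.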
By: | Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam) |
Abstract: | We study the strong consistency and asymptotic normality of the maximum likelihood estimator for a class of time series models driven by the score function of the predictive likelihood. This class of nonlinear dynamic models includes both new and existing observation driven time series models. Examples include models for generalized autoregressive conditional heteroskedasticity, mixed-measurement dynamic factors, serial dependence in heavy-tailed densities, and other time-varying parameter processes. We formulate primitive conditions for global identification, invertibility, strong consistency, and asymptotic normality under both correct specification and misspecification. We provide key illustrations of how the theory can be applied to specific dynamic models. The generic score-driven updating equation is sketched after this entry. |
Keywords: | time-varying parameter models, GAS, score driven models, Markov processes, estimation, stationarity, invertibility, consistency, asymptotic normality |
JEL: | C13 C22 C12 |
Date: | 2014–03–04 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140029&r=ecm |
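For orientation, the score-driven updating equation that defines this model class is commonly written as follows. This is the generic form of the generalized autoregressive score (GAS) recursion, not the paper's own notation:

```latex
f_{t+1} = \omega + \beta f_t + \alpha s_t ,
\qquad
s_t = S_t \, \nabla_t ,
\qquad
\nabla_t = \frac{\partial \ln p(y_t \mid f_t; \theta)}{\partial f_t} ,
```

where f_t is the time-varying parameter, ∇_t is the score of the predictive likelihood, and S_t is a scaling term, often taken to be the inverse of the conditional Fisher information.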
By: | Chaohua Dong; Jiti Gao; Bin Peng |
Abstract: | In this paper, we consider a partially linear panel data model with cross-sectional dependence and non-stationarity. We allow fixed effects to be correlated with the regressors in order to capture unobservable heterogeneity. Under a general spatial error dependence structure, we establish consistent closed-form estimators of both the unknown parameters and the unknown function for the case where N and T go jointly to infinity. Rates of convergence and asymptotic normality results are established for the proposed estimators. Both the finite-sample performance and the empirical applications show that the proposed estimation method works well when cross-sectional dependence is present in the data set. |
Keywords: | Asymptotic theory, closed-form estimate, orthogonal series method, partially linear panel data model. |
JEL: | C13 C14 C23 C51 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2015-7&r=ecm |
By: | Pawel Janus (UBS Global Asset Management, the Netherlands); André Lucas (VU University Amsterdam); Anne Opschoor (VU University Amsterdam, the Netherlands) |
Abstract: | We develop a new model for multivariate covariance matrix dynamics based on daily return observations and daily realized covariance matrix kernels computed from intraday data. Both types of data may be fat-tailed. We account for this by assuming a matrix-F distribution for the realized kernels, and a multivariate Student’s t distribution for the returns. Using generalized autoregressive score dynamics for the unobserved true covariance matrix, our approach automatically corrects for the effect of outliers and incidentally large observations, both in returns and in covariances. Moreover, by an appropriate choice of scaling of the conditional score function we are able to retain a convenient matrix formulation for the dynamic updates of the covariance matrix. This makes the model highly computationally efficient. We show how the model performs in a controlled simulation setting as well as for empirical data. In our empirical application, we study daily returns and realized kernels from 15 equities over the period 2001-2012 and find that the new model statistically outperforms (recently developed) multivariate volatility models, both in-sample and out-of-sample. We also comment on the possibility of using composite likelihood methods for estimation if desired. |
Keywords: | realized covariance matrices, heavy tails, (degenerate) matrix-F distribution, generalized autoregressive score (GAS) dynamics |
JEL: | C32 C58 |
Date: | 2014–06–19 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140073&r=ecm |
By: | Lukasz Gatarek (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam); Lennart Hoogerheide (VU University Amsterdam); Koen Hooning (Delft University of Technology); Herman K. van Dijk (Econometric Institute, Erasmus University Rotterdam, and VU University Amsterdam) |
Abstract: | Accurate prediction of risk measures such as Value at Risk (VaR) and Expected Shortfall (ES) requires precise estimation of the tail of the predictive distribution. Two novel concepts are introduced that offer a specific focus on this part of the predictive density: the censored posterior, a posterior in which the likelihood is replaced by the censored likelihood; and the censored predictive likelihood, which is used for Bayesian Model Averaging. We perform extensive experiments involving simulated and empirical data. Our results show the ability of these new approaches to outperform the standard posterior and traditional Bayesian Model Averaging techniques in applications of Value-at-Risk prediction in GARCH models. A sketch of the censored likelihood construction follows this entry. |
Keywords: | censored likelihood, censored posterior, censored predictive likelihood, Bayesian Model Averaging, Value at Risk, Metropolis-Hastings algorithm. |
JEL: | C11 C15 C22 C51 C53 C58 G17 |
Date: | 2013–04–15 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20130060&r=ecm |
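To make the censoring idea above concrete: inside the region of interest (here the left tail, where VaR lives) an observation contributes its full log-density, while outside it only the log-probability of the complement region is counted. The sketch below is illustrative only; a Student-t density stands in for the model's predictive density, and the function name is hypothetical.

```python
import numpy as np
from scipy import stats

def censored_loglik(params, returns, threshold):
    """Censored log-likelihood focused on the left tail below `threshold`.

    Tail observations contribute their full log-density; the rest contribute
    only the log-probability of lying outside the tail region.
    """
    df, loc, scale = params
    returns = np.asarray(returns)
    dist = stats.t(df, loc=loc, scale=scale)
    in_tail = returns <= threshold
    ll_tail = dist.logpdf(returns[in_tail]).sum()
    ll_rest = np.sum(~in_tail) * np.log(dist.sf(threshold))  # sf = 1 - cdf
    return ll_tail + ll_rest
```

Maximizing this instead of the full likelihood concentrates the fit on the tail, which is the idea behind the censored posterior.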
By: | Charles S. Bos (VU University Amsterdam); Pawel Janus (VU University Amsterdam) |
Abstract: | In this article we introduce a new class of test statistics designed to detect the occurrence of abnormal observations. It derives from the joint distribution of moment- and quantile-based estimators of power variation σ^r, under the assumption of a normal distribution for the underlying data. Our novel tests can be applied to test for jumps and are found to be generally more powerful than widely used alternatives. An extensive empirical illustration for high-frequency equity data suggests that jumps can be more prevalent than inferred from existing tests on the second or third moment of the data. |
Keywords: | Finite activity jumps, higher order moments, order statistics, outliers, realized variation. |
JEL: | C10 C12 G12 |
Date: | 2013–10–04 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20130155&r=ecm |
By: | Francisco Blasques (VU University Amsterdam, the Netherlands); Artem Duplinskiy (VU University Amsterdam, the Netherlands) |
Abstract: | Parameter estimates of structural economic models are often difficult to interpret in light of the underlying economic theory. Bayesian methods have become increasingly popular as a tool for conducting inference on structural models, since priors offer a way to exert control over the estimation results. This paper proposes a penalized indirect inference estimator that allows researchers to obtain economically meaningful parameter estimates in a frequentist setting. The asymptotic properties of the estimator are established for both correctly and incorrectly specified models. A Monte Carlo study reveals the role of the penalty function in shaping the finite sample distribution of the estimator. The advantages of using this estimator are highlighted in the empirical study of a state-of-the-art dynamic stochastic general equilibrium model. |
Keywords: | Penalized estimation, Indirect Inference, Simulation-based methods, DSGE models |
JEL: | C15 C13 D58 E32 |
Date: | 2015–01–19 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150009&r=ecm |
By: | Siem Jan Koopman; Geert Mesters (VU University Amsterdam) |
Abstract: | We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkage-based estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods. |
Keywords: | Importance sampling, Kalman filtering, Likelihood-based analysis, Posterior modes, Rao-Blackwellization, Shrinkage |
JEL: | C32 C43 |
Date: | 2014–05–23 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140061&r=ecm |
By: | Francisco Blasques; Siem Jan Koopman; André Lucas (VU University Amsterdam, the Netherlands) |
Abstract: | We develop optimal formulations for nonlinear autoregressive models by representing them as linear autoregressive models with time-varying temporal dependence coefficients. We propose a parameter updating scheme based on the score of the predictive likelihood function at each time point. The resulting time-varying autoregressive model is formulated as a nonlinear autoregressive model and is compared with threshold and smooth-transition autoregressive models. We establish the information theoretic optimality of the score driven nonlinear autoregressive process and the asymptotic theory for maximum likelihood parameter estimation. The ability of our model to extract time-varying or nonlinear dependence in finite samples is studied in a Monte Carlo exercise. In our empirical study we present the in-sample and out-of-sample performance of our model for a weekly time series of unemployment insurance claims. |
Keywords: | Asymptotic theory; dynamic models; observation driven time series models; smooth-transition model; time-varying parameters; threshold autoregressive model |
JEL: | C13 C22 C32 |
Date: | 2014–08–11 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140103&r=ecm |
By: | Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam, the Netherlands); André Lucas (VU University Amsterdam, the Netherlands, and Aarhus University, Denmark) |
Abstract: | The strong consistency and asymptotic normality of the maximum likelihood estimator in observation-driven models usually requires the study of the model both as a filter for the time-varying parameter and as a data generating process (DGP) for observed data. The probabilistic properties of the filter can be substantially different from those of the DGP. This difference is particularly relevant for recently developed time-varying parameter models. We establish new conditions under which the dynamic properties of the true time-varying parameter as well as of its filtered counterpart are both well-behaved, and under which only one rather than two sets of conditions needs to be verified. In particular, we formulate conditions under which the (local) invertibility of the model follows directly from the stable behavior of the true time-varying parameter. We use these results to prove the local strong consistency and asymptotic normality of the maximum likelihood estimator. To illustrate the results, we apply the theory to a number of empirically relevant models. |
Keywords: | Observation-driven models, stochastic recurrence equations, contraction conditions, invertibility, stationarity, ergodicity, generalized autoregressive score models |
JEL: | C13 C22 C12 |
Date: | 2014–06–20 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140074&r=ecm |
By: | Marco Bazzi (University of Padova, Italy); Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam, the Netherlands) |
Abstract: | We propose a new Markov switching model with time-varying transition probabilities. The novelty of our model is that the transition probabilities evolve over time by means of an observation driven model. The innovations to the time-varying probabilities are generated by the score of the predictive likelihood function. We show how the model dynamics can be readily interpreted. We investigate the performance of the model in a Monte Carlo study and show that the model is successful in estimating a range of different dynamic patterns for unobserved regime switching probabilities. We also illustrate the new methodology in an empirical setting by studying the dynamic mean and variance behavior of U.S. Industrial Production growth. We find empirical evidence of changes in the regime switching probabilities, with more persistence for high volatility regimes in the earlier part of the sample, and more persistence for low volatility regimes in the later part of the sample. |
Keywords: | Hidden Markov Models; observation driven models; generalized autoregressive score dynamics |
JEL: | C22 C32 |
Date: | 2014–06–17 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140072&r=ecm |
By: | Francisco Blasques; Siem Jan Koopman; Max Mallee (VU University Amsterdam, the Netherlands) |
Abstract: | The multivariate analysis of a panel of economic and financial time series with mixed frequencies is a challenging problem. The standard solution is to analyze the mix of monthly and quarterly time series jointly by means of a multivariate dynamic model with a monthly time index: artificial missing values are inserted for the intermediate months of the quarterly time series. In this paper we explore an alternative solution for a class of dynamic factor models that is specified by means of a low frequency quarterly time index. We show that there is no need to introduce artificial missing values, while the high frequency (monthly) information is preserved and can still be analyzed. We also provide evidence that the analysis based on a low frequency specification can be carried out in a computationally more efficient way. A comparison study with existing mixed frequency procedures is presented and discussed. Furthermore, we modify the method of maximum likelihood in the context of a dynamic factor model. We introduce variable-specific weights in the likelihood function to give the equations of some variables more importance during the estimation process. We derive the asymptotic properties of the weighted maximum likelihood estimator and we show that the estimator is consistent and asymptotically normal. We also verify the weighted estimation method in a Monte Carlo study to investigate the effect of different choices for the weights in different scenarios. Finally, we empirically illustrate the new developments for the extraction of a coincident economic indicator from a small panel of mixed frequency economic time series. |
Keywords: | Asymptotic theory, Forecasting, Kalman filter, Nowcasting, State space |
JEL: | C13 C32 C53 E17 |
Date: | 2014–08–11 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140105&r=ecm |
By: | Matthias Weber (University of Amsterdam, the Netherlands); Martin Schumacher (University Medical Center, Freiburg); Harald Binder (University Medical Center, Mainz, Germany) |
Abstract: | We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or of a repressing type). For the coefficient estimation steps an additional penalty is set on top of the lasso penalty, similarly to Li and Li (2008). We develop a fast implementation for the new method based on coordinate descent. Furthermore, we show how the new methods can be applied to time-to-event data. The new method yields good results in simulation studies concerning sensitivity and specificity of non-zero covariate coefficients, estimation of network connection signs, and prediction performance. We also apply the new method to two microarray time-to-event data sets from patients with ovarian cancer and diffuse large B-cell lymphoma. The new method performs very well in both cases. The main application of this new method is of biomedical nature, but it may also be useful in other fields where network data is available. |
Keywords: | high-dimensional data, gene expression data, pathway information, penalized regression |
JEL: | C13 C41 |
Date: | 2014–07–16 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140089&r=ecm |
By: | Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam, the Netherlands, and CREATES, Aarhus University, Denmark); André Lucas (VU University Amsterdam) |
Abstract: | We investigate the information theoretic optimality properties of the score function of the predictive likelihood as a device to update parameters in observation driven time-varying parameter models. The results provide a new theoretical justification for the class of generalized autoregressive score models, which covers the GARCH model as a special case. Our main contribution is to show that only parameter updates based on the score always reduce the local Kullback-Leibler divergence between the true conditional density and the model implied conditional density. This result holds irrespective of the severity of model misspecification. We also show that the use of the score leads to a considerably smaller global Kullback-Leibler divergence in empirically relevant settings. We illustrate the theory with an application to time-varying volatility models. We show that the reduction in Kullback-Leibler divergence across a range of different settings can be substantial in comparison to updates based on, for example, squared lagged observations. The divergence measure at the heart of the result is written out after this entry. |
Keywords: | generalized autoregressive models, information theory, optimality, Kullback-Leibler distance, volatility models |
JEL: | C12 C22 |
Date: | 2014–04–11 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140046&r=ecm |
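For readers less familiar with the criterion, the Kullback-Leibler divergence between the true conditional density p_t and the model-implied conditional density, in generic notation rather than the paper's own, is

```latex
\mathrm{KL}\bigl(p_t \,\|\, \tilde p_t\bigr)
  = \int p_t(y) \, \ln \frac{p_t(y)}{\tilde p_t(y \mid f_t)} \, dy .
```

The paper's claim is that moving the time-varying parameter f_t in the direction of the score locally reduces this divergence, regardless of how severely the model is misspecified.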
By: | Siem Jan Koopman (VU University Amsterdam); Rutger Lit (VU University Amsterdam); André Lucas (VU University Amsterdam) |
Abstract: | We introduce a dynamic statistical model for Skellam distributed random variables. The Skellam distribution is obtained by taking the difference between two Poisson distributed random variables. We treat cases where observations are measured over time and where possible serial correlation is modeled via stochastically time-varying intensities of the underlying Poisson counts. The likelihood function for our model is analytically intractable and we evaluate it via a multivariate extension of numerically accelerated importance sampling techniques. We illustrate the new model by two empirical studies and verify whether our framework can adequately handle large data sets. First, we analyze long univariate high-frequency time series of U.S. stock price changes, which evolve as discrete multiples of a fixed tick size of one cent. In a second illustration, we analyze the score differences between rival soccer teams using a large, unbalanced panel of seven seasons of weekly matches in the German Bundesliga. In both empirical studies, the new model provides interesting and non-trivial dynamics with a clear interpretation. A numerical illustration of the Skellam construction follows this entry. |
Keywords: | dynamic count data models, non-Gaussian multivariate time series models, importance sampling, numerical integration, volatility models, sports data |
JEL: | C22 C32 C58 |
Date: | 2014–03–10 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140032&r=ecm |
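The building block of the entry above is easy to check numerically: a Skellam random variable is, by definition, the difference of two independent Poisson counts. The snippet below verifies this with static intensities; the paper's model makes the intensities stochastically time-varying, which is not attempted here, and the values of mu1 and mu2 are arbitrary.

```python
import numpy as np
from scipy import stats

mu1, mu2 = 1.4, 1.1                    # illustrative Poisson intensities
skellam = stats.skellam(mu1, mu2)      # law of X1 - X2, Xi ~ Poisson(mui)

rng = np.random.default_rng(0)
diff = rng.poisson(mu1, 100_000) - rng.poisson(mu2, 100_000)
for k in range(-3, 4):
    # analytical pmf vs. simulated frequency of each price-change size k
    print(k, round(skellam.pmf(k), 4), round(np.mean(diff == k), 4))
```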
By: | Skrobotov, Anton (Russian Presidential Academy of National Economy and Public Administration (RANEPA)) |
Abstract: | In this paper we propose a likelihood ratio test for a change in persistence of a time series. We consider the null hypothesis of constant I(1) persistence and an alternative in which the series changes from a stationary regime to a unit root regime, or vice versa. Both known and unknown break dates are analyzed. Moreover, we consider a modification of a lag length selection procedure which provides better size control over various data generation processes. In general, our likelihood ratio-based tests show the best finite sample properties of all persistence change tests that use the null hypothesis of a unit root throughout. |
Keywords: | change in persistence, likelihood ratio test, unit root test, lag length selection |
JEL: | C12 C22 |
Date: | 2015–01–28 |
URL: | http://d.repec.org/n?u=RePEc:rnp:ppaper:skr001&r=ecm |
By: | Kazim Azam (VU University Amsterdam); Andre Lucas (VU University Amsterdam) |
Abstract: | We consider a new copula method for mixed marginals of discrete and continuous random variables. Unlike the Bayesian methods in the literature, we use maximum likelihood estimation based on closed-form copula functions. We show with a simulation that our methodology performs similarly to the method of Hoff (2007) for mixed data, but is considerably simpler to estimate. We extend the approach to a time series setting, where the parameters are allowed to vary over time. In an empirical application using data from the 2013 Household Finance Survey, we show how the copula dependence between income (continuous) and discrete household characteristics varies across groups who were affected differently by the recent economic crisis. |
Keywords: | copula, discrete data, time series |
JEL: | C32 C35 |
Date: | 2015–01–08 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150003&r=ecm |
By: | Carsten Bormann; Melanie Schienle (Leibniz Universität Hannover, Germany); Julia Schaumburg (VU University Amsterdam) |
Abstract: | In practice, multivariate dependencies between extreme risks are often only assessed in a pairwise way. We propose a test to detect when tail dependence is truly high-dimensional and bivariate simplifications would produce misleading results. This occurs when a significant portion of the multivariate dependence structure in the tails is of higher dimension than two. Our test statistic is based on a decomposition of the stable tail dependence function, which is standard in extreme value theory for describing multivariate tail dependence. The asymptotic properties of the test are provided and a bootstrap based finite sample version of the test is suggested. A simulation study documents the good performance of the test for standard sample sizes. In an application to international government bonds, we detect a high tail-risk and low return situation during the last decade which can essentially be attributed to increased higher-order tail risk. We also illustrate the empirical consequences of ignoring higher-dimensional tail risk. |
Keywords: | decomposition of tail dependence, multivariate extreme values, stable tail dependence function, subsample bootstrap, tail correlation |
JEL: | C12 C19 |
Date: | 2014–02–25 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140024&r=ecm |
By: | G. Forchini; Bin Jiang; Bin Peng |
Abstract: | This paper introduces a novel approach to study the effects of common shocks on panel data models with endogenous explanatory variables when the cross section dimension (N) is large and the time series dimension (T) is fixed: this relies on conditional strong laws of large numbers and conditional central limit theorems. These results can act as a useful reference for readers who wish to further investigate the effects of common shocks on panel data. The paper shows that the key assumption in determining consistency of the panel TSLS and LIML estimators is the independence of the factor loadings in the reduced form errors from the factor loadings in the exogenous variables and instruments conditional on the factors. We also show that these estimators have non-standard asymptotic distributions but tests on the coefficients have standard distributions under the null hypothesis provided the estimators are consistent. |
Keywords: | Panel data, factor structure, endogeneity, instrumental variables |
JEL: | C33 C36 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2015-8&r=ecm |
By: | Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Katarzyna Lasak (VU University Amsterdam); André Lucas (VU University Amsterdam) |
Abstract: | We study the performance of two analytical methods and one simulation method for computing in-sample confidence bounds for time-varying parameters. These in-sample bounds are designed to reflect parameter uncertainty in the associated filter. They are applicable to the complete class of observation driven models and are valid for a wide range of estimation procedures. A Monte Carlo study is conducted for time-varying parameter models such as generalized autoregressive conditional heteroskedasticity and autoregressive conditional duration models. Our results show clear differences between the actual coverage provided by our three methods of computing in-sample bounds. Although the analytical methods may be less reliable than the simulation method, their coverage performance is adequate enough to provide a reasonable impression of the parameter uncertainty that is embedded in the time-varying parameter path. We illustrate our findings in a volatility analysis for monthly Standard & Poor's 500 index returns. |
Keywords: | autoregressive conditional duration, delta-method, generalized autoregressive conditional heteroskedasticity, score driven models, time-varying mean |
JEL: | C15 C22 C58 |
Date: | 2015–02–23 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150027&r=ecm |
By: | Zdravko Botev (The University of New South Wales, Sydney, Australia); Ad Ridder (VU University Amsterdam); Leonardo Rojas-Nandayapa (The University of Queensland) |
Abstract: | The Cross Entropy method is a well-known adaptive importance sampling method for rare-event probability estimation, which requires estimating an optimal importance sampling density within a parametric class. In this article we estimate an optimal importance sampling density within a wider semiparametric class of distributions. We show that this semiparametric version of the Cross Entropy method frequently yields efficient estimators. We illustrate the excellent practical performance of the method with numerical experiments and show that for the problems we consider it typically outperforms alternative schemes by orders of magnitude. The parametric baseline that the paper generalizes is sketched after this entry. |
Keywords: | Light-tailed; regularly-varying; subexponential; rare-event probability; Cross Entropy method; Markov chain Monte Carlo |
JEL: | C61 C63 |
Date: | 2013–09–02 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20130127&r=ecm |
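For contrast with the semiparametric extension above, here is a hedged sketch of the classical parametric Cross Entropy method for a rare-event probability. A Gaussian importance family with an adaptively tilted mean is assumed, and the target P(X ≥ γ) for standard normal X is chosen purely for illustration.

```python
import numpy as np
from scipy import stats

def ce_rare_event(gamma, n=10_000, rho=0.1, max_iter=50, seed=1):
    """Parametric Cross Entropy estimate of p = P(X >= gamma), X ~ N(0, 1).

    The importance density is N(v, 1); v is updated as the likelihood-ratio-
    weighted mean of the elite samples until the elite level reaches gamma.
    """
    rng = np.random.default_rng(seed)
    v = 0.0
    for _ in range(max_iter):
        x = rng.normal(v, 1.0, size=n)
        level = min(np.quantile(x, 1 - rho), gamma)
        elite = x[x >= level]
        # likelihood ratio of N(0,1) against N(v,1) on the elite set
        w = np.exp(stats.norm.logpdf(elite) - stats.norm.logpdf(elite, loc=v))
        v = np.sum(w * elite) / np.sum(w)
        if level >= gamma:
            break
    # final importance sampling estimate under the fitted N(v, 1)
    x = rng.normal(v, 1.0, size=n)
    w = np.exp(stats.norm.logpdf(x) - stats.norm.logpdf(x, loc=v))
    return np.mean(w * (x >= gamma))

print(ce_rare_event(4.0), stats.norm.sf(4.0))  # estimate vs. exact tail
```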
By: | Francesco Calvori (Department of Statistics 'G. Parenti', University of Florence, Italy); Drew Creal (Booth School of Business, University of Chicago); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam) |
Abstract: | We develop a new parameter stability test against the alternative of observation driven generalized autoregressive score dynamics. The new test generalizes the ARCH-LM test of Engle (1982) to settings beyond time-varying volatility and exploits any autocorrelation in the likelihood scores under the alternative. We compare the test's performance with that of alternative tests developed for competing time-varying parameter frameworks, such as structural breaks and observation driven parameter dynamics. The new test has higher and more stable power against alternatives with frequent regime switches or with non-local parameter driven time-variation. For parameter driven time variation close to the null or for infrequent structural changes, the test of Muller and Petalas (2010) performs best overall. We apply all tests empirically to a panel of losses given default over the period 1982-2010 and find significant evidence of parameter variation in the underlying beta distribution. |
Keywords: | time-varying parameters; observation driven models; parameter driven models; structural breaks; generalized autoregressive score model; regime switching; credit risk |
JEL: | C12 C52 C22 |
Date: | 2014–01–14 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140010&r=ecm |
By: | Sungyong Park (Chung-Ang University, Seoul, Korea); Wendun Wang (Erasmus University Rotterdam, the Netherlands); Naijing Huang (Boston College, United States) |
Abstract: | Motivated by the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test allows us to investigate stock market contagion at various quantiles, not only at the mean. We show that the quantile contagion test can detect a contagion effect that is possibly ignored by correlation-based tests. A wide range of simulation studies shows that the proposed test is superior to the correlation-based tests in terms of size and power. We compare our test with correlation-based tests using three real data sets: the 1994 Tequila crisis, the 1997 Asia crisis, and the 2001 Argentina crisis. Empirical results show substantial differences between the two types of tests. |
Keywords: | Financial contagion, Quantile regression, One-sided score test |
JEL: | C21 C58 D53 |
Date: | 2015–03–26 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150040&r=ecm |
By: | Lukasz Gatarek (Erasmus University Rotterdam); Lennart Hoogerheide (VU University Amsterdam); Herman K. van Dijk (VU University Amsterdam, and Erasmus University Rotterdam) |
Abstract: | We investigate the direct connection between the uncertainty related to estimated stable ratios of stock prices and the risk and return of two pairs trading strategies: a conditional statistical arbitrage method and an implicit arbitrage one. A simulation-based Bayesian procedure is introduced for predicting stable stock price ratios, defined in a cointegration model. Using this class of models and the proposed inferential technique, we are able to connect estimation and model uncertainty with the risk and return of stock trading. In terms of methodology, we show the effect that using an encompassing prior, which is shown to be equivalent to a Jeffreys' prior, has under an orthogonal normalization for the selection of pairs of cointegrated stock prices, and further its effect on the estimation and prediction of the spread between cointegrated stock prices. We distinguish between models with a normal and a Student t distribution, since the latter typically provides a better description of daily changes of prices on financial markets. As an empirical application, stocks are used that are ingredients of the Dow Jones Composite Average index. The results show that normalization has little effect on the selection of pairs of cointegrated stocks on the basis of Bayes factors. However, the results stress the importance of the orthogonal normalization for the estimation and prediction of the spread (the deviation from the equilibrium relationship), which leads to better results in terms of profit per capital engagement and risk than using a standard linear normalization. |
Keywords: | Bayesian analysis; cointegration; linear normalization; orthogonal normalization; pairs trading; statistical arbitrage |
JEL: | C11 C15 C32 C58 G17 |
Date: | 2014–03–20 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140039&r=ecm |
By: | Laurent Callot (VU University Amsterdam, the Netherlands); Mehmet Caner (North Carolina State University, United States); Anders Bredahl Kock (Aarhus University, Denmark); Juan Andres Riquelme (North Carolina State University, United States) |
Abstract: | We propose a new estimator, the thresholded scaled Lasso, in high dimensional threshold regressions. First, we establish an upper bound on the ℓ∞ estimation error of the scaled Lasso estimator of Lee et al. (2012). This is a non-trivial task, as the literature on high-dimensional models has focused almost exclusively on ℓ₁ and ℓ₂ estimation errors. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent variable selection via thresholding. Our simulations show that thresholding the scaled Lasso yields substantial improvements in terms of variable selection. Finally, we use our estimator to shed further empirical light on the long running debate on the relationship between the level of debt (public and private) and GDP growth. The two-step threshold-and-select idea is sketched after this entry. |
Keywords: | Threshold model, sup-norm bound, thresholded scaled Lasso, oracle inequality, debt effect on GDP growth |
JEL: | C13 C23 C26 |
Date: | 2015–02–10 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150019&r=ecm |
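The two-step logic described above (fit, then discard every coefficient whose magnitude falls under a sup-norm-derived threshold) can be sketched as follows. This is only an illustration: plain Lasso from scikit-learn stands in for the scaled Lasso, and the threshold `tau` is user-chosen rather than derived from the paper's bound.

```python
import numpy as np
from sklearn.linear_model import Lasso

def thresholded_lasso(X, y, alpha, tau):
    """Step 1: Lasso fit. Step 2: zero out coefficients below tau in
    magnitude, so that variable selection happens at the sup-norm scale."""
    beta = Lasso(alpha=alpha).fit(X, y).coef_
    return np.where(np.abs(beta) > tau, beta, 0.0)
```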
By: | Maxwell L. King; Sivagowry Sriananthakumar |
Abstract: | In the absence of uniformly most powerful (UMP) tests or uniformly most powerful invariant (UMPI) tests, King (1987c) suggested the use of Point Optimal (PO) tests, which are most powerful at a chosen point under the alternative hypothesis. This paper surveys the literature and major developments on point optimal testing since 1987 and suggests some areas for future research. Topics include tests for which all nuisance parameters have been eliminated and dealing with nuisance parameters via (i) a weighted average of p-values, (ii) approximate point optimal tests, (iii) plugging in estimated parameter values, (iv) using asymptotics and (v) integration. Progress on using point-optimal testing principles for two-sided testing and multi-dimensional alternatives is also reviewed. The paper concludes with thoughts on how best to deal with nuisance parameters under both the null and alternative hypotheses, as well as the development of a new class of point optimal tests for multi-dimensional testing. |
Keywords: | Local to unity asymptotics, Neyman-Pearson lemma, nuisance parameters, power envelope, unit root testing. |
JEL: | C12 C50 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2015-5&r=ecm |
By: | Gerda Claeskens (KU Leuven, Belgium); Jan Magnus (VU University Amsterdam, the Netherlands); Andrey Vasnev (University of Sydney, Australia); Wendun Wang (Erasmus University, Rotterdam, the Netherlands) |
Abstract: | This paper offers a theoretical explanation for the stylized fact that forecast combinations with estimated optimal weights often perform poorly in applications. The properties of the forecast combination are typically derived under the assumption that the weights are fixed, while in practice they need to be estimated. If the fact that the weights are random rather than fixed is taken into account during the optimality derivation, then the forecast combination will be biased (even when the original forecasts are unbiased) and its variance is larger than in the fixed-weights case. In particular, there is no guarantee that the 'optimal' forecast combination will be better than the equal-weights case or even improve on the original forecasts. We provide the underlying theory, some special cases and an application in the context of model selection. The classical fixed-weight formula at issue is given after this entry. |
Keywords: | forecast combination, optimal weights, model selection |
JEL: | C53 C52 |
Date: | 2014–09–19 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140127&r=ecm |
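The fixed-weight benchmark the paper starts from is the classical minimum-variance combination (usually attributed to Bates and Granger): for unbiased forecasts with forecast-error covariance matrix Σ, the weights that minimize the combined error variance subject to summing to one are

```latex
w^{*} \;=\; \frac{\Sigma^{-1}\iota}{\iota' \Sigma^{-1} \iota},
\qquad \iota = (1, \ldots, 1)' .
```

In practice Σ must be estimated, so the implied weights are random; quantifying the resulting bias and extra variance of the combination is precisely what the paper does.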
By: | Nalan Basturk (Maastricht University, the Netherlands); Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam, the Netherlands) |
Abstract: | Time-varying patterns in US growth are analyzed using various univariate model structures, starting from a naive model structure where all features change every period and moving to a model where the slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction. Survey data on expected growth are included in order to strengthen the information in the model. Use is made of a simulation-based Bayesian inferential method to determine the forecasting performance of the various model specifications. The extension of a basic growth model with a constant mean to models including time variation in the mean and variance requires careful investigation of possible identification issues of the parameters and existence conditions of the posterior under a diffuse prior. The use of diffuse priors leads to a focus on the likelihood function and it enables a researcher and policy adviser to evaluate the scientific information contained in model and data. Empirical results indicate that incorporating time variation in mean growth rates as well as in volatility is important in order to improve the predictive performance of growth models. Furthermore, using data information on growth expectations is important for forecasting growth in specific periods, such as the recession periods of the early 2000s and around 2008. |
Keywords: | Growth, Time varying parameters, Expectations data |
JEL: | C11 C22 E17 |
Date: | 2014–09–01 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140119&r=ecm |
By: | Saikat Saha |
Abstract: | We revisit Bayesian online inference problems for linear dynamic systems (LDS) in a non-Gaussian environment. The noise can be naturally non-Gaussian (skewed and/or heavy-tailed), or it can be modeled as heavy-tailed to accommodate spurious observations. However, at the cost of such noise robustness, performance may degrade when such spurious observations are absent. Therefore, any inference engine should not only be robust to noise outliers, but also be adaptive to potentially unknown and time-varying noise parameters; yet it should be scalable and easy to implement. To address these requirements, we envisage a new noise-adaptive Rao-Blackwellized particle filter (RBPF), leveraging a hierarchical Gaussian model as a proxy for any non-Gaussian (process or measurement) noise density. This leads to a conditionally linear Gaussian model (CLGM) that is tractable. However, this framework requires a valid transition kernel for the intractable state targeted by the particle filter (PF), which is typically unknown. We outline how such a kernel can be provably constructed, at least for certain classes encompassing many commonly occurring non-Gaussian noises, using an auxiliary latent variable approach. The efficacy of this RBPF algorithm is demonstrated through numerical studies. |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1504.05723&r=ecm |
By: | André Lucas (VU University Amsterdam); Xin Zhang (Sveriges Riksbank, Sweden) |
Abstract: | We present a simple new methodology to allow for time variation in volatilities using a recursive updating scheme similar to the familiar RiskMetrics approach. We update parameters using the score of the forecasting distribution rather than squared lagged observations. This allows the parameter dynamics to adapt automatically to any non-normal data features and robustifies the subsequent volatility estimates. Our new approach nests several extensions to the exponentially weighted moving average (EWMA) scheme as proposed earlier. Our approach also easily handles extensions to dynamic higher-order moments or other choices of the preferred forecasting distribution. We apply our method to Value-at-Risk forecasting with Student's t distributions and a time-varying degrees of freedom parameter and show that the new method is competitive with or better than earlier methods for volatility forecasting of individual stock returns and exchange rates. A stylized version of such a score-driven EWMA recursion is sketched after this entry. |
Keywords: | dynamic volatilities, time-varying higher order moments, integrated generalized autoregressive score models, exponentially weighted moving average (EWMA), Value-at-Risk (VaR) |
JEL: | C51 C52 C53 G15 |
Date: | 2014–07–22 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140092&r=ecm |
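To fix ideas, one common form of a Student-t score-driven variance recursion is sketched below. Scaling conventions differ across papers, so treat this as an illustrative assumption rather than the authors' exact scheme; the key feature is the weight w_t, which shrinks the contribution of large standardized returns, and which tends to 1 (plain RiskMetrics EWMA) as nu grows.

```python
import numpy as np

def t_score_ewma(returns, lam=0.94, nu=5.0, f0=None):
    """EWMA-style variance recursion driven by a Student-t score weight.

    Large standardized returns receive a small weight w, so a single
    outlier no longer inflates the volatility estimate the way it does
    under the ordinary squared-return EWMA update.
    """
    returns = np.asarray(returns, dtype=float)
    f = f0 if f0 is not None else returns[:20].var()
    path = np.empty(returns.size)
    for t, y in enumerate(returns):
        path[t] = f
        w = (nu + 1.0) / (nu - 2.0 + y * y / f)   # t-score weight
        f = lam * f + (1.0 - lam) * w * y * y
    return path
```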
By: | Kurennoy, Alexey (Russian Presidential Academy of National Economy and Public Administration (RANEPA)) |
Abstract: | This paper studies the behaviour of the bias-corrected LSDV estimator and GMM-based estimators in dynamic panel data models with endogenous regressors. We obtain an expansion of the conditional bias of the LSDV estimator whose leading term coincides with the one in the expansions of Kiviet (1995) and Kiviet (1999). Nevertheless, our simulations suggest that in the presence of endogenous regressors the performance of the corrected LSDV estimator can be poor. This indicates that although the bias has a similar structure whether or not the exogeneity assumption holds, the approximation technique that the LSDVc estimator is based on can work poorly in the endogenous case. GMM-based estimators also perform poorly in our experiment. |
Keywords: | dynamic panel data model, endogenous regressors, LSDV estimator, bias expansion, simulation |
JEL: | C13 C23 |
Date: | 2015–01–22 |
URL: | http://d.repec.org/n?u=RePEc:rnp:ppaper:kur001&r=ecm |
By: | Norbert Christopeit (University of Bonn, Germany); Michael Massmann (VU University Amsterdam) |
Abstract: | This paper investigates the asymptotic properties of the ordinary least squares (OLS) estimator of structural parameters in a stylised macroeconomic model in which agents are boundedly rational and use an adaptive learning rule to form expectations of the endogenous variable. In particular, when the learning recursion is subject to so-called decreasing gain sequences the model does not satisfy, in general, any of the sufficient conditions for consistent estimability available in the literature. The paper demonstrates that, for appropriate parameter sets, the OLS estimator nevertheless remains strongly consistent and asymptotically normally distributed. |
Keywords: | non-stationary regression, strong consistency, asymptotic normality, bounded rationality, adaptive learning |
JEL: | C22 C51 D83 |
Date: | 2013–08–06 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20130111&r=ecm |
By: | Xiaodong Gong; Jiti Gao |
Abstract: | This paper is motivated by our attempt to answer an empirical question: how is private health insurance take-up in Australia affected by the income threshold at which the Medicare Levy Surcharge (MLS) kicks in? We propose a new difference de-convolution kernel estimator for the location and size of regression discontinuities. We also propose a bootstrapping procedure for estimating confidence bands for the estimated discontinuity. The performance of the estimator is evaluated by Monte Carlo simulations before it is applied, using contaminated data, to estimating the effect of the income threshold of the Medicare Levy Surcharge on the take-up of private health insurance in Australia. |
Keywords: | De-convolution kernel estimator, regression discontinuity, error-in-variables, demand for private health insurance. |
JEL: | C13 C14 C29 I13 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2015-6&r=ecm |
By: | Otter, Pieter W.; Jacobs, Jan P.A.M.; Reijer, Ard H.J. de (Groningen University) |
Abstract: | This paper derives a new criterion for the determination of the number of factors in static approximate factor models that is closely related to the scree test. Our criterion looks for the number of eigenvalues for which the difference between adjacent eigenvalue-component number blocks is maximized. Monte Carlo experiments compare the properties of our criterion to the Edge Distribution (ED) estimator of Onatski (2010) and the two eigenvalue ratio estimators of Ahn and Horenstein (2013). Our criterion outperforms the latter two for all sample sizes, and the ED estimator of Onatski (2010) for samples of up to 300 variables/observations. |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:gro:rugsom:14008-eef&r=ecm |
By: | Laurent Callot (VU University Amsterdam, the Netherlands); Anders B. Kock (Aarhus University, Denmark); Marcelo C. Medeiros (Pontifical Catholic University of Rio de Janeiro, Brasil) |
Abstract: | In this paper we consider modeling and forecasting of large realized covariance matrices by penalized vector autoregressive models. We propose using Lasso-type estimators to reduce the dimensionality to a manageable one and provide strong theoretical performance guarantees on the forecast capability of our procedure. To be precise, we show that we can forecast future realized covariance matrices almost as precisely as if we had known the true driving dynamics of these in advance. We next investigate the sources of these driving dynamics for the realized covariance matrices of the 30 Dow Jones stocks and find that these dynamics are not stable as the data is aggregated from the daily to the weekly and monthly frequency. The theoretical performance guarantees on our forecasts are illustrated on the Dow Jones index. In particular, we can beat our benchmark by a wide margin at the longer forecast horizons. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount in order to get access to our forecasts. |
Keywords: | Realized covariance, vector autoregression, shrinkage, Lasso, forecasting, portfolio allocation |
JEL: | C22 |
Date: | 2014–11–13 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140147&r=ecm |
By: | Aiste Ruseckaite (Erasmus University Rotterdam); Peter Goos (Universiteit Antwerpen, Belgium); Dennis Fok (Erasmus University Rotterdam) |
Abstract: | Consumer products and services can often be described as mixtures of ingredients. Examples are the mixture of ingredients in a cocktail and the mixture of different components of waiting time (e.g., in-vehicle and out-of-vehicle travel time) in a transportation setting. Choice experiments may help to determine how the respondents' choice of a product or service is affected by the combination of ingredients. In such studies, individuals are confronted with sets of hypothetical products or services and they are asked to choose the most preferred product or service from each set. However, there are no studies on the optimal design of choice experiments involving mixtures. We propose a method for generating an optimal design for such choice experiments. To this end, we first introduce mixture models in the choice context and next present an algorithm to construct optimal experimental designs, assuming the multinomial logit model is used to analyze the choice data. To overcome the problem that the optimal designs depend on the unknown parameter values, we adopt a Bayesian D-optimal design approach. We also consider locally D-optimal designs and compare the performance of the resulting designs to those produced by a utility-neutral (UN) approach in which designs are based on the assumption that individuals are indifferent between all choice alternatives. We demonstrate that our designs are quite different and in general perform better than the UN designs. |
Keywords: | Bayesian design, Choice experiments, D-optimality, Experimental design, Mixture coordinate-exchange algorithm, Mixture experiment, Multinomial logit model, Optimal design |
JEL: | C01 C10 C25 C61 C83 C90 C99 |
Date: | 2014–05–09 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140057&r=ecm |
By: | Cecilia Mancini (Dipartimento di Scienze per l'Economia e l'Impresa, Universita' degli Studi di Firenze) |
Abstract: | The speed of convergence of the truncated realized covariance to the integrated covariation between the two Brownian parts of two semimartingales is heavily influenced by the presence of infinite activity jumps with infinite variation. Namely, the two processes' small jumps play a crucial role through their degree of dependence, in addition to their jump activity indices. This theoretical result is established when the semimartingales are observed discretely on a finite time horizon. In many cases the estimator is less efficient than when the model only has finite variation jumps. The small jumps of each semimartingale are assumed to be the small jumps of a Lévy stable process, and a simple parametric dependence structure is imposed on the two stable processes, allowing it to range from independence to monotonic dependence. The result of this paper is relevant in financial economics, since the truncated realized covariance makes it possible to separately estimate the common jumps among assets, which has important implications in risk management and contagion modeling. |
Keywords: | Brownian correlation coefficient, integrated covariation, co-jumps, Lévy copulas, threshold estimator. |
JEL: | C13 C14 C58 |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:flo:wpaper:2015-02&r=ecm |
By: | Daouia, Abdelaati; Girard, Stéphane; Stupfler, Gilles |
Abstract: | The class of quantiles lies at the heart of extreme-value theory and is one of the basic tools in risk management. The alternative family of expectiles is based on squared rather than absolute error loss minimization. The flexibility and virtues of these least squares analogues of quantiles are now well established in actuarial science, econometrics and statistical finance. Both quantiles and expectiles were embedded in the more general class of M-quantiles as the minimizers of a generic asymmetric convex loss function. It has been proved very recently that the only M-quantiles that are coherent risk measures are the expectiles. Also, in contrast to the quantile-based expected shortfall, expectiles benefit from the important property of elicitability that corresponds to the existence of a natural backtesting methodology. Least asymmetrically weighted squares estimation of expectiles has not, however, yet received as much attention as quantile-based risk measures from the perspective of extreme values. In this article, we develop new methods for estimating the Value-at-Risk and expected shortfall measures via high expectiles. We focus on the challenging domain of attraction of heavy-tailed distributions that better describe the tail structure and sparseness of most actuarial and financial data. We first estimate the intermediate large expectiles and then extrapolate these estimates to the very far tails. We establish the limit distributions of the proposed estimators when they are located in the range of the data or near and even beyond the maximum observed loss. Monte Carlo experiments and a concrete application are given to illustrate the utility of extremal expectiles as an efficient instrument of risk protection. The defining minimization is written out after this entry. |
Keywords: | Asymmetric squared loss; Coherent Value-at-Risk; Expected shortfall; Expectiles; Extrapolation; Extreme value theory; Heavy tails. |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:29257&r=ecm |
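For reference, the definition alluded to above: the τ-expectile is the minimizer of an asymmetrically weighted squared loss, exactly parallel to the quantile's asymmetric absolute loss,

```latex
e_\tau \;=\; \operatorname*{arg\,min}_{\theta \in \mathbb{R}}\;
\mathbb{E}\Bigl[\, \bigl|\tau - \mathbf{1}\{Y \le \theta\}\bigr| \,(Y - \theta)^2 \Bigr].
```

For τ = 1/2 this reduces to the mean, just as the median arises from the absolute-loss analogue.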
By: | Francine Gresnigt; Erik Kole; Philip Hans Franses (Erasmus University Rotterdam, the Netherlands) |
Abstract: | We propose a modeling framework which allows for creating probability predictions of a future market crash in the medium term, for instance sometime in the next five days. Our framework draws upon noticeable similarities between stock returns around a financial market crash and seismic activity around earthquakes. Our model is incorporated in an Early Warning System (EWS) for future crash days. Testing our EWS on S&P 500 data during the recent financial crisis, we find positive Hanssen-Kuiper Skill Scores. Furthermore, our modeling framework is capable of exploiting information in the returns series that is not captured by well-known and commonly used volatility models. EWSs based on our models outperform EWSs based on volatility models in forecasting extreme price movements, while being much less time-consuming to run. The self-exciting intensity at the core of this framework is sketched after this entry. |
Keywords: | Financial crashes, Hawkes process, self-exciting process, Early Warning System |
JEL: | C13 C15 C53 G17 |
Date: | 2014–06–03 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140067&r=ecm |
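The seismic analogy above is typically formalized with a self-exciting (Hawkes) point process, whose conditional intensity jumps up after every event and decays afterwards. A minimal sketch with an exponential kernel follows; the parameter values are arbitrary, and the mapping from returns to events used in the paper is not reproduced.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i))
    over past events t_i < t. Each event temporarily raises the probability
    of further events, producing the clustering that the EWS exploits."""
    past = np.asarray(event_times, dtype=float)
    past = past[past < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# intensity shortly after a burst of three events, versus the baseline mu
print(hawkes_intensity(2.0, [0.5, 1.6, 1.8]), 0.1)
```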
By: | Jørgen Modalsli (Statistics Norway) |
Abstract: | The Altham statistic is often used to calculate intergenerational associations in occupations in studies of historical social mobility. This paper presents a method to incorporate individual covariates into such estimates of social mobility, and to construct corresponding confidence intervals. The method is applied to an intergenerational sample of Norwegian data, showing that estimates of intergenerational mobility are robust to the inclusion of controls for father's and son's age. |
Keywords: | Intergenerational occupational mobility; Altham statistic |
JEL: | J62 N34 C46 |
Date: | 2015–03 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:804&r=ecm |
By: | Anne Opschoor (VU University Amsterdam); Dick van Dijk (Erasmus University Rotterdam); Michel van der Wel (Erasmus University Rotterdam) |
Abstract: | We investigate the added value of combining density forecasts for asset return prediction in a specific region of support. We develop a new technique that takes into account model uncertainty by assigning weights to individual predictive densities using a scoring rule based on the censored likelihood. We apply this approach in the context of recently developed univariate volatility models (including HEAVY and Realized GARCH models), using daily returns from the S&P 500, DJIA, FTSE and Nikkei stock market indexes from 2000 until 2013. The results show that combined density forecasts based on the censored likelihood scoring rule significantly outperform pooling based on the log scoring rule and individual density forecasts. The same result, albeit less strong, holds when compared to combined density forecasts based on equal weights. In addition, VaR estimates improve at the short horizon, in particular when compared to estimates based on equal weights or to the VaR estimates of the individual models. |
Keywords: | Density forecast evaluation, Volatility modeling, Censored likelihood, Value-at-Risk |
JEL: | C53 C58 G17 |
Date: | 2014–07–21 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140090&r=ecm |
By: | Siem Jan Koopman (VU University Amsterdam, the Netherlands); Rutger Lit (VU University Amsterdam, the Netherlands); André Lucas (VU University Amsterdam, the Netherlands) |
Abstract: | We investigate the intraday dependence pattern between tick data of stock price changes using a new time-varying model for discrete copulas. We let parameters of both the marginal models and the copula vary over time using an observation driven autoregressive updating scheme based on the score of the conditional probability mass function with respect to the time-varying parameters. We apply the model to high-frequency stock price changes expressed as discrete tick-size multiples for four liquid U.S. financial stocks. Our modeling framework is based on Skellam densities for the marginals and a range of different copula functions. We find evidence of intraday time-variation in the dependence structure. After the opening and before the close of the stock market, dependence levels are lower. We attribute this finding to more idiosyncratic trading at these times. The introduction of score driven dynamics in the dependence structure significantly increases the likelihood values of the time-varying copula model. By contrast, a fixed daily seasonal dependence pattern clearly fits the data less well. |
Keywords: | time-varying copulas, dynamic discrete data, score driven models, Skellam distribution, dynamic dependence |
JEL: | C32 G11 |
Date: | 2015–03–19 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150037&r=ecm |
By: | Andre Lucas (VU University Amsterdam); Bernd Schwaab (European Central Bank, Financial Markets Research); Xin Zhang (VU University Amsterdam, and Sveriges Riksbank, Research Division) |
Abstract: | We develop a novel high-dimensional non-Gaussian modeling framework to infer conditional and joint risk measures for many financial sector firms. The model is based on a dynamic Generalized Hyperbolic Skewed-t block-equicorrelation copula with time-varying volatility and dependence parameters that naturally accommodates asymmetries, heavy tails, as well as non-linear and time-varying default dependence. We demonstrate how to apply a conditional law of large numbers in this setting to define risk measures that can be evaluated quickly and reliably. We apply the modeling framework to assess the joint risk from multiple financial firm defaults in the euro area during the 2008-2012 financial and sovereign debt crisis. We document unprecedented tail risks during 2011-12, as well as their steep decline after subsequent policy actions. |
Keywords: | systemic risk; dynamic equicorrelation model; generalized hyperbolic distribution; Law of Large Numbers |
JEL: | G21 C32 |
Date: | 2013–05–13 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20130063&r=ecm |
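A minimal sketch of the block-equicorrelation structure the abstract above builds on, with static scalar correlations in place of the paper's time-varying, score-driven parameters; block sizes and correlation values are illustrative:

    import numpy as np

    def block_equicorrelation(block_sizes, rho_within, rho_between):
        # One correlation inside each block, one across blocks; unit diagonal.
        labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
        R = np.where(labels[:, None] == labels[None, :], rho_within, rho_between)
        np.fill_diagonal(R, 1.0)
        return R

    R = block_equicorrelation([3, 4], rho_within=0.6, rho_between=0.3)
    # A valid copula correlation matrix must be positive definite.
    print(np.linalg.eigvalsh(R).min() > 0)   # True for these values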
By: | Yasuo Hirose (Keio University); Atsushi Inoue (Vanderbilt University) |
Abstract: | This paper examines how and to what extent parameter estimates can be biased in a dynamic stochastic general equilibrium (DSGE) model that omits the zero lower bound (ZLB) constraint on the nominal interest rate. Our Monte Carlo experiments using a standard sticky-price DSGE model show that, as long as the ZLB binds only infrequently, no significant bias appears in the parameter estimates and the estimated impulse response functions are quite similar to the true ones. However, as the probability of hitting the ZLB increases, the parameter bias becomes larger and leads to substantial differences between the estimated and true impulse responses. We also demonstrate that a model that omits the ZLB yields biased estimates of the structural shocks even when the parameter estimates themselves are virtually unbiased. (A toy illustration of the truncation mechanism follows below.) |
JEL: | E3 E5 |
Date: | 2014–09–09 |
URL: | http://d.repec.org/n?u=RePEc:van:wpaper:vuecon-14-00009&r=ecm |
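A toy illustration of the mechanism behind the abstract above: when the observed policy rate is the notional rate truncated at zero, an estimator that ignores the truncation is biased. This is a stylised censored regression, not the authors' sticky-price DSGE estimation; all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(7)

    def simulate(T=5000, phi=1.5, rstar=2.0):
        # Notional rate from a toy Taylor rule; observed rate truncated at zero (ZLB).
        infl = rng.normal(0.0, 2.0, T)
        notional = rstar + phi * infl + rng.normal(0.0, 0.5, T)
        return infl, np.maximum(notional, 0.0)

    def ols_slope(x, y):
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    infl, rate = simulate()
    # The censoring flattens the estimated response: OLS slope < true phi = 1.5.
    print(round(ols_slope(infl, rate), 3))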
By: | Hans-Peter Hafner (Saarland State University of Applied Sciences); Felix Ritchie (University of the West of England, Bristol); Rainer Lenz (Technical University of Dortmund) |
Abstract: | When producing anonymised microdata for research, national statistics institutes (NSIs) identify a number of 'risk scenarios' describing how intruders might seek to attack a confidential dataset. This paper argues that the strategy used to identify confidentiality protection measures can be seriously misguided, mainly because the scenarios focus on data protection without sufficient reference to other aspects of the data. The paper brings together a number of findings to show how this problem can be addressed in a practical context. Using the creation of a scientific use file as an example, it demonstrates that an alternative perspective can lead to dramatically different outcomes. (A small sketch of a standard re-identification risk measure follows below.) |
Keywords: | statistical disclosure control, data protection, microdata anonymisation, big data |
JEL: | C81 C18 |
Date: | 2015–01–03 |
URL: | http://d.repec.org/n?u=RePEc:uwe:wpaper:20151503&r=ecm |
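A small sketch of one standard first-pass re-identification risk measure used in statistical disclosure control: counting records that are unique on a set of quasi-identifiers. The records and quasi-identifiers here are hypothetical, and the paper's argument concerns risk scenarios more broadly than this single metric:

    from collections import Counter

    def sample_uniques(records, quasi_identifiers):
        # Records unique on the quasi-identifier combination are the riskiest.
        keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
        counts = Counter(keys)
        return sum(1 for k in keys if counts[k] == 1)

    toy = [
        {"age": 34, "region": "N", "occupation": "teacher"},
        {"age": 34, "region": "N", "occupation": "teacher"},
        {"age": 51, "region": "S", "occupation": "engineer"},
    ]
    print(sample_uniques(toy, ["age", "region", "occupation"]))   # -> 1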
By: | Guillaume Gaetan Martinet (ENSAE Paris Tech, France, and Columbia University, USA); Michael McAleer (National Tsing Hua University, Taiwan, Erasmus University Rotterdam, the Netherlands, Complutense University of Madrid, Spain) |
Abstract: | Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification can capture both asymmetry and leverage. Asymmetry refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, while leverage refers to the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases such as EGARCH(1,0) or EGARCH(1,1), under highly restrictive and unverifiable conditions, and possibly only by simulation. A limitation in the development of asymptotic properties of the QMLE for the EGARCH(p,q) model is the lack of an invertibility condition for the returns shocks underlying the model. This paper shows that the EGARCH(p,q) model can be derived from a stochastic process for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH(p,q) parameters. (A stylised EGARCH(1,1) filter sketch follows below.) |
Keywords: | Leverage, asymmetry, existence, stochastic process, asymptotic properties, invertibility |
JEL: | C22 C52 C58 G32 |
Date: | 2015–02–12 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150022&r=ecm |
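A stylised EGARCH(1,1) filter for the model discussed in the abstract above. The recursion makes clear why invertibility matters: each standardised shock must be recovered from past returns through the filtered volatility itself. Parameter values are illustrative for unit-variance data:

    import numpy as np

    def egarch_filter(returns, omega=0.0, alpha=0.10, gamma=-0.08, beta=0.95):
        # log sig2[t] = omega + alpha*(|z|-E|z|) + gamma*z + beta*log sig2[t-1];
        # gamma < 0 makes negative shocks raise volatility more (asymmetry).
        Eabs = np.sqrt(2.0 / np.pi)              # E|z| for standard normal z
        logsig2 = np.empty(len(returns) + 1)
        logsig2[0] = omega / (1.0 - beta)        # unconditional level
        for t, r in enumerate(returns):
            # Shock recovered via the filter: this recursion in past returns
            # is exactly what the invertibility condition concerns.
            z = r / np.exp(0.5 * logsig2[t])
            logsig2[t + 1] = (omega + alpha * (abs(z) - Eabs)
                              + gamma * z + beta * logsig2[t])
        return np.exp(logsig2)

    rng = np.random.default_rng(3)
    print(egarch_filter(rng.normal(size=1000))[-3:])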
By: | Manabu Asai (Soka University, Japan); Michael McAleer (National Tsing Hua University, Taiwan, Erasmus University Rotterdam, the Netherlands, Complutense University of Madrid, Spain) |
Abstract: | The paper investigates the impact of jumps in forecasting co-volatility, accommodating leverage effects. We modify the jump-robust two-time-scale covariance estimator of Boudt and Zhang (2013) such that the estimated matrix is positive definite. Using this approach, we can disentangle the estimates of the integrated co-volatility matrix and jump variations from the quadratic covariation matrix. Empirical results for three stocks traded on the New York Stock Exchange indicate that the co-jumps of two assets have a significant impact on future co-volatility, but that the impact is negligible at weekly and monthly forecast horizons. (A stylised sketch of one way to enforce positive definiteness follows below.) |
Keywords: | Co-Volatility; Forecasting; Jump; Leverage Effects; Realized Covariance; Threshold |
JEL: | C32 C53 C58 G17 |
Date: | 2015–02–09 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20150018&r=ecm |
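A stylised sketch of one simple way to enforce the positive definiteness required of a covariance estimate, by clipping negative eigenvalues. This is a generic projection for illustration, not necessarily the authors' modification of the Boudt and Zhang (2013) estimator:

    import numpy as np

    def nearest_psd(S, floor=1e-8):
        # Symmetrise, then clip eigenvalues from below to restore definiteness.
        S = (S + S.T) / 2.0
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(np.clip(vals, floor, None)) @ vecs.T

    S = np.array([[ 1.0, 0.9, -0.5],
                  [ 0.9, 1.0,  0.9],
                  [-0.5, 0.9,  1.0]])                  # indefinite "covariance" estimate
    print(np.linalg.eigvalsh(S).min())                 # negative: not positive definite
    print(np.linalg.eigvalsh(nearest_psd(S)).min() > 0)   # True after clipping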
By: | Tino Berger; Gerdie Everaert; Hauke Vierke |
Abstract: | This paper analyzes the amount of time variation in the parameters of a reduced-form empirical macroeconomic model for the U.S. economy. We set up an unobserved components model to decompose output, inflation and unemployment into their stochastic trend and business cycle gap components. The latter are related through the Phillips curve and Okun's Law. Key parameters, such as the potential output growth rate, the slope of the Phillips curve and the strength of Okun's Law, are allowed to change over time in order to account for potential structural changes in the U.S. economy. Moreover, stochastic volatility is added to all components to account for shifts in macroeconomic volatility. A Bayesian stochastic model specification search is employed to test which parameters are time-varying and which unobserved components exhibit stochastic volatility. Using quarterly data from 1959Q2 to 2014Q3, we find substantial time variation in Okun's Law, while the slope of the Phillips curve appears to be stable. The potential output growth rate exhibits a drastic and persistent decline. Stochastic volatility is found to be important for cyclical shocks to the economy, while the volatility of permanent shocks remains stable. (A minimal time-varying coefficient filter sketch follows below.) |
JEL: | C32 E24 E31 |
Date: | 2015–04 |
URL: | http://d.repec.org/n?u=RePEc:rug:rugwps:15/903&r=ecm |
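A minimal time-varying coefficient filter in the spirit of the abstract above, letting a single regression slope drift as a random walk; this is a stand-in for the paper's full unobserved components model with stochastic volatility, and the variable names and noise variances are illustrative:

    import numpy as np

    def tvp_filter(y, x, q=0.01, r=1.0):
        # Kalman filter for y[t] = beta[t]*x[t] + eps, beta[t] = beta[t-1] + eta.
        beta, P, path = 0.0, 10.0, []
        for yt, xt in zip(y, x):
            P += q                                 # predict the state variance
            k = P * xt / (xt * xt * P + r)         # Kalman gain
            beta += k * (yt - beta * xt)           # update the coefficient
            P *= (1.0 - k * xt)
            path.append(beta)
        return np.array(path)

    rng = np.random.default_rng(5)
    T = 300
    x = rng.normal(0.0, 1.0, T)                         # e.g. an output gap
    true_beta = -0.3 - 0.4 * np.linspace(0.0, 1.0, T)   # a drifting Okun's Law slope
    y = true_beta * x + rng.normal(0.0, 0.5, T)         # e.g. an unemployment gap
    print(tvp_filter(y, x)[[0, 149, 299]])              # the filter tracks the drift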
By: | Michael McAleer (National Tsing Hua University, Taiwan; Erasmus University Rotterdam, the Netherlands; Complutense University of Madrid, Spain) |
Abstract: | The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1993), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as in capturing asymmetry, which is the different effect on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as about which asymmetric models are purported to capture leverage, the purpose of the paper is two-fold, namely: (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; and (2) to show that leverage is not possible in these univariate conditional volatility models. (A stylised GJR(1,1) filter sketch follows below.) |
Keywords: | Conditional volatility models, random coefficient autoregressive processes, random coefficient complex nonlinear moving average process, asymmetry, leverage |
JEL: | C22 C52 C58 G32 |
Date: | 2014–09–18 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20140125&r=ecm |
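A stylised GJR(1,1) filter for the model in the abstract above; the comments spell out why asymmetry does not imply leverage here. Parameter values are illustrative:

    import numpy as np

    def gjr_filter(returns, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90):
        # sig2[t] = omega + (alpha + gamma*1[eps<0])*eps^2 + beta*sig2[t-1].
        # gamma > 0 gives asymmetry, but eps^2 >= 0 means shocks of either sign
        # can only add to next-period variance, so the strictly negative
        # return-volatility correlation that defines leverage cannot arise.
        sig2 = np.empty(len(returns) + 1)
        sig2[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)   # unconditional variance
        for t, e in enumerate(returns):
            sig2[t + 1] = omega + (alpha + gamma * (e < 0.0)) * e**2 + beta * sig2[t]
        return sig2

    rng = np.random.default_rng(9)
    print(gjr_filter(rng.normal(0.0, 0.01, 1000))[-3:])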