on Econometric Time Series
By: Tue Gorgens; Dean Hyslop
Abstract: This paper examines dynamic binary response and multi-spell duration model approaches to analyzing longitudinal discrete-time binary outcomes. Prototypical dynamic binary response models specify low-order Markovian state dependence and restrict the effects of observed and unobserved heterogeneity on the probability of transitioning into and out of a state to have the same magnitude and opposite signs. In contrast, multi-spell duration models typically allow for state-specific duration dependence, and allow the probability of entry into and exit from a state to vary flexibly. We show that both of these approaches are special cases within a general framework. We compare specific dynamic binary response and multi-spell duration models empirically using a case study of poverty transitions. In this example, both the specification of state dependence and the restrictions on the state-specific transition probabilities imposed by the simpler dynamic binary response models are severely rejected against the more flexible multi-spell duration models. Consistent with recent literature, we conclude that the standard dynamic binary response model is unacceptably restrictive in this context.
Keywords: Panel data, transition data, binary response, duration analysis, event history analysis, initial conditions, random effects.
JEL: C33 C35 C41 C51
Date: 2016-02
URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2016-631&r=ets
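The contrast between a single dynamic binary response equation and state-specific transition equations described in the abstract above can be illustrated with a small simulation. The sketch below is a minimal example under assumed parameter values and a single covariate: it fits a pooled dynamic logit with a lagged state, and then separate entry and exit logits, which is the kind of flexibility the multi-spell duration approach allows. It is not the authors' estimator.

```python
# Illustrative simulation: state-specific transition probabilities versus a
# single pooled dynamic binary response equation. Parameter values assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, T = 2000, 8
x = rng.normal(size=(N, T))             # one observed covariate
a = rng.normal(scale=0.5, size=N)       # unobserved heterogeneity

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Entry and exit hazards respond differently to x and a (duration-model style).
y = np.zeros((N, T), dtype=int)
y[:, 0] = rng.random(N) < 0.3
for t in range(1, T):
    p_enter = logistic(-1.5 + 0.8 * x[:, t] + a)        # move from state 0 to 1
    p_stay  = logistic( 0.5 + 0.2 * x[:, t] + 0.5 * a)  # remain in state 1
    p = np.where(y[:, t - 1] == 1, p_stay, p_enter)
    y[:, t] = rng.random(N) < p

# Pooled dynamic binary response model: one equation with a lagged state.
ylag, ycur, xcur = y[:, :-1].ravel(), y[:, 1:].ravel(), x[:, 1:].ravel()
X = sm.add_constant(np.column_stack([ylag, xcur]))
print("pooled dynamic logit:", sm.Logit(ycur, X).fit(disp=0).params)

# Transition-specific models: entry and exit equations estimated separately.
for state in (0, 1):
    mask = ylag == state
    Xs = sm.add_constant(xcur[mask])
    print(f"from state {state}:", sm.Logit(ycur[mask], Xs).fit(disp=0).params)
```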
By: Tue Gorgens; Chirok Han; Sen Xue
Abstract: This paper investigates the relationship between moment restrictions and identification in simple linear AR(1) dynamic panel data models with fixed effects under standard minimal assumptions. The number of time periods is assumed to be small. The assumptions imply linear and quadratic moment restrictions which can be used for GMM estimation. The paper makes three points. First, contrary to common belief, the linear moment restrictions may fail to identify the autoregressive parameter even when it is known to be less than 1. Second, the quadratic moment restrictions provide full or partial identification in many of the cases where the linear moment restrictions do not. Third, the first moment restrictions can also be important for identification. Practical implications of the findings are illustrated using Monte Carlo simulations.
Keywords: Dynamic panel data models, fixed effects, identification, generalized method of moments, Arellano-Bond estimator.
JEL: C23
Date: 2016-03
URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2016-633&r=ets
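The role of the linear and quadratic moment restrictions can be sketched by evaluating a GMM criterion over a grid of candidate autoregressive values on simulated data. The example below assumes a particular initial condition, parameter values and an identity weighting matrix, so it only illustrates the mechanics; whether the linear moments alone identify the parameter depends on the data-generating process, which is the paper's point.

```python
# Minimal sketch of the linear (Arellano-Bond) and quadratic (Ahn-Schmidt type)
# moment restrictions for a short AR(1) panel with fixed effects, evaluated on
# simulated data. The design below is an assumption, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
N, T, rho = 5000, 4, 0.5               # many units, few time periods
alpha = rng.normal(size=N)             # fixed effects
y = np.zeros((N, T + 1))
y[:, 0] = alpha + rng.normal(size=N)   # one possible initial condition
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

def sample_moments(r):
    """Sample analogues of the moment restrictions at a candidate value r."""
    u = y[:, 1:] - r * y[:, :-1]       # equals alpha_i + eps_it at r = rho
    du = np.diff(u, axis=1)            # differencing removes alpha_i
    m = []
    # Linear restrictions: y_{i,s} orthogonal to Delta u_{i,t} for s <= t-2
    for t in range(du.shape[1]):       # du[:, t] is Delta u at period t+2
        for s in range(t + 1):
            m.append(np.mean(y[:, s] * du[:, t]))
    # Quadratic restrictions: u_{i,T} orthogonal to Delta u_{i,t} for t < T
    for t in range(du.shape[1] - 1):
        m.append(np.mean(u[:, -1] * du[:, t]))
    return np.array(m)

n_lin = (T - 1) * T // 2               # number of linear moments
grid = np.linspace(-0.5, 1.2, 341)
crit_lin, crit_all = [], []
for r in grid:
    m = sample_moments(r)
    crit_lin.append(m[:n_lin] @ m[:n_lin])   # GMM criterion, linear only
    crit_all.append(m @ m)                   # linear plus quadratic
print("criterion minimized at (linear only)     :", grid[int(np.argmin(crit_lin))])
print("criterion minimized at (linear+quadratic):", grid[int(np.argmin(crit_all))])
```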
By: Ching-Wai Chiu (Bank of England); Haroon Mumtaz (School of Economics and Finance, Queen Mary); Gabor Pinter (Bank of England; Centre for Macroeconomics (CFM))
Abstract: We introduce a Bayesian VAR model with non-Gaussian disturbances that are modelled with a finite mixture of normal distributions. Importantly, we allow for regime switching among the different components of the mixture of normals. Our model is highly flexible and can capture distributions that are fat-tailed, skewed and even multimodal. We show that our model can generate large out-of-sample forecast gains relative to standard forecasting models, especially during tranquil periods. Our model forecasts are also competitive with those generated by the conventional VAR model with stochastic volatility.
JEL: C11 C32 C52
Date: 2016-02
URL: http://d.repec.org/n?u=RePEc:cfm:wpaper:1609&r=ets
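A minimal simulation can convey the error structure the abstract describes: VAR disturbances drawn from a finite mixture of normals whose mixture weights switch with a Markov regime. The coefficient matrix, component means and covariances, and transition probabilities below are assumptions for illustration only, and no Bayesian estimation is attempted.

```python
# Simulate a bivariate VAR(1) whose disturbances follow a two-component
# mixture of normals with Markov-switching mixture weights (assumed values).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
T = 5000
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])                          # VAR(1) coefficient matrix
means = [np.array([0.0, 0.0]), np.array([1.5, -1.0])]  # mixture component means
covs  = [np.eye(2), 4.0 * np.eye(2)]                   # component covariances
P = np.array([[0.95, 0.05],                         # regime transition matrix;
              [0.10, 0.90]])                        # regimes shift mixture weights
weights = [np.array([0.9, 0.1]),                    # weights in regime 0
           np.array([0.5, 0.5])]                    # weights in regime 1

y = np.zeros((T, 2))
eps = np.zeros((T, 2))
regime = 0
for t in range(1, T):
    regime = rng.choice(2, p=P[regime])             # Markov regime switching
    comp = rng.choice(2, p=weights[regime])         # pick a mixture component
    eps[t] = rng.multivariate_normal(means[comp], covs[comp])
    y[t] = A @ y[t - 1] + eps[t]

# The mixture generates skewed, fat-tailed disturbances relative to a Gaussian VAR.
print("skewness of disturbances:", skew(eps, axis=0))
print("excess kurtosis         :", kurtosis(eps, axis=0))
```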
By: Forni, Mario; Giovannelli, Alessandro; Lippi, Marco; Soccorsi, Stefano
Abstract: The paper compares the pseudo real-time forecasting performance of three Dynamic Factor Models: (i) The standard principal-component model, Stock and Watson (2002a), (ii) The model based on generalized principal components, Forni et al. (2005), (iii) The model recently proposed in Forni et al. (2015b) and Forni et al. (2015a). We employ a large monthly dataset of macroeconomic and financial time series for the US economy, which includes the Great Moderation, the Great Recession and the subsequent recovery. Using a rolling window for estimation and prediction, we find that (iii) neatly outperforms (i) and (ii) in the Great Moderation period for both Industrial Production and Inflation, and for Inflation over the full sample. However, (iii) is outperformed by (i) and (ii) over the full sample for Industrial Production.
Date: 2016-03
URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11161&r=ets
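As a rough illustration of approach (i), the sketch below runs a rolling-window diffusion-index forecast on simulated data: principal components are extracted from a standardized panel and used to predict the target one step ahead. The panel size, factor structure and window length are assumptions; the generalized-principal-component methods (ii) and (iii) are not implemented here.

```python
# Rolling-window principal-component (diffusion-index) forecast on a
# simulated large panel, in the spirit of approach (i) above.
import numpy as np

rng = np.random.default_rng(3)
T, n, r, h = 300, 100, 3, 1                        # periods, series, factors, horizon
F = np.zeros((T, r))
for t in range(1, T):
    F[t] = 0.7 * F[t - 1] + rng.normal(size=r)     # persistent common factors
Lam = rng.normal(size=(n, r))
X = F @ Lam.T + rng.normal(size=(T, n))            # large panel of predictors
y = F @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=T)   # target series

def pc_factors(Xw, r):
    """Standardize the panel and extract the first r principal components."""
    Z = (Xw - Xw.mean(0)) / Xw.std(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] * s[:r]

window, errors = 150, []
for end in range(window, T - h):
    Fhat = pc_factors(X[end - window:end], r)
    yw = y[end - window:end]
    # regress y_{t+h} on estimated factors at time t within the window
    beta, *_ = np.linalg.lstsq(Fhat[:-h], yw[h:], rcond=None)
    forecast = Fhat[-1] @ beta
    errors.append(y[end + h - 1] - forecast)

print("rolling-window RMSE:", np.sqrt(np.mean(np.square(errors))))
```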
By: Chen, J.; Kobayashi, M.; McAleer, M.J.
Abstract: The paper considers whether financial returns have a common volatility process in the framework of the stochastic volatility models suggested by Harvey et al. (1994). We propose a stochastic volatility version of the ARCH test of Engle and Susmel (1993), who investigated whether international equity markets have a common volatility process. The paper also checks the hypothesis of frictionless cross-market hedging, which implies perfectly correlated volatility changes, as suggested by Fleming et al. (1998). In deriving the Lagrange Multiplier test statistic, the paper uses the technique of Chesher (1984) for differentiating an integral that contains a degenerate density function.
Keywords: Volatility comovement, Cross-market hedging, Spillovers, Contagion
JEL: C12 C58 G01 G11
Date: 2016-02-29
URL: http://d.repec.org/n?u=RePEc:ems:eureir:79925&r=ets
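The flavour of the common-volatility question can be conveyed with an Engle and Susmel (1993)-style check on simulated data: if two return series share a single volatility factor, some linear combination of them should show no ARCH. The factor loadings and the combination weight below are assumed known for simplicity; the paper's stochastic-volatility Lagrange Multiplier test itself is not implemented.

```python
# Common-volatility illustration: ARCH-LM statistics for two simulated returns
# driven by one GARCH-type factor, and for a combination that removes the factor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T, q = 3000, 5                                    # sample size, ARCH-LM lag order

h = np.ones(T)
f = np.zeros(T)
for t in range(1, T):
    h[t] = 0.05 + 0.10 * f[t - 1] ** 2 + 0.85 * h[t - 1]   # common volatility factor
    f[t] = np.sqrt(h[t]) * rng.normal()
r1 = 1.0 * f + 0.3 * rng.normal(size=T)           # idiosyncratic noise is homoskedastic
r2 = 0.5 * f + 0.3 * rng.normal(size=T)

def arch_lm(u, q):
    """TR^2 statistic from regressing u_t^2 on q of its own lags."""
    u2 = u ** 2
    Y = u2[q:]
    X = sm.add_constant(np.column_stack([u2[q - j:-j] for j in range(1, q + 1)]))
    return len(Y) * sm.OLS(Y, X).fit().rsquared

print("ARCH-LM, r1 alone  :", round(arch_lm(r1, q), 1))
print("ARCH-LM, r1 - 2*r2 :", round(arch_lm(r1 - 2.0 * r2, q), 1))  # removes f
```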
By: Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw)
Abstract: The study proposes two-step EWS-GARCH models for forecasting Value-at-Risk. EWS-GARCH allows different return distributions to be used in Value-at-Risk forecasting depending on the forecasted state of the financial time series. The specifications considered combine GARCH(1,1), with or without an adjustment to the empirical distribution of returns, as the Value-at-Risk model in a state of tranquillity with an empirical-tail, exponential or Pareto distribution for forecasting Value-at-Risk in a state of turbulence. The quality of the Value-at-Risk forecasts is evaluated in terms of their adequacy (the excess ratio, the Kupiec test, the Christoffersen test, the asymptotic test of unconditional coverage and the back-testing criteria defined by the Basel Committee) and in terms of loss functions (the Lopez quadratic loss function, the Abad and Benito absolute loss function, the third version of the Caporin loss function and an excessive-cost function proposed in the study). The results indicate that EWS-GARCH models may improve the quality of Value-at-Risk forecasts relative to benchmark models. However, the best specification of an EWS-GARCH model depends on the goals of the Value-at-Risk forecasting exercise; the final choice may depend on the expected levels of adequacy, conservatism and cost.
Keywords: Value-at-Risk, GARCH, forecasting, state of turbulence, regime switching, risk management, risk measure, market risk.
JEL: G17 C51 C52 C53
Date: 2016
URL: http://d.repec.org/n?u=RePEc:war:wpaper:2016-06&r=ets
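The two-step, state-dependent idea can be caricatured as follows: a parametric Value-at-Risk forecast in tranquil periods, an empirical-tail quantile in turbulent periods, and a Kupiec unconditional coverage backtest. The sketch below uses an EWMA volatility filter and a crude turbulence signal as stand-ins; it is not the paper's EWS-GARCH specification or its state-forecasting step.

```python
# Toy state-dependent VaR: Gaussian/EWMA VaR in tranquil periods, empirical-tail
# quantile in turbulent periods, backtested with the Kupiec coverage test.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(5)
T, alpha, lam = 2500, 0.01, 0.94
vol = np.where(rng.random(T) < 0.1, 3.0, 1.0)          # occasional turbulence
r = vol * rng.standard_t(df=5, size=T) / np.sqrt(5 / 3)  # unit-variance t shocks

sigma2 = np.full(T, r[:250].var())
for t in range(1, T):
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2   # EWMA filter

var_fcst = np.zeros(T)
for t in range(250, T):
    turbulent = abs(r[t - 1]) > 2.5 * np.sqrt(sigma2[t - 1])      # crude state signal
    if turbulent:
        var_fcst[t] = np.quantile(r[:t], alpha)                   # empirical tail
    else:
        var_fcst[t] = norm.ppf(alpha) * np.sqrt(sigma2[t])        # Gaussian VaR

hits = r[250:] < var_fcst[250:]
n, x = hits.size, hits.sum()
p = x / n
# Kupiec (1995) unconditional coverage likelihood-ratio test
lr = -2 * (np.log((1 - alpha) ** (n - x) * alpha ** x)
           - np.log((1 - p) ** (n - x) * p ** x))
print(f"exceedance rate {p:.4f}, Kupiec LR {lr:.2f}, p-value {chi2.sf(lr, 1):.3f}")
```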