on Econometric Time Series
Issue of 2020‒02‒03
thirteen papers chosen by Jaqueson K. Galimberti, Auckland University of Technology
By: | Chainarong Amornbunchornvej; Elena Zheleva; Tanya Y. Berger-Wolf |
Abstract: | Granger causality is a fundamental technique for causal inference in time series data, commonly used in the social and biological sciences. Typical operationalizations of Granger causality make a strong assumption that every time point of the effect time series is influenced by a combination of other time series with a fixed time delay. However, the assumption of the fixed time delay does not hold in many applications, such as collective behavior, financial markets, and many natural phenomena. To address this issue, we develop variable-lag Granger causality, a generalization of Granger causality that relaxes the assumption of the fixed time delay and allows causes to influence effects with arbitrary time delays. In addition, we propose a method for inferring variable-lag Granger causality relations. We demonstrate our approach on an application for studying coordinated collective behavior and show that it performs better than several existing methods in both simulated and real-world datasets. Our approach can be applied in any domain of time series analysis. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.10829&r=all |
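A minimal sketch of the fixed-lag baseline that this paper generalizes: the standard Granger test compares a restricted autoregression of the effect series against an unrestricted one that adds the lagged cause, at a single fixed delay. The simulated data and lag length below are illustrative assumptions, not the authors' variable-lag method.

```python
import numpy as np

def granger_ssr(x, y, lag=1):
    """Compare restricted (y on its own lag) vs unrestricted (adds lagged x)
    OLS fits; a large drop in SSR suggests x Granger-causes y at this lag."""
    T = len(y)
    Y = y[lag:]
    ones = np.ones(T - lag)
    Z_r = np.column_stack([ones, y[:-lag]])            # restricted: intercept + own lag
    Z_u = np.column_stack([ones, y[:-lag], x[:-lag]])  # unrestricted: adds lagged x
    ssr = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    return ssr(Z_r), ssr(Z_u)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()  # y driven by x with a one-period lag

ssr_r, ssr_u = granger_ssr(x, y, lag=1)
print(ssr_u < ssr_r)  # lagged x sharply reduces residual variance
```

The variable-lag generalization in the paper relaxes exactly the fixed `lag=1` alignment hard-coded here.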
By: | Yoonseok Lee (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Yulong Wang (Department of Economics and Center for Policy Research, 127 Eggers Hall, Syracuse University, Syracuse, NY 13244-1020)
Abstract: | This paper develops new statistical inference methods for the parameters in threshold regression models. In particular, we develop a test for homogeneity of the threshold parameter and a test for linear restrictions on the regression coefficients. The tests are built upon a transformed partial-sum process after re-ordering the observations based on the rank of the threshold variable, which recasts the cross-sectional threshold problem into the time-series structural break analogue. The asymptotic distributions of the test statistics are derived using this novel approach, and the finite sample properties are studied in Monte Carlo simulations. We apply the new tests to the tipping point problem studied by Card, Mas, and Rothstein (2008), and statistically justify that the location of the tipping point varies across tracts. |
Keywords: | Threshold Regression, Test, Homogeneous Threshold, Linear Restriction, Tipping Point |
JEL: | C12 C24 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:223&r=all |
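The re-ordering idea can be illustrated in a few lines: sorting observations by the threshold variable turns a cross-sectional threshold into a structural break at a rank location, which a sup-F scan can then locate. The simulated design and mean-shift-only model below are illustrative simplifications, not the paper's tests.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
q = rng.uniform(size=n)                                      # threshold variable
y = np.where(q < 0.5, 0.0, 2.0) + 0.3 * rng.normal(size=n)   # regime shift at q = 0.5

# Re-order by the rank of q: the cross-sectional threshold becomes
# a time-series structural break in the sorted sequence.
ys = y[np.argsort(q)]

def sup_f(ys, trim=0.15):
    """Scan candidate break points in the sorted sequence; return the
    location and value of the largest F-type statistic."""
    n = len(ys)
    ssr0 = np.sum((ys - ys.mean()) ** 2)
    best_k, best_f = None, -np.inf
    for k in range(int(trim * n), int((1 - trim) * n)):
        ssr1 = (np.sum((ys[:k] - ys[:k].mean()) ** 2)
                + np.sum((ys[k:] - ys[k:].mean()) ** 2))
        f = (ssr0 - ssr1) / (ssr1 / (n - 2))
        if f > best_f:
            best_f, best_k = f, k
    return best_k, best_f

k, f = sup_f(ys)
print(k / n)  # break located near the rank of the true threshold, 0.5
```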
By: | Worapree Maneesoonthorn; Gael M. Martin; Catherine S. Forbes |
Abstract: | We conduct an extensive evaluation of price jump tests based on high-frequency financial data. After providing a concise review of multiple alternative tests, we document the size and power of all tests in a range of empirically relevant scenarios. Particular focus is given to the robustness of test performance to the presence of jumps in volatility and microstructure noise, and to the impact of sampling frequency. The paper concludes by providing guidelines for empirical researchers about which test to choose in any given setting. |
Keywords: | price jump tests, nonparametric jump measures, bivariate jump diffusion model, volatility jumps, microstructure noise, sampling frequency. |
JEL: | C12 C22 C58 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2020-3&r=all |
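One widely used nonparametric construction in this literature contrasts realized variance, which picks up jumps, with bipower variation, which is robust to them. The stylized sketch below (simulated one-minute returns, a single injected jump) illustrates that contrast; it is not any specific test evaluated in the paper.

```python
import numpy as np

def rv_bv(r):
    """Realized variance and bipower variation from intraday returns."""
    rv = np.sum(r ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    return rv, bv

rng = np.random.default_rng(2)
m = 390                                  # one-minute returns over a trading day
r = rng.normal(scale=0.01 / np.sqrt(m), size=m)
rv0, bv0 = rv_bv(r)                      # diffusive day: RV close to BV

r_jump = r.copy()
r_jump[200] += 0.02                      # inject a single price jump
rv1, bv1 = rv_bv(r_jump)                 # jump inflates RV but barely moves BV
print(rv1 - bv1 > rv0 - bv0)
```

Test statistics in this family scale the RV−BV gap by an estimate of its sampling variability; the sketch keeps only the raw gap.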
By: | Schlösser, Alexander |
Abstract: | We investigate the predictive power of several leading indicators in order to forecast industrial production in Germany. In addition, we compare their predictive performance with variables from two competing categories, namely macroeconomic and financial variables. The predictive power within and between these three categories is evaluated by applying Dynamic Model Averaging (DMA), which allows for time-varying coefficients and model change. We find that leading indicators have the largest predictive power. Macroeconomic variables, in contrast, are weak predictors, as they are not even able to outperform a benchmark AR model, while financial variables are clearly inferior to leading indicators in terms of predictive power. We show that the best set of predictors, within and between categories, changes over time and depends on the forecast horizon. Furthermore, allowing for time-varying model size is especially crucial after the Great Recession.
Keywords: | forecasting,industrial production,model averaging,leading indicator,time-varying parameter |
JEL: | C11 C52 E23 E27 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:rwirep:838&r=all |
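The forgetting-factor mechanics behind Dynamic Model Averaging can be sketched with two fixed candidate models whose weights are discounted and then updated by predictive likelihood. The candidate means, forgetting factor, and simulated regime shift below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])  # regime shift at t = 100

mus = [0.0, 3.0]          # two fixed candidate "models" (constant means here)
alpha = 0.95              # forgetting factor: discounts old predictive evidence
w = np.array([0.5, 0.5])
path = []
for t in range(len(y)):
    w_pred = w ** alpha                               # forget: flatten old weights
    w_pred /= w_pred.sum()
    lik = np.array([normal_pdf(y[t], m, 1.0) for m in mus])
    w = w_pred * lik                                  # update by predictive likelihood
    w /= w.sum()
    path.append(w[1])

print(path[50], path[-1])  # weight on model 2: near 0 before the shift, near 1 after
```

In full DMA each candidate is itself a time-varying-parameter regression; the two constant means stand in for that model set.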
By: | Guglielmo Maria Caporale; Menelaos Karanasos; Stavroula Yfanti |
Abstract: | This paper estimates a bivariate HEAVY system including daily and intra-daily volatility equations and its macro-augmented asymmetric power extension. It focuses on economic factors that exacerbate stock market volatility and represent major threats to financial stability. In particular, it extends the HEAVY framework with powers, leverage, and macro effects that improve its forecasting accuracy significantly. Higher uncertainty is found to increase the leverage and macro effects from credit and commodity markets on stock market realized volatility. Specifically, Economic Policy Uncertainty is shown to be one of the main drivers of US and UK financial volatility alongside global credit and commodity factors. |
Keywords: | asymmetries, economic policy uncertainty, HEAVY model, high-frequency data, macro-financial linkages, power transformations, realized variance, risk management |
JEL: | C22 C58 D80 E44 G01 G15 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8000&r=all |
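The core HEAVY idea, driving the conditional variance of daily returns with a lagged realized measure rather than squared returns, can be sketched as a one-line recursion. The parameter values and simulated realized-measure series below are illustrative, not estimates from the paper, and the sketch omits the paper's power, leverage, and macro extensions.

```python
import numpy as np

def heavy_filter(rm, omega, alpha, beta, h0):
    """HEAVY-style conditional-variance recursion: today's variance responds
    to yesterday's realized measure instead of yesterday's squared return."""
    h = np.empty(len(rm))
    h[0] = h0
    for t in range(1, len(rm)):
        h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
    return h

rng = np.random.default_rng(4)
rm = np.abs(rng.normal(1.0, 0.3, 500))   # stand-in realized-variance series
h = heavy_filter(rm, omega=0.1, alpha=0.3, beta=0.6, h0=1.0)
print(h.mean())  # fluctuates around (0.1 + 0.3 * E[RM]) / (1 - 0.6), about 1 here
```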
By: | Thanasis Stengos (Department of Economics and Finance, University of Guelph, Guelph ON Canada); Theodore Panagiotidis (Department of Economics, University of Macedonia); Orestis Vravosinos (Department of Economics, New York University) |
Abstract: | We examine the significance of forty-one potential covariates of bitcoin returns for the period 2010–2018 (2,872 daily observations). We employ the principal component-guided sparse regression (PC-LASSO) introduced by Tay et al. (2018). We reveal that economic policy uncertainty and stock market volatility are among the most important variables for bitcoin. We also trace strong evidence of bubbly bitcoin behavior in the 2017–2018 period.
Keywords: | bitcoin; cryptocurrency; bubble; sparse regression; LASSO; PC-LASSO; principal component; flexible least squares |
JEL: | G12 G15 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:gue:guelph:2020-01&r=all |
By: | Budnik, Katarzyna; Rünstler, Gerhard |
Abstract: | We study the identification of policy shocks in Bayesian proxy VARs for the case that the instrument consists of sparse qualitative observations indicating the signs of certain shocks. We propose two identification schemes: linear discriminant analysis and a non-parametric sign concordance criterion. Monte Carlo simulations suggest that these provide more accurate confidence bounds than standard proxy VARs and are more efficient than local projections. Our application to U.S. macroprudential policies finds persistent effects of capital requirements and mortgage underwriting standards on credit volumes and house prices, together with moderate effects on GDP and inflation.
JEL: | C32 E44 G38
Keywords: | Bayesian proxy VAR, capital requirements, discriminant analysis, mortgage underwriting standards, sign concordance |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20202353&r=all |
By: | Ibrahim, Omar |
Abstract: | This research evaluates market risk measures for equity exposures on the Egyptian stock market, utilising a variety of parametric and non-parametric methods to estimate volatility dynamics. Historical Simulation, EWMA (RiskMetrics), GARCH, GJR-GARCH, and Markov regime-switching GARCH models are empirically estimated. Value at Risk and Conditional Value at Risk measures are backtested in order to evaluate the alternative models. Results indicate the superiority of asymmetric GARCH models when combined with a Markov regime-switching process in quantifying market risk, as is evident from the backtests, which were performed in accordance with current regulatory demands. The implications are important to regulators and practitioners.
Keywords: | Risk Management, Value at Risk, GARCH, Markov Chains |
JEL: | C58 |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:98091&r=all |
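The simplest model in this comparison, the RiskMetrics EWMA recursion, and a hit-rate backtest of the resulting VaR can be sketched as follows. The decay parameter 0.94 is the standard RiskMetrics choice for daily data; the simulated returns and initialization are illustrative assumptions.

```python
import numpy as np

def ewma_var99(r, lam=0.94):
    """RiskMetrics EWMA volatility and the implied one-day 99% VaR series."""
    s2 = np.empty(len(r))
    s2[0] = r[:30].var()                      # crude initialization from early sample
    for t in range(1, len(r)):
        s2[t] = lam * s2[t - 1] + (1 - lam) * r[t - 1] ** 2
    z = 2.326                                 # ~99% standard-normal quantile
    return z * np.sqrt(s2)                    # VaR as a positive loss threshold

rng = np.random.default_rng(5)
r = rng.normal(0, 0.01, 2000)
var99 = ewma_var99(r)
violations = np.mean(-r > var99)              # Kupiec-style hit rate
print(violations)  # should be close to 1 - 0.99 = 0.01 for a well-calibrated model
```

Backtests of the kind used in the paper formally test whether the hit rate matches the nominal level and whether violations cluster.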
By: | Florian Huber; Michael Pfarrhofer; Philipp Piribauer |
Abstract: | This paper develops a dynamic factor model that uses euro area (EA) country-specific information on output and inflation to estimate an area-wide measure of the output gap. Our model assumes that output and inflation can be decomposed into country-specific stochastic trends and a common cyclical component. Comovement in the trends is introduced by imposing a factor structure on the shocks to the latent states. We moreover introduce flexible stochastic volatility specifications to control for heteroscedasticity in the measurement errors and innovations to the latent states. Carefully specified shrinkage priors allow for pushing the model towards a homoscedastic specification, if supported by the data. Our measure of the output gap closely tracks other commonly adopted measures, with small differences in magnitudes and timing. To assess whether the model-based output gap helps in forecasting inflation, we perform an out-of-sample forecasting exercise. The findings indicate that our approach yields superior inflation forecasts, both in terms of point and density predictions. |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2001.03935&r=all |
By: | Hajivassiliou, Vassilis |
Abstract: | This paper discusses switching regressions econometric modelling with imperfect regime classification information. The econometric novelty is that misclassification probabilities are allowed to vary endogenously over time. Standard maximum likelihood estimation is infeasible in this case because each likelihood contribution requires the evaluation of 2^T terms (where T is the number of observations available). We develop an algorithm that allows efficient estimation when such imperfect information is available, by evaluating the exact likelihood through just T matrix multiplications (each a 2×2 matrix times a 2×1 vector). Our methods are shown to be widely applicable to various areas of economic analysis, such as Hamilton's work on Markov-switching models in macroeconomics, external financing problems faced by firms in corporate finance, and game-theoretic models of price collusion in industrial organization. We proceed to apply our methods to analyze price fixing by the Joint Executive Committee railroad cartel from 1880 to 1886 and develop tests of two prototypical game-theoretic models of tacit collusion. The first model, due to Abreu, Pearce and Stacchetti (1986), predicts that price will switch across regimes according to a Markov process. The second model, by Rotemberg and Saloner (1986), concludes that price wars are more likely in periods of high industry demand. Switching regressions are used to model the firm's shifting between collusive and punishment behaviour. The JEC data set is expanded to include measures of grain production to be shipped and availability of substitute transportation services. Our findings cast doubt on the applicability of the Rotemberg and Saloner model to the JEC railroad cartel, while they confirm the Markovian prediction of the Abreu et al. model.
Keywords: | switching regressions models; measurement errors; trigger-price mechanisms; price-fixing |
JEL: | C72 L12 C51 C52 C15 |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:103119&r=all |
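The T-multiplications idea is the same forward recursion used in Hamilton-type filters: propagate a 2×1 vector of regime weights through one 2×2 step per observation and accumulate the likelihood, instead of summing over all 2^T regime paths. The two-regime Gaussian setup below is an illustrative sketch, not the paper's endogenous-misclassification model.

```python
import numpy as np

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def switching_loglik(y, mus, sigma, P, pi0):
    """Exact log-likelihood of a two-regime switching model via one
    2x2-matrix-times-2x1-vector multiplication per observation."""
    v = pi0.copy()
    ll = 0.0
    for obs in y:
        f = np.array([normal_pdf(obs, m, sigma) for m in mus])  # per-regime densities
        v = f * (P.T @ v)          # the single 2x2 @ 2x1 step for this observation
        c = v.sum()
        ll += np.log(c)
        v /= c                     # rescale to avoid numerical underflow
    return ll

rng = np.random.default_rng(6)
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # persistent regime transitions
states = [0]
for _ in range(299):
    states.append(rng.choice(2, p=P[states[-1]]))
y = np.array([rng.normal([0.0, 2.0][s], 0.5) for s in states])

ll_true = switching_loglik(y, [0.0, 2.0], 0.5, P, np.array([0.5, 0.5]))
ll_wrong = switching_loglik(y, [0.0, 0.0], 0.5, P, np.array([0.5, 0.5]))
print(ll_true > ll_wrong)  # the correct regime means fit the data better
```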
By: | Jaqueson K. Galimberti (School of Economics, Auckland University of Technology) |
Abstract: | This note evaluates how adaptive learning agents weigh different pieces of information when forming expectations with a recursive least squares algorithm. The analysis is based on a new and more general non-recursive representation of the learning algorithm, namely, a penalized weighted least squares estimator, where a penalty term accounts for the effects of the learning initials. The paper then draws behavioral implications of different specifications of the learning mechanism, such as the cases with decreasing, constant, regime-switching, and age-dependent gains. The latter is shown to imply the emergence of "dormant memories" as the agents get old.
Keywords: | bounded rationality, expectations, adaptive learning, memory |
JEL: | D83 D84 D90 E37 C32 C63 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:aut:wpaper:202004&r=all |
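A constant-gain recursive least squares update, one of the gain specifications the note discusses, can be sketched as follows; the scalar regression, gain value, and simulated structural break are illustrative assumptions, not the note's general setup.

```python
import numpy as np

def rls(y, x, gain):
    """Constant-gain recursive least squares for y_t = b * x_t + e_t.
    The gain fixes how fast old observations are discounted, so beliefs
    keep tracking a coefficient that drifts or breaks."""
    b, R = 0.0, 1.0
    for t in range(len(y)):
        R = R + gain * (x[t] ** 2 - R)                  # second-moment estimate
        b = b + gain * x[t] * (y[t] - b * x[t]) / R     # belief update
    return b

rng = np.random.default_rng(7)
x = rng.normal(size=600)
b_path = np.concatenate([np.full(300, 1.0), np.full(300, 2.0)])  # structural break
y = b_path * x + 0.1 * rng.normal(size=600)

b_hat = rls(y, x, gain=0.05)
print(b_hat)  # tracks the post-break coefficient near 2
```

With a decreasing gain of 1/t the same recursion would average over the whole sample and land between the two regimes, which is the kind of weighting contrast the note formalizes.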
By: | Angeletos, Georges Marios; Collard, Fabrice; Dellas, Harris |
Abstract: | We propose a new strategy for dissecting macroeconomic time series, provide a template for the propagation mechanism that best describes the observed business cycles, and use its properties to appraise models of both the parsimonious and the medium-scale variety. Our findings support the existence of a main business-cycle driver but rule out the following candidates for this role: technology or other shocks that map to TFP movements; news about future productivity; and inflationary demand shocks of the textbook type. Prominent members of the DSGE literature also lack the propagation mechanism seen in our anatomy of the data. Models that aim at accommodating demand-driven cycles under flexible prices appear promising.
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:123954&r=all |
By: | Ulrich K. Müller; James H. Stock; Mark W. Watson |
Abstract: | We develop a Bayesian latent factor model of the joint evolution of GDP per capita for 113 countries over the 118 years from 1900 to 2017. We find considerable heterogeneity in rates of convergence, including rates for some countries that are so slow that they might not converge (or diverge) in century-long samples, and evidence of “convergence clubs” of countries. The joint Bayesian structure allows us to compute a joint predictive distribution for the output paths of these countries over the next 100 years. This predictive distribution can be used for simulations requiring projections into the deep future, such as estimating the costs of climate change. The model’s pooling of information across countries results in tighter prediction intervals than are achieved using univariate information sets. Still, even using more than a century of data on many countries, the 100-year growth paths exhibit very wide uncertainty. |
JEL: | C32 C55 O47 |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26593&r=all |