New Economics Papers on Econometrics |
By: | Arturas Juodis (Faculty of Economics and Business, University of Groningen); Yiannis Karavias (Department of Economics, University of Birmingham) |
Abstract: | The power of Granger non-causality tests in panel data depends on the type of the alternative hypothesis: feedback from other variables might be homogeneous, homogeneous within groups or heterogeneous across different panel units. Existing tests have power against only one of these alternatives and may fail to reject the null hypothesis if the specified type of alternative is incorrect. This paper proposes a new Union-Intersection (UI) test which has correct size and good power against any type of alternative. The UI test is based on an existing test which is powerful against heterogeneous alternatives and a new Wald-type test which is powerful against homogeneous alternatives. The Wald test is designed to have good size and power properties for moderate to large time series dimensions and is based on a bias-corrected split panel jackknife-type estimator. Evidence from simulations confirms that the new UI test provides power against any direction of the alternative. |
Keywords: | Panel Data, Granger Causality, VAR |
JEL: | C13 C33 |
Date: | 2019–04–24 |
URL: | http://d.repec.org/n?u=RePEc:lie:wpaper:59&r=all |
By: | Peter C. B. Phillips; Zhentao Shi |
Abstract: | The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L2-boosting in machine learning. The paper develops limit theory to show that the boosted HP filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks -- the most common trends that appear in macroeconomic data and current modeling methodology. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.00175&r=all |
By: | Nick Koning; Paul Bekker |
Abstract: | This paper considers the problem of testing many moment inequalities, where the number of moment inequalities ($p$) is possibly larger than the sample size ($n$). Chernozhukov et al. (2018) proposed asymptotic tests for this problem using the maximum $t$ statistic. We observe that such tests can have low power if multiple inequalities are violated. As an alternative, we propose a novel randomization test based on a maximum non-negatively weighted combination of $t$ statistics. Simulations show that the test controls size in small samples ($n = 30$, $p = 1000$), and often has substantially higher power against alternatives with multiple violations. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.12775&r=all |
By: | Papageorgiou, Ioulia; Moustaki, Irini |
Abstract: | Pairwise likelihood is a limited information estimation method that has also been used for estimating the parameters of latent variable and structural equation models. Pairwise likelihood is a special case of composite likelihood methods that uses lower order conditional or marginal log-likelihoods instead of the full log-likelihood. The composite likelihood to be maximized is a weighted sum of marginal or conditional log-likelihoods. Weighting has been proposed for increasing efficiency, but the choice of weights is not straightforward in most applications. Furthermore, it has been pointed out that higher order scores should be left out to avoid duplicating lower order marginal information. In this paper, we approach the problem of weighting from a sampling perspective. More specifically, we propose a sampling method for selecting pairs based on their contribution to the total variance from all pairs. The sampling approach does not aim to increase efficiency but to decrease the estimation time, especially in models with a large number of observed categorical variables. We demonstrate the performance of the proposed methodology using simulated examples and a real application. |
Keywords: | principal component analysis; structural equation models; factor analysis; composite likelihood |
JEL: | C1 |
Date: | 2018–04–21 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:87592&r=all |
By: | A Clements; D Preve |
Abstract: | The standard heterogeneous autoregressive (HAR) model is perhaps the most popular benchmark model for forecasting return volatility. It is often estimated using raw realized variance (RV) and ordinary least squares (OLS). However, given the stylized facts of RV and well-known properties of OLS, this combination should be far from ideal. One goal of this paper is to investigate how the predictive accuracy of the HAR model depends on the choice of estimator, transformation, and forecasting scheme made by the market practitioner. Another goal is to examine the effect of replacing its volatility proxy based on high-frequency data (RV) with a proxy based on free and publicly available low-frequency data (the logarithmic range). In an out-of-sample study, covering three major stock market indices over 16 years, it is found that simple remedies systematically outperform not only standard HAR but also state-of-the-art HARQ forecasts, and that HAR models using the logarithmic range can often produce forecasts of similar quality to those based on RV. |
Keywords: | Volatility forecasting; Realized variance; HAR model; HARQ model; Robust regression; Box-Cox transformation; Forecast comparisons; QLIKE loss; Model confidence set |
JEL: | C22 C51 C52 C53 C58 |
Date: | 2019–04–12 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2019_01&r=all |
By: | Gregor Zens; Maximilian Böck |
Abstract: | This paper investigates the role of high-dimensional information sets in the context of Markov switching models with time-varying transition probabilities. Markov switching models are commonly employed in empirical macroeconomic research and policy work. However, the information used to model the switching process is usually limited drastically to ensure stability of the model. Increasing the number of included variables to enlarge the information set might even result in decreasing precision of the model. Moreover, it is often not clear a priori which variables are actually relevant when it comes to informing the switching behavior. Building on recent contributions in the field of dynamic factor analysis, we introduce a general type of Markov switching autoregressive model for non-linear time series analysis. Large numbers of time series are allowed to inform the switching process through a factor structure. This factor-augmented Markov switching (FAMS) model overcomes estimation issues that are likely to arise in previous assessments of the modeling framework. The result is more accurate estimates of the switching behavior as well as improved model fit. The performance of the FAMS model is illustrated in a simulated data example as well as in a US business cycle application. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.13194&r=all |
By: | Rico Krueger; Prateek Bansal; Michel Bierlaire; Ricardo A. Daziano; Taha H. Rashidi |
Abstract: | Variational Bayes (VB) methods have emerged as a fast and computationally efficient alternative to Markov chain Monte Carlo (MCMC) methods for Bayesian estimation of mixed logit models. In this paper, we derive a VB method for posterior inference in mixed multinomial logit models with unobserved inter- and intra-individual heterogeneity. The proposed VB method is benchmarked against MCMC in a simulation study. The results suggest that VB is substantially faster than MCMC but also noticeably less accurate, because the mean-field assumption of VB is too restrictive. Future research should thus focus on enhancing the expressiveness and flexibility of the variational approximation. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.00419&r=all |
By: | Jelena Bradic; Stefan Wager; Yinchu Zhu |
Abstract: | Many popular methods for building confidence intervals on causal effects under high-dimensional confounding require strong "ultra-sparsity" assumptions that may be difficult to validate in practice. To alleviate this difficulty, we study a new method for average treatment effect estimation that yields asymptotically exact confidence intervals assuming that either the conditional response surface or the conditional probability of treatment allows for an ultra-sparse representation (but not necessarily both). This guarantee allows us to provide valid inference for the average treatment effect in high dimensions under considerably more generality than available baselines. In addition, we show that our procedure is semiparametrically efficient. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.00744&r=all |
By: | Ingo Hoffmann; Christoph J. Börner |
Abstract: | In risk management, tail risks are of crucial importance. Risks must be assessed at high quantiles, in accordance with the regulatory authority's requirements. In general, the underlying distribution function is unknown, the data are sparse, and therefore special tail models are used. Very often, the generalized Pareto distribution is employed as a basic model, and its parameters are determined from data in the tail area. With the fitted tail model, statisticians then calculate the required high quantiles. In this context, we examine how accurately such quantiles can be calculated: we determine the finite-sample distribution function of the quantile estimator, depending on the confidence level and the parameters of the tail model, and then calculate the finite-sample bias and variance of the quantile estimator. Finally, we present an impact analysis on the quantiles of an unknown distribution function. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.12113&r=all |
By: | Allison Koenecke; Amita Gajewar |
Abstract: | For any financial organization, computing accurate quarterly forecasts for various products is one of the most critical operations. As the granularity at which forecasts are needed increases, traditional statistical time series models may not scale well. We apply deep neural networks in the forecasting domain by experimenting with techniques from Natural Language Processing (Encoder-Decoder LSTMs) and Computer Vision (Dilated CNNs), as well as incorporating transfer learning. A novel contribution of this paper is the application of curriculum learning to neural network models built for time series forecasting. We illustrate the performance of our models using Microsoft's revenue data corresponding to Enterprise, and Small, Medium & Corporate products, spanning approximately 60 regions across the globe for 8 different business segments, and totaling on the order of tens of billions of USD. We compare our models' performance to the ensemble model of traditional statistics and machine learning techniques currently used by Microsoft Finance. With this in-production model as a baseline, our experiments yield an approximately 30% improvement in overall accuracy on test data. We find that our curriculum learning LSTM-based model performs best, showing that it is reasonable to implement our proposed methods without overfitting on medium-sized data. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.12887&r=all |
By: | A Clements; M Doolan |
Abstract: | The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that are capable of placing varying emphasis on losses within the combination estimation period. A comprehensive empirical analysis of the out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta. |
Keywords: | Multivariate volatility, combination forecasts, forecast evaluation, model confidence set |
JEL: | C22 G00 |
Date: | 2018–12–11 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2018_02&r=all |
By: | John A. Clithero; Jae Joon Lee; Joshua Tasoff |
Abstract: | Direct elicitation, guided by theory, is the standard method for eliciting individual-level latent variables. We present an alternative approach, supervised machine learning (SML), and apply it to measuring individual valuations for goods. We find that the approach is superior for predicting out-of-sample individual purchases relative to a canonical direct-elicitation approach, the Becker-DeGroot-Marschak (BDM) method. The BDM is imprecise and systematically biased, understating valuations. We characterize the performance of SML using a variety of estimation methods and data. The simulation results suggest that prices set by SML would increase revenue by 22% over the BDM, using the same data. |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.13329&r=all |