New Economics Papers on Econometrics
By: | Li, Z. M.; Laeven, R. J. A.; Vellekoop, M. H. |
Abstract: | In this paper, we develop econometric tools to analyze the integrated volatility (IV) of the efficient price and the dynamic properties of microstructure noise in high-frequency data under general dependent noise. We first develop consistent estimators of the variance and autocovariances of noise using a variant of realized volatility. Next, we employ these estimators to adapt the pre-averaging method and derive consistent estimators of the IV, which converge stably to a mixed Gaussian distribution at the optimal rate n^{1/4}. To improve the finite sample performance, we propose a multi-step approach that corrects the finite sample bias, which turns out to be crucial in applications. Our extensive simulation studies demonstrate the excellent performance of our multi-step estimators. In an empirical study, we analyze the dependence structures of microstructure noise and provide intuitive economic interpretations; we also illustrate the importance of accounting for both the serial dependence in noise and the finite sample bias when estimating IV.
Keywords: | Dependent microstructure noise, realized volatility, bias correction, integrated volatility, mixing sequences, pre-averaging method |
JEL: | C13 C14 C58 |
Date: | 2019–06–14 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1952&r=all |
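A minimal Python sketch of the pre-averaging recipe described in the abstract above, assuming i.i.d. noise (the special case that the paper generalizes to dependent noise); the function name, the weight function g(x) = min(x, 1-x), the window choice, and the noise-variance estimator RV/(2n) are the classical textbook choices, not the authors' multi-step, bias-corrected estimators.

```python
import numpy as np

def preaveraged_iv(y, theta=0.5):
    """Classical pre-averaging estimate of integrated volatility from noisy log-prices y."""
    n = len(y) - 1                         # number of returns
    dy = np.diff(y)                        # noisy returns
    kn = int(np.ceil(theta * np.sqrt(n)))  # pre-averaging window, of order n^{1/2}
    j = np.arange(1, kn) / kn
    w = np.minimum(j, 1 - j)               # weights g(j/kn)
    ybar = np.convolve(dy, w[::-1], mode="valid")   # pre-averaged returns
    psi1, psi2 = 1.0, 1.0 / 12.0           # limiting constants for g(x) = min(x, 1-x)
    rv = np.sum(dy ** 2)                   # realized volatility of the noisy returns
    omega2 = rv / (2 * n)                  # noise variance, valid under i.i.d. noise
    iv_hat = np.sum(ybar ** 2) / (psi2 * kn) - psi1 * rv / (2 * psi2 * kn ** 2)
    return iv_hat, omega2

# toy check: Brownian efficient price with unit daily variance plus i.i.d. noise
rng = np.random.default_rng(0)
n = 23400
x = np.cumsum(rng.normal(0, np.sqrt(1 / n), n))
y = x + rng.normal(0, 0.005, n)
print(preaveraged_iv(y))   # IV estimate should land near 1.0
```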
By: | Matteo Iacopini; Luca Rossini |
Abstract: | Over the last decade, big data have poured into econometrics, demanding new statistical methods for analysing high-dimensional data and complex non-linear relationships. A common approach for addressing dimensionality issues relies on the use of static graphical structures for extracting the most significant dependence interrelationships between the variables of interest. Recently, Bayesian nonparametric techniques have become popular for modelling complex phenomena in a flexible and efficient manner, but only a few attempts have been made in econometrics. In this paper, we provide an innovative Bayesian nonparametric (BNP) time-varying graphical framework for making inference in high-dimensional time series. We include a Bayesian nonparametric dependent prior specification on the matrix of coefficients and the covariance matrix by means of a Time-Series DPP as in Nieto-Barajas et al. (2012). Following Billio et al. (2019), our hierarchical prior overcomes over-parametrization and over-fitting issues by clustering the vector autoregressive (VAR) coefficients into groups and by shrinking the coefficients of each group toward a common location. Our BNP time-varying VAR model is based on a spike-and-slab construction coupled with a dependent Dirichlet Process prior (DPP) and allows us to: (i) infer time-varying Granger causality networks from time series; (ii) flexibly model and cluster non-zero time-varying coefficients; (iii) accommodate potential non-linearities. In order to assess the performance of the model, we study the merits of our approach by considering a well-known macroeconomic dataset. Moreover, we check the robustness of the method by comparing two alternative specifications, with Dirac and diffuse spike prior distributions.
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.02140&r=all |
By: | George Kapetanios (King’s College London); Laura Serlenga (University of Bari "Aldo Moro"); Yongcheol Shin (University of York) |
Abstract: | A large strand of the literature on panel data models has focused on explicitly modelling the cross-section dependence between panel units. Factor augmented approaches have been proposed to deal with this issue. Under a mild restriction on the correlation of the factor loadings, we show that factor augmented panel data models can be encompassed by a standard two-way fixed effect model. This highlights the importance of verifying whether the factor loadings are correlated, which, we argue, is an important hypothesis to test in practice. As a main contribution, we propose a Hausman-type test that determines the presence of correlated factor loadings in panels with interactive effects. Furthermore, we develop two nonparametric variance estimators that are robust to the presence of heteroscedasticity and autocorrelation, as well as slope heterogeneity. Via Monte Carlo simulations, we demonstrate the desirable size and power performance of the proposed test, even in small samples. Finally, we provide extensive empirical evidence in favour of uncorrelated factor loadings in panels with interactive effects.
Keywords: | Panel Data Models; Cross-sectional Error Dependence; Unobserved Heterogeneous Factors; Correlated Factor Loadings
JEL: | C13 C33 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:bai:series:series_wp_02-2019&r=all |
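The paper's test statistic and robust variance estimators are its own; as a rough reminder of the mechanics, here is the textbook Hausman form that such a test builds on, comparing an estimator that is efficient under the null (e.g. two-way fixed effects) with one that remains consistent under the alternative (e.g. an interactive-effects estimator). All numbers are made up.

```python
import numpy as np
from scipy import stats

def hausman(b_efficient, V_efficient, b_robust, V_robust):
    """Classic Hausman statistic: quadratic distance between two estimators."""
    d = b_robust - b_efficient
    V = V_robust - V_efficient          # variance of the contrast under the null
    stat = float(d @ np.linalg.pinv(V) @ d)  # pinv guards against near-singular V
    return stat, stats.chi2.sf(stat, df=len(d))

# usage with illustrative estimates of a two-coefficient model
b_fe, V_fe = np.array([0.52, -0.11]), np.diag([0.0009, 0.0016])
b_ie, V_ie = np.array([0.49, -0.08]), np.diag([0.0014, 0.0025])
print(hausman(b_fe, V_fe, b_ie, V_ie))   # small p-value: the two estimators disagree
```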
By: | Ivan Mendieta-Munoz; Mengheng Li |
Abstract: | We propose a multivariate simultaneous unobserved components framework to determine the two-sided interactions between structural trend and cycle innovations. We relax the standard assumption in unobserved components models that trends are only driven by permanent shocks and cycles are only driven by transitory shocks by considering the possible spillover effects between structural innovations. The direction of spillover has a structural interpretation, whose identification is achieved via heteroskedasticity. We provide identifiability conditions and develop an efficient Bayesian MCMC procedure for estimation. Empirical implementations for both Okun's law and the Phillips curve show evidence of significant spillovers between trend and cycle components. |
Keywords: | Unobserved components, Identification via heteroskedasticity, Trends and cycles, Permanent and transitory shocks, State space models, Spillover structural effects.
JEL: | C11 C32 E31 E32 E52
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:uta:papers:2019_06&r=all |
By: | David Frazier; Bonsoo Koo |
Abstract: | We propose the use of indirect inference estimation for inference in locally stationary models. We develop a local indirect inference algorithm and establish the asymptotic properties of the proposed estimator. Due to the nonparametric nature of the model under study, the resulting estimators display nonparametric rates of convergence and behavior. We validate our methodology via simulation studies within the confines of a locally stationary moving average model and a locally stationary multiplicative stochastic volatility model. An application of the methodology gives evidence of non-linear, time-varying volatility for monthly returns on the Fama-French portfolios.
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.01768&r=all |
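Indirect inference itself is easy to sketch: fit an auxiliary model to the data, then search for structural parameters whose simulated output reproduces the auxiliary estimate. The sketch below does this for a globally stationary MA(1) with an AR(1) auxiliary statistic; the paper's local (kernel-weighted) version for locally stationary models is not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ar1_coef(x):
    """Auxiliary statistic: least-squares lag-1 autoregression coefficient."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

def simulate_ma1(theta, n, rng):
    e = rng.normal(size=n + 1)
    return e[1:] + theta * e[:-1]

rng = np.random.default_rng(1)
data = simulate_ma1(0.6, 2000, rng)     # stand-in for observed data
beta_hat = ar1_coef(data)

def ii_loss(theta):
    # common random numbers: the same seed for every candidate theta
    sim = simulate_ma1(theta, 20000, np.random.default_rng(42))
    return (ar1_coef(sim) - beta_hat) ** 2

res = minimize_scalar(ii_loss, bounds=(0.0, 0.99), method="bounded")
print(res.x)   # should land near the true theta = 0.6
```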
By: | Arthur Charpentier (CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - ENSAE ParisTech - École Nationale de la Statistique et de l'Administration Économique - CNRS - Centre National de la Recherche Scientifique); Ndéné Ka; Stéphane Mussard (UNIMES - Université de Nîmes); Oumar Ndiaye (CHROME - Détection, évaluation, gestion des risques CHROniques et éMErgents - UNIMES - Université de Nîmes)
Abstract: | We propose an Aitken estimator for Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to compare the generalized least squares and the Gini regressions. A Gini-White test is proposed; it achieves better power than the usual White test when outlying observations contaminate the data.
Keywords: | Gini, heteroskedasticity, jackknife, U-statistics
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02131746&r=all |
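For readers unfamiliar with Gini regression, the baseline (non-Aitken) slope has a compact form due to Olkin and Yitzhaki: a ratio of covariances with the rank of the regressor. The sketch below implements that baseline; the A-Gini weighting proposed in the paper is not reproduced.

```python
import numpy as np
from scipy.stats import rankdata

def gini_slope(x, y):
    """Simple Gini regression slope: cov(y, rank(x)) / cov(x, rank(x))."""
    r = rankdata(x)
    return np.cov(y, r)[0, 1] / np.cov(x, r)[0, 1]

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 2.0 * x + rng.standard_t(df=3, size=500)   # heavy-tailed errors
print(gini_slope(x, y))   # rank-based slope, close to the true value 2
```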
By: | Nathaniel Tomasetti; Catherine Forbes; Anastasios Panagiotelis |
Abstract: | Variational Bayesian (VB) methods usually produce posterior inference in a time frame considerably smaller than traditional Markov Chain Monte Carlo approaches. Although the VB posterior is an approximation, it has been shown to produce good parameter estimates and predicted values when a rich class of approximating distributions is considered. In this paper we propose Updating VB (UVB), a recursive algorithm used to update a sequence of VB posterior approximations in an online setting, where each posterior update requires only the data observed since the previous update. An extension to the proposed algorithm, named UVB-IS, allows the user to trade accuracy for a substantial increase in computational speed through the use of importance sampling. The two methods and their properties are detailed in two separate simulation studies. Two empirical illustrations of the proposed UVB methods are provided, including one in which a Dirichlet process mixture model is repeatedly updated in the context of predicting the future behaviour of vehicles on a stretch of US Highway 101.
Keywords: | importance sampling, forecasting, clustering, Dirichlet process mixture, variational inference
JEL: | C11 G18 G39 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2019-13&r=all |
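The recursive principle behind UVB is easy to see in a conjugate toy model, where "yesterday's posterior becomes today's prior" holds exactly; UVB applies the same scheme with a variational approximation standing in for the exact posterior. A hedged illustration (normal mean, known variance; all names invented):

```python
import numpy as np

def update_normal(mu0, tau0_sq, batch, sigma_sq=1.0):
    """One online update of a N(mu0, tau0_sq) prior on a normal mean."""
    prec = 1 / tau0_sq + len(batch) / sigma_sq
    mu = (mu0 / tau0_sq + batch.sum() / sigma_sq) / prec
    return mu, 1 / prec

rng = np.random.default_rng(3)
mu, tau_sq = 0.0, 100.0                 # diffuse initial prior
for _ in range(50):                     # data arrive in small batches
    batch = rng.normal(1.5, 1.0, size=20)
    mu, tau_sq = update_normal(mu, tau_sq, batch)  # touches only the new batch
print(mu, tau_sq)                       # mean near 1.5, variance near 1/1000
```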
By: | Nathan Canen; Kyungchul Song |
Abstract: | Structural models that admit multiple reduced forms, such as game-theoretic models with multiple equilibria, pose challenges in practice, especially when parameters are set-identified and the identified set is large. In such cases, researchers often choose to focus on a particular subset of equilibria for counterfactual analysis, but this choice can be hard to justify. This paper proposes a refinement criterion for the identified set. Our criterion chooses a subset such that counterfactual predictions of outcomes are most stable against local perturbations of the reduced forms (e.g. the equilibrium selection rule). Our refinement has multiple appealing features, including an intuitive characterization, lower computational cost, and stable predictions. Focusing on moment inequality models, we propose bootstrap inference on the refinement and provide generic conditions under which the inference is uniformly asymptotically valid. We present and discuss results from our Monte Carlo study and an empirical application based on a model with top-coded data. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.00003&r=all |
By: | Andrew Bennett; Nathan Kallus; Tobias Schnabel |
Abstract: | Instrumental variable analysis is a powerful tool for estimating causal effects when randomization or full control of confounders is not possible. The application of standard methods such as 2SLS, GMM, and more recent variants is significantly impeded when the causal effects are complex, the instruments are high-dimensional, and/or the treatment is high-dimensional. In this paper, we propose the DeepGMM algorithm to overcome this. Our algorithm is based on a new variational reformulation of GMM with optimal inverse-covariance weighting that allows us to efficiently control very many moment conditions. We further develop practical techniques for optimization and model selection that make it particularly successful in practice. Our algorithm is also computationally tractable and can handle large-scale datasets. Numerical results show our algorithm matches the performance of the best tuned methods in standard settings and continues to work in high-dimensional settings where even recent methods break.
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.12495&r=all |
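As context for what DeepGMM generalizes: with instruments z, GMM solves the sample analogue of E[z(y - g(x))] = 0; for linear g and a fixed weight matrix this collapses to the familiar 2SLS estimator, sketched below on synthetic confounded data. DeepGMM replaces g with a neural network and chooses the moments adversarially, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
z = rng.normal(size=(n, 2))                    # two instruments
u = rng.normal(size=n)                         # unobserved confounder
x = z @ np.array([1.0, -0.5]) + u + rng.normal(size=n)
y = 2.0 * x + u + rng.normal(size=n)           # true structural coefficient: 2

# one-step GMM with weight matrix (Z'Z)^{-1}, i.e. 2SLS in this linear case
Z, X = z, x[:, None]
W = np.linalg.inv(Z.T @ Z)
A = X.T @ Z @ W @ Z.T
print(np.linalg.solve(A @ X, A @ y))   # near 2; naive OLS is biased upward by u
```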
By: | Mohamed Chikhi; Claude Diebolt; Tapas Mishra |
Abstract: | Stock price forecasting, a popular growth-enhancing exercise for investors, is inherently complex, thanks to the interplay of financial economic drivers which determine both the magnitude of memory and the extent of non-linearity within a system. In this paper, we accommodate both features within a single estimation framework to forecast stock prices and identify the nature of market efficiency commensurate with the proposed model. We combine a class of semiparametric autoregressive fractionally integrated moving average (SEMIFARMA) models with asymmetric exponential generalized autoregressive score (AEGAS) errors to design a SEMIFARMA-AEGAS framework, based on which the predictive performance of this model is tested against competing methods. Our conditional variance includes leverage effects, jumps and a fat-tailed, skewed distribution, each of which affects the magnitude of memory in a stock price system. A true forecast function is built and new insights into stock price forecasting are presented. We estimate several models using Skewed Student-t maximum likelihood and find that informational shocks have permanent effects on returns and that the SEMIFARMA-AEGAS model is appropriate for capturing volatility clustering for both negative (long Value-at-Risk) and positive (short Value-at-Risk) returns. We show that this model has better predictive performance than competing models over long and some short time horizons. The predictions from the SEMIFARMA-AEGAS model comfortably beat the random walk model. Our results have implications for market efficiency: the weak-form efficiency assumption of financial markets is violated for all stock price returns studied over a long period.
Keywords: | Stock price forecasting; SEMIFARMA model; AEGAS model; Skewed Student-t maximum likelihood; Asymmetry; Jumps. |
JEL: | C14 C58 C22 G17 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ulp:sbbeta:2019-24&r=all |
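The long-memory ingredient of the SEMIFARMA component has a concrete computational core: the fractional difference operator (1-L)^d, whose binomial weights follow a simple recursion. A minimal sketch of that filter (the semiparametric trend and AEGAS error dynamics of the paper are not reproduced):

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1-L)^d using the binomial recursion w_0 = 1, w_k = w_{k-1}(k-1-d)/k."""
    w = np.empty(len(x))
    w[0] = 1.0
    for k in range(1, len(x)):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # expanding-window convolution: y_t = sum_k w_k x_{t-k}
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(size=300))   # an I(1) series
print(frac_diff(x, 0.4)[:5])          # fractionally differenced: I(0.6) remains
```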
By: | Steven E. Pav |
Abstract: | We apply the procedure of Lee et al. to the problem of performing inference on the signal-noise ratio of the asset that displays the maximum sample Sharpe ratio over a set of possibly correlated assets. We find a multivariate analogue of the commonly used approximate standard error of the Sharpe ratio to use in this conditional estimation procedure. Testing indicates this procedure achieves the nominal type I error rate, and does not appear to suffer from non-normality of returns. The conditional estimation test has low power under the alternative where there is little spread in the signal-noise ratios of the assets, and high power under the alternative where a single asset has a high signal-noise ratio.
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.00573&r=all |
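The "commonly used approximate standard error" that the abstract refers to is, in its i.i.d. form, se(SR) ≈ sqrt((1 + SR²/2)/n). A hedged sketch of that univariate building block follows; the paper's contribution, valid inference conditional on having selected the maximum-Sharpe asset, is not reproduced.

```python
import numpy as np

def sharpe_and_se(returns):
    """Sample Sharpe ratio and its usual large-sample standard error (i.i.d. case)."""
    n = len(returns)
    sr = returns.mean() / returns.std(ddof=1)
    return sr, np.sqrt((1 + 0.5 * sr ** 2) / n)

rng = np.random.default_rng(6)
rets = rng.normal(0.01, 0.05, size=252)        # one year of daily returns
sr, se = sharpe_and_se(rets)
# naive interval; after selecting the best of many assets it is too narrow,
# which is exactly why the conditional procedure above is needed
print(sr, se, (sr - 1.96 * se, sr + 1.96 * se))
```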
By: | Clément de Chaisemartin; Jaime Ramirez-Cuellar
Abstract: | In paired experiments, the units included in the randomization, e.g. villages, are matched into pairs, and then one unit of each pair is randomly assigned to treatment. We conducted a survey of papers that used paired randomized experiments in economics. To estimate the treatment effect, researchers usually regress their outcome on a treatment indicator and pair fixed effects, and cluster standard errors at the unit-of-randomization level, namely, at the village level in our example. We show that the variance estimator in this regression may be severely downward biased: under constant treatment effect, its expectation is equal to 1/2 of the true variance. Using that variance estimator may lead researchers to substantially overreject the null of no treatment effect. Instead, we argue that researchers should cluster their standard errors at the pair level. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.00288&r=all |
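The paper's point is easy to reproduce in a toy Monte Carlo: with pair fixed effects, clustering at the unit level roughly halves the estimated variance and inflates rejection rates under a true null, while pair-level clustering stays close to nominal size. The design below (one observation per "village", zero treatment effect) is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_pairs, reject = 60, {"unit": 0, "pair": 0}
for _ in range(500):
    pair = np.repeat(np.arange(n_pairs), 2)           # pair id of each village
    treat = np.tile([0.0, 1.0], n_pairs)              # one treated village per pair
    y = rng.normal(size=2 * n_pairs) + rng.normal(size=n_pairs)[pair]  # null is true
    X = pd.get_dummies(pd.Series(pair), prefix="p", dtype=float)       # pair FE
    X.insert(0, "treat", treat)
    for level, groups in [("unit", np.arange(2 * n_pairs)), ("pair", pair)]:
        fit = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": groups})
        reject[level] += fit.pvalues["treat"] < 0.05
print({k: v / 500 for k, v in reject.items()})  # "unit" well above 0.05, "pair" near it
```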
By: | Laniado Rodas, Henry; Lillo Rodríguez, Rosa Elvira; Cabana Garceran del Vall, Elisa |
Abstract: | A robust estimator is proposed for the parameters that characterize the linear regression problem. It is based on the notion of shrinkage, often used in finance and previously studied for outlier detection in multivariate data. A thorough simulation study is conducted to investigate the efficiency with normal and heavy-tailed errors, the robustness under contamination, the computational times, and the affine equivariance and breakdown value of the regression estimator. Two classical data sets often used in the literature, and a real socio-economic data set on the Living Environment Deprivation of areas in Liverpool (UK), are studied. The results from the simulations and the real data examples show the advantages of the proposed robust estimator in regression.
Keywords: | Environmental Study; Outliers; Shrinkage Estimator; Robust Mahalanobis Distance; Robust Regression |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:28500&r=all |
By: | Rahul Singh; Maneesh Sahani; Arthur Gretton |
Abstract: | Instrumental variable regression is a strategy for learning causal relationships in observational data. If measurements of input X and output Y are confounded, the causal relationship can nonetheless be identified if an instrumental variable Z is available that influences X directly, but is conditionally independent of Y given X. The classic two-stage least squares algorithm (2SLS) simplifies the estimation problem by modeling all relationships as linear functions. We propose kernel instrumental variable regression (KIV), a nonparametric generalization of 2SLS, modeling relations among X, Y, and Z as nonlinear functions in reproducing kernel Hilbert spaces (RKHSs). We prove the consistency of KIV under mild assumptions, and derive conditions under which the convergence rate achieves the minimax optimal rate for unconfounded, one-stage RKHS regression. In doing so, we obtain an efficient ratio between training sample sizes used in the algorithm's first and second stages. In experiments, KIV outperforms state-of-the-art alternatives for nonparametric instrumental variable regression.
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.00232&r=all |
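KIV proper works with conditional mean embeddings in an RKHS; as a loose, hedged stand-in, the toy below runs a naive two-stage kernel ridge regression (X on Z, then Y on the first-stage fit). This heuristic does not share KIV's guarantees, and all tuning values are arbitrary.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(8)
n = 2000
z = rng.uniform(-3, 3, n)                        # instrument
u = rng.normal(size=n)                           # confounder
x = np.sin(z) + 0.5 * u + 0.3 * rng.normal(size=n)
y = np.abs(x) + u + 0.3 * rng.normal(size=n)     # nonlinear structural effect |x|

stage1 = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5).fit(z[:, None], x)
x_hat = stage1.predict(z[:, None])               # instrument-driven variation in x
stage2 = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5).fit(x_hat[:, None], y)

grid = np.linspace(-1, 1, 5)[:, None]
print(stage2.predict(grid))   # a smoothed approximation of the |x| shape, purged of u
```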
By: | Timmermann, Allan G; Zhu, Yinchu |
Abstract: | This paper develops new methods for testing equal predictive accuracy in panels of forecasts that exploit information in the time-series and cross-sectional dimensions of the data. Using a common factor setup, we establish conditions on cross-sectional dependencies in forecast errors which allow us to conduct inference and compare performance on a single cross-section of forecasts. We consider both unconditional tests of equal predictive accuracy and tests that condition on the realization of common factors, and show how to decompose forecast errors into exposures to common factors and an idiosyncratic variance component. Our tests are demonstrated in an empirical application that compares IMF forecasts of country-level real GDP growth and inflation to private-sector survey forecasts and forecasts from a simple time-series model.
Keywords: | Economic forecasting; GDP growth; Inflation forecasts; panel data |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:13746&r=all |
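The single-series building block that the paper extends to panels is the Diebold-Mariano test of equal predictive accuracy; a hedged sketch with squared-error loss and a simple truncated long-run variance follows (the paper's factor-based cross-sectional tests are not reproduced).

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2, h=1):
    """Diebold-Mariano statistic on squared-error loss for h-step forecasts."""
    d = e1 ** 2 - e2 ** 2                    # loss differential
    n, dbar = len(d), np.mean(e1 ** 2 - e2 ** 2)
    lrv = np.mean((d - dbar) ** 2)           # variance plus Bartlett-weighted covariances
    for lag in range(1, h):
        cov = np.mean((d[lag:] - dbar) * (d[:-lag] - dbar))
        lrv += 2 * (1 - lag / h) * cov
    stat = dbar / np.sqrt(lrv / n)
    return stat, 2 * stats.norm.sf(abs(stat))

rng = np.random.default_rng(9)
e1 = rng.normal(0, 1.0, 200)     # forecaster 1 errors
e2 = rng.normal(0, 1.2, 200)     # forecaster 2 errors (noisier)
print(dm_test(e1, e2))           # negative statistic: forecaster 1 more accurate
```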
By: | Tenglong Li; Kenneth A. Frank |
Abstract: | The internal validity of observational study is often subject to debate. In this study, we define the counterfactuals as the unobserved sample and intend to quantify its relationship with the null hypothesis statistical testing (NHST). We propose the probability of a causal inference is robust for internal validity, i.e., the PIV, as a robustness index of causal inference. Formally, the PIV is the probability of rejecting the null hypothesis again based on both the observed sample and the counterfactuals, provided the same null hypothesis has already been rejected based on the observed sample. Under either frequentist or Bayesian framework, one can bound the PIV of an inference based on his bounded belief about the counterfactuals, which is often needed when the unconfoundedness assumption is dubious. The PIV is equivalent to statistical power when the NHST is thought to be based on both the observed sample and the counterfactuals. We summarize the process of evaluating internal validity with the PIV into an eight-step procedure and illustrate it with an empirical example (i.e., Hong and Raudenbush (2005)). |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.08726&r=all |
By: | Bo Zhang; Jiti Gao; Guangming Pan |
Abstract: | This paper considers a p-dimensional time series model of the form x(t) − δ(t) = ψ(x(t−1) − δ(t−1)) + Σ^{1/2} y(t), 1 ≤ t ≤ T.
Keywords: | asymptotic normality, largest eigenvalue, linear process, near unit root test. |
JEL: | C21 C32 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2019-10&r=all |
By: | Hjalmarsson, Erik (Department of Economics, School of Business, Economics and Law, Göteborg University); Kiss, Tamás (Department of Economics, School of Business, Economics and Law, Göteborg University) |
Abstract: | The dividend-growth based test of return predictability, proposed by Cochrane [2008, Review of Financial Studies 21, 1533-1575], is similar to a likelihood-based test of the standard return-predictability model, treating the autoregressive parameter of the dividend-price ratio as known. In comparison to standard OLS-based inference, both tests achieve power gains from a strong use of the exact value postulated for the autoregressive parameter. When compared to the likelihood-based test, there are no power advantages for the dividend-growth based test. In common implementations, with the autoregressive parameter set equal to the corresponding OLS estimate, Cochrane's test also suffers from severe size distortions.
Keywords: | Predictive regressions; Present-value relationship; Stock-return predictability |
JEL: | C22 G12 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:hhs:gunwpe:0768&r=all |
By: | Sylvain Barde |
Abstract: | Comparison of macroeconomic simulation models, particularly agent-based models (ABMs), with more traditional approaches such as VAR and DSGE models has long been identified as an important yet problematic issue in the literature. This is because many such simulations have been developed following the Great Recession with a clear aim to inform policy, yet the methodological tools required for validating these models on empirical data are still in their infancy. The paper aims to address this issue by developing and testing a comparison framework for macroeconomic simulation models based on a multivariate extension of the Markov Information Criterion (MIC) originally developed in Barde (2017). The MIC is designed to measure the informational distance between a set of models and some empirical data by mapping the simulated data to the Markov transition matrix of the underlying data generating process, and is proven to perform optimally (i.e. the measurement is unbiased in expectation) for all models reducible to a Markov process. As a result, not only can the MIC provide an accurate measure of distance solely on the basis of simulated data, but it can do so for a very wide class of data generating processes. The paper first presents the strategies adopted to address the computational challenges that arise from extending the methodology to multivariate settings and validates the extension on VAR and DSGE models. The paper then carries out a comparison of the benchmark ABM of Caiani et al. (2016) and the DSGE framework of Smets and Wouters (2007), which, to our knowledge, is the first direct comparison between a macroeconomic ABM and a DSGE model.
Keywords: | Model comparison; Agent-based models; Validation methods |
JEL: | B41 C15 C52 C63 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:ukc:ukcedp:1908&r=all |
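Barde's MIC scores models with context-tree methods over long histories plus a bias correction, none of which is reproduced here. As a first-order caricature of the idea, the sketch below learns a one-lag Markov transition matrix from each model's simulated output and scores the empirical series by its average log-loss under each matrix.

```python
import numpy as np

def transition_matrix(s, k, alpha=1.0):
    """k-state transition matrix from a discretized series, Laplace-smoothed."""
    counts = np.full((k, k), alpha)
    for a, b in zip(s[:-1], s[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_score(s, P):
    """Average negative log-likelihood of observed transitions under P."""
    return -np.mean([np.log(P[a, b]) for a, b in zip(s[:-1], s[1:])])

rng = np.random.default_rng(10)
k = 8
sim_a = rng.normal(0, 1, 20000)                 # output of candidate model A
sim_b = np.cumsum(rng.normal(0, 0.1, 20000))    # output of candidate model B
emp = rng.normal(0, 1, 2000)                    # "empirical" data (generated like A)
edges = np.quantile(emp, np.linspace(0, 1, k + 1)[1:-1])   # common discretization
disc = lambda x: np.digitize(x, edges)
Pa, Pb = (transition_matrix(disc(s), k) for s in (sim_a, sim_b))
print(log_score(disc(emp), Pa), log_score(disc(emp), Pb))  # A should score lower
```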
By: | Xue Guo; Hu Zhang; Tianhai Tian |
Abstract: | The development of stock networks is an important approach to exploring the relationships between different stocks in the era of big data. Although a number of methods have been designed to construct stock correlation networks, it remains a challenge to balance the selection of prominent correlations against the connectivity of the network. To address this issue, we propose a new approach that selects essential edges in stock networks while maintaining the connectivity of the established networks. This approach uses different threshold values for choosing the edges connecting to a particular stock, rather than the single threshold value employed in existing asset-threshold methods. The innovation of our algorithm is the use of multiple distributions in a maximum likelihood estimator for selecting the threshold values, rather than the single-distribution estimator of existing methods. Using Chinese Shanghai security market data on 151 stocks, we develop a stock relationship network and analyze the topological properties of the developed network. Our results suggest that the proposed method develops networks that maintain appropriate connectivity compared with existing asset-threshold methods.
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.08088&r=all |
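A toy rendition of the per-stock threshold idea (the paper's maximum likelihood threshold selection over multiple candidate distributions is not reproduced): instead of one global correlation cutoff, each stock keeps its m strongest partners, so every node retains at least m incident edges.

```python
import numpy as np

def per_stock_network(returns, m=3):
    """Edge set keeping, for every stock, its m most correlated partners."""
    C = np.corrcoef(returns.T)
    np.fill_diagonal(C, -np.inf)             # exclude self-correlation
    edges = set()
    for i in range(C.shape[0]):
        for j in np.argsort(C[i])[-m:]:      # i's m strongest partners
            edges.add((min(i, j), max(i, j)))
    return edges

rng = np.random.default_rng(11)
common = rng.normal(size=(500, 1))                 # one market factor
rets = 0.4 * common + rng.normal(size=(500, 30))   # 30 synthetic stocks
net = per_stock_network(rets)
degrees = [sum(i in e for e in net) for i in range(30)]
print(len(net), "edges; minimum degree:", min(degrees))
```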
By: | Earo Wang; Dianne Cook; Rob J Hyndman |
Abstract: | Mining temporal data for information is often inhibited by a multitude of formats: irregular or multiple time intervals, point events that need aggregating, multiple observational units or repeated measurements on multiple individuals, and heterogeneous data types. On the other hand, the software supporting time series modeling and forecasting makes strict assumptions on the data to be provided, typically requiring a matrix of numeric data with implicit time indexes. Going from raw data to model-ready data is painful. This work presents a cohesive and conceptual framework for organizing and manipulating temporal data, which in turn flows into visualization, modeling and forecasting routines. Tidy data principles are extended to temporal data by: (1) mapping the semantics of a dataset into its physical layout; (2) including an explicitly declared index variable representing time; (3) incorporating a "key" comprising single or multiple variables to uniquely identify units over time. This tidy data representation most naturally supports thinking of operations on the data as building blocks, forming part of a "data pipeline" in time-based contexts. A sound data pipeline facilitates a fluent workflow for analyzing temporal data. The infrastructure of tidy temporal data has been implemented in the R package tsibble.
Keywords: | time series, data wrangling, tidy data, R, forecasting, data science, exploratory data analysis, data pipelines |
JEL: | C88 C81 C82 C22 C32 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2019-12&r=all |
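tsibble is an R package, so the snippet below is only a cross-language caricature of its "index + key" contract, written in pandas: an explicit time index, key columns identifying units, and uniqueness of each (key, index) pair enforced before data flow into the pipeline. It is not the tsibble API.

```python
import pandas as pd

raw = pd.DataFrame({
    "time":  pd.to_datetime(["2019-01-01", "2019-01-01", "2019-01-02", "2019-01-02"]),
    "stock": ["AAPL", "GOOG", "AAPL", "GOOG"],    # the "key"
    "price": [38.7, 52.7, 39.0, 53.1],
})
# the tidy-temporal contract: each (key, index) pair occurs exactly once
assert not raw.duplicated(subset=["stock", "time"]).any()
tidy = raw.set_index(["stock", "time"]).sort_index()
# downstream operations now respect the key, e.g. per-stock returns
print(tidy.groupby(level="stock")["price"].pct_change().dropna())
```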
By: | Bruno Perdigão |
Abstract: | In this paper I use prominent models as a laboratory to analyze the performance of different identification strategies and propose the introduction of new model-consistent restrictions to identify monetary policy shocks in SVARs. In particular, besides standard sign restrictions on interest rates and inflation, the inability of monetary policy to have real effects ten years after the shock is proposed as an additional identifying restriction. Evidence of the model consistency of this neutrality restriction is presented both for the canonical three-equation New Keynesian model and for the Smets and Wouters (2007) model. In a simple empirical application, I show that this restriction may be important for recovering the real effects of monetary policy.
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:bcb:wpaper:494&r=all |
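The standard machinery onto which a long-horizon neutrality restriction would be grafted is the rotation-and-accept algorithm for sign-identified SVARs; a hedged two-variable caricature follows (the reduced-form covariance is made up, and the paper's ten-year neutrality check on impulse responses is not implemented).

```python
import numpy as np

rng = np.random.default_rng(12)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])   # illustrative reduced-form covariance
P = np.linalg.cholesky(Sigma)

accepted = []
for _ in range(2000):
    Q, R = np.linalg.qr(rng.normal(size=(2, 2)))
    Q = Q * np.sign(np.diag(R))              # normalize so rotations are uniform
    B = P @ Q                                # candidate impact matrix
    # monetary policy shock (column 0): interest rate up, inflation down on impact
    if B[0, 0] > 0 and B[1, 0] < 0:
        accepted.append(B)
print(len(accepted), "accepted draws; one impact vector:", accepted[0][:, 0])
```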
By: | Grätz, Michael (Swedish Institute for Social Research, Stockholm University) |
Abstract: | The counterfactual approach to causality has become the dominant approach to understanding causality in contemporary social science research. Whilst most sociologists are aware that unobserved confounding variables may bias estimates of causal effects, the issues of overcontrol and collider bias have received comparatively less attention. In particular, widely used practices in research on intergenerational mobility require conditioning on variables that are endogenous to the process of the intergenerational transmission of advantage. I review four of these practices from the viewpoint of the counterfactual approach to causality and show that overcontrol and collider biases arise when these practices are implemented. I use data from the German Socio-Economic Panel Study (SOEP) to demonstrate the practical consequences of these biases for conclusions about intergenerational mobility. Future research on intergenerational mobility should reflect more on the possibility of bias introduced by conditioning on such variables.
Keywords: | causality; collider bias; directed acyclic graphs; intergenerational mobility; overcontrol bias |
Date: | 2019–06–05 |
URL: | http://d.repec.org/n?u=RePEc:hhs:sofiwp:2019_002&r=all |
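Collider bias, one of the two pitfalls discussed above, is easy to demonstrate by simulation: two independent causes of an outcome become spuriously associated once the analysis conditions on that outcome. The variable names below are illustrative, not the SOEP measures used in the paper.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 100_000
parent_edu = rng.normal(size=n)                   # independent of luck by design
luck = rng.normal(size=n)
income = parent_edu + luck + rng.normal(size=n)   # the collider

corr_all = np.corrcoef(parent_edu, luck)[0, 1]
high = income > np.quantile(income, 0.8)          # conditioning on the collider
corr_sel = np.corrcoef(parent_edu[high], luck[high])[0, 1]
print(f"unconditional: {corr_all:+.3f}  among high income: {corr_sel:+.3f}")
# selection induces a negative association between two independent causes
```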
By: | Rob Donnelly; Francisco R. Ruiz; David Blei; Susan Athey |
Abstract: | This paper proposes a method for estimating consumer preferences among discrete choices, where the consumer chooses at most one product in a category, but selects from multiple categories in parallel. The consumer's utility is additive in the different categories. Her preferences about product attributes as well as her price sensitivity vary across products and are in general correlated across products. We build on techniques from the machine learning literature on probabilistic models of matrix factorization, extending the methods to account for time-varying product attributes and products going out of stock. We evaluate the performance of the model using held-out data from weeks with price changes or out of stock products. We show that our model improves over traditional modeling approaches that consider each category in isolation. One source of the improvement is the ability of the model to accurately estimate heterogeneity in preferences (by pooling information across categories); another source of improvement is its ability to estimate the preferences of consumers who have rarely or never made a purchase in a given category in the training data. Using held-out data, we show that our model can accurately distinguish which consumers are most price sensitive to a given product. We consider counterfactuals such as personally targeted price discounts, showing that using a richer model such as the one we propose substantially increases the benefits of personalization in discounts. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.02635&r=all |
By: | Galvao, Ana Beatriz (University of Warwick); Mitchell, James (University of Warwick) |
Abstract: | Historical economic data are often uncertain due to sampling and non-sampling errors, but data uncertainty is rarely communicated quantitatively. An exception is the “fan charts” for historical GDP growth published by the Bank of England. We propose a generic loss function based approach to extract from these ex ante density forecasts a quantitative measure of unforecastable data uncertainty. We find that GDP data uncertainty in the UK rose sharply at the onset of the 2008/9 recession, and that data uncertainty is positively correlated with popular estimates of macroeconomic uncertainty.
Keywords: | data revisions ; macroeconomic uncertainty ; ex ante uncertainty ; ex post uncertainty ; density forecast calibration ; backcasts |
JEL: | C53 E32 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:wrk:wrkemf:24&r=all |
By: | Jef Boeckx; Maarten Dossche; Alessandro Galesi; Boris Hofmann; Gert Peersman
Abstract: | A growing empirical literature has shown, based on structural vector autoregressions (SVARs) identified through sign restrictions, that unconventional monetary policies implemented after the outbreak of the Great Financial Crisis (GFC) had expansionary macroeconomic effects. In a recent paper, Elbourne and Ji (2019) conclude that these studies fail to identify true unconventional monetary policy shocks in the euro area. In this note, we show that their findings are actually fully consistent with a successful identification of unconventional monetary policy shocks by the earlier studies and that their approach does not serve the purpose of evaluating identification strategies of SVARs. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:rug:rugwps:19/973&r=all |
By: | Román Salmerón Gómez; Catalina García García; José García Pérez
Abstract: | This paper analyzes the diagnosis of near multicollinearity in a multiple linear regression from auxiliary centered regressions (with intercept) and non-centered regressions (without intercept). From these auxiliary regressions, the centered and non-centered Variance Inflation Factors are calculated, respectively. An expression relating the two is also presented.
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.12293&r=all |
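Both diagnostics in the abstract reduce to auxiliary regressions, so they are simple to compute: the centered VIF uses an intercept and the ordinary R², while the non-centered VIF omits the intercept and uses the uncentered R². A sketch with invented data:

```python
import numpy as np

def vif(X, j, centered=True):
    """VIF of column j from its auxiliary regression on the remaining columns."""
    y, Z = X[:, j], np.delete(X, j, axis=1)
    if centered:
        Z = np.column_stack([np.ones(len(y)), Z])   # auxiliary regression with intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    tss = np.sum((y - y.mean()) ** 2) if centered else np.sum(y ** 2)
    return 1 / (resid @ resid / tss)                # VIF = 1 / (1 - R^2) = TSS / SSR

rng = np.random.default_rng(14)
x1 = rng.normal(5, 1, 200)
x2 = x1 + rng.normal(0, 0.3, 200)     # nearly collinear with x1
X = np.column_stack([x1, x2])
print(vif(X, 0, centered=True), vif(X, 0, centered=False))
```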