NEP: New Economics Papers on Econometrics
By: | Xu, Yongdeng (Cardiff Business School) |
Abstract: | This paper proposes a new class of multivariate volatility models that utilise high-frequency data. We call this model the DCC-HEAVY model, as its key ingredients are the Engle (2002) DCC model and the Shephard and Sheppard (2012) HEAVY model. We discuss the model's dynamics and highlight its differences from DCC-GARCH models. Specifically, in the DCC-HEAVY model the dynamics of the conditional variances are driven by the lagged realized variances, while the dynamics of the conditional correlations are driven by the lagged realized correlations. The new model removes the well-known asymptotic bias in DCC-GARCH model estimation and has more desirable asymptotic properties. We also derive a quasi-maximum likelihood estimator and provide closed-form formulas for multi-step forecasts. Empirical results suggest that the DCC-HEAVY model outperforms the DCC-GARCH model both in- and out-of-sample. (A schematic sketch of the variance recursion follows this entry.)
Keywords: | HEAVY model, Multivariate volatility, High-frequency data, Forecasting, Wishart distribution |
JEL: | C32 C58 G17 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:cdf:wpaper:2019/5&r=all |
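The variance dynamics described above can be illustrated with a minimal sketch. The recursion below is a univariate HEAVY-type variance equation driven by a lagged realized measure, in the spirit of Shephard and Sheppard (2012); the parameter values and the simulated realized measures are illustrative assumptions, not the paper's specification or estimates.

```python
import numpy as np

def heavy_variance_path(rm, omega, alpha, beta, h0):
    """HEAVY-type conditional variance recursion:
    h_t = omega + alpha * RM_{t-1} + beta * h_{t-1},
    where RM is a realized variance measure."""
    h = np.empty(len(rm))
    h[0] = h0
    for t in range(1, len(rm)):
        h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
    return h

# Toy example with simulated realized variances
rng = np.random.default_rng(0)
rm = rng.gamma(shape=2.0, scale=0.5, size=500)   # stand-in realized measures
h = heavy_variance_path(rm, omega=0.05, alpha=0.3, beta=0.6, h0=rm.mean())
print(h[:5])
```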
By: | Fan Yingying; Lv Jinchi; Sharifvaghefi Mahrad; Uematsu Yoshimasa |
Abstract: | Interpretability and stability are two important features desired in many contemporary big data applications arising in economics and finance. While the former is enjoyed to some extent by many existing forecasting approaches, the latter, in the sense of controlling the fraction of wrongly discovered features, which can greatly enhance interpretability, is still largely underdeveloped in econometric settings. To this end, we exploit the general framework of model-X knockoffs introduced recently in Candes, Fan, Janson and Lv (2018), which is unconventional for reproducible large-scale inference in that it is completely free of the use of p-values for significance testing, and we suggest a new method of intertwined probabilistic factors decoupling (IPAD) for stable, interpretable forecasting with knockoffs inference in high-dimensional models. The recipe of the method is to construct the knockoff variables by assuming a latent factor model, widely exploited in economics and finance, for the association structure of the covariates. Our method and work are distinct from the existing literature in several respects: we estimate the covariate distribution from data instead of assuming it is known when constructing the knockoff variables; our procedure does not require any sample splitting; we provide theoretical justification for asymptotic false discovery rate control; and we establish the theory for the power analysis. Several simulation examples and a real data analysis further demonstrate that the newly suggested method has appealing finite-sample performance, with the desired interpretability and stability, compared to some popularly used forecasting methods. (A rough sketch of the factor-based knockoff construction follows this entry.)
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:toh:dssraa:92&r=all |
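As a rough illustration of the factor-based knockoff recipe, the sketch below fits an approximate factor model by PCA and pairs the fitted common component with row-resampled idiosyncratic residuals to build knockoff covariates. The number of factors, the resampling scheme, and all names are assumptions for illustration; the paper's actual construction and its FDR guarantees are more involved.

```python
import numpy as np

def factor_knockoffs(X, n_factors, rng):
    """Construct knockoff covariates under an approximate factor model
    X ~ F Lambda' + E: fit F, Lambda by PCA, then pair the fitted common
    component with row-resampled residuals to mimic the covariate law."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :n_factors] * s[:n_factors]      # estimated factors
    L = Vt[:n_factors]                        # estimated loadings
    common = F @ L
    resid = Xc - common
    perm = rng.permutation(len(X))            # resample residual rows
    return common + resid[perm]

rng = np.random.default_rng(1)
n, p, k = 200, 50, 3
F = rng.normal(size=(n, k)); L = rng.normal(size=(k, p))
X = F @ L + rng.normal(scale=0.5, size=(n, p))
Xk = factor_knockoffs(X, n_factors=k, rng=rng)
print(Xk.shape)
```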
By: | Jianfei Cao; Connor Dowd |
Abstract: | The synthetic control method is often used for treatment effect estimation with panel data where only a few units are treated and a small number of post-treatment periods are available. Current estimation and inference procedures for synthetic control methods do not allow for the existence of spillover effects, which are plausible in many applications. In this paper, we consider estimation and inference for synthetic control methods while allowing for spillover effects. We propose estimators for both direct treatment effects and spillover effects and show that they are asymptotically unbiased. In addition, we propose an inferential procedure and show that it is asymptotically valid. Our estimation and inference procedures apply to cases with multiple treated units or periods, and where the underlying factor model is either stationary or cointegrated. In simulations, we confirm that the presence of spillovers renders current methods biased and distorts their test sizes, whereas our methods yield properly sized tests and retain reasonable power. We apply our method to a classic empirical example that investigates the effect of California's tobacco control program, as in Abadie et al. (2010), and find evidence of spillovers. (The basic synthetic control step is sketched after this entry.)
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.07343&r=all |
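For context, the sketch below shows the plain synthetic control step the paper builds on: nonnegative weights summing to one are chosen so that a weighted combination of donor units tracks the treated unit before treatment. It does not implement the paper's spillover-robust estimators; all data and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(y_treated_pre, Y_donors_pre):
    """Weights w >= 0 with sum(w) = 1 minimising the pre-treatment fit
    || y1 - Y0 w ||^2 (plain synthetic control, no spillover adjustment)."""
    n_donors = Y_donors_pre.shape[1]
    obj = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n_donors
    w0 = np.full(n_donors, 1.0 / n_donors)
    return minimize(obj, w0, bounds=bounds, constraints=cons).x

rng = np.random.default_rng(2)
Y0 = rng.normal(size=(30, 8)).cumsum(axis=0)   # 8 donors, 30 pre-periods
y1 = Y0 @ np.array([.4, .3, .3, 0, 0, 0, 0, 0]) + rng.normal(scale=.1, size=30)
print(sc_weights(y1, Y0).round(2))
```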
By: | Canova, Fabio; Matthes, Christian |
Abstract: | We consider a set of potentially misspecified structural models, geometrically combine their likelihood functions, and estimate the parameters using composite methods. Composite estimators may be preferable to likelihood-based estimators in terms of mean squared error. Composite models may be superior to individual models in the Kullback-Leibler sense. We describe Bayesian quasi-posterior computations and compare the approach to Bayesian model averaging, finite mixture methods, and robustness procedures. We robustify inference using the composite posterior distribution of the parameters and the pool of models. We provide estimates of the marginal propensity to consume and evaluate the role of technology shocks in output fluctuations. (A toy illustration of the geometric pooling follows this entry.)
Keywords: | Bayesian model averaging; composite likelihood; finite mixture; model misspecification |
JEL: | C13 C51 E17 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:13511&r=all |
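A toy illustration of the geometric combination: a geometric pool of likelihoods is equivalent to a weighted sum of log-likelihoods, maximized over the common parameter. The two candidate (mis)specified models, the weights, and the data below are assumptions chosen only to make the mechanics concrete.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
y = rng.standard_t(df=5, size=400)     # true DGP: Student-t, both models misspecified

def loglik_normal(mu, sigma):
    """Gaussian log-likelihood of the sample y."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y - mu) ** 2 / (2 * sigma**2))

def composite_negloglik(mu, w=(0.5, 0.5), scales=(1.0, 1.5)):
    # geometric combination of likelihoods = weighted sum of log-likelihoods
    return -sum(wk * loglik_normal(mu, sk) for wk, sk in zip(w, scales))

res = minimize_scalar(lambda m: composite_negloglik(m))
print("composite estimate of mu:", res.x)
```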
By: | Gao, Yan; Zhang, Xinyu; Wang, Shouyang; Chong, Terence Tai Leung; Zou, Guohua |
Abstract: | This paper develops a frequentist model averaging approach for threshold model specifications. The resulting estimator is proved to be asymptotically optimal in the sense of achieving the lowest possible squared errors. In particular, when combining estimators from threshold autoregressive models, this approach is also proved to be asymptotically optimal. Simulation results show that in situations where the existing model averaging approach is not applicable, our proposed approach performs well; in other situations, it performs marginally better than other commonly used model selection and model averaging methods. An empirical application of our approach to US unemployment data is given. (A simplified weight-selection sketch follows this entry.)
Keywords: | Asymptotic optimality, Generalized cross-validation, Model averaging, Threshold model
JEL: | C13 C52 |
Date: | 2017–11–28 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:92036&r=all |
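The sketch below shows a simplified version of the weight-selection step: simplex-constrained weights chosen to minimize a squared-error criterion across candidate threshold fits. The paper's criterion is of generalized cross-validation type; the in-sample squared error and the fixed candidate coefficients used here are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def averaging_weights(y, fitted):
    """Simplex-constrained weights minimising the squared error of the
    averaged fit (a simplified stand-in for a GCV-type criterion)."""
    m = fitted.shape[1]
    obj = lambda w: np.sum((y - fitted @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1 / m), bounds=[(0, 1)] * m, constraints=cons)
    return res.x

# Toy: average two threshold AR(1) fits with different threshold locations
rng = np.random.default_rng(4)
y = np.zeros(300)
for t in range(1, 300):                 # SETAR(1) data-generating process
    y[t] = (0.8 if y[t - 1] < 0 else 0.2) * y[t - 1] + rng.normal()
ylag, ycur = y[:-1], y[1:]
# candidate fits use the true coefficients for brevity; only the threshold differs
fits = np.column_stack([np.where(ylag < c, 0.8, 0.2) * ylag for c in (0.0, 0.5)])
print(averaging_weights(ycur, fits).round(2))
```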
By: | Hanene Ben Salah (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique, BESTMOD - Business and Economic Statistics MODeling - ISG - Institut Supérieur de Gestion de Tunis [Tunis] - Université de Tunis [Tunis], SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Mohamed Chaouch (UAEU - United Arab Emirates University); Ali Gannoun (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique); Christian De Peretti (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Abdelwahed Trabelsi (BESTMOD - Business and Economic Statistics MODeling - ISG - Institut Supérieur de Gestion de Tunis [Tunis] - Université de Tunis [Tunis]) |
Abstract: | The DownSide Risk (DSR) model for portfolio optimisation makes it possible to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and investors' perception of risk. The optimisation in this model deals with a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimisation procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier exhibits some discontinuity and is not very smooth. To overcome this, Athayde (2003) proposed a mean kernel estimation of the returns so as to create a smoother portfolio frontier. This technique provides an effect similar to the case in which continuous observations are available. In this paper, the Athayde model is reformulated and clarified. Then, taking advantage of the robustness of the median, another nonparametric approach, based on median kernel return estimation, is proposed in order to construct a portfolio frontier. A new version of Athayde's algorithm is exhibited. Finally, the properties of this improved portfolio frontier are studied and analysed on the French stock market. (The endogeneity of the downside co-moment matrix is illustrated after this entry.)
Keywords: | Downside risk, Kernel method, Mean nonparametric estimation, Median nonparametric estimation, Portfolio efficient frontier, Semi-variance
Date: | 2018–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-01300673&r=all |
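The sketch below illustrates why the downside-risk optimisation is hard: the semivariance-style co-moment matrix depends on which periods the portfolio underperforms the benchmark, and that set itself depends on the portfolio weights. The function and data are illustrative assumptions; the paper's kernel-smoothed frontier construction is not reproduced.

```python
import numpy as np

def downside_matrix(R, w, benchmark=0.0):
    """Semivariance-style co-moment matrix; it is endogenous because the set
    of below-benchmark periods depends on the portfolio weights w."""
    port = R @ w
    mask = port < benchmark            # periods where the portfolio underperforms
    D = R[mask] - benchmark
    return D.T @ D / len(R)

rng = np.random.default_rng(5)
R = rng.normal(0.0005, 0.01, size=(1000, 4))   # toy returns for 4 assets
w = np.full(4, 0.25)
print(np.diag(downside_matrix(R, w)))
```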
By: | Laséen, Stefan; Lindé, Jesper; Ratto, Marco |
Abstract: | In this paper, we study identification and misspecification problems in standard closed- and open-economy empirical New-Keynesian DSGE models used in monetary policy analysis. We find that model misspecification still appears to be a first-order issue in monetary DSGE models, and we argue that misspecification, rather than identification, is the problem for which moving from a classical to a Bayesian framework may offer the greatest benefit. We also argue that lack of identification should neither be ignored nor be assumed to affect all DSGE models. Fortunately, identification problems can be readily assessed on a case-by-case basis by applying recently developed pre-tests of identification.
Keywords: | Bayesian estimation; Closed economy; DSGE model; Maximum Likelihood Estimation; Monte-Carlo methods; Open economy |
JEL: | C13 C51 E30 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:13492&r=all |
By: | Adam Elbourne (CPB Netherlands Bureau for Economic Policy Analysis); Kan Ji (CPB Netherlands Bureau for Economic Policy Analysis) |
Abstract: | This research re-examines the findings of the existing literature on the effects of unconventional monetary policy. It concludes that existing estimates based on vector autoregressions combined with zero and sign restrictions do not successfully isolate unconventional monetary policy shocks from other shocks impacting the euro area economy. We show that altering existing published studies, either by imposing the incorrect assumption that expansionary monetary shocks shrink the ECB's balance sheet or by ignoring all information about the stance of monetary policy, results in the same shocks and, therefore, the same estimated responses of output and prices. As a consequence, it is implausible that the shocks previously identified in the literature are true unconventional monetary policy shocks. Since correctly isolating unconventional monetary policy shocks is a prerequisite for estimating their effects, the conclusions from previous vector autoregression models are unwarranted. We show this lack of identification for different specifications of the vector autoregression models and different sample periods. (A minimal sign-restriction sketch follows this entry.)
JEL: | C32 E52 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:cpb:discus:391&r=all |
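For readers unfamiliar with the identification scheme being criticised, the sketch below shows the standard sign-restriction step: candidate impact matrices are generated by rotating a Cholesky factor with random orthogonal matrices, and draws are kept only if the implied impact responses carry the required signs. The covariance matrix and sign checks are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def sign_restricted_impacts(Sigma, checks, n_draws=1000, rng=None):
    """Draw candidate impact matrices B = chol(Sigma) @ Q with Q random
    orthogonal; keep draws whose first column satisfies the sign checks."""
    rng = rng or np.random.default_rng()
    P = np.linalg.cholesky(Sigma)
    kept = []
    for _ in range(n_draws):
        A = rng.normal(size=Sigma.shape)
        Q, R = np.linalg.qr(A)
        Q = Q @ np.diag(np.sign(np.diag(R)))   # normalise the rotation
        B = P @ Q
        if all(chk(B[:, 0]) for chk in checks):  # restrictions on shock 1
            kept.append(B)
    return kept

Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
# e.g. an 'expansionary' shock assumed to raise both variables on impact
checks = [lambda b: b[0] > 0, lambda b: b[1] > 0]
print(len(sign_restricted_impacts(Sigma, checks, rng=np.random.default_rng(6))))
```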
By: | Guy Tchuente |
Abstract: | The identification of network effects is based on either group size variation, the structure of the network, or the relative position in the network. I provide easy-to-verify necessary conditions for identification of undirected network models based on the number of distinct eigenvalues of the adjacency matrix. Identification of network effects is possible, although in many empirical situations existing identification strategies may require the use of many instruments, or instruments that are strongly correlated with each other, which may lead to weak identification or many-instruments bias. This paper proposes regularized versions of the two-stage least squares (2SLS) estimator as a solution to these problems. The proposed estimators are consistent and asymptotically normal. A Monte Carlo study illustrates the properties of the regularized estimators. An empirical application, assessing a local government tax competition model, shows the empirical relevance of using regularization methods. (The eigenvalue condition is illustrated after this entry.)
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.06143&r=all |
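The necessary condition mentioned above is easy to verify numerically: count the distinct eigenvalues of the adjacency matrix. The sketch below does this for two toy graphs; the tolerance and the example graphs are assumptions for illustration.

```python
import numpy as np

def distinct_eigenvalues(A, tol=1e-8):
    """Count distinct eigenvalues of a symmetric adjacency matrix, the
    quantity on which the identification condition is based."""
    vals = np.sort(np.linalg.eigvalsh(A))
    return 1 + int(np.sum(np.diff(vals) > tol))

# Toy: a 4-cycle (few distinct eigenvalues) vs. an irregular graph
cycle4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                   [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
irregular = np.array([[0, 1, 1, 0], [1, 0, 1, 0],
                      [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
print(distinct_eigenvalues(cycle4), distinct_eigenvalues(irregular))
```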
By: | Cristina Gualdani; Shruti Sinha |
Abstract: | We consider the one-to-one matching models with transfers of Choo and Siow (2006) and Galichon and Salanié (2015). When the analyst has data on one large market only, we study identification of the systematic components of the agents' preferences without imposing parametric restrictions on the probability distribution of the latent variables. Specifically, we provide a tractable characterisation of the region of parameter values that exhausts all the implications of the model and data (the sharp identified set) under various classes of nonparametric distributional assumptions on the unobserved terms. We discuss a way to conduct inference on the sharp identified set and conclude with Monte Carlo simulations.
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.05610&r=all |
By: | Joshua C. C. Chan |
Abstract: | Bayesian vector autoregressions are widely used for macroeconomic forecasting and structural analysis. Until recently, however, most empirical work considered only small systems with a few variables, due to concerns about parameter proliferation and computational limitations. We first review a variety of shrinkage priors that are useful for tackling the parameter proliferation problem in large Bayesian VARs, followed by a detailed discussion of efficient sampling methods for overcoming the computational problem. We then give an overview of some recent models that incorporate various important features into conventional large Bayesian VARs, including stochastic volatility and non-Gaussian and serially correlated errors. Efficient estimation methods for fitting these more flexible models are also discussed. These models and methods are illustrated using a forecasting exercise on a real-time macroeconomic dataset. The corresponding MATLAB code is also provided. (A Minnesota-style prior sketch follows this entry.)
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2019-19&r=all |
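As one concrete example of the shrinkage priors reviewed, the sketch below builds prior variances in the Minnesota tradition: coefficients are shrunk harder at longer lags, and cross-variable terms are scaled by relative residual variances. The exact hyperparameterisation varies across the literature; this simple variant, with no extra cross-variable tightness parameter, is an assumption for illustration.

```python
import numpy as np

def minnesota_prior_variances(sigma2, p, lam=0.2):
    """Prior variances for VAR coefficients in the Minnesota tradition:
    var of the lag-l coefficient of variable j in equation i is
    (lam / l)^2 * sigma2[i] / sigma2[j].  sigma2 holds residual variances
    from univariate AR fits; lam controls overall tightness."""
    n = len(sigma2)
    V = np.zeros((n, n * p))
    for lag in range(1, p + 1):
        for i in range(n):
            for j in range(n):
                V[i, (lag - 1) * n + j] = (lam / lag) ** 2 * sigma2[i] / sigma2[j]
    return V

sigma2 = np.array([1.0, 0.5, 2.0])     # toy residual variances for 3 variables
print(minnesota_prior_variances(sigma2, p=2).round(3))
```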
By: | Chambers, Marcus J; Taylor, AM Robert |
Abstract: | We consider a model of deterministic one-time parameter change in a continuous-time autoregressive model around a deterministic trend function. The exact discrete-time analogue model is detailed and compared to corresponding parameter change models adopted in the discrete-time literature. The relationships between the parameters in the continuous-time model and the discrete-time analogue model are also explored. Our results show that the discrete-time models used in the literature can be justified by the corresponding continuous-time model, with only a minor modification needed for the (most likely) case where the changepoint does not coincide with one of the discrete-time observation points. The implications of our results for a number of extant discrete-time models and testing procedures are discussed. (The exact discretisation mapping is sketched after this entry.)
Date: | 2019–02–14 |
URL: | http://d.repec.org/n?u=RePEc:esy:uefcwp:24072&r=all |
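The exact discrete-time analogue has a clean closed form in the simplest case. Assuming a continuous-time AR(1) (Ornstein-Uhlenbeck) process without trend, sampling at interval h gives an AR(1) with coefficient exp(a h) and a known innovation variance, so a one-time change in the continuous-time parameter maps into a one-time change in the discrete-time parameters. This is a stylised special case of the paper's setting, not its full model.

```python
import numpy as np

def exact_discrete_ar1(a, sigma, h):
    """Exact discrete-time analogue of the continuous-time AR(1)
    dX_t = a X_t dt + sigma dW_t observed at interval h:
    X_{t+h} = phi X_t + eps,  eps ~ N(0, omega2)."""
    phi = np.exp(a * h)
    omega2 = sigma**2 * (np.exp(2 * a * h) - 1) / (2 * a)
    return phi, omega2

# A change in a at some changepoint maps into a change in (phi, omega2)
print(exact_discrete_ar1(a=-0.5, sigma=1.0, h=1.0))   # pre-break regime
print(exact_discrete_ar1(a=-1.5, sigma=1.0, h=1.0))   # post-break regime
```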
By: | LeSage, James P.; Fischer, Manfred M. |
Abstract: | Past focus in the panel gravity literature has been on multidimensional fixed effects specifications in an effort to accommodate heterogeneity. After introducing conventional multidimensional fixed effects, we find evidence of cross-sectional dependence in flows. We propose a simultaneous dependence gravity model that allows for network dependence in flows, along with computationally efficient Markov Chain Monte Carlo estimation methods that produce a Monte Carlo integration estimate of log-marginal likelihood useful for model comparison. Application of the model to a panel of trade flows points to network spillover effects, suggesting the presence of network dependence and biased estimates from conventional trade flow specifications. The most important sources of network dependence were found to be membership in trade organizations, historical colonial ties, common currency and spatial proximity of countries. |
Keywords: | origin-destination panel data flows, cross-sectional dependence, log-marginal likelihood, gravity models of trade, sociocultural distance, convex combinations of interaction matrices
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wus046:6828&r=all |
By: | Riani, Marco; Corbellini, Aldo; Atkinson, Anthony C. |
Abstract: | Misinvoicing is a major tool in fraud, including money laundering. We develop a method for detecting the patterns of outliers that indicate systematic mispricing. As the data only become available year by year, we develop a combination of very robust regression and the use of 'cleaned' prior information from earlier years, which leads to early and sharp indication of potentially fraudulent activity that can be passed to legal agencies to institute prosecution. As an example, we use yearly imports of a specific seafood into the European Union. This is only one of over one million annual data sets, each of which can currently contain up to 336 observations. We provide a solution to the resulting big data problem, which requires analysis with a minimum of human intervention. (A simplified outlier-flagging sketch follows this entry.)
Keywords: | big data; data cleaning; forward search; MM estimation; misinvoicing; money laundering; seafood; timeliness |
JEL: | C1 F3 G3 |
Date: | 2018–08–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:87685&r=all |
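A simplified stand-in for the detection step: fit a robust regression of declared value on quantity and flag transactions whose standardized residuals are extreme. The sketch uses scikit-learn's Huber regression rather than the paper's forward search and MM estimation, the data are simulated, and all thresholds and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def flag_suspect_prices(weight, value, z_cut=3.0):
    """Robust fit of declared value on quantity; flag transactions with
    large standardized residuals as potential misinvoicing.  (A stand-in
    for the paper's very robust / forward-search machinery.)"""
    X = weight.reshape(-1, 1)
    fit = HuberRegressor().fit(X, value)
    resid = value - fit.predict(X)
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD scale
    return np.abs(resid) / scale > z_cut

rng = np.random.default_rng(7)
w = rng.uniform(1, 100, size=300)
v = 12.0 * w + rng.normal(scale=5.0, size=300)
v[:5] = 2.0 * w[:5]                    # a few under-invoiced consignments
print(np.where(flag_suspect_prices(w, v))[0])
```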
By: | Bettendorf, Timo; Heinlein, Reinhold |
Abstract: | This paper presents a new approach for modelling the connectedness between asset returns. We adapt the measure of Diebold and Yılmaz (2014), which is based on the forecast error variance decomposition of a VAR model. However, their connectedness measure hinges on critical assumptions about the variance-covariance matrix of the error terms. We propose a more agnostic empirical approach, based on a machine learning algorithm, to identify the contemporaneous structure. In a Monte Carlo study we compare the different connectedness measures and discuss their advantages and disadvantages. In an empirical application we analyse the connectedness between the G10 currencies. Our results suggest that the US dollar as well as the Norwegian krone are the most independent currencies in our sample. By contrast, the Swiss franc and the New Zealand dollar have a negligible impact on other currencies. Moreover, a cluster analysis suggests that the currencies can be divided into three groups, which we classify as commodity currencies, European currencies, and safe haven/carry trade financing currencies. (A basic FEVD-based connectedness computation is sketched after this entry.)
Keywords: | connectedness, networks, graph theory, vector autoregression, exchange rates
JEL: | C32 C51 F31 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:062019&r=all |
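The Diebold-Yılmaz starting point can be computed with standard tools: fit a VAR, take the forecast error variance decomposition, and read off the off-diagonal (cross-variable) shares. The sketch below uses statsmodels with a recursive (Cholesky) identification, which is precisely the kind of ordering assumption the paper's machine-learning identification is designed to relax; the data are simulated.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Toy two-currency return series from a mildly interdependent VAR(1)
rng = np.random.default_rng(8)
e = rng.normal(size=(500, 2))
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = np.array([[0.3, 0.2], [0.1, 0.4]]) @ x[t - 1] + e[t]

res = VAR(pd.DataFrame(x, columns=["USD", "NOK"])).fit(1)
fevd = res.fevd(10).decomp[:, -1, :]   # 10-step-ahead decomposition matrix
# Off-diagonal shares = cross-variable connectedness (Cholesky ordering here)
total = 100 * (fevd.sum() - np.trace(fevd)) / fevd.shape[0]
print("total connectedness index: %.1f%%" % total)
```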
By: | Shahbaz, Muhammad; Omay, Tolga; Roubaud, David |
Abstract: | This study proposes a flexible unit root test that detects sharp and smooth breaks simultaneously. Most unit root tests are not general enough to capture different dynamics such as smooth structural breaks, sharp structural breaks, state-dependent nonlinearity, or a mixture of them. Considering all these data structures within one unit root test is therefore important, and results produced with such a test do not suffer from misspecification problems. We apply traditionally used structural-break unit root tests and the newly proposed test to nine countries' historical renewable energy consumption covering 1800–2008. The newly proposed test performs better than the traditional ones, because renewable energy consumption has both sharp and smooth breaks in its data-generating process, which no traditional unit root test captures simultaneously. The empirical results indicate that renewable energy consumption follows a stationary process in the presence of sharp and smooth structural breaks. (An illustrative regression sketch follows this entry.)
Keywords: | Unit Root Testing, Sharp and Smooth Break, Renewable Energy Consumption |
JEL: | Q4 |
Date: | 2019–02–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:92176&r=all |
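A stylised version of the idea: augment an ADF-type regression with both a level-shift dummy (sharp break) and a low-frequency Fourier pair (smooth break), then examine the t-statistic on the lagged level. This is not the paper's test; the break fraction, the single Fourier frequency, and the absence of lag augmentation are simplifying assumptions, and critical values would need to be simulated.

```python
import numpy as np
import statsmodels.api as sm

def flexible_adf_tstat(y, k_fourier=1, break_frac=0.5):
    """ADF-style regression with a level-shift dummy (sharp break) and a
    single-frequency Fourier pair (smooth break); returns the t-statistic
    on the lagged level.  Critical values are non-standard and must come
    from simulation, as in the structural-break unit root literature."""
    T = len(y)
    t = np.arange(T)
    dummy = (t >= int(break_frac * T)).astype(float)
    sin = np.sin(2 * np.pi * k_fourier * t / T)
    cos = np.cos(2 * np.pi * k_fourier * t / T)
    dy, ylag = np.diff(y), y[:-1]
    X = sm.add_constant(np.column_stack([ylag, dummy[1:], sin[1:], cos[1:]]))
    fit = sm.OLS(dy, X).fit()
    return fit.tvalues[1]              # t-stat on the lagged level

rng = np.random.default_rng(9)
y = np.cumsum(rng.normal(size=300))    # toy random walk
print(flexible_adf_tstat(y))
```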