
on Econometrics 
By:  Claudio Morana 
Abstract:  This paper proposes a three-step estimation strategy for dynamic conditional correlation models. In the first step, conditional variances for individual and aggregate series are estimated by means of QML, equation by equation. In the second step, conditional covariances are estimated by means of the polarization identity, and conditional correlations are estimated by their usual normalization. In the third step, the two-step conditional covariance and correlation matrices are regularized by means of a new nonlinear shrinkage procedure and used as the starting value for the maximization of the joint likelihood of the model. This yields the final, third-step smoothed estimate of the conditional covariance and correlation matrices. Due to its scant computational burden, the proposed strategy allows the estimation of high-dimensional conditional covariance and correlation matrices. An application to global minimum variance portfolio selection is also provided, confirming that the proposed semiparametric DCC (SP-DCC) model is a simple and viable alternative to existing DCC models. 
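The second step is concrete enough to sketch. Below is a minimal illustration of recovering a conditional covariance from univariate conditional variance estimates via the polarization identity, and normalizing it to a correlation; the variance values are made up and the function names are illustrative, not from the paper.

```python
import math

def polarization_covariance(var_sum, var_diff):
    """Polarization identity: Cov(x, y) = (Var(x + y) - Var(x - y)) / 4."""
    return (var_sum - var_diff) / 4.0

def conditional_correlation(var_x, var_y, cov_xy):
    """Usual normalization: rho = cov / sqrt(var_x * var_y)."""
    return cov_xy / math.sqrt(var_x * var_y)

# Toy conditional variances at one date t (individual and aggregate series):
h_x, h_y = 1.5, 2.0          # Var(x_t), Var(y_t)
h_sum, h_diff = 5.1, 1.9     # Var(x_t + y_t), Var(x_t - y_t)
h_xy = polarization_covariance(h_sum, h_diff)   # (5.1 - 1.9) / 4 = 0.8
rho = conditional_correlation(h_x, h_y, h_xy)
```

Repeating this for every pair and date fills the off-diagonal of the two-step covariance matrix, which the third step then regularizes.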
Keywords:  Conditional covariance, Dynamic conditional correlation model, Semiparametric dynamic conditional correlation model, Multivariate GARCH. 
JEL:  C32 C58 
Date:  2018–06–04 
URL:  http://d.repec.org/n?u=RePEc:mib:wpaper:382&r=ecm 
By:  Joshua C.C. Chan; Eric Eisenstat; Chenghan Hou; Gary Koop 
Abstract:  Adding multivariate stochastic volatility of a flexible form to large Vector Autoregressions (VARs) involving over a hundred variables has proved challenging due to computational considerations and overparameterization concerns. The existing literature either works with homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods have similar properties to existing alternatives used with small data sets, in that they estimate the multivariate stochastic volatility in a flexible and realistic manner and forecast comparably. In very high dimensional VARs, they are computationally feasible where other approaches involving stochastic volatility are not, and they produce forecasts superior to those of natural conjugate prior homoskedastic VARs. 
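The combination step can be sketched in a few lines. This toy illustrates weighting submodel forecasts by exponentiated predictive scores, one prediction-pool-style rule; the forecasts, scores, and weighting rule are all illustrative stand-ins, since the paper discusses several weighting schemes.

```python
import math

def pool_forecasts(forecasts, log_scores):
    """Combine submodel forecasts with weights proportional to
    exp(log predictive score), normalized to sum to one."""
    m = max(log_scores)                       # stabilize the exponentials
    w = [math.exp(s - m) for s in log_scores]
    total = sum(w)
    w = [wi / total for wi in w]
    return sum(wi * f for wi, f in zip(w, forecasts)), w

# Three toy submodel forecasts of the same variable, with toy log scores:
combined, weights = pool_forecasts([1.0, 2.0, 4.0], [-1.2, -0.7, -2.3])
```

The submodel with the best score (here the second) gets the largest weight, and the pooled forecast stays inside the range of the individual forecasts.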
Keywords:  Bayesian, large VAR, composite likelihood, prediction pools, stochastic volatility 
JEL:  C11 C32 C53 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201826&r=ecm 
By:  Joshua C.C. Chan; Liana Jacobi; Dan Zhu 
Abstract:  Vector autoregressions combined with Minnesota-type priors are widely used for macroeconomic forecasting. The fact that strong but sensible priors can substantially improve forecast performance implies that VAR forecasts are sensitive to prior hyperparameters. But the nature of this sensitivity is seldom investigated. We develop a general method based on Automatic Differentiation to systematically compute the sensitivities of forecasts, both point and interval, with respect to any prior hyperparameter. In a forecasting exercise using US data, we find that forecasts are relatively sensitive to the strength of shrinkage for the VAR coefficients, but they are not much affected by the prior mean of the error covariance matrix or the strength of shrinkage for the intercepts. 
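The quantity being computed is easy to illustrate. The paper uses Automatic Differentiation; as a rough stand-in, this sketch computes the same kind of object numerically: the derivative of a shrinkage forecast with respect to a prior hyperparameter. The ridge-style posterior mean below is a toy scalar model, not the paper's VAR.

```python
def shrinkage_forecast(lam, xy_pairs, x_new):
    """Toy posterior-mean slope b = sum(x*y) / (sum(x^2) + lam),
    shrunk toward zero by the hyperparameter lam; forecast is b * x_new."""
    sxy = sum(x * y for x, y in xy_pairs)
    sxx = sum(x * x for x, _ in xy_pairs)
    return sxy / (sxx + lam) * x_new

def sensitivity(lam, xy_pairs, x_new, h=1e-6):
    """Central finite difference d(forecast)/d(lam) as an AD stand-in."""
    up = shrinkage_forecast(lam + h, xy_pairs, x_new)
    dn = shrinkage_forecast(lam - h, xy_pairs, x_new)
    return (up - dn) / (2 * h)

data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2)]
grad = sensitivity(1.0, data, x_new=2.0)  # negative: more shrinkage lowers the forecast
```

Here the forecast is 29/(14 + lam), so the exact derivative at lam = 1 is -29/225, which the finite difference recovers.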
Keywords:  vector autoregression, automatic differentiation, interval forecasts 
JEL:  C11 C53 E37 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201825&r=ecm 
By:  Hsu, Yu-Chin; Huber, Martin; Lee, Ying-Ying; Pipoz, Layal 
Abstract:  This paper proposes semi- and nonparametric methods for disentangling the total causal effect of a continuous treatment on an outcome variable into its natural direct effect and the indirect effect that operates through one or several intermediate variables or mediators. Our approach is based on weighting observations by the inverse of two versions of the generalized propensity score (GPS), namely the conditional density of treatment either given observed covariates or given covariates and the mediator. Our effect estimators are shown to be asymptotically normal when the GPS is estimated by either a parametric or a nonparametric kernel-based method. We also provide a simulation study and an application to the Job Corps program. 
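A stylized sketch of the first ingredient, inverse weighting by a kernel estimate of the GPS f(t | x); the paper's estimands also use the second GPS version, f(t | x, m), but the mechanics are the same. Bandwidths, data, and function names here are illustrative, not the paper's.

```python
import math

def gauss(u):
    """Standard normal kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def gps_density(t, x, T, X, ht=0.5, hx=0.5):
    """Kernel estimate of the conditional density f(t | x)."""
    num = sum(gauss((t - Ti) / ht) / ht * gauss((x - Xi) / hx)
              for Ti, Xi in zip(T, X))
    den = sum(gauss((x - Xi) / hx) for Xi in X)
    return num / den

def ipw_weights(T, X):
    """Inverse-GPS weights 1 / f(T_i | X_i), one per observation."""
    return [1.0 / gps_density(Ti, Xi, T, X) for Ti, Xi in zip(T, X)]

# Toy continuous treatment T and covariate X:
T = [0.0, 0.5, 1.0, 1.5, 2.0]
X = [0.1, 0.4, 0.9, 1.6, 2.1]
weights = ipw_weights(T, X)
```

Observations whose treatment level is unlikely given their covariates receive large weights, which is what lets the weighted averages identify the effects.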
Keywords:  Mediation; direct and indirect effects; continuous treatment; weighting; generalized propensity score 
JEL:  C21 
Date:  2018–06–05 
URL:  http://d.repec.org/n?u=RePEc:fri:fribow:fribow00495&r=ecm 
By:  Shota Gugushvili; Frank van der Meulen; Moritz Schauer; Peter Spreij 
Abstract:  Given discrete time observations over a fixed time interval, we study a nonparametric Bayesian approach to estimation of the volatility coefficient of a stochastic differential equation. We postulate a histogram-type prior on the volatility with piecewise constant realisations on bins forming a partition of the time interval. The values on the bins are assigned an inverse Gamma Markov chain (IGMC) prior. Posterior inference is straightforward to implement via Gibbs sampling, as the full conditional distributions are available explicitly and turn out to be inverse Gamma. We also discuss in detail the hyperparameter selection for our method. Our nonparametric Bayesian approach leads to good practical results in representative simulation examples. Finally, we apply it to a classical data set in change-point analysis: weekly closings of the Dow Jones industrial averages. 
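The inverse-Gamma conjugacy that makes the Gibbs sampler explicit can be shown on a single bin. This sketch uses an independent InvGamma prior on one bin's squared volatility with mean-zero increments; the paper instead chains the bins through the IGMC prior, so this is only the building block, with made-up numbers.

```python
def ig_posterior(a, b, increments):
    """Conjugate update: with x_i ~ N(0, theta) and theta ~ InvGamma(a, b),
    the posterior is InvGamma(a + n/2, b + sum(x_i^2)/2)."""
    n = len(increments)
    s = sum(x * x for x in increments)
    return a + n / 2.0, b + s / 2.0

def ig_mean(a, b):
    """Mean of InvGamma(a, b), defined for a > 1."""
    return b / (a - 1.0)

# Toy bin of diffusion increments, prior InvGamma(2, 1):
a_post, b_post = ig_posterior(2.0, 1.0, [0.1, -0.2, 0.15, 0.05])
theta_hat = ig_mean(a_post, b_post)   # posterior-mean squared volatility for the bin
```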
Date:  2018–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1801.09956&r=ecm 
By:  Russell Davidson; Andrea Monticini (Università Cattolica del Sacro Cuore; Dipartimento di Economia e Finanza, Università Cattolica del Sacro Cuore) 
Abstract:  The fast double bootstrap (FDB) can improve considerably on the single bootstrap when the bootstrapped statistic is approximately independent of the bootstrap DGP, because such independence is one of the approximations that underlie the FDB. In this paper, use is made of a discrete formulation of bootstrapping in order to develop a conditional version of the FDB, which makes use of the joint distribution of a statistic and its bootstrap counterpart, rather than the joint distribution of the statistic and the full distribution of its bootstrap counterpart, which is available only by means of a simulation as costly as the full double bootstrap. Simulation evidence shows that the conditional FDB can greatly improve on the performance of the FDB when the statistic and the bootstrap DGP are far from independent, while giving similar results in cases of near independence. 
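For context, this is a sketch of the standard FDB p-value that the paper's conditional version refines: tau is the observed statistic, tau1[j] the j-th first-level bootstrap statistic, and tau2[j] its single second-level counterpart. The crude order-statistic quantile and the toy numbers are illustrative only.

```python
def quantile(xs, q):
    """Simple order-statistic quantile of the sample xs."""
    ys = sorted(xs)
    k = min(len(ys) - 1, max(0, int(q * len(ys))))
    return ys[k]

def fdb_pvalue(tau, tau1, tau2):
    """Fast-double-bootstrap p-value for an upper-tail test."""
    B = len(tau1)
    p1 = sum(t > tau for t in tau1) / B   # single-bootstrap p-value
    q = quantile(tau2, 1.0 - p1)          # second-level (1 - p1) quantile
    return sum(t > q for t in tau1) / B   # FDB-adjusted p-value

# Toy statistics (a real application would use hundreds of draws):
p = fdb_pvalue(2.0, [0.5, 1.5, 2.5, 3.0], [0.4, 1.4, 2.4, 2.9])
```

The adjustment replaces the observed statistic by a second-level quantile, which is where the independence approximation enters; the conditional FDB replaces that quantile with one conditioned on the statistic.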
Keywords:  Bootstrap inference, fast double bootstrap, discrete model, conditional fast double bootstrap. 
JEL:  C12 C22 C32 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:ctc:serie1:def070&r=ecm 
By:  Emmanuel Mamatzakis (Department of Business and Management, University of Sussex, UK; Rimini Centre for Economic Analysis); Mike Tsionas (Lancaster University Management School, UK; Athens University of Economics and Business, Greece) 
Abstract:  We provide a Bayesian panel model that accounts for persistence in US funds' performance while tackling the important problem of errors in variables. Our modelling departs from strong prior assumptions such as the independence of error terms across funds. In fact, we provide a novel, general Bayesian model for (dynamic) panel data that is stable across different priors, as shown by mapping the prior to the posterior of the Bayesian baseline model under alternative priors. We demonstrate that our model detects previously undocumented, striking variability in performance and persistence across fund categories and over time, in particular through the financial crisis. The reported stochastic volatility exhibits a rising trend as early as 2003–2004 and could act as an early warning of a future crisis. 
Keywords:  Bayesian panel model, time-varying stochastic heteroskedasticity, time-varying covariance, general autocorrelation, US mutual fund performance 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:1823&r=ecm 
By:  Sylvain Barde (Sciences Po) 
Abstract:  The recent increase in the breadth of computational methodologies has been matched by a corresponding increase in the difficulty of comparing the relative explanatory power of models from different methodological lineages. In order to help address this problem, a Markovian information criterion (MIC) is developed that is analogous to the Akaike information criterion (AIC) in its theoretical derivation and yet can be applied to any model able to generate simulated or predicted data, regardless of its methodology. Both the AIC and the proposed MIC rely on the Kullback–Leibler (KL) distance between model predictions and real data as a measure of prediction accuracy. Instead of using the maximum likelihood approach of the AIC, the proposed MIC relies on the literal interpretation of the KL distance as the inefficiency of compressing real data using modelled probabilities, and therefore uses the output of a universal compression algorithm to obtain an estimate of the KL distance. Several Monte Carlo tests are carried out in order to (a) confirm the performance of the algorithm and (b) evaluate the ability of the MIC to identify the true data-generating process from a set of alternative models. 
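The compression interpretation of the KL distance can be demonstrated directly. In the sketch below, the ideal code length -sum(log2 p_model) stands in for the output of the universal compression algorithm the paper uses: encoding a series with a model's conditional probabilities costs, per symbol, the data's entropy plus the KL divergence from the model, so the excess code length ranks models. The binary Markov chain and both candidate models are made up for illustration.

```python
import math, random

random.seed(0)
# "Real" data: a binary Markov chain with P(1|0) = 0.2, P(1|1) = 0.8.
p_true = {0: 0.2, 1: 0.8}
data, state = [], 0
for _ in range(5000):
    state = 1 if random.random() < p_true[state] else 0
    data.append(state)

def code_length(seq, p_one):
    """Ideal bits to encode seq under model P(x_t = 1 | x_{t-1}) = p_one[x_{t-1}],
    conditioning the first symbol on an initial state of 0."""
    bits, prev = 0.0, 0
    for x in seq:
        p = p_one[prev] if x == 1 else 1.0 - p_one[prev]
        bits -= math.log2(p)
        prev = x
    return bits

good = code_length(data, {0: 0.2, 1: 0.8})   # the true transition model
bad = code_length(data, {0: 0.5, 1: 0.5})    # an i.i.d. coin-flip model
```

The true model compresses the series to roughly 0.72 bits per symbol, while the coin-flip model needs exactly 1 bit per symbol; the gap estimates the KL distance, which is what the MIC exploits for model comparison.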
Keywords:  AIC; Description length; Markov process; Model selection 
JEL:  B41 C15 C52 C63 
Date:  2017–03 
URL:  http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/5fafm6me7k8omq5jbo61urqq27&r=ecm 
By:  G. Angelini; L. Fanelli 
Abstract:  In this paper we discuss general identification results for Structural Vector Autoregressions (SVARs) with external instruments, considering the case in which r valid instruments are used to identify g ≥ 1 structural shocks, where r ≥ g. We endow the SVAR with an auxiliary statistical model for the external instruments which is a system of reduced-form equations. The SVAR and the auxiliary model for the external instruments jointly form a 'larger' SVAR characterized by a particularly restricted parametric structure, and are connected by the covariance matrix of their disturbances, which incorporates the 'relevance' and 'exogeneity' conditions. We discuss identification results and likelihood-based estimation methods both in the 'multiple shocks' approach, where all structural shocks are of interest, and in the 'partial shock' approach, where only a subset of the structural shocks is of interest. Overidentified SVARs with external instruments can be easily tested in our setup. The suggested method is applied to investigate empirically whether commonly employed measures of macroeconomic and financial uncertainty respond on impact, rather than only with lags, to business cycle fluctuations in the U.S. in the period after the Global Financial Crisis. To do so, we employ two external instruments to identify the real economic activity shock in a partial shock approach. 
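The relevance and exogeneity conditions the paper builds on have a simple moment implication worth sketching: with one instrument z correlated with the target shock (relevance) and uncorrelated with the others (exogeneity), the target shock's impact column is proportional to Cov(u, z), the covariance between the reduced-form residuals u and z. The bivariate system and impact matrix below are simulated for illustration only.

```python
import random

random.seed(1)
n = 50000
shock1 = [random.gauss(0, 1) for _ in range(n)]  # target structural shock
shock2 = [random.gauss(0, 1) for _ in range(n)]  # other structural shock
# Reduced-form residuals u = B @ shocks, with impact matrix B = [[1, 0.5], [0.3, 1]]:
u1 = [1.0 * s1 + 0.5 * s2 for s1, s2 in zip(shock1, shock2)]
u2 = [0.3 * s1 + 1.0 * s2 for s1, s2 in zip(shock1, shock2)]
# Instrument: relevant for shock1, exogenous to shock2, plus measurement noise.
z = [0.8 * s1 + random.gauss(0, 1) for s1 in shock1]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Relative on-impact response of variable 2 vs variable 1 to the target shock,
# consistent for B[1][0] / B[0][0] = 0.3:
ratio = cov(u2, z) / cov(u1, z)
```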
JEL:  C32 C51 E44 G10 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:bol:bodewp:wp1122&r=ecm 
By:  Mohlin, Erik (Department of Economics, Lund University) 
Abstract:  Regression trees are evaluated with respect to mean square error (MSE), mean integrated square error (MISE), and integrated squared error (ISE), as the size of the training sample goes to infinity. The asymptotically MSE- and MISE-minimizing (locally adaptive) regression trees are characterized. Under an optimal tree, MSE is O(n^{-2/3}). The estimator is shown to be asymptotically normally distributed. An estimator for ISE is also proposed, which may be used as a complement to cross-validation in the pruning of trees. 
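The estimator family under study, the regressogram, fits a piecewise-constant function: the prediction in each cell of a partition is the mean of y there. The equal-width binning below is a simplification of the paper's locally adaptive trees, and the data are made up.

```python
def regressogram(xs, ys, n_bins, lo=0.0, hi=1.0):
    """Per-bin means of y over equal-width bins of x on [lo, hi];
    empty bins fall back to the global mean."""
    width = (hi - lo) / n_bins
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for x, y in zip(xs, ys):
        k = min(n_bins - 1, int((x - lo) / width))
        sums[k] += y
        counts[k] += 1
    overall = sum(ys) / len(ys)
    return [s / c if c else overall for s, c in zip(sums, counts)]

def predict(means, x, n_bins, lo=0.0, hi=1.0):
    """Look up the bin mean for a new x."""
    k = min(n_bins - 1, int((x - lo) / ((hi - lo) / n_bins)))
    return means[k]

# Toy sample: two bins on [0, 1].
means = regressogram([0.1, 0.2, 0.6, 0.9], [1.0, 1.0, 3.0, 5.0], n_bins=2)
```

The bias-variance trade-off analyzed in the paper lives in the choice of partition: finer bins lower the bias of each cell mean but raise its variance, and balancing the two yields the n^{-2/3} MSE rate.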
Keywords:  Piecewise Linear Regression; Partitioning Estimators; Nonparametric Regression; Categorization; Partition; Prediction Trees; Decision Trees; Regression Trees; Regressogram; Mean Squared Error 
JEL:  C14 C38 
Date:  2018–05–22 
URL:  http://d.repec.org/n?u=RePEc:hhs:lunewp:2018_012&r=ecm 
By:  Cassim, Lucius 
Abstract:  The main objective of this study is to derive a semiparametric GARCH(1,1) estimator under serially dependent innovations, and to show that the derived estimator is not only consistent but also asymptotically normal. Normally, the GARCH(1,1) estimator is derived through the quasi-maximum likelihood estimation technique, and consistency and asymptotic normality are then proved using the weak law of large numbers and the Lindeberg central limit theorem, respectively. In this study, we apply the quasi-maximum likelihood estimation technique to derive the GARCH(1,1) estimator under the assumption that the innovations are serially dependent. Allowing serial dependence of the innovations, however, raises methodological problems. First, we cannot split the joint probability distribution into a product of marginal distributions, as is normally done; instead, the study splits the joint distribution into a product of conditional densities. Second, we cannot use the weak law of large numbers or the Lindeberg central limit theorem, so we employ martingale techniques to achieve the specific objectives. Having derived the semiparametric GARCH(1,1) estimator, we show that it not only converges almost surely to the true population parameter but also converges in distribution to the normal distribution, at a convergence rate equal to that of parametric estimators. 
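The objective function at the heart of QML estimation for GARCH(1,1) can be written down directly: the conditional variance recursion sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1} plugged into a Gaussian log-likelihood. The sketch below only evaluates that objective (the optimizer step, and the study's treatment of dependent innovations, are omitted); the initialization at the sample variance is one common convention.

```python
import math

def garch_quasi_loglik(params, returns):
    """Gaussian quasi-log-likelihood of a GARCH(1,1) model."""
    omega, alpha, beta = params
    # Initialize the recursion at the sample variance (one common choice).
    sigma2 = sum(r * r for r in returns) / len(returns)
    ll = 0.0
    for r in returns:
        ll -= 0.5 * (math.log(2 * math.pi) + math.log(sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2   # variance recursion
    return ll

# Toy evaluation at typical parameter values on three daily returns:
ll = garch_quasi_loglik((1e-5, 0.05, 0.9), [0.01, -0.02, 0.015])
```

The QML estimator maximizes this function over (omega, alpha, beta); the paper's contribution concerns its asymptotics when the innovations driving r_t are serially dependent.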
Keywords:  GARCH(1,1), semiparametric, quasi-maximum likelihood estimation, martingale 
JEL:  C4 C58 
Date:  2018–05–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:86572&r=ecm 
By:  Moguerza, Javier M.; Muñoz García, Alberto; Martos, Gabriel; Hernández Banadik, Nicolás Jorge 
Abstract:  We propose a definition of entropy for stochastic processes. We provide a reproducing kernel Hilbert space model to estimate entropy from a random sample of realizations of a stochastic process, namely functional data, and introduce two approaches to estimate minimum entropy sets. These sets are relevant to detect anomalous or outlier functional data. A numerical experiment illustrates the performance of the proposed method; in addition, we conduct an analysis of mortality rate curves as an interesting application in a real-data context to explore functional anomaly detection. 
Keywords:  functional data; anomaly detection; minimumentropy sets; stochastic process; entropy 
Date:  2018–05–01 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:26915&r=ecm 
By:  PERRON, Pierre; YAMAMOTO, Yohei 
Abstract:  We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests be carried out with a fixed scheme to have best power. This ensures a maximum difference between the fitted in- and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi's (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in- and out-of-sample periods. To move beyond this single known break date, we consider a variety of tests: maximizing the GR test over all possible values of m within a pre-specified range; a Double sup-Wald (DSW) test, which for each m performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; and, working directly with the total loss series, the Total Loss sup-Wald (TLSW) test and the Total Loss UDmax (TLUD) test. Using extensive simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and two empirical applications illustrate the relevance of our findings in practice. 
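The sup-Wald building block used throughout can be sketched simply: for each candidate break date m, a Wald statistic tests a one-time change in the mean of a loss series, and the sup-Wald test takes the maximum over a trimmed range. The homoskedastic variance estimate and the toy loss series below are simplifications for illustration.

```python
def wald_mean_break(losses, m):
    """Wald statistic for a one-time mean change at date m (1 <= m < len)."""
    a, b = losses[:m], losses[m:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Residual variance under the two-mean alternative:
    rss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    var = rss / (len(losses) - 2)
    return (ma - mb) ** 2 / (var * (1.0 / len(a) + 1.0 / len(b)))

def sup_wald(losses, trim=0.15):
    """Maximize the Wald statistic over a trimmed range of break dates."""
    n = len(losses)
    lo, hi = int(trim * n), int((1 - trim) * n)
    return max(wald_mean_break(losses, m) for m in range(max(lo, 1), hi))

# Toy loss series whose mean jumps from about 1 to about 2 at t = 10:
losses = ([1.0 + 0.01 * (-1) ** i for i in range(10)]
          + [2.0 + 0.01 * (-1) ** i for i in range(10)])
```

The statistic peaks at the true break date, which is why scanning over m restores power when the break need not coincide with the in/out-of-sample split.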
Keywords:  forecast failure, non-monotonic power, structural change, out-of-sample method 
JEL:  C14 C22 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:201803&r=ecm 
By:  Richard Gerlach; Chao Wang 
Abstract:  The joint Value at Risk (VaR) and expected shortfall (ES) quantile regression model of Taylor (2017) is extended by incorporating a realized measure to drive the tail risk dynamics, as a potentially more efficient driver than daily returns. Both a maximum likelihood and an adaptive Bayesian Markov chain Monte Carlo method are employed for estimation, whose properties are assessed and compared via a simulation study; the results favour the Bayesian approach, which is subsequently employed in a forecasting study of seven market indices and two individual assets. The proposed models are compared to a range of parametric, nonparametric and semiparametric models, including GARCH, Realized GARCH and the joint VaR and ES quantile regression models of Taylor (2017). The comparison is in terms of the accuracy of one-day-ahead Value-at-Risk and Expected Shortfall forecasts, over a long forecast sample period that includes the global financial crisis of 2007–2008. The results favour the proposed models incorporating a realized measure, especially when employing the subsampled Realized Variance and the subsampled Realized Range. 
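The score underlying Taylor's (2017) approach, which the proposed models inherit, is the negative log-likelihood of an asymmetric Laplace density evaluated at the return y for tail probability alpha, given forecasts v (VaR) and e (ES) with e <= v < 0. It is a strictly consistent joint scoring function for the (VaR, ES) pair, so lower average scores identify better forecasts; sign conventions here follow left-tail losses and the numbers are illustrative.

```python
import math

def al_joint_score(y, v, e, alpha=0.025):
    """AL-based joint (VaR, ES) score of Taylor (2017); lower is better."""
    hit = 1.0 if y <= v else 0.0
    return -math.log((alpha - 1.0) / e) - (y - v) * (alpha - hit) / (alpha * e)

# A quiet day versus a day that breaches the VaR forecast:
score_ok = al_joint_score(0.5, -2.0, -2.5)     # return well above VaR
score_viol = al_joint_score(-3.0, -2.0, -2.5)  # tail loss beyond VaR
```

Averaging this score over the forecast sample is how the paper's one-day-ahead VaR and ES forecasts can be compared across models.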
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1805.08653&r=ecm 