on Econometrics
By: | Jiri Kukacka (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nabrezi 6, 111 01 Prague 1, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic); Jozef Barunik (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nabrezi 6, 111 01 Prague 1, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic) |
Abstract: | This paper proposes a computational framework for the empirical estimation of Financial Agent-Based Models (FABMs) that does not rely upon restrictive theoretical assumptions. We customise the recent Non-Parametric Simulated Maximum Likelihood Estimator (NPSMLE) based on kernel methods by Kristensen and Shin (2012) and demonstrate its capability for FABM estimation. To start with, we apply the methodology to the popular and widely analysed model of Brock and Hommes (1998). We extensively test the finite sample properties of the estimator via Monte Carlo simulations and show that important theoretical features of the estimator, consistency and asymptotic efficiency, also hold in small samples for the model. We also verify the smoothness of the simulated log-likelihood function and the identification of parameters. The main empirical results of our analysis are the statistical insignificance of the switching coefficient but markedly significant belief parameters defining heterogeneous trading regimes, with an absolute superiority of trend-following over contrarian strategies and a slight proportional dominance of fundamentalists over trend-following chartists.
Keywords: | Heterogeneous Agent Model, Heterogeneous Expectations, Behavioural Finance, Intensity of Choice, Switching, Non-Parametric Simulated Maximum Likelihood Estimator |
JEL: | C14 C51 C63 D84 G02 G12 |
Date: | 2016–03 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2016_07&r=ecm |
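A minimal sketch of the NPSMLE idea the abstract describes, under stated assumptions: the conditional density of the next observation is estimated by a kernel average over simulated one-step-ahead draws, and the resulting simulated log-likelihood is maximised. The `toy_simulate_step` transition is a hypothetical AR(1) placeholder, not the Brock and Hommes (1998) law of motion or the authors' code.

```python
# NPSMLE sketch: kernel density of simulated one-step-ahead draws, evaluated
# at the observed next value; `toy_simulate_step` is a hypothetical placeholder.
import numpy as np
from scipy.optimize import minimize

def npsmle_loglik(theta, x, simulate_step, S=500, h=0.05):
    # Fixed seed => common random numbers across theta, so the simulated
    # log-likelihood is a smooth function of the parameters.
    rng = np.random.default_rng(0)
    ll = 0.0
    for t in range(len(x) - 1):
        draws = simulate_step(theta, x[t], S, rng)      # S draws of x_{t+1} | x_t
        kern = np.exp(-0.5 * ((x[t + 1] - draws) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        ll += np.log(kern.mean() + 1e-300)              # guard against log(0)
    return ll

def toy_simulate_step(theta, x_t, S, rng):
    # Placeholder transition (AR(1)), NOT the Brock-Hommes model.
    return theta[0] * x_t + theta[1] * rng.standard_normal(S)

x = np.cumsum(np.random.default_rng(1).standard_normal(200) * 0.1)
res = minimize(lambda th: -npsmle_loglik(th, x, toy_simulate_step),
               x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x)   # near (1, 0.1) for this random-walk toy series
```

Holding the simulation draws fixed across candidate parameters is what makes the simulated log-likelihood smooth, the property the abstract verifies for the Brock-Hommes model.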
By: | Kim, Jae |
Abstract: | This paper examines the validity of statistical significance reported in the seminal studies of the weather effect on stock returns. It is found that their research design is statistically flawed and seriously biased against the null hypothesis of no effect. This, coupled with test statistics inflated by massive sample sizes, strongly suggests that the statistical significance is spurious, an outcome of Type I error. The alternatives to the p-value criterion for statistical significance soundly support the null hypothesis of no weather effect. As an application, the effect of daily sunspot numbers on stock returns is examined. Under the same research design as that of a seminal study, the number of sunspots is found to be highly statistically significant, although its economic impact on stock returns is negligible.
Keywords: | Anomaly, Behavioural finance, Data mining, Market efficiency, Sunspot numbers, Type I error, Weather |
JEL: | G12 G14 |
Date: | 2016–04–12 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:70692&r=ecm |
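A toy simulation (not the paper's data) of the mechanism the abstract points to: with a massive sample, an economically negligible effect produces a large t-statistic, so the p-value criterion alone flags it as "significant". All numbers are illustrative.

```python
# Toy illustration: a ~1 basis point "weather" effect becomes highly
# significant at n = 1,000,000 despite being economically negligible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000
sunny = rng.integers(0, 2, n)                        # 1 = sunny day
ret = 0.0001 * sunny + rng.normal(0.0, 0.01, n)      # true effect: 1 bp per day

t, p = stats.ttest_ind(ret[sunny == 1], ret[sunny == 0])
diff = ret[sunny == 1].mean() - ret[sunny == 0].mean()
print(f"t = {t:.2f}, p = {p:.1e}")                   # t around 5, tiny p
print(f"mean difference = {diff:.5f} (vs 1% daily volatility)")
```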
By: | Daniel Kosiorowski; Jerzy P. Rydlewski; Ma{\l}gorzata Snarska |
Abstract: | Functional data analysis (FDA) is a part of modern multivariate statistics that analyses data providing information about curves, surfaces or anything else varying over a certain continuum. In economics and other practical applications we often have to deal with time series of functional data, where we cannot easily decide whether they are to be considered stationary or nonstationary. However, the definition of a nonstationary functional time series is itself somewhat vague. A fundamental issue is that before we try to statistically model such data, we need to check whether these curves (suitably transformed, if needed) form a stationary functional time series. At present there are no adequate tests of stationarity for such functional data. We propose a novel statistic for detecting nonstationarity in functional time series based on a local Wilcoxon test.
Date: | 2016–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1604.03776&r=ecm |
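A heavily simplified sketch of the idea, under assumptions: each curve gets a depth (centrality) score relative to a reference sample, and a Wilcoxon rank-sum test compares depths across the two halves of the series. The authors' statistic is based on a local Wilcoxon test with a proper functional depth; the `depth_wrt` proxy below is a crude pointwise stand-in.

```python
# Crude stand-in for a depth-based stationarity check: depth of every curve
# relative to the first half, then a Wilcoxon rank-sum test across halves.
import numpy as np
from scipy import stats

def depth_wrt(reference, curves):
    # Pointwise centrality within `reference`, averaged over the grid
    # (1 = central, 0 = extreme); a crude proxy for a functional depth.
    frac_below = (reference[None, :, :] <= curves[:, None, :]).mean(axis=1)
    return (1.0 - 2.0 * np.abs(frac_below - 0.5)).mean(axis=1)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
# 100 daily curves with a slow mean drift => nonstationary functional series.
curves = (np.sin(2 * np.pi * grid) + 0.02 * np.arange(100)[:, None]
          + rng.normal(0, 0.3, (100, 50)))
depth = depth_wrt(curves[:50], curves)
stat, p = stats.ranksums(depth[:50], depth[50:])
print(f"rank-sum stat = {stat:.2f}, p = {p:.2e}")    # drift => rejection
```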
By: | Tim Bollerslev (Duke University, NBER and CREATES); Andrew J. Patton (Duke University); Rogier Quaedvlieg (Maastricht University) |
Abstract: | We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the covariance forecasts, by allowing the ex-ante predictions to respond more (less) aggressively to changes in the ex-post realized covariance measures when they are more (less) reliable. Applying the new procedures in the construction of minimum variance and minimum tracking error portfolios results in reduced turnover and statistically superior positions compared to existing procedures. Translating these statistical improvements into economic gains, we find that under empirically realistic assumptions a risk-averse investor would be willing to pay up to 170 basis points per year to shift to using the new class of forecasting models. |
Keywords: | Common risks; realized covariances; forecasting; asset allocation; portfolio construction |
JEL: | C32 C58 G11 G32 |
Date: | 2016–04–05 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2016-10&r=ecm |
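A univariate sketch in the spirit of the mechanism described: let the loading on yesterday's realized measure shrink when a measurement-error proxy (realized quarticity, RQ) is large, as in the authors' earlier HARQ model for realized variance. The paper's multivariate covariance specification is more involved; the data and names below are illustrative.

```python
# HARQ-style sketch: the weight on yesterday's realized variance varies with
# realized quarticity (RQ), a proxy for measurement error. Illustrative data.
import numpy as np

def harq_design(rv, rq):
    """Regressors for rv[t+1] = b0 + (b1 + bq*sqrt(rq[t]))*rv[t] + weekly + monthly."""
    t_idx = range(21, len(rv) - 1)
    lag1 = rv[21:-1]
    lag5 = np.array([rv[t - 4:t + 1].mean() for t in t_idx])
    lag22 = np.array([rv[t - 21:t + 1].mean() for t in t_idx])
    X = np.column_stack([np.ones_like(lag1), lag1, lag5, lag22,
                         np.sqrt(rq[21:-1]) * lag1])     # attenuation term
    return X, rv[22:]

rng = np.random.default_rng(0)
rv = np.abs(rng.normal(1.0, 0.3, 1000))                  # fake realized variances
rq = rv ** 2 * rng.gamma(2.0, 0.5, 1000)                 # fake realized quarticity
X, y = harq_design(rv, rq)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(dict(zip(["b0", "b1", "b5", "b22", "bq"], beta.round(3))))
# In real data bq < 0: forecasts respond less aggressively to noisy
# (high-RQ) realized measures, mitigating attenuation bias.
```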
By: | Giampiero M. Gallo (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze); Edoardo Otranto |
Abstract: | Volatility in financial markets alternates between persistent turmoil and quiet periods. As a consequence, modelling realized volatility time series requires a specification in which these subperiods are adequately represented. Changes in regime are one solution, but the question of whether the transition between periods is abrupt or smooth remains open. In recent work we have shown that modifications of the Asymmetric Multiplicative Error Model (AMEM) suitably capture the dynamics of the realized kernel volatility of the S&P500 index: a Markov Switching (MS-AMEM) extension with three regimes performs well in-sample, whereas a Smooth Transition (ST-AMEM) seems to have the best performance out-of-sample. In this paper we combine the two approaches, providing a new class of models with one set of parameters subject to abrupt changes in regime and another set subject to smooth transition changes. These models capture the possibility that regimes may overlap with one another (hence we label them fuzzy). We compare the performance of these models against the MS-AMEM and ST-AMEM, keeping the no-regime AMEM and HAR as benchmarks. The empirical application is carried out on the volatility of four US indices (S&P500, Russell 2000, Dow Jones 30, Nasdaq 100): the superiority of these more flexible models is established on the basis of several criteria and loss functions.
Keywords: | Volatility, Regime switching, Smooth transition, Forecasting, Turbulence, Multiplicative Error Models, MEM |
JEL: | C58 C22 G01 |
Date: | 2016–04 |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2016_02&r=ecm |
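A minimal sketch of an asymmetric MEM recursion with one smooth-transition ingredient, to fix ideas: the intercept moves between a calm and a turbulent level via a logistic weight of lagged volatility. This is a simplified illustration, not the authors' fuzzy MS/ST specification; parameter values are arbitrary.

```python
# ST-AMEM-style sketch: smooth-transition intercept in an asymmetric MEM.
import numpy as np

def st_amem_filter(x, neg, omega_lo, omega_hi, alpha, beta, gamma, kappa, c):
    # Conditional mean mu_t for realized volatility x_t = mu_t * eps_t.
    # The intercept blends a calm (omega_lo) and turbulent (omega_hi) level
    # through a logistic weight of lagged volatility: the smooth transition.
    mu = np.empty_like(x)
    mu[0] = x.mean()
    for t in range(1, len(x)):
        g = 1.0 / (1.0 + np.exp(-kappa * (x[t - 1] - c)))
        omega = (1.0 - g) * omega_lo + g * omega_hi
        mu[t] = omega + (alpha + gamma * neg[t - 1]) * x[t - 1] + beta * mu[t - 1]
    return mu

rng = np.random.default_rng(0)
x = np.abs(rng.normal(1.0, 0.4, 500))            # fake realized volatilities
neg = rng.integers(0, 2, 500)                    # 1 = negative-return day
mu = st_amem_filter(x, neg, omega_lo=0.05, omega_hi=0.30,
                    alpha=0.30, beta=0.55, gamma=0.10, kappa=5.0, c=1.5)
print(mu[-5:].round(3))
# Parameters would be estimated by maximizing, e.g., an exponential
# quasi-likelihood; the abrupt Markov-switching changes are layered on top
# in the paper's fuzzy specification.
```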
By: | Gunawan, David; Tran, Minh-Ngoc; Kohn, Robert |
Abstract: | Variational Bayes (VB) is a popular statistical method for Bayesian inference. Existing VB algorithms are restricted to cases where the likelihood is tractable, which precludes their use in many interesting models. Tran et al. (2015) extend the scope of application of VB to cases where the likelihood is intractable but can be estimated unbiasedly, and name the method “Variational Bayes with Intractable Likelihood (VBIL)”. This paper presents a version of VBIL, named Variational Bayes with Intractable Log-Likelihood (VBILL), that is useful for cases, such as big data and big panel data models, where only unbiased estimators of the log-likelihood are available. In particular, we develop an estimation approach, based on subsampling and the MapReduce programming technique, for analysing massive datasets which cannot fit into a single desktop's memory. The proposed method is theoretically justified in the sense that, apart from an extra Monte Carlo error which can be controlled, it produces estimators as if the true log-likelihood or full data were used. The proposed methodology is also robust in the sense that it works well when only highly variable estimates of the log-likelihood are available. The method is illustrated empirically using several simulated datasets and a big real dataset on the arrival time status of U.S. airlines.
Keywords: | Pseudo Marginal Metropolis-Hastings; Debiasing Approach; Big Data; Panel Data; Difference Estimator |
Date: | 2016–03–30 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/14594&r=ecm |
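A sketch of the subsampling building block the abstract alludes to, under assumptions: an unbiased "difference" estimator of the full-data log-likelihood that computes a cheap approximation for every observation and corrects it on a small random subsample. The normal-location toy and the expansion point are illustrative choices.

```python
# Difference estimator of the full-data log-likelihood from a subsample:
# cheap approximation q_i for every i, exact correction on m sampled units.
import numpy as np

def subsampled_loglik(theta, data, loglik_i, q_i, m, rng):
    # Unbiased for sum_i loglik_i(theta); the variance is driven only by the
    # correction term, which is small when q_i approximates loglik_i well.
    n = len(data)
    idx = rng.choice(n, size=m, replace=True)
    corr = loglik_i(theta, data[idx]) - q_i(theta, data[idx])
    return q_i(theta, data).sum() + (n / m) * corr.sum()

# Toy: normal location model with a second-order expansion of the
# log-likelihood at theta0 as the control variate (illustrative choice).
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, 100_000)
theta0 = data.mean()
ll = lambda th, x: -0.5 * (x - th) ** 2
q = lambda th, x: (-0.5 * (x - theta0) ** 2 + (x - theta0) * (th - theta0)
                   - 0.5 * (th - theta0) ** 2)
print(subsampled_loglik(2.1, data, ll, q, m=1_000, rng=rng))
print(ll(2.1, data).sum())   # exact value; here q is exact, so they coincide
```

For this quadratic toy the expansion equals the log-likelihood, so the correction vanishes entirely; in general the correction is small near theta0, which is what keeps the estimator's variance under control.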
By: | Rabovic, Renata (Tilburg University, Center For Economic Research); Cizek, Pavel (Tilburg University, Center For Economic Research) |
Abstract: | To analyze data obtained by non-random sampling in the presence of cross-sectional dependence, we consider estimation of a sample selection model with a spatial lag of a latent dependent variable or a spatial error in both the selection and outcome equations. Since there is no estimation framework for the spatial lag model and the existing estimators for the spatial error model are either computationally demanding or have poor small sample properties, we suggest estimating these models by the partial maximum likelihood estimator, following the framework that Wang et al. (2013) developed for a spatial error probit model. We show that the estimator is consistent and asymptotically normally distributed. To facilitate easy and precise estimation of the variance matrix without requiring the spatial stationarity of errors, we propose a parametric bootstrap method. Monte Carlo simulations demonstrate the advantages of the estimators.
Keywords: | Asymptotic distribution; Maximum likelihood; Near epoch dependence; Sample selection model; Spatial autoregressive models
JEL: | C13 C31 C34 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiucen:8a4b2e5d-6787-4685-8b9e-128d0e6d4e47&r=ecm |
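A generic sketch of the parametric bootstrap proposed for the variance matrix: simulate data from the fitted model, re-estimate, and take the covariance of the replicated estimates. `simulate` and `fit` are placeholders for the spatial sample-selection model and its partial-ML estimator; the toy below uses a normal model only to keep the example runnable.

```python
# Parametric bootstrap of an estimator's variance matrix (generic skeleton).
import numpy as np

def parametric_bootstrap_cov(theta_hat, simulate, fit, B=200, seed=0):
    # Re-generate data at theta_hat, re-fit, and take the sample covariance
    # of the B replicated estimates.
    rng = np.random.default_rng(seed)
    reps = np.array([fit(simulate(theta_hat, rng)) for _ in range(B)])
    return np.cov(reps, rowvar=False)

# Toy check with a normal mean/std model (placeholders for the spatial model).
simulate = lambda th, rng: rng.normal(th[0], th[1], 500)
fit = lambda x: np.array([x.mean(), x.std(ddof=1)])
cov = parametric_bootstrap_cov(np.array([0.0, 1.0]), simulate, fit)
print(np.sqrt(np.diag(cov)).round(4))   # bootstrap standard errors
```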
By: | Saraswata Chaudhuri; David T. Frazier; Éric Renault
Abstract: | We consider consistent estimation of parameters in a structural model by Indirect Inference (II) when the exogenous variables can be missing at random (MAR) endogenously. We demonstrate that II procedures which simply discard sample units with missing observations can yield inconsistent estimates of the true structural parameters. By inverse probability weighting (IPW) the “complete case” observations, i.e., sample units with no missing variables, in both the observed and simulated samples, we propose a new II method that consistently estimates the structural parameters of interest. Asymptotic properties of the new estimator are discussed. An illustration is provided based on a multinomial probit model. A small-scale Monte Carlo study in this model demonstrates the severe bias incurred by existing II estimators, and its correction by our new II estimator.
Keywords: | Indirect Inference; Missing at Random; Inverse Probability Weighting
JEL: | L14 L62 F23 |
Date: | 2016–04–08 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2016s-15&r=ecm |
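A schematic illustration of the mechanism, assuming a linear toy model: when selection into the "complete case" sample depends on the outcome, the complete-case auxiliary statistic (here an OLS slope) is biased, and inverse probability weighting restores consistency. In the paper the same weighting enters the II criterion for both observed and simulated samples; here only the weighting step is shown, with the selection probabilities treated as known.

```python
# Toy MAR example: selection into the complete-case sample depends on y,
# biasing the complete-case OLS slope; IPW corrects it. Probabilities are
# treated as known here; in practice they are estimated (e.g., by a logit).
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.standard_normal(n)
y = 1.5 * x + rng.standard_normal(n)                 # true slope = 1.5
p = 1.0 / (1.0 + np.exp(-(0.5 + y)))                 # P(complete case | y)
cc = rng.random(n) < p                               # complete-case indicator

def weighted_slope(yy, xx, w):
    xm, ym = np.average(xx, weights=w), np.average(yy, weights=w)
    return np.sum(w * (xx - xm) * (yy - ym)) / np.sum(w * (xx - xm) ** 2)

naive = weighted_slope(y[cc], x[cc], np.ones(cc.sum()))
ipw = weighted_slope(y[cc], x[cc], 1.0 / p[cc])      # reweighted auxiliary stat
print(f"complete-case slope = {naive:.3f}, IPW slope = {ipw:.3f}, true = 1.5")
```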
By: | Conrad, Christian; Kleen, Onno |
Abstract: | We examine the statistical properties of multiplicative GARCH models. First, we show that in multiplicative models, returns have higher kurtosis and squared returns have a more persistent autocorrelation function than in the nested GARCH model. Second, we extend the results of Andersen and Bollerslev (1998) on the upper bound of the R2 in a Mincer-Zarnowitz regression to the case of a multiplicative GARCH model, using squared returns as a proxy for the true but unobservable conditional variance. Our theoretical results imply that multiplicative GARCH models provide an explanation for stylized facts that cannot be captured by classical GARCH modeling. |
Keywords: | Forecast evaluation; GARCH-MIDAS; Mincer-Zarnowitz regression; volatility persistence; volatility component model; long-term volatility
Date: | 2016–03–18 |
URL: | http://d.repec.org/n?u=RePEc:awi:wpaper:0613&r=ecm |
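A small simulation of the Mincer-Zarnowitz logic the abstract builds on, echoing Andersen and Bollerslev (1998): even when the forecast is the true conditional variance of a GARCH(1,1), regressing squared returns on it yields a low R2, because squared returns are a noisy proxy for the latent variance. Parameter values are illustrative.

```python
# Mincer-Zarnowitz regression with the TRUE conditional variance as forecast:
# the R^2 is bounded well below one even for a perfect forecast.
import numpy as np

rng = np.random.default_rng(0)
n, omega, alpha, beta = 100_000, 0.05, 0.08, 0.90
sig2 = np.empty(n); r = np.empty(n)
sig2[0] = omega / (1 - alpha - beta)                 # unconditional variance
for t in range(n):
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    if t + 1 < n:
        sig2[t + 1] = omega + alpha * r[t] ** 2 + beta * sig2[t]

# MZ regression: squared returns on the variance forecast.
X = np.column_stack([np.ones(n), sig2])
coef, res, *_ = np.linalg.lstsq(X, r ** 2, rcond=None)
r2 = 1 - res[0] / np.sum((r ** 2 - (r ** 2).mean()) ** 2)
print(f"a = {coef[0]:.3f}, b = {coef[1]:.3f}, R^2 = {r2:.3f}")
# Population R^2 is alpha^2 / (1 - beta^2 - 2*alpha*beta), about 0.14 here.
```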
By: | Mattia Guerini; Alessio Moneta |
Abstract: | This paper proposes a new method for empirically validating simulation models that generate artificial time series data comparable with real-world data. The approach is based on comparing the structures of vector autoregression models estimated from both artificial and real-world data by means of causal search algorithms. This relatively simple procedure is able to tackle both the problem of confronting theoretical simulation models with the data and the problem of comparing different models in terms of their empirical reliability. Moreover, the paper provides an application of the validation procedure to the macro-model of Dosi et al. (2015).
Keywords: | Model validation; Agent-Based Models; Causality; Structural Vector Autoregressions
Date: | 2016–04–12
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2016/16&r=ecm |
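A stripped-down sketch of the validation logic, under assumptions: fit the same VAR to real and model-generated series and measure the distance between the estimated structures. The paper identifies causal (SVAR) structures with search algorithms; this toy compares reduced-form VAR(1) coefficient matrices instead, which keeps the example self-contained.

```python
# Compare VAR(1) structures estimated from real vs. simulated data.
import numpy as np

def var1_coefs(Y):
    # OLS estimate of A in Y_t = c + A @ Y_{t-1} + u_t.
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[1:].T                                    # drop the intercept row

def structure_distance(Y_real, Y_sim):
    return np.linalg.norm(var1_coefs(Y_real) - var1_coefs(Y_sim))

def gen(A, n, rng):
    Y = np.zeros((n, 2))
    for t in range(1, n):
        Y[t] = A @ Y[t - 1] + rng.standard_normal(2) * 0.1
    return Y

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.2], [0.0, 0.7]])
real = gen(A, 2000, rng)                              # stand-in for real data
good_model = gen(A, 2000, rng)                        # "ABM" with right structure
bad_model = gen(np.array([[0.9, 0.0], [0.4, 0.1]]), 2000, rng)
print(structure_distance(real, good_model))           # small
print(structure_distance(real, bad_model))            # large
```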
By: | Shahariar Huda (Kuwait University)
Abstract: | The problem of optimally designing experiments for trigonometric regression models over an interval on the real line is considered for the situation where estimation of the differences between responses at two points in the factor space is of primary interest. Minimization of the variance of the difference between estimated responses at two points, maximized over all pairs of points in the region of interest, is taken as the design criterion. Optimal designs under this minimax criterion are derived for various set-ups for the first-order model. Some comparisons with the traditional D-optimal designs are also provided. Open problems for further research are indicated.
Keywords: | Minimax designs, Optimal designs, Response surface designs
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:3505662&r=ecm |
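A numerical sketch of the design criterion for the first-order trigonometric model f(x) = (1, cos x, sin x)': the variance of the difference between estimated responses at two points is proportional to (f(x) - f(y))' M(xi)^{-1} (f(x) - f(y)), where M(xi) is the information matrix of the design, and the criterion value is its maximum over pairs of points. The equally spaced candidate design below is illustrative, not the paper's optimum.

```python
# Evaluate the minimax criterion for a candidate design on a grid of pairs.
import numpy as np

f = lambda x: np.array([1.0, np.cos(x), np.sin(x)])   # first-order trig model

def max_pairwise_variance(points, weights, grid):
    # Information matrix M(xi) = sum_i w_i f(x_i) f(x_i)'.
    M = sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))
    Minv = np.linalg.inv(M)
    worst = 0.0
    for x in grid:
        for y in grid:
            d = f(x) - f(y)
            worst = max(worst, d @ Minv @ d)          # var of difference / sigma^2
    return worst

grid = np.linspace(0, 2 * np.pi, 60)
# Candidate: three equally weighted, equally spaced support points.
pts = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
print(max_pairwise_variance(pts, [1 / 3] * 3, grid))
```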