on Econometrics |
By: | Gabriele Fiorentini (Università di Firenze, Italy; Rimini Centre for Economic Analysis); Enrique Sentana (CEMFI, Spain) |
Abstract: | We propose generalised DWH specification tests which simultaneously compare three or more likelihood-based estimators of conditional mean and variance parameters in multivariate conditionally heteroskedastic dynamic regression models. Our tests are useful for GARCH models and in many empirically relevant macro and finance applications involving VARs and multivariate regressions. To design powerful and reliable tests, we determine the rank deficiencies of the differences between the estimators' asymptotic covariance matrices under the null of correct specification, and take into account that some parameters remain consistently estimated under the alternative of distributional misspecification. Finally, we provide finite sample results through Monte Carlo simulations. |
Keywords: | Durbin-Wu-Hausman Tests, Partial Adaptivity, Semiparametric Estimators, Singular Covariance Matrices |
JEL: | C12 C14 C22 C32 C52 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:18-22&r=ecm |
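Illustration: the core of a DWH comparison is a quadratic form in the difference between two estimators, weighted by a generalized inverse of the difference of their asymptotic covariance matrices, with the chi-squared degrees of freedom given by the rank of that difference. The Python sketch below covers only the pairwise case with a possibly singular covariance difference; the function name, tolerance, and toy numbers are illustrative assumptions, not the authors' generalized multi-estimator procedure.

```python
import numpy as np
from scipy import stats

def dwh_test(theta_eff, theta_ineff, cov_eff, cov_ineff, tol=1e-10):
    """Pairwise Durbin-Wu-Hausman test allowing for a singular covariance
    difference: uses the Moore-Penrose inverse of the difference and its
    numerical rank as the chi-squared degrees of freedom."""
    diff = np.asarray(theta_eff) - np.asarray(theta_ineff)
    v = np.asarray(cov_ineff) - np.asarray(cov_eff)   # psd under correct specification
    stat = float(diff @ np.linalg.pinv(v, rcond=tol) @ diff)
    rank = np.linalg.matrix_rank(v, tol=tol)
    return stat, rank, stats.chi2.sf(stat, df=rank)

# toy usage with a deliberately rank-deficient covariance difference
theta_mle, theta_qmle = np.array([0.51, 0.98]), np.array([0.47, 0.98])
cov_mle = np.array([[0.010, 0.002], [0.002, 0.004]])
cov_qmle = cov_mle + np.array([[0.006, 0.0], [0.0, 0.0]])   # difference has rank 1
print(dwh_test(theta_mle, theta_qmle, cov_mle, cov_qmle))
```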
By: | Mika Meitz; Daniel Preve; Pentti Saikkonen |
Abstract: | A new mixture autoregressive model based on Student's $t$-distribution is proposed. A key feature of our model is that the conditional $t$-distributions of the component models are based on autoregressions that have multivariate $t$-distributions as their (low-dimensional) stationary distributions. That autoregressions with such stationary distributions exist is not immediate. Our formulation implies that the conditional mean of each component model is a linear function of past observations and the conditional variance is also time varying. Compared to previous mixture autoregressive models, our model may therefore be useful in applications where the data exhibit rather strong conditional heteroskedasticity. Our formulation also has the theoretical advantage that conditions for stationarity and ergodicity are always met and these properties are much more straightforward to establish than is common in nonlinear autoregressive models. An empirical example employing a realized kernel series based on S&P 500 high-frequency data shows that the proposed model performs well in volatility forecasting. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.04010&r=ecm |
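Illustration: a minimal simulator for a generic two-regime mixture autoregression with Student-t innovations conveys the flavour of such models. It is only a sketch under simplifying assumptions (the regime scales and degrees of freedom are held fixed); the Meitz-Preve-Saikkonen formulation instead ties each component's conditional t-distribution, including its scale, to past observations so that stationarity holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mixture_ar1(n, weights, phis, scales, dfs, burn=500):
    """Simulate a mixture AR(1): at each date one component is drawn with
    probability `weights`, and the next observation follows that component's
    AR(1) recursion with Student-t innovations. Generic illustration only;
    not the exact parameterization of the model described above."""
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        k = rng.choice(len(weights), p=weights)
        y[t] = phis[k] * y[t - 1] + scales[k] * rng.standard_t(dfs[k])
    return y[burn:]

y = simulate_mixture_ar1(1000, weights=[0.7, 0.3], phis=[0.2, 0.9],
                         scales=[0.5, 1.5], dfs=[8, 4])
print(y.mean(), y.std())
```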
By: | Alessandro Casini; Pierre Perron |
Abstract: | This chapter covers methodological issues related to estimation, testing and computation for models involving structural changes. Our aim is to review developments as they relate to econometric applications based on linear models. Substantial advances have been made to cover models at a level of generality that allows a host of interesting practical applications. These include models with general stationary regressors and errors that can exhibit temporal dependence and heteroskedasticity, models with trending variables and possible unit roots, and cointegrated models, among others. Advances have been made pertaining to computational aspects of constructing estimates, their limit distributions, tests for structural changes, and methods to determine the number of changes present. A variety of topics are covered. The first part summarizes and updates developments described in an earlier review, Perron (2006), with the exposition drawing heavily on Perron (2008). Additions are included for recent developments: testing for common breaks, models with endogenous regressors (emphasizing that simply using least squares is preferable to instrumental variables methods), quantile regressions, methods based on Lasso, panel data models, testing for changes in forecast accuracy, factor models, and methods of inference based on a continuous record asymptotic framework. Our focus is on the so-called off-line methods whereby one wants to retrospectively test for breaks in a given sample of data and form confidence intervals about the break dates. The aim is to provide readers with an overview of methods that are of direct usefulness in practice as opposed to issues that are mostly of theoretical interest. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.03807&r=ecm |
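Illustration: the basic computational building block in this literature is least-squares estimation of a break date, obtained by minimizing the sum of squared residuals over all admissible partitions (dynamic programming extends this to multiple breaks). A minimal sketch for a single break in the mean, with an assumed 15% trimming:

```python
import numpy as np

def estimate_single_break(y, trim=0.15):
    """Estimate one break date in the mean of y by least squares: the date
    minimizing the total sum of squared residuals of the two regime means."""
    y = np.asarray(y, float)
    n = y.size
    lo, hi = int(trim * n), int((1 - trim) * n)
    ssr = np.full(n, np.inf)
    for tb in range(lo, hi):
        ssr[tb] = np.sum((y[:tb] - y[:tb].mean()) ** 2) + \
                  np.sum((y[tb:] - y[tb:].mean()) ** 2)
    return int(np.argmin(ssr))

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.0, 1.0, 80)])
print(estimate_single_break(y))   # should be close to the true break at 120
```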
By: | Marc Hallin |
Abstract: | In his celebrated 1936 paper on “the generalized distance in statistics,” P.C. Mahalanobis pioneered the idea that, when defined over a space equipped with some probability measure P, a meaningful distance should be P-specific, with data-driven empirical counterpart. The so-called Mahalanobis distance and the corresponding Mahalanobis outlyingness achieve this objective in the case of a Gaussian P by mapping P to the spherical standard Gaussian, via a transformation based on second-order moments which appears to be an optimal transport in the Monge-Kantorovich sense. In a non-Gaussian context, though, one may feel that second-order moments are not informative enough, or inappropriate; moreover, they might not exist. We therefore propose a distance that fully takes the underlying P into account—not just its second-order features—by considering the potential that characterizes the optimal transport mapping P to the uniform over the unit ball, along with a symmetrized version of the corresponding Bregman divergence. |
Keywords: | Bregman divergence; gradient of convex function; Mahalanobis distance; measure transportation; McCann theorem; Monge-Kantorovich problem; multivariate distribution function; multivariate quantiles; outlyingness |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/270860&r=ecm |
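Illustration: in the Gaussian case the Mahalanobis outlyingness of x is the Euclidean norm of S^{-1/2}(x - xbar), i.e. of the image of x under the second-order map sending the fitted Gaussian to the spherical standard Gaussian. The sketch below computes only this empirical second-order version; the distance proposed in the paper, based on the optimal transport to the uniform on the unit ball and a symmetrized Bregman divergence, is not reproduced here.

```python
import numpy as np

def mahalanobis_outlyingness(X):
    """Empirical Mahalanobis outlyingness: the Euclidean norm of
    S^{-1/2}(x - xbar), the image of x under the second-order map to the
    (approximately) spherical standard Gaussian."""
    X = np.asarray(X, dtype=float)
    xbar = X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    S_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric inverse square root
    return np.linalg.norm((X - xbar) @ S_inv_half, axis=1)

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=500)
print(mahalanobis_outlyingness(X)[:5])
```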
By: | Breitung, Jörg; Knüppel, Malte |
Abstract: | Forecasts are useless whenever the forecast error variance fails to be smaller than the unconditional variance of the target variable. This paper develops tests for the null hypothesis that forecasts become uninformative beyond some limiting forecast horizon h. Following Diebold and Mariano (DM, 1995), we propose a test based on the comparison of the mean-squared error of the forecast and the sample variance. We show that the resulting test does not possess a limiting normal distribution and suggest two simple modifications of the DM-type test with different limiting null distributions. Furthermore, a forecast encompassing test is developed that tends to control the size of the test better. In our empirical analysis, we apply our tests to macroeconomic forecasts from the Consensus Economics survey. Our results suggest that forecasts of key macroeconomic variables are barely informative beyond 2-4 quarters ahead. |
Keywords: | Hypothesis Testing, Predictive Accuracy, Informativeness |
JEL: | C12 C32 C53 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:072018&r=ecm |
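Illustration: the building block of these tests is a Diebold-Mariano-style loss differential between squared forecast errors and squared deviations from the sample mean. The sketch below forms this differential and its HAC-studentized mean; as the abstract stresses, this naive statistic is not asymptotically standard normal under the null of uninformative forecasts, so it is shown only to fix ideas, not as the authors' corrected tests. The Bartlett truncation lag and the toy forecast are assumptions.

```python
import numpy as np

def dm_type_statistic(y, yhat, max_lag=4):
    """Studentized mean of the loss differential between squared forecast
    errors and squared deviations from the sample mean, using a Bartlett
    (HAC) long-run variance. Shown only to fix ideas: under the null of
    uninformative forecasts this naive statistic is not asymptotically
    standard normal (see the corrections developed in the paper)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    d = (y - yhat) ** 2 - (y - y.mean()) ** 2      # loss differential
    T = d.size
    u = d - d.mean()
    lrv = u @ u / T
    for k in range(1, max_lag + 1):
        lrv += 2.0 * (1.0 - k / (max_lag + 1)) * (u[k:] @ u[:-k]) / T
    return d.mean() / np.sqrt(lrv / T)

rng = np.random.default_rng(3)
y = rng.standard_normal(200)
yhat = 0.3 * np.roll(y, 1)          # a crude, weakly informative forecast
print(dm_type_statistic(y[1:], yhat[1:]))
```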
By: | Li, Yong (Renmin University of China); Liu, Xiaobin (Zhejiang University); Zeng, Tao (Zhejiang University); Yu, Jun (SMU School of Economics) |
Abstract: | A new Wald-type statistic is proposed for hypothesis testing based on Bayesian posterior distributions. The new statistic can be interpreted as a posterior version of the Wald test and has several nice properties. First, it is well-defined under improper prior distributions. Second, it avoids the Jeffreys-Lindley paradox. Third, under the null hypothesis and repeated sampling, it follows a chi-squared distribution asymptotically, offering an asymptotically pivotal test. Fourth, it only requires inverting the posterior covariance for the parameters of interest. Fifth, and perhaps most importantly, when a random sample from the posterior distribution (such as an MCMC output) is available, the proposed statistic can be easily obtained as a by-product of posterior simulation. In addition, the numerical standard error of the estimated statistic can be computed from the random sample. The finite sample performance of the statistic is examined in Monte Carlo studies. The method is applied to two latent variable models used in microeconometrics and financial econometrics. |
Keywords: | Decision theory; Hypothesis testing; Latent variable models; Posterior simulation; Wald test. |
JEL: | C11 C12 |
Date: | 2018–05–11 |
URL: | http://d.repec.org/n?u=RePEc:ris:smuesw:2018_008&r=ecm |
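Illustration: when an MCMC sample is available, a posterior Wald-type quantity of the kind described can be computed as a quadratic form in the posterior mean of the parameters of interest, weighted by the inverse of their posterior covariance. The sketch below assumes this reduced form and compares the statistic to a chi-squared distribution; the draws, indices, and hypothesized values are placeholders, not the paper's applications.

```python
import numpy as np
from scipy import stats

def posterior_wald(draws, idx, theta0=None):
    """Posterior Wald-type statistic from an MCMC sample: quadratic form in
    the posterior mean of the parameters of interest, weighted by the
    inverse of their posterior covariance, compared to a chi-squared
    distribution with len(idx) degrees of freedom."""
    sub = np.asarray(draws)[:, idx]
    theta0 = np.zeros(sub.shape[1]) if theta0 is None else np.asarray(theta0)
    diff = sub.mean(axis=0) - theta0
    V = np.atleast_2d(np.cov(sub, rowvar=False))
    stat = float(diff @ np.linalg.solve(V, diff))
    return stat, stats.chi2.sf(stat, df=sub.shape[1])

# placeholder draws standing in for MCMC output on (beta1, beta2, sigma)
rng = np.random.default_rng(4)
draws = rng.multivariate_normal([0.40, 0.05, 1.0],
                                np.diag([0.02, 0.02, 0.01]), size=5000)
print(posterior_wald(draws, idx=[0, 1]))   # tests beta1 = beta2 = 0
```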
By: | Juan Carlos Escanciano; Wei Li |
Abstract: | Ordinary least squares provides the optimal linear approximation to the true regression function under misspecification. This paper investigates the Instrumental Variables (IV) version of this problem. The resulting population parameter is called the Optimal Linear IV Approximation (OLIVA). This paper shows that a necessary condition for regular identification of the OLIVA is also sufficient for existence of an IV estimand in a linear IV model. The necessary condition holds for the important case of a binary endogenous treatment, leading also to a LATE interpretation with positive weights. The instrument in the IV estimand is unknown and is estimated in a first step. A Two-Step IV (TSIV) estimator is proposed. We establish the asymptotic normality of a debiased TSIV estimator based on locally robust moments. The TSIV estimator requires neither completeness nor identification of the instrument. As a by-product of our analysis, we robustify the classical Hausman test for exogeneity against misspecification of the linear model. Monte Carlo simulations suggest excellent finite sample performance for the proposed inferences. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.03275&r=ecm |
By: | Schreiber, Sven |
Abstract: | Applied time series research often faces the challenge that (a) potentially relevant variables are unobservable, and (b) it is fundamentally uncertain which covariates are relevant. Thus, cointegration is often analyzed in partial systems, ignoring potential (stationary) covariates. By simulating hypothesized larger systems, Benati (2015) found that a nominally significant cointegration outcome using a bootstrapped rank test (Cavaliere, Rahbek, and Taylor, 2012) in the bivariate sub-system might be due to test size distortions. In this note we review this issue systematically. Apart from revisiting the partial-system results we also investigate alternative bootstrap test approaches in the larger system. Throughout, we follow the given application of a long-run Phillips curve (euro-area inflation and unemployment). The methods that include the covariates do not reject the null of no cointegration, but by simulation we find that they display very low power, such that the (bivariate) partial-system approach is still preferred. The size distortions of all approaches are only mild when a standard HP-filtered output gap measure is used among the covariates. The bivariate trace test p-value of 0.027 (heteroskedasticity-consistent wild bootstrap) therefore still suggests rejection of non-cointegration at the 5% but not at the 1% significance level. The earlier findings of considerable test size distortions can be replicated when instead an output gap measure with different longer-run developments is used. This detrimental effect of large borderline-stationary roots reflects an earlier insight from the literature (Cavaliere, Rahbek, and Taylor, 2015). |
Keywords: | bootstrap, cointegration rank test, empirical size |
JEL: | C32 C15 E31 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:zbw:fubsbe:20188&r=ecm |
By: | Takaki Sato; Yasumasa Matsuda |
Abstract: | This study proposes spatiotemporal extensions of time series autoregressive conditional heteroskedasticity (ARCH) models, which we call spatiotemporal ARCH (ST-ARCH) models. ST-ARCH models specify conditional variances given both simultaneous and past observations, in contrast with time series ARCH models, which specify conditional variances given past own observations only. We propose two types of ST-ARCH models based on cross-sectional correlations between error terms. A spatial weight matrix based on Fama-French three-factor models is used to quantify the closeness between stock prices. We estimate the parameters of the ST-ARCH models by a two-step quasi-maximum likelihood procedure. We demonstrate the empirical properties of the models through simulation studies and an analysis of stock price data from the Japanese market. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:toh:dssraa:82&r=ecm |
By: | Baltagi, Badi H.; Fingleton, Bernard; Pirotte, Alain |
Abstract: | This paper focuses on the estimation and predictive performance of several estimators for the time-space dynamic panel data model with Spatial Moving Average Random Effects (SMA-RE) structure of the disturbances. A dynamic spatial Generalized Moments (GM) estimator is proposed which combines the approaches proposed by Baltagi, Fingleton and Pirotte (2014) and Fingleton (2008). The main idea is to mix non-spatial and spatial instruments to obtain consistent estimates of the parameters. Then, a forecasting approach is proposed and a linear predictor is derived. Using Monte Carlo simulations, we compare the short-run and long-run effects and evaluate the predictive efficiencies of optimal and various suboptimal predictors using the Root Mean Square Error (RMSE) criterion. Last, our approach is illustrated by an application in geographical economics which studies the employment levels across 255 NUTS regions of the EU over the period 2001-2012, with the last two years reserved for prediction. |
Keywords: | Panel data; Spatial lag; Error components; Time-space; Dynamic; OLS; Within; GM; Spatial autocorrelation; Direct and indirect effects; Moving average; Prediction; Simulations; Rook contiguity; Interregional trade. |
JEL: | C23 |
Date: | 2018–04–18 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:86371&r=ecm |
By: | Atkinson, Tyler (Federal Reserve Bank of Dallas); Richter, Alexander (Federal Reserve Bank of Dallas); Throckmorton, Nathaniel (College of William & Mary) |
Abstract: | This paper evaluates the accuracy of linear and nonlinear estimation methods for dynamic stochastic general equilibrium models. We generate a large sample of artificial datasets using a global solution to a nonlinear New Keynesian model with an occasionally binding zero lower bound (ZLB) constraint on the nominal interest rate. For each dataset, we estimate the nonlinear model—solved globally, accounting for the ZLB—and the linear analogue of the nonlinear model—solved locally, ignoring the ZLB—with a Metropolis-Hastings algorithm where the likelihood function is evaluated with a Kalman filter, unscented Kalman filter, or particle filter. In datasets that resemble the U.S. experience, the nonlinear model estimated with a particle filter is more accurate and has a higher marginal data density than the linear model estimated with a Kalman filter, as long as the measurement error variances in the particle filter are not too big. |
Keywords: | Bayesian estimation; nonlinear solution; particle filter; unscented Kalman filter |
JEL: | C11 C32 C51 E43 |
Date: | 2018–05–07 |
URL: | http://d.repec.org/n?u=RePEc:fip:feddwp:1804&r=ecm |
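Illustration: the nonlinear likelihood evaluations referred to above rely on particle filtering. The generic bootstrap particle filter below, for a scalar state-space model, shows the mechanics (propagation, reweighting, resampling, and the incremental log-likelihood); it is not the authors' New Keynesian model nor their adapted filter, and the toy AR(1) example is an assumption for demonstration.

```python
import numpy as np

def bootstrap_pf_loglik(y, n_particles, propagate, meas_loglik, init_draw, rng):
    """Bootstrap particle filter estimate of the log-likelihood of a
    state-space model x_t = f(x_{t-1}, e_t), y_t ~ g(y_t | x_t)."""
    x = init_draw(n_particles)
    loglik = 0.0
    for yt in y:
        x = propagate(x, rng)                        # propagate particles
        logw = meas_loglik(yt, x)                    # measurement log-densities
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())               # incremental log-likelihood
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                   # multinomial resampling
    return loglik

# toy model: AR(1) state observed with Gaussian measurement error
rng = np.random.default_rng(5)
rho, sx, sy = 0.9, 0.5, 0.3
propagate = lambda x, rng: rho * x + sx * rng.standard_normal(x.size)
meas_loglik = lambda yt, x: -0.5 * ((yt - x) / sy) ** 2 - np.log(sy * np.sqrt(2 * np.pi))
init_draw = lambda n: rng.standard_normal(n) * sx / np.sqrt(1 - rho ** 2)

x_true, y = 0.0, []
for _ in range(100):
    x_true = rho * x_true + sx * rng.standard_normal()
    y.append(x_true + sy * rng.standard_normal())
print(bootstrap_pf_loglik(np.array(y), 2000, propagate, meas_loglik, init_draw, rng))
```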
By: | Atkinson, Anthony C.; Riani, Marco; Cerioli, Andrea |
Abstract: | The forward search is a method of robust data analysis in which outlier free subsets of the data of increasing size are used in model fitting; the data are then ordered by closeness to the model. Here the forward search, with many random starts, is used to cluster multivariate data. These random starts lead to the diagnostic identification of tentative clusters. Application of the forward search to the proposed individual clusters leads to the establishment of cluster membership through the identification of non-cluster members as outlying. The method requires no prior information on the number of clusters and does not seek to classify all observations. These properties are illustrated by the analysis of 200 six-dimensional observations on Swiss banknotes. The importance of linked plots and brushing in elucidating data structures is illustrated. We also provide an automatic method for determining cluster centres and compare the behaviour of our method with model-based clustering. In a simulated example with 8 clusters our method provides more stable and accurate solutions than model-based clustering. We consider the computational requirements of both procedures. |
Keywords: | brushing; data structure; forward search; graphical methods; linked plots; Mahalanobis distance; MM estimation; outliers; S estimation; Tukey’s biweight. |
JEL: | C1 |
Date: | 2017–04–08 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:72291&r=ecm |
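Illustration: the forward search grows an outlier-free subset by repeatedly refitting and adding the observations closest, in Mahalanobis distance, to the current fit, thereby ordering the data by closeness to the model. The sketch below runs a single random start on multivariate data and returns the final closeness ordering; the many random starts, monitoring plots, and clustering steps described in the paper are not reproduced.

```python
import numpy as np

def forward_search_order(X, m0=None, rng=None):
    """Single-random-start forward search: grow a subset from a small random
    start by repeatedly refitting mean/covariance and keeping the
    observations closest in Mahalanobis distance; return the final ordering
    (last entries are the most outlying)."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, float)
    n, p = X.shape
    subset = list(rng.choice(n, size=(p + 1 if m0 is None else m0), replace=False))
    ranked = np.arange(n)
    while len(subset) < n:
        mu = X[subset].mean(axis=0)
        S_inv = np.linalg.pinv(np.cov(X[subset], rowvar=False))
        d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)   # squared distances
        ranked = np.argsort(d2)
        subset = list(ranked[: len(subset) + 1])   # add the next closest unit
    return ranked

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (95, 3)), rng.normal(6, 1, (5, 3))])
print(forward_search_order(X, rng=rng)[-5:])   # indices of the most remote points
```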
By: | Simon Freyaldenhoven; Christian Hansen; Jesse M. Shapiro |
Abstract: | We consider a linear panel event-study design in which unobserved confounds may be related both to the outcome and to the policy variable of interest. We provide sufficient conditions to identify the causal effect of the policy by exploiting covariates related to the policy only through the confounds. Our model implies a set of moment equations that are linear in parameters. The effect of the policy can be estimated by 2SLS, and causal inference is valid even when endogeneity leads to pre-event trends ("pre-trends") in the outcome. Alternative approaches, such as estimation following a test for pre-trends, perform poorly. |
JEL: | C23 C26 |
Date: | 2018–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:24565&r=ecm |
By: | Victor De Oliveira (UTSA) |
Abstract: | This work considers models for geostatistical data for situations in which the region where the phenomenon of interest varies is partitioned into two disjoint subregions, which is called a binary map. The goals of this work are threefold. First, a review is provided of the classes of models that have been proposed so far for geostatistical binary data as well as a description of their main features. Second, a generalization is provided of a spatial multivariate probit model that eases regression function modeling, interpretation of the regression parameters, and establishing connections with other models. The second-order properties of this model are studied in some detail. Finally, connections between the aforementioned classes of models are established, showing that some of these are reformulations (reparametrizations) of the other models. |
Keywords: | Clipped Gaussian random field; Gaussian copula model; Generalized linear mixed model; Indicator kriging; Multivariate probit model. |
JEL: | C21 C31 C53 |
Date: | 2017–12–10 |
URL: | http://d.repec.org/n?u=RePEc:tsa:wpaper:0151mss&r=ecm |
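Illustration: one of the model classes reviewed, the clipped Gaussian random field, produces a binary map by thresholding a latent Gaussian field. A small sketch with an assumed exponential covariance on a regular grid and a zero threshold; the grid, covariance, and parameters are illustrative assumptions.

```python
import numpy as np

def clipped_gaussian_binary_map(nx, ny, range_par=5.0, seed=0):
    """Binary map from a clipped Gaussian random field: simulate a zero-mean
    field with exponential covariance on a regular grid and threshold at 0."""
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
    coords = np.column_stack([xs.ravel(), ys.ravel()])
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-d / range_par) + 1e-10 * np.eye(nx * ny)   # small nugget for stability
    z = rng.multivariate_normal(np.zeros(nx * ny), cov)
    return (z > 0).astype(int).reshape(ny, nx)

binary_map = clipped_gaussian_binary_map(15, 15)
print(binary_map.sum(), "of", binary_map.size, "cells above the threshold")
```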
By: | VÁZQUEZ-ALCOCER, Alan; SCHOEN, Eric D.; GOOS, Peter |
Abstract: | After completing the experimental runs of a screening design, the responses under study are analyzed by statistical methods to detect the active effects. To increase the chances of correctly identifying these effects, a good analysis method should: (1) provide alternative interpretations of the data, (2) reveal the aliasing present in the design, and (3) search only meaningful sets of effects as defined by user-specified restrictions such as effect heredity or constraints that include all the contrasts of a multi-level factor in the model. Methods like forward selection, the Dantzig selector or the LASSO do not possess all these properties. Simulated annealing model search cannot handle constraints other than effect heredity. This paper presents a novel strategy for analyzing data from screening designs that possesses properties (1)-(3) in full. It uses modern mixed integer optimization methods that return results in a few minutes. We illustrate our method by analyzing data from real and synthetic experiments involving two-level and mixed-level screening designs. Using simulations, we show the capability of our method to automatically select the set of active effects, and we compare it to benchmark methods. |
Keywords: | Dantzig selector, Definitive screening design, LASSO, Sparsity, Two-factor interaction |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:ant:wpaper:2018007&r=ecm |
By: | Søren Johansen (Department of Economics, University of Copenhagen and CREATES) |
Abstract: | A multivariate CVAR(1) model for some observed variables and some unobserved variables is analysed using its infinite order CVAR representation of the observations. Cointegration and adjustment coefficients in the infinite order CVAR are found as functions of the parameters in the CVAR(1) model. Conditions for weak exogeneity of the cointegrating vectors in the approximating finite order CVAR are derived. The results are illustrated by a few simple examples of relevance for modelling causal graphs. |
Keywords: | Adjustment coefficients, cointegrating coefficients, CVAR, causal models |
JEL: | C32 |
Date: | 2018–05–17 |
URL: | http://d.repec.org/n?u=RePEc:kud:kuiedp:1805&r=ecm |
By: | Alexis Akira Toda |
Abstract: | Although using non-Gaussian distributions in economic models has become increasingly popular, currently there is no systematic way for calibrating a discrete distribution from the data without imposing parametric assumptions. This paper proposes a simple nonparametric calibration method that combines the kernel density estimator and Gaussian quadrature. Applications to an asset pricing model and an optimal portfolio problem suggest that assuming Gaussian instead of nonparametric shocks leads to a 15-25% reduction in the equity premium and overweighting in the stock portfolio because the investor underestimates the probability of crashes. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.00896&r=ecm |
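Illustration: the proposal combines a kernel density estimate with Gaussian quadrature so that the resulting discrete distribution matches the estimated density's moments. The sketch below assumes the quadrature is built from the KDE's numerically integrated moments via the classical Hankel/Cholesky (Golub-Welsch) construction; the paper's algorithm may differ in detail, and the construction is numerically delicate beyond a handful of nodes.

```python
import numpy as np
from scipy import stats

def kde_gaussian_quadrature(data, n_nodes=5, grid_size=4001):
    """Discretize a Gaussian kernel density estimate of `data` into n_nodes
    points and weights matching its first 2*n_nodes moments, via the
    moment/Hankel/Cholesky (Golub-Welsch) construction."""
    data = np.asarray(data, float)
    mu, sd = data.mean(), data.std()
    z = (data - mu) / sd                               # standardize for stability
    kde = stats.gaussian_kde(z)
    x = np.linspace(z.min() - 4, z.max() + 4, grid_size)
    fx, dx = kde(x), x[1] - x[0]
    moments = np.array([np.sum(x ** k * fx) * dx for k in range(2 * n_nodes + 1)])
    H = np.array([[moments[i + j] for j in range(n_nodes + 1)] for i in range(n_nodes + 1)])
    R = np.linalg.cholesky(H).T                        # upper-triangular factor
    alpha, off = np.empty(n_nodes), np.empty(n_nodes - 1)
    alpha[0] = R[0, 1] / R[0, 0]
    for j in range(1, n_nodes):
        alpha[j] = R[j, j + 1] / R[j, j] - R[j - 1, j] / R[j - 1, j - 1]
        off[j - 1] = R[j, j] / R[j - 1, j - 1]
    nodes, vecs = np.linalg.eigh(np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1))
    weights = moments[0] * vecs[0, :] ** 2
    return mu + sd * nodes, weights / weights.sum()

rng = np.random.default_rng(7)
sample = rng.standard_t(df=6, size=2000)               # heavy-tailed "data"
nodes, weights = kde_gaussian_quadrature(sample, n_nodes=5)
print(np.round(nodes, 3), np.round(weights, 3))
```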
By: | Lisa Schlosser; Torsten Hothorn; Reto Stauffer; Achim Zeileis |
Abstract: | To obtain a probabilistic model for a dependent variable based on some set of explanatory variables, a distributional approach is often adopted where the parameters of the distribution are linked to regressors. In many classical models this only captures the location of the distribution, but over the last decade there has been increasing interest in distributional regression approaches modeling all parameters including location, scale, and shape. Notably, so-called non-homogeneous Gaussian regression (NGR) models both mean and variance of a Gaussian response and is particularly popular in weather forecasting. More generally, the GAMLSS framework allows one to establish generalized additive models for location, scale, and shape with smooth linear or nonlinear effects. However, when variable selection is required and/or there are non-smooth dependencies or interactions (especially unknown or of high order), it is challenging to establish a good GAMLSS. A natural alternative in these situations would be the application of regression trees or random forests but, so far, no general distributional framework is available for these. Therefore, a framework for distributional regression trees and forests is proposed that blends regression trees and random forests with classical distributions from the GAMLSS framework as well as their censored or truncated counterparts. To illustrate these novel approaches in practice, they are employed to obtain probabilistic precipitation forecasts at numerous sites in a mountainous region (Tyrol, Austria) based on a large number of numerical weather prediction quantities. It is shown that the novel distributional regression forests automatically select variables and interactions, performing on par or often even better than GAMLSS specified either through prior meteorological knowledge or a computationally more demanding boosting approach. |
Keywords: | parametric models, regression trees, random forests, recursive partitioning, probabilistic forecasting, GAMLSS |
Date: | 2018–08 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2018-08&r=ecm |
By: | Jeffrey S. Racine; Qi Li; Li Zheng |
Abstract: | Model averaging has a rich history dating from its use for combining forecasts from time-series models (Bates & Granger 1969) and presents a compelling alternative to model selection methods. We propose a frequentist model average procedure defined over categorical regression splines (Ma, Racine & Yang 2015) that allows for non-nested and heteroskedastic candidate models. Theoretical underpinnings are provided, finite-sample performance is evaluated, and an empirical illustration reveals that the method is capable of outperforming a range of popular model selection criteria in applied settings. An R package is available for practitioners (Racine 2017). |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:mcm:deptwp:2018-10&r=ecm |
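Illustration: a standard frequentist weight-selection device for averaging non-nested, possibly heteroskedastic candidate models is jackknife (leave-one-out) model averaging in the spirit of Hansen and Racine (2012). The sketch below selects simplex weights by minimizing the leave-one-out squared error; it is a simplified stand-in for the categorical-spline procedure of the paper, and the polynomial candidate models are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def loo_predictions(y, X):
    """Leave-one-out OLS predictions via the hat-matrix shortcut."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    return y - (y - H @ y) / (1.0 - np.diag(H))

def jma_weights(y, design_list):
    """Jackknife model-averaging weights: minimize the leave-one-out squared
    error of the weighted prediction over the unit simplex."""
    P = np.column_stack([loo_predictions(y, X) for X in design_list])
    m = P.shape[1]
    res = minimize(lambda w: np.mean((y - P @ w) ** 2), np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
    return res.x

# toy usage: average three nested polynomial regressions of different order
rng = np.random.default_rng(8)
x = rng.uniform(-2, 2, 300)
y = np.sin(x) + 0.3 * rng.standard_normal(300)
designs = [np.column_stack([x ** k for k in range(d + 1)]) for d in (1, 3, 5)]
print(np.round(jma_weights(y, designs), 3))
```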
By: | Arkadiusz Koziol; Anuradha Roy (UTSA); Roman Zmyslony (University of Zielona Gora); Ricardo Leiva; Miguel Fonseca |
Abstract: | The article addresses the best unbiased estimators of the doubly exchangeable covariance structure for three-level data, an extension of the block compound symmetry covariance structure. Under multivariate normality, the coordinate-free approach is used to obtain linear and quadratic estimates of the model parameters that are sufficient, complete, unbiased and consistent. Data from a clinical study are analyzed to illustrate the application of the results. |
Keywords: | Best unbiased estimator, doubly exchangeable covariance structure, three-level multivariate data, coordinate free approach, unstructured mean vector. |
JEL: | H12 J10 F10 |
Date: | 2016–08–09 |
URL: | http://d.repec.org/n?u=RePEc:tsa:wpaper:0149mss&r=ecm |
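Illustration: a common parameterization of the doubly exchangeable covariance structure for three-level data (p variables, v second-level units, u third-level units) uses three p x p blocks, giving Gamma = I_u (x) I_v (x) (U0 - U1) + I_u (x) J_v (x) (U1 - W) + J_u (x) J_v (x) W, where (x) denotes the Kronecker product and J is a matrix of ones. The constructor below assumes this parameterization and these block names; it only builds the matrix, not the estimators studied in the paper.

```python
import numpy as np

def doubly_exchangeable_cov(U0, U1, W, v, u):
    """Covariance matrix of dimension u*v*p built from p x p blocks
       U0 : same second- and third-level unit,
       U1 : different second-level units within the same third-level unit,
       W  : different third-level units."""
    Iu, Ju = np.eye(u), np.ones((u, u))
    Iv, Jv = np.eye(v), np.ones((v, v))
    return (np.kron(Iu, np.kron(Iv, U0 - U1))
            + np.kron(Iu, np.kron(Jv, U1 - W))
            + np.kron(Ju, np.kron(Jv, W)))

# toy usage with p = 2 variables, v = 3 second-level units, u = 2 third-level units
U0 = np.array([[2.0, 0.5], [0.5, 1.5]])
U1 = np.array([[0.6, 0.2], [0.2, 0.4]])
W = np.array([[0.2, 0.1], [0.1, 0.1]])
G = doubly_exchangeable_cov(U0, U1, W, v=3, u=2)
print(G.shape, bool(np.all(np.linalg.eigvalsh(G) > 0)))   # (12, 12) and positive definite
```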
By: | Shuzhen Yang; Jianfeng Yao |
Abstract: | Several well-established benchmark predictors exist for Value-at-Risk (VaR), a major instrument for financial risk management. Hybrid methods combining AR-GARCH filtering with skewed-$t$ residuals and the extreme value theory-based approach are particularly recommended. This study introduces yet another VaR predictor, G-VaR, which follows a novel methodology. Inspired by the recent mathematical theory of sublinear expectation, G-VaR is built upon the concept of model uncertainty, which in the present case signifies that the inherent volatility of financial returns cannot be characterized by a single distribution but rather by infinitely many statistical distributions. By considering the worst scenario among these potential distributions, the G-VaR predictor is precisely identified. Extensive experiments on both the NASDAQ Composite Index and S\&P500 Index demonstrate the excellent performance of the G-VaR predictor, which is superior to most existing benchmark VaR predictors. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.03890&r=ecm |
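Illustration: under the sublinear-expectation view, return volatility is only known to lie in an interval and risk is assessed under the worst distribution in the resulting family. The sketch below is a deliberately simplified normal-family version in which the volatility bounds are taken from short rolling windows; the window lengths, the normal family, and the placeholder return series are assumptions, and the paper's G-VaR construction is more involved.

```python
import numpy as np
from scipy import stats

def worst_case_normal_var(returns, alpha=0.01, window=250, span=25):
    """Worst-case value-at-risk over a normal family whose volatility is only
    known to lie between the smallest and largest short-window volatility seen
    in a calibration window: for a lower-tail quantile, the worst case is the
    largest admissible volatility."""
    r = np.asarray(returns, float)[-window:]
    rolling_sd = np.array([r[i:i + span].std(ddof=1)
                           for i in range(r.size - span + 1)])
    sigma_hi = rolling_sd.max()
    return -(r.mean() + sigma_hi * stats.norm.ppf(alpha))   # reported as a positive loss

rng = np.random.default_rng(9)
rets = 0.01 * rng.standard_t(df=5, size=1000)    # placeholder return series
print(worst_case_normal_var(rets, alpha=0.01))
```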
By: | Myasnikov, Alexander |
Abstract: | The traditional spatial lag model assumes that all regions in the sample exhibit the same degree of sensitivity to spatial external effects. This may not always be the case, however, especially with highly heterogeneous regions like those in Russia. Therefore, a model has been suggested that views spatial coefficients as being endogenously defined by regions’ intrinsic characteristics. We generalize this model, describe approaches to its estimation based on maximum likelihood and generalized least squares, and perform a Monte Carlo simulation of these two estimation methods in small samples. We find that the maximum likelihood estimator is preferable due to the lower biases and variances of the estimates it yields, although the generalized least squares estimator can also be useful in small samples for robustness checks and as a first approximation tool. In larger samples, results of the generalized least squares estimator are very close to those of the maximum likelihood estimator, so the former may be preferred because of its simplicity and less strict computational power requirements. |
Keywords: | spatial lag model; endogenous spatial coefficients; Monte Carlo simulation; small sample estimation |
JEL: | O18 R15 |
Date: | 2018–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:86696&r=ecm |
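Illustration: the benchmark that the proposed model extends is the constant-coefficient spatial lag (SAR) model y = rho*W*y + X*beta + eps, whose maximum likelihood estimator can be computed from the log-likelihood concentrated in rho. The sketch below implements only that benchmark on an assumed ring contiguity matrix; the endogenous-coefficient extension and the GLS comparison from the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sar_mle(y, X, W):
    """Maximum likelihood for the constant-coefficient spatial lag model
    y = rho*W*y + X*beta + eps, via the log-likelihood concentrated in rho."""
    n = y.size
    def neg_conc_loglik(rho):
        z = y - rho * (W @ y)
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
        sig2 = np.mean((z - X @ beta) ** 2)
        _, logdet = np.linalg.slogdet(np.eye(n) - rho * W)
        return -(logdet - 0.5 * n * np.log(sig2))
    rho = minimize_scalar(neg_conc_loglik, bounds=(-0.99, 0.99), method='bounded').x
    beta = np.linalg.lstsq(X, y - rho * (W @ y), rcond=None)[0]
    return rho, beta

# toy usage: ring contiguity, row-standardized weights
rng = np.random.default_rng(10)
n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = np.linalg.solve(np.eye(n) - 0.4 * W, X @ np.array([1.0, 2.0]) + rng.standard_normal(n))
print(sar_mle(y, X, W))   # rho near 0.4, beta near (1, 2)
```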