New Economics Papers on Econometrics
By: | Claeskens, Gerda; Hart, Jeffrey D. |
Abstract: | Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. |
Keywords: | Mixed model; Hypothesis test; Nonparametric test; Minimum distance; Order selection; |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/220817&r=ecm |
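A minimal sketch of the workflow behind the entry above, under stated assumptions: it fits a linear mixed model and applies an off-the-shelf Shapiro-Wilk test to the predicted (BLUP) random effects. The paper's own tests are nonparametric (order-selection and minimum-distance based); the test used here is only a stand-in, and all data and names are illustrative.

```python
# Illustrative sketch only, not the authors' tests: fit a mixed model,
# extract the predicted random effects, and test them for normality
# with a generic Shapiro-Wilk test. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n_groups, n_per = 50, 10
b = rng.standard_t(df=3, size=n_groups)        # non-normal random effects
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
y = 1.0 + 2.0 * x + b[g] + rng.normal(size=n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "g": g})
res = sm.MixedLM.from_formula("y ~ x", groups="g", data=df).fit()

# Predicted (BLUP) random effects, one per group
re = np.array([v.iloc[0] for v in res.random_effects.values()])
stat, pval = stats.shapiro(re)
print(f"Shapiro-Wilk on predicted random effects: W={stat:.3f}, p={pval:.4f}")
```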
By: | Croux, Christophe; Gelper, Sarah; Mahieu, Koen |
Abstract: | Multivariate time series may contain outliers of different types. In the presence of such outliers, applying standard multivariate time series techniques becomes unreliable. A robust version of multivariate exponential smoothing is proposed. The method is affine equivariant, and involves the selection of a smoothing parameter matrix by minimizing a robust loss function. It is shown that the robust method results in much better forecasts than the classic approach in the presence of outliers, and performs similarly when the data contain no outliers. Moreover, the robust procedure yields an estimator of the smoothing parameter that is less subject to downward bias. As a byproduct, a cleaned version of the time series is obtained, as illustrated by means of a real data example. |
Keywords: | Data cleaning; Exponential smoothing; Forecasting; Multivariate time series; Robustness; |
Date: | 2009–08 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/242199&r=ecm |
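To make the cleaning byproduct concrete, here is a univariate toy version of robust exponential smoothing: one-step-ahead residuals are capped at a multiple of a robust scale estimate before the smoothing update, yielding both a smoothed level and a cleaned series. The paper's method is multivariate, affine equivariant, and selects a smoothing parameter matrix by minimizing a robust loss; none of that is reproduced here.

```python
# Univariate simplification of the cleaning idea; names and tuning
# constants are ours, not the paper's.
import numpy as np

def robust_exp_smooth(y, lam=0.3, k=2.0):
    """Return smoothed values and a cleaned copy of the series."""
    y = np.asarray(y, dtype=float)
    s = y[0]                                        # smoothed level
    scale = 1.4826 * np.median(np.abs(np.diff(y)))  # robust scale (MAD of diffs)
    smoothed, cleaned = [s], [y[0]]
    for t in range(1, len(y)):
        r = y[t] - s                                # one-step-ahead residual
        r_clean = np.clip(r, -k * scale, k * scale) # Huber-type capping
        y_clean = s + r_clean
        s = s + lam * (y_clean - s)                 # smoothing update
        smoothed.append(s)
        cleaned.append(y_clean)
    return np.array(smoothed), np.array(cleaned)

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=200))
y[100] += 15.0                                      # inject an additive outlier
level, clean = robust_exp_smooth(y)
```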
By: | Otilia Boldea; Alastair Hall; Sanggohn Han |
Abstract: | In this paper, we present a limiting distribution theory for the break point estimator in a linear regression model with multiple structural breaks obtained by minimizing a Two Stage Least Squares (2SLS) objective function. Our analysis covers both the case in which the reduced form for the endogenous regressors is stable and the case in which it is unstable with multiple structural breaks. For stable reduced forms, we present a limiting distribution theory under two different scenarios: in the case where the parameter change is of fixed magnitude, it is shown that the resulting distribution depends on the distribution of the data and is not of much practical use for inference; in the case where the magnitude of the parameter change shrinks with the sample size, it is shown that the resulting distribution can be used to construct approximate large sample confidence intervals for the break points. For unstable reduced forms, we consider the case where the magnitudes of the parameter changes in both the equation of interest and the reduced forms shrink with the sample size at potentially different rates and not necessarily at the same locations in the sample. The resulting limiting distribution theory can be used to construct approximate large sample confidence intervals for the break points. The finite sample performance of these intervals is analyzed in a small simulation study and the intervals are illustrated via an application to the New Keynesian Phillips curve. |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:man:cgbcrp:134&r=ecm |
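A sketch of the break-point estimator in the single-break, stable-reduced-form case: the 2SLS objective (second-stage sum of squared residuals computed from first-stage fitted values) is evaluated over a trimmed grid of candidate break dates and minimized. The simulated design and all names are ours; the multiple-break case is omitted.

```python
# Grid-search 2SLS break-point estimation for one break (our sketch).
import numpy as np

rng = np.random.default_rng(2)
T, tau0 = 400, 200
z = rng.normal(size=(T, 2))                        # instruments
u = rng.normal(size=T)
x = z @ np.array([1.0, 0.5]) + u                   # endogenous regressor
beta = np.where(np.arange(T) < tau0, 1.0, 2.0)     # slope shifts at tau0
y = beta * x + 0.5 * u + rng.normal(size=T)        # x correlates with the error

Z = np.column_stack([np.ones(T), z])
xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # first-stage fitted values

def ssr(y_seg, xh_seg):
    X = np.column_stack([np.ones(len(xh_seg)), xh_seg])
    b, *_ = np.linalg.lstsq(X, y_seg, rcond=None)
    e = y_seg - X @ b
    return e @ e

candidates = range(int(0.15 * T), int(0.85 * T))   # trim the sample ends
obj = [ssr(y[:k], xhat[:k]) + ssr(y[k:], xhat[k:]) for k in candidates]
tau_hat = int(0.15 * T) + int(np.argmin(obj))
print("estimated break date:", tau_hat)
```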
By: | Mikko Myrskylä (Max Planck Institute for Demographic Research, Rostock, Germany); Joshua R. Goldstein (Max Planck Institute for Demographic Research, Rostock, Germany) |
Abstract: | We study prediction and error propagation in Hernes, Gompertz, and logistic models for innovation diffusion. We develop a unifying framework in which the models are linearized with respect to cohort age and predictions are derived from the underlying linear process. We develop and compare methods for deriving the predictions and show how Monte Carlo simulation can be used to estimate prediction uncertainty for a wide class of underlying linear processes. For an important special case, a random walk with drift, we develop an analytic prediction variance estimator. Both the Monte Carlo method and the analytic variance estimator allow forecasters to make precise the level of within-model prediction uncertainty in innovation diffusion models. Empirical applications to first births, first marriages and cumulative fertility illustrate the usefulness of these methods. |
JEL: | J1 Z0 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2010-013&r=ecm |
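For the random-walk-with-drift special case (assuming that is the process the truncated phrase in the abstract refers to), the h-step prediction variance when the drift is estimated from n observed increments is sigma^2 * h * (1 + h/n): the first term accumulates future shocks, the second propagates the drift-estimation error. A small Monte Carlo check of this textbook formula, not the authors' estimator:

```python
# Monte Carlo check: Var(y_{n+h} - yhat_{n+h}) = sigma^2 * h * (1 + h/n)
# for a random walk with drift, drift estimated from n increments.
import numpy as np

rng = np.random.default_rng(3)
n, h, mu, sigma, reps = 100, 20, 0.2, 1.0, 50_000
errors = np.empty(reps)
for r in range(reps):
    inc = mu + sigma * rng.normal(size=n + h)      # random-walk increments
    drift_hat = inc[:n].mean()                     # drift estimated in-sample
    errors[r] = inc[n:].sum() - h * drift_hat      # y_{n+h} - (y_n + h*drift_hat)
print("simulated variance:", round(errors.var(), 3))
print("analytic variance :", sigma**2 * h * (1 + h / n))   # = 24.0 here
```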
By: | Consentino, Fabrizio; Claeskens, Gerda |
Abstract: | We develop nonparametric tests for the null hypothesis that a function has a prescribed form, applicable to data sets with missing observations. Omnibus nonparametric tests do not need to specify a particular alternative parametric form and have power against a large range of alternatives; the order selection tests that we study are one example. We extend such order selection tests to be applicable in the context of missing data. In particular, we consider likelihood-based order selection tests for multiply-imputed data. A simulation study and a data analysis illustrate the performance of the tests. Along the same lines, a model selection method in the style of Akaike's information criterion for multiply-imputed datasets is obtained. |
Keywords: | Akaike information criterion; Hypothesis test; Multiple imputation; Lack-of-fit test; Missing data; Omnibus test; Order selection; |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/223654&r=ecm |
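A toy version of the order-selection idea for complete data (the multiple-imputation extension is the paper's contribution and is not attempted here): fit nested polynomial alternatives to a constant null, select the order with an AIC-type criterion, and flag lack of fit when a positive order is chosen.

```python
# Simplified order-selection lack-of-fit check (our sketch): a selected
# order > 0 suggests departure from the constant null model.
import numpy as np

def order_selection_test(x, y, max_order=10):
    n = len(y)
    best_order, best_crit = 0, None
    for k in range(max_order + 1):
        X = np.vander(x, k + 1, increasing=True)   # polynomial of order k
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        crit = n * np.log(rss / n) + 2 * (k + 1)   # AIC-type penalty
        if best_crit is None or crit < best_crit:
            best_order, best_crit = k, crit
    return best_order

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-1, 1, 150))
y = 0.5 + 0.4 * np.sin(3 * x) + 0.2 * rng.normal(size=150)
print("selected order:", order_selection_test(x, y))
```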
By: | Krivobokova, Tatyana; Kneib, Thomas; Claeskens, Gerda |
Abstract: | In this paper we construct simultaneous confidence bands for a smooth curve using penalized spline estimators. We consider three types of estimation methods: (i) as a standard (fixed effect) nonparametric model, (ii) using the mixed model framework with the spline coefficients as random effects and (iii) a full Bayesian approach. The volume-of-tube formula is applied for the first two methods and compared from a frequentist perspective to Bayesian simultaneous confidence bands. It is shown that the mixed model formulation of penalized splines can help to obtain, at least approximately, confidence bands with either Bayesian or frequentist properties. Simulations and data analysis support the methods proposed. The R package ConfBands accompanies the paper. |
Keywords: | Bayesian penalized splines; B-splines; Confidence band; Mixed model; Penalization; |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/255626&r=ecm |
By: | Daisuke Nagakura; Toshiaki Watanabe |
Abstract: | We call the realized variance (RV) calculated with observed prices contaminated by (market) microstructure noise (MN) the noise-contaminated RV (NCRV), referring to the bias component in the NCRV associated with the MN as the MN component. This paper develops a state space method for estimating the integrated variance (IV) and the MN component. We represent the NCRV in state space form and show that the state space form parameters are not identifiable; they can, however, be expressed as functions of identifiable parameters. We illustrate how to estimate these parameters. The proposed method also serves as a convenient way of estimating a general class of continuous-time stochastic volatility (SV) models in the presence of MN. We apply the proposed method to yen/dollar exchange rate data, where we find that most of the variation in the NCRV is due to the MN component. |
Keywords: | Realized Variance, Integrated Variance, Microstructure Noise, State Space, Identification, Exchange Rate |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:hst:ghsdps:gd09-115&r=ecm |
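A simplified illustration of the state space idea: treat the noise-contaminated RV as a latent level observed with measurement error and extract the level with a stock local-level model. The paper's actual NCRV state space form, and its identification argument, are richer; the series and parameter values below are simulated stand-ins.

```python
# Local-level stand-in for the paper's state space form (our sketch):
# measured RV = latent IV + noise component.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 500
iv = np.empty(T)                         # latent integrated variance
iv[0] = 0.0
for t in range(1, T):
    iv[t] = 0.95 * iv[t - 1] + 0.1 * rng.normal()
ncrv = iv + 0.3 * rng.normal(size=T)     # noise-contaminated measurement

model = sm.tsa.UnobservedComponents(ncrv, level="local level")
res = model.fit(disp=False)
iv_hat = res.smoothed_state[0]           # smoothed estimate of the latent level
print(res.summary().tables[1])
```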
By: | Timothy Falcon Crack; Olivier Ledoit |
Abstract: | Although dependence in financial data is pervasive, standard doctoral-level econometrics texts do not make clear that the common central limit theorems (CLTs) contained therein fail when applied to dependent data. More advanced books that are clear in their CLT assumptions do not contain any worked examples of CLTs that apply to dependent data. We address these pedagogical gaps by discussing dependence in financial data and dependence assumptions in CLTs, and by giving a worked example of the application of a CLT for dependent data: the derivation of the asymptotic distribution of the sample variance of a Gaussian AR(1) process. We also provide code and results for a Monte Carlo simulation used to check the results of the derivation. |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:zur:iewwpx:480&r=ecm |
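The worked example in the entry above can be mimicked as follows. For a Gaussian AR(1), x_t = phi * x_{t-1} + e_t, a standard long-run-variance calculation gives sqrt(T) * (gamma0_hat - gamma0) -> N(0, 2 * gamma0^2 * (1 + phi^2) / (1 - phi^2)). The Monte Carlo below (our sketch, not the authors' code) checks that constant.

```python
# Check the asymptotic variance of the AR(1) sample variance by simulation.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)
phi, sigma, T, reps = 0.6, 1.0, 1_000, 10_000
gamma0 = sigma**2 / (1 - phi**2)                   # Var(x_t)
v_asy = 2 * gamma0**2 * (1 + phi**2) / (1 - phi**2)

est = np.empty(reps)
for r in range(reps):
    e = sigma * rng.normal(size=T + 200)
    x = lfilter([1.0], [1.0, -phi], e)[200:]       # AR(1), burn-in dropped
    est[r] = x.var()                               # sample variance
print("simulated T*Var(sample var):", round(T * est.var(), 3))
print("asymptotic variance        :", round(v_asy, 3))
```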
By: | William Griffiths; Xiaohui Zhang; Xueyan Zhao |
Abstract: | The stochastic frontier model used for continuous dependent variables is extended to accommodate output measured as a discrete ordinal outcome variable. Conditional on the inefficiency error, the assumptions of the ordered probit model are adopted for the log of output. Bayesian estimation utilizing a Gibbs sampler with data augmentation is applied to a convenient re-parameterisation of the model. Using panel data from an Australian longitudinal survey, demographic and socioeconomic characteristics are specified as inputs to health production, whereas production efficiency is made dependent on lifestyle factors. Posterior summary statistics are obtained for selected health status probabilities, efficiencies, and marginal effects. |
Keywords: | Bayesian estimation, Gibbs sampling, ordered probit, production efficiency |
JEL: | C11 C21 C23 I12 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:1092&r=ecm |
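The data-augmentation step at the heart of such a Gibbs sampler can be sketched in a few lines: given the current cutpoints and linear predictor, the latent utility behind each observed category is redrawn from a correspondingly truncated normal. The frontier/inefficiency layer of the paper's model is omitted, and all names are ours.

```python
# One data-augmentation draw for an ordered probit (our sketch).
import numpy as np
from scipy.stats import truncnorm

def draw_latent(y_cat, mu, cuts):
    """y_cat in {0,..,J-1}; cuts = [-inf, c_1, ..., c_{J-1}, +inf]."""
    lo, hi = cuts[y_cat], cuts[y_cat + 1]
    a, b = lo - mu, hi - mu                        # standardized bounds
    return mu + truncnorm.rvs(a, b)                # N(mu, 1) truncated to [lo, hi]

cuts = np.array([-np.inf, 0.0, 1.0, np.inf])       # three ordered categories
print(draw_latent(y_cat=1, mu=0.3, cuts=cuts))
```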
By: | Laurent Ferrara (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Banque de France - Business Conditions and Macroeconomic Forecasting Directorate); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I) |
Abstract: | This paper formalizes the process of forecasting unbalanced monthly datasets in order to obtain robust nowcasts and forecasts of the quarterly gross domestic product (GDP) growth rate through semi-parametric modeling. The innovation of this approach lies in the use of non-parametric methods, based on nearest neighbors and on radial basis function approaches, to forecast the monthly variables involved in the parametric modeling of GDP using bridge equations. A real-time exercise is carried out on euro area vintage data in order to anticipate, with a lead ranging from six months to one month, the GDP flash estimate for the whole zone. |
Keywords: | euro area GDP; real-time nowcasting; forecasting; non-parametric methods |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00460461_v1&r=ecm |
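A minimal nearest-neighbour forecaster in the spirit of the non-parametric step described above: embed the monthly series into delay vectors, find the k past windows closest to the most recent one, and average their successors. The paper's embedding and bandwidth choices, and the radial-basis-function variant, are not reproduced.

```python
# k-nearest-neighbour one-step forecast from delay vectors (our sketch).
import numpy as np

def knn_forecast(y, m=6, k=5):
    """One-step-ahead forecast using k neighbours of the last m-window."""
    y = np.asarray(y, dtype=float)
    hists = np.array([y[t - m:t] for t in range(m, len(y))])  # past windows
    target = y[-m:]                                 # the most recent window
    dists = np.linalg.norm(hists - target, axis=1)
    nearest = np.argsort(dists)[:k]                 # k most similar histories
    return y[m + nearest].mean()                    # average their successors

rng = np.random.default_rng(7)
y = np.sin(np.arange(240) * 2 * np.pi / 12) + 0.2 * rng.normal(size=240)
print("one-month-ahead forecast:", round(knn_forecast(y), 3))
```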
By: | Ruipeng Liu; Thomas Lux |
Abstract: | Long memory (long-term dependence) of volatility counts as one of the ubiquitous stylized facts of financial data. Inspired by the long memory property, multifractal processes have recently been introduced as a new tool for modeling financial time series. In this paper, we propose a parsimonious version of a bivariate multifractal model and estimate its parameters via both maximum likelihood and simulation-based inference. To explore its practical performance, we apply the model to compute value-at-risk and expected shortfall statistics for various portfolios and compare the results with those from an alternative bivariate multifractal model proposed by Calvet et al. (2006) and the bivariate CC-GARCH of Bollerslev (1990). As it turns out, the multifractal models provide much more reliable results than CC-GARCH, and our new model compares well with that of Calvet et al. although it has an even smaller number of parameters. |
Keywords: | Long memory, multifractal models, simulation based inference, value-at-risk, expected shortfall |
JEL: | C11 C13 G15 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:kie:kieliw:1594&r=ecm |
By: | Alberto Elices; Jean-Pierre Fouque |
Abstract: | Gaussian copulas are widely used in the industry to correlate two random variables when there is no prior knowledge about the co-dependence between them. The perturbed Gaussian copula approach makes it possible to introduce the skew information of both random variables into the co-dependence structure. The analytical expression of this copula is derived through an asymptotic expansion under the assumption of a common fast mean-reverting stochastic volatility factor. This paper applies this new perturbed copula to the valuation of derivative products, in particular FX quanto options to a third currency. A calibration procedure to fit the skew of both underlying securities is presented. The action of the perturbed copula is interpreted by comparison with the Gaussian copula. A worked real example compares both copulas and a local volatility model with constant correlation for varying maturities, correlations and skew configurations. |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1003.0041&r=ecm |
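For reference, the plain (unperturbed) Gaussian copula the paper starts from can be sampled as follows: draw bivariate normals with the target correlation, map them to uniforms with the normal CDF, then push the uniforms through any inverse marginal CDFs. The skew-adjusted perturbation itself is not reproduced.

```python
# Sampling two arbitrary marginals with Gaussian-copula dependence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
rho, n = 0.7, 10_000
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)                        # uniforms with Gaussian dependence
x1 = stats.lognorm.ppf(u[:, 0], s=0.5)       # any marginals can be plugged in:
x2 = stats.t.ppf(u[:, 1], df=4)              # here lognormal and Student-t
print("rank correlation:", stats.spearmanr(x1, x2)[0])
```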
By: | Maria M. De Mello (CEF.UP, Faculdade de Economia, Universidade do Porto) |
Abstract: | This paper assesses the forecast performance of a set of VAR models under a growing number of restrictions. With a maximum forecast horizon of 12 years, we show that the longer the horizon, the more structured and restricted VAR models have to be to produce accurate forecasts. Indeed, unrestricted VAR models, not subjected to integration or cointegration, are poor forecasters over both short and long run horizons. Differenced VAR models, subject to integration, are reliable predictors for one-step horizons but ineffectual for multi-step horizons. Cointegrated VAR models that include appropriate structural breaks and exogenous variables, and are subjected to over-identifying theory-consistent restrictions, are excellent forecasters over both short and long run horizons. Hence, to obtain precise forecasts from VAR models, proper specification and cointegration are crucial whatever the horizon at stake, while integration is relevant only for short run horizons. |
Keywords: | VAR demand systems; structural breaks; exogenous regressors; integration; cointegration; forecast accuracy |
JEL: | C32 C53 |
Date: | 2009–10 |
URL: | http://d.repec.org/n?u=RePEc:por:cetedp:0902&r=ecm |
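A bare-bones version of the comparison run in the paper, on simulated cointegrated data: out-of-sample forecasts from an unrestricted VAR in levels versus a cointegrated (VECM) specification. Structural breaks, exogenous variables and over-identifying restrictions, central to the paper's best-performing models, are omitted.

```python
# VAR-in-levels vs. VECM forecast comparison on simulated data (our sketch).
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(9)
T, h = 300, 12
w = np.cumsum(rng.normal(size=T + h))        # common stochastic trend
data = np.column_stack([w + rng.normal(size=T + h),
                        0.5 * w + rng.normal(size=T + h)])
train, test = data[:T], data[T:]

var_res = VAR(train).fit(2)
var_fc = var_res.forecast(train[-var_res.k_ar:], steps=h)
vecm_fc = VECM(train, k_ar_diff=1, coint_rank=1).fit().predict(steps=h)

for name, fc in [("VAR ", var_fc), ("VECM", vecm_fc)]:
    rmse = np.sqrt(((fc - test) ** 2).mean())
    print(name, "RMSE over", h, "steps:", round(rmse, 3))
```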
By: | Anindya Banerjee; Massimiliano Marcellino; Igor Masten |
Abstract: | As a generalization of the factor-augmented VAR (FAVAR) and of the Error Correction Model (ECM), Banerjee and Marcellino (2009) introduced the Factor-augmented Error Correction Model (FECM). The FECM combines error correction, cointegration and dynamic factor models, and has several conceptual advantages over standard ECM and FAVAR models. In particular, it uses a larger dataset than the ECM and incorporates the long-run information lacking from the FAVAR because of the latter's specification in differences. In this paper we examine the forecasting performance of the FECM by means of an analytical example, Monte Carlo simulations and several empirical applications. We show that, relative to the FAVAR, the FECM generally offers higher forecasting precision and marks a very useful step forward for forecasting with large datasets. |
Keywords: | Forecasting with Factor-augmented Error Correction Models |
JEL: | C32 E17 |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:bir:birmec:09-06r&r=ecm |
By: | Kozo Mayumi (Faculty of Integrated Arts and Sciences, The University of Tokushima); Mario Giampietro (Institut de Ciencia i Tecnologia Ambientals, Universitat Autònoma de Barcelona); Jesus Ramos-Martin (Departament d'Economia i d'Història Econòmica, Universitat Autònoma de Barcelona) |
Abstract: | When dealing with sustainability we are concerned with the biophysical as well as the monetary aspects of economic and ecological interactions. This multidimensional approach requires that special attention be given to dimensional issues in relation to curve fitting practice in economics. Unfortunately, many empirical and theoretical studies in economics, as well as in ecological economics, apply dimensional numbers in exponential or logarithmic functions. We show that it is an analytical error to put a dimensional quantity x into exponential functions (a^x) and logarithmic functions (log_a x). Secondly, we investigate the conditions on data sets under which a particular logarithmic specification is superior to the usual regression specification. This analysis shows that the superiority of a logarithmic specification in terms of the least squares norm depends heavily on the available data set. The last section deals with economists' “curve fitting fetishism”. We propose that a distinction be made between curve fitting over past observations and the development of a theoretical or empirical law capable of maintaining its fitting power for any future observations. Finally we conclude the paper with several epistemological issues in relation to dimensions and curve fitting practice in economics. |
Keywords: | dimensions, logarithmic function, curve fitting, logarithmic specification |
JEL: | C01 C13 C51 C65 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:aub:uhewps:2010_01&r=ecm |
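The dimensional argument can be stated in two lines: the exponential of a dimensional quantity is meaningless because its power series mixes dimensions, and the logarithm of a dimensional quantity shifts by an arbitrary constant whenever the unit changes.

```latex
% e^x expanded: if x carries a unit (say metres), the terms 1, x, x^2/2!, ...
% have different dimensions and cannot be added together:
e^{x} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots
% The logarithm under a change of units x -> c x (e.g. metres to kilometres):
\log_{a}(c\,x) = \log_{a} c + \log_{a} x ,
% so the result depends on the arbitrary constant \log_a c; only a
% dimensionless ratio x / x_{\mathrm{ref}} gives a unit-free argument.
```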
By: | Jörg-Peter Schräpler |
Abstract: | This paper focuses on fraud detection in surveys, using Socio-Economic Panel (SOEP) data as an example for testing the new methods proposed here. A statistical theorem referred to as Benford's Law states that in many sets of numerical data, the significant digits are not uniformly distributed, as one might expect, but rather adhere to a certain logarithmic probability function. To detect fraud, we derive several requirements that should, according to this law, be fulfilled in the case of survey data. We show that in several SOEP subsamples, Benford's Law holds for the available continuous data. For this analysis, we have developed a measure that reflects the plausibility of the digit distribution in interviewer clusters. We are able to demonstrate that several interviews that were known to have been fabricated, and therefore deleted from the original user data set, can be detected using this method. Furthermore, in one subsample, we use this method to identify an interviewer who falsified ten interviews and had not previously been detected by the fieldwork organization. In the last section of the paper, we try to explain the deviations from Benford's distribution empirically, and show that several factors can influence the test statistic used. To avoid misinterpretations and false conclusions, it is important to take these factors into account when Benford's Law is applied to survey data. |
Keywords: | Falsification, data quality, Benford's Law, SOEP |
JEL: | C69 C81 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp273&r=ecm |
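A first-digit Benford check of the kind the paper builds on: compare empirical first-significant-digit frequencies with P(d) = log10(1 + 1/d) using a chi-square statistic. The paper computes a plausibility measure per interviewer cluster; the single-sample version below, with simulated stand-in amounts, only illustrates the basic test.

```python
# Chi-square test of first significant digits against Benford's Law.
import numpy as np
from scipy.stats import chisquare

def first_digit(x):
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)  # digits 1..9

benford_p = np.log10(1.0 + 1.0 / np.arange(1, 10))  # P(d) = log10(1 + 1/d)

rng = np.random.default_rng(10)
amounts = rng.lognormal(mean=7.0, sigma=1.2, size=2_000)  # stand-in for incomes
obs = np.bincount(first_digit(amounts), minlength=10)[1:]
stat, pval = chisquare(obs, f_exp=benford_p * obs.sum())
print(f"chi-square = {stat:.2f}, p = {pval:.4f}")
```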