New Economics Papers on Econometrics
By: | Tae-Hwan Kim (School of Economics, Yonsei University); Christophe Muller (DEFI, University of Aix-Marseille) |
Abstract: | In this paper we develop a test to detect the presence of endogeneity in conditional quantile models. The proposed test is a Hausman-type test in that it is based on the distance between two estimators, one of which is consistent only under no endogeneity while the other is consistent regardless of the presence of endogeneity in conditional quantile models. We derive the asymptotic distribution of the test statistic under the null hypothesis of no endogeneity. The finite sample properties of the test are investigated by Monte Carlo simulations, and the test is found to show reasonably good size and power properties in finite samples. Finally, we apply our approach to test for endogeneity in a conditional quantile model for estimating Engel curves using UK consumption and expenditure data. The pattern of endogeneity found in the Engel curve is revealed to vary across quantiles. |
Keywords: | regression quantile, endogeneity, two-stage estimation, Hausman test, Engel curve |
JEL: | C21 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:yon:wpaper:2012rwp-49&r=ecm |
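The distance logic in the abstract above can be illustrated with the classical Hausman form of the statistic. This is a generic sketch under textbook assumptions (the efficient estimator attains the bound under the null, so the contrast variance is the difference of the two variances); it is not the paper's quantile-specific construction, and the function name is ours.

```python
import numpy as np
from scipy import stats

def hausman_statistic(b_efficient, b_consistent, v_efficient, v_consistent):
    """Classical Hausman distance statistic.

    Under the null both estimators are consistent, their difference converges
    to zero, and the statistic is asymptotically chi-squared with dim(b)
    degrees of freedom. The variance of the contrast is approximated by
    V_consistent - V_efficient (valid when the efficient estimator attains
    the bound under the null)."""
    diff = np.asarray(b_consistent, dtype=float) - np.asarray(b_efficient, dtype=float)
    v_diff = np.asarray(v_consistent, dtype=float) - np.asarray(v_efficient, dtype=float)
    stat = float(diff @ np.linalg.pinv(v_diff) @ diff)
    pval = float(stats.chi2.sf(stat, df=diff.size))
    return stat, pval
```

A large statistic (small p-value) signals that the always-consistent estimator has drifted away from the one that is consistent only under exogeneity.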
By: | Arndt Reichert; Harald Tauchmann |
Abstract: | The classical Heckman (1976, 1979) selection correction estimator (heckit) is misspecified and inconsistent if an interaction of the outcome variable and an explanatory variable matters for selection. To address this specification problem, a full information maximum likelihood estimator and a simple two-step estimator are developed. Monte Carlo simulations illustrate that the bias of the ordinary heckit estimator is removed by these generalized estimation procedures. Along with OLS and the ordinary heckit procedure, we apply these estimators to data from a randomized trial that evaluates the effectiveness of financial incentives for weight loss among the obese. Estimation results indicate that the choice of the estimation procedure clearly matters. |
Keywords: | Selection bias; interaction; heterogeneity; generalized estimator |
JEL: | C24 C93 |
Date: | 2012–10 |
URL: | http://d.repec.org/n?u=RePEc:rwi:repape:0372&r=ecm |
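For reference, the ordinary heckit two-step baseline that the abstract generalizes can be sketched as below. The probit first step and inverse-Mills-ratio second step follow the textbook recipe; variable names are illustrative and the corrected second-step standard errors are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def heckman_two_step(y, X, s, W):
    """Classical Heckman (1979) two-step ('heckit') estimator, as a sketch.

    Step 1: probit of the selection indicator s on W.
    Step 2: OLS of y on [X, inverse Mills ratio] over selected observations.
    Corrected standard errors are omitted; illustration only."""
    def neg_loglik(g):                                 # probit log-likelihood
        p = np.clip(norm.cdf(W @ g), 1e-10, 1 - 1e-10)
        return -np.sum(s * np.log(p) + (1 - s) * np.log(1 - p))
    g_hat = minimize(neg_loglik, np.zeros(W.shape[1]), method="BFGS").x
    index = W @ g_hat
    mills = norm.pdf(index) / norm.cdf(index)          # inverse Mills ratio
    sel = s > 0.5
    X_aug = np.column_stack([X[sel], mills[sel]])
    beta = np.linalg.lstsq(X_aug, y[sel], rcond=None)[0]
    return beta, g_hat
```

The last element of `beta` is the coefficient on the Mills ratio; its significance is the usual informal check for selection bias.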
By: | Paulo Parente; Richard Smith (Institute for Fiscal Studies and University of Cambridge) |
Abstract: | The primary concern of this article is the provision of definitions and tests for exogeneity appropriate for models defined through sets of conditional moment restrictions. These forms of exogeneity are expressed as additional conditional moment constraints and may be equivalently formulated as a countably infinite number of unconditional restrictions. Consequently, tests of exogeneity may be seen as tests for an additional set of infinite moment conditions. A number of test statistics are suggested based on GMM and generalised empirical likelihood. The asymptotic properties of the statistics are described under both the null hypothesis and a suitable sequence of local alternatives. An extensive set of simulation experiments explores the relative practical efficacy of the various test statistics in terms of empirical size and size-adjusted power. |
JEL: | C12 C14 C30 |
Date: | 2012–10 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:30/12&r=ecm |
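Viewing exogeneity as an additional set of moment conditions suggests the familiar GMM overidentification machinery. The sketch below is Hansen's J statistic for a finite set of unconditional moments — a simplified stand-in for the paper's GMM and GEL statistics built from infinitely many conditions.

```python
import numpy as np
from scipy import stats

def gmm_j_statistic(u, Z):
    """Hansen's J statistic for the moment conditions E[z_i * u_i] = 0.

    With g_i = z_i * u_i, J = n * gbar' S^{-1} gbar, where S is the sample
    second moment of g. Under valid moments, J is asymptotically chi-squared
    with as many degrees of freedom as there are moment conditions."""
    g = Z * u[:, None]                       # n x k matrix of moment terms
    gbar = g.mean(axis=0)
    S = g.T @ g / len(u)
    J = float(len(u) * gbar @ np.linalg.solve(S, gbar))
    pval = float(stats.chi2.sf(J, df=Z.shape[1]))
    return J, pval
```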
By: | Søren Johansen (University of Copenhagen and CREATES); Marco Riani (Dipartimento di Economia, Università di Parma); Anthony C. Atkinson (Department of Statistics, London School of Economics) |
Abstract: | We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error term. We derive the asymptotic theory of the maximum likelihood estimators and show that they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum of squares of one-step-ahead standardized prediction errors, when the parameters are estimated, differs from the chi-squared distribution by a term which tends to infinity at a lower rate than $\chi _{n}^{2}$. We further prove that, in the prediction error decomposition, the term involving the sum of the variance of one-step-ahead standardized prediction errors is convergent. Finally, we provide a small simulation study. Empirical comparisons of a consistent version of our $C_{p}$ statistic with BIC and a generalized RIC show that our statistic has superior performance, particularly for small signal-to-noise ratios. A new plot of our time series $C_{p}$ statistic is highly informative about the choice of model. Along the way we introduce a new version of AIC for regression models, show that it estimates a Kullback-Leibler distance, and provide a bias-corrected version for small samples. We highlight the connections with standard Mallows $C_{p}$. |
Keywords: | AIC, ARMA models, bias correction, BIC, $C_{p}$ plot, generalized RIC, Kalman filter, Kullback-Leibler distance, state-space formulation |
JEL: | C22 |
Date: | 2012–11–08 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-46&r=ecm |
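As a baseline for the time-series $C_{p}$ discussed above, the standard Mallows $C_{p}$ for i.i.d. OLS errors can be computed as follows; the ARIMA-error extension in the paper replaces the residual sums of squares with state-space quantities.

```python
import numpy as np

def mallows_cp(y, X_full, subsets):
    """Mallows' C_p for candidate OLS subsets of a full regression.

    C_p = RSS_p / s^2 - n + 2p, where s^2 is estimated from the full model.
    Models with C_p close to p are judged adequate. Sketch for the i.i.d.
    error case only."""
    n = len(y)
    def rss(X):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        r = y - X @ b
        return float(r @ r)
    s2 = rss(X_full) / (n - X_full.shape[1])
    return {tuple(cols): rss(X_full[:, list(cols)]) / s2 - n + 2 * len(cols)
            for cols in subsets}
```

By construction $C_{p}$ for the full model equals its own parameter count exactly, which makes a convenient sanity check.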
By: | Xiaohong Chen (Institute for Fiscal Studies and Yale University); Jinyong Hahn; Zhipeng Liao |
Abstract: | In this note, we characterise the semiparametric efficiency bound for a class of semiparametric models in which the unknown nuisance functions are identified via nonparametric conditional moment restrictions with possibly non-nested or overlapping conditioning sets, and the finite dimensional parameters are potentially over-identified via unconditional moment restrictions involving the nuisance functions. We discover the surprising result that semiparametric two-step optimally weighted GMM estimators achieve the efficiency bound, where the nuisance functions could be estimated via any consistent nonparametric procedures in the first step. Regardless of whether the efficiency bound has a closed form expression or not, we provide easy-to-compute sieve based optimal weight matrices that lead to asymptotically efficient two-step GMM estimators. |
Keywords: | Overlapping information sets, semiparametric efficiency, two-step GMM |
JEL: | C14 C31 C32 |
Date: | 2012–10 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:31/12&r=ecm |
By: | Peter Reinhard Hansen (European University Institute and CREATES); Allan Timmermann (UCSD and CREATES) |
Abstract: | We establish the equivalence between a commonly used out-of-sample test of equal predictive accuracy and the difference between two Wald statistics. This equivalence greatly simplifies the computational burden of calculating recursive out-of-sample tests and evaluating their critical values. Our results shed new light on many aspects of the test and establish certain weaknesses associated with using out-of-sample forecast comparison tests to conduct inference about nested regression models. |
Keywords: | Out-of-sample Forecast Evaluation, Nested Models, Testing. |
JEL: | C12 C53 G17 |
Date: | 2012–10–10 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-45&r=ecm |
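The building block of the equivalence is the Wald statistic for the nesting restriction. A minimal homoskedastic OLS version is sketched below; the paper relates the out-of-sample statistic to the difference of two such Wald statistics evaluated at different sample splits, with details (weighting, recursive estimation) not reproduced here.

```python
import numpy as np

def wald_nested(y, X, restricted_cols):
    """Wald statistic for excluding from an OLS regression of y on X every
    column NOT listed in restricted_cols (homoskedastic form)."""
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = float(resid @ resid) / (n - k)
    V = s2 * np.linalg.inv(X.T @ X)          # coefficient covariance
    drop = [j for j in range(k) if j not in restricted_cols]
    b_drop = b[drop]
    V_drop = V[np.ix_(drop, drop)]
    return float(b_drop @ np.linalg.solve(V_drop, b_drop))
```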
By: | Alexios Ghalanos (Faculty of Finance, Cass Business School); Eduardo Rossi (Department of Economics and Management, University of Pavia); Giovanni Urga (Faculty of Finance, Cass Business School and University of Bergamo) |
Abstract: | In this paper, we propose a novel Independent Factor Autoregressive Conditional Density (IFACD) model able to generate time-varying higher moments using an independent factor setup. Our proposed framework incorporates dynamic estimation of higher comovements and feasible portfolio representation within a non-elliptical multivariate distribution. We report an empirical application, using returns data from 14 MSCI equity index iShares for the period 1996 to 2011, and we show that the IFACD model provides superior VaR forecasts and portfolio allocations with respect to the CHICAGO and DCC models. |
Keywords: | Independent Factor Model, GO-GARCH, Independent Component Analysis, Time-varying Co-moments |
JEL: | C13 C16 C32 G11 |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:021&r=ecm |
By: | Halbert White; Tae-Hwan Kim (School of Economics, Yonsei University); Simone Manganelli (European Central Bank, DG-Research) |
Abstract: | This paper proposes methods for estimation and inference in multivariate, multi-quantile models. The theory can simultaneously accommodate models with multiple random variables, multiple confidence levels, and multiple lags of the associated quantiles. The proposed framework can be conveniently thought of as a vector autoregressive (VAR) extension to quantile models. We estimate a simple version of the model using market equity returns data to analyse spillovers in the values at risk (VaR) between a market index and financial institutions. We construct impulse-response functions for the quantiles of a sample of 230 financial institutions around the world and study how financial institution-specific and system-wide shocks are absorbed by the system. We show how our methodology can successfully identify, both in-sample and out-of-sample, the set of financial institutions whose risk is most sensitive to market-wide shocks in situations of financial distress, and can prove a valuable addition to the traditional toolkit of policy makers and supervisors. |
Keywords: | Quantile impulse-responses, spillover, codependence, CAViaR |
JEL: | C13 C14 C32 |
Date: | 2012–08 |
URL: | http://d.repec.org/n?u=RePEc:yon:wpaper:2012rwp-45&r=ecm |
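A close relative of the models above is the univariate CAViaR recursion of Engle and Manganelli, which the multivariate multi-quantile framework extends. A minimal sketch, assuming the symmetric absolute value specification and a crude Nelder-Mead pinball-loss search (initial values and warm-up length are arbitrary choices of the sketch):

```python
import numpy as np
from scipy.optimize import minimize

def caviar_sav(y, prob, b, warmup=50):
    """Symmetric absolute value CAViaR recursion for the prob-quantile:
    q_t = b0 + b1 * q_{t-1} + b2 * |y_{t-1}|."""
    q = np.empty(len(y))
    q[0] = np.quantile(y[:warmup], prob)     # crude initialization
    for t in range(1, len(y)):
        q[t] = b[0] + b[1] * q[t - 1] + b[2] * abs(y[t - 1])
    return q

def pinball_loss(y, q, prob):
    """Tick/pinball loss, minimized by the true conditional quantile."""
    u = y - q
    return float(np.mean(u * (prob - (u < 0))))

def fit_caviar(y, prob, b_start=(0.0, 0.8, -0.2)):
    """Estimate the recursion parameters by direct pinball-loss search."""
    obj = lambda b: pinball_loss(y, caviar_sav(y, prob, b), prob)
    return minimize(obj, np.array(b_start), method="Nelder-Mead").x
```

In practice CAViaR objectives are nonsmooth, so multi-start or genetic searches are commonly used before the simplex refinement.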
By: | Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and University College London) |
Abstract: | In this paper we study a random coefficient model for a binary outcome. We allow for the possibility that some or even all of the regressors are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent of the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalised instrumental variable models studied in Chesher and Rosen (2012a), and we thus apply identification results from that and related studies to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. |
Keywords: | random coefficients, instrumental variables, endogeneity, incomplete models, set identification, partial identification, random sets |
JEL: | C10 C14 C50 C51 |
Date: | 2012–10 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:34/12&r=ecm |
By: | Peter Reinhard Hansen (European University Institute and CREATES); Zhuo Huang (Peking University, National School of Development, China Center for Economic Research) |
Abstract: | We introduce the Realized Exponential GARCH model that can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications and highlights the advantages of the new specification. |
Keywords: | EGARCH, High Frequency Data, Realized Variance, Leverage Effect. |
JEL: | C10 C22 C80 |
Date: | 2012–10–10 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2012-44&r=ecm |
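The dependence structure described above can be illustrated with a one-measure filter in the spirit of the Realized Exponential GARCH family. The parameter values and leverage function below are illustrative assumptions, and estimation (the joint likelihood of returns and the realized measure) is omitted:

```python
import numpy as np

def realized_egarch_filter(r, x, omega, beta, gamma, xi, phi, tau1, tau2):
    """One-measure filter in the spirit of the Realized Exponential GARCH.

    State:       log h_{t+1} = omega + beta*log h_t + tau(z_t) + gamma*u_t
    Measurement: log x_t     = xi + phi*log h_t + u_t  (u_t recovered below)
    with leverage function tau(z) = tau1*z + tau2*(z^2 - 1)."""
    n = len(r)
    logh = np.empty(n)
    logh[0] = np.log(np.var(r))                # crude initialization
    for t in range(n - 1):
        z = r[t] / np.exp(0.5 * logh[t])       # standardized return
        u = np.log(x[t]) - xi - phi * logh[t]  # measurement residual
        tau = tau1 * z + tau2 * (z * z - 1.0)
        logh[t + 1] = omega + beta * logh[t] + tau + gamma * u
    return np.exp(logh)
```

The measurement residual u feeding back into the variance recursion is what lets the realized measure drive volatility updates.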
By: | Jingzhao Qi; Huijie Yang |
Abstract: | A new concept, called the balanced estimator of diffusion entropy, is proposed to detect scalings in short time series. The effectiveness of the method is verified by means of a large number of artificial fractional Brownian motions. It is also used to detect scaling properties and structural breaks in stock price series from the Shanghai stock market. |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1211.2862&r=ecm |
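Plain diffusion entropy analysis — which the "balanced estimator" refines — can be sketched directly: form window sums, estimate their density, and read the scaling exponent off the growth of the differential entropy in log window length. The histogram bandwidth here is an arbitrary assumption of the sketch:

```python
import numpy as np

def diffusion_entropy(series, window_sizes, bins=50):
    """Plain diffusion entropy analysis (DEA) sketch.

    For each window length t, form overlapping sums of length t, estimate
    their density with a histogram, and compute the differential entropy
    S(t). A scaling law S(t) = A + delta*ln(t) reveals the exponent delta.
    (This is standard DEA, not the paper's 'balanced' variant.)"""
    csum = np.concatenate([[0.0], np.cumsum(series)])
    entropies = []
    for t in window_sizes:
        x = csum[t:] - csum[:-t]                 # overlapping window sums
        p, edges = np.histogram(x, bins=bins, density=True)
        w = np.diff(edges)
        mask = p > 0
        entropies.append(float(-np.sum(p[mask] * np.log(p[mask]) * w[mask])))
    # slope of S(t) on ln(t) estimates the scaling exponent delta
    delta = float(np.polyfit(np.log(window_sizes), entropies, 1)[0])
    return np.array(entropies), delta
```

For ordinary Brownian motion (i.i.d. Gaussian increments) the exponent should be close to 0.5.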
By: | Stan Hurn (QUT); Ken Lindsay (QUT); Andrew McClelland (QUT) |
Abstract: | This paper describes a maximum likelihood method for estimating the parameters of Heston's model of stochastic volatility using data on an underlying market index and the prices of options written on that index. Parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options) are identified, including the equity and volatility risk premia. The estimation is implemented using a particle filter. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using Graphical Processing Units. A byproduct of this focus on easing the computational burden is the development of a simplification of the closed-form approximation used to price European options in Heston's model. The efficacy of the filter is demonstrated under simulation and an empirical investigation of the fit of the model to the S&P 500 Index is undertaken. All the parameters of the model are reliably estimated and, in contrast to previous work, the volatility premium is well estimated and found to be significant. |
Keywords: | stochastic volatility, parameter estimation, maximum likelihood, particle filter |
JEL: | C22 C52 |
Date: | 2012–10–18 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2012_91&r=ecm |
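The particle-filter likelihood can be illustrated on a much simpler discrete-time log-volatility model than Heston's; the bootstrap filter below shows the propagate/weight/resample cycle that, in the paper, is run jointly over index and option data on GPUs. Model, parameterization, and function name are ours:

```python
import numpy as np

def sv_particle_loglik(y, mu, phi, sigma_v, n_particles=2000, seed=0):
    """Bootstrap particle filter log-likelihood for a basic discrete-time
    stochastic volatility model:
        h_t = mu + phi*(h_{t-1} - mu) + sigma_v*eta_t,  y_t = exp(h_t/2)*eps_t.
    A toy stand-in for the paper's filter for Heston's model."""
    rng = np.random.default_rng(seed)
    # initialize particles from the stationary distribution of h
    h = rng.normal(mu, sigma_v / np.sqrt(1 - phi**2), n_particles)
    loglik = 0.0
    for yt in y:
        # propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma_v * rng.normal(size=n_particles)
        # weight by the measurement density of the observed return
        var = np.exp(h)
        w = np.exp(-0.5 * yt**2 / var) / np.sqrt(2 * np.pi * var)
        loglik += np.log(np.mean(w) + 1e-300)
        # multinomial resampling
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        h = h[idx]
    return loglik
```

Maximizing this simulated likelihood over the parameters gives the particle-filter MLE; the GPU parallelism in the paper targets exactly the per-particle propagate/weight step.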
By: | Itai Sher (Department of Economics, University of Minnesota); Kyoo il Kim (Department of Economics, University of Minnesota) |
Abstract: | We study the nonparametric identification of distributions of utility functions in a multiple purchase setting with a finite number of consumers. Each utility function takes as arguments subsets or, alternatively, quantities of the multiple goods. We exploit mathematical insights from auction theory to generically identify the distribution of utility functions. We use price variation and aggregate data on the sales of each product, but not on individual level purchases or the total sales of bundles of products. |
Keywords: | Multiple Discrete Choice, Multiple Purchase, Nonparametric Identification, Distribution of Utility Functions, Individual Heterogeneity, Submodularity, Gross Substitutes |
JEL: | C14 D11 |
Date: | 2012–11–09 |
URL: | http://d.repec.org/n?u=RePEc:min:wpaper:2012-2&r=ecm |
By: | Chatelain, Jean-Bernard; Ralf, Kirsten |
Abstract: | In multiple regressions, an explanatory variable whose simple correlation with the dependent variable is below 0.1 in absolute value (such as aid with economic growth) may nevertheless obtain a very large and statistically significant estimated parameter, which is unfortunately outlier-driven and spurious. This occurs when another regressor that is highly correlated with the initial regressor is included, such as a lag, a square, or an interaction term of this regressor. The analysis is applied to the Burnside and Dollar [2000] article, whose finding that aid had an effect on growth only in countries achieving good macroeconomic policies is argued to be driven by the Botswana outliers. |
Keywords: | Near-Multicollinearity; Student t-Statistic; Spurious regressions; Ceteris paribus; Parameter Inflation Factor; Growth; Foreign Aid |
JEL: | F35 C52 C12 P45 |
Date: | 2012–11–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:42533&r=ecm |
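The parameter-inflation mechanism described above is easy to reproduce: regressing on a near-duplicate pair of regressors multiplies the coefficient standard errors by roughly the square root of the variance inflation factor. A simulated illustration (all data and names are hypothetical):

```python
import numpy as np

def ols_summary(y, X):
    """OLS coefficients, homoskedastic standard errors, and t-statistics."""
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = float(resid @ resid) / (n - k)
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    return b, se, b / se
```

With a regressor and a near-duplicate (e.g. the regressor plus tiny noise, mimicking a square or interaction of a narrowly dispersed variable), the individual coefficients become wildly unstable even though the fit barely changes.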
By: | Durán-Vázquez, Rocio; Lorenzo-Valdes, Arturo; Ruiz-Porras, Antonio |
Abstract: | We develop a GARCH model with autoregressive conditional asymmetry to describe time series. This means that, in addition to the conditional mean and variance, we assume that the skewness describes the behavior of the time series. Analytically, we use the methodology proposed by Fernández and Steel (1998) to define the behavior of the innovations of the model, and the approach developed by Brooks et al. (2005) to build it. Moreover, we show its usefulness by modeling the daily returns of the Mexican Stock Market Index (IPC) between January 3rd, 2008 and September 29th, 2009. |
Keywords: | Conditional Asymmetry; GARCH; Skewness; Stock Market Returns; Mexico |
JEL: | C22 G10 |
Date: | 2012–11–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:42548&r=ecm |
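The Fernández and Steel (1998) skewing device applied to GARCH innovations can be sketched with a constant skewness parameter; the paper's contribution is to make that parameter itself autoregressive, which is not implemented here, and the unit-variance standardization of the skewed density is also omitted for brevity:

```python
import numpy as np
from scipy.stats import norm

def fs_skew_logpdf(z, gamma):
    """Fernandez-Steel (1998) skewed density built from a standard normal:
    f(z) = 2/(gamma + 1/gamma) * [phi(z/gamma) if z >= 0 else phi(gamma*z)].
    gamma > 1 skews right, gamma < 1 skews left, gamma = 1 is symmetric."""
    scaled = np.where(z >= 0, z / gamma, z * gamma)
    return np.log(2.0 / (gamma + 1.0 / gamma)) + norm.logpdf(scaled)

def garch11_skew_loglik(r, omega, alpha, beta, gamma):
    """Log-likelihood of a GARCH(1,1) with constant Fernandez-Steel skewness.
    (The paper lets the skewness parameter itself be autoregressive.)"""
    h = np.empty(len(r))
    h[0] = np.var(r)                            # crude initialization
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    z = r / np.sqrt(h)
    return float(np.sum(fs_skew_logpdf(z, gamma) - 0.5 * np.log(h)))
```

Setting gamma = 1 collapses the skewed density to the standard normal, which makes the symmetric Gaussian GARCH likelihood a nested special case.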
By: | Claude Lefèvre (ULB - Département de Mathématique [Bruxelles] - Université Libre de Bruxelles); Stéphane Loisel (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429) |
Abstract: | This paper is concerned with the class of distributions, continuous or discrete, whose shape is monotone of finite integer order t. A characterization is presented as a mixture of a minimum of t independent uniform distributions. Then, a comparison of t-monotone distributions is made using the s-convex stochastic orders. A link is also pointed out with an alternative approach to monotonicity based on a stationary-excess operator. Finally, the monotonicity property is exploited to reinforce the classical Markov and Lyapunov inequalities. The results are illustrated by several applications to insurance. |
Date: | 2012–10–01 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00750562&r=ecm |
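The mixture characterization — a t-monotone distribution as a mixture of minima of t independent uniforms — can be probed by simulating the building block, whose density $t(1-x)^{t-1}$ on $[0,1]$ is monotone of order t and whose mean is $1/(t+1)$:

```python
import numpy as np

def min_of_uniforms(t, size, rng):
    """Draw the minimum of t independent U(0,1) variables, the building
    block in the paper's mixture characterization of t-monotone densities.
    Its density t*(1-x)^(t-1) is decreasing on [0,1], and its mean is
    1/(t+1)."""
    return rng.uniform(size=(size, t)).min(axis=1)
```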
By: | Paola Cerchiello (Department of Economics and Management, University of Pavia); Paolo Giudici (Department of Economics and Management, University of Pavia) |
Abstract: | In this contribution we aim at improving ordinal variable selection in the context of causal models. We propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric and thus keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal level. A noticeable instance of this arises when ordinal variables result from rankings of companies that are to be evaluated according to different macro and micro economic aspects, leading to different ordinal covariates that correspond to different ratings, each entailing a different magnitude of the probability of default. For each given covariate, we suggest partitioning the statistical units into as many groups as the number of observed levels of the covariate. We then assume individual defaults to be homogeneous within each group and heterogeneous across groups. Our aim is to compare and, therefore, select the partition structures resulting from the consideration of different explanatory covariates. The metric we choose for variable comparison is the posterior probability of each partition. The application of our proposal to a European credit risk database shows that it performs well, leading to a coherent and easy-to-explain method for variable selection and for averaging the estimated default probabilities. |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:019&r=ecm |
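One concrete way to score covariate-induced partitions of binary defaults, in the spirit of the posterior-probability metric above, is a conjugate Beta-Bernoulli marginal likelihood per group; this is a simplified stand-in for the paper's Bayesian nonparametric construction, with the prior hyperparameters as assumptions:

```python
import numpy as np
from scipy.special import betaln

def partition_log_marginal(defaults, groups, a=1.0, b=1.0):
    """Log marginal likelihood of binary defaults under a partition, with an
    independent Beta(a, b) prior on each group's default rate:
        prod_g B(a + d_g, b + n_g - d_g) / B(a, b),
    where d_g and n_g are the defaults and size of group g. Under equal prior
    probabilities across partitions, this ranks the candidate covariates."""
    logml = 0.0
    for g in np.unique(groups):
        d = defaults[groups == g]
        logml += betaln(a + d.sum(), b + len(d) - d.sum()) - betaln(a, b)
    return float(logml)
```

A covariate whose levels genuinely separate default rates yields a higher marginal likelihood than an uninformative one.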