New Economics Papers on Econometrics
By: | Copt, Samuel; Heritier, Stephane |
Abstract: | Mixed linear models are used to analyse data in many settings. These models generally rely on the normality assumption and are often fitted by means of the maximum likelihood estimator (MLE) or the restricted maximum likelihood estimator (REML). However, the sensitivity of these estimation techniques and related tests to this underlying assumption has been identified as a weakness that can even lead to wrong interpretations. Recently Copt and Victoria-Feser (2005) proposed a high breakdown estimator, namely an S-estimator, for general mixed linear models. It has the advantage of being easy to compute - even for highly structured variance matrices - and allows the computation of a robust score test. However, this proposal cannot be used to define a likelihood ratio type test, which is certainly the most direct route to robustify an F-test. As the latter is usually a key tool for testing hypotheses in mixed linear models, we propose two new robust estimators that allow the desired extension. They also lead to resistant Wald-type tests useful for testing contrasts and covariate effects. We study their properties theoretically and by means of simulations. An analysis of a real data set illustrates the advantage of the new approach in the presence of outlying observations. |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:gen:geneem:2006.01&r=ecm |
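Once a robust estimate and its asymptotic covariance are available, the resistant Wald-type tests mentioned in the abstract reduce to a standard quadratic form. A minimal sketch (our illustration, not the authors' code; the robust fitter itself is assumed given, and the toy numbers are made up):

```python
# Wald-type test of a linear hypothesis C @ beta = 0, given any estimator of
# beta and its asymptotic covariance (e.g. from a robust S-type fit of a
# mixed linear model). Names and numbers are illustrative only.
import numpy as np
from scipy import stats

def wald_test(beta_hat, cov_hat, C):
    """W = (C b)' [C V C']^{-1} (C b), asymptotically chi^2 with rank(C) df."""
    Cb = C @ beta_hat
    W = float(Cb @ np.linalg.solve(C @ cov_hat @ C.T, Cb))
    df = np.linalg.matrix_rank(C)
    return W, stats.chi2.sf(W, df)

# Toy usage: test that the two covariate effects are jointly zero.
beta_hat = np.array([1.2, 0.05, -0.03])            # hypothetical robust fit
cov_hat = np.diag([0.04, 0.01, 0.01])              # hypothetical covariance
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # contrast matrix
print(wald_test(beta_hat, cov_hat, C))
```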
By: | Philippe HUBER (University of Geneva, HEC and FAME); Olivier SCAILLET (University of Geneva, HEC and FAME); Maria-Pia VICTORIA-FESER (University of Geneva, HEC and FAME) |
Abstract: | In this paper we develop a structural equation model with latent variables in an ordinal setting which allows us to test broker-dealers' ability to predict financial market movements. We use a multivariate logit model in a latent factor framework, develop a tractable estimator based on a Laplace approximation, and show its consistency and asymptotic normality. Monte Carlo experiments reveal that both the estimation method and the testing procedure perform well in small samples. An empirical illustration is given for mid-term forecasts simultaneously made by two broker-dealers for several countries. |
Keywords: | structural equation model, latent variable, generalised linear model, factor analysis, multinomial logit, forecasts, LAMLE, canonical correlation |
JEL: | C12 C13 C30 C51 C52 C53 G10 |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:fam:rpseri:rp159&r=ecm |
By: | Timothy Halliday (Department of Economics, University of Hawaii at Manoa; John A. Burns School of Medicine, University of Hawaii at Manoa) |
Abstract: | We consider the identification of state dependence in a non-stationary process of binary outcomes within the context of the dynamic logit model with time-variant transition probabilities and an arbitrary distribution for the unobserved heterogeneity. We derive a simple identification result that allows us to calculate a test for state dependence in this model. We also consider alternative tests for state dependence that will have desirable properties only in stationary processes and derive their asymptotic properties when the true underlying process is non-stationary. Finally, we provide Monte Carlo evidence that shows a range of non-stationarity in which the effects of mis-specifying the binary process as stationary are not too large. |
Keywords: | Dynamic Panel Data Models, State Dependence, Non-Stationary Processes |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:hai:wpaper:200601&r=ecm |
By: | Gianni Amisano; Raffaella Giacomini |
Abstract: | We propose a test for comparing the out-of-sample accuracy of competing density forecasts of a variable. The test is valid under general conditions: the data can be heterogeneous and the forecasts can be based on (nested or non-nested) parametric models or produced by semi-parametric, non-parametric or Bayesian estimation techniques. The evaluation is based on scoring rules, which are loss functions defined over the density forecast and the realizations of the variable. We restrict attention to the logarithmic scoring rule and propose an out-of-sample 'weighted likelihood ratio' test that compares weighted averages of the scores for the competing forecasts. The user-defined weights are a way to focus attention on different regions of the distribution of the variable. For a uniform weight function, the test can be interpreted as an extension of Vuong's (1989) likelihood ratio test to time series data and to an out-of-sample testing framework. We apply the tests to evaluate density forecasts of US inflation produced by linear and Markov Switching Phillips curve models estimated by either maximum likelihood or Bayesian methods. We conclude that a Markov Switching Phillips curve estimated by maximum likelihood produces the best density forecasts of inflation. |
URL: | http://d.repec.org/n?u=RePEc:ubs:wpaper:ubs0504&r=ecm |
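The proposed statistic is essentially a Diebold-Mariano-type t-ratio on weighted log-score differentials. A minimal sketch, assuming the two log predictive densities have already been evaluated at the realizations (the Bartlett-weighted HAC variance is a standard choice, not necessarily the authors'):

```python
# Out-of-sample weighted likelihood ratio test: t-statistic for H0 of equal
# weighted average log scores of two density forecasts. All names are ours.
import numpy as np

def wlr_test(logf1, logf2, weights=None, lags=4):
    d = np.asarray(logf1) - np.asarray(logf2)   # log-score differentials
    if weights is not None:
        d = np.asarray(weights) * d             # user-defined weighting
    n = d.size
    dbar = d.mean()
    u = d - dbar
    # Newey-West long-run variance of the weighted score differential
    lrv = u @ u / n
    for k in range(1, lags + 1):
        gamma = u[k:] @ u[:-k] / n
        lrv += 2 * (1 - k / (lags + 1)) * gamma
    return dbar / np.sqrt(lrv / n)   # compare with N(0,1) critical values
```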
By: | Michal Benko; Wolfgang Härdle; Alois Kneip |
Abstract: | Functional principal component analysis (FPCA) based on the Karhunen-Loève decomposition has been successfully applied in many applications, mainly to one-sample problems. In this paper we consider common functional principal components for two-sample problems. Our research is motivated not only by the theoretical challenge of this data situation but also by the applied question of the dynamics of implied volatility (IV) functions. For different maturities the log-returns of IVs are samples of (smooth) random functions, and the methods proposed here study the similarities of their stochastic behavior. First we present a new method for estimating functional principal components from discrete noisy data. Next we present two-sample inference for FPCA and develop the corresponding two-sample theory. We propose bootstrap tests for the equality of eigenvalues, eigenfunctions, and mean functions of two functional samples, illustrate the properties of the tests in a simulation study, and apply the method to the IV analysis. |
Keywords: | Functional Principal Components, Nonparametric Regression, Bootstrap, Two Sample Problem |
JEL: | C14 G19 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-010&r=ecm |
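One of the proposed bootstrap tests, equality of the leading eigenvalue across the two samples, can be caricatured in a few lines once the curves are discretized on a common grid. A rough, simplified illustration (resampling whole curves and recentring at the observed difference; not the authors' exact procedure):

```python
# Bootstrap test for equality of the first FPCA eigenvalue of two samples of
# curves, each stored as rows of a matrix on a common grid. Our own sketch.
import numpy as np

def first_eigenvalue(X):
    """Leading eigenvalue of the empirical covariance of the curves (rows)."""
    Xc = X - X.mean(axis=0)
    return np.linalg.eigvalsh(Xc.T @ Xc / len(X))[-1]

def boot_test(X1, X2, B=999, rng=np.random.default_rng(0)):
    t_obs = first_eigenvalue(X1) - first_eigenvalue(X2)
    t_boot = np.empty(B)
    for b in range(B):
        i1 = rng.integers(0, len(X1), len(X1))
        i2 = rng.integers(0, len(X2), len(X2))
        # recentre at t_obs so the bootstrap mimics the null of equality
        t_boot[b] = (first_eigenvalue(X1[i1]) - first_eigenvalue(X2[i2])) - t_obs
    return t_obs, np.mean(np.abs(t_boot) >= abs(t_obs))  # two-sided p-value
```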
By: | Russell Davidson (GREQAM and McGill University); James MacKinnon (Queen's University) |
Abstract: | We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that all the test statistics--Student's t, Anderson-Rubin, Kleibergen's K, and likelihood ratio (LR)--can be written as functions of six random quantities. This leads to a number of interesting results about the properties of the tests under weak-instrument asymptotics. We then propose several new procedures for bootstrapping the three non-exact test statistics and a conditional version of the LR test. These use more efficient estimates of the parameters of the reduced-form equation than existing procedures. When the best of these new procedures is used, K and conditional LR have excellent performance under the null, and LR also performs very well. However, power considerations suggest that the conditional LR test, bootstrapped using this new procedure when the sample size is not large, is probably the method of choice. |
Keywords: | bootstrap test, weak instruments, Anderson-Rubin test, conditional LR test, Wald test
JEL: | C12 C15 C30 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1024&r=ecm |
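Of the statistics studied, the Anderson-Rubin statistic is the simplest to write down. A sketch for the case of a single endogenous regressor and no included exogenous variables (an assumption we make for brevity):

```python
# Anderson-Rubin statistic for H0: beta = beta0 in y = x*beta + u, with the
# single endogenous regressor x instrumented by the columns of Z.
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    """AR = [r'P_Z r / q] / [r'M_Z r / (n - q)], r = y - x*beta0, q = dim(Z)."""
    n, q = Z.shape
    r = y - x * beta0
    PZr = Z @ np.linalg.solve(Z.T @ Z, Z.T @ r)   # projection onto span(Z)
    num = r @ PZr / q
    den = (r @ r - r @ PZr) / (n - q)
    AR = num / den
    return AR, stats.f.sf(AR, q, n - q)           # exact F under normality
```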
By: | Stefan Boes (Socioeconomic Institute, University of Zurich) |
Abstract: | Recent advances in the econometric modelling of count data have often been based on the generalized method of moments (GMM). However, the two-step GMM procedure may perform poorly in small samples, and several empirical likelihood-based estimators have been suggested as alternatives. In this paper I discuss empirical likelihood (EL) estimation for count data models with endogenous regressors. I carefully distinguish between parametric and semi-parametric methods and analyze the properties of the EL estimator by means of a Monte Carlo experiment. I apply the proposed method to estimate the effect of women’s schooling on fertility. |
Keywords: | Nonparametric likelihood, Poisson model, endogeneity, fertility and education |
JEL: | C14 C25 J13 |
Date: | 2004–03 |
URL: | http://d.repec.org/n?u=RePEc:soz:wpaper:0404&r=ecm |
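The EL machinery the abstract refers to rests on Owen's dual problem: for a candidate parameter value, the profile log-EL ratio comes from an inner optimisation over a Lagrange multiplier. A rough sketch of that inner step only (here g is just an n-by-m array of evaluated moments, e.g. instrument times residual; the crude Nelder-Mead search is our shortcut, not the paper's algorithm):

```python
# Inner (dual) step of empirical likelihood: the profile log-EL ratio is
# -max_l sum_i log(1 + l'g_i), with implied weights p_i = 1/(n(1 + l'g_i)).
import numpy as np
from scipy.optimize import minimize

def log_elr(g):
    """Profile log empirical likelihood ratio for moments g (n x m)."""
    n, m = g.shape
    def neg(l):
        arg = 1.0 + g @ l
        # keep implied probabilities strictly positive
        return np.inf if np.any(arg <= 1e-8) else -np.sum(np.log(arg))
    res = minimize(neg, np.zeros(m), method="Nelder-Mead")
    return res.fun   # = -sum log(1 + l'g) at the optimum, always <= 0
```

The outer EL estimator then maximizes this profile over the model parameter that enters g.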
By: | Kazuhiko Hayakawa |
Abstract: | This paper addresses the many instruments problem, i.e. (1) the trade-off between the bias and the efficiency of the GMM estimator, and (2) the inaccuracy of inference, in dynamic panel data models where unobservable heterogeneity may be large. We find that if we use all the instruments in levels, the GMM estimator is robust to large heterogeneity but inference is inaccurate. In contrast, if we use the minimum number of instruments in levels, in the sense that we use only one instrument for each period, the performance of the GMM estimator is heavily affected by the degree of heterogeneity: both the asymptotic bias and the variance are proportional to the magnitude of heterogeneity. To address this problem, we propose a new form of instruments obtained from the so-called backward orthogonal deviation transformation. The asymptotic analysis shows that the GMM estimator with the minimum number of new instruments has smaller asymptotic bias than typically used estimators, such as the GMM estimator with all instruments in levels, the LIML estimators and the within-groups estimators, while the asymptotic variance of the proposed estimator attains the lower bound. Thus both the asymptotic bias and the variance of the proposed estimator become small simultaneously. Simulation results show that our new GMM estimator outperforms the conventional GMM estimator with all instruments in levels in terms of RMSE and accuracy of inference. An empirical application to Spanish firm data is also provided. |
Keywords: | Dynamic panel data, many instruments, generalized method of moments estimator, unobservable large heterogeneity |
JEL: | C23 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d05-130&r=ecm |
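The backward orthogonal deviation transformation is the mirror image of the familiar forward orthogonal deviations: each observation is expressed as its deviation from the mean of its past values, rescaled so that homoskedastic errors stay homoskedastic. A sketch from the usual textbook formula (consult the paper for the exact scaling it employs):

```python
# Backward orthogonal deviations of a single time series:
#   x_t* = sqrt((t-1)/t) * (x_t - mean(x_1, ..., x_{t-1})),  t = 2..T.
import numpy as np

def backward_orthogonal_deviations(x):
    x = np.asarray(x, dtype=float)
    T = x.size
    out = np.full(T, np.nan)            # first observation is lost
    for t in range(1, T):               # 0-based index; period is t + 1
        c = np.sqrt(t / (t + 1.0))
        out[t] = c * (x[t] - x[:t].mean())
    return out

print(backward_orthogonal_deviations(np.arange(1.0, 6.0)))
```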
By: | Giovanni Forchini |
Abstract: | We derive general formulae for the asymptotic distribution of the LIML estimator for the coefficients of both endogenous and exogenous variables in a partially identified linear structural equation. We extend previous results of Phillips (1989) and Choi and Phillips (1992), where the focus was on IV estimators. We show that partial failure of identification affects the LIML estimator in that its moments do not exist, even asymptotically. |
Keywords: | LIML estimator, Partial Identification, Linear structural equation, Asymptotic distribution |
JEL: | C13 C30 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2006-1&r=ecm |
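For concreteness, the LIML estimator itself can be computed as a k-class estimator with kappa equal to the smallest eigenvalue of a generalized eigenproblem. A minimal sketch in standard notation (not tied to the paper's partially identified setting):

```python
# LIML for y = X @ beta + u with instrument matrix Z: k-class estimator with
# kappa = smallest eigenvalue of (W' M_Z W)^{-1} (W' W), where W = [y, X].
import numpy as np

def liml(y, X, Z):
    n = len(y)
    W = np.column_stack([y, X])
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    MZ = np.eye(n) - PZ
    A = W.T @ W
    B = W.T @ MZ @ W
    # smallest root of det(A - kappa * B) = 0
    kappa = np.min(np.real(np.linalg.eigvals(np.linalg.solve(B, A))))
    XkX = X.T @ X - kappa * (X.T @ MZ @ X)
    Xky = X.T @ y - kappa * (X.T @ MZ @ y)
    return np.linalg.solve(XkX, Xky)
```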
By: | Hiroyuki Kasahara (University of Western Ontario); Katsumi Shimotsu (Queen's University) |
Abstract: | This paper analyzes the higher-order properties of nested pseudo-likelihood (NPL) estimators and their practical implementation for parametric discrete Markov decision models in which the probability distribution is defined as a fixed point. We propose a new NPL estimator that can achieve quadratic convergence without fully solving the fixed point problem in every iteration. We then extend the NPL estimators to develop one-step NPL bootstrap procedures for discrete Markov decision models and provide some Monte Carlo evidence based on a machine replacement model of Rust (1987). The proposed one-step bootstrap test statistics and confidence intervals improve upon the first order asymptotics even with a relatively small number of iterations. Improvements are particularly noticeable when analyzing the dynamic impacts of counterfactual policies. |
Keywords: | k-step bootstrap; maximum pseudo-likelihood estimators; nested fixed point algorithm; Newton-Raphson method; policy iteration. |
JEL: | C12 C13 C15 C44 C63 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:uwo:uwowop:20064&r=ecm |
By: | James MacKinnon (Queen's University) |
Abstract: | The fast double bootstrap, or FDB, is a procedure for calculating bootstrap P values that is much more computationally efficient than the double bootstrap itself. In many cases, it can provide more accurate results than ordinary bootstrap tests. For the fast double bootstrap to be valid, the test statistic must be asymptotically independent of the random parts of the bootstrap data generating process. This paper presents simulation evidence on the performance of FDB tests in three cases of interest to econometricians. One of the cases involves both symmetric and equal-tail bootstrap tests, which, interestingly, can have quite different power properties. Another highlights the importance of imposing the null hypothesis on the bootstrap DGP. |
Keywords: | bootstrap test, serial correlation, ARCH errors, weak instruments, double bootstrap |
JEL: | C12 C15 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1023&r=ecm |
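The FDB procedure itself is compact: each first-level bootstrap sample spawns exactly one second-level sample, and the ordinary bootstrap p-value is adjusted using a quantile of the second-level statistics. A sketch for an upper-tail test, with an illustrative stat/dgp interface of our own:

```python
# Fast double bootstrap p-value for an upper-tail test. 'stat' computes the
# test statistic from a data set; 'dgp(data, rng)' draws one bootstrap sample
# under the null and can be re-applied to its own output. Illustrative API.
import numpy as np

def fdb_pvalue(data, stat, dgp, B=999, rng=np.random.default_rng(0)):
    tau_hat = stat(data)
    tau1 = np.empty(B)   # first-level bootstrap statistics
    tau2 = np.empty(B)   # one second-level statistic per first-level draw
    for b in range(B):
        d1 = dgp(data, rng)
        tau1[b] = stat(d1)
        tau2[b] = stat(dgp(d1, rng))
    p1 = np.mean(tau1 >= tau_hat)        # ordinary bootstrap p-value
    q = np.quantile(tau2, 1.0 - p1)      # (1 - p1) quantile of tau**
    return np.mean(tau1 >= q)            # FDB p-value
```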
By: | Simone Manganelli (European Central Bank, Kaiserstrasse 29, Postfach 16 03 19, 60066 Frankfurt am Main, Germany.) |
Abstract: | This paper argues that forecast estimators should minimise the loss function in a statistical, rather than deterministic, way. We introduce two new elements into the classical econometric analysis: a subjective guess on the variable to be forecasted and a probability reflecting the confidence associated with it. We then propose a new forecast estimator based on a test of whether the first derivatives of the loss function evaluated at the subjective guess are statistically different from zero. We show that the classical estimator is a special case of this new estimator, and that in general the two estimators are asymptotically equivalent. We illustrate the implications of this new theory with a simple simulation, an application to GDP forecasting and an example of mean-variance portfolio selection. |
Keywords: | Decision under uncertainty; estimation; overfitting; asset allocation |
JEL: | C13 C53 G11 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060584&r=ecm |
By: | Kreider, Brent; Pepper, John V. |
Abstract: | We generalize Horowitz and Manski's (1995) identification analysis in contaminated sampling when the observed outcome distribution is a mixture of the distribution of interest and some unknown distribution. The independence assumption maintained under contaminated sampling is relaxed to allow the two outcome distributions to differ by a bounded factor of proportionality. This generalization allows researchers to take middle-ground positions about the nature of dependence between classification errors and the outcome. Under this restriction, we derive bounded identification regions that reduce the uncertainty found in corrupt samples. We illustrate how the assumption can be used to inform researchers about the population's use of illicit drugs and receipt of welfare in the presence of nonrandom reporting errors. |
Keywords: | measurement error, identification, contaminated sampling, corrupt sampling, nonparametric bounds |
JEL: | C1 |
Date: | 2006–02–02 |
URL: | http://d.repec.org/n?u=RePEc:isu:genres:12496&r=ecm |
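For orientation, the classical Horowitz-Manski bounds that this paper generalizes are one-liners for a binary outcome; the authors' bounded factor of proportionality tightens these worst-case bounds. A minimal sketch of the classical case only:

```python
# Horowitz-Manski contaminated-sampling bounds for a binary outcome: the
# observed rate r mixes the rate of interest with an arbitrary error
# distribution of total mass at most p (p < 1).
def hm_bounds(r, p):
    """Bounds on P(Y = 1) when the observed rate is r and error mass <= p."""
    lower = max(0.0, (r - p) / (1.0 - p))
    upper = min(1.0, r / (1.0 - p))
    return lower, upper

# E.g. 20% report drug use and up to 5% of records may be misclassified:
print(hm_bounds(0.20, 0.05))
```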
By: | Hendry, David F; Hubrich, Kirstin |
Abstract: | We explore whether forecasting an aggregate variable using information on its disaggregate components can improve the prediction mean squared error over first forecasting the disaggregates and then aggregating those forecasts, or, alternatively, over using only lagged aggregate information in forecasting the aggregate. We show theoretically that the first method of forecasting the aggregate should outperform the alternative methods in population. We investigate whether this theoretical prediction can explain our empirical findings and analyse why forecasting the aggregate using information on its disaggregate components improves forecast accuracy of the aggregate forecast of euro area and US inflation in some situations, but not in others. |
Keywords: | disaggregate information; factor models; forecast model selection; predictability; VAR |
JEL: | C51 C53 E31 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:5485&r=ecm |
By: | Markku Lanne; Pentti Saikkonen |
Abstract: | In this paper we propose a new GARCH-in-Mean (GARCH-M) model allowing for conditional skewness. The model is based on the so-called z distribution capable of modeling moderate skewness and kurtosis typically encountered in stock return series. The need to allow for skewness can also be readily tested. Our empirical results indicate the presence of conditional skewness in the postwar U.S. stock returns. Small positive news is also found to have a smaller impact on conditional variance than no news at all. Moreover, the symmetric GARCH-M model not allowing for conditional skewness is found to systematically overpredict conditional variance and average excess returns. |
Keywords: | Conditional skewness, GARCH-in-Mean, Risk-return tradeoff |
JEL: | C16 C22 G12 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/14&r=ecm |
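The Gaussian GARCH-M benchmark that the paper's z-distribution model nests can be written as a short likelihood filter. A sketch of that symmetric special case only (initialisation and parameterisation are our choices, not the authors'):

```python
# Gaussian log-likelihood filter for a GARCH(1,1)-in-Mean model:
#   r_t = mu + delta * h_t + e_t,  h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}.
import numpy as np

def garch_m_loglik(params, r):
    """params = (mu, delta, omega, alpha, beta); r = numpy array of returns."""
    mu, delta, omega, alpha, beta = params
    n = len(r)
    h = np.empty(n)
    h[0] = r.var()                      # simple initialisation
    e_prev = 0.0
    ll = 0.0
    for t in range(n):
        if t > 0:
            h[t] = omega + alpha * e_prev**2 + beta * h[t - 1]
        e = r[t] - mu - delta * h[t]    # in-mean term uses current variance
        ll += -0.5 * (np.log(2 * np.pi * h[t]) + e**2 / h[t])
        e_prev = e
    return ll
```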
By: | Markku Lanne; Helmut Luetkepohl |
Abstract: | In structural vector autoregressive (SVAR) models, identifying restrictions for shocks and impulse responses are usually derived from economic theory or institutional constraints. Sometimes the restrictions are insufficient for identifying all shocks and impulse responses. In this paper it is pointed out that specific distributional assumptions can also help in identifying the structural shocks. In particular, a mixture of normal distributions is considered as a plausible model that can be used in this context. Our model setup makes it possible to test restrictions which are just-identifying in a standard SVAR framework. In particular, we can test for the number of transitory and permanent shocks in a cointegrated SVAR model. The results are illustrated using a data set from King, Plosser, Stock and Watson (1991) and a system of US and European interest rates. |
Keywords: | Mixture normal distribution, cointegration, vector autoregressive process, vector error correction model, impulse responses |
JEL: | C32 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/25&r=ecm |
By: | Kazuhiko Hayakawa |
Abstract: | This paper complements Alvarez and Arellano (2003) by showing the asymptotic properties of the system GMM estimator for AR(1) panel data models when both N and T tend to infinity. We show that the system GMM estimator with the instruments which Blundell and Bond (1998) used will be inconsistent when both N and T are large. We also show that the system GMM estimator with all available instruments, including redundant ones, will be consistent if σ_η²/σ_v² = 1 - α holds. |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d05-129&r=ecm |
By: | Stefan Boes (Socioeconomic Institute, University of Zurich); Rainer Winkelmann (Socioeconomic Institute, University of Zurich) |
Abstract: | We discuss regression models for ordered responses, such as ratings of bonds, schooling attainment, or measures of subjective well-being. Commonly used models in this context are the ordered logit and ordered probit regression models. They are based on an underlying latent model with single index function and constant thresholds. We argue that these approaches are overly restrictive and preclude a flexible estimation of the effect of regressors on the discrete outcome probabilities. For example, the signs of the marginal probability effects can only change once when moving from the smallest category to the largest one. We then discuss several alternative models that overcome these limitations. An application illustrates the benefit of these alternatives. |
Keywords: | Marginal effects, generalized threshold, sequential model, random coefficients, latent class analysis, happiness |
JEL: | C25 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:soz:wpaper:0507&r=ecm |
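The single-crossing restriction the authors criticise is easy to see numerically: in an ordered probit the marginal probability effects are proportional to differences of normal densities at successive thresholds, so their sign switches exactly once across categories. A small sketch with made-up parameter values:

```python
# Marginal probability effects in a standard ordered probit:
#   dP(y = j)/dx = beta * [phi(k_{j-1} - x'b) - phi(k_j - x'b)],
# which sums to zero over j and changes sign exactly once.
import numpy as np
from scipy.stats import norm

def ordered_probit_margins(x_beta, cutpoints, beta):
    k = np.concatenate([[-np.inf], cutpoints, [np.inf]])
    lower, upper = k[:-1] - x_beta, k[1:] - x_beta
    return beta * (norm.pdf(lower) - norm.pdf(upper))

me = ordered_probit_margins(x_beta=0.3, cutpoints=[-1.0, 0.0, 1.0], beta=0.5)
print(me, me.sum())   # one sign change across the four categories; sum = 0
```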
By: | Katy Streso (Max Planck Institute for Demographic Research, Rostock, Germany); Francesco Lagona |
Abstract: | Tooth Cementum Annulation (TCA) is an age estimation method carried out on thin cross sections of the root of human teeth. Age is computed by adding the tooth eruption age to the count of annual incremental lines that are called tooth rings and appear in the cementum band. Algorithms to denoise and segment the digital image of the tooth section are considered a crucial step towards computer-assisted TCA. The approach pursued in this paper relies on modelling the images as hidden Markov random fields, where gray values are assumed to be pixelwise conditionally independent and normally distributed, given a hidden random field of labels. These unknown labels have to be estimated to segment the image. To account for long-range dependence among the observed values and for periodicity in the placement of tooth rings, the Gibbsian label distribution is specified by a potential function that incorporates macro-features of the TCA-image (a FRAME model). Estimation of the model parameters is carried out by an EM-algorithm that exploits the mean field approximation of the label distribution. Segmentation is based on the predictive distribution of the labels given the observed gray values. |
Keywords: | EM, FRAME, Gibbs distribution, (hidden) Markov random field, mean field approximation, TCA |
JEL: | J1 Z0 |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2005-032&r=ecm |
By: | Ralf Brüggemann; Wolfgang Härdle; Julius Mungo; Carsten Trenkler |
Abstract: | The implied volatility of a European option as a function of strike price and time to maturity forms a volatility surface. Traders price according to the dynamics of this high dimensional surface. Recent developments that employ semiparametric models approximate the implied volatility surface (IVS) in a finite dimensional function space, allowing for a low dimensional factor representation of these dynamics. This paper investigates the stochastic properties of the factor loading time series using the vector autoregressive (VAR) framework and analyzes associated movements of these factors with movements in some macroeconomic variables of the euro economy. |
Keywords: | Implied volatility surface, dynamic semiparametric factor model, unit root tests, vector autoregression, impulse responses |
JEL: | C14 C32 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-011&r=ecm |
By: | Maxim S. Finkelstein (Max Planck Institute for Demographic Research, Rostock, Germany); Veronica Esaulova |
Abstract: | Mixtures of increasing failure rate (IFR) distributions can decrease, at least in some intervals of time. Usually this property is observed asymptotically as time tends to infinity, because a mixture failure rate is 'bent down' as the weakest populations die out first. We consider a survival model that generalizes the additive hazards, proportional hazards and accelerated life models, all well known in reliability and survival analysis. We obtain new explicit asymptotic relations for a general setting and study specific cases. Under reasonable assumptions we prove that the asymptotic behavior of the mixture failure rate depends only on the behavior of the mixing distribution in the neighborhood of the left end point of its support, and not on the whole mixing distribution. |
JEL: | J1 Z0 |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2005-023&r=ecm |
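The 'bending down' of a mixture failure rate is easy to reproduce numerically with a gamma frailty acting on an increasing Weibull baseline hazard, a standard special case of the proportional hazards family the paper generalizes:

```python
# Each subject has an increasing Weibull hazard z * a * t^(a-1), but gamma
# frailty z (mean 1, variance s2) gives the population (mixture) hazard
#   h_m(t) = a * t^(a-1) / (1 + s2 * t^a),
# which rises, peaks, and then falls as the frail die out first.
import numpy as np

a, s2 = 2.0, 1.0                          # shape > 1: every individual IFR
t = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
h_mix = a * t**(a - 1) / (1 + s2 * t**a)
print(h_mix)                              # increases early, then decreases
```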