New Economics Papers on Econometrics
By: | Xiaohong Chen (Cowles Foundation, Yale University); Han Hong (Duke University); Alessandro Tarozzi (Duke University) |
Abstract: | We study semiparametric efficiency bounds and efficient estimation of parameters defined through general nonlinear, possibly non-smooth and over-identified moment restrictions, where the sampling information consists of a primary sample and an auxiliary sample. The variables of interest in the moment conditions are not directly observable in the primary data set, but the primary data set contains proxy variables which are correlated with the variables of interest. The auxiliary data set contains information about the conditional distribution of the variables of interest given the proxy variables. Identification is achieved by the assumption that this conditional distribution is the same in both the primary and auxiliary data sets. We provide semiparametric efficiency bounds for both the "verify-out-of-sample" case, where the two samples are independent, and the "verify-in-sample" case, where the auxiliary sample is a subset of the primary sample; and the bounds are derived when the propensity score is unknown, or known, or belongs to a correctly specified parametric family. These efficiency variance bounds indicate that the propensity score is ancillary for the "verify-in-sample" case, but is not ancillary for the "verify-out-of-sample" case. We show that sieve conditional expectation projection based GMM estimators achieve the semiparametric efficiency bounds for all the above mentioned cases, and establish their asymptotic efficiency under mild regularity conditions. Although inverse probability weighting based GMM estimators are also shown to be semiparametrically efficient, they need stronger regularity conditions and clever combinations of nonparametric and parametric estimates of the propensity score to achieve the efficiency bounds for various cases. Our results contribute to the literature on non-classical measurement error models, missing data and treatment effects. |
Keywords: | Auxiliary data, Measurement error, Missing data, Treatment effect, Semiparametric efficiency bound, GMM, Sieve estimation |
JEL: | C1 C3 |
Date: | 2008–03 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1644&r=ecm |
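The role of the auxiliary sample and the propensity score in the Chen, Hong and Tarozzi setting can be illustrated with a deliberately stripped-down inverse-probability-weighting calculation. The sketch below is only a toy: a scalar mean parameter, a single verification indicator standing in for the auxiliary sample ("verify-in-sample"), and a parametric logit propensity score. All variable names and the data-generating process are illustrative assumptions, not the paper's efficient sieve or IPW-GMM estimators.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000

# Proxy variable x is always observed; the true outcome y_star is only
# observed when the verification indicator d equals 1 ("verify-in-sample").
x = rng.normal(size=n)
y_star = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.7 * x)))      # true propensity score
d = rng.binomial(1, p_true)

# Fit a parametric (logit) propensity score by maximum likelihood.
def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * x)))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))

theta_hat = minimize(neg_loglik, x0=np.zeros(2)).x
p_hat = 1.0 / (1.0 + np.exp(-(theta_hat[0] + theta_hat[1] * x)))

# IPW moment for the parameter mu = E[y_star]: only verified observations
# contribute, reweighted by the inverse of the estimated propensity score.
mu_ipw = np.mean(d * y_star / p_hat)
print("IPW estimate of E[y*]:", mu_ipw)
```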
By: | Jason Allen; Allan W. Gregory; Katsumi Shimotsu |
Abstract: | Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference where models are characterized by nonlinear moment conditions that are serially correlated, possibly of infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are initially estimated by GMM; these estimates are then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments shows that it can improve test size over conventional block bootstrapping. |
Keywords: | generalized methods of moments, empirical likelihood, block-bootstrap |
JEL: | C14 C22 |
Date: | 2008–03 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1156&r=ecm |
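A rough outline of the resampling scheme described in the abstract above: a first-stage GMM fit, empirical-likelihood weights attached to blocks of moment conditions, and block resampling from the multinomial distribution those weights define. The sketch assumes a simple over-identified linear IV model with MA errors; the block length, the inner EL optimisation by Nelder-Mead, and all names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, block_len = 400, 10

# Over-identified linear IV model with serially correlated (MA(3)) errors:
# y_t = beta * x_t + e_t, instruments z1_t, z2_t.  All names are illustrative.
z = rng.normal(size=(T, 2))
e = np.convolve(rng.normal(size=T + 3), np.ones(4) / 4, mode="valid")
x = z @ np.array([1.0, 0.5]) + rng.normal(size=T)
y = 0.7 * x + e

# Step 1: first-stage GMM (identity weighting) for the scalar parameter beta.
def gbar(beta):
    return (z * (y - beta * x)[:, None]).mean(axis=0)

beta_hat = minimize(lambda b: gbar(b[0]) @ gbar(b[0]), x0=[0.0]).x[0]

# Step 2: block the moment conditions evaluated at beta_hat.
m = z * (y - beta_hat * x)[:, None]
blocks = m[: T - T % block_len].reshape(-1, block_len, 2).mean(axis=1)
B = blocks.shape[0]

# Step 3: empirical likelihood weights on the blocks (dual problem).
def dual(lam):
    arg = 1.0 + blocks @ lam
    return np.inf if np.any(arg <= 1e-8) else -np.log(arg).sum()

lam_hat = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
w = 1.0 / (B * (1.0 + blocks @ lam_hat))
w = w / w.sum()                       # EL probabilities for each block

# Step 4: resample block indices from the multinomial defined by w.
boot_idx = rng.choice(B, size=B, replace=True, p=w)
print("EL block weights range:", w.min(), w.max())
```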
By: | Marco Bee; Giuseppe Espa |
Abstract: | This paper proposes an algorithm for the estimation of the parameters of a Logistic Auto-logistic Model when some values of the target variable are missing at random but the auxiliary information is known for the same areas. First, we derive a Monte Carlo EM algorithm in the setup of maximum pseudo-likelihood estimation; given the analytical intractability of the conditional expectation of the complete pseudo-likelihood function, we implement the E-step by means of Monte Carlo simulation. Second, we give an example using a simulated dataset. Finally, a comparison with the standard non-missing data case shows that the algorithm gives consistent results. |
Keywords: | Spatial Missing Data, Monte Carlo EM Algorithm, Logistic Auto-logistic Model, Pseudo-Likelihood. |
JEL: | C13 C15 C51 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:trn:utwpde:0801&r=ecm |
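The Monte Carlo EM idea in the Bee and Espa abstract can be sketched on a small lattice: the E-step imputes the missing responses by Gibbs sampling from the auto-logistic conditionals at the current parameter value, and the M-step maximises the average completed log pseudo-likelihood over the draws. Lattice size, parameter values, and the numbers of sweeps and draws are arbitrary illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Small square lattice; each site has a binary response y and one covariate x.
side = 10
n = side * side
x = rng.normal(size=n)

def neighbours(i):
    r, c = divmod(i, side)
    out = []
    if r > 0: out.append(i - side)
    if r < side - 1: out.append(i + side)
    if c > 0: out.append(i - 1)
    if c < side - 1: out.append(i + 1)
    return out

nbrs = [neighbours(i) for i in range(n)]

def cond_prob(i, y, theta):
    # Auto-logistic conditional: logit depends on the covariate and the
    # number of neighbouring sites equal to 1.
    a, b, g = theta
    eta = a + b * x[i] + g * sum(y[j] for j in nbrs[i])
    return 1.0 / (1.0 + np.exp(-eta))

# Generate data from the auto-logistic model by Gibbs sampling, then delete
# a random 20% of the responses (missing at random).
theta_true = (-0.2, 0.8, 0.4)
y = rng.binomial(1, 0.5, size=n)
for _ in range(100):
    for i in range(n):
        y[i] = rng.binomial(1, cond_prob(i, y, theta_true))
miss = rng.random(n) < 0.2

def neg_log_pl(theta, y_complete):
    p = np.clip(np.array([cond_prob(i, y_complete, theta) for i in range(n)]),
                1e-10, 1 - 1e-10)
    return -np.sum(y_complete * np.log(p) + (1 - y_complete) * np.log(1 - p))

# Monte Carlo EM on the pseudo-likelihood: impute missing sites by Gibbs
# sampling at the current parameter value (E-step), then maximise the average
# completed log pseudo-likelihood over the draws (M-step).
theta = np.zeros(3)
y_work = y.copy()
y_work[miss] = rng.binomial(1, 0.5, size=miss.sum())
for it in range(3):
    draws = []
    for _ in range(10):                      # E-step: Gibbs over missing sites
        for i in np.where(miss)[0]:
            y_work[i] = rng.binomial(1, cond_prob(i, y_work, theta))
        draws.append(y_work.copy())
    theta = minimize(lambda t: np.mean([neg_log_pl(t, d) for d in draws]),
                     x0=theta, method="Nelder-Mead",
                     options={"maxiter": 200}).x          # M-step
    print("EM iteration", it, "theta =", np.round(theta, 3))
```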
By: | Oliver Linton (Department of Economics, London School of Economics); Kyungchul Song (Department of Economics, University of Pennsylvania); Yoon-Jae Whang (Department of Economics, Seoul National University) |
Abstract: | We propose a new method of testing stochastic dominance which improves on existing tests based on the bootstrap or subsampling. Our test requires estimation of the contact sets between the marginal distributions. Our tests have asymptotic sizes that are exactly equal to the nominal level uniformly over the boundary points of the null hypothesis and are therefore valid over the whole null hypothesis. We also allow the prospects to be indexed by infinite as well as finite dimensional unknown parameters, so that the variables may be residuals from nonparametric and semiparametric models. Our simulation results show that our tests are indeed more powerful than the existing subsampling and recentered bootstrap tests. |
Keywords: | Set estimation, Size of test, Unbiasedness, Similarity, Bootstrap, Subsampling |
JEL: | C12 C14 C52 |
Date: | 2008–02–19 |
URL: | http://d.repec.org/n?u=RePEc:pen:papers:08-006&r=ecm |
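A simplified illustration of the contact-set idea in the abstract above for a first-order dominance null: estimate the set of points where the two CDFs are (nearly) equal, then bootstrap the recentred empirical process and take the supremum over that set only. The grid, the tuning constant c_n, and the bootstrap scheme are illustrative simplifications; the paper's statistic and contact-set estimator differ in details.

```python
import numpy as np

rng = np.random.default_rng(3)

def ecdf(sample, grid):
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

# Two prospects; the null is F1(x) <= F2(x) for all x (first-order dominance).
n = 500
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)          # boundary case: equal distributions
grid = np.linspace(-3, 3, 201)

F1, F2 = ecdf(x1, grid), ecdf(x2, grid)
stat = np.sqrt(n) * np.max(F1 - F2)   # sup-type test statistic

# Estimated contact set: grid points where the two CDFs are (nearly) equal.
c_n = 2.0 * np.sqrt(np.log(n) / n)    # illustrative tuning sequence
contact = np.abs(F1 - F2) < c_n
if not contact.any():
    contact = np.ones_like(contact, dtype=bool)   # fallback: full grid

# Bootstrap the recentred process, taking the supremum over the contact set
# only; this is what sharpens the test relative to using the whole grid.
B = 499
boot = np.empty(B)
for b in range(B):
    b1 = rng.choice(x1, n, replace=True)
    b2 = rng.choice(x2, n, replace=True)
    D = (ecdf(b1, grid) - F1) - (ecdf(b2, grid) - F2)
    boot[b] = np.sqrt(n) * np.max(D[contact])

print("statistic:", stat, " bootstrap 95% critical value:", np.quantile(boot, 0.95))
```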
By: | Kyungchul Song (Department of Economics, University of Pennsylvania) |
Abstract: | When Barrett and Donald (2003) in Econometrica proposed a consistent test of stochastic dominance, they were silent about the asymptotic unbiasedness of their tests against √n-converging Pitman local alternatives. This paper shows that when we focus on first-order stochastic dominance, there exists a wide class of √n-converging Pitman local alternatives against which their test is asymptotically biased, i.e., its local asymptotic power lies strictly below the asymptotic size. This phenomenon applies more generally to one-sided nonparametric tests which have the sup norm of a shifted standard Brownian bridge as their limit under √n-converging Pitman local alternatives. Other examples include tests of independence or conditional independence. We provide an intuitive explanation of this phenomenon and illustrate its implications using simulation studies. |
Keywords: | Asymptotic Bias, One-sided Tests, Stochastic Dominance, Conditional Independence, Pitman Local Alternatives, Brownian Bridge Processes |
JEL: | C12 C14 C52 |
Date: | 2008–02–19 |
URL: | http://d.repec.org/n?u=RePEc:pen:papers:08-005&r=ecm |
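The bias phenomenon described in the abstract above can be reproduced numerically: under a local shift that violates the null only on a short interval where a Brownian bridge has little variance, while sitting far inside the null elsewhere, the sup-type test rejects less often than its nominal size. The particular shift function below is an illustrative example, not one taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
m, reps = 1000, 20000
t = np.linspace(0, 1, m + 1)[1:-1]          # interior grid points
c = np.sqrt(-np.log(0.05) / 2.0)            # P(sup B > c) = exp(-2 c^2) = 0.05

# Illustrative local-alternative drift: the null is violated (delta > 0) only
# on a short interval near t = 0.99, where the bridge has little variance,
# and the drift is far inside the null elsewhere.
delta = np.where((t > 0.985) & (t < 0.995), 1.0, -3.0)

rej_null, rej_alt = 0, 0
for _ in range(reps):
    dW = rng.normal(scale=np.sqrt(1.0 / m), size=m)
    W = np.cumsum(dW)
    bridge = W[:-1] - t * W[-1]              # Brownian bridge on the grid
    rej_null += np.max(bridge) > c
    rej_alt += np.max(bridge + delta) > c

print("size (should be about 0.05):", rej_null / reps)
print("local power under the shifted bridge:", rej_alt / reps)
```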
By: | Mohamed CHAOUCH (IMB, Université de Bourgogne); Ali GANNOUN (CNAM, Paris); Jérôme SARACCO (GREThA) |
Abstract: | Conditional quantiles are required in various economic, biomedical and industrial problems. The lack of an objective basis for ordering multivariate observations is a major obstacle to extending the notion of quantiles or conditional quantiles (also called regression quantiles) to a multidimensional setting. We first recall some characterizations of unconditional spatial quantiles and the corresponding estimators. Then we consider the conditional case. We focus on the geometric (or spatial) notion of quantiles introduced by Chaudhuri (1992a, 1996). We generalize Theorem 2.1.2 of Chaudhuri (1996) to the conditional framework and present algorithms for computing the unconditional and conditional spatial quantile estimators. Finally, these various concepts are illustrated using simulated data. |
Keywords: | Conditional Spatial Quantile, Contours, Kernel Estimators, Spatial Quantile |
JEL: | C14 C63 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:grt:wpegrt:2008-10&r=ecm |
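The geometric (spatial) quantile of Chaudhuri minimises sum_i ||x_i - q|| + <u, x_i - q> over q, which leads to a Weiszfeld-type fixed-point iteration. The sketch below covers only the unconditional case; the conditional estimators in the paper would, roughly, replace the plain averages by kernel-weighted ones. The starting value, tolerance and data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def spatial_quantile(X, u, tol=1e-8, max_iter=500):
    """Sample geometric quantile: argmin_q sum_i ||x_i - q|| + <u, x_i - q>,
    computed with a Weiszfeld-type fixed-point iteration."""
    n = X.shape[0]
    q = X.mean(axis=0)                       # start from the mean
    for _ in range(max_iter):
        r = np.linalg.norm(X - q, axis=1)
        r = np.maximum(r, 1e-12)             # guard against division by zero
        q_new = (X / r[:, None]).sum(axis=0) + n * u
        q_new = q_new / (1.0 / r).sum()
        if np.linalg.norm(q_new - q) < tol:
            return q_new
        q = q_new
    return q

# Bivariate sample; u = 0 gives the spatial median, other u in the open unit
# ball give directional quantiles used to trace quantile contours.
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=2000)
print("spatial median:       ", spatial_quantile(X, np.array([0.0, 0.0])))
print("quantile at u=(0.5,0):", spatial_quantile(X, np.array([0.5, 0.0])))
```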
By: | Schlicht, Ekkehart |
Abstract: | Trend extraction from time series is often performed by using the filter proposed by Leser (1961), also known as the Hodrick-Prescott filter. Practical problems arise, however, if the time series contains structural breaks (as produced by German unification for German time series, for instance), or if some data are missing. This note proposes a method for coping with these problems. |
Keywords: | dummies; gaps; Hodrick-Prescott filter; interpolation; Leser filter; missing observations; smoothing; spline; structural breaks; time-series; trend; break point; break point location |
JEL: | C22 C32 C63 C14 |
Date: | 2008–02–25 |
URL: | http://d.repec.org/n?u=RePEc:lmu:muenec:2127&r=ecm |
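The Leser/Hodrick-Prescott trend is the minimiser of a penalised least-squares criterion, and missing observations can be handled by dropping their fit terms while keeping the smoothness penalty, which is one way to read the note's proposal. The sketch below implements that penalised least-squares version for a single gap of missing data; the structural-break (dummy) part is not shown, and the series and smoothing parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
T, lam = 120, 1600

# Simulated series with a smooth trend plus noise; a block of observations
# is then deleted to mimic missing data.
trend_true = np.sin(np.linspace(0, 3 * np.pi, T)) * 5 + 0.05 * np.arange(T)
y = trend_true + rng.normal(scale=1.0, size=T)
observed = np.ones(T, dtype=bool)
observed[50:60] = False                      # a gap of missing observations

# Second-difference penalty matrix D: (T-2) x T.
D = np.zeros((T - 2, T))
for t in range(T - 2):
    D[t, t:t + 3] = [1.0, -2.0, 1.0]

# Leser / Hodrick-Prescott trend with gaps: minimise
#   sum over observed t of (y_t - tau_t)^2 + lam * sum of squared second differences.
# Missing periods simply drop out of the fit term, but the smoothness penalty
# still ties the trend together across the gap.
W = np.diag(observed.astype(float))
y_filled = np.where(observed, y, 0.0)
tau = np.linalg.solve(W + lam * D.T @ D, W @ y_filled)

print("trend interpolated over the gap:", np.round(tau[50:60], 2))
```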
By: | Alain Chaboud; Benjamin Chiquoine; Erik Hjalmarsson; Mico Loretan |
Abstract: | Using two newly available ultrahigh-frequency datasets, we investigate empirically how frequently one can sample certain foreign exchange and U.S. Treasury security returns without contaminating estimates of their integrated volatility with market microstructure noise. We find that one can sample FX returns as frequently as once every 15 to 20 seconds without contaminating volatility estimates; bond returns may be sampled as frequently as once every 2 to 3 minutes on days without U.S. macroeconomic announcements, and as frequently as once every 40 seconds on announcement days. With a simple realized kernel estimator, the sampling frequencies can be increased to once every 2 to 5 seconds for FX returns and to about once every 30 to 40 seconds for bond returns. These sampling frequencies, especially in the case of FX returns, are much higher than those often recommended in the empirical literature on realized volatility in equity markets. The higher sampling frequencies for FX and bond returns likely reflect the superior depth and liquidity of these markets. |
Keywords: | realized volatility, sampling frequency, market microstructure, bond markets, foreign exchange markets, liquidity |
Date: | 2008–02 |
URL: | http://d.repec.org/n?u=RePEc:bis:biswps:249&r=ecm |
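The effect the abstract above describes, namely microstructure noise inflating realized variance at the highest sampling frequencies, is the logic of a volatility signature plot, which a toy simulation can reproduce. The noise size, price process and sampling steps below are arbitrary illustrative values, not calibrated to the FX or Treasury data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# One trading day of 1-second efficient log prices plus i.i.d. microstructure
# noise; the integrated variance of the efficient price is known by design.
n_sec = int(6.5 * 3600)
sigma_day = 0.01                                    # daily return volatility
eff = np.cumsum(rng.normal(scale=sigma_day / np.sqrt(n_sec), size=n_sec))
obs = eff + rng.normal(scale=0.00005, size=n_sec)   # noisy observed log price

def realized_variance(logp, step):
    """Sum of squared returns sampled every `step` seconds."""
    sampled = logp[::step]
    return np.sum(np.diff(sampled) ** 2)

print("true integrated variance:", sigma_day ** 2)
for step in (1, 5, 15, 60, 300):
    print(f"RV at {step:>3}-second sampling:", realized_variance(obs, step))
```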
By: | Jaap Abbring (Institute for Fiscal Studies and Tinbergen Institute); James Heckman (Institute for Fiscal Studies and University of Chicago) |
Abstract: | This chapter studies the microeconometric treatment-effect and structural approaches to dynamic policy evaluation. First, we discuss a reduced-form approach based on a sequential randomization or dynamic matching assumption that is popular in biostatistics. We then discuss two complementary approaches for treatments that are single stopping times and that allow for non-trivial dynamic selection on unobservables. The first builds on continuous-time duration and event-history models. The second extends the discrete-time dynamic discrete-choice literature. |
Date: | 2008–02 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:05/08&r=ecm |
By: | Manuele Bicego; Enrico Grosso; Edoardo Otranto |
Abstract: | The problem of forecasting financial time series has received great attention in the past, from both Econometrics and Pattern Recognition researchers. In this context, most efforts have been devoted to representing and modelling the volatility of financial indicators in long time series. This paper addresses a different problem: the prediction of increases and decreases in short (local) financial trends. This problem, which has received little attention, requires specific models able to capture short-term movements and the asymmetries between periods of increase and decrease. The methodology presented in this paper explicitly considers both aspects, encoding the financial returns as binary values (the signs of the returns), which are subsequently modelled using two separate hidden Markov models, one for increases and one for decreases. The approach has been tested in several experiments with the Dow Jones index and other shares of the same market with different risk levels, with encouraging results. |
Keywords: | Markov Models; Asymmetries; Binary data; Short-time forecasts |
JEL: | C02 C63 G11 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:cns:cnscwp:200803&r=ecm |
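The encode-then-compare-likelihoods idea in the abstract above can be sketched with a strong simplification: returns are encoded as signs and each regime is modelled by a plain first-order Markov chain rather than a hidden Markov model, purely to keep the illustration short. The data, window length and drift values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(8)

def encode(returns):
    """Binary encoding of returns: 1 for a non-negative return, 0 otherwise."""
    return (returns >= 0).astype(int)

def fit_markov(seqs):
    """First-order Markov chain over {0, 1} fitted by transition counts."""
    counts = np.ones((2, 2))                 # Laplace smoothing
    for s in seqs:
        for a, b in zip(s[:-1], s[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def loglik(seq, P):
    return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# Toy training data: short windows drawn from an "increase" regime (positive
# drift) and a "decrease" regime (negative drift).
up_train = [encode(rng.normal(0.003, 0.01, 20)) for _ in range(200)]
down_train = [encode(rng.normal(-0.003, 0.01, 20)) for _ in range(200)]
P_up, P_down = fit_markov(up_train), fit_markov(down_train)

# Classify a new short window by comparing log-likelihoods under both models.
new_window = encode(rng.normal(0.003, 0.01, 20))
label = "increase" if loglik(new_window, P_up) > loglik(new_window, P_down) else "decrease"
print("predicted local trend:", label)
```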
By: | Andrew J. Patton; Kevin Sheppard |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:sbs:wpsefe:2008fe22&r=ecm |
By: | Andrew J. Patton |
Abstract: | This paper presents an overview of the literature on applications of copulas in the modelling of financial time series. Copulas have been used both in multivariate time series analysis, where they are used to characterise the (conditional) cross-sectional dependence between individual time series, and in univariate time series analysis, where they are used to characterise the dependence between a sequence of observations of a scalar time series process. The paper includes a broad, brief review of the many applications of copulas in finance and economics. |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:sbs:wpsefe:2008fe21&r=ecm |
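The basic construction underlying the copula models surveyed here is Sklar's decomposition: the dependence structure is specified separately from the marginals. The snippet below draws from a bivariate Gaussian copula and pushes the uniforms through two arbitrary marginal quantile functions; the copula family, marginals and correlation are purely illustrative choices, not examples from the survey.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Sklar's construction: draw correlated uniforms from a Gaussian copula, then
# push them through arbitrary marginal quantile functions.
rho, n = 0.6, 10000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)                        # uniforms with Gaussian-copula dependence

x = stats.t.ppf(u[:, 0], df=4)               # heavy-tailed marginal for series 1
y = stats.expon.ppf(u[:, 1], scale=2.0)      # skewed marginal for series 2

# Rank (Spearman) correlation is determined by the copula alone, not the marginals.
print("Spearman rho of (x, y):", stats.spearmanr(x, y)[0])
```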
By: | Alper, C. Emre; Fendoglu, Salih; Saltoglu, Burak |
Abstract: | We explore the relative weekly stock market volatility forecasting performance of the linear univariate MIDAS regression model based on squared daily returns vis-a-vis the benchmark GARCH(1,1) model for a set of four developed and ten emerging market economies. We first estimate the two models over the 2002-2007 period and compare their in-sample properties. Next, we estimate the two models using data for the 2002-2005 period and compare their out-of-sample forecasting performance over the 2006-2007 period, based on the corresponding mean squared prediction errors and following the testing procedure suggested by West (2006). Our findings show that the MIDAS squared daily return regression model significantly outperforms the GARCH model in four of the emerging markets. Moreover, the GARCH model fails to significantly outperform the MIDAS regression model in any of the emerging markets. The results are slightly less conclusive for the developed economies. These results may imply superior performance of MIDAS in relatively more volatile environments. |
Keywords: | Mixed Data Sampling regression model; Conditional volatility forecasting; Emerging Markets. |
JEL: | C53 C52 C22 G10 |
Date: | 2008–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:7460&r=ecm |
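A stylised version of the MIDAS volatility regression compared in the abstract above: next week's realized variance is regressed on a Beta-weighted sum of the most recent daily squared returns, estimated by nonlinear least squares. The data-generating process, lag length and weight parameterisation are illustrative assumptions, not the authors' exact specification, and no GARCH benchmark is fitted here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# Daily returns with time-varying volatility; the forecasting target is the
# realized variance of the following week (5 trading days).
T, K = 1500, 30                               # sample length, number of daily lags
h = np.empty(T); h[0] = 1e-4
r = np.empty(T)
for t in range(T):
    if t > 0:
        h[t] = 1e-6 + 0.08 * r[t - 1] ** 2 + 0.9 * h[t - 1]   # GARCH-type DGP
    r[t] = np.sqrt(h[t]) * rng.normal()
sq = r ** 2

def beta_weights(theta, K):
    """Normalised Beta-polynomial MIDAS lag weights (first parameter fixed at 1)."""
    k = np.arange(1, K + 1) / (K + 1)
    w = (1 - k) ** (theta - 1)
    return w / w.sum()

# Build weekly targets and the matrix of the K most recent daily squared returns.
targets, lags = [], []
for t in range(K, T - 5, 5):
    targets.append(sq[t:t + 5].sum())          # next week's realized variance
    lags.append(sq[t - K:t][::-1])             # most recent lag first
targets, lags = np.array(targets), np.array(lags)

def ssr(params):
    a, b, theta = params
    fitted = a + b * lags @ beta_weights(max(theta, 1.0), K)
    return np.sum((targets - fitted) ** 2)

res = minimize(ssr, x0=[0.0, 5.0, 3.0], method="Nelder-Mead")
print("estimated (intercept, slope, theta):", np.round(res.x, 4))
```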
By: | Grant Hillier (Institute for Fiscal Studies and University of Southampton); Raymond Kan; Xiaolu Wang |
Abstract: | The top-order zonal polynomials C_k(A), and top-order invariant polynomials C_{k_1,...,k_r}(A_1,...,A_r), in which each of the partitions k_i, i = 1,...,r, has only one part, occur frequently in multivariate distribution theory and econometrics - see, for example, Phillips (1980, 1984, 1985, 1986), Hillier (1985, 2001), Hillier and Satchell (1986), and Smith (1989, 1993). However, even with the recursive algorithms of Ruben (1962) and Chikuse (1987), numerical evaluation of these invariant polynomials is extremely time consuming. As a result, the use of invariant polynomials has been largely confined to analytic work on distribution theory. In this paper we present new, very much more efficient, algorithms for computing both the top-order zonal and invariant polynomials. These results should make the theoretical results involving these functions much more valuable for direct practical study. We demonstrate the value of our results by providing fast and accurate algorithms for computing the moments of a ratio of quadratic forms in normal random variables. |
JEL: | C16 C46 C63 |
Date: | 2008–02 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:07/08&r=ecm |
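The paper's algorithms evaluate such moments exactly via top-order zonal and invariant polynomials; the snippet below is emphatically not that algorithm, only a brute-force Monte Carlo cross-check for one simple case of a moment of a ratio of quadratic forms, with a closed-form benchmark available because B is the identity.

```python
import numpy as np

rng = np.random.default_rng(11)

# Monte Carlo evaluation of E[(x'Ax) / (x'Bx)] for x ~ N(0, I_n); the paper's
# algorithms compute such moments exactly, so a simulation like this is only
# useful as a numerical cross-check.
n, reps = 5, 500_000
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
B = np.eye(n)                                  # with B = I the ratio has a known mean

x = rng.normal(size=(reps, n))
num = np.einsum("ij,jk,ik->i", x, A, x)        # x'Ax for each draw
den = np.einsum("ij,jk,ik->i", x, B, x)        # x'Bx for each draw
mc = np.mean(num / den)

# For B = I and x standard normal, E[x'Ax / x'x] = tr(A) / n, a useful sanity check.
print("Monte Carlo:", mc, "  exact tr(A)/n:", np.trace(A) / n)
```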
By: | Bo Honore (Department of Economics, Princeton University); Aureo de Paula (Department of Economics, University of Pennsylvania) |
Abstract: | This paper studies the identification of a simultaneous equation model where the variable of interest is a duration measure. It proposes a game theoretic model in which durations are determined by strategic agents. In the absence of strategic motives, the model delivers a version of the generalized accelerated failure time model. In its most general form, the system resembles a classical simultaneous equation model in which endogenous variables interact with observable and unobservable exogenous components to characterize a certain economic environment. In this paper, the endogenous variables are the individually chosen equilibrium durations. Even though a unique solution to the game is not always attainable in this context, the structural elements of the economic system are shown to be semiparametrically point identified. We also present a brief discussion of estimation ideas and a set of simulation studies on the model. |
Keywords: | duration, empirical games, identification |
JEL: | C10 C30 C41 |
Date: | 2008–02–19 |
URL: | http://d.repec.org/n?u=RePEc:pen:papers:08-007&r=ecm |
By: | Michel TENENHAUS |
Abstract: | Two complementary schools have come to the fore in the field of Structural Equation Modelling (SEM): covariance-based SEM and component-based SEM. The first approach developed around Karl Jöreskog. It can be considered as a generalisation of both principal component analysis and factor analysis to the case of several data tables connected by causal links. The second approach developed around Herman Wold under the name "PLS" (Partial Least Squares). More recently, Hwang and Takane (2004) have proposed a new method named Generalized Structural Component Analysis. This second approach is a generalisation of principal component analysis (PCA) to the case of several data tables connected by causal links. Covariance-based SEM is usually used with an objective of model validation and needs a large sample (what counts as large varies from one author to another: more than 100 subjects, and preferably more than 200 subjects, are often mentioned). Component-based SEM is mainly used for score computation and can be carried out on very small samples. A study based on 6 subjects was published by Tenenhaus, Pagès, Ambroisine & Guinot (2005) and will be used in this paper. In 1996, Roderick McDonald published a paper in which he showed how to carry out a PCA using the ULS (Unweighted Least Squares) criterion in the covariance-based SEM approach. He concluded from this that he could in fact use the covariance-based SEM approach to obtain results similar to those of the PLS approach, but with a precise optimisation criterion in place of an algorithm whose properties are not well known. In this research, we will explore the use of ULS-SEM and PLS on small samples. First experiments have already shown that score computation and bootstrap validation are very insensitive to the choice of method. We will also study the very important contribution of these methods to multiblock analysis. |
Keywords: | Multi-block analysis; PLS path modelling; Structural Equation Modelling; Unweighted Least Squares |
JEL: | C20 C30 |
Date: | 2007–12–01 |
URL: | http://d.repec.org/n?u=RePEc:ebg:heccah:0885&r=ecm |
By: | Harry Kelejian; Peter Murrell (Department of Economics, University of Maryland); Oleksandr Shepotylo |
Abstract: | We examine spatial spillovers between countries in the development of institutions. Our dependent variables are three measures of institutions that relate to politics, law, and governmental administration. The major explanatory variable on which we focus is a spatial lag of the dependent variable, that is, the level of similar institutions in bordering countries. We also consider long-term determinants of institutions that have been previously examined in the literature, such as legal origin, religious groupings, ethnolinguistic fractionalization, resource base, and initial level of GDP per capita. Our framework of analysis is a spatial panel data model. Because of missing observations, our panel data set is not balanced, which causes special problems in estimating spatial models. These problems are explicitly recognized in our estimation procedure, which implements new results in spatial econometrics. Spatial spillover effects between countries are statistically significant and economically important. We provide evidence of the size of the general equilibrium effects of spatial spillovers by examining a counterfactual: the non-existence of the Soviet Union. Our central conclusions are bolstered by robustness tests that involve alternative treatments of GDP per capita and the inclusion of fixed effects. |
Keywords: | institutions, spatial econometrics, governance, neighborhood effects, spatial spillovers |
JEL: | C5 O1 O5 P5 |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:umd:umdeco:07-001&r=ecm |
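The core specification in the abstract above is a spatial lag of the dependent variable, y = lambda * W y + X beta + eps. The sketch below simulates that model with an artificial line-neighbour weight matrix and estimates it by a standard spatial two-stage least squares with instruments [X, WX, W^2 X]; it ignores the panel structure, the unbalancedness and the fixed effects that the paper actually deals with, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)

# Simulated spatial-lag model y = lambda * W y + X beta + eps, with W a
# row-normalised "nearest neighbours on a line" weight matrix.
n, lam_true, beta_true = 200, 0.5, np.array([1.0, -2.0])
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W = W / W.sum(axis=1, keepdims=True)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - lam_true * W, X @ beta_true + eps)  # reduced form

# Spatial 2SLS: regress the endogenous regressor W y on the instruments, then
# run the second stage with the fitted values.
Z = np.column_stack([X, W @ X[:, 1], W @ W @ X[:, 1]])
R = np.column_stack([W @ y, X])                 # regressors: [Wy, X]
R_hat = Z @ np.linalg.lstsq(Z, R, rcond=None)[0]
coef = np.linalg.lstsq(R_hat, y, rcond=None)[0]
print("estimated [lambda, beta0, beta1]:", np.round(coef, 3))
```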