
on Econometrics 
By:  Stefan Hoderlein (Institute for Fiscal Studies and Brown) 
Abstract:  In this paper we consider endogenous regressors in the binary choice model under a weak median exclusion restriction, but without further specification of the distribution of the unobserved random components. Our reduced form specification with heteroscedastic residuals covers various heterogeneous structural binary choice models. As a particularly relevant example of a structural model for which no semiparametric estimator has yet been analyzed, we consider the binary random utility model with endogenous regressors and heterogeneous parameters. We employ a control function IV assumption to establish identification of a slope parameter β by the mean ratio of derivatives of two functions of the instruments. We propose an estimator based on direct sample counterparts, and discuss the large sample behavior of this estimator. In particular, we show √n consistency and derive the asymptotic distribution. In the same framework, we propose tests for heteroscedasticity, overidentification and endogeneity. We analyze the small sample performance through a simulation study. An application of the model to discrete choice demand data concludes this paper. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:34/09&r=ecm 
By:  Markus Jochmann (Department of Economics, University of Strathclyde); Gary Koop (Department of Economics, University of Strathclyde); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies); Rodney W. Strachan (University of Queensland) 
Abstract:  This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model. 
Keywords:  Bayesian, cointegration, model averaging, model selection, Markov chain Monte Carlo 
JEL:  C11 C32 C52 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:str:wpaper:0919&r=ecm 
By:  Yonghong An; Yingyao Hu (Institute for Fiscal Studies and Johns Hopkins University) 
Abstract:  It is widely admitted that the inverse problem of estimating the distribution of a latent variable X* from an observed sample of X, a contaminated measurement of X*, is ill-posed. This paper shows that measurement error models for self-reporting data are well-posed, assuming the probability of reporting truthfully is nonzero, which is an observed property in validation studies. This optimistic result suggests that one should not ignore the point mass at zero in the error distribution when modeling measurement errors in self-reported data. We also illustrate that the classical measurement error models may in fact be conditionally well-posed given prior information on the distribution of the latent variable X*. By both a Monte Carlo study and an empirical application, we show that failing to account for this property can lead to significant bias in the estimation of the distribution of X*. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:35/09&r=ecm 
By:  Hrishikesh D. Vinod (Fordham University, Department of Economics) 
Abstract:  Phillips (1986) provides asymptotic theory for regressions that relate nonstationary time series, including those integrated of order 1, I(1). A practical implication of the literature on spurious regression is that one cannot trust the usual confidence intervals. In the absence of prior knowledge that two series are cointegrated, it is therefore recommended that after carrying out unit root tests we work with differenced or detrended series instead of the original data in levels. We propose a new alternative for obtaining confidence intervals based on the Maximum Entropy bootstrap explained in Vinod and Lopez-de-Lacalle (2009). An extensive Monte Carlo simulation shows that our proposal can provide more reliable conservative confidence intervals than traditional, differencing and block bootstrap (BB) intervals. 
Keywords:  Bootstrap, simulation, confidence intervals 
JEL:  C12 C15 C22 C51 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:frd:wpaper:dp201001&r=ecm 
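A rough illustration of the general idea of bootstrap confidence intervals for a regression slope (the maximum entropy bootstrap itself is implemented in the R package meboot and is designed for nonstationary series; the plain iid pairs-resampling percentile interval below is only a simplified stand-in, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def slope(x, y):
    """OLS slope of y on x (with intercept), in closed form."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

# Simulated pairs with true slope 2.0 (illustrative, not the paper's data)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

b_hat = slope(x, y)
boot = np.empty(999)
for b in range(999):
    idx = rng.integers(0, n, size=n)   # iid pairs resampling
    boot[b] = slope(x[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile interval
```

With dependent or nonstationary data this naive resampling destroys the serial structure, which is precisely the problem the maximum entropy bootstrap is built to avoid.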
By:  MOON, H.R.; PERRON, Benoit 
Abstract:  Most panel unit root tests are designed to test the joint null hypothesis of a unit root for each individual series in a panel. After a rejection, it will often be of interest to identify which series can be deemed stationary and which can be deemed nonstationary. Researchers will sometimes carry out this classification on the basis of n individual (univariate) unit root tests based on some ad hoc significance level. In this paper, we demonstrate how to use the false discovery rate (FDR) in evaluating I(1)/I(0) classifications based on individual unit root tests when the size of the cross section (n) and time series (T) dimensions are large. We report results from a simulation experiment and illustrate the methods on two data sets. 
Keywords:  False discovery rate, Multiple testing, unit root tests, panel data 
JEL:  C32 C33 C44 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:mtl:montec:102010&r=ecm 
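The basic FDR-controlling device behind such multiple-testing classifications is the Benjamini-Hochberg step-up procedure; a minimal sketch (the function name `bh_fdr` and the p-values are illustrative, and the paper's treatment of large-n, large-T panels goes well beyond this):

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean rejection
    mask that controls the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order]
    # find the largest k with p_(k) <= (k/n) * q
    thresh = (np.arange(1, n + 1) / n) * q
    below = ranked <= thresh
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Example: unit-root p-values for six series; rejected ones
# would be classified as I(0), the rest as I(1)
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6]
mask = bh_fdr(pvals, q=0.05)
```

Here only the two smallest p-values survive the step-up thresholds, whereas naive testing at the 5% level would also reject the two borderline series.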
By:  José Luis Aznarte (Department of Computer Science and Artificial Intelligence, CITIC-UGR, University of Granada, Spain); Marcelo Cunha Medeiros (Department of Economics, PUC-Rio); José Manuel Benítez Sánchez (Department of Computer Science and Artificial Intelligence, CITIC-UGR, University of Granada, Spain) 
Abstract:  In this paper, we introduce a linearity test for fuzzy rule-based models in the framework of time series modeling. To do so, we explore a family of statistical models, the regime-switching autoregressive models, and the relations that link them to the fuzzy rule-based models. From these relations, we derive a Lagrange Multiplier linearity test and some properties of the maximum likelihood estimator needed for it. Finally, an empirical study of the goodness of the test is presented. 
Keywords:  fuzzy rule-based models, time series, linearity test, statistical inference 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:rio:texdis:566&r=ecm 
By:  Jouchi Nakajima; Yasuhiro Omori 
Abstract:  A Bayesian analysis of a stochastic volatility model with a generalized hyperbolic (GH) skew Student's t error distribution is described, where we consider asymmetric heavy-tailness as well as leverage effects. An efficient Markov chain Monte Carlo estimation method is described, exploiting a normal variance-mean mixture representation of the error distribution with an inverse gamma distribution as a mixing distribution. The proposed method is illustrated using simulated data and daily TOPIX and S&P 500 stock returns. The model comparison for stock returns is conducted based on the marginal likelihood in the empirical study. Strong evidence of leverage and asymmetric heavy-tailness is found in the stock returns. Further, a prior sensitivity analysis is conducted to investigate whether the obtained results are robust with respect to the choice of priors. 
Keywords:  generalized hyperbolic skew Student's t-distribution, Markov chain Monte Carlo, Mixing distribution, State space model, Stochastic volatility, Stock returns 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd09124&r=ecm 
By:  Michael McAleer (Econometric Institute, Erasmus University Rotterdam); Marcelo Cunha Medeiros (Department of Economics, PUC-Rio) 
Abstract:  In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 indexes. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intraday returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed in this paper. 
Keywords:  Financial econometrics, volatility forecasting, neural networks, nonlinear models, realized volatility, bagging. 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:rio:texdis:568&r=ecm 
By:  Chunrong Ai; Xiaohong Chen (Institute for Fiscal Studies and Yale) 
Abstract:  This paper computes the semiparametric efficiency bound for finite dimensional parameters identified by models of sequential moment restrictions containing unknown functions. Our results extend those of Chamberlain (1992b) and Ai and Chen (2003) for semiparametric conditional moment restriction models with identical information sets to the case of nested information sets, and those of Chamberlain (1992a) and Brown and Newey (1998) for models of sequential moment restrictions without unknown functions to cases with unknown functions of possibly endogenous variables. Our bound results are applicable to semiparametric panel data models and semiparametric two-stage plug-in problems. As an example, we compute the efficiency bound for a weighted average derivative of a nonparametric instrumental variables (IV) regression, and find that the simple plug-in estimator is not efficient. Finally, we present an optimally weighted, orthogonalized, sieve minimum distance estimator that achieves the semiparametric efficiency bound. 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:28/09&r=ecm 
By:  Hrishikesh D. Vinod (Fordham University, Department of Economics) 
Abstract:  This paper considers estimation situations where identification, endogeneity and nonspherical regression error problems are present. Instead of always using GMM despite weak instruments to solve the endogeneity, it is possible to first check whether endogeneity is serious enough to cause inconsistency in the particular problem at hand. We show how to use the Maximum Entropy bootstrap (meboot) for nonstationary time series data and check 'convergence in probability' and 'almost sure convergence' by evaluating the proportion of sample paths straying outside error bounds as the sample size increases. The new Keynesian Phillips curve (NKPC) ordinary least squares (OLS) estimation for US data finds little endogeneity-induced inconsistency, and GMM seems to worsen it. The potential 'lack of identification' problem is solved by replacing the traditional pivot, which divides an estimate by its standard error, with the Godambe pivot, as explained in Vinod (2008) and Vinod (2010), leading to superior confidence intervals for deep parameters of the NKPC model. 
Keywords:  Bootstrap, simulation, convergence, inflation inertia, sticky prices 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:frd:wpaper:dp201002&r=ecm 
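The idea of checking 'convergence in probability' by the proportion of sample paths straying outside error bounds can be conveyed in a stylized iid setting (the paper applies this diagnostic with the maximum entropy bootstrap on nonstationary data; this simplified Monte Carlo, with the helper name `prop_outside` my own, only illustrates the shrinking-proportion logic):

```python
import numpy as np

rng = np.random.default_rng(1)

def prop_outside(n, eps=0.1, reps=500):
    """Fraction of replications in which the OLS slope at sample size n
    strays outside [1 - eps, 1 + eps] (the true slope is 1)."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = x + rng.normal(size=n)
        xc = x - x.mean()
        b = float(xc @ (y - y.mean()) / (xc @ xc))
        hits += abs(b - 1.0) > eps
    return hits / reps

# under consistency, the proportion should shrink toward zero as n grows
props = [prop_outside(n) for n in (25, 100, 400)]
```

A consistent estimator produces a proportion that falls toward zero as n increases; an inconsistent one does not, which is the informal check the abstract describes.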
By:  Daniel Preve (Singapore Management University); Marcelo Cunha Medeiros (Department of Economics, PUC-Rio) 
Abstract:  In this paper we introduce a linear programming estimator (LPE) for the slope parameter in a constrained linear regression model with a single regressor. The LPE is interesting because it can be superconsistent in the presence of an endogenous regressor and, hence, preferable to the ordinary least squares estimator (LSE). Two different cases are considered as we investigate the statistical properties of the LPE. In the first case, the regressor is assumed to be fixed in repeated samples. In the second, the regressor is stochastic and potentially endogenous. For both cases the strong consistency and exact finite-sample distribution of the LPE is established. Conditions under which the LPE is consistent in the presence of serially correlated, heteroskedastic errors are also given. Finally, we describe how the LPE can be extended to the case with multiple regressors and conjecture that the extended estimator is consistent under conditions analogous to the ones given herein. Finite-sample properties of the LPE and extended LPE in comparison to the LSE and instrumental variable estimator (IVE) are investigated in a simulation study. One advantage of the LPE is that it does not require an instrument. 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:rio:texdis:567&r=ecm 
By:  Davide Ferrari; Sandra Paterlini 
Abstract:  We consider a new robust parametric estimation procedure, which minimizes an empirical version of the Havrda-Charvát-Tsallis entropy. The resulting estimator adapts according to the discrepancy between the data and the assumed model by tuning a single constant q, which controls the trade-off between robustness and efficiency. The method is applied to expected return and volatility estimation of financial asset returns under multivariate normality. Theoretical properties, ease of implementability and empirical results on simulated and financial data make it a valid alternative to classic robust estimators and semiparametric minimum divergence methods based on kernel smoothing. 
Keywords:  q-entropy; robust estimation; power-divergence; financial returns 
JEL:  C13 G11 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:mod:recent:041&r=ecm 
By:  Adam Rosen (Institute for Fiscal Studies and University College London) 
Abstract:  This paper studies the identifying power of conditional quantile restrictions in short panels with fixed effects. In contrast to classical fixed effects models with conditional mean restrictions, conditional quantile restrictions are not preserved by taking differences in the regression equation over time. This paper shows however that a conditional quantile restriction, in conjunction with a weak conditional independence restriction, provides bounds on quantiles of differences in timevarying unobservables across periods. These bounds carry observable implications for model parameters which generally result in set identification. The analysis of these bounds includes conditions for point identification of the parameter vector, as well as weaker conditions that result in identification of individual parameter components. 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:26/09&r=ecm 
By:  Brendan P.M. McCabe; Gael Martin; Keith Freeland 
Abstract:  A test is derived for short-memory correlation in the conditional variance of strictly positive, skewed data. The test is quasi-locally most powerful (QLMP) under the assumption of conditionally gamma data. Analytical asymptotic relative efficiency calculations show that an alternative test, based on the first-order autocorrelation coefficient of the squared data, has negligible relative power to detect correlation in the conditional variance. Finite sample simulation results confirm the poor performance of the squares-based test for fixed alternatives, as well as demonstrating the poor performance of the test based on the first-order autocorrelation coefficient of the raw (levels) data. Robustness of the QLMP test, both to misspecification of the conditional distribution and misspecification of the dynamics, is also demonstrated using simulation. The test is illustrated using financial trade durations data. 
Keywords:  Locally most powerful test; quasilikelihood; asymptotic relative efficiency; durations data; gamma distribution; Weibull distribution. 
JEL:  C12 C16 C22 
Date:  2010–02–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20102&r=ecm 
By:  Sokbae 'Simon' Lee (Institute for Fiscal Studies and University College London); Yoon-Jae Whang (Institute for Fiscal Studies and Seoul National University) 
Abstract:  We develop a general class of nonparametric tests for treatment effects conditional on covariates. We consider a wide spectrum of null and alternative hypotheses regarding conditional treatment effects, including (i) the null hypothesis of conditional stochastic dominance between treatment and control groups; (ii) the null hypothesis that the conditional average treatment effect is positive for each value of the covariates; and (iii) the null hypothesis of no distributional (or average) treatment effect conditional on covariates against a one-sided (or two-sided) alternative hypothesis. The test statistics are based on L1-type functionals of uniformly consistent nonparametric kernel estimators of conditional expectations that characterize the null hypotheses. Using the Poissonization technique of Giné et al. (2003), we show that suitably studentized versions of our test statistics are asymptotically standard normal under the null hypotheses, and also show that the proposed nonparametric tests are consistent against general fixed alternatives. Furthermore, it turns out that our tests have nonnegligible power against some local alternatives that are n^{-1/2} different from the null hypotheses, where n is the sample size. We provide a more powerful test for the case when the null hypothesis may be binding only on a strict subset of the support, and also consider an extension to testing for quantile treatment effects. We illustrate the usefulness of our tests by applying them to data from a randomized job training program (LaLonde, 1986) and by carrying out Monte Carlo experiments based on this dataset. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:36/09&r=ecm 
By:  Alvaro Cartea; Dimitrios Karyampas 
Abstract:  Using high frequency data for the price dynamics of equities, we measure the impact that market microstructure noise has on estimates of: (i) the volatility of returns; and (ii) the variance-covariance matrix of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series into the true efficient price and the microstructure noise. This approach allows us to employ volatility estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this price series decomposition allows us to estimate the variance-covariance matrix of n assets in a more efficient way than the methods so far proposed in the literature. We illustrate our results by calculating how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM setting. 
Keywords:  Volatility estimation, Highfrequency data, Market microstructure theory, Covariation of assets, Matrix process, Kalman filter 
JEL:  G12 G14 C22 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wbrepe:wb097609&r=ecm 
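A minimal sketch of the filtering idea: a local-level Kalman filter that treats the observed price as an efficient (random-walk) price plus iid microstructure noise. This is the textbook filter, not the authors' full methodology, and the state and noise variances are assumed known here:

```python
import numpy as np

def kalman_local_level(y, q, r):
    """Kalman filter for y_t = m_t + eps_t (obs variance r) with
    m_t = m_{t-1} + eta_t (state variance q); returns the filtered state."""
    n = y.size
    m = np.empty(n)
    a, p = y[0], r              # initial state mean and variance
    for t in range(n):
        p = p + q               # predict
        k = p / (p + r)         # Kalman gain
        a = a + k * (y[t] - a)  # update with the new observation
        p = (1 - k) * p
        m[t] = a
    return m

rng = np.random.default_rng(6)
true_price = np.cumsum(rng.normal(scale=0.01, size=2000))  # efficient price
obs = true_price + rng.normal(scale=0.05, size=2000)       # + microstructure noise
filt = kalman_local_level(obs, q=0.01**2, r=0.05**2)

rmse_raw = np.sqrt(np.mean((obs - true_price) ** 2))
rmse_filt = np.sqrt(np.mean((filt - true_price) ** 2))
```

In this simulation the filtered series tracks the efficient price with a much lower RMSE than the raw observations, which is the mechanism behind the RMSE gains the abstract reports.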
By:  H. F. CoronelBrizio (Facultad de Fisica e Inteligencia Artificial, Departamento de Inteligencia Artificial, Universidad Veracruzana, Mexico); A. R. HernandezMontoya (Facultad de Fisica e Inteligencia Artificial, Departamento de Inteligencia Artificial, Universidad Veracruzana, Mexico) 
Abstract:  Maximum likelihood estimation and a test of fit based on the Anderson-Darling statistic are presented for the case of the power law distribution when the parameters are estimated from a left-censored sample. Expressions for the maximum likelihood estimators and tables of asymptotic percentage points for the A^2 statistic are given. The technique is illustrated for data from the Dow Jones Industrial Average index, an example of high theoretical and practical importance in Econophysics, Finance, Physics, Biology and, in general, in other related sciences such as Complexity Sciences. 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1004.0417&r=ecm 
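In the uncensored case the power-law (Pareto) maximum likelihood estimator has a familiar closed form, alpha_hat = n / Σ log(x_i / x_min); the paper's left-censored expressions and Anderson-Darling percentage points go beyond this sketch, whose helper name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def pareto_alpha_mle(x, xmin):
    """ML (Hill-type) estimator of the power-law exponent for the tail
    sample x >= xmin: alpha_hat = n / sum(log(x / xmin))."""
    x = np.asarray(x, dtype=float)
    tail = x[x >= xmin]
    return tail.size / np.log(tail / xmin).sum()

# Pareto(alpha) draws by inversion: X = xmin * U^(-1/alpha)
alpha, xmin = 2.5, 1.0
u = rng.uniform(size=50_000)
x = xmin * u ** (-1.0 / alpha)
alpha_hat = pareto_alpha_mle(x, xmin)
```

With 50,000 draws the estimator recovers the true exponent to within roughly its asymptotic standard error alpha/√n ≈ 0.011.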
By:  Hacker, Scott (Jonkoping International Business School); Hatemi-J, Abdulnasser (UAE University) 
Abstract:  The classic Dickey-Fuller unit-root test can be applied using three different equations, depending upon the inclusion of a constant and/or a time trend in the regression equation. This paper investigates the size and power properties of a unit-root testing strategy outlined in Enders (2004), which allows for repeated testing of the unit root with the three equations depending on the significance of various parameters in the equations. This strategy is similar to strategies suggested by others for unit root testing. Our Monte Carlo simulation experiments show that serious mass significance problems prevail when using the strategy suggested by Enders. Excluding the possibility of unrealistic outcomes and using a priori information on whether there is a trend in the underlying time series, as suggested by Elder and Kennedy (2001), reduces the mass significance problem for the unit root test and improves power for that test. Subsequent testing for whether a trend exists is, however, seriously affected by testing for the unit root first. 
Keywords:  Unit Roots; Deterministic Components; Model Selection 
JEL:  C30 
Date:  2010–02–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:cesisp:0214&r=ecm 
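The three Dickey-Fuller regression variants differ only in their deterministic terms; a minimal OLS sketch of the t-statistic on the lagged level (asymptotic critical values omitted; the helper `df_tstat` is illustrative, not from the paper):

```python
import numpy as np

def df_tstat(y, trend="c"):
    """t-statistic on rho in Delta y_t = mu (+ b*t) + rho*y_{t-1} + e_t.
    trend: "n" no constant, "c" constant, "ct" constant and linear trend."""
    dy = np.diff(y)
    ylag = y[:-1]
    n = dy.size
    cols = [ylag]
    if trend in ("c", "ct"):
        cols.append(np.ones(n))
    if trend == "ct":
        cols.append(np.arange(1, n + 1))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

rng = np.random.default_rng(3)
rw = np.cumsum(rng.normal(size=500))       # unit-root series
ar = np.zeros(500)
for t in range(1, 500):                    # stationary AR(1), phi = 0.5
    ar[t] = 0.5 * ar[t - 1] + rng.normal()
t_rw = df_tstat(rw, "c")
t_ar = df_tstat(ar, "c")
```

The strategy the paper studies amounts to moving between the "n", "c" and "ct" variants depending on the significance of the deterministic terms, which is where the mass significance problems arise.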
By:  Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid 
Abstract:  We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and U.S. macroeconomic aggregates growth rates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons. 
Date:  2010–03–29 
URL:  http://d.repec.org/n?u=RePEc:fgv:epgewp:704&r=ecm 
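The flavor of selection by an information criterion, in the much simpler univariate AR case (the paper treats VAR lag length, cointegrating rank and short-run rank jointly; this BIC sketch, with the illustrative helper `ar_bic`, shows only the basic ingredient of comparing penalized fits on a common sample):

```python
import numpy as np

def ar_bic(y, p, pmax):
    """Gaussian BIC of an AR(p) fitted by OLS on the common sample
    t = pmax..n-1, so the criterion is comparable across lag lengths."""
    n = y.size
    yy = y[pmax:]
    m = yy.size
    cols = [np.ones(m)] + [y[pmax - k : n - k] for k in range(1, p + 1)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    resid = yy - X @ beta
    s2 = resid @ resid / m
    return m * np.log(s2) + (p + 1) * np.log(m)

rng = np.random.default_rng(7)
y = np.zeros(1000)
for t in range(2, 1000):                   # true model is a stationary AR(2)
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

pmax = 6
best_p = min(range(pmax + 1), key=lambda p: ar_bic(y, p, pmax))
```

Criteria with data-dependent penalties, as considered in the paper, replace the fixed log(m) penalty with one tuned to the data.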
By:  Andrew Chesher (Institute for Fiscal Studies and University College London) 
Abstract:  This paper studies single equation models for binary outcomes incorporating instrumental variable restrictions. The models are incomplete in the sense that they place no restriction on the way in which values of endogenous variables are generated. The models are set, not point, identifying. The paper explores the nature of set identification in single equation IV models in which the binary outcome is determined by a threshold crossing condition. There is special attention to models which require the threshold crossing function to be a monotone function of a linear index involving observable endogenous and exogenous explanatory variables. Identified sets can be large unless instrumental variables have substantial predictive power. A generic feature of the identified sets is that they are not connected when instruments are weak. The results suggest that the strong point identifying power of triangular "control function" models, which are restricted versions of the IV models considered here, is fragile, with the wide expanses of the IV model's identified set awaiting in the event of failure of the triangular model's restrictions. 
Date:  2009–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:23/09&r=ecm 
By:  Koji Miyawaki (National Institute for Environmental Studies); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Akira Hibiki (National Institute for Environmental Studies) 
Abstract:  This article proposes a Bayesian estimation of demand functions under block-rate pricing, focusing on increasing block-rate pricing. This is the first study that explicitly considers the separability condition, which has been ignored in the previous literature. Under this pricing structure, the price changes when consumption exceeds a certain threshold and the consumer faces a utility maximization problem subject to a piecewise-linear budget constraint. Solving this maximization problem leads to a statistical model in which the model parameters are strongly restricted by the separability condition. In this article, taking a hierarchical Bayesian approach, we implement a Markov chain Monte Carlo simulation to properly estimate the demand function. We find, however, that convergence of the distribution of the simulated samples to the posterior distribution is slow, requiring the addition of a scale transformation step for the parameters to the Gibbs sampler. These proposed methods are then applied to estimate the Japanese residential water demand function. 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2010cf712&r=ecm 
By:  Manuel Arellano (Institute for Fiscal Studies and CEMFI); Lars Peter Hansen; Enrique Sentana 
Abstract:  We develop methods for testing the hypothesis that an econometric model is underidentified and inferring the nature of the failed identification. By adopting a generalized-method-of-moments perspective, we feature the structural relations directly and we allow for nonlinearity in the econometric specification. We establish the link between a test for overidentification and our proposed test for underidentification. If, after attempting to replicate the structural relation, we find substantial evidence against the overidentifying restrictions of an augmented model, this is evidence against underidentification of the original model. 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:24/09&r=ecm 
By:  Arie Beresteanu (Institute for Fiscal Studies and Duke); Ilya Molchanov; Francesca Molinari (Institute for Fiscal Studies and Cornell University) 
Abstract:  We provide a tractable characterization of the sharp identification region of the parameters θ in a broad class of incomplete econometric models. Models in this class have set-valued predictions that yield a convex set of conditional or unconditional moments for the model variables. In short, we refer to these as models with convex predictions. Examples include static, simultaneous-move finite games of complete information in the presence of multiple mixed strategy Nash equilibria; random utility models of multinomial choice in the presence of interval regressor data; and best linear predictors with interval outcome and covariate data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted Θ_I, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. We show that algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in Θ_I. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method. This paper is a revised version of cemmap working paper CWP15/08. 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:27/09&r=ecm 
By:  Tatsuya Kubokawa (Faculty of Economics, University of Tokyo) 
Abstract:  The estimation of a linear combination of several restricted location parameters is addressed from a decision-theoretic point of view. A benchmark estimator of the linear combination is an unbiased estimator, which is minimax but inadmissible relative to the mean squared error. An interesting issue is which prior distribution results in a generalized Bayes estimator that is minimax. Although it seems plausible that the generalized Bayes estimator against the uniform prior over the restricted space should be minimax, it is shown not to be minimax when the number of location parameters, k, is greater than or equal to three, while it is minimax for k = 1. In the case of k = 2, a necessary and sufficient condition for minimaxity is given; namely, minimaxity depends on the signs of the coefficients of the linear combination. When the underlying distributions are normal, we can obtain a prior distribution which results in a generalized Bayes estimator satisfying minimaxity and admissibility. Finally, it is demonstrated that the estimation of the ratio of normal variances converges to the estimation of the difference of normal positive means, which motivates the issue studied here. 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2010cf723&r=ecm 
By:  Erich Battistin (Institute for Fiscal Studies); Andrew Chesher (Institute for Fiscal Studies and University College London) 
Abstract:  This paper investigates the effect that covariate measurement error has on a conventional treatment effect analysis built on an unconfoundedness restriction that embodies conditional independence restrictions in which there is conditioning on error free covariates. The approach uses small parameter asymptotic methods to obtain the approximate generic effects of measurement error. The approximations can be estimated using data on observed outcomes, the treatment indicator and error contaminated covariates, providing an indication of the nature and size of measurement error effects. The approximations can be used in a sensitivity analysis to probe the potential effects of measurement error on the evaluation of treatment effects. 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:25/09&r=ecm 
By:  Julia Schaumburg 
Abstract:  This paper studies the performance of nonparametric quantile regression as a tool to predict Value at Risk (VaR). The approach is flexible as it requires no assumptions on the form of return distributions. A monotonized double kernel local linear estimator is applied to estimate moderate (1%) conditional quantiles of index return distributions. For extreme (0.1%) quantiles, where particularly few data points are available, we propose to combine nonparametric quantile regression with extreme value theory. The out-of-sample forecasting performance of our methods turns out to be clearly superior to different specifications of the Conditionally Autoregressive VaR (CAViaR) models. 
Keywords:  Value at Risk, nonparametric quantile regression, risk management, extreme value theory, monotonization, CAViaR 
JEL:  C14 C22 C52 C53 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010009&r=ecm 
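A crude stand-in for the estimator class: a kernel-weighted conditional quantile (local constant, without monotonization), rather than the paper's monotonized double-kernel local-linear fit; the helper name and simulated design are illustrative:

```python
import numpy as np

def kernel_quantile(x0, x, y, tau, h):
    """Local-constant conditional tau-quantile of y at x0: the weighted
    empirical quantile with Gaussian kernel weights in x (a much simpler
    device than a monotonized double-kernel local-linear estimator)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / w.sum()
    return y[order][np.searchsorted(cw, tau)]

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=5000)
y = rng.normal(scale=1.0 + 0.5 * x)   # conditional volatility rises with x

# 5% conditional quantile (a VaR-style object) at low- and high-volatility x
q_lo_vol = kernel_quantile(-0.5, x, y, tau=0.05, h=0.2)
q_hi_vol = kernel_quantile(0.5, x, y, tau=0.05, h=0.2)
```

As expected, the estimated 5% quantile is more negative where conditional volatility is higher; for the extreme 0.1% quantiles the paper instead splices in extreme value theory, since so few observations fall in the tail.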
By:  Matteo Pelagatti (Department of Statistics, Università degli Studi di Milano-Bicocca); Pranab Sen (Department of Statistics and Operations Research, University of North Carolina at Chapel Hill) 
Abstract:  This paper proposes a test of the null hypothesis of stationarity that is robust to the presence of fat-tailed errors. The test statistic is a modified version of the KPSS statistic, in which ranks substitute for the original observations. The rank KPSS statistic has the same limiting distribution as the standard KPSS statistic under the null and diverges under I(1) alternatives. It features good power under both thin-tailed and fat-tailed distributions and it turns out to be a valid alternative to the original KPSS and the recently proposed Index KPSS (de Jong et al. 2007). 
Keywords:  Stationarity testing, Time series, Robustness, Rank statistics, Empirical processes 
JEL:  C12 C14 C22 
Date:  2009–07 
URL:  http://d.repec.org/n?u=RePEc:mis:wpaper:20090701&r=ecm 
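The construction is easy to sketch: compute the usual level-stationarity KPSS statistic after replacing the observations by their ranks (long-run variance with Bartlett weights; `lags=0` below for simplicity, and the helper names are mine):

```python
import numpy as np

def kpss_stat(y, lags=0):
    """Level-stationarity KPSS statistic: scaled partial sums of the
    demeaned series over a Bartlett-weighted long-run variance estimate."""
    e = y - y.mean()
    n = e.size
    s = np.cumsum(e)
    lrv = e @ e / n
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1)
        lrv += 2.0 * w * (e[k:] @ e[:-k]) / n
    return (s @ s) / (n ** 2 * lrv)

def rank_kpss(y, lags=0):
    """Rank version: apply the KPSS statistic to the ranks of y."""
    ranks = np.argsort(np.argsort(y)).astype(float)
    return kpss_stat(ranks, lags)

rng = np.random.default_rng(5)
fat_iid = rng.standard_t(df=1, size=500)   # fat-tailed stationary (Cauchy)
rw = np.cumsum(rng.normal(size=500))       # I(1) random walk
stat_iid = rank_kpss(fat_iid)
stat_rw = rank_kpss(rw)
```

Ranks are bounded even when the data are Cauchy, which is what buys the robustness: the statistic stays moderate for the fat-tailed stationary series but diverges for the random walk.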
By:  Artem Prokhorov (Concordia University) 
Abstract:  Several recent papers (e.g., Newey et al., 2005; Newey and Smith, 2004; Anatolyev, 2005) derive general expressions for the second-order bias of the GMM estimator and its first-order equivalents such as the EL estimator. Except for some simulation evidence, it is unknown how these compare to the second-order bias of the QMLE of covariance structure models. The paper derives the QMLE bias formulas for this general class of models. The bias, which is identical to the EL second-order bias under normality, depends on the fourth moments of the data and remains the same as for EL even for nonnormal data, so long as the condition for equal asymptotic efficiency of QMLE and GMM derived in Prokhorov (2009) is satisfied. 
Keywords:  (Q)MLE, GMM, EL, Covariance structures 
JEL:  C13 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:crd:wpaper:10001&r=ecm 
By:  Stephen Hall 
Abstract:  The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables which are both exogenous and relevant. We argue that in general such instruments (weak or strong) cannot exist. 
JEL:  C32 C51 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:09/16&r=ecm 
By:  Francesco Battaglia; Mattheos K. Protopapas 
Abstract:  Nonlinear nonstationary models for time series are considered, where the series is generated from an autoregressive equation whose coefficients change both according to time and the delayed values of the series itself, switching between several regimes. The transition from one regime to the next one may be discontinuous (self-exciting threshold model), smooth (smooth transition model) or continuous linear (piecewise linear threshold model). A genetic algorithm for identifying and estimating such models is proposed, and its behavior is evaluated through a simulation study and application to temperature data and a financial index. 
Date:  2010–01–28 
URL:  http://d.repec.org/n?u=RePEc:com:wpaper:026&r=ecm 
By:  Gianluca Moretti (Bank of Italy); Giulio Nicoletti (Bank of Italy) 
Abstract:  Recent empirical literature shows that key macro variables such as GDP and productivity display long memory dynamics. For DSGE models, we propose a 'Generalized' Kalman Filter to deal effectively with this problem: our method connects to and innovates upon data-filtering techniques already used in the DSGE literature. We show that our method produces more plausible estimates of the deep parameters as well as more accurate out-of-sample forecasts of macroeconomic data. 
Keywords:  DSGE models, long memory, Kalman Filter. 
JEL:  C51 C53 E37 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_750_10&r=ecm 
By:  Asai, M.; Caporin, M. (Erasmus Econometric Institute) 
Abstract:  Most multivariate variance models suffer from a common problem, the "curse of dimensionality". For this reason, most are fitted under strong parametric restrictions that reduce the interpretation and flexibility of the models. Recently, the literature has focused on multivariate models with milder restrictions, which aim to reconcile the interpretability and efficiency needed by model users with the computational problems that may emerge when the number of assets is quite large. We contribute to this strand of the literature by proposing a block-type parameterization for multivariate stochastic volatility models. 
Keywords:  block structures; multivariate stochastic volatility; curse of dimensionality 
Date:  2009–12–17 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765017523&r=ecm 
By:  Hacker, Scott (Jonkoping International Business School) 
Abstract:  This paper compares the performance of using an information criterion, such as the Akaike information criterion or the Schwarz (Bayesian) information criterion, rather than hypothesis testing when determining whether a variable has a unit root and, if its trend status is unknown, whether it has a trend. The investigation is performed through Monte Carlo simulations. Properties considered are the frequency of choosing the unit root status correctly, predictive performance, and the frequency of choosing an appropriate subsequent action on the examined variable (first differencing, detrending, or doing nothing). Relative performance is considered in a minimax regret framework. The results indicate that using an information criterion to determine the unit root status and (if necessary) trend status of a variable can be competitive with alternative hypothesis testing strategies. 
Keywords:  Unit Root; Stationarity; Model Selection; Minimax regret; Information Criteria 
JEL:  C22 
Date:  2010–02–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:cesisp:0213&r=ecm 
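A toy version of the comparison the abstract describes, assuming Gaussian AIC and only two candidate descriptions of the series (difference-stationary vs. trend-stationary); the paper's simulation design and criteria set are considerably richer.

```python
import numpy as np

def aic(rss, n, k):
    # Gaussian AIC up to additive constants (illustrative helper)
    return n * np.log(rss / n) + 2 * k

def choose_unit_root(y):
    """Pick between a random-walk description and a linear-trend
    description by AIC (a sketch of IC-based unit-root determination)."""
    n = len(y)
    # Model 1: unit root -> first differences modeled as white noise
    d = np.diff(y)
    rss1 = np.sum((d - d.mean()) ** 2)
    a1 = aic(rss1, n - 1, 2)            # drift + innovation variance
    # Model 2: trend stationary -> OLS on constant and linear trend
    X = np.column_stack([np.ones(n), np.arange(n)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss2 = np.sum((y - X @ beta) ** 2)
    a2 = aic(rss2, n, 3)                # intercept + slope + variance
    return "unit root" if a1 < a2 else "trend stationary"
```

No test statistic or critical value appears anywhere: the unit-root decision is purely a model-selection step, which is the contrast with hypothesis testing that the paper evaluates.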
By:  Stephen Hall 
Abstract:  We examine the behaviour of the Dickey-Fuller (DF) test in the case of noisy data using Monte Carlo simulation. The findings show clearly that the size distortion of the DF test becomes larger as the noise in the data increases. 
Keywords:  Hypothesis testing; Unit root test; Monte Carlo Analysis. 
JEL:  C01 C12 C15 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:09/18&r=ecm 
By:  Ghosh, Gaurav (E.ON Energy Research Center, Future Energy Consumer Needs and Behavior (FCN)); Carriazo, Fernando (Economic Research Service, U.S. Department of Agriculture) 
Abstract:  We empirically compare the accuracy and precision of representative Least Squares, Maximum Likelihood and Bayesian methods of estimation. Using an approach similar to the jackknife, each method is repeatedly applied to subsamples of a data set on the property market in Bogotá, Colombia to generate multiple estimates of the underlying explanatory spatial hedonic model. The estimates are then used to predict prices at a fixed set of locations. A nonparametric comparison of the estimates and predictions suggests that the Bayesian method performs best overall, but that the Likelihood method is most suited to estimation of the independent variable coefficients. Significant heterogeneity exists in the specific test results. 
Keywords:  Spatial Econometrics; Bayesian Statistics; Hedonic Valuation 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:ris:fcnwpa:2009_009&r=ecm 
By:  Ivan Savin; Peter Winker 
Abstract:  Innovations, be they radical new products or technology improvements, are widely recognized as a key factor of economic growth. Identifying the factors that trigger innovative activities is a main concern for economic theory and empirical analysis. As the number of hypotheses is large, the process of model selection becomes a crucial part of the empirical implementation. The problem is complicated by the fact that unobserved heterogeneity and possible endogeneity of regressors have to be taken into account. A new efficient solution to this problem is suggested, applying optimization heuristics that exploit the inherent discrete nature of the problem. The model selection is based on information criteria and the Sargan test of overidentifying restrictions. The method is applied to Russian regional data within the framework of a log-linear dynamic panel data model. To illustrate the performance of the method, we also report the results of Monte Carlo simulations. 
Keywords:  Innovation, dynamic panel data, GMM, model selection, threshold accepting, genetic algorithms. 
Date:  2010–02–04 
URL:  http://d.repec.org/n?u=RePEc:com:wpaper:027&r=ecm 
By:  Toru Kitagawa (Institute for Fiscal Studies and UCL) 
Abstract:  <p>This paper examines the identification power of the instrument exogeneity assumption in the treatment effect model. We derive the identification region: the set of potential outcome distributions that are compatible with the data and the model restriction. The model restrictions whose identifying power is investigated are (i) instrument independence of each of the potential outcomes (marginal independence), (ii) instrument joint independence of the potential outcomes and the selection heterogeneity, and (iii) instrument monotonicity in addition to (ii) (the LATE restriction of Imbens and Angrist (1994)), where these restrictions become stronger in the order of listing. By comparing the size of the identification region under each restriction, we show that the joint independence restriction can provide more identifying information for the potential outcome distributions than marginal independence, but the LATE restriction never does, since it solely constrains the distribution of the data. We also derive the tightest possible bounds for the average treatment effects under each restriction. Our analysis covers both the discrete and continuous outcome cases, and extends the treatment effect bounds of Balke and Pearl (1997), which are available only for the binary outcome case, to a wider range of settings including the continuous outcome case.</p> 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:30/09&r=ecm 
By:  Markus Jochmann (Department of Economics, University of Strathclyde) 
Abstract:  This paper uses an infinite hidden Markov model (IHMM) to analyze U.S. inflation dynamics with a particular focus on the persistence of inflation. The IHMM is a Bayesian nonparametric approach to modeling structural breaks. It allows for an unknown number of breakpoints and is a flexible and attractive alternative to existing methods. We find a clear structural break during the recent financial crisis. Prior to that, inflation persistence was high and fairly constant. 
Keywords:  inflation dynamics, hierarchical Dirichlet process, IHMM, structural breaks, Bayesian nonparametrics 
JEL:  C11 C22 E31 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:03_10&r=ecm 
By:  Hacker, R. Scott (CESIS, Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology); Hatemi-J, Abdulnasser (UAE University) 
Abstract:  Granger causality tests are among the most popular empirical applications with time series data. Several new tests have been developed in the literature that can deal with different data generating processes. In all existing theoretical papers it is assumed that the lag length is known a priori. However, in applied research the lag length has to be selected before testing for causality. This paper suggests that in investigating the effectiveness of various Granger causality testing methodologies, including those using bootstrapping, the lag length choice should be endogenized, by which we mean that the data-driven pre-selection of the lag length should be taken into account. We provide and accordingly evaluate a Granger causality bootstrap test which may be used with data that may or may not be integrated, and compare the performance of this test to that of the analogous asymptotic test. The suggested bootstrap test performs well and also appears to be robust to the ARCH effects that usually characterize financial data. The test is applied to examining the causal impact of the US financial market on the market of the United Arab Emirates. 
Keywords:  Causality; VAR Model; Stability; Endogenous Lag; ARCH; Leverages 
JEL:  C15 C32 G11 
Date:  2010–04–10 
URL:  http://d.repec.org/n?u=RePEc:hhs:cesisp:0223&r=ecm 
By:  Andrew Chesher (Institute for Fiscal Studies and University College London); Konrad Smolinski (Institute for Fiscal Studies) 
Abstract:  <p>This paper studies single equation instrumental variable models of ordered choice in which explanatory variables may be endogenous. The models are weakly restrictive, leaving unspecified the mechanism that generates endogenous variables. These incomplete models are set, not point, identifying for parametrically (e.g. ordered probit) or nonparametrically specified structural functions. The paper gives results on the properties of the identified set for the case in which potentially endogenous explanatory variables are discrete. The results are used as the basis for calculations showing the rate of shrinkage of identified sets as the number of classes in which the outcome is categorised increases.</p> 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:37/09&r=ecm 
By:  Lluís Bermúdez (Departament de Matemàtica Econòmica, Financera i Actuarial, Universitat de Barcelona); Dimitris Karlis (Athens University of Economics and Business) 
Abstract:  When actuaries face the problem of pricing an insurance contract that contains different types of coverage, such as a motor insurance or homeowner's insurance policy, they usually assume that the types of claim are independent. However, this assumption may not be realistic: several studies have shown that there is a positive correlation between types of claim. Here we introduce different multivariate Poisson regression models in order to relax the independence assumption, including zero-inflated models to account for an excess of zeros and overdispersion. These models have been largely ignored to date, mainly because of their computational difficulties. Bayesian inference based on MCMC helps to solve this problem (and also lets us derive, for several quantities of interest, posterior summaries to account for uncertainty). Finally, these models are applied to an automobile insurance claims database with three different types of claims. We analyse the consequences for pure and loaded premiums when the independence assumption is relaxed by using different multivariate Poisson regression models and their zero-inflated versions. 
Keywords:  Multivariate Poisson regression models, Zero-inflated models, Automobile insurance, MCMC inference, Gibbs sampling 
JEL:  C51 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap20104&r=ecm 
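The positive correlation between claim types that motivates the paper can be generated with the standard common-shock (trivariate reduction) construction of a bivariate Poisson distribution; a sketch, with illustrative parameter names:

```python
import numpy as np

def bivariate_poisson(lam1, lam2, lam0, size, rng):
    """Draw (X1, X2) with X_j = Y_j + Y0 and Y_j ~ Poisson(lam_j).
    Marginals are Poisson(lam_j + lam0) and Cov(X1, X2) = lam0,
    so lam0 > 0 induces the positive dependence between claim types."""
    y0 = rng.poisson(lam0, size)                       # shared shock
    return rng.poisson(lam1, size) + y0, rng.poisson(lam2, size) + y0
```

This construction only allows non-negative dependence, which matches the empirical finding the abstract cites; the paper's regression and zero-inflated extensions put covariates into the rates.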
By:  Stefan Hoderlein (Institute for Fiscal Studies and Brown); Halbert White 
Abstract:  <p>This paper is concerned with extending the familiar notion of fixed effects to nonlinear setups with infinite dimensional unobservables like preferences. The main result is that a generalized version of differencing identifies local average structural derivatives (LASDs) in very general nonseparable models, while allowing for arbitrary dependence between the persistent unobservables and the regressors of interest even if there are only two time periods. These quantities specialize to well known objects like the slope coefficient in the semiparametric panel data binary choice model with fixed effects. We extend the basic framework to include dynamics in the regressors and time trends, and show how distributional effects as well as average effects are identified. In addition, we show how to handle endogeneity in the transitory component. Finally, we adapt our results to the semiparametric binary choice model with correlated coefficients, and establish that average structural marginal probabilities are identified. We conclude this paper by applying the last result to a real world data example. Using the PSID, we analyze the way in which the lending restrictions for mortgages eased between 2000 and 2004.</p> 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:33/09&r=ecm 
By:  Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val; Whitney Newey (Institute for Fiscal Studies and Massachusetts Institute of Technology) 
Abstract:  <p><p><p>This paper gives identification and estimation results for quantile and average effects in nonseparable panel models, when the distribution of period specific disturbances does not vary over time. Bounds are given for interesting effects with discrete regressors that are strictly exogenous or predetermined. We allow for location and scale time effects and show how monotonicity can be used to shrink the bounds. We derive rates at which the bounds tighten as the number T of time series observations grows and give an empirical illustration.</p></p></p> 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:29/09&r=ecm 
By:  Aleksey Tetenov 
Abstract:  This paper studies the problem of treatment choice between a status quo treatment with a known outcome distribution and an innovation whose outcomes are observed only in a representative finite sample. I evaluate statistical decision rules, which are functions that map sample outcomes into the planner’s treatment choice for the population, based on regret, which is the expected welfare loss due to assigning inferior treatments. I extend previous work that applied the minimax regret criterion to treatment choice problems by considering decision criteria that asymmetrically treat Type I regret (due to mistakenly choosing an inferior new treatment) and Type II regret (due to mistakenly rejecting a superior innovation). I derive exact finite sample solutions to these problems for experiments with normal, Bernoulli and bounded distributions of individual outcomes. In conclusion, I discuss approaches to the problem for other classes of distributions. Along the way, the paper compares asymmetric minimax regret criteria with statistical decision rules based on classical hypothesis tests. 
Keywords:  treatment effects, loss aversion, statistical decisions, hypothesis testing. 
JEL:  C44 C21 C12 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:cca:wpaper:119&r=ecm 
By:  Johanna Kappus; Markus Reiß 
Abstract:  A Lévy process is observed at time points of distance delta until time T. We construct an estimator of the Lévy-Khinchine characteristics of the process and derive optimal rates of convergence simultaneously in T and delta. We thereby encompass the usual low- and high-frequency assumptions and also obtain asymptotics in the mid-frequency regime. 
Keywords:  Lévy process, Lévy-Khinchine characteristics, Nonparametric estimation, Inverse problem, Optimal rates of convergence 
JEL:  G13 C14 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010015&r=ecm 
By:  K. W. DE BOCK; K. COUSSEMENT; D. VAN DEN POEL; 
Abstract:  Generalized additive models (GAMs) are a generalization of generalized linear models (GLMs) and constitute a powerful technique which has successfully proven its ability to capture nonlinear relationships between explanatory variables and a response variable in many domains. In this paper, GAMs are proposed as base classifiers for ensemble learning. Three alternative ensemble strategies for binary classification using GAMs as base classifiers are proposed: (i) GAMbag based on Bagging, (ii) GAMrsm based on the Random Subspace Method (RSM), and (iii) GAMens as a combination of both. In an experimental validation performed on 12 data sets from the UCI repository, the proposed algorithms are benchmarked against a single GAM and against decision-tree-based ensemble classifiers (i.e. RSM, Bagging, Random Forest, and the recently proposed Rotation Forest). From the results a number of conclusions can be drawn. Firstly, the use of an ensemble of GAMs instead of a single GAM always leads to improved prediction performance. Secondly, GAMrsm and GAMens perform comparably, while both versions outperform GAMbag. Finally, the value of using GAMs as base classifiers in an ensemble instead of standard decision trees is demonstrated. GAMbag demonstrates comparable performance to ordinary Bagging. Moreover, GAMrsm and GAMens outperform RSM and Bagging, while these two GAM ensemble variations perform comparably to Random Forest and Rotation Forest. Sensitivity analyses are included for the number of member classifiers in the ensemble, the number of variables included in a random feature subspace and the number of degrees of freedom for GAM spline estimation. 
Keywords:  Data mining, Classification, Ensemble learning, GAM, UCI 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:09/625&r=ecm 
By:  Christian de Peretti; Carole Siani; Mario Cerrato 
Abstract:  This paper proposes a bootstrap artificial neural network based panel unit root test in a dynamic heterogeneous panel context. An application to a panel of bilateral real exchange rate series with the US Dollar from the 20 major OECD countries is provided to investigate Purchasing Power Parity (PPP). The combination of neural network and bootstrapping significantly changes the findings of the economic study in favour of PPP. 
Keywords:  Artificial neural network, panel unit root test, bootstrap, Monte Carlo experiments, exchange rates. 
JEL:  C12 C15 C22 C23 F31 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:gla:glaewp:2010_05&r=ecm 
By:  Bill Russell; Anindya Banerjee; Issam Malki; Natalia Ponomareva 
Abstract:  Phillips curves have often been estimated without due attention to the underlying time series properties of the data. In particular, the consequences of inflation having discrete breaks in mean, for example caused by supply shocks and the corresponding responses of policymakers, have not been studied adequately. We show by means of simulations and a detailed empirical example based on United States data that not taking account of breaks may lead to spuriously upwardly biased estimates of the dynamic inflation terms of the Phillips curve. We suggest a method to account for the breaks in mean and obtain meaningful and unbiased estimates of the short and longrun Phillips curves in the United States and contrast our results with those derived from more traditional approaches, most recently undertaken by Cogley and Sbordone (2008). 
Keywords:  Phillips curve, inflation, panel data, nonstationary data, breaks 
JEL:  C22 C23 E31 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:dun:dpaper:232&r=ecm 
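The spurious-persistence mechanism is easy to reproduce: serially uncorrelated "inflation" with a single break in mean looks highly persistent to an AR(1) regression that ignores the break. The numbers below are illustrative, not the paper's design.

```python
import numpy as np

def ar1_slope(z):
    """OLS slope of z_t on z_{t-1} after demeaning each side."""
    y, x = z[1:] - z[1:].mean(), z[:-1] - z[:-1].mean()
    return (x @ y) / (x @ x)

rng = np.random.default_rng(1)
mu = np.where(np.arange(400) < 200, 0.0, 3.0)   # one break in mean
infl = mu + rng.normal(size=400)                # no true persistence at all

rho_ignoring_break = ar1_slope(infl)            # biased upward toward 1
rho_break_adjusted = ar1_slope(infl - mu)       # near zero, the truth
```

The first estimate is large only because both regimes' means differ from the full-sample mean, so adjacent observations co-move around it; removing the break restores an estimate near zero, which is the bias channel the abstract describes for dynamic inflation terms.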
By:  Aurea Grané; Helena Veiga 
Abstract:  In this paper we focus on the impact of additive level outliers on the calculation of risk measures, such as minimum capital risk requirements, and compare four alternatives for reducing the estimation biases of these measures. The first three proposals proceed by detecting and correcting outliers before estimating these risk measures with the GARCH(1,1) model, while the fourth procedure fits a Student's t-distributed GARCH(1,1) model directly to the data. The former group includes the proposal of Grané and Veiga (2010), a detection procedure based on wavelets with hard- or soft-thresholding filtering, and the well known method of Franses and Ghijsels (1999). The first set of results, based on Monte Carlo experiments, reveals that the presence of outliers can severely bias the minimum capital risk requirement estimates calculated using the GARCH(1,1) model. The message drawn from the second set of results, both empirical and simulation-based, is that outlier detection and filtering generate more accurate minimum capital risk requirements than the fourth alternative. Moreover, the detection procedure based on wavelets with hard-thresholding filtering performs very well in attenuating the effects of outliers and generating accurate minimum capital risk requirements out-of-sample, even in highly volatile periods. 
Keywords:  Minimum capital risk requirements, Outliers, Wavelets 
JEL:  C22 C5 G13 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws100502&r=ecm 
By:  Heij, C.; Franses, Ph.H.B.F. (Erasmus Econometric Institute) 
Abstract:  Pre-election polls can suffer from survey effects. For example, surveyed individuals can become more aware of the upcoming election so that they become more inclined to vote. These effects may depend on factors like political orientation and prior intention to vote, and this may cause biases in forecasts of election outcomes. We advocate a simple methodology to estimate the magnitude of these survey effects, which can be taken into account when translating future poll results into predicted election outcomes. The survey effects are estimated by collecting survey data both before and after the election. We illustrate our method by means of a field study with data concerning the 2009 European Parliament elections in the Netherlands. Our study provides empirical evidence of significant positive survey effects with respect to voter participation, especially for individuals with low intention to vote. For our data, the overall survey effect on party shares is small. This effect can be more substantial for less balanced survey samples, for example, if political orientation and voting intention are correlated in the sample. We conclude that pre-election polls that do not correct for survey effects will overestimate voter turnout and will have biased party shares. 
Keywords:  pre-election polls; survey effects; intention modification; self-prophecy; data collection; turnout forecast; bias correction 
Date:  2010–03–31 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765018637&r=ecm 
By:  Iqbal Syed (School of Economics, University of New South Wales) 
Abstract:  Hedonic regressions are prone to omitted variable bias. The estimation of price relatives for new and disappearing goods using hedonic imputation methods involves taking ratios of hedonic models. This may lead to a situation where the omitted variable biases in the two hedonic regressions offset each other. This study finds that the single imputation hedonic method estimates inconsistent price relatives, while the double imputation method may produce consistent price relatives depending on the behavior of unobserved characteristics in the comparison periods. The study outlines a methodology to test whether double imputation price relatives are consistent. The results of this study have implications for the construction of quality adjusted indexes. 
Keywords:  Hedonic imputation method; omitted variable bias; model selection; quality adjusted price indexes; new and disappearing goods 
JEL:  C43 C52 E31 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:201003&r=ecm 
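The double-imputation price relative the abstract discusses can be sketched as the ratio of an item's predicted prices under two period-specific log-price hedonic fits, so that any omitted-variable bias common to both fits cancels in the ratio. All variable names below are illustrative.

```python
import numpy as np

def double_imputation_relative(X0, p0, X1, p1, x_new):
    """Double-imputation price relative for one item: the ratio of its
    predicted price under the period-1 and period-0 hedonic OLS fits.
    p0, p1 are log prices; X0, X1 are characteristics in each period."""
    def fit(X, p):
        Z = np.column_stack([np.ones(len(X)), X])   # intercept + characteristics
        beta, *_ = np.linalg.lstsq(Z, p, rcond=None)
        return beta
    b0, b1 = fit(X0, p0), fit(X1, p1)
    z = np.concatenate([[1.0], np.atleast_1d(x_new)])
    return np.exp(z @ b1 - z @ b0)   # ratio of predicted prices
```

Single imputation would instead compare an observed price to one imputed price, so the bias term appears only once and does not cancel, which is the inconsistency the study documents.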