
NEP: New Economics Papers on Econometrics
By:  Michael Creel; Dennis Kristensen 
Abstract:  Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that, as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
Keywords:  dynamic latent variable models; simulation-based estimation; simulated moments; kernel regression; nonparametric estimation
JEL:  C13 C14 C15 
Date:  2009–11–12 
URL:  http://d.repec.org/n?u=RePEc:aub:autbar:792.09&r=ecm 
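The central device in the abstract above — computing a conditional moment from a long simulation by kernel smoothing rather than simple averaging — can be sketched with a Nadaraya-Watson regression. This is an illustrative sketch only; the function name, Gaussian kernel, and bandwidth value are our assumptions, not the authors' implementation:

```python
import numpy as np

def kernel_conditional_moment(x_sim, y_sim, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[y | x = x_eval] from a long simulation.

    x_sim, y_sim : simulated conditioning variables and moment functions
    x_eval       : points (e.g. observed data) at which the moment is needed
    """
    # Gaussian kernel weights of every simulated point for each evaluation point
    u = (x_eval[:, None] - x_sim[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)
    return (w * y_sim[None, :]).sum(axis=1) / w.sum(axis=1)

# toy check: y = x^2 + noise, so E[y | x] should be close to x^2
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 50_000)
y = x ** 2 + rng.normal(0, 0.1, x.size)
est = kernel_conditional_moment(x, y, np.array([0.0, 1.0]), bandwidth=0.05)
```

In the estimator described above, such smoothed conditional moments would then be evaluated at the observed conditioning values and plugged into a standard GMM criterion.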
By:  Zhijie Xiao (Boston College); Roger Koenker (University of Illinois Urbana-Champaign)
Abstract:  Conditional quantile estimation is an essential ingredient in modern risk management. Although GARCH processes have proven highly successful in modeling financial data, it is generally recognized that it would be useful to consider a broader class of processes capable of representing more flexibly both the asymmetry and the tail behavior of conditional returns distributions. In this paper, we study estimation of conditional quantiles for GARCH models using quantile regression. Quantile regression estimation of GARCH models is highly nonlinear; we propose a simple and effective two-step approach of quantile regression estimation for linear GARCH time series. In the first step, we employ a quantile autoregression sieve approximation for the GARCH model by combining information over different quantiles; second-stage estimation for the GARCH model is then carried out based on the first-stage minimum distance estimation of the scale process of the time series. Asymptotic properties of the sieve approximation, the minimum distance estimators, and the final quantile regression estimators employing generated regressors are studied. These results are of independent interest and have applications in other quantile regression settings. Monte Carlo and empirical application results indicate that the proposed estimation methods outperform some existing conditional quantile estimation methods.
Keywords:  Quantile Regression, GARCH, Value-at-Risk
JEL:  C13 C21 C22 
Date:  2009–03–13 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:725&r=ecm 
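The basic building block of the quantile regression estimation discussed above is the Koenker-Bassett check ("pinball") loss; minimizing it over a constant recovers a sample quantile. A minimal sketch of that building block only — the grid search is purely illustrative and is not the authors' two-step sieve procedure:

```python
import numpy as np

def pinball_loss(u, tau):
    """Koenker-Bassett check function: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def quantile_fit(y, tau, grid):
    """Minimize total check loss over candidate constants; the minimizer
    is (up to grid resolution) the tau-th sample quantile of y."""
    losses = [pinball_loss(y - c, tau).sum() for c in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(1)
y = rng.normal(0, 1, 20_000)
grid = np.linspace(-3, 3, 601)          # candidate values, step 0.01
q95 = quantile_fit(y, 0.95, grid)       # close to the N(0,1) 95% quantile, 1.645
```

In a conditional (regression) setting the constant is replaced by a linear index in covariates, which is the form the quantile autoregression sieve above builds on.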
By:  Dennis Kristensen (Columbia University and CREATES) 
Abstract:  A novel estimation method for two classes of semiparametric scalar diffusion models is proposed: in the first class, the diffusion term is parameterised and the drift is left unspecified, while in the second class only the drift term is specified. Under the assumption of stationarity, the unspecified term can be identified as a functional of the parametric component and the stationary density. Given a discrete sample with a fixed time distance, the parametric component is then estimated by maximizing the associated likelihood with a preliminary estimator of the unspecified term plugged in. It is shown that this pseudo-MLE (PMLE) is root-n consistent and asymptotically normally distributed under regularity conditions, and we demonstrate how the models and estimators can be used in a two-step specification testing strategy for fully parametric models. Since the likelihood function is not available in closed form, the practical implementation of our estimator and tests will rely on simulated or approximate PMLEs. Under regularity conditions, it is verified that approximate/simulated versions of the PMLE inherit the properties of the actual but infeasible estimator. A simulation study investigates the finite-sample performance of the PMLE and finds that it performs well, being comparable to the parametric MLE in terms of both bias and variance.
Keywords:  Diffusion process, fixed time distance asymptotics, kernel estimation, pseudo-likelihood, semiparametric
JEL:  C12 C13 C14 C22 
Date:  2009–09–18 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200941&r=ecm 
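For the first model class above (parametric diffusion term, unspecified drift), stationarity identifies the drift from the stationary density p via the stationary Fokker-Planck relation; with a constant diffusion coefficient sigma this reduces to mu(x) = (sigma^2 / 2) d/dx log p(x). A hypothetical sketch using an Ornstein-Uhlenbeck example, with a kernel density estimator standing in for the preliminary nonparametric step (all parameter and bandwidth values are our own choices):

```python
import numpy as np

def kde(samples, x, h):
    """Gaussian kernel density estimate at a single point x."""
    return np.mean(np.exp(-0.5 * ((x - samples) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# Ornstein-Uhlenbeck: dX = -theta*X dt + sigma dW has stationary density
# N(0, sigma^2 / (2*theta)), so the identified drift is mu(x) = -theta * x.
theta, sigma = 1.0, 1.0
rng = np.random.default_rng(2)
samples = rng.normal(0.0, np.sqrt(sigma**2 / (2 * theta)), 200_000)

x, d, h = 0.5, 0.25, 0.1
log_deriv = (np.log(kde(samples, x + d, h)) - np.log(kde(samples, x - d, h))) / (2 * d)
mu_hat = 0.5 * sigma**2 * log_deriv     # should be close to -theta * x = -0.5
```

In the paper's estimator this plug-in step is followed by maximizing the (simulated or approximate) likelihood over the parametric component, which is not reproduced here.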
By:  Andros Kourtellos (Department of Economics, University of Cyprus); Thanasis Stengos (Department of Economics, University of Guelph); Chih Ming Tan (Department of Economics, Tufts University)
Abstract:  This paper extends the simple threshold regression framework of Hansen (2000) and Caner and Hansen (2004) to allow for endogeneity of the threshold variable. We develop a concentrated least squares estimator of the threshold parameter based on an inverse Mills ratio bias correction. We show that our estimator is consistent and investigate its finite-sample performance using a Monte Carlo simulation, which indicates that the method is applicable in practice.
JEL:  C13 C51 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:gue:guelph:20097.&r=ecm 
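The correction term at the heart of the abstract above is the inverse Mills ratio, lambda(z) = phi(z) / Phi(z), familiar from Heckman-type selection corrections. A self-contained sketch of just this ingredient (how it enters the concentrated least squares criterion is specific to the paper and not reproduced here):

```python
from math import erf, exp, pi, sqrt

def inverse_mills(z):
    """Inverse Mills ratio lambda(z) = phi(z) / Phi(z) for a standard normal;
    equivalently, E[e | e > -z] for e ~ N(0, 1)."""
    phi = exp(-0.5 * z * z) / sqrt(2 * pi)   # standard normal density
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
    return phi / Phi

# lambda(0) = phi(0) / 0.5 = 2 / sqrt(2*pi), approximately 0.7979;
# lambda(z) shrinks toward 0 as z grows
```

In the bias-corrected regression, fitted values of this ratio (evaluated at a first-stage index) are added as an extra regressor before concentrating out the threshold.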
By:  Sokbae Lee (University College London); YoonJae Whang (Seoul National University) 
Abstract:  We develop a general class of nonparametric tests for treatment effects conditional on covariates. We consider a wide spectrum of null and alternative hypotheses regarding conditional treatment effects, including (i) the null hypothesis of conditional stochastic dominance between treatment and control groups; (ii) the null hypothesis that the conditional average treatment effect is positive for each value of the covariates; and (iii) the null hypothesis of no distributional (or average) treatment effect conditional on covariates against a one-sided (or two-sided) alternative hypothesis. The test statistics are based on L_{1}-type functionals of uniformly consistent nonparametric kernel estimators of the conditional expectations that characterize the null hypotheses. Using the Poissonization technique of Giné et al. (2003), we show that suitably studentized versions of our test statistics are asymptotically standard normal under the null hypotheses, and also show that the proposed nonparametric tests are consistent against general fixed alternatives. Furthermore, it turns out that our tests have nonnegligible power against some local alternatives that differ from the null hypotheses at the n^{-1/2} rate, where n is the sample size. We provide a more powerful test for the case when the null hypothesis may be binding only on a strict subset of the support, and we also consider an extension to testing for quantile treatment effects. We illustrate the usefulness of our tests by applying them to data from a randomized job training program (LaLonde (1986)) and by carrying out Monte Carlo experiments based on this dataset.
Keywords:  Average treatment effect, Conditional stochastic dominance, Poissonization, Programme evaluation
JEL:  C12 C14 C21 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1740&r=ecm 
By:  Heinen, Florian; Sibbertsen, Philipp; Kruse, Robinson 
Abstract:  We consider the problem of forecasting time series with long memory when the memory parameter is subject to a structural break. By means of a large-scale Monte Carlo study we show that ignoring such a change in persistence leads to substantially reduced forecasting precision. The strength of this effect depends on whether the memory parameter is increasing or decreasing over time. A comparison of six forecasting strategies allows us to conclude that pre-testing for a change in persistence is highly advisable in our setting. In addition, we provide an empirical example which underlines the importance of our findings.
Keywords:  Long memory time series, Break in persistence, Structural change, Simulation, Forecasting competition 
JEL:  C15 C22 C53 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp433&r=ecm 
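The long memory in question is governed by the fractional-differencing parameter d, and a break in persistence means d changes over time. A minimal sketch of the (1 - L)^d filter weights, whose hyperbolic (rather than geometric) decay is what distinguishes long from short memory; this illustrates the process class only, not the authors' six forecasting strategies:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the fractional difference operator (1 - L)^d:
    pi_0 = 1 and the recursion pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For d = 0.4 the weights decay like k^(-1-d), so distant past observations
# keep a non-negligible influence -- the signature of long memory.
w = frac_diff_weights(0.4, 6)   # [1.0, -0.4, -0.12, -0.064, -0.0416, ...]
```

A break in persistence would correspond to generating the first part of the sample with one d and the remainder with another, which is exactly the misspecification whose forecasting cost the paper quantifies.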
By:  Álvaro Cartea; Dimitrios Karyampas (Department of Economics, Mathematics & Statistics, Birkbeck) 
Abstract:  Using high-frequency data for the price dynamics of equities, we measure the impact that market microstructure noise has on estimates of (i) the volatility of returns and (ii) the variance-covariance matrix of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series into the true efficient price and the microstructure noise. This approach allows us to employ volatility estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this price series decomposition allows us to estimate the variance-covariance matrix of n assets more efficiently than the methods so far proposed in the literature. We illustrate our results by calculating how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM setting.
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:bbk:bbkefp:0913&r=ecm 
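The decomposition described above — observed price equals latent efficient price plus microstructure noise — maps naturally onto a local-level state-space model. A minimal Kalman filter sketch under that simple specification; the random-walk dynamics, known noise variances, and parameter values are our assumptions, not the authors' full methodology:

```python
import numpy as np

def local_level_filter(y, q, r):
    """Kalman filter for y_t = x_t + noise(var r), x_t = x_{t-1} + shock(var q):
    observed price = latent efficient price + microstructure noise.
    Returns filtered estimates of the efficient price."""
    x, P = y[0], r                      # initialize at the first observation
    out = np.empty(len(y))
    for t, obs in enumerate(y):
        P = P + q                       # predict: state variance grows by q
        K = P / (P + r)                 # Kalman gain
        x = x + K * (obs - x)           # update with the new observed price
        P = (1 - K) * P
        out[t] = x
    return out

rng = np.random.default_rng(3)
eff = np.cumsum(rng.normal(0, 0.01, 5_000))     # efficient price (random walk)
obs = eff + rng.normal(0, 0.05, 5_000)          # observed price with noise
filt = local_level_filter(obs, q=0.01**2, r=0.05**2)
```

The filtered series tracks the latent efficient price far more closely than the raw observations do, which is what makes noise-robust volatility estimation on the filtered series possible.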
By:  Christopher Gust; Robert Vigfusson 
Abstract:  Are structural vector autoregressions (VARs) useful for discriminating between macro models? Recent assessments of VARs have shown that these statistical methods have adequate size properties. In other words, in simulation exercises, VARs will only infrequently reject the true data generating process. However, in assessing a statistical test, we often also care about power: the ability of the test to reject a false hypothesis. Much less is known about the power of structural VARs. This paper attempts to fill this gap by exploring the power of long-run structural VARs against a set of DSGE models that differ to varying degrees from the true data generating process. We report results for two tests: the standard test of checking the sign of the response on impact, and a test of the shape of the response. For the models studied here, testing the shape is more powerful than simply looking at the sign of the response. In addition, relative to an alternative statistical test based on sample correlations, we find that the shape-based tests have greater power. Given these results on the power and size properties of long-run VARs, we conclude that such VARs are useful for discriminating between macro models.
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:978&r=ecm 
By:  Pepa Ramirez; Rosa E. Lillo; Michael P. Wiper 
Abstract:  In this paper we consider the problem of identifiability of the two-state Markovian Arrival Process (MAP2). In particular, we show that the MAP2 is not identifiable, and we give conditions under which two different sets of parameters induce identical stationary laws for the observable process.
Keywords:  Batch Markovian Arrival process, Markov Renewal process, Hidden Markov models, Identifiability problems 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws097121&r=ecm 
By:  Ladislav Kristoufek (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague) 
Abstract:  The estimators of the Hurst exponent most commonly used to detect long-range dependence are biased by the presence of short-range dependence in the underlying time series. We present confidence interval estimates for the rescaled range and the modified rescaled range. We show that the difference in expected values and confidence intervals enables us to use both methods together to clearly distinguish between the two types of processes. Moreover, both methods are robust against the presence of heavy tails in the underlying process.
Keywords:  rescaled range, modified rescaled range, Hurst exponent, longrange dependence, confidence intervals 
JEL:  G1 G10 G14 G15 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:fau:wpaper:wp2009_26&r=ecm 
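A minimal sketch of the classical rescaled range (R/S) estimator discussed above. Note that for short windows the statistic is biased upward even for i.i.d. data, pushing the estimated exponent above the theoretical 0.5 — one reason explicit confidence intervals of the kind the paper derives are needed. Window sizes and details are illustrative:

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic: range of the cumulative demeaned series
    divided by the series' standard deviation."""
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def hurst(x, windows=(50, 100, 200, 400, 800)):
    """Slope of log mean(R/S) against log n estimates the Hurst exponent."""
    logs_n, logs_rs = [], []
    for n in windows:
        rs = [rescaled_range(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs)))
    return np.polyfit(logs_n, logs_rs, 1)[0]

rng = np.random.default_rng(4)
h = hurst(rng.normal(size=40_000))   # i.i.d. noise: estimate near 0.5, biased up
```

The modified rescaled range mentioned above replaces the plain standard deviation with a long-run variance estimate, which is what makes it robust to short-range dependence.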
By:  Pawel J. Szerszen 
Abstract:  In this paper I analyze a broad class of continuous-time jump diffusion models of asset returns. In these models, stochastic volatility can arise from a diffusion part, a jump part, or both. The jump component includes either compound Poisson or Lévy alpha-stable jumps. To be able to estimate the models with latent Lévy alpha-stable jumps, I construct a new Markov chain Monte Carlo algorithm. I estimate all model specifications with S&P 500 daily returns. I find that models with Lévy alpha-stable jumps capture return characteristics well if the diffusion is a source of stochastic volatility. Models with stochastic volatility from jumps alone, and models with Poisson jumps, cannot reproduce the excess kurtosis and tails of the return distribution. In density forecast and VaR analysis, the model with Lévy alpha-stable jumps and joint stochastic volatility performs best among all specifications, since both the diffusion and the infinite-activity jump part provide information about latent volatility.
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:200940&r=ecm 
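A minimal sketch of the kind of return process being modeled: an Euler discretization with constant diffusion volatility and compound Poisson jumps. This is illustrative only — the paper's models feature stochastic volatility and Lévy alpha-stable jumps, and estimation is by MCMC, none of which is reproduced here; all parameter values are hypothetical:

```python
import numpy as np

def simulate_jump_diffusion(n, dt, mu, sigma, lam, jump_scale, seed=0):
    """Euler scheme for returns with drift mu, diffusion volatility sigma,
    and compound Poisson jumps of intensity lam with N(0, jump_scale^2) sizes.
    (The sum of k i.i.d. N(0, s^2) jumps equals sqrt(k) * N(0, s^2).)"""
    rng = np.random.default_rng(seed)
    diffusion = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    n_jumps = rng.poisson(lam * dt, size=n)               # jump counts per step
    jumps = np.sqrt(n_jumps) * rng.normal(0.0, jump_scale, size=n)
    return diffusion + jumps

# daily returns: occasional jumps fatten the tails relative to a pure diffusion
r = simulate_jump_diffusion(100_000, dt=1/252, mu=0.05, sigma=0.2,
                            lam=10, jump_scale=0.03)
```

Even this simple jump component produces kurtosis well above the Gaussian value of 3, which is the feature the abstract's model comparison turns on.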
By:  Dimitrios Vortelinos; Dimitrios Thomakos 
Abstract:  We test for and model volatility jumps for three major indices of the Athens Stock Exchange (ASE). Using intraday data, we first construct several state-of-the-art realized volatility estimators. We use these estimators to construct the jump components of volatility and perform various tests on their properties. We then use the class of Heterogeneous Autoregressive (HAR) models to assess the relevant effects of jumps on volatility. Our results expand and complement the previous literature on the ASE market; in particular, this is the first time, to the best of our knowledge, that volatility jumps are examined and modeled for the Greek market using a variety of realized volatility estimators.
Keywords:  Athens Stock Exchange, Bipower variation, Heterogeneous autoregressive models, Realized volatility, Volatility jumps
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:uop:wpaper:00044&r=ecm 
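Jump components of the kind described above are typically built from the gap between realized variance and bipower variation: RV converges to integrated variance plus the squared jumps, while BV is robust to jumps and converges to integrated variance alone. A minimal sketch of that basic construction (the paper uses a variety of estimators and formal jump tests; none of its specifics are reproduced here):

```python
import numpy as np

def realized_variance(r):
    """RV = sum of squared intraday returns; picks up integrated variance
    plus the contribution of squared jumps."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """BV = (pi/2) * sum |r_t||r_{t-1}|; robust to jumps, so it estimates
    integrated variance only."""
    return (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

def jump_component(r):
    """Nonnegative jump contribution to daily variance: max(RV - BV, 0)."""
    return max(realized_variance(r) - bipower_variation(r), 0.0)

rng = np.random.default_rng(5)
r = rng.normal(0, 0.001, 390)        # one day of 1-minute returns, no jumps
r_jump = r.copy()
r_jump[200] += 0.02                  # the same day with a single price jump
```

Series of such daily jump components are exactly the inputs that the HAR regressions mentioned above take on the right-hand side.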
By:  Matthew Denes; Gauti B. Eggertsson 
Abstract:  This paper outlines a simple Bayesian methodology for estimating tax and spending multipliers in a dynamic stochastic general equilibrium (DSGE) model. After forming priors about the parameters of the model and the relevant shock, we use the model to exactly match only one data point: the trough of the Great Depression, that is, an output collapse of 30 percent, deflation of 10 percent, and a zero short-term nominal interest rate. Because we form our priors as distributions, the key economic inference of our analysis (the multipliers of tax and spending) takes the form of well-defined probability distributions derived from the posterior of the model. While the Bayesian methods used are standard, the application is slightly unusual. We conjecture that this methodology can be applied in several different settings with severe data limitations, where more informal calibrations have been the norm. The main advantage over the usual calibration exercises is that the posterior of the model offers an interesting way to think about sensitivity analysis and gives researchers a useful way to describe model-based inference. We apply our simple estimation method to the American Recovery and Reinvestment Act (ARRA), passed by Congress as part of the 2009 stimulus plan. The mean of our estimate indicates that the ARRA increased output by 3.6 percent in 2009 and 2010. The standard deviation of this estimate is 1 percent.
Keywords:  Depressions ; Econometric models ; Taxation ; Government spending policy 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:403&r=ecm 
By:  Maxim S. Finkelstein (Max Planck Institute for Demographic Research, Rostock, Germany) 
Abstract:  Mixtures of distributions are commonly and effectively used for modeling heterogeneity. It is well known that mixtures of decreasing-failure-rate (DFR) distributions are always DFR. On the other hand, mixtures of increasing-failure-rate (IFR) distributions can decrease, at least in some intervals of time. As IFR distributions often model lifetimes governed by ageing processes, the operation of mixing can dramatically change the pattern of ageing. Therefore, the study of the shape of the observed (mixture) failure rate in a heterogeneous setting is important in many applications. We study discrete and continuous mixtures, obtain conditions for the mixture failure rate to tend to the failure rate of the strongest populations, and describe asymptotic behavior as t tends to infinity. Some demographic and engineering examples are considered. The corresponding inverse problem is discussed.
JEL:  J1 Z0 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:dem:wpaper:wp2009031&r=ecm 
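The headline asymptotic result above — the mixture failure rate tends to that of the strongest subpopulation — can be illustrated with a discrete mixture of two IFR subpopulations with linear failure rates lambda_i(t) = a_i * t. This is a hypothetical worked example, not taken from the paper:

```python
import numpy as np

def mixture_failure_rate(t, weights, rates):
    """Failure rate of a discrete mixture of linear-failure-rate (IFR)
    subpopulations with lambda_i(t) = a_i * t, so S_i(t) = exp(-a_i t^2 / 2):
        lambda_m(t) = sum_i pi_i f_i(t) / sum_i pi_i S_i(t)."""
    t = np.asarray(t, dtype=float)
    num = sum(p * a * t * np.exp(-a * t**2 / 2) for p, a in zip(weights, rates))
    den = sum(p * np.exp(-a * t**2 / 2) for p, a in zip(weights, rates))
    return num / den

# Equal-weight mixture of a "strong" (a = 1) and a "weak" (a = 4) subpopulation.
# Early on the mixture rate is near the weighted average 2.5 * t; for large t
# the weak subpopulation has died out and the rate approaches 1.0 * t.
weights, rates = [0.5, 0.5], [1.0, 4.0]
lam = mixture_failure_rate([0.1, 5.0], weights, rates)
```

The frail subpopulation is selected out over time, so even though every subpopulation ages (IFR), the observed mixture rate bends toward the most robust one — the mechanism behind the paper's asymptotic result.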