New Economics Papers on Econometrics
By: | Violetta Dalla; Liudas Giraitis; Javier Hidalgo |
Abstract: | For linear processes, semiparametric estimation of the memory parameter, based on the log-periodogram and local Whittle estimators, has been exhaustively examined and their properties are well established. However, except for some specific cases, little is known about the estimation of the memory parameter for nonlinear processes. The purpose of this paper is to provide general conditions under which the local Whittle estimator of the memory parameter of a stationary process is consistent and to examine its rate of convergence. We show that these conditions are satisfied for linear processes and a wide class of nonlinear models, among others, signal plus noise processes, nonlinear transforms of a Gaussian process ξt and EGARCH models. Special cases where the estimator satisfies the central limit theorem are discussed. The finite sample performance of the estimator is investigated in a small Monte-Carlo study. |
Keywords: | Long memory, semiparametric estimation, local Whittle estimator. |
JEL: | C14 C22 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/497&r=ecm |
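The local Whittle objective studied in the preceding abstract has a compact form; the following is a minimal sketch of the estimator for a univariate series, assuming the standard Robinson (1995) objective (the function name and the bandwidth in the usage line are illustrative, not the paper's prescription).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m):
    """Local Whittle estimate of the memory parameter d,
    using the first m Fourier frequencies of the periodogram."""
    n = len(x)
    # periodogram at Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)

    def objective(d):
        # R(d) = log( mean_j lam_j^{2d} I_j ) - 2d * mean_j log lam_j
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

# illustrative usage: white noise, whose true d is 0; m = n**0.65 is a
# common illustrative bandwidth choice
rng = np.random.default_rng(0)
print(local_whittle(rng.standard_normal(2048), m=int(2048 ** 0.65)))
```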
By: | Javier Hualde; Peter M Robinson |
Abstract: | Empirical evidence has emerged of the possibility of fractional cointegration such that the gap, β, between the integration order d of observable time series and the integration order γ of cointegrating errors is less than 0.5. This includes circumstances when observables are stationary or asymptotically stationary with long memory (so d < 1/2), and when they are nonstationary (so d ≥ 1/2). This "weak cointegration" contrasts strongly with the traditional econometric prescription of unit root observables and short memory cointegrating errors, where β = 1. Asymptotic inferential theory also differs from this case, and from other members of the class β > 1/2: in particular, √n-consistent and asymptotically normal estimation of the cointegrating vector ν is possible when β < 1/2, as we explore in a simple bivariate model. The estimate depends on γ and d or, more realistically, on estimates of unknown γ and d. These latter estimates need to be √n-consistent, and the asymptotic distribution of the estimate of ν is sensitive to their precise form. We propose estimates of γ and d that are computationally relatively convenient, relying on only univariate nonlinear optimization. Finite sample performance of the methods is examined by means of Monte Carlo simulations, and several applications to empirical data are included. |
Keywords: | Fractional cointegration, Parametric estimation, Asymptotic normality. |
JEL: | C32 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/499&r=ecm |
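For concreteness, the integration orders discussed in the preceding abstract are those of fractionally differenced series; below is a minimal sketch of applying the filter (1 − L)^d by its truncated binomial expansion (the function name is ours, not the paper's).

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional differencing filter (1 - L)^d via its
    binomial expansion, truncated at the sample length."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        # pi_k = pi_{k-1} * (k - 1 - d) / k, coefficients of (1 - L)^d
        w[k] = w[k - 1] * (k - 1 - d) / k
    # y_t = sum_{k=0}^{t} pi_k x_{t-k}
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(n)])
```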
By: | Alessio Sancetta |
Abstract: | The appealing feature of nonparametric density estimators is consistency in a variety of circumstances. However, a major drawback is their large variability, particularly in high dimensions when the sample size is relatively small. This paper suggests shrinking the nonparametric estimator towards a parametric density in order to reduce variability: we trade off variance for bias. Due to the so-called curse of dimensionality, this appears to be the natural thing to do in high dimensional problems whenever the parametric density is not too misspecified. |
Keywords: | Integrated Square Error, Nonparametric Estimation, Parametric Model, Shrinkage |
JEL: | C13 C14 |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:0638&r=ecm |
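The shrinkage estimator described above is a convex combination of a parametric and a nonparametric density estimate; a minimal univariate sketch follows (the paper's contribution is an optimal, data-driven weight — the normal reference density and the fixed weight of 0.5 here are purely illustrative).

```python
import numpy as np
from scipy import stats

def shrunk_density(x, sample, weight=0.5):
    """Shrink a Gaussian-kernel density estimate towards a fitted
    normal density: f_hat = w * f_parametric + (1 - w) * f_kde."""
    kde = stats.gaussian_kde(sample)               # nonparametric part
    mu, sigma = sample.mean(), sample.std(ddof=1)  # parametric part (fitted normal)
    return weight * stats.norm.pdf(x, mu, sigma) + (1 - weight) * kde(x)

# example: heavy-tailed data, small sample
rng = np.random.default_rng(0)
sample = rng.standard_t(df=5, size=100)
grid = np.linspace(-4, 4, 9)
print(shrunk_density(grid, sample, weight=0.5))
```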
By: | Afonso Gonçalves da Silva; Peter M Robinson |
Abstract: | Nonlinear functions of multivariate financial time series can exhibit long memory and fractional cointegration. However, tools for analysing these phenomena have principally been justified under assumptions that are invalid in this setting. Determination of asymptotic theory under more plausible assumptions can be complicated and lengthy. We discuss these issues and present a Monte Carlo study, showing that asymptotic theory should not necessarily be expected to provide a good approximation to finite-sample behaviour. |
Keywords: | Fractional cointegration, memory estimation, stochastic volatility. |
JEL: | C32 |
Date: | 2006–04 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/501&r=ecm |
By: | M. Gerolimetto; Peter M Robinson |
Abstract: | Instrumental variables estimation is classically employed to avoid simultaneous equations bias in a stable environment. Here we use it to improve upon ordinary least squares estimation of cointegrating regressions between nonstationary and/or long memory stationary variables where the integration orders of regressor and disturbance sum to less than 1, as always happens for stationary regressors, and sometimes for mean-reverting nonstationary ones. Unlike in the classical situation, instruments can be correlated with disturbances and/or uncorrelated with regressors. The approach can also be used in traditional non-fractional cointegrating relations. Various choices of instrument are proposed. Finite sample performance is examined. |
Keywords: | Cointegration, Instrumental variables estimation, I(d) processes. |
JEL: | C32 |
Date: | 2006–04 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/500&r=ecm |
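Mechanically, the estimator in the preceding abstract is the familiar IV ratio; below is a minimal bivariate sketch (the i.i.d. data-generating process is only to check the mechanics — the paper's interest is in long-memory and nonstationary regressors, and in instruments that may be correlated with the disturbance).

```python
import numpy as np

def iv_estimate(y, x, z):
    """Simple instrumental variables estimator in a bivariate
    regression y_t = beta * x_t + u_t with instrument z_t."""
    return np.dot(z, y) / np.dot(z, x)

# illustrative check with a classically valid instrument
rng = np.random.default_rng(1)
n = 10_000
z = rng.standard_normal(n)
x = z + rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)
print(iv_estimate(y, x, z))   # approx. 2.0
```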
By: | Markku Lanne |
Abstract: | A multiplicative error model with time-varying parameters and an error term following a mixture of gamma distributions is introduced. The model is fitted to the daily realized volatility series of Deutschemark/Dollar and Yen/Dollar returns and is shown to capture the conditional distribution of these variables better than the commonly used ARFIMA model. The forecasting performance of the new model is found to be, in general, superior to that of the set of volatility models recently considered by Andersen et al. (2003) for the same data. |
Keywords: | Mixture model, Realized volatility, Gamma distribution |
JEL: | C22 C52 C53 G15 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2006/3&r=ecm |
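The multiplicative error model above has the generic form x_t = mu_t * eps_t with a GARCH-style recursion for mu_t; below is a minimal simulation sketch with a single gamma error term (the paper uses a mixture of gammas and time-varying parameters — the parameter values and single-component error here are illustrative only).

```python
import numpy as np

def simulate_mem(n, omega=0.05, alpha=0.3, beta=0.65, shape=4.0, seed=0):
    """Simulate x_t = mu_t * eps_t with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}
    and eps_t ~ Gamma(shape, scale=1/shape), so that E[eps_t] = 1."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    mu = omega / (1 - alpha - beta)   # start at the unconditional mean
    for t in range(n):
        eps = rng.gamma(shape, 1.0 / shape)
        x[t] = mu * eps
        mu = omega + alpha * x[t] + beta * mu   # next period's conditional mean
    return x

print(simulate_mem(5).round(3))
```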
By: | Markus Frölich (University of St.Gallen, IFAU Uppsala and IZA Bonn) |
Abstract: | This note argues that nonparametric regression not only relaxes functional form assumptions vis-à-vis parametric regression, but that it also permits endogenous control variables. To control for selection bias or to make an exclusion restriction in instrumental variables regression valid, additional control variables are often added to a regression. If any of these control variables is endogenous, OLS or 2SLS would be inconsistent and would require further instrumental variables. Nonparametric approaches are still consistent, though. A few examples are examined and it is found that the asymptotic bias of OLS can indeed be very large. |
Keywords: | endogeneity, nonparametric regression, instrumental variables |
JEL: | C13 C14 |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp2126&r=ecm |
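The nonparametric regressions in question can be as simple as a kernel (Nadaraya-Watson) estimate of the conditional mean given the regressor of interest and the controls; a minimal sketch follows (the bandwidth and the data-generating process are illustrative).

```python
import numpy as np

def nw_regression(point, X, y, h):
    """Multivariate Nadaraya-Watson estimate of E[y | X = point],
    product Gaussian kernel with common bandwidth h."""
    w = np.exp(-0.5 * np.sum(((X - point) / h) ** 2, axis=1))
    return w @ y / w.sum()

# illustrative call: y regressed on a regressor of interest and a
# (possibly endogenous) control, evaluated at (0.0, 0.0)
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 2))
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500)
print(nw_regression(np.array([0.0, 0.0]), X, y, h=0.3))
```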
By: | Alessio Sancetta |
Abstract: | For high dimensional data sets the sample covariance matrix is usually unbiased but noisy if the sample is not large enough. Shrinking the sample covariance towards a constrained, low dimensional estimator can be used to mitigate the sample variability. By doing so, we introduce bias, but reduce variance. In this paper, we give details on feasible optimal shrinkage allowing for time series dependent observations. |
Keywords: | Sample Covariance Matrix, Shrinkage, Weak Dependence |
JEL: | C13 C14 |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:0637&r=ecm |
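As a minimal sketch of the idea in the preceding abstract, shrinking the sample covariance towards a diagonal target with a fixed weight looks like this (the weight is a placeholder for the paper's feasible optimal choice under time-series dependence).

```python
import numpy as np

def shrunk_covariance(X, delta=0.2):
    """Convex combination of the sample covariance S and a diagonal
    target F: Sigma_hat = delta * F + (1 - delta) * S."""
    S = np.cov(X, rowvar=False)      # columns of X are variables
    F = np.diag(np.diag(S))          # keep variances, zero out covariances
    return delta * F + (1 - delta) * S
```

For i.i.d. data, scikit-learn's `sklearn.covariance.LedoitWolf` implements a data-driven choice of the weight; the paper's contribution is a feasible optimal weight allowing for dependent observations.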
By: | Peter M Robinson |
Abstract: | Smoothed nonparametric kernel spectral density estimates are considered for stationary data observed on a d-dimensional lattice. The implications for edge effect bias of the choice of kernel and bandwidth are considered. Under some circumstances the bias can be dominated by the edge effect. We show that this problem can be mitigated by tapering. Some extensions and related issues are discussed. MSC: 62M30, 62M15. |
Keywords: | nonparametric spectrum estimation, edge effect, tapering. |
JEL: | C22 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/498&r=ecm |
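Tapering, the remedy the abstract points to, multiplies the data by a window that decays towards the boundary before the periodogram is computed; below is a one-dimensional sketch with a Hann (cosine-bell) taper (the paper's setting is a d-dimensional lattice, and the choice of taper is illustrative).

```python
import numpy as np

def tapered_periodogram(x):
    """Periodogram of a 1-d series after applying a Hann taper,
    with the taper normalized to unit mean square."""
    n = len(x)
    taper = np.hanning(n)
    taper /= np.sqrt(np.mean(taper ** 2))   # unit mean-square normalization
    xt = (x - x.mean()) * taper
    # I(lambda) = |sum_t h_t x_t e^{-i lambda t}|^2 / (2 pi sum_t h_t^2)
    return np.abs(np.fft.rfft(xt)) ** 2 / (2 * np.pi * n)
```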
By: | Lucia Alessi; Matteo Barigozzi; Marco Capasso |
Abstract: | We propose a new model for multivariate forecasting which combines the Generalized Dynamic Factor Model (GDFM) and the GARCH model. The GDFM, applied to a huge number of series, captures the multivariate information and disentangles the common and the idiosyncratic part of each series of returns. In this financial analysis, both of these components are modeled as GARCH processes. We compare GDFM+GARCH and standard GARCH performance on samples of up to 475 series, predicting both levels and volatility of returns. While results on levels are not significantly different, on volatility the GDFM+GARCH model outperforms the standard GARCH in most cases. These results are robust with respect to different volatility proxies. |
Keywords: | Dynamic Factors, GARCH, Volatility Forecasting |
Date: | 2006–05–13 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2006/13&r=ecm |
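The decomposition step can be caricatured with static principal components, a crude stand-in for the GDFM's dynamic, frequency-domain decomposition; a minimal sketch of the split follows. Each column of the two parts would then be fitted with a univariate GARCH (for instance via the `arch` package's `arch_model`), mirroring the two-part modeling the abstract describes.

```python
import numpy as np

def common_idiosyncratic_split(R, q):
    """Split a T x N panel of returns into common and idiosyncratic
    parts using q static principal components (a crude stand-in for
    the GDFM's dynamic decomposition)."""
    Rc = R - R.mean(axis=0)
    U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
    common = U[:, :q] * s[:q] @ Vt[:q]   # rank-q common component
    idiosyncratic = Rc - common
    return common, idiosyncratic
```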
By: | Dagsvik, John K. (Research Department, Statistics Norway, and the Ragnar Frisch Centre for Economic Research) |
Abstract: | This paper proposes a particular axiomatic approach to motivate the choice of functional forms and distribution of unobservables in continuous time models for discrete panel data analysis. We discuss in particular applications with data on transitions between employment and unemployment. This framework yields a characterization of transition probabilities and duration distributions in terms of structural parameters of the utility function and choice constraints. Moreover, it is discussed how the modeling framework can be extended to allow for involuntary transitions, structural state dependence and random effects. |
Keywords: | Discrete choice in continuous time; Duration of unemployment/employment; Random utility models; Functional form; Invariance principles |
JEL: | C23 C25 C41 |
Date: | 2006–04–25 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2006_006&r=ecm |
By: | Yves Atchade (Department of Mathematics and Statistics, University of Ottawa and LRSP) |
Abstract: | We introduce the idea that resampling from past observations in a Markov chain Monte Carlo sampler can speed up convergence. We prove that proper resampling from the past does not disturb the limit distribution of the algorithm. We illustrate the method with two examples: the first on a Bayesian analysis of stochastic volatility models, the second on Bayesian phylogeny reconstruction. |
Keywords: | Monte Carlo methods, Resampling, Stochastic volatility models, Bayesian phylogeny reconstruction. |
JEL: | C10 C40 |
Date: | 2006–03–07 |
URL: | http://d.repec.org/n?u=RePEc:pqs:wpaper:062006&r=ecm |
By: | John Mullahy |
Abstract: | In econometric risk-adjustment exercises, models estimated with one or more included endogenous explanatory variables ("risk adjusters") will generally result in biased predictions of outcomes of interest, e.g. unconditional mean healthcare expenditures. This paper shows that a first-order contributor to this prediction bias is the difference between the distribution of explanatory variables in the estimation sample and the prediction sample -- a form of "extrapolation bias." In the linear model context, a difference in the means of the respective joint marginal distributions of observed covariates suffices to produce bias when endogenous explanatory variables are used in estimation. If these means do not differ, then the "endogeneity-related" extrapolation bias disappears, although a form of "standard" extrapolation bias may persist. These results are extended to some of the nonlinear models in common use in this literature, with provisionally similar conclusions. In general the bias problem will be most acute where risk adjustment is most useful, i.e. when estimated risk-adjustment models are applied in populations whose characteristics differ from those from which the estimation data are drawn. |
JEL: | I1 |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12236&r=ecm |
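The first-order claim above — that prediction bias is driven by the gap between covariate means in the estimation and prediction samples — can be checked in a toy linear simulation (all numbers below are illustrative, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(5)

def prediction_bias(mu_est, mu_pred, n=200_000):
    """OLS fit on an estimation sample with an endogenous regressor
    (cov(x, u) != 0), then bias of the predicted mean outcome in a
    prediction sample whose x-distribution has a different mean."""
    v = rng.standard_normal(n)                        # shared shock => endogeneity
    x = mu_est + rng.standard_normal(n) + v
    y = 1.0 + 2.0 * x + v + rng.standard_normal(n)    # true slope 2, plim OLS slope 2.5
    X = np.column_stack([np.ones(n), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    true_mean = 1.0 + 2.0 * mu_pred                   # E[y] in prediction sample
    return b[0] + b[1] * mu_pred - true_mean

# bias vanishes when the covariate means coincide, not otherwise
print(round(prediction_bias(0.0, 0.0), 3))   # ~0.0
print(round(prediction_bias(0.0, 1.0), 3))   # ~0.5 = (plim slope - true slope) * gap
```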
By: | Li-Chun Zhang (Statistics Norway) |
Abstract: | Systematic sampling is a widely used technique in survey sampling. It is easy to execute, whether the units are to be selected with equal probability or with probabilities proportional to auxiliary sizes. It can be very efficient if one manages to achieve favourable stratification effects through the listing of units. The main disadvantages are that there is no unbiased method for estimating the sampling variance, and that systematic sampling may perform poorly when the ordering of the population is based on inaccurate knowledge. In this paper we examine an aspect of systematic sampling that has previously not received much attention. We show that in a number of common situations, where systematic sampling has on average the same efficiency as the corresponding random sampling alternatives under an assumed model for the population, the sampling variance fluctuates much more under systematic sampling. The use of systematic sampling is thus associated with a risk that in general increases with the sampling fraction. This can be highly damaging for large samples from small populations in the case of single-stage sampling, or for large sub-samples from small sub-populations in the case of multi-stage sampling. |
Keywords: | Statistical decision; second order Bayes risk; Robust design; Panel survey. |
Date: | 2006–04 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:456&r=ecm |
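For reference, equal-probability systematic sampling — the design under study above — draws every k-th unit of the list after a random start; a minimal sketch (fractional-interval variant) follows.

```python
import numpy as np

def systematic_sample(population, n, seed=None):
    """Equal-probability systematic sample of size n: random start in
    [0, k), then every k-th unit, with interval k = N / n."""
    rng = np.random.default_rng(seed)
    N = len(population)
    k = N / n                                  # sampling interval (may be fractional)
    start = rng.uniform(0, k)
    idx = (start + k * np.arange(n)).astype(int)
    return [population[i] for i in idx]

print(systematic_sample(list(range(100)), n=10, seed=3))
```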
By: | Don Harding; Adrian Pagan |
Abstract: | Macroeconometric and financial researchers often use secondary or constructed binary random variables that differ in terms of their statistical properties from the primary random variables used in microeconometric studies. One important difference between primary and secondary binary variables is that while the former are, in many instances, independently distributed (i.d.), the latter are rarely i.d. We show how popular rules for constructing binary states determine the degree and nature of the dependence in those states. When using constructed binary variables as regressands, a common mistake is to ignore this dependence by using a probit model. We present an alternative non-parametric method that allows for dependence and apply that method to the issue of using the yield spread to predict recessions. |
Keywords: | Business cycle; binary variable; Markov chain; probit model; yield curve |
JEL: | C22 C53 E32 E37 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:963&r=ecm |
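The point that dating rules induce dependence even when the underlying series is i.i.d. is easy to verify by construction; below is a toy sketch with a two-consecutive-declines recession rule (the rule and the growth process are illustrative, not the paper's exact rules).

```python
import numpy as np

rng = np.random.default_rng(4)
# i.i.d. growth rates; the binary state is a *constructed* variable
g = 0.5 + rng.standard_normal(100_000)

# rule: S_t = 1 (recession) if growth was negative in both t and t-1
s = ((g[1:] < 0) & (g[:-1] < 0)).astype(int)

# first-order dependence of the constructed states:
# P(S_t = 1 | S_{t-1} = 1) far exceeds P(S_t = 1)
p1 = s.mean()
p11 = s[1:][s[:-1] == 1].mean()
print(f"P(S=1) = {p1:.3f},  P(S_t=1 | S_t-1=1) = {p11:.3f}")
```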
By: | Christophe Hurlin (LEO - Laboratoire d'économie d'Orléans - [CNRS : UMR6221] - [Université d'Orléans]); Sessi Tokpavi (LEO - Laboratoire d'économie d'Orléans - [CNRS : UMR6221] - [Université d'Orléans]) |
Abstract: | This paper proposes a new test for Value-at-Risk (VaR) validation. Our test exploits the idea that the sequence of VaR violations (the hit function) - taking the value 1-α if there is a violation and -α otherwise, for a nominal coverage rate α - satisfies the martingale difference property if the model used to quantify risk is adequate (Berkowitz et al., 2005). More precisely, we use the multivariate portmanteau statistic of Li and McLeod (1981) - the multivariate extension of the Box and Pierce (1970) test - to jointly test for the absence of autocorrelation in the vector of hit sequences across various coverage rates considered relevant for the management of extreme risks. We show that this shift to a multivariate dimension appreciably improves the power properties of the VaR validation test for reasonable sample sizes. |
Keywords: | Value-at-Risk; Risk Management; Model Selection |
Date: | 2006–05–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:papers:halshs-00068384_v1&r=ecm |
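Mechanically, the test stacks the centred hit sequences for several coverage rates and applies a multivariate portmanteau statistic to them; below is a minimal sketch in the plain Box-Pierce form (the Li-McLeod small-sample correction and the paper's exact implementation are omitted; function names are ours).

```python
import numpy as np
from scipy import stats

def hit_matrix(returns, var_forecasts, alphas):
    """Centred hit sequences: column j takes the value (1 - alpha_j) when
    the return violates the VaR forecast at coverage alpha_j, -alpha_j otherwise."""
    return np.column_stack([
        (returns < -var) - a for a, var in zip(alphas, var_forecasts)
    ])

def multivariate_portmanteau(H, L):
    """Box-Pierce-type multivariate portmanteau statistic on the K hit
    sequences up to lag L; asymptotically chi-squared with K^2 * L degrees
    of freedom under the martingale-difference null."""
    n, K = H.shape
    Hc = H - H.mean(axis=0)
    C0_inv = np.linalg.inv(Hc.T @ Hc / n)
    Q = 0.0
    for l in range(1, L + 1):
        Cl = Hc[l:].T @ Hc[:-l] / n          # lag-l cross-covariance matrix
        Q += np.trace(Cl.T @ C0_inv @ Cl @ C0_inv)
    Q *= n
    return Q, stats.chi2.sf(Q, df=K * K * L)
```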
By: | Martin Spieß (International Institute of Management, University of Flensburg) |
Date: | 2006–04 |
URL: | http://d.repec.org/n?u=RePEc:fln:wpaper:010&r=ecm |
By: | Koji Miyawaki (Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Akira Hibiki (National Institute for Environmental Studies and Department of Social Engineering, Tokyo Institute of Technology) |
Abstract: | This article proposes a Bayesian estimation method for the demand function under block rate pricing, focusing mainly on increasing block rate pricing. Block rate pricing is often observed in public sectors, such as water and electricity. Under this price structure, the price changes when consumption exceeds a certain threshold, and the demand function is subject to a piecewise-linear budget constraint. We apply a discrete/continuous choice model to analyze household behavior under such a price system and take a hierarchical Bayesian approach to estimate its demand function. Moreover, a separability condition is considered to obtain proper estimates. The model is extended to allow random coefficients for panel data and spatial correlation for spatial data, to account for consumer heterogeneity. The proposed method is applied to estimate the Japanese residential water demand function under increasing block rate pricing. |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2006cf424&r=ecm |
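Under increasing block rate pricing the marginal price steps up at each threshold, which is what makes the budget set piecewise-linear; a minimal sketch of the resulting bill follows (thresholds and prices are illustrative numbers).

```python
def block_rate_bill(q, thresholds, prices, fixed=0.0):
    """Total bill under increasing block rate pricing: the k-th price
    applies to the consumption falling inside the k-th block."""
    bill, lower = fixed, 0.0
    for upper, p in zip(thresholds + [float("inf")], prices):
        bill += p * max(0.0, min(q, upper) - lower)
        if q <= upper:
            break
        lower = upper
    return bill

# two thresholds, three increasing block prices
print(block_rate_bill(25.0, thresholds=[10.0, 20.0], prices=[1.0, 1.5, 2.2]))  # 36.0
```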