nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒06‒04
twenty-one papers chosen by
Sune Karlsson
Örebro University

  1. Efficient Bayesian estimation and combination of GARCH-type models By Ardia, David; Hoogerheide, Lennart F.
  2. Estimating a change point in the long memory parameter By Yamaguchi, Keiko
  3. A Simple Estimator for Dynamic Models with Serially Correlated Unobservables By Yingyao Hu, Matthew Shum and Wei Tan
  4. Bootstrap prediction intervals for VaR and ES in the context of GARCH models By María Rosa Nieto; Esther Ruiz
  5. A comparison of alternative approaches to sup-norm goodness of fit tests with estimated parameters By Parker, Thomas
  6. Goodness-of-fit testing for regime-switching models By Janczura, Joanna; Weron, Rafal
  7. Euler-Equation Estimation for Discrete Choice Models: A Capital Accumulation Application By Russell Cooper; John Haltiwanger; Jonathan L. Willis
  8. Econometric Analysis of High Dimensional VARs Featuring a Dominant Unit By Pesaran, M.H.; Chudik, A.
  10. Representing functional data in Reproducing Kernel Hilbert Spaces with applications to clustering and classification By Javier González; Alberto Muñoz
  10. Robust Inference with Clustered Data By Cameron, A. Colin; Miller, Douglas L.
  11. Weights and pools for a Norwegian density combination By Hilde Bjørnland; Karsten Gerdrup; Christie Smith; Anne Sofie Jore; Leif Anders Thorsrud
  12. Thresholds, News Impact Surfaces and Dynamic Asymmetric Multivariate GARCH By Michael McAleer; Massimiliano Caporin
  13. Nonparametric Estimation of Marketing-Mix Effects Using a Regression Discontinuity Design By Hartmann, Wesley; Nair, Harikesh; Narayanan, Sridhar
  14. Estimating Persistence in the Volatility of Asset Returns with Signal Plus Noise Models By Guglielmo Maria Caporale; Luis A. Gil-Alana
  15. Some Results for Extreme Value Processes in Analogy to the Gaussian Spectral Representation By Andree Ehlert; Martin Schlather
  16. Description Length Based Signal Detection in Singular Spectrum Analysis By Md Atikur Rahman Khan; D.S. Poskitt
  17. Some Remarks on T-copulas By Volf Frishling; David G Maher
  18. Identification Strategies in Survey Response Using Vignettes By Corrado, L.; Weeks, M.
  19. An Expanded Scope For Qualitative Economics By Andrew J. Buck; George M. Lady
  20. Equilibrium exchange rate determination and multiple structural changes By Hyunsok Kim; Ronald MacDonald
  21. Oversampling of stochastic processes By D.S.G. Pollock

  1. By: Ardia, David; Hoogerheide, Lennart F.
    Abstract: This chapter provides an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation to the posterior density of the model parameters. This density is then used in importance sampling for model estimation, model selection and model combination. The procedure is fully automatic, which avoids the difficult and time-consuming tuning of MCMC strategies. The AdMitIS methodology is illustrated with an empirical application to S&P index log-returns, where non-nested GARCH-type models are estimated and combined to predict the distribution of one-day-ahead log-returns.
    Keywords: GARCH; Bayesian inference; MCMC; marginal likelihood; Bayesian model averaging; adaptive mixture of Student-t distributions; importance sampling.
    JEL: C51 C22 C15 C11
    Date: 2010–02–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22919&r=ecm
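    A minimal Python sketch of the importance-sampling step described above, using a toy one-dimensional target; the real AdMitIS procedure adaptively fits a mixture of Student-t components to a GARCH posterior, whereas the single-component proposal and the target below are purely illustrative:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Toy skewed log-target standing in for a GARCH posterior.
        def log_post(theta):
            return np.logaddexp(np.log(0.4) + stats.norm.logpdf(theta, -1.0, 0.7),
                                np.log(0.6) + stats.norm.logpdf(theta, 1.5, 1.2))

        # Student-t proposal (AdMitIS would construct a mixture of these).
        df, loc, scale = 5, 0.5, 2.0
        theta = stats.t.rvs(df, loc=loc, scale=scale, size=50_000, random_state=rng)

        # Self-normalised importance weights and the posterior-mean estimate.
        log_w = log_post(theta) - stats.t.logpdf(theta, df, loc=loc, scale=scale)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        post_mean = np.sum(w * theta)
        ess = 1.0 / np.sum(w ** 2)          # effective sample size diagnostic
        print(f"posterior mean ~ {post_mean:.3f}, ESS ~ {ess:.0f}")
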
  2. By: Yamaguchi, Keiko
    Abstract: We propose an estimator of the change point in the long memory parameter d of an ARFIMA(p, d, q) process using the sup-Wald test. We derive the consistency and the rate of convergence of the change point estimator; the convergence rate depends on the magnitude of the shift. Furthermore, we obtain the limiting distribution of the change point estimator without assuming a particular distribution for the process, and can therefore construct a confidence interval for the change point. Simulations confirm the validity of the asymptotic theory when the sample size is large enough. We apply our change point estimator to the yearly Nile river minimum time series.
    Keywords: Break in persistence, long memory, change point
    JEL: C22
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2010-07&r=ecm
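    For intuition, a sup-Wald break-date scan in Python; the paper applies this idea to the ARFIMA memory parameter d, whereas the sketch below tests for a break in the mean, so it is an analogy rather than the authors' estimator:

        import numpy as np

        def sup_wald_mean_break(x, trim=0.15):
            # Scan trimmed candidate break dates with a Wald statistic
            # for a mean shift; return the maximiser and the sup statistic.
            n = len(x)
            best_stat, best_k = -np.inf, None
            for k in range(int(trim * n), int((1 - trim) * n)):
                x1, x2 = x[:k], x[k:]
                se2 = x1.var(ddof=1) / len(x1) + x2.var(ddof=1) / len(x2)
                stat = (x1.mean() - x2.mean()) ** 2 / se2
                if stat > best_stat:
                    best_stat, best_k = stat, k
            return best_k, best_stat

        rng = np.random.default_rng(1)
        x = np.r_[rng.normal(0.0, 1, 300), rng.normal(0.8, 1, 300)]  # break at t=300
        k_hat, w = sup_wald_mean_break(x)
        print(f"estimated break date: {k_hat} (sup-Wald = {w:.1f})")
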
  3. By: Yingyao Hu, Matthew Shum and Wei Tan
    Abstract: We present a method for estimating Markov dynamic models with unobserved state variables which can be serially correlated over time. We focus on the case where all the model variables have discrete support. Our estimator is simple to compute because it is noniterative and involves only elementary matrix manipulations. Our estimation method is nonparametric, in that no parametric assumptions on the distributions of the unobserved state variables or the laws of motion of the state variables are required. Monte Carlo simulations show that the estimator performs well in practice, and we illustrate its use with a dataset of doctors' prescriptions of pharmaceutical drugs.
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:558&r=ecm
  4. By: María Rosa Nieto; Esther Ruiz
    Abstract: In this paper, we propose a new bootstrap procedure to obtain prediction intervals of future Value at Risk (VaR) and Expected Shortfall (ES) in the context of univariate GARCH models. These intervals incorporate the parameter uncertainty associated with the estimation of the conditional variance of returns. Furthermore, they do not depend on any particular assumption on the error distribution. Alternative bootstrap intervals previously proposed in the literature incorporate the first but not the second source of uncertainty when computing the VaR and ES. We also consider an iterated smoothed bootstrap with better properties than traditional ones when computing prediction intervals for quantiles; however, this procedure depends on parameters that have to be chosen arbitrarily and is computationally very demanding. We analyze the finite sample performance of the proposed procedure and show that its coverage is closer to the nominal level than that of the alternatives. All the results are illustrated by obtaining one-step-ahead prediction intervals of the VaR and ES of several real time series of financial returns.
    Keywords: Expected Shortfall, Feasible Historical Simulation, Hill estimator, Parameter uncertainty, Quantile intervals, Value at Risk
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws102814&r=ecm
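    The resampling idea can be sketched in Python as follows. The fitted GARCH(1,1) parameters are taken as given and only the residual bootstrap for the one-step-ahead VaR and ES is shown; the paper's key extra step, re-estimating the model on each bootstrap sample so that parameter uncertainty enters the interval, is omitted here:

        import numpy as np

        rng = np.random.default_rng(2)

        # Simulate returns from a GARCH(1,1) as stand-in data.
        omega, alpha, beta, n = 0.05, 0.08, 0.90, 1500
        eps = rng.standard_normal(n)
        sig2, r = np.empty(n), np.empty(n)
        sig2[0] = omega / (1 - alpha - beta)
        r[0] = np.sqrt(sig2[0]) * eps[0]
        for t in range(1, n):
            sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
            r[t] = np.sqrt(sig2[t]) * eps[t]

        # Bootstrap standardised residuals through the one-step variance.
        z = r / np.sqrt(sig2)
        sig2_next = omega + alpha * r[-1] ** 2 + beta * sig2[-1]
        r_next = np.sqrt(sig2_next) * rng.choice(z, size=10_000, replace=True)

        var_1 = np.quantile(r_next, 0.01)            # bootstrap 1% VaR
        es_1 = r_next[r_next <= var_1].mean()        # matching ES
        print(f"1% VaR = {var_1:.3f}, 1% ES = {es_1:.3f}")
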
  5. By: Parker, Thomas
    Abstract: Goodness of fit tests based on sup-norm statistics of empirical processes have nonstandard limiting distributions when the null hypothesis is composite — that is, when parameters of the null model are estimated. Several solutions to this problem have been suggested, including the calculation of adjusted critical values for these nonstandard distributions and the transformation of the empirical process such that statistics based on the transformed process are asymptotically distribution-free. The approximation methods proposed by Durbin (1985) can be applied to compute appropriate critical values for tests based on sup-norm statistics. The resulting tests have quite accurate size, a fact which has gone unrecognized in the econometrics literature. Some justification for this accuracy lies in the similar features that Durbin’s approximation methods share with the theory of extrema for Gaussian random fields and for Gauss-Markov processes. These adjustment techniques are also related to the transformation methodology proposed by Khmaladze (1981) through the score function of the parametric model. Monte Carlo experiments suggest that these two testing strategies are roughly comparable to one another and more powerful than a simple bootstrap procedure.
    Keywords: Goodness of fit test; Estimated parameters; Gaussian process; Gauss-Markov process; Boundary crossing probability; Martingale transformation
    JEL: C14 C12 C46
    Date: 2010–05–26
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22926&r=ecm
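    The "simple bootstrap procedure" used as a benchmark can be sketched in a few lines of Python: a Kolmogorov-Smirnov test of normality in which the parameters are re-estimated on every bootstrap sample, so the simulated null distribution reflects the estimated-parameter effect (the normal null is an illustrative choice, not the paper's only case):

        import numpy as np
        from scipy import stats

        def ks_boot_pvalue(x, B=500, seed=3):
            rng = np.random.default_rng(seed)
            mu, sd = x.mean(), x.std(ddof=1)
            ks_obs = stats.kstest(x, 'norm', args=(mu, sd)).statistic
            hits = 0
            for _ in range(B):
                xb = rng.normal(mu, sd, size=len(x))
                # Re-estimation on each sample is the essential step.
                ks_b = stats.kstest(xb, 'norm',
                                    args=(xb.mean(), xb.std(ddof=1))).statistic
                hits += ks_b >= ks_obs
            return hits / B

        x = np.random.default_rng(4).standard_t(df=5, size=200)  # mildly non-normal
        print(f"bootstrap p-value = {ks_boot_pvalue(x):.3f}")
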
  6. By: Janczura, Joanna; Weron, Rafal
    Abstract: In this paper we propose a novel goodness-of-fit testing scheme for regime-switching models. We consider models with an observable as well as a latent state process. The test is based on the Kolmogorov-Smirnov supremum-distance statistic and the concept of the weighted empirical distribution function. We apply the proposed scheme to test whether a 2-state Markov regime-switching model fits electricity spot price data.
    Keywords: Regime-switching; Goodness-of-fit; Weighted empirical distribution function; Kolmogorov-Smirnov test
    JEL: C52 C12 Q40
    Date: 2010–05–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:22871&r=ecm
  7. By: Russell Cooper; John Haltiwanger; Jonathan L. Willis
    Abstract: This paper studies capital adjustment at the establishment level. Our goal is to characterize capital adjustment costs, which are important for understanding both the dynamics of aggregate investment and the impact of various policies on capital accumulation. Our estimation strategy searches for parameters that minimize ex post errors in an Euler equation. This strategy is quite common in models for which adjustment occurs in each period. Here, we extend that logic to the estimation of parameters of dynamic optimization problems in which non-convexities lead to extended periods of investment inactivity. In doing so, we create a method to take into account censored observations stemming from intermittent investment. This methodology allows us to take the structural model directly to the data, avoiding time-consuming simulation-based methods. To study the effectiveness of this methodology, we first undertake several Monte Carlo exercises using data generated by the structural model. We then estimate capital adjustment costs for U.S. manufacturing establishments in two sectors.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2010/21&r=ecm
  8. By: Pesaran, M.H.; Chudik, A.
    Abstract: This paper extends the analysis of infinite dimensional vector autoregressive models (IVAR) proposed in Chudik and Pesaran (2010) to the case where one of the variables or the cross section units in the IVAR model is dominant or pervasive. This extension is not straightforward and involves several technical difficulties. The dominant unit influences the rest of the variables in the IVAR model both directly and indirectly, and its effects do not vanish even as the dimension of the model (N) tends to infinity. The dominant unit acts as a dynamic factor in the regressions of the non-dominant units and yields an infinite order distributed lag relationship between the two types of units. Despite this, it is shown that the effects of the dominant unit as well as those of the neighborhood units can be consistently estimated by running augmented least squares regressions that include distributed lag functions of the dominant unit. The asymptotic distribution of the estimators is derived and their small sample properties are investigated by means of Monte Carlo experiments.
    Keywords: IVAR Models, Dominant Units, Large Panels, Weak and Strong Cross Section Dependence, Factor Model
    JEL: C10 C33 C51
    Date: 2010–05–29
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1024&r=ecm
  9. By: Javier González; Alberto Muñoz
    Abstract: Functional data are difficult to manage for many traditional statistical techniques given their very high (or intrinsically infinite) dimensionality. The reason is that functional data are essentially functions, and most algorithms are designed to work with (low) finite-dimensional vectors. Within this context we propose techniques to obtain finite-dimensional representations of functional data. The key idea is to consider each functional curve as a point in a general function space and then project these points onto a Reproducing Kernel Hilbert Space with the aid of regularization theory. In this work we describe the projection method, analyze its theoretical properties and propose a model selection procedure to select appropriate Reproducing Kernel Hilbert Spaces onto which to project the functional data.
    Keywords: Functional data, Reproducing Kernel Hilbert Spaces, Regularization theory
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws102713&r=ecm
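    A minimal sketch of the projection step in Python: each discretised curve is represented by the coefficient vector of a regularised least-squares fit in the RKHS of a Gaussian kernel, alpha = (K + n*lambda*I)^{-1} y. The kernel and the value of lambda are assumptions standing in for the paper's model selection procedure:

        import numpy as np

        def rkhs_coefficients(t, y, lam=1e-3, gamma=10.0):
            # Gram matrix of a Gaussian kernel on the sampling grid.
            K = np.exp(-gamma * (t[:, None] - t[None, :]) ** 2)
            n = len(t)
            alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
            return alpha, K

        rng = np.random.default_rng(5)
        t = np.linspace(0, 1, 50)
        y1 = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
        y2 = np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
        a1, K = rkhs_coefficients(t, y1)   # finite-dimensional representations
        a2, _ = rkhs_coefficients(t, y2)   # usable for clustering/classification
        print("fit RMSE, curve 1:", np.sqrt(np.mean((K @ a1 - y1) ** 2)))
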
  10. By: Cameron, A. Colin (University of California, Davis); Miller, Douglas L. (University of California, Davis)
    Abstract: In this paper we survey methods to control for regression model error that is correlated within groups or clusters, but is uncorrelated across groups or clusters. Failure to control for the clustering can lead to understatement of standard errors and overstatement of statistical significance, as emphasized most notably in empirical studies by Moulton (1990) and Bertrand, Duflo and Mullainathan (2004). We emphasize OLS estimation with statistical inference based on minimal assumptions regarding the error correlation process. Complications we consider include cluster-specific fixed effects, few clusters, multi-way clustering, more efficient feasible GLS estimation, and adaptation to nonlinear and instrumental variables estimators.
    JEL: C12 C21 C23
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:ecl:ucdeco:10-6&r=ecm
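    A textbook version of the baseline estimator surveyed in the paper, in Python: OLS with a one-way cluster-robust (CR1) sandwich variance, applied to simulated data with cluster-correlated regressors and errors:

        import numpy as np

        def cluster_robust_se(X, y, cluster):
            n, k = X.shape
            beta = np.linalg.solve(X.T @ X, X.T @ y)
            u = y - X @ beta
            bread = np.linalg.inv(X.T @ X)
            meat = np.zeros((k, k))
            for g in np.unique(cluster):        # sum of cluster-score outer products
                sg = X[cluster == g].T @ u[cluster == g]
                meat += np.outer(sg, sg)
            G = len(np.unique(cluster))
            c = G / (G - 1) * (n - 1) / (n - k)  # CR1 finite-sample adjustment
            V = c * bread @ meat @ bread
            return beta, np.sqrt(np.diag(V))

        rng = np.random.default_rng(6)
        G, m = 30, 20                            # 30 clusters of 20 observations
        cl = np.repeat(np.arange(G), m)
        x = rng.standard_normal(G * m) + rng.standard_normal(G)[cl]
        y = 1.0 + 0.5 * x + rng.standard_normal(G * m) + rng.standard_normal(G)[cl]
        X = np.column_stack([np.ones(G * m), x])
        beta, se = cluster_robust_se(X, y, cl)
        print(f"slope = {beta[1]:.3f}, cluster-robust SE = {se[1]:.3f}")
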
  11. By: Hilde Bjørnland (Norwegian School of Management (BI) and Norges Bank (Central Bank of Norway)); Karsten Gerdrup (Norges Bank (Central Bank of Norway)); Christie Smith (Reserve Bank of New Zealand); Anne Sofie Jore (Norges Bank (Central Bank of Norway)); Leif Anders Thorsrud (Norges Bank (Central Bank of Norway))
    Abstract: We apply a suite of models to produce quasi-real-time density forecasts of Norwegian GDP and inflation, and evaluate different combination and selection methods using the Kullback-Leibler information criterion (KLIC). We use linear and logarithmic opinion pools in conjunction with various weighting schemes, and we compare these combinations to two different selection methods. In our application, logarithmic opinion pools were better than linear opinion pools, and score-based weights were generally superior to other weighting schemes. Model selection generally yielded poor density forecasts, as evaluated by the KLIC.
    Keywords: Model combination; evaluation; density forecasting; KLIC
    JEL: C32 C52 C53 E52
    Date: 2010–05–19
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2010_06&r=ecm
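    The two pooling schemes compared in the paper can be illustrated in Python with two toy normal predictive densities and log-score (KLIC-type) weights; the densities, weights and grid below are illustrative, not the Norwegian suite:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        y = rng.normal(0.5, 1.0, size=200)                   # realised outcomes

        dens = [stats.norm(0.0, 1.0), stats.norm(1.0, 1.5)]  # two forecasters
        logscores = np.array([d.logpdf(y).sum() for d in dens])
        w = np.exp(logscores - logscores.max())
        w /= w.sum()                                         # log-score weights

        grid = np.linspace(-4, 5, 400)
        p = np.array([d.pdf(grid) for d in dens])
        linear_pool = w @ p                                  # arithmetic combination
        log_pool = np.exp(w @ np.log(p))                     # geometric combination,
        log_pool /= log_pool.sum() * (grid[1] - grid[0])     # renormalised to a density
        print("weights:", np.round(w, 3))
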
  12. By: Michael McAleer (University of Canterbury); Massimiliano Caporin
    Abstract: DAMGARCH is a new model that extends the VARMA-GARCH model of Ling and McAleer (2003) by introducing multiple thresholds and time-dependent structure in the asymmetry of the conditional variances. Analytical expressions for the news impact surface implied by the new model are also presented. DAMGARCH models the shocks affecting the conditional variances on the basis of an underlying multivariate distribution. It is possible to model explicitly asset-specific shocks and common innovations by partitioning the multivariate density support. This paper presents the model structure, describes the implementation issues, and provides the conditions for the existence of a unique stationary solution, and for consistency and asymptotic normality of the quasi-maximum likelihood estimators. The paper also presents an empirical example to highlight the usefulness of the new model.
    Keywords: multivariate asymmetry; conditional variance; stationarity conditions; asymptotic theory; multivariate news impact curve
    JEL: C32 C51 C52
    Date: 2010–04–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:10/32&r=ecm
  13. By: Hartmann, Wesley (Stanford University); Nair, Harikesh (Stanford University); Narayanan, Sridhar (Stanford University)
    Abstract: We discuss how regression discontinuity designs arise naturally in settings where firms target marketing activity at consumers, and show how this aspect may be exploited for econometric inference of causal effects of marketing effort. Our main insight is to use commonly observed discreteness and kinks in the heuristics by which firms target such marketing activity to consumers for nonparametric identification. Such kinks, along with continuity restrictions that are typically satisfied in marketing and industrial organization applications, are sufficient for identification of local treatment effects. We review the theory of regression discontinuity estimation in the context of targeting, and explore its applicability to several marketing settings. We discuss identifiability of causal marketing effects using the design, and illustrate theoretically the conditions under which the RD estimator may be valid. Specifically, we argue that consideration of an underlying model of strategic consumer behavior reveals how identification hinges on model features such as the specification and value of structural parameters as well as belief structures. We present two empirical applications: the first, to measuring the effect of casino e-mail promotions targeted to customers based on ranges of their expected profitability; and the second, to measuring the effect of direct mail targeted by a B2C company to zip-codes based on thresholds of expected response. In both cases, we illustrate that exploiting the regression discontinuity design reveals negative effects of the marketing campaigns that would not have been uncovered using other approaches. Our results are nonparametric, easy to compute, and fully control for the endogeneity induced by the targeting rule.
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:2039&r=ecm
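    A stylised sharp-RD estimate in Python: local linear fits on each side of the firm's targeting threshold, with the treatment effect read off as the difference of intercepts at the cutoff. The targeting rule, bandwidth and data below are simulated stand-ins for the applications in the paper:

        import numpy as np

        def rd_local_linear(x, y, cutoff, h):
            est = {}
            for side, mask in [("below", (x < cutoff) & (x > cutoff - h)),
                               ("above", (x >= cutoff) & (x < cutoff + h))]:
                X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
                est[side] = np.linalg.lstsq(X, y[mask], rcond=None)[0][0]
            return est["above"] - est["below"]      # jump at the cutoff

        rng = np.random.default_rng(8)
        score = rng.uniform(0, 10, 2000)            # e.g. expected profitability
        treated = score >= 5.0                      # firm's targeting heuristic
        y = 0.3 * score + 1.0 * treated + rng.standard_normal(2000)
        print("RD treatment effect:", rd_local_linear(score, y, 5.0, 1.5))
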
  14. By: Guglielmo Maria Caporale; Luis A. Gil-Alana
    Abstract: This paper examines the degree of persistence in the volatility of financial time series using a Long Memory Stochastic Volatility (LMSV) model. Specifically, it employs a Gaussian semiparametric (or local Whittle) estimator of the memory parameter, based on the frequency domain, proposed by Robinson (1995a), and shown by Arteche (2004) to be consistent and asymptotically normal in the context of signal plus noise models. Daily data on the NASDAQ index are analysed. The results suggest that volatility has a long-memory component, with the order of integration ranging between 0.3 and 0.5; the series is therefore stationary and mean-reverting.
    Keywords: Fractional integration, long memory, stochastic volatility, asset returns
    JEL: C13 C22
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1006&r=ecm
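    The local Whittle estimator itself is short enough to sketch in Python; it minimises Robinson's (1995a) objective over the first m periodogram ordinates (the bandwidth rule m = n^0.65 and the test series are illustrative choices):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def local_whittle_d(x, m=None):
            n = len(x)
            m = m or int(n ** 0.65)
            I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
            lam = 2 * np.pi * np.arange(1, m + 1) / n   # Fourier frequencies
            def R(d):                                   # local Whittle objective
                return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
            return minimize_scalar(R, bounds=(-0.49, 0.99), method='bounded').x

        # Check on a fractionally integrated series with d = 0.4.
        rng = np.random.default_rng(9)
        d_true, n = 0.4, 4096
        psi = np.cumprod(np.r_[1.0, (d_true + np.arange(n - 1)) / np.arange(1, n)])
        x = np.convolve(rng.standard_normal(n), psi)[:n]   # (1-L)^{-d} noise
        print("d_hat =", round(local_whittle_d(x), 3))
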
  15. By: Andree Ehlert (Georg-August-University Göttingen); Martin Schlather (Georg-August-University Göttingen)
    Abstract: The extremal coefficient function has been discussed as an analog of the autocovariance function for extreme values. However, little is known about the behavior of valid extremal coefficient functions beyond their positive definite type. In particular, the reconstruction of valid processes from given extremal coefficient functions has not been considered before. We show, for the one-dimensional case, the equivalence of the set correlation functions and the extremal coefficient functions with finite range on a grid, and study an analogy to Bochner’s theorem, namely that any such extremal coefficient function is representable as a convex combination of a finite set of positive definite functions. This allows for the construction of simple max-stable processes complying with a given extremal coefficient function and, in addition, highlights further properties of the latter. We will include an application of this approach and discuss several examples. As to processes with infinite range, we will consider a natural extension to max-stable processes of the term “long memory” that is well-known in the Gaussian framework.
    Keywords: Extreme value theory; max-stable process; extremal dependence; extremal coefficient function; set covariance function; set correlation function; homometric; long memory; summability
    Date: 2010–05–25
    URL: http://d.repec.org/n?u=RePEc:got:gotcrc:030&r=ecm
  16. By: Md Atikur Rahman Khan; D.S. Poskitt
    Abstract: This paper provides an information theoretic analysis of the signal-noise separation problem in Singular Spectrum Analysis. We present a signal-plus-noise model based on the Karhunen-Loève expansion and use this model to motivate the construction of a minimum description length criterion that can be employed to select both the window length and the signal. We show that under very general regularity conditions the criterion will identify the true signal dimension with probability one as the sample size increases, and will choose the smallest window length consistent with the Whitney embedding theorem. Empirical results obtained using simulated and real world data sets indicate that the asymptotic theory is reflected in observed behaviour, even in relatively small samples.
    Keywords: Karhunen-Loève expansion, minimum description length, signal-plus-noise model, Singular Spectrum Analysis, embedding
    JEL: C14 C22 C52
    Date: 2010–05–24
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2010-13&r=ecm
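    The SSA machinery the criterion operates on can be sketched in Python: embed the series in an L-window trajectory matrix, take its SVD, and rebuild the signal from the leading r components by diagonal averaging. The paper selects L and r by minimum description length; here both are supplied by hand:

        import numpy as np

        def ssa_signal(x, L, r):
            n = len(x)
            K = n - L + 1
            X = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Xr = (U[:, :r] * s[:r]) @ Vt[:r]                     # rank-r signal part
            rec, cnt = np.zeros(n), np.zeros(n)
            for j in range(K):                                   # diagonal averaging
                rec[j:j + L] += Xr[:, j]
                cnt[j:j + L] += 1
            return rec / cnt

        rng = np.random.default_rng(11)
        t = np.arange(500)
        signal = np.sin(2 * np.pi * t / 50)
        x = signal + 0.5 * rng.standard_normal(500)
        print("residual std:", np.std(ssa_signal(x, L=100, r=2) - signal))
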
  17. By: Volf Frishling; David G Maher
    Abstract: We examine three methods of constructing correlated Student-t random variables. Our motivation arises from simulations that utilise heavy-tailed distributions for the purposes of stress testing and economic capital calculations for financial institutions. We make several observations regarding the suitability of the three methods for this purpose.
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1005.4456&r=ecm
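    Two standard constructions (possibly among those the authors compare; their exact three are not listed in the abstract) can be contrasted in Python; the joint tail behaviour, which drives stress-test results, differs sharply between them:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        rho, nu, n = 0.7, 4, 100_000
        L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
        z = rng.standard_normal((n, 2)) @ L.T           # correlated normals

        # (a) Multivariate t: divide by a common chi-square mixing
        # variable -- this produces tail dependence.
        t_common = z / np.sqrt(rng.chisquare(nu, size=n) / nu)[:, None]

        # (b) t margins coupled by a Gaussian copula -- no tail dependence.
        t_gauss = stats.t.ppf(stats.norm.cdf(z), df=nu)

        for name, v in [("common mixing", t_common), ("Gaussian copula", t_gauss)]:
            q0 = np.quantile(v[:, 0], 0.01)
            q1 = np.quantile(v[:, 1], 0.01)
            print(name, "P(both in 1% tails) =",
                  np.mean((v[:, 0] < q0) & (v[:, 1] < q1)))
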
  18. By: Corrado, L.; Weeks, M.
    Abstract: In this paper we explore solutions to a particular type of heterogeneity in survey data which is manifest in the presence of individual-specific response scales. We consider this problem in the context of existing evidence on cross-country differences in subjective life satisfaction, and in particular the extent of cross-country comparability. In this instance observed responses are not directly comparable, and inference is compromised. We utilise two broad identification strategies to account for scale heterogeneity. Keeping the data fixed, we consider a number of estimators based on alternative generalisations of the ordered response model. We also examine a number of alternative approaches based on the use of additional information in the form of responses to one or more additional questions with the same response categories as the self-assessment question. These additional questions, referred to as anchoring vignettes, can, under certain conditions, be used to correct for the resultant biases in model parameters.
    Keywords: Vignettes, ordered response, generalised ordered response, stochastic thresholds, attitudinal surveys.
    Date: 2010–05–29
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1031&r=ecm
  19. By: Andrew J. Buck (Department of Economics, Temple University); George M. Lady (Department of Economics, Temple University)
    Abstract: As currently practiced, the analysis of an economic model’s qualitative properties is very restricted and rarely productive. This paper provides an approach for conducting an expanded qualitative analysis that can be applied to any economic model. The method proposed will enable the qualitative properties of all economic models to be critically assessed.
    Keywords: Qualitative Analysis, Comparative Statics, Falsification
    JEL: C12 C14 C52
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:tem:wpaper:1007&r=ecm
  20. By: Hyunsok Kim; Ronald MacDonald
    Abstract: The large appreciation and depreciation of the US dollar in the 1980s stimulated an important debate on the usefulness of unit root tests in the presence of structural breaks. In this paper, we propose a simple model to describe the evolution of the real exchange rate. We then propose a more general smooth transition (STR) function than has hitherto been employed, which is able to capture structural changes along the (long-run) equilibrium path, and show that this is consistent with our economic model. Our framework allows for a gradual adjustment between regimes and allows for under- and/or over-valued exchange rate adjustments. Using monthly and quarterly data for up to twenty OECD countries, we apply our methodology to investigate the univariate time series properties of CPI-based real exchange rates with both the U.S. dollar and German mark as the numeraire currencies. The empirical results show that, for more than half of the quarterly series, the evidence in favour of the stationarity of the real exchange rate was clearer in the sub-sample period post-1980.
    Keywords: Unit root tests, structural breaks, purchasing power parity
    JEL: C16 C22 F31
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2010_14&r=ecm
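    One common smooth transition form, sketched in Python for intuition (the paper's STR function is more general and accommodates multiple structural changes):

        import numpy as np

        def G(t, gamma, c):
            # Logistic transition: rises smoothly from 0 to 1 around
            # break date c, at a speed governed by gamma.
            return 1.0 / (1.0 + np.exp(-gamma * (t - c)))

        t = np.arange(200)
        equilibrium = 1.0 + 0.5 * G(t, gamma=0.1, c=100.0)  # gradual regime shift
        print(np.round(equilibrium[[0, 99, 199]], 3))       # ~1.0, ~1.24, ~1.5
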
  21. By: D.S.G. Pollock (University of Leicester)
    Abstract: Discrete-time ARMA processes can be placed in a one-to-one correspondence with a set of continuous-time processes that are bounded in frequency by the Nyquist value of π radians per sample period. It is well known that, if data are sampled from a continuous process of which the maximum frequency exceeds the Nyquist value, then there will be a problem of aliasing. However, if the sampling is too rapid, then other problems will arise that will cause the ARMA estimates to be severely biased. The paper reveals the nature of these problems and it shows how they may be overcome.
    Keywords: Stochastic Differential Equations, Band-Limited Stochastic Processes, Oversampling
    Date: 2010–05–25
    URL: http://d.repec.org/n?u=RePEc:wse:wpaper:44&r=ecm
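    The aliasing problem mentioned in the abstract is easy to exhibit in Python (the oversampling bias, the paper's main concern, would need a fuller simulation):

        import numpy as np

        # A 0.8 Hz cosine sampled once per second (Nyquist limit 0.5 Hz)
        # is indistinguishable from a 0.2 Hz cosine at the sample points.
        t = np.arange(0, 40, 1.0)
        x_fast = np.cos(2 * np.pi * 0.8 * t)
        x_alias = np.cos(2 * np.pi * 0.2 * t)
        print("max |difference| =", np.max(np.abs(x_fast - x_alias)))  # ~1e-15
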

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.