nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒07‒07
eleven papers chosen by
Sune Karlsson
Orebro University

  1. The Maximum Lq-Likelihood Method: an Application to Extreme Quantile Estimation in Finance By Davide Ferrari; Sandra Paterlini
  2. Efficient High-Dimensional Importance Sampling By Jean-Francois Richard; Wei Zhang
  3. Non-negativity Conditions for the Hyperbolic GARCH Model By Christian Conrad
  4. A Comparison of Estimation Methods for Vector Autoregressive Moving-Average Models By Christian Kascha
  5. Econometric Analysis with Vector Autoregressive Models By Helmut Luetkepohl
  6. Back to square one: identification issues in DSGE models By Fabio Canova; Luca Sala
  7. Ordinary Least Squares Bias and Bias Corrections for iid Samples By Lonnie Magee
  8. A note on the coefficient of determination in models with infinite variance variables By Jeong-Ryeol Kurz-Kim; Mico Loretan
  9. Quantile Forecasting for Credit Risk Management using possibly Mis-specified Hidden Markov Models By Konrad Banachewicz; André Lucas
  10. The Levy sections theorem: an application to econophysics By Figueiredo, Annibal; Matsushita, Raul; Da Silva, Sergio; Serva, Maurizio; Viswanathan, Gandhi; Nascimento, Cesar; Gleria, Iram
  11. Measuring changes in the value of the numeraire By Ricardo Reis; Mark W. Watson

  1. By: Davide Ferrari; Sandra Paterlini
    Abstract: Estimating financial risk is a critical issue for banks and insurance companies. Recently, quantile estimation based on Extreme Value Theory (EVT) has found a successful domain of application in such a context, outperforming other approaches. Given a parametric model provided by EVT, a natural approach is Maximum Likelihood estimation. Although the resulting estimator is asymptotically efficient, the number of observations available to estimate the parameters of EVT models is often too small to make the large-sample properties trustworthy. In this paper, we study a new estimator of the parameters, the Maximum Lq-Likelihood estimator (MLqE), introduced by Ferrari and Yang (2007). We show that the MLqE can outperform the standard MLE when estimating tail probabilities and quantiles of the Generalized Extreme Value (GEV) and the Generalized Pareto (GP) distributions. First, we assess the relative efficiency of the MLqE and the MLE for various sample sizes, using Monte Carlo simulations. Second, we analyze the performance of the MLqE for extreme quantile estimation using real-world financial data. The MLqE is characterized by a distortion parameter q and extends the traditional log-likelihood maximization procedure. When q → 1, the new estimator approaches the traditional Maximum Likelihood Estimator (MLE), recovering its desirable asymptotic properties; when q ≠ 1 and the sample size is moderate or small, the MLqE successfully trades bias for variance, resulting in an overall gain in accuracy (Mean Squared Error).
    Keywords: Maximum Likelihood; Extreme Value Theory; q-Entropy; Tail-related risk measures
    JEL: C13 C22 C51
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:mod:wcefin:07071&r=ecm
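    A minimal Python sketch of the Lq-likelihood idea described above, using the Lq function Lq(u) = (u^(1-q) - 1)/(1-q), which tends to log(u) as q → 1. The Generalized Pareto fit, the starting values and the simulated data below are illustrative assumptions, not taken from the paper:

      import numpy as np
      from scipy import stats, optimize

      def lq(u, q):
          # Lq function: tends to log(u) as q -> 1
          return np.log(u) if abs(q - 1.0) < 1e-12 else (u**(1.0 - q) - 1.0) / (1.0 - q)

      def mlq_fit_gpd(x, q, start=(0.1, 1.0)):
          # fit a Generalized Pareto (shape xi, scale sigma) by maximizing the Lq-likelihood
          def neg_lq_lik(params):
              xi, sigma = params
              if sigma <= 0:
                  return np.inf
              dens = stats.genpareto.pdf(x, c=xi, scale=sigma)
              return np.inf if np.any(dens <= 0) else -np.sum(lq(dens, q))
          return optimize.minimize(neg_lq_lik, start, method="Nelder-Mead").x

      rng = np.random.default_rng(0)
      x = stats.genpareto.rvs(c=0.3, scale=1.0, size=50, random_state=rng)  # illustrative small sample of exceedances
      print("MLE  (q = 1.0):", mlq_fit_gpd(x, q=1.0))
      print("MLqE (q = 0.9):", mlq_fit_gpd(x, q=0.9))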
  2. By: Jean-Francois Richard; Wei Zhang
    Abstract: The paper describes a simple, generic and yet highly accurate Efficient Importance Sampling (EIS) Monte Carlo (MC) procedure for the evaluation of high-dimensional numerical integrals. EIS is based upon a sequence of auxiliary weighted regressions which, under appropriate conditions, are actually linear. It can be used to evaluate likelihood functions and byproducts thereof, such as ML estimators, for models which depend upon unobservable variables. A dynamic stochastic volatility model and a logit panel data model with unobserved heterogeneity (random effects) in both dimensions are used to illustrate the high numerical accuracy of EIS, even with a small number of MC draws. MC simulations are used to characterize the finite-sample numerical and statistical properties of EIS-based ML estimators.
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:pit:wpaper:321&r=ecm
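    The auxiliary-regression step at the core of EIS is easiest to see in one dimension: regress the log integrand on (x, x^2), read the fitted quadratic as a Gaussian importance sampler, and iterate. The sketch below shows only this idea on a toy integrand; the paper's algorithm is sequential, high-dimensional and uses weighted regressions, and the integrand and tuning choices here are illustrative assumptions:

      import numpy as np
      from scipy.integrate import quad

      rng = np.random.default_rng(1)
      log_phi = lambda x: -0.5 * (x - 1)**2 - 0.1 * (x - 1)**4   # log of a toy integrand

      mu, sig = 0.0, 3.0                       # deliberately poor initial Gaussian sampler
      for _ in range(5):                       # auxiliary-regression iterations
          x = rng.normal(mu, sig, size=2000)
          X = np.column_stack([np.ones_like(x), x, x**2])
          c0, a1, a2 = np.linalg.lstsq(X, log_phi(x), rcond=None)[0]
          sig = np.sqrt(-1.0 / (2.0 * a2))     # Gaussian kernel implied by the quadratic fit
          mu = a1 * sig**2

      x = rng.normal(mu, sig, size=20000)
      log_m = -0.5 * ((x - mu) / sig)**2 - np.log(sig * np.sqrt(2 * np.pi))
      print("EIS-style estimate:", np.mean(np.exp(log_phi(x) - log_m)))
      print("quadrature check  :", quad(lambda t: np.exp(log_phi(t)), -np.inf, np.inf)[0])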
  3. By: Christian Conrad (KOF Swiss Economic Institute, ETH Zurich Switzerland)
    Abstract: In this article we derive conditions which ensure the non-negativity of the conditional variance in the Hyperbolic GARCH(p, d, q) (HYGARCH) model of Davidson (2004). The conditions are necessary and sufficient for p < 2 and sufficient for p > 2, and emerge as natural extensions of the inequality constraints derived in Nelson and Cao (1992) for the GARCH model and in Conrad and Haag (2006) for the FIGARCH model. As a by-product we obtain a representation of the ARCH(∞) coefficients which allows computationally efficient multi-step-ahead forecasting of the conditional variance of a HYGARCH process. We also relate the necessary and sufficient parameter set of the HYGARCH to the necessary and sufficient parameter sets of its GARCH and FIGARCH components. Finally, we analyze the effects of erroneously fitting a FIGARCH model to a data sample which was truly generated by a HYGARCH process. An empirical application of the HYGARCH(1, d, 1) model to daily NYSE data illustrates the importance of our results.
    Keywords: Inequality constraints, fractional integration, long memory GARCH processes
    JEL: C22 C52 C53
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:kof:wpskof:07-162&r=ecm
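    To see what a non-negativity check involves, the sketch below expands the ARCH(∞) coefficients of a HYGARCH(1, d, 1) numerically and inspects their sign. It assumes the commonly used representation in which the FIGARCH filter (1-L)^d is replaced by 1 + alpha[(1-L)^d - 1]; the parameter values are illustrative, and this truncated numerical check is not a substitute for the paper's analytical conditions:

      import numpy as np

      def hygarch_arch_inf(beta, phi, d, alpha, n=200):
          # expand lambda(L) = 1 - (1 - beta L)^(-1) (1 - phi L) [1 + alpha((1 - L)^d - 1)]
          pi = np.zeros(n); pi[0] = 1.0
          for k in range(1, n):                        # binomial coefficients of (1 - L)^d
              pi[k] = pi[k-1] * (k - 1 - d) / k
          g = alpha * pi; g[0] = 1.0                   # 1 + alpha((1 - L)^d - 1)
          h = g.copy(); h[1:] -= phi * g[:-1]          # multiply by (1 - phi L)
          a = np.zeros(n); a[0] = h[0]
          for k in range(1, n):                        # divide by (1 - beta L)
              a[k] = h[k] + beta * a[k-1]
          lam = -a; lam[0] += 1.0
          return lam[1:]                               # ARCH(inf) coefficients (lambda_0 = 0)

      psi = hygarch_arch_inf(beta=0.4, phi=0.2, d=0.3, alpha=0.9)   # illustrative parameter values
      print("smallest of the first 199 ARCH(inf) coefficients:", psi.min())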
  4. By: Christian Kascha
    Abstract: Classical Gaussian maximum likelihood estimation of mixed vector autoregressive moving-average models is plagued by various numerical problems and has been considered difficult by many applied researchers. These disadvantages may have led to the dominant use of vector autoregressive models in macroeconomic research. Therefore, several other, simpler estimation methods have been proposed in the literature. In this paper these methods are compared by means of a Monte Carlo study. Different evaluation criteria are used to judge the relative performance of the algorithms.
    Keywords: VARMA Models, Estimation Algorithms, Forecasting
    JEL: C32 C15 C63
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2007/12&r=ecm
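    As an example of the kind of regression-based alternative to full Gaussian ML that such comparisons typically consider, the sketch below implements a Hannan-Rissanen-style two-stage least-squares fit of a bivariate VARMA(1,1): a long VAR first proxies the unobserved innovations, which then enter a second regression. Whether this particular algorithm is among those compared in the paper is not stated in the abstract, and the data-generating process is illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      A = np.array([[0.5, 0.1], [0.0, 0.4]])           # AR coefficient matrix (illustrative)
      M = np.array([[0.3, 0.0], [0.2, 0.2]])           # MA coefficient matrix (illustrative)
      T = 500
      u = rng.standard_normal((T, 2))
      y = np.zeros((T, 2))
      for t in range(1, T):
          y[t] = A @ y[t-1] + u[t] + M @ u[t-1]

      # Stage 1: long VAR(p) by OLS; its residuals proxy the unobserved innovations
      p = 10
      X1 = np.hstack([y[p-j:T-j] for j in range(1, p + 1)])
      B1 = np.linalg.lstsq(X1, y[p:], rcond=None)[0]
      uhat = np.vstack([np.zeros((p, 2)), y[p:] - X1 @ B1])

      # Stage 2: regress y_t on y_{t-1} and the lagged innovation proxies
      X2 = np.hstack([y[p:T-1], uhat[p:T-1]])
      B2 = np.linalg.lstsq(X2, y[p+1:], rcond=None)[0]
      print("AR estimate:\n", B2[:2].T, "\nMA estimate:\n", B2[2:].T)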
  5. By: Helmut Luetkepohl
    Abstract: Vector autoregressive (VAR) models for stationary and integrated variables are reviewed. Model specification and parameter estimation are discussed and various uses of these models for forecasting and economic analysis are considered. For integrated and cointegrated variables it is argued that vector error correction models offer a particularly convenient parameterization both for model specification and for using the models for economic analysis.
    Keywords: VAR, vector autoregressive models
    JEL: C32
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2007/11&r=ecm
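    A minimal sketch of the standard VAR workflow the survey covers (lag selection, estimation, forecasting, impulse responses), using simulated data and the statsmodels VAR interface; the data-generating process and lag settings are illustrative assumptions:

      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(0)
      A = np.array([[0.6, 0.2], [0.1, 0.5]])           # illustrative VAR(1) coefficients
      y = np.zeros((300, 2))
      for t in range(1, 300):
          y[t] = A @ y[t-1] + rng.standard_normal(2)

      res = VAR(y).fit(maxlags=8, ic="aic")            # lag order chosen by AIC
      fcast = res.forecast(y[-res.k_ar:], steps=8)     # point forecasts
      irf = res.irf(10)                                # impulse-response analysis
      print(res.k_ar, fcast.shape, irf.irfs.shape)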
  6. By: Fabio Canova (Universitat Pompeu Fabra); Luca Sala (Innocenzo Gasparini Institute for Economic Research (IGIER) - Università Commerciale Luigi Bocconi)
    Abstract: We investigate identifiability issues in DSGE models and their consequences for parameter estimation and model evaluation when the objective function measures the distance between estimated and model impulse responses. Observational equivalence and partial and weak identification problems are widespread; they lead to biased estimates and unreliable t-statistics, and may induce investigators to select false models. We examine whether different objective functions affect identification and study how small samples interact with parameter and shock identification. We provide diagnostics and tests to detect identification failures and apply them to a state-of-the-art model.
    Keywords: identification, impulse responses, DSGE models, small samples
    JEL: C10 C52 E32 E50
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:0715&r=ecm
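    The kind of identification failure the paper documents can be illustrated with a toy impulse-response-matching objective: if two structural parameters enter the model responses only through their product, the distance criterion has a whole ridge of minimizers and the pair cannot be recovered. This stylized example is not the paper's model:

      import numpy as np

      def model_irf(a, b, horizons=10):
          # hypothetical responses that depend on the parameters only via a*b
          return (a * b) * 0.9 ** np.arange(horizons)

      target = model_irf(0.5, 2.0)                     # "estimated" responses to be matched
      distance = lambda a, b: np.sum((model_irf(a, b) - target) ** 2)
      # every (a, b) with a*b = 1 attains the minimum: observational equivalence
      print([round(distance(a, 1.0 / a), 12) for a in (0.25, 0.5, 1.0, 2.0)])
      print(distance(0.5, 2.1))                        # off the ridge the objective rises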
  7. By: Lonnie Magee
    Abstract: The O(n^-1) bias and O(n^-2) MSE of OLS are derived for iid samples. An approach is suggested for handling nonexistent finite sample moments. Bias corrections based on plug-in, weighting, jackknife and pairs bootstrap methods are equal to O_p(n^-3/2). Sometimes they are effective at lowering bias and MSE, but not always. In simulations, the bootstrap correction removes more bias than the others, but has a higher MSE. A hypothesis test is given for the presence of this bias. The techniques are applied to survey data on food expenditure, and the estimated bias is small and statistically insignificant.
    Keywords: OLS bias; finite sample moments; Nagar approximation; bias correction; pairs bootstrap
    JEL: C13 C29 C49
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:mcm:qseprr:419&r=ecm
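    A generic pairs-bootstrap bias correction for an OLS slope is sketched below. The regressor enters the outcome nonlinearly, so the OLS slope targets the linear-projection coefficient and carries finite-sample bias of order O(n^-1); the correction subtracts the bootstrap estimate of that bias. This is the textbook recipe, not the paper's specific estimator or data:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 25
      x = rng.exponential(size=n)
      y = x**2 + rng.standard_normal(n)      # linear-projection slope is 4 for Exp(1) regressors

      def ols_slope(x, y):
          X = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      beta_hat = ols_slope(x, y)
      boot = []
      for _ in range(999):
          i = rng.integers(0, n, size=n)     # resample (x_i, y_i) pairs with replacement
          boot.append(ols_slope(x[i], y[i]))
      beta_bc = 2 * beta_hat - np.mean(boot) # beta_hat minus (mean(boot) - beta_hat)
      print("OLS slope:", beta_hat, "  bias-corrected:", beta_bc)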
  8. By: Jeong-Ryeol Kurz-Kim; Mico Loretan
    Abstract: Since the seminal work of Mandelbrot (1963), alpha-stable distributions with infinite variance have been regarded as a more realistic distributional assumption than the normal distribution for some economic variables, especially financial data. After providing a brief survey of theoretical results on estimation and hypothesis testing in regression models with infinite-variance variables, we examine the statistical properties of the coefficient of determination in models with alpha-stable variables. If the regressor and error term share the same index of stability alpha<2, the coefficient of determination has a nondegenerate asymptotic distribution on the entire [0, 1] interval, and the density of this distribution is unbounded at 0 and 1. We provide closed-form expressions for the cumulative distribution function and probability density function of this limit random variable. In contrast, if the indices of stability of the regressor and error term are unequal, the coefficient of determination converges in probability to either 0 or 1, depending on which variable has the smaller index of stability. In an empirical application, we revisit the Fama-MacBeth two-stage regression and show that in the infinite-variance case the coefficient of determination of the second-stage regression converges to zero in probability even if the slope coefficient is nonzero.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:895&r=ecm
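    The nondegenerate limit described above is easy to reproduce by simulation: with regressor and error drawn from the same alpha-stable law (alpha < 2), the R^2 of the fitted regression stays spread out over (0, 1) rather than settling at a point, even in large samples. The parameter values below are illustrative:

      import numpy as np
      from scipy.stats import levy_stable

      rng = np.random.default_rng(0)
      alpha, n, reps = 1.5, 2000, 100        # illustrative index of stability and sample sizes
      r2 = []
      for _ in range(reps):
          x = levy_stable.rvs(alpha, 0, size=n, random_state=rng)
          u = levy_stable.rvs(alpha, 0, size=n, random_state=rng)
          y = 1.0 + 0.5 * x + u
          b = np.polyfit(x, y, 1)
          r2.append(1 - np.var(y - np.polyval(b, x)) / np.var(y))
      print("5%, 50%, 95% quantiles of R^2:", np.quantile(r2, [0.05, 0.5, 0.95]))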
  9. By: Konrad Banachewicz (Vrije Universiteit Amsterdam); André Lucas (Vrije Universiteit Amsterdam)
    Abstract: Recent models for credit risk management make use of Hidden Markov Models (HMMs). The HMMs are used to forecast quantiles of corporate default rates. Little research has been done on the quality of such forecasts if the underlying HMM is potentially mis-specified. In this paper, we focus on mis-specification in the dynamics and the dimension of the HMM. We consider both discrete and continuous state HMMs. The differences are substantial. Underestimating the number of discrete states has an economically significant impact on forecast quality. Generally speaking, discrete models underestimate the high-quantile default rate forecasts. Continuous state HMMs, however, vastly overestimate high quantiles if the true HMM has a discrete state space. In the reverse setting, the biases are much smaller, though still substantial in economic terms. We illustrate the empirical differences using U.S. default data.
    Keywords: defaults; Markov switching; misspecification; quantile forecast; Expectation-Maximization; simulated maximum likelihood; importance sampling
    JEL: C53 C22 G32
    Date: 2007–06–13
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070046&r=ecm
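    The object being forecast is easiest to see with a tiny discrete-state example: given filtered regime probabilities today, the one-step-ahead default-rate distribution is a mixture over regimes, and the risk-management quantiles are read off that mixture. All parameter values below are made up for illustration, and the model is taken as known rather than estimated:

      import numpy as np

      rng = np.random.default_rng(0)
      P = np.array([[0.95, 0.05], [0.20, 0.80]])                  # illustrative transition matrix (calm, stress)
      mu, sd = np.array([0.01, 0.05]), np.array([0.005, 0.02])    # illustrative default-rate level by regime

      p_now = np.array([0.7, 0.3])                 # filtered regime probabilities today
      p_next = p_now @ P                           # one-step-ahead regime probabilities
      stress = rng.random(100_000) < p_next[1]     # draw the regime, then the default rate
      draws = np.where(stress, rng.normal(mu[1], sd[1], 100_000),
                               rng.normal(mu[0], sd[0], 100_000))
      print("95% / 99% default-rate quantile forecasts:", np.quantile(draws, [0.95, 0.99]))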
  10. By: Figueiredo, Annibal; Matsushita, Raul; Da Silva, Sergio; Serva, Maurizio; Viswanathan, Gandhi; Nascimento, Cesar; Gleria, Iram
    Abstract: We employ the Levy sections theorem in the analysis of selected dollar exchange rate time series. The theorem is an extension of the classical central limit theorem and offers an alternative to the more usual analysis of the sum variable. We find that the presence of fat tails can be related to the local volatility pattern of the series.
    JEL: C49
    Date: 2007–07–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3810&r=ecm
  11. By: Ricardo Reis; Mark W. Watson
    Abstract: This paper estimates a common component in many price series that has an equiproportional effect on all prices. Changes in this component can be interpreted as changes in the value of the numeraire since, by definition, they leave all relative prices unchanged. The first aim of the paper is to measure these changes. The paper provides a framework for identifying this component, suggests an estimator based on a dynamic factor model, and assesses its performance relative to alternative estimators. Using 187 U.S. time series on prices, we estimate changes in the value of the numeraire from 1960 to 2006, and further decompose these changes into a part that is related to relative price movements and a residual ‘exogenous’ part. The second aim of the paper is to use these estimates to investigate two economic questions. First, we show that the size of exogenous changes in the value of the numeraire helps distinguish between different theories of pricing, and that the U.S. evidence argues against several strict theories of nominal rigidities. Second, we find that changes in the value of the numeraire are significantly related to changes in real quantities, and discuss interpretations of this apparent non-neutrality.
    Keywords: Inflation, Money illusion, Monetary neutrality, Price index
    JEL: E31 C43 C32
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1364&r=ecm
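    A static principal-components approximation conveys the estimation idea: a component that moves all series together can be extracted as the dominant common factor of a large panel. The paper's estimator is a dynamic factor model with its own identification scheme, so the sketch below is only a rough analogue with made-up data:

      import numpy as np

      rng = np.random.default_rng(0)
      T, N = 200, 187
      f = 0.1 * np.cumsum(rng.standard_normal(T))              # latent common "numeraire" component (made up)
      panel = f[:, None] + 0.5 * rng.standard_normal((T, N))   # common effect plus idiosyncratic noise

      x = panel - panel.mean(axis=0)
      U, s, Vt = np.linalg.svd(x, full_matrices=False)
      f_hat = U[:, 0] * s[0] / np.sqrt(N)                      # first principal component
      print(np.corrcoef(f_hat, f)[0, 1])                       # close to +/- 1 (a factor's sign is not identified)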

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.