
New Economics Papers on Econometrics
By:  Davide Ferrari; Sandra Paterlini 
Abstract:  Estimating financial risk is a critical issue for banks and insurance companies. Recently, quantile estimation based on Extreme Value Theory (EVT) has found a successful domain of application in such a context, outperforming other approaches. Given a parametric model provided by EVT, a natural approach is Maximum Likelihood estimation. Although the resulting estimator is asymptotically efficient, the number of observations available to estimate the parameters of EVT models is often too small to make the large-sample properties trustworthy. In this paper, we study a new estimator of the parameters, the Maximum Lq-Likelihood estimator (MLqE), introduced by Ferrari and Yang (2007). We show that the MLqE can outperform the standard MLE when estimating tail probabilities and quantiles of the Generalized Extreme Value (GEV) and the Generalized Pareto (GP) distributions. First, we assess the relative efficiency between the MLqE and the MLE for various sample sizes, using Monte Carlo simulations. Second, we analyze the performance of the MLqE for extreme quantile estimation using real-world financial data. The MLqE is characterized by a distortion parameter q and extends the traditional log-likelihood maximization procedure. When q → 1, the new estimator approaches the traditional Maximum Likelihood Estimator (MLE), recovering its desirable asymptotic properties; when q ≠ 1 and the sample size is moderate or small, the MLqE successfully trades bias for variance, resulting in an overall gain in accuracy (Mean Squared Error).
Keywords:  Maximum Likelihood; Extreme Value Theory; q-Entropy; Tail-related risk measures
JEL:  C13 C22 C51 
Date:  2007–07 
URL:  http://d.repec.org/n?u=RePEc:mod:wcefin:07071&r=ecm 
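The Lq-likelihood maximization described above can be sketched in a few lines. The following is a minimal illustration (not the authors' code): it fits a Generalized Pareto sample by maximizing the sum of Lq(f(x_i; θ)), where Lq(u) = (u^(1-q) - 1)/(1-q) reduces to log(u) as q → 1; the sample size, starting values, and q = 0.9 are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy import stats, optimize

def lq(u, q):
    """Lq 'distorted logarithm' of Ferrari and Yang: reduces to log(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(u)
    return (u ** (1.0 - q) - 1.0) / (1.0 - q)

def mlq_fit_gp(x, q):
    """Maximize the Lq-likelihood of a Generalized Pareto sample over
    (shape c, log-scale); returns the scipy optimizer result."""
    def neg_lq_lik(theta):
        c, log_s = theta
        dens = stats.genpareto.pdf(x, c, scale=np.exp(log_s))
        if np.any(dens <= 0):      # support violation -> heavy penalty
            return 1e10
        return -np.sum(lq(dens, q))
    return optimize.minimize(neg_lq_lik, x0=[0.1, 0.0], method="Nelder-Mead")

rng = np.random.default_rng(0)
x = stats.genpareto.rvs(0.3, scale=1.0, size=50, random_state=rng)  # small sample
res_mle = mlq_fit_gp(x, q=1.0)   # q = 1: ordinary MLE
res_mlq = mlq_fit_gp(x, q=0.9)   # q < 1: distorted, trades bias for variance
c_hat, s_hat = res_mlq.x[0], np.exp(res_mlq.x[1])
```

Setting q = 1 recovers the usual MLE, so the two fits can be compared on the same sample; the paper's point is that for small n the q ≠ 1 fit can have lower MSE for tail quantities.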
By:  Jean-Francois Richard; Wei Zhang
Abstract:  The paper describes a simple, generic and yet highly accurate Efficient Importance Sampling (EIS) Monte Carlo (MC) procedure for the evaluation of high-dimensional numerical integrals. EIS is based upon a sequence of auxiliary weighted regressions which actually are linear under appropriate conditions. It can be used to evaluate likelihood functions and by-products thereof, such as ML estimators, for models which depend upon unobservable variables. A dynamic stochastic volatility model and a logit panel data model with unobserved heterogeneity (random effects) in both dimensions are used to provide illustrations of EIS high numerical accuracy, even with a small number of MC draws. MC simulations are used to characterize the finite sample numerical and statistical properties of EIS-based ML estimators.
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:321&r=ecm 
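The mechanics of the auxiliary-regression step can be conveyed in one dimension. The sketch below (an illustration under simplifying assumptions, not the paper's algorithm) estimates ∫ exp(-x⁴/2) dx: a linear regression of the log-integrand on (1, x, x²) defines a fitted Gaussian kernel, which is then used as the importance sampler.

```python
import numpy as np

# Toy integrand: phi(x) = exp(-x**4 / 2); target is I = integral of phi over R.
def phi(x):
    return np.exp(-0.5 * x ** 4)

rng = np.random.default_rng(1)

# Step 1: draw from an initial Gaussian sampler N(0, 1)
x = rng.normal(0.0, 1.0, size=2000)

# Step 2: auxiliary *linear* regression of log phi on (1, x, x^2);
# the fitted quadratic defines the implied EIS Gaussian sampler
X = np.column_stack([np.ones_like(x), x, x ** 2])
c = np.linalg.lstsq(X, np.log(phi(x)), rcond=None)[0]
s2 = -1.0 / (2.0 * c[2])    # implied sampler variance
m = c[1] * s2               # implied sampler mean

# Step 3: importance-sampling estimate with the optimized sampler
x = rng.normal(m, np.sqrt(s2), size=20000)
g = np.exp(-0.5 * (x - m) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
est = np.mean(phi(x) / g)

# Fine-grid quadrature reference for checking accuracy
grid = np.linspace(-5.0, 5.0, 20001)
ref = np.sum(phi(grid)) * (grid[1] - grid[0])
```

A full EIS implementation iterates this regression step with common random numbers and applies it period by period within a state-space model; this single-step scalar version only shows why the auxiliary regressions are linear least-squares problems.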
By:  Christian Conrad (KOF Swiss Economic Institute, ETH Zurich Switzerland) 
Abstract:  In this article we derive conditions which ensure the non-negativity of the conditional variance in the Hyperbolic GARCH(p, d, q) (HYGARCH) model of Davidson (2004). The conditions are necessary and sufficient for p < 2 and sufficient for p > 2, and emerge as natural extensions of the inequality constraints derived in Nelson and Cao (1992) for the GARCH model and in Conrad and Haag (2006) for the FIGARCH model. As a by-product we obtain a representation of the ARCH(∞) coefficients which allows computationally efficient multi-step-ahead forecasting of the conditional variance of a HYGARCH process. We also relate the necessary and sufficient parameter set of the HYGARCH to the necessary and sufficient parameter sets of its GARCH and FIGARCH components. Finally, we analyze the effects of erroneously fitting a FIGARCH model to a data sample which was truly generated by a HYGARCH process. An empirical application of the HYGARCH(1, d, 1) model to daily NYSE data illustrates the importance of our results.
Keywords:  Inequality constraints, fractional integration, long memory GARCH processes 
JEL:  C22 C52 C53 
Date:  2007–04 
URL:  http://d.repec.org/n?u=RePEc:kof:wpskof:07162&r=ecm 
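The ARCH(∞) representation mentioned in the abstract can be computed by truncated polynomial arithmetic. The sketch below (a minimal illustration, not the paper's code) expands the HYGARCH(1, d, 1) weights λ(L) = 1 - (1 - βL)⁻¹(1 - φL)[1 + α((1 - L)^d - 1)] and checks their non-negativity; setting α = 1 recovers FIGARCH(1, d, 1), for which λ₁ = d + φ - β is known in closed form and serves as a sanity check. The parameter values are arbitrary.

```python
import numpy as np

def hygarch_arch_inf(d, phi, beta, alpha, n=200):
    """First n ARCH(inf) coefficients lambda_k of a HYGARCH(1,d,1):
    lambda(L) = 1 - (1-beta*L)^{-1} (1-phi*L) [1 + alpha*((1-L)^d - 1)].
    alpha = 1 recovers the FIGARCH(1,d,1) model."""
    # Coefficients of (1-L)^d: psi_0 = 1, psi_k = psi_{k-1} * (k-1-d)/k
    psi = np.ones(n + 1)
    for k in range(1, n + 1):
        psi[k] = psi[k - 1] * (k - 1 - d) / k
    g = alpha * psi                 # 1 + alpha*((1-L)^d - 1)
    g[0] = 1.0
    h = g.copy()                    # multiply by (1 - phi*L)
    h[1:] -= phi * g[:-1]
    binv = beta ** np.arange(n + 1) # expansion of (1 - beta*L)^{-1}
    p = np.convolve(binv, h)[: n + 1]
    lam = -p
    lam[0] = 0.0                    # lambda_0 = 1 - p_0 = 0 by construction
    return lam

lam = hygarch_arch_inf(d=0.4, phi=0.2, beta=0.5, alpha=0.9)
nonneg = np.all(lam[1:] >= 0)       # non-negativity check on the truncation
lam_fig = hygarch_arch_inf(d=0.4, phi=0.2, beta=0.5, alpha=1.0)  # FIGARCH case
```

Once the λ_k are in hand, multi-step-ahead variance forecasts follow by iterating σ²_t = ω* + Σ λ_k ε²_{t-k}, which is the computational point the paper makes.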
By:  Christian Kascha 
Abstract:  Classical Gaussian maximum likelihood estimation of mixed vector autoregressive moving-average models is plagued with various numerical problems and has been considered difficult by many applied researchers. These disadvantages may have contributed to the dominant use of vector autoregressive models in macroeconomic research. Therefore, several other, simpler estimation methods have been proposed in the literature. In this paper these methods are compared by means of a Monte Carlo study. Different evaluation criteria are used to judge the relative performance of the algorithms.
Keywords:  VARMA Models, Estimation Algorithms, Forecasting 
JEL:  C32 C15 C63 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2007/12&r=ecm 
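A representative member of the class of simple regression-based alternatives to Gaussian ML is the Hannan-Rissanen two-stage procedure. The sketch below illustrates it for a scalar ARMA(1,1) (the paper's setting is the vector case; this univariate version, with arbitrary parameter values, only shows the mechanics): a long autoregression first proxies the unobserved innovations, then OLS on lagged y and lagged residuals delivers the ARMA coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi, theta = 2000, 0.5, 0.3

# Simulate an ARMA(1,1): y_t = phi*y_{t-1} + e_t + theta*e_{t-1}
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

# Stage 1: long autoregression (order p) to proxy the unobserved innovations
p = 10
X1 = np.column_stack([y[p - j : n - j] for j in range(1, p + 1)])
a = np.linalg.lstsq(X1, y[p:], rcond=None)[0]
ehat = np.zeros(n)
ehat[p:] = y[p:] - X1 @ a

# Stage 2: OLS of y_t on y_{t-1} and the lagged residual proxy ehat_{t-1}
X2 = np.column_stack([y[p : n - 1], ehat[p : n - 1]])
phi_hat, theta_hat = np.linalg.lstsq(X2, y[p + 1 :], rcond=None)[0]
```

Both stages are ordinary least-squares problems, which is exactly why such methods avoid the numerical difficulties of full ML; the Monte Carlo comparisons in the paper quantify what this convenience costs in efficiency.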
By:  Helmut Luetkepohl 
Abstract:  Vector autoregressive (VAR) models for stationary and integrated variables are reviewed. Model specification and parameter estimation are discussed and various uses of these models for forecasting and economic analysis are considered. For integrated and cointegrated variables it is argued that vector error correction models offer a particularly convenient parameterization both for model specification and for using the models for economic analysis. 
Keywords:  VAR, vector autoregressive models 
JEL:  C32 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2007/11&r=ecm 
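For stationary variables, the parameter estimation reviewed above reduces to equation-by-equation OLS. A minimal sketch (illustrative values, not from the paper) for a bivariate VAR(1):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])   # stable VAR(1) coefficient matrix (eigenvalues < 1)
n = 1000
y = np.zeros((n, 2))
for t in range(1, n):        # simulate y_t = A y_{t-1} + u_t
    y[t] = A @ y[t - 1] + rng.normal(size=2)

# Equation-by-equation OLS: regress y_t on y_{t-1}; lstsq handles both
# equations at once because the regressors are the same in each equation
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

With integrated and cointegrated variables, the same least-squares logic is applied to the vector error correction form rather than the levels VAR, which is the parameterization the abstract recommends.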
By:  Fabio Canova (Universitat Pompeu Fabra); Luca Sala (Innocenzo Gasparini Institute for Economic Research (IGIER)  Università Commerciale Luigi Bocconi) 
Abstract:  We investigate identifiability issues in DSGE models and their consequences for parameter estimation and model evaluation when the objective function measures the distance between estimated and model impulse responses. Observational equivalence, partial and weak identification problems are widespread and they lead to biased estimates, unreliable t-statistics and may induce investigators to select false models. We examine whether different objective functions affect identification and study how small samples interact with parameters and shock identification. We provide diagnostics and tests to detect identification failures and apply them to a state-of-the-art model.
Keywords:  identification, impulse responses, DSGE models, small samples 
JEL:  C10 C52 E32 E50 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:bde:wpaper:0715&r=ecm 
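The impulse-response-matching objective at the center of the abstract can be written down in a toy case. The sketch below (a deliberately simple stand-in for a DSGE model, with made-up numbers) matches the IRF of an AR(1) with parameter ρ; plotting the objective over the grid is exactly the kind of diagnostic that reveals a flat or multi-modal surface, i.e. weak identification.

```python
import numpy as np

rng = np.random.default_rng(4)
H = 20                       # number of impulse-response horizons to match
rho_true = 0.7

# "Data" impulse responses: the true AR(1) IRF rho^h plus sampling noise
irf_data = rho_true ** np.arange(H) + 0.02 * rng.normal(size=H)

def distance(rho):
    """Quadratic distance between model IRF and data IRF."""
    return np.sum((rho ** np.arange(H) - irf_data) ** 2)

# Minimize the objective over a grid of candidate parameters
grid = np.linspace(0.01, 0.99, 981)
vals = np.array([distance(r) for r in grid])
rho_hat = grid[np.argmin(vals)]
```

In this scalar example the objective is well-behaved; the paper's point is that in multi-parameter DSGE settings the analogous surface can be nearly flat in some directions, so the minimizer is poorly pinned down.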
By:  Lonnie Magee 
Abstract:  The O(n^-1) bias and O(n^-2) MSE of OLS are derived for iid samples. An approach is suggested for handling nonexistent finite sample moments. Bias corrections based on plug-in, weighting, jackknife and pairs bootstrap methods are equal up to Op(n^-3/2). Sometimes they are effective at lowering bias and MSE, but not always. In simulations, the bootstrap correction removes more bias than the others, but has a higher MSE. A hypothesis test is given for the presence of this bias. The techniques are applied to survey data on food expenditure, where the estimated bias is small and statistically insignificant.
Keywords:  OLS bias; finite sample moments; Nagar approximation; bias correction; pairs bootstrap 
JEL:  C13 C29 C49 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:mcm:qseprr:419&r=ecm 
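The pairs bootstrap bias correction mentioned in the abstract has a simple generic form: estimate the bias as the mean of the bootstrap replications minus the original estimate, then subtract it. The sketch below (an illustration, not Magee's design) applies it to an iid sample where the OLS slope targets the linear-projection coefficient; for x unit-exponential and y = x², that coefficient is Cov(x, x²)/Var(x) = 4, and the sample slope carries an O(1/n) bias.

```python
import numpy as np

rng = np.random.default_rng(5)
n, B = 40, 500
x = rng.exponential(size=n)
y = x ** 2                    # the linear projection of y on x has slope 4

def ols_slope(x, y):
    xc = x - x.mean()
    return np.sum(xc * y) / np.sum(xc ** 2)

b_hat = ols_slope(x, y)

# Pairs bootstrap: resample (x_i, y_i) jointly and re-estimate the slope
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols_slope(x[idx], y[idx])

bias_est = boot.mean() - b_hat   # bootstrap estimate of the O(1/n) bias
b_corrected = b_hat - bias_est   # equivalently 2*b_hat - boot.mean()
```

Resampling pairs rather than residuals keeps the regressor stochastic, which is essential here because the bias arises precisely from the randomness of x; as the abstract notes, the correction lowers bias but can raise MSE.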
By:  Jeong-Ryeol Kurz-Kim; Mico Loretan
Abstract:  Since the seminal work of Mandelbrot (1963), alpha-stable distributions with infinite variance have been regarded as a more realistic distributional assumption than the normal distribution for some economic variables, especially financial data. After providing a brief survey of theoretical results on estimation and hypothesis testing in regression models with infinite-variance variables, we examine the statistical properties of the coefficient of determination in models with alpha-stable variables. If the regressor and error term share the same index of stability alpha < 2, the coefficient of determination has a non-degenerate asymptotic distribution on the entire [0, 1] interval, and the density of this distribution is unbounded at 0 and 1. We provide closed-form expressions for the cumulative distribution function and probability density function of this limit random variable. In contrast, if the indices of stability of the regressor and error term are unequal, the coefficient of determination converges in probability to either 0 or 1, depending on which variable has the smaller index of stability. In an empirical application, we revisit the Fama-MacBeth two-stage regression and show that in the infinite-variance case the coefficient of determination of the second-stage regression converges to zero in probability even if the slope coefficient is nonzero.
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:895&r=ecm 
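The equal-index case is easy to explore by simulation. The sketch below (illustrative, with arbitrary α = 1.5 and slope) draws symmetric alpha-stable variates via the classical Chambers-Mallows-Stuck formula and records the R² of an OLS regression across replications; the wide spread over [0, 1] reflects the non-degenerate limit distribution described in the abstract.

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable draws via the Chambers-Mallows-Stuck formula."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(6)
alpha, n, slope = 1.5, 1000, 1.0
r2s = []
for _ in range(200):
    x = sym_stable(alpha, n, rng)       # regressor and error share index alpha
    e = sym_stable(alpha, n, rng)
    y = slope * x + e
    xc, yc = x - x.mean(), y - y.mean()
    b = np.sum(xc * yc) / np.sum(xc ** 2)
    r2 = 1.0 - np.sum((yc - b * xc) ** 2) / np.sum(yc ** 2)
    r2s.append(r2)
r2s = np.array(r2s)
```

Setting different indices for x and e instead would show R² piling up near 0 or 1, the degenerate case the abstract contrasts with.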
By:  Konrad Banachewicz (Vrije Universiteit Amsterdam); André Lucas (Vrije Universiteit Amsterdam) 
Abstract:  Recent models for credit risk management make use of Hidden Markov Models (HMMs). The HMMs are used to forecast quantiles of corporate default rates. Little research has been done on the quality of such forecasts if the underlying HMM is potentially misspecified. In this paper, we focus on misspecification in the dynamics and the dimension of the HMM. We consider both discrete and continuous state HMMs. The differences are substantial. Underestimating the number of discrete states has an economically significant impact on forecast quality. Generally speaking, discrete models underestimate the high-quantile default rate forecasts. Continuous state HMMs, however, vastly overestimate high quantiles if the true HMM has a discrete state space. In the reverse setting, the biases are much smaller, though still substantial in economic terms. We illustrate the empirical differences using U.S. default data.
Keywords:  defaults; Markov switching; misspecification; quantile forecast; Expectation-Maximization; simulated maximum likelihood; importance sampling
JEL:  C53 C22 G32 
Date:  2007–06–13 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20070046&r=ecm 
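The object being forecast above, a high quantile of next period's default rate under a discrete-state HMM, can be illustrated with a two-state toy model. The sketch below uses made-up transition probabilities and default intensities, and assumes the current state is known (in practice it would be filtered, e.g. via the EM machinery listed in the keywords):

```python
import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.95, 0.05],     # transition matrix: calm vs. stress state
              [0.20, 0.80]])
lam = np.array([0.01, 0.06])    # state-dependent default probability per firm
n_firms, T = 500, 400

# Simulate the hidden chain and binomial default counts
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
defaults = rng.binomial(n_firms, lam[s])
rates = defaults / n_firms

# One-step-ahead forecast distribution of the default rate given today's
# state, and its 99% quantile (the tail object credit models care about)
today = s[-1]
next_states = rng.choice(2, p=P[today], size=10000)
sim = rng.binomial(n_firms, lam[next_states]) / n_firms
q99 = np.quantile(sim, 0.99)
```

Collapsing the two states into one (the underestimated-dimension case the abstract studies) would pool the two intensities and pull q99 down, which is exactly the "discrete models underestimate high quantiles" finding.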
By:  Figueiredo, Annibal; Matsushita, Raul; Da Silva, Sergio; Serva, Maurizio; Viswanathan, Gandhi; Nascimento, Cesar; Gleria, Iram 
Abstract:  We employ the Lévy sections theorem in the analysis of selected dollar exchange rate time series. The theorem is an extension of the classical central limit theorem and offers an alternative to the usual analysis of the sum variable. We find that the presence of fat tails can be related to the local volatility pattern of the series.
JEL:  C49 
Date:  2007–07–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3810&r=ecm 
By:  Ricardo Reis; Mark W. Watson 
Abstract:  This paper estimates a common component in many price series that has an equiproportional effect on all prices. Changes in this component can be interpreted as changes in the value of the numeraire since, by definition, they leave all relative prices unchanged. The first aim of the paper is to measure these changes. The paper provides a framework for identifying this component, suggests an estimator for the component based on a dynamic factor model, and assesses its performance relative to alternative estimators. Using 187 U.S. time series on prices, we estimate changes in the value of the numeraire from 1960 to 2006, and further decompose these changes into a part that is related to relative price movements and a residual ‘exogenous’ part. The second aim of the paper is to use these estimates to investigate two economic questions. First, we show that the size of exogenous changes in the value of the numeraire helps distinguish between different theories of pricing, and that the U.S. evidence argues against several strict theories of nominal rigidities. Second, we find that changes in the value of the numeraire are significantly related to changes in real quantities, and discuss interpretations of this apparent non-neutrality.
Keywords:  Inflation, Money illusion, Monetary neutrality, Price index 
JEL:  E31 C43 C32 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:kie:kieliw:1364&r=ecm 
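The idea of extracting an equiproportional common component from a price panel can be sketched with a static principal-components stand-in for the paper's dynamic factor model (the numbers, loadings, and noise level below are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
T, N = 200, 50
f = rng.normal(size=T)            # common "numeraire" shock
loadings = np.ones(N)             # equiproportional: same loading on every price
# Panel of inflation rates: common component plus idiosyncratic relative-price noise
pi = np.outer(f, loadings) + 0.5 * rng.normal(size=(T, N))

# First principal component of the demeaned panel as a simple factor estimate
Z = pi - pi.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
f_hat = Z @ Vt[0]
f_hat /= np.linalg.norm(f_hat)

# How well the static estimate tracks the true common shock
corr = abs(np.corrcoef(f_hat, f)[0, 1])
```

Because every series loads equally on the factor, the cross-section averages out the idiosyncratic (relative-price) noise; the paper's dynamic factor estimator refines this by exploiting the time-series dynamics as well.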