nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒12‒19
Thirty-two papers chosen by
Sune Karlsson
Orebro University

  1. Multivariate portmanteau test for structural VARMA models with uncorrelated but non-independent error terms By Boubacar Mainassara, Yacouba
  2. Covariate Measurement Error: Bias Reduction under Response-based Sampling By Esmeralda Ramalho
  3. Bayesian estimation of an extended local scale stochastic volatility model By Philippe J. Deschamps
  4. Detecting Common Dynamics in Transitory Components By Tim M Christensen; Stan Hurn; Adrian Pagan
  5. Precise finite-sample quantiles of the Jarque-Bera adjusted Lagrange multiplier test By Wuertz, Diethelm; Katzgraber, Helmut
  6. Characteristic function estimation of Ornstein-Uhlenbeck-based stochastic volatility models. By Emanuele Taufer; Nikolai Leonenko; Marco Bee
  7. Detrending Bootstrap Unit Root Tests By Smeekes Stephan
  8. Monitoring Structural Changes in Regression with Long Memory Processes By Wen-Jen Tsay
  9. Identification of lagged duration dependence in multiple-spell competing risks models. By Horny, G.; Picchio, M.
  10. Testing for Common Autocorrelation in Data Rich Environments By Gianluca Cubadda; Alain Hecq
  11. Bias-Corrected Realized Variance under Dependent Microstructure Noise By Kosuke Oya
  12. A solution to the problem of too many instruments in dynamic panel data GMM By Mehrhoff, Jens
  13. A Simple Approximation for Bivariate Normal Integral Based on Error Function and its Application on Probit Model with Binary Endogenous Regressor By Wen-Jen Tsay; Peng-Hsuan Ke
  14. Maximum Likelihood Estimation of Censored Stochastic Frontier Models: An Application to the Three-Stage DEA Method By Wen-Jen Tsay; Cliff J. Huang; Tsu-Tan Fu; I-Lin Ho
  15. Combining VAR and DSGE forecast densities By Ida Wolden Bache; Anne Sofie Jore; James Mitchell; Shaun P. Vahey
  16. Tails of correlation mixtures of elliptical copulas By Hans Manner; Johan Segers
  17. Forecasting Euro-area recessions using time-varying binary response models for financial variables. By Bellégo, C.; Ferrara, L.
  18. "Realized Volatility Risk" By David E. Allen; Michael McAleer; Marcel Scharth
  19. Forecasting the US Real House Price Index: Structural and Non-Structural Models with and without Fundamentals By Rangan Gupta; Alain Kabundi; Stephen M. Miller
  20. High-Frequency and Model-Free Volatility Estimators By Robert Ślepaczuk; Grzegorz Zakrzewski
  21. Impulse Response Identification in DSGE Models By Martin Fukac
  22. Measuring output gap uncertainty By Anthony Garratt; James Mitchell; Shaun P. Vahey
  23. Directional Prediction of Returns under Asymmetric Loss: Direct and Indirect Approaches By Stanislav Anatolyev; Natalia Kryzhanovskaya
  24. Putting the New Keynesian DSGE model to the real-time forecasting test. By Marcin Kolasa; Michał Rubaszek; Paweł Skrzypczyński
  25. A duality approach to the worst case value at risk for a sum of dependent random variables with known covariances By Brice Franke; Michael Stolz
  26. A unit-error theory for register-based household statistics By Li-Chun Zhang
  27. A mixed splicing procedure for economic time series By Ángel de la Fuente
  28. Moments of the generalized hyperbolic distribution By Scott, David J; Würtz, Diethelm; Dong, Christine; Tran, Thanh Tam
  29. The first passage event for sums of dependent Lévy processes with applications to insurance risk By Irmingard Eder; Claudia Klüppelberg
  30. Measuring the Euro area output gap using multivariate unobserved components models containing phase shifts By Xiaoshan Chen; Terence C. Mills
  31. The Bivariate Normal Copula By Christian Meyer
  32. High and Low Frequency Correlations in Global Equity Markets By Robert F. Engle; José Gonzalo Rangel

  1. By: Boubacar Mainassara, Yacouba
    Abstract: We consider portmanteau tests for testing the adequacy of vector autoregressive moving-average (VARMA) models under the assumption that the errors are uncorrelated but not necessarily independent. We relax the standard independence assumption to extend the range of application of VARMA models, allowing them to cover linear representations of general nonlinear processes. We first study the joint distribution of the quasi-maximum likelihood estimator (QMLE) or the least squares estimator (LSE) and the noise empirical autocovariances. We then derive the asymptotic distribution of residual empirical autocovariances and autocorrelations under weak assumptions on the noise. We deduce the asymptotic distribution of the Ljung-Box (or Box-Pierce) portmanteau statistics for VARMA models with non-independent innovations. In the standard framework (i.e. under iid assumptions on the noise), it is known that the asymptotic distribution of the portmanteau tests is that of a weighted sum of independent chi-squared random variables. This asymptotic distribution can be quite different when the independence assumption is relaxed. Consequently, the usual chi-squared distribution does not provide an adequate approximation to the distribution of the Box-Pierce goodness-of-fit portmanteau test. Hence we propose a method to adjust the critical values of the portmanteau tests. Monte Carlo experiments illustrate the finite-sample performance of the modified portmanteau test.
    Keywords: Goodness-of-fit test; QMLE/LSE; Box-Pierce and Ljung-Box portmanteau tests; residual autocorrelation; Structural representation; weak VARMA models
    JEL: C32 C3 C02 C01
    Date: 2009–12–07
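For readers who want to reproduce the basic statistic under discussion, here is a minimal, self-contained sketch of the Ljung-Box portmanteau statistic (the textbook formula only; the paper's adjusted critical values for weak VARMA models are not reproduced here):

```python
import math
import random

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
    return ck / c0

def ljung_box(x, h):
    """Ljung-Box portmanteau statistic Q(h) = n(n+2) sum_{k=1}^h r_k^2 / (n-k)."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, h + 1))

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(500)]
q = ljung_box(noise, 10)
```

Under iid noise, Q(10) is conventionally compared with a chi-squared(10) critical value (about 18.31 at the 5% level); the paper's point is precisely that this comparison can be misleading when the noise is merely uncorrelated.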
  2. By: Esmeralda Ramalho (Universidade de Evora, Departamento de Economia, CEFAGE-UE)
    Abstract: In this paper we propose a general framework to deal with the presence of covariate measurement error (CME) in response-based (RB) samples. Using Chesher’s (1991) methodology, we obtain a small error variance approximation for the contaminated sampling distributions that characterise RB samples with CME. Then, following Chesher (2000), we develop generalised method of moments (GMM) estimators that reduce the bias of the best-known likelihood-based estimators for RB samples which ignore the existence of CME, and derive a score test to detect the presence of this type of measurement error. Our approach only requires the specification of the conditional distribution of the response variable given the latent covariates and the classical additive measurement error model assumption; information on the marginal probabilities of the strata in the population and on the variance of the measurement error is not essential. Monte Carlo evidence is presented which suggests that, in RB samples of moderate sizes, the bias-reduced GMM estimators perform well.
    Keywords: Response-based samples; Covariate measurement error; Generalized method of moments estimation; Score tests.
    JEL: C51 C52
    Date: 2009
  3. By: Philippe J. Deschamps (Department of Quantitative Economics)
    Abstract: A new version of the local scale model of Shephard (1994) is presented. Its features are identically distributed evolution equation disturbances, the incorporation of in-the-mean effects, and the incorporation of variance regressors. A Bayesian posterior simulator and an exact simulation smoother are presented. The model is applied to simulated data and to publicly available exchange rate and asset return data. Simulation smoothing turns out to be essential for the accurate interval estimation of volatilities. Bayes factors show that the new model is competitive with GARCH and Lognormal stochastic volatility formulations. Its forecasting performance is comparable to GARCH.
    Keywords: State space models; Markov chain Monte Carlo; simulation smoothing; generalized error distribution; generalized t distribution
    JEL: C11 C13 C15 C22
    Date: 2009–08–04
  4. By: Tim M Christensen (Yale); Stan Hurn (QUT); Adrian Pagan (QUT)
    Abstract: This paper considers VAR/VECM models for variables exhibiting cointegration and common features in the transitory components. While the presence of cointegration reduces the rank of the long-run multiplier matrix, other types of common features lead to rank reduction in the short-run dynamics. These common transitory components arise when linear combinations of the first-differenced variables in a cointegrated VAR are white noise. This paper offers a reinterpretation of the traditional approach to testing for common feature dynamics, namely checking for a singular covariance matrix for the transitory components. Instead, the matrix of short-run coefficients becomes the focus of the testing procedure, thus allowing a wide range of tests for reduced rank in parameter matrices to be potentially relevant tests of common transitory components. The performance of the different methods is illustrated in a Monte Carlo analysis which is then used to reexamine an existing empirical study. Finally, this approach is applied to analyze whether one would observe common dynamics in standard DSGE models.
    Keywords: Transitory components, common features, reduced rank, cointegration.
    JEL: C14 C52
    Date: 2009–11–17
  5. By: Wuertz, Diethelm; Katzgraber, Helmut
    Abstract: It is well known that the finite-sample null distribution of the Jarque-Bera Lagrange Multiplier (LM) test for normality and its adjusted version (ALM) introduced by Urzua differ considerably from their asymptotic χ²(2) limit. Here, we present results from Monte Carlo simulations using 10^7 replications which yield very precise numbers for the LM and ALM statistics over a wide range of critical values and sample sizes. Depending on the sample size and the value of the statistic, we obtain p values which significantly deviate from numbers previously published and used in hypothesis tests in many statistical software packages. The p values listed in this short Letter enable for the first time a precise implementation of the Jarque-Bera LM and ALM tests for finite samples.
    Keywords: Jarque-Bera; Lagrange Multiplier
    JEL: C12
    Date: 2009–12–11
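The Jarque-Bera LM statistic being tabulated can be computed directly from sample moments; a minimal sketch (the paper's precise finite-sample quantiles are not reproduced here):

```python
import random

def jarque_bera(x):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4), with sample skewness S and kurtosis K."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    s = m3 / m2 ** 1.5        # skewness (0 for a normal distribution)
    k = m4 / m2 ** 2          # kurtosis (3 for a normal distribution)
    return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(1000)]
jb = jarque_bera(sample)
```

Asymptotically JB is chi-squared(2) under normality; the paper's tabulations show how far the finite-sample null distribution departs from that limit.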
  6. By: Emanuele Taufer (DISA, Faculty of Economics, Trento University); Nikolai Leonenko; Marco Bee
    Abstract: Continuous-time stochastic volatility models are becoming increasingly popular in finance because of their flexibility in accommodating most stylized facts of financial time series. However, their estimation is difficult because the likelihood function does not have a closed-form expression. In this paper we propose a characteristic function-based estimation method for non-Gaussian Ornstein-Uhlenbeck-based stochastic volatility models. After deriving explicit expressions of the characteristic functions for various cases of interest we analyze the asymptotic properties of the estimators and evaluate their performance by means of a simulation experiment. Finally, a real-data application shows that the superposition of two Ornstein-Uhlenbeck processes gives a good approximation to the dependence structure of the process.
    Keywords: Ornstein-Uhlenbeck process; Lévy process; stochastic volatility; characteristic function estimation
    Date: 2009–12
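As background for the model class, here is a sketch of exact simulation of a Gaussian Ornstein-Uhlenbeck process (the paper's models are Lévy-driven and non-Gaussian; the Gaussian special case is shown only for intuition, and all parameter values below are illustrative):

```python
import math
import random

def simulate_ou(x0, theta, mu, sigma, dt, n, rng):
    """Exact discretization of dX_t = theta*(mu - X_t) dt + sigma dW_t."""
    x = [x0]
    a = math.exp(-theta * dt)
    # One-step conditional variance of the exact transition density
    var = sigma ** 2 * (1.0 - a ** 2) / (2.0 * theta)
    for _ in range(n):
        x.append(mu + a * (x[-1] - mu) + math.sqrt(var) * rng.gauss(0.0, 1.0))
    return x

rng = random.Random(42)
path = simulate_ou(x0=0.5, theta=2.0, mu=0.0, sigma=0.3, dt=0.01, n=5000, rng=rng)
# The stationary distribution is N(mu, sigma^2/(2*theta)); here variance 0.0225.
```

In the paper's setting the driving noise is a Lévy subordinator and the OU process models the (unobserved) variance, which is why characteristic-function methods replace the unavailable likelihood.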
  7. By: Smeekes Stephan (METEOR)
    Abstract: The role of detrending in bootstrap unit root tests is investigated. When bootstrapping, detrending must not only be done for the construction of the test statistic, but also in the first step of the bootstrap algorithm. It is argued that the two points should be treated separately. Asymptotic validity of sieve bootstrap ADF unit root tests is shown for test statistics based on full sample and recursive OLS and GLS detrending. It is also shown that the detrending method in the first step of the bootstrap may differ from the one used in the construction of the test statistic. A simulation study is conducted to analyze the effects of detrending on finite sample performance of the bootstrap test. It is found that full sample detrending should be preferred in the first step of the bootstrap algorithm and that the decision about the detrending method used to obtain the test statistic should be based on the power properties of the corresponding asymptotic tests.
    Keywords: econometrics;
    Date: 2009
  8. By: Wen-Jen Tsay (Institute of Economics, Academia Sinica, Taipei, Taiwan)
    Abstract: This paper extends the fluctuation monitoring test of Chu et al. (1996) to the regression model involving stationary or nonstationary long memory regressors and errors by proposing two innovative on-line detectors. In spite of the general framework covered by these detectors, their computational cost is extremely mild in that they do not depend on the bootstrap procedure and do not involve the difficult issues of choosing a kernel function, a bandwidth parameter, or an autoregressive lag length for the long-run variance estimation. Moreover, under suitable regularity conditions and the null hypothesis of no structural change, the asymptotic distributions of these two detectors are identical to that of the corresponding counterpart considered in Chu et al. (1996), where short memory processes are considered.
    Keywords: Structural stability, Long memory process, Fluctuation monitoring
    Date: 2009–08
  9. By: Horny, G.; Picchio, M.
    Abstract: We show that lagged duration dependence is non-parametrically identified in mixed proportional hazard models for duration data, in the presence of competing risks and consecutive spells.
    Keywords: lagged duration dependence, competing risks, mixed proportional hazard models, identification.
    JEL: C14 C41
    Date: 2009
  10. By: Gianluca Cubadda (Faculty of Economics, University of Rome "Tor Vergata"); Alain Hecq (Maastricht University)
    Abstract: This paper proposes a strategy to detect the presence of common serial correlation in high-dimensional systems. We show by simulations that univariate autocorrelation tests on the factors obtained by partial least squares outperform traditional tests based on canonical correlations.
    Keywords: Serial correlation common feature; high-dimensional systems; partial least squares.
    JEL: C32
    Date: 2009–12–04
  11. By: Kosuke Oya (Graduate School of Economics, Osaka University, Toyonaka, Osaka, Japan. Japan Science and Technology Agency, CREST, Toyonaka , Osaka, Japan.)
    Abstract: The aim of this study is to develop a bias-correction method for realized variance (RV) estimation, where the equilibrium price process is contaminated with market microstructure noise, such as bid-ask bounces and the discreteness of price changes. Though RV constitutes the simplest estimator of daily integrated variance, it remains strongly biased, and many estimators proposed in previous studies require prior knowledge about the dependence structure of microstructure noise to ensure unbiasedness and consistency. However, the dependence structure is unknown and must be estimated. A bias-correction method based on statistical inference from the general noise dependence structure is thus proposed. The results of Monte Carlo simulation indicate that the new approach is robust with respect to changes in the dependence of microstructure noise.
    Keywords: Realized variance; Dependent microstructure noise; Two-time scales
    JEL: C01 C13 C51
    Date: 2009–11
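The naive realized variance and the classical two-time-scale correction referred to in the keywords can be sketched as follows (an illustration under simulated iid noise; the paper's own correction, which estimates a general noise dependence structure, is not reproduced here):

```python
import random

def realized_variance(returns):
    """Naive RV: the sum of squared intraday returns (upward biased under noise)."""
    return sum(r * r for r in returns)

def two_scale_rv(returns, k):
    """Two-time-scale RV in the spirit of Zhang, Mykland and Ait-Sahalia (2005):
    average the RV of k-period returns over k offset grids, then subtract a
    fast-scale correction for the microstructure-noise bias."""
    n = len(returns)
    slow = 0.0
    for offset in range(k):
        grid = [sum(returns[i:i + k]) for i in range(offset, n - k + 1, k)]
        slow += sum(g * g for g in grid)
    slow /= k
    n_bar = (n - k + 1) / k          # average number of slow-scale returns
    return slow - (n_bar / n) * realized_variance(returns)

# Simulated day: iid efficient returns plus additive iid noise in prices
rng = random.Random(7)
n, iv = 23400, 1e-4                  # one-second grid; "true" integrated variance
true_r = [rng.gauss(0.0, (iv / n) ** 0.5) for _ in range(n)]
eps = [rng.gauss(0.0, 5e-5) for _ in range(n + 1)]
obs_r = [true_r[i] + eps[i + 1] - eps[i] for i in range(n)]
rv_naive = realized_variance(obs_r)  # biased up by roughly 2*n*Var(eps)
rv_ts = two_scale_rv(obs_r, k=30)
```

With iid noise the naive RV bias is about 2·n·Var(eps), which here dwarfs the integrated variance itself; the two-scale estimator removes most of it. Dependent noise, the paper's subject, breaks the iid-based correction, which is what motivates estimating the dependence structure.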
  12. By: Mehrhoff, Jens
    Abstract: The well-known problem of too many instruments in dynamic panel data GMM is dealt with in detail in Roodman (2009, Oxford Bull. Econ. Statist.). The present paper goes one step further by providing a solution to this problem: factorisation of the standard instrument set is shown to be a valid transformation for ensuring consistency of GMM. Monte Carlo simulations show that this new estimation technique outperforms other possible transformations by having a lower bias and RMSE as well as greater robustness of the overidentifying restrictions test. The researcher's choice of a particular transformation can be replaced by a data-driven statistical decision.
    Keywords: Dynamic panel data, generalised method of moments, instrument proliferation, factor analysis
    JEL: C13 C15 C23 C81
    Date: 2009
  13. By: Wen-Jen Tsay (Institute of Economics, Academia Sinica, Taipei, Taiwan); Peng-Hsuan Ke (Institute of Economics, Academia Sinica, Taipei, Taiwan)
    Abstract: A simple approximation for the bivariate normal cumulative distribution function (BNCDF) based on the error function is derived. The worst error of our method is at the fourth decimal place under the various configurations considered in this paper’s Table 1. This is much better than Table 1 of Cox and Wermuth (1991) and Table 1 of Lin (1995), where the worst error is at the third decimal place. We also apply the proposed method to approximate the likelihood function of the probit model with a binary endogenous regressor. The simulations indicate that the bias and mean-squared error (MSE) of the maximum likelihood estimator based on our method are very similar to those obtained from using the exact method in the GAUSS package.
    Keywords: Bivariate normal distribution, cumulative distribution function, error function
    JEL: C16 C25 C63
    Date: 2009–11
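The paper's closed-form approximation is not reproduced here, but the quantities involved can be illustrated: the error function gives the univariate normal CDF exactly, and the BNCDF it approximates equals a one-dimensional integral that can be evaluated numerically (a sketch):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bncdf(h, k, rho, steps=4000):
    """P(X <= h, Y <= k) for a standard bivariate normal with correlation rho,
    by integrating phi(x) * Phi((k - rho*x)/sqrt(1 - rho^2)) over x in (-inf, h]."""
    if rho == 0.0:
        return norm_cdf(h) * norm_cdf(k)
    lo = -8.0                       # effectively -infinity for the standard normal
    width = (h - lo) / steps
    c = math.sqrt(1.0 - rho * rho)
    s = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * width  # midpoint rule
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        s += phi * norm_cdf((k - rho * x) / c) * width
    return s
```

A useful check is the classical identity P(X ≤ 0, Y ≤ 0) = 1/4 + arcsin(ρ)/(2π), e.g. 1/3 at ρ = 0.5; closed-form approximations such as the paper's trade this brute-force integration for speed inside the probit likelihood.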
  14. By: Wen-Jen Tsay (Institute of Economics, Academia Sinica, Taipei, Taiwan); Cliff J. Huang (Department of Economics Vanderbilt University); Tsu-Tan Fu (Institute of Economics, Academia Sinica, Taipei, Taiwan); I-Lin Ho (The Institute of Physics Academia Sinica Taipei, Taiwan)
    Abstract: This paper takes issue with the appropriateness of applying the stochastic frontier analysis (SFA) technique to account for environmental effects and statistical noise in the popular three-stage data envelopment analysis (DEA). A correctly specified SFA model with a censored dependent variable and the associated maximum likelihood estimation (MLE) are proposed. The simulations show that the finite-sample performance of the proposed MLE of the censored SFA model is very promising. An empirical example of farmers’ credit unions in Taiwan illustrates the comparison between the censored and standard SFA in accounting for environmental effects and statistical noise.
    Keywords: Three-stage data envelopment analysis, stochastic frontier analysis, censored stochastic frontier model
    Date: 2009–03
  15. By: Ida Wolden Bache (Norges Bank); Anne Sofie Jore (Norges Bank); James Mitchell (NIESR); Shaun P. Vahey (Melbourne Business School)
    Abstract: A popular macroeconomic forecasting strategy takes combinations across many models to hedge against instabilities of unknown timing; see (among others) Stock and Watson (2004), Clark and McCracken (2010), and Jore et al. (2010). Existing studies of this forecasting strategy exclude Dynamic Stochastic General Equilibrium (DSGE) models, despite the widespread use of these models by monetary policymakers. In this paper, we combine inflation forecast densities utilizing an ensemble system comprising many Vector Autoregressions (VARs), and a policymaking DSGE model. The DSGE receives substantial weight (for short horizons) provided the VAR components exclude structural breaks. In this case, the inflation forecast densities exhibit calibration failure. Allowing for structural breaks in the VARs reduces the weight on the DSGE considerably, and produces well-calibrated forecast densities for inflation.
    Keywords: Ensemble modeling, Forecast densities, Forecast evaluation, VAR models, DSGE models
    JEL: C32 C53 E37
    Date: 2009–11–05
  16. By: Hans Manner; Johan Segers
    Abstract: Correlation mixtures of elliptical copulas arise when the correlation parameter is driven itself by a latent random process. For such copulas, both penultimate and asymptotic tail dependence are much larger than for ordinary elliptical copulas with the same unconditional correlation. Furthermore, for Gaussian and Student t-copulas, tail dependence at sub-asymptotic levels is generally larger than in the limit, which can have serious consequences for estimation and evaluation of extreme risk. Finally, although correlation mixtures of Gaussian copulas inherit the property of asymptotic independence, at the same time they fall in the newly defined category of near asymptotic dependence. The consequences of these findings for modeling are assessed by means of a simulation study and a case study involving financial time series.
    Date: 2009–12
  17. By: Bellégo, C.; Ferrara, L.
    Abstract: Recent macroeconomic developments during the years 2008 and 2009 have pointed out the impact of financial markets on economic activity. In this paper, we propose to evaluate the ability of a set of financial variables to forecast recessions in the euro area by using a non-linear binary response model associated with information combination. In particular, we focus on a time-varying probit model whose parameters evolve according to a Markov chain. For various forecast horizons, we provide a readable and leading signal of recession by combining information according to two combining schemes over the sample 1970-2006. First, we average recession probabilities; second, we linearly combine variables through a dynamic factor model in order to estimate an innovative factor-augmented probit model. Out-of-sample results over the period 2007-2008 show that financial variables would have been helpful in predicting a recession signal as early as September 2007, that is, around six months before the effective start of the 2008-2009 recession in the euro area.
    Keywords: Macroeconomic forecasting, Business cycles, Turning points, Financial markets, Non-linear time series, Combining forecasts.
    JEL: C53 E32 E44
    Date: 2009
  18. By: David E. Allen (School of Accounting, Finance and Economics, Edith Cowan University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute); Marcel Scharth (VU University Amsterdam and Tinbergen Institute)
    Abstract: In this paper we document that realized variation measures constructed from high-frequency returns reveal a large degree of volatility risk in stock and index returns, where we characterize volatility risk by the extent to which forecasting errors in realized volatility are substantive. Even though returns standardized by ex post quadratic variation measures are nearly Gaussian, this unpredictability brings considerably more uncertainty to the empirically relevant ex ante distribution of returns. Carefully modeling this volatility risk is fundamental. We propose a dually asymmetric realized volatility (DARV) model, which incorporates the important fact that realized volatility series are systematically more volatile in high volatility periods. Returns in this framework display time-varying volatility, skewness and kurtosis. We provide a detailed account of the empirical advantages of the model using data on the S&P 500 index and eight other indexes and stocks.
    Date: 2009–12
  19. By: Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg); Stephen M. Miller (College of Business, University of Nevada, Las Vegas)
    Abstract: We employ a 10-variable dynamic stochastic general equilibrium model to forecast the US real house price index as well as its turning point in 2006:Q2. We also examine various Bayesian and classical time-series models in our forecasting exercise to compare to the dynamic stochastic general equilibrium model, estimated using Bayesian methods. In addition to standard vector autoregressive and Bayesian vector autoregressive models, we also include the information content of either 10 or 120 quarterly series in some models to capture the influence of fundamentals. We consider two approaches for including information from large data sets – extracting common factors (principal components) in Factor-Augmented Vector Autoregressive or Factor-Augmented Bayesian Vector Autoregressive models, or Bayesian shrinkage in large-scale Bayesian Vector Autoregressive models. We compare the out-of-sample forecast performance of the alternative models, using the average root mean squared error for the forecasts. We find that the small-scale Bayesian-shrinkage model (10 variables) outperforms the other models, including the large-scale Bayesian-shrinkage model (120 variables). Finally, we use each model to forecast the turning point in 2006:Q2, using the estimated model through 2005:Q2. Only the dynamic stochastic general equilibrium model actually forecasts a turning point with any accuracy, suggesting that attention to developing forward-looking microfounded dynamic stochastic general equilibrium models of the housing market, over and above fundamentals, proves crucial in forecasting turning points.
    Keywords: US House prices, Forecasting, DSGE models, Factor Augmented Models, Large-Scale BVAR models
    JEL: C32 R31
    Date: 2009–12
  20. By: Robert Ślepaczuk (Faculty of Economic Sciences, University of Warsaw); Grzegorz Zakrzewski (Deutsche Bank PBC S.A.)
    Abstract: This paper focuses on the volatility of financial markets, which is one of the most important issues in finance, especially with regard to modeling high-frequency data. Risk management, asset pricing and option valuation techniques are the areas where the concept of volatility estimators (consistent, unbiased and the most efficient) is of crucial concern. Our intention was to find the best estimator of true volatility, taking into account the latest investigations in the finance literature. Based on the methodology presented in Parkinson (1980), Garman and Klass (1980), Rogers and Satchell (1991), Yang and Zhang (2000), Andersen et al. (1997, 1998, 1999a, 1999b), Hansen and Lunde (2005, 2006b) and Martens (2007), we computed the various model-free volatility estimators and compared them with the classical volatility estimator most often used in financial models. In order to reveal the information set hidden in high-frequency data, we utilized the concept of realized volatility and realized range. Calculating our estimators, we carefully focused on Δ (the interval used in calculation), n (the memory of the process) and q (the scaling factor for scaled estimators). Our results revealed that the appropriate selection of Δ and n plays a crucial role when we try to answer the question concerning the estimator's efficiency, as well as its accuracy. Having nine estimators of volatility, we found that for optimal n (measured in days) and Δ (in minutes) we obtain the most efficient estimator. Our findings confirmed that the best estimator should include information contained not only in closing prices but in the price range as well (range estimators). More importantly, we focused on the properties of the formula itself, independently of the interval used, comparing estimators with the same Δ, n and q parameters. We observed that the formula of the volatility estimator is not as important as the process of selecting the optimal parameter n or Δ.
Finally, we focused on the asymmetry between market turmoil and adjustments of volatility. Next, we put stress on the implications of our results for well-known financial models which utilize classical volatility estimator as the main input variable.
    Keywords: financial market volatility, high-frequency financial data, realized volatility and correlation, volatility forecasting, microstructure bias, the opening jump effect, the bid-ask bounce, autocovariance bias, daily patterns of volatility, emerging markets
    JEL: G14 G15 C61 C22
    Date: 2009
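Two of the range-based estimators cited (Parkinson, 1980; Garman and Klass, 1980) are simple enough to sketch; the simulation below is illustrative only and does not reproduce the paper's Δ/n/q comparisons:

```python
import math
import random

def parkinson(highs, lows):
    """Parkinson (1980): daily variance estimate from high/low prices,
    the mean of (ln(H/L))^2 / (4 ln 2)."""
    n = len(highs)
    return sum(math.log(h / l) ** 2 for h, l in zip(highs, lows)) / (4.0 * math.log(2.0) * n)

def garman_klass(opens, highs, lows, closes):
    """Garman and Klass (1980): OHLC-based variance estimate (zero-drift assumption)."""
    n = len(opens)
    total = 0.0
    for o, h, l, c in zip(opens, highs, lows, closes):
        total += 0.5 * math.log(h / l) ** 2 - (2.0 * math.log(2.0) - 1.0) * math.log(c / o) ** 2
    return total / n

# Simulate 200 days of a driftless log-price random walk with daily variance 1e-4
rng = random.Random(3)
opens, highs, lows, closes = [], [], [], []
for _ in range(200):
    path = [0.0]
    for _ in range(390):                      # one price per minute
        path.append(path[-1] + rng.gauss(0.0, (1e-4 / 390) ** 0.5))
    opens.append(math.exp(path[0]))
    closes.append(math.exp(path[-1]))
    highs.append(math.exp(max(path)))
    lows.append(math.exp(min(path)))
pv = parkinson(highs, lows)
gk = garman_klass(opens, highs, lows, closes)
```

Both estimates should land near the true daily variance of 1e-4 (with a modest downward bias because the discretely sampled range understates the continuous range), illustrating the paper's point that range information sharply improves on close-to-close estimates.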
  21. By: Martin Fukac (Reserve Bank of New Zealand)
    Abstract: DSGE models have become a widely used tool for policymakers. This paper takes the global identification theory used for structural vector autoregressions, and applies it to dynamic stochastic general equilibrium (DSGE) models. We use this modified theory to check whether a DSGE model structure allows for unique estimates of structural shocks and their dynamic effects. The potential cost of a lack of identification for policy oriented models along that specific dimension is huge, as the same model can generate a number of contrasting yet theoretically and empirically justifiable recommendations. The problem and methodology are illustrated using a simple New Keynesian business cycle model.
    JEL: C30 C52
    Date: 2009–12
  22. By: Anthony Garratt; James Mitchell; Shaun P. Vahey (Reserve Bank of New Zealand)
    Abstract: We propose a methodology for producing density forecasts for the output gap in real time using a large number of vector autoregressions in inflation and output gap measures. Density combination utilizes a linear mixture of experts framework to produce potentially non-Gaussian ensemble densities for the unobserved output gap. In our application, we show that data revisions substantially alter our probabilistic assessments of the output gap using a variety of output gap measures derived from univariate detrending filters. The resulting ensemble produces well-calibrated forecast densities for US inflation in real time, in contrast to those from simple univariate autoregressions which ignore the contribution of the output gap. Combining evidence from both linear trends and more flexible univariate detrending filters induces strong multi-modality in the predictive densities for the unobserved output gap. The peaks associated with these two detrending methodologies indicate output gaps of opposite sign for some observations, reflecting the pervasive nature of model uncertainty in our US data.
    JEL: C32 C53 E37
    Date: 2009–12
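The univariate detrending filters mentioned can be illustrated with the Hodrick-Prescott filter, one standard choice (a dense-linear-algebra sketch; production code would exploit the banded structure of the system, and the smoothing parameter 1600 is the usual quarterly convention):

```python
def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize sum (y_t - t_t)^2 + lam * sum (second
    differences of t)^2, i.e. solve the linear system (I + lam*K'K) t = y."""
    n = len(y)
    # Build A = I + lam * K'K, where K is the (n-2) x n second-difference matrix.
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for j in range(n - 2):
        row = [0.0] * n
        row[j], row[j + 1], row[j + 2] = 1.0, -2.0, 1.0
        for a in range(j, j + 3):
            for b in range(j, j + 3):
                A[a][b] += lam * row[a] * row[b]
    # Solve A t = y by Gaussian elimination with partial pivoting.
    M = [A[i][:] + [y[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    t = [0.0] * n
    for r in range(n - 1, -1, -1):
        t[r] = (M[r][n] - sum(M[r][c] * t[c] for c in range(r + 1, n))) / M[r][r]
    return t

# Trend plus a square-wave "cycle": the gap is the series minus the HP trend
y = [2.0 + 0.5 * t + (0.3 if t % 8 < 4 else -0.3) for t in range(40)]
gap = [y[i] - tr for i, tr in enumerate(hp_filter(y))]
```

A purely linear series is returned unchanged by the filter (its second differences are zero), which is a handy sanity check; the paper's point is that gap estimates from filters like this are highly sensitive to data revisions and to the choice of detrending method.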
  23. By: Stanislav Anatolyev (New Economic School); Natalia Kryzhanovskaya (New Economic School)
    Abstract: To predict a return characteristic, one may construct models of different complexity describing the dynamics of different objects. The most complex object is the entire predictive density, while the least complex is the characteristic whose forecast is of interest. This paper investigates, using experiments with real data, the relation between the complexity of the modeled object and the predictive quality of the return characteristic of interest, in the case when this characteristic is a return sign, or, equivalently, the direction-of-change. Importantly, we carry out the comparisons assuming that the underlying loss function is asymmetric, which is more plausible than the quadratic loss still prevailing in the analysis of returns. Our experiments are performed with returns of various frequencies on a stock market index and an exchange rate. By and large, modeling the dynamics of returns by autoregressive conditional quantiles tends to produce forecasts of higher quality than modeling the whole predictive density or modeling the return indicators themselves.
    Keywords: Directional prediction, sign prediction, model complexity, prediction quality, asymmetric loss, predictive density, conditional quantiles, binary autoregression
    Date: 2009–11
  24. By: Marcin Kolasa (National Bank of Poland, ul. Swietokrzyska 11/21, PL-00-919 Warsaw, Poland.); Michał Rubaszek (National Bank of Poland, ul. Swietokrzyska 11/21, PL-00-919 Warsaw, Poland.); Paweł Skrzypczyński (National Bank of Poland, ul. Swietokrzyska 11/21, PL-00-919 Warsaw, Poland.)
    Abstract: Dynamic stochastic general equilibrium models have recently become standard tools for policy-oriented analyses. Nevertheless, their forecasting properties are still barely explored. We fill this gap by comparing the quality of real-time forecasts from a richly-specified DSGE model to those from the Survey of Professional Forecasters, Bayesian VARs and VARs using priors from a DSGE model. We show that the analyzed DSGE model is relatively successful in forecasting the US economy in the period of 1994-2008. Except for short-term forecasts of inflation and interest rates, it is as good as or clearly outperforms BVARs and DSGE-VARs. Compared to the SPF, the DSGE model generates better output forecasts at longer horizons, but less accurate short-term forecasts for interest rates. Conditional on experts' nowcasts, however, the forecasting power of the DSGE turns out to be similar to or better than that of the SPF for all the variables and horizons.
    Keywords: Forecasting, DSGE, Bayesian VAR, SPF, Real-time data.
    JEL: C11 C32 C53 D58 E17
    Date: 2009–11
  25. By: Brice Franke; Michael Stolz
    Abstract: We propose an approach to the aggregation of risks which is based on estimation of simple quantities (such as covariances) associated to a vector of dependent random variables, and which avoids the use of parametric families of copulae. Our main result demonstrates that the method leads to bounds on the worst case Value at Risk for a sum of dependent random variables. Its proof applies duality theory for infinite dimensional linear programs.
    Date: 2009–12
  26. By: Li-Chun Zhang (Statistics Norway)
    Abstract: The next census round will be completely register-based in all the Nordic countries. The household is a key statistical unit in this context; however, it does not exist as such in the available administrative registers and must be created by the statistical agency from the various information available in the statistical system. Errors in such register households are thus unavoidable and propagate to the various induced household statistics. In this paper we outline a unit-error theory that provides a framework for evaluating the statistical accuracy of these register-based household statistics, and we illustrate its use through an application to Norwegian register household data.
    Keywords: Register statistics; statistical accuracy; unit errors; prediction inference
    Date: 2009–12
  27. By: Ángel de la Fuente
    Abstract: This note develops a flexible methodology for splicing economic time series that avoids the extreme assumptions implicit in the procedures most commonly used in the literature. It allows the user to split the required correction to the older of the series being linked between its levels and growth rates on the basis of what he or she knows or conjectures about the persistence of the factors that account for the discrepancy between the two series at their linking point. The time profile of the correction is derived from the assumption that the error in the older series reflects inadequate coverage of emerging sectors or activities that grow faster than the aggregate.
    Keywords: linking, splicing, economic series
    JEL: C82 E01
    Date: 2009–12–03
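    [Editor's illustration] The idea of distributing the link-point correction between levels and growth rates can be sketched as follows. This is a hypothetical implementation, not the note's exact formulas: the geometric weighting scheme and the parameter `rho` are assumptions chosen to interpolate between the two extreme splicing procedures (a pure level shift of the whole old series versus loading the entire correction onto the final growth rate).

```python
def mixed_splice(old, new, rho):
    """Splice two overlapping series.

    `old` covers periods 0..T (its last value overlaps `new`'s first);
    `new` covers periods T..N. The ratio discrepancy at the link period
    T is pushed backwards over the old series with geometric weights
    rho**(T - t), rho in (0, 1]: rho = 1 applies the full correction to
    every level (a pure level shift preserving old growth rates), while
    small rho concentrates the correction near T, preserving early
    levels and loading the adjustment onto growth rates near the link.
    """
    T = len(old) - 1
    d = new[0] / old[-1]  # ratio discrepancy at the link period
    spliced = [old[t] * d ** (rho ** (T - t)) for t in range(T + 1)]
    return spliced + list(new[1:])

series = mixed_splice([100.0, 105.0, 110.0], [120.0, 126.0], rho=0.5)
```

At the link period the corrected old series always matches the new one exactly; earlier values are corrected progressively less the further they sit from the link.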
  28. By: Scott, David J; Würtz, Diethelm; Dong, Christine; Tran, Thanh Tam
    Abstract: In this paper we demonstrate a recursive method for obtaining the moments of the generalized hyperbolic distribution. The method is readily programmable for numerical evaluation of moments. For low order moments we also give an alternative derivation of the moments of the generalized hyperbolic distribution. The expressions given for these moments may be used to obtain moments for special cases such as the hyperbolic and normal inverse Gaussian distributions. Moments for limiting cases such as the skew hyperbolic t and variance gamma distributions can be found using the same approach.
    Keywords: Generalized hyperbolic distribution; hyperbolic distribution; kurtosis; moments; normal inverse Gaussian distribution; skewed-t distribution; skewness; Student-t distribution.
    JEL: C16
    Date: 2009–12–09
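    [Editor's illustration] The moment expressions in this paper can be checked numerically by exploiting the normal mean-variance mixture representation of the generalized hyperbolic family: X = mu + beta*V + sqrt(V)*Z, where V follows a generalized inverse Gaussian law. The sketch below uses the variance gamma limiting case, where V is simply Gamma-distributed, so a stdlib Monte Carlo check of the low-order moments is possible; the parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def variance_gamma_sample(n, mu, beta, shape, scale, seed=1):
    """Draw from a variance gamma law via its normal mean-variance
    mixture representation X = mu + beta*V + sqrt(V)*Z, with mixing
    variable V ~ Gamma(shape, scale) and Z standard normal. The same
    construction with a GIG mixing law gives the full generalized
    hyperbolic family."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gammavariate(shape, scale)
        out.append(mu + beta * v + (v ** 0.5) * rng.gauss(0, 1))
    return out

xs = variance_gamma_sample(200_000, mu=0.0, beta=0.2, shape=2.0, scale=1.0)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# Theory for this mixture: E[X] = mu + beta*E[V] = 0.4
#                          Var[X] = E[V] + beta**2 * Var[V] = 2.08
```

The simulated mean and variance agree with the closed-form mixture moments to Monte Carlo accuracy, which is the kind of cross-check the paper's recursive expressions make possible for higher orders as well.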
  29. By: Irmingard Eder; Claudia Klüppelberg
    Abstract: For the sum process $X=X^1+X^2$ of a bivariate Lévy process $(X^1,X^2)$ with possibly dependent components, we derive a quintuple law describing the first upwards passage event of $X$ over a fixed barrier, caused by a jump, by the joint distribution of five quantities: the time relative to the time of the previous maximum, the time of the previous maximum, the overshoot, the undershoot and the undershoot of the previous maximum. The dependence between the jumps of $X^1$ and $X^2$ is modeled by a Lévy copula. We calculate these quantities for some examples, where we pay particular attention to the influence of the dependence structure. We apply our findings to the ruin event of an insurance risk process.
    Date: 2009–12
  30. By: Xiaoshan Chen; Terence C. Mills
    Abstract: This paper analyses the impact of using different macroeconomic variables and output decompositions to estimate the euro area output gap. We estimate twelve multivariate unobserved components models with phase shifts allowed between individual cyclical components. As output decomposition plays a central role in all multivariate models, three different output decompositions are utilised: a first-order stochastic cycle combined with either a local linear trend or a damped slope trend, and a second-order cycle plus an appropriate trend specification (a trend following a random walk with a constant drift is generally preferred). We also extend the commonly used trivariate models of output, inflation and unemployment to incorporate a fourth variable, either investment or industrial production. We find that the four-variate model incorporating industrial production produces the most satisfactory output gap estimates, especially when the output gap is modelled as a first-order cycle. In addition, measuring phase shifts and calculating contemporaneous correlations between individual cyclical components provides a better understanding of the different gap estimates. We conclude that the output gap estimate in all models leads the cyclical components of inflation and unemployment, but lags those of industrial production and investment. Furthermore, the output gap estimates obtained from the four-variate model including investment exhibit the longest leads and lags with respect to the other cyclical components, implying that investment appears to be more of a leading indicator than a coincident variable for the euro area.
    Keywords: output gap, higher-order cycle, industrial production, state-space, Kalman filter.
    JEL: C32 E32
    Date: 2009–11
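    [Editor's illustration] Unobserved components models of the kind estimated here are cast in state-space form and filtered with the Kalman filter. As a minimal sketch of that machinery, the snippet below filters the simplest member of the family, the univariate local level model; the paper's models are multivariate, with trend, cycle and phase-shift components, so this is only an assumed toy example of the estimation approach, not the authors' specification.

```python
def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,      eps_t ~ N(0, sigma_eps2)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, sigma_eta2).
    Returns the filtered estimates of the unobserved level mu_t.
    The diffuse prior variance p0 lets the data dominate the start."""
    a, p = a0, p0               # state mean and variance
    filtered = []
    for obs in y:
        p = p + sigma_eta2      # prediction: random-walk state
        f = p + sigma_eps2      # innovation variance
        k = p / f               # Kalman gain
        a = a + k * (obs - a)   # update state mean
        p = (1 - k) * p         # update state variance
        filtered.append(a)
    return filtered

est = local_level_filter([5.0] * 50, sigma_eps2=1.0, sigma_eta2=0.1)
```

On a constant series the filtered level converges to that constant, as expected; in richer models the same predict/update recursion runs on a vector state containing trend and cycle components.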
  31. By: Christian Meyer
    Abstract: We collect well known and less known facts about the bivariate normal distribution and translate them into copula language. In addition, we prove a very general formula for the bivariate normal copula, we compute Gini's gamma, and we provide improved bounds and approximations on the diagonal.
    Date: 2009–12
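    [Editor's illustration] Many facts about the Gaussian copula reduce to evaluating the bivariate normal CDF. The sketch below computes it from the classical single-integral (tetrachoric series companion) identity Phi2(x, y; rho) = Phi(x)Phi(y) + (1/2π) ∫₀^ρ exp(−(x² − 2rxy + y²)/(2(1−r²))) / √(1−r²) dr, with a simple midpoint rule; this is a well-known representation, not the "very general formula" the paper proves, and the quadrature resolution is an assumption.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bvn_cdf(x, y, rho, n=2000):
    """Bivariate standard normal CDF from the classical identity
    Phi2(x,y;rho) = Phi(x)Phi(y) + (1/2pi) * integral over r in
    [0, rho] of exp(-(x^2 - 2rxy + y^2)/(2(1-r^2))) / sqrt(1-r^2),
    evaluated with a midpoint rule on n subintervals."""
    total = 0.0
    for i in range(n):
        r = rho * (i + 0.5) / n
        total += math.exp(-(x * x - 2 * r * x * y + y * y)
                          / (2 * (1 - r * r))) / math.sqrt(1 - r * r)
    return phi(x) * phi(y) + (rho / n) * total / (2 * math.pi)

def gauss_copula(u, v, rho):
    """Gaussian copula C(u,v) = Phi2(Phi^{-1}(u), Phi^{-1}(v); rho),
    with the normal quantile obtained by bisection (stdlib-only)."""
    def phi_inv(p, lo=-8.0, hi=8.0):
        for _ in range(80):
            mid = (lo + hi) / 2
            if phi(mid) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    return bvn_cdf(phi_inv(u), phi_inv(v), rho)
```

A quick sanity check on the diagonal: Phi2(0, 0; rho) = 1/4 + arcsin(rho)/(2π), so gauss_copula(0.5, 0.5, 0.5) should reproduce that value.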
  32. By: Robert F. Engle; José Gonzalo Rangel
    Abstract: This study models high and low frequency variation in global equity correlations using a comprehensive sample of 43 countries that includes developed and emerging markets, during the period 1995-2008. These two types of variations are modeled following the semi-parametric Factor-Spline-GARCH approach of Rangel and Engle (2008). This framework is extended and modified to incorporate the effect of multiple factors and to address the issue of non-synchronicity in international markets. Our empirical analysis suggests that the slow-moving dynamics of global correlations can be described by the Factor-Spline-GARCH specifications using either weekly or daily data. The analysis shows that the low frequency component of global correlations increased in the current financial turmoil; however, this increase was not equally distributed across countries. The countries that experienced the largest increase in correlations were mainly emerging markets.
    Keywords: Dynamic conditional correlations, high and low frequency variation, global markets, non-synchronicity.
    JEL: C32 C51 C52 G12 G15
    Date: 2009–12
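    [Editor's illustration] In the Factor-Spline-GARCH framework the high-frequency component of volatility follows a GARCH-type recursion, while a spline captures the slow-moving low-frequency level. As a minimal sketch of the high-frequency part only, here is a plain GARCH(1,1) variance filter; the parameter values are illustrative assumptions and the spline and factor structure of the paper are omitted.

```python
def garch11_filter(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
        h_t = omega + alpha * r_{t-1}**2 + beta * h_{t-1},
    initialised at the unconditional variance
    omega / (1 - alpha - beta). Returns h_1, ..., h_T."""
    h = omega / (1.0 - alpha - beta)
    variances = [h]
    for r in returns[:-1]:
        h = omega + alpha * r * r + beta * h
        variances.append(h)
    return variances

# With zero returns the recursion decays toward omega / (1 - beta)
hs = garch11_filter([0.0] * 200, omega=0.1, alpha=0.05, beta=0.9)
```

In the full model each country's residual would be standardised by such a component before correlations are formed, with the spline shifting its long-run level.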

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.