nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒05‒29
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Realized Copula By Matthias R. Fengler; Ostap Okhrin
  2. PC-VAR estimation of vector autoregressive models By Claudio Morana
  3. Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates By Heejoon Han; Dennis Kristensen
  4. Estimating and Forecasting APARCH-Skew-t Models by Wavelet Support Vector Machines By Li, Yushu
  5. A copula-based analysis of false discovery rate control under dependence assumptions By Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio
  6. Estimating Long Memory Causality Relationships by a Wavelet Method By Li, Yushu
  7. Improving Bayesian VAR density forecasts through autoregressive Wishart Stochastic Volatility By Karapanagiotidis, Paul
  8. Do Reservation Wages Decline Monotonically? A Novel Statistical Test By Gutknecht, Daniel
  9. Wavelet Improvement in Turning Point Detection using a HMM Model By Li, Yushu
  10. A least squares approach to latent variables extraction in formative-reflective models By Marco Fattore; Matteo Pelagatti; Giorgio Vittadini
  11. Bayesian estimation of inefficiency heterogeneity in stochastic frontier models By Jorge E. Galán; Helena Veiga; Michael P. Wiper
  12. Fire Sales Forensics: Measuring Endogenous Risk By Rama Cont; Lakshithe Wagalath
  13. Spatial depth-based classification for functional data By Carlo Sguera; Pedro Galeano; Rosa E. Lillo
  14. Local Distance-Based Generalized Linear Models using the dbstats package for R By Eva Boj; Pedro Delicado; Josep Fortiana; Anna Esteve; Adria Caballe
  15. Using the "Chandrasekhar Recursions" for likelihood evaluation of DSGE models By Edward P. Herbst
  16. Christopher A. Sims et la représentation VAR By Jean-Baptiste Gossé; Cyriac Guillaumin
  17. Incorporating Prior Information into a GMM Objective for Mixed Logit Demand Systems By Charles Romeo
  18. On the empirical failure of purchasing power parity tests By Matteo Pelagatti; Emilio Colombo

  1. By: Matthias R. Fengler; Ostap Okhrin
    Abstract: We introduce the notion of realized copula. Based on assumptions of the marginal distributions of daily stock returns and a copula family, realized copula is defined as the copula structure materialized in realized covariance estimated from within-day high-frequency data. Copula parameters are estimated in a method-of-moments type of fashion through Hoeffding's lemma. Applying this procedure day by day gives rise to a time series of copula parameters that is suitably approximated by an autoregressive time series model. This allows us to capture time-varying dependency in our framework. Studying a portfolio risk-management application, we find that time-varying realized copula is superior to standard benchmark models in the literature.
    Keywords: realized variance, realized covariance, realized copula, multivariate dependence
    JEL: G12 C13 C14 C22 C50
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-034&r=ecm
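    Sketch: a minimal numerical illustration of the moment-matching step described above, not the authors' code. Hoeffding's lemma writes Cov(X,Y) as the integral of the joint CDF minus the product of the margins, so a copula parameter can be backed out from a realized covariance by root-finding; standard normal margins, the Gaussian copula family, and the grid are all illustrative assumptions.
```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq
from scipy.integrate import trapezoid

x = np.linspace(-6.0, 6.0, 81)                  # coarse grid keeps this quick
X, Y = np.meshgrid(x, x, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])
Fx = norm.cdf(x)                                # standard normal margins (assumed)

def implied_cov(rho):
    """Hoeffding: Cov(X,Y) = double integral of C(F(x),F(y)) - F(x)F(y)."""
    C = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]]).cdf(pts)
    integrand = C.reshape(81, 81) - np.outer(Fx, Fx)
    return trapezoid(trapezoid(integrand, x, axis=1), x)

realized_cov = 0.45                             # stand-in for one day's realized covariance
rho_hat = brentq(lambda r: implied_cov(r) - realized_cov, -0.99, 0.99, xtol=1e-4)
print(rho_hat)                                  # close to 0.45 under normal margins
```
    Repeating this day by day over a sequence of realized covariances yields the time series of copula parameters the abstract refers to.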
  2. By: Claudio Morana
    Abstract: In this paper PC-VAR estimation of vector autoregressive models (VAR) is proposed. The estimation strategy successfully lessens the curse of dimensionality affecting VAR models when estimated using sample sizes typically available in quarterly studies. The procedure involves a dynamic regression using a subset of principal components extracted from a vector time series, and the recovery of the implied unrestricted VAR parameter estimates by solving a set of linear constraints. PC-VAR and OLS estimation of unrestricted VAR models have the same asymptotic properties. Monte Carlo results strongly support PC-VAR estimation, yielding gains, in terms of both lower bias and higher efficiency, relative to OLS estimation of high dimensional unrestricted VAR models in small samples. Guidance for the selection of the number of components to be used in empirical studies is provided.
    Keywords: vector autoregressive model, principal components analysis, statistical reduction techniques.
    JEL: C22
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:mib:wpaper:223&r=ecm
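    Sketch: an illustrative reduction of the PC-VAR idea to one lag and k components; the recovery below is a simple projection, not necessarily the paper's exact linear-constraint solution.
```python
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 200, 20, 3                      # sample size, variables, components
Y = rng.standard_normal((T, n))           # placeholder for a vector time series

Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
V = Vt[:k].T                              # n x k loadings of the first k PCs
F = Yc @ V                                # T x k principal component scores

# dynamic regression on the components: a small k x k fit replaces an n x n one
B = np.linalg.lstsq(F[:-1], F[1:], rcond=None)[0]

Phi = V @ B.T @ V.T                       # implied n x n VAR(1) coefficient matrix
print(Phi.shape)                          # (20, 20), recovered from a 3 x 3 regression
```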
  3. By: Heejoon Han (National University of Singapore); Dennis Kristensen (University College London and CREATES)
    Abstract: This paper investigates the asymptotic properties of the Gaussian quasi-maximum-likelihood estimators (QMLEs) of the GARCH model augmented by including an additional explanatory variable - the so-called GARCH-X model. The additional covariate is allowed to exhibit any degree of persistence as captured by its long-memory parameter d_x; in particular, we allow for both stationary and non-stationary covariates. We show that the QMLEs of the regression coefficients entering the volatility equation are consistent and normally distributed in large samples independently of the degree of persistence. This implies that standard inferential tools, such as t-statistics, do not have to be adjusted to the level of persistence. On the other hand, the intercept in the volatility equation is not identified when the covariate is non-stationary, which is akin to the results of Jensen and Rahbek (2004, Econometric Theory 20), who develop similar results for the pure GARCH model with explosive volatility.
    Keywords: GARCH; Persistent covariate; Fractional integration; Quasi-maximum likelihood estimator; Asymptotic distribution theory.
    JEL: C22 C50 G12
    Date: 2012–05–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-25&r=ecm
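    Sketch: a self-contained Gaussian QML fit of a GARCH-X(1,1), sigma2_t = omega + alpha*eps2_{t-1} + beta*sigma2_{t-1} + pi*x2_{t-1}, on simulated data with a stationary covariate; all names and values are illustrative.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 2000
x = rng.standard_normal(T)                       # stationary covariate (assumed)
omega0, alpha0, beta0, pi0 = 0.05, 0.10, 0.80, 0.20

eps = np.zeros(T)
sig2 = np.full(T, omega0 / (1 - alpha0 - beta0))
for t in range(1, T):                            # simulate the GARCH-X recursion
    sig2[t] = omega0 + alpha0*eps[t-1]**2 + beta0*sig2[t-1] + pi0*x[t-1]**2
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

def negloglik(theta):
    omega, alpha, beta, pi_ = theta
    s2 = np.full(T, eps.var())                   # crude initialization
    for t in range(1, T):
        s2[t] = omega + alpha*eps[t-1]**2 + beta*s2[t-1] + pi_*x[t-1]**2
    return 0.5 * np.sum(np.log(s2) + eps**2 / s2)

res = minimize(negloglik, x0=[0.1, 0.05, 0.7, 0.1], method="L-BFGS-B",
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0), (0.0, None)])
print(res.x)                                     # near (0.05, 0.10, 0.80, 0.20)
```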
  4. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: This paper compares the estimation and forecasting ability of Quasi-Maximum Likelihood (QML) and Support Vector Machines (SVM) for financial data. The financial series are fitted to a family of Asymmetric Power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew-t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so the current paper utilizes the nonparametric SVM method and shows that it is more efficient than QML under skewed Student's t-distributed errors. As the SVM is a kernel-based technique, we further investigate its performance with a Gaussian kernel and a wavelet kernel, the latter chosen for its ability to capture the localized volatility clustering in the APARCH model. The results are evaluated in a Monte Carlo experiment, with accuracy measured by the Normalized Mean Square Error (NMSE). The results suggest that the SVM-based method generally performs better than QML, with a consistently lower NMSE for both in-sample and out-of-sample data. The outcomes also show that the wavelet kernel outperforms the Gaussian kernel, with a lower NMSE, greater computational efficiency and better generalization ability.
    Keywords: SVM; APARCH; Wavelet Kernel; Monte Carlo Experiment
    JEL: C14 C53 C61
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_013&r=ecm
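    Sketch: the SVM side of the comparison in a hypothetical setup: support vector regression mapping lagged absolute returns to next-period absolute returns, with an off-the-shelf Gaussian (RBF) kernel standing in for the wavelet kernel (scikit-learn's SVR also accepts a custom callable kernel, which is where a wavelet kernel would plug in).
```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
r = rng.standard_normal(1000) * (1 + 0.5 * np.abs(rng.standard_normal(1000)))

p = 5                                            # number of lags (assumed)
X = np.column_stack([np.abs(r[i:len(r) - p + i]) for i in range(p)])
y = np.abs(r[p:])                                # volatility proxy to forecast

svr = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(X[:800], y[:800])
pred = svr.predict(X[800:])
nmse = np.mean((pred - y[800:])**2) / np.var(y[800:])   # normalized MSE
print(nmse)
```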
  5. By: Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio
    Abstract: The false discovery rate (FDR), first introduced in Benjamini and Hochberg (1995), is a powerful approach to multiple testing. Benjamini and Yekutieli (2001) proved that the original procedure, developed for independent test statistics, also controls the FDR for positively dependent test statistics. Furthermore, Yekutieli (2008) showed that a modification of the original procedure can be used even in the presence of non-positively regression dependent test statistics. In this paper we elaborate on Yekutieli (2008) and introduce suitable classes of copulas to identify the conditions under which the dependence properties needed to control the FDR are satisfied.
    Keywords: Multiple testing, False discovery rate, Copulas
    JEL: C12 C40
    Date: 2012–05–19
    URL: http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp12065&r=ecm
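    Sketch: the Benjamini-Hochberg (1995) step-up procedure that the paper builds on, in a few lines; this is the standard procedure, not the copula analysis itself.
```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean rejection mask controlling the FDR at level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m            # BH critical values
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                        # reject the k smallest p-values
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]))
# -> only the two smallest p-values are rejected at q = 0.05
```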
  6. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: The traditional causality framework proposed by Granger (1969) assumes that the relationships between variables are short-range dependent and that the series have the same order of integration. Chen (2006) proposed a bivariate model that can capture long-range dependence between two variables without requiring the series to be fractionally cointegrated. In that model, a long-memory fractional transfer function is introduced to capture the long-range dependence, and a pseudo-spectrum based method is proposed to estimate the long memory parameter of the bivariate causality model. In recent years, wavelet-domain methods have gained popularity for estimating the long memory parameter of univariate series, but no extension to bivariate or multivariate series has been made; this paper aims to fill that gap. We construct an estimator for the long memory parameter of the bivariate causality model in the wavelet domain, derive its theoretical background, and use Monte Carlo simulation to investigate the performance of the estimator.
    Keywords: Granger causality; long memory; Monte Carlo simulation; wavelet domain
    JEL: C30 C51 C63
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_015&r=ecm
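    Sketch: the univariate wavelet-domain estimator the paper extends, under illustrative choices (db4 wavelet, six levels; assumes the PyWavelets package). For a long-memory process the variance of the orthonormal DWT detail coefficients at level j scales roughly as 2^(2dj), so d is half the slope of a log2-variance-on-level regression.
```python
import numpy as np
import pywt

rng = np.random.default_rng(3)

def fi_series(d, T):
    """Simulate ARFIMA(0,d,0) by truncated MA(infinity) filtering."""
    k = np.arange(1, T)
    psi = np.cumprod((k - 1 + d) / k)      # psi_j = Gamma(j+d)/(Gamma(d)Gamma(j+1))
    shocks = rng.standard_normal(2 * T)
    return np.convolve(shocks, np.concatenate(([1.0], psi)))[T:2 * T]

y = fi_series(d=0.3, T=4096)
details = pywt.wavedec(y, "db4", level=6)[1:]         # [cD6, ..., cD1]
logvar = [np.log2(np.var(c)) for c in details[::-1]]  # reorder to j = 1..6
j = np.arange(1, len(logvar) + 1)
slope = np.polyfit(j, logvar, 1)[0]
print(slope / 2)                                      # d_hat, near 0.3
```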
  7. By: Karapanagiotidis, Paul
    Abstract: Dramatic changes in macroeconomic time series volatility pose a challenge to contemporary vector autoregressive (VAR) forecasting models. Traditionally, the conditional volatility of such models had been assumed constant over time or allowed breaks only across long time periods. More recent work, however, has improved forecasts by letting the conditional volatility be fully time-varying, specifying the VAR innovation variance as a distinct discrete-time process. For example, Clark (2011) specifies the volatility process as an independent log random walk for each time series in the VAR. Unfortunately, there is no empirical reason to believe that the VAR innovation volatility process of macroeconomic growth series follows a log random walk, nor that the volatility of each series is independent of the others. This suggests that a more robust specification of the volatility process, one that both accounts for co-persistence in conditional volatility across time series and exhibits mean-reverting behaviour, should improve density forecasts, especially over long forecasting horizons. To this end, I place a latent inverse-Wishart autoregressive stochastic volatility specification on the conditional variance equation of a Bayesian VAR with U.S. macroeconomic time series data, and evaluate Bayesian forecast efficiency against the competing log random walk specification of Clark (2011).
    Keywords: inverse-Wishart distribution; stochastic volatility; predictive likelihoods; MCMC; macroeconomic time series; density forecasts; vector autoregression; steady state priors; Bayesian econometrics
    JEL: C32 C53 E17 C11
    Date: 2012–03–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:38885&r=ecm
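    Sketch: a toy rendering of mean-reverting matrix-variate stochastic volatility (one reading of the specification, not the paper's code): each Sigma_t is drawn from an inverse-Wishart whose scale mixes the previous Sigma with a long-run target, so E[Sigma_t | Sigma_{t-1}] = phi*Sigma_{t-1} + (1-phi)*Sigma_bar.
```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(4)
n, T, nu, phi = 3, 500, 20, 0.9
Sigma_bar = np.eye(n)                          # long-run volatility target (toy)
Sigma = Sigma_bar.copy()
vols = np.empty(T)
for t in range(T):
    scale = (nu - n - 1) * (phi * Sigma + (1 - phi) * Sigma_bar)
    Sigma = invwishart.rvs(df=nu, scale=scale, random_state=rng)
    vols[t] = np.sqrt(Sigma[0, 0])
print(vols.mean(), vols.std())                 # persistent but mean-reverting path
```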
  8. By: Gutknecht, Daniel (Department of Economics, University of Warwick)
    Abstract: This paper develops a test for monotonicity of the regression function under endogeneity. The novel testing framework is applied to study monotonicity of the reservation wage as a function of elapsed unemployment duration. The objective of the paper is thus twofold: from a theoretical perspective, it proposes a test that formally assesses monotonicity of the regression function in the case of a continuous, endogenous regressor. This is accomplished by combining different nonparametric conditional mean estimators, using either control functions or unobservable exogenous variation to address endogeneity, with a test statistic based on a functional of a second order U-process. The modified statistic is shown to have a non-standard asymptotic distribution (similar to related tests) from which asymptotic critical values can be derived directly rather than approximated by bootstrap resampling methods. The test is shown to be consistent against general alternatives. From an empirical perspective, the paper provides a detailed investigation of the effect of elapsed unemployment duration on reservation wages in a nonparametric setup. This effect is difficult to measure due to the simultaneity of both variables. Despite some evidence in the literature for a declining reservation wage function over the course of unemployment, no information about the actual form of this decline has yet been provided. Using a standard job search model, it is shown that monotonicity of the reservation wage function, a restriction imposed by several empirical studies, only holds under certain (rather restrictive) conditions on the variables in the model. The test is then applied to formally evaluate this shape restriction, and it is found that reservation wage functions (conditional on different characteristics) do not decline monotonically.
    Keywords: Reservation Wages ; Test for Monotonicity ; Endogeneity ; Control Function ; Unobservable Instruments
    JEL: C14 C36 C54 J64
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:991&r=ecm
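    Sketch: a much-simplified, exogenous-regressor version of the kind of second-order U-statistic such tests are built on: a locally weighted Kendall-type statistic that turns large and positive when the regression tends to increase somewhere (the control-function step and the non-standard critical values are beyond this toy).
```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(0, 1, n)
y = -1.5 * x + 0.3 * rng.standard_normal(n)    # monotonically decreasing truth

h = 0.1                                        # bandwidth (illustrative)
i, j = np.triu_indices(n, k=1)                 # all pairs: a second-order U-statistic
kern = np.exp(-((x[i] - x[j]) / h) ** 2)       # downweight distant pairs
stat = np.sum(np.sign(y[j] - y[i]) * np.sign(x[j] - x[i]) * kern) / np.sum(kern)
print(stat)   # markedly negative here; large positive values speak against a decline
```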
  9. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: The Hidden Markov Model (HMM) has been widely used in regime classification and turning point detection for econometric series since the seminal paper of Hamilton (1989). The present paper shows that when an HMM is used to detect turning points in cyclical series, detection accuracy suffers when the data exhibit high volatility or combine multiple types of cycles with different frequency bands. Moreover, outliers are frequently misidentified as turning points in the HMM framework. The paper also shows that these issues can be resolved by methods based on wavelet multi-resolution analysis, owing to their ability to decompose a series into different frequency bands. By providing both frequency and time resolution, the wavelet power spectrum can identify the process dynamics at various resolution levels. The information underlying the data at different frequency bands can thus be extracted by wavelet decomposition, and outliers can be detected in the high-frequency wavelet detail. A Monte Carlo experiment shows that detection accuracy improves substantially when the HMM is combined with the wavelet approach. An empirical example is illustrated using US GDP growth rate data.
    Keywords: HMM; turning point; wavelet; outlier
    JEL: C22 C38 C63
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_014&r=ecm
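    Sketch: one way to wire the pipeline together (assumes the PyWavelets and hmmlearn packages): zero out the high-frequency detail levels that carry noise and outliers, then fit a two-state Gaussian HMM to the smoothed series and read turning points off the regime switches.
```python
import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(6)
states = (np.sin(np.linspace(0, 12, 512)) > 0).astype(float)   # toy cycle
y = 0.8 * states - 0.4 + 0.3 * rng.standard_normal(512)        # noisy observed series

coeffs = pywt.wavedec(y, "db4", level=4)
coeffs[-1][:] = 0.0                               # discard finest detail level
coeffs[-2][:] = 0.0                               # and the next finest
smooth = pywt.waverec(coeffs, "db4")[:len(y)]

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
hmm.fit(smooth.reshape(-1, 1))
regime = hmm.predict(smooth.reshape(-1, 1))
print(np.flatnonzero(np.diff(regime) != 0)[:10])  # candidate turning points
```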
  10. By: Marco Fattore (Dipartimento di Metodi Quantitativi per l'Economia e le Scienze Aziendali, Università degli Studi di Milano-Bicocca); Matteo Pelagatti (Dipartimento di Statistica, Università degli Studi di Milano-Bicocca); Giorgio Vittadini (Dipartimento di Metodi Quantitativi per l'Economia e le Scienze Aziendali, Università degli Studi di Milano-Bicocca)
    Abstract: In this paper, we propose a new least-squares based procedure to extract exogenous and endogenous latent variables in formative-reflective structural equation models. The procedure is a valuable alternative to PLS-PM and LISREL; it is fully consistent with the causal structure of formative-reflective schemes and extracts both the structural parameters and the factor scores, without identification or indeterminacy problems. The algorithm can be applied to virtually any kind of formative-reflective scheme, with unidimensional and even multidimensional formative blocks. To show the effectiveness of the proposal, some simulated examples are discussed. A real data application, pertaining to customer equity management, is also provided, comparing the outputs of our approach with those of PLS-PM, which may produce inconsistent results when applied to formative-reflective schemes.
    Keywords: path model, formative-reflective model, least squares, reduced rank regression, PLS-PM
    Date: 2012–03–04
    URL: http://d.repec.org/n?u=RePEc:mis:wpaper:20120302&r=ecm
  11. By: Jorge E. Galán; Helena Veiga; Michael P. Wiper
    Abstract: Estimation of the one-sided error component in stochastic frontier models may erroneously attribute firm characteristics to inefficiency if heterogeneity is unaccounted for. However, it is in general not clear in which component of the error distribution the covariates should be included. In the classical context, some studies include covariates in the scale parameter of the inefficiency distribution, which preserves the shape of that distribution. We extend this idea to Bayesian inference for stochastic frontier models capturing both observed and unobserved heterogeneity under half-normal, truncated normal and exponential inefficiency distributions. We use the WinBUGS package to implement our approach throughout. Our findings, based on two real data sets, illustrate the effects on shrinking and separating individual posterior efficiencies when heterogeneity affects the scale of the inefficiency. We also find that including unobserved heterogeneity remains relevant when no observable covariates are available.
    Keywords: Stochastic Frontier Models, Heterogeneity, Bayesian Inference
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws121007&r=ecm
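    Sketch: a small simulation of the point at issue (all names and values invented): the inefficiency term is half-normal with a scale that depends on a firm covariate z, so scaling preserves the half-normal shape, and ignoring z would push its effect into measured inefficiency.
```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
z = rng.standard_normal(n)                              # observed firm characteristic
x = rng.uniform(1, 10, n)                               # log input
u = np.abs(rng.standard_normal(n)) * np.exp(0.5 * z)    # half-normal, scale exp(0.5 z)
v = 0.2 * rng.standard_normal(n)                        # symmetric noise
y = 1.0 + 0.6 * x + v - u                               # stochastic production frontier
print(np.corrcoef(z, u)[0, 1])                          # heterogeneity drives inefficiency
```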
  12. By: Rama Cont (LPMA - Laboratoire de Probabilités et Modèles Aléatoires - CNRS : UMR7599 - Université Paris VI - Pierre et Marie Curie - Université Paris VII - Paris Diderot); Lakshithe Wagalath (LPMA - Laboratoire de Probabilités et Modèles Aléatoires - CNRS : UMR7599 - Université Paris VI - Pierre et Marie Curie - Université Paris VII - Paris Diderot)
    Abstract: We propose a tractable framework for quantifying the impact of fire sales on the volatility and correlations of asset returns in a multi-asset setting. Our results make it possible to quantify the impact of fire sales on the covariance structure of asset returns and provide a quantitative explanation for the spikes in volatility and correlations observed during liquidations of large portfolios. They also allow the impact and magnitude of fire sales to be estimated from observed market prices: we give conditions for the identifiability of model parameters from time series of asset prices, propose an estimator for the magnitude of fire sales in each asset class, and study the consistency and large-sample properties of the estimator. We illustrate our estimation methodology with two empirical examples: the hedge fund losses of August 2007 and the Great Deleveraging following the Lehman default.
    Keywords: fire sales ; endogenous risk ; systemic risk ; liquidity ; financial econometrics ; correlation ; volatility
    Date: 2012–05–12
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00697224&r=ecm
  13. By: Carlo Sguera; Pedro Galeano; Rosa E. Lillo
    Abstract: Functional data are becoming increasingly available and tractable thanks to recent technological advances. We enlarge the number of available functional depths by defining two new depth functions for curves. Both depths are based on a spatial approach: the functional spatial depth (FSD), which shows an interesting connection with the functional extension of the notion of spatial quantiles, and the kernelized functional spatial depth (KFSD), which is useful for studying functional samples that require an analysis at a local level. We then consider supervised functional classification problems, focusing in particular on cases in which the samples may contain outlying curves. For these situations, some robust methods based on the use of functional depths are available. By means of a simulation study, we show how FSD and KFSD perform as depth functions for these depth-based methods. The results indicate that a spatial depth-based classification approach may prove helpful when the datasets are contaminated, and that in general it is stable and satisfactory compared with a benchmark procedure such as the functional k-nearest-neighbor classifier. Finally, we illustrate our approach with a real dataset.
    Keywords: Depth notion, Spatial functional depth, Supervised functional classification, Depth-based method, Outliers
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws120906&r=ecm
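    Sketch: the functional spatial depth of a discretized curve as the definition suggests (one reading of FSD; the kernelized variant is left aside): one minus the norm of the average unit direction from the sample curves to the candidate curve.
```python
import numpy as np

def functional_spatial_depth(x, sample):
    """x: (m,) discretized curve; sample: (n, m) array of curves."""
    diffs = x - sample                                 # directions toward x
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    units = diffs / np.where(norms == 0, 1.0, norms)   # guard coincident curves
    return 1.0 - np.linalg.norm(units.mean(axis=0))

t = np.linspace(0, 1, 50)
rng = np.random.default_rng(8)
sample = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal((40, 50))
print(functional_spatial_depth(sample.mean(axis=0), sample))  # central: near 1
print(functional_spatial_depth(sample[0] + 3.0, sample))      # outlying: near 0
```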
  14. By: Eva Boj (Dept. de Matematica Economica, Financera i Actuarial, Univ. de Barcelona, Diagonal 690, 08034 Barcelona, Spain.); Pedro Delicado (Universitat Politecnica de Catalunya); Josep Fortiana (Universitat de Barcelona); Anna Esteve (CEEISCAT); Adria Caballe (Universitat Politecnica de Catalunya)
    Abstract: This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept, then by localizing. Distances between individuals are the only predictor information needed to fit these models; they are therefore applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
    Keywords: Distance-based prediction, Generalized Linear Model, Local Likelihood, Iteratively Weighted Least Squares, R
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap2012-11&r=ecm
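    Sketch: the core distance-based idea in Python rather than R (a stand-in for dbstats, not its API): with only a distance matrix between individuals, a local prediction weights neighbours' responses by a kernel of their distance. This is the identity-link Gaussian special case; the GLM extension would run iteratively weighted least squares with such weights.
```python
import numpy as np

def local_distance_predict(D_new, y, h=1.0):
    """D_new: (m, n) distances from m new points to n training points."""
    w = np.exp(-(D_new / h) ** 2)          # Gaussian weights in distance
    w /= w.sum(axis=1, keepdims=True)
    return w @ y                           # locally weighted mean response

rng = np.random.default_rng(9)
Xtr = rng.uniform(0, 1, (100, 3))
ytr = Xtr @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
Xte = rng.uniform(0, 1, (5, 3))
D = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
print(local_distance_predict(D, ytr, h=0.3))
```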
  15. By: Edward P. Herbst
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2012-35&r=ecm
  16. By: Jean-Baptiste Gossé (CEPN - Centre d'économie de l'Université de Paris Nord - CNRS : UMR7115 - Université Paris XIII - Paris Nord); Cyriac Guillaumin (CREG - Centre de recherche en économie de Grenoble - Université Pierre Mendès-France - Grenoble II : EA4625)
    Abstract: On 10 October 2011, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Economics to Thomas Sargent and Christopher Sims "for their empirical research on cause and effect in the macroeconomy". This article presents the work for which Sims was honoured. We begin by retracing his academic career, and then give a concise overview of the contributions for which he was rewarded. Faced with the shortcomings of Keynesian-inspired macroeconometric models, Sims (1980) formulated his famous critique and proposed a multivariate modelling approach whose only restrictions are the choice of variables and the number of lags included. We conclude with a survey of the extensive research prompted by Sims's pioneering work.
    Keywords: Sims, VAR processes
    Date: 2011–11–19
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00642920&r=ecm
  17. By: Charles Romeo (Economic Analysis Group, Antitrust Division, U.S. Department of Justice)
    Abstract: It is well known that random parameters specifications can generate upward-sloping demands for a subset of products in the data. Nevo (2001), for example, found 0.7 percent of demands to be upward sloping. Possibly less well known is that demand system estimates can imply margins outside the theoretical bounds for profit maximization. If such violations are numerous enough, they can confound merger simulation exercises. Using Lerner indices for multiproduct firms playing static Bertrand games, we find that up to 35 percent of implied margins for beer are outside the bounds. We characterize downward-sloping demand and the theoretical bounds for profit maximization as prior information and extend the GMM objective function, incorporating inequality moments for product-level own-elasticities and brand-level or product-level Lerner indices. These moments impose a cost when the inequality is violated and equal zero otherwise. Very few violations remain when an inequality-constrained estimator is used. Importantly, the unconstrained GMM objective has multiple minima, while the constrained objective has only one minimum when the product-level constraints are used in our illustration. This is valuable for policy purposes, as it enables one to limit attention to a single theoretically consistent model. Inputs to merger simulations are likewise consistent with economic theory, and, as a result, confidence in the output is increased. In a second innovation, this paper introduces merger simulation for static Stackelberg price competition games. Our illustration uses beer data, a perfect vehicle for introducing Stackelberg games, as the economics literature and industry trade press have long considered Anheuser-Busch to be the industry price leader. We find evidence of positive pre-merger price conjectures consistent with beer brands being strategic complements. Allowing the leader to update its conjectures in response to a merger produces dramatically different post-merger price and share changes relative to Bertrand. The Stackelberg conjectures act as a strategic tool that allows post-merger product repositioning unavailable under Bertrand.
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:doj:eagpap:201201&r=ecm
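    Sketch: the structure of the inequality moments in a toy one-parameter GMM problem (all numbers invented): the added moments are zero when theory is satisfied and positive when it is violated, so violations are priced into the objective rather than imposed as hard constraints.
```python
import numpy as np
from scipy.optimize import minimize

def moments(theta):
    fit = np.array([theta[0] - 4.0])            # toy data-fit moment: wants theta = 4
    lerner = 0.3 * theta[0]                     # toy implied Lerner index
    ineq = np.array([max(0.0, lerner - 1.0),    # margin must satisfy Lerner <= 1
                     max(0.0, -lerner)])        # ... and Lerner >= 0
    return np.concatenate([fit, ineq])

def objective(theta, W):
    g = moments(theta)
    return g @ W @ g

W = np.diag([1.0, 50.0, 50.0])                  # heavy weight on violations
res = minimize(objective, x0=[1.0], args=(W,), method="Nelder-Mead")
print(res.x)     # pulled from 4 toward the Lerner = 1 boundary (about 3.45)
```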
  18. By: Matteo Pelagatti (Dipartimento di Statistica, Università degli Studi di Milano-Bicocca); Emilio Colombo (Dipartimento di Economia, Università degli Studi di Milano-Bicocca)
    Abstract: In this paper we show by theoretical arguments that, even if the law of one price holds for all the goods traded in two countries, real exchange rates based on CPI are not mean-reverting and therefore statistical tests based on them should reject the PPP hypothesis. We prove that such real exchange rates are neither stationary nor integrated, and so both unit-root and stationarity tests should reject the null according to their power properties. The performance of the most common unit-root and stationarity tests in situations in which the law of one price holds is studied by means of a simulation experiment, based on real European CPI weights and prices.
    Keywords: Purchasing power parity, Law of one price, Stationarity, Unit root
    Date: 2012–05–03
    URL: http://d.repec.org/n?u=RePEc:mis:wpaper:20120501&r=ecm
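    Sketch: a deliberately simplified version of the experiment described (weights and DGP invented here; in the paper's setting the process is neither stationary nor integrated): good-level prices obey the law of one price, CPIs aggregate them with different weights, and a unit-root test on the CPI-based real exchange rate then fails to support PPP.
```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(10)
T, G = 500, 10
common = rng.standard_normal((T, G)).cumsum(axis=0)    # common good-level log prices
p_home = common + 0.1 * rng.standard_normal((T, G))    # LOP holds up to noise
p_foreign = common + 0.1 * rng.standard_normal((T, G))

w_home = rng.dirichlet(np.ones(G))                     # country-specific CPI weights
w_foreign = rng.dirichlet(np.ones(G))
q = p_home @ w_home - p_foreign @ w_foreign            # log real exchange rate

print(adfuller(q)[1])      # p-value: the unit-root null is typically not rejected
```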

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.