nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒08‒30
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Asymptotic F and t Tests in an Efficient GMM Setting By Hwang, Jungbin; Sun, Yixiao
  2. Comparing Indirect Inference and Likelihood testing: asymptotic and small sample results By Meenagh, David; Minford, Patrick; Wickens, Michael; Xu, Yongdeng
  3. Do We Need Ultra-High Frequency Data to Forecast Variances? By Georgiana-Denisa Banulescu; Bertrand Candelon; Christophe Hurlin; Sébastien Laurent
  4. Estimation of Fractionally Integrated Panels with Fixed Effects and Cross-Section Dependence By Yunus Emre Ergemen; Carlos Velasco
  5. Estimation and Inference for Distribution Functions and Quantile Functions in Endogenous Treatment Effect Models By Yu-Chin Hsu; Robert P. Lieli; Tsung-Chih Lai
  6. Covariate-augmented unit root tests with mixed-frequency data By Cláudia Duarte
  7. Multivariate Dynamic Copula Models: Parameter Estimation and Forecast Evaluation By Aepli, Matthias D.; Frauendorfer, Karl; Fuess, Roland; Paraschiv, Florentina
  8. New Entropy Restrictions and the Quest for Better Specified Asset Pricing Models By Bakshi, Gurdip; Chabi-Yo, Fousseni
  9. Large Scale Covariance Estimates for Portfolio Selection By Francesco Lautizi
  10. Predicting Stock Returns in the Capital Asset Pricing Model Using Quantile Regression and Belief Functions By K Autchariyapanitkul; S Chanaim; S Sriboonchitta; T Denoeux
  11. Nonlinear dynamic interrelationships between real activity and stock returns By Markku Lanne; Henri Nyberg
  12. Conditional inference trees in dynamic microsimulation - modelling transition probabilities in the SMILE model By Niels Erik Kaaber Rasmussen; Marianne Frank Hansen; Peter Stephensen
  13. The Differences-in-Differences Approach with overlapping differences - Experimental Verification of Estimation Bias By Hans Bækgaard
  14. Autocorrelation robust inference using the Daniell kernel with fixed bandwidth By Javier Hualde; Fabrizio Iacone
  15. The multivariate Beveridge–Nelson decomposition with I(1) and I(2) series By Murasawa, Yasutomo
  16. Copula-Based Factor Model for Credit Risk Analysis By Lu, Meng-Jou; Chen, Cathy Yi-Hsuan; Härdle, Wolfgang Karl

  1. By: Hwang, Jungbin; Sun, Yixiao
    Abstract: This paper considers two-step efficient GMM estimation and inference where the weighting matrix and asymptotic variance matrix are based on the series long run variance estimator. We propose a simple and easy-to-implement modification to the trinity of test statistics in the two-step efficient GMM setting and show that the modified test statistics are all asymptotically F distributed under the so-called fixed-smoothing asymptotics. The modification is multiplicative and involves the J statistic for testing over-identifying restrictions. This leads to convenient asymptotic F tests that use standard F critical values. Simulation shows that, in terms of both size and power, the asymptotic F tests perform as well as the nonstandard tests proposed recently by Sun (2014b) in finite samples. But the F tests are more appealing as the critical values are readily available from standard statistical tables. Compared to the conventional chi-square tests, the F tests are as powerful, but are much more accurate in size.
    Keywords: Social and Behavioral Sciences, Efficient GMM, F distribution, F test, Fixed-smoothing Asymptotics, Heteroskedasticity and Autocorrelation Robust, Two-step GMM
    Date: 2015–08–18
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt1c62d8xf&r=all
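    As a concrete illustration of the series long run variance estimator underlying the fixed-smoothing asymptotics above, here is a minimal Python sketch (not the authors' code; the cosine basis and the function name are illustrative choices):

      import numpy as np

      def series_lrv(v, K):
          """Series long-run variance estimate of a T x p moment process v,
          built from K orthonormal cosine basis functions."""
          T, p = v.shape
          u = v - v.mean(axis=0)                # demean the moment process
          Lam = np.zeros((K, p))
          t = (np.arange(1, T + 1) - 0.5) / T
          for j in range(1, K + 1):
              phi = np.sqrt(2.0) * np.cos(j * np.pi * t)  # j-th basis function
              Lam[j - 1] = (phi @ u) / np.sqrt(T)         # projection coefficient
          return Lam.T @ Lam / K                # average of K outer products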
  2. By: Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School); Xu, Yongdeng
    Abstract: Indirect Inference has been found to have much greater power than the Likelihood Ratio in small samples for testing DSGE models. We look at the asymptotic and small sample properties of these tests to understand why this might be the case. We find that the power of the LR test is undermined when re-estimation of the error parameters is permitted; this offsets the effect of the falseness of the structural parameters on the overall forecast error. Even when the two tests are carried out on a like-for-like basis, Indirect Inference has more power because it uses the distribution restricted by the DSGE model being tested.
    Keywords: Indirect Inference; Likelihood Ratio; DSGE model; structural parameters; error processes
    JEL: C12 C32 C52 E1
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2015/8&r=all
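    A minimal sketch of a generic indirect-inference Wald test of the kind discussed above, with a toy AR(1) standing in for the DSGE model and a two-moment auxiliary statistic standing in for the auxiliary VAR (all names and settings are hypothetical):

      import numpy as np

      def simulate_ar1(rng, phi=0.9, T=200):
          # Stand-in for a DSGE simulator: an AR(1) under the null parameters.
          x = np.zeros(T)
          e = rng.standard_normal(T)
          for t in range(1, T):
              x[t] = phi * x[t - 1] + e[t]
          return x

      def aux_stat(x):
          # Auxiliary-model descriptors: first autocorrelation and variance.
          return np.array([np.corrcoef(x[:-1], x[1:])[0, 1], x.var()])

      def ii_wald(observed, S=500, seed=0):
          # Wald distance of the data's auxiliary statistic from its
          # simulated distribution under the null model.
          rng = np.random.default_rng(seed)
          sims = np.array([aux_stat(simulate_ar1(rng)) for _ in range(S)])
          d = aux_stat(observed) - sims.mean(axis=0)
          return d @ np.linalg.solve(np.cov(sims, rowvar=False), d)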
  3. By: Georgiana-Denisa Banulescu (Maastricht University - univ. Maastricht, LEO - Laboratoire d'économie d'Orléans - UO - Université d'Orléans - CNRS); Bertrand Candelon (Maastricht University - univ. Maastricht); Christophe Hurlin (LEO - Laboratoire d'économie d'Orléans - UO - Université d'Orléans - CNRS); Sébastien Laurent (AMU IAE - Institut d'Administration des Entreprises (IAE) - Aix-en-Provence - AMU - Aix-Marseille Université)
    Abstract: In this paper we study various MIDAS models in which the future daily variance is directly related to past observations of intraday predictors. Our goal is to determine if there exists an optimal sampling frequency in terms of volatility prediction. Via Monte Carlo simulations we show that in a world without microstructure noise, the best model is the one using the highest available frequency for the predictors. However, in the presence of microstructure noise, the use of ultra high-frequency predictors may be problematic, leading to poor volatility forecasts. In the application, we consider two highly liquid assets (i.e., Microsoft and S&P 500). We show that, when using raw intraday squared log-returns for the explanatory variable, there is a "high-frequency wall" or frequency limit above which MIDAS-RV forecasts deteriorate. We also show that an improvement can be obtained when using intraday squared log-returns sampled at a higher frequency, provided they are pre-filtered to account for the presence of jumps, intraday periodicity and/or microstructure noise. Finally, we compare the MIDAS model to other competing variance models including GARCH, GAS, HAR-RV and HAR-RV-J models. We find that the MIDAS model provides equivalent or even better variance forecasts than these models, when it is applied on filtered data.
    Date: 2014–10–26
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-01078158&r=all
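    A minimal sketch of a MIDAS variance regression with exponential Almon weights, a common weighting scheme in this literature (the paper's exact specification may differ; y and X below are hypothetical inputs):

      import numpy as np
      from scipy.optimize import minimize

      def exp_almon(theta, K):
          """Exponential Almon lag weights, normalized to sum to one."""
          k = np.arange(1, K + 1)
          w = np.exp(theta[0] * k + theta[1] * k ** 2)
          return w / w.sum()

      def midas_sse(params, y, X):
          """SSE of y_t = b0 + b1 * sum_k w_k(theta) * x_{t,k}."""
          b0, b1, t1, t2 = params
          w = exp_almon([t1, t2], X.shape[1])
          return np.sum((y - b0 - b1 * (X @ w)) ** 2)

      # y: next-day realized variance; X: T x K matrix of lagged intraday
      # squared log-returns (hypothetical inputs).
      # res = minimize(midas_sse, x0=[0.0, 1.0, 0.0, -0.01], args=(y, X))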
  4. By: Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Velasco (Universidad Carlos III de Madrid)
    Abstract: We consider large N, T panel data models with fixed effects, common factors allowing cross-section dependence, and persistent data and shocks, which are assumed fractionally integrated. In a basic setup, the main interest is on the fractional parameter of the idiosyncratic component, which is estimated in first differences after factor removal by projection on the cross-section average. The pooled conditional-sum-of-squares estimate is root-NT consistent but the normal asymptotic distribution might not be centered, requiring the time series dimension to grow faster than the cross-section size for correction. Generalizing the basic setup to include covariates and heterogeneous parameters, we propose individual and common-correlation estimates for the slope parameters, while error memory parameters are estimated from regression residuals. The two parameter estimates are root-T consistent and asymptotically normal and mutually uncorrelated, irrespective of possible cointegration among idiosyncratic components. A study of small-sample performance and an empirical application to realized volatility persistence are included.
    Keywords: Fractional cointegration, factor models, long memory, realized volatility
    JEL: C22 C23
    Date: 2015–08–17
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-35&r=all
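    For intuition, a univariate sketch of the fractional difference filter and the conditional-sum-of-squares (CSS) objective used to estimate the memory parameter (the panel steps, first-differencing and projection on cross-section averages, are only noted in a comment):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def fracdiff(x, d):
          """Apply the fractional difference filter (1-L)^d via its AR expansion."""
          T = len(x)
          pi = np.ones(T)
          for j in range(1, T):
              pi[j] = pi[j - 1] * (j - 1 - d) / j   # pi_j(d) recursion
          return np.array([pi[:t + 1][::-1] @ x[:t + 1] for t in range(T)])

      def css_d(x):
          """CSS estimate of the memory parameter d of a single series."""
          obj = lambda d: np.sum(fracdiff(x, d) ** 2)
          return minimize_scalar(obj, bounds=(-0.49, 1.49), method='bounded').x

      # In the panel setting, the data would first be differenced and
      # projected off the cross-section average to remove fixed effects
      # and common factors before estimating d from the residuals.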
  5. By: Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Robert P. Lieli (Department of Economics Central European University, Budapest and the National Bank of Hungary); Tsung-Chih Lai (Department of Economics, National Taiwan University)
    Abstract: We propose a new monotonizing method to obtain estimators for the distribution functions of potential outcomes among the group of compliers in an endogenous treatment effect model that are monotonically increasing and bounded between zero and one. Corresponding quantile function estimators are obtained by applying the inverse map to the CDF estimators. We show that both estimators converge weakly to zero-mean Gaussian processes. A simulation method is proposed to approximate the limiting processes for uniform inference. A Monte Carlo simulation and an application addressing the effect of fertility on family income illustrate the usefulness of the results.
    Keywords: distribution function, quantile function, treatment effects, instrumental variables, inverse probability weighted estimator
    JEL: C21 C26
    Date: 2015–08
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:15-a003&r=all
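    The monotonization step can be illustrated generically: clip a raw CDF estimate to [0, 1], enforce monotonicity with a running maximum, and invert on a grid for quantiles (a sketch of one simple scheme, not necessarily the authors' method):

      import numpy as np

      def monotonize_cdf(F_raw):
          """Turn a raw (possibly non-monotone) CDF estimate on a grid into
          a proper CDF: clip to [0,1], then take the running maximum."""
          return np.maximum.accumulate(np.clip(F_raw, 0.0, 1.0))

      def quantile_from_cdf(grid, F, tau):
          """Generalized inverse: the smallest grid point with F >= tau."""
          i = min(np.searchsorted(F, tau, side='left'), len(grid) - 1)
          return grid[i]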
  6. By: Cláudia Duarte
    Abstract: Unit root tests typically suffer from low power in small samples, which results in not rejecting the null hypothesis as often as they should. This paper tries to tackle this issue by assessing whether it is possible to improve the power performance of covariate-augmented unit root tests, namely the ADF family of tests, by exploiting mixed-frequency data. We use the mixed data sampling (MIDAS) approach to deal with mixed-frequency data. The results from a Monte Carlo exercise indicate that mixed-frequency tests have better power performance than low-frequency tests. The gains from exploiting mixed-frequency data are greater for near-integrated variables. An empirical illustration using the US unemployment rate is presented.
    JEL: C12 C15 C22
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201507&r=all
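    A sketch of a covariate-augmented ADF regression in the spirit of the tests above, with the covariate imagined as a MIDAS-weighted aggregate of higher-frequency data (illustrative only; critical values for this statistic are nonstandard):

      import numpy as np
      import statsmodels.api as sm

      def cadf_tstat(y, x, p=2):
          # ADF regression of dy_t on the lagged level, p lagged differences
          # and a stationary covariate x; the test statistic is the t-ratio
          # on the lagged level.
          dy = np.diff(y)
          T = len(dy)
          X = np.column_stack(
              [y[p:T]]                                       # lagged level
              + [dy[p - i:T - i] for i in range(1, p + 1)]   # lagged diffs
              + [x[p + 1:T + 1]])                            # covariate
          res = sm.OLS(dy[p:], sm.add_constant(X)).fit()
          return res.tvalues[1]                              # t on the level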
  7. By: Aepli, Matthias D.; Frauendorfer, Karl; Fuess, Roland; Paraschiv, Florentina
    Abstract: This paper introduces multivariate dynamic copula models to account for the time-varying dependence structure in asset portfolios. We first enhance the flexibility of this structure by modeling regimes with multivariate mixture copulas. In our second approach, we derive dynamic elliptical copulas by applying the dynamic conditional correlation (DCC) model to multivariate elliptical copulas. The best-ranked copulas according to both in-sample fit and out-of-sample forecast performance indicate the importance of accounting for time variation. The superiority of the multivariate dynamic Clayton and Student-t models further highlights that positive tail dependence, as well as the capability of capturing asymmetries in the dependence structure, are crucial features of a well-fitting model for an equity portfolio.
    Keywords: Multivariate dynamic copulas, regime-switching copulas, dynamic conditional correlation (DCC) model, forecast performance, tail dependence.
    JEL: C32 C51 C53
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:usg:sfwpfi:2015:13&r=all
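    A minimal sketch of the DCC idea applied to copula normal scores, with fixed dynamics parameters a and b (in practice these are estimated by maximizing the copula likelihood):

      import numpy as np
      from scipy.stats import norm

      def dcc_filter(z, a=0.05, b=0.93):
          """DCC(1,1) correlation filter on normal scores z (T x n), i.e.
          data already mapped through empirical CDFs and norm.ppf."""
          T, n = z.shape
          S = np.corrcoef(z, rowvar=False)   # unconditional target
          Q = S.copy()
          R = np.empty((T, n, n))
          for t in range(T):
              d = 1.0 / np.sqrt(np.diag(Q))
              R[t] = Q * np.outer(d, d)      # normalize Q to a correlation
              Q = (1 - a - b) * S + a * np.outer(z[t], z[t]) + b * Q
          return R

      # u = ranks/(T+1) from each margin; z = norm.ppf(u) are the scores.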
  8. By: Bakshi, Gurdip (University of MD); Chabi-Yo, Fousseni (OH State University)
    Abstract: Under the setting that stochastic discount factors (SDFs) jointly price a vector of returns, this paper features entropy-based restrictions on SDFs, and on their correlated multiplicative components, to evaluate asset pricing models. Specifically, our entropy bound on the square of the SDFs is intended to capture the time variation in the conditional volatility of the log SDF as well as distributional non-normalities. Each entropy bound can be inferred from the mean and the variance-covariance matrix of the vector of asset returns. Extending extant treatments, we develop entropy codependence measures, and our bounds generalize to multi-period SDFs. Our approach offers ways to improve model performance.
    JEL: C51 C52 G12
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:ecl:ohidic:2014-07&r=all
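    For intuition, SDF entropy and the classic return-based lower bound it must satisfy can be computed in a few lines (the paper's bounds on the square of the SDF are more elaborate; this is only the textbook version, with hypothetical inputs m, R, Rf):

      import numpy as np

      def entropy(m):
          """Entropy of a positive series: L(m) = log E[m] - E[log m]."""
          return np.log(np.mean(m)) - np.mean(np.log(m))

      # Classic lower bound: for any gross return R priced by m,
      # L(m) >= E[log R] - log R_f, so a candidate SDF can be screened by:
      # bound = np.mean(np.log(R)) - np.log(Rf)
      # passes = entropy(m) >= bound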
  9. By: Francesco Lautizi (DEF, Università di Roma "Tor Vergata")
    Abstract: We propose an estimator of the covariance matrix (SWSE) of a large number of assets. This estimator improves on the Similarity-Weighted Estimator (SWE) introduced in Münnix et al. (2014) by combining it with the shrinkage estimator of the sample covariance matrix towards the market factor developed by Ledoit and Wolf (2003). We compare the performance of our estimator to some alternatives already available from the literature and the industry. For this purpose we analyse both statistical and economic measures associated with the Global Minimum Variance (GMV) portfolio, composed of the stocks included in the S&P 500 index and computed using the different estimators considered in our comparison.
    Date: 2015–08–07
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:353&r=all
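    A sketch of the shrinkage-toward-market-factor step and the resulting GMV weights (the fixed shrinkage intensity delta is an assumption here; Ledoit and Wolf estimate it, and the similarity weighting of the SWE is not shown):

      import numpy as np

      def gmv_weights(returns, market, delta=0.5):
          # Shrink the sample covariance toward a single-factor (market
          # model) target, then compute Global Minimum Variance weights.
          X = returns - returns.mean(axis=0)
          m = market - market.mean()
          beta = (X.T @ m) / (m @ m)                  # market betas
          resid = X - np.outer(m, beta)
          F = m.var() * np.outer(beta, beta) + np.diag(resid.var(axis=0))
          S = np.cov(returns, rowvar=False)           # sample covariance
          Sigma = delta * F + (1 - delta) * S
          ones = np.ones(Sigma.shape[0])
          w = np.linalg.solve(Sigma, ones)            # w proportional to Sigma^{-1} 1
          return w / w.sum()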
  10. By: K Autchariyapanitkul (Faculty of Economics, Chiang Mai University); S Chanaim (Faculty of Economics, Chiang Mai University); S Sriboonchitta (Faculty of Economics, Chiang Mai University); T Denoeux (Heudiasyc - Heuristique et Diagnostic des Systèmes Complexes [Compiègne] - Université de Technologie de Compiègne - CNRS, Labex MS2T - Laboratoire d'Excellence "Maîtrise des Systèmes de Systèmes Technologiques" - Université de Technologie de Compiègne - CNRS)
    Abstract: We consider an inference method for prediction based on belief functions in quantile regression with an asymmetric Laplace distribution. We apply this method to the capital asset pricing model to estimate the beta coefficient and measure volatility under various market conditions at given quantiles. Likelihood-based belief functions are constructed from historical data of the securities in the S&P500 market. The results give us evidence on the systematic risk, in the form of a consonant belief function specified from the asymmetric Laplace distribution likelihood function given recorded data. Finally, we use the method to forecast the return of an individual stock.
    Date: 2014–09–26
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01127790&r=all
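    The quantile-CAPM ingredient can be sketched with standard quantile regression (the belief-function layer built on the asymmetric Laplace likelihood is not shown; the excess-return inputs are hypothetical):

      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      def capm_beta_at_quantile(stock_ex, market_ex, tau=0.5):
          """Quantile-regression CAPM: beta of excess stock returns on
          excess market returns at quantile tau."""
          X = sm.add_constant(market_ex)
          res = QuantReg(stock_ex, X).fit(q=tau)
          return res.params[1]     # slope = quantile-specific beta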
  11. By: Markku Lanne (University of Helsinki and CREATES); Henri Nyberg (University of Helsinki and University of Turku)
    Abstract: We explore the differences between the causal and noncausal vector autoregressive (VAR) models in capturing the real activity–stock return relationship. Unlike the conventional linear VAR model, the noncausal VAR model is capable of accommodating various nonlinear characteristics of the data. In quarterly U.S. data, we find strong evidence in favor of noncausality, and the best causal and noncausal VAR models imply quite different dynamics. In particular, the linear VAR model appears to underestimate the importance of the stock return shock for real activity, and of the real activity shock for the stock return.
    Keywords: Noncausal VAR model, non-Gaussianity, generalized forecast error variance decomposition, business cycles, fundamentals.
    JEL: C32 C58 E17 E44
    Date: 2015–08–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-36&r=all
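    A purely noncausal AR(1) is easy to simulate, which conveys the key idea that the process depends on future shocks and must be non-Gaussian to be identified (a toy sketch, not the paper's estimation method):

      import numpy as np
      from scipy.stats import t as student_t

      def simulate_noncausal_ar(phi=0.8, T=500, df=3, seed=0):
          """Purely noncausal AR: x_t = phi * x_{t+1} + e_t with
          heavy-tailed errors; generated backwards in time."""
          rng = np.random.default_rng(seed)
          e = student_t.rvs(df, size=T + 100, random_state=rng)
          x = np.zeros(T + 100)
          for s in range(T + 98, -1, -1):   # recurse from the future back
              x[s] = phi * x[s + 1] + e[s]
          return x[:T]                      # drop the burn-in at the end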
  12. By: Niels Erik Kaaber Rasmussen; Marianne Frank Hansen (Danish Rational Economic Agents Model, DREAM); Peter Stephensen (Danish Rational Economic Agents Model, DREAM)
    Abstract: Determining transition probabilities is a vital part of dynamic microsimulation models. Modelling individual behaviour by a large number of covariates reduces the number of observations with identical characteristics, which challenges the determination of the response structure. Data mining using conditional inference trees (CTREEs) is found to be a useful tool for quantifying a discrete response variable conditional on multiple individual characteristics, and is generally believed to capture covariate interactions better than traditional parametric discrete choice models, i.e. logit and probit models. Deriving transition probabilities from conditional inference trees is a core method used in the SMILE microsimulation model forecasting household demand for dwellings. The properties of CTREEs are investigated through an empirical application aiming to describe the household decision to move, based on a number of covariates representing various demographic and dwelling characteristics. Using recursive binary partitioning, decision trees group individuals' responses according to a selected number of conditioning covariates. Recursively splitting the population by characteristics results in smaller groups consisting of individuals with identical behaviour. Classification is induced by recognized statistical procedures evaluating heterogeneity and the number of observations within the group exposed to a potential split. If a split is statistically validated, binary partitioning results in two new tree nodes, each of which can potentially split further after the next evaluation. The recursion stops when indicated by the statistical test procedures. Nodes caused by the final split are called terminal nodes. The final tree is characterized by a minimum of variation between observations within a terminal node and maximum variation across terminal nodes. For each terminal node a transition probability is calculated and used to describe the response of individuals with the same covariate structure as that characterizing the given terminal node. That is, if a terminal node consists of single males aged 50 and above living in rental housing, individuals with such characteristics are assumed to behave identically with respect to moving when transitioning from one state to another.
    Keywords: conditional inference tree, CTREE, dynamic microsimulation, modelling transition probability
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:dra:wpaper:201302&r=all
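    A sketch of tree-based transition probabilities; note that CTREE proper (e.g. partykit::ctree in R) selects splits via permutation tests, so sklearn's CART with a minimum leaf size is only a stand-in here:

      from sklearn.tree import DecisionTreeClassifier

      def transition_probs(X, moved, X_new):
          """Fit a tree on covariates X (age, household type, tenure, ...)
          and a binary 'moved' outcome; each leaf's class frequency is the
          transition probability assigned to everyone in that leaf."""
          tree = DecisionTreeClassifier(min_samples_leaf=200, random_state=0)
          tree.fit(X, moved)
          return tree.predict_proba(X_new)[:, 1]   # P(move | covariates)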
  13. By: Hans Bækgaard
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:dra:wpaper:201403&r=all
  14. By: Javier Hualde; Fabrizio Iacone
    Abstract: We consider alternative asymptotics for frequency domain estimates of the long run variance, in which the bandwidth is kept fixed. For a weakly dependent process, this does not yield a consistent estimate of the long run variance, but the standardized mean has a t limit distribution, which, for any given bandwidth, appears to be more precise than the traditional Gaussian limit. In the presence of fractionally integrated data, the limit distribution of the estimate is not standard, and we derive critical values for the standardized mean for various bandwidths. Again, we find that this asymptotic result provides a better approximation than other proposals such as the Memory Autocorrelation Consistent (MAC) estimate. In a multivariate setup, fixed-bandwidth asymptotics may also be used to characterize the limit distribution of estimates of the cointegrating parameter, which differs substantially from the conventional narrow-band asymptotics.
    Keywords: long run variance estimation, long memory, large-m and fixed-m asymptotic theory
    JEL: C32
    Date: 2015–08
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:15/14&r=all
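    A minimal sketch of the fixed-m statistic for a weakly dependent series: average the first m periodogram ordinates, studentize the mean, and compare with t critical values with 2m degrees of freedom (the standard fixed-m result; the fractional and multivariate cases in the paper are not covered):

      import numpy as np

      def fixed_m_tstat(x, m):
          """Studentized mean using the average of the first m periodogram
          ordinates (the Daniell estimate at frequency zero) as the
          long-run variance estimate."""
          T = len(x)
          d = np.fft.fft(x - x.mean())
          I = (np.abs(d[1:m + 1]) ** 2) / (2 * np.pi * T)  # periodogram
          lrv = 2 * np.pi * I.mean()                       # 2*pi*f_hat(0)
          return np.sqrt(T) * x.mean() / np.sqrt(lrv)      # compare to t_{2m}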
  15. By: Murasawa, Yasutomo
    Abstract: The consumption Euler equation implies that the output growth rate and the real interest rate are of the same order of integration; i.e., if the real interest rate is I(1), then so is the output growth rate and hence log output is I(2). To estimate the natural rates and gaps of macroeconomic variables jointly, this paper develops the multivariate Beveridge–Nelson decomposition with I(1) and I(2) series. The paper applies the method to Japanese data during 1980Q1–2013Q3 to estimate the natural rates and gaps of output, inflation, interest, and unemployment jointly.
    Keywords: gap; natural rate; trend–cycle decomposition; unit root
    JEL: C32 C82 E32
    Date: 2015–08–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:66319&r=all
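    For intuition, the univariate I(1) special case of the Beveridge–Nelson decomposition when the growth rate follows an AR(1) (the paper's multivariate construction with I(1) and I(2) series generalizes this long-horizon forecast logic):

      import numpy as np

      def bn_trend_ar1(y, mu, phi):
          """Beveridge-Nelson permanent component when Delta y_t is AR(1)
          with mean mu and coefficient phi (|phi| < 1):
          trend_t = y_t + phi/(1 - phi) * (Delta y_t - mu)."""
          dy = np.diff(y)
          return y[1:] + (phi / (1.0 - phi)) * (dy - mu)

      # The gap (cycle) is y minus the trend, i.e. y[1:] - bn_trend_ar1(...).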
  16. By: Lu, Meng-Jou; Chen, Cathy Yi-Hsuan; Härdle, Wolfgang Karl
    Abstract: A standard quantitative method to assess credit risk employs a factor model based on joint multivariate normal distribution properties. By extending a one-factor Gaussian copula model to make a more accurate default forecast, this paper proposes to incorporate a state-dependent recovery rate into the conditional factor loading, and to model the two by sharing a unique common factor. The common factor governs the default rate and recovery rate simultaneously and implicitly creates their association. In accordance with Basel III, this paper shows that the tendency to default is governed more by systematic risk than by idiosyncratic risk during a hectic period. Among the models considered, the one with a random factor loading and a state-dependent recovery rate turns out to perform best in default prediction.
    Keywords: Factor Model, Conditional Factor Loading, State-Dependent Recovery Rate
    JEL: C38 C53 F34 G11 G17
    Date: 2015–08
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-042&r=all
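    A minimal sketch of the one-factor Gaussian copula loss simulation that the paper extends (here the recovery rate is fixed at an assumed 40%; in the paper it is state-dependent and driven by the same common factor):

      import numpy as np
      from scipy.stats import norm

      def portfolio_losses(pd, rho, n_sims=100_000, seed=0):
          """One-factor Gaussian copula: obligor i defaults when
          sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(pd_i), with Z the
          common (systematic) factor and pd a vector of default probs."""
          rng = np.random.default_rng(seed)
          k = norm.ppf(pd)                              # default thresholds
          Z = rng.standard_normal((n_sims, 1))          # common factor
          eps = rng.standard_normal((n_sims, len(pd)))  # idiosyncratic
          defaults = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps < k
          return defaults.mean(axis=1) * (1 - 0.40)     # loss, 40% recovery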

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.