nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒08‒09
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Variance targeting estimation of multivariate GARCH models By Francq, Christian; Horvath, Lajos; Zakoian, Jean-Michel
  2. On the relevance of weaker instruments By Bertille Antoine; Eric Renault
  3. Contrasting Bayesian and Frequentist Approaches to Autoregressions: the Role of the Initial Condition By Marek Jarociński; Albert Marcet
  4. Testing for Predictability in Panels of Small Time Series Dimensions with an Application to Chinese Stock Returns By Joakim Westerlund; Paresh Kumar Narayan
  5. How good are out of sample forecasting tests on DSGE models? By Minford, Patrick; Xu, Yongden; Zhou, Peng
  6. Filtering and Prediction in Noncausal Processes By Christian Gouriéroux; Joann Jasiak
  7. Fixed T Dynamic Panel Data Estimators with Multi-Factor Errors By Juodis, Arturas; Sarafidis, Vasilis
  8. Goodness-of-fit test for randomly censored data based on maximum correlation By Ewa Strzalkowska-Kominiak; Aurea Grané
  9. Adaptive Estimation of a Density Function using Beta Kernels By Karine Bertin; Nicolas Klutchnikoff
  10. Spatial Effects in Dynamic Conditional Correlations By E. Otranto; M. Mucciardi; P. Bertuccelli
  11. A Practical Note on the Determination of the Number of Factors Using Information Criteria with Data-Driven Penalty By Joakim Westerlund; Sagarika Mishra
  12. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input By Peter Martey Addo
  13. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm By Tetsuya Takaishi
  14. On the Prediction Performance of the Lasso By Arnak S. Dalalyan; Mohamed Hebiri; Johannes Lederer
  15. A Random Coefficient Approach to the Predictability of Stock Returns in Panels By Joakim Westerlund
  16. A Factor Analytical Approach to the Efficient Futures Market Hypothesis By Joakim Westerlund; Milda Norkute; Paresh K Narayan
  17. A unified structural equation modeling approach for the decomposition of rank-dependent indicators of socioeconomic inequality of health By KESSELS, Roselinde; ERREYGERS, Guido
  18. Tolerating defiance? Local average treatment effects without monotonicity By Chaisemartin, Clément de

  1. By: Francq, Christian; Horvath, Lajos; Zakoian, Jean-Michel
    Abstract: We establish the strong consistency and the asymptotic normality of the variance-targeting estimator (VTE) of the parameters of the multivariate CCC-GARCH($p,q$) processes. This method alleviates the numerical difficulties encountered in the maximization of the quasi-likelihood by using an estimator of the unconditional variance. It is shown that the distribution of the VTE can be consistently estimated by a simple residual bootstrap technique. We also use the VTE for testing the model adequacy. A test statistic in the spirit of the score test is constructed, and its asymptotic properties are derived under the null assumption that the model is well specified. An extension of the VT method to asymmetric CCC-GARCH models incorporating leverage effects is studied. Numerical illustrations are provided and an empirical application based on daily exchange rates is proposed.
    Keywords: Adequacy Test for CCC-GARCH models, Bootstrap, Leverage Effect, Quasi Maximum Likelihood Estimation, Variance Targeting Estimator
    JEL: C13 C22
    Date: 2014–08–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:57794&r=ecm
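    To make the variance-targeting idea in item 1 concrete, here is a minimal sketch for a univariate GARCH(1,1) — a deliberate simplification of the paper's multivariate CCC-GARCH setting. The sample variance pins down the intercept, so the quasi-likelihood is maximized over the two remaining parameters only; the function names and starting values below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def vt_garch11_fit(y):
    """Variance-targeting QML for a GARCH(1,1): sigma2_t = w + a*y_{t-1}^2 + b*sigma2_{t-1}.
    The intercept is tied to the sample variance via w = s2*(1 - a - b),
    so only (a, b) are optimized numerically."""
    s2 = np.var(y)  # moment estimator of the unconditional variance

    def neg_quasi_loglik(theta):
        a, b = theta
        if a < 0 or b < 0 or a + b >= 1:
            return np.inf                   # stay inside the stationarity region
        w = s2 * (1.0 - a - b)              # variance-targeting restriction
        sig2 = np.empty(len(y))
        sig2[0] = s2                        # initialize at the unconditional variance
        for t in range(1, len(y)):
            sig2[t] = w + a * y[t - 1] ** 2 + b * sig2[t - 1]
        return 0.5 * np.sum(np.log(sig2) + y ** 2 / sig2)

    res = minimize(neg_quasi_loglik, x0=[0.05, 0.90], method="Nelder-Mead")
    a, b = res.x
    return {"omega": s2 * (1 - a - b), "alpha": a, "beta": b}

rng = np.random.default_rng(0)
# simulate a GARCH(1,1) path to test the estimator
n, w0, a0, b0 = 2000, 0.1, 0.1, 0.8
y, sig2 = np.empty(n), w0 / (1 - a0 - b0)
for t in range(n):
    y[t] = np.sqrt(sig2) * rng.standard_normal()
    sig2 = w0 + a0 * y[t] ** 2 + b0 * sig2
print(vt_garch11_fit(y))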
  2. By: Bertille Antoine (Simon Fraser University); Eric Renault (Brown University)
    Abstract: We consider models defined by a set of moment restrictions that may be subject to weak identification. More specifically, we study the asymptotic properties of the standard GMM estimator and the Hansen J-test when additional moment restrictions that are weaker than the original ones are available. We show that the consistency of the GMM estimator is not affected by such restrictions even when they are invalid. We also provide conditions under which these restrictions may improve the efficiency of the GMM estimator. Finally, we study the behavior of the Hansen J-test to assess the compatibility between existing restrictions and additional ones in order to detect “spurious” identification that may rely on invalid moments. Our theoretical characterization of the J-test reveals that the J-test is approximately akin to checking whether the value of the parameter identified by the existing restrictions is conformable to the information provided by the additional ones. Our simulations confirm that the issue with the standard J-test is not its power but rather its size. We also show that the power of the J-test increases with the weakness of the additional restrictions and we provide some intuition for why that is.
    Keywords: GMM; Weak IV; Redundancy; J-test; Misspecification
    JEL: C32 C12 C13 C51
    Date: 2014–07–01
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp14-04&r=ecm
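    The following sketch illustrates the standard two-step GMM estimator and Hansen's J-test that item 2 analyzes, in a plain linear IV setting with instruments of varying strength. It is not the authors' weak-instrument framework; the data-generating process and the function below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def gmm_j_test(y, X, Z):
    """Two-step linear GMM with moments E[Z'(y - X*beta)] = 0 and Hansen's J.
    Overidentified when Z has more columns than X."""
    n, k = X.shape
    q = Z.shape[1]
    # first step: 2SLS-type weighting W = (Z'Z/n)^{-1}
    W = np.linalg.inv(Z.T @ Z / n)
    ZX, Zy = Z.T @ X / n, Z.T @ y / n
    b1 = np.linalg.solve(ZX.T @ W @ ZX, ZX.T @ W @ Zy)
    # second step: efficient weighting from first-step residual moments
    u = y - X @ b1
    g = Z * u[:, None]
    S = g.T @ g / n
    W2 = np.linalg.inv(S)
    b2 = np.linalg.solve(ZX.T @ W2 @ ZX, ZX.T @ W2 @ Zy)
    gbar = Z.T @ (y - X @ b2) / n
    J = n * gbar @ W2 @ gbar                # Hansen's J statistic
    pval = 1 - stats.chi2.cdf(J, df=q - k)  # chi2(q - k) under correct specification
    return b2, J, pval

rng = np.random.default_rng(1)
n = 1000
z = rng.standard_normal((n, 3))             # three instruments, one regressor
x = z @ np.array([1.0, 0.3, 0.1]) + rng.standard_normal(n)  # stronger and weaker instruments
y = 0.5 * x + rng.standard_normal(n)
beta, J, p = gmm_j_test(y, x[:, None], z)
print(beta, J, p)
```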
  3. By: Marek Jarociński; Albert Marcet
    Abstract: The frequentist and the Bayesian approach to the estimation of autoregressions are often contrasted. Under standard assumptions, when the ordinary least squares (OLS) estimate is close to 1, a frequentist adjusts it upwards to counter the small sample bias, while a Bayesian who uses a flat prior considers the OLS estimate to be the best point estimate. This contrast is surprising because a flat prior is often interpreted as the Bayesian approach that is closest to the frequentist approach. We point out that the standard way that inference has been compared is misleading because frequentists and Bayesians tend to use different models, in particular, a different distribution of the initial condition. The contrast between the frequentist and the Bayesian flat prior estimation of the autoregression disappears once we make the same assumption about the initial condition in both approaches.
    Keywords: autoregression, initial condition, Bayesian estimation, small sample distribution, bias correction
    JEL: C11 C22 C32
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:776&r=ecm
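    A small Monte Carlo makes the starting point of item 3 concrete: with a persistent AR(1) and a fixed initial condition, the OLS estimator is biased downward in small samples, which is exactly what a frequentist corrects for and what a flat-prior Bayesian (whose posterior is centred at OLS) does not. The design below (rho = 0.95, T = 50, zero start, no intercept) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_ar1(x):
    """OLS slope in x_t = rho * x_{t-1} + eps_t (no intercept, for simplicity)."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

rho, T, reps = 0.95, 50, 5000
est = np.empty(reps)
for r in range(reps):
    x = np.empty(T)
    x[0] = 0.0                       # fixed initial condition (one of several possible choices)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    est[r] = ols_ar1(x)
# with a flat prior the posterior mean is centred at OLS, while the
# frequentist sampling distribution of OLS is centred below the true rho
print("true rho:", rho, " mean OLS estimate:", est.mean())
```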
  4. By: Joakim Westerlund (Deakin University); Paresh Kumar Narayan (Deakin University)
    Abstract: The few panel data tests for predictability of returns that exist are based on the prerequisite that both the number of time series observations, T, and the number of cross-section units, N, are large. As a result, these tests are infeasible for stock markets where lengthy time series data are scarce. In response to this, the current paper develops a new test for predictability in panels where T ≥ 2 but N is large, which seems like a much more realistic assumption when using firm-level data. As an illustration, we consider the Chinese stock market, for which data is only available for 17 years but where the number of firms is relatively large, at 160.
    Keywords: Panel data; Predictive regression; Stock return predictability; China.
    JEL: C22 C23 G1 G12
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_13&r=ecm
  5. By: Minford, Patrick (Cardiff Business School); Xu, Yongden; Zhou, Peng (Cardiff Business School)
    Abstract: Out-of-sample forecasting tests of DSGE models against time-series benchmarks such as an unrestricted VAR are increasingly used to check (a) the specification and (b) the forecasting capacity of these models. We carry out a Monte Carlo experiment on a widely-used DSGE model to investigate the power of these tests. We find that in specification testing they have weak power relative to an in-sample indirect inference test; this implies that a DSGE model may be badly mis-specified and still improve forecasts from an unrestricted VAR. In testing forecasting capacity they also have quite weak power, particularly in the left-hand tail. By contrast, a model that passes an indirect inference test of specification will almost certainly also improve on VAR forecasts.
    Keywords: Out of sample forecasts; DSGE; VAR; specification tests; indirect inference; forecast performance
    JEL: E10 E17
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2014/11&r=ecm
  6. By: Christian Gouriéroux (CREST and University of Toronto); Joann Jasiak (York University)
    Abstract: This paper revisits filtering and prediction in noncausal and mixed autoregressive processes and provides a simple alternative set of methods that are valid for processes with infinite variances. The prediction method provides complete predictive densities and prediction intervals at any finite horizon H, for univariate and multivariate processes. It is based on an unobserved component representation of noncausal processes. The filtering procedure for the unobserved components is provided along with a simple back-forecasting estimator for the parameters of noncausal and mixed models and a simulation algorithm for noncausal and mixed autoregressive processes. The approach is illustrated by simulations.
    Keywords: Noncausal Process, Nonlinear Prediction, Filtering, Look-Ahead Estimator, Speculative Bubble, Technical Analysis
    JEL: C14 G32 G23
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2014-15&r=ecm
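    As a minimal illustration of the noncausal processes in item 6, the sketch below simulates a purely noncausal AR(1), x_t = rho * x_{t+1} + eps_t, by running the recursion backward in time from a terminal value. The paper's simulation algorithm for mixed causal/noncausal models is more general; the Cauchy innovations here are just one convenient infinite-variance choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_noncausal_ar1(rho, n, burn=500):
    """Simulate x_t = rho * x_{t+1} + eps_t with |rho| < 1 by running the
    recursion backward from a zero terminal value; heavy-tailed (Cauchy)
    innovations make the noncausal structure visually apparent."""
    eps = rng.standard_cauchy(n + burn)
    x = np.zeros(n + burn)
    for t in range(n + burn - 2, -1, -1):   # backward recursion in time
        x[t] = rho * x[t + 1] + eps[t]
    return x[:n]                            # drop the burn-in near the terminal value

path = simulate_noncausal_ar1(0.8, 1000)
print(path[:5])
```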
  7. By: Juodis, Arturas; Sarafidis, Vasilis
    Abstract: This paper analyzes a growing group of fixed T dynamic panel data estimators with a multi-factor error structure. We use a unified notational approach to describe these estimators and discuss their properties in terms of deviations from an underlying set of basic assumptions. Furthermore, we consider the extendability of these estimators to practical situations that may frequently arise, such as their ability to accommodate unbalanced panels. Using a large-scale simulation exercise, we consider scenarios that remain largely unexplored in the literature, even though they are of great empirical relevance. In particular, we examine (i) the effect of the presence of weakly exogenous covariates, (ii) the effect of changing the magnitude of the correlation between the factor loadings of the dependent variable and those of the covariates, (iii) the impact of the number of moment conditions on bias and size for GMM estimators, and finally (iv) the effect of sample size. Thus, our study may serve as a useful guide to practitioners who wish to allow for multiplicative sources of unobserved heterogeneity in their model.
    Keywords: Dynamic Panel Data, Factor Model, Maximum Likelihood, Fixed T Consistency, Monte Carlo Simulation
    JEL: C13 C15 C23
    Date: 2014–07–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:57659&r=ecm
  8. By: Ewa Strzalkowska-Kominiak; Aurea Grané
    Abstract: In this paper we study the goodness-of-fit test introduced by Fortiana and Grané (2003) and Grané (2012), in the context of randomly censored data. We construct a new test statistic under general right-censoring, i.e., with unknown censoring distribution, and prove its asymptotic properties. Additionally, we study a special case, when the censoring mechanism follows the well-known Koziol-Green model. We present an extensive simulation study on the empirical power of these two versions of the test statistic. We show the good performance of the test statistics in detecting symmetrical alternatives and their advantages over the widely used Pearson-type test proposed by Akritas (1988). Finally, we apply our test to the head-and-neck-cancer data.
    Keywords: Goodness-of-fit, Kaplan-Meier estimator, Maximum correlation, Random censoring
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws142114&r=ecm
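    The Kaplan-Meier estimator is the basic ingredient of any goodness-of-fit analysis under random right-censoring such as item 8's. Below is a minimal product-limit implementation for distinct observation times; it does not reproduce the authors' maximum-correlation statistic, and the exponential lifetimes and censoring times are illustrative assumptions.

```python
import numpy as np

def kaplan_meier(t, d):
    """Kaplan-Meier survival estimate from observed times t and
    event indicators d (1 = event observed, 0 = right-censored).
    Assumes distinct observation times, as with continuous data."""
    order = np.argsort(t)
    t, d = t[order], d[order]
    n = len(t)
    at_risk = n - np.arange(n)                 # number still at risk at each ordered time
    factors = np.where(d == 1, 1.0 - 1.0 / at_risk, 1.0)
    return t, np.cumprod(factors)              # S_hat evaluated at each observed time

rng = np.random.default_rng(4)
x = rng.exponential(1.0, 200)                  # true lifetimes
c = rng.exponential(2.0, 200)                  # censoring times
t, d = np.minimum(x, c), (x <= c).astype(int)
times, surv = kaplan_meier(t, d)
print(times[:5], surv[:5])
```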
  9. By: Karine Bertin (CIMFAV, Universidad de Valparaiso); Nicolas Klutchnikoff (CREST-ENSAI, Université de Strasbourg)
    Abstract: In this paper we are interested in the estimation of a density—defined on a compact interval of R—from n independent and identically distributed observations. In order to avoid boundary effects, beta kernel estimators are used and we propose a procedure (inspired by Lepski’s method) to select the bandwidth. Our procedure is proved to be adaptive in an asymptotically minimax framework. Our estimator is compared with both the cross-validation algorithm and the oracle estimator using simulated data.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2014-08&r=ecm
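    For item 9, this is a minimal sketch of the classical beta kernel density estimator (in the spirit of Chen, 1999) on [0, 1], whose kernel shape adapts near the boundary. The adaptive, Lepski-type bandwidth selection that is the paper's contribution is not implemented; the fixed bandwidth b = 0.05 is an illustrative assumption.

```python
import numpy as np
from scipy.stats import beta

def beta_kernel_density(x_grid, sample, b):
    """Beta kernel density estimate on [0, 1]:
    f_hat(x) = mean_i Beta(x/b + 1, (1-x)/b + 1).pdf(X_i).
    The kernel's support adapts to the boundary, avoiding boundary bias."""
    est = np.empty(len(x_grid))
    for j, x in enumerate(x_grid):
        est[j] = beta.pdf(sample, x / b + 1.0, (1.0 - x) / b + 1.0).mean()
    return est

rng = np.random.default_rng(5)
data = rng.beta(2.0, 5.0, 500)            # a density supported on [0, 1]
grid = np.linspace(0.0, 1.0, 51)
print(beta_kernel_density(grid, data, b=0.05)[:5])
```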
  10. By: E. Otranto; M. Mucciardi; P. Bertuccelli
    Abstract: The recent time series literature has developed many models for the analysis of dynamic conditional correlations involving the same variable observed in different locations; very often, spatial interactions are ignored in this framework. We propose to extend a time-varying conditional correlation model (following ARMA dynamics) to include spatial effects, with a specification depending on the local spatial interactions. The spatial part is based on a fixed symmetric weight matrix, called the Gaussian Kernel Matrix (GKM), but its effect varies over time depending on the degree of time correlation in a given period. We present the theoretical aspects, with the support of simulation experiments, and apply this methodology to two space-time data sets, in a demographic and a financial framework respectively.
    Keywords: space-time correlation, time-varying correlation, weight matrix, Gaussian kernel
    JEL: C13 C33 J13
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:201406&r=ecm
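    A Gaussian kernel weight matrix of the kind item 10 calls the GKM can be built from pairwise distances as in the sketch below. The bandwidth h is an illustrative assumption, and the matrix is kept symmetric with a zero diagonal, consistent with the abstract's description; the dynamic conditional correlation part of the model is not reproduced.

```python
import numpy as np

def gaussian_kernel_matrix(coords, h):
    """Symmetric spatial weight matrix W_ij = exp(-d_ij^2 / (2 h^2))
    from planar coordinates, with zero diagonal (no self-neighbourhood)."""
    diff = coords[:, None, :] - coords[None, :, :]
    d2 = (diff ** 2).sum(-1)                  # squared Euclidean distances
    W = np.exp(-d2 / (2.0 * h ** 2))
    np.fill_diagonal(W, 0.0)
    return W    # kept symmetric; row-standardization is a common alternative

rng = np.random.default_rng(6)
sites = rng.uniform(0.0, 10.0, (5, 2))        # five locations in the plane
print(gaussian_kernel_matrix(sites, h=2.0).round(3))
```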
  11. By: Joakim Westerlund (Deakin University); Sagarika Mishra (Deakin University)
    Abstract: As is well known, when using an information criterion to select the number of common factors in factor models the appropriate penalty is generally indeterminate in the sense that it can be scaled by an arbitrary constant, c say, without affecting consistency. In an influential paper, Hallin and Liška (Determining the Number of Factors in the General Dynamic Factor Model, Journal of the American Statistical Association 102, 603–617, 2007) propose a data-driven procedure for selecting the appropriate value of c. However, by removing one source of indeterminacy, the new procedure simultaneously creates several new ones, which make for rather complicated implementation, a problem that has been largely overlooked in the literature. By providing an extensive analysis using both simulated and real data, the current paper fills this gap.
    Keywords: Panel data; Common factor model; Information criterion; Data-driven penalty.
    JEL: C12 C13 C33
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_15&r=ecm
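    The indeterminacy discussed in item 11 is easy to see in code: in a Bai-Ng-type information criterion the penalty can be multiplied by any constant c > 0 without affecting consistency, yet c matters in finite samples. The sketch below implements such a criterion with an explicit c; the data-driven choice of c à la Hallin and Liška is not implemented, and the IC_p1-style penalty is an assumption on our part.

```python
import numpy as np

def ic_num_factors(X, kmax, c=1.0):
    """Bai-Ng-type criterion IC(k) = log V(k) + c * k * p(N, T), where V(k)
    is the residual variance after extracting k principal-component factors.
    Any fixed c > 0 preserves consistency — the indeterminacy noted above."""
    T, N = X.shape
    X = X - X.mean(0)
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    penalty = (N + T) / (N * T) * np.log(N * T / (N + T))
    ics = []
    for k in range(1, kmax + 1):
        F = u[:, :k] * s[:k]                   # best rank-k factor approximation
        resid = X - F @ vt[:k]
        ics.append(np.log((resid ** 2).sum() / (N * T)) + c * k * penalty)
    return int(np.argmin(ics)) + 1

rng = np.random.default_rng(7)
T, N, r = 200, 100, 3
F0, L0 = rng.standard_normal((T, r)), rng.standard_normal((r, N))
X = F0 @ L0 + rng.standard_normal((T, N))
print(ic_num_factors(X, kmax=8))               # should typically recover r = 3
```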
  12. By: Peter Martey Addo
    Abstract: This study defines multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) models and presents an estimation procedure for their parameters. Conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of a least squares estimation (LSE) algorithm for this class of models is then assessed via simulations.
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1407.7738&r=ecm
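    To fix ideas for item 12, the sketch below simulates a univariate two-regime SETARX process: the autoregressive coefficient switches according to a lagged value of the series itself, and an exogenous input enters each regime. All coefficients, the threshold, and the delay are illustrative assumptions; the paper's MSETARX class is multivariate.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_setarx(n, threshold=0.0, delay=1):
    """Two-regime SETARX: the AR coefficient switches according to whether
    x_{t-delay} exceeds the threshold, plus an exogenous input u_t.
    A simplified univariate version of the model class described above."""
    u = rng.standard_normal(n)                # exogenous input
    x = np.zeros(n)
    for t in range(delay, n):
        phi = 0.6 if x[t - delay] <= threshold else -0.4   # regime-dependent AR coefficient
        x[t] = phi * x[t - 1] + 0.5 * u[t] + 0.3 * rng.standard_normal()
    return x

print(simulate_setarx(10))
```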
  13. By: Tetsuya Takaishi
    Abstract: The hybrid Monte Carlo algorithm (HMCA) is applied to Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the second-order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1408.0981&r=ecm
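    Item 13 compares molecular-dynamics integrators inside HMC. The baseline leapfrog integrator it benchmarks against is sketched below; the 2MNI replaces this splitting with a two-parameter minimum-norm one, which is not reproduced here. The standard normal target is an illustrative assumption.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    """L leapfrog steps for HMC with potential U: half-step in momentum,
    full step in position, half-step in momentum. The 2MNI mentioned
    above uses a different splitting of the same Hamiltonian flow."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(L - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# toy example: standard normal target, U(q) = q^2 / 2, so grad_U(q) = q
q, p = 1.0, 0.5
print(leapfrog(q, p, lambda q: q, eps=0.1, L=20))
```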
  14. By: Arnak S. Dalalyan (CREST-ENSAE); Mohamed Hebiri (Université Paris Est); Johannes Lederer (Cornell University)
    Abstract: Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood. In this paper, we give new insights into this relationship in the context of multiple linear regression. We show, in particular, that the incorporation of a simple correlation measure into the tuning parameter leads to a nearly optimal prediction performance of the Lasso even for highly correlated covariates. However, we also reveal that for moderately correlated covariates, the prediction performance of the Lasso can be mediocre irrespective of the choice of the tuning parameter. To illustrate our approach with an important application, we deduce nearly optimal rates for the least-squares estimator with total variation penalty.
    Keywords: multiple linear regression, sparse recovery, total variation penalty, oracle inequalities
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2014-05&r=ecm
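    A minimal sketch for item 14: fitting the Lasso with a tuning parameter of the familiar sigma * sqrt(2 log p / n) form, the theoretical benchmark that the paper's correlation-adjusted rule modifies. The correlation adjustment itself is not reproduced (the abstract gives no formula), and the design below is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:3] = 1.0                                # sparse truth: three active covariates
y = X @ beta0 + 0.5 * rng.standard_normal(n)

# the "universal" scaling sigma * sqrt(2 log p / n) is a standard theoretical
# benchmark; the paper's proposal further adjusts it by a correlation measure
lam = 0.5 * np.sqrt(2.0 * np.log(p) / n)
fit = Lasso(alpha=lam).fit(X, y)
pred_err = np.mean((X @ (fit.coef_ - beta0)) ** 2)   # in-sample prediction risk proxy
print(fit.coef_[:5], pred_err)
```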
  15. By: Joakim Westerlund (Deakin University)
    Abstract: Most studies of the predictability of returns are based on time series data, and whenever panel data are used, the testing is almost always conducted in an unrestricted unit-by-unit fashion, which makes for a very heavy parametrization of the model. On the other hand, the few panel tests that exist are too restrictive in the sense that they are based on homogeneity assumptions that might not be true. As a response to this, the current paper proposes new predictability tests in the context of a random coefficient panel data model, in which the null of no predictability corresponds to the joint restriction that the predictive slope has zero mean and zero variance. The tests are applied to a large panel of stocks listed on the New York Stock Exchange. The results suggest that while the predictive slopes tend to average to zero, in the case of book-to-market and cash-flow-to-price the variance of the slopes is positive, which we take as evidence of predictability.
    Keywords: Panel data; Predictive regression; Stock return predictability.
    JEL: C22 C23 G1 G12
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_10&r=ecm
  16. By: Joakim Westerlund (Deakin University); Milda Norkute (Lund University); Paresh K Narayan (Deakin University)
    Abstract: Most empirical evidence suggests that the efficient futures market hypothesis, henceforth referred to as the EFMH, stating that spot and futures prices should cointegrate with a unit slope on futures prices, does not hold, a finding at odds with many theoretical models. This paper argues that these results can be attributed in part to the low power of univariate tests, and that the use of panel data can generate more powerful tests. The current paper can be seen as a step in this direction. In particular, a newly developed factor analytical approach is employed, which is very general and, in addition, free of the otherwise common incidental parameters bias in the presence of fixed effects. The approach is applied to a large panel covering 17 commodities between March 1991 and August 2012. The evidence suggests that the EFMH cannot be rejected once the panel evidence has been taken into account.
    Keywords: Dynamic panel data models; Unit root; Factor analytical method; Efficient market hypothesis; Futures markets.
    JEL: C12 C13 C33 C36
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_12&r=ecm
  17. By: KESSELS, Roselinde; ERREYGERS, Guido
    Abstract: In this paper we present a unified structural equation modeling (SEM) framework for the regression-based decomposition of rank-dependent indicators of socioeconomic inequality of health and compare it with simple ordinary least squares (OLS) regression. The SEM framework forms the basis for a proper use of the most prominent one- and two-dimensional decompositions and provides an argument for using the bivariate multiple regression model for two-dimensional decomposition. Within the SEM framework, the two-dimensional decomposition integrates the feedback mechanism between health and socioeconomic status and allows for different sets of determinants of these variables. We illustrate the SEM approach and its superior performance relative to OLS using data from the Ethiopia 2011 Demographic and Health Survey (DHS).
    Keywords: Inequality measurement, Concentration index, Decomposition methods, Structural Equation Modeling (SEM)
    JEL: C36 D63 I00
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2014013&r=ecm
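    The rank-dependent indicators decomposed in item 17 include the standard concentration index, which can be computed from the covariance between health and fractional socioeconomic rank as sketched below. The SEM decomposition itself is the paper's contribution and is not reproduced; the simulated income-health relationship is an illustrative assumption.

```python
import numpy as np

def concentration_index(h, ses):
    """Standard concentration index: C = 2 * cov(h, R) / mean(h), where R is
    the fractional rank of individuals ordered by socioeconomic status.
    One of the rank-dependent indicators whose decomposition is discussed above."""
    n = len(h)
    R = (np.argsort(np.argsort(ses)) + 0.5) / n        # fractional ranks in (0, 1)
    return 2.0 * np.cov(h, R, bias=True)[0, 1] / h.mean()

rng = np.random.default_rng(10)
income = rng.lognormal(0.0, 1.0, 1000)
health = 50 + 5 * np.log(income) + rng.normal(0, 5, 1000)  # health rises with income
print(concentration_index(health, income))                 # positive: pro-rich inequality
```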
  18. By: Chaisemartin, Clément de (The University of Warwick)
    Abstract: It is well known that instrumental variable (IV) estimation recovers a causal effect if the instrument satisfies a monotonicity condition. When this condition is not satisfied, we only know that IV estimates the difference between the effect of the treatment in two groups. This difference could be a very misleading measure of the treatment effect: it could be negative even when the effect is positive in both groups. There are many studies in which monotonicity is implausible, so one might question whether we should trust their estimates. I show that IV estimates a causal effect under a much weaker condition than monotonicity. I outline three criteria applied researchers can use to assess whether this condition is applicable in their studies. When this weaker condition is applicable, they can credibly interpret their estimates as causal effects. When it is not, they should interpret their results with caution.
    Keywords: instrumental variable, two stage least squares, heterogeneous effects, monotonicity, defiers, internal validity
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:cge:wacage:197&r=ecm
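    For item 18, the Wald/IV estimand with a binary instrument is sketched below. Under monotonicity (no defiers) it identifies the local average treatment effect for compliers; the paper's point is that a weaker condition can also suffice. The simulated compliance pattern and the homogeneous treatment effect of 2 are illustrative assumptions.

```python
import numpy as np

def wald_iv(y, d, z):
    """Wald estimand with a binary instrument z:
    (E[y|z=1] - E[y|z=0]) / (E[d|z=1] - E[d|z=0]).
    Under monotonicity (no defiers) this is the local average treatment
    effect for compliers; item 18 studies what it identifies without it."""
    dy = y[z == 1].mean() - y[z == 0].mean()
    dd = d[z == 1].mean() - d[z == 0].mean()
    return dy / dd

rng = np.random.default_rng(11)
n = 100000
z = rng.integers(0, 2, n)
u = rng.uniform(0, 1, n)
d = ((u < 0.3) | ((u < 0.7) & (z == 1))).astype(int)   # always-takers, compliers, never-takers; no defiers
effect = 2.0
y = effect * d + rng.standard_normal(n)
print(wald_iv(y, d, z))                                # close to 2, the compliers' effect
```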

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.