nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒11‒12
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Regression-based analysis of cointegration systems By Javier Gomez-Biscarri; Javier Hualde
  2. LADE-based inference for ARMA models with unspecified and heavy-tailed heteroscedastic noises By Zhu, Ke; Ling, Shiqing
  3. Least squares estimation for GARCH (1,1) model with heavy tailed errors By Preminger, Arie; Storti, Giuseppe
  4. Bootstrap tests in linear models with many regressors By Patrick Richard
  5. Time Series: Cointegration By Søren Johansen
  6. Score driven asymmetric stochastic volatility models By Xiuping Mao; Esther Ruiz; Helena Veiga
  7. A Cusum Test of Common Trends in Large Heterogeneous Panels By Javier Hidalgo; Jungyoon Lee
  8. The convenient calculation of some test statistics in models of discrete choice By Darryl Holden; Roger Perman
  9. Spillover Dynamics for Systemic Risk Measurement using Spatial Financial Time Series Models By Francisco Blasques; Siem Jan Koopman; Andre Lucas; Julia Schaumburg
  10. Econometrics of Ascending Auctions by Quantile Regression By Nathalie Gimenes
  11. The Forecast Combination Puzzle: A Simple Theoretical Explanation By Gerda Claeskens; Jan Magnus; Andrey Vasnev; Wendun Wang
  12. Essays on Expectations and the Econometrics of Asset Pricing By Lof, Matthijs
  13. Bayesian D-Optimal Choice Designs for Mixtures By Aiste Ruseckaite; Peter Goos; Dennis Fok
  14. A note on implementing the Durbin and Koopman simulation smoother By Jarocinski, Marek
  15. On the Rise of Bayesian Econometrics after Cowles Foundation Monographs 10, 14 By Nalan Basturk; Cem Cakmakli; S. Pinar Ceyhan; Herman K. van Dijk

  1. By: Javier Gomez-Biscarri; Javier Hualde
    Abstract: Two estimation procedures dominate the cointegration literature: Johansen's maximum likelihood inference on vector autoregressive error correction models, and estimation of Phillips' triangular forms. This latter methodology is essentially semiparametric, focusing on estimating long run parameters by means of cointegrating regressions, but it is less used in practice than Johansen's approach, since its implementation requires prior knowledge of features such as the cointegrating rank and an appropriate set of non-cointegrated regressors. In this paper we develop a simple and automatic procedure (based on unit root and regression-based cointegration testing) which provides an estimator of the cointegrating rank and data-based just-identifying conditions for the cointegrating parameters (leading to a Phillips' triangular form). A Monte Carlo experiment and an empirical example are also provided.
    Keywords: cointegrating space, Phillips' triangular form, Johansen's methodology, regression-based cointegration testing
    JEL: C32
    Date: 2014–09
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:780&r=ecm
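    Sketch: A minimal Python illustration of the building blocks the automatic procedure combines, namely unit root tests on the individual series followed by a residual-based (regression) cointegration test, here with standard statsmodels tools; it is not the authors' rank-selection algorithm.
      import numpy as np
      from statsmodels.tsa.stattools import adfuller, coint

      rng = np.random.default_rng(0)
      n = 500
      common = np.cumsum(rng.standard_normal(n))   # shared I(1) trend
      y1 = common + rng.standard_normal(n)         # cointegrated pair
      y2 = 0.5 * common + rng.standard_normal(n)

      # Step 1: each series should look I(1) (ADF does not reject a unit root).
      for name, y in [("y1", y1), ("y2", y2)]:
          stat, pval = adfuller(y)[:2]
          print(f"ADF on {name}: stat={stat:.2f}, p={pval:.3f}")

      # Step 2: residual-based (Engle-Granger type) cointegration test.
      stat, pval, _ = coint(y1, y2)
      print(f"Cointegration test: stat={stat:.2f}, p={pval:.3f}")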
  2. By: Zhu, Ke; Ling, Shiqing
    Abstract: This paper develops a systematic procedure of statistical inference for the ARMA model with unspecified and heavy-tailed heteroscedastic noises. We first investigate the least absolute deviation estimator (LADE) and the self-weighted LADE for the model. These estimators are shown to be strongly consistent and asymptotically normal when the noise has a finite variance and an infinite variance, respectively. The rates of convergence of the LADE and the self-weighted LADE are $n^{-1/2}$, which is faster than that of the LSE for the AR model when the tail index of the GARCH noise is in (0,4]; they are thus more efficient in this case. Since their asymptotic covariance matrices cannot be estimated directly from the sample, we develop the random weighting approach for statistical inference in this nonstandard case. We further propose a novel sign-based portmanteau test for model adequacy. A simulation study is carried out to assess the performance of our procedure, and a real-data example is given.
    Keywords: ARMA(p,q) models; Asymptotic normality; Heavy-tailed noises; G/ARCH noises; LADE; Random weighting approach; Self-weighted LADE; Sign-based portmanteau test; Strong consistency.
    JEL: C1 C12 C13
    Date: 2014–10–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59099&r=ecm
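    Sketch: A minimal Python illustration of the L1 criterion behind the LADE, fitted to an AR(1) with heavy-tailed noise; the self-weighting and the random-weighting inference steps developed in the paper are not shown.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n, phi_true = 1000, 0.6
      eps = rng.standard_t(df=2.5, size=n)    # heavy tails: infinite 4th moment
      y = np.zeros(n)
      for t in range(1, n):
          y[t] = phi_true * y[t - 1] + eps[t]

      def lad_loss(params):
          c, phi = params
          resid = y[1:] - c - phi * y[:-1]
          return np.abs(resid).sum()          # L1 instead of L2 criterion

      res = minimize(lad_loss, x0=[0.0, 0.0], method="Nelder-Mead")
      print("LADE estimate of (c, phi):", res.x)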
  3. By: Preminger, Arie; Storti, Giuseppe
    Abstract: GARCH (1,1) models are widely used for modelling processes with time varying volatility. These include financial time series, which can be particularly heavy-tailed. In this paper, we propose a log-transform-based least squares estimator (LSE) for the GARCH (1,1) model. The asymptotic properties of the LSE are studied under very mild moment conditions for the errors. We establish the consistency and asymptotic normality of our estimator at the standard $\sqrt{n}$ convergence rate. The finite sample properties are assessed by means of an extensive simulation study. Our results show that the LSE is more accurate than the quasi-maximum likelihood estimator (QMLE) for heavy-tailed errors. Finally, we provide some empirical evidence on two financial time series, considering daily and high frequency returns. The results of the empirical analysis suggest that in some settings, depending on the specific measure of volatility adopted, the LSE can allow for more accurate predictions of volatility than the usual Gaussian QMLE.
    Keywords: GARCH (1,1), least squares estimation, consistency, asymptotic normality.
    JEL: C13 C15 C22
    Date: 2014–01–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59082&r=ecm
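    Sketch: A hedged Python illustration of a log-transform least squares idea for GARCH (1,1): choose (omega, alpha, beta) to minimize the squared distance between log squared returns and the log conditional variance. The exact transform and centering used by the authors may differ; E[log eps^2] is nonzero, so the intercept below is biased without a correction.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      n, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
      y = np.zeros(n)
      sig2 = np.full(n, omega / (1 - alpha - beta))
      for t in range(1, n):
          sig2[t] = omega + alpha * y[t - 1] ** 2 + beta * sig2[t - 1]
          y[t] = np.sqrt(sig2[t]) * rng.standard_t(df=3)  # heavy-tailed errors

      def ls_loss(theta):
          w, a, b = theta
          if w <= 0 or a < 0 or b < 0 or a + b >= 1:
              return np.inf                   # stay in the stationarity region
          s2 = np.full(n, w / (1 - a - b))
          for t in range(1, n):
              s2[t] = w + a * y[t - 1] ** 2 + b * s2[t - 1]
          # least squares on the log scale, robust to heavy tails in y
          return ((np.log(y[1:] ** 2 + 1e-12) - np.log(s2[1:])) ** 2).sum()

      res = minimize(ls_loss, x0=[0.05, 0.05, 0.7], method="Nelder-Mead")
      print("log-scale LSE of (omega, alpha, beta):", res.x)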
  4. By: Patrick Richard (Département d'Économique, Université de Sherbrooke)
    Abstract: This paper is concerned with bootstrap hypothesis testing in high-dimensional linear regression models. Using a theoretical framework recently introduced by Anatolyev (2012), we show that bootstrap F, LR and LM tests are asymptotically valid even when the numbers of estimated parameters and tested restrictions are not asymptotically negligible fractions of the sample size. These results are derived for models with iid error terms, but Monte Carlo evidence suggests that they extend to the wild bootstrap in the presence of heteroskedasticity and to bootstrap methods for heavy-tailed data.
    Keywords: bootstrap, linear regressions, high dimension.
    JEL: C12 C14 C15
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:shr:wpaper:14-06&r=ecm
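    Sketch: A minimal Python illustration of a residual bootstrap F test of q exclusion restrictions in a regression with many regressors (the iid-error case; the wild bootstrap variant for heteroskedasticity is not shown).
      import numpy as np

      rng = np.random.default_rng(3)
      n, k, q = 200, 40, 10                    # k/n is not negligible
      X = rng.standard_normal((n, k))
      y = X[:, :k - q] @ rng.standard_normal(k - q) + rng.standard_normal(n)

      def f_stat(y, X, q):
          # F statistic for H0: the last q coefficients are zero
          rss_u = np.linalg.lstsq(X, y, rcond=None)[1][0]
          rss_r = np.linalg.lstsq(X[:, :-q], y, rcond=None)[1][0]
          return ((rss_r - rss_u) / q) / (rss_u / (n - X.shape[1]))

      F = f_stat(y, X, q)
      # Bootstrap DGP: restricted estimates plus resampled restricted residuals.
      b_r = np.linalg.lstsq(X[:, :-q], y, rcond=None)[0]
      u_r = y - X[:, :-q] @ b_r
      F_boot = [f_stat(X[:, :-q] @ b_r + rng.choice(u_r, n, replace=True), X, q)
                for _ in range(999)]
      pval = (1 + sum(Fb >= F for Fb in F_boot)) / 1000
      print(f"F = {F:.2f}, bootstrap p = {pval:.3f}")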
  5. By: Søren Johansen (University of Copenhagen and CREATES)
    Abstract: An overview of results for the cointegrated VAR model for nonstationary I(1) variables is given. The emphasis is on the analysis of the model and the tools for asymptotic inference. These include the formulation of criteria on the parameters for the process to be nonstationary and I(1), the formulation of hypotheses of interest on the rank, the cointegrating relations and the adjustment coefficients, and a discussion of the asymptotic distribution results that are used for inference. The results are illustrated by a few examples, and a number of extensions of the theory are pointed out.
    Keywords: adjustment coefficients, cointegrating relations, cointegration, cointegrated vector autoregressive model, Dickey-Fuller distributions, error correction models, econometric analysis of macroeconomic data, likelihood inference, mixed Gaussian distribution, nonstationarity
    JEL: C32
    Date: 2014–10–21
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-38&r=ecm
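    Sketch: A minimal Python illustration of likelihood-based rank inference in the cointegrated VAR, using the Johansen trace test as implemented in statsmodels.
      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      rng = np.random.default_rng(4)
      n = 500
      trend = np.cumsum(rng.standard_normal(n))   # one common I(1) trend
      data = np.column_stack([
          trend + rng.standard_normal(n),
          0.7 * trend + rng.standard_normal(n),
          np.cumsum(rng.standard_normal(n)),      # an unrelated I(1) series
      ])                                          # true cointegrating rank: 1

      res = coint_johansen(data, det_order=0, k_ar_diff=1)
      for r, (stat, cv) in enumerate(zip(res.lr1, res.cvt[:, 1])):
          print(f"H0: rank <= {r}: trace = {stat:.2f}, 5% cv = {cv:.2f}")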
  6. By: Xiuping Mao; Esther Ruiz; Helena Veiga
    Abstract: In this paper we propose a new class of asymmetric stochastic volatility (SV) models which, following the Generalized Autoregressive Score (GAS) framework, specify the volatility as a function of the score of the distribution of returns conditional on volatilities. Different specifications of the log-volatility are obtained by assuming different return error distributions. In particular, we consider three of the most popular distributions, namely the Normal, Student-t and Generalized Error Distribution, and derive the statistical properties of each of the corresponding score driven SV models. We show that some of the parameters cannot be properly identified by the moments usually considered to describe the stylized facts of financial returns, namely excess kurtosis, autocorrelations of squares and cross-correlations between returns and future squared returns. The parameters of some restricted score driven SV models can be estimated adequately using an MCMC procedure. Finally, the new proposed models are fitted to financial returns and evaluated in terms of their in-sample and out-of-sample performance.
    Keywords: BUGS, Generalized Asymmetric Stochastic Volatility, MCMC, Score driven models
    JEL: C22
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws142618&r=ecm
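    Sketch: A minimal Python illustration of the score driven mechanism in its simplest symmetric Gaussian form: the log-variance is updated with the score of the conditional density of returns. The asymmetric, Student-t and GED variants studied in the paper are not shown.
      import numpy as np

      def score_driven_logvol(y, omega=0.0, beta=0.95, kappa=0.05):
          lam = np.zeros(len(y))      # lam[t] = log conditional variance
          for t in range(len(y) - 1):
              # score of log N(0, exp(lam[t])) with respect to lam[t]
              s = 0.5 * (y[t] ** 2 * np.exp(-lam[t]) - 1.0)
              lam[t + 1] = omega + beta * lam[t] + kappa * s
          return lam

      rng = np.random.default_rng(5)
      true_logvar = np.sin(np.arange(1000) / 50)     # slowly moving volatility
      y = rng.standard_normal(1000) * np.exp(0.5 * true_logvar)
      lam = score_driven_logvol(y)
      print("filtered log-variance, last 3 values:", lam[-3:])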
  7. By: Javier Hidalgo; Jungyoon Lee
    Abstract: This paper examines a nonparametric CUSUM-type test for common trends in large panel data sets with individual fixed effects. We consider, as in Zhang, Su and Phillips (2012), a partial linear regression model with unknown functional form for the trend component, although our test does not involve local smoothing. This conveniently forgoes the need to choose a bandwidth parameter, which, in the absence of a clear and sensible information criterion, is difficult to do for testing purposes. We are able to do so by exploiting the fact that the number of individuals increases without bound. After removing the parametric component of the model, when the errors are homoscedastic, our test statistic converges to a Gaussian process whose critical values are easily tabulated. We also examine the consequences of heteroscedasticity and discuss how to compute valid critical values given the very complicated covariance structure of the limiting process. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample performance of the test.
    Keywords: Common Trends, large data set, Partial linear models, Bootstrap algorithms
    JEL: C12 C13 C23
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/576&r=ecm
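    Sketch: A Python illustration of the generic CUSUM idea behind such tests: after subtracting the cross-section average (the candidate common trend), cumulative sums of each unit's remainder should stay small under the null of a common trend. The statistic and its normalization below are illustrative, not the authors' exact construction or critical values.
      import numpy as np

      rng = np.random.default_rng(6)
      N, T = 50, 200
      common = np.cumsum(rng.standard_normal(T))     # common stochastic trend
      fixed_effects = rng.standard_normal((N, 1))
      panel = fixed_effects + common + 0.5 * rng.standard_normal((N, T))

      demeaned = panel - panel.mean(axis=0)          # strip the common component
      cusum = np.cumsum(demeaned - demeaned.mean(axis=1, keepdims=True), axis=1)
      stat = np.abs(cusum).max() / np.sqrt(T)
      print(f"max CUSUM deviation (illustrative): {stat:.2f}")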
  8. By: Darryl Holden (Department of Economics, University of Strathclyde); Roger Perman (Department of Economics, University of Strathclyde)
    Abstract: The paper considers the use of artificial regression in calculating different types of score test when the log-likelihood is based on probabilities rather than densities. The calculation of the information matrix test is also considered. Results are specialised to deal with binary choice (logit and probit) models.
    Keywords: score test, information matrix, artificial regression
    JEL: C1 C2 C4 C12
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:1410&r=ecm
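    Sketch: A minimal Python illustration of an OPG-type artificial regression score (LM) test in a probit model: estimate under the null, build the score contributions of all parameters, and use n*R^2 from regressing a vector of ones on them (equal to the explained sum of squares). This shows the general device, not the paper's specific results.
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm, chi2

      rng = np.random.default_rng(7)
      n = 500
      x, z = rng.standard_normal(n), rng.standard_normal(n)
      y = (0.5 + 0.8 * x + rng.standard_normal(n) > 0).astype(float)

      X0 = sm.add_constant(x)
      fit = sm.Probit(y, X0).fit(disp=0)       # estimate under H0: z excluded
      idx = X0 @ fit.params
      # generalized residual of the probit log-likelihood
      gr = norm.pdf(idx) * (y - norm.cdf(idx)) / (norm.cdf(idx) * (1 - norm.cdf(idx)))
      G = np.column_stack([X0 * gr[:, None], z * gr])   # score contributions

      b = np.linalg.lstsq(G, np.ones(n), rcond=None)[0]
      lm = np.ones(n) @ G @ b                  # explained sum of squares = n*R^2
      print(f"LM = {lm:.2f}, p = {chi2.sf(lm, df=1):.3f}")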
  9. By: Francisco Blasques; Siem Jan Koopman; Andre Lucas; Julia Schaumburg (VU University Amsterdam)
    Abstract: We introduce a new model for time-varying spatial dependence. The model extends the well-known static spatial lag model. All parameters can be estimated conveniently by maximum likelihood. We establish the theoretical properties of the model and show that the maximum likelihood estimator for the static parameters is consistent and asymptotically normal. We also study the information theoretic optimality of the updating steps for the time-varying spatial dependence parameter. We adopt the model to empirically investigate the spatial dependence between eight European sovereign CDS spreads over the period 2009–2014, which includes the European sovereign debt crisis. We construct our spatial weight matrix using cross-border lending data and include country-specific and Europe-wide risk factors as controls. We find a high, time-varying degree of spatial spillovers in the sovereign CDS spread data. There is a downturn in spatial dependence after the first half of 2012, which is consistent with policy measures taken by the European Central Bank. The findings are robust to a wide range of alternative model specifications.
    Keywords: Spatial correlation, time-varying parameters, systemic risk, European debt crisis, generalized autoregressive score
    JEL: C13 C32 C53 E17
    Date: 2014–08–14
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140107&r=ecm
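    Sketch: A minimal Python illustration of the static spatial lag model the paper extends, y_t = rho*W*y_t + eps_t, estimated by maximum likelihood with the log-determinant term log det(I - rho*W); the score driven time-varying rho is not shown.
      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(8)
      n, T, rho_true = 8, 300, 0.4
      W = rng.random((n, n))
      np.fill_diagonal(W, 0.0)
      W /= W.sum(axis=1, keepdims=True)           # row-standardized weights
      A = np.linalg.inv(np.eye(n) - rho_true * W)
      Y = rng.standard_normal((T, n)) @ A.T       # y_t = (I - rho W)^{-1} eps_t

      def neg_loglik(rho):
          B = np.eye(n) - rho * W
          resid = Y @ B.T                         # recovers eps_t at the true rho
          return 0.5 * (resid ** 2).sum() - T * np.linalg.slogdet(B)[1]

      res = minimize_scalar(neg_loglik, bounds=(-0.9, 0.9), method="bounded")
      print(f"ML estimate of rho: {res.x:.3f}")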
  10. By: Nathalie Gimenes
    Abstract: This paper suggests an identification and estimation approach based on quantile regression to recover the underlying distribution of bidders' private values in ascending auctions under the IPV paradigm. The quantile regression approach provides a flexible and convenient parametrization of the private values distribution, with an estimation methodology that is easy to implement and comes with several specification tests. The quantile framework provides a new focus on the quantile level of the private values distribution and on the seller's optimal screening level, which can be useful for both bidders and the seller. The empirical application to timber auctions suggests that basing policy recommendations on the seller's expected payoff may sometimes be inappropriate from the seller's point of view because of the low probability of selling the good. This seems to be an important issue especially in auctions with strong heterogeneity among the bidders, since the seller then has an incentive to screen bidders' participation by setting a high reservation price, which in turn leads to a low probability of selling the good.
    Keywords: Private values; timber auctions; ascending auctions; seller expected revenue; quantile regression identification; quantile regression estimation; quantile regression specification testing.
    JEL: C14 D44 L70
    Date: 2014–10–30
    URL: http://d.repec.org/n?u=RePEc:spa:wpaper:2014wpecon25&r=ecm
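    Sketch: A minimal Python illustration of the quantile regression machinery the approach rests on: conditional quantiles of log winning bids given an auction covariate, via statsmodels. Recovering the private values distribution from these bid quantiles requires the paper's identification argument, which is not reproduced here; the data below are simulated.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n = 1000
      appraisal = rng.uniform(1, 10, n)            # observed auction covariate
      log_bid = 0.5 + 0.8 * np.log(appraisal) + 0.3 * rng.standard_normal(n)

      X = sm.add_constant(np.log(appraisal))
      for tau in (0.25, 0.5, 0.75):
          fit = sm.QuantReg(log_bid, X).fit(q=tau)
          print(f"tau={tau}: intercept={fit.params[0]:.2f}, slope={fit.params[1]:.2f}")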
  11. By: Gerda Claeskens (KU Leuven, Belgium); Jan Magnus (VU University Amsterdam, the Netherlands); Andrey Vasnev (University of Sydney, Australia); Wendun Wang (Erasmus University, Rotterdam, the Netherlands)
    Abstract: This paper offers a theoretical explanation for the stylized fact that forecast combinations with estimated optimal weights often perform poorly in applications. The properties of the forecast combination are typically derived under the assumption that the weights are fixed, while in practice they need to be estimated. If the fact that the weights are random rather than fixed is taken into account during the optimality derivation, then the forecast combination will be biased (even when the original forecasts are unbiased) and its variance will be larger than in the fixed-weights case. In particular, there is no guarantee that the 'optimal' forecast combination will be better than the equal-weights case or even improve on the original forecasts. We provide the underlying theory, some special cases and an application in the context of model selection.
    Keywords: forecast combination, optimal weights, model selection
    JEL: C53 C52
    Date: 2014–09–19
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140127&r=ecm
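    Sketch: A minimal Python illustration of the puzzle: combine two unbiased forecasts with (i) equal weights and (ii) 'optimal' weights w proportional to inv(Sigma)*1 estimated from a short window of past errors; estimation noise in the weights typically erodes, and can reverse, the theoretical gain.
      import numpy as np

      rng = np.random.default_rng(10)
      reps, n_train = 2000, 30
      mse_eq = mse_opt = 0.0
      for _ in range(reps):
          # errors of two unbiased forecasts, modestly correlated
          e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.2]],
                                      size=n_train + 1)
          Sigma_hat = np.cov(e[:n_train].T)
          w = np.linalg.solve(Sigma_hat, np.ones(2))
          w /= w.sum()                             # estimated 'optimal' weights
          mse_eq += float(e[-1] @ np.array([0.5, 0.5])) ** 2
          mse_opt += float(e[-1] @ w) ** 2
      print(f"out-of-sample MSE, equal weights:     {mse_eq / reps:.3f}")
      print(f"out-of-sample MSE, estimated weights: {mse_opt / reps:.3f}")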
  12. By: Lof, Matthijs
    Abstract: The way in which market participants form expectations affects the dynamic properties of financial asset prices and therefore the appropriateness of different econometric tools used for empirical asset pricing. In addition to standard rational expectations models, this thesis studies a class of models in which boundedly rational agents may switch between various simple expectation rules. A well-known specific example features fundamentalists, who target the fundamental value of the asset, and chartists, who try to exploit recent trends in price movements. A crucial feature of these models is that not all agents have to follow the same expectation rule, but are allowed to form heterogeneous beliefs. Chapters 2 and 3 present empirical estimations of two specific heterogeneous agent models. Since the data generating processes are assumed to be nonlinear, due to the agents' switching between expectation rules, nonlinear regression models are applied. By framing the empirical results in a heterogeneous agent framework, these chapters provide an alternative view on important topics in asset pricing, such as the prevalence of excess volatility and the relation between financial markets and the macro-economy. The final two chapters deal with noncausal, or forward-looking, autoregressive models. Chapter 4 shows that US stock prices are better described by noncausal autoregressions than by their causal counterparts. This implies that agents' expectations are not revealed to an outside observer such as an econometrician observing only realized market data. Simulation results show that heterogeneous agent models are able to generate noncausal asset prices. Chapter 5 considers the estimation of a class of standard rational expectations models. It is shown that noncausality of the instrumental variables does not have an impact on the consistency of the generalized method of moments (GMM) estimator, as long as agents form rational expectations.
    Keywords: Asset pricing, heterogeneous expectations, noncausal autoregressions, VAR, GMM, econometrics
    JEL: C22 C32 C36 C58 D84 G12 G17
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59064&r=ecm
  13. By: Aiste Ruseckaite (Erasmus University Rotterdam); Peter Goos (Universiteit Antwerpen, Belgium); Dennis Fok (Erasmus University Rotterdam)
    Abstract: Consumer products and services can often be described as mixtures of ingredients. Examples are the mixture of ingredients in a cocktail and the mixture of different components of waiting time (e.g., in-vehicle and out-of-vehicle travel time) in a transportation setting. Choice experiments may help to determine how the respondents' choice of a product or service is affected by the combination of ingredients. In such studies, individuals are confronted with sets of hypothetical products or services and they are asked to choose the most preferred product or service from each set. However, there are no studies on the optimal design of choice experiments involving mixtures. We propose a method for generating an optimal design for such choice experiments. To this end, we first introduce mixture models in the choice context and next present an algorithm to construct optimal experimental designs, assuming the multinomial logit model is used to analyze the choice data. To overcome the problem that the optimal designs depend on the unknown parameter values, we adopt a Bayesian D-optimal design approach. We also consider locally D-optimal designs and compare the performance of the resulting designs to those produced by a utility-neutral (UN) approach in which designs are based on the assumption that individuals are indifferent between all choice alternatives. We demonstrate that our designs are quite different and in general perform better than the UN designs.
    Keywords: Bayesian design, Choice experiments, D-optimality, Experimental design, Mixture coordinate-exchange algorithm, Mixture experiment, Multinomial logit model, Optimal design
    JEL: C01 C10 C25 C61 C83 C90 C99
    Date: 2014–05–09
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140057&r=ecm
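    Sketch: A minimal Python illustration of the Bayesian D-optimality criterion for a multinomial logit choice design: the log-determinant of the MNL information matrix averaged over prior draws of the parameters. The mixture constraints and the coordinate-exchange search of the paper are not shown; the criterion is only evaluated for one random candidate design.
      import numpy as np

      def mnl_information(design, beta):
          # design: (n_sets, n_alts, n_pars) array of attribute vectors
          info = np.zeros((design.shape[2], design.shape[2]))
          for X in design:
              p = np.exp(X @ beta)
              p /= p.sum()                         # MNL choice probabilities
              info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
          return info

      def bayesian_d_criterion(design, prior_draws):
          # higher is better: expected log det of the information matrix
          return np.mean([np.linalg.slogdet(mnl_information(design, b))[1]
                          for b in prior_draws])

      rng = np.random.default_rng(11)
      design = rng.random((8, 3, 2))               # 8 choice sets, 3 alternatives
      prior_draws = rng.normal(0.5, 0.2, size=(100, 2))
      print(f"Bayesian D-criterion: {bayesian_d_criterion(design, prior_draws):.3f}")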
  14. By: Jarocinski, Marek
    Abstract: The correct implementation of the Durbin and Koopman simulation smoother is explained. A possible misunderstanding is pointed out and clarified, both for the basic state space model and for its extension that allows time-varying intercepts (mean adjustments).
    Keywords: state space model; simulation smoother; trend output
    JEL: C15 C32
    Date: 2014–10–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59466&r=ecm
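    Sketch: A minimal Python illustration of drawing latent states with a (correctly implemented) Durbin and Koopman simulation smoother, here the one shipped with statsmodels, for a local level model; the note's subtlety concerns hand-rolled implementations with time-varying intercepts.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(12)
      level = np.cumsum(0.1 * rng.standard_normal(200))
      y = level + 0.5 * rng.standard_normal(200)

      mod = sm.tsa.UnobservedComponents(y, level="llevel")
      res = mod.fit(disp=0)                        # ML estimates of the variances
      sim = mod.simulation_smoother()
      mod.update(res.params)                       # bind the estimated parameters

      draws = []
      for _ in range(10):
          sim.simulate()                           # one draw of the state path
          draws.append(sim.simulated_state[0].copy())
      draws = np.array(draws)
      print("mean of 10 simulated level paths at t=100:", draws[:, 100].mean())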
  15. By: Nalan Basturk (Erasmus University Rotterdam); Cem Cakmakli (Koc University, Turkey); S. Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, the Netherlands)
    Abstract: This paper starts with a brief description of the introduction of the likelihood approach in econometrics as presented in Cowles Foundation Monographs 10 and 14. A sketch is given of the criticisms of this approach, mainly from the first group of Bayesian econometricians. Publication and citation patterns of Bayesian econometric papers are analyzed in ten major econometric journals from the late 1970s until the first few months of 2014. Results indicate a cluster of journals with theoretical and applied papers, mainly consisting of the Journal of Econometrics, the Journal of Business and Economic Statistics and the Journal of Applied Econometrics, which contains the large majority of high-quality Bayesian econometric papers. A second cluster of theoretical journals, mainly consisting of Econometrica and the Review of Economic Studies, contains few Bayesian econometric papers. The scientific impact of these few papers on Bayesian econometric research is, however, substantial. Special issues of Econometric Reviews, the Journal of Econometrics and Econometric Theory received wide attention. Marketing Science has shown an ever-increasing number of Bayesian papers since the mid-1990s. The International Economic Review and the Review of Economics and Statistics show a moderate, time-varying increase. An upward movement in publication patterns in most journals occurs in the early 1990s, due to the effect of the 'Computational Revolution'. … The abstract continues in the paper.
    Keywords: History, Bayesian Econometrics
    JEL: C01
    Date: 2014–07–08
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140085&r=ecm

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.