nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒12‒11
eleven papers chosen by
Sune Karlsson
Örebro universitet

  1. Beyond sorting: a more powerful test for cross-sectional anomalies By Olivier Ledoit; Michael Wolf; Zhao Zhao
  2. Stock Return Prediction with Fully Flexible Models and Coefficients By Byrne, Joseph; Fu, Rong
  3. A Bayesian Infinite Hidden Markov Vector Autoregressive Model By Didier Nibbering; Richard Paap; Michel van der Wel
  4. Wavelet-based methods for high-frequency lead-lag analysis By Takaki Hayashi; Yuta Koike
  5. Stock Prices Predictability at Long-horizons: Two Tales from the Time-Frequency Domain By Nikolaos Mitianoudis; Theologos Dergiades
  6. Macroeconomic now- and forecasting based on the factor error correction model using targeted mixed frequency indicators By Kurz-Kim, Jeong-Ryeol
  7. Demand Estimation with Unobserved Choice Set Heterogeneity By Crawford, Gregory S.; Griffith, Rachel; Iaria, Alessandro
  8. Fast, approximate MCMC for Bayesian analysis of large data sets: A design based approach By Kaeding, Matthias
  9. Estimation of time-dependent Hurst exponents with variational smoothing and application to forecasting foreign exchange rates By Matthieu Garcin
  10. Impossible Inference in Econometrics: Theory and Applications to Regression Discontinuity, Bunching, and Exogeneity Tests By Marinho Bertanha; Marcelo J. Moreira
  11. A Multiple-Try Extension of the Particle Marginal Metropolis-Hastings (PMMH) Algorithm with an Independent Proposal By Takashi Kamihigashi; Hiroyuki Watanabe

  1. By: Olivier Ledoit; Michael Wolf; Zhao Zhao
    Abstract: Many researchers seek factors that predict the cross-section of stock returns. The standard methodology sorts stocks according to their factor scores into quantiles and forms a corresponding long-short portfolio. Such a course of action ignores any information on the covariance matrix of stock returns. Historically, it has been difficult to estimate the covariance matrix for a large universe of stocks. We demonstrate that using the recent DCC-NL estimator of Engle et al. (2016) substantially enhances the power of tests for cross-sectional anomalies: On average, ‘Student’ t-statistics more than double.
    Keywords: Cross-section of returns, dynamic conditional correlations, GARCH, Markowitz portfolio selection, nonlinear shrinkage
    JEL: C13 C58 G11
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:238&r=ecm
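    To illustrate the contrast the abstract draws, a minimal Python sketch follows: it compares the t-statistic of a quintile-sorted long-short portfolio with that of a covariance-aware portfolio on simulated data. The DCC-NL estimator itself is not in standard Python libraries, so scikit-learn's static Ledoit-Wolf linear shrinkage stands in for it; all data and parameter values are invented for illustration.

      # Sketch: quantile sorting vs. a covariance-aware test, simulated data.
      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(0)
      T, N = 240, 500                      # months, stocks
      scores = rng.normal(size=N)          # hypothetical factor scores
      beta = 0.02 * scores                 # true (weak) cross-sectional effect
      returns = beta + rng.normal(scale=0.10, size=(T, N))

      # 1) Standard approach: sort into quintiles, go long top / short bottom.
      top = scores >= np.quantile(scores, 0.8)
      bot = scores <= np.quantile(scores, 0.2)
      ls = returns[:, top].mean(axis=1) - returns[:, bot].mean(axis=1)
      t_sort = ls.mean() / ls.std(ddof=1) * np.sqrt(T)

      # 2) Covariance-aware approach: weight stocks by Sigma^{-1} * scores,
      #    the efficient (Markowitz-type) portfolio for the score signal.
      sigma = LedoitWolf().fit(returns).covariance_
      w = np.linalg.solve(sigma, scores - scores.mean())
      w /= np.abs(w).sum()                 # normalize gross exposure
      eff = returns @ w
      t_eff = eff.mean() / eff.std(ddof=1) * np.sqrt(T)
      print(f"t (quintile sort) = {t_sort:.2f}, t (covariance-aware) = {t_eff:.2f}")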
  2. By: Byrne, Joseph; Fu, Rong
    Abstract: We evaluate stock return predictability using a fully flexible Bayesian framework, which explicitly allows for different degrees of time-variation in coefficients and in forecasting models. We believe that asset return predictability can evolve quickly or slowly, based upon market conditions, and we should account for this. Our approach has superior out-of-sample predictive performance compared to the historical mean, from a statistical and economic perspective. We also find that our model statistically dominates its nested models, including models in which parameters evolve at a constant rate. By decomposing sources of prediction uncertainty into five parts, we find that our fully flexible approach more precisely identifies time-variation in coefficients and in forecasting models, leading to mitigation of estimation risk and forecasting improvements. Finally, we relate predictability to the business cycle.
    Keywords: Stock Return Prediction, Time-Varying Coefficients and Forecasting Models, Bayesian econometrics, Forecast combination
    JEL: C11 G11 G12 G17
    Date: 2016–11–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:75366&r=ecm
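    The abstract's core idea, predictive coefficients that drift over time, can be illustrated with a forgetting-factor Kalman filter for a predictive regression. This is a simplified stand-in in the spirit of the time-varying-parameter literature the paper builds on, not the authors' fully flexible Bayesian model; the data below are simulated.

      # Sketch: random-walk coefficients tracked by a discounted Kalman filter.
      import numpy as np

      rng = np.random.default_rng(1)
      T = 300
      x = rng.normal(size=(T, 2)); x[:, 0] = 1.0      # constant + one predictor
      beta_true = np.cumsum(rng.normal(scale=0.02, size=(T, 2)), axis=0)
      y = (x * beta_true).sum(axis=1) + rng.normal(scale=0.5, size=T)

      lam, v = 0.99, 0.25                  # forgetting factor, obs. variance
      m, P = np.zeros(2), np.eye(2)        # prior mean and covariance
      forecasts = np.empty(T)
      for t in range(T):
          P = P / lam                      # forgetting: inflate state covariance
          forecasts[t] = x[t] @ m          # one-step-ahead forecast
          S = x[t] @ P @ x[t] + v          # forecast variance
          K = P @ x[t] / S                 # Kalman gain
          m = m + K * (y[t] - forecasts[t])
          P = P - np.outer(K, x[t] @ P)
      print("out-of-sample MSE:", np.mean((y[100:] - forecasts[100:]) ** 2))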
  3. By: Didier Nibbering (Erasmus University Rotterdam, The Netherlands); Richard Paap (Erasmus University Rotterdam, The Netherlands); Michel van der Wel (Erasmus University Rotterdam, The Netherlands)
    Abstract: We propose a Bayesian infinite hidden Markov model to estimate time-varying parameters in a vector autoregressive model. The Markov structure allows for heterogeneity over time while accounting for state persistence. By modelling the transition distribution as a Dirichlet process mixture model, parameters can vary over a potentially infinite number of regimes. The Dirichlet process, however, favours a parsimonious model without imposing restrictions on the parameter space. An empirical application demonstrates the ability of the model to capture both smooth and abrupt parameter changes over time, and a real-time forecasting exercise shows excellent predictive performance even in large-dimensional VARs.
    Keywords: Time-Varying Parameter Vector Autoregressive Model; Semi-parametric Bayesian Inference; Dirichlet Process Mixture Model; Hidden Markov Chain; Monetary Policy Analysis; Real-time Forecasting
    JEL: C11 C14 C32 C51 C54
    Date: 2016–12–06
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20160107&r=ecm
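    The parsimony property attributed to the Dirichlet process can be seen directly from its stick-breaking construction, sketched below (truncated at a finite level purely for illustration; this is not the authors' sampler): weights decay quickly, so only a few regimes receive appreciable mass even though none is ruled out.

      # Sketch: truncated stick-breaking draw from a Dirichlet process prior.
      import numpy as np

      rng = np.random.default_rng(2)
      alpha, K = 1.0, 50                       # concentration, truncation level
      v = rng.beta(1.0, alpha, size=K)         # stick-breaking proportions
      w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # regime weights
      print("mass in 5 largest regimes:", np.sort(w)[::-1][:5].sum())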
  4. By: Takaki Hayashi; Yuta Koike
    Abstract: We propose a novel framework to investigate lead-lag relationships between two financial assets. Our framework bridges a gap between continuous-time modeling based on Brownian motion and the existing wavelet methods for lead-lag analysis based on discrete-time models and enables us to analyze the multi-scale structure of lead-lag effects. We also present a statistical methodology for the scale-by-scale analysis of lead-lag effects in the proposed framework and develop an asymptotic theory applicable to a situation including stochastic volatilities and irregular sampling. Finally, we report several numerical experiments to demonstrate how our framework works in practice.
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1612.01232&r=ecm
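    A scale-by-scale lead-lag analysis can be sketched as follows: compute wavelet-type details of both series at several dyadic scales, then pick the lag maximizing the cross-correlation at each scale. The Haar-style moving-average filter and simulated random-walk data below are illustrative stand-ins; the paper's estimator additionally handles stochastic volatility and irregular sampling.

      # Sketch: lead-lag detection scale by scale, simulated data.
      import numpy as np

      def haar_detail(x, scale):
          """Difference of adjacent moving averages of length `scale`."""
          k = np.ones(scale) / scale
          ma = np.convolve(x, k, mode="valid")
          return ma[scale:] - ma[:-scale]

      def best_lag(a, b, max_lag):
          """Return l maximizing corr(a(t), b(t+l)); l > 0 means a leads b."""
          lags = range(-max_lag, max_lag + 1)
          cc = [np.corrcoef(a[max_lag: len(a) - max_lag],
                            b[max_lag + l: len(b) - max_lag + l])[0, 1]
                for l in lags]
          return list(lags)[int(np.argmax(cc))]

      rng = np.random.default_rng(3)
      n, true_lag = 4000, 5
      common = np.cumsum(rng.normal(size=n + true_lag))
      x = common[true_lag:] + rng.normal(scale=0.2, size=n)   # leader
      y = common[:n] + rng.normal(scale=0.2, size=n)          # lagger
      for scale in (2, 4, 8, 16):
          lag = best_lag(haar_detail(x, scale), haar_detail(y, scale), 20)
          print(f"scale {scale:2d}: estimated lead of x over y = {lag}")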
  5. By: Nikolaos Mitianoudis (Democritus University of Thrace, Greece); Theologos Dergiades (University of Macedonia, Greece)
    Abstract: Accepting non-linearities as an endemic feature of financial data, this paper re-examines Cochrane's "new fact in finance" hypothesis (Cochrane, Economic Perspectives, FRB of Chicago, 23, 36-58, 1999). Implementing two methods frequently encountered in digital signal processing, the Undecimated Wavelet Transform and Empirical Mode Decomposition (both extract components in the time-frequency domain), we decompose the real stock prices and the real dividends for the US economy into signals that correspond to distinctive frequency bands. Armed with the decomposed signals and acting within a non-linear framework, we assess the predictability of stock prices through the use of dividends at alternative horizons. It is shown that the "new fact in finance" hypothesis is a valid proposition, as dividends contribute significantly to predicting stock prices at horizons beyond 32 months. The identified predictability is entirely non-linear in nature.
    Keywords: Stock prices and dividends, Time-frequency decomposition.
    JEL: G10 C14 C22 C29
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:mcd:mcddps:2016_04&r=ecm
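    The time-frequency logic can be mimicked with any band decomposition. The sketch below uses a plain FFT band-pass filter as a simple stand-in for the Undecimated Wavelet Transform and Empirical Mode Decomposition used in the paper, on simulated series in which a slow dividend component leads prices by h months; only the low-frequency band should then show long-horizon predictability.

      # Sketch: band-by-band predictability of prices from dividends.
      import numpy as np

      def bandpass(x, low, high):
          """Keep Fourier components with period in [low, high) months."""
          f = np.fft.rfft(x)
          freqs = np.fft.rfftfreq(len(x))
          periods = np.full(f.shape, np.inf)
          nz = freqs > 0
          periods[nz] = 1.0 / freqs[nz]
          f[~((periods >= low) & (periods < high))] = 0.0
          return np.fft.irfft(f, n=len(x))

      rng = np.random.default_rng(4)
      n, h = 1200, 36                                   # months, horizon
      trend = np.cumsum(rng.normal(scale=0.05, size=n + h))
      div = trend[h:] + rng.normal(scale=0.3, size=n)   # dividends carry the slow signal early
      price = trend[:n] + rng.normal(scale=0.3, size=n) # prices pick it up h months later
      for name, lo_p, hi_p in [("< 32 months", 2, 32), (">= 32 months", 32, np.inf)]:
          d = bandpass(div, lo_p, hi_p)
          p = bandpass(price, lo_p, hi_p)
          r = np.corrcoef(d[:-h], p[h:])[0, 1]
          print(f"band {name}: corr(div_t, price_(t+h)) = {r:.2f}")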
  6. By: Kurz-Kim, Jeong-Ryeol
    Abstract: Since the influential paper of Stock and Watson (2002), the dynamic factor model (DFM) has been widely used for forecasting key macroeconomic variables such as GDP. However, the DFM has some weaknesses. For nowcasting, the dynamic factor model has been modified using the mixed-data sampling (MIDAS) technique. Other improvements have been studied mostly in two directions: pre-selection methods optimally choose a small number of indicators from a large set of indicators, and the error correction mechanism takes into account the cointegrating relationship between the key variables and the factors, hence capturing the long-run dynamics of the non-stationary macroeconomic variables. This paper proposes the factor error correction model using targeted mixed-frequency indicators, which combines these three refinements of the dynamic factor model, namely the mixed-data sampling technique, pre-selection methods, and the error correction mechanism. The empirical results based on euro-area data show that the nowcasting and forecasting performance of our new model is superior to that of the subset models that use only some of these refinements.
    Keywords: Factor model, MIDAS, Lasso, Elastic Net, ECM, Nowcasting, Forecasting
    JEL: C18 C23 C51 C52 C53
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:472016&r=ecm
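    The three ingredients the paper combines, mixed-frequency alignment, pre-selection of targeted indicators, and an error correction term, can be sketched on simulated data as below. The skip-sampled MIDAS mapping, LASSO penalty, and single-factor PCA are simplified stand-ins for the paper's actual specification.

      # Sketch: targeted mixed-frequency indicators feeding a factor ECM.
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      Q, M = 120, 60                                  # quarters, monthly indicators
      f = np.cumsum(rng.normal(size=3 * Q))           # common monthly factor, I(1)
      load = rng.normal(size=M) * (rng.random(M) < 0.3)
      X_m = np.outer(f, load) + rng.normal(size=(3 * Q, M))
      gdp = f[2::3] + rng.normal(scale=0.5, size=Q)   # quarterly GDP (level)

      # (i) MIDAS-style alignment: stack each quarter's three monthly values.
      X_q = X_m.reshape(Q, 3 * M)
      # (ii) targeted indicators: LASSO keeps columns informative about GDP.
      sel = Lasso(alpha=0.1, max_iter=10_000).fit(X_q, gdp).coef_ != 0
      factor = PCA(n_components=1).fit_transform(X_q[:, sel])[:, 0]
      # (iii) ECM: GDP growth responds to last quarter's GDP-factor gap.
      gap = gdp - np.polyval(np.polyfit(factor, gdp, 1), factor)
      dy = np.diff(gdp)
      loading = np.polyfit(gap[:-1], dy, 1)[0]
      print(f"{sel.sum()} of {3 * M} columns selected, ECM loading = {loading:.2f}")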
  7. By: Crawford, Gregory S.; Griffith, Rachel; Iaria, Alessandro
    Abstract: We present a method to estimate preferences in the presence of unobserved choice set heterogeneity. We build on the insights of Chamberlain's Fixed-Effect Logit and exploit information in observed purchase decisions in either panel or cross-section environments to construct "sufficient sets" of choices that lie within consumers' true but unobserved choice sets. This allows us to recover preference parameters without having to specify the process of choice set formation. We illustrate our ideas by estimating demand for chocolate bars on-the-go using individual-level data from the UK. Our results show that failing to account for unobserved choice set heterogeneity can lead to statistically and economically significant biases in the estimation of preference parameters.
    Keywords: attention; discrete choice demand estimation; endogenous product choice; search; sufficient sets; unobserved choice set heterogeneity
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11675&r=ecm
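    The "sufficient set" idea can be sketched as follows: take the union of products a consumer is observed to buy across panel periods, a set that necessarily lies within the true but unobserved choice set, and estimate a logit restricted to it. The simulation below is illustrative only and omits the paper's refinements.

      # Sketch: logit demand estimated over observed "sufficient sets".
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)
      N, T, J, beta_true = 500, 8, 10, -2.0
      prices = rng.uniform(1, 2, size=(N, T, J))
      # each consumer truly considers a random subset of products (unobserved)
      consider = rng.random((N, J)) < 0.6
      consider[:, 0] = True                              # ensure non-empty sets
      u = beta_true * prices + rng.gumbel(size=(N, T, J))
      u[~consider[:, None, :].repeat(T, axis=1)] = -np.inf
      choice = u.argmax(axis=2)                          # observed purchases

      suff = np.zeros((N, J), dtype=bool)                # sufficient sets
      np.put_along_axis(suff, choice, True, axis=1)

      def neg_loglik(beta):
          v = beta[0] * prices                           # utilities, all options
          v = np.where(suff[:, None, :], v, -np.inf)     # restrict to suff. set
          ll = np.take_along_axis(v, choice[:, :, None], axis=2)[:, :, 0]
          ll -= np.log(np.exp(v).sum(axis=2))
          return -ll.sum()

      res = minimize(neg_loglik, x0=[0.0], method="BFGS")
      print(f"true beta = {beta_true}, estimated beta = {res.x[0]:.2f}")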
  8. By: Kaeding, Matthias
    Abstract: We propose a fast, approximate Metropolis-Hastings algorithm for large data sets, embedded in a design-based approach in which the log-likelihood ratios involved in the Metropolis-Hastings acceptance step are treated as data. The building block is a single subsample from the complete data set, so that the need to store the complete data set is bypassed. The subsample is taken via the cube method, a balanced sampling design defined by the property that the sample mean of some auxiliary variables is close to the corresponding mean in the complete data set. We develop several computationally and statistically efficient estimators of the Metropolis-Hastings acceptance probability. Our simulation studies show that the approach works well and can yield results close to those obtained from the complete data set, while being much faster. The methods are applied to a large data set consisting of all German diesel prices for the first quarter of 2015.
    Abstract: For the Bayesian analysis of large data sets, we propose a fast, approximate variant of the Metropolis-Hastings algorithm. The algorithm requires computing log-likelihood differences; in the proposed approach these are treated as data whose sum is to be estimated. The basis is a sample from the complete data set, so that the complete data set does not have to be stored. The sample is drawn via cube sampling, a balanced sampling design: the sample is drawn such that the sample mean of selected auxiliary variables is close to the mean of those auxiliary variables in the population. Several computationally and statistically efficient estimators of the sum of the log-likelihood differences, based on the sample, are developed. The proposed methods are evaluated via Monte Carlo simulations. These show a trade-off between computational effort and approximation error; however, the approximation error can be kept at a negligible level while saving a significant amount of computation. The proposed method is applied to a large data set of German petrol prices for the first quarter of 2015.
    Keywords: Bayesian inference, big data, approximate MCMC, survey sampling
    JEL: C11 C55 C83
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:rwirep:660&r=ecm
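    The acceptance step the abstract describes can be sketched in a few lines: the sum of log-likelihood differences over the full data set is estimated from one fixed subsample, scaled by N/n. Simple random sampling stands in for the cube method here, and the toy model (normal mean, known variance) is invented for illustration.

      # Sketch: Metropolis-Hastings with a subsample-estimated acceptance ratio.
      import numpy as np

      rng = np.random.default_rng(7)
      N, n = 1_000_000, 5_000
      data = rng.normal(loc=1.5, scale=2.0, size=N)
      sub = rng.choice(data, size=n, replace=False)    # stand-in for cube sampling

      def loglik(theta, x):
          return -0.5 * ((x - theta) / 2.0) ** 2       # normal kernel, sigma = 2

      theta, chain = sub.mean(), []
      for _ in range(5_000):
          prop = theta + rng.normal(scale=0.005)
          # estimated sum of log-likelihood differences over the full data set
          delta_hat = (N / n) * (loglik(prop, sub) - loglik(theta, sub)).sum()
          if np.log(rng.random()) < delta_hat:         # flat prior
              theta = prop
          chain.append(theta)
      print("approximate posterior mean:", np.mean(chain[1000:]))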
  9. By: Matthieu Garcin (Natixis Asset Management, Labex ReFi - Université Paris1 - Panthéon-Sorbonne)
    Abstract: Hurst exponents depict the long memory of a time series. For human-dependent phenomena, as in finance, this feature may vary over time. This justifies modelling the dynamics by multifractional Brownian motions, which are consistent with time-varying Hurst exponents. We improve on the existing literature on estimating time-dependent Hurst exponents by proposing a smooth estimate obtained by variational calculus. The method is very general and not restricted to the Hurst framework alone. It is globally more accurate and simpler than other existing non-parametric estimation techniques. Moreover, in the field of Hurst exponents, it makes it possible to produce forecasts based on the estimated multifractional Brownian motion. The application to high-frequency foreign exchange markets (GBP, CHF, SEK, USD, CAD, AUD, JPY, CNY and SGD, all against EUR) shows significantly good forecasts. When the Hurst exponent is above 0.5, which indicates long memory, the accuracy is higher.
    Keywords: Hurst exponent, Euler-Lagrange equation, non-parametric smoothing, foreign exchange forecast, multifractional Brownian motion
    Date: 2016–11–19
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01399570&r=ecm
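    A rolling-window version of the estimation problem can be sketched as follows: within each window, the Hurst exponent is the slope of the log standard deviation of increments on the log lag (since Var[X_{t+tau} - X_t] scales like tau^{2H}), and the raw path is then smoothed. A moving average stands in below for the paper's variational (Euler-Lagrange) smoothing, and the simulated series has H = 0.5 by construction, so estimates should hover near 0.5.

      # Sketch: raw rolling-window Hurst estimates, then a smoothing pass.
      import numpy as np

      rng = np.random.default_rng(8)
      x = np.cumsum(rng.normal(size=5000))             # H = 0.5 benchmark series

      def local_hurst(w):
          lags = np.array([1, 2, 4, 8])
          sds = [np.std(w[l:] - w[:-l]) for l in lags]
          return np.polyfit(np.log(lags), np.log(sds), 1)[0]

      window, step = 500, 50
      raw = np.array([local_hurst(x[s:s + window])
                      for s in range(0, len(x) - window, step)])
      smooth = np.convolve(raw, np.ones(9) / 9, mode="valid")  # stand-in smoother
      print("mean raw H:", raw.mean().round(2), "| smoothed range:",
            smooth.min().round(2), "-", smooth.max().round(2))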
  10. By: Marinho Bertanha; Marcelo J. Moreira
    Abstract: This paper presents necessary and sufficient conditions for tests to have trivial power. By inverting these impractical tests, we demonstrate that bounded confidence regions have an error probability equal to one. This theoretical framework establishes a connection among many existing impossibility results in econometrics, both those using the total variation metric and those using the Lévy-Prokhorov metric (convergence in distribution). In particular, the theory establishes conditions under which the two types of impossibility arise in econometric models. We then apply our theory to Regression Discontinuity Design models and to exogeneity tests based on bunching.
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1612.02024&r=ecm
  11. By: Takashi Kamihigashi (Research Institute for Economics & Business Administration (RIEB), Kobe University, Japan); Hiroyuki Watanabe (Research Institute for Economics & Business Administration (RIEB), Kobe University, Japan)
    Abstract: In this paper we propose a multiple-try extension of the PMMH algorithm with an independent proposal. In our algorithm, I ∈ ℕ parameter particles are sampled from the independent proposal. For each of them, a particle filter with K ∈ ℕ state particles is run. We show that the algorithm has the following properties: (i) the distribution of the Markov chain generated by the algorithm converges to the posterior of interest in total variation; (ii) as I increases to ∞, the acceptance probability at each iteration converges to 1; and (iii) as I increases to ∞, the autocorrelation of any order for any parameter with bounded support converges to 0. These results indicate that the algorithm generates almost i.i.d. samples from the posterior for sufficiently large I. Our numerical experiments suggest that one can visibly improve mixing by increasing I from 1 to only 10. This does not significantly increase computation time if a computer with at least 10 threads is used.
    Keywords: Multiple-try method; Particle marginal Metropolis-Hastings; Markov chain Monte Carlo; Mixing; State space models
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:kob:dpaper:dp2016-36&r=ecm
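    The multiple-try mechanism can be sketched without the particle filter: in full PMMH each candidate's likelihood would be estimated with K state particles, whereas the toy sketch below evaluates an exact density in its place. I candidates are drawn from the independent proposal, one is selected with probability proportional to its importance weight, and the standard multiple-try acceptance ratio for an independent proposal is applied.

      # Sketch: multiple-try Metropolis with an independent proposal, toy target.
      import numpy as np

      rng = np.random.default_rng(9)

      def log_pi(x):   # toy target: mixture of two normals at -2 and +2
          return np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

      def log_q(x):    # independent proposal: N(0, 4^2), constants omitted
          return -0.5 * (x / 4.0) ** 2

      I, iters, x, chain, accepts = 10, 20_000, 0.0, [], 0
      for _ in range(iters):
          y = rng.normal(scale=4.0, size=I)            # I independent candidates
          w = np.exp(log_pi(y) - log_q(y))             # importance weights
          j = rng.choice(I, p=w / w.sum())             # select one by weight
          w_x = np.exp(log_pi(x) - log_q(x))
          alpha = w.sum() / (w.sum() - w[j] + w_x)     # MTM acceptance ratio
          if rng.random() < alpha:
              x, accepts = y[j], accepts + 1
          chain.append(x)
      print(f"acceptance rate with I={I}: {accepts / iters:.2f}")
      print("posterior mean (target is symmetric, so ~0):", np.mean(chain))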

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.