nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒04‒09
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bayesian Nonparametric Conditional Copula Estimation of Twin Data By Luciana Dalla Valle; Fabrizio Leisen; Luca Rossini
  2. Improving the Teaching of Econometrics By David Hendry; Grayham E. Mizon
  3. Methods for Nonparametric and Semiparametric Regressions with Endogeneity: a Gentle Guide By Xiaohong Chen; Yin Jia Qiu
  4. Semiparametric Varying Coefficient Models with Endogenous Covariates By S. Centorrino; J. S. Racine
  5. Non-Stationary Dynamic Factor Models for Large Datasets By Barigozzi, Matteo; Lippi, Marco; Luciani, Matteo
  6. Estimating the Spot Covariation of Asset Prices – Statistical Theory and Empirical Evidence By Markus Bibinger; Nikolaus Hautsch; Peter Malec; Markus Reiss
  7. Simple Nonparametric Estimators for the Bid-Ask Spread in the Roll Model By Xiaohong Chen; Oliver Linton; Stefan Schneeberger; Yanping Yi
  8. Copula-based Specification of vector MEMs By Fabrizio Cipollini; Robert F. Engle; Giampiero M. Gallo
  9. A Penalized Spline Estimator for Fixed Effects Panel Data Models By Peter Pütz; Thomas Kneib
  10. Bayesian Compressed Vector Autoregressions By Davide Pettenuzzo; Gary Koop; Dimitris Korobilis
  11. Is the Assumption of Linearity in Factor Models too Strong in Practice? By Nektarios Aslanidis; Luke Hartigan
  12. A Topological View on the Identification of Structural Vector Autoregressions By Klaus Neusser
  13. Calculating Joint Confidence Bands for Impulse Response Functions Using Highest Density Regions By Helmut Lütkepohl; Anna Staszewska-Bystrova; Peter Winker
  14. Overcoming Weak Identification in the Estimation of Household Resource Shares By Denni Tommasi; Alexander Wolf
  15. On clustering financial time series: a need for distances between dependent random variables By Gautier Marti; Frank Nielsen; Philippe Donnat; Sébastien Andler
  16. An Overview of Forecasting Facing Breaks By Jennifer Castle; David Hendry; Michael P. Clements
  17. Time-varying risk premium in large cross-sectional equity datasets By Ossola, Elisa; Gagliardini, Patrick; Scaillet, Olivier
  18. Assessing Identifying Restrictions in SVAR Models By Michele Piffer
  19. Tractable Likelihood-Based Estimation of Non-Linear DSGE Models Using Higher-Order Approximations By Kollmann, Robert

  1. By: Luciana Dalla Valle (Department of Economics, University Of Plymouth); Fabrizio Leisen (Department of Economics, University Of Kent); Luca Rossini (Department of Economics, Ca’ Foscari University Of Venice)
    Abstract: Several studies on heritability in twins aim to understand the differing contributions of environmental and genetic factors to specific traits. Considering the National Merit Twin Study, our purpose is to correctly analyse the influence of socioeconomic status on the relationship between twins’ cognitive abilities. Our methodology is based on conditional copulas, which allow us to model the effect of a covariate driving the strength of dependence between the main variables. We propose a flexible Bayesian nonparametric approach for the estimation of conditional copulas, which can model any conditional copula density. Our methodology extends the work of Wu, Wang, and Walker (2015) by introducing dependence on a covariate in an infinite mixture model. Our results suggest that environmental factors are more influential in families with lower socioeconomic position.
    Keywords: Bayesian nonparametrics, Conditional Copula models, Slice sampling
    JEL: C11 C13 C14 C51
    Date: 2016
  2. By: David Hendry; Grayham E. Mizon
    Abstract: We recommend a major shift in the Econometrics curriculum for both graduate and undergraduate teaching. It is essential to include a range of topics that are still rarely addressed in such teaching, but are now vital for understanding and conducting empirical macroeconomic research. We focus on a new approach to teaching macro-econometrics, since even undergraduate econometrics courses must include analytical methods for time series that exhibit both evolution from stochastic trends and abrupt changes from location shifts, and so confront the ‘non-stationarity revolution’. The complexity and size of the resulting equation specifications, formulated to include all theory-based variables, their lags and possibly non-linear functional forms, as well as potential breaks and rival candidate variables, places model selection for models of changing economic data at the centre of teaching. To illustrate our proposed new curriculum, we draw on a large UK macroeconomics database over 1860–2011. We discuss how we reached our present approach, and how the teaching of macroeconometrics, and econometrics in general, can be improved by nesting so-called ‘theory-driven’ and ‘data-driven’ approaches. In our methodology, the theory-model’s parameter estimates are unaffected by selection when the theory is complete and correct, so nothing is lost, whereas when the theory is incomplete or incorrect, improved empirical models can be discovered from the data. Recent software like Autometrics facilitates both the teaching and the implementation of econometrics, supported by simulation tools to examine operational performance, designed to be feasibly presented live in the classroom.
    Keywords: Teaching Econometrics, Model Selection, Theory Retention, Location Shifts, Indicator Saturation, Autometrics
    JEL: C51 C22
    Date: 2016–03–09
  3. By: Xiaohong Chen (Cowles Foundation, Yale University); Yin Jia Qiu (Dept. of Economics, Yale University)
    Abstract: This paper reviews recent advances in estimation and inference for nonparametric and semiparametric models with endogeneity. It first describes methods of sieves and penalization for estimating unknown functions identified via conditional moment restrictions. Examples include nonparametric instrumental variables regression (NPIV), nonparametric quantile IV regression and many more semi-nonparametric structural models. Asymptotic properties of the sieve estimators and the sieve Wald, quasi-likelihood ratio (QLR) hypothesis tests of functionals with nonparametric endogeneity are presented. For sieve NPIV estimation, the rate-adaptive data-driven choices of sieve regularization parameters and the sieve score bootstrap uniform confidence bands are described. Finally, simple sieve variance estimation and over-identification test for semiparametric two-step GMM are reviewed. Monte Carlo examples are included.
    Keywords: Conditional moment restrictions containing unknown functions, (Quantile) Instrumental variables, Linear and nonlinear functionals, Sieve minimum distance, Sieve GMM, Sieve Wald, QLR, Bootstrap, Semiparametric two-step GMM, Numerical equivalence
    JEL: C12 C14 C32
    Date: 2016–03
  4. By: S. Centorrino; J. S. Racine
    Abstract: Though parametric methods are popular in applied settings, practitioners often require nonparametric alternatives. However, fully nonparametric methods are known to suffer from the curse of dimensionality, which limits their practical application. Semiparametric methods occupy a middle ground: they are flexible, attenuate the curse of dimensionality, and provide an attractive alternative to fully nonparametric methods. Traditional semiparametric methods, such as the popular ‘varying coefficient’ specification, do not account for endogenous covariates, which restricts their application. In this paper we consider the estimation of semiparametric varying coefficient models when the functional coefficients may contain (continuous) endogenous covariates, thereby extending the reach of this flexible and powerful class of models.
    Date: 2016–03
  5. By: Barigozzi, Matteo; Lippi, Marco; Luciani, Matteo
    Abstract: We develop the econometric theory for Non-Stationary Dynamic Factor models for large panels of time series, with a particular focus on building estimators of impulse response functions to unexpected macroeconomic shocks. We derive conditions for consistent estimation of the model as both the cross-sectional size, n, and the time dimension, T, go to infinity, and whether or not cointegration is imposed. We also propose a new estimator for the non-stationary common factors, as well as an information criterion to determine the number of common trends. Finally, the numerical properties of our estimator are explored by means of a Monte Carlo exercise and of a real-data application, in which we study the effects of monetary policy and supply shocks on the US economy.
    Keywords: Dynamic Factor model ; common trends ; impulse response functions ; unit root processes
    JEL: C00 C01 E00
    Date: 2016–03–04
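The abstract above concerns extracting common non-stationary factors from large panels. As a minimal illustration of the general idea only — not the authors' estimator, and with all names hypothetical — one can simulate a panel driven by a common random-walk trend and recover the factor by principal components on the data in levels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a large panel driven by one common stochastic trend (random walk).
T, n = 200, 50
trend = np.cumsum(rng.normal(size=T))          # common non-stationary factor
loadings = rng.normal(size=n)
panel = np.outer(trend, loadings) + rng.normal(scale=0.5, size=(T, n))

# Extract the leading principal component of the (centered) data in levels
# as a factor estimate; the paper's actual estimator differs in its
# treatment of non-stationarity and in how the number of trends is chosen.
panel_c = panel - panel.mean(axis=0)
_, _, vt = np.linalg.svd(panel_c, full_matrices=False)
factor_hat = panel_c @ vt[0]

# The estimated factor tracks the simulated common trend up to sign/scale.
corr = np.corrcoef(factor_hat, trend - trend.mean())[0, 1]
print(abs(corr) > 0.9)
```

With a strong common component, the leading principal component is highly correlated with the simulated trend.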
  6. By: Markus Bibinger; Nikolaus Hautsch; Peter Malec; Markus Reiss
    Abstract: We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semimartingale log asset price process which is subject to noise and non-synchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM) which recently has been introduced by Bibinger et al. (2014). We extend the LMM estimator to allow for autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. We prove the consistency and asymptotic normality of the proposed spot covariance estimator. Based on extensive simulations we provide empirical guidance on the optimal implementation of the estimator and apply it to high-frequency data of a cross-section of NASDAQ blue chip stocks. Employing the estimator to estimate spot covariances, correlations and betas in normal but also extreme-event periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, (iii) are strongly serially correlated, and (iv) can increase strongly and nearly instantaneously if new information arrives.
    Keywords: local method of moments, spot covariance, smoothing, intraday (co-)variation risk
    JEL: C58 C14 C32
    Date: 2014–10–07
  7. By: Xiaohong Chen (Cowles Foundation, Yale University); Oliver Linton (University of Cambridge); Stefan Schneeberger (Dept. of Economics, Yale University); Yanping Yi (Shanghai University of Finance and Economics - School of Economics)
    Abstract: We propose new methods for estimating the bid-ask spread from observed transaction prices alone. Our methods are based on the empirical characteristic function instead of the sample autocovariance function like the method of Roll (1984). As in Roll (1984), we have a closed form expression for the spread, but this is only based on a limited amount of the model-implied identification restrictions. We also provide methods that take account of more identification information. We compare our methods theoretically and numerically with the Roll method as well as with its best known competitor, the Hasbrouck (2004) method, which uses a Bayesian Gibbs methodology under a Gaussian assumption. Our estimators are competitive with Roll's and Hasbrouck's when the latent true fundamental return distribution is Gaussian, and perform much better when this distribution is far from Gaussian. Our methods are applied to the E-mini futures contract on the S&P 500 during the Flash Crash of May 6, 2010. Extensions to models allowing for unbalanced order flow or Hidden Markov trade direction indicators or trade direction indicators having general asymmetric support or adverse selection are also presented, without requiring additional data.
    Keywords: Characteristic function, Deconvolution, Flash Crash, Liquidity
    JEL: C30 C32 G10
    Date: 2016–03
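The paper builds on Roll (1984), whose classic estimator reads the spread off the first-order autocovariance of price changes: spread = 2·sqrt(−Cov(Δp_t, Δp_{t−1})). A small simulation sketch of that benchmark — not the authors' characteristic-function estimators — under an assumed setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate the Roll (1984) model: the efficient price is a random walk,
# transaction prices bounce between bid and ask by half the spread s.
T, s = 100_000, 0.10
mid = np.cumsum(rng.normal(scale=0.01, size=T))  # efficient (mid) price
q = rng.choice([-1, 1], size=T)                  # trade direction indicator
p = mid + (s / 2) * q                            # observed transaction prices

# Classic Roll estimator: spread = 2 * sqrt(-Cov(dp_t, dp_{t-1})).
dp = np.diff(p)
cov1 = np.cov(dp[1:], dp[:-1])[0, 1]
spread_hat = 2 * np.sqrt(max(-cov1, 0.0))
print(spread_hat)
```

The model implies Cov(Δp_t, Δp_{t−1}) = −s²/4, so with this sample size the estimate lands close to the true spread of 0.10.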
  8. By: Fabrizio Cipollini; Robert F. Engle; Giampiero M. Gallo
    Abstract: The Multiplicative Error Model (Engle (2002)) for nonnegative valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with nonnegative support. A multivariate extension allows for the innovations to be contemporaneously correlated. We overcome the lack of sufficiently flexible probability density functions for such processes by suggesting a copula function approach to estimate the parameters of the scale factors and of the correlations of the innovation processes. We illustrate this vector MEM with an application to the interactions between realized volatility, volume and the number of trades. We show that significantly superior realized volatility forecasts are delivered in the presence of other trading activity indicators and contemporaneous correlations.
    Date: 2016–04
  9. By: Peter Pütz; Thomas Kneib
    Abstract: Estimating nonlinear effects of continuous covariates by penalized splines is well established for regressions with cross-sectional data as well as for panel data regressions with random effects. Penalized splines are particularly advantageous since they enable both the estimation of unknown nonlinear covariate effects and inferential statements about these effects. The latter are based, for example, on simultaneous confidence bands that provide a simultaneous uncertainty assessment for the whole estimated functions. In this paper, we consider fixed effects panel data models instead of random effects specifications and develop a first-difference approach for the inclusion of penalized splines in this case. We take the resulting dependence structure into account and adapt the construction of simultaneous confidence bands accordingly. In addition, the penalized spline estimates as well as the confidence bands are also made available for derivatives of the estimated effects which are of considerable interest in many application areas. As an empirical illustration, we analyze the dynamics of life satisfaction over the life span based on data from the German Socio-Economic Panel (SOEP). An open source software implementation of our methods is available in the R package pamfe.
    Keywords: First-difference estimator, life satisfaction, panel data, penalized splines, simultaneous confidence bands
    Date: 2016
  10. By: Davide Pettenuzzo (Brandeis University); Gary Koop (University of Strathclyde); Dimitris Korobilis (University of Glasgow)
    Abstract: Macroeconomists are increasingly working with large Vector Autoregressions (VARs) where the number of parameters vastly exceeds the number of observations. Existing approaches either involve prior shrinkage or the use of factor methods. In this paper, we develop an alternative based on ideas from the compressed regression literature. It involves randomly compressing the explanatory variables prior to analysis. A huge dimensional problem is thus turned into a much smaller, more computationally tractable one. Bayesian model averaging can be done over various compressions, attaching greater weight to compressions which forecast well. In a macroeconomic application involving up to 129 variables, we find compressed VAR methods to forecast better than either factor methods or large VAR methods involving prior shrinkage.
    Keywords: multivariate time series, random projection, forecasting
    JEL: C11 C32 C53
    Date: 2016–03
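The core idea — randomly compressing the explanatory variables before estimation — can be sketched in a few lines. This is an illustrative Gaussian projection followed by least squares under assumed simulated data, not the authors' Bayesian implementation or their compression scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

# Many candidate predictors, few observations: compress before regressing.
T, K, m = 80, 200, 10          # observations, predictors, compressed dim
X = rng.normal(size=(T, K))
beta = np.zeros(K)
beta[:5] = 1.0                 # only a few predictors actually matter
y = X @ beta + rng.normal(size=T)

# Random Gaussian compression matrix (the paper uses related sparse
# schemes), then ordinary least squares on the m compressed regressors.
Phi = rng.normal(size=(m, K)) / np.sqrt(K)
Z = np.column_stack([np.ones(T), X @ Phi.T])   # intercept + compressed X
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_hat = Z @ gamma

# In-sample fit from only m = 10 compressed regressors instead of K = 200.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))
```

In the paper, many such random compressions are drawn and averaged over with weights reflecting forecast performance; the sketch shows a single draw.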
  11. By: Nektarios Aslanidis (Universitat Rovira i Virgili, CREIP); Luke Hartigan (School of Economics, UNSW Business School, UNSW)
    Abstract: The assumption of linearity in factor models is implicit in all empirical applications used in macroeconomic analysis. We test this assumption in a more general setting than previously considered, using a well-studied macroeconomic dataset on the U.S. economy, and find strong evidence in support of regime-switching type non-linearity. Furthermore, we show that non-linearity is strongly concentrated in certain groups of variables (such as financial variables). Our results, which are robust to serial dependence, suggest that the assumption of linearity underpinning factor models might be too strong, and give further support to developing models which explicitly account for non-linearity.
    Keywords: Factor Model, Non-linearity, Regime Change, Transition Variables, LM test
    JEL: C12 C18 C24 C33 C38
    Date: 2016–03
  12. By: Klaus Neusser
    Abstract: The notion of the group of orthogonal matrices acting on the set of all feasible identification schemes is used to characterize the identification problem arising in structural vector autoregressions. This approach presents several conceptual advantages. First, it provides a fundamental justification for the use of the normalized Haar measure as the natural uninformative prior. Second, it allows one to derive the joint distribution of blocks of parameters defining an identification scheme. Finally, it provides a coherent way of studying perturbations of identification schemes, which becomes relevant, among other things, for the specification of vector autoregressions with time-varying covariance matrices.
    Keywords: SVAR; identification; group action; Haar measure; perturbation
    JEL: C1 C18 C32
    Date: 2016–03
  13. By: Helmut Lütkepohl; Anna Staszewska-Bystrova; Peter Winker
    Abstract: This paper proposes a new non-parametric method of constructing joint confidence bands for impulse response functions of vector autoregressive models. The estimation uncertainty is captured by means of bootstrapping and the highest density region (HDR) approach is used to construct the bands. A Monte Carlo comparison of the HDR bands with existing alternatives shows that the former are competitive with the bootstrap-based Bonferroni and Wald confidence regions. The relative tightness of the HDR bands matched with their good coverage properties makes them attractive for applications. An application to corporate bond spreads for Germany highlights the potential for empirical work.
    Keywords: Impulse responses, joint confidence bands, highest density region, vector autoregressive process
    JEL: C32
    Date: 2016
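The highest density region idea — keep the most "central" bootstrap paths and take their envelope as a joint band — can be sketched with a crude Gaussian density proxy. This is an illustrative simplification on simulated draws, not the authors' construction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend these are B bootstrap replications of an impulse response
# over H horizons (here just simulated Gaussian paths for illustration).
B, H = 1000, 8
mean_irf = np.exp(-0.4 * np.arange(H))
draws = mean_irf + rng.normal(scale=0.2, size=(B, H)) * (0.5 + 0.1 * np.arange(H))

# Crude HDR construction: rank the bootstrap paths by a Gaussian density
# proxy (Mahalanobis distance to the mean path), keep the 90% most central
# paths, and take their pointwise envelope as the joint band.
mu = draws.mean(axis=0)
cov = np.cov(draws, rowvar=False)
diff = draws - mu
maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
keep = draws[maha <= np.quantile(maha, 0.90)]
lower, upper = keep.min(axis=0), keep.max(axis=0)

# By construction the band jointly covers at least 90% of the bootstrap paths.
inside = np.all((draws >= lower) & (draws <= upper), axis=1).mean()
print(inside >= 0.90)
```

Because the band envelops all retained paths, joint bootstrap coverage of at least the nominal level holds mechanically; the paper's contribution is making such bands tight while keeping good coverage.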
  14. By: Denni Tommasi; Alexander Wolf
    Abstract: Dunbar et al. (2013) develop a collective model of the household that makes it possible to identify resource shares, that is, how total household resources are divided among household members. We show why, especially when the data exhibit relatively flat Engel curves, the model is weakly identified and induces high variability and an implausible pattern in least squares estimates. We propose an estimation strategy nested in their framework that greatly reduces this practical impediment to the recovery of individual resource shares. To achieve this, we follow an empirical Bayes method that incorporates additional (or out-of-sample) information on singles and relies on mild assumptions on preferences. We show the practical usefulness of this strategy through a series of Monte Carlo simulations and by applying it to Mexican data. The results show that our approach is robust, gives a plausible picture of the household decision process, and is particularly beneficial for the practitioner who wishes to apply the DLP framework. Our welfare analysis of the PROGRESA program in Mexico is the first to include separate poverty rates for men and women in a CCT program.
    Keywords: collective model; sharing rule; resource shares; demand system; Engel curve; Bayes method; conditional cash transfers; PROGRESA
    JEL: D13 D11 D12 C31 I32
    Date: 2016–03
  15. By: Gautier Marti; Frank Nielsen; Philippe Donnat; Sébastien Andler
    Abstract: The following working document summarizes our work on the clustering of financial time series. It was written for a workshop on information geometry and its application to image and signal processing. This workshop brought several experts in pure and applied mathematics together with applied researchers from medical imaging, radar signal processing and finance. The authors belong to the latter group. This document was written as a long introduction to further development of geometric tools in financial applications such as risk or portfolio analysis. Indeed, risk and portfolio analysis essentially rely on covariance matrices. Besides the fact that the Gaussian assumption is known to be inaccurate, covariance matrices are difficult to estimate from empirical data. To filter noise from the empirical estimate, Mantegna proposed using hierarchical clustering. In this work, we first show that this procedure is statistically consistent. Then, we propose to use clustering much more broadly than for filtering empirical covariance matrices estimated from correlation coefficients. To do so, we need distances between the financial time series that incorporate all the available information in these cross-dependent random processes.
    Date: 2016–03
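A standard distance between return series in this clustering literature is d_ij = sqrt(2(1 − ρ_ij)), the Euclidean distance between standardized series. A small sketch on assumed simulated data (not taken from the paper, which argues for richer dependence-aware distances):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two groups of series driven by two distinct common factors.
T = 500
f1, f2 = rng.normal(size=(2, T))
series = np.vstack([f1 + 0.3 * rng.normal(size=(5, T)),
                    f2 + 0.3 * rng.normal(size=(5, T))])

# Correlation-based distance between series i and j:
#   d_ij = sqrt(2 * (1 - rho_ij)),
# i.e. the Euclidean distance between the standardized series.
rho = np.corrcoef(series)
dist = np.sqrt(2 * np.clip(1 - rho, 0, None))

# Within-group distances should be clearly smaller than between-group ones,
# so hierarchical clustering on `dist` would recover the two groups.
within = dist[:5, :5][np.triu_indices(5, 1)].mean()
between = dist[:5, 5:].mean()
print(within < between)
```

Feeding such a distance matrix to an agglomerative clustering routine is the Mantegna-style procedure whose consistency the paper establishes.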
  16. By: Jennifer Castle; David Hendry; Michael P. Clements
    Abstract: Economic forecasting may go badly awry when there are structural breaks, such that the relationships between variables that held in the past are a poor basis for making predictions about the future. We review a body of research that seeks to provide viable strategies for economic forecasting when past relationships can no longer be relied upon.
    Keywords: Business Cycles, Forecasting, Breaks
    JEL: C51 C22
    Date: 2016–02–02
  17. By: Ossola, Elisa; Gagliardini, Patrick; Scaillet, Olivier
    Abstract: We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time-varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset-specific instruments. The estimator uses simple weighted two-pass cross-sectional regressions, and we show its consistency and asymptotic normality under increasing cross-sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no-arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi-period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand US stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative deviations from time-invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four-factor model capturing market, size, value and momentum effects.
    JEL: C12 C13 C23 C51 C52 G12
    Date: 2015
  18. By: Michele Piffer
    Abstract: This paper proposes a Bayesian approach to assess whether the data support candidate set-identifying restrictions for Vector Autoregressive models. The researcher is uncertain about the validity of some sign restrictions that she is contemplating using. She therefore expresses her uncertainty with a prior distribution that covers the parameter space both where the restrictions are satisfied and where they are not satisfied. I show that the data determine whether the probability mass in favour of the restrictions increases or not from prior to posterior. Using two applications, I find support for the restrictions used by Baumeister & Hamilton (2015a) in their two-equation model of labor demand and supply, and I find support for the true data generating process in a simulation exercise on the New Keynesian model.
    Keywords: Identification, Bayesian econometrics, sign restrictions
    JEL: C32 C11
    Date: 2016
  19. By: Kollmann, Robert
    Abstract: This paper discusses a tractable approach for computing the likelihood function of non-linear Dynamic Stochastic General Equilibrium (DSGE) models that are solved using second- and third-order accurate approximations. In contrast to particle filters, no stochastic simulations are needed for this method. The method is hence much faster and is thus suitable for the estimation of medium-scale models. The method assumes that the number of exogenous innovations equals the number of observables. Given an assumed vector of initial states, the exogenous innovations can thus be inferred recursively from the observables. This makes it easy to compute the likelihood function. Initial states and model parameters are estimated by maximizing the likelihood function. Numerical examples suggest that the method provides reliable estimates of model parameters and of latent state variables, even for highly non-linear economies with big shocks.
    Keywords: Likelihood-based estimation of non-linear DSGE models, higher-order approximations, pruning, latent state variables
    JEL: C6 E3
    Date: 2016

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.