nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒03‒05
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bayesian Bandwidth Estimation In Nonparametric Time-Varying Coefficient Models By Tingting Cheng; Jiti Gao; Xibin Zhang
  2. Tests of Concentration for Low-Dimensional and High-Dimensional Directional Data By Christine Cutting; Davy Paindaveine; Thomas Verdebout
  3. Testing for Identification in SVAR-GARCH Models: Reconsidering the Impact of Monetary Shocks on Exchange Rates By Helmut Lütkepohl; George Milunovich
  4. Dynamic Principal Components: a New Class of Multivariate GARCH Models By Gian Piero Aielli; Massimiliano Caporin
  5. Lagged Explanatory Variables and the Estimation of Causal Effects By Bellemare, Marc F.; Masaki, Takaaki; Pepinsky, Thomas B.
  6. Estimation of Dynamic Nonlinear Random Effects Models with Unbalanced Panels By Pedro Albarrán; Raquel Carrasco; Jesús M. Carro
  7. Regression Based Estimation of Dynamic Asset Pricing Models By Adrian, Tobias; Crump, Richard K.; Moench, Emanuel
  8. Natural Experiment Policy Evaluation: A Critique By Christopher A. Hennessy; Ilya A. Strebulaev
  9. An Infinite Hidden Markov Model for Short-term Interest Rates By Maheu, John M; Yang, Qiao
  10. Volatility-related exchange traded assets: an econometric investigation By Mencía, Javier; Sentana, Enrique
  11. Fast ML estimation of dynamic bifactor models: an application to European inflation By Fiorentini, Gabriele; Galesi, Alessandro; Sentana, Enrique
  12. Bayesian Estimation of Time-Changed Default Intensity Models By Gordy, Michael B.; Szerszen, Pawel J.
  13. Early warning indicators for banking crises: a conditional moments approach By Ferrari, Stijn; Pirovano, Mara
  14. Evaluating Firm-Level Expected-Return Proxies By Lee, Charles M. C.; So, Eric C.; Wang, Charles C. Y.
  15. Beyond the local mean-variance analysis in continuous time: The problem of non-normality By Aase, Knut K.; Lillestøl, Jostein

  1. By: Tingting Cheng; Jiti Gao; Xibin Zhang
    Abstract: Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this paper, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidths and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method to the time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the US. For each model, we also provide calibrated parametric forms of the time-varying coefficients.
    Keywords: Local constant estimator, bandwidth, Markov chain Monte Carlo
    JEL: C11 C14 C15
    Date: 2015
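For readers unfamiliar with the estimator in question: the local constant (Nadaraya-Watson) estimator averages nearby observations with kernel weights scaled by the bandwidth h, and the choice of h drives the bias-variance tradeoff that the paper's Bayesian method addresses. A minimal sketch of the estimator itself, not of the authors' MCMC procedure; all data and values here are purely illustrative:

```python
import numpy as np

def local_constant(x_grid, x, y, h):
    """Nadaraya-Watson local constant estimator with a Gaussian kernel."""
    # Kernel weight of each observation x_i at each evaluation point x0.
    w = np.exp(-0.5 * ((x[None, :] - x_grid[:, None]) / h) ** 2)
    # Kernel-weighted average of y at each evaluation point.
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 500)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(500)

grid = np.linspace(0.05, 0.95, 50)
truth = np.sin(2 * np.pi * grid)
for h in (0.01, 0.1, 0.5):
    mse = np.mean((local_constant(grid, x, y, h) - truth) ** 2)
    print(f"h={h}: MSE={mse:.3f}")
```

An oversmoothing bandwidth (h = 0.5 here) flattens the sine curve toward its mean, while a tiny bandwidth tracks noise; the paper's contribution is to treat h and the error-density parameters as objects of Bayesian inference rather than plug-in rules.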
  2. By: Christine Cutting; Davy Paindaveine; Thomas Verdebout
    Abstract: We consider asymptotic inference for the concentration of directional data. More precisely, we propose tests for concentration (i) in the low-dimensional case where the sample size n goes to infinity and the dimension p remains fixed, and (ii) in the high-dimensional case where both n and p become arbitrarily large. To the best of our knowledge, the tests we provide are the first procedures for concentration that are valid in the (n, p)-asymptotic framework. Throughout, we consider parametric FvML tests, that are guaranteed to meet asymptotically the nominal level constraint under FvML distributions only, as well as “pseudo-FvML” versions of such tests, that are validity-robust within the class of rotationally symmetric distributions. We conduct a Monte Carlo study to check our asymptotic results and to investigate the finite-sample behavior of the proposed tests.
    Date: 2014–02
  3. By: Helmut Lütkepohl; George Milunovich
    Abstract: Changes in residual volatility in vector autoregressive (VAR) models can be used for identifying structural shocks in a structural VAR analysis. Testable conditions are given for full identification for the case where the volatility changes can be modelled by a multivariate GARCH process. Formal statistical tests are presented for identification and their small sample properties are investigated via a Monte Carlo study. The tests are applied to investigate the validity of the identification conditions in a study of the effects of U.S. monetary policy on exchange rates. It is found that the data do not support full identification in most of the models considered, and the implied problems for the interpretation of the results are discussed.
    Keywords: Structural vector autoregression, conditional heteroskedasticity, GARCH, identification via heteroskedasticity
    JEL: C32
    Date: 2015
  4. By: Gian Piero Aielli; Massimiliano Caporin (University of Padova)
    Abstract: The OGARCH specification is the leading model for a class of multivariate GARCH (MGARCH) specifications that are based on linear combinations of univariate GARCH specifications. Most MGARCH models in this class adopt a spectral decomposition of the covariance matrix, allowing for heteroskedasticity on at least some of the principal components, while the loading matrix, which maps the conditional principal components to the asset returns, is constant over time. This paper extends the OGARCH model class to allow for time-varying loadings. Our approach closely parallels the DCC modelling approach, introduced as an extension of the CCC model, to allow for dynamic correlations. After introducing an auxiliary process that captures the relevant features of the unobservable loading dynamics, we compute the time-varying loading matrix from the auxiliary process, subject to the necessary orthonormality constraints. The resulting model (the Dynamic Principal Components, or DPC, model) preserves the OGARCH model's ease of interpretation and feasibility. In particular, we show that the eigenvectors of the sample covariance matrix can consistently estimate the time-varying loadings' intercept term. This property extends to the dynamic framework the well-known analogous property of the OGARCH model. Empirical examples demonstrate the benefits of allowing the loading matrix to vary over time.
    Keywords: Spectral Decomposition, Principal Component Analysis, Orthogonal GARCH, Scalar BEKK, DCC, Multivariate GARCH, Two-step Estimation.
    JEL: C32 C58 C13 G10
    Date: 2015–02
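To fix ideas, the spectral-decomposition step shared by the whole OGARCH model class can be illustrated in a few lines: the eigenvectors of the sample covariance matrix serve as an orthonormal loading matrix, and the implied principal components are uncorrelated in sample. A static sketch under a hypothetical data-generating process, not an implementation of the DPC model's time-varying loadings:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 5000, 3
# Hypothetical DGP: orthonormal loadings applied to components with distinct variances.
variances = np.array([4.0, 1.0, 0.25])
loadings, _ = np.linalg.qr(rng.standard_normal((N, N)))   # true orthonormal loadings
components = rng.standard_normal((T, N)) * np.sqrt(variances)
returns = components @ loadings.T

# OGARCH-style first step: spectral decomposition of the sample covariance matrix.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pcs = returns @ eigvecs                  # estimated principal components

print(np.round(np.sort(eigvals)[::-1], 2))   # close to [4.0, 1.0, 0.25]
```

In the paper's dynamic extension the loadings move over time, but, as the abstract notes, the eigenvectors of the sample covariance matrix still consistently estimate the loadings' intercept term.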
  5. By: Bellemare, Marc F.; Masaki, Takaaki; Pepinsky, Thomas B.
    Abstract: Across the social sciences, lagged explanatory variables are a common strategy to confront challenges to causal identification using observational data. We show that "lag identification"--the use of lagged explanatory variables to solve endogeneity problems--is an illusion: lagging independent variables merely moves the channel through which endogeneity biases causal estimates, replacing a "selection on observables" assumption with an equally untestable "no dynamics among unobservables" assumption. We build our argument intuitively using directed acyclic graphs, then provide analytical results on the bias resulting from lag identification in a simple linear regression framework. We then present simulation results that characterize how, even under favorable conditions, lag identification leads to incorrect inferences. These findings have important implications for current practice among applied researchers in political science, economics, and related disciplines. We conclude by specifying the conditions under which lagged explanatory variables are appropriate for identifying causal effects.
    Keywords: Causal Identification, Treatment Effects, Lagged Variables
    JEL: C13 C15 C21
    Date: 2015–02–23
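The mechanism in the abstract is easy to verify by simulation: when the unobservable confounder is serially correlated, lagging the explanatory variable reroutes the endogeneity rather than removing it. A sketch under a hypothetical linear DGP (all parameter values illustrative; the true causal effect is set to zero):

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho = 200_000, 0.8

# Serially correlated unobservable u_t that drives both x_t and y_t.
u = np.zeros(T)
eps = rng.standard_normal(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + eps[t]

x = u + rng.standard_normal(T)              # endogenous regressor
y = 0.0 * x + u + rng.standard_normal(T)    # true causal effect of x on y is zero

def ols_slope(a, b):
    """Slope from a univariate OLS regression of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

b_contemp = ols_slope(x, y)          # y_t on x_t: biased away from zero
b_lagged = ols_slope(x[:-1], y[1:])  # y_t on x_{t-1}: still biased, because
                                     # x_{t-1} correlates with the persistent u_t
print(round(b_contemp, 2), round(b_lagged, 2))
```

Both slopes are far from the true effect of zero; lagging shrinks the bias only by the persistence factor rho, which is exactly the "no dynamics among unobservables" assumption the paper makes explicit.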
  6. By: Pedro Albarrán; Raquel Carrasco; Jesús M. Carro
    Abstract: This paper presents and evaluates estimation methods for dynamic nonlinear correlated random effects (CRE) models with unbalanced panels. Accounting for the unbalancedness is crucial in dynamic non-linear models, and it cannot be ignored even if the process that produces it is completely at random. Available approaches to estimate dynamic CRE models accounting for the initial conditions problem were developed for balanced panels and do not work with unbalanced panels. In this type of dynamic model, simply ignoring the unbalancedness produces inconsistent estimates of the parameters. Another potential "solution", used by some practitioners, is to take the sub-sample that constitutes a balanced panel and then estimate the model using the available methods. Nonetheless, this approach is not feasible in some cases because the constructed balanced panel might not contain enough common periods across individuals. Moreover, when feasible, it discards useful information, which, as we show, leads to important efficiency losses. In this paper we consider several scenarios in which the sample selection process can be arbitrarily correlated with the permanent unobserved heterogeneity. The approaches we propose exploit all the observations available, can be implemented using standard solutions to the initial conditions problem, and can be easily applied in the context of commonly used models, such as dynamic binary choice models.
    Keywords: unbalanced panels, correlated random effects, dynamic non-linear models
    JEL: C23 C25
    Date: 2015–02
  7. By: Adrian, Tobias; Crump, Richard K.; Moench, Emanuel
    Abstract: We propose regression-based estimators for beta representations of dynamic asset pricing models with an affine pricing kernel specification. We allow for state variables that are cross-sectional pricing factors, forecasting variables for the price of risk, and factors that are both. The estimators explicitly allow for time-varying prices of risk, time-varying betas, and serially dependent pricing factors. Our approach nests the Fama-MacBeth two-pass estimator as a special case. We provide the asymptotic multistage standard errors necessary to conduct inference for asset pricing tests. We illustrate our new estimators in an application to the joint pricing of stocks and bonds. The application features strongly time-varying, highly significant prices of risk, which are found to be quantitatively more important than time-varying betas in reducing pricing errors.
    Keywords: Dynamic Asset Pricing; Fama-MacBeth Regressions; GMM; Minimum Distance Estimation; Reduced Rank Regression; Time-varying Betas
    JEL: C58 G10 G12
    Date: 2015–03
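The Fama-MacBeth two-pass estimator that these regression-based estimators nest can be sketched in a static setting: pass one runs time-series regressions to recover betas, pass two runs period-by-period cross-sectional regressions of returns on those betas and averages the slopes. A hypothetical constant-beta, constant-price-of-risk example, not a reproduction of the paper's dynamic version:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 1000, 25
lam_true = 0.05                                  # hypothetical price of risk

f = lam_true + 0.1 * rng.standard_normal(T)      # traded factor with mean lam_true
beta = rng.uniform(0.5, 1.5, N)                  # true betas
r = beta[None, :] * f[:, None] + 0.02 * rng.standard_normal((T, N))

# Pass 1: time-series regression of each asset's excess returns on the factor.
f_dm = f - f.mean()
beta_hat = f_dm @ (r - r.mean(axis=0)) / (f_dm @ f_dm)

# Pass 2: cross-sectional regression (no intercept) of r_t on beta_hat each
# period; the average slope estimates the price of risk.
lam_t = (r @ beta_hat) / (beta_hat @ beta_hat)
lam_hat = lam_t.mean()
print(round(lam_hat, 3))
```

The paper generalizes this template to time-varying betas and prices of risk and supplies the multistage standard errors needed for inference across the passes.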
  8. By: Christopher A. Hennessy; Ilya A. Strebulaev
    Abstract: We argue that exogenous random treatment is insufficient for valid inference regarding the sign and magnitude of causal effects in dynamic environments. In such settings, treatment responses must be understood as contingent upon the typically unmodeled policy generating process. With binary assignment, this results in quantitatively significant attenuation bias. With more than two policy states, treatment responses can be biased downward, biased upward, or have the wrong sign. Further, it is not only generally invalid to extrapolate elasticities across policy processes, as argued by Lucas (1976), but also to extrapolate within the same policy process. We derive auxiliary assumptions beyond exogeneity for valid inference in dynamic settings. If all possible policy transitions are rare events, treatment responses approximate causal effects. However, reliance on rare events is overly restrictive, as the necessary and sufficient condition for equality of treatment responses and causal effects is that policy variable changes have mean zero. If this condition is not met, we show how treatment responses can nevertheless be corrected and mapped back to causal effects or extrapolated to forecast responses to future policy changes.
    JEL: C01 C22 C52 C54 G38
    Date: 2015–02
  9. By: Maheu, John M; Yang, Qiao
    Abstract: The time-series dynamics of short-term interest rates are important as they are a key input into pricing models of the term structure of interest rates. In this paper we extend popular discrete time short-rate models to include Markov switching of infinite dimension. This is a Bayesian nonparametric model that allows for changes in the unknown conditional distribution over time. Applied to weekly U.S. data we find significant parameter change over time and strong evidence of non-Gaussian conditional distributions. Our new model with a hierarchical prior provides significant improvements in density forecasts as well as point forecasts. We find evidence of recurring regimes as well as structural breaks in the empirical application.
    Keywords: hierarchical Dirichlet process prior, beam sampling, Markov switching, MCMC
    JEL: C11 C14 C22 C58
    Date: 2015–01
  10. By: Mencía, Javier; Sentana, Enrique
    Abstract: We compare Semi-Nonparametric expansions of the Gamma distribution with alternative Laguerre expansions, showing that they substantially widen the range of feasible moments of positive random variables. Then, we combine those expansions with a component version of the Multiplicative Error Model to capture the mean reversion typical in positive but stationary financial time series. Finally, we carry out an empirical application in which we compare various asset allocation strategies for Exchange Traded Notes tracking VIX futures indices, which are increasingly popular but risky financial instruments. We show the superior performance of the strategies based on our econometric model.
    Keywords: Density Expansions; Exchange Traded Notes; Multiplicative Error Model; Volatility Index Futures
    JEL: C16 G13
    Date: 2015–03
  11. By: Fiorentini, Gabriele; Galesi, Alessandro; Sentana, Enrique
    Abstract: We generalise the spectral EM algorithm for dynamic factor models in Fiorentini, Galesi and Sentana (2014) to bifactor models with pervasive global factors complemented by regional ones. We exploit the sparsity of the loading matrices so that researchers can estimate those models by maximum likelihood with many series from multiple regions. We also derive convenient expressions for the spectral scores and information matrix, which allows us to switch to the scoring algorithm near the optimum. We explore the ability of a model with a global factor and three regional ones to capture inflation dynamics across 25 European countries over 1999-2014.
    Keywords: euro area; inflation convergence; spectral maximum likelihood; Wiener-Kolmogorov filter
    JEL: C32 C38 E37
    Date: 2015–03
  12. By: Gordy, Michael B. (Board of Governors of the Federal Reserve System (U.S.)); Szerszen, Pawel J. (Board of Governors of the Federal Reserve System (U.S.))
    Abstract: We estimate a reduced-form model of credit risk that incorporates stochastic volatility in default intensity via stochastic time-change. Our Bayesian MCMC estimation method overcomes nonlinearity in the measurement equation and state-dependent volatility in the state equation. We implement the method on firm-level time series of CDS spreads, and find strong in-sample evidence of stochastic volatility in this market. Relative to the widely-used CIR model for the default intensity, we find that stochastic time-change offers modest benefit in fitting the cross-section of CDS spreads at each point in time, but very large improvements in fitting the time series, i.e., in bringing agreement between the moments of the default intensity and the model-implied moments. Finally, we obtain model-implied out-of-sample density forecasts via an auxiliary particle filter, and find that the time-changed model strongly outperforms the baseline CIR model.
    Keywords: Bayesian estimation; CDS; CIR process; credit derivatives; MCMC; particle filter; stochastic time change
    JEL: C11 C15 C58 G12 G17
    Date: 2015–01–06
  13. By: Ferrari, Stijn; Pirovano, Mara
    Abstract: This paper presents a novel methodology to calculate thresholds in an early warning signalling framework for extracting signals useful to predict the occurrence of banking crises. The conditional-moments-based methodology does not rely on assumptions about an objective function trading off Type I and Type II errors, and leads to the identification of zones corresponding to different intensities of the signal. The signalling performance of these signalling zones is similar to that of the traditional early warning method based on the optimisation of a policymaker's loss function; our methodology in fact outperforms the latter for a number of indicators. The methodology is then extended to allow for country specificities, which leads to a substantial improvement of the signalling power. On average, across all indicators, the country-specific signalling zones outperform the pooled approach, resulting in a larger average true positive rate and a lower false alarm rate.
    Keywords: Early-warning indicators; banking crises; panel data; macro prudential policy
    JEL: C23 E58 G01
    Date: 2015–02
  14. By: Lee, Charles M. C. (Stanford University); So, Eric C. (MIT); Wang, Charles C. Y. (Harvard University)
    Abstract: We develop and implement a rigorous analytical framework for empirically evaluating the relative performance of firm-level expected-return proxies (ERPs). We show that superior proxies should closely track true expected returns both cross-sectionally and over time (that is, the proxies should exhibit lower measurement-error variances). We then compare five classes of ERPs nominated in recent studies to demonstrate how researchers can easily implement our two-dimensional evaluative framework. Our empirical analyses document a tradeoff between time-series and cross-sectional ERP performance, indicating the optimal choice of proxy may vary across research settings. Our results illustrate how researchers can use our framework to critically evaluate and compare a growing body of ERPs.
    JEL: G10 G11 G12 G14 M41
    Date: 2014–09
  15. By: Aase, Knut K. (Dept. of Business and Management Science, Norwegian School of Economics); Lillestøl, Jostein (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: The paper investigates the effects of deviations from normality on the estimates of risk premiums and the real equilibrium, short-term interest rate in the conventional rational expectations equilibrium model of Lucas (1978). We consider a time-continuous approach, where both the aggregate consumption process and cumulative dividends from risky assets are assumed to be jump-diffusion processes. This approach allows for random jumps in the fundamental underlying processes at random time points. Preferences are time separable and additive. We derive testable expressions for these quantities and confront them with 20th-century sample estimates. Since there are non-linear components in the formulas for the risk premiums and the interest rate, we can readily explore what effect deviations from normality have on these quantities. Our results test the boundaries of the conventional model.
    Keywords: Mean-variance analysis; Consumption based CAPM; Equilibrium real interest rate; The equity premium puzzle; jump-diffusions; Bi-variate Normal Inverse Gaussian distribution
    JEL: D50 G10 G12
    Date: 2015–02–23

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at its website. For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.