nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒05‒08
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. MCMC Confidence sets for Identified Sets By Xiaohong Chen; Timothy Christensen; Elie Tamer
  2. Identification of Mixed Causal-Noncausal Models: How Fat Should We Go? By Hecq A.W.; Lieb L.M.; Telg J.M.A.
  3. Unit Root Testing in ARMA Models: A Likelihood Ratio Approach By Hernández Juan R.
  4. Robust small area estimation under spatial non-stationarity By Baldermann, Claudia; Salvati, Nicola; Schmid, Timo
  5. Inference for Impulse Response Coefficients From Multivariate Fractionally Integrated Processes By Richard T. Baillie; George Kapetanios; Fotis Papailias
  6. Do data revisions matter for DSGE estimation? By Givens, Gregory
  7. Density forecasting comparison of volatility models By Leopoldo Catania; Nima Nonejad
  8. On the use of high frequency measures of volatility in MIDAS regressions By Elena Andreou
  9. Assessing Causality and Delay within a Frequency Band By Jörg Breitung; Sven Schreiber
  10. Is Robust Inference with OLS Sensible in Time Series Regressions? Investigating Bias and MSE Trade-offs with Feasible GLS and VAR Approaches By Richard T. Baillie; Kun Ho Kim
  11. Estimating Production Functions of Multiproduct Firms By Valmari, Nelli
  12. A Time Series Model of Interest Rates With the Effective Lower Bound By Johannsen, Benjamin K.; Mertens, Elmar
  13. Predicting Recessions With Boosted Regression Trees By Jörg Döpke; Ulrich Fritsche; Christian Pierdzioch

  1. By: Xiaohong Chen (Cowles Foundation, Yale University); Timothy Christensen (New York University); Elie Tamer (Harvard University)
    Abstract: In complicated/nonlinear parametric models, it is hard to determine whether a parameter of interest is formally point identified. We provide computationally attractive procedures to construct confidence sets (CSs) for identified sets of parameters in econometric models defined through a likelihood or a vector of moments. The CSs for the identified set or for a function of the identified set (such as a subvector) are based on inverting an optimal sample criterion (such as likelihood or continuously updated GMM), where the cutoff values are computed directly from Markov Chain Monte Carlo (MCMC) simulations of a quasi posterior distribution of the criterion. We establish new Bernstein-von Mises type theorems for the posterior distributions of the quasi-likelihood ratio (QLR) and profile QLR statistics in partially identified models, allowing for singularities. These results imply that the MCMC criterion-based CSs have correct frequentist coverage for the identified set as the sample size increases, and that they coincide with Bayesian credible sets based on inverting a LR statistic for point-identified likelihood models. We also show that our MCMC optimal criterion-based CSs are uniformly valid over a class of data generating processes that include both partially- and point-identified models. We demonstrate good finite sample coverage properties of our proposed methods in four non-trivial simulation experiments: missing data, entry game with correlated payoff shocks, Euler equation and finite mixture models.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2037&r=ecm
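    Illustration: the following is a minimal Python sketch of the criterion-inversion mechanics described above, for a hypothetical point-identified Gaussian-mean model with a random-walk Metropolis sampler; it is not the authors' code and it omits the partially identified cases that are the paper's focus.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(loc=1.0, scale=1.0, size=200)       # toy data; the mean is the parameter of interest

      def loglik(theta):
          return -0.5 * np.sum((x - theta) ** 2)         # Gaussian log-likelihood criterion (sigma = 1)

      # Random-walk Metropolis draws from the quasi-posterior proportional to exp(loglik) (flat prior)
      draws = np.empty(20000)
      theta, cur = x.mean(), loglik(x.mean())
      for i in range(draws.size):
          prop = theta + 0.1 * rng.normal()
          cand = loglik(prop)
          if np.log(rng.uniform()) < cand - cur:
              theta, cur = prop, cand
          draws[i] = theta
      draws = draws[5000:]                               # discard burn-in

      # QLR statistic for each posterior draw; the cutoff is its 95% posterior quantile
      theta_hat = x.mean()                               # maximiser of the criterion
      qlr_draws = 2.0 * (loglik(theta_hat) - np.array([loglik(t) for t in draws]))
      xi = np.quantile(qlr_draws, 0.95)

      # The confidence set is {theta : QLR(theta) <= xi}; report it on a grid
      grid = np.linspace(theta_hat - 0.5, theta_hat + 0.5, 401)
      qlr_grid = 2.0 * (loglik(theta_hat) - np.array([loglik(t) for t in grid]))
      cs = grid[qlr_grid <= xi]
      print("95% CS approx. [%.3f, %.3f]" % (cs.min(), cs.max()))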
  2. By: Hecq A.W.; Lieb L.M.; Telg J.M.A. (GSBE)
    Abstract: Gouriéroux and Zakoïan (2013) propose to use noncausal models to parsimoniously capture nonlinear features observed in financial time series, and in particular bubble phenomena. In order to distinguish causal autoregressive processes from purely noncausal or mixed causal-noncausal ones, one has to depart from the Gaussianity assumption on the error distribution. This paper investigates by means of simulation how fat the tails of the distribution of the error process have to be for those models to be identified in practice. We compare the performance of the MLE, assuming a t-distribution, with that of the LAD estimator that we propose in this paper. Similar to Davis, Knight and Liu (1992), we find that for infinite-variance autoregressive processes both the MLE and the LAD estimator converge faster. We further specify the general asymptotic normality results obtained in Andrews, Breidt and Davis (2006) for the case of t-distributed and Laplace-distributed error terms. We first illustrate our analysis by estimating mixed causal-noncausal autoregressions to model the demand for solar panels in Belgium over the last decade. Then we look at the presence of potential noncausal components in daily realized volatility series for 21 equity indexes. The presence of a noncausal component is confirmed in both empirical illustrations.
    Keywords: Single Equation Models; Single Variables: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Financial Econometrics;
    JEL: C22 C58
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:unm:umagsb:2015035&r=ecm
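    Illustration: a small Python sketch (hypothetical parameter values, simulated data) of the identification device discussed above: a purely noncausal AR(1) with fat-tailed, infinite-variance Student-t errors is simulated by time reversal and its coefficient is estimated by LAD; the authors' MLE comparison and the mixed causal-noncausal case are not reproduced.
      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(1)
      T, phi = 2000, 0.7
      eps = rng.standard_t(df=1.5, size=T)               # fat tails: variance is infinite for df < 2
      # The purely noncausal AR(1), x_t = phi * x_{t+1} + eps_t, is simulated by generating
      # a causal AR(1) forward in time and reversing it.
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = phi * y[t - 1] + eps[t]
      x = y[::-1]

      # LAD estimate of the noncausal coefficient: minimise sum_t |x_t - phi * x_{t+1}|
      obj = lambda p: np.sum(np.abs(x[:-1] - p * x[1:]))
      phi_lad = minimize_scalar(obj, bounds=(-0.99, 0.99), method="bounded").x
      print("LAD estimate of the noncausal AR coefficient:", round(phi_lad, 3))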
  3. By: Hernández Juan R.
    Abstract: In this paper I propose a Likelihood Ratio (LR) test for a unit root with a local-to-unity autoregressive parameter embedded in ARMA(1,1) models. By dealing explicitly with dependence in a time series through the Moving Average component, as opposed to the long autoregressive lag approximation, the test shows gains in power and has good small-sample properties. The asymptotic distribution of the test is shown to be independent of the short-run parameters. Monte Carlo experiments show that the LR test has higher power than the Augmented Dickey-Fuller test for several sample sizes and true values of the Moving Average parameter. The exception is the case where this parameter is very close to -1 and the sample size is considerably small.
    Keywords: Likelihood ratio test; ARMA model; Unit root test.
    JEL: C22
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:bdm:wpaper:2016-03&r=ecm
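    Illustration: a rough Python sketch of the likelihood-ratio idea, comparing an unrestricted ARMA(1,1) with the same model whose autoregressive root is fixed at one, using a conditional-sum-of-squares Gaussian likelihood on simulated data; the paper's exact likelihood treatment is not reproduced, and the statistic must be compared with Dickey-Fuller-type (not chi-square) critical values.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      # Simulate an ARMA(1,1) with a unit autoregressive root: (1 - L) y_t = (1 + 0.4 L) e_t
      e = rng.normal(size=500)
      y = np.cumsum(e + 0.4 * np.r_[0.0, e[:-1]])

      def css(params, y, unit_root=False):
          # Conditional sum of squares of an ARMA(1,1); the AR parameter is fixed at 1 under the null
          if unit_root:
              phi, theta = 1.0, params[0]
          else:
              phi, theta = params
          e = np.zeros_like(y)
          for t in range(1, y.size):
              e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
          return np.sum(e[1:] ** 2)

      T = y.size - 1
      ssr_u = minimize(css, x0=[0.9, 0.3], args=(y, False), method="Nelder-Mead").fun
      ssr_r = minimize(css, x0=[0.3], args=(y, True), method="Nelder-Mead").fun
      # With sigma^2 concentrated out, the LR statistic reduces to T * log(SSR_restricted / SSR_unrestricted)
      lr = T * np.log(ssr_r / ssr_u)
      print("LR statistic:", round(lr, 3))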
  4. By: Baldermann, Claudia; Salvati, Nicola; Schmid, Timo
    Abstract: Geographically weighted small area methods have been studied in the literature for small area estimation. Although these approaches estimate small area means efficiently under strict parametric assumptions, they can be very sensitive to outliers in the data. In this paper, we propose a robust extension of the geographically weighted empirical best linear unbiased predictor (GWEBLUP). In particular, we introduce robust projective and predictive small area estimators under spatial non-stationarity. Mean squared error estimation is performed by two different analytic approaches that account for the spatial structure in the data. The results from the model-based simulations indicate that the proposed approach may lead to gains in terms of efficiency. Finally, the methodology is demonstrated in an illustrative application for estimating the average total cash costs for farms in Australia.
    Keywords: bias correction, geographically weighted regression, mean squared error, model-based simulation, spatial statistics
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:fubsbe:20165&r=ecm
  5. By: Richard T. Baillie (Department of Economics, Michigan State University, USA; School of Economics and Finance, Queen Mary University of London, UK; The Rimini Centre for Economic Analysis, Italy); George Kapetanios (School of Economics and Finance, Queen Mary University of London, UK); Fotis Papailias (Queen's University Management School, Queen's University Belfast, UK; quantf research, www.quantf.com)
    Abstract: This paper considers a multivariate system of fractionally integrated time series and investigates the most appropriate way of estimating Impulse Response (IR) coefficients and their associated confidence intervals. The paper extends the univariate analysis recently provided by Baillie and Kapetanios (2013), and uses a semiparametric, time-domain estimator based on a vector autoregression (VAR) approximation. Results are also derived for the orthogonalized estimated IRs, which are generally more practically relevant. Simulation evidence strongly indicates the desirability of applying the Kilian small-sample bias correction, which is found to improve the coverage accuracy of confidence intervals for IRs. The most appropriate order of the VAR turns out to be relevant for the lag length of the IR being estimated.
    Date: 2015–12
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:15-46&r=ecm
  6. By: Givens, Gregory
    Abstract: This paper checks whether the coefficient estimates of a famous DSGE model are robust to macroeconomic data revisions. The effects of revisions are captured by rerunning the estimation on a real-time data set compiled using the latest time series available each quarter from 1997 through 2015. Results show that point estimates of the structural parameters are generally robust to changes in the data that have occurred over the past twenty years. By comparison, estimates of the standard errors are relatively more sensitive to revisions. The latter implies that judgements about the statistical significance of certain parameters depend on which data vintage is used for estimation.
    Keywords: Data Revisions, Real-Time Data, DSGE Estimation
    JEL: C32 C82 E32 E52
    Date: 2016–04–22
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:70932&r=ecm
  7. By: Leopoldo Catania; Nima Nonejad
    Abstract: We compare the predictive ability of several volatility models for a long series of weekly log-returns of the Dow Jones Industrial Average Index from 1902 to 2016. Our focus is particularly on predicting one- and multi-step-ahead conditional and aggregated conditional densities. Our set of competing models includes well-known GARCH specifications, Markov-switching GARCH, semiparametric GARCH, Generalised Autoregressive Score (GAS) models, and the plain stochastic volatility (SV) model as well as its more flexible extensions such as SV with leverage, in-mean effects and Student-t distributed errors. We find that: (i) SV models generally outperform the GARCH specifications, (ii) the SV model with leverage effect provides very strong out-of-sample performance in terms of one- and multi-step-ahead density prediction, and (iii) differences in terms of Value-at-Risk (VaR) prediction accuracy are less evident. Thus, our results have an important implication: the best performing model depends on the evaluation criterion.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1605.00230&r=ecm
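    Illustration: a minimal Python sketch of one ingredient of such a comparison, the average one-step-ahead predictive log score of a Gaussian GARCH(1,1) on simulated returns; the parameter values are hypothetical (in practice they would be re-estimated on each expanding window), and none of the paper's SV, GAS or Markov-switching competitors are implemented here.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      r = rng.normal(scale=0.02, size=1500)              # simulated weekly log-returns

      def garch_variances(r, omega, alpha, beta):
          # One-step-ahead conditional variances of a Gaussian GARCH(1,1)
          h = np.empty(r.size + 1)
          h[0] = r.var()
          for t in range(r.size):
              h[t + 1] = omega + alpha * r[t] ** 2 + beta * h[t]
          return h

      omega, alpha, beta = 1e-5, 0.08, 0.90              # hypothetical parameter values
      h = garch_variances(r, omega, alpha, beta)

      # Average one-step-ahead predictive log score over the last 500 observations
      oos = slice(r.size - 500, r.size)
      log_scores = norm.logpdf(r[oos], loc=0.0, scale=np.sqrt(h[:-1][oos]))
      print("average log predictive score:", round(log_scores.mean(), 4))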
  8. By: Elena Andreou
    Abstract: Many empirical studies link mixed-data-frequency variables, such as low-frequency macroeconomic or financial variables, with the volatilities of high-frequency financial indicators, especially within a predictive regression model context. The objective of this paper is threefold: First, we relate the standard Least Squares (LS) regression model with high-frequency volatility predictors to the corresponding Mixed Data Sampling Nonlinear LS (MIDAS-NLS) regression model (Ghysels et al., 2005, 2006), and evaluate the properties of the regression estimators of these models. We also consider alternative high-frequency volatility measures as well as various continuous-time models, using their corresponding relevant higher-order moments to further analyze the properties of these estimators. Second, we derive the relative MSE efficiency of the slope estimator in the standard LS and MIDAS regressions, provide conditions for relative efficiency and present numerical results for different continuous-time models. Third, we extend the analysis of the bias of the slope estimator in standard LS regressions with alternative realized measures of risk, such as the Realized Covariance, Realized Beta and Realized Skewness, when the true DGP is a MIDAS model.
    Keywords: MIDAS regression model, high-frequency volatility estimators, bias, efficiency.
    JEL: C22 C53 G22
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:ucy:cypeua:03-2016&r=ecm
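    Illustration: a Python sketch contrasting the two estimators discussed above on a toy data set: a standard LS regression on the flat average of the high-frequency volatility measures versus a MIDAS-NLS regression with exponential Almon weights; the data-generating process and all parameter values are hypothetical.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      nq, m = 200, 60                                    # 200 low-frequency periods, 60 daily observations each
      hf_vol = rng.gamma(shape=2.0, scale=0.5, size=(nq, m))   # toy daily volatility measures

      def exp_almon(k1, k2, m):
          # Exponential Almon lag weights, normalised to sum to one
          j = np.arange(1, m + 1)
          z = k1 * j + k2 * j ** 2
          w = np.exp(z - z.max())                        # subtract the max for numerical stability
          return w / w.sum()

      # Low-frequency dependent variable generated from a MIDAS DGP (hypothetical parameters)
      y = 0.5 + 2.0 * hf_vol @ exp_almon(0.0, -0.01, m) + rng.normal(scale=0.3, size=nq)

      # (a) Standard LS: regress y on the equal-weight average of the daily measures
      X = np.column_stack([np.ones(nq), hf_vol.mean(axis=1)])
      b_ls = np.linalg.lstsq(X, y, rcond=None)[0]

      # (b) MIDAS-NLS: estimate intercept, slope and the two Almon parameters jointly
      def ssr(params):
          a, b, k1, k2 = params
          return np.sum((y - a - b * hf_vol @ exp_almon(k1, k2, m)) ** 2)

      res = minimize(ssr, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead",
                     options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
      print("LS slope (flat aggregation):", round(b_ls[1], 3))
      print("MIDAS-NLS slope:            ", round(res.x[1], 3))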
  9. By: Jörg Breitung; Sven Schreiber
    Abstract: We extend the frequency-specific Granger-causality test of Breitung et al. (2006) to a more general null hypothesis that allows causality testing at unknown frequencies within a prespecified range of frequencies. This setup corresponds better to the empirical situations encountered in applied research and is easily implemented in vector autoregressive models. We also provide tools for estimating the phase shift (delay) at a prespecified frequency or frequency band. In an empirical application dealing with the dynamics of US temperatures and CO2 emissions, we find that emissions cause temperature changes only at very low frequencies, corresponding to cycles of more than 30 years. Furthermore, we analyze the indicator properties of new orders for German industrial production by assessing the delay at the frequencies of interest.
    Keywords: Granger causality, frequency domain, filter gain
    JEL: C32 C53 E32 Q54
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:imk:wpaper:165-2016&r=ecm
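    Illustration: a Python sketch of the single-frequency restrictions underlying the Breitung et al. (2006) test that the paper extends to frequency bands: in a regression of x on its own lags and lags of y, no causality from y to x at frequency omega corresponds to two linear restrictions on the y-lag coefficients, tested here with a standard F-test; the data and parameter values are simulated and hypothetical.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      T, p = 500, 4
      # Toy bivariate system in which y Granger-causes x
      y = np.cumsum(rng.normal(size=T)) * 0.05 + rng.normal(size=T)
      x = np.zeros(T)
      for t in range(1, T):
          x[t] = 0.5 * x[t - 1] + 0.3 * y[t - 1] + rng.normal()

      # Regress x_t on a constant, p lags of x and p lags of y
      rows = np.arange(p, T)
      X = np.column_stack([np.ones(T - p)] +
                          [x[rows - j] for j in range(1, p + 1)] +
                          [y[rows - j] for j in range(1, p + 1)])
      res = sm.OLS(x[p:], X).fit()

      # No causality at frequency omega: sum_j beta_j cos(j*omega) = 0 and sum_j beta_j sin(j*omega) = 0
      omega = 2 * np.pi / 40                             # a cycle of 40 periods, i.e. a low frequency
      j = np.arange(1, p + 1)
      R = np.zeros((2, X.shape[1]))
      R[0, 1 + p:] = np.cos(j * omega)
      R[1, 1 + p:] = np.sin(j * omega)
      print(res.f_test(R))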
  10. By: Richard T. Baillie (Department of Economics, Michigan State University, USA; School of Economics and Finance, Queen Mary University of London, UK; The Rimini Centre for Economic Analysis, Italy); Kun Ho Kim (Department of Economics, Hanyang University, Republic of Korea)
    Abstract: It has become commonplace in applied time series econometric work to estimate regressions with consistent, but asymptotically inefficient, OLS and to base inference about conditional mean parameters on robust standard errors. This approach seems mainly to have arisen from concern about the possible violation of strict exogeneity conditions when applying GLS. We first show that, even when contemporaneous exogeneity is violated, the asymptotic bias associated with GLS will generally be less than that of OLS. This result extends to Feasible GLS, where the error process is approximated by a sieve autoregression. The paper also examines the trade-offs between asymptotic bias and efficiency related to OLS, feasible GLS and inference based on a full-system VAR. We also provide simulation evidence and several examples, including tests of efficient markets, orange juice futures and weather, and a control engineering application with furnace data. The evidence supports the general conclusion that the widespread use of OLS with robust standard errors is generally not a good research strategy. Conversely, there is much to recommend FGLS and VAR system-based estimation.
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:16-04&r=ecm
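    Illustration: a Python sketch of the feasible-GLS strategy the paper recommends, next to OLS with HAC ("robust") standard errors: fit OLS, approximate the error dynamics with a sieve AR(p) on the residuals, quasi-difference the data with the estimated filter and re-run OLS; the DGP and all parameter values are hypothetical.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      T, p = 400, 2
      x = rng.normal(size=T)
      u = np.zeros(T)
      for t in range(2, T):                              # regression errors follow a toy AR(2)
          u[t] = 0.6 * u[t - 1] + 0.2 * u[t - 2] + rng.normal()
      y = 1.0 + 0.5 * x + u

      # Step 1: OLS with HAC standard errors (the common practice the paper questions)
      X = sm.add_constant(x)
      ols = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 8})

      # Step 2: sieve AR(p) fitted to the OLS residuals
      res = ols.resid
      Z = np.column_stack([res[p - j:T - j] for j in range(1, p + 1)])
      rho = np.linalg.lstsq(Z, res[p:], rcond=None)[0]

      # Step 3: quasi-difference the data with the estimated AR filter and re-run OLS (FGLS)
      def qdiff(v):
          out = v[p:].astype(float).copy()
          for j in range(1, p + 1):
              out -= rho[j - 1] * v[p - j:T - j]
          return out

      fgls = sm.OLS(qdiff(y), np.column_stack([qdiff(X[:, 0]), qdiff(x)])).fit()
      print("OLS slope, HAC s.e.:", round(ols.params[1], 3), round(ols.bse[1], 3))
      print("FGLS slope, s.e.:   ", round(fgls.params[1], 3), round(fgls.bse[1], 3))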
  11. By: Valmari, Nelli
    Abstract: Despite the fact that multiproduct firms constitute a considerable share of firms and account for an even greater share of production, virtually all production function estimates are based on the assumption that firms are single-product producers. The single-product assumption is made due to lack of data on input allocation across the various product lines multiproduct firms operate. I provide a method to estimate product-level production functions without observable input allocations. The empirical application and Monte Carlo simulations show that the single-product firm assumption leads to biased parameter and productivity estimates and overestimated productivity differences between firms.
    Keywords: Multiproduct firm, production function, productivity
    JEL: D24 L11 L25
    Date: 2016–03–08
    URL: http://d.repec.org/n?u=RePEc:rif:wpaper:37&r=ecm
  12. By: Johannsen, Benjamin K.; Mertens, Elmar
    Abstract: Modeling interest rates over samples that include the Great Recession requires taking stock of the effective lower bound (ELB) on nominal interest rates. We propose a flexible time-series approach which includes a “shadow rate” (a notional rate that is below the ELB during the period in which the bound is binding) without imposing no-arbitrage assumptions. The approach allows us to estimate the behavior of trend real rates as well as expected future interest rates in recent years.
    Keywords: Bayesian Econometrics ; Effective Lower Bound ; Shadow Rate ; State-Space Model ; Term Structure of Interest Rates
    JEL: C32 C34 C53 E43 E47
    Date: 2016–04–04
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2016-33&r=ecm
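    Illustration: a tiny Python sketch of the observation equation implied by a shadow-rate model, in which the observed rate equals the notional ("shadow") rate whenever that rate is above the ELB and equals the ELB otherwise; the trend/cycle dynamics and all numbers are hypothetical, and the paper's Bayesian state-space estimation is not reproduced.
      import numpy as np

      rng = np.random.default_rng(7)
      T, elb = 240, 0.25                                 # monthly sample; hypothetical ELB of 25 basis points
      trend = 3.0 + np.cumsum(rng.normal(scale=0.05, size=T))   # slowly moving trend component
      cycle = np.zeros(T)
      for t in range(1, T):
          cycle[t] = 0.9 * cycle[t - 1] + rng.normal(scale=0.3)
      shadow = trend + cycle                             # notional rate, free to fall below the ELB
      observed = np.maximum(shadow, elb)                 # what the data show when the bound binds

      print("share of periods at the ELB:", round(float(np.mean(observed == elb)), 2))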
  13. By: Jörg Döpke (University of Applied Sciences Merseburg); Ulrich Fritsche (University of Hamburg); Christian Pierdzioch (Helmut-Schmidt-University Hamburg)
    Abstract: We use a machine-learning approach known as Boosted Regression Trees (BRT) to reexamine the usefulness of selected leading indicators for predicting recessions. We estimate the BRT approach on German data and study the relative importance of the indicators and their marginal effects on the probability of a recession. We then use receiver operating characteristic (ROC) curves to study the accuracy of the forecasts. Results show that the short-term interest rate and the term spread are important leading indicators, but also that the stock market has some predictive value. The recession probability is a nonlinear function of these leading indicators. The BRT approach also helps to recover how the recession probability depends on the interactions of the leading indicators. While the predictive power of the short-term interest rate has declined over time, the term spread and the stock market have gained in importance. We also study how the shape of a forecaster’s utility function affects the optimal choice of a cutoff value above which the estimated recession probability should be interpreted as a signal of a recession. The BRT approach shows a competitive out-of-sample performance compared to popular Probit approaches.
    Keywords: Recession forecasting; Boosting; Regression trees; ROC analysis
    JEL: C52 C53 E32 E37
    Date: 2015–12
    URL: http://d.repec.org/n?u=RePEc:gwc:wpaper:2015-004&r=ecm
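    Illustration: a Python sketch of the basic BRT-plus-ROC workflow described above, using scikit-learn's gradient boosting on toy indicator data; the indicators, the recession definition and all tuning values are hypothetical stand-ins for the German data used in the paper.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(8)
      T = 300                                            # toy quarterly sample
      short_rate = 4.0 + np.cumsum(rng.normal(scale=0.2, size=T))
      term_spread = rng.normal(loc=1.0, scale=0.8, size=T)
      stock_ret = rng.normal(scale=5.0, size=T)
      X = np.column_stack([short_rate, term_spread, stock_ret])
      # Toy recession indicator loosely tied to an inverted term spread
      recession = (term_spread + rng.normal(scale=0.5, size=T) < 0.0).astype(int)

      # Pseudo out-of-sample split: fit on the first 200 quarters, evaluate on the rest
      train, test = slice(0, 200), slice(200, T)
      brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=2)
      brt.fit(X[train], recession[train])
      prob = brt.predict_proba(X[test])[:, 1]

      print("relative variable importance:", np.round(brt.feature_importances_, 2))
      print("out-of-sample AUC:", round(roc_auc_score(recession[test], prob), 3))
      # roc_curve returns the (false positive rate, true positive rate, cutoff) triples from
      # which a cutoff can be chosen to reflect the forecaster's loss or utility function.
      fpr, tpr, cutoffs = roc_curve(recession[test], prob)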

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.