nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒05‒14
Nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Improving Markov switching models using realized variance By Liu, Jia; Maheu, John M
  2. Non-Parametric Estimation of a Distribution Function with Interval Censored Data By Zapata, Samuel D.; Carpio, Carlos E.
  3. Identifying Regression Parameters When Variables are Measured with Error By Alicia Rambaldi; T. H. Y. Tran; Antonio Peyrache
  4. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization By Bai, Z.; Li, H.; McAleer, M.J.; Wong, W-K.
  5. Bayesian Dynamic Modeling of High-Frequency Integer Price Changes By Istvan Barra; Siem Jan Koopman
  6. The Local Fractional Bootstrap By Mikkel Bennedsen; Ulrich Hounyo; Asger Lunde; Mikko S. Pakkanen
  7. Joint Prediction Bands for Macroeconomic Risk Management By Farooq Akram; Andrew Binning; Junior Maih
  8. A data-driven selection of an appropriate seasonal adjustment approach By Webel, Karsten
  9. The Variance-Frequency Decomposition as an Instrument for VAR Identification: an Application to Technology Shocks By Lovcha, Yuliya; Pérez Laborda, Àlex
  10. Optimal Data Collection for Randomized Control Trials By Carneiro, Pedro; Lee, Sokbae; Wilhelm, Daniel
  11. How useful are (Censored) Quantile Regressions for Contingent Valuation? By Victor Champonnois; Olivier Chanel
  12. Comparing different data descriptors in Indirect Inference tests on DSGE models By Minford, Patrick; Wickens, Michael; Xu, Yongdeng
  13. Bootstrapping high-frequency jump tests By Prosper Dovonon; Sílvia Gonçalves; Ulrich Hounyo; Nour Meddahi
  14. Eye Tracking to Model Attribute Attendance By Chavez, Daniel; Palma, Marco; Collart, Alba J.
  15. Identification of Attrition Bias Using Different Types of Panel Refreshments By Adrian Chadi
  16. Forecasting Inflation using Functional Time Series Analysis By Zafar, Raja Fawad; Qayyum, Abdul; Ghouri, Saghir Pervaiz
  17. Estimating Multi-Product Production Functions and Productivity using Control Functions By Malikov, Emir
  18. Gresham's Law of Model Averaging By In-Koo Cho; Kenneth Kasa
  19. A New Approach to Identifying the Real Effects of Uncertainty Shocks By Shin, Minchul; Zhong, Molin

  1. By: Liu, Jia; Maheu, John M
    Abstract: This paper proposes a class of models that jointly model returns and ex-post variance measures under a Markov switching framework. Both univariate and multivariate return versions of the model are introduced. Bayesian estimation can be conducted under a fixed dimension state space or an infinite one. The proposed models can be seen as nonlinear common factor models subject to Markov switching and are able to exploit the information content in both returns and ex-post volatility measures. Applications to U.S. equity returns and foreign exchange rates compare the proposed models to existing alternatives. The empirical results show that the joint models improve density forecasts for returns and point predictions of return variance. The joint Markov switching models can increase the precision of parameter estimates and sharpen the inference of the latent state variable.
    Keywords: infinite hidden Markov model, realized covariance, density forecast, MCMC
    JEL: C11 C32 C51 C58 G1
    Date: 2015–09–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:71120&r=ecm
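A fully joint Bayesian treatment of returns and realized variance, as in the paper above, requires custom MCMC. As a point of orientation only, the sketch below fits a plain two-state switching-variance model to simulated returns by maximum likelihood with statsmodels; the regime persistence, volatilities, and data are invented for illustration.

```python
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(0)

# Simulate returns from a two-state Markov chain with low/high volatility (illustrative)
T = 1500
s = np.zeros(T, dtype=int)
for t in range(1, T):
    stay = 0.98 if s[t - 1] == 0 else 0.95      # assumed regime persistence
    s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]
returns = rng.normal(loc=0.0, scale=np.where(s == 0, 0.5, 2.0))

# Two-regime model with switching variance, estimated by maximum likelihood
res = MarkovRegression(returns, k_regimes=2, trend="c", switching_variance=True).fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])  # inferred regime probabilities
```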
  2. By: Zapata, Samuel D.; Carpio, Carlos E.
    Abstract: Disjoint interval-censored (DIC) observations are found in a variety of applications including survey responses, contingent valuation studies and grouped data. Despite being a recurrent type of data, little attention has been given to their analysis in the nonparametric literature. In this study, we develop an alternative approach for the estimation of the empirical distribution function of DIC data by optimizing their nonparametric maximum likelihood (ML) function. In contrast to Turnbull’s standard nonparametric method, our estimation approach does not require iterative numerical algorithms or the use of advanced statistical software packages. In fact, we demonstrate the existence of a simple closed-form solution to the nonparametric ML problem, where the empirical distribution, its variance, and measures of central tendency can be estimated by using only the frequency distribution of observations. The advantages of our estimation approach are illustrated using two empirical datasets.
    Keywords: Empirical likelihood, Mean bounds, Turnbull, Variance-covariance matrix, Research Methods/Statistical Methods
    JEL: C14 C24
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ags:saea16:229802&r=ecm
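For disjoint intervals, the nonparametric likelihood described above is multinomial over the intervals, so the maximum likelihood mass of each interval is its relative frequency. The sketch below, with made-up interval bounds and counts rather than data from the paper, illustrates the resulting closed-form estimates of the distribution, mean bounds, and variance-covariance matrix.

```python
import numpy as np

# Hypothetical disjoint willingness-to-pay intervals and observed response counts
intervals = [(0, 10), (10, 25), (25, 50), (50, 100)]
counts = np.array([40, 30, 20, 10])

p_hat = counts / counts.sum()       # NPMLE mass per interval (multinomial MLE)
cdf_upper = np.cumsum(p_hat)        # empirical CDF at interval upper endpoints

lower = np.array([a for a, _ in intervals])
upper = np.array([b for _, b in intervals])
mean_lb = np.sum(p_hat * lower)     # mean lower bound (all mass at left endpoints)
mean_ub = np.sum(p_hat * upper)     # mean upper bound (all mass at right endpoints)

# Multinomial variance-covariance of the estimated masses: (diag(p) - p p') / n
n = counts.sum()
vcov = (np.diag(p_hat) - np.outer(p_hat, p_hat)) / n
print(p_hat, cdf_upper, (mean_lb, mean_ub))
```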
  3. By: Alicia Rambaldi (School of Economics, The University of Queensland); T. H. Y. Tran (Department of Economics, Yale University); Antonio Peyrache (School of Economics, The University of Queensland)
    Abstract: The paper proposes an approach for identifying and estimating the economic parameters of interest when all the variables are measured with errors and these errors are correlated. Two propositions show how the parameters of interest and the bias are identified. Three Monte Carlo simulations illustrate the results. The empirical application estimates returns to scale and technological progress in US manufacturing sectors. The results can be linked to previous work in the literature to demonstrate the ambiguous bias in least squares estimates of returns-to-scale parameters and to compare estimates of trends in technological change using two alternative identification approaches.
    Keywords: unobserved components; time-varying parameters; least squares bias; returns to scale; technological change
    JEL: C18 C32 E23
    Date: 2016–04–08
    URL: http://d.repec.org/n?u=RePEc:qld:uq2004:557&r=ecm
  4. By: Bai, Z.; Li, H.; McAleer, M.J.; Wong, W-K.
    Abstract: This paper considers the portfolio problem for high-dimensional data, when both the dimension and the sample size are large. We analyze the traditional Markowitz mean-variance (MV) portfolio using large-dimensional random matrix theory and find that the spectral distribution of the sample covariance matrix is the main reason why the expected return of the traditional MV portfolio overestimates that of the theoretical MV portfolio. We suggest correcting the spectrum of the sample covariance matrix to obtain a spectrally-corrected covariance estimator, and use it to construct a spectrally-corrected MV portfolio. In the expressions for the expected return and risk of the MV portfolio, the population covariance matrix always enters through a quadratic form, which guides the estimation of the MV portfolio. We derive the limiting behavior of this quadratic form under the spectrally-corrected sample covariance matrix, and explain its superior performance relative to the sample covariance as the dimension increases to infinity proportionally with the sample size. Moreover, the paper derives the limiting behavior of the expected return and risk of the spectrally-corrected MV portfolio and illustrates its superior properties. In simulations, we compare the spectrally-corrected estimates with the traditional and bootstrap-corrected estimates, and show that the spectrally-corrected estimates perform best in terms of portfolio return and portfolio risk. We also compare the proposed estimator with alternative optimal portfolio estimates on real data from the S&P 500. The empirical findings are consistent with the theory developed in the paper.
    Keywords: Markowitz Mean-Variance Optimization, Optimal Return, Optimal Portfolio Allocation, Large Random Matrix, Bootstrap Method, Spectrally-corrected Covariance Matrix
    JEL: G11 C13 C61
    Date: 2016–04–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:80106&r=ecm
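The paper's correction is derived from large random matrix theory; the sketch below only shows where a correction of the sample spectrum enters the plug-in Markowitz weights, using a generic eigenvalue-shrinkage stand-in rather than the authors' spectrally-corrected estimator. All data are simulated.

```python
import numpy as np

def mv_weights(mu, Sigma, gamma=1.0):
    """Unconstrained Markowitz weights w = (1/gamma) * Sigma^{-1} mu."""
    return np.linalg.solve(Sigma, mu) / gamma

def eigenvalue_shrunk_cov(X, shrink=0.5):
    """Generic shrinkage of sample eigenvalues toward their mean -- a stand-in,
    not the paper's spectrally-corrected estimator."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals_corr = (1 - shrink) * vals + shrink * vals.mean()
    return (vecs * vals_corr) @ vecs.T

rng = np.random.default_rng(1)
n, p = 120, 60                                  # dimension comparable to sample size
X = rng.normal(size=(n, p))                     # simulated returns
mu = X.mean(axis=0)

w_sample = mv_weights(mu, np.cov(X, rowvar=False))   # plug-in weights, sample covariance
w_shrunk = mv_weights(mu, eigenvalue_shrunk_cov(X))  # weights after correcting the spectrum
```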
  5. By: Istvan Barra (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam, the Netherlands)
    Abstract: We investigate high-frequency volatility models for analyzing intra-day tick-by-tick stock price changes using Bayesian estimation procedures. Our key interest is the extraction of intra-day volatility patterns from high-frequency integer price changes. We account for the discrete nature of the data via two different approaches: ordered probit models and discrete distributions. We allow for stochastic volatility by modeling the variance as a stochastic function of time, with intra-day periodic patterns. We consider distributions with heavy tails to address occurrences of jumps in tick-by-tick discrete price changes. In particular, we introduce a dynamic version of the negative binomial difference model with stochastic volatility. For each model we develop a Markov chain Monte Carlo estimation method that takes advantage of auxiliary mixture representations to facilitate the numerical implementation. This new modeling framework is illustrated by means of tick-by-tick data for several stocks from the NYSE and for different periods. Different models are compared with each other based on predictive likelihoods. We find evidence in favor of our preferred dynamic negative binomial difference model.
    Keywords: Bayesian inference; discrete distributions; high-frequency dynamics; Markov chain Monte Carlo; stochastic volatility
    JEL: C22 C58
    Date: 2016–04–22
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20160028&r=ecm
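The negative binomial difference idea above can be illustrated by simulation: integer tick changes are generated as the difference of two heavy-tailed counts whose intensity follows an intraday pattern plus a persistent stochastic component. The parameter values below are invented, and the paper's MCMC estimation is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
t = np.arange(T) / T

# Assumed intraday (diurnal) pattern plus an AR(1) log-volatility component
diurnal = 0.5 * np.cos(2 * np.pi * t)
h = np.zeros(T)
for s in range(1, T):
    h[s] = 0.98 * h[s - 1] + 0.1 * rng.normal()
lam = np.exp(diurnal + h)                 # intensity driving the size of price moves

# Integer tick changes as a difference of two negative binomial counts (heavy tails)
r = 2.0                                   # assumed dispersion parameter
p_nb = r / (r + lam)
tick_changes = rng.negative_binomial(r, p_nb) - rng.negative_binomial(r, p_nb)
```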
  6. By: Mikkel Bennedsen; Ulrich Hounyo; Asger Lunde; Mikko S. Pakkanen
    Abstract: We introduce a bootstrap procedure for high-frequency statistics of Brownian semistationary processes. More specifically, we focus on a hypothesis test on the roughness of sample paths of Brownian semistationary processes, which uses an estimator based on a ratio of realized power variations. Our new resampling method, the local fractional bootstrap, relies on simulating an auxiliary fractional Brownian motion that mimics the fine properties of high frequency differences of the Brownian semistationary process under the null hypothesis. We prove the first order validity of the bootstrap method and in simulations we observe that the bootstrap-based hypothesis test provides considerable finite-sample improvements over an existing test that is based on a central limit theorem. This is important when studying the roughness properties of time series data; we illustrate this by applying the bootstrap method to two empirical data sets: we assess the roughness of a time series of high-frequency asset prices and we test the validity of Kolmogorov's scaling law in atmospheric turbulence data.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1605.00868&r=ecm
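The statistic being bootstrapped above is built from a ratio of realized power variations. As a rough orientation (not the bootstrap itself), the sketch below shows the underlying change-of-frequency idea: comparing realized quadratic variation at two sampling lags recovers the roughness (Hurst) index of a fractional Brownian motion.

```python
import numpy as np

def roughness_cof(x):
    """Change-of-frequency roughness estimate from a ratio of realized
    quadratic variations at lag 2 versus lag 1."""
    d1 = np.diff(x)            # increments at the finest lag
    d2 = x[2:] - x[:-2]        # increments at twice the lag
    return 0.5 * np.log2(np.sum(d2 ** 2) / np.sum(d1 ** 2))

# Sanity check on ordinary Brownian motion, whose roughness index is 0.5
rng = np.random.default_rng(3)
bm = np.cumsum(rng.normal(scale=np.sqrt(1 / 10_000), size=10_000))
print(roughness_cof(bm))       # close to 0.5
```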
  7. By: Farooq Akram; Andrew Binning; Junior Maih
    Abstract: In this paper we address the issue of assessing and communicating the joint probabilities implied by density forecasts from multivariate time series models. We focus our attention on three areas. First, we investigate a new method of producing fan charts that better communicates the uncertainty present in forecasts from multivariate time series models. Second, we suggest a new measure for assessing the plausibility of non-central point forecasts. And third, we describe how to use the density forecasts from a multivariate time series model to assess the probability of a set of future events occurring. An additional novelty of this paper is our use of a regime-switching DSGE model with an occasionally binding zero lower bound constraint, estimated on US data, to produce the density forecasts. The tools we offer will allow practitioners to better assess and communicate joint forecast probabilities, addressing a criticism that has been leveled at central bank communications.
    Keywords: Monetary Policy, Fan charts, DSGE, Zero Lower Bound, Regime-switching, Bayesian Estimation
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0045&r=ecm
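The paper's third contribution, assessing the probability of a set of future events, amounts in simulation-based practice to counting the share of predictive draws in which all events occur. The sketch below uses made-up Gaussian draws as a stand-in for the model's density forecasts.

```python
import numpy as np

rng = np.random.default_rng(4)
n_draws, horizon = 5000, 8

# Stand-in for the posterior predictive: simulated paths of GDP growth and inflation
growth = rng.normal(2.0, 1.5, size=(n_draws, horizon))
inflation = rng.normal(1.8, 1.0, size=(n_draws, horizon))

# Joint event: negative growth AND inflation below 1% in the same quarter,
# at some point over the forecast horizon
joint_event = ((growth < 0) & (inflation < 1.0)).any(axis=1)
print(joint_event.mean())      # estimated joint probability
```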
  8. By: Webel, Karsten
    Abstract: Recent releases of X-13ARIMA-SEATS and JDemetra+ enable their users to choose between the non-parametric X-11 and the parametric ARIMA model-based approach to seasonal adjustment for any given time series without the necessity of switching between different software packages. To ease the selection process, we develop a decision tree whose branches combine conceptual differences between the two methods with empirical issues. The latter primarily include a thorough inspection of the squared gains of final X-11 and Wiener-Kolmogorov seasonal adjustment filters as well as a comparison of various revision measures. We finally illustrate the decision tree on selected German macroeconomic time series.
    Keywords: ARIMA model-based approach, linear filtering, signal extraction, unobserved components, X-11 approach
    JEL: C13 C14 C22
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:072016&r=ecm
  9. By: Lovcha, Yuliya; Pérez Laborda, Àlex
    Abstract: This paper proposes a new framework to study identification in structural VAR models. The framework is based on the variance-frequency decomposition and focuses on the contribution of the identified shock to the variance of model variables in a given frequency range. We use the hours-productivity debate as a connecting thread in our discussion, since the identification problem has attracted a lot of attention in this literature. To start, we employ the framework to study the business cycle properties of a set of different identification schemes for technology shocks. Grounded in the simulation results, we propose a new model-based procedure which delivers a precise estimate of the response of hours. Finally, we put all the schemes to work with real data, obtaining substantial evidence in favor of plausible RBC parametrizations, especially from identification restrictions that perform better in simulations. This analysis also reveals that the schemes that recover a very strong response of hours (higher than that implied by typical RBC parameterizations) tend to overstate the contribution of the technology shock to the fluctuations of hours worked at business cycle frequencies.
    Keywords: business cycle, frequency domain, hours worked, productivity, vector autoregressions
    JEL: C1 E3
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:urv:wpaper:2072/261537&r=ecm
  10. By: Carneiro, Pedro (University College London); Lee, Sokbae (Institute for Fiscal Studies, London); Wilhelm, Daniel (University College London)
    Abstract: In a randomized control trial, the precision of an average treatment effect estimator can be improved either by collecting data on additional individuals, or by collecting additional covariates that predict the outcome variable. We propose the use of pre-experimental data such as a census, or a household survey, to inform the choice of both the sample size and the covariates to be collected. Our procedure seeks to minimize the resulting average treatment effect estimator's mean squared error, subject to the researcher's budget constraint. We rely on a modification of an orthogonal greedy algorithm that is conceptually simple and easy to implement in the presence of a large number of potential covariates, and does not require any tuning parameters. In two empirical applications, we show that our procedure can lead to substantial gains of up to 58%, measured either in terms of reductions in data collection costs or in terms of improvements in the precision of the treatment effect estimator.
    Keywords: randomized control trials, big data, data collection, optimal survey design, orthogonal greedy algorithm, survey costs
    JEL: C55 C81
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9908&r=ecm
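A hedged sketch of the covariate-selection step follows: it uses a plain residual-fitting greedy criterion under a budget constraint and ignores the joint choice of sample size treated in the paper; `costs` and `budget` are hypothetical quantities, not from the paper.

```python
import numpy as np

def greedy_covariate_selection(y, X, costs, budget):
    """Greedy selection of covariates that predict y, subject to a total
    data-collection budget. Each step adds the affordable covariate giving
    the largest reduction in residual sum of squares."""
    n, p = X.shape
    selected, spent = [], 0.0
    residual = y - y.mean()
    while True:
        best_j, best_gain = None, 0.0
        for j in range(p):
            if j in selected or spent + costs[j] > budget:
                continue
            Z = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            gain = residual @ residual - np.sum((y - Z @ beta) ** 2)
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:
            break
        selected.append(best_j)
        spent += costs[best_j]
        Z = np.column_stack([np.ones(n), X[:, selected]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        residual = y - Z @ beta
    return selected

# Example with simulated data: only the first and fourth covariates matter
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 10))
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500)
print(greedy_covariate_selection(y, X, costs=np.ones(10), budget=3.0))
```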
  11. By: Victor Champonnois (AMSE-GREQAM); Olivier Chanel (AMSE-GREQAM)
    Abstract: We investigate the usefulness of quantile regression (QR) and censored quantile regression (CQR) for dealing with issues raised by contingent valuation (CV) data. Although (C)QR estimators have many properties of interest for CV, the literature is scarce and restricted to only six studies. We proceed in three steps. First, we provide analytical arguments showing how (C)QR can tackle many econometric issues associated with CV data. Second, we show, by means of Monte Carlo simulations, how (C)QR performs relative to standard (linear and censored) models. Finally, we apply and compare these four models on a French CV survey dealing with flood risk. Although our findings show the usefulness of QR for analyzing CV data, the evidence on improvements of CQR estimates over QR estimates is mixed.
    Keywords: contingent valuation, quantile regression, censored quantile regression, Monte Carlo simulations, flood
    JEL: C15 C9 C21
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:fae:wpaper:2016.12&r=ecm
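A minimal contrast between a linear model and median regression on skewed, CV-like data can be run with statsmodels as below; the data-generating process is invented for illustration, and censored quantile regression (not available in statsmodels) is omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
income = rng.normal(size=n)

# Simulated willingness-to-pay with a right-skewed error, as is typical of CV data
wtp = 10 + 3 * income + rng.lognormal(mean=0.0, sigma=1.0, size=n)

X = sm.add_constant(income)
ols = sm.OLS(wtp, X).fit()                   # conditional mean, pulled up by the skewed tail
median_reg = sm.QuantReg(wtp, X).fit(q=0.5)  # conditional median, more robust to the tail
print(ols.params, median_reg.params)
```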
  12. By: Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School); Xu, Yongdeng (Cardiff Business School)
    Abstract: Indirect inference testing can be carried out with a variety of auxiliary models. Asymptotically, the choice of auxiliary model makes no difference; in small samples, however, power can differ. We explore small-sample power with three different auxiliary models: a VAR, average impulse response functions, and moments. The latter corresponds to the simulated moments method. We find that in a small macro model there is no difference in power, but in a large, complex macro model the power with moments rises more slowly with increasing misspecification than with the other two, which remain similar.
    Keywords: Indirect Inference; DSGE model; Auxiliary Models; Simulated Moments Method
    JEL: C12 C32 C52 E1
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2016/5&r=ecm
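Whatever descriptors the auxiliary model delivers (VAR coefficients, average impulse responses, or moments), the indirect inference test above compares the data descriptors with their distribution across model simulations. A generic Wald-type sketch, which may differ in detail from the authors' test:

```python
import numpy as np

def ii_wald_test(actual_desc, simulated_desc):
    """Indirect-inference Wald statistic: distance between the descriptors of
    the actual data and their mean across simulations, weighted by the inverse
    simulated covariance; p-value taken from the simulated distribution."""
    mean = simulated_desc.mean(axis=0)
    cov = np.cov(simulated_desc, rowvar=False)   # assumed nonsingular
    diff = actual_desc - mean
    wald = diff @ np.linalg.solve(cov, diff)
    centered = simulated_desc - mean
    sim_wald = np.einsum("ij,jk,ik->i", centered, np.linalg.inv(cov), centered)
    return wald, np.mean(sim_wald >= wald)
```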
  13. By: Prosper Dovonon; Sílvia Gonçalves; Ulrich Hounyo; Nour Meddahi
    Keywords: jumps, bootstrap, block multipower variation
    Date: 2016–05–09
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2016s-24&r=ecm
  14. By: Chavez, Daniel; Palma, Marco; Collart, Alba J.
    Abstract: The literature on choice experiments has sought ways to refine preference elicitation and to improve the predictive power of models. Technological advances such as eye tracking have improved our understanding of how much of the attribute and attribute-level information presented to participants is actually considered in the decision-making process in these experiments. This study investigates subjects' degree of attendance to attributes and how it influences their choices. The amount of time subjects spend observing each attribute, relative to all available information in each choice set, is used to estimate attribute attendance; this indicates the revealed attendance to the attributes in the experiment. A simple econometric approach compares the parameter estimates from revealed-attendance-adjusted models using data from an eye-tracking device with those from a model that endogenously infers the probabilities of using information from each attribute in the choice. The results show that the assumption that participants use all the available information to make their decisions produces significant differences in the parameter estimates, leading to potential bias. The results also illustrate that model fit and predictive power are greatly increased by using revealed attendance levels from eye-tracking measures. The most significant improvement, however, comes from endogenously inferring attribute attendance, even more so when combined with revealed attendance indicators.
    Keywords: Choice Experiments, Eye-Tracking, Attribute Attendance, Agribusiness, Institutional and Behavioral Economics, Research Methods/Statistical Methods
    JEL: C91 C18
    Date: 2016–01–22
    URL: http://d.repec.org/n?u=RePEc:ags:saea16:230011&r=ecm
  15. By: Adrian Chadi (Institute for Labour Law and Industrial Relations in the EU, University of Trier)
    Abstract: Selective attrition out of longitudinal datasets is a concern for empirical researchers. This paper discusses a simple way to identify both the direction and the magnitude of potential sample bias in household panels. The idea is to exploit multiple types of simultaneous entries into the panel. The little-known phenomenon of natural refreshment, which adds to entries through refreshment samples induced by data collectors, allows disentangling attrition bias from measurement errors connected to differences in participation experience (i.e. panel conditioning). A demonstrative application to subjective data from the German Socio-Economic Panel Study (SOEP) serves as an example and offers insights on health-related attrition.
    Keywords: Subjective health, refreshment samples, household survey, sample selectivity, panel effects
    JEL: C1 C8 I1
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:iaa:dpaper:201602&r=ecm
  16. By: Zafar, Raja Fawad; Qayyum, Abdul; Ghouri, Saghir Pervaiz
    Abstract: In the present study we model the data using functional time series analysis (FTSA). The method is basically univariate, so to assess its efficiency we compare it with seasonal ARIMA (SARIMA) models. We use monthly data from 2002 to 2011 to forecast the Consumer Price Index (CPI) of Pakistan. We withhold the data for the last year (2011) and, based on the remaining years (2002-2010), fit the models and forecast monthly CPI. Our study compares the performance of the FTSA and SARIMA models using the 2011 test data. A comparison based on forecast evaluation criteria and the forecasts for 2011 indicates that the FTSA model using the general CPI series outperforms the SARIMA models.
    Keywords: Forecasting, Inflation, SARIMA, FTSA
    JEL: C22 C53
    Date: 2015–03–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67208&r=ecm
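The SARIMA benchmark side of such a comparison can be reproduced in a few lines with statsmodels; the series below is simulated (not the Pakistani CPI), the model order is a guess, and the functional time series model would need a separate implementation.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated stand-in for a monthly CPI series over 2002-2011
idx = pd.date_range("2002-01-01", "2011-12-01", freq="MS")
rng = np.random.default_rng(7)
cpi = pd.Series(100 + np.cumsum(rng.normal(0.5, 0.3, len(idx))), index=idx)

# Hold out the last year (2011) and fit on 2002-2010, as in the paper
train, test = cpi.loc[:"2010-12"], cpi.loc["2011-01":]
fit = SARIMAX(train, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast = fit.forecast(steps=len(test))

# Out-of-sample root mean squared error over the 12 held-out months
rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
print(rmse)
```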
  17. By: Malikov, Emir
    Abstract: The existing control-function-based approaches to the identification of firm-level production functions are exclusively concerned with the estimation of single-output production functions, despite the fact that, in practice, most firms produce multiple outputs. While one can always opt to employ a single-product specification of the production process by a priori aggregating the firm's outputs, such a formulation is rarely an accurate portrayal of the firm's productive process. This paper extends the control-function-based approach to the structural identification and estimation of firm-level production functions and productivity to the multi-product setting. Specifically, I consider the nonparametric estimation of multi-product production functions. Among other advantages, explicit modeling of multiple outputs allows the identification of cross-output elasticities representing the technological trade-off between individual outputs along the firm's production possibilities frontier, which a traditional single-output production function approach is unable to deliver. To showcase the methodology, I apply it to study the multi-product production technology of Norwegian dairy farms during the 1998-2008 period.
    Keywords: control function, dairy, endogeneity, multiple-output, production function, productivity, sieve estimation, Production Economics, Productivity Analysis, Research Methods/Statistical Methods
    JEL: C14 D24 L10 Q12
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:ags:aaea16:235108&r=ecm
  18. By: In-Koo Cho (University of Illinois); Kenneth Kasa (Simon Fraser University)
    Abstract: A decision maker doubts the stationarity of his environment. In response, he uses two models, one with time-varying parameters, and another with constant parameters. Forecasts are then based on a Bayesian Model Averaging strategy, which mixes forecasts from the two models. In reality, structural parameters are constant, but the (unknown) true model features expectational feedback, which the reduced form models neglect. This feedback permits fears of parameter instability to become self-confirming. Within the context of a standard linear present value asset pricing model, we use the tools of large deviations theory to show that even though the constant parameter model would converge to the (constant parameter) Rational Expectations Equilibrium if considered in isolation, the mere presence of an unstable alternative drives it out of consideration.
    Keywords: model averaging, asset pricing
    JEL: C63 D84
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp16-06&r=ecm
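The forecast-mixing mechanism at the heart of the paper (not its self-confirming escape dynamics) is ordinary Bayesian model averaging: model weights are updated each period in proportion to the one-step-ahead predictive likelihood of the realized observation. A generic sketch, assuming Gaussian predictive densities for simplicity:

```python
import numpy as np
from scipy.stats import norm

def bma_weights(y, pred_means, pred_sds, prior=(0.5, 0.5)):
    """Recursive Bayesian model averaging over two forecasting models.
    pred_means and pred_sds have shape (2, T): one row per model."""
    w = np.array(prior, dtype=float)
    path = [w.copy()]
    for t in range(len(y)):
        lik = norm.pdf(y[t], loc=pred_means[:, t], scale=pred_sds[:, t])
        w = w * lik          # reweight by predictive likelihood of the realization
        w = w / w.sum()
        path.append(w.copy())
    return np.array(path)    # weight path; the averaged forecast mixes the two models
```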
  19. By: Shin, Minchul; Zhong, Molin
    Abstract: This paper proposes a multivariate stochastic volatility-in-vector autoregression model called the conditional autoregressive inverse Wishart-in-VAR (CAIW-in-VAR) model as a framework for studying the real effects of uncertainty shocks. We make three contributions to the literature. First, the uncertainty shocks we analyze are estimated directly from macroeconomic data, so they are associated with changes in the volatility of the shocks hitting the macroeconomy. Second, we advance a new approach to identify uncertainty shocks by placing limited economic restrictions on the first and second moment responses to these shocks. Third, we consider an extension of the sign restrictions methodology of Uhlig (2005) to uncertainty shocks. To illustrate our methods, we ask: what is the role of financial markets in transmitting uncertainty shocks to the real economy? We find evidence that an increase in uncertainty leads to a decline in industrial production only if it is associated with a deterioration in financial conditions.
    Keywords: Multivariate stochastic volatility ; Uncertainty ; Vector autoregression ; Volatility-in-mean ; Wishart process
    JEL: C11 C32 E32
    Date: 2016–04–25
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2016-40&r=ecm

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.