nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒04‒04
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bayesian Indirect Inference and the ABC of GMM By Michael Creel; Jiti Gao; Han Hong; Dennis Kristensen
  2. Bayesian Rank Selection in Multivariate Regression By Bin Jiang; Anastasios Panagiotelis; George Athanasopoulos; Rob Hyndman; Farshid Vahid
  3. A Frequency Approach to Bayesian Asymptotics By Tingting Cheng; Jiti Gao; Peter CB Phillips
  4. Adaptive shrinkage in Bayesian vector autoregressive models By Florian Huber; Martin Feldkircher
  5. Estimation of Large Dimensional Factor Models with an Unknown Number of Breaks By Shujie Ma; Liangjun Su
  6. Weibull Wind Worth: Wait and Watch? By Lillestøl, Jostein
  7. The Meta-Distribution of Standard P-Values By Nassim Nicholas Taleb
  8. A generalized exponential time series regression model for electricity prices By Niels Haldrup; Oskar Knapik; Tommaso Proietti
  9. Cross-Validation Selection of Regularization Parameter(s) for Semiparametric Transformation Models By Senay Sokullu; Sami Stouli
  10. Estimating the Integrated Parameter of the Locally Parametric Model in High-Frequency Data By Yoann Potiron
  11. Local quantile treatment effects By Blaise Melly and Kaspar Wüthrich
  12. On ill-posedness of nonparametric instrumental variable regression with convexity constraints By Scaillet, Olivier
  13. Forecasting with Large Unbalanced Datasets: The Mixed Frequency Three-Pass Regression Filter By Christian Hepenstrick; Massimiliano Marcellino
  14. Bias in Returns to Tenure When Firm Wages and Employment Comove: A Quantitative Assessment and Solution By Pedro Martins; Andy Snell; Heiko Stueber; Jonathan Thomas
  15. Quantile and expectile smoothing by F-transform. By Luciano Stefanini; Laerte Sorini; Maria Letizia Guerra
  16. Disentangling Neighborhood Effects in Person-Context Research: An Application of a Neighborhood-Based Group Decomposition By Vogel, Matt; van Ham, Maarten
  17. GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model By Tetsuya Takaishi
  18. Nonparametric Euler Equation Identification and Estimation By Juan Carlos Escanciano; Stefan Hoderlein; Arthur Lewbel; Oliver Linton

  1. By: Michael Creel; Jiti Gao; Han Hong; Dennis Kristensen
    Abstract: We propose and study local linear and polynomial based nonparametric regression methods for implementing Approximate Bayesian Computation (ABC) style indirect inference and GMM estimators. These estimators do not need to rely on numerical optimization or Markov Chain Monte Carlo (MCMC) simulations. They provide an effective complement to the classical M-estimators and to MCMC methods, and can be applied to both likelihood-based and method-of-moments-based models. We provide formal conditions under which frequentist inference is asymptotically valid and demonstrate the validity of estimated posterior quantiles for confidence interval construction. We also show that in this setting, local linear kernel regression methods have theoretical advantages over local constant kernel methods that are also reflected in finite sample simulation results. Our results apply to both exactly identified and over-identified models.
    Keywords: GMM-estimators, Laplace transformations, ABC estimators, nonparametric regressions, simulation-based estimation.
    JEL: C12 C15 C22 C52
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2016-1&r=ecm
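    As a rough illustration of the regression-based ABC idea in this entry, the following Python sketch draws parameters from a prior, simulates summary statistics, and runs a kernel-weighted local linear regression of the draws on the statistics, evaluated at the observed statistic. The toy location model, prior, and bandwidth are illustrative assumptions, not the authors' implementation.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(theta, 1); summary statistic = sample mean.
def simulate_stat(theta, n=100):
    return rng.normal(theta, 1.0, size=n).mean()

# "Observed" data and its summary statistic
theta_true = 0.7
s_obs = simulate_stat(theta_true)

# ABC reference table: draws from a uniform prior and their simulated statistics
S = 20000
thetas = rng.uniform(-2.0, 3.0, size=S)
stats = np.array([simulate_stat(t) for t in thetas])

# Local linear regression of theta on the statistic, evaluated at s_obs
h = 0.1                                    # bandwidth (illustrative choice)
u = (stats - s_obs) / h
w = np.exp(-0.5 * u**2)                    # Gaussian kernel weights
X = np.column_stack([np.ones(S), stats - s_obs])
beta = np.linalg.lstsq(X * np.sqrt(w)[:, None], thetas * np.sqrt(w), rcond=None)[0]
print("local linear ABC estimate:", beta[0], "true value:", theta_true)
```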
  2. By: Bin Jiang; Anastasios Panagiotelis; George Athanasopoulos; Rob Hyndman; Farshid Vahid
    Abstract: Estimating the rank of the coefficient matrix is a major challenge in multivariate regression, including vector autoregression (VAR). In this paper, we develop a novel fully Bayesian approach that allows for rank estimation. The key to our approach is reparameterizing the coefficient matrix using its singular value decomposition and conducting Bayesian inference on the decomposed parameters. By implementing a stochastic search variable selection on the singular values of the coefficient matrix, the ultimate selected rank can be identified as the number of nonzero singular values. Our approach is appropriate for small multivariate regressions as well as for higher dimensional models with up to about 40 predictors. In macroeconomic forecasting using VARs, the advantages of shrinkage through proper Bayesian priors are well documented. Consequently, the shrinkage approach proposed here that selects or averages over low rank coefficient matrices is evaluated in a forecasting environment. We show in both simulations and empirical studies that our Bayesian approach provides forecasts that are better than those of the most promising benchmark methods, dynamic factor models and factor augmented VARs.
    Keywords: Singular value decomposition, model selection, vector autoregression, macroeconomic forecasting, dynamic factor models
    JEL: C11 C52 C53
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2016-6&r=ecm
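    The reparameterization at the heart of this approach can be seen in a small simulation: for a VAR whose coefficient matrix has reduced rank, the singular value decomposition makes the rank visible as the number of non-negligible singular values. The sketch below only illustrates the SVD step on an OLS estimate; the stochastic search variable selection prior and the posterior sampler are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a VAR(1) y_t = C y_{t-1} + e_t with a rank-2 coefficient matrix C (5 variables)
k, r, T = 5, 2, 500
C = rng.normal(scale=0.4, size=(k, r)) @ rng.normal(scale=0.4, size=(r, k))
C = 0.8 * C / np.max(np.abs(np.linalg.eigvals(C)))   # rescale to keep the VAR stationary
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = C @ y[t - 1] + rng.normal(scale=0.1, size=k)

# OLS estimate of C and its singular value decomposition
X, Y = y[:-1], y[1:]
C_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
sing_vals = np.linalg.svd(C_hat, compute_uv=False)
print("singular values of C_hat:", np.round(sing_vals, 3))
# In the Bayesian scheme, a spike-and-slab style prior on the singular values would
# set the small ones to zero, so the selected rank is the number of nonzero singular
# values (here the first two dominate).
```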
  3. By: Tingting Cheng; Jiti Gao; Peter CB Phillips
    Abstract: The ergodic theorem shows that ergodic averages of posterior draws converge in probability to the posterior mean under the stationarity assumption. The literature also shows that the posterior distribution is asymptotically normal when the sample size of the original data goes to infinity. To the best of our knowledge, there is little discussion of the large sample behaviour of the posterior mean. In this paper, we aim to fill this gap. In particular, we extend the posterior mean idea to the conditional mean case, conditioning on a given summary statistic of the original data. We establish a new asymptotic theory for the conditional mean estimator for the case where both the sample size of the original data and the number of Markov chain Monte Carlo iterations go to infinity. Simulation studies show that this conditional mean estimator has very good finite sample performance. In addition, we employ the conditional mean estimator to estimate a GARCH(1,1) model for S&P 500 stock returns and find that it performs better than quasi-maximum likelihood estimation in terms of out-of-sample forecasting.
    Keywords: Bayesian average, conditional mean estimation, ergodic theorem, summary statistic
    JEL: C11 C15 C21
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2016-5&r=ecm
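    A minimal numerical illustration of the ergodic-average idea behind the estimator: run a simple Metropolis chain for a toy posterior and take the running average of the draws as the posterior (conditional) mean estimate. The normal location model, flat prior, and tuning constants are assumptions made purely for illustration.
```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: x_i ~ N(mu, 1); flat prior on mu, so the posterior is N(xbar, 1/n)
x = rng.normal(1.5, 1.0, size=200)
n, xbar = x.size, x.mean()

def log_post(mu):
    return -0.5 * n * (mu - xbar) ** 2     # up to an additive constant

# Random-walk Metropolis sampler
M, step = 50000, 0.2
draws = np.empty(M)
mu, lp = 0.0, log_post(0.0)
for m in range(M):
    prop = mu + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    draws[m] = mu

# Ergodic average of the draws: the posterior-mean estimator studied in the paper
print("ergodic average:", draws[5000:].mean(), "analytic posterior mean:", xbar)
```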
  4. By: Florian Huber (Department of Economics, Vienna University of Economics and Business); Martin Feldkircher (Oesterreichische Nationalbank (OeNB))
    Abstract: Vector autoregressive (VAR) models are frequently used for forecasting and impulse response analysis. For both applications, shrinkage priors can help improve inference. In this paper we derive the shrinkage prior of Griffin et al. (2010) for the VAR case and its relevant conditional posterior distributions. This framework imposes a set of normally distributed priors on the autoregressive coefficients and the covariances of the VAR along with Gamma priors on a set of local and global prior scaling parameters. This prior setup is then generalized by introducing another layer of shrinkage with scaling parameters that push certain regions of the parameter space to zero. A simulation exercise shows that the proposed framework yields more precise estimates of the model parameters and impulse response functions. In addition, a forecasting exercise applied to US data shows that the proposed prior outperforms other specifications in terms of point and density predictions.
    Keywords: Normal-Gamma prior, density predictions, hierarchical modeling
    JEL: C11 C30 C53 E52
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp221&r=ecm
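    The hierarchical structure behind a Normal-Gamma shrinkage prior can be sketched by drawing Gamma-distributed local scaling parameters and conditionally normal coefficients: heavy shrinkage towards zero coexists with occasional large coefficients. The parameterization and hyperparameter values below are illustrative assumptions, not the exact prior derived in the paper for the VAR case.
```python
import numpy as np

rng = np.random.default_rng(3)

# Draw coefficients from a Normal-Gamma hierarchy (illustrative parameterization):
#   tau_j | theta, lambda  ~ Gamma(theta, rate = theta * lambda^2 / 2)   (local scalings)
#   beta_j | tau_j         ~ N(0, tau_j)                                 (coefficients)
# A small shape parameter theta puts mass near zero while still allowing large draws.
n_coef = 1000
theta_hyper = 0.3          # shape: smaller values -> heavier shrinkage towards zero
lam2 = 4.0                 # global scaling parameter

tau = rng.gamma(shape=theta_hyper, scale=2.0 / (theta_hyper * lam2), size=n_coef)
beta = rng.normal(0.0, np.sqrt(tau))

print("share of |beta_j| < 0.01:", np.mean(np.abs(beta) < 0.01))
print("largest |beta_j|:", np.abs(beta).max())
```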
  5. By: Shujie Ma (Department of Statistics, University of California, Riverside); Liangjun Su (Singapore Management University)
    Abstract: In this paper we study the estimation of a large dimensional factor model when the factor loadings exhibit an unknown number of changes over time. We propose a novel three-step procedure to detect the breaks, if any, and then identify their locations. In the first step, we divide the whole time span into subintervals and fit a conventional factor model on each interval. In the second step, we apply the adaptive fused group Lasso to identify intervals containing a break. In the third step, we devise a grid search method to estimate the location of the break on each identified interval. We show that with probability approaching one our method can identify the correct number of changes and estimate the break locations. Simulation studies indicate superb finite sample performance of our method. We apply our method to investigate Stock and Watson’s (2009) U.S. monthly macroeconomic data set and identify five breaks in the factor loadings over the period 1959-2006.
    Keywords: Break point; Convergence rate; Factor model; Fused Lasso; Group Lasso; Information criterion; Principal component; Structural change; Super-consistency; Time-varying parameter.
    JEL: C12 C33 C38
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:05-2016&r=ecm
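    A rough illustration of the first step of the procedure: split the sample into subintervals, extract principal-component loadings on each, and compare the estimated loading spaces of adjacent intervals (via distances between projection matrices, since loadings are only identified up to rotation). The adaptive fused group Lasso and grid-search steps of the actual method are not reproduced here.
```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a one-factor panel (N series, T periods) with a single break in the loadings
N, T, T_break = 60, 400, 250
f = rng.normal(size=T)
lam_pre, lam_post = rng.normal(size=N), rng.normal(size=N) + 1.5
X = np.empty((T, N))
X[:T_break] = np.outer(f[:T_break], lam_pre) + rng.normal(scale=0.5, size=(T_break, N))
X[T_break:] = np.outer(f[T_break:], lam_post) + rng.normal(scale=0.5, size=(T - T_break, N))

def loading_space(block, r=1):
    """Principal-component loadings of a block; returns the projection matrix they span."""
    _, _, Vt = np.linalg.svd(block - block.mean(axis=0), full_matrices=False)
    L = Vt[:r].T                        # N x r loading estimate (identified up to rotation)
    return L @ np.linalg.inv(L.T @ L) @ L.T

# Step 1: fit a factor model on each subinterval, then compare adjacent loading spaces
n_blocks = 8
blocks = np.array_split(np.arange(T), n_blocks)
P = [loading_space(X[idx]) for idx in blocks]
dists = [np.linalg.norm(P[i + 1] - P[i]) for i in range(n_blocks - 1)]
print("distance between adjacent loading spaces:", np.round(dists, 2))
# The largest jump should appear between the blocks on either side of the break at t = 250.
```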
  6. By: Lillestøl, Jostein (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: This paper considers a decision problem concerning the worth of a wind mill project whose profitability depends on the average wind speed. This is partly known, and the issue is whether to go ahead with the project now or, at an additional cost, put up a test mill, observe it for, say, a year, and then decide. The problem is studied within a Bayesian framework and given a general analytic solution for a specific loss function of linear type, with the normal case as illustration. Explicit formulas are then derived for the case when the wind speed distribution is Weibull with known shape parameter, and the sensitivity with respect to the specification of this parameter is explored. Based on Norwegian wind speed data, we then give a justification of the Weibull model. This also provides some insight into parameter stability. Finally, a complete numerical scheme for the Bayesian two-parameter Weibull model is given, illustrated with an implementation of pre-posterior Weibull analysis in R.
    Keywords: Weibull distribution; decision analysis; pre-posterior Bayesian analysis
    JEL: C00 C10 C13
    Date: 2016–01–27
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2016_002&r=ecm
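    A small Python analogue (the paper works in R) of the conjugate Bayesian update for a Weibull model with known shape parameter, which underlies the explicit formulas mentioned above: with known shape k, the transformed observations x^k are exponentially distributed, so a Gamma prior on the corresponding rate is conjugate. The prior values and the simulated test-year data are illustrative assumptions, and the decision-theoretic (pre-posterior) layer is not reproduced.
```python
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(5)

# Weibull with known shape k: if X ~ Weibull(k, scale c), then X**k ~ Exponential(rate c**(-k)),
# so a Gamma(a, b) prior on the rate lam = c**(-k) is conjugate.
k = 2.0                               # known shape parameter (illustrative)
a0, b0 = 2.0, 60.0                    # Gamma prior on lam (shape, rate), illustrative

# Simulated wind-speed observations from one test year (daily means, say)
c_true = 8.0
x = c_true * rng.weibull(k, size=365)

# Conjugate posterior for lam
a_n = a0 + x.size
b_n = b0 + np.sum(x ** k)

# Posterior of the mean wind speed E[X] = c * Gamma(1 + 1/k), with c = lam**(-1/k)
lam_draws = rng.gamma(shape=a_n, scale=1.0 / b_n, size=100000)
mean_speed = lam_draws ** (-1.0 / k) * gamma_fn(1.0 + 1.0 / k)
print("posterior mean of E[wind speed]:", mean_speed.mean())
print("true mean wind speed:", c_true * gamma_fn(1.0 + 1.0 / k))
```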
  7. By: Nassim Nicholas Taleb
    Abstract: We present an explicit and parsimonious probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena, having as its sole parameter the median "true" p-value. P-values are extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly the same protocol under identical stochastic copies of the phenomenon. The convenience of the formula allows the investigation of scientific results, particularly meta-analyses.
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1603.07532&r=ecm
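    The skewness and volatility of p-values under identical replications are easy to see by simulation: the sketch below repeats a two-sample t-test on statistically identical data sets calibrated so that the median p-value is close to 0.05 and inspects the spread of the realized p-values. The calibration and sample sizes are illustrative assumptions; the closed-form meta-distribution itself is derived in the paper.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Identical stochastic copies of the same experiment: two-sample t-test, n = 30 per arm,
# true effect chosen so that the *median* p-value lands near 0.05.
n, effect, reps = 30, 0.52, 20000
pvals = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 1.0, size=n)
    y = rng.normal(effect, 1.0, size=n)
    pvals[r] = stats.ttest_ind(x, y).pvalue

print("median p-value:", np.median(pvals))
print("quantiles (5%, 25%, 75%, 95%):", np.quantile(pvals, [0.05, 0.25, 0.75, 0.95]))
print("share of replications with p < 0.05:", np.mean(pvals < 0.05))
# The distribution is strongly right-skewed: identical experiments routinely produce
# p-values an order of magnitude above or below the median.
```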
  8. By: Niels Haldrup (Aarhus University and CREATES); Oskar Knapik (Aarhus University and CREATES); Tommaso Proietti (University of Rome “Tor Vergata” and Creates)
    Abstract: We consider the issue of modeling and forecasting daily electricity spot prices on the Nord Pool Elspot power market. We propose a method that can handle seasonal and non-seasonal persistence by modelling the price series as a generalized exponential process. As the presence of spikes can distort the estimation of the dynamic structure of the series, we consider an iterative estimation strategy which, conditional on a set of parameter estimates, clears the spikes using a data cleaning algorithm, and re-estimates the parameters using the cleaned data so as to robustify the estimates. Conditional on the estimated model, the best linear predictor is constructed. Our modeling approach provides a good in-sample fit and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better strategy than forecasting the daily average directly.
    Keywords: Robust estimation, long-memory, seasonality, electricity spot prices, Nord Pool power market, forecasting, robust Kalman filter, generalized exponential model
    JEL: C1 C5 C53 Q4
    Date: 2016–03–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2016-08&r=ecm
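    A rough sketch of the estimate-clean-reestimate idea: fit a simple autoregression, flag observations whose residuals are extreme relative to a robust scale estimate as spikes, replace them with fitted values, and refit. The AR(1) model, threshold, and cleaning rule below are simplifications of the paper's generalized exponential model and data-cleaning algorithm.
```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated price series: persistent AR(1) plus occasional large spikes
T, phi_true = 1000, 0.9
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=0.2)
spike_idx = rng.choice(np.arange(10, T), size=15, replace=False)
y[spike_idx] += rng.choice([-1, 1], size=15) * rng.uniform(2.0, 4.0, size=15)

def fit_ar1(series):
    x, z = series[:-1], series[1:]
    return np.sum(x * z) / np.sum(x * x)

# Iterate: estimate -> clean spikes -> re-estimate
y_clean = y.copy()
for _ in range(5):
    phi = fit_ar1(y_clean)
    resid = y_clean[1:] - phi * y_clean[:-1]
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))   # robust (MAD) scale
    spikes = np.where(np.abs(resid) > 4.0 * scale)[0] + 1
    y_clean[spikes] = phi * y_clean[spikes - 1]                    # replace spikes by fitted values
print("naive AR(1) estimate :", round(fit_ar1(y), 3))
print("robustified estimate :", round(fit_ar1(y_clean), 3), "(true value 0.9)")
```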
  9. By: Senay Sokullu; Sami Stouli
    Abstract: We propose cross-validation criteria for the selection of regularization parameter(s) in the semiparametric instrumental variable transformation model proposed in Florens and Sokullu (2014). In the presence of an endogenous regressor, this model is characterized by the need to choose two regularization parameters, one for the structural function and one for the transformation of the outcome. We consider both two-step and simultaneous criteria, and analyze the finite-sample performance of the estimator under the corresponding regularization parameters by means of Monte Carlo simulations. Our numerical experiments show that simultaneous selection of the regularization parameters provides significant improvements in the performance of the estimator. We also apply our methods to the choice of regularization parameters in the estimation of network effects in the German magazine industry.
    Keywords: Nonparametric IV Regression, Transformation models, Cross-Validation, Tikhonov Regularization, Ill-posed inverse problems.
    Date: 2016–03–21
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:16/672&r=ecm
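    A generic illustration of simultaneous cross-validation over two regularization parameters: a toy ridge-type estimator with separate penalties on two blocks of coefficients, with the pair of penalties chosen jointly on a two-dimensional grid by K-fold cross-validation. This stands in for, but is much simpler than, the Tikhonov-regularized transformation-model estimator for which the criteria are developed.
```python
import numpy as np

rng = np.random.default_rng(8)

# Toy data: two blocks of regressors with different signal strengths
n, p1, p2 = 200, 10, 10
X = rng.normal(size=(n, p1 + p2))
beta = np.concatenate([rng.normal(scale=1.0, size=p1), rng.normal(scale=0.1, size=p2)])
y = X @ beta + rng.normal(size=n)

def fit_two_penalty_ridge(X, y, lam1, lam2):
    D = np.diag(np.r_[np.full(p1, lam1), np.full(p2, lam2)])   # block-specific penalties
    return np.linalg.solve(X.T @ X + D, X.T @ y)

# Simultaneous selection of (lam1, lam2) on a grid by 5-fold cross-validation
grid = np.logspace(-2, 3, 12)
folds = np.array_split(rng.permutation(n), 5)
cv = np.zeros((grid.size, grid.size))
for i, l1 in enumerate(grid):
    for j, l2 in enumerate(grid):
        for holdout in folds:
            train = np.setdiff1d(np.arange(n), holdout)
            b = fit_two_penalty_ridge(X[train], y[train], l1, l2)
            cv[i, j] += np.mean((y[holdout] - X[holdout] @ b) ** 2)
best = np.unravel_index(cv.argmin(), cv.shape)
print("selected penalties:", grid[best[0]], grid[best[1]])
```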
  10. By: Yoann Potiron
    Abstract: In this paper, we give a general time-varying parameter model, where the multidimensional parameter follows a continuous local martingale. As such, we call it the locally parametric model. The quantity of interest is defined as the integrated value over time of the parameter process $\Theta := T^{-1} \int_0^T \theta_t^* dt$. We provide a local parametric estimator of $\Theta$ based on the original (non time-varying) parametric model estimator and conditions under which we can show consistency and the corresponding limit distribution. We show that the LPM class contains some models that come from popular problems in the high-frequency financial econometrics literature (estimating volatility, high-frequency covariance, integrated betas, leverage effect, volatility of volatility), as well as a new general asset-price diffusion model which allows for endogenous observations and time-varying noise which can be auto-correlated and correlated with the efficient price and the sampling times. Finally, as an example of how to apply the limit theory provided in this paper, we build a time-varying friction parameter extension of the (semiparametric) model with uncertainty zones (Robert and Rosenbaum (2012)), which is noisy and endogenous, and we show that we can verify the conditions for the estimation of integrated volatility.
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1603.05700&r=ecm
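    A minimal illustration of the integrated-parameter idea for the simplest case, spot volatility: split the high-frequency observation window into blocks, compute a local parametric estimate on each block, and average over blocks to approximate $\Theta = T^{-1} \int_0^T \theta_t^* dt$. The simulated time-varying-volatility price path and the block length are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate a price path with continuously time-varying volatility (theta_t = sigma_t^2)
n = 23400                                           # one trading day of 1-second returns
t = np.linspace(0.0, 1.0, n)
sigma = 0.2 * (1.0 + 0.5 * np.sin(2 * np.pi * t))   # spot volatility path
dt = 1.0 / n
returns = sigma * np.sqrt(dt) * rng.normal(size=n)

# Local parametric estimates on blocks, then average over time:
#   Theta_hat = (1/T) * sum_i theta_hat_i * Delta_i   (here T = 1, equal-length blocks)
block = 390                                          # observations per block (illustrative)
theta_hat = np.array([np.sum(r ** 2) / (r.size * dt)   # local variance estimate per block
                      for r in np.array_split(returns, n // block)])
Theta_hat = theta_hat.mean()

Theta_true = np.mean(sigma ** 2)                     # integrated variance over [0, 1]
print("estimated integrated parameter:", round(Theta_hat, 4), "true value:", round(Theta_true, 4))
```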
  11. By: Blaise Melly and Kaspar Wüthrich
    Abstract: This chapter reviews instrumental variable models of quantile treatment effects. We focus on models that achieve identification through a monotonicity assumption in the treatment choice equation. We discuss the key conditions, the role of control variables, and the estimands in detail, and review the literature on estimation and inference. We then consider extensions to multiple and continuous instruments and to the regression discontinuity design, and discuss the testability of the assumptions. Finally, we compare this approach to the alternative instrumental variable approach reviewed by Chernozhukov et al. (2016). Two open research problems are highlighted in the conclusion.
    Keywords: instrumental variables; local quantile treatment effects; monotonicity; compliers
    JEL: C21 C26
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1605&r=ecm
  12. By: Scaillet, Olivier
    Abstract: This note shows that adding monotonicity or convexity constraints on the regression function does not restore well-posedness in nonparametric instrumental variable regression. The minimum distance problem without regularisation is still locally ill-posed.
    Keywords: Nonparametric Estimation, Instrumental Variable, Ill-Posed Inverse Problems
    JEL: C13 C14 C26
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:gnv:wpgsem:unige:79975&r=ecm
  13. By: Christian Hepenstrick; Massimiliano Marcellino
    Abstract: In this paper, we propose a modification of the three-pass regression filter (3PRF) to make it applicable to large mixed frequency datasets with ragged edges in a forecasting context. The resulting method, labeled MF-3PRF, is very simple but compares well to alternative mixed frequency factor estimation procedures in terms of theoretical properties, finite sample performance in Monte Carlo experiments, and empirical applications to GDP growth nowcasting and forecasting for the USA and a variety of other countries.
    Keywords: Dynamic Factor Models, Mixed Frequency, GDP Nowcasting, Forecasting, Partial Least Squares
    JEL: E37 C32 C53
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:snb:snbwpa:2016-04&r=ecm
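    A compact sketch of the single-frequency three-pass regression filter that MF-3PRF extends: pass 1 regresses each predictor on a proxy over time, pass 2 regresses the cross-section of predictors on the pass-1 loadings at each date to extract the factor, and pass 3 regresses the target on the extracted factor. Mixed frequencies and ragged edges, the paper's contribution, are not handled here, and using the target itself as the proxy is an assumption.
```python
import numpy as np

rng = np.random.default_rng(10)

# Simulated factor structure: target y loads on a single factor driving N predictors
T, N = 200, 80
f = rng.normal(size=T)
loadings = rng.normal(size=N)
X = np.outer(f, loadings) + rng.normal(scale=1.0, size=(T, N))
y = 0.8 * f + rng.normal(scale=0.5, size=T)

def three_prf(X, y, z):
    """Three-pass regression filter with proxy z (here the target itself)."""
    Xc, zc = X - X.mean(axis=0), z - z.mean()
    # Pass 1: time-series regression of each predictor on the proxy -> loadings phi_i
    phi = Xc.T @ zc / (zc @ zc)
    # Pass 2: cross-section regression of the predictors on phi at each date -> factor F_t
    F = Xc @ phi / (phi @ phi)
    # Pass 3: regression of the target on the extracted factor
    b = np.polyfit(F, y, 1)
    return F, b

F, b = three_prf(X, y, z=y)
print("pass-3 slope:", round(b[0], 3),
      "| correlation of extracted factor with true factor:", round(np.corrcoef(F, f)[0, 1], 3))
```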
  14. By: Pedro Martins; Andy Snell; Heiko Stueber; Jonathan Thomas
    Abstract: It is well known that, unless worker-firm match quality is controlled for, returns to firm tenure (RTT) estimated directly via reduced form wage (Mincer) equations will be biased. In this paper we argue that even if match quality is properly controlled for, there is a further pervasive source of bias, namely the co-movement of firm employment and firm wages. In a simple mechanical model where human capital is absent and separation is exogenous, we show that positively covarying shocks (either aggregate or firm level) to firms' employment and wages cause downward bias in OLS regression estimates of RTT. We show that the long established procedures for dealing with "traditional" RTT bias do not circumvent the additional problem we have identified. We argue that if a reduced form estimation of RTT is undertaken, firm-year fixed effects must be added in order to eliminate this bias. Estimates from two large panel datasets from Portugal and Germany show that the bias is empirically important. Adding firm-year fixed effects to the regression increases estimates of RTT in the two respective countries by between 3.5% and 4.5% of wages at 20 years of tenure, which is over 80% (50%) of the estimated RTT level itself. The results extend to tenure correlates used in macroeconomics, such as the minimum unemployment rate since joining the firm; adding firm-year fixed effects also changes these estimates.
    Keywords: Matched data, Tenure effects, Germany, Portugal
    JEL: J31 J63 C23
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:cgs:wpaper:64&r=ecm
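    A rough illustration of the proposed fix: estimate returns to tenure with and without firm-year fixed effects, which in this one-regressor case amounts to demeaning wages and tenure within firm-year cells before running OLS. The simulated comovement of firm wages and hiring below is an assumption chosen only to reproduce the direction of the bias.
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)

# Simulate workers whose firms hire more in years with positive wage shocks, so that
# low-tenure workers are over-represented in high-wage firm-years.
n_firms, n_years, rtt = 200, 15, 0.02            # true return to tenure: 2% per year
rows = []
for f in range(n_firms):
    shocks = rng.normal(scale=0.2, size=n_years)             # firm-year wage shocks
    workers = []                                             # list of hire years
    for t in range(n_years):
        n_hires = rng.poisson(5 * np.exp(2.0 * shocks[t]))   # hiring comoves with wages
        workers += [t] * n_hires
        for hire_year in workers:
            tenure = t - hire_year
            wage = rtt * tenure + shocks[t] + rng.normal(scale=0.05)
            rows.append((f, t, tenure, wage))
df = pd.DataFrame(rows, columns=["firm", "year", "tenure", "wage"])

def ols_slope(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

# Naive OLS vs. OLS after demeaning within firm-year cells (= firm-year fixed effects)
grp = df.groupby(["firm", "year"])
tenure_w = df["tenure"] - grp["tenure"].transform("mean")
wage_w = df["wage"] - grp["wage"].transform("mean")
print("true RTT             :", rtt)
print("naive OLS estimate   :", round(ols_slope(df["tenure"].to_numpy(), df["wage"].to_numpy()), 4))
print("firm-year FE estimate:", round(ols_slope(tenure_w.to_numpy(), wage_w.to_numpy()), 4))
```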
  15. By: Luciano Stefanini (Department of Economics, Society & Politics, Università di Urbino "Carlo Bo"); Laerte Sorini (Department of Economics, Society & Politics, Università di Urbino "Carlo Bo"); Maria Letizia Guerra (Department of Mathematics, University of Bologna)
    Abstract: In this paper we illustrate the F-transform based on generalized fuzzy partitions as a tool for quantile and expectile smoothing. This makes it possible to represent a time series in terms of a fuzzy-valued function whose level-cuts are modeled by the F-transform and estimated by quantile or expectile regression. The proposed methodology is illustrated on several historical financial time series in order to highlight its strong properties.
    JEL: C22 C63
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:urb:wpaper:15_12&r=ecm
  16. By: Vogel, Matt (University of Missouri-St. Louis); van Ham, Maarten (Delft University of Technology)
    Abstract: This paper proposes a framework to assess how compositional differences at the neighborhood level contribute to the moderating effect of neighborhood context on the association between individual risk-factors and delinquency. We propose a neighborhood-based group decomposition to partition person-context interactions into their constituent components. Using data from the National Longitudinal Study of Adolescent to Adult Health, we demonstrate the extent to which variation in the association between impulsivity and delinquency can be attributed to (1) differences in mean-levels of impulsivity and violence in disadvantaged neighborhoods and (2) differences in coefficients across neighborhoods. The moderating effect of neighborhood disadvantage can be attributed primarily to the stronger effect of impulsivity on violence in disadvantaged neighborhoods, while differences in average levels of violence and impulsivity account for 14 percent and 2 percent of the observed difference, respectively.
    Keywords: person-context research, neighborhood effects, decomposition
    JEL: C02 R23
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9793&r=ecm
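    A generic two-group decomposition in the spirit of the neighborhood-based decomposition described above, on simulated data: the gap in average violence between disadvantaged and advantaged neighborhoods is split into a part due to differences in mean impulsivity (composition) and a part due to differences in intercepts and slopes (coefficients). The data-generating values are assumptions; the paper's application uses Add Health data and a more elaborate decomposition.
```python
import numpy as np

rng = np.random.default_rng(14)

# Two neighborhood types: disadvantaged (1) and advantaged (0), simulated so that the
# impulsivity -> violence slope is steeper and mean impulsivity higher in group 1.
n = 5000
group = rng.integers(0, 2, size=n)
impulsivity = rng.normal(loc=0.3 * group, scale=1.0, size=n)
violence = 0.2 + 0.1 * group + (0.3 + 0.25 * group) * impulsivity + rng.normal(scale=0.5, size=n)

def ols(y, x):
    X = np.column_stack([np.ones(x.size), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]       # [intercept, slope]

b0 = ols(violence[group == 0], impulsivity[group == 0])
b1 = ols(violence[group == 1], impulsivity[group == 1])
x0, x1 = impulsivity[group == 0].mean(), impulsivity[group == 1].mean()

gap = (b1[0] + b1[1] * x1) - (b0[0] + b0[1] * x0)
composition = b0[1] * (x1 - x0)                        # differences in mean impulsivity
coefficients = (b1[0] - b0[0]) + (b1[1] - b0[1]) * x1  # differences in intercepts and slopes
print("total gap:", round(gap, 3))
print("composition share:", round(composition / gap, 2),
      "| coefficient share:", round(coefficients / gap, 2))
```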
  17. By: Tetsuya Takaishi
    Abstract: The realized stochastic volatility (RSV) model, which uses realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on a GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran.
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1603.08114&r=ecm
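    The building block being ported to the GPU is the Hybrid (Hamiltonian) Monte Carlo update: simulate Hamiltonian dynamics with a leapfrog integrator and accept or reject with a Metropolis step. Below is a plain CPU sketch in Python/NumPy for a toy Gaussian target; the paper's implementation targets the realized stochastic volatility posterior and is written in CUDA Fortran and OpenACC.
```python
import numpy as np

rng = np.random.default_rng(12)

# Toy target: correlated bivariate Gaussian (stand-in for the RSV posterior)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
P = np.linalg.inv(Sigma)

def U(q):       return 0.5 * q @ P @ q             # potential energy = -log density
def grad_U(q):  return P @ q

def hmc_step(q, eps=0.15, L=20):
    p = rng.normal(size=q.size)                    # momentum refresh
    q_new, p_new = q.copy(), p - 0.5 * eps * grad_U(q)   # initial half kick
    for _ in range(L):                             # leapfrog integration
        q_new = q_new + eps * p_new
        p_new = p_new - eps * grad_U(q_new)
    p_new = p_new + 0.5 * eps * grad_U(q_new)      # turn the last full kick into a half kick
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q   # Metropolis accept/reject

q = np.zeros(2)
draws = np.empty((5000, 2))
for m in range(5000):
    q = hmc_step(q)
    draws[m] = q
print("sample covariance:\n", np.round(np.cov(draws.T), 2))
```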
  18. By: Juan Carlos Escanciano; Stefan Hoderlein; Arthur Lewbel; Oliver Linton
    Abstract: We consider nonparametric identification and estimation of pricing kernels, or equivalently of marginal utility functions up to scale, in consumption-based asset pricing Euler equations. Ours is the first paper to prove nonparametric identification of Euler equations under low level conditions (without imposing functional restrictions or just assuming completeness). We also propose a novel nonparametric estimator based on our identification analysis, which combines standard kernel estimation with the computation of a matrix eigenvector problem. Our estimator avoids the ill-posed inverse issues associated with existing nonparametric instrumental variables based Euler equation estimators. We derive limiting distributions for our estimator and for relevant associated functionals. We provide a Monte Carlo analysis and an empirical application to US household-level consumption data.
    Keywords: Euler equations, marginal utility, pricing kernel, Fredholm equations, integral equations, nonparametric identification, asset pricing.
    JEL: C14 D91 E21 G12
    Date: 2015–10–01
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1560&r=ecm
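    A stylized sketch of the kernel-estimation-plus-eigenvector structure of the estimator: discretize the consumption state, estimate the conditional expectation operator behind the Euler equation m(c_t) = E_t[beta R_{t+1} m(c_{t+1})] by bin-count averages from simulated data, and recover the marginal utility function up to scale as the dominant eigenvector of the resulting matrix. The simulated economy, CRRA utility, and binning are illustrative assumptions, not the authors' kernel estimator.
```python
import numpy as np

rng = np.random.default_rng(13)

# Simulate consumption and returns consistent with the Euler equation
#   m(c_t) = E_t[ beta * R_{t+1} * m(c_{t+1}) ],  with CRRA marginal utility m(c) = c**(-gamma)
T, beta, gamma = 100000, 0.96, 2.0
logc = np.zeros(T)
for t in range(1, T):
    logc[t] = 0.9 * logc[t - 1] + rng.normal(scale=0.05)
c = np.exp(logc)
eps = rng.normal(scale=0.1, size=T)
R = (1.0 / beta) * (c[1:] / c[:-1]) ** gamma * np.exp(eps[1:] - 0.5 * 0.1 ** 2)

# Bin-count estimate of the conditional expectation operator:
#   A[i, j] ~= beta * E[ R_{t+1} * 1{c_{t+1} in bin j} | c_t in bin i ]
n_bins = 15
edges = np.quantile(c, np.linspace(0, 1, n_bins + 1))
bins = np.clip(np.digitize(c, edges[1:-1]), 0, n_bins - 1)
A = np.zeros((n_bins, n_bins))
counts = np.bincount(bins[:-1], minlength=n_bins)
for t in range(T - 1):
    A[bins[t], bins[t + 1]] += beta * R[t]
A /= counts[:, None]

# Marginal utility (up to scale) = positive eigenvector associated with the dominant eigenvalue
eigval, eigvec = np.linalg.eig(A)
k = np.argmax(np.abs(eigval))
m_hat = np.abs(np.real(eigvec[:, k]))
centers = 0.5 * (edges[:-1] + edges[1:])
corr = np.corrcoef(np.log(m_hat), -gamma * np.log(centers))[0, 1]
print("dominant eigenvalue (should be near 1):", round(np.real(eigval[k]), 3))
print("corr(log m_hat, -gamma * log c):", round(corr, 3))
```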

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.