nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒04‒16
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Estimation and Testing of Stochastic Frontier Models using Variational Bayes By Gholamreza Hajargasht; William E. Griffiths
  2. Conditionally Optimal Weights and Forward-Looking Approaches to Combining Forecasts By Christopher G. Gibbs; Andrey L. Vasnev
  3. Performance Comparison of Modified HP Filter, Wavelet Analysis and Empirical Mode Decomposition for Smoothing Macroeconomic Time Series By Javed Iqbal; Muhammad Nadim Hanif
  4. On selecting directions for directional distance functions in a non-parametric framework: A review By Ke Wang; Yujiao Xian; Chia-Yen Lee; Yi-Ming Wei; Zhimin Huang
  5. Fully Modified HP Filter By Muhammad Nadim Hanif; Javed Iqbal; M. Ali Choudhary
  6. Adaptively weighted group Lasso for semiparametric quantile regression models By HONDA, Toshio; ING, Ching-Kang; WU, Wei-Ying
  7. Copula–based vMEM Specifications versus Alternatives: The Case of Trading Activity By Fabrizio Cipollini; Robert F. Engle; Giampiero M. Gallo
  8. Truncation Bias By Moshe Kim; Nir Billfeld
  9. Bounds with Imperfect Instruments: Leveraging the Implicit Assumption of Intransitivity in Correlations By Wiseman, Nathan; Sorensen, Todd A.
  10. Solving DSGE models with stochastic trends By Sergei Seleznev
  11. Bayesian Assessment of Lorenz and Stochastic Dominance Using a Mixture of Gamma Densities By David Lander; David Gunawan; William E. Griffiths; Duangkamon Chotikapanich
  12. Inference for Lorenz Curves By Gholamreza Hajargasht; William E. Griffiths

  1. By: Gholamreza Hajargasht (Department of Economics, University of Melbourne); William E. Griffiths (Department of Economics, University of Melbourne)
    Abstract: We show how a wide range of stochastic frontier models can be estimated relatively easily using variational Bayes. We derive approximate posterior distributions and point estimates for parameters and inefficiency effects for (a) time invariant models with several alternative inefficiency distributions, (b) models with time varying effects, (c) models incorporating environmental effects, and (d) models with more flexible forms for the regression function and error terms. Despite the abundance of stochastic frontier models, there have been few attempts to test the various models against each other, probably due to the difficulty of performing such tests. One advantage of the variational Bayes approximation is that it facilitates the computation of marginal likelihoods that can be used to compare models. We apply this idea to test stochastic frontier models with different inefficiency distributions. Estimation and testing is illustrated using three examples.
    Keywords: Technical efficiency, Marginal likelihood, Time-varying panel, Environmental effects, Mixture, Semiparametric model
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:2024&r=ecm
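    Sketch: a minimal illustration of the model class, not the authors' variational Bayes procedure. It simulates the canonical normal/half-normal stochastic frontier y = x'b + v - u and estimates it by standard maximum likelihood; all data and parameter values are made up for the example.
      # Hypothetical data; standard ML, not variational Bayes
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n = 500
      x = np.column_stack([np.ones(n), rng.normal(size=n)])
      beta_true, sigma_v, sigma_u = np.array([1.0, 0.5]), 0.3, 0.6
      v = rng.normal(scale=sigma_v, size=n)
      u = np.abs(rng.normal(scale=sigma_u, size=n))      # half-normal inefficiency
      y = x @ beta_true + v - u

      def neg_loglik(theta):
          # normal/half-normal composed-error log-likelihood
          beta, sv, su = theta[:2], np.exp(theta[2]), np.exp(theta[3])
          sigma, lam = np.sqrt(sv**2 + su**2), su / sv
          eps = y - x @ beta
          ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
                + norm.logcdf(-eps * lam / sigma))
          return -ll.sum()

      res = minimize(neg_loglik, x0=np.array([0.0, 0.0, -1.0, -1.0]), method="BFGS")
      print("beta_hat:", res.x[:2], "sigma_v, sigma_u:", np.exp(res.x[2:]))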
  2. By: Christopher G. Gibbs (School of Economics, UNSW Business School, UNSW); Andrey L. Vasnev (University of Sydney)
    Abstract: In applied forecasting, there is a trade-off between in-sample fit and out-of-sample forecast accuracy. Parsimonious model specifications typically outperform richer model specifications. Consequently, there is often predictable information in forecast errors that is difficult to exploit. However, we show how this predictable information can be exploited in forecast combinations. In this case, optimal combination weights should minimize conditional mean squared error, or a conditional loss function, rather than the unconditional variance as in the commonly used framework of Bates and Granger (1969). We prove that our conditionally optimal weights lead to better forecast performance. The conditionally optimal weights support other forward-looking approaches to combining forecasts, where the forecast weights depend on the expected model performance. We show that forward-looking
    Keywords: Forecast combination, conditionally optimal weights, forecast combination puzzle, inflation, Phillips curve
    JEL: C18 C53 E31
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:swe:wpaper:2017-10&r=ecm
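    Sketch: a toy contrast between Bates-Granger weights based on full-sample error variances and weights recomputed from a recent rolling window, a crude stand-in for the paper's conditionally optimal, forward-looking weights; the series and the two competing forecasts are simulated and hypothetical.
      import numpy as np

      rng = np.random.default_rng(1)
      T = 300
      y = np.cumsum(rng.normal(size=T))                    # target series
      f1 = y + rng.normal(scale=1.0, size=T)               # forecast 1: stable errors
      f2 = y + rng.normal(scale=0.5 + 1.5 * (np.arange(T) > T // 2), size=T)  # forecast 2 deteriorates

      e1, e2 = y - f1, y - f2
      # (i) unconditional Bates-Granger weights (inverse error variances, no covariance term)
      w1 = (1 / e1.var()) / (1 / e1.var() + 1 / e2.var())
      combo_bg = w1 * f1 + (1 - w1) * f2

      # (ii) rolling-window weights: use only the last 30 past errors at each date
      window, combo_roll = 30, np.full(T, np.nan)
      for t in range(window, T):
          v1, v2 = e1[t - window:t].var(), e2[t - window:t].var()
          wt = (1 / v1) / (1 / v1 + 1 / v2)
          combo_roll[t] = wt * f1[t] + (1 - wt) * f2[t]

      mask = ~np.isnan(combo_roll)
      print("MSE Bates-Granger    :", np.mean((y - combo_bg)[mask] ** 2))
      print("MSE rolling-window   :", np.mean((y[mask] - combo_roll[mask]) ** 2))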
  3. By: Javed Iqbal (State Bank of Pakistan); Muhammad Nadim Hanif (State Bank of Pakistan)
    Abstract: We compare the performance of the modified HP filter, wavelet analysis and empirical mode decomposition. Our simulation results suggest that the modified HP filter performs best over a time series as a whole, whereas wavelet analysis performs best in the middle of the series. Wavelet-based filtering has the highest 'end-point bias (EPB)'; however, it performs better when we extrapolate the series to reduce the EPB. A study based on observed data on real income, investment and consumption shows that the autoregressive properties and multivariate analytics of the cyclical components depend on the filtering technique.
    Keywords: Business Cycle, Smoothing Macro Time Series, Modified HP Filter, Wavelet Analysis, End Point Bias in HP Filter, Simulation, Cross Country Study.
    JEL: E32 C18
    Date: 2017–03
    URL: http://d.repec.org/n?u=RePEc:sbp:wpaper:87&r=ecm
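    Sketch: the standard HP filter applied via statsmodels to a simulated series, as a reference point for the comparison described above; the paper's modified HP, wavelet and EMD smoothers are not reproduced here.
      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      rng = np.random.default_rng(2)
      t = np.arange(200)
      series = 0.05 * t + np.sin(2 * np.pi * t / 32) + rng.normal(scale=0.3, size=200)

      cycle, trend = hpfilter(series, lamb=1600)   # lamb=1600 is the usual quarterly choice
      print("std of extracted cycle:", cycle.std())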
  4. By: Ke Wang; Yujiao Xian; Chia-Yen Lee; Yi-Ming Wei (Center for Energy and Environmental Policy Research (CEEP), Beijing Institute of Technology); Zhimin Huang
    Abstract: The directional distance function (DDF) has been a commonly used technique for estimating efficiency and productivity over the past two decades, and the directional vector is usually predetermined in applications of the DDF. The most critical issue in using the DDF remains how to appropriately project an inefficient decision-making unit (DMU) onto the production frontier along a justified direction. This paper provides a comprehensive literature review of the techniques for selecting the directional vector of the directional distance function. It begins with a brief introduction to the existing methods, grouped into exogenous and endogenous direction techniques. The former commonly include arbitrary direction and conditional direction techniques, while the latter involve techniques for seeking theoretically optimized directions (i.e., directions towards the closest benchmark or indicating the largest efficiency improvement potential) and market-oriented directions (i.e., directions towards cost minimization, profit maximization, or marginal profit maximization benchmarks). The main advantages and disadvantages of these techniques are summarized, and the limitations inherent in the exogenous direction-selecting techniques are discussed. The mechanism of each endogenous direction technique is also analyzed. The review ends with a numerical example of efficiency estimation for power plants, in which most of the reviewed directions for the DDF are demonstrated and their evaluation performance is compared.
    Keywords: Data Envelopment Analysis (DEA); Least distance; Endogenous mechanism; Cost efficiency; Profit efficiency; Marginal profit maximization
    JEL: Q54 Q40
    Date: 2017–01–02
    URL: http://d.repec.org/n?u=RePEc:biw:wpaper:99&r=ecm
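    Sketch: a directional distance function computed by DEA as a linear program under constant returns to scale with an exogenously fixed (proportional) direction, using hypothetical data; the endogenous direction-selection rules reviewed in the paper are not implemented here.
      import numpy as np
      from scipy.optimize import linprog

      # 5 hypothetical DMUs, 2 inputs, 1 output
      X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 4.0], [4.0, 2.0]])
      Y = np.array([[1.0], [1.0], [1.5], [2.0], [1.2]])
      n, m_in = X.shape
      m_out = Y.shape[1]

      def ddf_score(j0, g_x, g_y):
          # variables: [beta, lambda_1, ..., lambda_n]; maximize beta
          c = np.zeros(1 + n); c[0] = -1.0
          A_ub, b_ub = [], []
          for k in range(m_in):    # sum_j lam_j * X[j,k] + beta * g_x[k] <= X[j0,k]
              A_ub.append(np.concatenate(([g_x[k]], X[:, k]))); b_ub.append(X[j0, k])
          for r in range(m_out):   # beta * g_y[r] - sum_j lam_j * Y[j,r] <= -Y[j0,r]
              A_ub.append(np.concatenate(([g_y[r]], -Y[:, r]))); b_ub.append(-Y[j0, r])
          res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                        bounds=[(0, None)] * (1 + n), method="highs")
          return res.x[0]

      for j in range(n):
          print(f"DMU {j}: beta = {ddf_score(j, g_x=X[j], g_y=Y[j]):.3f}")  # proportional direction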
  5. By: Muhammad Nadim Hanif (State Bank of Pakistan); Javed Iqbal (State Bank of Pakistan); M. Ali Choudhary (State Bank of Pakistan)
    Abstract: Business cycle estimation is at the core of macroeconomic research. The Hodrick-Prescott (1997) filter (HP filter) is the most popular tool for extracting the cycle from a macroeconomic time series. There are certain issues with the HP filter, including the fixed value of the smoothing parameter λ across series/countries and end-point bias (EPB). The modified HP filter (MHP) of McDermott (1997) attempted to address the first issue. Bloechl (2014) introduced a loss function minimization approach to address the EPB issue while keeping λ fixed (as in the HP filter). In this study we marry the endogenous-λ approach of McDermott (1997) with the loss function minimization approach of Bloechl (2014) to analyze the EPB in the HP filter, while intuitively changing the weighting scheme used in the latter. We contribute by suggesting an endogenous weighting scheme along with an endogenous smoothing parameter to resolve the EPB issue of the HP filter. We call this the fully modified HP (FMHP) filter. Our FMHP filter outperforms a variety of conventional filters in a power comparison (simulation) study as well as in observed real data (univariate and multivariate) analytics for a large set of countries.
    Keywords: Business Cycle, Time Series, Fully Modified HP Filter, End Point Bias in HP Filter, Simulation, Cross Country Study.
    JEL: E32 C18
    Date: 2017–04
    URL: http://d.repec.org/n?u=RePEc:sbp:wpaper:88&r=ecm
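    Sketch: a simple diagnostic of the end-point bias the paper targets, comparing the HP trend at a given date estimated in "real time" with the estimate once later observations are available; the data are simulated and the FMHP filter itself is not reproduced.
      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      rng = np.random.default_rng(3)
      t = np.arange(240)
      series = 0.03 * t + np.sin(2 * np.pi * t / 40) + rng.normal(scale=0.2, size=240)

      cut = 200
      _, trend_short = hpfilter(series[:cut], lamb=1600)   # trend using data up to t = cut-1 only
      _, trend_full = hpfilter(series, lamb=1600)          # trend using the full sample

      print("trend at t=199, real-time estimate  :", trend_short[cut - 1])
      print("trend at t=199, full-sample estimate:", trend_full[cut - 1])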
  6. By: HONDA, Toshio; ING, Ching-Kang; WU, Wei-Ying
    Abstract: We propose an adaptively weighted group Lasso procedure for simultaneous variable selection and structure identification for varying coefficient quantile regression models and additive quantile regression models with ultra-high dimensional covariates. Under a strong sparsity condition, we establish selection consistency of the proposed Lasso procedure when the weights therein satisfy a set of general conditions. This consistency result, however, relies on a suitable choice of the tuning parameter for the Lasso penalty, which can be hard to make in practice. To alleviate this difficulty, we suggest a BIC-type criterion, which we call the high-dimensional information criterion (HDIC), and show that the proposed Lasso procedure with the tuning parameter determined by HDIC still achieves selection consistency. Our simulation studies strongly support our theoretical findings.
    Keywords: Additive models, B-spline, high-dimensional information criteria, Lasso, structure identification, varying coefficient models
    Date: 2017–04
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2017-04&r=ecm
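    Sketch: a stylized analogue only, not the paper's adaptively weighted group Lasso or its HDIC. It fits median regressions on nested covariate sets and selects among them with a BIC-type criterion built from the quantile check loss and inflated by log(p); the data, the candidate ordering and the penalty constant are assumptions of the example.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n, p, tau = 400, 8, 0.5
      X = rng.normal(size=(n, p))
      y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_t(df=3, size=n)  # only x0, x1 matter
      df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)]); df["y"] = y

      def check_loss(u, tau):
          return np.mean(u * (tau - (u < 0)))              # quantile check loss

      best = None
      for k in range(1, p + 1):                            # nested candidates x0, x0+x1, ...
          formula = "y ~ " + " + ".join(f"x{j}" for j in range(k))
          fit = smf.quantreg(formula, df).fit(q=tau)
          loss = check_loss(df["y"] - fit.fittedvalues, tau)
          crit = n * np.log(loss) + (k + 1) * np.log(n) * np.log(p)  # BIC-type, log(p)-inflated
          if best is None or crit < best[0]:
              best = (crit, k)
      print("selected number of covariates:", best[1])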
  7. By: Fabrizio Cipollini (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze); Robert F. Engle (Department of Finance, Stern School of Business, New York University); Giampiero M. Gallo (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze)
    Abstract: We discuss several multivariate extensions of the Multiplicative Error Model by Engle (2002) to take into account dynamic interdependence and contemporaneously correlated innovations (vector MEM or vMEM). We suggest copula functions to link Gamma marginals of the innovations, in a specification where past values and conditional expectations of the variables can be simultaneously estimated. Results with realized volatility, volumes and number of trades of the JNJ stock show that significantly superior realized volatility forecasts are delivered with a fully interdependent vMEM relative to a single equation. Alternatives involving log–Normal or semiparametric formulations produce substantially equivalent results.
    Keywords: GARCH; MEM; Realized Volatility; Trading Volume; Trading Activity; Trades; Copula; Volatility Forecasting
    JEL: C32 C51 C58 C89
    Date: 2017–04
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2017_02&r=ecm
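    Sketch: a single-equation Multiplicative Error Model x_t = mu_t * eps_t simulated with unit-mean Gamma innovations and assumed parameter values; the paper's vector MEM links several such equations through a copula, which is not reproduced here.
      import numpy as np

      rng = np.random.default_rng(5)
      T, omega, alpha, beta, a = 1000, 0.1, 0.15, 0.80, 4.0   # a = Gamma shape; mean fixed at 1

      x = np.empty(T)
      mu = omega / (1 - alpha - beta)                  # start at the unconditional mean
      for t in range(T):
          eps = rng.gamma(shape=a, scale=1.0 / a)      # E[eps] = 1
          x[t] = mu * eps
          mu = omega + alpha * x[t] + beta * mu        # conditional mean for t+1

      print("sample mean:", x.mean(), "unconditional mean:", omega / (1 - alpha - beta))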
  8. By: Moshe Kim (University of Haifa, Department of Economics); Nir Billfeld (University of Haifa, Department of Economics, PHD student)
    Abstract: Truncation is a widespread phenomenon plaguing the majority of fields of empirical research: the observed data distribution function is truncated and relates to participants' covariates only, rendering Heckman's seminal and well-known correction procedure not implementable. Thus, for the correction of endogenous selectivity bias propagated by truncation, we introduce a new methodology that recovers the unobserved part of the data distribution function using only its observed truncated part. The correlation patterns among the non-participants' covariates (which are all functions of the recovered non-participants' density function) are recovered as well. The rationale underlying the ability to recover the unobserved complete density function from the observed truncated density function relies on the fact that the latter is obtained by conditioning the former on the selection rule. Consequently, the parameter set which characterizes the truncated density function contains all the parameters characterizing the unobserved non-truncated density function. Thus, it is possible to characterize the unobserved non-participants' density function in terms of the parameters estimated using the truncated data solely. Once this unobserved part is recovered, one can estimate the selection rule equation for the hazard rate calculation as if the full sample consisting of both participants and non-participants were observable. Monte Carlo simulations attest to the high accuracy of the estimates and to above-conventional √n consistency.
    Keywords: Selectivity bias correction, Truncated Probit
    URL: http://d.repec.org/n?u=RePEc:haf:huedwp:wp201607&r=ecm
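    Sketch: an illustration of the problem rather than the authors' correction; OLS on a sample truncated by an outcome-related selection rule is biased relative to the full sample. The data and the truncation rule are hypothetical.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 20000
      x = rng.normal(size=n)
      u = rng.normal(size=n)
      y = 1.0 + 2.0 * x + u
      observed = y > 1.0                               # truncation: only high-y units are observed

      def ols_slope(x, y):
          X = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      print("slope, full sample     :", ols_slope(x, y))
      print("slope, truncated sample:", ols_slope(x[observed], y[observed]))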
  9. By: Wiseman, Nathan (University of Nevada, Reno); Sorensen, Todd A. (University of Nevada, Reno)
    Abstract: Instrumental variables (IV) is an indispensable tool for establishing causal relationships between variables. Recent work has focused on improving bounds for cases when an ideal instrument does not exist. We leverage a principle, "Intransitivity in Correlations," related to an under-utilized property from the statistics literature. From this principle, it is straightforward to obtain new bounds. We argue that these new theoretical bounds become increasingly useful as instruments become increasingly weak or invalid.
    Keywords: instrumental variables, bounding, partial identification, transitivity in correlations
    JEL: C26
    Date: 2017–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp10646&r=ecm
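    Sketch: the textbook correlation-matrix fact behind "intransitivity in correlations": given corr(Z,X) and corr(Z,Y), positive semidefiniteness of the 3x3 correlation matrix bounds corr(X,Y). The paper's bounds on causal effects with imperfect instruments build on related reasoning but are not reproduced here.
      import numpy as np

      def corr_xy_bounds(a, b):
          # a = corr(Z, X), b = corr(Z, Y); implied bounds on corr(X, Y)
          slack = np.sqrt((1 - a**2) * (1 - b**2))
          return a * b - slack, a * b + slack

      for a, b in [(0.9, 0.9), (0.5, 0.5), (0.2, 0.3)]:
          lo, hi = corr_xy_bounds(a, b)
          print(f"corr(Z,X)={a}, corr(Z,Y)={b} -> corr(X,Y) in [{lo:.3f}, {hi:.3f}]")
    Note that with weak correlations the implied interval is close to [-1, 1], which is why correlations need not be transitive.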
  10. By: Sergei Seleznev (Bank of Russia, Russian Federation)
    Abstract: We propose an algorithm for solving DSGE models with stochastic trends. Several implementations help us to solve models with a small number of stochastic trends quickly in the absence of a balanced growth path, and allow us to control the accuracy of the approximation within a certain range. Since many implementations can be easily parallelized, the algorithm enables the estimation of models in the absence of a balanced growth path. We also provide a number of possible methods for estimation.
    Keywords: Non-stationary DSGE, stochastic trends, Smolyak’s algorithm, perturbation method.
    JEL: C61 C63
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:bkr:wpaper:wps15&r=ecm
  11. By: David Lander (Pennsylvania State University); David Gunawan (University of New South Wales); William E. Griffiths (Department of Economics, University of Melbourne); Duangkamon Chotikapanich (Monash University)
    Abstract: Because of their applicability for ordering distributions within general classes of utility and social welfare functions, tests for stochastic and Lorenz dominance have attracted considerable attention in the literature. To date the focus has been on sampling theory tests, with some tests having a null hypothesis that X dominates Y (say), and others having a null hypothesis that Y does not dominate X. These tests can be performed in both directions, with X and Y reversed. We propose a Bayesian approach for assessing Lorenz and stochastic dominance where the three hypotheses (i) X dominates Y, (ii) Y dominates X, and (iii) neither Y nor X is dominant, are treated symmetrically. Posterior probabilities for each of the three hypotheses are obtained by estimating the distributions and counting the proportions of MCMC draws that satisfy each of the hypotheses. We apply the proposed approach to samples of Indonesian income distributions for 1999, 2002, 2005 and 2008. To ensure flexible modelling of the distributions, mixtures of gamma densities are fitted for each of the years. We introduce probability curves that depict the probability of dominance at each population proportion and which convey valuable information about dominance probabilities for restricted population proportions relevant when studying poverty orderings. The results are compared with those from some sampling theory tests, and the probability curves are used to explain seemingly contradictory outcomes.
    Keywords: dominance probabilities; MCMC; poverty comparisons
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:2023&r=ecm
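    Sketch: the counting step described in the abstract, applied to single gamma densities with simulated stand-in "posterior draws" instead of the paper's gamma-mixture MCMC output; posterior probabilities of first-order stochastic dominance are the proportions of draws in which one CDF lies everywhere below the other on a grid.
      import numpy as np
      from scipy.stats import gamma

      rng = np.random.default_rng(7)
      n_draws, grid = 2000, np.linspace(0.1, 30, 200)

      # stand-in posterior draws of (shape, scale) for incomes X and Y
      shape_x, scale_x = rng.normal(2.0, 0.05, n_draws), rng.normal(3.0, 0.1, n_draws)
      shape_y, scale_y = rng.normal(2.0, 0.05, n_draws), rng.normal(2.5, 0.1, n_draws)

      x_dom, y_dom = 0, 0
      for i in range(n_draws):
          Fx = gamma.cdf(grid, a=shape_x[i], scale=scale_x[i])
          Fy = gamma.cdf(grid, a=shape_y[i], scale=scale_y[i])
          x_dom += np.all(Fx <= Fy)       # X dominates Y: its CDF lies below everywhere
          y_dom += np.all(Fy <= Fx)

      print("P(X dominates Y)     :", x_dom / n_draws)
      print("P(Y dominates X)     :", y_dom / n_draws)
      print("P(neither dominates) :", 1 - (x_dom + y_dom) / n_draws)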
  12. By: Gholamreza Hajargasht (Department of Economics, University of Melbourne); William E. Griffiths (Department of Economics, University of Melbourne)
    Abstract: The Lorenz curve, introduced more than 100 years ago, is still one of the main tools in poverty and inequality analysis. International institutions such as the World Bank collect and publish grouped income data in the form of population and income shares for a large number of countries. These data are often used for estimation of parametric Lorenz curves which in turn form the basis for most poverty and inequality analyses. Despite the prevalence of parametric estimation of Lorenz curves from grouped data, and the existence of well-developed nonparametric methods, a rigorous statistical foundation for estimating parametric Lorenz curves has not been provided. In this paper we propose a sound statistical framework for making inference about parametric Lorenz curves for both grouped and individual data. Building on two data generating mechanisms, efficient methods of estimation and inference are proposed and a number of results useful for comparing the two methods of inference, and aiding computation, are derived. Simulations are used to assess the estimators, and curves are estimated for some example countries. We also show how the proposed methods improve upon World Bank methods and make recommendations for improving current practices.
    Keywords: GMM, GB2 Distribution, General Quadratic, Beta Lorenz Curve, Gini Coefficient, Poverty Measures, Quantile Function Estimation
    JEL: C13 C16 D31
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:2022&r=ecm
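    Sketch: one common parametric form, the Beta Lorenz curve L(p) = p - theta*p^gamma*(1-p)^delta, fitted by plain nonlinear least squares to hypothetical grouped shares, with the implied Gini from Gini = 1 - 2*integral of L(p); the paper's GMM-based estimators and their inference theory are not reproduced here.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.integrate import quad

      def beta_lorenz(p, theta, gamma_, delta):
          return p - theta * p**gamma_ * (1 - p)**delta

      # hypothetical grouped data: cumulative population and income shares at the deciles
      p_obs = np.linspace(0.1, 0.9, 9)
      L_obs = np.array([0.02, 0.05, 0.09, 0.15, 0.22, 0.31, 0.42, 0.56, 0.74])

      params, _ = curve_fit(beta_lorenz, p_obs, L_obs, p0=[0.5, 0.9, 0.6],
                            bounds=([0, 0, 0], [1, 1, 1]))
      gini = 1 - 2 * quad(lambda p: beta_lorenz(p, *params), 0, 1)[0]
      print("fitted (theta, gamma, delta):", params, "Gini:", round(gini, 3))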

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.