nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒11‒28
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. The Hodrick-Prescott (HP) Filter as a Bayesian Regression Model By Wolfgang Polasek
  2. The Extended Hodrick-Prescott (HP) Filter for Spatial Regression Smoothing By Wolfgang Polasek
  3. Semiparametric transformation model with endogeneity: a control function approach By Van Keilegom, Ingrid; Vanhems, Anne
  4. Tests for m-dependence Based on Sample Splitting Methods By Seongman Moon; Carlos Velasco
  5. On the Properties of Regression Tests of Asset Return Predictability By Seongman Moon; Carlos Velasco
  6. Asymptotic theory for iterated one-step Huber-skip estimators By Søren Johansen; Bent Nielsen
  7. Global Bahadur representation for nonparametric censored regression quantiles and its applications By Efang Kong; Oliver Linton; Yingcun Xia
  8. Bias Reduction for the Maximum Likelihood Estimator of the Parameters of the Generalized Rayleigh Family of Distributions By David E. Giles; Xiao Ling
  9. Estimating Correlated Jumps and Stochastic Volatilities By Jiří Witzany
  10. Predicting Recessions: A New Approach For Identifying Leading Indicators and Forecast Combinations By Turgut Kisinbay; Chikako Baba
  11. Testing for Collusion in Asymmetric First-Price Auctions By Gaurab Aryal; Maria F. Gabrielli
  12. Mixed fractional Brownian motion, short and long-term Dependence and economic conditions: the case of the S&P-500 Index By Dominique, C-René; Rivera-Solis, Luis Eduardo
  13. A Monte Carlo simulation comparing DEA, SFA and two simple approaches to combine efficiency estimates By Andor, Mark; Hesse, Frederik
  14. Hedonic Prices and Implicit Markets: Estimating Marginal Willingness to Pay for Differentiated Products Without Instrumental Variables By Kelly C. Bishop; Christopher Timmins
  15. Early Warning Indicators of Crisis Incidence: Evidence from a Panel of 40 Developed Countries By Jan Babecký; Tomáš Havránek; Jakub Matějů; Marek Rusnák; Kateřina Šmídková; Bořek Vašíček

  1. By: Wolfgang Polasek (Institute for Advanced Studies, Vienna, Austria; University of Porto, Porto, Portugal)
    Abstract: The Hodrick-Prescott (HP) method is a popular smoothing method for economic time series, used to extract a smooth or long-term component of stationary series such as growth rates. We show that the HP smoother can be viewed as a Bayesian linear model with a strong prior that uses differencing matrices for the smoothness component. The HP smoothing approach requires a linear regression model with a Bayesian conjugate multi-normal-gamma distribution. The Bayesian approach also allows predictions of the HP smoother at both ends of the time series. Furthermore, we show how Bayes tests can determine the order of smoothness in the HP smoothing model. The extended HP smoothing approach is demonstrated on the non-stationary (textbook) airline passenger time series. Thus, the Bayesian extension of the HP model defines a new class of model-based smoothers for (non-stationary) time series and spatial models.
    Keywords: Hodrick-Prescott (HP) smoothers, model selection by marginal likelihoods, multi-normal-gamma distribution, spatial sales growth data, Bayesian econometrics
    JEL: C11 C15 C52 E17 R12
    Date: 2011–11
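Since this issue leads with two HP-filter papers, the smoother's closed form is worth recalling: the HP trend solves a ridge-type problem and equals tau = (I + lambda * D'D)^{-1} y, where D is the second-difference matrix. This is exactly the posterior mean under a smoothness prior with precision lambda * D'D, which is the Bayesian reading the abstract describes. A minimal sketch of the standard HP filter (not the paper's full conjugate multi-normal-gamma setup; lambda = 1600 is the conventional quarterly value):

```python
import numpy as np

def hp_smooth(y, lam=1600.0):
    """HP trend as a ridge/Bayesian posterior mean: tau = (I + lam*D'D)^{-1} y,
    where D is the (n-2) x n second-difference matrix encoding the smoothness prior."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]  # second difference at position i
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

A linear trend has zero second differences, so it passes through the smoother unchanged; a noisy series comes out with strictly smaller second-difference variation.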
  2. By: Wolfgang Polasek (Institute for Advanced Studies, Vienna, Austria; University of Porto, Porto, Portugal)
    Abstract: The Hodrick-Prescott (HP) method is a popular smoothing method for economic time series, used to extract a long-term component of stationary series such as growth rates. The new extended HP smoothing model is applied to data sets with an underlying metric and requires a Bayesian linear regression model with a strong prior based on differencing matrices for the smoothness parameter and a weak prior for the regression part. We define a Bayesian spatial smoothing model with neighbors for each observation, and we define a smoothness prior similar to the HP filter in time series. This opens a new approach to model-based smoothers for time series and spatial models based on MCMC. We apply it to the NUTS-2 regions of the European Union for regional GDP and GDP per capita, where the fixed effects are removed by an extended HP smoothing model.
    Keywords: Hodrick-Prescott (HP) smoothers, smoothed square loss function, spatial smoothing, smoothness prior, Bayesian econometrics
    JEL: C11 C15 C52 E17 R12
    Date: 2011–11
  3. By: Van Keilegom, Ingrid; Vanhems, Anne
    Abstract: We consider a semiparametric transformation model, in which the regression function has an additive nonparametric structure and the transformation of the response is assumed to belong to some parametric family. We suppose that endogeneity is present in the explanatory variables. Using a control function approach, we show that the proposed model is identified under suitable assumptions, and we propose a profile likelihood estimation method for the transformation. The proposed estimator is shown to be asymptotically normal under certain regularity conditions. A small simulation study shows that the estimator behaves well in practice.
    Keywords: Additive models; Control function; Endogeneity; Instrumental variable
    Date: 2011–05–13
  4. By: Seongman Moon (Universidad Carlos III de Madrid); Carlos Velasco (Universidad Carlos III de Madrid)
    Abstract: This paper develops new inference methods for m-dependent data. Our approach is based on sample splitting by regular sampling of original data at lower frequencies, so that standard techniques can be used for independent data in individual subsamples. We then explore several alternatives of aggregation across subsample statistics and investigate their asymptotic and finite sample properties. We apply our methods to nonparametric tests of the predictability of excess returns in the presence of m-dependence. We also illustrate how our serial dependence tests can provide valid information for identifying particular economic alternatives when testing the expectations hypothesis in foreign exchange markets.
    Keywords: m-dependence, sample splitting, pooled method, Wald method, minimum/maximum/median method, expectations hypothesis.
    JEL: C14 F31 F37
    Date: 2011–08
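The sample-splitting device here is easy to state: under m-dependence, observations more than m periods apart are independent, so sampling the series at every (m+1)-th point yields m+1 subsamples of mutually independent observations, each of which can be handled with standard i.i.d. techniques before aggregating. A minimal sketch of the splitting step only (the aggregation schemes listed in the keywords are the paper's own and not reproduced):

```python
import numpy as np

def split_m_dependent(x, m):
    """Split a series into m+1 subsamples by regular sampling at frequency m+1.
    Under m-dependence, each subsample consists of mutually independent observations."""
    x = np.asarray(x)
    return [x[j::m + 1] for j in range(m + 1)]
```

For example, with m = 1 a series of length 10 splits into the even- and odd-indexed observations, and every observation lands in exactly one subsample.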
  5. By: Seongman Moon (Universidad Carlos III de Madrid); Carlos Velasco (Universidad Carlos III de Madrid)
    Abstract: This paper investigates, both in finite samples and asymptotically, statistical inference on predictive regressions where time series are generated by present value models of asset prices. We show that regression-based tests, including robust tests such as the robust conditional test and the Q-test, are inconsistent and thus suffer from a lack of power in local-to-unity models for the regressor persistence. The main reason is that the near-integrated regressor from the present value model slows down the convergence rates of the estimates, an effect which is masked in predictive regression analysis with an exogenous constant covariance of innovations. We illustrate these properties in a simulation study and analyze the predictability of several stock returns series.
    Keywords: present value model, predictive regression, local-to-unity assumption, conditional test, Q-test, t-test.
    JEL: C12 C22 G1
    Date: 2011–08
  6. By: Søren Johansen (University of Copenhagen and CREATES); Bent Nielsen (Department of Economics, University of Oxford)
    Abstract: Iterated one-step Huber-skip M-estimators are considered for regression problems. Each one-step estimator is a reweighted least squares estimator with zero/one weights determined by the initial estimator and the data. The asymptotic theory is given for the iteration of such estimators using a tightness argument. The results apply to stationary as well as non-stationary regression problems.
    Keywords: Huber-skip, iteration, one-step M-estimators, unit roots.
    JEL: C32
    Date: 2011–11–16
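The one-step estimator described above has a simple mechanical core: given an initial estimate, observations whose residuals fall outside a cutoff get weight zero, the rest get weight one, and OLS is refit on the retained rows; iterating repeats this with the updated estimate. A minimal sketch (the cutoff constant and the naive scale estimator here are illustrative choices, not the paper's):

```python
import numpy as np

def one_step_huber_skip(X, y, beta0, c=2.576):
    """One-step Huber-skip update: keep observations whose residuals under the
    current estimate are within c * sigma_hat, then refit OLS on the kept rows."""
    r = y - X @ beta0
    sigma = np.std(r)  # crude scale estimate, for illustration only
    keep = np.abs(r) <= c * sigma
    beta1, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta1

def iterate_huber_skip(X, y, beta0, steps=5):
    """Iterate the one-step estimator from an initial (e.g. full-sample OLS) estimate."""
    b = beta0
    for _ in range(steps):
        b = one_step_huber_skip(X, y, b)
    return b
```

On data contaminated with gross outliers, a few iterations typically move the estimate from the outlier-distorted OLS solution toward the clean-sample fit.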
  7. By: Efang Kong; Oliver Linton (Institute for Fiscal Studies and Cambridge University); Yingcun Xia
    Abstract: This paper is concerned with the nonparametric estimation of regression quantiles where the response variable is randomly censored. Using results on the strong uniform convergence of U-processes, we derive a global Bahadur representation for the weighted local polynomial estimators, which is sufficiently accurate for many further theoretical analyses, including inference. We consider two applications in detail: estimation of the average derivative, and estimation of the component functions in additive quantile regression models.
    Date: 2011–11
  8. By: David E. Giles (Department of Economics, University of Victoria); Xiao Ling
    Abstract: We derive analytic expressions for the biases, to O(n^-1), of the maximum likelihood estimators of the parameters of the generalized Rayleigh distribution family. Using these expressions to bias-correct the estimators is found to be extremely effective in terms of bias reduction, and generally results in a small reduction in relative mean squared error. In general, the analytic bias-corrected estimators are also found to be superior to the alternative of bias correction via the bootstrap.
    Keywords: Generalized Rayleigh distribution; maximum likelihood; bias; mean squared error; bias correction
    JEL: C13 C15 C46
    Date: 2011–11–17
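The mechanics of analytic (Cox-Snell-type) bias correction are generic: derive the O(n^-1) bias b(theta)/n of the MLE, then report theta_hat - b(theta_hat)/n. The generalized Rayleigh expressions are this paper's contribution and are not reproduced here; as a stand-in, a minimal sketch using the exponential-rate MLE, whose bias is known exactly (E[lambda_hat] = n*lambda/(n-1), so the O(1/n) bias is approximately lambda/n):

```python
import numpy as np

def mle_rate(x):
    """MLE of the rate of an exponential distribution: lambda_hat = 1 / mean(x)."""
    return 1.0 / np.mean(x)

def bias_corrected_rate(x):
    """Analytic bias correction: subtract the estimated O(1/n) bias, lambda_hat / n."""
    n = len(x)
    lam = mle_rate(x)
    return lam - lam / n
```

Averaged over repeated small samples, the corrected estimator sits much closer to the true rate than the raw MLE, which is the pattern the abstract reports for the generalized Rayleigh family.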
  9. By: Jiří Witzany (University of Economics, Prague, Czech Republic)
    Abstract: We formulate a bivariate stochastic volatility jump-diffusion model with correlated jumps and volatilities. An MCMC Metropolis-Hastings sampling algorithm is proposed to estimate the model’s parameters and latent state variables (jumps and stochastic volatilities) given observed returns. The methodology is successfully tested on several artificially generated bivariate time series and then on the two most important Czech domestic financial market time series, the FX (CZK/EUR) and stock (PX index) returns. Four bivariate models with and without jumps and/or stochastic volatility are compared using the deviance information criterion (DIC), confirming the importance of incorporating jumps and stochastic volatility into the model.
    Keywords: jump-diffusion, stochastic volatility, MCMC, Value at Risk, Monte Carlo
    JEL: C11 C15 G1
    Date: 2011–11
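The estimation engine here is a Metropolis-Hastings sampler over parameters and latent states; whatever the target, the accept/reject core is the same. A minimal random-walk sketch for an arbitrary log-posterior (the paper's bivariate jump-diffusion posterior is of course far more structured, and the step size and iteration count below are illustrative):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose theta' = theta + step * N(0, I),
    accept with probability min(1, exp(log_post(theta') - log_post(theta)))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    draws = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject in log space
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws
```

Run against a standard normal log-posterior, the post-burn-in draws recover the target's mean and standard deviation, which is the basic sanity check for any such sampler.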
  10. By: Turgut Kisinbay; Chikako Baba
    Abstract: This study proposes a data-based algorithm to select a subset of indicators from a large data set with a focus on forecasting recessions. The algorithm selects leading indicators of recessions based on the forecast encompassing principle and combines the forecasts. An application to U.S. data shows that forecasts obtained from the algorithm are consistently among the best in a large comparative forecasting exercise at various forecasting horizons. In addition, the selected indicators are reasonable and consistent with the standard leading indicators followed by many observers of business cycles. The suggested algorithm has several advantages, including wide applicability and objective variable selection.
    Keywords: Business cycles, Economic forecasting, Economic indicators, Economic recession, Forecasting models, United States
    Date: 2011–10–13
  11. By: Gaurab Aryal; Maria F. Gabrielli
    Abstract: This paper proposes fully nonparametric tests to detect possible collusion in first-price procurement auctions. The aim is to detect possible collusion before knowing whether or not bidders are colluding; we therefore do not rely on data from anti-competitive hearings, and in that sense the tests are 'ex-ante'. We propose a two-step (model selection) procedure. First, we use a reduced-form test of independence and symmetry to shortlist bidders whose bidding behavior is at odds with competitive bidding. Second, since the recovered (latent) cost for these bidders must be higher under collusion than under competition (because collusion dampens competition), detecting collusion boils down to testing whether the estimated cost distribution under collusion first-order stochastically dominates that under competition. We propose rank-based and Kolmogorov-Smirnov (K-S) tests. We implement the tests on highway procurement data from California and conclude that there is no evidence of collusion even though the reduced-form test supports collusion.
    JEL: C1 C4 C7 D4 L4
    Date: 2011–11
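The stochastic-dominance step in this paper can be sketched directly: distribution F_a first-order stochastically dominates F_b when F_a(x) <= F_b(x) for all x, and a one-sided K-S statistic measures the largest violation of that inequality. A minimal sketch of the statistic on two empirical samples (critical values would come from a bootstrap or the K-S distribution, not shown, and the function names are illustrative):

```python
import numpy as np

def one_sided_ks(sample_a, sample_b):
    """One-sided K-S statistic sup_x (F_a(x) - F_b(x)) over the pooled sample.
    A value of zero is consistent with F_a <= F_b everywhere, i.e. with
    sample_a first-order stochastically dominating sample_b."""
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    Fa = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    Fb = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return np.max(Fa - Fb)
```

Shifting a sample upward makes it dominate the original: the statistic is zero in the dominating direction and strictly positive in the reverse direction.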
  12. By: Dominique, C-René; Rivera-Solis, Luis Eduardo
    Abstract: The Kolmogorov-Mandelbrot-van Ness process is a zero-mean Gaussian process indexed by the Hurst parameter (H). When it models financial data, a controversy arises as to whether financial data exhibit short- or long-range dependence. This paper argues that the mixed fractional Brownian motion is a more suitable tool for the purpose, as it leaves no room for controversy. It is used here to model the S&P-500 Index, sampled daily over the period 1950-2011. The main results are as follows: the S&P-500 Index is characterized by both short- and long-term dependence. More explicitly, it is characterized by at least 12 distinct scaling parameters that are, ex hypothesi, determined by investors’ approach to the market. When the market is dominated by “blue-chippers” or “long-termists”, or when bubbles are ongoing, the index is persistent; when the market is dominated by “contrarians”, the index jumps to anti-persistence, a far-from-equilibrium state in which market crashes are likely to occur.
    Keywords: Gaussian processes; mixed fractional Brownian motion; Hurst exponent; local self-similarity; persistence; anti-persistence; definiteness of covariance functions; dissipative dynamic systems
    JEL: C32 D53 C16 C02
    Date: 2011–10–20
  13. By: Andor, Mark; Hesse, Frederik
    Abstract: In certain circumstances, both researchers and policy makers are faced with the challenge of determining individual efficiency scores for each decision making unit (DMU) under consideration. In this study, we use a Monte Carlo experiment to analyze the optimal approach to determining individual efficiency scores. Our first research objective is a systematic comparison of the two most popular estimation methods, data envelopment analysis (DEA) and stochastic frontier analysis (SFA). Accordingly, we extend the existing comparisons in several ways. We are thus able to identify the factors which influence the performance of the methods and give additional information about the reasons for performance variation. Furthermore, we indicate specific situations in which one estimation technique proves superior. As neither method is superior in all respects, in real-world applications, such as energy incentive regulation systems, it is regarded as best practice to combine the estimates obtained from DEA and SFA. Hence, in a second step, we compare approaches for combining the two sets of estimates into efficiency scores with the elementary estimates of the two methods. Our results demonstrate that combination approaches can indeed constitute best practice for estimating precise efficiency scores.
    Keywords: efficiency,data envelopment analysis,stochastic frontier analysis,simulation,regulation
    JEL: C1 C5 D2 L5 Q4
    Date: 2011
  14. By: Kelly C. Bishop; Christopher Timmins
    Abstract: The hedonic model of Rosen (1974) has become a workhorse for valuing the characteristics of differentiated products despite a number of well-documented econometric problems. For example, Bartik (1987) and Epple (1987) each describe a source of endogeneity in the second stage of Rosen's procedure that has proven difficult to overcome. In this paper, we propose a new approach for recovering the marginal willingness-to-pay function that altogether avoids these endogeneity problems. Applying this estimator to data on large changes in violent crime rates, we find that marginal willingness-to-pay increases by ten cents with each additional violent crime per 100,000 residents.
    JEL: Q51 R0
    Date: 2011–11
  15. By: Jan Babecký (Czech National Bank); Tomáš Havránek (Czech National Bank); Jakub Matějů (Czech National Bank); Marek Rusnák (Czech National Bank); Kateřina Šmídková (Czech National Bank); Bořek Vašíček (Czech National Bank)
    Abstract: We provide a critical review of the literature on early warning indicators of economic crises and propose methods to overcome several pitfalls of previous contributions. We use a quarterly panel of 40 EU and OECD countries for the period 1970–2010. As the response variable, we construct a continuous index of crisis incidence capturing the real costs to the economy. As potential warning indicators, we evaluate a wide range of variables, selected according to the previous literature and our own considerations. For each potential indicator we determine the optimal lead by employing panel vector autoregression; we then select useful indicators by employing Bayesian model averaging. We re-estimate the resulting specification by system GMM to account for the potential endogeneity of some indicators. Subsequently, to allow for country heterogeneity, we evaluate the random coefficients estimator and illustrate the stability among endogenous clusters. Our results suggest that global variables rank among the most useful early warning indicators. In addition, housing prices emerge consistently as an important domestic source of risk.
    Keywords: Early warning indicators, Bayesian model averaging, panel VAR, dynamic panel, macro-prudential policies.
    JEL: C33 E44 E58 F47
    Date: 2011–11

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.