nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒05‒22
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Weighted pairwise likelihood estimation for a general class of random effects models By Vassilis G. S. Vasdekis; Dimitris Rizopoulos; Irini Moustaki
  2. Quantile forecasts of inflation under model uncertainty By Korobilis, Dimitris
  3. A Two-Step Estimator for Missing Values in Probit Model Covariates By Laitila, Thomas; Wang, Lisha
  4. Identification based on Difference-in-Differences Approaches with Multiple Treatments By Fricke, Hans
  5. Baxter's Inequality and Sieve Bootstrap for Random Fields By Meyer, Marco; Jentsch, Carsten; Kreiss, Jens-Peter
  6. Estimating rational stock-market bubbles with sequential Monte Carlo methods By Benedikt Rotermann; Bernd Wilfling
  7. Combining Country-Specific Forecasts when Forecasting Euro Area Macroeconomic Aggregates By Jing Zeng
  8. Testing for First Order Serial Correlation in Temporally Aggregated Regression Models By Helson C. Braga; William G. Tyler
  9. Nonparametric Instrumental Variable Estimation of Binary Response Models By Samuele Centorrino; Jean-Pierre Florens
  10. The efficiency of Anderson-Darling test with limited sample size: an application to Backtesting Counterparty Credit Risk internal model By M. Formenti; L. Spadafora; M. Terraneo; F. Ramponi
  11. Average Wage Gaps and Oaxaca-Blinder Decompositions By Sloczynski, Tymon
  12. Modelling and Estimating Individual and Firm Effects with Count Panel Data By Jean-François Angers; Denise Desjardins; Georges Dionne; François Guertin
  13. Endogenous Censoring in the Mixed Proportional Hazard Model with an Application to Optimal Unemployment Insurance By Arkadiusz Szydlowski
  14. International Sign Predictability of Stock Returns: The Role of the United States By Henri Nyberg; Harri Pönkä
  15. FloGARCH: Realizing long memory and asymmetries in returns volatility By Harry Vander Elst

  1. By: Vassilis G. S. Vasdekis; Dimitris Rizopoulos; Irini Moustaki
    Abstract: Models with random effects/latent variables are widely used for capturing unobserved heterogeneity in multilevel/hierarchical data and account for associations in multivariate data. The estimation of those models becomes cumbersome as the number of latent variables increases due to high-dimensional integrations involved. Composite likelihood is a pseudo-likelihood that combines lower-order marginal or conditional densities such as univariate and/or bivariate; it has been proposed in the literature as an alternative to full maximum likelihood estimation. We propose a weighted pairwise likelihood estimator based on estimates obtained from separate maximizations of marginal pairwise likelihoods. The derived weights minimize the total variance of the estimated parameters. The proposed weighted estimator is found to be more efficient than the one that assumes all weights to be equal. The methodology is applied to a multivariate growth model for binary outcomes in the analysis of four indicators of schistosomiasis before and after drug administration.
    Keywords: categorical data; composite likelihood; generalized linear latent variable models; longitudinal data
    JEL: C1
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:56733&r=ecm
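[Editor's sketch] The variance-minimizing weighting idea can be illustrated in miniature: given several independent estimates of the same parameter (here standing in for separate pairwise-likelihood fits), inverse-variance weights minimize the variance of the combination. All numbers below are hypothetical; this is not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical pairwise-likelihood fits, each yielding an estimate
# of the same scalar parameter (true value 2.0) with a known variance.
true_theta = 2.0
variances = np.array([0.04, 0.10, 0.25])
estimates = true_theta + rng.normal(0.0, np.sqrt(variances))

def combined_variance(w, v):
    """Variance of a weighted sum of independent estimators."""
    return float(np.sum(w ** 2 * v))

# Equal weights vs. variance-minimizing (inverse-variance) weights.
w_equal = np.full(3, 1.0 / 3.0)
w_opt = (1.0 / variances) / np.sum(1.0 / variances)

theta_equal = float(w_equal @ estimates)
theta_opt = float(w_opt @ estimates)
```

The inverse-variance combination never has larger variance than the equal-weight one, which is the efficiency gain the abstract reports.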
  2. By: Korobilis, Dimitris
    Abstract: Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce Bayesian model averaging methods in quantile regressions, and allow for different predictors to affect different quantiles of the dependent variable. I show that quantile regression BMA methods can help reduce uncertainty regarding outcomes of future inflation by providing superior predictive densities compared to mean regression models with and without BMA.
    Keywords: Bayesian model averaging; quantile regression; inflation forecasts; fan charts
    JEL: C11 C22 C52
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:64341&r=ecm
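[Editor's sketch] Quantile regression minimizes the pinball (check) loss, and averaging over candidate predictor sets can be mimicked with in-sample-loss-based weights. The weighting rule below is a crude stand-in for proper BMA posterior model probabilities, not the paper's method; the data and models are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def check_loss(u, tau):
    """Pinball (check) loss that defines the tau-th quantile regression."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

# Simulated data: y depends on x1; x2 is an irrelevant candidate predictor.
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def fit_qr(x, y, tau):
    """Fit a linear quantile regression by minimizing mean check loss."""
    X = np.column_stack([np.ones(len(y)), x])
    res = minimize(lambda b: np.mean(check_loss(y - X @ b, tau)),
                   np.zeros(2), method="Nelder-Mead")
    return res.x, res.fun

tau = 0.5
_, loss1 = fit_qr(x1, y, tau)
_, loss2 = fit_qr(x2, y, tau)

# Stand-in for BMA model weights: exponentiated, rescaled in-sample losses.
losses = np.array([loss1, loss2])
w = np.exp(-n * (losses - losses.min()))
w /= w.sum()
```

The relevant predictor receives essentially all the weight, which is the mechanism by which model averaging sharpens the predictive density at each quantile.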
  3. By: Laitila, Thomas (Örebro University School of Business); Wang, Lisha (Örebro University School of Business)
    Abstract: This paper includes a simulation study on the bias and MSE properties of a two-step probit model estimator for handling missing values in covariates by conditional imputation. In one smaller simulation it is compared with an asymptotically efficient estimator, and in a larger one it is compared with the probit ML on complete cases after listwise deletion. The simulation results favor the use of the two-step probit estimator and motivate further development of the methodology.
    Keywords: binary variable; imputation; OLS; heteroskedasticity
    JEL: C00
    Date: 2015–04–27
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2015_003&r=ecm
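[Editor's sketch] The two-step idea can be sketched as: (1) regress the incomplete covariate on the complete ones among fully observed cases and impute its conditional mean, then (2) run probit ML on the filled-in sample. This is a simplified illustration, not the paper's exact estimator; the DGP and missingness rate are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)               # covariate, partly missing
y = (-0.2 + 1.0 * x1 + 0.8 * x2 + rng.normal(size=n) > 0).astype(float)

# Missingness in x2 for roughly 30% of units.
miss = rng.random(n) < 0.3
x2_obs = np.where(miss, np.nan, x2)

# Step 1: conditional imputation -- regress x2 on x1 among complete cases.
A = np.column_stack([np.ones(int((~miss).sum())), x1[~miss]])
coef, *_ = np.linalg.lstsq(A, x2_obs[~miss], rcond=None)
x2_imp = np.where(miss, coef[0] + coef[1] * x1, x2_obs)

# Step 2: probit ML on all n observations using the imputed covariate.
X = np.column_stack([np.ones(n), x1, x2_imp])

def negloglik(b):
    p = norm.cdf(X @ b).clip(1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

beta_hat = minimize(negloglik, np.zeros(3), method="BFGS").x
```

Mean imputation in a nonlinear model generally attenuates coefficients, which is exactly the kind of bias/MSE trade-off the simulation study quantifies.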
  4. By: Fricke, Hans
    Abstract: This paper discusses identification based on difference-in-differences (DiD) approaches with multiple treatments. It shows that an appropriate adaptation of the common trend assumption underlying the DiD strategy for the comparison of two treatments restricts the possibility of effect heterogeneity for at least one of the treatments. The required assumption of effect homogeneity is likely to be violated because of non-random assignment to treatment based on both observables and unobservables. However, this paper shows that, under certain conditions, the DiD estimate comparing two treatments identifies a lower bound in absolute values on the average treatment effect on the treated compared to the unobserved non-treatment state, even if effect homogeneity is violated. This is possible if, in expectation, the effects of both treatments compared to no treatment have the same sign, and one treatment has a stronger effect than the other treatment on the respective recipients. Such assumptions are plausible if treatments are ordered or vary in intensity.
    Keywords: Policy evaluation, partial identification, heterogeneous treatment effects
    JEL: C21 C23
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2015:10&r=ecm
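[Editor's sketch] The lower-bound logic is simple arithmetic on group means. All numbers below are hypothetical:

```python
# Hypothetical pre/post outcome means for recipients of two treatments.
pre_A, post_A = 10.0, 15.0   # treatment A recipients
pre_B, post_B = 10.5, 13.0   # treatment B recipients

# DiD comparing the two treatments: change for A minus change for B.
did_A_vs_B = (post_A - pre_A) - (post_B - pre_B)  # change-in-changes

# Under the paper's conditions -- both effects relative to no treatment
# share the same sign, and A's effect is at least as strong as B's on
# their respective recipients -- |did_A_vs_B| bounds from below the
# absolute ATT of A relative to the unobserved no-treatment state.
```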
  5. By: Meyer, Marco; Jentsch, Carsten; Kreiss, Jens-Peter
    Abstract: The concept of the autoregressive (AR) sieve bootstrap is investigated for the case of spatial processes in Z2. This procedure fits AR models of increasing order to the given data and, via resampling of the residuals, generates bootstrap replicates of the sample. The paper explores the range of validity of this resampling procedure and provides a general check criterion which allows one to decide whether the AR sieve bootstrap works asymptotically for a specific statistic of interest or not. The criterion may be applied to a large class of stationary spatial processes. As another major contribution of this paper, a weighted Baxter inequality for spatial processes is provided. Under mild conditions, this result yields a rate of convergence of the finite predictor coefficients, i.e. the coefficients of finite-order AR model fits, towards the autoregressive coefficients inherent to the underlying process. The developed check criterion is applied to some particularly interesting statistics like sample autocorrelations and standardized sample variograms. A simulation study shows that the procedure performs very well compared to normal approximations as well as block bootstrap methods in finite samples.
    Keywords: Autoregression, bootstrap, random fields
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:mnh:wpaper:38793&r=ecm
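[Editor's sketch] The paper treats random fields on Z2; the one-dimensional time-series version of the AR sieve bootstrap conveys the mechanics: fit an AR(p), resample the centered residuals, and regenerate the series from the fitted recursion. The AR order and data below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) series to act as the observed data.
n, phi = 500, 0.6
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def ar_sieve_bootstrap(x, p, rng):
    """One AR(p) sieve bootstrap replicate of the series x."""
    n = len(x)
    y = x[p:]
    # Lag matrix: column k holds x[t-k-1] for t = p, ..., n-1.
    X = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    resid = resid - resid.mean()               # center the residuals
    boot = list(x[:p])                         # initialize with observed values
    for e in rng.choice(resid, size=n - p, replace=True):
        boot.append(sum(a[k] * boot[-k - 1] for k in range(p)) + e)
    return np.asarray(boot)

xb = ar_sieve_bootstrap(x, p=5, rng=rng)
```

Repeating the last line many times yields the bootstrap distribution of any statistic computed on `xb`, which is what the paper's check criterion validates (or not) statistic by statistic.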
  6. By: Benedikt Rotermann; Bernd Wilfling
    Abstract: Considering the present-value stock-price model, we propose a new rational parametric bubble specification that is able to generate periodically recurring and stochastically deflating trajectories. Our bubble model is empirically more plausible than its predecessor variants and has neatly interpretable parameters. We transform our entire stock-price-bubble framework into a nonlinear state-space form and implement a fully-fledged estimation framework based on sequential Monte Carlo methods. This particle-filtering approach, originally stemming from the engineering literature, enables us (a) to obtain accurate parameter estimates, and (b) to reveal the (unobservable) trajectories of arbitrary rational bubble specifications. We fit our new bubble process to artificial and real-world data and demonstrate how to use parameter estimates to compare important characteristics of historical bubbles having emerged in different stock-markets with each other.
    Keywords: Present-value model, rational bubble, nonlinear state-space model, particle-filter estimation, EM algorithm
    JEL: C15 C32 C58 G10 G12
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:4015&r=ecm
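[Editor's sketch] A bootstrap particle filter, the workhorse behind this kind of sequential Monte Carlo estimation, alternates propagation through the state equation, reweighting by the observation density, and resampling. The linear-Gaussian toy model below is an assumption purely for illustration; the paper applies the filter to a nonlinear bubble model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy state-space model: random-walk state observed with Gaussian noise.
T, sig_state, sig_obs = 100, 0.5, 1.0
state = np.cumsum(rng.normal(0, sig_state, T))
obs = state + rng.normal(0, sig_obs, T)

def particle_filter(obs, n_part, sig_state, sig_obs, rng):
    """Bootstrap particle filter: propagate, weight, resample."""
    particles = rng.normal(0, 1, n_part)
    means = np.empty(len(obs))
    for t, y in enumerate(obs):
        # Propagate each particle through the state equation.
        particles = particles + rng.normal(0, sig_state, n_part)
        # Weight by the observation density (log-scale for stability).
        logw = -0.5 * ((y - particles) / sig_obs) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = float(np.sum(w * particles))
        # Multinomial resampling.
        particles = rng.choice(particles, size=n_part, p=w)
    return means

filtered = particle_filter(obs, 2000, sig_state, sig_obs, rng)
```

The filtered means track the unobserved state more closely than the raw observations do, which is the sense in which the filter "reveals" latent trajectories such as a bubble component.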
  7. By: Jing Zeng (Department of Economics, University of Konstanz, Germany)
    Abstract: European Monetary Union (EMU) member countries' forecasts are often combined to obtain the forecasts of the Euro area macroeconomic aggregate variables. The aggregation weights which are used to produce the aggregates are often considered as combination weights. This paper investigates whether using different combination weights instead of the usual aggregation weights can help to provide more accurate forecasts. In this context, we examine the performance of equal weights, the least squares estimators of the weights, the combination method recently proposed by Hyndman et al. (2011) and the weights suggested by shrinkage methods. We find that some variables like real GDP and GDP deflator can be forecasted more precisely by using flexible combination weights. Furthermore, combining only forecasts of the three largest European countries helps to improve the forecasting performance. The persistence of the individual data seems to play an important role for the relative performance of the combination.
    Keywords: Forecast combination, aggregation, macroeconomic forecasting, hierarchical time series, persistence in data
    JEL: C22 C43 C53
    Date: 2015–05–13
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1511&r=ecm
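[Editor's sketch] The contrast between fixed aggregation-style weights and estimated combination weights can be shown in a few lines. The forecasters and split below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two component "forecasts" of an aggregate target, one more accurate.
T = 200
target = rng.normal(size=T)
f1 = target + rng.normal(0, 0.3, T)   # accurate forecaster
f2 = target + rng.normal(0, 1.0, T)   # noisy forecaster
F = np.column_stack([f1, f2])

# Fixed equal weights vs. least-squares weights fit on a training window.
w_fixed = np.array([0.5, 0.5])
train, test = slice(0, 150), slice(150, None)
w_ls, *_ = np.linalg.lstsq(F[train], target[train], rcond=None)

def mse(w, F, y):
    return float(np.mean((F @ w - y) ** 2))

mse_fixed = mse(w_fixed, F[test], target[test])
mse_ls = mse(w_ls, F[test], target[test])
```

When component accuracy differs, estimated weights beat fixed aggregation weights out of sample, the pattern the paper documents for real GDP and the GDP deflator.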
  8. By: Helson C. Braga; William G. Tyler
    Abstract: This paper shows that the LM statistic for testing first-order serial correlation in regression models can be computed using the Kalman filter. It is shown that when there are missing observations, the LM statistic for this test is equivalent to the test statistic derived by Robinson (1985) using the likelihood conditional on the observation times. The Kalman filter approach is preferable because the test statistic for first-order serial correlation in temporally aggregated regression models can be obtained as an extension of the previous case.
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:ipe:ipetds:0014&r=ecm
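[Editor's sketch] For the complete-data case, the standard LM (Breusch-Godfrey-style) test for first-order serial correlation regresses OLS residuals on their own lag plus the regressors and uses n·R² as the statistic. This is the textbook full-sample version, not the Kalman-filter/temporal-aggregation extension the paper develops; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(10)

# Regression with AR(1) errors (rho = 0.5), so the null should be rejected.
n = 300
x = rng.normal(size=n)
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

# OLS residuals.
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ b

# Auxiliary regression of u_t on (const, x_t, u_{t-1}); LM = (n-1) * R^2,
# asymptotically chi-squared with 1 df under the null of no correlation.
Z = np.column_stack([X[1:], u[:-1]])
g, *_ = np.linalg.lstsq(Z, u[1:], rcond=None)
resid = u[1:] - Z @ g
r2 = 1.0 - np.sum(resid ** 2) / np.sum((u[1:] - u[1:].mean()) ** 2)
lm = (n - 1) * r2
```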
  9. By: Samuele Centorrino; Jean-Pierre Florens
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:nys:sunysb:14-07&r=ecm
  10. By: M. Formenti; L. Spadafora; M. Terraneo; F. Ramponi
    Abstract: This work presents a theoretical and empirical evaluation of the Anderson-Darling test when the sample size is limited. The test can be applied to backtest the risk-factor dynamics in the context of Counterparty Credit Risk modelling. We show the limits of such a test when backtesting the distributions of an interest rate model over long time horizons, and we propose a modified version of the test that detects an underestimation of the model's volatility more efficiently. Finally, we provide an empirical application.
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1505.04593&r=ecm
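[Editor's sketch] The classical Anderson-Darling statistic against a fully specified distribution is straightforward to compute; the small-sample sizes and the volatility-underestimation scenario below mirror the backtesting setting, though the modification the paper proposes is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def anderson_darling(sample, cdf):
    """Anderson-Darling statistic of `sample` against a fully specified CDF."""
    n = len(sample)
    u = np.clip(cdf(np.sort(sample)), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    # A^2 = -n - (1/n) * sum (2i-1) [ln u_(i) + ln(1 - u_(n+1-i))]
    return float(-n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1]))))

rng = np.random.default_rng(6)
good = rng.normal(size=50)             # limited sample drawn from the model
bad = rng.normal(0, 2.0, size=50)      # model underestimates volatility

a2_good = anderson_darling(good, norm.cdf)
a2_bad = anderson_darling(bad, norm.cdf)
```

A volatility-underestimating model pushes probability mass into the tails of the PIT values, inflating the statistic, which is the detection problem the paper studies at small n.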
  11. By: Sloczynski, Tymon (Warsaw School of Economics)
    Abstract: In this paper I develop a new version of the Oaxaca–Blinder decomposition whose unexplained component recovers a parameter which I refer to as the average wage gap. Under a particular conditional independence assumption, this estimand is equivalent to the average treatment effect (ATE). I also provide treatment-effects reinterpretations of the Reimers, Cotton, and Fortin decompositions as well as estimate average wage gaps, average wage gains for men, and average wage losses for women in the United Kingdom. Conditional wage gaps increase across the wage distribution and therefore, on average, male gains are larger than female losses.
    Keywords: decomposition methods, gender wage gaps, glass ceilings, treatment effects
    JEL: C21 J31 J71
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9036&r=ecm
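[Editor's sketch] A two-fold Oaxaca-Blinder decomposition splits a mean gap into an explained part (covariate differences) and an unexplained part (coefficient differences). The data and reference-coefficient choice below are illustrative assumptions, not the paper's reinterpretation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated "wages" for two groups with different covariate means and
# different returns to the covariate -- purely illustrative numbers.
n = 2000
x_a = rng.normal(1.0, 1.0, n)
x_b = rng.normal(0.5, 1.0, n)
y_a = 1.0 + 0.8 * x_a + rng.normal(0, 0.5, n)
y_b = 0.7 + 0.6 * x_b + rng.normal(0, 0.5, n)

def ols(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

b_a, b_b = ols(x_a, y_a), ols(x_b, y_b)
xbar_a = np.array([1.0, x_a.mean()])
xbar_b = np.array([1.0, x_b.mean()])

# Two-fold decomposition with group B's coefficients as the reference.
gap = y_a.mean() - y_b.mean()
explained = float((xbar_a - xbar_b) @ b_b)    # covariate differences
unexplained = float(xbar_a @ (b_a - b_b))     # coefficient differences
```

Because OLS with an intercept fits group means exactly, the two pieces sum to the raw gap; the paper's contribution is to show which treatment-effect parameter the unexplained piece recovers under a conditional independence assumption.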
  12. By: Jean-François Angers; Denise Desjardins; Georges Dionne; François Guertin
    Abstract: In this article, we propose a new parametric model for the modelling and estimation of accident distributions for drivers working in fleets of vehicles. The analysis uses panel data and takes into account individual and fleet effects in a non-linear model. Our sample contains more than 456,000 observations of vehicles and 87,000 observations of fleets. Non-observable factors are treated as random effects. The distribution of accidents is affected by both observable and non-observable factors from drivers, vehicles and fleets. Past experience of both individual drivers and individual fleets is highly significant in explaining road accidents. Unobservable factors are also significant, which means that insurance pricing should take both observable and unobservable factors into account when predicting the rate of road accidents under asymmetric information.
    Keywords: Accident distributions, drivers in fleet of vehicles, individual effect, firm effect, panel data, Poisson, gamma, Dirichlet, insurance pricing
    JEL: C23 C25 G22
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:1506&r=ecm
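[Editor's sketch] The core mechanism, Poisson counts mixed over a gamma-distributed random effect, produces overdispersion relative to a plain Poisson, which is what individual/firm effects capture in count panel models. The rates and effect distribution below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)

# Poisson accident counts with a gamma individual random effect: marginally
# this yields negative-binomial-type overdispersed counts, the kind of
# unobserved heterogeneity modelled for drivers and fleets.
n_drivers, n_years = 5000, 5
base_rate = 0.1                                          # accidents per year
theta = rng.gamma(shape=2.0, scale=0.5, size=n_drivers)  # mean-1 effect
counts = rng.poisson(base_rate * theta[:, None], size=(n_drivers, n_years))

totals = counts.sum(axis=1)
# Overdispersion: the variance of the totals exceeds their mean, whereas a
# pure Poisson without random effects would have variance equal to the mean.
```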
  13. By: Arkadiusz Szydlowski
    Abstract: In economic duration analysis, it is routinely assumed that the process which led to censoring of the observed duration is independent of unobserved characteristics. The objective of this paper is to examine the sensitivity of parameter estimates to this independence assumption in the context of an economic model of optimal unemployment insurance. We assume a parametric model for the duration of interest and leave the distribution of censoring unrestricted, allowing it to be correlated with both observed and unobserved characteristics. This leads to loss of point-identification. We provide a practical characterization of the identified set with moment inequalities and suggest methods for estimating this set. In particular, we propose a profiled procedure that allows us to build a confidence set for a subvector of the model parameters. We apply this approach to estimate the elasticity of the exit rate from unemployment with respect to the unemployment benefit and find that both positive and negative values of this elasticity are supported by the data. When combined with the welfare formula in Chetty (2008), these estimates do not permit us to put an upper bound on the size of the welfare change due to an increase in the unemployment benefit. We conclude that given the available data alone, one cannot credibly judge if the unemployment benefits in the US are close to the optimal level.
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:15/06&r=ecm
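[Editor's sketch] The flavor of partial identification under unrestricted censoring can be conveyed with worst-case (Manski-style) bounds on a mean duration, rather than the paper's moment-inequality characterization: refusing to model the censoring process turns a point estimate into an interval. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(12)

# Durations known to lie in [0, dmax], right-censored at c.
n, c, dmax = 1000, 5.0, 20.0
d = rng.exponential(4.0, n).clip(0, dmax)    # true (partly unobserved) durations
censored = d > c
obs = np.where(censored, c, d)

# Worst-case bounds on the mean duration:
lower = float(obs.mean())                            # censored spells end at c
upper = float(np.where(censored, dmax, obs).mean())  # censored spells last dmax
```

The true mean necessarily lies between the two bounds, and no assumption on the censoring mechanism can be refuted by the data alone, mirroring the paper's conclusion that the sign of its elasticity is not identified.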
  14. By: Henri Nyberg (University of Helsinki); Harri Pönkä (University of Helsinki and CREATES)
    Abstract: We study the directional predictability of monthly excess stock market returns in the U.S. and ten other markets using univariate and bivariate binary response models. Our main interest is on the potential benefits of predicting the signs of the returns jointly, focusing on the predictive power from the U.S. to foreign markets. We introduce a new bivariate probit model that allows for such a contemporaneous predictive linkage from one market to the other. Our in-sample and out-of-sample forecasting results indicate superior predictive performance of the new model over the competing models by statistical measures and market timing performance, suggesting gradual diffusion of predictive information from the U.S. to the other markets.
    Keywords: Excess stock return, Directional predictability, Bivariate probit model, Market timing
    JEL: C22 G12 G17
    Date: 2015–05–05
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-20&r=ecm
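[Editor's sketch] The data-generating process behind a bivariate probit is two latent equations with correlated Gaussian errors; the correlation creates a contemporaneous linkage between the two binary sign outcomes beyond any shared predictor. All parameters below are invented, and no estimation is performed.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two latent "return" equations with correlated errors generate two binary
# sign outcomes (e.g. U.S. and a foreign market).
n = 10000
x = rng.normal(size=n)                        # shared predictor
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)

y_us = (0.2 + 0.5 * x + eps[:, 0] > 0).astype(int)
y_fx = (0.1 + 0.3 * x + eps[:, 1] > 0).astype(int)

# Positive error correlation makes the joint probability of two positive
# signs exceed the product of the marginal probabilities.
joint_11 = float(np.mean((y_us == 1) & (y_fx == 1)))
indep_11 = float(np.mean(y_us) * np.mean(y_fx))
```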
  15. By: Harry Vander Elst (Université libre de Bruxelles)
    Abstract: We introduce the class of FloGARCH models in this paper. FloGARCH models provide a parsimonious joint model for low frequency returns and realized measures and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performances of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models.
    Keywords: Realized GARCH models, high-frequency data, long memory, realized measures.
    JEL: C22 C53 C58 G17
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:nbb:reswpp:201504-280&r=ecm
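[Editor's sketch] For orientation, the conditional-variance recursion below is a plain GARCH(1,1), the base recursion that Realized GARCH and FloGARCH-type models extend with realized measures and long-memory/leverage components; the parameter values and returns are illustrative assumptions.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = float(np.var(returns))          # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(8)
r = rng.normal(0, 1.0, 500) * 0.01         # toy daily returns (~1% vol)
h = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.9)
```

Realized-measure extensions replace the squared return in this recursion with (or add) a high-frequency volatility estimate, which is where the paper's documented forecasting gains come from.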

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.