Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics, 2015-05-22
Editor: Sune Karlsson

Weighted pairwise likelihood estimation for a general class of random effects models
http://d.repec.org/n?u=RePEc:ehl:lserod:56733&r=ecm
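A toy illustration of the variance-minimizing weighting idea behind this entry (abstract below): combining separately obtained estimates of a scalar parameter by inverse-variance weights. The function name and numbers here are hypothetical, and the paper's actual weights are derived jointly for the full parameter vector of the pairwise likelihood fits.

```python
import numpy as np

def inverse_variance_combine(estimates, variances):
    """Combine separate estimates of the same scalar parameter with
    weights proportional to 1/variance (minimizes the combined variance
    when the estimates are independent)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = np.sum(weights * estimates)
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Three hypothetical pairwise-likelihood estimates of one parameter:
est, var = inverse_variance_combine([1.0, 1.2, 0.9], [0.04, 0.09, 0.16])
# The combined variance is never larger than the smallest input variance,
# so equal weights would be less efficient here.
```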
Models with random effects/latent variables are widely used to capture unobserved heterogeneity in multilevel/hierarchical data and to account for associations in multivariate data. Estimation of these models becomes cumbersome as the number of latent variables increases, owing to the high-dimensional integrations involved. Composite likelihood, a pseudo-likelihood that combines lower-order marginal or conditional densities (univariate and/or bivariate), has been proposed in the literature as an alternative to full maximum likelihood estimation. We propose a weighted pairwise likelihood estimator based on estimates obtained from separate maximizations of marginal pairwise likelihoods. The derived weights minimize the total variance of the estimated parameters. The proposed weighted estimator is found to be more efficient than the one that assumes all weights to be equal. The methodology is applied to a multivariate growth model for binary outcomes in the analysis of four indicators of schistosomiasis before and after drug administration.
Vassilis G. S. Vasdekis, Dimitris Rizopoulos, Irini Moustaki
2014-10
Keywords: categorical data; composite likelihood; generalized linear latent variable models; longitudinal data

Quantile forecasts of inflation under model uncertainty
http://d.repec.org/n?u=RePEc:pra:mprapa:64341&r=ecm
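The building block of the quantile regressions used in this entry is the check (pinball) loss. A minimal numerical sketch with synthetic data follows (not the paper's Bayesian averaging machinery; production quantile regression would use linear programming rather than a general-purpose optimizer):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile-regression 'check' (pinball) loss."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def quantile_fit(y, X, tau):
    """Linear quantile regression by direct minimization of the average
    check loss, warm-started at OLS (a numerical sketch only)."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    objective = lambda beta: np.mean(check_loss(y - X @ beta, tau))
    return minimize(objective, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 400)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, 400)
X = np.column_stack([np.ones_like(x), x])
beta_med = quantile_fit(y, X, tau=0.5)  # median regression, near (1, 2)
```

Fitting the same data at several values of `tau` is what allows different predictors to matter at different quantiles.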
Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce Bayesian model averaging methods into quantile regressions, allowing different predictors to affect different quantiles of the dependent variable. I show that quantile regression BMA methods can help reduce uncertainty regarding future inflation outcomes by providing superior predictive densities compared to mean regression models with and without BMA.
Korobilis, Dimitris
2015-04
Keywords: Bayesian model averaging; quantile regression; inflation forecasts; fan charts

A Two-Step Estimator for Missing Values in Probit Model Covariates
http://d.repec.org/n?u=RePEc:hhs:oruesi:2015_003&r=ecm
This paper reports a simulation study of the bias and MSE properties of a two-step probit model estimator that handles missing values in covariates by conditional imputation. In a smaller simulation it is compared with an asymptotically efficient estimator, and in a larger one it is compared with probit ML on complete cases after listwise deletion. The simulation results favor the use of the two-step probit estimator and motivate further development of the methodology.
Laitila, Thomas; Wang, Lisha
2015-04-27
Keywords: binary variable; imputation; OLS; heteroskedasticity

Identification based on Difference-in-Differences Approaches with Multiple Treatments
http://d.repec.org/n?u=RePEc:usg:econwp:2015:10&r=ecm
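The two-treatment DiD contrast analyzed in this entry can be sketched with toy group means (all numbers hypothetical):

```python
import numpy as np

def did_estimate(y_t1_pre, y_t1_post, y_t2_pre, y_t2_post):
    """DiD contrast between two treatment groups (no pure control group):
    the pre-to-post change of treatment 1 minus that of treatment 2."""
    return (np.mean(y_t1_post) - np.mean(y_t1_pre)) - \
           (np.mean(y_t2_post) - np.mean(y_t2_pre))

# Hypothetical group means: treatment 1 gains 3.0, treatment 2 gains 1.0,
# so the DiD contrast is 2.0.  Under the paper's same-sign/ordering
# conditions this bounds the stronger treatment's effect in absolute value.
gap = did_estimate([10.0], [13.0], [11.0], [12.0])
```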
This paper discusses identification based on difference-in-differences (DiD) approaches with multiple treatments. It shows that an appropriate adaptation of the common trend assumption underlying the DiD strategy for the comparison of two treatments restricts the possibility of effect heterogeneity for at least one of the treatments. The required assumption of effect homogeneity is likely to be violated because of non-random assignment to treatment based on both observables and unobservables. However, this paper shows that, under certain conditions, the DiD estimate comparing two treatments identifies a lower bound in absolute value on the average treatment effect on the treated relative to the unobserved non-treatment state, even if effect homogeneity is violated. This is possible if, in expectation, the effects of both treatments compared to no treatment have the same sign, and one treatment has a stronger effect than the other on the respective recipients. Such assumptions are plausible if treatments are ordered or vary in intensity.
Fricke, Hans
2015-05
Keywords: policy evaluation, partial identification, heterogeneous treatment effects

Baxter's Inequality and Sieve Bootstrap for Random Fields
http://d.repec.org/n?u=RePEc:mnh:wpaper:38793&r=ecm
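A one-dimensional time-series analogue of the AR sieve bootstrap studied in this entry can be sketched as follows (the paper's setting is spatial processes on Z^2; this scalar version with synthetic AR(1) data is only illustrative):

```python
import numpy as np

def ar_sieve_bootstrap(y, p, n_boot, rng):
    """Fit an AR(p) by least squares, then resample the centred residuals
    to generate bootstrap replicates of the series (1-D sieve bootstrap)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Regress y_t on its first p lags (rows are t = p, ..., n-1)
    X = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    resid = resid - resid.mean()            # centre the residuals
    boots = np.empty((n_boot, n))
    for b in range(n_boot):
        e = rng.choice(resid, size=n)       # i.i.d. residual resampling
        yb = np.empty(n)
        yb[:p] = y[:p]                      # initialize with observed values
        for t in range(p, n):
            yb[t] = coef @ yb[t - p:t][::-1] + e[t]
        boots[b] = yb
    return coef, boots

rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(1, n):                       # synthetic AR(1), coefficient 0.6
    y[t] = 0.6 * y[t - 1] + rng.normal()
coef, boots = ar_sieve_bootstrap(y, p=2, n_boot=50, rng=rng)
```

In practice the order p grows with the sample size; the paper's check criterion governs for which statistics this scheme is asymptotically valid.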
The concept of the autoregressive (AR) sieve bootstrap is investigated for the case of spatial processes in Z^2. This procedure fits AR models of increasing order to the given data and, via resampling of the residuals, generates bootstrap replicates of the sample. The paper explores the range of validity of this resampling procedure and provides a general check criterion that allows one to decide whether the AR sieve bootstrap asymptotically works for a specific statistic of interest or not. The criterion may be applied to a large class of stationary spatial processes. As another major contribution, a weighted Baxter inequality for spatial processes is provided. Under mild conditions, this result yields a rate of convergence of the finite-predictor coefficients, i.e. the coefficients of finite-order AR model fits, towards the autoregressive coefficients inherent to the underlying process. The developed check criterion is applied to some particularly interesting statistics such as sample autocorrelations and standardized sample variograms. A simulation study shows that, in finite samples, the procedure performs very well compared to normal approximations as well as block bootstrap methods.
Meyer, Marco; Jentsch, Carsten; Kreiss, Jens-Peter
2015
Keywords: autoregression, bootstrap, random fields

Estimating rational stock-market bubbles with sequential Monte Carlo methods
http://d.repec.org/n?u=RePEc:cqe:wpaper:4015&r=ecm
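The particle-filtering machinery underlying the estimation framework of this entry can be sketched for a deliberately simple linear local-level model rather than the authors' bubble specification (all parameter values hypothetical):

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, sigma_state, sigma_obs, rng):
    """Bootstrap particle filter for a toy local-level state-space model:
    x_t = x_{t-1} + state noise,  y_t = x_t + observation noise."""
    n = len(y)
    particles = rng.normal(0.0, 1.0, n_particles)
    filtered_means = np.empty(n)
    for t in range(n):
        # Propagate each particle through the state equation
        particles = particles + rng.normal(0.0, sigma_state, n_particles)
        # Weight by the observation density N(y_t; x, sigma_obs^2)
        w = np.exp(-0.5 * ((y[t] - particles) / sigma_obs) ** 2)
        w /= w.sum()
        filtered_means[t] = np.sum(w * particles)
        # Multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return filtered_means

rng = np.random.default_rng(2)
true_x = np.cumsum(rng.normal(0.0, 0.5, 200))   # unobserved state
y = true_x + rng.normal(0.0, 0.3, 200)          # noisy observations
means = bootstrap_particle_filter(y, 500, 0.5, 0.3, rng)
```

The filtered means recover the unobservable state path; in the paper the same idea reveals the latent bubble trajectory, with parameter estimation layered on top.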
Considering the present-value stock-price model, we propose a new rational parametric bubble specification that is able to generate periodically recurring and stochastically deflating trajectories. Our bubble model is empirically more plausible than its predecessor variants and has neatly interpretable parameters. We transform the entire stock-price-bubble framework into a nonlinear state-space form and implement a fully fledged estimation framework based on sequential Monte Carlo methods. This particle-filtering approach, originally stemming from the engineering literature, enables us (a) to obtain accurate parameter estimates, and (b) to reveal the (unobservable) trajectories of arbitrary rational bubble specifications. We fit our new bubble process to artificial and real-world data and demonstrate how to use the parameter estimates to compare important characteristics of historical bubbles that emerged in different stock markets.
Benedikt Rotermann, Bernd Wilfling
2015-05
Keywords: present-value model, rational bubble, nonlinear state-space model, particle-filter estimation, EM algorithm

Combining Country-Specific Forecasts when Forecasting Euro Area Macroeconomic Aggregates
http://d.repec.org/n?u=RePEc:knz:dpteco:1511&r=ecm
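The least-squares combination weights examined in this entry amount to regressing the realized aggregate on the country-specific forecasts; a minimal sketch with synthetic data (in the paper the benchmark weights are the fixed aggregation shares instead):

```python
import numpy as np

def ls_combination_weights(individual_forecasts, target):
    """Least squares combination weights: regress the realized target on
    the individual forecasts (no intercept, weights unrestricted)."""
    w, *_ = np.linalg.lstsq(individual_forecasts, target, rcond=None)
    return w

# Toy example: the aggregate is exactly 0.5*f1 + 0.3*f2 + 0.2*f3,
# so least squares should recover those weights.
rng = np.random.default_rng(3)
F = rng.normal(size=(100, 3))
target = F @ np.array([0.5, 0.3, 0.2])
w = ls_combination_weights(F, target)
```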
Forecasts for European Monetary Union (EMU) member countries are often combined to obtain forecasts of euro area macroeconomic aggregate variables, and the aggregation weights used to produce the aggregates are then taken as combination weights. This paper investigates whether using different combination weights instead of the usual aggregation weights can help to provide more accurate forecasts. In this context, we examine the performance of equal weights, least squares estimators of the weights, the combination method recently proposed by Hyndman et al. (2011), and weights suggested by shrinkage methods. We find that some variables, like real GDP and the GDP deflator, can be forecast more precisely using flexible combination weights. Furthermore, combining only the forecasts of the three largest European countries helps to improve forecasting performance. The persistence of the individual data seems to play an important role in the relative performance of the combinations.
Jing Zeng
2015-05-13
Keywords: forecast combination, aggregation, macroeconomic forecasting, hierarchical time series, persistence in data

Testing for First Order Serial Correlation in Temporally Aggregated Regression Models
http://d.repec.org/n?u=RePEc:ipe:ipetds:0014&r=ecm
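For comparison with the Kalman-filter computation proposed in this entry, the standard auxiliary-regression (Breusch-Godfrey) form of the LM test for first order serial correlation can be sketched as follows (synthetic data; this is not the paper's filtering-based derivation):

```python
import numpy as np

def lm_ar1_test(y, X):
    """LM statistic for first order serial correlation in a regression:
    regress the OLS residuals on X and the lagged residual; LM = n * R^2,
    asymptotically chi-squared(1) under the null of no correlation."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    e_lag = np.concatenate([[0.0], e[:-1]])       # lagged residual, 0 at t=0
    Z = np.column_stack([X, e_lag])
    gamma, *_ = np.linalg.lstsq(Z, e, rcond=None)
    u = e - Z @ gamma
    r2 = 1.0 - (u @ u) / (e @ e)                  # e has mean 0 (intercept in X)
    return n * r2

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = np.zeros(n)
for t in range(1, n):                             # AR(1) errors, rho = 0.7
    eps[t] = 0.7 * eps[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + eps
lm = lm_ar1_test(y, X)        # large value => reject no serial correlation
```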
This paper shows that the LM statistic for testing first order serial correlation in regression models can be computed using the Kalman filter. It is shown that when there are missing observations, the LM statistic for this test is equivalent to the test statistic derived by Robinson (1985) using the likelihood conditional on the observation times. The Kalman filter approach is preferable because the test statistic for first order serial correlation in temporally aggregated regression models can be obtained as an extension of the previous case.
Helson C. Braga, William G. Tyler
2015-01

Nonparametric Instrumental Variable Estimation of Binary Response Models
http://d.repec.org/n?u=RePEc:nys:sunysb:14-07&r=ecm
Samuele Centorrino, Jean-Pierre Florens
2014

The efficiency of Anderson-Darling test with limited sample size: an application to Backtesting Counterparty Credit Risk internal model
http://d.repec.org/n?u=RePEc:arx:papers:1505.04593&r=ecm
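The Anderson-Darling statistic used in this entry can be computed directly from probability integral transforms (PIT) of the observations; a sketch against the uniform null follows (synthetic data; the authors' modified version of the test differs):

```python
import numpy as np

def anderson_darling_uniform(u):
    """Anderson-Darling statistic A^2 for the null that u ~ Uniform(0,1),
    e.g. after mapping observations through the model's forecast CDF (PIT)."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    s = np.sum((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))
    return -n - s / n

rng = np.random.default_rng(5)
a2_good = anderson_darling_uniform(rng.uniform(size=50))
# A misspecified forecast distribution distorts the PIT values away from
# uniformity; squaring a uniform mimics such a distortion:
a2_bad = anderson_darling_uniform(rng.uniform(size=50) ** 2)
```

Because the statistic weights deviations by 1/(F(1-F)), it is sensitive in the tails, which is where an underestimated volatility shows up.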
This work presents a theoretical and empirical evaluation of the Anderson-Darling test when the sample size is limited. The test can be applied to backtest the risk-factor dynamics in the context of Counterparty Credit Risk modelling. We show the limits of the test when backtesting the distributions of an interest rate model over long time horizons, and we propose a modified version of the test that detects an underestimation of the model's volatility more efficiently. Finally, we provide an empirical application.
M. Formenti, L. Spadafora, M. Terraneo, F. Ramponi
2015-05

Average Wage Gaps and Oaxaca-Blinder Decompositions
http://d.repec.org/n?u=RePEc:iza:izadps:dp9036&r=ecm
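A minimal sketch of the classical two-fold Oaxaca-Blinder decomposition discussed in this entry (synthetic data; the paper's new version targets a specific treatment-effects parameter, which this textbook form does not):

```python
import numpy as np

def oaxaca_blinder(y_a, X_a, y_b, X_b):
    """Two-fold Oaxaca-Blinder decomposition of the mean gap between
    group a and group b, with group b's coefficients as the reference."""
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b      # endowment differences
    unexplained = xbar_a @ (beta_a - beta_b)    # coefficient differences
    return explained, unexplained

rng = np.random.default_rng(6)
n = 1000
X_a = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])
X_b = np.column_stack([np.ones(n), rng.normal(0.5, 1.0, n)])
y_a = X_a @ np.array([1.2, 0.5]) + rng.normal(0.0, 0.1, n)
y_b = X_b @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.1, n)
explained, unexplained = oaxaca_blinder(y_a, X_a, y_b, X_b)
total_gap = y_a.mean() - y_b.mean()   # = explained + unexplained exactly
```

The adding-up identity holds by construction of OLS with an intercept; the substance of the paper is the causal interpretation of the unexplained part.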
In this paper I develop a new version of the Oaxaca-Blinder decomposition whose unexplained component recovers a parameter which I refer to as the average wage gap. Under a particular conditional independence assumption, this estimand is equivalent to the average treatment effect (ATE). I also provide treatment-effects reinterpretations of the Reimers, Cotton, and Fortin decompositions, as well as estimates of average wage gaps, average wage gains for men, and average wage losses for women in the United Kingdom. Conditional wage gaps increase across the wage distribution; therefore, on average, male gains are larger than female losses.
Sloczynski, Tymon
2015-05
Keywords: decomposition methods, gender wage gaps, glass ceilings, treatment effects

Modelling and Estimating Individual and Firm Effects with Count Panel Data
http://d.repec.org/n?u=RePEc:lvl:lacicr:1506&r=ecm
In this article, we propose a new parametric model for the modelling and estimation of accident distributions for drivers working in fleets of vehicles. The analysis uses panel data and takes into account individual and fleet effects in a non-linear model. Our sample contains more than 456,000 observations of vehicles and 87,000 observations of fleets. Non-observable factors are treated as random effects. The distribution of accidents is affected by observable and non-observable factors relating to drivers, vehicles and fleets. Past experience of both individual drivers and individual fleets is highly significant in explaining road accidents. Unobservable factors are also significant, which means that under asymmetric information insurance pricing should take both observable and unobservable factors into account when predicting the rate of road accidents.
Jean-François Angers, Denise Desjardins, Georges Dionne, François Guertin
2015
Keywords: accident distributions, drivers in fleets of vehicles, individual effect, firm effect, panel data, Poisson, gamma, Dirichlet, insurance pricing

Endogenous Censoring in the Mixed Proportional Hazard Model with an Application to Optimal Unemployment Insurance
http://d.repec.org/n?u=RePEc:lec:leecon:15/06&r=ecm
In economic duration analysis, it is routinely assumed that the process which led to censoring of the observed duration is independent of unobserved characteristics. The objective of this paper is to examine the sensitivity of parameter estimates to this independence assumption in the context of an economic model of optimal unemployment insurance. We assume a parametric model for the duration of interest and leave the distribution of censoring unrestricted, allowing it to be correlated with both observed and unobserved characteristics. This leads to a loss of point identification. We provide a practical characterization of the identified set with moment inequalities and suggest methods for estimating this set. In particular, we propose a profiled procedure that allows us to build a confidence set for a subvector of the model parameters. We apply this approach to estimate the elasticity of the exit rate from unemployment with respect to the unemployment benefit and find that both positive and negative values of this elasticity are supported by the data. When combined with the welfare formula in Chetty (2008), these estimates do not permit us to put an upper bound on the size of the welfare change due to an increase in the unemployment benefit. We conclude that, given the available data alone, one cannot credibly judge whether the unemployment benefits in the US are close to the optimal level.
Arkadiusz Szydlowski
2015-06

International Sign Predictability of Stock Returns: The Role of the United States
http://d.repec.org/n?u=RePEc:aah:create:2015-20&r=ecm
We study the directional predictability of monthly excess stock market returns in the U.S. and ten other markets using univariate and bivariate binary response models. Our main interest is in the potential benefits of predicting the signs of the returns jointly, focusing on the predictive power running from the U.S. to foreign markets. We introduce a new bivariate probit model that allows for such a contemporaneous predictive linkage from one market to the other. Our in-sample and out-of-sample forecasting results indicate superior predictive performance of the new model over competing models in terms of both statistical measures and market timing performance, suggesting gradual diffusion of predictive information from the U.S. to the other markets.
Henri Nyberg, Harri Pönkä
2015-05-05
Keywords: excess stock return, directional predictability, bivariate probit model, market timing

FloGARCH: Realizing long memory and asymmetries in returns volatility
http://d.repec.org/n?u=RePEc:nbb:reswpp:201504-280&r=ecm
We introduce the class of FloGARCH models in this paper. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models.
Harry Vander Elst
2015-04
Keywords: Realized GARCH models, high-frequency data, long memory, realized measures