nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒03‒27
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Qml inference for volatility models with covariates By Francq, Christian; Thieu, Le Quyen
  2. Bootstrap Methods for Inference with Cluster-Sample IV Models By Keith Finlay; Leandro M. Magnusson
  3. A hat matrix for monotonicity constrained B-spline and P-spline regression By Kagerer, Kathrin
  4. Structural Vector Autoregressions with Heteroskedasticity By Helmut Lütkepohl; Aleksei Netšunajev
  5. Generalized Exogenous Processes in DSGE: A Bayesian Approach By Alexander Meyer-Gohde; Daniel Neuhoff
  6. Predicting a future observation: A reconciliation of the Bayesian and frequentist approaches By Rahul Mukherjee
  7. Distribution and Quantile Structural Functions in Treatment Effect Models: Application to Smoking Effects on Wages By Yu-Chin Hsu; Kamhon Kan; Tsung-Chih Lai
  8. IV Quantile Regression for Group-level Treatments, with an Application to the Distributional Effects of Trade By Denis Chetverikov; Bradley Larsen; Christopher Palmer
  9. Quantile Regression with Panel Data By Bryan S. Graham; Jinyong Hahn; Alexandre Poirier; James L. Powell
  10. Evaluation of sequences of treatments with application to active labor market policies By Vikström, Johan
  11. Composite Indices Based on Partial Least Squares By Jisu Yoon; Stephan Klasen; Axel Dreher; Tatyana Krivobokova
  12. Confidence Interval for Ratio of Percentiles of Two Independent and Small Samples By Li-Fei Huang
  13. A comparative study between observation- and parameter-driven zero-inflated Poisson model for longitudinal children hospital visit data By Shahariar Huda
  14. A nested recursive logit model for route choice analysis By Mai, Tien; Frejinger, Emma; Fosgerau, Mogens
  15. Toward robust early-warning models: A horse race, ensembles and model uncertainty By Holopainen, Markus; Sarlin, Peter
  16. The Zero Lower Bound: Implications for Modelling the Interest Rate By Joshua C.C. Chan; Rodney Strachan

  1. By: Francq, Christian; Thieu, Le Quyen
    Abstract: The asymptotic distribution of the Gaussian quasi-maximum likelihood estimator (QMLE) is obtained for a wide class of asymmetric GARCH models with exogenous covariates. The true value of the parameter is not restricted to belong to the interior of the parameter space, which allows us to derive tests for the significance of the parameters. In particular, the relevance of the exogenous variables can be assessed. The results are obtained without assuming that the innovations are independent, which allows conditioning on different information sets. Monte Carlo experiments and applications to financial series illustrate the asymptotic results. In particular, an empirical study demonstrates that realized volatility is a helpful covariate for predicting squared returns, but does not constitute an ideal proxy for volatility.
    Keywords: APARCH model augmented with explanatory variables; Boundary of the parameter space; Consistency and asymptotic distribution of the Gaussian quasi-maximum likelihood estimator; GARCH-X models; Power-transformed and Threshold GARCH with exogenous covariates
    JEL: C12 C13 C22
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:63198&r=ecm
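    A minimal sketch of Gaussian QML for a GARCH(1,1) model augmented with one exogenous covariate, on simulated data. This illustrates the model class only, not the authors' estimator or boundary tests; all names are hypothetical, and the zero lower bounds on the coefficients merely hint at the boundary issue the paper treats formally.

      # Gaussian QML for a GARCH(1,1)-X model:
      # sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1} + pi*x_{t-1}
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      T = 2000
      x = np.abs(rng.normal(size=T))     # nonnegative covariate, e.g. realized volatility
      omega, alpha, beta, pi = 0.1, 0.05, 0.9, 0.02
      e = np.zeros(T)
      s2 = np.full(T, omega / (1 - alpha - beta))
      for t in range(1, T):              # simulate the volatility recursion
          s2[t] = omega + alpha * e[t-1]**2 + beta * s2[t-1] + pi * x[t-1]
          e[t] = np.sqrt(s2[t]) * rng.normal()

      def neg_qll(theta):                # negative Gaussian quasi-log-likelihood
          w, a, b, p = theta
          v = np.full(T, np.var(e))
          for t in range(1, T):
              v[t] = w + a * e[t-1]**2 + b * v[t-1] + p * x[t-1]
          return 0.5 * np.sum(np.log(v) + e**2 / v)

      res = minimize(neg_qll, x0=[0.1, 0.1, 0.8, 0.01],
                     bounds=[(1e-6, None), (0, 1), (0, 1), (0, None)])
      print(res.x)                       # QML estimates of (omega, alpha, beta, pi)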
  2. By: Keith Finlay (Department of Economics Tulane University New Orleans); Leandro M. Magnusson (University of Western Australia)
    Abstract: Microeconomic data often have within-cluster dependence. This dependence affects standard error estimation and inference in regression models, including the instrumental variables model. Standard corrections assume that the number of clusters is large, but when this is not the case, Wald and weak-instrument-robust tests can be severely over-sized. We examine the use of bootstrap methods to construct appropriate critical values for these tests when the number of clusters is small. We find that variants of the wild bootstrap perform well and reduce absolute size bias significantly, independent of instrument strength or cluster size. We also provide guidance in the choice among possible weak-instrument-robust tests when data have cluster dependence. These results are applicable to fixed-effects panel data models.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:uwa:wpaper:14-12&r=ecm
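    A bare-bones illustration of the wild cluster bootstrap idea for a just-identified IV t-test with few clusters: Rademacher weights drawn at the cluster level, with the null imposed. The paper compares more refined variants; this sketch uses simulated data and hypothetical names throughout.

      import numpy as np

      rng = np.random.default_rng(1)
      G, n_g = 20, 30                            # few clusters, many obs per cluster
      g = np.repeat(np.arange(G), n_g)
      c = rng.normal(size=G)[g]                  # within-cluster dependence
      z = rng.normal(size=G * n_g)               # instrument
      x = 0.5 * z + c + rng.normal(size=G * n_g) # endogenous regressor
      y = c + rng.normal(size=G * n_g)           # H0 true: beta = 0

      def iv_t(y, x, z, g):
          b = (z @ y) / (z @ x)                  # just-identified IV estimate
          score = np.bincount(g, weights=z * (y - b * x))  # cluster-summed scores
          se = np.sqrt(np.sum(score**2)) / abs(z @ x)      # cluster-robust s.e.
          return b / se

      t0 = iv_t(y, x, z, g)
      tb = []
      for _ in range(999):
          w = rng.choice([-1.0, 1.0], size=G)[g] # one Rademacher draw per cluster
          tb.append(iv_t(w * y, x, z, g))        # restricted residuals = y under H0
      print("bootstrap p-value:", np.mean(np.abs(tb) >= abs(t0)))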
  3. By: Kagerer, Kathrin
    Abstract: Splines constitute an interesting way to flexibly estimate a nonlinear relationship between several covariates and a response variable using linear regression techniques. Their popularity is due to their easy application and low computational cost, since their basis functions can be added to the regression model like ordinary covariates. As long as no inequality constraints or penalties are imposed on the estimation, the degrees of freedom of the model estimation are simply the number of estimated parameters. This paper derives a formula for computing the hat matrix of a penalized and inequality-constrained spline estimator. Its trace gives the degrees of freedom of the model estimation, which are needed for the information criteria used, e.g., to specify the spline parameters or to select among models.
    Keywords: Spline; monotonicity; penalty; hat matrix; regression; Monte Carlo simulation
    JEL: C14 C52
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:bay:rdwiwi:31450&r=ecm
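    The unconstrained special case of the paper's object is standard and easy to show: for a P-spline with basis matrix B and difference-penalty matrix D, the hat matrix is H = B(B'B + lambda*D'D)^{-1}B', and its trace gives the effective degrees of freedom. A sketch of that unconstrained case only; the paper's contribution, the monotonicity-constrained hat matrix, is not reproduced here.

      import numpy as np
      from scipy.interpolate import BSpline

      rng = np.random.default_rng(2)
      x = np.sort(rng.uniform(0, 1, 200))
      y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)

      k, n_knots = 3, 20                           # cubic B-splines
      knots = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_knots)[1:-1], [1.0] * (k + 1)]
      n_basis = len(knots) - k - 1
      B = BSpline.design_matrix(x, knots, k).toarray()
      D = np.diff(np.eye(n_basis), 2, axis=0)      # second-order difference penalty

      for lam in (0.01, 1.0, 100.0):               # heavier penalty -> fewer df
          H = B @ np.linalg.solve(B.T @ B + lam * D.T @ D, B.T)
          print(f"lambda = {lam:7.2f}   df = trace(H) = {np.trace(H):.2f}")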
  4. By: Helmut Lütkepohl; Aleksei Netšunajev
    Abstract: A growing literature uses changes in residual volatility for identifying structural shocks in vector autoregressive (VAR) analysis. A number of different models for heteroskedasticity or conditional heteroskedasticity are proposed and used in applications in this context. This study reviews the different volatility models and points out their advantages and drawbacks. It thereby enables researchers wishing to use identification of structural VAR models via heteroskedasticity to make a more informed choice of a suitable model for a specific empirical analysis. An application investigating the interaction between U.S. monetary policy and the stock market is used to illustrate the related issues.
    Keywords: Structural vector autoregression, identification via heteroskedasticity, conditional heteroskedasticity, smooth transition, Markov switching, GARCH
    JEL: C32
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-015&r=ecm
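    The core identification idea in its simplest form: with two volatility regimes, Sigma1 = BB' and Sigma2 = B*Lambda*B' identify B up to column sign and ordering, and B can be recovered from a generalized eigendecomposition. A toy two-variable sketch; the models the paper surveys (Markov switching, smooth transition, GARCH) generalize this setup.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(3)
      B_true = np.array([[1.0, 0.5],
                         [0.3, 1.0]])
      lam = np.array([1.0, 4.0])                   # shock variances in regime 2

      u1 = B_true @ rng.normal(size=(2, 4000))     # regime-1 residuals
      u2 = B_true @ (np.sqrt(lam)[:, None] * rng.normal(size=(2, 4000)))
      S1, S2 = np.cov(u1), np.cov(u2)

      # Solve S2 w = lambda S1 w; scipy normalizes W so that W'S1W = I and
      # W'S2W = diag(lambda), hence B = inv(W') up to sign and ordering.
      vals, W = eigh(S2, S1)
      print("estimated relative variances:", vals)
      print("estimated B (up to sign/order):\n", np.linalg.inv(W.T))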
  5. By: Alexander Meyer-Gohde; Daniel Neuhoff
    Abstract: The Reversible Jump Markov Chain Monte Carlo (RJMCMC) method can enhance Bayesian DSGE estimation by sampling from a posterior distribution spanning potentially nonnested models with parameter spaces of different dimensionality. We use the method to jointly sample from an ARMA process of unknown order along with the associated parameters. We apply the method to the technology process in a canonical neoclassical growth model using postwar US GDP data and find that the posterior decisively rejects the standard AR(1) assumption in favor of higher order processes. While the posterior contains significant uncertainty regarding the exact order, it concentrates posterior density on hump-shaped impulse responses. A negative response of hours to a positive technology shock is within the posterior credible set when noninvertible MA representations are admitted.
    Keywords: Bayesian analysis; Dynamic stochastic general equilibrium model; Model evaluation; ARMA; Reversible Jump Markov Chain Monte Carlo
    JEL: C11 C32 C51 C52
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-014&r=ecm
  6. By: Rahul Mukherjee (Indian Institute of Management Calcutta)
    Abstract: Predicting a future observation on the basis of the existing observations is a problem of compelling practical interest in many fields of study, including economics and sociology. Bayesian predictive densities, obtained via a prior specification on the underlying population, are commonly used for this purpose. This may, however, induce subjectivity because the resulting predictive set depends on the choice of prior. Moreover, one can as well consider direct frequentist methods which do not require any prior specification. These can again yield results differing from those of Bayesian predictive densities. Thus there is a need to reconcile all these approaches. The present article aims at addressing this problem. Specifically, we explore predictive sets which have frequentist as well as Bayesian validity for arbitrary priors in an asymptotic sense. Our tools include a connection with locally unbiased tests and a shrinkage argument for Bayesian asymptotics. Our findings apply to general multiparameter statistical models and represent a significant advance over the existing work in this area, which caters only to models with a single unknown parameter, and only under certain restrictions. Illustrative examples are given. Computation and simulation studies show that our results work very well in finite samples.
    Keywords: Asymptotic theory, locally unbiased test, posterior predictive density, shrinkage argument
    JEL: C11 C15
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:0901514&r=ecm
  7. By: Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Kamhon Kan (Institute of Economics, Academia Sinica, Taipei, Taiwan); Tsung-Chih Lai (Department of Economics, National Taiwan University)
    Abstract: This paper examines the distribution structural functions (DSFs) and quantile structural functions (QSFs) in a semiparametric treatment effect model. The DSF and QSF are defined as the distribution function and quantile function of the counterfactual outcome when covariates are exogenously switched to a fixed value, while the unobserved heterogeneity of the whole population remains unchanged. We show that the DSFs and QSFs are identified under the unconfoundedness assumption, and then propose inverse probability weighted estimators which are root-n consistent and converge weakly to mean-zero Gaussian processes. A simulation approach is also proposed to approximate the limiting processes. Finally, we apply the results to construct uniform confidence bands for the structural quantile treatment effect of smoking on wages, and find that smoking does not impose any wage penalty on male workers with low unobserved heterogeneity.
    Keywords: distribution structural function, semiparametric models, smoking, treatment effects, quantile structural function, wages
    JEL: C14 C31 I19
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:15-a001&r=ecm
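    A simplified analogue of the paper's IPW estimators, for a binary treatment rather than a covariate switched to a fixed value: weight by the inverse of an estimated propensity score to obtain a counterfactual distribution function, then invert it for quantiles. Simulated data, hypothetical names; the paper's uniform confidence bands are not reproduced.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      n = 5000
      x = rng.normal(size=n)
      d = rng.uniform(size=n) < 1 / (1 + np.exp(-x))     # treatment, confounded by x
      y = 1.0 + 0.5 * x + 0.8 * d + rng.normal(size=n)

      phat = LogisticRegression().fit(x[:, None], d).predict_proba(x[:, None])[:, 1]

      grid = np.linspace(y.min(), y.max(), 400)
      F1 = np.array([np.mean((d / phat) * (y <= yy)) for yy in grid])  # IPW CDF of Y(1)
      qsf = lambda tau: grid[np.searchsorted(F1, tau)]   # quantiles by inversion
      print("counterfactual median under treatment:", qsf(0.5))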
  8. By: Denis Chetverikov; Bradley Larsen; Christopher Palmer
    Abstract: We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group-level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group-level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group-by-group quantile regression followed by two-stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero-mean normality of our estimator. As in Hausman and Taylor (1981), micro-level covariates can be used as internal instruments for the endogenous group-level treatment if they satisfy relevance and exogeneity conditions. An empirical application indicates that low-wage earners in the US from 1990–2007 were significantly more affected by increased Chinese import competition than high-wage earners. Our approach applies to a broad range of settings in labor, industrial organization, trade, public finance, and other applied fields.
    JEL: C21 C31 C33 C36 F16 J30
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21033&r=ecm
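    The two-step recipe from the abstract, in miniature: group-by-group quantile regression to recover group effects at quantile tau, then 2SLS of those effects on the endogenous group-level treatment using a group-level instrument. A sketch on simulated data; all names are hypothetical.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from statsmodels.sandbox.regression.gmm import IV2SLS

      rng = np.random.default_rng(5)
      G, n_g, tau = 50, 200, 0.5
      eta = rng.normal(size=G)                 # group-level unobservable
      z = rng.normal(size=G)                   # group-level instrument
      treat = z + eta + rng.normal(size=G)     # endogenous group-level treatment

      frames = []
      for j in range(G):
          xg = rng.normal(size=n_g)
          yg = 1.5 * treat[j] + eta[j] + xg + rng.normal(size=n_g)
          frames.append(pd.DataFrame({"y": yg, "x": xg, "g": j}))
      df = pd.concat(frames)

      # Step 1: quantile regression within each group; the intercept is the
      # estimated group effect at quantile tau.
      alpha = np.array([smf.quantreg("y ~ x", df[df.g == j]).fit(q=tau)
                        .params["Intercept"] for j in range(G)])

      # Step 2: 2SLS of the estimated group effects on the treatment.
      res = IV2SLS(alpha, sm.add_constant(treat), sm.add_constant(z)).fit()
      print(res.params)                        # slope should be near 1.5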
  9. By: Bryan S. Graham; Jinyong Hahn; Alexandre Poirier; James L. Powell
    Abstract: We propose a generalization of the linear quantile regression model to accommodate possibilities afforded by panel data. Specifically, we extend the correlated random coefficients representation of linear quantile regression (e.g., Koenker, 2005; Section 2.6). We show that panel data allows the econometrician to (i) introduce dependence between the regressors and the random coefficients and (ii) weaken the assumption of comonotonicity across them (i.e., to enrich the structure of allowable dependence between different coefficients). We adopt a “fixed effects” approach, leaving any dependence between the regressors and the random coefficients unmodelled. We motivate different notions of quantile partial effects in our model and study their identification. For the case of discretely-valued covariates we present analog estimators and characterize their large sample properties. When the number of time periods (T) exceeds the number of random coefficients (P), identification is regular, and our estimates are root-N-consistent. When T=P, our identification results make special use of the subpopulation of stayers - units whose regressor values change little over time - in a way which builds on the approach of Graham and Powell (2012). In this just-identified case we study asymptotic sequences which allow the frequency of stayers in the population to shrink with the sample size. One purpose of these “discrete bandwidth asymptotics” is to approximate settings where covariates are continuously-valued and, as such, there is only an infinitesimal fraction of exact stayers, while keeping the convenience of an analysis based on discrete covariates. When the mass of stayers shrinks with N, identification is irregular and our estimates converge at a slower than root-N rate, but continue to have limiting normal distributions. We apply our methods to study the effects of collective bargaining coverage on earnings using the National Longitudinal Survey of Youth 1979 (NLSY79). Consistent with prior work (e.g., Chamberlain, 1982; Vella and Verbeek, 1998), we find that using panel data to control for unobserved worker heterogeneity results in sharply lower estimates of union wage premia. We estimate a median union wage premium of about 9 percent but, in a more novel finding, document substantial heterogeneity across workers. The 0.1 quantile of union effects is insignificantly different from zero, whereas the 0.9 quantile effect is over 30 percent. Our empirical analysis further suggests that, on net, unions have an equalizing effect on the distribution of wages.
    JEL: C23 C31 J31
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21034&r=ecm
  10. By: Vikström, Johan (IFAU - Institute for Evaluation of Labour Market and Education Policy)
    Abstract: This paper proposes a new framework for analyzing the effects of sequences of treatments with duration outcomes. Applications include sequences of active labor market policies assigned at specific unemployment durations and sequences of medical treatments. We consider evaluation under unconfoundedness and propose conditions under which the survival time under a specific treatment regime can be identified. We introduce inverse probability weighting estimators for various average effects. The finite sample properties of the estimators are investigated in a simulation study. The new estimator is applied to Swedish data on participants in training, in a work practice program and in subsidized employment. One result is that enrolling an unemployed person twice in the same program or in two different programs one after the other leads to longer unemployment spells compared to only participating in a single program once.
    Keywords: Treatment effects; dynamic treatment assignment; dynamic selection; program evaluation; work practice; training; subsidized employment
    JEL: C14 C40
    Date: 2015–03–16
    URL: http://d.repec.org/n?u=RePEc:hhs:ifauwp:2015_005&r=ecm
  11. By: Jisu Yoon (Georg-August-University Göttingen); Stephan Klasen (Georg-August-University Göttingen); Axel Dreher (University of Heidelberg); Tatyana Krivobokova (Georg-August-University Göttingen)
    Abstract: In this paper, we compare Principal Component Analysis (PCA) and Partial Least Squares (PLS) methods to generate weights for composite indices. In this context we also consider various treatments of non-metric variables when constructing such composite indices. Using simulation studies we find that dummy coding for non-metric variables yields satisfactory performance compared to more sophisticated statistical procedures. In our applications we illustrate how PLS can generate weights that differ substantially from those obtained with PCA, increasing the composite indices' predictive performance for the outcome variable considered.
    Keywords: Principal Component Analysis; PCA; Partial Least Squares; PLS; non-metric variables; wealth index; globalization
    JEL: C15 C43 R20
    Date: 2015–03–24
    URL: http://d.repec.org/n?u=RePEc:got:gotcrc:171&r=ecm
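    A minimal contrast of the two weighting schemes: PCA weights are driven by the covariance structure of the indicators alone, while PLS weights also exploit covariance with the outcome, so the two can differ substantially. Simulated indicators, hypothetical names.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(6)
      n = 1000
      f = rng.normal(size=n)                            # common latent factor
      load = np.array([1.0, 1.0, 1.0, 0.5, 0.5])
      X = f[:, None] * load + 0.5 * rng.normal(size=(n, 5))
      outcome = X[:, 3] + X[:, 4] + rng.normal(size=n)  # driven by the last two indicators

      w_pca = PCA(n_components=1).fit(X).components_[0]
      w_pls = PLSRegression(n_components=1).fit(X, outcome).x_weights_[:, 0]
      print("PCA weights:", np.round(w_pca, 2))         # follow the common factor
      print("PLS weights:", np.round(w_pls, 2))         # tilted toward indicators 4-5
      index = X @ w_pls                                 # the composite index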
  12. By: Li-Fei Huang (Ming Chuan University)
    Abstract: In the wood industry, it is common practice to compare lumber in terms of the ratio of two different strength properties for lumber of the same dimension, grade and species, or of the same strength property for lumber of two different dimensions, grades or species. Because United States lumber standards are given in terms of the population fifth percentile, and strength problems arise from the weaker fifth percentile rather than the stronger mean, the ratio should be expressed in terms of the fifth percentiles of the two strength distributions rather than the means. If n is the sample size, then n(n+1)/2 averages of pairs of sample points can be created by the Hodges-Lehmann method, which is utilized to construct the confidence interval for the mean. If n(n+1)/2 percentiles are found directly by adjusting the Hodges-Lehmann method, the resulting distribution is highly skewed; the n(n+1)/2 percentiles are therefore found by shifting those averages by a suitable amount. The distribution of the [n(n+1)/2]^2 ratios of percentiles has large kurtosis and hence is not normal, so traditional approximation methods do not work in this case. An empirical confidence interval should instead be used for inference about the ratio of percentiles. Small samples are considered to keep [n(n+1)/2]^2 from becoming extremely large.
    Keywords: Strength of lumber, Independent and small samples, Simulation of percentiles, Simulation of ratio of percentiles, Empirical confidence interval.
    JEL: C00 C14 C15
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:0100806&r=ecm
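    One reading of the construction sketched in the abstract: form the n(n+1)/2 pairwise (Walsh) averages in each sample, shift them so they center on the sample fifth percentile rather than the median, take all [n(n+1)/2]^2 cross-sample ratios, and read an empirical interval off their quantiles. The exact shift in the paper may differ; this is an assumption-laden sketch on simulated data.

      import numpy as np
      from itertools import combinations_with_replacement

      rng = np.random.default_rng(7)
      a = rng.lognormal(mean=2.0, sigma=0.3, size=12)   # strength sample 1, small n
      b = rng.lognormal(mean=1.8, sigma=0.3, size=12)   # strength sample 2

      def shifted_walsh(x, q=0.05):
          """n(n+1)/2 Walsh averages, shifted from the median to the q-th percentile."""
          w = np.array([(xi + xj) / 2
                        for xi, xj in combinations_with_replacement(x, 2)])
          return w + (np.quantile(x, q) - np.median(w))

      ratios = (shifted_walsh(a)[:, None] / shifted_walsh(b)[None, :]).ravel()
      lo, hi = np.quantile(ratios, [0.025, 0.975])      # empirical confidence interval
      print(f"95% empirical CI for the ratio of 5th percentiles: ({lo:.2f}, {hi:.2f})")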
  13. By: Shahariar Huda (KUWAIT UNIVERSITY)
    Abstract: Longitudinal count data with excessive zeros frequently occur in social, biological, medical and health research. To model zero-inflated longitudinal count data, zero-inflated Poisson (ZIP) models are commonly used in the literature after separating zero and positive responses. As longitudinal count responses are likely to be serially correlated, such separation may destroy the underlying serial correlation structure. To overcome this problem, observation- and parameter-driven modelling approaches have recently been proposed for zero-inflated longitudinal count responses. In the observation-driven model, the response at a specific time point is modelled through the responses at previous time points, taking serial correlation into account. One limitation of the observation-driven model is that it fails to accommodate possible overdispersion, which commonly occurs in count responses. To overcome this limitation, we introduce a parameter-driven model, in which the serial correlation is captured through a latent process using random effects, and compare the results with the observation-driven model. A quasi-likelihood approach has been developed to estimate the model parameters. We illustrate the methodology with analyses of two real-life data sets, and examine model performance by comparing the proposed model with the observation-driven ZIP model through a simulation study.
    Keywords: Serial correlation; compound Poisson; ZIP models; quasi-likelihood
    JEL: C10
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:0902796&r=ecm
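    The zero-inflated Poisson building block the paper starts from, fitted by maximum likelihood on i.i.d. simulated counts. The paper's actual models add serial correlation (observation-driven) or a latent random-effects process (parameter-driven), neither of which appears in this sketch.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit, gammaln

      rng = np.random.default_rng(8)
      n, pi0, lam0 = 2000, 0.3, 2.5
      y = np.where(rng.uniform(size=n) < pi0, 0, rng.poisson(lam0, size=n))

      def negll(theta):
          pi, lam = expit(theta[0]), np.exp(theta[1])   # keep parameters in range
          lp_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
          lp_zero = np.log(pi + (1 - pi) * np.exp(-lam))
          return -np.sum(np.where(y == 0, lp_zero, lp_pos))

      res = minimize(negll, x0=[0.0, 0.0])
      print("pi_hat:", expit(res.x[0]), " lambda_hat:", np.exp(res.x[1]))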
  14. By: Mai, Tien; Frejinger, Emma; Fosgerau, Mogens
    Abstract: We propose a nested recursive logit (NRL) route choice model that relaxes the independence from irrelevant alternatives property of the logit model by allowing scale parameters to be link specific. Similar to the recursive logit (RL) model proposed by Fosgerau et al. (2013), the choice of path is modelled as a sequence of link choices and the model does not require any sampling of choice sets. Furthermore, the model can be consistently estimated and efficiently used for prediction. A key challenge lies in the computation of the value functions, i.e. the expected maximum utility from any position in the network to a destination. The value functions are the solution to a system of non-linear equations. We propose an iterative method with dynamic accuracy that allows us to solve these systems efficiently. We report estimation results and a cross-validation study for a real network. The results show that the NRL model yields sensible parameter estimates and that its fit is significantly better than that of the RL model. Moreover, the NRL model outperforms the RL model in terms of prediction.
    Keywords: route choice modelling; nested recursive logit; substitution patterns; value iterations; maximum likelihood estimation; cross-validation
    JEL: C25
    Date: 2015–03–23
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:63161&r=ecm
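    What the value functions look like in the plain recursive logit case with the scale fixed at one: V(k) = log sum over outgoing links a of exp(v(k,a) + V(a)), with V = 0 at the destination, solved by fixed-point iteration. A toy network; the paper's NRL model makes the scale link specific and uses an iterative method with dynamic accuracy.

      import numpy as np

      # toy network: v[k][a] is the utility of link k -> a; node 3 is the destination
      v = {0: {1: -1.0, 2: -1.5},
           1: {2: -0.5, 3: -2.0},
           2: {3: -1.0}}
      dest = 3

      V = {k: 0.0 for k in (0, 1, 2, 3)}
      for _ in range(100):                     # value iteration to a fixed point
          V_new = {dest: 0.0}
          for k, out in v.items():
              V_new[k] = np.log(sum(np.exp(u + V[a]) for a, u in out.items()))
          if max(abs(V_new[k] - V[k]) for k in V_new) < 1e-12:
              break
          V = V_new

      # link choice probabilities at node 0 are logit over exp(v + V)
      probs = {a: float(np.exp(u + V[a] - V[0])) for a, u in v[0].items()}
      print("V:", V)
      print("P(0 -> a):", probs)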
  15. By: Holopainen, Markus (RiskLab Finland at Arcada University of Applied Sciences, Helsinki, Finland); Sarlin, Peter (Goethe University, Center of Excellence SAFE; Department of Economics, Hanken School of Economics, Helsinki; RiskLab Finland at Arcada University of Applied Sciences, Helsinki, Finland)
    Abstract: This paper presents first steps toward robust early-warning models. We conduct a horse race of conventional statistical methods and more recent machine learning methods. As early-warning models based upon one approach are oftentimes built in isolation from other methods, the exercise is of high relevance for assessing the relative performance of a wide variety of methods. Further, we test various ensemble approaches to aggregating the information products of the built early-warning models, providing a more robust basis for measuring country-level vulnerabilities. Finally, we provide approaches to estimating model uncertainty in early-warning exercises, particularly model performance uncertainty and model output uncertainty. The approaches put forward in this paper are illustrated using Europe as a test case.
    Keywords: financial stability; early-warning models; horse race; ensembles; model uncertainty
    JEL: C43 E44 F30 G01 G15
    Date: 2015–03–04
    URL: http://d.repec.org/n?u=RePEc:hhs:bofrdp:2015_006&r=ecm
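    A miniature horse race and ensemble in the spirit of the paper: fit several off-the-shelf classifiers on the same vulnerability indicators, compare out-of-sample AUCs, and average the predicted crisis probabilities into a simple ensemble. Simulated data and hypothetical names; the paper's model-uncertainty measures are not reproduced.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(9)
      n = 3000
      X = rng.normal(size=(n, 6))                        # macro-financial indicators
      crisis = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)) > 1.5

      Xtr, Xte, ytr, yte = train_test_split(X, crisis, random_state=0)
      models = [LogisticRegression(max_iter=1000),
                RandomForestClassifier(n_estimators=200, random_state=0),
                KNeighborsClassifier(n_neighbors=30)]

      probs = []
      for m in models:                                   # the horse race
          p = m.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
          probs.append(p)
          print(type(m).__name__, "AUC:", round(roc_auc_score(yte, p), 3))

      ensemble = np.mean(probs, axis=0)                  # probability-averaging ensemble
      print("Ensemble AUC:", round(roc_auc_score(yte, ensemble), 3))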
  16. By: Joshua C.C. Chan (Research School of Economics, and Centre for Applied Macroeconomic Analysis, Australian National University); Rodney Strachan (School of Economics, and Centre for Applied Macroeconomic Analysis, University of Queensland; The Rimini Centre for Economic Analysis, Italy)
    Abstract: The time-varying parameter vector autoregressive (TVP-VAR) model has been used successfully to model interest rates and other variables. As many short interest rates are now near their zero lower bound (ZLB), a feature not included in the standard TVP-VAR specification, this model is no longer appropriate. However, there remain good reasons to include short interest rates in macro models, for example to study the effect of a credit shock. We propose a TVP-VAR that accounts for the ZLB and study algorithms for computing this model that are less computationally burdensome than alternatives yet handle many states well. To illustrate the proposed approach, we investigate the effect of the zero lower bound on the transmission of a monetary shock.
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:42_14&r=ecm

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.