
New Economics Papers on Econometrics
By:  Francq, Christian; Thieu, Le Quyen 
Abstract:  The asymptotic distribution of the Gaussian quasi-maximum likelihood estimator (QMLE) is obtained for a wide class of asymmetric GARCH models with exogenous covariates. The true value of the parameter is not restricted to belong to the interior of the parameter space, which allows us to derive tests for the significance of the parameters. In particular, the relevance of the exogenous variables can be assessed. The results are obtained without assuming that the innovations are independent, which allows conditioning on different information sets. Monte Carlo experiments and applications to financial series illustrate the asymptotic results. In particular, an empirical study demonstrates that realized volatility is a helpful covariate for predicting squared returns, but does not constitute an ideal proxy of the volatility. 
Keywords:  APARCH model augmented with explanatory variables; Boundary of the parameter space; Consistency and asymptotic distribution of the Gaussian quasi-maximum likelihood estimator; GARCH-X models; Power-transformed and Threshold GARCH with exogenous covariates 
JEL:  C12 C13 C22 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:63198&r=ecm 
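The GARCH-X idea above can be sketched with a simple volatility recursion. This is an illustrative simplification, not the paper's full APARCH-X specification: the function name, the plain GARCH(1,1)-X form, and the absence of asymmetry and power transforms are all assumptions for the sketch.

```python
def garchx_volatility(eps, x, omega, alpha, beta, pi):
    """Conditional-variance recursion of a GARCH(1,1)-X model:
    sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1} + pi * x_{t-1}.
    Initialized at the covariate-free unconditional variance omega/(1-alpha-beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for t in range(1, len(eps)):
        sigma2.append(omega + alpha * eps[t - 1] ** 2
                      + beta * sigma2[t - 1] + pi * x[t - 1])
    return sigma2
```

Testing the significance of `pi`, where the null value 0 sits on the boundary of the parameter space, is exactly the situation the paper's non-interior asymptotics are designed to cover.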
By:  Keith Finlay (Department of Economics, Tulane University, New Orleans); Leandro M. Magnusson (University of Western Australia) 
Abstract:  Microeconomic data often have within-cluster dependence. This dependence affects standard error estimation and inference in regression models, including the instrumental variables model. Standard corrections assume that the number of clusters is large, but when this is not the case, Wald and weak-instrument-robust tests can be severely oversized. We examine the use of bootstrap methods to construct appropriate critical values for these tests when the number of clusters is small. We find that variants of the wild bootstrap perform well and reduce absolute size bias significantly, independent of instrument strength or cluster size. We also provide guidance on the choice among possible weak-instrument-robust tests when data have cluster dependence. These results are applicable to fixed-effects panel data models. 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:uwa:wpaper:1412&r=ecm 
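The core mechanic of the wild cluster bootstrap mentioned above is that one Rademacher sign is drawn per cluster, not per observation, so that within-cluster dependence of the residuals is preserved in each resample. A minimal sketch, assuming a single-regressor OLS model; the function names and the unrestricted (non-null-imposed) resampling scheme are illustrative choices, not the paper's exact procedure.

```python
import random

def ols_slope(x, y):
    # Closed-form OLS slope for one regressor plus intercept.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def wild_cluster_bootstrap(x, y, cluster, B=999, seed=0):
    """Resample by flipping residual signs with ONE Rademacher draw per cluster,
    then refit; returns the original slope and B bootstrap slopes."""
    rng = random.Random(seed)
    b = ols_slope(x, y)
    n = len(x)
    a = sum(y) / n - b * sum(x) / n
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    clusters = sorted(set(cluster))
    draws = []
    for _ in range(B):
        w = {g: rng.choice((-1.0, 1.0)) for g in clusters}  # one weight per cluster
        y_star = [a + b * xi + w[g] * ri
                  for xi, ri, g in zip(x, resid, cluster)]
        draws.append(ols_slope(x, y_star))
    return b, draws
```

Critical values for a t-type test would then be taken from the empirical distribution of the bootstrap slopes (suitably studentized) rather than from asymptotic normal quantiles.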
By:  Kagerer, Kathrin 
Abstract:  Splines are an attractive way to flexibly estimate a nonlinear relationship between several covariates and a response variable using linear regression techniques. Their popularity is due to their easy application and low computational cost, since their basis functions can be added to the regression model like ordinary covariates. As long as no inequality constraints or penalties are imposed on the estimation, the degrees of freedom of the model estimation can be determined straightforwardly as the number of estimated parameters. This paper derives a formula for computing the hat matrix of a penalized and inequality-constrained splines estimator. Its trace gives the degrees of freedom of the model estimation, which are needed to calculate the information criteria used, e.g., for choosing the spline parameters or for model selection. 
Keywords:  Spline; monotonicity; penalty; hat matrix; regression; Monte Carlo simulation 
JEL:  C14 C52 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:31450&r=ecm 
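The trace-of-hat-matrix notion of degrees of freedom is easy to see in the penalized but *unconstrained* case, where the hat matrix has a closed form; the paper's contribution is the extension to inequality-constrained estimation, which this sketch does not reproduce. The function name and the ridge-type penalty `D'D` are illustrative assumptions.

```python
import numpy as np

def penalized_spline_df(B, lam, D):
    """Effective degrees of freedom tr(H) of a penalized spline fit with
    hat matrix H = B (B'B + lam * D'D)^{-1} B' (no inequality constraints).
    B: n x k basis matrix; D: difference/penalty matrix on the coefficients."""
    P = lam * (D.T @ D)
    H = B @ np.linalg.solve(B.T @ B + P, B.T)
    return float(np.trace(H))
```

With `lam = 0` and a full-rank basis this reduces to the number of estimated parameters k, matching the unpenalized count mentioned in the abstract; increasing `lam` shrinks the trace below k.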
By:  Helmut Lütkepohl; Aleksei Netšunajev 
Abstract:  A growing literature uses changes in residual volatility for identifying structural shocks in vector autoregressive (VAR) analysis. A number of different models for heteroskedasticity or conditional heteroskedasticity are proposed and used in applications in this context. This study reviews the different volatility models and points out their advantages and drawbacks. It thereby enables researchers wishing to use identification of structural VAR models via heteroskedasticity to make a more informed choice of a suitable model for a specific empirical analysis. An application investigating the interaction between U.S. monetary policy and the stock market is used to illustrate the related issues. 
Keywords:  Structural vector autoregression, identification via heteroskedasticity, conditional heteroskedasticity, smooth transition, Markov switching, GARCH 
JEL:  C32 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015015&r=ecm 
By:  Alexander Meyer-Gohde; Daniel Neuhoff 
Abstract:  The Reversible Jump Markov Chain Monte Carlo (RJMCMC) method can enhance Bayesian DSGE estimation by sampling from a posterior distribution spanning potentially non-nested models with parameter spaces of different dimensionality. We use the method to jointly sample from an ARMA process of unknown order along with the associated parameters. We apply the method to the technology process in a canonical neoclassical growth model using postwar US GDP data and find that the posterior decisively rejects the standard AR(1) assumption in favor of higher-order processes. While the posterior contains significant uncertainty regarding the exact order, it concentrates posterior density on hump-shaped impulse responses. A negative response of hours to a positive technology shock is within the posterior credible set when non-invertible MA representations are admitted. 
Keywords:  Bayesian analysis; Dynamic stochastic general equilibrium model; Model evaluation; ARMA; Reversible Jump Markov Chain Monte Carlo 
JEL:  C11 C32 C51 C52 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015014&r=ecm 
By:  Rahul Mukherjee (Indian Institute of Management Calcutta) 
Abstract:  Predicting a future observation on the basis of the existing observations is a problem of compelling practical interest in many fields of study, including economics and sociology. Bayesian predictive densities, obtained via a prior specification on the underlying population, are commonly used for this purpose. This may, however, induce subjectivity because the resulting predictive set depends on the choice of prior. Moreover, one can as well consider direct frequentist methods which do not require any prior specification. These can again yield results differing from what Bayesian predictive densities give. Thus there is a need to reconcile all these approaches. The present article aims at addressing this problem. Specifically, we explore predictive sets which have frequentist as well as Bayesian validity for arbitrary priors in an asymptotic sense. Our tools include a connection with locally unbiased tests and a shrinkage argument for Bayesian asymptotics. Our findings apply to general multiparameter statistical models and represent a significant advance over the existing work in this area, which caters only to models with a single unknown parameter, and even then only under certain restrictions. Illustrative examples are given. Computation and simulation studies show that our results work very well in finite samples. 
Keywords:  Asymptotic theory, locally unbiased test, posterior predictive density, shrinkage argument 
JEL:  C11 C15 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:sek:iacpro:0901514&r=ecm 
By:  Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Kamhon Kan (Institute of Economics, Academia Sinica, Taipei, Taiwan); Tsung-Chih Lai (Department of Economics, National Taiwan University) 
Abstract:  This paper examines distribution structural functions (DSFs) and quantile structural functions (QSFs) in a semiparametric treatment effect model. The DSF and QSF are defined as the distribution function and quantile function of the counterfactual outcome when covariates are exogenously switched to a fixed value, while the unobserved heterogeneity of the whole population remains unchanged. We show that the DSFs and QSFs are identified under the unconfoundedness assumption, and then propose inverse probability weighted estimators which are n^{1/2}-consistent and converge weakly to mean-zero Gaussian processes. A simulation approach is also proposed to approximate the limiting processes. Finally, we apply the results to construct uniform confidence bands for the structural quantile treatment effect of smoking on wages, and find that smoking does not impose any wage penalty on male workers with low unobserved heterogeneity. 
Keywords:  distribution structural function, semiparametric models, smoking, treatment effects, quantile structural function, wages 
JEL:  C14 C31 I19 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:sin:wpaper:15a001&r=ecm 
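The inverse-probability-weighting idea behind the estimators above is easiest to see in the simplest binary-treatment case. This is a stylized sketch, not the paper's DSF estimator: the function name, the binary treatment, and the known propensity scores are all assumptions for illustration.

```python
def ipw_distribution(y, d, pscore, y_grid):
    """IPW estimate of the distribution of the treated counterfactual outcome:
    F_1(t) = (1/n) * sum_i d_i * 1{y_i <= t} / e_i,
    where d_i is a binary treatment and e_i a propensity score."""
    n = len(y)
    return [sum(di * (yi <= t) / ei for yi, di, ei in zip(y, d, pscore)) / n
            for t in y_grid]
```

Evaluating the weighted indicator on a grid of thresholds yields a distribution-function estimate; inverting it gives the corresponding quantile (QSF-type) object, and uniform bands come from the limiting Gaussian process.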
By:  Denis Chetverikov; Bradley Larsen; Christopher Palmer 
Abstract:  We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group-level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group-level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group-by-group quantile regression followed by two-stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero-mean normality of our estimator. As in Hausman and Taylor (1981), micro-level covariates can be used as internal instruments for the endogenous group-level treatment if they satisfy relevance and exogeneity conditions. An empirical application indicates that low-wage earners in the US from 1990–2007 were significantly more affected by increased Chinese import competition than high-wage earners. Our approach applies to a broad range of settings in labor, industrial organization, trade, public finance, and other applied fields. 
JEL:  C21 C31 C33 C36 F16 J30 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:21033&r=ecm 
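The two-step structure described above (group-by-group quantile estimation, then 2SLS across groups) can be sketched in miniature. Assumptions for the sketch: the first stage is reduced to a plain group quantile of the outcome (the paper runs a full quantile regression on micro-level covariates within each group), and the second stage is single-instrument 2SLS in closed form; all names are illustrative.

```python
def quantile(values, tau):
    # Simple order-statistic quantile of a list.
    s = sorted(values)
    k = min(int(tau * len(s)), len(s) - 1)
    return s[k]

def grouped_iv_quantile(groups, tau):
    """Stage 1: tau-th quantile of the outcome, computed group by group.
    Stage 2: IV (2SLS) regression of the group quantiles on the group-level
    treatment x, instrumented by z: beta = cov(z, q) / cov(z, x)."""
    q = [quantile(g["y"], tau) for g in groups]
    x = [g["x"] for g in groups]
    z = [g["z"] for g in groups]
    G = len(groups)
    mq, mx, mz = sum(q) / G, sum(x) / G, sum(z) / G
    szq = sum((zi - mz) * (qi - mq) for zi, qi in zip(z, q))
    szx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return szq / szx
```

The paper's asymptotics then control how fast the number of observations per group must grow relative to the number of groups for this second-stage estimator to be consistent and asymptotically normal.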
By:  Bryan S. Graham; Jinyong Hahn; Alexandre Poirier; James L. Powell 
Abstract:  We propose a generalization of the linear quantile regression model to accommodate possibilities afforded by panel data. Specifically, we extend the correlated random coefficients representation of linear quantile regression (e.g., Koenker, 2005; Section 2.6). We show that panel data allows the econometrician to (i) introduce dependence between the regressors and the random coefficients and (ii) weaken the assumption of comonotonicity across them (i.e., to enrich the structure of allowable dependence between different coefficients). We adopt a “fixed effects” approach, leaving any dependence between the regressors and the random coefficients unmodelled. We motivate different notions of quantile partial effects in our model and study their identification. For the case of discretely valued covariates we present analog estimators and characterize their large sample properties. When the number of time periods (T) exceeds the number of random coefficients (P), identification is regular, and our estimates are root-N consistent. When T = P, our identification results make special use of the subpopulation of stayers (units whose regressor values change little over time) in a way which builds on the approach of Graham and Powell (2012). In this just-identified case we study asymptotic sequences which allow the frequency of stayers in the population to shrink with the sample size. One purpose of these “discrete bandwidth asymptotics” is to approximate settings where covariates are continuously valued and, as such, there is only an infinitesimal fraction of exact stayers, while keeping the convenience of an analysis based on discrete covariates. When the mass of stayers shrinks with N, identification is irregular and our estimates converge at a slower than root-N rate, but continue to have limiting normal distributions. We apply our methods to study the effects of collective bargaining coverage on earnings using the National Longitudinal Survey of Youth 1979 (NLSY79). 
Consistent with prior work (e.g., Chamberlain, 1982; Vella and Verbeek, 1998), we find that using panel data to control for unobserved worker heterogeneity results in sharply lower estimates of union wage premia. We estimate a median union wage premium of about 9 percent but, in a more novel finding, substantial heterogeneity across workers. The 0.1 quantile of union effects is insignificantly different from zero, whereas the 0.9 quantile effect is over 30 percent. Our empirical analysis further suggests that, on net, unions have an equalizing effect on the distribution of wages. 
JEL:  C23 C31 J31 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:21034&r=ecm 
By:  Vikström, Johan (IFAU - Institute for Evaluation of Labour Market and Education Policy) 
Abstract:  This paper proposes a new framework for analyzing the effects of sequences of treatments with duration outcomes. Applications include sequences of active labor market policies assigned at specific unemployment durations and sequences of medical treatments. We consider evaluation under unconfoundedness and propose conditions under which the survival time under a specific treatment regime can be identified. We introduce inverse probability weighting estimators for various average effects. The finite sample properties of the estimators are investigated in a simulation study. The new estimator is applied to Swedish data on participants in training, in a work practice program and in subsidized employment. One result is that enrolling an unemployed person twice in the same program or in two different programs one after the other leads to longer unemployment spells compared to only participating in a single program once. 
Keywords:  Treatment effects; dynamic treatment assignment; dynamic selection; program evaluation; work practice; training; subsidized employment 
JEL:  C14 C40 
Date:  2015–03–16 
URL:  http://d.repec.org/n?u=RePEc:hhs:ifauwp:2015_005&r=ecm 
By:  Jisu Yoon (Georg-August-University Göttingen); Stephan Klasen (Georg-August-University Göttingen); Axel Dreher (University of Heidelberg); Tatyana Krivobokova (Georg-August-University Göttingen) 
Abstract:  In this paper, we compare Principal Component Analysis (PCA) and Partial Least Squares (PLS) methods to generate weights for composite indices. In this context we also consider various treatments of non-metric variables when constructing such composite indices. Using simulation studies we find that dummy coding for non-metric variables yields satisfactory performance compared to more sophisticated statistical procedures. In our applications we illustrate how PLS can generate weights that differ substantially from those obtained with PCA, increasing the composite indices' predictive performance for the outcome variable considered. 
Keywords:  Principal Component Analysis; PCA; Partial Least Squares; PLS; non-metric variables; wealth index; globalization 
JEL:  C15 C43 R20 
Date:  2015–03–24 
URL:  http://d.repec.org/n?u=RePEc:got:gotcrc:171&r=ecm 
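The contrast between the two weighting schemes above is that PCA weights depend only on the covariance structure of the indicators, while PLS weights also use the outcome variable. A minimal sketch of first-component weights under both methods; the function names and the standardization choice are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pca_weights(X):
    """First principal component loadings of column-standardized X
    (outcome-free weights: driven only by the indicators' covariances)."""
    Z = (X - X.mean(0)) / X.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[0]

def pls_weights(X, y):
    """First PLS direction: each standardized column's covariance with y,
    normalized to unit length (outcome-driven weights)."""
    Z = (X - X.mean(0)) / X.std(0)
    w = Z.T @ (y - y.mean())
    return w / np.linalg.norm(w)

def composite_index(X, w):
    # Composite index as the weighted sum of standardized indicators.
    Z = (X - X.mean(0)) / X.std(0)
    return Z @ w
```

Because the PLS direction is proportional to the indicator-outcome covariances, an indicator that predicts the outcome gets a large weight even if it contributes little common variance, which is exactly how PLS indices can outperform PCA indices in predicting the chosen outcome.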
By:  Li-Fei Huang (Ming Chuan University) 
Abstract:  In the wood industry, it is common practice to compare lumber in terms of the ratio of two different strength properties for the same dimension, grade and species, or of the same strength property for two different dimensions, grades or species. Because United States lumber standards are given in terms of the population fifth percentile, and strength problems arise from the weaker fifth percentile rather than the stronger mean, the ratio should be expressed in terms of the fifth percentiles of the two strength distributions rather than their means. If n is the sample size, then n(n+1)/2 averages of sample points can be created by the Hodges-Lehmann method, which is utilized to construct the confidence interval for the mean. If n(n+1)/2 percentiles are found directly by adjusting the Hodges-Lehmann method, the resulting distribution is highly skewed. Therefore, the n(n+1)/2 percentiles are instead found by shifting those averages by a proper amount. The distribution of the [n(n+1)/2]^2 ratios of percentiles has large kurtosis and hence is not normal, so traditional approximation methods do not work in this case. The empirical confidence interval should be used for inference about the ratio of percentiles. Small samples are considered to keep [n(n+1)/2]^2 from becoming extremely large. 
Keywords:  Strength of lumber, Independent and small samples, Simulation of percentiles, Simulation of ratio of percentiles, Empirical confidence interval. 
JEL:  C00 C14 C15 
Date:  2014–05 
URL:  http://d.repec.org/n?u=RePEc:sek:iacpro:0100806&r=ecm 
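The counting in the abstract is concrete enough to sketch: n observations give n(n+1)/2 pairwise (Walsh) averages, two samples give [n(n+1)/2]^2 ratios, and the empirical quantiles of those ratios form the confidence interval. The shift applied below (empirical p-th percentile minus median) is one illustrative choice of the paper's "proper amount", not its actual construction; all function names are hypothetical.

```python
def walsh_averages(sample):
    """All n(n+1)/2 pairwise averages (x_i + x_j) / 2 with i <= j,
    as used in the Hodges-Lehmann construction."""
    n = len(sample)
    return [(sample[i] + sample[j]) / 2.0
            for i in range(n) for j in range(i, n)]

def empirical_percentile(values, p):
    # Order-statistic percentile of a list.
    s = sorted(values)
    k = min(int(p * len(s)), len(s) - 1)
    return s[k]

def shifted_percentile_draws(sample, p):
    """Shift each Walsh average so its centre moves from the median toward
    the p-th percentile (an illustrative 'proper amount')."""
    shift = empirical_percentile(sample, p) - empirical_percentile(sample, 0.5)
    return [w + shift for w in walsh_averages(sample)]

def ratio_ci(x, y, p=0.05, alpha=0.05):
    """Empirical CI from all [n(n+1)/2]^2 ratios of shifted averages."""
    ratios = [a / b
              for a in shifted_percentile_draws(x, p)
              for b in shifted_percentile_draws(y, p)]
    return (empirical_percentile(ratios, alpha / 2),
            empirical_percentile(ratios, 1 - alpha / 2))
```

With n = 8 per sample the ratio set already has 36^2 = 1296 elements, which illustrates why the paper restricts attention to small samples.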
By:  Shahariar Huda (Kuwait University) 
Abstract:  Longitudinal count data with excessive zeros occur frequently in social, biological, medical and health research. To model zero-inflated longitudinal count data, zero-inflated Poisson (ZIP) models are commonly used in the literature after separating zero and positive responses. As longitudinal count responses are likely to be serially correlated, such separation may destroy the underlying serial correlation structure. To overcome this problem, observation- and parameter-driven modelling approaches have recently been proposed for zero-inflated longitudinal count responses. In the observation-driven model, the response at a specific time point is modelled through the responses at previous time points, taking serial correlation into account. One limitation of the observation-driven model is that it fails to accommodate possible overdispersion, which occurs commonly in count responses. To overcome this limitation, we introduce a parameter-driven model, in which the serial correlation is captured through a latent process using random effects, and compare the results with the observation-driven model. A quasi-likelihood approach is developed to estimate the model parameters. We illustrate the methodology with analyses of two real-life data sets, and examine model performance by comparing the proposed model with the observation-driven ZIP model in a simulation study. 
Keywords:  Serial correlation; compound Poisson; ZIP models; quasi-likelihood 
JEL:  C10 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:sek:iacpro:0902796&r=ecm 
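The zero inflation and the overdispersion mentioned in the abstract both follow directly from the ZIP mixture itself, which is easy to write down; this is the standard ZIP distribution, not the paper's longitudinal random-effects model, and the function names are illustrative.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: a point mass pi at zero mixed with Poisson(lam),
    P(0) = pi + (1-pi) e^{-lam},  P(k) = (1-pi) e^{-lam} lam^k / k!  for k >= 1."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

def zip_mean_var(lam, pi):
    """ZIP moments: mean (1-pi) lam, variance mean * (1 + pi * lam);
    the variance strictly exceeds the mean whenever pi > 0 (overdispersion)."""
    mean = (1 - pi) * lam
    return mean, mean * (1 + pi * lam)
```

This built-in variance inflation is exactly the overdispersion that, per the abstract, the observation-driven formulation struggles to accommodate once the zeros and positives are modelled jointly over time.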
By:  Mai, Tien; Frejinger, Emma; Fosgerau, Mogens 
Abstract:  We propose a nested recursive logit (NRL) route choice model that relaxes the independence from irrelevant alternatives property of the logit model by allowing scale parameters to be link specific. As in the recursive logit (RL) model proposed by Fosgerau et al. (2013), the choice of path is modelled as a sequence of link choices, and the model does not require any sampling of choice sets. Furthermore, the model can be consistently estimated and efficiently used for prediction. A key challenge lies in the computation of the value functions, i.e. the expected maximum utility from any position in the network to a destination. The value functions are the solution to a system of nonlinear equations. We propose an iterative method with dynamic accuracy that solves these systems efficiently. We report estimation results and a cross-validation study for a real network. The results show that the NRL model yields sensible parameter estimates and fits significantly better than the RL model. Moreover, the NRL model outperforms the RL model in terms of prediction. 
Keywords:  route choice modelling; nested recursive logit; substitution patterns; value iterations; maximum likelihood estimation; cross-validation 
JEL:  C25 
Date:  2015–03–23 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:63161&r=ecm 
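The value functions described above satisfy a logsumexp fixed point over the network links, which can be solved by plain value iteration in the simplest case. Assumptions for this sketch: a common scale fixed at 1 (the NRL model's link-specific scales are what make the system nonlinear), a deterministic utility per link, and illustrative names throughout.

```python
import math

def solve_value_functions(succ, util, dest, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for logit value functions on a network:
    V(k) = log sum_{a in succ(k)} exp(u(k, a) + V(a)),  with V(dest) = 0.
    succ maps each non-destination node to its outgoing link endpoints
    (repeat an endpoint to represent parallel links); util maps (k, a) pairs
    to link utilities."""
    V = {k: 0.0 for k in succ}
    V[dest] = 0.0
    for _ in range(max_iter):
        V_new = {k: math.log(sum(math.exp(util[(k, a)] + V.get(a, 0.0))
                                 for a in succ[k]))
                 for k in succ}
        V_new[dest] = 0.0
        if max(abs(V_new[k] - V[k]) for k in V_new) < tol:
            return V_new
        V = V_new
    return V
```

On acyclic networks the iteration terminates in finitely many sweeps; on cyclic networks it converges only when link utilities are sufficiently negative, which is one reason the paper needs a more careful iterative scheme with dynamic accuracy.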
By:  Holopainen, Markus (RiskLab Finland at Arcada University of Applied Sciences, Helsinki, Finland); Sarlin, Peter (Goethe University, Center of Excellence SAFE; Department of Economics, Hanken School of Economics, Helsinki; RiskLab Finland at Arcada University of Applied Sciences, Helsinki, Finland) 
Abstract:  This paper presents first steps toward robust early-warning models. We conduct a horse race of conventional statistical methods and more recent machine learning methods. As early-warning models based upon one approach are oftentimes built in isolation of other methods, the exercise is of high relevance for assessing the relative performance of a wide variety of methods. Further, we test various ensemble approaches to aggregating the information products of the built early-warning models, providing a more robust basis for measuring country-level vulnerabilities. Finally, we provide approaches to estimating model uncertainty in early-warning exercises, particularly model performance uncertainty and model output uncertainty. The approaches put forward in this paper are demonstrated with Europe as a playground. 
Keywords:  financial stability; early-warning models; horse race; ensembles; model uncertainty 
JEL:  C43 E44 F30 G01 G15 
Date:  2015–03–04 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2015_006&r=ecm 
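The simplest of the ensemble aggregation schemes the abstract alludes to is a (weighted) average of the individual models' vulnerability probabilities, turned into a binary signal by a threshold. A minimal sketch under that assumption; the function name, equal default weights, and threshold value are illustrative, not the paper's specification.

```python
def ensemble_signal(prob_by_model, weights=None, threshold=0.5):
    """Aggregate early-warning probabilities from several models by a
    (weighted) average and issue a binary vulnerability signal."""
    models = list(prob_by_model)
    if weights is None:
        weights = {m: 1.0 / len(models) for m in models}  # equal weights
    p = sum(weights[m] * prob_by_model[m] for m in models)
    return p, p >= threshold
```

Performance-based weights (e.g., proportional to each model's out-of-sample usefulness) slot into the same interface, and the spread of the individual probabilities around the aggregate is one simple diagnostic for the model output uncertainty the paper discusses.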
By:  Joshua C.C. Chan (Research School of Economics, and Centre for Applied Macroeconomic Analysis, Australian National University); Rodney Strachan (School of Economics, and Centre for Applied Macroeconomic Analysis, University of Queensland; The Rimini Centre for Economic Analysis, Italy) 
Abstract:  The time-varying parameter vector autoregressive (TVP-VAR) model has been used successfully to model interest rates and other variables. As many short interest rates are now near their zero lower bound (ZLB), a feature not included in the standard TVP-VAR specification, this model is no longer appropriate. However, there remain good reasons to include short interest rates in macro models, such as to study the effect of a credit shock. We propose a TVP-VAR that accounts for the ZLB and study algorithms for computing this model that are less computationally burdensome than others yet handle many states well. To illustrate the proposed approach, we investigate the effect of the interest-rate zero lower bound on the transmission of a monetary shock. 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:42_14&r=ecm 