New Economics Papers on Econometrics
By: | Lunardon, Nicola; Ronchetti, Elvezio |
Abstract: | The class of composite likelihood functions provides a flexible and powerful toolkit to carry out approximate inference for complex statistical models when the full likelihood is either impossible to specify or infeasible to compute. However, the strength of the composite likelihood approach is dimmed when considering hypothesis testing about a multidimensional parameter, because the finite-sample behavior of likelihood ratio, Wald, and score-type test statistics is tied to the Godambe information matrix. Consequently, inaccurate estimates of the Godambe information translate into inaccurate p-values. In this paper it is shown how accurate inference can be obtained by using a fully nonparametric saddlepoint test statistic derived from the composite score functions. The proposed statistic is asymptotically chi-square distributed up to a relative error of second order and does not depend on the Godambe information. The validity of the method is demonstrated through simulation studies. |
Keywords: | Empirical likelihood methods, Godambe information, Likelihood ratio adjustment, Nonparametric inference, Pairwise likelihood, Relative error, Robust tests, Saddlepoint test, Small sample inference |
URL: | http://d.repec.org/n?u=RePEc:tre:wpaper:12&r=ecm |
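To make the setting concrete, here is a minimal Python sketch (not the authors' saddlepoint statistic) of a pairwise composite likelihood, the kind of object from which the composite score functions in the abstract are derived. The equicorrelated normal model and all names are illustrative assumptions.

```python
# Pairwise composite likelihood sketch: sum of bivariate normal
# log-densities over all coordinate pairs of an equicorrelated model.
import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def pairwise_loglik(rho, data):
    """Sum of bivariate normal log-densities over all coordinate pairs."""
    n, d = data.shape
    cov = np.array([[1.0, rho], [rho, 1.0]])
    total = 0.0
    for r, s in combinations(range(d), 2):
        total += multivariate_normal.logpdf(
            data[:, [r, s]], mean=np.zeros(2), cov=cov).sum()
    return total

rng = np.random.default_rng(0)
d, rho_true = 5, 0.4
sigma = rho_true * np.ones((d, d)) + (1 - rho_true) * np.eye(d)
y = rng.multivariate_normal(np.zeros(d), sigma, size=200)

# Maximize the pairwise log-likelihood over a grid of rho values.
grid = np.linspace(-0.2, 0.8, 101)
rho_hat = grid[np.argmax([pairwise_loglik(r, y) for r in grid])]
print(f"pairwise-likelihood estimate of rho: {rho_hat:.3f}")
```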
By: | Zhu, Ke; Ling, Shiqing |
Abstract: | This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimator (QMELE) for ARMA–GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given. |
Keywords: | ARMA–GARCH/IGARCH model; asymptotic normality; global self-weighted/local quasi-maximum exponential likelihood estimator; strong consistency. |
JEL: | C13 C5 |
Date: | 2013–11–17 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51509&r=ecm |
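A hedged sketch of the Laplace (exponential) quasi-likelihood objective behind the QMELE, for an AR(1)-GARCH(1,1) special case. The self-weights below are a simple illustrative choice, not the exact weights of Zhu and Ling, and the GARCH scale is identified only up to the normalization E|eta_t| = 1.

```python
# Self-weighted QMELE sketch: minimize a weighted Laplace quasi-likelihood
# for an AR(1) mean with GARCH(1,1) conditional variance.
import numpy as np
from scipy.optimize import minimize

def neg_qmele(params, y, w):
    phi, a0, a1, b1 = params
    eps = y[1:] - phi * y[:-1]                  # AR(1) residuals
    h = np.empty_like(eps)
    h[0] = eps.var()
    for t in range(1, len(eps)):
        h[t] = max(a0 + a1 * eps[t - 1] ** 2 + b1 * h[t - 1], 1e-8)
    s = np.sqrt(h)
    # Laplace (exponential) quasi-likelihood term: log s_t + |eps_t| / s_t
    return np.sum(w * (np.log(s) + np.abs(eps) / s))

# Simulate AR(1)-GARCH(1,1) with heavy-tailed innovations.
rng = np.random.default_rng(1)
n = 1000
eta = rng.standard_t(df=5, size=n)
phi0, a00, a10, b10 = 0.3, 0.1, 0.15, 0.7
eps = np.empty(n); h = np.empty(n); y = np.empty(n)
h[0] = a00 / (1 - a10 - b10)
eps[0] = np.sqrt(h[0]) * eta[0]
y[0] = eps[0]
for t in range(1, n):
    h[t] = a00 + a10 * eps[t - 1] ** 2 + b10 * h[t - 1]
    eps[t] = np.sqrt(h[t]) * eta[t]
    y[t] = phi0 * y[t - 1] + eps[t]

# Simple illustrative self-weights: downweight points preceded by large y.
w = 1.0 / (1.0 + np.abs(y[:-1]))
res = minimize(neg_qmele, x0=[0.0, 0.05, 0.1, 0.5], args=(y, w),
               method="Nelder-Mead")
print("(phi, a0, a1, b1) estimates:", np.round(res.x, 3))
```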
By: | Boris Kaiser |
Abstract: | When decomposing differences in average economic outcomes between two groups of individuals, it is common practice to base the analysis on logarithms if the dependent variable is nonnegative. This paper argues that this approach raises a number of undesired statistical and conceptual issues, because the decomposition terms then have the interpretation of approximate percentage differences in geometric means. Instead, we suggest that the analysis be based on the arithmetic means of the original dependent variable. We present a flexible parametric decomposition framework that can be used for all types of continuous (or count) nonnegative dependent variables. In particular, we derive a propensity-score-weighted estimator for the aggregate decomposition that is "doubly robust", that is, consistent under two separate sets of assumptions. A comparative Monte Carlo study illustrates that the proposed estimator performs well in many situations. An application to the union wage gap in the United States finds that the importance of the unexplained union wage premium is much smaller than suggested by the standard log-wage decomposition. |
Keywords: | Oaxaca-Blinder; Decomposition Methods; Quasi-Maximum-Likelihood; Doubly Robust Estimation; Arithmetic and Geometric Means; Inverse Probability Weighting |
JEL: | C10 C50 C51 J31 |
Date: | 2013–10 |
URL: | http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1308&r=ecm |
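As an illustration of the "doubly robust" idea, here is a generic AIPW sketch of a counterfactual mean on the original (arithmetic) scale, combining a Poisson-QML exponential mean model with normalized propensity-score weights. It follows the logic described in the abstract but is not necessarily Kaiser's exact estimator; all data and names are illustrative.

```python
# Doubly robust counterfactual mean: the mean outcome group A would have
# had under group B's outcome structure. Consistent if either the outcome
# model or the propensity model is correct.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 4000
g = rng.integers(0, 2, n)                       # 1 = group A, 0 = group B
x = rng.normal(0.5 * g, 1.0, n)                 # covariate differs by group
y = rng.poisson(np.exp(0.2 + 0.5 * x + 0.3 * g))  # nonnegative outcome

X = sm.add_constant(x)
# Outcome model fitted on group B only (exponential mean via Poisson QML).
m_B = sm.GLM(y[g == 0], X[g == 0], family=sm.families.Poisson()).fit()
mhat = m_B.predict(X)                           # m_B(x) for everyone

# Propensity score for membership in group A, then odds weights for B units.
ps = sm.Logit(g, X).fit(disp=0).predict(X)
w = ps / (1 - ps)
w_B = w[g == 0] / w[g == 0].sum()

# AIPW: outcome-model term plus propensity-weighted residual correction.
mu_cf = mhat[g == 1].mean() + np.sum(w_B * (y[g == 0] - mhat[g == 0]))
explained = mu_cf - y[g == 0].mean()            # composition effect
unexplained = y[g == 1].mean() - mu_cf          # "premium" term
print(f"explained: {explained:.3f}, unexplained: {unexplained:.3f}")
```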
By: | Yasumasa Matsuda |
Abstract: | This paper considers the analysis of nonstationary, irregularly spaced data that may include multivariate observations. The nonstationarity we focus on here is a local dependency of the parameters that describe the covariance structure. Nonparametric and parametric ways to estimate this local dependency are proposed by extending the traditional periodogram for stationary time series to nonstationary spatial data. We introduce locally stationary processes for which consistency of the estimators is proved, and we demonstrate the empirical efficiency of the methods with simulated and real examples. |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:toh:tergaa:305&r=ecm |
By: | Nobuhiko Terui; Masataka Ban |
Abstract: | In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson-multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To address the over-dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. In an empirical application using weekly product sales time series from a store, we compare the proposed over-dispersed models with alternative models without over-dispersion by several model selection criteria, including in-sample fit, out-of-sample forecasting errors, and an information criterion. The empirical results show that the proposed modeling works well: the over-dispersed models based on compound Poisson variables improve on models that do not account for over-dispersion. |
Date: | 2013–01 |
URL: | http://d.repec.org/n?u=RePEc:toh:tmarga:113&r=ecm |
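A minimal sketch of the two compound distributions the abstract connects hierarchically: Gamma compound Poisson totals (over-dispersed counts) and Dirichlet compound multinomial shares. The numbers are illustrative only.

```python
# Gamma-Poisson and Dirichlet-multinomial compounding for over-dispersion.
import numpy as np

rng = np.random.default_rng(3)
n_weeks, n_products = 500, 4

# Gamma-Poisson: the Poisson rate is itself Gamma-distributed, which makes
# the marginal counts negative binomial, with variance exceeding the mean.
shape, mean_rate = 2.0, 20.0
lam = rng.gamma(shape, mean_rate / shape, n_weeks)
totals = rng.poisson(lam)
print("mean:", totals.mean(), "variance:", totals.var())   # var > mean

# Dirichlet-multinomial: share probabilities drawn from a Dirichlet prior,
# so product shares are more variable than plain multinomial shares.
alpha = np.array([8.0, 4.0, 2.0, 1.0])
p = rng.dirichlet(alpha, n_weeks)
counts = np.array([rng.multinomial(t, pi) for t, pi in zip(totals, p)])
print("product-level counts, first weeks:\n", counts[:3])
```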
By: | Daniel Ventosa-Santaulària (División de Economía, CIDE); Carlos Vladimir Rodríguez-Caballero (Aarhus University and CREATES) |
Abstract: | Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis, and psychology, to mention just a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips' (1986) results by proving that inference drawn from polynomial specifications under stochastic nonstationarity is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, are a call for caution whenever practitioners estimate polynomial regressions. |
Keywords: | Polynomial Regression; Misleading Inference; Integrated Processes |
JEL: | C12 C15 C22 |
Date: | 2013–11–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2013-40&r=ecm |
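The paper's warning is easy to reproduce: regressing one random walk on a polynomial of another, independent random walk yields "significant" coefficients far more often than the nominal level, a polynomial analogue of the classic spurious regression of Phillips (1986). A small Monte Carlo sketch:

```python
# Spurious polynomial regression: two independent random walks, quadratic
# specification, nominal 5% t-tests rejected far too often.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T, reps, rejections = 200, 1000, 0
for _ in range(reps):
    x = rng.normal(size=T).cumsum()          # independent random walks
    y = rng.normal(size=T).cumsum()
    X = sm.add_constant(np.column_stack([x, x ** 2]))
    fit = sm.OLS(y, X).fit()
    # Count as spurious if either polynomial term "looks significant".
    rejections += (fit.pvalues[1:] < 0.05).any()
print(f"spurious rejection rate: {rejections / reps:.2f}")  # far above 0.05
```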
By: | Ypma, JY |
Abstract: | This thesis contains three chapters on dynamic models with discrete and continuous outcomes. In the first chapter, I focus on indirect inference estimation. Indirect inference is used to estimate parameters in models where direct evaluation of the objective function is complicated or infeasible. Indirect inference is typically formulated as an optimization problem nesting one or more other optimization problems. In some cases the solution to the inner optimization problems can be obtained in one step, but when such a solution is not available, indirect inference estimation is computationally demanding. I show how constrained optimization methods can be used to replace the nesting of optimization problems, and I provide Monte Carlo evidence showing when this approach is beneficial. The second chapter uses panel data from the United Kingdom to estimate a model of wage dynamics with labour participation where the variance in wages is decomposed into a permanent and a transitory component. Most studies that estimate similar models ignore non-participation; individuals without a wage are simply removed from the analysis. This leads to biased parameter estimates if working individuals differ in their unobservable characteristics from people who do not work. I use a dynamic selection model to include a discrete labour participation choice in a simple model of wage dynamics and compare the results to a version of the model that does not include labour participation. In the third chapter, I show how some of the assumptions on the dynamics of the unobservables in the second chapter can be relaxed. High-dimensional integrals have to be approximated to estimate the less restrictive models. I use sparse grids and simulation methods to approximate these integrals and compare their performance on simulated data. |
Date: | 2013–03–28 |
URL: | http://d.repec.org/n?u=RePEc:ner:ucllon:http://discovery.ucl.ac.uk/1386923/&r=ecm |
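The first chapter's idea can be sketched in a toy example: estimate an MA(1) by indirect inference through an AR(1) auxiliary model, replacing the nested loop with a joint optimization over structural and auxiliary parameters in which the auxiliary first-order condition on simulated data is imposed as a constraint. The model choices here are illustrative, not the thesis's applications.

```python
# Constrained-optimization formulation of indirect inference:
# minimize the distance between data and simulated auxiliary parameters,
# subject to the auxiliary model's first-order condition on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T, theta0 = 500, 0.5
e = rng.normal(size=T + 1)
y = e[1:] + theta0 * e[:-1]                     # observed MA(1) data

def ar1_ols(z):
    return z[1:] @ z[:-1] / (z[:-1] @ z[:-1])   # auxiliary AR(1) estimate

beta_data = ar1_ols(y)
u = rng.normal(size=10 * T + 1)                 # fixed simulation draws

def simulate(theta):
    return u[1:] + theta * u[:-1]               # simulated MA(1) path

def foc(params):                                # auxiliary FOC on sim data
    theta, beta = params
    z = simulate(theta)
    return (z[1:] - beta * z[:-1]) @ z[:-1] / len(z)

res = minimize(lambda p: (beta_data - p[1]) ** 2, x0=[0.2, 0.2],
               constraints=[{"type": "eq", "fun": foc}], method="SLSQP")
print(f"theta_hat = {res.x[0]:.3f} (true {theta0})")
```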
By: | Manfred M. Fischer (Department of Socioeconomics, Vienna University of Economics and Business); Philipp Piribauer (Department of Economics, Vienna University of Economics and Business) |
Abstract: | This paper considers the problem of model uncertainty associated with variable selection and specification of the spatial weight matrix in spatial growth regression models in general and growth regression models based on the matrix exponential spatial specification in particular. A natural solution, supported by formal probabilistic reasoning, is the use of Bayesian model averaging which assigns probabilities on the model space and deals with model uncertainty by mixing over models, using the posterior model probabilities as weights. This paper proposes to adopt Bayesian information criterion model weights since they have computational advantages over fully Bayesian model weights. The approach is illustrated for both identifying model covariates and unveiling spatial structures present in pan-European growth data. |
Keywords: | model comparison, model uncertainty, spatial Durbin matrix exponential growth models, spatial weight structures, European regions |
JEL: | C11 C21 C52 O47 O52 R11 |
Date: | 2013–10 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp158&r=ecm |
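BIC model weights are straightforward to compute: each model's weight is proportional to exp(-BIC/2), an approximation to its posterior model probability under equal prior model probabilities. A minimal sketch with hypothetical BIC values:

```python
# BIC-based model averaging weights: w_m proportional to exp(-BIC_m / 2).
import numpy as np

bic = np.array([1012.4, 1009.8, 1015.1])   # hypothetical BIC values
delta = bic - bic.min()                    # rescale for numerical stability
w = np.exp(-0.5 * delta)
w /= w.sum()
print(np.round(w, 3))                      # BMA weights over the models
```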
By: | Boris Kaiser |
Abstract: | We propose a new approach for performing detailed decompositions of average outcome differentials, which can be applied to all types of generalized linear models. A simulation exercise demonstrates that our method produces more convincing results than existing methods. An empirical application to the immigrant-native wage differential in Switzerland is presented. |
Keywords: | Oaxaca-Blinder; Detailed Decomposition; Generalized Linear Models |
JEL: | C10 C50 C51 J31 |
Date: | 2013–10 |
URL: | http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1309&r=ecm |
By: | Autin, Florent; Claeskens, Gerda; Freyermuth, Jean-Marc |
Abstract: | In this paper, we differentiate between isotropic and hyperbolic wavelet bases in the context of multivariate nonparametric function estimation. The study of the latter leads to new phenomena and nontrivial extensions of univariate results. In this context, we first exhibit the limitations of isotropic wavelet estimators by proving that no isotropic estimator is able to guarantee the reconstruction of a function with anisotropy in an optimal or near-optimal way. Second, we show that hyperbolic wavelet estimators are well suited to reconstructing anisotropic functions. In particular, for each considered estimator we focus on the rates at which it can reconstruct functions from anisotropic Besov spaces. We then compute each estimator's maxiset, that is, the largest functional space over which its risk converges at these rates. Our results furnish novel arguments for understanding the central role of sparsity and thresholding in multivariate contexts, notably by showing how linear methods are exposed to the curse of dimensionality. Moreover, we propose a block-thresholding hyperbolic estimator, show its ability to estimate anisotropic functions at the optimal minimax rate and, relatedly, demonstrate the continued pertinence of information pooling in high-dimensional settings. |
Keywords: | Anisotropy; Besov spaces; Minimax and maxiset approaches; Linear and nonlinear methods; Thresholding rules; Multidimensional wavelet basis |
Date: | 2013–01 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/377183&r=ecm |
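A hedged illustration of wavelet thresholding for a noisy anisotropic function. Note that PyWavelets' standard 2-D transform is isotropic (one resolution level shared by both directions); the paper's hyperbolic estimators use full tensor-product bases with per-direction levels, which this sketch does not implement.

```python
# Soft thresholding of 2-D wavelet coefficients (isotropic transform).
import numpy as np
import pywt

rng = np.random.default_rng(6)
n = 128
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
f = np.sin(20 * np.pi * xx) * (yy > 0.5)        # anisotropic test function
data = f + 0.3 * rng.normal(size=(n, n))        # noisy observations

coeffs = pywt.wavedec2(data, "db4", level=4)
thr = 0.3 * np.sqrt(2 * np.log(n * n))          # universal threshold
den = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="soft") for c in level)
    for level in coeffs[1:]
]
fhat = pywt.waverec2(den, "db4")
print("noise MSE:", np.mean((data - f) ** 2),
      "estimate MSE:", np.mean((fhat - f) ** 2))
```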
By: | Huber, Peter (Austrian Institute of Economic Research); Oberhofer, Harald (University of Salzburg); Pfaffermayr, Michael (University of Innsbruck)
Abstract: | This paper analyzes econometric models of the Davis, Haltiwanger and Schuh (1996) job creation rate. In line with the most recent job creation literature, we focus on employment-weighted OLS estimation. Our main theoretical result reveals that employment-weighted OLS estimation of DHS job creation rate models provides biased estimates of marginal effects. The reason is that, by definition, the error terms for entering and exiting firms are non-stochastic and non-zero. This violates the crucial mean independence assumption requiring that the conditional expectation of the errors be zero for all firms. Consequently, we argue that firm entries and exits should be analyzed with separate econometric models, and we propose alternative maximum likelihood estimators that are easy to implement. A small-scale Monte Carlo analysis and an empirical exercise using the population of Austrian firms point to the relevance of our analytical findings. |
Keywords: | DHS job creation rate; firm size; firm age; maximum likelihood estimation; three-part model; multi-part model; Monte Carlo simulation |
JEL: | C18 C53 D22 E24 L25 L26 M13 |
Date: | 2013–11–15 |
URL: | http://d.repec.org/n?u=RePEc:ris:sbgwpe:2013_005&r=ecm |
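For reference, the DHS job creation rate divides the employment change by average employment across the two periods, so it lies in [-2, 2] with entrants at exactly +2 and exits at exactly -2; these boundary observations are the source of the degenerate, non-stochastic error terms the paper highlights. A minimal sketch with made-up firm data:

```python
# DHS job creation rate: g = (e_t - e_{t-1}) / (0.5 * (e_t + e_{t-1})).
import numpy as np

e_prev = np.array([0.0, 10.0, 50.0, 8.0])   # employment in t-1 (0 = entrant)
e_curr = np.array([5.0, 12.0, 45.0, 0.0])   # employment in t   (0 = exit)
avg = 0.5 * (e_prev + e_curr)
g = (e_curr - e_prev) / avg                  # DHS job creation rate
print(g)                                     # [ 2.     0.182 -0.105 -2.   ]
```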
By: | Timothy B. Armstrong (Cowles Foundation, Yale University); Shu Shen (Dept. of Economics, UC Davis) |
Abstract: | We consider inference on optimal treatment assignments. Our methods are the first to allow for inference on the treatment assignment rule that would be optimal given knowledge of the population treatment effect in a general setting. The procedure uses multiple hypothesis testing methods to determine a subset of the population for which assignment to treatment can be determined to be optimal after conditioning on all available information, with a prespecified level of confidence. A Monte Carlo study confirms that the procedure has good small-sample behavior. We apply the method to the Mexican conditional cash transfer program Progresa, demonstrating how the method can be used to design efficient welfare programs by selecting the right beneficiaries and by statistically quantifying how strong the evidence is in favor of treating these selected individuals. |
Keywords: | Treatment assignment, Multiple testing |
JEL: | C10 C18 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1927&r=ecm |
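The logic of assigning treatment only where the evidence survives a familywise-error correction can be sketched with a Holm step-down procedure over subgroup-level one-sided tests. This illustrates the idea only; it is not the authors' exact procedure, and all data are simulated.

```python
# Treat only the subgroups where H0 (effect <= 0) is rejected under a
# familywise-error-controlling step-down procedure (Holm, alpha = 5%).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_groups, n_per = 8, 200
true_effects = np.array([0.5, 0.4, 0.3, 0.0, 0.0, 0.0, -0.2, 0.2])

pvals = np.empty(n_groups)
for k in range(n_groups):
    y0 = rng.normal(0.0, 1.0, n_per)                 # controls
    y1 = rng.normal(true_effects[k], 1.0, n_per)     # treated
    t, p_two = stats.ttest_ind(y1, y0)
    pvals[k] = p_two / 2 if t > 0 else 1 - p_two / 2  # one-sided p-value

# Holm step-down: with 95% confidence, every selected subgroup benefits.
order = np.argsort(pvals)
treat = np.zeros(n_groups, dtype=bool)
for rank, k in enumerate(order):
    if pvals[k] <= 0.05 / (n_groups - rank):
        treat[k] = True
    else:
        break
print("assign treatment in subgroups:", np.where(treat)[0])
```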
By: | Behl, Peter; Claeskens, Gerda; Dette, Holger |
Abstract: | We consider the problem of model selection for quantile regression analysis, where a particular purpose of the modeling procedure has to be taken into account. Typical examples include estimation of the area under the curve in pharmacokinetics or estimation of the minimum effective dose in phase II clinical trials. A focused information criterion for quantile regression is developed, analyzed, and investigated by means of a simulation study and a data analysis. |
Keywords: | Quantile regression; Model selection; Focused information criterion |
Date: | 2013–01 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/377177&r=ecm |
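Schematically, the focused information criterion of Claeskens and Hjort scores each candidate model S by an estimate of the mean squared error of that model's estimator of the focus parameter mu (here, for example, the area under the curve or the minimum effective dose), and selects the minimizer. This is the generic form of the criterion, not the paper's quantile-regression-specific construction:

```latex
\[
\mathrm{FIC}(S) \;=\;
\widehat{\mathrm{bias}}{}^{\,2}\!\left(\hat{\mu}_S\right)
\;+\; \widehat{\mathrm{Var}}\!\left(\hat{\mu}_S\right),
\qquad
\hat{S} \;=\; \arg\min_{S}\, \mathrm{FIC}(S).
\]
```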
By: | Crabbe, Marjolein; Akinc, Deniz; Vandebroek, Martina |
Abstract: | The mixed logit choice model has become the common standard to analyze transport behavior. Efficient design of the corresponding choice experiments is therefore indispensable to obtain precise knowledge of travelers' preferences. Accounting for the individual-specific coefficients in the model, this research advocates an individualized design approach. Individualized designs are generated sequentially for each person separately, using the answers to previous choice sets to select the next best set in the survey. In this way they are adapted to the specific preferences of an individual and are therefore more efficient than an aggregate design approach. For individual sequential designs to be practicable, the speed with which an additional choice set can be designed is obviously a key issue. This paper introduces three design criteria used in optimal test design, based on Kullback-Leibler information, and compares them with the well-known D-efficiency criterion for obtaining individually adapted choice designs for the mixed logit choice model. Being as efficient as the D-efficiency criterion while much faster to compute, the Kullback-Leibler criteria are well suited for the design of individualized choice experiments. |
Keywords: | Discrete choice; Mixed logit; Individualized design; D-efficiency; Kullback-Leibler information; |
Date: | 2013–02 |
URL: | http://d.repec.org/n?u=RePEc:ner:leuven:urn:hdl:123456789/390699&r=ecm |
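One common Kullback-Leibler criterion in sequential Bayesian design, given here as a hedged sketch rather than the paper's exact three criteria, selects the next choice set to maximize the expected divergence between the updated and current posteriors over the coefficients beta, where the expectation is over the respondent's next answer y given candidate set s:

```latex
\[
s_{t+1} \;=\; \arg\max_{s}\;
\mathbb{E}_{\,y \mid s,\; y_{1:t}}\!\left[\,
\mathrm{KL}\!\left(\, p(\beta \mid y_{1:t}, y) \,\middle\|\, p(\beta \mid y_{1:t}) \,\right)
\right].
\]
```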
By: | Juan Carlos Parra-Alvarez (Aarhus University and CREATES) |
Abstract: | This paper evaluates the accuracy of a set of techniques that approximate the solution of continuous-time DSGE models. Using the neoclassical growth model, I compare linear-quadratic, perturbation, and projection methods. All techniques are applied to the HJB equation and the optimality conditions that define the general equilibrium of the economy. Two cases are studied, depending on whether a closed-form solution is available. I also analyze how different degrees of non-linearity affect the approximated solution. The results encourage the use of perturbation methods for reasonable values of the structural parameters of the model and suggest the use of projection methods when a high degree of accuracy is required. |
Keywords: | Continuous-Time DSGE Models, Linear-Quadratic Approximation, Perturbation Method, Projection Method |
JEL: | C63 C68 E32 |
Date: | 2013–11–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2013-39&r=ecm |
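For reference, all three techniques are applied to the Hamilton-Jacobi-Bellman equation; for the deterministic neoclassical growth model with utility u, production f, depreciation delta and discount rate rho it reads:

```latex
\[
\rho\, V(k) \;=\; \max_{c}\;
\Big\{\, u(c) \;+\; V'(k)\,\big[\, f(k) - \delta k - c \,\big] \Big\}.
\]
```

Perturbation methods expand V around the deterministic steady state, while projection methods approximate V by basis functions (for instance Chebyshev polynomials) and impose the equation at collocation nodes.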