New Economics Papers on Econometrics |
By: | Timo Teräsvirta (Aarhus University and CREATES); Yukai Yang (CORE, Université catholique de Louvain and CREATES) |
Abstract: | We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy is illustrated by two applications. In the first one, the dynamic relationship between the US gasoline price and consumption is studied and possible asymmetries in it considered. The second application consists of modelling two well known Icelandic riverflow series, previously considered by many hydrologists and time series analysts. JEL Classification: C32, C51, C52 |
Keywords: | Vector STAR model, Modelling nonlinearity, Vector autoregression, Generalized impulse response, Asymmetry, Oil price, River flow |
Date: | 2014–03–21 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2014-08&r=ecm |
By: | Marc Henry (Département de sciences économiques); Koen Jochmans (Département d'économie); Bernard Salanié (Columbia University) |
Abstract: | Two-component mixtures are nonparametrically identified under tail-dominance conditions on the component distributions if a source of variation is available that affects the mixing proportions but not the component distributions. We motivate these restrictions through several examples. One interesting example is a location model where the location parameter is subject to classical measurement error. The identification analysis suggests very simple closed-form estimators of the component distributions and mixing proportions based on ratios of intermediate quantiles. We derive their asymptotic properties using results on tail empirical processes, and we provide simulation evidence on their finite-sample performance. |
Keywords: | mixture model, nonparametric identification and estimation, tail empirical process |
Date: | 2013–12 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/f6h8764enu2lskk9p2m96cphi&r=ecm |
By: | Nicholas Longford |
Abstract: | The restricted maximum likelihood is preferred by many to the full maximum likelihood for estimation with variance component and other random coefficient models, because the variance estimator is unbiased. It is shown that this unbiasedness is accompanied in some balanced designs by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood is derived. Estimators of the variance ratio are also studied. |
Keywords: | efficiency, random effects, truncation, variance component. |
Date: | 2014–03 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1415&r=ecm |
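The bias/MSE trade-off discussed in the previous entry can be illustrated by simulation. The paper's setting is variance-component models; the minimal sketch below (assuming numpy) uses the simplest analogue — an i.i.d. normal sample, where the ML variance estimator divides the sum of squares by n and the unbiased (REML-style) estimator by n − 1 — to show that unbiasedness can come at the cost of a larger mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 10, 1.0, 200_000

# many samples of size n from N(0, sigma2)
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

ml = ss / n          # maximum likelihood: biased downward
reml = ss / (n - 1)  # REML-style: unbiased

mse_ml = ((ml - sigma2) ** 2).mean()
mse_reml = ((reml - sigma2) ** 2).mean()
print(mse_ml, mse_reml)  # ML has the smaller MSE despite its bias
```

For normal data the exact MSEs are (2n − 1)σ⁴/n² for the ML divisor and 2σ⁴/(n − 1) for the unbiased divisor, so ML wins for every n; Longford's point is that a similar inflation can occur for REML in balanced variance-component designs.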
By: | Costantini, Mauro; Lupi, Claudio |
Abstract: | Sequential panel selection methods (spsms) are based on the repeated application of panel unit root tests and are increasingly used to identify I(0) time series in macro-panels. We check the reliability of spsms using Monte Carlo simulations in which the individual test statistics and the p-values to be combined into panel unit root tests are generated both under the unit root null and under selected local alternatives. The analysis considers both independent and dependent test statistics. We show that spsms do not possess better classification performance than conventional univariate tests. |
Keywords: | Unit root, Panel data, ROC curve, Simulation |
JEL: | C12 C15 C23 |
Date: | 2014–03–29 |
URL: | http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp14073&r=ecm |
By: | Yoshitsugu Kitazawa (Faculty of Economics, Kyushu Sangyo University) |
Abstract: | This paper advocates transformations for the consistent estimation of the full-fledged fixed effects zero-inflated Poisson model, in which zero outcomes can arise from both the logit and the Poisson part and both parts include fixed effects. Valid moment conditions are constructed on the basis of these transformations. The finite-sample behaviour of GMM and EL estimators employing the moment conditions is investigated using Monte Carlo experiments. |
Keywords: | fixed effects zero-inflated Poisson model; predetermined explanatory variables in Poisson part; moment conditions; GMM; EL; Monte Carlo experiments |
JEL: | C23 C25 C51 |
Date: | 2014–04 |
URL: | http://d.repec.org/n?u=RePEc:kyu:dpaper:66&r=ecm |
By: | Jose Olmo; William Pouliot |
Abstract: | This article introduces a U-statistic type process constructed from a kernel that can depend on nuisance parameters. It is shown that this process can accommodate, in a straightforward manner, anti-symmetric kernels, which have proved useful for detecting changing patterns in the dynamics of time series, as well as weight functions. Weight functions have been shown to improve the power of test statistics employed to detect these changing patterns throughout the evaluation period, early and late alike. Theory and related test statistics are developed here and applied to the detection of structural breaks in linear regression models (LRMs). This flexibility is exploited to develop tests for changes in the intercept or slope of an LRM that are robust to changes in the remaining model parameters. The statistics developed here are applied to detect changing patterns in mutual fund managers' stock-selection ability over the period 2001 to 2010. |
Keywords: | Change-Point tests; CUSUM test; Linear regression models; Stochastic processes; U-Statistics |
JEL: | C12 C22 C52 |
Date: | 2014–03 |
URL: | http://d.repec.org/n?u=RePEc:bir:birmec:14-02&r=ecm |
By: | A.S. Hurn (School of Economics and Finance, Queensland University of Technology); Annastiina Silvennoinen (School of Economics and Finance, Queensland University of Technology); Timo Teräsvirta (Aarhus University and CREATES) |
Abstract: | JEL Classification: C23, C51, L94, Q41. |
Keywords: | Smooth transition, binary choice model, logit model, electricity spot prices, peak load pricing, price spikes |
Date: | 2014–03–28 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2014-09&r=ecm |
By: | Geert Dhaene; Koen Jochmans (Département d'économie) |
Abstract: | We calculate the bias of the profile score for the regression coefficients in a multistratum autoregressive model with stratum-specific intercepts. The bias is free of incidental parameters. Centering the profile score delivers an unbiased estimating equation and, upon integration, an adjusted profile likelihood. A variety of other approaches to constructing modified profile likelihoods are shown to yield equivalent results. However, the global maximizer of the adjusted likelihood lies at infinity for any sample size, and the adjusted profile score has multiple zeros. We argue that the parameters are local maximizers inside or on an ellipsoid centered at the maximum likelihood estimator. |
Keywords: | adjusted likelihood, autoregression, incidental parameters, local maximizer, recentered estimating equation |
Date: | 2013–01 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/dambferfb7dfprc9m052g20qh&r=ecm |
By: | Quoreshi, A.M.M. Shahiduzzaman (CITR, Blekinge Inst of Technology) |
Abstract: | We develop a model to account for the long memory property in a bivariate count data framework. We propose a bivariate integer-valued fractionally integrated (BINFIMA) model and apply it to high-frequency stock transaction data. The BINFIMA model allows for both positive and negative correlations between the counts. The unconditional and conditional first- and second-order moments are given. The CLS and FGLS estimators are discussed. The model is capable of capturing the covariance between and within intra-day time series of high-frequency transaction data due to macroeconomic news and news related to a specific stock. Empirically, Ericsson B is found to follow a mean-reverting process while AstraZeneca exhibits the long memory property. Ericsson B and AstraZeneca are also found to react in a similar way to macroeconomic news. |
Keywords: | Count data; Intra-day; Time series; Estimation; Reaction time; Finance |
JEL: | C13 C22 C25 C51 G12 G14 |
Date: | 2014–04–02 |
URL: | http://d.repec.org/n?u=RePEc:hhs:bthcsi:2014-003&r=ecm |
By: | Hisayuki Tsukuma (Faculty of Medicine, Toho University); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo) |
Abstract: | This paper addresses the problem of estimating the normal mean matrix with an unknown covariance matrix. Motivated by an empirical Bayes method, we suggest a unified form of the Efron-Morris type estimators based on the Moore-Penrose inverse. This form not only can be defined for any dimension and any sample size, but also can contain the Efron-Morris type or Baranchik type estimators suggested so far in the literature. Also, the unified form suggests a general class of shrinkage estimators. For shrinkage estimators within the general class, a unified expression of unbiased estimators of the risk functions is derived regardless of the dimension of the covariance matrix and the size of the mean matrix. An analytical dominance result is provided for a positive-part rule of the shrinkage estimators. |
Date: | 2014–03 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2014cf926&r=ecm |
By: | Li, Yushu (Dept. of Business and Management Science, Norwegian School of Economics); Andersson, Jonas (Dept. of Business and Management Science, Norwegian School of Economics) |
Abstract: | In this paper, we propose a likelihood ratio and Markov chain based method for evaluating density forecasts. The method jointly evaluates the unconditional forecast distribution and the dependence of the outcomes. It extends the widely applied evaluation method for interval forecasts proposed by Christoffersen (1998), and is a more refined approach than the purely contingency-table based density forecast evaluation in Wallis (2003). We show that our method has very high power against incorrect forecast distributions and dependence. Moreover, the simplicity and ease of application of this joint test give it high potential for further applications in both finance and economics. |
Keywords: | Likelihood ratio test; Markov Chain; Density forecasting |
JEL: | C14 C53 C61 |
Date: | 2014–03–25 |
URL: | http://d.repec.org/n?u=RePEc:hhs:nhhfms:2014_012&r=ecm |
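The Christoffersen (1998) building block that the previous entry extends — a likelihood ratio test of first-order independence for a binary "hit" sequence, fitted as a two-state Markov chain — can be sketched as follows. This is a generic illustration of that classic test, not the authors' joint procedure; numpy is assumed and the function name is invented for the example:

```python
import numpy as np

def markov_lr_independence(hits):
    """LR test of first-order independence for a 0/1 hit sequence:
    unrestricted two-state Markov chain vs. a constant hit probability."""
    h = np.asarray(hits, dtype=int)
    a, b = h[:-1], h[1:]
    n00 = int(np.sum((a == 0) & (b == 0)))
    n01 = int(np.sum((a == 0) & (b == 1)))
    n10 = int(np.sum((a == 1) & (b == 0)))
    n11 = int(np.sum((a == 1) & (b == 1)))
    p01 = n01 / (n00 + n01)          # P(hit | no hit yesterday)
    p11 = n11 / (n10 + n11)          # P(hit | hit yesterday)
    p = (n01 + n11) / (n00 + n01 + n10 + n11)  # restricted: one probability

    def loglik(q01, q11, eps=1e-12):
        return (n00 * np.log(1 - q01 + eps) + n01 * np.log(q01 + eps)
                + n10 * np.log(1 - q11 + eps) + n11 * np.log(q11 + eps))

    # asymptotically chi-square(1) under independence; 5% critical value 3.84
    return 2 * (loglik(p01, p11) - loglik(p, p))

rng = np.random.default_rng(1)
iid_hits = rng.random(2000) < 0.1                 # independent hits
clustered = np.repeat(rng.random(250) < 0.1, 8)   # strongly clustered hits
lr_iid = markov_lr_independence(iid_hits)
lr_clustered = markov_lr_independence(clustered)
print(lr_iid, lr_clustered)  # clustering inflates the statistic far past 3.84
```

The paper's contribution is to evaluate the whole forecast density, not just one interval, but the transition-count logic above is the common core.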
By: | Yi-Ting Chen (Institute of Economics, Academia Sinica, Taipei, Taiwan); Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Hung-Jen Wang (Department of Economics, National Taiwan University; Institute of Economics, Academia Sinica, Taipei, Taiwan) |
Abstract: | Government policies are frequently used throughout the world to promote productivity. While some policies are designed to work through technology enhancement, others are meant to exert their influence through efficiency improvement. It is therefore important to have a program evaluation method that can distinguish between these channels of effects. We propose a treatment effect stochastic frontier model for this purpose. An important feature of the model is that participation in the treatment is endogenous. We illustrate an empirical application of the model using data on large dams in India to study their effects on agricultural production. |
Keywords: | stochastic frontier models, treatment effect |
Date: | 2014–04 |
URL: | http://d.repec.org/n?u=RePEc:sin:wpaper:14-a006&r=ecm |
By: | Laurent Davezies (Laboratoire Ville, Mobilité, Transport) |
Abstract: | This thesis consists of four independent pieces of work. The first concerns partially identified models, that is, models in which the value of the parameter of interest cannot be deduced from the distribution of the data and the assumptions of the model. In some situations no value, or on the contrary several values, of the parameter of interest are compatible with the data and the model's assumptions. Among other results, this work shows that if the set of probability distributions compatible with the model is convex, then the extreme points of this convex set characterise the set of distributions compatible with the model. The second piece proposes a method based on an exclusion restriction to correct for endogenous attrition in panels. We apply this method to estimate labour-market transitions from the French labour force survey. The third proposes a simple method for estimating a logistic model with fixed effects and state dependence as studied by Honoré and Kiriazidou. It also proposes a new standard-error estimator that appears to have better finite-sample properties. The fourth is an evaluation, at the level of middle schools, of the Réseaux-Ambition-Réussite education policy launched in 2006. We exploit a discontinuity in the selection of schools to compare schools that were "identical" before the policy was put in place. The results of this evaluation leave little room for optimism about the effectiveness of this policy. |
Keywords: | Identification partielle, Attrition, Dépendance d’état, Evaluation de politiques publiques; Partial identification, Attrition, State dependence, Policy Evaluation |
Date: | 2013–12 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/6o65lgig8d0qcro9p14826c84&r=ecm |
By: | van den Berg, Gerard J. (University of Mannheim); Effraimidis, Georgios (University of Southern Denmark) |
Abstract: | Bivariate duration data frequently arise in economics, biostatistics and other areas. In "bivariate frailty models", dependence between the frailties (i.e., unobserved determinants) induces dependence between the durations. Using notions of quadrant dependence, we study restrictions that this imposes on the implied dependence of the durations, if the frailty terms act multiplicatively on the corresponding hazard rates. Marginal frailty distributions are often taken to be gamma distributions. For such cases we calculate general bounds for two association measures, Pearson's correlation coefficient and Kendall's tau. The results are employed to compare the flexibility of specific families of bivariate gamma frailty distributions. |
Keywords: | bivariate gamma distribution, duration models, competing risks, Kendall's tau, negative and positive quadrant dependence, Pearson's correlation coefficient, unobserved heterogeneity, survival analysis |
JEL: | C41 C51 C34 C33 C32 J64 |
Date: | 2014–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp8083&r=ecm |
By: | Chen, Kaihua |
Abstract: | This paper lifts the "mysterious veil" over the formulations and relevant properties of existing weighted additive data envelopment analysis (WADD) models associated with dataset standardization techniques. Since the formulation of objective functions in WADD models can appear arbitrary and confusing to users, the study investigates the correspondence between formulating objective functions with statistical, data-based weights that aggregate slacks in WADD models and pre-standardizing the original datasets before applying the traditional ADD model, in terms of satisfying unit and translation invariance. Our work presents a statistical background for the formulation of WADD models, making them more interpretable and easier to compute and apply in practice. Based on the pre-standardization techniques, two new WADD models satisfying unit invariance are formulated to enrich the family of WADD models. We compare all WADD models with respect to the properties of interest, paying special attention to their (in)efficiency discrimination power. Some suggestions guiding theoretical and practical applications of WADD models are also discussed. |
Keywords: | Data envelopment analysis; Weighted additive models; Formulations and applications; Dataset standardization techniques |
JEL: | C51 C61 O31 |
Date: | 2014–02 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:55072&r=ecm |
By: | Nicholas Sander (Reserve Bank of New Zealand) |
Abstract: | Macroeconomics makes extensive use of concepts for which there are no observed data. Empirical estimates of such unobservable variables - core inflation is one example - have to be estimated from observed data. The data decomposition tool helps identify the contribution of each piece of observed data to the estimate of the unobservable variable. |
Date: | 2013–12 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbans:2013/09&r=ecm |
By: | Christoph Bergmeir; Rob J Hyndman; Jose M Benitez |
Abstract: | Exponential smoothing is one of the most popular forecasting methods. We present a method for bootstrap aggregation (bagging) of exponential smoothing methods. The bagging uses a Box-Cox transformation followed by an STL decomposition to separate the time series into trend, seasonal part, and remainder. The remainder is then bootstrapped using a moving block bootstrap, and a new series is assembled using this bootstrapped remainder. On the bootstrapped series, an ensemble of exponential smoothing models is estimated. The resulting point forecasts are averaged using the mean. We evaluate this new method on the M3 data set, showing that it consistently outperforms the original exponential smoothing models. On the monthly data, we achieve better results than any of the original M3 participants. We also perform statistical testing to explore the significance of the results. Using the MASE, our method is significantly better than all the M3 participants on the monthly data. |
Keywords: | bagging, bootstrapping, exponential smoothing, STL decomposition. |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2014-11&r=ecm |
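The resampling step at the heart of the pipeline in the previous entry — the moving block bootstrap of the STL remainder — can be sketched in a few lines. This is only that one step, under the assumption that the remainder is roughly stationary; the full method also applies a Box-Cox transform, fits an exponential smoothing model to each reassembled series, and averages the forecasts (numpy assumed, function name invented):

```python
import numpy as np

def moving_block_bootstrap(remainder, block_len, rng):
    """Resample a roughly stationary series by drawing overlapping
    blocks with replacement and concatenating them to the original length."""
    r = np.asarray(remainder, dtype=float)
    n = len(r)
    n_blocks = -(-n // block_len)  # ceil(n / block_len)
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    boot = np.concatenate([r[s:s + block_len] for s in starts])
    return boot[:n]

rng = np.random.default_rng(0)
remainder = rng.normal(size=120)  # stands in for an STL remainder series
boot = moving_block_bootstrap(remainder, block_len=24, rng=rng)
# a bootstrapped series would then be trend + seasonal + boot,
# and one exponential smoothing model is fit per bootstrapped series
```

Using overlapping blocks (rather than i.i.d. draws) preserves the short-run autocorrelation left in the remainder, which is the point of the block bootstrap.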
By: | Gregori Baetschmann; Rainer Winkelmann |
Abstract: | Excess zeros are encountered in many empirical count data applications. We provide a new explanation of extra zeros, related to the underlying stochastic process that generates events. The process has two rates, a lower rate until the first event, and a higher one thereafter. We derive the corresponding distribution of the number of events during a fixed period and extend it to account for observed and unobserved heterogeneity. An application to the socio-economic determinants of the individual number of doctor visits in Germany illustrates the usefulness of the new approach. |
Keywords: | Excess zeros, Poisson process, exposure, hurdle model |
JEL: | C25 I10 |
Date: | 2014–04 |
URL: | http://d.repec.org/n?u=RePEc:zur:econwp:151&r=ecm |
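The two-rate process described in the previous entry is concrete enough to simulate: events arrive at a lower rate until the first event occurs and at a higher rate thereafter, and the count over a fixed window then shows more zeros than a Poisson distribution with the same mean. The sketch below (numpy assumed, names and rate values invented for illustration) simulates that mechanism directly:

```python
import numpy as np

def simulate_counts(lam1, lam2, T, reps, rng):
    """Events arrive at rate lam1 until the first event, lam2 afterwards;
    return the number of events in [0, T] for each replication."""
    t1 = rng.exponential(1.0 / lam1, size=reps)  # waiting time to first event
    # no event in the window -> count 0; otherwise 1 + Poisson over the rest
    rest = rng.poisson(lam2 * np.clip(T - t1, 0.0, None))
    return np.where(t1 > T, 0, 1 + rest)

rng = np.random.default_rng(0)
c = simulate_counts(lam1=0.5, lam2=3.0, T=1.0, reps=100_000, rng=rng)
poisson = rng.poisson(c.mean(), size=100_000)  # Poisson with matched mean
print((c == 0).mean(), (poisson == 0).mean())  # two-rate process: more zeros
```

Here P(count = 0) = exp(−lam1·T) depends only on the pre-event rate, so a low initial rate generates excess zeros without any separate zero-inflation component — the paper's explanation of extra zeros.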
By: | Silvia Terzi; Luca Moroni |
Abstract: | In the present paper we introduce a coefficient of multivariate association, i.e. association in a d-variate vector of observations x = (x1, ..., xd), where d >= 2 and where each xj is itself a vector of n observations. We order the observations, divide them into slices and count how many times an observation in the r-th slice of any of the d distributions also belongs to the r-th slice of any of the others. The greater the number of overlaps between the units belonging to corresponding slices, the greater the concordance between the d distributions. This is the simple and intuitive idea from which our multivariate association coefficient stems. It is in fact a multidimensional concordance coefficient, since it assumes comonotonicity for all variables. |
Keywords: | Copula, Concordance, Local concordance, Measures of multivariate association |
JEL: | C14 |
Date: | 2014–04 |
URL: | http://d.repec.org/n?u=RePEc:rtr:wpaper:0189&r=ecm |
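The slice-overlap idea in the previous entry can be sketched directly: rank each variable, assign units to rank-slices, and count how often a unit lands in the same slice of two variables. The normalisation below is a guess for illustration (perfect comonotonicity gives 1); the paper's exact coefficient may differ, and numpy is assumed:

```python
import numpy as np
from itertools import combinations

def slice_concordance(X, n_slices):
    """For an (n, d) data matrix, count over all variable pairs and slices
    how often a unit falls in the same rank-slice of both variables,
    normalised so that perfectly comonotonic data score 1."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    ranks = X.argsort(axis=0).argsort(axis=0)  # ranks 0..n-1 per column
    slices = ranks * n_slices // n             # slice index per unit/column
    overlaps = sum(int(np.sum(slices[:, i] == slices[:, j]))
                   for i, j in combinations(range(d), 2))
    return overlaps / (n * d * (d - 1) // 2)

x = np.arange(100.0)
perfect = np.column_stack([x, 2 * x + 1])        # comonotonic pair -> 1.0
rng = np.random.default_rng(0)
noisy = np.column_stack([x, rng.permutation(x)])  # unrelated pair -> ~1/n_slices
print(slice_concordance(perfect, 10), slice_concordance(noisy, 10))
```

For independent variables a unit falls in matching slices with probability about 1/n_slices, so the coefficient hovers near that baseline rather than near 1.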
By: | Ariel Pakes |
Abstract: | Empirical work on choice models, especially work on relatively new topics or data sets, often starts with descriptive, or what is often colloquially referred to as "reduced form", results. Our descriptive form formalizes this process. It is derived from the underlying behavioral model, has an interpretation in terms of fit, and can sometimes be used to quantify biases in agents' expectations. We consider estimators for the descriptive form of discrete choice models with (and without) interacting agents that take account of approximation errors as well as unobservable sources of endogeneity. We conclude with an investigation of the descriptive form of two period entry models. |
JEL: | B4 C51 |
Date: | 2014–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:20022&r=ecm |
By: | Jennifer Castle; David Hendry; Michael P. Clements |
Abstract: | We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived in the face of a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods are likely to perform well. The robust methods are applied to forecasting US GDP using autoregressive models, and also autoregressive models with factors extracted from a large dataset of macroeconomic variables. We consider forecasting performance over the Great Recession, and over an earlier, more quiescent period. |
Keywords: | Robust forecasts, Smoothed Forecasting devices, Factor models, GDP forecasts, Location shifts |
JEL: | C51 C53 |
Date: | 2014–01–30 |
URL: | http://d.repec.org/n?u=RePEc:oxf:wpaper:697&r=ecm |
By: | Imbens, Guido W. (Stanford University) |
Abstract: | There is a large theoretical literature on methods for estimating causal effects under unconfoundedness, exogeneity, or selection-on-observables type assumptions using matching or propensity score methods. Much of this literature is highly technical and has not made inroads into empirical practice where many researchers continue to use simple methods such as ordinary least squares regression even in settings where those methods do not have attractive properties. In this paper I discuss some of the lessons for practice from the theoretical literature, and provide detailed recommendations on what to do. I illustrate the recommendations with three detailed applications. |
Keywords: | matching methods, propensity score methods, causality, unconfoundedness, potential outcomes, selection on observables |
JEL: | C01 C14 C21 C52 |
Date: | 2014–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp8049&r=ecm |
By: | Bjerkholt, Olav (Dept. of Economics, University of Oslo) |
Abstract: | The journal of the Econometric Society, Econometrica, was established in 1933 and edited by Ragnar Frisch for the first 22 years. As a new journal Econometrica had three key characteristics. First, it was devoted to a research program stated in few but significant words in the constitution of the Econometric Society and for many years printed in every issue of the journal. Second, it was the first international journal in economics. Third, it was the journal of an association (the Econometric Society) whose members were committed to a serious interest in econometrics. The paper gives a brief account of the circumstances surrounding the establishment of the journal and of the relationship between Frisch and Alfred Cowles 3rd, who in various capacities played a major role in launching the journal and keeping it going. It furthermore conveys observations and comments related to the editing of the first seven volumes of Econometrica, i.e. 1933-39. The main aim of the paper is to shed light on how the editor and a small core group of econometricians attempted to promote econometrics via Econometrica. The paper is overwhelmingly based on unpublished material from Frisch's editorial files. Editorial principles, controversies, and style are illuminated through excerpts from the editorial correspondence. The paper was presented at ESEM-67, University of Gothenburg, 26-30 August, 2013. |
Keywords: | Econometrica; Ragnar Frisch; editorial style |
JEL: | B23 B31 |
Date: | 2014–02–23 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2013_028&r=ecm |