Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics, 2014-04-11, edited by Sune Karlsson

Specification, Estimation and Evaluation of Vector Smooth Transition Autoregressive Models with Applications
http://d.repec.org/n?u=RePEc:aah:create:2014-08&r=ecm
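The model in the abstract below is built around a logistic transition function. As a purely illustrative sketch (a univariate LSTAR with made-up parameter values, not the authors' multivariate specification):

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition G(s; gamma, c) in [0, 1]; gamma = slope, c = location."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# Univariate LSTAR(1): y_t = 0.8*y_{t-1}*(1 - G) - 0.5*y_{t-1}*G + eps_t,
# with the transition driven by the lagged level y_{t-1}.
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    G = logistic_transition(y[t - 1], gamma=5.0, c=0.0)
    y[t] = 0.8 * y[t - 1] * (1 - G) - 0.5 * y[t - 1] * G + rng.normal(scale=0.5)

print(logistic_transition(0.0, 5.0, 0.0))  # 0.5 at the location parameter
```

The transition smoothly interpolates between an AR coefficient of 0.8 (low regime) and -0.5 (high regime); larger gamma makes the switch sharper.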
We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multi-transition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy is illustrated by two applications. In the first, the dynamic relationship between the US gasoline price and consumption is studied and possible asymmetries in it are considered. The second application consists of modelling two well-known Icelandic riverflow series, previously considered by many hydrologists and time series analysts. JEL Classification: C32, C51, C52.
By Timo Teräsvirta and Yukai Yang (2014-03-21). Keywords: Vector STAR model, modelling nonlinearity, vector autoregression, generalized impulse response, asymmetry, oil price, river flow

Inference on Mixtures Under Tail Restrictions
http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/f6h8764enu2lskk9p2m96cphi&r=ecm
Two-component mixtures are nonparametrically identified under tail-dominance conditions on the component distributions if a source of variation is available that affects the mixing proportions but not the component distributions. We motivate these restrictions through several examples. One interesting example is a location model where the location parameter is subject to classical measurement error. The identification analysis suggests very simple closed-form estimators of the component distributions and mixing proportions based on ratios of intermediate quantiles. We derive their asymptotic properties using results on tail empirical processes, and we provide simulation evidence on their finite-sample performance.
By Marc Henry, Koen Jochmans and Bernard Salanié (2013-12). Keywords: mixture model, nonparametric identification and estimation, tail empirical process

On the inefficiency of the restricted maximum likelihood
http://d.repec.org/n?u=RePEc:upf:upfgen:1415&r=ecm
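The unbiasedness-versus-MSE trade-off discussed in the abstract below can be illustrated with a one-sample analogy (this is not the paper's cluster-level estimator, just the familiar n-1 versus n divisor for the normal variance):

```python
import numpy as np

# Unbiased variance estimator (divisor n-1) versus the ML estimator (divisor n):
# the unbiased one has the larger mean squared error for normal data, mirroring
# the unbiasedness/MSE trade-off discussed in the abstract below.
rng = np.random.default_rng(42)
n, reps, sigma2 = 10, 200_000, 1.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ss = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)

mse_unbiased = np.mean((ss / (n - 1) - sigma2) ** 2)  # exact value: 2/(n-1) = 0.2222...
mse_ml = np.mean((ss / n - sigma2) ** 2)              # exact value: (2n-1)/n^2 = 0.19
print(mse_ml < mse_unbiased)  # True: unbiasedness costs MSE here
```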
The restricted maximum likelihood is preferred by many to the full maximum likelihood for estimation with variance component and other random coefficient models, because the variance estimator is unbiased. It is shown that this unbiasedness is accompanied in some balanced designs by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood estimator is derived. Estimators of the variance ratio are also studied.
By Nicholas Longford (2014-03). Keywords: efficiency, random effects, truncation, variance component

Identifying I(0) Series in Macro-panels: Are Sequential Panel Selection Methods Useful?
http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp14073&r=ecm
Sequential panel selection methods (SPSMs) are based on the repeated application of panel unit root tests and are increasingly used to identify I(0) time series in macro-panels. We check the reliability of SPSMs using Monte Carlo simulations based on generating the individual test statistics and the p-values to be combined into panel unit root tests, both under the unit root null and under selected local alternatives. The analysis is carried out considering both independent and dependent test statistics. We show that SPSMs do not have better classification performance than conventional univariate tests.
By Costantini, Mauro; Lupi, Claudio (2014-03-29). Keywords: unit root, panel data, ROC curve, simulation

Consistent estimation for the full-fledged fixed effects zero-inflated Poisson model
http://d.repec.org/n?u=RePEc:kyu:dpaper:66&r=ecm
This paper proposes transformations for the consistent estimation of the full-fledged fixed effects zero-inflated Poisson model, in which zero outcomes can arise from both the logit and the Poisson parts and both parts are equipped with fixed effects. Valid moment conditions are constructed on the basis of the transformations. The finite-sample behaviour of GMM and EL estimators employing the moment conditions is investigated using Monte Carlo experiments.
By Yoshitsugu Kitazawa (2014-04). Keywords: fixed effects zero-inflated Poisson model; predetermined explanatory variables in Poisson part; moment conditions; GMM; EL; Monte Carlo experiments

Tests to Disentangle Breaks in Intercept from Slope in Linear Regression Models with Application to Management Performance in the Mutual Fund Industry
http://d.repec.org/n?u=RePEc:bir:birmec:14-02&r=ecm
This article introduces a U-statistic type process that is fashioned from a kernel which can depend on nuisance parameters. It is shown that this process can accommodate, in a straightforward manner, anti-symmetric kernels, which have proved useful for detecting changing patterns in the dynamics of time series, as well as weight functions. Weight functions have been shown to improve the power of test statistics employed to detect these changing patterns throughout the evaluation period, early and late alike. Theory and related test statistics are developed here and applied to the detection of structural breaks in linear regression models (LRMs). This flexibility is exploited to develop tests to detect changes in intercept or slope in LRMs that are robust to changes in the rest of the model parameters. The statistics developed here are applied to detect changing patterns in mutual fund managers' stock-selecting ability over the period 2001 to 2010.
By Jose Olmo and William Pouliot (2014-03). Keywords: change-point tests; CUSUM test; linear regression models; stochastic processes; U-statistics

A Smooth Transition Logit Model of the Effects of Deregulation in the Electricity Market
http://d.repec.org/n?u=RePEc:aah:create:2014-09&r=ecm
JEL Classification: C23, C51, L94, Q41.
By A.S. Hurn, Annastiina Silvennoinen and Timo Teräsvirta (2014-03-28). Keywords: smooth transition, binary choice model, logit model, electricity spot prices, peak load pricing, price spikes

Likelihood inference in an Autoregression with fixed effects
http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/dambferfb7dfprc9m052g20qh&r=ecm
We calculate the bias of the profile score for the regression coefficients in a multistratum autoregressive model with stratum-specific intercepts. The bias is free of incidental parameters. Centering the profile score delivers an unbiased estimating equation and, upon integration, an adjusted profile likelihood. A variety of other approaches to constructing modified profile likelihoods are shown to yield equivalent results. However, the global maximizer of the adjusted likelihood lies at infinity for any sample size, and the adjusted profile score has multiple zeros. We argue that the parameters are local maximizers inside or on an ellipsoid centered at the maximum likelihood estimator.
By Geert Dhaene and Koen Jochmans (2013-01). Keywords: adjusted likelihood, autoregression, incidental parameters, local maximizer, recentered estimating equation

Bivariate Integer-Valued Long Memory Model for High Frequency Financial Count Data
http://d.repec.org/n?u=RePEc:hhs:bthcsi:2014-003&r=ecm
We develop a model to account for the long memory property in a bivariate count data framework. We propose a bivariate integer-valued fractionally integrated (BINFIMA) model and apply it to high frequency stock transaction data. The BINFIMA model allows for both positive and negative correlations between the counts. The unconditional and conditional first and second order moments are given. CLS and FGLS estimators are discussed. The model is capable of capturing the covariance between and within intra-day time series of high frequency transaction data due to macroeconomic news and news related to a specific stock. Empirically, it is found that Ericsson B follows a mean-reverting process while AstraZeneca exhibits the long memory property. It is also found that Ericsson B and AstraZeneca react in a similar way to macroeconomic news.
By Quoreshi, A.M.M. Shahiduzzaman (2014-04-02). Keywords: count data; intra-day; time series; estimation; reaction time; finance

A Unified Approach to Estimating a Normal Mean Matrix in High and Low Dimensions
http://d.repec.org/n?u=RePEc:tky:fseres:2014cf926&r=ecm
This paper addresses the problem of estimating the normal mean matrix with an unknown covariance matrix. Motivated by an empirical Bayes method, we suggest a unified form of the Efron-Morris type estimators based on the Moore-Penrose inverse. This form not only can be defined for any dimension and any sample size, but also can contain the Efron-Morris type or Baranchik type estimators suggested so far in the literature. The unified form also suggests a general class of shrinkage estimators. For shrinkage estimators within the general class, a unified expression of unbiased estimators of the risk functions is derived regardless of the dimension of the covariance matrix and the size of the mean matrix. An analytical dominance result is provided for a positive-part rule of the shrinkage estimators.
By Hisayuki Tsukuma and Tatsuya Kubokawa (2014-03).

A Likelihood Ratio and Markov Chain Based Method to Evaluate Density Forecasting
http://d.repec.org/n?u=RePEc:hhs:nhhfms:2014_012&r=ecm
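The dependence part of the test described in the abstract below extends Christoffersen's (1998) Markov-chain likelihood ratio idea. A minimal sketch of that building block (a binary hit sequence and an assumed first-order chain; not the authors' full joint test):

```python
import numpy as np

def lr_independence(hits):
    """Christoffersen (1998)-style LR test of first-order independence for a
    binary hit sequence (e.g., density-forecast PIT values falling in a bin).
    Returns the LR statistic, ~ chi2(1) under independence."""
    hits = np.asarray(hits, dtype=int)
    pairs = np.stack([hits[:-1], hits[1:]], axis=1)
    n = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            n[i, j] = np.sum((pairs[:, 0] == i) & (pairs[:, 1] == j))
    pi = (n[0, 1] + n[1, 1]) / n.sum()        # unconditional hit rate
    pi01 = n[0, 1] / (n[0, 0] + n[0, 1])      # P(hit | no hit)
    pi11 = n[1, 1] / (n[1, 0] + n[1, 1])      # P(hit | hit)

    def ll(p01, p11):
        eps = 1e-12
        return (n[0, 0] * np.log(max(1 - p01, eps)) + n[0, 1] * np.log(max(p01, eps))
                + n[1, 0] * np.log(max(1 - p11, eps)) + n[1, 1] * np.log(max(p11, eps)))

    return 2.0 * (ll(pi01, pi11) - ll(pi, pi))

rng = np.random.default_rng(1)
iid_hits = rng.random(2000) < 0.1                  # independent hits: small statistic
clustered = np.repeat(rng.random(200) < 0.1, 10)   # strongly clustered hits
print(round(lr_independence(iid_hits), 2), round(lr_independence(clustered), 2))
```

The clustered sequence yields a far larger statistic than the independent one, which is the dependence signal the joint test exploits.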
In this paper, we propose a likelihood ratio and Markov chain based method to evaluate density forecasting. This method can jointly evaluate the unconditional forecasted distribution and the dependence of the outcomes. It is an extension of the widely applied evaluation method for interval forecasting proposed by Christoffersen (1998). It is also a more refined approach than the purely contingency table based density forecasting method in Wallis (2003). We show that our method has very high power against incorrect forecasting distributions and dependence. Moreover, the simplicity and ease of application of this joint test make it well suited to further applications in both finance and economics.
By Li, Yushu; Andersson, Jonas (2014-03-25). Keywords: likelihood ratio test; Markov chain; density forecasting

Treatment Effect Stochastic Frontier Models with Endogenous Selection
http://d.repec.org/n?u=RePEc:sin:wpaper:14-a006&r=ecm
Government policies are frequently used throughout the world to promote productivity. While some of these policies are designed to work through technology enhancement, others are meant to exert their influence through efficiency improvement. It is therefore important to have a program evaluation method that can distinguish between these channels of effect. We propose a treatment effect stochastic frontier model for this purpose. An important feature of the model is that participation in the treatment is endogenous. We illustrate the empirical application of the model using data on large dams in India to study their effects on agricultural production.
By Yi-Ting Chen, Yu-Chin Hsu and Hung-Jen Wang (2014-04). Keywords: stochastic frontier models, treatment effect

Four Essays in Econometrics
http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/6o65lgig8d0qcro9p14826c84&r=ecm
This thesis consists of four independent papers. The first concerns partially identified models, i.e. models in which the value of the parameter of interest cannot be deduced from the distribution of the data and the model's assumptions. In some situations, no value, or on the contrary several values, of the parameter of interest are compatible with the data and the model's assumptions. Among other results, this work shows that if the set of probability distributions compatible with the model is convex, then the extreme points of this convex set characterize the set of distributions compatible with the model. The second paper proposes a method based on an exclusion restriction to correct for endogenous attrition in panels. We apply this method to estimate labour-market transitions from the French Labour Force Survey. The third paper proposes a simple method for estimating a logistic model with fixed effects and state dependence, as studied by Honoré and Kyriazidou. It also proposes a new standard-error estimator that appears to have better finite-sample properties. The fourth paper evaluates, at the middle-school level, the "Réseaux Ambition Réussite" education policy launched in 2006. We exploit a discontinuity in the selection of schools to compare schools that were "identical" before the policy was implemented. The results of this evaluation leave little room for optimism about the effectiveness of the policy.
By Laurent Davezies (2013-12). Keywords: partial identification, attrition, state dependence, policy evaluation

Dependence Measures in Bivariate Gamma Frailty Models
http://d.repec.org/n?u=RePEc:iza:izadps:dp8083&r=ecm
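For the shared-frailty special case of the family studied in the abstract below (both frailties equal and gamma distributed with shape k, hence variance 1/k), the induced copula is Clayton and Kendall's tau has the closed form 1/(1+2k). A quick simulation check (illustrative only; the paper derives bounds for the general bivariate case):

```python
import numpy as np

def kendall_tau(x, y):
    """O(n^2) sample Kendall's tau."""
    n = len(x)
    s = 0
    for i in range(n):
        s += np.sum(np.sign((x[i] - x[i + 1:]) * (y[i] - y[i + 1:])))
    return 2.0 * s / (n * (n - 1))

rng = np.random.default_rng(7)
k = 1.0                                        # gamma shape; frailty variance = 1/k
n = 2000
v = rng.gamma(shape=k, scale=1.0 / k, size=n)  # shared frailty, mean 1
t1 = rng.exponential(1.0, n) / v               # durations with conditional hazard v
t2 = rng.exponential(1.0, n) / v

tau_hat = kendall_tau(t1, t2)
tau_theory = 1.0 / (1.0 + 2.0 * k)             # Clayton value: 1/3 when k = 1
print(round(float(tau_hat), 2), "vs theory", round(tau_theory, 3))
```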
Bivariate duration data frequently arise in economics, biostatistics and other areas. In "bivariate frailty models", dependence between the frailties (i.e., unobserved determinants) induces dependence between the durations. Using notions of quadrant dependence, we study restrictions that this imposes on the implied dependence of the durations, if the frailty terms act multiplicatively on the corresponding hazard rates. Marginal frailty distributions are often taken to be gamma distributions. For such cases we calculate general bounds for two association measures, Pearson's correlation coefficient and Kendall's tau. The results are employed to compare the flexibility of specific families of bivariate gamma frailty distributions.
By van den Berg, Gerard J.; Effraimidis, Georgios (2014-03). Keywords: bivariate gamma distribution, duration models, competing risks, Kendall's tau, negative and positive quadrant dependence, Pearson's correlation coefficient, unobserved heterogeneity, survival analysis

Weighted Additive DEA Models Associated with Dataset Standardization Techniques
http://d.repec.org/n?u=RePEc:pra:mprapa:55072&r=ecm
This paper lifts the "mysterious veil" surrounding the formulations and properties of existing weighted additive data envelopment analysis (WADD) models associated with dataset standardization techniques. Given that the formulation of objective functions in WADD models can appear arbitrary and confusing to users, the study investigates the correspondence between the formulation of objective functions with statistical data-based weights aggregating slacks in WADD models and the pre-standardization of original datasets before using the traditional ADD model, in terms of satisfying unit and translation invariance. Our work presents a statistical background for the formulations of WADD models and makes them more interpretable and easier to compute and apply in practice. Based on the pre-standardization techniques, two new WADD models satisfying unit invariance are formulated to enrich the family of WADD models. We compare all WADD models with respect to the properties of interest, giving special attention to their (in)efficiency discrimination power. Moreover, some suggestions guiding theoretical and practical applications of WADD models are discussed.
By Chen, Kaihua (2014-02). Keywords: data envelopment analysis; weighted additive models; formulations and applications; dataset standardization techniques

Fresh perspectives on unobservable variables: Data decomposition of the Kalman smoother
http://d.repec.org/n?u=RePEc:nzb:nzbans:2013/09&r=ecm
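Because the Kalman smoother is linear in the observations, the smoothed estimate of an unobserved state can be written as a weighted sum of the individual data points, which is the kind of decomposition described in the abstract below. A minimal sketch for a local-level model (the model choice and parameter values are assumptions, not the paper's setup):

```python
import numpy as np

def local_level_smoother(y, q=0.1, r=1.0):
    """RTS smoother for the local-level model: y_t = a_t + e_t, a_t = a_{t-1} + w_t,
    with Var(w) = q and Var(e) = r."""
    n = len(y)
    a_pred, p_pred = np.zeros(n), np.zeros(n)
    a_filt, p_filt = np.zeros(n), np.zeros(n)
    a, p = 0.0, 1e6                    # near-diffuse initial state
    for t in range(n):                 # forward Kalman filter
        a_pred[t], p_pred[t] = a, p + q
        k = p_pred[t] / (p_pred[t] + r)
        a_filt[t] = a_pred[t] + k * (y[t] - a_pred[t])
        p_filt[t] = (1 - k) * p_pred[t]
        a, p = a_filt[t], p_filt[t]
    a_sm = a_filt.copy()
    for t in range(n - 2, -1, -1):     # backward RTS pass
        c = p_filt[t] / (p_filt[t] + q)
        a_sm[t] = a_filt[t] + c * (a_sm[t + 1] - a_pred[t + 1])
    return a_sm

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(0, 0.3, 50)) + rng.normal(0, 1.0, 50)

# Data decomposition: linearity means the smoother weights can be recovered by
# running the smoother on unit-impulse series; contribution of y_j = weight * y_j.
weights = np.column_stack([local_level_smoother(np.eye(50)[:, j]) for j in range(50)])
contrib = weights * y                  # contrib[t, j] = share of y_j in estimate at t
print(np.allclose(contrib.sum(axis=1), local_level_smoother(y)))  # True: exact split
```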
Macroeconomics makes extensive use of concepts for which there are no observed data. Empirical counterparts of such unobservable variables - core inflation is one example - have to be estimated from observed data. The data decomposition tool helps identify the contribution of each piece of observed data to the estimate of the unobservable variable.
By Nicholas Sander (2013-12).

Bagging Exponential Smoothing Methods using STL Decomposition and Box-Cox Transformation
http://d.repec.org/n?u=RePEc:msh:ebswps:2014-11&r=ecm
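The pipeline summarized in the abstract below can be sketched end to end. In this toy version the STL step is replaced by a crude moving-average decomposition and the ETS ensemble by simple exponential smoothing, so it mirrors only the structure, not the authors' implementation:

```python
import numpy as np

def boxcox(x, lam):
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1) ** (1 / lam)

def ses_forecast(x, alpha=0.3):
    """Simple exponential smoothing; the final level is the one-step forecast."""
    level = x[0]
    for v in x[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

def moving_block_bootstrap(resid, block=8, rng=None):
    rng = rng or np.random.default_rng()
    n = len(resid)
    starts = rng.integers(0, n - block + 1, size=n // block + 1)
    return np.concatenate([resid[s:s + block] for s in starts])[:n]

def bagged_forecast(y, period=12, lam=0.0, n_boot=50, seed=0):
    rng = np.random.default_rng(seed)
    z = boxcox(y, lam)
    trend = np.convolve(z, np.ones(period) / period, mode="same")  # crude trend
    seasonal = np.array([(z - trend)[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(z) // period + 1)[:len(z)]
    remainder = z - trend - seasonal
    fcs = []
    for _ in range(n_boot):            # bootstrap remainder, reassemble, forecast
        z_star = trend + seasonal + moving_block_bootstrap(remainder, rng=rng)
        fcs.append(ses_forecast(z_star))
    return inv_boxcox(np.mean(fcs), lam)  # average forecasts, invert transform

rng = np.random.default_rng(5)
t = np.arange(120)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)
print(bagged_forecast(y, lam=0.0) > 0)  # True: back-transformed forecast is positive
```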
Exponential smoothing is one of the most popular forecasting methods. We present a method for bootstrap aggregation (bagging) of exponential smoothing methods. The bagging uses a Box-Cox transformation followed by an STL decomposition to separate the time series into trend, seasonal part, and remainder. The remainder is then bootstrapped using a moving block bootstrap, and a new series is assembled using this bootstrapped remainder. On the bootstrapped series, an ensemble of exponential smoothing models is estimated. The resulting point forecasts are averaged using the mean. We evaluate this new method on the M3 data set, showing that it consistently outperforms the original exponential smoothing models. On the monthly data, we achieve better results than any of the original M3 participants. We also perform statistical testing to explore the significance of the results. Using the MASE, our method is significantly better than all the M3 participants on the monthly data.
By Christoph Bergmeir, Rob J Hyndman and Jose M Benitez (2014). Keywords: bagging, bootstrapping, exponential smoothing, STL decomposition

A dynamic hurdle model for zero-inflated count data: with an application to health care utilization
http://d.repec.org/n?u=RePEc:zur:econwp:151&r=ecm
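The two-rate process described in the abstract below is easy to simulate: a lower rate lam1 applies until the first event and a higher rate lam2 thereafter, which mechanically inflates the share of zeros relative to a Poisson model with the same mean (the rates and exposure here are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_counts(lam1, lam2, T=1.0, size=100_000, rng=rng):
    """Counts from a process with rate lam1 until the first event, lam2 after."""
    first = rng.exponential(1.0 / lam1, size)              # waiting time to first event
    extra = rng.poisson(lam2 * np.clip(T - first, 0, None))
    return np.where(first > T, 0, 1 + extra)

counts = simulate_counts(lam1=0.3, lam2=3.0)
p_zero = np.mean(counts == 0)                  # theory: exp(-0.3) ~ 0.741
p_zero_poisson = np.exp(-counts.mean())        # Poisson model with the matched mean
print(p_zero > p_zero_poisson)                 # True: excess zeros
```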
Excess zeros are encountered in many empirical count data applications. We provide a new explanation of extra zeros, related to the underlying stochastic process that generates events. The process has two rates, a lower rate until the first event, and a higher one thereafter. We derive the corresponding distribution of the number of events during a fixed period and extend it to account for observed and unobserved heterogeneity. An application to the socio-economic determinants of the individual number of doctor visits in Germany illustrates the usefulness of the new approach.
By Gregori Baetschmann and Rainer Winkelmann (2014-04). Keywords: excess zeros, Poisson process, exposure, hurdle model

A suggestion for a multivariate concordance coefficient
http://d.repec.org/n?u=RePEc:rtr:wpaper:0189&r=ecm
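One plausible reading of the slice-overlap idea in the abstract below (the slicing scheme and the normalisation used here are assumptions; the paper defines the exact coefficient):

```python
import numpy as np

def slice_overlap_counts(x, n_slices=4):
    """x: (n, d) array. Assign each observation, variable by variable, to one of
    n_slices rank-based slices, then count, for each pair of variables, how many
    observations fall in the same slice. Comonotonic data maximise the counts."""
    n, d = x.shape
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # ranks 0..n-1 per column
    slices = (ranks * n_slices) // n                   # slice index per column
    total = 0
    for j in range(d):
        for k in range(j + 1, d):
            total += np.sum(slices[:, j] == slices[:, k])
    max_total = n * d * (d - 1) // 2        # every pair agrees for every unit
    return total / max_total                # 1 under comonotonicity

rng = np.random.default_rng(2)
base = rng.normal(size=200)
como = np.column_stack([base, np.exp(base), 2 * base + 1])  # comonotone triple
indep = rng.normal(size=(200, 3))
print(slice_overlap_counts(como))               # exactly 1.0 for comonotone data
print(round(slice_overlap_counts(indep), 2))    # near 1/4 with 4 slices
```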
In the present paper we introduce a coefficient of multivariate association, i.e. association in a d-variate vector of observations x = (x1, ..., xd), where d >= 2 and where each xj is itself a vector of n observations. We order the observations, divide them into slices and count how many times an observation in the r-th slice of any of the d distributions also belongs to the r-th slice of any of the others. The greater the number of overlaps between the units belonging to corresponding slices, the greater the concordance between the d distributions. This is the simple and intuitive idea from which our multivariate association coefficient stems. It is in fact a multidimensional concordance coefficient, since it assumes comonotonicity for all variables.
By Silvia Terzi and Luca Moroni (2014-04). Keywords: copula, concordance, local concordance, measures of multivariate association

Behavioral and Descriptive Forms of Choice Models
http://d.repec.org/n?u=RePEc:nbr:nberwo:20022&r=ecm
Empirical work on choice models, especially work on relatively new topics or data sets, often starts with descriptive, or what is often colloquially referred to as "reduced form", results. Our descriptive form formalizes this process. It is derived from the underlying behavioral model, has an interpretation in terms of fit, and can sometimes be used to quantify biases in agents' expectations. We consider estimators for the descriptive form of discrete choice models with (and without) interacting agents that take account of approximation errors as well as unobservable sources of endogeneity. We conclude with an investigation of the descriptive form of two period entry models.
By Ariel Pakes (2014-03).

Robust Approaches to Forecasting
http://d.repec.org/n?u=RePEc:oxf:wpaper:697&r=ecm
We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods are likely to perform well. The robust methods are applied to forecasting US GDP using autoregressive models, and also to autoregressive models with factors extracted from a large dataset of macroeconomic variables. We consider forecasting performance over the Great Recession, and over an earlier, more quiescent period.
By Jennifer Castle, David Hendry and Michael P. Clements (2014-01-30). Keywords: robust forecasts, smoothed forecasting devices, factor models, GDP forecasts, location shifts

Matching Methods in Practice: Three Examples
http://d.repec.org/n?u=RePEc:iza:izadps:dp8049&r=ecm
There is a large theoretical literature on methods for estimating causal effects under unconfoundedness, exogeneity, or selection-on-observables type assumptions using matching or propensity score methods. Much of this literature is highly technical and has not made inroads into empirical practice, where many researchers continue to use simple methods such as ordinary least squares regression even in settings where those methods do not have attractive properties. In this paper I discuss some of the lessons for practice from the theoretical literature, and provide detailed recommendations on what to do. I illustrate the recommendations with three detailed applications.
By Imbens, Guido W. (2014-03). Keywords: matching methods, propensity score methods, causality, unconfoundedness, potential outcomes, selection on observables

Promoting Econometrics through Econometrica, 1933-39
http://d.repec.org/n?u=RePEc:hhs:osloec:2013_028&r=ecm
The journal of the Econometric Society, Econometrica, was established in 1933 and edited by Ragnar Frisch for its first 22 years. As a new journal, Econometrica had three key characteristics. First, it was devoted to a research program stated in few but significant words in the constitution of the Econometric Society and for many years printed in every issue of the journal. Second, it was the first international journal in economics. Third, it was the journal of an association (the Econometric Society) whose members were committed to a serious interest in econometrics. The paper gives a brief account of the circumstances surrounding the establishment of the journal and of the relationship between Frisch and Alfred Cowles 3rd, who in various capacities played a major role in launching the journal and keeping it going. It furthermore conveys observations and comments related to the editing of the first seven volumes of Econometrica, i.e. 1933-39. The main aim of the paper is to shed light on how the editor and a small core group of econometricians attempted to promote econometrics via Econometrica. The paper is overwhelmingly based on unpublished material from Frisch's editorial files. Editorial principles, controversies, and style are illuminated through excerpts from the editorial correspondence. The paper was presented at ESEM-67, University of Gothenburg, 26-30 August, 2013.
By Bjerkholt, Olav (2014-02-23). Keywords: Econometrica; Ragnar Frisch; editorial style