nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒02‒15
twenty-one papers chosen by
Sune Karlsson
Orebro University

  1. Testing Over-Identifying Restrictions without Consistent Estimation of the Asymptotic Covariance Matrix By Wei-Ming Lee; Chung-Ming Kuan; Yu-Chin Hsu
  2. A Lagrange Multiplier Test for Testing the Adequacy of the Constant Conditional Correlation GARCH Model By Paul Catani; Timo Teräsvirta; Meiqun Yin
  3. Split-Panel Jackknife Estimation of Fixed-Effect Models By Geert Dhaene; Koen Jochmans
  4. Inverse Probability Weighted Estimation of Local Average Treatment Effects: Higher Order MSE Expansion By Stephen G. Donald; Yu-Chin Hsu; Robert P. Lieli
  5. A general theory of rank testing By Majid M. Al-Sadoon
  6. Risk Margin Quantile Function Via Parametric and Non-Parametric Bayesian Quantile Regression By Alice X. D. Dong; Jennifer S. K. Chan; Gareth W. Peters
  7. Fast Computation of the Deviance Information Criterion for Latent Variable Models By Joshua C.C. Chan; Angelia L. Grant
  8. The Local Power of the CADF and CIPS Panel Unit Root Tests By Joakim Westerlund; Mehdi Hosseinkouchack; Martin Solberger
  9. Homogeneity test for functional data based on depth measures By Ramón Jesús Flores Díaz; Rosa E. Lillo; Juan Romo
  10. A Method for Experimental Events that Break Cointegration: Counterfactual Simulation By Bell, Peter N
  11. On the Asymptotic Distribution of the DF–GLS Test Statistic By Joakim Westerlund
  12. On the Importance of the First Observation in GLS Detrending in Unit Root Testing By Joakim Westerlund
  13. Measuring Market Risk with Generalised Autoregressive Conditional Heteroskedastic Volatility Modelling: A Comparison of Univariate and Multivariate Concepts By Krasnosselski, Nikolai; Cremers, Heinz; Sanddorf, Walter
  14. Rank and order conditions for identification in simultaneous system of cointegrating equations with integrated variables of order two By Mosconi, Rocco; Paruolo, Paolo
  15. Modeling and Estimation of Gravity Equation in the Presence of Zero Trade: A Validation of Hypotheses Using Africa's Trade Data By Kareem, Fatima O.
  16. Treatment evaluation with multiple outcome periods under endogeneity and attrition By Frölich, Markus; Huber, Martin
  17. Causal pitfalls in the decomposition of wage gaps By Huber, Martin
  18. Estimation and Solution of Models with Expectations and Structural Changes By Mariano Kulish; Adrian Pagan
  19. Intergenerational Mobility and the Informational Content of Surnames By Maia Güell; José V. Rodríguez Mora; Christopher I. Telmer
  20. Empirical Likelihood for Regression Discontinuity Design By Yukitoshi Matsushita; Taisuke Otsu; Ke-Li Xu
  21. Bayesian Stochastic Search for the Best Predictors: Nowcasting GDP Growth By Nikolaus Hautsch; Dieter Hess; Fuyu Yang

  1. By: Wei-Ming Lee (Department of Economics National Chung Cheng University); Chung-Ming Kuan (Department of Finance National Taiwan University); Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan)
    Abstract: This paper extends Kiefer, Vogelsang, and Bunzel (2000) and Kiefer and Vogelsang (2002b) to propose a class of over-identifying restrictions (OIR) tests that are robust to heteroskedasticity and serial correlation of unknown form. These OIR tests do not require consistent estimation of the asymptotic covariance matrix and hence avoid choosing the bandwidth in nonparametric kernel estimation. By employing a suitable normalizing matrix to eliminate the nuisance parameters in the limit, these tests remain asymptotically pivotal. As opposed to the conventional OIR test, the proposed tests require only a consistent, but not necessarily optimal, GMM estimator. It is also shown that the asymptotic local power of these tests is invariant with respect to the choice of the weighting matrix for the preliminary GMM estimator. Our simulations demonstrate that the proposed tests are properly sized in most cases and may have power comparable with that of the conventional OIR test.
    Keywords: GMM, kernel function, KVB approach, over-identifying restrictions, robust test
    JEL: C12 C22
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:14-a001&r=ecm
  2. By: Paul Catani (Hanken School of Economics); Timo Teräsvirta (Aarhus University and CREATES); Meiqun Yin (Beijing International Studies University)
    Abstract: A Lagrange multiplier test of the parametric structure of a constant conditional correlation generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which represents the null model, whereas the other one describes the misspecification. A simulation study shows that the test has good finite sample properties. We compare the test with other tests for misspecification of multivariate GARCH models. The test has high power against alternatives where the misspecification is in the GARCH parameters and is superior to other tests. The test is not greatly affected by misspecification in the conditional correlations and is therefore well suited for considering misspecification of GARCH equations.
    Keywords: constant conditional correlation, LM test, misspecification testing, modelling volatility, multivariate GARCH
    JEL: C32 C52 C58
    Date: 2014–01–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-03&r=ecm
  3. By: Geert Dhaene; Koen Jochmans (Département d'économie)
    Abstract: Maximum-likelihood estimation of nonlinear models with fixed effects is subject to the incidental-parameter problem. This typically implies that point estimates suffer from large bias and confidence intervals have poor coverage. This paper presents a jackknife method to reduce this bias and to obtain confidence intervals that are correctly centered under rectangular-array asymptotics. The method is explicitly designed to handle dynamics in the data and yields estimators that are straightforward to implement and that can be readily applied to a range of models and estimands. We provide distribution theory for estimators of index coefficients and average effects, present validity tests for the jackknife, and consider extensions to higher-order bias correction and to two-step estimation problems. An empirical illustration on female labor-force participation is also provided.
    Keywords: bias reduction, dependent data, incidental-parameter problem, jackknife, nonlinear model
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/f6h8764enu2lskk9p2m9mgp8l&r=ecm
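    The bias-correction logic can be seen in a stylised example (an editor's sketch, not the paper's dynamic nonlinear setting): in a simple normal panel with fixed effects, the maximum-likelihood estimate of the error variance is biased by the factor (T-1)/T, and the half-panel jackknife combination 2*theta_hat - (theta_half1 + theta_half2)/2 removes this leading 1/T bias term exactly.

```python
import numpy as np

def fe_var(y):
    """Fixed-effects ML estimate of the error variance in y_it = a_i + e_it:
    within-group variance with divisor T, biased by a factor (T-1)/T
    -- a textbook instance of the incidental-parameter problem."""
    return np.mean((y - y.mean(axis=1, keepdims=True)) ** 2)

def split_panel_jackknife(y, estimator):
    """Half-panel jackknife: 2*theta_hat minus the average of the two
    half-panel estimates, cancelling the O(1/T) bias."""
    T = y.shape[1]
    half = T // 2
    return 2 * estimator(y) - 0.5 * (estimator(y[:, :half]) + estimator(y[:, half:]))

rng = np.random.default_rng(8)
N, T, sigma2 = 500, 6, 1.0
y = rng.normal(size=(N, 1)) + np.sqrt(sigma2) * rng.normal(size=(N, T))

# The raw estimate centres on 5/6; the jackknifed one centres on 1.
print(fe_var(y), split_panel_jackknife(y, fe_var))
```

With T = 6 the uncorrected estimate is centred on 5/6 of the truth, while the jackknifed combination is unbiased in this example.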
  4. By: Stephen G. Donald (Department of Economics University of Texas at Austin); Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Robert P. Lieli (Department of Economics Central European University, Budapest and the National Bank of Hungary)
    Abstract: We consider a modified version of the inverse probability weighted estimator for the local average treatment effect parameter proposed by Donald, Hsu and Lieli (2014). The modification consists of using local polynomial regression, rather than series logit, in estimating the instrument propensity score. We show that the modified estimator remains asymptotically normal and efficient and, as our main contribution, provide a higher order expansion of its asymptotic mean squared error.
    Keywords: local average treatment effect, inverse probability weighted estimator, higher order expansion
    JEL: C01 C13 C14
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:14-a002&r=ecm
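    As a rough illustration of the estimator's structure (an editor's sketch, not the authors' code), the snippet below computes a Wald-type IPW LATE with the instrument propensity score estimated by local-constant kernel regression -- the order-zero member of the local-polynomial family the paper works with. All data, the bandwidth and the trimming level are made up for the example.

```python
import numpy as np

def nw_propensity(x, z, h=0.5, chunk=1000):
    """Local-constant (order-zero local-polynomial) kernel estimate of the
    instrument propensity score P(Z=1 | X=x_i) at each sample point."""
    n = len(x)
    pi = np.empty(n)
    for i in range(0, n, chunk):  # chunked to limit memory use
        k = np.exp(-0.5 * ((x[i:i + chunk, None] - x[None, :]) / h) ** 2)
        pi[i:i + chunk] = (k @ z) / k.sum(axis=1)
    return np.clip(pi, 1e-3, 1 - 1e-3)  # trim to avoid exploding weights

def ipw_late(y, d, z, pi):
    """Wald-type inverse-probability-weighted LATE estimate."""
    w1, w0 = z / pi, (1 - z) / (1 - pi)
    num = np.mean(w1 * y) - np.mean(w0 * y)  # weighted effect of Z on the outcome
    den = np.mean(w1 * d) - np.mean(w0 * d)  # weighted effect of Z on take-up
    return num / den

# Made-up data with a constant treatment effect of 2 (so the LATE is 2)
rng = np.random.default_rng(0)
n = 8000
x = rng.normal(size=n)
z = (rng.uniform(size=n) < 1 / (1 + np.exp(-x))).astype(float)  # instrument, driven by x
u = rng.normal(size=n)                                          # unobserved confounder
d = (z + u > 0.5).astype(float)                                 # endogenous treatment
y = 2.0 * d + x + u + rng.normal(size=n)
print(ipw_late(y, d, z, nw_propensity(x, z)))                   # should land near 2
```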
  5. By: Majid M. Al-Sadoon
    Abstract: This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
    Keywords: Rank testing, stochastic tests, classical tests, subspace estimation, cointegration.
    JEL: C12 C13 C30
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1411&r=ecm
  6. By: Alice X. D. Dong; Jennifer S. K. Chan; Gareth W. Peters
    Abstract: We develop quantile regression models in order to derive risk margin and to evaluate capital in non-life insurance applications. By utilizing the entire range of conditional quantile functions, especially higher quantile levels, we detail how quantile regression is capable of providing an accurate estimation of risk margin and an overview of implied capital based on the historical volatility of a general insurers loss portfolio. Two modelling frameworks are considered based around parametric and nonparametric quantile regression models which we develop specifically in this insurance setting. In the parametric quantile regression framework, several models including the flexible generalized beta distribution family, asymmetric Laplace (AL) distribution and power Pareto distribution are considered under a Bayesian regression framework. The Bayesian posterior quantile regression models in each case are studied via Markov chain Monte Carlo (MCMC) sampling strategies. In the nonparametric quantile regression framework, that we contrast to the parametric Bayesian models, we adopted an AL distribution as a proxy and together with the parametric AL model, we expressed the solution as a scale mixture of uniform distributions to facilitate implementation. The models are extended to adopt dynamic mean, variance and skewness and applied to analyze two real loss reserve data sets to perform inference and discuss interesting features of quantile regression for risk margin calculations.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1402.2492&r=ecm
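    A computational fact behind the parametric AL approach is that maximising an asymmetric Laplace likelihood at level tau is equivalent to minimising the pinball (check) loss. The toy sketch below (an editor's illustration, not the authors' MCMC scheme) verifies on simulated skewed loss-like data that the check-loss minimiser recovers the empirical tau-quantile.

```python
import numpy as np

def check_loss(u, tau):
    """Pinball/check loss: the kernel of the negative AL log-likelihood at level tau."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.0, sigma=1.0, size=10000)  # skewed, loss-like data
tau = 0.95                                          # a high quantile, as for risk margins

# Minimise the aggregate check loss over a grid of candidate quantiles
grid = np.linspace(0.0, 20.0, 2001)
loss = np.array([check_loss(y - q, tau).sum() for q in grid])
q_hat = grid[loss.argmin()]

print(q_hat, np.quantile(y, tau))  # the two agree up to grid resolution
```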
  7. By: Joshua C.C. Chan; Angelia L. Grant
    Abstract: The deviance information criterion (DIC) has been widely used for Bayesian model comparison. However, recent studies have cautioned against the use of the DIC for comparing latent variable models. In particular, the DIC calculated using the conditional likelihood (obtained by conditioning on the latent variables) is found to be inappropriate, whereas the DIC computed using the integrated likelihood (obtained by integrating out the latent variables) seems to perform well. In view of this, we propose fast algorithms for computing the DIC based on the integrated likelihood for a variety of high-dimensional latent variable models. Through three empirical applications we show that the DICs based on the integrated likelihoods have much smaller numerical standard errors compared to the DICs based on the conditional likelihoods.
    Keywords: Bayesian model comparison, state space, factor model, vector autoregression, semiparametric
    JEL: C11 C15 C32 C52
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2014-09&r=ecm
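    For readers unfamiliar with the criterion, the sketch below (an editor's toy example, not the authors' algorithms) computes the DIC from posterior draws in a conjugate normal-mean model, where there are no latent variables and the likelihood is available in closed form; the effective number of parameters pD should come out near 1, the single free parameter.

```python
import numpy as np

def normal_loglik(y, mu, sigma=1.0):
    """Log-likelihood of i.i.d. N(mu, sigma^2) data."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - mu) / sigma) ** 2)

rng = np.random.default_rng(2)
y = rng.normal(loc=1.0, scale=1.0, size=200)

# Under a flat prior the posterior of mu is N(ybar, 1/n); draw from it directly
# in place of an MCMC sampler.
draws = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(len(y)), size=5000)

dev = np.array([-2.0 * normal_loglik(y, mu) for mu in draws])  # deviance at each draw
d_bar = dev.mean()                             # posterior mean deviance
d_hat = -2.0 * normal_loglik(y, draws.mean())  # deviance at the posterior mean
p_d = d_bar - d_hat                            # effective number of parameters
dic = d_bar + p_d                              # equivalently 2*d_bar - d_hat
print(p_d, dic)                                # p_d should be close to 1
```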
  8. By: Joakim Westerlund (Deakin University); Mehdi Hosseinkouchack; Martin Solberger
    Abstract: Very little is known about the local power of second-generation panel unit root tests that are robust to cross-section dependence. This paper derives the local asymptotic power functions of the CADF and CIPS tests of Pesaran (A Simple Panel Unit Root Test in Presence of Cross-Section Dependence, Journal of Applied Econometrics 22, 265–312, 2007), which are among the most popular such tests.
    Keywords: Panel unit root test; common factor model; cross-sectional averages; cross-sectional dependence; local asymptotic power.
    JEL: C12 C13 C33
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_05&r=ecm
  9. By: Ramón Jesús Flores Díaz; Rosa E. Lillo; Juan Romo
    Abstract: In the context of functional data analysis, we propose a new method to test the homogeneity of families of functions. Based on some well-known depth measures, we construct four different statistics in order to measure the distance between two families. A simulation study is performed to check the efficiency of the tests when confronted with shape and magnitude perturbations. Finally, we apply these tools to measure the homogeneity of some families of real data, obtaining good results with these new methods.
    Keywords: FDA , Homogeneity test , Functional depth
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws140101&r=ecm
  10. By: Bell, Peter N
    Abstract: In this paper I develop a method to estimate the effect of an event on a time series variable. The event is framed in a quasi-experimental setting with time series observations on a treatment variable, which is affected by the event, and a control variable, which is not. Prior to the event, the two variables are cointegrated. After the event, they are not. Since the event only affects the treatment variable, the method uses observations on the control variable after the event and the distribution of difference in differences before the event to simulate values for the treatment variable as-if the event did not occur; hence the name counterfactual simulation. I describe theoretical properties of the method and show the method in action with purpose-built data.
    Keywords: Quasi-experiment, cointegration, time series, counterfactual, simulation
    JEL: C15 C32 C63 C90
    Date: 2014–02–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:53523&r=ecm
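    The mechanics of the method can be sketched in a few lines (an editor's illustration on purpose-built data, mirroring the paper's setup rather than reproducing its code): resample the pre-event distribution of the difference between the two series and add it to the post-event control to simulate the counterfactual treatment path.

```python
import numpy as np

rng = np.random.default_rng(3)
T, event = 300, 200

# Purpose-built data: control c and treatment x are cointegrated (x = c + noise)
# before the event; at the event, x shifts by 5 and the link is broken.
c = np.cumsum(rng.normal(size=T))            # random-walk control series
x = c + rng.normal(scale=0.5, size=T)
x[event:] += 5.0                             # the event affects only the treatment

# Empirical distribution of the pre-event difference x - c ...
diffs = x[:event] - c[:event]

# ... resampled and added to the post-event control series to simulate the
# counterfactual treatment path "as if the event did not occur".
sims = c[event:] + rng.choice(diffs, size=(1000, T - event), replace=True)

effect = np.mean(x[event:] - sims.mean(axis=0))
print(effect)   # estimated event effect, close to the true shift of 5
```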
  11. By: Joakim Westerlund (Deakin University)
    Abstract: In a very influential paper Elliott et al. (Efficient Tests for an Autoregressive Unit Root, Econometrica 64, 813–836, 1996) show that no uniformly most powerful test for the unit root testing problem exists, derive the relevant power envelope and characterize a family of point-optimal tests. As a by-product, they also propose a “GLS detrended” version of the conventional Dickey–Fuller test, denoted DF–GLS, that has since then become very popular among practitioners, much more so than the point-optimal tests. In view of this, it is quite strange to find that, while conjectured in Elliott et al. (1996), so far there seems to be no formal proof of the asymptotic distribution of the DF–GLS test statistic. By providing three separate proofs, the current paper not only substantiates the required result, but also provides insight regarding the pros and cons of different methods of proof.
    Keywords: Unit root test; GLS detrending; Asymptotic distribution; Asymptotic local power; Method of proof.
    JEL: C12 C22
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_03&r=ecm
  12. By: Joakim Westerlund (Deakin University)
    Abstract: First-differencing is generally taken to imply the loss of one observation, the first, or at least that the effect of ignoring this observation is asymptotically negligible. However, this is not always true, as in the case of GLS detrending. In order to illustrate this, the current paper considers, as an example, the use of GLS detrended data when testing for a unit root. The results show that the treatment of the first observation is crucial for test performance, and that ignoring it causes the test to break down.
    Keywords: Unit root test; GLS detrending; Local asymptotic power.
    JEL: C12 C13 C33
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2014_07&r=ecm
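    To make the role of the first observation concrete, here is a minimal sketch (an editor's illustration) of ERS-style GLS demeaning, in which the first row of the quasi-differenced system is deliberately kept untransformed; dropping that row is precisely the neglect of the first observation that the paper warns against.

```python
import numpy as np

def gls_detrend(y, cbar=-7.0):
    """ERS-style GLS detrending of y on a constant, keeping the first
    observation untransformed (z_1 = y_1), as in Elliott et al. (1996)."""
    T = len(y)
    a = 1.0 + cbar / T                       # local-to-unity quasi-differencing root
    x = np.ones(T)                           # deterministic term: a constant
    # Quasi-difference both y and x; crucially, row 1 is kept as-is.
    zy = np.concatenate(([y[0]], y[1:] - a * y[:-1]))
    zx = np.concatenate(([x[0]], x[1:] - a * x[:-1]))
    beta = (zx @ zy) / (zx @ zx)             # GLS estimate of the constant
    return y - beta * x                      # GLS-detrended series

rng = np.random.default_rng(4)
y = 10.0 + np.cumsum(rng.normal(size=500))   # random walk around a level of 10
yd = gls_detrend(y)
```

As a sanity check, applying the transform to a constant series returns zeros, since the GLS coefficient recovers the constant exactly.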
  13. By: Krasnosselski, Nikolai; Cremers, Heinz; Sanddorf, Walter
    Abstract: Globalisation of financial markets and the development of financial derivatives have increased not only opportunities but also potential risks within the banking industry. Market risk in particular has gained major significance, since variations in the market prices of interest rates, stocks or exchange rates can have a substantial impact on the value of a position. Thus, a sound estimation of market volatility plays a key role in quantifying market risk exposure correctly. This paper presents GARCH models, which capture volatility clustering and are therefore appropriate for analysing financial market data. Models with generalised autoregressive conditional heteroskedasticity are characterised by the ability to estimate and forecast time-varying volatility. In this paper, the estimation of conditional volatility is applied to Value at Risk measurement. Both univariate and multivariate concepts are presented for the estimation of conditional volatility.
    Keywords: ARCH, Backtesting, BEKK-GARCH, Bootstrapping, CCC-GARCH, Conditional Volatility, Constant Mean Model, DCC-GARCH, EWMA, GARCH, GJR-GARCH, Heteroskedasticity, IGARCH, Mandelbrot, Misspecification Test, Multivariate Volatility Model, Stylized Facts, Univariate Volatility Model, Value at Risk, Volatility Clustering
    JEL: C01 C02 C12 C13 C14 C15 C22 C32 C51 C52 C53 G32 G38
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:fsfmwp:208&r=ecm
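    As a minimal illustration of the univariate case (an editor's sketch with made-up parameter values, not the paper's estimated models), the GARCH(1,1) conditional-variance recursion feeds directly into a one-step-ahead Value at Risk figure:

```python
import numpy as np

def garch11_var(returns, omega=1e-6, alpha=0.08, beta=0.9, z99=2.326):
    """One-step-ahead 99% Value at Risk from a GARCH(1,1) volatility recursion.

    The parameters are illustrative fixed values; in practice omega, alpha
    and beta would be estimated by (quasi-)maximum likelihood."""
    s2 = np.var(returns)                        # initialise at the sample variance
    for r in returns:
        s2 = omega + alpha * r**2 + beta * s2   # conditional-variance recursion
    return z99 * np.sqrt(s2)                    # VaR as a positive loss quantile

rng = np.random.default_rng(5)
r = 0.01 * rng.standard_normal(1000)            # placeholder i.i.d. return series
print(garch11_var(r))                           # daily 99% VaR in return units
```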
  14. By: Mosconi, Rocco; Paruolo, Paolo
    Abstract: This paper discusses identification of systems of simultaneous cointegrating equations with integrated variables of order two. Rank and order conditions for identification are provided for general linear restrictions, as well as for equation-by-equation constraints. As expected, the application of the rank conditions to triangular forms and other previous formulations for these systems shows that they are just-identifying. The conditions are illustrated on models of aggregate consumption with liquid assets and on a system of equations for inventories.
    Keywords: Identification, (Multi-)Cointegration, I(2), Stocks and flows, Inventory models.
    JEL: C10 C32
    Date: 2014–01–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:53589&r=ecm
  15. By: Kareem, Fatima O.
    Abstract: The gravity model of trade has emerged as an important and popular model for explaining and predicting bilateral trade flows. While its theoretical justification is no longer in doubt, its empirical application has generated several unresolved controversies in the literature. These specifically concern estimation challenges, which revolve around the validity of the log-linear transformation of the gravity equation in the presence of heteroscedasticity and zero trade observations. These two issues raise several challenges concerning the appropriate choice of estimation technique. This paper evaluates the performance of alternative estimation techniques in the presence of zero trade observations, checks the validity of their assumptions, and examines their behaviour when those assumptions fail, particularly departures from their heteroscedasticity assumptions. The analysis is based on a dataset of Africa's fish exports to the European Union, which contains about 70% zero observations. Given our dataset and the gravity equation specified, our results show that there is no single best-performing model, although most of the linear estimators outperform the GLM estimators in many of the robustness checks performed. In essence, we find that choosing the best model depends on the dataset and on a battery of robustness tests, and we advocate an encompassing toolkit approach so as to establish robustness. We concur with Head and Mayer (2013) that the gravity equation is indeed just a toolkit and cookbook for the estimation of bilateral trade flows.
    Keywords: Gravity Equation, Heteroscedasticity, Zero trade flows, Estimation techniques, Agribusiness
    JEL: C13 C33 F10 F13
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:ags:eaa140:163341&r=ecm
  16. By: Frölich, Markus; Huber, Martin
    Abstract: This paper develops a nonparametric methodology for treatment evaluation with multiple outcome periods under treatment endogeneity and missing outcomes. We use instrumental variables, pre-treatment characteristics, and short-term (or intermediate) outcomes to identify the average treatment effect on the outcomes of compliers (the subpopulation whose treatment reacts to the instrument) in multiple periods based on inverse probability weighting. Treatment selection and attrition may depend on both observed characteristics and the unobservable compliance type, which is possibly related to unobserved factors. We also provide a simulation study and apply our methods to the evaluation of a policy intervention targeting college achievement, where we find that controlling for attrition considerably affects the effect estimates.
    Keywords: Treatment effect, attrition, endogeneity, panel data, weighting
    JEL: C14 C21 C23 C24 C26
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2014:04&r=ecm
  17. By: Huber, Martin
    Abstract: The decomposition of gender or ethnic wage gaps into explained and unexplained components (often with the aim to assess labor market discrimination) has been a major research agenda in empirical labor economics. This paper demonstrates that conventional decompositions, no matter whether linear or non-parametric, are equivalent to assuming a (probably too) simplistic model of mediation (aimed at assessing causal mechanisms) and may therefore lack causal interpretability. The reason is that decompositions typically control for post-birth variables that lie on the causal pathway from gender/ethnicity (which are determined at or even before birth) to wage but neglect the potential endogeneity that may arise from this approach. Based on the newer literature on mediation analysis, we therefore provide more attractive identifying assumptions and discuss non-parametric identification based on reweighting.
    Keywords: Wage decomposition, causal mechanisms, mediation
    JEL: J31 J71 C21 C14
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2014:05&r=ecm
  18. By: Mariano Kulish; Adrian Pagan
    Abstract: Standard solution methods for linearised models with rational expectations take the structural parameters to be constant. These solutions are fundamental for likelihood based estimation of such models. Regime changes, such as those associated with either changed rules for economic policy, institutional changes, or changes in the technology of production, can generate large changes in the statistical properties of observable variables. In practice, structural change is accounted for during estimation by selecting a sub-sample for which a time-invariant structure seems valid. In this paper we develop solutions for linearised models with structural changes under a variety of assumptions regarding agents’ beliefs about those structural changes. We put the solutions in state space form and use the Kalman filter to construct the likelihood function. We apply the techniques to three examples: an inflationary program, a disinflation program and a transitory slowdown in trend growth.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2014-15&r=ecm
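    The state-space-plus-Kalman-filter step can be illustrated on a toy local-level model with a single known structural change (an editor's sketch; the paper's models and belief structures are far richer):

```python
import numpy as np

def kalman_loglik(y, q1, q2, t_break, r=1.0):
    """Gaussian log-likelihood of a local-level model via the Kalman filter,
    with a known structural change in the state-innovation variance at
    t_break (q1 before, q2 after). An illustrative stand-in for putting a
    model with structural changes in state-space form."""
    a, p = y[0], 1e6                     # crude diffuse initialisation
    ll = 0.0
    for t, yt in enumerate(y):
        q = q1 if t < t_break else q2    # regime-dependent parameter
        p = p + q                        # predict the state variance
        f = p + r                        # one-step-ahead forecast variance
        v = yt - a                       # forecast error
        ll += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                        # Kalman gain
        a, p = a + k * v, p * (1 - k)    # update state mean and variance
    return ll

rng = np.random.default_rng(7)
T, t_break = 400, 200
w = np.concatenate([rng.normal(0, 0.1, t_break), rng.normal(0, 1.0, T - t_break)])
y = np.cumsum(w) + rng.normal(size=T)    # trend is much noisier after the break
print(kalman_loglik(y, 0.01, 1.0, t_break))
```

By construction, evaluating the likelihood at the correct two-regime parameterisation should dominate evaluating it at either regime's parameters held fixed throughout.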
  19. By: Maia Güell; José V. Rodríguez Mora; Christopher I. Telmer
    Abstract: We propose a new methodology for measuring intergenerational mobility in economic wellbeing. Our method is based on the joint distribution of surnames and economic outcomes. It circumvents the need for intergenerational panel data, a long-standing stumbling block for understanding mobility. A single cross-sectional dataset is sufficient. Our main idea is simple. If 'inheritance' is important for economic outcomes, then rare surnames should predict economic outcomes in the cross-section. This is because rare surnames are indicative of familial linkages. Of course, if the number of rare surnames is small, this won't work. But rare surnames are abundant, owing to the highly skewed surname distributions of most Western societies. We develop a model that articulates this idea and shows that the more important is inheritance, the more informative will be surnames. This result is robust to a variety of different assumptions about fertility and mating. We apply our method using the 2001 census from Catalonia, a large region of Spain. We use educational attainment as a proxy for overall economic well-being. A calibration exercise results in an estimate of the intergenerational correlation coefficient of 0.60. We also find evidence suggesting that mobility has decreased among the different generations of the 20th century. A complementary analysis based on sibling correlations confirms our results and provides a robustness check on our method. Our model and our data allow us to examine one possible explanation for the observed decrease in mobility. We find that the degree of assortative mating has increased over time. Overall, we argue that our method has promise because it can tap the vast mines of census data that are available in a heretofore unexploited manner.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:fda:fdaddt:2014-01&r=ecm
  20. By: Yukitoshi Matsushita; Taisuke Otsu; Ke-Li Xu
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/573&r=ecm
  21. By: Nikolaus Hautsch (University of Vienna); Dieter Hess (University of Cologne); Fuyu Yang (University of East Anglia)
    Abstract: We propose a Bayesian framework for nowcasting GDP growth in real time. Using vintage data on macroeconomic announcements we set up a state space system connecting latent GDP growth rates to agencies' releases of GDP and other economic indicators. We propose a Gibbs sampling scheme to filter out daily GDP growth rates using all available macroeconomic information. The sampler draws from the resulting posterior distribution, allowing us to simulate backcasting, nowcasting, and forecasting densities. A stochastic search variable selection procedure yields a data-driven way of selecting the relevant GDP predictors out of a potentially large set of economic indicators.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:uea:aepppr:2012_56&r=ecm
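    The stochastic-search idea can be sketched with a textbook spike-and-slab Gibbs sampler for a linear regression (an editor's illustration with simulated data and a known error variance, not the authors' nowcasting system): coefficients are given a mixture prior of a narrow "spike" and a wide "slab", and the sampler's inclusion indicators reveal which predictors matter.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -1.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

tau0, tau1, s2, w = 0.1, 1.0, 1.0, 0.5   # spike sd, slab sd, error var, prior incl. prob.
gamma = np.ones(p, dtype=int)            # inclusion indicators
incl = np.zeros(p)
n_iter, burn = 2000, 500

def norm_kernel(b, sd):
    """Normal density kernel (the common 1/sqrt(2*pi) factor cancels in the odds)."""
    return np.exp(-0.5 * (b / sd) ** 2) / sd

XtX, Xty = X.T @ X, X.T @ y
for it in range(n_iter):
    # beta | gamma: conjugate multivariate normal draw
    prior_prec = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
    A = XtX / s2 + prior_prec
    A_inv = np.linalg.inv(A)
    beta = rng.multivariate_normal(A_inv @ Xty / s2, A_inv)
    # gamma_j | beta_j: Bernoulli draw with slab-vs-spike posterior odds
    p1 = w * norm_kernel(beta, tau1)
    p0 = (1.0 - w) * norm_kernel(beta, tau0)
    gamma = (rng.uniform(size=p) < p1 / (p0 + p1)).astype(int)
    if it >= burn:
        incl += gamma

print(incl / (n_iter - burn))   # posterior inclusion probability per predictor
```

The first three predictors, which truly enter the regression, should receive inclusion probabilities near one, while the irrelevant ones are mostly switched off.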

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.