nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒11‒01
nineteen papers chosen by
Sune Karlsson
Örebro University

  1. On Liu Estimators for the Logit Regression Model By Månsson, Kristofer; Kibria, B. M. Golam; Shukur, Ghazi
  2. Optimal Jackknife for Discrete Time and Continuous Time Unit Root Models By Ye Chen; Jun Yu
  3. Testing for Panel Cointegration Using Common Correlated Effects By Anindya Banerjee; Josep Lluis Carrion-i-Silvestre
  4. A Class of Robust Tests in Augmented Predictive Regressions By Paulo M.M. Rodrigues; Antonio Rubia
  5. Learning a bayesian network from ordinal data By Flaminia Musella
  6. The estimation of Human Capital in structural models with flexible specification By Pietro Giorgio Lovaglio; Giuseppe Folloni
  7. Gaussian Process Regression Networks By Andrew Gordon Wilson; David A. Knowles; Zoubin Ghahramani
  8. Local Statistical Modeling via Cluster-Weighted Approach with Elliptical Distributions By Salvatore Ingrassia; Simona Caterina Minotti; Giorgio Vittadini
  9. Disagreement, Uncertainty and the True Predictive Density By Fabian Krüger; Ingmar Nolte
  10. Using proxy variables to control for unobservables when estimating productivity: A sensitivity analysis By Carmine Ornaghi; Ilke Van Beveren
  11. Inference for Shared-Frailty Survival Models with Left-Truncated Data By van den Berg, Gerard J.; Drepper, Bettina
  12. GMM Estimation of Income Distributions from Grouped Data By Gholamreza Hajargsht; William E. Griffiths; Joseph Brice; D.S. Prasada Rao; Duangkamon Chotikapanich
  13. Performance analysis and optimal selection of large mean-variance portfolios under estimation risk By Francisco Rubio; Xavier Mestre; Daniel P. Palomar
  14. Simulation based estimation of threshold moving average models with contemporaneous shock asymmetry By Taştan, Hüseyin
  15. How the Timing of Grade Retention Affects Outcomes: Identification and Estimation of Time-Varying Treatment Effects By Jane Cooley Fruehwirth; Salvador Navarro; Yuya Takahashi
  16. An MVAR Framework to Capture Extreme Events in Macroprudential Stress Tests By Paolo Guarda; Abdelaziz Rouabah; John Theal
  17. Modelling Comovements of Economic Time Series: A Selective Survey By Marco Centoni; Gianluca Cubadda
  18. A Bayesian Analysis of Female Wage Dynamics Using Markov Chain Clustering By Christoph Pamminger; Regina Tüchler
  19. Spatial econometrics of innovation : Recent contributions and research perspectives By Corinne Autant-Bernard

  1. By: Månsson, Kristofer (Jönköping University); Kibria, B. M. Golam (Florida International University); Shukur, Ghazi (Linnaeus University)
    Abstract: In innovation analysis the logit model is often applied when the dependent variable is dichotomous. Since most economic variables are correlated with each other, practitioners often face the problem of multicollinearity. This paper introduces a shrinkage estimator for the logit model that generalizes the estimator proposed by Liu (1993) for linear regression. This estimation method is suggested because the mean squared error (MSE) of the commonly used maximum likelihood (ML) method becomes inflated when the explanatory variables of the regression model are highly correlated. Using MSE, the optimal value of the shrinkage parameter is derived and some methods of estimating it are proposed. Monte Carlo simulations show that the estimated MSE and mean absolute error (MAE) are lower for the proposed Liu estimator than for ML in the presence of multicollinearity. Finally, the benefit of the Liu estimator is shown in an empirical application where different economic factors are used to explain the probability that municipalities have a net increase in inhabitants. (A schematic code sketch follows this entry.)
    Keywords: Estimation; MAE; MSE; Multicollinearity; Logit; Liu; Innovation analysis
    JEL: C35 C39
    Date: 2011–10–18
    URL: http://d.repec.org/n?u=RePEc:hhs:cesisp:0259&r=ecm
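The Liu-type shrinkage step is easiest to see in code. Below is a minimal sketch, assuming the estimator takes the standard Liu (1993) form b(d) = (X'WX + I)^(-1) (X'WX + d*I) b_ML with W the IRLS weight matrix evaluated at the ML fit; the function names and the fixed shrinkage parameter d are illustrative, and the paper's data-driven choices of d are not reproduced here.

```python
import numpy as np

def logit_ml_irls(X, y, n_iter=100, tol=1e-8):
    """Maximum likelihood for the logit model via iteratively
    reweighted least squares (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)                      # IRLS weights
        z = X @ beta + (y - p) / w             # working response
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, w
        beta = beta_new
    return beta, w

def liu_logit(X, y, d=0.5):
    """Liu-type shrinkage of the ML estimator:
    b(d) = (X'WX + I)^(-1) (X'WX + d*I) b_ML, with 0 < d < 1."""
    b_ml, w = logit_ml_irls(X, y)
    XWX = X.T @ (w[:, None] * X)
    I = np.eye(X.shape[1])
    return np.linalg.solve(XWX + I, (XWX + d * I) @ b_ml)
```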
  2. By: Ye Chen (School of Economics, Singapore Management University); Jun Yu (School of Economics, Singapore Management University)
    Abstract: Maximum likelihood estimation of the persistence parameter in the discrete time unit root model is known to suffer from downward bias. The bias is more pronounced in the continuous time unit root model. Recently, Chambers and Kyriacou (2010) introduced a new jackknife method to remove the first-order bias in the estimator of the persistence parameter in a discrete time unit root model. This paper proposes an improved jackknife estimator of the persistence parameter that works for both the discrete time and the continuous time unit root model. The proposed jackknife estimator is optimal in the sense that it minimizes the variance. Simulations highlight the performance of the proposed method in both contexts: our optimal jackknife reduces the variance of the jackknife method of Chambers and Kyriacou by at least 10% in both cases. (A sketch of the generic subsample jackknife follows this entry.)
    Keywords: Bias reduction, Variance reduction, Vasicek model, Long-span Asymptotics, Autoregression
    JEL: C11 C15
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:12-2011&r=ecm
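For background, the sketch below implements the generic m-subsample jackknife for the AR(1) persistence parameter with the standard weights m/(m-1) and -1/(m-1), chosen so that first-order biases cancel. The paper's contribution is a different, variance-minimizing choice of weights tailored to the unit root case, which is not reproduced here.

```python
import numpy as np

def ar1_ols(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def jackknife_ar1(y, m=2):
    """Generic m-subsample jackknife for the persistence parameter:
    combines the full-sample estimate with estimates from m
    non-overlapping blocks so that first-order biases cancel."""
    n_sub = len(y) // m
    rho_full = ar1_ols(y)
    rho_subs = [ar1_ols(y[i * n_sub:(i + 1) * n_sub]) for i in range(m)]
    return (m / (m - 1)) * rho_full - (1 / (m - 1)) * np.mean(rho_subs)
```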
  3. By: Anindya Banerjee; Josep Lluis Carrion-i-Silvestre
    Abstract: This paper analyzes spurious regression in panel data when the time series are cross-sectionally dependent. We show that consistent estimation of the long-run average parameter is possible once we control for cross-section dependence using cross-section averages, in the spirit of the common correlated effects approach of Pesaran (2006), Holly, Pesaran and Yamagata (2010) and Kapetanios, Pesaran and Yamagata (2011). This result is used to design a panel cointegration test statistic. The performance of the proposal is compared with that of factor-based methods for controlling for dependence when both strong and weak cross-section dependence may be present. (A sketch of the cross-section-average augmentation follows this entry.)
    Keywords: Panel cointegration, cross-section dependence, common factors, spatial econometrics
    JEL: C12 C22
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:bir:birmec:11-16&r=ecm
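The cross-section-average augmentation at the heart of the common correlated effects approach can be sketched as follows. This is a schematic single-regressor version for intuition only, not the paper's test statistic; a cointegration test would then be built on the residuals of the augmented regressions.

```python
import numpy as np

def cce_unit_regressions(Y, X):
    """Unit-by-unit regressions augmented with cross-section averages.

    Y, X: (T, N) arrays for N units observed over T periods.  The
    averages of y and x across units proxy the unobserved common
    factors (Pesaran, 2006)."""
    T, N = Y.shape
    ybar, xbar = Y.mean(axis=1), X.mean(axis=1)
    betas, resids = [], []
    for i in range(N):
        Z = np.column_stack([np.ones(T), X[:, i], ybar, xbar])
        coef, *_ = np.linalg.lstsq(Z, Y[:, i], rcond=None)
        betas.append(coef[1])
        resids.append(Y[:, i] - Z @ coef)
    return np.array(betas), np.column_stack(resids)
```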
  4. By: Paulo M.M. Rodrigues; Antonio Rubia
    Abstract: This paper focuses on the analytical discussion of a robust t-test for predictability and on the analysis of its finite-sample properties. Our analysis shows that the proposed procedure exhibits approximately correct size even in fairly small samples. Furthermore, the test is well behaved under short-run dependence and can exhibit improved power over alternative procedures. These appealing properties, together with the fact that the test can be applied simply and directly in the linear regression context, suggest that the modified t-statistic introduced in this paper is well suited for addressing predictability in empirical applications.
    JEL: C12 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201126&r=ecm
  5. By: Flaminia Musella
    Abstract: Bayesian networks are graphical models that represent the joint distribution of a set of variables using directed acyclic graphs. When the dependence structure is unknown (or only partially known), the network can be learnt from data. In this paper, we propose a constraint-based method for Bayesian network structural learning in the presence of ordinal variables. The new procedure, called OPC, is a variant of the PC algorithm that uses a nonparametric test appropriate for ordinal variables. We show that, in some situations, the OPC algorithm is more efficient than the PC algorithm. (A sketch of the order-0 skeleton step follows this entry.)
    Keywords: Structural Learning, Monotone Association, Nonparametric Methods.
    JEL: C14 C51
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:rtr:wpaper:139&r=ecm
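To fix ideas, the sketch below implements only the order-0 (marginal independence) step of a PC-style skeleton search, using Kendall's tau as a stand-in for a monotone-association test; the paper's OPC procedure uses its own nonparametric test and continues with conditioning sets of growing size.

```python
import itertools
import numpy as np
from scipy.stats import kendalltau

def pc_skeleton_order0(data, alpha=0.05):
    """Order-0 step of a PC-style skeleton search for ordinal data.

    Starts from the complete undirected graph over the columns of
    `data` and deletes an edge (i, j) whenever the nonparametric test
    cannot reject marginal independence at level alpha."""
    p = data.shape[1]
    edges = set(itertools.combinations(range(p), 2))
    for i, j in list(edges):
        _, pval = kendalltau(data[:, i], data[:, j])
        if pval > alpha:            # cannot reject independence
            edges.discard((i, j))
    return edges
```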
  6. By: Pietro Giorgio Lovaglio (Dept. of Quantitative Methods, University of Bicocca-Milan); Giuseppe Folloni (Department of Economics, University of Trento)
    Abstract: The present paper focuses on statistical models for estimating Human Capital (HC) at a disaggregated level (worker, household, graduates). The recent literature on HC as a latent variable holds that HC can reasonably be considered a broad, multi-dimensional, non-observable construct, depending on several interrelated causes and indirectly measured by many observed indicators. In this perspective, latent variable models have assumed a prominent role in the social science literature for studying the interrelationships among phenomena. However, traditional estimation methods suffer from various limitations, such as stringent distributional assumptions, improper solutions and factor score indeterminacy in Covariance Structure Analysis, and the lack of a global optimization procedure in the Partial Least Squares approach. To avoid these limitations, new approaches to structural equation modelling based on Component Analysis, which estimate latent variables as exact linear combinations of observed variables minimizing a single criterion, have been proposed in the literature. However, these methods can model only particular types of relationships among sets of variables. In this paper, we propose a class of models that makes it possible to specify and fit a variety of relationships among latent variables and endogenous indicators. Specifically, we extend this new class of models to allow for covariate effects on the endogenous indicators. Finally, we illustrate an application that measures, in a realistic structural model, the causal impact of formal HC accumulated during higher education on the initial earnings of University of Milan (Italy) graduates.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:laa:wpaper:11&r=ecm
  7. By: Andrew Gordon Wilson; David A. Knowles; Zoubin Ghahramani
    Abstract: We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the non-parametric flexibility of Gaussian processes. This model accommodates input-dependent signal and noise correlations between multiple response variables, input-dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both efficient Markov chain Monte Carlo and variational Bayes inference procedures for this model. We apply GPRN as a multiple-output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple-output (multi-task) Gaussian process models and three multivariate volatility models on benchmark datasets, including a 1000-dimensional gene expression dataset. (A sketch of the GPRN prior follows this entry.)
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1110.4411&r=ecm
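The generative structure of a GPRN, y(x) = W(x) f(x) + noise with both the latent functions f and the mixing weights W drawn from Gaussian processes, can be illustrated by sampling from the prior. A minimal sketch with a squared-exponential kernel follows; all parameter names and values are illustrative, not the authors' code.

```python
import numpy as np

def rbf_kernel(x, ls):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def sample_gprn_prior(x, p_out=2, q_latent=1, ls_w=2.0, ls_f=0.5,
                      sigma_y=0.1, seed=None):
    """Draw outputs from a GPRN prior: y(x) = W(x) f(x) + noise.

    Both the latent functions f_j and the weights W_ij are independent
    GP draws, so the signal correlations between the p_out outputs
    change with the input x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    jitter = 1e-8 * np.eye(n)
    Lw = np.linalg.cholesky(rbf_kernel(x, ls_w) + jitter)
    Lf = np.linalg.cholesky(rbf_kernel(x, ls_f) + jitter)
    f = Lf @ rng.standard_normal((n, q_latent))               # latent GPs
    W = np.einsum('nm,mpq->npq', Lw,
                  rng.standard_normal((n, p_out, q_latent)))  # weight GPs
    return np.einsum('npq,nq->np', W, f) \
        + sigma_y * rng.standard_normal((n, p_out))
```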
  8. By: Salvatore Ingrassia (Dipartimento di Impresa, Cultura e Società, Università degli Studi di Catania); Simona Caterina Minotti (Dipartimento di Statistica, Università degli Studi di Milano-Bicocca); Giorgio Vittadini (Dipartimento di Metodi Quantitativi per l'Economia e le Scienze Aziendali, Università degli Studi di Milano-Bicocca)
    Abstract: Cluster-Weighted Modeling (CWM) is a mixture approach to modelling the joint probability of data coming from a heterogeneous population. Under Gaussian assumptions, we investigate the statistical properties of CWM from both theoretical and numerical points of view; in particular, we show that CWM includes mixtures of distributions and mixtures of regressions as special cases. Further, we introduce CWM based on Student-t distributions, providing more robust fitting for groups of observations with longer-than-normal tails or atypical observations. Theoretical results are illustrated in empirical studies using both real and simulated data. (A sketch of the Gaussian CWM density follows this entry.)
    Keywords: Cluster-Weighted Modeling, Mixture Models, Model-Based Clustering
    Date: 2011–05–28
    URL: http://d.repec.org/n?u=RePEc:mis:wpaper:20111001&r=ecm
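The joint density that cluster-weighted modeling fits is worth writing out: p(x, y) = sum_k pi_k * p(y | x, k) * p(x | k). A minimal Gaussian evaluation sketch follows; making p(x | k) identical across components recovers a plain mixture of regressions, one of the special cases the abstract mentions.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def cwm_density(x, y, weights, mus, covs, betas, sigmas):
    """Joint density of a Gaussian cluster-weighted model at (x, y):

        p(x, y) = sum_k pi_k * N(y | b0_k + b_k'x, s_k^2) * N(x | mu_k, C_k)

    weights: mixing proportions; betas[k] = (b0_k, b_k); sigmas[k] = s_k."""
    dens = 0.0
    for pi_k, mu, C, beta, s in zip(weights, mus, covs, betas, sigmas):
        mean_y = beta[0] + x @ beta[1:]          # component regression line
        dens += pi_k * norm.pdf(y, mean_y, s) \
                     * multivariate_normal.pdf(x, mu, C)
    return dens
```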
  9. By: Fabian Krüger (Department of Economics, University of Konstanz, Germany); Ingmar Nolte (Warwick Business School, Financial Econometrics Research Centre (FERC), University of Warwick)
    Abstract: This paper generalizes the discussion about disagreement versus uncertainty in macroeconomic survey data by emphasizing the importance of the (unknown) true predictive density. Using a forecast combination approach, we ask whether cross-sections of survey point forecasts help to approximate the true predictive density. We find that although these cross-sections perform poorly individually, their inclusion in combined predictive densities can significantly improve upon densities relying solely on time-series information. (A density-combination sketch follows this entry.)
    Keywords: Disagreement, Uncertainty, Predictive Density, Forecast Combination
    JEL: C53 E F
    Date: 2011–09–01
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1143&r=ecm
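The combination idea can be sketched in a few lines: fit one predictive density to time-series information and one to the cross-section of survey point forecasts, then mix them. The two-component Gaussian mixture below is a deliberately simple stand-in for the paper's combination scheme; the weight w would in practice be estimated rather than fixed.

```python
import numpy as np
from scipy.stats import norm

def combined_log_score(y_realized, ts_mean, ts_sd, survey_forecasts, w=0.5):
    """Log predictive score of a two-component mixture density that
    combines a time-series density N(ts_mean, ts_sd^2) with a Gaussian
    fitted to the cross-section of survey point forecasts."""
    sv_mean = np.mean(survey_forecasts)
    sv_sd = np.std(survey_forecasts, ddof=1)
    dens = w * norm.pdf(y_realized, ts_mean, ts_sd) \
        + (1 - w) * norm.pdf(y_realized, sv_mean, sv_sd)
    return np.log(dens)
```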
  10. By: Carmine Ornaghi; Ilke Van Beveren
    Abstract: The use of proxy variables to control for unobservables when estimating a production function has become increasingly popular in empirical work in recent years. The present paper aims to contribute to this literature in three important ways. First, we provide a structured review of the different estimators and their underlying assumptions. Second, we compare the results obtained using different estimators for a sample of Spanish manufacturing firms, using definitions and data comparable to those used in most empirical work. In comparing the performance of the different estimators, we rely on various proxy variables, apply different definitions of capital, use alternative moment conditions and allow for different timing assumptions for the inputs. Third, in the empirical analysis we propose a simple (non-graphical) test of the monotonicity assumption between productivity and the proxy variable. Our results suggest that productivity measures are more sensitive to the choice of estimator than to the choice of proxy variable. Moreover, we find that the monotonicity assumption does not hold for a non-negligible proportion of the observations in our data. Importantly, the results of a simple evaluation exercise comparing the productivity distributions of exporters and non-exporters show that different estimators yield different results, pointing to the importance of making suitable timing assumptions and choosing the appropriate estimator for the data at hand. (A schematic monotonicity check follows this entry.)
    Keywords: Total factor productivity; semiparametric estimator; simultaneity, timing assumptions, generalized method of moments
    JEL: C13 C14 D24 D40
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:lic:licosd:28711&r=ecm
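One way to operationalize a simple, non-graphical monotonicity check, not necessarily the authors' exact test, is to test for a positive monotone association between estimated productivity and the proxy within bins of capital:

```python
import numpy as np
from scipy.stats import kendalltau

def monotonicity_violation_share(productivity, proxy, capital,
                                 n_bins=10, alpha=0.05, min_obs=10):
    """Share of capital bins in which the proxy is *not* significantly
    increasing in estimated productivity (Kendall's tau test).  A large
    share flags trouble for the monotonicity assumption."""
    edges = np.quantile(capital, np.linspace(0.0, 1.0, n_bins + 1))
    violations, used = 0, 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (capital >= lo) & (capital <= hi)
        if mask.sum() < min_obs:
            continue
        tau, pval = kendalltau(productivity[mask], proxy[mask])
        used += 1
        if not (tau > 0 and pval < alpha):
            violations += 1
    return violations / max(used, 1)
```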
  11. By: van den Berg, Gerard J. (University of Mannheim); Drepper, Bettina (University of Mannheim)
    Abstract: Shared-frailty survival models specify that systematic unobserved determinants of duration outcomes are identical within groups of individuals. We consider random-effects likelihood-based statistical inference when the duration data are subject to left-truncation. Such inference with left-truncated data can be performed in the Stata software package. We show that with left-truncated data, the relevant commands ignore the weeding-out process before the left-truncation points, which affects the distribution of unobserved determinants among group members in the data, that is, among the group members who survive until their truncation points. We critically examine studies in the statistical literature on this issue as well as published empirical studies that use the commands. Simulations illustrate the size of the (asymptotic) bias and its dependence on the degree of truncation. We provide a Stata command file that maximizes a likelihood function properly accounting for the interplay between truncation and dynamic selection. (A sketch of the corrected group likelihood follows this entry.)
    Keywords: stata, duration analysis, left-truncation, likelihood function, dynamic selection, hazard rate, unobserved heterogeneity, twin data
    JEL: C41 C34
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6031&r=ecm
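The truncation correction at issue is visible in the group-level likelihood. For shared gamma frailty (mean 1, variance theta) with an exponential baseline hazard, the sketch below writes the log-likelihood contribution of one group and conditions on all members surviving past their truncation points, which is the denominator the uncorrected approach omits. A simplified illustration, not the authors' Stata code.

```python
import numpy as np

def group_loglik_gamma_frailty(t, d, a, lam, theta):
    """Log-likelihood of one group: durations t, event indicators d,
    left-truncation points a, exponential baseline hazard lam, gamma
    frailty variance theta.

    Integrating out the frailty gives the joint survivor
    S(t_1..t_J) = (1 + theta * sum_j H(t_j))^(-1/theta), H(t) = lam * t;
    subtracting log_den conditions on survival past truncation."""
    t, d, a = map(np.asarray, (t, d, a))
    H_t, H_a = lam * t.sum(), lam * a.sum()
    D = int(d.sum())                      # number of events in the group
    log_num = (D * np.log(lam)
               + np.sum(np.log1p(theta * np.arange(D)))
               - (1.0 / theta + D) * np.log1p(theta * H_t))
    log_den = -(1.0 / theta) * np.log1p(theta * H_a)
    return log_num - log_den
```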
  12. By: Gholamreza Hajargsht; William E. Griffiths; Joseph Brice; D.S. Prasada Rao; Duangkamon Chotikapanich
    Abstract: We develop a GMM procedure for estimating income distributions from grouped data with unknown group bounds. The approach enables us to obtain standard errors for the estimated parameters and for functions of the parameters, such as inequality and poverty measures, and to test the validity of an assumed distribution using a J-test. Using data on eight countries/regions for the year 2005, we show how the methodology can be applied to estimate the parameters of the generalized beta distribution of the second kind, and of its special cases, the beta-2, Singh-Maddala, Dagum, generalized gamma and lognormal distributions. This work extends earlier work (Chotikapanich et al., 2007, 2012) that did not specify a formal GMM framework, did not provide a methodology for obtaining standard errors, and considered only the beta-2 distribution. The results show that the generalized beta distribution fits the data well and outperforms the other frequently used distributions. (A minimum-distance sketch for the lognormal special case follows this entry.)
    Keywords: GMM; generalized beta distribution; grouped data; inequality and poverty
    JEL: C13 C16 D31
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1129&r=ecm
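The mechanics of fitting a parametric income distribution to grouped shares can be sketched for the lognormal special case, whose Lorenz curve is L(p) = Phi(Phi^(-1)(p) - sigma); the income shares therefore identify sigma, with mu pinned down separately by mean income. The identity-weighted minimum-distance sketch below is only a caricature of the paper's approach, which uses the GB2 family, an optimal GMM weighting matrix, and delivers standard errors.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_lognormal_sigma(pop_shares, inc_shares):
    """Fit sigma of a lognormal income distribution to grouped data by
    matching observed group income shares to their model-implied
    counterparts, with group bounds backed out of population shares."""
    z = norm.ppf(np.cumsum(pop_shares)[:-1])    # standardized group bounds

    def implied_inc_shares(sigma):
        cum_inc = norm.cdf(z - sigma)           # lognormal Lorenz curve
        return np.diff(np.concatenate([[0.0], cum_inc, [1.0]]))

    def objective(par):
        g = implied_inc_shares(par[0]) - np.asarray(inc_shares)
        return g @ g                # identity weighting; GMM would use W

    res = minimize(objective, x0=[1.0], bounds=[(1e-3, None)])
    return res.x[0]
```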
  13. By: Francisco Rubio; Xavier Mestre; Daniel P. Palomar
    Abstract: We study the consistency of sample mean-variance portfolios of arbitrarily high dimension that are based on Bayesian or shrinkage estimation of the input parameters, as well as weighted sampling. In an asymptotic setting where the number of assets remains comparable in magnitude to the sample size, we characterize the estimation risk by providing deterministic equivalents of the portfolio's out-of-sample performance in terms of the underlying investment scenario. These estimates provide a means of quantifying the risk underestimation and return overestimation of improved portfolio constructions beyond standard ones. As is well known for the latter, these deviations, if not corrected, lead to inaccurate and overly optimistic Sharpe-ratio-based investment decisions. Our results are based on recent contributions in the field of random matrix theory. Along with the asymptotic analysis, the analytical framework allows us to find bias corrections that improve the achieved out-of-sample performance of typical portfolio constructions. Numerical simulations validate our theoretical findings. (A shrinkage-based portfolio sketch follows this entry.)
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1110.3460&r=ecm
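A minimal example of the kind of "improved" construction the paper analyzes: global minimum-variance weights computed from a linearly shrunk covariance matrix. The shrinkage intensity delta is left as a free input here; the paper's random-matrix results are precisely about choosing such corrections optimally when the number of assets is comparable to the sample size.

```python
import numpy as np

def shrunk_gmv_weights(returns, delta=0.5):
    """Global minimum-variance portfolio from the shrunk covariance
    Sigma = (1 - delta) * S + delta * (tr(S) / p) * I, where S is the
    sample covariance of the (n, p) return matrix."""
    S = np.cov(returns, rowvar=False)
    p = S.shape[0]
    Sigma = (1.0 - delta) * S + delta * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(Sigma, np.ones(p))
    return w / w.sum()                 # weights summing to one
```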
  14. By: Taştan, Hüseyin
    Abstract: The persistence of shocks to macroeconomic time series may differ depending on their sign or on whether a threshold value is crossed. For example, positive shocks to gross domestic product may be more persistent than negative shocks. Threshold (or asymmetric) moving average (TMA) models, by explicitly taking threshold behavior into account, can help determine whether such persistence asymmetry exists. Recently, building on the work of Wecker (1981, JASA, 76(373)) and De Gooijer (1998, JTSA, 19(1)) among others, Guay and Scaillet (2003, JBES, 21(1)) proposed a TMA model in which both contemporaneous and lagged asymmetric effects are present and provided an indirect inference framework for estimation and testing. This paper builds on their work and examines the properties of efficient method of moments (EMM) estimation of the TMA class of models using Monte Carlo simulation experiments. The model is also applied to analyze the persistence properties of shocks in Turkish business cycles. (A simulation sketch of a TMA(1) process follows this entry.)
    Keywords: Threshold moving average models; contemporaneous asymmetry; persistence of shocks; Efficient Method of Moments
    JEL: C15 C22 C01
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:34302&r=ecm
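The threshold moving average structure is easy to simulate, and simulation is the first ingredient of any simulation-based estimator such as EMM, which matches the scores of an auxiliary model computed on simulated data. The parameterization below, with separate MA coefficients for shocks above and below the threshold and a kinked contemporaneous term, is one common variant, not necessarily the paper's exact specification.

```python
import numpy as np

def simulate_tma1(n, theta_pos, theta_neg, psi=0.0, r=0.0, seed=None):
    """Simulate a TMA(1) process with threshold r: lagged shocks
    propagate with theta_pos above the threshold and theta_neg below
    it, while psi adds a contemporaneous asymmetry."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n + 1)
    lagged = np.where(e[:-1] > r, theta_pos, theta_neg) * e[:-1]
    contemporaneous = e[1:] + psi * np.maximum(e[1:] - r, 0.0)
    return contemporaneous + lagged
```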
  15. By: Jane Cooley Fruehwirth (University of Wisconsin-Madison); Salvador Navarro (University of Western Ontario); Yuya Takahashi (University of Mannheim)
    Abstract: Increasingly, grade retention is viewed as an important alternative to social promotion, yet evidence to date is unable to disentangle how the effect of grade retention varies by abilities and over time. The key challenge is differential selection of students into retention across grades and by abilities. Because existing quasi-experimental methods cannot address this question, we develop a new strategy that is a hybrid between a control function and a generalization of the fixed effects approach. Applying our method to nationally representative longitudinal data, we find evidence of dynamic selection into retention and that the treatment effect of retention varies considerably across grades and unobservable abilities of students. Our strategy can be applied more broadly to many time-varying or multiple treatment settings.
    Keywords: time-varying treatments, dynamic selection, grade retention, factor analysis
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:uwo:hcuwoc:20117&r=ecm
  16. By: Paolo Guarda; Abdelaziz Rouabah; John Theal
    Abstract: The stress testing literature abounds with reduced-form macroeconomic models that are used to forecast the evolution of the macroeconomic environment in the context of a stress testing exercise. These models permit supervisors to estimate counterparty risk under both baseline and adverse scenarios. However, the large majority of these models rest on the assumption that the innovation series are normally distributed. While this assumption renders a model tractable, it fails to capture the observed frequency of distant tail events that are the hallmark of systemic financial stress. Consequently, such macro models tend to underestimate the actual level of credit risk, which in turn leads to an inaccurate assessment of the degree of systemic risk inherent in the financial sector. Clearly, this may have significant implications for macro-prudential policy makers. One way to overcome this limitation is to introduce a mixture-of-distributions model that better captures the potential for extreme events. Based on the methodology developed by Fong, Li, Yau and Wong (2007), we incorporate a macroeconomic model based on a mixture vector autoregression (MVAR) into the stress testing framework of Rouabah and Theal (2010) used at the Banque centrale du Luxembourg. This allows the counterparty credit risk model to capture extreme tail events better than models that assume normality of the distributions underlying the macro models, and we believe it facilitates a more accurate assessment of credit risk. (A mixture-VAR simulation sketch follows this entry.)
    Keywords: financial stability, stress testing, MVAR, mixture of normals, VAR, tier 1 capital ratio, counterparty risk, Luxembourg banking sector
    JEL: C15 E44 G21
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:bcl:bclwop:bclwp063&r=ecm
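The fat tails that a mixture VAR buys relative to a Gaussian VAR come from regime-switching innovations, as the simulation sketch below makes explicit. The dimensions and parameters are placeholders; the paper's MVAR is estimated from data rather than simulated from known values.

```python
import numpy as np

def simulate_mvar1(T, weights, intercepts, coefs, chols, seed=None):
    """Simulate a first-order mixture VAR: at each date a component k
    is drawn with probability weights[k] and
        y_t = c_k + A_k @ y_{t-1} + L_k @ eps_t,  eps_t ~ N(0, I),
    so the unconditional innovation distribution is a fat-tailed
    mixture of normals rather than a single Gaussian."""
    rng = np.random.default_rng(seed)
    p = len(intercepts[0])
    y = np.zeros((T + 1, p))
    for t in range(1, T + 1):
        k = rng.choice(len(weights), p=weights)   # draw the regime
        y[t] = (intercepts[k] + coefs[k] @ y[t - 1]
                + chols[k] @ rng.standard_normal(p))
    return y[1:]
```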
  17. By: Marco Centoni (LUMSA University); Gianluca Cubadda (Faculty of Economics, University of Rome "Tor Vergata")
    Abstract: Modelling comovements among multiple economic variables takes up a substantial part of the literature in time series econometrics. Comovement can be defined as "moving together", that is, as movement that several series have in common. The common pattern may be of different natures, such as trend, cycle or seasonality, each being the result of different driving forces. As a result, series that comove share some common features. Common trends, common cycles and common seasonality are terms often found in the literature, different in scope but all aimed at modelling the common behavior of series. However, modelling comovements is not only a statistical matter, since in many cases common features are predicted by economic theory, resulting from the optimizing behavior of economic agents.
    Date: 2011–10–26
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:215&r=ecm
  18. By: Christoph Pamminger (Department of Applied Statistics, Johannes Kepler University Linz, Austria); Regina Tüchler
    Abstract: In this work, we analyze the wage careers of women in Austria. We identify groups of female employees with similar patterns in their earnings development. Covariates such as the age of entry, the number of children or maternity leave help to detect these groups. We find three types of female employees: (1) "high-wage mums", women with high income and one or two children; (2) "low-wage mums", women with low income and 'many' children; and (3) "childless careers", women who climb the career ladder and do not have children. We use a Markov chain clustering approach to find groups in the discrete-valued time series of income states. Additional covariates are included when modeling group membership via a multinomial logit model. (An EM-style clustering sketch follows this entry.)
    Keywords: Income Career, Transition Data, Multinomial Logit, Auxiliary Mixture Sampler, Markov Chain Monte Carlo
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:jku:nrnwps:2011_04&r=ecm
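The clustering logic, grouping discrete-valued income trajectories by which group-specific transition matrix makes them most likely, can be sketched as one EM iteration. The paper actually works in a Bayesian MCMC framework with an auxiliary mixture sampler and a multinomial logit for group membership; the step below is a simplified frequentist analogue.

```python
import numpy as np

def mc_cluster_em_step(chains, pis, P):
    """One EM step for model-based clustering of Markov chains.

    chains: list of integer state sequences; pis: (G,) group
    probabilities; P: (G, S, S) row-stochastic transition matrices."""
    G = len(pis)
    resp = np.zeros((len(chains), G))          # E-step: group posteriors
    for i, c in enumerate(chains):
        c = np.asarray(c)
        for g in range(G):
            resp[i, g] = np.log(pis[g]) + np.log(P[g][c[:-1], c[1:]]).sum()
        resp[i] = np.exp(resp[i] - resp[i].max())
        resp[i] /= resp[i].sum()
    counts = np.full_like(P, 1e-9)             # M-step: weighted counts
    for i, c in enumerate(chains):
        c = np.asarray(c)
        for a, b in zip(c[:-1], c[1:]):
            counts[:, a, b] += resp[i]
    P_new = counts / counts.sum(axis=2, keepdims=True)
    return resp.mean(axis=0), P_new, resp
```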
  19. By: Corinne Autant-Bernard (Université de Lyon, Lyon, F-69007, France ; Université Jean Monnet, Saint-Etienne,F-42000, France ; CNRS, GATE Lyon St Etienne, Saint-Etienne, F-42000, France)
    Abstract: First introduced by Anselin, Varga and Acs (1997), spatial econometric tools are widely used in the economic geography of innovation. Taking into account spatial autocorrelation and spatial heterogeneity of regional innovation, this paper analyzes how these techniques have improved our ability to quantify knowledge spillovers, to measure their spatial extent, and to explore the underlying mechanisms, especially the interactions between geographical and social distance. It is also argued that recent developments in spatio-dynamic models open new research lines for investigating the temporal dimension of both spatial knowledge flows and innovation networks, two issues that should rank high in the research agenda of the geography of innovation.
    Keywords: Geography of innovation, spatial correlation, spatio‐dynamic panels, innovation networks
    JEL: O31 R12 C31
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:gat:wpaper:1120&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.