nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒07‒20
nineteen papers chosen by
Sune Karlsson
Orebro University

  1. A Proposed Estimator for Dynamic Probit Models By Gao, Wei; Yao, Qiwei; Bergsman, Wicher
  2. Comparison of Parametric and Semi-Parametric Binary Response Models By Xiangjin Shen; Shiliang Li; Hiroki Tsurumi
  3. Testing for Uncorrelated Residuals in Dynamic Count Models with an Application to Corporate Bankruptcy By Sant'Anna, Pedro H. C.
  4. Testing for state dependence in binary panel data with individual covariates By Bartolucci, Francesco; Nigro, Valentina; Pigini, Claudia
  5. Testing power-law cross-correlations: Rescaled covariance test By Ladislav Kristoufek
  6. Testing for Structural Stability of Factor Augmented Forecasting Models By Valentina Corradi; Norman Swanson
  7. A Survey of Recent Advances in Forecast Accuracy Comparison Testing, with an Extension to Stochastic Dominance By Valentina Corradi; Norman Swanson
  8. Density and Conditional Distribution Based Specification Analysis By Diep Duong; Norman Swanson
  9. Combining Two Consistent Estimators By John Chao; Jerry Hausman; Whitney Newey; Norman Swanson; Tiemen Woutersen
  10. Diffusion Index Model Specification and Estimation Using Mixed Frequency Datasets By Kihwan Kim; Norman Swanson
  11. An Expository Note on the Existence of Moments of Fuller and HFUL Estimators By John Chao; Jerry Hausman; Whitney Newey; Norman Swanson; Tiemen Woutersen
  12. "Modfiied Conditional AIC in Linear Mixed Models" By Yuki Kawakubo; Tatsuya Kubokawa
  13. Zipf Law and the Firm Size Distribution: a critical discussion of popular estimators By Giulio Bottazzi; Davide Pirino; Federico Tamagni
  14. Heavy tailed time series with extremal independence By Rafal Kulik; Philippe Soulier
  15. Mining Big Data Using Parsimonious Factor and Shrinkage Methods By Hyun Hak Kim; Norman Swanson
  16. Importance sampling for jump processes and applications to finance By Laetitia Badouraly Kassim; Jérôme Lelong; Imane Loumrhari
  17. THE ASSESSMENT AND IMPROVEMENT OF THE ACCURACY FOR THE FORECAST INTERVALS By Bratu, Mihaela
  18. CDO Surfaces Dynamics By Barbara Choroś-Tomczyk; Wolfgang Karl Härdle; Ostap Okhrin
  19. Forecasting multivariate time series under present-value-model short- and long-run co-movement restrictions By Guillén, Osmani Teixeira de Carvalho; Hecq, Alain; Issler, João Victor; Saraiva, Diogo

  1. By: Gao, Wei; Yao, Qiwei; Bergsman, Wicher
    Abstract: In this paper, new estimation methods are proposed for dynamic and static probit models with panel data. Simulation studies show that the proposed estimators work relatively well.
    Keywords: Dynamic and static probit models; Panel data; Generalized Linear models
    JEL: C13
    Date: 2013–07–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48336&r=ecm
  2. By: Xiangjin Shen (Rutgers University, Economics Department); Shiliang Li (Rutgers University, Statistics Department); Hiroki Tsurumi (Rutgers University, Economics Department)
    Abstract: A Bayesian semi-parametric estimation of the binary response model using Markov Chain Monte Carlo algorithms is proposed. The performances of the parametric and semi-parametric models are compared. The mean squared error, the receiver operating characteristic curve, and the marginal effect are used as model selection criteria. Simulated data and Monte Carlo experiments show that unless the binary data are extremely unbalanced, the semi-parametric and parametric models perform equally well. However, if the data are extremely unbalanced, maximum likelihood estimation does not converge, whereas the Bayesian algorithms do. An application is also presented (see the sketch after this entry).
    Keywords: Semi-parametric binary response models, Markov Chain Monte Carlo algorithms, Kernel densities, Optimal bandwidth, Receiver operating characteristic curve
    JEL: C14 C35 C11
    Date: 2013–07–12
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201308&r=ecm
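    A minimal sketch of the two model selection criteria mentioned in the abstract (mean squared error and the ROC curve), computed for an ordinary parametric probit fit on simulated data; the paper's Bayesian semi-parametric MCMC estimator is not reproduced, and all data and parameter values below are invented for illustration.
```python
# Sketch only: parametric probit on simulated data, evaluated with the
# MSE (Brier score) and ROC AUC criteria named in the abstract.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 2))
latent = 0.8 * x[:, 0] - 0.5 * x[:, 1] + rng.normal(size=n)
y = (latent > 0).astype(int)                 # simulated binary outcomes

X = sm.add_constant(x)
probit = sm.Probit(y, X).fit(disp=0)         # parametric benchmark model
p_hat = probit.predict(X)                    # fitted choice probabilities

mse = np.mean((y - p_hat) ** 2)              # mean squared error criterion
auc = roc_auc_score(y, p_hat)                # area under the ROC curve
print(f"MSE = {mse:.4f}, ROC AUC = {auc:.4f}")
```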
  3. By: Sant'Anna, Pedro H. C.
    Abstract: This article proposes a new diagnostic test for dynamic count models, which is well suited for risk management. Our proposal is a Portmanteau-type test for lack of residual autocorrelation. Unlike previous proposals, the resulting test statistic is asymptotically pivotal when innovations are uncorrelated, but not necessarily iid nor a martingale difference. Moreover, the proposed test is able to detect local alternatives converging to the null at the parametric rate T^{1/2}, with T the sample size. The finite sample performance of the test statistic is examined by means of a Monte Carlo experiment. Finally, using a dataset on U.S. corporate bankruptcies, we apply the test to check whether common risk models are correctly specified (see the sketch after this entry).
    Keywords: Time Series of counts; Residual autocorrelation function; Model checking; Credit risk management.
    JEL: C12 C22 C25 G3 G33
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48376&r=ecm
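    The following is a naive Box-Pierce-type check for residual autocorrelation applied to Pearson residuals of a simple Poisson regression on simulated data. It only illustrates the Portmanteau idea; the paper's statistic is constructed to remain asymptotically pivotal without iid or martingale-difference assumptions, which this simple version does not deliver.
```python
# Naive Box-Pierce statistic on Pearson residuals of a Poisson regression;
# illustrative only, not asymptotically pivotal in the sense of the paper.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 500
x = rng.normal(size=T)
y = rng.poisson(np.exp(0.2 + 0.5 * x))       # simulated count series

glm = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
e = glm.resid_pearson                        # residual series to be checked

m = 10                                       # number of autocorrelation lags
acf = np.array([np.corrcoef(e[k:], e[:-k])[0, 1] for k in range(1, m + 1)])
Q = T * np.sum(acf ** 2)                     # Portmanteau (Box-Pierce) statistic
print(f"Q = {Q:.2f}, naive p-value = {stats.chi2.sf(Q, df=m):.3f}")
```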
  4. By: Bartolucci, Francesco; Nigro, Valentina; Pigini, Claudia
    Abstract: We propose a test for state dependence in binary panel data under the dynamic logit model with individual covariates. For this aim, we rely on a quadratic exponential model in which the association between the response variables is accounted for differently than in more standard formulations. The level of association is measured by a single parameter that may be estimated by a conditional maximum likelihood approach. Under the dynamic logit model, the conditional estimator of this parameter converges to zero when the hypothesis of absence of state dependence is true. This allows us to implement a Wald test for this hypothesis which may be performed very simply and attains the nominal significance level under any structure of the individual covariates. Through an extensive simulation study, we find that our test has good finite sample properties and is more robust than other existing testing procedures for state dependence to the presence of (autocorrelated) covariates in the model specification. The test is illustrated by an application based on data from the Panel Study of Income Dynamics.
    Keywords: conditional inference, dynamic logit model, quadratic exponential model, Wald test
    JEL: C12 C23
    Date: 2013–07–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48233&r=ecm
  5. By: Ladislav Kristoufek
    Abstract: We introduce a new test for the detection of power-law cross-correlations between a pair of time series - the rescaled covariance test. The test is based on the power-law divergence of the covariance of the partial sums of long-range cross-correlated processes. Utilizing a heteroskedasticity- and autocorrelation-robust estimator of the long-term covariance, we develop a test with desirable statistical properties that is well able to distinguish between short- and long-range cross-correlations. Such a test should be used as a starting point in the analysis of long-range cross-correlations, prior to the estimation of bivariate long-term memory parameters. As an application, we show that the relationships between volatility and traded volume, and between volatility and returns, in financial markets can be characterized as exhibiting power-law cross-correlations (see the sketch after this entry).
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1307.4727&r=ecm
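    A heuristic illustration, on simulated data, of the scaling idea underlying the test: the covariance of partial sums is computed over increasing block lengths and its growth rate is read off a log-log fit. This is not the formal rescaled covariance test with its HAC-based long-term covariance estimator.
```python
# Heuristic scaling check: covariance of partial sums over growing block
# lengths; a log-log slope near one points to short-range cross-correlation,
# a materially larger slope to power-law cross-correlation.
import numpy as np

rng = np.random.default_rng(2)
T = 4096
common = rng.normal(size=T)                  # shared short-memory component
x = common + rng.normal(size=T)
y = common + rng.normal(size=T)

scales = np.array([16, 32, 64, 128, 256])
covs = []
for s in scales:
    nblocks = T // s
    bx = x[: nblocks * s].reshape(nblocks, s).sum(axis=1)   # partial sums of x
    by = y[: nblocks * s].reshape(nblocks, s).sum(axis=1)   # partial sums of y
    covs.append(np.cov(bx, by)[0, 1])

slope = np.polyfit(np.log(scales), np.log(np.abs(covs)), 1)[0]
print("log-log slope of partial-sum covariance:", round(slope, 2))
```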
  6. By: Valentina Corradi (Warwick University); Norman Swanson (Rutgers University)
    Abstract: Mild factor loading instability, particularly if sufficiently independent across the different constituent variables, does not affect the estimation of the number of factors, nor subsequent estimation of the factors themselves (see e.g. Stock and Watson (2009)). This result does not hold in the presence of large common breaks in the factor loadings, however. In this case, information criteria overestimate the number of factors. Additionally, estimated factors are no longer consistent estimators of "true" factors. Hence, various recent research papers in the diffusion index literature focus on testing the constancy of factor loadings. One reason why this is a positive development is that in applied work, factor augmented forecasting models are used widely for prediction, and it is important to understand when such models are stable. Now, forecast failure of factor augmented models can be due to factor loading instability, regression coefficient instability, or both. To address this issue, we develop a test for the joint hypothesis of structural stability of both factor loadings and factor augmented forecasting model regression coefficients. The proposed statistic is based on the difference between full sample and rolling sample estimators of the sample covariance of the factors and the variable to be forecasted. Failure to reject the null ensures the structural stability of the factor augmented forecasting model. If the null is instead rejected, one can proceed to disentangle the cause of the rejection as being due to either (or both) of the aforementioned varieties of instability. Standard inference can be carried out, as the suggested statistic has a chi-squared limiting distribution. We also establish the first order validity of (block) bootstrap critical values. Finally, we provide an empirical illustration by testing for the structural stability of factor augmented forecasting models for 11 U.S. macroeconomic indicators.
    Keywords: diffusion index, factor loading stability, forecast failure, forecast stability, regression coefficient stability
    JEL: C12 C22 C53
    Date: 2013–07–16
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201314&r=ecm
  7. By: Valentina Corradi (Warwick University); Norman Swanson (Rutgers University)
    Abstract: In recent years, an impressive body of research on predictive accuracy testing and model comparison has been published in the econometrics discipline. Key contributions to this literature include the paper by Diebold and Mariano (DM: 1995) that sets the groundwork for much of the subsequent work in the area, West (1996), who considers a variant of the DM test that allows for parameter estimation error in certain contexts, and White (2000), who develops testing methodology suitable for comparing many models. In this chapter, we begin by reviewing various key testing results in the extant literature, both under vanishing and non-vanishing parameter estimation error, with a focus on the construction of valid bootstrap critical values in the case of non-vanishing parameter estimation error under recursive estimation schemes, drawing on Corradi and Swanson (2007a). We then review recent extensions to the evaluation of multiple confidence intervals and predictive densities, for both the case of a known conditional distribution (Corradi and Swanson 2006a,b) and of an unknown conditional distribution (Corradi and Swanson 2007b). Finally, we introduce a novel approach in which forecast combinations are evaluated via the examination of the quantiles of the expected loss distribution. More precisely, we compare models by looking at the cumulative distribution functions (CDFs) of prediction errors, for a given loss function, via the principle of stochastic dominance, and we choose the model whose CDF is stochastically dominated over some given range of interest (see the sketch after this entry).
    Keywords: block bootstrap, recursive estimation scheme, reality check, parameter estimation error, forecasting
    JEL: C22 C51
    Date: 2013–07–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201309&r=ecm
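    A textbook Diebold-Mariano comparison under squared-error loss, implemented as a regression of the loss differential on a constant with HAC (Newey-West) standard errors; the data are simulated and none of the survey's extensions (parameter estimation error, multiple-model comparisons, stochastic dominance of loss CDFs) are implemented.
```python
# Textbook Diebold-Mariano test for equal predictive accuracy under squared
# error loss, using HAC standard errors for the mean loss differential.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 200
y = rng.normal(size=T)                        # realized values
f1 = y + rng.normal(scale=1.0, size=T)        # forecasts from model 1
f2 = y + rng.normal(scale=1.2, size=T)        # forecasts from model 2 (noisier)

d = (y - f1) ** 2 - (y - f2) ** 2             # loss differential series
ols = sm.OLS(d, np.ones(T)).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(f"DM statistic = {ols.tvalues[0]:.2f}, p-value = {ols.pvalues[0]:.3f}")
```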
  8. By: Diep Duong (Rutgers University); Norman Swanson (Rutgers University)
    Abstract: The technique of using densities and conditional distributions to carry out consistent specification testing and model selection amongst multiple diffusion processes has received considerable attention from both financial theoreticians and empirical econometricians over the last two decades. One reason for this interest is that correct specification of diffusion models describing dynamics of financial assets is crucial for many areas in finance including equity and option pricing, term structure modeling, and risk management, for example. In this paper, we discuss advances to this literature introduced by Corradi and Swanson (2005), who compare the cumulative distribution (marginal or joint) implied by a hypothesized null model with corresponding empirical distributions of observed data. We also outline and expand upon further testing results from Bhardwaj, Corradi and Swanson (BCS: 2008) and Corradi and Swanson (2011). In particular, parametric specification tests in the spirit of the conditional Kolmogorov test of Andrews (1997) that rely on block bootstrap resampling methods in order to construct test critical values are first discussed. Thereafter, extensions due to BCS (2008) for cases where the functional form of the conditional density is unknown are introduced, and related continuous time simulation methods are discussed. Finally, we broaden our discussion from single process specification testing to multiple process model selection by discussing how to construct predictive densities and how to compare the accuracy of predictive densities derived from alternative (possibly misspecified) diffusion models. In particular, we generalize the simulation steps outlined in Cai and Swanson (2011) to multifactor models where the number of latent variables is larger than three. These final tests can be thought of as continuous time generalizations of the discrete time "reality check" test statistics of White (2000), which are widely used in empirical finance (see e.g. Sullivan, Timmermann and White (1999, 2001)). We finish the chapter with an empirical illustration of model selection amongst alternative short term interest rate models.
    Keywords: multi-factor diffusion process, specification test, out-of-sample forecast, jump process, block bootstrap
    JEL: C22 C51
    Date: 2013–07–16
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201312&r=ecm
  9. By: John Chao (University of Maryland); Jerry Hausman (MIT); Whitney Newey (MIT); Norman Swanson (Rutgers University); Tiemen Woutersen (University of Arizona)
    Abstract: This paper shows how a weighted average of a forward and reverse Jackknife IV estimator (JIVE) yields estimators that are robust against heteroscedasticity and many instruments. These estimators, called HFUL (Heteroscedasticity robust Fuller) and HLIM (Heteroskedasticity robust limited information maximum likelihood (LIML)), were introduced by Hausman et al. (2012), but without derivation. Combining consistent estimators is a theme that is associated with Jerry Hausman and, therefore, we present this derivation in this volume. Additionally, and in order to further understand and interpret HFUL and HLIM in the context of jackknife type variance ratio estimators, we show that a new variant of HLIM, under specific grouped data settings with dummy instruments, simplifies to the Bekker and van der Ploeg (2005) MM (method of moments) estimator (see the sketch after this entry).
    Keywords: endogeneity, instrumental variables, jackknife estimation, many moments, Hausman (1978) test
    JEL: C13 C31
    Date: 2013–07–16
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201310&r=ecm
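    A generic minimum-variance weighted average of two consistent scalar estimators, with made-up estimates, variances and covariance; the paper's specific weighting of forward and reverse JIVE that yields HFUL and HLIM is not derived here.
```python
# Minimum-variance combination of two consistent scalar estimators; all
# numerical values below are hypothetical.
b1, v1 = 1.05, 0.040          # estimate and variance from estimator 1 (assumed)
b2, v2 = 0.92, 0.025          # estimate and variance from estimator 2 (assumed)
c12 = 0.010                   # assumed covariance between the two estimators

w = (v2 - c12) / (v1 + v2 - 2 * c12)          # weight that minimizes the variance
b_comb = w * b1 + (1 - w) * b2
v_comb = w**2 * v1 + (1 - w) ** 2 * v2 + 2 * w * (1 - w) * c12
print(f"weight = {w:.3f}, combined estimate = {b_comb:.3f}, variance = {v_comb:.4f}")
```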
  10. By: Kihwan Kim (Rutgers University); Norman Swanson (Rutgers University)
    Abstract: In this chapter, we discuss the use of mixed frequency models and diffusion index approximation methods in the context of prediction. In particular, selected recent specification and estimation methods are outlined, and an empirical illustration is provided wherein U.S. unemployment forecasts are constructed using both classical principal components based diffusion indexes and a combination of diffusion indexes and factors formed using small mixed frequency datasets. Preliminary evidence that mixed frequency based forecasting models yield improvements over standard fixed frequency models is presented.
    Keywords: forecasting, diffusion index, mixed frequency, recursive estimation, Kalman filter
    JEL: C22 C51
    Date: 2013–07–16
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201315&r=ecm
  11. By: John Chao (University of Maryland); Jerry Hausman (MIT); Whitney Newey (MIT); Norman Swanson (Rutgers University); Tiemen Woutersen (University of Arizona)
    Abstract: In a recent paper, Hausman et al. (2012) propose a new estimator, HFUL (Heteroscedasticity robust Fuller), for the linear model with endogeneity. This estimator is consistent and asymptotically normally distributed in the many instruments and many weak instruments asymptotics. Moreover, this estimator has moments, just like the estimator by Fuller (1977). The purpose of this note is to discuss at greater length the existence of moments result given in Hausman et al. (2012). In particular, we intend to answer the following questions: Why does LIML not have moments? Why does the Fuller modification lead to estimators with moments? Is normality required for the Fuller estimator to have moments? Why do we need a condition such as Hausman et al. (2012), Assumption 9? Why do we have the adjustment formula?
    Keywords: endogeneity, instrumental variables, jackknife estimation, many moments, existence of moments
    JEL: C13 C31
    Date: 2013–07–16
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201311&r=ecm
  12. By: Yuki Kawakubo (Graduate School of Economics, University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, University of Tokyo)
    Abstract: In linear mixed models, the conditional Akaike Information Criterion (cAIC) is a procedure for variable selection aimed at the prediction of specific clusters or random effects. This is useful in problems involving the prediction of random effects, such as small area estimation, and the criterion has received much attention since it was suggested by Vaida and Blanchard (2005). A weak point of cAIC is that it is derived as an unbiased estimator of conditional Akaike information (cAI) in the overspecified case, namely when the candidate models include the true model. This results in larger biases in the underspecified case, in which the true model is not included among the candidate models. In this paper, we derive the modified cAIC (McAIC) to cover both the underspecified and overspecified cases, and investigate its properties. It is shown numerically that McAIC has smaller biases and smaller prediction errors than cAIC.
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2013cf895&r=ecm
  13. By: Giulio Bottazzi; Davide Pirino; Federico Tamagni
    Abstract: The upper tail of the firm size distribution is often assumed to follow a power-law behavior. Recently, using different estimators on different data sets, several papers have concluded that this distribution follows the Zipf Law, that is, that the fraction of firms whose size is above a given value is inversely proportional to the value itself. We compare the different methods through which this conclusion has been reached. We find that the family of estimators most widely adopted, based on an OLS regression, is in fact unreliable and basically useless for appropriate inference. This finding raises some doubts about previously identified Zipf Laws. In general, when individual observations are available, we recommend the adoption of the Hill estimator over any other method (see the sketch after this entry).
    Keywords: Firm size distribution; Zipf Law; Power-like distribution
    Date: 2013–07–12
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2013/17&r=ecm
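    A small comparison, on simulated Pareto firm sizes, of the Hill estimator recommended in the paper with the naive log-log rank-size OLS regression that the paper criticizes; the sample size and number of upper order statistics are arbitrary choices.
```python
# Hill estimator versus the log-log rank-size OLS regression on simulated
# Pareto firm sizes with true tail exponent 1 (the Zipf case).
import numpy as np

rng = np.random.default_rng(4)
alpha_true = 1.0
sizes = (1.0 / rng.uniform(size=5000)) ** (1.0 / alpha_true)  # Pareto draws

x = np.sort(sizes)[::-1]                      # order statistics, largest first
k = 500                                       # number of upper order statistics
hill = 1.0 / np.mean(np.log(x[:k]) - np.log(x[k]))   # Hill estimate of alpha

ranks = np.arange(1, len(x) + 1)
ols_slope = np.polyfit(np.log(x), np.log(ranks), 1)[0]  # rank-size regression

print(f"Hill estimate of alpha: {hill:.2f}")
print(f"OLS rank-size slope (roughly -alpha): {ols_slope:.2f}")
```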
  14. By: Rafal Kulik; Philippe Soulier
    Abstract: We consider strictly stationary heavy tailed time series whose finite-dimensional exponent measures are concentrated on the axes, and hence whose extremal properties cannot be tackled using classical multivariate regular variation, which is suitable for time series with extremal dependence. We recover relevant information about the limiting behavior of time series with extremal independence by introducing a sequence of scaling functions and a conditional scaling exponent. Both quantities provide more information about joint extremes than the widely used tail dependence coefficient. We calculate the scaling functions and the scaling exponent for a variety of models, including Markov chains, the exponential autoregressive model, and stochastic volatility with heavy-tailed innovations or volatility. The theory is illustrated by numerical studies and data analysis.
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1307.1501&r=ecm
  15. By: Hyun Hak Kim (Bank of Korea); Norman Swanson (Rutgers University)
    Abstract: A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using "big data". In this paper, our over-arching question is whether such "big data" are useful for modelling low frequency macroeconomic variables such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of dimension-reducing independent component analysis (ICA) and sparse principal component analysis (SPCA), coupled with a variety of other factor estimation methods as well as data shrinkage methods, including bagging, boosting, and the elastic net, among others. We do so by carrying out a forecasting "horse-race", involving the estimation of 28 different baseline model types, each constructed using a variety of specification approaches, estimation approaches, and benchmark econometric models, and all used in the prediction of 11 key macroeconomic variables relevant for monetary policy assessment. In many instances, we find that various of our benchmark specifications, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate more complicated nonlinear methods, and that using a combination of factor and other shrinkage methods often yields superior predictions. For example, simple averaging methods are mean square forecast error (MSFE) "best" in only 9 of 33 key cases considered. This is rather surprising new evidence that model averaging methods do not necessarily yield MSFE-best predictions. However, in order to "beat" model averaging methods, including arithmetic mean and Bayesian averaging approaches, we have introduced into our "horse-race" numerous new models that combine complicated factor estimation methods with interesting new forms of shrinkage. For example, SPCA yields MSFE-best prediction models in many cases, particularly when coupled with shrinkage. This result provides strong new evidence of the usefulness of sophisticated factor based forecasting, and therefore, of the use of "big data" in macroeconometric forecasting (see the sketch after this entry).
    Keywords: prediction, independent component analysis, robust regression, shrinkage, factors
    JEL: C32 C53 G17
    Date: 2013–07–16
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201316&r=ecm
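    A stylized factor-plus-shrinkage forecasting sketch: principal-component factors are extracted from a simulated predictor panel and fed, together with a lag of the target, to an elastic net. The dimensions, tuning parameters and data are invented, and the SPCA/ICA, bagging and boosting variants studied in the paper are not implemented.
```python
# Factors from a large simulated panel plus elastic-net shrinkage for a
# one-step-ahead forecast; all dimensions and tuning values are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(5)
T, N = 200, 100
F = rng.normal(size=(T, 3))                                   # latent factors
X = F @ rng.normal(size=(3, N)) + rng.normal(size=(T, N))     # big predictor panel
y = F[:, 0] + 0.5 * F[:, 1] + rng.normal(size=T)              # target variable

factors = PCA(n_components=5).fit_transform(X)                # diffusion indexes
Z = np.column_stack([factors[:-1], y[:-1]])                   # regressors dated t
target = y[1:]                                                # target dated t + 1

model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(Z, target)    # shrinkage step
z_last = np.concatenate([factors[-1], [y[-1]]]).reshape(1, -1)
print("one-step-ahead forecast:", float(model.predict(z_last)[0]))
```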
  16. By: Laetitia Badouraly Kassim (LJK - Laboratoire Jean Kuntzmann - CNRS : UMR5224 - Université Joseph Fourier - Grenoble I - Université Pierre-Mendès-France - Grenoble II - Institut Polytechnique de Grenoble - Grenoble Institute of Technology); Jérôme Lelong (LJK - Laboratoire Jean Kuntzmann - CNRS : UMR5224 - Université Joseph Fourier - Grenoble I - Université Pierre-Mendès-France - Grenoble II - Institut Polytechnique de Grenoble - Grenoble Institute of Technology); Imane Loumrhari (LJK - Laboratoire Jean Kuntzmann - CNRS : UMR5224 - Université Joseph Fourier - Grenoble I - Université Pierre-Mendès-France - Grenoble II - Institut Polytechnique de Grenoble - Grenoble Institute of Technology)
    Abstract: Adaptive importance sampling techniques are widely known for the Gaussian setting of Brownian-driven diffusions. In this work, we extend them to jump processes. Our approach relies on a change of the jump intensity combined with the standard exponential tilting for the Brownian motion. The free parameters of our framework are optimized using sample average approximation techniques. We illustrate the efficiency of our method on the valuation of financial derivatives in several jump models (see the sketch after this entry).
    Keywords: Importance sampling; sample average approximation; adaptive Monte Carlo methods.
    Date: 2013–07–08
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00842362&r=ecm
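    A minimal illustration of the intensity-change idea for jump processes: a rare jump-count event is sampled under a larger Poisson intensity and reweighted with the exact likelihood ratio. The exponential tilting of the Brownian part and the sample average approximation used to optimize the parameters in the paper are omitted, and all numerical values are arbitrary.
```python
# Importance sampling for a rare jump-count event: sample under a larger
# Poisson intensity and reweight with the exact likelihood ratio.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
lam, lam_prop, T = 5.0, 20.0, 1.0             # true and proposal jump intensities
n_sim, threshold = 100_000, 20

N = rng.poisson(lam_prop * T, size=n_sim)     # jump counts under the proposal
weights = np.exp((lam_prop - lam) * T) * (lam / lam_prop) ** N
is_estimate = np.mean((N >= threshold) * weights)

exact = stats.poisson.sf(threshold - 1, lam * T)   # P(N >= threshold) under lam
print(f"IS estimate: {is_estimate:.3e}, exact: {exact:.3e}")
```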
  17. By: Bratu, Mihaela (Academy of Economic Studies. Faculty of Cybernetics, Statistics and Economic Informatics.)
    Abstract: The objective of this research is to present some accuracy measures associated with forecast intervals, taking into account the fact that specific accuracy indicators for this type of prediction have not yet been proposed in the literature. For the quarterly inflation rate provided by the National Bank of Romania, forecast intervals were built over the horizon 2010-2012. According to the number of intervals that include the real value and to an econometric procedure based on dummy variables, the intervals based on historical errors (RMSE, root mean squared error) are better than those based on the BCa bootstrap procedure. However, according to the new indicator proposed in this paper as a measure of global accuracy, the M indicator, the forecast intervals based on BCa bootstrapping are more accurate than the intervals based on historical RMSE. Bayesian intervals were constructed for quarterly U.S. inflation in 2012 using prior information, but the smaller intervals did not imply an increase in the degree of accuracy (see the sketch after this entry).
    Keywords: forecast intervals, accuracy, uncertainty, BCA bootstrap intervals, indicator M
    JEL: C10 C14 L6
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:rjr:wpmems:132602&r=ecm
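    A simple version of the RMSE-based forecast intervals discussed above and their empirical coverage, with invented forecasts and outcomes; the BCa bootstrap and Bayesian intervals and the proposed M indicator are not reproduced.
```python
# RMSE-based forecast intervals and their empirical coverage; the point
# forecasts, outcomes and historical errors below are purely hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
past_errors = rng.normal(scale=0.6, size=40)             # historical forecast errors
rmse = np.sqrt(np.mean(past_errors ** 2))
z = stats.norm.ppf(0.95)                                  # 90% two-sided interval

forecasts = np.array([3.1, 3.4, 3.0, 2.8])                # hypothetical point forecasts
realized = np.array([3.5, 3.2, 2.5, 3.0])                 # hypothetical outcomes
lower, upper = forecasts - z * rmse, forecasts + z * rmse

coverage = np.mean((realized >= lower) & (realized <= upper))
print(f"RMSE = {rmse:.2f}, empirical coverage = {coverage:.2f}")
```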
  18. By: Barbara Choroś-Tomczyk; Wolfgang Karl Härdle; Ostap Okhrin
    Abstract: Modelling the dynamics of credit derivatives is a challenging task in finance and economics. The recent crisis has shown that the standard market models fail to measure and forecast financial risks and their characteristics. This work studies the risk of collateralized debt obligations (CDOs) by investigating the evolution of tranche spread surfaces and base correlation surfaces using a dynamic semiparametric factor model (DSFM). The DSFM offers a combination of flexible functional data analysis and dimension reduction methods, where the change in time is linear but the shape is nonparametric. The study provides an empirical analysis based on iTraxx Europe tranches and proposes an application to curve trading strategies. The DSFM allows us to describe the dynamics of all the tranches for all available maturities and series simultaneously, which yields a better understanding of the risk associated with trading CDOs and other structured products.
    Keywords: base correlation, collateralized debt obligation, curve trade, dynamic factor model, semiparametric model
    JEL: C14 C51 G11 G17
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013-032&r=ecm
  19. By: Guillén, Osmani Teixeira de Carvalho; Hecq, Alain; Issler, João Victor; Saraiva, Diogo
    Abstract: It is well known that cointegration between the levels of two variables (e.g. prices and dividends) is a necessary condition to assess the empirical validity of a present-value model (PVM) linking them. The work on cointegration, namely on long-run co-movements, has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. This amounts to investigating whether short-run co-movements stemming from common cyclical feature restrictions are also present in such a system. In this paper we test for the presence of such co-movements on long- and short-term interest rates and on prices and dividends for the U.S. economy. We focus on the potential improvement in forecasting accuracy when imposing these two types of restrictions coming from economic theory (see the sketch after this entry).
    Date: 2013–07–01
    URL: http://d.repec.org/n?u=RePEc:fgv:epgewp:742&r=ecm
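    A sketch of the first necessary condition of the present-value model, cointegration between the two levels, checked with a Johansen trace test on simulated data; the common-cyclical-feature (short-run) restrictions and the forecasting comparison carried out in the paper are not implemented.
```python
# Johansen trace test for cointegration between two simulated level series
# sharing a stochastic trend (stand-ins for log prices and log dividends).
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(8)
T = 300
trend = np.cumsum(rng.normal(size=T))                 # shared stochastic trend
p = trend + rng.normal(scale=0.5, size=T)             # level series 1
d = 0.8 * trend + rng.normal(scale=0.5, size=T)       # level series 2

res = coint_johansen(np.column_stack([p, d]), det_order=0, k_ar_diff=1)
print("trace statistics (rank 0, rank <= 1):", res.lr1)
print("5% critical values:", res.cvt[:, 1])           # columns are 90/95/99%
```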

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.