nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒03‒16
eighteen papers chosen by
Sune Karlsson
Örebro University

  1. Inference in nonstationary asymmetric GARCH models By Francq, Christian; Zakoian, Jean-Michel
  2. Bayesian Bandwidth Selection in Nonparametric Time-Varying Coefficient Models By Tingting Cheng; Jiti Gao; Xibin Zhang
  3. Determining the Number of Regimes in Markov-Switching VAR and VMA Models By Maddalena Cavicchioli
  4. Generalised empirical likelihood-based kernel density estimation By Vitaliy Oryshchenko; Richard J. Smith
  5. A merging algorithm for Gaussian mixture components By Andrea Pastore; Stefano Tonellato
  6. A potential solution to problems in ordered choice models involving endogenous ordinal variables for self-reported questions By Hasan, Hamid; Rehman, Attiqur
  7. Model choice and size distribution: a Bayequentist approach By John-Oliver Engler; Stefan Baumgaertner
  8. Bootstrap Co-integration Rank Testing: The Effect of Bias-Correcting Parameter Estimates By Cavaliere, Giuseppe; Taylor, A. M. Robert; Trenkler, Carsten
  9. Modeling the Dependence of Conditional Correlations on Volatility By L. Bauwens; Edoardo Otranto
  10. Weak Identification in Probit Models with Endogenous Covariates By Jean-Marie Dufour; Joachim Wilde
  11. The Cascade Bayesian Approach for a controlled integration of internal data, external data and scenarios. By Bertrand K. Hassani; Alexis Renaudin
  12. What Are We Weighting For? By Gary Solon; Steven J. Haider; Jeffrey Wooldridge
  13. Estimating the dose-response function through the GLM approach By Guardabascio, Barbara; Ventura, Marco
  14. Time Instability of the U.S. Monetary System: Multiple Break Tests and Reduced Rank TVP VAR By Dukpa Kim; Yohei Yamamoto
  15. Common correlated effects and international risk sharing By Peter Fuleky; L Ventura; Qianxue Zhao
  16. Plug-in estimation of level sets in a non-compact setting with applications in multivariate risk theory By Elena Di Bernardino; Thomas Laloë; Véronique Maume-Deschamps; Clémentine Prieur
  17. Bootstrapping Realized Multivariate Volatility Measures By Dovonon, Prosper; Goncalves, Silvia; Meddahi, Nour
  18. Unpredictability in Economic Analysis, Econometric Modeling and Forecasting By David F. Hendry; Grayham E. Mizon

  1. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: This paper considers statistical inference for the class of asymmetric power-transformed GARCH(1,1) models in the presence of possible explosiveness. We study the explosive behavior of volatility when the strict stationarity condition is not met. This allows us to establish the asymptotic normality of the quasi-maximum likelihood estimator (QMLE) of the parameter vector, including the power but excluding the intercept, when strict stationarity does not hold. Two important issues can be tested in this framework: asymmetry and stationarity. The tests exploit the existence of a universal estimator of the asymptotic covariance matrix of the QMLE. By establishing the local asymptotic normality (LAN) property in this nonstationary framework, we can also study optimality issues.
    Keywords: GARCH models; Inconsistency of estimators; Local power of tests; Nonstationarity; Quasi Maximum Likelihood Estimation
    JEL: C01 C12 C13 C22
    Date: 2013–03–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:44901&r=ecm
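    Illustration: a minimal Python sketch of Gaussian quasi-maximum likelihood estimation for a plain symmetric GARCH(1,1), a simpler special case of the asymmetric power-transformed class studied in the paper; the simulated series, starting values, and bounds are placeholders, not the authors' code.
      import numpy as np
      from scipy.optimize import minimize

      def neg_quasi_loglik(params, eps):
          # Gaussian quasi-log-likelihood for h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}
          omega, alpha, beta = params
          h = np.empty_like(eps)
          h[0] = eps.var()                          # initialise with the sample variance
          for t in range(1, len(eps)):
              h[t] = omega + alpha * eps[t-1]**2 + beta * h[t-1]
          return 0.5 * np.sum(np.log(h) + eps**2 / h)

      rng = np.random.default_rng(0)
      eps = 0.5 * rng.standard_normal(1000)         # placeholder return series
      res = minimize(neg_quasi_loglik, x0=[0.1, 0.1, 0.8], args=(eps,),
                     bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)])
      print("QMLE (omega, alpha, beta):", res.x)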
  2. By: Tingting Cheng; Jiti Gao; Xibin Zhang
    Abstract: Bandwidth plays an important role in determining the performance of local linear estimators. In this paper, we propose a Bayesian approach to bandwidth selection for local linear estimation of time-varying coefficient time series models, where the errors are assumed to follow the Gaussian kernel error density. A Markov chain Monte Carlo algorithm is presented to estimate simultaneously the bandwidths for the local linear estimators in the regression function and the bandwidth for the Gaussian kernel error-density estimator. A Monte Carlo simulation study shows that: 1) the proposed Bayesian approach estimates the bandwidths for local linear estimators better than the normal reference rule and cross-validation; 2) compared with a parametric assumption of either a Gaussian or a mixture of two Gaussians, the Gaussian kernel error-density assumption is a data-driven choice that adds robustness to misspecification of the true error density. Moreover, we apply the proposed Bayesian sampling method to estimate bandwidths for time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide a calibrated parametric form of its time-varying coefficients.
    Keywords: Bayes factors, bandwidth, marginal likelihood, local linear estimator, random-walk Metropolis algorithm.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-7&r=ecm
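    Illustration: a minimal Python sketch of random-walk Metropolis sampling for a single kernel density bandwidth under a flat prior on its logarithm; the paper samples several bandwidths jointly within a time-varying coefficient model, so the data, proposal scale, and chain length here are placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.standard_normal(200)                  # placeholder "error" sample
      n = len(x)

      def log_like(log_h):
          # Leave-one-out Gaussian kernel likelihood of the sample given bandwidth h
          h = np.exp(log_h)
          d = (x[:, None] - x[None, :]) / h
          k = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
          np.fill_diagonal(k, 0.0)
          f = k.sum(axis=1) / ((n - 1) * h)
          return np.sum(np.log(f))

      log_h, draws = np.log(0.5), []
      ll = log_like(log_h)
      for _ in range(2000):
          prop = log_h + 0.2 * rng.standard_normal()    # random-walk proposal
          ll_prop = log_like(prop)
          if np.log(rng.uniform()) < ll_prop - ll:      # accept/reject step
              log_h, ll = prop, ll_prop
          draws.append(np.exp(log_h))
      print("posterior mean bandwidth:", np.mean(draws[500:]))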
  3. By: Maddalena Cavicchioli (Advanced School of Economics, Department of Economics, University Of Venice Cà Foscari)
    Abstract: We give stable finite-order VARMA(p*, q*) representations for M-state Markov switching second-order stationary time series whose autocovariances satisfy a certain matrix relation. The upper bounds for p* and q* are elementary functions of the dimension K of the process, the number M of regimes, and the autoregressive and moving average orders of the initial model. If there is no cancellation, the bounds become equalities, and this solves the identification problem. Our class of time series includes every M-state Markov switching multivariate moving average model and every autoregressive model in which the regime variable is uncorrelated with the observable. Our results include, as particular cases, those obtained by Krolzig (1997), and improve the bounds given by Zhang and Stine (2001) and Francq and Zakoian (2001) for our classes of dynamic models. Data simulations and an application to foreign exchange rates complete the paper.
    Keywords: Second-order stationary time series, VMA models, VAR models, State-Space models, Markov chains, changes in regime, regime number.
    JEL: C01 C32 C50 C52
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2013:03&r=ecm
  4. By: Vitaliy Oryshchenko (Department of Economics, University of Oxford); Richard J. Smith (UCL, IFS and University of Cambridge)
    Abstract: If additional information about the distribution of a random variable is available in the form of moment conditions, a weighted kernel density estimate reflecting the extra information can be constructed by replacing the uniform weights with the generalised empirical likelihood probabilities. It is shown that the resultant density estimator provides an improved approximation to the moment constraints. Moreover, a reduction in variance is achieved due to the systematic use of the extra moment information.
    Keywords: weighted kernel density estimation, moment conditions, higher-order expansions, normal mixtures.
    JEL: C14
    Date: 2013–02–12
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1303&r=ecm
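    Illustration: a minimal Python sketch of a kernel density estimate weighted by empirical likelihood probabilities under the single moment condition E[X] = 0; the paper covers general moment conditions and the wider generalised empirical likelihood family, so the sample and bandwidth rule are placeholders.
      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(2)
      x = rng.standard_normal(300) + 0.1            # sample; imposed moment: mean zero
      g = x - 0.0                                   # moment function g(x) = x - 0

      # EL weights p_i = 1/(n(1 + lam*g_i)), lam solving sum g_i/(1 + lam*g_i) = 0
      score = lambda lam: np.sum(g / (1.0 + lam * g))
      lam = brentq(score, -1.0 / g.max() + 1e-8, -1.0 / g.min() - 1e-8)
      p = 1.0 / (len(x) * (1.0 + lam * g))          # weights sum to one by construction

      h = 1.06 * x.std() * len(x) ** (-0.2)         # rule-of-thumb bandwidth
      grid = np.linspace(-4, 4, 201)
      d = (grid[:, None] - x[None, :]) / h
      fhat = (p * np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)).sum(axis=1) / h
      print("weighted KDE integrates to ~1:", np.trapz(fhat, grid))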
  5. By: Andrea Pastore (Department of Economics, University Of Venice Cà Foscari); Stefano Tonellato (Department of Economics, University Of Venice Cà Foscari)
    Abstract: In finite mixture model clustering, each component of the fitted mixture is usually associated with a cluster. In other words, each component of the mixture is interpreted as the probability distribution of the variables of interest conditionally on membership of a given cluster. The Gaussian mixture model (GMM) is very popular in this context for its simplicity and flexibility. It may happen, however, that the components of the fitted model are not well separated. In such a circumstance, the number of clusters is often overestimated, and a better clustering could be obtained by joining some subsets of the partition based on the fitted GMM. Some methods for the aggregation of mixture components have recently been proposed in the literature. In this work, we propose a hierarchical aggregation algorithm based on a generalisation of the definition of silhouette width that takes into account the Mahalanobis distances induced by the precision matrices of the components of the fitted GMM. The algorithm chooses the number of groups corresponding to the hierarchy level with the highest average silhouette width. Simulation experiments and real data applications indicate that its performance is at least as good as that of other existing methods.
    Keywords: similarity indices, Rand index, mixture models, bootstrap.
    JEL: C39 C46
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2013:04&r=ecm
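    Illustration: a minimal Python sketch of the aggregation idea: fit a Gaussian mixture with too many components, merge the two components with the closest means, and compare average silhouette widths of the two partitions. The paper's algorithm uses a Mahalanobis-based silhouette and a full hierarchical merge, so this Euclidean single-merge version is only a placeholder.
      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(12)
      X = np.vstack([rng.normal(0, 1, (150, 2)),    # two well-separated clusters
                     rng.normal(5, 1, (150, 2))])

      gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
      labels = gmm.predict(X)

      # Merge the pair of components whose means are closest.
      d = np.linalg.norm(gmm.means_[:, None] - gmm.means_[None, :], axis=2)
      np.fill_diagonal(d, np.inf)
      i, j = np.unravel_index(d.argmin(), d.shape)
      merged = np.where(labels == j, i, labels)

      print("3 components:", silhouette_score(X, labels))
      print("after merge :", silhouette_score(X, merged))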
  6. By: Hasan, Hamid; Rehman, Attiqur
    Abstract: Most surveys in the social sciences consist largely of ordinal variables. Researchers sometimes need to model the behaviour of ordinal variables in a simultaneous equation system involving many endogenous ordinal variables. This situation leads to a very complex likelihood function that is extremely hard to solve. The solutions suggested in the literature are even harder for applied researchers to follow. The present study suggests a simulation method that avoids this problem altogether by converting ordinal variables into continuous variables, so that standard simultaneous-equation regression models can be used. The proposed method involves generating random numbers from continuous probability distributions (uniform and truncated normal distributions) within a discrete probability distribution. It can fruitfully be used in ordered logit and probit models. The limitations of the method are also discussed.
    Keywords: Endogenous Ordinal variables, Simultaneous Equation System, Ordered Logit, Ordered Probit.
    JEL: C1 C3 C4
    Date: 2013–03–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:44908&r=ecm
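    Illustration: a minimal Python sketch of the conversion described above: replace each ordinal response with a continuous draw taken uniformly within that category's band, so the discrete distribution over categories is preserved exactly. The paper also considers truncated-normal draws; the 5-point responses here are placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      y_ord = rng.integers(1, 6, size=500)          # placeholder 5-point responses

      # Map category k to the interval [k-1, k) and draw uniformly within it.
      y_cont = (y_ord - 1) + rng.uniform(size=y_ord.size)

      # Flooring recovers the original categories, so no information is lost.
      assert np.all(np.floor(y_cont).astype(int) + 1 == y_ord)
      print("category counts:", np.bincount(y_ord)[1:])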
  7. By: John-Oliver Engler (Department of Sustainability Sciences and Department of Economics Leuphana University of Lueneburg, Germany); Stefan Baumgaertner (Department of Sustainability Sciences and Department of Economics Leuphana University of Lueneburg, Germany)
    Abstract: We propose a new three-step model-selection framework for size distributions in empirical data. It generalizes a recent frequentist plausibility-of-fit analysis (Step 1) and combines it with a relative ranking based on the Bayesian Akaike Information Criterion (Step 2). We enhance these statistical criteria with the additional criterion of microfoundation (Step 3), which selects the size distribution that comes with a dynamic micro-level model of size dynamics. A numerical performance test of Step 1 shows that our generalization is able to correctly rule out distribution hypotheses not justified by the data at hand. We then illustrate our approach, and demonstrate its usefulness, with a sample of commercial cattle farms in Namibia. In conclusion, the framework proposed here has the potential to resolve the ongoing debate about size distribution models in empirical data, the two most prominent of which are the Pareto and the lognormal distribution.
    Keywords: model choice, model selection, hypothesis testing, size distributions, Gibrat's Law, Pareto distribution, rank-size rule, environmental risk, semi-arid rangelands, cattle farming
    JEL: C12 C52 D30 D31 O44
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:lue:wpaper:265&r=ecm
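    Illustration: a minimal Python sketch of the relative ranking in Step 2: fit the two candidate size distributions by maximum likelihood and rank them by AIC. The full framework adds the plausibility-of-fit analysis (Step 1) and the microfoundation criterion (Step 3); the simulated farm sizes are placeholders.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      sizes = stats.lognorm.rvs(s=1.0, scale=50.0, size=400, random_state=rng)

      for name, dist in [("lognormal", stats.lognorm), ("Pareto", stats.pareto)]:
          params = dist.fit(sizes, floc=0)          # location fixed at 0 for size data
          ll = np.sum(dist.logpdf(sizes, *params))
          k = len(params) - 1                       # free parameters (loc is fixed)
          print(f"{name}: AIC = {2 * k - 2 * ll:.1f}")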
  8. By: Cavaliere, Giuseppe; Taylor, A. M. Robert; Trenkler, Carsten
    Abstract: In this paper we investigate bootstrap-based methods for bias-correcting the first-stage parameter estimates used in some recently developed bootstrap implementations of the co-integration rank tests of Johansen (1996). In order to do so we adapt the framework of Kilian (1998) which estimates the bias in the original parameter estimates using the average bias in the corresponding parameter estimates taken across a large number of auxiliary bootstrap replications. A number of possible implementations of this procedure are discussed and concrete recommendations made on the basis of finite sample performance evaluated by Monte Carlo simulation methods. Our results show that bootstrap-based bias-correction methods can significantly improve upon the small sample performance of the bootstrap co-integration rank tests. A brief application of the techniques developed in this paper to international dynamic consumption risk sharing within Europe is also considered.
    Keywords: Co-integration , trace test , bias-correction , bootstrap
    JEL: C30 C32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:mnh:wpaper:32993&r=ecm
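    Illustration: a minimal Python sketch of Kilian (1998)-style bootstrap bias correction for an AR(1) coefficient, the device the paper adapts to the VAR estimates underlying bootstrap co-integration rank tests; the sample size, persistence, and number of replications are placeholders.
      import numpy as np

      rng = np.random.default_rng(5)

      def ols_ar1(y):
          return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

      T, rho = 80, 0.9                              # OLS is biased downward here
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = rho * y[t-1] + rng.standard_normal()

      rho_hat = ols_ar1(y)
      resid = y[1:] - rho_hat * y[:-1]

      # Estimate the bias as the average bias across auxiliary bootstrap samples.
      boot = []
      for _ in range(999):
          e = rng.choice(resid, size=T, replace=True)
          yb = np.zeros(T)
          for t in range(1, T):
              yb[t] = rho_hat * yb[t-1] + e[t]
          boot.append(ols_ar1(yb))
      bias = np.mean(boot) - rho_hat
      print("OLS:", round(rho_hat, 3), "bias-corrected:", round(rho_hat - bias, 3))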
  9. By: L. Bauwens; Edoardo Otranto
    Abstract: Several models have been developed to capture the dynamics of the conditional correlations between time series of financial returns, but few studies have investigated the determinants of the correlation dynamics. A common view is that market volatility is a major determinant of the correlations. We extend several models to capture explicitly the dependence of the correlations on the volatility of the market of interest. The models differ in the way volatility influences the correlations, which can be transmitted through linear or nonlinear, direct or indirect effects. They are applied to different data sets to verify the presence, and possible regularity, of the volatility impact on correlations.
    Keywords: volatility effects; conditional correlation; DCC; Markov switching
    JEL: C32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:201304&r=ecm
  10. By: Jean-Marie Dufour; Joachim Wilde (Universitaet Osnabrueck)
    Abstract: Weak identification is a well-known problem in linear multiple-equation models, but little is known about whether it also matters in probit models with endogenous covariates. The behaviour of the usual z-statistic under weak identification is therefore analysed in a simulation study. It shows large size distortions and reveals a new puzzle: the magnitude of the size distortion depends heavily on the parameter value being tested. As an alternative, the LR statistic is calculated, which is known to be more robust to weak identification in linear multiple-equation models. The same appears to be true for probit equations: no size distortions are found, although moderate undersizing is observed.
    Keywords: probit model, weak identification
    JEL: C
    Date: 2013–03–01
    URL: http://d.repec.org/n?u=RePEc:iee:wpaper:wp0095&r=ecm
  11. By: Bertrand K. Hassani (Centre d'Economie de la Sorbonne et Santander UK); Alexis Renaudin (Aon Global Risk Consulting)
    Abstract: According to the latest proposals of the Basel Committee on Banking Supervision, banks under the Advanced Measurement Approach (AMA) must use four different sources of information to assess their Operational Risk capital requirement. The fourth covers "business environment and internal control factors", i.e. qualitative criteria; the three main quantitative sources available to banks to build the Loss Distribution are Internal Loss Data, External Loss Data, and Scenario Analysis. This paper proposes an innovative methodology to bring these three sources together in the Loss Distribution Approach (LDA) framework through a Bayesian strategy. The integration of the different elements is performed in two steps to ensure that an internal-data-driven model is obtained. In the first step, scenarios inform the prior distributions and external data inform the likelihood component of the posterior function. In the second step, the initial posterior function is used as the prior distribution and the internal loss data inform the likelihood component of the second posterior. This latter posterior function enables estimation of the parameters of the severity distribution selected to represent the Operational Risk event types.
    Keywords: Operational risk, loss distribution approach, Bayesian inference, Markov chain Monte Carlo, extreme value theory, non-parametric statistics, risk measures.
    JEL: C02 C11 C13 C63 G32
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:13009&r=ecm
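    Illustration: a minimal Python sketch of the two-step cascade with a normal-normal conjugate pair standing in for the severity model: scenarios inform the prior, external data update it, and the resulting posterior becomes the prior for the internal-data update. The paper works with heavy-tailed severity distributions estimated by MCMC, so all distributions and values here are placeholders.
      import numpy as np

      def normal_update(prior_mean, prior_var, data, data_var):
          # Posterior of a normal mean with known observation variance.
          post_var = 1.0 / (1.0 / prior_var + len(data) / data_var)
          post_mean = post_var * (prior_mean / prior_var + data.sum() / data_var)
          return post_mean, post_var

      rng = np.random.default_rng(6)
      scenario_prior = (10.0, 4.0)                   # prior informed by scenarios
      external = rng.normal(9.0, 2.0, size=50)       # placeholder external losses
      internal = rng.normal(8.0, 2.0, size=200)      # placeholder internal losses

      m1, v1 = normal_update(*scenario_prior, external, data_var=4.0)   # step 1
      m2, v2 = normal_update(m1, v1, internal, data_var=4.0)            # step 2
      print(f"step-1 posterior mean {m1:.2f}; final internal-data-driven mean {m2:.2f}")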
  12. By: Gary Solon; Steven J. Haider; Jeffrey Wooldridge
    Abstract: The purpose of this paper is to help empirical economists think through when and how to weight the data used in estimation. We start by distinguishing two purposes of estimation: to estimate population descriptive statistics and to estimate causal effects. In the former type of research, weighting is called for when it is needed to make the analysis sample representative of the target population. In the latter type, the weighting issue is more nuanced. We discuss three distinct potential motives for weighting when estimating causal effects: (1) to achieve precise estimates by correcting for heteroskedasticity, (2) to achieve consistent estimates by correcting for endogenous sampling, and (3) to identify average partial effects in the presence of unmodeled heterogeneity of effects. In each case, we find that the motive sometimes does not apply in situations where practitioners often assume it does. We recommend diagnostics for assessing the advisability of weighting, and we suggest methods for appropriate inference.
    JEL: C1
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18859&r=ecm
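    Illustration: a minimal Python sketch of one diagnostic in this spirit: estimate the same regression with and without sampling weights and compare the coefficients, since a large gap signals that weighting matters (here because the effect is heterogeneous and the weights are correlated with it). This is a stylised stand-in, not the authors' exact procedure; the data-generating process and weights are placeholders.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(10)
      n = 1000
      x = rng.standard_normal(n)
      w = np.exp(0.5 * x)                            # weights correlated with x
      y = 1.0 + (0.5 + 0.3 * x) * x + rng.standard_normal(n)   # heterogeneous effect

      X = sm.add_constant(x)
      ols = sm.OLS(y, X).fit()
      wls = sm.WLS(y, X, weights=w).fit()
      print("unweighted slope:", round(ols.params[1], 3))
      print("weighted slope:  ", round(wls.params[1], 3))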
  13. By: Guardabascio, Barbara; Ventura, Marco
    Abstract: This paper revisits the estimation of the dose-response function of Hirano and Imbens (2004), proposing a flexible way to estimate the generalized propensity score when the treatment variable is not necessarily normally distributed. We also provide a set of programs that accomplish this task by using a GLM in the first step of the computation.
    Keywords: generalized propensity score, GLM, dose-response, continuous treatment, bias removal
    JEL: C13 C52
    Date: 2013–03–13
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:45013&r=ecm
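    Illustration: a minimal Python sketch of the two-step idea with a Gamma GLM (log link) for a non-negative, non-normal treatment: the first step turns the fitted GLM into a generalized propensity score (GPS), the second step regresses the outcome on treatment and GPS terms. The authors provide full routines including bias-removal steps, so the model, family, and polynomial terms here are placeholder choices.
      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      rng = np.random.default_rng(7)
      n = 500
      x = rng.standard_normal((n, 2))                          # covariates
      t = rng.gamma(shape=2.0, scale=np.exp(0.3 * x[:, 0]))    # continuous dose
      y = 1.0 + 0.5 * t - 0.05 * t**2 + x @ [0.3, -0.2] + rng.standard_normal(n)

      # Step 1: Gamma GLM for the dose; its density at the observed dose is the GPS.
      glm = sm.GLM(t, sm.add_constant(x),
                   family=sm.families.Gamma(sm.families.links.Log())).fit()
      shape = 1.0 / glm.scale                       # Gamma shape from the dispersion
      gps = stats.gamma.pdf(t, a=shape, scale=glm.fittedvalues / shape)

      # Step 2: flexible outcome regression in the dose and the GPS.
      X2 = sm.add_constant(np.column_stack([t, t**2, gps, gps**2, t * gps]))
      print(sm.OLS(y, X2).fit().params)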
  14. By: Dukpa Kim; Yohei Yamamoto
    Abstract: Earlier attempts to find evidence of time-varying coefficients in the U.S. monetary vector autoregression have been only partially successful. Structural break tests applied to typical data sets often fail to reject the null hypothesis of no break. Bayesian inference using time-varying parameter vector autoregressions provides posterior median values that capture some important movements over time, but the associated confidence intervals are often very wide, making the results less conclusive. We apply recently developed multiple structural break tests and find statistically significant evidence of time-varying coefficients. We also develop a reduced-rank time-varying parameter vector autoregression with multivariate stochastic volatility. Our model has a smaller number of free parameters, thereby yielding tighter confidence intervals than previously employed unrestricted time-varying parameter models.
    Keywords: Time Varying Monetary Policy Rule, Inflation Persistence, Multivariate Stochastic Volatility
    JEL: C32 E52
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd12-279&r=ecm
  15. By: Peter Fuleky (UHERO and Department of Economics, University of Hawaii at Manoa); L Ventura (Department of Economics and Law, Sapienza, University of Rome); Qianxue Zhao (Department of Economics, University of Hawaii at Manoa)
    Abstract: International risk sharing has been among the most actively researched areas of macroeconomics for the last two decades. Empirical contributions in this field make extensive use of so-called "consumption insurance" tests evaluating the extent to which idiosyncratic shocks to income are transferred to consumption. A prerequisite of such a test is the isolation of country-specific variation in the data. We show that the cross-sectional demeaning technique frequently used in the literature is in general inadequate for eliminating global factors from a panel data set, and can lead to misleading inference. We argue that international risk sharing tests should instead be based on a method that deals with global factors more reliably. We claim, and illustrate in our empirical application, that the fairly simple common correlated effects estimator for cross-sectionally dependent panels introduced by Pesaran (2006) and Kapetanios et al. (2010) satisfies this requirement.
    Keywords: Panel data, Cross-sectional dependence, International risk sharing, Consumption insurance
    JEL: C23 C51 E21 F36
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:hai:wpaper:201304&r=ecm
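    Illustration: a minimal Python sketch of the common correlated effects idea: augment each country's regression with cross-sectional averages of the dependent and explanatory variables, which proxy for the unobserved global factor, then average the country slopes (the mean-group estimator). The panel below is simulated and all loadings are placeholders.
      import numpy as np

      rng = np.random.default_rng(8)
      N, T, beta = 20, 100, 0.5
      f = rng.standard_normal(T)                    # unobserved global factor
      x = rng.standard_normal((N, T)) + 0.8 * f     # regressor loads on the factor
      y = beta * x + rng.standard_normal((N, 1)) * f + 0.3 * rng.standard_normal((N, T))

      xbar, ybar = x.mean(axis=0), y.mean(axis=0)   # cross-sectional averages
      betas = []
      for i in range(N):
          Z = np.column_stack([np.ones(T), x[i], xbar, ybar])
          betas.append(np.linalg.lstsq(Z, y[i], rcond=None)[0][1])
      print("CCE mean-group estimate of beta:", round(np.mean(betas), 3))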
  16. By: Elena Di Bernardino (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Thomas Laloë (JAD - Laboratoire Jean Alexandre Dieudonné - CNRS : UMR6621 - Université Nice Sophia Antipolis (UNS)); Véronique Maume-Deschamps (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Clémentine Prieur (INRIA Grenoble Rhône-Alpes / LJK Laboratoire Jean Kuntzmann - MOISE - CNRS : UMR5224 - INRIA - Laboratoire Jean Kuntzmann - Université Joseph Fourier - Grenoble I - Institut Polytechnique de Grenoble - Grenoble Institute of Technology)
    Abstract: This paper deals with the problem of estimating the level sets of an unknown distribution function $F$. A plug-in approach is followed: given a consistent estimator $F_n$ of $F$, we estimate the level sets of $F$ by the level sets of $F_n$. In our setting, no compactness property is required a priori of the level sets to be estimated. We state consistency results with respect to the Hausdorff distance and the volume of the symmetric difference. Our results are motivated by applications in multivariate risk theory; we also present simulated and real examples which illustrate our theoretical results.
    Keywords: Level sets ; Distribution function ; Plug-in estimation ; Hausdorff distance ; Conditional Tail Expectation
    Date: 2013–02–08
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00580624&r=ecm
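    Illustration: a minimal Python sketch of the plug-in principle in R^2: estimate F by the bivariate empirical distribution function F_n and take {x : F_n(x) >= c} as the estimated level set. The paper's contribution is the consistency theory in a possibly non-compact setting; the sample, grid, and level c are placeholders.
      import numpy as np

      rng = np.random.default_rng(9)
      sample = rng.standard_normal((1000, 2))

      def F_n(points, data):
          # Bivariate empirical CDF evaluated at each row of `points`.
          return np.array([(data <= p).all(axis=1).mean() for p in points])

      c = 0.5
      gx = np.linspace(-3, 3, 61)
      grid = np.array([(a, b) for a in gx for b in gx])
      in_set = F_n(grid, sample) >= c               # plug-in estimate of the level set
      print("grid points in the estimated level set:", in_set.sum())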
  17. By: Dovonon, Prosper; Goncalves, Silvia; Meddahi, Nour
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univ-tlse1.fr/3354/&r=ecm
  18. By: David F. Hendry (Department of Economics and Institute of Economic Modelling, Oxford Martin School, University of Oxford); Grayham E. Mizon (University of Southampton and Institute of Economic Modelling, Oxford Martin School, University of Oxford)
    Abstract: Unpredictability arises from intrinsic stochastic variation, unexpected instances of outliers, and unanticipated extrinsic shifts of distributions. We analyze their properties, relationships, and different effects on the three arenas in the title, which suggests considering three associated information sets. The implications of unanticipated shifts for forecasting, economic analyses of efficient markets, conditional expectations, and inter-temporal derivations are described. The potential success of general-to-specific model selection in tackling location shifts by impulse-indicator saturation is contrasted with the major difficulties confronting forecasting.
    Keywords: Unpredictability; ‘Black Swans’; Distributional shifts; Forecast failure; Model selection; Conditional expectations.
    JEL: C51 C22
    Date: 2013–02–25
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1304&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.