
on Econometrics 
By:  Francq, Christian; Zakoian, Jean-Michel 
Abstract:  This paper considers statistical inference for the class of asymmetric power-transformed GARCH(1,1) models in the presence of possible explosiveness. We study the explosive behavior of volatility when the strict stationarity condition is not met. This allows us to establish the asymptotic normality of the quasi-maximum likelihood estimator (QMLE) of the parameter, including the power but excluding the intercept, when strict stationarity does not hold. Two important issues can be tested in this framework: asymmetry and stationarity. The tests exploit the existence of a universal estimator of the asymptotic covariance matrix of the QMLE. By establishing the local asymptotic normality (LAN) property in this nonstationary framework, we can also study optimality issues. 
Keywords:  GARCH models; Inconsistency of estimators; Local power of tests; Nonstationarity; Quasi-Maximum Likelihood Estimation 
JEL:  C01 C12 C13 C22 
Date:  2013–03–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:44901&r=ecm 
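The Gaussian quasi-likelihood at the heart of the QMLE is easy to sketch. Below is a minimal numpy/scipy illustration for a standard symmetric GARCH(1,1) only; the paper treats the more general asymmetric power-transformed class and its explosive case, and all simulation values here are assumed for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a standard GARCH(1,1): sigma2_t = w + a*eps_{t-1}^2 + b*sigma2_{t-1}
w0, a0, b0 = 0.1, 0.1, 0.8
n = 2000
eps, sigma2 = np.empty(n), np.empty(n)
sigma2[0] = w0 / (1 - a0 - b0)                 # unconditional variance
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = w0 + a0 * eps[t - 1] ** 2 + b0 * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def neg_quasi_loglik(theta, eps):
    """Gaussian quasi-log-likelihood (up to constants); volatility rebuilt recursively."""
    w, a, b = theta
    s2 = np.empty(eps.size)
    s2[0] = eps.var()                           # crude initialisation
    for t in range(1, eps.size):
        s2[t] = w + a * eps[t - 1] ** 2 + b * s2[t - 1]
    return 0.5 * np.sum(np.log(s2) + eps ** 2 / s2)

res = minimize(neg_quasi_loglik, x0=[0.05, 0.05, 0.5], args=(eps,),
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 0.999)])
w_hat, a_hat, b_hat = res.x
```

Note that the minimisation is over all three parameters here; the paper's point is that without strict stationarity only the sub-vector excluding the intercept is asymptotically normal.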
By:  Tingting Cheng; Jiti Gao; Xibin Zhang 
Abstract:  Bandwidth plays an important role in determining the performance of local linear estimators. In this paper, we propose a Bayesian approach to bandwidth selection for local linear estimation of time-varying coefficient time series models, where the errors are assumed to follow the Gaussian kernel error density. A Markov chain Monte Carlo algorithm is presented to simultaneously estimate the bandwidths for the local linear estimators in the regression function and the bandwidth for the Gaussian kernel error-density estimator. A Monte Carlo simulation study shows that: 1) our proposed Bayesian approach achieves better performance in estimating the bandwidths for local linear estimators than the normal reference rule and cross-validation; 2) compared with the parametric assumption of either a Gaussian or a mixture of two Gaussians, the Gaussian kernel error-density assumption is a data-driven choice that provides robustness to different specifications of the true error density. Moreover, we apply our proposed Bayesian sampling method to estimate bandwidths for time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide a calibrated parametric form of its time-varying coefficients. 
Keywords:  Bayes factors, bandwidth, marginal likelihood, local linear estimator, random-walk Metropolis algorithm. 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20137&r=ecm 
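The local linear estimator whose bandwidth the paper selects can be written compactly. A minimal numpy sketch with a Gaussian kernel; the bandwidth is fixed by hand here rather than sampled by the paper's MCMC scheme, and the regression function is an assumed toy example:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(400)

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0) with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local intercept and slope
    XtWX = X.T @ (w[:, None] * X)                   # weighted normal equations
    XtWy = X.T @ (w * y)
    return np.linalg.solve(XtWX, XtWy)[0]           # intercept = m_hat(x0)

m_hat = local_linear(0.25, x, y, h=0.05)            # true m(0.25) = sin(pi/2) = 1
```

The bandwidth h controls the bias-variance trade-off that the paper's Bayesian sampler is designed to resolve automatically.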
By:  Maddalena Cavicchioli (Advanced School of Economics, Department of Economics, University Of Venice Cà Foscari) 
Abstract:  We give stable finite-order VARMA(p*, q*) representations for M-state Markov switching second-order stationary time series whose autocovariances satisfy a certain matrix relation. The upper bounds for p* and q* are elementary functions of the dimension K of the process, the number M of regimes, and the autoregressive and moving average orders of the initial model. If there is no cancellation, the bounds become equalities, and this solves the identification problem. Our class of time series includes all M-state Markov switching multivariate moving average models and autoregressive models in which the regime variable is uncorrelated with the observable. Our results include, as particular cases, those obtained by Krolzig (1997), and improve the bounds given by Zhang and Stine (2001) and Francq and Zakoian (2001) for our classes of dynamic models. Data simulations and an application to foreign exchange rates complete the paper. 
Keywords:  Second-order stationary time series, VMA models, VAR models, State-space models, Markov chains, changes in regime, regime number. 
JEL:  C01 C32 C50 C52 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2013:03&r=ecm 
By:  Vitaliy Oryshchenko (Department of Economics, University of Oxford); Richard J. Smith (UCL, IFS and University of Cambridge) 
Abstract:  If additional information about the distribution of a random variable is available in the form of moment conditions, a weighted kernel density estimate reflecting the extra information can be constructed by replacing the uniform weights with the generalised empirical likelihood probabilities. It is shown that the resulting density estimator provides an improved approximation to the moment constraints. Moreover, a reduction in variance is achieved due to the systematic use of the extra moment information. 
Keywords:  weighted kernel density estimation, moment conditions, higher-order expansions, normal mixtures. 
JEL:  C14 
Date:  2013–02–12 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:1303&r=ecm 
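The construction described in the abstract is easiest to see with one scalar moment condition (a known mean), where the empirical likelihood weights have a closed form up to a scalar root-finding step. A numpy/scipy sketch; the sample and the moment constraint are assumed for illustration, and this is the EL special case of the GEL family the paper treats:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
x = rng.standard_normal(300) + 0.1     # sample; SUPPOSE we know the true mean is 0

# EL weights maximise sum(log p_i) s.t. sum(p_i) = 1 and sum(p_i * x_i) = 0:
#   p_i = 1 / (n * (1 + lam * x_i)), lam solving sum(x_i / (1 + lam * x_i)) = 0.
n = x.size
def score(lam):
    return np.sum(x / (1.0 + lam * x))

# lam must keep every 1 + lam*x_i positive; bracket the root accordingly
lo = -1.0 / x.max() + 1e-6
hi = -1.0 / x.min() - 1e-6
lam = brentq(score, lo, hi)
p = 1.0 / (n * (1.0 + lam * x))        # sums to 1 automatically

def weighted_kde(t, x, p, h):
    """Kernel density estimate with EL weights p in place of uniform 1/n."""
    return np.sum(p * np.exp(-0.5 * ((t - x) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
```

Observations inconsistent with the moment constraint are automatically down-weighted, which is the source of the variance reduction the abstract mentions.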
By:  Andrea Pastore (Department of Economics, University Of Venice Cà Foscari); Stefano Tonellato (Department of Economics, University Of Venice Cà Foscari) 
Abstract:  In finite mixture model clustering, each component of the fitted mixture is usually associated with a cluster. In other words, each component of the mixture is interpreted as the probability distribution of the variables of interest conditionally on membership of a given cluster. The Gaussian mixture model (GMM) is very popular in this context for its simplicity and flexibility. It may happen, however, that the components of the fitted model are not well separated. In such a circumstance, the number of clusters is often overestimated, and a better clustering could be obtained by joining some subsets of the partition based on the fitted GMM. Some methods for the aggregation of mixture components have recently been proposed in the literature. In this work, we propose a hierarchical aggregation algorithm based on a generalisation of the definition of silhouette width that takes into account the Mahalanobis distances induced by the precision matrices of the components of the fitted GMM. The algorithm chooses the number of groups corresponding to the hierarchy level giving rise to the highest average silhouette width. Simulation experiments and real data applications indicate that its performance is at least as good as that of other existing methods. 
Keywords:  similarity indices, Rand index, mixture models, bootstrap. 
JEL:  C39 C46 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2013:04&r=ecm 
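The Mahalanobis-based silhouette width described above can be sketched directly. This is an assumption-laden toy: component memberships and precision matrices are taken as given rather than produced by a fitted GMM, and using the own-cluster precision for a(i) and each rival cluster's precision for b(i) is one plausible reading of the generalisation, not necessarily the authors' exact definition:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two Gaussian "components" standing in for a fitted GMM, with labels assumed known
A = rng.standard_normal((60, 2))
B = rng.standard_normal((60, 2)) + np.array([6.0, 0.0])
X = np.vstack([A, B])
labels = np.array([0] * 60 + [1] * 60)
precisions = [np.linalg.inv(np.cov(A.T)), np.linalg.inv(np.cov(B.T))]

def mahal(x, y, P):
    """Mahalanobis distance between x and y induced by precision matrix P."""
    d = x - y
    return np.sqrt(d @ P @ d)

def avg_silhouette(X, labels, precisions):
    n = len(X)
    s = np.empty(n)
    for i in range(n):
        k = labels[i]
        # a(i): mean distance to own cluster, under the own-cluster precision
        a = np.mean([mahal(X[i], X[j], precisions[k])
                     for j in range(n) if labels[j] == k and j != i])
        # b(i): smallest mean distance to any other cluster, under that cluster's precision
        b = min(np.mean([mahal(X[i], X[j], precisions[m])
                         for j in range(n) if labels[j] == m])
                for m in set(labels.tolist()) if m != k)
        s[i] = (b - a) / max(a, b)
    return s.mean()

asw = avg_silhouette(X, labels, precisions)   # near 1 for well-separated components
```

In the paper's algorithm this quantity is recomputed along a hierarchy of component merges, and the merge level with the highest average silhouette width is retained.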
By:  HASAN, HAMID; Rehman, Attiqur 
Abstract:  Surveys in the social sciences consist largely of ordinal variables. Researchers sometimes need to model the behaviour of ordinal variables in a simultaneous equation system involving many endogenous ordinal variables. This situation leads to a very complex likelihood function that is extremely hard to solve, and the solutions suggested in the literature are even harder for applied researchers to implement. The present study suggests a simulation method that avoids this problem altogether by converting ordinal variables into continuous variables and using standard simultaneous regression models. The proposed method involves generating random numbers from continuous probability distributions (uniform and truncated normal distributions) within a discrete probability distribution. This method can be fruitfully used in ordered logit and probit models. The limitations of the method are also discussed. 
Keywords:  Endogenous Ordinal variables, Simultaneous Equation System, Ordered Logit, Ordered Probit. 
JEL:  C1 C3 C4 
Date:  2013–03–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:44908&r=ecm 
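The conversion the authors propose, drawing a continuous value from a distribution truncated to the interval implied by each ordinal category, can be illustrated in a few lines. The latent thresholds below are assumed for illustration, not estimated from data:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Ordinal responses on a 1..4 scale (e.g. survey answers)
ordinal = rng.choice([1, 2, 3, 4], size=500, p=[0.2, 0.3, 0.3, 0.2])

# Map each category to an interval on a standard-normal latent scale and draw
# a truncated-normal value inside it (uniform draws within bins work similarly).
cuts = [-np.inf, -1.0, 0.0, 1.0, np.inf]       # assumed latent thresholds
latent = np.empty(ordinal.size)
for c in range(1, 5):
    mask = ordinal == c
    a, b = cuts[c - 1], cuts[c]                # truncation bounds for category c
    latent[mask] = truncnorm.rvs(a, b, size=mask.sum(), random_state=rng)
```

The resulting continuous `latent` variable can then enter a standard simultaneous equation system in place of the ordinal original, which is the substitution the abstract describes.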
By:  JohnOliver Engler (Department of Sustainability Sciences and Department of Economics Leuphana University of Lueneburg, Germany); Stefan Baumgaertner (Department of Sustainability Sciences and Department of Economics Leuphana University of Lueneburg, Germany) 
Abstract:  We propose a new three-step model-selection framework for size distributions in empirical data. It generalizes a recent frequentist plausibility-of-fit analysis (Step 1) and combines it with a relative ranking based on the Bayesian Akaike Information Criterion (Step 2). We enhance these statistical criteria with the additional criterion of microfoundation (Step 3), which selects the size distribution that comes with a dynamic micro-model of size dynamics. A numerical performance test of Step 1 shows that our generalization correctly rules out the distribution hypotheses unjustified by the data at hand. We then illustrate our approach, and demonstrate its usefulness, with a sample of commercial cattle farms in Namibia. In conclusion, the framework proposed here has the potential to settle the ongoing debate about size distribution models in empirical data, the two most prominent of which are the Pareto and the lognormal distribution. 
Keywords:  model choice, model selection, hypothesis testing, size distributions, Gibrat's Law, Pareto distribution, rank-size rule, environmental risk, semi-arid rangelands, cattle farming 
JEL:  C12 C52 D30 D31 O44 
Date:  2013–02 
URL:  http://d.repec.org/n?u=RePEc:lue:wpaper:265&r=ecm 
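The ranking idea of Step 2, comparing candidate size distributions by an information criterion, can be sketched with scipy's built-in fitters. The sample is simulated lognormal data and the criterion is the standard (non-Bayesian) AIC formula; both are illustrative assumptions, not the paper's data or exact criterion:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Illustrative "farm size" sample drawn from a lognormal distribution
sizes = stats.lognorm.rvs(s=1.0, scale=50.0, size=400, random_state=rng)

def aic(dist, data):
    """AIC = 2k - 2*loglik for a scipy distribution fitted with loc pinned at 0."""
    params = dist.fit(data, floc=0)            # same fixed-loc convention for both
    ll = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * ll

aic_lognorm = aic(stats.lognorm, sizes)
aic_pareto = aic(stats.pareto, sizes)
best = "lognormal" if aic_lognorm < aic_pareto else "Pareto"
```

With genuinely lognormal data the lognormal candidate wins the ranking; the paper's Step 1 would additionally ask whether each candidate is plausible at all before any ranking is attempted.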
By:  Cavaliere, Giuseppe; Taylor, A. M. Robert; Trenkler, Carsten 
Abstract:  In this paper we investigate bootstrap-based methods for bias-correcting the first-stage parameter estimates used in some recently developed bootstrap implementations of the cointegration rank tests of Johansen (1996). To do so, we adapt the framework of Kilian (1998), which estimates the bias in the original parameter estimates using the average bias in the corresponding parameter estimates taken across a large number of auxiliary bootstrap replications. A number of possible implementations of this procedure are discussed, and concrete recommendations are made on the basis of finite sample performance evaluated by Monte Carlo simulation methods. Our results show that bootstrap-based bias-correction methods can significantly improve the small sample performance of the bootstrap cointegration rank tests. A brief application of the techniques developed in this paper to international dynamic consumption risk sharing within Europe is also considered. 
Keywords:  Cointegration, trace test, bias-correction, bootstrap 
JEL:  C30 C32 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:mnh:wpaper:32993&r=ecm 
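The Kilian (1998) bootstrap bias estimate the authors adapt is easiest to see in a univariate AR(1) rather than the paper's cointegrated VAR setting. A numpy sketch under that simplification; note that Kilian's full procedure also includes a stationarity adjustment omitted here:

```python
import numpy as np

rng = np.random.default_rng(6)

def ols_ar1(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t (no intercept)."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# Simulate a persistent AR(1); OLS is biased towards zero in small samples
rho0, n = 0.9, 80
y = np.empty(n)
y[0] = 0.0
e = rng.standard_normal(n)
for t in range(1, n):
    y[t] = rho0 * y[t - 1] + e[t]

rho_hat = ols_ar1(y)

# Bootstrap from the fitted model, measure the average bias of the
# re-estimates, and subtract it from the original estimate.
B = 500
resid = y[1:] - rho_hat * y[:-1]
boots = np.empty(B)
for b in range(B):
    eb = rng.choice(resid, size=n, replace=True)
    yb = np.empty(n)
    yb[0] = y[0]
    for t in range(1, n):
        yb[t] = rho_hat * yb[t - 1] + eb[t]
    boots[b] = ols_ar1(yb)

bias = boots.mean() - rho_hat        # estimated small-sample bias (negative here)
rho_bc = rho_hat - bias              # bias-corrected estimate
```

In the paper the same average-bias logic is applied to the first-stage VAR estimates before the main bootstrap rank-test replications are generated.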
By:  L. Bauwens; Edoardo Otranto 
Abstract:  Several models have been developed to capture the dynamics of the conditional correlations between time series of financial returns, but few studies have investigated the determinants of the correlation dynamics. A common opinion is that market volatility is a major determinant of the correlations. We extend some models to capture explicitly the dependence of the correlations on the volatility of the market of interest. The models differ in the way the volatility influences the correlations, which can be transmitted through linear or nonlinear, direct or indirect effects. They are applied to different data sets to verify the presence, and possible regularity, of the volatility impact on correlations. 
Keywords:  volatility effects; conditional correlation; DCC; Markov switching 
JEL:  C32 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:cns:cnscwp:201304&r=ecm 
By:  Jean-Marie Dufour; Joachim Wilde (Universitaet Osnabrueck) 
Abstract:  Weak identification is a well-known problem for linear multiple equation models. However, little is known about whether this problem also matters for probit models with endogenous covariates. We therefore analyse the behaviour of the usual z-statistic under weak identification in a simulation study, which reveals large size distortions. A new puzzle also emerges: the magnitude of the size distortion depends heavily on the parameter value being tested. As an alternative, we calculate the LR statistic, which is known to be more robust against weak identification in linear multiple equation models. The same appears to be true for probit equations: no size distortions are found, although moderate undersizing is observed. 
Keywords:  probit model, weak identification 
JEL:  C 
Date:  2013–03–01 
URL:  http://d.repec.org/n?u=RePEc:iee:wpaper:wp0095&r=ecm 
By:  Bertrand K. Hassani (Centre d'Economie de la Sorbonne et Santander UK); Alexis Renaudin (Aon Global Risk Consulting) 
Abstract:  According to the latest proposals of the Basel Committee on Banking Supervision, banks under the Advanced Measurement Approach (AMA) must use four different sources of information to assess their Operational Risk capital requirement. Since the fourth comprises "business environment and internal control factors", i.e. qualitative criteria, the three main quantitative sources available to banks to build the Loss Distribution are Internal Loss Data, External Loss Data, and Scenario Analysis. This paper proposes an innovative methodology to bring these three sources together in the Loss Distribution Approach (LDA) framework through a Bayesian strategy. The integration of the different elements is performed in two steps to ensure that an internal-data-driven model is obtained. In the first step, scenarios inform the prior distributions and external data inform the likelihood component of the posterior function. In the second step, the initial posterior function is used as the prior distribution and internal loss data inform the likelihood component of the second posterior. This latter posterior function enables the estimation of the parameters of the severity distribution selected to represent the Operational Risk event types. 
Keywords:  Operational risk, loss distribution approach, Bayesian inference, Markov chain Monte Carlo, extreme value theory, nonparametric statistics, risk measures. 
JEL:  C02 C11 C13 C63 G32 
Date:  2013–02 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:13009&r=ecm 
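The two-step posterior construction can be illustrated with a deliberately simple conjugate model: a normal mean with known variance standing in for the severity-distribution parameters. All numbers below are assumed, and the paper's actual implementation uses MCMC on heavier-tailed severity distributions rather than this closed-form update:

```python
import numpy as np

def normal_update(mu0, tau0_sq, data, sigma_sq):
    """Conjugate update of a N(mu0, tau0_sq) prior for the mean of N(mu, sigma_sq) data."""
    prec = 1.0 / tau0_sq + len(data) / sigma_sq     # posterior precision
    mu1 = (mu0 / tau0_sq + data.sum() / sigma_sq) / prec
    return mu1, 1.0 / prec                          # posterior mean and variance

rng = np.random.default_rng(7)
sigma_sq = 1.0                                      # known data variance (simplification)

# Step 1: scenario analysis sets the prior; external losses form the likelihood
mu_scen, tau_scen_sq = 10.0, 4.0
external = rng.normal(9.0, 1.0, size=50)
mu1, tau1_sq = normal_update(mu_scen, tau_scen_sq, external, sigma_sq)

# Step 2: the step-1 posterior becomes the prior; internal losses form the likelihood
internal = rng.normal(8.5, 1.0, size=200)
mu2, tau2_sq = normal_update(mu1, tau1_sq, internal, sigma_sq)
```

Because the internal sample is the largest and enters last, the final posterior mean sits close to the internal data, which is exactly the "internal-data-driven" property the abstract emphasises.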
By:  Gary Solon; Steven J. Haider; Jeffrey Wooldridge 
Abstract:  The purpose of this paper is to help empirical economists think through when and how to weight the data used in estimation. We start by distinguishing two purposes of estimation: to estimate population descriptive statistics and to estimate causal effects. In the former type of research, weighting is called for when it is needed to make the analysis sample representative of the target population. In the latter type, the weighting issue is more nuanced. We discuss three distinct potential motives for weighting when estimating causal effects: (1) to achieve precise estimates by correcting for heteroskedasticity, (2) to achieve consistent estimates by correcting for endogenous sampling, and (3) to identify average partial effects in the presence of unmodeled heterogeneity of effects. In each case, we find that the motive sometimes does not apply in situations where practitioners often assume it does. We recommend diagnostics for assessing the advisability of weighting, and we suggest methods for appropriate inference. 
JEL:  C1 
Date:  2013–02 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:18859&r=ecm 
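Motive (1), weighting for precision under heteroskedasticity, is the easiest of the three to demonstrate. A numpy sketch in which the conditional error variance is known by construction; in practice it must be estimated, which is part of the reason the authors caution that this motive does not always apply:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
x = rng.uniform(1, 5, n)
# Heteroskedastic errors: standard deviation proportional to x
y = 2.0 + 1.5 * x + rng.standard_normal(n) * x

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]     # consistent but inefficient

# WLS with weights 1/Var(e_i | x_i) = 1/x_i^2 (known here by construction)
w = 1.0 / x ** 2
Xw = X * w[:, None]
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)      # efficient under these weights
```

Both estimators are consistent here; the weights only buy precision, which is why misjudging the variance function can make weighting counterproductive.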
By:  Guardabascio, Barbara; Ventura, Marco 
Abstract:  This paper revises the estimation of the dose-response function of Hirano and Imbens (2004) by proposing a flexible way to estimate the generalized propensity score when the treatment variable is not necessarily normally distributed. We also provide a set of programs that accomplish this task by using the GLM in the first step of the computation. 
Keywords:  generalized propensity score, GLM, dose-response, continuous treatment, bias removal 
JEL:  C13 C52 
Date:  2013–03–13 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:45013&r=ecm 
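The first step the authors describe, modelling a non-normal treatment with a GLM and evaluating its fitted conditional density at the observed treatment, can be sketched as a gamma model with a log link fitted by maximum likelihood. The data-generating process and parameterisation below are assumed for illustration; this is not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(9)
n = 500
x = rng.uniform(0, 1, n)                       # single covariate (assumed DGP)
mu_true = np.exp(0.5 + 1.0 * x)                # gamma GLM mean with log link
t = rng.gamma(3.0, mu_true / 3.0)              # continuous, positively skewed treatment

def gamma_loglik(t, mu, k):
    """Log-density of Gamma(shape=k, mean=mu) evaluated at t."""
    return k * np.log(k / mu) + (k - 1) * np.log(t) - k * t / mu - gammaln(k)

def negll(theta):
    a, b, log_k = theta
    return -np.sum(gamma_loglik(t, np.exp(a + b * x), np.exp(log_k)))

res = minimize(negll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000})
a_hat, b_hat, log_k_hat = res.x
# Generalized propensity score: fitted conditional density of T at the observed T
gps = np.exp(gamma_loglik(t, np.exp(a_hat + b_hat * x), np.exp(log_k_hat)))
```

In the Hirano-Imbens procedure these `gps` values then enter the second-stage outcome regression from which the dose-response function is recovered.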
By:  Dukpa Kim; Yohei Yamamoto 
Abstract:  Earlier attempts to find evidence of time-varying coefficients in the U.S. monetary vector autoregression have been only partially successful. Structural break tests applied to typical data sets often fail to reject the null hypothesis of no break. Bayesian inference using time-varying parameter vector autoregressions provides posterior median values that capture some important movements over time, but the associated confidence intervals are often very wide, making the overall results less conclusive. We apply recently developed multiple structural break tests and find statistically significant evidence of time-varying coefficients. We also develop a reduced-rank time-varying parameter vector autoregression with multivariate stochastic volatility. Our model has a smaller number of free parameters, thereby yielding tighter confidence intervals than previously employed unrestricted time-varying parameter models. 
Keywords:  Time-Varying Monetary Policy Rule, Inflation Persistence, Multivariate Stochastic Volatility 
JEL:  C32 E52 
Date:  2013–02 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd12279&r=ecm 
By:  Peter Fuleky (UHERO and Department of Economics, University of Hawaii at Manoa); L Ventura (Department of Economics and Law, Sapienza, University of Rome); Qianxue Zhao (Department of Economics, University of Hawaii at Manoa) 
Abstract:  International risk sharing has been among the most actively researched areas of macroeconomics for the last two decades. Empirical contributions in this field make extensive use of so-called "consumption insurance" tests evaluating the extent to which idiosyncratic shocks to income are transferred to consumption. A prerequisite of such a test is the isolation of country-specific variation in the data. We show that the cross-sectional demeaning technique frequently used in the literature is in general inadequate to eliminate global factors from a panel data set, and can lead to misleading inference. We argue that international risk sharing tests should instead be based on a method that deals with global factors more reliably. We claim, and illustrate in our empirical application, that the fairly simple common correlated effects estimator for cross-sectionally dependent panels introduced by Pesaran (2006) and Kapetanios et al. (2010) satisfies this requirement. 
Keywords:  Panel data, Cross-sectional dependence, International risk sharing, Consumption insurance 
JEL:  C23 C51 E21 F36 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:hai:wpaper:201304&r=ecm 
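The common correlated effects idea, augmenting each unit's regression with cross-sectional averages so that an unobserved global factor is absorbed, can be sketched in a numpy toy with one factor. The DGP below is assumed for illustration; naive pooled OLS is biased because the regressor loads on the factor that also sits in the error:

```python
import numpy as np

rng = np.random.default_rng(10)
N, T = 30, 100
f = rng.standard_normal(T)                    # unobserved global factor
beta = 1.0
x = rng.standard_normal((N, T)) + f           # regressor loads on the factor
y = beta * x + 2.0 * f + 0.5 * rng.standard_normal((N, T))

x_bar = x.mean(axis=0)                        # cross-sectional averages proxy the factor
y_bar = y.mean(axis=0)

def cce_unit(yi, xi, x_bar, y_bar):
    """Unit-level regression augmented with cross-sectional averages (Pesaran 2006)."""
    Z = np.column_stack([np.ones(T), xi, x_bar, y_bar])
    return np.linalg.lstsq(Z, yi, rcond=None)[0][1]   # coefficient on xi

b_cce = np.mean([cce_unit(y[i], x[i], x_bar, y_bar) for i in range(N)])  # mean-group
b_ols = np.sum(x * y) / np.sum(x * x)         # pooled OLS ignoring the factor: biased
```

In this toy the factor loadings are homogeneous, so demeaning would also work; the paper's point is that CCE remains valid under the heterogeneous loadings where demeaning fails.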
By:  Elena Di Bernardino (SAF - Laboratoire de Sciences Actuarielle et Financière, Université Claude Bernard Lyon I, EA2429); Thomas Laloë (JAD - Laboratoire Jean Alexandre Dieudonné, CNRS UMR6621, Université Nice Sophia Antipolis (UNS)); Véronique Maume-Deschamps (SAF - Laboratoire de Sciences Actuarielle et Financière, Université Claude Bernard Lyon I, EA2429); Clémentine Prieur (INRIA Grenoble Rhône-Alpes / LJK Laboratoire Jean Kuntzmann, MOISE, CNRS UMR5224, Université Joseph Fourier Grenoble I, Institut Polytechnique de Grenoble - Grenoble Institute of Technology) 
Abstract:  This paper deals with the problem of estimating the level sets of an unknown distribution function $F$. A plug-in approach is followed: given a consistent estimator $F_n$ of $F$, we estimate the level sets of $F$ by the level sets of $F_n$. In our setting, no compactness property is required a priori of the level sets to be estimated. We state consistency results with respect to the Hausdorff distance and the volume of the symmetric difference. Our results are motivated by applications in multivariate risk theory. In this spirit, we also present simulated and real examples which illustrate our theoretical results. 
Keywords:  Level sets; Distribution function; Plug-in estimation; Hausdorff distance; Conditional Tail Expectation 
Date:  2013–02–08 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal00580624&r=ecm 
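The plug-in idea, estimating the level sets of $F$ by the level sets of a consistent estimator $F_n$, can be sketched with the bivariate empirical distribution function on a grid. The sample, grid, and level $c$ below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)
sample = rng.standard_normal((500, 2))

def ecdf(points, sample):
    """Bivariate empirical distribution function F_n at each query point."""
    return np.array([np.mean(np.all(sample <= p, axis=1)) for p in points])

# Plug-in estimate of the level set {x : F(x) >= c}: keep grid points with F_n >= c
c = 0.5
gx = np.linspace(-3, 3, 61)
grid = np.array([(a, b) for a in gx for b in gx])
in_set = ecdf(grid, sample) >= c
level_set = grid[in_set]                      # discrete approximation of the level set
```

Because $F_n$ is monotone in each coordinate, the estimated set is an "upper-right" region, which is the shape exploited by the risk measures (such as the Conditional Tail Expectation) mentioned in the keywords.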
By:  Dovonon, Prosper; Goncalves, Silvia; Meddahi, Nour 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univtlse1.fr/3354/&r=ecm 
By:  David F. Hendry (Department of Economics and Institute of Economic Modelling, Oxford Martin School, University of Oxford); Grayham E. Mizon (University of Southampton and Institute of Economic Modelling, Oxford Martin School, University of Oxford) 
Abstract:  Unpredictability arises from intrinsic stochastic variation, unexpected instances of outliers, and unanticipated extrinsic shifts of distributions. We analyze their properties, relationships, and different effects on the three arenas in the title, which suggests considering three associated information sets. The implications of unanticipated shifts for forecasting, economic analyses of efficient markets, conditional expectations, and intertemporal derivations are described. The potential success of general-to-specific model selection in tackling location shifts by impulse-indicator saturation is contrasted with the major difficulties confronting forecasting. 
Keywords:  Unpredictability; ‘Black Swans’; Distributional shifts; Forecast failure; Model selection; Conditional expectations. 
JEL:  C51 C22 
Date:  2013–02–25 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:1304&r=ecm 