
on Econometrics 
By:  Zaichao Du (Indiana University) 
Abstract:  In this paper, we develop a general method of testing for independence when unobservable generalized errors are involved. Our method can be applied to testing for serial independence of generalized errors, and to testing for independence between the generalized errors and observable covariates. The former can serve as a unified approach to testing the adequacy of time series models, as model adequacy often implies that the generalized errors obtained after a suitable transformation are independent and identically distributed. The latter is a key identification assumption in many nonlinear economic models. Our tests are based on a classical sample dependence measure, the Hoeffding-Blum-Kiefer-Rosenblatt-type empirical process applied to generalized residuals. We establish a uniform expansion of the process, thereby deriving an explicit expression for the parameter estimation effect, which causes our tests not to be nuisance-parameter-free. To circumvent this problem, we propose a multiplier-type bootstrap to approximate the limit distribution. Our bootstrap procedure is computationally very simple as it does not require re-estimation of the parameters in each bootstrap replication. In a simulation study, we apply our method to test the adequacy of ARMA-GARCH and Hansen (1994) skewed t models, and document good finite-sample performance of our test. Finally, an empirical application to daily exchange rate data highlights the merits of our approach. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2009023&r=ecm 
By:  J. Carlos Escanciano (Indiana University) 
Abstract:  This article investigates model checks for a class of possibly nonlinear heteroskedastic time series models, including but not restricted to ARMA-GARCH models. We propose omnibus tests based on functionals of certain weighted standardized residual empirical processes. The new tests are asymptotically distribution-free, suitable when the conditioning set is infinite-dimensional, and consistent against a class of Pitman's local alternatives converging at the parametric rate n^{-1/2}, with n the sample size. A Monte Carlo study shows that the simulated level of the proposed tests is close to the asymptotic level already for moderate sample sizes and that the tests have a satisfactory power performance. Finally, we illustrate our methodology with an application to the well-known S&P 500 daily stock index. The paper also contains an asymptotic uniform expansion for weighted residual empirical processes when initial conditions are considered, a result of independent interest. 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2009019&r=ecm 
By:  Jin Seo Cho (Korea University); Chirok Han (Korea University); Peter C. B. Phillips (Yale University, University of Auckland, University of Southampton & Singapore Management University) 
Abstract:  Least absolute deviations (LAD) estimation of linear time-series models is considered under conditional heteroskedasticity and serial correlation. The limit theory of the LAD estimator is obtained without assuming the finite density condition for the errors that is required in standard LAD asymptotics. The results are particularly useful in the application of LAD estimation to financial time-series data. 
Keywords:  Asymptotic leptokurtosis, Convex function, Infinite density, Least absolute deviations, Median, Weak convergence. 
JEL:  C12 G11 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:iek:wpaper:0917&r=ecm 
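To fix ideas, LAD estimation minimises the sum of absolute residuals; for a one-regressor model without intercept the objective is convex in the slope, so a simple ternary search suffices. This is an illustrative sketch only (all names are ours), not the paper's estimator or its limit theory:

```python
def lad_objective(beta, xs, ys):
    """Sum of absolute deviations for the one-regressor model y = beta*x + e."""
    return sum(abs(y - beta * x) for x, y in zip(xs, ys))

def lad_estimate(xs, ys, lo=-100.0, hi=100.0, tol=1e-9):
    """Minimise the convex LAD objective over beta by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if lad_objective(m1, xs, ys) < lad_objective(m2, xs, ys):
            hi = m2          # the minimiser lies in [lo, m2]
        else:
            lo = m1          # the minimiser lies in [m1, hi]
    return (lo + hi) / 2.0
```

Unlike least squares, the estimate is driven by the median behaviour of the residuals, so a single wild observation barely moves it.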
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR 8174 - Université Panthéon-Sorbonne - Paris I; EEP-PSE - École d'Économie de Paris - Paris School of Economics); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR 8174 - Université Panthéon-Sorbonne - Paris I) 
Abstract:  This article gives the asymptotic properties of multivariate k-nearest neighbor regression estimators for dependent variables belonging to R^d, d > 1. The results derived here permit us to provide consistent forecasts and confidence intervals for time series. An illustration of the method is given through the estimation of the economic indicators used to compute the GDP with the bridge equations. An empirical forecast accuracy comparison is provided by comparing this nonparametric method with a parametric one based on ARIMA modelling, which we consider a benchmark because it is still often used in central banks to nowcast and forecast the GDP. 
Keywords:  Multivariate k-nearest neighbor, asymptotic normality of the regression, mixing time series, confidence intervals, forecasts, economic indicators, euro area. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00423871_v2&r=ecm 
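The k-nearest neighbor regression forecast underlying this approach can be sketched in a few lines: embed the series into lagged patterns, find the k past patterns closest to the current one, and average their successors. A minimal illustration (function names are ours, not the paper's):

```python
def knn_forecast(history, query, k):
    """Forecast the next value of a series by averaging the successors of the
    k past lag-patterns closest (in Euclidean distance) to `query`."""
    d = len(query)
    # Build (pattern, next value) pairs from the observed history.
    pairs = [(history[i:i + d], history[i + d]) for i in range(len(history) - d)]
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, query)) ** 0.5
    pairs.sort(key=lambda pv: dist(pv[0]))
    return sum(v for _, v in pairs[:k]) / k
```

The method is fully data-driven: no parametric dynamics are postulated, which is the contrast with the ARIMA benchmark drawn in the abstract.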
By:  David E. Giles (Department of Economics, University of Victoria); Hui Feng (Department of Economics, Business & Mathematics, King's College, University of Western Ontario) 
Abstract:  We derive expressions for the firstorder bias of the MLE for a Poisson regression model and show how these can be used to adjust the estimator and reduce bias without increasing MSE. The analytic results are supported by Monte Carlo simulations and an empirical application. 
Keywords:  Poisson regression; maximum likelihood estimation; bias reduction 
JEL:  C13 C25 
Date:  2009–12–23 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:0909&r=ecm 
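For reference, the Poisson regression MLE that the bias correction adjusts is routinely computed by Newton-Raphson on the log-likelihood; for the canonical log link the negative Hessian equals the Fisher information. A minimal one-covariate sketch (the paper's analytic bias-correction formula is not reproduced here):

```python
import math

def poisson_mle(xs, ys, iters=50):
    """Newton-Raphson for the Poisson regression E[y|x] = exp(a + b*x)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        lam = [math.exp(a + b * x) for x in xs]
        # Score vector.
        ga = sum(y - l for y, l in zip(ys, lam))
        gb = sum(x * (y - l) for x, y, l in zip(xs, ys, lam))
        # Negative Hessian (the Fisher information under the canonical link).
        haa = sum(lam)
        hab = sum(x * l for x, l in zip(xs, lam))
        hbb = sum(x * x * l for x, l in zip(xs, lam))
        det = haa * hbb - hab * hab
        # Newton step: theta += information^{-1} * score.
        a += (hbb * ga - hab * gb) / det
        b += (-hab * ga + haa * gb) / det
    return a, b
```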
By:  Maria Elvira Mancino (Dipartimento di Matematica per le Decisioni, University of Firenze); Simona Sanfelici (Dipartimento di Economia, University of Parma) 
Abstract:  We analyze the properties of different estimators of multivariate volatilities in the presence of microstructure noise, with particular focus on the Fourier estimator. This estimator is consistent in the case of asynchronous data and robust to microstructure effects; further, we prove the positive semidefiniteness of the estimated covariance matrix. The in-sample and forecasting properties of the Fourier method are analyzed through Monte Carlo simulations. We study the economic benefit of applying the Fourier covariance estimation methodology over other estimators in the presence of market microstructure noise from the perspective of an asset-allocation decision problem. We find that using the Fourier methodology yields statistically significant economic gains under strong microstructure effects. 
Keywords:  nonparametric covariance estimation, nonsynchronicity, microstructure, optimal portfolio choice, Fourier analysis 
JEL:  G11 C14 C22 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:flo:wpaper:200909&r=ecm 
By:  Duchesne, Pierre; Francq, Christian 
Abstract:  Generalized Wald's method constructs testing procedures having chi-squared limiting distributions from test statistics having singular normal limiting distributions by use of generalized inverses. In this article, the use of {2}-inverses for that problem is investigated, in order to propose new test statistics with convenient asymptotic chi-squared distributions. Alternatively, Imhof-based test statistics can also be defined, which converge in distribution to weighted sums of chi-squared variables; the critical values of such procedures can be found using Imhof's (1961) algorithm. The asymptotic distributions of the test statistics under the null and alternative hypotheses are discussed. Under fixed and local alternatives, the asymptotic powers are compared theoretically. Simulation studies are also performed to compare the exact powers of the test statistics in finite samples. A data analysis on temperature and precipitation variability in the European Alps illustrates the proposed methods. 
Keywords:  {2}-inverses; generalized Wald's method; generalized inverses; multivariate analysis; singular normal distribution 
JEL:  C12 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:19740&r=ecm 
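The generalized Wald construction replaces the ordinary inverse of a singular asymptotic covariance matrix with a generalized inverse, giving a quadratic form with a chi-squared limit whose degrees of freedom equal the matrix rank. A sketch for the rank-one case, where the Moore-Penrose inverse has a closed form (this illustrates the classical construction, not the paper's {2}-inverse or Imhof-based statistics):

```python
def pinv_rank_one(v, s2):
    """Moore-Penrose inverse of the rank-one covariance Sigma = s2 * v v'.
    For Sigma = s2 * v v', the pseudo-inverse is v v' / (s2 * (v'v)^2)."""
    n2 = sum(x * x for x in v)              # v'v
    scale = 1.0 / (s2 * n2 * n2)
    return [[scale * vi * vj for vj in v] for vi in v]

def wald_statistic(theta, sigma_pinv):
    """Quadratic form theta' Sigma^+ theta, asymptotically chi-squared
    with rank(Sigma) degrees of freedom under the null."""
    n = len(theta)
    return sum(theta[i] * sigma_pinv[i][j] * theta[j]
               for i in range(n) for j in range(n))
```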
By:  Ramses H. Mena; Matteo Ruggiero; Stephen G. Walker 
Abstract:  This paper is concerned with the construction of a continuous parameter sequence of random probability measures and its application for modeling random phenomena evolving in continuous time. At each time point we have a random probability measure which is generated by a Bayesian nonparametric hierarchical model, and the dependence structure is induced through a WrightFisher diffusion with mutation. The sequence is shown to be a stationary and reversible diffusion taking values on the space of probability measures. A simple estimation procedure for discretely observed data is presented and illustrated with simulated and real data sets. 
Keywords:  Bayesian nonparametric inference, continuous time dependent random measure, Markov process, measurevalued process, stationary process, stickbreaking process 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:icr:wpmath:262009&r=ecm 
By:  Javier Mencía (Banco de España); Enrique Sentana (CEMFI) 
Abstract:  We derive Lagrange Multiplier and Likelihood Ratio specification tests for the null hypotheses of multivariate normal and Student t innovations using the Generalised Hyperbolic distribution as our alternative hypothesis. We decompose the corresponding Lagrange Multiplier-type tests into skewness and kurtosis components, from which we obtain more powerful one-sided Kuhn-Tucker versions that are equivalent to the Likelihood Ratio test, whose asymptotic distribution we provide. We conduct detailed Monte Carlo exercises to study our proposed tests in finite samples. Finally, we present an empirical application to ten US sectoral stock returns, which indicates that their conditional distribution is mildly asymmetric and strongly leptokurtic. 
Keywords:  Bootstrap, Inequality Constraints, Kurtosis, Normality Tests, Skewness, Supremum Test, Underidentified Parameters 
JEL:  C12 C52 C32 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:bde:wpaper:0929&r=ecm 
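The decomposition into skewness and kurtosis components parallels the familiar Jarque-Bera construction, in which each standardized moment contributes an asymptotically chi-squared(1) term. A generic sketch of that idea (not the authors' GH-based LM statistic):

```python
def moment_components(xs):
    """Skewness and excess-kurtosis components of a moment-based normality
    test, combined Jarque-Bera style into an asymptotically chi-squared(2)
    statistic under the null of normality."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    skew = sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5
    kurt = sum((x - m) ** 4 for x in xs) / n / s2 ** 2 - 3.0
    jb = n * (skew ** 2 / 6.0 + kurt ** 2 / 24.0)
    return skew, kurt, jb
```

One-sided versions, as in the abstract, reject only for the sign of the components implied by the alternative, which is where the power gain comes from.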
By:  Anders Bredahl Kock (CREATES, Aarhus University); Timo Teräsvirta (CREATES, Aarhus University) 
Abstract:  In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a particular case where the data-generating process is a simple artificial neural network model. Suggestions for further reading conclude the paper. 
Keywords:  forecast accuracy, Kolmogorov-Gabor, nearest neighbour, neural network, nonlinear regression 
JEL:  C22 C45 C52 C53 
Date:  2010–01–01 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201001&r=ecm 
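The recursive/direct distinction above is easy to state in code: the recursive method iterates the fitted one-step map h times, while the direct method projects y_{t+h} on information at time t in a single step. For a linear AR(1) the two coincide; for nonlinear models the naive iteration ignores the integration over future shocks, which is the crux of the discussion. A toy sketch:

```python
def recursive_forecast(f, y_last, h):
    """Iterate the fitted one-step map f a total of h times (recursive method).
    For nonlinear f this naive iteration ignores integration over future
    shocks, which is what makes recursive multi-period forecasting delicate."""
    y = y_last
    for _ in range(h):
        y = f(y)
    return y

def direct_forecast(phi, y_last, h):
    """Direct method for a linear AR(1): one-step projection of y_{t+h} on y_t."""
    return (phi ** h) * y_last
```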
By:  Jouchi Nakajima (Department of Statistical Science, Duke University and Bank of Japan); Yasuhiro Omori (Faculty of Economics, University of Tokyo) 
Abstract:  Bayesian analysis of a stochastic volatility model with a generalized hyperbolic (GH) skew Student's t error distribution is described, where we consider asymmetric heavy-tailedness as well as leverage effects. An efficient Markov chain Monte Carlo estimation method is described, exploiting a normal variance-mean mixture representation of the error distribution with an inverse gamma distribution as the mixing distribution. The proposed method is illustrated using simulated data and daily TOPIX and S&P 500 stock returns. The model comparison for stock returns is conducted based on the marginal likelihood in the empirical study. Strong evidence of leverage and asymmetric heavy-tailedness is found in the stock returns. Further, a prior sensitivity analysis is conducted to investigate whether the obtained results are robust with respect to the choice of priors. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2009cf701&r=ecm 
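The normal variance-mean mixture representation exploited by the sampler can be illustrated directly: with an inverse-gamma mixing variable W ~ IG(nu/2, nu/2), the draw mu + beta*W + sqrt(W)*eps has a GH skew Student's t distribution, reducing to the usual Student's t when beta = 0. A sketch of the random-variate side only (parameter names are ours; the MCMC machinery is not shown):

```python
import random

def gh_skew_t_draw(mu, beta, nu, rng=random):
    """One draw from the GH skew Student's t via its normal variance-mean
    mixture: z = mu + beta*W + sqrt(W)*eps, with eps ~ N(0,1) and
    W ~ inverse-gamma(nu/2, nu/2) as the mixing variable."""
    # If G ~ Gamma(shape=nu/2, scale=2/nu), then 1/G ~ InvGamma(nu/2, rate nu/2).
    w = 1.0 / rng.gammavariate(nu / 2.0, 2.0 / nu)
    return mu + beta * w + (w ** 0.5) * rng.gauss(0.0, 1.0)
```

With beta = 0 this collapses to a Student's t with nu degrees of freedom; beta != 0 tilts the mixture and generates skewness.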
By:  Kasahara, Hiroyuki; Shimotsu, Katsumi 
Abstract:  This paper considers the estimation problem of structural models for which empirical restrictions are characterized by a fixed point constraint, such as structural dynamic discrete choice models or models of dynamic games. We analyze the conditions under which the nested pseudolikelihood (NPL) algorithm converges to a consistent estimator and derive its convergence rate. We find that the NPL algorithm may not necessarily converge to a consistent estimator when the fixed point mapping does not have a local contraction property. To address the issue of divergence, we propose alternative sequential estimation procedures that can converge to a consistent estimator even when the NPL algorithm does not. 
Keywords:  contraction, dynamic games, nested pseudo likelihood, recursive projection method 
JEL:  C13 C14 C63 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:200918&r=ecm 
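The local contraction property at issue is the familiar condition for successive approximations to converge: the fixed-point mapping must have derivative of modulus less than one at the fixed point. A scalar toy illustration (the actual NPL algorithm alternates pseudo-likelihood maximization with policy iteration, which is not reproduced here):

```python
def fixed_point_iterate(psi, p0, iters=200):
    """Successive approximation p_{k+1} = psi(p_k)."""
    p = p0
    for _ in range(iters):
        p = psi(p)
    return p

def is_local_contraction(psi, p_star, h=1e-6):
    """Numerically check |psi'(p*)| < 1, the local contraction condition
    on which convergence of the iteration (and of NPL) hinges."""
    deriv = (psi(p_star + h) - psi(p_star - h)) / (2.0 * h)
    return abs(deriv) < 1.0
```

When the condition fails the iteration can walk away from a perfectly valid fixed point, which is the divergence phenomenon the paper's alternative procedures are designed to escape.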
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - École des Hautes Études en Sciences Sociales (EHESS) - CNRS : UMR 6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University) 
Abstract:  Testing for a unit root in a series obtained by summing a stationary MA(1) process with a parameter close to 1 leads to serious size distortions under the null, on account of the near cancellation of the unit root by the MA component in the driving stationary series. The situation is analysed from the point of view of bootstrap testing, and an exact quantitative account is given of the error in rejection probability of a bootstrap test. A particular method of estimating the MA parameter is recommended, as it leads to very little distortion even when the MA parameter is close to 1. A new bootstrap procedure with still better properties is proposed. While more computationally demanding than the usual bootstrap, it is much less so than the double bootstrap. 
Keywords:  Unit root test, bootstrap, MA(1), size distortion 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443561_v1&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - École des Hautes Études en Sciences Sociales (EHESS) - CNRS : UMR 6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University); James MacKinnon (Department of Economics, Queen's University, Kingston, Ontario) 
Abstract:  We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that writing all the test statistics—Student's t, Anderson-Rubin, the LM statistic of Kleibergen and Moreira (K), and likelihood ratio (LR)—as functions of six random quantities leads to a number of interesting results about the properties of the tests under weak-instrument asymptotics. We then propose several new procedures for bootstrapping the three non-exact test statistics and also a new conditional bootstrap version of the LR test. These use more efficient estimates of the parameters of the reduced-form equation than existing procedures. When the best of these new procedures is used, both the K and conditional bootstrap LR tests have excellent performance under the null. However, power considerations suggest that the latter is probably the method of choice. 
Keywords:  bootstrap, weak instruments, IV estimation 
Date:  2009–12–22 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00442713_v1&r=ecm 
By:  Toshio Honda 
Abstract:  We consider the nonparametric estimation of regression functions for dependent data. Supposing that the covariates are observed with additive errors, we employ nonparametric deconvolution kernel techniques to estimate the regression functions. We investigate how the strength of time dependence affects the asymptotic properties of the local constant and local linear estimators. We treat both short-range dependent and long-range dependent linear processes in a unified way and demonstrate that the long-range dependence (LRD) of the covariates affects the asymptotic properties of the nonparametric estimators just as the LRD of the regression errors does. 
Keywords:  local polynomial regression, errors-in-variables, deconvolution, ordinary smooth case, supersmooth case, linear processes, long-range dependence 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd09092&r=ecm 
By:  Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Nakahiro Yoshida (Graduate School of Mathematical Sciences, Tokyo University, Tokyo) 
Abstract:  We consider a multidimensional Ito process Y=(Y_t), t in [0,T], with some unknown drift coefficient process b_t and volatility coefficient sigma(X_t,theta) with covariate process X=(X_t), t in [0,T], the function sigma(x,theta) being known up to theta in Theta. For this model we consider a change point problem for the parameter theta in the volatility component. The change is supposed to occur at some point t* in (0,T). Given discrete time observations from the process (X,Y), we propose quasi-maximum likelihood estimation of the change point. We present the rate of convergence of the change point estimator and limit theorems of asymptotically mixed type. 
Keywords:  Itô processes, discrete time observations, change point estimation, volatility 
Date:  2009–06–18 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1084&r=ecm 
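The quasi-maximum likelihood idea can be conveyed with an i.i.d. Gaussian toy version: scan candidate split points and pick the one minimising the split-sample Gaussian quasi-log-likelihood of the squared returns. This is only a caricature of the paper's Itô-process setting (function and parameter names are ours):

```python
import math

def volatility_change_point(returns, min_seg=5):
    """Scan candidate split points k and return the one minimising the
    two-segment Gaussian quasi-log-likelihood criterion
    n1*log(var1) + n2*log(var2), with zero-mean variance estimates."""
    n = len(returns)
    best_k, best_obj = None, float("inf")
    for k in range(min_seg, n - min_seg):
        v1 = sum(r * r for r in returns[:k]) / k
        v2 = sum(r * r for r in returns[k:]) / (n - k)
        obj = k * math.log(v1) + (n - k) * math.log(v2)
        if obj < best_obj:
            best_k, best_obj = k, obj
    return best_k
```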
By:  Tsunehiro Ishihara (Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Faculty of Economics, University of Tokyo) 
Abstract:  An efficient Bayesian estimation method using Markov chain Monte Carlo is proposed for a multivariate stochastic volatility model that is a natural extension of the univariate stochastic volatility model with leverage and heavy-tailed errors, where we further incorporate cross leverage effects among stock returns. Our method is based on a multi-move sampler that samples a block of latent volatility vectors, and is the first described in the literature for a multivariate stochastic volatility model with cross leverage and heavy-tailed errors. Its high sampling efficiency is shown using numerical examples, in comparison with a single-move sampler that samples one latent volatility vector at a time given the other latent vectors and parameters. Empirical studies are given using five-dimensional stock return indices from the Tokyo Stock Exchange. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2009cf700&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - École des Hautes Études en Sciences Sociales (EHESS) - CNRS : UMR 6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University); James MacKinnon (Department of Economics, Queen's University, Kingston, Ontario) 
Abstract:  We propose a wild bootstrap procedure for linear regression models estimated by instrumental variables. Like other bootstrap procedures that we have proposed elsewhere, it uses efficient estimates of the reduced-form equation(s). Unlike them, it takes account of possible heteroskedasticity of unknown form. We apply this procedure to t tests, including heteroskedasticity-robust t tests, and to the Anderson-Rubin test. We provide simulation evidence that it works far better than older methods, such as the pairs bootstrap. We also show how to obtain reliable confidence intervals by inverting bootstrap tests. An empirical example illustrates the utility of these procedures. 
Keywords:  Instrumental variables estimation, two-stage least squares, weak instruments, wild bootstrap, pairs bootstrap, residual bootstrap, confidence intervals, Anderson-Rubin test 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443550_v1&r=ecm 
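The heart of the wild bootstrap is resampling by reweighting residuals with independent random signs, which preserves each observation's own error variance. A minimal sketch for a one-regressor model without instruments (the paper's procedure additionally uses efficient reduced-form estimates, not shown here; all names are ours):

```python
import random

def wild_bootstrap_pvalue(x, y, n_boot=499, rng=random):
    """Wild bootstrap p-value for H0: b = 0 in the no-intercept model
    y = b*x + u, using Rademacher (+/-1) weights on the null residuals."""
    sxx = sum(xi * xi for xi in x)
    b_hat = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    resid_null = list(y)        # under H0: b = 0, the residuals are y itself
    exceed = 0
    for _ in range(n_boot):
        # Flipping residual signs independently preserves heteroskedasticity
        # of unknown form, the key property of the wild bootstrap.
        y_star = [r if rng.random() < 0.5 else -r for r in resid_null]
        b_star = sum(xi * ys for xi, ys in zip(x, y_star)) / sxx
        if abs(b_star) >= abs(b_hat):
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```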
By:  JeanPhilippe Cayen; MarcAndré Gosselin; Sharon Kozicki 
Abstract:  The workhorse DSGE model used for monetary policy evaluation is designed to capture business cycle fluctuations in an optimization-based format. It is commonplace to log-linearize models and express them with variables in deviation-from-steady-state format. Structural parameters are either calibrated, or estimated using data pre-filtered to extract trends. Such procedures treat past and future trends as fully known by all economic agents or, at least, as independent of cyclical behaviour. With such a setup, in a forecasting environment it seems natural to add forecasts from DSGE models to trend forecasts. While this may be an intuitive starting point, efficiency can be improved in multiple dimensions. Ideally, the behaviour of trends and cycles should be jointly modeled. However, for computational reasons it may not be feasible to do so, particularly with medium- or large-scale models. Nevertheless, marginal improvements on the standard framework can still be made. First, pre-filtering of data can be amended to incorporate structural links between the various trends that are implied by the economic theory on which the model is based, improving the efficiency of trend estimates. Second, forecast efficiency can be improved by building a forecast model for model-consistent trends. Third, the decomposition of shocks into permanent and transitory components can be endogenized to also be model-consistent. This paper proposes a unified framework for introducing these improvements. Application of the methodology validates the existence of considerable deviations between trends used for detrending data prior to structural parameter estimation and model-consistent estimates of trends, implying the potential for efficiency gains in forecasting. Such deviations also provide information on aspects of the model that are least coherent with the data, possibly indicating model misspecification. 
Additionally, the framework provides a structure for examining cyclical responses to trend shocks, among other extensions. 
Keywords:  Business fluctuations and cycles; Econometric and statistical methods 
JEL:  E3 D52 C32 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:0935&r=ecm 
By:  Isabel Molina; Daniel Peña; Betsabe Perez 
Abstract:  In this work we extend the procedure proposed by Peña and Yohai (1999) for computing robust regression estimates to linear models with fixed effects. We propose to calculate the principal sensitivity components associated with each cluster and delete the set of possible outliers based on an appropriate robust scale of the residuals. Some advantages of our robust procedure are: (a) it is computationally inexpensive; (b) it is able to avoid the swamping effect often present in similar methods; (c) it is appropriate for contamination in the error term (vertical outliers) and possibly masked high-leverage points (horizontal outliers). The performance of the robust procedure is investigated through several simulation studies. 
Keywords:  Fixed effects models, Outlier detection, Principal sensitivity vector 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws098827&r=ecm 
By:  Wolfgang Karl Härdle; Ya’acov Ritov; Song Song 
Abstract:  In this paper uniform confidence bands are constructed for nonparametric quantile estimates of regression functions. The method is based on the bootstrap, where resampling is done from a suitably estimated empirical density function (edf) for residuals. It is known that the approximation error for the uniform confidence band by the asymptotic Gumbel distribution is logarithmically slow. It is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model. Comparison to classic asymptotic uniform bands is presented through a simulation study. An economic application considers the labour market differential effect with respect to different education levels. 
Keywords:  Bootstrap, Quantile Regression, Confidence Bands, Nonparametric Fitting, Kernel Smoothing, Partial Linear Model 
JEL:  C14 C21 C31 J01 J31 J71 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010002&r=ecm 
By:  Franz Buscha (Centre of Employment Research, University of Westminster); Anna Conte (Strategic Interaction Group, Max-Planck-Institut für Ökonomik, Jena) 
Abstract:  In this paper, we discuss the derivation and application of a bivariate ordered probit model with mixed effects. Our approach allows one to estimate the distribution of the effect (gamma) of an endogenous ordered variable on an ordered explanatory variable. By allowing gamma to vary over the population, our estimator offers a more flexible parametric setting to recover the causal effect of an endogenous variable in an ordered choice setting. We use Monte Carlo simulations to examine the performance of the maximum likelihood estimator of our system and apply this to a relevant example from the UK education literature. 
Keywords:  bivariate ordered probit, maximum likelihood, mixed effects, truancy 
JEL:  C35 C51 I20 
Date:  2009–12–21 
URL:  http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2009103&r=ecm 
By:  Ramses H. Mena; Stephen G. Walker 
Abstract:  This paper studies a novel idea for constructing continuous-time stationary Markov models. The approach undertaken is based on a latent representation of the corresponding transition probabilities that leads to appealing ways to study and simulate the dynamics of the constructed processes. Some well-known models are shown to fall within this construction, shedding some light on both their theoretical and applied properties. As an illustration of the capabilities of our proposal, a simple estimation problem is posed. 
Keywords:  Gibbs sampler; Markov process; Stationary process 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:icr:wpmath:252009&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - École des Hautes Études en Sciences Sociales (EHESS) - CNRS : UMR 6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University) 
Abstract:  Although attention has been given to obtaining reliable standard errors for the plug-in estimator of the Gini index, all standard errors suggested until now are either complicated or quite unreliable. An approximation is derived for the estimator by which it is expressed as a sum of IID random variables. This approximation allows us to develop a reliable standard error that is simple to compute. A simple but effective bias correction is also derived. The quality of inference based on the approximation is checked in a number of simulation experiments, and is found to be very good unless the tail of the underlying distribution is heavy. Bootstrap methods are presented which alleviate this problem except in cases in which the variance is very large or fails to exist. Similar methods can be used to find reliable standard errors of other indices which are not simply linear functionals of the distribution function, such as Sen's poverty index and its modification known as the Sen-Shorrocks-Thon index. 
Keywords:  Gini index, delta method, asymptotic inference, jackknife, bootstrap 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443553_v1&r=ecm 
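The plug-in Gini estimator and a baseline jackknife standard error (one of the existing approaches the paper improves upon) are easy to state; a sketch, using the usual order-statistic formula for the index:

```python
def gini(xs):
    """Plug-in Gini index: G = 2*sum(i*x_(i))/(n*sum x) - (n+1)/n,
    with x_(i) the i-th order statistic (ascending)."""
    s = sorted(xs)
    n = len(s)
    total = sum(s)
    weighted = sum((i + 1) * x for i, x in enumerate(s))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def gini_jackknife_se(xs):
    """Leave-one-out jackknife standard error for the plug-in Gini estimator."""
    n = len(xs)
    loo = [gini(xs[:i] + xs[i + 1:]) for i in range(n)]
    mean = sum(loo) / n
    var = (n - 1) / n * sum((g - mean) ** 2 for g in loo)
    return var ** 0.5
```

The paper's own standard error instead comes from an approximation expressing the estimator as a sum of IID variables, which avoids the jackknife's n extra passes over the data.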
By:  Tatsuya Kubokawa (Faculty of Economics, University of Tokyo) 
Abstract:  The linear mixed model (LMM) and the empirical best linear unbiased predictor (EBLUP) induced from the LMM have been well studied and extensively used for a long time in many applications. Of these, EBLUP in small area estimation has been recognized as a useful tool in various practical statistics. In this paper, we give a review of the LMM and EBLUP from the aspect of small area estimation. Especially, we explain why EBLUP is likely to be reliable: it possesses the shrinkage function and the pooling effects as desirable properties, which arise from the setup of random effects and common parameters in the LMM. These important properties of EBLUP are clarified, and some recent results on mean squared error estimation, confidence intervals and variable selection procedures are summarized. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2009cf702&r=ecm 
By:  Le, Vo Phuong Mai (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael 
Abstract:  We review the methods used in many papers to evaluate DSGE models by comparing their simulated moments and other features with data equivalents. We note that they select, scale and characterise the shocks without reference to the data; crucially, they fail to use the joint distribution of the features under comparison. We illustrate this point by recomputing an assessment of a two-country model in a recent paper; we find that the paper's conclusions are essentially reversed. 
Keywords:  Bootstrap; US-EU model; DSGE; VAR; indirect inference; Wald statistic; anomaly; puzzle 
JEL:  C12 C32 C52 E1 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:cdf:wpaper:2009/31&r=ecm 
By:  James G. MacKinnon (Queen's University) 
Abstract:  This paper provides tables of critical values for some popular tests of cointegration and unit roots. Although these tables are necessarily based on computer simulations, they are much more accurate than those previously available. The results of the simulation experiments are summarized by means of response surface regressions in which critical values depend on the sample size. From these regressions, asymptotic critical values can be read off directly, and critical values for any finite sample size can easily be computed with a hand calculator. Added in 2010 version: A new appendix contains additional results that are more accurate and cover more cases than the ones in the original paper. 
Keywords:  unit root test, Dickey-Fuller test, Engle-Granger test, ADF test 
JEL:  C16 C22 C32 C12 C15 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1227&r=ecm 
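A response surface expresses a critical value as a polynomial in 1/n, so the asymptotic critical value is the intercept and finite-sample values follow by simple evaluation. A sketch with deliberately hypothetical coefficients (NOT MacKinnon's published numbers, which are in the paper):

```python
def response_surface_cv(coeffs, n):
    """Evaluate a response-surface critical value:
    cv(n) = b_inf + b1/n + b2/n^2 + b3/n^3,
    where the intercept b_inf is the asymptotic critical value."""
    b_inf, b1, b2, b3 = coeffs
    return b_inf + b1 / n + b2 / n ** 2 + b3 / n ** 3

# Hypothetical illustrative coefficients -- not taken from the paper's tables.
EXAMPLE_COEFFS = (-2.86, -2.90, -4.2, 0.0)
```

As n grows the finite-sample adjustment terms vanish and the critical value converges to the intercept, which is how asymptotic critical values are read off the fitted surface.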
By:  Wolfgang Karl Härdle; Yarema Okhrin; Weining Wang 
Abstract:  Pricing kernels implicit in option prices play a key role in assessing the risk aversion over equity returns. We deal with nonparametric estimation of the pricing kernel (Empirical Pricing Kernel) given by the ratio of the risk-neutral density estimator and the subjective density estimator. The former density can be represented as the second derivative, with respect to the strike price, of the European call option price function, which we estimate by nonparametric regression. The subjective density is estimated nonparametrically too. In this framework, we develop the asymptotic distribution theory of the EPK in the L1 sense. In particular, to evaluate the overall variation of the pricing kernel, we develop a uniform confidence band for the EPK. Furthermore, as an alternative to the asymptotic approach, we propose a bootstrap confidence band. The developed theory is helpful for testing parametric specifications of pricing kernels and has a direct extension to estimating risk aversion patterns. The established results are assessed and compared in a Monte Carlo study. As a real application, we test risk aversion over time induced by the EPK. 
Keywords:  Empirical Pricing Kernel; Confidence band; Bootstrap; Kernel Smoothing; Nonparametric 
JEL:  C00 C14 J01 J31 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010003&r=ecm 
By:  Stefan Boes (Socioeconomic Institute, University of Zurich) 
Abstract:  This paper explores semi-monotonicity constraints in the distribution of potential outcomes, first, conditional on an instrument, and second, in terms of the response function. The imposed assumptions are strictly weaker than traditional instrumental variables assumptions and can be gainfully employed to bound the counterfactual distributions, even though point identification is achieved only in special cases. The bounds have a simple analytical form and are thus of much practical relevance whenever strong exogeneity assumptions cannot be credibly invoked. The bounding strategy is illustrated in a simulated data example and applied to the effect of education on smoking. 
Keywords:  nonparametric bounds, treatment effects, causality, endogeneity, instrumental variables, policy evaluation 
JEL:  C14 C21 C25 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:soz:wpaper:0920&r=ecm 
By:  Daniel Berkowitz; Mehmet Caner; Ying Fang 
Abstract:  Valid instrumental variables must be relevant and exogenous. However, in practice it is difficult to find instruments that perfectly satisfy the orthogonality condition and at the same time are strongly correlated with the endogenous regressors. In this paper we show how a mild violation of the exogeneity assumption affects the limit of the Anderson-Rubin (1949) test. The Anderson-Rubin (AR) test statistic is frequently used because it is robust to identification problems. However, under a mild violation of exogeneity the AR test is oversized, and the problem worsens as the sample grows. In order to correct this problem, we introduce the fractionally resampled Anderson-Rubin (FAR) test, derived by modifying the resampling technique of Wu (1990). Our technical innovation is to treat the block size as a random variable. We prove that this choice recovers the limit of the AR test under a mild violation of exogeneity. We also prove that the optimal block size converges in probability to one half. Simulations show that in finite samples the FAR is conservative; thus, we propose block sizes in the range of one quarter to one third, which have good finite sample properties. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:386&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University) 
Abstract:  The bootstrap is a statistical technique used more and more widely in econometrics. While it is capable of yielding very reliable inference, some precautions should be taken in order to ensure this. Two “Golden Rules” are formulated that, if observed, help to obtain the best the bootstrap can offer. Bootstrapping always involves setting up a bootstrap datagenerating process (DGP). The main types of bootstrap DGP in current use are discussed, with examples of their use in econometrics. The ways in which the bootstrap can be used to construct confidence sets differ somewhat from methods of hypothesis testing. The relation between the two sorts of problem is discussed. 
Keywords:  Bootstrap, hypothesis test, confidence set 
Date:  2009–12–22 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00442693_v1&r=ecm 
By:  Adriana Cornea (Imperial College London); Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University) 
Abstract:  It is known that Efron's resampling bootstrap of the mean of random variables with common distribution in the domain of attraction of the stable laws with infinite variance is not consistent, in the sense that the limiting distribution of the bootstrap mean is not the same as the limiting distribution of the mean from the real sample. Moreover, the limiting distribution of the bootstrap mean is random and unknown. The conventional remedy for this problem, at least asymptotically, is either the m out of n bootstrap or subsampling. However, we show that both these procedures can be quite unreliable in other than very large samples. A parametric bootstrap is derived by considering the distribution of the bootstrap P value instead of that of the bootstrap statistic. The quality of inference based on the parametric bootstrap is examined in a simulation study, and is found to be satisfactory with heavytailed distributions unless the tail index is close to 1 and the distribution is heavily skewed. 
Keywords:  bootstrap inconsistency, stable distribution, domain of attraction, infinite variance 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443564_v1&r=ecm 
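The m out of n bootstrap discussed above is simple to state: each bootstrap replication draws m < n observations instead of n. A minimal sketch (the rule m = sqrt(n) is an illustrative choice, not one advocated by the paper):

```python
import random

def m_out_of_n_bootstrap(sample, num_reps=999, m=None, seed=0):
    """Bootstrap distribution of the mean, resampling m < n points per
    replication -- the classical consistency fix for infinite-variance data."""
    rng = random.Random(seed)
    n = len(sample)
    if m is None:
        m = max(2, int(n ** 0.5))  # illustrative choice of the subsample size
    means = []
    for _ in range(num_reps):
        resample = [rng.choice(sample) for _ in range(m)]
        means.append(sum(resample) / m)
    return means

boot_means = m_out_of_n_bootstrap(list(range(100)))
```

The paper's point is precisely that the choice of m matters a great deal in moderate samples, which is what makes this procedure unreliable there.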
By:  Ricardo Mora; Javier Ruiz-Castillo 
Abstract:  In this paper the Kullback-Leibler notion of discrepancy (Kullback and Leibler, 1951) is used to propose a measure of segregation within a general statistical framework. Under general conditions, this measure coincides with the Mutual Information index of segregation, M, first proposed by Theil and Finizza (1971), and fully characterized in terms of eight ordinal axioms by Frankel and Volij (2009). In this paper, two specific issues are addressed in relation to this index: the evaluation of statistical significance for observed differences in M measurements, and the control for the statistical association between demographic groups and schools and other socioeconomic variables. Among the main results of the paper, it is established that M can be decomposed to isolate segregation conditional on any vector of socioeconomic characteristics. Furthermore, consistent estimators for M and the terms in its decomposition are proposed, and their asymptotic properties are obtained. As a result, the M index now stands as the only index of segregation that has been fully characterized in terms of axiomatic properties, is well embedded in a general statistical framework, and can be used when samples are finite and a multivariate framework is required. The usefulness of the approach is illustrated by looking at patterns of multigroup school segregation in the U.S. for the school years 1989-90 and 2005-06. 
Keywords:  Multigroup segregation measurement, Axiomatic properties, Econometric models 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we09448&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University); Jean-Yves Duclos (CIRPEE - Université Laval; Department of Economics, Université Laval) 
Abstract:  Asymptotic and bootstrap tests are studied for testing whether there is a relation of stochastic dominance between two distributions. These tests have a null hypothesis of non-dominance, with the advantage that, if this null is rejected, then all that is left is dominance. This also leads us to define and focus on restricted stochastic dominance, the only empirically useful form of dominance relation that we can seek to infer in many settings. One testing procedure that we consider is based on an empirical likelihood ratio. The computations necessary for obtaining a test statistic also provide estimates of the distributions under study that satisfy the null hypothesis, on the frontier between dominance and non-dominance. These estimates can be used to perform dominance tests that can turn out to provide much improved reliability of inference compared with the asymptotic tests so far proposed in the literature. 
Keywords:  Stochastic dominance, empirical likelihood, bootstrap test 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443560_v1&r=ecm 
By:  Tetsuya Takaishi 
Abstract:  The hybrid Monte Carlo (HMC) algorithm is applied to Bayesian inference for the stochastic volatility (SV) model. We use the HMC algorithm for the Markov chain Monte Carlo updates of the volatility variables of the SV model. First we estimate the parameters of the SV model on artificial financial data and compare the results from the HMC algorithm with those from the Metropolis algorithm. We find that the HMC algorithm decorrelates the volatility variables faster than the Metropolis algorithm. Second we carry out an empirical study of the Nikkei 225 stock index time series using the HMC algorithm. We find correlation behavior for the sampled data similar to the results from the artificial financial data, and obtain a $\phi$ value close to one ($\phi \approx 0.977$), which indicates strong persistence of volatility shocks in the time series. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1001.0024&r=ecm 
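For readers unfamiliar with the HMC update itself, a generic one-dimensional sketch (applied here to a standard normal target, not to the SV posterior of the paper):

```python
import math
import random

def hmc_step(q, log_prob_grad, eps=0.1, n_leapfrog=20, rng=random):
    """One hybrid Monte Carlo update: sample a momentum, integrate Hamiltonian
    dynamics with the leapfrog scheme, then accept/reject by Metropolis."""
    p = rng.gauss(0.0, 1.0)                     # fresh Gaussian momentum
    logp, grad = log_prob_grad(q)
    h0 = -logp + 0.5 * p * p                    # initial Hamiltonian
    q_new = q
    p += 0.5 * eps * grad                       # leapfrog: initial half step
    for i in range(n_leapfrog):
        q_new += eps * p                        # full position step
        logp, grad = log_prob_grad(q_new)
        if i < n_leapfrog - 1:
            p += eps * grad                     # full momentum step
    p += 0.5 * eps * grad                       # final half step
    h1 = -logp + 0.5 * p * p
    if rng.random() < math.exp(min(0.0, h0 - h1)):
        return q_new                            # accept the proposal
    return q                                    # reject, keep old state

target = lambda q: (-0.5 * q * q, -q)  # log N(0,1) density (up to a constant) and gradient
rng = random.Random(42)
q, draws = 0.0, []
for _ in range(2000):
    q = hmc_step(q, target, rng=rng)
    draws.append(q)
```

In the SV application each volatility variable plays the role of q, with the gradient of its log conditional posterior in place of the Gaussian gradient; the faster decorrelation the paper reports comes from the long deterministic leapfrog trajectories.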
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University) 
Abstract:  Extensions are presented to the results of Davidson and Duclos (2007), whereby the null hypothesis of restricted stochastic non-dominance can be tested by both asymptotic and bootstrap tests, the latter having considerably better properties as regards both size and power. In this paper, the methodology is extended to tests of higher-order stochastic dominance. It is seen that, unlike the first-order case, a numerical nonlinear optimisation problem has to be solved in order to construct the bootstrap DGP. Conditions are provided for a solution of this problem to exist, and efficient numerical algorithms are laid out. The empirically important case in which the samples to be compared are correlated is also treated, both for first-order and for higher-order dominance. For all of these extensions, the bootstrap algorithm is presented. Simulation experiments show that the bootstrap tests perform considerably better than asymptotic tests, and yield reliable inference in moderately sized samples. 
Keywords:  Higherorder stochastic dominance, empirical likelihood, bootstrap test, correlated samples 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443556_v1&r=ecm 
By:  Zisimos Koustas (Department of Economics, Brock University); Jean-François Lamarche (Department of Economics, Brock University) 
Abstract:  This paper estimates a nonlinear threshold model using instrumental variables. This estimation strategy was originally developed with dynamic panel models in mind, and we extend it to time series models. In particular, we consider a forward-looking Taylor rule and test whether the Bank of England followed a nonlinear Taylor rule in setting the short-term interest rate. 
Keywords:  Thresholds; Nonlinear Models; Instrumental Variables; Taylor Rule 
JEL:  C22 C12 C13 C87 E58 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:brk:wpaper:0909&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University); James MacKinnon (Department of Economics, Queen's University, Kingston, Ontario) 
Abstract:  We develop a method based on the use of polar coordinates to investigate the existence of moments for instrumental variables and related estimators in the linear regression model. For generalized IV estimators, we obtain familiar results. For JIVE, we obtain the new result that this estimator has no moments at all. Simulation results illustrate the consequences of its lack of moments. 
Keywords:  instrumental variables, JIVE, moments of estimators 
Date:  2009–12–22 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00442692_v1&r=ecm 
By:  Maria Nieswand; Astrid Cullmann; Anne Neumann 
Abstract:  This paper provides an empirical demonstration of a practical approach to efficiency evaluation against the background of limited data availability in some regulated industries. Here, traditional DEA may suffer a lack of discriminatory power when high numbers of variables but only limited observations are available. We apply PCA-DEA for radial efficiency measurement to US natural gas transmission companies in 2007. This allows us to reduce the dimensions of the optimization problem while maintaining most of the variation in the original data. Our results suggest that the PCA-DEA methodology reduces the probability of overestimating individual firm-specific performance. It also allows for a large number of original variables without substantially reducing the discriminatory power of the model. 
Keywords:  Efficiency analysis, DEA, PCA, company regulation, natural gas transmission 
JEL:  C14 L51 L95 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp962&r=ecm 
By:  Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579; CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal; Department of Economics, McGill University) 
Abstract:  Many simulation experiments have shown that, in a variety of circumstances, bootstrap tests perform better than current asymptotic theory predicts. Specifically, the discrepancy between the actual rejection probability of a bootstrap test under the null and the nominal level of the test appears to be smaller than suggested by theory, which in any case often yields only a rate of convergence of this discrepancy to zero. Here it is argued that the Edgeworth expansions on which much theory is based provide a quite inaccurate account of the finitesample distributions of even quite basic statistics. Other methods are investigated in the hope that they may give better agreement with simulation evidence. They also suggest ways in which bootstrap procedures can be improved so as to yield more accurate inference. 
Keywords:  bootstrap discrepancy, bootstrap test, Edgeworth expansion 
Date:  2009–12–30 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00443552_v1&r=ecm 
By:  Arthur M. Berd 
Abstract:  We present a continuous-time maximum likelihood estimation methodology for credit rating transition probabilities, taking into account the presence of censored data. We perform rolling estimates of the transition matrices with exponential time weighting over varying horizons and discuss the underlying dynamics of transition generator matrices at the long-term and short-term estimation horizons. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:0912.4621&r=ecm 
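The continuous-time MLE underlying such estimates has a well-known closed form: each off-diagonal entry of the generator is the number of observed i-to-j migrations divided by the total time spent in rating i, with censored spells contributing exposure time but no transition. A minimal unweighted sketch (the paper's exponential time weighting is omitted, and the example counts are invented):

```python
def estimate_generator(transitions, time_in_state):
    """MLE of a rating-transition generator matrix: lambda_ij = N_ij / R_i.
    transitions: {(i, j): count of observed i -> j migrations}
    time_in_state: {i: total exposure time R_i spent in rating i}
    Censored spells add to time_in_state but to no transition count."""
    states = sorted(time_in_state)
    gen = {i: {j: 0.0 for j in states} for i in states}
    for (i, j), n_ij in transitions.items():
        gen[i][j] = n_ij / time_in_state[i]
    for i in states:
        # the diagonal entry makes each row sum to zero, as a generator requires
        gen[i][i] = -sum(gen[i][j] for j in states if j != i)
    return gen

G = estimate_generator({("A", "B"): 4, ("B", "A"): 1, ("B", "D"): 2},
                       {"A": 20.0, "B": 10.0, "D": 5.0})
```

Transition probability matrices over any horizon t then follow as the matrix exponential of t times the estimated generator.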
By:  Antonio Lijoi; Igor Pruenster 
Abstract:  Bayesian nonparametric inference is a relatively young area of research and it has recently undergone a strong development. Most of its success can be explained by the considerable degree of flexibility it ensures in statistical modelling, if compared to parametric alternatives, and by the emergence of new and efficient simulation techniques that make nonparametric models amenable to concrete use in a number of applied statistical problems. Since its introduction in 1973 by T.S. Ferguson, the Dirichlet process has emerged as a cornerstone in Bayesian nonparametrics. Nonetheless, in some cases of interest for statistical applications the Dirichlet process is not an adequate prior choice and alternative nonparametric models need to be devised. In this paper we provide a review of Bayesian nonparametric models that go beyond the Dirichlet process. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:icr:wpmath:232009&r=ecm 
By:  Emanuel Moench; Serena Ng; Simon Potter 
Abstract:  This paper uses multilevel factor models to characterize within- and between-block variations as well as idiosyncratic noise in large dynamic panels. Block-level shocks are distinguished from genuinely common shocks, and the estimated block-level factors are easy to interpret. The framework achieves dimension reduction and yet explicitly allows for heterogeneity between blocks. The model is estimated using a Markov chain Monte Carlo algorithm that takes into account the hierarchical structure of the factors. We organize a panel of 447 series into blocks according to the timing of data releases and use a four-level model to study the dynamics of real activity at both the block and aggregate levels. While the effect of the economic downturn of 2007-09 is pervasive, growth cycles are synchronized only loosely across blocks. The state of the leading and the lagging sectors, as well as that of the overall economy, is monitored in a coherent framework. 
Keywords:  Econometric models ; Economic forecasting ; Economic indicators ; Markov processes 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:412&r=ecm 
By:  Michelle L. Barnes; Fabià GumbauBrisa; Denny Lie; Giovanni P. Olivei 
Abstract:  We compare estimates of the New Keynesian Phillips Curve (NKPC) when the curve is specified in two different ways. In the standard difference equation (DE) form, current inflation is a function of past inflation, expected future inflation, and real marginal costs. The alternative closed form (CF) specification explicitly solves the DE form to express inflation as a function of past inflation and a present-discounted value of current and expected future marginal costs. The CF specification places model-consistent constraints on expected future inflation that are not imposed in the DE form. In a Monte Carlo exercise, we show that estimating the CF version of the NKPC gives estimates that are much more efficient than those obtained from the DE specification. We then compare DE and CF estimates of the NKPC with time-varying trend inflation on actual data. The data and estimation methodology are the same as in Cogley and Sbordone (2008). We show that DE and CF estimates differ substantially and have very different implications for inflation dynamics. As in Cogley and Sbordone, it is possible to estimate DE specifications of the NKPC in which lagged inflation plays no role once trend inflation is taken into account. The CF estimates of the NKPC, however, typically imply as large a role for lagged inflation as for expected future inflation. These estimates thus suggest that trend inflation is not in itself sufficient to explain the persistent dynamics of inflation. 
Keywords:  Inflation (Finance) ; Phillips curve 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fedbwp:0915&r=ecm 
By:  Antonio Lijoi; Igor Pruenster 
Abstract:  The present paper provides a review of the results concerning distributional properties of means of random probability measures. Our interest in this topic has originated from inferential problems in Bayesian Nonparametrics. Nonetheless, it is worth noting that these random quantities play an important role in seemingly unrelated areas of research. In fact, there is a wealth of contributions both in the statistics and in the probability literature that we try to summarize in a unified framework. Particular attention is devoted to means of the Dirichlet process given the relevance of the Dirichlet process in Bayesian Nonparametrics. We then present a number of recent contributions concerning means of more general random probability measures and highlight connections with the moment problem, combinatorics, special functions, excursions of stochastic processes and statistical physics. 
Keywords:  Bayesian Nonparametrics; Completely random measures; Cifarelli–Regazzini identity; Dirichlet process; Functionals of random probability measures; Generalized Stieltjes transform; Neutral to the right processes; Normalized random measures; Posterior distribution; Random means; Random probability measure; Two–parameter Poisson–Dirichlet process. 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:icr:wpmath:222009&r=ecm 
By:  Clifford R. Wymer 
Abstract:  The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financial and commodity markets and of macroeconomic policy are well understood. The nonlinearity of the economic system means that its properties are heavily dependent on its parameter values. The estimators discussed here are tools to provide those parameter estimates. 
Keywords:  Continuous time; Dynamics. 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:des:wpaper:16&r=ecm 
By:  Patrizia Berti (Dipartimento di Matematica Pura ed Applicata G. Vitali, Universita di Modena e ReggioEmilia); Michele Gori (Dipartimento di Matematica per le Decisioni, Universita di Firenze); Pietro Rigo (Dipartimento di Economia Politica e Metodi Quantitativi, Universita di Pavia) 
Abstract:  This note deals with the conditional form of the law of large numbers (LLN). Let $T$ be a separable metric space, equipped with a nonatomic probability $Q$, and $\mathcal{H}$ the class of Borel subsets $H\subset T$ satisfying $Q(H)>0$. Let $\mathcal{P}$ be any consistent set of finite-dimensional distributions indexed by $T$. If $\mathcal{H}_0\subset\mathcal{H}$ is finite, there is a stochastic process $X=\{X_t:t\in T\}$ such that $X\sim\mathcal{P}$ and the conditional LLN holds for each $H\in\mathcal{H}_0$. The same is true if $\mathcal{H}_0=\sigma(\mathcal{U})\cap\mathcal{H}$ where $\mathcal{U}$ is a countable Borel partition of $T$. Under a suitable finitely additive probability, one also obtains $X\sim\mathcal{P}$ and the conditional LLN for each $H\in\mathcal{H}$. 
Keywords:  Aggregate uncertainty, Extension, Finitely additive probability, Individual risk, Law of large numbers. 
JEL:  C02 C60 D80 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:flo:wpaper:200910&r=ecm 
By:  D. Sornette; A. Saichev; V. Filimonov 
Abstract:  We present a new theory of homogeneous volatility (and variance) estimators for arbitrary stochastic processes. The main tool of our theory is the parsimonious encoding of all the information contained in the OHLC prices for a given time interval by the joint distributions of the high-minus-open, low-minus-open and close-minus-open values, whose analytical expression is derived exactly for Wiener processes with drift. The efficiency of the newly proposed estimators compares favorably with that of the Garman-Klass, Rogers-Satchell and maximum likelihood estimators. 
Keywords:  Variance and volatility estimators, efficiency, homogeneous functions, Schwarz inequality, extremes of Wiener processes 
JEL:  C13 C51 
Date:  2009–10–25 
URL:  http://d.repec.org/n?u=RePEc:stz:wpaper:ccss0900007&r=ecm 
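The Garman-Klass benchmark mentioned above is itself a function of the same three ingredients; a minimal sketch of the classical per-bar formula (the sample OHLC bars are invented):

```python
import math

def garman_klass_variance(o, h, l, c):
    """Classical Garman-Klass per-bar variance estimate from OHLC prices:
    0.5 * ln(H/L)^2 - (2*ln 2 - 1) * ln(C/O)^2."""
    hl = math.log(h / l)
    co = math.log(c / o)
    return 0.5 * hl * hl - (2.0 * math.log(2.0) - 1.0) * co * co

bars = [(100.0, 102.0, 99.0, 101.0),   # invented (open, high, low, close) bars
        (101.0, 103.5, 100.5, 100.8)]
avg_var = sum(garman_klass_variance(*b) for b in bars) / len(bars)
volatility = math.sqrt(avg_var)
```

Because H >= max(O, C) and L <= min(O, C), the estimate is nonnegative bar by bar; the paper's estimators are built from the same high-minus-open, low-minus-open and close-minus-open quantities.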
By:  Mamatzakis, E; Milidonis, A; Christodoulakis, G 
Abstract:  Using a unique dataset for the five major UK insurance industries, we adopt a novel approach in the insurance literature and model the evolution of their underwriting returns as regime-switching processes, which outperform standard approaches. This produces estimates of time-varying conditional regime probabilities and captures the non-normality present in the data, allowing us to study the joint dynamics of industry regime probabilities using dynamic panel and panel vector autoregressions and to attribute them to economic factors. Our evidence uncovers high/low-volatility regime switching in all industries, whose joint evolution is mainly attributed to industry-specific factors. Impulse response functions and variance decompositions from a panel VAR identify a plethora of causal links among our variables and the underlying persistence of their interaction, showing that shocks from changes in claims exert a positive impact on the probability of the high-volatility regime. 
Keywords:  Insurance; Reinsurance; Business Cycles; Regime Switching; Panel VAR 
JEL:  E30 G30 C1 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:19485&r=ecm 
By:  Pedro Albarran; Ignacio Ortuno; Javier Ruiz-Castillo 
Abstract:  This paper introduces a novel methodology for comparing the citation distributions of research units working in the same homogeneous field. Given a critical citation level (CCL), we suggest using two real-valued indicators to describe the shape of any distribution: a high-impact and a low-impact measure defined over the set of articles with citations above or below the CCL. The key to this methodology is the identification of a citation distribution with an income distribution. Once this step is taken, it is easy to realize that the measurement of low-impact coincides with the measurement of economic poverty. In turn, it is equally natural to identify the measurement of high-impact with the measurement of a certain notion of economic affluence. On the other hand, it is seen that the ranking of citation distributions according to a family of low-impact measures, originally suggested by Foster et al. (1984) for the measurement of economic poverty, is essentially characterized by a number of desirable axioms. Appropriately redefined, these same axioms lead to the selection of an equally convenient class of decomposable high-impact measures. These two families are shown to satisfy other interesting properties that make them potentially useful in empirical applications, including the comparison of research units working in different fields. 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we095735&r=ecm 
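The identification of low-impact measurement with poverty measurement can be made concrete by applying the Foster-Greer-Thorbecke family cited above to citation counts below the critical citation level; the citation data here are invented for illustration:

```python
def fgt_low_impact(citations, ccl, alpha=2.0):
    """FGT-style low-impact index: average of ((ccl - x)/ccl)**alpha over
    articles whose citation count x falls below the critical citation
    level ccl, with zero contribution from articles at or above it."""
    n = len(citations)
    return sum(((ccl - x) / ccl) ** alpha for x in citations if x < ccl) / n

cites = [0, 1, 3, 5, 8, 12, 20, 40]                   # invented citation counts
headcount = fgt_low_impact(cites, ccl=10, alpha=0.0)  # share of low-impact articles
low_impact = fgt_low_impact(cites, ccl=10)            # severity-sensitive index
```

Setting alpha = 0 recovers the headcount ratio (the share of articles below the CCL), while larger alpha weights deeply uncited articles more heavily, exactly as in the poverty-measurement literature.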