nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒01‒16
fifty papers chosen by
Sune Karlsson
Orebro University

  1. Nonparametric Bootstrap Tests for Independence of Generalized Errors By Zaichao Du
  3. LAD Asymptotics under Conditional Heteroskedasticity with Possibly Infinite Error Densities By Jin Seo Cho; Chirok Han; Peter C. B. Phillips
  4. The Multivariate k-Nearest Neighbor Model for Dependent Variables: One-Sided Estimation and Forecasting By Dominique Guegan; Patrick Rakotomarolahy
  5. Almost Unbiased Estimation of the Poisson Regression Model By David E. Giles; Hui Feng
  6. Covariance estimation and dynamic asset allocation under microstructure effects via Fourier methodology By Mancino Maria Elvira; Simona Sanfelici
  7. On testing for the mean vector of a multivariate distribution with generalized and {2}-inverses By Duchesne, Pierre; Francq, Christian
  8. Geometric Stick-Breaking Processes for Continuous-Time Nonparametric Modeling By Ramses H. Mena; Matteo Ruggiero; Stephen G. Walker
  9. Distributional tests in multivariate dynamic models with Normal and Student t innovations By Javier Mencía; Enrique Sentana
  10. Forecasting with nonlinear time series models By Anders Bredahl Kock; Timo Teräsvirta
  11. "Stochastic Volatility Model with Leverage and Asymmetrically Heavy-Tailed Error Using GH Skew Student's t-Distribution" By Jouchi Nakajima; Yasuhiro Omori
  12. Sequential Estimation of Structural Models with a Fixed Point Constraint By Kasahara, Hiroyuki; Shimotsu, Katsumi
  13. Size distortion of bootstrap tests: application to a unit root test By Russell Davidson
  14. Bootstrap inference in a linear equation estimated by instrumental variables By Russell Davidson; James MacKinnon
  15. Nonparametric regression for dependent data in the errors-in-variables problem By Toshio Honda
  16. Estimation for the change point of the volatility in a stochastic differential equation By Stefano Iacus; Nakahiro Yoshida
  17. "Efficient Bayesian Estimation of a Multivariate Stochastic Volatility Model with Cross Leverage and Heavy-Tailed Errors" By Tsunehiro Ishihara; Yasuhiro Omori
  18. Wild bootstrap tests for IV regression By Russell Davidson; James MacKinnon
  19. Estimating DSGE-Model-Consistent Trends for Use in Forecasting By Jean-Philippe Cayen; Marc-André Gosselin; Sharon Kozicki
  20. Robust estimation in linear regression models with fixed effects By Isabel Molina; Daniel Pena; Betsabe Perez
  21. Partial Linear Quantile Regression and Bootstrap Confidence Bands By Wolfgang Karl Härdle; Ya’acov Ritov; Song Song
  22. A Bivariate Ordered Probit Estimator with Mixed Effects By Franz Buscha; Anna Conte
  23. On a Construction of Markov Models in Continuous Time By Ramses H. Mena; Stephen G. Walker
  24. Reliable inference for the Gini index By Russell Davidson
  25. "A Review of Linear Mixed Models and Small Area Estimation" By Tatsuya Kubokawa
  26. Some problems in the testing of DSGE models By Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael
  27. Critical Values for Cointegration Tests By James G. MacKinnon
  28. Uniform confidence bands for pricing kernels By Wolfgang Karl Härdle; Yarema Okhrin; Weining Wang
  29. Bounds on Counterfactual Distributions Under Semi-Monotonicity Constraints By Stefan Boes
  30. The Validity of Instruments Revisited By Daniel Berkowitz; Mehmet Caner; Ying Fang
  31. Bootstrapping econometric models By Russell Davidson
  32. A parametric bootstrap for heavy-tailed distributions By Adriana Cornea; Russell Davidson
  33. The statistical properties of the mutual information index of multigroup segregation By Ricardo Mora; Javier Ruiz-Castillo
  34. Testing for restricted stochastic dominance By Russell Davidson; Jean-Yves Duclos
  35. Bayesian Inference of Stochastic Volatility Model by Hybrid Monte Carlo By Tetsuya Takaishi
  36. Testing for restricted stochastic dominance: some further results By Russell Davidson
  37. Instrumental variable estimation of a nonlinear Taylor rule By Zisimos Koustas; Jean-Francois Lamarche
  38. Moments of IV and JIVE estimators By Russell Davidson; James MacKinnon
  39. Overcoming Data Limitations in Nonparametric Benchmarking: Applying PCA-DEA to Natural Gas Transmission By Maria Nieswand; Astrid Cullmann; Anne Neumann
  40. Exploring the bootstrap discrepancy By Russell Davidson
  41. Dynamic Estimation of Credit Rating Transition Probabilities By Arthur M. Berd
  42. Models beyond the Dirichlet process By Antonio Lijoi; Igor Pruenster
  43. Dynamic hierarchical factor models By Emanuel Moench; Serena Ng; Simon Potter
  44. Closed-form estimates of the New Keynesian Phillips Curve with time-varying trend inflation By Michelle L. Barnes; Fabià Gumbau-Brisa; Denny Lie; Giovanni P. Olivei
  45. Distributional Properties of means of Random Probability Measures By Antonio Lijoi; Igor Pruenster
  46. Estimation of Continuous Time Models in Economics: an Overview By Clifford R. Wymer
  47. A note on the law of large numbers in economics By Patrizia Berti; Michele Gori; Pietro Rigo
  48. Most Efficient Homogeneous Volatility Estimators By D. Sornette; A. Saichev; V. Filimonov
  49. A P-VAR analysis of the dynamics of UK insurance underwriting regimes By Mamatzakis, E; Milidonis, A; Christodoulakis, G
  50. The measurement of low- and high-impact in citation distributions: technical results By Pedro Albarran; Ignacio Ortuno; Javier Ruiz-Castillo

  1. By: Zaichao Du (Indiana University)
    Abstract: In this paper, we develop a general method of testing for independence when unobservable generalized errors are involved. Our method can be applied to testing for serial independence of generalized errors, and testing for independence between the generalized errors and observable covariates. The former can serve as a unified approach to testing adequacy of time series models, as model adequacy often implies that the generalized errors obtained after a suitable transformation are independent and identically distributed. The latter is a key identification assumption in many nonlinear economic models. Our tests are based on a classical sample dependence measure, the Hoeffding-Blum-Kiefer-Rosenblatt-type empirical process applied to generalized residuals. We establish a uniform expansion of the process, thereby deriving an explicit expression for the parameter estimation effect, which causes our tests not to be nuisance parameter-free. To circumvent this problem, we propose a multiplier-type bootstrap to approximate the limit distribution. Our bootstrap procedure is computationally very simple as it does not require a re-estimation of the parameters in each bootstrap replication. In a simulation study, we apply our method to test the adequacy of ARMA-GARCH and Hansen (1994) skewed t models, and document a good finite sample performance of our test. Finally, an empirical application to some daily exchange rate data highlights the merits of our approach.
    Date: 2009–12
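To make the multiplier bootstrap idea concrete: instead of re-estimating the model on every replication, one perturbs the centered per-observation contributions to the statistic with random draws. The sketch below is a generic, simplified illustration of that device, not the authors' empirical-process statistic; the function name, the sum-type statistic and the standard-normal multipliers are assumptions.

```python
import numpy as np

def multiplier_bootstrap_pvalue(contributions, stat, n_boot=999, seed=0):
    """Approximate the null distribution of a sum-type test statistic
    by reweighting its centered per-observation contributions with
    i.i.d. standard-normal multipliers; no re-estimation is needed."""
    rng = np.random.default_rng(seed)
    n = len(contributions)
    centered = contributions - contributions.mean()
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.standard_normal(n)              # multiplier draws
        boot_stats[b] = abs((w * centered).sum()) / np.sqrt(n)
    # p-value: share of bootstrap statistics at least as large as stat
    return (boot_stats >= stat).mean()
```

Each replication costs one vector product, which is why the procedure stays cheap even when estimating the model itself is expensive.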
  2. By: J. Carlos Escanciano (Indiana University)
    Abstract: This article investigates model checks for a class of possibly nonlinear heteroskedastic time series models, including but not restricted to ARMA-GARCH models. We propose omnibus tests based on functionals of certain weighted standardized residual empirical processes. The new tests are asymptotically distribution-free, suitable when the conditioning set is infinite-dimensional, and consistent against a class of Pitman's local alternatives converging at the parametric rate n^{-1/2}, with n the sample size. A Monte Carlo study shows that the simulated level of the proposed tests is close to the asymptotic level already for moderate sample sizes and that tests have a satisfactory power performance. Finally, we illustrate our methodology with an application to the well-known S&P 500 daily stock index. The paper also contains an asymptotic uniform expansion for weighted residual empirical processes when initial conditions are considered, a result of independent interest.
    Date: 2009–09
  3. By: Jin Seo Cho (Korea University); Chirok Han (Korea University); Peter C. B. Phillips (Yale University, University of Auckland, University of Southampton & Singapore Management University)
    Abstract: Least absolute deviations (LAD) estimation of linear time-series models is considered under conditional heteroskedasticity and serial correlation. The limit theory of the LAD estimator is obtained without assuming the finite density condition for the errors that is required in standard LAD asymptotics. The results are particularly useful in application of LAD estimation to financial time-series data.
    Keywords: Asymptotic leptokurtosis, Convex function, Infinite density, Least absolute deviations, Median, Weak convergence.
    JEL: C12 G11
    Date: 2009
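As background for readers less familiar with LAD: the estimator minimizes the sum of absolute residuals rather than squared ones, which is what makes it attractive for heavy-tailed financial data. The sketch below is a minimal illustration of the estimator itself via iteratively reweighted least squares, assuming a full-rank design; it does not touch the paper's limit theory, and the helper name is hypothetical.

```python
import numpy as np

def lad_fit(X, y, n_iter=200, eps=1e-8):
    """Least absolute deviations by iteratively reweighted least
    squares: weight each observation by 1/|residual| and re-solve."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting value
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # cap tiny residuals
        beta = np.linalg.solve(X.T @ (w[:, None] * X), (w * y) @ X)
    return beta
```

With a single constant regressor this iteration converges to the sample median, which is the one-dimensional LAD solution.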
  4. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: This article gives the asymptotic properties of multivariate k-nearest neighbor regression estimators for dependent variables belonging to Rd, d > 1. The results derived here make it possible to provide consistent forecasts and confidence intervals for time series. An illustration of the method is given through the estimation of economic indicators used to compute the GDP with the bridge equations. An empirical forecast accuracy comparison is provided by comparing this non-parametric method with a parametric one based on ARIMA modelling, which we consider as a benchmark because it is still often used in Central Banks to nowcast and forecast the GDP.
    Keywords: Multivariate k-nearest neighbor, asymptotic normality of the regression, mixing time series, confidence intervals, forecasts, economic indicators, euro area.
    Date: 2009–12
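The one-sided nearest-neighbor forecast can be sketched simply: embed the series into lagged vectors, find the k past vectors closest to the most recent one, and average their successors. A minimal univariate sketch under assumed names (`k`, `dim`) and a Euclidean metric; the paper's multivariate, asymptotically justified version is richer.

```python
import numpy as np

def knn_forecast(series, k=3, dim=2):
    """One-step-ahead forecast: find the k past embedding vectors
    closest to the most recent one and average their successors."""
    x = np.asarray(series, dtype=float)
    # rows of emb are windows (x[t], ..., x[t+dim-1])
    emb = np.column_stack([x[i:len(x) - dim + i + 1] for i in range(dim)])
    target = emb[-1]                   # most recent window
    hist, succ = emb[:-1], x[dim:]     # past windows and their successors
    d = np.linalg.norm(hist - target, axis=1)
    idx = np.argsort(d)[:k]            # k nearest past windows
    return succ[idx].mean()
```

On an exactly repeating series such as 1, 2, 3, 1, 2, 3, ... the nearest past windows match the current one perfectly and the forecast reproduces the cycle.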
  5. By: David E. Giles (Department of Economics, University of Victoria); Hui Feng (Department of Economics, Business & Mathematics, King's College, University of Western Ontario)
    Abstract: We derive expressions for the first-order bias of the MLE for a Poisson regression model and show how these can be used to adjust the estimator and reduce bias without increasing MSE. The analytic results are supported by Monte Carlo simulations and an empirical application.
    Keywords: Poisson regression; maximum likelihood estimation; bias reduction
    JEL: C13 C25
    Date: 2009–12–23
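The object being bias-corrected here is the Poisson MLE with log link. The following is a minimal Newton-Raphson sketch of that baseline estimator only; the paper's first-order bias expressions and the adjusted estimator are not reproduced.

```python
import numpy as np

def poisson_mle(X, y, n_iter=50):
    """Poisson regression MLE with log link, E[y|x] = exp(x'beta),
    fitted by Newton-Raphson on the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)              # score vector
        hess = X.T @ (mu[:, None] * X)     # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

For an intercept-only model the MLE is the log of the sample mean, a convenient sanity check.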
  6. By: Mancino Maria Elvira (Dipartimento di Matematica per le Decisioni, University of Firenze); Simona Sanfelici (Dipartimento di Economia, University of Parma)
    Abstract: We analyze the properties of different estimators of multivariate volatilities in the presence of microstructure noise, with particular focus on the Fourier estimator. This estimator is consistent in the case of asynchronous data and robust to microstructure effects; further, we prove the positive semi-definiteness of the estimated covariance matrix. The in-sample and forecasting properties of the Fourier method are analyzed through Monte Carlo simulations. We study the economic benefit of applying the Fourier covariance estimation methodology over other estimators in the presence of market microstructure noise from the perspective of an asset-allocation decision problem. We find that using the Fourier methodology yields statistically significant economic gains under strong microstructure effects.
    Keywords: nonparametric covariance estimation, non-synchronicity, microstructure, optimal portfolio choice, Fourier analysis
    JEL: G11 C14 C22
    Date: 2009–12
  7. By: Duchesne, Pierre; Francq, Christian
    Abstract: Generalized Wald's method constructs testing procedures having chi-squared limiting distributions from test statistics having singular normal limiting distributions by use of generalized inverses. In this article, the use of two-inverses for that problem is investigated, in order to propose new test statistics with convenient asymptotic chi-square distributions. Alternatively, Imhof-based test statistics can also be defined, which converge in distribution to a weighted sum of chi-square variables; the critical values of such procedures can be found using Imhof's (1961) algorithm. The asymptotic distributions of the test statistics under the null and alternative hypotheses are discussed. Under fixed and local alternatives, the asymptotic powers are compared theoretically. Simulation studies are also performed to compare the exact powers of the test statistics in finite samples. A data analysis on the temperature and precipitation variability in the European Alps illustrates the proposed methods.
    Keywords: two-inverses; generalized Wald's method; generalized inverses; multivariate analysis; singular normal distribution
    JEL: C12
    Date: 2010–01
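When the limiting covariance matrix is singular, an ordinary Wald statistic cannot be formed with a regular inverse. The sketch below shows only the familiar generalized-inverse variant using the Moore-Penrose inverse; the paper's {2}-inverse and Imhof-based constructions are refinements not attempted here, and the function name is illustrative.

```python
import numpy as np

def generalized_wald(diff, cov, tol=1e-10):
    """Wald-type statistic built with the Moore-Penrose inverse of a
    possibly singular asymptotic covariance matrix; under the null it
    is asymptotically chi-square with rank(cov) degrees of freedom."""
    stat = float(diff @ np.linalg.pinv(cov, rcond=tol) @ diff)
    df = int(np.linalg.matrix_rank(cov, tol=tol))
    return stat, df
```

In use, the statistic is compared with a chi-square critical value whose degrees of freedom equal the returned rank rather than the full dimension.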
  8. By: Ramses H. Mena; Matteo Ruggiero; Stephen G. Walker
    Abstract: This paper is concerned with the construction of a continuous parameter sequence of random probability measures and its application for modeling random phenomena evolving in continuous time. At each time point we have a random probability measure which is generated by a Bayesian nonparametric hierarchical model, and the dependence structure is induced through a Wright-Fisher diffusion with mutation. The sequence is shown to be a stationary and reversible diffusion taking values on the space of probability measures. A simple estimation procedure for discretely observed data is presented and illustrated with simulated and real data sets.
    Keywords: Bayesian non-parametric inference, continuous time dependent random measure, Markov process, measure-valued process, stationary process, stick-breaking process
    Date: 2009–12
  9. By: Javier Mencía (Banco de España); Enrique Sentana (CEMFI)
    Abstract: We derive Lagrange Multiplier and Likelihood Ratio specification tests for the null hypotheses of multivariate normal and Student t innovations using the Generalised Hyperbolic distribution as our alternative hypothesis. We decompose the corresponding Lagrange Multiplier-type tests into skewness and kurtosis components, from which we obtain more powerful one-sided Kuhn-Tucker versions that are equivalent to the Likelihood Ratio test, whose asymptotic distribution we provide. We conduct detailed Monte Carlo exercises to study our proposed tests in finite samples. Finally, we present an empirical application to ten US sectoral stock returns, which indicates that their conditional distribution is mildly asymmetric and strongly leptokurtic.
    Keywords: Bootstrap, Inequality Constraints, Kurtosis, Normality Tests, Skewness, Supremum Test, Underidentified parameters
    JEL: C12 C52 C32
    Date: 2009–12
  10. By: Anders Bredahl Kock (CREATES, Aarhus University); Timo Teräsvirta (CREATES, Aarhus University)
    Abstract: In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a particular case where the data-generating process is a simple artificial neural network model. Suggestions for further reading conclude the paper.
    Keywords: forecast accuracy, Kolmogorov-Gabor, nearest neighbour, neural network, nonlinear regression
    JEL: C22 C45 C52 C53
    Date: 2010–01–01
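The recursive-versus-direct distinction discussed above is easy to state for an AR(1): the recursive method fits a one-step model and iterates it h times, while the direct method regresses x_t on x_{t-h} outright. A minimal sketch with hypothetical helper names and no intercept, for illustration only:

```python
import numpy as np

def fit_ar_slope(x, lag):
    """OLS slope of x_t on x_{t-lag} without intercept."""
    return float((x[lag:] @ x[:-lag]) / (x[:-lag] @ x[:-lag]))

def ar1_forecasts(x, h):
    """h-step forecasts: recursive (iterate the one-step slope h times)
    versus direct (regress directly on the h-th lag)."""
    phi1 = fit_ar_slope(x, 1)
    recursive = phi1 ** h * x[-1]
    direct = fit_ar_slope(x, h) * x[-1]
    return recursive, direct
```

On data generated by an exact linear AR(1) the two coincide; they can differ sharply when the true process is nonlinear or misspecified, which is what the simulation study in the paper examines.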
  11. By: Jouchi Nakajima (Department of Statistical Science, Duke University and Bank of Japan); Yasuhiro Omori (Faculty of Economics, University of Tokyo)
    Abstract: Bayesian analysis of a stochastic volatility model with a generalized hyperbolic (GH) skew Student's t-error distribution is described, where we first consider asymmetric heavy-tailedness as well as leverage effects. An efficient Markov chain Monte Carlo estimation method is described, exploiting a normal variance-mean mixture representation of the error distribution with an inverse gamma distribution as a mixing distribution. The proposed method is illustrated using simulated data and daily TOPIX and S&P500 stock returns. The model comparison for stock returns is conducted based on the marginal likelihood in the empirical study. Strong evidence of leverage and asymmetric heavy-tailedness is found in the stock returns. Further, a prior sensitivity analysis is conducted to investigate whether the obtained results are robust with respect to the choice of priors.
    Date: 2009–12
  12. By: Kasahara, Hiroyuki; Shimotsu, Katsumi
    Abstract: This paper considers the estimation problem of structural models for which empirical restrictions are characterized by a fixed point constraint, such as structural dynamic discrete choice models or models of dynamic games. We analyze the conditions under which the nested pseudo-likelihood (NPL) algorithm converges to a consistent estimator and derive its convergence rate. We find that the NPL algorithm may not necessarily converge to a consistent estimator when the fixed point mapping does not have a local contraction property. To address the issue of divergence, we propose alternative sequential estimation procedures that can converge to a consistent estimator even when the NPL algorithm does not.
    Keywords: contraction, dynamic games, nested pseudo likelihood, recursive projection method
    JEL: C13 C14 C63
    Date: 2009–11
  13. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREG - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University)
    Abstract: Testing for a unit root in a series obtained by summing a stationary MA(1) process with a parameter close to -1 leads to serious size distortions under the null, on account of the near cancellation of the unit root by the MA component in the driving stationary series. The situation is analysed from the point of view of bootstrap testing, and an exact quantitative account is given of the error in rejection probability of a bootstrap test. A particular method of estimating the MA parameter is recommended, as it leads to very little distortion even when the MA parameter is close to -1. A new bootstrap procedure with still better properties is proposed. While more computationally demanding than the usual bootstrap, it is much less so than the double bootstrap.
    Keywords: Unit root test, bootstrap, MA(1), size distortion
    Date: 2009–12–30
  14. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREG - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University); James MacKinnon (Department of Economics - Queen's University, Kingston, Ontario)
    Abstract: We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that writing all the test statistics—Student's t, Anderson-Rubin, the LM statistic of Kleibergen and Moreira (K), and likelihood ratio (LR)—as functions of six random quantities leads to a number of interesting results about the properties of the tests under weak-instrument asymptotics. We then propose several new procedures for bootstrapping the three non-exact test statistics and also a new conditional bootstrap version of the LR test. These use more efficient estimates of the parameters of the reduced-form equation than existing procedures. When the best of these new procedures is used, both the K and conditional bootstrap LR tests have excellent performance under the null. However, power considerations suggest that the latter is probably the method of choice.
    Keywords: bootstrap, weak instruments, IV estimation
    Date: 2009–12–22
  15. By: Toshio Honda
    Abstract: We consider the nonparametric estimation of the regression functions for dependent data. Suppose that the covariates are observed with additive errors in the data and we employ nonparametric deconvolution kernel techniques to estimate the regression functions in this paper. We investigate how the strength of time dependence affects the asymptotic properties of the local constant and linear estimators. We treat both short-range dependent and long-range dependent linear processes in a unified way and demonstrate that the long-range dependence (LRD) of the covariates affects the asymptotic properties of the nonparametric estimators as well as the LRD of regression errors does.
    Keywords: local polynomial regression, errors-in-variables, deconvolution, ordinary smooth case, supersmooth case, linear processes, long-range dependence
    Date: 2009–11
  16. By: Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Nakahiro Yoshida (Graduate School of Mathematical Sciences, Tokyo University, Tokyo)
    Abstract: We consider a multidimensional Ito process Y=(Y_t), t in [0,T], with some unknown drift coefficient process b_t and volatility coefficient sigma(X_t,theta) with covariate process X=(X_t), t in [0,T], the function sigma(x,theta) being known up to theta in Theta. For this model we consider a change point problem for the parameter theta in the volatility component. The change is supposed to occur at some point t* in (0,T). Given discrete time observations from the process (X,Y), we propose quasi-maximum likelihood estimation of the change point. We present the rate of convergence of the change point estimator and limit theorems of asymptotically mixed type.
    Keywords: Itô processes, discrete time observations, change point estimation, volatility
    Date: 2009–06–18
  17. By: Tsunehiro Ishihara (Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Faculty of Economics, University of Tokyo)
    Abstract: An efficient Bayesian estimation method using Markov chain Monte Carlo is proposed for a multivariate stochastic volatility model that is a natural extension of the univariate stochastic volatility model with leverage and heavy-tailed errors, where we further incorporate cross leverage effects among stock returns. Our method is based on a multi-move sampler which samples a block of latent volatility vectors, and is the first described in the literature for a multivariate stochastic volatility model with cross leverage and heavy-tailed errors. Its high sampling efficiency is shown using numerical examples, in comparison with a single-move sampler which samples one latent volatility vector at a time given the other latent vectors and parameters. Empirical studies are presented using five-dimensional stock return indices from the Tokyo Stock Exchange.
    Date: 2009–12
  18. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREG - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University); James MacKinnon (Department of Economics - Queen's University, Kingston, Ontario)
    Abstract: We propose a wild bootstrap procedure for linear regression models estimated by instrumental variables. Like other bootstrap procedures that we have proposed elsewhere, it uses efficient estimates of the reduced-form equation(s). Unlike them, it takes account of possible heteroskedasticity of unknown form. We apply this procedure to t tests, including heteroskedasticity-robust t tests, and to the Anderson-Rubin test. We provide simulation evidence that it works far better than older methods, such as the pairs bootstrap. We also show how to obtain reliable confidence intervals by inverting bootstrap tests. An empirical example illustrates the utility of these procedures.
    Keywords: Instrumental variables estimation, two-stage least squares, weak instruments, wild bootstrap, pairs bootstrap, residual bootstrap, confidence intervals, Anderson-Rubin test
    Date: 2009–12–30
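In its simplest form, the wild bootstrap keeps each fitted value and flips the sign of each residual at random, so that the resampled errors inherit the original heteroskedasticity pattern observation by observation. The sketch below is a deliberately simplified OLS version with Rademacher weights; the paper's procedures are for IV estimation and use efficient reduced-form estimates, which this does not attempt.

```python
import numpy as np

def wild_bootstrap_se(X, y, n_boot=999, seed=0):
    """Wild bootstrap standard errors for OLS coefficients: resample
    y as fitted values plus sign-flipped residuals (Rademacher
    weights), preserving each observation's error variance."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ beta
    resid = y - fitted
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher draws
        y_star = fitted + resid * v
        boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return beta, boot.std(axis=0, ddof=1)
```

The bootstrap standard errors can then be compared with conventional or heteroskedasticity-robust ones, or used to invert tests into confidence intervals as the abstract describes.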
  19. By: Jean-Philippe Cayen; Marc-André Gosselin; Sharon Kozicki
    Abstract: The workhorse DSGE model used for monetary policy evaluation is designed to capture business cycle fluctuations in an optimization-based format. It is commonplace to loglinearize models and express them with variables in deviation-from-steady-state format. Structural parameters are either calibrated, or estimated using data pre-filtered to extract trends. Such procedures treat past and future trends as fully known by all economic agents or, at least, as independent of cyclical behaviour. With such a setup, in a forecasting environment it seems natural to add forecasts from DSGE models to trend forecasts. While this may be an intuitive starting point, efficiency can be improved in multiple dimensions. Ideally, behaviour of trends and cycles should be jointly modeled. However, for computational reasons it may not be feasible to do so, particularly with medium- or large-scale models. Nevertheless, marginal improvements on the standard framework can still be made. First, pre-filtering of data can be amended to incorporate structural links between the various trends that are implied by the economic theory on which the model is based, improving the efficiency of trend estimates. Second, forecast efficiency can be improved by building a forecast model for model-consistent trends. Third, decomposition of shocks into permanent and transitory components can be endogenized to also be model-consistent. This paper proposes a unified framework for introducing these improvements. Application of the methodology validates the existence of considerable deviations between trends used for detrending data prior to structural parameter estimation and model-consistent estimates of trends, implying the potential for efficiency gains in forecasting. Such deviations also provide information on aspects of the model that are least coherent with the data, possibly indicating model misspecification. Additionally, the framework provides a structure for examining cyclical responses to trend shocks, among other extensions.
    Keywords: Business fluctuations and cycles; Econometric and statistical methods
    JEL: E3 D52 C32
    Date: 2009
  20. By: Isabel Molina; Daniel Pena; Betsabe Perez
    Abstract: In this work we extend the procedure proposed by Peña and Yohai (1999) for computing robust regression estimates in linear models with fixed effects. We propose to calculate the principal sensitivity components associated with each cluster and delete the set of possible outliers based on an appropriate robust scale of the residuals. Some advantages of our robust procedure are: (a) it is computationally undemanding; (b) it is able to avoid the swamping effect often present in similar methods; (c) it is appropriate for contamination in the error term (vertical outliers) and possibly masked high-leverage points (horizontal outliers). The performance of the robust procedure is investigated through several simulation studies.
    Keywords: Fixed effects models, Outlier detection, Principal sensitivity vector
    Date: 2009–12
  21. By: Wolfgang Karl Härdle; Ya’acov Ritov; Song Song
    Abstract: In this paper uniform confidence bands are constructed for nonparametric quantile estimates of regression functions. The method is based on the bootstrap, where resampling is done from a suitably estimated empirical density function (edf) for residuals. It is known that the approximation error for the uniform confidence band by the asymptotic Gumbel distribution is logarithmically slow. It is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model. Comparison to classic asymptotic uniform bands is presented through a simulation study. An economic application considers the labour market differential effect with respect to different education levels.
    Keywords: Bootstrap, Quantile Regression, Confidence Bands, Nonparametric Fitting, Kernel Smoothing, Partial Linear Model
    JEL: C14 C21 C31 J01 J31 J71
    Date: 2010–01
  22. By: Franz Buscha (Centre of Employment Research, University of Westminster); Anna Conte (Strategic Interaction Group, Max-Planck-Institut für Ökonomik, Jena)
    Abstract: In this paper, we discuss the derivation and application of a bivariate ordered probit model with mixed effects. Our approach allows one to estimate the distribution of the effect (gamma) of an endogenous ordered variable on an ordered explanatory variable. By allowing gamma to vary over the population, our estimator offers a more flexible parametric setting to recover the causal effect of an endogenous variable in an ordered choice setting. We use Monte Carlo simulations to examine the performance of the maximum likelihood estimator of our system and apply this to a relevant example from the UK education literature.
    Keywords: bivariate ordered probit, maximum likelihood, mixed effects, truancy
    JEL: C35 C51 I20
    Date: 2009–12–21
  23. By: Ramses H. Mena; Stephen G. Walker
    Abstract: This paper studies a novel idea for constructing continuous-time stationary Markov models. The approach undertaken is based on a latent representation of the corresponding transition probabilities that leads to appealing ways to study and simulate the dynamics of the constructed processes. Some well-known models are shown to fall within this construction, shedding light on both their theoretical and applied properties. As an illustration of the capabilities of our proposal, a simple estimation problem is posed.
    Keywords: Gibbs sampler; Markov process; Stationary process
    Date: 2009–12
  24. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREG - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University)
    Abstract: Although attention has been given to obtaining reliable standard errors for the plugin estimator of the Gini index, all standard errors suggested until now are either complicated or quite unreliable. An approximation is derived for the estimator by which it is expressed as a sum of IID random variables. This approximation allows us to develop a reliable standard error that is simple to compute. A simple but effective bias correction is also derived. The quality of inference based on the approximation is checked in a number of simulation experiments, and is found to be very good unless the tail of the underlying distribution is heavy. Bootstrap methods are presented which alleviate this problem except in cases in which the variance is very large or fails to exist. Similar methods can be used to find reliable standard errors of other indices which are not simply linear functionals of the distribution function, such as Sen's poverty index and its modification known as the Sen-Shorrocks-Thon index.
    Keywords: Gini index, delta method, asymptotic inference, jackknife, bootstrap
    Date: 2009–12–30
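    The plug-in estimator the abstract starts from can be computed directly from the order statistics. A minimal sketch of the standard plug-in formula (illustrative only; the paper's IID-sum approximation, standard error, and bias correction are not reproduced here, and the function name is ours):

    ```python
    def gini_plugin(x):
        """Plug-in Gini index: G = sum_i (2i - n - 1) * x_(i) / (n * sum(x)),
        where x_(1) <= ... <= x_(n) are the sample values sorted in
        increasing order."""
        xs = sorted(x)
        n = len(xs)
        total = sum(xs)
        if total == 0:
            raise ValueError("Gini index undefined for zero total")
        return sum((2 * (i + 1) - n - 1) * v for i, v in enumerate(xs)) / (n * total)

    print(gini_plugin([1, 2, 3, 4]))  # 0.25
    ```

    A perfectly equal sample gives a value of zero, as expected.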
  25. By: Tatsuya Kubokawa (Faculty of Economics, University of Tokyo)
    Abstract: The linear mixed models (LMM) and the empirical best linear unbiased predictor (EBLUP) induced from LMM have been well studied and extensively used for a long time in many applications. Of these, EBLUP in small area estimation has been recognized as a useful tool in various practical statistics. In this paper, we give a review of LMM and EBLUP from the aspect of small area estimation. In particular, we explain why EBLUP is likely to be reliable. The reason is that EBLUP possesses the shrinkage function and the pooling effects as desirable properties, which arise from the setup of random effects and common parameters in LMM. These important properties of EBLUP are clarified, and some recent results on mean squared error estimation, confidence intervals and variable selection procedures are summarized.
    Date: 2009–12
  26. By: Le, Vo Phuong Mai (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael
    Abstract: We review the methods used in many papers to evaluate DSGE models by comparing their simulated moments and other features with data equivalents. We note that they select, scale and characterise the shocks without reference to the data; crucially, they fail to use the joint distribution of the features under comparison. We illustrate this point by recomputing an assessment of a two-country model in a recent paper; we find that the paper's conclusions are essentially reversed.
    Keywords: Bootstrap; US-EU model; DSGE; VAR; indirect inference; Wald statistic; anomaly; puzzle
    JEL: C12 C32 C52 E1
    Date: 2009–12
  27. By: James G. MacKinnon (Queen's University)
    Abstract: This paper provides tables of critical values for some popular tests of cointegration and unit roots. Although these tables are necessarily based on computer simulations, they are much more accurate than those previously available. The results of the simulation experiments are summarized by means of response surface regressions in which critical values depend on the sample size. From these regressions, asymptotic critical values can be read off directly, and critical values for any finite sample size can easily be computed with a hand calculator. Added in 2010 version: A new appendix contains additional results that are more accurate and cover more cases than the ones in the original paper.
    Keywords: unit root test, Dickey-Fuller test, Engle-Granger test, ADF test
    JEL: C16 C22 C32 C12 C15
    Date: 2010–01
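    A response-surface regression of the kind the abstract describes expresses the finite-sample critical value as a function of the sample size, typically CV(n) = b_inf + b1/n + b2/n^2, so that the asymptotic critical value b_inf can be read off directly. A minimal sketch; the coefficients below are invented for illustration and are NOT MacKinnon's published values:

    ```python
    def response_surface_cv(n, beta_inf, beta1, beta2=0.0):
        """Finite-sample critical value from a response-surface regression:
        CV(n) = beta_inf + beta1/n + beta2/n**2.
        As n -> infinity, the asymptotic critical value beta_inf is recovered."""
        return beta_inf + beta1 / n + beta2 / n**2

    # Illustrative coefficients only (hypothetical, not the paper's tables):
    cv_100 = response_surface_cv(100, beta_inf=-3.43, beta1=-6.0, beta2=-29.0)
    ```

    This is the "hand calculator" computation the abstract refers to: given the tabulated coefficients, one evaluation yields the critical value for any sample size.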
  28. By: Wolfgang Karl Härdle; Yarema Okhrin; Weining Wang
    Abstract: Pricing kernels implicit in option prices play a key role in assessing the risk aversion over equity returns. We deal with nonparametric estimation of the pricing kernel (Empirical Pricing Kernel) given by the ratio of the risk-neutral density estimator and the subjective density estimator. The former density can be represented as the second derivative of the European call option price function with respect to the strike price; we estimate the call price function by nonparametric regression. The subjective density is estimated nonparametrically too. In this framework, we develop the asymptotic distribution theory of the EPK in the L1 sense. Particularly, to evaluate the overall variation of the pricing kernel, we develop a uniform confidence band of the EPK. Furthermore, as an alternative to the asymptotic approach, we propose a bootstrap confidence band. The developed theory is helpful for testing parametric specifications of pricing kernels and has a direct extension to estimating risk aversion patterns. The established results are assessed and compared in a Monte-Carlo study. As a real application, we test risk aversion over time induced by the EPK.
    Keywords: Empirical Pricing Kernel; Confidence band; Bootstrap; Kernel Smoothing; Nonparametric
    JEL: C00 C14 J01 J31
    Date: 2010–01
  29. By: Stefan Boes (Socioeconomic Institute, University of Zurich)
    Abstract: This paper explores semi-monotonicity constraints in the distribution of potential outcomes, first, conditional on an instrument, and second, in terms of the response function. The imposed assumptions are strictly weaker than traditional instrumental variables assumptions and can be gainfully employed to bound the counterfactual distributions, even though point identification is only achieved in special cases. The bounds have a simple analytical form and thus have much practical relevance in all instances when strong exogeneity assumptions cannot be credibly invoked. The bounding strategy is illustrated in a simulated data example and applied to the effect of education on smoking.
    Keywords: nonparametric bounds, treatment effects, causality, endogeneity, instrumental variables, policy evaluation
    JEL: C14 C21 C25
    Date: 2009–12
  30. By: Daniel Berkowitz; Mehmet Caner; Ying Fang
    Abstract: Valid instrumental variables must be relevant and exogenous. However, in practice it is difficult to find instruments that perfectly satisfy the orthogonality condition and at the same time are strongly correlated with the endogenous regressors. In this paper we show how a mild violation of the exogeneity assumption affects the limit of the Anderson-Rubin (1949) test. The Anderson-Rubin (AR) test statistic is frequently used because it is robust to identification problems. However, when there is a mild violation of exogeneity the AR test is oversized, and with larger samples the problem gets worse. In order to correct this problem, we introduce the fractionally resampled Anderson-Rubin (FAR) test that is derived by modifying the resampling technique of Wu (1990). Our technical innovation is to treat the block size as a random variable. We prove that this choice recovers the limit of the AR test under a mild violation of exogeneity. We also prove that the optimal block size converges in probability to one half. Simulations show that in finite samples the FAR is conservative; thus, we propose block sizes in the range of one quarter to one third that have good finite sample properties.
    Date: 2009–12
  31. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University)
    Abstract: The bootstrap is a statistical technique used more and more widely in econometrics. While it is capable of yielding very reliable inference, some precautions should be taken in order to ensure this. Two “Golden Rules” are formulated that, if observed, help to obtain the best the bootstrap can offer. Bootstrapping always involves setting up a bootstrap data-generating process (DGP). The main types of bootstrap DGP in current use are discussed, with examples of their use in econometrics. The ways in which the bootstrap can be used to construct confidence sets differ somewhat from methods of hypothesis testing. The relation between the two sorts of problem is discussed.
    Keywords: Bootstrap, hypothesis test, confidence set
    Date: 2009–12–22
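    The first of the "Golden Rules" is that the bootstrap DGP must satisfy the null hypothesis under test. A minimal sketch of what this looks like for a simple test of a population mean, where the resamples are drawn from data recentred so that the null holds in the bootstrap world (an illustration in the spirit of the paper, not one of its examples; the function name and setup are ours):

    ```python
    import random

    def bootstrap_pvalue(data, mu0, B=999, seed=0):
        """Bootstrap test of H0: E[X] = mu0. The bootstrap DGP resamples
        from the data recentred at mu0, so the null hypothesis holds in the
        bootstrap world. The P value is the proportion of bootstrap
        statistics at least as extreme as the observed one."""
        rng = random.Random(seed)
        n = len(data)
        mean = sum(data) / n
        sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
        t_obs = abs(mean - mu0) / (sd / n ** 0.5)
        centred = [x - mean + mu0 for x in data]  # impose the null
        exceed = 0
        for _ in range(B):
            s = [rng.choice(centred) for _ in range(n)]
            m = sum(s) / n
            v = (sum((x - m) ** 2 for x in s) / (n - 1)) ** 0.5
            if v == 0.0:  # degenerate resample: all draws identical
                t_b = 0.0 if m == mu0 else float("inf")
            else:
                t_b = abs(m - mu0) / (v / n ** 0.5)
            if t_b >= t_obs:
                exceed += 1
        return (exceed + 1) / (B + 1)
    ```

    Resampling the raw, uncentred data instead would violate the rule: the bootstrap statistics would then be drawn under the alternative, not the null.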
  32. By: Adriana Cornea (Imperial College London); Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University)
    Abstract: It is known that Efron's resampling bootstrap of the mean of random variables with common distribution in the domain of attraction of the stable laws with infinite variance is not consistent, in the sense that the limiting distribution of the bootstrap mean is not the same as the limiting distribution of the mean from the real sample. Moreover, the limiting distribution of the bootstrap mean is random and unknown. The conventional remedy for this problem, at least asymptotically, is either the m out of n bootstrap or subsampling. However, we show that both these procedures can be quite unreliable in other than very large samples. A parametric bootstrap is derived by considering the distribution of the bootstrap P value instead of that of the bootstrap statistic. The quality of inference based on the parametric bootstrap is examined in a simulation study, and is found to be satisfactory with heavy-tailed distributions unless the tail index is close to 1 and the distribution is heavily skewed.
    Keywords: bootstrap inconsistency, stable distribution, domain of attraction, infinite variance
    Date: 2009–12–30
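    The m out of n bootstrap mentioned as the conventional remedy draws resamples of size m < n rather than n, which restores consistency asymptotically when the ordinary bootstrap of the mean fails. A minimal sketch (illustrative only; the paper's point is precisely that this device can be unreliable outside very large samples):

    ```python
    import random

    def m_out_of_n_bootstrap(data, m, B=499, seed=0):
        """m-out-of-n bootstrap distribution of the sample mean: each of the
        B resamples draws only m < n observations with replacement from the
        original data, instead of the usual n."""
        rng = random.Random(seed)
        return [sum(rng.choice(data) for _ in range(m)) / m for _ in range(B)]

    means = m_out_of_n_bootstrap([1.0, 5.0, 2.0, 8.0, 3.0, 9.0], m=3, B=199)
    ```

    Every bootstrap mean necessarily lies between the sample minimum and maximum, so the sketch is easy to sanity-check.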
  33. By: Ricardo Mora; Javier Ruiz Castillo
    Abstract: In this paper the Kullback-Leibler notion of discrepancy (Kullback and Leibler, 1951) is used to propose a measure of segregation within a general statistical framework. Under general conditions, this measure coincides with the Mutual Information index of segregation, M, first proposed by Theil and Finizza (1971), and fully characterized in terms of eight ordinal axioms by Frankel and Volij (2009). In this paper, two specific issues are addressed in relation to this index: the evaluation of statistical significance for observed differences in M measurements, and the control for the statistical association between demographic groups and schools and other socioeconomic variables. Among the main results of the paper it is established that M can be decomposed to isolate segregation conditional on any vector of socioeconomic characteristics. Furthermore, consistent estimators for M and the terms in its decomposition are proposed, and their asymptotic properties are obtained. As a result, the M index now stands as the only index of segregation which has been fully characterized in terms of axiomatic properties, is well embedded into a general statistical framework, and can be used when samples are finite and a multivariate framework is required. The usefulness of the approach is illustrated by looking at patterns of multigroup school segregation in the U.S. for the school years 1989-90 and 2005-06.
    Keywords: Multigroup segregation measurement, Axiomatic properties, Econometric models
    Date: 2009–12
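    The Mutual Information index M of Theil and Finizza is, as its name says, the mutual information between the demographic-group variable and the school variable, i.e. the Kullback-Leibler discrepancy between the joint distribution and the product of its marginals. A minimal sketch on a hypothetical group-by-school count table (illustrative only; the paper's decompositions and estimators are not reproduced):

    ```python
    import math

    def mutual_information_index(counts):
        """Mutual Information segregation index M for a table counts[g][s]
        of pupils of group g in school s:
        M = sum_{g,s} p(g,s) * log( p(g,s) / (p(g) * p(s)) )."""
        total = sum(sum(row) for row in counts)
        n_g, n_s = len(counts), len(counts[0])
        p_g = [sum(row) / total for row in counts]
        p_s = [sum(counts[g][s] for g in range(n_g)) / total for s in range(n_s)]
        m = 0.0
        for g, row in enumerate(counts):
            for s, c in enumerate(row):
                if c > 0:
                    p = c / total
                    m += p * math.log(p / (p_g[g] * p_s[s]))
        return m
    ```

    Complete segregation of two equal-sized groups into two schools gives M = log 2, while an evenly mixed table gives M = 0.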
  34. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University); Jean-Yves Duclos (CIRPEE - Université de Laval, Department of Economics - Université de Laval)
    Abstract: Asymptotic and bootstrap tests are studied for testing whether there is a relation of stochastic dominance between two distributions. These tests have a null hypothesis of nondominance, with the advantage that, if this null is rejected, then all that is left is dominance. This also leads us to define and focus on restricted stochastic dominance, the only empirically useful form of dominance relation that we can seek to infer in many settings. One testing procedure that we consider is based on an empirical likelihood ratio. The computations necessary for obtaining a test statistic also provide estimates of the distributions under study that satisfy the null hypothesis, on the frontier between dominance and nondominance. These estimates can be used to perform dominance tests that can turn out to provide much improved reliability of inference compared with the asymptotic tests so far proposed in the literature.
    Keywords: Stochastic dominance, empirical likelihood, bootstrap test
    Date: 2009–12–30
  35. By: Tetsuya Takaishi
    Abstract: The hybrid Monte Carlo (HMC) algorithm is applied to the Bayesian inference of the stochastic volatility (SV) model. We use the HMC algorithm for the Markov chain Monte Carlo updates of the volatility variables of the SV model. First we estimate the parameters of the SV model on artificial financial data and compare the results from the HMC algorithm with those from the Metropolis algorithm. We find that the HMC algorithm decorrelates the volatility variables faster than the Metropolis algorithm. Second we carry out an empirical study of the Nikkei 225 stock index time series using the HMC algorithm. We find correlation behavior for the sampled data similar to the results from the artificial financial data, and obtain a $\phi$ value close to one ($\phi \approx 0.977$), which means that the time series exhibits strong persistence of volatility shocks.
    Date: 2009–12
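    An HMC update of the kind the abstract describes draws an auxiliary Gaussian momentum, integrates Hamilton's equations with a leapfrog scheme, and accepts or rejects with a Metropolis step. A minimal sketch for a standard normal target (the SV model's high-dimensional volatility updates are far more involved; all tuning values here are illustrative):

    ```python
    import math, random

    def hmc_sample(n_samples, eps=0.2, n_leap=10, seed=1):
        """Hybrid Monte Carlo for a standard normal target: U(q) = q^2/2,
        so grad U(q) = q. Each iteration draws a Gaussian momentum, runs a
        leapfrog trajectory, and accepts with the Metropolis probability
        min(1, exp(H_old - H_new))."""
        rng = random.Random(seed)
        q = 0.0
        out = []
        for _ in range(n_samples):
            p = rng.gauss(0.0, 1.0)
            q_new, p_new = q, p
            p_new -= 0.5 * eps * q_new       # half step for momentum
            for _ in range(n_leap - 1):
                q_new += eps * p_new         # full step for position
                p_new -= eps * q_new         # full step for momentum
            q_new += eps * p_new
            p_new -= 0.5 * eps * q_new       # final half step
            h_old = 0.5 * q * q + 0.5 * p * p
            h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
            if rng.random() < math.exp(min(0.0, h_old - h_new)):
                q = q_new                    # accept the proposal
            out.append(q)
        return out

    draws = hmc_sample(2000)
    ```

    Because the leapfrog trajectory moves the whole state a long distance before the accept/reject step, successive draws decorrelate much faster than under a random-walk Metropolis update, which is the comparison the paper makes.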
  36. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University)
    Abstract: Extensions are presented to the results of Davidson and Duclos (2007), whereby the null hypothesis of restricted stochastic non-dominance can be tested by both asymptotic and bootstrap tests, the latter having considerably better properties as regards both size and power. In this paper, the methodology is extended to tests of higher-order stochastic dominance. It is seen that, unlike the first-order case, a numerical nonlinear optimisation problem has to be solved in order to construct the bootstrap DGP. Conditions are provided for a solution to exist for this problem, and efficient numerical algorithms are laid out. The empirically important case in which the samples to be compared are correlated is also treated, both for first-order and for higher-order dominance. For all of these extensions, the bootstrap algorithm is presented. Simulation experiments show that the bootstrap tests perform considerably better than asymptotic tests, and yield reliable inference in moderately sized samples.
    Keywords: Higher-order stochastic dominance, empirical likelihood, bootstrap test, correlated samples
    Date: 2009–12–30
  37. By: Zisimos Koustas (Department of Economics, Brock University); Jean-Francois Lamarche (Department of Economics, Brock University)
    Abstract: This paper estimates a nonlinear threshold model using instrumental variables. This estimation strategy was originally developed with dynamic panel models in mind and we extend it to time series models. In particular, we consider a forward-looking Taylor rule and test to see if the Bank of England followed a nonlinear Taylor rule in setting the short-term interest rate.
    Keywords: Thresholds; Nonlinear Models; Instrumental Variables; Taylor Rule
    JEL: C22 C12 C13 C87 E58
    Date: 2009–12
  38. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University); James MacKinnon (Department of Economics - Queen's University, Kingston, Ontario)
    Abstract: We develop a method based on the use of polar coordinates to investigate the existence of moments for instrumental variables and related estimators in the linear regression model. For generalized IV estimators, we obtain familiar results. For JIVE, we obtain the new result that this estimator has no moments at all. Simulation results illustrate the consequences of its lack of moments.
    Keywords: instrumental variables, JIVE, moments of estimators
    Date: 2009–12–22
  39. By: Maria Nieswand; Astrid Cullmann; Anne Neumann
    Abstract: This paper provides an empirical demonstration of a practical approach to efficiency evaluation under the limited data availability found in some regulated industries, where traditional DEA may lack discriminatory power when the number of variables is large but only limited observations are available. We apply PCA-DEA for radial efficiency measurement to US natural gas transmission companies in 2007. This allows us to reduce the dimensions of the optimization problem while maintaining most of the variation in the original data. Our results suggest that the PCA-DEA methodology reduces the probability of over-estimating individual firm-specific performance. It also allows for a large number of original variables without substantially reducing the discriminatory power of the model.
    Keywords: Efficiency analysis, DEA, PCA, company regulation, natural gas transmission
    JEL: C14 L51 L95
    Date: 2009
  40. By: Russell Davidson (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - CNRS : UMR6579, CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal, Department of Economics, McGill University - McGill University)
    Abstract: Many simulation experiments have shown that, in a variety of circumstances, bootstrap tests perform better than current asymptotic theory predicts. Specifically, the discrepancy between the actual rejection probability of a bootstrap test under the null and the nominal level of the test appears to be smaller than suggested by theory, which in any case often yields only a rate of convergence of this discrepancy to zero. Here it is argued that the Edgeworth expansions on which much theory is based provide a quite inaccurate account of the finite-sample distributions of even quite basic statistics. Other methods are investigated in the hope that they may give better agreement with simulation evidence. They also suggest ways in which bootstrap procedures can be improved so as to yield more accurate inference.
    Keywords: bootstrap discrepancy, bootstrap test, Edgeworth expansion
    Date: 2009–12–30
  41. By: Arthur M. Berd
    Abstract: We present a continuous-time maximum likelihood estimation methodology for credit rating transition probabilities, taking into account the presence of censored data. We perform rolling estimates of the transition matrices with exponential time weighting with varying horizons and discuss the underlying dynamics of transition generator matrices in the long-term and short-term estimation horizons.
    Date: 2009–12
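    The rolling, exponentially time-weighted estimates the abstract describes down-weight older rating transitions relative to recent ones. A minimal sketch of a discrete-time, exponentially weighted analogue (illustrative only: the paper's estimator is a continuous-time MLE that also handles censored data, and the function and variable names here are ours):

    ```python
    import math

    def weighted_transition_matrix(transitions, horizon, tau):
        """Exponentially time-weighted estimate of transition probabilities.
        `transitions` is a list of (time, from_state, to_state) events
        observed up to `horizon`; an event at time t receives weight
        exp(-(horizon - t) / tau), so older events count for less."""
        counts, totals = {}, {}
        for t, i, j in transitions:
            w = math.exp(-(horizon - t) / tau)
            counts[(i, j)] = counts.get((i, j), 0.0) + w
            totals[i] = totals.get(i, 0.0) + w
        # normalise each row by the weighted number of departures from i
        return {(i, j): c / totals[i] for (i, j), c in counts.items()}
    ```

    Shrinking `tau` concentrates the estimate on the most recent transitions, which is how the short-horizon and long-horizon rolling estimates in the abstract would differ.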
  42. By: Antonio Lijoi; Igor Pruenster
    Abstract: Bayesian nonparametric inference is a relatively young area of research and it has recently undergone a strong development. Most of its success can be explained by the considerable degree of flexibility it ensures in statistical modelling, if compared to parametric alternatives, and by the emergence of new and efficient simulation techniques that make nonparametric models amenable to concrete use in a number of applied statistical problems. Since its introduction in 1973 by T.S. Ferguson, the Dirichlet process has emerged as a cornerstone in Bayesian nonparametrics. Nonetheless, in some cases of interest for statistical applications the Dirichlet process is not an adequate prior choice and alternative nonparametric models need to be devised. In this paper we provide a review of Bayesian nonparametric models that go beyond the Dirichlet process.
    Date: 2009–12
  43. By: Emanuel Moench; Serena Ng; Simon Potter
    Abstract: This paper uses multi-level factor models to characterize within- and between-block variations as well as idiosyncratic noise in large dynamic panels. Block-level shocks are distinguished from genuinely common shocks, and the estimated block-level factors are easy to interpret. The framework achieves dimension reduction and yet explicitly allows for heterogeneity between blocks. The model is estimated using a Markov chain Monte-Carlo algorithm that takes into account the hierarchical structure of the factors. We organize a panel of 447 series into blocks according to the timing of data releases and use a four-level model to study the dynamics of real activity at both the block and aggregate levels. While the effect of the economic downturn of 2007-09 is pervasive, growth cycles are synchronized only loosely across blocks. The state of the leading and the lagging sectors, as well as that of the overall economy, is monitored in a coherent framework.
    Keywords: Econometric models ; Economic forecasting ; Economic indicators ; Markov processes
    Date: 2009
  44. By: Michelle L. Barnes; Fabià Gumbau-Brisa; Denny Lie; Giovanni P. Olivei
    Abstract: We compare estimates of the New Keynesian Phillips Curve (NKPC) when the curve is specified in two different ways. In the standard difference equation (DE) form, current inflation is a function of past inflation, expected future inflation, and real marginal costs. The alternative closed form (CF) specification explicitly solves the DE form to express inflation as a function of past inflation and a present-discounted value of current and expected future marginal costs. The CF specification places model-consistent constraints on expected future inflation that are not imposed in the DE form. In a Monte Carlo exercise, we show that estimating the CF version of the NKPC gives estimates that are much more efficient than the estimates obtained from the DE specification. We then compare DE and CF estimates of the NKPC with time-varying trend inflation on actual data. The data and estimation methodology are the same as in Cogley and Sbordone (2008). We show that DE and CF estimates differ substantially and have very different implications for inflation dynamics. As in Cogley and Sbordone, it is possible to estimate DE specifications of the NKPC where lagged inflation plays no role once trend inflation is taken into account. The CF estimates of the NKPC, however, typically imply as large a role for lagged inflation as for expected future inflation. These estimates thus suggest that trend inflation is not in itself sufficient to explain the persistent dynamics of inflation.
    Keywords: Inflation (Finance) ; Phillips curve
    Date: 2009
  45. By: Antonio Lijoi; Igor Pruenster
    Abstract: The present paper provides a review of the results concerning distributional properties of means of random probability measures. Our interest in this topic has originated from inferential problems in Bayesian Nonparametrics. Nonetheless, it is worth noting that these random quantities play an important role in seemingly unrelated areas of research. In fact, there is a wealth of contributions both in the statistics and in the probability literature that we try to summarize in a unified framework. Particular attention is devoted to means of the Dirichlet process given the relevance of the Dirichlet process in Bayesian Nonparametrics. We then present a number of recent contributions concerning means of more general random probability measures and highlight connections with the moment problem, combinatorics, special functions, excursions of stochastic processes and statistical physics.
    Keywords: Bayesian Nonparametrics; Completely random measures; Cifarelli–Regazzini identity; Dirichlet process; Functionals of random probability measures; Generalized Stieltjes transform; Neutral to the right processes; Normalized random measures; Posterior distribution; Random means; Random probability measure; Two–parameter Poisson–Dirichlet process.
    Date: 2009–12
  46. By: Clifford R. Wymer
    Abstract: The dynamics of economic behaviour is often developed in theory as a continuous-time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financial and commodity markets and of macro-economic policy are well understood. The nonlinearity of the economic system means that its properties are heavily dependent on its parameter values. The estimators discussed here are tools to provide those parameter estimates.
    Keywords: Continuous time; Dynamics.
    Date: 2009–09
  47. By: Patrizia Berti (Dipartimento di Matematica Pura ed Applicata G. Vitali, Universita di Modena e Reggio-Emilia); Michele Gori (Dipartimento di Matematica per le Decisioni, Universita di Firenze); Pietro Rigo (Dipartimento di Economia Politica e Metodi Quantitativi, Universita di Pavia)
    Abstract: This note deals with the conditional form of the law of large numbers (LLN). Let $T$ be a separable metric space, equipped with a non atomic probability $Q$, and $\mathcal{H}$ the class of Borel subsets $H\subset T$ satisfying $Q(H)>0$. Let $\mathcal{P}$ be any consistent set of finite dimensional distributions indexed by $T$. If $\mathcal{H}_0\subset\mathcal{H}$ is finite, there is a stochastic process $X=\{X_t:t\in T\}$ such that $X\sim\mathcal{P}$ and the conditional LLN holds for each $H\in\mathcal{H}_0$. The same is true if $\mathcal{H}_0=\sigma(\mathcal{U})\cap\mathcal{H}$ where $\mathcal{U}$ is a countable Borel partition of $T$. Under a suitable finitely additive probability, one also obtains $X\sim\mathcal{P}$ and the conditional LLN for each $H\in\mathcal{H}$.
    Keywords: Aggregate uncertainty, Extension, Finitely additive probability, Individual risk, Law of large numbers.
    JEL: C02 C60 D80
    Date: 2009–12
  48. By: D. Sornette; A. Saichev; V. Filimonov
    Abstract: We present a new theory of homogeneous volatility (and variance) estimators for arbitrary stochastic processes. The main tool of our theory is the parsimonious encoding of all the information contained in the OHLC prices for a given time interval by the joint distributions of the high-minus-open, low-minus-open and close-minus-open values, whose analytical expression is derived exactly for Wiener processes with drift. The efficiency of the new proposed estimators is favorably compared with that of the Garman-Klass, Rogers-Satchell and maximum likelihood estimators.
    Keywords: Variance and volatility estimators, efficiency, homogeneous functions, Schwarz inequality, extremes of Wiener processes
    JEL: C13 C51
    Date: 2009–10–25
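    The Garman-Klass benchmark the abstract compares against is itself a homogeneous function of the OHLC values. A minimal sketch of the classical single-bar formula (the function name is ours; the paper's new estimators are not reproduced here):

    ```python
    import math

    def garman_klass_var(o, h, l, c):
        """Classical Garman-Klass variance estimator for one OHLC bar:
        0.5 * ln(H/L)^2 - (2*ln 2 - 1) * ln(C/O)^2."""
        u = math.log(h / l)   # high-low log range
        d = math.log(c / o)   # close-open log return
        return 0.5 * u * u - (2.0 * math.log(2.0) - 1.0) * d * d

    v = garman_klass_var(100.0, 102.0, 99.0, 101.0)
    ```

    By using the intraday range as well as the close-to-open return, the estimator is substantially more efficient than the squared return alone, which is why it serves as a standard benchmark.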
  49. By: Mamatzakis, E; Milidonis, A; Christodoulakis, G
    Abstract: Using a unique dataset for the five major UK insurance industries, we adopt a novel approach in the insurance literature and model the evolution of their underwriting returns as Regime Switching processes, an approach that outperforms standard alternatives. This produces estimates of time-varying conditional regime probabilities and captures the non-normality present in the data, thus allowing the joint dynamics of industry regime probabilities to be studied using Dynamic Panel and Panel Vector Auto-Regressions and attributed to economic factors. Our evidence uncovers high/low volatility regime switching for all industries, whose joint evolution is mainly attributed to industry-specific factors. Impulse response functions and variance decompositions from a panel VAR identify a plethora of causal links among our variables and the underlying persistence of their interaction, showing that shocks from changes in claims exert a positive impact on the probability of the high volatility regime.
    Keywords: Insurance; Reinsurance; Business Cycles; Regime Switching; Panel VAR
    JEL: E30 G30 C1
    Date: 2009–12
  50. By: Pedro Albarran; Ignacio Ortuno; Javier Ruiz-Castillo
    Abstract: This paper introduces a novel methodology for comparing the citation distributions of research units working in the same homogeneous field. Given a critical citation level (CCL), we suggest using two real-valued indicators to describe the shape of any distribution: a high-impact and a low-impact measure defined over the set of articles with citations above or below the CCL. The key to this methodology is the identification of a citation distribution with an income distribution. Once this step is taken, it is easy to realize that the measurement of low-impact coincides with the measurement of economic poverty. In turn, it is equally natural to identify the measurement of high-impact with the measurement of a certain notion of economic affluence. On the other hand, it is seen that the ranking of citation distributions according to a family of low-impact measures, originally suggested by Foster et al. (1984) for the measurement of economic poverty, is essentially characterized by a number of desirable axioms. Appropriately redefined, these same axioms lead to the selection of an equally convenient class of decomposable high-impact measures. These two families are shown to satisfy other interesting properties that make them potentially useful in empirical applications, including the comparison of research units working in different fields.
    Date: 2009–11

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.