nep-ecm New Economics Papers
on Econometrics
Issue of 2007–09–16
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Enhanced routines for instrumental variables/GMM estimation and testing By Christopher F Baum; Mark E. Schaffer; Steven Stillman
  2. Testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form By Péguin-Feissolle, Anne; Strikholm, Birgit; Teräsvirta, Timo
  3. Applications of Subsampling, Hybrid, and Size-Correction Methods By Donald W.K. Andrews; Patrik Guggenberger
  4. GMM Estimation of the Number of Latent Factors By Perez, Marcos; Ahn, Seung Chan
  5. Testing Conditional Independence via Rosenblatt Transforms By Kyungchul Song
  6. Rank-1/2: A Simple Way to Improve the OLS Estimation of Tail Exponents By Xavier Gabaix; Rustam Ibragimov
  7. With or Without U? - The appropriate test for a U shaped relationship. By Lind, Jo Thori; Mehlum, Halvor
  8. Regression discontinuity design with covariates By Markus Frölich
  9. Factor Analysis in a Model with Rational Expectations By Andreas Beyer; Roger E. A. Farmer; Jérôme Henry; Massimiliano Marcellino
  10. Convergence and asymptotic variance of bootstrapped finite-time ruin probabilities with partly shifted risk processes. By Stéphane Loisel; Christian Mazza; Didier Rullière
  11. On Rate Optimality for Ill-posed Inverse Problems in Econometrics By Xiaohong Chen; Markus Reiss
  12. Testing for Cointegration Using the Johansen Methodology when Variables are Near-Integrated By Erik Hjalmarsson; Pär Österholm
  13. How to Adjust for Nonignorable Nonresponse: Calibration, Heckit or FIML? By Johansson, Fredrik
  14. Temporal aggregation, systematic sampling, and the Hodrick-Prescott filter By Agustín Maravall; Ana del Río
  15. Kriging Models That Are Robust With Respect to Simulation Errors By Siem, A.Y.D.; Hertog, D. den
  16. Dynamic and Structural Features of Intifada Violence: A Markov Process Approach By Ivan Jeliazkov; Dale J. Poirier
  17. Can I use a Panel? Panel Conditioning and Attrition Bias in panel Surveys By Das, J.W.M.; Toepoel, V.; Soest, A.H.O. van
  18. Power transformations in correspondence analysis By Michael Greenacre
  19. Capturing Common Components in High-Frequency Financial Time Series: A Multivariate Stochastic Multiplicative Error Model By Nikolaus Hautsch
  20. Robustness analysis and convergence of empirical finite-time ruin probabilities and estimation risk solvency margin. By Stéphane Loisel; Christian Mazza; Didier Rullière
  21. On Finite-Time Ruin Probabilities for Classical Risk Models By Claude Lefèvre; Stéphane Loisel
  22. Central Limit Theorems For Local Empirical Processes Near Boundaries of Sets By Einmahl, J.H.J.; Khmaladze, E.V.

  1. By: Christopher F Baum (Boston College); Mark E. Schaffer (Heriot-Watt University); Steven Stillman (Motu Economic and Public Policy Research)
    Abstract: We extend our 2003 paper on instrumental variables (IV) and GMM estimation and testing and describe enhanced routines that address HAC standard errors, weak instruments, LIML and k-class estimation, tests for endogeneity and RESET and autocorrelation tests for IV estimates.
    Keywords: instrumental variables, weak instruments, generalized method of moments, endogeneity, heteroskedasticity, serial correlation, HAC standard errors, LIML, CUE, overidentifying restrictions, Frisch-Waugh-Lovell theorem, RESET, Cumby-Huizinga test
    JEL: C20 C22 C23 C12 C13 C87
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:hwe:certdp:0706&r=ecm
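    The routines described in item 1 are Stata commands (ivreg2 and companions), so the sketch below is not a port of them; it is only a hedged Python analogue of the basic estimation problem they address, a 2SLS fit with heteroskedasticity-robust standard errors using the linearmodels package on simulated data. Variable names and data are illustrative.

      # Illustrative only: a 2SLS fit with robust standard errors in Python.
      # The paper's routines are Stata commands (ivreg2 and friends); this is not
      # a port of them, just the same basic estimator on simulated data.
      import numpy as np
      import pandas as pd
      from linearmodels.iv import IV2SLS

      rng = np.random.default_rng(0)
      n = 500
      z = rng.normal(size=(n, 2))                                   # instruments
      u = rng.normal(size=n)
      x = z @ np.array([0.6, 0.4]) + 0.5 * u + rng.normal(size=n)   # endogenous regressor
      y = 1.0 + 2.0 * x + u + rng.normal(size=n)

      data = pd.DataFrame({"y": y, "x": x, "z1": z[:, 0], "z2": z[:, 1]})
      data["const"] = 1.0

      # cov_type="robust" gives heteroskedasticity-robust standard errors;
      # cov_type="kernel" would give HAC standard errors instead.
      res = IV2SLS(data["y"], data[["const"]], data[["x"]], data[["z1", "z2"]]).fit(cov_type="robust")
      print(res.summary)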
  2. By: Péguin-Feissolle, Anne (GREQAM); Strikholm, Birgit (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (CREATES, School of Economics and Management)
    Abstract: In this paper we propose a general method for testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form. These tests are based on a Taylor expansion of the nonlinear model around a given point in the sample space. We study the performance of our tests by means of a Monte Carlo experiment and compare them to the most widely used linear test. Our tests appear to be well-sized and have reasonably good power properties.
    Keywords: Hypothesis testing; causality
    JEL: C22 C51
    Date: 2007–08–27
    URL: http://d.repec.org/n?u=RePEc:hhs:hastef:0672&r=ecm
  3. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Department of Economics, UCLA)
    Abstract: This paper analyzes the properties of subsampling, hybrid subsampling, and size-correction methods in two non-regular models. The latter two procedures are introduced in Andrews and Guggenberger (2005b). The models are non-regular in the sense that the test statistics of interest exhibit a discontinuity in their limit distribution as a function of a parameter in the model. The first model is a linear instrumental variables (IV) model with possibly weak IVs estimated using two-stage least squares (2SLS). In this case, the discontinuity occurs when the concentration parameter is zero. The second model is a linear regression model in which the parameter of interest may be near a boundary. In this case, the discontinuity occurs when the parameter is on the boundary. The paper shows that in the IV model one-sided and equal-tailed two-sided subsampling tests and confidence intervals (CIs) based on the 2SLS t statistic do not have correct asymptotic size. This holds for both fully- and partially-studentized t statistics. But, subsampling procedures based on the partially-studentized t statistic can be size-corrected. On the other hand, symmetric two-sided subsampling tests and CIs are shown to have (essentially) correct asymptotic size when based on a partially-studentized t statistic. Furthermore, all types of hybrid subsampling tests and CIs are shown to have correct asymptotic size in this model. The above results are consistent with "impossibility" results of Dufour (1997) because subsampling and hybrid subsampling CIs are shown to have infinite length with positive probability. Subsampling CIs for a parameter that may be near a lower boundary are shown to have incorrect asymptotic size for upper one-sided and equal-tailed and symmetric two-sided CIs. Again, size-correction is possible. In this model as well, all types of hybrid subsampling CIs are found to have correct asymptotic size.
    Keywords: Asymptotic size, Finite-sample size, Hybrid test, Instrumental variable, Over-rejection, Parameter near boundary, Size correction, Subsampling confidence interval, Subsampling test, Weak instrument
    JEL: C12 C15
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1608&r=ecm
  4. By: Perez, Marcos; Ahn, Seung Chan
    Abstract: We propose a generalized method of moments (GMM) estimator of the number of latent factors in linear factor models. The method is appropriate for panels with a large (small) number of cross-section observations and a small (large) number of time-series observations. It is robust to heteroskedasticity and time-series autocorrelation of the idiosyncratic components. All necessary procedures are similar to three-stage least squares, so they are computationally easy to use. In addition, the method can be used to determine which observable variables are correlated with the latent factors without estimating them. Our Monte Carlo experiments show that the proposed estimator has good finite-sample properties. As an application of the method, we estimate the number of factors in the US stock market. Our results indicate that US stock returns are explained by three factors. One of the three latent factors is not captured by the factors proposed by Chen, Roll and Ross (1986) and Fama and French (1996).
    Keywords: Factor models; GMM; number of factors; asset pricing
    JEL: C10 G12 C13 C33
    Date: 2007–09–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4862&r=ecm
  5. By: Kyungchul Song (Department of Economics, University of Pennsylvania)
    Abstract: This paper investigates the problem of testing conditional independence of Y and Z given λθ(X) for some unknown θ ∈ Θ ⊂ R^d, for a parametric function λθ(·). Such a problem is relevant, for instance, in the recent literature on heterogeneous treatment effects and in contract theory. First, this paper finds that, by using Rosenblatt transforms in a certain way, we can construct a class of tests that are asymptotically pivotal and asymptotically unbiased against √n-converging Pitman local alternatives. The asymptotic pivotalness is convenient especially because the asymptotic critical values remain invariant over different estimators of the unknown parameter θ. Even when tests are asymptotically pivotal, however, simulation methods to obtain asymptotic critical values are often not yet available or are complicated, and hence this paper suggests a simple wild bootstrap procedure. A special case of the proposed testing framework is to test the presence of quantile treatment effects in a program evaluation data set. Using the JTPA training data set, we investigate the validity of nonexperimental procedures for inferences about quantile treatment effects of the job training program.
    Keywords: Conditional independence, asymptotic pivotal tests, Rosenblatt transforms, wild bootstrap
    JEL: C12 C14 C52
    Date: 2007–09–05
    URL: http://d.repec.org/n?u=RePEc:pen:papers:07-026&r=ecm
  6. By: Xavier Gabaix; Rustam Ibragimov
    Abstract: Despite the availability of more sophisticated methods, a popular way to estimate a Pareto exponent is still to run an OLS regression: log(Rank) = a - b log(Size), and take b as an estimate of the Pareto exponent. The reason for this popularity is arguably the simplicity and robustness of the method. Unfortunately, this procedure is strongly biased in small samples. We provide a simple practical remedy for this bias, and propose that, if one wants to use an OLS regression, one should use Rank - 1/2, and run log(Rank - 1/2) = a - b log(Size). The shift of 1/2 is optimal, and reduces the bias to leading order. The standard error on the Pareto exponent zeta is not the OLS standard error, but is asymptotically (2/n)^(1/2) zeta. Numerical results demonstrate the advantage of the proposed approach over standard OLS estimation procedures and indicate that it performs well under dependent heavy-tailed processes exhibiting deviations from power laws. The estimation procedures considered are illustrated using an empirical application to Zipf's law for the U.S. city size distribution.
    JEL: C13
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberte:0342&r=ecm
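    A minimal sketch of the Rank-1/2 recipe in item 6 above: regress log(Rank - 1/2) on log(Size) and use (2/n)^(1/2) times the estimated exponent as the asymptotic standard error. The simulated Pareto data and variable names are illustrative, not from the paper.

      # Minimal sketch of the Rank-1/2 estimator from item 6: regress
      # log(rank - 1/2) on log(size); the asymptotic standard error of the
      # tail-exponent estimate is (2/n)^0.5 * zeta_hat, not the OLS one.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 1000
      zeta_true = 1.5
      size = rng.pareto(zeta_true, size=n) + 1.0       # classical Pareto(zeta) sample

      size_sorted = np.sort(size)[::-1]                # largest observation gets rank 1
      rank = np.arange(1, n + 1)

      # OLS of log(rank - 1/2) on log(size): minus the slope estimates zeta.
      x = np.log(size_sorted)
      y = np.log(rank - 0.5)
      slope, intercept = np.polyfit(x, y, 1)
      zeta_hat = -slope
      se = np.sqrt(2.0 / n) * zeta_hat                 # asymptotic standard error

      print(f"zeta_hat = {zeta_hat:.3f}, se = {se:.3f}")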
  7. By: Lind, Jo Thori; Mehlum, Halvor
    Abstract: Non-linear relationships are common in economic theory, and such relationships are also frequently tested empirically. We argue that the usual test of non-linear relationships is flawed, and derive the appropriate test for a U shaped relationship. Our test gives the exact necessary and sufficient conditions for the test of a U shape in both finite samples and for a large class of models.
    Keywords: U shape; hypothesis test; Kuznets curve; Fieller interval
    JEL: C12 C20
    Date: 2007–09–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4823&r=ecm
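    A hedged sketch of the core idea behind item 7's U-shape test: in a quadratic specification, evidence of a U shape requires the fitted slope to be significantly negative at the lower end of the data range and significantly positive at the upper end, an intersection-union test. The full procedure, including the Fieller interval for the turning point, is in the paper; the data and names below are illustrative.

      # Skeleton of the U-shape test idea from item 7: sign and significance of
      # the fitted slope at both ends of the data range in a quadratic model.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 300
      x = rng.uniform(0, 10, size=n)
      y = (x - 4.0) ** 2 + rng.normal(scale=3.0, size=n)   # true U shape

      X = sm.add_constant(np.column_stack([x, x ** 2]))
      fit = sm.OLS(y, X).fit()
      b1, b2 = fit.params[1], fit.params[2]
      V = fit.cov_params()

      def slope_t(x0):
          """t-statistic of the fitted slope b1 + 2*b2*x0 at the point x0."""
          grad = np.array([0.0, 1.0, 2.0 * x0])
          slope = b1 + 2.0 * b2 * x0
          se = np.sqrt(grad @ V @ grad)
          return slope / se

      t_low, t_high = slope_t(x.min()), slope_t(x.max())
      # A U shape requires t_low significantly negative AND t_high significantly
      # positive; the intersection-union test statistic is min(-t_low, t_high).
      print(f"t at x_min = {t_low:.2f}, t at x_max = {t_high:.2f}")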
  8. By: Markus Frölich
    Abstract: In this paper, the regression discontinuity design (RDD) is generalized to account for differences in observed covariates X in a fully nonparametric way. It is shown that the treatment effect can be estimated at the rate for one-dimensional nonparametric regression irrespective of the dimension of X. It thus extends the analysis of Hahn, Todd and van der Klaauw (2001) and Porter (2003), who examined identification and estimation without covariates, requiring assumptions that may often be too strong in applications. In many applications, individuals to the left and right of the threshold differ in observed characteristics. Houses may be constructed in different ways across school attendance district boundaries. Firms may differ around a threshold that implies certain legal changes, etc. Accounting for these differences in covariates is important to reduce bias. In addition, accounting for covariates may also reduce variance. Finally, estimation of quantile treatment effects (QTE) is also considered.
    Keywords: Treatment effect, causal effect, complier, LATE, nonparametric regression, endogeneity
    JEL: C13 C14 C21
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:usg:dp2007:2007-32&r=ecm
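    A minimal sketch of the baseline sharp RDD estimator that item 8 generalizes (the covariate-free local linear estimator of Hahn, Todd and van der Klaauw): local linear regressions on each side of the cutoff, with the treatment effect given by the difference of the two intercepts at the threshold. The bandwidth, kernel, and data below are illustrative choices; the paper's nonparametric covariate adjustment is not implemented.

      # Baseline sharp RDD (without covariates): local linear fits on each side
      # of the cutoff with a triangular kernel; effect = jump in the intercepts.
      import numpy as np

      rng = np.random.default_rng(3)
      n, cutoff, bw = 2000, 0.0, 0.5
      z = rng.uniform(-1, 1, size=n)                   # running variable
      d = (z >= cutoff).astype(float)                  # sharp treatment assignment
      y = 0.4 * z + 1.0 * d + rng.normal(scale=0.5, size=n)

      def local_linear_at_cutoff(side_mask):
          """Kernel-weighted linear fit of y on (z - cutoff) on one side; return the intercept."""
          zz, yy = z[side_mask] - cutoff, y[side_mask]
          w = np.maximum(0.0, 1.0 - np.abs(zz) / bw)   # triangular kernel weights
          X = np.column_stack([np.ones_like(zz), zz])
          W = np.diag(w)
          beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ yy)
          return beta[0]

      tau_hat = local_linear_at_cutoff(z >= cutoff) - local_linear_at_cutoff(z < cutoff)
      print(f"estimated jump at the cutoff: {tau_hat:.3f}  (true effect 1.0)")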
  9. By: Andreas Beyer; Roger E. A. Farmer; Jérôme Henry; Massimiliano Marcellino
    Abstract: DSGE models are characterized by the presence of expectations as explanatory variables. To use these models for policy evaluation, the econometrician must estimate the parameters of expectation terms. Standard estimation methods have several drawbacks, including possible lack or weakness of identification of the parameters, misspecification of the model due to omitted variables or parameter instability, and the common use of inefficient estimation methods. Several authors have raised concerns over the implications of using inappropriate instruments to achieve identification. In this paper we analyze the practical relevance of these problems and we propose to combine factor analysis for information extraction from large data sets and GMM to estimate the parameters of systems of forward-looking equations. Using these techniques, we evaluate the robustness of recent findings on the importance of forward-looking components in the equations of a standard New-Keynesian model.
    JEL: E5 E52 E58
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:13404&r=ecm
  10. By: Stéphane Loisel (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - [Université Claude Bernard - Lyon I]); Christian Mazza (Département de Mathématiques - [Université de Fribourg]); Didier Rullière (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - [Université Claude Bernard - Lyon I])
    Abstract: The classical risk model is considered and a sensitivity analysis of finite-time ruin probabilities is carried out. We prove the weak convergence of a sequence of empirical finite-time ruin probabilities. So-called partly shifted risk processes are introduced, and used to derive an explicit expression of the asymptotic variance of the considered estimator. This provides a clear representation of the influence function associated with finite time ruin probabilities, giving a useful tool to quantify estimation risk according to new regulations.
    Keywords: Finite-time ruin probability; robustness; Solvency II; reliable ruin probability; asymptotic normality; influence function; partly shifted risk process; Estimation Risk Solvency Margin (ERSM)
    Date: 2007–08–29
    URL: http://d.repec.org/n?u=RePEc:hal:papers:hal-00168716_v1&r=ecm
  11. By: Xiaohong Chen (Cowles Foundation, Yale University); Markus Reiss (University of Heidelberg)
    Abstract: In this paper, we clarify the relations between the existing sets of regularity conditions for convergence rates of nonparametric indirect regression (NPIR) and nonparametric instrumental variables (NPIV) regression models. We establish minimax risk lower bounds in mean integrated squared error loss for the NPIR and the NPIV models under two basic regularity conditions that allow for both mildly ill-posed and severely ill-posed cases. We show that both a simple projection estimator for the NPIR model, and a sieve minimum distance estimator for the NPIV model, can achieve the minimax risk lower bounds, and are rate-optimal uniformly over a large class of structure functions, allowing for mildly ill-posed and severely ill-posed cases.
    Keywords: Nonparametric instrumental regression, Nonparametric indirect regression, Statistical ill-posed inverse problems, Minimax risk lower bound, Optimal rate
    JEL: C14 C30
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1626&r=ecm
  12. By: Erik Hjalmarsson; Pär Österholm
    Abstract: We investigate the properties of Johansen's (1988, 1991) maximum eigenvalue and trace tests for cointegration under the empirically relevant situation of near-integrated variables. Using Monte Carlo techniques, we show that in a system with near-integrated variables, the probability of reaching an erroneous conclusion regarding the cointegrating rank of the system is generally substantially higher than the nominal size. The risk of concluding that completely unrelated series are cointegrated is therefore non-negligible. The spurious rejection rate can be reduced by performing additional tests of restrictions on the cointegrating vector(s), although it remains substantially larger than the nominal size.
    Date: 2007–06–22
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:07/141&r=ecm
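    A hedged sketch in the spirit of the Monte Carlo experiment in item 12: simulate two independent near-integrated AR(1) series, apply the Johansen trace test, and record how often a cointegrating rank of at least one is spuriously found at the nominal 5% level. It uses coint_johansen from statsmodels; the sample size, AR root, lag order, and number of replications are illustrative choices, not the paper's design.

      # Two *independent* near-integrated AR(1) series, Johansen trace test, and
      # the frequency with which rank >= 1 is (spuriously) concluded at the 5% level.
      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      rng = np.random.default_rng(4)
      T, rho, n_rep = 200, 0.95, 500
      rejections = 0

      for _ in range(n_rep):
          e = rng.normal(size=(T, 2))
          y = np.zeros((T, 2))
          for t in range(1, T):
              y[t] = rho * y[t - 1] + e[t]             # two unrelated near-unit-root series
          res = coint_johansen(y, det_order=0, k_ar_diff=1)
          # res.lr1 holds the trace statistics; res.cvt[:, 1] the 5% critical values.
          if res.lr1[0] > res.cvt[0, 1]:
              rejections += 1

      print(f"spurious rejection rate of rank = 0: {rejections / n_rep:.3f} (nominal 0.05)")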
  13. By: Johansson, Fredrik (Department of Economics)
    Abstract: When a survey response mechanism depends on the variable of interest measured within the same survey and observed for only part of the sample, the situation is one of nonignorable nonresponse. Ignoring the nonresponse is likely to generate significant bias in the estimates. To solve this, one option is the joint modelling of the response mechanism and the variable of interest. Another option is to calibrate each observation with weights constructed from auxiliary data. In an application where earnings equations are estimated these approaches are compared to reference estimates based on large a Swedish register based data set without nonresponse.
    Keywords: Earning equations; Nonignorable response mechanism; Calibration; Selection; Full-information maximum likelihood
    JEL: C15 C24 C34 C42 J31
    Date: 2007–08–22
    URL: http://d.repec.org/n?u=RePEc:hhs:uunewp:2007_022&r=ecm
  14. By: Agustín Maravall (Banco de España); Ana del Río (Banco de España)
    Abstract: Maravall and del Río (2001) analyzed the time aggregation properties of the Hodrick-Prescott (HP) filter, which decomposes a time series into trend and cycle, for the case of annual, quarterly, and monthly data, and showed that the aggregate of the disaggregate component cannot be obtained exactly by direct application of an HP filter to the aggregate series. The present paper shows how, using several criteria, one can find HP decompositions for different levels of aggregation that provide similar results. We use as the main criterion for aggregation the preservation of the period associated with the frequency for which the filter gain is ½; this criterion is intuitive and easy to apply. It is shown that the Ravn and Uhlig (2002) empirical rule turns out to be a first-order approximation to our criterion, and that alternative, more complex, criteria yield similar results. Moreover, the values of the parameter λ of the HP filter that provide approximately consistent results under aggregation are considerably robust with respect to the ARIMA model of the series. Aggregation is seen to work better for the case of temporal aggregation than for systematic sampling. Still, a word of caution is given concerning the desirability of exact aggregation consistency. The paper concludes with a clarification concerning the questionable spuriousness of the cycles obtained with the HP filter.
    Keywords: Time series, Filtering and Smoothing, Time aggregation, Trend estimation, Business cycles, ARIMA models
    JEL: C22 C43 C82 E32 E66
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:0728&r=ecm
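    The main criterion in item 14 is to preserve the period at which the HP filter gain equals ½. For the standard HP filter the detrending gain at frequency w is 4λ(1 − cos w)² / (1 + 4λ(1 − cos w)²), so the gain equals ½ exactly where w = arccos(1 − 1/(2√λ)). The sketch below computes the implied period for the usual quarterly value λ = 1600 and for the Ravn-Uhlig annual and monthly values; it is a back-of-the-envelope check of the criterion, not the paper's full analysis.

      # Period at which the HP filter gain equals 1/2, for common lambda values.
      import numpy as np

      def half_gain_period(lam):
          """Period (in observations) at which the HP filter gain equals 1/2."""
          w = np.arccos(1.0 - 1.0 / (2.0 * np.sqrt(lam)))
          return 2.0 * np.pi / w

      for label, lam, per_year in [("quarterly, lam=1600", 1600, 4),
                                   ("annual,    lam=6.25 (Ravn-Uhlig)", 6.25, 1),
                                   ("monthly,   lam=129600 (Ravn-Uhlig)", 129600, 12)]:
          p = half_gain_period(lam)
          print(f"{label}: half-gain period = {p:6.1f} obs = {p / per_year:5.2f} years")
      # All three implied periods are close to ten years, which is the sense in
      # which these lambda values are approximately consistent under aggregation.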
  15. By: Siem, A.Y.D.; Hertog, D. den (Tilburg University, Center for Economic Research)
    Abstract: In the field of the Design and Analysis of Computer Experiments (DACE) meta-models are used to approximate time-consuming simulations. These simulations often contain simulation-model errors in the output variables. In the construction of meta-models, these errors are often ignored. Simulation-model errors may be magnified by the meta-model. Therefore, in this paper, we study the construction of Kriging models that are robust with respect to simulation-model errors. We introduce a robustness criterion, to quantify the robustness of a Kriging model. Based on this robustness criterion, two new methods to find robust Kriging models are introduced. We illustrate these methods with the approximation of the Six-hump camel back function and a real life example. Furthermore, we validate the two methods by simulating artificial perturbations. Finally, we consider the influence of the Design of Computer Experiments (DoCE) on the robustness of Kriging models.
    Keywords: Kriging;robustness;simulation-model error
    JEL: C60
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200768&r=ecm
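    The robustness criterion and the two methods in item 15 are specific to the paper and not reproduced here; the hedged sketch below only illustrates the setting they address: Kriging of simulation output that contains simulation-model error, comparing an exactly interpolating fit with one that allows for a noise (nugget) term. It uses scikit-learn's GaussianProcessRegressor, and the test function, noise level, and kernels are illustrative assumptions.

      # The setting of item 15, not the paper's method: Kriging of noisy simulation
      # output. An exactly interpolating Kriging model chases the simulation error,
      # while adding a nugget (WhiteKernel) makes the fit less sensitive to it.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(5)
      x_train = np.linspace(0, 10, 20).reshape(-1, 1)
      y_clean = np.sin(x_train).ravel()
      y_noisy = y_clean + rng.normal(scale=0.2, size=y_clean.shape)  # simulation-model error

      x_test = np.linspace(0, 10, 200).reshape(-1, 1)
      y_true = np.sin(x_test).ravel()

      # Interpolating Kriging model (no allowance for simulation error).
      gp_interp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-10)
      gp_interp.fit(x_train, y_noisy)

      # Kriging model with a nugget term absorbing the simulation error.
      gp_nugget = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.04))
      gp_nugget.fit(x_train, y_noisy)

      for name, gp in [("interpolating", gp_interp), ("with nugget", gp_nugget)]:
          rmse = np.sqrt(np.mean((gp.predict(x_test) - y_true) ** 2))
          print(f"{name:14s} Kriging, RMSE against the error-free response: {rmse:.3f}")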
  16. By: Ivan Jeliazkov (Department of Economics, University of California-Irvine); Dale J. Poirier (Department of Economics, University of California-Irvine)
    Abstract: This paper analyzes the daily incidence of violence during the Second Intifada. We compare several alternative statistical models with different dynamic and structural stability characteristics while keeping modelling complexity to a minimum by only maintaining the assumption that the process under consideration is at most a second order discrete Markov process. For the pooled data, the best model is one with asymmetric dynamics, where one Israeli and two Palestinian lags determine the conditional probability of violence. However, when we allow for structural change, the evidence strongly favors the hypothesis of structural instability across political regime sub-periods, within which dynamics are generally weak.
    Keywords: Bayesian; Conjugate prior; Israeli-Palestinian conflict; Marginal likelihood
    JEL: C1 C2
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:irv:wpaper:070801&r=ecm
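    A hedged sketch of the basic modelling device in item 16: for a binary violence indicator, an at most second-order Markov process is summarized by the conditional probabilities of violence given the previous two days, which can be estimated by simple relative frequencies. The paper's Bayesian treatment (conjugate priors, marginal-likelihood model comparison, structural breaks) is not reproduced; the simulated series below is illustrative.

      # Conditional probabilities of a binary indicator given its two lags
      # (a second-order Markov chain), estimated by relative frequencies.
      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(6)
      T = 1000
      v = np.zeros(T, dtype=int)
      for t in range(2, T):                            # simulate a persistent binary series
          p = 0.2 + 0.4 * v[t - 1] + 0.2 * v[t - 2]
          v[t] = rng.random() < p

      counts, totals = Counter(), Counter()
      for t in range(2, T):
          state = (v[t - 2], v[t - 1])
          totals[state] += 1
          counts[state] += v[t]

      for state in sorted(totals):
          print(f"P(violence_t = 1 | lags {state}) = {counts[state] / totals[state]:.3f}"
                f"   (n = {totals[state]})")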
  17. By: Das, J.W.M.; Toepoel, V.; Soest, A.H.O. van (Tilburg University, Center for Economic Research)
    Abstract: Over the past decades there has been an increasing use of panel surveys at the household or individual level, instead of independent cross-sections. Panel data have important advantages, but there are also two potential drawbacks: attrition bias and panel conditioning effects. Attrition bias can arise if respondents drop out of the panel non-randomly, i.e., when attrition is correlated with a variable of interest. Panel conditioning arises if responses in one wave are influenced by participation in the previous wave(s). The experience of the previous interview(s) may affect the answers of respondents in a next interview on the same topic, such that their answers differ systematically from the answers of individuals who are interviewed for the first time. The literature has mainly focused on estimating attrition bias; less is known about panel conditioning effects. In this study we discuss how to disentangle the total bias in panel surveys into a panel conditioning effect and an attrition effect, and develop a test for panel conditioning allowing for non-random attrition. First, we consider a fully nonparametric approach without any assumptions other than those on the sample design, leading to interval identification of the measures for the attrition and panel conditioning effects. Second, we analyze the proposed measures under additional assumptions concerning the attrition process, making it possible to obtain point estimates and standard errors for both the attrition bias and the panel conditioning effect. We illustrate our method on a variety of questions from two-wave surveys conducted in a Dutch household panel. We find a significant bias due to panel conditioning in knowledge questions, but not in other types of questions. The examples show that the bounds can be informative if the attrition rate is not too high. Point estimates of the panel conditioning effect do not vary much with the different assumptions on the attrition process.
    Keywords: panel conditioning;attrition bias;measurement error;panel surveys
    JEL: C42 C81 C93
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200756&r=ecm
  18. By: Michael Greenacre
    Abstract: Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis – this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is the values in the table relative to expected values based on the marginals – this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
    Keywords: Box-Cox transformation, chi-square distance, contingency ratio, correspondence analysis, log-ratio analysis, power transformation, ratio data, singular value decomposition, spectral map
    JEL: C19 C88
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1044&r=ecm
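    Item 18's first variant is simply: raise the entries of the positive table to a power alpha and run ordinary correspondence analysis; as alpha tends to zero the (suitably rescaled) solution approaches unweighted log-ratio analysis. A minimal sketch of that recipe with a textbook SVD-based correspondence analysis and made-up data; the table and the chosen powers are illustrative, and the rescaling and the second (contingency-ratio) variant are left to the paper.

      # CA of a power-transformed table: power the entries, then textbook simple
      # correspondence analysis via the SVD of the standardized residual matrix.
      import numpy as np

      def ca_row_coordinates(N):
          """Principal row coordinates of simple correspondence analysis of table N."""
          P = N / N.sum()
          r, c = P.sum(axis=1), P.sum(axis=0)
          S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
          U, s, Vt = np.linalg.svd(S, full_matrices=False)
          return np.diag(r ** -0.5) @ U @ np.diag(s)

      rng = np.random.default_rng(7)
      table = rng.integers(1, 50, size=(6, 4)).astype(float)   # made-up positive table

      for alpha in (1.0, 0.5, 0.25):
          F = ca_row_coordinates(table ** alpha)
          print(f"alpha = {alpha}: first-axis row coordinates", np.round(F[:, 0], 3))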
  19. By: Nikolaus Hautsch (Humboldt University Berlin and CFS)
    Abstract: We introduce a multivariate multiplicative error model which is driven by component-specific observation-driven dynamics as well as a common latent autoregressive factor. The model is designed to explicitly account for (information-driven) common factor dynamics as well as idiosyncratic effects in the processes of high-frequency return volatilities, trade sizes and trading intensities. The model is estimated by simulated maximum likelihood using efficient importance sampling. Analyzing five-minute data from four liquid stocks traded at the New York Stock Exchange, we find that volatilities, volumes and intensities are driven by idiosyncratic dynamics as well as a highly persistent common factor capturing most causal relations and cross-dependencies between the individual variables. This confirms economic theory and suggests more parsimonious specifications of high-dimensional trading processes. It turns out that common shocks affect the return volatility and the trading volume rather than the trading intensity.
    Keywords: Net Foreign Assets; Valuation Adjustment; International Financial Integration
    JEL: C15 C32 C52
    Date: 2007–09–04
    URL: http://d.repec.org/n?u=RePEc:cfs:cfswop:wp200725&r=ecm
  20. By: Stéphane Loisel (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - [Université Claude Bernard - Lyon I]); Christian Mazza (Département de Mathématiques - [Université de Fribourg]); Didier Rullière (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - [Université Claude Bernard - Lyon I])
    Abstract: We consider the classical risk model and carry out a sensitivity and robustness analysis of finite-time ruin probabilities. We provide algorithms to compute the related influence functions. We also prove the weak convergence of a sequence of empirical finite-time ruin probabilities starting from zero initial reserve toward a Gaussian random variable. We define the concept of a reliable finite-time ruin probability as a Value-at-Risk of the estimator of the finite-time ruin probability. To control this robust risk measure, an additional initial reserve is needed, called the Estimation Risk Solvency Margin (ERSM). We apply our results to show how portfolio experience could be rewarded by cut-offs in solvency capital requirements. An application to catastrophe contamination and numerical examples are also developed.
    Keywords: Finite-time ruin probability; robustness; Solvency II; reliable ruin probability; asymptotic Normality; influence function; Estimation Risk Solvency Margin (ERSM)
    Date: 2007–08–29
    URL: http://d.repec.org/n?u=RePEc:hal:papers:hal-00168714_v1&r=ecm
  21. By: Claude Lefèvre (Département de Mathématique - [Université Libre de Bruxelles]); Stéphane Loisel (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - [Université Claude Bernard - Lyon I])
    Abstract: This paper is concerned with the problem of ruin in the classical compound binomial and compound Poisson risk models. Our primary purpose is to extend to those models an exact formula derived by Picard and Lefèvre (1997) for the probability of (non-)ruin within finite time. First, a standard method based on the ballot theorem and an argument of Seal-type provides an initial (known) formula for that probability. Then, a concept of pseudo-distributions for the cumulated claim amounts, combined with some simple implications of the ballot theorem, leads to the desired formula. Two expressions for the (non-)ruin probability over an infinite horizon are also deduced as corollaries. Finally, an illustration within the framework of Solvency II is briefly presented.
    Keywords: ruin probability; finite and infinite horizon; compound binomial model; compound Poisson model; ballot theorem; pseudo-distributions; Solvency II; Value-at-Risk.
    Date: 2007–08–31
    URL: http://d.repec.org/n?u=RePEc:hal:papers:hal-00168958_v1&r=ecm
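    Items 10, 20 and 21 all revolve around finite-time ruin probabilities in the classical compound Poisson risk model, U(t) = u + c t − S(t) with S(t) a compound Poisson sum of claims. The exact Picard-Lefèvre-type formulas of item 21 are not reproduced here; the hedged sketch below only estimates a finite-time ruin probability by crude Monte Carlo, to make the quantity concrete. All parameter values (initial reserve, premium rate, claim distribution, horizon) are illustrative.

      # Crude Monte Carlo estimate of a finite-time ruin probability in the classical
      # compound Poisson risk model U(t) = u + c*t - S(t). Not the exact Picard-Lefevre
      # formula of item 21; all parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(8)
      u, c, lam, mean_claim, horizon = 5.0, 1.2, 1.0, 1.0, 10.0   # 20% premium loading
      n_rep = 20000
      ruined = 0

      for _ in range(n_rep):
          t, claims = 0.0, 0.0
          while True:
              t += rng.exponential(1.0 / lam)          # time of the next claim
              if t > horizon:
                  break
              claims += rng.exponential(mean_claim)    # exponential claim sizes
              if u + c * t - claims < 0:               # ruin can only occur at claim times
                  ruined += 1
                  break

      print(f"estimated P(ruin before T={horizon}) with u={u}: {ruined / n_rep:.4f}")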
  22. By: Einmahl, J.H.J.; Khmaladze, E.V. (Tilburg University, Center for Economic Research)
    Abstract: AMS 2000 subject classifications. 60F05, 60F17, 60G55, 62G30.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200766&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.