
New Economics Papers on Econometrics
By:  Xiaohong Chen (Institute for Fiscal Studies and Yale); Yingyao Hu; Arthur Lewbel (Institute for Fiscal Studies and Boston College) 
Abstract:  This paper considers identification and estimation of a nonparametric regression model with an unobserved discrete covariate. The sample consists of a dependent variable and a set of covariates, one of which is discrete and arbitrarily correlated with the unobserved covariate. The observed discrete covariate has the same support as the unobserved covariate, and can be interpreted as a proxy or mismeasure of the unobserved one, but with a nonclassical measurement error that has an unknown distribution. We obtain nonparametric identification of the model given monotonicity of the regression function and a rank condition that is directly testable given the data. Our identification strategy does not require additional sample information, such as instrumental variables or a secondary sample. We then estimate the model via the method of sieve maximum likelihood, and provide root-n asymptotic normality and semiparametric efficiency of smooth functionals of interest. Two small simulations are presented to illustrate the identification and estimation results.
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:18/07&r=ecm 
By:  Jerry Hausman (Institute for Fiscal Studies and Massachusetts Institute of Technology); Whitney Newey (Institute for Fiscal Studies and Massachusetts Institute of Technology); Tiemen Woutersen (Institute for Fiscal Studies and Johns Hopkins University); John Chao; Norman Swanson
Abstract:  It is common practice in econometrics to correct for heteroskedasticity. This paper corrects instrumental variables estimators with many instruments for heteroskedasticity. We give heteroskedasticity-robust versions of the limited information maximum likelihood (LIML) and Fuller (1977, FULL) estimators, as well as heteroskedasticity-consistent standard errors for them. The estimators are based on removing the own-observation terms in the numerator of the LIML variance ratio. We derive asymptotic properties of the estimators under many-instrument and many-weak-instrument setups. Based on a series of Monte Carlo experiments, we find that the estimators perform as well as LIML or FULL under homoskedasticity, and have much lower bias and dispersion under heteroskedasticity, in nearly all cases considered.
JEL:  C12 C13 C14 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:22/07&r=ecm 
By:  Kuswanto, Heri; Sibbertsen, Philipp 
Abstract:  We show that specific nonlinear time series models such as SETAR, LSTAR, ESTAR and Markov switching, which are common in econometric practice, can hardly be distinguished from long memory by standard methods such as the GPH estimator for the memory parameter, or by linearity tests, whether general or against a specific nonlinear model. We show by Monte Carlo that under certain conditions the nonlinear data generating process can exhibit spurious stationary or nonstationary long memory properties.
Keywords:  Nonlinear models, long-range dependencies
JEL:  C12 C22 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp380&r=ecm 
By:  Xiaohong Chen (Institute for Fiscal Studies and Yale); Yingyao Hu; Arthur Lewbel (Institute for Fiscal Studies and Boston College) 
Abstract:  This note considers nonparametric identification of a general nonlinear regression model with a dichotomous regressor subject to misclassification error. The available sample information consists of a dependent variable and a set of regressors, one of which is binary and error-ridden, with misclassification error that has an unknown distribution. Our identification strategy does not parameterize any regression or distribution functions, and does not require additional sample information such as instrumental variables, repeated measurements, or an auxiliary sample. Our main identifying assumption is that the regression model error has zero conditional third moment. The results include a closed-form solution for the unknown distributions and the regression function.
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:17/07&r=ecm 
By:  Muhammad Akram; Rob J. Hyndman; J. Keith Ord 
Abstract:  We consider the properties of nonlinear exponential smoothing state space models under various assumptions about the innovations, or error, process. Our interest is restricted to those models that are used to describe nonnegative observations, because many series of practical interest are so constrained. We first demonstrate that when the innovations process is assumed to be Gaussian, the resulting prediction distribution may have an infinite variance beyond a certain forecasting horizon. Further, such processes may converge almost surely to zero; an examination of purely multiplicative models reveals the circumstances under which this condition arises. We then explore the effects of using an (invalid) Gaussian distribution to describe the innovations process when the underlying distribution is lognormal. Our results suggest that this approximation causes no serious problems for parameter estimation or for forecasting one or two steps ahead. However, for longer-term forecasts the true prediction intervals become increasingly skewed, whereas those based on the Gaussian approximation may have a progressively larger negative component. In addition, the Gaussian approximation is clearly inappropriate for simulation purposes. The performance of the Gaussian approximation is compared with those of two lognormal models for short-term forecasting using data on the weekly sales of over three hundred items of costume jewelry.
Keywords:  Forecasting; time series; exponential smoothing; positive-valued processes; seasonality; state space models.
JEL:  C53 C22 C51 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200714&r=ecm 
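The positivity issue discussed in the abstract above can be seen in a small simulation. This is a minimal sketch, not the authors' code; the model form, parameter values and function names are illustrative. In a purely multiplicative local-level model, Gaussian innovations can make the factor 1 + alpha*e negative, whereas shifted-lognormal innovations keep every factor, and hence the level, strictly positive.

```python
import math
import random

def simulate_level(n, draw, alpha=0.3, l0=10.0, seed=42):
    """Level path of a purely multiplicative local-level model:
    y_t = l_{t-1} * (1 + e_t),  l_t = l_{t-1} * (1 + alpha * e_t)."""
    rng = random.Random(seed)
    level = l0
    path = [level]
    for _ in range(n):
        level *= 1.0 + alpha * draw(rng)
        path.append(level)
    return path

# Gaussian innovations: 1 + alpha*e can go negative, so the level may
# change sign -- the source of the problems described in the abstract.
gaussian_path = simulate_level(500, lambda r: r.gauss(0.0, 1.5))

# Shifted-lognormal innovations: e = exp(z) - 1 > -1 by construction,
# so every multiplicative factor is positive and the level stays positive.
lognormal_path = simulate_level(500, lambda r: math.exp(r.gauss(0.0, 0.5)) - 1.0)
```

The lognormal path is positive by construction, which is the kind of constraint-respecting specification the abstract argues for.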
By:  Sibbertsen, Philipp; Kruse, Robinson 
Abstract:  We show that tests for a break in the persistence of a time series in the classical I(0)-I(1) framework have serious size distortions when the actual data generating process exhibits long-range dependencies. We prove that the limiting distribution of a CUSUM of squares based test depends on the true memory parameter if the DGP exhibits long memory. We propose adjusted critical values for the test and give finite sample response curves which allow the practitioner to easily implement the test and to compute the relevant critical values. We furthermore prove consistency of the test, and of a simple break point estimator, under long memory. We show that the test has satisfactory power properties when the correct critical values are used.
Keywords:  break in persistence, long memory, CUSUM of squares based test
JEL:  C12 C22 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp381&r=ecm 
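A CUSUM of squares statistic of the kind the abstract above builds on can be sketched in a few lines. This is a generic textbook form, not the paper's test, and the adjusted long-memory critical values from the paper's response curves are not reproduced here.

```python
def cusum_of_squares(x):
    """Sup-norm CUSUM of squares statistic: max_k | S_k / S_T - k/T |,
    where S_k is the cumulative sum of squared observations."""
    total = sum(v * v for v in x)
    n = len(x)
    running, stat = 0.0, 0.0
    for k, v in enumerate(x, start=1):
        running += v * v
        stat = max(stat, abs(running / total - k / n))
    return stat

# A constant-variance series tracks the diagonal k/T exactly ...
no_break = cusum_of_squares([1.0, -1.0] * 50)
# ... while a variance break pulls the cumulative sum of squares off it.
with_break = cusum_of_squares([1.0, -1.0] * 25 + [10.0, -10.0] * 25)
```

Under long memory, as the abstract notes, the null distribution of such a statistic shifts with the memory parameter, so the usual critical values are invalid.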
By:  Christian Hansen (Institute for Fiscal Studies and Chicago GSB); James B. McDonald; Whitney Newey (Institute for Fiscal Studies and Massachusetts Institute of Technology)
Abstract:  Instrumental variables are often associated with low estimator precision. This paper explores efficiency gains which might be achievable using moment conditions which are nonlinear in the disturbances and are based on flexible parametric families for error distributions. We show that these estimators can achieve the semiparametric efficiency bound when the true error distribution is a member of the parametric family. Monte Carlo simulations demonstrate low efficiency loss in the case of normal error distributions and potentially significant efficiency improvements in the case of thick-tailed and/or skewed error distributions.
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:21/07&r=ecm 
By:  D'Agostino, Antonello; Giannone, Domenico 
Abstract:  This paper compares the predictive ability of the factor models of Stock and Watson (2002) and Forni, Hallin, Lippi, and Reichlin (2005) using a large panel of US macroeconomic variables. We propose a nesting procedure of comparison that clarifies and partially overturns the results of similar exercises in the literature. Our main conclusion is that for the dataset at hand the two methods have a similar performance and produce highly collinear forecasts. 
Keywords:  Factor Models; Forecasting; Large Cross-Section
JEL:  C31 C52 C53 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:6564&r=ecm 
By:  Villani, Mattias (Research Department, Central Bank of Sweden); Kohn, Robert (Faculty of Business, University of New South Wales); Giordani, Paolo (Research Department, Central Bank of Sweden) 
Abstract:  We model a regression density nonparametrically so that at each value of the covariates the density is a mixture of normals with the means, variances and mixture probabilities of the components changing smoothly as a function of the covariates. The model extends existing models in two important ways. First, the components are allowed to be heteroscedastic regressions, as the standard model with homoscedastic regressions can give a poor fit to heteroscedastic data, especially when the number of covariates is large. Furthermore, we typically need far fewer heteroscedastic components, which makes it easier to interpret the model and speeds up the computation. The second main extension is to introduce a novel variable selection prior into all the components of the model. The variable selection prior acts as a self-adjusting mechanism that prevents overfitting and makes it feasible to fit high-dimensional nonparametric surfaces. We use Bayesian inference and Markov Chain Monte Carlo methods to estimate the model. Simulated and real examples are used to show that the full generality of our model is required to fit a large class of densities.
Keywords:  Bayesian inference; Markov Chain Monte Carlo; Mixture of Experts; Predictive inference; Splines; Value-at-Risk; Variable selection
JEL:  E50 
Date:  2007–09–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0211&r=ecm 
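The kind of smoothly varying mixture-of-normals regression density described above can be sketched as follows. The two-component forms for the means, variances and gating weights are hypothetical stand-ins; the paper's actual components are spline-based heteroscedastic regressions estimated by MCMC.

```python
import math

def normal_pdf(y, mean, sd):
    """Density of a normal distribution at y."""
    return math.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def conditional_density(y, x):
    """p(y|x) as a two-component mixture of normals whose means, variances
    and mixing weights all change smoothly with the covariate x."""
    means = (1.0 + 0.5 * x, -1.0 + 0.2 * x)              # component regressions
    sds = (math.exp(0.1 * x), 0.5 + math.exp(-0.1 * x))  # heteroscedastic spreads
    gate = math.exp(0.8 * x)                              # softmax-style gating
    weights = (gate / (gate + 1.0), 1.0 / (gate + 1.0))
    return sum(w * normal_pdf(y, m, s) for w, m, s in zip(weights, means, sds))
```

At every x this is a proper density in y; moving x shifts the component means, spreads and weights smoothly, which is the mechanism the abstract describes.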
By:  Susanne Schennach (Institute for Fiscal Studies and University of Chicago); Yingyao Hu; Arthur Lewbel (Institute for Fiscal Studies and Boston College) 
Abstract:  This note establishes that the fully nonparametric classical errors-in-variables model is identifiable from data on the regressor and the dependent variable alone, unless the specification is a member of a very specific parametric family. This family includes the linear specification with normally distributed variables as a special case. This result relies on standard primitive regularity conditions, taking the form of smoothness and monotonicity of the regression function and non-vanishing characteristic functions of the disturbances.
Date:  2007–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:14/07&r=ecm 
By:  Xiaohong Chen (Institute for Fiscal Studies and Yale); Markus Reiss 
Abstract:  In this paper, we clarify the relations between the existing sets of regularity conditions for convergence rates of nonparametric indirect regression (NPIR) and nonparametric instrumental variables (NPIV) regression models. We establish minimax risk lower bounds in mean integrated squared error loss for the NPIR and NPIV models under two basic regularity conditions that allow for both mildly ill-posed and severely ill-posed cases. We show that both a simple projection estimator for the NPIR model and a sieve minimum distance estimator for the NPIV model can achieve the minimax risk lower bounds, and are rate-optimal uniformly over a large class of structure functions, allowing for mildly ill-posed and severely ill-posed cases.
JEL:  C14 C30 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:20/07&r=ecm 
By:  Thomas A. Severini; Gautam Tripathi 
Abstract:  The main objective of this paper is to derive the efficiency bounds for estimating certain linear functionals of an unknown structural function when the latter is not itself a conditional expectation.
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:13/07&r=ecm 
By:  Nakatani, Tomoaki (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (CREATES, School of Economics and Management, University of Aarhus) 
Abstract:  In this article, we derive a set of necessary and sufficient conditions for the positive definiteness of the conditional covariance matrix in conditional correlation (CC) GARCH models. Under the new conditions, it is possible to introduce negative interdependence among volatilities even in the simplest CC-GARCH(1,1) formulation. An empirical example illustrates how the conditions are imposed and verified in practice.
Keywords:  Multivariate GARCH; positivity constraints; conditional correlation 
JEL:  C12 
Date:  2007–10–15 
URL:  http://d.repec.org/n?u=RePEc:hhs:hastef:0675&r=ecm 
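In the bivariate case, positive definiteness of the conditional covariance matrix H_t = D_t R D_t of a CC-GARCH model can be checked with Sylvester's criterion. This is a generic sketch, not the paper's condition set, and the numerical variances and correlation below are made up.

```python
def conditional_covariance(h1, h2, rho):
    """H_t = D_t R D_t for a bivariate CC-GARCH model: D_t is the diagonal
    matrix of conditional standard deviations, R the correlation matrix."""
    s1, s2 = h1 ** 0.5, h2 ** 0.5
    return [[h1, rho * s1 * s2], [rho * s1 * s2, h2]]

def is_positive_definite_2x2(m):
    """Sylvester's criterion: both leading principal minors positive."""
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

# Hypothetical conditional variances and correlation for one period.
H = conditional_covariance(0.04, 0.09, -0.6)
```

With positive conditional variances and |rho| < 1 the check passes; an invalid "correlation" outside [-1, 1] fails it, which is the kind of constraint the paper's conditions enforce across all periods.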
By:  Jaap Abbring (Institute for Fiscal Studies and Tinbergen Institute) 
Abstract:  We study a mixed hitting-time (MHT) model that specifies durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be derived from optimal-stopping models with heterogeneous agents that do not naturally produce a mixed proportional hazards (MPH) structure. We show how strategies for analyzing the MPH model's identifiability can be adapted to prove identifiability of an MHT model with observed regressors and unobserved heterogeneity. We discuss inference from censored data and extensions to time-varying covariates and latent processes with more general time and dependency structures. We conclude by discussing the relative merits of the MHT and MPH models as complementary frameworks for econometric duration analysis.
JEL:  C14 C41 
Date:  2007–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:15/07&r=ecm 
By:  Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology)
Abstract:  In this paper we examine the implications of statistical large sample theory for the computational complexity of Bayesian and quasi-Bayesian estimation carried out using Metropolis random walks. Our analysis is motivated by the Laplace-Bernstein-von Mises central limit theorem, which states that in large samples the posterior or quasi-posterior approaches a normal density. Using this observation, we establish polynomial bounds on the computational complexity of general Metropolis random walk methods in large samples. Our analysis covers cases where the underlying log-likelihood or extremum criterion function is possibly non-concave, discontinuous, and of increasing dimension. However, the central limit theorem restricts the deviations from continuity and log-concavity of the log-likelihood or extremum criterion function in a very specific manner.
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:12/07&r=ecm 
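The object whose complexity the paper above studies, a Metropolis random walk on a posterior that is approximately normal in large samples, can be sketched directly. The Gaussian target and tuning constants here are illustrative, not the paper's setup.

```python
import math
import random

def metropolis_random_walk(log_density, n_draws, start=0.0, step_sd=1.0, seed=7):
    """Random-walk Metropolis: propose x' = x + N(0, step_sd^2), accept
    with probability min(1, pi(x') / pi(x))."""
    rng = random.Random(seed)
    x = start
    chain, accepted = [], 0
    for _ in range(n_draws):
        proposal = x + rng.gauss(0.0, step_sd)
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
            accepted += 1
        chain.append(x)
    return chain, accepted / n_draws

# Target: a standard normal "posterior", the large-sample limit suggested
# by the Bernstein-von Mises theorem discussed in the abstract.
chain, acc_rate = metropolis_random_walk(lambda x: -0.5 * x * x, 5000)
```

The paper's point is that when the target is close to this normal limit, the number of steps such a walk needs grows only polynomially with dimension.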
By:  Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val; Alfred Galichon
Abstract:  Suppose that a target function is monotonic, namely weakly increasing, and an original estimate of the target function is available which is not weakly increasing. Many common estimation methods used in statistics produce such estimates. We show that these estimates can always be improved with no harm using rearrangement techniques: the rearrangement methods, univariate and multivariate, transform the original estimate to a monotonic estimate, and the resulting estimate is closer to the true curve in common metrics than the original estimate. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts.
Date:  2007–04 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:09/07&r=ecm 
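In the univariate case on a grid, the rearrangement operation in the abstract above is simply sorting the fitted values. A toy sketch (the curves are made up) showing the guaranteed weak improvement in squared-error distance to an increasing target:

```python
def rearrange(fitted):
    """Increasing rearrangement: the sorted sequence of fitted values,
    monotone by construction."""
    return sorted(fitted)

def squared_error(a, b):
    """Squared-error distance between two curves on the same grid."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

true_curve = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # increasing target
estimate = [0.1, 0.5, 0.3, 0.4, 0.9, 0.8]     # non-monotone estimate
rearranged = rearrange(estimate)
```

The paper's result is that this improvement is not an accident of the example: rearrangement never increases the distance to a monotone target in common metrics.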
By:  Sancetta, A. 
Abstract:  Given the sequential update nature of Bayes rule, Bayesian methods find natural application to prediction problems. Advances in computational methods make it routine to use Bayesian methods in econometrics. Hence, there is a strong case for feasible predictions in a Bayesian framework. This paper studies the theoretical properties of Bayesian predictions and shows that under minimal conditions we can derive finite sample bounds for the loss incurred using Bayesian predictions under the Kullback-Leibler divergence. In particular, the concept of universality of predictions is discussed and universality is established for Bayesian predictions in a variety of settings. These include predictions under almost arbitrary loss functions, model averaging, predictions in a nonstationary environment and under model misspecification. Given the possibility of regime switches and multiple breaks in economic series, as well as the need to choose among different forecasting models, which may inevitably be misspecified, the finite sample results derived here are of interest to economic and financial forecasting.
Keywords:  Bayesian prediction, model averaging, universal prediction
JEL:  C11 C44 C53 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0755&r=ecm 
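The model-averaging mechanism underlying the Bayesian predictions studied above can be sketched with two hypothetical forecasters of a binary series: posterior model weights are updated by predictive likelihood, so the better-calibrated model comes to dominate.

```python
def update_weights(weights, predictions, outcome):
    """One step of Bayesian model averaging: multiply each model's weight
    by the probability it assigned to the realized outcome, renormalize."""
    likes = [p if outcome == 1 else 1.0 - p for p in predictions]
    posterior = [w * l for w, l in zip(weights, likes)]
    total = sum(posterior)
    return [w / total for w in posterior]

weights = [0.5, 0.5]  # equal prior weight on two hypothetical models
for outcome in [1, 1, 0, 1, 1, 1]:
    # Model A always predicts P(1) = 0.8; model B predicts P(1) = 0.4.
    weights = update_weights(weights, [0.8, 0.4], outcome)
```

With mostly-1 outcomes, model A's weight grows; the paper's universality results bound the cumulative loss of such averaged predictions even when every model is misspecified.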
By:  Jerry Hausman (Institute for Fiscal Studies and Massachusetts Institute of Technology); Konrad Menzel; Randall Lewis; Whitney Newey (Institute for Fiscal Studies and Massachusetts Institute of Technology)
Abstract:  2SLS is by far the most-used estimator for the simultaneous equation problem. However, it is now well recognized that 2SLS can exhibit substantial finite sample (second-order) bias when the model is overidentified and the first stage partial R2 is low. The initial recommendation to solve this problem was to use LIML, e.g. Bekker (1994) or Staiger and Stock (1997).
However, Hahn, Hausman, and Kuersteiner (HHK 2004) demonstrated that the lack of finite sample moments of LIML led to undesirable estimates in this situation. Morimune (1983) analyzed both the bias in 2SLS and the lack of moments in LIML. While it was long known that LIML did not have finite sample moments, it was less known that this lack of moments led to the undesirable property of considerable dispersion in the estimates; e.g., the interquartile range was much larger than for 2SLS. HHK developed a jackknife 2SLS (J2SLS) estimator that attenuated the 2SLS bias problem and had good dispersion properties. They found in their empirical results that the J2SLS estimator or the Fuller estimator, which modifies LIML to have moments, did well on both the bias and dispersion criteria. Since the Fuller estimator had smaller second-order MSE, HHK recommended using the Fuller estimator. However, Bekker and van der Ploeg (2005) and Hausman, Newey and Woutersen (HNW 2005) recognized that both Fuller and LIML are inconsistent with heteroscedasticity as the number of instruments becomes large in the Bekker (1994) sequence. Since econometricians recognize that heteroscedasticity is often present, this finding presents a problem. Hausman, Newey, Woutersen, Chao and Swanson (HNWCS 2007) solve this problem by proposing jackknife LIML (HLIML) and jackknife Fuller (HFull) estimators that are consistent in the presence of heteroscedasticity. HLIML does not have moments, so HNWCS (2007) recommend using HFull, which does have moments. However, a problem remains: if serial correlation or clustering exists, neither HLIML nor HFull is consistent.
The continuous updating estimator, CUE, the GMM-like generalization of LIML introduced by Hansen, Heaton, and Yaron (1996), would solve this problem. The CUE estimator also allows treatment of nonlinear specifications, which the above estimators do not, and allows for general nonspherical disturbances. However, CUE suffers from the moment problem and exhibits wide dispersion. GMM does not suffer from the no-moments problem, but like 2SLS, GMM has finite sample bias that grows with the number of moments.
In this paper we modify CUE to solve the no-moments/large-dispersion problem. We consider the dual formulation of CUE and modify the CUE first order conditions by adding a term of order 1/T. To first order the variance of the estimator is the same as GMM or CUE, so no large sample efficiency is lost. The resulting estimator has moments up to the degree of overidentification and demonstrates considerably reduced bias relative to GMM and reduced dispersion relative to CUE. Thus, we expect the new estimator to be useful for empirical research. We next consider a similar approach but use a class of functions which permits us to specify an estimator with all integral moments existing. Lastly, we demonstrate how this approach can be extended to the entire family of Maximum Empirical Likelihood (MEL) estimators, so that these estimators will have integral moments of all orders.
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:24/07&r=ecm 
By:  Ariel Pakes (Institute for Fiscal Studies and Harvard University); J. Porter; Kate Ho; Joy Ishii 
Abstract:  This paper provides conditions under which the inequality constraints generated either by single agent optimizing behavior or by the Nash equilibria of multiple agent problems can be used as a basis for estimation and inference. We also add to the econometric literature on inference in models defined by inequality constraints by providing a new specification test and methods of inference for the boundaries of the model's identified set. Two applications illustrate how the use of inequality constraints can simplify the problem of obtaining estimators from complex behavioral models of substantial applied interest.
Date:  2007–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:16/07&r=ecm 
By:  Richard Spady (Institute for Fiscal Studies and European University Institute) 
Abstract:  We model attitudes as latent variables that induce stochastic dominance relations in (item) responses. Observable characteristics that affect attitudes can be incorporated into the analysis to improve the measurement of the attitudes; the measurements are posterior distributions that condition on the responses and characteristics of each respondent. Methods to use these measurements to characterize the relation between attitudes and behaviour are developed and implemented.
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:26/07&r=ecm 
By:  Francesco Audrino; Dominik Colangelo
Abstract:  We propose a new semiparametric model for the implied volatility surface which incorporates machine learning algorithms. Given a starting model, a tree-boosting algorithm sequentially minimizes the residuals between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest and implement a cross-validation strategy to find an optimal stopping value for the tree boosting. Backtesting the out-of-sample appropriateness of our model on a large data set of implied volatilities on S&P 500 options, we provide empirical evidence of its strong predictive potential and compare it to other standard approaches in the literature.
Keywords:  Implied Volatility, Implied Volatility Surface, Forecasting, Tree Boosting, Regression Tree, Functional Gradient Descent 
JEL:  C13 C14 C51 C53 C63 G12 G13 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2007:200742&r=ecm 
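The residual-fitting tree-boosting loop described above can be sketched with depth-one regression stumps on a one-dimensional toy input. This is a generic gradient-boosting sketch, not the authors' implementation; their base learners act on an implied volatility surface and the stopping round is chosen by cross-validation.

```python
def fit_stump(x, r):
    """Best single-split regression stump for residuals r: returns the
    (threshold, left_mean, right_mean) minimizing squared error."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = sum((ri - (lm if xi <= t else rm)) ** 2 for xi, ri in zip(x, r))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1], best[2], best[3]

def boost(x, y, rounds=20, nu=0.5):
    """Sequentially fit stumps to residuals, shrink by nu, accumulate."""
    pred = [sum(y) / len(y)] * len(x)  # start from the sample mean
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, resid)
        pred = [pi + nu * (lm if xi <= t else rm) for xi, pi in zip(x, pred)]
    return pred

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]  # hypothetical target values
fitted = boost(x, y)
```

Each round fits the current residuals, so training error is non-increasing; the cross-validated stopping round in the paper prevents this from turning into overfitting.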
By:  Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val; Alfred Galichon
Abstract:  This paper applies a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions, and any other related approximations, of distribution and quantile functions of sample statistics. Besides satisfying the logical monotonicity required of distribution and quantile functions, the procedure often delivers strikingly better approximations to the distribution and quantile functions of the sample mean than the original Edgeworth-Cornish-Fisher expansions.
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:19/07&r=ecm 
By:  Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val; Alfred Galichon
Abstract:  The most common approach to estimating conditional quantile curves is to fit a curve, typically linear, pointwise for each quantile. Linear functional forms, coupled with pointwise fitting, are used for a number of reasons, including parsimony of the resulting approximations and good computational properties. The resulting fits, however, may not respect a logical monotonicity requirement: that the quantile curve be increasing as a function of the probability. This paper studies the natural monotonization of these empirical curves induced by sampling from the estimated non-monotone model, and then taking the resulting conditional quantile curves, which by construction are monotone in the probability.
Date:  2007–04 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:10/07&r=ecm 
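The monotonization device described above can be sketched directly: simulate from an estimated (possibly non-monotone) quantile model, then read off empirical quantiles of the draws, which are monotone in the probability by construction. The toy quantile function below is hypothetical.

```python
import math
import random

def q_hat(u):
    """A toy estimated quantile curve that is non-monotone in u."""
    return u - 0.4 * math.sin(2.0 * math.pi * u)

# Simulate from the estimated model: draw U ~ Uniform(0,1), set Y = q_hat(U).
rng = random.Random(0)
draws = sorted(q_hat(rng.random()) for _ in range(20000))

def monotone_quantile(u):
    """Empirical quantile of the simulated draws: monotone by construction."""
    return draws[min(int(u * len(draws)), len(draws) - 1)]
```

The original curve dips below its value at zero for small u, violating monotonicity, while the simulation-based curve cannot, since it indexes into a sorted sample.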
By:  Hajo Holzmann (Institute of Stochastics, University of Karlsruhe / Germany); Sebastian Vollmer (IberoAmerica Institute, University of Goettingen / Germany); Julian Weisbrod (Department of Economics, University of Goettingen / Germany) 
Abstract:  In this paper we analyze the world's cross-national distribution of income and its evolution from 1970 to 2003. We argue that modeling this distribution by a finite mixture and investigating its number of components has advantages over nonparametric inference concerning the number of modes. In particular, the number of components of the distribution does not depend on the scale chosen (original or logarithmic), whereas the number of modes does. Instead of so-called twin peaks, we find that the distribution appears to have only two components in 1970-1975, but consists of three components from 1976 onwards: a low, an average and a high mean-income group, with group means diverging over time. Here we apply recently developed modified likelihood ratio tests for the number of components in a finite mixture. The intra-distributional dynamics are investigated in detail using posterior probability estimates.
Keywords:  cross-national income distribution; mixture models; modified likelihood ratio test; nonparametric density estimation
JEL:  C12 O11 O47 F01 
Date:  2007–09–11 
URL:  http://d.repec.org/n?u=RePEc:got:iaidps:162&r=ecm 
By:  Alain Chaboud; Benjamin Chiquoine; Erik Hjalmarsson; Mico Loretan 
Abstract:  Using two newly available ultra-high-frequency datasets, we investigate empirically how frequently one can sample certain foreign exchange and U.S. Treasury security returns without contaminating estimates of their integrated volatility with market microstructure noise. Using volatility signature plots and a recently proposed formal decision rule to select the sampling frequency, we find that one can sample FX returns as frequently as once every 15 to 20 seconds without contaminating volatility estimates; bond returns may be sampled as frequently as once every 2 to 3 minutes on days without U.S. macroeconomic announcements, and as frequently as once every 40 seconds on announcement days. With a simple realized kernel estimator, the sampling frequencies can be increased to once every 2 to 5 seconds for FX returns and to about once every 30 to 40 seconds for bond returns. These sampling frequencies, especially in the case of FX returns, are much higher than those often recommended in the empirical literature on realized volatility in equity markets. We suggest that the generally superior depth and liquidity of trading in FX and government bond markets contributes importantly to this difference.
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:905&r=ecm 
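The volatility signature plot logic used above amounts to computing realized variance at several sampling intervals and inspecting where it stabilizes. This sketch uses a simulated tick-level price path as a stand-in for the FX and Treasury data; all numbers are illustrative.

```python
import math
import random

def realized_variance(prices, step):
    """Sum of squared log returns, with prices sampled every `step` ticks."""
    sampled = prices[::step]
    return sum(math.log(b / a) ** 2 for a, b in zip(sampled, sampled[1:]))

# Simulated tick-level prices: a geometric random walk with no
# microstructure noise, so the signature is flat across frequencies.
rng = random.Random(3)
prices = [100.0]
for _ in range(10000):
    prices.append(prices[-1] * math.exp(rng.gauss(0.0, 0.0005)))

# Signature plot: realized variance as a function of the sampling interval.
signature = {step: realized_variance(prices, step) for step in (1, 5, 15, 30, 60)}
```

With real data, microstructure noise inflates the high-frequency end of the plot; the sampling-frequency rules in the paper pick the fastest interval at which that inflation is negligible.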
By:  Di Kuang (Dept of Statistics, University of Oxford); Bent Nielsen (Nuffield College, Oxford University); J. P. Nielsen (Cass Business School, City University London) 
Abstract:  In this paper, we consider the identification problem arising in age-period-cohort models, as well as in the extended chain ladder model. We propose a canonical parametrization based on the accelerations of the trends in the three factors. This parametrization is exactly identified. It eases interpretation, estimation and forecasting. The canonical parametrization is shown to apply for a class of index sets which have trapezoid shapes, including various Lexis diagrams and the insurance reserving triangles.
Date:  2007–11–20 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0705&r=ecm 
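The invariance underlying the canonical parametrization above can be illustrated in a few lines: second differences (accelerations) of a factor are unchanged when an arbitrary level and linear trend, the unidentified part of an age-period-cohort model, are added. The numbers are illustrative.

```python
def second_differences(seq):
    """Double differences: seq[i+1] - 2*seq[i] + seq[i-1]."""
    return [seq[i + 1] - 2.0 * seq[i] + seq[i - 1] for i in range(1, len(seq) - 1)]

age_effects = [0.0, 0.3, 0.9, 1.2, 2.0]  # hypothetical age factor
# Add an (unidentified) level and linear trend to the age effects.
tilted = [a + 0.7 + 0.2 * i for i, a in enumerate(age_effects)]
```

Because the accelerations do not move under such transformations, parametrizing the model directly in terms of them yields an exactly identified parametrization, as the abstract states.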
By:  Bassler, Kevin E.; Gunaratne, Gemunu H.; McCauley, Joseph L. 
Abstract:  The discovery of the dynamics of a time series requires construction of the transition density; one-point densities and scaling exponents provide no knowledge of the dynamics. Time series require some sort of statistical regularity, otherwise there is no basis for analysis. We state the possible tests for statistical regularity in terms of increments. The condition for stationary increments, not scaling, determines long time pair autocorrelations. An incorrect assumption of stationary increments generates spurious stylized facts, fat tails and a Hurst exponent Hs=1/2, when the increments are nonstationary, as they are in FX markets. The nonstationarity arises from systematic unevenness in noise traders' behavior. Spurious results arise mathematically from using a log increment with a 'sliding window'. The Hurst exponent Hs generated by using the sliding window technique on a time series plays the same role as Mandelbrot's Joseph exponent. Mandelbrot originally assumed that the 'badly behaved' second moment of cotton returns is due to fat tails; we argue that this nonconvergent behavior instead provides direct evidence of nonstationary increments.
Keywords:  Stylized facts; nonstationary time series analysis; regression; martingales; uncorrelated increments; fat tails; efficient market hypothesis; sliding windows
JEL:  C51 C22 
Date:  2007–10–17 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:5813&r=ecm 
By:  Spyros Konstantopoulos (Northwestern University and IZA) 
Abstract:  Multilevel models are widely used in education and social science research. However, the effects of omitting levels of the hierarchy on the variance decomposition and the clustering effects have not been well documented. This paper discusses how omitting one level in threelevel models affects the variance decomposition and clustering in the resulting twolevel models. Specifically, I used the ANOVA framework and provided results for simple models that do not include predictors and assumed balanced nested data (or designs). The results are useful for teacher and school effects research as well as for power analysis during the designing stage of a study. The usefulness of the methods is demonstrated using data from Project STAR. 
Keywords:  variance decomposition, nested designs, clustering 
JEL:  C00 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp3178&r=ecm 
By:  Elisa Luciano; Jaap Spreeuw; Elena Vigna 
Abstract:  Stochastic mortality, i.e. modelling death arrival via a jump process with stochastic intensity, is gaining increasing reputation as a way to represent mortality risk. This paper represents a first attempt to model the mortality risk of couples of individuals according to the stochastic intensity approach. We extend to couples the Cox process setup, namely the idea that mortality is driven by a jump process whose intensity is itself a stochastic process, specific to a particular generation within each gender. Dependence between the survival times of the members of a couple is captured by an Archimedean copula. We also provide a methodology for fitting the joint survival function by working separately on the (analytical) copula and the (analytical) margins. First, we calibrate and select the best fit copula according to the methodology of Wang and Wells (2000b) for censored data. Then, we provide a sample-based calibration for the intensity, using a time-homogeneous, non-mean-reverting, affine process: this gives the marginal survival functions. By coupling the best fit copula with the calibrated margins we obtain a joint survival function which incorporates the stochastic nature of mortality improvements. Several measures of time-dependent association can be computed from it. We apply the methodology to a well known insurance dataset, using a sample generation. The best fit copula turns out to be a Nelsen copula, which implies not only positive dependence, but dependence increasing with age.
Keywords:  stochastic mortality, bivariate mortality, copula functions, longevity risk. 
JEL:  G22 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:cca:wpaper:43&r=ecm 
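The coupling step above can be sketched with the Clayton copula, one member of the Archimedean family, applied to two hypothetical marginal survival probabilities. The paper's calibrated Nelsen copula and affine-intensity margins are not reproduced here; theta and the survival inputs are made up.

```python
def clayton_copula(u, v, theta=2.0):
    """Clayton (Archimedean) copula; positive dependence for theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_survival(s_husband, s_wife, theta=2.0):
    """Joint survival of the couple: a copula of the marginal survivals."""
    return clayton_copula(s_husband, s_wife, theta)

# Hypothetical marginal survival probabilities at some horizon.
joint = joint_survival(0.9, 0.8)
```

Positive dependence makes the joint survival exceed the independence benchmark (the product of the margins), which is the qualitative finding the abstract reports for couples.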