
New Economics Papers on Econometrics
By:  d'Haultfoeuille, Xavier (CREST-INSEE); Maurel, Arnaud (ENSAE-CREST) 
Abstract:  This paper considers the identification and estimation of an extension of Roy's model (1951) of occupational choice, which includes a non-pecuniary component in the decision equation and allows for uncertainty on the potential outcomes. This framework is well suited to various economic contexts, including educational and sectoral choices, or migration decisions. We focus in particular on the identification of the non-pecuniary component under the condition that at least one variable affects the selection probability only through potential earnings, that is, under the opposite of the usual exclusion restrictions used to identify switching regression models and treatment effects. Point identification is achieved if such variables are continuous, while bounds are obtained otherwise. As a result, the distribution of the ex ante treatment effects can be point or set identified without any usual instruments. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. We apply our results to the educational context, by providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions. 
Keywords:  Roy model, nonparametric identification, exclusion restrictions, schooling choices, ex ante returns to schooling 
JEL:  C14 C25 J24 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp4606&r=ecm 
By:  Li, Feng (Department of Statistics, Stockholm University); Villani, Mattias (Research Department, Central Bank of Sweden); Kohn, Robert (Economics, The University of New South Wales) 
Abstract:  A general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric Student-t densities with covariate-dependent mixture weights. The four parameters of the components, the mean, degrees of freedom, scale and skewness, are all modelled as functions of the covariates. Inference is Bayesian and the computation is carried out using Markov chain Monte Carlo simulation. To enable model parsimony, a variable selection prior is used in each set of covariates and among the covariates in the mixing weights. The model is used to analyse the distribution of daily stock market returns, and shown to more accurately forecast the distribution of returns than other widely used models for financial data. 
Keywords:  Bayesian inference; Markov Chain Monte Carlo; Mixture of Experts; Variable selection; Volatility modeling. 
JEL:  C11 C53 
Date:  2009–10–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0233&r=ecm 
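As a concrete illustration of the kind of conditional density the entry above describes, here is a minimal sketch of a two-component smooth mixture of asymmetric Student-t densities with a logistic covariate-dependent weight. The Fernández-Steel two-piece construction of the skewed t, the link functions, and all parameter values are assumptions for illustration; the paper's exact parameterization and its Bayesian MCMC estimation are not reproduced here.

```python
import numpy as np
from scipy.stats import t as student_t

def skew_t_pdf(y, mu, sigma, nu, gamma):
    # Fernandez-Steel two-piece asymmetric Student-t density (assumed form);
    # gamma > 1 skews right, gamma < 1 skews left.
    z = (y - mu) / sigma
    c = 2.0 / (gamma + 1.0 / gamma)
    body = np.where(z >= 0,
                    student_t.pdf(z / gamma, df=nu),
                    student_t.pdf(z * gamma, df=nu))
    return c * body / sigma

def smooth_mixture_pdf(y, x, comps, w_coefs):
    # f(y|x): two-component mixture in which the weight and all four
    # component parameters (mean, scale, d.o.f., skewness) depend on the
    # covariate x through hypothetical linear/log-linear links.
    w1 = 1.0 / (1.0 + np.exp(-(w_coefs[0] + w_coefs[1] * x)))
    dens = 0.0
    for w, p in zip((w1, 1.0 - w1), comps):
        dens = dens + w * skew_t_pdf(
            y,
            mu=p["mu"][0] + p["mu"][1] * x,
            sigma=np.exp(p["ls"][0] + p["ls"][1] * x),     # log link
            nu=2.0 + np.exp(p["ln"][0] + p["ln"][1] * x),  # keep d.o.f. > 2
            gamma=np.exp(p["lg"][0] + p["lg"][1] * x))
    return dens

# toy evaluation of the conditional density on a grid
comps = [{"mu": (0.0, 0.2), "ls": (0.0, 0.1), "ln": (1.5, 0.0), "lg": (0.1, 0.0)},
         {"mu": (-1.0, 0.0), "ls": (0.5, 0.0), "ln": (1.0, 0.0), "lg": (-0.2, 0.0)}]
ygrid = np.linspace(-6.0, 6.0, 201)
fy = smooth_mixture_pdf(ygrid, x=0.5, comps=comps, w_coefs=(0.3, 0.8))
```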
By:  Lucia Alessi (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Matteo Barigozzi (European Center for the Advanced Research in Economics and Statistics (ECARES), Université libre de Bruxelles, Belgium.); Marco Capasso (Utrecht School of Economics, Utrecht University, P.O. Box 80.115, 3508 TC Utrecht, The Netherlands.) 
Abstract:  We propose a new method for multivariate forecasting which combines Dynamic Factor and multivariate GARCH models. The information contained in large datasets is captured by a few dynamic common factors, which we assume to be conditionally heteroskedastic. After presenting the model, we propose a multi-step estimation technique which combines asymptotic principal components and multivariate GARCH. We also prove consistency of the estimated conditional covariances. We present simulation results in order to assess the finite sample properties of the estimation technique. Finally, we carry out two empirical applications, respectively on macroeconomic series, with a particular focus on different measures of inflation, and on financial asset returns. Our model outperforms the benchmarks in forecasting the inflation level, its conditional variance and the volatility of returns. Moreover, we are able to predict all the conditional covariances among the observable series. JEL Classification: C52, C53. 
Keywords:  Dynamic Factor Models, Multivariate GARCH, Conditional Covariance, Inflation Forecasting, Volatility Forecasting. 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20091115&r=ecm 
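A minimal sketch of the two estimation steps described above: asymptotic principal components to extract the factors, followed by GARCH fitted to the factors. For simplicity this fits a univariate GARCH(1,1) to each factor, whereas the paper uses a multivariate GARCH on the factor vector; the panel below is a random placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_factors(X, r):
    # First step: asymptotic principal components. X is a T x N demeaned
    # panel; returns T x r factors (F'F/T = I) and N x r loadings.
    T, N = X.shape
    vals, vecs = np.linalg.eigh(X @ X.T / (T * N))   # T x T eigen-problem
    F = np.sqrt(T) * vecs[:, ::-1][:, :r]            # top-r eigenvectors
    L = X.T @ F / T
    return F, L

def garch11_negloglik(theta, y):
    # Gaussian GARCH(1,1) negative log-likelihood for one factor.
    omega, alpha, beta = theta
    h = np.empty_like(y)
    h[0] = np.var(y)
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(h) + y ** 2 / h)

def fit_garch11(y):
    res = minimize(garch11_negloglik, x0=[0.1 * np.var(y), 0.05, 0.90],
                   args=(y,), method="L-BFGS-B",
                   bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
    return res.x

# second step on each estimated factor (a simplification of the paper's
# multivariate GARCH step)
T, N, r = 500, 100, 3
X = np.random.randn(T, N)                  # placeholder panel
F, L = estimate_factors(X - X.mean(0), r)
garch_params = [fit_garch11(F[:, j]) for j in range(r)]
```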
By:  Giordani, Paolo (Research Department, Central Bank of Sweden); Villani, Mattias (Research Department, Central Bank of Sweden) 
Abstract:  We introduce a non-Gaussian dynamic mixture model for macroeconomic forecasting. The Locally Adaptive Signal Extraction and Regression (LASER) model is designed to capture relatively persistent AR processes (signal) contaminated by high frequency noise. The distribution of the innovations in both noise and signal is robustly modeled using mixtures of normals. The mean of the process and the variances of the signal and noise are allowed to shift suddenly or gradually, at unknown locations and an unknown number of times. The model is then capable of capturing movements in the mean and conditional variance of a series as well as in the signal-to-noise ratio. Four versions of the model are used to forecast six quarterly US and Swedish macroeconomic series. We conclude that (i) allowing for infrequent and large shifts in mean while imposing normal iid errors often leads to erratic forecasts, (ii) such shifts/breaks versions of the model can forecast well if robustified by allowing for non-normal errors and time-varying variances, (iii) infrequent and large shifts in error variances outperform smooth and continuous shifts substantially when it comes to interval coverage, (iv) for point forecasts, robust time-varying specifications improve slightly upon fixed parameter specifications on average, but the relative performances can differ sizably in various subsamples, and (v) for interval forecasts, robust versions that allow for infrequent shifts in variances perform substantially and consistently better than time-invariant specifications. 
Keywords:  Bayesian inference; Forecast evaluation; Regime switching; State-space modeling; Dynamic mixture models 
JEL:  C11 C53 
Date:  2009–10–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0234&r=ecm 
By:  Yoonseok Lee (Dept. of Economics, University of Michigan); Ryo Okui (Institute of Economic Research, Kyoto University) 
Abstract:  This paper considers specification testing for instrumental variables estimation in the presence of many instruments. The test proposed is a modified version of the Sargan (1958, Econometrica 26(3): 393–415) test of overidentifying restrictions. The test statistic asymptotically follows the standard normal distribution under the null hypothesis of correct specification when the number of instruments increases with the sample size. We find that the new test statistic is numerically equivalent up to a sign to the test statistic proposed by Hahn and Hausman (2002, Econometrica 70(1): 163–189). We also assess the size and power properties of the test. 
Keywords:  Instrumental variables estimation, Many instruments, Overidentifying restrictions test, Specification test 
JEL:  C12 C21 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1741&r=ecm 
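For reference, a sketch of the classical Sargan statistic that the entry above modifies. The paper recenters and rescales this statistic so that it is asymptotically N(0,1) as the number of instruments grows with the sample size; that normalization is omitted here, so the code shows only the familiar fixed-instrument version.

```python
import numpy as np

def sargan_statistic(y, X, Z):
    # Classical Sargan statistic S = n * u'P_Z u / (u'u), with u the 2SLS
    # residuals and P_Z the projection onto the instruments. Under fixed-q
    # asymptotics S ~ chi2(q - k) when the model is correctly specified.
    n = len(y)
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    Xhat = PZ @ X
    beta = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)   # 2SLS estimator
    u = y - X @ beta
    return n * (u @ PZ @ u) / (u @ u)

# toy check: one endogenous regressor, 20 instruments
rng = np.random.default_rng(0)
n, q = 2000, 20
Z = rng.standard_normal((n, q))
v = rng.standard_normal(n)
eps = 0.5 * v + rng.standard_normal(n)     # endogeneity via common shock v
x = Z @ np.full(q, 0.3) + v
y = 1.0 * x + eps
S = sargan_statistic(y, x[:, None], Z)     # compare with chi2(q - 1)
```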
By:  Lucia Alessi; Matteo Barigozzi; Marco Capasso 
Abstract:  We modify the criterion by Bai and Ng (2002) for determining the number of factors in approximate factor models. As in the original criterion, for any given number of factors we estimate the common and idiosyncratic components of the model by applying principal component analysis. We select the true number of factors as the number that minimizes the variance explained by the idiosyncratic component. In order to avoid overparametrization, minimization is subject to penalization. At this step, we modify the original procedure by multiplying the penalty function by a positive real number, which allows us to tune its penalizing power, by analogy with the method used by Hallin and Liška (2007) in the frequency domain. The contribution of this paper is twofold. First, our criterion retains the asymptotic properties of the original criterion, but corrects its tendency to overestimate the true number of factors. Second, we provide a computationally easy way to implement the new method by iteratively applying the original criterion. Monte Carlo simulations show that our criterion is in general more robust than the original one. A better performance is achieved in particular in the case of large idiosyncratic disturbances. These conditions are the most difficult for detecting a factor structure but are not unusual in the empirical context. Two applications on a macroeconomic and a financial dataset are also presented. 
Keywords:  Approximate factor models, information criterion, model selection. 
JEL:  C52 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2009_023&r=ecm 
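A rough sketch of the tuned criterion described above: a Bai-Ng IC_p2-type penalty is multiplied by a constant c, and the selected number of factors is examined as c varies, in the spirit of Hallin and Liška (2007). The penalty shape and the plateau-scanning loop are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def v_k(X, k):
    # V(k): average idiosyncratic variance left after removing k principal
    # components (computed from the T x T eigen-problem).
    T, N = X.shape
    vals, vecs = np.linalg.eigh(X @ X.T / (T * N))
    F = np.sqrt(T) * vecs[:, ::-1][:, :k]
    E = X - F @ (X.T @ F / T).T
    return (E ** 2).sum() / (N * T)

def select_n_factors(X, kmax, c=1.0):
    # IC(k) = log V(k) + c * k * g(N, T); c = 1 recovers the original
    # Bai-Ng criterion (IC_p2 penalty shape assumed here).
    T, N = X.shape
    g = (N + T) / (N * T) * np.log(min(N, T))
    ic = [np.log(v_k(X, k)) + c * k * g for k in range(1, kmax + 1)]
    return 1 + int(np.argmin(ic))

# tuning the penalizing power: scan c and look for a stable plateau of
# the selected number of factors
X = np.random.randn(200, 80)          # placeholder (demeaned) panel
ks = {round(c, 2): select_n_factors(X, kmax=10, c=c)
      for c in np.linspace(0.5, 3.0, 11)}
print(ks)
```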
By:  Cavit Pakel (Department of Economics and Oxford-Man Institute, University of Oxford, Oxford); Neil Shephard (Oxford-Man Institute and Department of Economics, University of Oxford, Oxford); Kevin Sheppard (Oxford-Man Institute and Department of Economics, University of Oxford, Oxford) 
Abstract:  We investigate the properties of the composite likelihood (CL) method for (T × N_T) GARCH panels. The defining feature of a GARCH panel with time series length T is that, while nuisance parameters are allowed to vary across the N_T series, other parameters of interest are assumed to be common. CL pools information across the panel instead of using information available in a single series only. Simulations and empirical analysis illustrate that for reasonably large T, CL performs well. However, due to the estimation error introduced through nuisance parameter estimation, CL is subject to the “incidental parameter” problem for small T. 
Keywords:  ARCH models; composite likelihood; nuisance parameters; panel data 
JEL:  C01 C14 C32 
Date:  2009–10–02 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0912&r=ecm 
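A minimal sketch of a composite likelihood for a GARCH panel as described above: Gaussian GARCH(1,1) log-likelihoods are summed across series, with (alpha, beta) common and a series-specific omega as the nuisance parameter. The joint maximization below is one simple implementation choice, and the data are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def cl_negloglik(common, omegas, Y):
    # Composite likelihood: sum of univariate Gaussian GARCH(1,1)
    # log-likelihoods over the N columns of the T x N panel Y, with
    # (alpha, beta) common and omega_i series-specific (nuisance).
    alpha, beta = common
    total = 0.0
    for i in range(Y.shape[1]):
        y = Y[:, i]
        h = np.empty_like(y)
        h[0] = np.var(y)
        for t in range(1, len(y)):
            h[t] = omegas[i] + alpha * y[t - 1] ** 2 + beta * h[t - 1]
        total += 0.5 * np.sum(np.log(h) + y ** 2 / h)
    return total

def fit_cl(Y):
    # maximize jointly over the common parameters and the N omegas;
    # profiling out the omegas series-by-series is an alternative
    N = Y.shape[1]
    def obj(theta):
        return cl_negloglik(theta[:2], theta[2:], Y)
    x0 = np.concatenate(([0.05, 0.90], 0.05 * Y.var(axis=0)))
    bounds = [(0.0, 1.0), (0.0, 1.0)] + [(1e-8, None)] * N
    return minimize(obj, x0, method="L-BFGS-B", bounds=bounds).x

Y = np.random.randn(300, 10)     # placeholder T x N GARCH panel
theta_hat = fit_cl(Y)            # theta_hat[:2] = common (alpha, beta)
```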
By:  Alvaro Cartea; Dimitrios Karyampas 
Abstract:  Using high frequency data for the price dynamics of equities, we measure the impact that market microstructure noise has on estimates of (i) the volatility of returns and (ii) the variance-covariance matrix of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series into the true efficient price and the microstructure noise. This approach allows us to employ volatility estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this price series decomposition allows us to estimate the variance-covariance matrix of n assets in a more efficient way than the methods so far proposed in the literature. We illustrate our results by calculating how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM setting. 
Keywords:  Volatility estimation, High-frequency data, Market microstructure theory, Covariation of assets, Matrix process, Kalman filter 
JEL:  G12 G14 C22 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wbrepe:wp097609&r=ecm 
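A sketch of the price-decomposition idea in its simplest state-space form: the observed log-price equals a random-walk efficient price plus i.i.d. microstructure noise, filtered with a scalar Kalman filter. The paper's specification may be richer; the variances and data below are illustrative assumptions.

```python
import numpy as np

def kalman_local_level(p, q_eta, q_eps):
    # Local-level filter for p_t = m_t + eps_t (microstructure noise),
    # m_t = m_{t-1} + eta_t (efficient price). q_eta, q_eps are the
    # state and measurement noise variances. Returns the filtered
    # efficient price and the implied noise component.
    n = len(p)
    m = np.empty(n)
    m_pred, P_pred = p[0], q_eps          # rough initialization
    for t in range(n):
        K = P_pred / (P_pred + q_eps)     # Kalman gain
        m[t] = m_pred + K * (p[t] - m_pred)
        P = (1.0 - K) * P_pred
        m_pred, P_pred = m[t], P + q_eta  # random-walk prediction step
    return m, p - m

# realized variance computed from the filtered price is then the
# noise-robust volatility estimate the abstract alludes to
prices = np.cumsum(0.001 * np.random.randn(5000)) \
         + 0.002 * np.random.randn(5000)           # placeholder ticks
m_hat, noise_hat = kalman_local_level(prices, q_eta=1e-6, q_eps=4e-6)
rv = np.sum(np.diff(m_hat) ** 2)
```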
By:  McAleer, M.; Medeiros, M.C. (Erasmus Econometric Institute) 
Abstract:  In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 indexes. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intraday returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed in the paper. 
Keywords:  financial econometrics; volatility forecasting; neural networks; nonlinear models; realized volatility; bagging 
Date:  2009–11–24 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765017303&r=ecm 
By:  B. Nielsen (Dept of Economics and Nuffield College, University of Oxford, Oxford.) 
Abstract:  Johansen derived the asymptotic theory for his cointegration rank test statistic for a vector autoregression where the parameters are restricted so that the process is integrated of order one. It is investigated to what extent these parameter restrictions are binding. The eigenvalues of Johansen’s eigenvalue problem are shown to have the same consistency rates across the parameter space. The test statistic is shown to have the usual asymptotic distribution as long as the possibilities of additional unit roots and of singular explosiveness are ruled out. To prove the results, the convergence of stochastic integrals with respect to singular explosive processes is considered. 
Date:  2009–09–22 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0910&r=ecm 
By:  Neil Shephard (Oxford-Man Institute and Department of Economics, University of Oxford); Kevin Sheppard (Department of Economics and Oxford-Man Institute, University of Oxford) 
Abstract:  This paper studies in some detail a class of high-frequency-based volatility (HEAVY) models. These models are direct models of daily asset return volatility based on realized measures constructed from high frequency data. Our analysis identifies that the models have momentum and mean reversion effects, and that they adjust quickly to structural breaks in the level of the volatility process. We study how to estimate the models and how they perform through the credit crunch, comparing their fit to more traditional GARCH models. We analyse a model-based bootstrap which allows us to estimate the entire predictive distribution of returns. We also provide an analysis of missing data in the context of these models. 
Keywords:  ARCH models; bootstrap; missing data; multiplicative error model; multistep ahead prediction; nonnested likelihood ratio test; realised kernel; realised volatility. 
Date:  2009–07–10 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0903&r=ecm 
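A sketch of the two HEAVY recursions, with illustrative (not estimated) parameter values: the daily return variance loads on the lagged realized measure, and the realized measure has its own autoregressive mean equation. Driving volatility off the realized measure rather than squared returns alone is what produces the fast adjustment to level breaks the abstract mentions.

```python
import numpy as np

def heavy_filter(r2, rm, theta):
    # One-step conditional moments from the HEAVY pair:
    #   h_t  = w  + a  * RM_{t-1} + b  * h_{t-1}   (variance of returns)
    #   mu_t = wR + aR * RM_{t-1} + bR * mu_{t-1}  (mean of realized measure)
    # r2: squared daily returns; rm: realized measure (e.g. realized kernel).
    w, a, b, wR, aR, bR = theta
    T = len(r2)
    h, mu = np.empty(T), np.empty(T)
    h[0], mu[0] = r2.mean(), rm.mean()
    for t in range(1, T):
        h[t] = w + a * rm[t - 1] + b * h[t - 1]
        mu[t] = wR + aR * rm[t - 1] + bR * mu[t - 1]
    return h, mu

# toy data and illustrative parameter values (assumptions, not estimates)
T = 1000
rm = np.abs(np.random.randn(T)) * 1e-4
r2 = rm * np.random.chisquare(1, T)
h, mu = heavy_filter(r2, rm, theta=(1e-6, 0.3, 0.6, 1e-6, 0.4, 0.55))
```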
By:  Andre A. P.; Francisco J. Nogales; Esther Ruiz 
Abstract:  This article addresses the problem of forecasting portfolio value-at-risk (VaR) with multivariate GARCH models vis-à-vis univariate models. Existing literature has tried to answer this question by analyzing only small portfolios and using a testing framework not appropriate for ranking VaR models. In this work we provide a more comprehensive look at the problem of portfolio VaR forecasting by using more appropriate statistical tests of comparative predictive ability. Moreover, we compare univariate vs. multivariate VaR models in the context of diversified portfolios containing a large number of assets and also provide evidence based on Monte Carlo experiments. We conclude that, if the sample size is moderately large, multivariate models outperform univariate counterparts on an out-of-sample basis. 
Keywords:  Market risk, Backtesting, Conditional predictive ability, GARCH, Volatility, Capital requirements, Basel II 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws097222&r=ecm 
By:  Hahn, Jinyong (UCLA); Hirano, Keisuke (U AZ); Karlan, Dean (Yale University and MIT Jameel Poverty Action Lab) 
Abstract:  Many social experiments are run in multiple waves, or are replications of earlier social experiments. In principle, the sampling design can be modified in later stages or replications to allow for more efficient estimation of causal effects. We consider the design of a two-stage experiment for estimating an average treatment effect, when covariate information is available for experimental subjects. We use data from the first stage to choose a conditional treatment assignment rule for units in the second stage of the experiment. This amounts to choosing the propensity score, the conditional probability of treatment given covariates. We propose to select the propensity score to minimize the asymptotic variance bound for estimating the average treatment effect. Our procedure can be implemented simply using standard statistical software and has attractive large-sample properties. 
JEL:  C10 C13 C14 C90 C93 
Date:  2009–01 
URL:  http://d.repec.org/n?u=RePEc:ecl:yaleco:59&r=ecm 
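In the simplest case, minimizing the pointwise variance-bound term sigma_1^2(x)/p(x) + sigma_0^2(x)/(1-p(x)) gives the Neyman-allocation propensity score p*(x) = sigma_1(x)/(sigma_1(x)+sigma_0(x)). The sketch below estimates the conditional outcome standard deviations from first-stage data within discrete covariate cells and uses them to assign second-stage treatment; the DGP and cell structure are illustrative assumptions, and the paper treats the general problem.

```python
import numpy as np

def optimal_propensity(s1, s0):
    # pointwise minimizer of s1^2/p + s0^2/(1-p) over p in (0, 1)
    return s1 / (s1 + s0)

rng = np.random.default_rng(1)

# stage 1: randomized with p = 1/2; discrete covariate with 3 cells
n = 500
x = rng.integers(0, 3, n)
d = rng.integers(0, 2, n)
y = np.where(d == 1,
             1.0 + x + (1.0 + x) * rng.standard_normal(n),  # sd rises with x
             rng.standard_normal(n))                        # sd = 1 untreated

# estimate conditional std devs cell by cell, then p*(x)
p_star = np.empty(3)
for c in range(3):
    s1 = y[(x == c) & (d == 1)].std(ddof=1)
    s0 = y[(x == c) & (d == 0)].std(ddof=1)
    p_star[c] = optimal_propensity(s1, s0)   # approx (1+c)/(2+c) here

# stage 2: assign treatment with covariate-dependent probability p*(x)
x2 = rng.integers(0, 3, n)
d2 = (rng.random(n) < p_star[x2]).astype(int)
```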
By:  Jouni Sohkanen (Dept of Economics, University of Oxford, Oxford); B. Nielsen (Nuffield College, University of Oxford, Oxford.) 
Abstract:  We undertake a generalization of the cumulative sum of squares (CUSQ) test to the case of nonstationary autoregressive distributed lag models with quite general deterministic time trends. The test may be validly implemented with either ordinary least squares residuals or standardized forecast errors. Simulations suggest that there is little at stake in the choice between the two in the unit root case under Gaussian innovations, and that there is only very modest variation in the finite sample distribution across the parameter space. 
Date:  2009–08–31 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0909&r=ecm 
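For orientation, a sketch of a cumulative-sum-of-squares statistic in the familiar Inclán-Tiao form; the paper's contribution is establishing its validity (with either OLS residuals or standardized forecast errors) in nonstationary autoregressive distributed lag models, and its exact normalization may differ from the one used here.

```python
import numpy as np

def cusq_statistic(e):
    # CUSQ statistic sqrt(T/2) * max_k |C_k - k/T|, where
    # C_k = sum_{t<=k} e_t^2 / sum_{t<=T} e_t^2. Under constant innovation
    # variance the limit is the sup of a Brownian bridge in absolute value;
    # large values signal a change in variance.
    T = len(e)
    C = np.cumsum(e ** 2) / np.sum(e ** 2)
    k = np.arange(1, T + 1) / T
    return np.sqrt(T / 2.0) * np.max(np.abs(C - k))

# e can be OLS residuals or standardized one-step forecast errors
e = np.random.randn(500)
print(cusq_statistic(e))
```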
By:  Laurent Ferrara (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Banque de France - Business Conditions and Macroeconomic Forecasting Directorate); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I) 
Abstract:  This paper formalizes the process of forecasting unbalanced monthly data sets in order to obtain robust nowcasts and forecasts of the quarterly GDP growth rate through a semiparametric modelling. This innovative approach relies on the use of nonparametric methods, based on nearest neighbors and on radial basis function approaches, to forecast the monthly variables involved in the parametric modelling of GDP using bridge equations. A real-time experience is carried out on Euro area vintage data in order to anticipate, with a lead ranging from six months to one month, the GDP flash estimate for the whole zone. 
Keywords:  Euro area GDP, real-time nowcasting, forecasting, nonparametric models. 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00344839_v2&r=ecm 
By:  Daisuke Nagakura (Institute for Monetary and Economic Studies, Bank of Japan (Email: daisuke.nagakura@boj.or.jp)) 
Abstract:  In this paper, we develop the asymptotic theory of Hwang and Basawa (2005) for explosive random coefficient autoregressive (ERCA) models. Applying the theory, we prove that a locally best invariant (LBI) test in McCabe and Tremayne (1995), which is for the null of a unit root (UR) process against the alternative of a stochastic unit root (STUR) process, is inconsistent against a class of ERCA models. This class includes a class of STUR processes as special cases. We show, however, that the well-known Dickey-Fuller (DF) UR tests and an LBI test of Lee (1998) are consistent against a particular case of this class of ERCA models. 
Keywords:  Locally Best Invariant Test, Consistency, Dickey-Fuller Test, LBI, RCA, STUR 
JEL:  C12 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:ime:imedps:09e23&r=ecm 
By:  D.S. Poskitt 
Abstract:  This paper develops a new methodology for identifying the structure of VARMA time series models. The analysis proceeds by examining the echelon canonical form and presents a fully automatic data driven approach to model specification using a new technique to determine the Kronecker invariants. A novel feature of the inferential procedures developed here is that they work in terms of a canonical scalar ARMAX representation in which the exogenous regressors are given by predetermined contemporaneous and lagged values of other variables in the VARMA system. This feature facilitates the construction of algorithms which, from the perspective of macroeconomic modeling, are efficacious in that they do not use AR approximations at any stage. Algorithms that are applicable to both asymptotically stationary and unit-root, partially nonstationary (cointegrated) time series models are presented. A sequence of lemmas and theorems show that the algorithms are based on calculations that yield strongly consistent estimates. 
Keywords:  Algorithms, asymptotically stationary and cointegrated time series, echelon canonical form 
JEL:  C32 C52 C63 C87 
Date:  2009–11–12 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200912&r=ecm 
By:  Carlos Sánchez-González (Department of Economic Theory and Economic History, University of Granada.); Tere M. García-Muñoz (Department of Economic Theory and Economic History, University of Granada.) 
Abstract:  This paper describes a design for a least mean square error estimator in discrete time systems where the components of the state vector, in the measurement equation, are corrupted by different multiplicative noises in addition to observation noise. We show how known results can be considered a particular case of the algorithm stated in this paper. 
Keywords:  State estimation, multiplicative noise, uncertain observations 
Date:  2009–11–27 
URL:  http://d.repec.org/n?u=RePEc:gra:wpaper:09/09&r=ecm 
By:  Dominique Guegan (EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics, CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Justin Leroux (Institute for Applied Economics - HEC Montréal) 
Abstract:  We propose a novel methodology for forecasting chaotic systems which is based on exploiting the information conveyed by the local Lyapunov exponents of a system. This information is used to correct for the inevitable bias of most nonparametric predictors. Using simulated data, we show that gains in prediction accuracy can be substantial. 
Keywords:  chaotic systems 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00431726_v2&r=ecm 
By:  Nael Al-Anaswah; Bernd Wilfling 
Abstract:  In this paper we use a state-space model with Markov-switching to detect speculative bubbles in stock-price data. Our two innovations are (1) to adapt this technology to the state-space representation of a well-known present-value stock-price model, and (2) to estimate the model via Kalman filtering using a plethora of artificial as well as real-world data sets that are known to contain bubble periods. Analyzing the smoothed regime probabilities, we find that our technology is well suited to detecting stock-price bubbles in both types of data sets. 
Keywords:  Stock market dynamics; Detection of speculative bubbles; Present value models; State-space models with Markov-switching 
JEL:  C22 G12 
Date:  2009–09 
URL:  http://d.repec.org/n?u=RePEc:cqe:wpaper:0309&r=ecm 
By:  D. S. Poskitt; Arivalzahan Sengarapillai 
Abstract:  In this paper we investigate the use of description length principles to select an appropriate number of basis functions for functional data. We provide a flexible definition of the dimension of a random function that is constructed directly from the Karhunen-Loève expansion of the observed process. Our results show that although the classical, principal component variance decomposition technique will behave in a coherent manner, in general, the dimension chosen by this technique will not be consistent. We describe two description length criteria, and prove that they are consistent and that in low noise settings they will identify the true finite dimension of a signal that is embedded in noise. Two examples, one from mass spectroscopy and one from climatology, are used to illustrate our ideas. We also explore the application of different forms of the bootstrap for functional data and use these to demonstrate the workings of our theoretical results. 
Keywords:  Bootstrap, consistency, dimension determination, Karhunen-Loève expansion, signal-to-noise ratio, variance decomposition 
JEL:  C14 C22 
Date:  2009–11–12 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200913&r=ecm 
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics); Pierre-André Maugis (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I) 
Abstract:  We present here a new way of building vine copulas that generates a vast number of new vine copulas and thereby allows for more precise modeling in high dimensions. To deal with this great number of copulas, we present a new, efficient selection methodology using a lattice structure on the vine set. Our model allows for many degrees of freedom, but further improvements face numerous statistical and computational problems caused by the complexity of vines as estimators; we expose these problems in this paper. Robust n-variate models would be a great breakthrough for asset risk management in banks and insurance companies. 
Keywords:  Vines, multivariate copulas, model selection. 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00348884_v2&r=ecm 
By:  Crowley, Patrick M (College of Business, Texas A&M University); Schildt, Tony (Bank of Finland) 
Abstract:  Many indicators of business and growth cycles have been constructed by both private and public agencies and are now in use as monitoring devices of economic conditions and for forecasting purposes. As these indicators are largely composite constructs using other economic data, their frequency composition is likely different to that of the variables they are used as indicators for. In this paper we use the Hilbert-Huang transform, which comprises the empirical mode decomposition (EMD) and the Hilbert spectrum, in order to analyse the frequency content of comparable OECD confidence indicators and national sentiment indicators for industrial production and consumption. We then compare these with the frequency content of both industrial production and real consumption growth data. The Hilbert-Huang methodology first uses a sifting process (EMD) to identify the embedded frequencies within a time series, and the changing nature of these embedded frequencies (IMFs) can then be analysed by estimating the instantaneous frequency (using the Hilbert spectrum). This methodology has several advantages over conventional spectral analysis: it handles nonstationary and nonlinear processes, and it can cope with short data series. The aim of this paper is to decompose both indicator and actual economic variables to evaluate (i) whether the number of IMFs is equivalent in both indicators and actual variables and (ii) which frequencies are accounted for in indicators and which frequencies are not. 
Keywords:  economic growth; Hilbert-Huang transform; empirical mode decomposition; frequency domain; economic indicators 
JEL:  C63 E21 E32 
Date:  2009–12–22 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2009_033&r=ecm 
By:  Crowley, Patrick M (College of Business, Texas A&M University) 
Abstract:  The Hilbert-Huang transform (HHT) was developed late last century but has yet to be introduced to the vast majority of economists. The HHT is a way of extracting the frequency mode features of cycles embedded in any time series using an adaptive data method that can be applied without making any assumptions about stationarity or linear data-generating properties. This paper introduces economists to the two constituent parts of the HHT, namely empirical mode decomposition (EMD) and Hilbert spectral analysis. Illustrative applications using HHT are also made to two financial and three economic time series. 
Keywords:  business cycles; growth cycles; Hilbert-Huang transform (HHT); empirical mode decomposition (EMD); economic time series; nonstationarity; spectral analysis 
JEL:  C49 E00 
Date:  2009–11–21 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2009_032&r=ecm 
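Since both entries above rest on empirical mode decomposition, here is a bare-bones sifting sketch: cubic-spline envelopes through the local extrema are averaged and subtracted until an IMF emerges, then the IMF is removed and the process repeats on the residual. The fixed sifting count and the toy series are simplifying assumptions; production EMD codes use stopping criteria and careful end-point handling.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, n_sift=10):
    # One EMD extraction: subtract the mean of the upper and lower
    # cubic-spline envelopes for a fixed number of passes.
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        imax = argrelextrema(h, np.greater)[0]
        imin = argrelextrema(h, np.less)[0]
        if len(imax) < 3 or len(imin) < 3:
            break                      # too few extrema to build envelopes
        upper = CubicSpline(imax, h[imax])(t)
        lower = CubicSpline(imin, h[imin])(t)
        h = h - 0.5 * (upper + lower)
    return h

def emd(x, max_imfs=6):
    # Decompose x into IMFs plus a residual trend.
    imfs, resid = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(resid)
        if np.allclose(imf, resid):    # no more oscillatory content
            break
        imfs.append(imf)
        resid = resid - imf
    return imfs, resid

# toy series: fast cycle + slow cycle + trend, growth-cycle style
t = np.linspace(0.0, 10.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * t
imfs, trend = emd(x)
```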
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics) 
Abstract:  This paper focuses on the use of dynamical chaotic systems in Economics and Finance. In these fields, researchers employ methods different from those used by mathematicians and physicists. We discuss this point. Then, we present innovative statistical tools and problems that can be useful in practice for detecting the existence of chaotic behavior inside real data sets. 
Keywords:  Chaos ; Deterministic dynamical system ; Economics ; Estimation theory ; Finance ; Forecasting 
Date:  2009–04 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00375713_v2&r=ecm 
By:  Yoichi Okita (National Graduate Institute for Policy Studies); Wade D. Pfau (National Graduate Institute for Policy Studies); Giang Thanh Long (National Economics University (NEU)) 
Abstract:  Obtaining appropriate forecasts for the future population is a vital component of public policy analysis for issues ranging from government budgets to pension systems. Traditionally, demographic forecasters rely on a deterministic approach with various scenarios informed by expert opinion. This approach has been widely criticized, and we apply an alternative stochastic modeling framework that can provide a probability distribution for forecasts of the Japanese population. We find the potential for much greater variability in the future demographic situation for Japan than implied by existing deterministic forecasts. This demands greater flexibility from policy makers when confronting population aging issues. 
Keywords:  stochastic population forecasts, Japan, Lee-Carter method 
JEL:  J1 C53 
Date:  2009–05 
URL:  http://d.repec.org/n?u=RePEc:ngi:dpaper:0906&r=ecm 
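The stochastic framework referenced above is the Lee-Carter method. A minimal sketch: fit log m_{x,t} = a_x + b_x k_t via the SVD of the centered log-mortality matrix, then simulate k_t forward as a random walk with drift to obtain a full forecast distribution. The mortality matrix below is a random placeholder, not Japanese data.

```python
import numpy as np

def lee_carter_fit(logm):
    # Lee-Carter decomposition: a_x is the age-specific mean, (b_x, k_t)
    # the first singular pair, with the usual normalization sum(b) = 1.
    a = logm.mean(axis=1)
    U, s, Vt = np.linalg.svd(logm - a[:, None], full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]
    scale = b.sum()
    return a, b / scale, k * scale

def lee_carter_forecast(k, horizon, n_sims=1000, rng=None):
    # Standard time-series step: k_t follows a random walk with drift;
    # simulation yields a probability distribution of future indexes.
    rng = rng or np.random.default_rng()
    drift = (k[-1] - k[0]) / (len(k) - 1)
    sigma = np.std(np.diff(k) - drift)
    steps = drift + sigma * rng.standard_normal((n_sims, horizon))
    return k[-1] + np.cumsum(steps, axis=1)

# placeholder ages x years matrix of log death rates
logm = -4.0 + 0.1 * np.random.randn(20, 40) - 0.02 * np.arange(40)
a, b, k = lee_carter_fit(logm)
k_paths = lee_carter_forecast(k, horizon=50)   # fan-chart material
```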
By:  Stewart, Jay (U.S. Bureau of Labor Statistics) 
Abstract:  Time-use surveys collect very detailed information about individuals' activities over a short period of time, typically one day. As a result, a large fraction of observations have values of zero for the time spent in many activities, even for individuals who do the activity on a regular basis. For example, it is safe to assume that all parents do at least some childcare, but a relatively large fraction report no time spent in childcare on their diary day. Because of the large number of zeros, Tobit would seem to be the natural approach. However, it is important to recognize that the zeros in time-use data arise from a mismatch between the reference period of the data (the diary day) and the period of interest, which is typically much longer. Thus it is not clear that Tobit is appropriate. In this study, I examine the bias associated with alternative estimation procedures for estimating the marginal effects of covariates on time use. I begin by adapting the infrequency of purchase model, which is typically used to analyze expenditures, to time-diary data, and showing that OLS estimates are unbiased. Next, using simulated data, I examine the bias associated with three procedures that are commonly used to analyze time-diary data – Tobit, the Cragg (1971) two-part model, and OLS – under a number of alternative assumptions about the data-generating process. I find that the estimated marginal effects from Tobits are biased and that the extent of the bias varies with the fraction of zero-value observations. The two-part model performs significantly better, but generates biased estimates in certain circumstances. Only OLS generates unbiased estimates in all of the simulations considered here. 
Keywords:  Tobit, time use 
JEL:  C24 J22 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp4588&r=ecm 
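A sketch of the kind of simulation the entry describes, under an infrequency-of-purchase style DGP with hypothetical parameter values: everyone does the activity at some point in the week, but a one-day diary records zeros whenever the activity misses the sampled day. OLS on the zero-inflated diary values still recovers the effect of covariates on average daily time, because E[diary | x] equals weekly time divided by 7.

```python
import numpy as np

def simulate_diary(n, rng):
    # Hypothetical DGP: weekly hours depend on x; the activity occurs on
    # a random number of days, and the diary day catches it with
    # probability days/7. Then E[diary | x] = (days/7)*(weekly/days)
    # = weekly/7, the mean daily time.
    x = rng.standard_normal(n)
    weekly = np.exp(1.0 + 0.5 * x + 0.3 * rng.standard_normal(n))
    days_active = rng.integers(1, 8, n)            # 1..7 days per week
    on_diary_day = rng.random(n) < days_active / 7.0
    diary = np.where(on_diary_day, weekly / days_active, 0.0)
    return x, diary, weekly / 7.0                  # observed, true daily mean

rng = np.random.default_rng(0)
x, y_obs, y_true = simulate_diary(50_000, rng)

# OLS on the zero-inflated diary values vs. OLS on the true daily mean:
# the slope estimates agree up to simulation noise
X = np.column_stack([np.ones_like(x), x])
beta_obs = np.linalg.lstsq(X, y_obs, rcond=None)[0]
beta_true = np.linalg.lstsq(X, y_true, rcond=None)[0]
print(beta_obs, beta_true)
```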