nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒12‒11
27 papers chosen by
Sune Karlsson
Orebro University

  1. Inference on a Generalized Roy Model, with an Application to Schooling Decisions in France By d'Haultfoeuille, Xavier; Maurel, Arnaud
  2. Flexible Modeling of Conditional Distributions Using Smooth Mixtures of Asymmetric Student T Densities By Li, Feng; Villani, Mattias; Kohn, Robert
  3. Estimation and forecasting in large datasets with conditionally heteroskedastic dynamic common factors. By Lucia Alessi; Matteo Barigozzi; Marco Capasso
  4. Forecasting Macroeconomic Time Series With Locally Adaptive Signal Extraction By Giordani, Paolo; Villani, Mattias
  5. A Specification Test for Instrumental Variables Regression with Many Instruments By Yoonseok Lee; Ryo Okui
  6. A Robust Criterion for Determining the Number of Factors in Approximate Factor Models By Lucia Alessi; Matteo Barigozzi; Marco Capasso
  7. Nuisance parameters, composite likelihoods and a panel of GARCH models By Cavit Pakel; Neil Shephard; Kevin Sheppard
  8. Volatility and covariation of financial assets: a high-frequency analysis By Alvaro Cartea; Dimitrios Karyampas
  9. Forecasting Realized Volatility with Linear and Nonlinear Models By McAleer, M.; Medeiros, M.C.
  10. Test for cointegration rank in general vector autoregressions By B. Nielsen
  11. Realising the future: forecasting with high frequency based volatility (HEAVY) models By Neil Shephard; Kevin Sheppard
  12. Comparing univariate and multivariate models to forecast portfolio value-at-risk By Andre A. P.; Francisco J. Nogales; Esther Ruiz
  13. Adaptive Experimental Design Using the Propensity Score By Hahn, Jinyong; Hirano, Keisuke; Karlan, Dean
  14. Asymptotic behaviour of the CUSUM of squares test under stochastic and deterministic time trends By Jouni Sohkanen; B. Nielsen
  15. GDP nowcasting with ragged-edge data: A semi-parametric modelling By Laurent Ferrara; Dominique Guegan; Patrick Rakotomarolahy
  16. Inconsistency of a Unit Root Test against Stochastic Unit Root Processes By Daisuke Nagakura
  17. Vector Autoregressive Moving Average Identification for Macroeconomic Modeling: Algorithms and Theory By D.S. Poskitt
  18. Recursive linear estimation for discrete time systems in the presence of different multiplicative observation noises By Carlos Sánchez-González; Tere M. García-Muñoz
  19. Forecasting chaotic systems: The role of local Lyapunov exponents By Dominique Guegan; Justin Leroux
  20. Identification of speculative bubbles using state-space models with Markov-switching By Nael Al-Anaswah; Bernd Wilfling
  21. Description Length and Dimensionality Reduction in Functional Data Analysis By D. S. Poskitt; Arivalzahan Sengarapillai
  22. New Prospects on Vines By Dominique Guegan; Pierre-André Maugis
  23. An analysis of the embedded frequency content of macroeconomic indicators and their counterparts using the Hilbert-Huang transform By Crowley, Patrick M; Schildt, Tony
  24. How do you make a time series sing like a choir? Using the Hilbert-Huang transform to extract embedded frequencies from economic or financial time series By Crowley, Patrick M
  25. Chaos in Economics and Finance By Dominique Guegan
  26. A STOCHASTIC FORECAST MODEL FOR JAPAN'S POPULATION By Yoichi Okita; Wade D. Pfau; Giang Thanh Long
  27. Tobit or Not Tobit? By Stewart, Jay

  1. By: d'Haultfoeuille, Xavier (CREST-INSEE); Maurel, Arnaud (ENSAE-CREST)
    Abstract: This paper considers the identification and estimation of an extension of Roy's model (1951) of occupational choice, which includes a non-pecuniary component in the decision equation and allows for uncertainty about the potential outcomes. This framework is well suited to various economic contexts, including educational and sectoral choices, or migration decisions. We focus in particular on the identification of the non-pecuniary component under the condition that at least one variable affects the selection probability only through potential earnings, that is, under the opposite of the usual exclusion restrictions used to identify switching regression models and treatment effects. Point identification is achieved if such variables are continuous, while bounds are obtained otherwise. As a result, the distribution of the ex ante treatment effects can be point or set identified without any usual instruments. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. We apply our results to the educational context by providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.
    Keywords: Roy model, nonparametric identification, exclusion restrictions, schooling choices, ex ante returns to schooling
    JEL: C14 C25 J24
    Date: 2009–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4606&r=ecm
  2. By: Li, Feng (Department of Statistics, Stockholm University); Villani, Mattias (Research Department, Central Bank of Sweden); Kohn, Robert (Economics, The University of New South Wales)
    Abstract: A general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric Student-t densities with covariate dependent mixture weights. The four parameters of the components, the mean, degrees of freedom, scale and skewness, are all modelled as functions of the covariates. Inference is Bayesian and the computation is carried out using Markov chain Monte Carlo simulation. To enable model parsimony, a variable selection prior is used in each set of covariates and among the covariates in the mixing weights. The model is used to analyse the distribution of daily stock market returns, and is shown to forecast the distribution of returns more accurately than other widely used models for financial data.
    Keywords: Bayesian inference; Markov Chain Monte Carlo; Mixture of Experts; Variable selection; Volatility modeling.
    JEL: C11 C53
    Date: 2009–10–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0233&r=ecm
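    As a rough illustration (not taken from the paper), the conditional density of such a smooth mixture can be evaluated as below. The sketch assumes a two-piece (Fernandez-Steel style) skewing of the Student-t and multinomial-logit mixture weights, and holds the component parameters fixed rather than covariate-dependent; all names and values are hypothetical.

      import numpy as np
      from scipy.stats import t as student_t

      def split_t_pdf(y, mu, sigma, nu, gamma):
          # Two-piece skewed Student-t density: gamma > 1 skews right, gamma < 1 skews left.
          norm = 2.0 / (gamma + 1.0 / gamma)
          z = (y - mu) / sigma
          right = student_t.pdf(z / gamma, df=nu) / sigma
          left = student_t.pdf(z * gamma, df=nu) / sigma
          return norm * np.where(y >= mu, right, left)

      def smooth_mixture_pdf(y, x, betas, comp_params):
          # Mixture weights follow a multinomial logit (softmax) in the covariates x;
          # each component is a split-t with fixed parameters (mu, sigma, nu, gamma).
          scores = np.array([x @ b for b in betas])
          w = np.exp(scores - scores.max())
          w /= w.sum()
          dens = np.array([split_t_pdf(y, *p) for p in comp_params])
          return float(w @ dens)

      # toy evaluation: two components, intercept plus one covariate
      x = np.array([1.0, 0.5])
      betas = [np.zeros(2), np.array([0.3, -1.2])]          # first component is the reference
      comp_params = [(0.0, 1.0, 8.0, 1.0),                  # symmetric component
                     (-0.5, 2.0, 5.0, 0.7)]                 # left-skewed, fat-tailed component
      print(smooth_mixture_pdf(-1.0, x, betas, comp_params))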
  3. By: Lucia Alessi (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Matteo Barigozzi (European Center for the Advanced Research in Economics and Statistics (ECARES), Université libre de Bruxelles, Belgium.); Marco Capasso (Utrecht School of Economics, Utrecht University,  P.O. Box 80.115,  3508 TC  Utrecht, The Netherlands.)
    Abstract: We propose a new method for multivariate forecasting which combines Dynamic Factor and multivariate GARCH models. The information contained in large datasets is captured by few dynamic common factors, which we assume to be conditionally heteroskedastic. After presenting the model, we propose a multi-step estimation technique which combines asymptotic principal components and multivariate GARCH. We also prove consistency of the estimated conditional covariances. We present simulation results in order to assess the finite sample properties of the estimation technique. Finally, we carry out two empirical applications: one on macroeconomic series, with a particular focus on different measures of inflation, and one on financial asset returns. Our model outperforms the benchmarks in forecasting the inflation level, its conditional variance and the volatility of returns. Moreover, we are able to predict all the conditional covariances among the observable series. JEL Classification: C52, C53.
    Keywords: Dynamic Factor Models, Multivariate GARCH, Conditional Covariance, Inflation Forecasting, Volatility Forecasting.
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20091115&r=ecm
  4. By: Giordani, Paolo (Research Department, Central Bank of Sweden); Villani, Mattias (Research Department, Central Bank of Sweden)
    Abstract: We introduce a non-Gaussian dynamic mixture model for macroeconomic forecasting. The Locally Adaptive Signal Extraction and Regression (LASER) model is designed to capture relatively persistent AR processes (signal) contaminated by high frequency noise. The distribution of the innovations in both noise and signal is robustly modeled using mixtures of normals. The mean of the process and the variances of the signal and noise are allowed to shift suddenly or gradually, at an unknown number of unknown locations. The model is then capable of capturing movements in the mean and conditional variance of a series as well as in the signal-to-noise ratio. Four versions of the model are used to forecast six quarterly US and Swedish macroeconomic series. We conclude that (i) allowing for infrequent and large shifts in mean while imposing normal iid errors often leads to erratic forecasts, (ii) such shifts/breaks versions of the model can forecast well if robustified by allowing for non-normal errors and time varying variances, (iii) infrequent and large shifts in error variances outperform smooth and continuous shifts substantially when it comes to interval coverage, (iv) for point forecasts, robust time varying specifications improve slightly upon fixed parameter specifications on average, but the relative performances can differ sizably in various sub-samples, and (v) for interval forecasts, robust versions that allow for infrequent shifts in variances perform substantially and consistently better than time invariant specifications.
    Keywords: Bayesian inference; Forecast evaluation; Regime switching; State-space modeling; Dynamic mixture models
    JEL: C11 C53
    Date: 2009–10–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0234&r=ecm
  5. By: Yoonseok Lee (Dept. of Economics, University of Michigan); Ryo Okui (Institute of Economic Research, Kyoto University)
    Abstract: This paper considers specification testing for instrumental variables estimation in the presence of many instruments. The test proposed is a modified version of the Sargan (1958, Econometrica 26(3): 393-415) test of overidentifying restrictions. The test statistic asymptotically follows the standard normal distribution under the null hypothesis of correct specification when the number of instruments increases with the sample size. We find that the new test statistic is numerically equivalent up to a sign to the test statistic proposed by Hahn and Hausman (2002, Econometrica 70(1): 163-189). We also assess the size and power properties of the test.
    Keywords: Instrumental variables estimation, Many instruments, Overidentifying restrictions test, Specification test
    JEL: C12 C21
    Date: 2009–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1741&r=ecm
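    For reference, the classical Sargan statistic that the paper modifies can be computed as n times the R-squared from regressing the 2SLS residuals on the instruments; the minimal sketch below (illustrative only, with hypothetical names) uses the equivalent projection form.

      import numpy as np

      def sargan_statistic(y, X, Z):
          # Classical Sargan (1958) overidentification statistic: n * u'P_Z u / u'u,
          # where u are the 2SLS residuals and P_Z projects on the instruments Z.
          # With a fixed number of instruments it is asymptotically chi-squared with
          # df = (number of instruments - number of regressors) under correct specification.
          n = len(y)
          Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
          X_hat = Pz @ X
          beta_2sls = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
          u = y - X @ beta_2sls
          return n * (u @ Pz @ u) / (u @ u), Z.shape[1] - X.shape[1]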
  6. By: Lucia Alessi; Matteo Barigozzi; Marco Capasso
    Abstract: We modify the criterion by Bai and Ng (2002) for determining the number of factors in approximate factor models. As in the original criterion, for any given number of factors we estimate the common and idiosyncratic components of the model by applying principal component analysis. We select the true number of factors as the number that minimizes the variance explained by the idiosyncratic component. In order to avoid overparametrization, minimization is subject to penalization. At this step, we modify the original procedure by multiplying the penalty function by a positive real number, which allows us to tune its penalizing power, by analogy with the method used by Hallin and Liška (2007) in the frequency domain. The contribution of this paper is twofold. First, our criterion retains the asymptotic properties of the original criterion, but corrects its tendency to overestimate the true number of factors. Second, we provide a computationally easy way to implement the new method by iteratively applying the original criterion. Monte Carlo simulations show that our criterion is in general more robust than the original one. A better performance is achieved in particular in the case of large idiosyncratic disturbances. These conditions are the most difficult for detecting a factor structure but are not unusual in the empirical context. Two applications on a macroeconomic and a financial dataset are also presented.
    Keywords: Approximate factor models, information criterion, model selection.
    JEL: C52
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2009_023&r=ecm
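    A minimal sketch (not the authors' code) of the tuned criterion: a Bai-Ng style penalty is scaled by a constant c, and the chosen number of factors minimizes the penalized log of the idiosyncratic variance. The particular penalty below, and how c would be calibrated in practice, are assumptions for illustration.

      import numpy as np

      def n_factors_tuned(X, kmax, c=1.0):
          # Minimize IC_c(k) = log V(k) + c * k * p(N, T), where V(k) is the average
          # idiosyncratic variance after removing k principal components and p(N, T)
          # is an ICp2-type Bai-Ng penalty; c = 1 recovers the original criterion.
          T, N = X.shape
          Xs = (X - X.mean(0)) / X.std(0)                    # standardize each series
          evals, evecs = np.linalg.eigh(Xs.T @ Xs / T)       # sample covariance eigenpairs
          evecs = evecs[:, np.argsort(evals)[::-1]]          # loadings in descending order
          penalty = ((N + T) / (N * T)) * np.log(min(N, T))
          ics = []
          for k in range(1, kmax + 1):
              common = Xs @ evecs[:, :k] @ evecs[:, :k].T    # k-factor common component
              V_k = np.mean((Xs - common) ** 2)
              ics.append(np.log(V_k) + c * k * penalty)
          return 1 + int(np.argmin(ics))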
  7. By: Cavit Pakel (Department of Economics and Oxford-Man Institute, University of Oxford, Oxford); Neil Shephard (Oxford-Man Institute and Department of Economics, University of Oxford, Oxford); Kevin Sheppard (Oxford-Man Institute and Department of Economics, University of Oxford, Oxford)
    Abstract: We investigate the properties of the composite likelihood (CL) method for (T × N_T) GARCH panels. The defining feature of a GARCH panel with time series length T is that, while nuisance parameters are allowed to vary across the N_T series, other parameters of interest are assumed to be common. CL pools information across the panel instead of using information available in a single series only. Simulations and empirical analysis illustrate that CL performs well when T is reasonably large. However, due to the estimation error introduced through nuisance parameter estimation, CL is subject to the “incidental parameter” problem for small T.
    Keywords: ARCH models; composite likelihood; nuisance parameters; panel data
    JEL: C01 C14 C32
    Date: 2009–10–02
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:0912&r=ecm
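    A minimal sketch of the pooling idea (illustrative, not the authors' implementation): sum Gaussian GARCH(1,1) quasi-log-likelihoods across the panel with common (alpha, beta), concentrating out each series' intercept by variance targeting.

      import numpy as np
      from scipy.optimize import minimize

      def neg_composite_loglik(theta, panel):
          # Pooled GARCH(1,1) quasi-log-likelihood for a T x N panel of returns.
          # alpha and beta are common; omega_i = s2_i * (1 - alpha - beta) by variance targeting.
          alpha, beta = theta
          if alpha < 0 or beta < 0 or alpha + beta >= 1:
              return np.inf
          total = 0.0
          for r in panel.T:                                   # loop over the N series
              s2 = r.var()
              omega, h = s2 * (1 - alpha - beta), s2
              for t in range(1, len(r)):
                  h = omega + alpha * r[t - 1] ** 2 + beta * h
                  total += -0.5 * (np.log(h) + r[t] ** 2 / h)
          return -total

      # usage sketch: panel = np.random.default_rng(0).standard_normal((500, 20))
      # res = minimize(neg_composite_loglik, x0=[0.05, 0.90], args=(panel,), method="Nelder-Mead")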
  8. By: Alvaro Cartea; Dimitrios Karyampas
    Abstract: Using high frequency data for the price dynamics of equities we measure the impact that market microstructure noise has on estimates of (i) the volatility of returns and (ii) the variance-covariance matrix of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series into the true efficient price and the microstructure noise. This approach allows us to employ volatility estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this price series decomposition allows us to estimate the variance-covariance matrix of n assets in a more efficient way than the methods so far proposed in the literature. We illustrate our results by calculating how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM setting.
    Keywords: Volatility estimation, High-frequency data, Market microstructure theory, Covariation of assets, Matrix process, Kalman filter
    JEL: G12 G14 C22
    Date: 2009–12
    URL: http://d.repec.org/n?u=RePEc:cte:wbrepe:wp097609&r=ecm
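    The simplest version of such a decomposition is a local-level state-space model, observed log-price = random-walk efficient price + i.i.d. noise, filtered with the Kalman recursions. The sketch below shows that step only and is deliberately simpler than the paper's specification.

      import numpy as np

      def local_level_filter(p, q_eff, r_noise):
          # Kalman filter for p_t = m_t + e_t with m_t a random walk (variance q_eff)
          # and e_t i.i.d. microstructure noise (variance r_noise).
          # Returns the filtered efficient-price path.
          m, P = p[0], r_noise
          filtered = np.empty(len(p))
          for t, obs in enumerate(p):
              P = P + q_eff                 # prediction step
              K = P / (P + r_noise)         # Kalman gain
              m = m + K * (obs - m)         # update step
              P = (1 - K) * P
              filtered[t] = m
          return filtered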
  9. By: McAleer, M.; Medeiros, M.C. (Erasmus Econometric Institute)
    Abstract: In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 indexes. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intra-day returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed in the paper.
    Keywords: financial econometrics; volatility forecasting; neural networks; nonlinear models; realized volatility; bagging
    Date: 2009–11–24
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765017303&r=ecm
  10. By: B. Nielsen (Dept of Economics and Nuffield College, University of Oxford, Oxford.)
    Abstract: Johansen derived the asymptotic theory for his cointegration rank test statistic for a vector autoregression where the parameters are restricted so that the process is integrated of order one. It is investigated to what extent these parameter restrictions are binding. The eigenvalues of Johansen’s eigenvalue problem are shown to have the same consistency rates across the parameter space. The test statistic is shown to have the usual asymptotic distribution as long as the possibilities of additional unit roots and of singular explosiveness are ruled out. To prove the results the convergence of stochastic integrals with respect to singular explosive processes is considered.
    Date: 2009–09–22
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:0910&r=ecm
  11. By: Neil Shephard (Oxford-Man Institute and Department of Economics, University of Oxford); Kevin Sheppard (Department of Economics and Oxford-Man Institute, University of Oxford)
    Abstract: This paper studies in some detail a class of high frequency based volatility (HEAVY) models. These models are direct models of daily asset return volatility based on realized measures constructed from high frequency data. Our analysis identifies that the models have momentum and mean reversion effects, and that they adjust quickly to structural breaks in the level of the volatility process. We study how to estimate the models and how they perform through the credit crunch, comparing their fit to more traditional GARCH models. We analyse a model-based bootstrap which allows us to estimate the entire predictive distribution of returns. We also provide an analysis of missing data in the context of these models.
    Keywords: ARCH models; bootstrap; missing data; multiplicative error model; multistep ahead prediction; non-nested likelihood ratio test; realised kernel; realised volatility.
    Date: 2009–07–10
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:0903&r=ecm
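    In the HEAVY framework, the conditional variance of daily returns is driven by the lagged realized measure, with a companion recursion for the realized measure itself. The sketch below writes out this pair of recursions for given parameters under those (assumed) standard forms; quasi-likelihood estimation is omitted and the starting values are simple conventions.

      import numpy as np

      def heavy_filter(r, rm, params):
          # HEAVY recursions:
          #   var(r_t | F_{t-1}) = h_t  = omega   + alpha   * RM_{t-1} + beta   * h_{t-1}
          #   E(RM_t | F_{t-1})  = mu_t = omega_r + alpha_r * RM_{t-1} + beta_r * mu_{t-1}
          omega, alpha, beta, omega_r, alpha_r, beta_r = params
          h, mu = np.empty(len(r)), np.empty(len(rm))
          h[0], mu[0] = r.var(), rm.mean()              # simple starting values
          for t in range(1, len(r)):
              h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
              mu[t] = omega_r + alpha_r * rm[t - 1] + beta_r * mu[t - 1]
          return h, mu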
  12. By: Andre A. P.; Francisco J. Nogales; Esther Ruiz
    Abstract: This article addresses the problem of forecasting portfolio value-at-risk (VaR) with multivariate GARCH models vis-à-vis univariate models. Existing literature has tried to answer this question by analyzing only small portfolios and using a testing framework not appropriate for ranking VaR models. In this work we provide a more comprehensive look at the problem of portfolio VaR forecasting by using more appropriate statistical tests of comparative predictive ability. Moreover, we compare univariate vs. multivariate VaR models in the context of diversified portfolios containing a large number of assets and also provide evidence based on Monte Carlo experiments. We conclude that, if the sample size is moderately large, multivariate models outperform univariate counterparts on an out-of-sample basis.
    Keywords: Market risk, Backtesting, Conditional predictive ability, GARCH, Volatility, Capital requirements, Basel II
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws097222&r=ecm
  13. By: Hahn, Jinyong (UCLA); Hirano, Keisuke (U AZ); Karlan, Dean (Yale University and MIT Jameel Poverty Action Lab)
    Abstract: Many social experiments are run in multiple waves, or are replications of earlier social experiments. In principle, the sampling design can be modified in later stages or replications to allow for more efficient estimation of causal effects. We consider the design of a two-stage experiment for estimating an average treatment effect, when covariate information is available for experimental subjects. We use data from the first stage to choose a conditional treatment assignment rule for units in the second stage of the experiment. This amounts to choosing the propensity score, the conditional probability of treatment given covariates. We propose to select the propensity score to minimize the asymptotic variance bound for estimating the average treatment effect. Our procedure can be implemented simply using standard statistical software and has attractive large-sample properties.
    JEL: C10 C13 C14 C90 C93
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:ecl:yaleco:59&r=ecm
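    The variance bound in question is E[ sigma_1^2(X)/p(X) + sigma_0^2(X)/(1-p(X)) + (tau(X) - tau)^2 ]; minimizing it pointwise over the propensity score gives a Neyman-allocation style rule, p*(x) = sigma_1(x) / (sigma_0(x) + sigma_1(x)). A minimal sketch of that rule (illustrative only; the paper's procedure imposes more structure on the first-stage estimates):

      import numpy as np

      def optimal_propensity(sigma0, sigma1):
          # Pointwise minimizer of the ATE variance bound over the propensity score:
          # assign treatment more often in covariate cells where treated outcomes are noisier.
          return sigma1 / (sigma0 + sigma1)

      # example: control outcomes twice as noisy as treated outcomes in the first cell
      print(optimal_propensity(np.array([2.0, 1.0]), np.array([1.0, 1.0])))   # [0.333... 0.5]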
  14. By: Jouni Sohkanen (Dept of Economics, University of Oxford, Oxford); B. Nielsen (Nuffield College, University of Oxford, Oxford.)
    Abstract: We undertake a generalization of the cumulative sum of squares (CUSQ) test to the case of non-stationary autoregressive distributed lag models with quite general deterministic time trends. The test may be validly implemented with either ordinary least squares residuals or standardized forecast errors. Simulations suggest that there is little at stake in the choice between the two in the unit root case under Gaussian innovations, and that there is only very modest variation in the finite sample distribution across the parameter space.
    Date: 2009–08–31
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:0909&r=ecm
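    For reference, the basic CUSUM-of-squares statistic computed from a residual sequence has the classical Brown-Durbin-Evans form below; the paper's contribution concerns its asymptotic behaviour in nonstationary autoregressive distributed lag settings, which this sketch does not address.

      import numpy as np

      def cusum_of_squares(residuals):
          # max_t | sum_{s<=t} e_s^2 / sum_{s<=T} e_s^2 - t/T |
          # Large values signal instability of the error variance.
          e2 = residuals ** 2
          T = len(e2)
          C = np.cumsum(e2) / e2.sum()
          return np.max(np.abs(C - np.arange(1, T + 1) / T))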
  15. By: Laurent Ferrara (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Banque de France - Business Conditions and Macroeconomic Forecasting Directorate); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: This paper formalizes the process of forecasting unbalanced monthly data sets in order to obtain robust nowcasts and forecasts of the quarterly GDP growth rate through a semi-parametric modelling. This innovative approach relies on the use of non-parametric methods, based on nearest-neighbour and radial basis function approaches, to forecast the monthly variables involved in the parametric modelling of GDP using bridge equations. A real-time experiment is carried out on Euro area vintage data in order to anticipate, with an advance ranging from six months to one month, the GDP flash estimate for the whole zone.
    Keywords: Euro area GDP, real-time nowcasting, forecasting, non-parametric models.
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00344839_v2&r=ecm
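    The nearest-neighbour step used to extend the monthly indicators can be sketched as follows (illustrative only; the paper also uses radial basis functions and then feeds the completed monthly series into bridge equations for GDP).

      import numpy as np

      def knn_forecast(series, embed_dim=3, k=5):
          # One-step-ahead nearest-neighbour forecast of a monthly indicator:
          # build delay vectors of length embed_dim, find the k historical vectors
          # closest to the latest one, and average the values that followed them.
          x = np.asarray(series, dtype=float)
          patterns = np.array([x[t:t + embed_dim] for t in range(len(x) - embed_dim)])
          successors = x[embed_dim:]
          dist = np.linalg.norm(patterns - x[-embed_dim:], axis=1)
          nearest = np.argsort(dist)[:k]
          return successors[nearest].mean()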
  16. By: Daisuke Nagakura (Institute for Monetary and Economic Studies, Bank of Japan (E-mail: daisuke.nagakura@boj.or.jp))
    Abstract: In this paper, we develop the asymptotic theory of Hwang and Basawa (2005) for explosive random coefficient autoregressive (ERCA) models. Applying the theory, we prove that a locally best invariant (LBI) test in McCabe and Tremayne (1995), which is for the null of a unit root (UR) process against the alternative of a stochastic unit root (STUR) process, is inconsistent against a class of ERCA models. This class includes a class of STUR processes as special cases. We show, however, that the well-known Dickey-Fuller (DF) UR tests and an LBI test of Lee (1998) are consistent against a particular case of this class of ERCA models.
    Keywords: Locally Best Invariant Test, Consistency, Dickey-Fuller Test, LBI, RCA, STUR
    JEL: C12
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:ime:imedps:09-e-23&r=ecm
  17. By: D.S. Poskitt
    Abstract: This paper develops a new methodology for identifying the structure of VARMA time series models. The analysis proceeds by examining the echelon canonical form and presents a fully automatic data driven approach to model specification using a new technique to determine the Kronecker invariants. A novel feature of the inferential procedures developed here is that they work in terms of a canonical scalar ARMAX representation in which the exogenous regressors are given by predetermined contemporaneous and lagged values of other variables in the VARMA system. This feature facilitates the construction of algorithms which, from the perspective of macroeconomic modeling, are efficacious in that they do not use AR approximations at any stage. Algorithms that are applicable to both asymptotically stationary and unit-root, partially nonstationary (cointegrated) time series models are presented. A sequence of lemmas and theorems show that the algorithms are based on calculations that yield strongly consistent estimates.
    Keywords: Algorithms, asymptotically stationary and cointegrated time series, echelon
    JEL: C32 C52 C63 C87
    Date: 2009–11–12
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2009-12&r=ecm
  18. By: Carlos Sánchez-González (Department of Economic Theory and Economic History, University of Granada.); Tere M. García-Muñoz (Department of Economic Theory and Economic History, University of Granada.)
    Abstract: This paper describes a design for a least mean square error estimator in discrete time systems where the components of the state vector, in the measurement equation, are corrupted by different multiplicative noises in addition to observation noise. We show how known results can be considered a particular case of the algorithm stated in this paper.
    Keywords: State estimation, multiplicative noise, uncertain observations
    Date: 2009–11–27
    URL: http://d.repec.org/n?u=RePEc:gra:wpaper:09/09&r=ecm
  19. By: Dominique Guegan (EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris, CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Justin Leroux (Institute for Applied Economics - HEC MONTRÉAL)
    Abstract: We propose a novel methodology for forecasting chaotic systems which is based on exploiting the information conveyed by the local Lyapunov exponents of a system. This information is used to correct for the inevitable bias of most non-parametric predictors. Using simulated data, we show that gains in prediction accuracy can be substantial.
    Keywords: chaotic systems
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00431726_v2&r=ecm
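    For a one-dimensional map with known derivative, the local (finite-horizon) Lyapunov exponent is simply the average log absolute derivative along the trajectory. The sketch below computes it for the logistic map, as an illustration of the quantity the paper exploits rather than of the authors' nonparametric estimator.

      import numpy as np

      def local_lyapunov_logistic(x0, m, r=4.0):
          # Local Lyapunov exponent of x_{t+1} = r x_t (1 - x_t) over horizon m:
          # (1/m) * sum_j log |f'(x_j)| with f'(x) = r (1 - 2x).
          x, total = x0, 0.0
          for _ in range(m):
              total += np.log(abs(r * (1 - 2 * x)))
              x = r * x * (1 - x)
          return total / m

      print(local_lyapunov_logistic(0.2, 50))   # close to log(2) for r = 4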
  20. By: Nael Al-Anaswah; Bernd Wilfling
    Abstract: In this paper we use a state-space model with Markov-switching to detect speculative bubbles in stock-price data. Our two innovations are (1) to adapt this technology to the state-space representation of a well-known present-value stock-price model, and (2) to estimate the model via Kalman-filtering using a plethora of artificial as well as real-world data sets that are known to contain bubble periods. Analyzing the smoothed regime probabilities, we find that our technology is well suited to detecting stock-price bubbles in both types of data sets.
    Keywords: Stock market dynamics; Detection of speculative bubbles; Present value models; State-space models with Markov-switching
    JEL: C22 G12
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:0309&r=ecm
  21. By: D. S. Poskitt; Arivalzahan Sengarapillai
    Abstract: In this paper we investigate the use of description length principles to select an appropriate number of basis functions for functional data. We provide a flexible definition of the dimension of a random function that is constructed directly from the Karhunen-Loève expansion of the observed process. Our results show that although the classical principal component variance decomposition technique will behave in a coherent manner, in general, the dimension chosen by this technique will not be consistent. We describe two description length criteria, and prove that they are consistent and that in low noise settings they will identify the true finite dimension of a signal that is embedded in noise. Two examples, one from mass spectroscopy and one from climatology, are used to illustrate our ideas. We also explore the application of different forms of the bootstrap for functional data and use these to demonstrate the workings of our theoretical results.
    Keywords: Bootstrap, consistency, dimension determination, Karhunen-Loève expansion, signal-to-noise ratio, variance decomposition
    JEL: C14 C22
    Date: 2009–11–12
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2009-13&r=ecm
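    The classical variance-decomposition rule that the paper benchmarks against can be sketched directly from the empirical Karhunen-Loève expansion of curves discretized on a common grid; the names and the 95% threshold below are illustrative assumptions.

      import numpy as np

      def kl_dimension(curves, threshold=0.95):
          # Rows are curves, columns are grid points. Returns the eigenvalues of the
          # sample covariance and the smallest number of components whose cumulated
          # share of total variance reaches the threshold.
          X = curves - curves.mean(axis=0)
          cov = X.T @ X / X.shape[0]
          evals = np.linalg.eigvalsh(cov)[::-1]
          share = np.cumsum(evals) / evals.sum()
          return evals, int(np.searchsorted(share, threshold) + 1)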
  22. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Pierre-André Maugis (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: We present here a new way of building vine copulas that allows us to create a vast number of new vine copulas, allowing for more precise modeling in high dimensions. To deal with this great number of copulas we present a new efficient selection methodology using a lattice structure on the vine set. Our model allows for many degrees of freedom, but further improvements face numerous statistical and computational problems caused by the complexity of vines as estimators, problems that we expose in this paper. Robust n-variate models would be a great breakthrough for asset risk management in banks and insurance companies.
    Keywords: Vines, multivariate copulas, model selection.
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00348884_v2&r=ecm
  23. By: Crowley, Patrick M (College of Business, Texas A&M University); Schildt, Tony (Bank of Finland)
    Abstract: Many indicators of business and growth cycles have been constructed by both private and public agencies and are now in use as monitoring devices of economic conditions and for forecasting purposes. As these indicators are largely composite constructs using other economic data, their frequency composition is likely different to that of the variables they are used as indicators for. In this paper we use the Hilbert-Huang transform, which comprises the empirical mode decomposition (EMD) and the Hilbert spectrum, in order to analyse the frequency content of comparable OECD confidence indicators and national sentiment indicators for industrial production and consumption. We then compare these with the frequency content of both industrial production and real consumption growth data. The Hilbert-Huang methodology first uses a sifting process (EMD) to identify the embedded frequencies within a time series, and the changing nature of these embedded frequencies (IMFs) can then be analysed by estimating the instantaneous frequency (using the Hilbert spectrum). This methodology has several advantages over conventional spectral analysis: it handles non-stationary and non-linear processes, and it can cope with short data series. The aim of this paper is to decompose both indicator and actual economic variables to evaluate (i) whether the number of IMFs is equivalent in both indicators and actual variables, and (ii) which frequencies are accounted for in the indicators and which are not.
    Keywords: economic growth; Hilbert-Huang transform; empirical mode decomposition; frequency domain; economic indicators
    JEL: C63 E21 E32
    Date: 2009–12–22
    URL: http://d.repec.org/n?u=RePEc:hhs:bofrdp:2009_033&r=ecm
  24. By: Crowley, Patrick M (College of Business, Texas A&M University)
    Abstract: The Hilbert-Huang transform (HHT) was developed late last century but has yet to be introduced to the vast majority of economists. The HHT is a way of extracting the frequency mode features of cycles embedded in any time series using an adaptive data method that can be applied without making any assumptions about stationarity or linear data-generating properties. This paper introduces economists to the two constituent parts of the HHT, namely empirical mode decomposition (EMD) and Hilbert spectral analysis. Illustrative applications using HHT are also made to two financial and three economic time series.
    Keywords: business cycles; growth cycles; Hilbert-Huang transform (HHT); empirical mode decomposition (EMD); economic time series; non-stationarity; spectral analysis
    JEL: C49 E00
    Date: 2009–11–21
    URL: http://d.repec.org/n?u=RePEc:hhs:bofrdp:2009_032&r=ecm
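    Once EMD has produced the intrinsic mode functions, the Hilbert-spectral step is standard: form the analytic signal, unwrap its phase, and differentiate to obtain the instantaneous frequency. A minimal sketch of that step (the EMD sifting itself is omitted):

      import numpy as np
      from scipy.signal import hilbert

      def instantaneous_frequency(imf, dt=1.0):
          # Instantaneous frequency (cycles per time unit) of one intrinsic mode function.
          analytic = hilbert(imf)                      # imf + i * H[imf]
          phase = np.unwrap(np.angle(analytic))
          return np.diff(phase) / (2.0 * np.pi * dt)

      # example: a pure cosine at 0.05 cycles per period is recovered (up to edge effects)
      t = np.arange(500)
      print(instantaneous_frequency(np.cos(2 * np.pi * 0.05 * t)).mean())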
  25. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: This paper focuses on the use of dynamical chaotic systems in Economics and Finance. In these fields, researchers employ methods different from those used by mathematicians and physicists. We discuss this point. Then, we present statistical tools and problems which are innovative and can be useful in practice to detect the existence of chaotic behavior inside real data sets.
    Keywords: Chaos; Deterministic dynamical system; Economics; Estimation theory; Finance; Forecasting
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00375713_v2&r=ecm
  26. By: Yoichi Okita (National Graduate Institute for Policy Studies); Wade D. Pfau (National Graduate Institute for Policy Studies); Giang Thanh Long (National Economics University (NEU))
    Abstract: Obtaining appropriate forecasts for the future population is a vital component of public policy analysis for issues ranging from government budgets to pension systems. Traditionally, demographic forecasters rely on a deterministic approach with various scenarios informed by expert opinion. This approach has been widely criticized, and we apply an alternative stochastic modeling framework that can provide a probability distribution for forecasts of the Japanese population. We find the potential for much greater variability in the future demographic situation for Japan than implied by existing deterministic forecasts. This demands greater flexibility from policy makers when confronting population aging issues.
    Keywords: stochastic population forecasts, Japan, Lee-Carter method
    JEL: J1 C53
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:ngi:dpaper:09-06&r=ecm
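    The Lee-Carter method named in the keywords fits log m(x,t) = a_x + b_x k_t by a singular value decomposition of the centred log-mortality rates and forecasts the time index k_t as a random walk with drift; simulating future k_t paths is what yields a probability distribution for the forecasts. A minimal sketch under those standard conventions (not the authors' code):

      import numpy as np

      def lee_carter_fit(log_mx):
          # log_mx: ages in rows, years in columns. Normalization: sum(b) = 1.
          a = log_mx.mean(axis=1)
          U, s, Vt = np.linalg.svd(log_mx - a[:, None], full_matrices=False)
          b = U[:, 0] / U[:, 0].sum()
          k = s[0] * Vt[0] * U[:, 0].sum()
          return a, b, k

      def simulate_k(k, horizon, n_sims=1000, seed=0):
          # Random walk with drift for k_t; each simulated path is one future scenario.
          rng = np.random.default_rng(seed)
          drift = (k[-1] - k[0]) / (len(k) - 1)
          sigma = np.std(np.diff(k) - drift)
          shocks = rng.normal(drift, sigma, size=(n_sims, horizon))
          return k[-1] + np.cumsum(shocks, axis=1)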
  27. By: Stewart, Jay (U.S. Bureau of Labor Statistics)
    Abstract: Time-use surveys collect very detailed information about individuals' activities over a short period of time, typically one day. As a result, a large fraction of observations have values of zero for the time spent in many activities, even for individuals who do the activity on a regular basis. For example, it is safe to assume that all parents do at least some childcare, but a relatively large fraction report no time spent in childcare on their diary day. Because of the large number of zeros, Tobit would seem to be the natural approach. However, it is important to recognize that the zeros in time-use data arise from a mismatch between the reference period of the data (the diary day) and the period of interest, which is typically much longer. Thus it is not clear that Tobit is appropriate. In this study, I examine the bias associated with alternative estimation procedures for estimating the marginal effects of covariates on time use. I begin by adapting the infrequency of purchase model, which is typically used to analyze expenditures, to time-diary data and showing that OLS estimates are unbiased. Next, using simulated data, I examine the bias associated with three procedures that are commonly used to analyze time-diary data – Tobit, the Cragg (1971) two-part model, and OLS – under a number of alternative assumptions about the data-generating process. I find that the estimated marginal effects from Tobits are biased and that the extent of the bias varies with the fraction of zero-value observations. The two-part model performs significantly better, but generates biased estimates in certain circumstances. Only OLS generates unbiased estimates in all of the simulations considered here.
    Keywords: Tobit, time use
    JEL: C24 J22
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4588&r=ecm
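    A compact Monte Carlo in the spirit of the paper (all numbers are illustrative assumptions, not the author's design): daily time has long-run mean a + b*x but is realized only on a fraction of diary days, generating many zeros; OLS of observed daily time on x targets b directly, while a Tobit that treats the zeros as censoring delivers the marginal effect beta * Phi(x'beta / sigma), which need not equal b under this data-generating process.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      n, a, b, p_do = 5000, 30.0, 10.0, 0.4              # long-run daily mean is a + b*x
      x = rng.uniform(0, 2, n)
      do_today = rng.random(n) < p_do                    # activity done on the diary day?
      y = np.where(do_today, (a + b * x) / p_do + rng.normal(0, 20, n), 0.0)
      y = np.clip(y, 0, None)                            # time spent cannot be negative
      X = np.column_stack([np.ones(n), x])

      # OLS: the slope estimates the effect of x on average daily time directly
      ols_slope = np.linalg.lstsq(X, y, rcond=None)[0][1]

      # Tobit MLE treating the zeros as censoring at zero
      def neg_tobit_loglik(theta):
          beta, log_sigma = theta[:2], theta[2]
          sigma = np.exp(log_sigma)
          xb = X @ beta
          ll = np.where(y > 0, norm.logpdf((y - xb) / sigma) - log_sigma, norm.logcdf(-xb / sigma))
          return -ll.sum()

      res = minimize(neg_tobit_loglik, x0=[0.0, 0.0, np.log(y.std())],
                     method="Nelder-Mead", options={"maxiter": 5000})
      beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
      tobit_me = beta_hat[1] * norm.cdf(X.mean(0) @ beta_hat / sigma_hat)   # dE[y]/dx at the mean
      print(f"true effect {b:.1f}, OLS {ols_slope:.1f}, Tobit marginal effect {tobit_me:.1f}")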

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.