nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒02‒16
seventeen papers chosen by
Sune Karlsson
Örebro University

  1. Using Implied Probabilities to Improve Estimation with Unconditional Moment Restrictions By Alain Guay; Florian Pelgrin
  2. Mixed Exponential Power Asymmetric Conditional Heteroskedasticity By Mohammed Bouaddi; Jeroen V.K. Rombouts
  3. Modeling Tick-by-Tick Realized Correlations By Fulvio Corsi; Francesco Audrino
  4. Realized Covariance Tick-by-Tick in Presence of Rounded Time Stamps and General Microstructure Effects By Fulvio Corsi; Francesco Audrino
  5. Estimation of k-factor GIGARCH process: a Monte Carlo study. By Abdou Kâ Diongue; Dominique Guegan
  6. Forecasting Macroeconomic Variables Using Diffusion Indexes in Short Samples with Structural Change By Anindya Banerjee; Massimiliano Marcellino; Igor Masten
  7. A New Approach to Drawing States in State Space Models By McCAUSLAND, William J.; MILLER, Shirley; PELLETIER, Denis
  8. Factor-MIDAS for Now- and Forecasting with Ragged-Edge Data: A Model Comparison for German GDP By Massimiliano Marcellino; Christian Schumacher
  9. Continuous time extraction of a nonstationary signal with illustrations in continuous low-pass and band-pass filtering By Tucker S. McElroy; Thomas M. Trimbur
  10. Functional Form Misspecification in Regressions with a Unit Root By Ioannis Kasparis
  11. Factor-augmented Error Correction Models By Anindya Banerjee; Massimiliano Marcellino
  12. Effect of noise filtering on predictions: on the routes of chaos. By Dominique Guegan
  13. Clustering Heteroskedastic Time Series by Model-Based Procedures By Edoardo Otranto
  14. Dynamic GMM Estimation With Structural Breaks. An Application to Global Warming and its Causes. By Travaglini, Guido
  15. Predictive Systems: Living with Imperfect Predictors By Lubos Pastor; Robert F. Stambaugh
  16. Brazil within Brazil: testing the poverty map methodology in Minas Gerais By Leite, Phillippe George; Lanjouw, Peter; Elbers, Chris
  17. Introduction to financial surveillance By Frisén, Marianne

  1. By: Alain Guay; Florian Pelgrin
    Abstract: In this paper, we investigate the information content of implied probabilities (Back and Brown, 1993) to improve estimation in unconditional moment condition models. We propose and evaluate two 3-step Euclidean empirical likelihood (3S-EEL) estimators and their bias-corrected versions for weakly dependent data. The first one is the time-series extension of the 3S-EEL proposed by Antoine, Bonnal and Renault (2007). The second one is new and, in contrast, only uses an estimator of the weighting matrix evaluated at an efficient 2-step GMM estimator, while leaving the Jacobian matrix unrestricted. Both estimators use implied probabilities to achieve higher-order improvements relative to the traditional GMM estimator. A Monte Carlo study reveals that the finite- and large-sample properties of the (bias-corrected) 3-step estimators compare very favorably with those of the existing approaches: the 2-step GMM and the continuous updating estimator. As an application, we re-assess the empirical evidence regarding the New Keynesian Phillips curve in the US. (A schematic sketch of the moment-condition setup follows this entry.)
    Keywords: Information-based inference, Implied probabilities, Weak identification, Generalized method of moments, Phillips curve
    JEL: C13 C14 E31
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0747&r=ecm
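    Background sketch (not taken from the paper; a standard formulation of the moment-condition setup): with unconditional moment restrictions E[g(x_t, \theta_0)] = 0, the GMM estimator solves
      \hat\theta = \arg\min_\theta \bar g_T(\theta)' W_T \bar g_T(\theta),  with  \bar g_T(\theta) = T^{-1} \sum_t g(x_t, \theta),
    where W_T estimates the inverse long-run variance of the moments in the efficient 2-step version. The Euclidean empirical likelihood implied probabilities attached to the observations then have the closed form
      \hat\pi_t = T^{-1} [ 1 - (g(x_t, \hat\theta) - \bar g_T(\hat\theta))' \tilde\Omega^{-1} \bar g_T(\hat\theta) ],
    which sum to one and reweight the sample so that the moment conditions hold exactly; how these probabilities enter the two 3-step estimators, and the bias corrections, are specific to the paper.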
  2. By: Mohammed Bouaddi; Jeroen V.K. Rombouts
    Abstract: To match the stylized facts of high-frequency financial time series precisely and parsimoniously, this paper presents a finite mixture of conditional exponential power distributions in which each component exhibits asymmetric conditional heteroskedasticity. We provide stationarity conditions and unconditional moments up to the fourth order. We apply this new class to Dow Jones index returns. We find that a two-component mixed exponential power distribution dominates mixed normal distributions with more components, and more parameters, both in-sample and out-of-sample. In contrast to the mixed normal distributions, all the conditional variance processes become stationary. This happens because the mixed exponential power distribution allows for component-specific shape parameters, so that it better captures the tail behaviour. The more general new class therefore has attractive features over mixed normal distributions in our application: fewer components are necessary and the conditional variances in the components are stationary processes. Results on NASDAQ index returns are similar. (A schematic sketch of the component density follows this entry.)
    Keywords: Finite mixtures, exponential power distributions, conditional heteroskedasticity, asymmetry, heavy tails, value at risk
    JEL: C11 C22 C52
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0749&r=ecm
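    Background sketch (not taken from the paper; a generic form of the building blocks named in the abstract): an exponential power (generalized error) density with location \mu, scale \sigma and shape \nu is
      f(x; \mu, \sigma, \nu) = \nu / (2 \sigma \Gamma(1/\nu)) \exp\{ -(|x - \mu| / \sigma)^\nu \},
    nesting the normal (\nu = 2) and the Laplace (\nu = 1). A two-component mixed exponential power model combines two such densities with weights \pi and 1 - \pi and lets each component scale \sigma_{j,t} follow its own asymmetric GARCH-type recursion; the exact parameterization, stationarity conditions and unconditional moments are derived in the paper.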
  3. By: Fulvio Corsi; Francesco Audrino
    Abstract: We propose a tree-structured heterogeneous autoregressive (tree-HAR) process as a simple and parsimonious model for the estimation and prediction of tick-by-tick realized correlations. The model can accommodate regime shifts in the conditional mean dynamics of the realized correlation series that depend on time and on other relevant predictors. Testing the model on S&P 500 and 30-year Treasury bond futures realized correlations, we provide empirical evidence that the tree-HAR model reaches a good compromise between simplicity and flexibility, and yields accurate single- and multi-step out-of-sample forecasts. Such forecasts are also better than those obtained from other standard approaches. (The baseline HAR regression is sketched after this entry.)
    Keywords: High frequency data, Realized correlation, Stock-bond correlation, Tree-structured models, HAR, Regimes
    JEL: C13 C22 C51 C53
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:usg:dp2008:2008-05&r=ecm
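    Background sketch (not taken from the paper; the standard HAR regression that the tree-HAR generalizes): with RC_t the daily realized correlation and RC_t^{(w)}, RC_t^{(m)} its weekly and monthly averages (typically over 5 and 22 trading days),
      RC_{t+1} = c + \beta^{(d)} RC_t + \beta^{(w)} RC_t^{(w)} + \beta^{(m)} RC_t^{(m)} + \epsilon_{t+1}.
    The tree-HAR version described in the abstract lets these conditional-mean coefficients vary across regimes identified by a binary tree grown on time and other relevant predictors.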
  4. By: Fulvio Corsi; Francesco Audrino
    Abstract: This paper presents two classes of tick-by-tick covariance estimators adapted to the case where price time stamps are rounded to a frequency lower than the typical arrival rate of tick prices. We investigate, through Monte Carlo simulations, the behavior of such estimators under realistic market microstructure conditions analogous to those of the financial data studied in the empirical section: non-synchronous trading, a general ARMA structure for the microstructure noise, and true lead-lag cross-covariance. Simulation results show the robustness of the proposed tick-by-tick covariance estimators to time-stamp rounding and their overall superiority to competing covariance estimators under empirically realistic microstructure conditions. (The plain realized covariance estimator is sketched after this entry.)
    Keywords: High frequency data, Realized covariance, Market microstructure, Bias correction
    JEL: C13 C22 C51 C53
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:usg:dp2008:2008-04&r=ecm
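    Background sketch (not taken from the paper): with synchronized intraday returns r_{1,i} and r_{2,i} on two assets, the plain realized covariance over a trading day is
      RCov = \sum_i r_{1,i} r_{2,i}.
    Non-synchronous trading, microstructure noise and coarsely rounded time stamps bias this simple estimator (the Epps effect); the tick-by-tick estimators proposed in the paper are designed to be robust to exactly these features.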
  5. By: Abdou Kâ Diongue (Université Gaston Berger and School of Economics and Finance); Dominique Guegan (Centre d'Economie de la Sorbonne and Paris School of Economics)
    Abstract: In this paper, we discuss the parameter estimation for a k-factor generalized long memory process with conditionally heteroskedastic noise. Two estimation methods are proposed. The first method is based on the conditional distribution of the process and the second is obtained as an extension of Whittle's estimation approach. For comparison purposes, Monte Carlo simulations are used to evaluate the finite sample performance of these estimation techniques.
    Keywords: Long memory, Gegenbauer polynomial, heteroskedasticity, conditional sum of squares, Whittle estimation.
    JEL: C53
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:b08004&r=ecm
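    Background sketch (not taken from the paper; the usual way a k-factor Gegenbauer long-memory process is written): with B the backshift operator,
      \prod_{i=1}^{k} (1 - 2 u_i B + B^2)^{d_i} (y_t - \mu) = \epsilon_t,   |u_i| <= 1,
    which generates long memory at the k Gegenbauer frequencies \lambda_i = \arccos(u_i); the GIGARCH version lets the innovations \epsilon_t be conditionally heteroskedastic (GARCH-type). Consistent with the keywords, the two estimators compared in the paper are a conditional-sum-of-squares method and a Whittle-type frequency-domain method.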
  6. By: Anindya Banerjee; Massimiliano Marcellino; Igor Masten
    Abstract: We conduct a detailed simulation study of the forecasting performance of diffusion index-based methods in short samples with structural change. We consider several data generating processes, to mimic different types of structural change, and compare the relative forecasting performance of factor models and more traditional time series methods. We find that changes in the loading structure of the factors into the variables of interest are extremely important in determining the performance of factor models. We complement the analysis with an empirical evaluation of forecasts for the key macroeconomic variables of the Euro area and Slovenia, for which relatively short samples are officially available and structural changes are likely. The results are consistent with the findings of the simulation exercise, and confirm the relatively good performance of factor-based forecasts even in short samples with structural change. (The generic diffusion-index forecasting setup is sketched after this entry.)
    Keywords: Factor models, forecasts, time series models, structural change, short samples, parameter uncertainty
    JEL: C53 C32 E37
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/17&r=ecm
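    Background sketch (not taken from the paper; the generic diffusion-index setup): factors F_t are extracted, typically by principal components, from a large panel X_t = \Lambda F_t + e_t, and the h-step-ahead forecast of a target variable is built from
      y_{t+h} = \alpha + \beta' \hat F_t + \gamma(L) y_t + \epsilon_{t+h}.
    The simulations in the paper ask how structural change, in particular changes in the loadings \Lambda, affects this approach when the sample is short.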
  7. By: McCAUSLAND, William J.; MILLER, Shirley; PELLETIER, Denis
    Abstract: We introduce a new method for drawing state variables in Gaussian state space models from their conditional distribution given parameters and observations. Unlike standard methods, our method does not involve Kalman filtering. We show that for some important cases, our method is computationally more efficient than standard methods in the literature. We consider two applications of our method.
    Keywords: State space models, Stochastic volatility, Count data
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:mtl:montde:2007-06&r=ecm
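    Background sketch (not taken from the paper; the generic linear Gaussian state space form such methods apply to):
      y_t = Z_t \alpha_t + \epsilon_t,  \epsilon_t ~ N(0, H_t),
      \alpha_{t+1} = T_t \alpha_t + \eta_t,  \eta_t ~ N(0, Q_t).
    Conditional on parameters and observations, the stacked state vector (\alpha_1', ..., \alpha_n')' is jointly Gaussian with a block-banded precision matrix, so it can in principle be drawn directly via a sparse (band) Cholesky factorization rather than through a Kalman-filter-based simulation smoother; the specific algorithm and the efficiency comparisons are in the paper.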
  8. By: Massimiliano Marcellino; Christian Schumacher
    Abstract: This paper compares different ways to estimate the current state of the economy using factor models that can handle unbalanced datasets. Due to the different release lags of business cycle indicators, data unbalancedness often emerges at the end of multivariate samples, which is sometimes referred to as the "ragged edge" of the data. Using a large monthly dataset of the German economy, we compare the performance of different factor models in the presence of the ragged edge: static and dynamic principal components based on realigned data, the Expectation-Maximisation (EM) algorithm, and the Kalman smoother in a state-space model context. The monthly factors are used to estimate current-quarter GDP, called the "nowcast", using different versions of what we call factor-based mixed-data sampling (Factor-MIDAS) approaches. We compare all possible combinations of factor estimation methods and Factor-MIDAS projections with respect to nowcast performance. Additionally, we compare the performance of the nowcast factor models with the performance of quarterly factor models based on time-aggregated, and thus balanced, data, which neglect the most timely observations of business cycle indicators at the end of the sample. Our empirical findings show that the factor estimation methods do not differ much with respect to nowcasting accuracy. Concerning the projections, the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the nowcast factor models that can exploit ragged-edge data. (A generic Factor-MIDAS projection is sketched after this entry.)
    Keywords: nowcasting, business cycle, large factor models, mixed-frequency data, missing values, MIDAS
    JEL: E37 C53
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/16&r=ecm
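    Background sketch (not taken from the paper; a generic MIDAS projection of the kind the label refers to): with \hat F_{t_m} a monthly factor estimate and y_{t_q} quarterly GDP growth, a basic Factor-MIDAS nowcast equation is
      y_{t_q + h} = \beta_0 + \beta_1 \sum_{k=0}^{K} w_k(\theta) \hat F_{t_m - k} + \epsilon_{t_q + h},
    where the lag weights are parameterized parsimoniously, for example by exponential Almon weights w_k(\theta) = \exp(\theta_1 k + \theta_2 k^2) / \sum_j \exp(\theta_1 j + \theta_2 j^2). The paper compares several such projections combined with different ways of extracting \hat F_t from ragged-edge monthly data.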
  9. By: Tucker S. McElroy; Thomas M. Trimbur
    Abstract: This paper sets out the theoretical foundations for continuous-time signal extraction in econometrics. Continuous-time modeling gives an effective strategy for treating stock and flow data, irregularly spaced data, and changing frequency of observation. We rigorously derive the optimal continuous-lag filter when the signal component is nonstationary, and provide several illustrations, including a new class of continuous-lag Butterworth filters for trend and cycle estimation.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2007-68&r=ecm
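    Background sketch (not taken from the paper): the classical continuous-time low-pass Butterworth filter of order n has squared gain
      |G(\lambda)|^2 = 1 / (1 + (\lambda / \lambda_c)^{2n}),
    passing frequencies below the cutoff \lambda_c and suppressing higher frequencies more sharply as n grows; the continuous-lag Butterworth trend and cycle filters derived in the paper belong to this family, extended to nonstationary signals and band-pass applications.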
  10. By: Ioannis Kasparis
    Abstract: We examine the limit properties of the Non-linear Least Squares (NLS) estimator under functional form misspecification in regression models with a unit root. Our theoretical framework is the same as that of Park and Phillips (Econometrica, 2001). We show that the limit behaviour of the NLS estimator is largely determined by the relative order of magnitude of the true and fitted models. If the estimated model is of a different order of magnitude than the true model, the estimator converges to boundary points. When the pseudo-true value is on a boundary, standard methods for obtaining rates of convergence and limit distribution results are not applicable. We provide convergence rates and limit distribution results when the pseudo-true value is an interior point. If functional form misspecification is committed in the presence of stochastic trends, the convergence rates can be slower and the limit distribution different from that obtained under correct specification. (The generic setting is sketched after this entry.)
    Keywords: Functional Form, Pseudo-true value, Unit root
    Date: 2008–02
    URL: http://d.repec.org/n?u=RePEc:ucy:cypeua:2-2008&r=ecm
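    Background sketch (not taken from the paper; the generic setting): the fitted regression is y_t = f(x_t, \theta) + u_t with a unit-root regressor x_t = x_{t-1} + v_t, and \theta is estimated by NLS,
      \hat\theta = \arg\min_\theta \sum_t (y_t - f(x_t, \theta))^2.
    When f is misspecified, \hat\theta converges to a pseudo-true value; whether that value is interior or on the boundary of the parameter space, and hence which rates and limit distributions apply, depends on the relative orders of magnitude of the true and fitted functions along the stochastic trend, as the abstract explains.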
  11. By: Anindya Banerjee; Massimiliano Marcellino
    Abstract: This paper brings together several important strands of the econometrics literature: error correction, cointegration and dynamic factor models. It introduces the Factor-augmented Error Correction Model (FECM), where the factors estimated from a large set of variables in levels are jointly modelled with a few key economic variables of interest. With respect to the standard ECM, the FECM protects, at least in part, against omitted-variable bias and against the dependence of cointegration analysis on the specific limited set of variables under analysis. It may also in some cases be a refinement of the standard Dynamic Factor Model (DFM), since it allows us to include the error correction terms in the equations and, by allowing for cointegration, prevents the errors from being non-invertible moving average processes. In addition, the FECM is a natural generalization of the factor-augmented VARs (FAVAR) considered by Bernanke, Boivin and Eliasz (2005), inter alia, which are specified in first differences and are therefore misspecified in the presence of cointegration. The FECM has a vast range of applicability. A set of Monte Carlo experiments and two detailed empirical examples highlight its merits in finite samples relative to standard ECM and FAVAR models. The analysis is conducted primarily within an in-sample framework, although the out-of-sample implications are also explored. (A schematic FECM specification is sketched after this entry.)
    Keywords: Dynamic Factor Models, Error Correction Models, Cointegration, Factor-augmented Error Correction Models, VAR, FAVAR
    JEL: C32 E17
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/15&r=ecm
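    Background sketch (not taken from the paper; a schematic rendering of the idea, which may differ from the paper's exact specification): collect the few variables of interest Y_t and the factors \hat F_t estimated from the large panel in levels into Z_t = (Y_t', \hat F_t')' and fit a cointegrated VAR (VECM),
      \Delta Z_t = \alpha \beta' Z_{t-1} + \sum_{i=1}^{p} \Gamma_i \Delta Z_{t-i} + \epsilon_t,
    so that the error-correction terms \beta' Z_{t-1} can involve the factors. A FAVAR in first differences amounts to dropping the \alpha \beta' Z_{t-1} term, which is the misspecification under cointegration that the abstract points to.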
  12. By: Dominique Guegan (Centre d'Economie de la Sorbonne and Paris School of Economics)
    Abstract: The detection of chaotic behaviors in commodities, stock markets and weather data is usually complicated by the large noise perturbation inherent to the underlying system. It is well known that predictions from purely deterministic chaotic systems can be accurate mainly in the short term. Thus, it is important to be able to reconstruct, in a robust way, the attractor on which the data evolve, if such an attractor exists. In chaos theory, deconvolution methods have been widely studied and different approaches exist that are competitive and complementary. In this work, we apply two methods: the singular value method and the wavelet approach. The latter has not been investigated much for filtering chaotic systems. Using very large Monte Carlo simulations, we demonstrate the ability of this deconvolution method. We then use the de-noised data set to forecast, and we discuss in depth the possibility of long-term forecasting with chaotic systems. (A generic wavelet de-noising sketch follows this entry.)
    Keywords: Deconvolution, chaos, SVD, state space method, Wavelets method.
    JEL: C02 C32 C45 C53
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:b08008&r=ecm
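    Background sketch (not the paper's procedure; a minimal wavelet-shrinkage de-noising step of the kind discussed above, written in Python with the PyWavelets package and the standard Donoho-Johnstone universal threshold; the function name and defaults are illustrative only):

      import numpy as np
      import pywt

      def wavelet_denoise(x, wavelet="db4", level=4):
          # Decompose the noisy series into approximation and detail coefficients.
          coeffs = pywt.wavedec(x, wavelet, level=level)
          # Estimate the noise scale from the finest detail level (MAD estimator).
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          # Universal threshold of Donoho and Johnstone.
          thr = sigma * np.sqrt(2.0 * np.log(len(x)))
          # Soft-threshold the detail coefficients, keep the approximation intact.
          shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          # Reconstruct and trim to the original length.
          return pywt.waverec(shrunk, wavelet)[: len(x)]

    The paper's concern, namely how such filtering interacts with attractor reconstruction and long-horizon forecasting of chaotic systems, is not addressed by this generic sketch.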
  13. By: Edoardo Otranto
    Abstract: Financial time series are often characterized by similar volatility structures, often represented by GARCH processes. The detection of clusters of series displaying similar behavior could be important for understanding the differences in the estimated processes, without having to study and compare the estimated parameters across all the series. This is particularly relevant when dealing with many series, as in financial applications. The volatility of a time series can be characterized in terms of the underlying GARCH process. Using Wald tests and the AR metrics to measure the distance between GARCH processes, it is possible to develop a clustering algorithm which provides three classifications (with increasing degrees of depth) based on the heteroskedastic patterns of the time series. The number of clusters is detected automatically and is not fixed a priori or a posteriori. The procedure is evaluated by simulations and applied to the sector indexes of the Italian market. (The AR metrics and the ARMA representation of a GARCH process are sketched after this entry.)
    Keywords: Agglomerative algorithm, AR metrics, Cluster analysis, GARCH models, Wald test
    JEL: C02 C19 C22
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:200801&r=ecm
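    Background sketch (not taken from the paper; the standard ingredients behind the keywords): a GARCH(1,1) process \sigma_t^2 = \omega + a \epsilon_{t-1}^2 + b \sigma_{t-1}^2 implies an ARMA(1,1) representation for the squared series,
      \epsilon_t^2 = \omega + (a + b) \epsilon_{t-1}^2 + u_t - b u_{t-1},  with  u_t = \epsilon_t^2 - \sigma_t^2,
    and the AR metrics (Piccolo distance) between two such processes is the Euclidean distance between the coefficients of their AR(\infty) representations, d = [ \sum_{j>=1} (\pi_{1j} - \pi_{2j})^2 ]^{1/2}. How the Wald tests and this distance are combined into the three-level agglomerative algorithm is specific to the paper.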
  14. By: Travaglini, Guido
    Abstract: In this paper I propose a nonstandard t-test statistic for detecting level and trend breaks in I(0) series. Theoretical and limit-distribution critical values obtained from Monte Carlo experimentation are supplied. The null hypothesis of anthropogenic versus natural causes of global warming is then tested for the period 1850-2006 by means of a dynamic GMM model which incorporates the null of breaks of anthropogenic origin. World average temperatures are found to have been tapering off for a few decades now, and to exhibit no significant breaks attributable to human activities. While these play a minor causative role in climate change, most natural forcings, and in particular solar sunspots, are major warmers. Finally, in contrast to widely held opinions, greenhouse gases are in general temperature dimmers.
    Keywords: Generalized Method of Moments; Multiple Breaks; Principal Component Analysis; Global Warming.
    JEL: C51 C22 Q54
    Date: 2008–02–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:7108&r=ecm
  15. By: Lubos Pastor; Robert F. Stambaugh
    Abstract: We develop a framework for estimating expected returns (a "predictive system") that allows predictors to be imperfectly correlated with the conditional expected return. When predictors are imperfect, the estimated expected return depends on past returns in a manner that hinges on the correlation between unexpected returns and innovations in expected returns. We find empirically that prior beliefs about this correlation, which is most likely negative, substantially affect estimates of expected returns as well as various inferences about predictability, including assessments of a predictor's usefulness. Compared to standard predictive regressions, predictive systems deliver different and more precise estimates of expected returns.
    JEL: G1 G11 G12
    Date: 2008–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:13804&r=ecm
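    Background sketch (not taken from the paper; a schematic predictive system consistent with the abstract's description): returns, predictors and the latent conditional expected return evolve as
      r_{t+1} = \mu_t + u_{t+1},
      x_{t+1} = \theta + A x_t + v_{t+1},
      \mu_{t+1} = (1 - \beta) E_r + \beta \mu_t + w_{t+1},
    so the observed predictors x_t are informative about, but not perfectly correlated with, \mu_t. The correlation between the unexpected return u_{t+1} and the expected-return innovation w_{t+1}, argued in the abstract to be most likely negative, governs how past returns enter the estimate of \mu_t; the prior specification and estimation details are in the paper.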
  16. By: Leite, Phillippe George; Lanjouw, Peter; Elbers, Chris
    Abstract: The small-area estimation technique developed for producing poverty maps has been applied in a large number of developing countries. Opportunities to formally test the validity of this approach remain rare due to the lack of appropriately detailed data. This paper compares a set of predicted welfare estimates based on this methodology against their true values, in a setting where these true values are known. A recent study draws on Monte Carlo evidence to warn that the small-area estimation methodology could significantly overstate the precision of local-level estimates of poverty if the underlying assumptions of spatial homogeneity do not hold. Despite these concerns, the findings in this paper for the state of Minas Gerais, Brazil, indicate that the small-area estimation approach is able to produce estimates of welfare that line up quite closely with their true values. Although the setting considered here would seem, a priori, unlikely to meet the homogeneity conditions that have been argued to be essential for the method, confidence intervals for the poverty estimates also appear to be appropriate. However, this latter conclusion holds only after carefully controlling for community-level factors that are correlated with household-level welfare. (The generic small-area estimation model is sketched after this entry.)
    Keywords: Statistical & Mathematical Sciences, Population Policies, Science Education, Scientific Research & Science Parks, Small Area Estimation Poverty Mapping
    Date: 2008–02–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:4513&r=ecm
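    Background sketch (not taken from the paper; the generic small-area estimation model behind poverty mapping, in the spirit of Elbers, Lanjouw and Lanjouw): a household welfare regression is estimated on survey data,
      \ln y_{ch} = x_{ch}' \beta + \eta_c + \varepsilon_{ch},
    with a cluster (community) effect \eta_c and a household error \varepsilon_{ch}; the estimated parameters are then applied to census households, whose covariates are observed, to simulate welfare and compute poverty statistics and standard errors for small areas. The validity test in the paper compares such predictions with the true values that are known for Minas Gerais.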
  17. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: In financial surveillance the aim is to signal at the optimal trading time. A systematic decision strategy is used. The information available at each possible decision time is evaluated in order to judge whether there is enough information for a decision about an action, or whether more information is necessary so that the decision should be postponed. Financial surveillance gives timely decisions.
    Keywords: financial surveillance;
    JEL: C10
    Date: 2008–02–08
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2008_001&r=ecm

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.