nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒09‒20
twenty-six papers chosen by
Sune Karlsson
Örebro University

  1. Dynamic Factor Models for Multivariate Count Data: An Application to Stock-Market Trading Activity By Jung, Robert; Liesenfeld, Roman; Richard, Jean-Francois
  2. A New Procedure to Test for H Self-Similarity By Les Oxley; Chris Price; William Rea; Marco Reale
  3. A New Method for Constructing Exact Tests without Making any Assumptions By Karl Schlag
  4. Volatility forecasting: the jumps do matter By Fulvio Corsi; Davide Pirino; Roberto Renò
  5. Testing for seasonal unit roots in heterogeneous panels using monthly data in the presence of cross sectional dependence By Otero, Jesús; Smith, Jeremy; Giulietti, Monica
  6. Testing for spatial autocorrelation: the regressors that make the power disappear By Martellosio, Federico
  7. "Improving the Rank-Adjusted Anderson-Rubin Test with Many Instruments and Persistent Heteroscedasticity" By Naoto Kunitomo; Yukitoshi Matsushita
  8. Minimum Divergence, Generalized Empirical Likelihoods, and Higher Order Expansions By Giuseppe Ragusa
  9. A Monte Carlo Study of the Necessary and Sufficient Conditions for Weak Separability By Hjertstrand, Per
  10. Do we need time series econometrics By Rao, B. Bhaskara; Singh, Rup; Kumar, Saten
  11. Realisations of Finite-Sample Frequency-Selective Filters By D.S.G. Pollock
  12. Selection on the basis of prior testing By Carlos Santos
  13. The Classical Econometric Model By D.S.G. Pollock
  14. Bayesian Interpretations of Heteroskedastic Consistent Covariance Estimators Using the Informed Bayesian Bootstrap By Dale Poirier
  15. Testing a DSGE model of the EU using indirect inference By David Meenagh; Patrick Minford; Michael Wickens
  16. Multifractality and Long-Range Dependence of Asset Returns: The Scaling Behaviour of the Markov-Switching Multifractal Model with Lognormal Volatility Components By Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
  17. Hierarchical Bayes prediction for the 2008 US Presidential election By Sinha, Pankaj; Bansal, Ashok
  18. Phillips Curve Inflation Forecasts By James H. Stock; Mark W. Watson
  19. Independence and Conditional Independence in Causal Systems By Karim Chalak; Halbert White
  20. Modelling and forecasting the yield curve under model uncertainty By Paola Donati; Francesco Donati
  21. Forecasting Macroeconomic Variables in a Small Open Economy: A Comparison between Small- and Large-Scale Models By Rangan Gupta; Alain Kabundi
  22. Multivariate measures of positive dependence By Marta Cardin
  23. Sequencing Anomalies in Choice Experiments By Brett Day; Jose Luis Pinto Prades
  24. Exploiting Non-Linearities in GDP Growth for Forecasting and Anticipating Regime Changes By David N. DeJong; Hariharan Dharmarajan; Roman Liesenfeld; Jean-Francois Richard
  25. A semiparametric factor model for electricity forward curve dynamics By Borak, Szymon; Weron, Rafal
  26. Structural Differences in Economic Growth By Nalan Basturk; Richard Paap; Dick van Dijk

  1. By: Jung, Robert; Liesenfeld, Roman; Richard, Jean-Francois
    Abstract: We propose a dynamic factor model for the analysis of multivariate time series count data. Our model allows for idiosyncratic as well as common serially correlated latent factors in order to account for potentially complex dynamic interdependence between series of counts. The model is estimated under alternative count distributions (Poisson and negative binomial). Maximum Likelihood estimation requires high-dimensional numerical integration in order to marginalize the joint distribution with respect to the unobserved dynamic factors. We rely upon the Monte-Carlo integration procedure known as Efficient Importance Sampling, which produces fast and numerically accurate estimates of the likelihood function. The model is applied to time series data consisting of numbers of trades in 5-minute intervals for five NYSE stocks from two industrial sectors. The estimated model accounts for all key dynamic and distributional features of the data. We find strong evidence of a common factor, which we interpret as reflecting market-wide news. In contrast, sector-specific factors are found to be statistically insignificant.
    Keywords: Dynamic latent variables, Importance sampling, Mixture of distribution models, Poisson distribution, Simulated Maximum Likelihood
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:7365&r=ecm
  2. By: Les Oxley (University of Canterbury); Chris Price; William Rea; Marco Reale
    Abstract: It is now recognized that long memory and structural change can be confused because the statistical properties of time series of lengths typical of many financial and economic series are similar for both models. We propose a new test aimed at distinguishing between unifractal long memory and structural change. The approach, which utilizes the computationally efficient methods based upon Atheoretical Regression Trees (ART), establishes through simulation the bivariate distribution of the number of breaks reported by ART with the CUSUM range for simulated fractionally integrated series. This bivariate distribution is then used to empirically construct a test. We apply these methods to the realized volatility series of 16 stocks in the Dow Jones Industrial Average. We show the realized volatility series are statistically significantly different from fractionally integrated series with the same estimated d value. We present evidence that these series have structural breaks. For comparison purposes we present the results of tests by Zhang and Ohanissian, Russell, and Tsay for these series.
    Keywords: Long-range dependence; strong dependence; global dependence
    JEL: C13 C22
    Date: 2008–09–12
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:08/16&r=ecm
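    A minimal sketch of the CUSUM-range ingredient mentioned in the abstract above: the range of the cumulative sums of the demeaned series, one of the two statistics whose simulated joint distribution (together with the number of breaks reported by ART) the test is built on. Simulated Gaussian noise is used here, and the helper name cusum_range is our own; this is illustrative only, not the authors' code.
      import numpy as np

      def cusum_range(x):
          # range of the cumulative sums of deviations from the sample mean
          s = np.cumsum(x - x.mean())
          return s.max() - s.min()

      rng = np.random.default_rng(0)
      print(cusum_range(rng.standard_normal(1000)))   # statistic for a white-noise series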
  3. By: Karl Schlag
    Abstract: We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
    Keywords: Distribution-free, nonparametric, exact hypothesis testing, unavoidable inaccuracy, nonparametric Behrens-Fisher problem, UMPU test, Kendall's tau, Qn
    JEL: C12 C14
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1109&r=ecm
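    A minimal sketch of the randomization step the abstract describes: an outcome known to lie in [0, 1] is replaced by a Bernoulli draw with success probability equal to the outcome, which preserves its mean and reduces the problem to binary variables, for which exact tests exist. The beta-distributed data are an arbitrary stand-in, and the later de-randomization step of the paper is not shown.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.beta(2, 5, size=200)        # outcomes bounded in [0, 1]
      x_binary = rng.binomial(1, x)       # randomized binary surrogate with the same mean in expectation
      print(x.mean(), x_binary.mean())    # the two sample means should be close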
  4. By: Fulvio Corsi; Davide Pirino; Roberto Renò
    Abstract: This study reconsiders the role of jumps in volatility forecasting by showing that jumps have a positive and mostly significant impact on future volatility. This result becomes apparent once volatility is correctly separated into its continuous and discontinuous components. To this purpose, we introduce the concept of threshold multipower variation (TMPV), which is based on the joint use of bipower variation and threshold estimation. With respect to alternative methods, our TMPV estimator provides less biased and more robust estimates of the continuous quadratic variation and jumps. This technique also provides a new test for jump detection which has substantially more power than traditional tests. We use this separation to forecast volatility by employing a heterogeneous autoregressive (HAR) model, which is suitable for parsimoniously modelling long memory in realized volatility time series. Empirical analysis shows that the proposed techniques significantly improve the accuracy of volatility forecasts for the S&P500 index, single stocks and US bond yields, especially in periods following the occurrence of a jump.
    Keywords: volatility forecasting, jumps, bipower variation, threshold estimation, stock, bond
    JEL: G1 C1 C22 C53
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:usi:wpaper:534&r=ecm
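    As a rough illustration of the HAR component of the forecasting model, the sketch below fits the baseline daily/weekly/monthly HAR-RV regression by OLS on simulated realized-volatility data. The jump regressor obtained from threshold multipower variation, which is the paper's contribution, is omitted; all names and the data-generating process are our own assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      T = 1500
      log_rv = np.zeros(T)
      for t in range(1, T):                          # persistent log-volatility proxy
          log_rv[t] = 0.97 * log_rv[t - 1] + 0.2 * rng.standard_normal()
      rv = np.exp(log_rv)

      def rolling_mean(x, w):
          # mean of the w most recent observations, aligned with the last of them
          return np.convolve(x, np.ones(w) / w, mode="valid")

      # HAR components: lagged daily value, 5-day and 22-day averages
      rv_d = rv[21:-1]
      rv_w = rolling_mean(rv, 5)[17:-1]
      rv_m = rolling_mean(rv, 22)[:-1]
      y = rv[22:]

      X = np.column_stack([np.ones_like(y), rv_d, rv_w, rv_m])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(beta)            # intercept and daily/weekly/monthly HAR coefficients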
  5. By: Otero, Jesús (Facultad de Economía, Universidad del Rosario); Smith, Jeremy (Department of Economics,University of Warwick); Giulietti, Monica (Aston Business School, University of Aston)
    Abstract: This paper generalises the monthly seasonal unit root tests of Franses (1991) to a heterogeneous panel, following the work of Im, Pesaran, and Shin (2003); we refer to these as the F-IPS tests. The paper presents the mean and variance necessary to yield a standard normal distribution for the tests, for different numbers of time observations, T, and lag lengths. However, these tests are only applicable in the absence of cross-sectional dependence. Two alternative methods for modifying the F-IPS tests in the presence of cross-sectional dependence are presented: the first is the cross-sectionally augmented test, denoted CF-IPS, following Pesaran (2007); the other is a bootstrap method, denoted BF-IPS. In general, the BF-IPS tests have greater power than the CF-IPS tests, although for large T and a high degree of cross-sectional dependence the CF-IPS test dominates the BF-IPS test.
    Keywords: Panel unit root tests ; seasonal unit roots ; monthly data ; cross sectional dependence ; Monte Carlo
    JEL: C12 C15 C22 C23
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:865&r=ecm
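    A minimal sketch of the IPS-style averaging underlying these tests, stripped of the monthly seasonal detail: a unit-root regression is run unit by unit and the resulting t-statistics are averaged across the panel; the cross-sectionally augmented variant simply adds the cross-section average as an extra regressor (a deliberate simplification of the CF-IPS construction). Simulated data; the moments and critical values tabulated in the paper are not reproduced.
      import numpy as np

      def adf_t(y, extra=None):
          # t-statistic on the lagged level in a Dickey-Fuller regression of diff(y)
          dy, ylag = np.diff(y), y[:-1]
          cols = [np.ones_like(ylag), ylag]
          if extra is not None:
              cols.append(extra[:-1])
          X = np.column_stack(cols)
          beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
          resid = dy - X @ beta
          sigma2 = resid @ resid / (len(dy) - X.shape[1])
          se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
          return beta[1] / se

      rng = np.random.default_rng(2)
      N, T = 10, 200
      panel = np.cumsum(rng.standard_normal((N, T)), axis=1)   # N unit-root series
      csa = panel.mean(axis=0)                                  # cross-section average

      t_plain = np.mean([adf_t(panel[i]) for i in range(N)])            # IPS-type average
      t_aug = np.mean([adf_t(panel[i], extra=csa) for i in range(N)])   # cross-sectionally augmented
      print(t_plain, t_aug)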
  6. By: Martellosio, Federico
    Abstract: We show that for any sample size, any size of the test, and any weights matrix outside a small class of exceptions, there exists a positive measure set of regression spaces such that the power of the Cliff-Ord test vanishes as the autocorrelation increases in a spatial error model. This result extends to the tests that define the Gaussian power envelope of all invariant tests for residual spatial autocorrelation. In most cases, the regression spaces such that the problem occurs depend on the size of the test, but there also exist regression spaces such that the power vanishes regardless of the size. A characterization of such particularly hostile regression spaces is provided.
    Keywords: Cliff-Ord test; point optimal tests; power; spatial error model; spatial lag model; spatial unit root.
    JEL: C12 C21
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10542&r=ecm
  7. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo); Yukitoshi Matsushita (Graduate School of Economics, University of Tokyo)
    Abstract: Anderson and Kunitomo (2007) developed a likelihood ratio criterion, called the Rank-Adjusted Anderson-Rubin (RAAR) test, for testing the coefficients of a structural equation in a system of simultaneous equations in econometrics against the alternative hypothesis that the equation of interest is identified. It is related to the statistic originally proposed by Anderson and Rubin (1949, 1950), and also to the test procedures of Kleibergen (2002) and Moreira (2003). We propose a modified RAAR test procedure which is suitable for cases in which there are many instruments and the disturbances exhibit persistent heteroscedasticity.
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2008cf588&r=ecm
  8. By: Giuseppe Ragusa (Department of Economics, University of California-Irvine)
    Abstract: This paper studies the Minimum Divergence (MD) class of estimators for econometric models specified through moment restrictions. We show that MD estimators can be obtained as solutions to a computationally tractable optimization problem. This problem is similar to the one solved by the Generalized Empirical Likelihood estimators of Newey and Smith (2004), but it is equivalent to it only for a subclass of divergences. The MD framework provides a coherent testing theory: tests for overidentification and parametric restrictions in this framework can be interpreted as semiparametric versions of Pearson-type goodness of fit tests. The higher order properties of MD estimators are also studied and it is shown that MD estimators that have the same higher order bias as the Empirical Likelihood (EL) estimator also share the same higher order Mean Square Error and are all higher order efficient. We identify members of the MD class that are not only higher order efficient, but, unlike the EL estimator, well behaved when the moment restrictions are misspecified.
    Keywords: Minimum divergence; GMM; Generalized empirical likelihood; Higher order efficiency; Misspecified models.
    JEL: C12 C13 C23
    Date: 2008–05
    URL: http://d.repec.org/n?u=RePEc:irv:wpaper:080906&r=ecm
  9. By: Hjertstrand, Per (Department of Economics, Lund University)
    Abstract: Weak separability is an important concept in many fields of economic theory. This paper uses Monte Carlo experiments to investigate the performance of newly developed nonparametric revealed preference tests for weak separability. A main finding is that the bias of the sequentially implemented test for weak separability proposed by Fleissig and Whitney (A New PC-Based Test for Varian's Weak Separability Conditions, Journal of Business and Economic Statistics 21, 133-143, 2003) is low. The theoretically unbiased Swofford and Whitney test (A revealed preference test for weakly separable utility maximization with incomplete adjustment, Journal of Econometrics 60, 235-249, 1994) is found to perform better than all sequentially implemented test procedures, but is found to suffer from an empirical bias, most likely because of the complexity of executing the test procedure. As a further source of information, we also perform sensitivity analyses on the nonparametric revealed preference tests. It is found that the Fleissig and Whitney test seems to be sensitive to measurement errors.
    Keywords: GARP; LP test; NONPAR; Swofford and Whitney test; Weak separability
    JEL: C14 C15 C43
    Date: 2008–09–11
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2008_012&r=ecm
  10. By: Rao, B. Bhaskara; Singh, Rup; Kumar, Saten
    Abstract: Whether or not there is a need for unit root and cointegration based time series econometric methods is a methodological issue. An alternative is the econometrics of the London School of Economics (LSE) and Hendry approach, based on the simpler classical methods of estimation. This is known as the general to specific method (GETS). Like all other methodological issues, it is difficult to resolve which approach is better. However, we think that GETS is conceptually simpler and very useful in applied work.
    Keywords: GETS, Cointegration, Box-Jenkins’s Equations, Hendry, Granger.
    JEL: B49 C22 B41
    Date: 2008–01–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10530&r=ecm
  11. By: D.S.G. Pollock
    Abstract: A filtered data sequence can be obtained by multiplying the Fourier ordinates of the data by the ordinates of the frequency response of the filter and by applying the inverse Fourier transform to carry the product back to the time domain. Using this technique, it is possible, within the constraints of a finite sample, to design an ideal frequency-selective filter that will preserve all elements within a specified range of frequencies and that will remove all elements outside it. Approximations to ideal filters that are implemented in the time domain are commonly based on truncated versions of the infinite sequences of coefficients derived from the Fourier transforms of rectangular frequency response functions. An alternative to truncating an infinite sequence of coefficients is to wrap it around a circle of a circumference equal in length to the data sequence and to add the overlying coefficients. The coefficients of the wrapped filter can also be obtained by applying a discrete Fourier transform to a set of ordinates sampled from the frequency response function. Applying the coefficients to the data via circular convolution produces results that are identical to those obtained by a multiplication in the frequency domain, which constitutes a more efficient approach.
    Keywords: Signal extraction; Linear filtering; Frequency-domain analysis
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:08/32&r=ecm
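    A minimal sketch, on simulated data, of the equivalence described in the abstract: filtering by multiplying the Fourier ordinates of the data by an ideal lowpass frequency response and inverting, versus circular convolution with the wrapped coefficients obtained as the inverse DFT of that sampled response. The cutoff of 0.1 cycles per observation and the sample size are arbitrary choices of ours, not the author's.
      import numpy as np

      rng = np.random.default_rng(0)
      T = 128
      y = np.cumsum(rng.standard_normal(T))            # an arbitrary data sequence

      freqs = np.fft.fftfreq(T)                        # frequencies in cycles per observation
      response = (np.abs(freqs) <= 0.1).astype(float)  # ideal lowpass frequency response

      # (i) multiply the Fourier ordinates of the data by the response and transform back
      filtered_freq = np.fft.ifft(np.fft.fft(y) * response).real

      # (ii) wrap the coefficients (inverse DFT of the sampled response) and apply them
      #      to the data by circular convolution
      coeffs = np.fft.ifft(response).real
      filtered_time = np.array([sum(coeffs[k] * y[(t - k) % T] for k in range(T))
                                for t in range(T)])

      print(np.allclose(filtered_freq, filtered_time))  # True: the two routes coincide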
  12. By: Carlos Santos (Faculdade de Economia e Gestão - Universidade Católica Portuguesa (Porto))
    Abstract: We establish that, under mild conditions, testing for the individual significance of an impulse indicator in the conditional model, when that indicator has been selected on the basis of prior testing of its significance in the impulse-saturated marginal model, does not require bootstrapped critical values. Extensive Monte Carlo evidence shows that the real size of a joint F-test in the conditional model on the block of dummies retained from the marginal model is independent of the nominal size used for impulse saturation in the marginal model. The findings are shown to hold for a plethora of dynamic models and sample sizes. Such results are fundamental not only for model selection theory, but also for the emerging class of automatically computable super exogeneity tests.
    Keywords: model selection; impulse saturation; super exogeneity; bootstrapping
    JEL: C52 C22 C15
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:cap:wpaper:062008&r=ecm
  13. By: D.S.G. Pollock
    Abstract: A compendium is presented of the various approaches that may be taken in deriving the estimators of the simultaneous-equations econometric model according to the principle of maximum likelihood. The structural equations of the model have the character both of a regression equation and of an errors-in-variables equation. This partly accounts for the way in which the various approaches that have been followed appear to differ widely. In the process of achieving a synthesis of the methods of estimation, some elements that have been missing from the theory are supplied.
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:08/33&r=ecm
  14. By: Dale Poirier (Department of Economics, University of California-Irvine)
    Abstract: This paper provides Bayesian rationalizations for White’s heteroskedastic consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:irv:wpaper:080905&r=ecm
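    A minimal sketch of the plain (uninformed) Bayesian bootstrap that underlies the paper's framework: Dirichlet weights over observations are drawn repeatedly and the weighted least-squares problem is re-solved, giving a posterior whose spread is close to White's HC covariance. The informed variant developed in the paper, which brings in prior information, is not shown; the data-generating process below is our own assumption.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 500
      x = rng.standard_normal(n)
      y = 1.0 + 2.0 * x + np.abs(x) * rng.standard_normal(n)   # heteroskedastic errors
      X = np.column_stack([np.ones(n), x])

      draws = []
      for _ in range(2000):
          w = rng.dirichlet(np.ones(n))                        # Bayesian-bootstrap weights
          sw = np.sqrt(w)
          beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
          draws.append(beta)

      print(np.cov(np.array(draws), rowvar=False))             # compare with White's HC covariance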
  15. By: David Meenagh; Patrick Minford; Michael Wickens
    Abstract: We use the method of indirect inference, using the bootstrap, to test the Smets and Wouters model of the EU against a VAR auxiliary equation describing their data; the test is based on the Wald statistic. We find that their model generates excessive variance compared with the data. But their model passes the Wald test easily if the errors have the properties assumed by SW but scaled down. We compare a New Classical version of the model which also passes the test easily if error properties are chosen using New Classical priors (notably excluding shocks to preferences). Both versions have (different) difficulties fitting the data if the actual error properties are used.
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:san:cdmacp:0801&r=ecm
  16. By: Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
    Abstract: In this paper we consider daily financial data from various sources (stock market indices, foreign exchange rates and bonds) and analyze their multi-scaling properties by estimating the parameters of a Markov-switching multifractal model (MSM) with Lognormal volatility components. In order to see how well estimated models capture the temporal dependency of the empirical data, we estimate and compare (generalized) Hurst exponents for both empirical data and simulated MSM models. In general, the Lognormal MSM models generate ‘apparent’ long memory in good agreement with empirical scaling provided one uses sufficiently many volatility components. In comparison with a Binomial MSM specification [7], results are almost identical. This suggests that a parsimonious discrete specification is flexible enough and the gain from adopting the continuous Lognormal distribution is very limited.
    Keywords: Markov-switching multifractal, scaling, return volatility
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:7371&r=ecm
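    A minimal sketch of a (generalized) Hurst exponent estimate of the kind the paper compares across empirical and simulated MSM series: the q-th absolute moment of aggregated returns is regressed, on a log-log scale, against the aggregation horizon. Simulated i.i.d. returns are used here, so the estimate should be near 0.5; the choice of q and horizons is ours, and this is not the authors' code.
      import numpy as np

      rng = np.random.default_rng(4)
      r = rng.standard_normal(10000)              # stand-in for a return series

      q = 2.0
      taus = np.arange(1, 21)
      moments = []
      for tau in taus:
          n = len(r) // tau
          agg = r[: n * tau].reshape(n, tau).sum(axis=1)   # non-overlapping aggregation
          moments.append(np.mean(np.abs(agg) ** q))

      # E|r_tau|^q ~ tau^(q*H(q)); the log-log slope divided by q estimates H(q)
      slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
      print(slope / q)                            # about 0.5 for i.i.d. returns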
  17. By: Sinha, Pankaj; Bansal, Ashok
    Abstract: In this paper a procedure is developed to derive the predictive density function of a future observation for prediction in a multiple regression model under hierarchical priors for the vector parameter. The derived predictive density function is applied for prediction in the multiple regression model given in Fair (2002) to study the effect of fluctuations in economic variables on voting behavior in U.S. presidential elections. Numerical illustrations suggest that the predictive performance of Fair's model is good under the hierarchical Bayes setup, except for the 1992 election. Fair's model under the hierarchical Bayes setup indicates that the forthcoming 2008 US presidential election is likely to be very close, tilted slightly towards the Republicans, who are predicted to receive 50.90% of the vote with a winning probability of 0.550.
    JEL: C11 C01
    Date: 2008–08–13
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10470&r=ecm
  18. By: James H. Stock; Mark W. Watson
    Abstract: This paper surveys the literature since 1993 on pseudo out-of-sample evaluation of inflation forecasts in the United States and conducts an extensive empirical analysis that recapitulates and clarifies this literature using a consistent data set and methodology. The literature review and empirical results are gloomy and indicate that Phillips curve forecasts (broadly interpreted as forecasts using an activity variable) are better than other multivariate forecasts, but their performance is episodic, sometimes better than and sometimes worse than a good (not naïve) univariate benchmark. We provide some preliminary evidence characterizing successful forecasting episodes.
    JEL: C53 E37
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14322&r=ecm
  19. By: Karim Chalak (Boston College); Halbert White (University of California-San Diego)
    Abstract: We study the interrelations between (conditional) independence and causal relations in settable systems. We provide definitions in terms of functional dependence for direct, indirect, and total causality as well as for (indirect) causality via and exclusive of a set of variables. We then provide necessary and sufficient causal and stochastic conditions for (conditional) dependence among random vectors of interest in settable systems. Immediate corollaries ensure the validity of Reichenbach's principle of common cause and its informative extension, the conditional Reichenbach principle of common cause. We relate our results to notions of d-separation and D-separation in the artificial intelligence literature.
    Keywords: causality, conditional independence, d-separation, direct effect, Reichenbach principle, settable systems
    JEL: C13 C14 C31
    Date: 2008–09–11
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:689&r=ecm
  20. By: Paola Donati (DG-Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Francesco Donati (Politecnico of Torino, Department of Control and Computer Engineering, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy.)
    Abstract: This paper proposes a procedure to investigate the nature and persistence of the forces governing the yield curve and to use the extracted information for forecasting purposes. The latent factors of a model of the Nelson-Siegel type are directly linked to the maturity of the yields through the explicit description of the cross-sectional dynamics of the interest rates. The intertemporal dynamics of the factors are then modeled as driven by long-run forces giving rise to enduring effects, and by medium- and short-run forces producing transitory effects. These forces are reconstructed in real time with a dynamic filter whose embedded feedback control recursively corrects for model uncertainty, including additive and parameter uncertainty and possible equation misspecifications and approximations. This correction considerably enhances the robustness of the estimates and the accuracy of the out-of-sample forecasts, both at short and long forecast horizons. JEL Classification: G1, E4, C5.
    Keywords: Yield curve, Model uncertainty, Frequency decomposition, Monetary policy.
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20080917&r=ecm
  21. By: Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg)
    Abstract: This paper compares the forecasting ability of five alternative models in predicting four key macroeconomic variables, namely, the per capita growth rate, the Consumer Price Index (CPI) inflation, the money market rate, and the growth rate of the nominal effective exchange rate for the South African economy. Unlike the theoretical Small Open Economy New Keynesian Dynamic Stochastic General Equilibrium (SOENKDSGE), the unrestricted VAR, and the small-scale Bayesian Vector Autoregressive (BVAR) models, which are estimated based on four variables, the Dynamic Factor Model (DFM) and the large-scale BVAR models use information from a data-rich environment containing 266 macroeconomic time series observed over the period 1983:01 to 2002:04. The results, based on Root Mean Square Errors (RMSEs), for one- to four-quarters-ahead out-of-sample forecasts over the horizon of 2003:01 to 2006:04, show that, except for the one-quarter-ahead forecast of the growth rate of the nominal effective exchange rate, large-scale BVARs outperform the other four models consistently and, generally, significantly.
    Keywords: Small Open Economy New Keynesian Dynamic Stochastic Model, Dynamic Factor Model, VAR, BVAR, Forecast Accuracy
    JEL: C11 C13 C33 C53
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:200830&r=ecm
  22. By: Marta Cardin (Department of Applied Mathematics, University of Venice)
    Abstract: In this paper a set of desirable properties for measures of positive dependence of ordered n-tuples of continuous random variables (n >= 2) is proposed and a class of multivariate positive dependence measures is introduced. We take comonotonicity as the strongest dependence structure, so the class consists of measures that take values in the range [0, 1] and are defined in such a way that they equal 1 when the random vector is comonotonic and 0 when it is independent.
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:vnm:wpaper:166&r=ecm
  23. By: Brett Day (School of Environmental Sciences, University of East Anglia.); Jose Luis Pinto Prades (Department of Economics, Universidad Pablo de Olavide)
    Abstract: This paper investigates whether responses to choice experiments (CEs) are subject to sequencing anomalies. While previous research has focussed on the possibility that such anomalies relate to position in the sequence of choice tasks, our research reveals that the particular sequence of tasks matters. Using a novel experimental design that allows us to test our hypotheses using robust nonparametric statistics, we observe sequencing anomalies in CE data similar to those recorded in the dichotomous choice contingent valuation literature. Those sequencing effects operate in both price and commodity dimensions and are observed to compound over a series of choice tasks. Our findings cast serious doubt on the current practice of asking each respondent to undertake several choice tasks in a CE whilst treating each response as an independent observation on that individual’s preferences.
    Keywords: Choice experiments; sequencing anomalies; ordering effects; dichotomous choice contingent valuation; non-parametric testing.
    JEL: Q51 C14 I10
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:pab:wpaper:08.10&r=ecm
  24. By: David N. DeJong; Hariharan Dharmarajan; Roman Liesenfeld; Jean-Francois Richard
    Abstract: . . .
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:pit:wpaper:367&r=ecm
  25. By: Borak, Szymon; Weron, Rafal
    Abstract: In this paper we introduce the dynamic semiparametric factor model (DSFM) for electricity forward curves. The biggest advantage of our approach is that it not only leads to smooth, seasonal forward curves extracted from exchange traded futures and forward electricity contracts, but also to a parsimonious factor representation of the curve. Using closing prices from the Nordic power market Nord Pool we provide empirical evidence that the DSFM is an efficient tool for approximating forward curve dynamics.
    Keywords: power market; forward electricity curve; dynamic semiparametric factor model
    JEL: C51 Q40 G13
    Date: 2008–07–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10421&r=ecm
  26. By: Nalan Basturk (Erasmus University Rotterdam); Richard Paap (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam)
    Abstract: This paper addresses heterogeneity in determinants of economic growth in a data-driven way. Instead of defining groups of countries with different growth characteristics a priori, based on, for example, geographical location, we use a finite mixture panel model and endogenous clustering to examine cross-country differences and similarities in the effects of growth determinants. Applying this approach to an annual unbalanced panel of 59 countries in Asia, Latin and Middle America and Africa for the period 1971-2000, we can identify two groups of countries in terms of distinct growth structures. The structural differences between the country groups mainly stem from different effects of investment, openness measures and government share in the economy. Furthermore, the detected segmentation of countries does not match with conventional classifications in the literature.
    Keywords: Economic growth; parameter heterogeneities; latent class models; panel time series
    JEL: C32 C33 O4 O57
    Date: 2008–09–12
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20080085&r=ecm

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.