nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒12‒14
27 papers chosen by
Sune Karlsson
Orebro University

  1. Exact optimal and adaptive inference in regression models under heteroskedasticity and non-normality of unknown forms By Jean-Marie Dufour; Abderrahim Taamouti
  2. Nonstationary-Volatility Robust Panel Unit Root Tests and the Great Moderation By Hanck, Christoph
  3. A General Framework for Observation Driven Time-Varying Parameter Models By Drew Creal; Siem Jan Koopman; André Lucas
  4. Dynamic stochastic copula models: Estimation, inference and applications By Hafner Christian M.; Manner Hans
  5. Testing for Unit Roots in the Presence of a Possible Break in Trend and Non-Stationary Volatility By Giuseppe Cavaliere; David I. Harvey; Stephen J. Leybourne; A.M. Robert Taylor
  6. Cross-Sectional Dependence Robust Block Bootstrap Panel Unit Root Tests By Palm Franz C.; Smeekes Stephan; Urbain Jean-Pierre
  7. A Simple Panel Stationarity Test in the Presence of Cross-Sectional Dependence By Kaddour Hadri; Eiji Kurozumi
  8. Simulated maximum likelihood for general stochastic volatility models: a change of variable approach By Kleppe, Tore Selland; Skaug, Hans J.
  9. Path Forecast Evaluation By Òscar Jordà; Massimiliano Marcellino
  10. A Wald Test for the Cointegration Rank in Nonstationary Fractional Systems By Avarucci Marco; Velasco Carlos
  11. Bandwidth selection for nonparametric kernel testing By Gao, Jiti; Gijbels, Irene
  12. Estimating DSGE models with long memory dynamics By Gianluca, MORETTI; Giulio, NICOLETTI
  14. A K-sample Homogeneity Test based on the Quantification of the p-p Plot By Jeroen Hinloopen; Rien Wagenvoort; Charles van Marrewijk
  15. Le Cam optimal tests for symmetry against Ferreira and Steel’s general skewed distributions By Christophe Ley; Davy Paindaveine
  16. A note on the estimation of long-run relationships in dependent cointegrated panels By Di Iorio, Francesca; Fachin, Stefano
  17. Bayesian Averaging over Many Dynamic Model Structures with Evidence on the Great Ratios and Liquidity Trap Risk By Rodney W. Strachan; Herman K. van Dijk
  18. Spline Smoothing over Difficult Regions By Siem Jan Koopman; Soon Yip Wong
  19. Fourth order pseudo maximum likelihood methods By Alberto Holly; Alain Montfort; Michael Rockinger
  20. GDP nowcasting with ragged-edge data : A semi-parametric modelling By Laurent Ferrara; Dominique Guegan; Patrick Rakotomarolahy
  21. A Comparison of Threshold Cointegration and Markov-Switching Vector Error Correction Models in Price Transmission Analysis By Ihle, Rico; Cramon-Taubadel, Stephan von
  22. Smooth-car mixed models for spatial count data By Dae-Jin Lee; Maria Durban
  23. Limit Theorems for Moving Averages of Discretized Processes Plus Noise By Jean Jacod; Mark Podolskij; Mathias Vetter
  24. Bayesian Forecasting of Value at Risk and Expected Shortfall using Adaptive Importance Sampling By Lennart Hoogerheide; Herman K. van Dijk
  25. Mixed Unit Roots and Deterministic Trends in Noncausality Tests By Ran, Tao; Zapata, Hector
  26. A Monthly Indicator of the Euro Area GDP By Cecilia Frale; Massimiliano Marcellino; Gian Luigi Mazzi; Tommaso Proietti
  27. Univariate Unobserved-Component Model with Non-Random Walk Permanent Component By Xu, Zhiwei

  1. By: Jean-Marie Dufour; Abderrahim Taamouti
    Abstract: In this paper, we derive simple point-optimal sign-based tests in the context of linear and nonlinear regression models with fixed regressors. These tests are exact, distribution-free, robust against heteroskedasticity of unknown form, and they may be inverted to obtain confidence regions for the vector of unknown parameters. Since the point-optimal sign tests depend on the alternative hypothesis, we propose an adaptive approach based on split-sample techniques in order to choose an alternative such that the power of point-optimal sign tests is close to the power envelope. The simulation results show that when approximately 10% of the sample is used to estimate the alternative and the rest to calculate the test statistic, the power of the point-optimal sign test is typically close to the power envelope. We present a Monte Carlo study to assess the performance of the proposed “quasi”-point-optimal sign test by comparing its size and power to those of some common tests which are supposed to be robust against heteroskedasticity. The results show that our procedures are superior.
    Keywords: Sign test, Point-optimal test, Nonlinear model, Heteroskedasticity, Exact inference, Distribution-free, Power envelope, Split-sample, Adaptive method, Projection
    JEL: C1 C12 C14 C15 C51
    Date: 2008–11
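The exactness claim in the abstract rests on a simple fact: if the errors have median zero, residual signs are i.i.d. Bernoulli(1/2) regardless of the error distribution. A minimal Python illustration of that property, using the classical exact sign test rather than the authors' point-optimal statistic; all data-generating numbers below are invented:

```python
# Minimal illustration of why sign-based tests are exact and
# distribution-free under heteroskedasticity of unknown form.
# This is the classical exact sign test, NOT the authors'
# point-optimal statistic; the simulated data are placeholders.
import math
import random

def sign_test_pvalue(resid):
    """Exact two-sided sign test: if the errors have median zero (H0),
    the count of positive residuals is Binomial(n, 1/2) whatever the
    error distribution, so the test is exact and distribution-free."""
    n = len(resid)
    s = sum(r > 0 for r in resid)
    k = min(s, n - s)
    tail = sum(math.comb(n, j) for j in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

random.seed(1)
# Heteroskedastic errors of unknown form do not affect the exactness.
x = [random.gauss(0, 1) for _ in range(100)]
e = [random.gauss(0, 1) * (1 + abs(xi)) for xi in x]
y = [0.8 * xi + ei for xi, ei in zip(x, e)]
# Test H0: beta = 0, under which the residuals are y itself.
print(round(sign_test_pvalue(y), 3))
```

The paper's adaptive step then spends roughly 10% of the sample on estimating the alternative before computing such a statistic on the remainder.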
  2. By: Hanck, Christoph
    Abstract: This paper proposes a new testing approach for panel unit roots that is, unlike previously suggested tests, robust to nonstationarity in the volatility process of the innovations of the time series in the panel. Nonstationary volatility arises, for instance, when there are structural breaks in the innovation variances. A prominent example is the reduction in GDP growth variances enjoyed by many industrialized countries, known as the 'Great Moderation.' The panel test is based on Simes' [Biometrika 1986, "An Improved Bonferroni Procedure for Multiple Tests of Significance"] classical multiple test, which combines evidence from time series unit root tests of the series in the panel. As time series unit root tests, we employ recently proposed tests of Cavaliere and Taylor [Journal of Time Series Analysis, "Time-Transformed Unit Root Tests for Models with Non-Stationary Volatility"]. The panel test is robust to general patterns of cross-sectional dependence and yet straightforward to implement, only requiring valid p-values of time series unit root tests, and no resampling. Monte Carlo experiments show that other panel unit root tests suffer from sometimes severe size distortions in the presence of nonstationary volatility, and that this defect can be remedied using the test proposed here. The new test is applied to test for a unit root in an OECD panel of gross domestic products, yielding inference robust to the 'Great Moderation.' We find little evidence of trend stationarity.
    Keywords: Nonstationary Volatility; Multiple Testing; Panel Unit Root Test; Cross-Sectional Dependence
    JEL: C12 C23
    Date: 2008–11–30
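The Simes (1986) combination rule the panel test builds on is simple enough to state in a few lines. A hedged sketch in Python; the p-values below are hypothetical placeholders, not output of any actual unit root test:

```python
# Sketch of Simes' (1986) rule for combining N univariate unit-root
# p-values into a panel-level test. The inputs are invented placeholders.

def simes_reject(pvalues, alpha=0.05):
    """Reject the joint null (all N series have a unit root) if the
    ordered p-values satisfy p_(k) <= k * alpha / N for at least one k."""
    p = sorted(pvalues)
    n = len(p)
    return any(p[k] <= (k + 1) * alpha / n for k in range(n))

# One very small p-value among four series triggers a panel rejection.
print(simes_reject([0.30, 0.004, 0.20, 0.60]))  # → True
print(simes_reject([0.50, 0.60, 0.70, 0.80]))   # → False
```

This matches the abstract's point that the procedure needs only valid univariate p-values and no resampling.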
  3. By: Drew Creal (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); André Lucas (VU University Amsterdam)
    Abstract: We propose a new class of observation driven time series models referred to as Generalized Autoregressive Score (GAS) models. The driving mechanism of the GAS model is the scaled score of the likelihood function. This approach provides a unified and consistent framework for introducing time-varying parameters in a wide class of non-linear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, the autoregressive conditional duration, the autoregressive conditional intensity, and the single source of error models. In addition, the GAS specification provides a wide range of new observation driven models. Examples include non-linear regression models with time-varying parameters, observation driven analogues of unobserved components time series models, multivariate point process models with time-varying parameters and pooling restrictions, new models for time-varying copula functions, and models for time-varying higher order moments. We study the properties of GAS models and provide several non-trivial examples of their application.
    Keywords: dynamic models; time-varying parameters; non-linearity; exponential family; marked point processes; copulas
    JEL: C10 C22 C32 C51
    Date: 2008–11–06
  4. By: Hafner Christian M.; Manner Hans (METEOR)
    Abstract: We propose a new dynamic copula model where the parameter characterizing dependence follows an autoregressive process. As this model class includes the Gaussian copula with stochastic correlation process, it can be viewed as a generalization of multivariate stochastic volatility models. Despite the complexity of the model, the decoupling of marginals and dependence parameters facilitates estimation. We propose estimation in two steps, where first the parameters of the marginal distributions are estimated, and then those of the copula. Parameters of the latent processes (volatilities and dependence) are estimated using efficient importance sampling (EIS). We discuss goodness-of-fit tests and ways to forecast the dependence parameter. For two bivariate stock index series, we show that the proposed model outperforms standard competing models.
    Keywords: econometrics;
    Date: 2008
  5. By: Giuseppe Cavaliere; David I. Harvey; Stephen J. Leybourne; A.M. Robert Taylor (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: In this paper we analyse the impact of non-stationary volatility on the recently developed unit root tests which allow for a possible break in trend occurring at an unknown point in the sample, considered in Harris, Harvey, Leybourne and Taylor (2008) [HHLT]. HHLT's analysis hinges on a new break fraction estimator which, when a break in trend occurs, is consistent for the true break fraction at rate Op(T^-1). Unlike other available estimators, however, when there is no trend break HHLT's estimator converges to zero at rate Op(T^-1/2). In their analysis HHLT assume the shocks to follow a linear process driven by IID innovations. Our first contribution is to show that HHLT's break fraction estimator retains the same consistency properties as demonstrated by HHLT for the IID case when the innovations display non-stationary behaviour of a quite general form, including, for example, the case of a single break in the volatility of the innovations which may or may not occur at the same time as a break in trend. However, as we subsequently demonstrate, the limiting null distributions of unit root statistics based around this estimator are not pivotal in the presence of non-stationary volatility. Associated Monte Carlo evidence is presented to quantify the impact of various models of non-stationary volatility on both the asymptotic and finite sample behaviour of such tests. A solution to the identified inference problem is then provided by considering wild bootstrap-based implementations of the HHLT tests, using the trend break estimator from the original sample data. The proposed bootstrap method does not require the practitioner to specify a parametric model for volatility, and is shown to perform very well in practice across a range of models.
    Keywords: Unit root tests, quasi difference de-trending, trend break, non-stationary volatility, wild bootstrap
    JEL: C22
    Date: 2008–12–02
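The wild bootstrap device mentioned in the abstract works by multiplying each residual by an independent random sign, which preserves any (possibly non-stationary) volatility pattern without requiring a parametric volatility model. A sketch of that resampling step only, not of the HHLT test implementation; the residual series is invented:

```python
# Sketch of the wild bootstrap resampling step: multiply residuals
# pointwise by i.i.d. Rademacher weights so any volatility pattern
# (here a mid-sample variance break) survives in every draw.
# This is the generic device only, not the HHLT test itself.
import random

def wild_bootstrap(resid, n_boot=999, seed=0):
    rng = random.Random(seed)
    return [[r * rng.choice((-1.0, 1.0)) for r in resid]
            for _ in range(n_boot)]

# Residuals with a variance break keep that break in each draw.
resid = [0.1] * 5 + [2.0] * 5
draws = wild_bootstrap(resid, n_boot=3)
print([abs(v) for v in draws[0]])
# → [0.1, 0.1, 0.1, 0.1, 0.1, 2.0, 2.0, 2.0, 2.0, 2.0]
```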
  6. By: Palm Franz C.; Smeekes Stephan; Urbain Jean-Pierre (METEOR)
    Abstract: In this paper we consider the issue of unit root testing in cross-sectionally dependent panels. We consider panels that may be characterized by various forms of cross-sectional dependence including (but not exclusive to) the popular common factor framework. We consider block bootstrap versions of the group-mean Im, Pesaran, and Shin (2003) and the pooled Levin, Lin, and Chu (2002) unit root coefficient DF-tests for panel data, originally proposed for a setting of no cross-sectional dependence beyond a common time effect. The tests, suited for testing for unit roots in the observed data, can be easily implemented as no specification or estimation of the dependence structure is required. Asymptotic properties of the tests are derived for T going to infinity and N finite. Asymptotic validity of the bootstrap tests is established in very general settings, including the presence of common factors and even cointegration across units. Properties under the alternative hypothesis are also considered. In a Monte Carlo simulation, the bootstrap tests are found to have rejection frequencies that are much closer to nominal size than the rejection frequencies for the corresponding asymptotic tests. The power properties of the bootstrap tests appear to be similar to those of the asymptotic tests.
    Keywords: Economics (Jel: A)
    Date: 2008
  7. By: Kaddour Hadri; Eiji Kurozumi
    Abstract: This paper develops a simple test for the null hypothesis of stationarity in heterogeneous panel data with cross-sectional dependence in the form of a common factor in the disturbance. We do not estimate the common factor but mop up its effect by employing the same method as the one proposed in Pesaran (2007) in the unit root testing context. Our test is basically the same as the KPSS test, but the regression is augmented by the cross-sectional average of the observations. We also develop a Lagrange multiplier (LM) test allowing for cross-sectional dependence and, under restrictive assumptions, compare our augmented KPSS test with the extended LM test under the null of stationarity, under the local alternative and under the fixed alternative, and discuss the differences between these two tests. We also extend our test to the more realistic case where the shocks are serially correlated. We use Monte Carlo simulations to examine the finite sample properties of the augmented KPSS test.
    Keywords: Panel data, stationarity, KPSS test, cross-sectional dependence, LM test, locally best test
    JEL: C12 C33
    Date: 2008–10
  8. By: Kleppe, Tore Selland; Skaug, Hans J.
    Abstract: Maximum likelihood has proved to be a valuable tool for fitting the log-normal stochastic volatility model to financial returns time series. Using a sequential change of variable framework, we are able to cast more general stochastic volatility models into a form appropriate for importance samplers based on the Laplace approximation. We apply the methodology to two example models, showing that efficient importance samplers can be constructed even for highly non-Gaussian latent processes such as square-root diffusions.
    Keywords: Change of Variable; Heston Model; Laplace Importance Sampler; Simulated Maximum Likelihood; Stochastic Volatility
    JEL: C13 C22
    Date: 2008–07–10
  9. By: Òscar Jordà; Massimiliano Marcellino
    Abstract: A path forecast refers to the sequence of forecasts 1 to H periods into the future. A summary of the range of possible paths the predicted variable may follow for a given confidence level requires construction of simultaneous confidence regions that adjust for any covariance between the elements of the path forecast. This paper shows how to construct such regions with the joint predictive density and Scheffé’s (1953) S-method. In addition, the joint predictive density can be used to construct simple statistics to evaluate the local internal consistency of a forecasting exercise of a system of variables. Monte Carlo simulations demonstrate that these simultaneous confidence regions provide approximately correct coverage in situations where traditional error bands, based on the collection of marginal predictive densities for each horizon, are vastly off mark. The paper showcases these methods with an application to the most recent monetary episode of interest rate hikes in the U.S. macroeconomy.
    Keywords: path forecast, simultaneous confidence region, error bands
    JEL: C32 C52 C53
    Date: 2008
  10. By: Avarucci Marco; Velasco Carlos (METEOR)
    Abstract: This paper develops new methods for determining the cointegration rank in a nonstationary fractionally integrated system, extending univariate optimal methods for testing the degree of integration. We propose a simple Wald test based on the singular value decomposition of the unrestricted estimate of the long run multiplier matrix. When the "strength" of the cointegrating relationship is less than 1/2, the test statistic has a standard asymptotic distribution, like Lagrange Multiplier tests exploiting local properties. We consider the behavior of our test under estimation of short run parameters and local alternatives. We compare our procedure with other cointegration tests based on different principles and find that the new method has better properties in a range of situations by using information on the alternative obtained through a preliminary estimate of the cointegration strength.
    Keywords: Economics (Jel: A)
    Date: 2008
  11. By: Gao, Jiti; Gijbels, Irene
    Abstract: We propose a sound approach to bandwidth selection in nonparametric kernel testing. The main idea is to find an Edgeworth expansion of the asymptotic distribution of the test concerned. Due to the involvement of a kernel bandwidth in the leading term of the Edgeworth expansion, we are able to establish closed-form expressions to explicitly represent the leading terms of both the size and power functions and then determine how the bandwidth should be chosen according to certain requirements for both the size and power functions. For example, when a significance level is given, we can choose the bandwidth such that the power function is maximized while the size function is controlled by the significance level. Both asymptotic theory and methodology are established. In addition, we develop an easy implementation procedure for the practical realization of the established methodology and illustrate this on two simulated examples and a real data example.
    Keywords: Choice of bandwidth parameter; Edgeworth expansion; nonparametric kernel testing; power function; size function
    JEL: C14
    Date: 2005–12
  12. By: Gianluca, MORETTI; Giulio, NICOLETTI
    Abstract: Recent literature claims that key variables such as aggregate productivity and inflation display long memory dynamics. We study the implications of this high degree of persistence for the estimation of Dynamic Stochastic General Equilibrium (DSGE) models. We show that long memory data produce substantial bias in the deep parameter estimates when a standard Kalman Filter-MLE procedure is used. We propose a modification of the Kalman Filter procedure, mainly an augmentation of the state space, which deals with this problem. By means of the augmented state space we can consistently estimate the model parameters as well as produce more accurate out-of-sample forecasts compared to the standard Kalman filter.
    Date: 2008–12–04
  13. By: Lambert, Dayton M.; Florax, Raymond J.G.M.; Cho, Seong-Hoon
    Abstract: This research note documents estimation procedures and results for an empirical investigation of the performance of the recently developed spatial, heteroskedasticity and autocorrelation consistent (HAC) covariance estimator calibrated with different kernel bandwidths. The empirical example is concerned with a hedonic price model for residential property values. The first bandwidth approach varies an a priori determined plug-in bandwidth criterion. The second method is a data driven cross-validation approach to determine the optimal neighborhood. The third approach uses a robust semivariogram to determine the range over which residuals are spatially correlated. Inference becomes more conservative as the plug-in bandwidth is increased. The data-driven approaches prove valuable because they are capable of identifying the optimal spatial range, which can subsequently be used to inform the choice of an appropriate bandwidth value. In our empirical example, pertaining to a standard spatial model and dataset, the results of the data driven procedures can only be reconciled with relatively high plug-in values (n^0.65 or n^0.75). The results for the semivariogram and the cross-validation approaches are very similar, which, given its computational simplicity, gives the semivariogram approach an edge over the more flexible cross-validation approach.
    Keywords: spatial HAC, semivariogram, bandwidth, hedonic model, Community/Rural/Urban Development, Demand and Price Analysis, Land Economics/Use, Research Methods/ Statistical Methods, C13, C31, R21,
    Date: 2008
  14. By: Jeroen Hinloopen (University of Amsterdam); Rien Wagenvoort (European Investment Bank, Luxembourg); Charles van Marrewijk (Erasmus University Rotterdam)
    Abstract: We propose a quantification of the p-p plot that assigns equal weight to all distances between the respective distributions: the surface between the p-p plot and the diagonal. This surface is labelled the Harmonic Weighted Mass (HWM) index. We introduce the diagonal-deviation (d-d) plot that allows the index to be computed exactly under all circumstances. For two balanced samples absent ties the finite sample distribution of the HWM index is derived. Simulations show that in most cases unbalanced samples and ties have little effect on this distribution. The d-d plot allows for a straightforward extension to the K-sample HWM index. As we have not been able to derive the distribution of the index for K>2, we simulate significance tables for K=3,...,15. An example involving economic growth rates of the G7 countries illustrates that the HWM test can have better power than alternative Empirical Distribution Function tests.
    Keywords: EDF test; p-p plot; power; d-d plot
    JEL: C12 C14
    Date: 2008–10–20
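The quantity behind the HWM index, the surface between the p-p plot and the diagonal, can be approximated numerically in a few lines. The authors compute it exactly via their d-d plot; the crude sketch below is only meant to convey the idea and is not their algorithm:

```python
# Crude numerical sketch of the surface between the p-p plot
# (F_y(t) plotted against F_x(t)) and the 45-degree diagonal.
# Not the authors' exact d-d-plot computation.
import bisect

def ecdf(sample):
    s = sorted(sample)
    n = len(s)
    return lambda t: bisect.bisect_right(s, t) / n

def pp_surface(x, y):
    """Approximate the integral of |F_y(t) - F_x(t)| dF_x(t),
    evaluated over the pooled sample points."""
    Fx, Fy = ecdf(x), ecdf(y)
    area, prev_u = 0.0, 0.0
    for t in sorted(set(x) | set(y)):
        u = Fx(t)
        area += abs(Fy(t) - u) * (u - prev_u)
        prev_u = u
    return area

# Identical samples: the p-p plot lies on the diagonal, surface is 0.
print(pp_surface([1, 2, 3, 4], [1, 2, 3, 4]))      # → 0.0
print(pp_surface([1, 2, 3, 4], [5, 6, 7, 8]) > 0)  # → True
```

A larger surface signals a larger discrepancy between the two distributions, which is what the HWM test statistic quantifies.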
  15. By: Christophe Ley; Davy Paindaveine
    Abstract: When testing symmetry of a univariate density, (parametric classes of) densities skewed by means of the general probability transform introduced in [7] are appealing alternatives. This paper first proposes parametric tests of symmetry that are locally and asymptotically optimal (in the Le Cam sense) against such alternatives. To improve on these parametric tests, which are valid under well-specified density types only, we turn them into semiparametric tests, either by using a standard studentization approach or by resorting to the invariance principle. The second approach leads to robust yet efficient signed-rank tests, which include the celebrated sign and Wilcoxon tests as special cases, and turn out to be Le Cam optimal irrespective of the underlying original symmetric density. Optimality, however, is only achieved under well-specified “skewing mechanisms”, and we therefore evaluate the overall performances of our tests by deriving their asymptotic relative efficiencies with respect to the classical test of skewness. A Monte-Carlo study confirms the asymptotic results.
    Keywords: Rank-based inference; tests of symmetry; asymmetry models; location tests; local asymptotic normality
    Date: 2008
  16. By: Di Iorio, Francesca; Fachin, Stefano
    Abstract: We address the issue of estimation and inference in dependent nonstationary panels of small cross-section dimensions. The main conclusion is that the best results are obtained applying bootstrap inference to single-equation estimators. SUR estimators perform badly, or are even unfeasible, when the time dimension is not very large compared to the cross-section dimension.
    Keywords: Panel cointegration; FM-OLS; FM-SUR.
    JEL: C13 C15 C33
    Date: 2008–09–01
  17. By: Rodney W. Strachan (The University of Queensland, Australia); Herman K. van Dijk (Erasmus University Rotterdam, the Netherlands)
    Abstract: A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is investigated, together with the effect of permanent shocks on business cycles. Second, the linear VAR model is extended to include a smooth transition function in a (monetary) equation and stochastic volatility in the disturbances. The risk of a liquidity trap in the U.S.A. and Japan is evaluated. Although this risk is found to be reasonably high, we find only mild evidence that the monetary policy transmission mechanism is different and that central banks consider the expected cost of a liquidity trap in policy setting. Posterior probabilities of different models are evaluated using Markov chain Monte Carlo techniques.
    Keywords: Posterior probability; Grassman manifold; Orthogonal group; Cointegration; Model averaging; Stochastic trend; Impulse response; Vector autoregressive model; Great Ratios; Liquidity trap
    JEL: C11 C32 C52
    Date: 2008–10–10
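The generic averaging step underlying procedures of this kind converts (log) marginal likelihoods into posterior model probabilities. A hedged sketch under equal prior model probabilities; the numbers are hypothetical, and the paper evaluates these quantities by Markov chain Monte Carlo rather than in closed form:

```python
# Generic Bayesian model averaging step: posterior model probabilities
# from log marginal likelihoods, assuming equal prior model
# probabilities. The log marginal likelihoods below are invented.
import math

def model_weights(log_marglik):
    m = max(log_marglik)                       # subtract max for stability
    w = [math.exp(l - m) for l in log_marglik]
    s = sum(w)
    return [wi / s for wi in w]

weights = model_weights([-100.0, -101.0, -105.0])
print([round(w, 3) for w in weights])  # → [0.727, 0.268, 0.005]
```

Quantities of interest (impulse responses, liquidity-trap probabilities) are then averaged across models with these weights.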
  18. By: Siem Jan Koopman (VU University Amsterdam); Soon Yip Wong (VU University Amsterdam)
    Abstract: We consider the problem of smoothing data on two-dimensional grids with holes or gaps. Such grids are often referred to as difficult regions. Since the data is not observed on these locations, the gap is not part of the domain. We cannot apply standard smoothing methods since they smooth over and across difficult regions. More unfavorable properties of standard smoothers become visible when the data is observed on an irregular grid in a non-rectangular domain. In this paper, we adopt smoothing spline methods within a state space framework to smooth data on one- or two-dimensional grids with difficult regions. We make a distinction between two types of missing observations to handle the irregularity of the grid and to ensure that no smoothing takes place over and across the difficult region. For smoothing on two-dimensional grids, we introduce a two-step spline smoothing method. The proposed solution applies to all smoothing methods that can be represented in a state space framework. We illustrate our methods for three different cases of interest.
    Keywords: Bivariate smoothing; Geo-statistics; Missing observations; Smoothing spline model; State space methods
    JEL: C13 C22 C32
    Date: 2008–11–18
  19. By: Alberto Holly; Alain Montfort; Michael Rockinger
    Abstract: The objective of this paper is to extend the results on Pseudo Maximum Likelihood (PML) theory derived in Gourieroux, Monfort, and Trognon (GMT) (1984) to a situation where the first four conditional moments are specified. Such an extension is relevant in light of pervasive evidence that conditional distributions are non-Gaussian in many economic situations. The key statistical tool here is the quartic exponential family, which allows us to generalize the PML2 and QGPML1 methods proposed in GMT(1984) to PML4 and QGPML2 methods, respectively. An asymptotic theory is developed which shows, in particular, that the QGPML2 method reaches the semi-parametric bound. The key numerical tool that we use is the Gauss-Freud integration scheme which solves a computational problem that has previously been raised in several econometric fields. Simulation exercises show the feasibility and robustness of the methods.
    Keywords: Quartic Exponential Family, Pseudo Maximum Likelihood, Skewness, Kurtosis
    JEL: C01 C13 C16 C22
    Date: 2008–08
  20. By: Laurent Ferrara (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Banque de France - Business Conditions and Macroeconomic Forecasting Directorate); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: This paper formalizes the process of forecasting unbalanced monthly data sets in order to obtain robust nowcasts and forecasts of the quarterly GDP growth rate through semi-parametric modelling. This innovative approach relies on the use of non-parametric methods, based on nearest neighbors and on radial basis function approaches, to forecast the monthly variables involved in the parametric modelling of GDP using bridge equations. A real-time experience is carried out on Euro area vintage data in order to anticipate, with an advance ranging from six months to one month, the GDP flash estimate for the whole zone.
    Keywords: Euro area GDP, real-time nowcasting, forecasting, non-parametric models.
    Date: 2008–11
  21. By: Ihle, Rico; Cramon-Taubadel, Stephan von
    Abstract: We compare two regime-dependent econometric models for price transmission analysis, namely the threshold vector error correction model and the Markov-switching vector error correction model. We first provide a detailed characterization of each of the models, which is followed by a comprehensive comparison. We find that the assumptions regarding the nature of their regime-switching mechanisms are fundamentally different, so that each model is suitable for a certain type of nonlinear price transmission. Furthermore, we conduct a Monte Carlo experiment in order to study the performance of the estimation techniques of both models for simulated data. We find that both models are adequate for studying price transmission since their characteristics match the underlying economic theory and hence allow for an easy interpretation. Nevertheless, the results of the corresponding estimation techniques do not reproduce the true parameters and are not robust against nuisance parameters. The comparison is supplemented by a review of empirical studies in price transmission analysis, in which mostly the threshold vector error correction model is applied.
    Keywords: price transmission, market integration, threshold vector error correction model, Markov-switching vector error correction model, comparison, nonlinear time series analysis, Agricultural Finance,
    Date: 2008
  22. By: Dae-Jin Lee; Maria Durban
    Abstract: Penalized splines (P-splines) and individual random effects are used for the analysis of spatial count data. P-splines are represented as mixed models to give a unified approach to the model estimation procedure. First, a model where the spatial variation is modelled by a two-dimensional P-spline at the centroids of the areas or regions is considered. In addition, individual area-effects are incorporated as random effects to account for individual variation among regions. Finally, the model is extended by considering a conditional autoregressive (CAR) structure for the random effects, these are the so called “Smooth-CAR” models, with the aim of separating the large-scale geographical trend, and local spatial correlation. The methodology proposed is applied to the analysis of lip cancer incidence rates in Scotland.
    Keywords: Mixed models, P-splines, Overdispersion, Negative Binomial, PQL, CAR models, Scottish lip cancer data
    Date: 2008–11
  23. By: Jean Jacod; Mark Podolskij; Mathias Vetter (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: This paper presents some limit theorems for certain functionals of moving averages of semimartingales plus noise, which are observed at high frequency. Our method generalizes the pre-averaging approach (see [13],[11]) and provides consistent estimates for various characteristics of general semimartingales. Furthermore, we prove the associated multidimensional (stable) central limit theorems. As expected, we find central limit theorems with a convergence rate n^1/4, where n is the number of observations.
    Keywords: central limit theorem, high frequency observations, microstructure noise, quadratic variation, semimartingale, stable convergence.
    JEL: C10 C13 C14
    Date: 2008–12–01
  24. By: Lennart Hoogerheide (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: An efficient and accurate approach is proposed for forecasting Value at Risk [VaR] and Expected Shortfall [ES] measures in a Bayesian framework. This consists of a new adaptive importance sampling method for Quantile Estimation via Rapid Mixture of t approximations [QERMit]. As a first step the optimal importance density is approximated, after which multi-step `high loss' scenarios are efficiently generated. Numerical standard errors are compared in simple illustrations and in an empirical GARCH model with Student-t errors for daily S&P 500 returns. The results indicate that the proposed QERMit approach outperforms several alternative approaches in the sense of more accurate VaR and ES estimates given the same amount of computing time, or equivalently requiring less computing time for the same numerical accuracy.
    Keywords: Value at Risk; Expected Shortfall; numerical accuracy; numerical standard error; importance sampling; mixture of Student-t distributions; variance reduction technique
    JEL: C11 C15 C53 D81
    Date: 2008–10–02
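To fix the two risk measures the abstract targets: VaR at level alpha is the (1 - alpha) empirical quantile of the losses, and ES is the average loss at or beyond the VaR. A plain Monte Carlo sketch with invented returns; the paper's QERMit method instead draws from an adaptive importance density so far fewer simulations are wasted outside the 'high loss' region:

```python
# Plain Monte Carlo VaR and ES from a sample of (simulated) returns.
# Definitional sketch only, not the paper's importance sampling method;
# the returns are invented placeholders.

def var_es(returns, alpha=0.01):
    """VaR at level alpha is the (1 - alpha) empirical quantile of the
    losses; ES is the average loss at or beyond the VaR."""
    losses = sorted(-r for r in returns)          # losses, ascending
    k = min(int((1 - alpha) * len(losses)), len(losses) - 1)
    tail = losses[k:]
    return losses[k], sum(tail) / len(tail)

returns = [-0.05, -0.02, 0.01, 0.00, 0.02, -0.10, 0.03, 0.01, -0.01, 0.04]
var, es = var_es(returns, alpha=0.10)
print(var, es)
```

By construction ES is never smaller than VaR, which is why ES is the more conservative of the two measures.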
  25. By: Ran, Tao; Zapata, Hector
    Abstract: Using Japanese economic data and a Monte Carlo simulation, this study analyzes the consequences of ignoring deterministic trends in mixed unit-root data for Granger noncausality tests. Results from an augmented VAR suggest over-rejection in certain empirically relevant cases at various sample sizes.
    Keywords: Research Methods/ Statistical Methods,
    Date: 2008
  26. By: Cecilia Frale; Massimiliano Marcellino; Gian Luigi Mazzi; Tommaso Proietti
    Abstract: A continuous monitoring of the evolution of the economy is fundamental for the decisions of public and private decision makers. This paper proposes a new monthly indicator of the euro area real Gross Domestic Product (GDP), with several original features. First, it considers both the output side (six branches of the NACE classification) and the expenditure side (the main GDP components) and combines the two estimates with optimal weights reflecting their relative precision. Second, the indicator is based on information at both the monthly and quarterly level, modelled with a dynamic factor specification cast in state-space form. Third, since estimation of the multivariate dynamic factor model can be numerically complex, computational efficiency is achieved by implementing univariate filtering and smoothing procedures. Finally, special attention is paid to chain-linking and its implications, via a multistep procedure that exploits the additivity of the volume measures expressed at the prices of the previous year.
    Keywords: Temporal Disaggregation, Multivariate State Space Models, Dynamic factor Models, Kalman filter and smoother, Chain-linking
    JEL: E32 E37 C53
    Date: 2008
  27. By: Xu, Zhiwei
    Abstract: In this note, we revisit the univariate unobserved-component (UC) model of US GDP by relaxing the traditional random-walk assumption of the permanent component. Since our general UC model is unidentified, we investigate the upper bound of the contribution of the transitory component, and find it is dominated by the permanent component.
    Keywords: Unobserved-Component Model; Random Walk Assumption; Permanent and Transitory Shocks
    JEL: E32 C22 C49
    Date: 2008–11–11

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.