nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒08‒06
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Estimation of a nonparametric regression spectrum for multivariate time series By Jan Beran; Mark A. Heiler
  2. Nonparametric Regression on Latent Covariates with an Application to Semiparametric GARCH-in-Mean Models By Christian Conrad; Enno Mammen
  3. A nonparametric regression cross spectrum for multivariate time series By Jan Beran; Mark A. Heiler
  4. On parameter estimation for locally stationary long-memory processes By Jan Beran
  5. Modelling financial time series with SEMIFAR-GARCH model By Yuanhua Feng; Jan Beran; Keming Yu
  6. Causal Effects of Monetary Shocks: Semiparametric Conditional Independence Tests with a Multinomial Propensity Score By Angrist, Joshua; Kuersteiner, Guido M.
  7. The Effect of the Great Moderation on the U.S. Business Cycle in a Time-varying Multivariate Trend-cycle Model By Drew Creal; Siem Jan Koopman; Eric Zivot
  8. Quantiles for Fractions and Other Mixed Data By Jose A. F. Machado; J. M. C. Santos Silva
  9. Multivariate Fractionally Integrated APARCH Modeling of Stock Market Volatility: A multi-country study By Christian Conrad; Menelaos Karanasos; Ning Zeng
  10. Robust Two-Stage Least Squares: some Monte Carlo experiments By Mishra, SK
  11. Invalidity of the Bootstrap and the m Out of n Bootstrap for Interval Endpoints Defined by Moment Inequalities By Donald W.K. Andrews; Sukjin Han
  12. Practical Volatility Modeling for Financial Market Risk Management By Shamiri, Ahmed; Shaari, Abu Hassan; Isa, Zaidi
  13. Forecasting Using Functional Coefficients Autoregressive Models By Giancarlo Bruno
  14. Robustness analysis and convergence of empirical finite-time ruin probabilities and estimation risk solvency margin. By Stéphane Loisel; Christian Mazza; Didier Rullière
  15. Optimal Convergence Rates in Nonparametric Regression with Fractional Time Series Errors By Yuanhua Feng; Jan Beran
  16. Testing procedures for detection of linear dependencies in efficiency models By Antonio Peyrache; Tim Coelli
  17. Identification issues in models for underreported counts By Georgios Papadopoulos; J. M. C. Santos Silva
  18. Kernel Density Estimation Based on Grouped Data: The Case of Poverty Assessment By Camelia Minoiu; Sanjay G. Reddy
  19. Predicting global stock returns By Erik Hjalmarsson
  20. Spurious Regressions in Technical Trading: Momentum or Contrarian? By Mototsugu Shintani; Tomoyoshi Yabu; and Daisuke Nagakura
  21. Measuring Service Quality: The Opinion of Europeans about Utilities By P. A. Ferrari; S. Salini
  22. Carbon Emissions and Economic Growth: Homogeneous Causality in Heterogeneous Panels By David Maddison; Katrin Rehdanz

  1. By: Jan Beran (University of Konstanz); Mark A. Heiler
    Abstract: Estimation of a nonparametric regression spectrum based on the periodogram is considered. Neither trend estimation nor smoothing of the periodogram is required. Alternatively, for cases where spectral estimation of phase shifts fails and the shift does not depend on frequency, a time domain estimator of the lag-shift is defined. Asymptotic properties of the frequency and time domain estimators are derived. Simulations and a data example illustrate the methods.
    Keywords: Periodogram, cross spectrum, regression spectrum, phase, wavelets.
    Date: 2007–12–01
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0712&r=ecm
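    A minimal Python sketch of the time-domain idea (not the authors' estimator): when the phase shift between two trend components does not depend on frequency, the cross-periodogram phase is linear in frequency with slope equal to the lag. The simulated trend, noise level and amplitude weighting below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n, tau = 512, 5
      t = np.arange(n)
      trend = np.sin(2 * np.pi * t / 64)                  # common deterministic trend
      x = trend + 0.2 * rng.standard_normal(n)
      y = np.roll(trend, tau) + 0.2 * rng.standard_normal(n)   # same trend, lagged by tau

      fx, fy = np.fft.rfft(x), np.fft.rfft(y)
      cross = fx * np.conj(fy)                            # cross-periodogram
      freqs = np.fft.rfftfreq(n)
      band = (freqs > 0) & (freqs < 0.05)                 # low-frequency (trend) band
      omega = 2 * np.pi * freqs[band]
      phase = np.angle(cross[band])                       # for a pure lag: phase = omega * tau
      w = np.abs(cross[band])                             # weight by cross-amplitude
      tau_hat = np.sum(w * phase * omega) / np.sum(w * omega ** 2)
      print(f"estimated lag: {tau_hat:.2f} (true: {tau})")
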
  2. By: Christian Conrad (University of Heidelberg, Department of Economics); Enno Mammen (University of Mannheim, Department of Economics)
    Abstract: We consider time series models in which the conditional mean of the response variable given the past depends on latent covariates. We assume that the covariates can be estimated consistently and use an iterative nonparametric kernel smoothing procedure for estimating the conditional mean function. The covariates are assumed to depend (non)parametrically on past values of the covariates and of the observations. Our procedure is based on iterative fits of the covariates and nonparametric kernel smoothing of the conditional mean function. An asymptotic theory for the resulting kernel estimator is developed and the estimator is used for testing parametric specifications of the mean function. Our leading example is a semiparametric class of GARCH-in-Mean models. In this set-up our procedure provides a formal framework for testing economic theories that postulate functional relations between macroeconomic or financial variables and their conditional second moments. We illustrate the usefulness of the methodology by testing the linear risk-return relation predicted by the ICAPM.
    Keywords: Specification test, GARCH-M, semiparametric regression, risk premium, ICAPM.
    JEL: C12 C14 C22 C52 G12
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:awi:wpaper:0473&r=ecm
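    As a sketch of one ingredient of the procedure, the following simulates a linear GARCH(1,1)-in-mean and applies Nadaraya-Watson kernel smoothing of returns on the conditional-variance covariate. The filtered variances are taken as given and the paper's iterative refitting of the latent covariate is omitted; parameter values and the rule-of-thumb bandwidth are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n, lam = 2000, 0.5
      omega, alpha, beta = 0.05, 0.08, 0.90
      h, y = np.empty(n), np.empty(n)
      h[0] = omega / (1 - alpha - beta)
      for t in range(n):
          y[t] = lam * h[t] + np.sqrt(h[t]) * rng.standard_normal()  # GARCH-in-mean
          if t + 1 < n:
              h[t + 1] = omega + alpha * (y[t] - lam * h[t]) ** 2 + beta * h[t]

      def nw(grid, x, y, bw):
          """Nadaraya-Watson regression of y on x with a Gaussian kernel."""
          k = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2)
          return (k @ y) / k.sum(axis=1)

      grid = np.linspace(np.quantile(h, 0.05), np.quantile(h, 0.95), 20)
      bw = 1.06 * h.std() * n ** (-0.2)                   # Silverman-type bandwidth
      m_hat = nw(grid, h, y, bw)
      print(np.c_[grid, m_hat, lam * grid])               # estimate vs. true linear relation
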
  3. By: Jan Beran (University of Konstanz); Mark A. Heiler
    Abstract: We consider dependence structures in multivariate time series that are characterized by deterministic trends. Results from spectral analysis for stationary processes are extended to deterministic trend functions. A regression cross covariance and spectrum are defined. Estimation of these quantities is based on wavelet thresholding. The method is illustrated by a simulated example and a three-dimensional time series consisting of ECG, blood pressure and cardiac stroke volume measurements.
    Keywords: Nonparametric trend estimation, cross spectrum, wavelets, regression spectrum, phase, threshold estimator
    Date: 2008–01–01
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0801&r=ecm
  4. By: Jan Beran (University of Konstanz)
    Abstract: We consider parameter estimation for time-dependent locally stationary long-memory processes. The asymptotic distribution of an estimator based on the local infinite autoregressive representation is derived, and asymptotic formulas for the mean squared error of the estimator, and the asymptotically optimal bandwidth are obtained. In spite of long memory, the optimal bandwidth turns out to be of the order n^(-1/5) and inversely proportional to the square of the second derivative of d. In this sense, local estimation of d is comparable to regression smoothing with iid residuals.
    Keywords: long memory, fractional ARIMA process, local stationarity, bandwidth selection
    Date: 2007–12–01
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0713&r=ecm
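    The sketch below conveys the local-window idea with a simpler tool than the paper's AR(infinity)-based estimator: a log-periodogram (GPH) regression applied in windows around selected time points of a simulated ARFIMA(0,d,0) series with constant d. Window width, the number of frequencies m and all tuning constants are illustrative assumptions.

      import numpy as np

      def arfima(n, d, rng, burn=1000):
          """Simulate ARFIMA(0,d,0) via its truncated MA(infinity) expansion."""
          m = n + burn
          psi = np.empty(m); psi[0] = 1.0
          for j in range(1, m):
              psi[j] = psi[j - 1] * (j - 1 + d) / j
          return np.convolve(rng.standard_normal(m), psi)[burn:burn + n]

      def gph_d(x, m):
          """Log-periodogram (GPH) regression estimate of the memory parameter d."""
          n = len(x)
          I = np.abs(np.fft.fft(x - x.mean())) ** 2 / n
          k = np.arange(1, m + 1)
          lam = 2 * np.pi * k / n
          reg = -np.log(4 * np.sin(lam / 2) ** 2)
          return np.polyfit(reg, np.log(I[k]), 1)[0]

      rng = np.random.default_rng(8)
      x = arfima(4000, d=0.3, rng=rng)
      b = 400                                     # local window half-width (bandwidth)
      for t0 in (1000, 2000, 3000):               # local estimates of d at three points
          seg = x[t0 - b:t0 + b]
          print(t0, round(gph_d(seg, m=int(len(seg) ** 0.5)), 3))
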
  5. By: Yuanhua Feng (Heriot-Watt University, Edinburgh); Jan Beran; Keming Yu
    Abstract: A class of semiparametric fractional autoregressive GARCH models (SEMIFAR-GARCH), which includes deterministic trends, difference stationarity, stationarity with short- and long-range dependence, and heteroskedastic model errors, is very powerful for modelling financial time series. This paper discusses the fitting of the model, including an efficient algorithm and parameter estimation for the GARCH error term, so that the model can be applied in practice. We then illustrate the model and the estimation methods with several different financial data sets.
    Keywords: Financial time series, GARCH model, SEMIFAR model, parameter estimation, kernel estimation, asymptotic property.
    Date: 2007–12–01
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0714&r=ecm
  6. By: Angrist, Joshua (MIT); Kuersteiner, Guido M. (University of California, Davis)
    Abstract: Macroeconomists have long been concerned with the causal effects of monetary policy. When the identification of causal effects is based on a selection-on-observables assumption, non-causality amounts to the conditional independence of outcomes and policy changes. This paper develops a semiparametric test for conditional independence in time series models linking a multinomial policy variable with unobserved potential outcomes. Our approach to conditional independence testing is motivated by earlier parametric tests, as in Romer and Romer (1989, 1994, 2004). The procedure developed here is semiparametric in the sense that we model the process determining the distribution of treatment – the policy propensity score – but leave the model for outcomes unspecified. A conceptual innovation is that we adapt the cross-sectional potential outcomes framework to a time series setting. This leads to a generalized definition of Sims (1980) causality. A technical contribution is the development of root-T consistent distribution-free inference methods for full conditional independence testing, appropriate for dependent data and allowing for first-step estimation of the propensity score.
    Keywords: monetary policy, propensity score, multinomial treatments, causality
    JEL: E52 C22 C31
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3606&r=ecm
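    A stylized sketch of the testing idea, assuming a multinomial logit policy propensity score; the data-generating design is a placeholder in which policy is independent of outcomes by construction. The naive t-ratios below ignore the corrections for dependent data and first-step estimation that the paper develops.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      T = 1000
      x = rng.standard_normal((T, 2))            # observed policy determinants
      D = rng.integers(0, 3, T)                  # multinomial policy (independent here)
      y_next = rng.standard_normal(T)            # future outcome

      p = LogisticRegression(max_iter=1000).fit(x, D).predict_proba(x)
      # Under conditional independence, E[(1{D=j} - p_j(x)) * y_{t+1}] = 0 for all j.
      for j in range(3):
          s = ((D == j) - p[:, j]) * y_next
          print(j, s.mean() / (s.std() / np.sqrt(T)))
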
  7. By: Drew Creal (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Eric Zivot (University of Washington)
    Abstract: In this paper we investigate whether the dynamic properties of the U.S. business cycle have changed in the last fifty years. For this purpose we develop a flexible business cycle indicator that is constructed from a moderate set of macroeconomic time series. The coincident economic indicator is based on a multivariate trend-cycle decomposition model that accounts for time variation in macroeconomic volatility, known as the great moderation. In particular, we consider an unobserved components time series model with a common cycle that is shared across different time series but adjusted for phase shift and amplitude. The extracted cycle can be interpreted as the result of a model-based bandpass filter and is designed to emphasize the business cycle frequencies that are of interest to applied researchers and policymakers. Stochastic volatility processes and mixture distributions for the irregular components and the common cycle disturbances enable us to account for all the heteroskedasticity present in the data. The empirical results are based on a Bayesian analysis and show that time-varying volatility is only present in a selection of idiosyncratic components, while the coefficients driving the dynamic properties of the business cycle indicator have remained stable over the last fifty years.
    Keywords: Bandpass filter; Markov chain Monte Carlo; Stochastic volatility; Trend-cycle decomposition; Unobserved components time series model
    JEL: C11 C32 E32
    Date: 2008–07–17
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20080069&r=ecm
  8. By: Jose A. F. Machado; J. M. C. Santos Silva
    Abstract: This paper studies the estimation of quantile regression for fractional data, focusing on the case where there are mass-points at zero and/or one. More generally, we propose a simple strategy for the estimation of the conditional quantiles of data from mixed distributions, which combines standard results on the estimation of censored and Box-Cox quantile regressions. The implementation of the proposed method is illustrated using a well-known dataset.
    Date: 2008–07–29
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:656&r=ecm
  9. By: Christian Conrad (University of Heidelberg, Department of Economics); Menelaos Karanasos (Brunel University, Dept. of Economics and Finance); Ning Zeng (Brunel University, Dept. of Economics and Finance)
    Abstract: Tse (1998) proposes a model which combines the fractionally integrated GARCH formulation of Baillie, Bollerslev and Mikkelsen (1996) with the asymmetric power ARCH specification of Ding, Granger and Engle (1993). This paper analyzes the applicability of a multivariate constant conditional correlation version of the model to national stock market returns for eight countries. We find this multivariate specification to be generally applicable once power, leverage and long-memory effects are taken into consideration. In addition, we find that both the optimal fractional differencing parameter and power transformation are remarkably similar across countries. Out-of-sample evidence for the superior forecasting ability of the multivariate FIAPARCH framework is provided in terms of forecast error statistics and tests for equal forecast accuracy of the various models.
    Keywords: Asymmetric Power ARCH, Fractional integration, Stock returns, Volatility forecast evaluation
    JEL: C13 C22 C52
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:awi:wpaper:0472&r=ecm
  10. By: Mishra, SK
    Abstract: Two-Stage Least Squares (2-SLS) is a well-known econometric technique used to estimate the parameters of a multi-equation (or simultaneous-equations) econometric model when errors across the equations are not correlated and the equation(s) concerned is (are) over-identified or exactly identified. However, in the presence of outliers in the data matrix, the classical 2-SLS performs very poorly. In this study a method is proposed to conveniently generalize 2-SLS to weighted 2-SLS (W2-SLS), which is robust to the effects of outliers and perturbations in the data matrix. Monte Carlo experiments have been conducted to demonstrate the performance of the proposed method. It has been found that the robustness of the proposed method is not much destabilized by the magnitude of the outliers, but it is sensitive to the number of outliers/perturbations in the data matrix. The breakdown point of the method is quite high, somewhere between 45 and 50 percent of the number of points in the data matrix.
    Keywords: Two-Stage Least Squares; multi-equation econometric model; simultaneous equations; outliers; robust; weighted least squares; Monte Carlo experiments; unbiasedness; efficiency; breakdown point; perturbation; structural parameters; reduced form
    JEL: C13 C63 C14 C87 C15 C30
    Date: 2008–07–26
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:9737&r=ecm
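    The sketch below implements plain 2SLS in a few lines of numpy together with one generic way to weight it against outliers (iteratively reweighted 2SLS with Huber weights on the structural residuals). This is an illustrative robustification under simplified assumptions, not necessarily Mishra's W2-SLS weighting scheme; the demo design omits endogeneity for brevity.

      import numpy as np

      def tsls(y, X, Z):
          """Standard 2SLS: project X on the instruments Z, regress y on the projection."""
          P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
          Xh = P @ X
          return np.linalg.solve(Xh.T @ X, Xh.T @ y)

      def weighted_tsls(y, X, Z, iters=10, c=1.345):
          """Iteratively reweighted 2SLS with Huber weights on structural residuals."""
          w = np.ones(len(y))
          for _ in range(iters):
              sw = np.sqrt(w)
              b = tsls(y * sw, X * sw[:, None], Z * sw[:, None])
              r = y - X @ b
              s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust (MAD) scale
              u = np.abs(r) / max(s, 1e-12)
              w = np.where(u <= c, 1.0, c / u)                   # Huber weights
          return b

      rng = np.random.default_rng(12)
      n = 200
      z = rng.standard_normal((n, 2))
      x = z @ np.array([1.0, 0.5]) + rng.standard_normal(n)
      y = 2.0 * x + rng.standard_normal(n)
      y[:5] += 50                                                # a few gross outliers
      print(tsls(y, x[:, None], z), weighted_tsls(y, x[:, None], z))
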
  11. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Sukjin Han (Dept. of Economics, Yale University)
    Abstract: This paper analyzes the finite-sample and asymptotic properties of several bootstrap and m out of n bootstrap methods for constructing confidence interval (CI) endpoints in models defined by moment inequalities. In particular, we consider using these methods directly to construct CI endpoints. By considering two very simple models, the paper shows that neither the bootstrap nor the m out of n bootstrap is valid in finite samples or in a uniform asymptotic sense in general when applied directly to construct CI endpoints. In contrast, other results in the literature show that other ways of applying the bootstrap, m out of n bootstrap, and subsampling do lead to uniformly asymptotically valid confidence sets in moment inequality models. Thus, the uniform asymptotic validity of resampling methods in moment inequality models depends on the way in which the resampling methods are employed.
    Keywords: Bootstrap, Coverage probability, m out of n bootstrap, Moment inequality model, Partial identification, Subsampling
    JEL: C01
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1671&r=ecm
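    The paper's moment-inequality setting is more delicate, but the classic toy example below illustrates the boundary problem: for the sample maximum of a Uniform(0, theta) sample, the n-out-of-n bootstrap is inconsistent while the m out of n version works. Sample sizes are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n, m, B, theta = 200, 20, 2000, 1.0
      x = rng.uniform(0, theta, n)
      th = x.max()                                   # boundary (endpoint) estimator
      full = np.array([n * (th - rng.choice(x, n).max()) for _ in range(B)])
      moon = np.array([m * (th - rng.choice(x, m).max()) for _ in range(B)])
      # The true law of n*(theta - th) is approximately Exponential with mean theta.
      # The n-out-of-n bootstrap puts mass ~ 1 - 1/e at exactly 0 and fails;
      # the m-out-of-n version puts only vanishing mass (~ m/n) there.
      print((full == 0).mean(), (moon == 0).mean())
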
  12. By: Shamiri, Ahmed; Shaari, Abu Hassan; Isa, Zaidi
    Abstract: Choosing the most suitable volatility model and distribution specification is a demanding task. This paper introduces an analyzing procedure using the Kullback-Leibler information criterion (KLIC) as a statistical tool to evaluate and compare the predictive abilities of possibly misspecified density forecast models. The main advantage of this statistical tool is that we use censored likelihood functions to compute the tail minimum of the KLIC, so as to compare the performance of density forecast models in the tails. We include an illustrative simulation and an empirical application comparing a set of distributions, including symmetric and asymmetric distributions, and a family of GARCH volatility models. We illustrate the use of our approach on a daily index, the Kuala Lumpur Composite Index (KLCI). Our results show that the choice of the conditional distribution appears to be a more dominant factor in determining the adequacy of density forecasts than the choice of volatility model. Furthermore, the results support a skewed distribution for KLCI returns.
    Keywords: Density forecast; Conditional distribution; Forecast accuracy; KLIC; GARCH models
    JEL: D53 C32 C16 C52
    Date: 2007–08–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:9790&r=ecm
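    A stylized version of a censored-likelihood tail score in the spirit of the paper's KLIC tool: inside the tail region the full log density is scored, outside it only the total non-tail mass. The Student-t(5) "returns" and the variance-matched normal competitor are illustrative assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      y = stats.t.rvs(df=5, size=5000, random_state=rng)       # fat-tailed "returns"
      models = {"normal (matched variance)": stats.norm(scale=np.sqrt(5 / 3)),
                "Student-t(5)": stats.t(5)}
      q = np.quantile(y, 0.05)                                 # left-tail region {y < q}
      for name, d in models.items():
          full = d.logpdf(y).mean()                            # average log score
          cens = np.where(y < q, d.logpdf(y), np.log(d.sf(q))).mean()  # censored score
          # Larger is better; score differences estimate KLIC gaps between models.
          print(f"{name}: log score {full:.4f}, tail-censored {cens:.4f}")
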
  13. By: Giancarlo Bruno (ISAE - Institute for Studies and Economic Analyses)
    Abstract: The use of linear parametric models for forecasting economic time series is widespread among practitioners, despite considerable evidence of non-linearities in many such series. However, the empirical results stemming from the use of non-linear models are not always as good as expected. This has sometimes been attributed to the difficulty of correctly specifying a non-linear parametric model. In this paper I address this issue by using a more general non-parametric approach, which can be used both as a preliminary tool to aid in specifying a suitable parametric model and as an autonomous modelling strategy. The results are promising, in that the non-parametric approach achieves a good forecasting record for a considerable number of series.
    Keywords: Non-linear Time-Series Models, Non-Parametric Models.
    JEL: C52 C53
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:isa:wpaper:98&r=ecm
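    A minimal sketch of a functional coefficient AR fit by local-constant (kernel-weighted) least squares, with a lagged level as the smoothing variable; one-step forecasts would evaluate the fitted coefficient functions at the most recent lag. The EXPAR-type data-generating process and bandwidth are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def far_fit(y, u_grid, bw, p=2, d=1):
          """Local-constant fit of y_t = sum_j f_j(y_{t-d}) * y_{t-j} + e_t."""
          T = len(y)
          start = max(p, d)
          Y = y[start:]
          X = np.column_stack([y[start - j:T - j] for j in range(1, p + 1)])
          U = y[start - d:T - d]                       # smoothing variable
          coefs = np.empty((len(u_grid), p))
          for i, u in enumerate(u_grid):
              w = np.exp(-0.5 * ((U - u) / bw) ** 2)   # Gaussian kernel weights
              coefs[i] = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ Y)
          return coefs

      rng = np.random.default_rng(11)
      y = np.zeros(600)
      e = rng.standard_normal(600)
      for t in range(1, 600):
          y[t] = (0.6 - 0.5 * np.exp(-y[t - 1] ** 2)) * y[t - 1] + e[t]  # EXPAR DGP
      grid = np.linspace(-2, 2, 9)
      print(far_fit(y, grid, bw=0.5, p=1))             # recovers f(u) = 0.6 - 0.5*exp(-u^2)
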
  14. By: Stéphane Loisel (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - Université Claude Bernard - Lyon I); Christian Mazza (Département de Mathématiques - Université de Fribourg); Didier Rullière (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - Université Claude Bernard - Lyon I)
    Abstract: We consider the classical risk model and carry out a sensitivity and robustness analysis of finite-time ruin probabilities. We provide algorithms to compute the related influence functions. We also prove the weak convergence of a sequence of empirical finite-time ruin probabilities starting from zero initial reserve toward a Gaussian random variable. We define the concept of a reliable finite-time ruin probability as a Value-at-Risk of the estimator of the finite-time ruin probability. To control this robust risk measure, an additional initial reserve is needed, called the Estimation Risk Solvency Margin (ERSM). We apply our results to show how portfolio experience could be rewarded by cut-offs in solvency capital requirements. An application to catastrophe contamination and numerical examples are also developed.
    Keywords: Finite-time ruin probability; robustness; Solvency II; reliable ruin probability; asymptotic Normality; influence function; Estimation Risk Solvency Margin (ERSM)
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00168714_v1&r=ecm
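    A minimal Monte Carlo sketch of the empirical finite-time ruin probability in the classical compound Poisson risk model, where ruin can only occur at claim instants. Premium rate, claim law and horizon are illustrative assumptions; a "reliable" version in the paper's sense would take a Value-at-Risk of this estimator, e.g. from its binomial sampling distribution.

      import numpy as np

      def ruin_prob(u, c, lam, horizon, claim, n_sim=20000, seed=5):
          """Monte Carlo finite-time ruin probability for U(t) = u + c*t - S(t)."""
          rng = np.random.default_rng(seed)
          ruined = 0
          for _ in range(n_sim):
              t, s = 0.0, 0.0
              while True:
                  t += rng.exponential(1 / lam)       # next claim arrival
                  if t > horizon:
                      break
                  s += claim(rng)                     # aggregate claims so far
                  if u + c * t - s < 0:               # surplus checked at claim times
                      ruined += 1
                      break
          return ruined / n_sim

      psi = ruin_prob(u=10.0, c=1.2, lam=1.0, horizon=50.0,
                      claim=lambda g: g.exponential(1.0))
      print(psi)
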
  15. By: Yuanhua Feng (Heriot-Watt University, Edinburgh); Jan Beran
    Keywords: Optimal rate of convergence, nonparametric regression, long memory, antipersistence.
    Date: 2007–01–16
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0715&r=ecm
  16. By: Antonio Peyrache; Tim Coelli (CEPA - School of Economics, The University of Queensland)
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:qld:uqcepa:30&r=ecm
  17. By: Georgios Papadopoulos; J. M. C. Santos Silva
    Abstract: In this note we study the conditions under which leading models for underreported counts are identified. In particular, we highlight a peculiar identification problem that afflicts two of the most popular models in this class.
    Date: 2008–07–29
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:657&r=ecm
  18. By: Camelia Minoiu; Sanjay G. Reddy
    Abstract: We analyze the performance of kernel density methods applied to grouped data to estimate poverty (as applied in Sala-i-Martin, 2006, QJE). Using Monte Carlo simulations and household surveys, we find that the technique gives rise to biases in poverty estimates, the sign and magnitude of which vary with the bandwidth, the kernel, the number of datapoints, and across poverty lines. Depending on the chosen bandwidth, the $1/day poverty rate in 2000 varies by a factor of 1.8, while the $2/day headcount in 2000 varies by 287 million people. Our findings challenge the validity and robustness of poverty estimates derived through kernel density estimation on grouped data.
    Keywords: Poverty, Economic models, Income distribution, Data analysis
    Date: 2008–07–22
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:08/183&r=ecm
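    The sketch below mimics the criticized procedure in a stylized way: fit a Gaussian kernel density to ten decile means of log income and read off the headcount below a poverty line, for several bandwidths. The lognormal population and the bandwidth factors are illustrative assumptions; the output shows how strongly the estimate moves with the bandwidth.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      incomes = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # "true" population
      groups = np.sort(incomes).reshape(10, -1).mean(axis=1)      # decile means only
      z = np.quantile(incomes, 0.2)            # poverty line with true headcount 20%
      for bw in (0.2, 0.5, 1.0):               # bandwidth factor for gaussian_kde
          kde = stats.gaussian_kde(np.log(groups), bw_method=bw)
          print(bw, round(kde.integrate_box_1d(-np.inf, np.log(z)), 3))
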
  19. By: Erik Hjalmarsson
    Abstract: I test for stock return predictability in the largest and most comprehensive data set analyzed so far, using four common forecasting variables: the dividend- and earnings-price ratios, the short interest rate, and the term spread. The data contain over 20,000 monthly observations from 40 international markets, including 24 developed and 16 emerging economies. In addition, I develop new methods for predictive regressions with panel data. Inference based on the standard fixed effects estimator is shown to suffer from severe size distortions in the typical stock return regression, and an alternative robust estimator is proposed. The empirical results indicate that the short interest rate and the term spread are fairly robust predictors of stock returns in developed markets. In contrast, no strong or consistent evidence of predictability is found when considering the dividend- and earnings-price ratios as predictors.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:933&r=ecm
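    A small Monte Carlo sketch of the size problem with the standard fixed-effects (within) estimator in predictive panel regressions: with a persistent predictor whose innovations are correlated with returns, the estimator is displaced from the true beta = 0. The design (N, T, persistence, correlation) is an illustrative assumption, not the paper's calibration.

      import numpy as np

      def fe_beta(r, x):
          """Within (fixed-effects) slope for r_{i,t+1} = a_i + beta * x_{i,t} + e."""
          y, z = r[:, 1:], x[:, :-1]
          yd = y - y.mean(axis=1, keepdims=True)   # within transformation
          zd = z - z.mean(axis=1, keepdims=True)
          return (zd * yd).sum() / (zd ** 2).sum()

      rng = np.random.default_rng(9)
      N, T, rho, gamma = 40, 240, 0.98, -0.9
      est = []
      for _ in range(200):
          x = np.zeros((N, T)); r = np.zeros((N, T))
          for t in range(1, T):
              u = rng.standard_normal(N)
              x[:, t] = rho * x[:, t - 1] + u                 # persistent predictor
              r[:, t] = gamma * u + rng.standard_normal(N)    # correlated return shocks
          est.append(fe_beta(r, x))
      print(np.mean(est))   # displaced from the true beta = 0 by many Monte Carlo s.e.'s
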
  20. By: Mototsugu Shintani (Department of Economics, Vanderbilt University, and Economist, Institute for Monetary and Economic Studies, Bank of Japan (E-mail: mototsugu.shintani@vanderbilt.edu, mototsugu.shintani@boj.or.jp)); Tomoyoshi Yabu (Assistant Professor, Graduate School of Systems and Information Engineering, University of Tsukuba (E-mail: tyabu@sk.tsukuba.ac.jp)); and Daisuke Nagakura (Economist, Institute for Monetary and Economic Studies, Bank of Japan (E-mail: daisuke.nagakura@boj.or.jp))
    Abstract: This paper investigates the spurious effect in forecasting asset returns when signals from technical trading rules are used as predictors. Against economic intuition, the simulation results show that, even if past information has no predictive power, buy or sell signals based on the difference between the short-period and long-period moving averages of past asset prices can be statistically significant when the forecast horizon is relatively long. The theory implies that both 'momentum' and 'contrarian' strategies can be falsely supported, while the probability of obtaining each result depends on the type of test statistic employed. Several modifications to these test statistics are considered for the purpose of avoiding spurious regressions. They are applied to the stock market index and the foreign exchange rate in order to reconsider the predictive power of technical trading rules.
    Keywords: Efficient market hypothesis, Nonstationary time series, Random walk, Technical analysis
    JEL: C12 C22 C25 G11 G15
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:ime:imedps:08-e-9&r=ecm
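    The sketch below reproduces the flavor of the experiment: generate a pure random walk, regress long-horizon future returns on a moving-average crossover signal, and compute the naive OLS t-statistic, which ignores the serial correlation induced by overlapping returns and is therefore frequently "significant". Window lengths and the horizon are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      T, s, l, h = 2000, 5, 100, 120
      p = np.cumsum(rng.standard_normal(T))           # log price: pure random walk
      ma = lambda z, k: np.convolve(z, np.ones(k) / k, mode="valid")
      ms = ma(p, s)[l - s:]                           # short MA, aligned with long MA
      ml = ma(p, l)
      sig = np.sign(ms - ml)                          # +1 "momentum buy", -1 "sell"
      fut = p[l - 1 + h:] - p[l - 1:-h]               # h-step-ahead (overlapping) returns
      sig = sig[:len(fut)]
      b = np.cov(sig, fut, ddof=0)[0, 1] / sig.var()  # OLS slope
      e = fut - fut.mean() - b * (sig - sig.mean())
      se = np.sqrt(e.var() / (len(sig) * sig.var()))  # naive s.e. ignoring overlap
      print("t =", b / se)                            # often |t| > 2 despite no predictability
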
  21. By: P. A. Ferrari (University of Milan); S. Salini (University of Milan)
    Abstract: This paper provides a comparative analysis of statistical methods to evaluate the consumer perception of the quality of Services of General Interest. The evaluation of the service quality perceived by users is usually based on Customer Satisfaction Survey data, on which an ex-post evaluation is then performed. Another approach, which evaluates consumers' preferences, supplies ex-ante information on service quality. Here the ex-post approach is considered: two non-standard techniques, the Rasch model and Nonlinear Principal Component Analysis, are presented, and the potential of both methods is discussed. The methods are applied to Eurobarometer Survey data to assess consumer satisfaction across European countries and different years.
    Keywords: Service Quality, Eurobarometer, Non Linear Principal Component Analysis, Rasch Analysis, Conjoint Analysis
    JEL: C33 C35 C43 L94 L95 L96
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:fem:femwpa:2008.36&r=ecm
  22. By: David Maddison; Katrin Rehdanz
    Abstract: This paper introduces the concept of homogeneous non-causality in heterogeneous panels. This concept is used to examine a panel of data for evidence of a causal relationship between GDP and carbon emissions. The technique is compared to the standard tests for homogeneous non-causality in homogeneous panels and heterogeneous non-causality in heterogeneous panels. In North America, Asia and Oceania the homogeneous non-causality hypothesis that CO2 emissions do not Granger-cause GDP cannot be rejected if heterogeneity is allowed for in the data-generating process. In North America the homogeneous non-causality hypothesis that GDP does not cause CO2 emissions cannot be rejected either.
    Keywords: Energy; Carbon Emissions; Granger Causality; Heterogeneous Panels
    JEL: C12 O13 Q54
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1437&r=ecm
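    A minimal sketch of causality testing in a heterogeneous panel: run a bivariate Granger F-test unit by unit, then combine the p-values with Fisher's statistic (assuming cross-sectional independence) to test homogeneous non-causality while letting all other dynamics differ across units. The placeholder data and the combination rule are illustrative assumptions, not the paper's exact procedure.

      import numpy as np
      from scipy import stats

      def granger_F(y, x, p=1):
          """F-test that p lags of x do not help predict y (bivariate regression)."""
          Y = y[p:]
          Xr = np.column_stack([np.ones_like(Y)] + [y[p - j:-j] for j in range(1, p + 1)])
          Xu = np.column_stack([Xr] + [x[p - j:-j] for j in range(1, p + 1)])
          rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
          dof = len(Y) - Xu.shape[1]
          f = (rss(Xr) - rss(Xu)) / p / (rss(Xu) / dof)
          return f, stats.f.sf(f, p, dof)

      rng = np.random.default_rng(10)
      N, T = 8, 60
      co2g = rng.standard_normal((N, T))        # emissions growth (placeholder data)
      gdpg = rng.standard_normal((N, T))        # GDP growth, independent by design
      pvals = np.array([granger_F(gdpg[i], co2g[i], p=2)[1] for i in range(N)])
      fisher = -2 * np.log(pvals).sum()         # ~ chi2(2N) under homogeneous non-causality
      print(fisher, stats.chi2.sf(fisher, 2 * N))
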

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.