
NEP: New Economics Papers on Econometrics 
By:  Jan Beran (University of Konstanz); Mark A. Heiler 
Abstract:  Estimation of a nonparametric regression spectrum based on the periodogram is considered. Neither trend estimation nor smoothing of the periodogram is required. Alternatively, for cases where spectral estimation of phase shifts fails and the shift does not depend on frequency, a time domain estimator of the lag shift is defined. Asymptotic properties of the frequency and time domain estimators are derived. Simulations and a data example illustrate the methods. 
Keywords:  Periodogram, cross spectrum, regression spectrum, phase, wavelets. 
Date:  2007–12–01 
URL:  http://d.repec.org/n?u=RePEc:knz:cofedp:0712&r=ecm 
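When the shift does not depend on frequency, the time-domain idea reduces to picking the lag that maximises the sample cross-correlation between the two series. The following is a minimal sketch of that idea under simulated data; it is an illustration, not the authors' estimator.

```python
import numpy as np

# Illustrative sketch: recover a frequency-independent lag shift between two
# series by maximising the sample cross-correlation over candidate lags.
rng = np.random.default_rng(0)
n, true_lag = 500, 7
full = np.sin(2 * np.pi * np.arange(n + true_lag) / 50) + 0.1 * rng.standard_normal(n + true_lag)
x, y = full[:n], full[true_lag:]        # y is x shifted by true_lag

def lag_shift(x, y, max_lag=20):
    """Return the lag k maximising the cross-correlation of x and y."""
    best, best_c = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = x[k:], y[:len(y) - k]
        else:
            a, b = x[:k], y[-k:]
        c = np.corrcoef(a, b)[0, 1]
        if c > best_c:
            best, best_c = k, c
    return best

print(lag_shift(x, y))  # recovers a lag of 7
```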
By:  Christian Conrad (University of Heidelberg, Department of Economics); Enno Mammen (University of Mannheim, Department of Economics) 
Abstract:  We consider time series models in which the conditional mean of the response variable given the past depends on latent covariates. We assume that the covariates can be estimated consistently and use an iterative nonparametric kernel smoothing procedure for estimating the conditional mean function. The covariates are assumed to depend (non)parametrically on past values of the covariates and of the observations. Our procedure is based on iterative fits of the covariates and nonparametric kernel smoothing of the conditional mean function. An asymptotic theory for the resulting kernel estimator is developed and the estimator is used for testing parametric specifications of the mean function. Our leading example is a semiparametric class of GARCH-in-Mean models. In this setup our procedure provides a formal framework for testing economic theories that postulate functional relations between macroeconomic or financial variables and their conditional second moments. We illustrate the usefulness of the methodology by testing the linear risk-return relation predicted by the ICAPM. 
Keywords:  Specification test, GARCH-M, semiparametric regression, risk premium, ICAPM. 
JEL:  C12 C14 C22 C52 G12 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:awi:wpaper:0473&r=ecm 
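The kernel smoothing step at the core of the iterative procedure above is Nadaraya-Watson regression. A minimal sketch with simulated data and an arbitrary bandwidth (both illustrative, not from the paper):

```python
import numpy as np

def nw_smooth(x, y, grid, h):
    """Gaussian-kernel Nadaraya-Watson estimate of E[y | x] on `grid`."""
    out = np.empty_like(grid, dtype=float)
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / h) ** 2)   # kernel weights around g
        out[i] = np.sum(w * y) / np.sum(w)
    return out

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(400)
grid = np.linspace(0.1, 0.9, 5)
fitted = nw_smooth(x, y, grid, h=0.05)
print(fitted)  # tracks sin(2*pi*grid) up to smoothing bias and noise
```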
By:  Jan Beran (University of Konstanz); Mark A. Heiler 
Abstract:  We consider dependence structures in multivariate time series that are characterized by deterministic trends. Results from spectral analysis for stationary processes are extended to deterministic trend functions. A regression cross covariance and spectrum are defined. Estimation of these quantities is based on wavelet thresholding. The method is illustrated by a simulated example and a three-dimensional time series consisting of ECG, blood pressure and cardiac stroke volume measurements. 
Keywords:  Nonparametric trend estimation, cross spectrum, wavelets, regression spectrum, phase, threshold estimator 
Date:  2008–01–01 
URL:  http://d.repec.org/n?u=RePEc:knz:cofedp:0801&r=ecm 
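Wavelet thresholding shrinks small detail coefficients toward zero before inverting the transform. A one-level Haar version written out by hand for illustration; a real application would use a full multiresolution transform, and the signal, noise level and threshold below are made up.

```python
import numpy as np

def haar_denoise(y, thr):
    """One-level Haar transform, soft-threshold the details, invert."""
    approx = (y[0::2] + y[1::2]) / np.sqrt(2)   # scaling coefficients
    detail = (y[0::2] - y[1::2]) / np.sqrt(2)   # wavelet (detail) coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)  # soft threshold
    rec = np.empty_like(y)
    rec[0::2] = (approx + detail) / np.sqrt(2)  # inverse transform
    rec[1::2] = (approx - detail) / np.sqrt(2)
    return rec

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
signal = np.where(t < 0.5, 0.0, 1.0)            # step "trend"
noisy = signal + 0.1 * rng.standard_normal(256)
denoised = haar_denoise(noisy, thr=0.2)
print(np.mean((denoised - signal) ** 2) < np.mean((noisy - signal) ** 2))
```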
By:  Jan Beran (University of Konstanz) 
Abstract:  We consider parameter estimation for time-dependent locally stationary long-memory processes. The asymptotic distribution of an estimator based on the local infinite autoregressive representation is derived, and asymptotic formulas for the mean squared error of the estimator and the asymptotically optimal bandwidth are obtained. In spite of long memory, the optimal bandwidth turns out to be of the order n^(-1/5) and inversely proportional to the square of the second derivative of d. In this sense, local estimation of d is comparable to regression smoothing with iid residuals. 
Keywords:  long memory, fractional ARIMA process, local stationarity, bandwidth selection 
Date:  2007–12–01 
URL:  http://d.repec.org/n?u=RePEc:knz:cofedp:0713&r=ecm 
By:  Yuanhua Feng (Heriot-Watt University, Edinburgh); Jan Beran; Keming Yu 
Abstract:  A class of semiparametric fractional autoregressive GARCH models (SEMIFAR-GARCH), which includes deterministic trends, difference stationarity, stationarity with short- and long-range dependence, and heteroskedastic model errors, is very powerful for modelling financial time series. This paper discusses fitting the model, including an efficient algorithm and parameter estimation for the GARCH error term, so that the model can be applied in practice. We then illustrate the model and estimation methods with several different financial data sets. 
Keywords:  Financial time series, GARCH model, SEMIFAR model, parameter estimation, kernel estimation, asymptotic property. 
Date:  2007–12–01 
URL:  http://d.repec.org/n?u=RePEc:knz:cofedp:0714&r=ecm 
By:  Angrist, Joshua (MIT); Kuersteiner, Guido M. (University of California, Davis) 
Abstract:  Macroeconomists have long been concerned with the causal effects of monetary policy. When the identification of causal effects is based on a selection-on-observables assumption, noncausality amounts to the conditional independence of outcomes and policy changes. This paper develops a semiparametric test for conditional independence in time series models linking a multinomial policy variable with unobserved potential outcomes. Our approach to conditional independence testing is motivated by earlier parametric tests, as in Romer and Romer (1989, 1994, 2004). The procedure developed here is semiparametric in the sense that we model the process determining the distribution of treatment – the policy propensity score – but leave the model for outcomes unspecified. A conceptual innovation is that we adapt the cross-sectional potential outcomes framework to a time series setting. This leads to a generalized definition of Sims (1980) causality. A technical contribution is the development of root-T consistent distribution-free inference methods for full conditional independence testing, appropriate for dependent data and allowing for first-step estimation of the propensity score. 
Keywords:  monetary policy, propensity score, multinomial treatments, causality 
JEL:  E52 C22 C31 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp3606&r=ecm 
By:  Drew Creal (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Eric Zivot (University of Washington) 
Abstract:  In this paper we investigate whether the dynamic properties of the U.S. business cycle have changed in the last fifty years. For this purpose we develop a flexible business cycle indicator that is constructed from a moderate set of macroeconomic time series. The coincident economic indicator is based on a multivariate trend-cycle decomposition model that accounts for time variation in macroeconomic volatility, known as the great moderation. In particular, we consider an unobserved components time series model with a common cycle that is shared across different time series but adjusted for phase shift and amplitude. The extracted cycle can be interpreted as the result of a model-based band-pass filter and is designed to emphasize the business cycle frequencies that are of interest to applied researchers and policymakers. Stochastic volatility processes and mixture distributions for the irregular components and the common cycle disturbances enable us to account for all the heteroskedasticity present in the data. The empirical results are based on a Bayesian analysis and show that time-varying volatility is only present in a selection of idiosyncratic components, while the coefficients driving the dynamic properties of the business cycle indicator have been stable over time in the last fifty years. 
Keywords:  Band-pass filter; Markov chain Monte Carlo; Stochastic volatility; Trend-cycle decomposition; Unobserved components time series model 
JEL:  C11 C32 E32 
Date:  2008–07–17 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20080069&r=ecm 
By:  Jose A. F. Machado; J. M. C. Santos Silva 
Abstract:  This paper studies the estimation of quantile regression for fractional data, focusing on the case where there are mass points at zero and/or one. More generally, we propose a simple strategy for the estimation of the conditional quantiles of data from mixed distributions, which combines standard results on the estimation of censored and Box-Cox quantile regressions. The implementation of the proposed method is illustrated using a well-known dataset. 
Date:  2008–07–29 
URL:  http://d.repec.org/n?u=RePEc:esx:essedp:656&r=ecm 
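Conditional quantiles are defined through the check (pinball) loss that quantile regression minimises. A toy sketch recovering an unconditional quantile by grid search; the distribution, quantile level and grid are illustrative only.

```python
import numpy as np

def check_loss(u, tau):
    """Average check (pinball) loss: tau*u for u>=0, (tau-1)*u for u<0."""
    return np.mean(u * (tau - (u < 0)))

rng = np.random.default_rng(3)
y = rng.standard_normal(10_000)
tau = 0.75
grid = np.linspace(-3, 3, 601)
losses = [check_loss(y - q, tau) for q in grid]
q_hat = grid[int(np.argmin(losses))]
print(q_hat)  # close to the N(0,1) 0.75-quantile, about 0.674
```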
By:  Christian Conrad (University of Heidelberg, Department of Economics); Menelaos Karanasos (Brunel University, Dept. of Economics and Finance); Ning Zeng (Brunel University, Dept. of Economics and Finance) 
Abstract:  Tse (1998) proposes a model which combines the fractionally integrated GARCH formulation of Baillie, Bollerslev and Mikkelsen (1996) with the asymmetric power ARCH specification of Ding, Granger and Engle (1993). This paper analyzes the applicability of a multivariate constant conditional correlation version of the model to national stock market returns for eight countries. We find this multivariate specification to be generally applicable once power, leverage and long-memory effects are taken into consideration. In addition, we find that both the optimal fractional differencing parameter and power transformation are remarkably similar across countries. Out-of-sample evidence for the superior forecasting ability of the multivariate FIAPARCH framework is provided in terms of forecast error statistics and tests for equal forecast accuracy of the various models. 
Keywords:  Asymmetric Power ARCH, Fractional integration, Stock returns, Volatility forecast evaluation 
JEL:  C13 C22 C52 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:awi:wpaper:0472&r=ecm 
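Tests for equal forecast accuracy of the kind used in such out-of-sample comparisons are typically Diebold-Mariano-style statistics. A minimal version for squared-error loss, with made-up forecast-error series and no autocorrelation correction:

```python
import math

def dm_stat(e1, e2):
    """DM statistic for squared-error loss (no autocorrelation correction)."""
    d = [a * a - b * b for a, b in zip(e1, e2)]   # loss differential
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical forecast errors of two competing models.
e_model_a = [0.5, -0.2, 0.3, -0.4, 0.1, 0.6, -0.3, 0.2]
e_model_b = [1.0, -0.8, 0.9, -1.1, 0.7, 1.2, -0.9, 0.8]
print(dm_stat(e_model_a, e_model_b))  # negative: model A's errors are smaller
```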
By:  Mishra, SK 
Abstract:  The Two-Stage Least Squares (2SLS) method is a well-known econometric technique used to estimate the parameters of a multi-equation (or simultaneous-equations) econometric model when errors across the equations are not correlated and the equation(s) concerned is (are) over-identified or exactly identified. However, in the presence of outliers in the data matrix, the classical 2SLS performs very poorly. In this study a method is proposed that conveniently generalizes the 2SLS to the weighted 2SLS (W2SLS), which is robust to the effects of outliers and perturbations in the data matrix. Monte Carlo experiments have been conducted to demonstrate the performance of the proposed method. It has been found that the robustness of the proposed method is not much affected by the magnitude of the outliers, but it is sensitive to the number of outliers/perturbations in the data matrix. The breakdown point of the method is quite high, somewhere between 45 and 50 percent of the number of points in the data matrix. 
Keywords:  Two-Stage Least Squares; multi-equation econometric model; simultaneous equations; outliers; robust; weighted least squares; Monte Carlo experiments; unbiasedness; efficiency; breakdown point; perturbation; structural parameters; reduced form 
JEL:  C13 C63 C14 C87 C15 C30 
Date:  2008–07–26 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:9737&r=ecm 
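The classical 2SLS that the paper generalizes can be written in a few lines: regress the endogenous regressors on the instruments, then regress the outcome on the fitted values. A weighted variant (W2SLS) would replace both least-squares fits with weighted fits that downweight outlying rows. The data below are simulated for illustration, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000
z = rng.standard_normal((n, 2))                   # instruments
u = rng.standard_normal(n)                        # structural error
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.standard_normal(n)  # endogenous regressor
y = 2.0 * x + u                                   # structural equation, true beta = 2

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
# Stage 1: project X on the instrument set; Stage 2: regress y on the fit.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print(beta[1])  # close to the true coefficient 2.0
```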
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Sukjin Han (Dept. of Economics, Yale University) 
Abstract:  This paper analyzes the finite-sample and asymptotic properties of several bootstrap and m out of n bootstrap methods for constructing confidence interval (CI) endpoints in models defined by moment inequalities. In particular, we consider using these methods directly to construct CI endpoints. By considering two very simple models, the paper shows that neither the bootstrap nor the m out of n bootstrap is valid in finite samples or in a uniform asymptotic sense in general when applied directly to construct CI endpoints. In contrast, other results in the literature show that other ways of applying the bootstrap, m out of n bootstrap, and subsampling do lead to uniformly asymptotically valid confidence sets in moment inequality models. Thus, the uniform asymptotic validity of resampling methods in moment inequality models depends on the way in which the resampling methods are employed. 
Keywords:  Bootstrap, Coverage probability, m out of n bootstrap, Moment inequality model, Partial identification, Subsampling 
JEL:  C01 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1671&r=ecm 
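The boundary problem behind this analysis can be seen in a toy example: bootstrapping max(sample mean, 0) when the true mean sits exactly on the boundary. The m out of n variant resamples m << n observations, which changes the scale of the resampled distribution; all numbers below are illustrative and this is only a caricature of the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, B = 400, 40, 2_000
x = rng.standard_normal(n)                     # true mean exactly 0: the boundary case
theta_hat = max(float(x.mean()), 0.0)          # sample analogue of max(mean, 0)

# Standard n-bootstrap versus m out of n bootstrap of the estimator.
boot_full = [max(float(rng.choice(x, n).mean()), 0.0) for _ in range(B)]
boot_m = [max(float(rng.choice(x, m).mean()), 0.0) for _ in range(B)]
# The m out of n resamples are more variable, so their upper quantile is larger.
print(theta_hat, np.quantile(boot_full, 0.95), np.quantile(boot_m, 0.95))
```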
By:  Shamiri, Ahmed; Shaari, Abu Hassan; Isa, Zaidi 
Abstract:  Choosing the most suitable volatility model and distribution specification is a demanding task. This paper introduces an analysis procedure using the Kullback-Leibler information criterion (KLIC) as a statistical tool to evaluate and compare the predictive abilities of possibly misspecified density forecast models. The main advantage of this tool is that censored likelihood functions are used to compute the KLIC over the tail region, so that the performance of density forecast models can be compared in the tails. We include an illustrative simulation and an empirical application comparing a set of distributions, including symmetric and asymmetric distributions, and a family of GARCH volatility models. We illustrate the use of our approach with a daily index, the Kuala Lumpur Composite Index (KLCI). Our results show that the choice of conditional distribution appears to be a more dominant factor in determining the adequacy of density forecasts than the choice of volatility model. Furthermore, the results support a skewed distribution for KLCI returns. 
Keywords:  Density forecast; Conditional distribution; Forecast accuracy; KLIC; GARCH models 
JEL:  D53 C32 C16 C52 
Date:  2007–08–20 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:9790&r=ecm 
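Up to a term that does not depend on the forecast, ranking density forecasts by KLIC amounts to comparing average log predictive densities (without the censoring refinement used above for the tails). A minimal sketch with a deliberately misspecified competitor; the distributions are made up for illustration.

```python
import math
import random

random.seed(6)
data = [random.gauss(0.0, 2.0) for _ in range(5_000)]  # truth: N(0, 2^2)

def norm_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (x - mu) ** 2 / (2 * sigma * sigma)

# Average log score of a correctly specified and a misspecified density forecast.
score_right = sum(norm_logpdf(x, 0.0, 2.0) for x in data) / len(data)
score_wrong = sum(norm_logpdf(x, 0.0, 1.0) for x in data) / len(data)
print(score_right > score_wrong)  # the correctly specified density wins
```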
By:  Giancarlo Bruno (ISAE  Institute for Studies and Economic Analyses) 
Abstract:  The use of linear parametric models for forecasting economic time series is widespread among practitioners, in spite of considerable evidence of nonlinearities in many such series. However, the empirical results stemming from the use of nonlinear models are not always as good as expected. This has sometimes been attributed to the difficulty of correctly specifying a nonlinear parametric model. In this paper I address this issue by using a more general nonparametric approach, which can be used both as a preliminary tool to aid in specifying a suitable parametric model and as an autonomous modelling strategy. The results are promising, in that the nonparametric approach achieves a good forecasting record for a considerable number of series. 
Keywords:  Nonlinear Time-Series Models, Nonparametric Models. 
JEL:  C52 C53 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:isa:wpaper:98&r=ecm 
By:  Stéphane Loisel (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  Université Claude Bernard  Lyon I); Christian Mazza (Département de Mathématiques  Université de Fribourg); Didier Rullière (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  Université Claude Bernard  Lyon I) 
Abstract:  We consider the classical risk model and carry out a sensitivity and robustness analysis of finite-time ruin probabilities. We provide algorithms to compute the related influence functions. We also prove the weak convergence of a sequence of empirical finite-time ruin probabilities starting from zero initial reserve toward a Gaussian random variable. We define the concept of a reliable finite-time ruin probability as a Value-at-Risk of the estimator of the finite-time ruin probability. To control this robust risk measure, an additional initial reserve is needed, called the Estimation Risk Solvency Margin (ERSM). We apply our results to show how portfolio experience could be rewarded by cut-offs in solvency capital requirements. An application to catastrophe contamination and numerical examples are also developed. 
Keywords:  Finite-time ruin probability; robustness; Solvency II; reliable ruin probability; asymptotic Normality; influence function; Estimation Risk Solvency Margin (ERSM) 
Date:  2008–04 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal00168714_v1&r=ecm 
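A finite-time ruin probability in the classical (compound Poisson) risk model can be approximated by plain Monte Carlo, which is the kind of empirical estimator whose sampling error the paper analyses. All parameter values below are illustrative.

```python
import random

def ruin_prob(u0, premium, lam, claim_mean, horizon, n_paths=20_000, seed=7):
    """Monte Carlo finite-time ruin probability with exponential claims."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)                # next claim arrival time
            if t > horizon:
                break                                # survived the horizon
            claims += rng.expovariate(1.0 / claim_mean)
            if u0 + premium * t - claims < 0:        # reserve just after the claim
                ruined += 1
                break
    return ruined / n_paths

# Initial reserve 10, premium rate 1.2 (20% loading), unit claim rate and mean.
print(ruin_prob(u0=10.0, premium=1.2, lam=1.0, claim_mean=1.0, horizon=50.0))
```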
By:  Yuanhua Feng (Heriot-Watt University, Edinburgh); Jan Beran 
Keywords:  Optimal rate of convergence, nonparametric regression, long memory, antipersistence. 
Date:  2007–01–16 
URL:  http://d.repec.org/n?u=RePEc:knz:cofedp:0715&r=ecm 
By:  Antonio Peyrache; Tim Coelli (CEPA  School of Economics, The University of Queensland) 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:qld:uqcepa:30&r=ecm 
By:  Georgios Papadopoulos; J. M. C. Santos Silva 
Abstract:  In this note we study the conditions under which leading models for underreported counts are identified. In particular, we highlight a peculiar identification problem that afflicts two of the most popular models in this class. 
Date:  2008–07–29 
URL:  http://d.repec.org/n?u=RePEc:esx:essedp:657&r=ecm 
By:  Camelia Minoiu; Sanjay G. Reddy 
Abstract:  We analyze the performance of kernel density methods applied to grouped data to estimate poverty (as applied in Sala-i-Martin, 2006, QJE). Using Monte Carlo simulations and household surveys, we find that the technique gives rise to biases in poverty estimates, the sign and magnitude of which vary with the bandwidth, the kernel, the number of data points, and across poverty lines. Depending on the chosen bandwidth, the $1/day poverty rate in 2000 varies by a factor of 1.8, while the $2/day headcount in 2000 varies by 287 million people. Our findings challenge the validity and robustness of poverty estimates derived through kernel density estimation on grouped data. 
Keywords:  Poverty, Economic models, Income distribution, Data analysis 
Date:  2008–07–22 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:08/183&r=ecm 
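The bandwidth sensitivity documented above can be illustrated by fitting a Gaussian kernel density to a handful of grouped income means and reading off the headcount below a poverty line. The group means, weights, bandwidths and line below are made up, not from the paper.

```python
import math

# Hypothetical quintile-style group means, in dollars per day.
group_means = [0.6, 1.1, 1.8, 3.0, 5.5]
poverty_line = 1.0

def kde_cdf(x, points, h):
    """CDF at x of an equal-weight Gaussian KDE with bandwidth h."""
    return sum(0.5 * (1 + math.erf((x - p) / (h * math.sqrt(2)))) for p in points) / len(points)

# The estimated headcount below the line moves with the bandwidth choice.
for h in (0.3, 0.6, 1.0):
    print(h, round(kde_cdf(poverty_line, group_means, h), 3))
```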
By:  Erik Hjalmarsson 
Abstract:  I test for stock return predictability in the largest and most comprehensive data set analyzed so far, using four common forecasting variables: the dividend- and earnings-price ratios, the short interest rate, and the term spread. The data contain over 20,000 monthly observations from 40 international markets, including 24 developed and 16 emerging economies. In addition, I develop new methods for predictive regressions with panel data. Inference based on the standard fixed effects estimator is shown to suffer from severe size distortions in the typical stock return regression, and an alternative robust estimator is proposed. The empirical results indicate that the short interest rate and the term spread are fairly robust predictors of stock returns in developed markets. In contrast, no strong or consistent evidence of predictability is found when considering 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:933&r=ecm 
By:  Mototsugu Shintani (Department of Economics, Vanderbilt University, and Economist, Institute for Monetary and Economic Studies, Bank of Japan (Email: mototsugu.shintani@vanderbilt.edu, mototsugu.shintani@boj.or.jp)); Tomoyoshi Yabu (Assistant Professor, Graduate School of Systems and Information Engineering, University of Tsukuba (Email: tyabu@sk.tsukuba.ac.jp)); and Daisuke Nagakura (Economist, Institute for Monetary and Economic Studies, Bank of Japan (Email: daisuke.nagakura@boj.or.jp)) 
Abstract:  This paper investigates the spurious effect in forecasting asset returns when signals from technical trading rules are used as predictors. Against economic intuition, the simulation result shows that, even if past information has no predictive power, buy or sell signals based on the difference between the short-period and long-period moving averages of past asset prices can be statistically significant when the forecast horizon is relatively long. The theory implies that both 'momentum' and 'contrarian' strategies can be falsely supported, while the probability of obtaining each result depends on the type of test statistic employed. Several modifications to these test statistics are considered for the purpose of avoiding spurious regressions. They are applied to the stock market index and the foreign exchange rate in order to reconsider the predictive power of technical trading rules. 
Keywords:  Efficient market hypothesis, Nonstationary time series, Random walk, Technical analysis 
JEL:  C12 C22 C25 G11 G15 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:ime:imedps:08e9&r=ecm 
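The trading-rule signals studied here come from comparing short- and long-window moving averages of past prices. A minimal sketch with illustrative prices and window lengths:

```python
def ma(series, window):
    """Mean of the last `window` observations."""
    return sum(series[-window:]) / window

def signal(prices, short=3, long=8):
    """+1 (buy) if the short MA is above the long MA, else -1 (sell)."""
    return 1 if ma(prices, short) > ma(prices, long) else -1

uptrend = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
downtrend = uptrend[::-1]
print(signal(uptrend), signal(downtrend))  # prints: 1 -1
```

The paper's point is that regressing future returns on such signals can look significant even when prices are a pure random walk, so the signal itself carries no predictive content.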
By:  P. A. Ferrari (University of Milan); S. Salini (University of Milan) 
Abstract:  This paper provides a comparative analysis of statistical methods to evaluate the consumer perception of the quality of Services of General Interest. The evaluation of the service quality perceived by users is usually based on Customer Satisfaction Survey data, on which an ex-post evaluation is then performed. Another approach, consisting in evaluating consumers' preferences, supplies ex-ante information on service quality. Here, the ex-post approach is considered: two non-standard techniques, the Rasch model and Nonlinear Principal Component Analysis, are presented and the potential of both methods is discussed. These methods are applied to Eurobarometer Survey data to assess consumer satisfaction across European countries and in different years. 
Keywords:  Service Quality, Eurobarometer, Nonlinear Principal Component Analysis, Rasch Analysis, Conjoint Analysis 
JEL:  C33 C35 C43 L94 L95 L96 
Date:  2008–04 
URL:  http://d.repec.org/n?u=RePEc:fem:femwpa:2008.36&r=ecm 
By:  David Maddison; Katrin Rehdanz 
Abstract:  This paper introduces the concept of homogeneous non-causality in heterogeneous panels. This concept is used to examine a panel of data for evidence of a causal relationship between GDP and carbon emissions. The technique is compared to the standard test for homogeneous non-causality in homogeneous panels and heterogeneous non-causality in heterogeneous panels. In North America, Asia and Oceania the homogeneous non-causality hypothesis that CO2 emissions do not Granger-cause GDP cannot be rejected if heterogeneity is allowed for in the data-generating process. In North America the homogeneous non-causality hypothesis that GDP does not cause CO2 emissions cannot be rejected either. 
Keywords:  Energy; Carbon Emissions; Granger Causality; Heterogeneous Panels 
JEL:  C12 O13 Q54 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:kie:kieliw:1437&r=ecm 
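In the bivariate single-series case, a Granger non-causality test is an OLS F-test on lags of the candidate cause; the panel versions discussed above build on this block. A deliberately minimal sketch with simulated data and a single lag, no panel dimension:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
x = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()  # x Granger-causes y

Y = y[1:]
restricted = np.column_stack([np.ones(n - 1), y[:-1]])         # own lag only
unrestricted = np.column_stack([restricted, x[:-1]])           # add lag of x

def rss(X, Y):
    """Residual sum of squares from an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

rss_r, rss_u = rss(restricted, Y), rss(unrestricted, Y)
F = (rss_r - rss_u) / (rss_u / (len(Y) - unrestricted.shape[1]))
print(F > 3.85)  # prints True: F far exceeds the 5% critical value (about 3.85)
```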