New Economics Papers on Econometrics
By: | Taoufik Bouezmarni; Jeroen V. K. Rombouts; Abderrahim Taamouti |
Abstract: | This paper proposes a new nonparametric test for conditional independence, which is based on the comparison of Bernstein copula densities using the Hellinger distance. The test is easy to implement because it does not involve a weighting function in the test statistic, and it can be applied in general settings since there is no restriction on the dimension of the data. In fact, to apply the test, only a bandwidth is needed for the nonparametric copula. We prove that the test statistic is asymptotically pivotal under the null hypothesis, establish local power properties, and motivate the validity of the bootstrap technique that we use in finite sample settings. A simulation study illustrates the good size and power properties of the test. We illustrate the empirical relevance of our test by focusing on Granger causality using financial time series data to test for nonlinear leverage versus volatility feedback effects and to test for causality between stock returns and trading volume. In a third application, we investigate Granger causality between macroeconomic variables.
Keywords: | Nonparametric tests, Conditional independence, Granger non-causality, Bernstein density copula, Bootstrap, Finance, Volatility asymmetry, Leverage effect, Volatility feedback effect, Macroeconomics |
JEL: | C12 C14 C15 C19 G1 G12 E3 E4 E52 |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:we093419&r=ecm |
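The test in the entry above measures the discrepancy between Bernstein copula densities with the Hellinger distance. As a reminder of that metric only (not the authors' test statistic), here is a minimal Python sketch that evaluates the squared Hellinger distance between two illustrative univariate densities on a grid; the densities and grid are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Squared Hellinger distance H^2(f, g) = 0.5 * integral (sqrt(f) - sqrt(g))^2 dx,
# approximated on a fine grid (purely illustrative densities).
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
f = norm.pdf(x, loc=0.0, scale=1.0)   # e.g. density under the null
g = norm.pdf(x, loc=0.5, scale=1.2)   # e.g. density under the alternative

h2 = 0.5 * np.sum((np.sqrt(f) - np.sqrt(g)) ** 2) * dx
print(f"Squared Hellinger distance: {h2:.4f}")   # equals 0 iff f and g coincide
```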
By: | Hannu Oja; Davy Paindaveine; Sara Taskinen |
Abstract: | The so-called independent component (IC) model states that the observed p-vector X is generated via X = ΛZ + μ, where μ is a p-vector, Λ is an invertible matrix, and the centered random vector Z has independent marginals Zi. We consider the problem of testing, on the basis of n i.i.d. copies of X = (X(1)′, X(2)′)′, the null hypothesis under which the multivariate marginals X(1) and X(2) are independent. Under a symmetry assumption on the Zi's, we propose parametric and nonparametric tests based on estimated independent components (which are obtained under the null, via, e.g., a recent estimator due to Oja et al. 2006). Far from excluding cases of unidentifiability where several independent components are Gaussian, as is done in so-called independent component analysis (ICA), our procedures can deal with the resulting possible model singularity, the nature of which we carefully investigate. The proposed nonparametric tests are based on componentwise signed ranks, in the same spirit as in Puri and Sen (1971). However, unlike the Puri and Sen tests, our tests (i) are affine-invariant and (ii) are, for adequately chosen scores, locally and asymptotically optimal (in the Le Cam sense) at prespecified densities. They are also valid without any moment assumptions. Local powers and asymptotic relative efficiencies with respect to the classical Gaussian procedure (namely, Wilks' LRT) are derived. Finite-sample properties are investigated through a Monte Carlo study.
Keywords: | Tests of independence, signed rank tests, independent component models, local asymptotic normality, singular information matrices. |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2009_018&r=ecm |
By: | Ted Juhl (University of Kansas); Zhijie Xiao (Boston College) |
Abstract: | Several widely used tests for a changing mean exhibit nonmonotonic power in finite samples due to "incorrect" estimation of nuisance parameters under the alternative. In this paper, we study the issue of nonmonotonic power in testing for a changing mean. We investigate the asymptotic power properties of the tests using a new framework where alternatives are characterized as having "large" changes. The asymptotic analysis provides a theoretical explanation for the power problem. Modified tests that have monotonic power against a wide range of alternatives of structural change are proposed. Instead of estimating the nuisance parameters based on ordinary least squares residuals, the proposed tests use modified estimators based on nonparametric regression residuals. It is shown that tests based on the modified long-run variance estimator provide an improved rate of divergence under the alternative of a change in mean. Tests for structural breaks based on such an estimator are able to remain consistent while still retaining the same asymptotic distribution under the null hypothesis of constant mean.
Keywords: | stability, changing parameters, time varying parameters |
JEL: | C22 |
Date: | 2009–06–17 |
URL: | http://d.repec.org/n?u=RePEc:boc:bocoec:709&r=ecm |
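The modified tests in the entry above replace OLS residuals with nonparametric regression residuals when estimating the long-run variance. The sketch below illustrates the idea with a Bartlett (Newey-West) long-run variance computed once from full-sample-mean residuals and once from crude rolling-mean residuals; the data-generating process, bandwidth, and truncation-lag rule are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

def bartlett_lrv(u, m):
    """Newey-West/Bartlett long-run variance of a (demeaned) residual series."""
    u = u - u.mean()
    n = len(u)
    lrv = np.dot(u, u) / n
    for j in range(1, m + 1):
        gamma_j = np.dot(u[j:], u[:-j]) / n
        lrv += 2.0 * (1.0 - j / (m + 1.0)) * gamma_j
    return lrv

rng = np.random.default_rng(0)
n = 400
# Series with a mean shift halfway through (alternative of a change in mean).
y = rng.normal(size=n) + np.where(np.arange(n) < n // 2, 0.0, 1.5)

m = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))   # common truncation-lag rule

# Full-sample-mean residuals absorb the break and inflate the long-run variance,
# which deflates change-in-mean test statistics.
lrv_ols = bartlett_lrv(y - y.mean(), m)

# "Nonparametric" residuals: deviations from a local (rolling) mean,
# ignoring edge effects for simplicity.
h = 25                                              # illustrative bandwidth
kernel = np.ones(2 * h + 1) / (2 * h + 1)
local_mean = np.convolve(y, kernel, mode="same")
lrv_np = bartlett_lrv(y - local_mean, m)

print(f"LRV from full-sample-mean residuals: {lrv_ols:.3f}")
print(f"LRV from local-mean residuals:       {lrv_np:.3f}")
```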
By: | Olivier Parent; James P. Lesage |
Abstract: | A space-time filter structure is introduced that can be used to accommodate dependence across space and time in the error components of panel data models that contain random effects. This general specification encompasses several more specific space-time structures that have been used recently in the panel data literature. Markov Chain Monte Carlo methods are set forth for estimating the model, which allow simple treatment of the initial-period observations as either endogenous or exogenous. Performance of the approach is demonstrated using both Monte Carlo experiments and an applied illustration.
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:cin:ucecwp:2009-04&r=ecm |
By: | Kim Christensen (Aarhus University and CREATES); Roel Oomen (Deutsche Bank, London, UK and the Department of Quantitative Economics, the University of Amsterdam, The Netherlands); Mark Podolskij (ETH Zürich, Switzerland and CREATES) |
Abstract: | In this paper, we propose a new jump-robust quantile-based realised variance measure of ex-post return variation that can be computed using potentially noisy data. This new estimator is consistent for integrated variance and we present feasible central limit theorems which show that it converges at the best attainable rate and has excellent efficiency. Asymptotically, the quantile-based realised variance is immune to finite activity jumps and outliers in the price series, while in modified form the estimator is applicable with market microstructure noise and therefore operational on high-frequency data. Simulations show that it also has superior robustness properties in finite samples, while an empirical application illustrates its use on equity data.
Keywords: | Finite activity jumps, Integrated variance, Market microstructure noise, Order statistics, Outliers, Realised variance |
JEL: | C10 C80 |
Date: | 2009–05–01 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2009-27&r=ecm |
By: | Zhijie Xiao (Boston College) |
Abstract: | Quantile regression has important applications in risk management, portfolio optimization, and asset pricing. The current paper studies estimation, inference and financial applications of quantile regression with cointegrated time series. In addition, a new cointegration model with varying coefficients is proposed. In the proposed model, the value of cointegrating coefficients may be affected by the shocks and thus may vary over the innovation quantile. The proposed model may be viewed as a stochastic cointegration model which includes the conventional cointegration model as a special case. It also provides a useful complement to cointegration models with (G)ARCH effects. Asymptotic properties of the proposed model and limiting distribution of the cointegrating regression quantiles are derived. In the presence of endogenous regressors, fully-modified quantile regression estimators and augmented quantile cointegrating regression are proposed to remove the second order bias and nuisance parameters. Regression Wald tests are constructed based on the fully modified quantile regression estimators. An empirical application to stock index data highlights the potential of the proposed method.
Keywords: | ARCH/GARCH, Cointegration, Portfolio Optimization, Quantile Regression, Time Varying |
JEL: | C22 G1 |
Date: | 2009–01–31 |
URL: | http://d.repec.org/n?u=RePEc:boc:bocoec:708&r=ecm |
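The paper in the entry above lets the cointegrating coefficient vary over the innovation quantile. As a minimal illustration of the basic building block, the sketch below fits a cointegrating relation by ordinary quantile regression at several quantiles using statsmodels; the simulated data are an assumption, and the paper's fully-modified and augmented estimators are not implemented here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Simulated cointegrated pair: x is a random walk and y = 1 + x + error,
# with an error scale that depends on x so the fitted slope differs across quantiles.
x = np.cumsum(rng.normal(size=n))
u = rng.normal(size=n)
y = 1.0 + 1.0 * x + (0.5 + 0.1 * np.sqrt(np.abs(x))) * u   # illustrative DGP

X = sm.add_constant(x)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(y, X).fit(q=q)
    print(f"tau = {q:.1f}: intercept = {res.params[0]:.3f}, "
          f"cointegrating coefficient = {res.params[1]:.3f}")
```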
By: | Ahlgren, Niklas (Hanken School of Economics); Antell, Jan (Hanken School of Economics) |
Abstract: | Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
Keywords: | Cointegration; Likelihood ratio test; Test power; Bootstrap |
Date: | 2009–06–11 |
URL: | http://d.repec.org/n?u=RePEc:hhb:hanken:0541&r=ecm |
By: | Adam Clements (QUT); Ralf Becker (Manchester) |
Abstract: | A well-developed literature exists on modeling and forecasting asset return volatility, much of it relating to the development of time series models of volatility. This paper proposes an alternative method for forecasting volatility that does not involve such a model. Under this approach a forecast is a weighted average of historical volatility. The greatest weight is given to periods that exhibit the most similar market conditions to the time at which the forecast is being formed. Weighting occurs by comparing short-term trends in volatility across time (as a measure of market conditions) through the application of a multivariate kernel scheme. It is found that at a 1-day forecast horizon, the proposed method produces forecasts that are significantly more accurate than competing approaches.
Keywords: | Volatility, forecasts, forecast evaluation, model confidence set, nonparametric |
JEL: | C22 G00 |
Date: | 2009–05–12 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2009_56&r=ecm |
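The forecasting scheme in the entry above weights historical volatility by the similarity of past market conditions, measured by a multivariate kernel over short-term volatility trends. The sketch below is a simplified rendering of that idea with a Gaussian product kernel and a fixed bandwidth; the window length, bandwidth, and simulated volatility series are illustrative assumptions rather than the authors' specification.

```python
import numpy as np

def kernel_weighted_forecast(vol, window=5, bandwidth=0.5):
    """One-step volatility forecast as a kernel-weighted average of history.

    Each past date t is scored by how similar its preceding `window` days of
    volatility are to the most recent `window` days; the forecast averages the
    next-day volatility vol[t] with those similarity weights.
    """
    n = len(vol)
    current = vol[n - window:]                      # today's market conditions
    weights, targets = [], []
    for t in range(window, n):                      # candidate historical dates
        past = vol[t - window:t]
        dist2 = np.sum(((past - current) / bandwidth) ** 2)
        weights.append(np.exp(-0.5 * dist2))        # Gaussian product kernel
        targets.append(vol[t])
    weights = np.asarray(weights)
    return np.dot(weights, targets) / weights.sum()

rng = np.random.default_rng(2)
# Illustrative persistent volatility series (e.g. a daily realised-volatility proxy).
vol = 0.01 * np.exp(0.5 * np.cumsum(0.05 * rng.normal(size=1000)))
print(f"Kernel-weighted 1-day-ahead volatility forecast: "
      f"{kernel_weighted_forecast(vol):.5f}")
```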
By: | Gunnar Bårdsen and Helmut Lütkepohl (Department of Economics, Norwegian University of Science and Technology) |
Abstract: | Sometimes forecasts of the original variable are of interest even though the variable appears in logarithms (logs) in a system of time series. In that case, converting the forecast for the log of the variable to a naive forecast of the original variable by simply applying the exponential transformation is not optimal theoretically. A simple expression for the optimal forecast under normality assumptions is derived. Despite its theoretical advantages, the optimal forecast is shown to be inferior to the naive forecast if specification and estimation uncertainty are taken into account. Hence, in practice using the exponential of the log forecast is preferable to using the optimal forecast.
Date: | 2009–06–16 |
URL: | http://d.repec.org/n?u=RePEc:nst:samfok:10409&r=ecm |
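The key formula behind the entry above: if the forecast of log Y is normal with mean m and forecast-error variance s^2, the minimum-MSE ("optimal") forecast of Y is exp(m + s^2/2), while the naive forecast is exp(m). A minimal numerical sketch with illustrative values:

```python
import numpy as np

# Forecast of log(Y): mean m and forecast-error variance s2 (illustrative values).
m, s2 = 4.60, 0.09

naive = np.exp(m)              # simply exponentiate the log forecast
optimal = np.exp(m + s2 / 2)   # minimum-MSE forecast under normality (lognormal mean)

print(f"Naive forecast:   {naive:.2f}")
print(f"Optimal forecast: {optimal:.2f}  (ratio = exp(s2/2) = {np.exp(s2 / 2):.4f})")
```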
By: | Dae-Jin Lee; Maria Durban |
Abstract: | In recent years, spatial and spatio-temporal modelling have become an important area of research in many fields (epidemiology, environmental studies, disease mapping, ...). However, most of the models developed are constrained by the large amounts of data available. We propose the use of penalized splines (P-splines) in a mixed model framework for smoothing spatio-temporal data. Our approach allows the consideration of interaction terms that can be decomposed as a sum of smooth functions, similarly to an ANOVA decomposition. The properties of the bases used for regression allow the use of algorithms that can handle large amounts of data. We show that, by imposing the same constraints as in a factorial design, it is possible to avoid identifiability problems. We illustrate the methodology with European ozone levels over the period 1999-2005.
Keywords: | ANOVA decomposition, Mixed models, Penalized Splines, Spatiotemporal data |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws093312&r=ecm |
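The P-spline smoother underlying the entry above combines a B-spline basis with a difference penalty on the coefficients, estimated by penalized least squares: beta = (B'B + lambda*D'D)^(-1) B'y. The one-dimensional sketch below shows that core step; the knot placement, penalty order, and smoothing parameter are illustrative assumptions, and the paper's spatio-temporal ANOVA-type interactions and mixed-model representation are not reproduced.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)   # noisy signal

# Cubic B-spline basis on equally spaced (clamped) knots.
degree, n_inner = 3, 20
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0, 1, n_inner),
                        [1.0] * degree))
n_basis = len(knots) - degree - 1
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)
])

# Second-order difference penalty D and penalized least-squares fit.
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0                                            # illustrative smoothing parameter
beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fitted = B @ beta
print(f"Residual std of the P-spline fit: {np.std(y - fitted):.3f}")
```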
By: | Colin Stewart |
Abstract: | This paper considers the problem of testing an expert who makes probabilistic forecasts about the outcomes of a stochastic process. I show that, under general conditions on the tester's prior, a likelihood test can distinguish informed from uninformed experts with high prior probability. The test rejects informed experts on data-generating processes where the tester quickly learns the true probabilities by updating her prior. However, the set of processes on which informed experts are rejected is topologically small. These results contrast sharply with many negative results in the literature. |
Keywords: | Probability forecasts, testing, experts |
JEL: | C44 D81 D83 |
Date: | 2009–06–24 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-360&r=ecm |
By: | de Luna, Xavier (Department of Statistics, Umeå University); Lundin, Mathias (Department of Statistics, Umeå University) |
Abstract: | In observational studies, the estimation of a treatment effect on an outcome of interest is often done by controlling for a set of pre-treatment characteristics (covariates). This yields an unbiased estimator of the treatment effect when the assumption of unconfoundedness holds, that is, there are no unobserved covariates affecting both the treatment assignment and the outcome. This is in general not realistically testable. It is, therefore, important to analyse how sensitive the inference is with respect to the unconfoundedness assumption. In this paper we propose a procedure to conduct such a Bayesian sensitivity analysis, where the usual parameter uncertainty and the uncertainty due to the unconfoundedness assumption can be compared. To measure departures from the assumption we use a correlation coefficient which is intuitively comprehensible and ensures that the results of sensitivity analyses made on different evaluation studies are comparable. Our procedure is applied to the Lalonde data and to a study of the effect of college choice on income in Sweden.
Keywords: | Causal inference; effects of college choice; propensity score; register data |
JEL: | C11 C15 |
Date: | 2009–06–10 |
URL: | http://d.repec.org/n?u=RePEc:hhs:ifauwp:2009_012&r=ecm |
By: | Cecilia Frale; Massimiliano Marcellino; Gian Luigi Mazzi; Tommaso Proietti |
Abstract: | In this paper we propose a monthly measure for the euro area Gross Domestic Product (GDP) based on a small scale factor model for mixed frequency data, featuring two factors: the first is driven by hard data, whereas the second captures the contribution of survey variables as coincident indicators. Within this framework we evaluate both the in-sample contribution of the second survey-based factor, and the short term forecasting performance of the model in a pseudo-real time experiment. We find that the survey-based factor plays a significant role for two components of GDP: Industrial Value Added and Exports. Moreover, in terms of out-of-sample forecasting accuracy, the two-factor model outperforms the traditional autoregressive distributed lag (ADL) specifications and the single-factor model, with few exceptions for Exports and in growth rates.
Keywords: | Survey data, Temporal disaggregation, Multivariate state space models, Dynamic factor models, Kalman filter and smoother, Chain-linking
JEL: | E32 E37 C53 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/19&r=ecm |
By: | Lennart Hoogerheide; Richard Kleijn; Francesco Ravazzolo; Herman K. van Dijk; Marno Verbeek (Econometric and Tinbergen Institutes, Erasmus University Rotterdam; PGGM, Zeist; Norges Bank; Econometric and Tinbergen Institutes, Erasmus University Rotterdam; Rotterdam School of Management, Erasmus University Rotterdam) |
Abstract: | Several Bayesian model combination schemes, including some novel approaches that simultaneously allow for parameter uncertainty, model uncertainty and robust time varying model weights, are compared in terms of forecast accuracy and economic gains using financial and macroeconomic time series. The results indicate that the proposed time varying model weight schemes outperform other combination schemes in terms of predictive and economic gains. In an empirical application using returns on the S&P 500 index, time varying model weights provide improved forecasts with substantial economic gains in an investment strategy including transaction costs. Another empirical example refers to forecasting US economic growth over the business cycle. It suggests that time varying combination schemes may be very useful in business cycle analysis and forecasting, as these may provide an early indicator for recessions.
Keywords: | Forecast combination, Bayesian model averaging, time varying model weights, portfolio optimization, business cycle |
Date: | 2009–06–23 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2009_10&r=ecm |
By: | Franses, Philip Hans; Groot, Bert de; Legerstee, Rianne (Nyenrode Business Universiteit) |
Abstract: | This paper reports on the Wald test for a1 = a2 = 0 in the regression model y_t = μ + a1 cos(2πλt) + a2 sin(2πλt) + u_t, where the frequency λ is estimated using nonlinear least squares (NLS). As this situation is not standard, we provide critical values for further use. An illustration to quarterly GDP in the Netherlands is given. A power study shows that choosing inappropriate starting values for λ leads to a quick loss of power.
Keywords: | Harmonic regressors, Critical values
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:dgr:nijrep:2009-05&r=ecm |
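A minimal sketch of the setup in the entry above, under the reconstruction y_t = μ + a1·cos(2πλt) + a2·sin(2πλt) + u_t: estimate the model by nonlinear least squares and form the Wald statistic for a1 = a2 = 0. The parameterization, starting values, and simulated data are assumptions; note that the statistic's null distribution is nonstandard, which is why the paper tabulates critical values.

```python
import numpy as np
from scipy.optimize import curve_fit

def harmonic(t, mu, a1, a2, lam):
    """Harmonic regression mean: mu + a1*cos(2*pi*lam*t) + a2*sin(2*pi*lam*t)."""
    return mu + a1 * np.cos(2 * np.pi * lam * t) + a2 * np.sin(2 * np.pi * lam * t)

rng = np.random.default_rng(4)
t = np.arange(1, 121, dtype=float)            # e.g. 30 years of quarterly data
y = harmonic(t, 1.0, 0.4, 0.2, 1 / 32) + rng.normal(scale=0.5, size=t.size)

# NLS fit; the starting value for the frequency lam matters (cf. the power study).
p0 = [y.mean(), 0.1, 0.1, 1 / 30]
popt, pcov = curve_fit(harmonic, t, y, p0=p0)

# Wald statistic for H0: a1 = a2 = 0 using the (a1, a2) block of the covariance.
a = popt[1:3]
V = pcov[1:3, 1:3]
wald = float(a @ np.linalg.solve(V, a))
print(f"Estimated frequency lam = {popt[3]:.4f}, Wald statistic = {wald:.2f}")
```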
By: | Flores, Carlos A. (University of Miami); Flores-Lagunes, Alfonso (University of Florida) |
Abstract: | An important goal when analyzing the causal effect of a treatment on an outcome is to understand the mechanisms through which the treatment causally works. We define a causal mechanism effect of a treatment and the causal effect net of that mechanism using the potential outcomes framework. These effects provide an intuitive decomposition of the total effect that is useful for policy purposes. We offer identification conditions based on an unconfoundedness assumption to estimate them, within a heterogeneous effect environment, and for the cases of a randomly assigned treatment and when selection into the treatment is based on observables. Two empirical applications illustrate the concepts and methods. |
Keywords: | causal inference, causal mechanisms, post-treatment variables, principal stratification |
JEL: | C13 C21 C14 |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4237&r=ecm |
By: | Bonato, Matteo (University of Zurich); Caporin, Massimiliano (University of Padova); Ranaldo, Angelo (Swiss National Bank) |
Abstract: | In modelling and forecasting volatility, two main trade-offs emerge: mathematical tractability versus economic interpretation and accuracy versus speed. The authors attempt to reconcile, at least partially, both trade-offs. The former trade-off is crucial for many financial applications, including portfolio and risk management. The speed/accuracy trade-off is becoming more and more relevant in an environment of large portfolios, prolonged periods of high volatility (as in the current financial crisis), and the burgeoning phenomenon of algorithmic trading in which computer-based trading rules are automatically implemented. The increased availability of high-frequency data provides new tools for forecasting variances and covariances between assets. However, there is scant literature on forecasting more than one realised volatility. Following Gourieroux, Jasiak and Sufana (Journal of Econometrics, forthcoming), the authors propose a methodology to model and forecast realised covariances without any restriction on the parameters while maintaining economic interpretability. An empirical application based on variance forecasting and risk evaluation of a portfolio of two US treasury bills and two exchange rates is presented. The authors compare their model with several alternative specifications proposed in the literature. Empirical findings suggest that the model can be efficiently used in large portfolios. |
Keywords: | Wishart process; realized volatility; Granger causality; volatility spillover; Value-at-Risk |
JEL: | C13 C16 C22 C51 C53 |
Date: | 2009–06–24 |
URL: | http://d.repec.org/n?u=RePEc:ris:snbwpa:2009_003&r=ecm |
By: | Yingyao Hu, David McAdams and Matthew Shum |
Abstract: | We propose a novel methodology for nonparametric identification of first-price auction models with independent private values, which accommodates auction-specific unobserved heterogeneity and bidder asymmetries, based on recent results from the econometric literature on nonclassical measurement error in Hu and Schennach (2008). Unlike Krasnokutskaya (2009), we do not require that equilibrium bids scale with the unobserved heterogeneity. Our approach accommodates a wide variety of applications, including settings in which there is an unobserved reserve price, an unobserved cost of bidding, or an unobserved number of bidders, as well as those in which the econometrician fails to observe some factor with a non-multiplicative effect on bidder values. |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:jhu:papers:553&r=ecm |
By: | Karen A. Kopecky (Department of Economics, The University of Western Ontario); Richard M. H. Suen (Department of Economics, University of California Riverside) |
Abstract: | This paper re-examines the Rouwenhorst method of approximating first-order autoregressive processes. This method is appealing because it can match the conditional and unconditional mean, the conditional and unconditional variance and the first-order autocorrelation of any AR(1) process. This paper provides the first formal proof of this and other results. Compared with five other methods, the Rouwenhorst method has the best performance in approximating the business cycle moments generated by the stochastic growth model. It is shown that, equipped with the Rouwenhorst method, an alternative approach to generating these moments has a higher degree of accuracy than the simulation method.
Keywords: | Numerical Methods, Finite State Approximations, Optimal Growth Model |
JEL: | C63 |
Date: | 2009–05 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:200904&r=ecm |
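A minimal sketch of the Rouwenhorst method discussed above, for an AR(1) process z' = rho*z + eps with eps ~ N(0, sigma^2): an evenly spaced symmetric grid with endpoints ±sigma_z*sqrt(N-1) and a transition matrix built recursively from p = (1+rho)/2. The grid size and parameter values below are illustrative; the check at the end verifies the moment-matching property mentioned in the abstract.

```python
import numpy as np

def rouwenhorst(n, rho, sigma):
    """N-state Markov chain approximating z' = rho*z + eps, eps ~ N(0, sigma^2)."""
    p = (1.0 + rho) / 2.0
    # Recursive construction of the transition matrix.
    P = np.array([[p, 1 - p], [1 - p, p]])
    for m in range(3, n + 1):
        Q = np.zeros((m, m))
        Q[:m - 1, :m - 1] += p * P
        Q[:m - 1, 1:] += (1 - p) * P
        Q[1:, :m - 1] += (1 - p) * P
        Q[1:, 1:] += p * P
        Q[1:m - 1, :] /= 2.0          # middle rows were added twice
        P = Q
    # Evenly spaced grid matching the unconditional standard deviation.
    psi = sigma / np.sqrt(1 - rho ** 2) * np.sqrt(n - 1)
    grid = np.linspace(-psi, psi, n)
    return grid, P

grid, P = rouwenhorst(n=7, rho=0.95, sigma=0.01)
pi = np.linalg.matrix_power(P, 2000)[0]       # approximate stationary distribution
var = pi @ grid ** 2                          # chain mean is zero by symmetry
autocov = pi @ (grid * (P @ grid))
print(f"Chain std {np.sqrt(var):.5f} vs AR(1) std {0.01 / np.sqrt(1 - 0.95 ** 2):.5f}")
print(f"Chain autocorrelation {autocov / var:.4f} vs rho = 0.95")
```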
By: | Helmut Luetkepohl |
Abstract: | Aggregated time series variables can be forecasted in different ways. For example, they may be forecasted on the basis of the aggregate series, or forecasts of disaggregated variables may be obtained first and then aggregated. A number of forecasts are presented and compared. Classical theoretical results on the relative efficiencies of different forecasts are reviewed and some complications are discussed which invalidate the theoretical results. Contemporaneous as well as temporal aggregation are considered.
Keywords: | Autoregressive moving-average process, temporal aggregation, contemporaneous aggregation, vector autoregressive moving-average process |
JEL: | C22 C32 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/17&r=ecm |
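A minimal sketch contrasting the two routes compared above for forecasting an aggregate: forecast the aggregate series directly, or forecast the disaggregated components and sum the forecasts. The AR(1) models, OLS estimation, and simulated components are illustrative assumptions.

```python
import numpy as np

def ar1_forecast(y):
    """One-step forecast from an AR(1) with intercept, estimated by OLS."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return beta[0] + beta[1] * y[-1]

rng = np.random.default_rng(5)
T = 300
# Two illustrative persistent components and their aggregate.
c1, c2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    c1[t] = 0.8 * c1[t - 1] + rng.normal()
    c2[t] = 0.3 * c2[t - 1] + rng.normal()
agg = c1 + c2

direct = ar1_forecast(agg)                       # forecast the aggregate itself
bottom_up = ar1_forecast(c1) + ar1_forecast(c2)  # aggregate the component forecasts
print(f"Direct forecast: {direct:.3f}, bottom-up forecast: {bottom_up:.3f}")
```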
By: | Ralph D. Snyder; J. Keith Ord |
Abstract: | Using an innovations state space approach, it has been found that the Akaike information criterion (AIC) works slightly better, on average, than prediction validation on withheld data, for choosing between the various common methods of exponential smoothing for forecasting. There is, however, a puzzle. Should the count of the seed states be incorporated into the penalty term in the AIC formula? We examine arguments for and against this practice in an attempt to find an acceptable resolution of this question. |
Keywords: | Exponential smoothing, forecasting, Akaike information criterion, innovations state space approach |
JEL: | C22 |
Date: | 2009–06–11 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2009-4&r=ecm |
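The puzzle raised above is whether the seed state should be counted in the AIC penalty. The sketch below fits simple exponential smoothing by minimizing the one-step squared errors and compares a Gaussian-type AIC, n*log(SSE/n) + 2k, with k counting the smoothing parameter only versus the smoothing parameter plus the seed level; the simulated data and this particular AIC form are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ses_sse(alpha, y):
    """Sum of squared one-step errors for simple exponential smoothing."""
    level = y[0]                  # seed state initialised at the first observation
    sse = 0.0
    for obs in y[1:]:
        err = obs - level
        sse += err ** 2
        level += alpha * err
    return sse

rng = np.random.default_rng(6)
y = 10 + np.cumsum(rng.normal(scale=0.2, size=150)) + rng.normal(scale=1.0, size=150)

res = minimize_scalar(ses_sse, bounds=(0.01, 0.99), args=(y,), method="bounded")
n, sse = len(y) - 1, res.fun

for k, label in [(1, "alpha only"), (2, "alpha + seed level")]:
    aic = n * np.log(sse / n) + 2 * k
    print(f"AIC counting {label}: {aic:.2f}")
```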
By: | Harin, Alexander |
Abstract: | A general correcting formula for forecasting (as a framework for long-use and standardized forecasts) is proposed. The formula provides new forecasting resources and areas of application, including economic forecasting.
Keywords: | forecasting; prediction; forecasting correction; planning
JEL: | C53 E17 F17 H68 J11 |
Date: | 2009–06–15 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:15746&r=ecm |
By: | Jan Fidrmuc; Ariane Tichit |
Abstract: | We argue that econometric analyses of growth in post-communist countries are vulnerable to structural breaks across time and/or countries. We demonstrate this by identifying structural breaks in growth for 25 countries and over 18 years. The method we use allows identification of structural breaks at a priori unknown points in space or time. The only prior assumption is that breaks occur in relation to progress in implementing market-oriented reforms. We find robust evidence that the pattern of growth in transition has changed at least three times, yielding four different models of growth associated with different stages of reform. The speed with which individual countries progress through these stages differs considerably.
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:edb:cedidp:09-02&r=ecm |
By: | Audrone Jakaitiene (Institute of Mathematics and Informatics, Akademijos st. 4, LT-08663 Vilnius, Lithuania.); Stéphane Dées (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.) |
Abstract: | Forecasting the world economy is a difficult task given the complex interrelationships within and across countries. This paper proposes a number of approaches to forecast short-term changes in selected world economic variables and aims, first, at ranking various forecasting methods in terms of forecast accuracy and, second, at checking whether methods that directly forecast aggregate variables (direct approaches) outperform methods based on the aggregation of country-specific forecasts (bottom-up approaches). Overall, all methods perform better than a simple benchmark for short horizons (up to three months ahead). Among the forecasting approaches used, factor models appear to perform the best. Moreover, direct approaches outperform bottom-up ones for real variables, but not for prices. Finally, when country-specific forecasts are adjusted to match direct forecasts at the aggregate level (top-down approaches), the forecast accuracy is neither improved nor deteriorated (i.e. top-down and bottom-up approaches are broadly equivalent in terms of country-specific forecast accuracy).
Keywords: | Factor models, Forecasts, Time series models
JEL: | C53 C32 E37 F17
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:200901059&r=ecm |
By: | Amanda E. Kowalski |
Abstract: | The extent to which consumers respond to marginal prices for medical care is important for policy. Using recent data and a new censored quantile instrumental variable (CQIV) estimator, I estimate the price elasticity of expenditure on medical care. The CQIV estimator allows the estimates to vary across the skewed expenditure distribution, it allows for censoring at zero expenditure nonparametrically, and it allows for the insurance-induced endogenous relationship between price and expenditure. For identification, I rely on cost sharing provisions that generate marginal price differences between individuals who have injured family members and individuals who do not. I estimate the price elasticity of expenditure on medical care to be stable at -2.3 across the .65 to .95 conditional quantiles of the expenditure distribution. These quantile estimates are an order of magnitude larger than previous mean estimates. I consider several explanations for why price responsiveness is larger than previous estimates would suggest. |
JEL: | I1 |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:15085&r=ecm |
By: | Wouters G.; De Schepper A. |
Abstract: | Consider the problem of computing optimal lower and upper bounds for the expected value E[φ(X)], where X is a random variable whose probability distribution is uncertain. This paper studies the case in which the density of X is restricted by multiple shape constraints, each imposed on a different subset of the domain. We derive (closed) convex hull representations that allow us to reduce the optimization problem to a class of generating measures that are composed of convex sums of local probability measures. Furthermore, the notion of mass constraints is introduced to spread out the probability mass over the entire domain. A generalization to mass uncertainty is discussed as well.
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:ant:wpaper:2009005&r=ecm |