New Economics Papers on Econometrics
By: | Yixiao Sun (University of California, San Diego); Peter C.B. Phillips (Cowles Foundation, Yale University) |
Abstract: | In time series regression with nonparametrically autocorrelated errors, it is now standard empirical practice to construct confidence intervals for regression coefficients on the basis of nonparametrically studentized t-statistics. The standard error used in the studentization is typically estimated by a kernel method that involves some smoothing process over the sample autocovariances. The underlying parameter (M) that controls this smoothing process is a bandwidth or truncation lag and it plays a key role in the finite sample properties of tests and the actual coverage properties of the associated confidence intervals. The present paper develops a bandwidth choice rule for M that optimizes the coverage accuracy of interval estimators in the context of linear GMM regression. The optimal bandwidth balances the asymptotic variance with the asymptotic bias of the robust standard error estimator. This approach contrasts with the conventional bandwidth choice rule for nonparametric estimation where the focus is the nonparametric quantity itself and the choice rule balances asymptotic variance with squared asymptotic bias. It turns out that the optimal bandwidth for interval estimation has a different expansion rate and is typically substantially larger than the optimal bandwidth for point estimation of the standard errors. The new approach to bandwidth choice calls for refined asymptotic measurement of the coverage probabilities, which are provided by means of an Edgeworth expansion of the finite sample distribution of the nonparametrically studentized t-statistic. This asymptotic expansion extends earlier work and is of independent interest. A simple plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples. Issues of interval length and false coverage probability are also considered, leading to a secondary approach to bandwidth selection with similar properties.
Keywords: | Asymptotic expansion, Bias, Confidence interval, Coverage probability, Edgeworth expansion, Lag kernel, Long run variance, Optimal bandwidth, Spectrum |
JEL: | C13 C14 C22 C51 |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1661&r=ecm |
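As background to the role of the bandwidth M, here is a minimal sketch (not from the paper) of the kernel standard-error construction the abstract refers to, using the Bartlett lag kernel on a simulated AR(1) error series. The function name and the AR(1) design are illustrative assumptions; the paper's contribution is the coverage-optimal choice of M, not this estimator itself.

```python
import numpy as np

def bartlett_lrv(u, M):
    """Kernel (Bartlett) estimate of the long-run variance of u,
    with bandwidth / truncation lag M."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    lrv = u @ u / n                      # lag-0 sample autocovariance
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)          # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / n
    return lrv

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
u = np.zeros(500)
for t in range(1, 500):                  # AR(1) errors, rho = 0.6
    u[t] = 0.6 * u[t - 1] + e[t]

# Larger M lowers the bias but raises the variance of the estimate;
# the abstract's point is that the coverage-optimal M is larger than
# the MSE-optimal M used for point estimation of the standard error.
for M in (4, 12, 30):
    print(M, bartlett_lrv(u, M))
```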
By: | Taisuke Otsu (Cowles Foundation, Yale University); Myung Hwan Seo (London School of Economics); Yoon-Jae Whang (Seoul National University) |
Abstract: | We propose non-nested hypothesis tests for conditional moment restriction models based on the method of generalized empirical likelihood (GEL). By utilizing the implied GEL probabilities from a sequence of unconditional moment restrictions that contains information equivalent to that of the conditional moment restrictions, we construct Kolmogorov-Smirnov and Cramér-von Mises type moment encompassing tests. The advantages of our tests over those of Otsu and Whang (2007) are: (i) they are free from smoothing parameters, (ii) they can be applied to weakly dependent data, and (iii) they allow non-smooth moment functions. We derive the null distributions, the validity of a bootstrap procedure, and the local and global power properties of our tests. Simulation results show that our tests have reasonable size and power performance in finite samples.
Keywords: | Empirical likelihood, Non-nested tests, Conditional moment restrictions |
JEL: | C12 C13 C14 C22 |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1660&r=ecm |
By: | Buddelmeyer, Hielke (Melbourne Institute of Applied Economic and Social Research); Jensen, Paul H. (University of Melbourne); Oguzoglu, Umut (Melbourne Institute of Applied Economic and Social Research); Webster, Elizabeth (University of Melbourne) |
Abstract: | Since little is known about the degree of bias in estimated fixed effects in panel data models, we run Monte Carlo simulations on a range of different estimators. We find that the Anderson-Hsiao IV, Kiviet's bias-corrected LSDV and GMM estimators all perform well in both short and long panels. However, OLS outperforms the other estimators when the cross-section is small (N = 20), the time dimension is short (T = 5) and the coefficient on the lagged dependent variable is large (γ = 0.8).
Keywords: | dynamic model, LSDV, panel data, fixed effects |
JEL: | C23 O11 E00 |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp3487&r=ecm |
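A stripped-down sketch of the kind of experiment described, assuming the simplest dynamic panel design y_it = γ·y_{i,t-1} + α_i + ε_it. Only pooled OLS and the Anderson-Hsiao IV estimator are implemented, and all names and parameter choices are illustrative; Anderson-Hsiao is known to be erratic in very short panels, which is consistent with the abstract's finding for OLS.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_panel(N, T, gamma, burn=50):
    """y_it = gamma * y_{i,t-1} + alpha_i + eps_it, burn-in discarded."""
    alpha = rng.standard_normal(N)
    y = np.zeros((N, T + burn))
    for t in range(1, T + burn):
        y[:, t] = gamma * y[:, t - 1] + alpha + rng.standard_normal(N)
    return y[:, burn:]

def ols_pooled(y):
    x, z = y[:, :-1].ravel(), y[:, 1:].ravel()
    x, z = x - x.mean(), z - z.mean()
    return (x @ z) / (x @ x)

def anderson_hsiao(y):
    dy = np.diff(y, axis=1)              # first differences remove alpha_i
    z = y[:, :-2]                        # instrument: y_{i,t-2}
    return (z * dy[:, 1:]).sum() / (z * dy[:, :-1]).sum()

N, T, gamma, R = 20, 5, 0.8, 2000
ols, ah = [], []
for _ in range(R):
    y = simulate_panel(N, T, gamma)
    ols.append(ols_pooled(y))
    ah.append(anderson_hsiao(y))
print("OLS bias:", np.mean(ols) - gamma, " AH bias:", np.mean(ah) - gamma)
```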
By: | Christian Bayer; Christoph Hanck (METEOR)
Abstract: | This paper suggests a combination procedure that exploits the imperfect correlation between cointegration tests to develop a more powerful meta test. To exemplify, we combine the Engle and Granger (1987) and Johansen (1988) tests. Either of these underlying tests can be more powerful than the other, depending on the nature of the data-generating process. The new meta test is at least as powerful as the more powerful of the underlying tests, irrespective of the data-generating process. At the same time, it avoids the arbitrary decision of which test to use when single-test results conflict, as well as the size distortion inherent in separately applying multiple cointegration tests to the same data set. We apply our test to 143 data sets from published cointegration studies. In one third of these cases single tests give conflicting results, whereas our meta test provides an unambiguous test decision.
Keywords: | Economics
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2008014&r=ecm |
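The combination logic can be sketched generically: reject non-cointegration when the smaller of the two p-values falls below a size-corrected cutoff, calibrated from the joint null distribution so that the meta test exploits the tests' imperfect correlation. This is a hedged reconstruction, not the paper's exact procedure; the correlated-uniform illustration below is purely synthetic.

```python
import numpy as np
from scipy.stats import norm

def min_p_cutoff(null_pvals, alpha=0.05):
    """null_pvals: (R, 2) p-value pairs for the Engle-Granger and
    Johansen tests, simulated under the null of no cointegration.
    Returns c with P(min p <= c) = alpha; Bonferroni gives c = alpha/2,
    while correlated tests allow a larger (more powerful) cutoff."""
    return np.quantile(null_pvals.min(axis=1), alpha)

def meta_test(p_eg, p_joh, c):
    return min(p_eg, p_joh) <= c         # True = reject no cointegration

# Illustration with artificially correlated Uniform(0,1) null p-values:
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, .7], [.7, 1]], size=100_000)
pv = norm.cdf(z)                         # correlated uniform pairs
print(min_p_cutoff(pv))                  # noticeably above 0.025
```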
By: | Enzo Giacomini; Wolfgang Härdle; Volker Krätschmer |
Abstract: | Dimension reduction techniques for functional data analysis model and approximate smooth random functions by lower dimensional objects. In many applications the focus of interest lies not only in dimension reduction but also in the dynamic behaviour of the lower dimensional objects. The most prominent dimension reduction technique - functional principal components analysis - however, does not model the time dependence embedded in functional data. In this paper we use dynamic semiparametric factor models (DSFM) to reduce dimensionality and analyse the dynamic structure of unknown random functions by means of inference based on their lower dimensional representation. We apply DSFM to estimate the dynamic structure of risk neutral densities implied by prices of options on the DAX stock index.
Keywords: | dynamic factor models, dimension reduction, risk neutral density |
JEL: | C14 C32 G12 |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-038&r=ecm |
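For contrast with DSFM, a minimal static functional PCA via the SVD; the time dependence that DSFM models explicitly is left unmodelled in the score series here. Names and shapes are illustrative assumptions.

```python
import numpy as np

def functional_pca(curves, L):
    """curves: (T, J) array, row t a function observed on a J-point grid.
    Returns the mean curve, the first L eigenfunctions, and (T, L) scores."""
    mu = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mu, full_matrices=False)
    return mu, Vt[:L], U[:, :L] * s[:L]

# Static FPCA treats the rows as exchangeable; DSFM instead models the
# dynamics of the low-dimensional score series returned above.
```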
By: | Mynbaev, Kairat |
Abstract: | Standardized slowly varying regressors are shown to be $L_p$-approximable. This fact allows one to relax the assumption on linear processes imposed in central limit results by P.C.B. Phillips, as well as to provide alternative proofs of some other statements.
Keywords: | slowly varying regressors; central limit theorem; $L_p$-approximability |
JEL: | C02 C01 |
Date: | 2007–09–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:8838&r=ecm |
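For readers outside this literature, the standard (Karamata) definition of slow variation that the abstract presumes, in LaTeX:

```latex
% Karamata: L is slowly varying at infinity if, for every fixed \lambda > 0,
\lim_{x \to \infty} \frac{L(\lambda x)}{L(x)} = 1 ,
% e.g. L(x) = \log x or L(x) = \log\log x.
```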
By: | Luc Bauwens; Genaro Sucarrat |
Abstract: | The general-to-specific (GETS) methodology is widely employed in the modelling of economic series, but less so in financial volatility modelling because of the computational complexity involved when many explanatory variables are considered. This study proposes a simple way of avoiding the problem when the conditional mean can appropriately be restricted to zero, and undertakes an out-of-sample forecast evaluation of the methodology applied to the modelling of weekly exchange rate volatility. Our findings suggest that GETS specifications perform comparatively well in both ex post and ex ante forecasting, as long as sufficient care is taken with respect to functional form and to how the conditioning information is used. Our forecast comparison also provides an example of a discrete-time explanatory model being more accurate ex post than realised volatility in one-step forecasting.
Keywords: | Exchange rate volatility, General to specific, Forecasting |
JEL: | C53 F31 |
Date: | 2008–04 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:we081810&r=ecm |
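A minimal sketch of the log-variance trick the abstract alludes to: with the conditional mean restricted to zero, log y_t² can be regressed on volatility proxies by OLS and then simplified backward. Real GETS involves diagnostic testing and multiple search paths; this single-path eliminator, the t-ratio cutoff, and all names are simplifying assumptions.

```python
import numpy as np
import statsmodels.api as sm

def gets_logvar(y, X, names, t_crit=2.0):
    """Backward elimination on log(y_t^2) = const + X_t'b + u_t,
    valid as a volatility regression when E[y_t | past] = 0."""
    ly2 = np.log(y ** 2 + 1e-12)         # guard against zero returns
    keep = list(range(X.shape[1]))
    while keep:
        res = sm.OLS(ly2, sm.add_constant(X[:, keep])).fit()
        t = np.abs(res.tvalues[1:])      # t-ratios, constant excluded
        if t.min() >= t_crit:
            break                        # all retained regressors significant
        keep.pop(int(t.argmin()))        # drop the least significant one
    return [names[i] for i in keep]
```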
By: | Lucia Alessi (Laboratory of Economics and Management (LEM), Sant’Anna School of Advanced Studies, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy.); Matteo Barigozzi (Laboratory of Economics and Management (LEM), Sant’Anna School of Advanced Studies, Piazza Martiri della Libertà, 33, 56127 Pisa, Italy.); Marco Capasso (Urban & Regional research centre Utrecht (URU), Faculty of Geosciences, Utrecht University, P.O. Box 80.115, 3508 TC Utrecht, NL.) |
Abstract: | We propose a refinement of the criterion of Bai and Ng [2002] for determining the number of static factors in factor models with large datasets. It consists in multiplying the penalty function by a constant that tunes its penalizing power, as in the Hallin and Liška [2007] criterion for the number of dynamic factors. By iteratively evaluating the criterion for different values of this constant, we achieve more robust results than with a fixed penalty function. This is shown by means of Monte Carlo simulations on seven data generating processes, including heteroskedastic processes, on samples of different sizes. Two empirical applications are carried out on a macroeconomic and a financial dataset.
Keywords: | Approximate factor models, Information criterion, Number of factors
JEL: | C52
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20080903&r=ecm |
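A sketch of the criterion's moving part, assuming the PC-type form IC_c(k) = V(k) + c·k·p(N,T) with the Bai-Ng p2 penalty; the paper's procedure then scans c (and subsample sizes) for a region where the selected k is stable, in the spirit of Hallin and Liška. Function names are illustrative.

```python
import numpy as np

def n_factors(X, kmax, c):
    """Select k minimising V(k) + c * k * p(N,T), where V(k) is the
    mean squared residual after removing k principal components."""
    T, N = X.shape
    eig = np.linalg.svd(X, compute_uv=False) ** 2 / (N * T)
    V = eig.sum() - np.cumsum(eig)       # V(k) for k = 1, 2, ...
    pen = (N + T) / (N * T) * np.log(min(N, T))
    k = np.arange(1, kmax + 1)
    return int(k[np.argmin(V[:kmax] + c * k * pen)])

# Scanning c: for each c on a grid, compute n_factors on nested
# subsamples and keep the c-interval where the answer is stable.
```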
By: | J. Keith Ord; Rob J. Hyndman; Anne B. Koehler; Ralph D. Snyder |
Abstract: | Statistical process control (SPC) has evolved beyond its classical applications in manufacturing to the monitoring of economic and social phenomena. This extension requires consideration of autocorrelated and possibly non-stationary time series. Less attention has been paid to the possibility that the variance of the process may also change over time. In this paper we use the innovations state space modeling framework to develop conditionally heteroscedastic models. We provide examples to show that the incorrect use of homoscedastic models may lead to erroneous decisions about the nature of the process. The framework is extended to count data, for which we also introduce a new type of chart, the P-value chart, to accommodate changes in distributional form from one period to the next.
Keywords: | Control charts, count data, GARCH, heteroscedasticity, innovations, state space, statistical process control |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2008-4&r=ecm |
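A minimal version of the count-data idea, with a constant Poisson mean standing in for the paper's state-space one-step-ahead predictive distribution; the names, the one-sided convention, and the signal threshold are assumptions.

```python
import numpy as np
from scipy.stats import poisson

def pvalue_chart(counts, lam, alpha=0.005):
    """One-sided P-value chart: p_t = P(Y >= y_t) under Poisson(lam);
    signal an upward out-of-control shift when p_t < alpha."""
    y = np.asarray(counts)
    p = poisson.sf(y - 1, lam)           # sf(y - 1) = P(Y >= y)
    return p, p < alpha

p, signal = pvalue_chart([3, 5, 2, 11, 4], lam=4.0)
print(np.round(p, 4), signal)            # the count of 11 triggers a signal
```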
By: | Laurence Fung (Research Department, Hong Kong Monetary Authority); Ip-wing Yu (Research Department, Hong Kong Monetary Authority) |
Abstract: | The predictability of stock market returns has been a challenge to market practitioners and financial economists. It is also important to central banks responsible for monitoring financial market stability. A number of variables have been identified as predictors of future stock market returns, with impressive in-sample results. Nonetheless, these variables have often performed poorly in out-of-sample forecasts. This study utilises a new method known as "Aggregate Forecasting Through Exponential Re-weighting (AFTER)" to combine forecasts from different models and achieve better out-of-sample forecast performance from these variables. Empirical results suggest that, for longer forecast horizons, combining forecasts based on AFTER provides better out-of-sample predictions than both the historical average return and forecasts from models chosen by commonly used model selection criteria.
Keywords: | Forecasting, Model combination, Model uncertainty |
JEL: | G11 G12 C13 |
Date: | 2008–03 |
URL: | http://d.repec.org/n?u=RePEc:hkg:wpaper:0801&r=ecm |
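A simplified exponential-re-weighting combination in the spirit of AFTER; the published algorithm also rescales by estimated conditional variances, so the tuning constant and names here are assumptions.

```python
import numpy as np

def after_combine(forecasts, actuals, tau=1.0):
    """At each t, weight model j by exp(-cumulative squared error / tau),
    then combine. forecasts: (T, J) array; actuals: (T,) array."""
    T, J = forecasts.shape
    cum = np.zeros(J)
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-(cum - cum.min()) / tau)   # stabilised weights
        w /= w.sum()
        combined[t] = w @ forecasts[t]          # combine before seeing y_t
        cum += (forecasts[t] - actuals[t]) ** 2  # then update the record
    return combined
```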
By: | Pedro M.D.C.B. Gouveia; Denise R. Osborn; Paulo M.M. Rodrigues |
Abstract: | Forecast combination methodologies exploit complementary relations between different types of econometric models and often deliver more accurate forecasts than the individual models on which they are based. This paper examines forecasts of seasonally unadjusted monthly industrial production data for 17 countries and the Euro Area, comparing individual model forecasts and forecast combination methods in order to examine whether the latter are able to take advantage of the properties of different seasonal specifications. In addition to linear models (with deterministic seasonality and with nonstationary stochastic seasonality), more complex models that capture nonlinearity or seasonally varying coefficients (periodic models) are also examined. Although parsimonious periodic models perform well for some countries, forecast combinations provide the best overall performance at short horizons, implying that utilizing the characteristics captured by different models can contribute to improved forecast accuracy.
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:man:cgbcrp:102&r=ecm |
By: | Anton Andriyashin |
Abstract: | Stock picking is the field of financial analysis that is of particular interest to many professional investors and researchers. In this study stock picking is implemented via binary classification trees. Optimal tree size is believed to be the crucial factor in the forecasting performance of the trees. While there exists a standard method of tree pruning, based on the cost-complexity tradeoff and used in the majority of studies employing binary decision trees, this paper introduces a novel methodology of nonsymmetric tree pruning called the Best Node Strategy (BNS). An important property of BNS is proved that provides an easy way to implement the search for the optimal tree size in practice. BNS is compared with the traditional pruning approach by composing two recursive portfolios out of XETRA DAX stocks, with performance forecasts for each stock provided by the constructed decision trees. It is shown that BNS clearly outperforms the traditional approach according to the backtesting results and the Diebold-Mariano test for statistical significance of the performance difference between the two forecasting methods.
Keywords: | decision tree, stock picking, pruning, earnings forecasting, data mining |
JEL: | C14 C15 C44 C63 G12 |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-035&r=ecm |
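For reference, the standard cost-complexity pruning that BNS is benchmarked against, using scikit-learn on synthetic data; BNS itself (the paper's nonsymmetric rule) is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

# Grow the full tree, then enumerate the nested subtrees that minimise
# misclassification + alpha * (number of leaves) as alpha increases.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(Xtr, ytr)
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(Xtr, ytr)
     for a in path.ccp_alphas),
    key=lambda m: m.score(Xval, yval),    # pick alpha on a validation set
)
print(best.get_n_leaves(), round(best.score(Xval, yval), 3))
```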
By: | Yélé Maweki Batana; Jean-Yves Duclos |
Abstract: | This paper tests for robust multidimensional poverty comparisons across six countries of the West African Economic and Monetary Union (WAEMU). Two dimensions are considered: nutritional status and assets. The estimation of the asset index is based on two factorial analysis methods. The first uses Multiple Correspondence Analysis; the second is based on the maximization of a likelihood function and on Bayesian analysis. Using Demographic and Health Surveys (DHS), pivotal bootstrap tests lead to statistically significant dominance relationships for 12 of the 15 possible pairs of the six WAEMU countries. Multidimensional poverty is also inferred to be more prevalent in rural than in urban areas. These results tend to support those derived from more restrictive unidimensional dominance tests.
Keywords: | Stochastic dominance, factorial analysis, Bayesian analysis, multidimensional poverty, empirical likelihood function, bootstrap tests
JEL: | C10 C11 C12 C30 C39 I32 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:lvl:lacicr:0808&r=ecm |
By: | Mishra, SK |
Abstract: | The effects of outliers on the mean, the standard deviation and Pearson's correlation coefficient are well known. Principal Components analysis uses Pearson's product moment correlation coefficients to construct composite indices from indicator variables and hence may be very sensitive to the effects of outliers in the data. The median, the mean deviation and Bradley's coefficient of absolute correlation are less susceptible to the effects of outliers. This paper proposes a method to obtain composite indices by maximizing the sum of absolute Bradley's correlation coefficients between the indicator variables and the derived composite index.
Keywords: | Composite index; Principal Components analysis; Bradley’s absolute correlation coefficient; outliers; median; mean deviation; Differential Evolution; global optimization
JEL: | C43 C61 C01 |
Date: | 2008–05–26 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:8874&r=ecm |
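A sketch of the optimization described, using scipy's differential evolution. Ordinary |Pearson r| stands in for Bradley's absolute correlation coefficient, whose exact form the abstract presumes, so treat the objective and all names as labeled placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def composite_index(X):
    """Weights w maximising sum_j |corr(Xw, X_j)|; |Pearson r| is a
    stand-in here for Bradley's absolute correlation coefficient."""
    def neg_score(w):
        z = X @ w
        if z.std() < 1e-9:
            return 0.0                    # degenerate weights score worst
        return -sum(abs(np.corrcoef(z, X[:, j])[0, 1])
                    for j in range(X.shape[1]))
    res = differential_evolution(neg_score, [(-1, 1)] * X.shape[1], seed=0)
    w = res.x / np.linalg.norm(res.x)     # normalise the weight vector
    return X @ w, w
```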