NEP: New Economics Papers on Econometrics
By: | Shanika L Wickramasuriya; George Athanasopoulos; Rob J Hyndman |
Abstract: | Large collections of time series often have aggregation constraints due to product or geographical hierarchies. The forecasts for the disaggregated series are usually required to add up exactly to the forecasts of the aggregated series, a constraint known as "aggregate consistency". The combination forecasts proposed by Hyndman et al. (2011) are based on a Generalized Least Squares (GLS) estimator and require an estimate of the covariance matrix of the reconciliation errors (i.e., the errors that arise due to aggregate inconsistency). We show that this is impossible to estimate in practice due to identifiability conditions. |
Keywords: | Hierarchical time series, forecasting, reconciliation, contemporaneous error correlation, trace minimization |
JEL: | C32 C53 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2015-15&r=ecm |
By: | Matei Demetrescu (Christian-Albrechts-University of Kiel); Christoph Hanck (University of Duisburg-Essen); Robinson Kruse (Rijksuniversiteit Groningen and CREATES) |
Abstract: | The fixed-b asymptotic framework provides refinements in the use of heteroskedasticity and autocorrelation consistent variance estimators. The resulting limiting distributions of t-statistics are, however, not pivotal when the unconditional variance changes over time. Such time-varying volatility is an important issue for many financial and macroeconomic time series. To regain pivotal fixed-b inference under time-varying volatility, we discuss three alternative approaches. We (i) employ the wild bootstrap (Cavaliere and Taylor, 2008, ET), (ii) resort to time transformations (Cavaliere and Taylor, 2008, JTSA) and (iii) consider selecting test statistics and asymptotics according to the outcome of a heteroscedasticity test, since small-b asymptotics deliver standard limiting distributions irrespective of the so-called variance profile of the series. We quantify the degree of size distortions from using the standard fixed-b approach assuming homoskedasticity and compare the effectiveness of the corrections via simulations. It turns out that the wild bootstrap approach is highly recommendable in terms of size and power. An application to testing for equal predictive ability using the Survey of Professional Forecasters illustrates the usefulness of the proposed corrections. |
Keywords: | Hypothesis testing, HAC estimation, HAR testing, Bandwidth, Robustness |
JEL: | C12 C32 |
Date: | 2016–01–05 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2016-01&r=ecm |
By: | Youssef, Ahmed; Abonazel, Mohamed R. |
Abstract: | This paper considers the first-order autoregressive panel model, a simple model for dynamic panel data (DPD). The generalized method of moments (GMM) gives efficient estimators for these models, but this efficiency depends on the choice of the weighting matrix used in GMM estimation. Conventional GMM estimators use non-optimal weighting matrices, which leads to a loss of efficiency. We therefore present new GMM estimators based on optimal or suboptimal weighting matrices. A Monte Carlo study indicates that the new estimators are more reliable than the conventional estimators in terms of bias and efficiency. |
Keywords: | Dynamic panel data, Generalized method of moments, Kantorovich inequality upper bound, Monte Carlo simulation, Optimal and suboptimal weighting matrices |
JEL: | C4 C5 M21 |
Date: | 2015–09–28 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:68674&r=ecm |
By: | Jakob Guldbæk Mikkelsen (Aarhus University and CREATES); Eric Hillebrand (Aarhus University and CREATES); Giovanni Urga (Cass Business School) |
Abstract: | In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite-sample properties of the maximum likelihood estimator through an extensive simulation study and illustrate the empirical relevance of the time-varying loadings structure using a large quarterly dataset for the US economy. |
Keywords: | High-dimensional factor models, dynamic factor loadings, maximum likelihood, principal components |
JEL: | C13 C33 C55 |
Date: | 2015–12–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2015-61&r=ecm |
By: | Hwang, Jungbin; Sun, Yixiao |
Abstract: | This paper proposes new, simple, and more accurate statistical tests in a cointegrated system that allows for endogenous regressors and serially dependent errors. The approach involves first transforming the time series using some orthonormal basis functions in L²[0,1], whose energy is concentrated at low frequencies, and then running an augmented regression based on the transformed data. The tests are extremely simple to implement, as they can be carried out in exactly the same way as if the transformed regression were a classical linear normal regression. In particular, critical values are from the standard F or t distribution. The proposed F and t tests are robust in that they are asymptotically valid regardless of whether the number of basis functions is held fixed or allowed to grow with the sample size. The F and t tests have more accurate size in finite samples than existing tests such as the asymptotic chi-squared and normal tests based on the fully-modified OLS estimator of Phillips and Hansen (1990) and the trend IV estimator of Phillips (2014) and can be made as powerful as the latter tests. |
Keywords: | Cointegration, F test, Alternative Asymptotics, Nonparametric Series Method, t test, Transformed and Augmented OLS |
Date: | 2016–01–04 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt82k1x4rd&r=ecm |
By: | Youssef, Ahmed H.; El-Sheikh, Ahmed A.; Abonazel, Mohamed R. |
Abstract: | In dynamic panel data (DPD) models, generalized method of moments (GMM) estimation gives efficient estimators. However, this efficiency depends on the choice of the initial weighting matrix. In practice, the inverse of the moment matrix of the instruments has been used as the initial weighting matrix, which leads to a loss of efficiency. We therefore present new GMM estimators based on optimal or suboptimal weighting matrices. A Monte Carlo study indicates a potential efficiency gain from using these matrices: the new GMM estimators are more reliable than the conventional GMM estimators in terms of bias and efficiency. |
Keywords: | Dynamic panel data, Generalized method of moments, Monte Carlo simulation, Optimal and suboptimal weighting matrices. |
JEL: | C1 C15 C4 C5 C58 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:68676&r=ecm |
By: | Jaydip Sen; Tamal Datta Chaudhuri |
Abstract: | With the rapid development and evolution of sophisticated algorithms for the statistical analysis of time series data, the research community has started spending considerable effort on the technical analysis of such data. Forecasting is also an area that has witnessed a paradigm shift in its approach. In this work, we use the time series of the index values of the Auto sector in India from January 2010 to December 2015 for a deeper understanding of the behavior of its three constituent components, namely the trend, the seasonal component, and the random component. Based on this structural analysis, we also design three approaches for forecasting and compute their prediction accuracy using suitably chosen training and test data sets. The results clearly demonstrate the accuracy of our decomposition and the efficiency of our forecasting techniques, even in the presence of a dominant random component in the time series. |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1601.02407&r=ecm |
By: | Li, Dong; Ling, Shiqing; Zhu, Ke |
Abstract: | This paper proposes a first-order zero-drift GARCH (ZD-GARCH(1,1)) model to study conditional heteroscedasticity and heteroscedasticity together. Unlike the classical GARCH model, the ZD-GARCH(1,1) model is always non-stationary regardless of the sign of the Lyapunov exponent $\gamma_{0}$, but interestingly, when $\gamma_{0} = 0$ it is stable, with its sample path oscillating randomly between zero and infinity over time. Furthermore, this paper studies the generalized quasi-maximum likelihood estimator (GQMLE) of the ZD-GARCH(1,1) model and establishes its strong consistency and asymptotic normality. Based on the GQMLE, an estimator for $\gamma_{0}$, a test for stability, and a portmanteau test for model checking are all constructed. Simulation studies are carried out to assess the finite-sample performance of the proposed estimators and tests. Applications demonstrate that a stable ZD-GARCH(1,1) model is more appropriate for capturing heteroscedasticity than a non-stationary GARCH(1,1) model, which suffers from an inconsistent QMLE of the drift term. |
Keywords: | Conditional heteroscedasticity; GARCH model; Generalized quasi-maximum likelihood estimator; Heteroscedasticity; Portmanteau test; Stability test; Top Lyapunov exponent; Zero-drift GARCH model. |
JEL: | C0 C01 C5 C51 |
Date: | 2016–01–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:68621&r=ecm |
By: | Christoph Breunig; Stefan Hoderlein |
Abstract: | In this paper, we suggest and analyze a new class of specification tests for random coefficient models. These tests make it possible to assess the validity of central structural features of the model, in particular linearity in coefficients and generalizations of this notion such as a known nonlinear functional relationship. They also make it possible to test for degeneracy of the distribution of a random coefficient, i.e., whether a coefficient is fixed or random, including whether an associated variable can be omitted altogether. Our tests are nonparametric in nature, and use sieve estimators of the characteristic function. We analyze their power against both global and local alternatives in large samples and through a Monte Carlo simulation study. Finally, we apply our framework to analyze the specification in a heterogeneous random coefficients consumer demand model. |
Keywords: | Nonparametric specification testing, random coefficients, unobserved heterogeneity, sieve method, characteristic function, consumer demand |
JEL: | C12 C14 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-053&r=ecm |
By: | Youssef, Ahmed H.; El-Sheikh, Ahmed A.; Abonazel, Mohamed R. |
Abstract: | In dynamic panel models, the generalized method of moments (GMM) has been used in many applications since it gives efficient estimators. This efficiency is affected by the choice of the initial weighting matrix. It is common practice to use the inverse of the moment matrix of the instruments as the initial weighting matrix. However, an optimal initial weighting matrix is not known, especially in the system GMM estimation procedure. Therefore, we present the optimal weighting matrix for the level GMM estimator and suboptimal weighting matrices for the system GMM estimator, and use these matrices to increase the efficiency of the GMM estimator. By using the Kantorovich inequality (KI), we find that the potential efficiency gain becomes large when the variance of the individual effects increases relative to the variance of the errors. |
Keywords: | dynamic panel data, generalized method of moments, KI upper bound, optimal and suboptimal weighting matrices. |
JEL: | C5 C6 C61 |
Date: | 2014–06–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:68675&r=ecm |
By: | John Goddard; Enrico Onali |
Abstract: | The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. In 11 cases, the exchange rate returns are accurately described by compounding a NIID series with a multifractal time-deformation process. There is no evidence of long memory. |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1601.00903&r=ecm |
By: | Lev B Klebanov |
Abstract: | The failure of the main argument for the use of heavy-tailed distributions in finance is demonstrated. More precisely, one cannot observe under a Cauchy or a symmetric stable distribution as many outliers as we have in reality. |
Keywords: | Outliers; financial indexes; heavy tails; Cauchy distribution; stable distributions |
Date: | 2015–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1601.00566&r=ecm |
By: | Hitczenko, Marcin (Federal Reserve Bank of Boston) |
Abstract: | Making meaningful inferences based on survey data depends on the ability to recognize and adjust for discrepancies between the survey respondents and the target population; this partly involves understanding how survey samples differ with respect to heterogeneous clusters of the population. Ex post adjustments for unbiased population parameter estimates are usually based on easily measured variables with known distributions in the target population, like age, gender, or income. This paper focuses on identifying and assessing the effect of an overlooked source of heterogeneity and potential selection bias related to household structure and dynamics. In household economies, tasks are often concentrated among a subset of the household members, so individual differences in behavior are related to performing different roles within the household. When the sampling involves selecting individuals from within households, a tendency to choose certain types of members for participation in a survey can result in unrepresentative samples and biased estimates for any variable relating to the respondent's household duties. The Boston Fed's Survey of Consumer Payment Choice (SCPC) seeks to estimate parameters, such as the average number of monthly payments, for the entire U.S. population. This data report exploits the fact that in the 2012 SCPC some respondents come from the same household, a unique feature that enables a study of the presence and ramifications of this type of selection bias when asking about household financial decisionmaking and payment choice. Using a two-stage statistical analysis, the survey answers are adjusted for a response error to estimate a latent variable that represents each respondent's share of financial responsibility for the household. |
Keywords: | survey error; household economics; Dirichlet regression; Survey of Consumer Payment Choice |
JEL: | C11 C42 D12 D13 |
Date: | 2015–11–23 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedbdr:15-7&r=ecm |
By: | Jeffrey S. Racine |
Abstract: | Li & Racine (2004) have proposed a nonparametric kernel-based method for smoothing in the presence of categorical predictors as an alternative to the classical nonparametric approach that splits the data into subsets ('cells') defined by the unique combinations of the categorical predictors. Li, Simar & Zelenyuk (2014) present an alternative to Li & Racine's (2004) method that they claim possesses lower mean square error and generalizes and improves upon the existing approaches. However, these claims do not appear to withstand scrutiny. A number of points need to be brought to the attention of practitioners, and two in particular stand out: (a) Li et al.'s (2014) own simulation results reveal that their estimator performs worse than the existing classical 'split' estimator and appears to be inadmissible, and (b) the claim that Li et al.'s (2014) estimator dominates that of Li & Racine (2004) on mean square error grounds does not appear to be the case. The classical split estimator and that of Li & Racine (2004) are both consistent, and it will be seen that Li & Racine's (2004) estimator remains the best all-around performer. And, as a practical matter, Li et al.'s (2014) estimator is not a feasible alternative in typical settings involving multinomial and multiple categorical predictors. |
Keywords: | Kernel regression, cross-validation, finite-sample performance, replication. |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:mcm:deptwp:2016-01&r=ecm |
By: | Bornn, Luke; Shephard, Neil; Solgi, Reza |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:qsh:wpaper:360971&r=ecm |
By: | Casey, Gregory; Klemp, Marc |
Abstract: | In the field of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a simple, but general, framework. We are interested in estimating the long-run causal effect of changes in historical conditions. For this purpose, we develop an augmented IV estimator that accounts for the degree of persistence in the endogenous regressor. We apply our results to estimate the long-run effect of institutions on economic performance. Using panel data, we find that institutional characteristics are imperfectly persistent, implying that conventional IV regressions overestimate the long-run causal effect of institutions. When applying our augmented estimator, we find that increasing constraints on executive power from the lowest to the highest level on the standard index increases national income per capita three centuries later by 1.2 standard deviations. |
Keywords: | Long-Run Economic Growth, Instrumental Variable Regression |
JEL: | C10 C3 C30 O10 O40 |
Date: | 2016–01–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:68696&r=ecm |
By: | Wieladek, Tomasz (Bank of England) |
Abstract: | Interacted panel VAR (IPVAR) models allow coefficients to vary as a deterministic function of observable country characteristics. The varying-coefficient Bayesian panel VAR generalises this to the stochastic case. As an application of this framework, I examine whether the impact of commodity price shocks on consumption and the CPI varies with the degree of exchange rate, financial, product and labour market liberalisation, using data from 1976 Q1–2006 Q4 for 18 OECD countries. The confidence bands are smaller in the deterministic case, and as a result most of the characteristics affect the transmission mechanism in a statistically significant way. But in the stochastic case, only financial liberalisation is an important determinant of the transmission of commodity price shocks. This suggests that results from IPVAR models should be interpreted with caution. |
Keywords: | Bayesian panel VAR; commodity price shocks |
JEL: | C33 E30 |
Date: | 2016–01–08 |
URL: | http://d.repec.org/n?u=RePEc:boe:boeewp:0578&r=ecm |