on Econometrics
By: | Kazuhiko Hayakawa; Vanessa Smith; Hashem Pesaran |
Abstract: | This paper proposes the transformed maximum likelihood estimator for short dynamic panel data models with interactive fixed effects, and provides an extension of Hsiao et al. (2002) that allows for a multifactor error structure. This is an important extension since it retains the advantages of the transformed likelihood approach, whilst at the same time allowing for observed factors (fixed or random). Small sample results obtained from Monte Carlo simulations show that the transformed ML estimator performs well in finite samples and outperforms the GMM estimators proposed in the literature in almost all cases considered. |
Keywords: | short T dynamic panels, transformed maximum likelihood, multi-factor error structure, interactive fixed effects |
JEL: | C12 C13 C23 |
Date: | 2014–06–05 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1412&r=ecm |
By: | Natalia Bailey; Vanessa Smith; Hashem Pesaran |
Abstract: | This paper proposes a novel regularisation method for the estimation of large covariance matrices, which makes use of insights from the multiple testing literature. The method tests the statistical significance of individual pair-wise correlations and sets to zero those elements that are not statistically significant, taking account of the multiple testing nature of the problem. The procedure is straightforward to implement, and does not require cross-validation. By using the inverse of the normal distribution at a predetermined significance level, it circumvents the challenge of evaluating the theoretical constant arising in the rate of convergence of existing thresholding estimators. We compare the performance of our multiple testing (MT) estimator to a number of thresholding and shrinkage estimators in the literature in a detailed Monte Carlo simulation study. Results show that our MT estimator performs well in a number of different settings and tends to outperform other estimators, particularly when the cross-sectional dimension, N, is larger than the time series dimension, T. If the inverse covariance matrix is of interest, then we recommend a shrinkage version of the MT estimator that ensures positive definiteness. |
Keywords: | Sparse correlation matrices, High-dimensional data, Multiple testing, Thresholding, Shrinkage |
JEL: | C13 C58 |
Date: | 2014–06–05 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1413&r=ecm |
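A minimal Python sketch of the thresholding rule described in the entry above. The Bonferroni-style scaling over the N(N-1)/2 pairwise tests and the normal approximation for the sample correlation are illustrative assumptions, not necessarily the authors' exact choices.

```python
import numpy as np
from scipy.stats import norm

def mt_correlation_estimator(X, p=0.05):
    """Multiple-testing (MT) thresholding of a sample correlation matrix.

    Each pairwise correlation is kept only if it is significant once the
    N(N-1)/2 tests performed are accounted for; under the null of a zero
    correlation, sqrt(T) * r is approximately standard normal.
    X is a T x N data matrix.
    """
    T, N = X.shape
    R = np.corrcoef(X, rowvar=False)             # N x N sample correlations
    n_tests = N * (N - 1) / 2                    # distinct pairs tested
    crit = norm.ppf(1 - p / (2 * n_tests)) / np.sqrt(T)  # threshold for |r|
    R_mt = np.where(np.abs(R) >= crit, R, 0.0)   # zero out insignificant pairs
    np.fill_diagonal(R_mt, 1.0)                  # keep the unit diagonal
    return R_mt
```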
By: | Cristina García de la Fuente; Pedro Galeano; Michael P. Wiper |
Abstract: | Financial returns often present a complex relation with previous observations, along with a slight skewness and high kurtosis. As a consequence, we must pursue the use of flexible models that are able to accommodate these special features: a financial process that can expose the intertemporal relation between observations, together with a distribution that can capture asymmetry and heavy tails simultaneously. A multivariate extension of the GARCH such as the Dynamic Conditional Correlation model with Skew-Slash innovations for financial time series in a Bayesian framework is proposed in the present document, and it is illustrated using an MCMC within Gibbs algorithm performed on simulated data, as well as real data drawn from the daily closing prices of the DAX, CAC40, and Nikkei indices. |
Keywords: | Bayesian inference, Dynamic Conditional Correlation, Financial time series, Infinite mixture, Kurtosis, MCMC, Skew-Slash |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141711&r=ecm |
By: | Millo, Giovanni |
Abstract: | The different robust estimators for the standard errors of panel models used in applied econometric practice can all be written and computed as combinations of the same simple building blocks. A framework based on high-level wrapper functions for most common usage and basic computational elements to be combined at will, coupling user-friendliness with flexibility, is integrated in the 'plm' package for panel data econometrics in R. Statistical motivation and computational approach are reviewed, and applied examples are provided. |
Keywords: | Panel data; covariance matrix estimators; R |
JEL: | C12 C23 C87 |
Date: | 2014–07–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:54954&r=ecm |
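The entry above concerns the R package plm; the sketch below illustrates, in Python and without reference to plm's own API, the basic building block it describes: a sandwich covariance estimator whose "meat" combines per-group score vectors (Arellano-type clustering by individual).

```python
import numpy as np

def cluster_robust_vcov(X, resid, groups):
    """Sandwich estimator (X'X)^{-1} M (X'X)^{-1} for pooled OLS, where
    the meat M sums outer products of group-level scores X_g' u_g.
    X: n x k regressor matrix; resid: OLS residuals; groups: n labels."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg, ug = X[groups == g], resid[groups == g]
        score = Xg.T @ ug                    # score vector for cluster g
        meat += np.outer(score, score)
    return bread @ meat @ bread              # cluster-robust covariance
```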
By: | Joshua C.C. Chan; Angelia L. Grant |
Abstract: | The deviance information criterion (DIC) has been widely used for Bayesian model comparison. In particular, a popular metric for comparing stochastic volatility models is the DIC based on the conditional likelihood, obtained by conditioning on the latent variables. However, some recent studies have argued against the use of the conditional DIC on both theoretical and practical grounds. We show via a Monte Carlo study that the conditional DIC tends to favor overfitted models, whereas the DIC calculated using the observed-data likelihood, obtained by integrating out the latent variables, seems to perform well. The main challenge in obtaining the latter DIC for stochastic volatility models is that the observed-data likelihoods are not available in closed form. To overcome this difficulty, we propose fast algorithms for estimating the observed-data likelihoods for a variety of stochastic volatility models using importance sampling. We demonstrate the methodology with an application involving daily returns on the Standard & Poor's (S&P) 500 index. |
Keywords: | Bayesian model comparison, nonlinear state space, DIC, jumps, moving average, S&P 500 |
JEL: | C11 C15 C52 C58 |
Date: | 2014–07 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2014-51&r=ecm |
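A generic sketch of the integration step the entry above relies on: estimating an observed-data log-likelihood by importance sampling over the latent volatilities. The proposal construction, which is where the paper's actual contribution lies, is abstracted away here.

```python
import numpy as np

def log_lik_importance_sampling(log_joint, log_proposal, latent_draws):
    """Estimate log p(y) = log E_q[ p(y, h) / q(h) ] from draws h ~ q,
    where h stacks the latent log-volatilities. Computed in logs with
    the log-sum-exp trick for numerical stability."""
    log_w = np.array([log_joint(h) - log_proposal(h) for h in latent_draws])
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean())   # log of the mean weight
```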
By: | Maria Angeles Carnero Fernandez; Ana Pérez; Esther Ruiz Ortega |
Abstract: | The identification of asymmetric conditional heteroscedasticity is often based on sample cross-correlations between past and squared observations. In this paper we analyse the effects of outliers on these cross-correlations and, consequently, on the identification of asymmetric volatilities. We show that, as expected, one isolated big outlier biases the sample cross-correlations towards zero and hence could hide a true leverage effect. By contrast, the presence of two or more big consecutive outliers could lead to detecting spurious asymmetries or asymmetries of the wrong sign. We also address the problem of robust estimation of the cross-correlations by extending some popular robust estimators of pairwise correlations and autocorrelations. Their finite sample resistance against outliers is compared through Monte Carlo experiments. Situations with isolated and patchy outliers of different sizes are examined. It is shown that a modified Ramsay-weighted estimator of the cross-correlations outperforms other estimators in identifying asymmetric conditionally heteroscedastic models. Finally, the results are illustrated with an empirical application. |
Keywords: | Cross-correlations, Leverage effect, Robust correlations, EGARCH |
Date: | 2014–07 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141912&r=ecm |
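The statistic at the centre of the entry above is easy to state in code: the sample cross-correlation between past returns and current squared returns, whose sign pattern at low lags is used to identify leverage effects. A minimal Python version:

```python
import numpy as np

def leverage_cross_correlations(y, max_lag=10):
    """Sample cross-correlations corr(y_{t-k}, y_t^2) for k = 1..max_lag.
    Markedly negative values at low lags are the usual sign of a
    leverage effect; the entry above shows how outliers distort them."""
    y = np.asarray(y, dtype=float)
    y2 = y ** 2
    return np.array([np.corrcoef(y[:-k], y2[k:])[0, 1]
                     for k in range(1, max_lag + 1)])
```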
By: | Christian M. Hafner; Michael McAleer (University of Canterbury) |
Abstract: | One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process from which DCC can be derived has not yet been established, which has made the derivation of the asymptotic properties of the Quasi-Maximum Likelihood Estimators (QMLE) problematic. To date, the statistical properties of the QMLE of the DCC parameters have been derived under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions. The derivation of DCC from a vector random coefficient moving average process raises three important issues: (i) it demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) it provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) it shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters. |
Keywords: | Dynamic conditional correlation, dynamic conditional covariance, vector random coefficient moving average, stationarity, invertibility, asymptotic properties |
JEL: | C22 C52 C58 G32 |
Date: | 2014–07–09 |
URL: | http://d.repec.org/n?u=RePEc:cbt:econwp:14/19&r=ecm |
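For reference, the standard scalar DCC recursion that the entry above dissects, sketched in Python. The final rescaling line is precisely point (ii) of the abstract: Q_t is a conditional covariance of the standardized shocks, and only after standardization is R_t a correlation matrix. Parameter values are illustrative.

```python
import numpy as np

def dcc_conditional_correlations(eps, a=0.05, b=0.90):
    """Scalar DCC on standardized shocks eps (T x N):
    Q_t = (1 - a - b) S + a e_{t-1} e_{t-1}' + b Q_{t-1},
    R_t = D_t^{-1/2} Q_t D_t^{-1/2}."""
    T, N = eps.shape
    S = np.corrcoef(eps, rowvar=False)       # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        if t > 0:
            e = eps[t - 1][:, None]
            Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)            # standardize covariance to correlation
    return R
```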
By: | Eleanor Sanderson; Frank Windmeijer |
Abstract: | We consider testing for weak instruments in a model with multiple endogenous variables. Unlike Stock and Yogo (2005), who considered a weak instruments problem where the rank of the matrix of reduced form parameters is near zero, here we consider a weak instruments problem of a near rank reduction of one in the matrix of reduced form parameters. For example, in a two-variable model, we consider weak instrument asymptotics of the form π1 = δπ2 + c/sqrt(n), where π1 and π2 are the parameters in the two reduced-form equations, c is a vector of constants and n is the sample size. We investigate the use of a conditional first-stage F-statistic along the lines of the proposal by Angrist and Pischke (2009) and show that, unless δ=0, the variance in the denominator of their F-statistic needs to be adjusted in order to get a correct asymptotic distribution when testing the hypothesis H0: π1=δπ2. We show that a corrected conditional F-statistic is equivalent to the Cragg and Donald (1993) minimum eigenvalue rank test statistic, and is informative about the maximum total relative bias of the 2SLS estimator and the size distortions of the Wald test. When δ=0 in the two-variable model, or when there are more than two endogenous variables, further information over and above the Cragg-Donald statistic can be obtained about the nature of the weak instrument problem by computing the conditional first-stage F-statistics. |
Keywords: | weak instruments, multiple endogenous variables, F-test. |
JEL: | C12 C36 |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:14/644&r=ecm |
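A compact sketch of the Cragg-Donald minimum-eigenvalue statistic that the corrected conditional F-statistic above is shown to be equivalent to, in its F-normalized form. It assumes any exogenous regressors have already been partialled out of both the endogenous regressors X and the instruments Z.

```python
import numpy as np

def cragg_donald_stat(X, Z):
    """Smallest eigenvalue of (X'M_Z X / (n - k))^{-1} (X'P_Z X) / k,
    with n observations and k instruments; X holds the endogenous
    regressors, Z the instruments (exogenous variables partialled out)."""
    n, k = Z.shape
    PZX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # first-stage fitted values
    MZX = X - PZX                                 # first-stage residuals
    A = X.T @ PZX                                 # explained cross-product
    B = X.T @ MZX / (n - k)                       # residual covariance
    eigs = np.linalg.eigvals(np.linalg.solve(B, A) / k)
    return float(np.min(eigs.real))
```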
By: | Michael Greenacre; H. Öztaş Ayhan |
Abstract: | The problem of outliers is well-known in statistics: an outlier is a value that is far from the general distribution of the other observed values, and can often perturb the results of a statistical analysis. Various procedures exist for identifying outliers in case they need special treatment, which in some cases can mean exclusion from consideration. An inlier, by contrast, is an observation lying within the general distribution of the other observed values; it generally does not perturb the results but is nevertheless non-conforming and unusual. For single variables, an inlier is practically impossible to identify, but in the multivariate case, thanks to interrelationships between variables, values can be identified that are observed to be more central in a distribution but would be expected, based on the other information in the data matrix, to be more outlying. We propose an approach to identify inliers in a data matrix, based on the singular value decomposition. An application is presented using a table of economic indicators for the 27 member countries of the European Union in 2011, where inlying values are identified for some countries such as Estonia and Luxembourg. |
Keywords: | imputation, inlier, outlier, singular value decomposition |
JEL: | C19 C88 |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:bge:wpaper:763&r=ecm |
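A sketch of the general idea behind the SVD-based procedure described above: fit a low-rank approximation to the standardized data matrix and inspect the entrywise residuals. An inlier candidate is a value that is unremarkable in its own column yet sits far from what the multivariate structure predicts. This is a simplified illustration, not the paper's exact method.

```python
import numpy as np

def svd_reconstruction_residuals(X, rank=2):
    """Entrywise residuals from a rank-r SVD approximation of a
    column-standardized data matrix; large residuals on marginally
    central values flag potential inliers."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    Xhat = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r reconstruction
    return Xs - Xhat
```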
By: | Carlo Sguera; Pedro Galeano; Rosa E. Lillo |
Abstract: | This paper proposes methods to detect outliers in functional datasets. We are interested in challenging scenarios where functional samples are contaminated by outliers that may be difficult to recognize. The task of identifying atypical curves is carried out using the recently proposed kernelized functional spatial depth (KFSD). KFSD is a local depth that can be used to order the curves of a sample from the most to the least central. Since outliers are usually among the least central curves, we introduce three new procedures that provide a threshold value for KFSD such that curves with depth values lower than the threshold are detected as outliers. The results of a simulation study show that our proposals generally outperform a battery of competitors. Finally, we consider a real application with environmental data consisting of levels of nitrogen oxides. |
Keywords: | Functional depths, Functional outlier detection, Kernelized functional spatial depth, Nitrogen oxides, Smoothed resampling |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:we141410&r=ecm |
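For intuition, the plain (non-kernelized) functional spatial depth that KFSD localizes can be computed directly for curves sampled on a common grid, assuming no duplicated curves. The paper's detection procedures then threshold such depth values from below.

```python
import numpy as np

def functional_spatial_depth(curves):
    """FSD(x) = 1 - || mean_i (x - x_i) / ||x - x_i|| || for each curve x
    in the sample; depths near 1 are central, near 0 are peripheral.
    curves: n x m array of n curves observed on m grid points."""
    curves = np.asarray(curves, dtype=float)
    n = len(curves)
    depths = np.empty(n)
    for j in range(n):
        diffs = curves[j] - np.delete(curves, j, axis=0)  # exclude curve j itself
        norms = np.linalg.norm(diffs, axis=1, keepdims=True)
        depths[j] = 1.0 - np.linalg.norm((diffs / norms).mean(axis=0))
    return depths
```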
By: | Hashem Pesaran; Ron Smith |
Abstract: | This paper proposes tests of policy ineffectiveness in the context of macroeconometric rational expectations models. It is assumed that there is a policy intervention that takes the form of changes in the parameters of a policy rule, and that there are sufficient observations before and after the intervention. The test is based on the difference between the realisations of the outcome variable of interest and counterfactuals based on no policy intervention, using only the pre-intervention parameter estimates, and in consequence the Lucas Critique does not apply. The paper develops tests of policy ineffectiveness for a full structural model, with and without exogenous, policy or non-policy, variables. Asymptotic distributions of the proposed tests are derived both when the post-intervention sample is fixed as the pre-intervention sample expands, and when both samples rise jointly but at different rates. The performance of the test is illustrated by a simulated policy analysis of a three-equation New Keynesian model, which shows that the test size is correct but the power may be low unless the model includes exogenous variables or the policy intervention changes the steady states, such as the inflation target. |
Keywords: | Counterfactuals, policy analysis, policy ineffectiveness test, macroeconomics |
JEL: | C18 C54 E65 |
Date: | 2014–06–19 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1415&r=ecm |
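A deliberately stylized rendering of the testing idea in the entry above: estimate the outcome equation on pre-intervention data only, form post-intervention counterfactuals from those estimates, and standardize the average realized-minus-counterfactual gap. The paper derives the proper statistics and asymptotics for full structural models; everything below is a simplification for illustration.

```python
import numpy as np

def policy_ineffectiveness_stat(y_pre, X_pre, y_post, X_post):
    """Standardized mean gap between realized post-intervention outcomes
    and counterfactuals built from pre-intervention OLS estimates."""
    b = np.linalg.lstsq(X_pre, y_pre, rcond=None)[0]   # pre-sample estimates
    sigma = (y_pre - X_pre @ b).std(ddof=X_pre.shape[1])
    gap = y_post - X_post @ b                          # realized - counterfactual
    return gap.mean() * np.sqrt(len(gap)) / sigma
```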
By: | Davide Delle Monache (Queen Mary, University of London); Ivan Petrella (Department of Economics, Mathematics & Statistics, Birkbeck) |
Abstract: | This paper proposes a novel and flexible framework to estimate autoregressive models with time-varying parameters. Our setup nests various adaptive algorithms that are commonly used in the macroeconometric literature, such as learning-expectations and forgetting-factor algorithms. These are generalized along several directions: specifically, we allow both for Student-t distributed innovations and for time-varying volatility. Meaningful restrictions are imposed on the model parameters so as to attain local stationarity and bounded mean values. The model is applied to the analysis of inflation dynamics. Allowing for heavy tails leads to a significant improvement in terms of fit and forecast. Moreover, it proves to be crucial for obtaining well-calibrated density forecasts. |
Keywords: | Time-Varying Parameters, Score-driven Models, Heavy-Tails, Adaptive Algorithms, Inflation. |
JEL: | C22 C51 C53 E31 |
Date: | 2014–07 |
URL: | http://d.repec.org/n?u=RePEc:bbk:bbkefp:1409&r=ecm |
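One of the adaptive algorithms the entry above says its score-driven framework nests is recursive least squares with a forgetting factor, sketched below; lam < 1 geometrically discounts old observations and delta sets a diffuse initialization. Parameter values are illustrative.

```python
import numpy as np

def rls_forgetting_factor(y, X, lam=0.99, delta=100.0):
    """Recursive least squares with forgetting factor lam, returning the
    path of time-varying coefficient estimates."""
    T, k = X.shape
    beta = np.zeros(k)
    P = delta * np.eye(k)
    path = np.empty((T, k))
    for t in range(T):
        x = X[t]
        err = y[t] - x @ beta                  # one-step-ahead prediction error
        gain = P @ x / (lam + x @ P @ x)       # Kalman-style gain
        beta = beta + gain * err               # coefficient update
        P = (P - np.outer(gain, x @ P)) / lam  # discounted covariance update
        path[t] = beta
    return path
```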
By: | M. Hashem Pesaran (University of Southern California; Trinity College Cambridge); Ron P Smith (Department of Economics, Mathematics & Statistics, Birkbeck) |
Abstract: | The policy innovations that followed the recent Great Recession, such as unconventional monetary policies, prompted renewed interest in the question of how to measure the effectiveness of such policy interventions. To test policy effectiveness requires a model to construct a counterfactual for the outcome variable in the absence of the policy intervention, and a way to determine whether the differences between the realised outcomes and the model-based counterfactual outcomes are larger than what could have occurred by chance in the absence of policy intervention. Pesaran & Smith propose tests of policy ineffectiveness in the context of macroeconometric rational expectations dynamic stochastic general equilibrium models. When we are certain of the specification, estimation of the complete system imposing all the cross-equation restrictions implied by the full structural model is more efficient. But if the full model is misspecified, one may obtain more reliable estimates of the counterfactual outcomes from a parsimonious reduced-form policy response equation, which conditions on lagged values, and on the policy measures and variables known to be invariant to the policy intervention. We propose policy ineffectiveness tests based on such reduced forms and illustrate the tests with an application to the unconventional monetary policy known as quantitative easing (QE) adopted in the UK. |
Keywords: | Counterfactuals, policy analysis, policy ineffectiveness test, macroeconomics, quantitative easing (QE). |
JEL: | C18 C54 E65 |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:bbk:bbkefp:1406&r=ecm |
By: | Yang, Bill Huajian |
Abstract: | In this paper, we propose Vasicek-type models for estimating portfolio-level probability of default (PD). With these Vasicek models, asset correlation and long-run PD for a risk-homogeneous portfolio both have analytical solutions, longer external time series for market and macroeconomic variables can be included, and the traditional asymptotic maximum likelihood approach can be shown to be equivalent to least squares regression, which greatly simplifies parameter estimation. The analytical formula for long-run PD, for example, explicitly quantifies the contribution of uncertainty to an increase in long-run PD. We recommend the bootstrap approach to addressing the serial correlation issue for a time series sample. To validate the proposed models, we estimate the asset correlations for 13 industry sectors using corporate annual default rates from S&P for the years 1981-2011, and the long-run PD and asset correlation for a US commercial portfolio, using the US delinquency rate for commercial and industrial loans from the US Federal Reserve. |
Keywords: | portfolio-level PD, long-run PD, asset correlation, time series, serial correlation, bootstrapping, binomial distribution, maximum likelihood, least squares regression, Vasicek model |
JEL: | C02 C13 C5 G32 |
Date: | 2013–07–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:57244&r=ecm |
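The one-factor Vasicek relation underlying the entry above links the long-run PD p, the asset correlation rho and the systematic factor z; averaging the conditional PD over z ~ N(0,1) recovers p. This is the standard textbook formula, not the paper's specific model.

```python
import numpy as np
from scipy.stats import norm

def vasicek_conditional_pd(pd_long_run, rho, z):
    """PD(z) = Phi((Phi^{-1}(p) - sqrt(rho) * z) / sqrt(1 - rho));
    E_z[PD(z)] = p for z ~ N(0, 1)."""
    c = norm.ppf(pd_long_run)                 # default threshold
    return norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
```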
By: | John Goddard (Bangor University, UK); Phil Molyneux (Bangor University, UK); Jonathan Williams (Bangor University, UK) |
Abstract: | This paper contributes to the bank efficiency literature through an application of recently developed random parameters models for stochastic frontier analysis. We estimate standard fixed and random effects models, and alternative specifications of random parameters models that accommodate cross-sectional parameter heterogeneity. A Monte Carlo simulation exercise is used to investigate the implications for the accuracy of the estimated inefficiency scores of estimation using an under-parameterized, over-parameterized or correctly specified cost function. On average, the estimated mean efficiencies obtained from random parameters models tend to be higher than those obtained using fixed or random effects, because random parameters models do not confound parameter heterogeneity with inefficiency. Using a random parameters model, we analyse the evolution of the average rank cost efficiency for Latin American banks between 1985 and 2010. Cost efficiency deteriorated during the 1990s, particularly for state-owned banks, before improving during the 2000s, prior to the subprime crisis. The effects of the latter varied between countries and bank ownership types. |
Keywords: | Efficiency; stochastic frontier; random parameters models; bank ownership; Latin America |
JEL: | C23 D24 G21 |
Date: | 2013–09 |
URL: | http://d.repec.org/n?u=RePEc:bng:wpaper:13011&r=ecm |
By: | Domenico Giannone (Libera Università Internazionale degli Studi Sociali Guido Carli (LUISS); Centre for Economic Policy Research (CEPR)); Francesca Monti (Bank of England; Centre for Macroeconomics (CFM)); Lucrezia Reichlin (London Business School (LBS); Centre for Economic Policy Research (CEPR)) |
Abstract: | This paper shows how and when it is possible to obtain a mapping from a quarterly DSGE model to a monthly specification that maintains the same economic restrictions and has real coefficients. We use this technique to derive the monthly counterpart of the Gali et al. (2011) model. We then augment it with auxiliary macro indicators which, because of their timeliness, can be used to obtain a now-cast of the structural model. We show empirical results for the quarterly growth rate of GDP, the monthly unemployment rate and the welfare-relevant output gap defined in Gali, Smets and Wouters (2011). Results show that the augmented monthly model does best for now-casting. |
Keywords: | DSGE models, forecasting, temporal aggregation, mixed frequency data, large datasets |
JEL: | C33 C53 E30 |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:cfm:wpaper:1416&r=ecm |
By: | Rodolfo G. Campos (IESE Business school); Iliana Reggio (Universidad Carlos III) |
Abstract: | We study how estimators used to impute consumption in survey data are inconsistent due to measurement error in consumption. Previous research suggests instrumenting consumption to overcome this problem. We show that, if additional regressors are present, then instrumenting consumption may still produce inconsistent estimators due to the likely correlation between additional regressors and measurement error. On the other hand, low correlations between additional regressors and instruments may reduce the bias due to measurement error. We apply our findings by revisiting recent research that imputes consumption data from the CEX to the PSID. |
Keywords: | consumption, measurement error, instrumental variables, consumer expenditure survey, panel study of income dynamics, income shocks |
JEL: | C13 C26 E21 |
Date: | 2013–12 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:1322&r=ecm |
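A small simulation illustrating the entry's point: with a hypothetical data-generating process in which the measurement error in consumption is correlated with an additional regressor, just-identified IV recovers the consumption coefficient but not the coefficient on the additional regressor. All names and parameter values are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.standard_normal(n)              # instrument for consumption
w = rng.standard_normal(n)              # additional regressor
c = z + rng.standard_normal(n)          # true consumption
m = 0.8 * w + rng.standard_normal(n)    # measurement error, correlated with w
y = 1.0 * c + 0.5 * w + rng.standard_normal(n)

X = np.column_stack([c + m, w])         # mismeasured consumption plus w
Z = np.column_stack([z, w])             # w instruments itself
beta = np.linalg.solve(Z.T @ X, Z.T @ y)
print(beta)  # approx [1.0, -0.3]: coefficient on w is pulled away from 0.5
```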