Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Issue of 2014-07-21, edited by Sune Karlsson

Transformed Maximum Likelihood Estimation of Short Dynamic Panel Data Models with Interactive Effects
http://d.repec.org/n?u=RePEc:cam:camdae:1412&r=ecm
This paper proposes the transformed maximum likelihood estimator for short dynamic panel data models with interactive fixed effects, and provides an extension of Hsiao et al. (2002) that allows for a multifactor error structure. This is an important extension since it retains the advantages of the transformed likelihood approach, whilst at the same time allowing for observed factors (fixed or random). Small sample results obtained from Monte Carlo simulations show that the transformed ML estimator performs well in finite samples and outperforms the GMM estimators proposed in the literature in almost all cases considered.
Kazuhiko Hayakawa, Vanessa Smith, Hashem Pesaran (2014-06-05)
Keywords: short T dynamic panels, transformed maximum likelihood, multi-factor error structure, interactive fixed effects

A multiple testing approach to the regularisation of large sample correlation matrices
http://d.repec.org/n?u=RePEc:cam:camdae:1413&r=ecm
This paper proposes a novel regularisation method for the estimation of large covariance matrices, which makes use of insights from the multiple testing literature. The method tests the statistical significance of individual pair-wise correlations and sets to zero those elements that are not statistically significant, taking account of the multiple testing nature of the problem. The procedure is straightforward to implement, and does not require cross validation. By using the inverse of the normal distribution at a predetermined significance level, it circumvents the challenge of evaluating the theoretical constant arising in the rate of convergence of existing thresholding estimators. We compare the performance of our multiple testing (MT) estimator to a number of thresholding and shrinkage estimators in the literature in a detailed Monte Carlo simulation study. Results show that our MT estimator performs well in a number of different settings and tends to outperform other estimators, particularly when the cross-sectional dimension, N, is larger than the time series dimension, T. If the inverse covariance matrix is of interest, then we recommend a shrinkage version of the MT estimator that ensures positive definiteness.
Natalia Bailey, Vanessa Smith, Hashem Pesaran (2014-06-05)
Keywords: Sparse correlation matrices, High-dimensional data, Multiple testing, Thresholding, Shrinkage

Bayesian estimation of a Dynamic Conditional Correlation model with multivariate Skew-Slash innovations
http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141711&r=ecm
Financial returns often present a complex relation with previous observations, along with a slight skewness and high kurtosis. As a consequence, we must pursue the use of flexible models that are able to seize these special features: a financial process that can expose the intertemporal relation between observations, together with a distribution that can capture asymmetry and heavy tails simultaneously. A multivariate extension of GARCH, the Dynamic Conditional Correlation model with Skew-Slash innovations for financial time series in a Bayesian framework, is proposed in the present document, and it is illustrated using an MCMC within Gibbs algorithm performed on simulated data, as well as real data drawn from the daily closing prices of the DAX, CAC40, and Nikkei indices.
Cristina García de la Fuente, Pedro Galeano, Michael P. Wiper (2014-06)
Keywords: Bayesian inference, Dynamic Conditional Correlation, Financial time series, Infinite mixture, Kurtosis, MCMC, Skew-Slash

Robust standard error estimators for panel models: a unifying approach
http://d.repec.org/n?u=RePEc:pra:mprapa:54954&r=ecm
The different robust estimators for the standard errors of panel models used in applied econometric practice can all be written and computed as combinations of the same simple building blocks. A framework based on high-level wrapper functions for most common usage and basic computational elements to be combined at will, coupling user-friendliness with flexibility, is integrated in the 'plm' package for panel data econometrics in R. Statistical motivation and computational approach are reviewed, and applied examples are provided.
Giovanni Millo (2014-07-07)
Keywords: Panel data; covariance matrix estimators; R

Issues in Comparing Stochastic Volatility Models Using the Deviance Information Criterion
http://d.repec.org/n?u=RePEc:een:camaaa:2014-51&r=ecm
The deviance information criterion (DIC) has been widely used for Bayesian model comparison. In particular, a popular metric for comparing stochastic volatility models is the DIC based on the conditional likelihood—obtained by conditioning on the latent variables. However, some recent studies have argued against the use of the conditional DIC on both theoretical and practical grounds. We show via a Monte Carlo study that the conditional DIC tends to favor overfitted models, whereas the DIC calculated using the observed-data likelihood—obtained by integrating out the latent variables—seems to perform well. The main challenge for obtaining the latter DIC for stochastic volatility models is that the observed-data likelihoods are not available in closed form. To overcome this difficulty, we propose fast algorithms for estimating the observed-data likelihoods for a variety of stochastic volatility models using importance sampling. We demonstrate the methodology with an application involving daily returns on the Standard & Poor's (S&P) 500 index.
Joshua C.C. Chan, Angelia L. Grant (2014-07)
Keywords: Bayesian model comparison, nonlinear state space, DIC, jumps, moving average, S&P 500

Identification of asymmetric conditional heteroscedasticity in the presence of outliers
http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141912&r=ecm
The identification of asymmetric conditional heteroscedasticity is often based on sample cross-correlations between past and squared observations. In this paper we analyse the effects of outliers on these cross-correlations and, consequently, on the identification of asymmetric volatilities. We show that, as expected, one isolated big outlier biases the sample cross-correlations towards zero and hence could hide a true leverage effect. By contrast, the presence of two or more big consecutive outliers could lead to detecting spurious asymmetries or asymmetries of the wrong sign. We also address the problem of robust estimation of the cross-correlations by extending some popular robust estimators of pairwise correlations and autocorrelations. Their finite sample resistance against outliers is compared through Monte Carlo experiments. Situations with isolated and patchy outliers of different sizes are examined. It is shown that a modified Ramsay-weighted estimator of the cross-correlations outperforms other estimators in identifying asymmetric conditionally heteroscedastic models. Finally, the results are illustrated with an empirical application.
Maria Angeles Carnero Fernandez, Ana Pérez, Esther Ruiz Ortega (2014-07)
Keywords: Cross-correlations, Leverage effect, Robust correlations, EGARCH

A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process
http://d.repec.org/n?u=RePEc:cbt:econwp:14/19&r=ecm
One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made the derivation of the asymptotic properties of the Quasi-Maximum Likelihood Estimators (QMLE) problematic. To date, the statistical properties of the QMLE of the DCC parameters have been derived under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions. The derivation of DCC from a vector random coefficient moving average process raises three important issues: (i) it demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) it provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) it shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters.
Christian M. Hafner, Michael McAleer (2014-07-09)
Keywords: Dynamic conditional correlation, dynamic conditional covariance, vector random coefficient moving average, stationarity, invertibility, asymptotic properties

A Weak Instrument F-Test in Linear IV Models with Multiple Endogenous Variables
http://d.repec.org/n?u=RePEc:bri:uobdis:14/644&r=ecm
We consider testing for weak instruments in a model with multiple endogenous variables. Unlike Stock and Yogo (2005), who considered a weak instruments problem where the rank of the matrix of reduced form parameters is near zero, here we consider a weak instruments problem of a near rank reduction of one in the matrix of reduced form parameters. For example, in a two-variable model, we consider weak instrument asymptotics of the form π1 = δπ2 + c/sqrt(n), where π1 and π2 are the parameters in the two reduced-form equations, c is a vector of constants and n is the sample size. We investigate the use of a conditional first-stage F-statistic along the lines of the proposal by Angrist and Pischke (2009) and show that, unless δ=0, the variance in the denominator of their F-statistic needs to be adjusted in order to get a correct asymptotic distribution when testing the hypothesis H0: π1=δπ2. We show that a corrected conditional F-statistic is equivalent to the Cragg and Donald (1993) minimum eigenvalue rank test statistic, and is informative about the maximum total relative bias of the 2SLS estimator and the size distortions of the Wald tests. When δ=0 in the two-variable model, or when there are more than two endogenous variables, further information over and above the Cragg-Donald statistic can be obtained about the nature of the weak instrument problem by computing the conditional first-stage F-statistics.
Eleanor Sanderson, Frank Windmeijer (2014-06)
Keywords: weak instruments, multiple endogenous variables, F-test

Identifying Inliers
http://d.repec.org/n?u=RePEc:bge:wpaper:763&r=ecm
The problem of outliers is well-known in statistics: an outlier is a value that is far from the general distribution of the other observed values, and can often perturb the results of a statistical analysis. Various procedures exist for identifying outliers, in case they need to receive special treatment, which in some cases can be exclusion from consideration. An inlier, by contrast, is an observation lying within the general distribution of other observed values; it generally does not perturb the results but is nevertheless non-conforming and unusual. For single variables, an inlier is practically impossible to identify, but in the multivariate case, thanks to interrelationships between variables, values can be identified that are observed to be more central in a distribution but would be expected, based on the other information in the data matrix, to be more outlying. We propose an approach to identify inliers in a data matrix, based on the singular value decomposition. An application is presented using a table of economic indicators for the 27 member countries of the European Union in 2011, where inlying values are identified for some countries such as Estonia and Luxembourg.
Michael Greenacre, H. Öztaş Ayhan (2014-06)
Keywords: imputation, inlier, outlier, singular value decomposition

Functional outlier detection with a local spatial depth
http://d.repec.org/n?u=RePEc:cte:wsrepe:we141410&r=ecm
This paper proposes methods to detect outliers in functional datasets. We are interested in challenging scenarios where functional samples are contaminated by outliers that may be difficult to recognize. The task of identifying atypical curves is carried out using the recently proposed kernelized functional spatial depth (KFSD). KFSD is a local depth that can be used to order the curves of a sample from the most to the least central. Since outliers are usually among the least central curves, we introduce three new procedures that provide a threshold value for KFSD such that curves with depth values lower than the threshold are detected as outliers. The results of a simulation study show that our proposals generally outperform a battery of competitors. Finally, we consider a real application with environmental data consisting of levels of nitrogen oxides.
Carlo Sguera, Pedro Galeano, Rosa E. Lillo (2014-06)
Keywords: Functional depths, Functional outlier detection, Kernelized functional spatial depth, Nitrogen oxides, Smoothed resampling

Tests of Policy Ineffectiveness in Macroeconometrics
http://d.repec.org/n?u=RePEc:cam:camdae:1415&r=ecm
This paper proposes tests of policy ineffectiveness in the context of macroeconometric rational expectations models. It is assumed that there is a policy intervention that takes the form of changes in the parameters of a policy rule, and that there are sufficient observations before and after the intervention. The test is based on the difference between the realisations of the outcome variable of interest and counterfactuals based on no policy intervention, using only the pre-intervention parameter estimates, and in consequence the Lucas Critique does not apply. The paper develops tests of policy ineffectiveness for a full structural model, with and without exogenous, policy or non-policy, variables. Asymptotic distributions of the proposed tests are derived both when the post-intervention sample is fixed as the pre-intervention sample expands, and when both samples rise jointly but at different rates. The performance of the test is illustrated by a simulated policy analysis of a three equation New Keynesian Model, which shows that the test size is correct but the power may be low unless the model includes exogenous variables, or if the policy intervention changes the steady states, such as the inflation target.
Hashem Pesaran, Ron Smith (2014-06-19)
Keywords: Counterfactuals, policy analysis, policy ineffectiveness test, macroeconomics

Adaptive Models and Heavy Tails
http://d.repec.org/n?u=RePEc:bbk:bbkefp:1409&r=ecm
This paper proposes a novel and flexible framework to estimate autoregressive models with time-varying parameters. Our setup nests various adaptive algorithms that are commonly used in the macroeconometric literature, such as learning-expectations and forgetting-factor algorithms. These are generalized along several directions: specifically, we allow for both Student-t distributed innovations as well as time-varying volatility. Meaningful restrictions are imposed on the model parameters, so as to attain local stationarity and bounded mean values. The model is applied to the analysis of inflation dynamics. Allowing for heavy tails leads to a significant improvement in terms of fit and forecast. Moreover, it proves to be crucial in order to obtain well-calibrated density forecasts.
Davide Delle Monache, Ivan Petrella (2014-07)
Keywords: Time-Varying Parameters, Score-driven Models, Heavy Tails, Adaptive Algorithms, Inflation

Counterfactual Analysis in Macroeconometrics: An Empirical Investigation into the Effects of Quantitative Easing
http://d.repec.org/n?u=RePEc:bbk:bbkefp:1406&r=ecm
The policy innovations that followed the recent Great Recession, such as unconventional monetary policies, prompted renewed interest in the question of how to measure the effectiveness of such policy interventions. To test policy effectiveness requires a model to construct a counterfactual for the outcome variable in the absence of the policy intervention, and a way to determine whether the differences between the realised outcome and the model-based counterfactual outcomes are larger than what could have occurred by chance in the absence of policy intervention. Pesaran and Smith propose tests of policy ineffectiveness in the context of macroeconometric rational expectations dynamic stochastic general equilibrium models. When we are certain of the specification, estimation of the complete system imposing all the cross-equation restrictions implied by the full structural model is more efficient. But if the full model is misspecified, one may obtain more reliable estimates of the counterfactual outcomes from a parsimonious reduced form policy response equation, which conditions on lagged values, and on the policy measures and variables known to be invariant to the policy intervention. We propose policy ineffectiveness tests based on such reduced forms and illustrate the tests with an application to the unconventional monetary policy known as quantitative easing (QE) adopted in the UK.
M. Hashem Pesaran, Ron P. Smith (2014-06)
Keywords: Counterfactuals, policy analysis, policy ineffectiveness test, macroeconomics, quantitative easing (QE)

Estimating Long-Run PD, Asset Correlation, and Portfolio Level PD by Vasicek Models
http://d.repec.org/n?u=RePEc:pra:mprapa:57244&r=ecm
In this paper, we propose Vasicek-type models for estimating the portfolio level probability of default (PD). With these Vasicek models, asset correlation and long-run PD for a risk homogenous portfolio both have analytical solutions, longer external time series for market and macroeconomic variables can be included, and the traditional asymptotic maximum likelihood approach can be shown to be equivalent to least squares regression, which greatly simplifies parameter estimation. The analytical formula for long-run PD, for example, explicitly quantifies the contribution of uncertainty to an increase of long-run PD. We recommend the bootstrap approach to addressing the serial correlation issue for a time series sample. To validate the proposed models, we estimate the asset correlations for 13 industry sectors using corporate annual default rates from S&P for the years 1981-2011, and the long-run PD and asset correlation for a US commercial portfolio, using US delinquency rates for commercial and industrial loans from the US Federal Reserve.
Bill Huajian Yang (2013-07-10)
Keywords: Portfolio level PD, long-run PD, asset correlation, time series, serial correlation, bootstrapping, binomial distribution, maximum likelihood, least squares regression, Vasicek model

Dealing with Cross-Firm Heterogeneity in Bank Efficiency Estimates: Some evidence from Latin America
http://d.repec.org/n?u=RePEc:bng:wpaper:13011&r=ecm
This paper contributes to the bank efficiency literature through an application of recently developed random parameters models for stochastic frontier analysis. We estimate standard fixed and random effects models, and alternative specifications of random parameters models that accommodate cross-sectional parameter heterogeneity. A Monte Carlo simulations exercise is used to investigate the implications for the accuracy of the estimated inefficiency scores of estimation using either an under-parameterized, over-parameterized or correctly specified cost function. On average, the estimated mean efficiencies obtained from random parameters models tend to be higher than those obtained using fixed or random effects, because random parameters models do not confound parameter heterogeneity with inefficiency. Using a random parameters model, we analyse the evolution of the average rank cost efficiency for Latin American banks between 1985 and 2010. Cost efficiency deteriorated during the 1990s, particularly for state-owned banks, before improving during the 2000s but prior to the subprime crisis. The effects of the latter varied between countries and bank ownership types.
John Goddard, Phil Molyneux, Jonathan Williams (2013-09)
Keywords: Efficiency; stochastic frontier; random parameters models; bank ownership; Latin America

Exploiting the monthly data-flow in structural forecasting
http://d.repec.org/n?u=RePEc:cfm:wpaper:1416&r=ecm
This paper shows how and when it is possible to obtain a mapping from a quarterly DSGE model to a monthly specification that maintains the same economic restrictions and has real coefficients. We use this technique to derive the monthly counterpart of the Gali et al. (2011) model. We then augment it with auxiliary macro indicators which, because of their timeliness, can be used to obtain a now-cast of the structural model. We show empirical results for the quarterly growth rate of GDP, the monthly unemployment rate and the welfare relevant output gap defined in Gali, Smets and Wouters (2011). Results show that the augmented monthly model does best for now-casting.
Domenico Giannone, Francesca Monti, Lucrezia Reichlin (2014-06)
Keywords: DSGE models, forecasting, temporal aggregation, mixed frequency data, large datasets

Measurement error in imputation procedures
http://d.repec.org/n?u=RePEc:bde:wpaper:1322&r=ecm
We study how estimators used to impute consumption in survey data are inconsistent due to measurement error in consumption. Previous research suggests instrumenting consumption to overcome this problem. We show that, if additional regressors are present, then instrumenting consumption may still produce inconsistent estimators due to the likely correlation between additional regressors and measurement error. On the other hand, low correlations between additional regressors and instruments may reduce bias due to measurement error. We apply our findings by revisiting recent research that imputes consumption data from the CEX to the PSID.
Rodolfo G. Campos, Iliana Reggio (2013-12)
Keywords: consumption, measurement error, instrumental variables, consumer expenditure survey, panel study of income dynamics, income shocks
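The mechanism described in the last abstract can be illustrated with a small simulation (an editor's sketch, not code from the paper; all variable names and parameter values are illustrative): even after instrumenting mismeasured consumption, the coefficient on an additional regressor that is correlated with the measurement error remains inconsistent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True model: y = 1.0*c_star + 0.5*x + e, but consumption is observed
# with error: c = c_star + u.
u = rng.normal(size=n)                    # measurement error in consumption
x = 0.8 * u + rng.normal(size=n)          # additional regressor, correlated with u
z = rng.normal(size=n)                    # instrument: relevant for c, independent of u and e
c_star = z + rng.normal(size=n)
c = c_star + u
y = 1.0 * c_star + 0.5 * x + rng.normal(size=n)

def two_sls(y, endog, exog, instr):
    """2SLS: project the regressors on the instrument set, then run OLS on the fitted values."""
    Z = np.column_stack([np.ones(len(y)), instr, exog])  # instrument set (x instruments itself)
    W = np.column_stack([np.ones(len(y)), endog, exog])  # regressors
    W_hat = Z @ np.linalg.lstsq(Z, W, rcond=None)[0]     # first-stage fitted values
    return np.linalg.lstsq(W_hat, y, rcond=None)[0]

beta = two_sls(y, c, x, z)
# Instrumenting c is not enough: because x is correlated with the
# measurement error u, the 2SLS coefficient on x stays far from 0.5.
print("coef on c:", beta[1])
print("coef on x:", beta[2])
```

In this particular design the coefficient on consumption itself comes out close to its true value of 1.0, while the coefficient on x collapses towards zero; which coefficients are contaminated, and by how much, depends on the correlation structure among the regressors, the instruments and the measurement error.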