
NEP: New Economics Papers on Econometrics
By:  Mayya Zhilova
Abstract:  The paper studies the problem of constructing simultaneous likelihood-based confidence sets. We consider a simultaneous multiplier bootstrap procedure for estimating the quantiles of the joint distribution of the likelihood ratio statistics and for adjusting the confidence level for multiplicity. Theoretical results establish bootstrap validity in the following setting: the sample size n is fixed, and the maximal parameter dimension p_max and the number of considered parametric models K are such that (log K)^12 p_max^3 / n is small. We also consider the situation when the parametric models are misspecified. If the models' misspecification is significant, then the bootstrap critical values exceed the true ones and the simultaneous bootstrap confidence set becomes conservative. Numerical experiments for local constant and local quadratic regressions illustrate the theoretical results.
Keywords:  simultaneous inference, correction for multiplicity, familywise error, misspecified model, multiplier/weighted bootstrap 
JEL:  C13 C15 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015031&r=ecm 
By:  Michele Aquaro (University of Warwick); Natalia Bailey (Queen Mary University of London); M. Hashem Pesaran (University of Southern California and University of Cambridge) 
Abstract:  This paper considers spatial autoregressive panel data models and extends their analysis to the case where the spatial coefficients differ across the spatial units. It derives conditions under which the spatial coefficients are identified and develops a quasi maximum likelihood (QML) estimation procedure. Under certain regularity conditions, it is shown that the QML estimators of individual spatial coefficients are consistent and asymptotically normally distributed when both the time and cross-section dimensions of the panel are large. It derives the asymptotic covariance matrix of the QML estimators allowing for the possibility of non-Gaussian error processes. Small sample properties of the proposed estimators are investigated by Monte Carlo simulations for Gaussian and non-Gaussian errors, and with spatial weight matrices of differing degrees of sparseness. The simulation results are in line with the paper's key theoretical findings and show that the QML estimators have satisfactory small sample properties for panels with moderate time dimensions and irrespective of the number of cross-section units in the panel, under certain sparsity conditions on the spatial weight matrix.
Keywords:  Spatial panel data models, Heterogeneous spatial lag coefficients, Identification, Quasi maximum likelihood (QML) estimators, Non-Gaussian errors
JEL:  C21 C23 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp749&r=ecm 
By:  Davide Delle Monache (Banca d’Italia); Stefano Grassi (University of Kent and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES) 
Abstract:  Short-memory models contaminated by level shifts display long-memory features similar to those of fractionally integrated processes. This makes it hard to verify whether the true data generating process is a pure fractionally integrated process when employing standard estimation methods based on the autocorrelation function or the periodogram. In this paper, we propose a robust testing procedure, based on an encompassing parametric specification that allows us to disentangle the level shifts from the fractionally integrated component. The estimation is carried out using a state-space methodology, and it leads to a robust estimate of the fractional integration parameter even in the presence of level shifts. Once the memory parameter is correctly estimated, we use the KPSS test for the presence of level shifts. The Monte Carlo simulations show that this approach produces unbiased estimates of the memory parameter when shifts in the mean, or other slowly varying trends, are present in the data. Consequently, the resulting robust version of the KPSS test for the presence of level shifts has proper size and by far the highest power compared with other existing tests. Finally, we illustrate the usefulness of the proposed approach on financial data, such as daily bipower variation and turnover.
Keywords:  Long Memory, ARFIMA Processes, Level Shifts, State-Space Methods, KPSS test
JEL:  C10 C11 C22 C80 
Date:  2015–06–17 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201530&r=ecm 
By:  Martin, William J.; Pham, Cong S.
Abstract:  This paper evaluates the performance of alternative estimators of the gravity equation when zero trade flows result from economically based data-generating processes with heteroscedastic residuals and potentially omitted variables. In a standard Monte Carlo analysis, the paper finds that this combination can create seriously biased estimates in gravity models with the frequencies of zeros commonly observed in real-world data, and that Poisson Pseudo-Maximum-Likelihood models can be important in solving this problem. Standard threshold-Tobit estimators perform well in a Tobit-based data-generating process only if the analysis deals with the heteroscedasticity problem. When the data are generated by a Heckman sample selection model, the Zero-Inflated Poisson model appears to have the lowest bias. When the data are generated by a Helpman, Melitz, and Rubinstein-type model with heterogeneous firms, a Zero-Inflated Poisson estimator including firm numbers appears to provide the best results. Testing on real-world data for total trade throws up additional puzzles: truncated Poisson Pseudo-Maximum-Likelihood and Poisson Pseudo-Maximum-Likelihood estimates are very similar, and Zero-Inflated Poisson and truncated Poisson Pseudo-Maximum-Likelihood estimates are identical. Repeating the Monte Carlo analysis taking into account the high frequency of very small predicted trade flows in real-world data reconciles these findings and leads to specific recommendations for estimators.
Keywords:  Free Trade, Economic Theory & Research, Statistical & Mathematical Sciences, Information and Communication Technologies, Econometrics
Date:  2015–06–16 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:7308&r=ecm 
By:  Johan Dahlin; Mattias Villani; Thomas B. Schön
Abstract:  We consider the problem of approximate Bayesian parameter inference in nonlinear state space models with intractable likelihoods. Sequential Monte Carlo with approximate Bayesian computations (SMC-ABC) is an approach to approximating the likelihood in models of this type. However, such approximations can be noisy and computationally costly, which hinders efficient implementations using standard methods based on optimisation and statistical simulation. We propose a novel method based on the combination of Gaussian process optimisation (GPO) and SMC-ABC to create a Laplace approximation of the intractable posterior. The properties of the resulting GPO-ABC method are studied using stochastic volatility (SV) models with both synthetic and real-world data. We conclude that the algorithm enjoys accuracy comparable to particle Markov chain Monte Carlo with a significant reduction in computational cost, as well as better robustness to noise in the estimates compared with a gradient-based optimisation algorithm. Finally, we make use of GPO-ABC to estimate the Value-at-Risk for a portfolio using a copula model with SV models for the margins.
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1506.06975&r=ecm 
By:  Matteo Barigozzi; Marc Hallin 
Keywords:  volatility; dynamic factor models; GARCH models 
JEL:  C32 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/200436&r=ecm 
By:  TaeHwan Kim (Yonsei University); Christophe Muller (AixMarseille University) 
Abstract:  We study the fitted-value approach to quantile regression in the presence of endogeneity under a weakened form of the IV condition. In this context, we exhibit the possibility of a particular form of non-constant effect models with the fitted-value approach, a situation often believed to be ruled out. However, only the constant effect coefficients of the model can be consistently estimated. Finally, we discuss practical examples where this approach can be useful to avoid misspecification of quantile models.
Keywords:  Two-Stage Estimation, Quantile Regression, Fitted-Value Approach.
JEL:  C13 C21 C31 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2015rwp82&r=ecm 
By:  Lee, Ji Hyung
Abstract:  This paper develops econometric methods for inference and prediction in quantile regression (QR) allowing for persistent predictors. Conventional QR econometric techniques lose their validity when predictors are highly persistent. I adopt and extend a methodology called IVX filtering (Magdalinos and Phillips, 2009) that is designed to handle predictor variables with various degrees of persistence. The proposed IVX-QR methods correct the distortion arising from persistent multivariate predictors while preserving discriminatory power. Simulations confirm that IVX-QR methods inherit the robust properties of QR. These methods are employed to examine the predictability of US stock returns at various quantile levels.
Keywords:  IVX filtering, Local to unity, Multivariate predictors, Predictive regression, Quantile regression. 
JEL:  C1 C22 
Date:  2015–04–28 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:65150&r=ecm 
By:  Yang, Bill Huajian; Du, Zunwei 
Abstract:  Under the Vasicek asymptotic single risk factor model, stress testing based on rating transition probabilities involves three components: the unconditional rating transition matrix, asset correlations, and stress-testing factor models for systematic downgrade (including default) risk. The conditional transition probability for stress testing given systematic risk factors can be derived accordingly. In this paper, we extend Miu and Ozdemir's work ([14]) on stress testing under this transition probability framework by assuming a different asset correlation and a different stress-testing factor model for each non-default rating. We propose two Vasicek models for each non-default rating: one with a single latent factor for rating-level asset correlation, and another multifactor Vasicek model with a latent effect for systematic downgrade risk. Both models can be fitted effectively by using, for example, the SAS nonlinear mixed procedure. Analytical formulas for conditional transition probabilities are derived. Modeling downgrade risk rather than default risk addresses the issue of low default counts for high-quality ratings. As an illustration, we model the transition probabilities for a corporate portfolio. Portfolio default risk and credit loss under stress scenarios are derived accordingly. Results show that stress-testing models developed in this way demonstrate the desired sensitivity to risk factors.
Keywords:  Stress testing, systematic risk, asset correlation, rating migration, Vasicek model, bootstrap aggregation 
JEL:  C1 C10 C13 C5 G3 G30 G32 G38 
Date:  2015–06–18 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:65168&r=ecm 
By:  Stephan B. Bruns; David I. Stern 
Abstract:  Understanding the (causal) mechanisms at work is important for formulating evidence-based policy. But evidence from observational studies is often inconclusive, with many studies finding conflicting results. In small to moderately sized samples, the outcome of Granger causality testing heavily depends on the lag length chosen for the underlying vector autoregressive (VAR) model. Using the Akaike Information Criterion, there is a tendency to overfit the VAR model, and these overfitted models show an increased rate of false-positive findings of Granger causality, leaving empirical economists with substantial uncertainty about the validity of inferences. We propose a meta-regression model that explicitly controls for this overfitting bias, and we show by means of simulations that, even if the primary literature is dominated by false-positive findings of Granger causality, the meta-regression model correctly identifies the absence of genuine Granger causality. We apply the suggested model to the large literature that tests for Granger causality between energy consumption and economic output. We do not find evidence for a genuine relation in the selected sample, although excess significance is present. Instead, we find evidence that this excess significance is explained by overfitting bias.
Keywords:  Granger causality, vector autoregression, information criteria, meta-analysis, meta-regression, bias, publication selection bias
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201522&r=ecm 
By:  Carlo Marinelli; Stefano d'Addona 
Abstract:  We analyze the empirical performance of several nonparametric estimators of the pricing functional for European options, using historical put and call prices on the S&P 500 during the year 2012. Two main families of estimators are considered, obtained by estimating the pricing functional directly and by estimating the (Black-Scholes) implied volatility surface, respectively. In each case simple estimators based on linear interpolation are constructed, as well as more sophisticated ones based on smoothing kernels, à la Nadaraya-Watson. The results, based on the analysis of the empirical pricing errors in an extensive out-of-sample study, indicate that a simple approach based on the Black-Scholes formula coupled with linear interpolation of the volatility surface outperforms, both in accuracy and computational speed, all other methods.
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1506.06568&r=ecm 
By:  Dirk Drechsel (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Heiner Mikosch (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Samad Sarferaz (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Matthias Bannert (KOF Swiss Economic Institute, ETH Zurich, Switzerland) 
Abstract:  This paper analyzes the effects of macroeconomic shocks on prices and output at different levels of aggregation using a bottom-up approach. We show how to generate firm-level impulse responses by incorporating experimental settings into surveys and by exposing firm executives to treatment scenarios. Aggregation then yields industry-level and economy-wide impulse responses. We further show that the effects obtained from survey experiments can be mapped into impulse responses retrieved from VARs. We apply the procedure to study the effects of oil price shocks using a representative sample of over 1,000 Swiss firms. At the aggregate and industry level our findings confirm, with some notable exceptions, results from a standard VAR. At the micro level we analyze the driving forces behind firm-specific impulse responses, controlling for several firm characteristics via panel data analysis and thereby resolving existing puzzles.
Keywords:  Survey-based impulse responses, survey experiments, macroeconomic shock identification, firm-level data, oil price shock
JEL:  C32 C83 C99 E31 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:kof:wpskof:15386&r=ecm 
By:  Dominique Guegan (Centre d'Economie de la Sorbonne); Bertrand K Hassani (Grupo Santander and Centre d'Economie de la Sorbonne); Kehan Li (Centre d'Economie de la Sorbonne)
Abstract:  One of the key lessons of the crisis which began in 2007 has been the need to strengthen the risk coverage of the capital framework. In response, the Basel Committee in July 2009 completed a number of critical reforms to the Basel II framework which will raise capital requirements for the trading book and complex securitisation exposures, a major source of losses for many internationally active banks. One of the reforms is to introduce a stressed value-at-risk (VaR) capital requirement based on a continuous 12-month period of significant financial stress (Basel III (2011) [1]). However, the Basel framework does not specify a model to calculate the stressed VaR and leaves it up to the banks to develop an appropriate internal model to capture the material risks they face. Consequently, we propose a forward stress risk measure, the “spectral stress VaR” (SSVaR), as an implementation model of stressed VaR, exploiting the asymptotic normality of the distribution of the estimator of VaR_p. In particular, to allow the SSVaR to incorporate information on the tail structure, we build it using spectral analysis. Using a data set composed of operational risk factors, we fit a panel of distributions to construct the SSVaR in order to stress it. Additionally, we show how the SSVaR can serve as an indicator of the robustness of a bank's internal model.
Keywords:  Value at Risk; Asymptotic theory; Distribution; Spectral analysis; Stress; Risk measure; Regulation 
JEL:  C1 C6 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:15052&r=ecm 
By:  McCracken, Michael W. (Federal Reserve Bank of St. Louis); Ng, Serena (Department of Economics, Columbia University) 
Abstract:  This paper describes a large, monthly frequency, macroeconomic database with the goal of establishing a convenient starting point for empirical analysis that requires "big data." The dataset mimics the coverage of those already used in the literature but has three appealing features. First, it is designed to be updated monthly using the FRED database. Second, it will be publicly accessible, facilitating comparison of related research and replication of empirical work. Third, it will relieve researchers from having to manage data changes and revisions. We show that factors extracted from our dataset share the same predictive content as those based on various vintages of the so-called Stock-Watson dataset. In addition, we suggest that diffusion indexes constructed as the partial sum of the factor estimates can potentially be useful for the study of business cycle chronology.
Keywords:  diffusion index; forecasting; big data; factors. 
JEL:  C30 C33 G11 G12 
Date:  2015–06–15 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2015012&r=ecm 
By:  John Deke 
Abstract:  In this brief we examine methodological criticisms of the Linear Probability Model (LPM) in general and conclude that these criticisms are not relevant to experimental impact analysis. We also point out that the LPM has advantages in terms of implementation and interpretation that make it an appealing option for researchers conducting experimental impact analysis. An important caveat on these conclusions is that, outside the context of impact analysis, there can be good reasons to avoid using the LPM for binary outcomes.
Keywords:  Linear Probability Model, Randomized Controlled Trials, TPP, Teenage Pregnancy Prevention, Technical Assistance 
JEL:  I 
Date:  2014–12–30 
URL:  http://d.repec.org/n?u=RePEc:mpr:mprres:62a1477e274d429faf7e0c71ba1204b2&r=ecm 