nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒12‒28
twenty-six papers chosen by
Sune Karlsson
Örebro universitet

  1. Inference for Nonparametric High-Frequency Estimators with an Application to Time Variation in Betas By Ilze KALNINA
  2. Semiparametric Estimation of Partially Linear Dynamic Panel Data Models with Fixed Effects By Su Liangjun; Zhang Yonghui
  3. Estimation and Inference of Threshold Regression Models with Measurement Errors By Chong, Terence Tai Leung; Chen, Haiqiang; Wong, Tsz Nga; Yan, Isabel K.
  4. Specification Test for Spatial Autoregressive Models By Su Liangjun; Xi Qu
  5. A Vector Heterogeneous Autoregressive Index model for realized volatility measures By Cubadda G.; Guardabascio B.; Hecq A.W.
  6. When is it really justifiable to ignore explanatory variable endogeneity in a regression model? By Jan F. Kiviet
  7. Bias correction for fixed effects spatial panel data models By Zhenlin Yang; Jihai Yu; Shew Fan Liu
  8. On IV estimation of a dynamic linear probability model with fixed effects By Andrew Adrian Yu Pua
  9. Shrinkage Estimation of Common Breaks in Panel Data Models via Adaptive Group Fused Lasso By Su Liangjun; Junhui Qian
  10. On Time-Varying Factor Models: Estimation and Testing By Su Liangjun; Xia Wang
  11. A Bayesian Local Likelihood Method for Modelling Parameter Time Variation in DSGE Models By Ana Beatriz Galvão; Liudas Giraitis; George Kapetanios; Katerina Petrova
  12. Discerning Non-Stationary Market Microstructure Noise and Time-Varying Liquidity in High Frequency Data By Richard Y. Chen; Per A. Mykland
  13. Panel Data Models with Interactive Fixed Effects and Multiple Structural Breaks By Degui Li; Junhui Qian; Su Liangjun
  14. Long-run priors for term structure models By Meldrum, Andrew; Roberts-Sklar, Matt
  15. Nonparametric Estimation of the Leverage Effect : A Trade-off between Robustness and Efficiency By Ilze KALNINA; Dacheng XIU
  16. Inference on Multivariate Heteroscedastic Time Varying Random Coefficient Models By Liudas Giraitis; George Kapetanios; Tony Yates
  17. Limit Theory for Continuous Time Systems with Mildly Explosive Regressors By Peter C. B. Phillips; Ye Chen; Jun Yu
  18. Modeling the Effects of Grade Retention in High School By Bart Cockx; Stijn Baert; Matteo Picchio
  19. Discriminating between (in)valid external instruments and (in)valid exclusion restrictions By Jan F. Kiviet
  20. Integrated ARCH, FIGARCH and AR Models: Origins of Long Memory By Liudas Giraitis; Donatas Surgailis; Andrius Škarnulis
  21. Testing for Fundamental Vector Moving Average Representations By Bin Chen; Jinho Choi; Juan Carlos Escanciano
  22. Sign Restrictions in Bayesian FaVARs with an Application to Monetary Policy Shocks By Pooyan Amir Ahmadi; Harald Uhlig
  23. Efficiency gains, bounds, and risk in finance By Sarisoy, Cisil
  24. Iterative Bias Correction Procedures Revisited: A Small Scale Monte Carlo Study By Arturas Juodis
  25. Robustness of Forecast Combination in Unstable Environment: A Monte Carlo Study of Advanced Algorithms By Yongchen Zhao
  26. Comparing Asset Pricing Models By Francisco Barillas; Jay Shanken

  1. By: Ilze KALNINA
    Abstract: We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
    Date: 2015
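The subsampling idea in this abstract (recompute the estimator on many shorter blocks and use their dispersion to gauge sampling variability) can be sketched for a plain realized-variance estimator. This is a stylized illustration under my own simplifying assumptions, not the paper's estimator; the function names and the Gaussian price simulation are hypothetical:

```python
import numpy as np

def realized_variance(prices):
    """Sum of squared log returns: a basic high-frequency volatility estimator."""
    returns = np.diff(np.log(prices))
    return np.sum(returns ** 2)

def subsample_variance(prices, block_len, n_blocks=50):
    """Estimate the sampling variance of realized variance by recomputing it on
    overlapping blocks, rescaling each block statistic to the full horizon, and
    shrinking the block dispersion by the block-to-sample ratio."""
    n = len(prices)
    starts = np.linspace(0, n - block_len, n_blocks).astype(int)
    # Rescale each block RV to the full-sample horizon before comparing
    block_stats = np.array([
        realized_variance(prices[s:s + block_len]) * (n - 1) / (block_len - 1)
        for s in starts
    ])
    # The variance of a sqrt(n)-rate statistic shrinks with the sample size,
    # hence the (block_len / n) rescaling of the block-level dispersion
    return (block_len / n) * np.var(block_stats, ddof=1)
```

A variance estimate built this way is nonnegative by construction, which mirrors the positive semi-definiteness property the abstract highlights for the matrix case.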
  2. By: Su Liangjun (Singapore Management University); Zhang Yonghui (Renmin University of China)
    Abstract: In this paper, we study a partially linear dynamic panel data model with fixed effects, where either exogenous or endogenous variables or both enter the linear part, and the lagged dependent variable together with some other exogenous variables enter the nonparametric part. Two types of estimation methods are proposed for the first-differenced model. One is composed of a semiparametric GMM estimator for the finite-dimensional parameter and a local polynomial estimator for the infinite-dimensional component, based on the empirical solutions to Fredholm integral equations of the second kind; the other is a sieve IV estimator of the parametric and nonparametric components jointly. We study the asymptotic properties of these two types of estimates when the number of individuals tends to infinity and the time period is fixed. We also propose a specification test for the linearity of the nonparametric component based on a weighted squared distance between the parametric estimate under the linear restriction and the semiparametric estimate under the alternative. Monte Carlo simulations suggest that the proposed estimators and tests perform well in finite samples. We apply the model to study the relationship between intellectual property right (IPR) protection and economic growth, and find that IPR protection has a nonlinear positive effect on the economic growth rate.
    Keywords: Additive structure, Fredholm integral equation, Generated covariate, GMM, Local polynomial regression, Partially linear model, Sieve method, Time effect
    JEL: C14 C33 C36
    Date: 2015–09
  3. By: Chong, Terence Tai Leung; Chen, Haiqiang; Wong, Tsz Nga; Yan, Isabel K.
    Abstract: An important assumption underlying standard threshold regression models and their variants in the extant literature is that the threshold variable is perfectly measured. Such an assumption is crucial for consistent estimation of model parameters. This paper provides the first theoretical framework for the estimation and inference of threshold regression models with measurement errors. A new estimation method that reduces the bias of the coefficient estimates and a Hausman-type test to detect the presence of measurement errors are proposed. Monte Carlo evidence is provided and an empirical application is given.
    Keywords: Threshold Model; Measurement Error; Hausman-type Test.
    JEL: C12 C22
    Date: 2015–11–05
  4. By: Su Liangjun (Singapore Management University); Xi Qu (Shanghai Jiao Tong University)
    Abstract: This paper considers a simple test for the correct specification of linear spatial autoregressive models, assuming that the spatial weight matrix is correctly specified. We derive the limiting distributions of the test under the null hypothesis of correct specification and under a sequence of local alternatives. We show that the test is asymptotically free of nuisance parameters under the null and prove its consistency. To improve the finite sample performance of our test, we also propose a residual-based wild bootstrap and justify its asymptotic validity. We conduct a small set of Monte Carlo simulations to investigate the finite sample properties of our tests. Finally, we apply the test to two empirical datasets: the vote cast and the economic growth rate. We reject the linear spatial autoregressive model in the vote cast example but fail to reject it in the economic growth rate example.
    Keywords: Generalized method of moments; Nonlinearity; Spatial autoregression; Spatial dependence; Specification test
    JEL: C12 C14 C21
    Date: 2015–09
  5. By: Cubadda G.; Guardabascio B.; Hecq A.W. (GSBE)
    Abstract: This paper introduces a new model for detecting the presence of commonalities in a set of realized volatility measures. In particular, we propose a multivariate generalization of the heterogeneous autoregressive (HAR) model that is endowed with a common index structure. The Vector Heterogeneous Autoregressive Index model has the property of generating a common index that preserves the same temporal cascade structure as the HAR model, a feature not shared by other aggregation methods such as principal components. The parameters of this model can be easily estimated by a switching algorithm that increases the Gaussian likelihood at each step. We illustrate our approach with an empirical analysis that combines several realized volatility measures of the same equity index for three different markets.
    Keywords: Multiple or Simultaneous Equation Models: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models;
    JEL: C32
    Date: 2015
  6. By: Jan F. Kiviet
    Abstract: A conversion of standard ordinary least-squares results into inference that is robust under endogeneity of some regressors has been put forward in Ashley and Parmeter, Economics Letters, 137 (2015) 70-74. However, their conversion is based on an incorrect (though by accident conservative) asymptotic approximation and entails a neglected but avoidable randomness. A very basic example illustrates why a much more sophisticated asymptotic expansion, under a stricter set of assumptions, is required than the one used by these authors. Next, particular aspects of their consequently flawed sensitivity analysis for an empirical growth model are replaced by results based on a proper limiting distribution for a feasible inconsistency-corrected least-squares estimator. Finally, we provide references to literature in which relevant asymptotic approximations have been derived, which should make it possible to produce similar endogeneity-robust inference for more general models and hypotheses than currently available.
    Date: 2015–12–05
  7. By: Zhenlin Yang (Singapore Management University); Jihai Yu (Peking University); Shew Fan Liu (Singapore Management University)
    Abstract: This paper examines the finite sample properties of the quasi maximum likelihood (QML) estimators of the fixed effects spatial panel data (FE-SPD) models of Lee and Yu (2010). Following the general bias correction methods recently developed by Yang (2015), we derive up to third-order bias corrections for the QML estimators of the FE-SPD model, and propose a simple bootstrap method for their practical implementation. Monte Carlo results reveal that the QML estimators of the spatial parameters can be quite biased and that a second-order bias correction effectively removes the bias. The validity of the bootstrap method is established. Variance corrections are also considered, which together with bias corrections lead to improved inferences.
    Keywords: Bias correction, Variance correction, Bootstrap, Spatial panel, Individual fixed effects, Time fixed effects, Quasi maximum likelihood, Spatial lag, Spatial error, Spatial ARAR.
    JEL: C10 C13 C21 C23 C15
    Date: 2015–03
  8. By: Andrew Adrian Yu Pua (University of Amsterdam)
    Abstract: Researchers still estimate a dynamic linear probability model (LPM) with fixed effects when analyzing a panel of binary choices. Setting aside the possibility that the average marginal effect may not be point-identified, directly applying IV estimators to this dynamic LPM delivers inconsistent estimators for the true average marginal effect regardless of whether the cross-sectional or time series dimensions diverge. I also show through some examples that these inconsistent estimators are sometimes outside the nonparametric bounds proposed by Chernozhukov et al. (2013). Although there are no analytical results for GMM estimators using Arellano-Bond moment conditions, I show through an empirical example that the resulting GMM estimate of the average treatment effect of fertility on female labor participation is outside the nonparametric bounds under monotonicity.
    Date: 2015–09–27
  9. By: Su Liangjun (Singapore Management University); Junhui Qian (Shanghai Jiao Tong University)
    Abstract: In this paper we consider estimation and inference of common breaks in panel data models via adaptive group fused lasso. We consider two approaches — penalized least squares (PLS) for first-differenced models without endogenous regressors, and penalized GMM (PGMM) for first-differenced models with endogeneity. We show that with probability tending to one both methods can correctly determine the unknown number of breaks and estimate the common break dates consistently. We establish the asymptotic distributions of the Lasso estimators of the regression coefficients and their post-Lasso versions. We also propose and validate a data-driven method to determine the tuning parameter used in the Lasso procedure. Monte Carlo simulations demonstrate that both the PLS and PGMM estimation methods work well in finite samples. We apply our PGMM method to study the effect of foreign direct investment (FDI) on economic growth using a panel of 88 countries and regions from 1973 to 2012 and find multiple breaks in the model.
    Keywords: Adaptive Lasso; Change point; Group fused Lasso; Panel data; Penalized least squares; Penalized GMM; Structural change
    JEL: C13 C23 C33 C51
    Date: 2015–09
  10. By: Su Liangjun (Singapore Management University); Xia Wang (University of Chinese Academy of Sciences)
    Abstract: Conventional factor models assume that factor loadings are fixed over a long horizon of time, which appears overly restrictive and unrealistic in applications. In this paper, we introduce a time-varying factor model where factor loadings are allowed to change smoothly over time. We propose a local version of the principal component method to estimate the latent factors and time-varying factor loadings simultaneously. We establish the limiting distributions of the estimated factors and factor loadings in the standard framework where both the cross-section and time dimensions are large. We also propose a BIC-type information criterion to determine the number of factors, which can be used in models with either time-varying or time-invariant factor loadings. Based on the comparison between the estimates of the common components under the null hypothesis of no structural changes and those under the alternative, we propose a consistent test for structural changes in factor loadings. We establish the null distribution, the asymptotic local power property, and the consistency of our test. Simulations are conducted to evaluate both our nonparametric estimates and test statistic. We also apply our test to investigate Stock and Watson’s (2009) U.S. macroeconomic data set and find strong evidence of structural changes in the factor loadings.
    Keywords: Factor model, Information criterion, Local principal component, Local smoothing, Structural change, Test, Time-varying parameter.
    JEL: C12 C14 C33 C38
    Date: 2015–07
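The local principal component idea described above can be sketched as an eigen-decomposition of a kernel-weighted second-moment matrix, so that loadings may differ across evaluation points in time. This is a minimal sketch under my own assumptions (Gaussian kernel, one evaluation point, simulated one-factor panel), not the paper's estimator:

```python
import numpy as np

def local_pca(X, t0, bandwidth, n_factors=1):
    """Local principal components at time t0: eigen-decompose a
    kernel-weighted N x N second-moment matrix of the T x N panel X,
    allowing the loadings to drift smoothly over time."""
    T, N = X.shape
    w = np.exp(-0.5 * ((np.arange(T) - t0) / bandwidth) ** 2)  # Gaussian kernel
    w /= w.sum()
    S = (X * w[:, None]).T @ X                # kernel-weighted second moments
    eigvals, eigvecs = np.linalg.eigh(S)      # eigh returns ascending order
    loadings = eigvecs[:, -n_factors:] * np.sqrt(N)  # top eigenvectors, PCA scaling
    factors = X @ loadings / N
    return factors, loadings
```

Repeating this over a grid of t0 values traces out the time-varying loading paths; comparing the fitted common components with and without that time variation is the basis of the structural-change test in the abstract.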
  11. By: Ana Beatriz Galvão (University of Warwick); Liudas Giraitis (Queen Mary University of London); George Kapetanios (Queen Mary University of London); Katerina Petrova (Queen Mary University of London)
    Abstract: DSGE models have recently received considerable attention in macroeconomic analysis and forecasting. They are usually estimated using Bayesian methods, which require the computation of the likelihood function under the assumption that the parameters of the model remain fixed throughout the sample. This paper presents a Local Bayesian Likelihood method suitable for estimation of DSGE models that can accommodate time variation in all parameters of the model. There are two advantages in allowing the parameters to vary over time. The first is that it enables us to assess the possibilities of regime changes, caused by shifts in the policy preferences or the volatility of shocks, as well as the possibility of misspecification in the design of DSGE models. The second advantage is that we can compute predictive densities based on the most recent parameters' values that could provide us with more accurate forecasts. The novel Bayesian Local Likelihood method applied to the Smets and Wouters (2007) model provides evidence of time variation in the policy parameters of the model as well as the volatility of the shocks. We also show that allowing for time variation improves considerably density forecasts in comparison to the fixed parameter model and we interpret this result as evidence for the presence of stochastic volatility in the structural shocks.
    Keywords: DSGE models, Local likelihood, Bayesian methods, Time varying parameters
    JEL: C11 C53 E27 E52
    Date: 2015–12
  12. By: Richard Y. Chen; Per A. Mykland
    Abstract: In this paper, we investigate the implications of non-stationary market microstructure noise for integrated volatility estimation, provide statistical tools to test for stationarity and non-stationarity in market microstructure noise, and discuss how to measure liquidity risk using high frequency financial data. In particular, we discuss the impact of non-stationary microstructure noise on the TSRV (Two-Scale Realized Variance) estimator, and design three test statistics by exploiting edge effects and asymptotic approximation. The asymptotic distributions of these test statistics are provided under both stationary and non-stationary noise assumptions, and we empirically measure aggregate liquidity risk with these test statistics from 2006 to 2013. As byproducts, functional dependence and endogenous market microstructure noise are briefly discussed. Simulation studies corroborate our theoretical results. Our empirical study indicates the prevalence of non-stationary market microstructure noise on the New York Stock Exchange.
    Date: 2015–12
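The TSRV estimator the abstract builds on is a standard construction (Zhang, Mykland and Aït-Sahalia): average the realized variance over K sparse subgrids, then subtract a noise-bias correction built from the noise-dominated full-grid realized variance. A compact sketch, with my own simulated noisy price path as an assumption:

```python
import numpy as np

def tsrv(log_prices, K=5):
    """Two-Scale Realized Variance: sparse-subgrid average minus a
    bias correction from the full-grid realized variance."""
    n = len(log_prices) - 1
    rv_all = np.sum(np.diff(log_prices) ** 2)                 # noise-dominated
    rv_sparse = np.mean([np.sum(np.diff(log_prices[k::K]) ** 2)
                         for k in range(K)])                  # K offset subgrids
    n_bar = (n - K + 1) / K                                   # avg subgrid returns
    return rv_sparse - (n_bar / n) * rv_all
```

With additive i.i.d. noise, the full-grid realized variance is inflated by roughly twice the number of observations times the noise variance, and the correction removes that term; the paper's tests probe what happens when the noise is instead non-stationary.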
  13. By: Degui Li (University of York); Junhui Qian (Shanghai Jiao Tong University); Su Liangjun (Singapore Management University)
    Abstract: In this paper we consider estimation of common structural breaks in panel data models with unobservable interactive fixed effects. We introduce a penalized principal component (PPC) estimation procedure with an adaptive group fused LASSO to detect the multiple structural breaks in the models. Under some mild conditions, we show that with probability approaching one the proposed method can correctly determine the unknown number of breaks and consistently estimate the common break dates. Furthermore, we estimate the regression coefficients through the post-LASSO method and establish the asymptotic distribution theory for the resulting estimators. The developed methodology and theory are applicable to the case of dynamic panel data models. Simulation results demonstrate that the proposed method works well in finite samples with low false detection probability when there is no structural break and high probability of correctly estimating the break numbers when the structural breaks exist. We finally apply our method to study the environmental Kuznets curve for 74 countries over 40 years and detect two breaks in the data.
    Keywords: Change point; Interactive fixed effects; LASSO; Panel data; Penalized estimation; Principal component analysis.
    Date: 2015–09
  14. By: Meldrum, Andrew (Bank of England); Roberts-Sklar, Matt (Bank of England)
    Abstract: Dynamic no-arbitrage term structure models are popular tools for decomposing bond yields into expectations of future short-term interest rates and term premia. But there is insufficient information in the time series of observed yields to estimate the unconditional mean of yields in maximally flexible models. This can result in implausibly low estimates of long-term expected future short-term interest rates, as well as considerable uncertainty around those estimates. This paper proposes a tractable Bayesian approach for incorporating prior information about the unconditional means of yields. We apply it to UK data and find that with reasonable priors it results in more plausible estimates of the long-run average of yields, lower estimates of term premia in long-term bonds and substantially reduced uncertainty around these decompositions in both affine and shadow rate term structure models.
    Keywords: Affine term structure model; shadow rate term structure model; Gibbs sampler
    JEL: C11 E43 G12
    Date: 2015–12–18
  15. By: Ilze KALNINA; Dacheng XIU
    Abstract: We consider two new approaches to nonparametric estimation of the leverage effect. The first approach uses stock prices alone. The second approach uses the data on stock prices as well as a certain volatility instrument, such as the CBOE volatility index (VIX) or the Black-Scholes implied volatility. The theoretical justification for the instrument-based estimator relies on a certain invariance property, which can be exploited when high frequency data is available. The price-only estimator is more robust since it is valid under weaker assumptions. However, in the presence of a valid volatility instrument, the price-only estimator is inefficient as the instrument-based estimator has a faster rate of convergence. We consider two empirical applications, in which we study the relationship between the leverage effect and the debt-to-equity ratio, credit risk, and illiquidity.
    Keywords: derivatives, VIX, implied volatility, high frequency data, spot correlation
    JEL: G12 C22 C14
    Date: 2015
  16. By: Liudas Giraitis (Queen Mary University of London); George Kapetanios (Queen Mary University of London); Tony Yates (University of Bristol)
    Abstract: In this paper we introduce the general setting of a multivariate time series autoregressive model with stochastic time-varying coefficients and time-varying conditional variance of the error process. This allows modeling VAR dynamics for non-stationary time series and estimating time-varying parameter processes by well-known rolling regression techniques. We establish consistency, convergence rates and asymptotic normality for kernel estimators of the paths of coefficient processes and provide pointwise valid standard errors. The method is applied to a popular seven-variable data set to analyze evidence of time variation in empirical objects of interest for the DSGE literature. The results of this paper serve as a starting point for further research on numerous open problems, including establishing estimation results for time-varying parameters that are uniform in time <i>t</i>, constructing Bonferroni-type corrections to the pointwise standard error bands, and developing a valid test of the null hypothesis of no time variation.
    Keywords: Kernel estimation, Time-varying VAR, Structural change, Monetary policy shock
    JEL: C10 C14 E52 E61
    Date: 2015–12
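The rolling/kernel regression estimation the abstract refers to can be sketched in the simplest scalar case: at each date, fit least squares with observations down-weighted by their distance in time. A minimal sketch under my own assumptions (Gaussian kernel, a single regressor, simulated drifting slope), not the paper's multivariate estimator:

```python
import numpy as np

def tv_slope(y, x, bandwidth):
    """Kernel-weighted least squares: estimate a slope path beta_t in
    y_t = beta_t * x_t + e_t, re-fitting at each date with a Gaussian
    kernel over time distance."""
    n = len(y)
    grid = np.arange(n)
    betas = np.empty(n)
    for t in range(n):
        w = np.exp(-0.5 * ((grid - t) / bandwidth) ** 2)   # time-distance weights
        betas[t] = np.sum(w * x * y) / np.sum(w * x * x)   # weighted LS slope
    return betas
```

The bandwidth plays the usual nonparametric role: larger values smooth the coefficient path toward a constant-parameter fit, smaller values track faster variation at the cost of noisier estimates.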
  17. By: Peter C. B. Phillips (Yale University); Ye Chen (Singapore Management University); Jun Yu (Singapore Management University)
    Abstract: Limit theory is developed for continuous co-moving systems with mildly explosive regressors. The theory uses double asymptotics with infill (as the sampling interval tends to zero) and large time span asymptotics. The limit theory explicitly involves initial conditions, allows for drift in the system, is provided for single and multiple explosive regressors, and is feasible to implement in practice. Simulations show that double asymptotics deliver a good approximation to the finite sample distribution, with both finite sample and asymptotic distributions showing sensitivity to initial conditions. The methods are implemented in the US real estate market for an empirical application, illustrating the usefulness of double asymptotics in practical work.
    Keywords: Cointegrated system; Explosive Process; Moderate Deviations from Unity; Double Asymptotics; Real Estate Market.
    JEL: C12 C13 C58
    Date: 2015–03
  18. By: Bart Cockx (Ghent University (SHERPPA), Université catholique de Louvain (IRES), IZA and CESifo); Stijn Baert (Ghent University (SHERPPA), University of Antwerp, Université catholique de Louvain (IRES) and IZA); Matteo Picchio (Department of Economics and Social Sciences, Marche Polytechnic University; CentER, Tilburg University; Sherppa, Ghent University; IZA)
    Abstract: A dynamic discrete choice model is set up to estimate the effects of grade retention in high school, in both the short run (end-of-year evaluation) and the long run (drop-out and delay). In contrast to regression discontinuity designs, this approach captures treatment heterogeneity and controls for grade-varying unobservable determinants. A method is proposed to deal with initial conditions and with partial observability of the track choices at the start of high school. Forced track downgrading is considered as an alternative remedial measure. In the long run, grade retention and its alternative have adverse effects on schooling outcomes, more so for less able pupils.
    Keywords: Education, grade retention, track mobility, dynamic discrete choice models, heterogeneous treatment effects
    JEL: C33 C35 I21
    Date: 2015–12–12
  19. By: Jan F. Kiviet
    Abstract: In models estimated by the (generalized) method of moments, a test of coefficient restrictions can be based either on a Wald statistic or on the difference between evaluated criterion functions. Their correspondence can be used to demonstrate that a Sargan-Hansen test statistic for overidentification restrictions is equivalent to an omitted variables test statistic for a non-unique group of variables. We prove that this is the case for incremental Sargan-Hansen tests too. However, we also demonstrate that, despite this equivalence, one can nevertheless distinguish between either the (in)validity of some additional instruments or the (un)tenability of particular exclusion restrictions. It all hinges on the choice of the initial maintained hypothesis.
    Date: 2015–11–26
  20. By: Liudas Giraitis (Queen Mary University of London); Donatas Surgailis (Vilnius University); Andrius Škarnulis (Vilnius University)
    Abstract: Although the properties of the ARCH(∞) model are well investigated, the existence of long-memory FIGARCH and IARCH solutions was not established in the literature. These two popular ARCH-type models, which are widely used in the applied literature, have caused theoretical controversy because of the suspicion that no solutions exist besides the trivial zero one. Since ARCH models with a non-zero intercept have a unique stationary solution that excludes long memory, the existence of finite-variance FIGARCH and IARCH models, and thus the possibility of long memory in the ARCH setting, was doubtful. The present paper resolves this controversy by showing that the FIGARCH and IARCH equations have a non-trivial covariance stationary solution, and that such a solution exhibits long memory. The existence and uniqueness of stationary Integrated AR(∞) processes is also discussed, and long memory is established as an inherited feature. Summarizing, we show that covariance stationary IARCH, FIGARCH and IAR(∞) processes exist, their class is wide, and they always have long memory.
    Keywords: AR, FIGARCH, IARCH, Long memory
    JEL: C15 C22
    Date: 2015–12
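The integrated case at issue here is easy to write down even though its theory is subtle: in IGARCH(1,1) with zero intercept the ARCH and GARCH coefficients sum to one. The recursion below is just that textbook definition as a simulation, an illustration of the object the paper studies rather than any part of its proofs (the function name and parameter defaults are my own):

```python
import numpy as np

def simulate_igarch(T, alpha=0.3, sigma2_0=1.0, seed=0):
    """Simulate IGARCH(1,1) with zero intercept:
    sigma2_t = alpha * r_{t-1}^2 + (1 - alpha) * sigma2_{t-1},
    the integrated (unit persistence) case discussed in the abstract."""
    rng = np.random.default_rng(seed)
    r = np.empty(T)
    s2 = sigma2_0
    for t in range(T):
        r[t] = np.sqrt(s2) * rng.standard_normal()
        s2 = alpha * r[t] ** 2 + (1.0 - alpha) * s2  # coefficients sum to one
    return r
```

With a non-zero intercept the recursion would have a unique stationary solution without long memory; the zero-intercept integrated case is precisely where the paper establishes that non-trivial covariance stationary, long-memory solutions exist.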
  21. By: Bin Chen (University of Rochester); Jinho Choi (Bank of Korea); Juan Carlos Escanciano (Indiana University)
    Abstract: We propose a test for invertibility or fundamentalness of structural vector autoregressive moving average models generated by non-Gaussian independent and identically distributed (iid) structural shocks. We prove that in these models, and under some regularity conditions, the Wold innovations are a martingale difference sequence (mds) if and only if the structural shocks are fundamental. This simple but powerful characterization suggests an empirical strategy to assess invertibility. We propose a test based on a generalized spectral density to check for the mds property of the Wold innovations. This approach does not require one to specify and estimate the economic agent's information flows, or to identify and estimate the structural parameters and the non-invertible roots. Moreover, the proposed test statistic uses all lags in the sample and has a convenient asymptotic N(0, 1) distribution under the null hypothesis of invertibility, and hence it is straightforward to implement. In case of rejection, the test can be further used to check whether a given set of additional variables provides sufficient informational content to restore invertibility. A Monte Carlo study is conducted to examine the finite-sample performance of our test. Finally, the proposed test is applied to two widely cited works on the effects of fiscal shocks by Blanchard and Perotti (2002) and Ramey (2011).
    Keywords: Fundamental Representations; Generalized Spectrum; Identification; Invertible Moving Average
    Date: 2015–12
  22. By: Pooyan Amir Ahmadi; Harald Uhlig
    Abstract: We propose a novel identification strategy of imposing sign restrictions directly on the impulse responses of a large set of variables in a Bayesian factor-augmented vector autoregression. We conceptualize and formalize conditions under which every additional sign restriction imposed can be qualified as either relevant or irrelevant for structural identification, up to the limiting case of point identification. Deriving exact conditions, we establish that (i) in a two-dimensional factor model only two out of potentially infinitely many sign restrictions are relevant, and (ii) in higher dimensions, in contrast, every additional sign restriction can be relevant and improve structural identification. The latter result can render our approach a blessing in high dimensions. In an empirical application for the US economy, we identify monetary policy shocks by imposing conventional wisdom and find modest real effects, while avoiding various unreasonable responses that are present and pronounced when standard recursive identification is combined with FAVARs.
    JEL: C22 E5
    Date: 2015–11
  23. By: Sarisoy, Cisil (Tilburg University, School of Economics and Management)
    Abstract: This thesis consists of three chapters. The first chapter analyzes efficiency gains in the estimation of expected returns based on asset pricing models and examines the economic implications of such gains in portfolio allocation exercises. The second chapter provides nonparametric efficiency bounds in the estimation of integrated smooth transformations of volatility and related processes, and analyzes the efficiency of existing estimators. The third chapter investigates pricing implications of monetary policy risk in the cross-section of stocks.
    Date: 2015
  24. By: Arturas Juodis
    Abstract: This paper considers estimation of general panel data models subject to the incidental parameter problem of Neyman and Scott (1948). Our main focus is on the finite sample behavior of analytically bias-corrected maximum likelihood estimators as discussed in Hahn and Kuersteiner (2002), Hahn and Newey (2004) and Hahn and Kuersteiner (2011). As mentioned in Hahn and Newey (2004) and Arellano and Hahn (2006), it is in principle possible to iterate the bias formula to obtain an estimator that might have better finite sample properties than the one-step estimator. In this paper we investigate the merits and limitations of iterative bias correction procedures in finite samples by considering three examples: the panel AR(1), the panel VAR(1), and the static panel probit.
    Date: 2015–10–09
  25. By: Yongchen Zhao (Department of Economics, Towson University)
    Abstract: Based on a set of carefully designed Monte Carlo exercises, this paper documents the behavior and performance of several newly developed advanced forecast combination algorithms in unstable environments, where the performance of candidate forecasts is cross-sectionally heterogeneous and evolves dynamically over time. Results from these exercises provide guidelines for selecting a forecast combination method based on the nature, frequency, and magnitude of instabilities in the forecasts as well as the target variable. Following these guidelines, a simple forecast combination procedure is proposed and demonstrated through a real-time forecast combination exercise using the U.S. Survey of Professional Forecasters, where combined forecasts are shown to have superior performance that is not only statistically significant but also of practical importance.
    Keywords: Forecast combination, Exponential re-weighting, Shrinkage, Estimation error, Performance stability, Real-Time Data.
    JEL: C53 C22 C15
    Date: 2015–12
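The "exponential re-weighting" listed in the keywords can be sketched in its basic form: weight each forecaster by the exponential of its negative cumulative squared error, updating each period so the combination tracks shifts in relative performance. This is the generic scheme under my own assumptions (squared loss, a single tuning parameter eta), not the paper's specific algorithms:

```python
import numpy as np

def combine_exponential(forecasts, actuals, eta=1.0):
    """Combine a T x K panel of forecasts: each period, weight forecaster k
    in proportion to exp(-eta * cumulative squared error of k so far)."""
    T, K = forecasts.shape
    cum_loss = np.zeros(K)
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # subtract min for stability
        w /= w.sum()
        combined[t] = w @ forecasts[t]                  # combine, then observe
        cum_loss += (forecasts[t] - actuals[t]) ** 2    # update after the outcome
    return combined
```

Because only past errors enter the weights, the combination is a genuine real-time procedure; eta governs how aggressively weight shifts toward recently accurate forecasters, which matters most in the unstable environments the paper studies.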
  26. By: Francisco Barillas; Jay Shanken
    Abstract: A Bayesian asset-pricing test is derived that is easily computed in closed form from the standard F-statistic. Given a set of candidate traded factors, we develop a related test procedure that permits an analysis of model comparison, i.e., the computation of model probabilities for the collection of all possible pricing models that are based on subsets of the given factors. We find that the recent models of Hou, Xue and Zhang (2015a,b) and Fama and French (2015a,b) are both dominated by five- and six-factor models that include a momentum factor, along with value and profitability factors that are updated monthly.
    JEL: G11 G12
    Date: 2015–12

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.