
on Econometrics 
By:  John Chao (University of Maryland); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  This paper considers estimation and inference concerning the autoregressive coefficient (ρ) in a panel autoregression for which the degree of persistence in the time dimension is unknown. The main objective is to construct confidence intervals for ρ that are asymptotically valid, having asymptotic coverage probability at least that of the nominal level uniformly over the parameter space. It is shown that a properly normalized statistic based on the Anderson-Hsiao IV procedure, which we call the M statistic, is uniformly convergent and can be inverted to obtain asymptotically valid interval estimates. In the unit root case confidence intervals based on this procedure are unsatisfactorily wide and uninformative. To sharpen the intervals a new procedure is developed using information from unit root pretests to select alternative confidence intervals. Two sequential tests are used to assess how close ρ is to unity and to correspondingly tailor intervals near the unit root region. When ρ is close to unity, the width of these intervals shrinks to zero at a faster rate than that of the confidence interval based on the M statistic. Only when both tests reject the unit root hypothesis does the construction revert to the M statistic intervals, whose width shrinks at the optimal N^{-1/2}T^{-1/2} rate when the underlying process is stable. The asymptotic properties of this pretest-based procedure show that it produces confidence intervals with at least the prescribed coverage probability in large samples. Simulations confirm that the proposed interval estimation methods perform well in finite samples and are easy to implement in practice. A supplement to the paper provides an extensive set of new results on the asymptotic behavior of panel IV estimators in weak instrument settings. 
Keywords:  Confidence interval, Dynamic panel data models, panel IV, pooled OLS, Pretesting, Uniform inference 
JEL:  C23 C36 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:2071&r=ecm 
By:  Alessandro Barbarino; Efstathia Bura 
Abstract:  Factor models are widely used in summarizing large datasets with few underlying latent factors and in building time series forecasting models for economic variables. In these models, the reduction of the predictors and the modeling and forecasting of the response y are carried out in two separate and independent phases. We introduce a potentially more attractive alternative, Sufficient Dimension Reduction (SDR), that summarizes x as it relates to y, so that all the information in the conditional distribution of y|x is preserved. We study the relationship between SDR and popular estimation methods, such as ordinary least squares (OLS), dynamic factor models (DFM), partial least squares (PLS) and RIDGE regression, and establish the connection and fundamental differences between the DFM and SDR frameworks. We show that SDR significantly reduces the dimension of widely used macroeconomic series data, with one or two sufficient reductions delivering similar forecasting performance to that of competing methods in macro-forecasting. 
Keywords:  Diffusion Index ; Dimension Reduction ; Factor Models ; Forecasting ; Partial Least Squares ; Principal Components 
JEL:  C32 C53 C55 E17 
Date:  2017–01–12 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:201704&r=ecm 
By:  Jean-Thomas Bernard (Department of Economics, University of Ottawa); Ba Chu (Department of Economics, Carleton University); Lynda Khalaf (Department of Economics, Carleton University); Marcel-Cristian Voia (Department of Economics, Carleton University) 
Abstract:  We study estimation uncertainty when the object of interest contains one or more ratios of parameters. The ratio of parameters is a discontinuous parameter transformation; it has been shown that traditional confidence intervals often fail to cover the true ratio with very high probability. Constructing confidence sets for ratios using Fieller’s method is a viable solution, as the method avoids the discontinuity problem. This paper proposes an extension of the multivariate Fieller method beyond standard estimators, focusing on asymptotically mixed normal estimators that commonly arise in dynamic panel polynomial regression with persistent covariates. We discuss the cases where the underlying estimators converge to various distributions, depending on the persistence level of the covariates. We show that the asymptotic distribution of the pivotal statistic used for constructing a Fieller’s confidence set remains a standard Chi-squared distribution regardless of the rates of convergence; the rates are thus ‘self-normalized’ and can be unknown. A simulation study illustrates the finite sample properties of the proposed method in a dynamic polynomial panel. Our method is demonstrated to work well in small samples, even when the persistence coefficient is unity. 
Date:  2017–01–18 
URL:  http://d.repec.org/n?u=RePEc:car:carecp:1705&r=ecm 
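The classic univariate Fieller construction that the paper generalizes can be sketched as follows. This is the textbook quadratic-inversion version, not the authors' multivariate mixed-normal extension, and the function name and interface are illustrative:

```python
import math

def fieller_interval(a_hat, b_hat, var_a, var_b, cov_ab, z=1.96):
    """Fieller confidence set for the ratio theta = a/b (univariate case).

    Inverts the pivot (a_hat - theta*b_hat)^2 /
    (var_a - 2*theta*cov_ab + theta^2*var_b) <= z^2, i.e. solves the
    quadratic A*theta^2 - 2*B*theta + C <= 0. The set can be a bounded
    interval, the complement of one, or the whole real line, which is how
    the method avoids the discontinuity of the delta method near b = 0.
    """
    A = b_hat ** 2 - z ** 2 * var_b
    B = a_hat * b_hat - z ** 2 * cov_ab
    C = a_hat ** 2 - z ** 2 * var_a
    disc = B ** 2 - A * C
    if A > 0 and disc >= 0:  # b significantly away from zero: bounded interval
        return ("interval", (B - math.sqrt(disc)) / A, (B + math.sqrt(disc)) / A)
    if A < 0 and disc >= 0:  # unbounded set: complement of an interval
        return ("complement", (B + math.sqrt(disc)) / A, (B - math.sqrt(disc)) / A)
    return ("real_line", None, None)  # (degenerate A == 0 case folded in here)
```

When the denominator estimate is precise, the set is a bounded interval close to what the delta method would give; as the denominator's significance weakens, the set widens to an unbounded region instead of misleadingly staying short.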
By:  Wei Gao; Wicher Bergsma; Qiwei Yao 
Abstract:  For discrete panel data, the dynamic relationship between successive observations is often of interest. We consider a dynamic probit model for short panel data. A problem with estimating the dynamic parameter of interest is that the model contains a large number of nuisance parameters, one for each individual. Heckman proposed to use maximum likelihood estimation of the dynamic parameter, which, however, does not perform well if the individual effects are large. We suggest new estimators for the dynamic parameter, based on the assumption that the individual parameters are random and possibly large. Theoretical properties of our estimators are derived, and a simulation study shows they have some advantages compared with Heckman's estimator and the modified profile likelihood estimator for fixed effects. 
Keywords:  Dynamic probit regression; generalized linear models; panel data; probit models; static probit regression 
JEL:  C1 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:65165&r=ecm 
By:  Kleijnen, J.P.C. (Tilburg University, Center For Economic Research); Shi, Wen 
Abstract:  In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output (response). One SPRT is published in Wald (1945), and allows general distribution types. For a normal (Gaussian) distribution this SPRT assumes a known variance, but in our modified SPRT we estimate the variance. Another SPRT is published in Hall (1962), and assumes a normal distribution with an unknown variance estimated from a pilot sample. We also investigate a modification, replacing this pilot-sample estimator by a fully sequential estimator. We present a sequence of Monte Carlo experiments for quantifying the performance of these SPRTs. In experiment #1 the simulation outputs are normal. This experiment suggests that Wald (1945)’s SPRT with estimated variance gives error rates significantly higher than the nominal rates. Hall (1962)’s original and modified SPRTs are conservative; i.e., the actual error rates are much smaller than the pre-specified (nominal) rates. The most efficient SPRT is our modified Hall (1962) SPRT. In experiment #2 we examine the robustness of the various SPRTs in the case of non-normal output. If we know that the output has a specific non-normal distribution, such as the exponential distribution, then we may also apply Wald (1945)’s original SPRT. Throughout our investigation we pay special attention to the design and analysis of these experiments. 
Keywords:  sequential test; Wald; Hall; robustness; lognormal; gamma distribution; Monte Carlo 
JEL:  C00 C10 C90 C15 C44 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:5f24e30d79314be496f068f8898e6667&r=ecm 
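The baseline Wald (1945) SPRT for a normal mean with known variance can be sketched as follows; the paper's modified SPRTs replace the known variance with (pilot-sample or fully sequential) estimates. The function name and the toy data stream are illustrative:

```python
import math

def wald_sprt(stream, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1 (normal, known sigma).

    Accumulates the log-likelihood ratio observation by observation and
    stops as soon as it crosses log(beta/(1-alpha)) (accept H0) or
    log((1-beta)/alpha) (accept H1).
    """
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # log f1(x) - log f0(x) for normal densities with common sigma
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "undecided", n

# A short stream clearly centered near mu1 = 1 triggers acceptance of H1
# after five observations.
decision, n_used = wald_sprt([1.2, 0.8, 1.5, 1.0, 1.1, 0.9, 1.3],
                             mu0=0.0, mu1=1.0, sigma=1.0)
```

Wald's classical result is that these two thresholds approximately deliver the prescribed error rates regardless of the stopping time, which is why plugging in an estimated variance (as in the paper's modification) can distort the actual error rates.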
By:  Kohn, Robert; Quiroz, Matias; Tran, MinhNgoc; Villani, Mattias 
Abstract:  We propose Subsampling MCMC, a Markov Chain Monte Carlo (MCMC) framework where the likelihood function for n observations is estimated from a random subset of m observations. We introduce a general and highly efficient unbiased estimator of the log-likelihood based on control variates obtained from clustering the data. The cost of computing the log-likelihood estimator is much smaller than that of the full log-likelihood used by standard MCMC. The likelihood estimate is bias-corrected and used in two correlated pseudo-marginal algorithms to sample from a perturbed posterior, for which we derive the asymptotic error with respect to n and m, respectively. A practical estimator of the error is proposed and we show that the error is negligible even for a very small m in our applications. We demonstrate that Subsampling MCMC is substantially more efficient than standard MCMC in terms of sampling efficiency for a given computational budget, and that it outperforms other subsampling methods for MCMC proposed in the literature. 
Keywords:  Survey sampling; Big Data; Block pseudo-marginal; Estimated likelihood; Correlated pseudo-marginal; Bayesian inference 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/16205&r=ecm 
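The control-variate ("difference") estimator of the log-likelihood described above can be sketched as follows. The clustering step that produces the control variates is abstracted away (they are passed in precomputed), and all names are illustrative:

```python
import random

def subsample_loglik(ell, q, m, rng):
    """Unbiased difference estimator of the full-data log-likelihood.

    ell: per-observation log-likelihood contributions (expensive in practice)
    q:   cheap control variates approximating ell (in the paper these come
         from expansions around cluster centroids; here they are just given)
    m:   subsample size
    The estimator sum(q) + (n/m) * sum_{i in S} (ell[i] - q[i]) is unbiased
    for sum(ell) when S is drawn uniformly at random with replacement.
    """
    n = len(ell)
    total_q = sum(q)
    idx = [rng.randrange(n) for _ in range(m)]
    correction = (n / m) * sum(ell[i] - q[i] for i in idx)
    return total_q + correction

# With perfect control variates (q == ell) the correction vanishes and the
# estimator reproduces the full log-likelihood exactly, whatever the draw.
est = subsample_loglik([0.5, -1.2, 0.3, 2.0], [0.5, -1.2, 0.3, 2.0],
                       m=2, rng=random.Random(0))
```

The better the control variates track the individual log-likelihood terms, the smaller the variance of the correction, which is what makes a very small m viable inside the pseudo-marginal sampler.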
By:  Chia-Lin Chang (Department of Applied Economics and Department of Finance, National Chung Hsing University, Taiwan); Michael McAleer 
Abstract:  The purpose of the paper is to show that univariate GARCH is not a special case of multivariate GARCH (specifically, the Full BEKK model) except under parametric restrictions on the off-diagonal elements of the random coefficient autoregressive coefficient matrix; to provide the regularity conditions that arise from the underlying random coefficient autoregressive process; and to establish the appropriate parametric restrictions under which the (quasi-) maximum likelihood estimates have valid asymptotic properties. The paper provides a discussion of the stochastic processes, regularity conditions, and asymptotic properties of univariate and multivariate GARCH models. It is shown that the Full BEKK model, which in practice is estimated almost exclusively, has no underlying stochastic process, regularity conditions, or asymptotic properties. 
Keywords:  Random coefficient stochastic process, Off-diagonal parametric restrictions, Diagonal and Full BEKK, Regularity conditions, Asymptotic properties, Conditional volatility, Univariate and multivariate models. 
JEL:  C22 C32 C52 C58 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:ucm:doicae:1706&r=ecm 
By:  Christophe Chorro (Centre d'Economie de la Sorbonne); Florian Ielpo (Unigestion SA, Centre d'Economie de la Sorbonne et IPAG Business School); Benoît Sévi (LEMNA) 
Abstract:  The extraction of the jump component in the dynamics of asset prices has attracted a considerable and growing body of literature. Of particular interest is the decomposition of returns' quadratic variation between their continuous and jump components. Recent contributions highlight the importance of this component in forecasting volatility at different horizons. In this article, we extend a methodology developed in Maheu and McCurdy (2011) to exploit the information content of intraday data in forecasting the density of returns at horizons up to sixty days. We follow Boudt et al. (2011) to detect intraday returns that should be considered as jumps. The methodology is robust to intraweek periodicity and further delivers estimates of signed jumps, in contrast to the rest of the literature where only the squared jump component can be estimated. Then, we estimate a bivariate model of returns and volatilities where the jump component is independently modeled using a jump distribution that fits the stylized facts of the estimated jumps. Our empirical results for S&P 500 futures, U.S. 10-year Treasury futures, the USD/CAD exchange rate and WTI crude oil futures highlight the importance of considering the continuous/jump decomposition for density forecasting, while this is not the case for volatility point forecasts. In particular, we show that the model considering jumps apart from the continuous component consistently delivers better density forecasts for forecasting horizons ranging from 1 to 30 days. 
Keywords:  density forecasting; jumps; realized volatility; bipower variation; median realized volatility; leverage effect 
JEL:  C15 C32 C53 G1 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:17006&r=ecm 
By:  Arnaud Dufays; Maciej Augustyniak; Luc Bauwens 
Abstract:  A new model, the high-dimensional Markov (HDM) model, is proposed for financial returns and their latent variances. It is also applicable to modeling realized variances directly. Volatility is modeled as a product of three components: a Markov chain driving volatility persistence, an independent discrete process capable of generating jumps in the volatility, and a predictable (data-driven) process capturing the leverage effect. The Markov chain and jump components allow volatility to switch abruptly between thousands of states. The transition probability matrix of the Markov chain is structured in such a way that the multiplicity of the second largest eigenvalue can be greater than one. This distinctive feature generates a high degree of volatility persistence. The statistical properties of the HDM model are derived and an economic interpretation is attached to each component. In-sample results on six financial time series highlight that the HDM model compares favorably to the main existing volatility processes. A forecasting experiment shows that the HDM model significantly outperforms its competitors when predicting volatility over time horizons longer than five days. 
Keywords:  Volatility, Markov-switching, Persistence, Leverage effect. 
JEL:  C22 C51 C58 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:lvl:crrecr:1609&r=ecm 
By:  Kazuhiko Kakamu (Graduate School of Business Administration, Kobe University); Haruhisa Nishino (Faculty of Law, Politics and Economics, Chiba University) 
Abstract:  This study considers the estimation method of generalized beta (GB) distribution parameters based on grouped data from a Bayesian point of view. Because the GB distribution, which was proposed by McDonald and Xu (1995), includes several kinds of familiar distributions as special or limiting cases, it performs at least as well as those special or limiting distributions. Therefore, it is reasonable to estimate the parameters of the GB distribution. However, when the number of groups is small or when the number of parameters increases, it may become difficult to estimate the distribution parameters for grouped data using the existing estimation methods. This study uses a Tailored randomized block Metropolis-Hastings (TaRBMH) algorithm proposed by Chib and Ramamurthy (2010) to estimate the GB distribution parameters, and this method is applied to one simulated and two real datasets. Moreover, the Gini coefficients from the estimated parameters for the GB distribution are examined. 
Keywords:  Generalized beta (GB) distribution; Gini coefficient; grouped data; simulated annealing; Tailored randomized block Metropolis-Hastings (TaRBMH) algorithm. 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:kbb:dpaper:201608&r=ecm 
By:  Jungwoo Kim (Yonsei University); Joocheol Kim (Yonsei University) 
Abstract:  A new nonparametric forecasting method using a one-sided kernel is proposed, based on pseudo one-step-ahead data. The use of pseudo one-step-ahead data is inspired by the gap between training error and test error, which motivates reducing the test-error minimization problem to a training-error minimization problem. The theoretical basis and the numerical justification of the new approach are presented. 
Keywords:  Nonparametric methods, Time series, One-sided kernel, Local regression, Exponential smoothing 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2017rwp102&r=ecm 
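A generic one-sided kernel smoother of the kind discussed above can be sketched as follows. This is a minimal Gaussian-kernel illustration under our own naming, not the authors' exact estimator or their pseudo-data construction:

```python
import math

def one_sided_kernel_forecast(y, h):
    """One-step-ahead forecast from a one-sided (causal) Gaussian kernel.

    Only past observations receive weight, decaying with their distance to
    the forecast origin; the bandwidth h controls the effective memory.
    A generic sketch of one-sided kernel smoothing, not the paper's method.
    """
    t_next = len(y)  # index of the point being forecast
    weights = [math.exp(-0.5 * ((t_next - s) / h) ** 2) for s in range(len(y))]
    return sum(w * ys for w, ys in zip(weights, y)) / sum(weights)

# A constant series is forecast at its constant value; a small h tracks the
# most recent observations, a large h approaches the sample mean, which is
# the sense in which one-sided kernels nest exponential-smoothing behavior.
f_const = one_sided_kernel_forecast([2.0] * 10, h=3.0)
f_recent = one_sided_kernel_forecast([0.0] * 9 + [10.0], h=1.0)
f_smooth = one_sided_kernel_forecast([0.0] * 9 + [10.0], h=100.0)
```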
By:  Georgiev, Iliyan; Rodrigues, Paulo M M; Taylor, A M Robert 
Abstract:  We evaluate the impact of heavy-tailed innovations on some popular unit root tests. In the context of a near-integrated series driven by linear-process shocks, we demonstrate that their limiting distributions are altered under infinite variance vis-à-vis finite variance. Reassuringly, however, simulation results suggest that the impact of heavy-tailed innovations on these tests is relatively small. We use the framework of Amsler and Schmidt (2012) whereby the innovations have local-to-finite variances, being generated as a linear combination of draws from a thin-tailed distribution (in the domain of attraction of the Gaussian distribution) and a heavy-tailed distribution (in the normal domain of attraction of a stable law). We also explore the properties of ADF tests which employ Eicker-White standard errors, demonstrating that these can yield significant power improvements over conventional tests. 
Keywords:  Infinite variance, α-stable distribution, Eicker-White standard errors, asymptotic local power functions, weak dependence 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:18832&r=ecm 
By:  Bontemps, Christian; Magnac, Thierry 
Abstract:  For the last ten years, the topic of set identification has been much studied in the econometric literature. Classical inference methods have been generalized to the case in which moment inequalities and equalities define a set instead of a point. We review several instances of partial identification by focusing on examples in which the underlying economic restrictions are expressed as linear moments. This setting illustrates the fact that convex analysis helps not only in characterizing the identified set but also for inference. In this perspective, we review inference methods using convex analysis or inversion of tests and detail how geometric characterizations can be useful. 
Keywords:  set identification, moment inequality, convex set, support function. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:31337&r=ecm 
By:  Ruiz, Esther; Vicente, Javier de 
Abstract:  In the context of Dynamic Factor Models, factors are unobserved latent variables of interest. One of the most popular procedures for factor extraction is Principal Components (PC). Measuring the uncertainty associated with factor estimates should be part of interpreting these estimates. Several procedures have been proposed in the context of PC factor extraction to estimate this uncertainty. In this paper, we show that these methods are not adequate when used to measure the uncertainty of the factor estimates. We propose an alternative procedure and analyze its finite sample properties. The results are illustrated in the context of extracting the common factors of a large system of macroeconomic variables. 
Keywords:  Bootstrap; Extraction uncertainty; Principal Components; Dynamic Factor Models 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:23974&r=ecm 
By:  Stephanie Thomas 
Abstract:  Empirical economics frequently involves testing whether the predictions of a theoretical model are realized under controlled conditions. This paper proposes a new method for assessing whether binary (‘Yes’/‘No’) observations ranging over a continuous covariate exhibit a discrete change which is consistent with an underlying theoretical model. An application using observations from a controlled laboratory environment illustrates the method; the methodology can, however, be used for testing for a discrete change in any binary outcome variable which occurs over a continuous covariate, such as medical practice guidelines, firm entry and exit decisions, labour market decisions and many others. The observations are optimally smoothed using a nonparametric approach which is demonstrated to be superior, judged by four common criteria for such settings. Next, using the smoothed observations, two novel methods for assessment of a step pattern are proposed. Finally, nonparametric bootstrapped confidence intervals are used to evaluate the match of the pattern of the observed responses to that predicted by the theoretical model. The key methodological contributions are the two innovative methods proposed for assessing the step pattern. The promise of this approach is illustrated in an application to a controlled experimental lab data set, while the methods are easily extendable to many other settings. Further, the results generated can be easily communicated to diverse audiences. 
Keywords:  Evaluation of theoretical predictions, binary outcome data, applied nonparametric analysis, data from experiments 
JEL:  C18 C14 C4 C9 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:mcm:deptwp:201612&r=ecm 
By:  Arnaud Dufays; Jeroen V.K. Rombouts 
Abstract:  Changepoint time series specifications constitute flexible models that capture unknown structural changes by allowing for switches in the model parameters. Nevertheless, most models suffer from an overparametrization issue, since typically only one latent state variable drives the switches in all parameters. This implies that all parameters have to change when a break happens. To gauge whether and where there are structural breaks in realized variance, we introduce the sparse changepoint HAR model. The approach controls for model parsimony by limiting the number of parameters which evolve from one regime to another. Sparsity is achieved by employing a non-standard shrinkage prior distribution. We derive a Gibbs sampler for inferring the parameters of this process. Simulation studies illustrate the excellent performance of the sampler. Relying on this new framework, we study the stability of the HAR model using realized variance series of several major international indices between January 2000 and August 2015. 
Keywords:  Realized variance, Bayesian inference, Time series, Shrinkage prior, Changepoint model, Online forecasting 
JEL:  C11 C15 C22 C51 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:lvl:crrecr:1607&r=ecm 
By:  García-Martos, Carolina; Bastos, Guadalupe; Alonso Fernández, Andrés Modesto 
Abstract:  In this paper we work with multivariate time series that follow a Dynamic Factor Model. In particular, we consider the setting where factors are dominated by highly persistent AutoRegressive (AR) processes, and samples that are rather small. Therefore, the factors' AR models are estimated using small-sample bias correction techniques. A Monte Carlo study reveals that bias-correcting the AR coefficients of the factors yields better results in terms of prediction interval coverage. As expected, the simulation reveals that bias correction is more successful for smaller samples. Results are gathered both when the AR order and the number of factors are known and when they are unknown. We also study the advantages of this technique for a set of Industrial Production Indexes of several European countries. 
Keywords:  Dynamic Factor Model; persistent process; autoregressive models; small sample bias correction; Dimensionality reduction 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:24029&r=ecm 
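One standard analytical small-sample correction of the kind applied to the factors' AR models is Kendall's first-order adjustment of the AR(1) coefficient; the paper's exact correction may differ, and the function name is illustrative:

```python
def ar1_bias_corrected(y):
    """OLS estimate of an AR(1) coefficient plus Kendall's first-order
    small-sample bias correction rho_corr = rho_hat + (1 + 3*rho_hat) / T.

    This illustrates the general idea of analytical bias correction; it is
    not necessarily the correction used in the paper.
    """
    T = len(y)
    y_lag, y_cur = y[:-1], y[1:]
    mx = sum(y_lag) / len(y_lag)
    my = sum(y_cur) / len(y_cur)
    num = sum((a - mx) * (b - my) for a, b in zip(y_lag, y_cur))
    den = sum((a - mx) ** 2 for a in y_lag)
    rho_hat = num / den
    rho_corr = rho_hat + (1 + 3 * rho_hat) / T
    return rho_hat, rho_corr

# On a noiseless AR(1) path y_{t+1} = 0.9 * y_t the OLS slope is 0.9 and the
# correction pushes the estimate upward by (1 + 2.7) / T, offsetting the
# downward bias OLS exhibits on noisy persistent series.
rho_hat, rho_corr = ar1_bias_corrected([0.9 ** k for k in range(30)])
```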
By:  Boneva, Lena (Bank of England); Linton, Oliver (University of Cambridge) 
Abstract:  What is the effect of funding costs on the conditional probability of issuing a corporate bond? We study this question in a novel dataset covering 5,610 issuances by US firms over the period from 1990 to 2014. Identification of this effect is complicated because of unobserved, common shocks such as the global financial crisis. To account for these shocks, we extend the common correlated effects estimator to settings where outcomes are discrete. Both the asymptotic properties and the sample behaviour of this estimator are documented. We find that for non-financial firms, yields are negatively related to bond issuance but that effect is larger in the pre-crisis period. 
Keywords:  Heterogeneous panel data; discrete choice models; capital structure 
JEL:  C23 C25 G32 
Date:  2017–01–20 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0640&r=ecm 
By:  Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn 
Abstract:  We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We develop our copula for first-order Markov series, and extend it to higher orders and multivariate series. We derive the copula of a volatility proxy, based on which we propose new measures of volatility dependence, including co-movement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series. Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate GARCH models, and produce more accurate value-at-risk forecasts. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.07152&r=ecm 
By:  Enrique Moral-Benito (Banco de España); Paul Allison (University of Pennsylvania); Richard Williams (University of Notre Dame) 
Abstract:  The Arellano and Bond (1991) estimator is widely used among applied researchers when estimating dynamic panels with fixed effects and predetermined regressors. This estimator might behave poorly in finite samples when the cross-section dimension of the data is small (i.e. small N), especially if the variables under analysis are persistent over time. This paper discusses a maximum likelihood estimator that is asymptotically equivalent to Arellano and Bond (1991) but presents better finite sample behaviour. Moreover, the estimator is easy to implement in Stata using the xtdpdml command as described in the companion paper Williams et al. (2016), which also discusses further advantages of the proposed estimator for practitioners. 
Keywords:  dynamic panel data, maximum likelihood estimation 
JEL:  C23 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:bde:wpaper:1703&r=ecm 
By:  Vincent Boucher 
Abstract:  I present a strategic model of network formation with positive network externalities in which individuals have preferences for being part of a clique. I build on the theory of supermodular games (Topkis, 1979) and focus on the greatest Nash equilibrium of the game. Although the structure of the equilibrium network cannot be expressed analytically, I show that it can easily be simulated. I propose an approximate Bayesian computation (ABC) framework to make inferences about individuals' preferences, and provide an illustration using data on high school friendships. 
Keywords:  Network Formation, Supermodular Games, Approximate Bayesian Computation 
JEL:  D85 C11 C15 C72 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:lvl:crrecr:1604&r=ecm 
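The ABC rejection scheme underlying this kind of inference can be sketched generically as follows. The network-formation simulator is abstracted into a user-supplied function, and the toy example at the bottom infers a normal mean rather than network preferences; all names are illustrative:

```python
import random

def abc_rejection(obs_stat, simulate, prior_draw, distance, eps, n_draws, rng):
    """Approximate Bayesian computation by rejection sampling.

    Draw theta from the prior, simulate data given theta, and keep theta
    whenever the simulated summary statistic lands within eps of the
    observed one. The accepted draws approximate the posterior.
    """
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), obs_stat) <= eps:
            accepted.append(theta)
    return accepted

# Toy illustration: recover the mean of a normal from its sample mean.
# (In the paper's setting, `simulate` would instead generate the greatest
# Nash equilibrium network and the statistic would summarize its cliques.)
rng = random.Random(1)
posterior = abc_rejection(
    obs_stat=0.5,
    simulate=lambda th, r: sum(r.gauss(th, 1.0) for _ in range(20)) / 20,
    prior_draw=lambda r: r.uniform(-2.0, 2.0),
    distance=lambda a, b: abs(a - b),
    eps=0.2, n_draws=2000, rng=rng)
```

ABC only requires the ability to simulate from the model, which is exactly the situation described above: the equilibrium network cannot be expressed analytically but can easily be simulated.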