
on Econometrics 
By:  Otilia Boldea (Tilburg University); Adriana Cornea-Madeira (University of York); Alastair R. Hall (University of Manchester) 
Abstract:  This paper analyses the use of bootstrap methods to test for parameter change in linear models estimated via Two Stage Least Squares (2SLS). Two types of test are considered: one where the null hypothesis is of no change and the alternative hypothesis involves discrete change at k unknown breakpoints in the sample; and a second test where the null hypothesis is that there is discrete parameter change at l breakpoints in the sample against an alternative in which the parameters change at l + 1 breakpoints. In both cases, we consider inferences based on a sup-Wald-type statistic using either the wild recursive bootstrap or the wild fixed bootstrap. We establish the asymptotic validity of these bootstrap tests under a set of general conditions that allow the errors to exhibit conditional and/or unconditional heteroskedasticity, and report results from a simulation study that indicate the tests yield reliable inferences in the sample sizes often encountered in macroeconomics. The analysis covers the cases where the first-stage estimation of 2SLS involves a model whose parameters are either constant or themselves subject to discrete parameter change. If the errors exhibit unconditional heteroskedasticity and/or the reduced form is unstable, then the bootstrap methods are particularly attractive because the limiting distributions of the test statistics are not pivotal. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.04125&r=all 
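To fix ideas, here is a minimal sketch of the wild (fixed-regressor) bootstrap principle the paper builds on, in a plain OLS toy model rather than the authors' 2SLS break-testing setup; the data-generating process, sample size, and number of bootstrap draws are all made up for illustration. Residuals are resampled with Rademacher sign flips, which preserves conditional heteroskedasticity:

```python
import random

random.seed(0)

# Toy model y = b*x + e with heteroskedastic errors (variance depends on x).
n, b = 200, 1.5
x = [random.gauss(0, 1) for _ in range(n)]
e = [abs(x[i]) * random.gauss(0, 1) for i in range(n)]
y = [b * x[i] + e[i] for i in range(n)]

def ols_slope(x, y):
    # No-intercept OLS slope: sum(x*y) / sum(x*x).
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

bhat = ols_slope(x, y)
resid = [y[i] - bhat * x[i] for i in range(n)]

# Wild fixed-design bootstrap: keep regressors fixed and flip residual signs
# with i.i.d. Rademacher weights, so each bootstrap sample inherits the
# heteroskedasticity pattern of the original residuals.
B = 499
boot_slopes = []
for _ in range(B):
    ystar = [bhat * x[i] + resid[i] * random.choice((-1.0, 1.0)) for i in range(n)]
    boot_slopes.append(ols_slope(x, ystar))

mean_b = sum(boot_slopes) / B
se = (sum((bs - mean_b) ** 2 for bs in boot_slopes) / (B - 1)) ** 0.5
print(round(bhat, 3), round(se, 3))
```

The same resampling logic underlies the recursive variant, except that bootstrap samples are built up recursively through the model's dynamics.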
By:  Lewis, Daniel J. (Federal Reserve Bank of New York) 
Abstract:  Identification via heteroskedasticity exploits differences in variances across regimes to identify parameters in simultaneous equations. I study weak identification in such models, which arises when variances change very little or the variances of multiple shocks change close to proportionally. I show that this causes standard inference to become unreliable, propose two tests to detect weak identification, and develop nonconservative methods for robust inference on a subset of the parameter vector. I apply these tools to monetary policy shocks, identified using heteroskedasticity in high frequency data. I detect weak identification in daily data, causing standard inference methods to be invalid. However, using intraday data instead allows the shocks to be strongly identified. 
Keywords:  heteroskedasticity; weak identification; robust inference; pretesting; monetary policy; impulse response function 
JEL:  C12 C32 E43 
Date:  2018–12–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:876&r=all 
By:  Pedro H. C. Sant'Anna; Jun B. Zhao 
Abstract:  This article proposes a doubly robust estimation procedure for the average treatment effect on the treated in difference-in-differences (DID) research designs. In contrast to alternative DID estimators, our proposed estimators are consistent if either (but not necessarily both) a propensity score model or outcome regression models are correctly specified. In addition, our proposed methodology accommodates linear and nonlinear specifications, allows for treatment effect heterogeneity, and can be applied with either panel or repeated cross-section data. We establish the asymptotic distribution of our proposed doubly robust estimators, and propose a computationally simple bootstrap procedure to conduct asymptotically valid inference. Our inference procedures directly account for multiple testing, and are therefore suitable in situations where researchers are interested in the effect of a given policy on many different outcomes. We demonstrate the relevance of our proposed policy evaluation tools in two different applications. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1812.01723&r=all 
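A stylized sketch of the doubly robust DID structure may help: here a single binary covariate lets both the propensity score and the control outcome-change regression be estimated by cell means, so both "models" are saturated. This is a simplified illustration of the general idea, not the authors' estimator, and the data-generating process (with true ATT = 2) is invented:

```python
import random

random.seed(1)

n = 4000
data = []
for _ in range(n):
    x = 1 if random.random() < 0.5 else 0
    d = 1 if random.random() < (0.6 if x else 0.3) else 0   # treatment propensity depends on x
    # Outcome change over the two periods: the trend depends on x (parallel
    # trends hold conditional on x), and the true ATT is 2.
    dy = 1.0 + 0.5 * x + (2.0 if d else 0.0) + random.gauss(0, 1)
    data.append((x, d, dy))

def cell_mean(values):
    return sum(values) / len(values) if values else 0.0

# Propensity score e(x) and control outcome-change regression m0(x), by cell.
e = {x: cell_mean([d for xi, d, _ in data if xi == x]) for x in (0, 1)}
m0 = {x: cell_mean([dy for xi, d, dy in data if xi == x and d == 0]) for x in (0, 1)}

p_treat = sum(d for _, d, _ in data) / n
# Doubly robust ATT: treated minus propensity-reweighted controls, both
# centred at the control outcome regression m0(x).
att = sum(
    (d / p_treat - (1 - d) * e[x] / ((1 - e[x]) * p_treat)) * (dy - m0[x])
    for x, d, dy in data
) / n
print(round(att, 2))
```

The double robustness shows up in the estimand: the correction term has mean zero if either e(x) or m0(x) is correct, so misspecifying one of the two leaves the estimator consistent.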
By:  Tian, Maoshan (Cardiff Business School); Dixon, Huw David (Cardiff Business School) 
Abstract:  The cross-sectional distribution of completed lifetimes (DCL) is a new estimator defined and derived by Dixon (2012) in the general Taylor price model (GTE). The DCL is a cross-sectional weighted estimator whose weights sum to 1, and it provides a new statistic for describing the data. This paper focuses on the cross-sectional distribution in survival analysis. The delta method is applied to derive the variance of three cumulative distribution functions: the distribution of duration, the cross-sectional distribution of age, and the distribution of duration across firms. A Monte Carlo experiment is used for the simulation study. The empirical results show that the asymptotic variance formulae for the DCL and the distribution of duration perform well when the sample size is above 25. As the sample size increases, the bias of the variance estimate shrinks. 
Keywords:  Delta Method, Survival Analysis, Kaplan-Meier Estimator 
JEL:  C19 C46 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:cdf:wpaper:2018/27&r=all 
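As a reminder of the delta method used above: for a smooth transformation g, Var(g(theta_hat)) is approximately g'(theta)^2 times Var(theta_hat). A quick Monte Carlo check of this approximation (in a generic setting unrelated to the DCL itself, with invented parameters) looks like this:

```python
import math
import random
import statistics

random.seed(2)

# Delta method: Var(g(theta_hat)) ~ g'(theta)^2 * Var(theta_hat).
# Example: theta_hat is the mean of n Exponential(1) draws (theta = 1),
# g = log, so the approximate variance of log(theta_hat) is 1^2 * (1/n) = 1/n.
n, reps = 100, 2000
vals = []
for _ in range(reps):
    m = sum(random.expovariate(1.0) for _ in range(n)) / n
    vals.append(math.log(m))

mc_var = statistics.variance(vals)   # simulated variance of g(theta_hat)
delta_var = 1.0 / n                  # delta-method approximation
print(round(mc_var, 4), delta_var)
```

The simulated variance should land close to the analytical 1/n, which is the same kind of agreement the paper checks for its three distribution functions.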
By:  Sébastien Laurent (Aix-Marseille Univ., CNRS, EHESS, Centrale Marseille, AMSE & Aix-Marseille Graduate School of Management); Shuping Shi (Department of Economics, Macquarie University & Centre for Applied Macroeconomic Analysis (CAMA)) 
Abstract:  Logarithms of prices of financial assets are conventionally assumed to follow drift-diffusion processes. While the drift term is typically ignored in the infill asymptotic theory and applications, the presence of nonzero drifts is an undeniable fact. The finite sample theory and extensive simulations provided in this paper reveal that the drift component has a nonnegligible impact on the estimation accuracy of volatility and leads to a dramatic power loss of a class of jump identification procedures. We propose an alternative construction of volatility estimators and jump tests and observe significant improvement of both in the presence of nonnegligible drift. As an illustration, we apply the new volatility estimators and jump tests, along with their original versions, to 21 years of 5-minute log-returns of the NASDAQ stock price index. 
Keywords:  diffusion process, nonzero drift, finite sample theory, volatility estimation, jumps 
JEL:  C12 C14 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:aim:wpaimx:1843&r=all 
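The drift bias described above is easy to visualize in a simulation. The sketch below (not the paper's estimator; the drift is set deliberately large so the effect is visible at this sample size) computes realized variance for a discretized drift-diffusion with and without the drift term:

```python
import math
import random

random.seed(3)

# Realized variance (RV) with and without a drift in the log-price:
# dX = mu dt + sigma dW, sampled at n intervals over [0, T].  The drift adds
# a deterministic term of order mu^2 T^2 / n to RV, negligible only when mu
# is small relative to sigma at the sampling frequency.
n, T = 288, 1.0
dt = T / n
sigma, mu = 0.2, 2.0   # drift chosen large on purpose to make the bias visible

diffusive = [sigma * math.sqrt(dt) * random.gauss(0, 1) for _ in range(n)]
with_drift = [mu * dt + r for r in diffusive]

def rv(increments):
    return sum(r * r for r in increments)

rv_plain, rv_drift = rv(diffusive), rv(with_drift)
iv = sigma ** 2 * T                  # integrated variance, the target
drift_bias = mu ** 2 * T ** 2 / n    # deterministic drift contribution
print(round(rv_plain, 4), round(rv_drift, 4), iv, round(drift_bias, 4))
```

With these (exaggerated) parameters the drift contribution is a sizable fraction of the integrated variance, which is the kind of distortion the paper's alternative estimators are designed to remove.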
By:  Antonio Fidalgo (HSFresenius University of Applied Sciences) 
Abstract:  Anthropometric historical analysis depends on the assumption that human characteristics—such as height—are normally distributed. I propose and evaluate a metric entropy, based on nonparametrically estimated densities, as a statistic for a consistent test of normality. My first test applies to full distributions for which other tests already exist and performs similarly. A modified version applies to truncated samples for which no test has been previously devised. This second test exhibits correct size and high power against standard alternatives. In contrast to the distributional prior of Floud et al. (1990), the test rejects normality in large parts of their sample; the remaining data reveal a downward trend in height, not upward as they argue. 
Keywords:  test of normality, truncated samples, anthropometrics 
JEL:  C12 C14 N3 N13 J11 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:hes:wpaper:0142&r=all 
By:  Lidan Tan; Khai X. Chiong; Hyungsik Roger Moon 
Abstract:  In this paper, we investigate seemingly unrelated regression (SUR) models that allow the number of equations (N) to be large and comparable to the number of observations in each equation (T). It is well known in the literature that the conventional SUR estimator, for example the generalized least squares (GLS) estimator of Zellner (1962), does not perform well in this setting. As the main contribution of the paper, we propose a new feasible GLS estimator called the feasible graphical lasso (FGLasso) estimator. For a feasible implementation of the GLS estimator, we use the graphical lasso estimation of the precision matrix (the inverse of the covariance matrix of the equation system errors), assuming that the underlying unknown precision matrix is sparse. We derive asymptotic theories of the new estimator and investigate its finite sample properties via Monte Carlo simulations. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.05567&r=all 
By:  Stefan Richter; Weining Wang; Wei Biao Wu 
Abstract:  We develop a uniform test for detecting and dating explosive behavior of a strictly stationary GARCH$(r,s)$ (generalized autoregressive conditional heteroskedasticity) process. Namely, we test the null hypothesis of a globally stable GARCH process with constant parameters against an alternative where there is an 'abnormal' period with changed parameter values. During this period, the change may lead to an explosive behavior of the volatility process. It is assumed that both the magnitude and the timing of the breaks are unknown. We develop a double-supremum test for the existence of a break, and then provide an algorithm to identify the period of change. Our theoretical results hold under mild moment assumptions on the innovations of the GARCH process. Technically, the existing properties for the QMLE in the GARCH model need to be reinvestigated to hold uniformly over all possible periods of change. The key results involve a uniform weak Bahadur representation for the estimated parameters, which leads to weak convergence of the test statistic to the supremum of a Gaussian process. In simulations we show that the test has good size and power for reasonably large time series lengths. We apply the test to Apple asset returns and Bitcoin returns. 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1812.03475&r=all 
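The kind of alternative the test above targets can be simulated directly. In this sketch (purely illustrative; the break window and parameter values are invented) a GARCH(1,1) process is globally stable except for a period in which a + b exceeds one, making volatility locally explosive:

```python
import random
import statistics

random.seed(8)

# GARCH(1,1) recursion: sigma2_t = omega + a * x_{t-1}^2 + b * sigma2_{t-1}.
# Stability requires a + b < 1; inside the hypothetical break window we set
# a + b = 1.5, which makes the volatility process locally explosive.
def params(t):
    return (0.1, 0.7, 0.8) if 300 <= t < 400 else (0.1, 0.1, 0.8)

n = 500
sigma2, s2_path = 1.0, []
for t in range(n):
    omega, a, b = params(t)
    x = (sigma2 ** 0.5) * random.gauss(0, 1)   # return with conditional variance sigma2
    s2_path.append(sigma2)
    sigma2 = omega + a * x * x + b * sigma2

stable_level = statistics.mean(s2_path[:300])    # near the stationary mean of 1
break_level = statistics.mean(s2_path[350:400])  # orders of magnitude larger
print(round(stable_level, 2), round(break_level, 2))
```

A detection procedure of the sort described in the abstract would flag the window where the conditional variance path departs from its globally stable level.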
By:  Lui, Yiu Lim (School of Economics, Singapore Management University); Xiao, Weilin (School of Management, Zhejiang University); Yu, Jun (School of Economics, Singapore Management University) 
Abstract:  This paper first extends the results of Phillips and Magdalinos (2007a) by allowing for anti-persistent errors in mildly explosive autoregressive models. It is shown that the Cauchy asymptotic theory remains valid for the least squares (LS) estimator. The paper then extends the results of Phillips, Magdalinos and Giraitis (2010) by allowing for serially correlated errors of various forms in local-to-mild-explosive autoregressive models. It is shown that the result of a smooth transition in the limit theory between local-to-unity and mild explosiveness remains valid for the LS estimator. Finally, the limit theory for autoregression with intercept is developed. 
Keywords:  Anti-persistent; unit root; mildly explosive; limit theory; bubble; fractional integration; Young integral 
JEL:  C22 
Date:  2018–12–15 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2018_022&r=all 
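For readers unfamiliar with the mildly explosive setup, the sketch below simulates an AR(1) with root rho_n = 1 + c/n^alpha, alpha in (0, 1), and computes the LS estimator. The i.i.d. Gaussian errors here are for illustration only; the paper's contribution concerns anti-persistent and serially correlated errors, which this toy example does not reproduce:

```python
import random

random.seed(4)

# Mildly explosive AR(1): rho_n = 1 + c / n^alpha with alpha in (0, 1), so the
# root approaches unity as the sample grows but the path is still explosive.
n, c, alpha = 500, 1.0, 0.8
rho = 1.0 + c / n ** alpha

y = [0.0]
for _ in range(n):
    y.append(rho * y[-1] + random.gauss(0, 1))

# Least-squares estimate of the autoregressive coefficient from the path.
num = sum(y[t] * y[t + 1] for t in range(n))
den = sum(y[t] * y[t] for t in range(n))
rho_hat = num / den
print(round(rho, 5), round(rho_hat, 5))
```

The LS estimator converges very quickly in this regime (the centred, normalized estimator has a Cauchy limit), which is why rho_hat lands so close to rho even at moderate n.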
By:  Eli BenMichael; Avi Feller; Jesse Rothstein 
Abstract:  The synthetic control method (SCM) is a popular approach for estimating the impact of a treatment on a single unit in panel data settings. The "synthetic control" is a weighted average of control units that balances the treated unit's pretreatment outcomes as closely as possible. The curse of dimensionality, however, means that SCM does not generally achieve exact balance, which can bias the SCM estimate. We propose an extension, Augmented SCM, which uses an outcome model to estimate the bias due to covariate imbalance and then debiases the original SCM estimate, analogous to bias correction for inexact matching. We motivate this approach by showing that SCM is a (regularized) inverse propensity score weighting estimator, with pretreatment outcomes as covariates and a ridge penalty on the propensity score coefficients. We give theoretical guarantees for specific cases and propose a new inference procedure. We demonstrate gains from Augmented SCM with extensive simulation studies and apply this framework to canonical SCM examples. We implement the proposed method in the new augsynth R package. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.04170&r=all 
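The balancing idea behind SCM can be shown in a deliberately tiny example: with two control units, the simplex weight is one-dimensional and a grid search suffices. All numbers here are invented, and this sketch covers only the plain SCM weight step, not the augmentation or the `augsynth` implementation:

```python
# Synthetic control with two control units: choose w in [0, 1] so that
# w * c1 + (1 - w) * c2 tracks the treated unit's pre-treatment outcomes.
treated_pre = [1.0, 2.0, 3.0]
c1_pre = [0.5, 1.5, 2.5]
c2_pre = [2.0, 3.0, 4.0]

def imbalance(w):
    # Sum of squared pre-treatment discrepancies for weight w.
    return sum((t - (w * a + (1 - w) * b)) ** 2
               for t, a, b in zip(treated_pre, c1_pre, c2_pre))

# Grid search over the (here one-dimensional) simplex of weights.
w_star = min((i / 1000 for i in range(1001)), key=imbalance)

# Post-period effect estimate: treated outcome minus the synthetic control.
treated_post, c1_post, c2_post = 5.0, 3.5, 5.0
effect = treated_post - (w_star * c1_post + (1 - w_star) * c2_post)
print(w_star, round(effect, 3))
```

In this toy example exact balance is achievable (w = 2/3); the paper's point is that in realistic dimensions it is not, and Augmented SCM corrects the resulting bias with an outcome model.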
By:  Tsionas, Efthymios G.; Tran, Kien C.; Michaelides, Panayotis G. 
Abstract:  In this paper, we generalize the stochastic frontier model to allow for heterogeneous technologies and inefficiencies in a structured way that allows for learning and adapting. We propose a general model and various special cases, organized around the idea that there is switching or transition from one technology to the other(s), and construct threshold stochastic frontier models. We suggest Bayesian inferences for the general model proposed here and its special cases using Gibbs sampling with data augmentation. The new techniques are applied, with very satisfactory results, to a panel of world production functions using, as switching or transition variables, human capital, age of capital stock (representing input quality), as well as a time trend to capture structural switching. 
Keywords:  Stochastic frontier, Regime switching, Efficiency measurement, Bayesian inference, Markov Chain Monte Carlo 
JEL:  C11 C13 
Date:  2017–12–15 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:86848&r=all 
By:  Andries C. van Vlodrop (Vrije Universiteit Amsterdam); Andre (A.) Lucas (Vrije Universiteit Amsterdam) 
Abstract:  We investigate covariance matrix estimation in vast-dimensional spaces of 1,500 up to 2,000 stocks using fundamental factor models (FFMs). FFMs are the typical benchmark in the asset management industry and depart from the usual statistical factor models and the factor models with observed factors used in the statistical and finance literature. Little is known about estimation risk in FFMs in high dimensions. We investigate whether recent linear and nonlinear shrinkage methods help to reduce the estimation risk in the asset return covariance matrix. Our findings indicate that modest improvements are possible using high-dimensional shrinkage techniques. The gains, however, are not realized using standard plug-in shrinkage parameters from the literature, but require sample-dependent tuning. 
Keywords:  Portfolio allocation; high dimensions; linear and nonlinear shrinkage; factor models 
JEL:  G11 C38 C58 
Date:  2018–12–22 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20180099&r=all 
By:  Damian Jelito; Marcin Pitera 
Abstract:  In this paper we show how to use a statistical phenomenon commonly known as the 20-60-20 rule to construct an efficient fat-tail measurement framework. We construct a powerful statistical goodness-of-fit test that has a direct (financial) interpretation and can be used to assess the impact of fat tails on the central normality assumption about the data. In contrast to the Jarque-Bera test, which is based on the third and fourth moments, our test relies on conditional second moments. We show asymptotic normality of the proposed test statistic and perform an empirical study on market data to emphasise the usefulness of our approach. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.05464&r=all 
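The 20-60-20 phenomenon invoked above can be checked numerically: for a normal variable, splitting the sample at its 20th and 80th percentiles yields three groups with approximately equal variances. The sketch below verifies that property on simulated data; it is a demonstration of the phenomenon, not the paper's test statistic:

```python
import random
import statistics

random.seed(5)

# 20-60-20 rule: for a normal sample, the bottom 20%, middle 60%, and top 20%
# have (approximately) equal conditional variances.
n = 100000
xs = sorted(random.gauss(0, 1) for _ in range(n))
lo = xs[: n // 5]
mid = xs[n // 5 : -(n // 5)]
hi = xs[-(n // 5) :]

v = [statistics.pvariance(g) for g in (lo, mid, hi)]
print([round(x, 3) for x in v])
```

A fat-tailed distribution breaks this equality (the tail groups become much more dispersed than the middle), which is the deviation a conditional-second-moment test can pick up.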
By:  Kolokolov, Aleksey; Livieri, Giulia; Pirino, Davide 
Abstract:  Asset transaction prices sampled at high frequency are much staler than one might expect, in the sense that they frequently lack new updates and show zero returns. In this paper, we propose a theoretical framework for formalizing this phenomenon. It hinges on the existence of a latent continuous-time stochastic process pt, valued in the open interval (0, 1), which represents at any point in time the probability of the occurrence of a zero return. Using a standard infill asymptotic design, we develop an inferential theory for nonparametrically testing the null hypothesis that pt is constant over one day. Under the alternative, which encompasses a semimartingale model for pt, we develop nonparametric inferential theory for the probability of staleness that includes the estimation of various integrated functionals of pt and its quadratic variation. Using a large dataset of stocks, we provide empirical evidence that the null of a constant probability of staleness is clearly rejected. We then show that the variability of pt is mainly driven by transaction volume and is almost unaffected by the bid-ask spread and realized volatility. 
Keywords:  staleness, idle time, liquidity, zero returns, stable convergence 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:zbw:safewp:236&r=all 
By:  Luisa Corrado; Melvyn Weeks; Thanasis Stengos; M. Ege Yazgan 
Abstract:  In many applications common in testing for convergence, the number of cross-sectional units is large and the number of time periods is small. In these situations, asymptotic tests based on an omnibus null hypothesis suffer from a number of problems. In this paper we propose a multiple pairwise comparisons method based on a recursive bootstrap to test for convergence with no prior information on the composition of convergence clubs. Monte Carlo simulations suggest that our bootstrap-based test performs well in correctly identifying convergence clubs when compared with other similar tests that rely on asymptotic arguments. Across a potentially large number of regions, using both cross-country and regional data for the European Union, we find that the size distortion which afflicts standard tests, and which results in a bias towards finding less convergence, is ameliorated when we utilise our bootstrap test. 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1812.09518&r=all 
By:  Yaein Baek 
Abstract:  This paper proposes a point estimator of the break location for a one-time structural break in linear regression models. If the break magnitude is small, the least-squares estimator of the break date has two modes at the ends of the finite sample period, regardless of the true break location. I suggest a modification of the least-squares objective function to solve this problem. The modified objective function incorporates estimation uncertainty that varies across potential break dates. The new break point estimator is consistent and has a unimodal finite sample distribution under a small break magnitude. A limit distribution is provided under an infill asymptotic framework, which verifies that the new estimator outperforms the least-squares estimator. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.03720&r=all 
By:  Jos\'e E. Figueroa-L\'opez; Cheng Li; Jeffrey Nisen 
Abstract:  In this paper, we study a threshold-kernel estimation method for jump-diffusion processes, which iteratively applies thresholding and kernel methods in an approximately optimal way to achieve improved finite-sample performance. As in Figueroa-L\'opez and Nisen (2013), we use the expected number of jump misclassifications as the objective function to optimally select the threshold parameter of the jump detection scheme. We prove that the objective function is quasi-convex and obtain a novel second-order infill approximation of the optimal threshold, hence extending results from the aforementioned paper. The approximate optimal threshold depends not only on the spot volatility $\sigma_t$, but also turns out to be a decreasing function of the jump intensity and the value of the jump density at the origin. Estimation methods for these quantities are then developed: the spot volatility is estimated by a kernel estimator with a threshold, and the value of the jump density at the origin is estimated by a kernel density estimator applied to those increments deemed to contain jumps by the chosen thresholding criterion. Due to the interdependency between the model parameters and the approximate optimal estimators designed to estimate them, a type of iterative fixed-point algorithm is developed to implement them. Simulation studies show that it is not only feasible to implement the higher-order local optimal threshold scheme but also that it is superior to schemes based only on the first-order approximation and/or on average values of the parameters over the estimation time period. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.07499&r=all 
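The basic thresholding step the paper refines can be sketched in a few lines: increments larger in magnitude than a cutoff of order sqrt(dt * log(1/dt)) are classified as jumps and discarded when estimating integrated variance. This is the first-order idea only, with invented parameters, not the paper's iterative optimal-threshold scheme:

```python
import math
import random

random.seed(6)

# Simulate diffusive increments, contaminate a few with large jumps, and
# compare plain realized variance with its truncated (thresholded) version.
n, T, sigma = 1000, 1.0, 0.3
dt = T / n
threshold = 3 * sigma * math.sqrt(dt * math.log(1 / dt))

increments = [sigma * math.sqrt(dt) * random.gauss(0, 1) for _ in range(n)]
for i in (100, 400, 800):                     # inject three jumps of size 0.5
    increments[i] += random.choice((-1.0, 1.0)) * 0.5

rv = sum(r * r for r in increments)                            # jump-contaminated
trv = sum(r * r for r in increments if abs(r) <= threshold)    # truncated RV
print(round(rv, 3), round(trv, 3), sigma ** 2 * T)
```

The truncated estimator recovers the integrated variance (0.09 here) while plain realized variance absorbs the squared jumps; choosing the cutoff optimally, as the paper does, trades off the two kinds of misclassification.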
By:  Shenhao Wang; Jinhua Zhao 
Abstract:  Deep neural networks (DNNs) have been increasingly applied to microscopic demand analysis. While a DNN often outperforms the traditional multinomial logit (MNL) model, it is unclear whether we can obtain interpretable economic information from DNN-based choice models beyond prediction accuracy. This paper provides an empirical method for numerically extracting valuable economic information such as choice probabilities, probability derivatives (or elasticities), and marginal rates of substitution. Using a survey collected in Singapore, we find that when the economic information is aggregated over the population or over models, DNN models can reveal roughly S-shaped choice probability curves, inverse bell-shaped driving probability derivatives with respect to costs and time, and a reasonable median value of time (VOT). At the disaggregate level, however, the choice probability curves of DNN models can be non-monotonically decreasing with costs and highly sensitive to the particular estimation; derivatives of choice probabilities with respect to costs and time can be positive in some regions; and the VOT can be infinite, undefined, zero, or arbitrarily large. Some of these patterns can be seen as counterintuitive, while others can potentially be regarded as advantages of DNNs, whose flexibility can reflect certain behavioural peculiarities. These patterns broadly relate to two theoretical challenges of DNNs: the irregularity of their probability space and large estimation errors. Overall, this study provides practical guidance for using DNNs in demand analysis, with two suggestions. First, researchers can use numerical methods to obtain behaviourally intuitive choice probabilities, probability derivatives, and a reasonable VOT. Second, given the large estimation errors and the irregularity of the probability space of DNNs, researchers should always ensemble, either over the population or over individual models, to obtain stable economic information. 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1812.04528&r=all 
By:  Paul Labonne; Martin Weale 
Abstract:  This paper derives monthly estimates of turnover for small and medium-sized businesses in the UK from rolling quarterly VAT-based turnover data. We develop a state space approach for filtering and temporally disaggregating the VAT figures, which are noisy and exhibit dynamic unobserved components. We notably derive multivariate and nonlinear methods to make use of indicator series and of data in logarithms, respectively. After illustrating our temporal disaggregation method and estimation strategy using an example industry, we estimate monthly seasonally adjusted figures for the seventy-five industries for which the data are available. We thus produce an aggregate series representing approximately a quarter of gross value added in the economy. We compare our estimates with those derived from the Monthly Business Survey and find that the VAT-based estimates show a different time profile and are less volatile. In addition to this empirical work, our contribution to the literature on temporal disaggregation is twofold. First, we provide a discussion of the effect that noise in aggregate figures has on the estimation of disaggregated model components. Secondly, we illustrate a new temporal aggregation strategy suited to overlapping data. The technique we adopt is more parsimonious than the seminal method of Harvey and Pierse (1984) and can easily be generalised to non-overlapping data. 
Keywords:  Temporal disaggregation, State space models, Structural time series models, Administrative data, Monthly GDP 
JEL:  E01 C32 P44 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:nsr:escoed:escoedp201818&r=all 
By:  Qiang Zhang; Rui Luo; Yaodong Yang; Yuanyuan Liu 
Abstract:  Volatility is a measure of the price movements of stocks or options that indicates the uncertainty within financial markets. As an indicator of the level of risk or the degree of variation, volatility is important for analysing the financial market, and it is taken into consideration in various decision-making processes in financial activities. Meanwhile, recent advances in deep learning techniques have shown strong capabilities in modelling sequential data, such as speech and natural language. In this paper, we empirically study the applicability of the latest deep structures to the volatility modelling problem, through which we aim to provide empirical guidance for future theoretical analysis of the marriage between deep learning techniques and financial applications. We examine both traditional approaches and deep sequential models on the task of volatility prediction, including the most recent variants of convolutional and recurrent networks, such as the dilated architecture. Experiments with real-world stock price datasets are performed on a set of 1314 daily stock series over 2018 days of transactions. The evaluation and comparison are based on the negative log likelihood (NLL) of real-world stock price time series. The results show that the dilated neural models, including dilated CNN and dilated RNN, produce the most accurate estimates and predictions, outperforming various widely used deterministic models in the GARCH family and several recently proposed stochastic models. In addition, this study validates the high flexibility and rich expressive power of the deep models. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.03711&r=all 
By:  Guillaume Chevillon (Department of Information Systems, Decision Sciences and Statistics, ESSEC Business School); Alain Hecq (Department of Quantitative Economics, School of Business and Economics, Maastricht University); Sébastien Laurent (Aix-Marseille Univ., CNRS, EHESS, Centrale Marseille, AMSE & Aix-Marseille Graduate School of Management) 
Abstract:  This paper shows that a large dimensional vector autoregressive model (VAR) of finite order can generate fractional integration in the marginalized univariate series. We derive highlevel assumptions under which the final equation representation of a VAR(1) leads to univariate fractional white noises and verify the validity of these assumptions for two specific models. 
Keywords:  long memory, vector autoregressive model, marginalization, final equation representation 
JEL:  C10 C32 
Date:  2018–12 
URL:  http://d.repec.org/n?u=RePEc:aim:wpaimx:1844&r=all 
By:  Matyas Barczy; Adam Dudas; Jozsef Gall 
Abstract:  We derive new approximations for the Value at Risk and the Expected Shortfall at high levels of loss distributions with positive skewness and excess kurtosis, and we describe their precisions for notable ones such as exponential, Pareto type I, lognormal and compound (Poisson) distributions. Our approximations are motivated by extensions of the so-called Normal Power Approximation, used for approximating the cumulative distribution function of a random variable, incorporating not only the skewness but the kurtosis of the random variable in question as well. We show the performance of our approximations in numerical examples and we also give comparisons with some known ones in the literature. 
Date:  2018–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1811.06361&r=all 
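To see the flavour of a moment-corrected quantile approximation, the sketch below applies a skewness-only correction of the classical Cornish-Fisher / Normal Power type to an Exponential(1) loss; the paper's approximations go further by also incorporating kurtosis, so this formula should be read as the simplest member of that family, not the authors' result:

```python
import math
from statistics import NormalDist

# Skewness-corrected quantile of Cornish-Fisher / Normal Power type:
# VaR_p ~ mu + sigma * (z + (skew / 6) * (z^2 - 1)),  with z = Phi^{-1}(p).
def approx_var(mu, sigma, skew, p):
    z = NormalDist().inv_cdf(p)
    return mu + sigma * (z + skew / 6.0 * (z * z - 1.0))

# Exponential(1) losses: mean 1, standard deviation 1, skewness 2, and the
# exact Value at Risk at level p is -ln(1 - p).
p = 0.99
exact = -math.log(1 - p)
approx = approx_var(1.0, 1.0, 2.0, p)
print(round(exact, 3), round(approx, 3))
```

Even this crude correction lands within a few percent of the exact 99% quantile here; adding the kurtosis term, as the paper does, tightens the approximation further in the tail.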
By:  Doretti, Marco; Geneletti, Sara; Stanghellini, Elena 
Abstract:  Recent work (Seaman et al., 2013; Mealli & Rubin, 2015) attempts to clarify the not always well-understood difference between realised and everywhere definitions of missing at random (MAR) and missing completely at random. Another branch of the literature (Mohan et al., 2013; Pearl & Mohan, 2013) exploits always-observed covariates to give variable-based definitions of MAR and missing completely at random. In this paper, we develop a unified taxonomy encompassing all approaches. In this taxonomy, the new concept of 'complementary MAR' is introduced, and its relationship with the concept of data observed at random is discussed. All relationships among these definitions are analysed and represented graphically. Conditional independence, both at the random variable and at the event level, is the formal language we adopt to connect all these definitions. Our paper covers both the univariate and the multivariate case, where attention is paid to monotone missingness and to the concept of sequential MAR. Specifically, for monotone missingness, we propose a sequential MAR definition that might be more appropriate than both everywhere and variable-based MAR to model dropout in certain contexts. 
JEL:  C1 
Date:  2018–08–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:87227&r=all 
By:  Keiji Nagai (Yokohama National University); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University); Kohtaro Hitomi (Kyoto Institute of Technology) 
Abstract:  We consider unit root tests under sequential sampling for an AR(1) process against both stationary and explosive alternatives. We propose three kinds of test, namely t-type, stopping-time, and Bonferroni tests, using the sequential coefficient estimator and the stopping time of Lai and Siegmund (1983). To examine their statistical properties, we obtain their weak joint limit by approximating the processes in D[0,∞) and using a time change and a DDS (Dambis and Dubins-Schwarz) Brownian motion. The distribution of the stopping time is characterized by a Bessel process of dimension 3/2 with and without drift, while the estimator is asymptotically normally distributed. We implement Monte Carlo simulations and numerical computations to examine their small sample properties. 
Date:  2018–10 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:1003&r=all 
By:  Peter Pedroni (Williams College) 
Abstract:  This chapter discusses the challenges that shape panel cointegration techniques, with an emphasis on the challenge of maintaining the robustness of cointegration methods when temporal dependencies interact with both cross-sectional heterogeneities and dependencies. It also discusses some of the open challenges that lie ahead, including the challenge of generalizing to nonlinear and time-varying cointegrating relationships. The chapter is written in a nontechnical style that is intended to be accessible to nonspecialists, with an emphasis on conveying the underlying concepts and intuition. 
Keywords:  Panel Time Series, Cointegration, Nonstationary Panels, Nonlinear Panels 
JEL:  C33 
Date:  2018–10 
URL:  http://d.repec.org/n?u=RePEc:wil:wileco:201809&r=all 