
on Econometrics 
By:  Badi H. Baltagi (Syracuse University); Chihwa Kao (University of Connecticut); Fa Wang (Shanghai University of Finance and Economics) 
Abstract:  This paper tackles the identification and estimation of a high-dimensional factor model with an unknown number of latent factors and a single break in the number of factors and/or the factor loadings occurring at an unknown common date. First, we propose a least squares estimator of the change point based on the second moments of the estimated pseudo factors and show that the estimation error of the proposed estimator is Op(1). We also show that the proposed estimator has some degree of robustness to misspecification of the number of pseudo factors. With the estimated change point plugged in, consistency of the estimated number of pre- and post-break factors and the convergence rate of the estimated pre- and post-break factor space are then established under fairly general assumptions. The finite sample performance of our estimators is investigated using Monte Carlo experiments. 
Keywords:  high dimensional factor model, structural change, rate of convergence, number of factors, model selection, factor space, panel data 
JEL:  C13 C33 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:uct:uconnp:201634&r=ecm 
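The least squares change-point idea can be illustrated numerically. The sketch below is a simplified variant, not the authors' estimator: it extracts r principal-component factors separately on each side of every candidate break and picks the split minimizing the total residual sum of squares, whereas the paper works with second moments of estimated pseudo factors. All parameter values are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r, k0 = 200, 50, 2, 120      # periods, series, factors, true break date

F = rng.standard_normal((T, r))
L1 = rng.standard_normal((N, r))   # pre-break loadings
L2 = rng.standard_normal((N, r))   # post-break loadings (the structural change)
X = np.vstack([F[:k0] @ L1.T, F[k0:] @ L2.T]) + 0.5 * rng.standard_normal((T, N))

def pca_ssr(Y, r):
    """Residual sum of squares after extracting r principal-component factors."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return np.sum((Y - (U[:, :r] * s[:r]) @ Vt[:r]) ** 2)

# least squares change-point estimate: best split by total residual SSR
cands = list(range(20, T - 20))
k_hat = cands[int(np.argmin([pca_ssr(X[:k], r) + pca_ssr(X[k:], r) for k in cands]))]
print("estimated break:", k_hat)
```

With a full change in loadings, the criterion is sharply minimized near the true break, consistent with the Op(1) estimation error the paper establishes.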
By:  Zhi-Qiang Jiang (ECUST, BU); Yan-Hong Yang (ECUST, BU); Gang-Jin Wang (HNU, BU); Wei-Xing Zhou (ECUST) 
Abstract:  Mutually interacting components form complex systems and the outputs of these components are usually long-range cross-correlated. Using wavelet leaders, we propose a method of characterizing the joint multifractal nature of these long-range cross-correlations, a method we call joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross-correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross-correlations. Both experiments indicate that MF-X-WL is capable of detecting cross-correlations in synthetic data with acceptable estimation errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and find an intriguing joint multifractal behavior. 
Date:  2016–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1611.00897&r=ecm 
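The dual binomial measures used as a benchmark here admit a closed-form joint scaling exponent, which makes them a convenient sanity check. The sketch below uses a simple partition-function cross-moment analysis rather than the paper's wavelet-leader method; it coarse-grains two aligned deterministic binomial cascades and regresses the log cross partition function on log box size.

```python
import numpy as np

def binomial_measure(p, levels):
    """Deterministic binomial cascade: at each level, mass splits p / (1 - p)."""
    m = np.array([1.0])
    for _ in range(levels):
        m = np.concatenate([m * p, m * (1 - p)])
    return m

def tau_joint(mu, nu, q1, q2, levels):
    """Joint scaling exponent: slope of log Z(q1, q2, s) against log s,
    where Z sums mu(box)**q1 * nu(box)**q2 over dyadic boxes of size s."""
    log_s, log_Z = [], []
    for j in range(4, levels + 1):
        k = 2 ** (levels - j)                      # coarse-grain to 2**j boxes
        mj = mu.reshape(2 ** j, k).sum(axis=1)
        nj = nu.reshape(2 ** j, k).sum(axis=1)
        log_s.append(np.log(2.0 ** -j))
        log_Z.append(np.log(np.sum(mj ** q1 * nj ** q2)))
    return np.polyfit(log_s, log_Z, 1)[0]

levels, p1, p2 = 12, 0.3, 0.4
mu, nu = binomial_measure(p1, levels), binomial_measure(p2, levels)

q1, q2 = 2.0, 1.0
tau_hat = tau_joint(mu, nu, q1, q2, levels)
# closed-form exponent for aligned deterministic binomial cascades
tau_theory = -np.log2(p1**q1 * p2**q2 + (1 - p1)**q1 * (1 - p2)**q2)
print(tau_hat, tau_theory)
```

Because the cascades are deterministic, the log-log relation is exactly linear and the estimated exponent matches the formula to machine precision.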
By:  Carlos Vladimir Rodríguez-Caballero (Aarhus University and CREATES) 
Abstract:  A panel data model with multilevel cross-sectional dependence is proposed. The factor structure is driven by top-level common factors as well as non-pervasive factors. I propose a simple method to filter out the full factor structure that overcomes limitations in standard procedures, which may mix up both levels of unobservable factors and hamper the identification of the model. The model covers both stationary and non-stationary cases and takes into account other relevant features that make it well suited to the analysis of many types of time series frequently addressed in macroeconomics and finance. The model makes it possible to examine the time series and cross-sectional dynamics of variables, allowing for a rich fractional cointegration analysis. A Monte Carlo simulation is conducted to examine the finite sample features of the suggested procedure. Findings indicate that the proposed methodology works well in a wide variety of data generating processes and has much lower biases than the alternative estimation methods in both the I(0) and I(d) cases. 
Keywords:  Cross-section dependence; Multilevel factor models; Large panels; Long memory; Fractional cointegration; Common correlated effects. 
JEL:  C12 C22 C33 
Date:  2016–10–31 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201631&r=ecm 
By:  Zhi-Qiang Jiang (ECUST, BU); Wei-Xing Zhou (ECUST); H. Eugene Stanley (BU) 
Abstract:  Complex systems are composed of mutually interacting components and the output values of these components are usually long-range cross-correlated. We propose a method to characterize the joint multifractal nature of such long-range cross-correlations based on wavelet analysis, termed multifractal cross wavelet analysis (MF-X-WT). We assess the performance of the MF-X-WT method by performing extensive numerical experiments on dual binomial measures with multifractal cross-correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross-correlations. For binomial multifractal measures, the empirical joint multifractality obtained by MF-X-WT is found to be in approximate agreement with the theoretical formula. For bFBMs, MF-X-WT may provide spurious multifractality because of the wide spanning range of the multifractal spectrum. We also apply the MF-X-WT method to stock market indexes and uncover an intriguing joint multifractal nature in pairs of index returns and volatilities. 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1610.09519&r=ecm 
By:  Rodolfo Metulini (Department of Economics and Management, University of Brescia, Italy); Roberto Patuelli (Department of Economics, University of Bologna, Italy; The Rimini Centre for Economic Analysis, Italy); Daniel A. Griffith (School of Economic, Political & Policy Sciences, The University of Texas at Dallas, USA) 
Abstract:  Nonlinear estimation of the gravity model with Poisson/negative binomial methods has become popular for modelling international trade flows, because it permits a better accounting of zero flows and extreme values in the distribution tail. Nevertheless, as trade flows are not independent of each other due to spatial autocorrelation, these methods may lead to biased parameter estimates. To overcome this problem, eigenvector spatial filtering variants of the Poisson/negative binomial specification have been proposed in the literature on gravity modelling of trade. However, no specific treatment has been developed for cases in which many zero flows are present. This paper contributes to the literature in two ways. First, we employ a stepwise selection criterion for spatial filters that is based on robust (sandwich) p-values and does not require likelihood-based indicators; to this end, we develop an ad hoc backward stepwise function in R. Second, using this function, we select a reduced set of spatial filters that properly accounts for importer-side and exporter-side specific spatial effects, in both the count and the logit processes of zero-inflated methods. Applying this estimation strategy to a cross-section of bilateral trade flows between a set of worldwide countries for the year 2000, we find that our specification outperforms the benchmark models in terms of model fit, both in the AIC and in predicting zero (and small) flows. 
Keywords:  bilateral trade; unconstrained gravity model; eigenvector spatial filtering; zero flows; backward stepwise; zero-inflation 
JEL:  C14 C21 F10 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:1626&r=ecm 
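The backward stepwise idea with robust p-values can be sketched outside R. The snippet below is a minimal Python analogue, not the authors' function: it fits a Poisson regression by Newton-Raphson, computes sandwich standard errors by hand, and drops the least significant unprotected regressor until all robust p-values fall below the threshold. Generic synthetic regressors stand in for eigenvector spatial filters.

```python
import numpy as np
from math import erf, sqrt

def poisson_fit(X, y, iters=40):
    """Poisson regression via Newton-Raphson; returns coefficients and
    robust (sandwich) p-values based on a normal approximation."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        A = X.T @ (mu[:, None] * X)                 # information matrix (bread)
        beta = beta + np.linalg.solve(A, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    A_inv = np.linalg.inv(X.T @ (mu[:, None] * X))
    B = X.T @ (((y - mu) ** 2)[:, None] * X)        # meat of the sandwich
    se = np.sqrt(np.diag(A_inv @ B @ A_inv))
    pvals = np.array([2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
                      for z in beta / se])
    return beta, pvals

def backward_stepwise(X, y, protected, alpha=0.05):
    """Drop the least significant unprotected column until every robust
    p-value is below alpha; returns the surviving column indices."""
    cols = list(range(X.shape[1]))
    while True:
        _, p = poisson_fit(X[:, cols], y)
        droppable = [(p[i], c) for i, c in enumerate(cols) if c not in protected]
        if not droppable or max(droppable)[0] < alpha:
            return cols
        cols.remove(max(droppable)[1])

rng = np.random.default_rng(1)
n = 500
filters = rng.standard_normal((n, 3))        # stand-ins for eigenvector filters
X = np.column_stack([np.ones(n), filters])
y = rng.poisson(np.exp(0.5 + 0.8 * filters[:, 0]))   # only filter 0 matters
selected = backward_stepwise(X, y, protected={0})    # intercept never dropped
print("kept columns:", selected)
```

The relevant filter survives because its robust p-value is tiny, while pure-noise filters are candidates for elimination at each step.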
By:  David Card; David S. Lee; Zhuan Pei; Andrea Weber 
Abstract:  A regression kink design (RKD or RK design) can be used to identify causal effects in settings where the regressor of interest is a kinked function of an assignment variable. In this paper, we apply an RKD approach to study the effect of unemployment benefits on the duration of joblessness in Austria, and discuss implementation issues that may arise in similar settings, including the use of bandwidth selection algorithms and bias-correction procedures. Although recent developments in nonparametric estimation (e.g., Imbens et al. (2012) and Calonico et al. (2014)) are sometimes interpreted by practitioners as pointing to a default estimation procedure, we show that in any given application different procedures may perform better or worse. In particular, Monte Carlo simulations based on data generating processes that closely resemble the data from our application show that some asymptotically dominant procedures may actually perform worse than “suboptimal” alternatives in a given empirical application. 
JEL:  C2 J65 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:22781&r=ecm 
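The mechanics of an RK estimate are simple to sketch: the causal effect is the change in the slope of the outcome at the threshold divided by the change in the slope of the (known) benefit schedule. The demo below uses a made-up schedule and a fixed bandwidth with local linear fits on each side; it deliberately omits the bandwidth selection and bias-correction procedures the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 20000, 0.5                      # sample size, fixed bandwidth
x = rng.uniform(-1, 1, n)              # assignment variable, kink at 0

# deterministic benefit schedule with a slope change (the "kink") at x = 0
b = np.where(x < 0, 0.8 * x, 0.3 * x)
kink = 0.3 - 0.8                       # change in db/dx at the threshold

tau = 2.0                              # true causal effect of benefits on outcome
y = tau * b + 0.5 * x + 0.1 * rng.standard_normal(n)

# local linear slopes on each side of the threshold within the bandwidth
left, right = (x < 0) & (x > -h), (x >= 0) & (x < h)
slope_l = np.polyfit(x[left], y[left], 1)[0]
slope_r = np.polyfit(x[right], y[right], 1)[0]

tau_hat = (slope_r - slope_l) / kink   # RKD: slope change in y over slope change in b
print(tau_hat)
```

The paper's point is precisely that the choice of bandwidth and bias correction around this basic recipe can matter a great deal in practice.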
By:  R. Metulini; R. Patuelli; D. A. Griffith 
Abstract:  Nonlinear estimation of the gravity model with Poisson/negative binomial methods has become popular for modelling international trade flows, because it permits a better accounting of zero flows and extreme values in the distribution tail. Nevertheless, as trade flows are not independent of each other due to spatial autocorrelation, these methods may lead to biased parameter estimates. To overcome this problem, eigenvector spatial filtering variants of the Poisson/negative binomial specification have been proposed in the literature on gravity modelling of trade. However, no specific treatment has been developed for cases in which many zero flows are present. This paper contributes to the literature in two ways. First, we employ a stepwise selection criterion for spatial filters that is based on robust (sandwich) p-values and does not require likelihood-based indicators; to this end, we develop an ad hoc backward stepwise function in R. Second, using this function, we select a reduced set of spatial filters that properly accounts for importer-side and exporter-side specific spatial effects, in both the count and the logit processes of zero-inflated methods. Applying this estimation strategy to a cross-section of bilateral trade flows between a set of worldwide countries for the year 2000, we find that our specification outperforms the benchmark models in terms of model fit, both in the AIC and in predicting zero (and small) flows. 
JEL:  C14 C21 F10 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:bol:bodewp:wp1081&r=ecm 
By:  Beare, Brendan; Shi, Xiaoxia 
Abstract:  Two probability distributions with common support are said to exhibit density ratio ordering when they admit a nonincreasing density ratio. Existing statistical tests of the null hypothesis of density ratio ordering are known to be conservative, with null limiting rejection rates below the nominal significance level whenever the two distributions are unequal. We show how a bootstrap procedure can be used to shrink the critical values used in existing procedures such that the limiting rejection rate is increased to the nominal significance level on the boundary of the null. This improves power against nearby alternatives. Our procedure is based on preliminary estimation of a contact set, the form of which is obtained from a novel representation of the Hadamard directional derivative of the least concave majorant operator. Numerical simulations indicate that improvements to power can be very large in moderately sized samples. 
Keywords:  bootstrap, density ratio ordering, power 
JEL:  C12 C15 
Date:  2015–08–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:74772&r=ecm 
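The test statistic here builds on the least concave majorant (LCM) operator applied to an empirical ordinal dominance curve. The sketch below only shows how to compute an LCM on a grid via an upper convex-hull scan; the bootstrap, the contact-set estimation, and the directional-derivative argument of the paper are not reproduced.

```python
import numpy as np

def least_concave_majorant(x, y):
    """Evaluate the least concave majorant of the points (x_i, y_i) at each x_i,
    via an upper convex-hull scan (x must be sorted increasingly)."""
    hull = []
    for p in zip(x, y):
        # pop the last vertex while it lies on or below the chord (non-concave turn)
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    return np.interp(x, hx, hy)        # piecewise linear between hull vertices

x = np.linspace(0, 1, 11)
concave = x * (1 - x)                  # already concave: LCM equals the curve
convex = (x - 0.5) ** 2                # convex: LCM is the chord between endpoints
lcm_concave = least_concave_majorant(x, concave)
lcm_convex = least_concave_majorant(x, convex)
```

Density ratio ordering corresponds to a concave ordinal dominance curve, so the distance between the curve and its LCM is the natural building block for the test.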
By:  Wong, Woon K. (Cardiff Business School) 
Abstract:  This article extends the variance ratio test of Lo and MacKinlay (1988) to tests of skewness and kurtosis ratios. The proposed tests are based on the generalized method of moments. In particular, overlapping observations are used and their dependencies (under the IID assumption) are explicitly modelled so that more information can be used to make the tests more powerful with better size properties. The proposed tests are particularly relevant to the risk management industry, where risk models are estimated using daily data but multiperiod forecasts of tail risks are required for the determination of risk capital. Applications of the tests find significant higher-order nonlinear dependencies in major global equity markets. Failure to correctly model such nonlinear relationships is likely to have a negative impact on the accuracy of forecasts of multiperiod tail risks. 
Keywords:  Skewness, kurtosis, overlapping observations, multiperiod tail risk, Value-at-Risk 
JEL:  C10 G11 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:cdf:wpaper:2016/8&r=ecm 
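The building block being extended is the Lo-MacKinlay variance ratio computed from overlapping multi-period returns; the paper's skewness and kurtosis analogues follow the same template within a GMM framework. The sketch below shows only the basic overlapping-observation variance ratio, which is close to one for iid returns.

```python
import numpy as np

def variance_ratio(r, q):
    """Lo-MacKinlay variance ratio using overlapping q-period returns:
    Var(q-period return) / (q * Var(1-period return)); ~1 under iid."""
    r = np.asarray(r, dtype=float)
    n = r.size
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / (n - 1)
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    varq = np.sum((rq - q * mu) ** 2) / (rq.size - 1)
    return varq / (q * var1)

rng = np.random.default_rng(4)
r = 0.01 * rng.standard_normal(20000)   # iid "daily returns"
vr5 = variance_ratio(r, 5)
print(vr5)
```

Replacing the second moments with third or fourth moments of the overlapping sums gives the skewness/kurtosis ratio statistics the paper studies, with the overlap-induced dependence handled explicitly in the GMM covariance.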
By:  Nikolay Doudchenko; Guido W. Imbens 
Abstract:  In a seminal paper, Abadie et al. (2010) develop the synthetic control procedure for estimating the effect of a treatment in the presence of a single treated unit and a number of control units, with pretreatment outcomes observed for all units. The method constructs a set of weights such that the covariates and pretreatment outcomes of the treated unit are approximately matched by a weighted average of the control units. The weights are restricted to be non-negative and sum to one, which allows the procedure to obtain the weights even when the number of lagged outcomes is modest relative to the number of control units, a setting that is not uncommon in applications. In the current paper we propose a more general class of synthetic control estimators that allows researchers to relax some of the restrictions in the ADH method. We allow the weights to be negative, do not necessarily restrict the sum of the weights, and allow for a permanent additive difference between the treated unit and the controls, similar to difference-in-differences procedures. The weights directly minimize the distance between the lagged outcomes for the treated and the control units, using regularization methods to deal with a potentially large number of possible control units. 
JEL:  C01 C1 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:22791&r=ecm 
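A minimal sketch of the relaxed estimator: unrestricted weights, a permanent additive difference (an unpenalized intercept), and ridge regularization standing in for the regularization methods the paper considers. The data are synthetic, and the treated unit is constructed as an exact combination of two controls plus a constant offset so the recovery is easy to check.

```python
import numpy as np

rng = np.random.default_rng(5)
T0, J = 40, 4                            # pre-treatment periods, control units
Z = rng.standard_normal((T0, J))         # control units' pre-treatment outcomes
y = 0.6 * Z[:, 0] + 0.4 * Z[:, 1] + 1.0  # treated unit: combo + permanent offset

def synth_weights(Z, y, lam=1e-8):
    """Ridge-regularized weights with an unpenalized intercept (additive gap):
    demean, solve (Z'Z + lam I) w = Z'y, recover the intercept from the means."""
    zbar, ybar = Z.mean(axis=0), y.mean()
    Zc, yc = Z - zbar, y - ybar
    w = np.linalg.solve(Zc.T @ Zc + lam * np.eye(Z.shape[1]), Zc.T @ yc)
    mu = ybar - zbar @ w                 # permanent additive difference
    return w, mu

w, mu = synth_weights(Z, y)
print(np.round(w, 3), round(mu, 3))
```

Unlike the original ADH weights, nothing here forces w to be non-negative or to sum to one; the intercept absorbs the level gap between the treated unit and the controls.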
By:  Taras Bodnar; Ostap Okhrin; Nestor Parolya 
Abstract:  In this paper we derive the optimal linear shrinkage estimator for the large-dimensional mean vector using random matrix theory. The results are obtained under the assumption that both the dimension $p$ and the sample size $n$ tend to infinity such that $n^{-1}p^{1-\gamma} \to c\in(0,+\infty)$ and $\gamma\in [0, 1)$. Under weak conditions imposed on the underlying data generating process, we find the asymptotic equivalents to the optimal shrinkage intensities, prove their asymptotic normality, and estimate them consistently. The obtained nonparametric estimator for the high-dimensional mean vector has a simple structure and is proven to asymptotically minimize the quadratic loss with probability $1$ in the case of $c\in(0,1)$. For $c\in(1,+\infty)$ we modify the suggested estimator by using a feasible estimator for the precision covariance matrix. Finally, an exhaustive simulation study and an application to real data are provided, where the proposed estimator is compared with known benchmarks from the literature. 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1610.09292&r=ecm 
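The flavor of linear shrinkage for a high-dimensional mean can be conveyed with a simple feasible plug-in. The intensity formula below is a textbook-style simplification (shrinking the sample mean toward the zero target by the estimated noise-to-signal ratio), not the paper's optimal estimator; the simulation parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 200, 100
mu = np.full(p, 0.1)                    # true mean vector
Y = mu + rng.standard_normal((n, p))    # n observations of dimension p

ybar = Y.mean(axis=0)
sigma2 = Y.var(axis=0, ddof=1).mean()   # average per-coordinate variance
noise = p * sigma2 / n                  # approx. E||ybar - mu||^2

# feasible shrinkage intensity toward the zero target
alpha = max(0.0, 1.0 - noise / np.sum(ybar ** 2))
mu_shrunk = alpha * ybar

loss_mean = np.sum((ybar - mu) ** 2)
loss_shrunk = np.sum((mu_shrunk - mu) ** 2)
print(alpha, loss_mean, loss_shrunk)
```

When p is of the same order as n, the quadratic loss of the shrunken estimator is visibly below that of the raw sample mean, which is the effect the paper quantifies precisely.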
By:  Dobromił Serwa; Piotr Wdowiński 
Abstract:  We estimate a structural vector autoregressive (SVAR) model describing the links between a banking sector and a real economy. We propose a new method to verify the robustness of impulse-response functions in an SVAR model. The method applies permutations of the variable ordering in a structural model and uses the Cholesky decomposition of the error covariance matrix to identify parameters. Impulse response functions are computed for all permutations and are then combined. We explore the method in practice by analyzing the macro-financial linkages in the Polish economy. Our results indicate that the combined impulse response functions are more uncertain than those from a single ordering, but some findings remain robust. It is evident that aggregate macroeconomic shocks and interest rate shocks have a significant impact on banking variables. 
Keywords:  vector autoregression, Cholesky decomposition, combined impulse response, banking sector, real economy. 
JEL:  C32 C51 C52 C87 E44 E58 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:nbp:nbpmis:246&r=ecm 
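The permutation-and-combine idea can be sketched for a small VAR(1) with numpy alone (a real application would use a VAR package; this is not the authors' implementation). The sketch estimates the VAR by OLS, computes Cholesky-orthogonalized impulse responses under every variable ordering, maps each back to the original labels, and averages.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
k, T = 3, 400
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])          # stable VAR(1) coefficient matrix
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.standard_normal(k)

# OLS estimation of the coefficient matrix and residual covariance
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T
Sigma = U.T @ U / (T - 1 - k)

def orth_irf(A, Sigma, perm, H):
    """Cholesky-orthogonalized IRFs under variable ordering `perm`,
    mapped back to the original variable labels."""
    ip = np.argsort(perm)                 # inverse permutation
    Ap = A[np.ix_(perm, perm)]
    P = np.linalg.cholesky(Sigma[np.ix_(perm, perm)])
    out, M = [], np.eye(len(perm))
    for _ in range(H + 1):
        out.append((M @ P)[np.ix_(ip, ip)])   # response of var i to shock j
        M = Ap @ M
    return np.array(out)

H = 10
irfs = [orth_irf(A_hat, Sigma, list(p), H) for p in permutations(range(k))]
combined = np.mean(irfs, axis=0)          # combined IRFs across all orderings
print(combined.shape)
```

The spread of the individual `irfs` around `combined` is a direct measure of how much the ordering assumption drives the results, which is the robustness check the paper proposes.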
By:  Ines Wilms; Jeroen Rombouts; Christophe Croux 
Abstract:  Volatility forecasts are key inputs in financial analysis. While lasso-based forecasts have been shown to perform well in many applications, their use for volatility forecasting has not yet received much attention in the literature. Lasso estimators produce parsimonious forecast models. Our forecast combination approach hedges against the risk of selecting the wrong degree of model parsimony. Apart from the standard lasso, we consider several lasso extensions that account for the dynamic nature of the forecast model. We apply forecast-combined lasso estimators in a comprehensive forecasting exercise using realized variance time series of ten major international stock market indices. We find that the 'ordered lasso' extension gives the most accurate realized variance forecasts. Multivariate forecast models, which account for volatility spillovers between different stock markets, outperform univariate forecast models at longer forecast horizons. 
Keywords:  Forecast combination, Hierarchical lasso, Lasso, Ordered Lasso, Realized variance, Volatility forecasting 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:ete:kbiper:553087&r=ecm 
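The forecast-combination-over-parsimony idea can be sketched with a plain coordinate-descent lasso on lagged realized variances, averaging forecasts across a grid of penalty levels. This is a simplified stand-in: the paper's ordered and hierarchical lasso extensions, and the real multi-market data, are not reproduced here.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Lasso via cyclic coordinate descent: min 0.5*||y - Xb||^2 + lam*||b||_1."""
    b = np.zeros(X.shape[1])
    c = (X ** 2).sum(axis=0)                 # per-column curvature
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / c[j]
    return b

# simulated (log) realized variances with AR(1)-type persistence
rng = np.random.default_rng(7)
T, L = 600, 10                               # sample length, number of lags
rv = np.zeros(T)
for t in range(1, T):
    rv[t] = 0.6 * rv[t - 1] + 0.3 * rng.standard_normal()

X = np.column_stack([rv[L - l - 1:T - l - 1] for l in range(L)])  # lags 1..L
y = rv[L:]
x_new = rv[-1:-L - 1:-1]                     # most recent L lags, newest first

# forecast combination: average forecasts across a grid of lasso penalties
lams = [0.1, 1.0, 10.0]
forecasts = [x_new @ lasso_cd(X, y, lam) for lam in lams]
fc = float(np.mean(forecasts))
print(fc)
```

Averaging over penalty levels is exactly the hedge against picking the wrong degree of parsimony: heavy penalties give sparse, stable models, light penalties give richer ones, and the combination sits in between.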