
on Econometrics 
By:  Moreira, Marcelo J.; Mourão, Rafael; Moreira, Humberto 
Abstract:  Researchers often rely on the t-statistic to make inference on parameters in statistical models. It is common practice to obtain critical values by simulation techniques. This paper proposes a novel numerical method to obtain an approximately similar test. This test rejects the null hypothesis when the test statistic is larger than a critical value function (CVF) of the data. We illustrate this procedure when regressors are highly persistent, a case in which commonly used simulation methods encounter difficulties controlling size uniformly. Our approach works satisfactorily, controls size, and yields a test which outperforms the two other known similar tests. 
Date:  2016–06–06 
URL:  http://d.repec.org/n?u=RePEc:fgv:epgewp:778&r=ecm 
By:  Ciccarelli, Nicola 
Abstract:  Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a GARCH-type model featuring conditional heteroskedasticity and asymmetric volatility (the PTT-GARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation, and via the Newton-Raphson technique applied to the root-n-consistent quasi-maximum likelihood estimator, we construct an estimator that is more efficient than the quasi-maximum likelihood estimator. Through Monte Carlo simulations, we show that the semiparametric estimator is adaptive for parameters included in the conditional variance of the model with respect to the unknown distribution of the innovation. 
Keywords:  Semiparametric adaptive estimation; Power-transformed and threshold GARCH. 
JEL:  C14 C22 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:72021&r=ecm 
By:  Federico M Bandi (Institute for Fiscal Studies); Valentina Corradi (Institute for Fiscal Studies); Daniel Wilhelm (Institute for Fiscal Studies and cemmap and UCL) 
Abstract:  Cross-validation is the most common data-driven procedure for choosing smoothing parameters in nonparametric regression. For the case of kernel estimators with iid or strong mixing data, it is well known that the bandwidth chosen by cross-validation is optimal with respect to the average squared error and other performance measures. In this paper, we show that the cross-validated bandwidth continues to be optimal with respect to the average squared error even when the data-generating process is a recurrent Markov chain. This general class of processes covers stationary as well as nonstationary Markov chains. Hence, the proposed procedure adapts to the degree of recurrence, thereby freeing the researcher from the need to assume stationarity (or nonstationarity) before inference begins. We study finite-sample performance in a Monte Carlo study. We conclude by demonstrating the practical usefulness of cross-validation in a highly persistent environment, namely that of nonlinear predictive systems for market returns. 
Keywords:  Bandwidth Selection, Recurrence, Predictive Regressions 
Date:  2016–03–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:11/16&r=ecm 
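The leave-one-out logic behind the cross-validated bandwidth can be sketched in a few lines. This is a generic Nadaraya-Watson example on simulated data, not code from the paper; the function names, data, and bandwidth grid are all illustrative.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loocv_score(x, y, h):
    """Leave-one-out cross-validation: average squared prediction error."""
    n = len(x)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        errs[i] = y[i] - nw_estimate(x[i], x[mask], y[mask], h)
    return np.mean(errs ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(200)

grid = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0])
scores = np.array([loocv_score(x, y, h) for h in grid])
h_cv = grid[np.argmin(scores)]  # cross-validated bandwidth
```

The paper's point is that this same criterion remains valid when (x, y) come from a recurrent, possibly nonstationary, Markov chain rather than iid or mixing data.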
By:  Alexei Onatski; Chen Wang 
Abstract:  Johansen’s (1988, 1991) likelihood ratio test for the cointegration rank of a Gaussian VAR depends only on the squared sample canonical correlations between current changes and past levels of a simple transformation of the data. We study the asymptotic behavior of the empirical distribution of those squared canonical correlations when the number of observations and the dimensionality of the VAR diverge to infinity simultaneously and proportionally. We find that the distribution almost surely weakly converges to the so-called Wachter distribution. This finding provides a theoretical explanation for the observed tendency of Johansen’s test to find “spurious cointegration”. It also sheds light on the workings and limitations of the Bartlett correction approach to the over-rejection problem. We propose a simple graphical device, similar to the scree plot, for a preliminary assessment of cointegration in high-dimensional VARs. 
Date:  2016–06–15 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1637&r=ecm 
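The objects the test is built on — squared sample canonical correlations between current changes and lagged levels — can be computed directly. A hedged sketch on simulated pure random walks (no cointegration), omitting deterministic terms and the paper's preliminary transformation:

```python
import numpy as np

def squared_canonical_correlations(dy, ylag):
    """Squared sample canonical correlations between current changes and
    lagged levels: eigenvalues of S00^{-1} S01 S11^{-1} S10."""
    n = len(dy)
    S00 = dy.T @ dy / n
    S11 = ylag.T @ ylag / n
    S01 = dy.T @ ylag / n
    M = np.linalg.solve(S00, S01) @ np.linalg.solve(S11, S01.T)
    lam = np.linalg.eigvals(M).real
    return np.sort(lam)[::-1]          # descending, for a scree-style plot

rng = np.random.default_rng(1)
T, p = 200, 10
y = np.cumsum(rng.standard_normal((T + 1, p)), axis=0)  # p independent random walks
lam = squared_canonical_correlations(np.diff(y, axis=0), y[:-1])
```

Plotting `lam` against its index gives the scree-like diagnostic the abstract mentions; with dimensionality growing with the sample, the empirical distribution of these eigenvalues approaches the Wachter law rather than concentrating near zero.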
By:  Frank Windmeijer; Helmut Farbmacher; Neil Davies; George Davey Smith 
Abstract:  We investigate the behaviour of the Lasso for selecting invalid instruments in linear instrumental variables models for estimating causal effects of exposures on outcomes, as proposed recently by Kang, Zhang, Cai and Small (2016, Journal of the American Statistical Association). Invalid instruments are those that fail the exclusion restriction and enter the model as explanatory variables. We show that for this setup, the Lasso may not select all invalid instruments in large samples if they are relatively strong. Consistent selection also depends on the correlation structure of the instruments. We propose a median estimator that is consistent when fewer than 50% of the instruments are invalid, and whose consistency does not depend on the relative strength of the instruments or their correlation structure. This estimator can therefore be used for adaptive Lasso estimation. The methods are applied to a Mendelian randomisation study to estimate the causal effect of BMI on diastolic blood pressure using data on individuals from the UK Biobank, with 96 single nucleotide polymorphisms as potential instruments for BMI. 
Keywords:  causal inference, instrumental variables estimation, invalid instruments, Lasso, Mendelian randomisation. 
Date:  2016–06–02 
URL:  http://d.repec.org/n?u=RePEc:bri:uobdis:16/674&r=ecm 
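The median idea can be illustrated with simulated data: each instrument yields a just-identified IV estimate, and as long as a majority of instruments are valid, the median of those estimates is close to the true effect even though their mean is not. All quantities below are illustrative, not from the application.

```python
import numpy as np

rng = np.random.default_rng(2)
n, L = 20000, 9
beta = 0.5                      # true causal effect of exposure on outcome
alpha = np.zeros(L)
alpha[:3] = 0.4                 # first 3 instruments invalid: direct effect on y

Z = rng.standard_normal((n, L))
u = rng.standard_normal(n)                         # confounder
x = Z.sum(axis=1) + u + rng.standard_normal(n)     # endogenous exposure
y = beta * x + Z @ alpha + u

# Just-identified IV estimate from each instrument separately:
# beta_j ~ beta + alpha_j / gamma_j, so valid instruments cluster at beta
beta_j = np.array([np.cov(Z[:, j], y)[0, 1] / np.cov(Z[:, j], x)[0, 1]
                   for j in range(L)])
beta_med = np.median(beta_j)    # consistent: 6 of 9 instruments are valid
```

The mean of `beta_j` is pulled away from `beta` by the three invalid instruments, while the median ignores them — the robustness property the abstract exploits for adaptive Lasso estimation.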
By:  Timothy B. Armstrong (Cowles Foundation, Yale University); Michal Kolesár (Princeton University) 
Abstract:  We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we derive and tabulate novel critical values that take into account the possible bias of the estimator upon which the CIs are based. We give sharp efficiency bounds for using different kernels, and derive the optimal bandwidth for constructing honest CIs. We show that using the bandwidth that minimizes the maximum mean squared error results in CIs that are nearly efficient and that, in this case, the critical value depends only on the rate of convergence. For the common case in which the rate of convergence is n^{-4/5}, the appropriate critical value for 95% CIs is 2.18, rather than the usual 1.96 critical value. We illustrate our results in an empirical application. 
Keywords:  Nonparametric inference, relative efficiency 
JEL:  C12 C14 
Date:  2016–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:2044&r=ecm 
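The practical upshot — replace 1.96 with a bias-aware critical value such as the 2.18 quoted above — is a one-liner. A minimal sketch (function name and inputs mine):

```python
def honest_ci(estimate, se, cv=2.18):
    """Honest nonparametric CI: widen the usual interval with a critical
    value that accounts for worst-case smoothing bias (2.18 instead of
    1.96 at the MSE-optimal bandwidth, per the abstract)."""
    return estimate - cv * se, estimate + cv * se

lo, hi = honest_ci(1.0, 0.2)   # (0.564, 1.436)
```

The interval is about 11% wider than the conventional 1.96-based one (2.18 / 1.96 ≈ 1.11), which is the price of honest coverage over the smoothness class.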
By:  Søren Johansen (Department of Economics, University of Copenhagen); Bent Nielsen (Department of Economics, Nuffield College) 
Abstract:  We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed to be lower semicontinuous and sufficiently large for large arguments; particular cases are the Huber-skip and quantile regression. Tightness requires an assumption on the frequency of small regressors. We show that this is satisfied for a variety of deterministic and stochastic regressors, including stationary and random walk regressors. The results are obtained using a detailed analysis of the condition on the regressors combined with some recent martingale results. 
Keywords:  M-estimator, robust statistics, martingales, Huber-skip, quantile estimation. 
Date:  2016–06–10 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1605&r=ecm 
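As a concrete instance of the Huber-skip special case, here is an illustrative iterative implementation — re-fitting OLS while skipping observations with large residuals. This is a standard computational device for the Huber-skip criterion, not the paper's own algorithm, and all tuning choices are mine.

```python
import numpy as np

def huber_skip(X, y, c=2.5, iters=20):
    """Huber-skip M-estimator sketch: the criterion rho(u) = min(u^2, c^2)
    is minimized by iterating OLS on the observations whose absolute
    residual does not exceed the cutoff c."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        keep = np.abs(y - X @ b) <= c
        b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    return b

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
y[:25] += 15.0                                  # 5% gross outliers
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]    # pulled toward the outliers
b_hs = huber_skip(X, y)                         # close to (1, 2)
```

The outliers shift the OLS intercept noticeably, while the skipped fit stays near the true coefficients — the robustness that motivates studying this estimator's asymptotics.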
By:  Yutao Sun 
Abstract:  We propose a bias correction method for nonlinear models with both individual and time effects. In the presence of the incidental parameter problem, the maximum likelihood estimator derived from such models may be severely biased. Our method produces an approximation to an infeasible log-likelihood function that is not exposed to the incidental parameter problem. The maximizer of the approximating function serves as a bias-corrected estimator that is asymptotically unbiased when the ratio N/T converges to a constant. The proposed method is general in several respects: it can be extended to models with multiple fixed effects and can easily be modified to accommodate dynamic models. 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:ete:ceswps:541934&r=ecm 
By:  Alfonso Ugarte 
Abstract:  We investigate the idea that separating an explanatory variable into its “between” and “within” variations roughly decomposes it into a structural (long-term) component and a cyclical component, respectively, and that this could translate into different Between and Within estimates in panel data. 
Keywords:  Global, Research, Working Paper 
JEL:  C01 C18 C23 C33 C51 C58 G20 G21 
Date:  2016–05 
URL:  http://d.repec.org/n?u=RePEc:bbv:wpaper:1610&r=ecm 
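For context, the Between and Within estimators the abstract contrasts can be computed in a few lines. This simulation (all numbers illustrative, not from the paper) shows how a persistent unit-level component loads on the Between estimate while the Within estimate picks up only the transitory variation:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 200, 6
ids = np.repeat(np.arange(N), T)
a = rng.standard_normal(N)                  # persistent unit component
x = a[ids] + rng.standard_normal(N * T)     # x mixes between and within variation
y = 1.0 * x + 2.0 * a[ids] + rng.standard_normal(N * T)

# Between estimator: OLS on unit means (dominated by the persistent component)
xb = np.array([x[ids == i].mean() for i in range(N)])
yb = np.array([y[ids == i].mean() for i in range(N)])
beta_between = np.cov(xb, yb)[0, 1] / np.var(xb, ddof=1)

# Within estimator: OLS on deviations from unit means (transitory variation only)
xw = x - xb[ids]
yw = y - yb[ids]
beta_within = (xw @ yw) / (xw @ xw)
```

Here the Within estimate recovers the cyclical-response coefficient (1.0), while the Between estimate also absorbs the structural component's effect and is therefore much larger — the kind of divergence the paper interprets.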
By:  Dirk Drechsel (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Stefan Neuwirth (KOF Swiss Economic Institute, ETH Zurich, Switzerland) 
Abstract:  We propose a Bayesian optimal filtering setup for improving out-of-sample forecasting performance when using volatile high-frequency data with a long lag structure to forecast low-frequency data. We test this setup using real-time Swiss construction investment and construction permit data. We compare our approach to different filtering techniques and show that the proposed filter outperforms various commonly used filtering techniques in extracting the more relevant signal of the indicator series for forecasting. 
Keywords:  Forecasting, construction, Switzerland, Bayesian, mixed data frequencies 
Date:  2016–06 
URL:  http://d.repec.org/n?u=RePEc:kof:wpskof:16407&r=ecm 
By:  Davide De Gaetano 
Abstract:  In this paper we address the problem of instability due to changes in the parameters of some Realized Volatility (RV) models. The analysis is based on 5-minute RV of four U.S. stock market indices. Three different representations of the log-RV have been considered and, for each of them, parameter instability has been detected using the recursive estimates test. To analyse how instabilities in the parameters affect forecasting performance, an out-of-sample forecasting exercise has been performed. In particular, several forecast combinations, designed to accommodate potential structural breaks, have been considered. All of them are based on different estimation windows with alternative weighting schemes, and do not explicitly take estimated break dates into account. The model confidence set has been used to compare the forecasting performances of the proposed approaches. Our analysis gives empirical evidence of the effectiveness of the combinations that adjust for the most recent possible break point. 
Keywords:  Forecast combinations, Structural breaks, Realized volatility 
JEL:  C53 C58 G17 
Date:  2016–06 
URL:  http://d.repec.org/n?u=RePEc:rtr:wpaper:0208&r=ecm 
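The window-combination idea can be sketched generically: produce the same forecast from several estimation windows and average them, so that no single (possibly pre-break) window dominates. The AR(1) setup, break, and window lengths below are my illustration, not the paper's specification.

```python
import numpy as np

def ar1_forecast(window):
    """One-step-ahead AR(1) forecast estimated on a given window."""
    x, xlag = window[1:], window[:-1]
    phi = (xlag @ x) / (xlag @ xlag)
    return phi * window[-1]

rng = np.random.default_rng(5)
y = np.empty(300)
y[0] = 0.0
for t in range(1, 300):
    phi = 0.8 if t < 200 else 0.2          # structural break in persistence
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Equal-weight combination across estimation windows of different lengths:
# short windows adapt to the break, long windows reduce estimation noise.
windows = [30, 60, 120, 240]
forecasts = np.array([ar1_forecast(y[-w:]) for w in windows])
combined = forecasts.mean()
```

Alternative weighting schemes (e.g. weights decaying with window length) slot in by replacing the simple mean; no estimated break date is needed, matching the abstract's design.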
By:  Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Jose Luis Montiel Olea (Institute for Fiscal Studies and New York University); Jonathan Payne (Institute for Fiscal Studies) 
Abstract:  This paper examines the asymptotic behavior of the posterior distribution of a possibly nondifferentiable function g(theta), where theta is a finite dimensional parameter. The main assumption is that the distribution of the maximum likelihood estimator theta_n, its bootstrap approximation, and the Bayesian posterior for theta all agree asymptotically. It is shown that whenever g is Lipschitz, though not necessarily differentiable, the posterior distribution of g(theta) and the bootstrap distribution of g(theta_n) coincide asymptotically. One implication is that Bayesians can interpret bootstrap inference for g(theta) as approximately valid posterior inference in a large sample. Another implication—built on known results about bootstrap inconsistency—is that the posterior distribution of g(theta) does not coincide with the asymptotic distribution of g(theta_n) at points of nondifferentiability. Consequently, frequentists cannot presume that credible sets for a nondifferentiable parameter g(theta) can be interpreted as approximately valid confidence sets (even when this relation holds true for theta). 
Keywords:  Distribution, nondifferentiable functions 
Date:  2016–05–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:20/16&r=ecm 
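A canonical example of the phenomenon is g(theta) = |theta| at theta = 0, where g is Lipschitz but not differentiable. A sketch of the bootstrap distribution of g(theta_n) (simulated data; the choices of n and B are mine):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
x = rng.standard_normal(n)          # true theta = E[x] = 0, a kink point of g
theta_hat = x.mean()                # maximum likelihood estimator of theta

# Nonparametric bootstrap distribution of g(theta_n) = |sample mean|
B = 2000
boot = np.abs([rng.choice(x, n).mean() for _ in range(B)])
```

Per the abstract, this bootstrap distribution agrees asymptotically with the posterior of |theta|, but at the kink it does not match the sampling distribution of |theta_hat|; so bootstrap or credible intervals built from `boot` need not have correct frequentist coverage for |theta|, even though they would for theta itself.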
By:  Yanfei Kang; Rob J. Hyndman; Kate SmithMiles 
Abstract:  It is common practice to evaluate the strength of forecasting methods using collections of well-studied time series datasets, such as the M3 data. But how diverse are these time series, how challenging, and do they enable us to study the unique strengths and weaknesses of different forecasting methods? In this paper we propose a visualisation method for a collection of time series that enables a time series to be represented as a point in a two-dimensional instance space. The effectiveness of different forecasting methods can be visualised easily across this space, and the diversity of the time series in an existing collection can be assessed. Noting that the M3 dataset is not as diverse as we would ideally like, this paper also proposes a method for generating new time series with controllable characteristics to fill in and spread out the instance space, making generalisations of forecasting method performance as robust as possible. 
Keywords:  M3-Competition, time series visualisation, time series generation, forecasting algorithm comparison 
JEL:  C52 C53 C55 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201610&r=ecm 
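A toy version of the feature-based instance space: compute a few interpretable features per series and project the feature matrix to two dimensions via SVD/PCA. The features, series, and sizes here are all illustrative stand-ins for the richer feature set such a method would use.

```python
import numpy as np

def features(x):
    """Three simple series features: lag-1 autocorrelation, a crude
    trend-strength measure (R^2 of a linear fit), and 'spikiness'
    (variance of first differences relative to the series variance)."""
    xc = x - x.mean()
    acf1 = (xc[:-1] @ xc[1:]) / (xc @ xc)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    resid = x - (slope * t + intercept)
    trend = 1.0 - resid.var() / x.var()
    spike = np.var(np.diff(x)) / np.var(x)
    return acf1, trend, spike

rng = np.random.default_rng(7)
series = [np.cumsum(rng.standard_normal(100)) for _ in range(20)]   # trending walks
series += [rng.standard_normal(100) for _ in range(20)]             # white noise
F = np.array([features(s) for s in series])

# Standardize features, then project to a 2-D instance space via SVD
Fz = (F - F.mean(axis=0)) / F.std(axis=0)
U, S, Vt = np.linalg.svd(Fz, full_matrices=False)
coords = U[:, :2] * S[:2]
```

Plotting `coords` with one marker per series makes gaps in a collection visible; the paper's generation method then targets new series at the sparse regions.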
By:  Nicolò Musmeci; Vincenzo Nicosia; Tomaso Aste; Tiziana Di Matteo; Vito Latora 
Abstract:  We propose here a multiplex network approach to investigate simultaneously different types of dependency in complex data sets. In particular, we consider multiplex networks made of four layers corresponding respectively to linear, nonlinear, tail, and partial correlations among a set of financial time series. We construct the sparse graph on each layer using a standard network filtering procedure, and we then analyse the structural properties of the obtained multiplex networks. The study of the time evolution of the multiplex constructed from financial data uncovers important changes in intrinsically multiplex properties of the network, and such changes are associated with periods of financial stress. We observe that some features are unique to the multiplex structure and would not be visible in a separate analysis of the single-layer networks corresponding to each dependency measure. 
Date:  2016–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1606.04872&r=ecm 
By:  Michal Jakubczyk 
Abstract:  Determining how to trade off individual criteria is often not obvious, especially when attributes of very different natures are juxtaposed, e.g. health and money. The difficulty stems both from the lack of adequate market experience and from a strong ethical component in valuing some goods, resulting in inherently imprecise preferences. Fuzzy sets can be used to model willingness-to-pay/accept (WTP/WTA), so as to quantify this imprecision and support the decision-making process. The preferences then need to be estimated from available data. In this paper I show how to estimate the membership function of fuzzy WTP/WTA when decision makers’ preferences are collected via a survey with Likert-based questions. I apply the proposed methodology to an exemplary data set on WTP/WTA for health. The mathematical model contains two elements: a parametric representation of the membership function and a model of how it is translated into Likert options. The model parameters are estimated in a Bayesian approach using Markov chain Monte Carlo. The results suggest a slight WTP-WTA disparity, with WTA being fuzzier than WTP. The model is sensitive to single respondents with lexicographic preferences, i.e. those not willing to accept any trade-offs between health and money. 
Keywords:  willingness-to-pay/accept, fuzzy set, membership function, preference elicitation 
JEL:  J17 C11 C13 D71 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:sgh:kaewps:2016011&r=ecm 
By:  Hamidi Sahneh, Mehdi 
Abstract:  Non-fundamentalness arises when observed variables do not contain enough information to recover structural shocks. This paper proposes a new test to empirically detect non-fundamentalness, which is robust to conditional heteroskedasticity of unknown form, needs no information outside the specified model, and can be carried out with a standard F-test. A Monte Carlo study based on a DSGE model is conducted to examine the finite-sample performance of the test. I apply the proposed test to U.S. quarterly data to identify the dynamic effects of supply and demand disturbances on real GNP and unemployment. 
Keywords:  Non-fundamentalness; Invertibility; Vector Autoregression. 
JEL:  C32 C5 E3 
Date:  2016–06–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:71924&r=ecm 
By:  Joachim Freyberger (Institute for Fiscal Studies); Matthew Masten (Institute for Fiscal Studies) 
Abstract:  We provide general compactness results for many commonly used parameter spaces in nonparametric estimation. We consider three kinds of functions: (1) functions with bounded domains which satisfy standard norm bounds, (2) functions with bounded domains which do not satisfy standard norm bounds, and (3) functions with unbounded domains. In all three cases we provide two kinds of results, compact embedding and closedness, which together allow one to show that parameter spaces defined by a ‖·‖s-norm bound are compact under a norm ‖·‖c. We apply these results to nonparametric mean regression and nonparametric instrumental variables estimation. 
Keywords:  Nonparametric estimation, sieve estimation, trimming, nonparametric instrumental variables 
JEL:  C14 C26 C51 
Date:  2016–01–03 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:01/16&r=ecm 
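A textbook instance of the compact embedding + closedness pattern (my example, not necessarily one treated in the paper): a ball in the Lipschitz norm is compact under the sup norm.

```latex
\Theta_C \;=\; \bigl\{\, f : [0,1] \to \mathbb{R} \;:\;
\sup_x |f(x)| \le C, \;\;
|f(x) - f(y)| \le C\,|x - y| \ \ \forall\, x, y \,\bigr\}.
```

The ball is bounded and equicontinuous, hence relatively compact in (C[0,1], ‖·‖∞) by Arzelà-Ascoli (the compact embedding step); and uniform limits preserve both bounds, so it is ‖·‖∞-closed (the closedness step). Together these give ‖·‖∞-compactness, with the Lipschitz norm playing the role of ‖·‖s and the sup norm that of ‖·‖c.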
By:  Bo Honoré (Institute for Fiscal Studies and Princeton); Áureo de Paula (Institute for Fiscal Studies and University College London) 
Abstract:  This paper introduces a bivariate version of the generalized accelerated failure time model. It allows for simultaneity in the econometric sense that the two realized outcomes depend structurally on each other. Another feature of the proposed model is that it will generate equal durations with positive probability. The motivating example is retirement decisions by married couples. In that example it seems reasonable to allow for the possibility that each partner's optimal retirement time depends on the retirement time of the spouse. Moreover, the data suggest that the wife and the husband retire at the same time for a nonnegligible fraction of couples. Our approach takes as a starting point a stylized economic model that leads to a univariate generalized accelerated failure time model. The covariates of that generalized accelerated failure time model act as utility-flow shifters in the economic model. We introduce simultaneity by allowing the utility flow in retirement to depend on the retirement status of the spouse. The econometric model is then completed by assuming that the observed outcome is the Nash bargaining solution in that simple economic model. The advantage of this approach is that it includes independent realizations from the generalized accelerated failure time model as a special case, and deviations from this special case can be given an economic interpretation. We illustrate the model by studying the joint retirement decisions in married couples using the Health and Retirement Study. We provide a discussion of relevant identifying variation and estimate our model using indirect inference. The main empirical finding is that the simultaneity seems economically important. In our preferred specification the indirect utility associated with being retired increases by approximately 5% when one's spouse retires. 
The estimated model also predicts that the marginal effect of a change in the husbands' pension plan on wives' retirement dates is about 3.3% of the direct effect on the husbands'. 
JEL:  J26 C41 C3 
Date:  2016–02–17 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:07/16&r=ecm 
By:  Rulof P. Burger, Stephan Klasen and Asmus Zoch 
Abstract:  There are long-standing concerns that household income mobility is overestimated due to measurement errors in reported incomes, especially in developing countries where collecting reliable survey data is often difficult. We propose a new approach that exploits the existence of three waves of panel data, which can be used to simultaneously estimate the extent of income mobility and the reliability of the income measure. This estimator is more efficient than 2SLS estimators used in other studies and produces overidentifying restrictions that can be used to test the validity of our identifying assumptions. We also introduce a nonparametric generalisation in which both the speed of income convergence and the reliability of the income measure vary with the initial income level. This approach is applied to a three-wave South African panel dataset. The results suggest that the conventional method overestimates the extent of income mobility by a factor of more than 4 and that about 20% of variation in reported household income is due to measurement error. This result is robust to the choice of income mobility measure. Nonparametric estimates show that there is relatively high (upward) income mobility for poor households, but very little (downward) income mobility for rich households, and that income is more reliably captured for rich than for poor households. 
Keywords:  Income Mobility, inequality, longitudinal data analysis, measurement error 
JEL:  J62 D63 C23 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:rza:wpaper:607&r=ecm 
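The role of the third wave can be illustrated with the classic errors-in-variables IV device: under classical measurement error, OLS of wave-3 income on wave-2 income understates persistence (overstating mobility), while instrumenting wave-2 income with wave-1 income recovers it. The simulation below is my illustration of that mechanism, not the authors' (more general) estimator.

```python
import numpy as np

rng = np.random.default_rng(8)
n, rho = 100_000, 0.8                          # true income persistence
w1 = rng.standard_normal(n)                    # latent (true) income, wave 1
w2 = rho * w1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
w3 = rho * w2 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
m = 0.5 * rng.standard_normal((3, n))          # classical measurement error
y1, y2, y3 = w1 + m[0], w2 + m[1], w3 + m[2]   # reported incomes

# OLS is attenuated toward 0 (apparent mobility too high);
# IV with the first wave as instrument is consistent for rho.
ols = np.cov(y2, y3)[0, 1] / np.var(y2, ddof=1)
iv = np.cov(y1, y3)[0, 1] / np.cov(y1, y2)[0, 1]
```

Here `ols` lands near rho · Var(w)/(Var(w) + Var(m)) = 0.64 while `iv` lands near the true 0.8 — the direction of bias behind the paper's finding that conventional methods overstate mobility.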