nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒06‒25
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A Critical Value Function Approach, with an Application to Persistent Time-Series By Moreira, Marcelo J.; Mourão, Rafael; Moreira, Humberto
  2. Semiparametric Efficient Adaptive Estimation of the PTTGARCH model By Ciccarelli, Nicola
  3. Possibly Nonstationary Cross-Validation By Federico M Bandi; Valentina Corradi; Daniel Wilhelm
  4. Alternative Asymptotics for Cointegration Tests in Large VARs By Alexei Onatski; Chen Wang
  5. On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments By Frank Windmeijer; Helmut Farbmacher; Neil Davies; George Davey Smith
  6. Simple and Honest Confidence Intervals in Nonparametric Regression By Timothy B. Armstrong; Michal Kolesár
  7. Tightness of M-estimators for multiple linear regression in time series By Søren Johansen; Bent Nielsen
  8. Likelihood-based inference for nonlinear models with both individual and time effects By Yutao Sun
  9. Long and short-run components in explanatory variables and different panel-data estimates By Alfonso Ugarte
  10. Taming volatile high frequency data with long lag structure: An optimal filtering approach for forecasting By Dirk Drechsel; Stefan Neuwirth
  12. Posterior distribution of nondifferentiable functions By Toru Kitagawa; Jose Luis Montiel Olea; Jonathan Payne
  13. Visualising Forecasting Algorithm Performance using Time Series Instance Spaces By Yanfei Kang; Rob J. Hyndman; Kate Smith-Miles
  14. The multiplex dependency structure of financial markets By Nicolò Musmeci; Vincenzo Nicosia; Tomaso Aste; Tiziana Di Matteo; Vito Latora
  15. Estimating the membership function of the fuzzy willingness-to-pay/accept for health via Bayesian modelling By Michal Jakubczyk
  16. Testing for Non-Fundamentalness By Hamidi Sahneh, Mehdi
  17. Compactness of infinite dimensional parameter spaces By Joachim Freyberger; Matthew Masten
  18. A new model for interdependent durations with an application to joint retirement By Bo Honoré; Áureo de Paula
  19. Estimating Income Mobility When Income is Measured with Error: The Case of South Africa By Rulof P. Burger, Stephan Klasen and Asmus Zoch

  1. By: Moreira, Marcelo J.; Mourão, Rafael; Moreira, Humberto
    Abstract: Researchers often rely on the t-statistic to make inference on parameters in statistical models. It is common practice to obtain critical values by simulation techniques. This paper proposes a novel numerical method to obtain an approximately similar test. This test rejects the null hypothesis when the test statistic is larger than a critical value function (CVF) of the data. We illustrate this procedure when regressors are highly persistent, a case in which commonly-used simulation methods encounter difficulties controlling size uniformly. Our approach works satisfactorily, controls size, and yields a test which outperforms the two other known similar tests.
    Date: 2016–06–06
  2. By: Ciccarelli, Nicola
    Abstract: Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a GARCH-type model featuring conditional heteroskedasticity and asymmetric volatility (i.e., the PTTGARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation, and via the Newton-Raphson technique applied to the root-n-consistent quasi-maximum likelihood estimator, we construct a more efficient estimator than the quasi-maximum likelihood estimator. Through Monte Carlo simulations, we show that the semiparametric estimator is adaptive for parameters included in the conditional variance of the model with respect to the unknown distribution of the innovation.
    Keywords: Semiparametric adaptive estimation; Power-transformed and threshold GARCH.
    JEL: C14 C22
    Date: 2016
  3. By: Federico M Bandi (Institute for Fiscal Studies); Valentina Corradi (Institute for Fiscal Studies); Daniel Wilhelm (Institute for Fiscal Studies and cemmap and UCL)
    Abstract: Cross-validation is the most common data-driven procedure for choosing smoothing parameters in nonparametric regression. For the case of kernel estimators with iid or strong mixing data, it is well known that the bandwidth chosen by cross-validation is optimal with respect to the average squared error and other performance measures. In this paper, we show that the cross-validated bandwidth continues to be optimal with respect to the average squared error even when the data-generating process is a recurrent Markov chain. This general class of processes covers stationary as well as nonstationary Markov chains. Hence, the proposed procedure adapts to the degree of recurrence, thereby freeing the researcher from the need to assume stationarity (or nonstationarity) before inference begins. We study finite-sample performance in a Monte Carlo study. We conclude by demonstrating the practical usefulness of cross-validation in a highly persistent environment, namely that of nonlinear predictive systems for market returns.
    Keywords: Bandwidth Selection, Recurrence, Predictive Regressions
    Date: 2016–03–12
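The leave-one-out cross-validation criterion the abstract refers to is easy to sketch. The snippet below is a toy illustration, not the authors' code: it chooses the bandwidth for a Nadaraya-Watson (Gaussian-kernel) regression by minimising the leave-one-out squared error on simulated data; all names and the data-generating process are my own.

```python
import numpy as np

def loocv_bandwidth(x, y, bandwidths):
    """Return the bandwidth minimising the leave-one-out CV criterion
    for a Nadaraya-Watson (Gaussian-kernel) regression estimator."""
    best_h, best_score = None, np.inf
    for h in bandwidths:
        # Kernel weights between every pair of observations.
        k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(k, 0.0)              # leave each point out of its own fit
        fitted = k @ y / k.sum(axis=1)
        score = np.mean((y - fitted) ** 2)    # average squared error
        if score < best_score:
            best_h, best_score = h, score
    return best_h

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
y = np.sin(2 * x) + rng.normal(scale=0.3, size=300)
h = loocv_bandwidth(x, y, np.linspace(0.05, 1.0, 20))
```

The point of the paper is that this same criterion remains average-squared-error optimal when the data come from a recurrent, possibly nonstationary, Markov chain rather than an iid or mixing process.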
  4. By: Alexei Onatski; Chen Wang
    Abstract: Johansen’s (1988, 1991) likelihood ratio test for cointegration rank of a Gaussian VAR depends only on the squared sample canonical correlations between current changes and past levels of a simple transformation of the data. We study the asymptotic behavior of the empirical distribution of those squared canonical correlations when the number of observations and the dimensionality of the VAR diverge to infinity simultaneously and proportionally. We find that the distribution almost surely weakly converges to the so-called Wachter distribution. This finding provides a theoretical explanation for the observed tendency of Johansen’s test to find “spurious cointegration”. It also sheds light on the workings and limitations of the Bartlett correction approach to the over-rejection problem. We propose a simple graphical device, similar to the scree plot, for a preliminary assessment of cointegration in high-dimensional VARs.
    Date: 2016–06–15
  5. By: Frank Windmeijer; Helmut Farbmacher; Neil Davies; George Davey Smith
    Abstract: We investigate the behaviour of the Lasso for selecting invalid instruments in linear instrumental variables models for estimating causal effects of exposures on outcomes, as proposed recently by Kang, Zhang, Cai and Small (2016, Journal of the American Statistical Association). Invalid instruments are such that they fail the exclusion restriction and enter the model as explanatory variables. We show that for this setup, the Lasso may not select all invalid instruments in large samples if they are relatively strong. Consistent selection also depends on the correlation structure of the instruments. We propose a median estimator that is consistent when less than 50% of the instruments are invalid, and its consistency does not depend on the relative strength of the instruments or their correlation structure. This estimator can therefore be used for adaptive Lasso estimation. The methods are applied to a Mendelian randomisation study to estimate the causal effect of BMI on diastolic blood pressure using data on individuals from the UK Biobank, with 96 single nucleotide polymorphisms as potential instruments for BMI.
    Keywords: causal inference, instrumental variables estimation, invalid instruments, Lasso, Mendelian randomisation.
    Date: 2016–06–02
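The median estimator described above is simple to illustrate. The sketch below is my own toy simulation, not the authors' code: it forms one just-identified IV estimate per candidate instrument and takes the median. With fewer than 50% of instruments invalid, the median lands near the true effect even though the invalid-instrument estimates are biased.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 5000, 9            # observations, candidate instruments
beta = 0.5                # true causal effect of x on y
z = rng.normal(size=(n, L))
# Instruments 0-2 are invalid: they enter the outcome equation directly.
alpha = np.array([0.4, 0.4, 0.4] + [0.0] * 6)
u = rng.normal(size=n)                               # confounder
x = z.sum(axis=1) + rng.normal(size=n) + 0.5 * u     # endogenous regressor
y = beta * x + z @ alpha + u

# One just-identified IV estimate per candidate instrument.
iv = np.array([np.cov(z[:, j], y)[0, 1] / np.cov(z[:, j], x)[0, 1]
               for j in range(L)])
beta_med = np.median(iv)   # close to 0.5; invalid instruments give ~0.9
```

With 3 of 9 instruments invalid, the six valid just-identified estimates cluster at the true value and dominate the median, which is the consistency property the paper exploits to build an adaptive Lasso.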
  6. By: Timothy B. Armstrong (Cowles Foundation, Yale University); Michal Kolesár (Princeton University)
    Abstract: We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we derive and tabulate novel critical values that take into account the possible bias of the estimator upon which the CIs are based. We give sharp efficiency bounds for using different kernels, and derive the optimal bandwidth for constructing honest CIs. We show that using the bandwidth that minimizes the maximum mean-squared error results in CIs that are nearly efficient and that in this case, the critical value depends only on the rate of convergence. For the common case in which the rate of convergence is n^{-4/5}, the appropriate critical value for 95% CIs is 2.18, rather than the usual 1.96 critical value. We illustrate our results in an empirical application.
    Keywords: Nonparametric inference, relative efficiency
    JEL: C12 C14
    Date: 2016–06
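The headline numerical result (a 2.18 critical value replacing the usual 1.96 in the n^{-4/5} case) translates directly into an interval about 11% wider. A minimal sketch, assuming a point estimate and standard error are already in hand; the numbers are made up for illustration:

```python
import numpy as np

def honest_ci(estimate, std_error, cv=2.18):
    """95% honest CI at the MSE-optimal bandwidth: the critical value 2.18
    (rather than 1.96) absorbs the worst-case smoothing bias."""
    return estimate - cv * std_error, estimate + cv * std_error

# Hypothetical regression-discontinuity estimate and standard error.
lo, hi = honest_ci(1.30, 0.20)
# The honest interval is 2.18 / 1.96, i.e. about 11%, wider than the naive one.
width_ratio = 2.18 / 1.96
```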
  7. By: Søren Johansen (Department of Economics, University of Copenhagen); Bent Nielsen (Department of Economics, Nuffield College)
    Abstract: We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed lower semi-continuous and sufficiently large for large arguments; particular cases are the Huber-skip and quantile regression. Tightness requires an assumption on the frequency of small regressors. We show that this is satisfied for a variety of deterministic and stochastic regressors, including stationary and random-walk regressors. The results are obtained using a detailed analysis of the condition on the regressors combined with some recent martingale results.
    Keywords: M-estimator, robust statistics, martingales, Huber-skip, quantile estimation.
    Date: 2016–06–10
  8. By: Yutao Sun
    Abstract: We propose a bias correction method for nonlinear models with both individual and time effects. Under the presence of the incidental parameter problem, the maximum likelihood estimator derived from such models may be severely biased. Our method produces an approximation to an infeasible log-likelihood function that is not exposed to the incidental parameter problem. The maximizer of the approximating function serves as a bias-corrected estimator that is asymptotically unbiased when the ratio N/T converges to a constant. The proposed method is general in several respects: it can be extended to models with multiple fixed effects and can easily be modified to accommodate dynamic models.
    Date: 2016–05
  9. By: Alfonso Ugarte
    Abstract: We investigate the idea that separating an explanatory variable into its “between” and “within” variations roughly decomposes it into a structural (long-term) and a cyclical component, respectively, and that this could translate into different Between and Within estimates in panel data.
    Keywords: Global, Research, Working Paper
    JEL: C01 C18 C23 C33 C51 C58 G20 G21
    Date: 2016–05
  10. By: Dirk Drechsel (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Stefan Neuwirth (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: We propose a Bayesian optimal filtering setup for improving out-of-sample forecasting performance when using volatile high frequency data with long lag structure for forecasting low-frequency data. We test this setup by using real-time Swiss construction investment and construction permit data. We compare our approach to different filtering techniques and show that our proposed filter outperforms various commonly used filtering techniques in terms of extracting the more relevant signal of the indicator series for forecasting.
    Keywords: Forecasting, construction, Switzerland, Bayesian, mixed data frequencies
    Date: 2016–06
  11. By: Davide De Gaetano
    Abstract: In this paper the problem of instability due to changes in the parameters of some Realized Volatility (RV) models has been addressed. The analysis is based on 5-minute RV of four U.S. stock market indices. Three different representations of the log-RV have been considered and, for each of them, the parameter instability has been detected by using the recursive estimates test. In order to analyse how instabilities in the parameters affect the forecasting performance, an out-of-sample forecasting exercise has been performed. In particular, several forecast combinations, designed to accommodate potential structural breaks, have been considered. All of them are based on different estimation windows, with alternative weighting schemes, and do not explicitly take estimated break dates into account. The model confidence set has been used to compare the forecasting performances of the proposed approaches. Our analysis gives empirical evidence of the effectiveness of the combinations that adjust for the most recent possible break point.
    Keywords: Forecast combinations, Structural breaks, Realized volatility
    JEL: C53 C58 G17
    Date: 2016–06
  12. By: Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Jose Luis Montiel Olea (Institute for Fiscal Studies and New York University); Jonathan Payne (Institute for Fiscal Studies)
    Abstract: This paper examines the asymptotic behavior of the posterior distribution of a possibly nondifferentiable function g(theta), where theta is a finite dimensional parameter. The main assumption is that the distribution of the maximum likelihood estimator theta_n, its bootstrap approximation, and the Bayesian posterior for theta all agree asymptotically. It is shown that whenever g is Lipschitz, though not necessarily differentiable, the posterior distribution of g(theta) and the bootstrap distribution of g(theta_n) coincide asymptotically. One implication is that Bayesians can interpret bootstrap inference for g(theta) as approximately valid posterior inference in a large sample. Another implication—built on known results about bootstrap inconsistency—is that the posterior distribution of g(theta) does not coincide with the asymptotic distribution of g(theta_n) at points of nondifferentiability. Consequently, frequentists cannot presume that credible sets for a nondifferentiable parameter g(theta) can be interpreted as approximately valid confidence sets (even when this relation holds true for theta).
    Keywords: Distribution, nondifferentiable functions
    Date: 2016–05–09
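The nondifferentiable case is easy to simulate. The toy sketch below is my own illustration, not the authors' code: it takes g(theta) = |theta| at theta = 0, a Lipschitz function with a kink. There, the sampling distribution of sqrt(n)·g(theta_hat) converges to |N(0,1)|, while the bootstrap analogue computed from a single realised sample generally has a different shape at the kink, which is the known bootstrap inconsistency the paper's frequentist implication builds on.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 2000
g = np.abs                      # Lipschitz but not differentiable at 0

# Sampling distribution of sqrt(n) * (g(theta_hat) - g(0)) at theta = 0:
theta_hats = rng.normal(0, 1, (reps, n)).mean(axis=1)
sampling = np.sqrt(n) * g(theta_hats)        # distributed exactly as |N(0,1)|

# Bootstrap distribution computed from one realised sample:
sample = rng.normal(0, 1, n)
idx = rng.integers(0, n, (reps, n))          # resample with replacement
boot_means = sample[idx].mean(axis=1)
bootstrap = np.sqrt(n) * (g(boot_means) - g(sample.mean()))
# At the kink, `bootstrap` need not match the |N(0,1)| limit of `sampling`,
# so bootstrap (and, by the paper's result, posterior) quantiles can be
# invalid for frequentist coverage at theta = 0.
```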
  13. By: Yanfei Kang; Rob J. Hyndman; Kate Smith-Miles
    Abstract: It is common practice to evaluate the strength of forecasting methods using collections of well-studied time series datasets, such as the M3 data. But how diverse are these time series, how challenging, and do they enable us to study the unique strengths and weaknesses of different forecasting methods? In this paper we propose a visualisation method for a collection of time series that enables a time series to be represented as a point in a 2-dimensional instance space. The effectiveness of different forecasting methods can be visualised easily across this space, and the diversity of the time series in an existing collection can be assessed. Noting that the M3 dataset is not as diverse as we would ideally like, this paper also proposes a method for generating new time series with controllable characteristics to fill in and spread out the instance space, making generalisations of forecasting method performance as robust as possible.
    Keywords: M3-Competition, time series visualisation, time series generation, forecasting algorithm comparison
    JEL: C52 C53 C55
    Date: 2016
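The instance-space idea can be sketched with two hand-picked features; the paper uses a richer feature set projected to two dimensions, so the snippet below is only a toy analogue with names and data of my own choosing. Each series in a collection becomes a point in a 2-D space, and different classes of series occupy different regions.

```python
import numpy as np

def features(ts):
    """Two simple instance-space features: first-order autocorrelation
    and a scaled linear-trend slope of the standardised series."""
    ts = (ts - ts.mean()) / ts.std()
    acf1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]
    t = np.arange(len(ts))
    slope = np.polyfit(t, ts, 1)[0] * len(ts)
    return acf1, slope

rng = np.random.default_rng(3)
# A small collection: 20 random walks followed by 20 white-noise series.
collection = [np.cumsum(rng.normal(size=100)) for _ in range(20)] \
           + [rng.normal(size=100) for _ in range(20)]
pts = np.array([features(ts) for ts in collection])
# Random walks cluster at high autocorrelation; white noise sits near zero,
# so the two classes separate in this toy instance space.
```

Plotting forecast accuracy as a colour over such points is the visualisation device the paper proposes, and generating new series to fill empty regions is its second contribution.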
  14. By: Nicolò Musmeci; Vincenzo Nicosia; Tomaso Aste; Tiziana Di Matteo; Vito Latora
    Abstract: We propose here a multiplex network approach to investigate simultaneously different types of dependency in complex data sets. In particular, we consider multiplex networks made of four layers corresponding respectively to linear, non-linear, tail, and partial correlations among a set of financial time series. We construct the sparse graph on each layer using a standard network filtering procedure, and we then analyse the structural properties of the obtained multiplex networks. The study of the time evolution of the multiplex constructed from financial data uncovers important changes in intrinsically multiplex properties of the network, and such changes are associated with periods of financial stress. We observe that some features are unique to the multiplex structure and would not be visible otherwise by the separate analysis of the single-layer networks corresponding to each dependency measure.
    Date: 2016–06
  15. By: Michal Jakubczyk
    Abstract: Determining how to trade off individual criteria is often not obvious, especially when attributes of very different natures are juxtaposed, e.g. health and money. The difficulty stems both from the lack of adequate market experience and from a strong ethical component in valuing some goods, resulting in inherently imprecise preferences. Fuzzy sets can be used to model willingness-to-pay/accept (WTP/WTA), so as to quantify this imprecision and support the decision-making process. The preferences then need to be estimated from available data. In this paper I show how to estimate the membership function of fuzzy WTP/WTA when decision makers’ preferences are collected via a survey with Likert-based questions. I apply the proposed methodology to an exemplary data set on WTP/WTA for health. The mathematical model contains two elements: a parametric representation of the membership function and a model of how it is translated into Likert options. The model parameters are estimated in a Bayesian approach using Markov chain Monte Carlo. The results suggest a slight WTP/WTA disparity, with WTA being fuzzier than WTP. The model is sensitive to single respondents with lexicographic preferences, i.e. those not willing to accept any trade-off between health and money.
    Keywords: willingness-to-pay/accept, fuzzy set, membership function, preference elicitation
    JEL: J17 C11 C13 D71
    Date: 2016–04
  16. By: Hamidi Sahneh, Mehdi
    Abstract: Non-fundamentalness arises when observed variables do not contain enough information to recover structural shocks. This paper proposes a new test to empirically detect non-fundamentalness, which is robust to conditional heteroskedasticity of unknown form, does not require information from outside the specified model, and can be carried out with a standard F-test. A Monte Carlo study based on a DSGE model is conducted to examine the finite-sample performance of the test. I apply the proposed test to U.S. quarterly data to identify the dynamic effects of supply and demand disturbances on real GNP and unemployment.
    Keywords: Non-Fundamentalness; Invertibility; Vector Autoregressive.
    JEL: C32 C5 E3
    Date: 2016–06–01
  17. By: Joachim Freyberger (Institute for Fiscal Studies); Matthew Masten (Institute for Fiscal Studies)
    Abstract: We provide general compactness results for many commonly used parameter spaces in nonparametric estimation. We consider three kinds of functions: (1) functions with bounded domains which satisfy standard norm bounds, (2) functions with bounded domains which do not satisfy standard norm bounds, and (3) functions with unbounded domains. In all three cases we provide two kinds of results, compact embedding and closedness, which together allow one to show that parameter spaces defined by a ||·||s-norm bound are compact under a norm ||·||c. We apply these results to nonparametric mean regression and nonparametric instrumental variables estimation.
    Keywords: Nonparametric estimation, sieve estimation, trimming, nonparametric instrumental variables
    JEL: C14 C26 C51
    Date: 2016–01–03
  18. By: Bo Honoré (Institute for Fiscal Studies and Princeton); Áureo de Paula (Institute for Fiscal Studies and University College London)
    Abstract: This paper introduces a bivariate version of the generalized accelerated failure time model. It allows for simultaneity in the econometric sense that the two realized outcomes depend structurally on each other. Another feature of the proposed model is that it will generate equal durations with positive probability. The motivating example is retirement decisions by married couples. In that example it seems reasonable to allow for the possibility that each partner's optimal retirement time depends on the retirement time of the spouse. Moreover, the data suggest that the wife and the husband retire at the same time for a nonnegligible fraction of couples. Our approach takes as a starting point a stylized economic model that leads to a univariate generalized accelerated failure time model. The covariates of that generalized accelerated failure time model act as utility-flow shifters in the economic model. We introduce simultaneity by allowing the utility flow in retirement to depend on the retirement status of the spouse. The econometric model is then completed by assuming that the observed outcome is the Nash bargaining solution in that simple economic model. The advantage of this approach is that it includes independent realizations from the generalized accelerated failure time model as a special case, and deviations from this special case can be given an economic interpretation. We illustrate the model by studying the joint retirement decisions in married couples using the Health and Retirement Study. We provide a discussion of relevant identifying variation and estimate our model using indirect inference. The main empirical finding is that the simultaneity seems economically important. In our preferred specification the indirect utility associated with being retired increases by approximately 5% when one's spouse retires. The estimated model also predicts that the marginal effect of a change in the husbands' pension plan on wives' retirement dates is about 3.3% of the direct effect on the husbands'.
    JEL: J26 C41 C3
    Date: 2016–02–17
  19. By: Rulof P. Burger, Stephan Klasen and Asmus Zoch
    Abstract: There are long-standing concerns that household income mobility is over-estimated due to measurement errors in reported incomes, especially in developing countries where collecting reliable survey data is often difficult. We propose a new approach that exploits the existence of three waves of panel data to simultaneously estimate the extent of income mobility and the reliability of the income measure. This estimator is more efficient than the 2SLS estimators used in other studies and produces over-identifying restrictions that can be used to test the validity of our identifying assumptions. We also introduce a nonparametric generalisation in which both the speed of income convergence and the reliability of the income measure vary with the initial income level. This approach is applied to a three-wave South African panel dataset. The results suggest that the conventional method over-estimates the extent of income mobility by a factor of more than 4 and that about 20% of the variation in reported household income is due to measurement error. This result is robust to the choice of income mobility measure. Nonparametric estimates show that there is relatively high (upward) income mobility for poor households, but very little (downward) income mobility for rich households, and that income is more reliably captured for rich than for poor households.
    Keywords: Income Mobility, inequality, longitudinal data analysis, measurement error
    JEL: J62 D63 C23
    Date: 2016
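The identification idea, that a third wave of data lets one separate true persistence from measurement error, can be shown in a simulated sketch. This is my own illustration of the 2SLS-style fix the paper benchmarks against, not the authors' estimator, and the numbers are hypothetical: OLS of wave-3 on wave-2 income is attenuated by reporting noise, so mobility looks too high; instrumenting wave-2 income with wave-1 income removes the attenuation.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 20000, 0.9            # high persistence = low true mobility
# Latent log income follows a stationary AR(1) across three waves.
x0 = rng.normal(size=n)
x1 = beta * x0 + rng.normal(scale=np.sqrt(1 - beta**2), size=n)
x2 = beta * x1 + rng.normal(scale=np.sqrt(1 - beta**2), size=n)
# Reported income adds independent measurement error each wave.
noise = 0.5
y0, y1, y2 = (x + rng.normal(scale=noise, size=n) for x in (x0, x1, x2))

# Naive OLS of wave 3 on wave 2: attenuated towards zero (mobility overstated).
b_ols = np.cov(y1, y2)[0, 1] / np.var(y1)
# Instrumenting wave 2 with wave 1 cancels the attenuation.
b_iv = np.cov(y0, y2)[0, 1] / np.cov(y0, y1)[0, 1]
```

Here b_ols is pulled well below the true persistence of 0.9 while b_iv recovers it; the paper's estimator exploits the same three-wave structure more efficiently and yields testable over-identifying restrictions.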

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP homepage. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.