New Economics Papers on Econometrics |
By: | Giuseppe Cavaliere (University of Bologna); Morten Ørregaard Nielsen (Queen's University and CREATES); A.M. Robert Taylor (University of Essex) |
Abstract: | We consider estimation and inference in fractionally integrated time series models driven by shocks which can display conditional and unconditional heteroskedasticity of unknown form. Although the standard conditional sum-of-squares (CSS) estimator remains consistent and asymptotically normal in such cases, unconditional heteroskedasticity inflates its variance matrix by a scalar quantity, $\lambda > 1$, thereby inducing a loss in efficiency relative to the unconditionally homoskedastic case, $\lambda = 1$. We propose an adaptive version of the CSS estimator, based on non-parametric kernel-based estimation of the unconditional variance process. This eliminates the factor $\lambda$ from the variance matrix, thereby delivering the same asymptotic efficiency as that attained by the standard CSS estimator in the unconditionally homoskedastic case and, hence, asymptotic efficiency under Gaussianity. The asymptotic variance matrices of both the standard and adaptive CSS estimators depend on any conditional heteroskedasticity and/or weak parametric autocorrelation present in the shocks. Consequently, asymptotically pivotal inference can be achieved through confidence regions or hypothesis tests based on heteroskedasticity-robust standard errors and/or a wild bootstrap (a sketch of the latter follows this entry). Monte Carlo simulations and empirical applications are included to illustrate the practical usefulness of the proposed methods. |
Keywords: | adaptive estimation, conditional sum-of-squares, fractional integration, heteroskedasticity, quasi-maximum likelihood estimation, wild bootstrap |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1390&r=ecm |
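Where the abstract invokes a wild bootstrap for pivotal inference under heteroskedasticity, a minimal sketch may help. The following is a generic residual-based wild bootstrap for an AR(1) slope estimated by least squares — not the authors' CSS procedure for fractionally integrated models; the data-generating design, the Rademacher weights, and all names are our own illustrative assumptions.

```python
# Hedged sketch: a generic wild bootstrap for an AR(1) slope under
# unconditional heteroskedasticity (shock variance doubles mid-sample).
import numpy as np

rng = np.random.default_rng(0)

def fit_ar1(y):
    """Least-squares estimate of phi in y_t = phi * y_{t-1} + e_t."""
    y0, y1 = y[:-1], y[1:]
    return y0 @ y1 / (y0 @ y0)

T, phi = 500, 0.5
sigma = np.where(np.arange(T) < T // 2, 1.0, 2.0)
e = rng.standard_normal(T) * sigma
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]

phi_hat = fit_ar1(y)
resid = y[1:] - phi_hat * y[:-1]

# Wild bootstrap: flip residual signs with Rademacher weights, rebuild
# the series under the estimated dynamics, and re-estimate phi.
B = 999
phi_star = np.empty(B)
for b in range(B):
    e_star = resid * rng.choice([-1.0, 1.0], size=resid.size)
    y_star = np.zeros(T)
    for t in range(1, T):
        y_star[t] = phi_hat * y_star[t - 1] + e_star[t - 1]
    phi_star[b] = fit_ar1(y_star)

se_boot = phi_star.std(ddof=1)
print(f"phi_hat = {phi_hat:.3f}, wild-bootstrap s.e. = {se_boot:.3f}")
```

Because each bootstrap shock keeps the magnitude of the original residual and only randomizes its sign, the resampled series inherit the sample's variance profile, which is what makes the scheme robust to heteroskedasticity.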
By: | Jiti Gao; Kai Xia |
Abstract: | This paper considers a semiparametric panel data model with heterogeneous coefficients and individual-specific trending functions, where the random errors are assumed to be serially correlated and cross-sectionally dependent. We propose mean group estimators for the coefficients and trending functions involved in the model (a stylized sketch of the mean group idea follows this entry). The proposed estimators are shown to be consistent at the rates root-NT and root-NTh, respectively, as (N, T) -> (∞, ∞), where N is allowed to increase faster than T. Furthermore, a statistic for testing homogeneity of the coefficients is constructed based on the difference between the mean group estimator and a pooled estimator. Its asymptotic distributions are established under both the null and a sequence of local alternatives, even when the difference between these estimators vanishes quite fast (at a rate of up to root-NT^2 under the null), and no explicit consistent estimator of the covariance matrix is required. The finite sample performance of the proposed estimators, together with the size and local power properties of the test, is demonstrated by simulated data examples, and an empirical application to the OECD health care expenditure dataset is also provided. |
Keywords: | Health care expenditure, nonlinear trending function, nonstationary time series. |
JEL: | C14 C22 G17 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2017-16&r=ecm |
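The mean group idea at the heart of the paper can be sketched in a few lines: estimate a slope separately for each panel unit, then average across units. This toy deliberately ignores the trending functions, serial correlation, and cross-sectional dependence the paper accommodates; the heterogeneous-slope design is our own assumption.

```python
# Hedged sketch of a mean group estimator on a toy heterogeneous panel.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 50

beta_i = 2.0 + 0.3 * rng.standard_normal(N)    # heterogeneous slopes
x = rng.standard_normal((N, T))
y = beta_i[:, None] * x + rng.standard_normal((N, T))

# Unit-by-unit OLS, then the cross-sectional average (mean group estimate).
beta_hat_i = (x * y).sum(axis=1) / (x * x).sum(axis=1)
beta_mg = beta_hat_i.mean()
se_mg = beta_hat_i.std(ddof=1) / np.sqrt(N)
print(f"mean group estimate = {beta_mg:.3f} (s.e. {se_mg:.3f})")
```

The paper's homogeneity test compares an estimator like `beta_mg` with a pooled estimator; under slope homogeneity the two converge to the same limit.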
By: | Piatek, Rémi (University of Copenhagen); Gensowski, Miriam (University of Copenhagen) |
Abstract: | We develop a parametrization of the multinomial probit model that yields greater insight into the underlying decision-making process, by decomposing the error terms of the utilities into latent factors and noise. The latent factors are identified without a measurement system, and they can be meaningfully linked to an economic model. We provide sufficient conditions that make this structure identified and interpretable. For inference, we design a Markov chain Monte Carlo sampler based on marginal data augmentation. A simulation exercise shows the good numerical performance of our sampler and reveals the practical importance of alternative identification restrictions. Our approach can generally be applied to any setting where researchers can specify an a priori structure on a few drivers of unobserved heterogeneity. One such example is the choice of combinations of two options, which we explore with real data on education and occupation pairs. |
Keywords: | multinomial probit, latent factors, Bayesian analysis, marginal data augmentation, educational choice, occupational choice |
JEL: | C11 C25 C35 |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp11042&r=ecm |
By: | Chambers, MJ; McCrorie, JR; Thornton, MA |
Abstract: | This chapter provides a survey of methods of continuous time modelling based on an exact discrete time representation. It begins by highlighting the techniques involved in deriving an exact discrete time representation of an underlying continuous time model, providing specific details for a second-order linear system of stochastic differential equations (a one-dimensional sketch follows this entry). Issues of parameter identification, Granger causality, nonstationarity, and mixed frequency data are addressed, all being important considerations in applications in economics and other disciplines. Although the focus is on Gaussian estimation of the exact discrete time model, alternative time domain (state space) and frequency domain approaches are also discussed. Computational issues are explored, and two new empirical applications are included along with a discussion of applications in the field of macroeconometric modelling. |
Keywords: | Continuous time; exact discrete time representation; stochastic differential equation; Gaussian estimation; identification; Granger causality; nonstationarity; mixed frequency data; computation; macroeconometric modelling. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:esx:essedp:20497&r=ecm |
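The chapter treats a second-order system; the exact-discretization principle is easiest to see in one dimension. For the Ornstein-Uhlenbeck SDE dX = a(mu - X)dt + sigma dW, the sampled process is exactly an AR(1), with AR coefficient, intercept, and innovation variance implied by the continuous-time parameters. The parameter values and sampling interval below are our own illustrative choices.

```python
# Hedged sketch: exact discrete time representation of a first-order SDE.
import numpy as np

rng = np.random.default_rng(2)
a, mu, sigma = 0.8, 1.0, 0.5   # continuous-time parameters (assumed)
delta = 0.25                   # sampling interval
n = 10_000

# Exact AR(1): X_{t+delta} = c + rho * X_t + eps, eps ~ N(0, v),
# with rho, c, v implied by the SDE parameters at interval delta.
rho = np.exp(-a * delta)
c = mu * (1.0 - rho)
v = sigma**2 * (1.0 - np.exp(-2.0 * a * delta)) / (2.0 * a)

x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = c + rho * x[t - 1] + np.sqrt(v) * rng.standard_normal()

# Recover the continuous-time mean-reversion rate from the discrete AR(1).
rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"a implied by data: {-np.log(rho_hat) / delta:.3f} (true {a})")
```

Because the mapping from (a, mu, sigma) to (rho, c, v) is exact rather than an Euler approximation, estimation of the discrete model recovers the continuous-time parameters at any sampling interval.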
By: | Pedersen, Rasmus Søndergaard |
Abstract: | We consider robust inference on an autoregressive parameter in a stationary autoregressive model with GARCH innovations when estimation is by least squares. As the innovations exhibit GARCH, they are by construction heavy-tailed with some tail index $\kappa$. The rate of consistency as well as the limiting distribution of the least squares estimator depend on $\kappa$. In the spirit of Ibragimov and Müller (“t-statistic based correlation and heterogeneity robust inference”, Journal of Business & Economic Statistics, 2010, vol. 28, pp. 453-468), we consider testing a hypothesis about the parameter based on a Student’s t-statistic computed from a fixed number of subsamples of the original sample (a sketch follows this entry). The merit of this approach is that no knowledge of the value of $\kappa$, nor of the rate of consistency and the limiting distribution of the least squares estimator, is required. We verify that the one-sided t-test is asymptotically a level $\alpha$ test whenever $\alpha \le $ 5%, uniformly over $\kappa \ge 2$, which includes cases where the innovations have infinite variance. A simulation experiment suggests that the finite-sample properties of the test are quite good. |
Keywords: | t-test, AR-GARCH, regular variation, least squares estimation |
JEL: | C12 C22 C46 C51 |
Date: | 2017–10–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:81979&r=ecm |
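The Ibragimov-Müller device is simple enough to sketch directly: split the sample into q blocks, estimate the parameter on each block, and apply an ordinary one-sample t-test to the q block estimates. The AR-GARCH design, the choice q = 8, and all names below are our own illustrative assumptions.

```python
# Hedged sketch of the subsampling t-test for an AR(1) with GARCH errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def fit_ar1(y):
    y0, y1 = y[:-1], y[1:]
    return y0 @ y1 / (y0 @ y0)

# AR(1) with GARCH(1,1) innovations (heavy-tailed by construction).
T, phi = 4000, 0.5
omega, alpha, beta = 0.1, 0.15, 0.8
e, h = np.zeros(T), np.full(T, omega / (1 - alpha - beta))
for t in range(1, T):
    h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.standard_normal()
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]

# q block estimates, then a standard t-test of H0: phi = 0.5.
q = 8
phi_blocks = np.array([fit_ar1(block) for block in np.array_split(y, q)])
tstat, pval = stats.ttest_1samp(phi_blocks, popmean=0.5)
print(f"block estimates: {np.round(phi_blocks, 3)}")
print(f"t = {tstat:.2f}, two-sided p = {pval:.3f}")
```

Note that nothing in the test statistic depends on the tail index or the convergence rate of the estimator — only on the approximate independence and symmetry of the block estimates, which is exactly the point of the approach.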
By: | Zhongwen Liang |
Abstract: | In this paper, a unified approach is proposed to derive the exact local asymptotic power of panel unit root tests, one of the most important issues in the nonstationary panel data literature. The two most widely used panel unit root tests, the Levin-Lin-Chu (LLC, Levin, Lin and Chu (2002)) and Im-Pesaran-Shin (IPS, Im, Pesaran and Shin (2003)) tests, are systematically studied in various situations to illustrate our method. Our approach is characteristic-function based, and can be used directly to derive the moments of the asymptotic distributions of these test statistics under the null and under local-to-unity alternatives. For the LLC test, the approach provides an alternative way to obtain results that can be derived by existing methods. For the IPS test, new results are obtained, filling a gap in the literature where few results exist because the IPS test is non-admissible. Moreover, our approach has the advantage of facilitating Edgeworth expansions of these tests, which are also given in the paper. Simulations are presented to illustrate our theoretical findings. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.02944&r=ecm |
By: | Wayne Yuan Gao |
Abstract: | This paper characterizes the minimax linear estimator of the value of an unknown function at a boundary point of its domain in a Gaussian white noise model under the restriction that the first-order derivative of the unknown function is Lipschitz continuous (the second-order Hölder class). The result is then applied to construct the minimax optimal estimator for the regression discontinuity design model, where the parameter of interest involves function values at boundary points. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.06809&r=ecm |
By: | Yen-Chi Chen |
Abstract: | We review recent advances in modal regression studies using kernel density estimation. Modal regression is an alternative approach for investigating the relationship between a response variable and its covariates. Specifically, modal regression summarizes the interactions between the response variable and covariates using the conditional mode or local modes (a sketch of a conditional-mode estimator follows this entry). We first describe the underlying model of modal regression and its estimators based on kernel density estimation. We then review the asymptotic properties of the estimators and strategies for choosing the smoothing bandwidth. We also discuss useful algorithms and similar alternative approaches for modal regression, and propose future directions for this field. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.07004&r=ecm |
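A toy version of kernel modal regression fits in a few lines: estimate the conditional mode m(x) = argmax_y p(y|x) by maximizing a kernel estimate of the joint density over a grid of y values at each x. The bandwidths, the grid, and the bimodal design below are our own illustrative choices, not the estimators surveyed in the paper.

```python
# Hedged sketch: conditional-mode estimation via a 2-D kernel density.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.uniform(0, 1, n)
# Bimodal conditional distribution: a mean regression would fall between
# the two branches, but the conditional mode tracks the dominant one.
branch = rng.random(n) < 0.6
y = np.where(branch, np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n),
                     3.0 + 0.1 * rng.standard_normal(n))

def conditional_mode(x0, hx=0.05, hy=0.05, grid=np.linspace(-2, 4, 601)):
    """argmax over y of the kernel estimate of the joint density at (x0, y)."""
    wx = np.exp(-0.5 * ((x - x0) / hx) ** 2)        # kernel weights in x
    dens = (wx[None, :] *
            np.exp(-0.5 * ((grid[:, None] - y) / hy) ** 2)).sum(axis=1)
    return grid[np.argmax(dens)]

for x0 in (0.1, 0.25, 0.5):
    print(f"m({x0}) ≈ {conditional_mode(x0):.2f}")
```

In this design the estimated mode follows the sine branch carrying 60% of the mass, illustrating why modal regression can be more informative than the conditional mean when the response is multimodal.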
By: | Masafumi Nakano (Graduate School of Economics, The University of Tokyo); Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Soichiro Takahashi (Graduate School of Economics, The University of Tokyo) |
Abstract: | This paper proposes a new state space approach to adaptive fuzzy modeling in a dynamic environment, where Bayesian filtering sequentially learns the model parameters, including the model structures themselves, as state variables. In particular, our approach specifies the state transitions as mean-reversion processes, which is intended to incorporate and extend established state-of-the-art learning techniques as follows: First, the mean-reversion levels of the model parameters are determined by applying an existing learning method to a training period. Next, filtering over the test data enables on-line estimation of the parameters, where the estimates are adaptively tuned at each new data arrival based on the reliable learning result obtained previously. In this work, we concretely design a Takagi-Sugeno-Kang fuzzy model for financial investment, whose parameters follow autoregressive processes with mean-reversion levels decided by particle swarm optimization. Since Monte Carlo simulation-based algorithms known as particle filters are available, our methodology is applicable to quite general settings including non-linearity, which actually arises in our investment problem (a minimal particle filter sketch follows this entry). An out-of-sample numerical experiment with security price data successfully demonstrates its effectiveness. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2017cf1067&r=ecm |
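A minimal bootstrap particle filter conveys the filtering step: a latent slope mean-reverts around a pre-estimated level and is observed through a noisy linear signal. This is a toy stand-in for the paper's filtering of fuzzy-model parameters; the model form, the fixed level `mu` (standing in for a level learned on training data), and all numbers are our own assumptions.

```python
# Hedged sketch: bootstrap particle filter for a mean-reverting parameter.
import numpy as np

rng = np.random.default_rng(5)
T, n_part = 200, 2000
mu, rho, s_state, s_obs = 1.0, 0.95, 0.1, 0.5  # mu: mean-reversion level

# Simulate: theta_t = mu + rho*(theta_{t-1} - mu) + state noise,
#           y_t = theta_t * x_t + observation noise.
theta = np.empty(T); theta[0] = mu
x = rng.standard_normal(T)
for t in range(1, T):
    theta[t] = mu + rho * (theta[t - 1] - mu) + s_state * rng.standard_normal()
y = theta * x + s_obs * rng.standard_normal(T)

# Particle filter: propagate, weight by the likelihood, resample.
particles = mu + 0.5 * rng.standard_normal(n_part)
est = np.empty(T)
for t in range(T):
    particles = mu + rho * (particles - mu) + s_state * rng.standard_normal(n_part)
    w = np.exp(-0.5 * ((y[t] - particles * x[t]) / s_obs) ** 2)
    w /= w.sum()
    est[t] = w @ particles                     # filtered parameter estimate
    particles = particles[rng.choice(n_part, size=n_part, p=w)]

print(f"mean abs tracking error: {np.abs(est - theta).mean():.3f}")
```

Because only simulation from the transition density and pointwise likelihood evaluation are needed, the same loop applies unchanged to non-linear models such as the paper's fuzzy system.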
By: | Kaifeng Zhao; Seyed Hanif Mahboobi; Saeed Bagheri |
Abstract: | This paper examines and proposes several attribution modeling methods that quantify how revenue should be attributed to online advertising inputs. We adopt and further develop the relative importance method, which is based on regression models that have been extensively studied and utilized to investigate the relationship between advertising efforts and market reaction (revenue). The relative importance method decomposes and allocates marginal contributions to the coefficient of determination (R^2) of a regression model as attribution values (a sketch of one such decomposition follows this entry). In particular, we adopt two alternative sub-methods to perform this decomposition: dominance analysis and relative weight analysis. Moreover, we demonstrate an extension of the decomposition methods from the standard linear model to the additive model. We argue that our new approaches are more flexible and accurate in modeling the underlying relationship and calculating the attribution values. We use simulation examples to demonstrate the superior performance of our new approaches over traditional methods, and further illustrate their value using a real advertising campaign dataset. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.06561&r=ecm |
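Dominance analysis admits a compact sketch: average each predictor's marginal contribution to R^2 over all orderings of entry into the model (a Shapley decomposition), which is feasible here because the toy example has only three channels. The channel names and data-generating weights are our own illustrative assumptions.

```python
# Hedged sketch: Shapley/dominance decomposition of regression R^2.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(6)
n, names = 5000, ["search", "display", "social"]
X = rng.standard_normal((n, 3))
X[:, 1] += 0.5 * X[:, 0]                      # correlated ad channels
revenue = X @ np.array([1.0, 0.5, 0.2]) + rng.standard_normal(n)

def r2(cols):
    """R^2 of an OLS regression of revenue on the given channel columns."""
    if not cols:
        return 0.0
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(Z, revenue, rcond=None)
    resid = revenue - Z @ coef
    tss = ((revenue - revenue.mean()) ** 2).sum()
    return 1.0 - (resid ** 2).sum() / tss

# Average each channel's marginal R^2 contribution over all orderings.
shapley = np.zeros(3)
orders = list(permutations(range(3)))
for order in orders:
    included = []
    for j in order:
        base = r2(included)
        included.append(j)
        shapley[j] += (r2(included) - base) / len(orders)

for name, share in zip(names, shapley):
    print(f"{name:8s} attributed R^2 = {share:.3f}")
print(f"sum = {shapley.sum():.3f}, full-model R^2 = {r2([0, 1, 2]):.3f}")
```

The shares sum exactly to the full-model R^2, which is what makes this decomposition attractive as an attribution rule when channels are correlated.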
By: | Hans (J.L.W.) van Kippersluis (Erasmus School of Economics, The Netherlands; Tinbergen Institute, The Netherlands); Niels (C.A.) Rietveld (Erasmus School of Economics, The Netherlands) |
Abstract: | We synthesize two recent advances in the literature on instrumental variables (IV) estimation that test and relax the exclusion restriction. Our approach first estimates the direct effect of the IV on the outcome in a subsample for which the IV does not affect the treatment variable. Subsequently, this estimate of the direct effect is used as input to the plausibly exogenous method developed by Conley, Hansen and Rossi (2012) (a sketch follows this entry). This two-step procedure provides a novel and informed sensitivity analysis for IV estimation. We illustrate its practical use by estimating the causal effect of (i) attending Catholic high school on schooling outcomes, and (ii) the number of children on female labour supply. |
Keywords: | Instrumental variables; plausibly exogenous; exclusion restriction |
JEL: | C18 C26 J20 |
Date: | 2017–10–13 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20170096&r=ecm |
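The core of the plausibly exogenous method is easy to sketch: if the instrument z may affect the outcome directly with effect delta, re-run the IV estimation on the adjusted outcome y - delta*z for every delta in a plausible range and report the resulting range of treatment-effect estimates. The simulated design and the delta range below are our own assumptions, not the authors' applications.

```python
# Hedged sketch of the "plausibly exogenous" sensitivity analysis
# (in the spirit of Conley, Hansen and Rossi, 2012).
import numpy as np

rng = np.random.default_rng(7)
n, beta_true, delta_true = 5000, 1.0, 0.2
z = rng.standard_normal(n)                    # instrument
u = rng.standard_normal(n)                    # unobserved confounder
d = 0.8 * z + u + rng.standard_normal(n)      # treatment
y = beta_true * d + delta_true * z + u + rng.standard_normal(n)

def iv_estimate(outcome):
    """Just-identified IV estimate: cov(z, outcome) / cov(z, d)."""
    return np.cov(z, outcome)[0, 1] / np.cov(z, d)[0, 1]

print(f"naive IV (delta assumed 0): {iv_estimate(y):.3f}")
deltas = np.linspace(0.0, 0.4, 9)             # plausible direct effects
bounds = [iv_estimate(y - delta * z) for delta in deltas]
print(f"beta over plausible deltas: [{min(bounds):.3f}, {max(bounds):.3f}]")
```

The paper's contribution is to discipline the choice of the delta range by estimating the direct effect in a subsample where the instrument cannot operate through the treatment, rather than positing the range a priori.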
By: | Damien Rousselière |
Abstract: | This paper proposes a new estimation model to capture the complex effect of age on organization survival. Testing various theoretical propositions on organizational mortality, we study the survival of French agricultural cooperatives in comparison with other firms with which they compete. The relationship between age and mortality in organizations is analyzed using a Bayesian generalized discrete-time semi-parametric hazard model with correlated random effects, incorporating unobserved heterogeneity and isolating the various effects of time (a minimal discrete-time hazard sketch follows this entry). This analysis emphasizes the specificity of the temporal dynamics of cooperatives in relation to their special role in agriculture. |
Keywords: | Bayesian estimation, Bayesian model selection, cooperatives, generalized additive model, survival analysis |
JEL: | C11 C41 Q13 L25 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:rae:wpaper:201708&r=ecm |
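The discrete-time hazard idea in its simplest form: expand each organization into one row per year survived ("person-period" data) and fit a logistic regression of the exit indicator on age. This is a toy frequentist stand-in for the paper's Bayesian semi-parametric model with random effects; the U-shaped hazard and all parameters are our own assumptions.

```python
# Hedged sketch: discrete-time hazard via person-period logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n_org, max_age = 3000, 30

# Assumed true hazard: U-shaped in age (liabilities of newness and old age).
def hazard(age):
    return 0.02 + 0.08 * np.exp(-age / 3.0) + 0.00015 * age**2

rows_age, rows_exit = [], []
for _ in range(n_org):
    for age in range(max_age):
        died = rng.random() < hazard(age)
        rows_age.append(age)
        rows_exit.append(int(died))
        if died:
            break                              # no rows after exit

A = np.array(rows_age, dtype=float)
X = np.column_stack([A, A**2])                 # quadratic in age
model = LogisticRegression().fit(X, rows_exit)
for age in (0, 5, 15, 29):
    p = model.predict_proba([[age, age**2]])[0, 1]
    print(f"age {age:2d}: fitted hazard {p:.3f} (true {hazard(age):.3f})")
```

The paper replaces the quadratic with a semi-parametric function of time and adds correlated random effects for unobserved heterogeneity, but the person-period construction is the same.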
By: | Daniel Kosiorowski; Dominik Mielczarek; Jerzy P. Rydlewski |
Abstract: | In this article, a new nonparametric and robust method of forecasting hierarchical functional time series is presented. The method is compared with Hyndman and Shang's method with respect to unbiasedness, effectiveness, robustness, and computational complexity. Taking into account the results of analytical, simulation and empirical studies, we conclude that our proposal is superior to that of Hyndman and Shang with respect to several statistical criteria, especially robustness and computational complexity. The empirical example studied relates to the management of an Internet service divided into four subservices. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.02669&r=ecm |
By: | Valeria Bignozzi; Claudio Macci; Lea Petrella |
Abstract: | Due to their heterogeneity, insurance risks can be properly described as a mixture of different fixed models, where the weights assigned to each model may be estimated empirically from a sample of available data. If a risk measure is evaluated on the estimated mixture instead of the (unknown) true one, then it is important to investigate the error committed. In this paper we study the asymptotic behaviour of estimated risk measures, as the data sample size tends to infinity, in the spirit of large deviations theory. We obtain large deviation results by applying the contraction principle, and the rate functions are given by a suitable variational formula; explicit expressions are available for mixtures of two models. Finally, our results are applied to the most common risk measures, namely the quantiles, the Expected Shortfall and the entropic risk measure (a sketch of the estimation error for Expected Shortfall follows this entry). |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.03252&r=ecm |
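The object of study can be sketched numerically: evaluate Expected Shortfall on a two-component mixture whose weight is estimated from data, and compare it with the ES of the true mixture — the estimation error whose tail behaviour the paper characterizes. The mixture components, the ES level, and the sample sizes are our own assumptions.

```python
# Hedged sketch: ES of an estimated vs. true two-component mixture.
import numpy as np

rng = np.random.default_rng(9)
p_true, alpha = 0.7, 0.99        # true weight on component 1; ES level

def es_of_mixture(p, n_mc=1_000_000):
    """Monte Carlo ES_alpha of the loss p*N(0,1) + (1-p)*N(0,3)."""
    comp1 = rng.random(n_mc) < p
    losses = np.where(comp1, rng.standard_normal(n_mc),
                      3.0 * rng.standard_normal(n_mc))
    var = np.quantile(losses, alpha)            # Value-at-Risk
    return losses[losses >= var].mean()         # tail average beyond VaR

es_true = es_of_mixture(p_true)
for n in (100, 1000, 10000):
    labels = rng.random(n) < p_true             # sample of component labels
    p_hat = labels.mean()                       # empirical mixture weight
    print(f"n={n:5d}: p_hat={p_hat:.3f}, "
          f"ES error = {es_of_mixture(p_hat) - es_true:+.3f}")
```

The paper's large deviation results quantify how quickly the probability of a sizeable ES error of this kind decays as the sample size n grows.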
By: | Kasun Bandara; Christoph Bergmeir; Slawek Smyl |
Abstract: | With the advent of Big Data, databases containing large quantities of similar time series are nowadays available in many applications. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potential for producing accurate forecasts untapped. Recurrent neural networks, and in particular Long Short-Term Memory (LSTM) networks, have recently proven able to outperform state-of-the-art univariate time series forecasting methods in this context, when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degrade, so that on the way towards fully automatic forecasting methods in this space, a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model using LSTMs on subgroups of similar time series, which are identified by time series clustering techniques (a schematic cluster-then-forecast sketch follows this entry). The proposed methodology consistently outperforms the baseline LSTM model, and it achieves competitive results on benchmarking datasets, in particular outperforming all other methods on the CIF2016 dataset. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.03222&r=ecm |
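The cluster-then-forecast idea can be sketched schematically: group similar series by simple features, then train one model per cluster instead of one global model. For brevity the per-cluster learner here is a linear autoregression rather than an LSTM, and the clustering features and synthetic series are our own assumptions — this shows the pipeline shape, not the paper's network.

```python
# Hedged sketch: cluster series, then fit one model per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
T, lag = 120, 4

# Two latent regimes: trending series and seasonal series.
series = [np.cumsum(0.5 + rng.standard_normal(T)) for _ in range(30)] + \
         [10 * np.sin(np.arange(T) * 2 * np.pi / 12) +
          rng.standard_normal(T) for _ in range(30)]

# Cluster on crude features: mean first difference, lag-12 autocorrelation.
def features(s):
    d = np.diff(s)
    return [d.mean(), np.corrcoef(s[:-12], s[12:])[0, 1]]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.array([features(s) for s in series]))

# One autoregressive model per cluster, trained on pooled lag windows.
for k in (0, 1):
    X, y = [], []
    for s, lab in zip(series, labels):
        if lab != k:
            continue
        for t in range(lag, T):
            X.append(s[t - lag:t])
            y.append(s[t])
    model = LinearRegression().fit(X, y)
    print(f"cluster {k}: {int((labels == k).sum())} series, "
          f"in-sample R^2 = {model.score(X, y):.3f}")
```

Replacing `LinearRegression` with an LSTM trained on the same pooled lag windows of each cluster recovers the structure of the paper's method.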
By: | Justin Sirignano; Konstantinos Spiliopoulos |
Abstract: | Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data; the parameter updates occur in continuous time and satisfy a stochastic differential equation (a discretized sketch follows this entry). This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L$^p$ convergence rate is also proven for the algorithm in the strongly convex case. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.04273&r=ecm |
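An Euler discretization conveys the flavour of continuous-time SGD: learn the drift parameter a of the SDE dX = aX dt + sigma dW from a simulated data stream, updating the estimate with each incoming increment. The learning-rate schedule and the model are our own illustrative assumptions, not the paper's general algorithm.

```python
# Hedged sketch: Euler-discretized continuous-time SGD for an SDE drift.
import numpy as np

rng = np.random.default_rng(11)
a_true, sigma = -0.5, 0.3
dt, n = 0.01, 200_000

x, a_hat = 1.0, 0.0
for t in range(1, n):
    dW = np.sqrt(dt) * rng.standard_normal()
    dx = a_true * x * dt + sigma * dW      # next increment of the data stream
    lr = 1.0 / (1.0 + 0.01 * t * dt)       # decaying learning rate
    # Descent step: follow the noisy gradient of the squared model
    # residual (dx - a_hat * x * dt), scaled by the time step.
    a_hat += lr * x * (dx - a_hat * x * dt)
    x += dx
print(f"a_hat = {a_hat:.3f} (true {a_true})")
```

As dt shrinks, the update rule becomes a stochastic differential equation for the parameter path, which is the object whose convergence rate the paper analyzes.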
By: | M. Hashem Pesaran; Qiankun Zhou |
Abstract: | This paper provides a new comparative analysis of pooled least squares and fixed effects estimators of the slope coefficients in panel data models when the time dimension (T) is fixed while the cross section dimension (N) is allowed to increase without bound (a sketch contrasting the two estimators follows this entry). The individual effects are allowed to be correlated with the regressors, and the comparison is carried out in terms of an exponent coefficient, delta, which measures the degree of pervasiveness of the fixed effects in the panel. The use of delta allows us to distinguish the poolability of small-N panels with large T from that of large-N panels with small T. It is shown that the pooled estimator remains consistent so long as delta |
Keywords: | Short panel, Fixed effects estimator, Pooled estimator, Pretest estimator, Efficiency, Diagnostic test |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:lsu:lsuwpp:2017-13&r=ecm |
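The two estimators being compared can be sketched side by side: pooled OLS ignores the individual effects, while the fixed effects (within) estimator removes them by demeaning each unit. When the effects are pervasive and correlated with the regressor, as in the toy design below (our own assumption), pooled OLS is biased while FE is not — the paper's exponent delta governs how pervasive such effects can be before pooling breaks down.

```python
# Hedged sketch: pooled OLS vs. the fixed effects (within) estimator.
import numpy as np

rng = np.random.default_rng(12)
N, T, beta = 2000, 5, 1.0                    # large N, small T

alpha = rng.standard_normal(N)               # individual effects
x = 0.5 * alpha[:, None] + rng.standard_normal((N, T))  # correlated with alpha
y = alpha[:, None] + beta * x + rng.standard_normal((N, T))

# Pooled OLS: regress stacked y on stacked x (common intercept only).
xs, ys = x.ravel() - x.mean(), y.ravel() - y.mean()
beta_pooled = xs @ ys / (xs @ xs)

# Fixed effects (within): demean y and x unit by unit, then OLS.
xw = (x - x.mean(axis=1, keepdims=True)).ravel()
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
beta_fe = xw @ yw / (xw @ xw)

print(f"pooled OLS: {beta_pooled:.3f} (biased here), FE: {beta_fe:.3f}")
```

With effects present in every unit, the pooled estimate is pulled away from beta = 1 while the within estimate is not; when the effects are sparse enough (small delta in the paper's terms), pooling regains consistency and its efficiency advantage.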