nep-ecm: New Economics Papers on Econometrics
Issue of 2018‒07‒30: twenty-one papers, chosen by Sune Karlsson (Örebro universitet)

1.  By: Oliver Linton (Institute for Fiscal Studies and University of Cambridge); Ji-Liang Shiu (Institute for Fiscal Studies) Abstract: This paper develops the identification and estimation of nonlinear semi-parametric panel data models with mismeasured variables and their corresponding average partial effects using only three periods of data. The past observables are used as instruments to control the measurement error problem, and the time averages of perfectly observed variables are used to restrict the unobserved individual-specific effect by a correlated random effects specification. The proposed approach relies on the Fourier transforms of several conditional expectations of observable variables. We then estimate the model via the semi-parametric sieve Generalized Method of Moments estimator. The finite-sample properties of the estimator are investigated through Monte Carlo simulations. We use our method to estimate the effect of the wage rate on labor supply using PSID data. Keywords: Correlated random effects, Measurement error, Nonlinear panel data models, Semi-parametric identification Date: 2018–01–10 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:09/18&r=ecm
2.  By: Kleijnen, J.P.C. (Tilburg University, Center For Economic Research); van Beers, W.C.M. (Tilburg University, Center For Economic Research) Abstract: Kriging or Gaussian process (GP) modeling is an interpolation method that assumes the outputs (responses) are more correlated, the closer the inputs (explanatory or independent variables) are. A GP has unknown (hyper)parameters that must be estimated; the standard estimation method uses the "maximum likelihood" criterion. However, big data make it hard to compute the estimates of these GP parameters, and the resulting Kriging predictor and the variance of this predictor. To solve this problem, some authors select a relatively small subset from the big set of previously observed "old" data; their method is sequential and depends on the variance of the Kriging predictor. The resulting designs turn out to be "local"; i.e., most design points are concentrated around the point to be predicted. We develop three alternative one-shot methods that do not depend on GP parameters: (i) select a small subset such that this subset still covers the original input space, albeit more coarsely; (ii) select a subset with relatively many, but not all, combinations close to the new combination that is to be predicted; and (iii) select a subset with the nearest neighbors (NNs) of this new combination. To evaluate these designs, we compare their squared prediction errors in several numerical (Monte Carlo) experiments. These experiments show that our NN design is a viable alternative to the more sophisticated sequential designs. Keywords: kriging; Gaussian process; big data; experimental design; nearest neighbor JEL: C0 C1 C9 C15 C44 Date: 2018 URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:b0504930-f518-44f7-908c-6a147cef26bd&r=ecm
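The nearest-neighbor design in item 2 reduces to a few lines: pick the k old input combinations closest to the point to be predicted, then apply an ordinary Kriging predictor to that subset. The sketch below is a minimal illustration under assumptions made here for concreteness, not the authors' implementation: the Gaussian correlation function, the parameter theta, and the small nugget term are all my choices (in practice theta would be estimated, e.g. by maximum likelihood, on the selected subset).

```python
import numpy as np

def nn_subset(X_old, y_old, x_new, k):
    """Select the k nearest neighbors of x_new from the big 'old' data set."""
    d = np.linalg.norm(X_old - x_new, axis=1)
    idx = np.argsort(d)[:k]
    return X_old[idx], y_old[idx]

def kriging_predict(X, y, x_new, theta=1.0):
    """Ordinary Kriging predictor with an (assumed) Gaussian correlation
    function exp(-theta * squared distance)."""
    def corr(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=2)
        return np.exp(-theta * d2)

    R = corr(X, X) + 1e-8 * np.eye(len(X))    # small nugget for stability
    r = corr(X, x_new[None, :])[:, 0]
    ones = np.ones(len(X))
    Rinv = np.linalg.inv(R)
    mu = ones @ Rinv @ y / (ones @ Rinv @ ones)  # GLS estimate of the mean
    return mu + r @ Rinv @ (y - mu * ones)       # BLUP given the subset
```

Selection happens once (one-shot) and does not involve any GP parameter, which is the point of the design: only the final prediction step needs theta.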
3.  By: Chaohua Dong (Institute for Fiscal Studies and Southwestern University of Finance and Economics, China); Oliver Linton (Institute for Fiscal Studies and University of Cambridge) Abstract: This paper considers nonparametric additive models that have a deterministic time trend and both stationary and integrated variables as components. The diverse nature of the regressors caters for applications in a variety of settings. In addition, we extend the analysis to allow the stationary regressor to be instead locally stationary, and we allow the models to include a linear form of the integrated variable. Heteroscedasticity is allowed for in all models. We propose an estimation strategy based on orthogonal series expansion that takes account of the different types of stationarity/nonstationarity possessed by each covariate. We establish pointwise asymptotic distribution theory jointly for all estimators of unknown functions and also show the conventional optimal convergence rates jointly in the L2 sense. In spite of the entanglement of different kinds of regressors, we can separate out the distribution theory for each estimator. We provide Monte Carlo simulations that establish the favourable properties of our procedures in moderate sized samples. Finally, we apply our techniques to the study of a pairs trading strategy. Keywords: Additive nonparametric models, deterministic trend, pairs trading, series estimator, stationary and locally stationary processes, unit root process Date: 2017–12–20 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:59/17&r=ecm
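The orthogonal series expansion underlying item 3 can be illustrated in its simplest form: approximate an unknown regression function by a truncated expansion in basis functions and estimate the coefficients by least squares. This toy sketch uses an orthonormalized polynomial basis for a single scalar regressor; the paper's actual estimator handles trends, integrated and locally stationary covariates jointly, which this sketch does not attempt.

```python
import numpy as np

def series_regression(x, y, K):
    """Series estimator sketch: regress y on the first K polynomial basis
    functions of a scalar covariate, orthonormalized via QR for stability."""
    B = np.vander(x, K, increasing=True)  # basis: 1, x, x^2, ..., x^(K-1)
    Q, _ = np.linalg.qr(B)                # orthonormal columns
    coef = Q.T @ y                        # least-squares coefficients
    return Q @ coef                       # fitted values of the function
```

The truncation parameter K plays the role of a smoothing parameter: larger K reduces approximation bias at the cost of variance.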
4.  By: Kazuhiko Hayakawa; Shuichi Nagata; Takashi Yamagata Abstract: In this paper, we propose a robust approach against heteroskedasticity, error serial correlation and slope heterogeneity for large linear panel data models. First, we establish the asymptotic validity of the Wald test based on the widely used panel heteroskedasticity and autocorrelation consistent (HAC) variance estimator of the pooled estimator under random coefficient models. Then, we show that a similar result holds with the proposed bias-corrected principal component-based estimators for models with unobserved interactive effects. Our new theoretical result justifies the use of the same slope estimator and the variance estimator, both for slope homogeneous and heterogeneous models. This robust approach can significantly reduce the model selection uncertainty for applied researchers. In addition, we propose a novel test for the correlation and dependence of the random coefficient with covariates. The test is of great importance, since the widely used estimators and/or its variance estimators can become inconsistent when the variation of coefficients depends on covariates, in general. The finite sample evidence supports the usefulness and reliability of our approach. Date: 2018–07 URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1037&r=ecm
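As a point of reference for item 4, the "panel HAC" idea can be illustrated with the standard cluster-robust sandwich variance for the pooled OLS estimator, which is robust to heteroskedasticity and arbitrary within-unit serial correlation. This is a generic textbook sketch, not the authors' bias-corrected interactive-effects estimator; the function and variable names are mine.

```python
import numpy as np

def pooled_ols_cluster_se(X, y, unit):
    """Pooled OLS with a cluster-robust (panel HAC-type) sandwich variance:
    residuals are aggregated within each unit, so arbitrary within-unit
    serial correlation and heteroskedasticity are accommodated."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(unit):
        s = X[unit == g].T @ u[unit == g]  # score summed within cluster
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv           # sandwich variance estimator
    return beta, np.sqrt(np.diag(V))
```

The paper's point is that Wald tests built on this kind of variance estimator remain asymptotically valid under random-coefficient (slope-heterogeneous) models, so the same routine serves both cases.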
5.  By: Richard Ashley (Virginia Polytechnic Institute and State University); Christopher F. Parmeter (University of Miami) Abstract: This note corrects an error -- pointed out in Kiviet (2016) -- in the Ashley and Parmeter (2015a) derivation of the asymptotic distribution of the OLS parameter estimator in the usual k-variate multiple regression model, but where some or all of the explanatory variables are endogenous. This sampling distribution lies at the heart of the Ashley and Parmeter (2015a) sensitivity analysis of a hypothesis test rejection p-value with respect to potential endogeneity in the explanatory variables in such regression models, so this correction is of practical importance. We also discuss the settings in which Kiviet's way of displaying univariate sensitivity analysis results is an improvement (and in what settings it is not), and we provide new analytic results for our sensitivity analysis in an important special case. Publication Status: Submitted JEL: C2 C15 Date: 2018–07–16 URL: http://d.repec.org/n?u=RePEc:mia:wpaper:2018-01&r=ecm
6.  By: Papa Ousmane Cissé (Centre d'Economie de la Sorbonne - Université Paris 1 Panthéon-Sorbonne, LERSTAD - Université Gaston Berger (Sénégal), LMM, IRA - Université Le Mans); Dominique Guégan (Université Paris 1 Panthéon-Sorbonne, Centre d'Economie de la Sorbonne, LabEx ReFi and Ca' Foscari University of Venezia, IPAG Business school); Abdou Kâ Diongue (LERSTAD - Université Gaston Berger (Sénégal)) Abstract: In this paper, we discuss the methods of estimating the parameters of the Seasonal FISSAR (Fractionally Integrated Separable Spatial Autoregressive with seasonality) model. First we implement the regression method based on the log-periodogram and the classical Whittle method for estimating the memory parameters. To estimate the model's parameters simultaneously (innovation parameters and memory parameters), the maximum likelihood method and the Whittle method based on MCMC simulation are considered. We investigate the consistency and the asymptotic normality of the estimators by simulation. Keywords: Seasonal FISSAR; long memory; regression method; Whittle method; MLE method JEL: C21 C51 C52 Date: 2018–07 URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:18018&r=ecm
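For context on item 6, the log-periodogram regression it builds on is the classical GPH estimator: regress the log periodogram ordinates on a function of the Fourier frequencies and read off the memory parameter d as the slope. The sketch below covers only the univariate time-series case with a conventional bandwidth m = sqrt(n); the paper's seasonal, spatial extension is not reproduced here.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Classical log-periodogram (GPH) estimate of the memory parameter d.

    Regresses log I(lambda_j) on -2*log(2*sin(lambda_j/2)) over the first
    m Fourier frequencies; the OLS slope estimates d."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                      # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n  # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)     # periodogram ordinates
    reg = -2 * np.log(2 * np.sin(lam / 2))     # regressor
    reg_c = reg - reg.mean()
    return reg_c @ np.log(I) / (reg_c @ reg_c) # OLS slope = d-hat
```

For short-memory series the estimate should hover near 0; for a unit-root process it approaches 1.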
7.  By: Marco Giesselmann; Alexander Schmidt-Catran Abstract: An interaction in a fixed effects (FE) regression is usually specified by demeaning the product term. However, this strategy does not yield a genuine within estimator. Instead, an estimator is produced that reflects unit-level differences of interacted variables whose moderators vary within units. This is desirable if the interaction of one unit-specific and one time-dependent variable is specified in FE, but it may yield problematic results if both interacted variables vary within units. Then, as algebraic transformations show, the FE interaction estimator picks up unit-specific effect heterogeneity of both variables. Accordingly, Monte Carlo experiments reveal that it is biased if one of the interacted variables is correlated with an unobserved unit-specific moderator of the other interacted variable. In light of these insights, we propose that a within interaction of two time-dependent variables be estimated by first demeaning each variable and then demeaning the product term. This “double-demeaned” estimator is not subject to bias caused by unobserved effect heterogeneity. It is, however, less efficient than standard FE and only works with T>2. Keywords: panel data, fixed effects, interaction, quadratic terms, polynomials, within estimator JEL: C33 C51 Date: 2018 URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1748&r=ecm
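The proposal in item 7 is easy to state algorithmically: within-demean each interacted variable first, then within-demean their product, and run pooled OLS on the transformed data. Below is a minimal sketch of that transformation; the function and column names are mine and no standard errors are computed.

```python
import numpy as np
import pandas as pd

def double_demeaned_interaction(df, y, x, z, unit):
    """'Double-demeaned' FE interaction sketch: demean x and z within units,
    then demean their product within units, and regress the within-
    transformed y on the transformed x, z and interaction."""
    g = df.groupby(unit)
    xt = df[x] - g[x].transform("mean")           # within-demeaned x
    zt = df[z] - g[z].transform("mean")           # within-demeaned z
    inter = xt * zt
    inter = inter - inter.groupby(df[unit]).transform("mean")  # demean product
    yt = df[y] - g[y].transform("mean")
    X = np.column_stack([xt, zt, inter])
    beta, *_ = np.linalg.lstsq(X, yt.to_numpy(), rcond=None)
    return dict(zip([x, z, f"{x}:{z}"], beta))
```

Note the order of operations: demeaning the raw product term (the usual FE practice the paper criticizes) is not the same as demeaning the product of the demeaned variables.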
8.  By: Shujie Ma (Institute for Fiscal Studies); Oliver Linton (Institute for Fiscal Studies and University of Cambridge); Jiti Gao (Institute for Fiscal Studies) Abstract: We propose an estimation methodology for a semiparametric quantile factor panel model. We provide tools for inference that are robust to the existence of moments and to the form of weak cross-sectional dependence in the idiosyncratic error term. We apply our method to CRSP daily data. Keywords: Dependence; Fama-French Model; Inference; Sieve Estimation Date: 2018–01–10 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:07/18&r=ecm
9.  By: Liang, Che-Yuan (Department of Economics) Abstract: We develop a method for distributional regression of joint multidimensional choice on nonlinear prices departing from a household model of labor supply that focuses on tax policy effects. Our distribution functions are derived under minimal theoretical assumptions and have a simple structure. We allow distribution-free estimation, collective decision-making, and identification based on tax reforms. In our empirical application on U.S. panel data from 1980 to 2006, we provide a deepened understanding of how the configuration of the tax system affects the distribution of transitions between combinations of spouse labor supply. We also quantify biases from commonly imposed restrictions. Keywords: household labor supply; nonlinear budget sets; distributional regression; collective choice; distribution-free estimation; tax reforms JEL: D11 H24 J22 Date: 2018–02–01 URL: http://d.repec.org/n?u=RePEc:hhs:uunewp:2018_002&r=ecm
10.  By: Huang, Na; Fryzlewicz, Piotr Abstract: We propose a “NOVEL Integration of the Sample and Thresholded covariance estimators” (NOVELIST) to estimate the large covariance (correlation) and precision matrix. NOVELIST performs shrinkage of the sample covariance (correlation) towards its thresholded version. The sample covariance (correlation) component is non-sparse and can be low-rank in high dimensions. The thresholded sample covariance (correlation) component is sparse, and its addition ensures the stable invertibility of NOVELIST. The benefits of the NOVELIST estimator include simplicity, ease of implementation, computational efficiency and the fact that its application avoids eigenanalysis. We obtain an explicit convergence rate in the operator norm over a large class of covariance (correlation) matrices when the dimension p and the sample size n satisfy log p/n → 0, and its improved version when p/n → 0. In empirical comparisons with several popular estimators, the NOVELIST estimator performs well in estimating covariance and precision matrices over a wide range of models and sparsity classes. Real data applications are presented. Keywords: covariance regularisation; high-dimensional covariance; long memory; non-sparse modelling; singular sample covariance; high dimensionality JEL: C1 Date: 2018–06–07 URL: http://d.repec.org/n?u=RePEc:ehl:lserod:89055&r=ecm
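Item 10's NOVELIST construction is a convex combination of the sample correlation matrix and its thresholded version, rescaled back to the covariance scale. A minimal sketch with soft thresholding and fixed tuning constants delta and lam (the paper discusses data-driven choices of these, which are omitted here):

```python
import numpy as np

def novelist(X, delta=0.5, lam=0.2):
    """NOVELIST-style sketch: shrink the sample correlation matrix towards
    its soft-thresholded version, then rescale back to a covariance.
    delta is the shrinkage weight, lam the threshold level."""
    S = np.cov(X, rowvar=False)
    sd = np.sqrt(np.diag(S))
    R = S / np.outer(sd, sd)                         # sample correlation
    T = np.sign(R) * np.maximum(np.abs(R) - lam, 0)  # soft threshold
    np.fill_diagonal(T, 1.0)                         # keep unit diagonal
    R_nov = (1 - delta) * R + delta * T              # shrink towards target
    return R_nov * np.outer(sd, sd)                  # back to covariance scale
```

The combination keeps the diagonal (the sample variances) intact while shrinking every off-diagonal entry towards zero, which is what stabilizes inversion in high dimensions.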
11.  By: Blomquist, Sören (Department of Economics); Newey, Whitney K (Department of Economics, M.I.T.) Abstract: Bunching estimators were developed and extended by Saez (2010), Chetty et al. (2011) and Kleven and Waseem (2013). Using this method one can get an estimate of the taxable income elasticity from the bunching pattern around a kink point. The bunching estimator has become popular, with a large number of papers applying the method. In this paper, we show that the bunching estimator cannot identify the taxable income elasticity when the functional form of the distribution of preference heterogeneity is unknown. We find that an observed distribution of taxable income around a kink point or over the whole budget set can be consistent with any positive taxable income elasticity if the distribution of heterogeneity is unrestricted. If one is willing to assume restrictions on the heterogeneity density some information about the taxable income elasticity can be obtained. We give bounds on the taxable income elasticity based on monotonicity of the heterogeneity density and apply these bounds in an example. We also consider identification from budget set variation. We find that kinks alone may not be informative even when budget sets vary. However, if the taxable income specification is restricted to be of the parametric isoelastic form assumed in Saez (2010) the taxable income elasticity can be well identified from variation among budget sets. The key condition is that the tax rates at chosen taxable income differ across budget sets for some individuals. Keywords: Identification; bunching; taxable income elasticity JEL: C13 H20 H21 H24 Date: 2018–03–01 URL: http://d.repec.org/n?u=RePEc:hhs:uunewp:2018_004&r=ecm
12.  By: Richard Blundell (Institute for Fiscal Studies and IFS and UCL); Dennis Kristensen (Institute for Fiscal Studies and University College London); Rosa Matzkin (Institute for Fiscal Studies and UCLA) Abstract: New nonparametric methods that identify and estimate counterfactuals for individuals, when each is characterized by a vector of unobserved characteristics, are developed and applied to estimate systems of individual consumer demand and welfare measures. The unobserved characteristics are allowed to enter in unrestricted ways. Identification is delivered through two fundamental assumptions: First, the system is invertible in the vector of unobserved heterogeneity. Second, there exist external, individual-specific, covariates that are related to the unobserved heterogeneity and do not enter directly into the system of interest. The observed external variables can be either discrete or continuously distributed. Estimators based on the identifying restrictions are developed and their asymptotic properties derived. Using UK micro data on consumer demand, we apply the methods to estimate individual demand counterfactuals subject to revealed preference inequalities. Keywords: simultaneous equations, nonseparable models, constructive identification, nonparametric methods, consumer behaviour, structural demand functions, revealed preference, bounds JEL: C20 D12 Date: 2017–12–30 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:60/17&r=ecm
13.  By: Papa Cissé (CES - Centre d'économie de la Sorbonne - CNRS - Centre National de la Recherche Scientifique - UP1 - Université Panthéon-Sorbonne, LERSTAD - laboratoire d'Etudes et de recherches en Statistiques et Développement - Université Gaston Bergé Sénégal, LMM - IRA); Dominique Guegan (UP1 - Université Panthéon-Sorbonne, CES - Centre d'économie de la Sorbonne - CNRS - Centre National de la Recherche Scientifique - UP1 - Université Panthéon-Sorbonne, Labex ReFi - UP1 - Université Panthéon-Sorbonne, IPAG - Business School, Universita di Venezia - Ca' Foscari); Abdou Kâ Diongue (LERSTAD - laboratoire d'Etudes et de recherches en Statistiques et Développement - Université Gaston Bergé Sénégal) Abstract: In this paper, we discuss the methods of estimating the parameters of the Seasonal FISSAR (Fractionally Integrated Separable Spatial Autoregressive with seasonality) model. First we implement the regression method based on the log-periodogram and the classical Whittle method for estimating the memory parameters. To estimate the model's parameters simultaneously (innovation parameters and memory parameters), the maximum likelihood method and the Whittle method based on MCMC simulation are considered. We investigate the consistency and the asymptotic normality of the estimators by simulation. Keywords: Seasonal FISSAR; long memory; regression method; Whittle method; MLE method Date: 2018–07 URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01832115&r=ecm
14.  By: Maximilian Podstawski; Thore Schlaak; Malte Rieth Abstract: We develop a vector autoregressive framework for combining the information in an external instrument with the information in the second moments of the data to identify latent monetary shocks in the United States. We show that the framework improves the identification of the structural model and allows testing the validity of instruments proposed in the literature. Using a valid instrument, we then document that surprise monetary contractions lead to a medium-sized significant decline in economic activity, that the contractionary effect is also present during the great moderation, and that the role of monetary shocks in driving real and financial fluctuations is small in low-volatility and large in high-volatility regimes. Keywords: Monetary policy, structural vector autoregressions, identification with external instruments, heteroskedasticity, Markov switching JEL: E52 C32 E58 E32 Date: 2018 URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1749&r=ecm
15.  By: Lettau, Martin; Pelger, Markus Abstract: We develop an estimator for latent asset pricing factors that fits the time-series and cross-section of expected returns. Our estimator generalizes Principal Component Analysis (PCA) by including a penalty on the pricing error in expected returns. We show that our estimator strongly dominates PCA and finds weak factors with high Sharpe-ratios that PCA cannot detect. Studying a large number of characteristic-sorted portfolios, we find that five latent factors with economic meaning explain well the cross-section and time-series of returns. We show that out-of-sample the maximum Sharpe-ratio of our five factors is more than twice as large as with PCA, with significantly smaller pricing errors. Our factors are based on only a subset of the stock characteristics, implying that a significant amount of characteristic information is redundant. Keywords: Anomalies; Cross Section of Returns; expected returns; high-dimensional data; Latent Factors; PCA; Weak Factors JEL: C14 C52 C58 G12 Date: 2018–07 URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:13049&r=ecm
16.  By: Ahmed, Hanan (Tilburg University, Center For Economic Research); Einmahl, John (Tilburg University, Center For Economic Research) Abstract: Heavy tailed phenomena are naturally analyzed by extreme value statistics. A crucial step in such an analysis is the estimation of the extreme value index, which describes the tail heaviness of the underlying probability distribution. We consider the situation where we have, next to the n observations of interest, another n+m observations of one or more related variables, e.g., financial losses due to earthquakes and the related amounts of energy released, observed for a longer period than that of the losses. Based on such a data set, we present an adapted version of the Hill estimator that shows greatly improved behavior, and we establish the asymptotic normality of this estimator. For this adaptation the tail dependence between the variable of interest and the related variable(s) plays an important role. A simulation study confirms the substantially improved performance of our adapted estimator relative to the Hill estimator. We also present an application to the aforementioned earthquake losses. Keywords: asymptotic normality; heavy tail; Hill estimator; tail dependence; variance reduction JEL: C13 C14 Date: 2018 URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:78738894-06ad-409e-ba03-531c3308e118&r=ecm
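As background for item 16, the baseline Hill estimator that the paper adapts averages the log-excesses of the k largest order statistics over the (k+1)-th largest. The sketch below is the classical estimator only; the adapted version, which exploits the additional n+m observations of related variables, is not reproduced here.

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of the extreme value index (tail heaviness)
    based on the k largest order statistics."""
    xs = np.sort(x)[::-1]          # descending order statistics
    return np.log(xs[:k]).mean() - np.log(xs[k])
```

For a Pareto-type tail with P(X > x) ~ x^(-alpha), the estimator targets gamma = 1/alpha; the choice of k trades off bias (large k) against variance (small k).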
17.  By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Kaspar Wüthrich (Institute for Fiscal Studies); Yinchu Zhu (Institute for Fiscal Studies) Abstract: We extend conformal inference to general settings that allow for time series data. Our proposal is developed as a randomization method and accounts for potential serial dependence by including block structures in the permutation scheme. As a result, the proposed method retains the exact, model-free validity when the data are i.i.d. or more generally exchangeable, similar to usual conformal inference methods. When exchangeability fails, as is the case for common time series data, the proposed approach is approximately valid under weak assumptions on the conformity score. Keywords: Conformal inference, permutation and randomization, dependent data, groups Date: 2018–03–02 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:16/18&r=ecm
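The block structure in item 17's permutation scheme can be illustrated with the simplest block-preserving randomization group, cyclic shifts by whole blocks: compare the conformity score of the observation of interest with its value under every block shift. This is a schematic sketch assuming precomputed conformity scores; the procedure in the paper wraps a scheme of this kind inside full conformal prediction.

```python
import numpy as np

def block_pvalue(scores, block_size):
    """Randomization p-value sketch: the conformity score of the last
    observation is compared with its value under all cyclic shifts of the
    score sequence by whole blocks, which preserves serial dependence
    within each block."""
    n = len(scores)
    shifted_last = [np.roll(scores, s)[-1] for s in range(0, n, block_size)]
    return float(np.mean([t >= scores[-1] for t in shifted_last]))
```

With block_size = 1 this reduces to the usual exchangeability-based conformal p-value; larger blocks buy approximate validity under serial dependence at the cost of a coarser randomization distribution.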
18.  By: Michael Vogt (Institute for Fiscal Studies); Oliver Linton (Institute for Fiscal Studies and University of Cambridge) Abstract: We study a longitudinal data model with nonparametric regression functions that may vary across the observed subjects. In a wide range of applications, it is natural to assume that not every subject has a completely different regression function. We may rather suppose that the observed subjects can be grouped into a small number of classes whose members share the same regression curve. We develop a bandwidth-free clustering method to estimate the unknown group structure from the data. More specifically, we construct estimators of the unknown classes and their unknown number which are free of classical bandwidth or smoothing parameters. In the theoretical part of the paper, we analyze the statistical properties of our estimators. The technical analysis is complemented by a simulation study and an application to temperature anomaly data. Keywords: Clustering of nonparametric curves; nonparametric regression; multiscale statistics; longitudinal/panel data Date: 2018–01–10 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:08/18&r=ecm
19.  By: Lu, Xun (Hong Kong University of Science and Technology); Miao, Ke (School of Economics, Singapore Management University); Su, Liangjun (School of Economics, Singapore Management University) Abstract: In this paper we propose a jackknife method to determine the type of fixed effects in three-dimensional panel data models. We show that with probability approaching 1, the method can select the correct type of fixed effects in the presence of only weak serial or cross-sectional dependence among the error terms. In the presence of strong serial correlation, we propose a modified jackknife method and justify its selection consistency. Monte Carlo simulations demonstrate the excellent finite sample performance of our method. Applications to two datasets in macroeconomics and international trade reveal the usefulness of our method. Keywords: Consistency; Cross-validation; Fixed effect; Individual effect; Jackknife; Three-dimensional panel. JEL: C23 C33 C51 C52 F17 F47 Date: 2018–04–23 URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2018_010&r=ecm
20.  By: Andrew G. Chapple Abstract: This paper attempts to find different time periods since ISIL’s formation in 2013 in which the rate of ISIL attacks or their effectiveness in terms of fatalities differ. A Bayesian model is presented for marked point process data that separates the time scale into disjoint intervals as a function of the rate of the attacks and the average number of fatalities for each attack, which are the marks for the model. The model is endowed with priors to discourage intervals with few events and borrow strength among rates and intensities of adjacent intervals, and uses the reversible jump approach introduced by Green (1996) to allow the number of intervals to vary as a function of the rates and intensities of attacks. Application results show that the hazard of an ISIL attack has increased drastically since 6/8/2014, when ISIL took Mosul, and increased again after 2/23/16, which corresponds with major military intervention. Keywords: Marked Point Process, Bayesian Analysis, Reversible Jump, ISIL. JEL: C11 C22 Date: 2018–06–09 URL: http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2018_09&r=ecm
21.  By: Jinyong Hahn (Institute for Fiscal Studies); Jerry Hausman (Institute for Fiscal Studies and MIT); Josh Lustig (Institute for Fiscal Studies) Abstract: This paper proposes a specification test of the mixed logit models, by generalizing Hausman and McFadden's (1984) test. We generalize the test even further by considering a model developed by Berry, Levinsohn and Pakes (1995). Date: 2017–12–12 URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:58/17&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.