
on Econometrics 
By:  Lovcha, Yuliya; Pérez Laborda, Alejandro 
Abstract:  This paper shows that frequency-domain estimation of VAR models over a frequency band can be a good alternative to prefiltering the data when a low-frequency cycle contaminates some of the variables. As stressed in the econometric literature, prefiltering destroys the low-frequency range of the spectrum, leading to substantial bias in the responses of the variables to structural shocks. Our analysis shows that if the estimation is carried out in the frequency domain, but employs a sensible band that excludes (enough) contaminated frequencies from the likelihood, the resulting VAR estimates and impulse responses to structural shocks do not present significant bias. This result is robust to several specifications of the external cycle and data lengths. An empirical application studying the effect of technology shocks on hours worked is provided to illustrate the results. 
Keywords:  Impulse-response, filtering, identification, technology shocks 
JEL:  C32 C51 E32 E37 
Keywords:  Economic forecasting, econometric models, business cycles, economics 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:urv:wpaper:2072/290743&r=ecm 
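The mechanics of estimating over a frequency band can be illustrated with a minimal univariate sketch (an illustration, not the authors' code): a Whittle-type estimator for an AR(1) that keeps only periodogram ordinates inside a chosen band, so a contaminating low-frequency cycle is excluded from the likelihood rather than filtered out of the data. The grid search and the band endpoints below are illustrative assumptions.

```python
import numpy as np

def whittle_ar1_band(y, band):
    """Whittle estimate of an AR(1) coefficient using only periodogram
    ordinates whose frequency (in radians) lies inside band = (lo, hi)."""
    T = len(y)
    freqs = 2 * np.pi * np.arange(1, T // 2 + 1) / T
    pgram = np.abs(np.fft.fft(y)[1:T // 2 + 1]) ** 2 / (2 * np.pi * T)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    w, I_w = freqs[keep], pgram[keep]
    best_phi, best_obj = 0.0, np.inf
    for phi in np.linspace(-0.98, 0.98, 197):   # coarse illustrative grid
        # AR(1) spectral shape; the innovation variance is profiled out
        g = 1.0 / (2 * np.pi * np.abs(1.0 - phi * np.exp(-1j * w)) ** 2)
        s2 = np.mean(I_w / g)
        obj = np.log(s2) + np.mean(np.log(g))   # concentrated Whittle criterion
        if obj < best_obj:
            best_phi, best_obj = phi, obj
    return best_phi
```

Excluding everything below 0.2 radians drops a slow deterministic cycle from the likelihood without touching the rest of the spectrum.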
By:  Mikio Ito; Akihiko Noda; Tatsuma Wada 
Abstract:  A non-Bayesian, generalized least squares (GLS)-based approach is formally proposed to estimate a class of time-varying AR parameter models. This approach has partly been used by Ito et al. (2014, 2016a,b) and proves very efficient because, unlike conventional methods, it does not require the Kalman filtering and smoothing procedures, yet yields a smoothed estimate identical to the Kalman-smoothed estimate. Unlike the maximum likelihood estimator, the possibility of the pile-up problem is shown to be small. In addition, this approach makes it possible to deal with stochastic volatility models and models with a time-dependent variance-covariance matrix. 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1707.06837&r=ecm 
By:  Rahul Mukherjee (Indian Institute of Management Calcutta) 
Abstract:  Recent years have witnessed a significant surge of interest in causal inference under a potential outcomes framework, with applications to such diverse fields as sociology, behavioral sciences, biomedical sciences, and so on. In a finite population setting, we consider causal inference on treatment contrasts from an experimental design using potential outcomes. Adopting an approach that integrates such causal inference with finite population sampling, this is done with reference to a general scheme for assigning experimental units to treatments, along with general linear unbiased estimators of the treatment means. The assignment scheme allows the possibility of randomization restrictions, such as stratification, and unequal replications. We examine how tools from finite population sampling can be employed to develop a unified theory for our general setup. As a major breakthrough, it is shown that unbiased estimation of the sampling variance of any treatment contrast estimator is possible under conditions milder than Neymannian strict additivity. The consequences of departure from such conditions are also touched upon. Our approach applies readily to the situation where the treatments have a general factorial structure. 
Keywords:  Finite population sampling, linear unbiased estimator, Neymannian strict additivity, variance estimation. 
JEL:  C13 C83 C90 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:sek:iacpro:4607283&r=ecm 
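For the special case of a completely randomized design with a difference-in-means estimator, the textbook Neyman analysis that the paper generalizes can be sketched as follows; the variance estimator below is unbiased under strict additivity (constant treatment effects) and conservative otherwise.

```python
import numpy as np

def neyman_estimates(y, w):
    """Difference-in-means effect estimate and the standard Neyman
    variance estimator s1^2/n1 + s0^2/n0 for a completely randomized
    experiment with binary assignment w."""
    y1, y0 = y[w == 1], y[w == 0]
    tau = y1.mean() - y0.mean()
    var = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return tau, var
```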
By:  Ryan T. Godwin & David E. Giles (Department of Economics, University of Manitoba) 
Abstract:  Recently, many papers have obtained analytic expressions for the biases of various maximum likelihood estimators, despite their lack of closed-form solution. These bias expressions have provided an attractive alternative to the bootstrap. Unless the bias function is “flat,” however, the expressions are being evaluated at the wrong point(s). We propose an “improved” analytic bias-adjusted estimator, in which the bias expression is evaluated at a more appropriate point (at the bias-adjusted estimator itself). Simulations illustrate that the improved analytic bias-adjusted estimator can eliminate significantly more bias than the simple estimator that is well established in the literature. 
Keywords:  bias reduction; maximum likelihood; nonlinear bias function 
JEL:  C13 C15 
Date:  2017–07–21 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:1702&r=ecm 
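The idea of evaluating the bias expression at the adjusted estimator itself can be illustrated with the exponential-rate MLE, whose bias function B(lam) = lam/(n-1) is known in closed form (the example is mine, not the authors'). Solving lam = lam_hat - B(lam) by fixed-point iteration gives lam_hat*(n-1)/n, which in this case is exactly unbiased, whereas the simple adjustment lam_hat - B(lam_hat) is not.

```python
import numpy as np

def mle_rate(x):
    """MLE of the exponential rate parameter."""
    return 1.0 / np.mean(x)

def bias(lam, n):
    """Analytic bias of the exponential-rate MLE: E[lam_hat] - lam = lam/(n-1)."""
    return lam / (n - 1)

def simple_adjust(lam_hat, n):
    """Conventional correction: evaluate the bias at the MLE itself."""
    return lam_hat - bias(lam_hat, n)

def improved_adjust(lam_hat, n, tol=1e-12, max_iter=100):
    """Improved correction: solve lam = lam_hat - bias(lam, n) by fixed point."""
    lam = lam_hat
    for _ in range(max_iter):
        new = lam_hat - bias(lam, n)
        if abs(new - lam) < tol:
            return new
        lam = new
    return lam
```

For this model the fixed point has the closed form lam_hat*(n-1)/n, so the iteration can be checked against it directly.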
By:  Catherine Hausman; David S. Rapson 
Abstract:  Recent empirical work in several economic fields, particularly environmental and energy economics, has adapted the regression discontinuity framework to applications where time is the running variable and treatment occurs at the moment of the discontinuity. In this guide for practitioners, we discuss several features of this "Regression Discontinuity in Time" framework that differ from the more standard cross-sectional RD. First, many applications (particularly in environmental economics) lack cross-sectional variation and are estimated using observations far from the cutoff. This is in stark contrast to a cross-sectional RD, which is conceptualized for an estimation bandwidth going to zero even as the sample size increases. Second, estimates may be biased if the time-series properties of the data are ignored, for instance in the presence of an autoregressive process. Finally, tests for sorting or bunching near the discontinuity are often irrelevant, making the methodology closer to an event study than a regression discontinuity design. Based on these features and motivated by hypothetical examples using air quality data, we offer suggestions for the empirical researcher wishing to use the RD in time design. 
JEL:  C14 C21 C22 Q53 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:23602&r=ecm 
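A minimal RDiT regression, a treatment dummy plus separate linear trends on each side of the cutoff, can be sketched as below (my illustration, with hypothetical simulated data). Serially correlated errors, as the abstract warns, matter for inference on the jump even when the point estimate remains sensible.

```python
import numpy as np

def rdit_jump(y, t, cutoff):
    """Estimate the discontinuity at `cutoff` by OLS of y on a treatment
    dummy and separate linear time trends on each side of the cutoff."""
    d = (t >= cutoff).astype(float)
    Z = np.column_stack([np.ones_like(t, dtype=float), d,
                         t - cutoff, d * (t - cutoff)])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b[1]   # coefficient on the dummy = estimated jump
```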
By:  Castagnetti, Carolina; Rosti, Luisa; Töpfer, Marina 
Abstract:  So far, little work has been done on directly estimating differences of wage gaps. Studies estimating pay differentials generally compare them across different subsamples. This comparison does not allow one to conduct inference or, in the case of decompositions, to compare the respective decomposition components across subsamples. We propose an extension of an Oaxaca-Blinder type decomposition, based on the omitted variable bias formula, to directly estimate the change in pay gaps across subsamples. The proposed method can be made robust to the index-number problem of the standard Oaxaca-Blinder decomposition and to the indeterminacy problem of the intercept-shift approach. Using Italian micro data, we estimate the difference in the gender pay gap across time (2005 and 2014). Applying our proposed decomposition, we find that the convergence of the gender pay gap over time is driven only by the catching-up of women in terms of observable characteristics, while the impact of anti-discrimination legislation is found to be negligible. 
Keywords:  Pay gap, omitted variable bias formula, Oaxaca-Blinder decomposition 
JEL:  J7 J13 J31 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:hohdps:142017&r=ecm 
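The standard two-fold Oaxaca-Blinder decomposition whose index-number problem the abstract refers to can be sketched as follows (the paper's extension to differences of gaps across subsamples goes beyond this). Using group-B coefficients as the reference is precisely the arbitrary choice behind the index-number problem.

```python
import numpy as np

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

def oaxaca_blinder(Xa, ya, Xb, yb):
    """Two-fold decomposition of mean(ya) - mean(yb) with group-B
    coefficients as the reference price vector."""
    Xa1 = np.column_stack([np.ones(len(ya)), Xa])
    Xb1 = np.column_stack([np.ones(len(yb)), Xb])
    ba, bb = ols(Xa1, ya), ols(Xb1, yb)
    xa, xb = Xa1.mean(axis=0), Xb1.mean(axis=0)
    explained = (xa - xb) @ bb      # differences in characteristics
    unexplained = xa @ (ba - bb)    # differences in returns
    return explained, unexplained
```

With intercepts included, the two components sum exactly to the raw mean gap, an algebraic identity of OLS.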
By:  Jeremy T. Fox 
Abstract:  I prove that the joint distribution of random coefficients and additive errors is identified in a multinomial choice model. No restrictions are imposed on the support of the random coefficients and additive errors. The proof uses large support variation in choice-specific explanatory variables following Lewbel (2000) but does not rely on an identification-at-infinity technique in which the payoffs of all but two choices are set to minus infinity. 
JEL:  C25 L0 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:23621&r=ecm 
By:  Gloria Gheno (Free University of BozenBolzano) 
Abstract:  In economics it is very important to understand the mechanisms that determine the phenomena of interest. In marketing, for example, it is very important to analyze the factors that determine the future choices of the customer, so that it is possible to influence the sales of a product. In most economic applications, to do this the researcher assumes knowledge of the mathematical relations among the variables of interest, and in particular assumes these relations are linear. In this work I propose a new estimation method for mediation models which does not assume the form of the mathematical relations but instead aggregates many different equations. My new estimation method derives from Bauer's (2005) work on the semiparametric approach to SEM (structural equation models). To apply my estimation method, I propose new formulas to calculate the direct, indirect and total effects. I apply my method both to simulated data and to marketing data. 
Keywords:  direct effect, marketing, mediation, indirect effect, SEM, semiparametric approach, total effect 
JEL:  C40 C14 C15 
Date:  2017–05 
URL:  http://d.repec.org/n?u=RePEc:sek:iacpro:5007336&r=ecm 
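As a linear point of reference for the effects the paper generalizes, the classical product-of-coefficients mediation formulas can be sketched as follows; by the omitted variable bias formula, the total effect equals the slope of y on x alone as an exact in-sample identity.

```python
import numpy as np

def fit(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

def mediation_effects(x, m, y):
    """Linear mediation baseline: a from m ~ x; (c', b) from y ~ x + m.
    indirect = a*b, direct = c', total = direct + indirect."""
    n = len(x)
    one = np.ones(n)
    a = fit(np.column_stack([one, x]), m)[1]
    cb = fit(np.column_stack([one, x, m]), y)
    direct, b = cb[1], cb[2]
    return direct, a * b, direct + a * b
```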
By:  Fabian Eckert (Yale University); Costas Arkolakis (Yale University) 
Abstract:  Combinatorial problems are prevalent in economics but the large dimensionality of potential solutions substantially limits the scope of their applications. We define and characterize a general class that we term combinatorial discrete choice problems and show that it incorporates existing problems in economics and engineering. We prove that intuitive sufficient conditions guarantee the existence of simple recursive procedures that can be used to identify the global maximum. We propose such an algorithm and show how it can be used to revisit problems whose computation was deemed infeasible before. We finally discuss results for a class of games characterized by these sufficient conditions. 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:red:sed017:249&r=ecm 
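A heavily simplified squeezing procedure of this recursive kind can be sketched for a hypothetical supermodular payoff, where each item's marginal value is monotone in the choice set, so lower and upper bounds on the optimal set can be tightened iteratively. The payoff function and the exact procedure below are illustrative assumptions, not the authors' algorithm.

```python
from itertools import combinations
import numpy as np

def payoff(S, v, q):
    """Hypothetical supermodular payoff: item values plus a pairwise bonus q."""
    k = len(S)
    return sum(v[i] for i in S) + 0.5 * q * k * (k - 1)

def solve_cdcp(n, v, q):
    """Squeeze lower/upper bounds (L, U) on the optimal set: the marginal
    value of item j given co-set S is v[j] + q*|S|, increasing in S, so a
    non-negative marginal at L fixes j in, a negative marginal at U fixes
    j out. Any items left undecided are brute-forced."""
    L, U = set(), set(range(n))
    changed = True
    while changed:
        changed = False
        for j in range(n):
            if j in L or j not in U:
                continue
            d_low = v[j] + q * len(L)            # marginal at the lower bound
            d_high = v[j] + q * (len(U) - 1)     # marginal at the upper bound
            if d_low >= 0:
                L.add(j); changed = True
            elif d_high < 0:
                U.discard(j); changed = True
    free = sorted(U - L)
    best_S, best_val = None, -np.inf
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            S = L | set(extra)
            val = payoff(S, v, q)
            if val > best_val:
                best_S, best_val = S, val
    return best_S, best_val
```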
By:  Zeya Zhang; Weizheng Chen; Hongfei Yan 
Abstract:  This paper proposes a method for stock prediction. For feature extraction, we extract features of stock-related news in addition to stock prices. We first select some seed words, based on experience, that are symbols of good news and bad news. We then propose an optimization method to calculate the positive polarity of all words. After that, we construct news features based on the positive polarity of their words. To account for sequential stock prices and continuous news effects, we propose a recurrent neural network model to help predict stock prices. Compared to an SVM classifier with price features, our proposed method achieves an improvement of over 5% in stock-prediction accuracy in experiments. 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1707.07585&r=ecm 
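The abstract does not spell out the optimization used to score word polarity, so the sketch below substitutes a simple label-propagation scheme over an assumed word-word co-occurrence matrix: seed words are clamped at +1/-1 and every other word takes the co-occurrence-weighted mean of its neighbours' scores. The scheme and all names here are hypothetical, not the paper's method.

```python
import numpy as np

def propagate_polarity(C, seeds, iters=200):
    """Hypothetical polarity propagation. C is a symmetric word-word
    co-occurrence count matrix; seeds maps word index -> fixed score."""
    n = C.shape[0]
    p = np.zeros(n)
    for word, score in seeds.items():
        p[word] = score
    row = C.sum(axis=1)
    for _ in range(iters):
        # neighbour-weighted average; isolated words keep their score
        p_new = np.where(row > 0, C @ p / np.maximum(row, 1), p)
        for word, score in seeds.items():    # clamp the seed words
            p_new[word] = score
        p = p_new
    return p
```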
By:  Luciano Lopez (University of Neuchatel, Institute of Economic Research, Rue AbramLouis Breguet 2, 2000 Neuchatel, Switzerland.); Sylvain Weber (University of Neuchatel, Institute of Economic Research, Rue AbramLouis Breguet 2, 2000 Neuchatel, Switzerland.) 
Abstract:  This article presents the Stata user-written command xtgcause, which implements a procedure proposed by Dumitrescu and Hurlin (2012) for testing Granger causality in panel datasets. With the development of large and long panel databases, theories surrounding panel causality evolve at a fast pace, and empirical researchers may sometimes find it difficult to run the most recent tests developed in the literature. In the case of Dumitrescu and Hurlin's (2012) test, it even appears to have been inaccurately implemented in some programs, and several publications in various fields report incorrect results. These papers may therefore have reached wrong conclusions, which might prove harmful, for instance, if policies are designed on such bases. This contribution constitutes an effort to put this empirical literature back on track. 
Keywords:  Stata, Granger causality, panel datasets. 
JEL:  C23 C87 
Date:  2017–02 
URL:  http://d.repec.org/n?u=RePEc:irn:wpaper:1703&r=ecm 
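The statistic behind the test can be sketched in a few lines (a minimal illustration, not the xtgcause implementation): unit-by-unit Wald statistics for the joint nullity of the lagged-x coefficients are averaged across the panel and standardized into Dumitrescu and Hurlin's Z-bar.

```python
import numpy as np

def wald_stat(y, x, K):
    """Individual Wald statistic for H0: x does not Granger-cause y (K lags)."""
    T = len(y)
    rows = T - K
    Y = y[K:]
    # unrestricted regressors: constant, K lags of y, K lags of x
    Zu = np.column_stack([np.ones(rows)] +
                         [y[K - j - 1:T - j - 1] for j in range(K)] +
                         [x[K - j - 1:T - j - 1] for j in range(K)])
    Zr = Zu[:, :1 + K]               # restricted: drop the x lags
    bu, *_ = np.linalg.lstsq(Zu, Y, rcond=None)
    br, *_ = np.linalg.lstsq(Zr, Y, rcond=None)
    ssr_u = np.sum((Y - Zu @ bu) ** 2)
    ssr_r = np.sum((Y - Zr @ br) ** 2)
    df = rows - 2 * K - 1
    return df * (ssr_r - ssr_u) / ssr_u

def dh_zbar(Y, X, K=1):
    """Z-bar statistic: Y, X are (N, T) arrays of panel time series."""
    N = Y.shape[0]
    W = np.array([wald_stat(Y[i], X[i], K) for i in range(N)])
    return np.sqrt(N / (2.0 * K)) * (W.mean() - K)
```

Under the null of no causality for any unit, Z-bar is approximately standard normal for large N and T.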