on Econometrics |
By: | Akanksha Negi |
Abstract: | This paper proposes a new class of M-estimators that are doubly weighted to address the twin problems of nonrandom treatment assignment and missing outcomes, both of which are common issues in the treatment effects literature. The proposed class is characterized by a 'robustness' property, which makes it resilient to parametric misspecification in either a conditional model of interest (for example, a mean or quantile function) or the two weighting functions. As leading applications, the paper discusses estimation of two specific causal parameters: average and quantile treatment effects (ATE and QTEs), which can be expressed as functions of the doubly weighted estimator, under misspecification of the framework's parametric components. With respect to the ATE, the paper shows that the proposed estimator is doubly robust even in the presence of missing outcomes. Finally, to demonstrate the estimator's viability in empirical settings, it is applied to Calonico and Smith's (2017) reconstructed sample from the National Supported Work training program. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.11485&r=all |
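A minimal numerical sketch of the kind of double weighting the abstract describes, for the ATE case. It assumes, purely for illustration, logit models for both weighting functions: the treatment propensity P(D=1|X) and the probability that the outcome is observed, P(S=1|X,D). Complete cases are reweighted by the inverse of the product of the two fitted probabilities. This is not the paper's estimator (which covers general M-estimation and establishes the robustness properties); all function names and the toy data-generating process are invented for the example.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Simple Newton-Raphson logistic regression (X must include a constant column)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def predict_logit(X, beta):
    return 1.0 / (1.0 + np.exp(-X @ beta))

def doubly_weighted_ate(y, d, s, X):
    """ATE under nonrandom assignment (d) and missing outcomes (observed iff s == 1).

    y : outcomes (only meaningful where s == 1)
    d : binary treatment; s : binary "outcome observed" indicator
    X : covariates (n x k); a constant is appended internally."""
    Xc = np.column_stack([np.ones(len(d)), X])
    p = predict_logit(Xc, fit_logit(Xc, d))           # treatment propensity P(D=1|X)
    Xs = np.column_stack([Xc, d])                     # missingness may also depend on D
    q = predict_logit(Xs, fit_logit(Xs, s))           # selection probability P(S=1|X,D)
    w1 = s * d / (p * q)                              # weights for treated complete cases
    w0 = s * (1 - d) / ((1 - p) * q)                  # weights for control complete cases
    y = np.where(s == 1, y, 0.0)                      # guard: unobserved outcomes never enter
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# toy illustration with true ATE = 1
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
d = (rng.uniform(size=n) < 1 / (1 + np.exp(-0.5 * X[:, 0]))).astype(int)
y = 1.0 * d + X @ np.array([1.0, -0.5]) + rng.normal(size=n)
s = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.3 * X[:, 1] + 0.2 * d)))).astype(int)
print(doubly_weighted_ate(y, d, s, X))
```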
By: | In Choi (Department of Economics, Sogang University, Seoul); Sanghyun Jung (Department of Economics, Sogang University, Seoul) |
Abstract: | This paper proposes new estimators for the panel autoregressive (PAR) model of order 1 with short time dimensions and large cross sections. These estimators are based on the cross-sectional regression model using the first time series observations as a regressor and the last as a dependent variable. The regressors and errors of this regression model are correlated. The first estimator is the quasi maximum likelihood estimator (QMLE). The second estimator is the bias-corrected pooled least squares estimator (BCPLSE), which eliminates the asymptotic bias of the pooled least squares estimator by using the QMLE. The QMLE and BCPLSE are extended to the PAR model with endogenous regressors. The QMLE and BCPLSE provide consistent estimates of the PAR coefficients for stationary, unit root and explosive PAR models and consistently estimate the coefficients of endogenous regressors. Their finite sample properties are compared with those of some other estimators for the PAR model of order 1. This paper's estimators are shown to perform quite well in finite samples. |
Keywords: | dynamic panels, quasi maximum likelihood estimator, pooled least squares estimator, stationarity, unit root, explosive root |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:sgo:wpaper:2007&r=all |
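As background for why bias correction matters in short-T dynamic panels, a minimal sketch of the familiar within (fixed-effects) least squares estimate of the PAR(1) coefficient, which is biased toward zero when T is small (the Nickell bias). The paper's QMLE and BCPLSE, built from the cross-sectional regression of the last observation on the first, are different constructions and are not implemented here; the panel below is a toy simulation.

```python
import numpy as np

def within_ls_par1(Y):
    """Within (fixed-effects) least squares for y_{it} = a_i + rho*y_{i,t-1} + u_{it}.

    Y : (N, T) panel with small T and large N. The estimate of rho is biased
    toward zero for small T (Nickell bias), which is why bias-corrected or
    likelihood-based estimators are needed in short panels."""
    y_lag = Y[:, :-1]
    y_cur = Y[:, 1:]
    y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)   # remove individual effects
    y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
    return np.sum(y_lag_d * y_cur_d) / np.sum(y_lag_d ** 2)

# toy panel: N = 2000 individuals, T = 5 periods, true rho = 0.6
rng = np.random.default_rng(1)
N, T, rho = 2000, 5, 0.6
a = rng.normal(size=N)
Y = np.empty((N, T))
Y[:, 0] = a + rng.normal(size=N)
for t in range(1, T):
    Y[:, t] = a + rho * Y[:, t - 1] + rng.normal(size=N)
print(within_ls_par1(Y))   # noticeably below 0.6, illustrating the small-T bias
```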
By: | Fei Liu; Jiti Gao; Yanrong Yang |
Abstract: | Motivated by many key features of real data from economics and finance, we study a semiparametric panel data model with time-varying regression coefficients associated with an additive factor structure. In our model, factor loadings are unknown functions of observable variables which can capture time-variant and heterogeneous covariate information. A profile marginal integration (PMI) method is proposed to estimate unknown coefficient functions, factors and their loadings jointly in a single step, which can result in estimators with closed forms. Asymptotic distributions for the proposed profile estimators are established. Two empirical applications on US stock returns and OECD health care expenditure are provided. Thorough numerical results demonstrate the finite sample performance of our estimation and its advantage over traditional models in the relevant literature. |
Keywords: | additive factor model, nonparametric kernel estimation, profile marginal integration |
JEL: | C14 C23 C33 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2020-42&r=all |
By: | Frédérique Bec; Alain Guay (Université de Cergy-Pontoise, THEMA) |
Abstract: | This paper proposes t-like unit root tests that are consistent against any stationary alternative, including nonlinear or noncausal ones. It departs from existing tests in that it uses an unbounded grid set including all possible values taken by the series. In our setup, thanks to the very simple nonlinear stationary alternative specification and the particular choice of the threshold set, the proposed unit root test contains the standard ADF test as a special case. This, in turn, yields a sufficient condition for consistency against any ergodic stationary alternative. A Monte Carlo study shows that the power of our unbounded non-adaptive tests, in their average and exponential versions, outperforms that of existing bounded tests, whether adaptive or not. This is illustrated by an application to interest rate spread data. |
Keywords: | Unit root test, Threshold autoregressive model, Interest rate spread. |
JEL: | C12 C22 C32 E43 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ema:worpap:2020-10&r=all |
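A schematic sketch of the general idea: compute an ADF-type Wald statistic for each threshold in a grid taken from the observed values of the series, aggregate by an average and an exponential average, and obtain p-values by simulating the random-walk null. This is not the authors' test (their grid is unbounded and their statistics and critical values differ); the band-TAR regression form, the thinning of the grid, and the clipping in the exponential statistic are arbitrary choices for the example.

```python
import numpy as np

def tar_wald(y, lam):
    """Wald statistic for H0: rho1 = rho2 = 0 in the threshold regression
    dy_t = rho1*y_{t-1}*1{|y_{t-1}| <= lam} + rho2*y_{t-1}*1{|y_{t-1}| > lam} + e_t
    (no lag augmentation, for brevity)."""
    dy, ylag = np.diff(y), y[:-1]
    inner = (np.abs(ylag) <= lam).astype(float)
    if min(inner.sum(), len(ylag) - inner.sum()) < 3:   # one regime (nearly) empty
        return 0.0
    X = np.column_stack([ylag * inner, ylag * (1.0 - inner)])
    XtX = X.T @ X
    beta = np.linalg.solve(XtX, X.T @ dy)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    return float(beta @ XtX @ beta / sigma2)

def avg_exp_stats(y):
    """Average and exponential-average statistics over a grid of observed
    absolute values of the series (a bounded stand-in for an unbounded grid)."""
    grid = np.sort(np.abs(y[:-1]))[::5]
    w = np.array([tar_wald(y, lam) for lam in grid])
    return w.mean(), np.log(np.mean(np.exp(0.5 * np.minimum(w, 60.0))))

def unit_root_pvalues(y, n_sim=299, seed=0):
    """Monte Carlo p-values for the two statistics under the random-walk null."""
    avg_obs, exp_obs = avg_exp_stats(y)
    rng = np.random.default_rng(seed)
    sims = np.array([avg_exp_stats(np.cumsum(rng.normal(size=len(y))))
                     for _ in range(n_sim)])
    return np.mean(sims[:, 0] >= avg_obs), np.mean(sims[:, 1] >= exp_obs)

# toy illustration: a stationary AR(1), so the unit root null should tend to be rejected
rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.normal()
print(unit_root_pvalues(y))
```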
By: | Tomasz Olma |
Abstract: | Truncated conditional expectation functions are objects of interest in a wide range of economic applications, including income inequality measurement, financial risk management, and impact evaluation. They typically involve truncating the outcome variable above or below certain quantiles of its conditional distribution. In this paper, based on local linear methods, I propose a novel, two-stage, nonparametric estimator of such functions. In this estimation problem, the conditional quantile function is a nuisance parameter, which has to be estimated in the first stage. I immunize my estimator against the first-stage estimation error by exploiting a Neyman-orthogonal moment in the second stage. This construction ensures that the proposed estimator has favorable bias properties and that inference methods developed for standard nonparametric regression can be readily adapted to conduct inference on truncated conditional expectation functions. As an extension, I consider estimation with an estimated truncation quantile level. I apply my estimator in three empirical settings: (i) sharp regression discontinuity designs with a manipulated running variable, (ii) program evaluation under sample selection, and (iii) conditional expected shortfall estimation. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2020_244&r=all |
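A simplified sketch of the orthogonalized construction for one leading case mentioned above, the conditional expected shortfall, i.e. the truncated mean below the conditional tau-quantile. The first stage is a kernel-weighted conditional quantile; the second stage runs a local linear regression of the Neyman-orthogonal signal Y*1{Y<=q} + q*(tau - 1{Y<=q}), divided by tau. The bandwidths, the Gaussian kernel, and the plug-in (non-cross-fitted) first stage are illustrative choices, not the paper's estimator or its inference procedure.

```python
import numpy as np

def kern(u):
    return np.exp(-0.5 * u ** 2)                     # Gaussian kernel

def local_quantile(x0, X, Y, tau, h):
    """Kernel-weighted sample quantile of Y given X near x0 (simple first stage)."""
    w = kern((X - x0) / h)
    order = np.argsort(Y)
    cw = np.cumsum(w[order]) / np.sum(w)
    return Y[order][np.searchsorted(cw, tau)]

def local_linear(x0, X, Z, h):
    """Local linear regression of Z on X evaluated at x0."""
    w = kern((X - x0) / h)
    D = np.column_stack([np.ones_like(X), X - x0])
    b = np.linalg.solve(D.T @ (D * w[:, None]), D.T @ (w * Z))
    return b[0]

def truncated_cef(x0, X, Y, tau=0.5, h=0.3):
    """E[Y | Y <= q_tau(X), X = x0] via the orthogonalized signal
    (Y*1{Y<=q} + q*(tau - 1{Y<=q})) / tau with a plug-in first-stage quantile."""
    qhat = np.array([local_quantile(x, X, Y, tau, h) for x in X])
    below = (Y <= qhat).astype(float)
    Z = (Y * below + qhat * (tau - below)) / tau
    return local_linear(x0, X, Z, h)

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(-1, 1, size=n)
Y = X + rng.normal(size=n)
# at x0 = 0 the truth is E[N(0,1) | N(0,1) <= 0] = -sqrt(2/pi), about -0.798
print(truncated_cef(0.0, X, Y, tau=0.5))
```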
By: | Zhaoxing Gao; Ruey S. Tsay |
Abstract: | We propose a new framework for modeling high-dimensional matrix-variate time series by a two-way transformation, where the transformed data consist of a matrix-variate factor process, which is dynamically dependent, and three other blocks of white noises. Specifically, for a given $p_1\times p_2$ matrix-variate time series, we seek common nonsingular transformations to project the rows and columns onto another $p_1$ and $p_2$ directions according to the strength of the dynamic dependence of the series on the past values. Consequently, we treat the data as nonsingular linear row and column transformations of dynamically dependent common factors and white noise idiosyncratic components. We propose a common orthonormal projection method to estimate the front and back loading matrices of the matrix-variate factors. Under the setting that the largest eigenvalues of the covariance of the vectorized idiosyncratic term diverge for large $p_1$ and $p_2$, we introduce a two-way projected Principal Component Analysis (PCA) to estimate the associated loading matrices of the idiosyncratic terms to mitigate such diverging noise effects. A diagonal-path white noise testing procedure is proposed to estimate the order of the factor matrix. Asymptotic properties of the proposed method are established for both fixed and diverging dimensions as the sample size increases to infinity. We use simulated and real examples to assess the performance of the proposed method. We also compare our method with some existing ones in the literature and find that the proposed approach not only provides interpretable results but also performs well in out-of-sample forecasting. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.09029&r=all |
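A schematic sketch of the simplest ingredient of matrix-variate factor modeling: estimating row and column loading spaces from eigen-decompositions of averaged second moments of the matrix series and projecting to obtain factor matrices. It uses contemporaneous covariances, whereas the paper's transformations target dynamic dependence and add a projected PCA for the idiosyncratic terms and a white-noise testing procedure; none of that is reproduced here, and the toy dimensions are arbitrary.

```python
import numpy as np

def two_way_pca(Y, k1, k2):
    """Simple two-way PCA for a matrix factor model Y_t ~ R F_t C' + E_t.

    Y : array of shape (T, p1, p2). Returns row loadings R (p1 x k1),
    column loadings C (p2 x k2) and factor matrices F (T x k1 x k2)."""
    T, p1, p2 = Y.shape
    M1 = np.einsum('tij,tkj->ik', Y, Y) / T          # average of Y_t Y_t'
    M2 = np.einsum('tji,tjk->ik', Y, Y) / T          # average of Y_t' Y_t
    R = np.linalg.eigh(M1)[1][:, -k1:]               # top-k1 eigenvectors (row space)
    C = np.linalg.eigh(M2)[1][:, -k2:]               # top-k2 eigenvectors (column space)
    F = np.einsum('ai,tab,bj->tij', R, Y, C)         # F_t = R' Y_t C
    return R, C, F

# toy example: T = 200 observations of 10 x 8 matrices with a 2 x 1 factor structure
rng = np.random.default_rng(4)
T, p1, p2, k1, k2 = 200, 10, 8, 2, 1
R0 = rng.normal(size=(p1, k1)); C0 = rng.normal(size=(p2, k2))
F0 = rng.normal(size=(T, k1, k2))
Y = np.einsum('ai,tij,bj->tab', R0, F0, C0) + 0.1 * rng.normal(size=(T, p1, p2))
R, C, F = two_way_pca(Y, k1, k2)
fitted = np.einsum('ai,tij,bj->tab', R, F, C)
print(np.mean((Y - fitted) ** 2))                    # small reconstruction error
```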
By: | Dong, Hao (Southern Methodist University); Millimet, Daniel L. (Southern Methodist University) |
Abstract: | Estimation of the causal effect of a binary treatment on outcomes often requires conditioning on covariates to address selection on observed variables. This is not straightforward when one or more of the covariates are measured with error. Here, we present a new semi-parametric estimator that addresses this issue. In particular, we focus on inverse propensity score weighting estimators when the propensity score is of an unknown functional form and some covariates are subject to classical measurement error. Our proposed solution involves deconvolution kernel estimators of the propensity score and the regression function weighted by a deconvolution kernel density estimator. Simulations and replication of a study examining the impact of two financial literacy interventions on the business practices of entrepreneurs show our estimator to be valuable to empirical researchers. |
Keywords: | program evaluation, measurement error, propensity score, unconfoundedness, financial literacy |
JEL: | C18 C21 G21 G53 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp13893&r=all |
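A minimal sketch of the error-free version of the structure the abstract describes: a Nadaraya-Watson (kernel) propensity score plugged into normalized inverse propensity score weighting. The paper's contribution, replacing the ordinary kernels with deconvolution kernels to handle classical measurement error in the covariates, is not reproduced; the bandwidth, trimming level, and toy data are illustrative.

```python
import numpy as np

def nw_propensity(x0, X, D, h):
    """Nadaraya-Watson estimate of P(D = 1 | X = x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * D) / np.sum(w)

def ipw_ate_kernel(Y, D, X, h=0.3, trim=0.01):
    """Normalized IPW estimate of the ATE with a kernel propensity score."""
    p = np.array([nw_propensity(x, X, D, h) for x in X])
    p = np.clip(p, trim, 1 - trim)                   # trim extreme scores
    w1 = D / p
    w0 = (1 - D) / (1 - p)
    return np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)

rng = np.random.default_rng(5)
n = 3000
X = rng.normal(size=n)
D = (rng.uniform(size=n) < 1 / (1 + np.exp(-X))).astype(int)
Y = 2.0 * D + np.sin(X) + rng.normal(size=n)         # true ATE = 2
print(ipw_ate_kernel(Y, D, X))
# if X were observed only with classical measurement error, this estimator would be
# biased; the paper replaces the Gaussian kernel above with deconvolution kernels
```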
By: | Silvia Noirjean; Marco Mariani; Alessandra Mattei; Fabrizia Mealli |
Abstract: | Nudging youths to visit historical and artistic heritage is a key goal pursued by cultural organizations. The field experiment we analyze is a clustered encouragement design (CED) conducted in Florence (Italy) and devised to assess how appropriate incentives assigned to high-school classes may induce teens to visit museums in their free time. In CEDs, where the focus is on causal effects for individuals, interference between units is generally unavoidable. The presence of noncompliance and spillover effects makes causal inference particularly challenging. We propose to deal with these complications by creatively blending the principal stratification framework and causal mediation methods, and exploiting information on interpersonal networks. We formally define principal natural direct and indirect effects and principal controlled direct and indirect effects, and use them to disentangle spillovers from other causal channels. The key insight is that overall principal causal effects for sub-populations of units defined by their compliance behavior combine encouragement, treatment and spillover effects. In this situation, a synthesis of the network information may be used as a possible mediator, such that the part of the effect that is channeled by it can be attributed to spillovers. A Bayesian approach is used for inference, invoking latent ignorability assumptions on the mediator conditional on principal stratum membership. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.11023&r=all |
By: | David M. Kaplan (Department of Economics, University of Missouri) |
Abstract: | Instead of testing for unanimous agreement, I propose learning how broad a consensus favors one distribution over another (of income, productivity, asset returns, test scores, etc.). Specifically, I propose statistical inference methods to learn about the set of utility functions for which one distribution has higher expected utility than another. With high probability, an "inner" confidence set is contained within this true set, while an "outer" confidence set contains the true set. Such confidence sets can be formed by inverting a proposed multiple testing procedure that controls the familywise error rate. Theoretical justification comes from empirical process results, given that very large classes of utility functions are generally Donsker (subject to finite moments). The theory additionally justifies a uniform (over utility functions) confidence band of expected utility differences, as well as tests with a utility-based "restricted stochastic dominance" as either the null or alternative hypothesis. Simulated and empirical examples illustrate the methodology. |
Keywords: | confidence set, Donsker, expected utility, familywise error rate, multiple testing, stochastic dominance |
JEL: | C29 |
Date: | 2020–02–20 |
URL: | http://d.repec.org/n?u=RePEc:umc:wpaper:2010&r=all |
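A schematic sketch of the kind of inner/outer confidence sets described above, restricted to a finite grid of CRRA utilities and using a single-step recentred max-t bootstrap to control the familywise error rate. The CRRA grid, sample sizes, and significance level are arbitrary illustrations, and this is not the paper's empirical-process-based procedure: the inner set keeps utilities for which distribution A is confidently preferred, while the outer set drops only utilities for which A is confidently not preferred.

```python
import numpy as np

def crra(x, gamma):
    """CRRA utility for x > 0; gamma = 1 is log utility."""
    return np.log(x) if gamma == 1.0 else (x ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def eu_diffs(a, b, gammas):
    """Expected-utility differences and standard errors over the utility grid."""
    diffs, ses = [], []
    for g in gammas:
        ua, ub = crra(a, g), crra(b, g)
        diffs.append(ua.mean() - ub.mean())
        ses.append(np.sqrt(ua.var(ddof=1) / len(ua) + ub.var(ddof=1) / len(ub)))
    return np.array(diffs), np.array(ses)

def consensus_sets(a, b, gammas, alpha=0.05, n_boot=499, seed=0):
    """Inner/outer confidence sets for {utilities u : E[u(a)] > E[u(b)]}."""
    rng = np.random.default_rng(seed)
    d, s = eu_diffs(a, b, gammas)
    t_obs = d / s
    max_t = np.empty(n_boot)
    for i in range(n_boot):
        d_b, s_b = eu_diffs(rng.choice(a, len(a)), rng.choice(b, len(b)), gammas)
        max_t[i] = np.max(np.abs((d_b - d) / s_b))    # recentred bootstrap max-t
    c = np.quantile(max_t, 1 - alpha)
    inner = gammas[t_obs > c]     # A confidently preferred for these utilities
    outer = gammas[t_obs > -c]    # A not confidently ruled out for these utilities
    return inner, outer

gammas = np.linspace(0.0, 3.0, 13)                    # grid of CRRA risk-aversion values
rng = np.random.default_rng(9)
a = np.exp(rng.normal(0.1, 0.5, size=800))            # toy "incomes" under distribution A
b = np.exp(rng.normal(0.0, 0.6, size=800))            # toy "incomes" under distribution B
inner, outer = consensus_sets(a, b, gammas)
print(inner)
print(outer)
```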
By: | David M. Kaplan (Department of Economics, University of Missouri) |
Abstract: | I provide conditions in which unconditional quantile regression "effects" can be interpreted as policy effects. When the policy variables satisfy a conditional independence assumption (unconfoundedness), unconditional quantile regression estimates the policy effect for certain types of counterfactual policies, but not others. In particular, the policy effect can be estimated if the policy change itself satisfies conditional independence. This result complements existing identification results on distributional policy effects. |
Keywords: | counterfactual, policy, unconfoundedness |
JEL: | C21 |
Date: | 2019–10–21 |
URL: | http://d.repec.org/n?u=RePEc:umc:wpaper:2011&r=all |
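For reference, unconditional quantile regression is commonly implemented as a recentered influence function (RIF) regression in the sense of Firpo, Fortin and Lemieux: replace the outcome by RIF(Y; q_tau) = q_tau + (tau - 1{Y <= q_tau})/f_Y(q_tau) and run OLS on the covariates. A minimal sketch with a Gaussian kernel density estimate of f_Y(q_tau) follows; it illustrates the estimator whose interpretation the paper studies, not anything specific to the paper itself, and the toy design is invented.

```python
import numpy as np

def rif_quantile(y, tau, h=None):
    """Recentered influence function of the tau-th unconditional quantile of y."""
    q = np.quantile(y, tau)
    if h is None:
        h = 1.06 * y.std() * len(y) ** (-1 / 5)       # Silverman's rule of thumb
    f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    return q + (tau - (y <= q)) / f_q

def uqr(y, X, tau):
    """Unconditional quantile regression: OLS of the RIF on a constant and X."""
    Z = np.column_stack([np.ones(len(y)), X])
    rif = rif_quantile(y, tau)
    return np.linalg.solve(Z.T @ Z, Z.T @ rif)

rng = np.random.default_rng(6)
n = 5000
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([0.8, -0.3]) + rng.normal(size=n)
print(uqr(y, X, 0.9))                                 # coefficients on constant, X1, X2
```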
By: | Won-Ki Seo |
Abstract: | Functional principal component analysis (FPCA) has played an important role in the development of functional time series (FTS) analysis. This paper investigates how FPCA can be used to analyze cointegrated functional time series and proposes a modification of FPCA as a novel statistical tool. Our modified FPCA not only provides an asymptotically more efficient estimator of the cointegrating vectors, but also leads to novel KPSS-type tests for examining some essential properties of cointegrated time series. As an empirical illustration, our methodology is applied to the time series of log-earning densities. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.12781&r=all |
By: | Vincent Boucher (Department of Economics, Université Laval, CRREP and CREATE); Yann Bramoullé (Aix-Marseille Univ, CNRS, AMSE, Marseille, France.) |
Abstract: | Heckman and MaCurdy (1985) first showed that binary outcomes are compatible with linear econometric models of interactions. This key insight was unduly discarded by the literature on the econometrics of games. We consider general models of linear interactions in binary outcomes that nest linear models of peer effects in networks and linear models of entry games. We characterize when these models are well defined. Errors must have a specific discrete structure. We then analyze the models' game-theoretic microfoundations. Under complete information and linear utilities, we characterize the preference shocks under which the linear model of interactions forms a Nash equilibrium of the game. Under incomplete information and independence, we show that the linear model of interactions forms a Bayes-Nash equilibrium if and only if preference shocks are iid and uniformly distributed. We also obtain conditions for uniqueness. Finally, we propose two simple consistent estimators. We revisit the empirical analyses of teenage smoking and peer effects of Lee, Li, and Lin (2014) and of entry into airline markets of Ciliberto and Tamer (2009). Our reanalyses showcase the main advantages of the linear framework and suggest that the estimations in these two studies suffer from endogeneity problems. |
Keywords: | binary outcomes, linear probability model, peer effects, econometrics of games |
JEL: | C31 C35 C57 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:aim:wpaimx:2038&r=all |
By: | Einmahl, John (Tilburg University, School of Economics and Management); Segers, Johan |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiutis:edc722e6-cc70-4221-87a2-8493156e1ab3&r=all |
By: | Naftali Cohen; Srijan Sood; Zhen Zeng; Tucker Balch; Manuela Veloso |
Abstract: | Time series forecasting is essential for agents to make decisions in many domains. Existing models rely on classical statistical methods to predict future values based on previously observed numerical information. Yet, practitioners often rely on visualizations such as charts and plots to reason about their predictions. Inspired by the end-users, we re-imagine the topic by creating a framework to produce visual forecasts, similar to the way humans intuitively do. In this work, we take a novel approach by leveraging advances in deep learning to extend the field of time series forecasting to a visual setting. We do this by transforming the numerical analysis problem into the computer vision domain. Using visualizations of time series data as input, we train a convolutional autoencoder to produce corresponding visual forecasts. We examine various synthetic and real datasets with diverse degrees of complexity. Our experiments show that visual forecasting is effective for cyclic data but somewhat less so for irregular data such as stock prices. Importantly, we find the proposed visual forecasting method to outperform numerical baselines. We attribute the success of the visual forecasting approach to the fact that we convert the continuous numerical regression problem into a discrete domain with quantization of the continuous target signal into pixel space. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.09052&r=all |
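The key step the abstract attributes the method's success to, quantizing the continuous signal into pixel space, can be sketched without any deep-learning machinery: rasterize a window of the series into a binary image and invert by taking the brightest pixel per column. The convolutional autoencoder that maps input images to forecast images is not reproduced here; the image height and the toy sine series are arbitrary choices.

```python
import numpy as np

def series_to_image(y, height=64):
    """Rasterize a 1-D series into a binary image of shape (height, len(y)).

    Each column lights up the pixel row of the quantized value, which is the
    discretization into pixel space described above."""
    lo, hi = y.min(), y.max()
    rows = np.round((y - lo) / (hi - lo + 1e-12) * (height - 1)).astype(int)
    img = np.zeros((height, len(y)))
    img[height - 1 - rows, np.arange(len(y))] = 1.0   # row 0 is the top of the image
    return img, (lo, hi)

def image_to_series(img, scale):
    """Invert the rasterization: per column, take the brightest pixel."""
    lo, hi = scale
    height = img.shape[0]
    rows = height - 1 - img.argmax(axis=0)
    return lo + rows / (height - 1) * (hi - lo)

t = np.linspace(0, 4 * np.pi, 128)
y = np.sin(t)
img, scale = series_to_image(y)
y_back = image_to_series(img, scale)
print(np.max(np.abs(y - y_back)))    # quantization error only (at most half a pixel step)
```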
By: | G.M. Gallo; D. Lacava; E. Otranto |
Abstract: | The financial turmoil surrounding the Great Recession called for unprecedented intervention by Central Banks: unconventional policies affected various areas in the economy, including stock market volatility. In order to evaluate such effects, by including Markov-switching dynamics within a recent Multiplicative Error Model, we propose a model-based classification of the dates of a Central Bank's announcements to distinguish the cases where the announcement implies an increase or a decrease in volatility, or no effect. In detail, we propose two naïve classification methods, obtained as a by-product of the model estimation, which provide very similar results to those coming from a classical k-means clustering procedure. The application on four Eurozone market volatility series shows a successful classification of 144 European Central Bank announcements. |
Keywords: | Markov switching model, unconventional monetary policies, stock market volatility, Multiplicative Error Model, smoothed probabilities, model-based clustering |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:cns:cnscwp:202008&r=all |
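A schematic sketch of the benchmark mentioned above: k-means with three groups (volatility-increasing, volatility-decreasing, no effect) applied to per-announcement effect estimates. The Markov-switching Multiplicative Error Model that would produce such estimates is not reproduced; the "effects" below are simulated placeholders whose only link to the paper is that they number 144.

```python
import numpy as np

def kmeans_1d(x, k=3, iters=100, seed=0):
    """Plain Lloyd's algorithm for 1-D data; returns cluster centers and labels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# simulated placeholders for 144 per-announcement volatility effects
rng = np.random.default_rng(7)
effects = np.concatenate([rng.normal(-0.4, 0.10, 40),   # volatility-decreasing
                          rng.normal(0.0, 0.05, 70),    # no effect
                          rng.normal(0.5, 0.15, 34)])   # volatility-increasing
centers, labels = kmeans_1d(effects, k=3)
order = np.argsort(centers)
for j, name in zip(order, ["decrease", "no effect", "increase"]):
    print(name, round(float(centers[j]), 3), int(np.sum(labels == j)))
```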
By: | Stefan Hubner |
Abstract: | This paper develops a method to use data from singles in a non-parametric collective household setting. We use it to test the controversial assumption of preference stability between singles and couples. Our test allows for unobserved heterogeneity by defining finite-dimensional types of households according to their revealed preference relations. We show how to derive a test statistic by constructing hypothetical matches of heterogeneous individuals into different types of households using tools from stochastic choice theory. We strongly reject the preference-stability hypothesis based on consumption data from the Dutch LISS, the Russian RLMS, and the Spanish ECPF panels. |
Date: | 2020–12–02 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:20/735&r=all |
By: | Kaiser, Caspar; Vendrik, Maarten C.M. (Macro, International & Labour Economics, Research Centre for Educ and Labour Mark, School of Business and Economics, RS: GSBE other - not theme-related research) |
Abstract: | Two recent papers argue that many results based on ordinal reports of happiness can be reversed with suitable monotonic increasing transformations of the associated happiness scale (Bond and Lang 2019; Schröder and Yitzhaki 2017). If true, empirical research utilizing such reports is in trouble. Against this background, we make four main contributions. First, we show that reversals are fundamentally made possible by explanatory variables having heterogeneous effects across the distribution of happiness. We derive a simple test of whether reversals are possible by relabelling the scores of reported happiness and deduce bounds for ratios of coefficients under any labelling scheme. Second, we argue that in cases where reversals by relabelling happiness scores are impossible, reversals using an alternative method of Bond and Lang, which is based on ordered probit regressions, are highly speculative. Third, we make apparent that in order to achieve reversals, the analyst must assume that respondents use the response scale in a strongly non-linear fashion. However, drawing from the economic and psychological literature, we present arguments and evidence which suggest that respondents likely use response scales in an approximately linear manner. Fourth, using German SOEP data, we provide additional empirical evidence on whether reversals of effects of standard demographic variables are both possible and plausible. It turns out that reversals by either relabelling or by using Bond and Lang's approach are impossible or implausible for almost all variables of interest. Although our analysis uses happiness as a special case, our theoretical considerations are applicable to any type of subjective ordinal report. |
JEL: | I31 C25 |
Date: | 2020–12–01 |
URL: | http://d.repec.org/n?u=RePEc:unm:umagsb:2020032&r=all |
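A minimal sketch of the basic exercise the abstract discusses: re-estimate a coefficient after monotone increasing relabellings of an ordinal happiness scale and check whether its sign can flip. The simulated data, the 0-10 scale, and the convex/concave labelling schemes are arbitrary illustrations, not the authors' test or their bounds; with effects that are homogeneous across the distribution, no monotone relabelling reverses the sign, in line with the paper's first point.

```python
import numpy as np

def ols_slope(y, X):
    """OLS slope coefficient(s) of y on a constant and X."""
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.solve(Z.T @ Z, Z.T @ y)[1:]

rng = np.random.default_rng(8)
n = 10000
income = rng.normal(size=n)
latent = 0.3 * income + rng.normal(size=n)
# reported happiness on a 0-10 scale (coarse, monotone in the latent variable)
happiness = np.digitize(latent, np.linspace(-2, 2, 10))

# monotone increasing relabellings of the 0-10 scores
labellings = {
    "linear":  np.arange(11, dtype=float),
    "convex":  np.arange(11, dtype=float) ** 3,
    "concave": np.sqrt(np.arange(11, dtype=float)),
}
for name, lab in labellings.items():
    print(name, ols_slope(lab[happiness], income))
# the income coefficient stays positive under every relabelling here; reversals
# would require strongly heterogeneous effects across the happiness distribution
```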