on Econometrics
By: | Jeffrey S. Racine; Qi Li; Karen X. Yan |
Abstract: | We propose a kernel function for ordered categorical data that overcomes certain limitations present in ordered kernel functions that have appeared in the literature on the estimation of probability mass functions for multinomial ordered data. Some of these limitations arise from assumptions made about the support of the random variable that may be at odds with the data at hand. Furthermore, many existing ordered kernel functions lack a particularly appealing property, namely the ability to deliver discrete uniform probability estimates for some value of the smoothing parameter. To overcome these limitations, we propose an asymmetric empirical support kernel function that adapts to the data at hand and possesses certain desirable features. In particular, there are no difficulties arising from zero counts caused by gaps in the data, and the kernel encompasses both the empirical proportions and the discrete uniform probabilities at the lower and upper boundaries of the smoothing parameter. We propose using likelihood and least squares cross-validation for smoothing parameter selection, and study the asymptotic behaviour of these data-driven methods. We use Monte Carlo simulations to examine the finite sample performance of the proposed estimator, and we provide a simple empirical example to illustrate its usefulness in applied settings. |
Date: | 2017–11–03 |
URL: | http://d.repec.org/n?u=RePEc:mcm:deptwp:2017-14&r=ecm |
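To make the boundary property concrete, here is a minimal Python sketch (assuming only numpy) of an estimator that shares it: a convex combination of the empirical proportions and the discrete uniform over the empirical support, with the smoothing parameter chosen by leave-one-out likelihood cross-validation. This stand-in is not the kernel proposed in the paper; it only illustrates how lambda = 0 recovers the empirical proportions and lambda = 1 the discrete uniform, with no zero-count difficulties on gappy supports.

```python
import numpy as np

def ordered_pmf(data, lam):
    """Toy PMF over the empirical support: lam=0 -> sample proportions,
    lam=1 -> discrete uniform. A stand-in for the paper's kernel."""
    support, counts = np.unique(data, return_counts=True)
    props = counts / counts.sum()
    unif = np.full_like(props, 1.0 / len(support))
    return dict(zip(support, (1 - lam) * props + lam * unif))

def loo_log_likelihood(data, lam):
    """Leave-one-out likelihood cross-validation score for lam."""
    ll = 0.0
    for i in range(len(data)):
        pmf = ordered_pmf(np.delete(data, i), lam)
        ll += np.log(pmf.get(data[i], 1e-12))   # floor handles unseen points
    return ll

rng = np.random.default_rng(0)
data = rng.binomial(10, 0.3, size=200)          # ordered categorical sample
lams = np.linspace(0.0, 1.0, 21)
best = max(lams, key=lambda l: loo_log_likelihood(data, l))
print("cross-validated smoothing parameter:", best)
```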
By: | Wager, Stefan (Stanford University); Athey, Susan (Stanford University) |
Abstract: | Many scientific and engineering challenges--ranging from personalized medicine to customized marketing recommendations--require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical inference. In experiments, we find causal forests to be substantially more powerful than classical methods based on nearest-neighbor matching, especially in the presence of irrelevant covariates. |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:ecl:stabus:3576&r=ecm |
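A hedged illustration of causal forest estimation: the econml package implements a descendant of the Wager-Athey causal forest (CausalForestDML). The sketch below is not the authors' code, and the API shown is econml's, assumed installed alongside numpy.

```python
import numpy as np
from econml.dml import CausalForestDML   # assumes the econml package is installed

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.5, size=n)                 # randomized treatment
tau = 1.0 + X[:, 0]                              # heterogeneous treatment effect
Y = tau * T + X[:, 1] + rng.normal(size=n)

est = CausalForestDML(discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X)

tau_hat = est.effect(X)                          # pointwise CATE estimates
lo, hi = est.effect_interval(X, alpha=0.05)      # asymptotic confidence intervals
print("coverage of true effect:", np.mean((lo <= tau) & (tau <= hi)))
```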
By: | Darwin Ugarte Ontiveros; Gustavo Canavire-Bacarreza; Luis Castro Peñarrieta |
Abstract: | Average treatment effect estimands can exhibit significant bias in the presence of outliers. Moreover, outliers can be particularly hard to detect, creating bias and inconsistency in semi-parametric ATE estimands. In this paper, we use Monte Carlo simulations to demonstrate that semi-parametric methods, such as matching, are biased in the presence of outliers. Both bad and good leverage point outliers are considered. The bias arises because bad leverage points completely change the distribution of the metrics used to define counterfactuals, whereas good leverage points increase the chance of breaking the common support condition and distort the balance of the covariates, which may push practitioners to misspecify the propensity score. We provide some clues to diagnose the presence of outliers and propose a reweighting estimator that is robust against outliers, based on the Stahel-Donoho multivariate estimator of scale and location. An application of this estimator to the LaLonde (1986) data allows us to explain the debate between Dehejia and Wahba (2002) and Smith and Todd (2005) on the inability of matching estimators to deal with the evaluation problem. |
Keywords: | Treatment effects, Outliers, Propensity score, Mahalanobis distance |
JEL: | C21 C14 C52 C13 |
Date: | 2017–10–30 |
URL: | http://d.repec.org/n?u=RePEc:col:000122:015810&r=ecm |
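The Stahel-Donoho outlyingness underlying the proposed reweighting can be approximated with random projections; the numpy-only sketch below computes projection outlyingness and Huber-type weights. The weighting rule and the cutoff c are illustrative choices, not necessarily the authors' exact estimator.

```python
import numpy as np

def sd_outlyingness(X, n_dirs=500, seed=0):
    """Stahel-Donoho outlyingness approximated over random projections:
    for each direction u, |x'u - median| / MAD, maximized over u."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj = X @ U.T                                    # (n, n_dirs)
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0) * 1.4826
    mad = np.where(mad > 0, mad, 1e-12)
    return np.max(np.abs(proj - med) / mad, axis=1)

def sd_weights(X, c=4.0):
    """Huber-type weights: observations with outlyingness above c are downweighted."""
    out = sd_outlyingness(X)
    return np.minimum(1.0, (c / out) ** 2)

# these weights can then multiply any matching / propensity-score objective
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[:5] += 10.0                                         # plant bad leverage points
print(sd_weights(X)[:5])                              # near-zero weights on outliers
```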
By: | Lu, Lina (Federal Reserve Bank of Boston) |
Abstract: | I consider a simultaneous spatial panel data model, jointly modeling three effects: simultaneous effects, spatial effects and common shock effects. This joint modeling and consideration of cross-sectional heteroskedasticity result in a large number of incidental parameters. I propose two estimation approaches, a quasi-maximum likelihood (QML) method and an iterative generalized principal components (IGPC) method. I develop full inferential theories for the estimation approaches and study the trade-off between the model specifications and their respective asymptotic properties. I further investigate the finite sample performance of both methods using Monte Carlo simulations. I find that both methods perform well and that the simulation results corroborate the inferential theories. Some extensions of the model are considered. Finally, I apply the model to analyze the relationship between trade and GDP using panel data over time and across countries. |
Keywords: | Panel data model; Spatial model; Simultaneous equations system; Common shocks; Simultaneous effects; Incidental parameters; Maximum likelihood estimation; Principal components; High dimensionality; Inferential theory |
JEL: | C13 C31 C33 C38 C51 |
Date: | 2017–08–09 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedbqu:rpa17-3&r=ecm |
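The iterative principal components idea can be illustrated on the simpler interactive-effects panel regression (in the spirit of Bai, 2009): iterate between OLS given the factors and PCA of the residuals given the slope. The paper's IGPC method handles a richer simultaneous spatial system; this numpy sketch only shows the generic loop.

```python
import numpy as np

def interactive_effects(Y, X, r, n_iter=100):
    """Iterate between OLS given factors and PCA given beta (Bai-2009 style).
    Y: (T, N) panel, X: (T, N, k) regressors, r: number of common factors."""
    T, N = Y.shape
    k = X.shape[2]
    beta = np.zeros(k)
    for _ in range(n_iter):
        resid = Y - np.tensordot(X, beta, axes=([2], [0]))   # (T, N)
        evals, evecs = np.linalg.eigh(resid @ resid.T)       # PCA step
        F = np.sqrt(T) * evecs[:, -r:]                       # factors (T, r)
        L = resid.T @ F / T                                  # loadings (N, r)
        Yd = (Y - F @ L.T).ravel()                           # defactored outcome
        beta = np.linalg.lstsq(X.reshape(T * N, k), Yd, rcond=None)[0]
    return beta, F, L

rng = np.random.default_rng(0)
T, N, r = 100, 50, 2
F0, L0 = rng.normal(size=(T, r)), rng.normal(size=(N, r))
X = rng.normal(size=(T, N, 1))
Y = 0.5 * X[:, :, 0] + F0 @ L0.T + rng.normal(size=(T, N))
beta, F, L = interactive_effects(Y, X, r=2)
print("beta estimate:", beta)     # close to the true value 0.5
```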
By: | Victor Chernozhukov; Iván Fernández-Val; Whitney Newey; Sami Stouli; Francis Vella |
Abstract: | This paper introduces two classes of semiparametric triangular systems with nonadditively separable unobserved heterogeneity. They are based on distribution and quantile regression modeling of the reduced-form conditional distributions of the endogenous variables. We show that these models are flexible and identify the average, distribution and quantile structural functions using a control function approach that does not require a large support condition. We propose a computationally attractive three-stage procedure to estimate the structural functions where the first two stages consist of quantile or distribution regressions. We provide asymptotic theory and uniform inference methods for each stage. In particular, we derive functional central limit theorems and bootstrap functional central limit theorems for the distribution regression estimators of the structural functions. We illustrate the implementation and applicability of our methods with numerical simulations and an empirical application to demand analysis. |
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1711.02184&r=ecm |
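A minimal two-equation version of the control function idea, using statsmodels: stage one estimates the conditional rank V of the endogenous regressor by distribution regression (a grid of logits), stage two runs a quantile regression of the outcome on the endogenous variable and a flexible function of V. The DGP, the 25-point threshold grid, and the cubic in V are illustrative choices, not the paper's full three-stage procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                              # exogenous instrument
eta = rng.normal(size=n)
x = z + eta                                         # reduced form for endogenous X
y = 1.0 + x + 0.8 * eta + 0.6 * rng.normal(size=n)  # structural equation, slope 1

# Stage 1: distribution regression of X on Z (a grid of logits), then evaluate
# the fitted conditional CDF at each unit's own X: the control function V.
grid = np.quantile(x, np.linspace(0.02, 0.98, 25))
Z = sm.add_constant(z)
cdf = np.column_stack([
    sm.Logit((x <= t).astype(float), Z).fit(disp=0).predict(Z) for t in grid
])
V = np.array([np.interp(xi, grid, ci) for xi, ci in zip(x, cdf)])

# Stage 2: quantile regression of Y on X and a flexible function of V.
design = sm.add_constant(np.column_stack([x, V, V**2, V**3]))
fit = sm.QuantReg(y, design).fit(q=0.5)
print(fit.params[1])    # slope on x should be close to the structural value 1.0
```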
By: | Gonzalo Vazquez-Bare |
Abstract: | This paper employs a nonparametric potential outcomes framework to study causal spillover effects in a setting where units are clustered and their potential outcomes can depend on the treatment assignment of all the units within a cluster. Using this framework, I define parameters of interest and provide conditions under which direct and spillover effects can be identified when a treatment is randomly assigned. In addition, I characterize and discuss the causal interpretation of the estimands that are recovered by two popular estimation approaches in empirical work: a regression of an outcome on a treatment indicator (difference in means) and a regression of an outcome on a treatment indicator and the proportion of treated peers (a reduced-form linear-in-means model). It is shown that consistency and asymptotic normality of the nonparametric spillover effects estimators require a precise relationship between the number of parameters, the total sample size and the probability distribution of the treatment assignments, which has important implications for the design of experiments. The findings are illustrated with data from a conditional cash transfer pilot study and with simulations. The wild bootstrap is shown to be consistent, and simulation evidence suggests a better performance compared to the Gaussian approximation when groups are moderately large relative to the sample size. |
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1711.02745&r=ecm |
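The two estimands discussed in the paper are easy to exhibit in a simulation: a difference in means, and a regression of the outcome on the own-treatment indicator and the leave-one-out proportion of treated peers, with cluster-robust standard errors. A sketch using numpy and statsmodels, with an assumed linear-in-means DGP:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
G, m = 500, 5                                   # 500 clusters of size 5
d = rng.binomial(1, 0.5, size=(G, m))           # random assignment within clusters
peers = (d.sum(axis=1, keepdims=True) - d) / (m - 1)   # leave-one-out peer share
y = 1.0 + 2.0 * d + 1.5 * peers + rng.normal(size=(G, m))

# (i) difference in means: treated vs untreated units
dm = y[d == 1].mean() - y[d == 0].mean()

# (ii) reduced-form linear-in-means regression
Xlm = sm.add_constant(np.column_stack([d.ravel(), peers.ravel()]))
ols = sm.OLS(y.ravel(), Xlm).fit(cov_type="cluster",
                                 cov_kwds={"groups": np.repeat(np.arange(G), m)})
print("difference in means:", dm)
print("own and spillover coefficients:", ols.params[1:])
```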
By: | Asai, M.; McAleer, M.J.; Peiris, S. |
Abstract: | In recent years fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. In this paper, we develop a new realized stochastic volatility (RSV) model with general Gegenbauer long memory (GGLM), which encompasses a new RSV model with seasonal long memory (SLM). The RSV model uses the information from returns and realized volatility measures simultaneously. The long memory structure of both models can describe unbounded peaks apart from the origin in the power spectrum. For estimating the RSV-GGLM model, we suggest estimating the location parameters for the peaks of the power spectrum in the first step, and the remaining parameters based on the Whittle likelihood in the second step. We conduct Monte Carlo experiments for investigating the finite sample properties of the estimators, with a quasi-likelihood ratio test of the RSV-SLM model against the RSV-GGLM model. We apply the RSV-GGLM and RSV-SLM models to three stock market indices. The estimation and forecasting results indicate the adequacy of considering general long memory. |
Keywords: | Stochastic Volatility, Realized Volatility Measure, Long Memory, Gegenbauer Polynomial, Seasonality, Whittle Likelihood |
JEL: | C18 C21 C58 |
Date: | 2017–11–01 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:102576&r=ecm |
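The two-step idea can be illustrated in a stripped-down univariate Gegenbauer setting: simulate via the truncated MA expansion of (1 - 2 cos(nu) L + L^2)^(-d), locate the spectral peak from the periodogram, then estimate d by profiled Whittle likelihood given the estimated peak. The RSV-GGLM model involves more structure (returns plus realized measures); this numpy/scipy sketch covers only the spectral mechanics.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gegenbauer_sim(n, nu, d, m=2000, seed=0):
    """Simulate a Gegenbauer process via a truncated MA expansion of
    (1 - 2 cos(nu) L + L^2)^(-d), using the Gegenbauer polynomial recursion."""
    eta = np.cos(nu)
    c = np.zeros(m)
    c[0], c[1] = 1.0, 2.0 * d * eta
    for j in range(2, m):
        c[j] = (2 * eta * (j + d - 1) * c[j - 1] - (j + 2 * d - 2) * c[j - 2]) / j
    e = np.random.default_rng(seed).normal(size=n + m - 1)
    return np.convolve(e, c, mode="valid")

def two_step_whittle(x):
    """Step 1: locate the spectral peak from the periodogram.
    Step 2: estimate the memory parameter d by profiled Whittle likelihood."""
    n = len(x)
    lams = 2 * np.pi * np.arange(1, n // 2) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2]) ** 2 / (2 * np.pi * n)
    nu_hat = lams[np.argmax(I)]
    keep = np.abs(lams - nu_hat) > 2 * np.pi / n       # drop the pole itself
    def profile(d):
        g = (4.0 * (np.cos(lams[keep]) - np.cos(nu_hat)) ** 2) ** (-d)
        return np.log(np.mean(I[keep] / g)) + np.mean(np.log(g))
    d_hat = minimize_scalar(profile, bounds=(0.01, 0.49), method="bounded").x
    return nu_hat, d_hat

x = gegenbauer_sim(5000, nu=np.pi / 2, d=0.3)
print(two_step_whittle(x))      # peak roughly at pi/2, d roughly 0.3
```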
By: | Juan Carlos Parra-Alvarez; Olaf Posch; Mu-Chun Wang |
Abstract: | In this paper, we study the statistical properties of heterogeneous agent models with incomplete markets. Using a Bewley-Huggett-Aiyagari model, we compute the equilibrium density function of wealth and show how it can be used for likelihood inference. We investigate the identifiability of the model parameters based on data representing a large cross-section of individual wealth. We also study the finite sample properties of the maximum likelihood estimator using Monte Carlo experiments. Our results suggest that while the parameters related to the household’s preferences can be correctly identified and accurately estimated, the parameters associated with the supply side of the economy cannot be separately identified, leading to inferential problems that persist even in large samples. In the presence of such partial identification problems, we show that an empirical strategy based on fixing the value of one of the troublesome parameters allows us to pin down the other unidentified parameter without compromising the estimation of the remaining parameters of the model. An empirical illustration of our maximum likelihood framework using the 2013 SCF data for the U.S. confirms the results from our identification experiments. |
Keywords: | heterogeneous agent models, continuous-time, Fokker-Planck equations, identification, maximum likelihood |
JEL: | C10 C13 C63 E21 E24 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_6717&r=ecm |
By: | Martinez, Andrew (Federal Reserve Bank of Cleveland) |
Abstract: | Although the trajectory and path of future outcomes play an important role in policy decisions, analyses of forecast accuracy typically focus on individual point forecasts. However, it is important to examine path forecast errors, since they include the forecast dynamics. We use the link between path forecast evaluation methods and the joint predictive density to propose a test for differences in system path forecast accuracy. We also demonstrate how our test relates to and extends existing joint testing approaches. Simulations highlight both the advantages and disadvantages of path forecast accuracy tests in detecting a broad range of differences in forecast errors. We compare the Federal Reserve’s Greenbook point and path forecasts against four DSGE model forecasts. The results show that differences in forecast-error dynamics can play an important role in the assessment of forecast accuracy. |
Keywords: | GFESM; log determinant; log score; mean square error; |
JEL: | C12 C22 C52 C53 |
Date: | 2017–11–02 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwp:1717&r=ecm |
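The ingredients of a path forecast accuracy comparison can be sketched directly: stack each origin's h-step errors into a path, score models with a log-determinant (GFESM-style) loss, and feed a per-origin loss differential into a Diebold-Mariano-type statistic. This numpy/scipy sketch illustrates the objects involved, not the paper's test; in practice the variance of the loss differential would be HAC-corrected for overlapping multi-step errors.

```python
import numpy as np
from scipy import stats

def path_loss_logdet(errors):
    """GFESM-style loss: log determinant of the sample second-moment
    matrix of stacked path forecast errors (T origins x H horizons)."""
    sign, logdet = np.linalg.slogdet(errors.T @ errors / len(errors))
    return logdet

def dm_path_test(err_a, err_b):
    """Diebold-Mariano-type test on per-origin quadratic path losses
    (squared norm of each error path; a simple illustrative choice)."""
    d = (err_a ** 2).sum(axis=1) - (err_b ** 2).sum(axis=1)
    tstat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return tstat, 2 * stats.norm.sf(abs(tstat))

rng = np.random.default_rng(0)
T, H = 120, 8                                  # 120 origins, 8-step-ahead paths
err_a = rng.normal(size=(T, H))
err_b = rng.normal(scale=1.2, size=(T, H))     # model B has larger errors
print("log-det losses:", path_loss_logdet(err_a), path_loss_logdet(err_b))
print("DM-type test:", dm_path_test(err_a, err_b))
```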
By: | Ferroni, Filippo (Federal Reserve Bank of Chicago); Grassi, Stefano (University of Rome); Leon-Ledesma, Miguel A. (University of Kent) |
Abstract: | DSGE models are typically estimated assuming the existence of certain primal shocks that drive macroeconomic fluctuations. We analyze the consequences of estimating shocks that are "non-existent" and propose a method to select the primal shocks driving macroeconomic uncertainty. Forcing these non-existent shocks into the estimation produces a downward bias in the estimated internal persistence of the model. We show how these distortions can be reduced by using priors for standard deviations whose support includes zero. The method allows us to accurately select primal shocks and estimate model parameters with high precision. We revisit the empirical evidence on an industry-standard medium-scale DSGE model and find that government and price markup shocks are innovations that do not generate statistically significant dynamics. |
Keywords: | Reduced rank covariance matrix; DSGE models; stochastic dimension search |
JEL: | C10 E27 E32 |
Date: | 2017–08–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-2017-20&r=ecm |
By: | Karanasos, Menelaos (Brunel University London); Xu, Yongdeng (Cardiff Business School) |
Abstract: | In this paper we review and generalize results on the derivation of tractable non-negativity (necessary and sufficient) conditions for N-dimensional asymmetric power GARCH/HEAVY models and MEM. We show that these non-negativity constraints are translated into simple matrix inequalities, which are easily handled. One main concern is that the existence of such conditions is often ignored by researchers. We hope that our paper will create more awareness of the presence of these non-negativity conditions and increase their usage. In practice these constraints may not be fulfilled. To handle such cases we propose a new mixture formulation that eliminates some of these constraints. By using the exponential specification for some (but not all) of the conditional variables in the system, we considerably reduce the dimension of these constraints. We also obtain new theoretical results about the second moment structure and the optimal forecasts of such multivariate processes. Four empirical examples are included to show the effectiveness of the proposed method. |
Keywords: | Bootstrap, indirect inference, gravity model, classical trade model, UK trade |
JEL: | C32 C53 C58 G15 |
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:cdf:wpaper:2017/14&r=ecm |
By: | Todd E Clark; Michael W McCracken; Elmar Mertens |
Abstract: | We develop uncertainty measures for point forecasts from surveys such as the Survey of Professional Forecasters, Blue Chip, or the Federal Open Market Committee's Summary of Economic Projections. At a given point in time, these surveys provide forecasts for macroeconomic variables at multiple horizons. To track time-varying uncertainty in the associated forecast errors, we derive a multiple-horizon specification of stochastic volatility. Compared to constant-variance approaches, our stochastic-volatility model improves the accuracy of uncertainty measures for survey forecasts. |
Keywords: | stochastic volatility, survey forecasts, fan charts |
JEL: | E37 C53 |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:bis:biswps:667&r=ecm |
By: | Tim J. Boonen (University of Amsterdam); Montserrat Guillén (Riskcenter, Department of Econometrics, University of Barcelona. Diagonal Av. 690, 08034, Barcelona, Spain.); Miguel Santolino (Riskcenter, Department of Econometrics, University of Barcelona. Diagonal Av. 690, 08034, Barcelona, Spain.) |
Abstract: | We analyse models for panel data that arise in risk allocation problems, when a given set of sources are the cause of an aggregate risk value. We focus on the modeling and forecasting of proportional contributions to risk. Compositional data methods are proposed, and the regression framework is flexible enough to incorporate external information from other variables. We guarantee that projected proportional contributions add up to 100%, and we introduce a method to generate confidence regions with the same restriction. An illustration using data from the stock exchange is provided. |
Keywords: | Simplex, capital allocation, dynamic management. |
JEL: | C02 G22 D81 |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap2017-04&r=ecm |
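The core compositional-data device is the log-ratio transform: model proportions in log-ratio coordinates and map predictions back, so that projected contributions add up to one by construction. A numpy sketch using the additive log-ratio (the paper may use a different coordinate system, and its confidence-region construction is not shown):

```python
import numpy as np

def alr(P):
    """Additive log-ratio transform: proportions (n, D) -> R^(D-1)."""
    return np.log(P[:, :-1] / P[:, -1:])

def alr_inv(Z):
    """Back-transform; rows sum to one by construction."""
    E = np.exp(np.column_stack([Z, np.zeros(len(Z))]))
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
Z = np.column_stack([0.5 + t, -0.2 * t, 0.1 + 0.3 * t]) + 0.1 * rng.normal(size=(n, 3))
P = alr_inv(Z)                                   # proportional risk contributions

# regress each log-ratio coordinate on a covariate, then project
X = np.column_stack([np.ones(n), t])
B = np.linalg.lstsq(X, alr(P), rcond=None)[0]    # (2, D-1) coefficients
P_hat = alr_inv(X @ B)
print(P_hat.sum(axis=1)[:3])                     # exactly 1 by construction
```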
By: | Licht, Adrian; Escribano Sáez, Álvaro; Blazsek, Szabolcs Istvan |
Abstract: | In this paper, we introduce a new model by extending the dynamic conditional score (DCS) model of the multivariate t-distribution, and name it the quasi-vector autoregressive (QVAR) model. QVAR is a score-driven nonlinear multivariate dynamic location model, in which the conditional score vector of the log-likelihood (LL) updates the dependent variables. For QVAR, we present the details of the econometric formulation, the computation of the impulse response function, and the maximum likelihood (ML) estimation and related conditions of consistency and asymptotic normality. As an illustration, we use quarterly data for the period 1987:Q1 to 2013:Q2 on the following variables: quarterly percentage change in the crude oil real price, quarterly United States (US) inflation rate, and quarterly US real gross domestic product (GDP) growth. We find that the statistical performance of QVAR is superior to that of VAR and VARMA. Interestingly, stochastic annual cyclical effects with decreasing amplitude are found for QVAR, whereas those cyclical effects are not found for VAR or VARMA. |
Keywords: | Multivariate Student's t errors; Cyclical IRF; Impulse response function (IRF); Non-linear vector MA models; Quasi-VAR (QVAR) models; Multivariate dynamic location models; Dynamic conditional score (DCS) models |
JEL: | C52 C32 |
Date: | 2017–10–01 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:25739&r=ecm |
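The score-driven mechanism that QVAR generalizes can be shown in the univariate DCS location model with Student-t errors: the location update is driven by the conditional score, which is bounded in the prediction error and therefore discounts outliers. A numpy/scipy sketch with illustrative starting values (the paper's multivariate specification and its IRF computations are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def dcs_t_filter(y, omega, phi, kappa, sigma, nu):
    """Score-driven location filter with Student-t errors:
    mu[t+1] = omega + phi*mu[t] + kappa*u[t], where u[t] is the score."""
    mu = np.empty(len(y))
    mu[0] = y[0]
    for t in range(len(y) - 1):
        e = y[t] - mu[t]
        u = (nu + 1) * e / (nu * sigma**2 + e**2)   # bounded score: outliers damped
        mu[t + 1] = omega + phi * mu[t] + kappa * u
    return mu

def neg_loglik(theta, y):
    omega, phi, kappa, log_sigma, log_nu = theta
    sigma, nu = np.exp(log_sigma), np.exp(log_nu)
    mu = dcs_t_filter(y, omega, phi, kappa, sigma, nu)
    return -student_t.logpdf(y, df=nu, loc=mu, scale=sigma).sum()

rng = np.random.default_rng(0)
y = np.cumsum(0.1 * rng.standard_t(5, size=500)) + rng.standard_t(5, size=500)
res = minimize(neg_loglik, x0=[0.0, 0.9, 0.5, 0.0, np.log(5)], args=(y,),
               method="Nelder-Mead")
print(res.x)
```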
By: | Thomas Goodwin; Jing Tian |
Abstract: | We propose a state space modeling framework to evaluate a set of forecasts that target the same variable but are updated along the forecast horizon. The approach decomposes forecast errors into three distinct horizon-specific processes, namely, bias, rational error and implicit error, and attributes forecast revisions to corrections for these forecast errors. We derive the conditions under which forecasts that contain error that is irrelevant to the target can still present the second moment bounds of rational forecasts. By evaluating multi-horizon daily maximum temperature forecasts for Melbourne, Australia, we demonstrate how this modeling framework analyzes the dynamics of the forecast revision structure across horizons. Understanding forecast revisions is critical for weather forecast users to determine the optimal timing of their planning decisions. |
Keywords: | Rational forecasts, implicit forecasts, forecast revision structure, weather forecasts. |
JEL: | C32 C53 |
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2017-67&r=ecm |
By: | Hausman, Jerry A. (MIT); Pinkovskiy, Maxim L. (Federal Reserve Bank of New York) |
Abstract: | We propose a novel estimator for the dynamic panel model, which solves the failure of strict exogeneity by calculating the bias in the first-order conditions as a function of the autoregressive parameter and solving the resulting equation. We show that this estimator performs well compared with approaches in current use. We also propose a general method for including predetermined variables in fixed-effects panel regressions that appears to perform well. |
Keywords: | dynamic panel data; bias correction; econometrics |
JEL: | C2 C23 C26 |
Date: | 2017–10–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fednsr:824&r=ecm |
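The paper's device, solving the biased first-order condition as a function of the autoregressive parameter, can be illustrated with the textbook Nickell (1981) bias of the within estimator in an AR(1) panel: compute the FE estimate, then solve rho + bias(rho, T) = rho_FE for rho. This numpy/scipy sketch applies the same idea to the classical bias formula; it is not the authors' exact estimator.

```python
import numpy as np
from scipy.optimize import brentq

def nickell_bias(rho, T):
    """Nickell (1981) asymptotic bias of the within (FE) estimator, AR(1) panel."""
    A = 1 - (1 - rho**T) / (T * (1 - rho))
    num = -(1 + rho) / (T - 1) * A
    den = 1 - 2 * rho / ((1 - rho) * (T - 1)) * A
    return num / den

def fe_ar1(y):
    """Within estimator of the AR(1) coefficient; y is (N, T+1)."""
    y0, y1 = y[:, :-1], y[:, 1:]
    y0d = y0 - y0.mean(axis=1, keepdims=True)
    y1d = y1 - y1.mean(axis=1, keepdims=True)
    return (y0d * y1d).sum() / (y0d**2).sum()

def bias_corrected_ar1(y):
    """Solve rho + bias(rho, T) = rho_FE for rho by root finding."""
    rho_fe, T = fe_ar1(y), y.shape[1] - 1
    return brentq(lambda r: r + nickell_bias(r, T) - rho_fe, -0.99, 0.999)

rng = np.random.default_rng(0)
N, T, rho = 500, 10, 0.5
y = np.zeros((N, T + 1))
alpha = rng.normal(size=N)                     # fixed effects
for t in range(T):
    y[:, t + 1] = alpha + rho * y[:, t] + rng.normal(size=N)
print("FE:", fe_ar1(y), " corrected:", bias_corrected_ar1(y))
```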
By: | Pillay, Sagaren; de Beer, Joe |
Abstract: | Statistical data are often compiled at different frequencies. When analysing high and low frequency data on the same variable, one often encounters consistency problems. In particular, the lack of consistency between quarterly and annual data makes time series analysis very difficult. This paper discusses the processes and challenges involved in aligning the quarterly and annual financial statistics surveys by industry. The process consists of three phases: the initial editing, to deal with large inconsistencies; a presentation of the methodology that uses the quarterly related series to interpolate the annual series; and an analysis of the results. In the initial editing phase the large differences are resolved by manually editing the input data and imputing missing data. The temporal disaggregation/benchmarking technique used is based on the Fernandez optimisation method, which allows random drift in the error process. The main characteristic of this method is that quarter-to-quarter movements are preserved while quarterly-annual alignment is achieved. The diagnostics performed indicate that the Fernandez random walk model produces plausible results. |
Keywords: | disaggregation, benchmarking, optimisation, Fernandez random walk model |
JEL: | C18 |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:82130&r=ecm |
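A compact numpy implementation of the Fernandez (1981) random-walk disaggregation: GLS of the annual totals on the annualized indicator design with a random-walk error covariance, plus a smooth distribution of the annual residuals across quarters, so that quarter-to-quarter movements track the indicator while quarterly values sum exactly to the annual benchmarks. The intercept-plus-indicator design is an illustrative choice.

```python
import numpy as np

def fernandez_disaggregate(Y_annual, x_quarterly):
    """Fernandez (1981) temporal disaggregation: quarterly series consistent
    with annual totals, with random-walk errors."""
    m = len(Y_annual)
    n = 4 * m
    X = np.column_stack([np.ones(n), x_quarterly])
    C = np.kron(np.eye(m), np.ones(4))          # annual aggregation matrix (m, n)
    D = np.eye(n) - np.eye(n, k=-1)             # first differences, D[0,0] = 1
    V = np.linalg.inv(D.T @ D)                  # random-walk covariance (up to scale)
    W = np.linalg.inv(C @ V @ C.T)
    CX = C @ X
    beta = np.linalg.solve(CX.T @ W @ CX, CX.T @ W @ Y_annual)   # GLS slope
    resid = Y_annual - CX @ beta
    return X @ beta + V @ C.T @ (W @ resid)     # satisfies C @ y = Y_annual exactly

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=40)) + 20        # quarterly related series, 10 years
y_true = 5 + 2 * x + np.cumsum(rng.normal(size=40))
Y_annual = y_true.reshape(10, 4).sum(axis=1)
y_hat = fernandez_disaggregate(Y_annual, x)
print(np.allclose(y_hat.reshape(10, 4).sum(axis=1), Y_annual))  # True
```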
By: | Ryan Chahrour (Boston College); Kyle Jurado (Duke University) |
Abstract: | When can structural shocks be recovered from observable data? We present a necessary and sufficient condition that gives the answer for any linear model. Invertibility, which requires that shocks be recoverable from current and past data only, is sufficient but not necessary. This means that semi-structural empirical methods like structural vector autoregression analysis can be applied even to models with non-invertible shocks. We illustrate these results in the context of a simple model of consumption determination with productivity shocks and non-productivity noise shocks. In an application to postwar U.S. data, we find that non-productivity shocks account for a large majority of fluctuations in aggregate consumption over business cycle frequencies. |
Keywords: | structural vector autoregression, noise shocks |
JEL: | D84 E32 C31 |
Date: | 2017–11–01 |
URL: | http://d.repec.org/n?u=RePEc:boc:bocoec:935&r=ecm |
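A well-known sufficient (but, as the paper stresses, not necessary) condition for recoverability from current and past data is the "poor man's invertibility condition" of Fernandez-Villaverde, Rubio-Ramirez, Sargent and Watson (2007): the eigenvalues of A - B D^{-1} C lie inside the unit circle. A numpy sketch, with an MA(1) as the test case:

```python
import numpy as np

def poor_mans_invertibility(A, B, C, D):
    """Sufficient condition for invertibility of the linear state-space model
        x[t+1] = A x[t] + B w[t+1],  y[t+1] = C x[t] + D w[t+1]:
    eigenvalues of A - B D^{-1} C strictly inside the unit circle.
    Sufficient only: shocks may still be recoverable from the full
    (past and future) data record when this check fails."""
    M = A - B @ np.linalg.inv(D) @ C
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# univariate example: y[t] = w[t] + theta*w[t-1] (MA(1)); invertible iff |theta| < 1
def ma1_state_space(theta):
    return (np.array([[0.0]]), np.array([[theta]]),
            np.array([[1.0]]), np.array([[1.0]]))

print(poor_mans_invertibility(*ma1_state_space(0.5)))   # True
print(poor_mans_invertibility(*ma1_state_space(2.0)))   # False: non-invertible
```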
By: | Matyas Barczy; Mohamed Ben Alaya; Ahmed Kebaier; Gyula Pap |
Abstract: | We consider a stable Cox-Ingersoll-Ross process driven by a standard Wiener process and a spectrally positive strictly stable Lévy process, and we study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate based on continuous time observations. We distinguish three cases: subcritical, critical and supercritical. In all cases we prove strong consistency of the MLE in question; in the subcritical case asymptotic normality, and in the supercritical case asymptotic mixed normality, are shown as well. In the critical case the description of the asymptotic behaviour of the MLE in question remains open. |
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1711.02140&r=ecm |
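For the pure-Wiener CIR case (no stable jump component), the continuous-record MLE of the drift parameters can be approximated on a discrete grid: the Euler step divided by the square root of the state has homoskedastic errors, so the estimator reduces to least squares in transformed variables. A numpy sketch with an Euler-simulated subcritical path (step size and parameters are illustrative):

```python
import numpy as np

def cir_drift_mle(X, dt):
    """Drift estimation for dX = (a - b X) dt + sigma sqrt(X) dW on a grid.
    Dividing the Euler step by sqrt(X) makes the errors homoskedastic, so the
    (approximate) continuous-record MLE of (a, b) is OLS in transformed data."""
    x0, dX = X[:-1], np.diff(X)
    Z = np.column_stack([dt / np.sqrt(x0), -np.sqrt(x0) * dt])
    ab = np.linalg.lstsq(Z, dX / np.sqrt(x0), rcond=None)[0]
    return ab   # (a_hat, b_hat); b_hat > 0 is the subcritical (ergodic) case

rng = np.random.default_rng(0)
dt, n, a, b, sig = 0.01, 100_000, 1.0, 0.5, 0.2
X = np.empty(n)
X[0] = a / b
for t in range(n - 1):                                 # Euler simulation
    X[t + 1] = abs(X[t] + (a - b * X[t]) * dt
                   + sig * np.sqrt(X[t] * dt) * rng.normal())
print(cir_drift_mle(X, dt))                            # roughly (1.0, 0.5)
```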
By: | Davy Paindaveine; Thomas Verdebout |
Abstract: | We consider one of the most important problems in directional statistics, namely the problem of testing the null hypothesis that the spike direction theta of a Fisher-von Mises-Langevin distribution on the p-dimensional unit hypersphere is equal to a given direction theta_0. After a reduction through invariance arguments, we derive local asymptotic normality (LAN) results in a general high-dimensional framework where the dimension p_n goes to infinity at an arbitrary rate with the sample size n, and where the concentration kappa_n behaves in a completely free way with n, which offers a spectrum of problems ranging from arbitrarily easy to arbitrarily challenging ones. We identify seven asymptotic regimes, depending on the convergence/divergence properties of (kappa_n), that yield different contiguity rates and different limiting experiments. In each regime, we derive Le Cam optimal tests under specified kappa_n and we compute, from the Le Cam third lemma, asymptotic powers of the classical Watson test under contiguous alternatives. We further establish LAN results with respect to both spike direction and concentration, which allows us to discuss optimality also under unspecified kappa_n. To obtain a full understanding of the non-null behavior of the Watson test, we use martingale CLTs to derive its local asymptotic powers in the broader, semiparametric, model of rotationally symmetric distributions. A Monte Carlo study shows that the finite-sample behaviors of the various tests remarkably agree with our asymptotic results. |
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/260378&r=ecm |
By: | Allin Cottrell (Wake Forest University) |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:anc:wgretl:4&r=ecm |
By: | Renée Fry-McKibbin; Cody Yu-Ling Hsiao; Vance L. Martin |
Abstract: | Joint tests of contagion are derived which are designed to have power where contagion operates simultaneously through coskewness, cokurtosis and covolatility. Finite sample properties of the new tests are evaluated and compared with existing tests of contagion that focus on a single channel. Applying the tests to daily Eurozone equity returns from 2005 to 2014 shows that contagion operates through higher order moment channels during the GFC and the European debt crisis, which are not necessarily detected by traditional tests based on correlations. |
Keywords: | Coskewness, Cokurtosis, Covolatility, Lagrange multiplier tests, European financial crisis, equity markets. |
JEL: | C1 F3 |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2017-65&r=ecm |
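The higher-order moment channels the joint tests combine have simple sample counterparts; the numpy sketch below centres each higher-order comoment so that it is approximately zero under bivariate normality. These are the raw ingredients only, not the Lagrange multiplier statistics themselves.

```python
import numpy as np

def comoment_channels(x, y):
    """Sample comoment channels between two standardized return series;
    the higher-order channels are centred to be (approximately) zero
    under bivariate normality."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    rho = np.mean(zx * zy)
    return {
        "correlation": rho,
        "coskewness_12": np.mean(zx * zy**2),              # return vs volatility
        "coskewness_21": np.mean(zx**2 * zy),
        "cokurtosis_13": np.mean(zx * zy**3) - 3 * rho,
        "covolatility_22": np.mean(zx**2 * zy**2) - 1 - 2 * rho**2,
    }

rng = np.random.default_rng(0)
calm = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=2000)
print(comoment_channels(calm[:, 0], calm[:, 1]))   # higher channels near zero
```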