NEP: New Economics Papers on Econometrics
By: | Helmut Lütkepohl; Thore Schlaak |
Abstract: | Different bootstrap methods and estimation techniques for inference on structural vector autoregressive (SVAR) models identified by conditional heteroskedasticity are reviewed and compared in a Monte Carlo study. The model is an SVAR model with generalized autoregressive conditional heteroskedastic (GARCH) innovations. The bootstrap methods considered are a wild bootstrap, a moving-blocks bootstrap and a GARCH-residual-based bootstrap. Estimation is done by Gaussian maximum likelihood, a simplified procedure based on univariate GARCH estimations, and a method that does not re-estimate the GARCH parameters in each bootstrap replication. It is found that the computationally most efficient method is competitive with the computationally more demanding methods and often leads to the smallest confidence sets without sacrificing coverage precision. An empirical model for assessing monetary policy in the U.S. is considered as an example. It is found that the different inference methods for impulse responses lead to qualitatively very similar results.
Keywords: | Structural vector autoregression, conditional heteroskedasticity, GARCH, identification via heteroskedasticity |
JEL: | C32 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1750&r=ecm |
By: | Eric Blankmeyer |
Abstract: | Errors-in-variables (EIV) is a long-standing, difficult issue in linear regression, and progress depends in part on new identifying assumptions. I characterize measurement error as bad-leverage points and assume that fewer than half the sample observations are heavily contaminated, in which case a high-breakdown robust estimator may be able to isolate and downweight or discard the problematic data. In simulations of simple and multiple regression where EIV affects 25% of the data and R-squared is mediocre, certain high-breakdown estimators have small bias and reliable confidence intervals.
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.02814&r=ecm |
By: | Amaresh K Tiwari |
Abstract: | We propose a new control function (CF) method for binary response outcomes in a triangular system with unobserved heterogeneity of multiple dimensions. The identified CFs are the expected values of the heterogeneity terms in the reduced-form equations conditional on the endogenous variables, X_i ≡ (x_{i1}, …, x_{iT}), and the exogenous variables, Z_i ≡ (z_{i1}, …, z_{iT}). The method requires weaker restrictions compared to traditional CF methods for triangular systems with imposed structures similar to ours, and point-identifies average partial effects with discrete instruments. We discuss semiparametric identification of structural measures using the proposed CFs. An application and Monte Carlo experiments compare several alternative methods with ours.
Keywords: | Control Functions, Unobserved Heterogeneity, Identification, Instrumental Variables, Average Partial Effects, Child Labor. |
JEL: | C13 C18 C33 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:mtk:febawb:110&r=ecm |
By: | Honore, Bo E. (Princeton University); Hu, Luojia (Federal Reserve Bank of Chicago) |
Abstract: | It is well understood that classical sample selection models are not semiparametrically identified without exclusion restrictions. Lee (2009) developed bounds for the parameters in a model that nests the semiparametric sample selection model. These bounds can be wide. In this paper, we investigate bounds that impose the full structure of a sample selection model with errors that are independent of the explanatory variables but have unknown distribution. We find that the additional structure in the classical sample selection model can significantly reduce the identified set for the parameters of interest. Specifically, we construct the identified set for the parameter vector of interest. It is a one-dimensional line segment in the parameter space, and we demonstrate that this line segment can be short in principle as well as in practice. We show that the identified set is sharp when the model is correct and empty when the model is not correct. We also provide non-sharp bounds under the assumption that the model is correct. These are easier to compute and are associated with lower statistical uncertainty than the sharp bounds. Throughout the paper, we illustrate our approach by estimating a standard sample selection model for wages.
Keywords: | Sample selection; exclusion restrictions; bounds; partial identification
JEL: | C10 C14 |
Date: | 2018–07–02 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-2018-10&r=ecm |
By: | Chunrong Ai; Lukang Huang; Zheng Zhang |
Abstract: | Wang and Tchetgen Tchetgen (2017) studied identification and estimation of the average treatment effect when some confounders are unmeasured. Under their identification condition, they showed that the semiparametric efficient influence function depends on five unknown functionals. They proposed to parameterize all functionals and estimate the average treatment effect from the efficient influence function by replacing the unknown functionals with estimated functionals. They established that their estimator is consistent when certain functionals are correctly specified and attains the semiparametric efficiency bound when all functionals are correctly specified. In applications, it is likely that those functionals could all be misspecified. Consequently their estimator could be inconsistent or consistent but not efficient. This paper presents an alternative estimator that does not require parameterization of any of the functionals. We establish that the proposed estimator is always consistent and always attains the semiparametric efficiency bound. A simple and intuitive estimator of the asymptotic variance is presented, and a small-scale simulation study reveals that the proposed estimator outperforms the existing alternatives in finite samples.
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.05678&r=ecm |
By: | Farnè, Matteo; Vouldis, Angelos T. |
Abstract: | Outlier detection in high-dimensional datasets poses new challenges that have not been investigated in the literature. In this paper, we present an integrated methodology for the identification of outliers which is suitable for datasets with more variables than observations. Our method aims to utilise the entire relevant information present in a dataset to detect outliers in an automated way, a feature that renders the method suitable for application to large-dimensional datasets. Our proposed five-step procedure for regression outlier detection entails a robust selection stage of the most explanatory variables, the estimation of a robust regression model based on the selected variables, and a criterion to identify outliers based on robust measures of the residuals' dispersion. The proposed procedure also deals with data redundancy and missing observations, which may inhibit the statistical processing of the data due to the ill-conditioning of the covariance matrix. The method is validated in a simulation study and an application to actual supervisory data on banks’ total assets. JEL Classification: C18, C81, G21
Keywords: | banking data, high dimension, missing data, outlier detection, robust regression, variable selection |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20182171&r=ecm |
By: | Dmitry Arkhangelsky; Guido Imbens |
Abstract: | We develop a new approach for estimating average treatment effects in observational studies with unobserved cluster-level heterogeneity. Previous approaches relied heavily on linear fixed-effect specifications that severely limit the heterogeneity between clusters. These methods imply that linearly adjusting for differences between clusters in average covariate values addresses all concerns with cross-cluster comparisons. Instead, we consider an exponential family structure on the within-cluster distribution of covariates and treatments, which implies that a low-dimensional sufficient statistic can summarize the empirical distribution, where this sufficient statistic may include functions of the data beyond average covariate values. We then use modern causal inference methods to construct flexible and robust estimators.
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.02099&r=ecm |
By: | Honore, Bo E. (Princeton University); Hu, Luojia (Federal Reserve Bank of Chicago) |
Abstract: | The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the bootstrap can be very time-consuming. In a recent paper (Honoré and Hu, 2017), we proposed a “Poor (Wo)man's Bootstrap” based on one-dimensional estimators. In this paper, we propose a modified, simpler method and illustrate its potential for estimating asymptotic variances.
Keywords: | standard error; bootstrap; inference; censored regression; two-step estimation |
JEL: | C10 C15 C18 |
Date: | 2018–06–29 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-2018-11&r=ecm |
By: | GONÇALVES, Sílvia; PERRON, Benoit |
Abstract: | We consider bootstrap methods for factor-augmented regressions with cross-sectional dependence among idiosyncratic errors. This is important to capture the bias of the OLS estimator derived recently by Gonçalves and Perron (2014). We first show that a common approach of resampling cross-sectional vectors over time is invalid in this context because it induces a zero bias. We then propose the cross-sectional dependent (CSD) bootstrap, where bootstrap samples are obtained by taking a random vector and multiplying it by the square root of a consistent estimator of the covariance matrix of the idiosyncratic errors. We show that if the covariance matrix estimator is consistent in the spectral norm, then the CSD bootstrap is consistent, and we verify this condition for the thresholding estimator of Bickel and Levina (2008). Finally, we apply our new bootstrap procedure to forecasting inflation using convenience yields as recently explored by Gospodinov and Ng (2013).
Keywords: | Factor model; bootstrap; asymptotic bias |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:mtl:montde:2018-07&r=ecm |
By: | Bontemps, Christian; Kumar, Rohit |
Abstract: | In this paper, we consider inference procedures for entry games with complete information. Due to the presence of multiple equilibria, we know that such a model may be set identified without imposing further restrictions. We complete the model with the unknown selection mechanism and characterize geometrically the set of predicted choice probabilities, in our case, a convex polytope with many facets. Testing whether a parameter belongs to the identified set is equivalent to testing whether the true choice probability vector belongs to this convex set. Using tools from the convex analysis, we calculate the support function and the extreme points. The calculation yields a finite number of inequalities, when the explanatory variables are discrete, and we characterized them once for all. We also propose a procedure that selects the moment inequalities without having to evaluate all of them. This procedure is computationally feasible for any number of players and is based on the geometry of the set. Furthermore, we exploit the specific structure of the test statistic used to test whether a point belongs to a convex set to propose the calculation of critical values that are computed once and independent of the value of the parameter tested, which drastically improves the calculation time. Simulations in a separate section suggest that our procedure performs well compared with existing methods. |
Keywords: | set identification; entry games; convex set; support function |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:32886&r=ecm |
By: | Hecq, Alain; Goetz, Thomas |
Abstract: | We analyze Granger causality testing in mixed-frequency VARs with possibly (co)integrated time series. It is well known that conducting inference on a set of parameters depends on knowing the correct (co)integration order of the processes involved. Corresponding tests are, however, known to often suffer from size distortions and/or a loss of power. Our approach, which boils down to the mixed-frequency analogue of the one by Toda and Yamamoto (1995) or Dolado and Lütkepohl (1996), works for variables that are stationary, integrated of an arbitrary order, or cointegrated. As it only requires the estimation of a mixed-frequency VAR in levels with an appropriately adjusted lag length, after which Granger causality tests can be conducted using standard Wald tests, it is of great practical appeal. We show that the presence of non-stationary and trivially cointegrated high-frequency regressors (Goetz et al., 2013) leads to standard distributions when testing for causality on a parameter subset, without any need to augment the VAR order. Monte Carlo simulations and two applications, involving the oil price and consumer prices as well as GDP and industrial production in Germany, illustrate our approach.
Keywords: | Mixed frequencies; Granger causality; Hypothesis testing; Vector autoregressions; Cointegration
JEL: | C32 |
Date: | 2018–06–27 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:87746&r=ecm |
By: | Stanislav Anatolyev; Anna Mikusheva |
Abstract: | This paper re-examines the problem of estimating risk premia in linear factor pricing models. Typically, the data used in the empirical literature are characterized by weakness of some pricing factors, strong cross-sectional dependence in the errors, and (moderately) high cross-sectional dimensionality. Using an asymptotic framework where the number of assets/portfolios grows with the time span of the data while the risk exposures of weak factors are local-to-zero, we show that the conventional two-pass estimation procedure delivers inconsistent estimates of the risk premia. We propose a new estimation procedure based on sample-splitting instrumental variables regression. The proposed estimator of risk premia is robust to weak included factors and to the presence of strong unaccounted cross-sectional error dependence. We derive the many-asset weak factor asymptotic distribution of the proposed estimator, show how to construct its standard errors, verify its performance in simulations, and revisit some empirical studies. |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.04094&r=ecm |
By: | Milan Kumar Das; Anindya Goswami |
Abstract: | We have developed a statistical technique to test the model assumption of a binary regime-switching extension of the geometric Brownian motion (GBM) model by proposing a new discriminating statistic. Given a time series of data, we identify an admissible class of regime-switching candidate models for the statistical inference. By performing several systematic experiments, we show that the sampling distribution of the test statistic differs drastically if the model assumption changes from GBM to Markov-modulated GBM, or to semi-Markov-modulated GBM. Furthermore, we implement this statistic to test the regime-switching hypothesis with Indian sectoral indices.
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.04393&r=ecm |
By: | Conny Wunsch; Renate Strobl |
Abstract: | Understanding the mechanisms through which treatment effects come about is crucial for designing effective interventions. The identification of such causal mechanisms is challenging and typically requires strong assumptions. This paper discusses identification and estimation of natural direct and indirect effects in so-called double randomization designs that combine two experiments. The first and main experiment randomizes the treatment and measures its effect on the mediator and the outcome of interest. A second auxiliary experiment randomizes the mediator of interest and measures its effect on the outcome. We show that such designs allow for identification based on an assumption that is weaker than the assumption of sequential ignorability that is typically made in the literature. It allows for unobserved confounders that do not cause heterogeneous mediator effects. We demonstrate estimation of direct and indirect effects based on different identification strategies that we compare to our approach using data from a laboratory experiment we conducted in Kenya. |
Keywords: | direct and indirect effects, causal inference, mediation analysis, identification |
JEL: | C14 C31 C90 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_7142&r=ecm |
By: | Catherine Doz (PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Anna Petronevich (PSE - Paris School of Economics, CREST - Centre de Recherche en Economie et Statistique [Bruz] - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz]) |
Abstract: | The Markov-Switching Dynamic Factor Model (MS-DFM) has been used in different applications, notably in business cycle analysis. When the cross-sectional dimension of the data is high, maximum likelihood estimation becomes infeasible due to the excessive number of parameters. In this case, the MS-DFM can be estimated in two steps: in the first step, the common factor is extracted from a database of indicators, and in the second step, a Markov-switching autoregressive model is fitted to this extracted factor. The validity of the two-step method is conventionally accepted, although the asymptotic properties of the two-step estimates have not been studied yet. In this paper we examine their consistency as well as their small-sample behavior with the help of Monte Carlo simulations. Our results indicate that the two-step estimates are consistent when the number of cross-section series and time observations is large; however, as expected, the estimates and their standard errors tend to be biased in small samples.
Keywords: | Markov-switching, Dynamic Factor models, two-step estimation, small-sample performance, consistency, Monte Carlo simulations
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-01592863&r=ecm |
By: | Alfons, A.; Ates, N.Y.; Groenen, P.J.F. |
Abstract: | Mediation analysis is central to theory building and testing in organizations research. Management scholars often use linear regression analysis based on normal-theory maximum likelihood estimators to test mediation. However, these estimators are very sensitive to deviations from normality assumptions, such as outliers or heavy tails of the observed distribution. This sensitivity seriously threatens the empirical testing of theory about mediation mechanisms, as many empirical studies lack reporting of outlier treatments and checks on model assumptions. To overcome this threat, we develop a fast and robust mediation method that yields reliable results even when the data deviate from normality assumptions. Simulation studies show that our method is superior to existing methods both in estimating the effect size and in reliably assessing its significance. We illustrate the mechanics of our proposed method in three empirical cases and provide freely available software in R and SPSS to enhance its accessibility and adoption by researchers and practitioners.
Keywords: | Mediation analysis, robust statistics, linear regression, bootstrap |
Date: | 2018–08–03 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureri:109594&r=ecm |
By: | Strobl, Renate; Wunsch, Conny |
Abstract: | Understanding the mechanisms through which treatment effects come about is crucial for designing effective interventions. The identification of such causal mechanisms is challenging and typically requires strong assumptions. This paper discusses identification and estimation of natural direct and indirect effects in so-called double randomization designs that combine two experiments. The first and main experiment randomizes the treatment and measures its effect on the mediator and the outcome of interest. A second auxiliary experiment randomizes the mediator of interest and measures its effect on the outcome. We show that such designs allow for identification based on an assumption that is weaker than the assumption of sequential ignorability that is typically made in the literature. It allows for unobserved confounders that do not cause heterogeneous mediator effects. We demonstrate estimation of direct and indirect effects based on different identification strategies that we compare to our approach using data from a laboratory experiment we conducted in Kenya. |
Keywords: | causal inference; Direct and indirect effects; identification; mediation analysis |
JEL: | C31 |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:13028&r=ecm |
By: | Alexis Derumigny (CREST; ENSAE); Jean-David Fermanian (CREST; ENSAE) |
Abstract: | Conditional Kendall's tau is a measure of dependence between two random variables, conditionally on some covariates. We study nonparametric estimators of such quantities using kernel smoothing techniques. Then, we assume a regression-type relationship between conditional Kendall's tau and covariates, in a parametric setting with possibly a large number of regressors. This model may be sparse, and the underlying parameter is estimated through a penalized criterion. The theoretical properties of all these estimators are stated. We prove non-asymptotic bounds with explicit constants that hold with high probability. We derive their consistency, their asymptotic law and some oracle properties. Some simulations and applications to real data conclude the paper. |
Keywords: | conditional dependence measures, kernel smoothing, regression-type models |
Date: | 2018–02–21 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2018-01&r=ecm |
By: | Stelios Arvanitis (Athens University of Economics and Business); O. Scaillet (University of Geneva and Swiss Finance Institute); Nikolas Topaloglou (Athens University of Economics and Business) |
Abstract: | Using properties of the cdf of a random variable defined as a saddle-type point of a real-valued continuous stochastic process, we derive first-order asymptotic properties of tests for stochastic spanning with respect to a stochastic dominance relation. First, we define the concept of Markowitz stochastic dominance spanning, and develop an analytical representation of the spanning property. Second, we construct a non-parametric test for spanning via an empirical analogue. The method determines whether introducing new securities or relaxing investment constraints improves the investment opportunity set of investors driven by Markowitz stochastic dominance. In an application to standard data sets of historical stock market returns, we reject market portfolio Markowitz efficiency as well as two-fund separation. Hence, there is evidence that equity management through base assets can outperform the market for investors with Markowitz-type preferences.
Keywords: | Saddle-Type Point, Markowitz Stochastic Dominance, Spanning Test, Linear and Mixed integer programming, reverse S-shaped utility |
JEL: | C12 C14 C44 C58 D81 G11 |
Date: | 2018–02 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1808&r=ecm |
By: | Vishal Kamat |
Abstract: | This paper studies the identifying content of the instrument monotonicity assumption of Imbens and Angrist (1994) on the distribution of potential outcomes in a model with a binary outcome, a binary treatment and an exogenous binary instrument. In the context of this setup, conclusions from previous results can be generally summarized as follows: (i) imposing instrument monotonicity can misspecify the model; and (ii) when the model is not misspecified, instrument monotonicity does not have any identifying content on the marginal distributions of the potential outcomes. In this paper, I demonstrate that instrument monotonicity can however have identifying content on features of the joint distribution of the potential outcomes when the model is not misspecified. I illustrate how this identifying content can lead to additional informative conclusions with respect to the proportion who benefit, a specific feature of the joint distribution of the potential outcomes. |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.01661&r=ecm |
By: | Ernesto Carrella; Richard M. Bailey; Jens Koed Madsen |
Abstract: | By recasting indirect inference estimation as a prediction rather than a minimization, and by using regularized regressions, we can bypass the three major problems of estimation: selecting the summary statistics, defining the distance function, and minimizing it numerically. By substituting regression with classification we can extend this approach to model selection as well. We present three examples: a statistical fit, the parametrization of a simple real business cycle model, and heuristics selection in a fishery agent-based model. The outcome is a method that automatically chooses summary statistics, weighs them, and uses them to parametrize models without running any direct minimization.
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.01579&r=ecm |
By: | Christian Gouriéroux (University of Toronto, TSE and PSL); Alain Monfort (CREST); Jean-Michel Zakoian (CREST; University of Lille) |
Abstract: | In a transformation model y_t = c[a(x_t, β), u_t], where the errors u_t are i.i.d. and independent of the explanatory variables x_t, the parameters can be estimated by a pseudo-maximum likelihood (PML) method, that is, by using a misspecified distribution of the errors, but the PML estimator of β is in general not consistent. We explain in this paper how to nest the initial model in an identified augmented model with more parameters in order to derive consistent PML estimators of appropriate functions of the parameter β. The usefulness of the consistency result is illustrated by examples of systems of nonlinear equations, conditionally heteroskedastic models, stochastic volatility, or models with spatial interactions.
Keywords: | Pseudo-Maximum Likelihood, Transformation Model, Identification, Consistency, Stochastic Volatility, Conditional Heteroskedasticity, Spatial Interactions |
Date: | 2018–06–01 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2018-08&r=ecm |
By: | Silvia Miranda Agrippino (Bank of England); Giovanni Ricco (Observatoire français des conjonctures économiques) |
Abstract: | This paper discusses the conditions for identification with external instruments in structural VARs under partial invertibility. We observe that in this case the shocks of interest and their effects can be recovered using an external instrument, provided that a condition of limited lag exogeneity holds. This condition is weaker than that required for LP-IV, and allows for recoverability of impact effects also under VAR misspecification. We assess our claims in a simulated environment, and provide an empirical application to the relevant case of identification of monetary policy shocks.
Keywords: | Identification with external instruments; Structural VAR; Invertibility; Monetary Policy Shocks |
JEL: | C3 C32 E30 E52 |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/sb7ftvod18eb8hqptthmmeddt&r=ecm |
By: | Moriah B. Bostian (Department of Economics, Lewis & Clark College, Portland, OR USA); Cinzia Daraio (Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), University of Rome La Sapienza, Rome, Italy); Rolf Fare (Department of Applied Economics, Oregon State University, Corvallis, OR USA); Shawna Grosskopf (Department of Economics, Oregon State University, Corvallis, OR USA); Maria Grazia Izzo (Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), University of Rome La Sapienza, Rome, Italy ; Center for Life Nano Science, Fondazione Istituto Italiano di Tecnologia (IIT), Rome, Italy); Luca Leuzzi (CNR-NANOTEC, Institute of Nanotechnology, Soft and Living Matter Lab, Rome, Italy ; Department of Physics, Sapienza University of Rome, Italy); Giancarlo Ruocco (Center for Life Nano Science, Fondazione Istituto Italiano di Tecnologia (IIT), Rome, Italy ; Department of Physics, Sapienza University of Rome, Italy); William L. Weber (Department of Economics and Finance, Southeast Missouri State University, Cape Girardeau, MO USA) |
Abstract: | Networks are general models that represent the relationships within or between systems widely studied in statistical mechanics. Nonparametric productivity network analysis (Network-DEA) typically treats these networks in a descriptive rather than statistical framework. We fill this gap by developing a general framework, involving information science, machine learning and statistical inference from the physics of complex systems, for modeling the production process based on the axiomatics of Network-DEA connected to Georgescu-Roegen's funds and flows model. The proposed statistical approach allows us to infer the network topology in a Bayesian framework. An application to assess knowledge productivity at a world-country level is provided.
Keywords: | Network DEA ; Bayesian statistics ; Generalized multicomponent Ising Model ; Georgescu Roegen |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:aeg:report:2018-06&r=ecm |
By: | Antoine Mandel (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Amir Sani (CFM-Imperial Institute of Quantitative Finance - Imperial College London, CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Forecast combination algorithms provide a robust solution to noisy data and shifting process dynamics. However, in practice, sophisticated combination methods often fail to consistently outperform the simple mean combination. This "forecast combination puzzle" limits the adoption of alternative combination approaches and forecasting algorithms by policy-makers. Through an adaptive machine learning algorithm designed for streaming data, this paper proposes a novel time-varying forecast combination approach that retains distribution-free guarantees in performance while automatically adapting combinations according to the performance of any selected combination approach or forecaster. In particular, the proposed algorithm offers policy-makers the ability to compute the worst-case loss with respect to the mean combination ex-ante, while also guaranteeing that the combination performance is never worse than this explicit guarantee. Theoretical bounds are reported with respect to the relative mean squared forecast error. Out-of-sample empirical performance is evaluated on the Stock and Watson seven-country dataset and the ECB Survey of Professional Forecasters.
Keywords: | Forecasting, Forecast Combination Puzzle, Forecast combinations, Machine Learning, Econometrics
Date: | 2017–04–19 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-01317974&r=ecm |
By: | Jan Obloj; Johannes Wiesel |
Abstract: | We consider statistical estimation of superhedging prices using historical stock returns in a frictionless market with d traded assets. We introduce a simple plug-in estimator based on empirical measures and show that it is consistent but lacks suitable robustness. This is addressed by our improved estimators, which use a larger set of martingale measures defined through a tradeoff between the radius of Wasserstein balls around the empirical measure and the allowed norm of martingale densities. We also study convergence rates and the convergence of superhedging strategies; our study extends, in part, to the case of a market with traded options and to a multiperiod setting.
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1807.04211&r=ecm |