nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒01‒22
28 papers chosen by
Sune Karlsson
Örebro universitet

  1. Marked and Weighted Empirical Processes of Residuals with Applications to Robust Regressions By Vanessa Berenguer Rico; Bent Nielsen
  2. Latent Instrumental Variables: A Critical Review By Irene Hueter
  3. Confidence bands for coefficients in high dimensional linear models with error-in-variables By Alexandre Belloni; Victor Chernozhukov; Abhishek Kaul
  4. Identifying Latent Group Structures in Nonlinear Panels By Wang, Wuyi; Su, Liangjun
  5. Confidence set for group membership By Andreas Dzemski; Ryo Okui
  6. Semiparametric efficient empirical higher order influence function estimators By Rajarshi Mukherjee; Whitney K. Newey; James Robins
  7. Frequency Domain Estimation of Cointegrating Vectors with Mixed Frequency and Mixed Sample Data By Chambers, MJ
  8. Extreme Returns and Intensity of Trading By Gloria Gonzalez-Rivera; Wei Lin
  9. Locally stationary spatio-temporal processes By Yasumasa Matsuda; Yoshihiro Yajima
  10. Maximum Likelihood Estimation in Possibly Misspecified Dynamic Models with Time-Inhomogeneous Markov Regimes By Demian Pouzo; Zacharias Psaradakis; Martin Sola
  11. Sparse Bayesian time-varying covariance estimation in many dimensions By Gregor Kastner
  12. Estimation and Inference of Treatment Effects with $L_2$-Boosting in High-Dimensional Settings By Ye Luo; Martin Spindler
  13. Solving Dynamic Discrete Choice Models: Integrated or Expected Value Function? By Patrick Kofod Mogensen
  14. Incomplete English auction models with heterogeneity By Andrew Chesher; Adam Rosen
  15. Oracle Estimation of a Change Point in High Dimensional Quantile Regression By Sokbae Lee; Yuan Liao; Myung Hwan Seo; Youngki Shin
  16. Some Large Sample Results for the Method of Regularized Estimators By Michael Jansson; Demian Pouzo
  17. Cross-Validating Synthetic Controls By Becker, Martin; Klößner, Stefan; Pfeifer, Gregor
  18. A Flexible Fourier Form Nonlinear Unit Root Test Based on ESTAR Model By Güriş, Burak
  19. A New Kind of Two-Stage Least Squares Based on Shapley Value Regression By Mishra, SK
  20. General Aggregation of Misspecified Asset Pricing Models By Gospodinov, Nikolay; Maasoumi, Esfandiar
  21. Working Paper 14-17 - Modelling unobserved heterogeneity in distribution - Finite mixtures of the Johnson family of distributions By Peter Willemé
  22. About tests of the "simplifying" assumption for conditional copulas By Alexis Derumigny; Jean-David Fermanian
  23. Understanding the effect of measurement error on quantile regressions By Andrew Chesher
  24. Revealed Price Preference: Theory and Empirical Analysis By Rahul Deb; Yuichi Kitamura; John K.-H. Quah; Jörg Stoye
  25. Too Good to Be True? Fallacies in Evaluating Risk Factor Models By Gospodinov, Nikolay; Kan, Raymond; Robotti, Cesare
  26. Regression Based Expected Shortfall Backtesting By Sebastian Bayer; Timo Dimitriadis
  27. Composite Indirect Inference with Application By Christian Gouriéroux; Alain Monfort
  28. Structural Interpretation of Vector Autoregressions with Incomplete Identification: Revisiting the Role of Oil Supply and Demand Shocks By Christiane J.S. Baumeister; James D. Hamilton

  1. By: Vanessa Berenguer Rico; Bent Nielsen
    Abstract: A new class of marked and weighted empirical processes of residuals is introduced. The framework is general enough to accommodate both stationary and non-stationary regressions as well as a wide class of estimation procedures with applications in misspecification testing and robust statistics. Two applications are presented. First, we analyze the relationship between truncated moments and linear statistical functionals of residuals. In particular, we show that the asymptotic behaviour of these functionals, expressed as integrals with respect to their empirical distribution functions, can be easily analyzed given the main theorems of the paper. In our context the integrands can be unbounded provided that the underlying distribution meets certain moment conditions. A general first order asymptotic approximation of the statistical functionals is derived and then applied to some cases of interest. Second, the consequences of using the standard cumulant based normality test for robust regressions are analyzed. We show that the rescaling of the moment based statistic is case dependent, i.e., it depends on the truncation and the estimation method being used. Hence, using the standard least squares normalizing constants in robust regressions will lead to incorrect inferences. However, if appropriate normalizations, which we derive, are used then the test statistic is asymptotically chi-square.
    Date: 2017–12–20
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:841&r=ecm
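    Code sketch: a minimal Monte Carlo illustration (in Python) of the problem the paper addresses, namely that the cumulant-based normality statistic with the standard least-squares constants (1/6 and 1/24) is no longer correctly calibrated when computed from the residuals retained by a trimmed, LTS-style fit; the sample size, trimming fraction and design below are illustrative assumptions, and the paper's corrected normalizations are not reproduced here.
      import numpy as np
      from scipy import stats

      def jb_standard(e):
          """Cumulant-based normality statistic with the usual least-squares constants."""
          n = len(e)
          e = e - e.mean()
          m2 = np.mean(e ** 2)
          skew = np.mean(e ** 3) / m2 ** 1.5
          kurt = np.mean(e ** 4) / m2 ** 2
          return n * (skew ** 2 / 6.0 + (kurt - 3.0) ** 2 / 24.0)

      rng = np.random.default_rng(0)
      n, n_rep, trim = 500, 1000, 0.15
      crit = stats.chi2.ppf(0.95, df=2)          # nominal 5% chi-square(2) critical value
      reject_ols = reject_trimmed = 0

      for _ in range(n_rep):
          x = rng.normal(size=n)
          y = 1.0 + 2.0 * x + rng.normal(size=n)          # Gaussian errors: the null is true
          X = np.column_stack([np.ones(n), x])
          resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
          # (a) full OLS residuals: the standard calibration is (approximately) correct
          reject_ols += jb_standard(resid) > crit
          # (b) residuals kept after trimming the 15% largest |residuals| (LTS-style):
          #     truncation changes the cumulants, so the same constants over-reject
          keep = np.abs(resid) <= np.quantile(np.abs(resid), 1 - trim)
          reject_trimmed += jb_standard(resid[keep]) > crit

      print("rejection rate, OLS residuals:    ", reject_ols / n_rep)
      print("rejection rate, trimmed residuals:", reject_trimmed / n_rep)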
  2. By: Irene Hueter (Columbia University)
    Abstract: This paper considers the estimation problem in linear regression when endogeneity is present, that is, when explanatory variables are correlated with the random error, and also addresses the question of a priori testing for potential endogeneity. We provide a survey of the latent instrumental variables (LIV) approach proposed by Ebbes (2004) and Ebbes et al. (2004, 2005, 2009) and examine its performance compared to the methods of ordinary least squares (OLS) and IV regression. The distinctive feature of Ebbes’ approach is that no observed instruments are required. Instead ‘optimal’ instruments are estimated from data and allow for endogeneity testing. Importantly, this Hausman-LIV test is a simple tool that can be used to test for potential endogeneity in regression analysis and indicate when LIV regression is more appropriate and should be performed instead of OLS regression. The LIV models considered comprise the standard one, where the latent variable is discrete with at least two fixed categories, and two interesting extensions: multilevel models in which a nonparametric Bayes algorithm determines the LIV’s distribution entirely from the data. This paper suggests that while Ebbes’ new method is a distinct contribution, its formulation is problematic in certain important respects. Specifically, the various publications of Ebbes and collaborators employ three distinct and inequivalent statistical concepts interchangeably, treating all as one and the same. We clarify this and then discuss estimation of the returns to education in income based on data from three studies that Ebbes (2004) revisited, where ‘education’ is potentially endogenous due to omitted ‘ability.’ While the OLS estimate exhibits a slight upward bias of 7%, 8%, and 6%, respectively, relative to the LIV estimate for the three studies, IV estimation leads to an enormous bias of 93%, 40%, and -24%, respectively, with no consensus about the direction of the bias. This provides one instance among many well-known applications where IVs introduced more substantial biases into the estimated causal effects than OLS, even though IVs were pioneered to overcome the endogeneity problem. In a second example we scrutinize the results of Ferguson et al. (2015) on the estimated effect of campaign expenditures on the proportions of Democratic and Republican votes in US House and Senate elections between 1980 and 2014, where ‘campaign money’ is potentially endogenous in view of omitted variables such as ‘a candidate’s popularity.’ A nonparametric Bayesian spatial LIV regression model was adopted to incorporate identified spatial autocorrelation and account for endogeneity. The relative bias of the spatial regression estimate as compared to the spatial LIV estimate ranges between -17% and 18% for the House and between -25% and 7% for the Senate.
    Keywords: Endogeneity, instrumental variables, latent instrumental variables, omitted variables, regression, returns to education, return to campaign money in U.S. Congressional elections.
    JEL: C01 C8 C36
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:thk:wpaper:46&r=ecm
  3. By: Alexandre Belloni (Institute for Fiscal Studies); Victor Chernozhukov (Institute for Fiscal Studies and MIT); Abhishek Kaul (Institute for Fiscal Studies)
    Abstract: We study high-dimensional linear models with error-in-variables. Such models are motivated by various applications in econometrics, finance and genetics. These models are challenging because of the need to account for measurement errors to avoid non-vanishing biases, in addition to handling the high dimensionality of the parameters. A recent and growing literature has proposed various estimators that achieve good rates of convergence. Our main contribution complements this literature with the construction of simultaneous confidence regions for the parameters of interest in such high-dimensional linear models with error-in-variables. These confidence regions are based on the construction of moment conditions that have an additional orthogonality property with respect to nuisance parameters. We provide a construction that requires us to estimate an auxiliary high-dimensional linear model with error-in-variables for each component of interest. We use a multiplier bootstrap to compute critical values for simultaneous confidence intervals for a target subset of the components. We show its validity despite possible (moderate) model selection mistakes, while allowing the number of target coefficients to be larger than the sample size. We apply our results to two examples, discuss their implications, and conduct Monte Carlo simulations to illustrate the performance of the proposed procedure for each variable whose coefficient is the target of inference.
    Keywords: honest confidence regions, error-in-variables, high dimensional models
    Date: 2017–05–17
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:22/17&r=ecm
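    Code sketch: a generic illustration (in Python) of the multiplier-bootstrap step behind simultaneous confidence intervals, namely simulating the Gaussian-multiplier maximum of studentized statistics; here `psi` stands for estimated per-observation influence contributions of the target coefficients, which in the paper come from auxiliary orthogonalized error-in-variables regressions that are not reproduced in this sketch.
      import numpy as np

      def multiplier_bootstrap_cv(psi, alpha=0.05, n_boot=2000, seed=0):
          """Critical value for max_j | n^{-1/2} sum_i e_i psi_{ij} / sd_j | with Gaussian multipliers e_i."""
          rng = np.random.default_rng(seed)
          n, p = psi.shape
          sd = psi.std(axis=0, ddof=1)                 # studentization for each target coefficient
          max_stats = np.empty(n_boot)
          for b in range(n_boot):
              e = rng.standard_normal(n)               # multiplier draws
              max_stats[b] = np.max(np.abs(e @ psi) / (np.sqrt(n) * sd))
          return np.quantile(max_stats, 1 - alpha)

      # Usage: simultaneous intervals beta_hat_j +/- cv * sd_j / sqrt(n) for all target coefficients j.
      psi = np.random.default_rng(1).standard_normal((400, 50))   # placeholder influence contributions
      print(multiplier_bootstrap_cv(psi))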
  4. By: Wang, Wuyi (School of Economics, Singapore Management University); Su, Liangjun (School of Economics, Singapore Management University)
    Abstract: We propose a procedure to identify latent group structures in nonlinear panel data models where some regression coefficients are heterogeneous across groups but homogeneous within a group and the group number and membership are unknown. To identify the group structures, we consider the order statistics for the preliminary unconstrained consistent estimators of the regression coefficients and translate the problem of classification into the problem of break detection. Then we extend the sequential binary segmentation algorithm of Bai (1997) for break detection from the time series setup to the panel data framework. We demonstrate that our method is able to identify the true latent group structures with probability approaching one and the post-classification estimators are oracle-efficient. The method has the advantage of more convenient implementation compared with some alternative methods, which is a desirable feature in nonlinear panel applications. To improve the finite sample performance, we also consider an alternative version based on the spectral decomposition of a certain estimated matrix and link the group identification issue to the community detection problem in the network literature. Simulations show that our method has good finite sample performance. We apply this method to explore how individuals' portfolio choices respond to their financial status and other characteristics using the Netherlands household panel data from 1993 to 2015, and find three latent groups.
    Keywords: Binary segmentation algorithm; clustering; community detection; network; oracle estimator; panel structure model; parameter heterogeneity; singular value decomposition.
    JEL: C33 C38 C51
    Date: 2017–12–16
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2017_019&r=ecm
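    Code sketch: a stylized illustration (in Python) of the classification idea, sorting preliminary unit-specific coefficient estimates and applying sequential binary segmentation so that group boundaries are detected as breaks in the ordered sequence; the stopping threshold and the toy estimates below are illustrative assumptions, not the authors' implementation or tuning rule.
      import numpy as np

      def ssr(x):
          return np.sum((x - x.mean()) ** 2) if len(x) else 0.0

      def binary_segment(sorted_coefs, lo, hi, min_gain, breaks):
          """Recursively split sorted_coefs[lo:hi] at the point that most reduces the sum of squares."""
          seg = sorted_coefs[lo:hi]
          if len(seg) < 2:
              return
          gains = [ssr(seg) - ssr(seg[:k]) - ssr(seg[k:]) for k in range(1, len(seg))]
          k_star = int(np.argmax(gains)) + 1
          if gains[k_star - 1] < min_gain:             # no further break: stop splitting
              return
          breaks.append(lo + k_star)
          binary_segment(sorted_coefs, lo, lo + k_star, min_gain, breaks)
          binary_segment(sorted_coefs, lo + k_star, hi, min_gain, breaks)

      prelim = np.array([0.21, 0.19, 0.52, 0.48, 0.95, 0.18, 0.50, 1.01])   # preliminary estimates
      order = np.argsort(prelim)
      breaks = []
      binary_segment(prelim[order], 0, len(prelim), min_gain=0.05, breaks=breaks)
      groups = np.zeros(len(prelim), dtype=int)
      for b in sorted(breaks):                         # units sharing a segment share a group label
          groups[order[b:]] += 1
      print(groups)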
  5. By: Andreas Dzemski; Ryo Okui
    Abstract: This paper develops procedures for computing a confidence set for a latent group structure. We study panel data models with unobserved grouped heterogeneity where each unit's regression curve is determined by the unit's latent group membership. Our main contribution is a new joint confidence set for group membership. Each element of the joint confidence set is a vector of possible group assignments for all units. The vector of true group memberships is contained in the confidence set with a pre-specified probability. The confidence set inverts a test for group membership. This test exploits a characterization of the true group memberships by a system of moment inequalities. Our procedure solves a high-dimensional one-sided testing problem and tests group membership simultaneously for all units. We also propose a procedure for identifying units for which group membership is obviously determined. These units can be ignored when computing critical values. We justify the joint confidence set under $N, T \to \infty$ asymptotics where we allow $T$ to be much smaller than $N$. Our arguments rely on the theory of self-normalized sums and high-dimensional central limit theorems. We contribute new theoretical results for testing problems with a large number of moment inequalities including an anti-concentration inequality for the quasi-likelihood ratio (QLR) statistic. Monte Carlo results indicate that our confidence set has adequate coverage and is informative. We illustrate the practical relevance of our confidence set in two applications.
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.00332&r=ecm
  6. By: Rajarshi Mukherjee (Institute for Fiscal Studies); Whitney K. Newey (Institute for Fiscal Studies and MIT); James Robins (Institute for Fiscal Studies)
    Abstract: Robins et al. (2008, 2016b) applied the theory of higher order influence functions (HOIFs) to derive an estimator of the mean of an outcome Y in a missing data model with Y missing at random conditional on a vector X of continuous covariates; their estimator, in contrast to previous estimators, is semiparametric efficient under minimal conditions. However, the Robins et al. (2008, 2016b) estimator depends on a non-parametric estimate of the density of X. In this paper, we introduce a new HOIF estimator that has the same asymptotic properties as their estimator but does not require non-parametric estimation of a multivariate density, which is important because accurate estimation of a high dimensional density is not feasible at the moderate sample sizes often encountered in applications. We also show that our estimator can be generalized to the entire class of functionals considered by Robins et al. (2008), which includes the average effect of a treatment on a response Y when a vector X suffices to control confounding and the expected conditional variance of a response Y given a vector X.
    Keywords: Higher Order Influence Functions, Doubly Robust Functionals, Semiparametric Efficiency, Higher Order U-Statistics
    Date: 2017–06–14
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:30/17&r=ecm
  7. By: Chambers, MJ
    Abstract: This paper proposes a model suitable for exploiting fully the information contained in mixed frequency and mixed sample data in the estimation of cointegrating vectors. The asymptotic properties of easy-to-compute spectral regression estimators of the cointegrating vectors are derived and these estimators are shown to belong to the class of optimal cointegration estimators. Furthermore, Wald statistics based on these estimators have asymptotic chi-square distributions which enable inferences to be made straightforwardly. Simulation experiments suggest that the finite sample performance of a spectral regression estimator in an augmented mixed frequency model is particularly encouraging, as it is capable of dramatically reducing the root mean squared error obtained in an entirely low frequency model to levels comparable to those of an infeasible high frequency model. The finite sample size and power properties of the Wald statistic are also found to be good. An empirical example, applied to stock price and dividend data, is provided to demonstrate the methods in practice.
    Keywords: mixed frequency data, mixed sample data, cointegration, spectral regression
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:21144&r=ecm
  8. By: Gloria Gonzalez-Rivera (Department of Economics, University of California Riverside); Wei Lin (Capital University of Economics and Business, China)
    Abstract: Asymmetric information models of market microstructure claim that variables like trading intensity are proxies for latent information on the value of financial assets. We consider the interval-valued time series (ITS) of low/high returns and explore the relationship between these extreme returns and the intensity of trading. We assume that the returns (or prices) are generated by a latent process with some unknown conditional density. At each period of time, from this density, we have some random draws (trades) and the lowest and highest returns are the realized extreme observations of the latent process over the sample of draws. In this context, we propose a semiparametric model of extreme returns that exploits the results provided by extreme value theory. If properly centered and standardized extremes have well defined limiting distributions, the conditional mean of extreme returns is a highly nonlinear function of conditional moments of the latent process and of the conditional intensity of the process that governs the number of draws. We implement a two-step estimation procedure. First, we estimate parametrically the regressors that will enter into the nonlinear function, and in a second step, we estimate nonparametrically the conditional mean of extreme returns as a function of the generated regressors. Unlike current models for ITS, the proposed semiparametric model is robust to misspecification of the conditional density of the latent process. We fit several nonlinear and linear models to the 5-min and 1-min low/high returns of seven major banks and technology stocks, and find that the nonlinear specification is superior to the current linear models and that the conditional volatility of the latent process and the conditional intensity of the trading process are major drivers of the dynamics of extreme returns.
    Keywords: Trading intensity, Interval-valued Time Series, Generalized Extreme Value Distribution, Nonparametric regression, Generated Regressor
    JEL: C01 C14 C32 C51
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:201801&r=ecm
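    Code sketch: a stripped-down illustration (in Python) of the two-step idea, building parametric generated regressors and then estimating the conditional mean of the extreme return nonparametrically; the first-step models here are simple exponentially weighted volatility and trade-intensity proxies and the second step is a hand-rolled Nadaraya-Watson regression, all of which are stand-ins for the authors' specification, with simulated data and an arbitrary bandwidth.
      import numpy as np

      def ewma(x, lam=0.94):
          out = np.empty(len(x))
          out[0] = x[0]
          for t in range(1, len(x)):
              out[t] = lam * out[t - 1] + (1 - lam) * x[t]
          return out

      def nadaraya_watson(Xtr, ytr, Xev, h=0.5):
          """Gaussian-kernel regression of y on the (generated) regressors."""
          d2 = ((Xev[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
          w = np.exp(-0.5 * d2 / h ** 2)
          return (w @ ytr) / w.sum(axis=1)

      rng = np.random.default_rng(1)
      T = 1000
      ret = rng.standard_t(df=5, size=T) * 0.01               # stand-in intraday returns
      ntrades = rng.poisson(50 + 3000 * np.abs(ret))           # stand-in trade counts
      high = np.abs(ret) + 0.002 * rng.random(T)               # stand-in interval "high" return

      # Step 1: parametric generated regressors (volatility and trading-intensity proxies).
      vol_hat = np.sqrt(ewma(ret ** 2))
      lam_hat = ewma(ntrades.astype(float))
      Z = np.column_stack([vol_hat, lam_hat])
      Z = (Z - Z.mean(0)) / Z.std(0)                           # scale before the kernel step

      # Step 2: nonparametric conditional mean of the extreme return given the generated regressors.
      fitted_high = nadaraya_watson(Z, high, Z, h=0.5)
      print(np.corrcoef(fitted_high, high)[0, 1])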
  9. By: Yasumasa Matsuda; Yoshihiro Yajima
    Abstract: This paper proposes locally stationary spatio-temporal processes to analyze the motivating example of US precipitation data, which is a huge data set composed of monthly observations of precipitation at thousands of monitoring points scattered irregularly all over the US continent. Allowing the parameters of continuous autoregressive and moving average (CARMA) random fields by Brockwell and Matsuda [2] to depend on space, we generalize the locally stationary time series of Dahlhaus [3] to spatio-temporal processes that are locally stationary in space. We develop Whittle likelihood estimation for the spatially dependent parameters and derive the asymptotic properties rigorously. We demonstrate that the spatio-temporal models actually work to account for nonstationary spatial covariance structures in US precipitation data.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:72&r=ecm
  10. By: Demian Pouzo; Zacharias Psaradakis; Martin Sola
    Abstract: This paper considers maximum likelihood (ML) estimation in a large class of models with hidden Markov regimes. We investigate consistency and local asymptotic normality of the ML estimator under general conditions which allow for autoregressive dynamics in the observable process, time-inhomogeneous Markov regime sequences, and possible model misspecification. A Monte Carlo study examines the finite-sample properties of the ML estimator. An empirical application is also discussed.
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1612.04932&r=ecm
  11. By: Gregor Kastner
    Abstract: We address the curse of dimensionality in dynamic covariance estimation by modeling the underlying co-volatility dynamics of a time series vector through latent time-varying stochastic factors. The use of a global-local shrinkage prior for the elements of the factor loadings matrix pulls loadings on superfluous factors towards zero. To demonstrate the merits of the proposed framework, the model is applied to simulated data as well as to daily log-returns of 300 S&P 500 members. Our approach yields precise correlation estimates, strong implied minimum variance portfolio performance and superior forecasting accuracy in terms of log predictive scores when compared to typical benchmarks.
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1608.08468&r=ecm
  12. By: Ye Luo; Martin Spindler
    Abstract: Boosting algorithms are very popular in Machine Learning and have proven very useful for prediction and variable selection. Nevertheless, in many applications the researcher is interested in inference on treatment effects or policy variables in a high-dimensional setting. Empirical researchers are increasingly faced with rich datasets containing very many controls or instrumental variables, where variable selection is challenging. In this paper we give results for valid inference on a treatment effect after selecting from among very many control variables, and for estimation with instrumental variables when there are potentially very many instruments, when post- or orthogonal $L_2$-Boosting is used for the variable selection. This setting allows for valid inference on low-dimensional components in a regression estimated with $L_2$-Boosting. We give simulation results for the proposed methods and an empirical application, in which we analyze the effectiveness of a pulmonary artery catheter.
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.00364&r=ecm
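    Code sketch: a minimal illustration (in Python) of componentwise L2-Boosting followed by a "post" OLS refit on the selected covariates, which is the selection step the paper's treatment-effect and IV procedures build on; the step size, number of boosting iterations and simulated design are illustrative choices, not the paper's recommendations.
      import numpy as np

      def l2_boost(X, y, n_steps=200, nu=0.1):
          """Componentwise L2-Boosting: repeatedly fit the single best regressor to the current residual."""
          n, p = X.shape
          beta = np.zeros(p)
          resid = y - y.mean()
          norms = (X ** 2).sum(axis=0)
          for _ in range(n_steps):
              corr = X.T @ resid
              j = int(np.argmax(np.abs(corr)))         # covariate that best fits the residual
              b_j = corr[j] / norms[j]
              beta[j] += nu * b_j                      # take a small step in direction j
              resid -= nu * b_j * X[:, j]
          return beta

      rng = np.random.default_rng(2)
      n, p = 200, 500
      X = rng.standard_normal((n, p))
      X = X - X.mean(0)
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n)

      beta_boost = l2_boost(X, y)
      selected = np.flatnonzero(np.abs(beta_boost) > 1e-8)
      # Post-L2-Boosting: ordinary least squares using only the selected columns.
      beta_post = np.linalg.lstsq(X[:, selected], y - y.mean(), rcond=None)[0]
      print(selected[:10], np.round(beta_post[:5], 2))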
  13. By: Patrick Kofod Mogensen
    Abstract: Dynamic Discrete Choice Models (DDCMs) are important in the structural estimation literature. Since the structural errors are practically always continuous and unbounded in nature, researchers often use the expected value function. The idea of solving for the expected value function made solution more practical and estimation feasible. However, as we show in this paper, the expected value function is impractical compared to an alternative: the integrated (ex ante) value function. We briefly describe the drawbacks of the former and provide benchmarks on actual problems with varying cardinality of the state space and number of decisions. Though the two approaches solve the same problem in theory, the benchmarks support the claim that the integrated value function is preferred in practice.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.03978&r=ecm
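    Code sketch: a toy illustration (in Python) of the integrated (ex ante) value function recursion for a Rust-style replacement problem with i.i.d. type-I extreme value taste shocks, where integrating out the shocks has the closed form V(x) = log sum_a exp( u(x,a) + beta E[V(x')|x,a] ); the payoffs and transition matrices are invented toy inputs, and the sketch does not reproduce the paper's benchmarks or the alternative expected-value-function formulation.
      import numpy as np
      from scipy.special import logsumexp

      n_states, n_actions, beta = 50, 2, 0.95
      x = np.arange(n_states)
      u = np.stack([-0.2 * x,                          # action 0: keep (rising maintenance cost)
                    -3.0 * np.ones(n_states)])         # action 1: replace (fixed cost)
      # Toy transitions: keeping drifts the state up by one, replacing resets it to zero.
      P = np.zeros((n_actions, n_states, n_states))
      P[0, np.arange(n_states), np.minimum(np.arange(n_states) + 1, n_states - 1)] = 1.0
      P[1, :, 0] = 1.0

      V = np.zeros(n_states)
      for _ in range(1000):                            # successive approximation of the fixed point
          v_choice = u + beta * (P @ V)                # choice-specific values, shape (n_actions, n_states)
          V_new = logsumexp(v_choice, axis=0)          # integrate out the extreme value shocks
          if np.max(np.abs(V_new - V)) < 1e-10:
              V = V_new
              break
          V = V_new

      ccp_keep = np.exp(v_choice[0] - V)               # implied conditional choice probabilities
      print(np.round(V[:5], 3), np.round(ccp_keep[:5], 3))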
  14. By: Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and Duke University)
    Abstract: This paper studies identification and estimation of the distribution of bidder valuations in an incomplete model of English auctions. As in Haile and Tamer (2003) bidders are assumed to (i) bid no more than their valuations and (ii) never let an opponent win at a price they are willing to beat. Unlike the model studied by Haile and Tamer (2003), the requirement of independent private values is dropped, enabling the use of these restrictions on bidder behavior with affiliated private values, for example through the presence of auction specific unobservable heterogeneity. In addition, a semiparametric index restriction on the effect of auction-specific observable heterogeneity is incorporated, which, relative to nonparametric methods, can be helpful in alleviating the curse of dimensionality with a moderate or large number of covariates. The identification analysis employs results from Chesher and Rosen (2017) to characterize identified sets for bidder valuation distributions and functionals thereof.
    Date: 2017–05–31
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:27/17&r=ecm
  15. By: Sokbae Lee; Yuan Liao; Myung Hwan Seo; Youngki Shin
    Abstract: In this paper, we consider a high-dimensional quantile regression model where the sparsity structure may differ between two sub-populations. We develop $\ell_1$-penalized estimators of both regression coefficients and the threshold parameter. Our penalized estimators not only select covariates but also discriminate between a model with homogeneous sparsity and a model with a change point. As a result, it is not necessary to know or pretest whether the change point is present, or where it occurs. Our estimator of the change point achieves an oracle property in the sense that its asymptotic distribution is the same as if the unknown active sets of regression coefficients were known. Importantly, we establish this oracle property without a perfect covariate selection, thereby avoiding the need for the minimum level condition on the signals of active covariates. Dealing with high-dimensional quantile regression with an unknown change point calls for a new proof technique since the quantile loss function is non-smooth and furthermore the corresponding objective function is non-convex with respect to the change point. The technique developed in this paper is applicable to a general M-estimation framework with a change point, which may be of independent interest. The proposed methods are then illustrated via Monte Carlo experiments and an application to tipping in the dynamics of racial segregation.
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1603.00235&r=ecm
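    Code sketch: a schematic illustration (in Python, using scikit-learn's QuantileRegressor as the L1-penalized quantile fit) of estimating the change point by profiling: for each candidate threshold the coefficients are allowed to shift for observations above it, and the candidate minimizing the penalized check loss is kept; the grid, penalty level and simulated data are illustrative assumptions rather than the paper's tuning choices.
      import numpy as np
      from sklearn.linear_model import QuantileRegressor

      def check_loss(u, tau):
          return np.mean(u * (tau - (u < 0)))

      rng = np.random.default_rng(3)
      n, p, tau = 400, 30, 0.5
      X = rng.standard_normal((n, p))
      q = rng.uniform(size=n)                                   # threshold variable
      y = X[:, 0] + 2.0 * X[:, 1] * (q > 0.6) + rng.standard_normal(n)   # true change point at 0.6

      best_loss, best_cp = np.inf, None
      for cp in np.quantile(q, np.linspace(0.15, 0.85, 15)):    # candidate change points
          X_aug = np.hstack([X, X * (q > cp)[:, None]])         # regime-shift interactions
          fit = QuantileRegressor(quantile=tau, alpha=0.05, solver="highs").fit(X_aug, y)
          loss = check_loss(y - fit.predict(X_aug), tau) + 0.05 * np.abs(fit.coef_).sum()
          if loss < best_loss:
              best_loss, best_cp = loss, cp

      print("estimated change point:", round(float(best_cp), 3))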
  16. By: Michael Jansson; Demian Pouzo
    Abstract: We present a general framework for studying regularized estimators; i.e., estimation problems wherein "plug-in" type estimators are either ill-defined or ill-behaved. We derive primitive conditions that imply consistency and asymptotic linear representation for regularized estimators, allowing for slower than $\sqrt{n}$ estimators as well as infinite dimensional parameters. We also provide data-driven methods for choosing tuning parameters that, under some conditions, achieve the aforementioned results. We illustrate the scope of our approach by studying a wide range of applications, revisiting known results and deriving new ones.
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1712.07248&r=ecm
  17. By: Becker, Martin; Klößner, Stefan; Pfeifer, Gregor
    Abstract: While the literature on synthetic control methods mostly abstracts from out-of-sample measures, Abadie et al. (2015) have recently introduced a cross-validation approach. This technique, however, is not well-defined since it hinges on predictor weights which are not uniquely defined. We fix this issue, proposing a new, well-defined cross-validation technique, which we apply to the original Abadie et al. (2015) data. Additionally, we discuss how this new technique can be used for comparing different specifications based on out-of-sample measures, avoiding the danger of cherry-picking.
    Keywords: Synthetic Control Methods; Cross-Validation; Specification Search.
    JEL: C22 C52
    Date: 2017–08–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83679&r=ecm
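    Code sketch: a compact illustration (in Python) of evaluating synthetic-control weights out of sample by splitting the pre-treatment window into a training and a validation part, in the spirit of the cross-validation idea; it uses plain outcome-matching weights and scipy's SLSQP solver on simulated data, not the authors' predictor-weight based procedure or their corrected cross-validation technique.
      import numpy as np
      from scipy.optimize import minimize

      def sc_weights(Y_donors, y_treated):
          """Donor weights: nonnegative, summing to one, minimizing pre-treatment mean squared error."""
          J = Y_donors.shape[1]
          obj = lambda w: np.mean((y_treated - Y_donors @ w) ** 2)
          cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
          res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                         constraints=cons, method="SLSQP")
          return res.x

      rng = np.random.default_rng(4)
      T_pre, J = 40, 12
      Y_donors = rng.standard_normal((T_pre, J)).cumsum(axis=0)
      y_treated = Y_donors[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.standard_normal(T_pre)

      train, valid = slice(0, 25), slice(25, T_pre)             # split the pre-treatment period
      w = sc_weights(Y_donors[train], y_treated[train])
      oos_rmse = np.sqrt(np.mean((y_treated[valid] - Y_donors[valid] @ w) ** 2))
      print("out-of-sample (validation) RMSE:", round(float(oos_rmse), 4))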
  18. By: Güriş, Burak
    Abstract: This study suggests a new nonlinear unit root test procedure with a Fourier function. In this test procedure, structural breaks are modeled by means of a Fourier function and nonlinear adjustment is modeled by means of an Exponential Smooth Threshold Autoregressive (ESTAR) model. The Monte Carlo simulation results indicate that the proposed test has good size and power properties. By allowing multiple smooth temporary breaks and nonlinearity together in the test procedure, this test mitigates the problem of over-acceptance of the null of nonstationarity.
    Keywords: Flexible Fourier Form, Unit Root Test, Nonlinearity
    JEL: C12 C2 C22
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83472&r=ecm
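    Code sketch: a schematic illustration (in Python) of the test construction, first removing smooth breaks with a single-frequency Fourier approximation and then running a KSS/ESTAR-type auxiliary regression of the differenced detrended series on its lagged level cubed; the frequency, lag length and data are illustrative, the statistic is not the paper's exact proposal, and its critical values are nonstandard and would have to be simulated.
      import numpy as np

      def fourier_estar_stat(y, k=1, p_lags=1):
          T = len(y)
          t = np.arange(1, T + 1)
          F = np.column_stack([np.ones(T),
                               np.sin(2 * np.pi * k * t / T),
                               np.cos(2 * np.pi * k * t / T)])
          v = y - F @ np.linalg.lstsq(F, y, rcond=None)[0]       # Fourier-detrended series
          dv = np.diff(v)
          v_cubed = v[:-1] ** 3                                  # ESTAR-type nonlinear regressor
          rows = np.arange(p_lags, len(dv))
          X = np.column_stack([v_cubed[rows]] + [dv[rows - j] for j in range(1, p_lags + 1)])
          yy = dv[rows]
          beta = np.linalg.lstsq(X, yy, rcond=None)[0]
          resid = yy - X @ beta
          s2 = resid @ resid / (len(yy) - X.shape[1])
          se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
          return beta[0] / se                                    # t-ratio on the cubed lagged level

      rng = np.random.default_rng(5)
      y = np.cumsum(rng.standard_normal(300))                    # a unit-root series, so the null is true
      print(round(float(fourier_estar_stat(y)), 3))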
  19. By: Mishra, SK
    Abstract: The Two-Stage Least Squares method for obtaining the estimated structural coefficients of a simultaneous linear equations model is a celebrated method that uses OLS at the first stage for estimating the reduced form coefficients and obtaining the expected values in the arrays of current endogenous variables. At the second stage it uses OLS, equation by equation, in which the explanatory expected current endogenous variables are used as instruments representing their observed counterparts. It has been pointed out that since the explanatory expected current endogenous variables are linear functions of the predetermined variables in the model, inclusion of such expected current endogenous variables together with a subset of predetermined variables as regressors makes the estimation procedure susceptible to the deleterious effects of collinearity, which may leave some of the estimated structural coefficients with inflated variances as well as wrong signs. As a remedy to this problem, the use of Shapley value regression at the second stage has been proposed. For illustration, a model has been constructed in which the measures of the different aspects of globalization are the endogenous variables while the measures of the different aspects of democracy are the predetermined variables. It has been found that the conventional (OLS-based) Two-Stage Least Squares (2-SLS) gives some of the estimated structural coefficients an unexpected sign. In contrast, all structural coefficients estimated with the proposed 2-SLS (in which Shapley value regression has been used at the second stage) have the expected sign. These empirical findings suggest that the measures of globalization are mutually consistent and are positively affected by democratic regimes.
    Keywords: Simultaneous equations model; Two-Stage Least Squares; Instrumental Variables; Collinearity; Shapley Value Regression; Democracy Index; Index of Globalization
    JEL: C30 C36 C51 C61 C71
    Date: 2017–12–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83534&r=ecm
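    Code sketch: an illustration (in Python) of the Shapley-value ingredient, decomposing the second-stage R-squared across regressors by averaging each regressor's marginal contribution over all subsets, after a conventional first stage that replaces the endogenous regressor with its reduced-form fitted value; how the Shapley shares are mapped back into structural coefficients follows the paper and is not reproduced, and the simulated model is purely illustrative.
      import numpy as np
      from itertools import combinations
      from math import factorial

      def r2(X, y):
          if X.shape[1] == 0:
              return 0.0
          fit = X @ np.linalg.lstsq(X, y, rcond=None)[0]
          return 1.0 - np.sum((y - fit) ** 2) / np.sum(y ** 2)   # y is centered below

      def shapley_r2(X, y):
          """Shapley decomposition of R^2: average marginal contribution of each column over all subsets."""
          p = X.shape[1]
          shares = np.zeros(p)
          for j in range(p):
              others = [k for k in range(p) if k != j]
              for size in range(p):
                  w = factorial(size) * factorial(p - size - 1) / factorial(p)
                  for S in combinations(others, size):
                      shares[j] += w * (r2(X[:, list(S) + [j]], y) - r2(X[:, list(S)], y))
          return shares

      rng = np.random.default_rng(6)
      n = 300
      z = rng.standard_normal((n, 2))                            # predetermined variables
      endo = z @ np.array([0.8, 0.4]) + rng.standard_normal(n)   # endogenous regressor
      y = 1.5 * endo + 0.5 * z[:, 0] + rng.standard_normal(n)

      # Stage 1: fitted values of the endogenous regressor from the reduced form.
      Z1 = np.column_stack([np.ones(n), z])
      endo_hat = Z1 @ np.linalg.lstsq(Z1, endo, rcond=None)[0]

      # Stage 2: the fitted endogenous regressor and a predetermined variable are correlated by
      # construction; Shapley shares attribute the explained variance across them without the
      # instability that collinearity induces in OLS partial coefficients.
      X2 = np.column_stack([endo_hat, z[:, 0]])
      shares = shapley_r2(X2 - X2.mean(0), y - y.mean())
      print(np.round(shares, 3), round(float(shares.sum()), 3))  # shares sum to the full R^2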
  20. By: Gospodinov, Nikolay (Federal Reserve Bank of Atlanta); Maasoumi, Esfandiar (Emory University)
    Abstract: This paper proposes an entropy-based approach for aggregating information from misspecified asset pricing models. The statistical paradigm is shifted away from parameter estimation of an optimally selected model to stochastic optimization based on a risk function of aggregation across models. The proposed method relaxes the perfect substitutability of the candidate models, which is implicitly embedded in the linear pooling procedures, and ensures that the aggregation weights are selected with a proper (Hellinger) distance measure that satisfies the triangle inequality. The empirical results illustrate the robustness and the pricing ability of the aggregation approach to stochastic discount factor models.
    Keywords: entropy; model aggregation; asset pricing; misspecified models; oracle inequality; Hellinger distance
    JEL: C13 C52 G12
    Date: 2017–11–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2017-10&r=ecm
  21. By: Peter Willemé
    Abstract: This paper proposes a new model to account for unobserved heterogeneity in empirical modelling. The model extends the well-known Finite Mixture (or Latent Class) Model by using the Johnson family of distributions for the component densities. Due to the great variety of distributional shapes that can be assumed by the Johnson family, the method does not impose the usual a priori assumptions regarding the type of densities that are mixed.
    JEL: C10 C13 C46 C51 C52
    Date: 2017–08–31
    URL: http://d.repec.org/n?u=RePEc:fpb:wpaper:1714&r=ecm
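    Code sketch: a minimal illustration (in Python) of the idea of a finite mixture whose component densities belong to the Johnson family, here a two-component Johnson SU mixture fitted by direct maximum likelihood with scipy; the number of components, starting values and simulated data are illustrative assumptions, and an EM algorithm (as is standard for finite mixtures) would be the more robust implementation.
      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      def neg_loglik(theta, x):
          # theta = [logit(pi), a1, log b1, loc1, log scale1, a2, log b2, loc2, log scale2]
          pi = 1.0 / (1.0 + np.exp(-theta[0]))
          f1 = stats.johnsonsu.pdf(x, theta[1], np.exp(theta[2]), loc=theta[3], scale=np.exp(theta[4]))
          f2 = stats.johnsonsu.pdf(x, theta[5], np.exp(theta[6]), loc=theta[7], scale=np.exp(theta[8]))
          return -np.sum(np.log(pi * f1 + (1.0 - pi) * f2 + 1e-300))

      rng = np.random.default_rng(7)
      x = np.concatenate([
          stats.johnsonsu.rvs(-1.0, 1.5, loc=0.0, scale=1.0, size=600, random_state=rng),
          stats.johnsonsu.rvs(0.5, 2.0, loc=3.0, scale=0.8, size=400, random_state=rng)])

      theta0 = np.array([0.0, 0.0, 0.0, np.quantile(x, 0.25), 0.0,
                         0.0, 0.0, np.quantile(x, 0.75), 0.0])
      res = minimize(neg_loglik, theta0, args=(x,), method="Nelder-Mead",
                     options={"maxiter": 20000, "fatol": 1e-8, "xatol": 1e-8})
      print("estimated mixing weight:", round(1.0 / (1.0 + np.exp(-res.x[0])), 3))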
  22. By: Alexis Derumigny (CREST; ENSAE); Jean-David Fermanian (CREST; ENSAE)
    Abstract: We discuss the so-called "simplifying assumption" of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of pointwise events are proposed. The limiting distributions of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap scheme. Some simulations illustrate the relevance of our results.
    Keywords: conditional copula; simplifying assumption; bootstrap
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-02&r=ecm
  23. By: Andrew Chesher (Institute for Fiscal Studies and University College London)
    Abstract: The impact of measurement error in explanatory variables on quantile regression functions is investigated using a small variance approximation. The approximation shows how the error contaminated and error free quantile regression functions are related. A key factor is the distribution of the error free explanatory variable. Exact calculations probe the accuracy of the approximation. The order of the approximation error is unchanged if the density of the error free explanatory variable is replaced by the density of the error contaminated explanatory variable which is easily estimated. It is then possible to use the approximation to investigate the sensitivity of estimates to varying amounts of measurement error.
    Keywords: measurement error, parameter approximations, quantile regression.
    Date: 2017–05–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:19/17&r=ecm
  24. By: Rahul Deb; Yuichi Kitamura; John K.-H. Quah; Jörg Stoye
    Abstract: We develop a model of demand where consumers trade off the utility of consumption against the disutility of expenditure. This model is appropriate whenever a consumer's demand over a $\it{strict}$ subset of all available goods is being analyzed. Data sets consistent with this model are characterized by the absence of revealed preference cycles over prices. For the random utility extension of the model, we devise nonparametric statistical procedures for testing and welfare comparisons. The latter requires the development of novel tests of linear hypotheses for partially identified parameters. Our applications on national household expenditure data provide support for the model and yield informative bounds concerning welfare rankings across different prices.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.02702&r=ecm
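    Code sketch: a small illustration (in Python) of the model's deterministic testable restriction, checking for revealed preference cycles over observed price vectors (an edge from p^t to p^s whenever the chosen bundle q^t is weakly cheaper at p^t than at p^s, with consistency requiring no cycle that contains a strict comparison); the toy data are invented, and the paper's nonparametric statistical test for the random utility extension is not reproduced.
      import numpy as np

      def price_preference_consistent(P, Q):
          """P, Q: (T, K) arrays of prices and chosen bundles for T observations."""
          T = P.shape[0]
          cost = P @ Q.T                        # cost[s, t] = expenditure on bundle q^t at prices p^s
          own = np.diag(cost)                   # own[t] = p^t . q^t
          weak = own[:, None] <= cost.T         # weak[t, s]: p^t revealed weakly preferred to p^s
          strict = own[:, None] < cost.T
          closure = weak.copy()
          for k in range(T):                    # Warshall transitive closure of the weak relation
              closure |= closure[:, [k]] & closure[[k], :]
          # a violation is a weak chain from t to s that is closed by a strict comparison of s over t
          return not np.any(closure & strict.T)

      prices = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])
      bundles = np.array([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])
      # these toy data contain a two-observation cycle with strict comparisons, so the check fails
      print(price_preference_consistent(prices, bundles))        # prints False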
  25. By: Gospodinov, Nikolay (Federal Reserve Bank of Atlanta); Kan, Raymond (University of Toronto); Robotti, Cesare (University of Georgia)
    Abstract: This paper is concerned with statistical inference and model evaluation in possibly misspecified and unidentified linear asset-pricing models estimated by maximum likelihood and one-step generalized method of moments. Strikingly, when spurious factors (that is, factors that are uncorrelated with the returns on the test assets) are present, the models exhibit perfect fit, as measured by the squared correlation between the model's fitted expected returns and the average realized returns. Furthermore, factors that are spurious are selected with high probability, while factors that are useful are driven out of the model. Although ignoring potential misspecification and lack of identification can be very problematic for models with macroeconomic factors, empirical specifications with traded factors (e.g., Fama and French, 1993, and Hou, Xue, and Zhang, 2015) do not suffer from the identification problems documented in this study.
    Keywords: asset pricing; spurious risk factors; unidentified models; model misspecification; continuously updated GMM; maximum likelihood; goodness-of-fit; rank test
    JEL: C12 C13 G12
    Date: 2017–11–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2017-09&r=ecm
  26. By: Sebastian Bayer; Timo Dimitriadis
    Abstract: In this article, we introduce a regression-based backtest for the risk measure Expected Shortfall (ES) which is based on a joint regression framework for the quantile and the ES. We also introduce a second variant of this ES backtest which allows for testing one-sided hypotheses by only testing an intercept parameter. These are the first backtests in the literature that assess the risk measure ES alone, as they only require ES forecasts as input. In contrast, the existing ES backtesting techniques require forecasts for further quantities such as the Value at Risk, the volatility or even the entire (tail) distribution. As the regulatory authorities only receive forecasts for the ES, backtests including further input parameters are not applicable in practice. We compare the empirical performance of our new backtests to existing approaches in terms of their empirical size and power through several different simulation studies. We find that our backtests clearly outperform the existing backtesting procedures in the literature in terms of their size and (size-adjusted) power properties throughout all considered simulation experiments. We provide an R package for these ES backtests which is easily applicable for practitioners.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.04112&r=ecm
  27. By: Christian Gouriéroux (CREST; University of Toronto); Alain Monfort (CREST)
    Abstract: It is common to deal with parametric models which are difficult to analyze, due to the large number of data and/or parameters, complicated nonlinearities, or unobservable variables. The aim is to explain how to analyze such models by means of a set of simplified models, called instrumental models, and how to combine these instrumental models in an optimal way. In this respect a bridge between the econometric literature on indirect inference and the statistical literature on composite likelihood is provided. The composite indirect inference principle is illustrated by an application to the analysis of corporate risks.
    Keywords: Indirect Inference; Composite Likelihood; Instrumental Model; Pseudo Maximum Likelihood; Corporate Risk; Asymptotic Single Risk Factor
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-07&r=ecm
  28. By: Christiane J.S. Baumeister; James D. Hamilton
    Abstract: Traditional approaches to structural vector autoregressions can be viewed as special cases of Bayesian inference arising from very strong prior beliefs. These methods can be generalized with a less restrictive formulation that incorporates uncertainty about the identifying assumptions themselves. We use this approach to revisit the importance of shocks to oil supply and demand. Supply disruptions turn out to be a bigger factor in historical oil price movements and inventory accumulation a smaller factor than implied by earlier estimates. Supply shocks lead to a reduction in global economic activity after a significant lag, whereas shocks to oil demand do not.
    JEL: C32 E32 Q43
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:24167&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.