on Econometrics |
By: | Gergely Ganics (Banco de España); Atsushi Inoue (Vanderbilt University); Barbara Rossi (ICREA - Univ. Pompeu Fabra) |
Abstract: | In this paper we propose methods to construct confidence intervals for the bias of the two-stage least squares estimator, and the size distortion of the associated Wald test in instrumental variables models. Importantly, our framework also covers the local projections-instrumental variable model. Unlike tests for weak instruments, whose distributions are non-standard and depend on nuisance parameters that cannot be estimated consistently, the confidence intervals for the strength of identification are straightforward and computationally easy to calculate, as they are obtained from inverting a chi-squared distribution. Furthermore, they provide more information to researchers on instrument strength than the binary decision offered by tests. Monte Carlo simulations show that the confidence intervals have good small sample coverage. We illustrate the usefulness of the proposed methods to measure the strength of identification in two empirical situations: the estimation of the intertemporal elasticity of substitution in a linearized Euler equation, and government spending multipliers. |
Keywords: | instrumental variables, weak instruments, weak identification, concentration parameter, local projections |
JEL: | C22 C52 C53 |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:1841&r=all |
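To make the chi-squared inversion described in the abstract above concrete, here is a minimal sketch assuming, purely for illustration, that an identification-strength statistic is distributed noncentral chi-squared with known degrees of freedom and noncentrality equal to the concentration parameter; the interval is obtained by inverting that distribution numerically. The function names and scipy-based setup are not from the paper.

```python
# Sketch: confidence interval for a chi-squared noncentrality (concentration)
# parameter by test inversion. Illustrative only; not the authors' exact construction.
from scipy.stats import ncx2
from scipy.optimize import brentq

def concentration_ci(stat, df, alpha=0.05, upper=1e6):
    """CI for the noncentrality of a noncentral chi-squared statistic by inversion."""
    eps = 1e-8
    f_lo = lambda lam: ncx2.cdf(stat, df, lam) - (1 - alpha / 2)
    f_hi = lambda lam: ncx2.cdf(stat, df, lam) - alpha / 2
    # Lower endpoint solves cdf = 1 - alpha/2 (zero if no positive solution exists);
    # upper endpoint solves cdf = alpha/2.
    lam_lo = brentq(f_lo, eps, upper) if f_lo(eps) > 0 else 0.0
    lam_hi = brentq(f_hi, eps, upper) if f_hi(eps) > 0 else 0.0
    return lam_lo, lam_hi

# Example: observed identification-strength statistic 25.0 with 3 instruments (df = 3).
print(concentration_ci(25.0, df=3))
```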
By: | Ryosuke Igari (Faculty of Business Administration, Hosei University); Takahiro Hoshino (Faculty of Economics, Keio University) |
Abstract: | In statistics, researchers have rigorously investigated the reproductive property, which maintains that the sum of independent random variables with the same distribution follows the same family of distributions. However, even if the distribution of the sum of random variables has the reproductive property, estimating parameters appropriately from only summed observations is difficult, because of identification problems when the component random variables have different parameters. In this study, we develop a method to effectively estimate parameters from the sum of independent random variables with different parameters. In particular, we focus on the sum of Gamma random variables composed of two types of distributions. We generalize the result of Moschopoulos (1985) to a proportional hazard model with covariates and a frailty model to capture individual heterogeneities. Additionally, to estimate each parameter from the sum of random variables, we incorporate auxiliary information using quasi-Bayesian methods, and we propose an estimation procedure based on Markov chain Monte Carlo. We confirm the effectiveness of the proposed method through a simulation study and apply it to an interpurchase timing model in marketing. |
Keywords: | Survival Analysis, Random Effects, Auxiliary Information, Quasi-Bayesian Inference, Markov Chain Monte Carlo |
JEL: | C11 C41 M31 |
Date: | 2018–12–17 |
URL: | http://d.repec.org/n?u=RePEc:keo:dpaper:2018-021&r=all |
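A small numerical illustration of the identification problem described in the abstract above, assuming for simplicity a common rate parameter so that the sum of two independent Gamma variables is again Gamma: the likelihood of the observed sums depends on the two shape parameters only through their total, so the components cannot be separated without auxiliary information. The parameter values are illustrative.

```python
# Illustration: with a common rate b, Gamma(a1, b) + Gamma(a2, b) ~ Gamma(a1 + a2, b),
# so the likelihood of the sums identifies only a1 + a2, not (a1, a2) separately.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
a1, a2, rate = 2.0, 3.0, 1.5
s = rng.gamma(a1, 1 / rate, 5000) + rng.gamma(a2, 1 / rate, 5000)  # observed sums only

def loglik(shape1, shape2, rate):
    # Density of the sum under a common rate: Gamma(shape1 + shape2, rate).
    return gamma.logpdf(s, shape1 + shape2, scale=1 / rate).sum()

# Any split of the total shape 5.0 gives exactly the same likelihood:
print(loglik(2.0, 3.0, rate), loglik(1.0, 4.0, rate), loglik(4.5, 0.5, rate))
```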
By: | Phillip Heiler; Jana Mareckova |
Abstract: | This paper introduces a flexible regularization approach that reduces point estimation risk of group means stemming from e.g. categorical regressors, (quasi-)experimental data or panel data models. The loss function is penalized by adding weighted squared l2-norm differences between group location parameters and informative first-stage estimates. Under quadratic loss, the penalized estimation problem has a simple interpretable closed-form solution that nests methods established in the literature on ridge regression, discretized support smoothing kernels and model averaging methods. We derive risk-optimal penalty parameters and propose a plug-in approach for estimation. The large sample properties are analyzed in an asymptotic local to zero framework by introducing a class of sequences for close and distant systems of locations that is sufficient for describing a large range of data generating processes. We provide the asymptotic distributions of the shrinkage estimators under different penalization schemes. The proposed plug-in estimator uniformly dominates the ordinary least squares in terms of asymptotic risk if the number of groups is larger than three. Monte Carlo simulations reveal robust improvements over standard methods in finite samples. Real data examples of estimating time trends in a panel and a difference-in-differences study illustrate potential applications. |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1901.01898&r=all |
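A minimal sketch of the kind of closed-form shrinkage described in the abstract above, assuming quadratic loss, group-specific sample means, and a squared l2 penalty toward given first-stage estimates; the penalty weight lambda and the first-stage values are illustrative inputs, and the paper's plug-in choice of the risk-optimal penalty is not reproduced here.

```python
# Sketch: penalized group means under quadratic loss with an l2 penalty toward
# first-stage estimates mu0. Closed form: a precision-weighted average.
import numpy as np

def shrunken_group_means(y, g, mu0, lam, w=None):
    """y: outcomes, g: integer group labels, mu0: first-stage estimate per group,
    lam: penalty parameter, w: optional per-group penalty weights."""
    groups = np.unique(g)
    w = np.ones(len(groups)) if w is None else w
    out = np.empty(len(groups))
    for j, grp in enumerate(groups):
        yg = y[g == grp]
        n_g = len(yg)
        # argmin_mu  sum_i (y_i - mu)^2 + lam * w_g * (mu - mu0_g)^2
        out[j] = (yg.sum() + lam * w[j] * mu0[j]) / (n_g + lam * w[j])
    return out

y = np.array([1.0, 1.2, 0.8, 3.1, 2.9])
g = np.array([0, 0, 0, 1, 1])
print(shrunken_group_means(y, g, mu0=np.array([1.5, 2.5]), lam=2.0))
```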
By: | Davy Paindaveine; Thomas Verdebout |
Abstract: | Motivated by the fact that circular or spherical data are often much concentrated around a location θ, we consider inference about θ under high concentration asymptotic scenarios for which the probability of any fixed spherical cap centered at θ converges to one as the sample size n diverges to infinity. Rather than restricting to Fisher-von Mises-Langevin distributions, we consider a much broader, semiparametric, class of rotationally symmetric distributions indexed by the location parameter θ, a scalar concentration parameter κ and a functional nuisance f. We determine the class of distributions for which high concentration is obtained as κ diverges to infinity. For such distributions, we then consider inference (point estimation, confidence zone estimation, hypothesis testing) on θ in asymptotic scenarios where κ = κn diverges to infinity at an arbitrary rate with the sample size n. Our asymptotic investigation reveals that, interestingly, optimal inference procedures on θ show consistency rates that depend on f. Using asymptotics "à la Le Cam", we show that the spherical mean is, at any f, a parametrically super-efficient estimator of θ and that the Watson and Wald tests for H0: θ = θ0 enjoy similar, non-standard, optimality properties. Our results are illustrated by Monte Carlo simulations. From a technical point of view, our asymptotic derivations require challenging expansions of rotationally symmetric functionals for large arguments of the nuisance function f. |
Keywords: | Concentrated distributions, Directional statistics, Le Cam’s asymptotic theory of statistical experiments, Local asymptotic normality, Super-efficiency |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/280743&r=all |
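A tiny sketch of the spherical mean estimator discussed in the abstract above: the normalized sum of unit vectors, which the paper shows to be super-efficient under high concentration. The data-generating setup below (perturbing the true location and renormalizing) is purely illustrative and is not one of the paper's distributions.

```python
# Sketch: the spherical mean (normalized resultant) as an estimator of the
# location theta of a concentrated sample on the unit sphere.
import numpy as np

rng = np.random.default_rng(1)
p, n, kappa = 3, 500, 50.0
theta = np.array([0.0, 0.0, 1.0])                        # true location on the unit sphere

# Illustrative concentrated sample: perturb theta and renormalize (not vMF sampling).
x = theta + rng.normal(scale=1 / np.sqrt(kappa), size=(n, p))
x /= np.linalg.norm(x, axis=1, keepdims=True)

resultant = x.sum(axis=0)
spherical_mean = resultant / np.linalg.norm(resultant)   # point estimate of theta
print(spherical_mean, np.dot(spherical_mean, theta))     # inner product close to 1
```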
By: | Victor Chernozhukov; Kaspar Wuthrich; Yinchu Zhu |
Abstract: | This paper studies inference on treatment effects in aggregate panel data settings with a single treated unit and many control units. We propose new methods for making inference on average treatment effects in settings where both the number of pre-treatment and the number of post-treatment periods are large. We use linear models to approximate the counterfactual mean outcomes in the absence of the treatment. The counterfactuals are estimated using constrained Lasso, an essentially tuning-free regression approach that nests difference-in-differences and synthetic control as special cases. We propose a $K$-fold cross-fitting procedure to remove the bias induced by regularization. To avoid the estimation of the long-run variance, we construct a self-normalized $t$-statistic. The test statistic has an asymptotically pivotal distribution (a Student $t$-distribution with $K-1$ degrees of freedom), which makes our procedure very easy to implement. Our approach has several theoretical advantages. First, it does not rely on any sparsity assumptions. Second, it is fully robust against misspecification of the linear model. Third, it is more efficient than difference-in-means and difference-in-differences estimators. The proposed method demonstrates an excellent performance in simulation experiments, and is taken to a data application, where we re-evaluate the economic consequences of terrorism. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.10820&r=all |
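A rough sketch, in the spirit of the entry above but not the paper's exact algorithm: a counterfactual for the single treated unit is fit from control units by constrained least squares (a synthetic-control-type constraint, one special case of the constrained Lasso mentioned in the abstract), K estimates of the average effect are formed by cross-fitting over pre-treatment periods with a held-out bias correction, and inference uses a self-normalized t-statistic with K-1 degrees of freedom. All names and details here are illustrative assumptions.

```python
# Rough sketch (illustrative, not the paper's exact procedure): constrained counterfactual,
# K-fold cross-fitting over pre-treatment periods, and a self-normalized t-statistic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def fit_weights(Y0, y1):
    """Synthetic-control-type weights: nonnegative and summing to one (an illustrative
    special case of the constrained Lasso described in the abstract)."""
    J = Y0.shape[1]
    obj = lambda w: np.mean((y1 - Y0 @ w) ** 2)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0, 1)] * J, constraints=cons)
    return res.x

def crossfit_t_test(Y0_pre, y1_pre, Y0_post, y1_post, K=3):
    folds = np.array_split(np.arange(len(y1_pre)), K)
    att = []
    for hold in folds:
        keep = np.setdiff1d(np.arange(len(y1_pre)), hold)
        w = fit_weights(Y0_pre[keep], y1_pre[keep])
        gap_post = np.mean(y1_post - Y0_post @ w)
        gap_hold = np.mean(y1_pre[hold] - Y0_pre[hold] @ w)   # held-out bias correction
        att.append(gap_post - gap_hold)
    att = np.array(att)
    tstat = np.sqrt(K) * att.mean() / att.std(ddof=1)
    pval = 2 * (1 - student_t.cdf(abs(tstat), df=K - 1))
    return att.mean(), tstat, pval
```

Given pre- and post-treatment series for the treated unit and matrices of control outcomes, `crossfit_t_test` returns the averaged effect estimate and a p-value based on the t distribution with K-1 degrees of freedom.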
By: | Ben Deaner |
Abstract: | Nonparametric instrumental variables (NPIV) estimators are highly sensitive to the failure of instrumental validity. We show that even an arbitrarily small deviation from full instrumental validity can lead to an arbitrarily large asymptotic bias for a broad class of NPIV estimators. Strong smoothness conditions on the structural function can mitigate this problem. Unfortunately, if the researcher allows for an arbitrarily small failure of instrumental validity then the failure of such a smoothness condition is generally not testable and in fact one cannot identify any upper bound on the magnitude of the failure. To address these problems we propose an alternative method in which the structural function is treated as partially identified. Under our procedure the researcher achieves robust confidence sets using a priori bounds on the deviation from instrumental validity and approximation error. Our procedure is based on the sieve-minimum distance method and has an added advantage in that it reduces the need to choose the size of the sieve space either directly or algorithmically. We also present a related method that allows the researcher to assess the sensitivity of their NPIV estimates to misspecification. This sensitivity analysis can inform the choice of sieve space in point estimation. |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1901.01241&r=all |
By: | Prosper Dovonon; Alastair Hall |
Abstract: | This paper presents a limiting distribution theory for GMM and Indirect Inference estimators when local identification conditions fail at first-order but hold at second-order. These limit distributions are shown to be non-standard, but we show that they can be easily simulated, making it possible to perform inference about the parameters in this setting. We illustrate our results in the context of a dynamic panel data model in which the parameter of interest is identified locally at second order by non-linear moment restrictions but not at first order at a particular point in the parameter space. Our simulation results indicate that our theory leads to reliable inferences in moderate to large samples in the neighbourhood of this point of first-order identification failure. In contrast, inferences based on standard asymptotic theory (derived under the assumption of first-order local identification) are very misleading in the neighbourhood of the point of first-order local identification failure. |
Keywords: | Moment-based estimation, First-order identification failure, Minimum chi-squared estimation, Simulation-based estimation |
Date: | 2018–12–18 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2018s-37&r=all |
By: | Harold D Chiang |
Abstract: | This paper studies inference for multiple/many average partial effects when the outcome variable is binary or fractional, under a data-rich, cluster sampling environment. The number of average partial effects of interest can be much larger than the number of sample clusters. We propose a post-double-selection estimator as well as a Neyman orthogonal moment estimator, both based on l1-penalization, and explore their asymptotic properties. The proposed estimators do not require the oracle property for valid inference. We propose easy-to-implement algorithms for high-dimensional hypothesis testing and the construction of simultaneously valid confidence intervals that are cluster-robust, robust against imperfect model selection, and asymptotically non-conservative, using a new multiplier cluster bootstrap. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.09397&r=all |
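A generic sketch of the post-double-selection idea that the entry above builds on, shown for a single linear effect with lasso selection; it ignores the clustering, the binary/fractional outcome, and the many-effects aspects that the paper actually handles, and the sklearn-based setup is illustrative.

```python
# Generic post-double-selection sketch: select controls that predict the outcome,
# select controls that predict the treatment, then run OLS of y on d and the union.
# Illustrative only; the paper's estimators additionally handle clustering, many
# effects, and binary/fractional outcomes.
import numpy as np
from sklearn.linear_model import LassoCV
import statsmodels.api as sm

def post_double_selection(y, d, X):
    sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)   # controls predicting y
    sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)   # controls predicting d
    keep = np.union1d(sel_y, sel_d)
    Z = sm.add_constant(np.column_stack([d, X[:, keep]])) if keep.size else sm.add_constant(d)
    fit = sm.OLS(y, Z).fit()
    return fit.params[1], fit.bse[1]    # coefficient on d and its standard error

rng = np.random.default_rng(2)
n, p = 500, 100
X = rng.normal(size=(n, p))
d = X[:, 0] + rng.normal(size=n)
y = 0.5 * d + X[:, 0] - X[:, 1] + rng.normal(size=n)
print(post_double_selection(y, d, X))   # effect of d is 0.5 in this illustration
```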
By: | Davy Paindaveine; Julien Remy; Thomas Verdebout |
Abstract: | We consider inference on the first principal direction of a p-variate elliptical distribution. We do so in challenging double asymptotic scenarios for which this direction eventually fails to be identifiable. In order to achieve robustness not only with respect to such weak identifiability but also with respect to heavy tails, we focus on sign-based statistical procedures, that is, on procedures that involve the observations only through their direction from the center of the distribution. We actually consider the generic problem of testing the null hypothesis that the first principal direction coincides with a given direction of R^p. We first focus on weak identifiability setups involving single spikes (that is, involving spectra for which the smallest eigenvalue has multiplicity p-1). We show that, irrespective of the degree of weak identifiability, such setups offer local alternatives for which the corresponding sequence of statistical experiments converges in the Le Cam sense. Interestingly, the limiting experiments depend on the degree of weak identifiability. We exploit this convergence result to build optimal sign tests for the problem considered. In classical asymptotic scenarios where the spectrum is fixed, these tests are shown to be asymptotically equivalent to the sign-based likelihood ratio tests available in the literature. Unlike the latter, however, the proposed sign tests are robust to arbitrarily weak identifiability. We show that our tests meet the asymptotic level constraint irrespective of the structure of the spectrum, hence also in possibly multi-spike setups. Finally, we fully characterize the non-null asymptotic distributions of the corresponding test statistics under weak identifiability, which allows us to quantify the corresponding local asymptotic powers. Monte Carlo exercises confirm our asymptotic results. |
Keywords: | Le Cam's asymptotic theory of statistical experiments, Local asymptotic normality, Principal component analysis, Sign tests, Weak identifiability. |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/280742&r=all |
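A small sketch of the sign-based ingredient on which the entry above relies: spatial signs (observations divided by their norms) and the leading eigenvector of their covariance as a sign-based estimate of the first principal direction. The paper's optimal test statistics are not reproduced here; the single-spike, heavy-tailed sample below is illustrative.

```python
# Sketch: spatial signs and a sign-based estimate of the first principal direction.
import numpy as np

rng = np.random.default_rng(3)
n, p = 1000, 5
# Illustrative single-spike sample with heavy tails (scale mixture of normals), centered at 0.
lam = np.array([4.0, 1.0, 1.0, 1.0, 1.0])
z = rng.normal(size=(n, p)) * np.sqrt(lam)
x = z / np.sqrt(rng.chisquare(3, size=(n, 1)) / 3)        # heavy-tailed scale mixture

signs = x / np.linalg.norm(x, axis=1, keepdims=True)       # data enter only through directions
S = signs.T @ signs / n                                     # sign covariance matrix
eigval, eigvec = np.linalg.eigh(S)
first_direction = eigvec[:, -1]                             # sign-based estimate of the spike
print(first_direction)
```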
By: | Matthew A. Masten; Alexandre Poirier |
Abstract: | What should researchers do when their baseline model is refuted? We provide four constructive answers. First, researchers can measure the extent of falsification. To do this, we consider continuous relaxations of the baseline assumptions of concern. We then define the falsification frontier: the boundary between the set of assumptions which falsify the model and those which do not. This frontier provides a quantitative measure of the extent of falsification. Second, researchers can present the identified set for the parameter of interest under the assumption that the true model lies somewhere on this frontier. We call this the falsification adaptive set. This set generalizes the standard baseline estimand to account for possible falsification. Third, researchers can present the identified set for a specific point on this frontier. Finally, as a sensitivity analysis, researchers can present identified sets for points beyond the frontier. To illustrate these four ways of salvaging falsified models, we study overidentifying restrictions in two instrumental variable models: a homogeneous effects linear model, and heterogeneous effect models with either binary or continuous outcomes. In the linear model, we consider the classical overidentifying restrictions implied when multiple instruments are observed. We generalize these conditions by considering continuous relaxations of the classical exclusion restrictions. By sufficiently weakening the assumptions, a falsified baseline model becomes non-falsified. We obtain analogous results in the heterogeneous effects models, where we derive identified sets for marginal distributions of potential outcomes, falsification frontiers, and falsification adaptive sets under continuous relaxations of the instrument exogeneity assumptions. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.11598&r=all |
By: | Lechner, Michael |
Abstract: | Uncovering the heterogeneity of causal effects of policies and business decisions at various levels of granularity provides substantial value to decision makers. This paper develops new estimation and inference procedures for multiple treatment models in a selection-on-observables framework by modifying the Causal Forest approach suggested by Wager and Athey (2018). The new estimators have desirable theoretical and computational properties for various aggregation levels of the causal effects. An Empirical Monte Carlo study shows that they may outperform previously suggested estimators. Inference tends to be accurate for effects relating to larger groups and conservative for effects relating to fine levels of granularity. An application to the evaluation of an active labour market programme shows the value of the new methods for applied research. |
Keywords: | Causal machine learning, statistical learning, average treatment effects, conditional average treatment effects, multiple treatments, selection-on-observables, causal forests |
JEL: | C21 J68 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2019:01&r=all |
By: | Aknouche, Abdelhakim; Demmouche, Nacer; Touche, Nassim |
Abstract: | A Bayesian MCMC estimate of a periodic asymmetric power GARCH (PAP-GARCH) model, whose coefficients, power, and innovation distribution are periodic over time, is proposed. The properties of the PAP-GARCH model, such as periodic ergodicity, finiteness of moments and tail behaviors of the marginal distributions, are first examined. Then, a Bayesian MCMC estimate based on Griddy-Gibbs sampling is proposed when the distribution of the innovation of the model is standard Gaussian or standardized Student's t with a periodic degree of freedom. Selecting the orders and the period of the PAP-GARCH model is carried out via the Deviance Information Criterion (DIC). The performance of the proposed Griddy-Gibbs estimate is evaluated through simulated and real data. In particular, applications to Bayesian volatility forecasting and Value-at-Risk estimation for daily returns on the S&P500 index are considered. |
Keywords: | Periodic Asymmetric Power GARCH model, probability properties, Griddy-Gibbs estimate, Deviance Information Criterion, Bayesian forecasting, Value at Risk. |
JEL: | C11 C15 C58 |
Date: | 2018–05–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:91136&r=all |
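A minimal sketch of a single Griddy-Gibbs update, here for the ARCH coefficient of a plain Gaussian GARCH(1,1) rather than the full periodic asymmetric power model of the entry above: evaluate the conditional log posterior on a grid, normalize, and draw by inverse-CDF interpolation. Grid bounds, the flat prior, and the data are illustrative.

```python
# Sketch of one Griddy-Gibbs update: sample a single GARCH(1,1) coefficient from its
# full conditional by evaluating the posterior on a grid and inverting the CDF.
import numpy as np

def garch_loglik(y, omega, alpha, beta):
    h = np.empty_like(y)
    h[0] = np.var(y)
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + y ** 2 / h)

def griddy_gibbs_alpha(y, omega, beta, rng, grid=np.linspace(1e-4, 0.5, 200)):
    logpost = np.array([garch_loglik(y, omega, a, beta) for a in grid])  # flat prior on the grid
    w = np.exp(logpost - logpost.max())
    cdf = np.cumsum(w) / w.sum()
    return np.interp(rng.uniform(), cdf, grid)        # inverse-CDF draw by interpolation

rng = np.random.default_rng(4)
y = rng.normal(size=300) * 0.01                        # illustrative return series
print(griddy_gibbs_alpha(y, omega=1e-5, beta=0.8, rng=rng))
```

In a full sampler, each coefficient (and, in the paper's model, each periodic regime's parameters) would be updated in turn with a step of this form.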
By: | Kohtaro Hitomi (Kyoto Institute of Technology); Masamune Iwasawa (Graduate School of Economics, The University of Tokyo); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University) |
Abstract: | We propose a rate-optimal specification test for instrumental variable regression models based on the nearest neighbor observation with respect to the instruments. The proposed test has uniform power against a set of non-smooth alternatives. The optimal minimax rate is n^{-1/4} for any dimension of instruments, where n is the sample size. This rate coincides with the fastest possible rate achievable by any test under the local alternative setting when the alternative is constructed by a non-smooth function and/or the dimension of the instrument is large. Since such a local alternative belongs to the set of alternatives considered in this study, our test is preferable in a large-dimension setting. In simulations and empirical applications with a large dimension of instruments, we observe that the test works well and that the power approaches one reasonably fast as the sample size increases. |
Keywords: | instrumental variable model; specification test; minimax approach; k-nearest neighbor method |
JEL: | C12 C14 |
Date: | 2018–03 |
URL: | http://d.repec.org/n?u=RePEc:kyo:wpaper:986&r=all |
By: | Gabriele Fiorentini (Università di Firenze, Italy; Rimini Centre for Economic Analysis); Enrique Sentana (CEMFI, Spain) |
Abstract: | We propose tests for smooth but persistent serial correlation in risk premia and volatilities that exploit the non-normality of financial returns. Our parametric tests are robust to distributional misspecification, while our semiparametric tests are as powerful as if we knew the true return distribution. Local power analyses confirm their gains over existing methods, while Monte Carlo exercises assess their finite sample reliability. We apply our tests to quarterly returns on the five Fama-French factors for international stocks, whose distributions are mostly symmetric and fat-tailed. Our results highlight noticeable differences across regions and factors and confirm the fragility of Gaussian tests. |
Keywords: | financial forecasting, moment tests, misspecification, robustness, volatility |
JEL: | C12 C22 G17 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:19-01&r=all |
By: | Hiroaki Kaido; Kaspar Wuthrich |
Abstract: | The instrumental variable quantile regression (IVQR) model of Chernozhukov and Hansen (2005, 2006) is a flexible and powerful tool for evaluating the impact of endogenous covariates on the whole distribution of the outcome of interest. Estimation, however, is computationally burdensome because the GMM objective function is non-smooth and non-convex. This paper shows that the IVQR estimation problem can be decomposed into a set of conventional quantile regression sub-problems, which are convex and can be solved efficiently. This allows for reformulating the original estimation problem as the problem of finding the fixed point of a low-dimensional map. This reformulation leads to new identification results and, most importantly, to practical, easy-to-implement, and computationally tractable estimators. We explore estimation algorithms based on the contraction mapping theorem and algorithms based on root-finding methods. We prove consistency and asymptotic normality of our estimators and establish the validity of a bootstrap procedure for estimating the limiting laws. Monte Carlo simulations support the estimator's enhanced computational tractability and demonstrate desirable finite sample properties. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.10925&r=all |
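A minimal sketch of the decomposition described in the entry above, for a scalar endogenous regressor D with one instrument Z: for each candidate coefficient alpha, run a conventional quantile regression of Y - alpha*D on the exogenous covariates and Z, and root-find the alpha at which the instrument's coefficient is zero. The bracketing interval, data, and statsmodels-based setup are illustrative.

```python
# Sketch: IVQR via conventional quantile regression sub-problems plus root finding.
import numpy as np
import statsmodels.api as sm
from scipy.optimize import brentq

def instrument_coef(alpha, y, d, z, X, tau):
    regressors = sm.add_constant(np.column_stack([X, z]))
    fit = sm.QuantReg(y - alpha * d, regressors).fit(q=tau)
    return fit.params[-1]            # coefficient on the instrument

def ivqr_alpha(y, d, z, X, tau=0.5, bracket=(-5.0, 5.0)):
    # The IVQR estimate is the alpha at which the instrument coefficient vanishes.
    return brentq(lambda a: instrument_coef(a, y, d, z, X, tau), *bracket)

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 1))
z = rng.normal(size=n)
u = rng.normal(size=n)
d = 0.8 * z + 0.5 * u + rng.normal(size=n)     # endogenous regressor
y = 1.0 * d + X[:, 0] + u                      # true alpha = 1
print(ivqr_alpha(y, d, z, X))
```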
By: | Delle Monache, Davide (Bank of Italy); Petrella, Ivan (University of Warwick and CEPR) |
Abstract: | In this work we explore a novel approach to estimating Gaussian state space models in the classical framework without making use of the Kalman filter and Kalman smoother. By formulating the model in matrix form, we obtain expressions for the likelihood function and the smoothed state vector that are computationally feasible and generally more efficient than the standard filtering approach. Finally, we highlight a convenient way to retrieve the filtering weights and to deal with data irregularities. |
Keywords: | State space models, Likelihood, Smoother, Sparse matrices |
JEL: | C22 C32 C51 C53 C82 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:wrk:wrkemf:19&r=all |
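A minimal sketch of the matrix-form idea from the entry above for the simplest case, a Gaussian local level model: write the implied covariance of the full observation vector and evaluate the likelihood and smoothed states by direct linear algebra instead of the Kalman recursions. Dense algebra is used here for brevity, whereas the paper exploits sparse matrix structure; the model and parameter values are illustrative.

```python
# Sketch: likelihood and smoothed states of a Gaussian local level model
#   y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t   (mu_0 = 0),
# evaluated directly in matrix form, with no Kalman filter or smoother.
import numpy as np
from scipy.stats import multivariate_normal

def matrix_form_local_level(y, sig_eps2, sig_eta2):
    T = len(y)
    Dinv = np.tril(np.ones((T, T)))                # inverse of the differencing matrix D
    V_mu = sig_eta2 * Dinv @ Dinv.T                # Var(mu), since mu = D^{-1} eta
    V_y = V_mu + sig_eps2 * np.eye(T)              # Var(y)
    loglik = multivariate_normal(mean=np.zeros(T), cov=V_y).logpdf(y)
    smoothed_mu = V_mu @ np.linalg.solve(V_y, y)   # E[mu | y] by projection
    return loglik, smoothed_mu

rng = np.random.default_rng(6)
mu = np.cumsum(rng.normal(scale=0.1, size=200))
y = mu + rng.normal(scale=0.5, size=200)
ll, smoothed = matrix_form_local_level(y, sig_eps2=0.25, sig_eta2=0.01)
print(ll, smoothed[:5])
```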
By: | Neng-Chieh Chang |
Abstract: | This paper discusses difference-in-differences (DID) estimation when there exist many control variables, potentially more than the sample size. In this case, traditional estimation methods, which require a limited number of variables, do not work. One may consider using statistical or machine learning (ML) methods. However, by the well-known inference theory for ML methods proposed in Chernozhukov et al. (2018), directly applying ML methods to the conventional semiparametric DID estimators will cause significant bias and make these DID estimators fail to be $\sqrt{N}$-consistent. This article proposes three new DID estimators for three different data structures, which are able to shrink the bias and achieve $\sqrt{N}$-consistency and asymptotic normality with mean zero when applying ML methods. This leads to straightforward inferential procedures. In addition, I show that these new estimators have the small bias property (SBP), meaning that their bias converges to zero faster than the pointwise bias of the nonparametric estimators on which they are based. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.10846&r=all |
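A generic sketch of the cross-fitting recipe from Chernozhukov et al. (2018) that the entry above builds on, shown for a simple partially linear parameter rather than the paper's DID-specific estimators: fit the nuisance functions on held-out folds, then run a residual-on-residual regression. The learners and data are illustrative.

```python
# Generic double/debiased ML sketch with K-fold cross-fitting (partially linear model),
# illustrating the orthogonalization the abstract relies on; not the paper's DID estimators.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plm(y, d, X, K=5, seed=0):
    res_y, res_d = np.empty_like(y), np.empty_like(d)
    for train, test in KFold(K, shuffle=True, random_state=seed).split(X):
        my = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        md = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        res_y[test] = y[test] - my.predict(X[test])      # out-of-fold outcome residuals
        res_d[test] = d[test] - md.predict(X[test])      # out-of-fold treatment residuals
    theta = np.sum(res_d * res_y) / np.sum(res_d ** 2)   # residual-on-residual slope
    psi = (res_y - theta * res_d) * res_d
    se = np.sqrt(np.mean(psi ** 2) / np.mean(res_d ** 2) ** 2 / len(y))
    return theta, se

rng = np.random.default_rng(9)
n = 2000
X = rng.normal(size=(n, 10))
d = np.sin(X[:, 0]) + rng.normal(size=n)
y = 0.5 * d + X[:, 0] ** 2 + rng.normal(size=n)
print(dml_plm(y, d, X))   # effect of d is 0.5 in this illustration
```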
By: | Tobias Hartl; Roland Weigand |
Abstract: | We propose convenient inferential methods for potentially nonstationary multivariate unobserved components models with fractional integration and cointegration. Based on finite-order ARMA approximations in the state space representation, maximum likelihood estimation can make use of the EM algorithm and related techniques. The approximation outperforms the frequently used autoregressive or moving average truncation, both in terms of computational costs and with respect to approximation quality. Monte Carlo simulations reveal good estimation properties of the proposed methods for processes of different complexity and dimension. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.09142&r=all |
By: | Chen, Siyan; Desiderio, Saul |
Abstract: | In this paper we present a simple approach to factor analysis to estimate the true correlations between observable variables and a single common factor. We first provide the exact formula for the correlations under the orthogonality conditions, and then we show how to consistently estimate them using a random sample and a proper instrumental variable. |
Keywords: | Factor analysis, correlation, instrumental variable estimation |
JEL: | C1 C38 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:90426&r=all |
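A small sketch of the instrumental-variable logic in a one-factor setting, following my reading of the entry above (the paper's exact estimator may differ): with a single common factor and mutually orthogonal errors, the correlation between any observed variable and the factor can be recovered from pairwise correlations by using a third observed variable as an instrument, via the classical triad identity.

```python
# Sketch: recover corr(x_1, factor) from pairwise correlations in a one-factor model
# x_j = lam_j * f + e_j (f and the e_j mutually uncorrelated, everything standardized),
# using a third variable as an "instrument": lam_1^2 = r_12 * r_13 / r_23.
import numpy as np

rng = np.random.default_rng(7)
n = 20000
f = rng.normal(size=n)
lam = np.array([0.8, 0.6, 0.5])
X = lam * f[:, None] + rng.normal(size=(n, 3)) * np.sqrt(1 - lam ** 2)   # unit variances

R = np.corrcoef(X, rowvar=False)
lam1_hat = np.sqrt(R[0, 1] * R[0, 2] / R[1, 2])     # corr(x_1, f) via the triad identity
print(lam1_hat, np.corrcoef(X[:, 0], f)[0, 1])      # both close to 0.8
```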
By: | Timothy M. Christensen |
Abstract: | This paper studies identification and estimation of a class of dynamic models in which the decision maker (DM) is uncertain about the data-generating process. The DM maximizes his or her continuation value under a worst-case model which lies within a nonparametric neighborhood of a benchmark model. The DM's benchmark model and preference parameters are jointly underidentified. With the DM's benchmark model fixed, primitive conditions are established for nonparametric identification of the worst-case model and local identification of the DM's preference parameters. The key step in the identification analysis is to establish existence and uniqueness of the DM's continuation value function allowing for an unbounded state space and unbounded utilities, both of which are important in applications. To do so, we derive new fixed-point results which use monotonicity and convexity of the value function recursion and which are embedded within a Banach space of "thin-tailed" functions that arises naturally from the structure of the recursion. The fixed-point results are quite general and are also applied to models where the DM learns about a hidden state and to Rust-type dynamic discrete choice models. A perturbation result is derived which provides a necessary and sufficient condition for consistent estimation of continuation values and the worst-case model. A robust consumption-investment problem is studied as an empirical application and some connections are drawn with the literature on macroeconomic uncertainty. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.11246&r=all |
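A minimal sketch of a worst-case continuation-value recursion of the kind the entry above analyzes, assuming, as one standard formulation from the robustness literature and not necessarily the paper's, a relative-entropy penalty with multiplier theta, under which the distorted expectation takes the exponential-tilting form -theta * log E[exp(-V/theta)]. The two-state benchmark chain and parameters are illustrative.

```python
# Sketch: value iteration with a worst-case distortion of the benchmark transition law,
# using the multiplier (relative-entropy penalty) formulation. Illustrative two-state example.
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])    # benchmark transition probabilities
u = np.array([1.0, 0.0])                   # per-period utility by state
beta, theta = 0.95, 2.0                    # discount factor, robustness multiplier

V = np.zeros(2)
for _ in range(2000):                       # iterate the monotone, convex recursion
    worst_case_E = -theta * np.log(P @ np.exp(-V / theta))
    V = u + beta * worst_case_E

# Implied worst-case transition probabilities (exponential tilting of each row of P):
M = np.exp(-V / theta)
P_worst = P * M / (P @ M)[:, None]
print(V, P_worst)
```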
By: | Victor Lapshin (National Research University Higher School of Economics); Sofia Sokhatskaya (National Research University Higher School of Economics) |
Abstract: | Estimates of the term structure of interest rates depend heavily on the quality of the market data from which they are constructed. Estimated rates can be incorrect because of observation errors and omissions in the data. The usual way to deal with the heteroscedasticity of observation errors is by introducing weights in the fitting procedure. There is currently no consensus in the literature about the choice of such weights. We introduce a non-parametric bootstrap-based method of introducing observation errors drawn from the empirical distribution into the model data, which allows us to perform a comparison test of different weighting schemes without implicitly favoring one of the contesting models, a common design flaw in comparison studies. We use government bonds from several countries with examples of both liquid and illiquid bond markets. We show that realistic observation errors can greatly distort the estimated yield curve. Moreover, we show that using different weights or other modifications to account for observation errors in bond price data does not always improve the term structure estimates, and often worsens the situation. Based on our comparison, we advise using either equal weights or weights proportional to the inverse duration in practical applications. |
Keywords: | term structure of interest rates, zero-coupon yield curve, bond prices, weights, cross-validation. |
JEL: | E43 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:hig:wpaper:73/fe/2018&r=all |
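A rough sketch of the bootstrap logic described in the entry above, in a deliberately simplified setting: fit a parametric curve to observed yields, treat the fitted residuals as the empirical error distribution, resample them onto the fitted curve, and refit under different weighting schemes to compare their robustness. The Nelson-Siegel form, the use of yields rather than bond prices, and the particular weights are illustrative simplifications of the paper's setup.

```python
# Rough sketch (simplified): nonparametric bootstrap of observation errors for
# comparing weighting schemes in yield curve fitting.
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(m, b0, b1, b2):
    x = m / 2.0                                    # decay parameter fixed for simplicity
    return b0 + b1 * (1 - np.exp(-x)) / x + b2 * ((1 - np.exp(-x)) / x - np.exp(-x))

rng = np.random.default_rng(8)
maturities = np.array([0.5, 1, 2, 3, 5, 7, 10, 20, 30.0])
observed = nelson_siegel(maturities, 0.04, -0.02, 0.01) + rng.normal(0, 0.002, maturities.size)

fit0, _ = curve_fit(nelson_siegel, maturities, observed, p0=[0.03, 0.0, 0.0])
residuals = observed - nelson_siegel(maturities, *fit0)   # empirical observation errors

def bootstrap_rmse(weights, B=200):
    """Refit under resampled errors; report RMSE of the refitted curve vs. the baseline fit."""
    errs = []
    for _ in range(B):
        y_star = nelson_siegel(maturities, *fit0) + rng.choice(residuals, maturities.size)
        fit_b, _ = curve_fit(nelson_siegel, maturities, y_star, p0=fit0,
                             sigma=1 / np.sqrt(weights))
        errs.append(np.mean((nelson_siegel(maturities, *fit_b) - nelson_siegel(maturities, *fit0)) ** 2))
    return np.sqrt(np.mean(errs))

print(bootstrap_rmse(np.ones(maturities.size)), bootstrap_rmse(1 / maturities))
```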
By: | Tobias Hartl; Roland Weigand |
Abstract: | We investigate a setup for fractionally cointegrated time series which is formulated in terms of latent integrated and short-memory components. It accommodates nonstationary processes with different fractional orders and cointegration of different strengths and is applicable in high-dimensional settings. In an application to realized covariance matrices, we find that orthogonal short- and long-memory components provide a reasonable fit and competitive out-of-sample performance compared to several competitor methods. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.09149&r=all |
By: | Yanchun Jin (Graduate School of Economics, Kyoto University); Ryo Okui (NYU Shanghai) |
Abstract: | We propose an econometric procedure to test for the presence of overconfidence using data collected by "ranking experiments." Our approach applies the techniques from the moment inequality literature. Although a ranking experiment is a typical way to collect data for analyzing overconfidence, Benoit and Dubra (2011) show that a ranking experiment may generate data that indicate overconfidence even if participants are purely rational Bayesian updaters. Instead, they provide a set of inequalities that are consistent with purely rational Bayesian updaters. We propose to apply the tests of moment inequalities developed by Romano et al. (2014) to test such a set of inequalities. Then, we examine the data from Svenson (1981) on driving safety. Our results indicate the presence of overconfidence on safety among the US subjects tested by Svenson. However, the other cases tested do not show evidence of overconfidence. |
Keywords: | overconfidence; ranking experiments; moment inequality; driving safety |
JEL: | C12 D03 D81 R41 |
Date: | 2018–01 |
URL: | http://d.repec.org/n?u=RePEc:kyo:wpaper:984&r=all |
By: | Natalia Lazzati; John K.-H. Quah; Koji Shirai (School of Economics, Kwansei Gakuin University) |
Abstract: | We develop a nonparametric approach to test for monotone behavior in optimizing agents and to make out-of-sample predictions. Our approach can be applied to simultaneous games with ordered actions, with agents playing pure strategy Nash equilibria or Bayesian Nash equilibria. We require neither parametric assumptions on payoff functions nor distributional assumptions on the unobserved heterogeneity of agents. Multiplicity of optimal solutions (or equilibria) is not excluded, and we are agnostic about how they are selected. To illustrate how our approach works, we include an empirical application to an IO entry game. |
Keywords: | revealed preference; monotone comparative statics; single crossing differences; supermodular games; entry games |
JEL: | C1 C6 C7 D4 L1 |
Date: | 2018–04 |
URL: | http://d.repec.org/n?u=RePEc:kgu:wpaper:184&r=all |
By: | Nail Kashaev |
Abstract: | Identification of discrete outcome models is often established by using special covariates that have full support. This paper shows how these identification results can be extended to a large class of commonly used semiparametric discrete outcome models when all covariates are bounded. I apply the proposed methodology to multinomial choice models, bundles models, and finite games of complete information. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.05555&r=all |
By: | David Preinerstorfer |
Abstract: | In testing for correlation of the errors in regression models, the power of tests can be very low for strongly correlated errors. This counterintuitive phenomenon has become known as the "zero-power trap". Despite a considerable amount of literature devoted to this problem, mainly focusing on its detection, a convincing solution has not yet been found. In this article we first discuss theoretical results concerning the occurrence of the zero-power trap phenomenon. Then, we suggest and compare three ways to avoid it. Given an initial test that suffers from the zero-power trap, the method we recommend for practice leads to a modified test whose power converges to one as the correlation gets very strong. Furthermore, the modified test has approximately the same power function as the initial test, and thus approximately preserves all of its optimality properties. We also provide some numerical illustrations in the context of testing for network generated correlation. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1812.10752&r=all |
By: | Fabio Canova; Filippo Ferroni |
Abstract: | We study what happens to identified shocks and to dynamic responses when the structural model features q disturbances and m endogenous variables, q = m, but only m1 |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:bny:wpaper:0071&r=all |