on Econometrics |
By: | Jörg Breitung (Institute of Econometrics, University of Cologne); Ralf Brüggemann (Department of Economics, University of Konstanz) |
Abstract: | In this paper we provide a general framework for linear projection estimators for impulse responses in structural vector autoregressions (SVAR). An important advantage of our projection estimator is that for a large class of SVAR systems (including the recursive (Cholesky) identification scheme) standard OLS inference is valid without adjustment for generated regressors, autocorrelated errors or nonstationary variables. We also provide a framework for SVAR models that can be estimated by instrumental (proxy) variables. We show that this class of models (which also includes identification by long-run restrictions) results in a set of quadratic moment conditions that can be used to obtain the asymptotic distribution of the estimator, whereas standard inference based on instrumental variable (IV) projections is invalid. Furthermore, we propose a generalized least squares (GLS) version of the projections that performs similarly to the conventional (iterated) method of estimating impulse responses by inverting the estimated SVAR representation into its MA(∞) representation. Monte Carlo experiments indicate that the proposed OLS projections perform similarly to Jordà’s (2005) projection estimator but enable us to apply standard inference to the estimated impulse responses. The GLS versions of the projections provide estimates with much smaller standard errors and narrower confidence intervals when the horizon h of the impulse responses gets large. |
Keywords: | structural vector autoregressive models, impulse responses, local projections |
JEL: | C32 C51 |
Date: | 2019–12–11 |
URL: | http://d.repec.org/n?u=RePEc:knz:dpteco:1905&r=all |
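A minimal numerical sketch of the OLS projection idea described in the abstract above, assuming a simulated bivariate VAR(1) with recursive (Cholesky) identification; the lag length, variable names and simulated data are illustrative, and the paper's GLS refinement is not reproduced:

```python
# Sketch: OLS local projections of y_{t+h} on a Cholesky-identified shock.
import numpy as np

rng = np.random.default_rng(0)
T, A = 500, np.array([[0.5, 0.1], [0.3, 0.4]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

# Reduced-form VAR(1) residuals by OLS, then Cholesky identification
X = np.column_stack([np.ones(T - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
u = y[1:] - X @ B
P = np.linalg.cholesky(u.T @ u / len(u))
eps1 = np.linalg.solve(P, u.T).T[:, 0]           # identified first structural shock

# Projection of y_{2,t+h} on eps_{1,t}, controlling for one lag of y
H, irf = 12, []
for h in range(H + 1):
    lhs = y[1 + h:, 1]                           # y_{2,t+h}
    rhs = np.column_stack([np.ones(len(lhs)), eps1[:len(lhs)], y[:len(lhs), :]])
    b, *_ = np.linalg.lstsq(rhs, lhs, rcond=None)
    irf.append(b[1])                             # impulse response at horizon h
print(np.round(irf, 3))
```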
By: | Xiaohong Chen (Cowles Foundation, Yale University); Zhuo Huang (Peking University); Yanping Yi (School of Economics and Academy of Financial Research, Zhejiang University) |
Abstract: | This paper considers estimation of semi-nonparametric GARCH filtered copula models in which the individual time series are modelled by semi-nonparametric GARCH and the joint distributions of the multivariate standardized innovations are characterized by parametric copulas with nonparametric marginal distributions. The models extend those of Chen and Fan (2006) to allow for semi-nonparametric conditional means and volatilities, which are estimated via the method of sieves such as splines. The fitted residuals are then used to estimate the copula parameters and the marginal densities of the standardized innovations jointly via the sieve maximum likelihood (SML). We show that, even using nonparametrically filtered data, both our SML and the two-step copula estimator of Chen and Fan (2006) are still root-n consistent and asymptotically normal, and the asymptotic variances of both estimators do not depend on the nonparametric filtering errors. Even more surprisingly, our SML copula estimator using the filtered data achieves the full semiparametric efficiency bound as if the standardized innovations were directly observed. These nice properties lead to simple and more accurate estimation of Value-at-Risk (VaR) for multivariate financial data with flexible dynamics, contemporaneous tail dependence and asymmetric distributions of innovations. Monte Carlo studies demonstrate that our SML estimators of the copula parameters and the marginal distributions of the standardized innovations have smaller variances and smaller mean squared errors compared to those of the two-step estimators in finite samples. A real data application is presented. |
Keywords: | Semi-nonparametric dynamic models, Residual copulas, Semiparametric multistep, Residual sieve maximum likelihood, Semiparametric efficiency |
JEL: | C14 C22 G32 |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2215&r=all |
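A hedged sketch of the second (copula) step on already-standardized innovations, fitting a Gaussian copula by normal scores of empirical ranks; the paper's semi-nonparametric GARCH filtering and sieve ML estimation are not reproduced, and the simulated inputs below are purely illustrative:

```python
# Sketch: fit a Gaussian copula to standardized innovations via normal scores.
# Assumes the GARCH filtering step has already produced z1, z2 (simulated here).
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(1)
n = 1000
common = rng.normal(size=n)
z1 = 0.7 * common + 0.71 * rng.normal(size=n)     # stand-ins for filtered innovations
z2 = 0.7 * common + 0.71 * rng.normal(size=n)

def normal_scores(z):
    # empirical CDF ranks mapped through the standard normal quantile function
    u = rankdata(z) / (len(z) + 1)
    return norm.ppf(u)

s1, s2 = normal_scores(z1), normal_scores(z2)
rho_hat = np.corrcoef(s1, s2)[0, 1]               # Gaussian-copula correlation estimate
print("estimated copula correlation:", round(rho_hat, 3))
```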
By: | Isaiah Andrews (Institute for Fiscal Studies and Harvard University); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Adam McCloskey (Institute for Fiscal Studies and Brown University) |
Abstract: | In an important class of econometric problems, researchers select a target parameter by maximizing the Euclidean norm of a data-dependent vector. Examples that can be cast into this framework include threshold regression models with estimated thresholds and structural break models with estimated breakdates. Estimation and inference procedures that ignore the randomness of the target parameter can be severely biased and misleading when this randomness is non-negligible. This paper proposes conditional and unconditional inference in such settings, reflecting the data-dependent choice of target parameters. We detail the construction of quantile-unbiased estimators and confidence sets with correct coverage, and prove their asymptotic validity under data generating processes such that the target parameter remains random in the limit. We also provide a novel sample splitting approach that improves on conventional split-sample inference. |
Date: | 2019–10–15 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:51/19&r=all |
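A toy sketch of the sample-splitting idea in a "winner's curse" setting where the target is the mean of the component with the largest absolute sample mean; the paper's quantile-unbiased and conditional procedures are more refined and are not reproduced here, and all numbers are illustrative:

```python
# Sketch: naive vs. split-sample confidence intervals when the target parameter
# is selected by maximizing the norm (here, |sample mean|) over components.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu = np.zeros(10)                      # all true means are zero: selection is pure noise
n, reps, z = 100, 2000, norm.ppf(0.975)
cover_naive = cover_split = 0
for _ in range(reps):
    X = rng.normal(mu, 1.0, size=(n, 10))
    # naive: select and estimate on the same data
    j = np.argmax(np.abs(X.mean(axis=0)))
    m, se = X[:, j].mean(), X[:, j].std(ddof=1) / np.sqrt(n)
    cover_naive += abs(m - mu[j]) <= z * se
    # split: select on the first half, build the interval on the second half
    j2 = np.argmax(np.abs(X[: n // 2].mean(axis=0)))
    h = X[n // 2:, j2]
    m2, se2 = h.mean(), h.std(ddof=1) / np.sqrt(len(h))
    cover_split += abs(m2 - mu[j2]) <= z * se2
print("coverage naive:", cover_naive / reps, " split:", cover_split / reps)
```

The naive interval undercovers because the same noise drives both the selection and the estimate; splitting restores nominal coverage at the cost of using only half the data for estimation.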
By: | Anisha Ghosh (Institute for Fiscal Studies); Oliver Linton (Institute for Fiscal Studies and University of Cambridge) |
Abstract: | We propose a solution to the measurement error problem that plagues the estimation of the relation between the expected return of the stock market and its conditional variance due to the latency of these conditional moments. We use intra-period returns to construct a nonparametric proxy for the latent conditional variance in the first step which is subsequently used as an input in the second step to estimate the parameters characterizing the risk-return tradeoff via a GMM approach. We propose a bias-correction to the standard GMM estimator derived under a double asymptotic framework, wherein the number of intra-period returns, N, as well as the number of low frequency time periods, T , simultaneously go to infinity. Simulation exercises show that the bias-correction is particularly relevant for small values of N which is the case in empirically realistic scenarios. The methodology lends itself to additional applications, such as the empirical evaluation of factor models, wherein the factor betas may be estimated using intra-period returns and the unexplained returns or alphas subsequently recovered at lower frequencies. |
Date: | 2019–11–29 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:65/19&r=all |
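A minimal sketch of the first-step nonparametric variance proxy (realized variance from intra-period returns) followed by a second-step risk-return regression; the double-asymptotic bias correction proposed in the paper is not included, and the simulated data and parameter values are illustrative:

```python
# Sketch: realized variance from N intra-period returns as a proxy for the latent
# conditional variance, then a second-step regression of period returns on the proxy.
import numpy as np

rng = np.random.default_rng(3)
T, N = 240, 78                                   # low-frequency periods, intra-period returns
true_var = 0.02 * (1 + 0.5 * rng.random(T))      # latent conditional variances
lam = 2.0                                        # risk-return (price of variance) parameter
rv, ret = np.empty(T), np.empty(T)
for t in range(T):
    intraday = rng.normal(0.0, np.sqrt(true_var[t] / N), size=N)
    rv[t] = np.sum(intraday ** 2)                # realized variance proxy
    ret[t] = lam * true_var[t] + np.sqrt(true_var[t]) * rng.normal()

X = np.column_stack([np.ones(T), rv])
b, *_ = np.linalg.lstsq(X, ret, rcond=None)      # just-identified moment condition = OLS here
print("estimated risk-return coefficient:", round(b[1], 3))
# Measurement error in rv attenuates the estimate, which is why a bias-correction
# matters when the number of intra-period returns N is small.
```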
By: | Rahul Mukerjee (Indian Institute of Management Calcutta); Tirthankar Dasgupta (Rutgers University) |
Abstract: | Factorial experiments are currently undergoing a popularity surge in social and behavioral sciences. A key challenge here arises from randomization restrictions. Consider an experiment to assess the causal effects of two factors, expert review and teacher bonus scheme, on 40 schools in a state. A completely randomized assignment can disperse the schools undergoing review all over the state, thus entailing prohibitively high cost. A practical alternative is to divide these schools by geographic proximity into four groups called whole-plots, two of which are randomly assigned to expert review. The teacher bonus scheme is then applied to half of the schools chosen randomly within each whole-plot. This is an example of a classic split-plot design. Randomization-based analysis, avoiding rigid linear model assumptions, is the most natural methodology to draw causal inference from finite population split-plot experiments as above. Recently, Zhao, Ding, Mukerjee and Dasgupta (2018, Annals of Statistics) investigated this for balanced split-plot designs, where whole-plots are of equal size. However, this can often pose practical difficulty in social sciences. Thus, if the 40 schools are spread over four counties with 8, 8, 12 and 12 schools, then each county is a natural whole-plot, the design is unbalanced, and the analysis in Zhao et al. (2018) is not applicable. We investigate causal inference in split-plot designs that are possibly unbalanced, using the potential outcomes framework. We start with an unbiased estimator of a typical treatment contrast and first examine how far Zhao et al.’s (2018) approach can be adapted to our more general setup. It is seen that this approach, aided by a variable transformation, yields an expression for the sampling variance of the treatment contrast estimator but runs into difficulty in variance estimation. Specifically, as in the balanced case and elsewhere in causal inference (Mukerjee, Dasgupta and Rubin, 2018, Journal of the American Statistical Association), the resulting variance estimator is conservative, i.e., has a nonnegative bias. But, unlike most standard situations, the bias does not vanish even under strict additivity of treatment effects. To overcome this problem, a careful matrix analysis is employed leading to a new variance estimator which is also conservative, but enjoys the nice property of becoming unbiased under a condition even milder than strict additivity. We also discuss the issue of minimaxity with a view to controlling the bias in variance estimation, and explore the bias via simulations. |
Keywords: | Bias, factorial experiment, finite population, minimaxity, potential outcome, variance estimation. |
JEL: | C10 C13 C90 |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:9711678&r=all |
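A hedged sketch of the unbalanced split-plot randomization described in the abstract (four counties with 8, 8, 12 and 12 schools), together with simple Horvitz-Thompson-type plug-in contrasts; the paper's variance estimators and minimaxity results are not reproduced, and the outcome model is illustrative:

```python
# Sketch: unbalanced split-plot randomization and plug-in treatment contrasts.
# Whole-plots (counties) are randomized to factor A (expert review); within each
# whole-plot, half the schools are randomized to factor B (teacher bonus).
import numpy as np

rng = np.random.default_rng(4)
sizes = [8, 8, 12, 12]
county = np.repeat(np.arange(4), sizes)
n = county.size

# assign 2 of 4 whole-plots to A = 1, then half of each whole-plot to B = 1
a_plots = rng.choice(4, size=2, replace=False)
A = np.isin(county, a_plots).astype(int)
B = np.zeros(n, dtype=int)
for w in range(4):
    idx = np.where(county == w)[0]
    B[rng.choice(idx, size=len(idx) // 2, replace=False)] = 1

# illustrative potential-outcome model and observed outcomes
Y = 1.0 + 0.5 * A + 0.3 * B + rng.normal(0, 0.5, n)

# Horvitz-Thompson style contrasts (each unit has probability 1/2 of A=1 and of B=1)
tau_A = (2 / n) * (Y[A == 1].sum() - Y[A == 0].sum())
tau_B = (2 / n) * (Y[B == 1].sum() - Y[B == 0].sum())
print("estimated main effects:", round(tau_A, 3), round(tau_B, 3))
```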
By: | Sebastian Ankargren; Paulina Jonéus |
Abstract: | We discuss the issue of estimating large-scale vector autoregressive (VAR) models with stochastic volatility in real-time situations where data are sampled at different frequencies. In the case of a large VAR with stochastic volatility, the mixed-frequency data warrant an additional step in the already computationally challenging Markov Chain Monte Carlo algorithm used to sample from the posterior distribution of the parameters. We suggest the use of a factor stochastic volatility model to capture a time-varying error covariance structure. Because the factor stochastic volatility model renders the equations of the VAR conditionally independent, settling for this particular stochastic volatility model comes with major computational benefits. First, we are able to improve upon the mixed-frequency simulation smoothing step by leveraging a univariate and adaptive filtering algorithm. Second, the regression parameters can be sampled equation-by-equation in parallel. These computational features of the model alleviate the computational burden and make it possible to move the mixed-frequency VAR to the high-dimensional regime. We illustrate the model by an application to US data using our mixed-frequency VAR with 20, 34 and 119 variables. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.02231&r=all |
By: | Belloni, Alexandre (Fuqua Business School, Duke University); Chen, Mingli (Department of Economics, University of Warwick); Madrid Padilla, Oscar Hernan (Department of Statistics, University of California, Los Angeles); Wang, Zixuan (Kevin) (Harvard University) |
Abstract: | We propose a generalization of the linear panel quantile regression model to accommodate both sparse and dense parts: sparse means that, while the number of covariates available is large, potentially only a much smaller number of them have a nonzero impact on each conditional quantile of the response variable, while the dense part is represented by a low-rank matrix that can be approximated by latent factors and their loadings. Such a structure poses problems for traditional sparse estimators, such as the ℓ1-penalised Quantile Regression, and for traditional latent factor estimators, such as PCA. We propose a new estimation procedure, based on the ADMM algorithm, that consists of combining the quantile loss function with ℓ1 and nuclear norm regularization. We show, under general conditions, that our estimator can consistently estimate both the nonzero coefficients of the covariates and the latent low-rank matrix. Our proposed model has a “Characteristics + Latent Factors” Asset Pricing Model interpretation: we apply our model and estimator to a large-dimensional panel of financial data and find that (i) characteristics have sparser predictive power once latent factors are controlled for, and (ii) the factors and coefficients at upper and lower quantiles differ from those at the median. |
Keywords: | High-dimensional quantile regression ; factor model ; nuclear norm regularization ; panel data ; asset pricing ; characteristic-based model |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:1230&r=all |
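A short sketch of the building blocks behind the ℓ1-plus-nuclear-norm regularization mentioned above: the check (quantile) loss, the soft-thresholding operator (proximal map of the ℓ1 norm), and singular value thresholding (proximal map of the nuclear norm). The full ADMM iteration for the panel quantile model is not reproduced, and the demo matrix is illustrative:

```python
# Sketch: quantile (check) loss and the proximal operators used by l1 / nuclear-norm
# regularized estimators. Not the authors' full ADMM algorithm.
import numpy as np

def check_loss(u, tau):
    """Quantile loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (induces sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def svt(M, lam):
    """Singular value thresholding: proximal operator of lam * nuclear norm (induces low rank)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(5)
M = rng.normal(size=(10, 2)) @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(10, 8))
print("rank after thresholding:", np.linalg.matrix_rank(svt(M, 1.0)))
print(soft_threshold(np.array([-0.3, 0.05, 0.8]), 0.1))
```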
By: | Charles F. Manski |
Abstract: | In the early 1940s, Haavelmo proposed a probabilistic structure for econometric modeling, aiming to make econometrics useful for public decision making. His fundamental contribution has become thoroughly embedded in subsequent econometric research, yet it could not fully answer all the deep issues that the author raised. Notably, Haavelmo struggled to formalize the implications for decision making of the fact that models can at most approximate actuality. In the same period, Wald initiated his own seminal development of statistical decision theory. Haavelmo favorably cited Wald, but econometrics subsequently did not embrace statistical decision theory. Instead, it focused on study of identification, estimation, and statistical inference. This paper proposes statistical decision theory as a framework for evaluation of the performance of models in decision making. I particularly consider the common practice of as-if optimization: specification of a model, point estimation of its parameters, and use of the point estimate to make a decision that would be optimal if the estimate were accurate. A central theme is that one should evaluate as-if optimization or any other model-based decision rule by its performance across the state space, not the model space. I use prediction and treatment choice to illustrate. Statistical decision theory is conceptually simple, but application is often challenging. Advancement of computation is the primary task to continue building the foundations sketched by Haavelmo and Wald. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.08726&r=all |
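A small numerical sketch of the paper's central theme, evaluating an as-if optimization rule across the state space rather than the model space, in a stylized treatment-choice problem: the planner treats whenever a normally distributed point estimate of the treatment effect is positive, and regret is computed at each possible true effect. The numbers are illustrative:

```python
# Sketch: regret of as-if optimization (treat iff the point estimate is positive)
# evaluated across the state space of true treatment effects.
import numpy as np
from scipy.stats import norm

sigma = 1.0                                              # standard error of the effect estimate
for theta in np.linspace(-2, 2, 9):                      # states of nature (true effects)
    p_treat = 1 - norm.cdf(0, loc=theta, scale=sigma)    # probability the rule treats
    expected_welfare = theta * p_treat                   # welfare of not treating normalized to 0
    best = max(theta, 0.0)                               # welfare of the optimal action
    print(f"theta={theta:+.1f}  regret={best - expected_welfare:.3f}")
```

Regret is zero at the extremes of the state space and largest for true effects near zero, where the point estimate is least informative about the correct action.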
By: | Alexandre Belloni; Mingli Chen; Oscar Hernan Madrid Padilla; Zixuan (Kevin) Wang |
Abstract: | We propose a generalization of the linear panel quantile regression model to accommodate both sparse and dense parts: sparse means that, while the number of covariates available is large, potentially only a much smaller number of them have a nonzero impact on each conditional quantile of the response variable, while the dense part is represented by a low-rank matrix that can be approximated by latent factors and their loadings. Such a structure poses problems for traditional sparse estimators, such as the ℓ1-penalised Quantile Regression, and for traditional latent factor estimators, such as PCA. We propose a new estimation procedure, based on the ADMM algorithm, that consists of combining the quantile loss function with ℓ1 and nuclear norm regularization. We show, under general conditions, that our estimator can consistently estimate both the nonzero coefficients of the covariates and the latent low-rank matrix. Our proposed model has a "Characteristics + Latent Factors" Asset Pricing Model interpretation: we apply our model and estimator to a large-dimensional panel of financial data and find that (i) characteristics have sparser predictive power once latent factors are controlled for, and (ii) the factors and coefficients at upper and lower quantiles differ from those at the median. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.02151&r=all |
By: | Klopp, Eric (Saarland University); Klößner, Stefan |
Abstract: | In structural equation models with latent variables, the latter must be scaled because otherwise model estimates would not be uniquely determined. The scaling of latent variables makes the estimated parameters relative to the scaling constraints, with consequences for their interpretation. These issues have been discussed in the literature before, but a systematic account is still lacking. In this paper, we demonstrate the impact of the seemingly arbitrary choice of a scaling method on the estimated parameters in structural equation models with latent variables. To this end, we first develop the necessary mathematical foundations by introducing the concepts of change of scale and identified up to scaling, and by formally defining the three common scaling methods for latent variables. We also provide and prove a proposition that explains how parameters can be converted from one scaling method to another without re-estimating the model. Furthermore, we show how estimated parameters and their population equivalents are related. We use one regression and one mediation example to demonstrate applications of the propositions and to exemplify the interpretation of estimated parameters. Moreover, as an application of the framework developed in this paper, we investigate the impact of the scaling method on the power of Wald tests for single parameters. Limitations and further applications are considered as well. |
Date: | 2019–08–11 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:c9ke8&r=all |
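A small numeric sketch of the kind of conversion described in the abstract for a one-factor model: parameters obtained under the marker-variable scaling (first loading fixed to 1) can be mapped to the unit-factor-variance scaling without re-estimation, leaving the model-implied covariance matrix unchanged. The specific numbers below are illustrative and not taken from the paper:

```python
# Sketch: converting one-factor model parameters between two scaling methods.
import numpy as np

# Marker-variable scaling: lambda_1 fixed to 1, factor variance psi estimated
lam = np.array([1.0, 0.8, 1.2])      # loadings
psi = 2.5                            # factor variance
theta = np.diag([0.4, 0.5, 0.3])     # unique (error) variances

# Conversion to unit-variance scaling: rescale the factor by 1/sqrt(psi)
lam_star = lam * np.sqrt(psi)        # converted loadings
psi_star = 1.0                       # factor variance fixed to 1

sigma_marker = psi * np.outer(lam, lam) + theta
sigma_unit = psi_star * np.outer(lam_star, lam_star) + theta
print(np.allclose(sigma_marker, sigma_unit))   # True: same model-implied covariance
```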
By: | Kyle Colangelo (Institute for Fiscal Studies); Ying-Ying Lee (Institute for Fiscal Studies) |
Abstract: | We propose a nonparametric inference method for causal effects of continuous treatment variables, under unconfoundedness and in the presence of high-dimensional or nonparametric nuisance parameters. Our simple kernel-based double debiased machine learning (DML) estimators for the average dose-response function (or the average structural function) and the partial effects are asymptotically normal with a nonparametric convergence rate. The nuisance estimators for the conditional expectation function and the generalized propensity score can be nonparametric kernel or series estimators or ML methods. Using a doubly robust influence function and cross-fitting, we give tractable primitive conditions under which the nuisance estimators do not affect the first-order large sample distribution of the DML estimators. |
Date: | 2019–10–21 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:54/19&r=all |
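A compact, hedged sketch of a kernel-localized, cross-fitted doubly robust estimate of the dose-response E[Y(t)] at a single treatment level, in the spirit of the abstract; the exact moment function, nuisance estimators and bandwidth rule in the paper may differ, and the random-forest learners, Gaussian kernel and homoskedastic-normal model for the generalized propensity score are assumptions made only for this illustration:

```python
# Sketch: cross-fitted, kernel-weighted doubly robust estimate of E[Y(t)] at t0.
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
T = 0.5 * X[:, 0] + rng.normal(size=n)               # continuous treatment
Y = 1.0 * T + X[:, 1] + rng.normal(size=n)           # outcome; true E[Y(t)] = t

t0, h = 0.0, 0.5 * n ** (-1 / 5)                     # evaluation point and bandwidth
scores = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # outcome regression mu(t, x) = E[Y | T = t, X = x]
    mu = RandomForestRegressor(n_estimators=200, random_state=0)
    mu.fit(np.column_stack([T[train], X[train]]), Y[train])
    # generalized propensity score: T | X modelled as normal with constant variance
    m = RandomForestRegressor(n_estimators=200, random_state=0)
    m.fit(X[train], T[train])
    sigma = np.std(T[train] - m.predict(X[train]))
    f_t0 = norm.pdf(t0, loc=m.predict(X[test]), scale=sigma)
    mu_t0 = mu.predict(np.column_stack([np.full(len(test), t0), X[test]]))
    K = norm.pdf((T[test] - t0) / h) / h             # Gaussian kernel weights
    scores[test] = mu_t0 + K / f_t0 * (Y[test] - mu_t0)

print("estimated E[Y(t=0)] ~", round(scores.mean(), 3))   # true value is 0 here
```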
By: | Eleni Aristodemou (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and Duke University) |
Abstract: | In this paper we analyze a discrete choice model for partially ordered alternatives. The alternatives are differentiated along two dimensions, the first an unordered “horizontal” dimension, and the second an ordered “vertical” dimension. The model can be used in circumstances in which individuals choose amongst products of different brands, wherein each brand offers an ordered choice menu, for example by offering products of varying quality. The unordered-ordered nature of the discrete choice problem is used to characterize the identified set of model parameters. Following an initial nonparametric analysis that relies on shape restrictions inherent in the ordered dimension of the problem, we then provide a specialized analysis for a parametric generalization of the ordered probit model. Conditions for point identification are established when the distribution of unobservable heterogeneity is known, but remain elusive when the distribution is instead restricted to the multivariate normal family with parameterized variance. Rather than invoke the restriction that the distribution is known, or simply assume that model parameters are point identified, we consider the use of inference methods that allow for the possibility of set identification, and which are therefore robust to the possible lack of point identification. A Monte Carlo analysis is provided in which inference is carried out using a method proposed by Chen, Christensen, and Tamer (2018), which is insensitive to the possible lack of point identification and is found to perform adequately. An empirical illustration is then conducted using consumer purchase data in the UK to study consumers’ choice of razor blades in which each brand has product offerings vertically differentiated by quality. |
Date: | 2019–11–18 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:62/19&r=all |
By: | Joaquim Andrade; Pedro Cordeiro; Guilherme Lambais |
Abstract: | This paper analyzes identification issues of a behavioral New Keynesian model and estimates it using likelihood-based and limited-information methods with identification-robust confidence sets. The model presents some of the same difficulties that exist in simple benchmark DSGE models, but the analytical solution is able to indicate under what conditions the cognitive discounting parameter (attention to the future) can be identified, and the robust estimation methods are able to confirm its importance for explaining the proposed behavioral model. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.07601&r=all |
By: | Denis Chetverikov (Institute for Fiscal Studies and UCLA); Daniel Wilhelm (Institute for Fiscal Studies and cemmap and UCL); Dongwoo Kim (Institute for Fiscal Studies and Simon Fraser University) |
Abstract: | We propose a new nonparametric test of stochastic monotonicity which adapts to the unknown smoothness of the conditional distribution of interest, possesses desirable asymptotic properties, is conceptually easy to implement, and computationally attractive. In particular, we show that the test asymptotically controls size at a polynomial rate, is non-conservative, and detects certain smooth local alternatives that converge to the null with the fastest possible rate. Our test is based on a data-driven bandwidth value and the critical value for the test takes this randomness into account. Monte Carlo simulations indicate that the test performs well in finite samples. In particular, the simulations show that the test controls size and, under some alternatives, is significantly more powerful than existing procedures. |
Date: | 2019–10–15 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:49/19&r=all |
By: | Dang, Khue-Dung (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)); Quiroz, Matias (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), Research Division, Sveriges Riksbank); Kohn, Robert (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)); Tran, Minh-Ngoc (ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), Discipline of Business Analytics, University of Sydney); Villani, Mattias (ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), Division of Statistics and Machine Learning, Linköping University, Department of Statistics, Stockholm University.) |
Abstract: | Hamiltonian Monte Carlo (HMC) samples efficiently from high-dimensional posterior distributions with proposed parameter draws obtained by iterating on a discretized version of the Hamiltonian dynamics. The iterations make HMC computationally costly, especially in problems with large datasets, since it is necessary to compute posterior densities and their derivatives with respect to the parameters. Naively computing the Hamiltonian dynamics on a subset of the data causes HMC to lose its key ability to generate distant parameter proposals with high acceptance probability. The key insight in our article is that efficient subsampling HMC for the parameters is possible if both the dynamics and the acceptance probability are computed from the same data subsample in each complete HMC iteration. We show that this is possible to do in a principled way in an HMC-within-Gibbs framework where the subsample is updated using a pseudo marginal MH step and the parameters are then updated using an HMC step, based on the current subsample. We show that our subsampling methods are fast and compare favorably to two popular sampling algorithms that utilize gradient estimates from data subsampling. We also explore the current limitations of subsampling HMC algorithms by varying the quality of the variance reducing control variates used in the estimators of the posterior density and its gradients. |
Keywords: | Large datasets; Bayesian inference; Stochastic gradient |
JEL: | C11 C15 C55 |
Date: | 2019–04–01 |
URL: | http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0372&r=all |
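A hedged sketch of two ingredients mentioned in the abstract, a subsampled gradient estimate with a control variate evaluated at a reference point and one leapfrog step of the Hamiltonian dynamics; the pseudo-marginal MH update of the subsample and the full HMC-within-Gibbs scheme are not reproduced, and the conjugate normal example is purely illustrative:

```python
# Sketch: control-variate subsampled gradient of a log posterior plus one leapfrog step.
# Illustrative model: y_i ~ N(theta, 1) with a N(0, 10^2) prior on theta.
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(1.0, 1.0, size=10_000)
n, m = len(y), 200                                  # full sample and subsample sizes
theta_ref = y.mean()                                # reference point for the control variate

def grad_i(theta, yi):
    return yi - theta                               # per-observation log-likelihood gradient

def grad_prior(theta):
    return -theta / 100.0

lik_grad_ref = np.sum(grad_i(theta_ref, y))         # computed once on the full data

def grad_subsample(theta, idx):
    # unbiased control-variate estimate of the log-likelihood gradient + exact prior gradient
    correction = (n / len(idx)) * np.sum(grad_i(theta, y[idx]) - grad_i(theta_ref, y[idx]))
    return grad_prior(theta) + lik_grad_ref + correction

def leapfrog(theta, p, eps, idx):
    p = p + 0.5 * eps * grad_subsample(theta, idx)
    theta = theta + eps * p
    p = p + 0.5 * eps * grad_subsample(theta, idx)
    return theta, p

idx = rng.choice(n, size=m, replace=False)          # subsample held fixed within the HMC step
print(leapfrog(theta=0.0, p=rng.normal(), eps=1e-4, idx=idx))
```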
By: | Bruno Ferman |
Abstract: | We propose a simple way to assess the quality of asymptotic approximations required for inference methods. Our assessment can detect problems when the asymptotic theory that justifies the inference method is invalid and/or the structure of the empirical application is far from "Asymptopia". Our assessment can be easily applied to a wide range of applications. We illustrate the use of our assessment for the case of stratified randomized experiments. |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1912.08772&r=all |
By: | Carlos Cesar Trucios-Maza; João H. G. Mazzeu; Luis K. Hotta; Pedro L. Valls Pereira; Marc Hallin |
Abstract: | General dynamic factor models have demonstrated their capacity to circumvent the curse of dimensionality in time series and have been successfully applied in many economic and financial applications. However, their performance in the presence of outliers has not been analysed yet. In this paper, we study the impact of additive outliers on the identification, estimation and forecasting performance of general dynamic factor models. Based on our findings, we propose robust identification, estimation and forecasting procedures. Our proposal is evaluated via Monte Carlo experiments and on empirical data. |
Keywords: | Dimension reduction; Forecast; Jumps; Large panels |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/298201&r=all |
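A hedged sketch of one simple robustification in the spirit of the abstract, winsorizing additive outliers with a median/MAD rule before extracting factors by principal components; this is a generic device shown only for illustration, not the identification, estimation and forecasting procedures proposed in the paper:

```python
# Sketch: clip additive outliers per series using a median/MAD rule, then extract
# principal-component factors from the cleaned panel.
import numpy as np

rng = np.random.default_rng(7)
T, N, r = 200, 30, 2
F = rng.normal(size=(T, r))                     # latent factors
L = rng.normal(size=(N, r))                     # loadings
X = F @ L.T + rng.normal(scale=0.5, size=(T, N))
X[rng.integers(0, T, 20), rng.integers(0, N, 20)] += 15.0   # inject additive outliers

def winsorize(x, c=5.0):
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.clip(x, med - c * mad, med + c * mad)

Xc = np.apply_along_axis(winsorize, 0, X)       # clean each series
Z = (Xc - Xc.mean(0)) / Xc.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :r] * s[:r]                      # estimated factors (up to rotation)
print("share of variance from first two PCs:", round((s[:r] ** 2).sum() / (s ** 2).sum(), 3))
```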
By: | Naveen Narisetty (Institute for Fiscal Studies); Roger Koenker (Institute for Fiscal Studies and UCL) |
Abstract: | A new quantile regression model for survival data is proposed that permits a positive proportion of subjects to become unsusceptible to recurrence of disease following treatment or based on other observable characteristics. In contrast to prior proposals for quantile regression estimation of censored survival models, we propose a new “data augmentation” approach to estimation. Our approach has computational advantages over earlier approaches proposed by Wu and Yin (2013, 2017). We compare our method with the two estimation strategies proposed by Wu and Yin and demonstrate its advantageous empirical performance in simulations. The methods are also illustrated with data from a Lung Cancer survival study. |
Date: | 2019–10–30 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:56/19&r=all |
By: | Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and Duke University); Zahra Siddique (Institute for Fiscal Studies) |
Abstract: | Recent research underscores the sensitivity of conclusions drawn from the application of econometric methods devised for quantitative outcome variables to data featuring ordinal outcomes. The issue is particularly acute in the analysis of happiness data, for which no natural cardinal scale exists, and which is thus routinely collected by ordinal response. With ordinal responses, comparisons of means across different populations and the signs of OLS regression coefficients have been shown to be sensitive to monotonic transformations of the cardinal scale onto which ordinal responses are mapped. In many applications featuring ordered outcomes, including responses to happiness surveys, researchers may wish to study the impact of a ceteris paribus change in certain variables induced by a policy shift. Insofar as some of these variables may be manipulated by the individuals involved, they may be endogenous. This paper examines the use of instrumental variable (IV) methods to measure the effect of such changes. While linear IV estimators suffer from the same pitfalls as averages and OLS coefficient estimates when outcome variables are ordinal, nonlinear models that explicitly respect the ordered nature of the response variable can be used. This is demonstrated with an application to the study of the effect of neighborhood characteristics on subjective well-being among participants in the Moving to Opportunity housing voucher experiment. In this context, the application of nonlinear IV models can be used to estimate marginal effects and counterfactual probabilities of categorical responses induced by changes in neighborhood characteristics such as the level of neighborhood poverty. |
Date: | 2019–11–29 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:66/19&r=all |
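A tiny numerical sketch of the sensitivity highlighted in the abstract: with ordinal responses, the ranking of two group means can be reversed by a monotonic relabelling of the response scale. The groups and relabellings below are illustrative:

```python
# Sketch: mean comparisons of ordinal data reverse under monotone relabellings.
import numpy as np

group_a = np.array([1, 1, 3, 3])                  # polarized responses
group_b = np.array([2, 2, 2, 2])                  # middle responses

scale_1 = {1: 1, 2: 2, 3: 10}                     # both relabellings are strictly increasing
scale_2 = {1: 1, 2: 9, 3: 10}
for scale in (scale_1, scale_2):
    a = np.array([scale[v] for v in group_a]).mean()
    b = np.array([scale[v] for v in group_b]).mean()
    print(scale, "mean A =", a, "mean B =", b, "A > B:", a > b)
```

Under the first relabelling group A has the higher mean, under the second group B does, even though both relabellings preserve the ordering of the categories.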
By: | Ghislain B. D. Aïhounton (Department of Food and Resource Economics, University of Copenhagen); Arne Henningsen (Department of Food and Resource Economics, University of Copenhagen) |
Abstract: | The inverse hyperbolic sine (IHS) transformation is frequently applied in econometric studies to transform right-skewed variables that include zero or negative values. We confirm a previous study that shows that regression results can largely depend on the units of measurement of IHS-transformed variables. Hence, arbitrary choices regarding the units of measurement for these variables can have a considerable effect on recommendations for policies or business decisions. In order to address this problem, we suggest a procedure for choosing units of measurement for IHS-transformed variables. A Monte Carlo simulation assesses this procedure under various scenarios and a replication of the study by Bellemare and Wichman (2019) illustrates the relevance and applicability of our suggested procedure. |
Keywords: | inverse hyperbolic sine, arcsinh, unit of measurement, scale factor |
JEL: | C1 C5 |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:foi:wpaper:2019_10&r=all |
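A short sketch of the unit-of-measurement sensitivity that motivates the paper: the coefficient on a regressor after an inverse hyperbolic sine transformation of the outcome changes when the outcome is rescaled (e.g., measured in dollars versus thousands of dollars), because arcsinh is not scale-equivariant near zero. The simulated data and scale factors are illustrative, and the paper's procedure for choosing units is not reproduced:

```python
# Sketch: IHS-transformed regressions depend on the outcome's unit of measurement.
import numpy as np

rng = np.random.default_rng(8)
n = 5000
x = rng.normal(size=n)
y = np.exp(0.3 * x + rng.normal(size=n))          # right-skewed, strictly positive outcome
y[rng.random(n) < 0.2] = 0.0                      # add zeros, the usual reason for using IHS

for k in (1.0, 0.001, 1000.0):                    # illustrative rescalings of the outcome
    z = np.arcsinh(k * y)
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    print(f"scale factor {k:>8}: slope on x = {b[1]: .4f}")
```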
By: | Mogens Fosgerau (Institute for Fiscal Studies and University of Copenhagen); Dennis Kristensen (Institute for Fiscal Studies and University College London) |
Abstract: | We establish nonparametric identification in a class of so-called index models using a novel approach that relies on general topological results. Our proof strategy imposes very weak smoothness conditions on the functions to be identified and does not require any large support conditions on the regressors in our model. We apply the general identification result to additive random utility and competing risk models. |
Date: | 2019–10–15 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:52/19&r=all |
By: | Andrei Sirchenko (University of Amsterdam) |
Abstract: | This paper develops an ordered choice model for the federal funds rate target with endogenous switching among three latent regimes and possibly endogenous explanatory variables. Estimated for the Greenspan era (1987-2006), the new model detects recurring switches among three policy regimes (interpreted as loose, neutral and tight policy stances) in response to the state of the economy, outperforms the Taylor rule and the existing discrete-choice models both in and out of sample, correctly predicts out of sample 90% of the Fed decisions during the next thirteen years, successfully handles the zero-lower-bound period by a prolonged switch to a loose policy regime with no change to the target rate (while the Taylor rule and the conventional ordered probit model predict further cuts), and delivers markedly different inference. The empirical results suggest that the endogeneity of explanatory variables does matter in modelling monetary policy and can distort the inference: the marginal effects on the choice probabilities can differ severalfold and even have opposite signs. |
Date: | 2019–12–18 |
URL: | http://d.repec.org/n?u=RePEc:ame:wpaper:1901&r=all |
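For reference, a minimal sketch of the conventional ordered probit benchmark mentioned in the abstract, fitted with statsmodels on simulated data; the paper's regime-switching model with endogenous explanatory variables is substantially richer and is not reproduced here, and the regressors and category labels are illustrative:

```python
# Sketch: a conventional ordered probit on simulated data (benchmark model only).
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(9)
n = 1000
x = rng.normal(size=(n, 2))                       # stand-ins for, e.g., inflation and output gaps
latent = x @ np.array([1.0, 0.5]) + rng.normal(size=n)
y = np.digitize(latent, bins=[-0.5, 0.5])         # 0 = cut, 1 = no change, 2 = hike (illustrative)

res = OrderedModel(y, x, distr="probit").fit(method="bfgs", disp=False)
print(res.params)                                  # slopes followed by threshold (cut-point) parameters
print(res.predict(x[:3]))                          # predicted choice probabilities
```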