nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒02‒22
25 papers chosen by
Sune Karlsson
Örebro universitet

  1. Extensions to IVX Methods of Inference for Return Predictability By Georgiev, Iliyan; Demetrescu, Matei; Rodrigues, Paulo MM; Taylor, AM Robert
  2. A Unified Framework for Specification Tests of Continuous Treatment Effect Models By Wei Huang; Oliver Linton; Zheng Zhang
  3. Inference on two component mixtures under tail restrictions By Marc Henry; Koen Jochmans; Bernard Salanié
  4. Identification and Inference Under Narrative Restrictions By Raffaella Giacomini; Toru Kitagawa; Matthew Read
  5. Robust inference for threshold regression models By Hidalgo, Javier; Lee, Jungyoon; Seo, Myung Hwan
  6. Counterfactual Inference of the Mean Outcome under a Convergence of Average Logging Probability By Masahiro Kato
  7. Constructing valid instrumental variables in generalized linear causal models from directed acyclic graphs By Øyvind Hoveid
  8. A Distance Covariance-based Estimator By Emmanuel Selorm Tsyawo; Abdul-Nasah Soale
  9. Linear programming approach to nonparametric inference under shape restrictions: with an application to regression kink designs By Harold D. Chiang; Kengo Kato; Yuya Sasaki; Takuya Ura
  10. Simple Tests for Stock Return Predictability with Good Size and Power Properties By Harvey, David I; Leybourne, Stephen J; Taylor, AM Robert
  11. Spatial Correlation Robust Inference By Ulrich K. Müller; Mark W. Watson
  12. Doubly functional graphical models in high dimensions By Qiao, Xinghao; Qian, Cheng; James, Gareth M.; Guo, Shaojun
  13. Testing for Nonlinear Cointegration under Heteroskedasticity By Christoph Hanck; Till Massing
  14. Common Components Structural VARs By Mario Forni; Luca Gambetti; Marco Lippi; Luca Sala
  15. Big Data meets Causal Survey Research: Understanding Nonresponse in the Recruitment of a Mixed-mode Online Panel By Barbara Felderer; Jannis Kueck; Martin Spindler
  16. Statistical Power for Estimating Treatment Effects Using Difference-in-Differences and Comparative Interrupted Time Series Designs with Variation in Treatment Timing By Peter Z. Schochet
  17. Beta-Adjusted Covariance Estimation By Kirill Dragun; Kris Boudt; Orimar Sauri; Steven Vanduffel
  18. Macroeconomic Uncertainty and Vector Autoregressions By Mario Forni; Luca Gambetti; Luca Sala
  19. Exploring the dependencies among main cryptocurrency log-returns: A hidden Markov model By Pennoni, Fulvia; Bartolucci, Francesco; Forte, Gianfranco; Ametrano, Ferdinando
  20. Manifold Learning with Approximate Nearest Neighbors By Fan Cheng; Rob J Hyndman; Anastasios Panagiotelis
  21. Predictive Quantile Regression with Mixed Roots and Increasing Dimensions By Rui Fan; Ji Hyung Lee; Youngki Shin
  22. Aggregate Output Measurements: a Common Trend Approach By Martín Almuzara; Gabriele Fiorentini; Enrique Sentana
  23. A User's Guide to Approximate Randomization Tests with a Small Number of Clusters By Yong Cai; Ivan A. Canay; Deborah Kim; Azeem M. Shaikh
  24. Asymmetric Effects of Monetary Policy Easing and Tightening By Davide Debortoli; Mario Forni; Luca Gambetti; Luca Sala
  25. Modeling extreme events: time-varying extreme tail shape By Schwaab, Bernd; Zhang, Xin; Lucas, André

  1. By: Georgiev, Iliyan; Demetrescu, Matei; Rodrigues, Paulo MM; Taylor, AM Robert
    Abstract: Predictive regression methods are widely used to examine the predictability of (excess) returns on stocks and other equities by lagged macroeconomic and financial variables. Extended IV [IVX] estimation and inference has proved a particularly valuable tool in this endeavour as it allows for possibly strongly persistent and endogenous regressors. This paper makes three distinct contributions to the literature. First, we demonstrate that, provided either a suitable bootstrap implementation is employed or heteroskedasticity-consistent standard errors are used, the IVX-based predictability tests of Kostakis et al. (2015) retain asymptotically pivotal inference, regardless of the degree of persistence or endogeneity of the (putative) predictor, under considerably weaker assumptions on the innovations than are required by Kostakis et al. (2015) in their analysis. In particular, we allow for quite general forms of conditional and unconditional heteroskedasticity in the innovations, neither of which is tied to a parametric model. Second, and relatedly, we develop asymptotically valid bootstrap implementations of the IVX tests under these conditions. Monte Carlo simulations show that the bootstrap methods we propose can deliver considerably more accurate finite sample inference than the asymptotic implementation of these tests used in Kostakis et al. (2015) under certain problematic parameter constellations, most notably for their implementation against one-sided alternatives, and where multiple predictors are included. Third, under the same conditions as we consider for the full-sample tests, we show how sub-sample implementations of the IVX approach, coupled with a suitable bootstrap, can be used to develop asymptotically valid one-sided and two-sided tests for the presence of temporary windows of predictability. (A simplified bootstrap sketch follows this entry.)
    Date: 2021–02–12
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:29779&r=all
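    Sketch: A minimal wild-bootstrap predictive-regression t-test. This illustrates only the generic bootstrap idea behind the paper, not the IVX procedure itself; the function names and the Rademacher weighting are illustrative assumptions.
```python
import numpy as np

def predictive_tstat(y, x_lag):
    """Heteroskedasticity-robust OLS t-ratio on the lagged predictor in
    y_t = a + b * x_{t-1} + e_t."""
    X = np.column_stack([np.ones_like(x_lag), x_lag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * e[:, None] ** 2)      # White sandwich "meat"
    V = XtX_inv @ meat @ XtX_inv
    return beta[1] / np.sqrt(V[1, 1])

def wild_bootstrap_pvalue(y, x_lag, B=999, seed=0):
    """Two-sided wild-bootstrap p-value under the null of no predictability."""
    rng = np.random.default_rng(seed)
    t_obs = predictive_tstat(y, x_lag)
    e0 = y - y.mean()                       # residuals imposing b = 0
    t_boot = np.empty(B)
    for b in range(B):
        y_star = y.mean() + e0 * rng.choice([-1.0, 1.0], size=len(y))
        t_boot[b] = predictive_tstat(y_star, x_lag)
    return np.mean(np.abs(t_boot) >= abs(t_obs))
```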
  2. By: Wei Huang; Oliver Linton; Zheng Zhang
    Abstract: We propose a general framework for the specification testing of continuous treatment effect models. We assume a general residual function, which includes the average and quantile treatment effect models as special cases. The null models are identified under the unconfoundedness condition and contain a nonparametric weighting function. We propose a test statistic for the null model in which the weighting function is estimated by solving an expanding set of moment equations. We establish the asymptotic distributions of our test statistic under the null hypothesis and under fixed and local alternatives. The proposed test statistic is shown to be more efficient than that constructed from the true weighting function and can detect local alternatives deviating from the null models at the rate of $O_{P}(N^{-1/2})$. A simulation method is provided to approximate the null distribution of the test statistic. Monte Carlo simulations show that our test exhibits satisfactory finite-sample performance, and an application shows its practical value.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.08063&r=all
  3. By: Marc Henry; Koen Jochmans; Bernard Salanié
    Abstract: Many econometric models can be analyzed as finite mixtures. We focus on two-component mixtures and we show that they are nonparametrically point identified by a combination of an exclusion restriction and tail restrictions. Our identification analysis suggests simple closed-form estimators of the component distributions and mixing proportions, as well as a specification test. We derive their asymptotic properties using results on tail empirical processes and we present a simulation study that documents their finite-sample performance.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.06232&r=all
  4. By: Raffaella Giacomini; Toru Kitagawa; Matthew Read
    Abstract: We consider structural vector autoregressions subject to 'narrative restrictions', which are inequality restrictions on functions of the structural shocks in specific periods. These restrictions raise novel problems related to identification and inference, and there is currently no frequentist procedure for conducting inference in these models. We propose a solution that is valid from both Bayesian and frequentist perspectives by: 1) formalizing the identification problem under narrative restrictions; 2) correcting a feature of the existing (single-prior) Bayesian approach that can distort inference; 3) proposing a robust (multiple-prior) Bayesian approach that is useful for assessing and eliminating the posterior sensitivity that arises in these models due to the likelihood having flat regions; and 4) showing that the robust Bayesian approach has asymptotic frequentist validity. We illustrate our methods by estimating the effects of US monetary policy under a variety of narrative restrictions.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.06456&r=all
  5. By: Hidalgo, Javier; Lee, Jungyoon; Seo, Myung Hwan
    Abstract: This paper considers robust inference in threshold regression models when practitioners do not know whether the true specification has a kink or a jump at the threshold point, nesting previous works that assume either continuity or discontinuity at the threshold. We find that the parameter values under the kink restriction are irregular points of the Hessian matrix, destroying asymptotic normality and inducing a cube-root convergence rate for the threshold estimate. However, we are able to obtain the same asymptotic distribution as Hansen (2000) for the quasi-likelihood ratio statistic for the unknown threshold. We propose to construct confidence intervals for the threshold by bootstrap test inversion. Finite sample performance of the proposed procedures is examined through Monte Carlo simulations, and an empirical economic application is given. (A grid-search sketch follows this entry.)
    Keywords: Change point; Cube root; Grid bootstrap; Kink
    JEL: C12 C13 C24
    Date: 2019–06–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:100333&r=all
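    Sketch: Least-squares threshold estimation by grid search, the common core of threshold regression models like the one above; the paper's bootstrap test inversion for confidence intervals is not reproduced, and the names and the 15% trimming are illustrative assumptions.
```python
import numpy as np

def threshold_ssr(y, x, q, gamma):
    """SSR of y = x'b1 * 1(q <= gamma) + x'b2 * 1(q > gamma) + e."""
    low = q <= gamma
    X = np.column_stack([x * low[:, None], x * ~low[:, None]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def estimate_threshold(y, x, q, trim=0.15):
    """Grid-search threshold estimate; x is (n, k), q is the (n,) threshold variable."""
    n = len(q)
    grid = np.sort(q)[int(trim * n): int((1 - trim) * n)]
    ssrs = np.array([threshold_ssr(y, x, q, g) for g in grid])
    gamma_hat = grid[np.argmin(ssrs)]
    # A Hansen-type quasi-LR statistic for H0: gamma = g is
    #   LR(g) = n * (SSR(g) - SSR(gamma_hat)) / SSR(gamma_hat)
    return gamma_hat, ssrs.min()
```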
  6. By: Masahiro Kato
    Abstract: Adaptive experiments, including efficient average treatment effect estimation and multi-armed bandit algorithms, have garnered attention in various applications, such as social experiments, clinical trials, and online advertisement optimization. This paper considers estimating the mean outcome of an action from samples obtained in adaptive experiments. In causal inference, the mean outcome of an action plays a crucial role, and its estimation is an essential task; average treatment effect estimation and off-policy value estimation are its variants. In adaptive experiments, the probability of choosing an action (the logging probability) may be updated sequentially based on past observations. Because the logging probability depends on past observations, the samples are often not independent and identically distributed (i.i.d.), which makes it difficult to develop an asymptotically normal estimator. A typical approach to this problem is to assume that the logging probability converges to a time-invariant function. However, this assumption is restrictive in various applications, for example when the logging probability fluctuates or becomes zero in some periods. To mitigate this limitation, we propose the alternative assumption that the average logging probability converges to a time-invariant function, and show the doubly robust (DR) estimator's asymptotic normality. Under this assumption, the logging probability itself can fluctuate or be zero for some actions. We also examine the estimator's empirical properties through simulations. (A sketch of the DR estimator follows this entry.)
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.08975&r=all
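    Sketch: The doubly robust estimator of a mean outcome in its generic batch form; the paper's sequential construction of the nuisance estimates and its convergence conditions on the average logging probability are omitted, and all names are illustrative assumptions.
```python
import numpy as np

def dr_mean_outcome(actions, rewards, pi_target, mu_hat, target_action):
    """DR estimate of E[Y(target_action)] from logged experiment data.

    actions   : action a_t chosen in each round
    rewards   : observed outcome y_t in each round
    pi_target : logging probability of target_action in each round
    mu_hat    : outcome-model prediction for target_action in each round
    """
    hit = (actions == target_action).astype(float)
    # direct (model) part plus inverse-propensity-weighted correction
    return np.mean(mu_hat + hit * (rewards - mu_hat) / pi_target)
```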
  7. By: Øyvind Hoveid
    Abstract: Unlike other techniques of causal inference, the use of valid instrumental variables can deal with unobserved sources of variable errors, variable omissions, and sampling bias, and still arrive at consistent estimates of average treatment effects. The only problem is to find the valid instruments. Using the definition of Pearl (2009) of valid instrumental variables, a formal condition for validity can be stated for variables in generalized linear causal models. The condition can be applied in two different ways: as a tool for constructing valid instruments, or as a foundation for testing whether an instrument is valid. When perfectly valid instruments cannot be found, the squared bias of the IV-estimator induced by an imperfectly valid instrument -- estimated with bootstrapping -- can be added to its empirical variance in a mean-square-error-like reliability measure.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.08056&r=all
  8. By: Emmanuel Selorm Tsyawo; Abdul-Nasah Soale
    Abstract: Weak instruments present a major setback to empirical work. This paper introduces an estimator that admits weak, uncorrelated, or mean-independent instruments that are non-independent of endogenous covariates. Relative to conventional instrumental variable methods, the proposed estimator weakens the relevance condition considerably without imposing a stronger exclusion restriction. Identification mainly rests on (1) a weak conditional median exclusion restriction imposed on pairwise differences in disturbances and (2) non-independence between covariates and instruments. Under mild conditions, the estimator is consistent and asymptotically normal. Monte Carlo experiments showcase the excellent performance of the estimator, and two empirical examples illustrate its practical utility. (A sketch of the distance covariance statistic follows this entry.)
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.07008&r=all
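    Sketch: The sample distance covariance of Székely, Rizzo, and Bakirov (2007), the dependence measure the proposed estimator builds on; this is the generic statistic, not the paper's estimation procedure itself.
```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_covariance(x, y):
    """Sample distance covariance between 1-d arrays x and y."""
    a = squareform(pdist(x[:, None]))       # pairwise |x_i - x_j|
    b = squareform(pdist(y[:, None]))
    # double-center each distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    # the V-statistic version is nonnegative; guard against rounding
    return np.sqrt(max(np.mean(A * B), 0.0))
```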
  9. By: Harold D. Chiang; Kengo Kato; Yuya Sasaki; Takuya Ura
    Abstract: We develop a novel method of constructing confidence bands for nonparametric regression functions under shape constraints. This method can be implemented via linear programming and is thus computationally appealing. We illustrate the use of our proposed method with an application to the regression kink design (RKD). Econometric analyses based on the RKD often suffer from wide confidence intervals due to slow convergence rates of nonparametric derivative estimators. We demonstrate that economic models and structures motivate shape restrictions, which in turn contribute to shrinking the confidence interval in an analysis of the causal effects of unemployment insurance benefits on unemployment durations.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.06586&r=all
  10. By: Harvey, David I; Leybourne, Stephen J; Taylor, AM Robert
    Abstract: We develop easy-to-implement tests for return predictability which, relative to extant tests in the literature, display attractive finite sample size control and power across a wide range of persistence and endogeneity levels for the predictor. Our approach is based on the standard regression t-ratio and a variant where the predictor is quasi-GLS (rather than OLS) demeaned. In the strongly persistent near-unit root environment, the limiting null distributions of these statistics depend on the endogeneity and local-to-unity parameters characterising the predictor. Analysis of the asymptotic local power functions of feasible implementations of these two tests, based on asymptotically conservative critical values, motivates a switching procedure between the two, employing the quasi-GLS demeaned variant unless the magnitude of the estimated endogeneity correlation parameter is small. Additionally, if the data suggests the predictor is weakly persistent, our approach switches into the standard t-ratio test with reference to standard normal critical values.
    Keywords: predictive regression; persistence; endogeneity; quasi-GLS demeaning; unit root test; hybrid statistic
    Date: 2021–02–15
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:29814&r=all
  11. By: Ulrich K. Müller; Mark W. Watson
    Abstract: We propose a method for constructing confidence intervals that account for many forms of spatial correlation. The interval has the familiar 'estimator plus and minus a standard error times a critical value' form, but we propose new methods for constructing the standard error and the critical value. The standard error is constructed using population principal components from a given 'worst-case' spatial covariance model. The critical value is chosen to ensure coverage in a benchmark parametric model for the spatial correlations. The method is shown to control coverage in large samples whenever the spatial correlation is weak, i.e., with average pairwise correlations that vanish as the sample size gets large. We also provide results on correct coverage in a restricted but nonparametric class of strong spatial correlations, as well as on the efficiency of the method. In a design calibrated to match economic activity in U.S. states, the method outperforms previous suggestions for spatially robust inference about the population mean.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.09353&r=all
  12. By: Qiao, Xinghao; Qian, Cheng; James, Gareth M.; Guo, Shaojun
    Abstract: We consider estimating a functional graphical model from multivariate functional observations. In functional data analysis, the classical assumption is that each function has been measured over a densely sampled grid. In practice, however, the functions are often observed, with measurement error, at a relatively small number of points. We propose a class of doubly functional graphical models to capture the evolving conditional dependence relationship among a large number of sparsely or densely sampled functions. Our approach first implements a nonparametric smoother to perform functional principal components analysis for each curve, then estimates a functional covariance matrix and finally computes sparse precision matrices, which in turn provide the doubly functional graphical model. We derive some novel concentration bounds, uniform convergence rates and model selection properties of our estimator for both sparsely and densely sampled functional data in the high-dimensional large-$p$, small-$n$ regime. We demonstrate via simulations that the proposed method significantly outperforms possible competitors. Our proposed method is applied to a brain imaging dataset. (A sketch of the pipeline follows this entry.)
    Keywords: constrained ℓ1-minimization; functional principal component; functional precision matrix; graphical model; high-dimensional data; sparsely sampled functional data
    JEL: C1
    Date: 2020–06–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:103120&r=all
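    Sketch: A rough stand-in for the pipeline described above, assuming densely observed curves: one functional principal component score per curve, then a sparse precision matrix via the graphical lasso. The paper's nonparametric smoothing step for sparsely sampled designs and its theoretical guarantees are not reproduced; names are illustrative.
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLasso

def functional_graph(curves, alpha=0.1):
    """curves: array of shape (n_subjects, p_functions, n_gridpoints)."""
    n, p, T = curves.shape
    scores = np.empty((n, p))
    for j in range(p):
        # leading FPCA score of function j, approximated by ordinary PCA
        scores[:, j] = PCA(n_components=1).fit_transform(curves[:, j, :])[:, 0]
    # zeros in the estimated precision matrix encode conditional independence
    return GraphicalLasso(alpha=alpha).fit(scores).precision_
```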
  13. By: Christoph Hanck; Till Massing
    Abstract: This article discusses tests for nonlinear cointegration in the presence of variance breaks in the errors. We build on the approaches of Cavaliere and Taylor (2006, Journal of Time Series Analysis) for heteroskedastic cointegration tests and of Choi and Saikkonen (2010, Econometric Theory) for nonlinear cointegration tests. We propose a bootstrap test and prove its consistency. A Monte Carlo study shows the approach to have appealing finite sample properties and to work better than an approach using subresiduals. We provide an empirical application to the environmental Kuznets curve (EKC), finding that the cointegration tests do not reject the EKC hypothesis in most cases.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.08809&r=all
  14. By: Mario Forni; Luca Gambetti; Marco Lippi; Luca Sala
    Abstract: Small scale VAR models are subject to two major issues: first, the information set might be too narrow; second, many macroeconomic variables are measured with error. Both features produce distorted estimates of the impulse response functions. We propose a new procedure, called Common Components Structural VARs (CC-SVAR), which solves both problems. It consists of (a) treating the variables, prior to estimation, in order to extract their common components, which eliminates measurement errors; and (b) estimating a VAR with m > q common components, that is, a singular VAR, where q is the number of shocks driving the economy, which solves the fundamentalness problem. SVARs and CC-SVARs are compared in the empirical analysis of monetary policy and technology shocks. The results obtained by SVARs are not robust, in that they strongly depend on the choice and the treatment of the variables considered. By contrast, using CC-SVARs, (i) contractionary monetary shocks produce a decrease in prices independently of the variables included in the model, and (ii) irrespective of whether hours worked enter the model in log-levels or growth rates, technology improvements produce an increase in hours worked.
    Keywords: Structural VAR models, structural factor models, nonfundamentalness, measurement errors
    JEL: C32 E32
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:mod:recent:147&r=all
  15. By: Barbara Felderer; Jannis Kueck; Martin Spindler
    Abstract: Survey scientists increasingly face the problem of high-dimensionality in their research as digitization makes it much easier to construct high-dimensional (or "big") data sets through tools such as online surveys and mobile applications. Machine learning methods are able to handle such data, and they have been successfully applied to solve predictive problems. However, in many situations, survey statisticians want to learn about causal relationships to draw conclusions and be able to transfer the findings of one survey to another. Standard machine learning methods provide biased estimates of such relationships. We introduce into survey statistics the double machine learning approach, which gives approximately unbiased estimators of causal parameters, and show how it can be used to analyze survey nonresponse in a high-dimensional panel setting. (A sketch of the estimator follows this entry.)
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.08994&r=all
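    Sketch: The partialling-out form of double machine learning with cross-fitting, the kind of estimator the authors bring to survey statistics; the learners, fold count, and names are illustrative assumptions.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_partialling_out(y, d, X, n_folds=5, seed=0):
    """Estimate theta in y = theta*d + g(X) + e, with d = m(X) + v."""
    y_res = np.empty(len(y))
    d_res = np.empty(len(d))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        g = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        m = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        y_res[test] = y[test] - g.predict(X[test])   # cross-fitted residuals
        d_res[test] = d[test] - m.predict(X[test])
    theta = (d_res @ y_res) / (d_res @ d_res)        # final OLS step
    psi = (y_res - theta * d_res) * d_res            # influence function
    se = np.sqrt(np.mean(psi ** 2) / len(y)) / np.mean(d_res ** 2)
    return theta, se
```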
  16. By: Peter Z. Schochet
    Abstract: This article develops new closed-form variance expressions for power analyses for commonly used panel model estimators. The main contribution is to incorporate variation in treatment timing into the analysis, but the variance formulas also account for other key design features that arise in practice: autocorrelated errors, unequal measurement intervals, and clustering due to the unit of treatment assignment. We consider power formulas for both cross-sectional and longitudinal models and allow for covariates to improve precision. An illustrative power analysis provides guidance on appropriate sample sizes for various model specifications. A companion Shiny R dashboard performs the sample size calculations. (A sketch of the power calculation follows this entry.)
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.06770&r=all
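    Sketch: The generic two-sided power calculation that design-implied variance expressions like the paper's feed into; the staggered-timing variance formulas themselves are not reproduced, and the numbers below are made up for illustration.
```python
from scipy.stats import norm

def power_two_sided(effect, se, alpha=0.05):
    """Power of a two-sided z-test for a given true effect and design SE."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect / se - z) + norm.cdf(-effect / se - z)

# e.g. a minimum detectable effect of 0.25 (in outcome SD units) with a
# design-implied standard error of 0.08 gives power of roughly 0.88
print(power_two_sided(0.25, 0.08))
```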
  17. By: Kirill Dragun; Kris Boudt; Orimar Sauri; Steven Vanduffel
    Abstract: The increase in trading frequency of Exchange Traded Funds (ETFs) presents a positive externality for financial risk management when the price of the ETF is available at a higher frequency than the prices of the component stocks. The positive spillover consists of improving the accuracy of pre-estimators of the integrated covariance of the stocks included in the ETF. The proposed Beta Adjusted Covariance (BAC) equals the pre-estimator plus a minimal adjustment matrix such that the covariance-implied stock-ETF beta equals a target beta. We focus on the Hayashi and Yoshida (2005) pre-estimator and derive the asymptotic distribution of its implied stock-ETF beta. The simulation study confirms that the accuracy gains are substantial in all cases considered. In the empirical part of the paper, we show the gains in tracking-error efficiency when using the BAC adjustment to construct portfolios that replicate a broad index using a subset of stocks. (A sketch of the Hayashi-Yoshida pre-estimator follows this entry.)
    Keywords: High-frequency data, realized covariances, ETF, asynchronicity, stock-ETF beta, Localized Hayashi-Yoshida, Index tracking
    JEL: C22 C51 C53 C58
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:21/1010&r=all
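    Sketch: The Hayashi-Yoshida (2005) covariance pre-estimator that BAC adjusts: return products are summed over all pairs of observation intervals that overlap, which accommodates asynchronous trading. A naive O(nm) double loop is used for clarity; names are illustrative.
```python
import numpy as np

def hayashi_yoshida(times_x, px, times_y, py):
    """HY covariance from prices px, py observed at times_x, times_y."""
    rx = np.diff(np.log(px))
    ry = np.diff(np.log(py))
    cov = 0.0
    for i in range(len(rx)):
        for j in range(len(ry)):
            # add the product if intervals (t_i, t_{i+1}] and (s_j, s_{j+1}] overlap
            if times_x[i] < times_y[j + 1] and times_y[j] < times_x[i + 1]:
                cov += rx[i] * ry[j]
    return cov
```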
  18. By: Mario Forni; Luca Gambetti; Luca Sala
    Abstract: We estimate macroeconomic uncertainty and the effects of uncertainty shocks by means of a new procedure based on standard VARs. Under suitable assumptions, our procedure is equivalent to using the square of the VAR forecast error as an external instrument in a proxy SVAR. We add orthogonality constraints to the standard proxy SVAR identification scheme. We also derive a VAR-based measure of uncertainty. We apply our method to a US data set and find that uncertainty is mainly exogenous and is responsible for a large fraction of business-cycle fluctuations.
    Keywords: Uncertainty shocks, OLS estimation, Stochastic volatility
    JEL: C32 E32
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:mod:recent:148&r=all
  19. By: Pennoni, Fulvia; Bartolucci, Francesco; Forte, Gianfranco; Ametrano, Ferdinando
    Abstract: A multivariate hidden Markov model is proposed to explain the price evolution of Bitcoin, Ethereum, Ripple, Litecoin, and Bitcoin Cash. The observed daily log-returns of these five major cryptocurrencies are modeled jointly. They are assumed to be correlated according to a variance-covariance matrix conditionally on a latent Markov process having a finite number of states. To compare states according to their volatility, we estimate state-specific variance-covariance matrices. Maximum likelihood estimation of the model parameters is carried out by the Expectation-Maximization algorithm. The hidden states represent different phases of the market, identified through the estimated expected values and volatility of the log-returns. We obtain interesting results in detecting these phases of the market and the implied transition dynamics. We also find evidence of a structural medium-term trend in the correlations of Bitcoin with the other cryptocurrencies. (A sketch of such a model fit follows this entry.)
    Keywords: Bitcoin, Bitcoin cash, decoding, Ethereum, expectation-maximization algorithm, Litecoin, Ripple, time-series
    JEL: C32 C51 C53
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:106150&r=all
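    Sketch: A generic stand-in for the model above using hmmlearn's GaussianHMM: multivariate log-returns, a few latent market states, and state-specific means and full covariance matrices. The data below are placeholders, and the paper's own EM implementation and state-selection steps are not reproduced.
```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# placeholder for an (n_days, 5) array of daily log-returns of the five coins
returns = np.random.default_rng(0).normal(scale=0.03, size=(500, 5))

model = GaussianHMM(n_components=3, covariance_type="full", n_iter=200)
model.fit(returns)
states = model.predict(returns)                       # decoded market phase per day
vols = [np.sqrt(np.diag(c)) for c in model.covars_]   # per-state volatilities
```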
  20. By: Fan Cheng; Rob J Hyndman; Anastasios Panagiotelis
    Abstract: Manifold learning algorithms are valuable tools for the analysis of high-dimensional data, many of which include a step where nearest neighbors of all observations are found. This can present a computational bottleneck when the number of observations is large or when the observations lie in more general metric spaces, such as statistical manifolds, which require all pairwise distances between observations to be computed. We resolve this problem by using a broad range of approximate nearest neighbor algorithms within manifold learning algorithms and evaluating their impact on embedding accuracy. We use approximate nearest neighbors for statistical manifolds by exploiting the connection between the Hellinger/Total variation distance for discrete distributions and the L2/L1 norm. Via a thorough empirical investigation based on the benchmark MNIST dataset, it is shown that approximate nearest neighbors lead to substantial improvements in computational time with little to no loss in the accuracy of the embedding produced by a manifold learning algorithm. This result is robust to the use of different manifold learning algorithms, to the use of different approximate nearest neighbor algorithms, and to the use of different measures of embedding accuracy. The proposed method is applied to learning a statistical manifold from data on distributions of electricity usage. This application demonstrates how the proposed methods can be used to visualize and identify anomalies and uncover underlying structure within high-dimensional data in a way that is scalable to large datasets. (A sketch of the Hellinger trick follows this entry.)
    Keywords: statistical manifold, dimension reduction, anomaly detection, k-d trees, Hellinger distance, smart meter data
    JEL: C55 C65 C80
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2021-3&r=all
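    Sketch: The Hellinger-to-L2 connection mentioned above: since the Hellinger distance between discrete distributions p and q equals ||sqrt(p) - sqrt(q)||_2 / sqrt(2), any Euclidean nearest-neighbor structure applies after a square-root transform. An exact k-d tree stands in here for the approximate NN algorithms the paper evaluates.
```python
import numpy as np
from scipy.spatial import cKDTree

def hellinger_neighbors(P, k=5):
    """k nearest neighbors under the Hellinger distance.
    P: (n, m) array whose rows are discrete distributions summing to one."""
    roots = np.sqrt(P)                        # Hellinger = L2 on square roots
    tree = cKDTree(roots)
    dist, idx = tree.query(roots, k=k + 1)    # nearest hit is the point itself
    return dist[:, 1:] / np.sqrt(2), idx[:, 1:]
```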
  21. By: Rui Fan; Ji Hyung Lee; Youngki Shin
    Abstract: In this paper we study the benefit of using the adaptive LASSO for predictive quantile regression. It is common for predictors in predictive quantile regression to have various degrees of persistence and to exhibit different signal strengths in explaining the dependent variable. We show that the adaptive LASSO has consistent variable selection and oracle properties under the simultaneous presence of stationary, unit root and cointegrated predictors. Some encouraging simulation and out-of-sample prediction results are reported. (A sketch of the adaptive LASSO follows this entry.)
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.11568&r=all
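    Sketch: A generic adaptive LASSO via the standard column-rescaling trick: weight each regressor by an initial estimate, run a plain LASSO, and unscale. The paper's actual setting, predictive quantile regression with mixed-persistence regressors and increasing dimension, is not captured by this least-squares toy.
```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)        # pilot OLS estimate
    w = 1.0 / np.maximum(np.abs(beta0), 1e-8) ** gamma   # adaptive weights
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(X / w, y)
    return fit.coef_ / w                                 # undo the rescaling
```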
  22. By: Martín Almuzara (Federal Reserve Bank of New York); Gabriele Fiorentini (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze); Enrique Sentana (CEMFI and CEPR)
    Abstract: We analyze a model for N different measurements of a persistent latent time series when measurement errors are mean-reverting, which implies a common trend among measurements. We study the consequences of overdifferencing, finding potentially large biases in maximum likelihood estimators of the dynamics parameters and reductions in the precision of smoothed estimates of the latent variable, especially for multiperiod objects such as quinquennial growth rates. We also develop an $R^2$ measure of common trend observability that determines the severity of misspecification. Finally, we apply our framework to US quarterly data on GDP and GDI, obtaining an improved aggregate output measure.
    Keywords: Cointegration, GDP, GDI, Overdifferencing, Signal Extraction
    JEL: C32 E01
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2021_03&r=all
  23. By: Yong Cai; Ivan A. Canay; Deborah Kim; Azeem M. Shaikh
    Abstract: This paper provides a user's guide to the general theory of approximate randomization tests developed in Canay, Romano, and Shaikh (2017) when specialized to linear regressions with clustered data. Such regressions include settings in which the data are naturally grouped into clusters, such as villages or repeated observations over time on individual units, as well as settings with weak temporal dependence, in which pseudo-clusters may be formed using blocks of consecutive observations. An important feature of the methodology is that it applies to settings in which the number of clusters is small -- even as small as five. We provide a step-by-step algorithmic description of how to implement the test and construct confidence intervals for the quantity of interest. We additionally articulate the main requirements underlying the test, emphasizing in particular common pitfalls that researchers may encounter. Finally, we illustrate the use of the methodology with two applications that further elucidate these points. The required software to replicate these empirical exercises and to aid researchers wishing to employ the methods elsewhere is provided in both R and Stata. (A sketch of the randomization test follows this entry.)
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.09058&r=all
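    Sketch: The sign-change randomization test at the heart of the approach, in its simplest form: estimate the parameter within each cluster, then compare the studentized average against its distribution over all sign assignments. Exact theory and the general construction are in Canay, Romano, and Shaikh (2017); the names here are illustrative.
```python
import numpy as np
from itertools import product

def sign_change_pvalue(cluster_estimates, theta0=0.0):
    """Randomization p-value for H0: theta = theta0 from per-cluster estimates."""
    s = np.asarray(cluster_estimates, dtype=float) - theta0
    q = len(s)
    def tstat(z):
        return abs(z.mean()) / (z.std(ddof=1) / np.sqrt(q))
    t_obs = tstat(s)
    # enumerate all 2^q sign assignments; feasible even with q = 5 (32 of them)
    t_all = [tstat(s * np.array(eps)) for eps in product([-1.0, 1.0], repeat=q)]
    return np.mean(np.array(t_all) >= t_obs)
```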
  24. By: Davide Debortoli; Mario Forni; Luca Gambetti; Luca Sala
    Abstract: Monetary policy easing and tightening have asymmetric effects: a policy easing has large effects on prices but small effects on real activity variables. The opposite is found for a policy tightening: large real effects but small effects on prices. Non-linearities are estimated using a new and simple procedure based on linear Structural Vector Autoregressions with exogenous variables (SVARX). We rationalize the results through the lens of a simple model with downward nominal wage rigidities.
    Keywords: monetary policy shocks, non-linear effects, structural VAR models
    JEL: C32 E32
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:mod:recent:146&r=all
  25. By: Schwaab, Bernd; Zhang, Xin; Lucas, André
    Abstract: We propose a dynamic semi-parametric framework to study time variation in tail parameters. The framework builds on the Generalized Pareto Distribution (GPD) for modeling peaks over thresholds, as in Extreme Value Theory, but casts the model in a conditional framework to allow for time variation in the tail-shape parameters. The score-driven updates used improve the expected Kullback-Leibler divergence between the model and the true data-generating process at every step, even if the GPD only fits approximately and the model is mis-specified, as will be the case in any finite sample. This is confirmed in simulations. Using the model, we find that Eurosystem sovereign bond purchases during the euro area sovereign debt crisis had a beneficial impact on extreme upper tail quantiles, leaning against the risk of extremely adverse market outcomes while active. (A sketch of the GPD building block follows this entry.)
    Keywords: dynamic tail risk, European Central Bank (ECB), extreme value theory, observation-driven models, Securities Markets Programme (SMP)
    JEL: C22 G11
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212524&r=all
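    Sketch: The static building block behind the model: a Generalized Pareto fit to peaks over a threshold, re-estimated here on rolling windows as a crude stand-in for the paper's score-driven updating of the tail-shape parameter. The window length and minimum exceedance count are illustrative assumptions.
```python
import numpy as np
from scipy.stats import genpareto

def rolling_tail_shape(x, threshold, window=500, min_exceed=20):
    """Rolling GPD shape estimates for exceedances of `threshold` in x."""
    shapes = []
    for end in range(window, len(x) + 1):
        exc = x[end - window:end]
        exc = exc[exc > threshold] - threshold   # peaks over threshold
        if len(exc) < min_exceed:
            shapes.append(np.nan)                # too few exceedances to fit
            continue
        xi, _, _ = genpareto.fit(exc, floc=0)    # fix location at zero
        shapes.append(xi)
    return np.array(shapes)
```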

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.