nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒10‒24
eleven papers chosen by
Sune Karlsson
Örebro universitet

  1. Conditional likelihood ratio test with many weak instruments By Sreevidya Ayyar; Yukitoshi Matsushita; Taisuke Otsu
  2. Efficient Integrated Volatility Estimation in the Presence of Infinite Variation Jumps via Debiased Truncated Realized Variations By B. Cooper Boniece; José E. Figueroa-López; Yuchen Han
  3. In-fill asymptotic distribution of the change point estimator when estimating breaks one at a time By Tayanagi, Toshikazu; Kurozumi, Eiji
  4. Posterior Probabilities: Nonmonotonicity, Asymptotic Rates, Log-Concavity, and Turán's Inequality By Sergiu Hart; Yosef Rinott
  5. Power to the Researchers: Calculating Power After Estimation By Alex Tian; Tom Coupé; Sayak Khatua; W. Robert Reed; Ben Wood
  6. Causal Impulse Responses for Time Series By Leonardo Marinho
  7. Anomaly Detection on Financial Time Series by Principal Component Analysis and Neural Networks By Stéphane Crépey; Lehdili Noureddine; Nisrine Madhar; Maud Thomas
  8. Estimating Causal Effects of Monetary Policy for a Small Open Economy: Econometric Model and Estimation Framework By Markus Brueckner
  9. Interpretable Selective Learning in Credit Risk By Dangxing Chen; Weicheng Ye; Jiahui Ye
  10. Monotonic Neural Additive Models: Pursuing Regulated Machine Learning Models for Credit Scoring By Dangxing Chen; Weicheng Ye
  11. Generalized Gloves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance By Dangxing Chen; Weicheng Ye

  1. By: Sreevidya Ayyar; Yukitoshi Matsushita; Taisuke Otsu
    Abstract: This paper extends the validity of the conditional likelihood ratio (CLR) test developed by Moreira (2003) to instrumental variable regression models with unknown error variance and many weak instruments. In this setting, we argue that the conventional CLR test with estimated error variance loses exact similarity and is asymptotically invalid. We propose a modified critical value function for the likelihood ratio (LR) statistic with estimated error variance, and prove that this modified test achieves asymptotic validity under many weak instrument asymptotics. Our critical value function is constructed by representing the LR using four statistics, instead of two as in Moreira (2003). A simulation study illustrates the desirable properties of our test. (For reference, a sketch of the familiar two-statistic representation follows this entry.)
    Keywords: Many weak instruments, Conditional likelihood ratio test
    JEL: C12
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:624&r=
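For reference, a minimal reminder of the baseline this paper modifies (not the authors' four-statistic construction): with known error variance, Moreira's (2003) CLR statistic is usually written in terms of two sufficient statistics $S$ and $T$ as
$\mathrm{LR} = \tfrac{1}{2}\left( S'S - T'T + \sqrt{(S'S + T'T)^2 - 4\left[ S'S\,T'T - (S'T)^2 \right]} \right),$
with the critical value taken from the conditional distribution of $\mathrm{LR}$ given $T'T$. The paper argues that plugging an estimated error variance into this two-statistic representation breaks validity under many weak instruments, and replaces it with a four-statistic representation.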
  2. By: B. Cooper Boniece; José E. Figueroa-López; Yuchen Han
    Abstract: Statistical inference for stochastic processes based on high-frequency observations has been an active research area for more than two decades. One of the most well-known and widely studied problems is the estimation of the quadratic variation of the continuous component of an Itô semimartingale with jumps. Several rate- and variance-efficient estimators have been proposed in the literature when the jump component is of bounded variation. However, to date, very few methods can deal with jumps of unbounded variation. By developing new high-order expansions of the truncated moments of a locally stable Lévy process, we construct a new rate- and variance-efficient volatility estimator for a class of Itô semimartingales whose jumps behave locally like those of a stable Lévy process with Blumenthal-Getoor index $Y\in (1,8/5)$ (hence, of unbounded variation). The proposed method is based on a two-step debiasing procedure for the truncated realized quadratic variation of the process. Our Monte Carlo experiments indicate that the method outperforms other efficient alternatives in the literature in the setting covered by our theoretical framework. (A schematic of the plain truncated realized variation that this procedure debiases follows this entry.)
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.10128&r=
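As context, a minimal sketch of the plain truncated realized variation that the paper's two-step procedure debiases; the threshold rule $u_n = \alpha\,\Delta_n^{0.49}$ and all constants below are illustrative assumptions, not the authors' choices.

```python
# Un-debiased truncated realized variation: sum of squared increments whose magnitude
# falls below a vanishing threshold. With unbounded-variation jumps (the paper's
# setting), the small jumps kept by the truncation bias this estimator, which is what
# the paper's debiasing step corrects.
import numpy as np

def truncated_realized_variance(x, dt, alpha=3.0, power=0.49):
    """Sum squared increments with |increment| <= u_n = alpha * dt**power."""
    dx = np.diff(x)
    u_n = alpha * dt ** power
    return np.sum(dx[np.abs(dx) <= u_n] ** 2)

# toy path: Brownian motion (integrated variance 1 over [0, 1]) plus one mid-sample jump
rng = np.random.default_rng(0)
n = 23_400
dt = 1.0 / n
path = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
path[n // 2:] += 0.5
print(truncated_realized_variance(path, dt))   # close to 1.0; the large jump is truncated away
```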
  3. By: Tayanagi, Toshikazu; Kurozumi, Eiji
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2022-03&r=
  4. By: Sergiu Hart; Yosef Rinott
    Abstract: In the standard Bayesian framework data are assumed to be generated by a distribution parametrized by $\theta$ in a parameter space $\Theta$, over which a prior distribution $\pi$ is given. A Bayesian statistician quantifies the belief that the true parameter is $\theta_{0}$ in $\Theta$ by its posterior probability given the observed data. We investigate the behavior of the posterior belief in $\theta_{0}$ when the data are generated under some parameter $\theta_{1},$ which may or may not be the same as $\theta_{0}.$ Starting from stochastic orders (specifically, likelihood ratio dominance) that obtain for the resulting distributions of posteriors, we consider monotonicity properties of the posterior probabilities as a function of the sample size when data arrive sequentially. While the $\theta_{0}$-posterior is monotonically increasing (i.e., it is a submartingale) when the data are generated under that same $\theta_{0}$, it need not be monotonically decreasing in general, not even in terms of its overall expectation, when the data are generated under a different $\theta_{1}.$ In fact, it may keep going up and down many times, even in simple cases such as iid coin tosses. We obtain precise asymptotic rates when the data come from the wide class of exponential families of distributions; these rates imply in particular that the expectation of the $\theta_{0}$-posterior under $\theta_{1}\neq\theta_{0}$ is eventually strictly decreasing. Finally, we show that in a number of interesting cases this expectation is a log-concave function of the sample size, and thus unimodal. In the Bernoulli case we obtain this by developing an inequality that is related to Turán's inequality for Legendre polynomials. (A toy coin-toss computation of the expected posterior follows this entry.)
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.11728&r=
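To make the submartingale statement concrete, here is a toy, exact computation of the expected $\theta_{0}$-posterior in a two-point Bernoulli model; the parameter values, the uniform prior, and the horizon are illustrative choices, not taken from the paper.

```python
# Exact expected theta0-posterior after n coin tosses, computed by summing over the
# number of heads (a sufficient statistic). Parameter values and prior are illustrative.
import numpy as np
from scipy.stats import binom

def expected_theta0_posterior(theta0, theta1, gen, n_max, prior0=0.5):
    """E_gen[ P(theta0 | first n tosses) ] for n = 0, ..., n_max."""
    out = []
    for n in range(n_max + 1):
        k = np.arange(n + 1)
        lik0 = theta0 ** k * (1 - theta0) ** (n - k)
        lik1 = theta1 ** k * (1 - theta1) ** (n - k)
        post0 = prior0 * lik0 / (prior0 * lik0 + (1 - prior0) * lik1)
        out.append(np.sum(binom.pmf(k, n, gen) * post0))
    return np.array(out)

theta0, theta1 = 0.5, 0.6
under_theta0 = expected_theta0_posterior(theta0, theta1, gen=theta0, n_max=30)
under_theta1 = expected_theta0_posterior(theta0, theta1, gen=theta1, n_max=30)
print(np.all(np.diff(under_theta0) >= 0))   # True: submartingale under theta0
print(under_theta1)                          # under theta1, monotonicity is not guaranteed
```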
  5. By: Alex Tian; Tom Coupé (University of Canterbury); Sayak Khatua; W. Robert Reed (University of Canterbury); Ben Wood
    Abstract: Calculating statistical power before estimation is considered good practice. However, there is no generally accepted method for calculating power after estimation. There are several reasons why one would want to do this. First, there is general interest in knowing whether ex ante power calculations are dependable guides to actual power. Further, knowing the statistical power of an estimated equation can aid in interpreting the associated estimates. This study proposes a simple method for calculating power after estimation. To assess its performance, we conduct Monte Carlo experiments customized to produce simulated datasets that resemble actual data from studies funded by the International Initiative for Impact Evaluation (3ie). In addition to the final reports, 3ie provided ex ante power calculations from the funding applications, along with data and code to reproduce the estimates in the final reports. After determining that our method performs adequately, we apply it to the 3ie-funded studies. We find an average ex post power of 75.4%, not far from the 80% commonly claimed in the 3ie funding applications. However, we observe significantly more estimates of low power than would be expected given the ex ante claims. We conclude by providing three examples to illustrate how ex post power can aid the interpretation of estimates that are (i) insignificant and low powered, (ii) insignificant and high powered, and (iii) significant and low powered. (A sketch of the generic normal-approximation power formula underlying such calculations follows this entry.)
    Keywords: Ex Ante Power, Ex Post Power, Hypothesis Testing, Monte Carlo simulation
    JEL: C12 C15 C18
    Date: 2022–10–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:22/17&r=
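For orientation, the textbook normal-approximation power formula that any such ex post calculation builds on can be coded in a few lines; this is a generic sketch, not necessarily the authors' estimator, and the effect size and standard error plugged in below are hypothetical.

```python
# Power of a two-sided z-test against a given effect size, using the normal approximation.
from scipy.stats import norm

def power_two_sided(effect, se, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    t = effect / se
    return norm.cdf(t - z) + norm.cdf(-t - z)

# hypothetical inputs: an effect of 0.10 with a standard error of 0.05
print(round(power_two_sided(0.10, 0.05), 3))   # ~0.516: roughly 50% power even though t = 2
```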
  6. By: Leonardo Marinho
    Abstract: I develop the concept of impulse response in a causal fashion, defining analytical tools suitable for different kinds of policy analysis. Monte Carlo experiments illustrate how the techniques apply to models with features such as confounders and nonlinearities. I also apply some of these techniques to practical macroeconomic problems, computing impulse responses of GDP, the interest rate, inflation, and the real exchange rate to monetary policy decisions of Banco Central do Brasil, the Brazilian Central Bank. (A generic potential-outcomes definition of a causal impulse response follows this entry.)
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:570&r=
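One common way to formalize a causal impulse response, stated here only as orientation and not necessarily in the paper's notation, is as a contrast of potential outcomes at horizon $h$:
$\mathrm{IRF}(h) = \mathbb{E}\left[ Y_{t+h}(d) - Y_{t+h}(d') \right],$
the expected difference in the outcome $h$ periods ahead between two policy interventions $d$ and $d'$. Confounding arises when the realized policy is itself driven by variables that also affect $Y_{t+h}$.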
  7. By: Stéphane Crépey (LPSM, UPCité); Lehdili Noureddine (LPSM, UPCité); Nisrine Madhar (LPSM, UPCité); Maud Thomas (LPSM, SU)
    Abstract: We consider time series representing a wide variety of risk factors in the context of financial risk management. A major issue with these data is the presence of anomalies, which induce a miscalibration of the models used to quantify and manage risk and hence potentially erroneous risk measures. The detection of anomalies is therefore of utmost importance in financial risk management. We propose an approach that aims at improving anomaly detection on financial time series, overcoming most of the inherent difficulties. A first concern is to extract from the time series valuable features that ease the anomaly detection task. This step is carried out by compressing and reconstructing the data with principal component analysis. We then define an anomaly score using a feed-forward neural network. A time series is deemed contaminated when its anomaly score exceeds a given cutoff. This cutoff is not a hand-set parameter; instead, it is calibrated as a parameter of the neural network through the minimisation of a customized loss function. The efficiency of the proposed model relative to several well-known anomaly detection algorithms is demonstrated numerically. We also show, on a practical case of value-at-risk estimation, that the estimation errors are reduced when the proposed anomaly detection model is used together with a naive imputation approach to correct the anomaly. (A sketch of the PCA compression and reconstruction-error step follows this entry.)
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.11686&r=
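As a companion to the abstract, a minimal sketch of the PCA compression and reconstruction-error step; the neural-network score with a learned cutoff is the paper's contribution and is not reproduced here, and the component count, toy data, and flagging rule are illustrative assumptions.

```python
# Compress and reconstruct the data with PCA; the per-series reconstruction error is the
# kind of feature that can feed an anomaly score.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 30))     # 500 toy series of 30 risk-factor observations
X[7, 12] += 8.0                        # plant one anomalous value

pca = PCA(n_components=5).fit(X)
X_hat = pca.inverse_transform(pca.transform(X))
recon_error = np.linalg.norm(X - X_hat, axis=1)

flagged = np.where(recon_error > recon_error.mean() + 3 * recon_error.std())[0]
print(flagged)                         # the contaminated series should stand out
```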
  8. By: Markus Brueckner
    Abstract: This paper provides an econometric model and estimation framework that make it possible to obtain estimates of causal effects of monetary policy in a small open economy. The model and estimation framework explicitly take into account the endogeneity of monetary policy: i.e., if the central bank has an inflation target, then monetary policy itself is a function of economic shocks that affect inflation and other macroeconomic outcomes of interest. This is the standard simultaneity problem, which to date has not been satisfactorily addressed in the empirical monetary policy literature. The simultaneity problem can only be addressed if one has a valid instrument for monetary policy. In this paper I provide a local projections instrumental variables framework that delivers causal estimates of the dynamic effects that monetary policy in a small open economy has on inflation, the output gap, credit growth, and the exchange rate in the presence of external shocks. (A generic local-projections IV sketch follows this entry.)
    JEL: C3 E3 E4 E6
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2022-689&r=
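As orientation, a generic local-projections IV regression can be sketched as below; the variable names, toy data, and instrument are placeholders, not the paper's small-open-economy specification.

```python
# LP-IV sketch: regress the outcome h periods ahead on the policy rate, instrumented by
# an external shock series, with HAC ("kernel") standard errors.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
T = 300
shock = rng.standard_normal(T)                             # placeholder external instrument
rate = 0.8 * shock + rng.standard_normal(T)                # policy rate, partly driven by the shock
infl = -0.3 * np.roll(rate, 2) + rng.standard_normal(T)    # toy outcome responding with a lag
df = pd.DataFrame({"rate": rate, "infl": infl, "shock": shock})
df["const"] = 1.0

def lp_iv(data, h):
    d = data.assign(y_lead=data["infl"].shift(-h)).dropna()
    res = IV2SLS(d["y_lead"], d[["const"]], d[["rate"]], d[["shock"]]).fit(cov_type="kernel")
    return res.params["rate"], res.std_errors["rate"]

for h in range(5):
    b, se = lp_iv(df, h)
    print(f"h={h}: response {b:+.2f} (se {se:.2f})")        # traces out the impulse response
```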
  9. By: Dangxing Chen; Weicheng Ye; Jiahui Ye
    Abstract: The forecasting of credit default risk has been an important research field for several decades. Traditionally, logistic regression has been widely recognized as a solution due to its accuracy and interpretability. More recently, researchers have tended to use more complex and advanced machine learning methods to improve prediction accuracy. Although certain non-linear machine learning methods have better predictive power, they are often considered to lack interpretability by financial regulators. Thus, they have not been widely applied in credit risk assessment. We introduce a neural network with a selective option that increases interpretability by distinguishing whether a dataset can be explained by a linear model or not. We find that, for most of the datasets, logistic regression is sufficient, with reasonable accuracy; meanwhile, for some specific data portions, a shallow neural network model leads to much better accuracy without significantly sacrificing interpretability.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.10127&r=
  10. By: Dangxing Chen; Weicheng Ye
    Abstract: The forecasting of credit default risk has been an active research field for several decades. Historically, logistic regression has been used as a major tool due to its compliance with regulatory requirements: transparency, explainability, and fairness. In recent years, researchers have increasingly used complex and advanced machine learning methods to improve prediction accuracy. Even though such a method can potentially improve accuracy, it complicates the simple structure of logistic regression, deteriorates explainability, and often violates fairness. In the absence of compliance with regulatory requirements, even highly accurate machine learning methods are unlikely to be accepted by companies for credit scoring. In this paper, we introduce a novel class of monotonic neural additive models, which meet regulatory requirements by simplifying the neural network architecture and enforcing monotonicity. By utilizing the special architectural features of the neural additive model, the monotonic neural additive model penalizes monotonicity violations effectively; consequently, the computational cost of training it is similar to that of training a plain neural additive model, so the constraint comes essentially as a free lunch. We demonstrate through empirical results that our new model is as accurate as black-box fully-connected neural networks, providing a highly accurate and regulated machine learning method. (A sketch of one way to penalize monotonicity violations in a neural additive model follows this entry.)
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.10070&r=
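As a hedged illustration (not the paper's exact recipe), one way to penalize monotonicity violations in a neural additive model is to penalize negative partial derivatives for features flagged as monotone increasing; the architecture sizes, penalty weight, and toy data below are assumptions.

```python
# Neural additive model: one small subnetwork per feature, outputs summed. The penalty
# pushes the partial derivative of each monotone-increasing feature to be non-negative.
import torch
import torch.nn as nn

class NAM(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                                   # x: (batch, n_features)
        contribs = [net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)]
        return torch.cat(contribs, dim=1).sum(dim=1, keepdim=True) + self.bias

def monotonicity_penalty(model, x, increasing):
    """Mean squared negative slope over features required to be monotone increasing."""
    x = x.clone().requires_grad_(True)
    grads = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
    return (torch.relu(-grads[:, increasing]) ** 2).mean()

# toy training step: logistic loss plus the monotonicity penalty
x = torch.randn(64, 5)
y = (x[:, 0] + 0.5 * x[:, 1] > 0).float().unsqueeze(1)
model = NAM(n_features=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.binary_cross_entropy_with_logits(model(x), y) \
       + 10.0 * monotonicity_penalty(model, x, increasing=[0, 1])
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```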
  11. By: Dangxing Chen; Weicheng Ye
    Abstract: For many years, machine learning methods have been used in a wide range of fields, including computer vision and natural language processing. While machine learning methods have significantly improved model performance over traditional methods, their black-box structure makes it difficult for researchers to interpret results. For highly regulated financial industries, transparency, explainability, and fairness are as important as accuracy, if not more so. Without meeting regulatory requirements, even highly accurate machine learning methods are unlikely to be accepted. We address this issue by introducing a novel class of transparent and interpretable machine learning algorithms known as generalized gloves of neural additive models. These models separate features into three categories: linear features, individual nonlinear features, and interacted nonlinear features; interactions in the last category are only local. The linear and nonlinear components are distinguished by a stepwise selection algorithm, and interacted groups are carefully verified by applying additive separation criteria. Empirical results demonstrate that generalized gloves of neural additive models provide optimal accuracy with the simplest architecture, allowing for a highly accurate, transparent, and explainable approach to machine learning. (A minimal sketch of this three-part additive structure follows this entry.)
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.10082&r=
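A minimal sketch of the three-part additive structure the abstract describes: a linear block, per-feature subnetworks, and a small network over a designated interacting group. Which features fall into which group is assumed here; in the paper that assignment comes from stepwise selection and additive-separation checks.

```python
# Additive model with three feature groups: linear, individually nonlinear, and one
# locally interacting group. Group membership below is an illustrative assumption.
import torch
import torch.nn as nn

class GlovesOfNAM(nn.Module):
    def __init__(self, linear_idx, nonlinear_idx, interact_idx, hidden=16):
        super().__init__()
        self.linear_idx, self.nonlinear_idx, self.interact_idx = linear_idx, nonlinear_idx, interact_idx
        self.linear = nn.Linear(len(linear_idx), 1)
        self.nonlinear = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in nonlinear_idx
        )
        self.interaction = nn.Sequential(nn.Linear(len(interact_idx), hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        out = self.linear(x[:, self.linear_idx])
        for j, net in zip(self.nonlinear_idx, self.nonlinear):
            out = out + net(x[:, j:j + 1])
        return out + self.interaction(x[:, self.interact_idx])

model = GlovesOfNAM(linear_idx=[0, 1], nonlinear_idx=[2, 3], interact_idx=[4, 5])
print(model(torch.randn(8, 6)).shape)   # torch.Size([8, 1])
```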

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.