nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒09‒04
27 papers chosen by
Sune Karlsson, Örebro universitet


  1. Randomization Inference of Heterogeneous Treatment Effects under Network Interference By Julius Owusu
  2. Inference for Low-rank Completion without Sample Splitting with Application to Treatment Effect Estimation By Jungjun Choi; Hyukjun Kwon; Yuan Liao
  3. Composite Quantile Factor Models By Xiao Huang
  4. Testing for Threshold Effects in Presence of Heteroskedasticity and Measurement Error with an application to Italian Strikes By Francesco Angelini; Massimiliano Castellani; Simone Giannerini; Greta Goracci
  5. Structural Breaks in Seemingly Unrelated Regression Models By Shahnaz Parsaeian
  6. Efficient Variational Inference for Large Skew-t Copulas with Application to Intraday Equity Returns By Lin Deng; Michael Stanley Smith; Worapree Maneesoonthorn
  7. Autoregressive networks By Jiang, Binyan; Li, Jialiang; Yao, Qiwei
  8. Identification in Dynamic Binary Choice Models By Gary Chamberlain
  9. Hypothesis Tests Under Separation By Rainey, Carlisle
  10. Solving the Forecast Combination Puzzle By David T. Frazier; Ryan Covey; Gael M. Martin; Donald Poskitt
  11. A Guide to Impact Evaluation under Sample Selection and Missing Data: Teacher's Aides and Adolescent Mental Health By Simon Calmar Andersen; Louise Beuchert; Phillip Heiler; Helena Skyt Nielsen
  12. Matrix Completion When Missing Is Not at Random and Its Applications in Causal Panel Data Models By Jungjun Choi; Ming Yuan
  13. Towards Practical Robustness Auditing for Linear Regression By Daniel Freund; Samuel B. Hopkins
  14. Shadow-rate VARs By Carriero, Andrea; Clark, Todd E.; Marcellino, Massimiliano; Mertens, Elmar
  15. Econometric Analysis with Compositional and Non-Compositional Covariates By Ben-Gad, M.
  16. Deep spectral Q-learning with application to mobile health By Gao, Yuhe; Shi, Chengchun; Song, Rui
  17. Estimators for Topic-Sampling Designs By Clifford, Scott; Rainey, Carlisle
  18. Sig-Splines: universal approximation and convex calibration of time series generative models By Magnus Wiese; Phillip Murray; Ralf Korn
  19. Combining Large Numbers of Density Predictions with Bayesian Predictive Synthesis By Tony Chernis
  20. Causal Inference for Banking Finance and Insurance A Survey By Satyam Kumar; Yelleti Vivek; Vadlamani Ravi; Indranil Bose
  21. Peer effects and endogenous social interactions By Koen Jochmans
  22. What's Logs Got to do With it: On the Perils of log Dependent Variables and Difference-in-Differences By Brendon McConnell
  23. Statistically efficient advantage learning for offline reinforcement learning in infinite horizons By Shi, Chengchun; Luo, Shikai; Le, Yuan; Zhu, Hongtu; Song, Rui
  24. Statistical Decision Theory Respecting Stochastic Dominance By Charles F. Manski; Aleksey Tetenov
  25. Characterizing Correlation Matrices that Admit a Clustered Factor Representation By Chen Tong; Peter Reinhard Hansen
  26. How to measure inFLAtion volatility. A note By Alfredo García-Hiernaux; María T. González-Pérez; David E. Guerrero
  27. The Distributional Impact of Money Growth and Inflation Disaggregates: A Quantile Sensitivity Analysis By Matteo Iacopini; Aubrey Poon; Luca Rossini; Dan Zhu

  1. By: Julius Owusu
    Abstract: We design randomization tests of heterogeneous treatment effects when units interact on a network. Our modeling strategy incorporates network interference into the potential outcomes framework using the concept of network exposure mapping. We consider three null hypotheses that represent different notions of homogeneous treatment effects, but due to nuisance parameters and the multiplicity of potential outcomes, the hypotheses are not sharp. To address the issue of multiple potential outcomes, we propose a conditional randomization inference method that expands on existing methods. Additionally, we propose two techniques that overcome the nuisance parameter issue. We show that our conditional randomization inference method, combined with either of the proposed techniques for handling nuisance parameters, produces asymptotically valid p-values. We illustrate the testing procedures on a network data set, and we also present the results of a Monte Carlo study.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.00202&r=ecm
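    Note: the paper's conditional, network-exposure test is too involved for a short sketch, but the Fisher randomization machinery it extends is easy to illustrate. The Python sketch below (toy data and names, not the author's implementation) computes an exact randomization p-value for a difference-in-means statistic:

```python
from itertools import combinations
import numpy as np

def randomization_pvalue(y, treated_idx, n):
    """Exact one-sided Fisher randomization p-value for a
    difference-in-means statistic, enumerating every assignment
    with the same number of treated units."""
    y = np.asarray(y, dtype=float)
    k = len(treated_idx)
    obs = y[list(treated_idx)].mean() - np.delete(y, list(treated_idx)).mean()
    stats = []
    for s in combinations(range(n), k):
        stats.append(y[list(s)].mean() - np.delete(y, list(s)).mean())
    stats = np.array(stats)
    return (stats >= obs).mean()

# Toy data: a strong treatment effect, 3 treated units out of 6.
y = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]
p = randomization_pvalue(y, treated_idx=(3, 4, 5), n=6)
```

    With three treated units among six, the observed assignment is the most extreme of the 20 possible ones, so the exact one-sided p-value is 1/20 = 0.05.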
  2. By: Jungjun Choi; Hyukjun Kwon; Yuan Liao
    Abstract: This paper studies the inferential theory for estimating low-rank matrices. It also provides an inference method for the average treatment effect as an application. We show that least squares estimation of the eigenvectors, following nuclear norm penalization, attains asymptotic normality. The key contribution of our method is that it does not require sample splitting. In addition, this paper allows for dependent observation patterns and heterogeneous observation probabilities. Empirically, we apply the proposed procedure to estimating the impact of the presidential vote on the allocation of the U.S. federal budget to the states.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.16370&r=ecm
  3. By: Xiao Huang
    Abstract: This paper introduces the composite quantile factor model, a method for factor analysis in high-dimensional panel data. We propose to estimate the factors and factor loadings across different quantiles of the data, allowing the estimates to better adapt to features of the data at different quantiles while still modeling the mean of the data. We derive the limiting distribution of the estimated factors and factor loadings, and discuss an information criterion for consistent selection of the number of factors. Simulations show that the proposed estimator and the information criterion have good finite sample properties for several non-normal distributions under consideration. We also consider an empirical study on factor analysis for 246 quarterly macroeconomic variables. A companion R package, cqrfactor, is developed.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.02450&r=ecm
  4. By: Francesco Angelini; Massimiliano Castellani; Simone Giannerini; Greta Goracci
    Abstract: Many macroeconomic time series are characterised by nonlinearity both in the conditional mean and in the conditional variance and, in practice, it is important to investigate these two aspects separately. Here we address the issue of testing for threshold nonlinearity in the conditional mean, in the presence of conditional heteroskedasticity. We propose a supremum Lagrange Multiplier approach to test a linear ARMA-GARCH model against the alternative of a TARMA-GARCH model. We derive the asymptotic null distribution of the test statistic; this requires novel results, since the difficulties of working with nuisance parameters, absent under the null hypothesis, are amplified by the nonlinear moving average combined with GARCH-type innovations. We show that tests that do not account for heteroskedasticity fail to achieve the correct size even for large sample sizes. Moreover, we show that the TARMA specification naturally accounts for the ubiquitous presence of measurement error that affects macroeconomic data. We apply the results to analyse the time series of Italian strikes and show that the TARMA-GARCH specification is consistent with the relevant macroeconomic theory while capturing the main features of the dynamics of Italian strikes, such as asymmetric cycles and regime switching.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.00444&r=ecm
  5. By: Shahnaz Parsaeian (Department of Economics, University of Kansas, Lawrence, KS 66045)
    Abstract: This paper develops an efficient Stein-like shrinkage estimator for estimating slope parameters under structural breaks in seemingly unrelated regression models, which is then used for forecasting. The proposed method is a weighted average of two estimators: a restricted estimator that estimates the parameters under the restriction of no break in the coefficients, and an unrestricted estimator that considers break points and estimates the parameters using the observations within each regime. It is established that the asymptotic risk of the Stein-like shrinkage estimator is smaller than that of the unrestricted estimator, which is the method typically used to estimate the slope coefficients under structural breaks. Furthermore, this paper proposes an averaging minimal mean squared error estimator in which the averaging weight is derived by minimizing its asymptotic risk. The superiority of the two proposed estimators over the unrestricted estimator in terms of mean squared forecast error is also established. Further, an analytical comparison between the asymptotic risks of the proposed estimators is provided. Insights from the theoretical analysis are demonstrated in Monte Carlo simulations, and through an empirical example of forecasting output growth rates of G7 countries.
    Keywords: Forecasting; Seemingly unrelated regression; Structural breaks; Stein-like shrinkage estimator; Minimal mean squared error estimator
    JEL: C13 C23 C52 C53
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202308&r=ecm
  6. By: Lin Deng; Michael Stanley Smith; Worapree Maneesoonthorn
    Abstract: Large skew-t factor copula models are attractive for the modeling of financial data because they allow for asymmetric and extreme tail dependence. We show that the copula implicit in the skew-t distribution of Azzalini and Capitanio (2003) allows for a higher level of pairwise asymmetric dependence than two popular alternative skew-t copulas. Estimation of this copula in high dimensions is challenging, and we propose a fast and accurate Bayesian variational inference (VI) approach to do so. The method uses a conditionally Gaussian generative representation of the skew-t distribution to define an augmented posterior that can be approximated accurately. A fast stochastic gradient ascent algorithm is used to solve the variational optimization. The new methodology is used to estimate copula models for intraday returns from 2017 to 2021 on 93 U.S. equities. The copula captures substantial heterogeneity in asymmetric dependence over equity pairs, in addition to the variability in pairwise correlations. We show that intraday predictive densities from the skew-t copula are more accurate than from some other copula models, while portfolio selection strategies based on the estimated pairwise tail dependencies improve performance relative to the benchmark index.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.05564&r=ecm
  7. By: Jiang, Binyan; Li, Jialiang; Yao, Qiwei
    Abstract: We propose a first-order autoregressive (i.e. AR(1)) model for dynamic network processes in which edges change over time while nodes remain unchanged. The model depicts the dynamic changes explicitly. It also facilitates simple and efficient statistical inference methods, including a permutation test for diagnostic checking of the fitted network models. The proposed model can be applied to network processes with various underlying structures but with independent edges. As an illustration, an AR(1) stochastic block model has been investigated in depth, which characterizes the latent communities by the transition probabilities over time. This leads to a new and more effective spectral clustering algorithm for identifying the latent communities. We have derived a finite sample condition under which the perfect recovery of the community structure can be achieved by the newly defined spectral clustering algorithm. Furthermore, the inference for a change point is incorporated into the AR(1) stochastic block model to cater for possible structure changes. We have derived the explicit error rates for the maximum likelihood estimator of the change-point. Application with three real data sets illustrates both relevance and usefulness of the proposed AR(1) models and the associated inference methods.
    Keywords: AR(1) networks; change point; dynamic stochastic block model; Hamming distance; maximum likelihood estimation; spectral clustering algorithm; Yule-Walker equation
    JEL: C1
    Date: 2023–08–15
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119983&r=ecm
  8. By: Gary Chamberlain
    Abstract: This paper studies identification in a binary choice panel data model with choice probabilities depending on a lagged outcome, additional observed regressors and an unobserved unit-specific effect. It is shown that with two consecutive periods of data, identification is not possible (in a neighborhood of zero), even in the logistic case.
    Date: 2023–07–26
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:16/23&r=ecm
  9. By: Rainey, Carlisle
    Abstract: Separation commonly occurs in political science, usually when a binary explanatory variable perfectly predicts a binary outcome. In these situations, methodologists often recommend penalized maximum likelihood or Bayesian estimation. But researchers might struggle to identify an appropriate penalty or prior distribution. Fortunately, I show that researchers can easily test hypotheses about the model coefficients with standard frequentist tools. While the popular Wald test produces misleading (even nonsensical) p-values under separation, I show that likelihood ratio tests and score tests behave in the usual manner. Therefore, researchers can produce meaningful p-values with standard frequentist tools under separation without the use of penalties or prior information.
    Date: 2023–07–30
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:bmvnu&r=ecm
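    Note: the contrast is easy to verify numerically. The sketch below (constructed data, not from the paper) fits a logit by direct optimization under perfect separation: the coefficient estimate diverges, so Wald inference breaks down, but the log-likelihood converges to a finite supremum and the likelihood ratio statistic remains well behaved:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def logit_negll(beta, X, y):
    """Negative log-likelihood of a logit model (numerically stable)."""
    eta = X @ beta
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

# Perfectly separated data: x = 1 exactly predicts y = 1.
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
X = np.column_stack([np.ones(6), y])   # intercept + separating regressor

# Restricted fit: intercept only (coefficient on x fixed at zero).
r = minimize(logit_negll, x0=np.zeros(1), args=(X[:, :1], y))
# Unrestricted fit: the coefficient wanders toward infinity, but the
# log-likelihood converges to its (finite) supremum of 0.
u = minimize(logit_negll, x0=np.zeros(2), args=(X, y))

lr_stat = 2.0 * (r.fun - u.fun)
pval = chi2.sf(lr_stat, df=1)
```

    Here the restricted negative log-likelihood is 6·log 2 ≈ 4.159, so the LR statistic is bounded above by about 8.32 and yields a sensible p-value despite the separation.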
  10. By: David T. Frazier; Ryan Covey; Gael M. Martin; Donald Poskitt
    Abstract: We demonstrate that the forecasting combination puzzle is a consequence of the methodology commonly used to produce forecast combinations. By the combination puzzle, we refer to the empirical finding that predictions formed by combining multiple forecasts in ways that seek to optimize forecast performance often do not out-perform more naive, e.g. equally-weighted, approaches. In particular, we demonstrate that, due to the manner in which such forecasts are typically produced, tests that aim to discriminate between the predictive accuracy of competing combination strategies can have low power, and can lack size control, leading to an outcome that favours the naive approach. We show that this poor performance is due to the behavior of the corresponding test statistic, which has a non-standard asymptotic distribution under the null hypothesis of no inferior predictive accuracy, rather than the standard normal distribution that is typically adopted. In addition, we demonstrate that the low power of such predictive accuracy tests in the forecast combination setting can be completely avoided if more efficient estimation strategies are used in the production of the combinations, when feasible. We illustrate these findings both in the context of forecasting a functional of interest and in terms of predictive densities. A short empirical example using daily financial returns exemplifies how researchers can avoid the puzzle in practical settings.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.05263&r=ecm
  11. By: Simon Calmar Andersen; Louise Beuchert; Phillip Heiler; Helena Skyt Nielsen
    Abstract: This paper is concerned with identification, estimation, and specification testing in causal evaluation problems when data is selective and/or missing. We leverage recent advances in the literature on graphical methods to provide a unifying framework for guiding empirical practice. The approach integrates and connects to prominent identification and testing strategies in the literature on missing data, causal machine learning, panel data analysis, and more. We demonstrate its utility in the context of identification and specification testing in sample selection models and field experiments with attrition. We provide a novel analysis of a large-scale cluster-randomized controlled teacher's aide trial in Danish schools at grade 6. Even with detailed administrative data, the handling of missing data crucially affects broader conclusions about effects on mental health. Results suggest that teaching assistants provide an effective way of improving internalizing behavior for large parts of the student population.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.04963&r=ecm
  12. By: Jungjun Choi; Ming Yuan
    Abstract: This paper develops an inferential framework for matrix completion when missing is not at random and without the requirement of strong signals. Our development is based on the observation that if the number of missing entries is small enough compared to the panel size, then they can be estimated well even when missing is not at random. Taking advantage of this fact, we divide the missing entries into smaller groups and estimate each group via nuclear norm regularization. In addition, we show that with appropriate debiasing, our proposed estimate is asymptotically normal even for fairly weak signals. Our work is motivated by recent research on the Tick Size Pilot Program, an experiment conducted by the Securities and Exchange Commission (SEC) to evaluate the impact of widening the tick size on the market quality of stocks from 2016 to 2018. While previous studies were based on traditional regression or difference-in-differences methods, assuming that the treatment effect is invariant with respect to time and unit, our analyses suggest significant heterogeneity across units and intriguing dynamics over time during the pilot program.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.02364&r=ecm
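    Note: nuclear norm regularization, one ingredient of the proposed estimator, can be illustrated with a minimal soft-impute loop (a generic sketch under simplified assumptions, not the authors' grouped and debiased procedure):

```python
import numpy as np

def soft_impute(M_obs, mask, tau=0.1, n_iter=500):
    """Nuclear-norm-regularized completion: alternate between filling
    missing entries with the current fit and soft-thresholding the
    singular values by tau. `mask` is True where an entry is observed."""
    Z = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        X = np.where(mask, M_obs, Z)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt
    return Z

# Rank-one target matrix with a handful of entries removed.
a = np.linspace(1.0, 2.0, 10)
M = np.outer(a, a)
mask = np.ones_like(M, dtype=bool)
for i, j in [(0, 3), (2, 7), (5, 1), (8, 8), (4, 4), (9, 2)]:
    mask[i, j] = False

M_hat = soft_impute(M, mask)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

    With few missing entries relative to the panel size, the low-rank structure is recovered almost exactly, which is the intuition the paper exploits when missingness is informative.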
  13. By: Daniel Freund; Samuel B. Hopkins
    Abstract: We investigate practical algorithms to find or disprove the existence of small subsets of a dataset which, when removed, reverse the sign of a coefficient in an ordinary least squares regression involving that dataset. We empirically study the performance of well-established algorithmic techniques for this task -- mixed integer quadratically constrained optimization for general linear regression problems and exact greedy methods for special cases. We show that these methods largely outperform the state of the art and provide a useful robustness check for regression problems in a few dimensions. However, significant computational bottlenecks remain, especially for the important task of disproving the existence of such small sets of influential samples for regression problems of dimension $3$ or greater. We make some headway on this challenge via a spectral algorithm using ideas drawn from recent innovations in algorithmic robust statistics. We summarize the limitations of known techniques in several challenge datasets to encourage further algorithmic innovation.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.16315&r=ecm
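    Note: the greedy special case can be sketched in a few lines for simple regression (hypothetical data; the paper's algorithms, including the mixed integer formulation and spectral method, are considerably more sophisticated):

```python
import numpy as np

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def greedy_sign_flip(x, y, max_remove=5):
    """Greedily drop the single observation whose removal moves the
    slope most toward the opposite sign; stop once the sign flips."""
    keep = list(range(len(x)))
    target = -np.sign(slope(x, y))      # the sign we are trying to reach
    removed = []
    for _ in range(max_remove):
        if np.sign(slope(x[keep], y[keep])) == target:
            break
        best_i, best_val = None, None
        for i in keep:
            rest = [j for j in keep if j != i]
            s = slope(x[rest], y[rest])
            if best_val is None or target * s > target * best_val:
                best_i, best_val = i, s
        keep.remove(best_i)
        removed.append(best_i)
    return removed, slope(x[keep], y[keep])

# Five points with slope -1 plus one leverage point that makes it positive.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 100.0])
y = np.array([4.0, 3.0, 2.0, 1.0, 0.0, 200.0])
removed, new_slope = greedy_sign_flip(x, y)
```

    On this toy dataset the single leverage point is found immediately, and dropping it flips the fitted slope from positive to -1.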
  14. By: Carriero, Andrea; Clark, Todd E.; Marcellino, Massimiliano; Mertens, Elmar
    Abstract: VARs are a popular tool for forecasting and structural analysis, but ill-suited to handle occasionally binding constraints, like the effective lower bound on nominal interest rates. We extend the VAR framework by modeling interest rates as censored observations of a latent shadow-rate process, and propose an efficient sampler for Bayesian estimation of such 'shadow-rate VARs.' We analyze specifications where actual and shadow rates serve as explanatory variables and find benefits of including both. In comparison to a standard VAR, shadow-rate VARs generate superior predictions for short- and long-term interest rates, and deliver some gains for macroeconomic variables in US data. Our structural analysis estimates economic responses to shocks in financial conditions, showing strong differences in the reaction of interest rates depending on whether the ELB binds or not. After an adverse shock, our shadow-rate VAR implies a stronger decline in economic activity when the ELB binds than when it does not.
    Keywords: Macroeconomic forecasting, effective lower bound, term structure, censored observations
    JEL: C34 C53 E17 E37 E43 E47
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:142023&r=ecm
  15. By: Ben-Gad, M.
    Abstract: In this paper I consider how best to incorporate compositional data (shares of a whole which can be represented as points on a simplex) together with noncompositional data as covariates in a linear regression. The standard method for incorporating compositional data in regressions is to omit one share to overcome the problem of singularity. I demonstrate that doing so ignores the compositional nature of the data and the resulting models are not objects in a vector space, which in turn reduces their usefulness. In terms of Aitchison geometry - the only geometry that can generate a vector space on a simplex - I show how this method also grossly distorts the relationship between points in the compositional data set. Furthermore, the regression coefficients that result are not permutation invariant, so unless there is an obvious baseline category to be omitted with which the other variables in the composition ought naturally to be compared, this approach gives researchers latitude to choose the permutation of the model that supports a particular hypothesis or appears most convincing in terms of p-values. The alternatives in this paper build on work by Aitchison (1982, 1986) on additive logarithmic ratio (ALR) transformations and Egozcue et al. (2003) on isometric logarithmic ratio (ILR) transformations. Transforming the compositional data using ALRs generates regressions that are permutation invariant and hyperplanes in a vector space. However, ALRs translate the points in the simplex into coordinates relative to an oblique basis, so the angles and distances between the data points remain somewhat distorted, though this distortion is inversely related to the number of shares in the composition. By contrast, ILRs eliminate the distortion by translating the points into coordinates relative to an orthogonal basis. However, the resulting regressions are no longer permutation invariant and are difficult to interpret. To overcome these shortcomings, Hron et al. (2012) suggest using ILRs, but combining the coefficient estimates across all the different permutations to produce one statistical model. I demonstrate that estimating a separate regression for each permutation is unnecessary - estimating either a single regression using ALR coordinates or a constrained regression and then multiplying the resulting regression coefficients and standard errors associated with the compositional variables by a simple factor is sufficient. Though log-ratios incorporate more information about the nature of compositional data as coordinates in a simplex, I demonstrate that this does not exacerbate the inherent multicollinearity present in compositional datasets. Throughout, I use economic growth regressions with compositional data on ten religious categories, similar to Barro and McCleary (2003) and McCleary and Barro (2006), to demonstrate and contrast all these different approaches.
    Keywords: Compositional Data; Aitchison Geometry; Isometric Logarithmic Ratios; Economic Growth Regressions
    Date: 2022–10–02
    URL: http://d.repec.org/n?u=RePEc:cty:dpaper:22/01&r=ecm
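    Note: the ALR transformation of Aitchison (1982) that the paper builds on is straightforward to compute. A minimal sketch with hypothetical shares (not the paper's religious-category data) maps a composition to log-ratio coordinates and back:

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform: coordinates of a composition
    relative to its last component."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def alr_inv(z):
    """Inverse ALR: map coordinates back to a composition on the simplex."""
    w = np.append(np.exp(z), 1.0)
    return w / w.sum()

shares = np.array([0.5, 0.3, 0.2])   # a point on the 3-part simplex
z = alr(shares)
back = alr_inv(z)
```

    The choice of denominator component is exactly the baseline-category choice discussed in the abstract; changing it permutes the coordinates but the round trip always recovers the original composition.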
  16. By: Gao, Yuhe; Shi, Chengchun; Song, Rui
    Abstract: Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
    Keywords: dynamic treatment regimes; mixed frequency data; principal component analysis; reinforcement learning
    JEL: C1
    Date: 2023–12–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119445&r=ecm
  17. By: Clifford, Scott; Rainey, Carlisle
    Abstract: When researchers design an experiment, they must usually fix important details, or what we call the “topic” of the experiment. For example, researchers studying the impact of party cues on attitudes must inform respondents of the parties’ positions on a particular policy. In doing so, the researchers implement just one of many possible designs. Clifford, Leeper, and Rainey (2023) argue that researchers should implement many of the possible designs in parallel—what they call “topic sampling”—to generalize to a larger population of topics. We describe two estimators for topic-sampling designs. First, we describe a nonparametric estimator of the typical effect that is unbiased under the assumptions of the design. Second, we describe a hierarchical model that researchers can use to describe the heterogeneity. We suggest describing the variation in three ways: (1) the standard deviation in treatment effects across topics, (2) the treatment effects for particular topics, and (3) perhaps how the treatment effects for particular topics vary with topic-level predictors. We evaluate the performance of the hierarchical model using the Strengthening Democracy Challenge megastudy and show that the hierarchical model works well.
    Date: 2023–08–11
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:7ady6&r=ecm
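    Note: the first, nonparametric estimator has a simple form under the design: average the per-topic differences in means, then summarize heterogeneity with the across-topic standard deviation. A toy sketch with made-up numbers (not the megastudy data):

```python
import numpy as np

# Per-topic outcomes for treated and control respondents (hypothetical).
topics = [
    {"treated": [3.0, 4.0], "control": [2.0, 3.0]},   # topic effect 1.0
    {"treated": [6.0, 6.0], "control": [4.0, 4.0]},   # topic effect 2.0
    {"treated": [9.0, 9.0], "control": [6.0, 6.0]},   # topic effect 3.0
]

# Difference in means within each topic, then average across topics.
effects = np.array([np.mean(t["treated"]) - np.mean(t["control"])
                    for t in topics])
typical_effect = effects.mean()      # estimate of the typical effect
effect_sd = effects.std(ddof=1)      # heterogeneity across topics
```

    The hierarchical model described in the abstract refines this by partially pooling the per-topic effects and relating them to topic-level predictors.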
  18. By: Magnus Wiese; Phillip Murray; Ralf Korn
    Abstract: We propose a novel generative model for multivariate discrete-time time series data. Drawing inspiration from the construction of neural spline flows, our algorithm incorporates linear transformations and the signature transform as a seamless substitution for traditional neural networks. This approach enables us to achieve not only the universality property inherent in neural networks but also introduces convexity in the model's parameters.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.09767&r=ecm
  19. By: Tony Chernis
    Abstract: Bayesian predictive synthesis is a flexible method of combining density predictions. The flexibility comes from the ability to choose an arbitrary synthesis function to combine predictions. I study the choice of synthesis function when combining large numbers of predictions—a common occurrence in macroeconomics. Estimating combination weights with many predictions is difficult, so I consider shrinkage priors and factor modelling techniques to address this problem. The dense weights of factor modelling provide an interesting contrast with the sparse weights implied by shrinkage priors. I find that the sparse weights of shrinkage priors perform well across exercises.
    Keywords: Econometric and statistical methods
    JEL: C11 C52 C53 E37
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:23-45&r=ecm
  20. By: Satyam Kumar; Yelleti Vivek; Vadlamani Ravi; Indranil Bose
    Abstract: Causal inference plays a significant role in explaining the decisions taken by statistical models and artificial intelligence models. Of late, this field has started to attract the attention of researchers and practitioners alike. This paper presents a comprehensive survey of 37 papers published during 1992-2023 concerning the application of causal inference to banking, finance, and insurance. The papers are categorized according to the following families of domains: (i) banking, (ii) finance and its subdomains, such as corporate finance, governance finance including financial risk and financial policy, financial economics, and behavioral finance, and (iii) insurance. Further, the paper covers the primary ingredients of causal inference, namely statistical methods such as Bayesian causal networks and Granger causality, and the jargon used thereof, such as counterfactuals. The review also recommends some important directions for future research. In conclusion, we observed that the application of causal inference in the banking and insurance sectors is still in its infancy, and thus there is ample scope for further research to turn it into a viable method.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.16427&r=ecm
  21. By: Koen Jochmans (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - UT - Université de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement)
    Abstract: This paper proposes a solution to the problem of the self-selection of peers in the linear-in-means model. We do not need to specify a model for how the selection of peers comes about. Rather, we exploit two restrictions that are inherent in many such specifications to construct conditional moment conditions. The restrictions in question are that link decisions that involve a given individual are not all independent of one another, but that they are independent of the link decisions made between other pairs of individuals that are located sufficiently far away in the network. These conditions imply that instrumental variables can be constructed from leave-own-out networks.
    Keywords: Instrumental variable, Linear-in-means model, Network, Self-selection
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04164668&r=ecm
  22. By: Brendon McConnell
    Abstract: The log transformation of the dependent variable is not innocuous when using a difference-in-differences (DD) model. With a dependent variable in logs, the DD term captures an approximation of the proportional difference in growth rates across groups. As I show with both simulations and two empirical examples, if the baseline outcome distributions are sufficiently different across groups, the DD parameter for a log-specification can differ in sign from that of a levels-specification. I provide a condition, based on (i) the aggregate time effect, and (ii) the difference in relative baseline outcome means, for when the sign-switch will occur.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.00167&r=ecm
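    Note: the sign-switch is easy to reproduce with a deterministic two-group, two-period example (hypothetical numbers) in which baseline levels differ sharply across groups:

```python
import numpy as np

# Group means: treated starts low, control starts high (hypothetical).
treated_pre, treated_post = 1.0, 2.0       # doubles: large growth rate
control_pre, control_post = 100.0, 150.0   # +50%: larger change in levels

# Difference-in-differences in levels vs. in logs.
dd_levels = (treated_post - treated_pre) - (control_post - control_pre)
dd_logs = (np.log(treated_post) - np.log(treated_pre)) \
        - (np.log(control_post) - np.log(control_pre))
```

    Levels give a DD of -49, while logs give log 2 - log 1.5 ≈ 0.29: opposite signs, which is exactly the peril the paper formalizes.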
  23. By: Shi, Chengchun; Luo, Shikai; Le, Yuan; Zhu, Hongtu; Song, Rui
    Abstract: We consider reinforcement learning (RL) methods in offline domains without additional online data collection, such as mobile health applications. Most existing policy optimization algorithms in the computer science literature are developed in online settings where data are easy to collect or simulate. Their generalizations to mobile health applications with a pre-collected offline dataset remain largely unexplored. The aim of this paper is to develop a novel advantage learning framework in order to efficiently use pre-collected data for policy optimization. The proposed method takes an optimal Q-estimator computed by any existing state-of-the-art RL algorithm as input, and outputs a new policy whose value is guaranteed to converge at a faster rate than the policy derived from the initial Q-estimator. Extensive numerical experiments are conducted to back up our theoretical findings. A Python implementation of our proposed method is available at https://github.com/leyuanheart/SEAL
    Keywords: reinforcement learning; advantage learning; infinite horizons; rate of convergence; mobile health applications; T&F deal
    JEL: C1
    Date: 2022–09–27
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:115598&r=ecm
  24. By: Charles F. Manski; Aleksey Tetenov
    Abstract: The statistical decision theory pioneered by Wald (1950) has used state-dependent mean loss (risk) to measure the performance of statistical decision functions across potential samples. We think it evident that evaluation of performance should respect stochastic dominance, but we do not see a compelling reason to focus exclusively on mean loss. We think it instructive to also measure performance by other functionals that respect stochastic dominance, such as quantiles of the distribution of loss. This paper develops general principles and illustrative applications for statistical decision theory respecting stochastic dominance. We modify the Wald definition of admissibility to an analogous concept of stochastic dominance (SD) admissibility, which uses stochastic dominance rather than mean sampling performance to compare alternative decision rules. We study SD admissibility in two relatively simple classes of decision problems that arise in treatment choice. We reevaluate the relationship between the MLE, James-Stein, and James-Stein positive part estimators from the perspective of SD admissibility. We consider alternative criteria for choice among SD-admissible rules. We juxtapose traditional criteria based on risk, regret, or Bayes risk with analogous ones based on quantiles of state-dependent sampling distributions or the Bayes distribution of loss.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.05171&r=ecm
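The MLE versus James-Stein comparison the abstract revisits is easy to reproduce in a toy Monte Carlo: for a normal mean vector of dimension at least three, the positive-part James-Stein estimator has smaller mean squared loss than the MLE. This is a minimal numpy sketch, not the authors' code; the true mean vector and simulation sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sim = 10, 20000
theta = np.ones(p)                            # true mean vector (hypothetical)
x = rng.normal(theta, 1.0, size=(n_sim, p))   # one observation per replication

# MLE of the mean: the observation itself.
loss_mle = ((x - theta) ** 2).sum(axis=1)

# Positive-part James-Stein estimator: shrink toward the origin,
# truncating the shrinkage factor at zero.
norm2 = (x ** 2).sum(axis=1)
shrink = np.maximum(1.0 - (p - 2) / norm2, 0.0)
js = shrink[:, None] * x
loss_js = ((js - theta) ** 2).sum(axis=1)

print(loss_js.mean() < loss_mle.mean())  # JS+ has lower mean loss
```

Comparing quantiles of `loss_js` and `loss_mle` rather than their means is the kind of stochastic-dominance-respecting evaluation the paper advocates.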
  25. By: Chen Tong; Peter Reinhard Hansen
    Abstract: The Clustered Factor (CF) model induces a block structure on the correlation matrix and is commonly used to parameterize correlation matrices. Our results reveal that the CF model imposes superfluous restrictions on the correlation matrix. This can be avoided by a different parametrization, involving the logarithmic transformation of the block correlation matrix.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.05895&r=ecm
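The logarithmic transformation of a block correlation matrix mentioned in the abstract can be computed via the eigendecomposition of a symmetric positive-definite matrix. A minimal sketch with made-up block correlations (two blocks, within-block correlation 0.6, between-block correlation 0.2), assuming only numpy:

```python
import numpy as np

def sym_logm(c):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed through its eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def sym_expm(g):
    """Matrix exponential of a symmetric matrix (inverse of sym_logm)."""
    w, v = np.linalg.eigh(g)
    return (v * np.exp(w)) @ v.T

# A 4x4 block correlation matrix (illustrative values only).
c = np.array([[1.0, 0.6, 0.2, 0.2],
              [0.6, 1.0, 0.2, 0.2],
              [0.2, 0.2, 1.0, 0.6],
              [0.2, 0.2, 0.6, 1.0]])

g = sym_logm(c)                      # unconstrained parametrization lives here
assert np.allclose(sym_expm(g), c)   # the transform is invertible
print(np.round(g, 3))
```

Working with `g` removes the positive-definiteness constraint, which is the attraction of log-matrix parametrizations of correlation matrices.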
  26. By: Alfredo García-Hiernaux (DANAE and ICAE); María T. González-Pérez (Banco de España); David E. Guerrero (CUNEF)
    Abstract: This paper proposes a statistical model and a conceptual framework to estimate inflation volatility assuming rational inattention, where the decay in the level of attention reflects the arrival of news in the market. We estimate trend inflation and the conditional inflation volatility for Germany, Spain, the euro area and the United States using monthly data from January 2002 to March 2022 and test whether inflation was equal to or below 2% in this period in these regions. We decompose inflation volatility into positive and negative surprise components and characterise different inflation volatility scenarios during the Great Financial Crisis, the Sovereign Debt Crisis, and the post-COVID period. Our volatility measure outperforms the GARCH(1,1) model and the rolling standard deviation in one-step-ahead volatility forecasts both in-sample and out-of-sample. The methodology proposed in this article is appropriate for estimating the conditional volatility of macro-financial variables. We recommend the inclusion of this measure in inflation dynamics monitoring and forecasting exercises.
    Keywords: inflation, inflation trend, inflation volatility, rational inattention, positive and negative surprises.
    JEL: C22 C32 E3 E4 E5
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:2314&r=ecm
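The two benchmarks the abstract compares against — a GARCH(1,1) conditional variance and a rolling standard deviation — can be sketched in a few lines. This is a hedged illustration on simulated data, not the paper's estimated model; the GARCH parameters below are fixed illustrative values, not estimates.

```python
import numpy as np

def garch11_variance(returns, omega=0.05, alpha=0.05, beta=0.9):
    """One-step-ahead conditional variance recursion of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()          # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def rolling_std(returns, window=12):
    """Trailing-window standard deviation, the second benchmark."""
    return np.array([returns[max(0, t - window):t].std() if t > 0 else np.nan
                     for t in range(len(returns))])

rng = np.random.default_rng(1)
r = rng.normal(0.0, 1.0, 240)          # 20 years of monthly toy data
print(garch11_variance(r)[-1], rolling_std(r)[-1])
```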
  27. By: Matteo Iacopini; Aubrey Poon; Luca Rossini; Dan Zhu
    Abstract: We propose an alternative method to construct a quantile dependence system for inflation and money growth. By considering all quantiles, we assess how perturbations in one variable's quantile lead to changes in the distribution of the other variable. We demonstrate the construction of this relationship through a system of linear quantile regressions. The proposed framework is exploited to examine the distributional effects of money growth on the distributions of inflation and its disaggregate measures in the United States and the Euro area. Our empirical analysis uncovers significant impacts of the upper quantile of the money growth distribution on the distribution of inflation and its disaggregate measures. Conversely, we find that the lower and median quantiles of the money growth distribution have a negligible influence on the distribution of inflation and its disaggregate measures.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.05486&r=ecm
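The system of linear quantile regressions described in this abstract rests on the check (pinball) loss, whose minimizer over a constant is the empirical quantile. A minimal numpy sketch of that building block (not the authors' code; the sample and tolerance are arbitrary illustrative choices):

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss used in quantile regression:
    tau * u when u >= 0, (tau - 1) * u when u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(2)
y = rng.normal(size=5000)
tau = 0.9

# The tau-quantile minimizes expected check loss: scan candidate
# constants and confirm the minimizer sits at the empirical quantile.
grid = np.linspace(-3, 3, 601)
avg_loss = [pinball_loss(y - c, tau).mean() for c in grid]
c_star = grid[int(np.argmin(avg_loss))]
print(abs(c_star - np.quantile(y, tau)) < 0.05)
```

Replacing the constant with a linear index in covariates, quantile by quantile, gives the kind of linear quantile regression system the paper builds on.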

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.