nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒03‒27
twenty-one papers chosen by



  1. Does Machine Learning Amplify Pricing Errors in the Housing Market? -- The Economics of Machine Learning Feedback Loops By Nikhil Malik; Emaad Manzoor
  2. The performance of time series forecasting based on classical and machine learning methods for S&P 500 index By Maudud Hassan Uzzal; Robert Ślepaczuk
  3. The Effects of Artificial Intelligence on the World as a Whole from an Economic Perspective By Sharma, Rahul
  4. Simultaneous upper and lower bounds of American option prices with hedging via neural networks By Ivan Guo; Nicolas Langrené; Jiahao Wu
  5. Finding the Optimal Currency Composition of Foreign Exchange Reserves with a Quantum Computer By Martin Vesely
  6. The Macroeconomy as a Random Forest By Philippe Goulet Coulombe
  7. Attitudes and Latent Class Choice Models using Machine learning By Lorena Torres Lahoz; Francisco Camara Pereira; Georges Sfeir; Ioanna Arkoudi; Mayara Moraes Monteiro; Carlos Lima Azevedo
  8. Parametric Differential Machine Learning for Pricing and Calibration By Arun Kumar Polala; Bernhard Hientzsch
  9. Genetic multi-armed bandits: a reinforcement learning approach for discrete optimization via simulation By Deniz Preil; Michael Krapp
  10. Assessing and Comparing Fixed-Target Forecasts of Arctic Sea Ice: Glide Charts for Feature-Engineered Linear Regression and Machine Learning Models By Francis X. Diebold; Maximilian Göbel; Philippe Goulet Coulombe
  11. A Neural Phillips Curve and a Deep Output Gap By Philippe Goulet Coulombe
  12. Unsupervised Machine Learning for Explainable Health Care Fraud Detection By Shubhranshu Shekhar; Jetson Leder-Luis; Leman Akoglu
  13. Reevaluating the Taylor Rule with Machine Learning By Alper Deniz Karakas
  14. The global economic impact of AI technologies in the fight against financial crime By James Bell
  15. Competitive Model Selection in Algorithmic Targeting By Ganesh Iyer; T. Tony Ke
  16. On the Validity of Using Webpage Texts to Identify the Target Population of a Survey: An Application to Detect Online Platforms By Daas, Piet; Hassink, Wolter; Klijs, Bart
  17. Industrial Policy for Advanced AI: Compute Pricing and the Safety Tax By Mckay Jensen; Nicholas Emery-Xu; Robert Trager
  18. Logistic Regression Collaborating with AI Beam Search By Tom, Daniel
  19. Artificial intelligence adoption in the public sector - a case study By Laura Nurski
  20. Forecasting Macroeconomic Tail Risk in Real Time: Do Textual Data Add Value? By Philipp Adämmer; Jan Prüser; Rainer Schüssler
  21. Post-Episodic Reinforcement Learning Inference By Vasilis Syrgkanis; Ruohan Zhan

  1. By: Nikhil Malik; Emaad Manzoor
    Abstract: Machine learning algorithms are increasingly employed to price or value homes for sale, properties for rent, rides for hire, and various other goods and services. Machine learning-based prices are typically generated by complex algorithms trained on historical sales data. However, displaying these prices to consumers anchors the realized sales prices, which in turn become training samples for future iterations of the algorithms. The economic implications of this machine learning "feedback loop" - an indirect human-algorithm interaction - remain relatively unexplored. In this work, we develop an analytical model of machine learning feedback loops in the context of the housing market. We show that feedback loops lead machine learning algorithms to become overconfident in their own accuracy (by underestimating their error) and lead home sellers to over-rely on possibly erroneous algorithmic prices. As a consequence, at the feedback-loop equilibrium, sale prices can become entirely erratic relative to true consumer preferences in the absence of ML price interference. We then identify conditions (choice of ML models, seller characteristics and market characteristics) under which the economic payoffs for home sellers at the feedback-loop equilibrium are worse than with no machine learning at all. We also empirically validate the primitive building blocks of our analytical model using housing market data from Zillow. We conclude by prescribing algorithmic corrective strategies to mitigate the effects of machine learning feedback loops, discussing the incentives for platforms to adopt these strategies, and discussing the role of policymakers in regulating them.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.09438&r=cmp
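    To make the anchoring mechanism concrete, here is a toy Python simulation of such a feedback loop (the data-generating process and the anchoring weight are our own illustrative choices, not the paper's analytical model): the algorithm's apparent error shrinks by the anchoring factor while its error against true preferences does not.
        import numpy as np

        rng = np.random.default_rng(0)
        n, anchor = 500, 0.6       # anchor: weight buyers place on the displayed ML price

        # True valuations: one observed feature plus idiosyncratic taste noise
        x = rng.normal(size=n)
        true_price = 2.0 * x + rng.normal(scale=1.0, size=n)

        sale_price = true_price.copy()          # round 0: sales reflect true preferences
        for t in range(5):
            # The "algorithm" (OLS here) is retrained on last round's realized sales
            beta = np.polyfit(x, sale_price, 1)
            ml_price = np.polyval(beta, x)
            # Displayed ML prices anchor the next round's realized sale prices
            sale_price = anchor * ml_price + (1 - anchor) * true_price
            apparent = np.std(sale_price - ml_price)   # error the algorithm "sees"
            true_err = np.std(true_price - ml_price)   # error vs. true preferences
            print(f"round {t}: apparent error {apparent:.2f}, true error {true_err:.2f}")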
  2. By: Maudud Hassan Uzzal (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Department of Quantitative Finance, Quantitative Finance Research Group)
    Abstract: This study compares the one-step-ahead forecasting ability of a traditional technique (ARIMA) with that of a recurrent neural network (LSTM). To check the possible use of these forecasts in different asset management methods, the forecasts are then incorporated into the trading signals of investment strategies. The Random Walk model, producing naive forecasts, serves as the benchmark. The research examines daily data from the S&P 500 index over 20 years, from 2000 to 2020, a period that includes several significant episodes of market turbulence. The methods were tested for robustness to changes in parameters and hyperparameters and evaluated with various error metrics (MAE, MAPE, RMSE, MSE). The results show that ARIMA outperforms LSTM in terms of one-step-ahead forecasts. Finally, the LSTM model was tested with a variety of hyperparameters - including the number of epochs, the loss function, the optimizer, the activation functions, the number of units, the batch size, and the learning rate - to check its robustness.
    Keywords: deep learning, recurrent neural networks, ARIMA, algorithmic investment strategies, trading systems, LSTM, walk-forward process, optimization
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2023-05&r=cmp
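    A minimal walk-forward sketch of the one-step-ahead comparison, using a synthetic random-walk series as a stand-in for S&P 500 closes (the LSTM leg is omitted; the ARIMA order and sample sizes are illustrative, not the paper's settings):
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        # Stand-in for S&P 500 closes: a lognormal random walk (swap in real data here)
        prices = pd.Series(1000 * np.exp(np.cumsum(rng.normal(3e-4, 0.01, 550))))

        train_len, horizon = 500, 50
        arima_fc, naive_fc, actual = [], [], []
        for t in range(train_len, train_len + horizon):
            history = prices.iloc[:t]
            fit = ARIMA(history, order=(1, 1, 1)).fit()   # refit each step: walk-forward
            arima_fc.append(fit.forecast(1).iloc[0])
            naive_fc.append(history.iloc[-1])             # Random Walk benchmark
            actual.append(prices.iloc[t])

        actual = np.array(actual)
        for name, fc in (("ARIMA", arima_fc), ("naive", naive_fc)):
            e = actual - np.array(fc)
            print(f"{name}: MAE {np.mean(np.abs(e)):.2f}, RMSE {np.sqrt(np.mean(e**2)):.2f}")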
  3. By: Sharma, Rahul
    Abstract: Artificial intelligence (AI) has made tremendous advances in recent years, and there is no doubt that this technology will have a significant impact on the overall economy in terms of productivity, growth, markets, and innovation. A growing number of perspectives on the impact of AI are flooding the business press, but finding one that deals with its economic impact in a unique and original way is becoming increasingly difficult. Adoption of AI and machine learning (ML) methods across the economics profession has been quite uneven. Microeconomics is one of the most prominent fields in which AI and ML are being used: the explosion of data collection, especially at the consumer level (by companies such as Google), has made their application increasingly apparent and feasible. Because these models require enormous amounts of information to be useful, their application has been heavily concentrated in microeconomics, where such data are available.
    Keywords: artificial intelligence, machine learning, macroeconomics, internet of things, inventory management, technology
    JEL: O1 O32 Q5 Q55
    Date: 2021–04–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:116596&r=cmp
  4. By: Ivan Guo; Nicolas Langrené; Jiahao Wu
    Abstract: In this paper, we introduce two methods to solve the American-style option pricing problem and its dual form at the same time using neural networks. Without applying nested Monte Carlo, the first method uses a series of neural networks to simultaneously compute both the lower and upper bounds of the option price, while the second accomplishes the same goal with one global network. The avoidance of extra simulations and the use of neural networks significantly reduce the computational complexity and allow us to price Bermudan options with frequent exercise opportunities in high dimensions, as illustrated by the provided numerical experiments. As a by-product, these methods also derive a hedging strategy for the option, which can serve as a control variate for variance reduction.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.12439&r=cmp
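    For context, here is a compact classical Longstaff-Schwartz lower bound for a Bermudan put in plain Python; the paper's contribution is to replace this kind of regression with neural networks and to produce a matching upper bound and hedging strategy without nested Monte Carlo (all parameters below are illustrative):
        import numpy as np

        rng = np.random.default_rng(2)
        S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 10, 50_000
        dt = T / steps
        disc = np.exp(-r * dt)

        # Simulate geometric Brownian motion paths at the exercise dates
        Z = rng.normal(size=(n, steps))
        S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * Z, axis=1))

        cash = np.maximum(K - S[:, -1], 0.0)          # value if held to maturity
        for t in range(steps - 2, -1, -1):            # backward induction
            cash *= disc
            itm = K - S[:, t] > 0                     # regress in-the-money paths only
            coef = np.polyfit(S[itm, t], cash[itm], 3)
            cont = np.polyval(coef, S[itm, t])        # estimated continuation value
            exercise = (K - S[itm, t]) > cont
            cash[itm] = np.where(exercise, K - S[itm, t], cash[itm])
        print("lower-bound price:", round(disc * cash.mean(), 3))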
  5. By: Martin Vesely
    Abstract: Portfolio optimization is an inseparable part of strategic asset allocation at the Czech National Bank. Quantum computing is a new technology offering algorithms for that problem. The capabilities and limitations of quantum computers with regard to portfolio optimization should therefore be investigated. In this paper, we focus on applications of quantum algorithms to dynamic portfolio optimization based on the Markowitz model. In particular, we compare algorithms for universal gate-based quantum computers (the QAOA, the VQE and Grover adaptive search), single-purpose quantum annealers, the classical exact branch and bound solver and classical heuristic algorithms (simulated annealing and genetic optimization). To run the quantum algorithms we use the IBM Quantum™ gate-based quantum computer. We also employ the quantum annealer offered by D-Wave. We demonstrate portfolio optimization on finding the optimal currency composition of the CNB's FX reserves. A secondary goal of the paper is to provide staff of central banks and other financial market regulators with literature on quantum optimization algorithms, because financial firms are active in finding possible applications of quantum computing.
    Keywords: Foreign exchange reserves, portfolio optimization, quadratic unconstrained binary optimization, quantum computing
    JEL: C61 C63 G11
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:cnb:wpaper:2023/1&r=cmp
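    A hedged sketch of how a Markowitz-style selection problem can be cast as a QUBO, the form that quantum annealers and the gate-based algorithms mentioned above accept; the brute-force minimization stands in for the quantum solver, and all data are toy values:
        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        N, K, q, penalty = 6, 3, 0.5, 10.0            # pick K of N assets; q = risk aversion
        mu = rng.uniform(0.01, 0.05, N)               # toy expected returns
        A = rng.normal(size=(N, N))
        Sigma = A @ A.T / N                           # toy covariance matrix

        # QUBO: minimize x'Qx over x in {0,1}^N; the budget constraint (sum x = K)
        # enters as the quadratic penalty (sum_i x_i - K)^2, constant K^2 dropped
        Q = q * Sigma - np.diag(mu) + penalty * (np.ones((N, N)) - 2 * K * np.eye(N))

        best = min(itertools.product([0, 1], repeat=N),
                   key=lambda x: np.array(x) @ Q @ np.array(x))
        print("selected assets:", [i for i, b in enumerate(best) if b])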
  6. By: Philippe Goulet Coulombe (University of Pennsylvania)
    Abstract: I develop Macroeconomic Random Forest (MRF), an algorithm adapting the canonical Machine Learning (ML) tool to flexibly model evolving parameters in a linear macro equation. Its main output, Generalized Time-Varying Parameters (GTVPs), is a versatile device nesting many popular nonlinearities (threshold/switching, smooth transition, structural breaks/change) and allowing for sophisticated new ones. The approach delivers clear forecasting gains over numerous alternatives, predicts the 2008 drastic rise in unemployment, and performs well for inflation. Unlike most ML-based methods, MRF is directly interpretable — via its GTVPs. For instance, the successful unemployment forecast is due to the influence of forward-looking variables (e.g., term spreads, housing starts) nearly doubling before every recession. Interestingly, the Phillips curve has indeed flattened, and its might is highly cyclical.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:bbh:wpaper:21-05&r=cmp
  7. By: Lorena Torres Lahoz (DTU Management, Technical University of Denmark); Francisco Camara Pereira (DTU Management, Technical University of Denmark); Georges Sfeir (DTU Management, Technical University of Denmark); Ioanna Arkoudi (DTU Management, Technical University of Denmark); Mayara Moraes Monteiro (DTU Management, Technical University of Denmark); Carlos Lima Azevedo (DTU Management, Technical University of Denmark)
    Abstract: Latent Class Choice Models (LCCM) are extensions of discrete choice models (DCMs) that capture unobserved heterogeneity in the choice process by segmenting the population based on the assumption of preference similarities. We present a method of efficiently incorporating attitudinal indicators in the specification of LCCMs by introducing Artificial Neural Networks (ANNs) to formulate the latent variable constructs. This formulation goes beyond structural equations in its capability to explore the relationship between the attitudinal indicators and the choice, given the flexibility and power of Machine Learning (ML) in capturing unobserved and complex behavioural features such as attitudes and beliefs, while still maintaining consistency with the theoretical assumptions of the Generalized Random Utility model and the interpretability of the estimated parameters. We test our proposed framework by estimating a Car-Sharing (CS) service subscription choice with stated preference data from Copenhagen, Denmark. The results show that our proposed approach provides a complete and realistic segmentation, which helps design better policies.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.09871&r=cmp
  8. By: Arun Kumar Polala; Bernhard Hientzsch
    Abstract: Differential machine learning (DML) is a recently proposed technique that uses samplewise state derivatives to regularize least square fits to learn conditional expectations of functionals of stochastic processes as functions of state variables. Exploiting the derivative information leads to fewer samples than a vanilla ML approach for the same level of precision. This paper extends the methodology to parametric problems where the processes and functionals also depend on model and contract parameters, respectively. In addition, we propose adaptive parameter sampling to improve relative accuracy when the functionals have different magnitudes for different parameter sets. For calibration, we construct pricing surrogates for calibration instruments and optimize over them globally. We discuss strategies for robust calibration. We demonstrate the usefulness of our methodology on one-factor Cheyette models with benchmark rate volatility specification with an extra stochastic volatility factor on (two-curve) caplet prices at different strikes and maturities, first for parametric pricing, and then by calibrating to a given caplet volatility surface. To allow convenient and efficient simulation of processes and functionals and in particular the corresponding computation of samplewise derivatives, we propose to specify the processes and functionals in a low-code way close to mathematical notation which is then used to generate efficient computation of the functionals and derivatives in TensorFlow.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.06682&r=cmp
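    A minimal numpy illustration of the differential-ML idea on a one-step toy problem: samplewise pathwise derivatives enter a stacked least-squares fit alongside payoffs (the polynomial basis and weights are our illustrative choices, not the paper's TensorFlow setup):
        import numpy as np

        rng = np.random.default_rng(4)
        m, sigma, K = 4000, 0.2, 1.0
        S0 = rng.uniform(0.5, 1.5, m)                     # sampled initial states
        ST = S0 * np.exp(-0.5 * sigma**2 + sigma * rng.normal(size=m))
        payoff = np.maximum(ST - K, 0.0)                  # call payoff on each path
        delta = (ST > K) * ST / S0                        # samplewise dPayoff/dS0

        deg, lam = 5, 1.0                                 # basis degree, derivative weight
        Phi = np.vander(S0, deg + 1)                      # columns S0^deg, ..., S0^0
        dPhi = Phi * np.arange(deg, -1, -1) / S0[:, None] # derivative of each column
        # Stack the value and derivative regressions into one least-squares problem
        A = np.vstack([Phi, np.sqrt(lam) * dPhi])
        b = np.concatenate([payoff, np.sqrt(lam) * delta])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("fitted price at S0 = 1:", round(np.polyval(w, 1.0), 4))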
  9. By: Deniz Preil; Michael Krapp
    Abstract: This paper proposes a new algorithm, referred to as GMAB, that combines concepts from the reinforcement learning domain of multi-armed bandits and random search strategies from the domain of genetic algorithms to solve discrete stochastic optimization problems via simulation. In particular, the focus is on noisy large-scale problems, which often involve a multitude of dimensions as well as multiple local optima. Our aim is to combine the property of multi-armed bandits to cope with volatile simulation observations with the ability of genetic algorithms to handle high-dimensional solution spaces accompanied by an enormous number of feasible solutions. For this purpose, a multi-armed bandit framework serves as a foundation, where each observed simulation is incorporated into the memory of GMAB. Based on this memory, genetic operators guide the search, as they provide powerful tools for exploration as well as exploitation. The empirical results demonstrate that GMAB achieves superior performance compared to benchmark algorithms from the literature in a large variety of test problems. In all experiments, GMAB required considerably fewer simulations to achieve similar or (far) better solutions than those generated by existing methods. At the same time, GMAB's overhead with regard to the required runtime is extremely small due to the suggested tree-based implementation of its memory. Furthermore, we prove its convergence to the set of global optima as the simulation effort goes to infinity.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.07695&r=cmp
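    A toy gesture at the GMAB idea (not the authors' algorithm or its tree-based memory): a bandit-style memory of noisy evaluations guides genetic crossover and mutation on a discrete solution space:
        import numpy as np

        rng = np.random.default_rng(5)
        DIM = 10

        def noisy_obj(x):                        # objective observed only with noise
            return -np.sum((x - 3) ** 2) + rng.normal(scale=5.0)

        memory = {}                              # solution -> (n evals, running mean)
        def observe(x):
            key = tuple(int(v) for v in x)
            n, mean = memory.get(key, (0, 0.0))
            memory[key] = (n + 1, mean + (noisy_obj(x) - mean) / (n + 1))

        for _ in range(20):                      # initial random population
            observe(rng.integers(0, 8, DIM))
        for _ in range(500):
            elite = sorted(memory, key=lambda k: memory[k][1], reverse=True)[:5]
            i, j = rng.choice(len(elite), 2, replace=False)
            a, b = np.array(elite[i]), np.array(elite[j])
            child = np.where(rng.random(DIM) < 0.5, a, b)   # uniform crossover
            flip = rng.random(DIM) < 0.1                    # mutation for exploration
            child[flip] = rng.integers(0, 8, flip.sum())
            observe(child)

        best = max(memory, key=lambda k: memory[k][1])
        print("best remembered solution:", best)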
  10. By: Francis X. Diebold (University of Pennsylvania); Maximilian Göbel (University of Lisbon); Philippe Goulet Coulombe (University of Quebec in Montreal)
    Abstract: We use "glide charts" (plots of sequences of root mean squared forecast errors as the target date is approached) to evaluate and compare fixed-target forecasts of Arctic sea ice. We first use them to evaluate the simple feature-engineered linear regression (FELR) forecasts of Diebold and Göbel (2022), and to compare FELR forecasts to naive pure-trend benchmark forecasts. Then we introduce a much more sophisticated feature-engineered machine learning (FEML) model, and we use glide charts to evaluate FEML forecasts and compare them to a FELR benchmark. Our substantive results include the frequent appearance of predictability thresholds, which differ across months, meaning that accuracy initially fails to improve as the target date is approached but then increases progressively once a threshold lead time is crossed. Also, we find that FEML can improve appreciably over FELR when forecasting "turning point" months in the annual cycle at horizons of one to three months ahead.
    Keywords: Seasonal climate forecasting, forecast evaluation and comparison, prediction
    JEL: Q54 C22 C52 C53
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:bbh:wpaper:22-04&r=cmp
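    A glide chart is easy to compute: for each lead time, collect the forecast errors across target dates and take the RMSE. A schematic sketch with synthetic forecasts whose noise shrinks as the target date approaches:
        import numpy as np

        rng = np.random.default_rng(6)
        n_targets, max_lead = 40, 12
        target = rng.normal(size=n_targets)      # e.g. September sea-ice extent by year

        for lead in range(max_lead, 0, -1):
            # Stand-in forecasts: error scale shrinks with the lead time
            fc = target + rng.normal(scale=0.1 * lead, size=n_targets)
            rmse = np.sqrt(np.mean((fc - target) ** 2))
            print(f"lead {lead:2d}: RMSE {rmse:.3f}")   # plot this sequence = glide chart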
  11. By: Philippe Goulet Coulombe (University of Quebec in Montreal)
    Abstract: Many problems plague the estimation of Phillips curves. Among them is the hurdle that the two key components, inflation expectations and the output gap, are both unobserved. Traditional remedies include creating reasonable proxies for the notable absentees or extracting them via some form of assumptions-heavy filtering procedure. I propose an alternative route: a Hemisphere Neural Network (HNN) whose peculiar architecture yields a final layer where components can be interpreted as latent states within a Neural Phillips Curve. There are benefits. First, HNN conducts the supervised estimation of nonlinearities that arise when translating a high-dimensional set of observed regressors into latent states. Second, computations are fast. Third, forecasts are economically interpretable. Fourth, inflation volatility can also be predicted by merely adding a hemisphere to the model. Among other findings, the contribution of real activity to inflation appears severely underestimated in traditional econometric specifications. Also, HNN captures out-of-sample the 2021 upswing in inflation and attributes it first to an abrupt and sizable disanchoring of the expectations component, followed by a wildly positive gap starting from late 2020. HNN's unique gap path comes from dispensing with unemployment and GDP in favor of an amalgam of nonlinearly processed alternative tightness indicators – some of which are skyrocketing as of early 2022.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:bbh:wpaper:22-01&r=cmp
  12. By: Shubhranshu Shekhar; Jetson Leder-Luis; Leman Akoglu
    Abstract: The US spends more than 4 trillion dollars per year on health care, largely conducted by private providers and reimbursed by insurers. A major concern in this system is overbilling, waste and fraud by providers, who face incentives to misreport on their claims in order to receive higher payments. In this work, we develop novel machine learning tools to identify providers that overbill insurers. Using large-scale claims data from Medicare, the US federal health insurance program for elderly adults and the disabled, we identify patterns consistent with fraud or overbilling among inpatient hospitalizations. Our proposed approach for fraud detection is fully unsupervised, not relying on any labeled training data, and is explainable to end users, providing reasoning and interpretable insights into the potentially suspicious behavior of the flagged providers. Data from the Department of Justice on providers facing anti-fraud lawsuits and case studies of suspicious providers validate our approach and findings. We also perform a post-analysis of hospital characteristics that were not used for detection but are associated with high suspiciousness scores. Our method provides an 8-fold lift over random targeting and can be used to guide investigations and auditing of suspicious providers for both public and private health insurance systems.
    JEL: C19 D73 I13 K42 M42
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:30946&r=cmp
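    The paper's detector is a custom unsupervised, explainable method; as a generic stand-in, here is how an off-the-shelf unsupervised anomaly detector would flag overbilling-like providers on invented billing features:
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(7)
        # Toy provider-level features (stand-ins, not the paper's variables):
        # [claims per patient, average billed amount, share of highest-severity codes]
        normal = rng.normal([5, 800, 0.10], [1, 100, 0.03], size=(980, 3))
        overbillers = rng.normal([9, 1500, 0.40], [1, 150, 0.05], size=(20, 3))
        X = np.vstack([normal, overbillers])

        iso = IsolationForest(random_state=0).fit(X)
        suspiciousness = -iso.score_samples(X)            # higher = more anomalous
        flagged = np.argsort(suspiciousness)[-20:]        # top-20 suspicious providers
        print("true overbillers among flagged:", int(np.sum(flagged >= 980)), "of 20")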
  13. By: Alper Deniz Karakas
    Abstract: This paper reevaluates the Taylor Rule, through a linear and a nonlinear method, such that its estimated federal funds rates match those actually implemented by the Federal Reserve. In the linear method, the paper uses an OLS regression model to find more accurate coefficients within the same Taylor Rule equation, in which the dependent variable is the federal funds rate and the independent variables are the inflation rate, the inflation gap, and the output gap. The intercept in the OLS regression model captures the constant equilibrium target real interest rate, set at 2 percent. The linear OLS method suggests that the Taylor Rule overestimates the coefficients on the output gap and the standalone inflation rate; the coefficients this paper suggests are shown in its equation (2). In the nonlinear method, the paper uses a machine learning system in which the two inputs are the inflation rate and the output gap and the output is the federal funds rate. This system uses gradient-descent error minimization to create a model that minimizes the error between the estimated and the actually implemented federal funds rate. Since the machine learning system allows the model to capture the more realistic nonlinear relationship between the variables, it significantly increases estimation accuracy. The actual and estimated federal funds rates are almost identical except during three recessions caused by bubble bursts, which the paper addresses in the concluding remarks. Overall, the first method provides theoretical insight while the second suggests a model with improved applicability.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.08323&r=cmp
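    The linear leg reduces to a familiar regression. A sketch on hypothetical series (the paper uses actual US data, and its specification also includes the inflation gap):
        import numpy as np

        rng = np.random.default_rng(8)
        T = 200
        inflation = rng.normal(2.5, 1.0, T)       # hypothetical inflation series (%)
        output_gap = rng.normal(0.0, 2.0, T)      # hypothetical output gap (%)
        # Toy data-generating process in the spirit of the classic rule
        ffr = 2.0 + 1.5 * inflation + 0.5 * output_gap + rng.normal(0.0, 0.5, T)

        X = np.column_stack([np.ones(T), inflation, output_gap])
        beta, *_ = np.linalg.lstsq(X, ffr, rcond=None)
        print("intercept, inflation coef, output-gap coef:", beta.round(2))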
  14. By: James Bell
    Abstract: Is the rapid adoption of Artificial Intelligence a sign that creative destruction (a capitalist innovation process first theorised in 1942) is occurring? Although the theory suggests that it is only visible over time and in aggregate, this paper devises three hypotheses to test its presence at a macro level, along with research methods to produce the required data. The paper tests the theory using news archives, questionnaires, and interviews with industry professionals. It considers the risks of adopting Artificial Intelligence, its current performance in the market, and its general applicability to the anti-money laundering (AML) role. The results suggest that creative destruction is occurring in the AML industry even though the activities of the regulators act as natural blockers to innovation. This is a pressurised situation in which current-generation Artificial Intelligence may do more harm than good. For managers, this paper's results suggest that safely pursuing AI in AML requires realistic expectations of Artificial Intelligence's benefits combined with a framework for AI ethics.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.13823&r=cmp
  15. By: Ganesh Iyer; T. Tony Ke
    Abstract: This paper studies how market competition influences the algorithmic design choices of firms in the context of targeting. Firms face the general trade-off between bias and variance when choosing the design of a supervised learning algorithm in terms of model complexity or the number of predictors to accommodate. Each firm then appoints a data analyst that uses the chosen algorithm to estimate demand for multiple consumer segments, based on which, it devises a targeting policy to maximize estimated profit. We show that competition may induce firms to strategically choose simpler algorithms which involve more bias. This implies that more complex/flexible algorithms tend to have higher value for firms with greater monopoly power.
    JEL: D43 L13 M37
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31002&r=cmp
  16. By: Daas, Piet (Eindhoven University of Technology); Hassink, Wolter (Utrecht University); Klijs, Bart (Statistics Netherlands)
    Abstract: A statistical classification model was developed to identify online platform organizations based on the texts on their website. The model was subsequently used to identify all (potential) platform organizations with a website included in the Dutch Business Register. The empirical outcomes of the statistical model were plausible in terms of the words and the bimodal distribution of fitted probabilities, but the results indicated an overestimation of the number of platform organizations. Next, the external validity of the outcomes was investigated through a survey held under the organizations that were identified as a platform organization by the statistical classification model. The response by the organizations to the survey confirmed a substantial number of type-I errors. Furthermore, it revealed a positive association between the fitted probability of the text-based classification model and the organization's response to the survey question on being an online platform organization. The survey results indicated that the text-based classification model can be used to obtain a subpopulation of potential platform organizations from the entire population of businesses with a website.
    Keywords: online platform organizations, external validation, type-I error, machine learning, web pages
    JEL: C81 C83 D20 D83 L20
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp15941&r=cmp
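    A minimal text-classification sketch in the spirit of the paper's model, with a tiny invented corpus standing in for the scraped Dutch business websites:
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["book rides with local drivers through our marketplace app",
                 "order from independent sellers on our platform, with ratings and reviews",
                 "family bakery since 1950, visit our store in the city centre",
                 "plumbing and heating services, call us for a quote"]
        is_platform = [1, 1, 0, 0]

        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(texts, is_platform)
        # The fitted probability plays the role of the paper's classification score
        print(clf.predict_proba(["connect freelancers with clients on our platform"])[:, 1])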
  17. By: Mckay Jensen; Nicholas Emery-Xu; Robert Trager
    Abstract: Using a model in which agents compete to develop a potentially dangerous new technology (AI), we study how changes in the pricing of factors of production (computational resources) affect agents' strategies, particularly their spending on safety meant to reduce the danger from the new technology. In the model, agents split spending between safety and performance, with safety determining the probability of a "disaster" outcome, and performance determining the agents' competitiveness relative to their peers. For given parameterizations, we determine the theoretically optimal spending strategies by numerically computing Nash equilibria. Using this approach we find that (1) in symmetric scenarios, compute price increases are safety-promoting if and only if the production of performance scales faster than the production of safety; (2) the probability of a disaster can be made arbitrarily low by providing a sufficiently large subsidy to a single agent; (3) when agents differ in productivity, providing a subsidy to the more productive agent is often better for aggregate safety than providing the same subsidy to other agent(s) (with some qualifications, which we discuss); (4) when one agent is much more safety-conscious, in the sense of believing that safety is more difficult to achieve, relative to its competitors, subsidizing that agent is typically better for aggregate safety than subsidizing its competitors; however, subsidizing an agent that is only somewhat more safety-conscious often decreases safety. Thus, although subsidizing a much more safety-conscious, or productive, agent often improves safety as intuition suggests, subsidizing a somewhat more safety-conscious or productive agent can often be harmful.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.11436&r=cmp
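    A toy grid search for pure-strategy Nash equilibria in a two-agent safety/performance game (our own simple parameterization of the disaster and contest probabilities, not the paper's model):
        import numpy as np
        from itertools import product

        B = (1.0, 1.0)                                  # agents' compute budgets
        grid = np.linspace(0.05, 0.95, 19)              # share of budget spent on safety

        def payoff(i, f):
            s = [f[j] * B[j] for j in (0, 1)]           # safety spending
            p = [(1 - f[j]) * B[j] for j in (0, 1)]     # performance spending
            safe = np.prod([sj / (1 + sj) for sj in s]) # P(no disaster): all must be safe
            win = p[i] / (p[0] + p[1])                  # contest success probability
            return safe * win

        # Check every grid profile for the mutual best-response property
        eqs = [f for f in product(grid, repeat=2)
               if all(payoff(i, f) >= max(payoff(i, (g, f[1]) if i == 0 else (f[0], g))
                                          for g in grid)
                      for i in (0, 1))]
        print("pure-strategy equilibria (safety shares):", eqs)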
  18. By: Tom, Daniel
    Abstract: We systematically explore the universe of all models using AI search methods. We automate much of the data preparation and the testing of each model built along the way. The result is a method and system that generates superior, production-ready logistic regression models, beating an industry-standard consumer credit risk score as well as GBM and NN machine learning models. We also incorporate into our system a method to eliminate disparate impact, as used by the FRB and the FTC.
    Keywords: Modeling, Regression, Logistic, AIC, IRLS, AI, ML, NN, GBM, KS, IV, GC, Wald, X2, PSI, VIF, correlation coefficient, condition index, proportion-of-variation, reject inference, FRB, FTC, CRA, disparate impact, BISG, SBC, ARM, Intel, GPU, GPGPU, BLAS, LAPACK, transformation, normalization
    JEL: C61
    Date: 2021–12–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:116592&r=cmp
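    A hedged sketch of beam search over logistic regression feature subsets scored by AIC (synthetic data; the paper's system automates far more of the pipeline):
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import log_loss

        X, y = make_classification(n_samples=500, n_features=12, n_informative=4,
                                   random_state=0)

        def aic(cols):                    # AIC = 2k - 2 ln L (intercept ignored)
            fit = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            return 2 * len(cols) + 2 * len(y) * log_loss(y, fit.predict_proba(X[:, cols]))

        WIDTH = 3                         # beam width
        beam = sorted(((j,) for j in range(X.shape[1])),
                      key=lambda c: aic(list(c)))[:WIDTH]
        for _ in range(4):                # grow each kept model by one feature
            cand = {tuple(sorted(c + (j,))) for c in beam
                    for j in range(X.shape[1]) if j not in c}
            beam = sorted(cand, key=lambda c: aic(list(c)))[:WIDTH]
        print("best subset:", beam[0], "AIC:", round(aic(list(beam[0])), 1))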
  19. By: Laura Nurski
    Abstract: The goal is to identify pitfalls in the process of technology adoption and to provide lessons for both policy and business.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:bre:wpaper:node_8829&r=cmp
  20. By: Philipp Adämmer; Jan Prüser; Rainer Schüssler
    Abstract: We examine the incremental value of news-based data relative to the FRED-MD economic indicators for quantile predictions (now- and forecasts) of employment, output, inflation and consumer sentiment. Our results suggest that news data contain valuable information not captured by economic indicators, particularly for left-tail forecasts. Methods that capture quantile-specific non-linearities produce superior forecasts relative to methods that feature linear predictive relationships. However, adding news-based data substantially increases the performance of quantile-specific linear models, especially in the left tail. Variable importance analyses reveal that left tail predictions are determined by both economic and textual indicators, with the latter having the most pronounced impact on consumer sentiment.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.13999&r=cmp
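    Quantile-specific forecasts of the kind compared here can be produced with off-the-shelf tools; a sketch with gradient boosting under the pinball (quantile) loss on synthetic data with a left-tail driver (all variables are stand-ins, not the paper's FRED-MD or news series):
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(9)
        T = 400
        X = rng.normal(size=(T, 5))       # stand-in indicators; column 4 plays "news tone"
        y = X[:, 0] - 2.0 * np.maximum(-X[:, 4], 0) + rng.normal(0, 0.5, T)

        Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]
        for q in (0.05, 0.5):             # left tail and median
            m = GradientBoostingRegressor(loss="quantile", alpha=q).fit(Xtr, ytr)
            u = yte - m.predict(Xte)
            pinball = np.mean(np.maximum(q * u, (q - 1) * u))
            print(f"quantile {q}: pinball loss {pinball:.3f}")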
  21. By: Vasilis Syrgkanis; Ruohan Zhan
    Abstract: We consider estimation and inference with data collected from episodic reinforcement learning (RL) algorithms; i.e. adaptive experimentation algorithms that at each period (aka episode) interact multiple times in a sequential manner with a single treated unit. Our goal is to be able to evaluate counterfactual adaptive policies after data collection and to estimate structural parameters such as dynamic treatment effects, which can be used for credit assignment (e.g. what was the effect of the first period action on the final outcome). Such parameters of interest can be framed as solutions to moment equations, but not minimizers of a population loss function, leading to Z-estimation approaches in the case of static data. However, such estimators fail to be asymptotically normal in the case of adaptive data collection. We propose a re-weighted Z-estimation approach with carefully designed adaptive weights to stabilize the episode-varying estimation variance, which results from the nonstationary policy that typical episodic RL algorithms invoke. We identify proper weighting schemes to restore the consistency and asymptotic normality of the re-weighted Z-estimators for target parameters, which allows for hypothesis testing and constructing reliable confidence regions for target parameters of interest. Primary applications include dynamic treatment effect estimation and dynamic off-policy evaluation.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2302.08854&r=cmp
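    As a rough formalization of the setup described above (our notation, not the paper's): a plain Z-estimator solves the sample moment equation, and the proposed estimator re-weights it,
        \[
        \frac{1}{T}\sum_{t=1}^{T} m(Z_t;\theta) = 0
        \quad\longrightarrow\quad
        \frac{1}{T}\sum_{t=1}^{T} w_t\, m(Z_t;\theta) = 0,
        \]
    where the adaptive weights \(w_t\) are designed to stabilize the episode-varying estimation variance, restoring consistency, asymptotic normality, and hence valid confidence regions.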

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.