NEP: New Economics Papers on Computational Economics
Issue of 2023‒05‒01
twenty-two papers chosen by
By: | Maximilian Tschuchnig; Petra Tschuchnig; Cornelia Ferner; Michael Gadermayr |
Abstract: | Inflation is a major determinant of allocation decisions, and its forecast is a fundamental aim of governments and central banks. However, forecasting inflation is not a trivial task, as its prediction relies on low-frequency, highly fluctuating data with unclear explanatory variables. While classical models show some ability to predict inflation, reliably beating the random-walk benchmark remains difficult. Recently, (deep) neural networks have shown impressive results in a multitude of applications, increasingly setting the new state of the art. This paper investigates the potential of the transformer deep neural network architecture to forecast different inflation rates. The results are compared to a study on classical time series and machine learning models. We show that our adapted transformer, on average, outperforms the baseline in 6 out of 16 experiments, achieving the best scores for two of the four investigated inflation rates. Our results demonstrate that a transformer-based neural network can outperform classical regression and machine learning models for certain inflation rates and forecasting horizons. [An illustrative code sketch follows this entry.]
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.15364&r=cmp |
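The entry above adapts a transformer encoder to low-frequency inflation series. As a point of reference only, here is a minimal PyTorch sketch of a transformer-encoder regressor for a univariate inflation series; the window length, layer sizes, and forecasting head are illustrative assumptions of the editor, not the authors' architecture, and positional encodings are omitted for brevity.

# Minimal transformer-encoder regressor for inflation forecasting (illustrative only).
import torch
import torch.nn as nn

class InflationTransformer(nn.Module):
    def __init__(self, n_features=1, d_model=32, n_heads=4, n_layers=2, horizon=1):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # embed each time step
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, horizon)             # forecast the next `horizon` steps

    def forward(self, x):                      # x: (batch, window, n_features)
        h = self.encoder(self.input_proj(x))   # NOTE: positional encodings omitted for brevity
        return self.head(h[:, -1, :])          # read the forecast off the last position

# Toy usage: 24-month windows of a single standardized inflation rate, one-step-ahead forecast.
model = InflationTransformer()
window = torch.randn(8, 24, 1)             # fake data, for shape-checking only
print(model(window).shape)                 # -> torch.Size([8, 1])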
By: | El Amine Cherrat; Snehal Raj; Iordanis Kerenidis; Abhishek Shekhar; Ben Wood; Jon Dee; Shouvanik Chakrabarti; Richard Chen; Dylan Herman; Shaohan Hu; Pierre Minssen; Ruslan Shaydulin; Yue Sun; Romina Yalovetzky; Marco Pistoia |
Abstract: | Quantum machine learning has the potential for a transformative impact across industry sectors and in particular in finance. In our work, we look at the problem of hedging, where deep reinforcement learning offers a powerful framework for real markets. We develop quantum reinforcement learning methods based on policy-search and distributional actor-critic algorithms that use quantum neural network architectures with orthogonal and compound layers for the policy and value functions. We prove that the quantum neural networks we use are trainable, and we perform extensive simulations showing that quantum models can reduce the number of trainable parameters while achieving comparable performance, and that the distributional approach obtains better performance than other standard approaches, both classical and quantum. We successfully implement the proposed models on a trapped-ion quantum processor, utilizing circuits with up to 16 qubits, and observe performance that agrees well with noiseless simulation. Our quantum techniques are general and can be applied to other reinforcement learning problems beyond hedging. [An illustrative code sketch follows this entry.]
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.16585&r=cmp |
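The abstract above uses quantum neural networks with orthogonal and compound layers for the policy and value functions. The fragment below is only a classical analogue of one ingredient: a small PyTorch policy network whose linear layers are constrained to (semi-)orthogonal weight matrices via torch.nn.utils.parametrizations.orthogonal. It is not the authors' quantum circuit; all dimensions are placeholders.

# Classical stand-in for an "orthogonal layer" policy network (illustrative only).
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

def orthogonal_linear(n_in, n_out):
    # Keep the weight matrix (semi-)orthogonal throughout training.
    return orthogonal(nn.Linear(n_in, n_out, bias=False))

class OrthogonalPolicy(nn.Module):
    def __init__(self, state_dim=4, hidden=8, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            orthogonal_linear(state_dim, hidden), nn.Tanh(),
            orthogonal_linear(hidden, n_actions),
        )

    def forward(self, state):
        return torch.softmax(self.net(state), dim=-1)   # action probabilities

policy = OrthogonalPolicy()
print(policy(torch.randn(2, 4)))   # two fake hedging states -> two action distributions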
By: | Marlon Azinovic; Jan \v{Z}emli\v{c}ka |
Abstract: | Contemporary deep learning based solution methods used to compute approximate equilibria of high-dimensional dynamic stochastic economic models are often faced with two pain points. The first problem is that the loss function typically encodes a diverse set of equilibrium conditions, such as market clearing and households' or firms' optimality conditions. Hence the training algorithm trades off errors between those -- potentially very different -- equilibrium conditions. This renders the interpretation of the remaining errors challenging. The second problem is that portfolio choice in models with multiple assets is only pinned down for low errors in the corresponding equilibrium conditions. In the beginning of training, this can lead to fluctuating policies for different assets, which hampers the training process. To alleviate these issues, we propose two complementary innovations. First, we introduce Market Clearing Layers, a neural network architecture that automatically enforces all the market clearing conditions and borrowing constraints in the economy. Encoding economic constraints into the neural network architecture reduces the number of terms in the loss function and enhances the interpretability of the remaining equilibrium errors. Furthermore, we present a homotopy algorithm for solving portfolio choice problems with multiple assets, which ameliorates numerical instabilities arising in the context of deep learning. To illustrate our method we solve an overlapping generations model with two permanent risk aversion types, three distinct assets, and aggregate shocks. |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.14802&r=cmp |
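For the market-clearing-layer idea described in the preceding entry, here is a minimal sketch of one way a clearing condition can be hard-coded into an output layer: unnormalized household demands are rescaled so that they sum exactly to aggregate supply. It illustrates the general principle under simple assumptions (a single good, non-negative demands) and is not the authors' architecture.

# Toy "market clearing layer": rescale raw demands so they sum to aggregate supply.
import torch
import torch.nn as nn

class MarketClearingLayer(nn.Module):
    def forward(self, raw_demands, aggregate_supply):
        # raw_demands: (batch, n_households) unconstrained network outputs
        shares = torch.softmax(raw_demands, dim=-1)      # non-negative, sums to one
        return shares * aggregate_supply.unsqueeze(-1)   # market clears by construction

layer = MarketClearingLayer()
raw = torch.randn(2, 5)                  # 5 households, 2 sampled aggregate states
supply = torch.tensor([10.0, 7.5])       # aggregate supply in each state
demands = layer(raw, supply)
print(demands.sum(dim=-1))               # -> tensor([10.0000, 7.5000])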
By: | Murray Z. Frank; Jing Gao; Keer Yang |
Abstract: | There is considerable evidence that machine learning algorithms have better predictive abilities than humans in various financial settings. But the literature has not tested whether these algorithmic predictions are more rational than human predictions. We study predictions of corporate earnings from several algorithms, notably linear regressions and a popular algorithm called Gradient Boosted Regression Trees (GBRT). On average, GBRT outperformed both linear regressions and human stock analysts, but it still overreacted to news and did not satisfy rational expectations as normally defined. By reducing the learning rate, the magnitude of overreaction can be minimized, but this comes at the cost of poorer out-of-sample prediction accuracy. Human stock analysts who have been trained in machine learning methods overreact less than traditionally trained analysts. Additionally, stock analyst predictions reflect information not otherwise available to machine algorithms. [An illustrative code sketch follows this entry.]
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.16158&r=cmp |
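To make the learning-rate trade-off discussed above concrete, here is a small scikit-learn sketch that fits gradient boosted regression trees with a high and a low learning rate on synthetic data; the data and hyperparameters are placeholders chosen by the editor, not the paper's earnings panel.

# Gradient boosted regression trees: a lower learning rate makes more conservative updates.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for lr in (0.10, 0.01):   # shrinking the rate is the lever discussed in the abstract
    gbrt = GradientBoostingRegressor(learning_rate=lr, n_estimators=300, random_state=0)
    gbrt.fit(X_tr, y_tr)
    print(f"learning_rate={lr:.2f}  out-of-sample R^2={gbrt.score(X_te, y_te):.3f}")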
By: | Lutz Sommer (Albstadt-Sigmaringen University, Jakobstr. 1-6, 72458 Albstadt, Germany)
Abstract: | " Objective - Artificial Intelligence (AI) tools are becoming more accessible and more manageable in terms of practical implementation, enabling them to be used in many new areas, including the selection of international managers based on their international experience. The choice of personnel in a global environment is a challenge that has been the subject of heated debate for decades, both in practice and theory. Wrong decisions are cost-intensive and possibly contribute to economic failure. The present study aimed to test machine learning algorithms - as sub-disciplines of Artificial Intelligence (AI) - on a low-coding basis. Methodology/Technique - A fictitious use case with a corresponding data set of 75 managers was generated for this purpose. Its applicability in relation to personnel selection for an international task was tested. In the next step, selected AI algorithms were used to test which of these algorithms led to high prediction accuracy. Finding - The results show that with minimal programming effort, the ML algorithm achieved an accuracy of over 80% when selecting suitable managers for international assignments - based on the international experience of this group of people. The linear discriminant analysis has proven particularly relevant, and both the training and validation data provided values above 80%. In summary, ML algorithms' usefulness and feasibility in personnel selection in an international environment could be confirmed. Novelty - It could be confirmed that for implementing the manager selection, freely available algorithms in Python achieve sufficiently good results with an accuracy of 80%. Type of Paper - Empirical" |
Keywords: | Artificial Intelligence; International Experience; Manager; Machine Learning; Decision Making; Human Resources Management. |
JEL: | M16 C89 |
Date: | 2023–03–31 |
URL: | http://d.repec.org/n?u=RePEc:gtr:gatrjs:gjbssr632&r=cmp |
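In the spirit of the low-coding approach described above, a minimal scikit-learn sketch: linear discriminant analysis on a small tabular dataset with a train/validation split and accuracy on both. The synthetic features below merely stand in for the fictitious 75-manager dataset, whose variables are not reproduced in the abstract.

# Low-code manager classification with linear discriminant analysis (illustrative data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 75                                      # same size as the fictitious use case
X = rng.normal(size=(n, 4))                 # stand-ins for experience-related features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # suitable: 1/0

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("training accuracy:  ", accuracy_score(y_tr, lda.predict(X_tr)))
print("validation accuracy:", accuracy_score(y_va, lda.predict(X_va)))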
By: | Zhen Zeng; Rachneet Kaur; Suchetha Siddagangappa; Saba Rahimi; Tucker Balch; Manuela Veloso |
Abstract: | Time series forecasting is important for decision-making across various domains. In particular, financial time series such as stock prices can be hard to predict, as it is difficult to model the short-term and long-term temporal dependencies between data points. Convolutional Neural Networks (CNNs) are good at capturing local patterns for modeling short-term dependencies. However, CNNs cannot learn long-term dependencies due to their limited receptive field. Transformers, on the other hand, are capable of learning global context and long-term dependencies. In this paper, we propose to harness the power of CNNs and Transformers to model both short-term and long-term dependencies within a time series and to forecast whether the price will go up, down, or remain the same (flat). In our experiments, we demonstrate the success of the proposed method, in comparison to commonly adopted statistical and deep learning methods, in forecasting intraday stock price changes of S&P 500 constituents. [An illustrative code sketch follows this entry.]
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.04912&r=cmp |
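A compact PyTorch sketch of the general CNN-plus-Transformer idea in the entry above: a 1-D convolution extracts local patterns, a transformer encoder models longer-range context, and a linear head classifies the next move as up, down, or flat. The layer sizes and the three-class head are the editor's assumptions for illustration, not the authors' exact model.

# CNN front-end + Transformer encoder for up/down/flat classification (illustrative).
import torch
import torch.nn as nn

class CNNTransformerClassifier(nn.Module):
    def __init__(self, n_features=5, d_model=32, n_heads=4, n_layers=2, n_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(n_features, d_model, kernel_size=3, padding=1)   # local patterns
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)  # global context
        self.head = nn.Linear(d_model, n_classes)     # up / down / flat logits

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # -> (batch, seq_len, d_model)
        h = self.encoder(h)
        return self.head(h[:, -1, :])      # classify from the most recent time step

model = CNNTransformerClassifier()
print(model(torch.randn(4, 60, 5)).shape)  # 60 intraday bars, 5 features -> torch.Size([4, 3])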
By: | Zhizhong Tan; Min Hu; Yixuan Wang; Lu Wei; Bin Liu |
Abstract: | Predicting trends in futures prices with traditional econometric models is challenging, as one needs to consider not only futures' historical data but also the correlations among different futures. Spatial-temporal graph neural networks (STGNNs) have great advantages in dealing with such spatial-temporal data. However, we cannot directly apply STGNNs to high-frequency futures data, because futures investors have to consider both long-term and short-term characteristics when making decisions. To capture both long-term and short-term features, we exploit more label information by designing four heterogeneous tasks: price regression, price moving-average regression, price-gap regression (within a short interval), and change-point detection, which together cover both long-term and short-term horizons. To make full use of these labels, we train our model in a continual manner. Traditional continual GNNs define the gradient of prices as the parameter importance used to overcome catastrophic forgetting (CF). Unfortunately, the losses of the four heterogeneous tasks lie in different spaces, so it is improper to calculate parameter importance from their losses. We instead propose to calculate parameter importance from the mutual information between the original observations and the extracted features. The empirical results based on 49 commodity futures demonstrate that our model has higher prediction performance in capturing long-term and short-term dynamic changes. [An illustrative code sketch follows this entry.]
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.16532&r=cmp |
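To make the four heterogeneous tasks above concrete, a minimal PyTorch sketch of a shared encoder with four task heads (price regression, moving-average regression, short-interval gap regression, change-point detection) and a weighted sum of their losses. The continual-learning machinery and the mutual-information-based parameter importance are not reproduced; the GRU encoder, feature count, and loss weights are placeholders.

# Shared encoder with four heterogeneous task heads (illustrative sketch only).
import torch
import torch.nn as nn

class FourTaskModel(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)   # placeholder encoder
        self.price_head = nn.Linear(hidden, 1)   # price regression
        self.ma_head = nn.Linear(hidden, 1)      # price moving-average regression
        self.gap_head = nn.Linear(hidden, 1)     # short-interval price-gap regression
        self.cp_head = nn.Linear(hidden, 1)      # change-point detection (logit)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        _, h = self.encoder(x)             # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.price_head(h), self.ma_head(h), self.gap_head(h), self.cp_head(h)

def total_loss(preds, targets, weights=(1.0, 1.0, 1.0, 1.0)):
    # targets: (y_price, y_ma, y_gap, y_cp); y_cp holds float 0/1 change-point labels.
    price, ma, gap, cp_logit = preds
    y_price, y_ma, y_gap, y_cp = targets
    mse = nn.functional.mse_loss
    bce = nn.functional.binary_cross_entropy_with_logits
    losses = (mse(price, y_price), mse(ma, y_ma), mse(gap, y_gap), bce(cp_logit, y_cp))
    return sum(w * l for w, l in zip(weights, losses))

model = FourTaskModel()
preds = model(torch.randn(4, 50, 8))       # 4 sample windows of 50 ticks, 8 features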
By: | Christian Mitsch |
Abstract: | This paper focuses on a decentralized profit-center firm that uses negotiated transfer pricing as an instrument to coordinate the production process. Moreover, the firm's headquarters gives its divisions full authority over operating decisions, and it is assumed that each division can additionally make an upfront investment decision that enhances the value of internal trade. Building on earlier work, the paper expands the number of divisions by one downstream division and relaxes basic assumptions, such as the assumption of common knowledge of rationality. Based on an agent-based simulation, it examines whether cognitively bounded individuals, modeled by fuzzy Q-learning, achieve the same results as fully rational utility maximizers. In addition, the paper investigates different constellations of bargaining power to see whether a deviation from the recommended optimal bargaining power leads to higher managerial performance. The simulation results show that fuzzy Q-learning agents perform at least as well as, or better than, fully rational utility maximizers. The study also indicates that, in scenarios with different marginal costs across divisions, a deviation from the recommended optimal distribution of bargaining power between divisions can lead to higher investment levels and, thus, to an increase in the headquarters' profit. [An illustrative code sketch follows this entry.]
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.14515&r=cmp |
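The simulation above uses fuzzy Q-learning agents. As a reference point only, here is the standard tabular Q-learning update that the fuzzy variant generalizes (fuzzy rule bases replace the discrete state lookup); the state/action sizes and learning parameters are arbitrary placeholders, not the paper's calibration.

# Standard tabular Q-learning with epsilon-greedy exploration (illustrative).
import numpy as np

n_states, n_actions = 10, 4        # e.g., discretized transfer prices / investment levels
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def choose_action(state):
    if rng.random() < epsilon:             # explore
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))        # exploit

def update(state, action, reward, next_state):
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# One illustrative interaction with a made-up environment response:
s = 3
a = choose_action(s)
update(s, a, reward=1.0, next_state=5)
print(Q[3])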
By: | Cavaillé, Charlotte; Van Der Straeten, Karine; Chen, Daniel L. |
Abstract: | Survey design often approximates a prediction problem: the goal is to select instruments that best predict the value of an unobserved construct or a future outcome. We demonstrate how advances in machine learning techniques can help choose among competing instruments. First, we randomly assign respondents to one of four survey instruments to predict a behavior defined by our validation strategy. Next, we assess the optimal instrument in two stages. A machine learning model first predicts the behavior using individual covariates and survey responses. Then, using doubly robust welfare maximization and prediction error from the first stage, we learn the optimal survey method and examine how it varies across education levels. |
Date: | 2023–04–04 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:128022&r=cmp |
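A sketch of the doubly robust scoring step behind the second stage described above: with random assignment to one of four instruments (known propensities), the score for each instrument combines an outcome-model prediction with an inverse-propensity-weighted residual, and the instrument with the highest average score is selected. The variable names, outcome model, and lack of cross-fitting are simplifications by the editor, not the authors' estimator as implemented.

# Doubly robust scores for choosing among four randomly assigned survey instruments.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def doubly_robust_scores(X, assigned, y, n_arms=4, propensity=0.25):
    """Return an (n, n_arms) matrix of doubly robust scores, one column per instrument."""
    scores = np.zeros((len(y), n_arms))
    for a in range(n_arms):
        mask = assigned == a
        mu = RandomForestRegressor(random_state=0).fit(X[mask], y[mask])   # outcome model for arm a
        mu_hat = mu.predict(X)
        correction = (mask / propensity) * (y - mu_hat)                    # IPW residual, zero off-arm
        scores[:, a] = mu_hat + correction
    return scores

# Toy data: covariates, random instrument assignment, behavioral outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
assigned = rng.integers(0, 4, size=400)
y = X[:, 0] + 0.5 * (assigned == 2) + rng.normal(size=400)   # instrument 2 predicts best here
scores = doubly_robust_scores(X, assigned, y)
print("estimated best instrument:", scores.mean(axis=0).argmax())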
By: | Osorio Rodarte, Israel; Maliszewska, Maryla; Pereira, Maria Filipa Seara |
Abstract: | This paper assesses the economic and distributional impacts of a set of ex-ante trade policy simulations for Rwanda, based on a global top-down macro-micro simulation framework. The policies under analysis include Rwanda’s integration into the African Continental Free Trade Area, greater participation of Rwanda in GVCs vis-à-vis reshoring of global production, and the effects of temporary trade restrictions with main trading partners.
Keywords: | Labor and Human Capital, International Relations/Trade |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:ags:pugtwp:333483&r=cmp |
By: | Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia); Valentyn Panchenko (School of Economics, University of New South Wales, Australia)
Abstract: | We present a cohesive generalized framework for the aggregation of Nerlovian profit indicators and of directional distance functions, which are frequently used in productivity and efficiency analysis in operations research and econometrics (e.g., via data envelopment analysis or stochastic frontier analysis). Our theoretical framework allows for greater flexibility than previous approaches and embraces many other approaches as special cases. In the proposed aggregation scheme, the aggregation weights are mathematically derived from assumptions made about the optimization behavior and about the chosen directions of measurement. We also discuss various interesting special cases of popular directions, including the case of Farrell-type efficiency. [An illustrative code sketch follows this entry.]
Keywords: | Efficiency; Productivity; Aggregation; Data Envelopment Analysis |
JEL: | D24 O4 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:184&r=cmp |
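For readers unfamiliar with the building block being aggregated, a short sketch of the directional distance function computed by a standard DEA linear program under constant returns to scale and a chosen direction g = (g_x, g_y). It illustrates the underlying measure only, not the paper's aggregation scheme or weights; the tiny data set is made up.

# Directional distance function for one firm via a CRS DEA linear program.
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, k, gx, gy):
    """X: (n, m) inputs, Y: (n, s) outputs, k: evaluated firm, gx/gy: direction vectors.
    Solves  max beta  s.t.  Y'lam >= Y[k] + beta*gy,  X'lam <= X[k] - beta*gx,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # maximize beta == minimize -beta
    A_out = np.hstack([-Y.T, np.asarray(gy, float).reshape(s, 1)])   # -Y'lam + beta*gy <= -Y[k]
    A_in = np.hstack([X.T, np.asarray(gx, float).reshape(m, 1)])     #  X'lam + beta*gx <=  X[k]
    res = linprog(c, A_ub=np.vstack([A_out, A_in]),
                  b_ub=np.concatenate([-Y[k], X[k]]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[-1]                              # beta = directional inefficiency

# Tiny example: 4 firms, 1 input, 1 output, direction (gx, gy) = (1, 1).
X = np.array([[2.0], [4.0], [3.0], [5.0]])
Y = np.array([[2.0], [3.0], [4.0], [4.0]])
print(directional_distance(X, Y, k=1, gx=[1.0], gy=[1.0]))   # -> 1.0 for this data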
By: | Chendi Ni; Yuying Li; Peter A. Forsyth |
Abstract: | We study the optimal multi-period asset allocation problem with leverage constraints in a persistent, high-inflation environment. Based on filtered high-inflation regimes, we discover that a portfolio containing an equal-weighted stock index partially stochastically dominates a portfolio containing a capitalization-weighted stock index. Assuming the asset prices follow the jump diffusion model during high inflation periods, we establish a closed-form solution for the optimal strategy that outperforms a passive strategy under the cumulative quadratic tracking difference (CD) objective. The closed-form solution provides insights but requires unrealistic constraints. To obtain strategies under more practical considerations, we consider a constrained optimal control problem with bounded leverage. To solve this optimal control problem, we propose a novel leverage-feasible neural network (LFNN) model that approximates the optimal control directly. The LFNN model avoids high-dimensional evaluation of the conditional expectation (common in dynamic programming (DP) approaches). We establish mathematically that the LFNN approximation can yield a solution that is arbitrarily close to the solution of the original optimal control problem with bounded leverage. Numerical experiments show that the LFNN model achieves comparable performance to the closed-form solution on simulated data. We apply the LFNN approach to a four-asset investment scenario with bootstrap resampled asset returns. The LFNN strategy consistently outperforms the passive benchmark strategy by about 200 bps (median annualized return), with a greater than 90% probability of outperforming the benchmark at the terminal date. These results suggest that during persistent inflation regimes, investors should favor short-term bonds over long-term bonds, and the equal-weighted stock index over the cap-weighted stock index. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.05297&r=cmp |
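The LFNN model above maps network outputs directly into leverage-feasible allocations. The authors' construction is not reproduced here; the snippet below shows one generic way such a feasibility mapping could be built (long-only weights whose total exposure is bounded by a maximum leverage), purely to illustrate the idea of baking the constraint into the output layer.

# One generic "leverage-feasible" output mapping (illustrative; not the paper's LFNN).
import torch
import torch.nn as nn

class LeverageFeasibleHead(nn.Module):
    def __init__(self, hidden_dim, n_assets, max_leverage=1.5):
        super().__init__()
        self.mix = nn.Linear(hidden_dim, n_assets)   # composition across assets
        self.scale = nn.Linear(hidden_dim, 1)        # total gross exposure
        self.max_leverage = max_leverage

    def forward(self, h):
        weights = torch.softmax(self.mix(h), dim=-1)                  # non-negative, sums to one
        exposure = self.max_leverage * torch.sigmoid(self.scale(h))   # in (0, max_leverage)
        return exposure * weights                                     # constraint holds by construction

head = LeverageFeasibleHead(hidden_dim=16, n_assets=4)
w = head(torch.randn(3, 16))
print(w.sum(dim=-1))   # each row's gross exposure stays below 1.5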
By: | Laura Felber; Simon Beyeler |
Abstract: | In this paper, we assess the value of high-frequency payments data for nowcasting economic activity. Focusing on Switzerland, we predict real GDP based on an unprecedented 'complete' set of transaction payments data: a combination of real-time gross settlement payment system data as well as debit and credit card data. Following a strongly data-driven machine learning approach, we find payments data to bear an accurate and timely signal about economic activity. When we assess the performance of the models by the initially published GDP numbers (pseudo real-time evaluation), we find a state-dependent value of the data: the payment models slightly outperform the benchmark models in times of crisis but are clearly inferior in 'normal' times. However, when we assess the performance of the models by revised and more final GDP numbers, we find payments data to be unconditionally valuable: the payment models outperform the benchmark models by up to 11% in times of crisis and by up to 12% in 'normal' times. We thus conclude that models based on payments data should become an integral part of policymakers' decision-making. |
Keywords: | Nowcasting, GDP, machine learning, payments data, COVID-19 |
JEL: | C52 C53 C55 E37 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:snb:snbwpa:2023-01&r=cmp |
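As a schematic of the data-driven approach described above, a short scikit-learn sketch of a pseudo real-time nowcasting loop: a gradient boosting model is re-estimated on an expanding window of (synthetic) payment indicators and evaluated against the next quarter's GDP growth. The data, features, and model choice are placeholders chosen by the editor, not the authors' models or data.

# Pseudo real-time nowcasting loop with an expanding estimation window (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
T = 80                                      # quarters
payments = rng.normal(size=(T, 6))          # stand-ins for payment-system and card indicators
gdp_growth = payments[:, :2].sum(axis=1) * 0.3 + rng.normal(scale=0.5, size=T)

errors = []
for t in range(40, T - 1):                  # start once a training history exists
    model = GradientBoostingRegressor(random_state=0)
    model.fit(payments[: t + 1], gdp_growth[: t + 1])        # only data available at time t
    nowcast = model.predict(payments[t + 1 : t + 2])[0]      # prediction for the next quarter
    errors.append(nowcast - gdp_growth[t + 1])

print("RMSE over the evaluation window:", np.sqrt(np.mean(np.square(errors))))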
By: | Davood Pirayesh Neghab; Mucahit Cevik; M. I. M. Wahab |
Abstract: | The complexity and ambiguity of financial and economic systems, along with frequent changes in the economic environment, have made it difficult to make precise predictions that are supported by theory-consistent explanations. Interpreting the prediction models used for forecasting important macroeconomic indicators is highly valuable for understanding relations among different factors, increasing trust towards the prediction models, and making predictions more actionable. In this study, we develop a fundamental-based model for the Canadian-U.S. dollar exchange rate within an interpretative framework. We propose a comprehensive approach using machine learning to predict the exchange rate and employ interpretability methods to accurately analyze the relationships among macroeconomic variables. Moreover, we implement an ablation study based on the output of the interpretations to improve the predictive accuracy of the models. Our empirical results show that crude oil, as Canada's main commodity export, is the leading factor that determines the exchange rate dynamics with time-varying effects. The changes in the sign and magnitude of the contributions of crude oil to the exchange rate are consistent with significant events in the commodity and energy markets and the evolution of the crude oil trend in Canada. Gold and the TSX stock index are found to be the second and third most important variables that influence the exchange rate. Accordingly, this analysis provides trustworthy and practical insights for policymakers and economists and accurate knowledge about the predictive model's decisions, which are supported by theoretical considerations. |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.16149&r=cmp |
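In the spirit of the interpretability analysis above, a brief scikit-learn sketch that fits a tree-based model of exchange-rate changes on macro drivers and ranks them by permutation importance. The paper's actual model and interpretability method may differ; the synthetic drivers echo the variables named in the abstract (crude oil, gold, the TSX index), while the interest-rate differential is a hypothetical extra column added by the editor.

# Feature ranking for an exchange-rate model via permutation importance (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
drivers = {"crude_oil": rng.normal(size=n), "gold": rng.normal(size=n),
           "tsx_index": rng.normal(size=n), "rate_differential": rng.normal(size=n)}
X = np.column_stack(list(drivers.values()))
# Synthetic CAD/USD change in which oil dominates, mirroring the ordering in the abstract.
y = (0.6 * drivers["crude_oil"] + 0.2 * drivers["gold"] + 0.1 * drivers["tsx_index"]
     + rng.normal(scale=0.3, size=n))

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in sorted(zip(drivers, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:18s} {imp:.3f}")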
By: | Davila-Pena, Laura; Borm, Peter (Tilburg University, Center For Economic Research); Garcia-Jurado, Ignacio; Schouten, Jop (Tilburg University, Center For Economic Research) |
Keywords: | Scheduling; Connection problems; Sequencing problems; Graph machine scheduling problems; cost allocation |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiucen:17013f33-1d65-4294-802c-b526a1c25105&r=cmp |
By: | David Wu; Sebastian Jaimungal |
Abstract: | The objectives of option hedging/trading extend beyond mere protection against downside risks; a desire to seek gains also drives agents' strategies. In this study, we showcase the potential of robust risk-aware reinforcement learning (RL) in mitigating the risks associated with path-dependent financial derivatives. We accomplish this by leveraging the policy gradient approach of Jaimungal, Pesenti, Wang, and Tatsat (2022), which optimises robust risk-aware performance criteria. We specifically apply this methodology to the hedging of barrier options and highlight how the optimal hedging strategy undergoes distortions as the agent moves from being risk-averse to risk-seeking, as well as how the agent robustifies their strategy. We further investigate the performance of the hedge when the data generating process (DGP) varies from the training DGP, and demonstrate that the robust strategies outperform the non-robust ones. [An illustrative code sketch follows this entry.]
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.15216&r=cmp |
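The strategies above are judged by risk-aware criteria. As a simple point of reference, here is how one common criterion, the conditional value-at-risk (CVaR) of a hedging P&L distribution, can be estimated from simulated paths; this is a generic empirical estimator, not the robust risk measure used in the paper, and the P&L draws are made up.

# Empirical CVaR (expected shortfall) of a simulated hedging P&L distribution.
import numpy as np

def cvar(pnl, alpha=0.95):
    """Average loss in the worst (1 - alpha) share of outcomes (losses reported as positive)."""
    losses = -np.asarray(pnl)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.1, scale=1.0, size=100_000)   # stand-in for hedged P&L across paths
print("95% CVaR:", round(float(cvar(pnl)), 3))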
By: | Davila-Pena, Laura; Borm, Peter (Tilburg University, School of Economics and Management); Garcia-Jurado, Ignacio; Schouten, Jop (Tilburg University, School of Economics and Management) |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiutis:17013f33-1d65-4294-802c-b526a1c25105&r=cmp |
By: | Galindev, Ragchaasuren; Decaluwé, Bernard |
Abstract: | This paper develops a CGE model with an explicit monetary policy rule that replaces the typical exogenous numeraire, so that the model has an endogenous absolute price level. A model with an exogenous numeraire can be derived as a special case of the current model through a flexible parameterization. The paper shows that, in the presence of nominal rigidities, simulation results for the same shock can differ depending on the choice of exogenous numeraire. Moreover, the current model also gives different results for the same shock by creating inflation. The paper highlights the importance of the choice of numeraire, and of considering an interest rate rule instead of a fixed numeraire, in the presence of nominal rigidities. [An illustrative equation follows this entry.]
Keywords: | Public Economics |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:ags:pugtwp:333464&r=cmp |
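For readers unfamiliar with the kind of rule the abstract refers to, a standard Taylor-type interest-rate rule takes the form below; the exact specification embedded in the paper's CGE model may differ, and the coefficients are left symbolic.

% A standard Taylor-type monetary policy rule (illustrative form only).
\[
  i_t = r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\, y_t ,
\]
% where $i_t$ is the policy rate, $r^{*}$ the equilibrium real rate, $\pi_t$ inflation,
% $\pi^{*}$ the inflation target, and $y_t$ the output gap.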
By: | Qianqian Xie; Weiguang Han; Yanzhao Lai; Min Peng; Jimin Huang |
Abstract: | Recently, large language models (LLMs) like ChatGPT have demonstrated remarkable performance across a variety of natural language processing tasks. However, their effectiveness in the financial domain, specifically in predicting stock market movements, remains to be explored. In this paper, we conduct an extensive zero-shot analysis of ChatGPT's capabilities in multimodal stock movement prediction on three datasets of tweets and historical stock prices. Our findings indicate that ChatGPT is a "Wall Street Neophyte" with limited success in predicting stock movements, as it underperforms not only state-of-the-art methods but also traditional methods like linear regression using price features. Despite the potential of Chain-of-Thought prompting strategies and the inclusion of tweets, ChatGPT's performance remains subpar. Furthermore, we observe limitations in its explainability and stability, suggesting the need for more specialized training or fine-tuning. This research provides insights into ChatGPT's capabilities and serves as a foundation for future work aimed at improving financial market analysis and prediction by leveraging social media sentiment and historical stock data. [An illustrative code sketch follows this entry.]
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.05351&r=cmp |
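A rough sketch of what a zero-shot stock-movement prompt might look like using the OpenAI Python client; the prompt wording, model name, and answer parsing are the editor's assumptions, not the prompts used in the paper, and running it requires an API key.

# Zero-shot stock-movement prompt via the OpenAI Python client (illustrative only).
from openai import OpenAI

def predict_movement(ticker, tweets, recent_prices, model="gpt-3.5-turbo"):
    prompt = (
        f"Recent tweets about {ticker}:\n" + "\n".join(f"- {t}" for t in tweets) +
        f"\nClosing prices over the last five days: {recent_prices}\n"
        "Will the next day's closing price go UP, DOWN, or stay FLAT? Answer with one word."
    )
    client = OpenAI()   # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}], temperature=0)
    return reply.choices[0].message.content.strip().upper()

# Example call (placeholder inputs):
# predict_movement("AAPL", ["Strong iPhone demand reported."], [182.3, 183.1, 184.0, 183.6, 185.2])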
By: | Graham Cookson;Jake Hitch |
Abstract: | In recent years, U.S. policymakers have been considering reforms to tackle high and rising prescription drug spending, including unprecedented direct limits on prices and price growth for top-selling medicines. By affecting expected prices and revenues, this type of reform will impact biopharmaceutical companies’ incentives to innovate. However, the magnitudes and timings of impacts on the numbers of new drugs coming to market are unclear. To inform policymaking in this area, the Congressional Budget Office (CBO) has developed an economic simulation model which can, in theory, be used to predict the effects of any policy that alters expected costs or returns from new drug development. Recently, the model has been used to evaluate the drug pricing provisions in the Build Back Better Act (BBBA). The CBO estimates that the policy will have minor negative effects on the number of new drugs coming to market, at least in the first three decades following implementation. However, as we detail in this updated report, despite technical improvements made since it was first published, the modelling is oversimplified and significant uncertainty in the estimates remains. More fundamentally, CBO restricts attention to the numbers of new drugs coming to market, but the value, not the volume of innovation, matters most for patients. For these reasons and others which we expand upon in this report, policymakers should exercise caution when relying on this modelling approach to predict the impacts of real-world policy changes. |
Title: | Limitations of CBO’s Simulation Model of New Drug Development as a Tool for Policymakers
JEL: | I1 |
Date: | 2022–06–01 |
URL: | http://d.repec.org/n?u=RePEc:ohe:conres:002394&r=cmp |
By: | Shijie Wu; Ozan Irsoy; Steven Lu; Vadim Dabravolski; Mark Dredze; Sebastian Gehrmann; Prabhanjan Kambadur; David Rosenberg; Gideon Mann |
Abstract: | The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in the literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed-dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology. As a next step, we plan to release training logs (Chronicles) detailing our experience in training BloombergGPT.
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.17564&r=cmp |
By: | Gael M. Martin; David T. Frazier; Ruben Loaiza-Maya; Florian Huber; Gary Koop; John Maheu; Didier Nibbering; Anastasios Panagiotelis |
Abstract: | The Bayesian statistical paradigm provides a principled and coherent approach to probabilistic forecasting. Uncertainty about all unknowns that characterize any forecasting problem -- model, parameters, latent states -- is factored into the forecast distribution, with forecasts conditioned only on what is known or observed. Allied with the elegance of the method, Bayesian forecasting is now underpinned by the burgeoning field of Bayesian computation, which enables Bayesian forecasts to be produced for virtually any problem, no matter how large or complex. The current state of play in Bayesian forecasting is the subject of this review. The aim is to provide readers with an overview of modern approaches to the field, set in some historical context. Whilst our primary focus is on applications in the fields of economics and finance, and their allied disciplines, sufficient general details about implementation are provided to aid and inform all investigators. [An illustrative code sketch follows this entry.]
Keywords: | Bayesian prediction, macroeconomics, finance, marketing, electricity demand, Bayesian computational methods, loss-based Bayesian prediction |
JEL: | C01 C11 C53 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2023-1&r=cmp |
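To illustrate the basic mechanics the review covers (conditioning on observed data and carrying parameter uncertainty into the forecast distribution), a tiny conjugate example: a Gaussian AR(1) with known noise variance and a normal prior on the autoregressive coefficient, yielding draws from the one-step-ahead posterior predictive. This is a textbook illustration chosen by the editor, not an example taken from the paper.

# One-step-ahead Bayesian predictive for a Gaussian AR(1) with known noise variance.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from y_t = 0.7 * y_{t-1} + e_t, with e_t ~ N(0, 1).
T, true_phi, sigma = 200, 0.7, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_phi * y[t - 1] + sigma * rng.normal()

# Conjugate posterior for phi under the prior phi ~ N(0, tau0^2), sigma known.
x, z = y[:-1], y[1:]
tau0_sq = 1.0
post_var = 1.0 / (np.sum(x**2) / sigma**2 + 1.0 / tau0_sq)
post_mean = post_var * np.sum(x * z) / sigma**2

# Posterior predictive for y_{T+1}: draw phi from the posterior, then add observation noise.
phi_draws = rng.normal(post_mean, np.sqrt(post_var), size=10_000)
y_next = phi_draws * y[-1] + sigma * rng.normal(size=10_000)
print("posterior mean of phi:", round(post_mean, 3))
print("90% predictive interval for y_{T+1}:", np.round(np.quantile(y_next, [0.05, 0.95]), 2))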