nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒04‒17
twenty-six papers chosen by
Stan Miles
Thompson Rivers University

  1. Forecasting the movements of Bitcoin prices: an application of machine learning algorithms By Hakan Pabuccu; Serdar Ongan; Ayse Ongan
  2. The Impact of Feature Selection and Transformation on Machine Learning Methods in Determining the Credit Scoring By Oguz Koc; Omur Ugur; A. Sevtap Kestel
  3. Is COVID-19 reflected in AnaCredit dataset? A big data - machine learning approach for analysing behavioural patterns using loan level granular information By Anastasios Petropoulos; Evangelos Stavroulakis; Panagiotis Lazaris; Vasilis Siakoulis; Nikolaos Vlachogiannakis
  4. Langevin algorithms for Markovian Neural Networks and Deep Stochastic control By Pierre Bras; Gilles Pagès
  5. Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid Methodology By Zijian Shi; John Cartlidge
  6. Comparing Out-of-Sample Performance of Machine Learning Methods to Forecast U.S. GDP Growth By Ba Chu; Shafiullah Qureshi
  7. A parsimonious neural network approach to solve portfolio optimization problems without using dynamic programming By Pieter M. van Staden; Peter A. Forsyth; Yuying Li
  8. Stock Trend Prediction: A Semantic Segmentation Approach By Shima Nabiee; Nader Bagherzadeh
  9. Collusion and Artificial Intelligence: A computational experiment with sequential pricing algorithms under stochastic costs By Gonzalo Ballestero
  10. Predicting Poverty with Missing Incomes By Paolo Verme
  11. Strategic Trading in Quantitative Markets through Multi-Agent Reinforcement Learning By Hengxi Zhang; Zhendong Shi; Yuanquan Hu; Wenbo Ding; Ercan E. Kuruoglu; Xiao-Ping Zhang
  12. Deep hybrid model with satellite imagery: how to combine demand modeling and computer vision for behavior analysis? By Qingyi Wang; Shenhao Wang; Yunhan Zheng; Hongzhou Lin; Xiaohu Zhang; Jinhua Zhao; Joan Walker
  13. Application of supervised learning models in the Chinese futures market By Fuquan Tang
  14. Improving CNN-base Stock Trading By Considering Data Heterogeneity and Burst By Keer Yang; Guanqun Zhang; Chuan Bi; Qiang Guan; Hailu Xu; Shuai Xu
  15. The Economic Characteristics of an Aging Society: a Dynamic Computable General Equilibrium Analysis By Zuo, Xuejin; Peng, Xiujian; Yang, Xin; Yang, Xiaoping; Yue, Han; Wang, Meifeng; Adams, Philip
  16. Real Option Pricing using Quantum Computers By Alberto Manzano; Gonzalo Ferro; Álvaro Leitao; Carlos Vázquez; Andrés Gómez
  17. Insights from Adding Transportation Sector Detail into an Economy-Wide Model: The Case of the ADAGE CGE Model By Cai, Yongxia; Woollacott, Jared; Beach, Robert; Rafelski, Lauren; Ramig, Christopher; Shelby, Michael
  18. Research on CPI Prediction Based on Natural Language Processing By Xiaobin Tang; Nuo Lei
  19. Style Miner: Find Significant and Stable Explanatory Factors in Time Series with Constrained Reinforcement Learning By Dapeng Li; Feiyang Pan; Jia He; Zhiwei Xu; Dandan Tu; Guoliang Fan
  20. Analysing the response of U.S. financial market to the Federal Open Market Committee statements and minutes based on computational linguistic approaches By Xuefan, Pan
  21. Runge-Kutta integrators for fast and accurate solutions in GEMPACK By Schiffman, Florian
  22. A comprehensive short and long-run assessment on the impact of the EU-Mercosur agreement on Brazil By González, Javier; Latorre, María C.; Valverde, Gabriela Ortiz
  23. Mitigation pathway of domestic mixed environmental taxes and the effects of trade restrictions on air pollution mitigation in China By Hu, Xiurong; Liu, Junfeng
  24. Enhancing labour productivity by improving nutrition in Kenya: micro-econometric estimates for dynamic CGE model calibration By Ramos, Maria Priscila; Custodio, Estefania; Jiménez, Sofía; Sartori, Martina; Ferrari, Emanuele
  25. On Robustness of Double Linear Policy with Time-Varying Weights By Xin-Yu Wang; Chung-Han Hsieh
  26. Art-ificial Intelligence: The Effect of AI Disclosure on Evaluations of Creative Content By Manav Raj; Justin Berg; Rob Seamans

  1. By: Hakan Pabuccu; Serdar Ongan; Ayse Ongan
    Abstract: Cryptocurrencies, such as Bitcoin, are one of the most controversial and complex technological innovations in today's financial system. This study aims to forecast the movements of Bitcoin prices at a high degree of accuracy. To this aim, four different Machine Learning (ML) algorithms are applied, namely, the Support Vector Machines (SVM), the Artificial Neural Network (ANN), the Naive Bayes (NB) and the Random Forest (RF), besides the logistic regression (LR) as a benchmark model. To test these algorithms, a discrete dataset was created and used in addition to the existing continuous dataset. Algorithm performance was evaluated with the F statistic, the accuracy statistic, the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE) and the Root Absolute Error (RAE). The t test was used to compare the performances of the SVM, ANN, NB and RF with that of the LR. Empirical findings reveal that, while the RF has the highest forecasting performance in the continuous dataset, the NB has the lowest. In the discrete dataset, on the other hand, the ANN has the highest and the NB the lowest performance. Furthermore, the discrete dataset improves the overall forecasting performance of all algorithms (models) estimated.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.04642&r=cmp
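The benchmarking protocol this abstract describes — several classifiers evaluated against a logistic regression baseline with accuracy and F1 — can be sketched with scikit-learn. The synthetic up/down labels and feature matrix below are placeholders, not the paper's Bitcoin data.

```python
# Illustrative sketch only: synthetic direction labels stand in for the
# paper's Bitcoin dataset; the evaluation loop mirrors the abstract's design
# (SVM, NB, RF benchmarked against logistic regression).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))            # stand-ins for technical indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR (benchmark)": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred))
    print(f"{name:15s} acc={scores[name][0]:.3f} F1={scores[name][1]:.3f}")
```

On real data one would also report MAE, RMSE and RAE as the paper does, and compare each model to the LR benchmark with a t test over repeated splits.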
  2. By: Oguz Koc; Omur Ugur; A. Sevtap Kestel
    Abstract: Banks utilize credit scoring as an important indicator of financial strength and eligibility for credit. Scoring models aim to assign statistical odds or probabilities of nonpayment in relation to the many other factors that may be involved. This paper illustrates the beneficial use of eight machine learning (ML) methods (Support Vector Machine, Gaussian Naive Bayes, Decision Trees, Random Forest, XGBoost, K-Nearest Neighbors, Multi-layer Perceptron Neural Networks, and Logistic Regression) in finding the default risk as well as the features contributing to it. An extensive comparison is made in three aspects: (i) which ML model, with and without its own wrapper feature selection, performs best; (ii) how feature selection combined with an appropriate data scaling method influences performance; (iii) which combination of algorithm, feature selection, and scaling delivers the best validation indicators, such as accuracy rate, Type I and II errors, and AUC. Open-access credit scoring default risk data sets on German and Australian cases are used, for which we determine the best method, scaling, and the features contributing most to default risk, and compare our findings with those in the related literature. We illustrate the positive contribution of the selection method and scaling to the performance indicators relative to the existing literature.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.05427&r=cmp
  3. By: Anastasios Petropoulos (Bank of Greece); Evangelos Stavroulakis (Bank of Greece); Panagiotis Lazaris (Bank of Greece); Vasilis Siakoulis (Bank of Greece); Nikolaos Vlachogiannakis (Bank of Greece)
    Abstract: In this study, we explore the impact of the COVID-19 pandemic on the default risk of loan portfolios of the Greek banking system, using cutting-edge machine learning technologies such as deep learning. Our analysis is based on loan-level monthly data, spanning a 42-month period, collected through the ECB AnaCredit database. Our dataset contains more than three million records, covering both the pre- and post-pandemic periods. We develop a series of credit rating models implementing state-of-the-art machine learning algorithms. Through an extensive validation process, we explore the best machine learning technique to build a behavioral credit scoring model, and subsequently we investigate the estimated sensitivities of various features in predicting default risk. To select the best candidate model, we compare the classification accuracy of the proposed methods over a 2-month out-of-time period. Our empirical results indicate that Deep Neural Networks (DNN) have superior predictive performance, signalling better generalization capacity than Random Forests, Extreme Gradient Boosting (XGBoost), and logistic regression. The proposed DNN model can accurately simulate the non-linearities caused by the pandemic outbreak in the evolution of default rates for Greek corporate customers. Under this multivariate setup we apply interpretability algorithms to isolate the impact of COVID-19 on the probability of default, controlling for the rest of the features of the DNN. Our results indicate that the impact of the pandemic peaks in the first year and then slowly decreases, though without yet reaching pre-COVID-19 levels. Furthermore, our empirical results suggest different behavioral patterns between Stage 1 and Stage 2 loans, and that default rate sensitivities vary significantly across sectors. The current empirical work can facilitate a more in-depth analysis of the AnaCredit database, by providing robust statistical tools for more effective and responsive micro- and macro-supervision of credit risk.
    Keywords: Credit Risk;Deep Learning; AnaCredit; COVID-19
    JEL: G24 C38 C45 C55
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:bog:wpaper:315&r=cmp
  4. By: Pierre Bras (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité); Gilles Pagès (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - UPD7 - Université Paris Diderot - Paris 7 - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique)
    Abstract: Stochastic Gradient Descent Langevin Dynamics (SGLD) algorithms, which add noise to the classic gradient descent, are known to improve the training of neural networks in some cases where the neural network is very deep. In this paper we study the possibilities of training acceleration for the numerical resolution of stochastic control problems through gradient descent, where the control is parametrized by a neural network. If the control is applied at many discretization times then solving the stochastic control problem reduces to minimizing the loss of a very deep neural network. We numerically show that Langevin algorithms improve the training on various stochastic control problems like hedging and resource management, and for different choices of gradient descent methods.
    Keywords: Langevin algorithm, SGLD, Markovian neural network, Stochastic control, Deep neural network, Stochastic optimization
    Date: 2022–12–22
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03980632&r=cmp
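The core SGLD idea the abstract builds on — a gradient step plus injected Gaussian noise — is compact enough to sketch. This toy (not the paper's code) applies the update to a quadratic loss L(x) = ||x||²/2, whose gradient is x; the step size and inverse temperature are illustrative choices.

```python
# Minimal SGLD sketch: theta <- theta - lr*grad + sqrt(2*lr/beta) * N(0, I).
# With a large inverse temperature beta, the noisy iterates still
# concentrate near the minimum of the toy quadratic loss.
import numpy as np

def sgld_step(x, grad, lr, beta, rng):
    """One Stochastic Gradient Langevin Dynamics update."""
    return x - lr * grad + np.sqrt(2.0 * lr / beta) * rng.normal(size=x.shape)

rng = np.random.default_rng(0)
x = rng.normal(size=5) * 5.0                  # start far from the optimum
for _ in range(2000):
    x = sgld_step(x, grad=x, lr=0.01, beta=1e3, rng=rng)  # grad of ||x||^2/2 is x
print(np.linalg.norm(x))
```

In the paper's setting the same noise term is added to the gradient updates of a deep network parametrizing the control, where the roughness of the loss landscape makes the added exploration pay off.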
  5. By: Zijian Shi; John Cartlidge
    Abstract: Modern financial exchanges use an electronic limit order book (LOB) to store bid and ask orders for a specific financial asset. As the most fine-grained information depicting the demand and supply of an asset, LOB data is essential in understanding market dynamics. Therefore, realistic LOB simulations offer a valuable methodology for explaining empirical properties of markets. Mainstream simulation models include agent-based models (ABMs) and stochastic models (SMs). However, ABMs tend not to be grounded on real historical data, while SMs tend not to enable dynamic agent-interaction. To overcome these limitations, we propose a novel hybrid LOB simulation paradigm characterised by: (1) representing the aggregation of market events' logic by a neural stochastic background trader that is pre-trained on historical LOB data through a neural point process model; and (2) embedding the background trader in a multi-agent simulation with other trading agents. We instantiate this hybrid NS-ABM model using the ABIDES platform. We first run the background trader in isolation and show that the simulated LOB can recreate a comprehensive list of stylised facts that demonstrate realistic market behaviour. We then introduce a population of `trend' and `value' trading agents, which interact with the background trader. We show that the stylised facts remain and we demonstrate order flow impact and financial herding behaviours that are in accordance with empirical observations of real markets.
    Date: 2023–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.00080&r=cmp
  6. By: Ba Chu (Department of Economics, Carleton University); Shafiullah Qureshi (Department of Economics, Carleton University)
    Abstract: We run a 'horse race' among popular forecasting methods, including machine learning (ML) and deep learning (DL) methods, employed to forecast U.S. GDP growth. Given the unstable nature of GDP growth data, we implement a recursive forecasting strategy to calculate the out-of-sample performance metrics of forecasts for multiple subperiods. We use three sets of predictors: a large set of 224 predictors [of U.S. GDP growth] taken from a large quarterly macroeconomic database (namely, FRED-QD), a small set of nine strong predictors selected from the large set, and another small set including these nine strong predictors together with a high-frequency business condition index. We then obtain the following three main findings: (1) when forecasting with a large number of predictors with mixed predictive power, density-based ML methods (such as bagging or boosting) can outperform sparsity-based methods (such as Lasso) for long-horizon forecasts, but this is not necessarily the case for short-horizon forecasts; (2) density-based ML methods tend to perform better with a large set of predictors than with a small subset of strong predictors; and (3) parsimonious models using a strong high-frequency predictor can outperform sophisticated ML and DL models using a large number of low-frequency predictors, highlighting the important role of predictors in economic forecasting. We also find that ensemble ML methods (which are special cases of density-based ML methods) can outperform popular DL methods.
    Keywords: Lasso, Ridge Regression, Random Forest, Boosting Algorithms, Artificial Neural Networks, Dimensional Reduction Methods, MIDAS, GDP growth
    Date: 2021–10–30
    URL: http://d.repec.org/n?u=RePEc:car:carecp:21-12&r=cmp
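The recursive strategy the abstract describes — re-estimating on all data up to each date and forecasting one step ahead — can be sketched as follows. Ridge regression here is only a stand-in for the paper's ML methods, and the AR(1)-style series is synthetic.

```python
# Illustrative expanding-window (recursive) out-of-sample evaluation.
import numpy as np

def recursive_rmse(y, X, ridge=1.0, start=40):
    """Refit at each t on data up to t, forecast y[t], return out-of-sample RMSE."""
    errs = []
    for t in range(start, len(y)):
        Xt, yt = X[:t], y[:t]
        beta = np.linalg.solve(Xt.T @ Xt + ridge * np.eye(X.shape[1]), Xt.T @ yt)
        errs.append(y[t] - X[t] @ beta)
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(1)
n = 120
y = np.zeros(n)
for t in range(1, n):                      # synthetic AR(1) "growth" series
    y[t] = 0.6 * y[t - 1] + rng.normal(scale=0.5)
X = np.column_stack([np.ones(n - 1), y[:-1]])   # intercept + lagged value
rmse = recursive_rmse(y[1:], X)
print(round(rmse, 3))
```

The paper runs the same loop per subperiod and per method, which is what makes the comparison across stable and unstable regimes possible.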
  7. By: Pieter M. van Staden; Peter A. Forsyth; Yuying Li
    Abstract: We present a parsimonious neural network approach, which does not rely on dynamic programming techniques, to solve dynamic portfolio optimization problems subject to multiple investment constraints. The number of parameters of the (potentially deep) neural network remains independent of the number of portfolio rebalancing events, and in contrast to, for example, reinforcement learning, the approach avoids the computation of high-dimensional conditional expectations. As a result, the approach remains practical even when considering large numbers of underlying assets, long investment time horizons or very frequent rebalancing events. We prove convergence of the numerical solution to the theoretical optimal solution of a large class of problems under fairly general conditions, and present ground truth analyses for a number of popular formulations, including mean-variance and mean-conditional value-at-risk problems. We also show that it is feasible to solve Sortino ratio-inspired objectives (penalizing only the variance of wealth outcomes below the mean) in dynamic trading settings with the proposed approach. Using numerical experiments, we demonstrate that if the investment objective functional is separable in the sense of dynamic programming, the correct time-consistent optimal investment strategy is recovered, otherwise we obtain the correct pre-commitment (time-inconsistent) investment strategy. The proposed approach remains agnostic as to the underlying data generating assumptions, and results are illustrated using (i) parametric models for underlying asset returns, (ii) stationary block bootstrap resampling of empirical returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.08968&r=cmp
  8. By: Shima Nabiee; Nader Bagherzadeh
    Abstract: Market financial forecasting is a trending area in deep learning. Deep learning models are capable of tackling the classic challenges in stock market data, such as its extremely complicated dynamics as well as long-term temporal correlation. To capture the temporal relationship among these time series, recurrent neural networks are employed. However, it is difficult for recurrent models to learn to keep track of long-term information. Convolutional Neural Networks have been utilized to better capture the dynamics and extract features for both short- and long-term forecasting. However, semantic segmentation and its well-designed fully convolutional networks have never been studied for time-series dense classification. We present a novel approach to predict long-term daily stock price change trends with fully 2D-convolutional encoder-decoders. We generate input frames with daily prices for a time-frame of T days. The aim is to predict future trends by pixel-wise classification of the current price frame. We propose a hierarchical CNN structure to encode multiple price frames to multiscale latent representations in parallel using Atrous Spatial Pyramid Pooling blocks and take those temporal coarse feature stacks into account in the decoding stages. Our hierarchical structure of CNNs makes it capable of capturing both long- and short-term temporal relationships effectively. The effect of increasing the input time horizon via incrementing parallel encoders has been studied with interesting and substantial changes in the output segmentation masks. We achieve overall accuracy and AUC of 78.18% and 0.88 for joint trend prediction over the next 20 days, surpassing other semantic segmentation approaches. We compared our proposed model with several deep models specifically designed for technical analysis and found that for different output horizons, our proposed models outperformed the other models.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.09323&r=cmp
  9. By: Gonzalo Ballestero (Department of Economics, Universidad de San Andres)
    Abstract: Firms increasingly delegate their strategic decisions to algorithms. A potential concern is that algorithms may undermine competition by leading to pricing outcomes that are collusive, even without having been designed to do so. This paper investigates whether Q-learning algorithms can learn to collude in a setting with sequential price competition and stochastic marginal costs adapted from Maskin and Tirole (1988). By extending a previous model developed in Klein (2021), I find that sequential Q-learning algorithms lead to supracompetitive profits even though they compete under uncertainty, and this finding is robust to various extensions. The algorithms can coordinate on focal price equilibria or an Edgeworth cycle provided that uncertainty is not too large. However, as the market environment becomes more uncertain, price wars emerge as the only possible pricing pattern. Even though sequential Q-learning algorithms gain supracompetitive profits, uncertainty tends to make collusive outcomes more difficult to achieve.
    Keywords: Competition Policy
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:sad:ypaper:1&r=cmp
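A stripped-down version of the algorithm class studied here — epsilon-greedy Q-learning over a price grid with a stochastic marginal cost — can be sketched for a single agent. This toy does not reproduce the paper's sequential duopoly; demand is linear (q = 1 - p), costs are an iid two-state draw, and the discount factor is zero.

```python
# Toy tabular Q-learning for pricing under stochastic costs (illustrative).
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(0.1, 0.9, 9)          # action grid
costs = np.array([0.0, 0.2])               # two equally likely cost states
Q = np.zeros((len(costs), len(prices)))
alpha, eps = 0.1, 0.1

for _ in range(20000):
    s = rng.integers(len(costs))           # observe the iid cost draw
    a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q[s]))
    profit = (prices[a] - costs[s]) * (1.0 - prices[a])
    Q[s, a] += alpha * (profit - Q[s, a])  # myopic (discount-0) Q update

# Learned price per cost state; the static optimum is p* = (1 + c) / 2.
learned = prices[np.argmax(Q, axis=1)]
print(learned)
```

In the paper, by contrast, agents condition on the rival's price and discount the future, which is exactly what opens the door to supracompetitive outcomes.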
  10. By: Paolo Verme (World Bank)
    Abstract: Poverty prediction models are used by economists to address missing data issues in a variety of contexts such as poverty profiling, targeting with proxy-means tests, cross-survey imputations such as poverty mapping, or vulnerability analyses. Based on the models used by this literature, this paper conducts an experiment by artificially corrupting data with different patterns and shares of missing incomes. It then compares the capacity of classic econometric and machine learning models to predict poverty under these different scenarios. It finds that the quality of predictions and the choice of the optimal prediction model are dependent on the distribution of observed and unobserved incomes, the poverty line, the choice of objective function and policy preferences, and various other modeling choices. Logistic and random forest models are found to be more robust than other models to variations in these features, but no model invariably outperforms all others. The paper concludes with some reflections on the use of these models for predicting poverty.
    Keywords: Income modeling, Income Distributions, Poverty Predictions
    JEL: D31 D63 E64 O15
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2023-642&r=cmp
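The paper's experimental design — artificially corrupting data with different missingness patterns — can be illustrated in a few lines. The synthetic lognormal incomes and missingness probabilities below are illustrative assumptions, chosen only to show why the pattern of missing incomes matters for poverty measurement.

```python
# Sketch: compare a naive poverty rate under missing-completely-at-random
# (MCAR) corruption versus missingness correlated with being poor.
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=0.0, sigma=0.7, size=20000)
z = np.quantile(income, 0.25)                  # poverty line: bottom quartile
true_rate = float(np.mean(income < z))

mcar = rng.random(20000) < 0.3                 # 30% missing at random
p_miss = np.clip(0.5 - 0.4 * (income > z), 0, 1)   # the poor go missing more often
mnar = rng.random(20000) < p_miss

rate_mcar = float(np.mean(income[~mcar] < z))  # nearly unbiased
rate_mnar = float(np.mean(income[~mnar] < z))  # understates poverty
print(round(true_rate, 3), round(rate_mcar, 3), round(rate_mnar, 3))
```

This is the setting in which the paper then asks which prediction model (logistic, random forest, etc.) best recovers the poverty status of the corrupted observations.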
  11. By: Hengxi Zhang; Zhendong Shi; Yuanquan Hu; Wenbo Ding; Ercan E. Kuruoglu; Xiao-Ping Zhang
    Abstract: Due to the rapid dynamics and the mass of uncertainties in quantitative markets, how to take appropriate actions to make profits in stock trading remains a challenging problem. Reinforcement learning (RL), as a reward-oriented approach for optimal control, has emerged as a promising method to tackle this strategic decision-making problem in such a complex financial scenario. In this paper, we integrate two prior financial trading strategies, constant proportion portfolio insurance (CPPI) and time-invariant portfolio protection (TIPP), into the multi-agent deep deterministic policy gradient (MADDPG) framework and propose two specifically designed multi-agent RL (MARL) methods, CPPI-MADDPG and TIPP-MADDPG, for investigating strategic trading in quantitative markets. We then selected 100 different shares in the real financial market to test these approaches. The experimental results show that the CPPI-MADDPG and TIPP-MADDPG approaches generally outperform the conventional ones.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.11959&r=cmp
  12. By: Qingyi Wang; Shenhao Wang; Yunhan Zheng; Hongzhou Lin; Xiaohu Zhang; Jinhua Zhao; Joan Walker
    Abstract: Classical demand modeling analyzes travel behavior using only low-dimensional numeric data (i.e. sociodemographics and travel attributes) but not high-dimensional urban imagery. However, travel behavior depends on the factors represented by both numeric data and urban imagery, thus necessitating a synergetic framework to combine them. This study creates a theoretical framework of deep hybrid models with a crossing structure consisting of a mixing operator and a behavioral predictor, thus integrating the numeric and imagery data into a latent space. Empirically, this framework is applied to analyze travel mode choice using the MyDailyTravel Survey from Chicago as the numeric inputs and the satellite images as the imagery inputs. We found that deep hybrid models outperform both traditional demand models and recent deep learning models in predicting aggregate and disaggregate travel behavior with our supervision-as-mixing design. The latent space in deep hybrid models can be interpreted, because it reveals meaningful spatial and social patterns. The deep hybrid models can also generate new urban images that do not exist in reality and interpret them with economic theory, such as computing substitution patterns and social welfare changes. Overall, the deep hybrid models demonstrate the complementarity between the low-dimensional numeric and high-dimensional imagery data and between traditional demand modeling and recent deep learning. The framework generalizes the latent classes and variables in classical hybrid demand models to a latent space, and leverages the computational power of deep learning for imagery while retaining economic interpretability on a microeconomic foundation.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.04204&r=cmp
  13. By: Fuquan Tang
    Abstract: Based on the characteristics of the Chinese futures market, this paper builds a supervised learning model to predict the trend of futures prices and then designs a trading strategy based on the prediction results. The precision, recall and F1-score on the test data show that our model meets the accuracy requirements for classifying futures price movements. The backtest results show that our trading system has an upward-trending return curve with low capital drawdown.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.04581&r=cmp
  14. By: Keer Yang; Guanqun Zhang; Chuan Bi; Qiang Guan; Hailu Xu; Shuai Xu
    Abstract: In recent years, there have been quite a few attempts to apply intelligent techniques to financial trading, i.e., constructing automatic and intelligent trading frameworks based on historical stock prices. Due to the unpredictable, uncertain and volatile nature of the financial market, researchers have also resorted to deep learning to construct such frameworks. In this paper, we propose to use a CNN as the core of the framework, because it is able to learn the spatial dependency (i.e., between rows and columns) of the input data. However, differently from existing deep learning-based trading frameworks, we develop a novel normalization process to prepare the stock data. In particular, we first empirically observe that the stock data is intrinsically heterogeneous and bursty, and then validate the heterogeneous and bursty nature of stock data from a statistical perspective. Next, we design the data normalization method so that the data heterogeneity is preserved and bursty events are suppressed. We verify our CNN-based trading framework, together with our new normalization method, on 29 stocks. Experiment results show that our approach can outperform other competing approaches.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.09407&r=cmp
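One normalization in the spirit the abstract describes — scaling each stock by its own statistics (preserving cross-stock heterogeneity) while clipping extreme values (suppressing bursts) — can be sketched as follows. The clipping threshold of 3 standard deviations is an illustrative choice, not the paper's.

```python
# Sketch: per-stock z-scores with burst clipping (illustrative thresholds).
import numpy as np

def normalize_per_stock(prices, clip=3.0):
    """prices: (n_stocks, n_days). Returns clipped per-stock z-scores."""
    mu = prices.mean(axis=1, keepdims=True)
    sd = prices.std(axis=1, keepdims=True) + 1e-8
    return np.clip((prices - mu) / sd, -clip, clip)

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=(3, 250)), axis=1) + 100.0
prices[1, 100] += 50.0                          # inject a burst into one stock
z = normalize_per_stock(prices)
print(z.max(), z.min())
```

Normalizing per stock rather than globally keeps the relative scale differences across stocks intact, which is the heterogeneity the paper argues a CNN should see.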
  15. By: Zuo, Xuejin; Peng, Xiujian; Yang, Xin; Yang, Xiaoping; Yue, Han; Wang, Meifeng; Adams, Philip
    Abstract: China is experiencing rapid population ageing. People aged 65 and older accounted for 13.5 per cent of the total population in 2020, and this share will continue to increase, reaching 40 per cent in 2100. What are the economic implications of population ageing? Most research has focused on the macroeconomic effects of a declining labour force and a growing elderly population. There is insufficient research on the changes in demand for goods and services brought about by population ageing, and research on the economic impact of such changes under the computable general equilibrium (CGE) framework is even rarer. This paper attempts to fill the research gap in this area. Using a dynamic CGE model of the Chinese economy, in the baseline scenario we projected China’s economic growth path over the period 2019 to 2100. We assumed no change in age-specific consumption demand even though population ageing is reflected in the declining working-age population and the growing elderly population. The simulation results revealed that China has to rely on technology improvement and capital stock increases to support its economic growth, and that the growing elderly population will put heavy pressure on China’s general government budget balance. Starting from the baseline described above, we constructed a policy scenario that deviated from the baseline due to ageing-induced changes to household and government consumption preferences for education, health and aged care services. With ageing, demand shifts away from education and towards health and aged care services. The simulation results show that the macroeconomic effects of age-structure-driven changes are negligible, even though the changes will affect industrial outputs and cause small adjustments to the economic structure. The increased demand for medical and aged-care services will exceed the decreased demand for education, thus driving up the general government budget deficit.
    Keywords: Agricultural and Food Policy
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ags:pugtwp:333484&r=cmp
  16. By: Alberto Manzano; Gonzalo Ferro; Álvaro Leitao; Carlos Vázquez; Andrés Gómez
    Abstract: We present a novel methodology to price derivative contracts using quantum computers by means of Quantum Accelerated Monte Carlo. Our contribution is an algorithm that permits pricing derivative contracts with negative payoffs. Note that the presence of negative payoffs can give rise to negative prices. This behaviour cannot be captured by existing quantum algorithms. Although the procedure we describe is different from the standard one, the main building blocks are the same. Thus, all the extensive research that has been performed is still applicable. Moreover, we experimentally compare the performance of the proposed methodology against other proposals employing a quantum emulator and show that we retain the speedups.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.06089&r=cmp
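The paper's quantum algorithm is not reproduced here, but the underlying issue can be shown classically: a payoff that may go negative can be split as f = f⁺ − f⁻ with both parts non-negative (the form that amplitude-estimation-style routines typically require), and the two expectations recombined. The treatment of the split as the resolution, and the GBM parameters below, are illustrative assumptions.

```python
# Classical Monte Carlo sketch: price a forward-style payoff (which can be
# negative) by splitting it into non-negative positive and negative parts.
import numpy as np

rng = np.random.default_rng(0)
s0, k, r, sigma, t, n = 100.0, 100.0, 0.02, 0.2, 1.0, 200_000
z = rng.normal(size=n)
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)

payoff = st - k                     # forward-style payoff, may be negative
pos, neg = np.maximum(payoff, 0), np.maximum(-payoff, 0)
price = float(np.exp(-r * t) * (pos.mean() - neg.mean()))
print(round(price, 2))
```

For this payoff the split recovers the analytic forward value s0 − k·e^(−rt), which provides a sanity check on the recombination step.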
  17. By: Cai, Yongxia; Woollacott, Jared; Beach, Robert; Rafelski, Lauren; Ramig, Christopher; Shelby, Michael
    Abstract: The transportation sector is expected to undergo major structural changes in the coming decades, particularly with the emergence of new vehicle technologies. There is a need to understand the economy-wide impacts of evolving conditions in the transportation sector, and computable general equilibrium (CGE) models can provide valuable insights in this area. However, to date, few CGE models have established detailed representations of the transportation sector. The major contribution of this work is to demonstrate, and provide insight into, how transportation subsector and technological detail influences modelled economic and environmental outcomes in the ADAGE model. The results presented in this paper indicate projected outcomes based on cost assumptions and model structure, not specific forecasts of future outcomes. They provide a useful diagnostic tool for gaining insight into the likely direction and relative magnitude of market and environmental outcomes under different technology and cost assumptions. Electric vehicle (EV) technologies, both hybrid and battery, see significant penetration in the U.S. vehicle fleet from 2020 to 2050, whereas natural gas and fuel cell electric vehicles do not. Since the ADAGE model represents the whole economy, and both the transportation and electricity sectors are integrated and linked together in ADAGE, the model is well-suited to estimate the sectoral, as well as the overall, GHG impacts of the wider use of electric vehicles. Increased penetration of EVs results in significant reductions in U.S. transportation sector GHG emissions, increases in U.S. electricity sector GHG emissions, and reduced overall, economy-wide U.S. GHG emissions. As expected, higher oil prices lead to more rapid penetration of alternative fuel vehicles (AFVs), and lower oil prices lead to slower penetration of AFVs.
    Keywords: Research and Development/Tech Change/Emerging Technologies, None/Blank
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ags:pugtwp:333451&r=cmp
  18. By: Xiaobin Tang; Nuo Lei
    Abstract: In the past, the seed keywords for CPI prediction were often selected based on empirical summaries of research and literature studies, an approach prone to omitted and invalid variables. In this paper, we design a keyword expansion technique for CPI prediction based on the cutting-edge NLP model PANGU, and we improve CPI prediction using the corresponding web search indices. Compared with natural language processing models that rely on unsupervised pre-training followed by supervised downstream fine-tuning, such as BERT and NEZHA, the PANGU model can be expanded to obtain more reliable CPI-related keywords through its excellent zero-shot learning capability, without being limited by a downstream fine-tuning data set. Finally, this paper empirically tests the predictive ability of the keywords obtained by this expansion method against historical CPI data.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.05666&r=cmp
  19. By: Dapeng Li; Feiyang Pan; Jia He; Zhiwei Xu; Dandan Tu; Guoliang Fan
    Abstract: In high-dimensional time-series analysis, it is essential to have a set of key factors (namely, the style factors) that explain the change of the observed variable. For example, volatility modeling in finance relies on a set of risk factors, and climate change studies in climatology rely on a set of causal factors. The ideal low-dimensional style factors should balance significance (high explanatory power) and stability (consistency, without significant fluctuations). However, previous supervised and unsupervised feature extraction methods struggle to address this tradeoff. In this paper, we propose Style Miner, a reinforcement learning method to generate style factors. We first formulate the problem as a Constrained Markov Decision Process with explanatory power as the return and stability as the constraint. Then, we design fine-grained immediate rewards and costs and use a Lagrangian heuristic to balance them adaptively. Experiments on real-world financial data sets show that Style Miner outperforms existing learning-based methods by a large margin and achieves a roughly 10% relative gain in R-squared explanatory power compared to industry-renowned factors proposed by human experts.
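    The Lagrangian-relaxation idea in the abstract — maximize explanatory power subject to a stability constraint, with the multiplier adapted on the fly — can be sketched generically. This is an illustrative sketch of a standard constrained-MDP scalarization, not Style Miner's actual reward design; all names and parameters below are hypothetical.

```python
def lagrangian_reward(explanatory_power, instability_cost, lam):
    """Scalarized reward for a constrained MDP: the stability constraint is
    folded into the return via a Lagrange multiplier lam (hypothetical form)."""
    return explanatory_power - lam * instability_cost

def update_multiplier(lam, instability_cost, budget, lr=0.01):
    """Dual ascent on the multiplier: grow lam when the stability budget is
    violated, shrink it (never below zero) when there is slack."""
    return max(0.0, lam + lr * (instability_cost - budget))
```

In practice the agent alternates between policy updates on the scalarized reward and multiplier updates, so the penalty weight adapts to how badly the constraint is being violated.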
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.11716&r=cmp
  20. By: Xuefan, Pan (University of Warwick)
    Abstract: I conduct content analysis and extend existing models of how the stock and foreign currency markets react to the release of Federal Open Market Committee (FOMC) statements and meeting minutes. Tone changes and the uncertainty level of monetary policy communication are measured with a dictionary-based word-count approach at the whole-document level. I further apply the Latent Dirichlet Allocation (LDA) algorithm to investigate the distinct impacts of the topics covered in the meeting minutes. Because the analysis is an event study, high-frequency data is used. I find that tone change and uncertainty level have limited power to explain the magnitude of the market reaction to the release of FOMC documents, especially statements. FOMC communication is more informative for the market during the zero-lower-bound period than over the whole sample period.
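    The document-level word-count measure described in the abstract reduces to counting dictionary hits and normalizing by document length. A minimal sketch follows; the word lists here are invented for illustration (real studies use established lexicons), and the tokenization is deliberately naive.

```python
# Hypothetical mini-lexicons, for illustration only.
HAWKISH = {"tighten", "restrictive", "raise", "inflationary"}
DOVISH = {"accommodative", "easing", "lower", "stimulus"}
UNCERTAIN = {"uncertain", "uncertainty", "may", "might", "possibly"}

def tone_and_uncertainty(text):
    """Document-level tone and uncertainty via dictionary word counts:
    tone = (hawkish - dovish) / total words, uncertainty = uncertain / total."""
    words = text.lower().split()
    n = len(words)
    if n == 0:
        return 0.0, 0.0
    hawk = sum(w in HAWKISH for w in words)
    dove = sum(w in DOVISH for w in words)
    unc = sum(w in UNCERTAIN for w in words)
    return (hawk - dove) / n, unc / n
```

A tone *change* is then simply the difference between this score for consecutive FOMC documents.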
    Keywords: Monetary policy; Communication; Text Mining. JEL Classification: E52; E58
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:wrk:wrkesp:43&r=cmp
  21. By: Schiffman, Florian
    Abstract: In GEMPACK, models are always solved as initial value problems (IVPs) using the linearized form of the levels equations. While this allows the user to solve each step of the IVP efficiently, the overall accuracy and speed are determined by the integration scheme and the number of integration steps. Up to GEMPACK 12.1, only the Euler, leapfrog midpoint and Gragg methods were available, along with their 2- and 3-point Richardson extrapolations. While Euler provides excellent stability, it is very costly to obtain accurate solutions; in contrast, the latter two integrators allow for faster convergence but oftentimes suffer from instabilities. In the current beta version of GEMPACK we address this issue by introducing explicit and embedded Runge-Kutta (RK) integrators as an alternative. Our focus in this work is on the embedded RK methods, with which we developed a new adaptive step size algorithm designed to overcome problems common to CGE models, including asymptotes in the levels variables and the different scales on which the results can vary. Our algorithm provides rapid convergence towards the true solution as well as increased robustness exceeding that of Euler's method. In addition, the new algorithm allows us to provide users with a component-by-component global error estimate; in all our tests, the error estimates appeared to be upper bounds of the true error. Furthermore, these component-by-component error estimates are an excellent debugging tool when developing or extending a CGE model. In all but the simplest test cases, we found that the adaptive step size embedded RK methods provided solutions at least one order of magnitude closer to the true solution in less than half the time to solution required by the old integration schemes.
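    The embedded-RK idea the abstract describes — two solutions of different order from the same stage evaluations, their difference driving the step size — can be illustrated with the simplest embedded pair, Heun(2)/Euler(1). This is a generic textbook sketch with a component-wise scaled error estimate, not GEMPACK's actual algorithm; tolerances and safety factors are illustrative.

```python
import numpy as np

def heun_euler_adaptive(f, t0, t_end, y0, tol=1e-6, h0=0.1):
    """Embedded Heun(2)/Euler(1) pair with adaptive step size.
    The order-1 and order-2 solutions share the same stage k1; their
    difference gives a per-component local error estimate."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1               # 1st-order solution
        y_heun = y + h * (k1 + k2) / 2.0   # 2nd-order solution
        # component-by-component error estimate, scaled to solution magnitude
        scale = 1.0 + np.abs(y_heun)
        err = np.max(np.abs(y_heun - y_euler) / scale)
        if err <= tol:                     # accept; advance with higher order
            t, y = t + h, y_heun
        # shrink or grow the step from the error estimate (capped growth)
        h = 0.9 * h * min(5.0, (tol / max(err, 1e-16)) ** 0.5)
    return y

# e.g. y' = -y, y(0) = 1 over [0, 1]: the result approximates exp(-1)
y = heun_euler_adaptive(lambda t, y: -y, 0.0, 1.0, [1.0], tol=1e-6)
```

Production pairs (e.g. Runge-Kutta-Fehlberg or Dormand-Prince) use more stages and higher orders, but the accept/reject and step-control logic has this same shape.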
    Keywords: Research and Development/Tech Change/Emerging Technologies
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ags:pugtwp:333479&r=cmp
  22. By: González, Javier; Latorre, María C.; Valverde, Gabriela Ortiz
    Abstract: After 20 years of negotiations, the European Union (EU27) and Mercosur (made up of Argentina, Brazil, Paraguay, and Uruguay) signed an "Association Agreement" that not only liberalizes trade in goods and services, but also extends into other areas such as sustainability and respect for human rights. Thanks to its scope and the market size of its member economies, it is one of the largest trade agreements in the world. As far as trade in goods is concerned, the EU27 undertakes to liberalize 92% of imports coming from Mercosur over a period of up to 10 years. Mercosur members, for their part, commit to liberalizing 91% of imports coming from the EU over a period of up to 15 years. Regarding services, the agreement covers all modes of supply, including the liberalization of investment (establishment) both in the services sector and in other sectors. It also embodies the elimination of unnecessary technical barriers to trade (TBTs), which creates a framework within which technical regulations and standards can converge. We employ a Computable General Equilibrium (CGE) methodology, namely the static and dynamic settings of the Global Trade Analysis Project (GTAP) model (Hertel and Tsigas, 1997; Corong et al., 2017; Aguiar et al., 2019a), using the GEMPACK (General Equilibrium Modelling Package) software. The combination of the static and the dynamic model allows us to estimate the effects of the agreement in the short and long run. It also offers a way to explore the potential effects of capital flows related to the agreement (Ortiz-Valverde and Latorre, 2020). Moreover, it allows us to better capture the phased reduction in tariffs and the evolution of quotas, which are liberalized progressively over the years. Dynamic estimations also constitute a novelty in the analysis of this agreement, since most studies have focused on outcomes after the agreement is fully implemented.
    Keywords: International Relations/Trade
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ags:pugtwp:333391&r=cmp
  23. By: Hu, Xiurong; Liu, Junfeng
    Abstract: China’s environmental protection tax (EPT) has been in force since the beginning of 2018 to address the country’s severe air pollution problems. Meanwhile, a carbon tax (CAT) is widely regarded as one of the most effective instruments for climate mitigation. However, the combined effects of different environmental taxes on emission reduction have not been comprehensively characterized. Beyond environmental taxes, changes in trade policy between countries may also influence pollution emissions, which likewise merits deeper investigation. To provide insights for decision-making on air pollution mitigation under an uncertain worldwide trade policy, we simulate the effects of combinations of EPT and CAT changes on air pollutant emissions and economic activity. Using a multi-province computable general equilibrium (CGE) model, we quantify the emission changes resulting from individual and mixed policy components, varying the EPT from 2 yuan to 12 yuan per kilogram of emissions and the CAT from 50 yuan to 300 yuan per tonne of CO2. Our results show that although the CAT may deliver greater emission reductions than the EPT, the EPT is more cost-effective than the CAT. Moreover, the CAT is largely redundant given the EPT, while the EPT is complementary to a CAT. At the province level, carbon pricing in most provinces increases air pollution mitigation but also raises the GDP burden, whereas Heilongjiang, Tianjin, Jiangsu, Hainan, Guangxi and Jiangxi could achieve both air pollution reduction and GDP gains. The provincial distribution of impacts is vital for regional equity; we suggest introducing relatively smaller tax rates in provinces that suffer large GDP losses, or providing subsidies to those provinces.
    Keywords: Environmental Economics and Policy, International Relations/Trade
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ags:pugtwp:333460&r=cmp
  24. By: Ramos, Maria Priscila; Custodio, Estefania; Jiménez, Sofía; Sartori, Martina; Ferrari, Emanuele
    Abstract: Kenya is particularly concerned about the achievement of the Sustainable Development Goal #2 (SDG #2: zero hunger), and its associated consequences for society. Malnutrition in all its forms (stunting, wasting, micronutrient deficiencies and/or overweight/obesity) can compromise human development and economic growth through different pathways. In this context, it is possible to identify at least two pathways through which improving food security and nutrition (FS&N) could enhance labour productivity: improving dietary nutrient intake (calories, macro- and micronutrients) could allow for (i) better learning capacity and (ii) the reinforcement of health conditions. In turn, education and good health improve labour productivity. Thus, the aim of this paper is to provide insights into the linkages between FS&N indicators and labour productivity along dynamic pathways in a CGE framework, in particular by modelling the baseline drivers of labour productivity and growth. Moreover, the estimates allow us to simulate food policy scenarios with positive impacts on nutrition and health and, thus, on economic growth. Our results show that daily micronutrient intakes (iron, zinc, calcium, vitamins B2 and A) are indeed significant, positive predictors of labour productivity improvement (wage increases), as is education, while disabilities and/or diseases have a significantly negative impact on labour performance. We also note that for vitamins C and B12 the relation is negative when all the variables are included in the regression but positive when we consider them separately. All in all, the results confirm the virtuous cycle between health, nutrition, education and labour productivity.
    Keywords: Food Security and Poverty, Labor and Human Capital
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ags:pugtwp:333426&r=cmp
  25. By: Xin-Yu Wang; Chung-Han Hsieh
    Abstract: In this paper, we extend the existing double linear policy by incorporating time-varying weights instead of constant weights and study a certain robustness property, called robust positive expectation (RPE), in a discrete-time setting. We prove that the RPE property holds by employing a novel elementary symmetric polynomials characterization approach and derive an explicit expression for both the expected cumulative gain-loss function and its variance. To validate our theory, we perform extensive Monte Carlo simulations using various weighting functions. Furthermore, we demonstrate how this policy can be effectively incorporated with standard technical analysis techniques, using the moving average as a trading signal.
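    The robust positive expectation property can be illustrated with a Monte Carlo check on a toy constant-weight double linear policy: half the capital compounds long with weight w, half short with -w, and the expected cumulative gain is positive regardless of the sign of the drift. This is a hedged sketch under iid Gaussian returns with illustrative parameters, not the paper's time-varying-weight policy or its exact setting.

```python
import numpy as np

def double_linear_gain(returns, w, v0=1.0):
    """Cumulative gain-loss of a constant-weight double linear policy:
    half the capital compounds long with weight w, half short with -w."""
    long_leg = (v0 / 2) * np.prod(1 + w * returns, axis=-1)
    short_leg = (v0 / 2) * np.prod(1 - w * returns, axis=-1)
    return long_leg + short_leg - v0

rng = np.random.default_rng(0)
n_paths, n_steps, w = 200_000, 10, 0.5
results = {}
for mu in (0.02, -0.02):  # positive and negative drift in per-period returns
    x = rng.normal(mu, 0.05, size=(n_paths, n_steps))
    results[mu] = double_linear_gain(x, w).mean()
# Under iid returns with mean mu, the expected gain is
# (v0/2) * ((1 + w*mu)**n + (1 - w*mu)**n) - v0,
# which is strictly positive for mu != 0 by convexity -- the RPE property.
```

Either drift direction should yield a positive average gain in the simulation, matching the convexity argument in the comment.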
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.10806&r=cmp
  26. By: Manav Raj; Justin Berg; Rob Seamans
    Abstract: The emergence of generative AI technologies, such as OpenAI's ChatGPT chatbot, has expanded the scope of tasks that AI tools can accomplish and enabled AI-generated creative content. In this study, we explore how disclosure regarding the use of AI in the creation of creative content affects human evaluation of such content. In a series of pre-registered experimental studies, we show that AI disclosure has no meaningful effect on evaluation either for creative or descriptive short stories, but that AI disclosure has a negative effect on evaluations for emotionally evocative poems written in the first person. We interpret this result to suggest that reactions to AI-generated content may be negative when the content is viewed as distinctly "human." We discuss the implications of this work and outline planned pathways of research to better understand whether and when AI disclosure may affect the evaluation of creative content.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2303.06217&r=cmp

This nep-cmp issue is ©2023 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.