NEP: New Economics Papers
on Computational Economics
Issue of 2024‒04‒08
fourteen papers chosen by
By: | Bantle, Melissa |
Abstract: | The paper uses a machine learning technique to build a screen for collusive behavior. Such tools can be applied by competition authorities, but also by companies to screen the behavior of their suppliers. The method is applied to the German retail gasoline market to detect anomalous behavior in the price setting of filling stations. To this end, the algorithm identifies anomalies in the data-generating process. The results show that various anomalies can be detected with this method. These anomalies in price-setting behavior are then discussed with respect to their implications for the competitiveness of the market. |
Keywords: | Machine Learning, Cartel Screens, Fuel Retail Market |
JEL: | C53 K21 L44 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hohdps:285380&r=cmp |
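A minimal sketch of what such a screen could look like, using a standard Isolation Forest as the anomaly detector (the paper does not specify its algorithm, and the input file and column names below are hypothetical):

```python
# Flag filling stations whose price-setting features deviate from the bulk
# of the market. The features and the Isolation Forest detector are
# illustrative assumptions, not the paper's actual pipeline.
import pandas as pd
from sklearn.ensemble import IsolationForest

prices = pd.read_csv("station_prices.csv")  # hypothetical station-day panel
features = (prices.groupby("station_id")
                  .agg(mean_margin=("margin", "mean"),
                       changes_per_day=("n_price_changes", "mean"),
                       rival_response_lag=("lag_to_rival_minutes", "median")))

detector = IsolationForest(contamination=0.01, random_state=0)
features["flag"] = detector.fit_predict(features)  # -1 marks anomalies
print(features[features["flag"] == -1])            # stations worth a closer look
```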
By: | Mohammad Ali Labbaf Khaniki; Mohammad Manthouri |
Abstract: | This study presents an innovative approach for predicting cryptocurrency time series, focusing on Bitcoin, Ethereum, and Litecoin. The methodology integrates technical indicators, a Performer neural network, and a BiLSTM (Bidirectional Long Short-Term Memory) to capture temporal dynamics and extract significant features from raw cryptocurrency data. The application of technical indicators facilitates the extraction of intricate patterns, momentum, volatility, and trends. The Performer neural network, employing Fast Attention Via positive Orthogonal Random features (FAVOR+), has demonstrated superior computational efficiency and scalability compared to the traditional multi-head attention mechanism in Transformer models. Additionally, the integration of BiLSTM in the feedforward network enhances the model's capacity to capture temporal dynamics in the data, processing it in both forward and backward directions. This is particularly advantageous for time series data, where past and future data points can influence the current state. The proposed method has been applied to the hourly and daily timeframes of the major cryptocurrencies, and its performance has been benchmarked against other methods documented in the literature. The results underscore the potential of the proposed method to outperform existing models, marking significant progress in the field of cryptocurrency price prediction. |
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.03606&r=cmp |
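The BiLSTM component described above is straightforward to reproduce; here is a minimal PyTorch sketch (the Performer/FAVOR+ attention block and the indicator pipeline are omitted, and all dimensions are invented for illustration):

```python
import torch
import torch.nn as nn

class BiLSTMHead(nn.Module):
    """Bidirectional LSTM over an indicator sequence with a next-step
    price head; the series is processed in both directions."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])   # predict from the last time step

model = BiLSTMHead(n_features=8)     # e.g. eight technical indicators
x = torch.randn(32, 48, 8)           # 48 hourly observations per sample
print(model(x).shape)                # torch.Size([32, 1])
```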
By: | Daixin Wang; Zhiqiang Zhang; Yeyu Zhao; Kai Huang; Yulin Kang; Jun Zhou |
Abstract: | User financial default prediction plays a critical role in credit risk forecasting and management. It aims at predicting the probability that a user will fail to make repayments in the future. Previous methods mainly extract a set of individual features regarding the user's own profile and behavior and build a binary classification model to make default predictions. However, these methods cannot achieve satisfactory results, especially for users with limited information. Although recent efforts suggest that default prediction can be improved by social relations, they fail to capture the higher-order topological structure at the level of small subgraph patterns. In this paper, we fill this gap by proposing a motif-preserving Graph Neural Network with curriculum learning (MotifGNN) to jointly learn the lower-order structures from the original graph and higher-order structures from multi-view motif-based graphs for financial default prediction. Specifically, to solve the problem of weak connectivity in motif-based graphs, we design a motif-based gating mechanism. It uses the information learned from the original graph, with its good connectivity, to strengthen the learning of the higher-order structure. Considering that the motif patterns of different samples are highly unbalanced, we also propose a curriculum learning mechanism over the whole learning process to focus more on samples with uncommon motif distributions. Extensive experiments on one public dataset and two industrial datasets demonstrate the effectiveness of the proposed method. |
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.06482&r=cmp |
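To make the notion of a motif-based graph concrete, here is a toy computation of a triangle-motif adjacency matrix, one possible view in a multi-view setup (the paper's actual motif set and gating mechanism are not reproduced):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],          # toy undirected user-relation graph
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

# M[i, j] counts the triangles that edge (i, j) participates in.
M = (A @ A) * A
print(M)
# Nodes whose rows in M are all zero illustrate the weak-connectivity
# problem that motivates the paper's motif-based gating mechanism.
```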
By: | Young Shin Kim; Hyun-Gyoon Kim |
Abstract: | In this study, we discuss a machine learning technique to price exotic options with two underlying assets based on a non-Gaussian Lévy process model. We introduce a new multivariate Lévy process model named the generalized normal tempered stable (gNTS) process, which is defined by time-changed multivariate Brownian motion. Since the probability density function (PDF) of the gNTS process is not given by a simple analytic formula, we use the conditional real-valued non-volume-preserving (CRealNVP) model, a type of flow-based generative network. We then discuss no-arbitrage pricing under the gNTS model for the quanto option, whose underlying assets consist of a foreign index and a foreign exchange rate. We also present the training of the CRealNVP model to learn the PDF of the gNTS process on a training set generated by Monte Carlo simulation. Next, we estimate the parameters of the gNTS model with the trained CRealNVP model using empirical data observed in the market. Finally, we provide a method to find an equivalent martingale measure for the gNTS model and to price the quanto option using the CRealNVP model with the risk-neutral parameters of the gNTS model. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.17919&r=cmp |
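For readers unfamiliar with RealNVP, a single affine coupling layer (the building block of such flows) looks roughly as follows in PyTorch; the conditional variant used in the paper would additionally condition the scale/shift network on the gNTS parameters, and the hidden sizes here are arbitrary:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: transform half of the input with
    a scale/shift computed from the other half; the Jacobian
    log-determinant needed for exact density evaluation comes out free."""
    def __init__(self, dim: int):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)  # (z, log|det J|)

layer = AffineCoupling(dim=2)        # two underlyings: index and FX rate
z, log_det = layer(torch.randn(5, 2))
```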
By: | Rambod Rahmani; Marco Parola; Mario G. C. A. Cimino |
Abstract: | Due to the recent increase in interest in Financial Technology (FinTech), applications like credit default prediction (CDP) are gaining significant industrial and academic attention. CDP plays a crucial role in assessing the creditworthiness of individuals and businesses, enabling lenders to make informed decisions regarding loan approvals and risk management. In this paper, we propose a workflow-based approach to improve CDP, the task of assessing the probability that a borrower will default on his or her credit obligations. The workflow consists of multiple steps, each designed to leverage the strengths of different techniques featured in machine learning pipelines and thus best solve the CDP task. We employ a comprehensive and systematic approach, starting with data preprocessing using Weight of Evidence encoding, a technique that in a single shot scales the data by removing outliers, handling missing values, and making data uniform for models working with different data types. Next, we train several families of learning models, introducing ensemble techniques to build more robust models, and perform hyperparameter optimization via multi-objective genetic algorithms to account for both predictive accuracy and financial aspects. Our research aims to contribute to the FinTech industry by providing a tool for more accurate and reliable credit risk assessment, benefiting both lenders and borrowers. |
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.03785&r=cmp |
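A minimal sketch of the Weight of Evidence encoding step, assuming a pandas DataFrame with a binary default flag (production code would add binning of numeric columns and smoothing of zero counts):

```python
import numpy as np
import pandas as pd

def woe_encode(df: pd.DataFrame, col: str, target: str) -> pd.Series:
    """Replace each category by log(share of non-defaulters / share of
    defaulters) observed in that category."""
    g = df.groupby(col)[target].agg(["sum", "count"])
    bad = g["sum"]                       # defaults per category
    good = g["count"] - bad
    woe = np.log((good / good.sum()) / (bad / bad.sum()))
    return df[col].map(woe)

data = pd.DataFrame({"employment": ["salaried", "self", "self", "salaried"],
                     "default":    [0, 1, 0, 1]})
data["employment_woe"] = woe_encode(data, "employment", "default")
print(data)
```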
By: | Juan C. King; Roberto Dale; José M. Amigó |
Abstract: | The objective of this paper is the construction of new indicators that can be useful for operating in the cryptocurrency market. These indicators are based on public data obtained from the blockchain network, specifically from the nodes that make up Bitcoin mining; our analysis is therefore specific to that network. The results obtained with numerical simulations of algorithmic trading and with prediction via statistical models and machine learning demonstrate the importance of variables such as the hash rate, the mining difficulty, or the cost per transaction when it comes to trading Bitcoin assets or predicting the direction of the price. Variables obtained from the blockchain network are here called blockchain metrics. The corresponding indicators (inspired by the "Hash Ribbon") perform well in locating buy signals. From our results, we conclude that such blockchain indicators provide information with a statistical advantage in the highly volatile cryptocurrency market. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.00770&r=cmp |
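As an illustration of a blockchain-metric indicator, here is a Hash-Ribbon-style buy signal (a short moving average of the hash rate crossing back above a long one); the file and column names are hypothetical, and the paper's exact rules may differ:

```python
import pandas as pd

df = pd.read_csv("btc_metrics.csv", parse_dates=["date"], index_col="date")

df["hr_short"] = df["hash_rate"].rolling(30).mean()   # 30-day average
df["hr_long"] = df["hash_rate"].rolling(60).mean()    # 60-day average

# Buy when the short average crosses above the long one (the end of a
# miner-capitulation phase, the pattern the "Hash Ribbon" looks for).
cross_up = (df["hr_short"] > df["hr_long"]) & \
           (df["hr_short"].shift(1) <= df["hr_long"].shift(1))
print(df.index[cross_up])                             # candidate buy dates
```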
By: | Oluwafemi F Olaiyapo |
Abstract: | The objective of this research is to examine how sentiment analysis can be employed to generate trading signals for the Foreign Exchange (Forex) market. The author assessed sentiment in social media posts and news articles pertaining to the United States Dollar (USD) using a combination of methods: lexicon-based analysis and the Naive Bayes machine learning algorithm. The findings indicate that sentiment analysis proves valuable in forecasting market movements and devising trading signals. Notably, its effectiveness is consistent across different market conditions. The author concludes that by analyzing sentiment expressed in news and social media, traders can glean insights into prevailing market sentiments towards the USD and other pertinent countries, thereby aiding trading decision-making. This study underscores the importance of weaving sentiment analysis into trading strategies as a pivotal tool for predicting market dynamics. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.00785&r=cmp |
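The Naive Bayes leg of such a pipeline fits in a few lines of scikit-learn; the toy headlines and the signal mapping below are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["dollar rallies on strong jobs data",        # toy training set
         "fed signals rate cuts, dollar slides",
         "usd steady ahead of cpi release"]
labels = ["positive", "negative", "neutral"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

signal = {"positive": "buy USD", "negative": "sell USD", "neutral": "hold"}
headline = "dollar strengthens after gdp surprise"
print(signal[clf.predict([headline])[0]])
```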
By: | Alessandro Niro; Michael Werner |
Abstract: | Detecting anomalies is important for identifying inefficiencies, errors, or fraud in business processes. Traditional process mining approaches focus on analyzing 'flattened', sequential event logs based on a single case notion. However, many real-world process executions exhibit a graph-like structure, where events can be associated with multiple cases. Flattening event logs requires selecting a single case identifier, which creates a gap with the real event data and artificially introduces anomalies in the event logs. Object-centric process mining avoids these limitations by allowing events to be related to different cases. This study proposes a novel framework for anomaly detection in business processes that exploits graph neural networks and the enhanced information offered by object-centric process mining. We first reconstruct and represent the process dependencies of the object-centric event logs as attributed graphs and then employ a graph convolutional autoencoder architecture to detect anomalous events. Our results show that the approach provides promising performance in detecting anomalies at the activity-type and attribute level, although it struggles to detect anomalies in the temporal order of events. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.00775&r=cmp |
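A toy version of the reconstruction idea, with a single graph-convolution layer in the encoder and per-event anomaly scores (a sketch of the principle, not the authors' architecture):

```python
import torch
import torch.nn as nn

class GCNAutoencoder(nn.Module):
    """Encode event attributes through one GCN layer (A_hat @ X @ W) and
    decode them back; events that reconstruct poorly score as anomalous."""
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.enc = nn.Linear(n_features, hidden)
        self.dec = nn.Linear(hidden, n_features)

    def forward(self, A_hat, X):
        H = torch.relu(A_hat @ self.enc(X))
        return A_hat @ self.dec(H)

A_hat = torch.eye(4)                 # stand-in for D^-1/2 (A + I) D^-1/2
X = torch.randn(4, 5)                # 4 events with 5 encoded attributes
model = GCNAutoencoder(n_features=5)
scores = ((model(A_hat, X) - X) ** 2).mean(dim=1)   # anomaly score per event
print(scores)
```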
By: | Benjamin Wee |
Abstract: | Simulation Based Calibration (SBC) is applied to analyse two commonly used, competing Markov chain Monte Carlo algorithms for estimating the posterior distribution of a stochastic volatility model. In particular, the bespoke 'off-set mixture approximation' algorithm proposed by Kim, Shephard, and Chib (1998) is explored together with a Hamiltonian Monte Carlo algorithm implemented through Stan. The SBC analysis involves a simulation study to assess whether each sampling algorithm has the capacity to produce valid inference for the correctly specified model, while also characterising statistical efficiency through the effective sample size. Results show that Stan's No-U-Turn sampler, an implementation of Hamiltonian Monte Carlo, produces a well-calibrated posterior estimate while the celebrated off-set mixture approach is less efficient and poorly calibrated, though model parameterisation also plays a role. Limitations and restrictions of generality are discussed. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.12384&r=cmp |
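The SBC logic itself is compact: draw a parameter from the prior, simulate data, sample the posterior, and record the rank of the true draw among the posterior draws; a calibrated sampler yields uniform ranks. A generic sketch with a placeholder standing in for the two samplers under study:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_posterior(y, n_draws):
    # Placeholder for the sampler being checked (e.g. Stan's NUTS or the
    # off-set mixture sampler); here a crude normal approximation.
    return rng.normal(y.mean(), y.std() / np.sqrt(len(y)), n_draws)

ranks = []
for _ in range(500):
    theta = rng.normal(0.0, 1.0)              # draw from the prior
    y = rng.normal(theta, 1.0, size=50)       # simulate data given theta
    draws = fit_posterior(y, n_draws=99)
    ranks.append(int((draws < theta).sum()))  # rank in {0, ..., 99}

hist, _ = np.histogram(ranks, bins=10)        # flat histogram = calibrated
print(hist)
```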
By: | Jamotton, Charlotte (Université catholique de Louvain, LIDAM/ISBA, Belgium); Hainaut, Donatien (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
Abstract: | This article explores the application of Latent Dirichlet Allocation (LDA) to structured tabular insurance data. LDA is a probabilistic topic modelling approach initially developed in Natural Language Processing (NLP) to uncover the underlying structure of (unstructured) textual data. It was designed to represent textual documents as mixtures of latent (hidden) topics, and topics as mixtures of words. This study introduces the LDA document-topic distribution as a soft clustering tool for unsupervised learning tasks in the actuarial field. By defining each topic as a risk profile, and by treating insurance policies as documents and the modalities of categorical covariates as words, we show how LDA can be extended beyond textual data and can offer a framework to uncover underlying structures within insurance portfolios. Our experimental results and analysis highlight how the modelling of policies based on topic cluster membership, and the identification of dominant modalities within each risk profile, can give insights into the prominent risk factors contributing to higher or lower claim frequencies. |
Keywords: | Latent Dirichlet Allocation ; topic modelling ; soft clustering ; insurance data ; risk profile ; natural language processing |
Date: | 2024–03–08 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2024008&r=cmp |
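The recoding the authors describe can be sketched with scikit-learn: each policy becomes a "document" whose "words" are the modalities of its categorical covariates, and the fitted document-topic matrix gives the soft risk-profile memberships (the toy portfolio below is invented):

```python
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation

policies = pd.DataFrame({"vehicle": ["suv", "city_car", "suv"],
                         "area":    ["urban", "urban", "rural"],
                         "driver":  ["young", "senior", "young"]})

counts = pd.get_dummies(policies).astype(int)   # modality "word" counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
profiles = lda.fit_transform(counts.values)     # policy-by-topic mixtures

print(profiles.round(2))                        # soft cluster memberships
print(counts.columns[lda.components_.argmax(axis=1)])  # top modality per profile
```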
By: | Savvakis C. Savvides (Visiting Lecturer, John Deutsch International Executive Programs, Queen's University, Canada.) |
Abstract: | In this paper the author highlights the importance of constructing an integrated financial model and of using growth patterns to project the key parameters, so as to generate consistent and meaningful scenarios in a Monte Carlo simulation risk analysis and to contain the correlation problem. The Integrated Financial Model© by Savvakis C. Savvides was created and tested over the author's many years of experience in corporate lending and project finance, as well as in teaching investment appraisal and risk analysis and developing related software. It is argued that, to apply Monte Carlo simulation risk analysis in a meaningful manner and to enhance the decision-making process, the methodology should not be used “as a toy” but rather in a thoughtful manner that takes into consideration all aspects of a prudently constructed business plan, as manifested in an integrated financial model. The use of growth pattern functions for the key risk variables is essential to contain the correlation problem and to base the simulation on consistent and realistic scenarios. |
Keywords: | Market analysis, quantity demanded, elasticity of demand, project evaluation, market segmentation, market penetration. |
JEL: | D11 D61 H43 L21 M31 |
Date: | 2024–03–18 |
URL: | http://d.repec.org/n?u=RePEc:qed:dpaper:4615&r=cmp |
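The growth-pattern idea can be illustrated in a few lines: rather than sampling each year's value independently (which ignores the correlation between years), one draws a single growth rate per simulation run and projects the whole path from it, so every scenario is internally consistent. All figures below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
years, runs = 10, 10_000

base_sales = 1_000.0
growth = rng.triangular(0.01, 0.04, 0.08, size=runs)  # one rate per run
t = np.arange(1, years + 1)
sales = base_sales * (1.0 + growth[:, None]) ** t     # consistent paths

margin, discount = 0.2, 0.10
npv = (margin * sales / (1.0 + discount) ** t).sum(axis=1)
print(f"P(NPV < 1500) = {(npv < 1500).mean():.1%}")
```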
By: | Andrzej Daniluk; Evgeny Lakshtanov; Rafal Muchorski |
Abstract: | We present a novel technique of Monte Carlo error reduction that finds direct application in option pricing and Greeks estimation. The method is applicable to any local stochastic volatility (LSV) modelling framework and covers a broad class of payoffs, including path-dependent and multi-asset cases. Most importantly, it can reduce the Monte Carlo error by up to an order of magnitude, as shown in several numerical examples. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.12528&r=cmp |
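The abstract does not spell out the technique, but the classic baseline it competes with is worth recalling: a control variate with a known mean, here the discounted underlying in a European call under Black-Scholes (not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(2)
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 100_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

control = np.exp(-r * T) * ST            # discounted asset, known mean S0
beta = np.cov(payoff, control)[0, 1] / control.var()
adjusted = payoff - beta * (control - S0)

# Standard errors before and after the control-variate adjustment.
print(payoff.std() / np.sqrt(n), adjusted.std() / np.sqrt(n))
```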
By: | Dengler, Thomas; Gerke, Rafael; Giesen, Sebastian; Kienzler, Daniel; Röttger, Joost; Scheer, Alexander; Wacks, Johannes |
Abstract: | Optimal policy projections (OPPs) offer a flexible way to derive scenario-based policy recommendations. This note describes how to calculate OPPs for a simple textbook New Keynesian model and provides illustrations for various examples. It also demonstrates the versatility of the approach by showing OPP results for simulations conducted using a medium-scale DSGE model and a New Keynesian model with heterogeneous households. |
Keywords: | Optimal monetary policy, macroeconomic projections, New Keynesian models, household heterogeneity |
JEL: | C63 E20 E31 E47 E52 E58 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubtps:285379&r=cmp |
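The core of an OPP calculation is a linear-quadratic problem: given baseline projections and the impulse responses of the target variables to future policy shocks, choose the shock path that minimizes a quadratic loss. A stylized sketch with invented numbers (not the models used in the note):

```python
import numpy as np

H = 8                                     # projection horizon
pi_base = np.full(H, 1.0)                 # baseline inflation-gap projection
y_base = np.zeros(H)                      # baseline output-gap projection
R_pi = -0.3 * np.tril(np.ones((H, H)))    # d(inflation)/d(policy shock)
R_y = -0.5 * np.tril(np.ones((H, H)))     # d(output gap)/d(policy shock)
lam = 0.5                                 # loss weight on the output gap

# min_u ||pi_base + R_pi u||^2 + lam ||y_base + R_y u||^2
A = R_pi.T @ R_pi + lam * R_y.T @ R_y
b = -(R_pi.T @ pi_base + lam * R_y.T @ y_base)
u = np.linalg.solve(A, b)                 # optimal policy-shock path

print((pi_base + R_pi @ u).round(2))      # inflation under the OPP
```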
By: | Wagner, Joachim |
Abstract: | The use of cloud computing by firms can be expected to go hand in hand with higher productivity, more innovation, and lower costs, and should therefore be positively related to export activities. Empirical evidence on the link between cloud computing and exports, however, is missing. This paper uses firm-level data for manufacturing enterprises from the 27 member countries of the European Union, taken from the Flash Eurobarometer 486 survey conducted in February–May 2020, to investigate this link. Applying standard parametric econometric models and a new machine-learning estimator, Kernel-Regularized Least Squares (KRLS), we find that firms which use cloud computing are more likely to export, more likely to export to various destinations all over the world, and export to a larger number of different destinations. The estimated cloud computing premium for the extensive margins of exports is statistically highly significant after controlling for firm size, firm age, patents, and country, and the size of this premium can be considered large. Extensive margins of exports and the use of cloud computing are positively related. |
Keywords: | Cloud computing, exports, firm level data, Flash Eurobarometer 486, kernel-regularized least squares (KRLS) |
JEL: | D22 F14 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kcgwps:285359&r=cmp |
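KRLS is closely related to kernel ridge regression with a Gaussian kernel, so the flavour of the exercise can be sketched with scikit-learn on invented data (the actual study uses Flash Eurobarometer 486 firms and additional controls):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([rng.integers(0, 2, n),   # cloud computing dummy
                     rng.normal(size=n),       # log firm size (standardised)
                     rng.normal(size=n)])      # firm age (standardised)
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)

krls = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
krls.fit(X, y)

# "Cloud computing premium": average difference in fitted outcomes with the
# dummy switched on versus off, other covariates held fixed.
X_on, X_off = X.copy(), X.copy()
X_on[:, 0], X_off[:, 0] = 1.0, 0.0
print((krls.predict(X_on) - krls.predict(X_off)).mean())
```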