NEP: New Economics Papers
on Computational Economics
Issue of 2022‒07‒11
thirteen papers chosen by
By: | Nicole Koenigstein |
Abstract: | The growth of machine-readable data in finance, such as alternative data, requires new modeling techniques that can handle non-stationary and non-parametric data. Due to the underlying causal dependence and the size and complexity of the data, we propose a new modeling approach for financial time series data, the $\alpha_{t}$-RIM (recurrent independent mechanism). This architecture makes use of key-value attention to integrate top-down and bottom-up information in a context-dependent and dynamic way. To model the data in such a dynamic manner, the $\alpha_{t}$-RIM utilizes an exponentially smoothed recurrent neural network, which can model non-stationary time series data, combined with a modular and independent recurrent structure. We apply our approach to the closing prices of three selected stocks of the S&P 500 universe as well as their news sentiment scores. The results suggest that the $\alpha_{t}$-RIM is capable of reflecting the causal structure between stock prices and news sentiment, as well as the seasonality and trends. Consequently, this modeling approach markedly improves the generalization performance, that is, the prediction of unseen data, and outperforms state-of-the-art networks such as long short-term memory models. |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.01639&r= |
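A minimal sketch of one ingredient described above, an exponentially smoothed recurrent update, follows. It is not the authors' implementation of the $\alpha_{t}$-RIM: the choice of a GRU cell, the learnable smoothing weight, and all names are illustrative assumptions.

```python
# Illustrative only (not the alpha_t-RIM code): a GRU cell whose hidden state
# is exponentially smoothed, h_t = alpha * h_raw + (1 - alpha) * h_{t-1}.
import torch
import torch.nn as nn

class SmoothedGRUCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # learnable smoothing weight, squashed to (0, 1); an assumption, the
        # paper may parameterise the smoothing differently
        self.alpha_logit = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x_t, h_prev):
        h_raw = self.cell(x_t, h_prev)
        alpha = torch.sigmoid(self.alpha_logit)
        return alpha * h_raw + (1.0 - alpha) * h_prev

# toy usage: one step over a batch of 4 series with 2 features (price, sentiment)
cell = SmoothedGRUCell(input_size=2, hidden_size=8)
h = cell(torch.randn(4, 2), torch.zeros(4, 8))
print(h.shape)  # torch.Size([4, 8])
```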
By: | William Lefebvre; Grégoire Loeper; Huyên Pham |
Abstract: | We propose machine learning methods for solving fully nonlinear partial differential equations (PDEs) with convex Hamiltonian. Our algorithms are conducted in two steps. First the PDE is rewritten in its dual stochastic control representation form, and the corresponding optimal feedback control is estimated using a neural network. Next, three different methods are presented to approximate the associated value function, i.e., the solution of the initial PDE, on the entire space-time domain of interest. The proposed deep learning algorithms rely on various loss functions obtained either from regression or pathwise versions of the martingale representation and its differential relation, and compute simultaneously the solution and its derivatives. Compared to existing methods, the addition of a differential loss function associated to the gradient, and augmented training sets with Malliavin derivatives of the forward process, yields a better estimation of the PDE's solution derivatives, in particular of the second derivative, which is usually difficult to approximate. Furthermore, we leverage our methods to design algorithms for solving families of PDEs with varying terminal conditions (e.g., the option payoff in the context of mathematical finance) by means of the class of DeepONet neural networks, which aim to approximate functional operators. Numerical tests illustrate the accuracy of our methods on the resolution of a fully nonlinear PDE associated to the pricing of options with linear market impact, and on the Merton portfolio selection problem. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.09815&r= |
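The sketch below illustrates only the simplest building block mentioned in the abstract, a regression loss that recovers a conditional expectation along simulated forward paths. The control step, the martingale and differential losses, and the Malliavin-augmented training set are omitted, and the model and parameters are invented for illustration.

```python
# Hedged sketch only: least-squares regression of a terminal payoff on
# simulated forward paths to approximate v(t, x) = E[g(X_T) | X_t = x],
# the most basic of the "regression" losses mentioned above.
import torch
import torch.nn as nn

torch.manual_seed(0)
N, steps, T, sigma = 2048, 20, 1.0, 0.2
dt = T / steps

# simulate driftless geometric Brownian paths X
x = torch.ones(N, 1)
paths, times = [x], [torch.zeros(N, 1)]
for k in range(1, steps + 1):
    x = x + sigma * x * torch.randn(N, 1) * dt ** 0.5
    paths.append(x)
    times.append(torch.full((N, 1), k * dt))
payoff = torch.relu(paths[-1] - 1.0)          # call payoff g(X_T)

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

inp = torch.cat([torch.cat(times, 0), torch.cat(paths, 0)], dim=1)
tgt = payoff.repeat(steps + 1, 1)             # same target for every (t, X_t) on a path
for epoch in range(200):
    opt.zero_grad()
    loss = ((net(inp) - tgt) ** 2).mean()
    loss.backward()
    opt.step()

# estimate of v(0, 1); the exact value is about 0.08 for this toy model
print(float(net(torch.tensor([[0.0, 1.0]]))))
```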
By: | Ziyu Wang; Yuhao Zhou; Jun Zhu |
Abstract: | We investigate nonlinear instrumental variable (IV) regression given high-dimensional instruments. We propose a simple algorithm which combines kernelized IV methods and an arbitrary, adaptive regression algorithm, accessed as a black box. Our algorithm enjoys faster-rate convergence and adapts to the dimensionality of informative latent features, while avoiding an expensive minimax optimization procedure, which has been necessary to establish similar guarantees. It further brings the benefit of flexible machine learning models to quasi-Bayesian uncertainty quantification, likelihood-based model selection, and model averaging. Simulation studies demonstrate the competitive performance of our method. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.10772&r= |
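Below is a rough two-stage illustration in the spirit of combining a kernelized first stage with an arbitrary black-box second-stage learner. It is not the paper's algorithm or its quasi-Bayesian machinery; the data-generating process, kernel settings, and the choice of gradient boosting are assumptions.

```python
# Rough illustration (not the paper's algorithm): a two-stage kernel-IV-style
# procedure with a black-box second-stage regressor.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 5))                  # high-dimensional instruments
u = rng.normal(size=n)                       # unobserved confounder
x = z[:, 0] + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = np.sin(x) + u + rng.normal(size=n)       # outcome with nonlinear structural part

# stage 1: project x on the instruments with a kernel method
stage1 = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(z, x)
x_hat = stage1.predict(z)

# stage 2: any adaptive regressor of y on the projected regressor
stage2 = GradientBoostingRegressor().fit(x_hat.reshape(-1, 1), y)
grid = np.linspace(-2, 2, 5).reshape(-1, 1)
print(stage2.predict(grid))                  # rough estimate of the structural function
```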
By: | Raphael P. B. Piovezan; Pedro Paulo de Andrade Junior |
Abstract: | This article aims to propose and apply a machine learning method to analyze the direction of returns from Exchange Traded Funds (ETFs) using the historical return data of their components, helping to make investment strategy decisions through a trading algorithm. In methodological terms, regression and classification models were applied, using standard datasets from the Brazilian and American markets, in addition to algorithmic error metrics. The results were analyzed and compared with those of the naïve forecast and with the returns obtained by the buy & hold technique over the same period. In terms of risk and return, the models mostly performed better than the control metrics, with emphasis on the linear regression model and on the classification models by logistic regression, support vector machine (using the LinearSVC model), Gaussian Naive Bayes and K-Nearest Neighbors; in certain datasets the returns were twice, and the Sharpe ratio up to four times, those of the buy & hold control model. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.12746&r= |
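As a toy companion to the abstract above, the snippet below fits the named classifier families to synthetic component-return data and scores directional accuracy. The features, labels and split are invented; the paper's error metrics, datasets and trading rules are not reproduced.

```python
# Sketch only: classifying ETF return direction from component returns with
# the model families mentioned above, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(scale=0.01, size=(1000, 20))       # daily returns of 20 components
y = (X.mean(axis=1) + rng.normal(scale=0.005, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
models = {
    "logit": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
for name, m in models.items():
    print(name, round(m.fit(X_tr, y_tr).score(X_te, y_te), 3))   # directional accuracy
```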
By: | Tsang, Andrew |
Abstract: | This paper applies causal machine learning methods to analyze the heterogeneous regional impacts of monetary policy in China. The method uncovers the heterogeneous regional impacts of different monetary policy stances on the provincial figures for real GDP growth, CPI inflation and loan growth compared to the national averages. The varying effects of expansionary and contractionary monetary policy phases on Chinese provinces are highlighted and explained. Subsequently, applying interpretable machine learning, the empirical results show that the credit channel is the main channel affecting the regional impacts of monetary policy. An immediate conclusion from the uneven provincial responses to the "one size fits all" monetary policy is that policymakers should coordinate their efforts to search for the optimal fiscal and monetary policy mix. |
Keywords: | China,monetary policy,regional heterogeneity,machine learning,shadow banking |
JEL: | E52 C54 R11 E61 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:uhhwps:62&r= |
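The sketch below uses a simple T-learner as a stand-in for the causal machine learning estimation of heterogeneous policy effects described above. The variables (provincial characteristics, policy stance, GDP growth) and the random-forest outcome models are assumptions, not the paper's specification.

```python
# Illustrative T-learner sketch: fit separate outcome models under
# expansionary vs. contractionary stances and take the difference of
# predictions as a heterogeneous, province-level effect.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 3000
X = rng.normal(size=(n, 4))                     # provincial characteristics (e.g. credit depth)
t = rng.integers(0, 2, size=n)                  # 1 = expansionary stance, 0 = contractionary
y = 1.0 + 0.8 * t * (1 + X[:, 0]) + X[:, 1] + rng.normal(size=n)   # GDP growth

m1 = RandomForestRegressor().fit(X[t == 1], y[t == 1])
m0 = RandomForestRegressor().fit(X[t == 0], y[t == 0])
cate = m1.predict(X) - m0.predict(X)            # impact varying with the province profile
print(cate[:5].round(2))
```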
By: | Xuxin Mao; Janine Boshoff; Garry Young; Hande Kucuk |
Abstract: | This research explores new ways of applying machine learning to detect outliers in alternative price data sources such as web-scraped data and scanner data. Based on text vectorisation and clustering methods, we build a universal methodology framework which identifies outliers in both data sources. We provide a unique way of conducting goods classification and outlier detection. Using density-based spatial clustering of applications with noise (DBSCAN), we can provide two layers of outlier detection for both scanner data and web-scraped data. For web-scraped data we provide a method to classify text information and identify clusters of products. The framework allows us to efficiently detect outliers and explore abnormal price changes that may be missed by current practices based on the Consumer Prices Indices Manual 2019. Our methodology also provides a good foundation for building better measures of consumer prices with standard time series data transformed from alternative data sources. |
Keywords: | consumer price index, machine learning, outlier detection, scanner data, text density based clustering, web-scraped data |
JEL: | C43 E31 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:nsr:escoet:escoe-tr-12&r= |
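A minimal sketch of the two layers described above: TF-IDF vectorisation of product descriptions to form product clusters, then DBSCAN on price changes, with label -1 flagging outliers. The items, prices and tuning parameters are invented for illustration.

```python
# Sketch only: text-based product grouping plus DBSCAN outlier detection on
# price changes, the two ingredients named in the abstract above.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

descriptions = ["semi skimmed milk 2l", "whole milk 1l",
                "skimmed milk 1 litre", "granulated sugar 1kg"]
tfidf = TfidfVectorizer().fit_transform(descriptions)
product_clusters = DBSCAN(eps=1.0, min_samples=2).fit_predict(tfidf.toarray())
print(product_clusters)                         # cluster id per product, -1 = noise

log_price_changes = np.array([[0.01], [0.02], [0.015], [0.9]])   # last one is abnormal
outlier_labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(log_price_changes)
print(outlier_labels)                           # -1 marks the abnormal price change
```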
By: | German Rodikov; Nino Antulov-Fantulin |
Abstract: | Volatility models of price fluctuations are well studied in the econometrics literature, with more than 50 years of theoretical and empirical findings. The recent advancements in neural networks (NN) in the deep learning field have naturally offered novel econometric modeling tools. However, there is still a lack of explainability and stylized knowledge about volatility modeling with neural networks; the use of stylized facts could help improve the performance of the NN for the volatility prediction task. In this paper, we investigate how the knowledge about the "physics" of the volatility process can be used as an inductive bias to design or constrain a cell state of long short-term memory (LSTM) for volatility forecasting. We introduce a new type of $\sigma$-LSTM cell with a stochastic processing layer, design its learning mechanism and show good out-of-sample forecasting performance. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.07022&r= |
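For orientation, the baseline below maps a window of squared returns to a one-step-ahead volatility forecast with a plain LSTM. The paper's $\sigma$-LSTM adds a stochastic processing layer on top of such a cell, which is not reproduced here; data and hyperparameters are toy assumptions.

```python
# Baseline sketch only (the sigma-LSTM's stochastic layer is omitted): an
# LSTM mapping past squared returns to a positive volatility forecast.
import torch
import torch.nn as nn

class VolLSTM(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, r2_window):               # (batch, window, 1) of squared returns
        out, _ = self.lstm(r2_window)
        return torch.nn.functional.softplus(self.head(out[:, -1]))  # positive output

torch.manual_seed(0)
returns = 0.01 * torch.randn(512, 21, 1)         # toy return windows
x, target = returns[:, :-1] ** 2, returns[:, -1].abs()

model = VolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(x) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(float(loss))
```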
By: | Syed Abul Basher; Perry Sadorsky |
Abstract: | Bitcoin has grown in popularity and has now attracted the attention of individual and institutional investors. Accurate Bitcoin price direction forecasts are important for determining the trend in Bitcoin prices and asset allocation. This paper addresses several unanswered questions. How important are business cycle variables like interest rates, inflation, and market volatility for forecasting Bitcoin prices? Does the importance of these variables change across time? Are the most important macroeconomic variables for forecasting Bitcoin prices the same as those for gold prices? To answer these questions, we utilize tree-based machine learning classifiers, along with traditional logit econometric models. The analysis reveals several important findings. First, random forests predict Bitcoin and gold price directions with a higher degree of accuracy than logit models. Prediction accuracy for bagging and random forests is between 75% and 80% for a five-day prediction. For 10-day to 20-day forecasts bagging and random forests record accuracies greater than 85%. Second, technical indicators are the most important features for predicting Bitcoin and gold price direction, suggesting some degree of market inefficiency. Third, oil price volatility is important for predicting Bitcoin and gold prices indicating that Bitcoin is a substitute for gold in diversifying this type of volatility. By comparison, gold prices are more influenced by inflation than Bitcoin prices, indicating that gold can be used as a hedge or diversification asset against inflation. |
Keywords: | forecasting; machine learning; random forests; Bitcoin; gold; inflation |
JEL: | C58 E44 G17 |
Date: | 2022–06–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:113293&r= |
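A compact illustration of the classification setup described above: a random forest predicting price direction from technical and macroeconomic features. The features and labels are synthetic, so the accuracy and variable importances carry no empirical meaning.

```python
# Sketch under assumptions (synthetic features, not the paper's data): a
# random forest classifying price direction from technical and macro inputs.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 1500
feats = pd.DataFrame({
    "ma_ratio": rng.normal(1, 0.05, n),          # price / 20-day moving average
    "rsi": rng.uniform(20, 80, n),               # technical indicator
    "rate": rng.normal(2, 0.5, n),               # interest rate
    "cpi": rng.normal(3, 1, n),                  # inflation
    "oil_vol": rng.gamma(2, 1, n),               # oil price volatility
})
direction = (0.5 * (feats["ma_ratio"] - 1) + 0.01 * (feats["rsi"] - 50)
             + rng.normal(0, 0.03, n) > 0).astype(int)   # up/down label

split = 1200
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(feats[:split], direction[:split])
print("accuracy:", rf.score(feats[split:], direction[split:]))
print(dict(zip(feats.columns, rf.feature_importances_.round(3))))  # importances
```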
By: | Philipp Ratz |
Abstract: | Artificial Neural Networks (ANN) have been employed for a range of modelling and prediction tasks using financial data. However, evidence on their predictive performance, especially for time-series data, has been mixed. Whereas some applications find that ANNs provide better forecasts than more traditional estimation techniques, others find that they barely outperform basic benchmarks. The present article aims to provide guidance as to when the use of ANNs might yield better results in a general setting. We propose a flexible nonparametric model and extend existing theoretical results for the rate of convergence to include the popular Rectified Linear Unit (ReLU) activation function, and we compare the rate to that of other nonparametric estimators. Finite sample properties are then studied with the help of Monte-Carlo simulations to provide further guidance. An application to estimate the Value-at-Risk of portfolios of varying sizes is also considered to show the practical implications. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.07101&r= |
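One way to operationalise the Value-at-Risk application mentioned above, though not necessarily the paper's estimator, is a ReLU network trained with the pinball loss to estimate a conditional 5% return quantile, as sketched below on synthetic data.

```python
# Hedged sketch: a ReLU network estimating a conditional 5% quantile of
# portfolio returns via the pinball (quantile) loss; VaR is its negative.
import torch
import torch.nn as nn

def pinball_loss(pred, y, tau=0.05):
    e = y - pred
    return torch.mean(torch.maximum(tau * e, (tau - 1) * e))

torch.manual_seed(0)
X = torch.randn(4000, 3)                         # portfolio state variables
y = 0.2 * X[:, :1] + (0.5 + 0.3 * X[:, 1:2].abs()) * torch.randn(4000, 1)

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = pinball_loss(net(X), y)
    loss.backward()
    opt.step()

var_5pct = -net(torch.zeros(1, 3))               # VaR at the average state
print(float(var_5pct))
```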
By: | George Kapetanios; Fotis Papailias |
Abstract: | National statistics offices and similar institutions often produce country indices which are based on the aggregation of a large number of disaggregate series. In some cases these disaggregate series are also published and are therefore available for further research. In other cases the disaggregate series are available only for in-house purposes and are still being assessed to determine whether more indices could be extracted from them. This report is concerned with the very specific task of comparing the gains in nowcasting from using a single aggregate variable/index versus the full use of all the available disaggregate indices. This approach should be viewed as part of an overall dataset assessment framework in which our aim is to assist the applied statistician in judging whether a novel dataset of time series could be useful to economics researchers. |
Keywords: | factor models, neural networks, nowcasting, penalised regression, support vector regression |
JEL: | C53 E37 |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:nsr:escoet:escoe-tr-17&r= |
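The comparison described above can be mimicked on synthetic data as below: nowcasting a target from the single aggregate index with OLS versus from all disaggregate series with a penalised regression. The factor-model, neural-network and support-vector variants listed in the keywords are not shown, and the data-generating process is an assumption.

```python
# Sketch only: aggregate-index nowcast vs. all-disaggregates nowcast via Lasso.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, k = 300, 40
disagg = rng.normal(size=(n, k))                 # disaggregate series
aggregate = disagg.mean(axis=1, keepdims=True)   # published aggregate index
target = disagg[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n)  # only a few series matter

X_tr, X_te, a_tr, a_te, y_tr, y_te = train_test_split(
    disagg, aggregate, target, shuffle=False, test_size=0.25)

agg_model = LinearRegression().fit(a_tr, y_tr)
dis_model = LassoCV(cv=5).fit(X_tr, y_tr)
print("aggregate-only R2:   ", round(agg_model.score(a_te, y_te), 3))
print("all disaggregates R2:", round(dis_model.score(X_te, y_te), 3))
```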
By: | Nicolas Boursin; Carl Remlinger; Joseph Mikael; Carol Anne Hargreaves |
Abstract: | Driven by the good results obtained in computer vision, deep generative methods for time series have been the subject of particular attention in recent years, particularly from the financial industry. In this article, we focus on commodity markets and adapt and test four state-of-the-art generative methods: the Time Series Generative Adversarial Network (GAN) of Yoon et al. [2019], the Causal Optimal Transport GAN of Xu et al. [2020], the Signature GAN of Ni et al. [2020], and the conditional Euler generator of Remlinger et al. [2021]. A first series of experiments deals with the joint generation of historical time series on commodities. A second set deals with deep hedging of commodity options trained on the generated time series. This use case illustrates a purely data-driven approach to risk hedging. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.13942&r= |
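The snippet below sketches only the deep-hedging use case, with simple Gaussian simulations standing in for the GAN-generated commodity scenarios: a network maps (time to maturity, price) to a hedge ratio and is trained to minimise the squared terminal hedging error of a call option. All parameters are illustrative and none of the four generators is implemented.

```python
# Deep-hedging sketch on toy paths (not GAN-generated scenarios).
import torch
import torch.nn as nn

torch.manual_seed(0)
N, steps, strike, sigma = 4096, 30, 1.0, 0.3
dt = 1.0 / steps
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    s = torch.ones(N, 1)
    pnl = torch.zeros(N, 1)
    for k in range(steps):
        state = torch.cat([torch.full((N, 1), 1.0 - k * dt), s], dim=1)
        delta = policy(state)                    # hedge ratio from the network
        ds = sigma * s * torch.randn(N, 1) * dt ** 0.5
        pnl = pnl + delta * ds
        s = s + ds
    payoff = torch.relu(s - strike)
    loss = ((payoff - payoff.mean() - pnl) ** 2).mean()   # replicate payoff net of premium
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```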
By: | Farmer, J. Doyne; Carro, Adrian; Hinterschweiger, Marc; Uluc, Arzu |
Abstract: | We develop an agent-based model of the UK housing market to study the impact of macroprudential policy experiments on key housing market indicators. The heterogeneous nature of this model enables us to assess the effects of such experiments on the housing, rental and mortgage markets not only in the aggregate, but also at the level of individual households and sub-segments, such as first-time buyers, homeowners, buy-to-let investors, and renters. This approach can therefore offer a broad picture of the disaggregated effects of financial stability policies. The model is calibrated using a large selection of micro-data, including data from a leading UK real estate online search engine as well as loan-level regulatory data. With a series of comparative statics exercises, we investigate the impact of (i) a hard loan-to-value limit, and (ii) a soft loan-to-income limit, allowing for a limited share of unconstrained new mortgages. We find that, first, these experiments tend to mitigate the house price cycle by reducing credit availability and therefore leverage. Second, an experiment targeting a specific risk measure may also affect other risk metrics, thus necessitating a careful calibration of the policy to achieve a given reduction in risk. Third, experiments targeting the owner-occupier housing market can spill over to the rental sector, as a compositional shift in home ownership from owner-occupiers to buy-to-let investors affects both the supply of and demand for rental properties. |
Keywords: | Agent-based model, housing market, macroprudential policy, borrower-based measures, buy-to-let sector |
JEL: | D1 D31 E58 G51 R21 R31 |
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:amz:wpaper:2022-06&r= |
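As toy intuition for borrower-based limits, though nothing like the calibrated UK model above, the sketch below caps household bids with a hard loan-to-value limit and compares average transaction prices with and without the cap. The income, deposit and bidding rules are invented.

```python
# Toy sketch only: a hard LTV cap constrains household bids and lowers the
# average transaction price relative to the unconstrained case.
import numpy as np

rng = np.random.default_rng(5)

def average_price(ltv_cap, n_households=10000):
    income = rng.lognormal(mean=10.3, sigma=0.5, size=n_households)
    deposit = 0.1 * income * rng.uniform(0.5, 1.5, size=n_households)
    desired_bid = 4.5 * income                       # behavioural bidding rule
    max_loan = ltv_cap / (1 - ltv_cap) * deposit     # loan s.t. loan/(loan+deposit) = cap
    bid = np.minimum(desired_bid, deposit + max_loan)
    return bid.mean()

print("no binding cap:", round(average_price(ltv_cap=0.99)))
print("90% LTV cap   :", round(average_price(ltv_cap=0.90)))
```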
By: | Sung Jae Jun; Sokbae Lee |
Abstract: | The log odds ratio is a common parameter to measure association between (binary) outcome and exposure variables. Much attention has been paid to its parametric but robust estimation, or its nonparametric estimation as a function of confounders. However, discussion on how to use a summary statistic by averaging the log odds ratio function is surprisingly difficult to find despite the popularity and importance of averaging in other contexts such as estimating the average treatment effect. We propose a couple of efficient double/debiased machine learning (DML) estimators of the average log odds ratio, where the odds ratios are adjusted for observed (potentially high dimensional) confounders and are averaged over them. The estimators are built from two equivalent forms of the efficient influence function. The first estimator uses a prospective probability of the outcome conditional on the exposure and confounders; the second one employs a retrospective probability of the exposure conditional on the outcome and confounders. Our framework encompasses random sampling as well as outcome-based or exposure-based sampling. Finally, we illustrate how to apply the proposed estimators using real data. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.14048&r= |
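A simplified cross-fitted plug-in version of the "prospective" route described above is sketched below: it averages the log odds ratio implied by an estimate of P(Y=1 | exposure, confounders), but omits the efficient influence-function correction terms that the paper's DML estimators use. The data-generating process and learner are assumptions.

```python
# Simplified sketch (cross-fitted plug-in, no influence-function correction):
# average the log odds ratio implied by P(Y=1 | exposure, confounders).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n = 4000
X = rng.normal(size=(n, 5))                          # confounders
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # exposure
p = 1 / (1 + np.exp(-(0.7 * t + X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, p)                               # binary outcome

log_or = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = GradientBoostingClassifier().fit(
        np.column_stack([t[train], X[train]]), y[train])
    p1 = clf.predict_proba(np.column_stack([np.ones(len(test)), X[test]]))[:, 1]
    p0 = clf.predict_proba(np.column_stack([np.zeros(len(test)), X[test]]))[:, 1]
    p1, p0 = p1.clip(1e-3, 1 - 1e-3), p0.clip(1e-3, 1 - 1e-3)
    log_or[test] = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

print("average log odds ratio:", round(log_or.mean(), 2))   # true value is 0.7 here
```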