nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒09‒02
fifteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Modulations Recognition using Deep Neural Network in Wireless Communications By Mossad, Omar S.; ElNainay, Mustafa; Torki, Marwan
  2. A Generalized Endogenous Grid Method for Models with the Option to Default By Jang, Youngsoo; Lee, Soyoung
  3. Machine Learning vs Traditional Forecasting Methods: An Application to South African GDP By Lisa-Cheree Martin
  4. Macroeconomic Impacts of Trade Credit: An Agent-Based Modeling Exploration By Michel Alexandre; Gilberto Tadeu Lima
  5. Quantum Algorithms for Portfolio Optimization By Iordanis Kerenidis; Anupam Prakash; Dániel Szilágyi
  6. Fiscal Reform – Aid or Hindrance: A Computable General Equilibrium (CGE) Analysis for Saudi Arabia By Roos Elizabeth; Adams Philip
  7. An artificial neural network augmented GARCH model for Islamic stock market volatility: Do asymmetry and long memory matter? By Manel Hamdi; Walid Chkili
  8. Intra-day Equity Price Prediction using Deep Learning as a Measure of Market Efficiency By David Byrd; Tucker Hybinette Balch
  9. Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges By Songul Tolan
  10. Machine Learning With Kernels for Portfolio Valuation and Risk Management By Lotfi Boudabsa; Damir Filipović
  11. On deep calibration of (rough) stochastic volatility models By Christian Bayer; Blanka Horvath; Aitor Muguruza; Benjamin Stemper; Mehdi Tomas
  12. Fast Pricing of Energy Derivatives with Mean-reverting Jump Processes By Nicola Cufaro Petroni; Piergiacomo Sabino
  13. AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks By Jingyuan Wang; Yang Zhang; Ke Tang; Junjie Wu; Zhang Xiong
  14. Predicting Consumer Default: A Deep Learning Approach By Albanesi, Stefania; Vamossy, Domonkos
  15. Partially Censored Posterior for Robust and Efficient Risk Evaluation By Agnieszka Borowska; Lennart Hoogerheide; Siem Jan Koopman; Herman van Dijk

  1. By: Mossad, Omar S.; ElNainay, Mustafa; Torki, Marwan
    Abstract: Automatic modulation recognition is one of the most important aspects of cognitive radios (CRs). Unlicensed users, or secondary users (SUs), classify incoming signals to recognize the type of users in the system. Once the available users are detected and classified accurately, the CR can modify its transmission parameters to avoid interference with the licensed users, or primary users (PUs). In this paper, we propose a deep learning technique to detect the modulation schemes used in a number of sampled transmissions. The approach uses a deep neural network with a large number of convolutional filters to extract the distinct features that separate the various modulation classes. Training is performed to improve the overall classification accuracy, with a major focus on the misclassified classes. The results demonstrate that our approach outperforms the recently proposed Convolutional, Long Short-Term Memory (LSTM), Deep Neural Network (CLDNN) in terms of overall classification accuracy. Moreover, the classification accuracy of the proposed approach exceeds that of the CLDNN algorithm at the highest signal-to-noise ratio used.
    Keywords: modulation recognition,deep learning,convolutional neural networks
    Date: 2019
  2. By: Jang, Youngsoo; Lee, Soyoung
    Abstract: We develop an endogenous grid method for models with the option to default in which price schedules are endogenously determined in equilibrium and depend on individuals’ states. The algorithm has noticeable computational benefits in efficiency and accuracy. We obtain these benefits by combining Fella’s (2014) identification of non-concave regions with our algorithm that numerically searches for risky borrowing limits. These two procedures identify the region of solution sets to which Carroll’s (2006) endogenous grid method is applicable. To demonstrate the method, we apply it to Nakajima and Rios-Rull’s (2014) model. In terms of computation time, this method is seven to twenty-seven times faster than the conventional grid search method. Moreover, various types of accuracy tests indicate that our method yields more accurate results than the grid search method.
    Keywords: Endogenous grid method, Default, Bankruptcy
    JEL: C63
    Date: 2019–08
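The backward step at the heart of Carroll's (2006) endogenous grid method, which the paper extends to default models, can be sketched for a deterministic consumption-savings problem with log utility. All parameters and grids below are illustrative, not taken from the paper:

```python
def egm_step(a_grid, c_next, beta=0.96, R=1.03):
    """One backward step of the endogenous grid method (illustrative).
    With u(c) = log(c), the Euler equation u'(c) = beta * R * u'(c')
    inverts analytically to c = c' / (beta * R); the endogenous
    cash-on-hand grid is then m = a' + c, with no root finding."""
    endog_m, c_now = [], []
    for a_next, cn in zip(a_grid, c_next):
        c = cn / (beta * R)          # invert marginal utility
        endog_m.append(a_next + c)   # endogenous cash-on-hand point
        c_now.append(c)
    return endog_m, c_now

a_grid = [0.0, 1.0, 2.0, 3.0]        # end-of-period asset grid
c_next = [0.5, 1.0, 1.5, 2.0]        # next-period consumption policy
m, c = egm_step(a_grid, c_next)
```

The paper's contribution is identifying where this step remains valid when default makes the problem non-concave; the sketch above covers only the concave textbook case.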
  3. By: Lisa-Cheree Martin (Department of Economics, Stellenbosch University)
    Abstract: This study employs traditional autoregressive and vector autoregressive forecasting models, as well as machine learning methods of forecasting, in order to compare the performance of each of these techniques. Each technique is used to forecast the percentage change of quarterly South African Gross Domestic Product, quarter-on-quarter. It is found that machine learning methods outperform traditional methods according to the chosen criteria of minimising root mean squared error and maximising correlation with the actual trend of the data. Overall, the outcomes suggest that machine learning methods are a viable option for policy-makers to use, in order to aid their decision-making process regarding trends in macroeconomic data. As this study is limited by data availability, it is recommended that policy-makers consider further exploration of these techniques.
    Keywords: Machine learning, Forecasting, Elastic-net, Random Forests, Support Vector Machines, Recurrent Neural Networks
    JEL: C32 C45 C53 C88
    Date: 2019
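The study's horse race between traditional and machine learning forecasts can be illustrated with a toy comparison: a least-squares AR(1) against a nearest-neighbour regressor standing in for the machine learning side, both scored by RMSE. The series, the stand-in learner, and all settings below are illustrative, not the study's:

```python
import math

def fit_ar1(y):
    """Least-squares AR(1) without intercept: y_t ~ phi * y_{t-1}."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def knn_forecast(train, x, k=3):
    """Toy machine-learning benchmark: average the successors of the
    k training observations closest to the current value x."""
    idx = sorted(range(len(train) - 1), key=lambda t: abs(train[t] - x))
    return sum(train[t + 1] for t in idx[:k]) / k

def rmse(actual, pred):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

# Synthetic quarterly growth series (stand-in for GDP growth data)
series = [1.0, 0.8, 0.9, 0.7, 0.75, 0.6, 0.65, 0.5, 0.55, 0.45, 0.5, 0.4]
train, test = series[:8], series[8:]
phi = fit_ar1(train)
ar_preds  = [phi * series[7 + i] for i in range(len(test))]   # one-step-ahead
knn_preds = [knn_forecast(train, series[7 + i]) for i in range(len(test))]
print(rmse(test, ar_preds), rmse(test, knn_preds))
```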
  4. By: Michel Alexandre; Gilberto Tadeu Lima
    Abstract: This paper explores the effects of trade credit by assessing its macroeconomic impacts along several dimensions. To that end, we develop an agent-based model (ABM) with two types of firms: downstream firms, which produce a final good for consumption purposes using intermediate goods, and upstream firms, which produce and supply those intermediate goods to the downstream firms. Upstream firms can act as trade credit suppliers by allowing delayed payment of a share of their sales to downstream firms. Our results suggest a potential trade-off between financial robustness, as measured by the proportion of non-performing loans, and the average output level. The intuitive reason is that greater availability of trade credit (which does not necessarily imply proportionately greater actual use of it by downstream firms) allows more financial resources to remain in the real sector, favoring the latter’s financial robustness. Yet, given that trade credit is proportionally more beneficial to smaller downstream firms, it enhances market competition. This results in a decrease in markups and thereby in profits and dividends, which contributes negatively to aggregate demand formation.
    Keywords: Trade credit; agent-based modeling; macroeconomic effects
    JEL: C63 E27 G32
    Date: 2019–08–19
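The trade-credit mechanism the model studies, a downstream firm paying part of its intermediate-goods bill in cash and deferring the rest to the next period, can be reduced to a stylized one-firm-pair loop. Every name and parameter below is illustrative, not the paper's calibration:

```python
import random

def simulate(periods=50, tc_share=0.5, seed=1):
    """Stylized downstream firm: each period it buys one unit of
    intermediates, pays (1 - tc_share) in cash and defers tc_share to
    the next period as trade credit, and sells final goods at a
    stochastic price. Counts periods with a liquidity shortfall."""
    rng = random.Random(seed)
    cash, payable, shortfalls = 10.0, 0.0, 0
    for _ in range(periods):
        revenue = rng.uniform(0.8, 1.2)        # stochastic final-good sales
        cost = 1.0                             # intermediate-goods bill
        due = payable + (1 - tc_share) * cost  # past credit plus cash part
        payable = tc_share * cost              # new trade credit taken on
        cash += revenue - due
        if cash < 0:                           # non-performing obligation
            shortfalls += 1
            cash = 0.0
    return shortfalls

print(simulate(tc_share=0.8), simulate(tc_share=0.0))
```

The sketch only shows the deferred-payment bookkeeping; the paper's ABM adds firm heterogeneity, bank lending, and the competition channel discussed in the abstract.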
  5. By: Iordanis Kerenidis; Anupam Prakash; Dániel Szilágyi
    Abstract: We develop the first quantum algorithm for the constrained portfolio optimization problem. The algorithm has running time $\widetilde{O} \left( n\sqrt{r} \frac{\zeta \kappa}{\delta^2} \log \left(1/\epsilon\right) \right)$, where $r$ is the number of positivity and budget constraints, $n$ is the number of assets in the portfolio, $\epsilon$ the desired precision, and $\delta, \kappa, \zeta$ are problem-dependent parameters related to the well-conditioning of the intermediate solutions. If only a moderately accurate solution is required, our quantum algorithm can achieve a polynomial speedup over the best classical algorithms with complexity $\widetilde{O} \left( \sqrt{r}n^\omega\log(1/\epsilon) \right)$, where $\omega$ is the matrix multiplication exponent that has a theoretical value of around $2.373$, but is closer to $3$ in practice. We also provide some experiments to bound the problem-dependent factors arising in the running time of the quantum algorithm, and these experiments suggest that for most instances the quantum algorithm can potentially achieve an $O(n)$ speedup over its classical counterpart.
    Date: 2019–08
  6. By: Roos Elizabeth (Centre of Policy Studies, Victoria University); Adams Philip (Centre of Policy Studies, Victoria University)
    Abstract: The oil price fell from around $US110 per barrel in 2014 to less than $US50 per barrel at the start of 2017. This put enormous pressure on government budgets within the Gulf Cooperation Council (GCC) region, especially the budgets of oil-exporting countries. The focus of GCC economic policies quickly shifted to fiscal reform. In this paper we use a dynamic CGE model to investigate the economic impact of introducing a 5 per cent Value Added Tax (VAT) and a tax on business profit, with specific reference to the Kingdom of Saudi Arabia (KSA). Our study shows that although the introduction of new taxes improves government tax revenue, the taxes distort markets and lower economic efficiency and production. In all simulations, real GDP, real investment and capital stock fall in the long run. This highlights the importance of (1) understanding the potential harm to economic efficiency and production caused by taxes, and (2) ensuring that fiscal reform includes both government expenditure reform and the identification of non-oil revenue sources. This allows for the design of an optimal tax system that meets all future requirements of each of the individual Gulf States.
    Date: 2019–08–21
  7. By: Manel Hamdi (International Financial Group-Tunisia, Faculty of Economics and Management of Tunis, University of Tunis); Walid Chkili (International Financial Group-Tunisia, Faculty of Economics and Management of Tunis, University of Tunis)
    Abstract: The aim of this paper is to study the volatility and forecast accuracy of the Islamic stock market. For this purpose, we construct a new hybrid GARCH-type model based on an artificial neural network (ANN). The model is applied to daily prices for DW Islamic markets over the period June 1999-December 2016. Our in-sample results show that the volatility of the Islamic stock market is better described by the FIAPARCH approach, which takes into account asymmetry and long-memory features. For the out-of-sample analysis, we apply a hybrid forecasting model that combines the FIAPARCH approach with the ANN. Empirical results show that the proposed hybrid model (FIAPARCH-ANN) outperforms all single models, such as GARCH, FIGARCH and FIAPARCH, on all performance criteria used in our study.
    Date: 2019–08–21
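The first stage of such a hybrid is a GARCH-type variance filter whose output feeds the neural network. A minimal sketch of the plain GARCH(1,1) recursion (the paper's FIAPARCH adds asymmetry and long memory, and all parameters here are illustrative):

```python
def garch_variance(returns, omega=0.05, alpha=0.08, beta=0.9):
    """GARCH(1,1) conditional-variance filter (illustrative parameters).
    In a hybrid model, filtered variances like these become input
    features for an ANN that produces the final volatility forecast."""
    var = omega / (1 - alpha - beta)   # unconditional variance as seed
    sigma2 = [var]
    for r in returns[:-1]:
        var = omega + alpha * r * r + beta * var
        sigma2.append(var)
    return sigma2

rets = [0.1, -0.2, 0.05, 0.3, -0.1]
s2 = garch_variance(rets)
```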
  8. By: David Byrd; Tucker Hybinette Balch
    Abstract: In finance, the weak form of the Efficient Market Hypothesis asserts that historic stock price and volume data cannot inform predictions of future prices. In this paper we show that, to the contrary, future intra-day stock prices could be predicted effectively until 2009. We demonstrate this using two different profitable machine-learning-based trading strategies. However, the effectiveness of both approaches diminishes over time, and neither is profitable after 2009. We present our implementation and results in detail for the period 2003-2017 and propose a novel idea: the use of such flexible machine learning methods as an objective measure of relative market efficiency. We conclude with a candidate explanation, comparing our returns over time with high-frequency trading volume, and suggest concrete steps for further investigation.
    Date: 2019–08
  9. By: Songul Tolan (European Commission – JRC)
    Abstract: Machine learning algorithms are now frequently used in sensitive contexts that substantially affect the course of human lives, such as credit lending or criminal justice. This is driven by the idea that ‘objective’ machines base their decisions solely on facts and remain unaffected by human cognitive biases, discriminatory tendencies or emotions. Yet, there is overwhelming evidence that algorithms can inherit or even perpetuate human biases in their decision making when they are trained on data that contains biased human decisions. This has led to a call for fairness-aware machine learning. However, fairness is a complex concept, which is also reflected in the attempts to formalize fairness for algorithmic decision making. Statistical formalizations of fairness lead to a long list of criteria that are each flawed (or even harmful) in different contexts. Moreover, inherent trade-offs in these criteria make it impossible to unify them in one general framework. Thus, fairness constraints in algorithms have to be specific to the domains to which the algorithms are applied. In the future, research on algorithmic decision-making systems should be aware of data and developer biases and add a focus on transparency to facilitate regular fairness audits.
    Keywords: fairness, machine learning, algorithmic bias, algorithmic transparency
    Date: 2018–12
  10. By: Lotfi Boudabsa (Ecole Polytechnique Fédérale de Lausanne - School of Basic Sciences); Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute)
    Abstract: We introduce a computational framework for dynamic portfolio valuation and risk management building on machine learning with kernels. We learn the replicating martingale of a portfolio from a finite sample of its terminal cumulative cash flow. The learned replicating martingale is given in closed form thanks to a suitable choice of the kernel. We develop an asymptotic theory and prove convergence and a central limit theorem. We also derive finite sample error bounds and concentration inequalities. Numerical examples show good results for a relatively small training sample size.
    Keywords: dynamic portfolio valuation, kernel ridge regression, learning theory, reproducing kernel Hilbert space, portfolio risk management
    Date: 2019–06
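The workhorse behind such kernel methods is kernel ridge regression: solve (K + lambda I) a = y and predict with the weighted kernel sum. A self-contained toy version with a Gaussian kernel (data and hyperparameters are illustrative, and the paper's replicating-martingale construction is much richer than this sketch):

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge(xs, ys, lam=1e-3, gamma=1.0):
    """Fit f(x) = sum_i a_i k(x_i, x) with a Gaussian kernel by solving
    the regularized system (K + lam * I) a = y; returns the predictor."""
    k = lambda u, v: math.exp(-gamma * (u - v) ** 2)
    K = [[k(xi, xj) + (lam if i == j else 0.0) for j, xj in enumerate(xs)]
         for i, xi in enumerate(xs)]
    a = gauss_solve(K, ys)
    return lambda x: sum(ai * k(xi, x) for ai, xi in zip(a, xs))

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, -1.0]
f = kernel_ridge(xs, ys)
```

A closed-form predictor like this is what makes the learned replicating martingale cheap to evaluate once the system has been solved.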
  11. By: Christian Bayer; Blanka Horvath; Aitor Muguruza; Benjamin Stemper; Mehdi Tomas
    Abstract: Techniques from deep learning play an increasingly important role in the task of calibrating financial models. The pioneering paper by Hernandez [Risk, 2017] was a catalyst for renewed research interest in this area. In this paper we advocate an alternative (two-step) approach that uses deep learning techniques solely to learn the pricing map -- from model parameters to prices or implied volatilities -- rather than to learn the calibrated model parameters directly as a function of observed market data. Having a fast and accurate neural-network-based approximation of the pricing map (first step), we can then (second step) use traditional model calibration algorithms. In this work we showcase a direct comparison of different potential approaches to the learning stage and present algorithms that provide sufficient accuracy for practical use. We provide a first neural-network-based calibration method for rough volatility models for which calibration can be done on the fly. We demonstrate the method via a hands-on calibration engine on the rough Bergomi model, for which classical calibration techniques are difficult to apply due to the high cost of all known numerical pricing methods. Furthermore, we display and compare different types of sampling and training methods and elaborate on their advantages under different objectives. As a further application we use the fast pricing method for a Bayesian analysis of the calibrated model.
    Date: 2019–08
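The two-step structure can be sketched end to end: a fast surrogate stands in for the trained neural pricing map, and an ordinary optimizer calibrates against market quotes through it. The surrogate formula, the brute-force search, and all numbers below are illustrative stand-ins, not the paper's method:

```python
def surrogate_price(sigma, strike):
    """Stand-in for the learned pricing map (step one). In the paper this
    is a neural network trained offline on (model parameters -> prices);
    here it is a cheap toy formula that increases in volatility."""
    return max(1.0 - strike, 0.0) + 0.4 * sigma / (1.0 + strike)

def calibrate(market_quotes, strikes, grid):
    """Step two: plug the fast surrogate into an ordinary calibration
    routine, here a brute-force search over candidate sigma values."""
    def loss(sigma):
        return sum((surrogate_price(sigma, k) - q) ** 2
                   for k, q in zip(strikes, market_quotes))
    return min(grid, key=loss)

strikes = [0.9, 1.0, 1.1]
quotes = [surrogate_price(0.3, k) for k in strikes]   # synthetic market data
grid = [i / 100 for i in range(1, 101)]
sigma_hat = calibrate(quotes, strikes, grid)          # recovers 0.3
```

Because step two only ever evaluates the surrogate, any off-the-shelf optimizer can replace the grid search without touching the expensive model.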
  12. By: Nicola Cufaro Petroni; Piergiacomo Sabino
    Abstract: The law of a mean-reverting (Ornstein-Uhlenbeck) process driven by a compound Poisson process with exponential jumps is investigated in the context of energy derivatives pricing. The distribution turns out to be related to the self-decomposable gamma laws, and its density and characteristic function are given here in closed form. Algorithms for the exact simulation of such a process are accordingly derived, with the advantage of being significantly faster (at least 30 times) than those available in the literature. They are also extended to more general cases (bilateral exponential jumps, and time-dependent intensity of the Poisson process). These results are finally applied to the pricing of gas storages and swings under jump-diffusion market models, and the apparent computational advantages of the proposed procedures are emphasized.
    Date: 2019–08
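One exact transition of such a jump-driven Ornstein-Uhlenbeck process can be sampled with the standard decomposition: deterministic decay plus discounted jumps arriving uniformly over the step. This sketch uses that textbook route with illustrative parameters; the paper's faster algorithms instead draw from the closed-form transition law it derives:

```python
import math
import random

def ou_jump_step(x, dt, kappa=2.0, jump_rate=1.0, jump_mean=0.5, rng=random):
    """One exact transition of dX = -kappa * X dt + dJ, where J is a
    compound Poisson process with exponential jump sizes.
    Decomposition: decay the state, then add each of the Poisson-many
    jumps discounted from its (uniform) arrival time to the step end."""
    # Draw N ~ Poisson(jump_rate * dt) by inversion of the CDF
    n, p, u = 0, math.exp(-jump_rate * dt), rng.random()
    cum = p
    while u > cum:
        n += 1
        p *= jump_rate * dt / n
        cum += p
    x_new = x * math.exp(-kappa * dt)
    for _ in range(n):
        tau = rng.uniform(0.0, dt)               # jump arrival time
        size = rng.expovariate(1.0 / jump_mean)  # exponential jump size
        x_new += size * math.exp(-kappa * (dt - tau))
    return x_new
```

With the jump intensity set to zero the step reduces to pure exponential mean reversion, which gives a handy sanity check.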
  13. By: Jingyuan Wang; Yang Zhang; Ke Tang; Junjie Wu; Zhang Xiong
    Abstract: Recent years have witnessed the successful marriage of finance innovations and AI techniques in various finance applications, including quantitative trading (QT). Despite great research effort devoted to leveraging deep learning (DL) methods to build better QT strategies, existing studies still face serious challenges, especially on the finance side, such as balancing risk and return, resisting extreme losses, and interpreting strategies, which limit the application of DL-based strategies in real-life financial markets. In this work, we propose AlphaStock, a novel reinforcement learning (RL) based investment strategy enhanced by interpretable deep attention networks, to address these challenges. Our main contributions are summarized as follows: i) we integrate deep attention networks with a Sharpe-ratio-oriented reinforcement learning framework to achieve a risk-return balanced investment strategy; ii) we model interrelationships among assets with a cross-asset attention mechanism to avoid selection bias; iii) to the best of our knowledge, this work is among the first to offer an interpretable investment strategy using deep reinforcement learning models. Experiments on long-period U.S. and Chinese markets demonstrate the effectiveness and robustness of AlphaStock over diverse market states. It turns out that AlphaStock tends to select as winners stocks with high long-term growth, low volatility, and high intrinsic value that have recently been undervalued.
    Date: 2019–07
  14. By: Albanesi, Stefania; Vamossy, Domonkos
    Abstract: We develop a model to predict consumer default based on deep learning. We show that the model consistently outperforms standard credit scoring models, even though it uses the same data. Our model is interpretable and is able to provide a score to a larger class of borrowers relative to standard credit scoring models while accurately tracking variations in systemic risk. We argue that these properties can provide valuable insights for the design of policies targeted at reducing consumer default and alleviating its burden on borrowers and lenders, as well as macroprudential regulation.
    Keywords: Consumer default; credit scores; deep learning; macroprudential policy
    JEL: C45 D1 E27 E44 G21 G24
    Date: 2019–08
  15. By: Agnieszka Borowska (Vrije Universiteit Amsterdam); Lennart Hoogerheide (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Herman van Dijk (Erasmus University Rotterdam)
    Abstract: A novel approach to inference for a specific region of the predictive distribution is introduced. An important domain of application is the accurate prediction of financial risk measures, where the area of interest is the left tail of the predictive density of log-returns. Our proposed approach originates from the Bayesian approach to parameter estimation and time series forecasting; however, it is robust in the sense that it provides a more accurate estimate of the predictive density in the region of interest in case of misspecification. The first main contribution of the paper is the novel concept of the Partially Censored Posterior (PCP), in which the set of model parameters is partitioned into two subsets: for the first subset we consider the standard marginal posterior, while for the second subset (parameters particularly related to the region of interest) we consider the conditional censored posterior. Censoring means that observations outside the region of interest are censored: for those observations only the probability of being outside the region of interest matters. This quasi-Bayesian approach yields more precise parameter estimates than a fully censored posterior for all parameters, and focuses more on the region of interest than a standard Bayesian approach. The second main contribution is the introduction of two novel methods for computationally efficient simulation: Conditional MitISEM, a Markov chain Monte Carlo method to simulate model parameters from the Partially Censored Posterior, and PCP-QERMit, an Importance Sampling method introduced to further decrease the numerical standard errors of the Value-at-Risk and Expected Shortfall estimators. The third main contribution is that we consider the effect of using a time-varying boundary of the region of interest, which may provide more information about the left tail of the distribution of the standardized innovations. Extensive simulation and empirical studies show the ability of the introduced method to outperform standard approaches.
    Keywords: Bayesian inference, censored likelihood, censored posterior, partially censored posterior, misspecification, density forecasting, Markov chain Monte Carlo, importance sampling, mixture of Student's t, Value-at-Risk, Expected Shortfall
    JEL: C11 C53 C58
    Date: 2019–08–19
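The censoring idea itself is simple to write down: observations inside the region of interest keep their full density, while those outside contribute only the probability of lying outside. A minimal sketch for a Gaussian model with a left-tail region y < c (the model, threshold, and data are illustrative; the paper's PCP additionally censors only a subset of parameters):

```python
import math

def norm_logpdf(y, mu, sigma):
    return (-0.5 * math.log(2 * math.pi * sigma * sigma)
            - (y - mu) ** 2 / (2 * sigma * sigma))

def norm_cdf(y, mu, sigma):
    return 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

def censored_loglik(data, mu, sigma, c):
    """Censored log-likelihood: observations in the region of interest
    (left tail, y < c) keep their full Gaussian density; observations
    outside it contribute only log P(Y >= c)."""
    total = 0.0
    for y in data:
        if y < c:
            total += norm_logpdf(y, mu, sigma)
        else:
            total += math.log(1 - norm_cdf(c, mu, sigma))
    return total

data = [-2.1, -0.3, 0.4, 1.2]
```

When the threshold is so high that nothing is censored, the censored log-likelihood coincides with the ordinary one, which makes the construction easy to verify.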

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.