nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒06‒15
23 papers chosen by



  1. Machine Learning, the Treasury Yield Curve and Recession Forecasting By Michael Puglia; Adam Tucker
  2. Computation of Expected Shortfall by fast detection of worst scenarios By Bruno Bouchard; Adil Reghai; Benjamin Virrion
  3. Applications of Machine Learning to Estimating the Sizes and Market Impact of Hidden Orders in the BRICS Financial Markets By Maake, Witness; Van Zyl, Terence
  4. A Computable General Equilibrium Analysis of Environmental Tax Reform in Japan By Shiro Takeda; Toshi H. Arimura
  5. Forecast combinations in machine learning By Qiu, Yue; Xie, Tian; Yu, Jun
  6. Making text count: economic forecasting using newspaper text By Kalamara, Eleni; Turrell, Arthur; Redl, Chris; Kapetanios, George; Kapadia, Sujit
  7. Comparison of tree-based models performance in prediction of marketing campaign results using Explainable Artificial Intelligence tools By Marcin Chlebus; Zuzanna Osika
  8. Zero-Intelligence vs. Human Agents: An Experimental Analysis of the Efficiency of Double Auctions and Over-the-Counter Markets of Varying Sizes By Giuseppe Attanasi; Samuele Centorrino; Elena Manzoni
  9. Monetary Policy Options at the Effective Lower Bound: Assessing the Federal Reserve’s Current Policy Toolkit By Hess Chung; Etienne Gagnon; Taisuke Nakata; Matthias Paustian; Bernd Schlusche; James Trevino; Diego Vilán; Wei Zheng
  10. ELMOD documentation: Modeling of flow-based market coupling and congestion management By Schönheit, David; Hladik, Dirk; Hobbie, Hannes; Möst, Dominik
  11. EDGE-M3: A Dynamic General Equilibrium Micro-Macro Model for the EU Member States By Diego d’Andria; Jason DeBacker; Richard W. Evans; Jonathan Pycroft; Wouter van der Wielen; Magdalena Zachlod-Jelec
  12. Random forest versus logit models: which offers better early warning of fiscal stress? By Jarmulska, Barbara
  13. Machine learning time series regressions with an application to nowcasting By Andrii Babii; Eric Ghysels; Jonas Striaukas
  14. A moment matching method for option pricing under stochastic interest rates By Fabio Antonelli; Alessandro Ramponi; Sergio Scarlatti
  15. Forecasting gasoline prices with mixed random forest error correction models By Wang, Dandan; Escribano Saez, Alvaro
  16. Breiman's "Two Cultures" Revisited and Reconciled By Subhadeep Mukhopadhyay; Kaijun Wang
  17. Deep Learning for Portfolio Optimisation By Zihao Zhang; Stefan Zohren; Stephen Roberts
  18. Inference Using Simulated Neural Moments By Michael Creel
  19. COVID-19 and Global Economic Growth: Policy Simulations with a Pandemic-Enabled Neoclassical Growth Model By Ian M. Trotter; Luís A. C. Schmidt; Bruno C. M. Pinto; Andrezza L. Batista; Jéssica Pellenz; Maritza Isidro; Aline Rodrigues; Attawan G. S. Suela; Loredany Rodrigues
  20. Using Machine Learning to Forecast Future Earnings By Xinyue Cui; Zhaoyu Xu; Yue Zhou
  21. The Possible Effects of the Extended Lockdown Period on the South African Economy: A CGE Analysis By Jan H van Heerden
  22. Information weighting under least squares learning By Jaqueson K. Galimberti
  23. Non-concave expected utility optimization with uncertain time horizon: an application to participating life insurance contracts By Christian Dehm; Thai Nguyen; Mitja Stadje

  1. By: Michael Puglia; Adam Tucker
    Abstract: We use machine learning methods to examine the power of Treasury term spreads and other financial market and macroeconomic variables to forecast US recessions, vis-à-vis probit regression. In particular, we propose a novel strategy for conducting cross-validation on classifiers trained with macro/financial panel data of low frequency and compare the results to those obtained from standard k-folds cross-validation. Consistent with the existing literature, we find that, in the time series setting, forecast accuracy estimates derived from k-folds are biased optimistically, and cross-validation strategies which eliminate data "peeking" produce lower, and perhaps more realistic, estimates of forecast accuracy. More strikingly, we also document rank reversal of probit, Random Forest, XGBoost, LightGBM, neural network and support-vector machine classifier forecast performance over the two cross-validation methodologies. That is, while k-folds cross-validation indicates that the forecast accuracy of tree methods dominates that of neural networks, which in turn dominates that of probit regression, the more conservative cross-validation strategy we propose indicates the exact opposite: probit regression should be preferred over machine learning methods, at least in the context of the present problem. This latter result stands in contrast to a growing body of literature demonstrating that machine learning methods outperform many alternative classification algorithms, and we discuss some possible reasons for our result. We also discuss techniques for conducting statistical inference on machine learning classifiers using Cochran's Q and McNemar's tests, and use the SHapley Additive exPlanations (SHAP) framework to decompose US recession forecasts and analyze feature importance across business cycles.
    Keywords: Shapley; Probit; XGBoost; Treasury yield curve; Neural network; LightGBM; Recession; Tree ensemble; Support-vector machine; Random forest
    JEL: C45 C53 E37
    Date: 2020–05–20
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2020-38&r=all
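    Sketch: a minimal illustration of the cross-validation comparison described above, using scikit-learn on synthetic data; TimeSeriesSplit is a generic stand-in for the authors' peeking-free scheme, not their actual code.
      # Compare k-folds with a time-ordered split for a binary classifier.
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import KFold, TimeSeriesSplit, cross_val_score

      X, y = make_classification(n_samples=600, n_features=10, random_state=0)
      clf = LogisticRegression(max_iter=1000)
      acc_kfold = cross_val_score(clf, X, y, cv=KFold(n_splits=5)).mean()
      acc_ts = cross_val_score(clf, X, y, cv=TimeSeriesSplit(n_splits=5)).mean()
      print(f"k-folds: {acc_kfold:.3f}  time-ordered: {acc_ts:.3f}")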
  2. By: Bruno Bouchard (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - CNRS - Centre National de la Recherche Scientifique - Université Paris Dauphine-PSL); Adil Reghai (Natixis Asset Management); Benjamin Virrion (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - CNRS - Centre National de la Recherche Scientifique - Université Paris Dauphine-PSL, Natixis Asset Management)
    Abstract: We consider a multi-step algorithm for the computation of the historical expected shortfall as defined by the Basel Minimum Capital Requirements for Market Risk. At each step of the algorithm, we use Monte Carlo simulations to reduce the number of historical scenarios that potentially belong to the set of worst scenarios. The number of simulations increases as the number of candidate scenarios is reduced and the distance between them diminishes. For the most naive scheme, we show that the L^p-error of the estimator of the Expected Shortfall is bounded by a linear combination of the probabilities of inversion of favorable and unfavorable scenarios at each step, and of the last-step Monte Carlo error associated to each scenario. By using concentration inequalities, we then show that, for sub-gamma pricing errors, the probabilities of inversion converge at an exponential rate in the number of simulated paths. We then propose an adaptive version in which the algorithm improves step by step its knowledge of the unknown parameters of interest: the mean and variance of the Monte Carlo estimators of the different scenarios. Both schemes can be optimized by using dynamic programming algorithms that can be solved off-line. To our knowledge, these are the first non-asymptotic bounds for such estimators. Our hypotheses are weak enough to allow for the use of estimators for the different scenarios and steps based on the same random variables, which, in practice, considerably reduces the computational effort. First numerical tests are performed.
    Keywords: ranking and selection, sequential design, Expected Shortfall, Bayesian filter
    Date: 2020–05–25
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02619589&r=all
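    Sketch: a toy version of the successive-elimination idea in the abstract above; true scenario losses, Monte Carlo noise, and the budget/elimination schedule are all invented rather than the paper's optimized ones.
      # Spend small budgets on all scenarios, large budgets on survivors.
      import numpy as np

      rng = np.random.default_rng(1)
      true_loss = rng.normal(size=253)          # hypothetical scenario losses
      k = 6                                     # size of the worst-scenario set
      candidates = np.arange(true_loss.size)
      for n_paths in (100, 1000, 10000):        # increasing MC budget per step
          noise = rng.normal(0, 1 / np.sqrt(n_paths), candidates.size)
          est = true_loss[candidates] + noise   # noisy loss estimates
          keep = max(k, candidates.size // 4)   # shrink the candidate set
          candidates = candidates[np.argsort(est)[-keep:]]
      es_hat = true_loss[candidates].mean()     # final candidate set has size k
      print(es_hat, np.sort(true_loss)[-k:].mean())   # estimate vs. oracle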
  3. By: Maake, Witness; Van Zyl, Terence
    Abstract: The research aims to investigate the role of hidden orders in the structure of the average market impact curves in the five BRICS financial markets. The concept of market impact is central to the implementation of cost-effective trading strategies during financial order executions. The methodology of Lillo et al. (2003) is replicated using data on visible orders from the five BRICS financial markets, and the implementation is then repeated to investigate the effect of hidden orders. We subsequently study the dynamics of hidden orders. The research applies machine learning to estimate the sizes of hidden orders. We revisit the methodology of Lillo et al. (2003) to compare the average market impact curves in which true hidden orders are added to visible orders with the average market impact curves in which hidden order sizes are estimated via machine learning. The study finds that: (1) hidden order sizes can be uncovered via machine learning techniques such as Generalized Linear Models (GLM), Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Random Forests (RF); and (2) there exists no set of market features that is consistently predictive of the sizes of hidden orders across different stocks. Artificial Neural Networks produce large R^2 and small MSE in the prediction of hidden orders of individual stocks across the five studied markets. Random Forests produce the average price impact curves of visible and estimated hidden orders that are closest to the average market impact curves of visible and true hidden orders. In some markets, hidden orders produce a convex power-law far-right tail, in contrast to visible orders, which produce a concave power-law far-right tail. In some markets hidden orders affect the average price impact curves for orders of size less than the average order size, while in others they do not affect the structure of the curves. The research recommends ANN and RF as tools to uncover hidden orders.
    Keywords: Hidden Orders; Market Features; GLM; ANN; SVM; RF; Hidden Order Sizes; Market Impact; BRICS (Brazil, Russia, India, China, and South Africa)
    JEL: C4 C8 D4
    Date: 2020–02–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:99075&r=all
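    Sketch: the flavor of the estimation exercise above, with a random forest regressed on synthetic stand-ins for market features and hidden-order sizes (the feature construction here is invented, not the paper's).
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_squared_error, r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 8))                 # stand-in market features
      y = np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1]) + rng.normal(0, 0.1, 2000)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
      pred = rf.predict(X_te)
      print("R^2:", r2_score(y_te, pred), "MSE:", mean_squared_error(y_te, pred))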
  4. By: Shiro Takeda (Kyoto Sangyo University, Motoyama, Kamigamo, Kita-Ku, Kyoto City, 603-8555, Japan. Research Institute for Environmental Economics and Management (RIEEM), Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo 169-8050, Japan.); Toshi H. Arimura (Faculty of Political Science and Economics & Research Institute for Environmental Economics and Management (RIEEM), Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo, 169-8050, Japan.)
    Abstract: The Japanese government plans to reduce greenhouse gas emissions by 80% by 2050. However, it is not yet clear which policy measures the government will adopt to achieve this goal. In this regard, environmental tax reform, which is the combination of carbon regulation and the reduction of existing distortionary taxes, has attracted much attention. This paper examines the effects of environmental tax reform in Japan. Using a dynamic computable general equilibrium (CGE) model, we analyze the quantitative impacts of environmental tax reform and clarify which types of environmental tax reform are the most desirable. In the simulation, we introduce a carbon tax and consider the following five scenarios for the use of carbon tax revenue: 1) a lump-sum rebate to the household, 2) a cut in social security contributions, 3) a cut in income taxes, 4) a cut in corporate taxes and 5) a cut in consumption taxes. The first scenario is a pure carbon tax, and the other four scenarios are types of environmental tax reform. Our CGE simulation shows that environmental tax reform tends to generate more desirable impacts than the pure carbon tax by improving welfare or increasing GDP while reducing emissions (double dividend). In particular, we show that a cut in corporate taxes is the most desirable policy in terms of GDP and national income.
    Keywords: Carbon Tax; Environmental Tax Reform; Double Dividend; Computable General Equilibrium; Climate Change; Tax Interaction Effects; Paris Agreement
    JEL: Q54 Q58 C68 H23
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:was:dpaper:2002&r=all
  5. By: Qiu, Yue (Shanghai University of International Business and Economics); Xie, Tian (Shanghai University of Finance and Economics); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: This paper introduces novel methods to combine forecasts made by machine learning techniques. Machine learning methods have found many successful applications in predicting the response variable. However, they ignore model uncertainty when the relationship between the response variable and the predictors is nonlinear. To further improve forecasting performance, we propose a general framework to combine multiple forecasts from machine learning techniques. Simulation studies show that the proposed machine-learning-based forecast combinations work well. In empirical applications to forecasting key macroeconomic and financial variables, we find that the proposed methods can produce more accurate forecasts than individual machine learning techniques and the simple average method, the latter of which is known to be hard to beat in the literature.
    Keywords: Model uncertainty; Machine learning; Nonlinearity; Forecast combinations
    JEL: C52 C53
    Date: 2020–05–11
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2020_013&r=all
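    Sketch: a regression-weighted forecast combination versus the simple average, on synthetic forecasts; the paper's actual combination schemes are more general than this stand-in.
      import numpy as np

      rng = np.random.default_rng(2)
      y = rng.normal(size=300)                                   # target series
      F = y[:, None] + rng.normal(0, [0.5, 1.0, 2.0], (300, 3))  # 3 forecasts
      w, *_ = np.linalg.lstsq(F[:200], y[:200], rcond=None)      # training window
      mse = lambda f: np.mean((y[200:] - f) ** 2)                # out-of-sample MSE
      print("combination:", mse(F[200:] @ w),
            "simple average:", mse(F[200:].mean(axis=1)))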
  6. By: Kalamara, Eleni (King’s College London); Turrell, Arthur (Bank of England); Redl, Chris (International Monetary Fund); Kapetanios, George (King’s College London); Kapadia, Sujit (European Central Bank)
    Abstract: We consider the best way to extract timely signals from newspaper text and use them to forecast macroeconomic variables, using three popular UK newspapers that collectively represent UK newspaper readership in terms of political perspective and editorial style. We find that newspaper text can improve economic forecasts both in absolute and marginal terms. We introduce a powerful new method of incorporating text information in forecasts that combines counts of terms with supervised machine learning techniques. This method improves forecasts of macroeconomic variables including GDP, inflation, and unemployment, also relative to existing text-based methods. Forecast improvements occur when it matters most, during stressed periods.
    Keywords: Text; forecasting; machine learning
    JEL: C55 J42
    Date: 2020–05–22
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0865&r=all
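    Sketch: the "counts of terms + supervised learning" idea with toy snippets; the texts, outcomes, and the ridge learner are all illustrative stand-ins.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import Ridge

      texts = ["factories cut jobs amid slump", "hiring rebounds as demand grows",
               "recession fears weigh on spending", "exports surge and growth firms"]
      gdp_growth = [-0.8, 0.6, -0.4, 0.9]         # made-up quarterly outcomes
      X = CountVectorizer().fit_transform(texts)  # document-term count matrix
      model = Ridge(alpha=1.0).fit(X, gdp_growth)
      print(model.predict(X))                     # in-sample fitted values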
  7. By: Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw); Zuzanna Osika (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The research uses tree-based models to predict the success of a telemarketing campaign of a Portuguese bank. The Portuguese bank dataset has been used in past research with different models to predict campaign success. We propose to use boosting algorithms, which have not been used before to predict the response to the campaign, and to use Explainable AI (XAI) methods to evaluate model performance. The paper examines whether 1) complex boosting algorithms perform better and 2) XAI tools are better indicators of model performance than commonly used measures of discriminatory power such as AUC. The Portuguese bank telemarketing dataset was used with five machine learning algorithms, namely Random Forest (RF), AdaBoost, GBM, XGBoost and CatBoost, which were then compared based on their AUC and on an analysis with XAI tools: Permutation Variable Importance and Partial Dependency Profile. The two best performing models based on AUC were XGBoost and CatBoost, with XGBoost having a slightly higher AUC. These models were then examined using PDP and VI, which revealed potential overfitting in XGBoost and led to choosing CatBoost over XGBoost. The results show that new boosting models perform better than older models and that XAI tools can be helpful in model comparisons.
    Keywords: direct marketing, telemarketing, relationship marketing, data mining, machine learning, random forest, adaboost, gbm, catboost, xgboost, bank marketing, XAI, variable importance, partial dependency profile
    JEL: C25 C44 M31
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-15&r=all
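    Sketch: the two XAI diagnostics named above, computed with scikit-learn on synthetic campaign-like data; gradient boosting stands in for the paper's XGBoost/CatBoost models.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.inspection import partial_dependence, permutation_importance

      X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
      clf = GradientBoostingClassifier(random_state=0).fit(X, y)
      imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
      print("permutation importances:", imp.importances_mean.round(3))
      pdp = partial_dependence(clf, X, features=[0])   # profile for feature 0
      print("partial dependence values:", pdp["average"][0][:5])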
  8. By: Giuseppe Attanasi (University of Côte d’Azur, Nice); Samuele Centorrino (Stony Brook University); Elena Manzoni (Department of Economics (University of Verona))
    Abstract: We study two well-known electronic markets: an over-the-counter (OTC) market, in which each agent looks for the best counterpart through bilateral negotiations, and a double auction (DA) market, in which traders post their quotes publicly. We focus on the DA-OTC efficiency gap and show how it varies with different market sizes (10, 20, 40, and 80 traders). We compare experimental results from a sample of 6,400 undergraduate students in Economics with zero-intelligence (ZI) agent-based simulations. Simulations with ZI traders show that the traded quantity (with respect to the efficient one) increases with market size under both DA and OTC. Experimental results with human traders confirm the same tendency under DA, while the share of periods in which the traded quantity is higher (lower) than the efficient one decreases (increases) with market size under OTC, ultimately leading to a DA-OTC efficiency gap that increases with market size. We rationalize these results by putting forward a novel game-theoretical model of the OTC market as a repeated bargaining procedure under incomplete information on buyers' valuations and sellers' costs, showing how efficiency decreases slightly with size due to two counteracting effects: acceptance rates in earlier periods decrease with size, and earlier offers increase, but not always by enough to compensate for the decrease in acceptance rates.
    Keywords: Market Design, Classroom Experiment, Agent-based Modelling, Game-theoretic Modelling.
    JEL: C70 C91 C92 D41 D47
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:ver:wpaper:05/2020&r=all
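    Sketch: a zero-intelligence double auction in the spirit of Gode and Sunder, one unit per trader; the valuations, costs, and meeting protocol here are illustrative, not the paper's design.
      import random

      random.seed(0)
      n = 40                                   # traders per side (market size)
      values = [random.uniform(50, 100) for _ in range(n)]   # buyer valuations
      costs = [random.uniform(0, 50) for _ in range(n)]      # seller costs
      trades = 0
      for _ in range(10000):                   # random bilateral meetings
          if not values or not costs:
              break
          b, s = random.randrange(len(values)), random.randrange(len(costs))
          bid = random.uniform(0, values[b])   # ZI-C: never bid above value
          ask = random.uniform(costs[s], 100)  # ZI-C: never ask below cost
          if bid >= ask:                       # a trade occurs, both exit
              values.pop(b); costs.pop(s); trades += 1
      print("traded quantity:", trades, "of efficient quantity", n)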
  9. By: Hess Chung (Board of Governors of the Federal Reserve System); Etienne Gagnon (Board of Governors of the Federal Reserve System); Taisuke Nakata (University of Tokyo); Matthias Paustian (Board of Governors of the Federal Reserve System); Bernd Schlusche (Board of Governors of the Federal Reserve System); James Trevino (Board of Governors of the Federal Reserve System); Diego Vilán (Board of Governors of the Federal Reserve System); Wei Zheng (Visa Inc.)
    Abstract: We simulate the FRB/US model and a number of statistical models to quantify some of the risks stemming from the effective lower bound (ELB) on the federal funds rate and to assess the efficacy of adjustments to the federal funds rate target, balance sheet policies, and forward guidance to provide monetary policy accommodation in the event of a recession. Over the next decade, our simulations imply a roughly 20 to 50 percent probability that the federal funds rate will be constrained by the ELB at some point. We also find that forward guidance and balance sheet policies of the kinds used in response to the Global Financial Crisis are modestly effective in speeding up the labor market recovery and return of inflation to 2 percent following an economic slump. However, these policies have only small effects in limiting the initial rise in the unemployment rate during a recession because of transmission lags. As with any model-based analysis, we also discuss a number of caveats regarding our results.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:cfi:fseres:cf483&r=all
  10. By: Schönheit, David; Hladik, Dirk; Hobbie, Hannes; Möst, Dominik
    Abstract: This paper documents ELMOD, a linear optimization model with a nodal pricing approach, covering the energy market and electricity grid of Europe. In the presented formulation, ELMOD is used for the computation of market coupling results without grid constraints and the subsequent computation of congestion management, i.e. redispatch and curtailment. Furthermore, flow-based market coupling, the EU-stipulated method for calculating cross-border trading capacities, is implemented. A short case study presents exemplary results for market outcomes based on flow-based market coupling, i.e. n-1 secure trading domains, import-export balances, and zonal prices, as well as the necessary congestion management measures.
    Keywords: energy system modeling, electricity grid models, linear optimization, congestion management, flow-based market coupling, n-1 security criterion
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:217278&r=all
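    Sketch: a two-zone dispatch as a linear program, the modeling core behind market coupling and redispatch; the numbers are invented and far simpler than ELMOD's network.
      from scipy.optimize import linprog

      # Variables: [gen_A, gen_B] with costs 20 and 60 EUR/MWh. Total demand
      # is 300 MW (100 in zone A, 200 in zone B); the A->B line carries
      # gen_A - 100 and is limited to 80 MW, hence gen_A <= 180.
      res = linprog(c=[20.0, 60.0],
                    A_ub=[[1.0, 0.0]], b_ub=[180.0],   # transmission limit
                    A_eq=[[1.0, 1.0]], b_eq=[300.0],   # energy balance
                    bounds=[(0, 250), (0, 250)])
      print("dispatch:", res.x)  # [180, 120]: congestion forces gen_B online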
  11. By: Diego d’Andria (European Commission - JRC); Jason DeBacker; Richard W. Evans; Jonathan Pycroft; Wouter van der Wielen (European Commission - JRC); Magdalena Zachlod-Jelec (European Commission - JRC)
    Abstract: This paper provides a technical description of the overlapping generations model used by the Joint Research Centre to analyse tax policy reforms, including in particular pension and demographic issues. The main feature of the EDGE-M3 model lies in its high level of disaggregation and the close connection between microeconomic and macroeconomic mechanisms, which makes it a very suitable model for analysing the redistributive impact of policies. EDGE-M3 features eighty generations and seven earnings-ability types of individuals. To facilitate a realistic dynamic population structure, EDGE-M3 includes Eurostat's demographic projections. In terms of calibration, the EDGE-M3 family of overlapping generations models is heavily calibrated on microeconomic data. This allows the introduction of the underlying individuals' characteristics in a macro model to the greatest extent possible. In particular, it includes the richness of the tax code by means of income tax and social insurance contribution rate functions estimated using data from the EUROMOD microsimulation model. This feature allows a close connection between the macro and the micro model. In addition, the earnings profiles of the seven heterogeneous agent types are estimated using survey data. Finally, the labour supply, bequests and consumption tax calibration are all done using detailed microeconomic data, making the model highly suitable for intra- and intergenerational analysis of tax policy.
    Keywords: computable general equilibrium, overlapping generations, heterogeneous ability, fiscal policy, microsimulation
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ipt:taxref:202003&r=all
  12. By: Jarmulska, Barbara
    Abstract: This study seeks to answer whether it is possible to design an early warning system framework that can signal the risk of fiscal stress in the near future, and what shape such a system should take. To do so, multiple models based on econometric logit and the random forest models are designed and compared. Using a dataset of 20 annual frequency variables pertaining to 43 advanced and emerging countries during 1992-2018, the results confirm the possibility of obtaining an effective model, which correctly predicts 70-80% of fiscal stress events and tranquil periods. The random forest-based early warning model outperforms logit models. While the random forest model is commonly understood to provide lower interpretability than logit models do, this study employs tools that can be used to provide useful information for understanding what is behind the black box. These tools can provide information on the most important explanatory variables and on the shape of the relationship between these variables and the outcome classification. Thus, the study contributes to the discussion on the usefulness of machine learning methods in economics.
    Keywords: early warning system, interpretability of machine learning, predictive performance
    JEL: C40 C53 H63 G01
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20202408&r=all
  13. By: Andrii Babii; Eric Ghysels; Jonas Striaukas
    Abstract: This paper introduces structured machine learning regressions for high-dimensional time series data potentially sampled at different frequencies. The sparse-group LASSO estimator can take advantage of such time series data structures and outperforms the unstructured LASSO. We establish oracle inequalities for the sparse-group LASSO estimator within a framework that allows for mixing processes and recognizes that financial and macroeconomic data may have heavier-than-exponential tails. An empirical application to nowcasting US GDP growth indicates that the estimator performs favorably compared to other alternatives and that text data can be a useful addition to more traditional numerical data.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.14057&r=all
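    Sketch: the nowcasting regression with many high-frequency lags and an l1 penalty; plain LASSO (LassoCV) stands in here for the paper's sparse-group LASSO, and the data are simulated.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(3)
      n_q, n_lag = 120, 9                      # quarters, monthly lags per predictor
      X = rng.normal(size=(n_q, 5 * n_lag))    # 5 monthly predictors x 9 lags each
      beta = np.zeros(5 * n_lag); beta[:3] = [0.5, 0.3, 0.1]  # one active group
      y = X @ beta + rng.normal(0, 0.5, n_q)
      model = LassoCV(cv=5).fit(X, y)
      print("selected lags:", np.flatnonzero(model.coef_))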
  14. By: Fabio Antonelli; Alessandro Ramponi; Sergio Scarlatti
    Abstract: In this paper we present a simple, but new, approximation methodology for pricing a call option in a Black & Scholes market characterized by stochastic interest rates. The method, based on a straightforward Gaussian moment matching technique applied to a conditional Black & Scholes formula, is quite general and applies to various models, whether affine or not. To check its accuracy and computational time, we implement it for the CIR interest rate model correlated with the underlying, using Monte Carlo simulations as a benchmark. The method's performance turns out to be quite remarkable, even when compared with analogous results obtained by the affine approximation technique presented in Grzelak and Oosterlee (2011) and by the expansion formula introduced in Kim and Kunimoto (1999), as we show in the last section.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.14063&r=all
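    Sketch: the conditioning step behind the method, in the zero-correlation case where it is exact: average Black & Scholes prices over simulated CIR rate paths, evaluated at each path's realised average rate. All parameters are made up; the paper's moment matching is what handles the correlated case.
      import numpy as np
      from scipy.stats import norm

      def bs_call(s0, k, r_eff, sigma, t):     # BS with a path-specific rate
          d1 = (np.log(s0 / k) + (r_eff + sigma**2 / 2) * t) / (sigma * np.sqrt(t))
          return s0 * norm.cdf(d1) - k * np.exp(-r_eff * t) * norm.cdf(d1 - sigma * np.sqrt(t))

      rng = np.random.default_rng(4)
      kappa, theta, xi, r0, t, n, steps = 1.5, 0.03, 0.1, 0.02, 1.0, 20000, 100
      dt = t / steps
      r, integral = np.full(n, r0), np.zeros(n)
      for _ in range(steps):                   # Euler scheme for the CIR rate
          r = np.abs(r + kappa * (theta - r) * dt + xi * np.sqrt(r * dt) * rng.normal(size=n))
          integral += r * dt
      print("conditional-BS price:", bs_call(100.0, 100.0, integral / t, 0.2, t).mean())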
  15. By: Wang, Dandan; Escribano Saez, Alvaro
    Abstract: The use of machine learning (ML) models has been shown to have advantages over alternative and more traditional time series models in the presence of big data. One of the most successful ML forecasting procedures is the Random Forest (RF) machine learning algorithm. In this paper we propose a mixed RF approach for modeling departures from linearity, instead of starting with a completely nonlinear or nonparametric model. The methodology is applied to weekly forecasts of gasoline prices that are cointegrated with international oil prices and exchange rates. The question of interest is whether gasoline prices react more strongly to increases in oil prices than to decreases, the "rockets and feathers" hypothesis. In this literature, most authors estimate parametric nonlinear error correction models using nonlinear least squares. Recent specifications for nonlinear error correction models include threshold autoregressive (TAR) models, double threshold error correction models (ECM) and double threshold smooth transition autoregressive (STAR) models. In this paper, we describe an econometric methodology that combines linear dynamic autoregressive distributed lag (ARDL) models of cointegrated variables with added nonlinear components, or price asymmetries, estimated by the powerful tool of RF. We apply our mixed RF specification strategy to weekly prices of the Spanish gasoline market from 2010 to 2019. We show that the new mixed RF error correction model has important advantages over competing parametric and nonparametric models in terms of the generality of model specification, estimation and forecasting.
    Keywords: Mixed Random Forest; Random Forest; Machine Learning; Nonlinear Error Correction; Cointegration; Rockets And Feathers Hypothesis; Forecasting Gasoline Prices
    JEL: L71 L13 D43 C53 C52 C24 B23
    Date: 2020–06–04
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:30557&r=all
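    Sketch: the mixed idea in miniature: fit the linear part by OLS, then let a random forest pick up the asymmetric residual response; all series are simulated and the error-correction structure is heavily simplified.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      d_oil = rng.normal(size=500)                     # oil price changes
      d_gas = 0.6 * np.maximum(d_oil, 0) + 0.2 * np.minimum(d_oil, 0) \
              + rng.normal(0, 0.1, 500)                # asymmetric response
      Z = d_oil.reshape(-1, 1)
      lin = LinearRegression().fit(Z, d_gas)           # linear part
      A = np.column_stack([d_oil, np.maximum(d_oil, 0)])
      rf = RandomForestRegressor(n_estimators=200, random_state=0)
      rf.fit(A, d_gas - lin.predict(Z))                # nonlinear residual part
      fitted = lin.predict(Z) + rf.predict(A)
      print("mixed-model R^2:", 1 - np.var(d_gas - fitted) / np.var(d_gas))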
  16. By: Subhadeep (DEEP) Mukhopadhyay; Kaijun Wang
    Abstract: In a landmark paper published in 2001, Leo Breiman described the tense standoff between two cultures of data modeling: parametric statistical and algorithmic machine learning. The cultural division between these two statistical learning frameworks has been growing at a steady pace in recent years. What is the way forward? It has become blatantly obvious that this widening gap between "the two cultures" cannot be closed unless we find a way to blend them into a coherent whole. This article presents a solution by establishing a link between the two cultures. Through examples, we describe the challenges and potential gains of this new integrated statistical thinking.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.13596&r=all
  17. By: Zihao Zhang; Stefan Zohren; Stephen Roberts
    Abstract: We adopt deep learning models to directly optimise the portfolio Sharpe ratio. The framework we present circumvents the requirements for forecasting expected returns and allows us to directly optimise portfolio weights by updating model parameters. Instead of selecting individual assets, we trade Exchange-Traded Funds (ETFs) of market indices to form a portfolio. Indices of different asset classes show robust correlations and trading them substantially reduces the spectrum of available assets to choose from. We compare our method with a wide range of algorithms with results showing that our model obtains the best performance over the testing period, from 2011 to the end of April 2020, including the financial instabilities of the first quarter of 2020. A sensitivity analysis is included to understand the relevance of input features and we further study the performance of our approach under different cost rates and different risk levels via volatility scaling.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.13665&r=all
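    Sketch: optimising weights by gradient ascent on the Sharpe ratio itself, with a bare parameter vector standing in for the paper's deep network and simulated returns standing in for the ETFs.
      import torch

      torch.manual_seed(0)
      returns = torch.randn(1000, 4) * 0.01 + torch.tensor([4e-4, 3e-4, 2e-4, 1e-4])
      logits = torch.zeros(4, requires_grad=True)
      opt = torch.optim.Adam([logits], lr=0.05)
      for _ in range(500):
          w = torch.softmax(logits, dim=0)     # long-only weights summing to 1
          port = returns @ w                   # portfolio return series
          loss = -port.mean() / port.std()     # negative Sharpe ratio
          opt.zero_grad(); loss.backward(); opt.step()
      print("weights:", torch.softmax(logits, dim=0).detach().numpy().round(3))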
  18. By: Michael Creel
    Abstract: This paper deals with Laplace type methods used with moment-based, simulation-based, econometric estimators. It shows that confidence intervals based upon quantiles of a tuned MCMC chain may have coverage which is far from the nominal level. It discusses how neural networks may be used to easily and automatically reduce the dimension of an initial set of moments to the minimum number of moments needed to maintain identification. When estimation and inference are based on the neural moments, which are the result of filtering moments through a trained neural net, confidence intervals have correct coverage in almost all cases, and departures from correct coverage are small.
    Keywords: neural networks, Laplace type estimators, simulation-based estimation
    JEL: C11 C12 C13 C45
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:1182&r=all
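    Sketch: the neural-moment idea on a toy MA(1) model: simulate (parameter, raw-moment) pairs, train a small net mapping raw moments to the parameter, and read off its output at the observed moments. All modeling choices here are illustrative.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)

      def moments(theta, n=200):               # raw moments of an MA(1) sample
          e = rng.normal(size=n + 1)
          x = e[1:] + theta * e[:-1]
          return [x.var(), np.mean(x[1:] * x[:-1]), np.mean(x**3), np.mean(x**4)]

      thetas = rng.uniform(-0.9, 0.9, 2000)
      M = np.array([moments(t) for t in thetas])
      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(M, thetas)
      print("neural-moment estimate:", net.predict([moments(0.5)])[0])  # truth 0.5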
  19. By: Ian M. Trotter; Luís A. C. Schmidt; Bruno C. M. Pinto; Andrezza L. Batista; Jéssica Pellenz; Maritza Isidro; Aline Rodrigues; Attawan G. S. Suela; Loredany Rodrigues
    Abstract: During the COVID-19 pandemic of 2019/2020, authorities have used temporary ad-hoc policy measures, such as lockdowns and mass quarantines, to slow its transmission. However, the consequences of widespread use of these unprecedented measures are poorly understood. To contribute to the understanding of the economic and human consequences of such policy measures, we therefore construct a mathematical model of an economy under the impact of a pandemic, select parameter values to represent the global economy under the impact of COVID-19, and perform numerical experiments by simulating a large number of possible policy responses. By varying the starting date of the policy intervention in the simulated scenarios, we find that the most effective policy intervention occurs around the time when the number of active infections is growing at its highest rate. The degree of the intervention, above a certain threshold, does not appear to have a great impact on the outcomes in our simulations, due to the strongly concave relationship we assume between production shortfall and reduction in the infection rate. Our experiments further suggest that the intervention should last until after the peak determined by the reduced infection rate. The model and its implementation, along with the general insights from our policy experiments, may help policymakers design effective emergency policy responses in the face of a serious pandemic, and contribute to our understanding of the relationship between economic growth and the spread of infectious diseases.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.13722&r=all
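    Sketch: a toy SIR-plus-output experiment varying the lockdown start date; the concave mapping from production shortfall to infection-rate reduction mirrors the paper's assumption, but every number here is invented.
      def simulate(start, shortfall, beta=0.3, gamma=0.1, days=360):
          s, i, output = 0.99, 0.01, 0.0
          for t in range(days):
              locked = t >= start
              b = beta * (1 - 0.8 * shortfall ** 0.5) if locked else beta
              new = b * s * i                  # new infections this period
              s, i = s - new, i + new - gamma * i
              output += 1.0 - (shortfall if locked else 0.0)
          return 1 - s, output                 # cumulative infections, output

      for start in (20, 60, 100):
          infected, output = simulate(start, 0.5)
          print(f"lockdown from day {start}: infected {infected:.3f}, output {output:.0f}")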
  20. By: Xinyue Cui; Zhaoyu Xu; Yue Zhou
    Abstract: In this essay, we comprehensively evaluate the feasibility and suitability of adopting machine learning models to forecast corporate fundamentals (i.e. earnings), comparing the predictions of our method with both analysts' consensus estimates and traditional statistical models. Our model proves capable of serving as a favorable auxiliary tool for analysts conducting predictions of company fundamentals. Compared with traditional statistical models widely adopted in the industry, such as logistic regression, our method achieves satisfactory improvements in both prediction accuracy and speed. We are also confident that the model has vast potential to evolve and hope that, in the near future, machine learning models can generate even better performance than professional analysts.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.13995&r=all
  21. By: Jan H van Heerden (Department of Economics, University of Pretoria, Pretoria, 0002, South Africa)
    Keywords: Lockdown, Covid-19, South African economy
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202042&r=all
  22. By: Jaqueson K. Galimberti
    Abstract: This paper evaluates how adaptive learning agents weight different pieces of information when forming expectations with a recursive least squares algorithm. The analysis is based on a renewed and more general non-recursive representation of the learning algorithm, namely, a penalized weighted least squares estimator, where a penalty term accounts for the effects of the learning initials. The paper then draws behavioral implications of alternative specifications of the learning mechanism, such as the cases with decreasing, constant, regime-switching, adaptive, and age-dependent gains, and offers practical recommendations on their computation. One key new finding is that, without a proper account of the uncertainty about the learning initials, a constant gain can generate a time-varying profile of weights given to past observations, particularly distorting the estimation and behavioral interpretation of this mechanism in small samples of data. In fact, simulations and empirical estimation of a Phillips curve model with learning indicate that this particular misspecification of the initials can lead to estimates where inflation rates are less responsive to expectations and output gaps than in reality, or "flatter" Phillips curves.
    Keywords: bounded rationality, expectations, adaptive learning, memory
    JEL: D83 D84 D90 E37 C32 C63
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2020-46&r=all
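    Sketch: the recursive least-squares learning algorithm run with a decreasing gain (which reproduces ordinary least squares) and a constant gain (which implicitly discounts old observations); the univariate setup is a simplification of the paper's.
      import numpy as np

      rng = np.random.default_rng(7)
      x = rng.normal(size=300)
      y = 2.0 * x + rng.normal(0, 0.5, 300)        # true coefficient is 2

      def rls(gain, b=0.0, R=1.0):
          for t, (xt, yt) in enumerate(zip(x, y), start=1):
              g = gain(t)
              R = R + g * (xt * xt - R)            # second-moment update
              b = b + g * (xt / R) * (yt - xt * b) # coefficient update
          return b

      print("decreasing gain 1/t:", rls(lambda t: 1 / t))
      print("constant gain 0.05:", rls(lambda t: 0.05))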
  23. By: Christian Dehm; Thai Nguyen; Mitja Stadje
    Abstract: We examine an expected utility maximization problem with an uncertain time horizon, a classical example being a life insurance contract due at the time of death. Life insurance contracts usually have an option-like form, leading to a non-concave optimization problem. We consider general utility functions, give necessary and sufficient optimality conditions, and derive a computationally tractable algorithm. A numerical study illustrates our findings. Our analysis suggests that the possible occurrence of premature stopping reduces the performance of the optimal portfolio compared to a setting without time-horizon uncertainty.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.13831&r=all

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.