nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒01‒22
ten papers chosen by



  1. Predict Forex Trend via Convolutional Neural Networks By Yun-Cheng Tsai; Jun-Hao Chen; Jun-Jie Wang
  2. Stochastic Dynamic Pricing for EV Charging Stations with Renewables Integration and Energy Storage By Chao Luo; Yih-Fang Huang; Vijay Gupta
  3. Macroeconomic Indicator Forecasting with Deep Neural Networks By Cook, Thomas R.; Smalter Hall, Aaron
  4. Double/debiased machine learning for treatment and structural parameters By Victor Chernozhukov; Denis Chetverikov; Mert Demirer; Esther Duflo; Christian Hansen; Whitney K. Newey; James Robins
  5. Dynamic Pricing and Energy Management Strategy for EV Charging Stations under Uncertainties By Chao Luo; Yih-Fang Huang; Vijay Gupta
  6. On Numerical Methods for Spread Options By Mesias Alfeus; Erik Schlögl
  7. PrivySense: Price Volatility based Sentiments Estimation from Financial News using Machine Learning By Raeid Saqur; Nicole Langballe
  8. "Robust Technical Trading with Fuzzy Knowledge-based Systems" By Masafumi Nakano; Akihiko Takahashi; Soichiro Takahashi
  9. Optimal Debt Management in a Liquidity Trap By Romanos Priftis; Rigas Oikonomou; Hafedh Bouakez
  10. Hospital Readmission is Highly Predictable from Deep Learning By Damien Échevin; Qing Li; Marc-André Morin

  1. By: Yun-Cheng Tsai; Jun-Hao Chen; Jun-Jie Wang
    Abstract: Deep learning is an effective approach to solving image recognition problems. People draw intuitive conclusions from trading charts; this study uses the characteristics of deep learning to train computers to imitate this kind of intuition about trading charts. The three steps involved are as follows: 1. Before training, we pre-process the input data from quantitative data to images. 2. We use a convolutional neural network (CNN), a type of deep learning, to train our trading model. 3. We evaluate the model's performance in terms of classification accuracy. The trading model obtained with this approach helps devise trading strategies; the main application is designed to help clients automatically obtain personalized trading strategies.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.03018&r=cmp
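    The distinctive step here is rendering a window of quantitative price data as an image before classification. Below is a minimal Python/PyTorch sketch of that idea; the rendering scheme, image size, and network shape are illustrative assumptions, not taken from the paper.

      import numpy as np
      import torch
      import torch.nn as nn

      def series_to_image(prices, size=32):
          """Render a price window as a binary 'chart': time on x, level on y."""
          p = (prices - prices.min()) / (np.ptp(prices) + 1e-9)
          img = np.zeros((size, size), dtype=np.float32)
          cols = np.linspace(0, size - 1, len(p)).astype(int)
          rows = ((1.0 - p) * (size - 1)).astype(int)
          img[rows, cols] = 1.0
          return img

      class ChartCNN(nn.Module):
          """Small CNN that classifies a chart image into trend classes (e.g. up/down)."""
          def __init__(self, n_classes=2):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Flatten(), nn.Linear(32 * 8 * 8, n_classes),
              )
          def forward(self, x):
              return self.net(x)

      # One random walk, rendered and scored (with untrained weights).
      img = series_to_image(np.cumsum(np.random.randn(64)))
      logits = ChartCNN()(torch.from_numpy(img)[None, None])

    Training would then proceed as in any image-classification task, with labels derived from the subsequent price move.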
  2. By: Chao Luo; Yih-Fang Huang; Vijay Gupta
    Abstract: This paper studies the problem of stochastic dynamic pricing and energy management policy for electric vehicle (EV) charging service providers. In the presence of renewable energy integration and an energy storage system, EV charging service providers must deal with multiple uncertainties: charging demand volatility, the inherent intermittency of renewable energy generation, and wholesale electricity price fluctuation. The motivation behind our work is to offer guidelines for charging service providers to determine proper charging prices and manage electricity so as to balance the competing objectives of improving profitability, enhancing customer satisfaction, and reducing impact on the power grid in spite of these uncertainties. We propose a new metric to assess the impact on the power grid without solving complete power flow equations. To protect service providers from severe financial losses, a safeguard of profit is incorporated in the model. Two algorithms, a stochastic dynamic programming (SDP) algorithm and a greedy algorithm (the benchmark), are applied to derive the pricing and electricity procurement policy, and a Pareto front of the multiobjective optimization is derived. Simulation results show that the SDP algorithm can achieve up to a 7% profit gain over the greedy algorithm. Additionally, we observe that the charging service provider is able to reshape spatial-temporal charging demand via pricing signals to reduce the impact on the power grid.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.02128&r=cmp
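    A deliberately tiny finite-horizon version of the SDP can make the recursion concrete: the state is a discretized storage level, the action is the posted charging price, and the expectation is taken over a handful of demand scenarios. All numbers and functional forms below are illustrative placeholders, not the paper's model (which also scores grid impact and includes the profit safeguard).

      import numpy as np

      H, S = 24, 11                          # horizon (hours), storage levels 0..10
      prices = np.linspace(0.1, 0.5, 5)      # candidate retail prices ($/kWh)
      wholesale = 0.2 + 0.1 * np.sin(np.arange(H) / H * 2 * np.pi)  # known curve
      demand_scen = np.array([5.0, 8.0, 11.0])                      # kWh scenarios

      V = np.zeros((H + 1, S))               # value function; terminal value is 0
      policy = np.zeros((H, S), dtype=int)
      for t in range(H - 1, -1, -1):         # backward induction over time
          for s in range(S):
              best, arg = -np.inf, 0
              for a, p in enumerate(prices):
                  val = 0.0
                  for d in demand_scen:      # expectation over random demand
                      served = d * max(0.0, 1.5 - 3.0 * p)   # price-sensitive load
                      from_store = min(float(s), served)
                      bought = served - from_store           # rest from wholesale
                      s_next = min(S - 1, int(s - from_store) + 1)  # slow recharge
                      val += (p * served - wholesale[t] * bought
                              + V[t + 1, s_next]) / len(demand_scen)
                  if val > best:
                      best, arg = val, a
              V[t, s], policy[t, s] = best, arg
      # policy[t, s] now gives the profit-maximizing price index per hour and state.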
  3. By: Cook, Thomas R. (Federal Reserve Bank of Kansas City); Smalter Hall, Aaron (Federal Reserve Bank of Kansas City)
    Abstract: Economic policymaking relies upon accurate forecasts of economic conditions. Current methods for unconditional forecasting are dominated by inherently linear models that exhibit model dependence and have high data demands. We explore deep neural networks as an opportunity to improve upon forecast accuracy with limited data while remaining agnostic as to functional form. We focus on predicting civilian unemployment using models based on four different neural network architectures. Each of these models outperforms benchmark models at short time horizons. One model, based on an encoder-decoder architecture, outperforms benchmark models at every forecast horizon (up to four quarters).
    Keywords: Neural networks; Forecasting; Macroeconomic indicators
    JEL: C14 C45 C53
    Date: 2017–09–29
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp17-11&r=cmp
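    The encoder-decoder architecture singled out in the abstract maps a window of observed history to a short sequence of future readings. A minimal PyTorch sketch of that architecture class follows; the layer sizes, the GRU cells, and the zero-seeded decoder input are assumptions for illustration, not the paper's specification.

      import torch
      import torch.nn as nn

      class EncoderDecoder(nn.Module):
          """Encode a window of indicators; decode a fixed forecast horizon."""
          def __init__(self, n_features, hidden=64, horizon=4):
              super().__init__()
              self.horizon = horizon
              self.encoder = nn.GRU(n_features, hidden, batch_first=True)
              self.decoder = nn.GRUCell(1, hidden)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                  # x: (batch, time, n_features)
              _, h = self.encoder(x)             # final hidden state summarizes history
              h = h.squeeze(0)
              y = torch.zeros(x.size(0), 1)      # seed input for the first step
              outs = []
              for _ in range(self.horizon):      # one decoder step per quarter ahead
                  h = self.decoder(y, h)
                  y = self.head(h)
                  outs.append(y)
              return torch.cat(outs, dim=1)      # (batch, horizon) point forecasts

      model = EncoderDecoder(n_features=8)
      forecast = model(torch.randn(16, 24, 8))   # 16 series, 24 months, 8 indicators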
  4. By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Denis Chetverikov (Institute for Fiscal Studies and UCLA); Mert Demirer (Institute for Fiscal Studies); Esther Duflo (Institute for Fiscal Studies); Christian Hansen (Institute for Fiscal Studies and Chicago GSB); Whitney K. Newey (Institute for Fiscal Studies and MIT); James Robins (Institute for Fiscal Studies)
    Abstract: We revisit the classic semiparametric problem of inference on a low-dimensional parameter θ0 in the presence of high-dimensional nuisance parameters η0. We depart from the classical setting by allowing for η0 to be so high-dimensional that the traditional assumptions, such as Donsker properties, which limit the complexity of the parameter space for this object, break down. To estimate η0, we consider the use of statistical or machine learning (ML) methods, which are particularly well-suited to estimation in modern, very high-dimensional cases. ML methods perform well by employing regularization to reduce variance and trading off regularization bias with overfitting in practice. However, both regularization bias and overfitting in estimating η0 cause a heavy bias in estimators of θ0 that are obtained by naively plugging ML estimators of η0 into estimating equations for θ0. This bias results in the naive estimator failing to be N^(-1/2)-consistent, where N is the sample size. We show that the impact of regularization bias and overfitting on estimation of the parameter of interest θ0 can be removed by using two simple, yet critical, ingredients: (1) using Neyman-orthogonal moments/scores that have reduced sensitivity with respect to nuisance parameters to estimate θ0, and (2) making use of cross-fitting, which provides an efficient form of data-splitting. We call the resulting set of methods double or debiased ML (DML). We verify that DML delivers point estimators that concentrate in an N^(-1/2)-neighborhood of the true parameter values and are approximately unbiased and normally distributed, which allows construction of valid confidence statements. The generic statistical theory of DML is elementary and relies on only weak theoretical requirements, which admit the use of a broad array of modern ML methods for estimating the nuisance parameters, such as random forests, lasso, ridge, deep neural nets, boosted trees, and various hybrids and ensembles of these methods. We illustrate the general theory by deriving the properties of DML applied to learn the main regression parameter in a partially linear regression model, the coefficient on an endogenous variable in a partially linear instrumental variables model, the average treatment effect and the average treatment effect on the treated under unconfoundedness, and the local average treatment effect in an instrumental variables setting. In addition to these theoretical applications, we also illustrate the use of DML in three empirical examples.
    Date: 2017–06–02
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:28/17&r=cmp
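    For the leading example, the partially linear model y = d·θ0 + g(X) + ε, the two ingredients reduce to a cross-fitted residual-on-residual regression. A compact sketch with scikit-learn follows; random forests stand in for whichever ML learner estimates the nuisance functions.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold

      def dml_plr(y, d, X, n_folds=5):
          """Cross-fitted DML estimate of theta in y = d*theta + g(X) + eps."""
          y_res, d_res = np.zeros(len(y)), np.zeros(len(d))
          for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
              # Nuisance functions are fit on the complement of each fold.
              m_y = RandomForestRegressor(random_state=0).fit(X[train], y[train])
              m_d = RandomForestRegressor(random_state=0).fit(X[train], d[train])
              y_res[test] = y[test] - m_y.predict(X[test])
              d_res[test] = d[test] - m_d.predict(X[test])
          theta = (d_res @ y_res) / (d_res @ d_res)
          psi = (y_res - theta * d_res) * d_res          # Neyman-orthogonal score
          se = np.sqrt(np.mean(psi ** 2)) / (np.mean(d_res ** 2) * np.sqrt(len(y)))
          return theta, se

      # Tiny synthetic check: the true theta is 0.5.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((2_000, 20))
      d = X[:, 0] + rng.standard_normal(2_000)
      y = 0.5 * d + np.sin(X[:, 1]) + rng.standard_normal(2_000)
      print(dml_plr(y, d, X))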
  5. By: Chao Luo; Yih-Fang Huang; Vijay Gupta
    Abstract: This paper presents a dynamic pricing and energy management framework for electric vehicle (EV) charging service providers. In setting charging prices, a service provider faces three uncertainties: the volatility of the wholesale electricity price, intermittent renewable energy generation, and spatial-temporal EV charging demand. The main objective of our work is to help charging service providers improve their total profit while enhancing customer satisfaction and maintaining power grid stability, taking those uncertainties into account. We employ a linear regression model to estimate the EV charging demand at each charging station, and introduce a quantitative measure of customer satisfaction. Both a greedy algorithm and a dynamic programming (DP) algorithm are employed to derive the optimal charging prices and determine how much electricity to purchase from the wholesale market in each planning horizon. Simulation results show that the DP algorithm achieves an increased profit (up to 9%) compared to the greedy benchmark under certain scenarios. Additionally, we observe that integrating low-cost energy storage into the system can not only improve profit but also smooth out charging price fluctuations, protecting end customers from the volatile wholesale market.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.02783&r=cmp
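    As an illustration of the demand-estimation step alone, the sketch below regresses observed charging demand on price and context features; the regressors and the synthetic data-generating process are hypothetical, chosen only to show the mechanics.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 1_000
      price = rng.uniform(0.1, 0.5, n)             # posted price ($/kWh)
      hour = rng.integers(0, 24, n)                # hour of day
      temp = rng.normal(15.0, 8.0, n)              # temperature (deg C)
      demand = (40.0 - 50.0 * price                # synthetic ground truth
                + 5.0 * np.sin(hour / 24 * 2 * np.pi)
                + 0.2 * temp + rng.normal(0.0, 2.0, n))

      X = np.column_stack([price, np.sin(hour / 24 * 2 * np.pi),
                           np.cos(hour / 24 * 2 * np.pi), temp])
      model = LinearRegression().fit(X, demand)
      print(model.coef_[0])   # price sensitivity; close to -50 on this data

    The fitted price coefficient is what the pricing optimization needs: it says how much demand an extra cent of price is expected to shed.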
  6. By: Mesias Alfeus (Finance Discipline Group, UTS Business School, University of Technology Sydney); Erik Schlögl (Finance Discipline Group, UTS Business School, University of Technology Sydney)
    Abstract: Spread options are multi-asset options whose payoffs depend on the difference of two underlying financial variables. In most cases, analytically closed-form solutions for pricing such payoffs are not available, and the application of numerical pricing methods turns out to be non-trivial. We consider several such non-trivial cases and explore the performance of the highly efficient numerical technique of Hurd and Zhou (2010), comparing it with Monte Carlo simulation and the lower bound approximation formula of Caldana and Fusai (2013). We show that the former is in essence an application of the two-dimensional Parseval identity. As application examples, we price spread options in a model where asset prices are driven by a multivariate normal inverse Gaussian (NIG) process and in a three-factor stochastic volatility model, as well as in models driven by other popular multivariate Lévy processes such as the variance gamma process, and discuss the price sensitivity with respect to volatility. We also consider examples in the fixed-income market, specifically on cross-currency interest rate spreads and on LIBOR/OIS spreads. For the FFT computation we use the FFTW library (see Frigo and Johnson (2010)) and document its appropriate usage to reconcile it with the MATLAB ifft2 counterpart.
    Date: 2018–01–01
    URL: http://d.repec.org/n?u=RePEc:uts:rpaper:388&r=cmp
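    For orientation, the Monte Carlo benchmark against which such transform methods are compared takes only a few lines. The sketch below prices a European spread option under correlated geometric Brownian motion, a simpler dynamic than the Lévy and stochastic volatility models treated in the paper.

      import numpy as np

      def spread_option_mc(S1, S2, K, r, T, sig1, sig2, rho, n=200_000, seed=0):
          """Monte Carlo price of max(S1(T) - S2(T) - K, 0) under correlated GBM."""
          rng = np.random.default_rng(seed)
          z1 = rng.standard_normal(n)
          z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
          s1 = S1 * np.exp((r - 0.5 * sig1 ** 2) * T + sig1 * np.sqrt(T) * z1)
          s2 = S2 * np.exp((r - 0.5 * sig2 ** 2) * T + sig2 * np.sqrt(T) * z2)
          payoff = np.maximum(s1 - s2 - K, 0.0)
          return np.exp(-r * T) * payoff.mean()

      print(spread_option_mc(110.0, 100.0, 5.0, 0.02, 1.0, 0.3, 0.25, 0.6))

    The transform methods trade this simplicity for speed and accuracy; per the abstract, the Hurd and Zhou (2010) approach replaces the simulation with a two-dimensional FFT of the joint characteristic function.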
  7. By: Raeid Saqur; Nicole Langballe
    Abstract: As machine learning ascends the peak of the computer science zeitgeist, the use of and experimentation with sentiment analysis on various forms of textual data has become pervasive. The effect is especially pronounced in formulating securities trading strategies, for a plethora of reasons including the relative ease of implementation and the abundance of academic research suggesting that automated sentiment analysis can be productively used in trading strategies. The source data for such analyzers spans a broad spectrum: social media feeds, micro-blogs, real-time news feeds, ex-post financial data, etc. The basic technique underlying these analyzers is supervised learning of sentiment classification: the classifier is trained on an annotated source corpus, and accuracy is measured by testing how well the classifier generalizes on unseen test data from the corpus. After training and validation of the fitted models, the classifiers are used to execute trading strategies, and the corresponding returns are compared with appropriate benchmark returns (e.g., S&P 500 returns). In this paper, we introduce a novel technique of using price volatilities to empirically determine the sentiment in news data, instead of the traditional reverse approach. We also perform meta sentiment analysis by evaluating the efficacy of existing sentiment classifiers and the precise definition of sentiment in a securities trading context. We scrutinize the efficacy of human-annotated sentiment classification and the tacit assumptions that introduce subjective bias into existing financial news sentiment classifiers.
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.00091&r=cmp
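    The inversion the authors propose, deriving sentiment labels from the market rather than from human annotators, can be sketched as follows. The sign of the next-day return stands in here for the paper's volatility-based labeling, and the headlines are placeholder data.

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      headlines = ["ACME beats earnings estimates",
                   "ACME faces regulatory probe",
                   "ACME announces share buyback",
                   "ACME cuts full-year guidance"]
      next_day_return = np.array([0.021, -0.034, 0.015, -0.027])

      labels = (next_day_return > 0).astype(int)     # 1 = market-positive
      clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
      clf.fit(headlines, labels)                     # train on market-derived labels
      print(clf.predict(["ACME raises guidance"]))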
  8. By: Masafumi Nakano (Graduate School of Economics, The University of Tokyo); Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Soichiro Takahashi (Graduate School of Economics, The University of Tokyo)
    Abstract: This paper proposes a framework for robust technical trading with fuzzy knowledge-based systems (KBSs). Our framework consists of two modules: (i) a module that prepares candidate investment proposals, and (ii) a module that evaluates them to construct a well-performing portfolio. The framework uses fuzzy KBSs to represent human expert knowledge: in the first module, three sets of fuzzy IF-THEN rules implement linguistic technical trading rules, designed specifically to perform well in different market phases; the second module exploits fuzzy logic to evaluate the prepared investment candidates in terms of multiple performance measures frequently used in practice. In an out-of-sample numerical experiment, our framework successfully generates a series of portfolios with satisfactory long-term records in the prolonged slump of the Japanese stock market.
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2017cf1053&r=cmp
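    A single fuzzy IF-THEN trading rule of the kind the first module encodes might look like the sketch below; the RSI indicator, the triangular membership shapes, and the breakpoints are illustrative assumptions, not the paper's rule base.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership: rises from a, peaks at b, falls to c."""
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def fuzzy_signal(rsi):
          """IF RSI is low THEN buy; IF medium THEN hold; IF high THEN sell.
          Defuzzified as a membership-weighted average of the rule outputs."""
          memberships = np.array([tri(rsi, -1, 20, 45),    # "low"
                                  tri(rsi, 30, 50, 70),    # "medium"
                                  tri(rsi, 55, 80, 101)])  # "high"
          actions = np.array([1.0, 0.0, -1.0])             # buy / hold / sell
          return memberships @ actions / (memberships.sum() + 1e-12)

      print(fuzzy_signal(25.0))   # RSI well below 50 -> lean long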
  9. By: Romanos Priftis (European Commission); Rigas Oikonomou (Université Catholique de Louvain); Hafedh Bouakez (HEC Montreal)
    Abstract: We study optimal debt management in the face of shocks that can drive the economy into a liquidity trap and call for an increase in public spending to mitigate the resulting recession. Our approach follows the literature on macroeconomic models of debt management, which we extend to the case where the zero lower bound on the short-term interest rate may bind. We wish to identify the conditions under which removing long-maturity government debt from the secondary market can be an optimal policy outcome. We show that the optimal debt-management strategy is to issue short-term debt if the government faces a sizable exogenous increase in public spending and if its initial liability is not very large. In this case, our results run against the standard prescription of the debt-management literature. In contrast, if the initial debt level is high, then issuing long-term government bonds is optimal. Finding the optimal portfolios requires solving the model with global numerical approximation methods. As a methodological contribution, we propose numerical procedures within the class of parameterized expectations algorithms (PEA) to solve the nonlinear model subject to the zero lower bound.
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:red:sed017:1316&r=cmp
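    To fix ideas about the algorithm class, here is a PEA loop for the textbook stochastic growth model: parameterize the conditional expectation in the Euler equation, simulate, regress the realized term on the current states, and damp the coefficient update. This is a sketch of the method only; the paper's model, with debt portfolios and an occasionally binding zero lower bound, is substantially harder.

      import numpy as np

      alpha, beta, gamma, delta = 0.36, 0.95, 2.0, 0.10   # technology/preferences
      rho, sigma, T, damp = 0.90, 0.01, 5_000, 0.5        # shocks and algorithm
      rng = np.random.default_rng(0)
      lnz = np.zeros(T)
      for t in range(1, T):                               # AR(1) productivity
          lnz[t] = rho * lnz[t - 1] + sigma * rng.standard_normal()

      b = np.array([np.log(1.0 / beta), 0.0, 0.0])        # log-linear E_t[.] guess
      for _ in range(200):
          k = np.empty(T + 1); c = np.empty(T); k[0] = 1.0
          for t in range(T):
              psi = np.exp(b @ [1.0, np.log(k[t]), lnz[t]])   # approx. expectation
              c[t] = (beta * psi) ** (-1.0 / gamma)           # from the Euler equation
              res = np.exp(lnz[t]) * k[t] ** alpha + (1 - delta) * k[t]
              c[t] = min(c[t], 0.99 * res)                    # feasibility clamp
              k[t + 1] = res - c[t]
          # Realized one-step-ahead counterpart of the conditional expectation.
          m = c[1:] ** -gamma * (alpha * np.exp(lnz[1:]) * k[1:T] ** (alpha - 1)
                                 + 1 - delta)
          A = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), lnz[:T - 1]])
          b_new, *_ = np.linalg.lstsq(A, np.log(m), rcond=None)
          if np.max(np.abs(b_new - b)) < 1e-6:                # fixed point reached
              break
          b = (1 - damp) * b + damp * b_new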
  10. By: Damien Échevin; Qing Li; Marc-André Morin
    Abstract: Hospital readmission is costly, and existing models are often poor or moderate at predicting readmission. We sought to develop and test a method that can be applied generally by hospitals. Such a tool can help clinicians identify patients who are more likely to be readmitted, either at early stages of the hospital stay or at discharge. Relying on state-of-the-art machine learning algorithms, we predict the probability of 30-day readmission at hospital admission and at hospital discharge using administrative data on 1,633,099 hospital stays from Quebec between 1995 and 2012. We measure the performance of the predictions with the area under the receiver operating characteristic curve (AUC). Deep learning produced excellent province-wide predictions of readmission, and random forests reached a very similar level. The AUC for these two algorithms exceeded 78% at hospital admission and 87% at hospital discharge, and diagnostic codes are among the most predictive variables. The ease of implementation of machine learning algorithms, together with objectively validated reliability, brings new possibilities for cost reduction in the health care system.
    Keywords: Machine learning; Logistic regression; Risk of re-hospitalisation; Healthcare costs
    JEL: I10 C52
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:lvl:criacr:1701&r=cmp
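    The evaluation pipeline, fitting a classifier on admission-time features and scoring it by AUC, is easy to reproduce on synthetic stand-in data; the Quebec administrative records are not public, and the features below are invented for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 20_000
      age = rng.integers(18, 95, n)           # patient age at admission
      prior = rng.poisson(1.0, n)             # admissions in the past year
      comorb = rng.integers(0, 8, n)          # count of diagnostic codes
      logit = -4.0 + 0.02 * age + 0.6 * prior + 0.3 * comorb
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 30-day readmission
      X = np.column_stack([age, prior, comorb])

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))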

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.