New Economics Papers on Computational Economics
Issue of 2021‒12‒06
27 papers chosen by
By: | Mao Guan; Xiao-Yang Liu |
Abstract: | Deep reinforcement learning (DRL) has been widely studied for the portfolio management task. However, it is challenging to understand a DRL-based trading strategy because of the black-box nature of deep neural networks. In this paper, we propose an empirical approach to explain the strategies of DRL agents for the portfolio management task. First, we use a linear model in hindsight as the reference model, which finds the best portfolio weights by assuming that the actual stock returns are known in advance. In particular, we use the coefficients of the linear model in hindsight as the reference feature weights. Second, for DRL agents, we use integrated gradients to define the feature weights, which are the coefficients between reward and features under a linear regression model. Third, we study the prediction power in two cases, single-step prediction and multi-step prediction. In particular, we quantify the prediction power by calculating the linear correlations between the feature weights of a DRL agent and the reference feature weights, and similarly for machine learning methods. Finally, we evaluate a portfolio management task on the Dow Jones 30 constituent stocks from 01/01/2009 to 09/01/2021. Our approach empirically reveals that a DRL agent exhibits stronger multi-step prediction power than machine learning methods. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.03995&r= |
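The prediction-power measure described in this abstract is a linear correlation between two feature-weight vectors. Below is a minimal sketch, not the authors' code; the weight vectors are assumed to be given (e.g. from integrated gradients and from the linear model in hindsight).

```python
import numpy as np

def prediction_power(drl_feature_weights, reference_feature_weights):
    """Pearson correlation between a DRL agent's feature weights (e.g. from
    integrated gradients) and the reference feature weights from a linear
    model in hindsight.  Both arguments are 1-D arrays of equal length."""
    drl = np.asarray(drl_feature_weights, dtype=float)
    ref = np.asarray(reference_feature_weights, dtype=float)
    return np.corrcoef(drl, ref)[0, 1]

# Hypothetical example: weights over 30 features (e.g. Dow Jones 30 constituents)
rng = np.random.default_rng(0)
ref_w = rng.normal(size=30)
drl_w = 0.8 * ref_w + 0.2 * rng.normal(size=30)   # an agent that roughly tracks the reference
print(f"prediction power (correlation): {prediction_power(drl_w, ref_w):.3f}")
```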
By: | Chaeshick Chung (Department of Economics, Sogang University); Sukjin Park (Department of Economics, Sogang University) |
Abstract: | This paper applies the Dual-Stage Attention-Based Recurrent Neural Network (DA-RNN) model to predict future price movements using microstructure variables. The biggest feature of the DA-RNN model is that it adaptively selects relevant variables according to market conditions. We analyze whether microstructure variables have predictive power for future price movements, and what factors influence this predictive power. We find that microstructure variables possess predictive power against the direction of future price movements. This predictive power depends on how many uninformed traders exist in the market. Moreover, the importance of microstructure variables is negatively related to market liquidity. Thus, while microstructure variables are more important in severe market conditions with high transaction costs, the effect of trading on price dynamics depends on market structure. |
Keywords: | Attention Mechanism, Deep Learning, Machine Learning, Market Microstructure, Informed Trading |
JEL: | G10 G14 G17 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:sgo:wpaper:2108&r= |
By: | Kaur, Karman; Mehar, Mamta; Prasad, Narayan |
Keywords: | Crop Production/Industries |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:ags:iaae21:315051&r= |
By: | Zechu Li; Xiao-Yang Liu; Jiahao Zheng; Zhaoran Wang; Anwar Walid; Jian Guo |
Abstract: | Machine learning techniques are playing increasingly important roles in financial market investment. However, quantitative modeling in finance with conventional supervised learning approaches has a number of limitations. The development of deep reinforcement learning techniques is partially addressing these issues. Unfortunately, the steep learning curve and the difficulty of quick modeling and agile development are impeding finance researchers from using deep reinforcement learning in quantitative trading. In this paper, we propose an RLOps in finance paradigm and present a FinRL-Podracer framework to accelerate the development pipeline of deep reinforcement learning (DRL)-driven trading strategies and to improve both trading performance and training efficiency. FinRL-Podracer is a cloud solution that features high performance and high scalability and promises continuous training, continuous integration, and continuous delivery of DRL-driven trading strategies, facilitating a rapid transformation from algorithmic innovations into a profitable trading strategy. First, we propose a generational evolution mechanism with an ensemble strategy to improve the trading performance of a DRL agent, and schedule the training of a DRL algorithm onto a GPU cloud via multi-level mapping. Then, we carry out the training of DRL components with high-performance optimizations on GPUs. Finally, we evaluate the FinRL-Podracer framework for a stock trend prediction task on an NVIDIA DGX SuperPOD cloud. FinRL-Podracer outperforms three popular DRL libraries (Ray RLlib, Stable Baselines3, and FinRL), with improvements of 12% to 35% in annual return, 0.1 to 0.6 in Sharpe ratio, and 3x to 7x speed-up in training time. We show the high scalability by training a trading agent in 10 minutes with 80 A100 GPUs, on NASDAQ-100 constituent stocks with minute-level data over 10 years. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.05188&r= |
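The "generational evolution mechanism with an ensemble strategy" mentioned above can be pictured with a generic population-based training loop: train a population of agents, keep an elite ensemble, and seed the next generation with mutated hyperparameters. This is a schematic sketch, not the FinRL-Podracer implementation; train_and_evaluate is a placeholder for actual DRL training.

```python
import random

def train_and_evaluate(hyperparams):
    """Placeholder for training a DRL agent with the given hyperparameters and
    returning its validation-period score (here faked with a noisy function)."""
    return -abs(hyperparams["lr"] - 3e-4) * 1e4 + random.gauss(0, 0.1)

def evolve(population, n_generations=5, keep=4):
    for _ in range(n_generations):
        scored = sorted(population, key=train_and_evaluate, reverse=True)
        elite = scored[:keep]                                  # ensemble of the best agents
        population = elite + [
            {"lr": parent["lr"] * random.uniform(0.5, 2.0)}    # mutate a parent's hyperparameters
            for parent in random.choices(elite, k=len(population) - keep)
        ]
    return elite

random.seed(0)
init_pop = [{"lr": 10 ** random.uniform(-5, -2)} for _ in range(16)]
print(evolve(init_pop))
```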
By: | Corredera, Alberto; Ruiz Mora, Carlos |
Abstract: | We present a data-driven framework for optimal scenario selection in stochastic optimization with applications in power markets. The proposed methodology relies on the existence of auxiliary information and the use of machine learning techniques to narrow the set of possible realizations (scenarios) of the variables of interest. In particular, we implement a novel validation algorithm that allows optimizing each machine learning hyperparameter to further improve the prescriptive power of the resulting set of scenarios. Supervised machine learning techniques are examined, including kNN and decision trees, and the validation process is adapted to work with time-dependent datasets. Moreover, we extend the proposed methodology to work with unsupervised techniques with promising results. We test the proposed methodology in a realistic power market application: optimal trading strategy in forward and spot markets for an electricity retailer under uncertain spot prices. Results indicate that the retailer can greatly benefit from the proposed data-driven methodology and improve its market performance. Moreover, we perform an extensive set of numerical simulations to analyze under which conditions the best machine learning hyperparameters, in terms of prescriptive performance, differ from those that provide the best predictive accuracy. |
Keywords: | OR in energy; Data-Driven; Electricity Retailer; Hyperparameter Selection; Machine Learning |
Date: | 2021–11–25 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:33693&r= |
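The scenario-narrowing idea above can be illustrated with a small kNN sketch: pick the historical price realizations whose auxiliary covariates are closest to today's. All names and data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_scenarios(aux_history, price_history, aux_today, k=20):
    """Return the k historical price realizations whose auxiliary covariates
    (e.g. demand forecasts, wind forecasts, calendar features) are closest to
    today's covariates.  These k rows form the scenario set handed to the
    stochastic optimization problem."""
    nn = NearestNeighbors(n_neighbors=k).fit(aux_history)
    _, idx = nn.kneighbors(aux_today.reshape(1, -1))
    return price_history[idx.ravel()]

# Illustrative data: 500 past days, 5 auxiliary covariates, 24 hourly spot prices
rng = np.random.default_rng(1)
aux_hist = rng.normal(size=(500, 5))
prices_hist = rng.lognormal(mean=3.5, sigma=0.3, size=(500, 24))
scenarios = select_scenarios(aux_hist, prices_hist, aux_today=rng.normal(size=5))
print(scenarios.shape)   # (20, 24): 20 equally weighted hourly price scenarios
```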
By: | Klaus-Peter Hellwig |
Abstract: | In this paper I assess the ability of econometric and machine learning techniques to predict fiscal crises out of sample. I show that the econometric approaches used in many policy applications cannot outperform a simple heuristic rule of thumb. Machine learning techniques (elastic net, random forest, gradient boosted trees) deliver significant improvements in accuracy. Performance of machine learning techniques improves further, particularly for developing countries, when I expand the set of potential predictors and make use of algorithmic selection techniques instead of relying on a small set of variables deemed important by the literature. There is considerable agreement across learning algorithms in the set of selected predictors: Results confirm the importance of external sector stock and flow variables found in the literature but also point to demographics and the quality of governance as important predictors of fiscal crises. Fiscal variables appear to have less predictive value, and public debt matters only to the extent that it is owed to external creditors. |
Date: | 2021–05–27 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:2021/150&r= |
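For illustration only, the comparison described above can be mimicked on synthetic data with the three machine learning methods mentioned (elastic net, random forest, gradient boosted trees), scored out of sample by AUC; this is not the paper's data or specification.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a country-year panel with a rare binary fiscal-crisis label
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0, max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "gradient boosted trees": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:>22s}: out-of-sample AUC = {auc:.3f}")
```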
By: | Anders Nõu; Darya Lapitskaya; Mustafa Hakan Eratalay; Rajesh Sharma |
Abstract: | For stock market predictions, the essence of the problem is usually predicting the magnitude and direction of the stock price movement as accurately as possible. There are different approaches (e.g., econometrics and machine learning) for predicting stock returns. However, it is non-trivial to find an approach which works best. In this paper, we make a thorough analysis of the predictive accuracy of different machine learning and econometric approaches for predicting the returns and volatilities on the OMX Baltic Benchmark price index, which is a relatively less researched stock market. Our results show that the machine learning methods, namely support vector regression and k-nearest neighbours, predict the returns better than autoregressive moving average models for most of the metrics, while for the other approaches the results were not conclusive. Our analysis also highlights that the training and testing sample size plays an important role in the outcome of machine learning approaches. |
Keywords: | machine learning, neural networks, autoregressive moving average, generalized autoregressive conditional heteroskedasticity |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:mtk:febawb:135&r= |
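A rough sketch of the kind of horse race described above, on synthetic returns rather than the OMX Baltic Benchmark data: support vector regression and k-nearest neighbours on lagged returns against an ARMA(1,1) benchmark, compared by RMSE. All parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily return series standing in for the index returns
rng = np.random.default_rng(2)
e = rng.normal(scale=0.01, size=1301)
r = e[1:] + 0.3 * e[:-1]                 # MA(1)-type returns, length 1300
train, test = r[:1000], r[1000:]

def lagged(x, p=5):
    """Build a matrix of p lagged values and the corresponding targets."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

X_tr, y_tr = lagged(train)
X_te, y_te = lagged(np.concatenate([train[-5:], test]))

preds = {
    "SVR": SVR(C=1.0, epsilon=1e-4).fit(X_tr, y_tr).predict(X_te),
    "kNN": KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr).predict(X_te),
    # note: the ARMA benchmark produces a static multi-step path here, while the
    # ML models use realized lags, so the comparison is only illustrative
    "ARMA(1,1)": ARIMA(train, order=(1, 0, 1)).fit().forecast(steps=len(y_te)),
}
for name, p in preds.items():
    print(f"{name:>9s}: RMSE = {np.sqrt(mean_squared_error(y_te, p)):.5f}")
```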
By: | Bluwstein, Kristina; Buckmann, Marcus; Joseph, Andreas; Kapadia, Sujit; Şimşek, Özgür |
Abstract: | We develop early warning models for financial crisis prediction by applying machine learning techniques to macrofinancial data for 17 countries over 1870–2016. Most nonlinear machine learning models outperform logistic regression in out-of-sample predictions and forecasting. We identify economic drivers of our machine learning models using a novel framework based on Shapley values, uncovering nonlinear relationships between the predictors and crisis risk. Throughout, the most important predictors are credit growth and the slope of the yield curve, both domestically and globally. A flat or inverted yield curve is of most concern when nominal interest rates are low and credit growth is high. JEL Classification: C40, C53, E44, F30, G01 |
Keywords: | credit growth, machine learning, Shapley values, yield curve, financial crises, financial stability |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212614&r= |
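Shapley-value attributions of the sort used above can be computed with the shap package for any tree-based model; the sketch below uses synthetic data and a gradient boosted classifier as a stand-in for the paper's models and data.

```python
import numpy as np
import shap                                    # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for the macrofinancial panel (columns could be credit growth,
# yield-curve slope, etc.); the real data cover 17 countries over 1870-2016.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # one attribution per observation and feature
global_importance = np.abs(shap_values).mean(axis=0)
print("mean |SHAP| per predictor:", np.round(global_importance, 3))
```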
By: | Simon Paetzold; Mizuho Kida |
Abstract: | The Financial Action Task Force’s gray list publicly identifies countries with strategic deficiencies in their AML/CFT regimes (i.e., in their policies to prevent money laundering and the financing of terrorism). How much gray-listing affects a country’s capital flows is of interest to policy makers, investors, and the Fund. This paper estimates the magnitude of the effect using an inferential machine learning technique. It finds that gray-listing results in a large and statistically significant reduction in capital inflows. |
Keywords: | capital flows, AML/CFT, gray list, machine learning, emerging market economies; inferential machine learning technique; gray-listing affect; analysis using machine learning; gray list; coefficient estimate; Capital flows; Capital inflows; Anti-money laundering and combating the financing of terrorism (AML/CFT); Machine learning; Foreign direct investment; Global |
Date: | 2021–05–27 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:2021/153&r= |
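The paper describes its estimator only as an "inferential machine learning technique"; one common choice for this kind of treatment-effect question is double/debiased machine learning with cross-fitting, sketched below on purely illustrative data. The estimator, variable names, and data are assumptions, not the paper's.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def partialling_out_effect(X, d, y, n_splits=5, seed=0):
    """Double/debiased-ML style estimate of the effect of a treatment d
    (e.g. a gray-listing dummy) on an outcome y (e.g. capital inflows / GDP),
    controlling flexibly for covariates X via cross-fitted random forests."""
    y_res, d_res = np.zeros_like(y, dtype=float), np.zeros_like(d, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        y_res[test] = y[test] - RandomForestRegressor(random_state=seed).fit(X[train], y[train]).predict(X[test])
        d_res[test] = d[test] - RandomForestRegressor(random_state=seed).fit(X[train], d[train]).predict(X[test])
    return sm.OLS(y_res, sm.add_constant(d_res)).fit(cov_type="HC1")

# Illustrative data only (not the paper's sample)
rng = np.random.default_rng(3)
X = rng.normal(size=(800, 12))
d = (X[:, 0] + rng.normal(size=800) > 0).astype(float)        # "gray-listed" indicator
y = -0.03 * d + X @ rng.normal(scale=0.01, size=12) + rng.normal(scale=0.02, size=800)
print(partialling_out_effect(X, d, y).params[1])               # estimated effect of gray-listing
```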
By: | Alexander Kell |
Abstract: | A transition to a low-carbon electricity supply is crucial to limit the impacts of climate change. Reducing carbon emissions could help prevent the world from reaching a tipping point, where runaway emissions are likely. Runaway emissions could lead to extremes in weather conditions around the world, especially in problematic regions unable to cope with these conditions. However, the movement to a low-carbon energy supply cannot happen instantaneously due to the existing fossil-fuel infrastructure and the requirement to maintain a reliable energy supply. A low-carbon transition is therefore required; however, the decisions various stakeholders should make over the coming decades to reduce these carbon emissions are not obvious. This is due to many long-term uncertainties, such as electricity, fuel and generation costs, human behaviour and the size of electricity demand. A well-choreographed low-carbon transition is therefore required between all of the heterogeneous actors in the system, as opposed to changing the behaviour of a single, centralised actor. The objective of this thesis is to create a novel, open-source agent-based model to better understand the manner in which the whole electricity market reacts to different factors using state-of-the-art machine learning and artificial intelligence methods. In contrast to other works, this thesis looks at both the long-term and short-term impacts that different behaviours have on the electricity market by using these state-of-the-art methods. |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.00987&r= |
By: | Emanuel Kohlscheen |
Abstract: | This paper examines the drivers of CPI inflation through the lens of a simple, but computationally intensive machine learning technique. More specifically, it predicts inflation across 20 advanced countries between 2000 and 2021, relying on 1,000 regression trees that are constructed based on six key macroeconomic variables. This agnostic, purely data driven method delivers (relatively) good outcome prediction performance. Out-of-sample root mean square errors (RMSE) systematically beat even the in-sample benchmark econometric models, with a 28% RMSE reduction relative to a naïve AR(1) model and an 8% RMSE reduction relative to OLS. Overall, the results highlight the role of expectations for inflation outcomes in advanced economies, even though their importance appears to have declined somewhat during the last 10 years. |
Keywords: | expectations, forecast, inflation, machine learning, oil price, output gap, Phillips curve |
JEL: | E27 E30 E31 E37 E52 F41 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:bis:biswps:980&r= |
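The setup above (a forest of 1,000 regression trees on a handful of macro predictors, benchmarked against a naive AR(1) by RMSE) can be mocked up as follows; the data are synthetic and the predictors are placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 6))                        # 6 macro variables (expectations, output gap, ...)
pi = 2.0 + 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)

split = 240
rf = RandomForestRegressor(n_estimators=1000, random_state=0).fit(X[:split], pi[:split])
rf_pred = rf.predict(X[split:])

# Naive AR(1) benchmark: regress inflation on its own lag over the training window
phi = np.polyfit(pi[:split - 1], pi[1:split], 1)
ar1_pred = np.polyval(phi, pi[split - 1:-1])

rmse = lambda p: np.sqrt(mean_squared_error(pi[split:], p))
print(f"RF RMSE:   {rmse(rf_pred):.3f}")
print(f"AR(1) RMSE: {rmse(ar1_pred):.3f}")
print(f"reduction: {100 * (1 - rmse(rf_pred) / rmse(ar1_pred)):.1f}%")
```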
By: | Davide Cividino (Polytechnic University of Turin); Rebecca Westphal (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC)); Didier Sornette (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC); Swiss Finance Institute; Southern University of Science and Technology; Tokyo Institute of Technology) |
Abstract: | We present an agent-based model (ABM) of a financial market with n > 1 risky assets, whose price dynamics result from the interaction between rational fundamentalists and trend following imitative noise traders. The interactions and opinion formation of the noise traders are described by an extended O(n) vector model, which generalises the Ising model used previously in ABMs with a single risky asset. Efficient rejection-free transition probabilities are derived to describe realistic investment decisions at the micro level of individual noise traders. The ABM is validated by testing for several characteristics of financial markets such as volatility clustering and fat tails of the distribution of returns. Furthermore, the model is able to account for the development of endogenous bubbles and crashes. We distinguish three different regimes depending on the traders’ propensity to imitate others. In the subcritical regime of the O(n) vector model, the traders’ opinions are idiosyncratic and no bubbles emerge. Around the critical value of the O(n) vector model, cross-sectionally asynchronous bubbles emerge. Above the critical value, small random price fluctuations may be amplified by noise traders herding into a given asset, which then impels fundamentalists to re-equilibrate their more valuable portfolios that have become unbalanced, thus pushing the prices of the other assets upward. The resulting transient increase of the momenta of these assets triggers a reorientation of the noise traders’ portfolios that further amplifies the burgeoning bubbles. We have thus identified a mechanism by which the cautious risk-averse contrarian rebalancing strategy of fundamentalists leads to systemic risks in the form of cascades of bubbles spreading across the whole financial market. |
Keywords: | financial bubbles; agent-based model; arbitrageurs; noise traders; fundamentalists; multi-assets; O(n) vector model; synchronisation |
JEL: | C63 G01 G17 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2176&r= |
By: | Yue-Jun Zhang (Business School, Hunan University, Changsha 410082, China; Center for Resource and Environmental Management, Hunan University, Changsha 410082, China); Han Zhang (Business School, Hunan University, Changsha 410082, China; Center for Resource and Environmental Management, Hunan University, Changsha 410082, China); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa) |
Abstract: | Forecasting the artificial intelligence index returns is of great significance for financial market stability and the development of the artificial intelligence industry. To provide investors with a more reliable reference for artificial intelligence index investment, this paper selects the Nasdaq CTA Artificial Intelligence and Robotics (AI) Index as the research target, and proposes novel hybrid methods to forecast the AI index returns by considering its nonlinear and time-varying characteristics. Specifically, this paper uses the ensemble empirical mode decomposition (EEMD) method to decompose the AI index returns, and combines the least square support vector machine approach together with the particle swarm optimization (PSO-LSSVM) method and the generalized autoregressive conditional heteroskedasticity (GARCH) model to construct novel hybrid forecasting methods. The empirical results indicate that: first, the decomposition and integration models usually produce better forecasting accuracy than the single forecasting models, due to the complicated features of the non-decomposed data. Second, the newly proposed hybrid forecasting method (i.e., the EEMD-PSO-LSSVM-GARCH model), which combines the advantages of traditional econometric models and machine learning techniques, can yield the optimal forecasting performance for the AI index returns. |
Keywords: | AI index return forecasting, PSO-LSSVM model, GARCH model, Decomposition and integration model, Combination model |
JEL: | Q43 G15 E37 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:202182&r= |
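The decomposition-and-integration pipeline described above can be sketched as: decompose the return series with EEMD, forecast each intrinsic mode function, and sum the forecasts. In the sketch below an SVR stands in for the paper's PSO-tuned LSSVM and the GARCH component is omitted; data and parameters are illustrative only.

```python
import numpy as np
from PyEMD import EEMD                         # pip install EMD-signal
from sklearn.svm import SVR

def lagged(x, p=5):
    """Build a matrix of p lagged values and the corresponding targets."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

# Synthetic stand-in for daily AI index returns
rng = np.random.default_rng(7)
returns = rng.normal(scale=0.01, size=600)

# 1) decompose the return series into intrinsic mode functions (IMFs)
imfs = EEMD(trials=50).eemd(returns)

# 2) forecast each IMF one step ahead (SVR here stands in for PSO-LSSVM),
# 3) re-integrate by summing the component forecasts
forecast = 0.0
for imf in imfs:
    X, y = lagged(imf)
    model = SVR(C=1.0, epsilon=1e-5).fit(X, y)
    forecast += model.predict(imf[-5:].reshape(1, -1))[0]
print("next-period return forecast:", forecast)
```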
By: | Jeronymo Marcondes Pinto; Jennifer L. Castle |
Abstract: | Forecasting economic indicators is an important task for analysts. However, many indicators suffer from structural breaks, leading to forecast failure. Methods that are robust following a structural break have been proposed in the literature, but they come at a cost: an increase in forecast error variance. We propose a method to select between a set of robust and non-robust forecasting models. Our method uses time-series clustering to identify possible structural breaks in a time series, and then switches between forecasting models depending on the series dynamics. We perform a rigorous empirical evaluation on 400 simulated series with an artificial structural break and on real economic series: Industrial Production and Consumer Prices for all Western European countries available in the OECD database. Our results show that the proposed method statistically outperforms benchmarks in forecast accuracy in most scenarios, particularly at short horizons. |
Keywords: | Machine Learning, Forecasting, Structural Breaks, Model Selection, Cluster Analysis |
Date: | 2021–10–13 |
URL: | http://d.repec.org/n?u=RePEc:oxf:wpaper:950&r= |
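A toy version of the switching idea above: cluster rolling-window features of the series, flag a suspected break when the most recent window lands in a minority cluster, and switch between a break-robust forecast and a full-sample forecast. The paper's clustering features, forecasting models, and switching rule are richer; everything below is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def robust_or_full(y, window=12, n_clusters=2):
    """Cluster rolling-window means/volatilities of a series; if the most recent
    window falls in a different cluster than the bulk of history, forecast with
    a break-robust device (the mean of the last window), otherwise with the
    full-sample mean."""
    feats = np.array([[y[i:i + window].mean(), y[i:i + window].std()]
                      for i in range(len(y) - window + 1)])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    recent, majority = labels[-1], np.bincount(labels[:-1]).argmax()
    if recent != majority:                       # suspected structural break
        return y[-window:].mean(), "robust"
    return y.mean(), "full-sample"

# Series with an artificial break in its mean partway through
rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(3, 1, 24)])
print(robust_or_full(y))
```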
By: | Heimann, Tobias; Delzeit, Ruth |
Keywords: | Marketing, Resource /Energy Economics and Policy |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:ags:iaae21:315270&r= |
By: | Saiz, Lorena; Ashwin, Julian; Kalamara, Eleni |
Abstract: | This paper shows that newspaper articles contain timely economic signals that can materially improve nowcasts of real GDP growth for the euro area. Our text data is drawn from fifteen popular European newspapers, which collectively represent the four largest euro area economies, and are machine translated into English. Daily sentiment metrics are created from these news articles and we assess their value for nowcasting. By comparing to competitive and rigorous benchmarks, we find that newspaper text is helpful in nowcasting GDP growth, especially in the first half of the quarter when other lower-frequency soft indicators are not available. The choice of the sentiment measure matters when tracking economic shocks such as the Great Recession and the Great Lockdown. Non-linear machine learning models can help capture extreme movements in growth, but require sufficient training data to be effective, so they become more useful later in our sample. JEL Classification: C43, C45, C55, C82, E37 |
Keywords: | business cycles, COVID-19, forecasting, machine learning, text analysis |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212616&r= |
By: | Olga Diukanova (European Commission - JRC); Mariana Chioncel (University of Bucharest) |
Abstract: | This study evaluates the potential economic impacts of Research & Development (R&D) investments in Romania during the 2021-2027 policy cycle. The assessment is based on three distinct R&D investment scenarios: (1) a 2% Gross Domestic Expenditure on R&D (GERD) intensity target achieved by 2029, with an equal split between public and private investment, in accordance with the R&D investment targets declared in the national strategic documents; (2) a gradual increase of GERD intensity to 2.25% by 2029, with public investment of 1.25% of GDP (in line with the new ERA target); and (3) 0.48% of GDP, a 'business as usual' scenario (following the same investment pattern as in past years). The results of computer simulations with the RHOMOLO model, which is a dynamic multi-regional computable general equilibrium (CGE) model developed by the Joint Research Centre (JRC) of the European Commission, show that the most pronounced GDP impacts in Romania would be achieved with the highest intensity of R&D policy funding. Aside from the capital city region RO32, the less developed regions RO12, RO22, RO31 and RO41 exhibit the highest GDP multipliers across Romanian regions, which indicates the high potential of R&D funding in these regions. The strongest spillover effects emerge from the regions that in certain years make substantial R&D domestic private and public investments relative to the size of their economies. Although R&D investments augment factor productivity that depreciates gradually in the absence of continuous funding, the strength of lagged effects of R&D funding depends on the intensity of R&D investments rather than on the source of funding. However, in the short run, the economic cost for Romania is determined by the source of R&D investments: despite their small size, the EU investments, which are largely financed by other EU member states, produce quite sizeable GDP multipliers in Romania compared to the national public and private investments. |
Keywords: | RHOMOLO, Cohesion Policy, regional growth, regional development, Romania. |
JEL: | C68 R13 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:ipt:termod:202110&r= |
By: | Bhumjai Tangsawasdirat; Suranan Tanpoonkiat; Burasakorn Tangsatchanan |
Abstract: | This paper aims to provide an introduction to the Credit Risk Database (CRD), a collection of financial and non-financial data for SME credit risk analysis, for Thailand. Aligning with the Bank of Thailand's (BOT) strategic plan to develop the data ecosystem and help reduce the asymmetric information problem in the financial sector, the CRD is an initiative to effectively utilize data already collected from financial institutions as a part of the BOT's supervisory mandate. Our first use case is intended to help improve financial access for SMEs, by building credit risk models that can work as a complementary tool to help financial institutions and the Credit Guarantee Corporation assess SMEs' financial prospects in parallel with internal credit scores. Focusing on SMEs who are new borrowers, we use only SMEs' financial and non-financial data as our explanatory variables while disregarding past default-related data such as loan repayment behavior. Credit risk models of various methodologies are then built from CRD data to allow financial institutions to conduct effective risk-based pricing, offering different sets of interest rates and loan terms. Statistical methods (i.e. logit regression and credit scoring) and machine learning methods (i.e. decision tree and random forest) are used to build credit risk models that can help quantify an SME's one-year forward probability of default. Out-of-sample prediction results indicate that the statistical and machine learning models yield reasonably accurate probability of default predictions, with the maximum Area under the ROC Curve (AUC) at approximately 70-80%. The model with the best performance, as measured by the maximum AUC, is the random forest model. However, the credit scoring model that is developed from logistic regression of weight-of-evidence variables is more user-friendly for credit loan providers to interpret and develop practical applications from, achieving the second-best AUC. |
Keywords: | Credit Risk Database; Credit Score; Credit Risk Assessment; Credit Scoring Model; Thai SMEs |
JEL: | C52 C53 C55 D81 G21 G32 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:pui:dpaper:168&r= |
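A minimal sketch of the model comparison reported above, with a logistic regression and a random forest scored by out-of-sample AUC on synthetic SME data; the weight-of-evidence transformation and the paper's actual features and sample are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for SME financial/non-financial features and a one-year default flag
X, y = make_classification(n_samples=5000, n_features=25, n_informative=10,
                           weights=[0.93, 0.07], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000)).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

for name, m in [("logistic regression", logit), ("random forest", forest)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name:>20s}: AUC = {auc:.3f}")
```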
By: | Lilian N. Rolim; Carolina Troncoso Baltar; Gilberto Tadeu Lima |
Abstract: | We investigate the effect of labor productivity growth and workers' bargaining power on income distribution in a novel agent-based macroeconomic model mostly inspired by the post-Keynesian literature. Its main novelties are a wage bargaining process and a mark-up adjustment rule featuring a broader set of dimensions and coupled channels of interaction. The former allows nominal wages to be endogenously determined by interactions involving firms and workers, which are mediated by workers' bargaining power. The latter assumes that firms also consider their position relative to workers (through their unit costs) to set their mark-up rates, thus linking the evolution of nominal wages in the bargaining process and labor productivity growth to the functional income distribution. This has implications for the personal income distribution through a three-class structure for households. The model reproduces numerous stylized facts, including those concerning the income distribution dynamics. By capturing the inherent social conflict over the distribution of income, our results show the importance of the coevolutionary interaction between workers' bargaining power and productivity growth to the dynamics of income inequality and to its relationship with output. This leads to a policy dilemma between promoting productivity growth and improving income equality, which can, nonetheless, be attenuated by combining policies and institutions that sustain workers' strength with policies that stimulate technological innovation and productivity growth. |
Keywords: | Agent-based modeling; labor productivity; wage bargaining; personal income inequality; functional income inequality |
JEL: | C63 D31 D33 E2 |
Date: | 2021–11–23 |
URL: | http://d.repec.org/n?u=RePEc:spa:wpaper:2021wpecon27&r= |
By: | Jaydip Sen; Abhishek Dutta; Sidra Mehtab |
Abstract: | Predicting future stock prices and their movement patterns is a complex problem. Hence, building a portfolio of capital assets using the predicted prices to achieve the optimization between its return and risk is an even more difficult task. This work has carried out an analysis of the time series of the historical prices of the top five stocks from the nine different sectors of the Indian stock market from January 1, 2016, to December 31, 2020. Optimum portfolios are built for each of these sectors. For predicting future stock prices, a long short-term memory (LSTM) model is also designed and fine-tuned. Five months after the portfolio construction, the actual and the predicted returns and risks of each portfolio are computed. The predicted and the actual returns of each portfolio are found to be high, indicating the high precision of the LSTM model. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.04709&r= |
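A bare-bones version of the price-prediction component described above, using a small Keras LSTM on sliding windows of a synthetic price series; the architecture, look-back window, and data are illustrative assumptions rather than the paper's fine-tuned model.

```python
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

def make_windows(prices, lookback=30):
    """Turn a price series into (lookback-day window, next-day price) pairs."""
    X = np.array([prices[i:i + lookback] for i in range(len(prices) - lookback)])
    y = prices[lookback:]
    return X[..., None], y                      # LSTM expects (samples, timesteps, features)

# Synthetic price path standing in for a stock's historical close prices
rng = np.random.default_rng(6)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, size=1000)))
X, y = make_windows(prices)

model = Sequential([LSTM(32, input_shape=(X.shape[1], 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-100], y[:-100], epochs=5, batch_size=32, verbose=0)
print("one-step-ahead prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```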
By: | Reda Cherif; Karl Walentin; Brandon Buell; Carissa Chen; Jiawen Tang; Nils Wendt |
Abstract: | The COVID-19 pandemic underscores the critical need for detailed, timely information on its evolving economic impacts, particularly for Sub-Saharan Africa (SSA) where data availability and lack of generalizable nowcasting methodologies limit efforts for coordinated policy responses. This paper presents a suite of high frequency and granular country-level indicator tools that can be used to nowcast GDP and track changes in economic activity for countries in SSA. We make two main contributions: (1) demonstration of the predictive power of alternative data variables such as Google search trends and mobile payments, and (2) implementation of two types of modelling methodologies, machine learning and parametric factor models, that have flexibility to incorporate mixed-frequency data variables. We present nowcast results for 2019Q4 and 2020Q1 GDP for Kenya, Nigeria, South Africa, Uganda, and Ghana, and argue that our factor model methodology can be generalized to nowcast and forecast GDP for other SSA countries with limited data availability and shorter timeframes. |
Keywords: | model prediction; quantile plot; ML model; GDP YoY; data variable; YoY percent change; Factor models; Machine learning; Time series analysis; Spot exchange rates; Mobile banking; Africa; Sub-Saharan Africa |
Date: | 2021–05–01 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:2021/124&r= |
By: | Giacomo De Giorgi; Matthew Harding; Gabriel Vasconcelos |
Abstract: | Data on hundreds of variables related to individual consumer finance behavior (such as credit card and loan activity) is routinely collected in many countries and plays an important role in lending decisions. We postulate that the detailed nature of this data may be used to predict outcomes in seemingly unrelated domains such as individual health. We build a series of machine learning models to demonstrate that credit report data can be used to predict individual mortality. Variable groups related to credit cards and various loans, mostly unsecured loans, are shown to carry significant predictive power. Lags of these variables are also significant, indicating that dynamics also matter. Improved mortality predictions based on consumer finance data can have important economic implications in insurance markets but may also raise privacy concerns. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2111.03662&r= |
By: | Armand Hatchuel (CGS i3 - Centre de Gestion Scientifique i3 - MINES ParisTech - École nationale supérieure des mines de Paris - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); Pascal Le Masson (CGS i3 - Centre de Gestion Scientifique i3 - MINES ParisTech - École nationale supérieure des mines de Paris - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); Maxime Thomas (CGS i3 - Centre de Gestion Scientifique i3 - MINES ParisTech - École nationale supérieure des mines de Paris - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); Benoit Weil (CGS i3 - Centre de Gestion Scientifique i3 - MINES ParisTech - École nationale supérieure des mines de Paris - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Generative design (GD) algorithms are a fast-growing field. From the point of view of Design Science, this fast growth leads one to wonder what exactly is 'generated' by GD algorithms, and how. In the last decades, advances in design theory have made it possible to establish conditions and operators that characterize design generativity. Thus, it is now possible to study GD algorithms with the lenses of Design Science in order to reach a deeper and unified understanding of their generative techniques, their differences and, if possible, find new paths for improving their generativity. In this paper, first, we rely on C-K theory to build a canonical model of GD that is independent of the field of application of the algorithm. This model shows that GD is generative if and only if it builds, not one single artefact, but a "topology of artefacts" that allows for design constructability, covering strategies, and functional comparability of designs. Second, we use the canonical model to compare four well documented and most advanced types of GD algorithms. From these cases, it appears that generating a topology enables the analysis of interdependencies and the design of resilience. |
Keywords: | C-K theory,generative design algorithms,Design theory,Computational design methods,Design informatics |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03398565&r= |
By: | Simon Büchler; Maximilian v. Ehrlich |
Abstract: | We analyze land use regulation and the determinants thereof across the majority of Swiss municipalities. Based on a comprehensive survey, we construct several indices on the ease of local residential development, which capture various aspects of local regulation and land use coordination across jurisdictions. The indices provide harmonized information about what local regulation entails and the local regulatory environment across municipalities. Our analysis shows that, among others, historical building density, socio-demographic factors, local taxes, cultural aspects, and the quality of natural amenities are important determinants of local land-use regulation. We test the validity of the index with regard to information about the local refusal rates of development projects and show that the index captures a significant part of the variation in local housing supply elasticities. Based on a machine learning cross-validation model, we impute the values for nonresponding municipalities. |
Keywords: | Local regulation, zoning, housing markets |
JEL: | R1 R14 R31 R52 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:rdv:wpaper:credresearchpaper32&r= |
By: | Felipe Gonzalez (GeePs - Laboratoire Génie électrique et électronique de Paris - CentraleSupélec - SU - Sorbonne Université - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique); Marc Petit (GeePs - Laboratoire Génie électrique et électronique de Paris - CentraleSupélec - SU - Sorbonne Université - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique); Yannick Perez (LGI - Laboratoire Génie Industriel - CentraleSupélec - Université Paris-Saclay) |
Abstract: | Electric vehicle (EV) grid integration presents significant challenges and opportunities for electricity system operation and planning. Proper assessment of the costs and benefits involved in EV integration hinges on correctly modeling and evaluating EV-user driving and charging patterns. Recent studies have evidenced that EV users do not plug in their vehicle every day (here called non-systematic plug-in behavior), which can alter the impacts of EV charging and the flexibility that EV fleets can provide to the system. This work set out to evaluate the effect of considering non-systematic plug-in behavior in EV grid integration studies. To do so, an open-access agent-based EV simulation model that includes a probabilistic plug-in decision module was developed and calibrated to match the charging behavior observed in the Electric Nation project, a large-scale smart charging trial. Analysis shows that users tend to plug in their EV between 2 and 3 times per week, with a lower plug-in frequency for large-battery EVs and large heterogeneity in user charging preferences. Results computed using our model show that non-systematic plug-in behavior reduces the impact of EV charging, especially for price-responsive charging, as fewer EVs charge simultaneously. On the other hand, non-systematic plug-in can reduce available flexibility, particularly when considering current trends towards larger battery sizes. Counter-intuitively, large-battery fleets can have reduced flexibility compared to small-battery fleets, both in power and stored energy, due to lower plug-in frequency and higher energy requirements per charging session. Improving the plug-in ratios of EV users appears to be a key enabler of flexibility. In comparison, raising charging power can increase the flexibility provided by EV fleets, but at the expense of larger impacts on distribution grids. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03363782&r= |
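As a toy illustration of a probabilistic plug-in decision of the kind discussed above (not the calibrated module of the open-access model), the sketch below draws a daily plug-in decision whose probability rises as the state of charge falls; it yields a plug-in frequency of a few times per week that declines with battery size, qualitatively in line with the abstract. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_week(battery_kwh=60.0, daily_km=50.0, kwh_per_km=0.18, alpha=2.0, weeks=52):
    """Each evening the agent plugs in with a probability that increases as the
    state of charge (SOC) drops; plugging in recharges to full.  Returns the
    average number of plug-in events per week."""
    soc, plug_ins = 1.0, 0
    for _ in range(7 * weeks):
        soc = max(soc - daily_km * kwh_per_km / battery_kwh, 0.0)
        if rng.random() < (1.0 - soc) ** alpha:    # low SOC -> high plug-in probability
            soc, plug_ins = 1.0, plug_ins + 1
    return plug_ins / weeks

print(f"plug-ins per week (60 kWh battery): {simulate_week():.2f}")
print(f"plug-ins per week (30 kWh battery): {simulate_week(battery_kwh=30.0):.2f}")
```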
By: | Mr. Mico Mrkaic; Borislava Mircheva; Jelle Barkema; Yuanchen Yang |
Abstract: | This paper dives into the Fund's historical coverage of cross-border spillovers in its surveillance. We use a state-of-the-art deep learning model to analyze the discussion of spillovers in all IMF Article IV staff reports between 2010 and 2019. We find that overall, while the discussion of spillovers decreased over time, it was pronounced in the staff reports of some systemically important economies and during periods of global spillover events. Spillover discussions were more prominent in staff reports covering advanced and emerging market economies, possibly reflecting their role as sources of global spillovers. The coverage of spillovers was higher in the context of the real, financial, and external sectors. Also, countries with larger economies, higher trade and capital account openness, and lower inflation are more likely to discuss spillovers in their Article IV staff reports. |
Keywords: | spillover discussion; model performance; discussion of spillover; General spillover pattern; spillover event; IMF staff calculation; Spillovers; Probit models; Machine learning; Capital account; Inflation; Global |
Date: | 2021–05–07 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:2021/134&r= |
By: | Carlos A. Abanto-Valle (Department of Statistics, Federal University of Rio de Janeiro); Gabriel Rodríguez (Department of Economics, Pontificia Universidad Católica del Perú); Luis M. Castro Cepero (Department of Statistics, Pontificia Universidad Católica de Chile); Hernán B. Garrafa-Aragón (Escuela de Ingeniería Estadística de la Universidad Nacional de Ingeniería) |
Abstract: | The stochastic volatility in mean (SVM) model proposed by Koopman and Uspensky (2002) is revisited. This paper has two goals. The first is to offer a methodology that requires less computational time in simulations and estimation than others proposed in the literature, such as Abanto-Valle et al. (2021). To achieve this first goal, we propose to approximate the likelihood function of the SVM model by applying Hidden Markov Model (HMM) machinery, making Bayesian inference possible in real time. We sample from the posterior distribution of the parameters using a multivariate normal distribution with mean and variance given by the posterior mode and the inverse of the Hessian matrix evaluated at this posterior mode, using importance sampling (IS). The frequentist properties of the estimators are analyzed in a simulation study. The second goal is to provide empirical evidence by estimating the SVM model using daily data for five Latin American stock markets. The results indicate that volatility negatively impacts returns, suggesting that the volatility feedback effect is stronger than the effect related to the expected volatility. This result is exactly opposite to the finding of Koopman and Uspensky (2002). We compare our methodology with the Hamiltonian Monte Carlo (HMC) and Riemannian HMC methods based on Abanto-Valle et al. (2021). JEL Classification: C11, C15, C22, C51, C52, C58, G12. |
Keywords: | Latin American Stock Markets, Stochastic Volatility in Mean, Feedback Effect, Hamiltonian Monte Carlo, Hidden Markov Models, Riemannian Manifold Hamiltonian Monte Carlo, Nonlinear State Space Models. |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00502&r= |
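For reference, the stochastic volatility in mean specification of Koopman and Uspensky (2002) that this entry revisits is usually written as follows (a sketch of the standard formulation; the paper's exact parameterization and notation may differ):

$$ y_t = \beta_0 + \beta_1 e^{h_t} + e^{h_t/2}\,\varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,1), $$
$$ h_t = \mu + \phi\,(h_{t-1} - \mu) + \sigma_\eta\,\eta_t, \qquad \eta_t \sim \mathcal{N}(0,1), $$

where the sign of the in-mean coefficient $\beta_1$ is what distinguishes the volatility feedback effect from the expected-volatility (risk premium) effect discussed in the abstract.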