NEP: New Economics Papers
on Computational Economics
Issue of 2020‒05‒04
thirty-two papers chosen by
By: | Leonard Sabetti; Ronald Heijmans |
Abstract: | Financial market infrastructures and their participants play a crucial role in the economy. Financial or operational challenges faced by one participant can have contagion effects and pose risks to the broader financial system. Our paper applies deep neural networks (autoencoders) to detect anomalous flows from payments data in the Canadian Automated Clearing and Settlement System (ACSS), similar to Triepels et al. (2018). We evaluate several neural network architectures, varying the size and number of hidden layers as well as the activation functions, depending on how the input data were normalized. As the Canadian financial system has not faced bank runs in recent memory, we train the models on "normal" data and evaluate out-of-sample using test data based on historical anomalies as well as simulated bank runs. Our out-of-sample simulations demonstrate the autoencoder's performance in different scenarios, and the results suggest that the autoencoder detects anomalous payment flows reasonably well. Our work highlights the challenges and trade-offs in employing a workhorse deep-learning model in an operational context. It also raises policy questions around how such outlier signals can be used by the system operator in complying with prominent payment systems guidelines, and by financial stability experts in assessing the impact on the financial system of a financial institution that exhibits extreme behaviour. |
Keywords: | Anomaly Detection; Autoencoder; Neural Network; Artificial Intelligence; ACSS; Financial Market Infrastructure; Retail Payments |
JEL: | C45 E42 E58 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:dnb:dnbwpp:681&r=all |
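For intuition, the following minimal sketch trains an autoencoder to reconstruct synthetic "normal" payment flows and flags observations with a large reconstruction error, the basic mechanism the paper applies to ACSS data. The architecture, the simulated data, and the 99th-percentile threshold rule are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
normal = rng.lognormal(mean=10.0, sigma=0.3, size=(5000, 8))   # "normal" flows
anomalous = rng.lognormal(mean=10.0, sigma=1.5, size=(50, 8))  # stressed flows

scaler = MinMaxScaler().fit(normal)  # the normalization choice interacts with activations
X_train = scaler.transform(normal)

# An MLP trained to reproduce its own input acts as an autoencoder; the narrow
# middle layer forces a compressed representation of "normal" behaviour.
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

def reconstruction_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

threshold = np.quantile(reconstruction_error(X_train), 0.99)  # flag worst 1%
flags = reconstruction_error(scaler.transform(anomalous)) > threshold
print(f"share of simulated anomalies flagged: {flags.mean():.0%}")
```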
By: | Raquel M. Gaspar; Sara D. Lopes; Bernardo Sequeira |
Abstract: | In this paper we use neural networks (NN), a machine learning method, to price American put options. We propose two distinct NN models – a simple one and a more complex one. The performance of the two NN models is compared to that of the popular Least-Squares Monte Carlo method (LSM). This study relies on market American put option prices, with four large US companies as underlying – Bank of America Corp (BAC), General Motors (GM), Coca-Cola Company (KO) and Procter and Gamble Company (PG). Our dataset includes all options traded from December 2018 to March 2019. All methods show good accuracy; however, once calibrated, the NNs do better in terms of execution time and root mean square error (RMSE). Although on average both NN models perform better than LSM, the simpler model (NN model 1) performs quite close to LSM. Our NN model 2, on the other hand, substantially outperforms the other models, with an RMSE ca. 40% lower than that of the LSM. The lower RMSE is consistent across all companies, strike levels and maturities. |
Keywords: | Machine learning, Neural networks, American put options, Least-square Monte Carlo |
JEL: | C45 C63 G13 G17 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:ise:remwps:wp01222020&r=all |
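As background, the benchmark the paper compares its networks against, Least-Squares Monte Carlo, can be sketched compactly. The parameters below reproduce a standard textbook example, not the paper's market data, and the quadratic polynomial basis is one common choice.

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Simulate GBM paths under the risk-neutral measure.
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)        # cashflows if held to expiry
    for t in range(steps - 2, -1, -1):
        itm = (K - S[:, t]) > 0                   # regress only on in-the-money paths
        x = S[itm, t]
        y = payoff[itm] * np.exp(-r * dt)         # discounted continuation cashflow
        coef = np.polyfit(x, y, 2)                # quadratic basis, as in original LSM
        continuation = np.polyval(coef, x)
        exercise = K - x
        ex_now = exercise > continuation
        idx = np.where(itm)[0]
        payoff[idx[ex_now]] = exercise[ex_now]    # exercise: replace future cashflow
        payoff[idx[~ex_now]] *= np.exp(-r * dt)   # hold: keep discounting
        payoff[~itm] *= np.exp(-r * dt)
    return np.exp(-r * dt) * payoff.mean()

print(f"LSM American put price: {lsm_american_put():.3f}")  # ~4.47 for these inputs
```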
By: | Marek Stelmach (Faculty of Economic Sciences, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw) |
Abstract: | Stacked ensemble approaches have recently been gaining importance in complex predictive problems where extraordinary performance is desirable. In this paper we develop a multilayer stacking framework and apply it to a large credit-scoring dataset with multiple, imbalanced classes. Diverse base estimators (among others, bagged and boosted tree algorithms, regularized logistic regression, neural networks, and a Naive Bayes classifier) are examined, and we propose three meta learners that are finally combined into a novel, weighted ensemble. To prevent bias in meta-feature construction, we introduce a nested cross-validation schema into the architecture, while a weighted log-loss evaluation metric is used to overcome training bias towards the majority class. Additional emphasis is placed on proper data preprocessing steps and on Bayesian optimization for hyperparameter tuning, to ensure that the solution does not overfit. Our study indicates better stacking results compared to all individual base classifiers, yet we stress the importance of assessing whether the improvement compensates for the increased computational time and design complexity. Furthermore, the analysis shows extremely good performance of bagged and boosted trees, in both the base and meta learning phases. We conclude that a weighted meta ensemble with regularization properties reveals the least overfitting tendencies. |
Keywords: | stacked ensembles, nested cross-validation, Bayesian optimization, multiclass problem, imbalanced classes |
JEL: | G32 C38 C51 C52 C55 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2020-08&r=all |
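The leakage-control device the abstract highlights, building meta features only from out-of-fold base-model predictions, looks roughly like this. The data, base estimators, and meta learner are placeholders, and the final evaluation is deliberately simplified.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Imbalanced three-class problem standing in for the credit-scoring data.
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=8,
                           weights=[0.7, 0.2, 0.1], random_state=0)

base = [RandomForestClassifier(n_estimators=200, random_state=0), GaussianNB()]
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Out-of-fold predictions: each row is scored by models that never saw it in
# training, so the meta features carry no leakage from in-sample fits.
meta_X = np.hstack([cross_val_predict(m, X, y, cv=cv, method="predict_proba")
                    for m in base])

meta = LogisticRegression(max_iter=1000, C=0.5)  # regularized meta learner
print("stacked CV accuracy:",
      cross_val_score(meta, meta_X, y, cv=cv).mean().round(3))
```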
By: | Maria Priscila Ramos (Instituto Interdisciplinario de Economía Política de Buenos Aires); Estefania Custodio (European Commission – JRC); Sofia Jimenez (Universidad de Zaragoza); Alfredo Mainar Causape (European Commission – JRC); Pierre Boulanger (European Commission – JRC); Emanuele Ferrari (European Commission – JRC) |
Abstract: | Kenya, like other African countries, is particularly concerned about achieving Sustainable Development Goal #2 (SDG #2: zero hunger) and its associated consequences for society. Empirical evidence about food security and nutrition in Kenya documents deficiencies in food access, food sufficiency and food quality at the household level. These deficiencies are among the causes of all forms of malnutrition (stunting, wasting and overweight), which can lead to cognitive impairment, limited immunity to diseases, low educational performance, increased risk of chronic disease and even child mortality in this country. Solving the food security and nutrition problems in Kenya is challenging because of the different dimensions to be tackled (economic, environmental, educational, health and sanitation) and because of the heterogeneity that characterizes households (income and food expenditure, education level of household heads, regional sanitation coverage, access to potable water / waste water systems, etc.). In the recent past, the Government of Kenya supported the construction of a roughly €1.1 billion fertilizer plant in Eldoret in the framework of a fertilizer cost reduction strategy aiming at stabilizing fertilizer prices and making fertilizer more accessible through local manufacturing, blending and bulk procurement. Increasing the domestic production of fertilizers should reduce the price of fertilizer, making it more accessible for farmers. Co-authors of this report, employing the STatic Applied General Equilibrium for DEVelopment (STAGE-DEV) Computable General Equilibrium (CGE) model, calibrated on a Social Accounting Matrix for Kenya 2014, evaluated the impact on food security of the creation of the fertilizer plant together with three additional policy scenarios (market access, extension and subsidies removal). For the purpose of this study, we developed a macro-micro simulation model, based on the previously developed CGE model and policy scenarios and on microsimulations using the Kenya Integrated Household Budget Survey 2015/2016. The objective, which is purely methodological, is to produce a new set of food security indicators using macro-micro model linkages. The policy results, which should be taken with some caution, are discussed in terms of initial economic (per capita income), food security (household dietary diversity and dietary energy consumption) and children's nutritional (stunting, wasting) status at the household level. Furthermore, national results are disaggregated by metropolitan areas (Nairobi and Mombasa) and the rest of the urban and rural zones of the country. Main results suggest that increasing fertilizer availability coupled with increasing market access through the improvement of infrastructure and the reduction of transport costs (market access scenario) will increase overall purchasing power. Supporting pro-poor growth, this development will benefit most those households with lower diet diversity and higher stunting rates. This policy scenario also leads to the largest increases in dietary energy consumption, with distributive results similar to those for the purchasing power impact. Increasing fertilizer availability paired with improving crop productivity in agricultural practices (extension scenario) leads to the largest increase in energy consumption, particularly from fats in the diet, among households with low diet diversity. Average protein and carbohydrate consumption at the national level increase the most within the market access scenario. The results confirm the findings of the previous report: increasing fertilizer availability in Kenya is not enough to improve food security in the country. Complementary policies that give farmers better access to input and output markets are also needed, such as increasing market access for fertilizers and agriculture by improving rural infrastructure, or improving extension services to train smallholder farmers in fertilizer and land use. |
Keywords: | Nutrition, CGE, Kenya, Agricultural policy |
JEL: | C68 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc119390&r=all |
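A highly schematic sketch of the macro-micro linkage: scenario-level changes from a CGE model (here an invented income and food-price shock) are passed to household-level records to recompute a simple food-security indicator. All numbers, distributions, and names below are invented; the report itself links the STAGE-DEV CGE model to the KIHBS 2015/2016 microdata.

```python
import numpy as np

rng = np.random.default_rng(1)
n_households = 10_000
income = rng.lognormal(mean=7.0, sigma=0.8, size=n_households)   # per-capita income
food_price_index = rng.normal(1.0, 0.05, size=n_households)

# Macro shocks taken from a hypothetical CGE "market access" scenario:
income_change = 0.04          # +4% per-capita income
food_price_change = -0.03     # -3% food prices

real_income_old = income / food_price_index
real_income_new = (income * (1 + income_change)
                   / (food_price_index * (1 + food_price_change)))

# A toy dietary-energy indicator: households below a cutoff cannot meet needs.
cutoff = np.quantile(real_income_old, 0.30)
print(f"share below cutoff, baseline: {(real_income_old < cutoff).mean():.1%}")
print(f"share below cutoff, scenario: {(real_income_new < cutoff).mean():.1%}")
```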
By: | Grilli, Luca; Santoro, Domenico |
Abstract: | In this paper, we consider two types of instruments traded on markets: stocks and cryptocurrencies. Stocks are traded in a market subject to opening hours, while cryptocurrencies are traded in a 24-hour market. Using a particular type of generative neural network, we aim to demonstrate that instruments traded in a market without opening hours carry a different amount of information and are therefore more suitable for forecasting. In particular, using real data, we show that there are also stocks subject to the same rules as cryptocurrencies. |
Keywords: | Neural Network, Price Forecasting, Cryptocurrencies, Market Hours, Generative Model |
JEL: | C45 E37 F17 G17 |
Date: | 2020–04–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:99846&r=all |
By: | Yoseph Y. Getachew (University of Pretoria, Pretoria, South Africa); Stephen J. Turnovsky (University of Washington, Seattle, WA 98105) |
Abstract: | We develop a model that characterizes the joint determination of income distribution and aggregate macroeconomic dynamics. We identify multiple channels through which alternative public policies such as transfers, consumption and income taxes, and public investment affect the inequality-efficiency trade-off. Some policy changes can affect net income inequality both directly and indirectly, by inducing structural changes in the private-public capital ratio. This in turn influences market inequality and determines the distribution of the next period's investment and net income. Income taxes and transfers have both a direct income effect and an indirect substitution effect, whereas the consumption tax has only the latter. After developing some theoretical propositions summarizing these policy trade-offs, we present extensive numerical simulations motivated by the South African National Development Plan 2030, the objective of which is to tame soaring inequality and increase per capita GDP. Our numerical simulations illustrate how a judicious combination of these policies may help achieve these targets. The simulations also suggest that the sharp decline in the private-public capital ratio, coupled with the high degree of complementarity between public and private capital, could be behind the persistence of market inequality in South Africa over the last two decades. |
Keywords: | Redistribution policies, Incomplete Capital Market, Idiosyncratic shocks, Efficiency, Inequality |
JEL: | D31 O41 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:202028&r=all |
By: | Hauser, Luisa-Marie; Schlag, Carsten-Henning; Wolf, André |
Abstract: | This paper analyses the macroeconomic implications of a future shift in the age structure of the Swiss population. It estimates the long-run effects on Swiss GDP growth and its components in an Overlapping Generations (OLG) model. Recent population projections by the Federal Statistical Office (FSO) serve as a basis. To document the sensitivity of the results with respect to the demographic assumptions, simulations are undertaken for a range of alternative scenarios concerning fertility, migration and age-specific labor supply. Our projections over the time horizon 2018-2060 document a significant loss in terms of economic growth in both absolute and per capita terms. According to our simulations, this would primarily affect the income of the middle-aged groups. Likewise, the process of ageing would have consequences for the composition of Swiss GDP: the share of government spending in domestic value added is simulated to increase, due to its demography-related components. A sensitivity analysis reveals that more favourable assumptions concerning future net immigration, fertility and labor market participation could mitigate, but not fully offset, these trends. |
Keywords: | Ageing, OLG models, Long-term GDP forecasts, Switzerland |
JEL: | J11 C68 E37 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hwwirp:191&r=all |
By: | Schnaubelt, Matthias |
Abstract: | This paper presents the first large-scale application of deep reinforcement learning to optimizing the placement of limit orders at cryptocurrency exchanges. For training and out-of-sample evaluation, we use a virtual limit order exchange to reward agents according to the realized shortfall over a series of time steps. Based on the literature, we generate features that inform the agent about the current market state. Leveraging 18 months of high-frequency data, with 300 million historic trades and more than 3.5 million order book states from major exchanges and currency pairs, we empirically compare state-of-the-art deep reinforcement learning algorithms to several benchmarks. We find that proximal policy optimization reliably learns superior order placement strategies compared to deep double Q-networks and other benchmarks. Further analyses shed light on the black box of the learned execution strategy. Important features are current liquidity costs and queue imbalances, where the latter can be interpreted as predictors of short-term mid-price returns. To preferentially execute volume with limit orders and avoid additional market-order exchange fees, order placement tends to become more aggressive when unfavorable price movements are expected. |
Keywords: | Finance, Optimal Execution, Limit Order Markets, Machine Learning, Deep Reinforcement Learning |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:052020&r=all |
By: | Christophe Croux; Julapa Jagtiani; Tarunsai Korivi; Milos Vulanovic |
Abstract: | This study examines key default determinants of fintech loans, using loan-level data from the LendingClub consumer platform during 2007–2018. We identify a robust set of contractual loan characteristics, borrower characteristics, and macroeconomic variables that are important in determining default. We find an important role of alternative data in determining loan default, even after controlling for the obvious risk characteristics and the local economic factors. The results are robust to different empirical approaches. We also find that homeownership and occupation are important factors in determining default. Lenders, however, are required to demonstrate that these factors do not result in any unfair credit decisions. In addition, we find that personal loans used for medical financing or small business financing are riskier than other personal loans, holding borrower characteristics constant. Government support through various public-private programs could potentially make funding more accessible to those in need of medical services and to small businesses, without imposing excessive risk on small peer-to-peer (P2P) investors. |
Keywords: | crowdfunding; lasso selection methods; peer-to-peer lending; household finance; machine learning; financial innovation; big data; P2P/marketplace lending |
JEL: | G21 D14 D10 G29 G20 |
Date: | 2020–04–16 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:87815&r=all |
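One of the selection methods named in the keywords, lasso, can be illustrated on synthetic loan data: an L1-penalized logistic regression shrinks uninformative coefficients toward zero. The features and coefficients below are invented stand-ins for the LendingClub variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 20_000
X = np.column_stack([
    rng.normal(680, 50, n),        # credit score
    rng.uniform(0.0, 0.4, n),      # debt-to-income ratio
    rng.integers(0, 2, n),         # homeownership flag
    rng.normal(0, 1, n),           # pure-noise feature (should be dropped)
])
# Simulated "true" default process for illustration only.
logit = -4.0 - 0.01 * (X[:, 0] - 680) + 6.0 * X[:, 1] - 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
model.fit(StandardScaler().fit_transform(X), y)
print("lasso coefficients:", model.coef_.round(2))  # noise column shrinks toward 0
```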
By: | Vadlamani Ravi; Vadlamani Madhav |
Abstract: | It is well known that disciplines such as mechanical engineering, electrical engineering, civil engineering, aerospace engineering, chemical engineering and software engineering have witnessed successful applications of reliability engineering concepts. However, the concept of reliability in its strict sense is missing in financial services. Therefore, in order to fill this gap, in a first-of-its-kind study, we define the reliability of a bank/firm in terms of the financial ratios connoting the financial health of the bank and its ability to withstand the likelihood of insolvency or bankruptcy. For the purpose of estimating the reliability of a bank, we invoke a statistical and machine learning algorithm, namely logistic regression (LR). Once the parameters are estimated in the 1st stage, we fix them and treat the financial ratios as decision variables. Thus, in the 1st stage, we accomplish a hitherto unknown way of estimating the reliability of a bank. Subsequently, in the 2nd stage, in order to maximize the reliability of the bank, we formulate an unconstrained optimization problem in a single-objective environment and solve it using the well-known particle swarm optimization (PSO) algorithm. In essence, these two stages correspond to predictive and prescriptive analytics respectively. The proposed two-stage strategy of using them in tandem is beneficial to decision-makers within a bank, who can try to achieve the optimal or near-optimal values of the financial ratios in order to maximize reliability, which is tantamount to safeguarding their bank against insolvency or bankruptcy. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.11122&r=all |
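A compact sketch of the two-stage idea, with invented data, bounds, and PSO settings: stage 1 fits a logistic regression of solvency on financial ratios; stage 2 freezes the coefficients and searches the ratio space with particle swarm optimization for the profile that maximizes predicted reliability.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: predictive analytics (logistic regression via gradient ascent)
n = 5000
ratios = rng.normal(size=(n, 3))                         # e.g. capital, liquidity, profitability
true_w = np.array([1.2, 0.8, 0.5])                       # invented data-generating weights
solvent = rng.random(n) < 1 / (1 + np.exp(-(ratios @ true_w)))

w, b = np.zeros(3), 0.0
for _ in range(500):                                     # plain batch gradient ascent
    p = 1 / (1 + np.exp(-(ratios @ w + b)))
    w += 0.1 * (ratios.T @ (solvent - p)) / n
    b += 0.1 * (solvent - p).mean()

reliability = lambda x: 1 / (1 + np.exp(-(x @ w + b)))   # fitted reliability of a bank

# --- Stage 2: prescriptive analytics (PSO over the decision variables = ratios)
lo, hi = -1.0, 2.0                                       # feasible ratio bounds (assumed)
pos = rng.uniform(lo, hi, (30, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), reliability(pos)
for _ in range(200):
    gbest = pbest[pbest_val.argmax()]
    vel = (0.7 * vel + 1.5 * rng.random((30, 1)) * (pbest - pos)
                     + 1.5 * rng.random((30, 1)) * (gbest - pos))
    pos = np.clip(pos + vel, lo, hi)
    val = reliability(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
print("ratio profile maximizing reliability:", gbest.round(2))
```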
By: | Alexander Mihailov (Department of Economics, University of Reading) |
Abstract: | This paper considers three scenarios for the duration of the COVID-19 pandemic lockdown, lasting one, two or three quarters, and two types of exceptionally rare and devastating disruptions in employment modeled as adverse labor supply shocks: a temporary one with negligible loss in the labor force due to deaths, and a permanent one with significant loss from deaths. The temporary labor supply shock simulations delimit a lower bound, designed to match about 1/4 of the labor force unable to work, and an upper bound, matching about 3/4 of the labor force made economically inactive, broadly consistent with estimates. The permanent labor supply shock is designed to match, again in three scenarios, up to 1% loss of the labor force due to mortality, half the 2% death rate of the Spanish flu. Estimated calibrations of the Galí-Smets-Wouters (2012) model with indivisible labor are simulated for five major economies among those most affected by the COVID-19 pandemic: the US, France, Germany, Italy and Spain. The simulations suggest that even in the most optimistic scenario of a brief (one-quarter) and mild (1/4 of the labor force unable to work) lockdown, the loss of per-capita consumption (6-7% in annualized terms down from the long-run trend in the impact quarter) and per-capita output (3-4% down) will be quite damaging, but recoverable relatively quickly, within 1-2 years. In the most pessimistic simulated scenario of temporary loss, the effects will be 10-15 times more devastating, and the loss of output and consumption will persist beyond 10-15 years. A permanent loss of up to 1.5 percentage points of per-capita consumption and output characterizes the simulated permanent labor supply shock. |
Keywords: | COVID-19 pandemic, simulated macroeconomic effects, medium-scale New Keynesian DSGE models, indivisible labor, shocks to the disutility of labor supply, calibration according to Bayesian estimates |
JEL: | C63 D58 E24 E27 E32 E37 |
Date: | 2020–04–20 |
URL: | http://d.repec.org/n?u=RePEc:rdg:emxxdp:em-dp2020-07&r=all |
By: | Schnaubelt, Matthias; Seifert, Oleg |
Abstract: | We apply state-of-the-art financial machine learning to assess the return-predictive value of more than 45,000 earnings announcements on a majority of S&P1500 constituents. To represent the diverse information content of earnings announcements, we generate predictor variables based on various sources such as analyst forecasts, earnings press releases and analyst conference call transcripts. We sort announcements into decile portfolios based on the model's abnormal return prediction. In comparison to three benchmark models, we find that random forests yield superior abnormal returns which tend to increase with the forecast horizon for up to 60 days after the announcement. We subject the model's learning and out-of-sample performance to further analysis. First, we find larger abnormal returns for small-cap stocks and a delayed return drift for growth stocks. Second, while revenue and earnings surprises are the main predictors for the contemporary reaction, we find that a larger range of variables, mostly fundamental ratios and forecast errors, is used to predict post-announcement returns. Third, we analyze variable contributions and find the model to recover non-linear patterns of common capital markets effects such as the value premium. Leveraging the model's predictions in a zero-investment trading strategy yields annualized returns of 11.63 percent at a Sharpe ratio of 1.39 after transaction costs. |
Keywords: | Earnings announcements, Asset pricing, Machine learning, Natural language processing |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:042020&r=all |
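The decile-portfolio construction described above can be sketched as follows: fit a random forest to predict abnormal announcement returns, then sort out-of-sample predictions into deciles and inspect realized returns. The features, returns, and split are simulated placeholders for the announcement data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 10))                     # surprises, fundamentals, text scores...
abnormal_ret = 0.02 * X[:, 0] + 0.01 * X[:, 1] ** 2 + rng.normal(0, 0.05, n)

split = 15_000                                   # chronological-style train/test split
rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=50, random_state=0)
rf.fit(X[:split], abnormal_ret[:split])

pred = rf.predict(X[split:])
deciles = np.digitize(pred, np.quantile(pred, np.linspace(0.1, 0.9, 9)))
for d in (0, 9):                                 # bottom and top decile portfolios
    print(f"decile {d + 1}: mean realized abnormal return "
          f"{abnormal_ret[split:][deciles == d].mean():+.4f}")
```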
By: | Stefano Baruffaldi (Max Planck Institute for Innovation and Competition); Brigitte van Beuzekom; Hélène Dernis; Dietmar Harhoff (Max Planck Institute for Innovation and Competition); Nandan Rao; David Rosenfeld; Mariagrazia Squicciarini |
Abstract: | This paper identifies and measures developments in science, algorithms and technologies related to artificial intelligence (AI). Using information from scientific publications, open source software (OSS) and patents, it finds a marked increase in AI-related developments over recent years. Since 2015, AI-related publications have increased by 23% per year; from 2014 to 2018, AI-related OSS contributions grew at a rate three times greater than other OSS contributions; and AI-related inventions comprised, on average, more than 2.3% of IP5 patent families in 2017. China’s growing role in the AI space also emerges. The analysis relies on a three-pronged approach based on established bibliometric and patent-based methods, and machine learning (ML) implemented on purposely collected OSS data. |
Date: | 2020–05–01 |
URL: | http://d.repec.org/n?u=RePEc:oec:stiaaa:2020/05-en&r=all |
By: | Richard Heuver; Ron Berndsen |
Abstract: | The Liquidity Coverage Ratio (LCR) requirement of the Basel III framework is aimed at making banks more resilient against liquidity shocks and indicates the extent to which a bank is able to meet its payment obligations over a 30-day stress period. Notwithstanding the fact that it forms an important addition to the information available to regulators, it presents the status of a single bank on a monthly reporting basis. In this paper we generate an LCR-like statistic on a daily basis and simulate liquidity failure of each of the systemically important banks, using historical payments data from TARGET2. The aim of the paper is to uncover paths of contagion. The trigger is a bank with a deteriorating LCR, and the knock-on effect is modelled as the impact on the LCR of other banks. We then generate the cascade of contagion, which in general consists of multiple paths, to answer the question of the extent to which the financial network deteriorates further. In doing so we provide paths of contagion which give a sense of the potential systemic risk present in the network. We find that the majority of the damage is caused by a small group of large banks. Furthermore, we find groups of banks that are very vulnerable to shocks, regardless of the size or location of the disruption. Our model reveals that the shortfall of liquidity at the stressed bank is a more important driver than the addition of liquidity at the other banks. A version of the contagion network based on a 14-day period reveals a monthly pattern, in line with the literature on window dressing. The data used in this paper are available to supervisors, central banks and resolution authorities, making it possible to anticipate contagion of failing liquidity coverage within their payment networks on a daily basis. |
Keywords: | Liquidity Coverage; Basel III; payment systems; graph theory; simulation modeling |
JEL: | E58 G21 E42 C63 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:dnb:dnbwpp:678&r=all |
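A toy version of the cascade logic: a stressed bank stops sending payments, receivers lose expected inflows, their LCR-like ratio deteriorates, and any bank falling below the threshold propagates the shock in the next round. The network, buffers, obligations, and pass-through rule are stylized assumptions, not TARGET2 data.

```python
liquidity = {"A": 100.0, "B": 40.0, "C": 60.0, "D": 30.0}
# Expected daily inflows: payments[sender] = {receiver: amount}.
payments = {"A": {"B": 30, "C": 20}, "B": {"C": 15, "D": 25},
            "C": {"D": 10}, "D": {"A": 5}}
obligations = {bank: 50.0 for bank in liquidity}     # stressed outflows

def lcr_like(bank, failed):
    """Liquidity plus inflows from still-functioning banks over obligations."""
    inflow = sum(amt for s, out in payments.items()
                 for r, amt in out.items() if r == bank and s not in failed)
    return (liquidity[bank] + inflow) / obligations[bank]

failed, frontier = set(), {"A"}                      # trigger: bank A is stressed
while frontier:
    failed |= frontier
    frontier = {b for b in liquidity
                if b not in failed and lcr_like(b, failed) < 1.0}
    print("newly failing:", sorted(frontier) or "none")
```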
By: | Figari, Francesco; Fiorio, Carlo V. |
Abstract: | This paper analyses the extent to which the Italian welfare system provides monetary compensation for those who lost their earnings due to the lockdown imposed by the government to contain the COVID-19 pandemic in March 2020. In assessing the first-order effects on household income of the temporary business shutdowns and of the government's policy measures, counterfactual scenarios are simulated with EUROMOD, the EU-wide microsimulation model, integrated with information on the workers whom the lockdown is most likely to affect. The paper provides timely evidence on the differing degrees of relative and absolute resilience of the household incomes of the individuals affected by the lockdown. These arise from variations in the protection offered by the tax-benefit system, coupled with the personal and household circumstances of the individuals at risk of income loss. |
Date: | 2020–04–22 |
URL: | http://d.repec.org/n?u=RePEc:ese:emodwp:em6-20&r=all |
By: | Daria Dzyabura (New Economic School, Moscow, Russia); Renana Peres (Hebrew University of Jerusalem, Israel) |
Abstract: | Understanding how consumers perceive brands is at the core of effective brand management. In this paper, we present the Brand Visual Elicitation Platform (B-VEP), an electronic tool we developed that allows consumers to create online collages of images that represent how they view a brand. Respondents select images for the collage from a searchable repository of tens of thousands of images. We implement an unsupervised machine-learning approach to analyze the collages and elicit the associations they describe. We demonstrate the platform’s operation by collecting large, unaided, directly elicited data for 303 large US brands from 1,851 respondents. Using machine learning and image-processing approaches to extract from these images systematic content associations, we obtain a rich set of associations for each brand. We combine the collage-making task with well-established brand-perception measures such as brand personality and brand equity, and suggest various applications for brand management. |
Keywords: | Image processing, machine learning, branding, brand associations, brand collages, Latent Dirichlet Allocation |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:abo:neswpt:w0260&r=all |
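The unsupervised step named in the keywords, Latent Dirichlet Allocation, can be sketched by treating each collage as a bag of image tags and extracting association themes. The tag vocabulary and the six toy collages below are invented.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Each "document" lists the tags of images a respondent put in one collage.
collages = [
    "beach sunset freedom travel ocean",
    "ocean travel adventure mountains freedom",
    "office suit meeting skyline trust",
    "trust bank skyline office security",
    "family home kitchen warmth comfort",
    "comfort home family dinner warmth",
]
counts = CountVectorizer().fit(collages)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts.transform(collages))

vocab = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = vocab[np.argsort(topic)[::-1][:3]]       # highest-weight tags per theme
    print(f"association theme {k}: {', '.join(top)}")
```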
By: | Shaun Byck; Ronald Heijmans |
Abstract: | Canada's Large Value Transfer System (LVTS) is in the process of being replaced by a real-time gross settlement (RTGS) system. A pure RTGS system typically requires participants to hold large amounts of intraday liquidity in order to settle their payment obligations. Implementing one or more liquidity-saving mechanisms (LSMs) can reduce the amount of liquidity participants need to hold. This paper investigates how much liquidity requirements can be reduced with the implementation of different LSMs in the Financial Network Analytics simulation engine, using LVTS transaction data from 2018. These LSMs include: 1) bilateral offsetting, 2) FIFO-bypass, 3) multilateral offsetting, and 4) a combination of all LSMs. We simulate two different scenarios. In the first scenario, all payments from Tranche 1, which are considered time-critical, are settled in a pure RTGS payment stream, while less time-critical Tranche 2 payments are settled in a payment stream with LSMs. In the second scenario, we settle all payments (Tranches 1 and 2) in the LSM stream. Our results show that when there is ample liquidity available in the system, there is minimal benefit from LSMs, as payments are settled without much delay; the effectiveness of LSMs increases as the amount of intraday liquidity decreases. A combination of LSMs yields a reduction in liquidity requirements larger than that of any individual LSM. |
Keywords: | Liquidity Saving Mechanism; Simulation; LVTS; RTGS; Financial Market Infrastructure; Intraday Liquidity; Collateral |
JEL: | E42 E50 E58 E59 G21 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:dnb:dnbwpp:682&r=all |
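The first LSM listed above, bilateral offsetting, reduces to netting queued payments between each pair of banks so that only the net amount requires liquidity. The queue below is invented for illustration.

```python
from collections import defaultdict

# Queued payments: (sender, receiver, amount).
queue = [("A", "B", 100), ("B", "A", 80), ("A", "B", 30), ("B", "A", 45)]

gross = sum(amount for _, _, amount in queue)
net = defaultdict(float)
for sender, receiver, amount in queue:
    # Sign payments consistently within each unordered bank pair, then sum.
    pair = (min(sender, receiver), max(sender, receiver))
    net[pair] += amount if sender < receiver else -amount

offset_need = sum(abs(v) for v in net.values())
print(f"liquidity needed gross: {gross}")            # 255
print(f"after bilateral offsetting: {offset_need}")  # |130 - 125| = 5
```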
By: | Mathias Dolls |
Abstract: | This paper develops a decomposition framework to study the importance of different stabilization channels of an unemployment re-insurance scheme for the euro area. Running counterfactual simulations based on household micro data for the period 2000–16, the paper finds that the re-insurance would have cushioned on average 12% (8%) of income losses through interregional (intertemporal) smoothing. These results suggest that the smoothing effect of the re-insurance due to asymmetries in labor market shocks would have raised the income insurance of a typical unemployment insurance scheme in the euro area by more than 50%. The simulated re-insurance scheme would have been revenue-neutral at the EA-19 level, but not at the member-state level. Average annual net contributions would have ranged between -0.1 and 0.1 per cent of GDP. The paper discusses how different variants of the re-insurance might affect the risk of moral hazard. |
Keywords: | European fiscal integration, unemployment re-insurance, automatic stabilizers, euro area reform |
JEL: | F55 H23 J65 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8219&r=all |
By: | Enkhbayar Shagdar (Economic Research Institute for Northeast Asia (ERINA)); Tomoyoshi Nakajima (Economic Research Institute for Northeast Asia (ERINA)) |
Abstract: | Recent developments on the Korean Peninsula and worldwide may bring an end to the DPRK's isolation from the world economy. Employing the Global Trade Analysis Project (GTAP) Data Base and the standard GTAP Model (the Model), this paper analyzed the expected economic impacts of the DPRK's return to international society. However, as the DPRK is not a separate GTAP region, but is represented in the database as part of a composite region, the Rest of East Asia (XEA), along with Macao, the DPRK's data was generated using the SplitReg program, and the resulting data was used as the base data in the Model. The generated data indicated that the DPRK's GDP value was higher by about one-third than those commonly reported in existing publicly available data. Upon generating the DPRK data, three economic revitalization and integration scenarios were considered in the analyses: (i) total factor productivity (TFP) growth in the DPRK; (ii) Korean Unification; and (iii) a Northeast Asia free trade agreement (FTA). The simulation results assuming that the DPRK's total factor productivity would grow by 30% (60% of the labor productivity growth of the ROK between 1963 and 1973) as a result of the country's return to international markets indicated that the DPRK would have a welfare gain of $6.6 billion, associated mostly with gains in technical change along with allocative efficiency improvements and terms-of-trade gains in investment and savings. The government services sector would be the largest beneficiary of these gains, followed by the agriculture, extraction, and heavy and light manufacturing sectors. Most of the other regions in the model would benefit from welfare gains as well, with the European Union (EU28), China and the U.S. being the largest beneficiaries, mainly due to their terms-of-trade gains in goods and services. The other two scenarios also resulted in welfare gains for the DPRK, but on smaller scales. Under the Korean Unification scenario, the DPRK would have a welfare gain of $1.7 billion, while the gain would equal $107 billion in the case of a free trade agreement in Northeast Asia. Contrary to the first scenario, most of these welfare gains were associated with the country's terms-of-trade gains in goods and services. In terms of impacts on industry, all sectors would benefit from TFP growth, while there would be winners and losers in the Korean Unification and Northeast Asia FTA scenarios. |
Keywords: | CGE analysis, the DPRK's economy, Total Factor Productivity |
JEL: | D58 O53 D24 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:eri:dpaper:2003&r=all |
By: | Tam Tran-The |
Abstract: | Credit risk management, the practice of mitigating losses by understanding the adequacy of a borrower's capital and loan loss reserves, has long been imperative to any financial institution's long-term sustainability and growth. MassMutual is no exception. The company is keen on effectively monitoring downgrade risk, the risk associated with the event that a company's credit rating deteriorates. Current work in downgrade risk modeling depends on multiple variations of quantitative measures provided by third-party rating agencies and risk management consultancy companies. As these structured numerical data become increasingly commoditized among institutional investors, there has been a wide push into using alternative sources of data, such as financial news, earnings call transcripts, or social media content, to possibly gain a competitive edge in the industry. The volume of qualitative information or unstructured text data has exploded in the past decades and is now available for due diligence to supplement quantitative measures of credit risk. This paper proposes a predictive downgrade model using solely news data represented by neural network embeddings. The model standalone achieves an Area Under the Receiver Operating Characteristic Curve (AUC) of more than 80 percent. The output probability from this news model, as an additional feature, improves the performance of our benchmark model using only quantitative measures by more than 5 percent in terms of both AUC and recall rate. A qualitative evaluation also indicates that news articles related to our predicted downgrade events are especially relevant and of high quality in our business context. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.08204&r=all |
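A schematic version of the evaluation described above: score a model with AUC, then append a news-based signal as an extra feature to a quantitative benchmark and compare. Everything below is simulated; the paper's news model uses neural network embeddings rather than this stand-in signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
quant = rng.normal(size=(n, 5))                 # third-party quantitative measures
news_signal = rng.normal(size=n)                # stands in for the news-model output
y = rng.random(n) < 1 / (1 + np.exp(-(quant[:, 0] + news_signal)))  # downgrades

X_full = np.column_stack([quant, news_signal])  # benchmark + news feature
for name, X in [("benchmark", quant), ("benchmark + news", X_full)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    proba = LogisticRegression(max_iter=500).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(yte, proba):.3f}")
```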
By: | Daria Dzyabura (New Economic School, Moscow, Russia); Siham El Kihal (Frankfurt School of Finance & Management, Germany); John R. Hauser (MIT Sloan School of Management, USA); Marat Ibragimov (MIT Sloan School of Management, USA) |
Abstract: | In online channels, products are returned at high rates. Shipping, processing, and refurbishing are so costly that a retailer's profit is extremely sensitive to return rates. In many product categories, such as the $500 billion fashion industry, direct experiments are not feasible because the fashion season is over before sufficient data are observed. We show that predicting return rates prior to product launch enhances profit substantially. Using data from a large European retailer (over 1.5 million transactions for about 4,500 fashion items), we demonstrate that machine-learning methods applied to product images enhance predictive ability relative to the retailer's benchmark (category, seasonality, price, and color labels). Custom image-processing features (RGB color histograms, Gabor filters) capture color and patterns to improve predictions, but deep-learning features improve predictions significantly more. Deep learning appears to capture color-pattern-shape and other intangibles associated with high return rates for apparel. We derive an optimal policy for launch decisions that takes prediction uncertainty into account. The optimal deep-learning-based policy improves profits, achieving 40% of the improvement that would be achievable with perfect information. We show that the retailer could further enhance predictive ability and profits if it could observe the discrepancy between online and offline sales. |
Keywords: | machine learning, image processing, product returns |
Date: | 2019–09–03 |
URL: | http://d.repec.org/n?u=RePEc:abo:neswpt:w0259&r=all |
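One of the custom features named above, the RGB color histogram, is easy to sketch: per-channel histograms are normalized and concatenated into a fixed-length color descriptor. The random array stands in for a product photo, and the bin count is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # H x W x RGB

def rgb_histogram(img, bins=8):
    """Concatenate normalized per-channel histograms into one feature vector."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    feats = np.concatenate(feats).astype(float)
    return feats / feats.sum()                   # normalize: size-invariant feature

print(rgb_histogram(image).round(3))             # 24-dimensional color descriptor
```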
By: | Katrin Heßler (Johannes Gutenberg-University Mainz, Germany); Stefan Irnich (Johannes Gutenberg-University Mainz, Germany); Tobias Kreiter (scc EDV-Beratung AG); Ulrich Pferschy (University of Graz) |
Abstract: | We consider a packing problem that arises in a direct-shipping system in the food and beverage industry: trucks are the containers, and the products to be distributed are the items. The packing is constrained by two independent quantities, weight (e.g., measured in kg) and volume (number of pallets). Additionally, the products are grouped into three categories: standard, cooled, and frozen (the latter two require refrigerated trucks). Products of different categories can be transported in one truck using separated zones, but the cost of a truck depends on the transported product categories. Moreover, product splitting should be avoided so that (un-)loading is simplified. As a result, we seek a feasible packing optimizing the following objective functions in a strictly lexicographic sense: minimize (1) the total number of trucks; (2) the number of refrigerated trucks; (3) the number of refrigerated trucks which contain frozen products; (4) the number of refrigerated trucks which also transport standard products; and (5) product splitting. This is a real-world application of the bin-packing problem with cardinality constraints, a.k.a. the two-dimensional vector packing problem, with additional constraints. We provide a heuristic and an exact solution approach. The heuristic meta-scheme considers the multi-compartment and item-fragmentation features of the problem and applies various problem-specific heuristics. The exact solution algorithm covering all five stages is based on branch-and-price using stabilization techniques that exploit dual-optimal inequalities. Computational results on real-world and difficult self-generated instances demonstrate the applicability of our approach. |
Keywords: | bin packing, lexicographic objective, heuristics, column generation, dual-optimal inequalities |
Date: | 2020–04–14 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:2009&r=all |
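For the two-dimensional vector-packing core of the problem (weight and pallet limits per truck), a generic first-fit-decreasing heuristic looks as follows. This is not the authors' meta-scheme or branch-and-price algorithm, and it ignores the category and splitting objectives; capacities and items are invented.

```python
CAP_WEIGHT, CAP_PALLETS = 24_000, 33                      # per-truck limits (assumed)
items = [(5000, 8), (9000, 4), (3000, 12), (7000, 10),    # (kg, pallets) per lot
         (6000, 6), (2000, 15), (8000, 5)]

trucks = []                                               # truck = [used_kg, used_pallets]
# Sort by the tighter of the two normalized dimensions, largest first.
for w, p in sorted(items, reverse=True,
                   key=lambda it: max(it[0] / CAP_WEIGHT, it[1] / CAP_PALLETS)):
    for truck in trucks:                                  # first truck where it fits
        if truck[0] + w <= CAP_WEIGHT and truck[1] + p <= CAP_PALLETS:
            truck[0] += w
            truck[1] += p
            break
    else:
        trucks.append([w, p])                             # open a new truck
print(f"{len(trucks)} trucks used:", trucks)
```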
By: | Tarek A. Hassan (Boston University, NBER, and CEPR); Laurence van Lent (Tilburg University); Stephan Hollander (Frankfurt School of Finance and Management); Ahmed Tahoun (London Business School) |
Abstract: | Using tools described in our earlier work (Hassan et al., 2019, 2020), we develop text-based measures of the costs, benefits, and risks listed firms in the US and over 80 other countries associate with the spread of Covid-19 and other epidemic diseases. We identify which firms expect to gain or lose from an epidemic disease and which are most affected by the associated uncertainty as a disease spreads in a region or around the world. As Covid-19 spreads globally in the first quarter of 2020, we find that firms' primary concerns relate to the collapse of demand, increased uncertainty, and disruption in supply chains. Other important concerns relate to capacity reductions, closures, and employee welfare. By contrast, financing concerns are mentioned relatively rarely. We also identify some firms that foresee opportunities in new or disrupted markets due to the spread of the disease. Finally, we find some evidence that firms that have experience with SARS or H1N1 have more positive expectations about their ability to deal with the coronavirus outbreak. |
Keywords: | Epidemic diseases, pandemic, exposure, virus, firms, uncertainty, sentiment, machine learning |
JEL: | I15 I18 D22 G15 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:bos:iedwpr:dp-340&r=all |
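A drastically simplified sketch of a text-based exposure measure in this spirit: count transcript sentences mentioning disease terms, and separately those that also contain negative-sentiment words. The term lists and the snippet are illustrative only, not the authors' dictionaries or data.

```python
DISEASE = {"covid-19", "coronavirus", "epidemic", "pandemic", "sars"}
NEGATIVE = {"disruption", "closure", "uncertainty", "collapse", "loss"}

transcript = (
    "Demand has seen a collapse since the coronavirus outbreak began. "
    "Our supply chains face disruption from the pandemic. "
    "We gained share in disinfectant products this quarter."
)
sentences = [s.lower().split() for s in transcript.split(". ") if s]
# A sentence counts as exposure if any disease term appears in it.
hits = [s for s in sentences if DISEASE & set(w.strip(".,") for w in s)]
negative = [s for s in hits if NEGATIVE & set(w.strip(".,") for w in s)]
print(f"exposure: {len(hits) / len(sentences):.2f}, "
      f"negative-sentiment exposure: {len(negative) / len(sentences):.2f}")
```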
By: | Paul D McNelis; James Yetman |
Abstract: | We assess the dynamics of volatility spillovers among global systemically important banks (G-SIBs). We measure spillovers using vector-autoregressive models of range volatility of the equity prices of G-SIBs, together with machine learning methods. We then compare the size of these spillovers with the degree of systemic importance measured by the Basel Committee on Banking Supervision's G-SIB bucket designations. We find a high positive correlation between the two. We also find that higher bank capital reduces volatility spillovers, especially for banks in higher G-SIB buckets. Our results suggest that requiring banks that are designated as being more systemically important globally to hold additional capital is likely to reduce volatility spillovers from them to other large banks. |
Keywords: | G-SIBs, contagion, connectedness, bank capital, cross validation |
JEL: | C58 F65 G21 G28 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:bis:biswps:856&r=all |
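The first ingredient above, a VAR of volatility series, can be estimated by equation-by-equation OLS, with off-diagonal coefficients giving a crude read on cross-bank spillovers. Published spillover indices (e.g., Diebold-Yilmaz) go further via forecast-error variance decompositions; the series below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 1000, 3                                  # three banks' volatility series
A_true = np.array([[0.5, 0.2, 0.0],
                   [0.1, 0.4, 0.1],
                   [0.0, 0.3, 0.5]])            # invented spillover structure
vol = np.zeros((T, k))
for t in range(1, T):
    vol[t] = vol[t - 1] @ A_true.T + rng.normal(0, 0.1, k)

# VAR(1) by least squares: vol[t] ~ vol[t-1].
Y, X = vol[1:], vol[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
spillover = np.abs(A_hat).sum() - np.abs(np.diag(A_hat)).sum()
print("estimated VAR(1) matrix:\n", A_hat.round(2))
print("off-diagonal (spillover) mass:", round(spillover, 2))
```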
By: | Lukasz A. Drozd; Marina Tavares |
Abstract: | We consider several epidemiological simulations of the COVID-19 pandemic using the textbook SIR model and discuss the basic implications of these results for crafting an adequate response to the ensuing economic crisis. Our simulations are meant to be illustrative of the findings reported in the epidemiological literature using more sophisticated models (e.g., Ferguson et al. (2020)). The key observation we stress is that moderating the social-distancing response may, according to the models, come at the steep price of extending the duration of the pandemic, and hence the time these measures need to stay in place to be effective. We caution against ignoring this tradeoff, as well as the fact that the timeline of the pandemic remains uncertain at this point. Consistent with the prudent advice of hoping for the best but preparing for the worst, we argue that a comprehensive economic response should address the question of how to safely "hibernate" the national economy for a flexible time period. We provide a discussion of basic policy guidelines and highlight the key policy challenges. |
Keywords: | SIR model; COVID-19 pandemic; containment policies |
JEL: | E1 H0 I1 |
Date: | 2020–04–14 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:87788&r=all |
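The textbook SIR model the authors simulate can be integrated with a simple Euler scheme, which also illustrates the duration trade-off they stress: distancing lowers and delays the infection peak but stretches the epidemic out. The parameter values are illustrative, not calibrated.

```python
import numpy as np

def sir(beta, gamma=1 / 14, days=400, i0=1e-4):
    """Daily Euler steps of the susceptible-infected-recovered shares."""
    s, i, r = 1.0 - i0, i0, 0.0
    path = []
    for _ in range(days):
        new_inf = beta * s * i                 # daily new infections
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        path.append(i)
    return np.array(path)

baseline = sir(beta=0.25)                      # R0 = beta/gamma = 3.5 (assumed)
distanced = sir(beta=0.25 * 0.6)               # distancing cuts contacts by 40%
for name, p in [("no intervention", baseline), ("distancing", distanced)]:
    print(f"{name}: peak infected {p.max():.1%} on day {p.argmax()}, "
          f"still active on final day: {p[-1]:.2%}")
```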
By: | Ryan Banerjee; Anamaria Illes; Enisse Kharroubi; José María Serena Garralda |
Abstract: | The Covid-19 shock is placing enormous strains on corporates' cash buffers. Corporate financial statements from 2019 suggest that 50% of firms do not have sufficient cash to cover total debt servicing costs over the coming year. Credit lines could provide firms with additional liquidity: on average, undrawn credit stood at around 120% of debt servicing costs at end-2019. However, access is uneven, and banks may be reluctant to renew or extend credit lines in the current environment. Sticky operating expenses result in many firms running operating losses, placing an additional burden on cash buffers. Estimates indicate that following a 10% drop in revenues, operating expenses fall by only 6% on average. Simulations suggest that if revenues fall by 25% in 2020, then closing the entire funding gap with debt would raise firm leverage by around 10 percentage points. |
Date: | 2020–04–28 |
URL: | http://d.repec.org/n?u=RePEc:bis:bisblt:10&r=all |
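The bulletin's arithmetic can be worked through directly under its stated elasticity (operating expenses fall 6% per 10% revenue fall). The balance-sheet numbers below are invented; the bulletin's 10-percentage-point figure comes from firm-level data and a fuller funding-gap definition.

```python
revenue, opex = 100.0, 90.0          # stylized firm with a 10% operating margin
debt, assets = 80.0, 160.0           # invented balance sheet

rev_drop = 0.25                      # the bulletin's 25%-revenue-fall scenario
opex_drop = rev_drop * (6 / 10)      # sticky costs: 6% fall per 10% revenue fall
new_revenue = revenue * (1 - rev_drop)
new_opex = opex * (1 - opex_drop)

operating_loss = max(new_opex - new_revenue, 0.0)
print(f"operating loss (funding gap): {operating_loss:.1f}")
new_leverage = (debt + operating_loss) / assets   # gap closed entirely with debt
print(f"leverage: {debt / assets:.0%} -> {new_leverage:.1%}")
```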
By: | Torry, Malcolm |
Abstract: | A Citizen’s Basic Income, sometimes called a Basic Income, a Universal Basic Income, or a Citizen’s Income, is an unconditional and nonwithdrawable income paid to every individual. There have been calls during the coronavirus crisis for both an Emergency Basic Income (an immediate Basic Income to protect individuals’ incomes) and for a Recovery Basic Income (a Basic Income to be implemented with a view to preventing a recession once the virus outbreak begins to subside), and also for a permanent Citizen’s Basic Income scheme. This working paper summarises the results of microsimulation research on a Recovery Basic Income and on a subsequent sustainable revenue neutral Citizen’s Basic Income. An appendix studies the implementation of a Citizen’s Basic Income scheme in the context of different Universal Credit roll-out assumptions. |
Date: | 2020–04–29 |
URL: | http://d.repec.org/n?u=RePEc:ese:emodwp:em7-20&r=all |
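A toy microsimulation in this spirit: pay a flat Basic Income to everyone, fund it with a flat surtax calibrated for revenue neutrality, and compare poverty headcounts. The amounts and tax rule are invented; the paper works with the full UK tax-benefit system.

```python
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.0, sigma=0.7, size=100_000)   # annual incomes

bi = 4_800.0                                     # flat Basic Income per person
surtax = bi * len(income) / income.sum()         # flat rate that exactly funds it
post = income * (1 - surtax) + bi                # revenue-neutral by construction

povline = 0.6 * np.median(income)                # fixed relative poverty line
print(f"surtax rate: {surtax:.1%}")
print(f"poverty rate before: {(income < povline).mean():.1%}, "
      f"after: {(post < povline).mean():.1%}")
```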
By: | Pauline Affeldt; Tomaso Duso; Florian Szücs |
Abstract: | We study the evolution of EC merger decisions over the first 25 years of common European merger policy. Using a novel dataset at the level of the relevant antitrust markets, containing all merger cases scrutinized by the Commission over the 1990-2014 period, we evaluate how consistently arguments related to structural market parameters – dominance, concentration, barriers to entry, and foreclosure – were applied over time and across dimensions such as the geographic market definition and the complexity of the merger. Simple linear probability models, as usually applied in the literature, on average overestimate the effects of the structural indicators. Using non-parametric machine learning techniques, we find that dominance is positively correlated with competitive concerns, especially in concentrated markets and in complex mergers. Yet its importance has decreased over time, significantly so following the 2004 merger policy reform. The Commission's competitive concerns are also correlated with concentration, the more so the higher the entry barriers and the risks of foreclosure. These patterns do not change over time. The role of the structural indicators in explaining competitive concerns does not depend on the geographic market definition. |
Keywords: | merger policy, EU Commission, dominance, concentration, entry barriers, foreclosure, causal forests |
JEL: | K21 L40 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8213&r=all |
By: | Daniel S. Hamermesh |
Abstract: | Using the 2012-13 American Time Use Survey, I find that both whom people spend time with and how they spend it affect their happiness, adjusted for numerous demographic and economic variables. Satisfaction among married individuals increases most with additional time spent with their spouse. Among singles, satisfaction decreases most as more time is spent alone. Assuming that lock-downs constrain married people to spend time solely with their spouses, simulations show that their happiness may have increased compared to before the lock-downs; but sufficiently large losses of work time and income reverse this inference. Simulations demonstrate clearly that, assuming lock-downs impose solitude on singles, their happiness was reduced, a reduction made more severe by income and work losses. |
JEL: | I12 I31 J22 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:27018&r=all |
By: | Chenxu Li; Olivier Scaillet; Yiwen Shen |
Abstract: | This paper establishes a new decomposition of optimal dynamic portfolio choice under general incomplete-market diffusion models by disentangling the fundamental impacts on optimal policy of market incompleteness and flexible wealth-dependent utilities. We derive explicit dynamics of the components of the optimal policy, and obtain an equation system for solving the shadow price of market incompleteness, which is found to depend on both the market state and the wealth level. We identify a new, important hedge component through which non-myopic investors hedge the uncertainty in the shadow price due to variation in the wealth level. As an application, we establish and compare the decompositions of optimal policy under general models with the prevalent HARA and CRRA utilities. Under nonrandom but possibly time-varying interest rates, we solve in closed form the HARA policy as a combination of a bond holding scheme and a corresponding CRRA strategy. Finally, we develop a simulation method to implement the decomposition of optimal policy under the general incomplete-market setting, for which existing approaches remain elusive. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.10096&r=all |
By: | Andrey Itkin; Dmitry Muravey |
Abstract: | In this paper we derive semi-closed form prices of barrier (perhaps time-dependent) options for the Hull-White model, i.e., where the underlying follows a time-dependent OU process with a mean-reverting drift. Our approach is similar to that in (Carr and Itkin, 2020), where the method of generalized integral transforms is applied to pricing barrier options in the time-dependent OU model, but extends it to an infinite domain (hitherto an unsolved problem). Alternatively, we use the method of heat potentials for solving the same problems. By a semi-closed solution we mean that, first, a linear Volterra equation of the first kind must be solved numerically, and then the option price is represented as a one-dimensional integral. Our analysis shows that our method is computationally more efficient than the backward and even forward finite difference methods (if one uses them to solve those problems), while providing better accuracy and stability. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.09591&r=all |
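The numerical core the authors describe, a linear Volterra equation of the first kind, integral_0^t K(t,s) f(s) ds = g(t), can be solved by a midpoint product rule, which makes the discrete system lower-triangular and solvable by forward substitution. The kernel and right-hand side below form a test pair with known solution f = 1, not the paper's kernels.

```python
import numpy as np

K = lambda t, s: np.exp(-(t - s))                 # test kernel (assumed)
g = lambda t: 1.0 - np.exp(-t)                    # then f(s) = 1 solves the equation

n, T = 200, 2.0
h = T / n
mid = (np.arange(n) + 0.5) * h                    # midpoints s_{j+1/2}
f = np.zeros(n)
for i in range(1, n + 1):                         # march forward in t_i = i*h
    t = i * h
    # Midpoint quadrature of the integral up to the current unknown.
    acc = sum(K(t, mid[j]) * f[j] * h for j in range(i - 1))
    f[i - 1] = (g(t) - acc) / (K(t, mid[i - 1]) * h)

print("max error vs exact solution f = 1:", float(np.abs(f - 1.0).max()))
```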
By: | Derek Singh; Shuzhong Zhang |
Abstract: | This paper investigates arbitrage properties of financial markets under distributional uncertainty using Wasserstein distance as the ambiguity measure. The weak and strong forms of the classical arbitrage conditions are considered. A relaxation is introduced for which we coin the term statistical arbitrage. The simpler dual formulations of the robust arbitrage conditions are derived. A number of interesting questions arise in this context. One question is: can we compute a critical Wasserstein radius beyond which an arbitrage opportunity exists? What is the shape of the curve mapping the degree of ambiguity to statistical arbitrage levels? Other questions arise regarding the structure of best (worst) case distributions and optimal portfolios. Towards answering these questions, some theory is developed and computational experiments are conducted for specific problem instances. Finally some open questions and suggestions for future research are discussed. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.09432&r=all |