NEP: New Economics Papers
on Computational Economics
Issue of 2020‒03‒23
27 papers chosen by
By: | Parisa Golbayani; Dan Wang; Ionut Florescu |
Abstract: | Recent literature applies machine learning techniques to assess corporate credit ratings based on financial statement reports. In this work, we analyze the performance of four neural network architectures (MLP, CNN, CNN2D, LSTM) in predicting corporate credit ratings as issued by Standard and Poor's. We analyze companies from the energy, financial and healthcare sectors in the US. The goal of the analysis is to improve the application of machine learning algorithms to credit assessment. To this end, we focus on three questions. First, we investigate whether the algorithms perform better when using a selected subset of features, or whether it is better to allow the algorithms to select features themselves. Second, is the temporal aspect inherent in financial data important for the results obtained by a machine learning algorithm? Third, is there a particular neural network architecture that consistently outperforms others with respect to input features, sectors and holdout sets? We create several case studies to answer these questions and analyze the results using ANOVA and multiple comparison testing procedures.
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.02334&r=all |
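A minimal sketch of the ANOVA-plus-multiple-comparison step described in the abstract above, using made-up holdout accuracies and Tukey's HSD as the multiple-comparison procedure (the paper does not report these numbers, and the exact test is an assumption):

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical holdout accuracies from repeated runs of each architecture
scores = {
    "MLP":   rng.normal(0.78, 0.02, 10),
    "CNN":   rng.normal(0.80, 0.02, 10),
    "CNN2D": rng.normal(0.81, 0.02, 10),
    "LSTM":  rng.normal(0.83, 0.02, 10),
}

# One-way ANOVA: do the architectures differ at all?
F, p = f_oneway(*scores.values())
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

# Tukey's HSD: which pairs differ, controlling the family-wise error rate
df = pd.DataFrame([(m, s) for m, v in scores.items() for s in v],
                  columns=["model", "accuracy"])
print(pairwise_tukeyhsd(df["accuracy"], df["model"]))
```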
By: | Manav Kaushik; A K Giri |
Abstract: | In today's global economy, accuracy in predicting macroeconomic parameters such as the foreign exchange rate, or at least estimating the trend correctly, is of key importance for any future investment. In recent times, the use of computational intelligence-based techniques for forecasting macroeconomic variables has proven highly successful. This paper develops a multivariate time series approach to forecast the exchange rate (USD/INR) while comparing in parallel the performance of three multivariate prediction modelling techniques: Vector Auto Regression (a traditional econometric technique), Support Vector Machine (a contemporary machine learning technique), and Recurrent Neural Networks (a contemporary deep learning technique). We have used monthly historical data for several macroeconomic variables from April 1994 to December 2018 for the USA and India to predict the USD-INR foreign exchange rate. The results clearly show that the contemporary techniques of SVM and RNN (Long Short-Term Memory) outperform the widely used traditional method of Vector Auto Regression. The RNN model with Long Short-Term Memory (LSTM) provides the maximum accuracy (97.83%), followed by the SVM model (97.17%) and the VAR model (96.31%). Finally, we present a brief analysis of the correlation and interdependencies of the variables used for forecasting.
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.10247&r=all |
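For readers unfamiliar with the VAR leg of such a comparison, a minimal sketch with statsmodels follows; the synthetic macro series, lag order, and 12-month holdout are illustrative assumptions, not the paper's specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 297  # April 1994 - December 2018, monthly
macro = pd.DataFrame({
    "usd_inr":   np.cumsum(rng.normal(0.1, 0.5, n)) + 35.0,  # stand-in series
    "cpi_diff":  rng.normal(0.2, 0.1, n),
    "rate_diff": rng.normal(1.5, 0.3, n),
})

train, test = macro[:-12], macro[-12:]
res = VAR(train).fit(maxlags=12, ic="aic")       # lag order chosen by AIC
fcst = res.forecast(train.values[-res.k_ar:], steps=12)
mape = np.mean(np.abs(fcst[:, 0] - test["usd_inr"]) / test["usd_inr"])
print(f"VAR 12-month MAPE on USD/INR: {100 * mape:.2f}%")
```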
By: | Zura Kakushadze; Willie Yu |
Abstract: | We give explicit algorithms and source code for extracting factors underlying Treasury yields using (unsupervised) machine learning (ML) techniques, such as nonnegative matrix factorization (NMF) and (statistically deterministic) clustering. NMF is a popular ML algorithm (used in computer vision, bioinformatics/computational biology, document classification, etc.), but is often misconstrued and misused. We discuss how to properly apply NMF to Treasury yields. We analyze the factors based on NMF and clustering and their interpretation. We discuss their implications for forecasting Treasury yields in the context of out-of-sample ML stability issues. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.05095&r=all |
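A minimal sketch of the NMF step on a yield panel, with a synthetic nonnegative yield matrix standing in for the Treasury data; the three-factor choice and nndsvd initialisation are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
# Fake level/slope structure: yield = level + slope * log-maturity + noise
level = rng.uniform(1.0, 3.0, size=(500, 1))
slope = rng.uniform(0.1, 0.6, size=(500, 1))
Y = level + slope * np.log1p(maturities) + rng.normal(0, 0.05, (500, 10))
Y = np.clip(Y, 0, None)   # NMF requires nonnegative input

nmf = NMF(n_components=3, init="nndsvd", max_iter=1000)
W = nmf.fit_transform(Y)  # date-by-factor loadings
H = nmf.components_       # factor-by-maturity shapes
print("factor shapes across maturities:\n", H.round(2))
```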
By: | Steven Y. K. Wong (University of Technology Sydney); Jennifer Chan (University of Sydney); Lamiae Azizi (University of Sydney); Richard Y. D. Xu (University of Technology Sydney) |
Abstract: | We consider the problem of neural network training in a time-varying context. Machine learning algorithms have excelled in problems that do not change over time. However, problems encountered in financial markets are often non-stationary. We propose the online early stopping algorithm and show that a neural network trained using this algorithm can track a function changing with unknown dynamics. We apply the proposed algorithm to the stock return prediction problem studied in Gu et al. (2019) and achieve a mean rank correlation of 4.69%, almost twice as high as the expanding window approach. We also show that prominent factors, such as the size effect and momentum, exhibit time-varying stock return predictiveness.
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.02515&r=all |
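A minimal sketch of the underlying idea, limiting how hard a model is fit to each observation so it can track a drifting relationship, on a toy linear model; this illustrates the tracking intuition only, not the paper's online early stopping algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)
T, p = 2000, 5
beta = np.zeros(p)        # true, slowly drifting coefficients
w = np.zeros(p)           # online model weights
errs = []
for t in range(T):
    beta += rng.normal(0, 0.01, p)          # unknown dynamics: random walk
    x = rng.normal(size=p)
    y = x @ beta + rng.normal(0, 0.1)
    errs.append((x @ w - y) ** 2)           # error measured before updating
    for _ in range(3):                      # a capped number of gradient steps
        w -= 0.05 * (x @ w - y) * x         # plays the role of early stopping
print("mean one-step-ahead MSE:", round(float(np.mean(errs[100:])), 4))
```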
By: | Weiwei Jiang |
Abstract: | Stock market prediction has been a classical yet challenging problem, attracting attention from both economists and computer scientists. With the purpose of building an effective prediction model, both linear and machine learning tools have been explored over the past couple of decades. Lately, deep learning models have been introduced as new frontiers for this topic, and development is so rapid that it is hard to keep up. Hence, our motivation for this survey is to give an up-to-date review of recent works on deep learning models for stock market prediction. We not only categorize the different data sources, various neural network structures, and commonly used evaluation metrics, but also cover implementation and reproducibility. Our goal is to help interested researchers stay synchronized with the latest progress and to help them easily reproduce previous studies as baselines. Based on this summary, we also highlight some future research directions in this topic.
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.01859&r=all |
By: | Marijn A. Bolhuis; Brett Rayner |
Abstract: | We leverage insights from machine learning to optimize the tradeoff between bias and variance when estimating economic models using pooled datasets. Specifically, we develop a simple algorithm that estimates the similarity of economic structures across countries and selects the optimal pool of countries to maximize out-of-sample prediction accuracy of a model. We apply the new algorithm by nowcasting output growth with a panel of 102 countries and are able to significantly improve forecast accuracy relative to alternative pools. The algorithm improves nowcast performance for advanced economies, as well as emerging market and developing economies, suggesting that machine learning techniques using pooled data could be an important macro tool for many countries.
Date: | 2020–02–28 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:20/44&r=all |
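A minimal sketch of the pool-selection idea: greedily add countries to the training pool as long as out-of-sample error for the target country falls. The data, model, and greedy rule are illustrative stand-ins for the paper's algorithm:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
countries = [f"C{i}" for i in range(10)]
data = {}
for c in countries:
    X = rng.normal(size=(80, 3))
    beta = np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.3, 3)  # similar, not identical, structures
    data[c] = (X, X @ beta + rng.normal(0, 0.5, 80))

target = "C0"
Xt, yt = data[target]
X_tr, y_tr, X_val, y_val = Xt[:60], yt[:60], Xt[60:], yt[60:]

def oos_error(pool):
    X = np.vstack([X_tr] + [data[c][0] for c in pool])
    y = np.concatenate([y_tr] + [data[c][1] for c in pool])
    m = LinearRegression().fit(X, y)
    return np.mean((m.predict(X_val) - y_val) ** 2)

pool, best = [], oos_error([])
improved = True
while improved:
    improved = False
    for c in set(countries) - {target} - set(pool):
        e = oos_error(pool + [c])
        if e < best:                 # keep the best improving candidate
            best, add, improved = e, c, True
    if improved:
        pool.append(add)
print("selected pool:", pool, "validation MSE:", round(best, 3))
```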
By: | Yuri F. Saporito; Zhaoyu Zhang |
Abstract: | In this paper we propose a generalization of the Deep Galerkin Method (DGM) of \cite{dgm} to deal with Path-Dependent Partial Differential Equations (PPDEs). These equations first appeared in the seminal work of \cite{fito_dupire}, where the functional It\^o calculus was developed to deal with path-dependent financial derivative contracts. The method, which we call the Path-Dependent DGM (PDGM), consists of using a combination of feed-forward and Long Short-Term Memory architectures to model the solution of the PPDE. We then analyze several numerical examples, many from the Financial Mathematics literature, that show the capabilities of the method under very different situations.
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.02035&r=all |
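A minimal sketch of a PDGM-style architecture: an LSTM summarises the path history and a feed-forward head maps the summary and the current time to the candidate PPDE solution. Training against the PPDE residual is omitted, and all sizes are illustrative assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class PDGMNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + 1, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, path, t):        # path: (batch, steps, 1), t: (batch, 1)
        _, (h, _) = self.lstm(path)    # h: (1, batch, hidden) summarises the path
        return self.head(torch.cat([h[0], t], dim=1))

net = PDGMNet()
u = net(torch.randn(16, 50, 1), torch.rand(16, 1))  # u(t, path) on a sample batch
print(u.shape)  # torch.Size([16, 1])
```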
By: | Oksana Bashchenko (HEC Lausanne; Swiss Finance Institute); Alexis Marchal (EPFL; SFI) |
Abstract: | We develop a methodology for detecting asset bubbles using a neural network. We rely on the theory of local martingales in continuous time and use a deep network to estimate the diffusion coefficient of the price process more accurately than the current estimator, obtaining an improved detection of bubbles. We show the outperformance of our algorithm over the existing statistical method in a laboratory created with simulated data. We then apply the network classification to real data and build a zero-net-exposure trading strategy that exploits the risky arbitrage emanating from the presence of bubbles in the US equity market from 2006 to 2008. The profitability of the strategy provides an estimate of the economic magnitude of bubbles as well as support for the theoretical assumptions relied upon.
Keywords: | Bubbles, Strict local martingales, High-frequency data, Deep learning, LSTM |
JEL: | C22 C45 C58 G12 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2008&r=all |
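A minimal sketch of the core estimation step, learning the diffusion coefficient from high-frequency increments via E[(dX)^2 | X = x] ≈ sigma(x)^2 dt; the simulated price path and the small sklearn network are illustrative stand-ins for the paper's estimator:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
dt, n = 1 / 23400, 20000                  # roughly 1-second steps
sigma = lambda x: 0.3 * x                 # true diffusion coefficient (toy)
X = np.empty(n)
X[0] = 100.0
for i in range(n - 1):
    X[i + 1] = X[i] + sigma(X[i]) * np.sqrt(dt) * rng.normal()

# Squared increments over dt estimate sigma^2 at the current price level
x, y = X[:-1].reshape(-1, 1), np.diff(X) ** 2 / dt
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(x, y)

grid = np.linspace(X.min(), X.max(), 5).reshape(-1, 1)
est = np.sqrt(np.clip(net.predict(grid), 0, None))
print(np.c_[grid, est, sigma(grid)])      # price level, estimated and true sigma
```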
By: | Giovanni Mandras (European Commission - JRC); Andrea Conte (European Commission - JRC); Simone Salotti (European Commission - JRC) |
Abstract: | The European Commission's Joint Research Centre (JRC) is supporting an Innovation Agenda for the Western Balkans (Albania, Bosnia and Herzegovina, Kosovo*, Montenegro, North Macedonia and Serbia). Smart Specialisation is the European Union (EU) place-based policy aiming at more thematic concentration in research and innovation (R&I) investments via the evidence-based identification of the strengths and potential of a given economy. Access to data and economic analysis are key to a better identification of both current and future socio-economic policy challenges. The EU Instrument for Pre-accession Assistance (IPA) supports reforms in the enlargement countries with financial and technical help. Out of the almost €4,000 million of EU financial support to the Western Balkans over the programming period 2014-2020, €664 million are destined for North Macedonia. Economic modelling simulations using national Input-Output data for North Macedonia show the potential benefits related to investments in the country. The analysis uses a detailed sectoral disaggregation.
Keywords: | RHOMOLO, region, growth, smart specialisation, Western Balkans
JEL: | C67 C82 E61 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc119971&r=all |
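For context, a minimal sketch of the kind of Input-Output computation behind such simulations, output multipliers from the Leontief inverse, using an illustrative three-sector coefficient matrix rather than North Macedonia's actual table:

```python
import numpy as np

A = np.array([[0.20, 0.10, 0.05],     # technical coefficients: input from
              [0.15, 0.25, 0.10],     # sector i per unit of output of sector j
              [0.05, 0.10, 0.15]])
L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse (I - A)^(-1)

d = np.array([100.0, 50.0, 30.0])     # exogenous final-demand injection
x = L @ d                             # total (direct + indirect) output response
print("output multipliers per sector:", L.sum(axis=0).round(2))
print("total output response:", x.round(1))
```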
By: | Ngo-Hoang, Dai-Long |
Abstract: | Nowadays, we are surrounded by a large number of complex phenomena such as virus epidemics, rumor spreading, social norm formation, the emergence of new technologies, the rise of new economic trends and the disruption of traditional businesses. To deal with such phenomena, social scientists often apply a reductionist approach, in which they reduce such phenomena to some lower-level variables and model the relationships among them through a scheme of equations (e.g., partial differential equations and ordinary differential equations). This reductionist approach, often called equation-based modeling (EBM), has some fundamental weaknesses in dealing with real-world complex systems. For example, in modeling how a housing bubble arises from a housing market, the whole market is reduced to some factors (i.e., economic agents) with unbounded rationality and often perfect information, and the model built from the relationships among such factors is used to explain the housing bubble, while the adaptability and evolutionary nature of all engaged economic agents, along with network effects, go unaddressed. In tackling the deficiencies of the reductionist approach, the Complex Adaptive System (CAS) framework has been found very influential over the past two decades. In contrast to the reductionist approach, under this framework socio-economic phenomena such as housing bubbles are studied in an organic manner, where the economic agents are supposed to be both boundedly rational and adaptive. According to the CAS framework, socio-economic aggregates such as housing bubbles emerge out of the ways agents of a socio-economic system interact and decide. As the most powerful methodology of CAS modeling, agent-based modeling (ABM) has gained growing application among academics and practitioners. ABMs show how simple behavioral rules of agents and local interactions among them at the micro scale can generate surprisingly complex patterns at the macro scale. Despite a growing number of ABM publications, researchers unfamiliar with this methodology have to study a number of works to understand (1) the why and what of ABMs and (2) the ways they are rigorously developed. Therefore, the major focus of this paper is to help social science researchers get a big picture of ABMs and learn how to develop them both systematically and rigorously.
Date: | 2019–01–27 |
URL: | http://d.repec.org/n?u=RePEc:osf:agrixi:xutyz&r=all |
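A minimal sketch of an ABM in the sense discussed above, with heterogeneous, boundedly rational agents, local interactions on a network, and an emergent macro pattern (here, technology adoption); all rules are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
N, steps = 500, 50
adopted = np.zeros(N, dtype=bool)
adopted[rng.choice(N, 5, replace=False)] = True        # a few early adopters
neighbors = [rng.choice(N, 8, replace=False) for _ in range(N)]  # random graph
threshold = rng.uniform(0.1, 0.5, N)                   # heterogeneous agents

for t in range(steps):
    share = np.array([adopted[nb].mean() for nb in neighbors])
    adopted |= share > threshold     # adopt once enough neighbours have
    if t % 10 == 0:
        print(f"t={t:2d}  adoption share={adopted.mean():.2f}")
# The S-shaped macro adoption curve emerges from the micro rules alone.
```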
By: | Micah Goldblum; Avi Schwarzschild; Naftali Cohen; Tucker Balch; Ankit B. Patel; Tom Goldstein |
Abstract: | Algorithmic trading systems are often completely automated, and deep learning is increasingly receiving attention in this domain. Nonetheless, little is known about the robustness properties of these models. We study valuation models for algorithmic trading from the perspective of adversarial machine learning. We introduce new attacks specific to this domain with size constraints that minimize attack costs. We further discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models. Finally, we investigate the feasibility of realistic adversarial attacks in which an adversarial trader fools automated trading systems into making inaccurate predictions. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.09565&r=all |
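A minimal sketch of a size-constrained adversarial perturbation in this spirit, against a toy differentiable (linear) valuation model rather than anything from the paper; the sign-gradient step is the classic FGSM construction:

```python
import numpy as np

rng = np.random.default_rng(7)
w = rng.normal(size=20)                  # toy model: predicted return = w @ x
x = rng.normal(size=20)                  # order-book style feature vector

eps = 0.01                               # size constraint = attack cost budget
# For a linear model the input gradient is w itself; the worst-case
# L-infinity-bounded perturbation is the FGSM-style sign step.
x_adv = x - eps * np.sign(w)             # push the predicted return down
print("clean prediction:   ", w @ x)
print("attacked prediction:", w @ x_adv)
print("perturbation size (L-inf):", np.abs(x_adv - x).max())
```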
By: | Pietro Battiston; Simona Gamba; Alessandro Santoro |
Abstract: | Tax authorities around the world are increasingly employing data mining and machine learning algorithms to predict individual behaviours. Although the traditional literature on optimal tax administration provides useful tools for ex-post evaluation of policies, it disregards the problem of which taxpayers to target. This study identifies and characterises a loss function that assigns a social cost to any prediction-based policy. We define this measure as the difference between the social welfare of a given policy and that of an ideal policy unaffected by prediction errors. We show how this loss function is closely related to the receiver operating characteristic curve, a standard statistical tool used to evaluate prediction performance. Subsequently, we apply our measure to the prediction of inaccurate tax returns issued by self-employed workers and sole proprietorships in Italy. In our application, a random forest model provides the best prediction: we show how it can be interpreted using measures of variable importance developed in the machine learning literature.
Keywords: | policy prediction problems, tax behaviour, big data, machine learning |
JEL: | H26 H32 C53 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:mib:wpaper:436&r=all |
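A minimal sketch of the link to the ROC curve: sweeping the audit threshold traces out (FPR, TPR) pairs, and a hypothetical social cost per missed evader and per wasted audit picks the welfare-minimising point. All cost numbers and data below are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)
y = rng.binomial(1, 0.3, 5000)                      # 1 = inaccurate return
score = y * rng.normal(0.8, 1, 5000) + (1 - y) * rng.normal(0, 1, 5000)

fpr, tpr, thr = roc_curve(y, score)
cost_miss, cost_audit = 10.0, 1.0                   # illustrative social costs
base_rate = y.mean()
# Loss at each threshold: missed evaders plus audits wasted on the compliant
loss = cost_miss * base_rate * (1 - tpr) + cost_audit * (1 - base_rate) * fpr
best = loss.argmin()
print(f"best threshold={thr[best]:.2f}, tpr={tpr[best]:.2f}, fpr={fpr[best]:.2f}")
```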
By: | Emile Cammeraat; Ernesto Crivelli |
Abstract: | This paper evaluates elements of a comprehensive reform of the Italian tax system. Reform options are guided by the principles of reducing complexity, broadening the tax base, and lowering marginal tax rates, especially the tax burden on labor income. The revenue and distributional implications of personal income and property tax reforms are assessed with EUROMOD, while a microsimulation model is developed to evaluate VAT reform options. Simulations suggest that a substantial reduction in the tax burden on labor income can be obtained with a revenue-neutral base-broadening reform that streamlines tax expenditures and updates the property valuation system. In addition, a comprehensive reform would benefit low- and middle-income households the most, by significantly lowering their overall tax liability and thereby increasing the progressivity of the tax system.
Date: | 2020–02–21 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:20/37&r=all |
By: | Cónall Kelly; Gabriel Lord; Heru Maulana
Abstract: | We introduce an adaptive Euler method for the approximate solution of the Cox-Ingersoll-Ross short rate model. An explicit discretisation is applied over an adaptive mesh to the stochastic differential equation (SDE) governing the square root of the solution, relying upon a class of path-bounded timestepping strategies which work by reducing the stepsize as solutions approach a neighbourhood of zero. The method is hybrid in the sense that a backstop method is invoked if the timestep becomes too small, or to prevent solutions from overshooting zero and becoming negative. Under parameter constraints that imply Feller's condition, we prove that such a scheme is strongly convergent, of order at least 1/2. Under Feller's condition we also prove that the probability of ever needing the backstop method to prevent a negative value can be made arbitrarily small. Numerically, we compare this adaptive method to fixed step schemes extant in the literature, both implicit and explicit, and a novel semi-implicit adaptive variant. We observe that the adaptive approach leads to methods that are competitive over the entire domain of Feller's condition. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.10206&r=all |
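A minimal sketch of the adaptive idea: explicit Euler-Maruyama for the CIR model with the stepsize cut as the solution nears zero, plus a simple backstop. Note the paper's actual scheme discretises the square-root process and uses a different backstop; everything below is an illustrative variant:

```python
import numpy as np

rng = np.random.default_rng(9)
kappa, theta, sigma, T = 1.5, 0.04, 0.3, 1.0   # Feller holds: 2*kappa*theta > sigma^2
h_max, h_min = 1e-2, 1e-6

t, v, steps = 0.0, 0.04, 0
while t < T:
    # Shrink the step as v approaches zero; never exceed h_max or overrun T
    h = min(h_max * max(v / theta, 1e-4), h_max, T - t)
    if h < h_min:
        h = h_min                       # backstop: take the floor step instead
    v_new = (v + kappa * (theta - v) * h
             + sigma * np.sqrt(max(v, 0.0)) * np.sqrt(h) * rng.normal())
    v = max(v_new, 0.0)                 # backstop if the explicit step overshoots zero
    t += h
    steps += 1
print("terminal value:", round(v, 5), "steps taken:", steps)
```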
By: | Oksana Bashchenko (HEC Lausanne; Swiss Finance Institute); Alexis Marchal (EPFL; SFI) |
Abstract: | We develop a new method that detects jumps nonparametrically in financial time series and significantly outperforms the current benchmark on simulated data. We use a long short-term memory (LSTM) neural network that is trained on labelled data generated by a process that experiences both jumps and volatility bursts. As a result, the network learns how to disentangle the two. Then it is applied to out-of-sample simulated data and delivers results that considerably differ from the benchmark: we obtain fewer spurious detections and identify a larger number of true jumps. When applied to real data, our approach to jump screening allows us to extract a more precise signal about future volatility.
Keywords: | Jumps, Volatility Burst, High-Frequency Data, Deep Learning, LSTM |
JEL: | C14 C32 C45 C58 G17 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2010&r=all |
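A minimal sketch of the training-data generation step: simulate returns containing both genuine jumps and volatility bursts, keeping the labels, so a sequence model can learn to disentangle them. All process parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 5000
vol = np.full(n, 0.01)
burst = rng.random(n) < 0.002                    # start of a volatility burst
for i in np.where(burst)[0]:
    vol[i:i + 50] *= 5.0                         # short-lived burst in sigma
jump = rng.random(n) < 0.001                     # genuine jump times
ret = vol * rng.normal(size=n) + jump * rng.choice([-1, 1], n) * 0.05

labels = jump.astype(int)                        # 1 = jump, 0 = no jump
big = np.abs(ret) > 4 * 0.01                     # naive threshold detector
print("large |returns|:", big.sum(), "of which true jumps:", labels[big].sum())
# A threshold rule confuses bursts with jumps; a sequence model sees context.
```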
By: | Marijn A. Bolhuis; Brett Rayner |
Abstract: | We develop a framework to nowcast (and forecast) economic variables with machine learning techniques. We explain how machine learning methods can address common shortcomings of traditional OLS-based models and use several machine learning models to predict real output growth with lower forecast errors than traditional models. By combining multiple machine learning models into ensembles, we lower forecast errors even further. We also identify measures of variable importance to help improve the transparency of machine learning-based forecasts. Applying the framework to Turkey reduces forecast errors by at least 30 percent relative to traditional models. The framework also better predicts economic volatility, suggesting that machine learning techniques could be an important part of the macro forecasting toolkit of many countries. |
Date: | 2020–02–28 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:20/45&r=all |
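A minimal sketch of the ensembling step on synthetic data: fit several ML nowcasting models and average them; the model roster and equal weights are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(11)
X = rng.normal(size=(120, 8))                    # monthly indicator panel
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 120)  # "output growth"
X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

models = [ElasticNet(alpha=0.1),
          RandomForestRegressor(n_estimators=200, random_state=0),
          GradientBoostingRegressor(random_state=0)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
for m, p in zip(models, preds.T):
    print(type(m).__name__, "RMSE:", round(np.sqrt(np.mean((p - y_te) ** 2)), 3))
ens = preds.mean(axis=1)                         # simple equal-weight ensemble
print("Ensemble RMSE:", round(np.sqrt(np.mean((ens - y_te) ** 2)), 3))
```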
By: | Bird, Julia Helen; Venables, Anthony J.
Abstract: | As one of the world's fastest-growing cities, Dhaka faces acute challenges in housing its growing population and developing a more productive economy. Central to this is the scarcity of high-quality urban land. Yet a vast tract of land near the heart of the city, East Dhaka, currently remains predominantly agricultural and undeveloped as a consequence of flooding. This paper uses a computable spatial general equilibrium model that captures the economic geography of the city to estimate the economic returns of coordinated action to develop this land. The model captures different productive sectors, household skill levels, and types of housing. Firms and residents choose their location within the city given the transport network and land availability, generating a pattern of commercial and residential land use. The paper estimates the incremental impacts on income, employment and population of an embankment and other flood protection measures to protect this land, as well as of improvements in transport infrastructure and targeted support for economic development in East Dhaka.
Keywords: | Transport Services, Urban Housing and Land Settlements, Urban Housing, Municipal Management and Reform, Urban Governance and Management, Pulp & Paper Industry, Plastics & Rubber Industry, General Manufacturing, Textiles, Apparel & Leather Industry, Construction Industry, Business Cycles and Stabilization Policies, Food & Beverage Industry, Common Carriers Industry, Labor Markets
Date: | 2019–03–01 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:8762&r=all |
By: | Paolo Brunori (University of Florence); Guido Neidhöfer (ZEW - Leibniz Centre for European Economic Research)
Abstract: | We show that measures of inequality of opportunity (IOP) fully consistent with Roemer (1998)’s IOP theory can be straightforwardly estimated by adopting a machine learning approach, and we apply our novel method to analyse the development of IOP in Germany during the last three decades. To do so, we take advantage of the information contained in 25 waves of the Socio-Economic Panel. Our analysis shows that in Germany IOP declined immediately after reunification, increased in the first decade of the century, and slightly declined again after 2010. Over the entire period, at the top of the distribution we always find individuals who resided in West Germany before the fall of the Berlin Wall, whose fathers had a high occupational position, and whose mothers had a high educational degree. Individuals who resided in East Germany in 1989 and whose parents had low education persistently qualify at the bottom.
JEL: | D63 D30 D31 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:dls:wpaper:0259&r=all |
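A minimal sketch of the machine-learning route to IOP: a regression tree partitions the sample into Roemerian "types" by circumstances, and inequality between the type means gives the opportunity component. The data and the Gini choice are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(12)
n = 5000
region = rng.integers(0, 2, n)             # circumstance: East/West residence
f_occ = rng.integers(0, 3, n)              # circumstance: father's occupation
m_edu = rng.integers(0, 3, n)              # circumstance: mother's education
C = np.column_stack([region, f_occ, m_edu])
income = np.exp(0.3 * region + 0.2 * f_occ + 0.15 * m_edu
                + rng.normal(0, 0.5, n))

tree = DecisionTreeRegressor(min_samples_leaf=200).fit(C, income)
y_types = tree.predict(C)                  # income smoothed within types

def gini(x):
    x = np.sort(x)
    i = np.arange(1, len(x) + 1)
    return (2 * i - len(x) - 1) @ x / (len(x) * x.sum())

print(f"total Gini: {gini(income):.3f}, IOP (between-type) Gini: {gini(y_types):.3f}")
```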
By: | Yongyang Cai; Kenneth Judd; Rong Xu |
Abstract: | We apply numerical dynamic programming techniques to solve discrete-time multi-asset dynamic portfolio optimization problems with proportional transaction costs and shorting/borrowing constraints. Examples include problems with multiple assets and many trading periods in a finite-horizon setting. We also solve dynamic stochastic problems with a portfolio including one risk-free asset, an option, and its underlying risky asset, in the presence of transaction costs and constraints. These examples show that it is now tractable to solve such problems.
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.01809&r=all |
By: | Giovanni Dosi (Scuola Superiore Sant’Anna); Richard B. Freeman (Harvard University and NBER); Marcelo C. Pereira (University of Campinas and Scuola Superiore Sant’Anna); Andrea Roventini (Scuola Superiore Sant’Anna and OFCE, Sciences Po); Maria Enrica Virgillito (Scuola Superiore Sant’Anna) |
Abstract: | This paper presents an Agent-Based Model (ABM) that seeks to explain the concordance of sluggish growth of productivity and of real wages found in macroeconomic statistics, and the increased dispersion of firm productivity and worker earnings found in micro-level statistics, in advanced economies at the turn of the 21st century. It shows that a single market process unleashed by the decline of unionization can account for both the macro- and microeconomic phenomena, and that deunionization can be modeled as an endogenous outcome of competition between high-wage firms seeking to raise productive capacity and low-productivity firms seeking to cut wages. The model highlights the antipodal competitive dynamics between a “winner-takes-all economy” in which corporate strategies focused on cost reductions lead to divergence in productivity and wages and a “social market economy” in which competition rewards the accumulation of firm-level capabilities and worker skills with a more egalitarian wage structure.
Keywords: | Unionisation, productivity slowdown, market selection, reallocation, agent-based model |
JEL: | J51 E02 E24 C63 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:fce:doctra:2005&r=all |
By: | Maximilian Beikirch; Torsten Trimborn |
Abstract: | The Levy-Levy-Solomon model (A microscopic model of the stock market: cycles, booms, and crashes, Economics Letters 45(1)) is one of the most influential agent-based economic market models. It has been discussed and analyzed in several publications. In particular, Lux and Zschischang (Some new results on the Levy, Levy and Solomon microscopic stock market model, Physica A 291(1-4)) have shown that the model exhibits finite-size effects. In this study we extend existing work in several directions. First, we show simulations which reveal finite-size effects of the model. Second, we shed light on the origin of these finite-size effects. Furthermore, we demonstrate the sensitivity of the Levy-Levy-Solomon model to random numbers. In particular, we conclude that a low-quality pseudo-random number generator has a huge impact on the simulation results. Finally, we study the impact of the stopping criteria in the market clearance mechanism of the Levy-Levy-Solomon model.
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.10222&r=all |
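On the random-number point, a minimal demonstration of why PRNG quality matters: the classic low-quality generator RANDU satisfies x[k+2] = 6·x[k+1] − 9·x[k] (mod 2^31), so any simulation step consuming three consecutive draws inherits an exact linear dependence. This check is illustrative and unrelated to the paper's specific generators:

```python
import numpy as np

def randu(seed, n):
    """The infamous RANDU LCG: x <- 65539 * x mod 2^31."""
    out, x = np.empty(n, dtype=np.int64), seed
    for i in range(n):
        x = (65539 * x) % 2**31
        out[i] = x
    return out

x = randu(1, 10000)
resid = (9 * x[:-2] - 6 * x[1:-1] + x[2:]) % 2**31
print("RANDU triple identity holds for all draws:", bool(np.all(resid == 0)))

g = np.random.default_rng(0).integers(0, 2**31, 10000)
resid_g = (9 * g[:-2] - 6 * g[1:-1] + g[2:]) % 2**31
print("same identity for a modern PRNG:", bool(np.all(resid_g == 0)))
```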
By: | James Wallbridge |
Abstract: | We introduce a new deep learning architecture for predicting price movements from limit order books. This architecture uses a causal convolutional network for feature extraction in combination with masked self-attention to update features based on relevant contextual information. The architecture is shown to significantly outperform existing architectures such as those using convolutional networks (CNNs) and long short-term memory (LSTM), establishing a new state-of-the-art benchmark for the FI-2010 dataset.
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.00130&r=all |
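A minimal PyTorch sketch of the two ingredients named above: a causal (left-padded) convolution for feature extraction and masked self-attention so each time step attends only to its past. All dimensions and the three-class head are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CausalConvAttention(nn.Module):
    def __init__(self, n_features=40, d_model=64, n_heads=4, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1            # left padding keeps causality
        self.conv = nn.Conv1d(n_features, d_model, kernel_size)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 3)     # down / flat / up

    def forward(self, x):                     # x: (batch, time, features)
        h = x.transpose(1, 2)                 # (batch, features, time)
        h = torch.relu(self.conv(nn.functional.pad(h, (self.pad, 0))))
        h = h.transpose(1, 2)                 # (batch, time, d_model)
        T = h.size(1)
        # Upper-triangular mask: True entries are blocked (no peeking ahead)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=h.device),
                          diagonal=1)
        h, _ = self.attn(h, h, h, attn_mask=mask)
        return self.head(h[:, -1])            # predict from the last time step

model = CausalConvAttention()
logits = model(torch.randn(8, 100, 40))       # 8 order-book windows of 100 steps
print(logits.shape)  # torch.Size([8, 3])
```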
By: | Francesco Giavazzi; Felix Iglhaut; Giacomo Lemoli; Gaia Rubera |
Abstract: | We study the role of perceived threats from cultural diversity induced by terrorist attacks and a salient criminal event on public discourse and voters' support for far-right parties. We first develop a rule which allocates Twitter users in Germany to electoral districts and then use a machine learning method to compute measures of textual similarity between the tweets they produce and tweets by accounts of the main German parties. Using the dates of the aforementioned exogenous events, we estimate constituency-level shifts in similarity to party language. We find that following these events, Twitter text becomes on average more similar to that of the main far-right party, AfD, while the opposite happens for some of the other parties. Regressing estimated shifts in similarity on changes in vote shares between federal elections, we find a significant association. Our results point to the role of perceived threats in the success of nationalist parties.
JEL: | C45 D72 H56 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26825&r=all |
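A minimal sketch of the text-similarity step: represent party accounts and user tweets as TF-IDF vectors and score each tweet by cosine similarity to each party. The toy corpora stand in for the paper's German Twitter data, and the paper's actual measure comes from a trained ML model rather than plain TF-IDF:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

party_corpora = {
    "party_A": "secure borders national identity immigration control",
    "party_B": "climate action renewable energy social justice",
}
user_tweets = ["we must control immigration and protect our identity",
               "invest in renewable energy now"]

vec = TfidfVectorizer()
M = vec.fit_transform(list(party_corpora.values()) + user_tweets)
sims = cosine_similarity(M[len(party_corpora):], M[:len(party_corpora)])
for tweet, row in zip(user_tweets, sims):
    print(dict(zip(party_corpora, row.round(2))), "<-", tweet[:40])
```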
By: | Andree,Bo Pieter Johannes; Spencer,Phoebe Girouard; Chamorro,Andres; Dogo,Harun |
Abstract: | This paper revisits the issue of environment and development raised in the 1992 World Development Report, with new analysis tools and data. The paper discusses inference and interpretation in a machine learning framework. The results suggest that production gradually favors conserving the earth's resources as gross domestic product increases, but increased efficiency alone is not sufficient to offset the effects of growth in scale. Instead, structural change in the economy shapes environmental outcomes across levels of GDP. The analysis finds that average development is associated with an inverted U-shape in deforestation, pollution, and carbon intensities. Per capita emissions follow a J-curve. Specifically, poverty reduction occurs alongside degrading local environments, and higher income growth poses a global burden through carbon. Local economic structure further determines the shape, amplitude, and location of tipping points of the Environmental Kuznets Curve. The models are used to extrapolate environmental output to 2030. The daunting implications of continued development are a reminder that immediate and sustained global efforts are required to mitigate forest loss, improve air quality, and shift the global economy to a 2°C pathway.
Keywords: | Global Environment, Inequality, Environmental Disasters & Degradation, Common Carriers Industry, Food & Beverage Industry, Plastics & Rubber Industry, Business Cycles and Stabilization Policies, Textiles, Apparel & Leather Industry, Pulp & Paper Industry, Construction Industry, General Manufacturing, Nutrition
Date: | 2019–02–25 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:8756&r=all |
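A minimal sketch of the inverted-U logic: regress an environmental outcome on log GDP per capita and its square, then solve for the tipping point. The data below are simulated to have an EKC shape:

```python
import numpy as np

rng = np.random.default_rng(13)
log_gdp = rng.uniform(6, 11, 400)                       # log GDP per capita
pollution = -2 * (log_gdp - 9) ** 2 + 20 + rng.normal(0, 1, 400)

b2, b1, b0 = np.polyfit(log_gdp, pollution, 2)          # quadratic fit
turning = -b1 / (2 * b2)                                # argmax of the parabola
print(f"estimated tipping point at GDP per capita ~ {np.exp(turning):,.0f}")
```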
By: | Ben Moews; Gbenga Ibikunle |
Abstract: | Standard methods and theories in finance can be ill-equipped to capture highly non-linear interactions in financial prediction problems based on large-scale datasets, with deep learning offering a way to gain insights into correlations in markets as complex systems. In this paper, we apply deep learning to econometrically constructed gradients to learn and exploit lagged correlations among S&P 500 stocks to compare model behaviour in stable and volatile market environments, and under the exclusion of target stock information for predictions. In order to measure the effect of time horizons, we predict intraday and daily stock price movements in varying interval lengths and gauge the complexity of the problem at hand with a modification of our model architecture. Our findings show that accuracies, while remaining significant and demonstrating the exploitability of lagged correlations in stock markets, decrease with shorter prediction horizons. We discuss implications for modern finance theory and our work's applicability as an investigative tool for portfolio managers. Lastly, we show that our model's performance is consistent in volatile markets by exposing it to the environment of the recent financial crisis of 2007/2008. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.10385&r=all |
By: | Matthias Aistleitner (Institute for Comprehensive Analysis of the Economy, Johannes Kepler University Linz, Austria); Claudius Graebner (Institute for Socio-Economics, University of Duisburg-Essen, Germany; Institute for Comprehensive Analysis of the Economy, Johannes Kepler University Linz, Austria); Anna Hornykewycz (Institute for Comprehensive Analysis of the Economy, Johannes Kepler University Linz, Austria) |
Abstract: | The accumulation of new technological capabilities is of high empirical relevance, both for the development of countries and for the business success of firms. In this paper, we aim to delineate strategies for how these processes of capability accumulation can be considered more accurately in comprehensive macroeconomic models. To this end, we conduct an interdisciplinary review of the literature specialized on capability accumulation by analyzing both empirical and theoretical work at the firm and aggregate levels. In doing so, we collect evidence on various determinants and mechanisms of capability accumulation and align them with the current representation of capability accumulation in macroeconomic models. Based on these results, we make some suggestions on how macroeconomists may integrate the determinants derived from the specialized literature into their models.
Keywords: | Capability accumulation, complexity, economic development, innovation, technological change, agent-based modeling, endogenous growth, knowledge accumulation and learning
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:ico:wpaper:105&r=all |
By: | Walsh,Brian James; Hallegatte,Stephane |
Abstract: | Traditional risk assessments use asset losses as the main metric to measure the severity of a disaster. This paper proposes an expanded risk assessment based on a framework that adds socioeconomic resilience and uses wellbeing losses as its main measure of disaster severity. Using a new, agent-based model that explicitly represents the recovery and reconstruction process at the household level, this risk assessment provides new insights into disaster risks in the Philippines. First, there is a close link between natural disasters and poverty. On average, the estimates suggest that almost half a million Filipinos per year face transient consumption poverty due to natural disasters. Nationally, the bottom income quintile suffers only 9 percent of the total asset losses, but 31 percent of the total wellbeing losses. The average annual wellbeing losses due to disasters in the Philippines are estimated at US$3.9 billion per year, more than double the asset losses of US$1.4 billion. Second, the regions identified as priorities for risk-management interventions differ depending on which risk metric is used. Cost-benefit analyses based on asset losses direct risk reduction investments toward the richest regions and areas. A focus on poverty or wellbeing rebalances the analysis and generates a different set of regional priorities. Finally, measuring disaster impacts through poverty and wellbeing impacts allows the quantification of the benefits of interventions like rapid post-disaster support and adaptive social protection. Although these measures do not reduce asset losses, they efficiently reduce their consequences for wellbeing by making the population more resilient.
Keywords: | Inequality, Natural Disasters, Disaster Management, Hazard Risk Management, Social Risk Management, Disability, Services & Transfers to Poor, Access of Poor to Social Services, Economic Assistance
Date: | 2019–01–31 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:8723&r=all |
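A minimal sketch of why wellbeing losses redistribute relative to asset losses: weight each quintile's asset loss by its marginal utility of consumption. The CRRA curvature, consumption levels, and most loss shares below are illustrative (only the 9 percent bottom-quintile asset share and the US$1.4 billion total come from the abstract), so the output will not reproduce the paper's estimates:

```python
import numpy as np

consumption = np.array([2000.0, 4000, 6000, 9000, 15000])   # quintile means (toy)
asset_share = np.array([0.09, 0.13, 0.17, 0.25, 0.36])      # share of asset losses
total_asset_loss = 1.4e9                                    # US$ per year

gamma = 1.5                                                 # CRRA curvature (assumed)
mu = consumption ** -gamma                                  # marginal utility
weights = mu / mu.mean()                                    # relative to the average
wellbeing = asset_share * weights                           # utility-weighted losses
share_wb = wellbeing / wellbeing.sum()
print(f"wellbeing-equivalent loss: US${(wellbeing.sum() * total_asset_loss)/1e9:.1f} billion")
print("bottom quintile: asset share", asset_share[0],
      "-> wellbeing share", share_wb[0].round(2))
```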