NEP: New Economics Papers on Computational Economics
Issue of 2020‒06‒22
37 papers chosen by
By: | Bruno Bouchard (CEREMADE); Adil Reghai (CEREMADE); Benjamin Virrion (CEREMADE) |
Abstract: | We consider a multi-step algorithm for the computation of the historical expected shortfall as defined by the Basel Minimum Capital Requirements for Market Risk. At each step of the algorithm, we use Monte Carlo simulations to reduce the number of historical scenarios that potentially belong to the set of worst scenarios. The number of simulations increases as the number of candidate scenarios is reduced and the distance between them diminishes. For the most naive scheme, we show that the $L^p$-error of the estimator of the expected shortfall is bounded by a linear combination of the probabilities of inversion of favorable and unfavorable scenarios at each step, and of the last-step Monte Carlo error associated with each scenario. By using concentration inequalities, we then show that, for sub-gamma pricing errors, the probabilities of inversion converge at an exponential rate in the number of simulated paths. We then propose an adaptive version in which the algorithm improves step by step its knowledge of the unknown parameters of interest: the mean and variance of the Monte Carlo estimators of the different scenarios. Both schemes can be optimized by using dynamic programming algorithms that can be solved off-line. To our knowledge, these are the first non-asymptotic bounds for such estimators. Our hypotheses are weak enough to allow for the use of estimators for the different scenarios and steps based on the same random variables, which, in practice, considerably reduces the computational effort. First numerical tests are performed. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.12593&r=all |
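The multi-step scheme in the abstract above lends itself to a compact illustration. Below is a minimal Python sketch (not the authors' code): `price_loss` is a hypothetical Monte Carlo repricing stub, the path schedule is arbitrary, and the Basel-style ES is taken as the average of the few worst of 250 scenario losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def price_loss(scenario, n_paths):
    """Hypothetical Monte Carlo estimator of the portfolio loss under one
    historical scenario (stand-in for a real repricing engine)."""
    return scenario + rng.standard_normal(n_paths).mean()

def multistep_es(scenarios, n_worst=6, schedule=(100, 1_000, 10_000)):
    """Naive multi-step scheme: re-estimate losses with more paths while
    pruning scenarios unlikely to be among the n_worst."""
    candidates = np.arange(len(scenarios))
    for n_paths in schedule:
        est = np.array([price_loss(scenarios[i], n_paths) for i in candidates])
        # keep a shrinking buffer around the worst candidates
        keep = max(n_worst, len(candidates) // 2)
        candidates = candidates[np.argsort(est)[-keep:]]
    final = np.array([price_loss(scenarios[i], schedule[-1]) for i in candidates])
    return np.sort(final)[-n_worst:].mean()   # ES = mean of the worst losses

scenarios = rng.normal(0.0, 2.0, size=250)    # 250 historical scenarios
print(multistep_es(scenarios))
```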
By: | Christian Tilk (Johannes Gutenberg University); Katharina Olkis (Johannes Gutenberg University); Stefan Irnich (Johannes Gutenberg University) |
Abstract: | The ongoing rise in e-commerce is accompanied by an increasing number of first-time delivery failures due to the absence of the customer at the delivery location. Failed deliveries result in rework, which in turn has a large impact on the carriers’ delivery cost. In the classical vehicle routing problem (VRP) with time windows, each customer request has only one location and one time window describing where and when shipments need to be delivered. In contrast, we introduce and analyze the vehicle routing problem with delivery options (VRPDO), in which some requests can be shipped to alternative locations with possibly different time windows. Furthermore, customers may prefer some delivery options. The carrier must then select, for each request, one delivery option such that the carrier’s overall cost is minimized and a given service level regarding customer preferences is achieved. Moreover, when delivery options share a common location, e.g., a locker, capacities must be respected when assigning shipments. The VRPDO generalizes several known extensions of the VRP with time windows, e.g., the generalized VRP with time windows, the multi-vehicle traveling purchaser problem, and the VRP with roaming delivery locations. To solve the VRPDO exactly, we present a new branch-price-and-cut algorithm. The associated pricing subproblem is a shortest-path problem with resource constraints that we solve with a bidirectional labeling algorithm on an auxiliary network. We focus on the comparison of two alternative modeling approaches for the auxiliary network and present optimal solutions for instances with up to 100 delivery options. Moreover, we provide 17 new optimal solutions for the benchmark set for the VRP with roaming delivery locations. |
Date: | 2020–05–29 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:2017&r=all |
By: | Imanol Perez Arribas; Cristopher Salvi; Lukasz Szpruch |
Abstract: | Mathematical models, calibrated to data, have become ubiquitous in key decision processes in modern quantitative finance. In this work, we propose a novel framework for data-driven model selection by integrating a classical quantitative setup with a generative modelling approach. Leveraging the properties of the signature, a well-known path transform from stochastic analysis that has recently emerged as a leading machine learning technology for learning time-series data, we develop the Sig-SDE model. Sig-SDE provides a new perspective on neural SDEs and can be calibrated to exotic financial products that depend, in a non-linear way, on the whole trajectory of asset prices. Furthermore, our approach enables consistent calibration under both the pricing measure $\mathbb Q$ and the real-world measure $\mathbb P$. Finally, we demonstrate the ability of Sig-SDE to simulate future possible market scenarios needed for computing risk profiles or hedging strategies. Importantly, this new model is underpinned by rigorous mathematical analysis that, under appropriate conditions, provides theoretical guarantees for the convergence of the presented algorithms. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2006.00218&r=all |
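Since the signature transform is central to Sig-SDE, a small sketch may help. The depth-2 signature of a piecewise-linear path can be computed in a few lines of numpy; this is a generic illustration of the transform, not the Sig-SDE calibration itself.

```python
import numpy as np

def signature_level2(path):
    """Truncated (depth-2) signature of a piecewise-linear path.
    path: array of shape (n_points, d). Returns (level1, level2)."""
    dx = np.diff(path, axis=0)                # increments, shape (n-1, d)
    level1 = dx.sum(axis=0)                   # first iterated integrals
    cum = np.vstack([np.zeros(path.shape[1]), np.cumsum(dx, axis=0)[:-1]])
    # second iterated integrals for the piecewise-linear interpolation
    level2 = cum.T @ dx + 0.5 * dx.T @ dx     # shape (d, d)
    return level1, level2

t = np.linspace(0.0, 1.0, 100)
path = np.column_stack([t, np.sin(2 * np.pi * t)])   # toy 2-d path (time, price)
s1, s2 = signature_level2(path)
print(s1, s2[0, 1] - s2[1, 0])   # antisymmetric part = twice the Levy area
```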
By: | Xin Man; Ernest Chan |
Abstract: | Feature selection in machine learning is subject to the intrinsic randomness of the feature selection algorithms (for example, random permutations during MDA). Stability of selected features with respect to such randomness is essential to the human interpretability of a machine learning algorithm. We propose a rank-based stability metric called the instability index to compare the stabilities of three feature selection algorithms (MDA, LIME, and SHAP) as applied to random forests. Typically, features are selected by averaging many random iterations of a selection algorithm. Though we find that the variability of the selected features does decrease as the number of iterations increases, it does not go to zero, and the features selected by the three algorithms do not necessarily converge to the same set. We find LIME and SHAP to be more stable than MDA, and LIME is at least as stable as SHAP for the top-ranked features. Hence, overall, LIME is best suited for human interpretability. However, the selected set of features from all three algorithms significantly improves various predictive metrics out of sample, and their predictive performances do not differ significantly. Experiments were conducted on synthetic datasets, two public benchmark datasets, and proprietary data from an active investment strategy. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.12483&r=all |
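As a rough illustration of the stability question, the sketch below repeats a permutation-importance run (an MDA analogue) over several seeds and measures the spread of feature ranks. The `instability` proxy is a simple stand-in, not the paper's exact instability index.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
ranks = []
for seed in range(20):                        # repeat MDA with fresh randomness
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=5, random_state=seed)
    ranks.append(np.argsort(np.argsort(-imp.importances_mean)))  # 0 = top rank

ranks = np.array(ranks)
# simple proxy for a rank-based instability index: per-feature rank spread,
# averaged over features (0 = perfectly stable selection)
instability = ranks.std(axis=0).mean()
print(f"instability index (proxy): {instability:.3f}")
```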
By: | Jerzy Grobelny; Rafal Michalski |
Abstract: | Two simulation experiments were conducted to verify whether the virtual force scatter plot algorithm, used for searching for solutions to facility layout problems, may be used as an input to the classical CRAFT and simulated annealing (SA) algorithms. The proposed approach employs a regular grid for specifying possible locations of objects. Three independent variables were investigated in the first experiment, namely, (1) the size of the problem: 16, 36 and 64 objects, (2) the type of links between objects: grid, line, and loop, and (3) the shape of the possible places in which the objects can be situated: circle, row and square. The patterns of possible location places were also adapted to the analysis of examples taken from the literature, included in the second experiment. The gathered data were statistically analyzed. The results show a substantial decrease in goal function means for all of the examined experimental conditions if the proposed starting solutions are applied to the CRAFT algorithm. The application of the approach to SA is profitable in specific tasks. The presented comparative numerical results show in which circumstances the proposed method is superior to various genetic algorithms and other hybrid approaches. Overall, the experimental data investigation demonstrates the usefulness of the proposed method and encourages further research in this direction. |
Keywords: | Production layout; Human factors; Facility layout problem; Initial solutions; Scatter plots; Simulated annealing; Simulation experiments |
JEL: | C00 D24 L16 L23 L91 M11 |
Date: | 2020–08–19 |
URL: | http://d.repec.org/n?u=RePEc:ahh:wpaper:worms2012&r=all |
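For readers unfamiliar with the setup, the following toy sketch runs plain simulated annealing on a 16-object grid layout with a Manhattan-distance goal function; the random starting permutation is where a virtual-force scatter-plot solution would be plugged in. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                                       # 16 objects on a 4 x 4 grid
coords = np.array([(i // 4, i % 4) for i in range(n)], dtype=float)
links = np.triu(rng.integers(0, 2, size=(n, n)), 1)   # random link matrix

def cost(perm):
    """Weighted sum of Manhattan distances between linked objects."""
    pos = coords[perm]
    d = np.abs(pos[:, None, :] - pos[None, :, :]).sum(-1)
    return (links * d).sum()

perm = rng.permutation(n)                    # in practice: a VFSP-style start
temp, initial = 10.0, cost(perm)
for _ in range(20_000):                      # plain simulated annealing
    i, j = rng.integers(0, n, 2)
    cand = perm.copy()
    cand[i], cand[j] = cand[j], cand[i]      # swap two object positions
    delta = cost(cand) - cost(perm)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        perm = cand
    temp *= 0.9995                           # geometric cooling
print(cost(perm), "vs initial", initial)
```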
By: | Jesús Fernández-Villaverde (University of Pennsylvania, NBER and CEPR); Samuel Hurtado (Banco de España); Galo Nuño (Banco de España) |
Abstract: | We postulate a nonlinear DSGE model with a financial sector and heterogeneous households. In our model, the interaction between the supply of bonds by the financial sector and the precautionary demand for bonds by households produces significant endogenous aggregate risk. This risk induces an endogenous regime-switching process for output, the risk-free rate, excess returns, debt, and leverage. The regime-switching generates i) multimodal distributions of the variables above; ii) time-varying levels of volatility and skewness for the same variables; and iii) supercycles of borrowing and deleveraging. All of these are important properties of the data. In comparison, the representative household version of the model cannot generate any of these features. Methodologically, we discuss how nonlinear DSGE models with heterogeneous agents can be efficiently computed using machine learning and how they can be estimated with a likelihood function, using inference with diffusions. |
Keywords: | heterogeneous agents, wealth distribution, financial frictions, continuous time, machine learning, neural networks, structural estimation, likelihood function |
JEL: | C45 C63 E32 E44 G01 G11 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:2013&r=all |
By: | Glyn Wittwer (Centre of Policy Studies, Victoria University, Australia); Kym Anderson (Wine Economics Research Centre, School of Economics, University of Adelaide, Australia, and Arndt-Corden Dept of Economics, Australian National University, Canberra ACT 2601, Australia) |
Abstract: | This paper describes a new empirical model of the world’s markets for alcoholic beverages and, to illustrate its usefulness, reports results from projections of those markets from 2016–18 to 2025 under various scenarios. It not only revises and updates a model of the world’s wine markets (Wittwer, Berger and Anderson, 2003) but also adds beer and spirits so as to capture the substitutability of those beverages among consumers. The model has some of the features of an economywide computable general equilibrium model, with international trade linking the markets of its 44 countries and seven residual regions. It is used to simulate prospects for these markets by 2025 (business-as-usual), which points to Asia’s rise. Then two alternative scenarios to 2025 are explored: one simulates the withdrawal of the United Kingdom from the European Union (EU); the other simulates the effects of the recent imposition of additional 25% tariffs on selected beverages imported by the United States from several EU member countries. Future applications of the model are discussed in the concluding section. |
Keywords: | CGE modeling; wine; beer; spirits; changes in beverage preferences; international trade in beverages; premiumization of alcohol markets |
JEL: | C53 F11 F17 Q13 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:adl:winewp:2019-05&r=all |
By: | Riccardo Doyle |
Abstract: | Interbank contagion can theoretically exacerbate losses in a financial system and lead to additional cascade defaults during a downturn. In this paper we produce default analysis using both regression and neural network models to verify whether interbank contagion offers any predictive explanatory power on default events. We predict defaults of U.S.-domiciled commercial banks in the first quarter of 2010 using data from the preceding four quarters. A number of established predictors (such as Tier 1 Capital Ratio and Return on Equity) are included alongside contagion to gauge whether the latter adds significance. Based on this methodology, we conclude that interbank contagion is highly explanatory in default prediction, often outperforming more established metrics, in both regression and neural network models. These findings have sizeable implications for the future use of interbank contagion as a variable of interest for stress testing, bank-issued bond valuation, and wider bank default prediction. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.12619&r=all |
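A minimal sketch of the regression leg of such a study is given below, with synthetic banks and a made-up `contagion` feature standing in for an exposure-weighted measure of neighbour distress; it only illustrates how contagion enters alongside Tier 1 and ROE, not the paper's data or results.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2_000                                     # synthetic banks (illustration only)
df = pd.DataFrame({
    "tier1_ratio": rng.normal(12, 3, n),      # established predictors
    "roe": rng.normal(5, 8, n),
    "contagion": rng.exponential(1.0, n),     # hypothetical exposure-weighted
})                                            #   distress in the interbank network
logit = -0.3 * df["tier1_ratio"] + 0.02 * df["roe"] + 1.5 * df["contagion"] + 1.0
df["default"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="default"),
                                          df["default"], random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
print(dict(zip(X_tr.columns, model.coef_[0])))
print("out-of-sample accuracy:", model.score(X_te, y_te))
```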
By: | Xie, Tian (Shanghai University of Finance and Economics); Yu, Jun (School of Economics, Singapore Management University); Zeng, Tao (Zhejiang University) |
Abstract: | The data market has been growing at an exceptional pace. Consequently, more sophisticated strategies to conduct economic forecasts have been introduced with machine learning techniques. Does machine learning pose a threat to conventional econometric methods in terms of forecasting? Moreover, does machine learning present great opportunities to cross-fertilize the field of econometric forecasting? In this report, we develop a pedagogical framework that identifies complementarity and bridges between the two strands of literature. Existing econometric methods and machine learning techniques for economic forecasting are reviewed and compared. The advantages and disadvantages of these two classes of methods are discussed. A class of hybrid methods that combine conventional econometrics and machine learning are introduced. New directions for integrating the above two are suggested. The out-of-sample performance of alternatives is compared when they are employed to forecast the Chicago Board Options Exchange Volatility Index and the harmonized index of consumer prices for the euro area. In the first exercise, econometric methods seem to work better, whereas machine learning methods generally dominate in the second empirical application. |
Date: | 2020–05–30 |
URL: | http://d.repec.org/n?u=RePEc:ris:smuesw:2020_016&r=all |
By: | Grilli, Luca; Santoro, Domenico |
Abstract: | In this paper we demonstrate how it is possible to improve price forecasts by using Boltzmann entropy alongside classic financial indicators as inputs to neural networks. In particular, we show how the scope of entropy can be extended from cryptocurrencies to equities, and how this type of architecture highlights the link between the indicators and the information they are able to contain. |
Keywords: | Neural Network; Price Forecasting; LSTM; Entropy |
JEL: | C45 E37 F17 G17 |
Date: | 2020–05–22 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:100578&r=all |
By: | Calypso Herrera; Florian Krach; Josef Teichmann |
Abstract: | Continuous stochastic processes are widely used to model time series that exhibit random behaviour. Predictions of the stochastic process can be computed via the conditional expectation given the current information. To this end, we introduce the controlled ODE-RNN, which provides a data-driven approach to learning the conditional expectation of a stochastic process. Our approach extends the ODE-RNN framework, which models the latent state of a recurrent neural network (RNN) between two observations with a neural ordinary differential equation (neural ODE). We show that controlled ODEs provide a general framework which can, in particular, describe the ODE-RNN, combining in a single equation the continuous neural ODE part with the jumps introduced by the RNN. We demonstrate the predictive capabilities of this model by proving that, under some regularity assumptions, the output process converges to the conditional expectation process. |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2006.04727&r=all |
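The jump-diffusion structure of an ODE-RNN is easy to sketch: the latent state follows a (here Euler-discretised) ODE between observation times and jumps through an RNN-style update at each observation. The weights below are random and untrained; this is a structural illustration only, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)
d_h = 8                                       # latent dimension (assumption)
W_ode = rng.normal(0, 0.3, (d_h, d_h))        # toy, untrained weights
W_jump = rng.normal(0, 0.3, (d_h, d_h + 1))

def ode_rnn(obs_times, obs_values, dt=0.01):
    """Sketch of an ODE-RNN: the latent state h follows an Euler-discretised
    neural ODE between observations and jumps via an RNN cell at each one."""
    h, t, states = np.zeros(d_h), 0.0, []
    for t_obs, x in zip(obs_times, obs_values):
        while t < t_obs:                      # continuous part: dh = f(h) dt
            h = h + dt * np.tanh(W_ode @ h)
            t += dt
        h = np.tanh(W_jump @ np.append(h, x)) # jump part: RNN update with obs
        states.append(h.copy())
    return np.array(states)

times = np.array([0.1, 0.25, 0.6, 0.9])       # irregularly spaced observations
values = np.sin(10 * times)
print(ode_rnn(times, values).shape)
```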
By: | Susan Namirembe Kavuma; Christine Byaruhanga; Nicholas Musoke; Patrick Loke; Michael Noble; Gemma Wright |
Abstract: | The distributional analysis of consumption taxes is useful for establishing the welfare impact of tax policy. This paper uses the UGAMOD microsimulation model to establish the tax incidence and welfare impact of excise duty in Uganda. The results reveal that households in the top deciles pay more in excise duty as a percentage of their consumption than households in the bottom deciles. Post-fiscal consumption is almost the same as pre-fiscal consumption for the first seven deciles, but there is a sharp reduction in post-fiscal consumption in the tenth decile. |
Keywords: | excise duty, microsimulation, poverty, Uganda |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:unu:wpaper:wp-2020-70&r=all |
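The incidence calculation behind such results is straightforward to sketch with pandas on synthetic data (UGAMOD itself runs on real survey microdata, and the rising excisable share assumed below is purely illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
# synthetic household budget survey (stand-in for the real microdata)
hh = pd.DataFrame({"consumption": rng.lognormal(8.0, 1.0, 5_000)})
# illustrative excisable spending share that rises with total consumption
excisable_share = 0.05 * hh["consumption"].rank(pct=True)
hh["excise_paid"] = hh["consumption"] * excisable_share * 0.20   # 20% duty

hh["decile"] = pd.qcut(hh["consumption"], 10, labels=range(1, 11))
by_dec = hh.groupby("decile", observed=True)
incidence = by_dec["excise_paid"].sum() / by_dec["consumption"].sum()
print(incidence)          # duty as a share of consumption, by decile
```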
By: | Dhruv Sharma; Jean-Philippe Bouchaud; Marco Tarzia; Francesco Zamponi |
Abstract: | We introduce a prototype agent-based model of the macroeconomy, with a budgetary constraint at its core. The model is related to a class of constraint satisfaction problems, which has been thoroughly investigated in computer science. We identify three different regimes of our toy economy upon varying the amount of debt that each agent can accumulate before defaulting. In the presence of a very loose constraint on debt, endogenous crises leading to waves of synchronized bankruptcies are present. In the opposite regime of very tight debt constraints, the bankruptcy rate is extremely high and the economy remains structureless. In an intermediate regime, the economy is stable, with a very low bankruptcy rate and no aggregate-level crises. This third regime displays a rich phenomenology: the system spontaneously and dynamically self-organizes into a set of cheap and expensive goods (i.e., a kind of "speciation"), with switches triggered by random fluctuations and feedback loops. Our analysis confirms the central role that debt levels play in the stability of the economy. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.11748&r=all |
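A stripped-down sketch of the debt-ceiling mechanism is shown below. It reproduces only the bankruptcy-rate-versus-ceiling axis of the paper's three regimes; the endogenous synchronized crises require the full model with goods, prices and budget constraints.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(debt_ceiling, n_agents=500, t_max=2_000):
    """Toy economy: each period agents receive a random net income shock;
    crossing the debt ceiling forces default (wealth reset)."""
    wealth = np.zeros(n_agents)
    bankruptcies = np.zeros(t_max)
    for t in range(t_max):
        wealth += rng.normal(0.0, 1.0, n_agents)      # net income shock
        defaulted = wealth < -debt_ceiling
        bankruptcies[t] = defaulted.mean()
        wealth[defaulted] = 0.0                        # fresh start after default
    return bankruptcies.mean()

for ceiling in (0.5, 5.0, 50.0):          # tight / intermediate / loose regimes
    print(f"debt ceiling {ceiling:5.1f} -> bankruptcy rate {simulate(ceiling):.3f}")
```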
By: | Beatriz Salvador; Cornelis W. Oosterlee; Remco van der Meer |
Abstract: | Artificial neural networks (ANNs) have recently also been applied to solve partial differential equations (PDEs). In this work, the classical problem of pricing European and American financial options, based on the corresponding PDE formulations, is studied. Instead of using numerical techniques based on finite element or difference methods, we address the problem using ANNs in the context of unsupervised learning. As a result, the ANN learns the option values for all possible underlying stock values at future time points, based on the minimization of a suitable loss function. For the European option, we solve the linear Black-Scholes equation, whereas for the American option, we solve the linear complementarity problem formulation. Two-asset exotic option values are also computed, since ANNs enable the accurate valuation of high-dimensional options. The resulting errors of the ANN approach are assessed by comparing to the analytic option values or to numerical reference solutions (for American options, computed by finite elements). |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.12059&r=all |
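The unsupervised, PDE-residual-based training the authors describe is in the spirit of physics-informed networks. Below is a minimal PyTorch sketch for the European case: a feed-forward net trained on the Black-Scholes residual plus the terminal payoff. The architecture, sampling ranges and loss weights are assumptions, and spatial boundary terms are omitted for brevity.

```python
import torch

torch.manual_seed(0)
r, sigma, K, T = 0.05, 0.2, 1.0, 1.0          # illustrative parameters

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2_000):
    t = torch.rand(256, 1, requires_grad=True) * T
    s = torch.rand(256, 1, requires_grad=True) * 3 * K
    v = net(torch.cat([t, s], dim=1))
    v_t, = torch.autograd.grad(v.sum(), t, create_graph=True)
    v_s, = torch.autograd.grad(v.sum(), s, create_graph=True)
    v_ss, = torch.autograd.grad(v_s.sum(), s, create_graph=True)
    # Black-Scholes residual: V_t + 0.5 s^2 sigma^2 V_ss + r s V_s - r V = 0
    pde = v_t + 0.5 * sigma**2 * s**2 * v_ss + r * s * v_s - r * v
    s_T = torch.rand(256, 1) * 3 * K          # enforce the terminal payoff
    payoff = torch.relu(s_T - K)
    v_T = net(torch.cat([torch.full_like(s_T, T), s_T], dim=1))
    loss = (pde**2).mean() + ((v_T - payoff)**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))   # after training, net(t, s) approximates the option value
```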
By: | Corral Rodas, Paul Andres; Molina, Isabel; Nguyen, Minh Cong |
Abstract: | After almost two decades of poverty maps produced by the World Bank and multiple advances in the literature, this paper presents a methodological update to the World Bank's toolkit for small area estimation. The paper reviews the computational procedures of the current methods used by the World Bank: the traditional approach by Elbers, Lanjouw and Lanjouw (2003) and the Empirical Best/Bayes (EB) addition introduced by Van der Weide (2014). The addition extends the EB procedure of Molina and Rao (2010) by considering heteroscedasticity and includes survey weights, but uses a different bootstrap approach, here referred to as the clustered bootstrap. Simulation experiments comparing these methods to the original EB approach of Molina and Rao (2010) provide empirical evidence of the shortcomings of the clustered bootstrap approach, which yields biased point estimates. The main contributions of this paper are then twofold: 1) to adapt the original Monte Carlo simulation procedure of Molina and Rao (2010) for the approximation of the extended EB estimators that include heteroscedasticity and survey weights as in Van der Weide (2014); and 2) to adapt the parametric bootstrap approach for mean squared error (MSE) estimation considered by Molina and Rao (2010), and proposed originally by González-Manteiga et al. (2008), to these extended EB estimators. Simulation experiments illustrate that the revised Monte Carlo simulation method yields estimators that are considerably less biased and more efficient in terms of MSE than those obtained from the clustered bootstrap approach, and that the parametric bootstrap MSE estimators are in line with the true MSEs under realistic scenarios. |
Keywords: | Inequality, De Facto Governments, Public Sector Administrative & Civil Service Reform, Public Sector Administrative and Civil Service Reform, Economics and Finance of Public Institution Development, State Owned Enterprise Reform, Democratic Government, Employment and Unemployment, Labor & Employment Law, Adaptation to Climate Change |
Date: | 2020–05–21 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:9256&r=all |
By: | Jerzy Grobelny; Rafal Michalski |
Abstract: | The paper presents simulation experiment results regarding properties of linguistic-pattern-based simulated annealing used for solving facility layout problems in logistics. In the article, we investigate four different arrangements (02 × 18, 03 × 12, 04 × 09, and 06 × 06) comprising 36 items. The examined layouts also differ in the link matrix density (20%, 40%, and 60%) and in the definition of the distance between object pairs for the distance membership function (absolute and relative). We formally examine how these factors influence corrected mean truth values and average classical goal function values based on the Manhattan distance metric. The results generally revealed a significant influence of all of the studied effects on the analyzed dependent variables. Some of the findings, however, were surprising and confirmed previous outcomes showing that the linguistic pattern approach is not a simple extension of classic simulated annealing. |
Keywords: | Facilities layout; Optimization; Linguistic variables; Logistics; Fuzzy sets |
JEL: | C00 D24 L16 L23 L91 M11 |
Date: | 2018–09–15 |
URL: | http://d.repec.org/n?u=RePEc:ahh:wpaper:worms1809&r=all |
By: | ITF |
Abstract: | This report examines how new shared services could change mobility in Lyon, France. It presents simulations for five different scenarios in which different shared transport options replace privately owned cars in the Lyon metropolitan area. The simulations offer insights on how shared mobility can reduce congestion, lower CO2 emissions and free public space. The analysis also looks at quality of service, cost and citizens’ access to opportunities. The interaction of shared mobility services with mass public transport and optimal operational conditions for the transition are also examined. The findings provide decision makers with evidence to weigh opportunities and challenges created by new shared transport services. The report is part of a series of studies on shared mobility in different urban and metropolitan contexts. |
Date: | 2020–04–07 |
URL: | http://d.repec.org/n?u=RePEc:oec:itfaac:74-en&r=all |
By: | Giovanni Dosi (LEM - Laboratory of Economics and Management - Sant'Anna School of Advanced Studies); Mauro Napoletano (OFCE - Observatoire français des conjonctures économiques - Sciences Po - Sciences Po); Andrea Roventini; Tania Treibich (OFCE - Observatoire français des conjonctures économiques - Sciences Po - Sciences Po) |
Abstract: | In this work we study the granular origins of business cycles and their possible underlying drivers. As shown by Gabaix (Econometrica 79:733–772, 2011), the skewed nature of firm size distributions implies that idiosyncratic (and independent) firm-level shocks may account for a significant portion of aggregate volatility. Yet, we question the original view grounded in "supply granularity", as proxied by productivity growth shocks (in line with the Real Business Cycle framework), and we provide empirical evidence of a "demand granularity", based on investment growth shocks instead. The role of demand in explaining aggregate fluctuations is further corroborated by means of a macroeconomic Agent-Based Model of the "Schumpeter meeting Keynes" family (Dosi et al., J Econ Dyn Control 52:166–189, 2015). Indeed, the investigation of the possible microfoundation of RBC has led us to the identification of a sort of microfounded Keynesian multiplier. |
Keywords: | Business cycles,Granular residual,Granularity hypothesis,Agent-based models,Firm dynamics,Productivity growth,Investment growth |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02557845&r=all |
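Gabaix's granular residual, which the abstract builds on, can be computed in a few lines; the sketch below uses synthetic fat-tailed firm sizes, and the shock series could equally be productivity growth (the RBC view) or investment growth (the paper's demand view).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n_firms, n_years, K = 100, 20, 10
sales = pd.DataFrame(rng.pareto(1.1, (n_years, n_firms)) + 1)   # fat-tailed sizes
shocks = pd.DataFrame(rng.normal(0, 0.05, (n_years, n_firms)))  # firm-level growth

def granular_residual(t):
    """Size-weighted sum of demeaned idiosyncratic shocks of the K largest
    firms (Gabaix 2011)."""
    top = sales.iloc[t - 1].nlargest(K).index
    share = sales.iloc[t - 1][top] / sales.iloc[t - 1].sum()
    return float((share * (shocks.iloc[t][top] - shocks.iloc[t].mean())).sum())

gamma = pd.Series([granular_residual(t) for t in range(1, n_years)])
print(gamma.describe())   # regress aggregate growth on gamma to gauge its role
```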
By: | Sudhanshu Pani |
Abstract: | The tâtonnement process in high-frequency order-driven markets is modeled as a search by buyers for sellers and vice versa. We propose a total order book model, comprising limit orders and latent orders, in the absence of a market maker. A zero-intelligence approach to agents is employed, using a diffusion-drift-reaction model to explain trading through continuous auctions (price and volume). The search (Lévy or Brownian) for the transaction price is the primary diffusion mechanism, with other behavioural dynamics in the model inspired by foraging, chemotaxis and robotic search. Analytic and asymptotic analysis is provided for several scenarios and examples. Numerical simulation of the model extends our understanding of the relative performance of Brownian, superdiffusive and ballistic search in the model. |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2006.00775&r=all |
By: | Munisamy Gopinath; Feras A. Batarseh; Jayson Beckman |
Abstract: | Predicting agricultural trade patterns is critical to decision making in the public and private domains, especially in the current context of trade disputes among major economies. Focusing on seven major agricultural commodities with a long history of trade, this study employed supervised and unsupervised machine learning (ML) techniques to decipher patterns of trade. The supervised (unsupervised) ML techniques were trained on data until 2010 (2014), and projections were made for 2011-2016 (2014-2020). Results show the high relevance of ML models for predicting trade patterns in the near and long term relative to traditional approaches, which are often subjective assessments or time-series projections. While supervised ML techniques quantified key economic factors underlying agricultural trade flows, unsupervised approaches provide better fits over the long term. |
JEL: | C45 F14 Q17 |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:27151&r=all |
By: | Athey, Susan (Stanford U); Bryan, Kevin (U of Toronto); Gans, Joshua S. (U of Toronto) |
Abstract: | The allocation of decision authority by a principal to either a human agent or an artificial intelligence (AI) is examined. The principal trades off an AI's more aligned choice against the need to motivate the human agent to expend effort in learning choice payoffs. When agent effort is desired, it is shown that the principal is more likely to give that agent decision authority, reduce investment in AI reliability, and adopt an AI that may be biased. Organizational design considerations are likely to impact how AIs are trained. |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:ecl:stabus:3856&r=all |
By: | Federico Bassi (Centre de recherche en économie de l’Université Paris Nord (CEPN) and Université Sorbonne Paris Nord); Tom Bauermann (Ruhr-University Bochum and Ruhr-Graduate School in Economics); Dany Lang (Université Sorbonne Paris Nord); Mark Setterfield (Department of Economics, New School for Social Research) |
Abstract: | Post Keynesian macrodynamic models make various assumptions about the normal rate of capacity utilization. Those rooted in the Classical and neo-Keynesian traditions assume the normal rate is fixed, whereas Kaleckian models treat it as a variable that is endogenous to the actual rate of capacity utilization. This paper contributes to the debate about the normal rate of capacity utilization by developing a model of strong or genuine hysteresis, in which firms make discrete decisions about the normal rate depending on the degree of uncertainty about demand conditions. An agent-based model based on empirical analysis of 25 sectors of the US economy is used to show that hysteresis can cause variation in the normal rate of capacity utilization within a subset of the range of observed variation in the actual capacity utilization rate. This suggests that the economy exhibits both constancy and (endogenous) variability in the normal rate of utilization over different ranges of variation in the actual rate. More broadly speaking, the genuine hysteresis model is shown to provide the basis for a synthesis of Post Keynesian macrodynamics that draws on both the Classical/neo-Keynesian and Kaleckian modeling traditions. |
Keywords: | Normal rate of capacity utilization, Harrodian instability, genuine hysteresis, Kaleckian growth theory |
JEL: | C63 E11 E12 L6 L7 L9 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:new:wpaper:2007&r=all |
By: | Jim Malley; Ulrich Woitek |
Abstract: | To better understand the quantitative implications of human capital externalities at the aggregate level, we estimate a two-sector endogenous growth model with knowledge spill-overs. To achieve this, we account for trend growth in a model consistent fashion and employ a Markov-chain Monte-Carlo (MCMC) algorithm to estimate the model’s posterior parameter distributions. Using U.S. quarterly data from 1964-2017, we find significant positive externalities to aggregate human capital. Our analysis further shows that eliminating this market failure leads to sizeable increases in education-time, endogenous growth and aggregate welfare. |
Keywords: | Human capital externalities, endogenous growth, Bayesian estimation |
JEL: | C11 C52 E32 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:gla:glaewp:2019_04&r=all |
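A random-walk Metropolis step, the core of the MCMC machinery mentioned above, fits in a few lines. The toy log posterior below stands in for the model's actual likelihood (which would come from, e.g., a Kalman filter on the linearised DSGE); everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(1.5, 1.0, 200)              # stand-in for model-implied series

def log_posterior(theta):
    """Toy log posterior: N(theta, 1) likelihood with a N(0, 10^2) prior."""
    return -0.5 * ((data - theta) ** 2).sum() - 0.5 * theta**2 / 100.0

draws, theta, lp = [], 0.0, log_posterior(0.0)
for _ in range(20_000):                       # random-walk Metropolis
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
        theta, lp = prop, lp_prop
    draws.append(theta)

posterior = np.array(draws[5_000:])           # discard burn-in
print(posterior.mean(), posterior.std())
```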
By: | Ka Wai Tsang; Zhaoyi He |
Abstract: | This paper introduces a new functional optimization approach to portfolio optimization problems by treating the unknown weight vector as a function of past values, instead of treating it as a vector of fixed unknown coefficients as in the majority of studies. We first show that the optimal solution, in general, is not a constant function. We give the optimality conditions for a vector function to be the solution, and hence give the conditions for a plug-in solution (replacing the unknown mean and variance by certain estimates based on past values) to be optimal. After showing that the plug-in solutions are sub-optimal in general, we propose gradient-ascent algorithms to solve the functional optimization for mean-variance portfolio management, with convergence theorems provided. Simulations and empirical studies show that our approach can perform significantly better than the plug-in approach. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.12774&r=all |
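To make the "weights as a function of past values" idea concrete, the sketch below parameterises the weight function with a softmax over a linear read-out of the trailing return window and climbs a sample mean-variance objective by finite-difference gradient ascent. The paper's actual algorithms and parameterisation are not reproduced here; all choices below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
T, d, window = 500, 3, 5
returns = rng.normal(0.001, 0.02, (T, d))     # synthetic asset returns
gamma = 5.0                                   # risk aversion

def weights(theta, past):
    """Weight *function* of past returns (not a fixed coefficient vector)."""
    z = theta @ past.ravel()
    return np.exp(z) / np.exp(z).sum()        # softmax keeps weights on the simplex

def objective(theta):
    pnl = np.array([weights(theta, returns[t - window:t]) @ returns[t]
                    for t in range(window, T)])
    return pnl.mean() - 0.5 * gamma * pnl.var()   # mean-variance utility

theta = rng.normal(0, 0.1, (d, window * d))
for _ in range(100):                          # gradient ascent, finite differences
    grad = np.zeros_like(theta)
    for idx in np.ndindex(theta.shape):
        e = np.zeros_like(theta)
        e[idx] = 1e-4
        grad[idx] = (objective(theta + e) - objective(theta - e)) / 2e-4
    theta += 0.5 * grad
print(objective(theta))
```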
By: | Giuseppe Bruno; Hiren Jani; Rafael Schmidt; Bruno Tissot |
Date: | 2020–04–24 |
URL: | http://d.repec.org/n?u=RePEc:bis:bisifr:11&r=all |
By: | Hassan, Tarek Alexander; Hollander, Stephan; Tahoun, Ahmed; van Lent, Laurence |
Abstract: | Using tools described in our earlier work Hassan et al. (2019,2020), we develop text-based measures of the costs, benefits, and risks listed firms in the US and over 80 other countries associate with the spread of Covid-19 and other epidemic diseases. We identify which firms expect to gain or lose from an epidemic disease and which are most affected by the associated uncertainty as a disease spreads in a region or around the world. As Covid-19 spreads globally in the first quarter of 2020, we find that firms' primary concerns relate to the collapse of demand, increased uncertainty, and disruption in supply chains. Other important concerns relate to capacity reductions, closures, and employee welfare. By contrast, financing concerns are mentioned relatively rarely. We also identify some firms that foresee opportunities in new or disrupted markets due to the spread of the disease. Finally, we find some evidence that firms that have experience with SARS or H1N1 have more positive expectations about their ability to deal with the coronavirus outbreak. |
Keywords: | Epidemic diseases; exposure; firms; Machine Learning; Pandemic; sentiment; uncertainty; virus |
JEL: | D22 E0 F0 G15 I15 I18 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:14573&r=all |
By: | Evita Mailli; Peter A. Xepapadeas; Phoebe Koundouri |
Abstract: | This chapter describes a decision support tool that was designed and developed for the MERMAID project (EU-FP7), which, as indicated earlier in this book, developed concepts for next-generation offshore platforms for multi-use of ocean space for energy extraction, aquaculture and platform-related transport. Specifically, the project evaluated the potential and challenges of building multi-use offshore platforms (MUOPs). The MERMAID project considers four offshore study sites for multi-use offshore platforms: an Atlantic Ocean site, a Wadden/North Sea site, a Baltic Sea site, and a Mediterranean Sea site. Each site is considered in terms of its available resources and unique features. This tool was part of the framework for assessing the socio-economic impact of MUOPs and, as such, utilized state-of-the-art web and data analytics technologies in order to provide researchers with a framework for evaluating the feasibility and potential of each MUOP's proposed design and location. |
Keywords: | MERMAID, multi-use offshore platforms, socio-economic assessment, energy extraction, aquaculture, transport, web-based tool |
Date: | 2020–05–30 |
URL: | http://d.repec.org/n?u=RePEc:aue:wpaper:2022&r=all |
By: | Maximilian Gobel; Tanya Araújo |
Abstract: | The determination of reliable early-warning indicators of economic crises is a hot topic in the economic sciences. Pinning down recurring patterns or combinations of macroeconomic indicators is indispensable for adequate policy adjustments to prevent a looming crisis. We investigate the ability of several macroeconomic variables to tell crisis countries apart from non-crisis economies. We introduce a self-calibrated clustering algorithm, which accounts for both similarity and dissimilarity in macroeconomic fundamentals across countries. Furthermore, imposing a desired community structure, we allow the data to decide by itself which combination of indicators would have most accurately foreseen the exogenously defined network topology. We quantitatively evaluate the degree of matching between the data-generated clustering and the desired community structure. |
Keywords: | Early-Warning Models, Crisis Prediction, Macroeconomic Dynamics, Network Analysis, Community Structure, Great Recession, Clustering Algorithm |
JEL: | C38 G01 C52 |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:ise:remwps:wp01282020&r=all |
By: | Junyi Li; Xitong Wang; Yaoyang Lin; Arunesh Sinha; Michael P. Wellman |
Abstract: | We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks (GANs). Our Stock-GAN model employs a conditional Wasserstein GAN to capture the history dependence of orders. The generator design includes specially crafted aspects, including components that approximate the market's auction mechanism and an augmentation of the order history with order-book constructions to improve the generation task. We perform an ablation study to verify the usefulness of aspects of our network structure. We provide a mathematical characterization of the distribution learned by the generator. We also propose statistics to measure the quality of generated orders. We test our approach with synthetic and actual market data, compare it to many baseline generative models, and find the generated data to be close to real data. |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2006.04212&r=all |
By: | Aymeric Ricome (JRC - European Commission - Joint Research Centre [Seville]); Kamel Louhichi (ECO-PUB - Economie Publique - AgroParisTech - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Sergio Gomez-Y-Paloma (JRC - European Commission - Joint Research Centre [Seville]) |
Abstract: | The government of Tanzania is willing to improve the socio-economic environment for the farming sector to encourage farmers to produce (and sell) more products from their activities. To that end, the central government is reforming the local tax system and particularly the agricultural produce cess, which is a turnover tax on marketed agricultural products charged by local government authorities (LGAs) at a maximum of 5% of the farm-gate price. Although it constitutes a significant source of revenue for many LGAs, this tax restricts an increase in production by farmers, and thus improvement of their livelihoods. In 2017, the government reduced the maximum cess rate from 5% to 3%. However, this reduction seems insufficient according to stakeholders, and several options to further reduce the rate are currently under discussion by the government. This report provides an ex ante impact assessment of the main reform options, using a microeconomic simulation model called FSSIM-Dev (Farming System Simulator for Developing Countries). Based on positive mathematical programming, this model was applied to a representative sample of 3,134 farm households spread throughout the country, taken from the World Bank LSMS–ISA surveys. Simulation results show that reduction of the cess rate leads to greater intensification and an increase in farm income, ranging between +2% and +21% depending on options and regions. The largest positive impacts are observed in the Northern and Western highlands. As expected, large farms and farms specialized in cash crops tend to gain more from the reduction in cess. At the individual farm household level, the impact is modest: 95% of the farms will experience an income increase of less than 10%. The impact on food security and rural poverty reduction is quite limited (improvement is less than 2%). Finally, the results show that a uniform cess rate of 1% for all crops seems to be the most efficient policy option. |
Abstract: | This report presents the results of an impact analysis of several reform options for the tax on agricultural products in Tanzania. This is a turnover tax on marketed agricultural products, collected by local government authorities (LGAs) at a maximum rate of 5% of the producer price. Although it constitutes a significant source of revenue for many LGAs, this tax hinders the growth of agricultural production, and thus the improvement of farmers' livelihoods. In 2017, the government reduced the maximum rate from 5% to 3%. However, this reduction appears insufficient according to stakeholders, and several options to reduce this rate further are currently under study by the government. The analysis is carried out using a microeconomic model applied to a representative sample of 3,134 farm households spread across the country, drawn from the World Bank's LSMS-ISA surveys. The potential effects of the simulated reform options on land use, production, input use, farm income, local government revenues and selected food-security indicators are presented and discussed in this report. |
Keywords: | agrarian reform, agricultural levy, agricultural policy, agricultural production, agricultural production policy, agricultural tax, economic analysis, economic consequence, farm household, farm household model, farm income, food security, impact analysis, land use, local government, LSMS-ISA, research report, Tanzania |
Date: | 2020–04–06 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02535711&r=all |
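The accounting core of such a simulation is easy to sketch. The static example below (with made-up households, and ignoring the supply response that FSSIM-Dev captures) only shows how farm income and LGA revenue move mechanically as the cess rate is cut.

```python
import pandas as pd

# Illustrative farm households (the paper uses 3,134 from the LSMS-ISA surveys)
farms = pd.DataFrame({
    "marketed_value": [1_200.0, 450.0, 8_000.0, 2_300.0],   # farm-gate sales
    "other_income":   [300.0, 600.0, 1_000.0, 200.0],
})

def net_income(cess_rate):
    """Static accounting: cess is a turnover tax on marketed production."""
    cess = cess_rate * farms["marketed_value"]
    return farms["marketed_value"] - cess + farms["other_income"], cess

base, _ = net_income(0.03)                    # current 3% maximum rate
for rate in (0.02, 0.01, 0.0):                # reform options under discussion
    income, cess = net_income(rate)
    print(f"cess {rate:.0%}: farm income {((income / base - 1) * 100).mean():+.1f}%,"
          f" LGA revenue {cess.sum():,.0f}")
```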
By: | Oguzhan Cepni (Central Bank of the Republic of Turkey, Haci Bayram Mah. Istiklal Cad. No:10 06050, Ankara, Turkey); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, 0002, South Africa); Yigit Onay (Central Bank of the Republic of Turkey, Haci Bayram Mah. Istiklal Cad. No:10 06050, Ankara, Turkey) |
Abstract: | This paper analyzes the predictive ability of aggregate and dis-aggregated proxies of investor sentiment, over and above standard macroeconomic predictors, in forecasting housing returns in China, using an array of machine learning models. Using a monthly out-of-sample period of 2011:01 to 2018:12, given an in-sample period of 2006:01-2010:12, we find that the new aligned investor sentiment index proposed in this paper indeed has greater predictive power for housing returns than the principal component analysis (PCA)-based sentiment index used earlier in the literature. Moreover, shrinkage models utilizing the dis-aggregated sentiment proxies do not result in forecast improvement, indicating that the aligned sentiment index optimally exploits the information in the dis-aggregated proxies of investor sentiment. Furthermore, when we let the machine learning models choose from all key control variables and the aligned sentiment index, forecasting accuracy is improved at all horizons, rather than just the short run as witnessed under standard predictive regressions. This result suggests that machine learning methods are flexible enough to capture both structural change and time-varying information in a set of predictors simultaneously to forecast the housing returns of China in a precise manner. Given the role of the real estate market in China’s economic growth, our result of accurate forecasting of housing returns, based on investor sentiment and macroeconomic variables using state-of-the-art machine learning methods, has important implications for both investors and policymakers. |
Keywords: | Housing prices, Investor sentiment, Bayesian shrinkage, Time-varying parameter model |
JEL: | C22 C32 C52 G12 R31 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:202055&r=all |
By: | Falco J. Bargagli-Stoffi (IMT School for Advanced Studies); Massimo Riccaboni (IMT School for Advanced Studies); Armando Rungi (IMT School for Advanced Studies) |
Abstract: | In this contribution, we exploit machine learning techniques to predict the risk of failure of firms. We then propose an empirical definition of zombies as firms that persist in a status of high risk, beyond the highest decile, after which we observe that the chances of transiting to lower risk are minimal. We implement a Bayesian Additive Regression Tree with Missing Incorporated in Attributes (BART-MIA), which is specifically useful in our setting, as we provide evidence that patterns of undisclosed accounts correlate with firm failures. After training our algorithm on 304,906 firms active in Italy in the period 2008-2017, we show how it outperforms proxy models like the Z-scores and the distance-to-default, traditional econometric methods, and other widely used machine learning techniques. We document that zombies are on average 21% less productive and 76% smaller, and that their number increased in times of financial crisis. In general, we argue that our application helps in the design of evidence-based policies in the presence of market failures, for example optimal bankruptcy laws. We believe our framework can help inform the design of support programs for highly distressed firms after the recent pandemic crisis. |
Keywords: | machine learning; Bayesian statistical learning; financial constraints; bankruptcy; zombie firms |
JEL: | C53 C55 G32 G33 L21 L25 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:ial:wpaper:1/2020&r=all |
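BART-MIA has no off-the-shelf scikit-learn implementation, but the "missingness as signal" idea can be illustrated with a gradient-boosting classifier that handles NaN natively. Everything below is synthetic and a stand-in for the authors' estimator, not a reimplementation of it.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 5_000
X = rng.normal(0, 1, (n, 6))                  # balance-sheet ratios (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1.5).astype(int)

# MIA idea: missingness itself is informative, so keep NaNs as a signal rather
# than imputing them away; here, failing firms hide one account more often
hide = rng.random(n) < np.where(y == 1, 0.4, 0.1)
X[hide, 2] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = HistGradientBoostingClassifier().fit(X_tr, y_tr)  # handles NaN natively
risk = clf.predict_proba(X_te)[:, 1]
print("out-of-sample accuracy:", clf.score(X_te, y_te))
# paper-style zombie flag: persistently in the top risk decile
print("high-risk share:", (risk > np.quantile(risk, 0.9)).mean())
```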
By: | Oskar KOWALEWSKI (IESEG School of Management & LEM-CNRS 9221); Paweł PISANY (Institute of Economics, Polish Academy of Sciences) |
Abstract: | This study investigates the determinants of fintech company creation and activity using a cross-country sample that includes developed and developing countries. Using a random-effects negative binomial model and explainable machine learning algorithms, we show the positive role of technological advancement in each economy, the quality of research, and, more importantly, the level of university-industry collaboration. Additionally, we find that demographic factors may play a role in fintech creation and activity. Some fintech companies may find the quality and stringency of regulation to be an obstacle. Our results also show sophisticated interactions between the banking sector and fintech companies that can be described as a mix of cooperation and competition. |
Keywords: | fintech, innovation, start-up, developed countries, developing countries |
JEL: | G21 G23 L26 O30 |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:ies:wpaper:f202006&r=all |
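A pooled negative binomial count model can be sketched with statsmodels (the random-effects panel version used in the paper needs specialised estimators); the variable names and coefficients below are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 300                                       # synthetic country-year observations
df = pd.DataFrame({
    "tech_index": rng.normal(0, 1, n),        # technology advancement
    "univ_collab": rng.normal(0, 1, n),       # university-industry collaboration
    "regulation": rng.normal(0, 1, n),
})
mu = np.exp(0.5 + 0.6 * df["tech_index"] + 0.4 * df["univ_collab"]
            - 0.2 * df["regulation"])
# gamma-mixed Poisson draws give the overdispersion the NB model targets
df["fintechs"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

X = sm.add_constant(df[["tech_index", "univ_collab", "regulation"]])
model = sm.NegativeBinomial(df["fintechs"], X).fit(disp=0)
print(model.summary().tables[1])
```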
By: | Alexander Subbotin (Max Planck Institute for Demographic Research, Rostock, Germany); Samin Aref (Max Planck Institute for Demographic Research, Rostock, Germany) |
Abstract: | We study international mobility in academia with a focus on migration of researchers to and from Russia. Using millions of Scopus publications from 1996 to 2019, we analyze detailed records of more than half a million researchers who have published with a Russian affiliation address at some point in their careers. Migration of researchers is observed through the changes in their affiliation addresses. We compute net migration rates based on incoming and outgoing flows of researchers which indicate that while Russia has been a donor country in the late 1990s and early 2000s, in more recent years, it has experienced relatively balanced flows and a symmetric circulation of researchers. Using subject categories of publications, we obtain a profile of possibly mixed disciplines for each researcher. This allows us to quantify the impact of migration on each field of science. For a country assumed to be losing scientists, our analysis shows that while Russia has suffered a net loss in most disciplines and more so in pharmacology, agriculture, environmental science, and energy, it is actually on the winning side of a brain circulation system for dentistry, psychology, and chemistry. For the discipline of nursing, there is a balanced circulation of researchers to and from Russia. Our substantive results reveal new aspects of international mobility in academia and its impact on a national science system which could inform policy development. Methodologically, our new approach can be adopted as a framework of analysis for studying scholarly migration in other countries. |
Keywords: | Russian Federation, bibliographies, brain drain, circular migration, computational demography, computational social science, digital demography, information sciences, international migration, labor migration, libraries, library science |
JEL: | J1 Z0 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2020-025&r=all |
By: | Federico Bassi (Dipartimento di Scienze Sociali ed Economiche - Università degli Studi di Roma "La Sapienza" [Rome]); Tom Bauermann; Dany Lang (CEPN - Centre d'Economie de l'Université Paris Nord - USPC - Université Sorbonne Paris Cité - CNRS - Centre National de la Recherche Scientifique - Université Sorbonne Paris Nord); Mark Setterfield |
Abstract: | Post Keynesian macrodynamic models make various assumptions about the normal rate of capacity utilization. Those rooted in the Classical and neo-Keynesian traditions assume the normal rate is fixed, whereas Kaleckian models treat it as a variable that is endogenous to the actual rate of capacity utilization. This paper contributes to the debate about the normal rate of capacity utilization by developing a model of strong or genuine hysteresis, in which firms make discrete decisions about the normal rate depending on the degree of uncertainty about demand conditions. An agent-based model based on empirical analysis of 25 sectors of the US economy is used to show that hysteresis can cause variation in the normal rate of capacity utilization within a subset of the range of observed variation in the actual capacity utilization rate. This suggests that the economy exhibits both constancy and (endogenous) variability in the normal rate of utilization over different ranges of variation in the actual rate. More broadly speaking, the genuine hysteresis model is shown to provide the basis for a synthesis of Post Keynesian macrodynamics that draws on both the Classical/neo-Keynesian and Kaleckian modeling traditions. |
Date: | 2020–06–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:cepnwp:halshs-02865532&r=all |
By: | Christopher Demone; Olivia Di Matteo; Barbara Collignon |
Abstract: | In this study, we enhance Markowitz portfolio selection with graph theory for the analysis of two portfolios composed of either EU or US assets. Using a threshold-based decomposition of their respective covariance matrices, we perturb the level of risk in each portfolio and build the corresponding sets of graphs. We show that the “superimposition” of all graphs in a set allows for the (re)construction of the efficient frontiers. We also identify a relationship between the Sharpe ratio (SR) of a given portfolio and the topology of the corresponding network of assets. More specifically, we suggest SR = f(topology) ≈ f(ECC/BC), where ECC is the eccentricity and BC is the betweenness centrality averaged over all nodes in the network. At each threshold, the structural analysis of the correlated networks provides unique insights into the relationships between assets, agencies, risks, returns and cash flows. We observe that the best threshold or best graph representation corresponds to the portfolio with the highest Sharpe ratio. We also show that simulated annealing performs better than a gradient-based solver. |
Keywords: | Central bank research |
JEL: | C02 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:20-21&r=all |
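The threshold-graph construction is simple to reproduce: keep an edge between two assets when their correlation exceeds a threshold, then average eccentricity (ECC) and betweenness centrality (BC) over nodes to compare with the portfolio's Sharpe ratio. The one-factor structure in the synthetic returns below is an assumption for illustration, not the paper's data.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(11)
n_assets, n_days = 15, 1_000
common = rng.normal(0, 0.01, (n_days, 1))          # one market factor
idio = rng.normal(0, 1, (n_days, n_assets)) * rng.uniform(0.005, 0.02, n_assets)
R = 0.0005 + common + idio                         # synthetic daily returns
corr = np.corrcoef(R.T)

w = np.full(n_assets, 1 / n_assets)                # example portfolio
pnl = R @ w
print(f"Sharpe ratio: {pnl.mean() / pnl.std() * np.sqrt(252):.2f}")

for threshold in (0.3, 0.5, 0.7):
    # keep only asset pairs whose correlation exceeds the threshold
    A = (corr > threshold) & ~np.eye(n_assets, dtype=bool)
    G = nx.from_numpy_array(A.astype(int))
    if not nx.is_connected(G):
        print(f"thr {threshold}: graph disconnected")
        continue
    ecc = np.mean(list(nx.eccentricity(G).values()))
    bc = np.mean(list(nx.betweenness_centrality(G).values()))
    print(f"thr {threshold}: mean ECC = {ecc:.2f}, mean BC = {bc:.4f}")
```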
By: | Emaad Manzoor; George H. Chen; Dokyun Lee; Michael D. Smith |
Abstract: | Deliberation among individuals online plays a key role in shaping the opinions that drive votes, purchases, donations and other critical offline behavior. Yet, the determinants of opinion-change via persuasion in deliberation online remain largely unexplored. Our research examines the persuasive power of $\textit{ethos}$ -- an individual's "reputation" -- using a 7-year panel of over a million debates from an argumentation platform containing explicit indicators of successful persuasion. We identify the causal effect of reputation on persuasion by constructing an instrument for reputation from a measure of past debate competition, and by controlling for unstructured argument text using neural models of language in the double machine-learning framework. We find that an individual's reputation significantly impacts their persuasion rate above and beyond the validity, strength and presentation of their arguments. In our setting, we find that having 10 additional reputation points causes a 31% increase in the probability of successful persuasion over the platform average. We also find that the impact of reputation is moderated by characteristics of the argument content, in a manner consistent with a theoretical model that attributes the persuasive power of reputation to heuristic information-processing under cognitive overload. We discuss managerial implications for platforms that facilitate deliberative decision-making for public and private organizations online. |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2006.00707&r=all |