nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒02‒07
eighteen papers chosen by



  1. Black-box Bayesian inference for economic agent-based models By Farmer, J. Doyne; Dyer, Joel; Cannon, Patrick; Schmon, Sebastian
  2. The Financial Network Channel of Monetary Policy Transmission: An Agent-Based Model By Michel Alexandre; Gilberto Tadeu Lima; Luca Riccetti; Alberto Russo
  3. LSTM Architecture for Oil Stocks Prices Prediction By Javad T. Firouzjaee; Pouriya Khaliliyan
  4. 'Moving On' -- Investigating Inventors' Ethnic Origins Using Supervised Learning By Matthias Niggli
  5. On the exact separation of cover inequalities of maximum-depth By Catanzaro, Daniele; Coniglio, Stefano; Furini, Fabio
  6. Predicting housing prices. A long term housing price path for Spanish regions By Paloma Taltavull de La Paz
  7. The Earth is Not Flat: A New World of High-Dimensional Peer Effects By Aurélien Sallin; Simone Balestra
  8. Robust Algorithmic Collusion By Nicolas Eschenbaum; Filip Melgren; Philipp Zahn
  9. Using Non-Stationary Bandits for Learning in Repeated Cournot Games with Non-Stationary Demand By Kshitija Taywade; Brent Harrison; Judy Goldsmith
  10. A Massively Parallel Exact Solution Algorithm for the Balanced Minimum Evolution Problem By Catanzaro, Daniele; Frohn, Martin; Pesenti, Raffaele
  11. Shifting the Tax Burden away from Labour towards Inheritances and Gifts – Simulation results for Germany By Andreas THIEMANN; Diana OGNYANOVA; Edlira NARAZANI; Balazs PALVOLGYI; Athena Kalyva; Alexander LEODOLTER
  12. Dynamic Portfolio Optimization with Inverse Covariance Clustering By Yuanrong Wang; Tomaso Aste
  13. An Analysis of an Alternative Pythagorean Expected Win Percentage Model: Applications Using Major League Baseball Team Quality Simulations By Justin Ehrlich; Christopher Boudreaux; James Boudreau; Shane Sanders
  14. Artificial Intelligence and Big Data in the Age of COVID-19 By Francisco J. Bariffi; Julia M. Puaschunder
  15. Control and Spread of Contagion in Networks By John Higgins; Tarun Sabarwal
  16. A multimodal transport model to evaluate transport policies in the North of France By Kilani, M.; Diop, N.; De Wolf, Daniel
  17. Public debt in the 21st century By Xavier Timbeau; Elliot Aurissergues; Eric Heyer
  18. Robust Portfolio Optimization: A Stochastic Evaluation of Worst-Case Scenarios By Paulo Rotella Junior; Luiz Celio Souza Rocha; Rogerio Santana Peruchi; Giancarlo Aquila; Karel Janda; Edson de Oliveira Pamplona

  1. By: Farmer, J. Doyne; Dyer, Joel; Cannon, Patrick; Schmon, Sebastian
    Abstract: Simulation models, in particular agent-based models, are gaining popularity in economics. The considerable flexibility they offer, as well as their capacity to reproduce a variety of empirically observed behaviors of complex systems, give them broad appeal, and the increasing availability of cheap computing power has made their use feasible. Yet widespread adoption in real-world modelling and decision-making scenarios has been hindered by the difficulty of performing parameter estimation for such models. In general, simulation models lack a tractable likelihood function, which precludes a straightforward application of standard statistical inference techniques. A number of recent works (Grazzini et al., 2017; Platt, 2020, 2021) have sought to address this problem through the application of likelihood-free inference techniques, in which parameter estimates are determined by performing some form of comparison between the observed data and simulation output. However, these approaches are (a) founded on restrictive assumptions and/or (b) typically require many hundreds of thousands of simulations. These qualities make them unsuitable for large-scale simulations in economics and can cast doubt on the validity of these inference methods in such scenarios. In this paper, we investigate the efficacy of two classes of simulation-efficient black-box approximate Bayesian inference methods that have recently drawn significant attention within the probabilistic machine learning community: neural posterior estimation and neural density ratio estimation. We present a number of benchmarking experiments in which we demonstrate that neural-network-based black-box methods provide state-of-the-art parameter inference for economic simulation models and, crucially, are compatible with generic multivariate time-series data. In addition, we suggest appropriate assessment criteria for use in future benchmarking of approximate Bayesian inference procedures for economic simulation models.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:amz:wpaper:2022-05&r=
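    For readers new to likelihood-free inference, the sketch below shows rejection ABC, the simplest member of the family the paper improves upon; the neural methods it benchmarks replace the naive accept-reject step with learned surrogates. The AR(1) "simulator", summary statistic, and all parameters are hypothetical stand-ins, not the paper's models.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(theta, n=100):
          # Toy stand-in for an agent-based simulator: AR(1) with persistence theta.
          x = np.zeros(n)
          for t in range(1, n):
              x[t] = theta * x[t - 1] + rng.normal()
          return x

      def summary(x):
          # One summary statistic: lag-1 autocorrelation.
          return np.corrcoef(x[:-1], x[1:])[0, 1]

      observed = summary(simulate(0.7))

      # Rejection ABC: sample from the prior, keep draws whose simulated
      # summary lands within a tolerance of the observed summary.
      accepted = [th for th in rng.uniform(0, 1, 5000)
                  if abs(summary(simulate(th)) - observed) < 0.05]
      print(f"posterior mean ~{np.mean(accepted):.2f} from {len(accepted)} draws")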
  2. By: Michel Alexandre; Gilberto Tadeu Lima; Luca Riccetti; Alberto Russo
    Abstract: The purpose of this paper is to contribute to a further understanding of the impact of monetary policy shocks on a financial network, which we dub the “financial network channel of monetary policy transmission”. To this aim, we develop an agent-based model (ABM) in which banks extend loans to firms. The bank-firm credit network is endogenously time-varying as determined by plausible behavioral assumptions, with both firms and banks being always willing to close a credit deal with the network partner perceived to be less risky. We then assess through simulations how exogenous shocks to the policy interest rate affect some key topological measures of the bank-firm credit network (density, assortativity, size of largest component, and degree distribution). Our simulations show that such topological features of the bank-firm credit network are significantly affected by shocks to the policy interest rate, and this impact varies quantitatively and qualitatively with the sign, magnitude, and duration of the shocks.
    Keywords: Financial network; monetary policy shocks; agent-based modeling.
    JEL: C63 E51 E52 G21
    Date: 2022–01–19
    URL: http://d.repec.org/n?u=RePEc:spa:wpaper:2022wpecon1&r=
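    The topological measures the paper tracks are standard network statistics. A minimal sketch of how they can be computed with networkx on a hypothetical random bank-firm graph (a placeholder, not the paper's ABM output):

      import networkx as nx

      # Hypothetical stand-in for one simulated bank-firm credit network:
      # a random bipartite graph over 20 banks and 200 firms.
      G = nx.bipartite.random_graph(20, 200, p=0.05, seed=1)

      density = nx.density(G)
      assortativity = nx.degree_assortativity_coefficient(G)
      largest = max(nx.connected_components(G), key=len)
      degree_dist = sorted((d for _, d in G.degree()), reverse=True)

      print(f"density={density:.4f}  assortativity={assortativity:.3f}  "
            f"largest component={len(largest)} nodes  max degree={degree_dist[0]}")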
  3. By: Javad T. Firouzjaee; Pouriya Khaliliyan
    Abstract: Oil companies are among the largest companies in the world, and their economic indicators in the global stock market have a great impact on the world economy and market due to their relation to gold, crude oil, and the dollar. To quantify these relations, we use correlations between the stocks and the dollar, crude oil, gold, and major oil company stock indices to create datasets, and we compare forecasts against real data. To predict the stocks of different companies, we use Recurrent Neural Networks (RNNs) and LSTMs, because these stocks evolve as time series. We carry out empirical experiments on the stock indices dataset to evaluate prediction performance in terms of several common error metrics such as Mean Square Error (MSE), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). The results are promising and provide reasonably accurate predictions for the price of oil companies' stocks in the near future. The results also show that RNNs lack interpretability, and that the model cannot be improved by adding any correlated data.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.00350&r=
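    A minimal PyTorch sketch of the kind of next-step LSTM regressor the paper describes; the window length, feature set, and random data below are hypothetical, not the authors' configuration:

      import torch
      import torch.nn as nn

      class PriceLSTM(nn.Module):
          def __init__(self, n_features, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                 # x: (batch, window, features)
              out, _ = self.lstm(x)
              return self.head(out[:, -1])      # predict from the last hidden state

      # Hypothetical inputs: 30-day windows of 4 standardized series
      # (stock price, dollar index, crude oil, gold).
      x, y = torch.randn(64, 30, 4), torch.randn(64, 1)

      model = PriceLSTM(n_features=4)
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      for _ in range(100):                      # toy training loop
          opt.zero_grad()
          loss = nn.functional.mse_loss(model(x), y)
          loss.backward()
          opt.step()
      print(f"final MSE on toy data: {loss.item():.3f}")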
  4. By: Matthias Niggli
    Abstract: Patent data provides rich information about technical inventions, but does not disclose the ethnic origin of inventors. In this paper, I use supervised learning techniques to infer this information. To do so, I construct a dataset of 95,202 labeled names and train an artificial recurrent neural network with long short-term memory (LSTM) to predict ethnic origins based on names. The trained network achieves an overall performance of 91% across 17 ethnic origins. I use this model to classify and investigate the ethnic origins of 2.68 million inventors and provide novel descriptive evidence regarding their ethnic-origin composition over time and across countries and technological fields. The global ethnic-origin composition has become more diverse over the last decades, mostly due to a relative increase of inventors of Asian origin. Furthermore, the prevalence of foreign-origin inventors is especially high in the USA, but has also increased in other high-income economies. This increase was mainly driven by an inflow of non-western inventors into emerging high-technology fields in the USA, but not in other high-income countries.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.00578&r=
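    A toy sketch of name-based origin classification. The paper trains a character-level LSTM on 95,202 labeled names; the character n-gram logistic regression below is a much simpler baseline in the same spirit, and the handful of names and labels are fabricated:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical toy data, far smaller than the paper's training set.
      names  = ["garcia", "martinez", "nakamura", "sato", "mueller", "schmidt"]
      labels = ["Hispanic", "Hispanic", "Japanese", "Japanese", "German", "German"]

      clf = make_pipeline(
          CountVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
          LogisticRegression(max_iter=1000),
      )
      clf.fit(names, labels)
      print(clf.predict(["yamamoto", "fernandez"]))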
  5. By: Catanzaro, Daniele (Université catholique de Louvain, LIDAM/CORE, Belgium); Coniglio, Stefano; Furini, Fabio
    Abstract: We investigate the problem of exactly separating cover inequalities of maximum depth. We propose a pseudopolynomial-time dynamic-programming algorithm for its solution, thanks to which we show that this problem is weakly NP-hard (similarly to the problem of separating cover inequalities of maximum violation). We carry out extensive computational experiments on instances of the knapsack and the multi-dimensional knapsack problems with and without conflict constraints. The results show that, with a cutting-plane generation method based on the maximum-depth criterion, we can optimize over the cover-inequality closure by generating fewer cuts than with the standard maximum-violation criterion. We also introduce the Point-to-Hyperplane Distance Knapsack Problem (PHD-KP), a problem closely related to the separation problem for maximum-depth cover inequalities, and show how the proposed dynamic programming algorithm can be adapted to effectively solve the PHD-KP as well.
    Keywords: Knapsack Problem ; Cover Inequalities ; Dynamic Programming ; Mixed Integer Nonlinear Programming ; Cutting Plane Generation
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2021018&r=
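    For context: a cover of the knapsack constraint a·x <= b is a set C whose weights sum to more than b, and it yields the valid inequality sum(x_j for j in C) <= |C| - 1. The sketch below is a small pseudopolynomial DP for the classical maximum-violation separation problem (minimize sum(1 - x*_j) over covers; a violated cover exists iff the optimum is below 1). The paper's maximum-depth criterion replaces this objective, so this is background, not the authors' algorithm:

      def separate_cover(a, b, xstar):
          # Minimize sum(1 - xstar[j]) over covers; violated iff optimum < 1.
          target = b + 1                          # integer weights assumed
          best = {0: (0.0, frozenset())}          # capped weight -> (cost, cover)
          for j, (aj, xj) in enumerate(zip(a, xstar)):
              new = dict(best)
              for w, (c, s) in best.items():
                  w2 = min(w + aj, target)        # weights >= b+1 all collapse
                  c2 = c + (1.0 - xj)
                  if w2 not in new or c2 < new[w2][0]:
                      new[w2] = (c2, s | {j})
              best = new
          return best.get(target, (float("inf"), frozenset()))

      # Constraint 4x0 + 3x1 + 3x2 + 2x3 <= 6 at a fractional point:
      cost, cover = separate_cover([4, 3, 3, 2], 6, [0.9, 0.8, 0.8, 0.1])
      print(cover, cost)   # {0, 1} with cost 0.3 < 1  ->  x0 + x1 <= 1 is violated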
  6. By: Paloma Taltavull de La Paz
    Abstract: This paper aims to forecast the long-term trend of housing prices in the Spanish cities with more than 25 thousand inhabitants, a total of 275 individual municipalities. Based on a causal model explaining housing prices with six fundamental variables (changes in population, income, number of mortgages, interest rates, vacancies, and housing prices), a pooled VECM technique is used to estimate a housing price model and calculate the 'stable long-term price', a central concept defined in the formal valuation process. The model covers the period 1995-2020, and the long term spans 2000 to 2026, so the prediction exercise includes both backcast and forecast periods, allowing us to extract the long-term cycle housing prices have followed during the last 20 years and project it a further six years. The analytical process identifies the cities following a common pattern in their housing markets by clustering the cities twice: (1) using house price time series and (2) using a machine learning approach with the six fundamental variables. The results give a comprehensible evolution of the long-term component of housing prices, and the model also permits an understanding of the main drivers of housing prices in each Spanish region. Clustering cities with the two statistical tools gives quite similar results for some cities but differs for others. The challenge of finding the correct grouping is critical to understanding the housing market and forecasting prices.
    Keywords: Error correction models; Forecast; Housing Prices; Housing valuation; Machine Learning; Time Series
    JEL: R3
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:lre:wpaper:lares-2021-4dra&r=
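    A sketch of the VECM step using statsmodels on hypothetical data for a single city (three cointegrated series sharing one stochastic trend); the variable names, lag order, and deterministic terms are illustrative only, not the paper's specification:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM

      rng = np.random.default_rng(0)
      n = 120
      trend = np.cumsum(rng.normal(0.2, 1.0, n))     # shared stochastic trend
      data = pd.DataFrame({
          "price":     trend + rng.normal(0, 0.5, n),
          "income":    trend + rng.normal(0, 0.5, n),
          "mortgages": trend + rng.normal(0, 0.5, n),
      })

      # One cointegrating relation ties prices to fundamentals; the fitted
      # long-run relation plays the role of a "stable long-term price" anchor.
      res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
      print(res.beta)                  # cointegrating vector
      print(res.predict(steps=24))     # six-year-ahead path (24 quarters)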
  7. By: Aurélien Sallin; Simone Balestra
    Abstract: The majority of recent peer-effect studies in education have focused on the effect of one particular type of peer on classmates. This view fails to take into account the reality that peer effects are heterogeneous for students with different characteristics, and that there are at least as many peer-effect functions as there are types of peers. In this paper, we develop a general empirical framework that accounts for systematic interactions between peer types and nonlinearities of peer effects. We use machine-learning methods to (i) understand which dimensions of peer characteristics are the most predictive of academic success, (ii) estimate high-dimensional peer-effect functions, and (iii) investigate performance-improving classroom allocation through policy-relevant simulations. First, we find that students' own characteristics are the most predictive of academic success, and that the most predictive peer effects are generated by students with special needs, low-achieving students, and male students. Second, we show that peer effects traditionally reported in the literature likely miss important nonlinearities in the distribution of peer proportions. Third, we determine that classroom compositions that are the most balanced in students' characteristics are the best way to reach maximal aggregate school performance.
    Keywords: peer effects, high dimensionality, machine learning, classroom composition
    JEL: C31 H75 I21 I28
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:iso:educat:0189&r=
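    A minimal sketch of step (i), ranking predictors of achievement with a random forest; the feature names, the nonlinear peer-effect shape, and all data below are fabricated for illustration:

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 5_000
      X = pd.DataFrame({
          "own_prior_score":     rng.normal(size=n),
          "female":              rng.integers(0, 2, n),
          "share_special_needs": rng.uniform(0, 0.3, n),
          "share_low_achievers": rng.uniform(0, 0.5, n),
          "share_male":          rng.uniform(0.3, 0.7, n),
      })
      # Toy outcome with a kinked (nonlinear) peer effect, as the paper emphasizes.
      y = (X["own_prior_score"]
           - 2.0 * np.maximum(X["share_low_achievers"] - 0.3, 0)
           + rng.normal(0, 0.5, n))

      forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      for name, imp in zip(X.columns, forest.feature_importances_):
          print(f"{name:22s} {imp:.3f}")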
  8. By: Nicolas Eschenbaum; Filip Melgren; Philipp Zahn
    Abstract: This paper develops a formal framework to assess policies of learning algorithms in economic games. We investigate whether reinforcement-learning agents with collusive pricing policies can successfully extrapolate collusive behavior from training to the market. We find that in testing environments collusion consistently breaks down. Instead, we observe static Nash play. We then show that restricting algorithms' strategy space can make algorithmic collusion robust, because it limits overfitting to rival strategies. Our findings suggest that policy-makers should focus on firm behavior aimed at coordinating algorithm design in order to make collusive policies robust.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.00345&r=
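    A stylized sketch of the training environment studied in this literature: two Q-learning price-setters who condition on the rival's last price. The demand function and parameters are illustrative, not the paper's setup; its finding is that such learned policies fail off the training path unless the strategy space is restricted:

      import numpy as np

      rng = np.random.default_rng(0)
      prices = np.linspace(0.5, 1.0, 5)          # discrete price grid
      Q = np.zeros((2, 5, 5))                    # Q[agent, rival's last price, own price]
      state = [0, 0]                             # last price index chosen by each agent
      eps, alpha, gamma = 0.1, 0.1, 0.95

      for t in range(100_000):
          acts = [rng.integers(5) if rng.random() < eps
                  else int(Q[i, state[1 - i]].argmax()) for i in range(2)]
          for i in range(2):
              p_i, p_j = prices[acts[i]], prices[acts[1 - i]]
              profit = p_i * max(1.0 - p_i + 0.5 * p_j, 0.0)   # stylized demand
              target = profit + gamma * Q[i, acts[1 - i]].max()
              Q[i, state[1 - i], acts[i]] += alpha * (target - Q[i, state[1 - i], acts[i]])
          state = acts

      print("learned prices:", [prices[int(Q[i, state[1 - i]].argmax())] for i in range(2)])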
  9. By: Kshitija Taywade; Brent Harrison; Judy Goldsmith
    Abstract: Many past attempts at modeling repeated Cournot games assume that demand is stationary. This does not align with real-world scenarios in which market demands can evolve over a product's lifetime for a myriad of reasons. In this paper, we model repeated Cournot games with non-stationary demand such that firms/agents face separate instances of a non-stationary multi-armed bandit problem. The set of arms/actions that an agent can choose from represents discrete production quantities; here, the action space is ordered. Agents are independent and autonomous, and cannot observe anything from the environment; they can only see their own rewards after taking an action, and only work towards maximizing these rewards. We propose a novel algorithm, 'Adaptive with Weighted Exploration (AWE) ε-greedy', which is loosely based on the well-known ε-greedy approach. This algorithm detects and quantifies changes in rewards due to varying market demand and varies the learning rate and exploration rate in proportion to the degree of change in demand, thus enabling agents to better identify new optimal actions. For efficient exploration, it also deploys a mechanism for weighing actions that takes advantage of the ordered action space. We use simulations to study the emergence of various equilibria in the market. In addition, we study the scalability of our approach in terms of the total number of agents in the system and the size of the action space. We consider both symmetric and asymmetric firms in our models. We find that, using our proposed method, agents are able to swiftly change their course of action according to changes in demand, and they also engage in collusive behavior in many simulations.
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2201.00486&r=
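    The sketch below captures the flavor of an adaptive ε-greedy bandit under a demand shift: a fast and a slow reward tracker disagree when demand moves, and that gap is used to boost exploration. It mimics the spirit, not the specifics, of the AWE algorithm; all parameters are invented:

      import numpy as np

      rng = np.random.default_rng(1)
      n_arms = 10                                 # discrete production quantities
      q = np.zeros(n_arms)                        # slow reward estimates
      fast = np.zeros(n_arms)                     # fast-moving reward estimates
      eps = 0.05

      for t in range(20_000):
          demand = 1.0 if t < 10_000 else 0.6     # demand shifts mid-run
          arm = rng.integers(n_arms) if rng.random() < eps else int(q.argmax())
          quantity = (arm + 1) / n_arms
          reward = quantity * max(demand - quantity, 0) + rng.normal(0, 0.01)
          fast[arm] += 0.30 * (reward - fast[arm])
          q[arm] += 0.05 * (reward - q[arm])
          drift = abs(fast[arm] - q[arm])         # change signal
          eps = min(0.5, 0.05 + 2.0 * drift)      # explore more when demand moves

      print("best quantity after shift:", (q.argmax() + 1) / n_arms)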
  10. By: Catanzaro, Daniele (Université catholique de Louvain, LIDAM/CORE, Belgium); Frohn, Martin (Université catholique de Louvain, LIDAM/CORE, Belgium); Pesenti, Raffaele
    Abstract: The Balanced Minimum Evolution Problem (BMEP) is an APX-hard nonlinear network design problem that consists of finding a phylogeny that minimizes the cross-entropy of the molecular sequences extracted from a given set of taxa. By combining massive parallelism with recent theoretical advances on the polyhedral combinatorics of the problem and new insights into the relationships between the BMEP and information entropy, we design a new exact solution algorithm that proves to be up to an order of magnitude faster than the current state-of-the-art sequential version on generic instances and able to solve instances with up to 25% more taxa within the same time limit. We also investigate some issues related to the numerical stability and statistical consistency of the BMEP, arising in particular when dealing with large instances. We show, as a negative finding, that no rescaling technique can ensure numerical stability while at the same time guaranteeing the statistical consistency of the optimal solution to the problem.
    Keywords: Combinatorial optimization ; network design ; balanced minimum evolution ; implicit enumeration algorithms ; parallel computing ; numerical stability
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2021023&r=
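    For orientation, the BME objective the search minimizes can be written as L(T) = sum over ordered taxon pairs of d_ij * 2^(-tau_ij), where tau_ij is the number of edges between taxa i and j in the candidate tree T. A toy evaluation on a four-taxon quartet, with distances and topology invented for illustration:

      from itertools import permutations

      def bme_length(d, tau):
          # Balanced minimum evolution length: sum over ordered pairs i != j.
          n = len(d)
          return sum(d[i][j] * 2.0 ** (-tau[i][j]) for i, j in permutations(range(n), 2))

      # Unrooted quartet ((0,1),(2,3)): cherry mates are 2 edges apart,
      # cross-cherry pairs are 3 edges apart.
      d   = [[0, 2, 6, 7], [2, 0, 6, 7], [6, 6, 0, 3], [7, 7, 3, 0]]
      tau = [[0, 2, 3, 3], [2, 0, 3, 3], [3, 3, 0, 2], [3, 3, 2, 0]]
      print(bme_length(d, tau))   # compare this value across candidate topologies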
  11. By: Andreas THIEMANN (European Commission – JRC); Diana OGNYANOVA (European Commission – DG ECFIN); Edlira NARAZANI (European Commission – JRC); Balazs PALVOLGYI (European Commission - DG ECFIN); Athena Kalyva (Greek Ministry of Finance); Alexander LEODOLTER (European Commission – DG ECFIN)
    Abstract: Germany’s tax system places a relatively strong emphasis on direct taxes, particularly on labour. At the same time, revenues from the inheritance and gift tax are relatively low. This points towards a large-scale transfer of wealth from one generation to the next that is largely untaxed, thereby maintaining the high degree of wealth inequality observed in Germany. This is due mainly to the wide-ranging tax exemptions for business assets, which make the system complex, inefficient and regressive. This paper presents three hypothetical budget-neutral scenarios of broadening the inheritance and gift tax base while reducing the tax burden on labour income. Keeping the current progressive rates but abolishing tax exemptions would lead to about EUR 9-12 billion in additional annual inheritance and gift tax revenue. Replacing the current tax regime with a flat rate of 10% or 15% could yield about EUR 0.5-2.3 billion or EUR 4-6.5 billion, respectively. Using EUROMOD, the microsimulation model of the EU, we show that these additional revenues could be used to reduce the tax burden on labour, which would improve income equality. Furthermore, estimations of labour supply responses to these reforms, based on the EUROLAB labour supply model, indicate that lowering the tax burden on labour may also lead to a slight increase in labour supply, in particular for low-income earners.
    Keywords: tax shift, inheritance and gift tax, tax wedge on labour, wealth inequality.
    JEL: D31 H2 J2
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:ipt:taxref:202116&r=
  12. By: Yuanrong Wang; Tomaso Aste
    Abstract: Market conditions change continuously. However, in portfolio investment strategies it is hard to account for this intrinsic non-stationarity. In this paper, we propose to address this issue by using the Inverse Covariance Clustering (ICC) method to identify inherent market states and then integrate such states into a dynamic portfolio optimization process. Extensive experiments across three different markets, NASDAQ, FTSE and HS300, over a period of ten years, demonstrate the advantages of our proposed algorithm, termed Inverse Covariance Clustering-Portfolio Optimization (ICC-PO). The core of the ICC-PO methodology concerns the identification and clustering of market states from the analytics of past data and the forecasting of the future market state. It is therefore agnostic to the specific portfolio optimization method of choice. By applying the same portfolio optimization technique on an ICC temporal cluster, instead of the whole training period, we show that one can generate portfolios with substantially higher Sharpe ratios, which are statistically more robust and resilient, with great reductions in maximum loss in extreme situations. This is shown to be consistent across markets, periods, optimization methods and selections of portfolio assets.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.15499&r=
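    A simplified sketch of the regime-then-optimize pipeline: cluster return observations into market states, then fit mean-variance weights on data from the current state only. The Gaussian-mixture step is a stand-in for the paper's Inverse Covariance Clustering, and the data are random placeholders:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      returns = rng.normal(0, 0.01, size=(2000, 8))      # hypothetical daily returns

      states = GaussianMixture(n_components=3, random_state=0).fit_predict(returns)
      sub = returns[states == states[-1]]                # data from the current state

      mu, cov = sub.mean(axis=0), np.cov(sub.T)
      w = np.linalg.solve(cov, mu)                       # unconstrained mean-variance
      w = np.maximum(w, 0)                               # crude long-only projection
      w = w / w.sum() if w.sum() > 0 else np.full(8, 1 / 8)
      print(np.round(w, 3))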
  13. By: Justin Ehrlich; Christopher Boudreaux; James Boudreau; Shane Sanders
    Abstract: We ask if there are alternative contest models that minimize error or information loss from misspecification and outperform the Pythagorean model. This article uses simulated data to select the optimal expected win percentage model among relevant alternatives: the traditional Pythagorean model and the difference-form contest success function (CSF). We simulate 1,000 iterations of the 2014 MLB season for the purpose of estimating and analyzing alternative models of expected win percentage (team quality). We use the open-source Strategic Baseball Simulator and develop an AutoHotKey script that programmatically executes the SBS application, chooses the correct settings for the 2014 season, enters a unique ID for the simulation data file, and iterates these steps 1,000 times. We estimate expected win percentage using the traditional Pythagorean model, as well as the difference-form CSF model that is used in game theory and public choice economics. Each model is estimated while accounting for fixed (team) effects. We find that the difference-form CSF model outperforms the traditional Pythagorean model in terms of explanatory power and in terms of misspecification-based information loss as estimated by the Akaike Information Criterion. Through parametric estimation, we further confirm that the simulator yields realistic statistical outcomes. The simulation methodology offers the advantage of a greatly improved sample size. As the season is held constant, our simulation-based statistical inference also allows for estimation and model comparison without the (time series) issue of non-stationarity. The results suggest that improved win (productivity) estimation can be achieved through alternative CSF specifications.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.14846&r=
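    The two competing functional forms, side by side. The run totals below are invented and the logistic slope k is a free parameter, so the printed numbers only illustrate how the models differ, not the paper's estimates:

      import math

      def pythagorean(R, RA, g=2.0):
          # Ratio-form CSF: win pct = R^g / (R^g + RA^g)
          return R**g / (R**g + RA**g)

      def difference_csf(R, RA, k=0.005):
          # Difference-form CSF: logistic in the run differential R - RA
          return 1.0 / (1.0 + math.exp(-k * (R - RA)))

      R, RA = 773, 630                      # hypothetical season run totals
      print(pythagorean(R, RA))             # ~0.601
      print(difference_csf(R, RA))          # ~0.672 at this illustrative k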
  14. By: Francisco J. Bariffi (University Carlos III of Madrid, Spain); Julia M. Puaschunder (The New School, Department of Economics, School of Public Engagement, USA)
    Abstract: The view that the COVID-19 pandemic has set in motion profound changes in our modern societies is practically unanimous. The global effort to contain, cure, and eradicate COVID-19 has benefited greatly from the use, development, and/or adaptation of technological tools for mass surveillance based on artificial intelligence and robotics systems. Yet the management of the COVID-19 pandemic has also revealed many shortcomings generated by the need to make decisions “in extremis”. Systematic lockdowns of entire populations pushed humans to increase their exposure to digital devices in order to achieve some sort of social connection. Some nations with the requisite technological capabilities used AI systems to access individual digital data in order to control and contain SARS-CoV-2. Massive surveillance of entire populations is now possible. The problem thus arises of how to establish an adequate balance and control between the utility and results offered by mass surveillance systems based on artificial intelligence and robotics in the fight against COVID-19, on the one hand, and the protection of personal and collective fundamental rights and freedoms, on the other.
    Keywords: Artificial Intelligence, AI, Anti-Discrimination, Big Data, COVID-19, COVID Long Haulers, Democratization of Healthcare Information, Digitalization, Healthcare, Human Rights, Massive Surveillance, Prevention, Tracking
    Date: 2021–10
    URL: http://d.repec.org/n?u=RePEc:smo:lpaper:0115&r=
  15. By: John Higgins (Department of Economics, University of Wisconsin, Madison, WI 53706, USA); Tarun Sabarwal (Department of Economics, University of Kansas, Lawrence, KS 66045, USA)
    Abstract: We study proliferation of an action in binary action network coordination games that are generalized to include global effects. This captures important aspects of proliferation of a particular action or narrative in online social networks, providing a basis to understand their impact on societal outcomes. Our model naturally captures complementarities among starting sets, network resilience, and global effects, and highlights interdependence in channels through which contagion spreads. We present new, natural, and computationally tractable algorithms to define and compute equilibrium objects that facilitate the general study of contagion in networks and prove their theoretical properties. Our algorithms are easy to implement and help to quantify relationships previously inaccessible due to computational intractability. Using these algorithms, we study the spread of contagion in scale-free networks with 1,000 players using millions of Monte Carlo simulations. Our analysis provides quantitative and qualitative insight into the design of policies to control or spread contagion in networks. The scope of application is enlarged given the many other situations across different fields that may be modeled using this framework.
    Keywords: Network games, coordination games, contagion, algorithmic computation
    JEL: C62 C72
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202201&r=
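    A minimal sketch of contagion via myopic best responses on a scale-free network with a global effect, in the spirit of (but much simpler than) the paper's algorithms; the threshold, global weight, and seed set are arbitrary choices:

      import networkx as nx

      # A player adopts action 1 when the adopting share among neighbors,
      # plus a small global-adoption term, reaches a threshold.
      G = nx.barabasi_albert_graph(1000, 2, seed=0)
      adopters = set(range(10))                       # starting set
      threshold, global_weight = 0.3, 0.1

      changed = True
      while changed:
          changed = False
          share_global = len(adopters) / G.number_of_nodes()
          for v in G.nodes:
              nbrs = list(G[v])
              share_local = sum(u in adopters for u in nbrs) / len(nbrs)
              if v not in adopters and share_local + global_weight * share_global >= threshold:
                  adopters.add(v)
                  changed = True

      print(f"contagion reached {len(adopters)} of {G.number_of_nodes()} nodes")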
  16. By: Kilani, M.; Diop, N.; De Wolf, Daniel (Université catholique de Louvain, LIDAM/CORE, Belgium)
    Abstract: We develop a passenger transport model for the North of France and use it to discuss the impacts of policies aimed at limiting emissions and congestion. The model includes both urban and intercity trips, and four transport modes are considered: walking, biking, public transport and private cars. It is calibrated to match the mode shares and the dynamics of congestion over a full day. The simulations are conducted within the MATSim framework. We evaluate the impacts on traffic flows and emissions of two pricing reforms: free public transport and road pricing in the city center of Lille (the main metropolitan area in the study region).
    Keywords: Multimodal transport ; Emissions and congestion ; Transport simulation (MATSIM)
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2021030&r=
  17. By: Xavier Timbeau (OFCE - Observatoire français des conjonctures économiques - Sciences Po - Sciences Po); Elliot Aurissergues (OFCE - Observatoire français des conjonctures économiques - Sciences Po - Sciences Po); Eric Heyer (OFCE - Observatoire français des conjonctures économiques - Sciences Po - Sciences Po)
    Abstract: We propose a definition of public debt sustainability based on the possibility of conducting a fiscal effort or giving support to a macroeconomic path that makes it possible to reach a public debt target over a given horizon. The concepts of a fiscal effort and a macroeconomic trajectory are both speculative, as they rely on the anticipation of unknown futures. By making the parameters of these futures explicit and using them in a parsimonious model, we can generate trajectories that are not forecasts but a means of assessing the effort required to reach a target that is conditional on explicit assumptions. Debtwatch is a web application, freely accessible at https://ofce.shinyapps.io/debtwatchr, that can be used to carry out simulations, not only for France but also for other European countries and certain non-European countries such as the United States, including by modifying the parameters and exchanging assumptions with others. It is possible to carry out a calculation that is transparent (the assumptions are known and can be shared) and reproducible (the same assumptions lead to the same results) and which should help to further the debate on public debt targets and the associated efforts for a selection of developed countries.
    Keywords: public debt, Debtwatch, simulations, calculation, developed countries
    Date: 2021–10–22
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03477397&r=
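    The accounting identity underneath any such effort calculation is standard: with interest rate r, nominal growth g, and primary balance pb (all relative to GDP), the debt ratio follows b[t+1] = (1+r)/(1+g) * b[t] - pb[t]. A tiny sketch of the constant effort needed to hit a target; all numbers are illustrative, not Debtwatch output:

      def required_primary_balance(b0, target, years, r, g):
          # Solve b0*phi^T - pb*(phi^(T-1) + ... + 1) = target for a constant pb.
          phi = (1 + r) / (1 + g)
          annuity = sum(phi**k for k in range(years))
          return (b0 * phi**years - target) / annuity

      # Debt at 112% of GDP, target 100% in 10 years, r = 2%, g = 3%:
      pb = required_primary_balance(1.12, 1.00, 10, 0.02, 0.03)
      print(f"required primary surplus ~ {pb:.2%} of GDP per year")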
  18. By: Paulo Rotella Junior (Department of Production Engineering, Federal University of Paraiba, Brazil & Department of Management, Federal Institute of Education, Science and Technology - North of Minas Gerais, Brazil & Faculty of Finance and Accounting, Prague University of Economics and Business, Czech Republic & Faculty of Social Sciences, Charles University, Czech Republic); Luiz Celio Souza Rocha (Department of Management, Federal Institute of Education, Science and Technology - North of Minas Gerais, Brazil); Rogerio Santana Peruchi (Department of Production Engineering, Federal University of Paraiba, Brazil); Giancarlo Aquila (IEPG, Federal University of Itajuba, Brazil); Karel Janda (Faculty of Finance and Accounting, Prague University of Economics and Business, Czech Republic & Faculty of Social Sciences, Charles University, Czech Republic); Edson de Oliveira Pamplona (Institute of Production and Management Engineering, Federal University of Itajuba, Brazil)
    Abstract: This article presents a new approach for building robust portfolios based on stochastic efficiency analysis and periods of market downturn. The empirical analysis is done on assets traded on the Brazil Stock Exchange, B3 (Brasil, Bolsa, Balcão). We start with information on the assets from periods of market downturn (worst-case scenarios) and group them using hierarchical clustering. Then we perform stochastic efficiency analysis on these data using the Chance Constrained Data Envelopment Analysis (CCDEA) model. Finally, we use a classical model of capital allocation to obtain the optimal share of each asset. Our model is able to accommodate investors who exhibit different risk behaviors (from conservative to risk-taking) by varying the probability level (1-α_i) at which the CCDEA constraints must be fulfilled. We show that the optimal portfolios constructed with the use of information from periods of market downturn achieve a better Sharpe ratio (SR) in the validation period. The combined use of these approaches, together with fundamentalist variables and information on market downturns, allows us to build robust portfolios with higher cumulative returns in the validation period and lower beta values.
    Keywords: Robust optimization, Stochastic evaluation, Chance Constrained DEA, Worst-case markets, Portfolios
    JEL: G11 G14 C38 C61
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2022_03&r=
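    A sketch of the first stage, hierarchical clustering of assets over a downturn window, using SciPy on fabricated data; the CCDEA efficiency stage is not reproduced here:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(0)
      downturn = rng.normal(-0.01, 0.02, size=(60, 12))   # 60 days x 12 assets

      corr = np.corrcoef(downturn.T)
      dist = np.sqrt(0.5 * (1.0 - corr))                  # correlation distance
      Z = linkage(dist[np.triu_indices(12, k=1)], method="ward")
      labels = fcluster(Z, t=4, criterion="maxclust")     # 4 asset groups
      print(labels)                                       # cluster id per asset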

General information on the NEP project can be found at https://nep.repec.org. For comments, please write to the director of NEP, Marco Novarese, at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.