on Computational Economics
Issue of 2020‒04‒27
twelve papers chosen by
By: | Brunori, Paolo; Neidhöfer, Guido |
Abstract: | We show that measures of inequality of opportunity (IOP) fully consistent with Roemer's (1998) IOP theory can be straightforwardly estimated by adopting a machine learning approach, and we apply this novel method to analyse the development of IOP in Germany during the last three decades. To do so, we take advantage of information contained in 25 waves of the Socio-Economic Panel. Our analysis shows that IOP in Germany declined immediately after reunification, increased in the first decade of the century, and slightly declined again after 2010. Over the entire period, at the top of the distribution we always find individuals who resided in West Germany before the fall of the Berlin Wall, whose fathers had a high occupational position, and whose mothers had a high educational degree. Residents of East Germany in 1989 with low-educated parents persistently rank at the bottom.
Keywords: | Inequality, Opportunity, SOEP, Germany
JEL: | D63 D30 D31 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewdip:20013&r=all |
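A minimal sketch of the estimation idea in the entry above, under assumptions: an ex-ante machine-learning estimate of IOP fits a regression tree on circumstance variables and measures the inequality of the fitted "type" means. The file and column names are hypothetical, and the paper's exact learner may differ.

    # Illustrative only: predict income from circumstances, then measure the
    # inequality of the predictions. File and column names are assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    def gini(x):
        """Gini coefficient of a non-negative array."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

    df = pd.read_csv("soep_sample.csv")  # hypothetical extract of the panel
    circumstances = ["father_occupation", "mother_education", "region_1989"]
    X = pd.get_dummies(df[circumstances])

    # A pruned tree partitions the sample into "types" defined by circumstances.
    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=100)
    type_means = tree.fit(X, df["income"]).predict(X)

    print("absolute IOP:", gini(type_means))  # inequality between types only
    print("relative IOP:", gini(type_means) / gini(df["income"]))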
By: | Òscar Jordà; Moritz Schularick; Alan M. Taylor |
Abstract: | Business cycles are costlier and stabilization policies more beneficial than widely thought. This paper shows that all business cycles are asymmetric and resemble mini “disasters”: growth is pervasively fat-tailed and non-Gaussian. Using long-run historical data, we show empirically that this is true for all advanced economies since 1870. Focusing on the peacetime sample, we develop a tractable local projection framework to estimate consumption growth paths for normal and financial-crisis recessions. Random coefficient local projections give an easy and transparent mapping from the estimates to the calibrated simulation model. Simulations show that substantial welfare costs arise not just from large rare disasters, but also from the smaller, more frequent mini-disasters in every cycle. In postwar America, households would sacrifice more than 10 percent of consumption to avoid such cyclical fluctuations.
JEL: | E13 E21 E22 E32 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26962&r=all |
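As a rough illustration of the local projection machinery mentioned above, the sketch below regresses h-step-ahead consumption growth on a recession indicator. The data file, column names, and bare-bones specification are assumptions, far simpler than the paper's random coefficient local projections.

    # Illustrative Jorda-style local projection; data and columns are assumed.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("consumption_panel.csv")  # assumed: 'log_c', 'recession'
    horizon = 5
    path = []
    for h in range(1, horizon + 1):
        dep = df["log_c"].shift(-h) - df["log_c"]    # cumulative growth t -> t+h
        X = sm.add_constant(df[["recession"]])
        res = sm.OLS(dep, X, missing="drop").fit(
            cov_type="HAC", cov_kwds={"maxlags": h})  # robust standard errors
        path.append(res.params["recession"])          # consumption path estimate
    print(np.round(path, 3))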
By: | Nataliia Ostapenko |
Abstract: | I propose a new approach to identifying exogenous monetary policy shocks that requires neither priors on the underlying macroeconomic structure nor any observation of monetary policy actions. My approach entails directly estimating the unexpected changes in the federal funds rate as those that cannot be predicted from the internal discussions of the Federal Open Market Committee (FOMC). I employ deep learning and basic machine learning regressors to predict the effective federal funds rate from the FOMC's discussions without imposing any time-series structure. A standard three-variable structural vector autoregression (SVAR) with my new measure shows that economic activity and inflation decline in response to a monetary policy shock.
Keywords: | monetary policy, identification, shock, deep learning, FOMC, transcripts |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:mtk:febawb:123&r=all |
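A toy version of the identification step described above might look as follows; the TF-IDF-plus-ridge pipeline is a simple stand-in for the paper's deep learning models, and the file and column names are assumptions.

    # Illustrative only: predict the effective federal funds rate from FOMC
    # transcripts; the residuals are the candidate monetary policy shocks
    # that would then enter a standard three-variable SVAR.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    df = pd.read_csv("fomc_meetings.csv")  # assumed: 'transcript', 'ffr'
    model = make_pipeline(
        TfidfVectorizer(max_features=5000, stop_words="english"),
        Ridge(alpha=1.0),
    )
    model.fit(df["transcript"], df["ffr"])
    df["shock"] = df["ffr"] - model.predict(df["transcript"])  # unexpected part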
By: | Javier Barbero (European Commission - JRC); Olga Diukanova (European Commission - JRC); Carlo Gianelle (European Commission - JRC); Simone Salotti (European Commission - JRC); Artur Santoalha (TIK Centre for Technology, Innovation and Culture - UIO)
Abstract: | We make the case for a technology-enabled approach to Smart Specialisation policy making, proposing a novel type of economic impact assessment to foster policy effectiveness. We use the RHOMOLO model to gauge empirically the general equilibrium effects implied by the Smart Specialisation logic of intervention, as foreseen by the policy makers designing and implementing the European Cohesion policy. More specifically, we simulate the macroeconomic effects of achieving the R&D personnel targets planned by a set of Southern European regions. We discuss the implications of the proposed methodology for future assessments of Smart Specialisation.
Keywords: | RHOMOLO, regions, growth, Smart Specialisation, ex-ante policy impact assessment, CGE models, Cohesion policy
JEL: | C68 O38 R13 R58 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:ipt:termod:202001&r=all |
By: | Ari, Anil; Ratnovski, Lev; Chen, Sophia |
Abstract: | This paper presents a new dataset on the dynamics of non-performing loans (NPLs) during 88 banking crises since 1990. The data show similarities across crises during NPL build-ups but less so during NPL resolutions. We find a close relationship between NPL problems (elevated and unresolved NPLs) and the severity of post-crisis recessions. A machine learning approach identifies a set of pre-crisis predictors of NPL problems related to weak macroeconomic, institutional, corporate, and banking sector conditions. Our findings suggest that reducing pre-crisis vulnerabilities and promptly addressing NPL problems during a crisis are important for post-crisis output recovery.
Keywords: | banking crises, crisis resolution, debt, non-performing loans, recessions
JEL: | E32 E44 G21 N10 N20
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20202395&r=all |
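A hedged sketch of the machine-learning step mentioned in the abstract: rank pre-crisis conditions by how well they predict NPL problems across crisis episodes. The feature set and file name are illustrative, not the paper's variables.

    # Illustrative only; one row per banking crisis, columns assumed.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    df = pd.read_csv("banking_crises.csv")
    features = ["credit_gap", "current_account", "public_debt",
                "bank_capital", "rule_of_law"]        # pre-crisis conditions
    y = df["npl_problem"]   # 1 if NPLs were elevated and unresolved

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(df[features], y)
    ranking = sorted(zip(rf.feature_importances_, features), reverse=True)
    print(ranking)          # candidate pre-crisis predictors of NPL problems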
By: | Guy Melard |
Abstract: | This paper surveys repeated surveys, with a few examples. Repeated surveys are surveys conducted across time, so their results appear as time series. The main question in repeated surveys is how to summarize the results: using only the last survey, or using some (weighted) average of the most recent surveys. An example from economic statistics is treated: the monthly business surveys conducted in several countries. In addition, simulation results are presented based on some of the techniques proposed in the literature on repeated surveys, as well as on TRAMO-SEATS, which is sometimes used for business surveys.
Keywords: | Monte Carlo study, business tendency survey, time series model
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/304511&r=all |
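The summarization question raised in the abstract (use only the last wave, or some weighted average of recent waves) can be illustrated with a small sketch; the smoothing constant and toy numbers are assumptions.

    # Illustrative only: an exponentially weighted composite of survey waves.
    import numpy as np

    def composite_estimate(waves, alpha=0.5):
        """Weighted average of survey results, newest wave last."""
        waves = np.asarray(waves, dtype=float)
        weights = alpha ** np.arange(len(waves) - 1, -1, -1)  # newest weight 1
        return (weights * waves).sum() / weights.sum()

    balances = [12.0, 9.5, 11.0, 7.0]       # toy business-survey balances
    print(balances[-1])                     # option 1: last survey only
    print(composite_estimate(balances))     # option 2: weighted average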
By: | Gallego, Guillermo; Li, Anran; Truong, Van-Anh; Wang, Xinshang |
Abstract: | We propose one of the first models of “product framing” and pricing. Product framing refers to the way consumer choice is influenced by how products are framed, or displayed. We present a model where a set of products is displayed, or framed, on a set of virtual web pages. We assume that consumers consider only products in the top pages, with different consumers willing to see different numbers of pages. Consumers select a product, if any, from these pages following a general choice model. We show that the product framing problem is NP-hard. We derive algorithms with guaranteed performance relative to an optimal algorithm under reasonable assumptions. Our algorithms are fast and easy to implement. We also present structural results and design algorithms for pricing under framing effects for the multinomial logit model. We show that for profit maximization problems, at optimality, products are displayed in descending order of their value gap and in ascending order of their markups.
Keywords: | analysis of algorithms; choice models; marketing; pricing |
JEL: | J50 |
Date: | 2020–01–07 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:101983&r=all |
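The closing structural result above has a direct computational reading; the sketch below orders products by one plausible notion of "value gap" (utility net of price). The dataclass and the numbers are illustrative assumptions, not the paper's exact definitions.

    # Illustrative only: display ordering under the multinomial logit result.
    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        utility: float   # mean consumer utility
        price: float
        cost: float

    products = [Product("A", 1.2, 10.0, 6.0),
                Product("B", 0.4, 8.0, 7.0),
                Product("C", 2.0, 12.0, 5.0)]

    # Highest "value gap" first; with these toy numbers the same order also
    # has ascending markups (price minus cost), as the paper's result states.
    display = sorted(products, key=lambda p: p.utility - p.price, reverse=True)
    print([p.name for p in display])
    print([p.price - p.cost for p in display])   # markups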
By: | Agnes Cseh (Centre for Economic and Regional Studies, Institute of Economics); Klaus Heeger (Technische Universität Berlin, Faculty IV Electrical Engineering and Computer Science, Institute of Software Engineering and Theoretical Computer Science, Chair of Algorithmics and Computational Complexity) |
Abstract: | In the stable marriage problem, a set of men and a set of women are given, each of whom has a strictly ordered preference list over the acceptable agents in the opposite class. A matching is called stable if it is not blocked by any pair of agents who mutually prefer each other to their respective partners. Ties in the preferences allow for three different definitions of stability: weak, strong, and super-stability. In addition, acceptable pairs in the instance can be restricted in their ability to block a matching or to be part of one, which again generates three categories of restrictions on acceptable pairs: forced pairs must be in a stable matching, forbidden pairs must not appear in it, and free pairs cannot block any matching. Our computational complexity study targets the existence of a stable solution for each of the three stability definitions, in the presence of each of the three types of restricted pairs. We solve all cases that were still open. As a byproduct, we also derive that the maximum size weakly stable matching problem is hard even in very dense graphs, which may be of independent interest.
Keywords: | stable matchings, restricted edges, complexity
JEL: | C63 C78 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:has:discpr:2007&r=all |
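For context, the baseline that the paper's variants (ties plus forced, forbidden, and free pairs) generalize is Gale-Shapley deferred acceptance for strict preferences; a self-contained toy sketch:

    # Classical men-proposing Gale-Shapley; toy preference lists are illustrative.
    def gale_shapley(men_prefs, women_prefs):
        """Returns a stable matching as {man: woman} for strict, complete lists."""
        rank = {w: {m: r for r, m in enumerate(p)} for w, p in women_prefs.items()}
        free = list(men_prefs)
        next_proposal = {m: 0 for m in men_prefs}
        engaged = {}                        # woman -> man
        while free:
            m = free.pop()
            w = men_prefs[m][next_proposal[m]]
            next_proposal[m] += 1
            if w not in engaged:
                engaged[w] = m
            elif rank[w][m] < rank[w][engaged[w]]:
                free.append(engaged[w])     # w trades up; old partner is free
                engaged[w] = m
            else:
                free.append(m)              # w rejects m
        return {m: w for w, m in engaged.items()}

    men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
    women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
    print(gale_shapley(men, women))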
By: | Peter Biro (Centre for Economic and Regional Studies and Department of Operations Research and Actuarial Sciences, Corvinus University of Budapest, Hungary); Jens Gudmundsson (Department of Food and Resource Economics, University of Copenhagen, Denmark)
Abstract: | We study the allocation of objects to agents, as exemplified primarily by school choice. Welfare judgments of the object-allocating agency are encoded as edge weights in the acceptability graph, and the welfare of an allocation is the sum of its edge weights. We introduce the constrained welfare-maximizing solution: the allocation of highest welfare among the Pareto-efficient allocations. We identify conditions under which this solution is easily determined from a computational point of view. For the unrestricted case, we formulate an integer program and find it to be viable in practice, as it quickly solves a real-world instance of kindergarten allocation and large-scale simulated instances. Incentives to report preferences truthfully are discussed briefly.
Keywords: | Assignment, Pareto-efficiency, welfare-maximization, complexity, integer programming
JEL: | C6 C78 D61 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:has:discpr:2016&r=all |
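The welfare-maximization step alone (without the Pareto-efficiency restriction that defines the paper's constrained solution) is a maximum-weight assignment problem; a sketch with illustrative weights:

    # Illustrative only: maximize total edge weight in the acceptability graph.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # welfare[a, o] = agency's welfare from giving object o to agent a;
    # a large negative number marks unacceptable pairs.
    welfare = np.array([[5.0, 2.0, -1e9],
                        [4.0, 3.0, 1.0],
                        [-1e9, 6.0, 2.0]])

    agents, objects = linear_sum_assignment(welfare, maximize=True)
    print(list(zip(agents, objects)))        # who gets what
    print(welfare[agents, objects].sum())    # total welfare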
By: | Nüst, Daniel (University of Münster); Sochat, Vanessa; Marwick, Ben; Eglen, Stephen; Head, Tim; Hirst, Tony |
Abstract: | Containers are greatly improving computational science by packaging software and data dependencies. In a scholarly context, transparency and support of reproducibility are the largest drivers for using such containers; it follows that the choices made when building a container can make or break a workflow’s reproducibility. Container images are typically built from the instructions in a Dockerfile. The rules presented here help researchers write understandable Dockerfiles for typical data science workflows. By following these rules, researchers can create containers suitable for sharing with fellow scientists, for inclusion in scholarly communication such as education or scientific papers, and for an effective and sustainable personal workflow.
Date: | 2020–04–17 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:fsd7t&r=all |
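An illustrative Dockerfile in the spirit of the rules the article proposes (pinned base image, pinned dependencies, documented build steps); the specific versions and file names are assumptions, and the article's actual rules should be consulted directly.

    # Illustrative Dockerfile for a small data science workflow (assumptions
    # throughout; not taken from the article).
    FROM python:3.8.2-slim

    LABEL maintainer="researcher@example.org"

    # Pin dependency versions in requirements.txt for reproducible rebuilds.
    COPY requirements.txt /tmp/requirements.txt
    RUN pip install --no-cache-dir -r /tmp/requirements.txt

    # Add the analysis code last so cached layers are reused when it changes.
    COPY analysis/ /home/analysis/
    WORKDIR /home/analysis
    CMD ["python", "run_analysis.py"]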
By: | Cheng, Cindy; Barcelo, Joan; Hartnett, Allison; Kubinec, Robert (Princeton University); Messerschmidt, Luca |
Abstract: | As the COVID-19 pandemic spreads around the world, governments have implemented a broad set of policies to limit its spread. In this paper we present an initial release of a large hand-coded dataset of more than 4,500 separate policy announcements from governments around the world. These data are being made publicly available, in combination with other data we have collected (including COVID-19 tests, cases, and deaths) as well as a number of country-level covariates. Given the speed of the COVID-19 outbreak, we will release the data daily, with a 5-day lag for record validity checking. In a truly global effort, our team comprises more than 190 research assistants across 18 time zones and makes use of cloud-based managerial and data collection technology in addition to machine learning coding of news sources. We analyze the dataset with a Bayesian time-varying ideal point model, which shows a quick acceleration in the adoption of harsher policies across countries beginning in mid-March and continuing to the present. While some relatively low-cost policies like task forces and health monitoring began early, countries generally adopted harsher measures within a narrow time window, suggesting strong policy diffusion effects.
Date: | 2020–04–12 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:dkvxy&r=all |
By: | Pierre Durand; Gaëtan Le Quang |
Abstract: | Banking regulation faces multiple challenges that call for rethinking the way it is designed in order to tackle the specific risks associated with banking activities. In this paper, we argue that regulators should focus on designing strong equity requirements instead of implementing several complex rules. Such an equity constraint is, however, opposed by the banking industry because of its presumed adverse impact on banks' performance. Using random forest regressions on a large dataset of banks' balance sheet variables, we show that the ratio of equity over total assets has a clear positive effect on banks' performance, as measured by the return on assets. By contrast, the impact of this ratio on shareholder value, as measured by the return on equity, is weakly negative most of the time. Strong equity requirements therefore do not impede banks' performance, but they do reduce shareholder value. This may be why the banking industry so fiercely opposes strong equity requirements.
Keywords: | Banking regulation, capital requirements, Basel III, random forest regression
JEL: | C44 G21 G28 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:drm:wpaper:2020-2&r=all |
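A hedged sketch of the empirical exercise above: a random forest regression of return on assets on balance-sheet ratios, followed by a manual partial-dependence check on the equity ratio. The file and column names are assumptions.

    # Illustrative only: sign of the equity effect in a random forest.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    df = pd.read_csv("banks.csv")    # assumed bank-level panel
    cols = ["equity_to_assets", "size", "loans_to_assets", "deposits_to_assets"]
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(df[cols], df["roa"])

    # Manual partial dependence: average prediction along an equity grid.
    grid = df[cols].copy()
    low, high = df["equity_to_assets"].quantile([0.05, 0.95])
    for v in np.linspace(low, high, 5):
        grid["equity_to_assets"] = v
        print(round(v, 4), rf.predict(grid).mean())   # predicted ROA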