on Computational Economics
Issue of 2020‒05‒25
fourteen papers chosen by
By: | Lilit Popoyan (Institute of Economics (LEM), Scuola Superiore Sant’Anna, Pisa (Italy)); Mauro Napoletano (Sciences Po-OFCE, and SKEMA Business School); Andrea Roventini (EMbeDS and Institute of Economics (LEM)) |
Abstract: | We develop a macroeconomic agent-based model to study how financial instability can emerge from the co-evolution of interbank and credit markets, and the policy responses to mitigate its impact on the real economy. The model is populated by heterogeneous firms, consumers, and banks that interact locally in different markets. In particular, banks provide credit to firms according to a Basel II or III macro-prudential framework and manage their liquidity in the interbank market. The Central Bank performs monetary policy according to different types of Taylor rules. We find that the model endogenously generates freezes in the interbank market, which interact with the financial accelerator, possibly leading to firm bankruptcies, banking crises, and the emergence of deep downturns. This requires the timely intervention of the Central Bank as a liquidity lender of last resort. Moreover, we find that the joint adoption of a three-mandate Taylor rule tackling credit growth and the Basel III macro-prudential framework is the best policy mix to stabilize financial and real economic dynamics. However, as the Liquidity Coverage Ratio spurs financial instability by increasing the pro-cyclicality of banks' liquid reserves, a new counter-cyclical liquidity buffer should be added to Basel III to further improve its performance. Finally, we find that the Central Bank can also dampen financial instability by employing a new unconventional monetary-policy tool involving active management of the interest-rate corridor in the interbank market. |
Keywords: | Financial instability; interbank market freezes; monetary policy; macro-prudential policy; Basel III regulation; Tinbergen principle; agent-based models. |
JEL: | C63 E52 E6 G01 G21 G28 |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:fce:doctra:2014&r=all |
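The three-mandate Taylor rule discussed in the abstract can be sketched as follows; the additive functional form, coefficient names, and default values are illustrative assumptions, not the paper's calibration:

```python
def taylor_rate(r_star, pi, pi_target, output_gap, credit_growth,
                phi_pi=1.5, phi_y=0.5, phi_c=0.5):
    # Nominal policy rate under a three-mandate rule: the inflation gap,
    # the output gap, and credit growth each pull the rate up or down.
    return (r_star + pi + phi_pi * (pi - pi_target)
            + phi_y * output_gap + phi_c * credit_growth)

# With inflation 1pp above target and no other gaps, the rate rises
# above the neutral nominal rate r_star + pi.
rate = taylor_rate(r_star=0.02, pi=0.03, pi_target=0.02,
                   output_gap=0.0, credit_growth=0.0)
```

Adding the credit-growth term is what turns the standard dual-mandate rule into the three-mandate variant the abstract refers to.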
By: | Wallimann, Hannes (Faculty of Economics and Social Sciences); Imhof, David; Huber, Martin |
Abstract: | We propose a new method for flagging bid rigging, which is particularly useful for detecting incomplete bid-rigging cartels. Our approach combines screens, i.e. statistics derived from the distribution of bids in a tender, with machine learning to predict the probability of collusion. As a methodological innovation, we calculate such screens for all possible subgroups of three or four bids within a tender and use summary statistics like the mean, median, maximum, and minimum of each screen as predictors in the machine learning algorithm. This approach tackles the issue that competitive bids in incomplete cartels distort the statistical signals produced by bid rigging. We demonstrate that our algorithm outperforms previously suggested methods in applications to incomplete cartels based on empirical data from Switzerland. |
Keywords: | Bid rigging detection; screening methods; descriptive statistics; machine learning; random forest; lasso; ensemble methods |
JEL: | C21 C45 C52 D22 D40 K40 |
Date: | 2020–04–01 |
URL: | http://d.repec.org/n?u=RePEc:fri:fribow:fribow00513&r=all |
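The subgroup-screen idea can be illustrated with a coefficient-of-variation screen, one of the standard screens in this literature; the screen choice and the four summary statistics are a simplified stand-in for the paper's full feature set:

```python
from itertools import combinations
from statistics import mean, median, stdev

def subgroup_screens(bids, k=3):
    # Compute the coefficient of variation on every k-bid subgroup of a
    # tender, then summarise across subgroups -- mean/median/max/min
    # become predictors for the machine learning algorithm.
    cvs = [stdev(g) / mean(g) for g in combinations(bids, k)]
    return {"mean": mean(cvs), "median": median(cvs),
            "max": max(cvs), "min": min(cvs)}

# Three suspiciously close bids plus one outsider: subgroup statistics
# separate the tight cluster from mixtures that include the outsider.
feats = subgroup_screens([100.0, 101.0, 102.0, 150.0])
```

The point of subgrouping is that a single tender-level screen would be diluted by the competitive bids of an incomplete cartel, whereas some subgroup still isolates the colluders.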
By: | Dimitris Korobilis; Davide Pettenuzzo |
Abstract: | As the amount of economic and other data generated worldwide increases vastly, a challenge for future generations of econometricians will be to master efficient algorithms for inference in empirical models with large information sets. This Chapter provides a review of popular estimation algorithms for Bayesian inference in econometrics and surveys alternative algorithms developed in machine learning and computing science that allow for efficient computation in high-dimensional settings. The focus is on scalability and parallelizability of each algorithm, as well as their ability to be adopted in various empirical settings in economics and finance. |
Keywords: | MCMC, approximate inference, scalability, parallel computation |
JEL: | C11 C15 C49 C88 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:gla:glaewp:2020_09&r=all |
By: | Spiegel, Alisa; Severini, Simone; Britz, Wolfgang; Coletta, Attilio |
Abstract: | Recent literature reviews of empirical models for long-term investment analysis in agriculture identify gaps with regard to (i) separating investment and financing decisions, (ii) explicitly considering associated risk and temporal flexibility, and (iii) taking farm-level resource endowments and other constraints into account. Inspired by real options approaches, this paper therefore develops, step by step, a model which extends a simple net present value calculation to a farm-scale simulation model that considers temporal flexibility, different financing options, and downside risk aversion. We assess the different model variants empirically by analyzing investments in hazelnut orchards in Italy outside the traditional producing regions. The variants return quite different optimal results with respect to the scale and timing of the investment, its financing, and the expected NPV. The step-wise approach reveals which aspects drive these differences and underlines that considering temporal flexibility, different financing options, and riskiness can considerably improve traditional NPV analysis. |
Keywords: | Agribusiness, Agricultural Finance, Farm Management, Financial Economics, Land Economics/Use, Production Economics, Risk and Uncertainty |
Date: | 2020–05–19 |
URL: | http://d.repec.org/n?u=RePEc:ags:ubfred:303668&r=all |
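The step from a plain NPV rule to valuing temporal flexibility can be illustrated with a one-period wait-and-see option; the two-state setup and numbers are illustrative, not the paper's farm-scale model:

```python
def npv(cashflows, rate):
    # Present value of cashflows[t] received at the end of year t.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def invest_now_or_wait(cost, good, bad, p, rate):
    # Invest immediately at the expected payoff, or wait one year,
    # observe which state occurs, and invest only if it is good.
    # The value of temporal flexibility is wait - now when positive.
    now = -cost + (p * good + (1 - p) * bad) / (1 + rate)
    wait = p * max(0.0, -cost + good / (1 + rate)) / (1 + rate)
    return now, wait

# Immediate investment has a positive NPV, yet waiting is worth more
# because the bad state can be avoided entirely.
now, wait = invest_now_or_wait(cost=100.0, good=180.0, bad=60.0,
                               p=0.5, rate=0.10)
```

A naive NPV rule ("invest whenever now > 0") would trigger immediately here and destroy the option value, which is the core motivation for the real-options extension.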
By: | Schmidt, Florian Alexander |
Abstract: | Since 2017 the automotive industry has developed a high demand for ground truth data. Without this data, the ambitious goal of producing fully autonomous vehicles will remain out of reach. The self-driving car depends on self-learning algorithms, which in turn have to undergo a great deal of supervised training. This requires vast amounts of manual labour, performed by crowdworkers across the globe. As a consequence, the demand for training data is transforming the crowdsourcing industry. This study is an investigation into the dynamics of this shift and its impacts on the working conditions of the crowdworkers. |
Keywords: | crowdworking, artificial intelligence, self-driving cars, automotive industry, global labour markets, AI |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hbsfof:155&r=all |
By: | Emir Zunic; Kemal Korjenic; Kerim Hodzic; Dzenana Donko |
Abstract: | This paper presents a framework capable of accurately forecasting future sales in the retail industry and classifying the product portfolio according to the expected level of forecasting reliability. The proposed framework, that would be of great use for any company operating in the retail industry, is based on Facebook's Prophet algorithm and backtesting strategy. Real-world sales forecasting benchmark data obtained experimentally in a production environment in one of the biggest retail companies in Bosnia and Herzegovina is used to evaluate the framework and demonstrate its capabilities in a real-world use case scenario. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.07575&r=all |
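The backtesting strategy can be sketched as a rolling-origin evaluation. A naive last-value forecaster stands in for a fitted Prophet model here, and the reliability classification the paper describes could then threshold the resulting error; all names are illustrative:

```python
def rolling_backtest(series, forecaster, min_train=24, horizon=1):
    # Expanding-window backtest: at each origin t, "fit" on series[:t],
    # forecast horizon steps ahead, and record the absolute error.
    # Returns the mean absolute error over all origins.
    errors = []
    for t in range(min_train, len(series) - horizon + 1):
        pred = forecaster(series[:t], horizon)
        errors.append(abs(pred - series[t + horizon - 1]))
    return sum(errors) / len(errors)

# Naive forecaster: repeat the last observed value. A real pipeline
# would refit Prophet on each training window instead.
naive = lambda history, h: history[-1]

# A linear trend makes the naive one-step error exactly 1 at every origin.
mae = rolling_backtest([float(i) for i in range(30)], naive)
```

Refitting on each expanding window is what makes the error estimate honest (out of sample), which is the property the paper relies on when ranking products by forecast reliability.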
By: | Paola Tubaro (CNRS - Centre National de la Recherche Scientifique, TAU - TAckling the Underspecified - LRI - Laboratoire de Recherche en Informatique - UP11 - Université Paris-Sud - Paris 11 - CentraleSupélec - CNRS - Centre National de la Recherche Scientifique - Inria Saclay - Ile de France - Inria - Institut National de Recherche en Informatique et en Automatique, LRI - Laboratoire de Recherche en Informatique - UP11 - Université Paris-Sud - Paris 11 - CentraleSupélec - CNRS - Centre National de la Recherche Scientifique); Antonio Casilli (I3, une unité mixte de recherche CNRS (UMR 9217) - Institut interdisciplinaire de l’innovation - X - École polytechnique - Télécom ParisTech - MINES ParisTech - École nationale supérieure des mines de Paris - CNRS - Centre National de la Recherche Scientifique, Télécom ParisTech, IMT - Institut Mines-Télécom [Paris]); Marion Coville (Université de Poitiers) |
Abstract: | This paper sheds light on the role of digital platform labour in the development of today's artificial intelligence, predicated on data-intensive machine learning algorithms. The focus is on the specific ways in which the outsourcing of data tasks to myriad 'micro-workers', recruited and managed through specialized platforms, powers virtual assistants, self-driving vehicles and connected objects. Using qualitative data from multiple sources, we show that micro-work performs a variety of functions, between three poles that we label, respectively, 'artificial intelligence preparation', 'artificial intelligence verification' and 'artificial intelligence impersonation'. Because of the wide scope of application of micro-work, it is a structural component of contemporary artificial intelligence production processes - not an ephemeral form of support that may vanish once the technology reaches maturity. Through the lens of micro-work, we prefigure the policy implications of a future in which data technologies do not replace the human workforce but marginalize it and render it precarious. |
Keywords: | Digital platform labour, micro-work, datafied production processes, artificial intelligence, machine learning |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02554196&r=all |
By: | Gary Koop; Dimitris Korobilis |
Abstract: | This paper proposes a variational Bayes algorithm for computationally efficient posterior and predictive inference in time-varying parameter (TVP) models. Within this context we specify a new dynamic variable/model selection strategy for TVP dynamic regression models in the presence of a large number of predictors. This strategy allows for assessing in individual time periods which predictors are relevant (or not) for forecasting the dependent variable. The new algorithm is evaluated numerically using synthetic data and its computational advantages are established. Using macroeconomic data for the US we find that regression models that combine time-varying parameters with the information in many predictors have the potential to improve forecasts of price inflation over a number of alternative forecasting models. |
Keywords: | dynamic linear model; approximate posterior inference; dynamic variable selection; forecasting |
JEL: | C11 C13 C52 C53 C61 |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:gla:glaewp:2020_11&r=all |
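The synthetic-data evaluation rests on a data-generating process with random-walk coefficients, the canonical setup behind TVP regressions. A minimal sketch of such a generator (parameter values assumed; no variational-Bayes estimation shown):

```python
import random

def simulate_tvp(T=200, sigma_beta=0.05, sigma_eps=0.1, seed=1):
    # y_t = beta_t * x_t + eps_t, where beta_t follows a random walk:
    # beta_t = beta_{t-1} + eta_t, eta_t ~ N(0, sigma_beta^2).
    rng = random.Random(seed)
    beta, xs, ys, betas = 1.0, [], [], []
    for _ in range(T):
        beta += rng.gauss(0.0, sigma_beta)
        x = rng.gauss(0.0, 1.0)
        xs.append(x)
        betas.append(beta)
        ys.append(beta * x + rng.gauss(0.0, sigma_eps))
    return xs, ys, betas

xs, ys, betas = simulate_tvp()
```

Because the true coefficient path drifts, a constant-parameter regression is misspecified on such data, which is what the paper's dynamic variable-selection strategy is designed to exploit period by period.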
By: | Farbmacher, Helmut (Max Planck Society); Huber, Martin; Langen, Henrika (Faculty of Economics and Social Sciences); Spindler, Martin (Universität Hamburg) |
Abstract: | This paper combines causal mediation analysis with double machine learning to control for observed confounders in a data-driven way under a selection-on-observables assumption in a high-dimensional setting. We consider the average indirect effect of a binary treatment operating through an intermediate variable (or mediator) on the causal path between the treatment and the outcome, as well as the unmediated direct effect. Estimation is based on efficient score functions, which possess a multiple robustness property w.r.t. misspecifications of the outcome, mediator, and treatment models. This property is key for selecting these models by double machine learning, which is combined with data splitting to prevent overfitting in the estimation of the effects of interest. We demonstrate that the direct and indirect effect estimators are asymptotically normal and root-n consistent under specific regularity conditions and investigate the finite sample properties of the suggested methods in a simulation study when considering lasso as machine learner. We also provide an empirical application to the U.S. National Longitudinal Survey of Youth, assessing the indirect effect of health insurance coverage on general health operating via routine checkups as mediator, as well as the direct effect. We find a moderate short term effect of health insurance coverage on general health which is, however, not mediated by routine checkups. |
Keywords: | Mediation; direct and indirect effects; causal mechanisms; double machine learning; efficient score |
JEL: | C21 |
Date: | 2020–05–01 |
URL: | http://d.repec.org/n?u=RePEc:fri:fribow:fribow00515&r=all |
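The direct/indirect decomposition can be illustrated in the textbook linear special case, where the indirect effect is the product a*b. The paper's estimator instead uses efficient scores with double machine learning and cross-fitting; this sketch only shows what is being decomposed:

```python
import random

def ols2(y, x1, x2):
    # Closed-form OLS for y = b1*x1 + b2*x2 (no intercept; the
    # regressors below are zero-mean by construction).
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

def mediation(d, m, y):
    # Linear structural equations M = a*D + u and Y = c*D + b*M + v
    # give a direct effect c and an indirect (mediated) effect a*b.
    a = sum(di * mi for di, mi in zip(d, m)) / sum(di * di for di in d)
    c, b = ols2(y, d, m)
    return c, a * b

rng = random.Random(0)
d = [rng.gauss(0, 1) for _ in range(2000)]
m = [0.5 * di + rng.gauss(0, 0.5) for di in d]                        # true a = 0.5
y = [0.3 * di + 1.0 * mi + rng.gauss(0, 0.5) for di, mi in zip(d, m)] # c = 0.3, b = 1.0
direct, indirect = mediation(d, m, y)
```

Replacing these parametric regressions with machine learners and efficient score functions is precisely what buys the multiple-robustness property the abstract describes.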
By: | Daniel Borup (Aarhus University, CREATES and the Danish Finance Institute (DFI)); Bent Jesper Christensen (Aarhus University, CREATES and the Dale T. Mortensen Center); Nicolaj N. Mühlbach (Aarhus University and CREATES); Mikkel S. Nielsen (Columbia University) |
Abstract: | Random forest regression (RF) is an extremely popular tool for the analysis of high-dimensional data. Nonetheless, its benefits may be lessened in sparse settings, due to weak predictors, and a pre-estimation dimension reduction (targeting) step is required. We show that proper targeting controls the probability of placing splits along strong predictors, thus providing an important complement to RF’s feature sampling. This is supported by simulations using representative finite samples. Moreover, we quantify the immediate gain from targeting in terms of increased strength of individual trees. Macroeconomic and financial applications show that the bias-variance tradeoff implied by targeting, due to increased correlation among trees in the forest, is balanced at a medium degree of targeting, selecting the best 10–30% of commonly applied predictors. Improvements in predictive accuracy of targeted RF relative to ordinary RF are considerable, up to 12–13%, occurring both in recessions and expansions, particularly at long horizons. |
Keywords: | Random forests, LASSO, high-dimensional forecasting, weak predictors, targeted predictors |
JEL: | C53 C55 E17 G12 |
Date: | 2020–05–14 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2020-03&r=all |
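The targeting step can be sketched as a simple correlation screen that keeps only the strongest predictors before the forest is grown; the paper considers more refined targeting rules, so this is a simplified stand-in with assumed names:

```python
def target_predictors(candidates, y, keep_frac=0.2):
    # Rank candidate predictors by absolute correlation with the target
    # and keep the top fraction -- the pre-estimation dimension
    # reduction ("targeting") applied before random forest regression.
    def corr(x):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5
    ranked = sorted(candidates, key=lambda name: -abs(corr(candidates[name])))
    return ranked[:max(1, int(keep_frac * len(candidates)))]
```

By removing weak predictors up front, targeting raises the probability that the forest's random feature sampling offers a strong predictor at each split, which is the mechanism the abstract highlights.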
By: | John R. J. Thompson; Longlong Feng; R. Mark Reesor; Chuck Grace |
Abstract: | In Canada, financial advisors and dealers are required by provincial securities commissions, and by the self-regulatory organizations charged with direct regulation over investment dealers and mutual fund dealers respectively, to collect and maintain Know Your Client (KYC) information, such as their age or risk tolerance, for investor accounts. With this information, investors, under their advisor's guidance, make decisions on their investments which are presumed to be beneficial to their investment goals. Our unique dataset is provided by a financial investment dealer with over 50,000 accounts for over 23,000 clients. We use a modified behavioural finance recency, frequency, monetary model for engineering features that quantify investor behaviours, and machine learning clustering algorithms to find groups of investors that behave similarly. We show that the KYC information collected does not explain client behaviours, whereas trade and transaction frequency and volume are most informative. We believe the results shown herein encourage financial regulators and advisors to use more advanced metrics to better understand and predict investor behaviours. |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2005.03625&r=all |
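The recency-frequency-monetary features can be sketched in their standard form; the paper uses a modified behavioural-finance variant, so the exact definitions here are assumptions:

```python
from datetime import date

def rfm_features(trades, as_of):
    # Classic RFM triple for one account:
    #   recency   -- days since the most recent trade,
    #   frequency -- number of trades,
    #   monetary  -- average trade size.
    recency = (as_of - max(d for d, _ in trades)).days
    frequency = len(trades)
    monetary = sum(amt for _, amt in trades) / len(trades)
    return recency, frequency, monetary

# Two trades on an account, evaluated as of 1 May 2020.
trades = [(date(2020, 1, 1), 100.0), (date(2020, 3, 1), 200.0)]
r, f, m = rfm_features(trades, date(2020, 5, 1))
```

Feature vectors like this, computed per account, are what a clustering algorithm would then group to find investors that behave similarly.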
By: | Ikuya Takao (Aoyama Gakuin University); Shota Chikayama (Aoyama Gakuin University); Takashi Kaburagi (International Christian University); Satoshi Kumagai (Aoyama Gakuin University); Toshiyuki Matsumoto (Aoyama Gakuin University); Yosuke Kurihara (Aoyama Gakuin University) |
Abstract: | With the increase in the world's older population, fall accidents among the elderly have increased, with 32%–42% of annual fall accidents involving people aged 70 years and older. Because the impact of a fall can cause serious aftereffects in the elderly, early detection of the fall and appropriate treatment based on the post-fall condition are required. To provide appropriate treatment upon arrival at the scene, it is essential to assess in advance the behavior and condition of the elderly person after the fall (whether they are standing, conscious, or unconscious). Under these circumstances, many previous studies have proposed automatic fall detection systems for early detection. Our previous studies proposed an unconstrained fall detection system utilizing a microwave Doppler sensor; however, they did not address behavior assessment after a fall. Hence, this study proposes a system that assesses behavior after a fall. The study used two types of sensors: a microwave Doppler sensor attached to the ceiling and a piezoelectric ceramic installed on the floor. The output signal from the microwave Doppler sensor was extracted for 3 seconds after the piezoelectric ceramic detected the impact on the floor. The extracted data were downsampled to reduce their size and passed through an autoencoder for dimensionality compression. The compressed data were used as input to a neural network for multiclass discrimination, and the network's output was converted into probabilities by the softmax function. Finally, the class with the largest probability was taken as the motion after the fall. In the verification experiment, four subjects in their 20s participated. Each subject was asked either to stand up or not to stand up (while conscious or unconscious) after each fall, with 30 trials per condition, corresponding to the 3 classes to be discriminated. The sampling frequency, number of sampling points, and measurement time were set to 4000 Hz, 60000 points, and 15 seconds, respectively. From the collected data, 2400 points were obtained with a downsampling factor of five. The dimension of the hidden layer was set to 50. For the evaluation of the proposed system, leave-one-subject-out cross-validation was performed. The results show a correct classification rate of 1.0 for all subjects. |
Keywords: | After fall motion discrimination, Autoencoder |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:10012518&r=all |
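The final classification stage described above, a softmax over network outputs followed by picking the most probable class, can be sketched as follows (the class labels are assumed for illustration):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before
    # exponentiating, then normalise to probabilities.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_post_fall(logits,
                       classes=("stands up", "conscious", "unconscious")):
    # The class with the highest probability is the predicted
    # post-fall behaviour.
    probs = softmax(logits)
    return classes[probs.index(max(probs))]
```

In the study's pipeline, the logits would come from the neural network fed with the autoencoder-compressed Doppler signal.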
By: | Jorge Gallego; Mounu Prem; Juan F. Vargas |
Abstract: | The public health crisis caused by the COVID-19 pandemic, coupled with the subsequent economic emergency and social turmoil, has pushed governments to substantially and swiftly increase spending. Because of the pressing nature of the crisis, public procurement rules and procedures have been relaxed in many places in order to expedite transactions. However, this may also create opportunities for corruption. Using contract-level information on public spending from Colombia’s e-procurement platform, and a difference-in-differences identification strategy, we find that municipalities classified by a machine learning algorithm as traditionally more prone to corruption react to the pandemic-led spending surge by using a larger proportion of discretionary non-competitive contracts and increasing their average value. This is especially so in the case of contracts to procure crisis-related goods and services. Our evidence suggests that large negative shocks that require fast and massive spending may increase corruption, thus at least partially offsetting the mitigating effects of this fiscal instrument. |
Keywords: | Corruption, COVID-19, Public procurement, Machine learning |
JEL: | H57 H75 D73 I18 |
Date: | 2020–05–14 |
URL: | http://d.repec.org/n?u=RePEc:col:000518:018164&r=all |
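The identification strategy builds on the canonical 2x2 difference-in-differences comparison, sketched below; the paper's contract-level specification adds controls and an ML-based corruption classification, so this is only the core contrast:

```python
def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    # Difference-in-differences estimate from group means: the change in
    # the treated group (corruption-prone municipalities, post-pandemic)
    # minus the change in the control group.
    mean = lambda v: sum(v) / len(v)
    return ((mean(post_treat) - mean(pre_treat))
            - (mean(post_ctrl) - mean(pre_ctrl)))

# Share of non-competitive contracts rises by 0.3 in treated
# municipalities but only by 0.1 in controls: DiD estimate = 0.2.
effect = did([0.2, 0.2], [0.5, 0.5], [0.2, 0.2], [0.3, 0.3])
```

The control group's change nets out the common pandemic-driven shift in procurement, isolating the extra response of corruption-prone municipalities.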
By: | Antoine Bozio (IPP - Institut des politiques publiques, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Simon Rabaté (IPP - Institut des politiques publiques, Centraal Planbureau); Audrey Rain (IPP - Institut des politiques publiques); Maxime Tô (IPP - Institut des politiques publiques, UCL - University College of London [London], Institute for Fiscal Studies) |
Abstract: | A points system operating at a defined yield makes it possible to rethink how pension systems are managed. Instead of making repeated ad hoc changes to the parameters of the system, it is possible to define adjustment rules that offer guarantees to future pensioners, regarding not only their entitlements but also the long-term sustainability of the system. In this brief, based on simulations of a variety of shocks to the pension system, we study which management rules should be chosen. Two rules absolutely must be selected: first, the growth in the value of the pension point should match the growth in salaries; and second, converting points into a pension should take into account the life expectancy of each generation (cohort). A third rule, important for the long term, concerns the relationship between the rules for index-linking claimed pensions and the amounts of pensions when they start being claimed. This rule should serve as a guide for managers so that they can steer the system towards an equilibrium that is not based on too low an index-linking of pensions. Such management implies high institutional autonomy for the system, with managers accountable for the financial equilibrium and for the risks to pension revaluation. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:hal:pseptp:halshs-02516413&r=all |
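The two core management rules can be sketched as a simple update and conversion pair; the functional forms and numbers are illustrative, not the brief's actual parameters:

```python
def update_point_value(value, wage_growth):
    # Rule 1: the value of the pension point tracks average wage growth.
    return value * (1 + wage_growth)

def annual_pension(points, point_value, cohort_life_expectancy):
    # Rule 2: conversion divides the point capital by the claiming
    # cohort's remaining life expectancy, so longer-lived cohorts
    # receive a lower annual amount for the same points.
    return points * point_value / cohort_life_expectancy

# A worker with 1000 points after one year of 2% wage growth, in a
# cohort expecting 20 more years, versus a longer-lived cohort.
v = update_point_value(1.0, 0.02)
p20 = annual_pension(1000, v, 20.0)
p25 = annual_pension(1000, v, 25.0)
```

Making the conversion cohort-specific is what keeps the system actuarially balanced as life expectancy rises, without ad hoc parameter changes.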