on Computational Economics
Issue of 2019‒11‒18
eleven papers chosen by
By: | Aubry, Mathieu; Kräussl, Roman; Manso, Gustavo; Spaenjers, Christophe |
Abstract: | We study the accuracy and usefulness of automated (i.e., machine-generated) valuations for illiquid and heterogeneous real assets. We assemble a database of 1.1 million paintings auctioned between 2008 and 2015. We use a popular machine-learning technique - neural networks - to develop a pricing algorithm based on both non-visual and visual artwork characteristics. Our out-of-sample valuations predict auction prices dramatically better than valuations based on a standard hedonic pricing model. Moreover, they help explain price levels and sale probabilities even after conditioning on auctioneers' pre-sale estimates. Machine learning is particularly helpful for assets that are associated with high price uncertainty. It can also correct human experts' systematic biases in expectations formation - and identify ex ante situations in which such biases are likely to arise.
Keywords: | asset valuation, auctions, experts, big data, machine learning, computer vision, art
JEL: | C50 D44 G12 Z11 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:cfswop:635&r=all |
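A minimal sketch of the kind of machine-generated valuation exercise described above: a small feed-forward neural network trained on artwork characteristics to predict (log) auction prices, benchmarked against a hedonic linear regression. The features, data-generating process and network architecture below are invented placeholders, not the authors' specification.

    # Illustrative sketch only: neural-network valuation versus a hedonic (linear)
    # benchmark. Features, data and architecture are hypothetical placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 5_000
    # Hypothetical non-visual characteristics: artist reputation index,
    # canvas area, year of creation, major-auction-house dummy.
    X = np.column_stack([
        rng.normal(size=n),                # artist reputation (standardised)
        rng.lognormal(sigma=0.5, size=n),  # surface area
        rng.integers(1850, 2010, size=n),  # year of creation
        rng.integers(0, 2, size=n),        # sold at a major auction house?
    ])
    # Synthetic log hammer price with a non-linear interaction the linear model misses.
    y = (8 + 0.6 * X[:, 0] + 0.3 * np.log(X[:, 1])
         + 0.4 * X[:, 0] * X[:, 3] + rng.normal(scale=0.5, size=n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    scaler = StandardScaler().fit(X_tr)

    hedonic = LinearRegression().fit(X_tr, y_tr)
    net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                       random_state=0).fit(scaler.transform(X_tr), y_tr)

    print("hedonic out-of-sample R^2:", r2_score(y_te, hedonic.predict(X_te)))
    print("neural-net out-of-sample R^2:",
          r2_score(y_te, net.predict(scaler.transform(X_te))))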
By: | Johann Pfitzinger (Department of Economics, Stellenbosch University); Nico Katzke (Department of Economics, Stellenbosch University & Prescient Securities, Cape Town) |
Abstract: | Hierarchical Risk Parity (HRP) is a risk-based portfolio optimisation algorithm, which has been shown to generate diversified portfolios with robust out-of-sample properties without the need for a positive-definite return covariance matrix (Lopez de Prado 2016). The algorithm applies machine learning techniques to identify the underlying hierarchical correlation structure of the portfolio, allowing clusters of similar assets to compete for capital. The resulting allocation is both well-diversified over risk sources and intuitively appealing. This paper proposes a method of fully exploiting the information created by the clustering process, achieving enhanced out-of-sample risk and return characteristics. In addition, a practical approach to calculating HRP weights under box and group constraints is introduced. A comprehensive set of portfolio simulations over 6 equity universes demonstrates the appeal of the algorithm for portfolios consisting of 20 - 200 assets. HRP delivers highly diversified allocations with low volatility, low portfolio turnover and competitive performance metrics. |
Keywords: | Risk Parity, Diversification, Portfolio Optimisation, Clustering |
JEL: | G11 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers328&r=all |
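For reference, a compact sketch of the baseline Hierarchical Risk Parity allocation (following Lopez de Prado, 2016) that the paper builds on: correlation-distance clustering, quasi-diagonal ordering and recursive bisection, applied to simulated returns. It does not implement the authors' extensions or the constrained-weight variant.

    # Baseline HRP sketch on simulated data; not the Pfitzinger & Katzke extension.
    import numpy as np
    import pandas as pd
    import scipy.cluster.hierarchy as sch
    from scipy.spatial.distance import squareform

    def inverse_variance_weights(cov):
        iv = 1.0 / np.diag(cov)
        return iv / iv.sum()

    def cluster_variance(cov, items):
        sub = cov[np.ix_(items, items)]
        w = inverse_variance_weights(sub)
        return float(w @ sub @ w)

    def hrp_weights(cov, corr):
        # 1) correlation-based distance and hierarchical clustering
        dist = np.sqrt(np.clip(0.5 * (1.0 - corr), 0.0, None))
        link = sch.linkage(squareform(dist, checks=False), method="single")
        order = sch.leaves_list(link)            # quasi-diagonal ordering
        # 2) recursive bisection: split clusters, allocate inversely to cluster risk
        w = pd.Series(1.0, index=order)
        clusters = [list(order)]
        while clusters:
            clusters = [c[j:k] for c in clusters if len(c) > 1
                        for j, k in ((0, len(c) // 2), (len(c) // 2, len(c)))]
            for left, right in zip(clusters[::2], clusters[1::2]):
                var_l = cluster_variance(cov, left)
                var_r = cluster_variance(cov, right)
                alpha = 1.0 - var_l / (var_l + var_r)
                w[left] *= alpha
                w[right] *= 1.0 - alpha
        return w.sort_index()

    # Simulated return covariance for 10 assets
    rng = np.random.default_rng(1)
    rets = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10)) * 0.01
    cov, corr = np.cov(rets.T), np.corrcoef(rets.T)
    print(hrp_weights(cov, corr).round(4))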
By: | Suss, Joel (Bank of England); Treitel, Henry (Bank of England) |
Abstract: | Using novel data and machine learning techniques, we develop an early warning system for bank distress. The main input variables come from confidential regulatory returns, and our measure of distress is derived from supervisory assessments of bank riskiness from 2006 through to 2012. We contribute to a nascent academic literature utilising new methodologies to anticipate negative firm outcomes, comparing and contrasting classic linear regression techniques with modern machine learning approaches that are able to capture complex non-linearities and interactions. We find that the random forest algorithm significantly and substantively outperforms other models when utilising the AUC and Brier score as performance metrics. We go on to vary the relative cost of false negatives (missing actual cases of distress) and false positives (wrongly predicting distress) for discrete decision thresholds, finding that the random forest again outperforms the other models. We also contribute to the literature examining drivers of bank distress, using state-of-the-art machine learning interpretability techniques, and demonstrate the additional performance gains from ensembling techniques. Overall, this paper makes important contributions, not least of which is practical: bank supervisors can utilise our findings to anticipate firm weaknesses and take appropriate mitigating action ahead of time.
Keywords: | Machine learning; bank distress; early warning system |
JEL: | C14 C33 C52 C53 G21 |
Date: | 2019–10–04 |
URL: | http://d.repec.org/n?u=RePEc:boe:boeewp:0831&r=all |
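A minimal sketch of the model-comparison exercise the abstract describes: a logistic regression benchmark against a random forest, scored by AUC and the Brier score. The synthetic data stand in for the confidential regulatory returns and are not meant to reproduce the paper's results.

    # Benchmark comparison sketch: logit versus random forest, AUC and Brier score.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, brier_score_loss
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                               weights=[0.9, 0.1], random_state=0)  # distress is rare
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {
        "logit": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    }
    for name, model in models.items():
        p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        print(f"{name:14s}  AUC={roc_auc_score(y_te, p):.3f}  "
              f"Brier={brier_score_loss(y_te, p):.3f}")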
By: | Xiao, Tim |
Abstract: | The incremental risk charge (IRC) is a new regulatory requirement from the Basel Committee in response to the recent financial crisis. Notably, few models for IRC have been developed in the literature. This paper proposes a methodology consisting of two Monte Carlo simulations. The first Monte Carlo simulation simulates default, migration, and concentration in an integrated way. Combined with full re-valuation, it generates the loss distribution at the first liquidity horizon for a subportfolio. The second Monte Carlo simulation consists of random draws based on the constant level of risk assumption. It convolves copies of the single loss distribution to produce the one-year loss distribution. The aggregation of different subportfolios with different liquidity horizons is addressed. Moreover, the methodology for equity is also included, even though it is optional in IRC.
Date: | 2018–08–16 |
URL: | http://d.repec.org/n?u=RePEc:osf:frenxi:6b3hu&r=all |
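A highly simplified sketch of the two-simulation structure outlined above: simulate a sub-portfolio loss distribution over one liquidity horizon, then convolve independent copies of it (the constant level of risk assumption) up to one year and read off a high quantile. Portfolio composition, default probabilities and loss-given-default are invented; migration and concentration effects are omitted.

    # Toy two-stage Monte Carlo in the spirit of the IRC methodology described above.
    import numpy as np

    rng = np.random.default_rng(42)
    n_scenarios = 100_000
    liquidity_horizon_months = 3
    horizons_per_year = 12 // liquidity_horizon_months

    # --- Simulation 1: losses over one liquidity horizon -------------------------
    # Toy portfolio: 50 positions, each defaults with small probability and then
    # loses a random fraction of its (equal) exposure; migrations are ignored here.
    n_positions, exposure, pd_ = 50, 1.0, 0.02
    defaults = rng.random((n_scenarios, n_positions)) < pd_
    lgd = rng.beta(2, 2, size=(n_scenarios, n_positions))
    horizon_losses = (defaults * lgd * exposure).sum(axis=1)

    # --- Simulation 2: roll forward under constant level of risk -----------------
    # Draw independent copies of the horizon loss and sum them up to one year.
    draws = rng.choice(horizon_losses, size=(n_scenarios, horizons_per_year))
    annual_losses = draws.sum(axis=1)

    irc = np.quantile(annual_losses, 0.999)   # 99.9% quantile, as in the IRC rules
    print(f"one-year 99.9% loss quantile (toy IRC): {irc:.2f}")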
By: | Correia, Isabel; Melo, Teresa |
Abstract: | We address a stochastic multi-period facility location problem with two customer segments, each having distinct service requirements. While customers in one segment receive preferred service, customers in the other segment accept delayed deliveries as long as lateness does not exceed a pre-specified threshold. In this case, late shipments incur additional tardiness penalty costs. The objective is to define a schedule for facility deployment and capacity scalability that satisfies all customer demands at minimum cost. Facilities can have their capacities adjusted over the planning horizon by incrementally increasing or reducing the number of modular units they hold. These two features, capacity expansion and capacity contraction, can substantially improve the flexibility to respond to demand changes. Future customer demands are assumed to be unknown. We propose two different frameworks for the capacity scalability decisions and present a two-stage stochastic model for each of them. When demand uncertainty is captured by a finite set of scenarios, each with a known probability of occurrence, we develop the extensive forms of the associated stochastic programs. Additional inequalities are derived to enhance the original formulations. An extensive computational study with randomly generated instances solved with a general-purpose optimization solver demonstrates the usefulness of the proposed enhancements. Specifically, a considerably larger number of instances can be solved to optimality in much shorter computing times. Useful insights are also provided on the impact of the two different frameworks for planning capacity adjustments on the network configuration and total cost.
Keywords: | facility location, dynamic capacity adjustment, delivery lateness, stochastic programming, valid inequalities
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:htwlog:17&r=all |
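To illustrate the modelling approach referred to above, a toy extensive-form two-stage stochastic facility location model: first-stage opening decisions, second-stage scenario-dependent shipments, demand and capacity constraints. It is a single-period simplification without modular capacity adjustments or tardiness penalties, uses invented data, and is solved with the open-source PuLP/CBC toolchain rather than the solver used in the paper.

    # Toy extensive-form two-stage stochastic facility location (invented data).
    import pulp

    facilities = ["F1", "F2", "F3"]
    customers = ["C1", "C2"]
    scenarios = {"low": 0.5, "high": 0.5}              # scenario probabilities
    open_cost = {"F1": 100, "F2": 120, "F3": 90}
    capacity = {"F1": 60, "F2": 80, "F3": 50}
    ship_cost = dict(zip([(i, j) for i in facilities for j in customers],
                         [4, 6, 3, 5, 7, 2]))
    demand = {"low": {"C1": 40, "C2": 30}, "high": {"C1": 70, "C2": 50}}

    prob = pulp.LpProblem("stochastic_facility_location", pulp.LpMinimize)
    y = pulp.LpVariable.dicts("open", facilities, cat="Binary")          # 1st stage
    x = pulp.LpVariable.dicts(
        "ship",
        [(i, j, s) for i in facilities for j in customers for s in scenarios],
        lowBound=0)                                                      # 2nd stage

    # Objective: opening cost plus expected shipping cost over scenarios
    prob += (pulp.lpSum(open_cost[i] * y[i] for i in facilities)
             + pulp.lpSum(scenarios[s] * ship_cost[i, j] * x[i, j, s]
                          for i in facilities for j in customers for s in scenarios))

    for s in scenarios:
        for j in customers:   # meet demand in every scenario
            prob += pulp.lpSum(x[i, j, s] for i in facilities) >= demand[s][j]
        for i in facilities:  # respect capacity of opened facilities only
            prob += pulp.lpSum(x[i, j, s] for j in customers) <= capacity[i] * y[i]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("opened:", [i for i in facilities if y[i].value() > 0.5])
    print("expected total cost:", pulp.value(prob.objective))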
By: | Marco Taboga (Bank of Italy) |
Abstract: | We analyze the potential determinants of the size of venture capital financing rounds. We employ stacked generalization and boosted trees, two of the most powerful machine learning tools in terms of predictive power, to examine a large dataset on start-ups, venture capital funds and financing transactions. We find that the size of financing rounds is mainly associated with the characteristics of the firms being financed and with the features of the countries in which the firms are headquartered. Cross-country differences in the degree of development of the venture capital industry, while highly correlated with the size of funding rounds, are not significant once we control for other country-level characteristics. We discuss how our findings contribute to the debate about policy interventions aimed at stimulating start-up financing. |
Keywords: | venture capital, financial institutions, country characteristics, machine learning |
JEL: | G24 F0 C19 |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1243_19&r=all |
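A brief sketch of the two learners named in the abstract, gradient-boosted trees and a stacked ensemble, fitted with scikit-learn on synthetic regression data. The actual start-up, fund and country-level features are not reproduced here.

    # Boosted trees and stacked generalization on synthetic data (illustration only).
    from sklearn.datasets import make_regression
    from sklearn.ensemble import (GradientBoostingRegressor, StackingRegressor,
                                  RandomForestRegressor)
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=2000, n_features=25, noise=10.0, random_state=0)

    boosted = GradientBoostingRegressor(random_state=0)
    stacked = StackingRegressor(
        estimators=[("gbm", GradientBoostingRegressor(random_state=0)),
                    ("rf", RandomForestRegressor(n_estimators=200, random_state=0))],
        final_estimator=RidgeCV())   # meta-learner combines the base predictions

    for name, model in [("boosted trees", boosted), ("stacking", stacked)]:
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {score:.3f}")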
By: | Veale, Michael; Binns, Reuben; Van Kleek, Max |
Abstract: | Cite as Michael Veale, Reuben Binns and Max Van Kleek (2018) Some HCI Priorities for GDPR-Compliant Machine Learning. The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI'18, 22 April 2018, Montreal, Canada. In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR)---a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing. |
Date: | 2018–03–19 |
URL: | http://d.repec.org/n?u=RePEc:osf:lawarx:wm6yk&r=all |
By: | Kambale Kavese (Eastern Cape Socio Economic Consultation Council); Andrew Phiri (Department of Economics, Nelson Mandela University) |
Abstract: | An “Economy-Wide Leontief Multiplier Based Model” calibrated on a Supply and Use framework and a “Micro-Simulation Model” are used to assess post-recession trends in macroeconomic, labour, and fiscal multipliers for South Africa. The simulations show that during the post-recession era, the effect of an exogenous shock to the economy, such as an increase in investment spending, although positive, yielded smaller returns in terms of tax revenue, job creation and economic growth. At sector level, these results demonstrate how the inter-industry and industry-consumer links have weakened in the post-recession period. At policy level, the findings imply that the persisting low-growth trajectory associated with weaker inter-industry linkages could be exacerbated, while the fiscal austerity measures associated with weaker forward and backward tax linkages could be prolonged. We recommend that government follow a priorities-based spending policy that yields optimal socioeconomic returns.
Keywords: | Supply and Use (SUT) tables; fiscal multipliers; employment multipliers; South Africa.
JEL: | C67 D57 E62 J21 R15 |
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:mnd:wpaper:1910&r=all |
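As a reminder of the mechanics behind an economy-wide Leontief multiplier model, a small worked example: with technical-coefficient matrix A, gross output satisfies x = (I - A)^(-1) f, and the column sums of the Leontief inverse give simple output multipliers. The three-sector matrix below is invented, not taken from the South African Supply and Use tables.

    # Minimal Leontief multiplier illustration with an invented 3-sector economy.
    import numpy as np

    # A[i, j]: input from sector i needed per unit of output of sector j
    A = np.array([[0.10, 0.25, 0.05],
                  [0.20, 0.05, 0.10],
                  [0.05, 0.15, 0.20]])
    I = np.eye(3)
    leontief_inverse = np.linalg.inv(I - A)       # (I - A)^-1

    # Column sums give the simple (Type I) output multiplier of each sector:
    # total economy-wide output generated per unit of final demand for that sector.
    multipliers = leontief_inverse.sum(axis=0)
    for sector, m in zip(["agriculture", "manufacturing", "services"], multipliers):
        print(f"{sector:14s} output multiplier: {m:.3f}")

    # Gross output required to meet an exogenous final-demand shock
    # (e.g. an increase in investment spending in one sector)
    final_demand_shock = np.array([0.0, 10.0, 0.0])
    print("gross output response:", leontief_inverse @ final_demand_shock)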
By: | Sokbae Lee; Yuan Liao; Myung Hwan Seo; Youngki Shin |
Abstract: | We propose a novel two-regime regression model where the switching between the regimes is driven by a vector of possibly unobservable factors. When the factors are latent, we estimate them by the principal component analysis of a panel data set. We show that the optimization problem can be reformulated as mixed integer optimization and present two alternative computational algorithms. We derive the asymptotic distributions of the resulting estimators under the scheme that the threshold effect shrinks to zero. In particular, we establish a phase transition that describes the effect of first stage factor estimation as the cross-sectional dimension of panel data increases relative to the time-series dimension. Moreover, we develop a consistent factor selection procedure with a penalty term on the number of factors and present bootstrap methods for carrying out inference and testing linearity with the aid of efficient computational algorithms. Finally, we illustrate our methods via numerical studies. |
Keywords: | threshold regression; mixed integer optimization; phase transition; oracle properties; L0-penalization |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:snu:ioerwp:no128&r=all |
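A simplified sketch of the two-regime idea: estimate a latent factor by principal components of a panel, then choose the threshold that minimises the sum of squared residuals over a grid. The grid search stands in for the paper's mixed integer optimization, the single-factor setup is a deliberate simplification, and all data are simulated.

    # PCA factor estimation plus a grid-search threshold (simplified illustration).
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 300, 50
    factor = rng.normal(size=T)                         # latent switching variable
    panel = np.outer(factor, rng.normal(size=N)) + rng.normal(scale=0.5, size=(T, N))

    # PCA (via SVD) recovers the factor up to scale and sign
    u, s, vt = np.linalg.svd(panel - panel.mean(0), full_matrices=False)
    factor_hat = u[:, 0] * np.sqrt(T)

    # Two-regime outcome: the slope shifts when the true factor crosses 0.5
    x = rng.normal(size=T)
    y = 1.0 * x + 2.0 * x * (factor > 0.5) + rng.normal(scale=0.3, size=T)

    def ssr_at(gamma):
        # least-squares fit with a regime-specific slope, given threshold gamma
        regime = (factor_hat > gamma).astype(float)
        X = np.column_stack([x, x * regime])
        beta, ssr, *_ = np.linalg.lstsq(X, y, rcond=None)
        return ssr[0] if ssr.size else np.inf

    grid = np.quantile(factor_hat, np.linspace(0.15, 0.85, 71))
    gamma_hat = min(grid, key=ssr_at)
    print("estimated threshold on the estimated factor:", round(float(gamma_hat), 3))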
By: | Daniel Jacob; Wolfgang Karl H\"ardle; Stefan Lessmann |
Abstract: | The paper proposes an estimator to make inference on key features of heterogeneous treatment effects sorted by impact groups (GATES) for non-randomised experiments. Observational studies are standard in policy evaluation from labour markets, educational surveys, and other empirical studies. To control for potential selection bias we implement a doubly-robust estimator in the first stage. Keeping the flexibility to use any machine learning method to learn the conditional mean functions as well as the propensity score, we also use machine learning methods to learn a function for the conditional average treatment effect. The group average treatment effect is then estimated via a parametric linear model to provide p-values and confidence intervals. The result is a best linear predictor for effect heterogeneity based on impact groups. A further extension, cross-splitting and averaging for each observation, avoids biases introduced through sample splitting. The advantage of the proposed method is a robust estimation of heterogeneous group treatment effects under mild assumptions, which is comparable with other models and keeps flexibility in the choice of machine learning methods. At the same time, its ability to deliver interpretable results is ensured.
Date: | 2019–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1911.02688&r=all |
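A compact sketch of the GATES-with-doubly-robust-scores idea the abstract outlines: learn the propensity score and conditional means with a machine learning method, form AIPW (doubly robust) scores, sort observations into impact groups by a predicted treatment effect, and report group averages of the scores with normal-approximation confidence intervals. Data are simulated, and the paper's cross-splitting refinement is omitted for brevity.

    # Simplified GATES sketch with doubly-robust (AIPW) scores on simulated data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 4000
    X = rng.normal(size=(n, 5))
    propensity = 1 / (1 + np.exp(-X[:, 0]))            # selection on observables
    D = rng.random(n) < propensity
    tau = 1.0 + X[:, 1]                                 # heterogeneous effect
    Y = X[:, 0] + tau * D + rng.normal(size=n)

    # Nuisance functions (any machine learning learner could be plugged in here)
    e_hat = RandomForestClassifier(n_estimators=200, random_state=0)\
        .fit(X, D).predict_proba(X)[:, 1].clip(0.05, 0.95)
    m1 = RandomForestRegressor(random_state=0).fit(X[D], Y[D]).predict(X)
    m0 = RandomForestRegressor(random_state=0).fit(X[~D], Y[~D]).predict(X)

    # AIPW / doubly-robust score: group means of this score estimate the GATES
    score = (m1 - m0
             + D * (Y - m1) / e_hat
             - (~D) * (Y - m0) / (1 - e_hat))

    cate_hat = m1 - m0                                  # proxy used to form impact groups
    groups = np.digitize(cate_hat, np.quantile(cate_hat, [0.25, 0.5, 0.75]))
    for g in range(4):
        s = score[groups == g]
        se = s.std(ddof=1) / np.sqrt(len(s))
        print(f"impact group {g + 1}: GATES = {s.mean():.2f} "
              f"(95% CI {s.mean() - 1.96 * se:.2f} to {s.mean() + 1.96 * se:.2f})")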
By: | Gogas, Periklis (Democritus University of Thrace, Department of Economics); Papadimitriou, Theophilos (Democritus University of Thrace, Department of Economics); Sofianos, Emmanouil (Democritus University of Thrace, Department of Economics) |
Abstract: | The issue of whether or not money affects real economic activity (money neutrality) has attracted significant empirical attention over the last five decades. If money is neutral even in the short run, then monetary policy is ineffective and its role limited. If money matters, it will be able to forecast real economic activity. In this study, we test the traditional simple sum monetary aggregates that are commonly used by central banks all over the world and also the theoretically correct Divisia monetary aggregates proposed by the Barnett Critique (Chrystal and MacDonald, 1994; Belongia and Ireland, 2014), both at three levels of aggregation: M1, M2, and M3. We use them to directionally forecast the Eurocoin index, a monthly index that measures the growth rate of the euro area GDP. The data span from January 2001 to June 2018. The forecasting methodology we employ is support vector machines (SVM) from the area of machine learning. The empirical results show that (a) the Divisia monetary aggregates outperform the simple sum ones and (b) both monetary aggregates can directionally forecast the Eurocoin index, reaching a top accuracy of 82.05% and providing evidence against money neutrality even in the short term.
Keywords: | Eurocoin; simple sum; Divisia; SVM; machine learning; forecasting; money neutrality |
JEL: | E00 E27 E42 E51 E58 |
Date: | 2019–07–05 |
URL: | http://d.repec.org/n?u=RePEc:ris:duthrp:2016_004&r=all |
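A sketch of directional forecasting with a support vector machine in the spirit of the exercise above: predict whether an activity index rises or falls next month from lagged money-growth features, validated with time-ordered splits. The simulated series are stand-ins for the Divisia/simple-sum aggregates and the Eurocoin index, so the reported accuracies are not comparable to the paper's 82.05%.

    # Directional forecasting with an SVM classifier on simulated monthly data.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import TimeSeriesSplit, cross_val_score

    rng = np.random.default_rng(0)
    T = 210                                   # roughly monthly data, 2001-2018
    money_growth = rng.normal(size=(T, 3))    # e.g. M1, M2, M3 growth rates
    signal = money_growth @ np.array([0.6, 0.3, 0.1])
    direction = (signal + rng.normal(scale=0.8, size=T) > 0).astype(int)

    # Use lagged money growth to predict next month's direction of the index
    X, y = money_growth[:-1], direction[1:]

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    cv = TimeSeriesSplit(n_splits=5)          # respect time ordering when validating
    acc = cross_val_score(svm, X, y, cv=cv, scoring="accuracy")
    print("directional accuracy per fold:", np.round(acc, 3))
    print("mean directional accuracy:", round(float(acc.mean()), 3))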