NEP: New Economics Papers on Computational Economics
Issue of 2019‒11‒04
nine papers chosen by
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Jianghao Chu (UCR); Aman Ullah (UCR) |
Abstract: | Freund and Schapire (1997) introduced "Discrete AdaBoost" (DAB), which has been mysteriously effective for high-dimensional binary classification or binary prediction. In an effort to understand this mystery, Friedman, Hastie and Tibshirani (FHT, 2000) show that DAB can be understood as statistical learning which builds an additive logistic regression model via Newton-like updating minimization of the exponential loss. From this statistical point of view, FHT proposed three modifications of DAB, namely Real AdaBoost (RAB), LogitBoost (LB), and Gentle AdaBoost (GAB). DAB, RAB, LB, and GAB all solve for the logistic regression via different algorithmic designs and different objective functions. The RAB algorithm uses class probability estimates to construct real-valued contributions of the weak learner, LB is an adaptive Newton algorithm by stagewise optimization of the Bernoulli likelihood, and GAB is an adaptive Newton algorithm via stagewise optimization of the exponential loss. The authors of FHT also published an influential textbook, The Elements of Statistical Learning (ESL, 2001 and 2008). A companion book, An Introduction to Statistical Learning (ISL) by James et al. (2013), was published with applications in R. However, neither ESL nor ISL (e.g., Sections 4.5 and 4.6) covers these four AdaBoost algorithms, while FHT provided some simulation and empirical studies to compare these methods. Given numerous potential applications, we believe it would be useful to collect the R libraries of these AdaBoost algorithms, as well as more recently developed extensions to AdaBoost for probability prediction, with examples and illustrations. Therefore, the goal of this chapter is to do just that, i.e., (i) to provide a user guide to these alternative AdaBoost algorithms with a step-by-step tutorial in R (in a way similar to ISL, e.g., Section 4.6), (ii) to compare AdaBoost with alternative machine learning classification tools such as the deep neural network (DNN), logistic regression with LASSO, and SIM-RODEO, and (iii) to demonstrate empirical applications in economics, such as prediction of business cycle turning points and directional prediction of stock price indexes. We revisit Ng (2014), who used DAB for prediction of business cycle turning points, by comparing the results from RAB, LB, GAB, DNN, logistic regression and SIM-RODEO. |
Keywords: | AdaBoost, R, Binary classification, Logistic regression, DAB, RAB, LB, GAB, DNN |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201907&r=all |
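The four boosting variants above differ mainly in their loss function and update rule. As a point of reference, here is a minimal Python sketch of the original Discrete AdaBoost (DAB) with decision stumps as weak learners; the chapter itself works in R, so this only illustrates the algorithm, not the chapter's code. Labels in {-1, +1} and the stump learner are assumptions of the sketch.

```python
# Minimal Discrete AdaBoost (Freund & Schapire, 1997) sketch.
# Assumes labels y take values in {-1, +1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def discrete_adaboost(X, y, n_rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)                          # observation weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)  # weak learner
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)    # weighted error
        if err == 0 or err >= 0.5:                   # degenerate learner: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)        # learner weight
        w *= np.exp(-alpha * y * pred)               # up-weight mistakes
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    # Sign of the additive fit F(x) = sum_m alpha_m * f_m(x)
    F = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(F)
```

The exponential reweighting step is exactly the piece FHT reinterpret as stagewise minimization of the exponential loss, which is what motivates the RAB, LB, and GAB variants.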
By: | Domenico Delli Gatti; Jakob Grazzini |
Abstract: | We propose two novel methods to “bring ABMs to the data”. First, we put forward a new Bayesian procedure to estimate the numerical values of ABM parameters that takes into account the time structure of simulated and observed time series. Second, we propose a method to forecast aggregate time series using data obtained from the simulation of an ABM. We apply our methodological contributions to a medium-scale macro agent-based model. We show that the estimated model is capable of reproducing features of the observed data and of forecasting the output gap and investment one period ahead with a remarkable degree of accuracy. |
Keywords: | agent-based models, estimation, forecasting |
JEL: | C11 C13 C53 C63 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_7894&r=all |
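The abstract does not spell out the estimation procedure, so the sketch below is not the authors' method: it is a generic likelihood-free baseline, plain ABC rejection sampling, with an autocovariance-based summary chosen only to echo the paper's emphasis on the time structure of the simulated and observed series. The toy AR(1) stand-in for the ABM, the uniform prior, and the tolerance eps are all illustrative assumptions.

```python
# Generic ABC rejection sketch for Bayesian ABM parameter estimation.
import numpy as np

rng = np.random.default_rng(0)

def simulate_abm(theta, T=200):
    # Stand-in "ABM": an AR(1) series; a real ABM would be simulated here.
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = theta * x[t - 1] + rng.normal()
    return x

def summary(x):
    # Time-structure summary: variance and first two autocovariances.
    x = x - x.mean()
    return np.array([np.mean(x * x),
                     np.mean(x[1:] * x[:-1]),
                     np.mean(x[2:] * x[:-2])])

def abc_posterior(observed, n_draws=20000, eps=0.5):
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-0.95, 0.95)            # prior draw
        s_sim = summary(simulate_abm(theta))
        if np.linalg.norm(s_sim - s_obs) < eps:     # keep close matches
            accepted.append(theta)
    return np.array(accepted)

observed = simulate_abm(0.6)
post = abc_posterior(observed)
print(post.mean(), post.std())                      # posterior mean / spread
```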
By: | Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute); Kathrin Glau (Queen Mary University of London); Yuji Nakatsukasa (University of Oxford); Francesco Statti (Ecole Polytechnique Fédérale de Lausanne) |
Abstract: | We propose a methodology for computing single and multi-asset European option prices, and more generally expectations of scalar functions of (multivariate) random variables. This new approach combines the ability of Monte Carlo simulation to handle high-dimensional problems with the efficiency of function approximation. Specifically, we first generalize the recently developed method for multivariate integration in [arXiv:1806.05492] to integration with respect to probability measures. The method is based on the principle “approximate and integrate” in three steps: i) sample the integrand at points in the integration domain; ii) approximate the integrand by solving a least-squares problem; iii) integrate the approximate function. In high-dimensional applications we face memory limitations due to the large storage requirements of step ii). Combining weighted sampling and the randomized extended Kaczmarz algorithm, we obtain a new efficient approach to solving large-scale least-squares problems. Our convergence and cost analysis, along with numerical experiments, shows the effectiveness of the method in both low and high dimensions, and under the assumption of a limited number of available simulations. |
Keywords: | Monte Carlo, Monte Carlo under budget constraints, variance reduction, multi-asset options, Kaczmarz algorithm, weighted sampling, large-scale least-squares problems |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1954&r=all |
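The three-step "approximate and integrate" principle is easy to see on a toy problem. The minimal sketch below prices a one-dimensional European call by sampling the payoff, fitting it by least squares in a polynomial basis of the log-price, and integrating the fit against a large fresh sample. The paper's weighted sampling and randomized extended Kaczmarz solver are replaced here by numpy's dense lstsq for brevity, and all parameter values are assumptions.

```python
# "Approximate and integrate" sketch: European call under a lognormal S_T.
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 100.0, 0.01, 0.2, 1.0

def sample_ST(n):
    z = rng.normal(size=n)
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

def basis(s, degree=8):
    # Polynomial features in log-price (Vandermonde design matrix).
    return np.vander(np.log(s), degree + 1)

# i) sample the integrand at points in the integration domain
s_train = sample_ST(2000)
payoff = np.maximum(s_train - K, 0.0)

# ii) approximate the integrand by solving a least-squares problem
coef, *_ = np.linalg.lstsq(basis(s_train), payoff, rcond=None)

# iii) integrate the approximation: the expectation under the pricing
# measure, estimated with a large fresh sample of cheap basis evaluations
# (no further payoff evaluations are needed).
s_big = sample_ST(500_000)
price = np.exp(-r * T) * (basis(s_big) @ coef).mean()
print(price)   # roughly the Black-Scholes value (about 8.43)
```

In realistic settings the payoff is the expensive object, so step iii) being payoff-free is where the efficiency gain over plain Monte Carlo comes from.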
By: | Hinterlang, Natascha |
JEL: | C45 C53 E47 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc19:203503&r=all |
By: | Julia M. Puaschunder (The New School, Department of Economics) |
Abstract: | Legal scholarship has existed since the beginnings of science. Attempts to quantify the economic consequences of legal codifications have been made in the vibrant interdisciplinary field of law and economics. The socio-economic calculus of public policies has been addressed in new public management. Behavioral economics has entered the legal scientific discourse in the emerging field of empirical legal studies, backed by ample evidence of the effect of law on socio-dynamics retrieved from manifold field and laboratory experiments. Behavioral insights are the most recent Nobel Prize-crowned development for understanding human decision making in the legal and public fields, helping civil servants and legal executives foster the socio-economic outcomes of their work. All these interdisciplinary approaches aim at illuminating the socio-economic outcomes of legal codifications in order to improve public collectives. In all these cases, empirics derived from quantitative and qualitative research help gain inferences for legal theory building and the strengthening of public policy implementation. This article argues that the time is ripe to dare the next step in empirical legal analysis: drawing on insights retrieved from big data and algorithmic machine learning, but also introducing the use of optimal control macrodynamic modelling, a methodology originating in physics that entered macroeconomics and related disciplines to quantify and optimally control economic theory and practice. Given the ongoing big data revolution and exponentially rising data transfer, coupled with unprecedented advancements in computational power, the means are now available, for the first time, to push empirical legal studies toward novel tools, such as hierarchical Bayesian modelling and optimal control methods, to derive inferences on how to improve legal theory and practice in innovative ways never before possible. On the brink of artificial intelligence entering the labor force at large scale, legal scholarship can now adapt to novel market opportunities by acknowledging unprecedented computational power and methodological sophistication in deriving insights from big data. Heralding a new age of legal empirical macrodynamics also serves the legal community in light of the predicted heightened demand for creativity as a future valuable asset of human legal practitioners and scholars, in comparison to repetitive tasks likely soon to be outsourced to AI and machine learning. |
Keywords: | Artificial Intelligence, Behavioral Economics, Behavioral Political Economy, Big Data, Governance, Machine Learning, New Public Management, Legal Scholarship, Optimal Control, Social Credit Score |
Date: | 2019–08 |
URL: | http://d.repec.org/n?u=RePEc:smo:epaper:001jp&r=all |
By: | Strittmatter, Anthony |
JEL: | H75 I38 J22 J31 C21 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc19:203499&r=all |
By: | Vladimir Puzyrev |
Abstract: | This study analyzes patterns in cryptocurrency markets using a special type of deep neural network, namely a convolutional autoencoder. The method extracts the dominant features of market behavior and classifies the 40 studied cryptocurrencies into several classes for twelve 6-month periods starting from 15th May 2013. Transitions from one class to another over time are related to the maturation of cryptocurrencies. In speculative cryptocurrency markets, these findings have potential implications for investment and trading strategies. |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1910.12281&r=all |
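The abstract does not report the network architecture, so the PyTorch sketch below is only a generic 1-D convolutional autoencoder of the kind described. The window length of 128, the layer sizes, and the idea of clustering the learned latent codes into classes (e.g., with k-means) are assumptions made for illustration.

```python
# Generic 1-D convolutional autoencoder sketch (architecture is assumed,
# not taken from the paper). Inputs: normalized price/return series.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2),   # 128 -> 64
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),  # 64 -> 32
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),   # 64 -> 128
        )

    def forward(self, x):
        z = self.encoder(x)              # latent feature maps
        return self.decoder(z), z

model = ConvAutoencoder()
x = torch.randn(40, 1, 128)              # 40 series, 1 channel, length 128
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective to minimize
```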
By: | Giuseppe Carlo Calafiore; Marisa Hillary Morales; Vittorio Tiozzo; Serge Marquie |
Abstract: | Predicting the exit (e.g., bankruptcy, acquisition) of privately held companies is a current and relevant problem for investment firms. The difficulty of the problem stems from the lack of reliable, quantitative and publicly available data. In this paper, we contribute to this endeavour by constructing an exit predictor model based on qualitative data, which blends the outcomes of three classifiers, namely a Logistic Regression model, a Random Forest model, and a Support Vector Machine model. The combined model's output is the majority vote of the component models' predicted classes. The models are trained using data extracted from the Thomson Reuters Eikon repository of 54,697 US and European companies over the 1996-2011 time span. Experiments were conducted to predict whether a company eventually either gets acquired or goes public (IPO), against the complementary event that it remains private or goes bankrupt, within the considered time window. Our model achieves 63% predictive accuracy, which is quite a valuable figure for Private Equity investors, who typically expect very high returns from successful investments. |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1910.13969&r=all |
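The blending rule described, taking the majority of the three classifiers' predicted classes, maps directly onto scikit-learn's hard-voting ensemble. A minimal sketch follows; the feature construction from the Eikon qualitative data is not described in the abstract, so synthetic placeholder data stands in for the feature matrix X and the binary exit labels y (acquired/IPO vs. private/bankrupt).

```python
# Majority-vote ensemble of Logistic Regression, Random Forest, and SVM.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data; the study itself uses 54,697 US and European companies.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC())],
    voting="hard",   # predict the class chosen by the majority of models
)
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))   # held-out accuracy
```

Hard voting with three heterogeneous classifiers guarantees a unique majority for binary labels, which is presumably why exactly three component models are blended.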
By: | Yvette Burton (Columbia University School of Professional Studies) |
Abstract: | Research Question and Objectives: Is there subtle gender bias in the way companies word and code job listings in fields such as engineering and programming? Although the Civil Rights Act effectively bans companies from explicitly requesting workers of a particular gender, the language in these listings may discourage many women from applying. The objectives of the research are to create two foundational constructs that leaders can use to address the growing employee competency and business performance gaps created by the lack of gender diversity among data scientist roles and by silos across enterprise talent strategies. These two objectives are: Integrated Data Scientist and HCM Leadership Development Strategies, and AI Leadership Assessment and Development with Risk Audits. |
Keywords: | Coding Bias, Artificial Intelligence, Data Scientists, Leadership Development, Business Performance, Digital Workforce Solutions, Behavioral Analytics, Twenty-First Century Skills Gaps, Human Capital Management, STEM, Enterprise Risk Management. |
JEL: | C89 D81 J24 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:9110624&r=all |
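The abstract does not state how gendered wording in listings would be measured. One common approach, counting masculine- versus feminine-coded terms in the ad text (after Gaucher, Friesen and Kay, 2011), is sketched below; the word lists are abbreviated, illustrative examples rather than the study's instrument.

```python
# Illustrative gender-coded wording scan for job listings.
import re

# Abbreviated example word lists (assumed, not from the paper).
MASCULINE = {"competitive", "dominant", "aggressive", "rockstar", "ninja"}
FEMININE = {"collaborative", "supportive", "interpersonal", "nurturing"}

def gender_coding_score(listing: str) -> int:
    # Positive score: masculine-leaning wording; negative: feminine-leaning.
    words = re.findall(r"[a-z]+", listing.lower())
    return sum(w in MASCULINE for w in words) - sum(w in FEMININE for w in words)

print(gender_coding_score("We want a competitive rockstar engineer"))  # 2
```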