New Economics Papers on Computational Economics
Issue of 2023‒11‒27
twelve papers chosen by
By: Zhang, Luyao
Abstract: In this research, we explore the nexus between artificial intelligence (AI) and blockchain, two paramount forces steering the contemporary digital era. AI, replicating human cognitive functions, encompasses capabilities from visual discernment to complex decision-making, with significant applicability in sectors such as healthcare and finance. Its influence during the web2 epoch not only enhanced the prowess of user-oriented platforms but also prompted debates on centralization. Conversely, blockchain provides a foundational structure advocating for decentralized and transparent transactional archiving. Yet, the foundational principle of "code is law" in blockchain underscores an imperative need for the fluid adaptability that AI brings. Our analysis methodically navigates the corpus of literature on the fusion of blockchain with machine learning, emphasizing AI's potential to elevate blockchain's utility. Additionally, we chart prospective research trajectories, weaving together blockchain and machine learning in niche domains like causal machine learning, reinforcement mechanism design, and cooperative AI. These intersections aim to cultivate interdisciplinary pursuits in AI for Science, catering to a broad spectrum of stakeholders.
Date: 2023–11–02
URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:g2q5t&r=cmp
By: Jozef Barunik; Lubos Hanus
Abstract: We propose a novel machine learning approach to probabilistic forecasting of hourly intraday electricity prices. In contrast to recent advances in data-rich probabilistic forecasting that approximate the distributions with some features such as moments, our method is non-parametric and selects the best distribution from all possible empirical distributions learned from the data. The model we propose is a multiple output neural network with a monotonicity adjusting penalty. Such a distributional neural network can learn complex patterns in electricity prices from data-rich environments, and it outperforms state-of-the-art benchmarks.
Date: 2023–10
URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.02867&r=cmp
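Editor's note: a minimal PyTorch sketch of the kind of distributional network the abstract above describes: a multi-output net that predicts CDF values on a fixed price grid, with a penalty on non-monotone outputs. Architecture, grid size, and penalty weight are illustrative assumptions, not the paper's specification.

```python
# Multi-output network predicting a CDF on a fixed grid of price levels,
# trained with a penalty that discourages decreasing (non-monotone) CDFs.
import torch
import torch.nn as nn

N_GRID = 50  # number of price levels at which the CDF is evaluated (assumed)

class DistributionalNet(nn.Module):
    def __init__(self, n_features: int, n_grid: int = N_GRID):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_grid), nn.Sigmoid(),  # CDF values in [0, 1]
        )

    def forward(self, x):
        return self.body(x)  # predicted CDF at each grid point

def loss_with_monotonicity(pred_cdf, target_cdf, lam=1.0):
    """Squared-error fit plus a penalty on CDF decreases along the grid.

    target_cdf can be the empirical indicators 1{price <= grid point}.
    """
    fit = ((pred_cdf - target_cdf) ** 2).mean()
    diffs = pred_cdf[:, 1:] - pred_cdf[:, :-1]  # should be non-negative
    return fit + lam * torch.relu(-diffs).mean()
```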
By: Miroslav Despotovic; David Koch; Matthias Zeppelzauer; Eric Stumpe; Simon Thaler; Wolfgang A. Brunauer
Abstract: Today's data analysis techniques allow for the combination of multiple different data modalities, which should also allow for more accurate feature extraction. In our research, we leverage the capacity of machine learning tools to build a model with shared neural network layers and multiple inputs that is more flexible and allows for more robust extraction of real estate attributes. The most common form of data for a real estate assessment is data structured in tables, such as size or year of construction, alongside textual descriptions of the property. Other data that can easily be found in real estate listings are visual data such as exterior and interior photographs. In the presented approach, we fuse textual information and a variable quantity of interior photographs per condominium for condition assessment and investigate how multiple modalities can be efficiently combined using deep learning. We train and test the performance of a pre-trained convolutional neural network fine-tuned with a variable quantity of interior views of selected condominiums. In parallel, we train and test a pre-trained bidirectional encoder transformer language model using text data from the same observations. Finally, we build an experimental neural network model using both modalities for the same task and compare its performance with the models trained on a single modality. Our initial assumption that coupling both networks would lead to worse performance compared to fine-tuned single-modal models was not confirmed: we achieved better performance with the proposed multi-modal model despite a very unbalanced dataset. The novelty here is the multimodal modeling of a variable quantity of real-estate-related attributes in a unified model that integrates all available modalities and can thus use their complementary information. With the presented approach, we intend to extend existing information extraction methods for automated valuation models, which in turn would contribute to higher transparency of valuation procedures and thus to more reliable statements about the value of real estate.
Keywords: AVM; Computer Vision; Hedonic Pricing; NLP
JEL: R3
Date: 2023–01–01
URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_22&r=cmp
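Editor's note: a hedged sketch of the fusion idea in the entry above: image features from a pre-trained CNN are mean-pooled over a variable number of interior photos and concatenated with a text embedding before a shared head. Feature dimensions and the number of condition classes are illustrative assumptions.

```python
# Late fusion of pooled photo features and a text embedding for condition
# assessment of a single listing with a variable number of interior photos.
import torch
import torch.nn as nn

class MultiModalConditionNet(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, n_classes=4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, photo_feats, text_feat):
        # photo_feats: (n_photos, img_dim), n_photos varies per listing
        # text_feat: (txt_dim,), e.g. a transformer [CLS] embedding
        img = photo_feats.mean(dim=0)        # pool across photos
        fused = torch.cat([img, text_feat])  # fusion by concatenation
        return self.head(fused)              # condition-class logits
```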
By: Moritz Stang; Bastian Krämer; Marcelo Del Cajias; Wolfgang Schäfers
Abstract: Besides its structural and economic characteristics, the location of a property is probably one of the most important determinants of its underlying value. In contrast to property valuations, there are hardly any approaches to date that evaluate the quality of a real estate location in an automated manner. The reasons are the complexity, the number of interactions, and the non-linearities underlying the quality of a certain location, which are difficult to represent with traditional econometric models. The aim of this paper is thus to present a newly developed data-driven approach for the assessment of real estate locations. By combining a state-of-the-art machine learning algorithm with the local post-hoc model-agnostic method of Shapley Additive Explanations, the newly developed SHAP location score is able to account for empirical complexities, especially non-linearities and higher-order interactions. The SHAP location score represents an intuitive and flexible approach based on econometric modeling techniques and the basic assumptions of hedonic pricing theory. The approach can be applied post hoc to any common machine learning method and can be flexibly adapted to the respective needs. This constitutes a significant extension of traditional urban models and offers many advantages for a wide range of real estate players.
Keywords: Automated Location Valuation Model; Explainable AI; Location Analytics; Machine Learning
JEL: R3
Date: 2023–01–01
URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_139&r=cmp
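Editor's note: a minimal sketch of the SHAP-location-score idea under stated assumptions: fit a tree-based model on hedonic features, then sum each observation's SHAP contributions over the locational features. The feature names and random placeholder data are illustrative, not the paper's dataset.

```python
# Per-observation location score as the sum of SHAP values of location features.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

structural = ["size_sqm", "year_built", "n_rooms"]            # assumed names
locational = ["dist_cbd_km", "standard_land_value", "pop_density"]
features = structural + locational

# Placeholder hedonic data: X holds attributes, y transaction prices
X = np.random.rand(500, len(features))
y = np.random.rand(500)

model = GradientBoostingRegressor().fit(X, y)
shap_vals = shap.TreeExplainer(model).shap_values(X)  # (n_obs, n_features)

loc_idx = [features.index(f) for f in locational]
location_score = shap_vals[:, loc_idx].sum(axis=1)    # one score per property
```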
By: Michael Pinelis; David Ruppert
Abstract: We construct the maximally predictable portfolio (MPP) of stocks using machine learning. Solving for the optimal constrained weights in the multi-asset MPP gives portfolios with a high monthly coefficient of determination, given the sample covariance matrix of predicted return errors from a machine learning model. Various models for the covariance matrix are tested. The MPPs of S&P 500 index constituents with estimated returns from Elastic Net, Random Forest, and Support Vector Regression models can outperform or underperform the index depending on the time period. Portfolios that take advantage of the high predictability of the MPP's returns and employ a Kelly criterion style strategy consistently outperform the benchmark.
Date: 2023–11
URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.01985&r=cmp
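Editor's note: a hedged sketch of the MPP construction in the Lo-MacKinlay sense. With S_eps the covariance of prediction errors and S_r the covariance of realized returns, maximizing R^2(w) = 1 - (w'S_eps w)/(w'S_r w) amounts to taking the smallest generalized eigenvector of the pair (S_eps, S_r). The paper additionally imposes weight constraints; this unconstrained version is illustrative only.

```python
# Unconstrained maximally predictable portfolio via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def mpp_weights(pred_errors, realized_returns):
    """pred_errors, realized_returns: (n_periods, n_assets) arrays."""
    S_eps = np.cov(pred_errors, rowvar=False)      # prediction-error covariance
    S_r = np.cov(realized_returns, rowvar=False)   # realized-return covariance
    # solve S_eps v = lambda S_r v; eigenvalues come back ascending, so the
    # first eigenvector minimizes the unexplained variance share (max R^2)
    _, eigvecs = eigh(S_eps, S_r)
    w = eigvecs[:, 0]
    return w / w.sum()  # normalize to sum to one (illustrative convention)
```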
By: Matthias Soot; Sabine Horvath; Hans-Berndt Neuner; Alexandra Weitkamp
Abstract: Property rates, usually used in the income approach, can be determined in a reverse income approach model for every transaction where the net yield is known. The level of the property rate reflects the risk of the traded asset; the level of the yield therefore depends on influencing parameters that can explain this risk. A classical approach to investigating these influences is a multiple linear regression model, but in an inhomogeneous market this classic approach leads to poor results. In this work, we compare different parametric and non-parametric methods to model the level of the rates. We present the application of Artificial Neural Networks (ANN) and Random Forest Regression (RFR) as non-parametric methods and compare the results with parametric approaches such as classic multiple linear regression (MLR) and Geographically Weighted Regression (GWR). The dataset consists of a submarket of mixed-use buildings (residential and commercial) in the federal state of Lower Saxony (Germany). This asset class is traded only 200 times per year in a federal state with more than 8 million inhabitants; the investigated sample (covering 5 years of data) therefore comes from the official purchase price database. Besides building characteristics (number of floors, year of construction, and average rent per sqm), locational parameters are considered (standard land value, population forecast, and population structure). Due to the inhomogeneous rural, urban, and socio-demographic environment, the models can be complex. The evaluation of the different approaches led to inhomogeneous results; no single best method can be determined for the dataset. Our goal is to understand and interpret the different results in light of how the methods work. We therefore investigate the results in terms of the influencing parameters used (model size), the sample sizes, and the influence/significance of the parameters on the result. The patterns found are discussed across methods and in the context of the data. We conclude by outlining the possibilities and limitations.
Keywords: Complexity; Machine Learning; mixed-use buildings
JEL: R3
Date: 2023–01–01
URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_241&r=cmp
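Editor's note: a minimal sketch of the kind of model comparison described above: the same property-rate data run through MLR, RFR, and a small ANN, scored by cross-validated R^2. A GWR comparison would need a dedicated package (e.g., mgwr) and is omitted; the features and data here are placeholders.

```python
# Cross-validated comparison of parametric and non-parametric rate models.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data: X = building and locational attributes, y = net yields
X = np.random.rand(200, 6)
y = np.random.rand(200)

models = {
    "MLR": LinearRegression(),
    "RFR": RandomForestRegressor(n_estimators=200),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.3f}")
```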
By: Anett Wins; Marcelo Del Cajias
Abstract: Modern location analysis evaluates location attractiveness almost in real time, combining the knowledge of local real estate experts and artificial intelligence. In this paper we develop an algorithm – the Amenities Magnet algorithm – that measures and benchmarks the attractiveness of locations based on the urban amenities' footprint of the surrounding area, grouped according to relevance for residential purposes and taking distance information from Google and OpenStreetMap into account. As cities are continuously evolving, benchmarking locations' amenity-wise change in attractiveness over time helps to detect upswing areas and thus supports investment decisions. According to the 15-minute city concept, the welfare of residents is proportional to the amenities accessible within a short walk or bike ride. Measuring individual scores for the seven basic living needs results in a more detailed, disaggregated location assessment. Based on these insights, an advanced machine learning (ML) algorithm under the Gradient Boosting framework (XGBoost) is adapted to model residential rental prices for the Greater Manchester region, United Kingdom, and achieves improved predictive power. To extract interpretable results and quantify the contribution of certain amenities to rental prices, eXplainable Artificial Intelligence (XAI) methods are used. Tenants' willingness to pay (WTP) for accessibility to amenities varies by type: in Manchester, tram stops, bars, schools, and proximity to the city center in particular emerged as relevant value drivers. Even if the results of the case study are not generally applicable, the methodology can be transferred to any market in order to reveal regional patterns.
Keywords: Amenities Magnet algorithm; location analysis; residential rental pricing; XGBoost
JEL: R3
Date: 2023–01–01
URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_102&r=cmp
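Editor's note: a hedged sketch of an amenity-accessibility score in the spirit of the entry above: count the amenities of each need category reachable within a 15-minute walk and combine the counts with category weights. The categories, weights, and walking-speed assumption are illustrative; the paper itself draws distance information from Google and OpenStreetMap.

```python
# Weighted count of amenities within a 15-minute walk, per need category.
WALK_KM_15MIN = 1.2  # assumed ~4.8 km/h walking speed for 15 minutes

def amenity_score(distances_km: dict, weights: dict) -> float:
    """distances_km maps category -> list of distances (km) to its amenities."""
    score = 0.0
    for category, dists in distances_km.items():
        reachable = sum(1 for d in dists if d <= WALK_KM_15MIN)
        score += weights.get(category, 1.0) * reachable
    return score

# Example with two of the basic living-need categories (hypothetical values)
dists = {"education": [0.4, 2.0], "transport": [0.2, 0.9, 1.5]}
print(amenity_score(dists, {"education": 2.0, "transport": 1.5}))
```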
By: Jon Danielsson; Andreas Uthemann
Abstract: Artificial intelligence (AI) is making rapid inroads into financial regulation. It will benefit micro regulations, concerned with issues like consumer protection and routine banking rules, because of short time horizons, clear objectives, and repeated decisions that leave plenty of data for AI to train on. It is different with macro regulations, focused on the stability of the entire financial system. Here, infrequent and mostly unique events frustrate AI learning. Distributed human decision-making in times of extreme stress has strong advantages over centralised AI decisions, which, coupled with the catastrophic cost of mistakes, raises questions about AI used in macro regulations. However, AI will likely become widely used by stealth as it takes over increasingly high-level advice and decisions, driven by significant cost efficiencies, robustness, and accuracy compared to human regulators. We propose six criteria against which to judge the suitability of AI use by the private sector and in financial regulation.
Date: 2023–10
URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.11293&r=cmp
By: Sinan Güneş; Mustafa Tombul; Harun Tanrivermis
Abstract: Energy consumption prediction for buildings can help building owners and operators to reduce energy costs, reduce environmental impact, improve occupant comfort, and optimize building performance. The study aims to develop a model for energy consumption prediction in university campus buildings using machine learning techniques with time series and physics/engineering-based datasets. Time series energy consumption data from existing buildings, as well as building physics/engineering data, will be analyzed to estimate campus-scale energy consumption. Time series data will be used for heating/cooling and lighting; physics/engineering data will cover outdoor conditions such as air temperature and relative humidity, and building-specific characteristics such as floor area, floor height, and material type. To improve prediction accuracy, a simulation study will be conducted using a physics-based approach and a model will be developed. The results of this approach will be used as input for the data-based approach, and a hybrid model will be presented for prediction using deep learning techniques such as LSTM and RNN. Existing studies on energy consumption prediction generally use models built either on time series consumption datasets or on building physical information. Since both data types affect energy consumption, evaluating them jointly supports more accurate consumption forecasts; combining them, however, is a substantial problem in itself. Within the scope of the study, predictions will be made using these two data types together, and the advantages and shortcomings of the model results compared to purely data-based models will be discussed. While previous research has primarily focused on either time series datasets or building physical information, this study is thought to be one of the first to evaluate these two data types together in order to provide more accurate energy consumption predictions and generalizable results.
Keywords: Energy Consumption; Energy Efficiency; gray-box model; Machine Learning
JEL: R3
Date: 2023–01–01
URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_255&r=cmp
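Editor's note: a minimal sketch of the hybrid idea described above: an LSTM encodes the consumption time series, and its final hidden state is concatenated with static physics/engineering features (floor area, floor height, material type) before the prediction head. All dimensions are illustrative assumptions.

```python
# Hybrid energy model: LSTM over the series, fused with static building data.
import torch
import torch.nn as nn

class HybridEnergyNet(nn.Module):
    def __init__(self, n_series_feats=3, n_static_feats=5, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_series_feats, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_static_feats, 32), nn.ReLU(),
            nn.Linear(32, 1),  # next-period consumption
        )

    def forward(self, series, static):
        # series: (batch, time, n_series_feats); static: (batch, n_static_feats)
        _, (h_n, _) = self.lstm(series)
        fused = torch.cat([h_n[-1], static], dim=1)  # final hidden state + static
        return self.head(fused)
```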
By: Julia Angerer; Wolfgang Brunauer
Abstract: Due to the massive volume of real-estate-related text documents, the need to process the data automatically is evident. Purchase contracts in particular contain valuable transaction and property description information, such as usable area. In this research project, a natural language processing (NLP) approach using open-source transformer-based models was investigated. The potential of pre-trained language models for zero-shot classification is highlighted, especially in cases where no training data is available. This approach is particularly relevant for analyzing purchase contracts in the legal domain, where it can be challenging to extract the information manually or to build comprehensive regular-expression rules. A data set consisting of classified contract sentence parts, each containing one size and context information, was created manually for model comparison. The experiments conducted in this study demonstrate that pre-trained language models can accurately classify sentence parts containing a size, with varying levels of performance across different models. The results suggest that pre-trained language models can be effective tools for processing textual data in the real estate and legal domains and can provide valuable insights into the underlying structures and patterns in such data. Overall, this research contributes to the understanding of the capabilities of pre-trained language models in NLP and highlights their potential for practical applications in real-world settings, particularly in the legal domain, where there is a large volume of textual data and annotated training data is not available.
Keywords: contract documents; Information Extraction; Natural Language Processing; zero-shot classification
JEL: R3
Date: 2023–01–01
URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_304&r=cmp
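Editor's note: a minimal sketch of zero-shot classification of contract sentence parts with a pre-trained open-source model via the Hugging Face transformers pipeline. The model choice, labels, and example sentence are illustrative assumptions, not the paper's exact setup.

```python
# Zero-shot classification of a contract sentence part, no training data needed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")  # assumed model choice

sentence = "The apartment has a usable area of 85.3 square meters."
labels = ["usable area", "purchase price", "parking space", "other"]

result = classifier(sentence, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])  # top label and its score
```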
By: Kentaro Hoshisashi; Carolyn E. Phelan; Paolo Barucca
Abstract: Volatility smile and skewness are two key properties of option prices that are represented by the implied volatility (IV) surface. However, IV surface calibration through nonlinear interpolation is a complex problem due to several factors, including limited input data, low liquidity, and noise. Additionally, the calibrated surface must obey the fundamental financial principle of the absence of arbitrage, which can be modeled by various differential inequalities over the partial derivatives of the option price with respect to the expiration time and the strike price. To address these challenges, we introduce a Derivative-Constrained Neural Network (DCNN), an enhancement of a multilayer perceptron (MLP) that incorporates derivatives in the output function. The DCNN allows us to generate a smooth surface and to incorporate the no-arbitrage condition thanks to the derivative terms in the loss function. In numerical experiments, we apply a stochastic volatility model with smile and skewness parameters and simulate it with different settings to examine the stability of the calibrated model under different conditions. The results show that DCNNs improve the interpolation of the implied volatility surface with smile and skewness by integrating the computation of the derivatives, which are necessary and sufficient no-arbitrage conditions. The developed algorithm also offers practitioners an effective tool for understanding expected market dynamics and managing the risk associated with volatility smile and skewness.
Date: 2023–10
URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.16703&r=cmp
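Editor's note: a hedged sketch of the derivative-constrained idea: an MLP maps (strike, maturity) to a call price, and autograd supplies the partial derivatives so that static no-arbitrage violations (negative calendar spread, negative butterfly) can be penalized in the loss. The paper's exact constraint set, architecture, and weighting may differ.

```python
# MLP price surface with autograd-based no-arbitrage penalties.
import torch
import torch.nn as nn

# Input (strike K, maturity T); Softplus keeps the output call price >= 0
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1), nn.Softplus())

def no_arbitrage_penalty(K, T):
    """Penalize dC/dT < 0 (calendar) and d2C/dK2 < 0 (butterfly)."""
    x = torch.stack([K, T], dim=1).requires_grad_(True)
    C = net(x).squeeze(-1)
    grads = torch.autograd.grad(C.sum(), x, create_graph=True)[0]
    dC_dK, dC_dT = grads[:, 0], grads[:, 1]
    d2C_dK2 = torch.autograd.grad(dC_dK.sum(), x, create_graph=True)[0][:, 0]
    return torch.relu(-dC_dT).mean() + torch.relu(-d2C_dK2).mean()

# usage: total loss = fit to observed prices + lambda * no_arbitrage_penalty(K, T)
```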
By: Daniela Sele (ETH); Marina Chugunova (Max Planck Institute for Innovation and Competition)
Abstract: Are people algorithm averse, as some previous literature indicates? If so, can the retention of human oversight increase the uptake of algorithmic recommendations, and does keeping a human in the loop improve accuracy? Answers to these questions are of utmost importance given the fast-growing availability of algorithmic recommendations and the current intense discussions about regulation of automated decision-making. In an online experiment, we find that 66% of participants prefer algorithmic to equally accurate human recommendations if the decision is delegated fully. This preference for algorithms increases by a further 7 percentage points if participants are able to monitor and adjust the recommendations before the decision is made. In line with automation bias, participants adjust the recommendations that stem from an algorithm by less than those from another human. Importantly, participants are less likely to intervene with the least accurate recommendations and adjust them by less, raising concerns about the monitoring ability of a human in a Human-in-the-Loop system. Our results document a trade-off: while allowing people to adjust algorithmic recommendations increases their uptake, the adjustments made by the human monitors reduce the quality of final decisions.
Keywords: automated decision-making; algorithm aversion; algorithm appreciation; automation bias
JEL: O33 C90 D90
Date: 2023–10–24
URL: http://d.repec.org/n?u=RePEc:rco:dpaper:438&r=cmp