New Economics Papers on Big Data
By: | Havranek, Tomas; Zeynalov, Ayaz |
Abstract: | In this paper, we examine the usefulness of Google Trends data in predicting monthly tourist arrivals and overnight stays in Prague between January 2010 and December 2016. We offer two contributions. First, we analyze whether Google Trends provides significant forecasting improvements over models without search data. Second, we assess whether a high-frequency variable (weekly Google Trends) is more useful for accurate forecasting than a low-frequency variable (monthly tourist arrivals) using mixed-data sampling (MIDAS). Our results stress the potential of Google Trends to offer more accurate predictions in the context of tourism: we find that Google Trends information, both two months and one week ahead of arrivals, is useful for predicting the actual number of tourist arrivals. The MIDAS forecasting model that employs weekly Google Trends data outperforms models using monthly Google Trends data and models without Google Trends data. |
Keywords: | Google Trends, mixed-frequency data, forecasting, tourism |
JEL: | C53 L83 |
Date: | 2018–11–22 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:90205&r=big |
By: | Hrvoje Jakopović (University of Zagreb, Faculty of Political Science) |
Abstract: | Public relations research has been facing many challenges in a fast-changing media environment. How can public relations effects be measured? This remains the key question for many scholars and communication professionals. In the era of big data, the possibilities for measuring different aspects of human activity seem within reach. However, coping with the 3Vs (Volume, Velocity, Variety) of big data remains a demanding effort when trying to get the whole picture and interpret the meaning of these data. Big data research is undoubtedly an interdisciplinary endeavour. Interdisciplinarity has always been present in public relations research, and scholars and public relations professionals are therefore in search of tools, designs and solutions that can help with big data analysis. The aim of this paper is to present possible research designs and solutions for public relations research concerning big data and user-generated content (UGC). As communicative practices increasingly move to social media platforms, the focus of public relations research is also moving online. The author examines the collection, aggregation, analysis and interpretation of data obtained from various publicly available online sources. In terms of big data, the analysis focuses on user-generated content as a potential manifestation of public relations activities. The author analyses UGC with real-time sentiment analysis and other available tools. |
Keywords: | public relations research; big data; sentiment analysis; research design |
JEL: | C88 |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:7310177&r=big |
By: | Bailek, Alexandra |
Abstract: | The hospital readmission rate has been proposed as an important outcome indicator computable from routine statistics. The purpose of this research is to investigate the economic impact of service in hospitals and integrated delivery networks in the United States, using the readmission rate as the target variable. The data set covers 130 hospitals and integrated delivery networks in the United States from 1999 to 2008 and is used to investigate the significance of different factors in the readmission rate. It contains 101,766 patient encounters and 50 variables. The 30-day readmission rate is considered an indicator of the quality of health providers and is used as the target variable in this project. Preliminary data analysis shows that variables such as age, admission type, and discharge disposition are correlated with the readmission rate and are incorporated in further analysis. The analysis is performed on the diabetic-patient dataset to develop a classification model that predicts the likelihood of a discharged patient being readmitted within 30 days. KNN, Naive Bayes and logistic regression algorithms were used to classify the data, and KNN appears to be the best approach for developing the model. Hospitalisations and drug prescriptions accounted for 50% and 20% of total readmission expenditure, respectively. Long-term nursing home care after hospital admission cost an additional £46.4 million. With the ability to identify patients who are more likely to be readmitted within 30 days, hospital resources can be deployed more economically while services are improved. Based on the results, it can be concluded that the direct cost of readmissions to hospitals rose to £459 million in 2000 and nursing home costs rose to £111 million. A reduced length of hospital stay was also associated with increased readmission rates for jaundice and dehydration. |
Keywords: | Predictive Modeling, Re-admission, Simulation, Healthcare Service, Classification Modeling, Health care Quality Indicator |
JEL: | A1 C1 C11 C3 C5 C53 O2 |
Date: | 2018–10–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:89875&r=big |
By: | Ilkka Tuomi (Oy Meaning Processing Ltd) |
Abstract: | This report describes the current state of the art in artificial intelligence (AI) and its potential impact on learning, teaching, and education. It provides conceptual foundations for well-informed policy-oriented work, research, and forward-looking activities that address the opportunities and challenges created by recent developments in AI. The report is aimed at policy developers, but it also makes contributions of interest to AI technology developers and to researchers studying the impact of AI on the economy, society, and the future of education and learning. |
Keywords: | Artificial Intelligence, Education, Learning, Teaching, Foresight |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc113226&r=big |
By: | Turland, M.; Slade, P. |
Abstract: | Modern farm machinery captures geocoded data on all aspects of a farming operation. These detailed datasets are called big data. Although some of this data is useful to individual farmers, much of it has little value to the farmer who collects it. The true value of big data is captured when it is aggregated over many farms, allowing researchers to find underlying trends. To analyze farmers' willingness to share data, we conduct a hypothetical choice experiment that asked farmers in Saskatchewan whether they would join a big data program. The choice tasks varied the type of organization that operated the big data program and included financial and non-financial incentives. Heteroscedastic and random effects probit models are estimated using data from a survey constructed for this study. The results are consistent across models and show that farmers are most willing to share their data with university researchers, followed by crop input suppliers or grower associations, and financial institutions or equipment manufacturers. Farmers are least willing to share their data with government. Farmers are more willing to share data in the presence of a financial incentive or a non-financial incentive such as comparative benchmark statistics or prescription maps generated from the submitted data. |
Keywords: | Research and Development/ Tech Change/Emerging Technologies |
Date: | 2018–07 |
URL: | http://d.repec.org/n?u=RePEc:ags:iaae18:277275&r=big |
By: | Jian-Huang She; Dan Grecu |
Abstract: | A new challenge to quantitative finance after the recent financial crisis is the study of credit valuation adjustment (CVA), which requires modeling the future values of a portfolio. In this paper, following recent work in [Weinan E (2017), Han (2017)], we apply deep learning to attack this problem. The future values are parameterized by neural networks, and the parameters are then determined through optimization. Two concrete products are studied: the Bermudan swaption and the Mark-to-Market cross-currency swap. We obtain their expected positive/negative exposures and further study the resulting functional form of the future values. Such an approach represents a new framework for modeling XVA, and it also sheds new light on other methods like American Monte Carlo. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.08726&r=big |
By: | Samuel Sponem (HEC Montréal) |
Keywords: | management controller, society of control, big data, management control |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:halshs-01913075&r=big |
By: | Bluhm, Richard (Institute of Macroeconomics, Leibniz University Hannover, and UNU-MERIT); Krause, Melanie (Hamburg University, Department of Economics) |
Abstract: | Tracking the development of cities in emerging economies is difficult with conventional data. We show that satellite images of nighttime lights are a reliable proxy for economic activity at the city level, provided they are first corrected for topcoding. The commonly-used data fail to capture the true brightness of many cities. We present a stylized model of urban luminosity and empirical evidence which both suggest that these 'top lights' can be characterized by a Pareto distribution. We then propose a simple correction procedure which recovers the full distribution of city lights. Our results show that the brightest cities account for nearly a third of global economic activity. Applying this approach to cities in Sub-Saharan Africa, we find that primate cities are outgrowing secondary cities but are changing from within. Poorer neighborhoods are developing, but sub-centers are forming so that Africa's largest cities are also becoming increasingly fragmented. |
Keywords: | Development, urban growth, night lights, top-coding, inequality |
JEL: | O10 O18 R11 R12 |
Date: | 2018–11–05 |
URL: | http://d.repec.org/n?u=RePEc:unm:unumer:2018041&r=big |
By: | Ali Al-Aradi; Adolfo Correia; Danilo Naiff; Gabriel Jardim; Yuri Saporito |
Abstract: | In this work we apply the Deep Galerkin Method (DGM) described in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations that arise in quantitative finance applications, including option pricing, optimal execution, and mean field games. The main idea behind DGM is to represent the unknown function of interest using a deep neural network. A key feature of this approach is that, unlike other commonly used numerical approaches such as finite difference methods, it is mesh-free. As such, it does not suffer (as much as other numerical methods do) from the curse of dimensionality associated with high-dimensional PDEs and PDE systems. The main goals of this paper are to elucidate the features, capabilities and limitations of DGM by analyzing aspects of its implementation for a number of different PDEs and PDE systems. Additionally, we present: (1) a brief overview of PDEs in quantitative finance along with numerical methods for solving them; (2) a brief overview of deep learning and, in particular, the notion of neural networks; (3) a discussion of the theoretical foundations of DGM with a focus on the justification of why this method is expected to perform well. |
Date: | 2018–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1811.08782&r=big |
By: | Budzinski, Oliver; Stöhr, Annika |
Abstract: | The ubiquitous process of digitization changes economic competition in markets in several ways and leads to the emergence of new business models. The increasing roles of digital platforms and of data-driven markets are two relevant examples. These developments challenge competition policy, which must consider the special economic characteristics of digital goods and markets. In Germany, national competition law was amended in 2017 to accommodate digitization-driven changes in the economy, and plans for further changes are already being discussed. We review this institutional change from an economics perspective and argue that most of the reform's elements point in the right direction. However, some upcoming challenges may have been overlooked so far. Furthermore, we discuss whether European competition policy should follow the paragon of the German reform and amend its institutional framework accordingly. We find scope for reform particularly regarding data-driven markets, whereas the treatment of platform economics appears to be already well established. |
Keywords: | competition policy, antitrust, industrial economics, digitization, media economics, institutional economics, industrial organization, big data, algorithms, platform economics, two-sided markets, personalized data, privacy, internet economics, consumer protection |
JEL: | L40 K21 L86 L82 L81 L10 L15 D80 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:zbw:tuiedp:117&r=big |