nep-big New Economics Papers
on Big Data
Issue of 2019‒09‒23
twenty-one papers chosen by
Tom Coupé
University of Canterbury

  1. Artificial Intelligence and the UK Labour Market: Questions, methods and a call for a systematic approach to information gathering By Tim Hinks
  2. Testing the employment and skill impact of new technologies: A survey and some methodological issues By Barbieri, Laura; Mussida, Chiara; Piva, Mariacristina; Vivarelli, Marco
  3. Geographic Clustering of Firms in China By Douglas Hanley; Chengying Luo; Mingqin Wu
  4. Application of machine learning in real estate transactions – automation of due diligence processes based on digital building documentation By Philipp Maximilian Müller
  5. Assessing forecast gains from ‘deep learning’ over time-series methodologies By Yi Wu; Sotiris Tsolacos
  6. MD&A Disclosure and Performance of U.S. REITs: The Information Content of Textual Tone By Marina Koelbl
  7. Challenges in Machine Learning for Document Classification in the Real Estate Industry By Mario Bodenbender; Björn-Martin Kurzrock
  8. AVMs: An international comparison of the opportunities and challenges By Lynn Johnson
  9. México | La crisis por escasez de gasolina: un análisis de Big Data By Guillermo Jr. Cardenas Salgado; Luis Antonio Espinosa; Juan Jose Li Ng; Carlos Serrano
  10. A Deep Learning Framework for Pricing Financial Instruments By Qiong Wu; Zheng Zhang; Andrea Pizzoferrato; Mihai Cucuringu; Zhenming Liu
  11. Improving Regulatory Effectiveness through Better Targeting: Evidence from OSHA By Johnson, Matthew S; Levine, David I; Toffel, Michael W
  12. Shallow Self-Learning for Reject Inference in Credit Scoring By Nikita Kozodoi; Panagiotis Katsas; Stefan Lessmann; Luis Moreira-Matias; Konstantinos Papakonstantinou
  13. Huff Inspired Gravity Model in Valuation of homes near Scenic lands -- A geographically weighted regression based hedonic model By Jay Mittal; Sweta Byahut
  14. A Structural Model of a Multitasking Salesforce: Job Task Allocation and Incentive Plan Design By Minkyung Kim; K. Sudhir; Kosuke Uetake
  15. A Study of Big Data for Business Growth in SMEs: Opportunities & Challenges By Iqbal, Muhammad; Alam Kazmi, Syed Hasnain; Manzoor, Dr. Amir; Rehman Soomrani, Dr. Abdul; Butt, Shujaat Hussain; Shaikh, Khurram Adeel
  16. Construct Validation for a Nonlinear Measurement Model in Marketing and Consumer Behavior Research By Toshikuni Sato
  17. Adaptability of Digital Technologies to Sustainable Construction Practices In Sri Lanka By Terans Gunawardhana; Kanchana Perera
  18. Job Vacancies, the Beveridge Curve, and Supply Shocks: The Frequency and Content of Help-Wanted Ads in Pre- and Post-Mariel Miami By Anastasopoulos, Jason; Borjas, George J.; Cook, Gavin G.; Lachanski, Michael
  19. Looking forward via the Past: An Investigation of the Evolution of the Knowledge Base of Robotics Firms. By Estolatan, Eric; Geuna, Aldo
  20. Looking forward via the Past: An Investigation of the Evolution of the Knowledge Base of Robotics Firms. By Estolatan, Eric; Geuna, Aldo
  21. Using Foursquare data to reveal spatial and temporal patterns in London By Maarten Vanhoof; Antonia Godoy-Lorite; Roberto Murcio; Iacopo Iacopini; Natalia Zdanowska; Juste Raimbault; Richard Milton; Elsa Arcaute; Mike Batty

  1. By: Tim Hinks (University of the West of England, Bristol)
    Abstract: Whilst much work has recently been produced on the impact robots will have on the labour markets of rich countries, there is as yet no substantial body of work on what impact artificial intelligence will have on labour markets. Raj and Seamans (2018) have recently called for the urgent gathering of firm-level information on which AI is being used and how the use of AI is changing over time. Felton, Raj and Seamans (2018) use a measure of AI in US firms and map the areas in which firms use AI against a broad range of occupational job requirements, in order to ascertain the probability that occupations will be made redundant or which job requirements will become redundant. In the UK we currently have nothing comparable to the level of detail found in these data sources. This paper calls for a concerted and rigorous approach to gathering this information at the individual and firm level, in order to give some idea of which jobs and job requirements are under threat from AI and, crucially, whether the quality of jobs is declining or will decline.
    Date: 2019–01–03
    URL: http://d.repec.org/n?u=RePEc:uwe:wpaper:20191903&r=all
  2. By: Barbieri, Laura (Università Cattolica di Piacenza); Mussida, Chiara (Università Cattolica di Piacenza); Piva, Mariacristina (Università Cattolica di Piacenza); Vivarelli, Marco (Università Cattolica di Milano)
    Abstract: The present technological revolution, characterized by the pervasive and growing presence of robots, automation, artificial intelligence and machine learning, is going to transform societies and economic systems. However, this is not the first technological revolution humankind has faced, though it is probably the first with such an accelerated pace of diffusion across all industrial sectors. Studying its mechanisms and consequences (will the world turn into a jobless society or not?), mainly with regard to labor market dynamics, is a crucial matter. This paper aims to provide an updated picture of the main empirical evidence on the relationship between new technologies and employment, in terms of the overall consequences for the number of employees, the tasks required, and wage/inequality effects.
    Keywords: technology, innovation, employment, skill, task, routine
    JEL: O33
    Date: 2019–09–11
    URL: http://d.repec.org/n?u=RePEc:unm:unumer:2019032&r=all
  3. By: Douglas Hanley (University of Pittsburgh); Chengying Luo (University of Pittsburgh); Mingqin Wu (South China Normal University)
    Abstract: The spatial arrangement of firms is known to be a critical factor influencing a variety of firm-level outcomes. Numerous existing studies have investigated the importance of firm density and localization at various spatial scales, as well as agglomeration by industry. In this paper, we bring relatively new data and techniques to bear on the issue. Regarding the data, we use a comprehensive census of firms conducted by the National Bureau of Statistics of China (NBS). This covers firms in all industries and localities, and we have waves from both 2004 and 2008 available. Past studies have largely relied on manufacturing firms. These additional data allow us to look more closely at clustering within services, as well as potential spillovers between services and manufacturing. Further, by looking at the case of China, we get a snapshot of a country (especially in the early 2000s) in a period of rapid transition, but one that has already industrialized to a considerable degree. Additionally, this is an environment shaped by far more aggressive industrial policies than those seen in much of Western Europe and North America. In terms of techniques, we take a machine learning approach to understanding firm clustering and agglomeration. Specifically, we use images generated by density maps of firm location data (from the NBS data) as well as linked satellite imagery from the Landsat 7 spacecraft. This allows us to frame the issue as one of prediction. By predicting firm outcomes such as profitability, productivity, and growth using these images, we can understand their relationship to firm clustering. By turning this into a prediction problem using images as inputs, we can tap into the rich and rapidly evolving literature in computer science and machine learning on deep convolutional neural networks (CNNs). Additionally, we can utilize software and hardware tools developed for these purposes.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:1522&r=all
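    The prediction step described in this abstract (mapping gridded firm-density images, possibly combined with satellite bands, to firm outcomes) can be illustrated with a small convolutional network. The input size, two-channel structure, architecture and outcome variable below are illustrative assumptions rather than the authors' specification, and the data are synthetic placeholders.

      # Sketch: predict a continuous firm outcome (e.g., log productivity)
      # from a firm-density raster plus one satellite band.
      import numpy as np
      import tensorflow as tf
      from tensorflow.keras import layers

      def build_cnn(input_shape=(64, 64, 2)):
          model = tf.keras.Sequential([
              layers.Input(shape=input_shape),
              layers.Conv2D(16, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Conv2D(32, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Flatten(),
              layers.Dense(64, activation="relu"),
              layers.Dense(1),                        # continuous outcome
          ])
          model.compile(optimizer="adam", loss="mse")
          return model

      # Synthetic stand-in data: 500 localities, 64x64 grids, 2 channels.
      X = np.random.rand(500, 64, 64, 2).astype("float32")
      y = np.random.rand(500).astype("float32")       # placeholder outcome
      model = build_cnn()
      model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)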
  4. By: Philipp Maximilian Müller
    Abstract: To minimize risks and increase transparency, every company needs reliable information. The quality and completeness of digital building documentation is increasingly a "deal maker" or "deal breaker" in real estate transactions. However, there is a fundamental lack of instruments for leveraging internal data, and a risk of overlooking the essentials. In real estate transactions, the parties generally have just a few weeks for due diligence (DD). A large variety of documents needs to be elaborately prepared and made available in data rooms. As a result, gaps in the documentation may remain hidden and can only be identified with great effort. Missing documents may result in high purchase price discounts. Investors are therefore increasingly using a data-driven approach to gain essential knowledge in transaction processes. Digital technologies in due diligence processes should help to reduce existing information asymmetries and sustain data-supported decisions. The paper describes an approach to automating due diligence processes, with a focus on technical due diligence (TDD), using machine learning (ML), especially information extraction. The overall aim is to extract relevant information from building-related documents in order to generate a semi-automated report on the structural (and environmental) condition of properties. The contribution examines due diligence reports on more than twenty office and retail properties. More than ten different companies generated the reports between 2006 and 2016. The research provides a standardized TDD reporting structure which will be of relevance for both research and practice. To define the relevant information for the report, document classes are reviewed and the data they contain prioritized. On this basis, various document classes are analyzed and relevant text passages are segmented. A framework is developed to extract data from the documents, store it, and provide it in a standardized form. Moreover, the current use of machine learning in DD processes, the research method and framework used for the automation of TDD, and its potential benefits for transactions and risk management are presented.
    Keywords: Artificial Intelligence; digital building documentation; Due diligence; Machine Learning; Real estate transactions
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_208&r=all
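    As a rough illustration of the information-extraction idea above, the sketch below routes text passages from building documents to sections of a standardized TDD report using a TF-IDF representation and a linear classifier. The section names, training passages and pipeline choice are hypothetical; they do not reproduce the paper's framework.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Tiny labelled sample; a real system would train on many passages
      # segmented from due diligence documents.
      train_passages = [
          "The roof membrane shows water damage and requires repair.",
          "Fire alarm system inspected in 2015, certificate attached.",
          "Asbestos survey found no contamination in the basement.",
          "Elevator maintenance contract expires next year.",
      ]
      train_sections = ["structure", "fire_safety", "environment", "building_services"]

      clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      clf.fit(train_passages, train_sections)

      new_passage = "Cracks were observed in the facade near the loading dock."
      print(clf.predict([new_passage])[0])   # routes the passage to a report section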
  5. By: Yi Wu; Sotiris Tsolacos
    Abstract: There is a plethora of standard time-series techniques for forecasting, including ARIMA, ARIMAX, spectral analysis and decomposition. A requirement for the application of these techniques is some degree of correlation in the series (e.g. the AR terms) and past effects from innovations. These properties imply that each observation is partially predictable from previous observations, from previous random spikes, or from both. An obvious assumption is that the correlations inherent in the data have been adequately modeled. Thus, after a model has been built, any leftover variation (the residuals) is assumed to be i.i.d.: independent and normally distributed with mean zero and constant variance over time. There is no further information in the residuals that can be used in the model. Implicit in these techniques is the notion that existing patterns in the time series will continue into the future. These standard techniques work well for short-term prediction, but are less effective in capturing the characteristics of the data over longer horizons. ARIMA, for instance, gives more weight to the most recent data points and performs well for them, but further ahead the variance of the predicted output grows. Due to the dynamic nature of time-series data, these assumptions are often not met when there is non-linear autocorrelation in the series. Non-linearities in the data can be addressed efficiently with deep learning techniques. Time-series data are often subject to sequence dependence, which deep learning techniques such as RNNs can handle because they are adaptive in nature. Other variants of deep learning, such as LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) networks, can be trained on long histories to pick up the true dynamics of a series and achieve better modeling and forecasting results. Investors and real estate analysts are increasingly coming across deep learning methods for market analysis and portfolio construction. We investigate potential forecast gains from adopting these models over conventional time-series models. We make use of monthly city-level data series for Europe produced by RCA. We are interested in both directional and point forecasts. The forecast evaluation takes place over different time horizons and applies conventional forecast assessment metrics, including the Diebold-Mariano test.
    Keywords: Artificial Intelligence; deep learning; forecast gains
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_212&r=all
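    The comparison described above can be sketched on synthetic data: rolling one-step ARIMA forecasts against an LSTM trained on lagged windows, with a simple Diebold-Mariano-style statistic on the squared-error loss differential. The series, window length, model order and network size are illustrative assumptions, not the paper's RCA data or specification.

      import numpy as np
      import tensorflow as tf
      from tensorflow.keras import layers
      from statsmodels.tsa.arima.model import ARIMA
      from scipy import stats

      rng = np.random.default_rng(0)
      y = np.cumsum(rng.normal(0.2, 1.0, 240))        # synthetic monthly index
      train, test = y[:200], y[200:]

      # Rolling one-step ARIMA(1,1,1) forecasts.
      arima_fc, history = [], list(train)
      for obs in test:
          fit = ARIMA(history, order=(1, 1, 1)).fit()
          arima_fc.append(fit.forecast(1)[0])
          history.append(obs)
      arima_fc = np.array(arima_fc)

      # LSTM trained on 12-month lagged windows.
      def windows(series, k=12):
          X = np.array([series[i:i + k] for i in range(len(series) - k)])
          return X[..., None], series[k:]

      X_tr, t_tr = windows(train)
      model = tf.keras.Sequential([layers.Input(shape=(12, 1)),
                                   layers.LSTM(16), layers.Dense(1)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X_tr, t_tr, epochs=50, verbose=0)

      lstm_fc, hist = [], list(train)
      for obs in test:
          x = np.array(hist[-12:])[None, :, None]
          lstm_fc.append(float(model.predict(x, verbose=0)[0, 0]))
          hist.append(obs)
      lstm_fc = np.array(lstm_fc)

      # Diebold-Mariano-style comparison of squared-error losses.
      d = (test - arima_fc) ** 2 - (test - lstm_fc) ** 2
      dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
      print("DM statistic:", dm, "p-value:", 2 * (1 - stats.norm.cdf(abs(dm))))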
  6. By: Marina Koelbl
    Abstract: Textual sentiment analysis provides an increasingly important approach to address many pivotal questions in behavioral finance, not least because in today's world a huge amount of information is stored as text instead of numeric data (Nasukawa and Nagano, 2001). For example, Chen et al. (2014) analyze articles published on Seeking Alpha and find the fraction of negative words to be correlated with contemporaneous and subsequent stock returns. Tetlock (2007) emphasizes that high values of media pessimism induce downward pressure on market prices. Moreover, Li (2010) and Davis et al. (2012) investigate corporate disclosures such as earnings press releases and annual and quarterly reports and find disclosure tone to be associated with future firm performance. Sentiment analysis has also garnered increased attention in related real estate research in recent years. For example, Ruscheinsky et al. (forthcoming) extract sentiment from newspaper articles and analyze the relationship between measures of sentiment and US REIT prices. However, sentiment analysis in real estate still lags behind. Whereas related research in accounting and finance investigates multiple disclosure outlets such as news media, public corporate disclosures, analyst reports, and internet postings, the real estate literature covers only a limited spectrum. Although corporate disclosures are a natural source of textual sentiment for researchers, since they are official releases from insiders who have better knowledge of the firm than outsiders (e.g. media persons), they have not yet been analyzed in a real estate context (Kearney and Liu, 2014). Observing annual and quarterly reports of U.S. REITs present in the NAREIT over a 15-year timespan (2003-2017), this study examines whether the information disclosed in the Management's Discussion and Analysis (MD&A) of U.S. REITs is associated with future firm performance and generates a market response. The MD&A is particularly suitable for this analysis because the U.S. Securities and Exchange Commission (SEC) mandates that publicly traded firms signal expectations regarding future firm performance in this section (SEC, 2003). To assess the tone of the MD&A, the Loughran and McDonald (2011) financial dictionary as well as a machine learning approach are employed. To allow a deeper understanding of disclosure practices, the study also observes the readability of the MD&A and the topics discussed in this section, to examine whether those aspects are linked to either disclosure tone or future firm performance. To the best of my knowledge, this is the first study to analyze, exclusively for REITs, whether language in the MD&A is associated with future firm performance and whether the market responds to unexpected levels of sentiment.
    Keywords: 10K; REITs; Sentiment; Textual Analysis
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_281&r=all
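    Dictionary-based tone scoring of the kind described above can be sketched in a few lines. The tiny word lists below are placeholders standing in for the full Loughran and McDonald (2011) word lists, and the net-tone formula (positive minus negative words over total words) is one common convention rather than the paper's exact measure.

      import re

      # Placeholder word lists; the full Loughran-McDonald dictionary would
      # normally be loaded from its published word lists.
      LM_POSITIVE = {"improve", "gain", "strong", "favorable", "growth"}
      LM_NEGATIVE = {"loss", "decline", "impairment", "adverse", "default"}

      def tone(text):
          words = re.findall(r"[a-z]+", text.lower())
          pos = sum(w in LM_POSITIVE for w in words)
          neg = sum(w in LM_NEGATIVE for w in words)
          total = len(words)
          # Net tone: (positive - negative) share of all words.
          return (pos - neg) / total if total else 0.0

      mdna = ("Occupancy showed strong growth this quarter, although we recorded "
              "an impairment on one retail property and expect adverse conditions.")
      print(round(tone(mdna), 4))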
  7. By: Mario Bodenbender; Björn-Martin Kurzrock
    Abstract: Data rooms are becoming increasingly important to the real estate industry. They permit the creation of protected areas in which a variety of relevant documents are typically made available to interested parties. In addition to supporting purchase and sales processes, they are used primarily in larger construction projects. The structures and index designations of data rooms have not yet been uniformly regulated on an international basis. Data room indices are created following different types of approaches, and thus the indices also diverge in their depth of detail as well as in the range of topics. In practice, rules already exist for structuring documentation for individual phases, as well as for transferring data between these phases. Since all of the documentation must be transferable when changing to another life cycle phase or participant, the information must always be clearly identified and structured in order to enable the protection, access and administration of this information at all times. This poses a challenge for companies because the documents are subject to several rounds of restructuring during their life cycle, which are not only costly but also always entail the risk of data loss. The goal of current research is therefore seamless storage as well as a permanent and unambiguous classification of the documents over the individual life cycle phases. In the field of text classification, machine learning offers considerable potential in terms of reduced workload, process acceleration and quality improvement. In data rooms, machine learning (in particular document classification) is used to automatically classify the documents contained in the data room, or the documents to be imported, and assign them to a suitable index point. In this manner, a document is always assigned to the class to which it belongs with the greatest probability (e.g., based on word frequency). An essential prerequisite for the success of machine learning for document classification is the quality of the document classes as well as of the training data. When defining the document classes, it must be guaranteed on the one hand that these do not overlap in terms of their content, so that it is possible to clearly allocate the documents thematically. On the other hand, it must also be possible to accommodate documents that may appear later and to scale the model according to the requirements. For the training and test sets, as well as for the documents to be analyzed later, the quality of the respective documents and their readability are also decisive factors. In order to analyze the documents effectively, the content must also be standardized and it must be possible to remove non-relevant content in advance. Based on the empirical analysis of 8,965 digital documents of fourteen properties from eight different owners, the paper presents a model with more than 1,300 document classes as a basis for an automated structuring and migration of documents over the life cycle of real estate. To validate these classes, machine learning algorithms were trained and analyzed to determine under which conditions, and how, the highest possible classification accuracy can be achieved. Stemmer and stop-word lists used specifically for these analyses were also developed for this purpose. Using these lists, the accuracy of a classification by machine learning is further increased, since they are specifically aligned to terms used in the real estate industry. The paper also shows which aspects have to be taken into account at an early stage when digitizing extensive data/document inventories, since automation using machine learning can only be as good as the quality, legibility and interpretability of the data allow.
    Keywords: data room; Digitization; document classification; Machine Learning; real estate data
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_370&r=all
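    A minimal sketch of word-frequency-based document classification with a stemmer and a domain stop-word list is shown below. The class names, stop words and training texts are invented for illustration; the paper's 1,300-class model and its purpose-built stemmer and stop-word lists are not reproduced here.

      from nltk.stem.snowball import SnowballStemmer
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      stemmer = SnowballStemmer("english")
      DOMAIN_STOP_WORDS = {"building", "property", "page", "appendix"}

      def stem_tokens(text):
          # Lowercase, drop domain stop words, stem the rest.
          return [stemmer.stem(t) for t in text.lower().split()
                  if t not in DOMAIN_STOP_WORDS]

      docs = [
          "lease agreement between landlord and tenant for office space",
          "fire safety inspection certificate issued by the authority",
          "maintenance contract for heating ventilation and air conditioning",
          "land register extract showing ownership and encumbrances",
      ]
      classes = ["tenancy", "fire_safety", "facility_management", "legal"]

      pipeline = make_pipeline(
          TfidfVectorizer(tokenizer=stem_tokens, token_pattern=None),
          MultinomialNB(),
      )
      pipeline.fit(docs, classes)
      print(pipeline.predict(["annual elevator maintenance and servicing report"]))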
  8. By: Lynn Johnson
    Abstract: The definition of an AVM has been disputed across academia, with a number of references excluding human interaction and suggesting that models need to work independently of a professional. RICS (2008) describes an AVM as 'one or more mathematical techniques to provide an estimate of value of a specified property at a specified date, accompanied by a measure of confidence in the accuracy of the result, without human intervention post initiation.' An AVM is a mathematical model operated through computer software to determine the Market Value or Market Rent of a property. There are several types of AVM, including Artificial Neural Networks and Multiple Regression Analysis. According to Susskind and Susskind (2015), the professions are becoming antiquated, opaque, no longer affordable and unsustainable in an era of increasingly capable expert systems. This sentiment appears to be spreading within the real estate world: Property Technology, or PropTech, is now a term which features heavily in the real estate press, in professional bodies and, to some extent, in academia. In its 2017 publication on the impact of emerging technologies on the surveying profession and a subsequent (2017) paper on the future of valuation, RICS considers how the valuation process is undertaken and managed. It identifies two main issues or disruptors, technological developments and changing client expectations, both of which may increase pressure on the profession to adopt AVMs in both residential and commercial real estate. As Klaus Schwab, founder and executive chairman of the World Economic Forum, stated in 2016, there is a revolution which is fundamentally altering the way we live and work; it is providing huge opportunities for business growth but also circumstances for disruptive innovation. Most research on AVMs has explored how they are employed in residential markets (see Boshoff & De Kock, 2013). It is apparent that European countries such as Germany, Romania, the Netherlands and, most recently, Sweden have all introduced AVM legislation to ensure quality and assurance within current valuation processes (European Mortgage Federation, 2017). However, there is little research into the implementation of AVMs in commercial real estate. Gilbertson and Preston (2005) believed that this was due to the lack of transparency and accurate data available for commercial property transactions. A salient point established by Boshoff and De Kock (2013) is that many professionals consider commercial valuations intricate; commercial property is classed as heterogeneous and not easily fungible (in comparison to stocks and shares). Illustrating this complexity, recent research by Amidu and Boyd (2017) suggests that when commercial real estate professionals undertake valuations, they are problem solvers: they tend to use their own tacit knowledge and expertise of markets. There are opportunities and challenges for all involved with commercial real estate valuations. However, the challenges may outweigh the opportunities, and many questions remain unanswered. Do AVMs provide the level of certainty needed by clients? Will they be prone to fraudulent activity? Most importantly, what level of accuracy can be achieved? As Amidu and Boyd (2017) argue, commercial real estate professionals are problem solvers and use tacit knowledge. Are these challenges international, or wholly attributable to the UK markets? The aim of this research is to consider the differences in the challenges and opportunities of AVMs internationally.
    Keywords: AVM; Commercial; International; Tacit Knowledge; Valuation
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_189&r=all
  9. By: Guillermo Jr. Cardenas Salgado; Luis Antonio Espinosa; Juan Jose Li Ng; Carlos Serrano
    Abstract: Point-of-sale (POS) transactions at gas stations in the Valle de México metropolitan area are analyzed by day and hour. The crisis began at noon on Tuesday, January 8, lasted 13 days and ended on January 20; 16% more gasoline was loaded per transaction, and purchases late at night and in the early morning increased by up to 400%.
    Keywords: trends, gasoline, crisis, Big Data, incomplete markets, Mexico, analysis with Big Data, regional analysis Mexico, consumption, energy and commodities, working papers
    JEL: D45 D52 H44 Q31
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:bbv:wpaper:1909&r=all
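    The day-by-hour aggregation behind the analysis above can be sketched with pandas. The synthetic transactions, column names and the January 8 cut-off used below are illustrative placeholders; the actual POS data are not reproduced here.

      import numpy as np
      import pandas as pd

      # Synthetic stand-in for gas-station POS transactions (timestamp, liters).
      rng = np.random.default_rng(1)
      stamps = pd.date_range("2019-01-01", "2019-01-31", freq="min")
      tx = pd.DataFrame({
          "timestamp": rng.choice(stamps, size=50_000),
          "liters": rng.gamma(shape=4.0, scale=10.0, size=50_000),
      })
      tx["hour"] = tx["timestamp"].dt.hour

      pre = tx[tx["timestamp"] < "2019-01-08"]          # before the shortage
      post = tx[tx["timestamp"] >= "2019-01-08"]        # from the onset onward

      def late_night_ops_per_day(df):
          night = df[df["hour"] < 4]                    # 00:00-03:59 purchases
          return night.groupby(night["timestamp"].dt.date).size().mean()

      print("liters per operation, pre vs post: %.1f vs %.1f"
            % (pre["liters"].mean(), post["liters"].mean()))
      print("change in late-night operations per day: %.0f%%"
            % (100 * (late_night_ops_per_day(post) / late_night_ops_per_day(pre) - 1)))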
  10. By: Qiong Wu; Zheng Zhang; Andrea Pizzoferrato; Mihai Cucuringu; Zhenming Liu
    Abstract: We propose an integrated deep learning architecture for stock movement prediction. Our architecture simultaneously leverages all available alpha sources: technical signals, financial news signals, and cross-sectional signals. It possesses three main properties. First, it avoids overfitting: although it consumes a large number of technical signals, it has better generalization properties than linear models. Second, it effectively captures the interactions between signals from different categories. Third, it has low computation cost: we design a graph-based component that extracts cross-sectional interactions and circumvents the SVD required in standard models. Experimental results on real-world stock market data show that our approach outperforms existing baselines. Meanwhile, results from different trading simulators demonstrate that we can effectively monetize the signals.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.04497&r=all
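    A toy multi-input network combining the three signal categories named above is sketched below. The layer sizes, input dimensions and binary up/down target are illustrative assumptions; the authors' actual architecture, including its graph-based cross-sectional component, is not reproduced.

      import numpy as np
      import tensorflow as tf
      from tensorflow.keras import layers, Model

      def build_model(n_tech=20, n_news=50, n_cross=10):
          tech = layers.Input(shape=(n_tech,), name="technical")
          news = layers.Input(shape=(n_news,), name="news")
          cross = layers.Input(shape=(n_cross,), name="cross_sectional")
          merged = layers.Concatenate()([
              layers.Dense(16, activation="relu")(tech),
              layers.Dense(16, activation="relu")(news),
              layers.Dense(16, activation="relu")(cross),
          ])
          hidden = layers.Dense(32, activation="relu")(merged)
          out = layers.Dense(1, activation="sigmoid", name="up_move")(hidden)
          model = Model([tech, news, cross], out)
          model.compile(optimizer="adam", loss="binary_crossentropy",
                        metrics=["accuracy"])
          return model

      # Synthetic placeholder features and next-period up/down labels.
      n = 1_000
      X = [np.random.rand(n, 20), np.random.rand(n, 50), np.random.rand(n, 10)]
      y = np.random.randint(0, 2, n)
      model = build_model()
      model.fit(X, y, epochs=2, batch_size=64, verbose=0)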
  11. By: Johnson, Matthew S; Levine, David I; Toffel, Michael W
    Abstract: We study how a regulator can best allocate its limited inspection resources. We direct our analysis to a US Occupational Safety and Health Administration (OSHA) inspection program that targeted dangerous establishments and allocated some inspections via random assignment. We find that inspections reduced serious injuries by an average of 9% over the following five years. We use new machine learning methods to estimate the effects of counterfactual targeting rules OSHA could have deployed. OSHA could have averted over twice as many injuries if its inspections had targeted the establishments where we predict inspections would avert the most injuries. The agency could have averted nearly as many additional injuries by targeting the establishments predicted to have the most injuries. Both of these targeting regimes would have generated over $1 billion in social value over the decade we examine. Our results demonstrate the promise, and limitations, of using machine learning to improve resource allocation.
    Keywords: Social and Behavioral Sciences, Public Policy
    JEL: I18 L51 J38 J8
    Date: 2019–09–01
    URL: http://d.repec.org/n?u=RePEc:cdl:indrel:qt1gq7z4j3&r=all
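    The "target the predicted-riskiest establishments" counterfactual described above can be sketched as a simple predict-then-rank exercise. The features, synthetic data and gradient-boosting model below are illustrative assumptions, not OSHA microdata or the authors' estimator.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5_000
      X = rng.normal(size=(n, 5))            # e.g., size, industry risk, past claims
      injuries = rng.poisson(np.exp(0.5 * X[:, 0] + 0.3 * X[:, 1]))

      X_tr, X_te, y_tr, y_te = train_test_split(X, injuries, random_state=0)
      model = GradientBoostingRegressor().fit(X_tr, y_tr)

      budget = 200                                     # inspections available
      ranked = np.argsort(-model.predict(X_te))        # predicted riskiest first
      random_pick = rng.choice(len(y_te), budget, replace=False)
      print("injuries at targeted plants:", y_te[ranked[:budget]].sum())
      print("injuries at randomly chosen plants:", y_te[random_pick].sum())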
  12. By: Nikita Kozodoi; Panagiotis Katsas; Stefan Lessmann; Luis Moreira-Matias; Konstantinos Papakonstantinou
    Abstract: Credit scoring models support loan approval decisions in the financial services industry. Lenders train these models on data from previously granted credit applications, where the borrowers' repayment behavior has been observed. This approach creates sample bias: the scoring model (i.e., classifier) is trained on accepted cases only. Applying the resulting model to screen credit applications from the population of all borrowers degrades model performance. Reject inference comprises techniques to overcome sampling bias by assigning labels to rejected cases. The paper makes two contributions. First, we propose a self-learning framework for reject inference. The framework is geared toward real-world credit scoring requirements by considering distinct training regimes for iterative labeling and model training. Second, we introduce a new measure to assess the effectiveness of reject inference strategies. Our measure leverages domain knowledge to avoid artificial labeling of rejected cases during strategy evaluation. We demonstrate that this approach offers a robust and operational assessment of reject inference strategies. Experiments on a real-world credit scoring data set confirm the superiority of the adjusted self-learning framework over regular self-learning and previous reject inference strategies. We also find strong evidence that the proposed evaluation measure assesses reject inference strategies more reliably, raising the performance of the eventual credit scoring model.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.06108&r=all
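    Plain self-learning for reject inference, the baseline the paper adjusts, can be sketched as follows: train on accepted applications, then repeatedly pseudo-label the most confidently scored rejects and retrain. The confidence thresholds, number of rounds and synthetic data are illustrative assumptions, not the paper's calibrated framework.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      # Accepted applications with observed repayment labels.
      X_acc = rng.normal(size=(1_000, 4))
      y_acc = (X_acc[:, 0] + rng.normal(size=1_000) > 0).astype(int)
      # Rejected applications: no labels, riskier on average.
      X_rej = rng.normal(loc=-0.5, size=(500, 4))

      X_lab, y_lab = X_acc.copy(), y_acc.copy()
      unlabeled = X_rej.copy()

      for _ in range(3):                               # a few self-learning rounds
          clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
          if len(unlabeled) == 0:
              break
          proba = clf.predict_proba(unlabeled)[:, 1]
          confident = (proba > 0.9) | (proba < 0.1)    # pseudo-label only confident cases
          X_lab = np.vstack([X_lab, unlabeled[confident]])
          y_lab = np.concatenate([y_lab, (proba[confident] > 0.5).astype(int)])
          unlabeled = unlabeled[~confident]

      print("final training size:", len(y_lab))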
  13. By: Jay Mittal; Sweta Byahut
    Abstract: This research uses a hedonic price modelling framework to assess the marginal implicit price effect of conservation easement (CE) lands on single-family houses in Worcester, MA. A house price premium is anticipated to rise with the visual accessibility of CE lands from the home. The CE lands of interest here are voluntarily protected, privately owned, scenic lands located in the urbanized area of the City of Worcester, MA. The premium and the visual accessibility were measured using transactions of the surrounding homes and the homes' spatial relationship with the CE lands. These CE lands are perpetually protected, with natural, historic, and scenic characteristics that are attractive to environmental amenity seekers. The home premium capitalized from the visual accessibility of protected lands was measured using a combined weighted measure of 'view' and 'proximity', developed using an index inspired by Huff's gravity model, the Gravity-Inspired Visibility Index (GIVI). First, a detailed digital elevation model (DEM) raster, with all view-obstructing buildings and physical structures stitched onto the topographic surface, was generated; then the views and distances from homes to scenic lands were used to generate the GIVI, using viewshed analysis in ArcGIS. A geographically weighted regression (GWR) based hedonic model was then employed to measure the combined effect of both distance and view of scenic lands from each home. Both the global model (adjusted R-squared = 0.52, AICc = 29,828) and the GWR model (adjusted R-squared = 0.59, AICc = 29,729) estimated the price effect, and the GWR model outperformed the global model. The results from the GWR model indicated an average 3.4% price premium on the mean value of homes in the study area. The spatial variation in home premiums (as percentage values) was also clearer and more spatially clustered in the GWR model. The highest premium for selected homes in the sample was as high as 34.6% of the mean home price, a significant effect of visual accessibility to preserved scenic lands. This research offers a useful framework for evaluating the effect of land protection for land use planning, land conservation and real estate valuation purposes. It also offers useful insights for conservation agencies, local governments, professional planners, and real estate professionals when prioritizing land sites with scenic views.
    Keywords: Conservation Easement (Environmental Amenity); Geographically weighted regression (GWR); Hedonic Price Modeling (HPM); Real Estate Valuation; Viewshed in GIS
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_242&r=all
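    A gravity-style visibility index of the kind described above can be sketched as visible scenic area weighted by inverse squared distance, entered into a hedonic regression. The distance-decay exponent, variables and synthetic data are illustrative assumptions, and a plain OLS fit stands in for the paper's ArcGIS viewshed step and GWR estimation (for which Python packages such as mgwr exist).

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 800
      sqft = rng.normal(1_800, 400, n)                 # house size
      visible_area = rng.gamma(2.0, 1.0, n)            # viewshed area of scenic land
      distance_km = rng.uniform(0.2, 5.0, n)           # distance to scenic land

      # Gravity-inspired visibility index: attraction (view) / distance^2.
      givi = visible_area / distance_km ** 2

      # Synthetic log sale prices embedding a small view premium.
      log_price = 11 + 0.0004 * sqft + 0.03 * np.log1p(givi) + rng.normal(0, 0.15, n)

      X = sm.add_constant(np.column_stack([sqft, np.log1p(givi)]))
      ols = sm.OLS(log_price, X).fit()
      print(ols.params)                                # coefficient on log(1+GIVI) ~ view premium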
  14. By: Minkyung Kim (School of Management, Yale University); K. Sudhir (Cowles Foundation & School of Management, Yale University); Kosuke Uetake (School of Management, Yale University)
    Abstract: We develop the first structural model of a multitasking salesforce to address questions of job design and incentive compensation design. The model incorporates three novel features: (i) multitasking effort choice given a multidimensional incentive plan; (ii) the salesperson's private information about customers; and (iii) dynamic intertemporal tradeoffs in effort choice across the tasks. The empirical application uses data from a microfinance bank where loan officers are jointly responsible, and incentivized, for both loan acquisition and repayment, but it has broad relevance for salesforce management in CRM settings involving customer acquisition and retention. We extend two-step estimation methods used for unidimensional compensation plans to the multitasking model with private information and intertemporal incentives by combining flexible machine learning (random forests) for the inference of private information with the first-stage multitasking policy function estimation. Estimates reveal two latent segments of salespeople: a "hunter" segment that is more efficient in loan acquisition and a "farmer" segment that is more efficient in loan collection. We use counterfactuals to assess how (i) multitasking versus specialization in job design, (ii) the combination of performance across tasks (multiplicative versus additive), and (iii) job transfers that affect private information impact firm profits and segment-specific behaviors.
    Keywords: Salesforce compensation, Multitasking, Multi-dimensional incentives, Private information, Adverse selection, Moral hazard
    JEL: C61 J33 L11 L23 L14 M31 M52 M55
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2199&r=all
  15. By: Iqbal, Muhammad; Alam Kazmi, Syed Hasnain; Manzoor, Dr. Amir; Rehman Soomrani, Dr. Abdul; Butt, Shujaat Hussain; Shaikh, Khurram Adeel
    Abstract: In today's world, data is considered an extremely valuable asset and its volume is increasing exponentially every day. This voluminous data is also known as Big Data. Big Data can be described by 3Vs: the extreme Volume of data, the wide Variety of data types, and the Velocity required to process the data. Companies across the globe, from multinationals to small and medium enterprises (SMEs), are discovering avenues to use this data for their business growth. The use of Big Data is of foremost importance for bringing significant change in business growth. Nowadays, most business organizations, small or large, want valuable and accurate information in the decision-making process. Big Data can help SMEs to anticipate their target audience and customer preferences and needs. Simply put, there is a dire need for SMEs to seriously consider Big Data adoption. This study focuses on SMEs because they are the backbone of any economy and have the ability and flexibility to adapt more quickly to productivity-enhancing changes. Big Data raises several contentious issues, such as the computing infrastructure suitable for storing and processing the data and producing useful information from it, as well as security and privacy concerns. The objective of this study is to survey the main potentials of and threats to Big Data and to propose best practices for Big Data usage in SMEs to improve their business processes.
    Keywords: SME; Big Data; Efficiency; Analytics; Competitive Advantage
    JEL: C8 M1 M15
    Date: 2018–03–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:96034&r=all
  16. By: Toshikuni Sato
    Abstract: This study proposes a method to evaluate construct validity for a nonlinear measurement model. Construct validation is required when applying measurement and structural equation models to questionnaire data in consumer and related social science research. However, previous studies have not sufficiently discussed the nonlinear measurement model and its construct validation. This study focuses on convergent and discriminant validation as important processes for checking whether estimated latent variables represent the defined constructs. To assess convergent and discriminant validity in the nonlinear measurement model, previous methods are extended and new indexes are investigated in simulation studies. An empirical analysis is also provided, which shows that a nonlinear measurement model is better than a linear model in both fit and validity. Moreover, a new concept of construct validation is discussed for future research: it considers the interpretability of machine learning (such as neural networks), because construct validation plays an important role in interpreting latent variables.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:101&r=all
  17. By: Terans Gunawardhana; Kanchana Perera
    Abstract: A large body of literature suggests that, with the development of digital technologies, many industries tend to change their business models, strategies and applications. Accordingly, some scholars argue that construction industries face significant challenges as more processes are digitised and automated. This study therefore focused on reviewing how developing technologies affect the sustainability of the future construction industry. The research aimed to examine the adaptability of digital technologies in the future of the Sri Lankan construction industry. The objectives were to identify the current level of application of modern technologies towards sustainable practices in the Sri Lankan construction industry, to determine possible developments in advanced technologies towards sustainable practices, and to explore the potential issues of modern technologies for sustainable practices in the Sri Lankan construction industry and solutions to them. A qualitative approach was adopted to attain the aim and objectives of the research. A content analysis was conducted on the responses received from semi-structured interviews and validated through stakeholder analysis. One significant finding is that a lack of awareness of the advantages of adopting technologies in construction industry activities has become a severe problem; actions should therefore be taken to increase the knowledge of the entire industry. Some limitations were identified throughout the research process; time, in particular, was a crucial constraint, especially for data collection. The results suggest future research to assess the economic, social and environmental effects of technologies used in the construction industry and to develop a framework for understanding the future role of each expert in the Sri Lankan construction industry as technologies change.
    Keywords: Big data; Construction Industry; Digital Technologies; sustainability; Technologies Adaptability
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_85&r=all
  18. By: Anastasopoulos, Jason (University of Georgia); Borjas, George J. (Harvard University); Cook, Gavin G. (Princeton University); Lachanski, Michael (Princeton University)
    Abstract: Beginning in 1951, the Conference Board constructed a monthly job vacancy index by counting the number of help-wanted ads published in local newspapers in 51 metropolitan areas. We use the Help-Wanted Index (HWI) to document how immigration changes the number of job vacancies in the affected labor markets. Our analysis revisits the Mariel episode. The data reveal a marked drop in Miami's HWI relative to many alternative control groups in the first 4 or 5 years after Mariel, followed by recovery afterwards. The Miami evidence is consistent with the observed relation between immigration and the HWI across all metropolitan areas in the 1970-2000 period: these spatial correlations suggest that more immigration reduces the number of job vacancies. We also explore some of the macro implications of the Mariel supply shock and show that Miami's Beveridge curve shifted inwards by the mid-1980s, suggesting a more efficient labor market, in contrast to the outward nationwide shift coincident with the onset of the 1980-1982 recession. Finally, we examine the text of the help-wanted ads published in a number of newspapers and document a statistically and economically significant post-Mariel decline in the relative number of low-skill vacancies advertised in the Miami Herald.
    Keywords: job vacancies, immigration, Beveridge Curve
    JEL: J23 J6
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12581&r=all
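    Classifying ad text into low-skill versus other vacancies, as in the final step described above, can be sketched with a keyword rule. The keyword list, sample ads and monthly aggregation below are illustrative assumptions, not the authors' classification scheme or newspaper corpus.

      import pandas as pd

      # Hypothetical low-skill indicator phrases.
      LOW_SKILL_TERMS = {"dishwasher", "janitor", "laborer", "no experience", "housekeeper"}

      ads = pd.DataFrame({
          "month": ["1980-03", "1980-03", "1980-09", "1980-09"],
          "text": [
              "Dishwasher wanted, no experience necessary, apply in person",
              "Registered nurse needed for night shift, license required",
              "Housekeeper for downtown hotel, weekends",
              "Accountant, CPA preferred, 5 years experience",
          ],
      })

      ads["low_skill"] = ads["text"].str.lower().apply(
          lambda t: any(term in t for term in LOW_SKILL_TERMS))
      print(ads.groupby("month")["low_skill"].mean())   # low-skill share of ads by month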
  19. By: Estolatan, Eric; Geuna, Aldo (University of Turin)
    Abstract: The case studies described in this paper investigate the evolution of the knowledge bases of two leading EU robotics firms, KUKA and COMAU. The analysis adopts an evolutionary perspective and a systems approach to examine a set of derived patent-based measures and to explore firm behavior in technological knowledge search and accumulation. The investigation is supplemented by analyses of the firms' historical archives, firm strategies and the prevailing economic context at selected periods. Our findings suggest that while these enterprises maintain an outward-looking innovation propensity and a diversified knowledge base, they tend to have a higher preference for continuity and stability of their existing technical knowledge sets. The two companies exhibit partially different responses to the common and ongoing broader change in the robotics industry (i.e. the emergence of artificial intelligence and ICT for application to robotics); KUKA is shown to be more outward-looking than COMAU. Internal restructuring, economic shocks and firm specificities are found to be stronger catalysts of change than external technology-based stimuli.
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:uto:labeco:201904&r=all
  20. By: Estolatan, Eric; Geuna, Aldo (University of Turin)
    Abstract: The case studies described in this paper investigate the evolution of the knowledge bases of two leading EU robotics firms, KUKA and COMAU. The analysis adopts an evolutionary perspective and a systems approach to examine a set of derived patent-based measures and to explore firm behavior in technological knowledge search and accumulation. The investigation is supplemented by analyses of the firms' historical archives, firm strategies and the prevailing economic context at selected periods. Our findings suggest that while these enterprises maintain an outward-looking innovation propensity and a diversified knowledge base, they tend to have a higher preference for continuity and stability of their existing technical knowledge sets. The two companies exhibit partially different responses to the common and ongoing broader change in the robotics industry (i.e. the emergence of artificial intelligence and ICT for application to robotics); KUKA is shown to be more outward-looking than COMAU. Internal restructuring, economic shocks and firm specificities are found to be stronger catalysts of change than external technology-based stimuli.
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:uto:dipeco:201916&r=all
  21. By: Maarten Vanhoof (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Antonia Godoy-Lorite (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Roberto Murcio (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Iacopo Iacopini (The Alan Turing Institute, CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Natalia Zdanowska (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Juste Raimbault (ISC-PIF - Institut des Systèmes Complexes - Paris Ile-de-France - ENS Cachan - École normale supérieure - Cachan - UP1 - Université Panthéon-Sorbonne - UP11 - Université Paris-Sud - Paris 11 - X - École polytechnique - Institut Curie - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique, CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Richard Milton (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Elsa Arcaute (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London]); Mike Batty (CASA - Centre for Advanced Spatial Analysis - UCL - University College of London [London])
    Date: 2019–07–08
    URL: http://d.repec.org/n?u=RePEc:hal:journl:halshs-02284843&r=all

This nep-big issue is ©2019 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.