Papers on Big Data |
By: | Bertin Martens (European Commission – JRC - IPTS); Songül Tolan (European Commission – JRC) |
Abstract: | There is a long-standing economic research literature on the impact of technological innovation and automation in general on employment and economic growth. Traditional economic models trade off a negative displacement or substitution effect against a positive complementarity effect on employment. Economic history since the industrial revolution strongly supports the view that the net effect on employment and incomes is positive, though recent evidence points to a declining labour share in total income. There are concerns that with artificial intelligence (AI) "this time may be different". The state-of-the-art task-based model creates an environment where humans and machines compete for the completion of tasks. It emphasizes the labour substitution effects of automation. This has been tested on data on robots, with mixed results. However, the economic characteristics of rival robots are not comparable with non-rival and scalable AI algorithms, which may constitute a general purpose technology and may themselves accelerate the pace of innovation. These characteristics give a hint that this time might indeed be different. However, there is as yet very little empirical evidence that relates AI or Machine Learning (ML) to employment and incomes. General growth models can only present a wide range of highly diverging and hypothetical scenarios, from growth implosion to an optimistic future with growth acceleration. Even extreme scenarios of displacement of men by machines offer hope for an overall wealthier economic future. The literature is clearer on the negative implications that automation may have for income equality. Redistributive policies to counteract this trend will have to incorporate behavioural responses to such policies. We conclude that there are some elements that suggest that the nature of AI/ML is different from previous technological change, but there is no empirical evidence yet to underpin this view. |
Keywords: | labour markets, employment, technological change, task-based model, artificial intelligence, income distribution |
JEL: | J62 O33 |
Date: | 2018–08 |
URL: | http://d.repec.org/n?u=RePEc:ipt:decwpa:2018-08&r=all |
By: | Catherine D'Hondt; Rudy De Winne; Eric Ghysels; Steve Raymond |
Abstract: | Artificial intelligence, or AI, enhancements are increasingly shaping our daily lives. Financial decision-making is no exception. We introduce the notion of AI Alter Egos, which are shadow robo-investors, and use a unique data set covering brokerage accounts for a large cross-section of investors over a sample from January 2003 to March 2012, which includes the 2008 financial crisis, to assess the benefits of robo-investing. We have detailed investor characteristics and records of all trades. Our data set consists of investors typically targeted for robo-advising. We explore robo-investing strategies commonly used in the industry, including some involving advanced machine learning methods. The man versus machine comparison allows us to shed light on potential benefits the emerging robo-advising industry may provide to certain segments of the population, such as low-income and/or highly risk-averse investors. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.03370&r=all |
By: | Bertin Martens (European Commission – JRC - IPTS) |
Abstract: | Digitization triggered a steep drop in the cost of information. The resulting data glut created a bottleneck because human cognitive capacity is unable to cope with large amounts of information. Artificial intelligence and machine learning (AI/ML) triggered a similar drop in the cost of machine-based decision-making and help overcome this bottleneck. A substantial change in the relative price of resources puts pressure on ownership and access rights to these resources. This explains the pressure on access rights to data. ML thrives on access to big and varied datasets. We discuss the implications of access regimes for the development of AI in its current form of ML. The economic characteristics of data (non-rivalry, economies of scale and scope) favour data aggregation in big datasets. Non-rivalry implies the need for exclusive rights in order to incentivise data production when it is costly. The balance between access and exclusion is at the centre of the debate on data regimes. We explore the economic implications of several modalities for access to data, ranging from exclusive monopolistic control to monopolistic competition and free access. Regulatory intervention may push the market beyond voluntary exchanges, either towards more openness or reduced access. This may generate private costs for firms and individuals. Society can choose to do so if the social benefits of this intervention outweigh the private costs. We briefly discuss the main EU legal instruments that are relevant for data access and ownership, including the General Data Protection Regulation (GDPR), which defines the rights of data subjects with respect to their personal data, and the Database Directive (DBD), which grants ownership rights to database producers. These two instruments leave a wide legal no-man's land where data access is ruled by bilateral contracts and Technical Protection Measures that give exclusive control to de facto data holders, and by market forces that drive access, trade and pricing of data. The absence of exclusive rights might facilitate data sharing and access, or it may result in a segmented data landscape where data aggregation for ML purposes is hard to achieve. It is unclear whether incompletely specified ownership and access rights maximize the welfare of society and facilitate the development of AI/ML. |
Keywords: | digital data, ownership and access rights, trade in data, machine learning, artificial intelligence |
JEL: | L00 |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:ipt:decwpa:2018-09&r=all |
By: | Michael Allan Ribers; Hannes Ullrich |
Abstract: | Antibiotic resistance constitutes a major health threat. Predicting bacterial causes of infections is key to reducing antibiotic misuse, a leading driver of antibiotic resistance. We train a machine learning algorithm on administrative and microbiological laboratory data from Denmark to predict diagnostic test outcomes for urinary tract infections. Based on predictions, we develop policies to improve prescribing in primary care, highlighting the relevance of physician expertise and policy implementation when patient distributions vary over time. The proposed policies delay antibiotic prescriptions for some patients until test results are known and issue them immediately to others. We find that machine learning can reduce antibiotic use by 7.42 percent without reducing the number of treated bacterial infections. As Denmark is one of the most conservative countries in terms of antibiotic use, this result is likely a lower bound on what can be achieved elsewhere. (A code sketch of such a threshold policy follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.03044&r=all |
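To make the proposed policies concrete, here is a minimal sketch, on synthetic data, of how predictions from a trained classifier could drive the delay-versus-prescribe decision described in the paper above. The features, model choice and 0.8 threshold are hypothetical stand-ins, not the authors' specification.

    # Illustrative sketch (not the authors' code) of a threshold-based
    # prescribing policy driven by predicted probabilities of a bacterial cause.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))        # stand-in for admin/lab covariates
    y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # 1 = bacterial cause

    clf = GradientBoostingClassifier().fit(X[:800], y[:800])
    p_bacterial = clf.predict_proba(X[800:])[:, 1]

    # Policy: prescribe immediately when the model is confident the cause is
    # bacterial; otherwise delay the prescription until the test result arrives.
    THRESHOLD = 0.8  # hypothetical operating point
    prescribe_now = p_bacterial >= THRESHOLD
    print(f"immediate: {prescribe_now.mean():.1%}, delayed until test: {1 - prescribe_now.mean():.1%}")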
By: | Jeremy D. Turiel; Tomaso Aste |
Abstract: | Logistic Regression and Support Vector Machine algorithms, together with Linear and Non-Linear Deep Neural Networks, are applied to lending data in order to replicate lender acceptance of loans and predict the likelihood of default of issued loans. A two-phase model is proposed: the first phase predicts loan rejection, while the second predicts default risk for approved loans. Logistic Regression was found to be the best performer for the first phase, with a test set recall macro score of 77.4%. Deep Neural Networks were applied to the second phase only, where they achieved the best performance, with a validation set recall score of 72% for defaults. This shows that AI can improve current credit risk models, reducing the default risk of issued loans by as much as 70%. The models were also applied to loans taken out by small businesses alone. The first phase of the model performs significantly better when trained on the whole dataset; the second phase, by contrast, performs significantly better when trained on the small business subset. This suggests a potential discrepancy between how these loans are screened and how they should be analysed in terms of default prediction. (A minimal sketch of the two-phase structure follows this entry.) |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.01800&r=all |
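A minimal sketch of the two-phase structure described above, on synthetic data: a logistic regression screens applications, and a small neural network predicts default only on the approved subset. The feature construction and network size are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 10))                    # applicant features (synthetic)
    rejected = (X[:, 0] < -0.5).astype(int)            # phase-1 label
    default = (X[:, 1] + rng.normal(size=5000) > 1).astype(int)  # phase-2 label

    # Phase 1: replicate lender acceptance on all applications.
    phase1 = LogisticRegression(max_iter=1000).fit(X, rejected)

    # Phase 2: predict default, trained only on loans phase 1 would accept.
    accepted = phase1.predict(X) == 0
    phase2 = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500)
    phase2.fit(X[accepted], default[accepted])

    p_default = phase2.predict_proba(X[accepted])[:, 1]
    print("mean predicted default risk on accepted loans:", p_default.mean().round(3))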
By: | A Itkin |
Abstract: | Recent progress in artificial intelligence, machine learning and the computer industry has resulted in an ongoing boom in applying these techniques to complex tasks in both science and industry. The same is, of course, true for the financial industry and mathematical finance. In this paper we consider a classical problem of mathematical finance, the calibration of option pricing models to market data, as it has recently drawn attention in the financial community in the context of deep learning and artificial neural networks. We highlight some pitfalls in the existing approaches and propose resolutions that improve both the performance and the accuracy of calibration. We also address the problem of no-arbitrage pricing when using a trained neural net, which is currently ignored in the literature. (A calibration sketch follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.03507&r=all |
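The sketch below illustrates the calibration problem discussed above, with a closed-form Black-Scholes price standing in for a trained neural network that maps model parameters to option prices, plus a basic no-arbitrage sanity check (call prices decreasing and convex in strike). The data are synthetic and the setup is an assumption, not the paper's method.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        # Stand-in for a trained net mapping parameters to option prices.
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S, r, T = 100.0, 0.01, 1.0
    strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
    market = bs_call(S, strikes, T, r, 0.25)           # synthetic "market" quotes

    # Calibration: minimise the squared pricing error over the model parameter.
    res = minimize_scalar(lambda s: np.sum((bs_call(S, strikes, T, r, s) - market) ** 2),
                          bounds=(0.01, 2.0), method="bounded")
    print("calibrated sigma:", round(res.x, 4))

    # No-arbitrage sanity check on fitted prices: decreasing and convex in strike.
    fitted = bs_call(S, strikes, T, r, res.x)
    assert np.all(np.diff(fitted) < 0) and np.all(np.diff(fitted, 2) > 0)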
By: | Emir Hrnjic; Nikodem Tomczak |
Abstract: | Behavioral economics changed the way we think about market participants and revolutionized policy-making by introducing the concept of choice architecture. However, even though they are effective at the level of a population, interventions from behavioral economics, known as nudges, often generalise weakly because they struggle at the level of individuals. Recent developments in data science, artificial intelligence (AI) and machine learning (ML) have shown the ability to alleviate some of these problems by providing tools and methods that result in models with stronger predictive power. This paper describes how ML and AI can work with behavioral economics to support and augment decision-making and inform policy decisions by designing personalized interventions, assuming that enough personalized traits and psychological variables can be sampled. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.02100&r=all |
By: | Jose Luis Montiel Olea; Pietro Ortoleva; Mallesh M Pai; Andrea Prat |
Abstract: | Different agents compete to predict a variable of interest related to a set of covariates via an unknown data generating process. All agents are Bayesian, but may consider different subsets of covariates to make their prediction. After observing a common dataset, who has the highest confidence in her predictive ability? We characterize it and show that it crucially depends on the size of the dataset. With small data, typically it is an agent using a model that is 'small-dimensional', in the sense of considering fewer covariates than the true data generating process. With big data, it is instead typically 'large-dimensional', possibly using more variables than the true model. These features are reminiscent of model selection techniques used in statistics and machine learning. However, here model selection does not emerge normatively, but positively as the outcome of competition between standard Bayesian decision makers. The theory is applied to auctions of assets where bidders observe the same information but hold different priors. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.03809&r=all |
By: | Brandon Da Silva; Sylvie Shang Shi |
Abstract: | Training deep learning models that generalize well to live deployment is a challenging problem in the financial markets. The challenge arises because of high dimensionality, limited observations, changing data distributions, and a low signal-to-noise ratio. High dimensionality can be dealt with using robust feature selection or dimensionality reduction, but limited observations often result in a model that overfits due to the large parameter space of most deep neural networks. We propose a generative model for financial time series, which allows us to train deep learning models on millions of simulated paths. We show that our generative model is able to create realistic paths that embed the underlying structure of the markets in a way stochastic processes cannot. (A sketch of the stochastic-process baseline follows this entry.) |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.03232&r=all |
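For contrast with the paper's generative model, here is a sketch of the classical stochastic-process baseline it is compared against: a vectorised geometric Brownian motion simulator producing a batch of training paths. All parameters are illustrative, not fitted to any market.

    import numpy as np

    def simulate_gbm(n_paths, n_steps, s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252):
        """Vectorised GBM: returns an (n_paths, n_steps + 1) array of prices."""
        z = np.random.default_rng(3).normal(size=(n_paths, n_steps))
        log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        log_paths = np.cumsum(log_increments, axis=1)
        return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

    # A modest batch; millions of paths would be generated in batches like this.
    paths = simulate_gbm(n_paths=10_000, n_steps=252)
    print(paths.shape, paths[:, -1].mean().round(2))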
By: | Pelau, Corina; Ene, Irina |
Abstract: | The presence of Artificial Intelligence in our everyday life has become one of the most debated topics nowadays. Unlike in the past, in today's age of broadband connectivity it is difficult for individuals to imagine their everyday life, at work or in their spare time, without computers, the internet, mobile applications or other devices. Most of these devices have contributed to improving our everyday life by making it more efficient and convenient. Few people are aware that, by continuously developing and improving these technologies, we may make them more intelligent than we are, with the potential to control us. In the attempt to make these devices friendlier to consumers, they have started to take on human-like aspects and even to have identities of their own: we now have call-center answering machines with names, and robots with names and citizenship. The objective of this article is to determine the acceptance and preference of consumers for personalized or human-like robots and devices. For four different cases, the respondents had to choose between a classic device and a human-like robot. The results of the research show, with high significance, that consumers still prefer classic devices over anthropomorphic robots. |
Keywords: | Artificial intelligence, robots, consumer, anthropomorphism, perception |
JEL: | M0 M31 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:94617&r=all |
By: | Paola Tubaro (LRI - Laboratoire de Recherche en Informatique, Université Paris-Sud, CNRS, Inria; TAU - TAckling the Underspecified, Inria Saclay - Île-de-France); Antonio Casilli (i3 - Institut interdisciplinaire de l'innovation, UMR 9217 CNRS - École polytechnique - Télécom ParisTech - MINES ParisTech) |
Abstract: | This paper delves into the human factors in the "back-office" of artificial intelligence and of its data-intensive algorithmic underpinnings. We show that the production of AI is a labor-intensive process which relies in particular on the inconspicuous and low-paid contribution of little-qualified "micro-workers" who annotate, tag, label, correct and sort the data that help to train and test smart solutions. We illustrate these ideas in the high-profile case of the automotive industry, one of the largest clients of digital data-related micro-working services, notably for the development of autonomous and connected cars. This case demonstrates how micro-work has a place in long supply chains, where tech companies compete with more traditional industry players. Our analysis indicates that the need for micro-work is not transitory but structural, bound to accompany the further development of the sector, and that its provision involves workers in different geographical and linguistic areas, requiring the joint study of multiple platforms operating at both global and local levels. |
Keywords: | Artificial intelligence, Micro-work, Automotive industry, Digital platform economy, Organization of work |
Date: | 2019–06–05 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02148979&r=all |
By: | Xinyi Li; Yinchuan Li; Yuancheng Zhan; Xiao-Yang Liu |
Abstract: | Portfolio allocation is crucial for investment companies. However, finding the best strategy in a complex and dynamic stock market is challenging. In this paper, we propose a novel Adaptive Deep Deterministic Policy Gradient scheme (Adaptive DDPG) for the portfolio allocation task, which incorporates optimistic or pessimistic deep reinforcement learning reflected in the influence of prediction errors. Dow Jones 30 component stocks are selected as our trading stocks and their daily prices are used as the training and testing data. We train the Adaptive DDPG agent and obtain a trading strategy. The Adaptive DDPG's performance is compared with the vanilla DDPG, the Dow Jones Industrial Average index and the traditional min-variance and mean-variance portfolio allocation strategies. Adaptive DDPG outperforms the baselines in terms of investment return and Sharpe ratio. (A sketch of the trading environment such an agent interacts with follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.01503&r=all |
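A minimal sketch of the kind of portfolio environment a DDPG-style agent interacts with: the state is a window of recent prices, the action a vector of portfolio weights, and the reward the one-period portfolio return. Prices are synthetic and the interface is a simplifying assumption, not the authors' implementation.

    import numpy as np

    class PortfolioEnv:
        def __init__(self, prices, window=30):
            self.prices, self.window = prices, window
            self.t = window

        def reset(self):
            self.t = self.window
            return self.prices[self.t - self.window:self.t]

        def step(self, weights):
            weights = np.clip(weights, 0, None) + 1e-12  # long-only; avoid /0
            weights = weights / weights.sum()            # fully invested
            asset_returns = self.prices[self.t] / self.prices[self.t - 1] - 1
            reward = float(weights @ asset_returns)      # one-period return
            self.t += 1
            done = self.t >= len(self.prices)
            state = self.prices[self.t - self.window:self.t] if not done else None
            return state, reward, done

    # 500 trading days, 30 assets (as in the Dow Jones 30 setting above).
    prices = 100 * np.cumprod(1 + 0.001 * np.random.default_rng(4).normal(size=(500, 30)), axis=0)
    env = PortfolioEnv(prices)
    state = env.reset()
    state, reward, done = env.step(np.ones(30) / 30)     # equal-weight action
    print(round(reward, 5))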
By: | Soybilgen, Baris |
Abstract: | We use dynamic factor and neural network models to identify current and past states (instead of future states) of the US business cycle. In the first step, we reduce noise in the data by using a moving average filter. Then, dynamic factors are extracted from a large-scale data set consisting of more than 100 variables. In the last step, these dynamic factors are fed into a neural network model that predicts business cycle regimes. We show that our proposed method follows US business cycle regimes quite accurately in sample and out of sample, without taking historical data availability into account. Our results also indicate that noise reduction is an important step for business cycle prediction. Furthermore, using pseudo-real-time and vintage data, we show that our neural network model identifies turning points quite accurately and very quickly in real time. (A sketch of the three-step pipeline follows this entry.) |
Keywords: | Dynamic Factor Model; Neural Network; Recession; Business Cycle |
JEL: | C38 E32 E37 |
Date: | 2018–07–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:94715&r=all |
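A sketch of the three-step pipeline described above, on synthetic data: a moving average filter, PCA as a simple stand-in for dynamic factor extraction, and a neural network classifier for the regimes. The window length, factor count and network size are illustrative assumptions, not the author's specification.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)
    raw = pd.DataFrame(rng.normal(size=(600, 120)))           # ~100+ monthly series
    regime = (np.sin(np.arange(600) / 25) > 0.6).astype(int)  # toy recession flag

    smoothed = raw.rolling(window=3).mean().dropna()          # step 1: noise filter
    factors = PCA(n_components=8).fit_transform(smoothed)     # step 2: factors
    y = regime[len(regime) - len(factors):]                   # align labels

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)  # step 3
    clf.fit(factors[:500], y[:500])
    print("out-of-sample regime accuracy:", round(clf.score(factors[500:], y[500:]), 3))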
By: | Francois Belletti; Davis King; Kun Yang; Roland Nelet; Yusef Shafi; Yi-Fan Chen; John Anderson |
Abstract: | Monte Carlo methods are core to many routines in quantitative finance such as derivatives pricing, hedging and risk metrics. Unfortunately, Monte Carlo methods are very computationally expensive when it comes to running simulations in high-dimensional state spaces, where they nevertheless remain a method of choice in the financial industry. Recently, Tensor Processing Units (TPUs) have provided considerable speedups and decreased the cost of running Stochastic Gradient Descent (SGD) in Deep Learning. After highlighting computational similarities between training neural networks with SGD and stochastic process simulation, we ask in the present paper whether TPUs are accurate, fast and simple enough to use for financial Monte Carlo. Through a theoretical reminder of the key properties of such methods and thorough empirical experiments, we examine the fitness of TPUs for option pricing, hedging and risk metrics computation. We show that TPUs in the cloud help accelerate Monte Carlo routines compared to Graphics Processing Units (GPUs), which in turn decreases the cost associated with running such simulations while leveraging the flexibility of the cloud. In particular we demonstrate that, in spite of the use of mixed precision, TPUs still provide accurate estimators which are fast to compute. We also show that the Tensorflow programming model for TPUs is elegant and expressive, and simplifies automatic differentiation. (A vectorised Monte Carlo sketch follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.02818&r=all |
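The following sketch shows the kind of vectorised Monte Carlo routine that accelerators speed up: pricing a European call under geometric Brownian motion in one large batched computation. It uses NumPy for portability; on a TPU the same pattern would be expressed against TensorFlow. Parameters are illustrative.

    import numpy as np

    def mc_call_price(s0, k, t, r, sigma, n_paths=1_000_000, seed=6):
        # One batched draw of terminal prices under risk-neutral GBM.
        z = np.random.default_rng(seed).standard_normal(n_paths)
        s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
        payoff = np.maximum(s_t - k, 0.0)
        estimate = np.exp(-r * t) * payoff.mean()
        stderr = np.exp(-r * t) * payoff.std(ddof=1) / np.sqrt(n_paths)
        return estimate, stderr

    price, se = mc_call_price(s0=100, k=105, t=1.0, r=0.02, sigma=0.2)
    print(f"MC price: {price:.4f} +/- {1.96 * se:.4f}")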
By: | Rémy Le Boennec (Institut VEDECOM); Fouad Hadj Selem (Institut VEDECOM); Ghazaleh Khodabandelou (Institut VEDECOM) |
Keywords: | Artificial intelligence, Mobility flow inference, Home-to-work commuting, Modal shift |
Date: | 2019–06–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02160862&r=all |
By: | Fabrice Daniel |
Abstract: | This article studies financial time series data processing for machine learning. It introduces the most frequent scaling methods, then compares the resulting stationarity and the preservation of information useful for trend forecasting. It proposes an empirical test based on the ability to learn simple data relationships with simple models. It also discusses the data split method specific to time series, which avoids unwanted overfitting, and proposes various labellings for classification and regression. (A sketch of scaling and chronological splitting follows this entry.) |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.03010&r=all |
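A sketch of two of the points discussed above: transforming a price series into a (more) stationary representation via log returns, and splitting time-series data chronologically so the test set always follows the training set, with normalisation fitted on the training window only. The concrete choices are illustrative, not the article's full protocol.

    import numpy as np

    prices = 100 * np.cumprod(1 + 0.001 * np.random.default_rng(7).normal(size=1000))

    log_returns = np.diff(np.log(prices))     # one common stationarising scaling

    # Chronological (walk-forward) split: no shuffling, test strictly after train.
    split = int(0.8 * len(log_returns))
    train, test = log_returns[:split], log_returns[split:]

    # Normalisation fitted on the training window only, then applied to the
    # test window, to avoid look-ahead leakage.
    mu, sd = train.mean(), train.std()
    train_scaled, test_scaled = (train - mu) / sd, (test - mu) / sd
    print(len(train_scaled), len(test_scaled))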
By: | Michael Lechner; Gabriel Okasa |
Abstract: | In econometrics, so-called ordered choice models are popular when interest lies in estimating the probabilities of particular values of categorical outcome variables with an inherent ordering, conditional on covariates. In this paper we develop a new machine learning estimator based on the random forest algorithm for such models, without imposing any distributional assumptions. The proposed Ordered Forest estimator provides a flexible estimation method for the conditional choice probabilities that can naturally deal with nonlinearities in the data, while taking the ordering information explicitly into account. Beyond what common machine learning estimators offer, it enables the estimation of marginal effects as well as inference thereon, and thus provides the same output as classical econometric estimators based on ordered logit or probit models. An extensive simulation study examines the finite sample properties of the Ordered Forest and reveals its good predictive performance, particularly in settings with multicollinearity among the predictors and nonlinear functional forms. An empirical application further illustrates the estimation of the marginal effects and their standard errors and demonstrates the advantages of the flexible estimation compared to a parametric benchmark model. (A sketch of the cumulative-indicator construction follows this entry.) |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.02436&r=all |
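One way to read the construction behind such an estimator is sketched below: fit a regression forest to each cumulative binary indicator 1(Y <= k) and difference the fitted cumulative probabilities to obtain class probabilities. This is a simplified reading on synthetic data, not the authors' Ordered Forest implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(8)
    X = rng.normal(size=(2000, 4))
    latent = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=2000)
    y = np.digitize(latent, bins=[-1.0, 0.0, 1.0])        # ordered classes 0..3

    classes = np.unique(y)
    cum_probs = np.ones((len(X), len(classes)))           # P(Y <= max) is 1
    for j, k in enumerate(classes[:-1]):
        forest = RandomForestRegressor(n_estimators=200, random_state=0)
        forest.fit(X, (y <= k).astype(float))             # fit 1(Y <= k)
        cum_probs[:, j] = forest.predict(X)

    probs = np.diff(cum_probs, axis=1, prepend=0.0)       # P(Y = k | X)
    probs = np.clip(probs, 0, None)                       # repair non-monotonicity
    probs /= probs.sum(axis=1, keepdims=True)
    print(probs[:3].round(3))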
By: | Clement Gastaud; Theophile Carniel; Jean-Michel Dalle |
Abstract: | We address the issue of the factors driving startup success in raising funds. Using the popular public startup database Crunchbase, we explicitly take into account two extrinsic characteristics of startups: the competition that the companies face, using similarity measures derived from the Word2Vec algorithm, and the position of investors in the investment network, pioneering the use of Graph Neural Networks (GNN), a recent deep learning technique that enables the handling of graphs as such and as a whole. We show that the different stages of fundraising, early- and growth-stage, are associated with different success factors. Our results suggest a marked relevance of startup competition at the early stage, while growth-stage fundraising is influenced by network features. Both of these factors tend to average out in global models, which could lead to the false impression that a startup's fundraising success is mostly, if not only, influenced by its intrinsic characteristics, notably those of its founders. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.03210&r=all |
By: | Zihao Zhang; Stefan Zohren; Stephen Roberts |
Abstract: | We showcase how Quantile Regression (QR) can be applied to forecast financial returns using Limit Order Books (LOBs), the canonical data source of high-frequency financial time-series. We develop a deep learning architecture that simultaneously models the return quantiles for both buy and sell positions. We test our model over millions of LOB updates across multiple different instruments on the London Stock Exchange. Our results suggest that the proposed network not only delivers excellent performance but also provides improved prediction robustness by combining quantile estimates. (A quantile-loss sketch follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.04404&r=all |
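A sketch of the pinball (quantile) loss that underlies quantile regression, with a gradient-boosted quantile regressor as a simple stand-in for the paper's deep architecture. The features and labels are synthetic placeholders for LOB-derived inputs and future returns.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def pinball_loss(y_true, y_pred, q):
        """Average quantile (pinball) loss at level q."""
        diff = y_true - y_pred
        return np.mean(np.maximum(q * diff, (q - 1) * diff))

    rng = np.random.default_rng(9)
    X = rng.normal(size=(3000, 6))             # stand-in for LOB features
    y = X[:, 0] + 0.5 * rng.normal(size=3000)  # stand-in for future returns

    # One model per quantile level, each optimising its own pinball loss.
    models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X[:2400], y[:2400])
              for q in (0.1, 0.5, 0.9)}
    for q, m in models.items():
        print(q, round(pinball_loss(y[2400:], m.predict(X[2400:]), q), 4))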
By: | Hyungjun Park; Min Kyu Sim; Dong Gu Choi |
Abstract: | A goal of financial portfolio trading is maximizing the trader's utility by allocating capital to assets in a portfolio in the investment horizon. Our study suggests an approach for deriving an intelligent portfolio trading strategy using deep Q-learning. In this approach, we introduce a Markov decision process model to enable an agent to learn about the financial environment and develop a deep neural network structure to approximate a Q-function. In addition, we devise three techniques to derive a trading strategy that chooses reasonable actions and is applicable to the real world. First, the action space of the learning agent is modeled as an intuitive set of trading directions that can be carried out for individual assets in the portfolio. Second, we introduce a mapping function that can replace an infeasible agent action in each state with a similar and valuable action to derive a reasonable trading strategy. Last, we introduce a method by which an agent simulates all feasible actions and learns from these experiences to utilize the training data efficiently. To validate our approach, we conduct backtests for two representative portfolios, and we find that the intelligent strategy derived using our approach is superior to the benchmark strategies. (A sketch of the action-mapping idea follows this entry.) |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.03665&r=all |
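The action-mapping idea described above can be sketched very compactly: when the agent proposes a trade that is infeasible in the current state, replace it with the nearest feasible one. This single-asset, long-only version and its names are hypothetical simplifications, not the authors' mapping function.

    import numpy as np

    def map_to_feasible(action_shares, holdings, cash, price):
        """Clip a proposed trade (signed share count) into the feasible set."""
        max_buy = int(cash // price)        # cannot spend more cash than available
        max_sell = holdings                 # no short selling in this sketch
        return int(np.clip(action_shares, -max_sell, max_buy))

    # Example: the agent wants to sell 50 shares but only holds 20.
    print(map_to_feasible(-50, holdings=20, cash=1_000.0, price=37.5))  # -> -20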
By: | Yuxuan Huang; Luiz Fernando Capretz; Danny Ho |
Abstract: | Application of neural network architectures for financial prediction has been actively studied in recent years. This paper presents a comparative study that investigates and compares feed-forward neural network (FNN) and adaptive neural fuzzy inference system (ANFIS) on stock prediction using fundamental financial ratios. The study is designed to evaluate the performance of each architecture based on the relative return of the selected portfolios with respect to the benchmark stock index. The results show that both architectures possess the ability to separate winners and losers from a sample universe of stocks, and the selected portfolios outperform the benchmark. Our study argues that FNN shows superior performance over ANFIS. |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.05327&r=all |
By: | Rute Martins Caeiro |
Abstract: | This paper analyzes the role of social networks in the diffusion of knowledge and the adoption of cultivation techniques, from trainees to the wider community, in the context of an extension project in Guinea-Bissau. In order to test for social learning, we exploit a detailed census of households and social connections across different dimensions. More precisely, we make use of a village photo directory to obtain a comprehensive and fully mapped social network dataset. We find evidence that agricultural information spreads across networks from project participants to non-participants, with different networks having different importance. The most relevant connection is found to be the network of people from whom individuals would 'borrow money'. We are also able to disentangle the relative importance of weak and strong ties: in our context, weak ties are as important in the diffusion of agricultural knowledge as strong ties. Despite positive diffusion effects on knowledge, we find limited evidence of network effects on adoption behavior. Finally, using longitudinal network data, we document improvements in the network position of treated farmers over time. |
JEL: | O13 O31 O33 Q16 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26065&r=all |
By: | Lotfi Boudabsa; Damir Filipovic |
Abstract: | We introduce a computational framework for dynamic portfolio valuation and risk management building on machine learning with kernels. We learn the replicating martingale of a portfolio from a finite sample of its terminal cumulative cash flow. The learned replicating martingale is given in closed form thanks to a suitable choice of kernel. We develop an asymptotic theory and prove convergence and a central limit theorem. We also derive finite sample error bounds and concentration inequalities. Numerical examples show good results for a relatively small training sample size. (A kernel-regression sketch follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.03726&r=all |
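A sketch of the core regression step, assuming a toy one-factor portfolio: kernel ridge regression, which like the paper's construction has a closed-form solution, learns the conditional expectation of the terminal cumulative cash flow given the time-t risk factor. The kernel choice and data are illustrative, not the authors' framework.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(10)
    x_t = rng.normal(size=(2000, 1))                     # risk factor at time t
    terminal_cf = np.maximum(x_t[:, 0] + rng.normal(size=2000), 0)  # noisy payoff

    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
    model.fit(x_t, terminal_cf)                          # closed-form fit

    # The fitted function estimates the time-t value process
    # V_t(x) = E[terminal cash flow | X_t = x].
    grid = np.linspace(-3, 3, 7).reshape(-1, 1)
    print(model.predict(grid).round(3))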
By: | Sandra Johnson; Peter Robinson; Kishore Atreya; Claudio Lisco |
Abstract: | Supply chains lend themselves to blockchain technology, but certain challenges remain, especially around invoice financing. For example, the further a supplier is removed from the final consumer product, the more difficult it is to get its invoices financed. Moreover, for competitive reasons, retailers and manufacturers do not want to disclose their supply chains. However, upstream suppliers need to prove that they are part of a 'stable' supply chain to get their invoices financed, which presents them with huge, and often insurmountable, obstacles to obtaining the finance necessary to fulfil the next order or to expand their business. Using a fictitious supply chain use case, based on a real-world one, we demonstrate how these challenges can potentially be solved by combining more advanced and specialised blockchain technologies with other technologies such as Artificial Intelligence. We describe how atomic crosschain functionality can be utilised across private blockchains to retrieve the information required for an invoice financier to make informed decisions under uncertainty, and consider the effect this decision has on the overall stability of the supply chain. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.03306&r=all |
By: | Erika Arraño; Katherine Jara |
Abstract: | Monitoring the evolution of job ads posted by companies is used to estimate the economy's labor demand, as it relates to economic activity and is of interest for both business cycle evaluation and structural economic analysis. The digitalization of the economy, together with the massification of Internet use and access, has transformed the way potential employees are attracted: companies have gone from posting job ads in the printed press to doing so on dedicated web sites. Most recently, the state of the art in recruitment technology allows large volumes of unstructured information to be collected, stored and handled with specialized software. This document presents an Online Job Ad Index based on public information posted on the main online recruitment web sites. The index seeks to complement the analysis of the labor market, as it is available before the results of employment surveys. It is published in the Statistics Database of the Central Bank of Chile, under the Employment chapter. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:chb:bcchee:129&r=all |