nep-big New Economics Papers
on Big Data
Issue of 2018‒09‒03
twenty papers chosen by
Tom Coupé
University of Canterbury

  1. Economic Policy for Artificial Intelligence By Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb
  2. Bitcoin technical trading with artificial neural network By Masafumi Nakano; Akihiko Takahashi; Soichiro Takahashi
  3. Local applications for global data and AI By Mathews, Steve
  4. Loss Data Analytics By Edward Frees
  5. Big Data in Finance and the Growth of Large Firms By Juliane Begenau; Laura Veldkamp; Maryam Farboodi
  6. Application of Artificial Neural Networks in Cold Rolling Process By Atuwo, Tamaraebi
  7. Deep learning, deep change? Mapping the development of the Artificial Intelligence General Purpose Technology By J. Klinger; J. Mateos-Garcia; K. Stathoulopoulos
  8. How to digitalise agricultural systems in the developing world By Jarvis, Andy
  9. Probabilistic electricity price forecasting with NARX networks: Combine point or probabilistic forecasts? By Grzegorz Marcjasz; Bartosz Uniejewski; Rafal Weron
  10. Deep Learning for Energy Markets By Michael Polson; Vadim Sokolov
  11. A Panel Quantile Approach to Attrition Bias in Big Data: Evidence from a Randomized Experiment By Matthew Harding; Carlos Lamarche
  12. Unlocking the power of digital agriculture By Bergvinson, David
  13. Human Rights and Artificial Intelligence: An Urgently Needed Agenda By Risse, Mathias
  14. The race for an artificial general intelligence: Implications for public policy By Naude, Wim; Dimitri, Nicola
  15. Web mining of firm websites: A framework for web scraping and a pilot study for Germany By Kinne, Jan; Axenbeck, Janna
  16. Q&A: Uses and challenges of ‘big data’ for agricultural development By Ritman, Kim; Mathews, Steve; Herrero, Mario; Street, Ken
  17. Air pollution and health - A provincial level analysis of China By Wei Zheng; Patrick Paul Walsh
  18. How can ‘big data’ transform smallholder farmers’ lives and livelihoods? By Laperriere, Andre
  19. Farmers and Habits: The Challenge of Identifying the Sources of Persistence in Tillage Decisions By Wallander, Steven; Bowman, Maria; Beeson, Peter; Claassen, Roger
  20. Le tecnologie di Industria 4.0 e le PMI/Technologies of Industry 4.0 and SMEs By Angelo Bonomi

  1. By: Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb
    Abstract: Recent progress in artificial intelligence (AI) – a general purpose technology affecting many industries – has been focused on advances in machine learning, which we recast as a quality-adjusted drop in the price of prediction. How will this sharp drop in price impact society? Policy will influence the impact on two key dimensions: diffusion and consequences. First, in addition to subsidies and IP policy that will influence the diffusion of AI in ways similar to their effect on other technologies, three policy categories - privacy, trade, and liability - may be uniquely salient in their influence on the diffusion patterns of AI. Second, labor and antitrust policies will influence the consequences of AI in terms of employment, inequality, and competition.
    JEL: L86 O3
    Date: 2018–06
  2. By: Masafumi Nakano (Graduate School of Economics, University of Tokyo); Akihiko Takahashi (Graduate School of Economics, University of Tokyo); Soichiro Takahashi (Graduate School of Economics, University of Tokyo)
    Abstract: This paper explores Bitcoin intraday technical trading based on artificial neural networks for return prediction. In particular, our deep learning method successfully discovers trading signals through a seven-layer neural network structure for given input data of technical indicators, which are calculated from past time-series data at 15-minute intervals. Under feasible settings of execution costs, the numerical experiments demonstrate that our approach significantly improves on the performance of a buy-and-hold strategy. In particular, our model performs well in a challenging period from December 2017 to January 2018, during which Bitcoin suffered substantial negative returns. Furthermore, various sensitivity analyses are conducted with respect to the number of layers, activation functions, input data and output classification to confirm the robustness of our approach.
    Date: 2018–07
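As a rough illustration of the kind of classifier the abstract describes, the sketch below runs a forward pass through a small seven-layer perceptron that maps a vector of technical indicators to a sell/hold/buy signal. The layer widths, weights and indicator vector are placeholders, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def make_mlp(sizes, rng):
    # Random weights stand in for trained parameters.
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = relu(x)
    e = np.exp(x - x.max())          # softmax over {sell, hold, buy}
    return e / e.sum()

# Seven layers, as in the paper; the input is a vector of technical
# indicators (e.g. moving averages, RSI) computed on 15-minute bars.
sizes = [10, 32, 32, 32, 32, 32, 3]  # illustrative layer widths
params = make_mlp(sizes, rng)
indicators = rng.normal(size=10)     # placeholder indicator vector
probs = forward(params, indicators)
signal = ["sell", "hold", "buy"][int(np.argmax(probs))]
print(signal, probs.round(3))
```

In a real backtest the weights would be learned from labelled 15-minute bars and the signal filtered through the execution-cost assumptions the paper mentions.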
  3. By: Mathews, Steve
    Abstract: ‘Big data’ has great unrealised potential in most parts of the agricultural value-chain. We can divide that data up into several categories, each with its own good and bad points. Starting in 1972, the US Landsat program collected the original ‘big data’, but capability to perform meaningful analysis of the photos remained very expensive until recently. Other flows have begun in the past few decades, from private satellites, point-of-sale systems, land-based sensors, and aerial drones. Unlike Landsat, the various newer sources have different ownership statuses. Globally, most smallholders don’t generate the revenue to pay for any of the various proprietary data sources or analysis. But we see significant value in the application of machine learning/’big data’ techniques to publicly available satellite and other sources. Advances in information technology allow us to disseminate good-quality yield, drought, and other analyses at a much lower cost than previously. As a result, relatively small external contributions can bring the established benefits of modern modelling expertise to a hugely broader and more diverse audience.
    Keywords: Research and Development/Tech Change/Emerging Technologies, Teaching/Communication/Extension/Profession
    Date: 2017–08–08
  4. By: Edward Frees (for the Actuarial Community)
    Abstract: Loss Data Analytics is an interactive, online, freely available text. The idea behind the name Loss Data Analytics is to integrate classical loss data models from applied probability with modern analytic tools. In particular, we seek to recognize that big data (including social media and usage based insurance) are here and high speed computation is readily available. The online version contains many interactive objects (quizzes, computer demonstrations, interactive graphs, video, and the like) to promote deeper learning. A subset of the book is available for offline reading in pdf and EPUB formats. The online text will be available in multiple languages to promote access to a worldwide audience.
    Date: 2018–08
  5. By: Juliane Begenau (Stanford University); Laura Veldkamp (New York University); Maryam Farboodi (Princeton University)
    Abstract: One of the most important trends in modern macroeconomics is the shift from small firms to large firms. At the same time, financial markets have been transformed by advances in information technology. We explore the hypothesis that the use of big data in financial markets has lowered the cost of capital for large firms, relative to small ones, enabling large firms to grow larger. As faster processors crunch ever more data -- macro announcements, earnings statements, competitors' performance metrics, export demand, etc. -- large firms become more valuable targets for this data analysis. Large firms, with more economic activity and a longer firm history, offer more data to process. Once processed, that data can better forecast firm value, reduce the risk of equity investment, and thus reduce the firm's cost of capital. As big data technology improves, large firms attract a more than proportional share of the data processing, enabling them to invest cheaply and grow larger.
    Date: 2018
  6. By: Atuwo, Tamaraebi
    Abstract: Rolling is one of the most complicated processes in metal forming. Knowing the exact values of basic parameters, especially inter-stand tensions, can be effective in controlling other parameters in this process. Inter-stand tensions affect rolling pressure, rolling force, forward and backward slip, and the neutral angle. Calculating this effect is an important step in continuous rolling design and control. Since inter-stand tensions cannot be calculated analytically, this paper describes an approach based on an artificial neural network (ANN) for identifying the applied parameters in a cold tandem rolling mill. Because experimental data are limited, a five-stand tandem cold rolling mill is simulated using the finite element method: a 2D tandem cold rolling process is modelled in ABAQUS 6.9 to extract data, and various network structures are studied in MATLAB 7.8. The outputs of the FE simulation are used to train the network, which is then employed to predict tensions in a tandem cold rolling mill. After testing different network designs, an 11-42-4 structure with one hidden layer is selected as the best network. Against experimental results obtained from a five-stand tandem cold rolling mill, the proposed model achieves correlation coefficients of 0.9586, 0.9798, 0.9762 and 0.9742 for the training data sets and 0.9905, 0.9798, 0.9762 and 0.9803 for the testing data sets (Table 7); these values indicate acceptable accuracy of the ANN method in predicting the inter-stand tensions. The ANN is also combined with a fuzzy control algorithm to investigate the effect of front and back tensions on reducing the thickness deviations of hot rolled steel strips. The method provides a highly accurate solution with reduced computational time and is suitable for on-line control or optimization in tandem cold rolling mills.
    Keywords: Artificial neural networks, Computational time, On-line control, Finite element modeling, Training and testing data, Tandem cold rolling mill, Hidden layer
    JEL: L16 L61 L63 L71 L72
    Date: 2018–07
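The 11-42-4 architecture named in the abstract can be sketched as a one-hidden-layer regression network. The weights below are random placeholders for the FE-trained parameters; only the shapes follow the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# 11-42-4 feed-forward network: 11 rolling-mill inputs, one hidden
# layer of 42 units, 4 outputs (the inter-stand tensions).
W1, b1 = rng.normal(0.0, 0.3, (11, 42)), np.zeros(42)
W2, b2 = rng.normal(0.0, 0.3, (42, 4)), np.zeros(4)

def predict_tensions(x):
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # linear outputs for regression targets

x = rng.normal(size=11)        # placeholder mill-state features
print(predict_tensions(x).shape)
```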
  7. By: J. Klinger; J. Mateos-Garcia; K. Stathoulopoulos
    Abstract: General Purpose Technologies (GPTs) that can be applied in many industries are an important driver of economic growth and national and regional competitiveness. In spite of this, the geography of their development and diffusion has not received significant attention in the literature. We address this with an analysis of Deep Learning (DL), a core technique in Artificial Intelligence (AI) increasingly being recognized as the latest GPT. We identify DL papers in a novel dataset from ArXiv, a popular preprints website, and use CrunchBase, a technology business directory to measure industrial capabilities related to it. After showing that DL conforms with the definition of a GPT, having experienced rapid growth and diffusion into new fields where it has generated an impact, we describe changes in its geography. Our analysis shows China's rise in AI rankings and relative decline in several European countries. We also find that initial volatility in the geography of DL has been followed by consolidation, suggesting that the window of opportunity for new entrants might be closing down as new DL research hubs become dominant. Finally, we study the regional drivers of DL clustering. We find that competitive DL clusters tend to be based in regions combining research and industrial activities related to it. This could be because GPT developers and adopters located close to each other can collaborate and share knowledge more easily, thus overcoming coordination failures in GPT deployment. Our analysis also reveals a Chinese comparative advantage in DL after we control for other explanatory factors, perhaps underscoring the importance of access to data and supportive policies for the successful development of this complex, `omni-use' technology.
    Date: 2018–08
  8. By: Jarvis, Andy
    Abstract: In rural Nepal recently, many of the smallholders I visited took selfies with me on their smartphones and shared them on social media. Until recently, it was the other way around. It was an epiphany: if the tech revolution has now reached smallholders, the data revolution will surely follow. Yet the agriculture sector still lags behind in the data revolution. In the US, a recent report by McKinsey placed agriculture dead last out of the 23 sectors analysed with respect to the extent to which they are harnessing the opportunities of ‘digitalisation’. The report argues that it is no coincidence that the sectors ranked highest in digitalisation are also showing the highest economic growth (such as finance and media). For the developing world, the picture is likely even worse. Mobile money in East Africa is transforming the finance sector, yet farmers have very limited access to digital services that help them better manage crops and livestock. Agriculture in Africa is only scratching the surface of digitalisation – markets are largely informal, extension is face-to-face, and farm data are either non-existent or completely off grid. Many of the successes of digitalisation in agriculture have been riding on the coattails of mechanisation – sensors on tractors are where much of the innovation is today. They provide the means to gather information, rapidly analyse it and adjust management, while the Internet of Things means the data are transmitted to the cloud, feeding it with invaluable information to better tailor precision farming. While this model may be very appropriate for commercial, mechanised large-scale farming, it is not readily transferrable to the 570 million smallholder farmers in the world. Alternative visions for digital agriculture are needed, and there are a number of game-changers in the mix right now. 
First, smartphone penetration and 3G networks are sweeping across rural areas, which opens a wealth of opportunities to kick-start the data ecosystem; smartphones become the node for information exchange. Second, satellite images are on the cusp of becoming fit for purpose in agriculture. Their spatial resolution can finally detect meaningful patterns in the field, and the return periods are such that we can potentially link satellite images with activities in the field in near real time. And where satellites struggle, drones can often do the job at limited cost. Third, our analytical capacity to make sense of the dirty data that agriculture tends to generate is now greatly enhanced. By combining multiple data streams and analysing them in new ways, we can now pick out some of the critical signals to spur better decisions in the agricultural sector, be it at field level or in national policy. Unfortunately, a number of key impediments are still holding back a democratic data revolution that reaches the marginalised smallholder farmer. Data itself is a barrier. You need some data to be able to say something useful, yet data on site-specific farming practices, socio-economic conditions of farmers, gender-related factors and others are often hard to come by. Better use of existing data is needed to start with – open data initiatives need to be strengthened, and C:/ drives need to be liberated. Another impediment is that many of the successes in developed countries are closely tied to private sector input supplies and machinery; in the smallholder context such services are in their infancy, and the reach of the private sector remains limited. The alternative service provider, public extension, is likewise severely limited in reach, with just a tiny fraction of farmers having access. There is exciting innovation in some regions (e.g. the i-Hub in Nairobi), with a boom of private sector data-intelligence services providing farmers with data services, but few of these start-ups reach scale, and failure rates are too high. There is also a danger of poor-quality services proliferating and giving data-driven farming a bad name. Research can help develop better open access methods and APIs [resources used in programming] to additional high-quality data layers, and thus support the emerging private sector to maintain high standards of quality. The enabling environment can also be improved – greater investment in data-related agricultural R&D is needed, and training needs to be improved to develop a new generation of agronomists who are fully data- and analytics-literate. By building greater capacity in people and their institutions, digital agriculture can be mainstreamed into extension programs and agricultural R&D, and contribute to a stronger private sector in data-related services to agriculture. At the CGIAR Platform for Big Data in Agriculture, we have identified four areas of work that are ripe for disruption, and we are currently calling for the formation of novel partnerships that combine research and agricultural development to solve some of these intractable problems. These ‘Inspire Challenges’ provide the opportunity to receive US$100k grants to trial risky approaches that: 1) reveal food systems, 2) monitor pests and diseases, 3) disrupt impact assessment, and 4) empower data-driven farming. We are tremendously excited about the prospects of ‘big data’ in agriculture. The lack of ‘digitalisation’ can only be seen as an enormous opportunity. The time is now to digitalise agriculture, to democratise the benefits beyond the few with a tractor, and to explore different pathways that are inclusive of the 570 million smallholders who produce 70% of the global food supply.
    Keywords: International Development, Research and Development/Tech Change/Emerging Technologies
    Date: 2017–08–08
  9. By: Grzegorz Marcjasz; Bartosz Uniejewski; Rafal Weron
    Abstract: A recent electricity price forecasting (EPF) study has shown that the Seasonal Component Artificial Neural Network (SCANN) modeling framework – which consists of decomposing a series of spot prices into a trend-seasonal and a stochastic component, modeling them independently and then combining their forecasts – can yield more accurate point predictions than an approach in which the same non-linear autoregressive NARX-type neural network is calibrated to the prices themselves. Here, considering two novel extensions of the SCANN concept to probabilistic forecasting, we find that (i) efficiently calibrated NARX networks can outperform their autoregressive counterparts, even without combining forecasts from many runs, and that (ii) in terms of accuracy it is better to construct probabilistic forecasts directly from point predictions; however, if speed is a critical issue, running quantile regression on combined point forecasts (i.e., committee machines) may be an option worth considering. Moreover, we confirm an earlier observation that averaging probabilities outperforms averaging quantiles when combining predictive distributions in EPF.
    Keywords: Electricity spot price; Probabilistic forecast; Combining forecasts; Long-term seasonal component; NARX neural network; Quantile regression
    JEL: C14 C22 C45 C51 C53 Q47
    Date: 2018–07–13
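The distinction between averaging probabilities and averaging quantiles that the abstract refers to can be shown on two toy predictive distributions. Both distributions below are invented for illustration; quantile averaging takes the mean of the quantile functions (Vincentization), while probability averaging forms the 50/50 mixture CDF and inverts it.

```python
import numpy as np

# Two hypothetical predictive distributions for the next-day spot price,
# represented by their quantile functions on a common probability grid.
p = np.linspace(0.01, 0.99, 99)
q1 = 40.0 + 10.0 * np.log(p / (1 - p))   # model 1 (logistic quantiles)
q2 = 50.0 +  5.0 * np.log(p / (1 - p))   # model 2

# Quantile averaging (Vincentization): average the quantile functions.
q_avg = 0.5 * (q1 + q2)

# Probability averaging: average the CDFs, i.e. form the 50/50 mixture,
# then invert the mixture CDF numerically back to quantiles.
grid = np.linspace(min(q1.min(), q2.min()), max(q1.max(), q2.max()), 2000)
cdf_mix = 0.5 * (np.interp(grid, q1, p) + np.interp(grid, q2, p))
p_avg = np.interp(p, cdf_mix, grid)

print(round(q_avg[49], 2), round(p_avg[49], 2))  # the two combined medians
```

The two combinations generally disagree (here already at the median), which is why the choice between them matters for probabilistic EPF.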
  10. By: Michael Polson; Vadim Sokolov
    Abstract: Deep Learning (DL) provides a methodology to predict extreme loads observed in energy grids. Forecasting energy loads and prices is challenging due to sharp peaks and troughs that arise from intraday system constraints and supply and demand fluctuations. We propose the use of deep spatio-temporal models and extreme value theory (DL-EVT) to capture the tail behavior of load spikes. Deep architectures such as ReLU networks and LSTMs can model generation trends and temporal dependencies, while EVT captures highly volatile load spikes. To illustrate our methodology, we use hourly price and demand data for 4,719 nodes of the PJM interconnection to develop a deep predictor. DL-EVT outperforms traditional Fourier and time series methods, both in- and out-of-sample, by capturing the nonlinearities in prices. Finally, we conclude with directions for future research.
    Date: 2018–08
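The EVT half of the approach can be illustrated with a standard peaks-over-threshold fit. The sketch below simulates hourly loads with occasional spikes (all numbers invented), fits a Generalized Pareto Distribution to the exceedances by the method of moments, and reads off a far-tail quantile; it is a generic EVT recipe, not the paper's DL-EVT model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hourly loads: a Gaussian bulk plus occasional spikes.
loads = rng.normal(1000.0, 50.0, 20000)
spike = rng.random(loads.size) < 0.01
loads[spike] += rng.exponential(300.0, spike.sum())

# Peaks-over-threshold: keep exceedances above a high threshold u.
u = np.quantile(loads, 0.95)
exc = loads[loads > u] - u

# Method-of-moments GPD fit: for a GPD, mean = sigma/(1-xi) and
# variance = sigma^2 / ((1-xi)^2 (1-2*xi)).
m, v = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m * m / v)      # shape parameter
sigma = m * (1.0 - xi)            # scale parameter

# Tail quantile implied by the fitted GPD: the 99.9% load level.
prob = 0.999
zeta = exc.size / loads.size      # empirical P(load > u)
q999 = u + sigma / xi * (((1 - prob) / zeta) ** (-xi) - 1.0)
print(round(u, 1), round(q999, 1))
```

In DL-EVT the deep network would supply the conditional mean/trend and the GPD would model the residual spikes; here the two pieces are collapsed into one unconditional fit for brevity.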
  11. By: Matthew Harding; Carlos Lamarche
    Abstract: This paper introduces a quantile regression estimator for panel data models with individual heterogeneity and attrition. The method is motivated by the fact that attrition bias is often encountered in Big Data applications. For example, many users sign up for the latest program but few remain active several months later, making the evaluation of such interventions inherently challenging. Building on earlier work by Hausman and Wise (1979), we provide a simple identification strategy that leads to a two-step estimation procedure. In the first step, the coefficients of interest in the selection equation are consistently estimated using parametric or nonparametric methods. In the second step, standard panel quantile methods are employed on a subset of weighted observations. The estimator is computationally easy to implement in Big Data applications with a large number of subjects. We investigate the conditions under which the parameter estimator is asymptotically Gaussian and carry out a series of Monte Carlo simulations to investigate the finite sample properties of the estimator. Lastly, using a simulation exercise, we apply the method to the evaluation of a recent Time-of-Day electricity pricing experiment inspired by the work of Aigner and Hausman (1980).
    Date: 2018–08
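The mechanics of the two-step procedure can be sketched on simulated data. This is a deliberately simplified stand-in for the paper's estimator: retention depends only on an observed covariate, the first step estimates retention probabilities nonparametrically by binning, and the second step minimizes an inverse-probability-weighted check loss by crude grid search rather than a proper quantile-regression solver.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + 0.5 * np.abs(x))

# Attrition: subjects with high x are more likely to drop out.
p_stay = 1.0 / (1.0 + np.exp(0.8 * x))
stay = rng.random(n) < p_stay

# Step 1: estimate retention probabilities (here nonparametrically,
# by binning on x; the paper allows parametric or nonparametric fits).
bins = np.quantile(x, np.linspace(0, 1, 21))
idx = np.clip(np.digitize(x, bins) - 1, 0, 19)
p_hat = np.array([stay[idx == k].mean() for k in range(20)])[idx]

# Step 2: quantile regression on the retained subsample, minimizing
# the inverse-probability-weighted check (pinball) loss.
tau = 0.5
w = 1.0 / p_hat[stay]
xs, ys = x[stay], y[stay]

def loss(a, b):
    u = ys - a - b * xs
    return np.sum(w * u * (tau - (u < 0)))

# Grid search over intercept and slope keeps the sketch dependency-free.
slopes = np.linspace(1.5, 2.5, 101)
best = min(slopes,
           key=lambda b: min(loss(a, b) for a in np.linspace(0.5, 1.5, 41)))
print(round(best, 2))   # should sit near the true median slope of 2
```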
  12. By: Bergvinson, David
    Abstract: Digital agriculture encompasses a value chain framework that supports smallholder farmers’ access to services, knowledge and markets. This helps to unlock the economic potential of agriculture, preserve natural resources and accelerate equitable economic growth in rural communities. Digital agriculture is already the nerve centre for modern food systems. It enables democratisation of information and distillation of big data analytics to provide timely and targeted insight for farmers, input suppliers, aggregators, processors and consumers. These insights are now delivered to the location of a decision (e.g. a farmer’s field on a smart phone) on how to optimise profitability, increase value chain efficiency and support consumer awareness on food and its impact on their nutrition, the rural economy and the environmental footprint of agriculture. Digital tools have the potential to compress value chains and reduce transaction costs, thus moving more value to the farmers’ end for improving incomes and livelihoods. Through ‘big data’ and systems biology, the nutritional quality of crops can be improved by gaining a deeper understanding of the interaction of food, nutrition and human health. Spatial Data Infrastructure combined with unique digital identification can support an ecosystem of integrated services to better serve the needs of farmers – whether it be access to inputs, credit, insurance or markets. Downscaled observed weather data are critically important to support all actors along the value chain, given that agriculture is a solar- and water- driven industry. Maintaining the trust of farmers and consumers is vitally important, so policies to manage personal identification information are essential. However, data also needs to be granular to support precision agriculture practices. 
An ecosystem of different tools and platforms supported by pragmatic and visionary policies and institutions will position countries to uniquely unlock the power of digital technology to accelerate agricultural development and ultimately enable us to deliver on the Sustainable Development Goals – one country at a time.
    Keywords: Research and Development/Tech Change/Emerging Technologies, Resource /Energy Economics and Policy
    Date: 2017–08–08
  13. By: Risse, Mathias (Harvard U)
    Abstract: Artificial intelligence generates challenges for human rights. Inviolability of human life is the central idea behind human rights, an underlying implicit assumption being the hierarchical superiority of humankind to other forms of life meriting less protection. These basic assumptions are questioned through the anticipated arrival of entities that are not alive in familiar ways but nonetheless are sentient and intellectually and perhaps eventually morally superior to humans. To be sure, this scenario may never come to pass and in any event lies in a part of the future beyond current grasp. But it is urgent to get this matter on the agenda. Threats posed by technology to other areas of human rights are already with us. My goal here is to survey these challenges in a way that distinguishes short-, medium- and long-term perspectives.
    Date: 2018–05
  14. By: Naude, Wim (Maastricht University, Maastricht School of Management, RWTH Aachen, IZA Bonn and UNU-MERIT); Dimitri, Nicola (University of Siena)
    Abstract: An arms race for an artificial general intelligence (AGI) would be detrimental to humanity, and could even pose an existential threat, if it results in an unfriendly AGI. In this paper an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Given the difficulty of AGI, the number of competing teams is unlikely ever to be very large. It is also established that the intentions of teams competing in an AGI race, as well as the possibility of an intermediate prize, are important in determining the quality of the eventual AGI. The possibility of an intermediate prize will raise the quality of research but also the probability of finding the dominant AGI application, and hence will make public control more urgent. It is shown that the danger of an unfriendly AGI can be reduced by taxing AI and by using public procurement. These measures would reduce the pay-off to contestants, raise the amount of R&D needed to compete, and coordinate and incentivize co-operation – all outcomes that help alleviate the control and political problems in AI. Future research is needed to elaborate the design of systems of public procurement of AI innovation and to appropriately adjust the legal frameworks underpinning high-tech innovation, in particular with respect to patents created by AI.
    Keywords: Artificial intelligence, innovation, technology, public policy
    JEL: O33 O38 O14 O15 H57
    Date: 2018–08–08
  15. By: Kinne, Jan; Axenbeck, Janna
    Abstract: Nowadays, almost all (relevant) firms have their own websites, which they use to publish information about their products and services. Using the example of innovation in firms, we outline a framework for extracting information from firm websites using web scraping and data mining. For this purpose, we present an easy and free-to-use web scraping tool for large-scale data retrieval from firm websites. We apply this tool in a large-scale pilot study to provide information on the data source (i.e. the population of firm websites in Germany), which has not yet been studied rigorously in terms of its qualitative and quantitative properties. We find, inter alia, that the use of websites and their characteristics (number of subpages and hyperlinks, text volume, language used) differ according to firm size, age, location, and sector. Web-based studies also have to contend with distinct outliers and with the fact that low broadband availability appears to prevent firms from operating a website. Finally, we propose two approaches based on neural network language models and social network analysis to derive firm-level information from the extracted web data.
    Keywords: Web Mining,Web Scraping,R&D,R&I,STI,Innovation,Indicators,Text Mining
    JEL: O30 C81 C88
    Date: 2018
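The website characteristics the study measures (hyperlink counts, text volume) can be extracted with nothing more than the standard-library HTML parser. The snippet below is a generic sketch, not the authors' tool; the HTML string is a made-up stand-in for a crawled firm page.

```python
from html.parser import HTMLParser

class SiteFeatures(HTMLParser):
    """Collects simple website indicators of the kind used in the
    pilot study: hyperlink counts and text volume."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_chars = 0
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
    def handle_data(self, data):
        self.text_chars += len(data.strip())

# In practice the HTML would come from a crawler; a fixed snippet
# keeps this sketch self-contained.
html = """<html><body><p>Innovative Maschinenbau GmbH</p>
<a href="/produkte">Produkte</a> <a href="https://example.org">Partner</a>
</body></html>"""

f = SiteFeatures()
f.feed(html)
external = [h for h in f.links if h.startswith("http")]
print(len(f.links), len(external), f.text_chars)
```

Internal links (relative hrefs) proxy for subpage counts, external links for the hyperlink network the social-network-analysis step would use.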
  16. By: Ritman, Kim; Mathews, Steve; Herrero, Mario; Street, Ken
    Keywords: Agricultural and Food Policy, Research and Development/Tech Change/Emerging Technologies
    Date: 2017–08–08
  17. By: Wei Zheng (School of Economics and Development, Wuhan University, China; School of Politics and International Relations, University College Dublin, Dublin, Ireland); Patrick Paul Walsh (School of Politics and International Relations, University College Dublin, Dublin, Ireland)
    Abstract: During the past 30 years, China has experienced high growth, and its economic expansion has been one of the strongest in world history. This rapid economic growth has been accompanied by rapid increases in energy consumption, which have led to considerable air pollution and significantly affected the mortality rate. In this study, the Grossman health function is applied together with satellite-retrieved PM2.5 pollution data to estimate the mortality rate attributable to PM2.5 from 2001 to 2012. The results, obtained using fixed effect (FE) and system generalized method of moments (GMM-sys) estimation methods, provide new evidence on the impact of sociological, economic and environmental factors on the mortality rate of the Chinese population. PM2.5 has a significant positive long-term effect on mortality, and China is now experiencing a substantial mortality burden associated with current air pollution. The health care system and the population's education level are important in lowering mortality.
    Keywords: PM2.5, Mortality rate, Temperature
    Date: 2018–07–27
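The fixed-effect estimator the study relies on amounts to demeaning each province's series before running OLS (the "within" transformation). The sketch below illustrates this on simulated province-by-year data; all magnitudes are invented and the 0.02 coefficient is a placeholder, not the paper's estimate.

```python
import numpy as np

rng = np.random.default_rng(4)
P, T = 30, 12                        # provinces, years
alpha = rng.normal(0.0, 1.0, P)      # unobserved province fixed effects
pm25 = rng.gamma(4.0, 10.0, (P, T))  # simulated PM2.5 panel
mort = 5.0 + 0.02 * pm25 + alpha[:, None] + rng.normal(0.0, 0.5, (P, T))

# Within (fixed-effects) transformation: demean by province, then OLS.
x = pm25 - pm25.mean(axis=1, keepdims=True)
y = mort - mort.mean(axis=1, keepdims=True)
beta = (x * y).sum() / (x * x).sum()
print(round(beta, 3))   # recovers the true slope despite the alphas
```

Demeaning removes the province effects alpha entirely, which is why the slope is identified even though the alphas are correlated with nothing observable; GMM-sys additionally handles a lagged dependent variable, which is beyond this sketch.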
  18. By: Laperriere, Andre
    Abstract: For many years ‘big data’ has been considered by many as the privilege of the few. Because of its volume, it could only be handled by large corporations, essentially based in the West; because of its complexity, it required high-level specialists to manage it; and because of the cost of putting it together, it remained out of reach of the common person’s purse. This has changed. Within a single lifetime, the world has gone through three consecutive and very fundamental revolutions. The first was the Internet, connecting the world together. The second was the emergence of intelligent devices, starting with mobile phones, bringing knowledge to your fingertips. The third revolution is here: open data. Knowledge can now flow across the world with accuracy, at a speed and volume never reached before. The world of agriculture is one of the key beneficiaries of this latest revolution, seeing for the first time the innovative benefits of a true ‘cooperative development process’ taking place. Governments are opening their data; research is working hand in hand with the private sector; and civil society – consumers and farmers alike – is voicing its needs and triggering innovation tailored to its capacity, situation and choices. As a result, even in the most remote areas today you can see applications using the latest technology – and ‘big data’ – in the hands of farmers, in a form and shape that makes sense for them. Applications are affordable and manageable, allowing their users to gradually overcome subsistence farming and reach a higher quality of life. Globally this means that continents where agriculture is still the key development engine see their economies improving, hunger decreasing and innovation flourishing. This is what will lead the world to overcome the emerging food security challenges ahead of us, and contribute greatly to allowing developing countries to reach their full potential. 
This presentation describes this process and gives concrete examples of where and how ‘big data’ is now used by small farmers; and more generally, how open data is changing the face of global agriculture.
    Keywords: Farm Management, Research and Development/Tech Change/Emerging Technologies
    Date: 2017–08–08
  19. By: Wallander, Steven; Bowman, Maria; Beeson, Peter; Claassen, Roger
    Abstract: A number of government programs, including USDA conservation programs, provide financial incentives to entice changes in behavior. An important question for these programs is whether temporary payments can lead to persistent behavioral changes. Over the past 20 years, the USDA Environmental Quality Incentives Program (EQIP) has provided more than $250 million to farmers adopting no-till crop production. In contrast to conventional tillage, which turns over the soil prior to planting, no-till can produce a number of environmental goods such as soil carbon sequestration, especially if farmers adopt no-till continuously for a long time period. This study examines whether temporary no-till payments result in persistent adoption of no-till beyond the term of conservation contracts. In the first part of our analysis, we examine field-level survey data, model no-till adoption as a second-order Markov process, and establish that in general there is considerable persistence in farmers’ tillage decisions. In the second part of our analysis, we examine a unique dataset of satellite-based, field-level residue estimates in the Northern High Plains and examine changes in residue before, during, and after enrollment in EQIP. We conclude by discussing the potential implications of persistence for program outcomes as well as the challenges in identifying the mechanisms driving persistence.
    Keywords: Agricultural and Food Policy, Environmental Economics and Policy, Land Economics/Use
    Date: 2017–12–01
  20. By: Angelo Bonomi (CNR-IRCRES, National Research Council, Research Institute on Sustainable Economic Growth, via Real Collegio 30, Moncalieri (TO) – Italy)
    Abstract: This paper presents a study of the next production revolution, known as Industry 4.0, which is based on the confluence of various technologies, mainly digital, with far-reaching consequences especially for productivity and employment. The study considers the implementation of Industry 4.0 in SMEs and industrial districts, which account for a large part of Italian industry. These firms certainly pose a major challenge to such implementation because of various obstacles: limited availability of investment capital, small-scale production, and a tendency to develop and adopt only incremental innovations rather than the radical ones typical of Industry 4.0. In this work we study the technologies involved in Industry 4.0, taking into account the existence of specific technologies, called enabling technologies, whose confluence in the manufacturing industry determines the implementation of Industry 4.0. These enabling technologies originate from the major fields of R&D activity, such as nanotechnologies, biotechnologies, digital technologies and artificial intelligence (AI). We study the dynamics and possible evolution characterizing the formation of the various enabling technologies in a sort of ramification process, using specific models of technology, technological innovation and R&D, and their relation to manufacturing in SMEs and industrial districts. The results of the study underline the importance of AI in determining the possibilities and limits of Industry 4.0, the need to break the tendency of SMEs to adopt only incremental innovations, the existence of “intranality effects” that raise difficulties along the supply chain, and the importance of technology consulting firms in integrating ICT into the operating technologies of a manufacturing activity.
    Keywords: Industry 4.0, SMEs, industrial districts, technology innovation
    JEL: O14 O25 O33
    Date: 2018–06

This nep-big issue is ©2018 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.