
on Computational Economics 
Issue of 2018‒09‒03
nine papers chosen by 
By:  Atuwo, Tamaraebi 
Abstract:  Rolling is one of the most complicated processes in metal forming. Knowing the exact values of basic parameters, especially interstand tensions, can be effective in controlling other parameters in this process. Interstand tensions affect rolling pressure, rolling force, forward and backward slips, and the neutral angle; calculating this effect is an important step in continuous rolling design and control. Since interstand tensions cannot be calculated analytically, an approach based on an artificial neural network (ANN) is described to identify the applied parameters in a cold tandem rolling mill. Due to the limited experimental data on this subject, a five-stand tandem cold rolling mill is simulated using the finite element (FE) method. The outputs of the FE simulation are used to train the network, which is then employed to predict tensions in a tandem cold rolling mill. After testing different network designs, the 11424 structure with one hidden layer is selected as the best network. The verification factor of the ANN results against experimental data is over R = 0.9586 for the training and testing data sets; the experimental results are obtained from the five-stand tandem cold rolling mill. This paper proposes a new ANN for the prediction of interstand tensions. The ANN method is also combined with a fuzzy control algorithm to investigate the effect of front and back tensions on reducing the thickness deviations of hot-rolled steel strips. The average over the training and testing data sets is 0.9586, and the individual coefficients vary, as discussed in detail in Section 4. According to Table 7, the proposed ANN model has correlation coefficients of 0.9586, 0.9798, 0.9762 and 0.9742 for the training data sets and 0.9905, 0.9798, 0.9762 and 0.9803 for the testing data sets, respectively. 
These numbers indicate the acceptable accuracy of the ANN method in predicting the interstand tensions of the tandem rolling mill. The method provides a highly accurate solution with reduced computational time and is suitable for online control or optimization in tandem cold rolling mills. Due to the limited experimental data, a 2D tandem cold rolling process is simulated using ABAQUS 6.9 software to extract data for the ANN. Various neural network structures for this rolling problem are studied in MATLAB 7.8 software. 
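The abstract does not spell out the network beyond a single hidden layer, so the following is only a minimal sketch of an ANN regression of this kind: the layer sizes (11 inputs, 14 hidden units, 4 tension outputs), the synthetic stand-in for the FE-simulation data, and the plain batch gradient descent are all assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for FE-simulation data: 11 hypothetical inputs
# (stand settings) mapped to 4 hypothetical interstand tensions.
X = rng.normal(size=(200, 11))
W_true = rng.normal(size=(11, 4))
Y = np.tanh(X @ W_true)                    # smooth nonlinear targets

# One hidden layer, as in the selected network; 14 units is an assumption.
W1 = rng.normal(scale=0.1, size=(11, 14)); b1 = np.zeros(14)
W2 = rng.normal(scale=0.1, size=(14, 4));  b2 = np.zeros(4)

def forward(X):
    H = np.tanh(X @ W1 + b1)               # hidden-layer activations
    return H, H @ W2 + b2                  # linear output layer

def mse(A, B):
    return float(np.mean((A - B) ** 2))

loss_start = mse(forward(X)[1], Y)
lr = 0.02
for _ in range(800):                       # plain batch gradient descent
    H, Y_hat = forward(X)
    G = 2.0 * (Y_hat - Y) / Y.size         # gradient of the MSE loss
    gW2, gb2 = H.T @ G, G.sum(axis=0)
    GH = (G @ W2.T) * (1.0 - H ** 2)       # back-propagate through tanh
    gW1, gb1 = X.T @ GH, GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
loss_end = mse(forward(X)[1], Y)
```

Training on FE outputs and verifying against mill measurements, as in the paper, would replace the synthetic arrays above with the simulated and measured tensions.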
Keywords:  Artificial neural networks, Computational time, Online control, Finite element modeling, Training and testing data, Tandem cold rolling mill, Hidden layer 
JEL:  L16 L61 L63 L71 L72 
Date:  2018–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:88520&r=cmp 
By:  Masafumi Nakano (Graduate School of Economics, University of Tokyo); Akihiko Takahashi (Graduate School of Economics, University of Tokyo); Soichiro Takahashi (Graduate School of Economics, University of Tokyo) 
Abstract:  This paper explores Bitcoin intraday technical trading based on artificial neural networks for return prediction. In particular, our deep learning method successfully discovers trading signals through a seven-layered neural network structure for given input data of technical indicators, which are calculated from past time-series data over every 15 minutes. Under feasible settings of execution costs, the numerical experiments demonstrate that our approach significantly improves the performance of a buy-and-hold strategy. Notably, our model performs well during a challenging period from December 2017 to January 2018, in which Bitcoin suffered substantial negative returns. Furthermore, various sensitivity analyses are implemented for changes in the number of layers, activation functions, input data and output classification to confirm the robustness of our approach. 
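The indicator-construction step can be sketched in a few lines. The two indicators below (20-bar momentum and price relative to a 20-bar moving average) and the synthetic price series are illustrative assumptions; the paper's actual indicator set, labels and network are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 15-minute close prices: a random-walk stand-in for Bitcoin.
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.002, size=1000)))

feats, labels = [], []
for t in range(20, len(prices) - 1):
    momentum = prices[t] / prices[t - 20] - 1.0          # 20-bar momentum
    sma = prices[t - 19 : t + 1].mean()                  # 20-bar moving average
    feats.append([momentum, prices[t] / sma - 1.0])      # price vs its SMA
    labels.append(1 if prices[t + 1] > prices[t] else 0) # next-bar up/down
X = np.array(feats)   # inputs for a return-direction classifier
y = np.array(labels)
```

A deep classifier such as the paper's seven-layered network would then be trained on (X, y), with execution costs applied when converting predicted directions into trades.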
Date:  2018–07 
URL:  http://d.repec.org/n?u=RePEc:cfi:fseres:cf441&r=cmp 
By:  Robert F. Phillips 
Abstract:  Under suitable conditions, one-step generalized method of moments (GMM) based on the first-difference (FD) transformation is numerically equal to one-step GMM based on the forward orthogonal deviations (FOD) transformation. However, when the number of time periods ($T$) is not small, the FOD transformation requires less computational work. This paper shows that the computational complexity of both the FD and FOD transformations increases linearly with the number of individuals ($N$), but that the computational complexity of the FOD transformation increases with $T$ at rate $T^{4}$, while that of the FD transformation increases at rate $T^{6}$. Simulations illustrate that calculations exploiting the FOD transformation are performed orders of magnitude faster than those using the FD transformation. The results indicate that, when one-step GMM based on the FD and FOD transformations are the same, Monte Carlo experiments can be conducted much faster if the FOD version of the estimator is used. 
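The two transformations themselves are standard and can be sketched directly; this is a generic textbook implementation, not the paper's optimized code. The semi-orthogonality checked at the end (the FOD matrix A satisfies A Aᵀ = I) is the property that keeps i.i.d. errors homoskedastic and serially uncorrelated after the transform.

```python
import numpy as np

def forward_orthogonal_deviations(x):
    """FOD transform: subtract the mean of all future observations and
    rescale, so i.i.d. errors remain homoskedastic and uncorrelated."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    out = np.empty(T - 1)
    for t in range(T - 1):
        future_mean = x[t + 1 :].mean()
        scale = np.sqrt((T - t - 1) / (T - t))
        out[t] = scale * (x[t] - future_mean)
    return out

def first_differences(x):
    """FD transform, for comparison."""
    return np.diff(np.asarray(x, dtype=float))

# The FOD operator, written as a (T-1) x T matrix, is semi-orthogonal.
T = 6
A = np.column_stack(
    [forward_orthogonal_deviations(np.eye(T)[:, i]) for i in range(T)]
)
```

Like first differencing, the FOD transform annihilates individual fixed effects (a constant series maps to zeros), which is why both can serve as the basis for one-step GMM.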
Date:  2018–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1808.05995&r=cmp 
By:  Siddharth Barman; Sanath Kumar Krishnamurthy; Rohit Vaish 
Abstract:  We study the problem of fairly allocating a set of indivisible goods among agents with additive valuations. The extent of fairness of an allocation is measured by its Nash social welfare, which is the geometric mean of the valuations of the agents for their bundles. While the problem of maximizing Nash social welfare is known to be APX-hard in general, we study the effectiveness of simple, greedy algorithms in solving this problem in two interesting special cases. First, we show that a simple, greedy algorithm provides a 1.061-approximation guarantee when agents have identical valuations, even though the problem of maximizing Nash social welfare remains NP-hard for this setting. Second, we show that when agents have binary valuations over the goods, an exact solution (i.e., a Nash optimal allocation) can be found in polynomial time via a greedy algorithm. Our results in the binary setting extend to provide novel, exact algorithms for optimizing Nash social welfare under concave valuations. Notably, for the above-mentioned scenarios, our techniques provide a simple alternative to several of the existing, more sophisticated techniques for this problem such as constructing equilibria of Fisher markets or using real stable polynomials. 
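For identical additive valuations, a natural greedy candidate is to assign each good, largest first, to the currently poorest bundle. The sketch below illustrates this greedy idea and the Nash social welfare objective; it is not claimed to be the exact algorithm analysed in the paper.

```python
import math

def greedy_identical(values, n_agents):
    """Greedy for identical additive valuations: sort goods by value
    (largest first) and give each good to the currently poorest bundle."""
    bundles = [0.0] * n_agents
    for v in sorted(values, reverse=True):
        poorest = min(range(n_agents), key=lambda j: bundles[j])
        bundles[poorest] += v
    return bundles

def nash_welfare(bundle_values):
    """Nash social welfare: geometric mean of the agents' bundle values."""
    return math.prod(bundle_values) ** (1.0 / len(bundle_values))
```

For example, `greedy_identical([4, 3, 2, 1], 2)` balances the two bundles at value 5 each, which here happens to be Nash optimal.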
Date:  2018–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1801.09046&r=cmp 
By:  Grzegorz Marcjasz; Bartosz Uniejewski; Rafal Weron 
Abstract:  A recent electricity price forecasting (EPF) study has shown that the Seasonal Component Artificial Neural Network (SCANN) modeling framework, which consists of decomposing a series of spot prices into a trend-seasonal and a stochastic component, modeling them independently and then combining their forecasts, can yield more accurate point predictions than an approach in which the same nonlinear autoregressive NARX-type neural network is calibrated to the prices themselves. Here, considering two novel extensions of the SCANN concept to probabilistic forecasting, we find that (i) efficiently calibrated NARX networks can outperform their autoregressive counterparts, even without combining forecasts from many runs, and that (ii) in terms of accuracy it is better to construct probabilistic forecasts directly from point predictions; however, if speed is a critical issue, running quantile regression on combined point forecasts (i.e., committee machines) may be an option worth considering. Moreover, we confirm an earlier observation that averaging probabilities outperforms averaging quantiles when combining predictive distributions in EPF. 
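The distinction in the last sentence (averaging probabilities, i.e. CDFs, versus averaging quantiles) can be illustrated on two stylised normal predictive distributions. This stdlib-only sketch is not the paper's EPF pipeline; the two component distributions are arbitrary assumptions chosen to make the difference visible.

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_ppf(p, mu=0.0, sigma=1.0):
    """Invert the normal CDF by bisection (sufficient for a sketch)."""
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_cdf(mid, mu, sigma) < p else (lo, mid)
    return 0.5 * (lo + hi)

# Two predictive distributions for the same delivery hour: (mean, std).
forecasts = [(0.0, 1.0), (2.0, 1.0)]

def quantile_average(p):
    """Average the individual quantile functions (Vincentization)."""
    return sum(norm_ppf(p, m, s) for m, s in forecasts) / len(forecasts)

def probability_average(p):
    """Average the CDFs (a mixture), then invert the combined CDF."""
    lo, hi = -10.0, 12.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        F = sum(norm_cdf(mid, m, s) for m, s in forecasts) / len(forecasts)
        lo, hi = (mid, hi) if F < p else (lo, mid)
    return 0.5 * (lo + hi)
```

Both schemes agree at the median here, but probability averaging produces wider tails (e.g. at p = 0.95), which is one reason the two methods can score differently under probabilistic evaluation.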
Keywords:  Electricity spot price; Probabilistic forecast; Combining forecasts; Long-term seasonal component; NARX neural network; Quantile regression 
JEL:  C14 C22 C45 C51 C53 Q47 
Date:  2018–07–13 
URL:  http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1805&r=cmp 
By:  Broman, Emanuel (CTS  Centre for Transport Studies Stockholm (KTH and VTI)); Eliasson, Jonas (City of Stockholm Traffic Department) 
Abstract:  In recent years, several countries have deregulated passenger railway markets to allow open access. The aim is for competition to lower fares and increase quality of service, thereby increasing demand, economic efficiency and overall social welfare. In this paper, we use a stylised simulation model to study how open access competition affects fares, demand, supply, consumer surplus and operator profits compared to a profit-maximising monopoly and to a welfare-maximising benchmark situation. We conclude that aggregate social welfare increases substantially when going from profit-maximising monopoly to duopoly competition, as consumers make large gains while operators’ profits fall. According to simulations, there generally exists a stable competitive Nash equilibrium with two or more profitable operators. Although operators are identical in the model setup, the Nash equilibrium outcome is asymmetric: one operator has more departures and higher average fares than the other. If operators are allowed to collude, however, for example by trading or selling departure slots, the equilibrium situation tends to revert to monopoly: it will be profitable for one operator to buy the other’s departure slots to gain monopoly power. The regulatory framework must therefore prevent collusion and facilitate market entry. Even the potential for competitive entry tends to increase social welfare, as the monopolist has incentives to increase supply as an entry deterrence strategy. 
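The qualitative monopoly-versus-duopoly comparison can be illustrated with a far simpler symmetric differentiated-price sketch, iterating best responses to a fixed point. The linear demand system, parameter values, and the restriction to fares only (no departure choice) are all assumptions; the paper's richer model produces the asymmetric equilibria described above.

```python
def best_response(p_other, a=10.0, b=2.0, c=1.0, cost=1.0):
    """Profit-maximising fare against the rival's fare, for linear demand
    q_i = a - b*p_i + c*p_j (hypothetical parameters, b > c)."""
    return (a + c * p_other + b * cost) / (2.0 * b)

def nash_fares(n_iter=200):
    """Iterate best responses from an arbitrary start to the equilibrium."""
    p1 = p2 = 5.0
    for _ in range(n_iter):
        p1, p2 = best_response(p2), best_response(p1)
    return p1, p2

def monopoly_fare(a=10.0, b=2.0, c=1.0, cost=1.0):
    """Joint-profit-maximising symmetric fare when one firm runs both
    services: maximise 2*(p - cost)*(a - (b - c)*p)."""
    return (a + (b - c) * cost) / (2.0 * (b - c))
```

With these parameters the duopoly equilibrium fare (4.0) sits well below the monopoly fare (5.5), matching the direction of the welfare result in the abstract, though not its magnitudes.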
Keywords:  open access; rail reform; capacity allocation; passenger 
JEL:  D43 R41 R48 
Date:  2018–08–23 
URL:  http://d.repec.org/n?u=RePEc:hhs:ctswps:2018_012&r=cmp 
By:  Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb 
Abstract:  Recent progress in artificial intelligence (AI), a general purpose technology affecting many industries, has been focused on advances in machine learning, which we recast as a quality-adjusted drop in the price of prediction. How will this sharp drop in price impact society? Policy will influence the impact on two key dimensions: diffusion and consequences. First, in addition to subsidies and IP policy that will influence the diffusion of AI in ways similar to their effect on other technologies, three policy categories (privacy, trade, and liability) may be uniquely salient in their influence on the diffusion patterns of AI. Second, labor and antitrust policies will influence the consequences of AI in terms of employment, inequality, and competition. 
JEL:  L86 O3 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:24690&r=cmp 
By:  Kinne, Jan; Axenbeck, Janna 
Abstract:  Nowadays, almost all (relevant) firms have their own websites, which they use to publish information about their products and services. Using the example of innovation in firms, we outline a framework for extracting information from firm websites using web scraping and data mining. For this purpose, we present an easy and free-to-use web scraping tool for large-scale data retrieval from firm websites. We apply this tool in a large-scale pilot study to provide information on the data source (i.e. the population of firm websites in Germany), which has not yet been studied rigorously in terms of its qualitative and quantitative properties. We find, inter alia, that the use of websites and websites' characteristics (number of subpages and hyperlinks, text volume, language used) differs according to firm size, age, location, and sector. Web-based studies also have to contend with distinct outliers and the fact that low broadband availability appears to prevent firms from operating a website. Finally, we propose two approaches based on neural network language models and social network analysis to derive firm-level information from the extracted web data. 
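The website characteristics the study measures (hyperlink counts, text volume) can be extracted with standard HTML parsing. The stdlib-only sketch below runs on an inline HTML snippet; the authors' actual scraper is a separate tool, and the example markup is invented for illustration.

```python
from html.parser import HTMLParser

class WebsiteStats(HTMLParser):
    """Collect simple firm-website characteristics of the kind discussed
    in the paper: internal/external hyperlink counts and text volume."""
    def __init__(self):
        super().__init__()
        self.internal_links = 0
        self.external_links = 0
        self.text_chars = 0
        self._skip = 0                      # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.external_links += 1    # link leaving the site
            else:
                self.internal_links += 1    # relative link to a subpage

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:                  # ignore embedded code
            self.text_chars += len(data.strip())

html = """<html><body><p>Wir entwickeln innovative Produkte.</p>
<a href="/kontakt">Kontakt</a> <a href="http://example.org">Partner</a>
<script>var x = 1;</script></body></html>"""
stats = WebsiteStats()
stats.feed(html)
```

In a real crawl, the parser would be fed each downloaded subpage, and the language of the stripped text could then be detected for the language-use indicator.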
Keywords:  Web Mining, Web Scraping, R&D, R&I, STI, Innovation, Indicators, Text Mining 
JEL:  O30 C81 C88 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:zbw:zewdip:18033&r=cmp 
By:  Katarzyna Hubicka; Grzegorz Marcjasz; Rafal Weron 
Abstract:  We propose a novel concept in energy forecasting and show that averaging day-ahead electricity price forecasts of a predictive model across 28- to 728-day calibration windows yields better results than selecting only one 'optimal' window length. Even more significant accuracy gains can be achieved by averaging over a few, carefully selected windows. 
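The window-averaging idea can be sketched with a naive forecaster: fit the same model on several trailing calibration windows and average the resulting forecasts. The AR(1) model, synthetic price series and specific window lengths below are illustrative assumptions, not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic daily spot-price series: AR(1) fluctuating around a mean of 40.
prices = np.empty(800)
prices[0] = 40.0
for t in range(1, len(prices)):
    prices[t] = 40.0 + 0.7 * (prices[t - 1] - 40.0) + rng.normal(0.0, 3.0)

def ar1_forecast(window):
    """Fit AR(1) by least squares on the last `window` days and
    forecast the next day's price."""
    y = prices[-window:]
    X = np.column_stack([np.ones(window - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return float(beta[0] + beta[1] * y[-1])

windows = [28, 56, 182, 364, 728]            # candidate calibration windows
single = ar1_forecast(728)                   # one 'optimal' window length
averaged = float(np.mean([ar1_forecast(w) for w in windows]))
```

The paper's point is that `averaged`-style forecasts tend to beat any single-window choice out of sample; demonstrating that requires a rolling evaluation over many days rather than the one-step illustration above.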
Keywords:  Electricity price forecasting; Combining forecasts; Calibration window; Autoregression; NARX neural network; Committee machine; Diebold-Mariano test 
JEL:  C14 C22 C45 C51 C53 Q47 
Date:  2018–07–07 
URL:  http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1803&r=cmp 