
on Computational Economics 
Issue of 2019‒12‒23
twenty-six papers chosen by 
By:  Buscombe, Daniel 
Abstract:  The application of deep learning, specifically deep convolutional neural networks (DCNNs), to the classification of remotely sensed imagery of natural landscapes has the potential to greatly assist in the analysis and interpretation of geomorphic processes. However, the general usefulness of deep learning applied to conventional photographic imagery at a landscape scale is, as yet, largely unproven. If DCNN-based image classification is to gain wider application and acceptance within the geoscience community, demonstrable successes need to be coupled with accessible tools to retrain deep neural networks to discriminate landforms and land uses in landscape imagery. Here, we present an efficient approach to train and apply DCNNs on sets of photographic images, using a powerful graphical method, called a conditional random field (CRF), to generate DCNN training and testing data with minimal manual supervision. We apply the method to several sets of images of natural landscapes, acquired from satellites, aircraft, unmanned aerial vehicles, and fixed camera installations. We synthesize our findings to examine the general effectiveness of transfer learning for landscape-scale image classification. Finally, we show how DCNN predictions on small regions of images might be used in conjunction with a CRF for highly accurate pixel-level classification of images. 
Date:  2018–06–18 
URL:  http://d.repec.org/n?u=RePEc:osf:eartha:5mx3c&r=all 
By:  Christine Balagué (MMS - Département Management, Marketing et Stratégie - IMT - Institut Mines-Télécom [Paris] - TEM - Télécom Ecole de Management - IMT-BS - Institut Mines-Télécom Business School, LITEM - Laboratoire en Innovation, Technologies, Economie et Management - UEVE - Université d'Évry-Val-d'Essonne - IMT-BS - Institut Mines-Télécom Business School); El Mehdi Rochd (MMS - Département Management, Marketing et Stratégie - IMT - Institut Mines-Télécom [Paris] - TEM - Télécom Ecole de Management - IMT-BS - Institut Mines-Télécom Business School, LITEM - Laboratoire en Innovation, Technologies, Economie et Management - UEVE - Université d'Évry-Val-d'Essonne - IMT-BS - Institut Mines-Télécom Business School) 
Abstract:  Most product recommender systems in marketing are based on artificial intelligence algorithms using machine learning or deep learning techniques. One of the current challenges for companies is to avoid negative effects of these product recommender systems on customers (or prospects), such as unfairness, bias, discrimination, opacity, or opinions encapsulated in the implemented recommender system algorithms. This research focuses on the fairness challenge. We first review the literature on the importance and challenges of using ethical algorithms. Second, we define the fairness concept and present the reasons why it is important for companies to address this issue in marketing. Third, we present the different methodologies used in recommender system algorithms. Using a dataset from the entertainment industry, we measure the algorithmic fairness of each methodology and compare the results. Finally, we improve on the existing methods by proposing a new product recommender system that increases fairness relative to previous methods without compromising recommendation performance. 
Keywords:  Recommender systems, Ethics, Algorithms, Fairness 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal-02332033&r=all 
By:  Hartleb, J.; Schmidt, M.E. 
Abstract:  Timetabling for railway services often aims at optimizing travel times for passengers. At the same time, restrictive assumptions on passenger behavior and passenger modeling are made. While research has shown that the passenger distribution over routes can be modeled with a discrete choice model, this has not yet been considered in timetabling. We investigate how a passenger distribution can be integrated into an optimization framework for timetabling and present two mixed-integer linear programs for this problem. Both approaches design timetables and simultaneously find a corresponding passenger distribution over the available routes. One model uses a linear distribution model to estimate passenger route choices; the other uses an integrated simulation framework to approximate a passenger distribution according to the logit model, a commonly used route choice model. We compare both new approaches with three state-of-the-art timetabling methods and a heuristic approach on a set of artificial instances and a partial network of Netherlands Railways (NS). 
Keywords:  transportation, timetabling, public transport, route choice, discrete choice model, passenger distribution 
Date:  2019–12–17 
URL:  http://d.repec.org/n?u=RePEc:ems:eureri:122487&r=all 
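The logit route-choice model mentioned in the abstract above assigns passengers to routes in proportion to exp(-θ · travel time). A minimal illustrative sketch (the parameter θ and the travel times are invented for the example, not taken from the paper):

```python
import math

def logit_route_shares(travel_times, theta=0.1):
    """Split passengers across routes with a multinomial logit model:
    share_i is proportional to exp(-theta * time_i), so shorter routes
    attract larger shares of the demand."""
    weights = [math.exp(-theta * t) for t in travel_times]
    total = sum(weights)
    return [w / total for w in weights]

# Two routes taking 30 and 40 minutes: the faster route attracts more demand.
shares = logit_route_shares([30.0, 40.0], theta=0.1)
```

With θ = 0.1 and a 10-minute gap, the faster route receives about 73% of passengers; larger θ makes choices more deterministic, while θ → 0 splits demand evenly.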
By:  Heinrich, Florian; Appel, Franziska; Balmann, Alfons 
Abstract:  After land prices in Germany had increased continuously since 2006, policy makers, representatives of farmers’ unions, NGOs, and farmers started to discuss and propose new land market regulations to stop price increases and to protect smaller farmers in particular. In this paper we analyze different types of regulations for the land rental market with the agent-based model AgriPoliS. Our simulation results show that price and farm size limitations may inhibit rental price increases and reduce structural change. However, the regulations do not lead to a conservation of the number of small farms; neither do they have a substantial positive impact on their profitability and competitiveness. Many small farms still exit agricultural production and only a few are able to grow into a larger size class. Beyond redistributional costs, e.g. borne by landowners, economic and social costs result from reduced average economic land rents, less regional value added, and less employment caused by a reduced functionality of the land market and biased incentives. 
Keywords:  structural change, land market, land market regulation, agent-based modeling 
JEL:  Q15 Q18 C63 
Date:  2019 
URL:  http://d.repec.org/n?u=RePEc:zbw:esprep:208388&r=all 
By:  Jean-Bernard Chatelain (PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Kirsten Ralf (Ecole Supérieure du Commerce Extérieur - ESCE, INSEEC U. Research Center - ESCE International Business School, INSEEC U. Research Center) 
Abstract:  This article presents an algorithm that extends Ljungqvist and Sargent's (2012) dynamic Stackelberg game to the case of dynamic stochastic general equilibrium models including forcing variables. Its first step is the solution of the discounted augmented linear quadratic regulator as in Hansen and Sargent (2007). It then computes the optimal initial anchor of "jump" variables such as inflation. We demonstrate that it is of no use to compute non-observable Lagrange multipliers for all periods in order to obtain impulse response functions and welfare. The algorithm presented, however, enables the computation of a history-dependent representation of a Ramsey policy rule that can be implemented by policy makers and estimated within a vector autoregressive model. The policy instruments depend on the lagged values of the policy instruments and of the private sector's predetermined and "jump" variables. The algorithm is applied to the new-Keynesian Phillips curve as a monetary policy transmission mechanism. 
Keywords:  forcing variables, new-Keynesian Phillips curve, Stackelberg dynamic game, augmented linear quadratic regulator, Ramsey optimal policy, algorithm 
Date:  2019–10–25 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal-01577606&r=all 
By:  Gert Bijnens; Shyngys Karimov; Jozef Konings 
Abstract:  In 2015 Belgium suspended the automatic wage indexation for a period of 12 months in order to boost competitiveness and increase employment. This paper uses a novel machine-learning-based approach to construct a counterfactual experiment. This artificial counterfactual allows us to analyze the employment impact of suspending the indexation mechanism. We find a positive impact on employment of 0.5 percent, which corresponds to a labor demand elasticity of 0.25. This effect is more pronounced for manufacturing firms, where the impact on employment can reach 2 percent, corresponding to a labor demand elasticity of 1. 
Keywords:  labor demand, wage elasticity, counterfactual analysis, artificial control, machine learning 
Date:  2019–11–27 
URL:  http://d.repec.org/n?u=RePEc:ete:vivwps:643831&r=all 
By:  Cannon, Alex J. (Environment and Climate Change Canada) 
Abstract:  The goal of quantile regression is to estimate conditional quantiles for specified values of quantile probability using linear or nonlinear regression equations. These estimates are prone to "quantile crossing", where regression predictions for different quantile probabilities do not increase as the probability increases. In the context of the environmental sciences, this could, for example, lead to estimates of the magnitude of a 10-yr return period rainstorm that exceed the 20-yr storm, or similar non-physical results. This problem, as well as the potential for overfitting, is exacerbated for small to moderate sample sizes and for nonlinear quantile regression models. As a remedy, this study introduces a novel nonlinear quantile regression model, the monotone composite quantile regression neural network (MCQRNN), that (1) simultaneously estimates multiple non-crossing, nonlinear conditional quantile functions; (2) allows for optional monotonicity, positivity/non-negativity, and generalized additive model constraints; and (3) can be adapted to estimate standard least-squares regression and non-crossing expectile regression functions. First, the MCQRNN model is evaluated on synthetic data from multiple functions and error distributions using Monte Carlo simulations. MCQRNN outperforms the benchmark models, especially for non-normal error distributions. Next, the MCQRNN model is applied to real-world climate data by estimating rainfall Intensity-Duration-Frequency (IDF) curves at locations in Canada. IDF curves summarize the relationship between the intensity and occurrence frequency of extreme rainfall over storm durations ranging from minutes to a day. Because annual maximum rainfall intensity is a non-negative quantity that should increase monotonically as the occurrence frequency and storm duration decrease, monotonicity and non-negativity constraints are key in IDF curve estimation. In comparison to standard QRNN models, the ability of the MCQRNN model to incorporate these constraints, in addition to non-crossing, leads to more robust and realistic estimates of extreme rainfall. 
Date:  2017–12–05 
URL:  http://d.repec.org/n?u=RePEc:osf:eartha:wg7sn&r=all 
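The pinball (check) loss that quantile regression minimizes, and the general trick of stacking non-negative increments on a base quantile to forbid crossing, can be sketched in a few lines. This is a simplified illustration of the idea, not the author's MCQRNN code; the data and quantile values are invented:

```python
import math

def pinball_loss(y, q_pred, tau):
    """Check (pinball) loss: the objective quantile regression minimizes.
    It is smallest when q_pred equals the tau-quantile of the sample."""
    return sum((tau - (yi < q_pred)) * (yi - q_pred) for yi in y) / len(y)

def noncrossing_quantiles(base, deltas):
    """Predict the lowest quantile directly and each higher quantile as the
    previous one plus a softplus (hence non-negative) increment, so the
    estimated quantile functions can never cross."""
    qs = [base]
    for d in deltas:
        qs.append(qs[-1] + math.log1p(math.exp(d)))  # softplus(d) >= 0
    return qs

y = [1.0, 2.0, 3.0, 4.0, 5.0]
qs = noncrossing_quantiles(base=-1.0, deltas=[-2.0, 0.0, 1.0])
```

However the raw increments `deltas` move during training, the resulting quantile estimates `qs` stay sorted, which is the monotone construction's whole point.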
By:  Michael B. Giles; Abdul-Lateef Haji-Ali 
Abstract:  Computing risk measures of a financial portfolio comprising thousands of options is a challenging problem because (a) it involves a nested expectation requiring multiple evaluations of the loss of the financial portfolio for different risk scenarios and (b) evaluating the loss of the portfolio is expensive and the cost increases with its size. In this work, we look at applying Multilevel Monte Carlo (MLMC) with adaptive inner sampling to this problem and discuss several practical considerations. In particular, we discuss a subsampling strategy that results in a method whose computational complexity does not increase with the size of the portfolio. We also discuss several control variates that significantly improve the efficiency of MLMC in our setting. 
Date:  2019–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1912.05484&r=all 
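The nested expectation in (a) above is what makes the problem expensive: every outer risk scenario requires an inner Monte Carlo estimate of the portfolio loss. A plain nested estimator is sketched below with an invented toy loss model; MLMC, as in the paper, would instead vary the inner sample size across levels and combine the levels in a telescoping sum:

```python
import random
import statistics

def nested_probability_large_loss(n_outer, n_inner, threshold, seed=0):
    """Plain nested Monte Carlo for P(E[loss | scenario] > threshold).
    Cost is n_outer * n_inner loss evaluations, which is the complexity
    MLMC with adaptive inner sampling is designed to reduce."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_outer):
        scenario = rng.gauss(0.0, 1.0)  # outer risk-scenario draw
        # Inner expectation: noisy evaluations of the loss given the scenario.
        inner = [scenario + rng.gauss(0.0, 0.5) for _ in range(n_inner)]
        if statistics.fmean(inner) > threshold:
            hits += 1
    return hits / n_outer

p = nested_probability_large_loss(n_outer=2000, n_inner=50, threshold=1.0)
```

In this toy model the conditional loss is the scenario itself plus noise, so the true answer is roughly P(N(0,1) > 1) ≈ 0.159; the estimate should land near that.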
By:  Aline Souza Magalhães (CedeplarUFMG); Edson Paulo Domingues (CedeplarUFMG); Bruna Stein Ciasca (CedeplarUFMG) 
Abstract:  Water scarcity situations are increasingly occurring in certain regions of Brazil. In addition to vulnerability to extreme events, increased exports, household consumption, and the water-use intensity of economic activities generate growth in water demand that contributes to water stress. The aim of this paper is to explore the relationship between the structural characteristics of the Brazilian economy, its growth path, and water use. Our main contribution is the articulation of a computable general equilibrium (CGE) model with recursive dynamics with sectoral data on water withdrawal and consumption. We suggest that this is an appropriate methodological framework to study the economics of water use, which may overcome some of the limitations of partial equilibrium econometric models or input-output models. The results indicate that the agricultural sector and the mining-metallurgical and construction chain have more intense impacts on water demand from exports. Furthermore, there is a greater dependence on the electricity and gas sectors, and sewage, considering the effects on water withdrawal due to the increase in household demand. 
Keywords:  Water use, Virtual water, Computable general equilibrium. 
JEL:  Q25 Q51 C68 
Date:  2019–12 
URL:  http://d.repec.org/n?u=RePEc:cdp:texdis:td616&r=all 
By:  Martin Bagaram (University of Washington [Seattle]) 
Abstract:  Reliability-redundancy is a recurrent problem in engineering, where designed systems are meant to be very reliable. However, the cost of manufacturing very high reliability components increases exponentially; redundancy of less reliable components is therefore a palliative solution. Nonetheless, the question remains how many components of low reliability (and of what level of reliability) should be coupled to produce a system of high reliability. In this paper, I compare the performance of particle swarm optimization (PSO) and simulated annealing (SA) on a system of electricity distribution in a rural hospital. The results show that PSO outperformed SA. In addition, treating the problem as a reliability-maximization and cost-minimization bi-objective problem gives useful insight into how the cost increases exponentially beyond a certain system reliability. 
Date:  2017–11–12 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02350487&r=all 
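The redundancy trade-off described above rests on a standard reliability formula: a series system of parallel-redundant stages works if, in every stage, at least one component works. A short sketch with invented reliability numbers (the objective PSO and SA would optimize over, not the optimizers themselves):

```python
def system_reliability(stages):
    """Reliability of a series system of parallel-redundant stages.
    Each stage is (r, n): n identical components of reliability r in
    parallel, so the stage fails only if all n components fail."""
    rel = 1.0
    for r, n in stages:
        rel *= 1.0 - (1.0 - r) ** n
    return rel

# Three 0.9-reliable components in parallel beat a single 0.99 component:
r_redundant = system_reliability([(0.9, 3)])   # 1 - 0.1^3 = 0.999
r_single = system_reliability([(0.99, 1)])     # 0.99
```

This is why coupling several cheap, moderately reliable components can outperform one expensive high-reliability component, at the cost the bi-objective analysis quantifies.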
By:  Apel, Mikael (Monetary Policy Department, Central Bank of Sweden); Blix Grimaldi, Marianna (Swedish National Debt Office); Hull, Isaiah (Research Department, Central Bank of Sweden) 
Abstract:  The purpose of central bank minutes is to give an account of monetary policy meeting discussions to outside observers, thereby enabling them to draw informed conclusions about future policy. However, minutes are by necessity a shortened and edited representation of a broader discussion. Consequently, they may omit information that is predictive of future policy decisions. To investigate this, we compare the information content of the FOMC's minutes and transcripts, focusing on three dimensions which are likely to be excluded from the minutes: 1) the committee's degree of hawkishness; 2) the chairperson's degree of hawkishness; and 3) the level of agreement between committee members. We measure committee and chairperson hawkishness with a novel dictionary that is constructed using the FOMC's minutes and transcripts. We measure agreement by performing deep transfer learning, a technique that involves training a deep learning model on one set of documents - U.S. congressional debates - and then making predictions on another: FOMC transcripts. Our findings suggest that transcripts are more informative than minutes and that heightened committee agreement typically precedes policy rate increases. 
Keywords:  Central Bank Communication; Monetary Policy; Machine Learning 
JEL:  D71 D83 E52 E58 
Date:  2019–11–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0381&r=all 
By:  Tiwari, Richa; Jayaswal, Sachin; Sinha, Ankur 
Abstract:  In this paper, we study the hub location problem of an entrant airline that tries to maximize its market share in a market with already existing competing players. The routes open for use can be either of multiple allocation or single allocation type. The entrant's problem is modelled as a nonlinear integer program in both situations, which is intractable for off-the-shelf commercial solvers like CPLEX and Gurobi. Hence, we propose four alternate approaches to solve the problem. The first is based on a mixed integer second order conic program reformulation, while the second uses an approximation of the second order cone constraints based on lifted polymatroid cuts. The third embeds the second order conic program within Lagrangian relaxation, while the fourth uses approximated lifted polymatroid cuts within Lagrangian relaxation. The four methods perform differently for the single allocation and multiple allocation models: the second approach is best for the single allocation model and for smaller instances of the multiple allocation model. As the problem size in the multiple allocation model increases, the third method becomes the better performer in terms of computation time. 
Date:  2019–12–10 
URL:  http://d.repec.org/n?u=RePEc:iim:iimawp:14616&r=all 
By:  Dhyani, Sneha; Jayaswal, Sachin; Sinha, Ankur; Vidyarthi, Navneet 
Abstract:  In this paper, we study the single allocation hub location problem with capacity selection in the presence of congestion at hubs. Accounting for congestion at hubs leads to a nonlinear mixed integer program, for which we propose 18 alternate mixed integer second order conic program (MISOCP) reformulations. Based on our computational studies, we identify the best MISOCP-based reformulation, which turns out to be 20 
Date:  2019–12–10 
URL:  http://d.repec.org/n?u=RePEc:iim:iimawp:14617&r=all 
By:  Kamilla, Isti; Nugrahani, Endar H; Lesmana, Donny Citra 
Abstract:  The assumption of a constant interest rate in barrier option pricing does not match actual conditions in financial markets, because interest rates fluctuate over time. The modified Monte Carlo method is a method designed to compute barrier option prices under a non-constant interest rate. The basic idea of the method is to use the Cox-Ingersoll-Ross model as the interest rate model and to use uniformly distributed random numbers and an exit probability to produce a Monte Carlo estimate of the first time the stock price hits the barrier level. 
Date:  2018–01–30 
URL:  http://d.repec.org/n?u=RePEc:osf:inarxi:zfbn7&r=all 
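The exit probability mentioned in the abstract corrects discrete Monte Carlo paths for barrier crossings between monitoring dates. A minimal sketch of the standard Brownian-bridge crossing probability for an upper barrier, stated here for an arithmetic Brownian path with absolute volatility sigma (the paper's CIR interest-rate model is omitted, and all numbers below are invented):

```python
import math

def bridge_crossing_prob(s0, s1, barrier, sigma, dt):
    """Brownian-bridge probability that a path going from s0 to s1 over a
    step of length dt crossed an upper barrier in between. Comparing a
    uniform random number to this probability is the 'exit probability'
    correction used in barrier-option Monte Carlo."""
    if s0 >= barrier or s1 >= barrier:
        return 1.0  # the barrier was visibly hit at a monitoring date
    return math.exp(-2.0 * (barrier - s0) * (barrier - s1) / (sigma ** 2 * dt))

p = bridge_crossing_prob(s0=100.0, s1=101.0, barrier=105.0, sigma=20.0, dt=0.01)
```

Without this correction, discretely sampled paths systematically underestimate knock-out frequency, because crossings that occur strictly between time steps are missed.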
By:  Bryan T. Kelly; Asaf Manela; Alan Moreira 
Abstract:  Text data is ultra-high dimensional, which makes machine learning techniques indispensable for textual analysis. Text is often selected: journalists, speechwriters, and others craft messages to target their audiences’ limited attention. We develop an economically motivated high dimensional selection model that improves learning from text (and from sparse counts data more generally). Our model is especially useful when the choice to include a phrase is more interesting than the choice of how frequently to repeat it. It allows for parallel estimation, making it computationally scalable. A first application revisits the partisanship of US congressional speech. We find that earlier spikes in partisanship manifested in increased repetition of different phrases, whereas the upward trend starting in the 1990s is due to entirely distinct phrase selection. Additional applications show how our model can backcast, nowcast, and forecast macroeconomic indicators using newspaper text, and that it substantially improves out-of-sample fit relative to alternative approaches. 
JEL:  C1 C4 C55 C58 E17 G12 G17 
Date:  2019–11 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:26517&r=all 
By:  Maruyama, Yuuki 
Abstract:  In this model, the stock price is determined by two variables: the fundamental value and the current risk preference of people. Suppose that the fundamental value follows a geometric Brownian motion and that people's risk preference follows an Ornstein-Uhlenbeck process. There are only two types of asset: money (the safe asset) and stocks (the risky asset). In this case, the profit rate of equity investment is mean-reverting, and long-term investment is more advantageous than short-term investment. The market is arbitrage-free. Also, based on this model, I suggest a solution to the Equity Premium Puzzle. 
Date:  2019–10–07 
URL:  http://d.repec.org/n?u=RePEc:osf:osfxxx:6gwjq&r=all 
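The mean reversion that drives the abstract's argument comes from the Ornstein-Uhlenbeck process: shocks decay back toward the long-run mean at rate theta. A small simulation sketch using the exact one-step discretization (all parameter values are invented for illustration):

```python
import math
import random

def simulate_ou(x0, mu, theta, sigma, dt, n_steps, seed=0):
    """Exact discretization of the Ornstein-Uhlenbeck process
    dX = theta * (mu - X) dt + sigma dW. Each step decays the deviation
    from mu by exp(-theta * dt) and adds Gaussian noise with the exact
    conditional standard deviation."""
    rng = random.Random(seed)
    decay = math.exp(-theta * dt)
    stdev = sigma * math.sqrt((1.0 - decay ** 2) / (2.0 * theta))
    x = x0
    path = [x]
    for _ in range(n_steps):
        x = mu + (x - mu) * decay + stdev * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Start far above the mean; mean reversion pulls the path back toward 0.
path = simulate_ou(x0=5.0, mu=0.0, theta=2.0, sigma=0.5, dt=0.01, n_steps=500)
```

Over a horizon of several reversion times (here theta·T = 10), the initial deviation is essentially forgotten, which is the sense in which long horizons are safer under this process than short ones.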
By:  Colignatus, Thomas 
Abstract:  Family planning could focus on delaying childbearing instead of (just) reducing the number of children per woman. 66% of all children are born to mothers in the 15–29 age group. A delay of births to age 30+ would reduce the world population by about 0.8 billion as a direct effect. A secondary effect arises when the later-born children grow up and delay in their turn. There can also be a learning effect. World population might fall from 11 to 8 billion in 2100. This would cut projected emissions by some 20%. The effect seems important enough to warrant more research on the reasons, causes and consequences of such delay. Strong delay would cause swings in the dependency ratio, which would require economic flexibility, such as a retirement age rising from 65 to 70 years. Article 26 of the Universal Declaration of Human Rights of 1948 stipulates the right to education. This right need not be discussed anew. It may be, though, that education does not adequately cover family planning. 
Keywords:  family planning, fertility, birth delay, climate change, population, carbon tax, fertility tax, political economy 
JEL:  J11 J13 P16 Q01 Q54 Q56 
Date:  2019–12–11 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:97447&r=all 
By:  Nowosad, Jakub; Stepinski, Tomasz 
Abstract:  There is a keen interest in inferring spatial associations between different variables spanning the same study area. We present a method for the quantitative assessment of such associations in the case where spatial variables are either in the form of regionalizations or in the form of thematic maps. The proposed index of spatial association – called the V-measure – is adapted from a measure originally developed in computer science, where it was used to compare clusterings, to spatial science for comparing regionalizations. The V-measure is rooted in information theory and, at its core, is equivalent to the mutual information between the two regionalizations. Here we re-introduce the V-measure in terms of spatial variance analysis instead of information theory. We identify three different contexts for application of the V-measure – comparative, associative, and derivative – and present an example application for each of them. In the derivative context, the V-measure is used to select an optimal number of regions for clustering-derived regionalizations. In effect, this also constitutes a novel way to determine the number of clusters for non-spatial clustering tasks. The advantage of the V-measure over the Mapcurves method is discussed. We also use the insight from deriving the V-measure in terms of spatial variance analysis to point out a shortcoming of the Geographical Detector, a method to quantify associations between numerical and categorical spatial variables. Open-source software for calculating the V-measure accompanies this paper. 
Date:  2018–04–19 
URL:  http://d.repec.org/n?u=RePEc:osf:eartha:rcjh7&r=all 
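The information-theoretic core of the V-measure can be computed directly from two label assignments over the same cells: mutual information I(A;B) = H(A) + H(B) - H(A,B), turned into homogeneity and completeness and combined by a harmonic mean. A compact sketch of the standard definition (not the authors' spatial-variance derivation or software):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (natural log) of a discrete label assignment."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def v_measure(a, b):
    """V-measure between two partitions of the same cells: the harmonic
    mean of homogeneity I(a;b)/H(a) and completeness I(a;b)/H(b)."""
    ha, hb = entropy(a), entropy(b)
    hab = entropy(list(zip(a, b)))     # joint entropy H(a, b)
    mi = ha + hb - hab                 # mutual information I(a; b)
    hom = mi / ha if ha > 0 else 1.0
    com = mi / hb if hb > 0 else 1.0
    if hom + com == 0:
        return 0.0
    return 2 * hom * com / (hom + com)
```

Identical partitions score 1 regardless of how the labels are named, and independent partitions score 0, which is what makes the index usable for comparing regionalizations.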
By:  Lassila, Jukka; Valkonen, Tarmo 
Abstract:  Ageing populations pose a major challenge for the long-term sustainability of public finances. The response has been a wave of pension reforms that has markedly lowered the projected pension expenditure in EU countries. The increase in the second major expenditure item, health and long-term care costs, has become the most important element of fiscal sustainability gaps. We compare different demography-based approaches generally used to evaluate these costs. The interaction of different projection approaches and demography is illustrated by using realizations of a stochastic population projection as inputs in a numerical expenditure model. Our example country is Finland. Our results show that considering the effects of proximity to death on expenditure generates markedly slower expected expenditure growth for health and long-term care costs than using age-specific costs or the method developed and used by the European Commission and the Finnish Ministry of Finance. In addition, the sensitivity of the expenditure projections to demographic risks is lower. The differences in the outcomes of the different approaches are largest in long-term care costs, which are in any case growing faster in Finland than health care expenditure because of population ageing. 
Keywords:  Population ageing, Demographic uncertainty, Health care costs, Long-term care costs 
JEL:  H55 H68 J11 
Date:  2019–12–20 
URL:  http://d.repec.org/n?u=RePEc:rif:wpaper:74&r=all 
By:  Karolis Liaudinskas 
Abstract:  Can humans achieve rationality, as defined by expected utility theory, by automating their decision making? We use millisecond-stamped transaction-level data from the Copenhagen Stock Exchange to estimate the disposition effect – the tendency to sell winning but not losing stocks – among algorithmic and human professional day traders. We find that: (1) the disposition effect is substantial among humans but virtually zero among algorithms; (2) this difference is not fully explained by rational explanations and is, at least partially, attributable to prospect theory, realization utility and beliefs in mean-reversion; (3) the disposition effect harms trading performance, which further suggests that such behavior is irrational. 
Keywords:  disposition effect, algorithmic trading, financial markets, rationality, automation 
JEL:  D8 D91 G11 G12 G23 O3 
Date:  2019–11 
URL:  http://d.repec.org/n?u=RePEc:bge:wpaper:1133&r=all 
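A common way to quantify the disposition effect described above is Odean's (1998) measure: the proportion of gains realized (PGR) minus the proportion of losses realized (PLR). The sketch below illustrates that standard measure with invented trades; the paper's exact estimator on transaction-level data may differ:

```python
def disposition_measure(trades):
    """Odean-style disposition measure: PGR - PLR. Each trade is a pair
    (is_gain, was_sold). A positive value means winners are sold more
    readily than losers, i.e. a disposition effect."""
    gains_sold = sum(1 for g, s in trades if g and s)
    gains = sum(1 for g, s in trades if g)
    losses_sold = sum(1 for g, s in trades if not g and s)
    losses = sum(1 for g, s in trades if not g)
    pgr = gains_sold / gains if gains else 0.0
    plr = losses_sold / losses if losses else 0.0
    return pgr - plr

# A trader who sells 3 of 4 winning positions but only 1 of 4 losing ones:
d = disposition_measure([(True, True)] * 3 + [(True, False)]
                        + [(False, True)] + [(False, False)] * 3)
```

Here PGR = 0.75 and PLR = 0.25, so the measure is 0.5; a trader (or algorithm) with no disposition effect scores 0.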
By:  Arpit Gupta; Stijn Van Nieuwerburgh 
Abstract:  We propose a new valuation method for private equity investments. First, we construct a cash-flow replicating portfolio for the private investment, applying machine learning techniques to cash flows on various listed equity and fixed income instruments. The second step values the replicating portfolio using a flexible asset pricing model that accurately prices the systematic risk in bonds of different maturities and a broad cross-section of equity factors. The method delivers a measure of the risk-adjusted profit earned on a PE investment and a time series for the expected return on PE fund categories. We apply the method to buyout, venture capital, real estate, and infrastructure funds, among others. Accounting for horizon-dependent risk and exposure to a broad cross-section of equity factors results in negative average risk-adjusted profits. Substantial cross-sectional variation and persistence in performance suggests that some funds outperform. We also find declining expected returns on PE funds in the later part of the sample. 
JEL:  G00 G11 G12 G23 G32 R30 R51 
Date:  2019–11 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:26514&r=all 
By:  Mark Bognanni; John Zito 
Abstract:  We develop a sequential Monte Carlo (SMC) algorithm for Bayesian inference in vector autoregressions with stochastic volatility (VAR-SV). The algorithm builds particle approximations to the sequence of the model’s posteriors, adapting the particles from one approximation to the next as the window of available data expands. The parallelizability of the algorithm’s computations allows the adaptations to occur rapidly. Our particular algorithm exploits the ability to marginalize many parameters from the posterior analytically and embeds a known Markov chain Monte Carlo (MCMC) algorithm for the model as an effective mutation kernel for fighting particle degeneracy. We show that, relative to using MCMC alone, our algorithm increases the precision of inference while reducing computing time by an order of magnitude when estimating a medium-scale VAR-SV model. 
Keywords:  Vector autoregressions; sequential Monte Carlo; Rao-Blackwellization; particle filter; stochastic volatility 
JEL:  E17 C11 C51 C32 
Date:  2019–12–16 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwq:86647&r=all 
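The particle degeneracy the abstract's mutation kernel fights is usually monitored with the effective sample size (ESS) and addressed by resampling. A generic sketch of those two building blocks (illustrative only; the paper's algorithm adds analytic marginalization and an MCMC mutation step on top):

```python
import random

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights. It ranges from 1
    (one particle carries all the weight: severe degeneracy) up to the
    number of particles (uniform weights)."""
    return 1.0 / sum(w * w for w in weights)

def resample(particles, weights, rng):
    """Multinomial resampling: draw particles with probability equal to
    their normalized weights, then reset the weights to uniform."""
    n = len(particles)
    new_particles = rng.choices(particles, weights=weights, k=n)
    return new_particles, [1.0 / n] * n

rng = random.Random(0)
particles = [0.0, 1.0, 2.0, 3.0]
weights = [0.7, 0.1, 0.1, 0.1]
ess = effective_sample_size(weights)          # well below 4: degenerate
particles, weights = resample(particles, weights, rng)
```

After resampling, heavily weighted particles are duplicated and the weights are uniform again; a mutation step (MCMC in the paper) then restores particle diversity.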
By:  Steven F. Lehrer; Tian Xie; Tao Zeng 
Abstract:  Social media data present challenges for forecasters, since one must convert text into data and deal with issues related to these measures being collected at different frequencies and volumes than traditional financial data. In this paper, we use a deep learning algorithm to measure sentiment within Twitter messages on an hourly basis and introduce a new method for mixed data sampling (MIDAS) that allows for a weaker discounting of historical data and is well-suited for this new data source. To evaluate the performance of our approach relative to alternative MIDAS strategies, we conduct an out-of-sample forecasting exercise for the consumer confidence index with both traditional econometric strategies and machine learning algorithms. Irrespective of the estimator used to conduct forecasts, our results show that (i) including consumer sentiment measures from Twitter greatly improves forecast accuracy, and (ii) there are substantial gains from our proposed MIDAS procedure relative to common alternatives. 
JEL:  C58 G17 
Date:  2019–11 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:26505&r=all 
By:  Tommaso Ciarli; Alex Coad; Alessio Moneta 
Abstract:  This paper introduces a little-known category of estimators – linear non-Gaussian vector autoregression models, acyclic or cyclic – imported from the machine learning literature, to revisit a well-known debate. Does exporting increase firm productivity? Or is it only more productive firms that remain in the export market? We focus on a relatively well-studied country (Chile) and on already-exporting firms (i.e. the intensive margin of exporting). We explicitly look at the co-evolution of productivity and growth, and attempt to ascertain both contemporaneous and lagged causal relationships. Our findings suggest that exporting does not have any causal influence on the other variables. Instead, exporting seems to be determined by other dimensions of firm growth. With respect to learning by exporting (LBE), we find no evidence that export growth causes productivity growth within the period and very little evidence that export growth has a causal effect on subsequent TFP growth. 
Keywords:  Productivity; Exporting; Learning-by-exporting; Causality; Structural VAR; Independent Component Analysis 
Date:  2019–12–20 
URL:  http://d.repec.org/n?u=RePEc:ssa:lemwps:2019/39&r=all 
By:  Ljunge, Martin (Research Institute of Industrial Economics (IFN)) 
Abstract:  Individuals with ancestry from countries with advanced information technology in 1500 AD, such as movable type and paper, adopt the internet faster than those with less advanced ancestry. The analysis illustrates persistence over five centuries in information technology adoption in European and U.S. populations. The results hold when excluding the most and least advanced ancestries, and when accounting for additional deep roots of development. Historical information technology is a better predictor of internet adoption than current development. A machine learning procedure supports the findings. Human capital is a plausible channel, as 1500 AD information technology predicts early 20th century school enrollment, which in turn predicts 21st century internet adoption. A three-stage model including human capital around 1990 yields similar results. 
Keywords:  Internet; Technology diffusion; Information technology; Intergenerational transmission; Printing press 
JEL:  D13 D83 J24 N70 O33 Z13 
Date:  2019–12–18 
URL:  http://d.repec.org/n?u=RePEc:hhs:iuiwop:1312&r=all 
By:  Garnadi, Agah D.; Nurdiati, Sri; Erliana, Windiani 
Abstract:  Current formulas in credibility theory often calculate the net premium as a weighted sum of the average experience of the policyholder and the average experience of the entire collection of policyholders. Because these formulas are linear, they are easy to use. Another advantage of linear formulas is that the estimate changes by a fixed amount per change in claim experience; if an insurer uses such a formula, then the policyholder can predict the change in premium. In a series of papers, Young (1997, 1998, 2000) applies decision theory to develop a credibility formula that minimizes a loss function that is a linear combination of a squared-error term and a second-derivative or first-order term. In variational form, this loss function is equivalent to a fourth-order or second-order linear differential equation, respectively. This allows us to compute the Green's function in detail via symbolic calculation and thereby obtain the solution. 
Date:  2017–11–18 
URL:  http://d.repec.org/n?u=RePEc:osf:inarxi:wg7qa&r=all 