nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒12‒23
twenty-six papers chosen by



  1. Landscape classification with deep neural networks By Buscombe, Daniel
  2. Towards using responsible artificial intelligence in product recommender systems in marketing By Christine Balagué; El Mehdi Rochd
  3. Railway timetabling with integrated passenger distribution By Hartleb, J.; Schmidt, M.E.
  4. Can land market regulations fulfill their promises? By Heinrich, Florian; Appel, Franziska; Balmann, Alfons
  5. A Simple Algorithm for Solving Ramsey Optimal Policy with Exogenous Forcing Variables By Jean-Bernard Chatelain; Kirsten Ralf
  6. Wage Indexation and Jobs. A Machine Learning Approach By Gert Bijnens; Shyngys Karimov; Jozef Konings
  7. Non-crossing nonlinear regression quantiles by monotone composite quantile regression neural network, with application to rainfall extremes By Cannon, Alex J.
  8. Sub-sampling and other considerations for efficient risk estimation in large portfolios By Michael B. Giles; Abdul-Lateef Haji-Ali
  9. Water use in Brazil and its relationship with economic determinants: an analysis based on simulations with a general equilibrium model By Aline Souza Magalhães; Edson Paulo Domingues; Bruna Stein Ciasca
  10. Comparison of Simulated Annealing and Particle Swarm Optimization on Reliability-Redundancy Problem By Martin Bagaram
  11. How Much Information Do Monetary Policy Committees Disclose? Evidence from the FOMC's Minutes and Transcripts By Apel, Mikael; Blix Grimaldi, Marianna; Hull, Isaiah
  12. Competitive Hub Location Problems: Model and Solution Approaches By Tiwari, Richa; Jayaswal, Sachin; Sinha, Ankur
  13. Alternate Second Order Conic Programming Reformulations for Hub Location with Capacity Selection under Demand By Dhyani, Sneha; Jayaswal, Sachin; Sinha, Ankur; Vidyarthi, Navneet
  14. The Monte Carlo Method for Determining Barrier Option Prices with Non-constant Interest Rates By Kamilla, Isti; Nugrahani, Endar H; Lesmana, Donny Citra
  15. Text Selection By Bryan T. Kelly; Asaf Manela; Alan Moreira
  16. A model of mean reversion in stock prices and the Equity Premium Puzzle By Maruyama, Yuuki
  17. A numerical exercise on climate change and family planning: World population might reduce from 11 to 8 billion in 2100 if women of age 15-29 wait and have their first child at age 30+ By Colignatus, Thomas
  18. Spatial association between regionalizations using the information-theoretical V-measure By Nowosad, Jakub; Stepinski, Tomasz
  19. Alternative Demography-based Projection Approaches for Public Health and Long-term Care Expenditure By Lassila, Jukka; Valkonen, Tarmo
  20. Human vs. Machine: Disposition Effect Among Algorithmic and Human Day-traders By Karolis Liaudinskas
  21. Valuing Private Equity Strip by Strip By Arpit Gupta; Stijn Van Nieuwerburgh
  22. Sequential Bayesian Inference for Vector Autoregressions with Stochastic Volatility By Mark Bognanni; John Zito
  23. Does High Frequency Social Media Data Improve Forecasts of Low Frequency Consumer Confidence Measures? By Steven F. Lehrer; Tian Xie; Tao Zeng
  24. Exporting and productivity as part of the growth process: Causal evidence from a data-driven structural VAR By Tommaso Ciarli; Alex Coad; Alessio Moneta
  25. From Gutenberg to Google: The Internet Is Adopted Earlier if Ancestors Had Advanced Information Technology in 1500 AD By Ljunge, Martin
  26. Univariate Credibility as a Boundary-Value Problem, A Symbolic Green’s Function Method (Regular Case) By Garnadi, Agah D.; Nurdiati, Sri; Erliana, Windiani

  1. By: Buscombe, Daniel
    Abstract: The application of deep learning, specifically deep convolutional neural networks (DCNNs), to the classification of remotely sensed imagery of natural landscapes has the potential to greatly assist in the analysis and interpretation of geomorphic processes. However, the general usefulness of deep learning applied to conventional photographic imagery at a landscape scale is, as yet, largely unproven. If DCNN-based image classification is to gain wider application and acceptance within the geoscience community, demonstrable successes need to be coupled with accessible tools to retrain deep neural networks to discriminate landforms and land uses in landscape imagery. Here, we present an efficient approach to train and apply DCNNs on sets of photographic images, using a powerful graphical method, called a conditional random field (CRF), to generate DCNN training and testing data with minimal manual supervision. We apply the method to several sets of images of natural landscapes, acquired from satellites, aircraft, unmanned aerial vehicles, and fixed camera installations. We synthesize our findings to examine the general effectiveness of transfer learning for landscape-scale image classification. Finally, we show how DCNN predictions on small regions of images might be used in conjunction with a CRF for highly accurate pixel-level classification of images.
    Date: 2018–06–18
    URL: http://d.repec.org/n?u=RePEc:osf:eartha:5mx3c&r=all
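
    A minimal transfer-learning sketch in the spirit of the retraining workflow described above (assuming Python with TensorFlow/Keras; the class count and base network are illustrative choices, and the paper's CRF label-generation step is omitted):

        import tensorflow as tf

        N_CLASSES = 8  # hypothetical number of landform/land-use classes

        # Reuse ImageNet features and retrain only a small classification head.
        base = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights="imagenet")
        base.trainable = False  # freeze the convolutional trunk (transfer learning)

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(images, labels, epochs=5)  # labels would come from the CRF step
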
  2. By: Christine Balagué (MMS - Département Management, Marketing et Stratégie - IMT - Institut Mines-Télécom [Paris] - TEM - Télécom Ecole de Management - IMT-BS - Institut Mines-Télécom Business School, LITEM - Laboratoire en Innovation, Technologies, Economie et Management - UEVE - Université d'Évry-Val-d'Essonne - IMT-BS - Institut Mines-Télécom Business School); El Mehdi Rochd (MMS - Département Management, Marketing et Stratégie - IMT - Institut Mines-Télécom [Paris] - TEM - Télécom Ecole de Management - IMT-BS - Institut Mines-Télécom Business School, LITEM - Laboratoire en Innovation, Technologies, Economie et Management - UEVE - Université d'Évry-Val-d'Essonne - IMT-BS - Institut Mines-Télécom Business School)
    Abstract: Most product recommender systems in marketing are based on artificial intelligence algorithms using machine learning or deep learning techniques. One of the current challenges for companies is to avoid negative effects of these product recommender systems on customers (or prospects), such as unfairness, bias, discrimination, opacity, or opinions encapsulated in the implemented recommender system algorithms. This research focuses on the fairness challenge. We first review the literature on the importance and challenges of using ethical algorithms. Second, we define the fairness concept and present the reasons why it is important for companies to address this issue in marketing. Third, we present the different methodologies used in recommender system algorithms. Using a dataset from the entertainment industry, we measure algorithmic fairness for each methodology and compare the results. Finally, we improve on the existing methods by proposing a new product recommender system that aims to increase fairness relative to previous methods without compromising recommendation performance.
    Keywords: Recommender systems,Ethics,Algorithms,Fairness
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02332033&r=all
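
    A toy illustration of one common fairness diagnostic, demographic parity of recommendation exposure (Python/NumPy; the data and the 0/1 group variable are simulated, and the paper's own fairness measure may differ):

        import numpy as np

        rng = np.random.default_rng(0)
        group = rng.integers(0, 2, size=1000)               # protected attribute (0/1)
        recommended = rng.random(1000) < 0.3 + 0.1 * group  # toy recommendation flags

        rates = [float(recommended[group == g].mean()) for g in (0, 1)]
        gap = abs(rates[0] - rates[1])  # demographic parity gap: 0 means equal exposure
        print(f"recommendation rates by group: {rates}, gap: {gap:.3f}")
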
  3. By: Hartleb, J.; Schmidt, M.E.
    Abstract: Timetabling for railway services often aims at optimizing travel times for passengers. At the same time, restricting assumptions on passenger behavior and passenger modeling are made. While research has shown that passenger distribution on routes can be modeled with a discrete choice model, this has not been considered in timetabling yet. We investigate how a passenger distribution can be integrated into an optimization framework for timetabling and present two mixed-integer linear programs for this problem. Both approaches design timetables and simultaneously find a corresponding passenger distribution on available routes. One model uses a linear distribution model to estimate passenger route choices, the other model uses an integrated simulation framework to approximate a passenger distribution according to the logit model, a commonly used route choice model. We compare both new approaches with three state-of-the-art timetabling methods and a heuristic approach on a set of artificial instances and a partial network of Netherlands Railways (NS).
    Keywords: transportation, timetabling, public transport, route choice, discrete choice model, passenger distribution
    Date: 2019–12–17
    URL: http://d.repec.org/n?u=RePEc:ems:eureri:122487&r=all
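
    A minimal sketch of the logit route-choice step that the integrated simulation framework approximates (Python/NumPy; the route utilities and scale parameter are invented for illustration):

        import numpy as np

        def logit_shares(utilities, theta=1.0):
            """Multinomial logit probabilities over alternative routes."""
            v = theta * np.asarray(utilities, dtype=float)
            v -= v.max()                 # stabilize the exponentials
            w = np.exp(v)
            return w / w.sum()

        # Three routes; utility = -(travel time + transfer penalty), in minutes.
        print(logit_shares([-42.0, -45.0, -50.0], theta=0.25))
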
  4. By: Heinrich, Florian; Appel, Franziska; Balmann, Alfons
    Abstract: As land prices in Germany have increased continuously since 2006, policy makers, representatives of farmers' unions, NGOs, and farmers have discussed and proposed new land market regulations to stop price increases and to protect smaller farmers in particular. In this paper we analyze different types of regulations for the land rental market with the agent-based model AgriPoliS. Our simulation results show that price and farm size limitations may inhibit rental price increases and reduce structural change. However, the regulations do not preserve the number of small farms, nor do they have a substantial positive impact on their profitability and competitiveness. Many small farms still exit agricultural production and only a few are able to grow into a larger size class. Beyond redistributional costs, e.g. those borne by landowners, economic and social costs result from reduced average economic land rents, less regional value added, and less employment, caused by a reduced functionality of the land market and biased incentives.
    Keywords: structural change,land market,land market regulation,agent-based modeling
    JEL: Q15 Q18 C63
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:208388&r=all
  5. By: Jean-Bernard Chatelain (PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Kirsten Ralf (Ecole Supérieure du Commerce Extérieur - ESCE, INSEEC U. Research Center - ESCE International Business School, INSEEC U. Research Center)
    Abstract: This article presents an algorithm that extends Ljungqvist and Sargent's (2012) dynamic Stackelberg game to the case of dynamic stochastic general equilibrium models including forcing variables. Its first step is the solution of the discounted augmented linear quadratic regulator as in Hansen and Sargent (2007). It then computes the optimal initial anchor of "jump" variables such as inflation. We demonstrate that it is of no use to compute non-observable Lagrange multipliers for all periods in order to obtain impulse response functions and welfare. The algorithm presented, however, enables the computation of a history-dependent representation of a Ramsey policy rule that can be implemented by policy makers and estimated within a vector auto-regressive model. The policy instruments depend on the lagged values of the policy instruments and of the private sector's predetermined and "jump" variables. The algorithm is applied to the new-Keynesian Phillips curve as a monetary policy transmission mechanism.
    Keywords: forcing variables,new-Keynesian Phillips curve,Stackelberg dynamic game,augmented linear quadratic regulator,Ramsey optimal policy,algorithm
    Date: 2019–10–25
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01577606&r=all
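
    A sketch of the discounted linear quadratic regulator step that the algorithm builds on (Python/NumPy; the matrices are toy values, and this is plain Riccati value-function iteration rather than the authors' code):

        import numpy as np

        def discounted_lqr(A, B, R, Q, beta=0.99, tol=1e-10, max_iter=10_000):
            """Solve min sum_t beta^t (x'Rx + u'Qu) s.t. x_{t+1} = A x + B u by
            iterating the discounted Riccati equation; returns the value-function
            matrix P and the feedback matrix F of the rule u = -F x."""
            P = np.zeros_like(A)
            F = np.zeros((B.shape[1], A.shape[0]))
            for _ in range(max_iter):
                F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
                P_new = R + beta * A.T @ P @ (A - B @ F)
                if np.max(np.abs(P_new - P)) < tol:
                    return P_new, F
                P = P_new
            return P, F

        A = np.array([[1.0, 0.1], [0.0, 0.9]])   # includes a persistent forcing block
        B = np.array([[0.0], [1.0]])
        P, F = discounted_lqr(A, B, R=np.eye(2), Q=np.array([[0.5]]))
        print(F)
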
  6. By: Gert Bijnens; Shyngys Karimov; Jozef Konings
    Abstract: In 2015 Belgium suspended the automatic wage indexation for a period of 12 months in order to boost competitiveness and increase employment. This paper uses a novel machine-learning-based approach to construct a counterfactual experiment. This artificial counterfactual allows us to analyze the employment impact of suspending the indexation mechanism. We find a positive impact on employment of 0.5 percent, which corresponds to a labor demand elasticity of -0.25. This effect is more pronounced for manufacturing firms, where the impact on employment can reach 2 percent, which corresponds to a labor demand elasticity of -1.
    Keywords: labor demand, wage elasticity, counterfactual analysis, artificial control, machine learning
    Date: 2019–11–27
    URL: http://d.repec.org/n?u=RePEc:ete:vivwps:643831&r=all
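
    A stylized version of the counterfactual construction (Python/NumPy on simulated data; the authors' predictors and learner are richer than this least-squares stand-in):

        import numpy as np

        rng = np.random.default_rng(1)
        T_pre, T_post, k = 32, 4, 6
        X = rng.normal(size=(T_pre + T_post, k))        # predictor series
        y = X @ rng.normal(size=k) + 0.01 * rng.normal(size=T_pre + T_post)

        # Fit only on the pre-suspension window, then project the counterfactual.
        coef, *_ = np.linalg.lstsq(X[:T_pre], y[:T_pre], rcond=None)
        counterfactual = X[T_pre:] @ coef
        print("average post-period gap:", (y[T_pre:] - counterfactual).mean())
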
  7. By: Cannon, Alex J. (Environment and Climate Change Canada)
    Abstract: The goal of quantile regression is to estimate conditional quantiles for specified values of quantile probability using linear or nonlinear regression equations. These estimates are prone to "quantile crossing", where regression predictions for different quantile probabilities do not increase as probability increases. In the context of the environmental sciences, this could, for example, lead to estimates of the magnitude of a 10-yr return period rainstorm that exceed the 20-yr storm, or similar nonphysical results. This problem, as well as the potential for overfitting, is exacerbated for small to moderate sample sizes and for nonlinear quantile regression models. As a remedy, this study introduces a novel nonlinear quantile regression model, the monotone composite quantile regression neural network (MCQRNN), that (1) simultaneously estimates multiple non-crossing, nonlinear conditional quantile functions; (2) allows for optional monotonicity, positivity/non-negativity, and generalized additive model constraints; and (3) can be adapted to estimate standard least-squares regression and non-crossing expectile regression functions. First, the MCQRNN model is evaluated on synthetic data from multiple functions and error distributions using Monte Carlo simulations. MCQRNN outperforms the benchmark models, especially for non-normal error distributions. Next, the MCQRNN model is applied to real-world climate data by estimating rainfall Intensity-Duration-Frequency (IDF) curves at locations in Canada. IDF curves summarize the relationship between the intensity and occurrence frequency of extreme rainfall over storm durations ranging from minutes to a day. Because annual maximum rainfall intensity is a non-negative quantity that should increase monotonically as the occurrence frequency and storm duration decrease, monotonicity and non-negativity constraints are key constraints in IDF curve estimation. In comparison to standard QRNN models, the ability of the MCQRNN model to incorporate these constraints, in addition to non-crossing, leads to more robust and realistic estimates of extreme rainfall.
    Date: 2017–12–05
    URL: http://d.repec.org/n?u=RePEc:osf:eartha:wg7sn&r=all
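
    A compact sketch of two ingredients of such models, the pinball loss and a stacking construction that makes quantile crossing impossible (Python/NumPy; this is a generic illustration, not the MCQRNN architecture itself):

        import numpy as np

        def pinball_loss(y, q_pred, tau):
            """Average quantile (pinball) loss at probability level tau."""
            e = y - q_pred
            return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

        def noncrossing_quantiles(base, raw_increments):
            """q_1 = base and q_{j+1} = q_j + softplus(raw_j), so predicted
            quantiles are non-decreasing in tau by construction."""
            inc = np.log1p(np.exp(raw_increments))   # softplus keeps increments > 0
            zeros = np.zeros((base.size, 1))
            return base[:, None] + np.concatenate(
                [zeros, np.cumsum(inc, axis=1)], axis=1)

        rng = np.random.default_rng(0)
        q = noncrossing_quantiles(rng.normal(size=5), rng.normal(size=(5, 3)))
        assert np.all(np.diff(q, axis=1) >= 0)       # no quantile crossing
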
  8. By: Michael B. Giles; Abdul-Lateef Haji-Ali
    Abstract: Computing risk measures of a financial portfolio comprising thousands of options is a challenging problem because (a) it involves a nested expectation requiring multiple evaluations of the loss of the financial portfolio for different risk scenarios and (b) evaluating the loss of the portfolio is expensive and the cost increases with its size. In this work, we look at applying Multilevel Monte Carlo (MLMC) with adaptive inner sampling to this problem and discuss several practical considerations. In particular, we discuss a sub-sampling strategy that results in a method whose computational complexity does not increase with the size of the portfolio. We also discuss several control variates that significantly improve the efficiency of MLMC in our setting.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.05484&r=all
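
    A toy nested Monte Carlo estimator of a tail-risk measure, the quantity that MLMC with adaptive inner sampling accelerates (Python/NumPy; the portfolio "loss" is a stand-in, and neither the MLMC levels nor the sub-sampling strategy are implemented here):

        import numpy as np

        rng = np.random.default_rng(2)

        def inner_loss(scenario, n_inner=256):
            """Inner expectation: average portfolio loss given a risk scenario."""
            shocks = rng.normal(size=n_inner)
            return np.mean((scenario + 0.5 * shocks) ** 2)

        scenarios = rng.normal(size=2000)            # outer risk scenarios
        losses = np.array([inner_loss(s) for s in scenarios])
        threshold = 1.5
        print("estimated P(loss > threshold):", np.mean(losses > threshold))
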
  9. By: Aline Souza Magalhães (Cedeplar-UFMG); Edson Paulo Domingues (Cedeplar-UFMG); Bruna Stein Ciasca (Cedeplar-UFMG)
    Abstract: Water scarcity situations are increasingly common in certain regions of Brazil. In addition to vulnerability to extreme events, growth in exports, household consumption, and the water-use intensity of economic activities generates growing water demand that contributes to water stress. The aim of this paper is to explore the relationship between the structural characteristics and growth path of the Brazilian economy and its water use. Our main contribution is the articulation of a computable general equilibrium (CGE) model with recursive dynamics with sector data on water withdrawal and consumption. We suggest that this is an appropriate methodological framework for studying the economics of water use, one which may overcome some of the limitations of partial equilibrium econometric models or input-output models. The results indicate that the agricultural sector and the mining-metallurgical and construction chain have the most intense impacts on water demand from exports. Furthermore, water withdrawal depends more heavily on the electricity, gas, and sewage sectors when household demand increases.
    Keywords: Water use, Virtual water, Computable general equilibrium.
    JEL: Q25 Q51 C68
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:cdp:texdis:td616&r=all
  10. By: Martin Bagaram (University of Washington [Seattle])
    Abstract: Reliability-redundancy is a recurrent problem in engineering, where designed systems are meant to be very reliable. However, the cost of manufacturing very high-reliability components increases exponentially, so redundancy of less reliable components is a palliative solution. Nonetheless, the question remains how many components of low reliability (and of what reliability level) should be coupled to produce a system of high reliability. In this paper, I compare the performance of particle swarm optimization (PSO) and simulated annealing (SA) on a system of electricity distribution in a rural hospital. The results show that PSO outperformed SA. In addition, treating the problem as a bi-objective one of reliability maximization and cost minimization gives useful insight into how cost increases exponentially beyond a given system reliability.
    Date: 2017–11–12
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02350487&r=all
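
    A minimal simulated annealing sketch for a series system with parallel redundancy (Python/NumPy; the component reliabilities, costs, and budget are invented, and the paper's PSO comparison is not reproduced):

        import numpy as np

        rng = np.random.default_rng(3)
        r = np.array([0.80, 0.85, 0.90])   # component reliabilities (illustrative)
        c = np.array([2.0, 3.0, 4.0])      # component costs
        budget = 40.0

        def reliability(n):
            """Series system of parallel blocks with n[i] redundant components."""
            return np.prod(1.0 - (1.0 - r) ** n)

        def anneal(n, T=1.0, cooling=0.995, steps=5000):
            best = n.copy()
            for _ in range(steps):
                cand = n.copy()
                i = rng.integers(len(n))
                cand[i] = max(1, cand[i] + rng.choice([-1, 1]))
                if c @ cand <= budget:                       # respect the budget
                    delta = reliability(cand) - reliability(n)
                    if delta > 0 or rng.random() < np.exp(delta / T):
                        n = cand                             # Metropolis acceptance
                        if reliability(n) > reliability(best):
                            best = n.copy()
                T *= cooling                                 # geometric cooling
            return best

        best = anneal(np.ones(3, dtype=int))
        print(best, reliability(best))
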
  11. By: Apel, Mikael (Monetary Policy Department, Central Bank of Sweden); Blix Grimaldi, Marianna (Swedish National Debt Office); Hull, Isaiah (Research Department, Central Bank of Sweden)
    Abstract: The purpose of central bank minutes is to give an account of monetary policy meeting discussions to outside observers, thereby enabling them to draw informed conclusions about future policy. However, minutes are by necessity a shortened and edited representation of a broader discussion. Consequently, they may omit information that is predictive of future policy decisions. To investigate this, we compare the information content of the FOMC's minutes and transcripts, focusing on three dimensions which are likely to be excluded from the minutes: 1) the committee's degree of hawkishness; 2) the chairperson's degree of hawkishness; and 3) the level of agreement between committee members. We measure committee and chairperson hawkishness with a novel dictionary that is constructed using the FOMC's minutes and transcripts. We measure agreement by performing deep transfer learning, a technique that involves training a deep learning model on one set of documents - U.S. congressional debates - and then making predictions on another: FOMC transcripts. Our findings suggest that transcripts are more informative than minutes and heightened committee agreement typically precedes policy rate increases.
    Keywords: Central Bank Communication; Monetary Policy; Machine Learning
    JEL: D71 D83 E52 E58
    Date: 2019–11–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0381&r=all
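
    A stripped-down dictionary scoring function of the kind used to quantify hawkishness (Python; the word lists are placeholders rather than the authors' dictionary, and the deep transfer-learning agreement measure is not sketched here):

        HAWKISH = {"tighten", "tightening", "inflationary", "overheating", "hike"}
        DOVISH = {"ease", "easing", "accommodative", "slack", "cut"}

        def hawkishness(text):
            """Net hawkish share among dictionary hits; 0 if no hits."""
            tokens = text.lower().split()
            h = sum(t in HAWKISH for t in tokens)
            d = sum(t in DOVISH for t in tokens)
            return 0.0 if h + d == 0 else (h - d) / (h + d)

        print(hawkishness("members favored tightening given inflationary pressure"))
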
  12. By: Tiwari, Richa; Jayaswal, Sachin; Sinha, Ankur
    Abstract: In this paper, we study the hub location problem of an entrant airline that tries to maximize its market share in a market with already existing competing players. The routes open for use can be either of multiple allocation or single allocation type. In both situations, the entrant's problem is modelled as a non-linear integer program, which is intractable for off-the-shelf commercial solvers like CPLEX and Gurobi. Hence, we propose four alternate approaches to solve the problem. The first is based on a mixed integer second order conic program reformulation, while the second uses an approximation of the second order cone constraints based on lifted polymatroid cuts. The third embeds the second order conic program within a Lagrangian relaxation, while the fourth uses the approximated lifted polymatroid cuts within a Lagrangian relaxation. The four methods perform differently across the single allocation and multiple allocation models: the second approach is best for the single allocation model and for smaller instances of the multiple allocation model, while for larger instances of the multiple allocation model the third method becomes the better performer in terms of computation time.
    Date: 2019–12–10
    URL: http://d.repec.org/n?u=RePEc:iim:iimawp:14616&r=all
  13. By: Dhyani, Sneha; Jayaswal, Sachin; Sinha, Ankur; Vidyarthi, Navneet
    Abstract: In this paper, we study the single allocation hub location problem with capacity selection in the presence of congestion at hubs. Accounting for congestion at hubs leads to a non-linear mixed integer program, for which we propose 18 alternate mixed integer second order conic program (MISOCP) reformulations. Based on our computational studies, we identify the best MISOCP-based reformulation, which turns out to be 20
    Date: 2019–12–10
    URL: http://d.repec.org/n?u=RePEc:iim:iimawp:14617&r=all
  14. By: Kamilla, Isti; Nugrahani, Endar H; Lesmana, Donny Citra
    Abstract: The assumption of a constant interest rate in barrier option pricing does not match actual conditions in financial markets, because interest rates fluctuate over time. A modified Monte Carlo method is developed to compute barrier option prices under a non-constant interest rate. The basic idea of the method is to use the Cox-Ingersoll-Ross model as the interest rate model, and to use uniformly distributed random numbers together with an exit probability to produce a Monte Carlo estimate of the first time the stock price hits the barrier level.
    Date: 2018–01–30
    URL: http://d.repec.org/n?u=RePEc:osf:inarxi:zfbn7&r=all
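
    A bare-bones Monte Carlo pricer in the same spirit (Python/NumPy; all parameters are illustrative, the rate uses a full-truncation Euler scheme, and the exit-probability refinement described in the abstract is omitted, so barrier monitoring is only discrete):

        import numpy as np

        rng = np.random.default_rng(4)

        def cir_barrier_call(S0=100, K=100, B=90, T=1.0, sigma=0.2, r0=0.05,
                             kappa=2.0, theta=0.05, xi=0.1,
                             n_paths=20_000, n_steps=252):
            """Down-and-out call with a Cox-Ingersoll-Ross short rate."""
            dt = T / n_steps
            S = np.full(n_paths, float(S0))
            r = np.full(n_paths, float(r0))
            alive = np.ones(n_paths, dtype=bool)
            disc = np.zeros(n_paths)                 # integrated short rate
            for _ in range(n_steps):
                z1, z2 = rng.normal(size=(2, n_paths))
                disc += r * dt
                S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z1)
                rp = np.maximum(r, 0.0)              # full truncation of the rate
                r += kappa * (theta - rp) * dt + xi * np.sqrt(rp * dt) * z2
                alive &= S > B                       # knocked out below the barrier
            payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
            return np.mean(np.exp(-disc) * payoff)

        print(cir_barrier_call())
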
  15. By: Bryan T. Kelly; Asaf Manela; Alan Moreira
    Abstract: Text data is ultra-high dimensional, which makes machine learning techniques indispensable for textual analysis. Text is often selected—journalists, speechwriters, and others craft messages to target their audiences’ limited attention. We develop an economically motivated high dimensional selection model that improves learning from text (and from sparse counts data more generally). Our model is especially useful when the choice to include a phrase is more interesting than the choice of how frequently to repeat it. It allows for parallel estimation, making it computationally scalable. A first application revisits the partisanship of US congressional speech. We find that earlier spikes in partisanship manifested in increased repetition of different phrases, whereas the upward trend starting in the 1990s is due to entirely distinct phrase selection. Additional applications show how our model can backcast, nowcast, and forecast macroeconomic indicators using newspaper text, and that it substantially improves out-of-sample fit relative to alternative approaches.
    JEL: C1 C4 C55 C58 E17 G12 G17
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26517&r=all
  16. By: Maruyama, Yuuki
    Abstract: In this model, the stock price is determined by two variables: the fundamental value and the current risk preference of people. Suppose that the fundamental value follows a geometric Brownian motion and the risk preference of people follows an Ornstein-Uhlenbeck process. There are only two types of assets: money (the safe asset) and stocks (the risky asset). In this case, the rate of return on equity investment is mean-reverting, and long-term investment is more advantageous than short-term investment. The market is arbitrage-free. Based on this model, I also suggest a solution to the Equity Premium Puzzle.
    Date: 2019–10–07
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:6gwjq&r=all
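
    A short simulation of the model's two state variables (Python/NumPy; all parameter values and the pricing map price = V * exp(x) are illustrative choices, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(5)
        T, n = 10.0, 2500
        dt = T / n
        mu, sigma_v = 0.03, 0.15    # GBM drift and volatility of fundamentals
        kappa, sigma_x = 1.0, 0.3   # OU mean-reversion speed and volatility

        V = np.empty(n + 1); V[0] = 1.0   # fundamental value (geometric Brownian motion)
        x = np.empty(n + 1); x[0] = 0.0   # risk-preference index (Ornstein-Uhlenbeck)
        for t in range(n):
            V[t + 1] = V[t] * np.exp((mu - 0.5 * sigma_v**2) * dt
                                     + sigma_v * np.sqrt(dt) * rng.normal())
            x[t + 1] = x[t] - kappa * x[t] * dt + sigma_x * np.sqrt(dt) * rng.normal()

        price = V * np.exp(x)   # the price/fundamental ratio mean-reverts with x
        print(price[-1], V[-1])
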
  17. By: Colignatus, Thomas
    Abstract: Family planning could focus on delaying the having of children, instead of (just) reducing the number of children per woman. 66% of all children are born in the mothers’ age group of 15-29. A delay of births to the age of 30+ would cause a reduction of the world population by about 0.8 billion in a direct effect. A secondary effect arises when the later born children grow up and have their delay too. There can also be a learning effect. World population might reduce from 11 to 8 billion in 2100. This would cut projected emissions by some 20%. The effect seems important enough to have more research on reasons, causes and consequences of such delay. Strong delay will cause swings in the dependency ratio, which would require economic flexibility, like a rising retirement age from 65 to 70 years. Article 26 of the Universal Declaration of Human Rights of 1948 stipulates the right to education. This right need not be discussed anew. It may be that education does not adequately discuss family planning though.
    Keywords: family planning, fertility, birth delay, climate change, population, carbon tax, fertility tax, political economy
    JEL: J11 J13 P16 Q01 Q54 Q56
    Date: 2019–12–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:97447&r=all
  18. By: Nowosad, Jakub; Stepinski, Tomasz
    Abstract: There is a keen interest in inferring spatial associations between different variables spanning the same study area. We present a method for quantitative assessment of such associations in the case where spatial variables are either in the form of regionalizations or in the form of thematic maps. The proposed index of spatial association, called the V-measure, is adapted to spatial science for comparing regionalizations from a measure originally developed in computer science to compare clusterings. The V-measure is rooted in information theory and, at its core, it is equivalent to mutual information between the two regionalizations. Here we re-introduce the V-measure in terms of spatial variance analysis instead of information theory. We identify three different contexts for application of the V-measure, comparative, associative, and derivative, and present an example of an application for each of them. In the derivative context, the V-measure is used to select an optimal number of regions for clustering-derived regionalizations. In effect, this also constitutes a novel way to determine the number of clusters for non-spatial clustering tasks as well. The advantage of the V-measure over the Mapcurves method is discussed. We also use the insight from deriving the V-measure in terms of spatial variance analysis to point out a shortcoming of the Geographical Detector – a method to quantify associations between numerical and categorical spatial variables. The open-source software for calculating the V-measure accompanies this paper.
    Date: 2018–04–19
    URL: http://d.repec.org/n?u=RePEc:osf:eartha:rcjh7&r=all
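
    The V-measure can be computed directly from the contingency table of two label vectors, as the (beta-weighted) harmonic mean of homogeneity and completeness defined via conditional entropies; a self-contained version (Python/NumPy, with toy label vectors):

        import numpy as np

        def v_measure(labels_a, labels_b, beta=1.0):
            _, a = np.unique(labels_a, return_inverse=True)
            _, b = np.unique(labels_b, return_inverse=True)
            p = np.zeros((a.max() + 1, b.max() + 1))
            np.add.at(p, (a, b), 1.0)
            p /= p.sum()                          # joint distribution of labels
            pa, pb = p.sum(axis=1), p.sum(axis=0)

            def entropy(q):
                q = q[q > 0]
                return -np.sum(q * np.log(q))

            mask = p > 0
            h_a_given_b = -np.sum(p[mask] * np.log((p / pb[None, :])[mask]))
            h_b_given_a = -np.sum(p[mask] * np.log((p / pa[:, None])[mask]))
            hom = 1.0 if entropy(pa) == 0 else 1.0 - h_a_given_b / entropy(pa)
            com = 1.0 if entropy(pb) == 0 else 1.0 - h_b_given_a / entropy(pb)
            return (1 + beta) * hom * com / (beta * hom + com)

        # Two toy regionalizations over six spatial units:
        print(v_measure([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 1, 2]))
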
  19. By: Lassila, Jukka; Valkonen, Tarmo
    Abstract: Ageing populations pose a major challenge for the long-term sustainability of public finances. The response has been a wave of pension reforms that has markedly lowered projected pension expenditure in EU countries. The increase in the second major expenditure item, health and long-term care costs, has become the most important element of fiscal sustainability gaps. We compare different demography-based approaches generally used to evaluate these costs. The interaction of different projection approaches and demography is illustrated by using realizations of a stochastic population projection as inputs in a numerical expenditure model. Our example country is Finland. Our results show that accounting for the effects of proximity to death on expenditure generates markedly slower expected growth in health and long-term care costs than using age-specific costs or the method developed and used by the European Commission and the Finnish Ministry of Finance. In addition, the sensitivity of the expenditure projections to demographic risks is lower. The differences in the outcomes of the approaches are largest for long-term care costs, which are in any case growing faster in Finland than health care expenditure because of population ageing.
    Keywords: Population ageing, Demographic uncertainty, Health care costs, Long-term care costs
    JEL: H55 H68 J11
    Date: 2019–12–20
    URL: http://d.repec.org/n?u=RePEc:rif:wpaper:74&r=all
  20. By: Karolis Liaudinskas
    Abstract: Can humans achieve rationality, as defined by the expected utility theory, by automating their decision making? We use millisecond-stamped transaction-level data from the Copenhagen Stock Exchange to estimate the disposition effect – the tendency to sell winning but not losing stocks – among algorithmic and human professional day-traders. We find that: (1) the disposition effect is substantial among humans but virtually zero among algorithms; (2) this difference is not fully explained by rational explanations and is, at least partially, attributed to prospect theory, realization utility and beliefs in mean-reversion; (3) the disposition effect harms trading performance, which further suggests that such behavior is irrational.
    Keywords: disposition effect, algorithmic trading, financial markets, rationality, automation
    JEL: D8 D91 G11 G12 G23 O3
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:1133&r=all
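
    The disposition effect is commonly summarized by comparing the proportion of gains realized (PGR) with the proportion of losses realized (PLR), in the spirit of Odean (1998); a toy version with invented counts (the paper's millisecond-level estimator is more involved):

        def disposition(realized_gains, paper_gains, realized_losses, paper_losses):
            """PGR - PLR; a positive value indicates a disposition effect."""
            pgr = realized_gains / (realized_gains + paper_gains)
            plr = realized_losses / (realized_losses + paper_losses)
            return pgr - plr

        print(disposition(realized_gains=120, paper_gains=280,
                          realized_losses=60, paper_losses=340))   # 0.30 - 0.15
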
  21. By: Arpit Gupta; Stijn Van Nieuwerburgh
    Abstract: We propose a new valuation method for private equity investments. First, we construct a cash-flow replicating portfolio for the private investment, applying machine learning techniques to cash-flows on various listed equity and fixed-income instruments. The second step values the replicating portfolio using a flexible asset pricing model that accurately prices the systematic risk in bonds of different maturities and a broad cross-section of equity factors. The method delivers a measure of the risk-adjusted profit earned on a PE investment and a time series for the expected return on PE fund categories. We apply the method to buyout, venture capital, real estate, and infrastructure funds, among others. Accounting for horizon-dependent risk and exposure to a broad cross-section of equity factors results in negative average risk-adjusted profits. Substantial cross-sectional variation and persistence in performance suggest that some funds outperform. We also find declining expected returns on PE funds in the later part of the sample.
    JEL: G00 G11 G12 G23 G32 R30 R51
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26514&r=all
  22. By: Mark Bognanni; John Zito
    Abstract: We develop a sequential Monte Carlo (SMC) algorithm for Bayesian inference in vector autoregressions with stochastic volatility (VAR-SV). The algorithm builds particle approximations to the sequence of the model’s posteriors, adapting the particles from one approximation to the next as the window of available data expands. The parallelizability of the algorithm’s computations allows the adaptations to occur rapidly. Our particular algorithm exploits the ability to marginalize many parameters from the posterior analytically and embeds a known Markov chain Monte Carlo (MCMC) algorithm for the model as an effective mutation kernel for fighting particle degeneracy. We show that, relative to using MCMC alone, our algorithm increases the precision of inference while reducing computing time by an order of magnitude when estimating a medium-scale VAR-SV model.
    Keywords: Vector autoregressions; sequential Monte Carlo; Rao-Blackwellization; particle filter; stochastic volatility
    JEL: E17 C11 C51 C32
    Date: 2019–12–16
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwq:86647&r=all
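
    The reweight/resample/mutate structure of such data-tempered SMC samplers, on a deliberately tiny toy model, the posterior mean of N(mu, 1) data under a N(0, 9) prior (Python/NumPy; the paper's VAR-SV algorithm adds Rao-Blackwellization and an MCMC mutation kernel tailored to that model):

        import numpy as np

        rng = np.random.default_rng(6)
        data = rng.normal(loc=0.7, size=200)
        n_part = 1000
        particles = rng.normal(scale=3.0, size=n_part)   # draws from the prior
        logw = np.zeros(n_part)

        for t, y in enumerate(data):
            logw += -0.5 * (y - particles) ** 2          # reweight by the new datum
            w = np.exp(logw - logw.max()); w /= w.sum()
            if 1.0 / np.sum(w ** 2) < n_part / 2:        # resample when ESS is low
                particles = particles[rng.choice(n_part, size=n_part, p=w)]
                logw = np.zeros(n_part)
                y_seen = data[: t + 1]                   # mutate: one MH step
                def logpost(mu):
                    return (-mu ** 2 / 18.0
                            - 0.5 * np.sum((y_seen[None, :] - mu[:, None]) ** 2, axis=1))
                prop = particles + 0.1 * rng.normal(size=n_part)
                accept = np.log(rng.random(n_part)) < logpost(prop) - logpost(particles)
                particles = np.where(accept, prop, particles)

        w = np.exp(logw - logw.max()); w /= w.sum()
        print("posterior mean estimate:", np.sum(w * particles))
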
  23. By: Steven F. Lehrer; Tian Xie; Tao Zeng
    Abstract: Social media data present challenges for forecasters, since one must convert text into data and deal with issues related to these measures being collected at different frequencies and volumes than traditional financial data. In this paper, we use a deep learning algorithm to measure sentiment within Twitter messages on an hourly basis and introduce a new mixed data sampling (MIDAS) method that allows for a weaker discounting of historical data and is well-suited to this new data source. To evaluate the performance of our approach relative to alternative MIDAS strategies, we conduct an out-of-sample forecasting exercise for the consumer confidence index with both traditional econometric strategies and machine learning algorithms. Irrespective of the estimator used to conduct forecasts, our results show that (i) including consumer sentiment measures from Twitter greatly improves forecast accuracy, and (ii) there are substantial gains from our proposed MIDAS procedure relative to common alternatives.
    JEL: C58 G17
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26505&r=all
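
    The exponential Almon polynomial is a standard way MIDAS regressions weight high-frequency lags; a small sketch (Python/NumPy; the theta values and the use of hourly sentiment readings are purely illustrative):

        import numpy as np

        def exp_almon_weights(n_lags, theta1, theta2):
            """w_j proportional to exp(theta1*j + theta2*j^2), summing to one.
            Theta values closer to zero discount older observations more weakly."""
            j = np.arange(1, n_lags + 1, dtype=float)
            w = np.exp(theta1 * j + theta2 * j ** 2)
            return w / w.sum()

        hourly_sentiment = np.random.default_rng(7).random(24)
        x_midas = exp_almon_weights(24, theta1=0.05, theta2=-0.01) @ hourly_sentiment
        print(x_midas)   # one aggregated regressor for the low-frequency model
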
  24. By: Tommaso Ciarli; Alex Coad; Alessio Moneta
    Abstract: This paper introduces a little-known category of estimators - linear non-Gaussian vector autoregression models, acyclic or cyclic - imported from the machine learning literature, to revisit a well-known debate. Does exporting increase firm productivity? Or is it only more productive firms that remain in the export market? We focus on a relatively well-studied country (Chile) and on already-exporting firms (i.e. the intensive margin of exporting). We explicitly look at the co-evolution of productivity and growth, and attempt to ascertain both contemporaneous and lagged causal relationships. Our findings suggest that exporting does not have any causal influence on the other variables. Instead, exporting seems to be determined by other dimensions of firm growth. With respect to learning by exporting (LBE), we find no evidence that export growth causes productivity growth within the period and very little evidence that export growth has a causal effect on subsequent TFP growth.
    Keywords: Productivity; Exporting; Learning-by-exporting; Causality; Structural VAR; Independent Component Analysis.
    Date: 2019–12–20
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2019/39&r=all
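
    The identification idea, fitting a reduced-form VAR and unmixing its residuals into independent non-Gaussian shocks, can be sketched as follows (Python with NumPy and scikit-learn's FastICA; the data are simulated, and the authors' lag selection, acyclicity search, and inference procedures are omitted):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(8)
        T, k = 500, 3
        Y = 0.01 * rng.laplace(size=(T, k)).cumsum(axis=0) + rng.laplace(size=(T, k))

        X, Ylead = Y[:-1], Y[1:]
        Acoef, *_ = np.linalg.lstsq(X, Ylead, rcond=None)   # VAR(1) by OLS
        resid = Ylead - X @ Acoef                           # reduced-form innovations

        ica = FastICA(n_components=k, random_state=0)
        shocks = ica.fit_transform(resid)   # estimated independent structural shocks
        print(ica.mixing_)                  # columns: impact of each shock
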
  25. By: Ljunge, Martin (Research Institute of Industrial Economics (IFN))
    Abstract: Individuals with ancestry from countries with advanced information technology in 1500 AD, such as movable type and paper, adopt the internet faster than those with less advanced ancestry. The analysis illustrates persistence over five centuries in information technology adoption in European and U.S. populations. The results hold when excluding the most and least advanced ancestries, and when accounting for additional deep roots of development. Historical information technology is a better predictor of internet adoption than current development. A machine learning procedure supports the findings. Human capital is a plausible channel, as 1500 AD information technology predicts early 20th century school enrollment, which predicts 21st century internet adoption. A three-stage model including human capital around 1990 yields similar results.
    Keywords: Internet; Technology diffusion; Information technology; Intergenerational transmission; Printing press
    JEL: D13 D83 J24 N70 O33 Z13
    Date: 2019–12–18
    URL: http://d.repec.org/n?u=RePEc:hhs:iuiwop:1312&r=all
  26. By: Garnadi, Agah D.; Nurdiati, Sri; Erliana, Windiani
    Abstract: Current formulas in credibility theory often calculate the net premium as a weighted sum of the average experience of the policyholder and the average experience of the entire collection of policyholders. Because these formulas are linear, they are easy to use. Another advantage of linear formulas is that the estimate changes by a fixed amount per change in claim experience: if an insurer uses such a formula, then the policyholder can predict the change in premium. In a series of papers, Young (1997, 1998, 2000) applies decision theory to develop a credibility formula that minimizes a loss function that is a linear combination of a squared-error term and a second-derivative or first-order term. This loss function, in its variational form, is equivalent to a fourth-order or second-order linear differential equation, respectively. This allows us to compute the solution by evaluating the Green's function via symbolic calculation.
    Date: 2017–11–18
    URL: http://d.repec.org/n?u=RePEc:osf:inarxi:wg7qa&r=all
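
    One way to write the correspondence the abstract invokes, for the second-derivative case (notation is ours and not necessarily Young's): the credibility estimate y minimizes the penalized squared-error functional

        \min_{y}\; \int \bigl(y(x)-\bar{X}(x)\bigr)^{2} w(x)\,dx
                   \;+\; \lambda \int \bigl(y''(x)\bigr)^{2}\,dx ,

    whose Euler-Lagrange equation is the fourth-order linear boundary-value problem

        \lambda\, y^{(4)}(x) \;+\; w(x)\,\bigl(y(x)-\bar{X}(x)\bigr) \;=\; 0 ,

    so that, with G the Green's function of the operator on the left-hand side,

        y(x) \;=\; \int G(x,s)\, w(s)\, \bar{X}(s)\, ds .

    A first-derivative penalty yields a second-order problem in the same way.
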

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.