nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒10‒15
eleven papers chosen by
Stan Miles
Thompson Rivers University

  1. The Evaluation of Fiscal Consolidation Strategies By Norbert Švarda
  2. An Artificial Neural Network Approach to Acreage-Share Modeling By Ramsey, Steven M.; Bergtold, Jason S.; Heier Stamm, Jessica
  3. Deep Factor Model By Kei Nakagawa; Takumi Uchida; Tomohisa Aoshima
  4. The Potential Macroeconomic and Sectoral Consequences of Brexit on Ireland By Christine Arriola; Caitlyn Carrico; David Haugh; Nigel Pain; Elena Rusticelli; Donal Smith; Frank van Tongeren; Ben Westmore
  5. Valuation Construction Permit Uncertainties in Real Estate Development Projects with Stochastic Decision Tree Analysis By Serhat Basdogan; Hilde Remøy; Ruud Binnekamp
  6. Counterfactual Policy Simulations for Assessing the Impact of Potential Regulations on E-Cigarette Attributes By Zare, Samane; Zheng, Yuqing
  7. Semi-supervised Text Regression with Conditional Generative Adversarial Networks By Tao Li; Xudong Liu; Shihan Su
  8. Complex market dynamics in the light of random matrix theory By Hirdesh K. Pharasi; Kiran Sharma; Anirban Chakraborti; Thomas H. Seligman
  9. Income Inequality and Stock Market Returns By Agnieszka (A.P.) Markiewicz; Rafal Raciborski
  10. Multivariate Stochastic Volatility with Co-Heteroscedasticity By Joshua Chan; Arnaud Doucet; Roberto Leon-Gonzalez; Rodney W. Strachan
  11. Agro-Climatic Data by County (ACDC): Methods and Data Generating Processes By Yun, Seong Do; Gramig, Benjamin M.

  1. By: Norbert Švarda
    Abstract: In this paper, we present a framework and assess different fiscal consolidation strategies, on both the revenue and the expenditure sides of the budget, in the context of Slovakia. The model we use for simulations is a behavioural general-equilibrium "what-if" model. We analyse the simulated impacts of consolidation strategies on growth and on the fiscal balance, in both the short and the long term. The microsimulation approach also allows us to evaluate distributional impacts and to compare the statutory with the resulting tax incidence in the long run. We simulate strategies based on taxing labour income, taxing consumption, and cutting expenditures on social transfers. We document that corporate and labour taxes are more unfavourable to output growth, while consumption taxes are among the less damaging instruments for consolidation. We show that spending cuts may promote employment and are not detrimental to output growth.
    JEL: C63 H22 I38
    Date: 2018–10–04
  2. By: Ramsey, Steven M.; Bergtold, Jason S.; Heier Stamm, Jessica
    Keywords: Research Methods/Econometrics/Stats, Production Economics, Food and Agricultural Policy Analysis
    Date: 2018–06–20
  3. By: Kei Nakagawa; Takumi Uchida; Tomohisa Aoshima
    Abstract: We propose to represent a return model and a risk model in a unified manner with deep learning, a representative model capable of expressing nonlinear relationships. Although deep learning performs quite well, it has significant disadvantages, such as a lack of transparency and limits to the interpretability of its predictions, which poses practical problems for accountability. Thus, we construct a multifactor model using interpretable deep learning. We implement deep learning as a return model to predict stock returns from various factors. Then, we apply layer-wise relevance propagation (LRP) to decompose the attributes of the predicted return as a risk model. By applying LRP to an individual stock or on a portfolio basis, we can determine which factors contribute to the prediction. We call this model a deep factor model. We then perform an empirical analysis on the Japanese stock market and show that our deep factor model has better predictive capability than the traditional linear model or other machine learning methods. In addition, we illustrate which factors contribute to the prediction.
    Date: 2018–10
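As a rough illustration of the LRP idea behind the deep factor model, the sketch below propagates a tiny network's predicted return back to four hypothetical factor exposures using the epsilon rule. The factor names, network sizes and random weights are illustrative assumptions, not the authors' specification; in practice the weights would come from a trained return model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "deep factor" setting: predict one stock's return from factor
# exposures. Factor names and weights are illustrative stand-ins.
factors = ["value", "size", "momentum", "quality"]
x = rng.standard_normal(len(factors))      # one stock's factor exposures

W1 = rng.standard_normal((8, 4)); b1 = np.zeros(8)
W2 = rng.standard_normal((1, 8)); b2 = np.zeros(1)

# Forward pass of a small ReLU network.
z1 = W1 @ x + b1
a1 = np.maximum(z1, 0.0)
y = W2 @ a1 + b2                           # predicted return

def lrp_linear(a_in, w, r_out, eps=1e-6):
    """Epsilon-rule LRP: redistribute relevance r_out from a linear
    layer's output to its inputs, in proportion to each input's
    contribution to the pre-activation."""
    z = w @ a_in                           # pre-activations (biases are zero here)
    s = r_out / (z + eps * np.sign(z))     # stabilised ratio
    return a_in * (w.T @ s)                # relevance of the inputs

# Propagate the prediction back through the two layers.
r_hidden = lrp_linear(a1, W2, y)
r_input = lrp_linear(x, W1, r_hidden)

for name, r in zip(factors, r_input):
    print(f"{name:>8}: relevance {r:+.4f}")
# Relevances decompose the prediction: they sum (approximately) to y.
print("sum of relevances:", r_input.sum(), " prediction:", y[0])
```

The per-factor relevances are the "which factor contributes to prediction" output the abstract describes, here for a single stock.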
  4. By: Christine Arriola; Caitlyn Carrico; David Haugh; Nigel Pain; Elena Rusticelli; Donal Smith; Frank van Tongeren; Ben Westmore
    Abstract: This paper provides estimates of the potential effects on exports, imports, production, factor demand and GDP in Ireland of an exit of the United Kingdom (UK) from the European Union (EU), focusing on the trade and FDI channels. Owing to the high uncertainty regarding the final trade agreement between the negotiating parties, the choice has been made to assume a worst-case outcome in which trade relations between the United Kingdom and the EU are governed by World Trade Organization (WTO) most favoured nation (MFN) rules. In doing so, it provides something close to an upper-bound estimate of the negative economic impact, taking into account the potential for some firms to relocate to Ireland. Any final trade agreement resulting in a closer relationship between the United Kingdom and the EU could reduce this negative impact. The simulations use two large-scale models: a global macroeconomic model (NiGEM) and a general equilibrium trade model (METRO). These models are used to quantify, at both the macroeconomic and the sectoral level, two key channels through which Ireland would be affected: trade and foreign direct investment. The simulation results highlight that the negative effect on trade could result in Ireland's GDP falling by 1½ per cent in the medium term and around 2½ per cent in the long term. The impacts are highly heterogeneous across sectors. Agriculture, food, and some smaller manufacturing sectors experience the largest declines in total gross exports, at over 15%. By contrast, financial services exports increase slightly. The modelling suggests that any positive offsetting impact to the trade shock from increased inward FDI to Ireland is likely to be modest.
    Keywords: Brexit, computable general equilibrium model, European Union, foreign direct investment, international trade, Ireland, METRO model, NIGEM macroeconometric model, sectoral economic effects
    JEL: C10 C68 F13 F14 F47
    Date: 2018–10–11
  5. By: Serhat Basdogan; Hilde Remøy; Ruud Binnekamp
    Abstract: With the rapid development of real estate markets under globalization and increasingly competitive market conditions, risk evaluation has become one of the most important tasks in real estate investment valuation. This paper examines the relationship between construction permit uncertainties and real estate development projects using the Decision Tree Analysis (DTA) approach together with Monte Carlo simulations. An Expected Value (EV) criterion for an office development project, determined by stochastic DTA, is proposed and incorporated into conventional Discounted Cash Flow (DCF) analysis. This brings the utility function closer to the real world, so that decision making and risk analysis can be based on realistic data, providing better conditions for investors. The results are consistent with those of conventional DCF analysis. However, the research demonstrates that applying Monte Carlo Simulation (MCS) and DTA obviates the deficiencies of conventional DCF analysis under construction permit delays and scheduling uncertainties. The results also emphasize the importance of applying EV and DTA, since construction permit delays generate significant changes in NPV and hence in the investment decisions of real estate development projects.
    Keywords: Decision Tree Analysis; Expected Value; Monte Carlo Simulation; Real Estate Development; Valuation
    JEL: R3
    Date: 2018–01–01
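A minimal sketch of combining a decision tree's Expected Value criterion with Monte Carlo simulation of permit delays inside a conventional DCF appraisal. All project figures, branch probabilities and the delay distribution are hypothetical, chosen only to illustrate the mechanics rather than to reproduce the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical office project: all figures are illustrative assumptions.
investment = 10_000_000          # upfront cost
annual_cash_flow = 1_600_000     # net operating cash flow once leased
horizon = 10                     # years of operation
discount_rate = 0.08

def npv(delay_years):
    """Conventional DCF: operating cash flows shift back by the permit delay."""
    years = np.arange(1, horizon + 1) + delay_years
    return -investment + np.sum(annual_cash_flow / (1 + discount_rate) ** years)

# Decision-tree branch probabilities (assumed): permit on time vs delayed.
p_on_time = 0.6

# Monte Carlo over the delayed branch: the delay length is uncertain,
# modelled here as a triangular distribution between 1 and 4 years.
n_sims = 10_000
delays = rng.triangular(left=1.0, mode=2.0, right=4.0, size=n_sims)

npv_on_time = npv(0.0)
npv_delayed = np.mean([npv(d) for d in delays])

# Expected Value (EV) criterion across the two branches of the tree.
ev = p_on_time * npv_on_time + (1 - p_on_time) * npv_delayed
print(f"NPV (no delay):     {npv_on_time:,.0f}")
print(f"E[NPV | delayed]:   {npv_delayed:,.0f}")
print(f"EV across the tree: {ev:,.0f}")
```

The gap between the on-time NPV and the EV is exactly the kind of permit-delay effect on investment decisions the abstract emphasizes.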
  6. By: Zare, Samane; Zheng, Yuqing
    Keywords: Demand and Price Analysis, Household and Labor Economics, Behavioral & Institutional Economics
    Date: 2018–06–20
  7. By: Tao Li; Xudong Liu; Shihan Su
    Abstract: Enormous online textual information provides intriguing opportunities for understandings of social and economic semantics. In this paper, we propose a novel text regression model based on a conditional generative adversarial network (GAN), with an attempt to associate textual data and social outcomes in a semi-supervised manner. Besides promising potential of predicting capabilities, our superiorities are twofold: (i) the model works with unbalanced datasets of limited labelled data, which align with real-world scenarios; and (ii) predictions are obtained by an end-to-end framework, without explicitly selecting high-level representations. Finally we point out related datasets for experiments and future research directions.
    Date: 2018–10
  8. By: Hirdesh K. Pharasi; Kiran Sharma; Anirban Chakraborti; Thomas H. Seligman
    Abstract: We present a brief overview of random matrix theory (RMT), with the objective of highlighting its computational results and applications in financial markets viewed as complex systems. An oft-encountered problem in computational finance is the choice of an appropriate epoch over which the empirical cross-correlation return matrix is computed. A long epoch smooths out the fluctuations in the return time series but suffers from non-stationarity, whereas a short epoch results in noisy fluctuations in the return time series and correlation matrices that turn out to be highly singular. An effective method to tackle this issue is the power mapping, in which a non-linear distortion is applied to a short-epoch correlation matrix. The value of the distortion parameter controls the noise suppression, and the distortion also removes the degeneracy of the zero eigenvalues. Depending on the correlation structures, interesting properties of the eigenvalue spectra are found. We simulate different correlated Wishart matrices to compare the results with empirical return matrices computed using S&P 500 (USA) market data for the period 1985-2016. We also briefly review two recent applications of RMT in financial stock markets: (i) identification of "market states" and a long-term precursor to a critical state; and (ii) characterization of catastrophic instabilities (market crashes).
    Date: 2018–09
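The power mapping described in the abstract is simple to sketch: distort a singular short-epoch correlation matrix entrywise as sign(c)·|c|^(1+ε). The snippet below uses simulated uncorrelated returns (not the S&P 500 data of the paper), so the eigenvalue degeneracy and its removal are visible directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Short epoch: N stocks but only T < N return observations, so the
# empirical correlation matrix is singular (rank at most T - 1).
N, T = 50, 20
returns = rng.standard_normal((N, T))
C = np.corrcoef(returns)

eigvals = np.linalg.eigvalsh(C)
# With T < N, at least N - T eigenvalues are (numerically) zero.

# Power mapping: apply the entrywise non-linear distortion
# c -> sign(c) * |c|**(1 + eps); eps controls the noise suppression.
eps = 0.6
C_pow = np.sign(C) * np.abs(C) ** (1.0 + eps)

eigvals_pow = np.linalg.eigvalsh(C_pow)
# The distortion lifts the degeneracy: the zero eigenvalues spread out
# into a small "emerging" spectrum and the matrix gains full rank.
print("near-zero eigenvalues before:", np.sum(np.abs(eigvals) < 1e-8))
print("rank before / after:",
      np.linalg.matrix_rank(C), "/", np.linalg.matrix_rank(C_pow))
```

Note that the power-mapped matrix is no longer guaranteed to be positive semi-definite; choosing ε trades noise suppression against this distortion.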
  9. By: Agnieszka (A.P.) Markiewicz (Erasmus University Rotterdam); Rafal Raciborski (European Commission)
    Abstract: In this paper, we study the relationship between income inequality and stock market returns. We develop a quantitative general equilibrium model that links shifts in both labour and capital income inequality to stock market variables. An increase in the share of capital owners’ income derived from risky capital leads to a higher equity premium, while a rise in their non-risky, labour share of income reduces it. When we calibrate our model to match the empirical size of the shifts over the last five decades, we find that the negative impact of the higher labour income share of capital owners dominates and brings the equity premium below its historical value by 0.79 percentage points, in line with the data. If both the capital and total income shares of the top decile were to continue growing at their 1970-2014 historical rates, the equity premium would keep decreasing, to 6.11% in 2030, 0.92 percentage points below the historical equity premium of 7.03%. If instead only the capital share of income continues to grow, the equity premium would be higher than the historical average by 0.57 percentage points. If labour income dispersion remains constant, the historical equity premium of 7.03% would be reached by 2030 if the capital share of income grew by 1.4% each year.
    Keywords: Asset Pricing; Risk Premium Dynamics; Income Inequality; Computational Macroeconomics
    JEL: D31 E32 E44 H21 O33
    Date: 2018–10–07
  10. By: Joshua Chan (Purdue University); Arnaud Doucet (University of Oxford); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Tokyo, Japan); Rodney W. Strachan (University of Queensland)
    Abstract: This paper develops a new methodology that decomposes shocks into homoscedastic and heteroscedastic components. This specification implies that there exist linear combinations of heteroscedastic variables that eliminate heteroscedasticity; that is, these linear combinations are homoscedastic, a property we call co-heteroscedasticity. The heteroscedastic part of the model uses a multivariate stochastic volatility inverse Wishart process. The resulting model is invariant to the ordering of the variables, which we show is important not only for impulse response analysis but also, more generally, for, e.g., volatility estimation and variance decompositions. The specification allows estimation in moderately high dimensions. The computational strategy uses a novel particle filter algorithm, a reparameterization that substantially improves algorithmic convergence, and an alternating-order particle Gibbs sampler that reduces the number of particles needed for accurate estimation. We provide two empirical applications: one to exchange rate data and another to a large Vector Autoregression (VAR) of US macroeconomic variables. We find strong evidence for co-heteroscedasticity and, in the second application, estimate the impact of monetary policy on the homoscedastic and heteroscedastic components of macroeconomic variables.
    Date: 2018–10
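The notion of co-heteroscedasticity can be illustrated with a simple simulation rather than the paper's particle-filter machinery: two series driven by one common stochastic-volatility factor are individually heteroscedastic (fat-tailed), but the loading-orthogonal linear combination cancels the factor and is homoscedastic. All parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 20_000

# One common factor with stochastic (log-AR(1)) volatility drives the
# heteroscedasticity of both observed series; the idiosyncratic noise
# is homoscedastic Gaussian.
h = np.zeros(T)
for t in range(1, T):
    h[t] = 0.98 * h[t - 1] + 0.2 * rng.standard_normal()
factor = np.exp(h / 2) * rng.standard_normal(T)

a = np.array([1.0, 0.5])                 # factor loadings
noise = rng.standard_normal((2, T))
y = a[:, None] * factor + noise          # two heteroscedastic series

# The loading-orthogonal combination a2*y1 - a1*y2 cancels the factor,
# leaving a homoscedastic series: "co-heteroscedasticity".
combo = a[1] * y[0] - a[0] * y[1]

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

# SV-driven series have fat tails; the co-heteroscedastic combination
# should look close to Gaussian (excess kurtosis near zero).
print("excess kurtosis y1:   ", excess_kurtosis(y[0]))
print("excess kurtosis combo:", excess_kurtosis(combo))
```

In the paper, finding such combinations and estimating the stochastic-volatility part is done by Bayesian methods; this snippet only demonstrates the property itself.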
  11. By: Yun, Seong Do; Gramig, Benjamin M.
    Abstract: With the recent popularity of raster imagery data (high-resolution grid-cell data), demand for weather, soil/land and related data for research and applied decision support is growing rapidly. Agro-Climatic Data by County (ACDC) is designed to provide the most widely used variables, extracted from the most popular high-resolution gridded data sources, to end users of agro-climatic variables who may not be equipped to process large geospatial datasets that come from multiple publicly available sources in different data formats and spatial scales. Annual county-level crop yield data from USDA NASS for 1981-2015 are provided for corn, soybeans, upland cotton and winter wheat, along with customizable growing degree days (GDDs) and cumulative precipitation from the PRISM weather data for two groups of months (March-August and April-October) to capture the crops' different growing-season periods. Soil characteristic data from gSSURGO are also included for each county in the data set. All weather and soil data are processed using NLCD land cover/land use data to exclude land that is not in non-forestry agricultural use. This paper explains the numerical and geocomputational methods and data generating processes employed in ACDC.
    Keywords: Production Economics, Research Methods/ Statistical Methods, Resource /Energy Economics and Policy
    Date: 2018–01–16
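A sketch of the customizable growing-degree-day calculation that datasets like ACDC provide, using the standard bounded min/max formula. The simulated temperatures, base and ceiling below are illustrative stand-ins for PRISM inputs and crop-specific thresholds.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative daily temperatures (deg C) for one March-August season;
# real inputs would come from PRISM gridded weather data.
n_days = 184
tmax = 18 + 10 * rng.random(n_days)
tmin = tmax - (5 + 5 * rng.random(n_days))

def growing_degree_days(tmax, tmin, base=10.0, ceiling=30.0):
    """Standard GDD: bound daily min/max temperatures to [base, ceiling],
    take their mean, subtract the crop-specific base temperature, and
    sum the daily contributions over the season."""
    hi = np.clip(tmax, base, ceiling)
    lo = np.clip(tmin, base, ceiling)
    return np.sum((hi + lo) / 2.0 - base)

gdd = growing_degree_days(tmax, tmin)
print(f"Season GDD (base 10C, ceiling 30C): {gdd:.1f}")
```

Because base and ceiling are parameters, the same routine covers the "customizable" GDDs for different crops; a higher base always yields a smaller total.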

This nep-cmp issue is ©2018 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.