nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒01‒27
25 papers chosen by
Stan Miles
Thompson Rivers University

  1. Neural Network Associative Forecasting of Demand for Goods By Osipov, Vasiliy; Zhukova, Nataly; Miloserdov, Dmitriy
  2. Evolving ab initio trading strategies in heterogeneous environments By David Rushing Dewhurst; Yi Li; Alexander Bogdan; Jasmine Geng
  3. The Sky is the Limit: Assessing Aircraft Market Diffusion with Agent-Based Modeling By Liu, Xueying; Madlener, Reinhard
  4. Searching for Interpretable Demographic Patterns By Muratova, Anna; Islam, Robiul; Mitrofanova, Ekaterina S.; Ignatov, Dmitry I.
  5. Comparing Deep Neural Network and Econometric Approaches to Predicting the Impact of Climate Change on Agricultural Yield By Timothy Neal; Michael Keane
  6. "The Squawk Bot": Joint Learning of Time Series and Text Data Modalities for Automated Financial Information Filtering By Xuan-Hong Dang; Syed Yousaf Shah; Petros Zerfos
  7. A p-step formulation for the capacitated vehicle routing problem By Dollevoet, T.A.B.; Munari, P.; Spliet, R.
  8. An Integrated Pipeline Architecture for Modeling Urban Land Use, Travel Demand, and Traffic Assignment By Waddell, Paul; Boeing, Geoff; Gardner, Max; Porter, Emily
  9. Using Wasserstein Generative Adversarial Networks for the Design of Monte Carlo Simulations By Susan Athey; Guido W. Imbens; Jonas Metzger; Evan M. Munro
  10. Debunking the granular origins of aggregate fluctuations: from real business cycles back to Keynes By Giovanni Dosi; Mauro Napoletano; Andrea Roventini; Tania Treibich
  11. DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News By Xinyi Li; Yinchuan Li; Hongyang Yang; Liuqing Yang; Xiao-Yang Liu
  12. Pynamical: Model and visualize discrete nonlinear dynamical systems, chaos, and fractals By Boeing, Geoff
  13. Is 'First in Family' a Good Indicator for Widening University Participation? By Adamecz-Völgyi, Anna; Henderson, Morag; Shure, Nikki
  14. ResLogit: A residual neural network logit model By Melvin Wong; Bilal Farooq
  15. The Effects of Inequality, Density, and Heterogeneous Residential Preferences on Urban Displacement and Metropolitan Structure: An Agent-Based Model By Boeing, Geoff
  16. Causes and consequences of hysteresis: aggregate demand, productivity and employment By Giovanni Dosi; Marcelo C. Pereira; Andrea Roventini; Maria Enrica Virgillito
  17. Commodity Trade Finance Platform using Distributed Ledger Technology: Token Economics in a Closed Ecosystem using Agent Based Modeling By Wang, Jianfu
  18. A Three-Country Macroeconomic Model for Portugal By Alex Pienkowski
  19. A Hierarchy Model of Income Distribution By Fix, Blair
  20. A New Approach for Quantifying the Costs of Utilizing Regional Trade Agreements By Naoto JINJI; Kazunobu HAYAKAWA; Nuttawut LAKSANAPANYAKUL; Toshiyuki MATSUURA; Taiyo YOSHIMI
  21. The Tax Structure of an Economy in Crisis: Greece 2009-2017 By Chrysa Leventi; Fidel Picos
  22. Monetary Policy, rational confidence, and Neo-Fisherian depressions By Lucio Gobbi; Ronny Mazzocchi; Roberto Tamborini
  23. The Impact of Profit Shifting on Economic Activity and Tax Competition By Alexander D Klemm; Li Liu
  24. On the role of electricity storage in capacity remuneration mechanisms By Fraunholz, Christoph; Keles, Dogan; Fichtner, Wolf
  25. Pension financing and individual retirement account By Arno Baurin; Jean Hindriks

  1. By: Osipov, Vasiliy; Zhukova, Nataly; Miloserdov, Dmitriy
    Abstract: This article discusses the applicability of recurrent neural networks with controlled elements to the problem of forecasting market demand for goods on a four-month horizon. Two variants of forecasting are considered. In the first variant, time series are used to train the neural network, including the real demand values as well as pre-order values for 1, 2 and 3 months ahead. In the second variant, an iterative forecasting method is used: at each step it predicts the demand for the next month, and the training set is supplemented with the values predicted for the previous months. It is shown that the proposed methods can give a sufficiently high result. At the same time, the second approach demonstrates greater potential.
    Keywords: Recurrent Neural Network; Machine Learning; Data Mining; Demand Forecasting
    JEL: C45 L10
    Date: 2019–09–23
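    The iterative scheme described in the second variant is easy to sketch. Below, a simple moving average stands in for the recurrent neural network, and the demand figures are invented for illustration; only the feedback of predictions into the training series follows the abstract.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    return sum(history[-window:]) / window

def iterative_forecast(history, horizon=4, window=3):
    """Predict `horizon` months ahead, feeding each prediction back
    into the series before predicting the next month."""
    series = list(history)
    predictions = []
    for _ in range(horizon):
        next_value = moving_average_forecast(series, window)
        predictions.append(next_value)
        series.append(next_value)  # augment the training set with the prediction
    return predictions

demand = [100, 110, 105, 120, 115, 125]  # hypothetical monthly demand
print(iterative_forecast(demand))  # four-month-ahead forecasts
```

    In the paper the base predictor is a recurrent network rather than a moving average, but the feedback loop is the same.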
  2. By: David Rushing Dewhurst; Yi Li; Alexander Bogdan; Jasmine Geng
    Abstract: Securities markets are quintessential complex adaptive systems in which heterogeneous agents compete in an attempt to maximize returns. Species of trading agents are also subject to evolutionary pressure as entire classes of strategies become obsolete and new classes emerge. Using an agent-based model of interacting heterogeneous agents as a flexible environment that can endogenously model many diverse market conditions, we subject deep neural networks to evolutionary pressure to create dominant trading agents. After analyzing the performance of these agents and noting the emergence of anomalous superdiffusion through the evolutionary process, we construct a method to turn high-fitness agents into trading algorithms. We backtest these trading algorithms on real high-frequency foreign exchange data, demonstrating that elite trading algorithms are consistently profitable in a variety of market conditions---even though these algorithms had never before been exposed to real financial data. These results provide evidence to suggest that developing \textit{ab initio} trading strategies by repeated simulation and evolution in a mechanistic market model may be a practical alternative to explicitly training models with past observed market data.
    Date: 2019–12
  3. By: Liu, Xueying (E.ON Energy Research Center, Future Energy Consumer Needs and Behavior (FCN)); Madlener, Reinhard (E.ON Energy Research Center, Future Energy Consumer Needs and Behavior (FCN))
    Abstract: This paper presents an adapted agent-based model for the diffusion of new aircraft model series. Expanding on the classical economic decision framework, where investment decision-making is entirely based on profitability, our holistic modeling approach takes into account profitability, flexibility, as well as the environmental impact of new aircraft model series in the adoption decision process. Technical parameters such as the range and maximum take-off weight of an aircraft model series, various emissions of the aircraft engine, as well as daily operational data, are used to calibrate the model. In validation, our model produces results that are comparable to data on the market diffusion of an existing aircraft model series, the Boeing 737-500. This result shows the applicability of our model, which can also subsequently be used on aircraft with new generations of technologies. Our simulation shows that a price reduction or a decrease in emissions could lead to more adoption and faster diffusion. Furthermore, our modeling approach demonstrates that a holistic framework to include not only profitability but also flexibility and environmental impact can be helpful when modeling the investment decision-making process.
    Keywords: Transportation economics; Technological diffusion; Agent-based modeling; Aircraft
    JEL: C32 C63 L93 O33 Q53 Q55 R41
    Date: 2019–10–01
  4. By: Muratova, Anna; Islam, Robiul; Mitrofanova, Ekaterina S.; Ignatov, Dmitry I.
    Abstract: Nowadays there is a large amount of demographic data that should be analyzed and interpreted. More useful information can be extracted from accumulated demographic data by applying modern methods of data mining. Two kinds of experiments are considered in this work: 1) generation of additional secondary features from events and evaluation of their influence on accuracy; 2) exploration of feature influence on classification results using SHAP (SHapley Additive exPlanations). An algorithm for creating secondary features is proposed and applied to the dataset. Classifications were made by two methods, SVM and neural networks, and the results were evaluated. The impact of events and features on the classification results was evaluated using SHAP; it was demonstrated how to tune the model to improve accuracy based on the obtained values. Applying a convolutional neural network to sequences of events improved classification accuracy and surpassed the previous best result on the studied demographic dataset.
    Keywords: data mining; demographics; neural networks; classification; SHAP; interpretation
    JEL: C02 C15 I00 J13
    Date: 2019–09–23
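    SHAP attributions are rooted in Shapley values from cooperative game theory. The sketch below computes exact Shapley values by brute force for a hypothetical three-feature linear model (coefficients, inputs, and baseline are invented), which is what SHAP libraries approximate at scale. For a linear model, each feature's attribution recovers its coefficient times its deviation from the baseline.

```python
from itertools import combinations
from math import factorial

def model(x, baseline, features_on):
    # Value function: features outside the coalition are set to the baseline.
    z = [x[i] if i in features_on else baseline[i] for i in range(len(x))]
    return 2 * z[0] + 3 * z[1] + 0 * z[2]  # toy linear "classifier" score

def shapley_values(x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                s = set(subset)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (model(x, baseline, s | {i}) - model(x, baseline, s))
    return phi

x, baseline = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
print(shapley_values(x, baseline))  # recovers the coefficients [2, 3, 0]
```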
  5. By: Timothy Neal (UNSW School of Economics); Michael Keane (UNSW School of Economics)
    Abstract: Predicting the impact of climate change on crop yield is difficult, in part because the production function mapping weather to yield is high dimensional and nonlinear. We compare three approaches to predicting yields: (i) deep neural networks (DNNs), (ii) traditional panel-data models, and (iii) a new panel-data model that allows for unit and time fixed-effects in both intercepts and slopes in the agricultural production function - made feasible by a new estimator developed by Keane and Neal (2020) called MO-OLS. Using U.S. county-level corn yield data from 1950-2015, we show that both DNNs and MO-OLS models outperform traditional panel data models for predicting yield, both in-sample and in a Monte Carlo cross-validation exercise. However, the MO-OLS model substantially outperforms both DNNs and traditional panel-data models in forecasting yield in a 2006-15 holdout sample. We compare predictions of all these models for climate change impacts on yields from 2016 to 2100.
    Keywords: Climate Change, Crop Yield, Panel Data, Machine Learning, Neural Net
    Date: 2020–01
  6. By: Xuan-Hong Dang; Syed Yousaf Shah; Petros Zerfos
    Abstract: Multimodal analysis that uses numerical time series and textual corpora as input data sources is becoming a promising approach, especially in the financial industry. However, the main focus of such analysis has been on achieving high prediction accuracy, while little effort has been spent on the important task of understanding the association between the two data modalities. Performance on the time series hence receives little explanation, even though human-understandable textual information is available. In this work, we address the following problem: given a numerical time series and a general corpus of textual stories collected over the same period, discover in a timely manner a succinct set of textual stories associated with that time series. Towards this goal, we propose a novel multi-modal neural model called MSIN that jointly learns both numerical time series and categorical text articles in order to unearth the association between them. Through multiple steps of data interrelation between the two data modalities, MSIN learns to focus on a small subset of text articles that best align with the performance of the time series. This succinct set is discovered in a timely manner and presented as recommended documents, acting as automated information filtering for the given time series. We empirically evaluate the performance of our model on discovering relevant news articles for two stock time series, from Apple and Google, along with daily news articles collected from Thomson Reuters over seven consecutive years. The experimental results demonstrate that MSIN achieves up to 84.9% and 87.2% recall of the ground truth articles for the two examined time series, respectively, far superior to state-of-the-art algorithms that rely on conventional attention mechanisms in deep learning.
    Date: 2019–12
  7. By: Dollevoet, T.A.B.; Munari, P.; Spliet, R.
    Abstract: We introduce a _p_-step formulation for the capacitated vehicle routing problem (CVRP). The parameter _p_ indicates the length of partial paths corresponding to the used variables. This provides a family of formulations including both the traditional arc-based and path-based formulations. Hence, it is a generalization which unifies arc-based and path-based formulations, while also providing new formulations. We show that the LP bound of the _p_-step formulation is increasing in _p_, although not monotonically. Furthermore, we prove that computing the set partitioning bound is NP-hard. This is a meaningful result in itself, but combined with the _p_-step formulation this also allows us to show that there does not exist a strongest compact formulation for the CVRP, if _P ≠ NP_. While ending the search for a strongest compact formulation, we propose the search for the strongest formulation of the CVRP with a number of variables and constraints limited by a polynomial of fixed degree. We provide new strongest such formulations of degree three and higher by using a corresponding _p_-step formulation. Furthermore, the results of our experiments suggest that there are computational advantages from using the _p_-step formulation, instead of traditional arc-based and path-based formulations.
    Date: 2020–01–01
  8. By: Waddell, Paul; Boeing, Geoff (Northeastern University); Gardner, Max; Porter, Emily
    Abstract: Integrating land use, travel demand, and traffic models represents a gold standard for regional planning, but is rarely achieved in a meaningful way, especially at the scale of disaggregate data. In this report, we present a new pipeline architecture for integrated modeling of urban land use, travel demand, and traffic assignment. Our land use model, UrbanSim, is an open-source microsimulation platform used by metropolitan planning organizations worldwide for modeling the growth and development of cities over long (~30 year) time horizons. UrbanSim is particularly powerful as a scenario analysis tool, enabling planners to compare and contrast the impacts of different policy decisions on long term land use forecasts in a statistically rigorous way. Our travel demand model, ActivitySim, is an agent-based modeling platform that produces synthetic origin--destination travel demand data. Finally, we use a static user equilibrium traffic assignment model based on the Frank-Wolfe algorithm to assign vehicles to specific network paths to make trips between origins and destinations. This traffic assignment model runs in a high-performance computing environment. The resulting congested travel time data can then be fed back into UrbanSim and ActivitySim for the next model run. This technical report introduces this research area, describes this project's achievements so far in developing this integrated pipeline, and presents an upcoming research agenda.
    Date: 2018–03–21
  9. By: Susan Athey; Guido W. Imbens; Jonas Metzger; Evan M. Munro
    Abstract: When researchers develop new econometric methods, it is common practice to compare the performance of the new methods to that of existing methods in Monte Carlo studies. The credibility of such Monte Carlo studies is often limited because of the freedom the researcher has in choosing the design. In recent years, a new class of generative models, termed Generative Adversarial Networks (GANs), has emerged in the machine learning literature; they can be used to systematically generate artificial data that closely mimics real economic datasets, while limiting the researcher's degrees of freedom and optionally satisfying privacy guarantees with respect to the training data. In addition, if an applied researcher is concerned with the performance of a particular statistical method on a specific data set (beyond its theoretical properties in large samples), she may wish to assess the performance, e.g., the coverage rate of confidence intervals or the bias of the estimator, using simulated data that resembles her setting. To illustrate these methods, we apply Wasserstein GANs (WGANs) to compare a number of different estimators for average treatment effects under unconfoundedness in three distinct settings (corresponding to three real data sets) and present a methodology for assessing the robustness of the results. In this example, we find that (i) no one estimator outperforms the others in all three settings, so researchers should tailor their analytic approach to a given setting, and (ii) systematic simulation studies can be helpful for selecting among competing methods in this situation.
    JEL: C15
    Date: 2019–12
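    A minimal example of the kind of Monte Carlo study the paper seeks to discipline: estimating the coverage rate of a nominal 95% confidence interval for a mean under a hand-picked data-generating process. The normal design below is exactly the sort of ad hoc researcher choice the WGAN approach is meant to replace with a simulator fit to real data.

```python
import random
import statistics

def coverage_rate(true_mean=0.0, sd=1.0, n=50, reps=2000, z=1.96, seed=42):
    """Fraction of simulated samples whose 95% CI covers the true mean."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(reps):
        sample = [rng.gauss(true_mean, sd) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - z * se <= true_mean <= m + z * se:
            covered += 1
    return covered / reps

print(coverage_rate())  # should land close to the nominal 0.95
```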
  10. By: Giovanni Dosi (Laboratory of Economics and Management); Mauro Napoletano (Observatoire français des conjonctures économiques); Andrea Roventini (Observatoire français des conjonctures économiques); Tania Treibich (Observatoire français des conjonctures économiques)
    Abstract: In this work we study the granular origins of business cycles and their possible underlying drivers. As shown by Gabaix (Econometrica 79:733–772, 2011), the skewed nature of firm size distributions implies that idiosyncratic (and independent) firm-level shocks may account for a significant portion of aggregate volatility. Yet we question the original view grounded in “supply granularity”, as proxied by productivity growth shocks (in line with the Real Business Cycle framework), and we provide empirical evidence of a “demand granularity”, based on investment growth shocks instead. The role of demand in explaining aggregate fluctuations is further corroborated by means of a macroeconomic Agent-Based Model of the “Schumpeter meeting Keynes” family (Dosi et al., J Econ Dyn Control 52:166–189, 2015). Indeed, the investigation of the possible microfoundations of RBC has led us to the identification of a sort of microfounded Keynesian multiplier.
    Keywords: Business cycles; Granular residual; Granularity hypothesis; Agent-based models; Firm dynamics ; Productivity growth; Investment growth
    JEL: C63 E12 E22 E32 O4
    Date: 2019–03
  11. By: Xinyi Li; Yinchuan Li; Hongyang Yang; Liuqing Yang; Xiao-Yang Liu
    Abstract: Stock price prediction is important for value investments in the stock market. In particular, short-term prediction that exploits financial news articles has shown promise in recent years. In this paper, we propose a novel deep neural network, DP-LSTM, for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through a differential privacy mechanism. First, based on the autoregressive moving average model (ARMA), a sentiment-ARMA is formulated by incorporating the information of financial news articles into the model. Then, an LSTM-based deep neural network is designed, which consists of three components: LSTM, the VADER model, and the differential privacy (DP) mechanism. The proposed DP-LSTM scheme can reduce prediction errors and increase robustness. Extensive experiments on S&P 500 stocks show that (i) the proposed DP-LSTM achieves a 0.32% improvement in mean MPA of the prediction results, and (ii) for the prediction of the market index S&P 500, we achieve up to a 65.79% improvement in MSE.
    Date: 2019–12
  12. By: Boeing, Geoff (Northeastern University)
    Abstract: Pynamical is an educational Python package for introducing the modeling, simulation, and visualization of discrete nonlinear dynamical systems and chaos, focusing on one-dimensional maps (such as the logistic map and the cubic map). Pynamical facilitates defining discrete one-dimensional nonlinear models as Python functions with just-in-time compilation for fast simulation. It comes packaged with the logistic map, the Singer map, and the cubic map predefined. The models may be run with a range of parameter values over a set of time steps, and the resulting numerical output is returned as a pandas DataFrame. Pynamical can then visualize this output in various ways, including with bifurcation diagrams, two-dimensional phase diagrams, three-dimensional phase diagrams, and cobweb plots. These visualizations enable simple qualitative assessments of system behavior including phase transitions, bifurcation points, attractors and limit cycles, basins of attraction, and fractals.
    Date: 2018–06–21
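    The logistic map that Pynamical ships with can be simulated in a few lines of plain Python (parameter values below are illustrative, not taken from the package's defaults):

```python
def logistic_map(x, r):
    """One step of the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    return r * x * (1 - x)

def simulate(r, x0=0.4, steps=100):
    """Iterate the map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_map(xs[-1], r))
    return xs

# At r = 2.5 the map settles to the fixed point 1 - 1/r = 0.6;
# at r = 4.0 it is chaotic and wanders over the unit interval.
print(round(simulate(2.5)[-1], 6))
```

    Pynamical wraps exactly this kind of iteration with just-in-time compilation and then renders the output as bifurcation, phase, and cobweb plots.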
  13. By: Adamecz-Völgyi, Anna (UCL Institute of Education); Henderson, Morag (UCL Institute of Education); Shure, Nikki (University College London)
    Abstract: Universities use 'first in family' or 'first generation' as an indicator to increase the diversity of their student intake, but little is known about whether it is a good indicator of disadvantage. We use nationally representative, longitudinal survey data linked to administrative data from England to provide the first comprehensive analysis of this measure. We employ parametric probability (logit) and non-parametric classification (random forest) models to look at its relative predictive power of university participation and graduation. We find that being first in family is an important barrier to university participation and graduation, over and above other sources of disadvantage. This association seems to operate through the channel of early educational attainment. Our findings indicate that the first in family indicator could be key in efforts to widen participation at universities.
    Keywords: socioeconomic gaps, higher education, widening participation, first in family, first generation, educational mobility, machine learning, predictive models
    JEL: I23 I24 J24
    Date: 2019–12
  14. By: Melvin Wong; Bilal Farooq
    Abstract: We present a Residual Logit (ResLogit) model for seamlessly integrating a data-driven Deep Neural Network (DNN) architecture into the random utility maximization paradigm. DNN models such as the Multi-layer Perceptron (MLP) have shown remarkable success in modelling complex data accurately, but recent studies have consistently demonstrated that their black-box properties are incompatible with discrete choice analysis for the purpose of interpreting decision-making behaviour. Our proposed machine learning choice model departs from the conventional feed-forward MLP framework by using a dynamic residual neural network learning approach. It can be formulated as a Generalized Extreme Value (GEV) random utility maximization model for greater flexibility in capturing unobserved heterogeneity. It can generate choice model structures where the covariance between random utilities is estimated and incorporated into the random error terms, allowing for a richer set of higher-order substitution patterns than a standard logit might achieve. We describe the model estimation process and examine the relative empirical performance and econometric implications in two mode choice experiments. We analyze the behavioural and theoretical properties of our methodology and show how model interpretability is possible while also capturing the underlying complex and unobserved behavioural heterogeneity effects in the residual covariance matrices.
    Date: 2019–12
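    As a toy analogue of the ResLogit idea, not the paper's architecture, the sketch below adds a hypothetical learned "residual" correction to the systematic utilities of a standard multinomial logit before applying the softmax choice probabilities. All utilities and residuals are invented numbers, not estimates.

```python
import math

def softmax(utilities):
    """Multinomial logit choice probabilities (numerically stable)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

systematic = [1.0, 0.5, 0.0]  # e.g. car, transit, walk (made-up values)
residual = [0.0, 0.3, -0.1]   # correction a residual layer might learn
probs = softmax([v + r for v, r in zip(systematic, residual)])
print([round(p, 3) for p in probs])
```

    In ResLogit the residual terms come from trained network layers and also shape the error covariance; here they are fixed constants purely to show where such a correction enters the utility.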
  15. By: Boeing, Geoff (Northeastern University)
    Abstract: Urban displacement - when a household is forced to relocate due to conditions affecting its home or surroundings - often results from rising housing costs, particularly in wealthy, prosperous cities. However, its dynamics are complex and often difficult to understand. This paper presents an agent-based model of urban settlement, agglomeration, displacement, and sprawl. New settlements form around a spatial amenity that draws initial, poor settlers to subsist on the resource. As the settlement grows, subsequent settlers of varying income, skills, and interests are heterogeneously drawn to either the original amenity or to the emerging human agglomeration. As this agglomeration grows and densifies, land values increase, and the initial poor settlers are displaced from the spatial amenity on which they relied. Through path dependence, high-income residents remain clustered around this original amenity for which they have no direct use or interest. This toy model explores these dynamics, demonstrating a simplified mechanism of how urban displacement and gentrification can be sensitive to income inequality, density, and varied preferences for different types of amenities.
    Date: 2018–08–27
  16. By: Giovanni Dosi (Laboratory of Economics and Management); Marcelo C. Pereira (Universidade Estadual de Campinas); Andrea Roventini (Observatoire français des conjonctures économiques); Maria Enrica Virgillito (Scuola Superiore Sant'Anna)
    Abstract: In this work we develop an agent-based model where hysteresis in major macroeconomic variables (e.g., gross domestic product, productivity, unemployment) emerges out of the decentralized interactions of heterogeneous firms and workers. Building upon the “Schumpeter meeting Keynes” family of models (cf. in particular Dosi et al. (2016b, 2017c)), we specify an endogenous process of accumulation of workers’ skills and a state-dependent process of firm entry. Indeed, hysteresis is ubiquitous. However, this is not due to market imperfections, but rather to the very functioning of decentralized economies characterized by coordination externalities and dynamic increasing returns. So, contrary to the insider–outsider hypothesis (Blanchard and Summers, 1986), the model does not support the finding that rigid industrial relations foster hysteretic behavior in aggregate unemployment. On the contrary, this contribution provides evidence that during severe downturns, and thus declining aggregate demand, phenomena like decreasing investment and innovation rates, skills deterioration, and declining entry dynamics are better candidates to explain long-run unemployment spells and reduced output growth. In that light, more rigid labor markets may well dampen hysteretic dynamics by sustaining aggregate demand, thus making the economy more resilient.
    Keywords: Computational techniques; Employment; Institutions
    JEL: E24 E02
    Date: 2018–04
  17. By: Wang, Jianfu
    Abstract: Distributed Ledger Technology (DLT) creates a decentralized system for trust and transaction validation, using executable smart contracts to update information across a distributed database. This type of ecosystem can be applied to Commodity Trade Finance to alleviate the critical issues of information asymmetry and the cost of transacting, which are the leading causes of the Trade Finance Gap (i.e., the lack of supply of capital to meet total trade finance demand). Scaling up such ecosystems with a number of institutional investors and micro, small, and medium enterprises (MSMEs) would be advantageous; however, it brings its own set of challenges, including the stability of the system design. Agent-based modeling (ABM) is a powerful method to assess financial ecosystem dynamics. DLT ecosystems model well under ABM, as the agents present a clearly defined taxonomy. In this study, we use ABM to assess the Aquifer Institute Platform, a DLT-based Commodity Trade Finance system in which a growing number of participating parties is closely related to the circulation of utility tokens and transaction flows. We study the system dynamics of the platform and propose an appropriate setup for different transaction loads.
    Date: 2018–04–02
  18. By: Alex Pienkowski
    Abstract: This paper outlines a simple three-country macroeconomic model designed to focus on the transmission of external shocks to Portugal. Building on the framework developed by Berg et al. (2006), this model differentiates between shocks originating from both inside and outside the euro area, as well as domestic shocks, each of which has different implications for Portugal. This framework is also used to consider the dynamics of the Portuguese economy over recent decades. The model, which is designed to guide forecasts and undertake simulations, can easily be modified for use in other small euro area countries.
    Date: 2019–12–20
  19. By: Fix, Blair (York University)
    Abstract: Based on worldly experience, most people would agree that firms are hierarchically organized, and that pay tends to increase as one moves up the hierarchy. But how this hierarchical structure affects income distribution has not been widely studied. To remedy this situation, this paper presents a new model of income distribution that explores the effects of social hierarchy. This 'hierarchy model' takes the limited available evidence on the structure of firm hierarchies, and generalizes it to create a large-scale simulation of the hierarchical structure of the United States economy. Using this model, I conduct the first quantitative investigation of hierarchy's effect on income distribution. I find that hierarchy plays a dominant role in shaping the tail of US income distribution. The model suggests that hierarchy is responsible for generating the power-law scaling of top incomes. Moreover, I find that hierarchy can be used to unify the study of personal and functional income distribution, as well as to understand historical trends in income inequality.
    Date: 2018–04–08
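    The core mechanism of the hierarchy model can be illustrated with a toy firm: a fixed span of control generates exponentially shrinking headcounts up the hierarchy, while pay grows by a fixed factor per level. The parameters below are illustrative, not the paper's calibrated values, but they already concentrate income at the top.

```python
def hierarchy_incomes(levels=6, span=3, base_pay=1.0, growth=1.5):
    """Incomes in a firm where each level has `span` times fewer people
    than the level below and pay grows by `growth` per level up."""
    incomes = []
    for k in range(levels):  # k = 0 is the bottom level
        headcount = span ** (levels - 1 - k)
        pay = base_pay * growth ** k
        incomes.extend([pay] * headcount)
    return incomes

incomes = hierarchy_incomes()
incomes.sort(reverse=True)
top_1pct = incomes[: max(1, len(incomes) // 100)]
share = sum(top_1pct) / sum(incomes)
print(f"top 1% income share: {share:.1%}")
```

    Even with modest per-level pay growth, the geometric structure gives the top fraction of earners a disproportionate income share, which is the mechanism the paper scales up to the whole US economy.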
  20. By: Naoto JINJI; Kazunobu HAYAKAWA; Nuttawut LAKSANAPANYAKUL; Toshiyuki MATSUURA; Taiyo YOSHIMI
    Abstract: This study proposes a new approach for quantifying two kinds of costs related to the utilization of regional trade agreements (RTAs). The first, which we call the “procurement adjustment cost,” represents the cost involved in meeting rules of origin through the adjustment of procurement sources. The second is the additional fixed costs required to utilize RTAs, including document preparation costs for the certification of origin. The proposed approach makes it possible to compute these two costs separately using product-level data. It is built on a model of international trade where heterogeneous exporters decide which tariff scheme to use. Applying our approach to Thailand’s imports from China, our estimates suggest that procurement adjustment costs to comply with RTA rules of origin at the median are equivalent to 4% of per-unit production costs. In addition, RTA utilization requires an additional 27% of fixed costs. Furthermore, simulation analysis shows that a reduction of the additional fixed costs by half would raise the RTA utilization rate by 13 percentage points, while the complete elimination of procurement adjustment costs would raise the RTA utilization rate by 32 percentage points.
    Keywords: Regional trade agreement; Preference utilization; Cost estimation
    JEL: F15 F53
    Date: 2020–01
  21. By: Chrysa Leventi (Greek Council of Economic Advisors); Fidel Picos (European Commission - JRC)
    Abstract: The 2010 Economic Adjustment Programme initiated a period of strict international supervision with respect to tax policy in Greece. The country implemented a large-scale fiscal consolidation package, aiming to reduce its public deficit below 3% of GDP by 2016. Since the beginning of the crisis, the provisions of the ‘Greek Programme’ have been revised several times, and personal income tax reform has figured prominently on almost every revision agenda. This paper aims to provide an assessment of the effects of the four major structural reforms that took place in Greece during and in the aftermath of the economic crisis; using microsimulation techniques, we simulate the (ceteris paribus) first-order impact of these reforms on the distribution of incomes, the state budget and work incentives, while also trying to identify the main gainers and losers of these policy changes. Our results suggest that all reforms had a revenue-increasing rationale, with that of 2011 being designed to have the largest fiscal gains. The latter also strengthened redistribution and achieved the highest decrease in income inequality. The 2013 reform went in the opposite direction by reducing both the redistributive strength and the progressive nature of the Greek tax system. The striking discrepancies in the ways in which different household categories have been affected by the four reforms call for a deeper investigation of the possibility of moving towards more uniform personal income tax rules.
    Keywords: EUROMOD, personal income tax, reform, Greece, redistribution, work incentives
    JEL: H24 H23 I32
    Date: 2019–12
  22. By: Lucio Gobbi; Ronny Mazzocchi; Roberto Tamborini
    Abstract: We examine the so-called "Neo-Fisherian" claim that, at the zero lower bound (ZLB) of the monetary policy interest rate, with the economy in a depression equilibrium, the policy rate should be raised consistently with the Fisher equation in order to restore the desired inflation rate. This claim has been questioned on the grounds that the Fisher equation cannot be used mechanically to peg long-run inflation expectations. It is necessary to examine how inflation expectations are formed in response to, and interact with, policy actions and the evolution of the economy. Hence we study a New Keynesian economy where agents' inflation expectations are based on their correct understanding of the data-generating process and on their probabilistic confidence in the central bank's ability to keep inflation on target, driven by the observed state of the economy. We find that the Neo-Fisherian claim is a theoretical possibility that depends on the interplay of a set of parameters and on very low levels of agents' confidence. Yet, on the basis of simulations of the model, this possibility appears remote for most commonly found empirical values of the relevant parameters. Moreover, the Neo-Fisherian policy-rate peg is not sustained by the expectations formation process.
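    For readers less familiar with the terminology: the Fisher equation invoked by the Neo-Fisherian argument is conventionally written as

    \[ i_t = r_t + \mathbb{E}_t[\pi_{t+1}], \]

    where $i_t$ is the nominal policy rate, $r_t$ the real interest rate and $\mathbb{E}_t[\pi_{t+1}]$ expected inflation. The Neo-Fisherian reading is that, with $r_t$ pinned down in the long run, a permanently higher policy rate $i_t$ must eventually be matched by higher inflation.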
    Keywords: conventional monetary policy, Neo-Fisherian theory, formation of inflation expectations, monetary policy at the zero lower bound
    JEL: D84 E31 E52
    Date: 2019
  23. By: Alexander D Klemm; Li Liu
    Abstract: A growing empirical literature has documented significant profit shifting activities by multinationals. This paper looks at the impact of such profit shifting on real activity and tax competition. Real activity can be affected as profit shifting changes—and theoretically most likely reduces—the cost of capital. Tax competition, even over real capital, is affected, because a permissive attitude toward profit shifting can be seen as a selective tax reduction for multinationals. Tightening profit shifting rules in turn can affect tax competition through the main rate. This paper discusses these issues theoretically and with the help of a simulation to assess the impact of profit shifting on investment, revenues, and government behavior. Using the theoretical framework, it also provides a brief overview of the related empirical literature.
    Date: 2019–12–20
  24. By: Fraunholz, Christoph; Keles, Dogan; Fichtner, Wolf
    Abstract: In electricity markets around the world, the substantial increase in intermittent renewable electricity generation has intensified concerns about generation adequacy, ultimately driving the implementation of capacity remuneration mechanisms. Although formally technology-neutral, these mechanisms often contain substantial barriers for non-conventional capacity such as electricity storage. In this article, we provide a rigorous theoretical discussion of design parameters and show that the concrete design of a capacity remuneration mechanism always creates a bias towards one technology or another. In particular, we identify the bundling of capacity auctions with call options and the definition of the storage capacity credit as essential drivers affecting both the future technology mix and generation adequacy. To illustrate and confirm our theoretical findings, we apply an agent-based electricity market model and run a number of simulations. Our results show that electricity storage has a capacity value and should therefore be allowed to participate in any capacity remuneration mechanism. Moreover, we find that implementing a capacity remuneration mechanism with call options and a strike price increases the competitiveness of storage relative to conventional power plants. However, determining the amount of firm capacity an electricity storage unit can provide remains a challenging task.
    Date: 2019
  25. By: Arno Baurin (UNIVERSITE CATHOLIQUE DE LOUVAIN, Institut de Recherches Economiques et Sociales (IRES)); Jean Hindriks (UNIVERSITE CATHOLIQUE DE LOUVAIN, Center for Operations Research and Econometrics (CORE))
    Abstract: In this article, we analyze Belgian pension financing in retrospect for the period 1995-2017 and then provide a prospective analysis based on the demographic and economic projections of the Federal Planning Bureau. In the retrospective part, we point out the growing importance of alternative financing relative to social security contributions. Decomposing public pension growth over the last decade into the average pension and the number of retirees shows that three quarters of the growth is due to the increase in the average pension. In the prospective part, we simulate the contributions and pension benefits required to balance the budget under different rules: Defined Contribution, Defined Benefit and the Musgrave rule (keeping constant the ratio of pension benefits to wages net of contributions). We then simulate pension adjustment via the "individual retirement account" (IRA) as proposed in Devolder (2019) and Devolder & Hindriks (2019). Under the IRA, the adjustment variables are the accrual rate (which determines new pension claims) and the indexation rate (which determines past pension claims). Combining these adjustment variables, our simulations show that it is possible to protect past pension claims and ensure budget balance on a yearly basis. We propose an adjustment rule that equates, year by year, the replacement rate across retirees of different ages.
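    As a point of reference, the Musgrave rule described above (our notation, following the parenthetical definition in the abstract) fixes the ratio of pension benefits to wages net of contributions:

    \[ \frac{b_t}{w_t(1-\tau_t)} = \text{constant}, \]

    where $b_t$ denotes the average pension benefit, $w_t$ the gross wage and $\tau_t$ the contribution rate, so that demographic or economic shocks are shared between contributors and retirees rather than borne by one group alone.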
    Keywords: social security, pension, retirement, ageing
    JEL: H55 J11 J14 J26
    Date: 2020–01–09

This nep-cmp issue is ©2020 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.