nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒09‒09
29 papers chosen by



  1. The finer points of model comparison in machine learning: forecasting based on Russian banks’ data By Denis Shibitov; Mariam Mamedli
  2. Reinforcement Learning: Prediction, Control and Value Function Approximation By Haoqian Li; Thomas Lau
  3. Predicting systemic financial crises with recurrent neural networks By Tölö, Eero
  4. Decision-facilitating information in hidden-action setups: An agent-based approach By Stephan Leitner; Friederike Wall
  5. Rethinking travel behavior modeling representations through embeddings By Francisco C. Pereira
  6. Stock Price Forecasting and Hypothesis Testing Using Neural Networks By Kerda Varaku
  7. Losing Preferential Access to Third Countries After Brexit - What is at Stake? By Freund, Florian; Pelikan, Janine; Banse, Martin
  8. Agricultural Loan Delinquency Prediction Using Machine Learning Methods By Chen, Jian; Katchova, Ani
  9. Fourier transform MCMC, heavy tailed distributions and geometric ergodicity By Denis Belomestny; Leonid Iosipoi
  10. The impact of the Grand Paris Express on the European regions: a RHOMOLO analysis By Francesco Di Comite; Giovanni Mandras; Stylianos Sakkas
  11. An algorithm for construction of a portfolio with a fundamental criterion By Pawel Kliber; Anna Rutkowska-Ziarko
  12. QCNN: Quantile Convolutional Neural Network By Gábor Petneházi
  13. Impact of Tighter Controls on Japanese Chemical Exports to Korea By Nobuhiro Hosoe
  14. Subsampling Sequential Monte Carlo for Static Bayesian Models By Gunawan, David; Dang, Khue-Dung; Quiroz, Matias; Kohn, Robert; Tran, Minh-Ngoc
  15. Does the Estimation of the Propensity Score by Machine Learning Improve Matching Estimation? The Case of Germany's Programmes for Long Term Unemployed By Goller, Daniel; Lechner, Michael; Moczall, Andreas; Wolff, Joachim
  16. Predicting Returns With Text Data By Zheng Tracy Ke; Bryan T. Kelly; Dacheng Xiu
  17. Raising the Overtime Premium and Reducing the Standard Workweek: Short-Run Impacts on U.S. Manufacturing By Sagyndykova, Galiya; Oaxaca, Ronald L.
  18. How Do Foreclosures Exacerbate Housing Downturns? By Adam M. Guren; Timothy J. McQuade
  19. Are Bitcoin prices predictable? Evidence from machine learning techniques using technical indicators By Samuel Asante Gyamerah
  20. Predict Food Security with Machine Learning: Application in Eastern Africa By Zhou, Yujun; Baylis, Kathy
  21. The emergence and consolidation of microsimulation methods in France By François Legendre
  22. Mapping Firms' Locations in Technological Space: A Topological Analysis of Patent Statistics By Emerson G. Escolar; Yasuaki Hiraoka; Mitsuru Igami; Yasin Ozcan
  23. Optimization of age-structured bioeconomic model: recruitment, weight gain and environmental effects By Ni, Yuanming
  24. Predicting Consumer Default: A Deep Learning Approach By Stefania Albanesi; Domonkos F. Vamossy
  25. The Brexit Vote, Productivity Growth and Macroeconomic Adjustments in the United Kingdom By Ben Broadbent; Federico Di Pace; Thomas Drechsel; Richard Harrison; Silvana Tenreyro
  26. Greed is good: from super-harvest to recovery in a stochastic predator-prey system By Ni, Yuanming; Sandal, Leif K.; Kvamsdal, Sturla F.; Poudel, Diwakar
  27. An introduction to flexible methods for policy evaluation By Huber, Martin
  28. Monetary Policy Rules and Macroeconomic Stability By Jayawickrema, Vishuddhi
  29. Reference Points for Retirement Behavior: Evidence from German Pension Discontinuities By Arthur Seibold

  1. By: Denis Shibitov (Bank of Russia, Russian Federation); Mariam Mamedli (Bank of Russia, Russian Federation)
    Abstract: We evaluate the ability of machine learning models to forecast bank license withdrawal and violations of statutory capital and liquidity requirements (capital adequacy ratio N1.0, common equity Tier 1 adequacy ratio N1.1, Tier 1 capital adequacy ratio N1.2, N2 instant and N3 current liquidity). On the basis of 35 series from the accounting reports of Russian banks, we form two data sets of 69 and 721 variables and use them to build random forest and gradient boosting models, along with neural networks and a stacking model, for different forecasting horizons (1, 2, 3, 6, 9 months). Based on data from February 2014 to October 2018, we show that these models with fine-tuned architectures can successfully compete with the logistic regression usually applied to this task. Stacking and random forest generally have the best forecasting performance compared to the other models. We evaluate the models with commonly used performance metrics (ROC-AUC and F1) and show that, depending on the task, the F1-score can better capture a model’s performance. Comparing the results across metrics and types of cross-validation illustrates the importance of choosing an appropriate performance metric and a cross-validation procedure that accounts for the characteristics of the data set and the task under consideration. The developed approach demonstrates the advantages of non-linear methods for bank regulation tasks and provides guidelines for applying machine learning algorithms to them.
    Keywords: machine learning, random forest, neural networks, gradient boosting, forecasting, bank supervision
    JEL: C53 C52 C5
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:bkr:wpaper:wps43&r=all
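    A minimal sketch of the metric comparison discussed above: ROC-AUC and F1 evaluated side by side on an imbalanced binary task with scikit-learn. The data and models are placeholders, not the paper's Russian bank data.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.metrics import f1_score, roc_auc_score
      from sklearn.model_selection import train_test_split

      # Roughly 5% positives, mimicking the rarity of bank failures (assumption).
      X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      for model in (RandomForestClassifier(random_state=0),
                    GradientBoostingClassifier(random_state=0)):
          proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
          # The two metrics can rank the same models differently on rare-event data.
          print(type(model).__name__,
                "ROC-AUC:", round(roc_auc_score(y_te, proba), 3),
                "F1:", round(f1_score(y_te, proba > 0.5), 3))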
  2. By: Haoqian Li; Thomas Lau
    Abstract: With the increasing power of computers and the rapid development of self-learning methodologies such as machine learning and artificial intelligence, the problem of constructing automatic financial trading systems (FTSs) has become an increasingly attractive research topic. An intuitive way of developing such a trading algorithm is to use Reinforcement Learning (RL) algorithms, which do not require model building. In this paper, we dive into RL algorithms, illustrate the definitions of the reward function, actions and policy functions in detail, and introduce algorithms that could be applied to FTSs.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.10771&r=all
  3. By: Tölö, Eero
    Abstract: We consider predicting systemic financial crises one to five years ahead using recurrent neural networks. The prediction performance is evaluated with the Jorda-Schularick-Taylor dataset, which includes the crisis dates and relevant macroeconomic series of 17 countries over the period 1870-2016. Previous literature has found simple neural network architectures to be useful in predicting systemic financial crises. We show that such predictions can be greatly improved by making use of recurrent neural network architectures, especially suited for dealing with time series input. The results remain robust after extensive sensitivity analysis.
    JEL: G21 C45 C52
    Date: 2019–08–27
    URL: http://d.repec.org/n?u=RePEc:bof:bofrdp:2019_014&r=all
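    A bare-bones recurrent classifier of the kind the paper evaluates, sketched with Keras; the window length, indicator count and LSTM size are illustrative assumptions, and the dummy arrays merely stand in for the Jorda-Schularick-Taylor panel.

      import numpy as np
      import tensorflow as tf

      T, F = 20, 5  # 20 periods of history, 5 macro indicators (assumed)
      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(T, F)),
          tf.keras.layers.LSTM(16),                       # recurrent layer over the window
          tf.keras.layers.Dense(1, activation="sigmoid"), # P(crisis within horizon)
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC()])

      X = np.random.randn(200, T, F).astype("float32")    # placeholder macro panels
      y = np.random.randint(0, 2, size=(200, 1))          # placeholder crisis labels
      model.fit(X, y, epochs=2, verbose=0)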
  4. By: Stephan Leitner; Friederike Wall
    Abstract: The hidden-action model captures a fundamental problem of principal-agent theory and provides an optimal sharing rule when only the outcome, but not the effort, can be observed. However, the hidden-action model builds on various explicit and implicit assumptions about the information available to the contracting parties. This paper relaxes key assumptions regarding the availability of information included in the hidden-action model in order to study whether, and if so how fast, the optimal sharing rule is achieved, and how this is affected by the various types of information employed in the principal-agent relation. Our analysis particularly focuses on information about the environment and about the actions feasible for the agent to carry out the task. For this, we follow an approach that transfers closed-form mathematical models into agent-based computational models. The results show that the extent of information about feasible options for carrying out a task has an impact on performance only if decision-makers are well informed about the environment, and that the decision whether to explore or exploit when searching for new feasible options affects performance only in specific situations. Having good information about the environment, by contrast, appears to be crucial in almost all situations.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.07998&r=all
  5. By: Francisco C. Pereira
    Abstract: This paper introduces the concept of travel behavior embeddings, a method for re-representing discrete variables that are typically used in travel demand modeling, such as mode, trip purpose, education level, family type or occupation. This re-representation process essentially maps those variables into a latent space called the embedding space. The benefit of this is that such spaces allow for richer nuances than the typical transformations used for categorical variables (e.g. dummy encoding, contrast encoding, principal components analysis). While the use of latent variable representations is not new per se in travel demand modeling, the idea presented here brings several innovations: it is an entirely data-driven algorithm; it is informative and consistent, since the latent space can be visualized and interpreted based on distances between different categories; it preserves the interpretability of coefficients, despite being based on neural network principles; and it is transferable, in that embeddings learned from one dataset can be reused for others, as long as travel behavior remains consistent across the datasets. The idea is strongly inspired by natural language processing techniques, namely the word2vec algorithm, which underlies recent developments such as automatic translation and next-word prediction. Our method is demonstrated using a mode choice model, and shows improvements of up to 60% with respect to initial likelihood, and up to 20% with respect to the likelihood of the corresponding traditional model (i.e. using dummy variables) in out-of-sample evaluation. We provide a new Python package, called PyTre (PYthon TRavel Embeddings), that others can straightforwardly use to replicate our results or improve their own models. Our experiments are themselves based on an open dataset (swissmetro).
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.00154&r=all
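    The core idea, an embedding layer replacing dummy coding for a categorical travel variable, can be sketched in a few lines of Keras. This is a generic illustration with assumed category counts and dimensions, not the PyTre package itself.

      import tensorflow as tf

      n_purposes, emb_dim = 10, 3  # 10 trip-purpose categories, 3-d latent space (assumed)
      purpose_in = tf.keras.layers.Input(shape=(1,), dtype="int32")
      emb = tf.keras.layers.Embedding(input_dim=n_purposes, output_dim=emb_dim)(purpose_in)
      emb = tf.keras.layers.Flatten()(emb)
      out = tf.keras.layers.Dense(1, activation="sigmoid")(emb)  # stand-in choice head
      model = tf.keras.Model(purpose_in, out)
      model.compile(optimizer="adam", loss="binary_crossentropy")

      # After training, the learned vectors can be visualized, so that distances
      # between categories become interpretable, and reused in other models:
      vectors = model.layers[1].get_weights()[0]  # shape (n_purposes, emb_dim)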
  6. By: Kerda Varaku
    Abstract: In this work we use Recurrent Neural Networks and Multilayer Perceptrons to predict NYSE, NASDAQ and AMEX stock prices from historical data. We experiment with different architectures and compare data normalization techniques. Then, we leverage those findings to question the efficient-market hypothesis through a formal statistical test.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.11212&r=all
  7. By: Freund, Florian; Pelikan, Janine; Banse, Martin
    Abstract: This article takes a closer look at the pending question of how the UK might be affected by losing preferential access to Third Countries in the wake of Brexit. Although the formal date of divorce is drawing closer, the possibility of losing these beneficial trade terms is not very present in the public debate. This is puzzling: as an EU member the UK has 40 trade agreements with over 70 non-European countries, covering about 15% of its trade, but legally those contracts are only valid for EU members, and leaving the EU while retaining the status quo enshrined in the trade agreements would contradict the MFN principle. Simulations of a ‘hard’ and a ‘soft’ Brexit scenario with a CGE model reveal that the additional loss in GDP due to these changing trade relations with Third Countries is in the range of 2.5% to 7.8% of the total loss. Since most of the loss is associated with a changing trade environment with EFTA and Turkey, the UK, if it aims to continue these deals, should focus its negotiation resources on these regions first. On the other hand, the EU's losses from Brexit would be lower if the UK and Third Countries impose new tariffs on their trade flows, since this would redirect trade flows toward the EU.
    Keywords: Institutional and Behavioral Economics, International Relations/Trade
    Date: 2019–08–26
    URL: http://d.repec.org/n?u=RePEc:ags:gewi19:292298&r=all
  8. By: Chen, Jian; Katchova, Ani
    Keywords: Agricultural Finance
    Date: 2019–06–25
    URL: http://d.repec.org/n?u=RePEc:ags:aaea19:290745&r=all
  9. By: Denis Belomestny; Leonid Iosipoi
    Abstract: Markov Chain Monte Carlo (MCMC) methods have become increasingly popular in applied mathematics as a tool for numerical integration with respect to complex and high-dimensional distributions. However, applying MCMC methods to heavy-tailed distributions and distributions with analytically intractable densities turns out to be rather problematic. In this paper, we propose a novel approach to the use of MCMC algorithms for distributions with analytically known Fourier transforms and, in particular, heavy-tailed distributions. The main idea of the proposed approach is to use MCMC methods in the Fourier domain to sample from a density proportional to the absolute value of the underlying characteristic function. A subsequent application of Parseval's formula leads to an efficient algorithm for computing integrals with respect to the underlying density. We show that the resulting Markov chain in the Fourier domain may be geometrically ergodic even when the original distribution is heavy-tailed. We illustrate our approach with several numerical examples, including multivariate elliptically contoured stable distributions.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.00698&r=all
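    The Fourier-domain idea can be illustrated with a toy random-walk Metropolis chain targeting a density proportional to |φ(u)| for a symmetric α-stable law, where φ(u) = exp(-|u|^α). The point of the sketch is that this target is light-tailed even though the original distribution is heavy-tailed; the paper's full Parseval-based estimator is not reproduced here.

      import numpy as np

      alpha = 1.5
      def log_target(u):            # log |phi(u)| for a symmetric alpha-stable law
          return -np.abs(u) ** alpha

      rng = np.random.default_rng(0)
      u, chain = 0.0, []
      for _ in range(10_000):
          prop = u + rng.normal()                      # random-walk proposal
          if np.log(rng.uniform()) < log_target(prop) - log_target(u):
              u = prop                                 # Metropolis accept/reject
          chain.append(u)
      # Integrals against the original heavy-tailed density are then recovered
      # from the Fourier-domain draws via Parseval's formula.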
  10. By: Francesco Di Comite (European Council); Giovanni Mandras (European Commission - JRC); Stylianos Sakkas (European Commission - JRC)
    Abstract: This technical report illustrates a simulation performed to assess the likely economic impact of the Grand Paris Express investments on the Île-de-France and the other European Union regions, under the working assumption of a combined 1% increase in labour productivity due to better matching between skill supply and demand and a 1% increase in accessibility due to the project. Our simulations suggest an overall medium-term positive GDP impact for the EU as a whole (0.18%), for France (0.79%) and for Île-de-France (2.61%).
    Keywords: rhomolo, region, growth, spatial general equilibrium model, Grand Paris, investment, labour productivity, transportation cost
    JEL: C67 C68 R13 R58
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:ipt:termod:201908&r=all
  11. By: Pawel Kliber (Poznan University of Economics); Anna Rutkowska-Ziarko (University of Warmia and Mazury)
    Abstract: Classical models for investment portfolio construction do not take into account the fundamental values of the companies considered. Our approach extends portfolio choice by adding this dimension to the classical criteria of profitability and risk. It is assumed that an investor selects stocks according to their attractiveness, measured by the fundamental values of the companies. Portfolios are thus assessed according to three criteria: profitability, risk (measured by the variance of returns) and fundamental value (measured by selected indicators). In this article we use the earnings-to-price ratio as the measure of a company's fundamental value. We present an algorithm for constructing portfolios with a fundamental criterion, based on analytical solutions to the corresponding optimization problems: minimizing variance subject to constraints on expected return and on the attractiveness of the investment, measured by indicators of the fundamental values of the companies in the portfolio. We also present empirical examples of computing efficient portfolios of stocks listed on the Warsaw Stock Exchange.
    Keywords: portfolio analysis, fundamental value, multicriterial choice, fundamental analysis
    JEL: C61 C63 G11
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:sek:iefpro:8911300&r=all
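    A minimal sketch of the three-criteria problem, minimizing variance subject to floors on expected return and on the portfolio earnings-to-price ratio, using SciPy. All numbers are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      mu = np.array([0.08, 0.10, 0.12])   # expected returns (assumed)
      ep = np.array([0.06, 0.05, 0.09])   # earnings-to-price ratios (assumed)
      S = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.05, 0.01],
                    [0.00, 0.01, 0.09]])  # covariance of returns (assumed)
      r_min, f_min = 0.10, 0.06           # return and fundamental floors

      res = minimize(
          lambda w: w @ S @ w,            # portfolio variance
          x0=np.ones(3) / 3,
          bounds=[(0, 1)] * 3,            # long-only
          constraints=[{"type": "eq",   "fun": lambda w: w.sum() - 1},
                       {"type": "ineq", "fun": lambda w: w @ mu - r_min},
                       {"type": "ineq", "fun": lambda w: w @ ep - f_min}])
      print(res.x.round(3))               # weights meeting all three criteria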
  12. By: Gábor Petneházi
    Abstract: A dilated causal one-dimensional convolutional neural network architecture is proposed for quantile regression. The model can forecast any quantile, and it can be trained jointly on multiple similar time series. An application to Value-at-Risk forecasting shows that QCNN outperforms linear quantile regression and constant quantile estimates.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.07978&r=all
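    A sketch of this architecture class in Keras: stacked dilated causal convolutions trained with the pinball (quantile) loss. Filter counts, window length and the target quantile are assumptions, not the paper's exact configuration.

      import tensorflow as tf

      def pinball_loss(q):
          def loss(y_true, y_pred):
              e = y_true - y_pred
              return tf.reduce_mean(tf.maximum(q * e, (q - 1.0) * e))
          return loss

      T = 64  # input window length (assumed)
      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(T, 1)),
          tf.keras.layers.Conv1D(8, 2, padding="causal", dilation_rate=1, activation="relu"),
          tf.keras.layers.Conv1D(8, 2, padding="causal", dilation_rate=2, activation="relu"),
          tf.keras.layers.Conv1D(8, 2, padding="causal", dilation_rate=4, activation="relu"),
          tf.keras.layers.GlobalAveragePooling1D(),
          tf.keras.layers.Dense(1),  # forecast of the q-th quantile
      ])
      model.compile(optimizer="adam", loss=pinball_loss(q=0.01))  # e.g. 1% VaR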
  13. By: Nobuhiro Hosoe (National Graduate Institute for Policy Studies, Tokyo, Japan)
    Abstract: The Japanese Ministry of Economy, Trade and Industry recently announced that it will terminate preferential treatment in the licensing of specific chemical products for export to South Korea. This announcement evoked concern that the impact on the Korean semiconductor and electronics industries, which rely heavily on imports from Japan, might cause a serious supply shortage in the global semiconductor market. To assess the economic impact of tighter export controls, this study simulates: (a) imposition of an export tax on chemical products; and (b) a productivity decline in the electronics sector in Korea, using a world trade computable general equilibrium model. The simulation results indicate that such a productivity decline would cause only slight harm to the Japanese and world economies, aside from the electronics sector in Korea, and that an export tax would significantly distort trade patterns and undermine the welfare of Japan and Korea by similar magnitudes. However, the welfare loss normalized by GDP size would be far smaller in Japan than in Korea.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:ngi:dpaper:19-17&r=all
  14. By: Gunawan, David (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)); Dang, Khue-Dung (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)); Quiroz, Matias (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) and Research Division); Kohn, Robert (School of Economics, UNSW Business School, University of New South Wales, ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)); Tran, Minh-Ngoc (ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) and Discipline of Business Analytics, University)
    Abstract: We show how to speed up Sequential Monte Carlo (SMC) for Bayesian inference in large data problems by data subsampling. SMC sequentially updates a cloud of particles through a sequence of distributions, beginning with a distribution that is easy to sample from, such as the prior, and ending with the posterior distribution. Each update of the particle cloud consists of three steps: reweighting, resampling, and moving. In the move step, each particle is moved using a Markov kernel; this is typically the most computationally expensive part, particularly when the dataset is large. It is crucial to have an efficient move step to ensure particle diversity. Our article makes two important contributions. First, in order to speed up the SMC computation, we use an approximately unbiased and efficient annealed likelihood estimator based on data subsampling. The subsampling approach is more memory efficient than the corresponding full data SMC, which is an advantage for parallel computation. Second, we use a Metropolis within Gibbs kernel with two conditional updates. A Hamiltonian Monte Carlo update makes distant moves for the model parameters, and a block pseudo-marginal proposal is used for the particles corresponding to the auxiliary variables for the data subsampling. We demonstrate the usefulness of the methodology for estimating three generalized linear models and a generalized additive model with large datasets.
    Keywords: Hamiltonian Monte Carlo; Large datasets; Likelihood annealing
    JEL: C11 C15
    Date: 2019–04–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0371&r=all
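    The reweight/resample/move cycle described above, as a generic likelihood-annealing skeleton in Python. The paper's subsampled likelihood estimator and HMC-within-Gibbs kernel are abstracted into the log_like and move callables, which the caller must supply.

      import numpy as np

      def smc(prior_sample, log_like, move, n=1000, temps=np.linspace(0, 1, 21)):
          """prior_sample(n) -> (n, d) array; log_like(theta) -> (n,); move(theta, t) -> theta."""
          rng = np.random.default_rng(0)
          theta, logw = prior_sample(n), np.zeros(n)   # start from the prior
          for t0, t1 in zip(temps[:-1], temps[1:]):
              logw += (t1 - t0) * log_like(theta)      # reweight: bridge two temperatures
              w = np.exp(logw - logw.max()); w /= w.sum()
              idx = rng.choice(n, n, p=w)              # multinomial resampling
              theta, logw = theta[idx], np.zeros(n)
              theta = move(theta, t1)                  # MCMC move at temperature t1
          return theta                                 # approximate posterior draws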
  15. By: Goller, Daniel (University of St. Gallen); Lechner, Michael (University of St. Gallen); Moczall, Andreas (Institute for Employment Research (IAB), Nuremberg); Wolff, Joachim (Institute for Employment Research (IAB), Nuremberg)
    Abstract: Matching-type estimators using the propensity score are the major workhorse in active labour market policy evaluation. This work investigates whether machine learning algorithms for estimating the propensity score lead to more credible estimation of average treatment effects on the treated in a radius matching framework. Considering two popular methods, the results are ambiguous: we find that using LASSO-based logit models to estimate the propensity score delivers more credible results than conventional methods in small and medium-sized high-dimensional datasets. However, using Random Forests to estimate the propensity score may lead to a deterioration of performance in situations with a low treatment share. The application reveals a positive effect of the training programme on days in employment for the long-term unemployed. While the choice of the "first stage" is highly relevant in settings with a low number of observations and few treated, machine learning and conventional estimation become more similar in larger samples and with higher treatment shares.
    Keywords: programme evaluation, active labour market policy, causal machine learning, treatment effects, radius matching, propensity score
    JEL: J68 C21
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12526&r=all
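    A bare-bones version of the "first stage" plus matching step with scikit-learn: an L1-penalized logit for the propensity score, followed by a naive radius-matching estimate of the effect on the treated. The refinements of the paper's radius matching estimator are omitted.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def radius_matching_atet(X, d, y, radius=0.05):
          # LASSO-type logit propensity score (penalty strength C is an assumption)
          ps = LogisticRegression(penalty="l1", solver="liblinear", C=1.0) \
              .fit(X, d).predict_proba(X)[:, 1]
          effects = []
          for i in np.where(d == 1)[0]:
              ctrl = np.where((d == 0) & (np.abs(ps - ps[i]) <= radius))[0]
              if len(ctrl):                        # matched controls within the radius
                  effects.append(y[i] - y[ctrl].mean())
          return float(np.mean(effects))           # ATET over matched treated units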
  16. By: Zheng Tracy Ke; Bryan T. Kelly; Dacheng Xiu
    Abstract: We introduce a new text-mining methodology that extracts sentiment information from news articles to predict asset returns. Unlike more common sentiment scores used for stock return prediction (e.g., those sold by commercial vendors or built with dictionary-based methods), our supervised learning framework constructs a sentiment score that is specifically adapted to the problem of return prediction. Our method proceeds in three steps: 1) isolating a list of sentiment terms via predictive screening, 2) assigning sentiment weights to these words via topic modeling, and 3) aggregating terms into an article-level sentiment score via penalized likelihood. We derive theoretical guarantees on the accuracy of estimates from our model with minimal assumptions. In our empirical analysis, we text-mine one of the most actively monitored streams of news articles in the financial system—the Dow Jones Newswires—and show that our supervised sentiment model excels at extracting return-predictive signals in this context.
    JEL: C53 C58 G10 G11 G12 G14 G17
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26186&r=all
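    Step 1 of the pipeline, predictive screening, can be sketched in a few lines: rank words by how often their occurrence coincides with a positive return and keep the extremes. Thresholds here are invented, and the topic-model weighting and penalized-likelihood aggregation of steps 2 and 3 are not shown.

      import numpy as np

      def screen_terms(doc_term, returns, k=50, min_count=20):
          """doc_term: (n_docs, n_words) 0/1 occurrence matrix; returns: (n_docs,)."""
          pos = (returns > 0).astype(float)
          counts = doc_term.sum(axis=0)
          freq = doc_term.T @ pos / np.maximum(counts, 1)  # P(return > 0 | word appears)
          freq[counts < min_count] = 0.5                   # neutralize rare words
          order = np.argsort(freq)
          return order[-k:], order[:k]                     # positive / negative term lists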
  17. By: Sagyndykova, Galiya (Nazarbayev University); Oaxaca, Ronald L. (University of Arizona)
    Abstract: A nine-factor input model is developed to estimate the monthly demand for employment, capital, and weekly hours per worker (the workweek) in U.S. manufacturing. The labor inputs correspond to production and non-production workers disaggregated by overtime and non-overtime employment. Policy simulations are conducted to examine the short-run effects on the monthly growth rates of employment, labor earnings, capital usage, and the workweek from either a) raising the overtime premium to double-time, or b) reducing the standard workweek to 35 hours. Although the growth rate policy effects are heterogeneous across disaggregated labor input categories, on average both policy changes exhibit negative effects on the growth rates of industry-wide employment, earnings, and non-labor input usage. The growth rate of the workweek is virtually unaffected by raising the overtime premium but is negatively impacted by reducing the standard workweek.
    Keywords: overtime, employment, workweek
    JEL: J23 J88
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12557&r=all
  18. By: Adam M. Guren; Timothy J. McQuade
    Abstract: This paper uses a structural model to show that foreclosures played a crucial role in exacerbating the recent housing bust and to analyze foreclosure mitigation policy. We consider a dynamic search model in which foreclosures freeze the market for non-foreclosures and reduce price and sales volume by eroding lender equity, destroying the credit of potential buyers, and making buyers more selective. These effects cause price-default spirals that amplify an initial shock and help the model fit both national and cross-sectional moments better than a model without foreclosure. When calibrated to the recent bust, the model reveals that the amplification generated by foreclosures is significant: ruined credit and choosy buyers account for 25.4 percent of the total decline in non-distressed prices, and lender losses account for an additional 22.6 percent. For policy, we find that principal reduction is less cost-effective than lender equity injections or introducing a single seller that holds foreclosures off the market until demand rebounds. We also show that policies that slow down the pace of foreclosures can be counterproductive.
    JEL: E30 R31
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26216&r=all
  19. By: Samuel Asante Gyamerah
    Abstract: The uncertainties in the future price of Bitcoin make it difficult to predict accurately. Accurate price prediction is therefore important for the decision-making of investors and market players in the cryptocurrency market. Using historical data from 01/01/2012 to 16/08/2019, machine learning techniques (generalized linear model via penalized maximum likelihood, random forest, support vector regression with a linear kernel, and a stacking ensemble) were used to forecast the price of Bitcoin. The prediction models employed key, high-dimensional technical indicators as predictors. The performance of these techniques was evaluated using the mean absolute percentage error (MAPE), root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R-squared). The performance metrics revealed that the stacking ensemble model with two base learners (random forest and generalized linear model via penalized maximum likelihood) and support vector regression with a linear kernel as meta-learner was the optimal model for forecasting the Bitcoin price. The MAPE, RMSE, MAE, and R-squared values for the stacking ensemble model were 0.0191%, 15.5331 USD, 124.5508 USD, and 0.9967 respectively. These values show a high degree of reliability in predicting the price of Bitcoin using the stacking ensemble model. Accurately predicting the future price of Bitcoin will yield significant returns for investors and market players in the cryptocurrency market.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.01268&r=all
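    The winning model maps directly onto scikit-learn's stacking API, sketched below with ElasticNet standing in for the penalized-maximum-likelihood GLM; hyperparameters are placeholders, and constructing the technical-indicator features is assumed done.

      from sklearn.ensemble import RandomForestRegressor, StackingRegressor
      from sklearn.linear_model import ElasticNet
      from sklearn.svm import SVR

      stack = StackingRegressor(
          estimators=[("rf", RandomForestRegressor(n_estimators=500, random_state=0)),
                      ("glm", ElasticNet(alpha=0.1))],   # penalized GLM stand-in
          final_estimator=SVR(kernel="linear"))          # linear-kernel meta-learner
      # stack.fit(X_train, y_train); preds = stack.predict(X_test)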
  20. By: Zhou, Yujun; Baylis, Kathy
    Keywords: International Development
    Date: 2019–06–25
    URL: http://d.repec.org/n?u=RePEc:ags:aaea19:291056&r=all
  21. By: François Legendre
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:eru:erudwp:wp19-20&r=all
  22. By: Emerson G. Escolar; Yasuaki Hiraoka; Mitsuru Igami; Yasin Ozcan
    Abstract: Where do firms innovate? Mapping their locations in technological space is difficult, because it is high dimensional and unstructured. We address this issue by using a method in computational topology called the Mapper algorithm, which combines local clustering with global reconstruction. We apply this method to a panel of 333 major firms' patent portfolios in 1976–2005 across 430 technological areas. Results suggest the Mapper graph captures salient patterns in firms' patenting histories, and our measures of their uniqueness (the type and length of "flares") are correlated with firms' financial performances in a statistically and economically significant manner.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.00257&r=all
  23. By: Ni, Yuanming (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: More and more fishery researchers are beginning to acknowledge that one-dimensional biomass models may omit key information when generating management guidelines. For more complicated age-structured models, numerous parameters require proper estimation or reasonable assumptions. In this paper, the effects of recruitment patterns and environmental impacts on the optimal exploitation of a fish population are investigated. Based on a discrete-time age-structured bioeconomic model of Northeast Atlantic mackerel, we introduce the mechanisms that generate six scenarios of the problem. Using the simplest scenario, optimizations are conducted under eight different parameter combinations. Then, the problem is solved for each scenario and simulations are conducted with constant fishing mortalities. It is found that higher environmental volatility leads to higher net profits but with a lower probability of achieving the mean values. Any parameter combination that favours older fish tends to lend itself to a pulse fishing pattern. The simulations indicate that a constant fishing mortality of around 0.06 performs best. A comparison between the optimal and the historical harvest shows that for more than 70% of the time the optimal exploitation precedes the historical one, leading to 43% higher net profit and 34% lower fishing cost.
    Keywords: Age-structured; bioeconomic; recruitment; optimization
    JEL: C44 C61 Q00 Q20 Q22 Q50
    Date: 2019–09–03
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2019_004&r=all
  24. By: Stefania Albanesi; Domonkos F. Vamossy
    Abstract: We develop a model to predict consumer default based on deep learning. We show that the model consistently outperforms standard credit scoring models, even though it uses the same data. Our model is interpretable and is able to provide a score to a larger class of borrowers relative to standard credit scoring models while accurately tracking variations in systemic risk. We argue that these properties can provide valuable insights for the design of policies targeted at reducing consumer default and alleviating its burden on borrowers and lenders, as well as macroprudential regulation.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.11498&r=all
  25. By: Ben Broadbent (Bank of England; Centre for Macroeconomics (CFM)); Federico Di Pace (Bank of England); Thomas Drechsel (University of Maryland; Centre for Macroeconomics (CFM)); Richard Harrison (Bank of England; Centre for Macroeconomics (CFM)); Silvana Tenreyro (Bank of England; London School of Economics (LSE); Centre for Macroeconomics (CFM); Centre for Economic Policy Research (CEPR))
    Abstract: The UK economy has experienced significant macroeconomic adjustments following the 2016 referendum on its withdrawal from the European Union. This paper develops and estimates a small open economy model with tradable and non-tradable sectors to characterise these adjustments. We demonstrate that many of the effects of the referendum result can be conceptualised as news about a future slowdown in productivity growth in the tradable sector. Simulations show that the responses of the model economy to such news are consistent with key patterns in UK data. While overall economic growth slows, an immediate permanent fall in the relative price of non-tradable output (the real exchange rate) induces a temporary ‘sweet spot’ for tradable producers before the slowdown in tradable sector productivity associated with Brexit occurs. Resources are reallocated towards the tradable sector, tradable output growth rises and net exports increase. These developments reverse after the productivity decline in the tradable sector materialises. The negative news about tradable sector productivity also leads to a decline in domestic interest rates relative to world interest rates and to a reduction in investment growth, while employment remains relatively stable. As a by-product of our analysis, we provide a quantitative analysis of the UK business cycle.
    Keywords: Brexit, Small open economy, Productivity, Tradable sector, UK economy
    JEL: E13 E32 F17 F47 O16
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:cfm:wpaper:1916&r=all
  26. By: Ni, Yuanming (Dept. of Business and Management Science, Norwegian School of Economics); Sandal, Leif K. (Dept. of Business and Management Science, Norwegian School of Economics); Kvamsdal, Sturla F. (SNF - Centre for Applied Research at NHH); Poudel, Diwakar (Norwegian Polar Institute)
    Abstract: This paper demonstrates a predator-prey system of cod and capelin that confronts a possible scenario of prey extinction under the first-best policy in a stochastic world. We discover a novel ‘super-harvest’ phenomenon, in which the optimal harvest of the predator is even higher than under the myopic policy, or ‘greedy solution’, on part of the state space. This intrinsic attempt to harvest more of the predator to protect the prey is critical evidence supporting the idea that ‘greed is good’. We ban prey harvest and increase predator harvest in a designated area of the state space based on the optimal policy. Three heuristic recovery plans are generated following this principle. We employ stochastic simulations to analyse the probability of prey recovery and evaluate the corresponding costs in terms of percentage value loss. We find that the alternative policies enhance prey recovery rates mostly around the area of 50% recovery probability under the optimal policy. When we scale up the predator harvest by 1.5, the prey recovery rate rises by as much as 28% at a cost of 5% value loss. We establish two strategies: modest deviation from the optimal policy over a large area, or intense measures on a small area. It seems more cost-effective to target the stock space with accuracy than simply to boost predator harvest when the aim is a remarkable improvement in the probability of prey recovery.
    Keywords: Stock recovery; resilience; predator-prey; ecosystem; stochastic
    JEL: C44 C61 Q00 Q20 Q22 Q50
    Date: 2019–09–04
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2019_005&r=all
  27. By: Huber, Martin
    Abstract: This chapter covers different approaches to policy evaluation for assessing the causal effect of a treatment or intervention on an outcome of interest. As an introduction to causal inference, the discussion starts with the experimental evaluation of a randomized treatment. It then reviews evaluation methods based on selection on observables (assuming a quasi-random treatment given observed covariates), instrumental variables (inducing a quasi-random shift in the treatment), difference-in-differences and changes-in-changes (exploiting changes in outcomes over time), as well as regression discontinuities and kinks (using changes in the treatment assignment at some threshold of a running variable). The chapter discusses methods particularly suited to data with many observations for flexible (i.e. semi- or nonparametric) modeling of treatment effects, and/or many (i.e. high-dimensional) observed covariates, by applying machine learning to select and control for covariates in a data-driven way. This is useful not only for tackling confounding, by controlling for instance for factors jointly affecting the treatment and the outcome, but also for learning effect heterogeneities across subgroups defined by observable covariates and optimally targeting those groups for which the treatment is most effective.
    Keywords: Policy evaluation; treatment effects; machine learning; experiment; selection on observables; instrument; difference-in-differences; changes-in-changes; regression discontinuity design; regression kink design
    JEL: C21 C26 C29
    Date: 2019–08–12
    URL: http://d.repec.org/n?u=RePEc:fri:fribow:fribow00504&r=all
  28. By: Jayawickrema, Vishuddhi
    Abstract: This paper attempts to characterize the monetary policy regimes in the United States and analyze their effects on macroeconomic stability. It does so by estimating Taylor-type forward-looking monetary policy reaction functions for the pre- and post-1979 periods, and simulating the resultant coefficients in a basic New Keynesian business cycle model. The feedback coefficient on inflation in the estimated policy reaction function is found to be less than unity for the 1960-1979 period, suggesting an accommodative monetary policy stance of the Federal Reserve. For the 1979-2017 period, however, the feedback coefficient on inflation is estimated to be substantially greater than unity, implying that the Federal Reserve adopted a proactive policy stance towards controlling inflation. It is also found that in recent times the Federal Reserve has shifted its focus from short, one-period-ahead inflation targets to longer target horizons, such as one-year-ahead inflation targets. Meanwhile, the model simulations show that the economy exhibits greater stability under a model with the post-1979 calibration than under a model combining pre-1979 parameters and ‘sunspot’ shocks.
    Keywords: Monetary Policy, Monetary Policy Rules, Taylor Rule, Macroeconomic Stability
    JEL: E32 E43 E52
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:95590&r=all
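    The Taylor-principle check at the heart of the argument can be illustrated with a toy forward-looking rule, i_t = c + a·E[π_{t+1}] + b·gap_t, fitted by OLS on synthetic data; the paper's estimation details may differ, and everything below is invented for illustration.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      df = pd.DataFrame({"expected_inflation": rng.normal(2, 1, 200),
                         "output_gap": rng.normal(0, 2, 200)})
      # Synthetic "post-1979" policy rate with an inflation coefficient above one:
      df["policy_rate"] = (1 + 1.5 * df["expected_inflation"]
                           + 0.5 * df["output_gap"] + rng.normal(0, 0.5, 200))

      X = sm.add_constant(df[["expected_inflation", "output_gap"]])
      res = sm.OLS(df["policy_rate"], X).fit()
      print(res.params)  # an inflation coefficient > 1 satisfies the Taylor principle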
  29. By: Arthur Seibold
    Abstract: This paper documents and analyzes an important and puzzling stylized fact about retirement behavior: the large concentration of job exits at specific ages. In Germany, almost 30% of workers retire precisely in the month when they reach one of three statutory retirement ages, although there is often no incentive or even a disincentive to retire at these thresholds. To study what can explain the concentration of retirements around statutory ages, I use novel administrative data covering the universe of German retirees, and I exploit unique variation in financial retirement incentives as well as statutory ages across individuals in the German pension system. Measuring retirement bunching responses to 644 different discontinuities in pension benefit profiles, I first document that financial incentives alone fail to explain retirement patterns in the data. Second, I show that there is a large direct effect of “presenting” a threshold as a statutory retirement age. Further evidence on mechanisms suggests the framing of statutory ages as reference points for retirement as a potential explanation. A number of alternative channels including firm responses are also discussed but they do not seem to drive the results. Finally, structural bunching estimation is employed to estimate reference point effects. Counterfactual simulations highlight that shifting statutory ages via pension reforms can be an effective policy to increase actual retirement ages with a positive fiscal impact.
    Keywords: retirement, reference points
    JEL: D03 H55 J26
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7799&r=all

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.