nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒11‒05
twenty-six papers chosen by



  1. Deep calibration of rough stochastic volatility models By Christian Bayer; Benjamin Stemper
  2. Improving Stock Movement Prediction with Adversarial Training By Fuli Feng; Huimin Chen; Xiangnan He; Ji Ding; Maosong Sun; Tat-Seng Chua
  3. Early Detection of Students at Risk – Predicting Student Dropouts Using Administrative Student Data and Machine Learning Methods By Johannes Berens; Kerstin Schneider; Simon Görtz; Simon Oster; Julian Burghoff
  4. Martingale Functional Control Variates via Deep Learning By Marc Sabate Vidales; David Siska; Lukasz Szpruch
  5. Geometrically Convergent Simulation of the Extrema of Lévy Processes By Jorge González Cázares; Aleksandar Mijatović; Gerónimo Uribe Bravo
  6. A multi-country analysis of austerity policies in the European Union By Oscar Bajo-Rubio; Antonio G. Gómez-Plana
  7. CNNPred: CNN-based stock market prediction using several data sources By Ehsan Hoseinzade; Saman Haratizadeh
  8. Offline Multi-Action Policy Learning: Generalization and Optimization By Zhengyuan Zhou; Susan Athey; Stefan Wager
  9. The Model Selection Curse By Kfir Eliaz; Ran Spiegler
  10. Reverse Quantum Annealing Approach to Portfolio Optimization Problems By Davide Venturelli; Alexei Kondratyev
  11. Framing Discrete Choice Model as Deep Neural Network with Utility Interpretation By Shenhao Wang; Jinhua Zhao
  12. IRPsim: A techno-socio-economic energy system model vision for business strategy assessment at municipal level By Scheller, Fabian; Johanning, Simon; Bruckner, Thomas
  13. An Introduction to fast-Super Paramagnetic Clustering By Lionel Yelibi; Tim Gebbie
  14. Using Preference Vector Modeling to Polarity Shift for Improvement of Opinion Mining By Chihli Hung
  15. Wide and Deep Learning for Peer-to-Peer Lending By Kaveh Bastani; Elham Asgari; Hamed Namavari
  16. Predicting Match Outcomes in Football by an Ordered Forest Estimator By Goller, Daniel; Knaus, Michael C.; Lechner, Michael; Okasa, Gabriel
  17. Cumulative culture, social learning and social networks By Claude Meidinger
  18. Model Selection Techniques – An Overview By Jie Ding; Vahid Tarokh; Yuhong Yang
  19. Using Deep Learning for price prediction by exploiting stationary limit order book features By Avraam Tsantekidis; Nikolaos Passalis; Anastasios Tefas; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
  20. Lifting the Heston model By Eduardo Abi Jaber
  21. Portfolio Construction Matters By Stefano Ciliberti; Stanislao Gualdi
  22. Funding Options from the Market By Bell, Peter
  23. How shifting investment towards low-carbon sectors impacts employment: three determinants under scrutiny By Quentin Perrier; Philippe Quirion
  24. The vertical and horizontal distributive effects of energy taxes By Thomas Douenne
  25. Fiscal and individual net returns and rates of return to educational investments in young adulthood By Pfeiffer, Friedhelm; Stichnoth, Holger
  26. Fiscal Equalization as a Driver of Tax Increases: Empirical Evidence from Germany By Thiess Büttner; Manuela Krause

  1. By: Christian Bayer; Benjamin Stemper
    Abstract: Sparked by Alòs, León, and Vives (2007); Fukasawa (2011, 2017); and Gatheral, Jaisson, and Rosenbaum (2018), so-called rough stochastic volatility models such as the rough Bergomi model of Bayer, Friz, and Gatheral (2016) constitute the latest evolution in option price modeling. Unlike standard bivariate diffusion models such as Heston (1993), these non-Markovian models with fractional volatility drivers parsimoniously recover key stylized facts of market implied volatility surfaces, such as the exploding power-law behaviour of the at-the-money volatility skew as time to maturity goes to zero. Standard model calibration routines rely on repeated evaluation of the map from model parameters to Black-Scholes implied volatility, rendering calibration of many (rough) stochastic volatility models prohibitively expensive, since the map can often only be approximated by costly Monte Carlo (MC) simulations (Bennedsen, Lunde, & Pakkanen, 2017; McCrickerd & Pakkanen, 2018; Bayer et al., 2016; Horvath, Jacquier, & Muguruza, 2017). As a remedy, we propose to combine a standard Levenberg-Marquardt calibration routine with neural network regression, replacing expensive MC simulations with cheap forward runs of a neural network trained to approximate the implied volatility map. Numerical experiments confirm the high accuracy and speed of our approach. (A schematic sketch of this calibration loop follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.03399&r=cmp
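    A minimal sketch of the idea, assuming a toy stand-in network (random weights here; in the paper the network is trained offline on Monte Carlo data) and hypothetical parameter and grid sizes:
    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    # Stand-in for a trained price map: model parameters -> implied vols on a
    # small strike/maturity grid. Weights are random here purely for illustration.
    W1, b1 = rng.normal(size=(16, 3)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(8, 16)), rng.normal(size=8)

    def nn_implied_vol(theta):
        h = np.tanh(W1 @ theta + b1)
        return 0.2 + 0.05 * np.tanh(W2 @ h + b2)  # eight "implied vols"

    market_vols = nn_implied_vol(np.array([0.5, -0.3, 1.0]))  # synthetic target surface

    # Levenberg-Marquardt over cheap network forward passes instead of MC pricing.
    fit = least_squares(lambda th: nn_implied_vol(th) - market_vols,
                        x0=np.array([0.1, 0.1, 0.1]), method="lm")
    print(fit.x)  # recovered parameters (up to local minima of the surrogate)
    ```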
  2. By: Fuli Feng; Huimin Chen; Xiangnan He; Ji Ding; Maosong Sun; Tat-Seng Chua
    Abstract: This paper contributes a new machine learning solution for stock movement prediction, which aims to predict whether the price of a stock will move up or down in the near future. The key novelty is the use of adversarial training to improve the generalization of a recurrent neural network model. The rationale for adversarial training here is that the input features to stock prediction are typically based on stock price, which is essentially a stochastic variable that changes continuously over time. As such, normal training with stationary price-based features (e.g. the closing price) can easily overfit the data and is insufficient for obtaining reliable models. To address this problem, we propose to add perturbations that simulate the stochasticity of the continuous price variable, and to train the model to perform well under small yet intentional perturbations. Extensive experiments on two real-world stock datasets show that our method outperforms the state-of-the-art solution with a 3.11% average relative improvement in accuracy, verifying the usefulness of adversarial training for the stock prediction task. Code will be made available upon acceptance. (A toy sketch of the adversarial-perturbation idea follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.09936&r=cmp
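    A toy illustration of the perturbation idea, with a plain logistic regression standing in for the paper's recurrent model and an FGSM-style input perturbation; all data and step sizes are invented:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic up/down labels from price-like features. Each update first shifts
    # the inputs in the direction that increases the loss, then trains on them.
    X = rng.normal(size=(1000, 5))
    y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + 0.3 * rng.normal(size=1000) > 0).astype(float)

    w, eps, lr = np.zeros(5), 0.05, 0.5
    for _ in range(300):
        p = 1 / (1 + np.exp(-X @ w))
        X_adv = X + eps * np.sign((p - y)[:, None] * w)   # worst-case input shift
        p_adv = 1 / (1 + np.exp(-X_adv @ w))
        w -= lr * X_adv.T @ (p_adv - y) / len(y)          # gradient step on perturbed data

    print((np.round(1 / (1 + np.exp(-X @ w))) == y).mean())  # training accuracy
    ```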
  3. By: Johannes Berens; Kerstin Schneider; Simon Görtz; Simon Oster; Julian Burghoff
    Abstract: To successfully reduce student attrition, it is imperative to understand the underlying determinants of attrition and to know which students are at risk of dropping out. We develop an early detection system (EDS) that uses administrative student data from a state university and a private university to predict student success as a basis for targeted interventions. The EDS uses regression analysis, neural networks, decision trees, and the AdaBoost algorithm to identify student characteristics that distinguish potential dropouts from graduates. Prediction accuracy at the end of the first semester is 79% for the state university and 85% for the private university of applied sciences. After the fourth semester, the accuracy improves to 90% and 95%, respectively. (A minimal classification sketch follows this entry.)
    Keywords: student attrition, machine learning, administrative student data, AdaBoost
    JEL: I23 H42 C45
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7259&r=cmp
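    A minimal classification sketch with synthetic stand-in features, using scikit-learn's AdaBoost as one of the methods the abstract names (the real EDS uses administrative records):
    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for administrative data (credits earned, grades, age, ...).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) < -0.5).astype(int)  # 1 = dropout

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    eds = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {eds.score(X_te, y_te):.2f}")
    ```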
  4. By: Marc Sabate Vidales; David Siska; Lukasz Szpruch
    Abstract: We propose a black-box control variate for Monte Carlo simulations, leveraging the Martingale Representation Theorem and artificial neural networks. We develop several learning algorithms for finding martingale control variate functionals in both the Markovian and non-Markovian settings. The proposed algorithms guarantee convergence to the true solution independently of the quality of the deep learning approximation of the control variate functional. We believe this is important, as the current theory of deep learning function approximation lacks a theoretical foundation. However, the quality of the approximation determines the benefit delivered by the control variate. The methods are shown empirically to work for high-dimensional problems. We provide diagnostics that shed light on appropriate network architectures. (A toy control-variate sketch follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.05094&r=cmp
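    A toy illustration of why a mean-zero (martingale-type) control leaves the estimator unbiased whatever its quality; the control here is a hand-picked function, not a trained network:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def with_control_variate(payoff, control, n=200_000):
        """Estimate E[payoff(X)] with and without subtracting a mean-zero control.
        Because the control has zero expectation (the martingale property), the
        estimator stays unbiased no matter how crude the control is; a better
        control only reduces the variance."""
        x = rng.normal(size=n)
        f, m = payoff(x), control(x)
        return (f.mean(), f.std() / np.sqrt(n)), ((f - m).mean(), (f - m).std() / np.sqrt(n))

    plain, controlled = with_control_variate(lambda x: np.maximum(x, 0.0),
                                             lambda x: 0.5 * x)  # stand-in for a learned functional
    print(plain, controlled)  # same mean, smaller standard error
    ```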
  5. By: Jorge González Cázares; Aleksandar Mijatović; Gerónimo Uribe Bravo
    Abstract: We develop a novel Monte Carlo algorithm for simulation from the joint law of the position, the running supremum and the time of the supremum of a general Lévy process at an arbitrary finite time. We prove that the bias decays geometrically, in contrast to the power law for the random walk approximation (RWA). We identify the law of the error and, inspired by the recent work of Ivanovs [Iva18] on RWA, characterise its asymptotic behaviour. We establish a central limit theorem, construct non-asymptotic and asymptotic confidence intervals and prove that the multilevel Monte Carlo (MLMC) estimator has optimal computational complexity (i.e. of order $\epsilon^{-2}$ if the $L^2$-norm of the error is at most $\epsilon$) for locally Lipschitz and barrier-type functionals of the triplet. If the increments of the Lévy process cannot be sampled directly, we combine our algorithm with the Asmussen-Rosiński approximation [AR01] by choosing the rate of decay of the cutoff level for small jumps so that the corresponding MC and MLMC estimators have minimal computational complexity. Moreover, we give an unbiased version of our estimator using ideas from Rhee-Glynn [RG15] and Vihola [Vih18]. (A generic MLMC sketch, using the baseline RWA coupling, follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.11039&r=cmp
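    For orientation, a generic MLMC sketch using the baseline random-walk coupling that the paper's geometric sampler is designed to beat; this is not the paper's algorithm:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def coupled_sup(level, n, T=1.0):
        """Fine/coarse random-walk estimates of sup_{t<=T} W_t, with the coarse
        path built from the SAME increments (the standard MLMC coupling).
        Level 0 has no coarse partner."""
        m = 2 ** level
        dw = rng.normal(0.0, np.sqrt(T / m), size=(n, m))
        fine = np.maximum(np.max(np.cumsum(dw, axis=1), axis=1), 0.0)
        if level == 0:
            return fine, np.zeros(n)
        coarse = np.cumsum(dw[:, 0::2] + dw[:, 1::2], axis=1)
        return fine, np.maximum(np.max(coarse, axis=1), 0.0)

    est = 0.0
    for level in range(8):                       # telescoping sum over levels
        fine, coarse = coupled_sup(level, 50_000 // 2 ** level + 100)
        est += np.mean(fine - coarse)
    print(est)   # approaches E[sup W] = sqrt(2/pi) ~ 0.798 as levels grow
    ```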
  6. By: Oscar Bajo-Rubio (Universidad de Castilla-La Mancha); Antonio G. Gómez-Plana (Universidad Pública de Navarra)
    Abstract: In this paper, we analyse the global effects, i.e., the effects on the world economy, of the austerity policies implemented in the European Union (EU) over recent years. Specifically, we simulate the effects of three alternative policies aimed at reducing the EU's government deficit-to-GDP ratio by one percentage point: a decrease in the level of public spending, and increases in consumption and labour taxes. We examine their effects on the main macroeconomic variables of seven regions of the world economy, i.e., the EU, the US, Japan, China, Asia-Pacific, Latin America and the Rest of the World. The empirical methodology makes use of a computable general equilibrium (CGE) model, through an extension of the Global Trade Analysis Project (GTAP) model. The three policy measures lead to contractionary effects on the EU's levels of activity, accompanied by changes in income distribution that are always detrimental to labour. The effects on the rest of the world, however, are mostly negligible.
    Keywords: Computable general equilibrium, Austerity policies, Global economy, European Union
    JEL: C68 H62 H20 H50
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:aee:wpaper:1803&r=cmp
  7. By: Ehsan Hoseinzade; Saman Haratizadeh
    Abstract: Feature extraction from financial data is one of the most important problems in the market prediction domain, for which many approaches have been suggested. Among other modern tools, convolutional neural networks (CNN) have recently been applied to automatic feature selection and market prediction. However, in the experiments reported so far, little attention has been paid to the correlation among different markets as a possible source of information for extracting features. In this paper, we suggest a framework with specially designed CNNs that can be applied to a collection of data from a variety of sources, including different markets, in order to extract features for predicting the future of those markets. The suggested framework has been applied to predicting the next day's direction of movement for the indices of the S&P 500, NASDAQ, DJI, NYSE, and RUSSELL markets based on various sets of initial features. The evaluations show a significant improvement in prediction performance compared to state-of-the-art baseline algorithms.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.08923&r=cmp
  8. By: Zhengyuan Zhou; Susan Athey; Stefan Wager
    Abstract: In many settings, a decision-maker wishes to learn a rule, or policy, that maps from observable characteristics of an individual to an action. Examples include selecting offers, prices, advertisements, or emails to send to consumers, as well as determining which medication to prescribe to a patient. While there is a growing body of literature devoted to this problem, most existing results focus on the case where the data come from a randomized experiment and there are only two possible actions, such as giving a drug to a patient or not. In this paper, we study the offline multi-action policy learning problem with observational data, where the policy may need to respect budget constraints or belong to a restricted policy class such as decision trees. We build on the theory of efficient semi-parametric inference to propose and implement a policy learning algorithm that achieves asymptotically minimax-optimal regret. To the best of our knowledge, this is the first result of its type in the multi-action setup, and it provides a substantial performance improvement over existing learning algorithms. We then consider additional computational challenges that arise in implementing our method when the policy is restricted to take the form of a decision tree. We propose two approaches, one using a mixed integer program formulation and the other a tree-search based algorithm. (A sketch of the doubly robust scores underlying such methods follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.04778&r=cmp
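    A sketch of the doubly robust (AIPW) scores that efficient semi-parametric policy learning of this kind builds on; the outcome and propensity models below are placeholders:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def aipw_scores(y, a, mu_hat, e_hat):
        """Doubly robust score for unit i and action k:
        Gamma[i,k] = mu_hat[i,k] + 1{A_i = k} * (Y_i - mu_hat[i,k]) / e_hat[i,k]."""
        gamma = mu_hat.copy()
        rows = np.arange(len(y))
        gamma[rows, a] += (y - mu_hat[rows, a]) / e_hat[rows, a]
        return gamma

    # Toy observational data with 3 actions; in practice mu_hat / e_hat come from
    # cross-fitted outcome and propensity models.
    n, K = 1000, 3
    a = rng.integers(0, K, n)
    y = rng.normal(loc=a * 0.1, scale=1.0)
    mu_hat = np.tile(np.arange(K) * 0.1, (n, 1))   # (pretend) outcome model
    e_hat = np.full((n, K), 1 / K)                 # (pretend) propensities
    gamma = aipw_scores(y, a, mu_hat, e_hat)
    pi = gamma.argmax(axis=1)                      # unrestricted greedy policy
    print(gamma[np.arange(n), pi].mean())          # estimated policy value
    ```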
  9. By: Kfir Eliaz; Ran Spiegler
    Abstract: A "statistician" takes an action on behalf of an agent, based on the agent's self-reported personal data and a sample involving other people. The action that he takes is an estimated function of the agent's report. The estimation procedure involves model selection. We ask the following question: Is truth-telling optimal for the agent given the statistician's procedure? We analyze this question in the context of a simple example that highlights the role of model selection. We suggest that our simple exercise may have implications for the broader issue of human interaction with "machine learning" algorithms.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.02888&r=cmp
  10. By: Davide Venturelli; Alexei Kondratyev
    Abstract: We investigate a hybrid quantum-classical solution method for mean-variance portfolio optimization problems. Starting from real financial data statistics and following the principles of Modern Portfolio Theory, we generate parametrized samples of portfolio optimization problems that can be related to quadratic binary optimization forms programmable on the analog D-Wave 2000Q Quantum Annealer. The instances are also solvable by an industry-established Genetic Algorithm approach, which we use as a classical benchmark. We investigate several options to run the quantum computation optimally, ultimately discovering that the best results, in terms of expected time-to-solution as a function of the number of variables for the hardest instance set, are obtained by seeding the quantum annealer with a solution candidate found by a greedy local search and then performing a reverse annealing protocol. The optimized reverse annealing protocol is found to be more than 100 times faster on average than the corresponding forward quantum annealing. (A sketch of the underlying QUBO construction follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.08584&r=cmp
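    A hedged sketch of the kind of quadratic binary (QUBO) form such problems are mapped to, with a brute-force classical solve standing in for the annealer and the genetic-algorithm benchmark; the penalty and budget values are invented:
    ```python
    import numpy as np
    from itertools import product

    def portfolio_qubo(mu, sigma, q, budget, penalty):
        """QUBO matrix for: min_x  q x'Sigma x - mu'x + penalty (1'x - budget)^2,
        x binary (asset in / out). The constant penalty*budget^2 is dropped."""
        n = len(mu)
        Q = q * sigma + penalty * np.ones((n, n))
        Q[np.diag_indices(n)] += -mu - 2 * penalty * budget
        return Q

    rng = np.random.default_rng(0)
    n = 8
    mu = rng.uniform(0.01, 0.10, n)              # expected returns
    A = rng.normal(size=(n, n))
    sigma = A @ A.T / n                          # positive semi-definite covariance
    Q = portfolio_qubo(mu, sigma, q=0.5, budget=3, penalty=1.0)

    # Exhaustive classical solve of the same objective the annealer samples.
    best = min(product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print(best)
    ```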
  11. By: Shenhao Wang; Jinhua Zhao
    Abstract: Deep neural networks (DNN) are increasingly applied to travel demand prediction. However, no study has examined how DNNs relate to utility-based discrete choice models (DCM) beyond simple comparisons of prediction accuracy. To fill this gap, this paper investigates the relationship between DNN and DCM from a theoretical perspective, with three major findings. First, we introduce the utility interpretation to DNN models and demonstrate that DCM is a special case of DNN with a shallow and sparse architecture, identifiable parameters, logistic loss, zero regularization, and domain-knowledge-based feature transformation. Second, a sequence of four neural network models illustrates how DNNs gradually trade away interpretability for predictability in the context of travel mode choice. High predictability is achieved by a DNN's powerful representation learning and high model capacity, but interpretability is sacrificed through the loss of convex optimization and statistical properties, and the non-identification of parameters. Third, the utility interpretation allows us to develop a numerical method for extracting important economic information from a DNN, including choice probabilities, elasticities, marginal rates of substitution, and consumer surplus. Overall, this study makes three contributions: theoretically, it frames DCM as a special case of DNN and introduces the utility interpretation to DNN; methodologically, it demonstrates the interpretability-predictability tradeoff between DCM and DNN and suggests the potential of their joint improvement; and practically, it introduces a post-hoc numerical method to extract economic information from a DNN and make it interpretable through the utility concept. (A minimal sketch of the DCM-as-DNN special case follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.10465&r=cmp
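    A minimal numpy rendering of the special case the paper identifies: multinomial logit as a one-layer network with linear utilities and a softmax output:
    ```python
    import numpy as np

    def mnl_probabilities(X, beta):
        """Multinomial logit viewed as a shallow, sparse 'DNN': one linear layer
        producing utilities V[i,k] = X[i,k] @ beta, then a softmax output.
        X: (n, K, d) alternative attributes; beta: (d,) shared taste parameters."""
        V = X @ beta
        V = V - V.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(V)
        return e / e.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3, 2))             # 5 choices, 3 modes, 2 attributes
    P = mnl_probabilities(X, np.array([-1.0, 0.5]))
    # Economic quantities are then recoverable from the probabilities, e.g. the
    # own-elasticity of alternative k w.r.t. attribute j: beta[j] * x * (1 - P[k]).
    print(P.sum(axis=1))                       # rows sum to one
    ```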
  12. By: Scheller, Fabian; Johanning, Simon; Bruckner, Thomas
    Abstract: Decision makers at municipal energy utilities responsible for future portfolio strategies must make informed decisions within continuously evolving systems. Faced with increasingly flexible customers and their autonomous decision-making processes, planning newly established municipal energy-related infrastructure has become a challenge for utilities, which struggle to develop suitable business models. Even though business portfolio decisions are already supported by energy system models, models that consider only the rational choices of economic drivers appear insufficient: structural decisions of different market actors often reflect bounded rationality and are thus not fully rational. A combined analysis of sociological and technological dynamics may be necessary to evaluate new business models, by providing insights into the interactions between the decision processes of market actors and the performance of the supply system. This research paper outlines a multi-model vision called IRPsim (Integrated Resource Planning and Simulation) that includes both bounded- and unbounded-rationality modeling approaches. The techno-socio-economic model makes it possible to determine the system impacts of market actors' behavior patterns on the business performance of the energy supply system. The mutual dependencies of the coupled models result in an interactive and dynamic energy model application for multi-year business portfolio assessment. The mixed-integer dynamic techno-economic optimization model IRPopt (Integrated Resource Planning and Optimization) represents an adequate starting point as a result of its novel actor-oriented multi-level framework. For the socioeconomic model IRPact (Integrated Resource Planning and Interaction), empirically grounded agent-based modeling turned out to be one of the most promising approaches, as it allows various influences on the adoption process to be considered at the micro level. Additionally, a large share of available applied research already deals with environmental and energy-related innovations.
    Keywords: Techno-socio-economic modeling, Bounded and unbounded rationality, Business model assessment, Empirically grounded agent-based modeling of innovation diffusion
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:iirmco:022018&r=cmp
  13. By: Lionel Yelibi; Tim Gebbie
    Abstract: We map stock market interactions to spin models to recover their hierarchical structure, using a simulated-annealing-based Super-Paramagnetic Clustering (SPC) algorithm. This is compared directly to a modified implementation of a maximum likelihood approach to fast Super-Paramagnetic Clustering (f-SPC). The methods are first applied to standard toy test-case problems, and then to a dataset of 447 stocks traded on the New York Stock Exchange (NYSE) over 1249 days. The signal-to-noise ratio of stock market correlation matrices is briefly considered. Our results recover clusters approximately representative of standard economic sectors, as well as mixed clusters whose dynamics shed light on the adaptive nature of financial markets and raise concerns about the effectiveness of industry-based static financial market classification in the world of real-time data analytics. A key result is that the standard maximum likelihood methods are confirmed to converge to solutions within a Super-Paramagnetic (SP) phase. We use the insights arising from this to discuss the implications of using a Maximum Entropy Principle (MEP), as opposed to the Maximum Likelihood Principle (MLP), as an optimization device for this class of problems.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.02529&r=cmp
  14. By: Chihli Hung (Chung Yuan Christian University)
    Abstract: This research proposes preference vector modeling (PVM) to deal with polarity shifts and thereby improve sentiment classification of word of mouth (WOM). WOM has become a main information resource for consumers when forming business or buying strategies. A polarity shift happens when the sentiment polarity of a term differs from that of its associated WOM document; this is one of the most difficult issues in the field of opinion mining. Traditional opinion mining approaches either accumulate predefined sentiment polarities of terms into a document-level sentiment polarity or train classifiers with machine learning techniques, but they ignore the significance of polarity shifts arising from specific usages of terms. The literature contains two kinds of approaches to detecting polarity shifts: rule-based approaches and machine learning approaches. However, it is hard for a rule-based approach to manually define a complete rule set, and machine learning approaches based on the vector space model (VSM) suffer from the curse of dimensionality. This research therefore proposes a novel approach to dealing with polarity shifts in sentiment analysis. Firstly, we propose PVM, based on an integration of opinionated documents and a star ranking system; the dimensionality of a preference vector equals the number of levels in the star ranking system. The proposed PVM thus escapes the curse of dimensionality, since the dimensionality of the star ranking system is much lower than that of a VSM document vector. We then propose an automatic approach to polarity shift detection in which the document preference vector is the average of its term preference vectors. This works for opinionated documents extracted from the same scale of star ranking system and the same domain. Finally, PVM is integrated with standard classification techniques to improve sentiment classification of word of mouth. (A hedged sketch of the preference-vector construction follows this entry.)
    Keywords: Polarity Shift; Preference Vector Modeling; Opinionated Text; Sentiment Analysis; Opinion Mining
    JEL: D80 L86
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:6508391&r=cmp
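    One hedged reading of the preference-vector construction, inferring the details from the abstract (term vectors over star classes, documents as averages); the data and helper names are invented:
    ```python
    import numpy as np

    def term_preference_vectors(docs, stars, n_stars=5):
        """A term's preference vector is its normalised frequency across the star
        classes of the reviews it appears in; a document's vector is the average
        of its terms' vectors, so the dimensionality equals n_stars."""
        vocab = sorted({w for d in docs for w in d.split()})
        idx = {w: i for i, w in enumerate(vocab)}
        counts = np.zeros((len(vocab), n_stars))
        for doc, s in zip(docs, stars):
            for w in doc.split():
                counts[idx[w], s - 1] += 1
        tpv = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

        def doc_vector(doc):
            rows = [tpv[idx[w]] for w in doc.split() if w in idx]
            return np.mean(rows, axis=0) if rows else np.zeros(n_stars)
        return doc_vector

    dv = term_preference_vectors(["great phone", "not great battery"], [5, 2])
    print(dv("great battery"))   # 5-dimensional document preference vector
    ```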
  15. By: Kaveh Bastani; Elham Asgari; Hamed Namavari
    Abstract: This paper proposes a two-stage scoring approach to help lenders decide their fund allocations in the peer-to-peer (P2P) lending market. Existing scoring approaches focus on either probability of default (PD) prediction, known as credit scoring, or profitability prediction, known as profit scoring, to identify the best loans for investment. Credit scoring fails to deliver what lenders mainly need: how much profit they may obtain through their investment. Profit scoring can satisfy that need by predicting investment profitability, but it completely ignores the class imbalance problem, namely that most past loans are non-default; ignoring this imbalance significantly affects the accuracy of profitability prediction. Our proposed two-stage scoring approach integrates credit scoring and profit scoring to address these challenges. More specifically, stage 1 is designed as credit scoring to identify non-default loans, with the imbalanced nature of loan status taken into account in PD prediction. The loans identified as non-default then move to stage 2 for prediction of profitability, measured by the internal rate of return. Wide and deep learning is used to build the predictive models in both stages, to achieve both memorization and generalization. Extensive numerical studies based on real-world data verify the effectiveness of the proposed approach and indicate that it outperforms existing credit scoring and profit scoring approaches. (A simplified two-stage pipeline follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.03466&r=cmp
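    A simplified two-stage pipeline on synthetic data, with random forests standing in for the paper's wide-and-deep networks and class weighting standing in for its imbalance treatment:
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                                 # loan/borrower features
    default = (X[:, 0] + rng.normal(size=500) < -1).astype(int)   # imbalanced labels
    irr = 0.08 + 0.02 * X[:, 1] + 0.03 * rng.normal(size=500)     # realised IRR

    # Stage 1: credit scoring, with class weighting addressing the imbalance.
    stage1 = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, default)

    # Stage 2: profit scoring (predicted IRR), fitted on loans classified non-default.
    keep = stage1.predict(X) == 0
    stage2 = RandomForestRegressor(random_state=0).fit(X[keep], irr[keep])

    # Rank new loans: pass stage 1, then order by predicted profitability.
    X_new = rng.normal(size=(50, 8))
    ok = stage1.predict(X_new) == 0
    print(np.argsort(-stage2.predict(X_new[ok]))[:5])             # top-5 candidates
    ```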
  16. By: Goller, Daniel; Knaus, Michael C.; Lechner, Michael; Okasa, Gabriel
    Abstract: We predict the probabilities of a draw, a home win, and an away win for the games of the German Football Bundesliga (BL1) with a new machine-learning estimator, using the (large) amount of information available up to that date. We use these individual predictions to simulate a league table for every game day until the end of the season. This combination of a (stochastic) simulation approach with machine learning allows us to make statements about the likelihood that a particular team reaches a specific place in the final league table (champion, relegation, etc.). The machine-learning algorithm used builds on a recent development, the Ordered Random Forest. This estimator generalises common estimators like ordered probit or ordered logit maximum likelihood and recovers essentially the same output as these standard estimators, such as the probabilities of the alternatives conditional on covariates. The approach is already in use, and results for the current season can be found at www.sew.unisg.ch/soccer_analytics. (A minimal season-simulation sketch follows this entry.)
    Keywords: Prediction, Machine Learning, Random Forest, Soccer, Bundesliga
    JEL: C53
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2018:11&r=cmp
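    A minimal version of the simulation step, assuming per-game outcome probabilities are already available (in the paper they come from the Ordered Random Forest); fixtures and probabilities below are invented:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_season(fixtures, probs, points, n_sims=10_000):
        """Draw each remaining game from its predicted (home win, draw, away win)
        probabilities and accumulate points, giving a distribution over final tables."""
        totals = np.tile(points.astype(float), (n_sims, 1))
        for (h, a), p in zip(fixtures, probs):
            out = rng.choice(3, size=n_sims, p=p)          # 0 home, 1 draw, 2 away
            totals[:, h] += np.where(out == 0, 3, np.where(out == 1, 1, 0))
            totals[:, a] += np.where(out == 2, 3, np.where(out == 1, 1, 0))
        return totals

    fixtures = [(0, 1), (2, 3), (0, 2)]                    # (home, away) team indices
    probs = [[0.5, 0.3, 0.2], [0.4, 0.3, 0.3], [0.6, 0.2, 0.2]]
    totals = simulate_season(fixtures, probs, np.array([30, 28, 25, 20]))
    print((totals.argmax(axis=1) == 0).mean())             # P(team 0 finishes first)
    ```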
  17. By: Claude Meidinger (Centre d'Economie de la Sorbonne - Université Paris1 Panthéon-Sorbonne)
    Abstract: Discussions about the existence of culture in non-human species often centre on the question of whether these species possess sufficient cognitive complexity to imitate others. According to many authors, imitation is a cognitively sophisticated process that depends on a functionally abstract representation of a problem and its solution, something that non-human species do not seem to possess. However, the fast evolution of cognitive performance and of complex inventions in human beings cannot be explained solely by an improved rate of innovation in individual learning and/or an improved process of imitation. Such cumulative evolution also depends on a wider social organization characterized by an increase in the size of social networks. The simulations presented here show how such an increase, considered jointly with the diversity of learning processes, allows a better understanding of the major transitions observed in the cultural evolution of primates and human beings.
    Keywords: learning processes; cumulative cultural evolution; social networks; simulations
    JEL: Z1 C63 C92
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:18023&r=cmp
  18. By: Jie Ding; Vahid Tarokh; Yuhong Yang
    Abstract: In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power. Whatever data and fitting procedures are employed, a crucial step is to select the most appropriate model or method from a set of candidates. Model selection is a key ingredient in data analysis for reliable and reproducible statistical inference or prediction, and thus central to scientific studies in fields such as ecology, economics, engineering, finance, political science, biology, and epidemiology. There is a long history of model selection techniques arising from research in statistics, information theory, and signal processing. A considerable number of methods have been proposed, following different philosophies and exhibiting varying performance. The purpose of this article is to provide a comprehensive overview of them, in terms of their motivation, large-sample performance, and applicability. We provide integrated and practically relevant discussions of the theoretical properties of state-of-the-art model selection approaches. We also share our thoughts on some controversial views on the practice of model selection.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.09583&r=cmp
  19. By: Avraam Tsantekidis; Nikolaos Passalis; Anastasios Tefas; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
    Abstract: The surge in Deep Learning (DL) research over the past decade has provided solutions to many difficult problems. The field of quantitative analysis has been slowly adapting the new methods to its problems, but significant challenges, such as the non-stationary nature of financial data, must be overcome before DL can be fully utilized. In this work, a new method to construct stationary features that allows DL models to be applied effectively is proposed. These features are thoroughly tested on the task of predicting mid-price movements from the limit order book. Several DL models are evaluated, including recurrent Long Short Term Memory (LSTM) networks and Convolutional Neural Networks (CNN). Finally, a novel model that combines the ability of CNNs to extract useful features with the ability of LSTMs to analyze time series is proposed and evaluated. The combined model outperforms the individual LSTM and CNN models on the prediction horizons tested. (A hedged example of such a stationarising transform follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.09965&r=cmp
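    One hedged example of a stationarising transform in the spirit of the abstract (the paper's exact features may differ): z-scored log-returns instead of raw prices:
    ```python
    import numpy as np

    def stationary_feature(mid_prices, window=100):
        """Log-returns z-scored over a rolling window: one possible stationary
        transform of raw (non-stationary) order-book prices."""
        r = np.diff(np.log(mid_prices))
        z = np.full_like(r, np.nan)
        for t in range(window, len(r)):
            w = r[t - window:t]
            z[t] = (r[t] - w.mean()) / (w.std() + 1e-12)
        return z

    prices = 100 * np.exp(np.cumsum(0.001 * np.random.default_rng(0).normal(size=1000)))
    print(np.nanstd(stationary_feature(prices)))   # roughly unit scale
    ```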
  20. By: Eduardo Abi Jaber (CEREMADE)
    Abstract: How can the classical Heston model be reconciled with its rough counterpart? We introduce a lifted version of the Heston model with n factors, sharing the same Brownian motion but mean-reverting at different speeds. Our model nests as extreme cases the classical Heston model (when n = 1) and the rough Heston model (when n goes to infinity). We show that the lifted model enjoys the best of both worlds: Markovianity, and satisfactory fits of implied volatility smiles for short maturities with very few parameters. Further, our approach speeds up calibration and opens the door to time-efficient simulation schemes. (A rough Euler sketch of the multi-factor structure follows this entry.)
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.04868&r=cmp
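    A rough Euler sketch of the structure described in the abstract: n factors driven by one Brownian motion, mean-reverting at different speeds; the parametrisation is illustrative, not the paper's exact specification:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lifted_variance_path(c, x, lam, nu, v0, T=1.0, steps=500):
        """Euler sketch of a lifted-Heston-type variance: factors U[i] share ONE
        Brownian increment, mean-revert at speeds x[i], and recombine with
        weights c[i] into the variance v."""
        dt = T / steps
        U = np.zeros(len(c))
        v, path = v0, [v0]
        for _ in range(steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            vp = max(v, 0.0)                                   # keep variance usable
            U = U + (-x * U - lam * vp) * dt + nu * np.sqrt(vp) * dW
            v = v0 + c @ U
            path.append(v)
        return np.array(path)

    path = lifted_variance_path(c=np.full(5, 0.2), x=np.geomspace(0.1, 10.0, 5),
                                lam=1.0, nu=0.3, v0=0.04)
    print(path[-1])
    ```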
  21. By: Stefano Ciliberti; Stanislao Gualdi
    Abstract: The role of portfolio construction in the implementation of equity market neutral factors is often underestimated. Taking the classical momentum strategy as an example, we show that properly handling this key step can significantly improve the strategy's main features. More precisely, an optimized portfolio construction algorithm significantly improves the Sharpe ratio, reduces sector exposures and volatility fluctuations, and mitigates the strategy's skewness and tail correlation with the market. These results are supported by long-term, world-wide simulations and are shown to be universal. Our findings are quite general and hold for a number of other "equity factors". Finally, we discuss the details of a more realistic set-up in which we also account for transaction costs.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.08384&r=cmp
  22. By: Bell, Peter
    Abstract: Investors face many different versions of The Portfolio Problem. Consider, for example, holding shares and call options on a publicly-traded equity. The options are in-the-money and live. How should the investor best go about exercising those options? They could fund the exercise from capital, or use the secondary market as follows: when the market price is above the strike price, it may be possible to sell shares into the market in advance of exercising the call options, an operation that can yield residual cash or shares. How much of this should an investor do, and when? This paper presents a specific numerical example in which we trade out of options when the market price breaches a 2:1 ratio to the strike price, and provides descriptive statistics for investors' wealth in simulations with standard Gaussian motion for the share price and a specific trading rule. (A simplified simulation of this rule follows this entry.)
    Keywords: Finance, Trading, Derivatives,
    JEL: C00 G00
    Date: 2018–10–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:89360&r=cmp
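    A simplified simulation of the stated rule under Gaussian share-price motion; all parameter values are invented, and the unexercised case is valued at zero for brevity:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_rule(s0=8.0, strike=8.0, n_options=100, sigma=3.0,
                      T=2.0, steps=500, ratio=2.0, n_paths=20_000):
        """At the first time the price breaches ratio*strike, sell just enough
        shares at market to cover the exercise cost, exercise the calls, and
        hold the residual shares to T."""
        dt = T / steps
        s = s0 + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, steps)).cumsum(axis=1)
        wealth = np.zeros(n_paths)
        for i in range(n_paths):
            hit = np.argmax(s[i] >= ratio * strike)
            if s[i, hit] >= ratio * strike:                  # rule triggered
                sold = n_options * strike / s[i, hit]        # shares sold to fund exercise
                wealth[i] = (n_options - sold) * s[i, -1]    # residual position at T
        return wealth.mean(), wealth.std()

    print(simulate_rule())   # descriptive statistics of terminal wealth
    ```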
  23. By: Quentin Perrier (CIRED); Philippe Quirion (CIRED, CNRS)
    Abstract: The threat of climate change requires investment to be shifted rapidly from fossil fuels towards low-carbon sectors, and this shift generates heated debate about its impact on employment. Although many employment studies exist, the economic mechanisms at play remain unclear. Using stylized CGE and IO models, we identify and discuss three channels of job creation resulting from an investment shift: positive employment impacts arise from targeting sectors with a high labour share in value added, low wages, and low import rates. Results are robust across both models, except for the last channel, which operates only in IO. We then undertake a numerical analysis of two policies: solar panel installation and weatherproofing. Both investments yield a positive effect on employment, a result which is robust across models, due to the high labour share and low wages in these sectors. The results are roughly similar in IO and CGE for solar; for weatherproofing, the results are higher in IO, by a factor ranging from 1.19 to 1.87, because of low import rates. Our conclusions challenge the idea that renewable energies boost employment by reducing imports, but they also suggest that an employment double dividend might exist when encouraging low-carbon labour-intensive sectors. (A toy input-output employment calculation follows this entry.)
    Keywords: Renewable energies, Investment, Employment, CGE, Input-Output
    JEL: C67 C68 E24 Q42 Q43
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:fae:wpaper:2018.13&r=cmp
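    A toy input-output calculation of the employment channel discussed above: the jobs effect of a demand shift via the Leontief inverse, with made-up coefficients:
    ```python
    import numpy as np

    def employment_impact(A, labour_coeff, d_shift):
        """Leontief IO employment effect of a final-demand shift d_shift:
        delta_jobs = l' (I - A)^{-1} d_shift, with l = jobs per unit of output."""
        n = A.shape[0]
        leontief_inverse = np.linalg.inv(np.eye(n) - A)
        return labour_coeff @ leontief_inverse @ d_shift

    # Toy example: shift one unit of demand from a fossil sector (index 0) to a
    # more labour-intensive low-carbon sector (index 1).
    A = np.array([[0.2, 0.1], [0.3, 0.2]])   # interindustry coefficients
    l = np.array([0.5, 1.5])                 # jobs per unit of output, by sector
    print(employment_impact(A, l, np.array([-1.0, 1.0])))   # net job effect
    ```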
  24. By: Thomas Douenne (Paris School of Economics)
    Abstract: This paper proposes a micro-simulation assessment of the distributional impacts of the French carbon tax. It shows that the policy is regressive, but could be made progressive by redistributing the revenue through flat recycling. However, it would still generate large horizontal distributive effects and harm an important share of low-income households. The determinants of the tax incidence are characterized precisely, and alternative targeted transfers are simulated on this basis. The paper shows that, given the importance of unobserved heterogeneity in the determinants of energy consumption, horizontal distributive effects are much harder to tackle than vertical ones. (A stylized illustration of flat recycling follows this entry.)
    Keywords: Energy taxes, Distributional effects, Demand-System, Micro-simulation
    JEL: D12 H23 I32
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:fae:wpaper:2018.10&r=cmp
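    A stylized numerical illustration of flat recycling, with an invented expenditure rule and a noise term playing the role of unobserved heterogeneity:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Income-sorted households; the energy budget share falls with income, so the
    # tax alone is regressive. Refunding the mean tax to every household (flat
    # recycling) turns the average NET burden progressive, while the noise term
    # keeps large differences WITHIN each decile (horizontal effects).
    income = np.sort(rng.lognormal(10.0, 0.5, 10_000))
    energy_share = 0.10 * (income / income.mean()) ** -0.3
    tax = 0.02 * energy_share * income * rng.lognormal(0.0, 0.3, 10_000)
    net = tax - tax.mean()                                  # flat recycling of revenue
    for d, grp in enumerate(np.array_split(net / income, 10), 1):
        print(f"decile {d}: mean net burden {grp.mean():+.5f}")
    ```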
  25. By: Pfeiffer, Friedhelm; Stichnoth, Holger
    Abstract: Based on an extension of the ZEW microsimulation model of the German tax, levy and transfer system, this study examines fiscal and individual net returns and rates of return to educational investments for young adults for the year 2016, and compares them with earlier estimates for 2012. According to the results, the fiscal rate of return to education for 2016 is 20.6% per apprentice, 5.0% per university student, and 10.2% for a hypothetical combination of vocational training and university study. While individual rates of return to education based on gross income average more than 12%, they shrink to roughly 6% on average after taxes and social security contributions and the withdrawal of transfers. This empirically illustrates the extent of the significant interdependencies between education, tax and social policy. Compared with 2012, fiscal returns have risen somewhat for vocational training and fallen somewhat for university study.
    Keywords: educational investments, returns to education, tax and transfer system
    JEL: I21 I28 J31
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:18043&r=cmp
  26. By: Thiess Büttner; Manuela Krause
    Abstract: This paper exploits a recent devolution of tax-setting powers in the German federation to study the effects of fiscal equalization on subnational governments' tax policy. Based on an analysis of the system of fiscal equalization transfers, we argue that the redistribution of revenues gives states incentives to raise rather than lower their tax rates. The empirical analysis exploits differences in fiscal redistribution among the states and over time. Using a comprehensive simulation model, the paper computes the tax-policy incentives faced by each state over the years and explores their empirical effects on tax policy. The results support significant and substantial effects: facing full equalization, a state is predicted to set its real estate transfer tax rate about 1.3 percentage points higher than without equalization. Our analysis also shows that the incentive to raise tax rates is amplified by the equalization system, because the states' decisions to raise their tax rates have intensified fiscal redistribution over time.
    Keywords: fiscal equalization, tax autonomy, real estate transfer tax
    JEL: H77 H24 R38
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7260&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.