nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒09‒16
27 papers chosen by

  1. Mortality rate forecasting: can recurrent neural networks beat the Lee-Carter model? By Gábor Petneházi; József Gáll
  2. Usage of artificial neural networks in data classification By Elda Xhumari; Julian Fejzaj
  3. Multiway Cluster Robust Double/Debiased Machine Learning By Harold D. Chiang; Kengo Kato; Yukun Ma; Yuya Sasaki
  4. Validating Weak-form Market Efficiency in United States Stock Markets with Trend Deterministic Price Data and Machine Learning By Samuel Showalter; Jeffrey Gropp
  5. Systemic Risk Clustering of China Internet Financial Based on t-SNE Machine Learning Algorithm By Mi Chuanmin; Xu Runjie; Lin Qingtong
  6. Deep Prediction Of Investor Interest: a Supervised Clustering Approach By Baptiste Barreau; Laurent Carlier; Damien Challet
  7. Robust pricing and hedging of options on multiple assets and its numerics By Stephan Eckstein; Gaoyue Guo; Tongseok Lim; Jan Obloj
  8. State Drug Policy Effectiveness: Comparative Policy Analysis of Drug Overdose Mortality By Jarrod Olson; Po-Hsu Allen Chen; Marissa White; Nicole Brennan; Ning Gong
  9. Employment of advanced approach to control inventory level by monitoring Safety Stock in Supply Chain under Uncertain environment By Riyadh Jamegh; AllaEldin Kassam; Sawsan Sabih
  10. Machine Learning in Least-Squares Monte Carlo Proxy Modeling of Life Insurance Companies By Anne-Sophie Krah; Zoran Nikolić; Ralf Korn
  11. Artificial Intelligence Market Disruption By Julia M. Puaschunder
  12. Regulating the doom loop By Alogoskoufis, Spyros; Langfield, Sam
  13. De-biased Machine Learning for Compliers By Rahul Singh; Liyang Sun
  14. Virtual Historical Simulation for estimating the conditional VaR of large portfolios By Christian Francq; Jean-Michel Zakoian
  15. Myopic Agents in Assessments of Economic Conditions: Application of Weakly Supervised Learning and Text Mining By Masahiro Kato
  16. Automatic Financial Trading Agent for Low-risk Portfolio Management using Deep Reinforcement Learning By Wonsup Shin; Seok-Jun Bu; Sung-Bae Cho
  17. An analytical perturbative solution to the Merton Garman model using symmetries By Xavier Calmet; Nathaniel Wiesendanger Shaw
  18. Combining Family History and Machine Learning to Link Historical Records By Joseph Price; Kasey Buckles; Jacob Van Leeuwen; Isaac Riley
  19. Benchmarking with uncertain data: a simulation study comparing alternative methods By Jens Leth Hougaard; Pieter Jan Kerstens; Kurt Nielsen
  20. Tehran Stock Exchange Prediction Using Sentiment Analysis of Online Textual Opinions By Arezoo Hatefi Ghahfarrokhi; Mehrnoush Shamsfard
  21. Using Wasserstein Generative Adversarial Networks for the Design of Monte Carlo Simulations By Susan Athey; Guido Imbens; Jonas Metzger; Evan Munro
  22. A fixed-point policy-iteration-type algorithm for symmetric nonzero-sum stochastic impulse games By Diego Zabaljauregui
  23. Deep Prediction of Investor Interest: a Supervised Clustering Approach By Baptiste Barreau; Laurent Carlier; Damien Challet
  24. Economic Operation of Grid-Connected Microgrid By Multiverse Optimization Algorithm By Mahdavi, Sadegh; Bayat, Alireza; Mirzaei, Farzad
  25. Using data mining techniques on Moodle data for classification of students' learning styles By Alda Kika; Loreta Leka; Suela Maxhelaku; Ana Ktona
  26. Targeting customers for profit: An ensemble learning framework to support marketing decision-making By Stefan Lessmann; Kristof Coussement; Koen W. de Bock; Johannes Haupt
  27. Boosting the Hodrick-Prescott Filter By Peter C.B. Phillips; Zhentao Shi

  1. By: Gábor Petneházi; József Gáll
    Abstract: This article applies a long short-term memory recurrent neural network to mortality rate forecasting. The model can be trained jointly on the mortality rate history of different countries, ages, and sexes. The RNN-based method seems to outperform the popular Lee-Carter model.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.05501&r=all
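    A minimal Python sketch of the kind of LSTM setup the abstract describes (window length, layer sizes, and data shapes are assumed for illustration, not taken from the paper):
      # Toy LSTM regression on mortality-rate windows (synthetic placeholder data).
      import numpy as np
      import tensorflow as tf

      WINDOW = 10                                 # years of history per example (assumed)
      rates = np.random.rand(500, WINDOW, 1)      # stand-in for real mortality rates
      targets = np.random.rand(500, 1)            # next-year rate per series

      model = tf.keras.Sequential([
          tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
          tf.keras.layers.Dense(1, activation="sigmoid"),  # rates lie in (0, 1)
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(rates, targets, epochs=10, batch_size=32, verbose=0)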
  2. By: Elda Xhumari (University of Tirana, Faculty of Natural Sciences, Department of Informatics); Julian Fejzaj (University of Tirana, Faculty of Natural Sciences, Department of Informatics)
    Abstract: Data classification is broadly defined as the process of organizing data into categories so that it can be used and protected more efficiently. Data classification is performed for different purposes, one of the most common being the preservation of data privacy. It often involves a number of attributes that determine the type of data, its confidentiality, and its integrity. Neural networks help solve many different problems; they perform very well on data classification tasks and can, in principle, classify data with arbitrary precision.
    Keywords: Artificial Neural Networks, Data Classification, Naïve Bayes, Discriminant Analysis, Nearest Neighbor
    JEL: C45
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:9211565&r=all
  3. By: Harold D. Chiang; Kengo Kato; Yukun Ma; Yuya Sasaki
    Abstract: This paper investigates double/debiased machine learning (DML) under multiway clustered sampling environments. We propose a novel multiway cross fitting algorithm and a multiway DML estimator based on this algorithm. Simulations indicate that the proposed procedure has favorable finite sample performance.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03489&r=all
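    A plain single-way DML partialling-out sketch in Python; the paper's multiway cross-fitting generalizes this residual-on-residual step (all data simulated here):
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 5))                   # controls
      D = X[:, 0] + rng.normal(size=1000)              # treatment
      Y = 0.5 * D + X[:, 1] + rng.normal(size=1000)    # outcome, true effect 0.5

      res_y, res_d = np.zeros_like(Y), np.zeros_like(D)
      for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
          # Cross-fitting: nuisance models are fit on one fold, used on the other.
          res_y[test] = Y[test] - RandomForestRegressor().fit(X[train], Y[train]).predict(X[test])
          res_d[test] = D[test] - RandomForestRegressor().fit(X[train], D[train]).predict(X[test])

      theta = (res_d @ res_y) / (res_d @ res_d)        # debiased effect estimate
      print(theta)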
  4. By: Samuel Showalter; Jeffrey Gropp
    Abstract: The Efficient Market Hypothesis has been a staple of economics research for decades. In particular, weak-form market efficiency -- the notion that past prices cannot predict future performance -- is strongly supported by econometric evidence. In contrast, machine learning algorithms implemented to predict stock price have been touted, to varying degrees, as successful. Moreover, some data scientists boast the ability to garner above-market returns using price data alone. This study endeavors to connect existing econometric research on weak-form efficient markets with data science innovations in algorithmic trading. First, a traditional exploration of stationarity in stock index prices over the past decade is conducted with Augmented Dickey-Fuller and Variance Ratio tests. Then, an algorithmic trading platform is implemented with the use of five machine learning algorithms. Econometric findings identify potential stationarity, hinting that technical evaluation may be possible, though algorithmic trading results find little predictive power in any machine learning model, even when using trend-specific metrics. Accounting for transaction costs and risk, no system achieved above-market returns consistently. Our findings reinforce the validity of weak-form market efficiency.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.05151&r=all
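    A sketch of the stationarity check the abstract mentions, using statsmodels' Augmented Dickey-Fuller test on a simulated price series rather than the study's index data:
      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      prices = 100 + np.cumsum(np.random.normal(size=2500))  # random-walk stand-in
      adf_stat, pvalue, *_ = adfuller(prices)
      print(f"ADF statistic={adf_stat:.3f}, p-value={pvalue:.3f}")
      # A high p-value means a unit root cannot be rejected: the series looks
      # non-stationary, consistent with weak-form efficiency.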
  5. By: Mi Chuanmin; Xu Runjie; Lin Qingtong
    Abstract: With the rapid development of Internet finance, a large number of studies have shown that Internet financial platforms exhibit different systemic risk characteristics when subjected to macroeconomic shocks or fragile internal crises. From the perspective of the regional development of Internet finance, this paper applies the t-SNE machine learning algorithm to mine China's Internet finance development index, covering 31 provinces and 335 cities and regions. The results exhibit peak and fat-tail characteristics; on this basis, the paper proposes a three-class classification of Internet financial systemic risk, providing more regionally targeted recommendations for managing the systemic risk of Internet finance.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03808&r=all
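    A minimal t-SNE embedding sketch with scikit-learn; the feature matrix shape is assumed (335 regions by some index features), not the paper's actual data:
      import numpy as np
      from sklearn.manifold import TSNE

      index_data = np.random.rand(335, 20)   # placeholder: regions x index features
      embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(index_data)
      print(embedding.shape)                 # (335, 2): coordinates for risk clustering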
  6. By: Baptiste Barreau (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes - CentraleSupélec, BNPP CIB GM Lab - BNP Paribas CIB Global Markets Data & AI Lab); Laurent Carlier (BNPP CIB GM Lab - BNP Paribas CIB Global Markets Data & AI Lab); Damien Challet (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes - CentraleSupélec)
    Abstract: We propose a novel deep learning architecture suitable for the prediction of investor interest for a given asset in a given timeframe. This architecture performs both investor clustering and modelling at the same time. We first verify its superior performance on a simulated scenario inspired by real data and then apply it to a large proprietary database from BNP Paribas Corporate and Institutional Banking.
    Keywords: investor activity prediction,deep learning,neural networks,mixture of experts,clustering
    Date: 2019–09–02
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02276055&r=all
  7. By: Stephan Eckstein; Gaoyue Guo; Tongseok Lim; Jan Obloj
    Abstract: We consider robust pricing and hedging for options written on multiple assets given market option prices for the individual assets. The resulting problem is called the multi-marginal martingale optimal transport problem. We propose two numerical methods to solve such problems: using discretisation and linear programming applied to the primal side and using penalisation and deep neural networks optimisation applied to the dual side. We prove convergence for our methods and compare their numerical performance. We show how adding further information about call option prices at additional maturities can be incorporated and narrows down the no-arbitrage pricing bounds. Finally, we obtain structural results for the case of the payoff given by a weighted sum of covariances between the assets.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03870&r=all
  8. By: Jarrod Olson; Po-Hsu Allen Chen; Marissa White; Nicole Brennan; Ning Gong
    Abstract: Opioid overdose rates have reached an epidemic level and state-level policy innovations have followed suit in an effort to prevent overdose deaths. State-level drug law is a set of policies that may reinforce or undermine each other, and analysts have a limited set of tools for handling the policy collinearity using statistical methods. This paper uses a machine learning method called hierarchical clustering to empirically generate "policy bundles" by grouping states with similar sets of policies in force at a given time together for analysis in a 50-state, 10-year interrupted time series regression with drug overdose deaths as the dependent variable. Policy clusters were generated from 138 binomial variables observed by state and year from the Prescription Drug Abuse Policy System. Clustering reduced the policies to a set of 10 bundles. The approach allows for ranking of the relative effect of different bundles and is a tool to recommend those most likely to succeed. This study shows that a set of policies balancing Medication Assisted Treatment, Naloxone Access, Good Samaritan Laws, Prescription Drug Monitoring Programs and legalization of medical marijuana leads to a reduced number of overdose deaths, but not until its second year in force.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.01936&r=all
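    A hierarchical-clustering sketch of the bundling step, using scipy on simulated binary policy vectors (the linkage method and data are illustrative, not the paper's choices):
      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage

      policies = np.random.randint(0, 2, size=(500, 138))  # state-years x policy flags
      Z = linkage(policies, method="ward")
      bundles = fcluster(Z, t=10, criterion="maxclust")    # cut the tree into 10 bundles
      print(np.bincount(bundles)[1:])                      # bundle sizes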
  9. By: Riyadh Jamegh (Baghdad Governorate); AllaEldin Kassam (Baghdad Governorate); Sawsan Sabih (Baghdad Governorate)
    Abstract: In order to overcome uncertainty and the resulting inability to meet customers' demand, organizations tend to keep a certain safety stock level. In this paper, the researchers use soft computing to identify the optimal safety stock level (SSL); the fuzzy model uses a dynamic concept to cope with a highly complex environment and to control the inventory. The proposed approach deals with the demand stability level, the raw material availability level, and the on-hand inventory level, using fuzzy logic to obtain the SSL. In this approach, demand stability, raw material availability, and on-hand inventory are described linguistically and treated by the inference rules of the fuzzy model to extract the best safety stock level. The approach was applied to a numerical case study from the dairy industry, using a 200 g yogurt cup product.
    Keywords: Inventory optimization, soft computing, safety stock optimization, dairy industries
    JEL: C63
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:8711585&r=all
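    A toy fuzzy-inference sketch in plain Python with triangular memberships and two rules; the paper's actual rule base and membership shapes are not given here, so everything below is an assumption:
      def tri(x, a, b, c):
          """Triangular membership function rising from a, peaking at b, falling to c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def safety_stock_level(demand_stability, material_availability):
          """Inputs scaled to [0, 1]; output is a crude weighted-rule blend."""
          unstable = tri(demand_stability, -0.5, 0.0, 0.6)
          scarce = tri(material_availability, -0.5, 0.0, 0.6)
          fire_high = max(unstable, scarce)   # rule: unstable OR scarce -> high SSL (1.0)
          fire_low = 1.0 - fire_high          # rule: otherwise -> low SSL (0.2)
          return (fire_high * 1.0 + fire_low * 0.2) / (fire_high + fire_low)

      print(safety_stock_level(0.3, 0.8))     # fairly unstable demand, good supply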
  10. By: Anne-Sophie Krah; Zoran Nikoli\'c; Ralf Korn
    Abstract: Under the Solvency II regime, life insurance companies are asked to derive their solvency capital requirements from the full loss distributions over the coming year. Since the industry is currently far from being endowed with sufficient computational capacities to fully simulate these distributions, the insurers have to rely on suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. In this paper, we present and analyze various adaptive machine learning approaches that can take over the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants over GLM and GAM methods to MARS and kernel regression routines. We justify the combinability of their regression ingredients in a theoretical discourse. Further, we illustrate the approaches in slightly disguised real-world experiments and perform comprehensive out-of-sample tests.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.02182&r=all
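    An ordinary least-squares proxy sketch in the LSMC spirit: fit a polynomial loss proxy over risk factors from a few fitting points (all numbers synthetic; the paper studies many richer regression variants):
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(1)
      risk_factors = rng.uniform(-1, 1, size=(200, 3))   # e.g. rate, equity, lapse shocks
      losses = risk_factors[:, 0] ** 2 + risk_factors[:, 1] + rng.normal(0, 0.1, 200)

      design = PolynomialFeatures(degree=2).fit_transform(risk_factors)
      proxy = LinearRegression().fit(design, losses)
      # The fitted proxy is then evaluated cheaply inside the full loss simulation.
      print(proxy.score(design, losses))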
  11. By: Julia M. Puaschunder (The New School, Department of Economics)
    Abstract: The introduction of Artificial Intelligence in our contemporary society imposes historically unique challenges for humankind. The emerging autonomy of AI holds unique potentials of eternal life of robots, AI and algorithms alongside unprecedented economic superiority, data storage and computational advantages. Yet to this day, it remains unclear what impact AI taking over the workforce will have on economic growth.
    Keywords: AI, AI-GDP Index, AI market entry, Artificial Intelligence, capital, economic growth, endogenous growth, exogenous growth, Global Connectivity Index, GDP, Gross Domestic Product, labor, law and economics, society, State of the Mobile Internet Connectivity, workforce
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:smo:dpaper:01jp&r=all
  12. By: Alogoskoufis, Spyros; Langfield, Sam
    Abstract: Euro area governments have committed to break the doom loop between banks and sovereigns. But policymakers disagree on how to treat sovereign exposures in bank regulation. Our contribution is to model endogenous sovereign portfolio reallocation by banks in response to regulatory reform. Simulations highlight a tension between concentration and credit risk in portfolio reallocation. Resolving this tension requires regulatory reform to be complemented by an expansion in the portfolio opportunity set to include an area-wide low-risk asset. By reinvesting into such an asset, banks would reduce both their concentration and credit risk exposure.
    Keywords: Bank regulation, sovereign risk, systemic risk
    JEL: G01 G11 G21 G28
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20192313&r=all
  13. By: Rahul Singh; Liyang Sun
    Abstract: Instrumental variable identification is a concept in causal statistics for estimating the counterfactual effect of treatment D on output Y controlling for covariates X using observational data. Even when measurements of (Y,D) are confounded, the treatment effect on the subpopulation of compliers can nonetheless be identified if an instrumental variable Z is available, which is independent of (Y,D) conditional on X and the unmeasured confounder. We introduce a de-biased machine learning (DML) approach to estimating complier parameters with high-dimensional data. Complier parameters include local average treatment effect, average complier characteristics, and complier counterfactual outcome distributions. In our approach, the de-biasing is itself performed by machine learning, a variant called de-biased machine learning via regularized Riesz representers (DML-RRR). We prove our estimator is consistent, asymptotically normal, and semi-parametrically efficient. In experiments, our estimator outperforms state of the art alternatives. We use it to estimate the effect of 401(k) participation on the distribution of net financial assets.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.05244&r=all
  14. By: Christian Francq; Jean-Michel Zakoian
    Abstract: In order to estimate the conditional risk of a portfolio's return, two strategies can be advocated. A multivariate strategy requires estimating a dynamic model for the vector of risk factors, which is often challenging, when at all possible, for large portfolios. A univariate approach based on a dynamic model for the portfolio's return seems more attractive. However, when the combination of the individual returns is time varying, the portfolio's return series is typically nonstationary, which may invalidate statistical inference. An alternative approach consists in reconstituting a "virtual portfolio", whose returns are built using the current composition of the portfolio and for which a stationary dynamic model can be estimated. This paper establishes the asymptotic properties of this method, which we call Virtual Historical Simulation. Numerical illustrations on simulated and real data are provided.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.04661&r=all
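    A sketch of the virtual-portfolio idea: apply today's composition to past individual returns and read off an empirical VaR (weights and returns simulated for illustration):
      import numpy as np

      rng = np.random.default_rng(2)
      asset_returns = rng.normal(0, 0.01, size=(1000, 50))  # 1000 days x 50 assets
      current_weights = np.full(50, 1 / 50)                 # today's composition

      virtual_returns = asset_returns @ current_weights     # reconstituted return series
      var_99 = -np.quantile(virtual_returns, 0.01)          # 99% one-day VaR
      print(f"99% VaR: {var_99:.4f}")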
  15. By: Masahiro Kato
    Abstract: We reveal the psychological bias of economic agents in their judgments of future economic conditions by applying behavioral economics and weakly supervised learning. In the Economy Watcher Survey, which is a dataset published by the Japanese government, there are assessments of current and future economic conditions by people with various occupations. Although this dataset gives essential insights regarding economic policy to the Japanese government and the central bank of Japan, there is no clear definition of future economic conditions. Hence, in the survey, respondents answer their assessments based on their interpretations of the future. In our research, we classify the text data using learning from positive and unlabeled data (PU learning), which is a method of weakly supervised learning. The dataset is composed of several periods, and we develop a new algorithm of PU learning for efficient training with the dataset. Through empirical analysis, we show the interpretation of the classification results from the viewpoint of behavioral economics.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03348&r=all
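    A classic Elkan-Noto style PU-learning sketch: treat unlabeled examples as negative, then rescale by the estimated labeling probability (data simulated; the paper develops a different, period-aware PU algorithm):
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      X = rng.normal(size=(2000, 10))
      true_y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)
      labeled = (true_y == 1) & (rng.random(2000) < 0.3)   # only some positives labeled

      clf = LogisticRegression().fit(X, labeled.astype(int))
      c = clf.predict_proba(X[labeled])[:, 1].mean()       # estimate of P(labeled | positive)
      p_positive = clf.predict_proba(X)[:, 1] / c          # corrected positive scores
      print(p_positive[:5])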
  16. By: Wonsup Shin; Seok-Jun Bu; Sung-Bae Cho
    Abstract: The autonomous trading agent is one of the most actively studied areas of artificial intelligence for solving the capital market portfolio management problem. The two primary goals of the portfolio management problem are maximizing profit and restraining risk. However, most approaches to this problem solely take account of maximizing returns. Therefore, this paper proposes a deep reinforcement learning based trading agent that manages the portfolio considering not only profit maximization but also risk restraint. We also propose a new target policy that allows the trading agent to learn to prefer low-risk actions. The new target policy can be reflected in the update by adjusting the greediness for the optimal action through a hyperparameter. The proposed trading agent is verified on data from the cryptocurrency market, which is a good test-ground for trading agents because a huge amount of data accumulates every minute and market volatility is extremely large. In our experiments, during the test period, the agent achieved a return of 1800% and provided the least risky investment strategy among the existing methods. Another experiment shows that the agent can maintain robust generalized performance even when market volatility is large or the training period is short.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03278&r=all
  17. By: Xavier Calmet; Nathaniel Wiesendanger Shaw
    Abstract: In this paper, we introduce an analytical perturbative solution to the Merton Garman model. It is obtained by doing perturbation theory around the exact analytical solution of a model which possesses a two-dimensional Galilean symmetry. We compare our perturbative solution of the Merton Garman model to Monte Carlo simulations and find that our solution performs surprisingly well for a wide range of parameters. We also show how to use symmetries to build option pricing models. Our results demonstrate that the concept of symmetry is important in mathematical finance.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.01413&r=all
  18. By: Joseph Price; Kasey Buckles; Jacob Van Leeuwen; Isaac Riley
    Abstract: A key challenge for research on many questions in the social sciences is that it is difficult to link historical records in a way that allows investigators to observe people at different points in their life or across generations. In this paper, we develop a new approach that relies on millions of record links created by individual contributors to a large, public, wiki-style family tree. First, we use these “true” links to inform the decisions one needs to make when using traditional linking methods. Second, we use the links to construct a training data set for use in supervised machine learning methods. We describe the procedure we use and illustrate the potential of our approach by linking individuals across the 100% samples of the US decennial censuses from 1900, 1910, and 1920. We obtain an overall match rate of about 70 percent, with a false positive rate of about 12 percent. This combination of high match rate and accuracy represents a point beyond the current frontier for record linking methods.
    JEL: C81 J1 N01
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26227&r=all
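    A supervised record-linking sketch: score candidate record pairs from simple comparison features, trained on known links (the features and model here are invented placeholders, not the paper's):
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(4)
      # Columns: name similarity, birth-year gap, birthplace match (all synthetic).
      pair_features = rng.random((5000, 3))
      is_match = (pair_features[:, 0] > 0.8).astype(int)   # stand-in for tree-derived links

      model = RandomForestClassifier(n_estimators=100).fit(pair_features, is_match)
      scores = model.predict_proba(pair_features)[:, 1]
      # Pairs above a chosen score threshold become proposed census links.
      print(scores[:5])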
  19. By: Jens Leth Hougaard (Department of Food and Resource Economics, University of Copenhagen; Economics, NYU Shanghai); Pieter Jan Kerstens (Department of Food and Resource Economics, University of Copenhagen); Kurt Nielsen (Department of Food and Resource Economics, University of Copenhagen)
    Abstract: We consider efficiency measurement methods in the presence of uncertain input and output data, and without the (empirically problematic) assumption of convexity of the production technology. In particular, we perform a simulation study in order to contrast two well-established methods, IDEA and Fuzzy DEA, with a recently suggested extension of Fuzzy DEA in the literature (dubbed the HB method). We demonstrate that the HB method has important advantages over the conventional methods, resulting in more accurate efficiency estimates and narrower bounds for the efficiency scores of individual Decision Making Units (DMUs): thereby providing more informative results that may lead to more effective decisions. The price is computational complexity. Although we show how to significantly speed up computational time compared to the original suggestion, the HB method remains the most computationally heavy method among those considered. This may limit the use of the method in cases where efficiency estimates have to be computed on the fly, as in interactive decision support systems based on large data sets.
    Keywords: data envelopment analysis, data uncertainty, fuzzy, imprecise data envelopment analysis, simulation
    JEL: C61 D24
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2019_05&r=all
  20. By: Arezoo Hatefi Ghahfarrokhi; Mehrnoush Shamsfard
    Abstract: In this paper, we investigate for the first time the impact of social media data on predicting Tehran Stock Exchange (TSE) variables. We consider the closing price and daily return of three different stocks for this investigation. We collected our social media data from Sahamyab.com/stocktwits over about three months. To extract information from online comments, we propose a hybrid sentiment analysis approach that combines lexicon-based and learning-based methods. Since the lexicons available for the Persian language are not practical for sentiment analysis in the stock market domain, we built a dedicated sentiment lexicon for this domain. After designing and calculating daily sentiment indices from the sentiment of the comments, we examine their impact on baseline models that use only historical market data and propose new predictor models using multiple regression analysis. In addition to the sentiments, we also examine the comment volume and the users' reliabilities. We conclude that the predictability of various stocks in TSE differs depending on their attributes. Moreover, we show that for predicting the closing price only the comment volume is useful, whereas for predicting the daily return both the volume and the sentiment of the comments could be useful. We demonstrate that Users' Trust coefficients behave differently toward the three stocks.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03792&r=all
  21. By: Susan Athey; Guido Imbens; Jonas Metzger; Evan Munro
    Abstract: Researchers often use artificial data to assess the performance of new econometric methods. In many cases the data generating processes used in these Monte Carlo studies do not resemble real data sets and instead reflect many arbitrary decisions made by the researchers. As a result potential users of the methods are rarely persuaded by these simulations that the new methods are as attractive as the simulations make them out to be. We discuss the use of Wasserstein Generative Adversarial Networks (WGANs) as a method for systematically generating artificial data that mimic closely any given real data set without the researcher having many degrees of freedom. We apply the methods to compare in three different settings twelve different estimators for average treatment effects under unconfoundedness. We conclude in this example that (i) there is not one estimator that outperforms the others in all three settings, and (ii) that systematic simulation studies can be helpful for selecting among competing methods.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.02210&r=all
  22. By: Diego Zabaljauregui
    Abstract: Nonzero-sum stochastic differential games with impulse controls offer a realistic and far-reaching modelling framework for applications within finance, energy markets and other areas, but the difficulty in solving such problems has hindered their proliferation. Semi-analytical approaches make strong assumptions pertaining to very particular cases. To the author's best knowledge, the only numerical method in the literature is the heuristic one we put forward to solve an underlying system of quasi-variational inequalities. Focusing on symmetric games, this paper presents a simpler and more efficient fixed-point policy-iteration-type algorithm which removes the strong dependence on the initial guess and the relaxation scheme of the previous method. A rigorous convergence analysis is undertaken with natural assumptions on the players' strategies, which admit graph-theoretic interpretations in the context of weakly chained diagonally dominant matrices. A provably convergent single-player impulse control solver, often outperforming classical policy iteration, is also provided. The main algorithm is used to compute with high precision equilibrium payoffs and Nash equilibria of otherwise too challenging problems, and even some for which results go beyond the scope of all the currently available theory.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.03574&r=all
  23. By: Baptiste Barreau; Laurent Carlier; Damien Challet
    Abstract: We propose a novel deep learning architecture suitable for the prediction of investor interest for a given asset in a given timeframe. This architecture performs both investor clustering and modelling at the same time. We first verify its superior performance on a simulated scenario inspired by real data and then apply it to a large proprietary database from BNP Paribas Corporate and Institutional Banking.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.05289&r=all
  24. By: Mahdavi, Sadegh; Bayat, Alireza; Mirzaei, Farzad
    Abstract: In this paper, a new optimization algorithm known as the Multiverse Optimization Algorithm (MOA) is developed for optimal economic operation of the microgrid (MG) in the grid-connected mode. Results show the merit of the proposed technique.
    Keywords: Economic dispatch, Power Market, Power Economic, Energy Management
    JEL: A1 C0 G0 H0 L0 P0
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:95893&r=all
  25. By: Alda Kika (University of Tirana, Faculty of Natural Sciences); Loreta Leka (University of Tirana, Faculty of Natural Sciences); Suela Maxhelaku (University of Tirana, Faculty of Natural Sciences); Ana Ktona (University of Tirana, Faculty of Natural Sciences)
    Abstract: Building an adaptive e-learning system based on learning styles is a very challenging task. Two approaches are mainly used to determine students' learning styles: questionnaires or data mining techniques applied to LMS log data. In order to build an adaptive Moodle LMS based on learning styles, we aim to construct and use a mixed approach. 63 students from two courses that attended the same subject, “User interface”, completed the ILS (Index of Learning Styles) questionnaire based on the Felder-Silverman model. This learning style model is used to assess preferences on four dimensions (active/reflective, sensing/intuitive, visual/verbal, and sequential/global). Moodle keeps detailed logs of all activities that students perform, which can be used to predict the learning style for each dimension. In this paper we have analyzed students' log data from the Moodle LMS using data mining techniques to classify their learning styles, focusing on one dimension of the Felder-Silverman learning style: visual/verbal. Several classification algorithms provided by WEKA, such as the J48 decision tree classifier, Naive Bayes, and PART, are compared. A 10-fold cross validation was used to evaluate the selected classifiers. The experiments showed that Naive Bayes reached the best result at 71.18% accuracy.
    Keywords: Learning styles; Felder-Silverman learning style model; Weka; Moodle; data mining
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:9211567&r=all
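    The evaluation step translated into a scikit-learn sketch: 10-fold cross-validation of a Naive Bayes classifier on log-derived features (features simulated; the paper uses WEKA on real Moodle logs):
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(5)
      log_features = rng.random((63, 8))    # 63 students x activity counts
      style = rng.integers(0, 2, 63)        # visual (0) vs verbal (1) label

      scores = cross_val_score(GaussianNB(), log_features, style, cv=10)
      print(f"mean accuracy: {scores.mean():.4f}")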
  26. By: Stefan Lessmann; Kristof Coussement (LEM - Lille économie management - LEM - UMR 9221 - Université de Lille - UCL - Université catholique de Lille - CNRS - Centre National de la Recherche Scientifique); Koen W. de Bock (Audencia Recherche - Audencia Business School); Johannes Haupt
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02275955&r=all
  27. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Zhentao Shi (The Chinese University of Hong Kong)
    Abstract: The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L_2-boosting in machine learning. The paper develops limit theory to show that the boosted HP filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks – the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the boosted HP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility.
    Keywords: Boosting, Cycles, Empirical macroeconomics, Hodrick-Prescott filter, Machine learning, Nonstationary time series, Trends, Unit root processes
    JEL: C22 E20
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2192&r=all
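    A boosted-HP sketch: re-apply the HP smoother to the residual cycle and accumulate the trend, as in L_2-boosting (a fixed iteration count stands in for the paper's automated stopping criterion):
      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      rng = np.random.default_rng(6)
      series = np.cumsum(rng.normal(size=400))   # stochastic-trend stand-in

      trend = np.zeros_like(series)
      cycle = series.copy()
      for _ in range(5):                         # boosting iterations (chosen by hand)
          cycle, step_trend = hpfilter(cycle, lamb=1600)
          trend += step_trend
      print(trend[-1], cycle[-1])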

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.