nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒08‒23
eighteen papers chosen by
Stan Miles
Thompson Rivers University

  1. InfoGram and Admissible Machine Learning By Subhadeep Mukhopadhyay
  2. Two-Stage Sector Rotation Methodology Using Machine Learning and Deep Learning Techniques By Tugce Karatas; Ali Hirsa
  3. Trade When Opportunity Comes: Price Movement Forecasting via Locality-Aware Attention and Adaptive Refined Labeling By Liang Zeng; Lei Wang; Hui Niu; Jian Li; Ruchen Zhang; Zhonghao Dai; Dewei Zhu; Ling Wang
  4. Learning from Zero: How to Make Consumption-Saving Decisions in a Stochastic Environment with an AI Algorithm By Rui (Aruhan) Shi
  5. Simulation and estimation of an agent-based market-model with a matching engine By Ivan Jericevich; Patrick Chang; Tim Gebbie
  6. Contracting, pricing, and data collection under the AI flywheel effect By Huseyin Gurkan; Francis de Véricourt
  7. Combining Machine Learning Classifiers for Stock Trading with Effective Feature Extraction By A. K. M. Amanat Ullah; Fahim Imtiaz; Miftah Uddin Md Ihsan; Md. Golam Rabiul Alam; Mahbub Majumdar
  8. Adoption of Machine Learning Systems for Medical Diagnostics in Clinics: A Qualitative Interview Study By Pumplun, Luisa; Fecho, Mariska; Wahl, Nihal; Peters, Felix; Buxmann, Peter
  9. Machine Learning and Mobile Phone Data Can Improve the Targeting of Humanitarian Assistance By Emily Aiken; Suzanne Bellue; Dean Karlan; Christopher R. Udry; Joshua Blumenstock
  10. Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist By Alexander Trott; Sunil Srinivasa; Douwe van der Wal; Sebastien Haneuse; Stephan Zheng
  11. Testing competing world trade models against the facts of world trade By Minford, Patrick; Xu, Yongdeng; Dong, Xue
  12. Persuading Investors: A Video-Based Study By Allen Hu; Song Ma
  13. Stochastic loss reserving with mixture density neural networks By Muhammed Taher Al-Mudafer; Benjamin Avanzi; Greg Taylor; Bernard Wong
  14. Supervised Neural Networks for Illiquid Alternative Asset Cash Flow Forecasting By Tugce Karatas; Federico Klinkert; Ali Hirsa
  15. Should the United States Rejoin the Trans-Pacific Trade Deal? By Itakura, Ken; Lee, Hiro
  16. Dynamic Currency Hedging with Ambiguity By Pawel Polak; Urban Ulrych
  17. Seeing the Forest for the Trees: using hLDA models to evaluate communication in Banco Central do Brasil By Angelo M. Fasolo; Flávia M. Graminho; Saulo B. Bastos
  18. SimUAM: A Comprehensive Microsimulation Toolchain to Evaluate the Impact of Urban Air Mobility in Metropolitan Areas By Yedavalli, Pavan; Burak Onat, Emin; Peng, Xi; Sengupta, Raja; Waddell, Paul; Bulusu, Vishwanath; Xue, Min

  1. By: Subhadeep Mukhopadhyay
    Abstract: We have entered a new era of machine learning (ML), where the most accurate algorithm with superior predictive power may not even be deployable unless it is admissible under regulatory constraints. This has led to great interest in developing fair, transparent and trustworthy ML methods. The purpose of this article is to introduce a new information-theoretic learning framework (admissible machine learning) and algorithmic risk-management tools (InfoGram, L-features, ALFA-testing) that can guide an analyst to redesign off-the-shelf ML methods to be regulatory compliant, while maintaining good prediction accuracy. We illustrate our approach using several real-data examples from the financial sector, biomedical research, marketing campaigns, and the criminal justice system.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.07380&r=
  2. By: Tugce Karatas; Ali Hirsa
    Abstract: Market indicators such as CPI and GDP have been widely used for decades to identify the stage of the business cycle and the investment attractiveness of sectors given market conditions. In this paper, we propose a two-stage methodology that consists of predicting ETF prices for each sector using market indicators and ranking sectors based on their predicted rates of return. We start by choosing sector-specific macroeconomic indicators and implement the Recursive Feature Elimination algorithm to select the most important features for each sector. Using our prediction tool, we implement different Recurrent Neural Network models to predict the future ETF prices for each sector. We then rank the sectors based on their predicted rates of return. We select the best performing model by evaluating the annualized return, annualized Sharpe ratio, and Calmar ratio of the portfolios that include the top four sectors ranked by the model. We also test the robustness of the model performance with respect to lookback and look-ahead windows. Our empirical results show that our methodology beats the equally weighted portfolio even in the long run. We also find that Echo State Networks exhibit outstanding performance compared to the other models while also being faster to implement than the other RNN models.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02838&r=
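The feature-selection stage described in this abstract can be illustrated with a small sketch. This is not the authors' code: the indicator data are synthetic and the setup is invented for illustration; it only shows Recursive Feature Elimination ranking candidate predictors of one sector's ETF returns.

```python
# Minimal sketch of Recursive Feature Elimination (RFE) for selecting the
# most important macroeconomic indicators for one sector (all data synthetic).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_obs, n_indicators = 200, 8
X = rng.normal(size=(n_obs, n_indicators))  # stand-in macro indicators
# Make the ETF return depend only on indicators 0 and 3, plus noise.
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n_obs)

# RFE repeatedly fits the estimator and drops the weakest feature.
selector = RFE(LinearRegression(), n_features_to_select=2)
selector.fit(X, y)
selected = np.flatnonzero(selector.support_)
print("selected indicator columns:", selected)
```

The selected columns would then feed the sector-specific RNN prediction stage.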
  3. By: Liang Zeng; Lei Wang; Hui Niu; Jian Li; Ruchen Zhang; Zhonghao Dai; Dewei Zhu; Ling Wang
    Abstract: Price movement forecasting aims at predicting the future trends of financial assets based on the current market conditions and other relevant information. Recently, machine learning (ML) methods have become increasingly popular and achieved promising results for price movement forecasting in both academia and industry. Most existing ML solutions formulate the forecasting problem as a classification (to predict the direction) or a regression (to predict the return) problem over the entire set of training data. However, due to the extremely low signal-to-noise ratio and stochastic nature of financial data, good trading opportunities are extremely scarce. As a result, without careful selection of potentially profitable samples, such ML methods are prone to capturing patterns of noise instead of real signals. To address these issues, we propose a novel framework, LARA (Locality-Aware Attention and Adaptive Refined Labeling), which contains the following three components: 1) Locality-aware attention automatically extracts the potentially profitable samples by attending to their label information in order to construct a more accurate classifier on these selected samples. 2) Adaptive refined labeling iteratively refines the labels, alleviating the noise in the samples. 3) Equipped with metric learning techniques, locality-aware attention enjoys task-specific distance metrics and distributes attention over potentially profitable samples more effectively. To validate our method, we conduct comprehensive experiments on three real-world financial markets: ETFs, China's A-share stock market, and the cryptocurrency market. LARA achieves superior performance compared with time-series analysis methods and a set of machine-learning-based competitors on the Qlib platform. Extensive ablation studies and experiments demonstrate that LARA indeed captures more reliable trading opportunities.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.11972&r=
  4. By: Rui (Aruhan) Shi
    Abstract: This exercise offers an innovative learning mechanism to model an economic agent's decision-making process using a deep reinforcement learning algorithm. In particular, this AI agent is born into an economic environment with no information on the underlying economic structure or its own preference. I model how the AI agent learns from square one in terms of how it collects and processes information. It is able to learn in real time through constantly interacting with the environment and adjusting its actions accordingly (i.e., online learning). I illustrate that the economic agent under deep reinforcement learning adapts to changes in a given environment in real time. AI agents differ in their ways of collecting and processing information, and this leads to different learning behaviours and welfare differences. The chosen economic structure can be generalised to other decision-making processes and economic models.
    Keywords: expectation formation, exploration, deep reinforcement learning, bounded rationality, stochastic optimal growth
    JEL: C45 D83 D84 E21 E70
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_9255&r=
  5. By: Ivan Jericevich; Patrick Chang; Tim Gebbie
    Abstract: An agent-based model with interacting low frequency liquidity takers intermediated by high-frequency liquidity providers acting collectively as market makers can be used to provide realistic simulated price impact curves. This is possible when agent-based model interactions occur asynchronously via order matching using a matching engine in event time to replace sequential calendar time market clearing. Here the matching engine infrastructure has been modified to provide a continuous feed of order confirmations and updates as message streams in order to conform more closely to live trading environments. The resulting trade and quote message data from the simulations are then aggregated, calibrated and visualised. Various stylised facts are presented along with event visualisations and price impact curves. We argue that additional realism in modelling can be achieved with a small set of agent parameters and simple interaction rules once interactions are reactive, asynchronous and in event time. We argue that the reactive nature of market agents may be a fundamental property of financial markets and when accounted for can allow for parsimonious modelling without recourse to additional sources of noise.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.07806&r=
  6. By: Huseyin Gurkan (ESMT European School of Management and Technology); Francis de Véricourt (ESMT European School of Management and Technology)
    Abstract: This paper explores how firms that lack expertise in machine learning (ML) can leverage the so-called AI Flywheel effect. This effect designates a virtuous cycle by which, as an ML product is adopted and new user data are fed back to the algorithm, the product improves, enabling further adoptions. However, managing this feedback loop is difficult, especially when the algorithm is contracted out. Indeed, the additional data that the AI Flywheel effect generates may change the provider's incentives to improve the algorithm over time. We formalize this problem in a simple two-period moral hazard framework that captures the main dynamics among ML, data acquisition, pricing, and contracting. We find that the firm's decisions crucially depend on how the amount of data on which the machine is trained interacts with the provider's effort. If this effort has a more (less) significant impact on accuracy for larger volumes of data, the firm underprices (overprices) the product. Interestingly, these distortions sometimes improve social welfare, which accounts for the customer surplus and profits of both the firm and provider. Further, the interaction between incentive issues and the positive externalities of the AI Flywheel effect has important implications for the firm's data collection strategy. In particular, the firm can boost its profit by increasing the product's capacity to acquire usage data only up to a certain level. If the product collects too much data per user, the firm's profit may actually decrease, i.e., more data is not necessarily better.
    Keywords: Data, machine learning, data product, pricing, incentives, contracting
    Date: 2020–03–03
    URL: http://d.repec.org/n?u=RePEc:esm:wpaper:esmt-20-01_r3&r=
  7. By: A. K. M. Amanat Ullah; Fahim Imtiaz; Miftah Uddin Md Ihsan; Md. Golam Rabiul Alam; Mahbub Majumdar
    Abstract: The unpredictability and volatility of the stock market render it challenging to make a substantial profit using any generalized scheme. This paper discusses our machine learning model, which can make a significant amount of profit in the US stock market by performing live trading on the Quantopian platform while using resources free of cost. Our top approach was to use ensemble learning with four classifiers: Gaussian Naive Bayes, Decision Tree, Logistic Regression with L1 regularization, and Stochastic Gradient Descent, to decide whether to go long or short on a particular stock. Our best model performed daily trades between July 2011 and January 2019, generating a 54.35% profit. Finally, our work showed that mixtures of weighted classifiers perform better than any individual predictor at making trading decisions in the stock market.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.13148&r=
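The four-classifier ensemble described above can be sketched as a majority-vote combiner. This is an illustrative stand-in, not the paper's weighted scheme or its feature extraction; the features and long/short labels here are synthetic.

```python
# Sketch of an ensemble of the four classifier families named in the abstract,
# combined by majority ("hard") voting into a long/short signal.
# Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))            # stand-in engineered features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = go long, 0 = go short

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("l1_logit", LogisticRegression(penalty="l1", solver="liblinear")),
        ("sgd", SGDClassifier(random_state=0)),
    ],
    voting="hard",                       # each classifier gets one vote
)
ensemble.fit(X, y)
print("train accuracy:", ensemble.score(X, y))
```

The paper weights its classifiers rather than voting equally; that variant would swap the voting rule for learned weights.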
  8. By: Pumplun, Luisa; Fecho, Mariska; Wahl, Nihal; Peters, Felix; Buxmann, Peter
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:127993&r=
  9. By: Emily Aiken; Suzanne Bellue; Dean Karlan; Christopher R. Udry; Joshua Blumenstock
    Abstract: The COVID-19 pandemic has devastated many low- and middle-income countries (LMICs), causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have mobilized targeted social assistance programs. Targeting is a central challenge in the administration of these programs: given available data, how does one rapidly identify the individuals and families with the greatest need? This challenge is particularly acute in the large number of LMICs that lack recent and comprehensive data on household income and wealth. Here we show that non-traditional “big” data from satellites and mobile phone networks can improve the targeting of anti-poverty programs. Our approach uses traditional survey-based measures of consumption and wealth to train machine learning algorithms that recognize patterns of poverty in non-traditional data; the trained algorithms are then used to prioritize aid to the poorest regions and mobile subscribers. We evaluate this approach by studying Novissi, Togo’s flagship emergency cash transfer program, which used these algorithms to determine eligibility for a rural assistance program that disbursed millions of dollars in COVID-19 relief aid. Our analysis compares outcomes – including exclusion errors, total social welfare, and measures of fairness – under different targeting regimes. Relative to the geographic targeting options considered by the Government of Togo at the time, the machine learning approach reduces errors of exclusion by 4-21%. Relative to methods that require a comprehensive social registry (a hypothetical exercise; no such registry exists in Togo), the machine learning approach increases exclusion errors by 9-35%. These results highlight the potential for new data sources to contribute to humanitarian response efforts, particularly in crisis settings when traditional data are missing or out of date.
    JEL: C55 I32 I38 O12 O38
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:29070&r=
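The targeting pipeline described above (train on survey-based measures, predict poverty from non-traditional data, prioritize the poorest) can be sketched as follows. All features, coefficients, and the eligibility cutoff are invented for illustration; the paper's actual data and models are far richer.

```python
# Sketch of ML-based targeting: fit a regressor of survey-measured consumption
# on (synthetic) phone-usage features, then mark the poorest predicted 20%
# of subscribers as eligible for assistance. Everything here is illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 500
phone_features = rng.normal(size=(n, 4))  # e.g. calls, data use, top-ups (assumed)
consumption = (2.0 + phone_features[:, 0] + 0.5 * phone_features[:, 2]
               + 0.2 * rng.normal(size=n))  # survey-based ground truth

model = GradientBoostingRegressor(random_state=0).fit(phone_features, consumption)
predicted = model.predict(phone_features)

# Target the poorest 20% of subscribers by predicted consumption.
cutoff = np.quantile(predicted, 0.2)
eligible = predicted <= cutoff
print("share eligible:", eligible.mean())
```

Exclusion errors, as compared in the abstract, would be measured by how many truly poor households fall outside the eligible set.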
  10. By: Alexander Trott; Sunil Srinivasa; Douwe van der Wal; Sebastien Haneuse; Stephan Zheng
    Abstract: Optimizing economic and public policy is critical to address socioeconomic issues and trade-offs, e.g., improving equality, productivity, or wellness, and poses a complex mechanism design problem. A policy designer needs to consider multiple objectives, policy levers, and behavioral responses from strategic actors who optimize for their individual objectives. Moreover, real-world policies should be explainable and robust to simulation-to-reality gaps, e.g., due to calibration issues. Existing approaches are often limited to a narrow set of policy levers or objectives that are hard to measure, do not yield explicit optimal policies, or do not consider strategic behavior, for example. Hence, it remains challenging to optimize policy in real-world scenarios. Here we show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning (RL) and data-driven simulations. We validate our framework on optimizing the stringency of US state policies and Federal subsidies during a pandemic, e.g., COVID-19, using a simulation fitted to real data. We find that log-linear policies trained using RL significantly improve social welfare, based on both public health and economic outcomes, compared to past outcomes. Their behavior can be explained, e.g., well-performing policies respond strongly to changes in recovery and vaccination rates. They are also robust to calibration errors, e.g., infection rates that are over- or underestimated. To date, real-world policymaking has not seen large-scale adoption of machine learning methods, including RL and AI-driven simulations. Our results show the potential of AI to guide policy design and improve social welfare amidst the complexity of the real world.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02904&r=
  11. By: Minford, Patrick (Cardiff Business School); Xu, Yongdeng (Cardiff Business School); Dong, Xue (Zhejiang University of Finance and Economics)
    Abstract: We carry out an indirect inference test of two versions of a computable general equilibrium (CGE) model of world trade. One of these, the ‘classical’ model, is well-known as the Heckscher-Ohlin-Samuelson model of world trade, in which countries trade homogeneous products in world markets and produce according to their comparative advantage as determined by their resource endowments. The other, the ‘gravity’ model, assumes products are differentiated by geographical origin, so that trade is determined largely by demand and relative prices differing according to distance; trade in turn affects productivity through technology transfer. These two CGE models of world trade behave in very different ways and predict quite different effects for trade policy, underlining the importance of discovering which best fits the facts of international trade. Our findings here are that the classical model fits these facts fairly well in general, while the gravity model is strongly rejected by them.
    Keywords: Bootstrap, indirect inference, gravity model, classical trade model, trade
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2021/20&r=
  12. By: Allen Hu; Song Ma
    Abstract: Persuasive communication functions not only through content but also delivery, e.g., facial expression, tone of voice, and diction. This paper examines the persuasiveness of delivery in start-up pitches. Using machine learning (ML) algorithms to process full pitch videos, we quantify persuasion in visual, vocal, and verbal dimensions. Positive (i.e., passionate, warm) pitches increase funding probability. Yet conditional on funding, high-positivity startups underperform. Women are more heavily judged on delivery when evaluating single-gender teams, but they are neglected when co-pitching with men in mixed-gender teams. Using an experiment, we show persuasion delivery works mainly through leading investors to form inaccurate beliefs.
    JEL: C55 D91 G24 G4 G41
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:29048&r=
  13. By: Muhammed Taher Al-Mudafer; Benjamin Avanzi; Greg Taylor; Bernard Wong
    Abstract: Neural networks offer a versatile, flexible and accurate approach to loss reserving. However, such applications have focused primarily on the (important) problem of fitting accurate central estimates of the outstanding claims. In practice, properties regarding the variability of outstanding claims are equally important (e.g., quantiles for regulatory purposes). In this paper we fill this gap by applying a Mixture Density Network ("MDN") to loss reserving. The approach combines a neural network architecture with a mixture Gaussian distribution to achieve simultaneously an accurate central estimate along with flexible distributional choice. Model fitting is done using a rolling-origin approach. Our approach consistently outperforms the classical over-dispersed model for both central estimates and quantiles of interest, when applied to a wide range of simulated environments of various complexity and specifications. We further propose two extensions to the MDN approach. Firstly, we present a hybrid GLM-MDN approach called "ResMDN". This hybrid approach balances the tractability and ease of understanding of a traditional GLM model on one hand with the additional accuracy and distributional flexibility provided by the MDN on the other. We show that it can successfully reduce the errors of the baseline ccODP, although there is generally a loss of performance compared to the MDN in the examples we considered. Secondly, we allow for explicit projection constraints, so that actuarial judgement can be directly incorporated into the modelling process. Throughout, we focus on aggregate loss triangles, and show that our methodologies are tractable and outperform traditional approaches even with relatively limited amounts of data. We use both simulated data, to validate properties, and real data, to illustrate and ascertain the practicality of the approaches.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.07924&r=
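The core of a mixture density network, as used above, is an output layer that maps raw network outputs to Gaussian-mixture parameters plus a negative log-likelihood loss. A minimal numpy sketch of just that layer follows; the surrounding architecture and the actuarial data are omitted, and the example numbers are arbitrary.

```python
# Sketch of the MDN output layer: raw outputs -> mixture weights (softmax),
# means, and positive scales (softplus); loss = Gaussian-mixture NLL.
import numpy as np

def mdn_params(raw, k):
    """Split raw outputs of shape (n, 3k) into weights, means, scales."""
    logits, mu, s = raw[:, :k], raw[:, k:2 * k], raw[:, 2 * k:]
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)   # softmax -> weights sum to 1
    sigma = np.log1p(np.exp(s))           # softplus -> strictly positive scales
    return pi, mu, sigma

def mdn_nll(y, pi, mu, sigma):
    """Mean negative log-likelihood of y under the Gaussian mixture."""
    z = (y[:, None] - mu) / sigma
    comp = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log((pi * comp).sum(axis=1)).mean()

raw = np.array([[0.0, 0.0, -1.0, 1.0, 0.5, 0.5]])  # one sample, k=2 components
pi, mu, sigma = mdn_params(raw, k=2)
print("weights:", pi)
print("nll at y=0:", mdn_nll(np.array([0.0]), pi, mu, sigma))
```

Reserve quantiles then come from the fitted mixture distribution rather than from the central estimate alone.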
  14. By: Tugce Karatas; Federico Klinkert; Ali Hirsa
    Abstract: Institutional investors have been increasing their allocation to illiquid alternative assets such as private equity funds in their portfolios, yet the literature on cash flow forecasting of illiquid alternative assets is very limited. The net cash flow of private equity funds typically follows a J-curve pattern; however, the timing and the size of the contributions and distributions depend on the investment opportunities. In this paper, we develop a benchmark model and present two novel approaches (direct vs. indirect) to predict the cash flows of private equity funds. We introduce a sliding window approach to apply to our cash flow data because funds of different vintage years contain different lengths of cash flow information. We then pass the data to an LSTM/GRU model to predict the future cash flows either directly or indirectly (based on the benchmark model). We further integrate macroeconomic indicators into our data, which allows us to consider the impact of the market environment on cash flows and to apply stress testing. Our results indicate that the direct model is easier to implement than the benchmark and indirect models, yet its predicted cash flows still align better with the actual cash flows. We also show that macroeconomic variables improve the performance of the direct model, whereas the impact is not obvious for the indirect model.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02853&r=
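The sliding-window preprocessing mentioned above can be sketched concretely: funds of different vintage years yield cash-flow series of different lengths, and each series is cut into fixed-length (input window, next value) pairs that an LSTM/GRU can train on. The window length and the toy J-curve series below are illustrative, not the paper's data.

```python
# Sketch of the sliding-window construction for variable-length cash-flow
# series (toy data with a stylized J-curve shape; window length assumed).
import numpy as np

def sliding_windows(series, window):
    """Turn one cash-flow series into (input windows, next-step targets)."""
    xs, ys = [], []
    for t in range(len(series) - window):
        xs.append(series[t:t + window])
        ys.append(series[t + window])
    return np.array(xs), np.array(ys)

# Two funds with different history lengths (different vintage years).
fund_a = [-10, -8, -5, -1, 3, 6, 8]
fund_b = [-12, -7, -2, 2, 5]

X, y = [], []
for fund in (fund_a, fund_b):
    xs, ys = sliding_windows(fund, window=3)
    X.append(xs)
    y.append(ys)
X, y = np.vstack(X), np.concatenate(y)
print(X.shape, y.shape)  # (6, 3) inputs: 4 windows from fund_a, 2 from fund_b
```

The stacked pairs are what a recurrent model would consume, regardless of each fund's original series length.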
  15. By: Itakura, Ken; Lee, Hiro
    Abstract: Before the Trans-Pacific Partnership (TPP) entered into force, the United States withdrew from the trade accord. Eleven other TPP signatories decided to revive the agreement, which led to the implementation of the Comprehensive and Progressive Agreement for TPP (CPTPP). The objectives of this paper are to estimate economic welfare effects under alternative scenarios of TPP/CPTPP, to evaluate the extent of losses to the US from its withdrawal from TPP and expected gains from rejoining the Trans-Pacific trade accord, and to examine whether the US economy would have to undergo extensive sectoral adjustments from its participation. We employ a dynamic computable general equilibrium (CGE) model to examine these issues. The results suggest that the US loses an opportunity to gain 0.4 percent in its economic welfare by withdrawing from TPP, but it would be able to recover most of its projected welfare gains by reengaging with CPTPP. Since sectoral output adjustments in the US are small, its adjustment costs from participation in CPTPP would be limited. In addition, there exist political incentives for the US to become a member of this trade accord.
    Keywords: TPP, CPTPP, US, GTAP, CGE model
    JEL: F13 F14 F15 F17
    Date: 2021–06–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:109133&r=
  16. By: Pawel Polak (Stony Brook University); Urban Ulrych (University of Zurich - Department of Banking and Finance; Swiss Finance Institute)
    Abstract: This paper establishes a general relation between an investor's ambiguity and the non-Gaussianity of financial asset returns. Based on that relation, and utilizing a flexible non-Gaussian returns model for the joint distribution of portfolio and currency returns, we develop an ambiguity-adjusted dynamic currency hedging strategy for international investors. We propose an extended filtered historical simulation that combines Monte Carlo simulation based on volatility clustering patterns with the semi-parametric non-normal return distribution from historical data. This simulation allows us to incorporate an investor's ambiguity into the dynamic currency hedging algorithm, which can numerically optimize an arbitrary risk measure, such as volatility, value-at-risk, or expected shortfall. The out-of-sample back-test results show that, for globally diversified investors, the derived dynamic currency hedging strategy with ambiguity is stable, robust, and highly effective at reducing risk. It outperforms the benchmarks of constant hedging as well as dynamic approaches without ambiguity in terms of lower maximum drawdown and higher Sharpe and Sortino ratios, both gross and net of transaction costs.
    Keywords: Currency Hedging, Ambiguity, Filtered Historical Simulation, Expected Shortfall, Non-Gaussianity, International Asset Allocation, Currency Risk Management
    JEL: C53 C58 F31 G11 G15
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2160&r=
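The filtered historical simulation underlying the strategy above can be sketched in a simplified form: devolatilize returns with an EWMA volatility filter, resample the standardized residuals, and rescale by current volatility to build scenarios from which a risk measure such as expected shortfall is read off. The EWMA decay and the toy return data are assumptions; the paper's ambiguity adjustment and semi-parametric extension are not reproduced.

```python
# Simplified filtered historical simulation (FHS) sketch on toy fat-tailed
# returns. The EWMA decay lambda is a RiskMetrics-style assumption.
import numpy as np

rng = np.random.default_rng(3)
returns = 0.01 * rng.standard_t(df=5, size=1000)  # toy fat-tailed returns

lam = 0.94                                        # EWMA decay (assumed)
var = np.empty_like(returns)
var[0] = returns.var()
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
sigma = np.sqrt(var)

residuals = returns / sigma                       # devolatilized innovations
current_sigma = sigma[-1]
scenarios = current_sigma * rng.choice(residuals, size=10_000, replace=True)

# One-day 99% value-at-risk and expected shortfall from the scenarios.
var_99 = np.quantile(scenarios, 0.01)
es_99 = scenarios[scenarios <= var_99].mean()
print("99% VaR:", var_99, "99% ES:", es_99)
```

A hedging strategy would minimize such a scenario-based risk measure over hedge ratios rather than just report it.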
  17. By: Angelo M. Fasolo; Flávia M. Graminho; Saulo B. Bastos
    Abstract: Central bank communication is a key tool in managing inflation expectations. This paper proposes a hierarchical Latent Dirichlet Allocation (hLDA) model combined with feature selection techniques to allow an endogenous selection of topic structures associated with documents published by Banco Central do Brasil's Monetary Policy Committee (Copom). These computational linguistic techniques allow building measures of the content and tone of Copom's minutes and statements. The effects of the tone are measured in different dimensions such as inflation, inflation expectations, economic activity, and economic uncertainty. Beyond the impact on the economy, the hLDA model is used to evaluate the coherence between the statements and the minutes of Copom's meetings.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:555&r=
  18. By: Yedavalli, Pavan; Burak Onat, Emin; Peng, Xi; Sengupta, Raja; Waddell, Paul; Bulusu, Vishwanath; Xue, Min
    Abstract: Over the past several years, Urban Air Mobility (UAM) has galvanized enthusiasm from investors and researchers, marrying expertise in aircraft design, transportation, logistics, artificial intelligence, battery chemistry, and broader policymaking. However, two significant questions remain unexplored: (1) What is the value of UAM in a region's transportation network? (2) How can UAM be effectively deployed to realize and maximize this value for all stakeholders, including riders and local economies? To adequately understand the value proposition of UAM for metropolitan areas, we develop a holistic multi-modal toolchain, SimUAM, to model and simulate UAM and its impacts on travel behavior. This toolchain has several components: (1) MANTA: a fast, high-fidelity regional-scale traffic microsimulator; (2) VertiSim: a granular, discrete-event vertiport and pedestrian simulator; (3) FE3: a high-fidelity, trajectory-based aerial microsimulator. SimUAM, rooted in granular, GPU-based microsimulation, models millions of trips and their exact movements in the street network and in the air, producing interpretable and actionable performance metrics for UAM designs and deployments. The modularity, extensibility, and speed of the platform allow for rapid scenario planning and sensitivity analysis, effectively acting as a detailed performance assessment tool. As a result, stakeholders in UAM can understand the impacts of critical infrastructure, and subsequently define policies, requirements, and investments needed to support UAM as a viable transportation mode.
    Keywords: Engineering
    Date: 2021–08–01
    URL: http://d.repec.org/n?u=RePEc:cdl:itsrrp:qt5709d8vr&r=

This nep-cmp issue is ©2021 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.