nep-cmp New Economics Papers
on Computational Economics
Issue of 2017‒10‒29
eleven papers chosen by



  1. When Inequality Matters for Macro and Macro Matters for Inequality By SeHyoun Ahn; Greg Kaplan; Benjamin Moll; Thomas Winberry; Christian Wolf
  2. Sequential Design and Spatial Modeling for Portfolio Tail Risk Measurement By Michael Ludkovski; James Risk
  3. Geometric Learning and Filtering in Finance By Anastasia Kratsios; Cody B. Hyndman
  4. State Space Approach to Adaptive Fuzzy Modeling: Application to Financial Investment By Masafumi Nakano; Akihiko Takahashi; Soichiro Takahashi
  5. Modelling a small open economy using a wavelet-based control model By Hudgins, David; Crowley, Patrick M.
  6. Arbitrage-Free Regularization By Anastasia Kratsios; Cody B. Hyndman
  7. Model economic phenomena with CART and Random Forest algorithms By Benjamin David
  8. The Day of the Week Effect in the Crypto Currency Market By Guglielmo Maria Caporale; Alex Plastun
  9. Mean Field Game Approach to Production and Exploration of Exhaustible Commodities By Michael Ludkovski; Xuwei Yang
  10. Ranking Firms Using Revealed Preference By Isaac Sorkin
  11. Essays on business cycles with liquidity constraints and firm entry-exit dynamics under incomplete information By Ma, Zhixia

  1. By: SeHyoun Ahn; Greg Kaplan; Benjamin Moll; Thomas Winberry; Christian Wolf
    Abstract: We develop an efficient and easy-to-use computational method for solving a wide class of general equilibrium heterogeneous agent models with aggregate shocks, together with an open source suite of codes that implement our algorithms in an easy-to-use toolbox. Our method extends standard linearization techniques and is designed to work in cases when inequality matters for the dynamics of macroeconomic aggregates. We present two applications that analyze a two-asset incomplete markets model parameterized to match the distribution of income, wealth, and marginal propensities to consume. First, we show that our model is consistent with two key features of aggregate consumption dynamics that are difficult to match with representative agent models: (i) the sensitivity of aggregate consumption to predictable changes in aggregate income and (ii) the relative smoothness of aggregate consumption. Second, we extend the model to feature capital-skill complementarity and show how factor-specific productivity shocks shape dynamics of income and consumption inequality.
    JEL: A00 C00 E00
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_6581&r=cmp
  2. By: Michael Ludkovski; James Risk
    Abstract: We consider calculation of capital requirements when the underlying economic scenarios are determined by simulatable risk factors. In the respective nested simulation framework, the goal is to estimate portfolio tail risk, quantified via VaR or TVaR of a given collection of future economic scenarios representing factor levels at the risk horizon. Traditionally, evaluating portfolio losses of an outer scenario is done by computing a conditional expectation via inner-level Monte Carlo and is computationally expensive. We introduce several inter-related machine learning techniques to speed up this computation, in particular by properly accounting for the simulation noise. Our main workhorse is an advanced Gaussian Process (GP) regression approach which uses nonparametric spatial modeling to efficiently learn the relationship between the stochastic factors defining scenarios and corresponding portfolio value. Leveraging this emulator, we develop sequential algorithms that adaptively allocate inner simulation budgets to target the quantile region. The GP framework also yields better uncertainty quantification for the resulting VaR/TVaR estimators that reduces bias and variance compared to existing methods. We illustrate the proposed strategies with two case-studies in two and six dimensions.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.05204&r=cmp
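    A minimal sketch of the emulation idea described in item 2, assuming scikit-learn and a made-up one-dimensional loss surface: noisy inner-simulation estimates at a small design of outer scenarios are fed to a Gaussian Process regression with an explicit noise kernel, and VaR/TVaR are then read off the emulated losses. This illustrates the general technique only, not the paper's sequential algorithm; the loss function, scenario model, and simulation budgets are invented.
```python
# Minimal sketch (not the paper's algorithm): fit a GP emulator that maps
# outer-scenario risk factors to expected portfolio loss, then estimate
# VaR/TVaR from the emulated losses.  Loss model and budgets are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)

def inner_simulation(z, n_inner=100):
    """Noisy Monte Carlo estimate of the conditional loss E[L | Z = z]."""
    true_loss = z**2 - 0.5 * z                   # hypothetical loss surface
    return true_loss + rng.normal(0.0, 1.0, size=n_inner).mean()

# Outer scenarios: risk-factor levels at the risk horizon (one-dimensional here).
scenarios = rng.normal(0.0, 1.0, size=2000)

# Spend inner simulations only at a small design of outer scenarios.
design = np.quantile(scenarios, np.linspace(0.01, 0.99, 30))
y_design = np.array([inner_simulation(z) for z in design])

# GP regression with a white-noise kernel to account for simulation error.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel(),
                              normalize_y=True)
gp.fit(design.reshape(-1, 1), y_design)

# Predict losses for every outer scenario and read off tail risk measures.
pred_losses = gp.predict(scenarios.reshape(-1, 1))
alpha = 0.99
var_hat = np.quantile(pred_losses, alpha)
tvar_hat = pred_losses[pred_losses >= var_hat].mean()
print(f"VaR_{alpha:.2f} ~ {var_hat:.3f}, TVaR ~ {tvar_hat:.3f}")
```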
  3. By: Anastasia Kratsios; Cody B. Hyndman
    Abstract: We develop a method for incorporating relevant non-Euclidean geometric information into a broad range of classical filtering and statistical or machine learning algorithms. We apply these techniques to approximate the solution of the non-Euclidean filtering problem to arbitrary precision. We then extend the particle filtering algorithm to compute our asymptotic solution to arbitrary precision. Moreover, we find explicit error bounds measuring the discrepancy between our locally triangulated filter and the true theoretical non-Euclidean filter. Our methods are motivated by certain fundamental problems in mathematical finance. In particular we apply these filtering techniques to incorporate the non-Euclidean geometry present in stochastic volatility models and optimal Markowitz portfolios. We also extend Euclidean statistical or machine learning algorithms to non-Euclidean problems by using the local triangulation technique, which we show improves the accuracy of the original algorithm. We apply the local triangulation method to obtain improvements of the (sparse) principal component analysis and the principal geodesic analysis algorithms and show how these improved algorithms can be used to parsimoniously estimate the evolution of the shape of forward-rate curves. While focused on financial applications, the non-Euclidean geometric techniques presented in this paper can be employed to provide improvements to a range of other statistical or machine learning algorithms and may be useful in other areas of application.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.05829&r=cmp
  4. By: Masafumi Nakano (Graduate School of Economics, the University of Tokyo); Akihiko Takahashi (Graduate School of Economics, the University of Tokyo); Soichiro Takahashi (Graduate School of Economics, the University of Tokyo)
    Abstract: This paper proposes a new state space approach to adaptive fuzzy modeling in a dynamic environment, where Bayesian filtering sequentially learns the model parameters, including the model structures themselves, as state variables. In particular, our approach specifies the state transitions as mean-reversion processes, which is intended to incorporate and extend established state-of-the-art learning techniques as follows: first, the mean-reversion levels of the model parameters are determined by applying an existing learning method to a training period; next, a filtering implementation over test data enables on-line estimation of the parameters, where the estimates are adaptively tuned at each new data arrival based on the reliable learning result already obtained. In this work, we concretely design a Takagi-Sugeno-Kang fuzzy model for financial investment, whose parameters follow autoregressive processes with mean-reversion levels determined by particle swarm optimization. Because Monte Carlo simulation-based algorithms (particle filters) are available, our methodology applies to quite general settings, including the non-linearity that actually arises in our investment problem. Finally, an out-of-sample numerical experiment with security price data demonstrates its effectiveness.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:cfi:fseres:cf422&r=cmp
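    The sequential Bayesian learning step described in item 4 rests on particle filtering for parameters with mean-reverting transitions. Below is a generic bootstrap particle filter for a single latent parameter following an AR(1) mean-reversion process; the observation model, mean-reversion level, and noise scales are illustrative assumptions, not the authors' Takagi-Sugeno-Kang specification.
```python
# Generic bootstrap particle filter for one latent parameter theta_t that
# follows a mean-reverting AR(1) transition, as in the state-space setup
# described above.  All model constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical transition: theta_t = mu + phi * (theta_{t-1} - mu) + noise
mu, phi, sigma_state, sigma_obs = 0.5, 0.9, 0.05, 0.2
n_particles, T = 1000, 200

# Simulate synthetic observations y_t = theta_t + observation noise.
theta_true = np.empty(T)
theta_true[0] = mu
for t in range(1, T):
    theta_true[t] = mu + phi * (theta_true[t - 1] - mu) + rng.normal(0, sigma_state)
y = theta_true + rng.normal(0, sigma_obs, size=T)

# Bootstrap particle filter.
particles = rng.normal(mu, 0.2, size=n_particles)
estimates = np.empty(T)
for t in range(T):
    # Propagate particles through the mean-reverting transition.
    particles = mu + phi * (particles - mu) + rng.normal(0, sigma_state, n_particles)
    # Weight by the Gaussian observation likelihood, then resample.
    weights = np.exp(-0.5 * ((y[t] - particles) / sigma_obs) ** 2)
    weights /= weights.sum()
    particles = rng.choice(particles, size=n_particles, replace=True, p=weights)
    estimates[t] = particles.mean()

print("final filtered estimate:", estimates[-1], "true value:", theta_true[-1])
```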
  5. By: Hudgins, David; Crowley, Patrick M.
    Abstract: This paper develops a wavelet-based control system model that can be used to simulate fiscal and monetary strategies in an open economy context in the time-frequency domain. As the emphasis on real exchange rate stability is increased, the model simulates the effects on both the aggregate and decomposed trade balance under both constant and depreciating real exchange rate targets, and also the effects on the real GDP expenditure components. This paper adds to recent research in this area by incorporating an external sector via the use of a real effective exchange rate as a driver of output. The research is also the first to analyze exchange rate effects within a time-frequency model with integrated fiscal and monetary policies in an open-economy applied wavelet-based optimal control setting. To demonstrate the usefulness of this model, we use post-apartheid South African macro data under a political targeting design for the frequency range weights, where we simulate jointly optimal fiscal and monetary policy under varying preferences for real exchange rate stability.
    JEL: C61 C63 C88 E52 E61 F47
    Date: 2017–10–18
    URL: http://d.repec.org/n?u=RePEc:bof:bofrdp:2017_032&r=cmp
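    Item 5 builds on a time-frequency decomposition of macro series. As a rough illustration only, the sketch below uses PyWavelets to split a simulated series into frequency bands and reconstruct each band's contribution; the series, wavelet choice, and number of levels are assumptions, and none of the paper's optimal-control machinery is reproduced.
```python
# Illustrative sketch only: decompose a macro series into frequency bands
# with a discrete wavelet transform (PyWavelets), the kind of time-frequency
# split that wavelet-based control models operate on.  The series and the
# wavelet choice are assumptions, not taken from the paper.
import numpy as np
import pywt

rng = np.random.default_rng(2)

# Hypothetical quarterly "output gap": slow cycle + fast cycle + noise.
t = np.arange(160)
series = (np.sin(2 * np.pi * t / 40) + 0.3 * np.sin(2 * np.pi * t / 6)
          + 0.2 * rng.standard_normal(len(t)))

# Multi-level discrete wavelet decomposition.
wavelet, levels = "db4", 4
coeffs = pywt.wavedec(series, wavelet, level=levels)

# Reconstruct the contribution of each frequency band separately.
bands = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(kept, wavelet)[: len(series)])

for i, band in enumerate(bands):
    label = "approximation (lowest frequency)" if i == 0 else f"detail level {levels - i + 1}"
    print(f"{label}: variance {band.var():.4f}")
```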
  6. By: Anastasia Kratsios; Cody B. Hyndman
    Abstract: We introduce a path-dependent geometric framework which generalizes the HJM modeling approach to a wide variety of other asset classes. A machine learning regularization framework is developed with the objective of removing arbitrage opportunities from models within this general framework. The regularization method relies on minimal deformations of a model subject to a path-dependent penalty that detects arbitrage opportunities. We prove that the solution of this regularization problem is independent of the arbitrage-penalty chosen, subject to a fixed information loss functional. In addition to the general properties of the minimal deformation, we also consider several explicit examples. This paper is focused on placing machine learning methods in finance on a sound theoretical basis and the techniques developed to achieve this objective may be of interest in other areas of application.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.05114&r=cmp
  7. By: Benjamin David
    Abstract: The aim of this paper is to highlight the advantages of algorithmic methods for economic research with a quantitative orientation. We describe four typical problems involved in econometric modeling, namely the choice of explanatory variables, the functional form, the probability distribution, and the inclusion of interactions in a model. We detail how these problems can be solved by using "CART" and "Random Forest" algorithms in a context of massively increasing data availability. We base our analysis on two examples: the identification of growth drivers and the prediction of growth cycles. More generally, we also discuss the fields of application of these methods, which come from a machine-learning framework, underlining their potential for economic applications.
    Keywords: decision trees, CART, Random Forest
    JEL: C4 C18 C38
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:drm:wpaper:2017-46&r=cmp
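    A minimal illustration of the two algorithms named in item 7, assuming scikit-learn and synthetic data: a single regression tree (CART) and a random forest are fit to a made-up nonlinear growth relation, and the forest's variable importances are read off as a rough variable-selection device. Feature names and the data-generating process are placeholders, not results from the paper.
```python
# Minimal sketch of the CART and Random Forest algorithms discussed above,
# applied to synthetic data.  Feature names and the data-generating process
# are placeholders, not results from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 4))
features = ["investment", "schooling", "openness", "inflation"]  # placeholder names
# Hypothetical nonlinear growth relation with an interaction term.
y = 0.8 * X[:, 0] + np.where(X[:, 1] > 0, 0.5 * X[:, 2], 0.0) + 0.1 * rng.normal(size=n)

# CART: a single regression tree.
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Random Forest: an ensemble of trees that averages out single-tree variance.
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

for name, imp in zip(features, forest.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
print("tree depth:", tree.get_depth())
```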
  8. By: Guglielmo Maria Caporale; Alex Plastun
    Abstract: This paper examines the day of the week effect in the crypto currency market using a variety of statistical techniques (average analysis, Student's t-test, ANOVA, the Kruskal-Wallis test, and regression analysis with dummy variables) as well as a trading simulation approach. Most crypto currencies (LiteCoin, Ripple, Dash) are found not to exhibit this anomaly. The only exception is BitCoin, for which returns on Mondays are significantly higher than those on the other days of the week. In this case the trading simulation analysis shows that there exist exploitable profit opportunities that can be interpreted as evidence against efficiency of the crypto currency market.
    Keywords: Efficient Market Hypothesis, day of the week effect, crypto currency, BitCoin, anomaly, trading strategy
    JEL: G12 C63
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1694&r=cmp
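    The battery of tests listed in item 8 (t-test, ANOVA, Kruskal-Wallis, dummy-variable regression) can be run on any daily return series. The sketch below applies them to simulated returns with scipy and statsmodels; the data are fabricated for illustration, and the configuration is an assumption rather than the authors' exact setup.
```python
# Illustrative day-of-the-week tests on a synthetic daily return series,
# using the kinds of statistics listed in the abstract.  The data are
# simulated, not actual crypto-currency returns.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
dates = pd.bdate_range("2015-01-01", periods=750)
returns = pd.Series(rng.normal(0.0, 0.02, len(dates)), index=dates)

df = pd.DataFrame({"ret": returns, "weekday": returns.index.day_name()})

# t-test: Mondays versus all other days.
monday = df.loc[df.weekday == "Monday", "ret"]
other = df.loc[df.weekday != "Monday", "ret"]
print("t-test:", stats.ttest_ind(monday, other, equal_var=False))

# One-way ANOVA and the Kruskal-Wallis test across the five weekdays.
groups = [g["ret"].values for _, g in df.groupby("weekday")]
print("ANOVA:", stats.f_oneway(*groups))
print("Kruskal-Wallis:", stats.kruskal(*groups))

# Regression with weekday dummies (Friday is the omitted reference category).
ols = smf.ols("ret ~ C(weekday)", data=df).fit()
print(ols.summary().tables[1])
```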
  9. By: Michael Ludkovski; Xuwei Yang
    Abstract: In a game theoretic framework, we study energy markets with a continuum of homogeneous producers who produce energy from an exhaustible resource such as oil. Each producer simultaneously optimizes her production rate, which drives her revenues, as well as her exploration effort to replenish her reserves. This exploration activity is modeled through a controlled point process that leads to stochastic increments in the reserves level. The producers interact with each other through the market price, which depends on aggregate production. We employ a mean field game approach to solve for a Markov Nash equilibrium and develop numerical schemes to solve the resulting system of HJB and transport equations with non-local coupling. A time-stationary formulation is also explored, as well as the fluid limit in which exploration becomes deterministic.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.05131&r=cmp
  10. By: Isaac Sorkin
    Abstract: This paper estimates workers' preferences for firms by studying the structure of employer-to-employer transitions in U.S. administrative data. The paper uses a tool from numerical linear algebra to measure the central tendency of worker flows, which is closely related to the ranking of firms revealed by workers' choices. There is evidence of compensating differentials: workers systematically move to lower-paying firms in a way that cannot be accounted for by layoffs or differences in recruiting intensity. The estimates suggest that compensating differentials account for over half of the firm component of the variance of earnings.
    JEL: E24 J01 J3 J32 J42
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:23938&r=cmp
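    The "tool from numerical linear algebra" in item 10 is, in spirit, an eigenvector ranking of the employer-to-employer flow matrix, analogous to PageRank. The toy sketch below ranks three hypothetical firms by the stationary distribution implied by a made-up flow matrix; it is not the paper's estimator and abstracts from the layoff and recruiting-intensity corrections the abstract mentions.
```python
# Toy illustration of ranking firms from employer-to-employer flows via the
# stationary distribution of a normalized flow matrix (PageRank-like).  The
# flow counts are made up; this is not the paper's estimator.
import numpy as np

# flows[i, j] = number of workers observed moving from firm i to firm j.
flows = np.array([
    [0., 30.,  5.],
    [10., 0., 40.],
    [2.,  8.,  0.],
])

# Row-normalize to a transition matrix P over destination firms.
P = flows / flows.sum(axis=1, keepdims=True)

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
stationary = np.real(vecs[:, np.argmax(np.real(vals))])
stationary = stationary / stationary.sum()

# Firms to which workers systematically flow rank highest.
ranking = np.argsort(-stationary)
for rank, firm in enumerate(ranking, start=1):
    print(f"rank {rank}: firm {firm} (stationary share {stationary[firm]:.2f})")
```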
  11. By: Ma, Zhixia
    Abstract: This dissertation addresses two distinct issues. The first paper studies business cycles with asset fire sales under limited commitment in financial markets. The second and third papers study firm entry and exit dynamics in a global game with incomplete information: the second paper derives analytical solutions when firms' productivity is uniformly distributed, while the third paper extends the analysis to more general distributions and solves the problem numerically. The first paper develops a stochastic overlapping-generations model to study the intertemporal and intergenerational transmission of productivity shocks. Productivity shocks cause fire sales of capital, which in turn affect the income of future generations. From a constrained-efficiency perspective, competitive equilibria can be inefficient, as agents' choices in equilibrium exhibit ex-ante over-borrowing. The inefficiency arises because entrepreneurs cannot be fully financed from outside funds due to limited commitment in financial markets. The fact that capital prices are determined in competitive markets also contributes to this inefficiency, because agents fail to internalize potential ex-post fire sales. A capital requirement policy can reduce fire sales when adverse productivity shocks occur, and can thus increase the income of all future generations. On the other hand, a lower capital stock even when good productivity shocks occur decreases income for all future generations. Overall, this paper shows that in the long run a capital requirement policy can (strictly) increase the welfare of agents. The second paper develops a static general equilibrium model to study firms' entry and exit decisions in a global game with incomplete information. Firms' choices are strategic substitutes. This paper analytically proves the existence and uniqueness of a monotonic pure strategy equilibrium when the mean productivity and the productivity conditional on the mean are both drawn from uniform distributions. Using numerical examples, it is shown that when the precision of public information increases, the equilibrium switching productivity level increases and, as a result, aggregate industry productivity increases. By reallocating resources to more productive firms, an increase in the precision of public information leads to higher welfare. The third paper extends the problem studied in the second paper to examine whether and how the shapes of the productivity distributions affect the existence of monotonic pure strategy equilibria. The mean productivity is now drawn from a truncated normal distribution and each firm's productivity conditional on the mean is drawn from more general (truncated) distributions, such as truncated normal, truncated gamma, and truncated exponential distributions. With numerical examples, it is shown that a unique monotonic pure strategy equilibrium continues to exist when firms' productivity is drawn from non-uniform distributions. As in the second paper, both aggregate productivity and welfare per worker increase with the precision of public information. However, unlike in the second paper, the impact of an increase in the precision of private information on aggregate productivity and welfare depends on the shape of the distribution. In particular, this impact is uncertain when the productivity conditional on the mean is drawn from a truncated gamma distribution, which is skewed.
    Date: 2016–01–01
    URL: http://d.repec.org/n?u=RePEc:isu:genstf:201601010800006049&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.