New Economics Papers
on Computational Economics
Issue of 2011‒02‒26
fourteen papers chosen by



  1. Scheduling of inventory releasing jobs to satisfy time-varying demand By Nils Boysen; Malte Fliedner; S. Bock
  2. Incorporating Vehicular Emissions into an Efficient Mesoscopic Traffic Model: An Application to the Alameda Corridor By Gan, Qijian; Sun, Jielin; Jin, Wenlong; Saphores, Jean-Daniel
  3. A General Computation Scheme for a High-Order Asymptotic Expansion Method By Akihiko Takahashi; Kohta Takehara; Masashi Toda
  4. Anticipated Alternative Policy-Rate Paths in Policy Simulations By Laséen, Stefan; Svensson, Lars E.O.
  5. A Class of Adaptive EM-based Importance Sampling Algorithms for Efficient and Robust Posterior and Predictive Simulation By Lennart Hoogerheide; Anne Opschoor; Herman K. van Dijk
  6. Heuristic model selection for leading indicators in Russia and Germany By Ivan Savin; Peter Winker
  7. In and Out of Equilibrium: Evolution of Strategies in Repeated Games with Discounting By Matthijs van Veelen; Julian Garcia
  8. A Note on the Stability of the Least Squares Monte Carlo By Oleksii Mostovyi
  9. Weighted Monte Carlo: Calibrating the Smile and Preserving Martingale Condition By Alberto Elices; Eduard Giménez
  10. Heterogeneous Gain Learning and the Dynamics of Asset Prices By Blake LeBaron
  11. Travel Time Variability and Airport Accessibility By Paul Koster; Eric Kroes; Erik T. Verhoef
  12. The Impact of Fiscal Consolidation and Structural Reforms on Growth in Japan By Pelin Berkmen
  13. An Estimated Dynamic Stochastic General Equilibrium Model of the Jordanian Economy By Tigran Poghosyan; Samya Beidas-Strom
  14. Tax Compliance by Firms and Audit Policy By Ralph Bayer; Frank A Cowell

  1. By: Nils Boysen (School of Economics and Business Administration, Friedrich-Schiller-University Jena); Malte Fliedner (School of Economics and Business Administration, Friedrich-Schiller-University Jena); S. Bock
    Abstract: This paper studies a new class of single-machine scheduling problems that are faced by just-in-time suppliers satisfying a given demand. In these models the processing of jobs leads to a release of a predefined number of product units into inventory. Consumption is triggered by predetermined, time-varying, and product-specific demand requests. While all demands have to be fulfilled, the objective is to minimize the resulting product inventory. We investigate different subproblems of this general setting with regard to their computational complexity. For more restricted problem versions (equal processing times and equal numbers of released products), strongly polynomial time algorithms are presented. In contrast, NP-hardness in the strong sense is proven for more general problem versions (varying processing times or varying numbers of released products). Moreover, for the most general version, even finding a feasible solution is shown to be strongly NP-hard.
    Keywords: Machine scheduling, Inventory, Time-varying demand, Computational complexity
    Date: 2011–02–16
    URL: http://d.repec.org/n?u=RePEc:jen:jenjbe:2011-02&r=cmp
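    Code sketch (Python): a minimal, hypothetical evaluation routine for the model described above, not the authors' algorithm. It simulates a fixed job sequence, checks that every demand request is covered on time, and scores the schedule with a simple leftover-inventory proxy; all data are illustrative.
      def evaluate_sequence(jobs, demands):
          """jobs: list of (processing_time, units_released) in processing order.
          demands: list of (time, amount), sorted by time.
          Returns summed leftover inventory after each demand epoch,
          or None if some demand cannot be met on time."""
          releases, t = [], 0.0
          for p, q in jobs:
              t += p
              releases.append((t, q))           # job releases q units at completion
          inventory, cost, r = 0.0, 0.0, 0
          for d_time, d_amount in demands:
              while r < len(releases) and releases[r][0] <= d_time:
                  inventory += releases[r][1]   # stock released before this demand
                  r += 1
              if inventory < d_amount:
                  return None                   # infeasible: demand not covered on time
              inventory -= d_amount
              cost += inventory                 # proxy objective: leftover stock
          return cost

      # Two identical jobs, demands at t=3 and t=5; prints 2.0
      print(evaluate_sequence([(2.0, 4), (2.0, 4)], [(3.0, 3), (5.0, 4)]))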
  2. By: Gan, Qijian; Sun, Jielin; Jin, Wenlong; Saphores, Jean-Daniel
    Abstract: We couple EMFAC with a dynamic mesoscopic traffic model to create an efficient tool for generating information about traffic dynamics and emissions of various pollutants (CO2, PM10, NOX, and TOG) on large-scale networks. Our traffic flow model is the multi-commodity discrete kinematic wave (MCDKW) model, which is rooted in the cell transmission model but allows variable cell sizes for more efficient computations. This approach allows us to estimate traffic emissions and characteristics with a precision similar to microscopic simulation but much faster. To assess the performance of this tool, we analyze traffic and emissions on a large freeway network located between the ports of Los Angeles/Long Beach and downtown Los Angeles. Comparisons of our mesoscopic simulation results with microscopic simulations generated by TransModeler under both congested and free-flow conditions show that hourly emission estimates of our mesoscopic model are within 4 to 15 percent of microscopic results, while computation time is reduced by a factor of 6 or more. Our approach provides policymakers with a tool more efficient than microsimulation for analyzing the effectiveness of regional policies designed to reduce air pollution from motor vehicles.
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cdl:uctcwp:1798762&r=cmp
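    Code sketch (Python): a generic cell transmission model update with variable cell lengths, in the spirit of the kinematic-wave roots of the MCDKW model; this is not the authors' implementation, and all parameter values are invented.
      import numpy as np

      # Illustrative triangular fundamental diagram (invented parameters)
      V, W = 30.0, 5.0             # free-flow and backward wave speeds (m/s)
      RHO_JAM, Q_MAX = 0.15, 0.5   # jam density (veh/m), capacity (veh/s)

      def ctm_step(rho, lengths, dt, inflow, outflow_supply):
          """One density update; dt must satisfy the CFL bound dt <= min(lengths)/V."""
          demand = np.minimum(V * rho, Q_MAX)                # what each cell can send
          supply = np.minimum(Q_MAX, W * (RHO_JAM - rho))    # what each cell can take
          flux = np.minimum(demand[:-1], supply[1:])         # interior boundary flows
          flux = np.concatenate(([min(inflow, supply[0])], flux,
                                 [min(demand[-1], outflow_supply)]))
          return rho + dt / lengths * (flux[:-1] - flux[1:])

      # Example: ten cells of varying length with a jam in the middle
      rho = np.full(10, 0.02); rho[4:6] = 0.12
      lengths = np.linspace(200, 400, 10)
      for _ in range(100):
          rho = ctm_step(rho, lengths, dt=5.0, inflow=0.3, outflow_supply=Q_MAX)
      print(np.round(rho, 3))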
  3. By: Akihiko Takahashi (Faculty of Economics, University of Tokyo); Kohta Takehara (Graduate School of Economics, University of Tokyo); Masashi Toda (Graduate School of Economics, University of Tokyo)
    Abstract: This paper presents a new computational scheme for an asymptotic expansion method of an arbitrary order. The asymptotic expansion method in finance initiated by Kunitomo and Takahashi [9], Yoshida [34] and Takahashi [20], [21] is a widely applicable methodology for analytic approximation of the expectation of a certain functional of diffusion processes; not only academic researchers but also many practitioners have used the methodology for a variety of financial issues such as pricing or hedging complex derivatives under high-dimensional underlying stochastic environments. In practical applications of the expansion, the crucial step is the calculation of conditional expectations of a certain kind of Wiener functional. [20], [21] and Takahashi and Takehara [23] provided explicit formulas for the conditional expectations necessary for the asymptotic expansion up to the third order. This paper presents a new method for computing an arbitrary-order expansion in a general diffusion-type stochastic environment, which is especially powerful for high-order expansions: it develops a new calculation algorithm for computing the coefficients of the expansion by solving a system of ordinary differential equations that is equivalent to computing the conditional expectations. To demonstrate its effectiveness, the paper gives numerical examples of the approximation for the λ-SABR model up to the fifth order and for a cross-currency Libor market model with a general stochastic volatility model of the spot foreign exchange rate up to the fourth order.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:cfi:fseres:cf242&r=cmp
  4. By: Laséen, Stefan (Monetary Policy Department, Central Bank of Sweden); Svensson, Lars E.O. (Sveriges Riksbank)
    Abstract: This paper specifies a convenient new algorithm for constructing policy projections conditional on alternative anticipated policy-rate paths in linearized dynamic stochastic general equilibrium (DSGE) models, such as Ramses, the Riksbank's main DSGE model. Such projections with anticipated policy-rate paths correspond to situations where the central bank transparently announces that, conditional on current information, it plans to implement a particular policy-rate path, and where this announced plan is believed and then anticipated by the private sector. The main idea of the algorithm is to include among the predetermined variables (the "state" of the economy) the vector of nonzero means of future shocks to a given policy rule that is required to satisfy the given anticipated policy-rate path.
    Keywords: Optimal monetary policy; instrument rules; policy rules; optimal policy projections
    JEL: E52 E58
    Date: 2011–01–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0248&r=cmp
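    Code sketch (Python): a deliberately simplified, backward-looking analogue of the algorithm's main idea, namely computing the nonzero means of future policy-rule shocks that deliver a given policy-rate path. In a forward-looking DSGE model the announced shocks also shift current expectations, which this toy version omits; the matrices A, b, F are invented.
      import numpy as np

      A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition (invented)
      b = np.array([-0.2, 0.05])               # effect of the policy rate on states
      F = np.array([1.5, 0.5])                 # Taylor-type rule coefficients

      def anticipated_shocks(x0, rate_path):
          """Shock means u_t such that i_t = F'x_t + u_t equals the target path."""
          x, shocks = np.asarray(x0, float), []
          for i_star in rate_path:
              u = i_star - F @ x      # shock that puts the rule on the target rate
              shocks.append(u)
              x = A @ x + b * i_star  # state evolves under the realized rate
          return np.array(shocks)

      print(anticipated_shocks([0.5, 0.2], [0.25, 0.25, 0.5]))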
  5. By: Lennart Hoogerheide (Erasmus University Rotterdam); Anne Opschoor (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel) in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that this reduces the computational effort enormously. The sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach for importance sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the marginal and conditional posterior distributions of subsets of model parameters rather than the joint. This division can substantially reduce the dimension of the approximation problem.
    Keywords: mixture of Student-t distributions; importance sampling; Kullback-Leibler divergence; Expectation Maximization; Metropolis-Hastings algorithm; predictive likelihoods; mixture GARCH models; Value at Risk
    JEL: C11 C15 C22
    Date: 2011–01–06
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110004&r=cmp
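    Code sketch (Python): a one-component sketch of the idea behind MitISEM, not the method itself: adapt a Student-t candidate to a posterior kernel by importance-weighted moment matching, where the paper fits a full mixture of t densities via importance-weighted EM. The target kernel here is a toy split normal.
      import numpy as np
      from scipy import stats

      def log_kernel(x):
          """Toy skewed target kernel (split normal); illustrative only."""
          s = np.where(x > 0, 2.0, 1.0)
          return -0.5 * (x / s) ** 2

      def fit_t_candidate(n=20_000, iters=5, df=5, seed=0):
          rng = np.random.default_rng(seed)
          mu, scale = 0.0, 1.0
          for _ in range(iters):
              x = mu + scale * rng.standard_t(df, n)
              logw = log_kernel(x) - stats.t.logpdf(x, df, loc=mu, scale=scale)
              w = np.exp(logw - logw.max())
              w /= w.sum()                                # normalized IS weights
              mu = np.sum(w * x)                          # weighted location update
              scale = np.sqrt(np.sum(w * (x - mu) ** 2))  # weighted scale (heuristic)
          return mu, scale

      print(fit_t_candidate())  # candidate drifts toward the skewed target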
  6. By: Ivan Savin (Justus Liebig University Giessen); Peter Winker (Justus Liebig University Giessen)
    Abstract: Business tendency survey indicators are widely recognized as a key instrument for business cycle forecasting. Their leading indicator property is assessed with regard to forecasting industrial production in Russia and Germany. For this purpose, vector autoregressive (VAR) models are specified and estimated to construct forecasts. As the potential number of included lags is large, we compare fully specified VAR models with subset models obtained using a genetic algorithm that allows 'holes' in multivariate lag structures. The problem is complicated by the fact that a structural break and seasonal variation of the indicators have to be taken into account. The models allow for a comparison of the dynamic adjustment and the forecasting performance of the leading indicators for both countries, revealing marked differences between Russia and Germany.
    Keywords: Leading indicators, business cycle forecasts, VAR, model selection, genetic algorithms
    JEL: C32 C52 C53 C61 E37
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:mar:magkse:201101&r=cmp
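    Code sketch (Python): an illustrative genetic algorithm for subset lag selection, shrunk from the paper's multivariate VAR setting to a univariate autoregression so it stays self-contained. Chromosome bits switch individual lags on and off (the 'holes'), and fitness is the BIC of the implied OLS fit; all tuning values are invented.
      import numpy as np

      rng = np.random.default_rng(1)

      # Toy data: an AR process where only lags 1 and 4 matter
      n, max_lag = 400, 8
      y = np.zeros(n)
      for t in range(max_lag, n):
          y[t] = 0.5 * y[t-1] + 0.3 * y[t-4] + rng.normal()

      def bic(genome):
          """OLS fit of y_t on the selected lags; returns BIC (lower is better)."""
          lags = [l + 1 for l, g in enumerate(genome) if g]
          if not lags:
              return np.inf
          Y = y[max_lag:]
          X = np.column_stack([y[max_lag - l : n - l] for l in lags])
          beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
          rss = np.sum((Y - X @ beta) ** 2)
          return len(Y) * np.log(rss / len(Y)) + len(lags) * np.log(len(Y))

      pop = rng.integers(0, 2, (30, max_lag))
      for gen in range(40):
          fit = np.array([bic(g) for g in pop])
          parents = pop[np.argsort(fit)[:10]]            # truncation selection
          children = []
          for _ in range(len(pop)):
              a, b = parents[rng.integers(10, size=2)]
              cut = rng.integers(1, max_lag)
              child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
              flip = rng.random(max_lag) < 0.05          # bit-flip mutation
              children.append(np.where(flip, 1 - child, child))
          pop = np.array(children)

      best = pop[np.argmin([bic(g) for g in pop])]
      print("selected lags:", [l + 1 for l, g in enumerate(best) if g])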
  7. By: Matthijs van Veelen (CREED, Universiteit van Amsterdam); Julian Garcia (Vrije Universiteit)
    Abstract: Repeated games tend to have large sets of equilibria. We also know that in the repeated prisoner's dilemma there is a profusion of neutrally stable strategies, but no strategy that is evolutionarily stable. This paper shows that for all of these neutrally stable strategies there is a stepping-stone path out: there is always a neutral mutant that can enter a population and create an actual selective advantage for a second mutant. Such stepping-stone paths out of equilibrium generally exist both in the direction of more cooperation and in the direction of less. While the central theorems show that such paths out of equilibrium exist, they could still be rare compared to the size of the strategy space. Simulations, however, suggest that they are not too rare to be found by a reasonable mutation process, and that typical simulation paths take the population from equilibrium to equilibrium through a series of indirect invasions. Instability does not mean we cannot draw qualitative conclusions, though: the very nature of the indirect invasions implies that the population will on average be (somewhat) reciprocal and (reasonably) cooperative.
    Keywords: Repeated games; evolution; robust against indirect invasions; simulation
    JEL: C73
    Date: 2010–04–08
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20100037&r=cmp
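    Code sketch (Python): a small illustration of discounted repeated-game payoffs using two textbook memory-one strategies; the paper works with a richer strategy space (finite automata) and evolutionary dynamics, which this sketch does not attempt.
      # Discounted repeated prisoner's dilemma between deterministic
      # memory-one strategies, iterated until the discount weight is negligible.
      PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

      ALLD = {'first': 'D', 'C': 'D', 'D': 'D'}   # always defect
      TFT  = {'first': 'C', 'C': 'C', 'D': 'D'}   # tit-for-tat

      def discounted_payoffs(s1, s2, delta=0.9, tol=1e-12):
          a, b = s1['first'], s2['first']
          v1 = v2 = 0.0
          weight = 1.0
          while weight > tol:
              p1, p2 = PAYOFF[(a, b)]
              v1 += weight * p1
              v2 += weight * p2
              weight *= delta
              a, b = s1[b], s2[a]   # each reacts to the opponent's last move
          return (1 - delta) * v1, (1 - delta) * v2   # normalized payoffs

      print(discounted_payoffs(TFT, ALLD))   # TFT exploited once, then mutual D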
  8. By: Oleksii Mostovyi
    Abstract: This paper analyzes the Least Squares Monte Carlo (LSM) algorithm proposed by Longstaff and Schwartz (2001) for pricing American-style securities. The algorithm projects the continuation value onto a certain set of basis functions by solving a least squares problem. We analyze the stability of the algorithm as the number of exercise dates increases and prove that if the underlying stock price process is continuous, then the regression problem is ill-conditioned for small values of the time parameter t.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.3218&r=cmp
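    Code sketch (Python): a standard Longstaff-Schwartz LSM pricer for an American put under geometric Brownian motion, to make the algorithm under analysis concrete; parameters are illustrative. The note's result concerns the conditioning of exactly the per-date regressions performed below as the number of exercise dates grows.
      import numpy as np

      def lsm_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                           n_steps=50, n_paths=100_000, seed=0):
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          z = rng.standard_normal((n_paths, n_steps))
          log_s = np.cumsum((r - 0.5 * sigma**2) * dt
                            + sigma * np.sqrt(dt) * z, axis=1)
          S = S0 * np.exp(log_s)                  # paths at t_1..t_n
          cash = np.maximum(K - S[:, -1], 0.0)    # exercise value at maturity
          for t in range(n_steps - 2, -1, -1):
              cash *= np.exp(-r * dt)             # discount one step back
              itm = K - S[:, t] > 0
              if itm.sum() < 10:
                  continue
              basis = np.vander(S[itm, t], 4)     # cubic polynomial basis
              coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
              continuation = basis @ coef
              exercise = K - S[itm, t]
              ex_now = exercise > continuation
              idx = np.where(itm)[0][ex_now]
              cash[idx] = exercise[ex_now]        # exercise where it beats holding
          return np.exp(-r * dt) * cash.mean()

      print(lsm_american_put())   # roughly 6.0-6.1 for these parameters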
  9. By: Alberto Elices; Eduard Giménez
    Abstract: Weighted Monte Carlo prices exotic options by calibrating the probabilities of paths previously generated by a regular Monte Carlo run so as to fit a set of option premiums. When only vanilla call and put options and forward prices are considered, the martingale condition might not be preserved. This paper shows that this is indeed the case and overcomes the problem by adding additional synthetic options. A robust, fast and easy-to-implement calibration algorithm is presented. The results are illustrated with a geometric cliquet option, which shows that the price impact can be significant.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.3541&r=cmp
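    Code sketch (Python): a compact weighted Monte Carlo sketch, not the paper's algorithm: previously simulated paths are reweighted by minimizing relative entropy against uniform weights subject to repricing constraints, solved through the convex dual (exponential tilting). The forward payoff is included among the constraints so the martingale condition is imposed directly; all market numbers are invented.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import logsumexp

      rng = np.random.default_rng(0)
      n = 5000
      S_T = 100 * np.exp(-0.02 + 0.2 * rng.standard_normal(n))  # toy terminal prices

      A = np.column_stack([
          S_T,                           # forward payoff (martingale condition)
          np.maximum(S_T - 100.0, 0.0),  # 100-strike call payoff
      ])
      targets = np.array([100.0, 9.0])   # illustrative benchmark prices (r = 0)

      def dual(lam):
          """Convex dual of: min sum w*log(n*w) s.t. A'w = targets, sum w = 1."""
          return logsumexp(A @ lam - np.log(n)) - lam @ targets

      lam = minimize(dual, np.zeros(2), method='BFGS').x
      logw = A @ lam
      w = np.exp(logw - logsumexp(logw))  # max-entropy weights, summing to 1
      print(A.T @ w)                      # close to targets: benchmarks repriced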
  10. By: Blake LeBaron (International Business School, Brandeis University)
    Abstract: This paper presents a new agent-based financial market. It is designed to be simple enough to yield insights into the nature and structure of what is going on at both the agent and macro levels, yet rich enough to allow for many interesting evolutionary experiments. The model is driven by heterogeneous agents who put varying weights on past information as they design portfolio strategies. It faithfully generates many of the common stylized features of asset markets. It also yields some insights into the dynamics of agent strategies and how they give rise to market instabilities.
    Keywords: Learning, Asset Pricing, Financial Time Series, Evolution, Memory
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:brd:wpaper:29&r=cmp
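    Code sketch (Python): a toy illustration of heterogeneous constant-gain learning: each agent updates a return forecast with its own gain (short versus long memory) and the price responds to the average forecast. Everything here is invented for illustration; the paper's model adds portfolio choice, wealth dynamics and the evolution of strategies.
      import numpy as np

      rng = np.random.default_rng(2)
      gains = np.array([0.01, 0.05, 0.2, 0.5])   # heterogeneous gains (memory lengths)
      forecast = np.zeros_like(gains)            # each agent's return forecast
      price, prices = 100.0, []

      for t in range(2000):
          news = rng.normal(0.0, 0.01)
          new_price = price * np.exp(0.3 * forecast.mean() + news)
          realized = np.log(new_price / price)
          forecast += gains * (realized - forecast)  # constant-gain updates
          price = new_price
          prices.append(price)

      r = np.diff(np.log(prices))
      print("return std:", r.std(), "excess kurtosis:",
            ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3)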
  11. By: Paul Koster (VU University Amsterdam); Eric Kroes (Significance BV, and VU University Amsterdam); Erik T. Verhoef (VU University Amsterdam)
    Abstract: This paper analyses the cost of access travel time variability for air travelers. Reliable access to airports is important since the cost of missing a flight is likely to be high. First, the determinants of preferred arrival times at airports are analyzed, including trip purpose, type of airport, flight characteristics, travel experience, type of check-in, and the need to check in luggage. Second, the willingness to pay (WTP) for reductions in access travel time, in early and late arrival time at the airport, and in the probability of missing a flight is estimated using a stated choice experiment. The results indicate that the WTPs are relatively high, which is partially due to the low cost sensitivity of air travelers. Third, a model is developed to calculate the cost of variable travel times for air travelers going by car, taking into account travel time cost, scheduling cost and the cost of missing a flight. In this model, the value of reliability for air travelers is derived taking 'anticipating departure time choice' into account. Results of the numerical exercise show that the cost of access travel time variability is between 3 and 36% of total access travel cost for business travelers, and between 3 and 30% for non-business travelers. These numbers depend strongly on the time of day.
    Keywords: value of reliability; scheduling; travel time variability; airport accessibility; airport choice
    JEL: R41 L91 L93 D61
    Date: 2010–06–23
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20100061&r=cmp
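    Code sketch (Python): a back-of-the-envelope version of the scheduling trade-off: choose how much headstart to leave before a check-in deadline when car travel time is random, balancing expected travel, earliness and lateness costs against the penalty of missing the flight. The cost weights and travel time distribution are invented, not the paper's estimates.
      import numpy as np

      rng = np.random.default_rng(3)
      travel = rng.lognormal(mean=np.log(40), sigma=0.25, size=100_000)  # minutes

      ALPHA, BETA, GAMMA, MISS = 0.5, 0.3, 2.0, 300.0  # per-minute costs, penalty

      def expected_cost(headstart, slack=30.0):
          """Leave `headstart` minutes before the deadline; arriving more than
          `slack` minutes after the deadline means the flight is missed."""
          arrival = travel - headstart          # arrival relative to the deadline
          early = np.maximum(-arrival, 0.0)
          late = np.maximum(arrival, 0.0)
          missed = arrival > slack
          return (ALPHA * travel.mean() + BETA * early.mean()
                  + GAMMA * np.where(missed, 0.0, late).mean()
                  + MISS * missed.mean())

      headstarts = np.arange(30, 121)
      costs = [expected_cost(h) for h in headstarts]
      print("cost-minimizing headstart (minutes):",
            headstarts[int(np.argmin(costs))])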
  12. By: Pelin Berkmen
    Abstract: With Japan’s public debt reaching historical levels, the need for fiscal consolidation and structural reforms has increased. As fiscal consolidation will require a sustained and large adjustment in the fiscal balance, its growth effect is a concern, particularly for the short run. This paper uses the IMF’s Global Integrated Monetary and Fiscal Model to analyze the growth impact of fiscal consolidation and structural reforms. Although fiscal consolidation has short-term costs, the potential long-term benefits are considerable, and reforms that raise potential growth could support consolidation. Simulations show that the external environment also matters, but domestic policies should be the priority.
    Keywords: Economic growth, Economic models, Fiscal consolidation, Fiscal policy, Fiscal reforms, Japan, Monetary policy, Taxes
    Date: 2011–01–13
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:11/13&r=cmp
  13. By: Tigran Poghosyan; Samya Beidas-Strom
    Abstract: This paper presents and estimates a small open economy dynamic stochastic general equilibrium (DSGE) model for the Jordanian economy. The model features nominal and real rigidities, imperfect competition and habit formation in the consumer’s utility function. Oil imports are explicitly modeled in the consumption basket and domestic production. Bayesian estimation methods are employed on quarterly Jordanian data. The model’s properties are described by impulse response analysis of identified structural shocks pertinent to the economy. These properties are used to assess the effectiveness of the pegged exchange rate regime in minimizing inflation and output trade-offs. The estimates of the structural parameters fall within plausible ranges, and simulation results suggest that while the peg amplifies output, consumption and (price and wage) inflation volatility, it offers a relatively low risk premium.
    Keywords: Income, Monetary policy, Exchange rate depreciation, Exchange rate appreciation, Economic models, External shocks, Demand, Oil prices, Price adjustments, Wage policy, Consumption, Exchange rate policy
    Date: 2011–02–02
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:11/28&r=cmp
  14. By: Ralph Bayer; Frank A Cowell
    Abstract: Firms are usually better informed than tax authorities about market conditions and the potential profits of competitors. They may try to exploit this situation by underreporting their own taxable profits. The tax authority could offset firms' informational advantage by adopting "smarter" audit policies that take into account the relationship between a firm's reported profits and reports for the industry as a whole. Such an audit policy will create an externality for the decision makers in the industry, and this externality can be expected to affect not only firms' reporting policies but also their market decisions. If public policy takes into account wider economic issues than just revenue raising, what is the appropriate way for a tax authority to run such an audit policy? We develop some clear policy rules in a standard model of an industry and show the effect of these rules using simulations.
    Keywords: Tax compliance, evasion, oligopoly
    JEL: H20 H21
    Date: 2010–09
    URL: http://d.repec.org/n?u=RePEc:cep:stidar:102&r=cmp
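    Code sketch (Python): a hypothetical numerical illustration of a "smarter" relative audit rule of the kind discussed above: audit probability rises with the shortfall of a firm's report relative to the industry average, so heavy under-reporters attract audits, while honest low-profit firms are also caught by the externality. The functional form and numbers are invented, not the paper's derived rules.
      import numpy as np

      rng = np.random.default_rng(4)

      def audit_prob(reports, base=0.05, slope=0.4):
          """Audit probability increasing in the relative shortfall of a report."""
          mean_report = reports.mean()
          shortfall = np.maximum(mean_report - reports, 0.0) / mean_report
          return np.clip(base + slope * shortfall, 0.0, 1.0)

      true_profit = rng.uniform(80, 120, size=10)
      hidden = rng.uniform(0, 0.3, size=10)        # fraction of profit concealed
      reports = true_profit * (1 - hidden)
      print(np.round(audit_prob(reports), 3))      # under-reporters face more audits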

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.