Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2016-05-28
Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models
http://d.repec.org/n?u=RePEc:msh:ebswps:2016-09&r=ore
A new approach to inference in state space models is proposed, using approximate Bayesian computation (ABC). ABC avoids evaluation of an intractable likelihood by matching summary statistics computed from observed data with statistics computed from data simulated from the true process, based on parameter draws from the prior. Draws that produce a 'match' between observed and simulated summaries are retained and used to estimate the inaccessible posterior. With exact inference not being possible in the state space setting, we pursue summaries via the maximization of an auxiliary likelihood function. We derive conditions under which this auxiliary likelihood-based approach achieves Bayesian consistency, and show that, in a precise limiting sense, results yielded by the auxiliary maximum likelihood estimator are replicated by the auxiliary score. Particular attention is given to a structure in which the state variable is driven by a continuous time process, with exact inference typically infeasible in this case due to intractable transitions. Two models for continuous time stochastic volatility are used for illustration, with auxiliary likelihoods constructed by applying computationally efficient filtering methods to discrete time approximations. The extent to which the conditions for consistency are satisfied is demonstrated in both cases, and the accuracy of the proposed technique when applied to a square root volatility model is also demonstrated numerically. In multiple parameter settings a separate treatment of each parameter, based on integrated likelihood techniques, is advocated as a way of avoiding the curse of dimensionality associated with ABC methods.
Gael M. Martin
Brendan P.M. McCabe
David T. Frazier
Worapree Maneesoonthorn
Christian P. Robert
Likelihood-free methods, latent diffusion models, Bayesian consistency, asymptotic sufficiency, unscented Kalman filter, stochastic volatility
2016
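The ABC rejection step described in the abstract can be sketched in a few lines. This is a minimal illustration only: an assumed AR(1) process stands in for the state space model, and the lag-1 autocorrelation stands in for the auxiliary-likelihood summary; neither choice comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy stand-in process: AR(1), x_t = theta * x_{t-1} + e_t.
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + e[t]
    return x

def summary(x):
    # Stand-in auxiliary summary statistic: lag-1 autocorrelation.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

obs = simulate(0.7)        # "observed" data, true theta = 0.7
s_obs = summary(obs)

# ABC rejection: draw theta from a uniform prior, simulate, and retain
# draws whose simulated summary is within tolerance eps of the observed one.
eps = 0.1
kept = [theta for theta in rng.uniform(-0.95, 0.95, size=2000)
        if abs(summary(simulate(theta)) - s_obs) < eps]

posterior_mean = float(np.mean(kept))  # estimate of the inaccessible posterior mean
```

The retained draws approximate the posterior; shrinking `eps` (at greater computational cost) tightens the approximation, which is where informative, low-dimensional summaries such as the auxiliary score become essential.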
Inference Based on Many Conditional Moment Inequalities
http://d.repec.org/n?u=RePEc:cwl:cwldpp:2010r&r=ore
In this paper, we construct confidence sets for models defined by many conditional moment inequalities/equalities. The conditional moment restrictions in the models can be finite, countably infinite, or uncountably infinite. To deal with the complication brought about by the vast number of moment restrictions, we exploit the manageability (Pollard (1990)) of the class of moment functions. We verify the manageability condition in five examples from the recent partial identification literature. The proposed confidence sets are shown to have correct asymptotic size in a uniform sense and to exclude parameter values outside the identified set with probability approaching one. Monte Carlo experiments for a conditional stochastic dominance example and a random-coefficients binary-outcome example support the theoretical results.
Donald W.K. Andrews
Xiaoxia Shi
Asymptotic size, Conditional moment inequalities, Confidence set, Many moments, Multiple equilibria, Partial identification, Random coefficients, Stochastic dominance, Test
2015-07
The Impact of Citation Timing: A Framework and Examples
http://d.repec.org/n?u=RePEc:wai:econwp:16/04&r=ore
The literature on research evaluation has noted important differences in citation time patterns between disciplines, high and low ranked journals and types of publications. Delays in the receipt of citations suggest that the diffusion of knowledge following discovery is slower, and may thus be associated with a decrease in the impact of research. This paper provides a framework for the comparison of different citation time patterns. Using principles drawn from the literature on stochastic dominance we show that comparisons of time patterns can be based on the general characteristics of cost of delay functions. When a particular function is used to represent the cost of delay, the magnitude of the impact of differences in citation time patterns can be assessed using simple exponential discounting. We demonstrate the application of this framework in assessing different citation time patterns by applying it to comparisons of 10-year citation records for: leading journals in economics, different business subject areas, journals in economics compared with those in neuroscience and the research output of individual economists.
David L. Anderson
John Tressler
citations; citation time patterns; discounting citations; research measurement
2016-04-30
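The exponential-discounting comparison of citation time patterns described in the abstract amounts to a one-line computation. The two 10-year records below are invented for illustration and do not come from the paper.

```python
# Two hypothetical 10-year citation records with the same total (50 citations)
# but different timing: one cited early, one cited late.
fast = [15, 10, 8, 6, 4, 3, 2, 1, 1, 0]
slow = [0, 1, 1, 2, 3, 4, 6, 8, 10, 15]

def discounted_citations(record, delta=0.9):
    # Discounted citation count: a citation received in year t is worth delta**t,
    # so delays in the receipt of citations reduce measured impact.
    return sum(c * delta**t for t, c in enumerate(record))

fast_value = discounted_citations(fast)
slow_value = discounted_citations(slow)
```

Under any discount factor `delta` in (0, 1) the early-cited record receives the higher value, which is the sense in which slower diffusion of knowledge lowers the assessed impact of research.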
Estimation and filtering of nonlinear MS-DSGE models
http://d.repec.org/n?u=RePEc:hig:wpaper:136/ec/2016&r=ore
This article proposes and compares the properties of several nonlinear Markov-switching filters. Two are sigma-point filters: the Markov-switching central difference Kalman filter (MSCDKF) and MSCDKFA. Two are Gaussian assumed filters: the Markov-switching quadratic Kalman filter (MSQKF) and MSQKFA. A small-scale financial MS-DSGE model is used for the tests. MSQKF greatly outperforms the other filters in terms of computational cost, and it is also first or second best according to most measures of filtering quality (including the quality of quasi-maximum likelihood estimation based on the filter, and the RMSE and LPS of the unobserved variables).
Sergey Ivashchenko
regime switching, second-order approximation, non-linear MS-DSGE estimation, MSQKF, MSCDKF
2016
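The filters compared in the paper are Markov-switching extensions of nonlinear Kalman filtering. As background only, here is a sketch of one predict/update step of the standard linear Kalman filter; the paper's MSQKF and MSCDKF variants replace these moment calculations with quadratic and central-difference approximations and mix over regimes, which is not shown here. All numbers are invented for the sketch.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of the standard linear Kalman filter.
    State:       x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
    Observation: y_t = C x_t + v_t,      v_t ~ N(0, R)
    """
    # Prediction
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Measurement update
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative scalar system.
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.1]])
x, P = np.zeros(1), np.eye(1)
for y in [0.5, 0.3, -0.2, 0.1]:
    x, P = kalman_step(x, P, np.array([y]), A, C, Q, R)
```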
Inference on nonstationary time series with moving mean
http://d.repec.org/n?u=RePEc:ehl:lserod:66509&r=ore
A semiparametric model is proposed in which a parametric filtering of a nonstationary time series, incorporating fractionally differencing with short memory correction, removes correlation but leaves a nonparametric deterministic trend. Estimates of the memory parameter and other dependence parameters are proposed, and shown to be consistent and asymptotically normally distributed with parametric rate. Tests with standard asymptotics for I(1) and other hypotheses are thereby justified. Estimation of the trend function is also considered. We include a Monte Carlo study of finite-sample performance.
Jiti Gao
Peter M. Robinson
2014-12
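The fractional differencing filter (1 - L)^d at the core of such semiparametric models has a simple recursive implementation via its binomial expansion. Truncating the filter at the start of the sample, as below, is one common convention and not necessarily the paper's; the short memory correction is omitted.

```python
import numpy as np

def frac_diff_weights(d, n):
    # Coefficients of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    # Apply (1 - L)^d to a series x, truncating the filter at the sample start.
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])
```

As a sanity check, d = 1 reproduces ordinary first differences (weights 1, -1, 0, 0, ...), while a fractional d in (0, 1) gives slowly decaying weights, the source of long memory.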
Forecasting with Neural Networks Models.
http://d.repec.org/n?u=RePEc:ulp:sbbeta:2016-28&r=ore
This paper deals with the so-called feedforward neural network model, which we consider from a statistical and econometric viewpoint. We show how this model can be estimated by maximum likelihood. Finally, we apply the ANN methodology to model demand for electricity in South Africa. A comparison of forecasts based on a linear model and an ANN model, respectively, shows the usefulness of the latter.
Francis Bismans
Igor N. Litvine
Artificial neural networks (ANN), electricity consumption, forecasting, linear and non-linear models, recessions.
2016
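The forecast function of a feedforward network like the one the paper estimates can be sketched as a single hidden layer mapping lagged observations to a point forecast. The weights below are random placeholders purely to show the functional form; in the paper they would be estimated (by maximum likelihood), and the lag structure and activation are assumptions of this sketch.

```python
import numpy as np

def ann_forecast(x, W1, b1, W2, b2):
    # One-hidden-layer feedforward net: yhat = W2 @ tanh(W1 @ x + b1) + b2.
    h = np.tanh(W1 @ x + b1)   # hidden-layer activations
    return W2 @ h + b2

# Hypothetical weights: 3 lagged observations -> 2 hidden units -> 1 forecast.
rng = np.random.default_rng(1)
W1 = 0.5 * rng.standard_normal((2, 3)); b1 = np.zeros(2)
W2 = 0.5 * rng.standard_normal((1, 2)); b2 = np.array([100.0])

x_lags = np.array([98.0, 101.0, 103.0])   # last three demand observations
yhat = ann_forecast(x_lags, W1, b1, W2, b2)
```

A linear forecasting model is the special case in which the hidden-layer nonlinearity is replaced by the identity, which is what makes the linear-versus-ANN comparison in the paper well posed.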
Alternative Approaches for Rating INDCs: a Comparative Analysis
http://d.repec.org/n?u=RePEc:fem:femwpa:2016.18&r=ore
The “Intended nationally determined contributions” (INDCs) communicated by both developing and developed countries represent a crucial element of the Paris agreement. This paper aims at analysing the INDCs submitted by Parties, through the different tools and approaches proposed by the research community. In particular, our analysis looks at the different ways to assess the effectiveness of the proposed emission reduction pledges, both in terms of aggregate and national efforts. However, we also consider other factors that will be critical in determining the success of the Paris talks, such as the coherence and fairness of single contributions.
Marinella Davide
Paola Vesco
Paris Agreement, INDCs, Mitigation, Ambition, Efficiency, Equity, Carbon Budget
2016-03
The Formalization of a Generic Trading Company Model Using Software Agents as Active Elements
http://d.repec.org/n?u=RePEc:opa:wpaper:0029&r=ore
Business environment simulation often requires unique knowledge based on the modeler's experience. However, even experience-based simulation models need some degree of abstraction and formalization in order to achieve the expected results. Business process simulation models usually incorporate several essential components, such as a model of the trading functions that reflect customer behavior, procurement functions for modeling company inputs, and the optimization of production and logistics functions. When modeling management decisions, a management function model with a loopback to company economic outputs is also needed. As a solid foundation for such complex business simulations, an abstract multi-agent architecture of a trading company model is proposed. The abstract model is inhabited with active entities, software agents, and a method of registering their actions in a simulation run log is proposed. This approach combines the basic notions of abstract agent architectures with process mining methodology. Finally, the correctness of our software-agent system is verified, and a validation is provided showing that the proposed system fits the real company output data.
Dominik Vymetal
Sohei Ito
formal models, business process simulation, software agents, process mining, behavioral patterns
2016-04-25
Information Disclosure under Strategy-proof Voting Rules
http://d.repec.org/n?u=RePEc:bge:wpaper:904&r=ore
We consider collective decision problems where some agents have private information about alternatives and others do not. Voting takes place under strategy-proof rules. Prior to voting, informed agents may or may not disclose their private information, thus eventually influencing the preferences of those initially uninformed. We provide general conditions on the voting rules guaranteeing that informed agents will always be induced to disclose what they know. In particular, we apply this general result to environments where agents' preferences are restricted to be single-peaked or separable, and characterize the strategy-proof rules that ensure information disclosure in these settings.
Salvador Barberà
Antonio Nicolò
strategy-proofness, information disclosure, voting rules, Single-peaked preferences, Committees
2016-05
Comparing Predictive Accuracy under Long Memory - With an Application to Volatility Forecasting
http://d.repec.org/n?u=RePEc:aah:create:2016-17&r=ore
This paper extends the popular Diebold-Mariano test to situations when the forecast error loss differential exhibits long memory. It is shown that this situation can arise frequently, since long memory can be transmitted from forecasts and the forecast objective to forecast error loss differentials. The nature of this transmission mainly depends on the (un)biasedness of the forecasts and whether the involved series share common long memory. Further results show that the conventional Diebold-Mariano test is invalidated under these circumstances. Robust statistics based on a memory and autocorrelation consistent estimator and an extended fixed-bandwidth approach are considered. The subsequent Monte Carlo study provides a novel comparison of these robust statistics. As empirical applications, we conduct forecast comparison tests for the realized volatility of the Standard and Poor's 500 index among recent extensions of the heterogeneous autoregressive model. While we find that forecasts improve significantly if jumps in the log-price process are considered separately from continuous components, improvements achieved by the inclusion of implied volatility turn out to be insignificant in most situations.
Robinson Kruse
Christian Leschinski
Michael Will
Equal Predictive Ability, Long Memory, Diebold-Mariano Test, Long-run Variance Estimation, Realized Volatility
2016-05-19
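For reference, the conventional Diebold-Mariano statistic that the paper extends can be sketched as follows. This is the baseline short-memory version with a simple variance estimate for one-step-ahead forecasts; the paper's point is precisely that this version is invalidated when the loss differential has long memory, in which case its memory and autocorrelation consistent variants are needed instead.

```python
import math
import numpy as np

def diebold_mariano(e1, e2):
    """Conventional DM test of equal predictive accuracy under squared-error
    loss for one-step-ahead forecasts (short-memory variance estimate)."""
    d = e1**2 - e2**2                               # forecast error loss differential
    n = len(d)
    dm = d.mean() / math.sqrt(d.var(ddof=1) / n)    # asymptotically N(0,1)
    pval = math.erfc(abs(dm) / math.sqrt(2))        # two-sided normal p-value
    return dm, pval

# Illustrative data: forecaster 1 has uniformly larger errors than forecaster 2.
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
dm, pval = diebold_mariano(2.0 * z, z)
```

Under long memory in `d`, the denominator above no longer estimates the long-run variance at the right rate, so the statistic diverges from N(0,1) and the test over- or under-rejects.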
Coalitional Fairness with Participation Rates
http://d.repec.org/n?u=RePEc:sef:csefwp:442&r=ore
This paper investigates coalitional fairness in pure exchange economies with asymmetric information. We study allocations of resources which are immune from envy when comparisons take place between coalitions. The model allows negligible and non-negligible traders, only partially informed about the true state of nature at the time of consumption, to exchange any number, possibly infinite, of commodities. Our analysis is based on the Aubin approach to coalitions and cooperation, i.e. on a notion of cooperation allowing traders to take part in one or more coalitions simultaneously, employing only shares of their endowments (participation rates). We introduce and study in detail the notion of coalition fairness with participation rates (or Aubin c-fairness) and show that this flexibility in cooperation makes it possible to remedy the failure of the fairness properties of equilibrium allocations. Our results provide applications to several market outcomes (ex-post core, fine core, ex-post competitive equilibria, rational expectations equilibria) and emphasize the consequences of the convexification effect due to participation rates for models with large traders and infinitely many commodities.
Achille Basile
Maria Gabriella Graziano
Ciro Tarantino
Aubin coalitions; Fairness; Asymmetric information; Core; Rational expectations equilibria; Lyapunov convexity theorem
2016-05-17
Macroeconomic policy in DSGE and agent-based models redux: new developments and challenges ahead
http://d.repec.org/n?u=RePEc:fce:doctra:16011&r=ore
The Great Recession seems to be a natural experiment for economic analysis, in that it has shown the inadequacy of the predominant theoretical framework, the New Neoclassical Synthesis (NNS), grounded on the DSGE model. In this paper, we present a critical discussion of the theoretical, empirical and political-economy pitfalls of the DSGE-based approach to policy analysis. We suggest that a more fruitful research avenue should escape the strong theoretical requirements of NNS models (e.g., equilibrium, rationality, representative agent, etc.) and consider the economy as a complex evolving system, i.e. as an ecology populated by heterogeneous agents, whose far-from-equilibrium interactions continuously change the structure of the system. This is indeed the methodological core of agent-based computational economics (ACE), which is presented in this paper. We also discuss how ACE has been applied to policy analysis issues, and we provide a survey of macroeconomic policy applications (fiscal and monetary policy, bank regulation, labor market structural reforms and climate change interventions). Finally, we conclude by discussing the methodological status of ACE, as well as the problems it raises.
G. Fagiolo
A. Roventini
Economic policy, New neoclassical synthesis, new keynesian models, DSGE models, Agent-based computational economics, agent based models, complexity theory, Great recession, Crisis.
2016-04
Coalitional Extreme Desirability in Finitely Additive Economies with Asymmetric Information
http://d.repec.org/n?u=RePEc:pra:mprapa:71084&r=ore
We prove a coalitional core-Walras equivalence theorem for an asymmetric information exchange economy with a finitely additive measure space of agents, finitely many states of nature, and an infinite dimensional commodity space having the Radon-Nikodym property and whose positive cone has possibly empty interior. The result is based on a new cone condition, first developed in Centrone and Martellotti (2015), called coalitional extreme desirability. As a consequence, we also derive a new individualistic core-Walras equivalence result.
Bhowmik, Anuj
Centrone, Francesca
Martellotti, Anna
Asymmetric information; Coalitional economies; Core-Walras equivalence; Extremely desirable commodity; Finitely additive measure; Walrasian expectation equilibria; Private core; Radon-Nikodym property.
2016-05-04
Distribution Model of Manufactured Products
http://d.repec.org/n?u=RePEc:clj:icmmae:1403&r=ore
In this paper we present a model for the distribution of manufactured products such that the total cost of transport is minimized. The model can be applied to a number of units F that carry goods from distribution centers Cj. The plan allows the transport schedule to be developed depending on the parameter Cj.
Gratiela Boca
Rita Toader
Cristian Anghel
Diana Toader
cost, products, distribution, transport, parameter, minimum
2014-06
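A minimum-cost distribution plan of this kind is a classical transportation linear program. The instance below (two supplying units, three distribution centers, invented costs and quantities) is purely illustrative and is not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical balanced instance: 2 supply units, 3 distribution centers.
supply = [30, 40]               # units available at each source
demand = [20, 25, 25]           # units required at each center
cost = np.array([[4, 6, 9],     # unit transport cost, source i -> center j
                 [5, 3, 7]])

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):              # each source ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):              # each center receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
plan = res.x.reshape(m, n)      # optimal shipment quantities
total_cost = res.fun            # minimized total transport cost
```

For this instance the optimum ships 20 units on the cheapest route into C1 and splits the remainder, for a total cost of 350.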
Asset bubbles and efficiency in a generalized two-sector model
http://d.repec.org/n?u=RePEc:mse:cesdoc:16029&r=ore
We consider a multi-sector infinite-horizon general equilibrium model in which asset supply is endogenous. The issues of equilibrium existence, efficiency, and bubble emergence are addressed. We show how different assets give rise to very different rational bubbles, and we point out that efficient bubbly equilibria may exist.
Stefano Bosi
Cuong Le Van
Ngoc-Sang Pham
infinite horizon; general equilibrium; aggregate good bubble; capital good bubble; efficiency
2016-03
Fractionality and co-fractionality between Government Bond yields
http://d.repec.org/n?u=RePEc:ssb:dispap:838&r=ore
In a co-fractional vector autoregressive (VAR) model, two more parameters are estimated than in the traditional cointegrated VAR model. The increased number of parameters that need to be estimated leads to identification problems: there is no unique formulation of a co-fractional system, though usually one formulation is preferred. This paper makes the following contributions: (i) it discusses different kinds of identification problems in co-fractional VAR models; (ii) it proposes a specification test for higher-order fractional processes; (iii) it presents an Ox program that can be used for estimating and testing co-fractional systems; and (iv) it uses the above contributions to analyse a system of Government Bond yields in the US and Norway, where the results indicate that the level and trend of the yield curve have longer memory than the curvature (i.e., the linear combination of Government Bond yields that represents the curvature of the yield curve is a co-fractional relationship).
Håvard Hungnes
Fractional cointegration
2016-04