Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
Operations Research
2022-01-03
A Dynamic Theory Of Spatial Externalities
http://d.repec.org/n?u=RePEc:ctl:louvir:2021028&r=&r=ore
This work targets the class of spatiotemporal problems with free riding under natural diffusion (pollution, epidemics, etc.) and spatial externalities. Such a class leads to the study of a family of differential games in continuous time and space. In the fundamental pollution free-riding problem, we develop a strategy to solve the associated game completely, contributing to the debate on environmental federalism. We depart from the preexisting literature in several respects. First, instead of assuming ad hoc pollution diffusion schemes across space, we consider a realistic spatiotemporal law of motion for pollution (diffusion and advection). Second, we tackle spatiotemporal non-cooperative (and cooperative) differential games, whereas the related literature considers static games. Precisely, we consider a circle partitioned into several states where a local authority decides autonomously about its investment, production, and depollution strategies over time, knowing that investment/production generates pollution and that pollution is transboundary. The time horizon is infinite. Third, we allow for a rich set of geographic heterogeneities across states, while the literature assumes identical states. We solve analytically the induced non-cooperative differential game under decentralization and fully characterize the resulting long-term spatial distributions. In particular, we prove that there exists a Markov Perfect Equilibrium, unique within the class of affine feedbacks. We further provide a full exploration of the free-riding problem, reflected in the so-called border effects. Finally, we explore how geographic discrepancies (the most elementary being the asymmetry of players) affect the shape of the border effects. We check in particular that our model is consistent with the set of stylized facts put forward by the related empirical literature.
Raouf Boucekkine
Giorgio Fabbri
Salvatore Federico
Fausto Gozzi
Spatial externalities, spatial diffusion, differential games in continuous time and space, infinite dimensional optimal control problems, environmental federalism
2021-11-18
Structured Additive Regression and Tree Boosting
http://d.repec.org/n?u=RePEc:chf:rpseri:rp2183&r=&r=ore
Structured additive regression (STAR) models are a rich class of regression models that include the generalized linear model (GLM) and the generalized additive model (GAM). STAR models can be fitted by Bayesian approaches, component-wise gradient boosting, penalized least-squares, and deep learning. Using feature interaction constraints, we show that such models can also be implemented by the gradient boosting powerhouses XGBoost and LightGBM, thereby benefiting from their excellent predictive capabilities. Furthermore, we show how STAR models can be used for supervised dimension reduction and explain under what circumstances covariate effects of such models can be described in a transparent way. We illustrate the methodology with case studies pertaining to house price modeling, with very encouraging results regarding both interpretability and predictive performance.
Michael Mayer
Steven C. Bourassa
Martin Hoesli
Donato Scognamiglio
machine learning, structured additive regression, gradient boosting, interpretability, transparency
2021-09
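The key device in the abstract above, feature interaction constraints that restrict every tree to a single feature so the boosted ensemble stays additive, can be mimicked with a stdlib-only sketch. The toy data, stump learner, and hyperparameters below are illustrative assumptions rather than the paper's setup; in XGBoost or LightGBM the same restriction is imposed through their interaction-constraint options.

```python
import random

# Toy additive data: y = 3*x0 + x1**2, no interaction between features.
random.seed(0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(100)]
y = [3 * a + b * b for a, b in X]

def best_stump(X, resid, j):
    """Best single-feature split: (SSE, threshold, left mean, right mean)."""
    vals = sorted({row[j] for row in X})
    best = None
    for lo, hi in zip(vals, vals[1:]):
        t = (lo + hi) / 2
        left = [r for row, r in zip(X, resid) if row[j] <= t]
        right = [r for row, r in zip(X, resid) if row[j] > t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - ml) ** 2 for r in left)
               + sum((r - mr) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best

def fit_additive_boost(X, y, n_rounds=60, lr=0.3):
    """Component-wise boosting: every stump may use only ONE feature,
    which is what an interaction constraint like [[0], [1]] enforces.
    The resulting ensemble is a sum of one-feature functions (a GAM)."""
    pred = [0.0] * len(y)
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        # greedily pick the single feature whose stump fits the residual best
        stumps = [best_stump(X, resid, j) + (j,) for j in range(len(X[0]))]
        sse, t, ml, mr, j = min(stumps)
        pred = [p + lr * (ml if row[j] <= t else mr)
                for p, row in zip(pred, X)]
    return pred

pred = fit_additive_boost(X, y)
mse = sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / len(y)
```

Because each stump depends on a single covariate, the fitted ensemble decomposes into per-feature partial effects that can be plotted and read off like a GAM, which is the transparency point the abstract makes.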
Approximating Bayes in the 21st Century
http://d.repec.org/n?u=RePEc:msh:ebswps:2021-24&r=&r=ore
The 21st century has seen an enormous growth in the development and use of approximate Bayesian methods. Such methods produce computational solutions to certain 'intractable' statistical problems that challenge exact methods like Markov chain Monte Carlo: for instance, models with unavailable likelihoods, high-dimensional models, and models featuring large data sets. These approximate methods are the subject of this review. The aim is to help new researchers in particular -- and more generally those interested in adopting a Bayesian approach to empirical work -- distinguish between different approximate techniques; understand the sense in which they are approximate; appreciate when and why particular methods are useful; and see the ways in which they can be combined.
Gael M. Martin
David T. Frazier
Christian P. Robert
Approximate Bayesian inference, intractable Bayesian problems, approximate Bayesian computation, Bayesian synthetic likelihood, variational Bayes, integrated nested Laplace approximation
2021
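The simplest member of this family, approximate Bayesian computation by rejection, fits in a few lines: the sketch below estimates a normal location parameter while treating the likelihood as if it could only be simulated from, never evaluated. The prior, sample sizes, and tolerance are illustrative choices, not taken from the review.

```python
import random, statistics

random.seed(1)
obs = [random.gauss(2.0, 1.0) for _ in range(100)]   # "observed" data
s_obs = statistics.mean(obs)                         # summary statistic

def abc_rejection(n_draws=20000, eps=0.05):
    """ABC rejection: draw from the prior, simulate data, and keep the
    draw if the simulated summary lands within eps of the observed one.
    The likelihood is never evaluated -- only simulated from."""
    accepted = []
    for _ in range(n_draws):
        mu = random.uniform(-10, 10)                 # prior draw
        sim_mean = statistics.mean(
            random.gauss(mu, 1.0) for _ in range(100))
        if abs(sim_mean - s_obs) < eps:
            accepted.append(mu)
    return accepted

post = abc_rejection()
post_mean = statistics.mean(post)
```

The accepted draws approximate the posterior given the summary statistic; shrinking eps sharpens the approximation at the cost of a lower acceptance rate, the basic trade-off behind all ABC variants.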
Identification Of Mixtures Of Dynamic Discrete Choices
http://d.repec.org/n?u=RePEc:tse:wpaper:126197&r=&r=ore
This paper provides new identification results for finite mixtures of Markov processes. Our arguments are constructive and show that identification can be achieved from knowledge of the cross-sectional distribution of three (or more) effective time-series observations under simple conditions. Our approach is contrasted with the ones taken in prior work by Kasahara and Shimotsu (2009) and Hu and Shum (2012). Most notably, monotonicity restrictions that link conditional distributions to latent types are not needed. Maximum likelihood is considered for the purpose of estimation and inference. Implementation via the EM algorithm is straightforward. Its performance is evaluated in a simulation exercise.
Higgins, Ayden
Jochmans, Koen
Discrete choice; heterogeneity; Markov process; mixture; state dependence
2021-11-30
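The estimation route the abstract mentions, maximum likelihood implemented via the EM algorithm, can be sketched for a two-type mixture of two-state Markov chains. The transition matrices, panel dimensions, and initialization below are invented for illustration; the paper's identification argument itself is not reproduced.

```python
import random
from math import exp, log

random.seed(2)
S, T, N = 2, 10, 400          # states, panel length, individuals

# Two latent types with different transition matrices.
P_true = [[[0.9, 0.1], [0.2, 0.8]],   # type 0: persistent
          [[0.3, 0.7], [0.7, 0.3]]]   # type 1: mean-reverting

def simulate(P):
    x = [random.randrange(S)]
    for _ in range(T - 1):
        x.append(0 if random.random() < P[x[-1]][0] else 1)
    return x

data = [simulate(P_true[random.random() < 0.5]) for _ in range(N)]

def loglik(x, P):
    return sum(log(P[a][b]) for a, b in zip(x, x[1:]))

def em(data, iters=50):
    # deliberately asymmetric start so the two components can separate
    P = [[[0.6, 0.4], [0.4, 0.6]], [[0.4, 0.6], [0.6, 0.4]]]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability of each latent type, per unit
        W = []
        for x in data:
            l = [log(pi[k]) + loglik(x, P[k]) for k in range(2)]
            m = max(l)
            w = [exp(v - m) for v in l]
            W.append([v / sum(w) for v in w])
        # M-step: mixing weights and weighted transition frequencies
        pi = [sum(w[k] for w in W) / N for k in range(2)]
        P = []
        for k in range(2):
            C = [[1e-9] * S for _ in range(S)]
            for x, w in zip(data, W):
                for a, b in zip(x, x[1:]):
                    C[a][b] += w[k]
            P.append([[c / sum(row) for c in row] for row in C])
    return pi, P

pi_hat, P_hat = em(data)
```

With N individuals and T periods each, the weighted transition counts recover the two matrices up to the usual label switching, which is why any check on the output must compare components after sorting.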
Deep Hedging under Rough Volatility
http://d.repec.org/n?u=RePEc:chf:rpseri:rp2188&r=&r=ore
We investigate the performance of the Deep Hedging framework under training paths beyond the (finite-dimensional) Markovian setup. In particular, we analyse the hedging performance of the original architecture under rough volatility models, with a view to existing theoretical results for those. Furthermore, we suggest parsimonious but suitable network architectures capable of capturing the non-Markovianity of time series. Finally, we analyse the hedging behaviour in these models in terms of P&L distributions and draw comparisons to jump diffusion models when the rebalancing frequency is realistically low.
Blanka Horvath
Josef Teichmann
Zan Zuric
Imperfect Hedging, Derivatives Pricing, Derivatives Hedging, Deep Learning, Rough Volatility
2021-02
Asymptotics for Time-Varying Vector MA(∞) Processes
http://d.repec.org/n?u=RePEc:msh:ebswps:2021-22&r=&r=ore
Moving average infinity (MA(∞)) processes play an important role in modeling time series data. While a strand of the literature on time series analysis emphasizes the importance of modeling smooth changes over time and is therefore shifting its focus from parametric models to nonparametric ones, MA(∞) processes with constant parameters are often part of the fundamental data generating mechanism. Along this line of research, an intuitive question is how to allow the underlying data generating mechanism to evolve over time. To better capture the dynamics, this paper considers a new class of time-varying vector moving average infinity (VMA(∞)) processes. Accordingly, we establish new asymptotic properties, including a law of large numbers, uniform convergence, a central limit theory, bootstrap consistency, and long-run covariance matrix estimation for the class of time-varying VMA(∞) processes. Finally, we demonstrate the empirical relevance and usefulness of the newly proposed model and estimation theory through extensive simulated and real data studies.
Yayi Yan
Jiti Gao
Bin Peng
multivariate time series, nonparametric kernel estimation, time-varying Beveridge–Nelson decomposition
2021
Comment on Giacomini, Kitagawa and Read's 'Narrative Restrictions and Proxies'
http://d.repec.org/n?u=RePEc:fip:feddwp:93526&r=&r=ore
In a series of recent studies, Raffaella Giacomini and Toru Kitagawa have developed an innovative new methodological approach to estimating sign-identified structural VAR models that seeks to build a bridge between Bayesian and frequentist approaches in the literature. Their latest paper with Matthew Read contains thought-provoking new insights about modeling narrative restrictions in sign-identified structural VAR models. My discussion puts their contribution into the context of Giacomini and Kitagawa’s broader research agenda and relates it to the larger literature on estimating structural VAR models subject to sign restrictions.
Lutz Kilian
Structural VAR; single prior; multiple prior; posterior; joint inference; impulse response; narrative restrictions
2021-12-17
The Central Influencer Theorem: Spatial Voting Contests with Endogenous Coalition Formation
http://d.repec.org/n?u=RePEc:yon:wpaper:2021rwp-193&r=&r=ore
We analyze a spatial voting contest without the “one person, one vote” restriction. Players exert continuous influence effort and incur costs accordingly. They can be heterogeneous in terms of position, disutility function, and cost function. In equilibrium, two groups endogenously emerge: players in one group try to implement a more leftist policy, while those in the other group a more rightist one. Since the larger group suffers from a more severe free-riding problem, the equilibrium policy does not converge to the center unless the larger group has a cost advantage. We demonstrate how the location of the center (i.e., the steady-state point) depends on the convexities of the utility and cost functions. We extend the model to a dynamic setting.
Subhasish M. Chowdhury
Sang-Hyun Kim
Spatial Competition; Contest; Lobbying; Median Voter Theorem
2021-12
The Fairness of Credit Scoring Models
http://d.repec.org/n?u=RePEc:leo:wpaper:2912&r=&r=ore
Christophe HURLIN
Christophe PERIGNON
Sébastien SAURIN
Discrimination, Credit markets, Machine Learning, Artificial intelligence
2021
Noisy coding of time and reward discounting
http://d.repec.org/n?u=RePEc:rug:rugwps:21/1036&r=&r=ore
I present a model generating delay-discounting from noisy mental representations of time delays. The optimal combination of noisy signals about time delays with prior information results in a stochastic model predicting discounting to exhibit present-bias, but to be stationary and delay-dependent once up-front delays are introduced. The derivation from an optimal encoding-decoding process sidesteps arbitrariness concerns voiced about earlier models. Data collected in an experiment support the need for separate but interacting parameters to capture present-bias and delay-dependence. The account explains why non-trivial discounting is routinely observed in experiments using monetary rewards instead of consumption.
Ferdinand M. Vieider
2021-12
Realized GARCH, CBOE VIX, and the Volatility Risk Premium
http://d.repec.org/n?u=RePEc:arx:papers:2112.05302&r=&r=ore
We show that the Realized GARCH model yields closed-form expressions for both the Volatility Index (VIX) and the volatility risk premium (VRP). The Realized GARCH model is driven by two shocks, a return shock and a volatility shock, and these are natural state variables in the stochastic discount factor (SDF). The volatility shock endows the exponentially affine SDF with a compensation for volatility risk. This leads to dissimilar dynamic properties under the physical and risk-neutral measures that can explain time-variation in the VRP. In an empirical application with the S&P 500 returns, the VIX, and the VRP, we find that the Realized GARCH model significantly outperforms conventional GARCH models.
Peter Reinhard Hansen
Zhuo Huang
Chen Tong
Tianyi Wang
2021-12
Estimating initial conditions for dynamical systems with incomplete information
http://d.repec.org/n?u=RePEc:amz:wpaper:2021-20&r=&r=ore
In this paper we study the problem of inferring the initial conditions of a dynamical system under incomplete information. Studying several model systems, we infer the latent microstates that best reproduce an observed time series when the observations are sparse, noisy, and aggregated under a (possibly) nonlinear observation operator. This is done by minimizing the least-squares distance between the observed time series and a model-simulated time series using gradient-based methods. We validate this method for the Lorenz and Mackey-Glass systems by making out-of-sample predictions. Finally, we analyze the predictive power of our method as a function of the number of observations available. We find a critical transition for the Mackey-Glass system, beyond which it can be initialized with arbitrary precision.
Farmer, J. Doyne
Kolic, Blas
Sabuco, Juan
2021-09
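The recipe described above, simulating from a candidate microstate and minimizing the least-squares distance to sparse noisy observations with gradient-based methods, can be sketched on the one-dimensional logistic map. The map, observation times, noise level, and optimizer settings are illustrative assumptions; the paper works with the Lorenz and Mackey-Glass systems.

```python
import random

def logistic_traj(x0, r=3.0, T=12):
    """Simulate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    traj = [x0]
    for _ in range(T - 1):
        traj.append(r * traj[-1] * (1 - traj[-1]))
    return traj

random.seed(3)
true_x0 = 0.23
obs_idx = [1, 3, 5, 7, 9]                      # sparse observation times
truth = logistic_traj(true_x0)
obs = {t: truth[t] + random.gauss(0, 0.002) for t in obs_idx}

def loss(x0):
    traj = logistic_traj(x0)
    return sum((traj[t] - obs[t]) ** 2 for t in obs_idx)

# Coarse scan to land in the right basin, then finite-difference gradient
# descent with backtracking to refine the latent initial condition.
x0 = min((i / 1000 for i in range(1, 1000)), key=loss)
h = 1e-6
for _ in range(100):
    g = (loss(x0 + h) - loss(x0 - h)) / (2 * h)
    step = 0.1
    while step > 1e-12 and loss(x0 - step * g) > loss(x0):
        step /= 2
    x0 -= step * g

# The first logistic step hides the sign of x0 - 1/2, so the microstate is
# recovered only up to the symmetry x0 <-> 1 - x0 unless t = 0 is observed.
err = min(abs(x0 - true_x0), abs(x0 - (1 - true_x0)))
```

The residual symmetry in the last comment is a toy instance of the paper's theme: with incomplete observations, some microstates are only identified up to an equivalence class.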
Interactive Effects Panel Data Models with General Factors and Regressors
http://d.repec.org/n?u=RePEc:msh:ebswps:2021-23&r=&r=ore
This paper considers a model with general regressors and unobservable factors. An estimator based on iterated principal components is proposed, which is shown to be not only asymptotically normal and oracle efficient, but under certain conditions also free of the otherwise so common asymptotic incidental parameters bias. Interestingly, the conditions required to achieve unbiasedness become weaker the stronger the trends in the factors, and if the trending is strong enough unbiasedness comes at no cost at all. In particular, the approach does not require any knowledge of how many factors there are, or whether they are deterministic or stochastic. The order of integration of the factors is also treated as unknown, as is the order of integration of the regressors, which means that there is no need to pre-test for unit roots, or to decide on which deterministic terms to include in the model.
Bin Peng
Liangjun Su
Joakim Westerlund
Yanrong Yang
panel data, non-stationarity, principal components, interactive effects
2021
Information Flows and Memory in Games
http://d.repec.org/n?u=RePEc:igi:igierp:678&r=&r=ore
We propose that the mathematical representation of situations of strategic interaction, i.e., of games, should separate the description of the rules of the game from the description of players’ personal traits. Yet, we note that the standard extensive-form partitional representation of information in sequential games does not comply with this separation principle. We offer an alternative representation that extends to all (finite) sequential games the approach adopted in the theory of repeated games with imperfect monitoring; that is, we describe the flow of information accruing to players rather than the stock of information retained by players, as encoded in information partitions. Mnemonic abilities can be represented independently of games. Assuming that players have perfect memory, our flow representation gives rise to information partitions satisfying perfect recall. Different combinations of rules about information flows and of players’ mnemonic abilities may give rise to the same information partition. All extensive-form representations with information partitions, including those featuring absentmindedness, can be generated by some such combinations.
Pierpaolo Battigalli
Nicolò Generoso
2021
Price-cost Margins and Fixed Costs
http://d.repec.org/n?u=RePEc:ete:ceswps:685401&r=&r=ore
Filip Abraham
Yannick Bormans
Jozef Konings
Werner Roeger
2021-12-09
Hub-and-spoke cartels: Theory and evidence from the grocery industry
http://d.repec.org/n?u=RePEc:qed:wpaper:1473&r=&r=ore
Numerous recently uncovered cartels operated along the supply chain, with firms at one end facilitating collusion at the other end: hub-and-spoke arrangements. These cartels are hard to rationalize because they induce double marginalization and higher costs. We examine Canada's alleged bread cartel and provide the first comprehensive analysis of hub-and-spoke collusion. We make three contributions: (i) using court documents and pricing data, we provide evidence that collusion existed at both ends of the supply chain; (ii) we show that collusion was effective, increasing inflation by about 40%; and (iii) we provide a model explaining why this form of collusion arose.
Robert Clark
Ig Horstmann
Jean-Francois Houde
antitrust, vertical collusion, grocery industry
2021-09
Asymmetries in Risk Premia, Macroeconomic Uncertainty and Business Cycles
http://d.repec.org/n?u=RePEc:rim:rimwps:21-25&r=&r=ore
A large literature suggests that the expected equity risk premium is countercyclical. Using a variety of different measures for this risk premium, we document that it also exhibits growth asymmetry, i.e. the risk premium rises sharply in recessions and declines much more gradually during the following recoveries. We show that a model with recursive preferences, in which agents cannot perfectly observe the state of current productivity, can generate the observed asymmetry in the risk premium. Key for this result are endogenous fluctuations in uncertainty which induce procyclical variations in agents' nowcast accuracy. In addition to matching moments of the risk premium, the model is also successful in generating the growth asymmetry in macroeconomic aggregates observed in the data, and in matching the cyclical relation between quantities and the risk premium.
Christoph Görtz
Mallory Yeromonahos
Risk Premium, Business cycles, Bayesian Learning, Asymmetry, Uncertainty, Nowcasting
2021-12
A changepoint analysis of exchange rate and commodity price risks for Latin American stock markets
http://d.repec.org/n?u=RePEc:grz:wpaper:2021-14&r=&r=ore
Focusing on countries whose economies are exposed to fluctuations in commodity prices and exchange rates, we study the vulnerability of these stock market returns to exchange rate and commodity price shocks. Methodologically, we rely on non-parametric structural break tests and we allow for multiple changepoints in the volatilities of the different variables and for distinct breaks in the dependence between the series. This approach allows separating changes in country- and commodity-specific risk from changes in the degree of spillover. The return distributions are modeled using a Copula-GARCH model incorporating the estimated changepoints and we measure risk-spillovers with the conditional Value-at-Risk. We find evidence for various changepoints at different points in time, implying changes in risk and spillovers. In particular, there is evidence of increased spillover risk after the outbreak of the global financial crisis in 2008, but conditional risk is also high after the outbreak of Covid-19.
Hans Manner
Gabriel Rodriguez
Florian Stöckler
stock markets; commodity prices; changepoint analysis; volatility; dependence modeling; copula; CoVaR.
2021-12
Forecasting Regional GDPs: a Comparison with Spatial Dynamic Panel Data Models
http://d.repec.org/n?u=RePEc:fbk:wpaper:2021-02&r=&r=ore
The monitoring of the regional (provincial) economic situation is of particular importance due to the high level of heterogeneity and interdependences among different territories. Although econometric models allow for spatial and serial correlation of various kinds, the limited availability of territorial data restricts the set of relevant predictors at a more disaggregated level, especially for GDPs. This paper evaluates the predictive performance of a spatial dynamic panel data model with individual fixed effects and some relevant exogenous regressors by using data on total GVA for 103 Italian provinces (NUTS-3 level) over the period 2000-2016. A comparison with nested panel sub-specifications as well as pure temporal autoregressive specifications is also included. The main finding is that the spatial dynamic specification increases forecast accuracy more than its competitors throughout the out-of-sample period, recognizing an important role played by both space and time. However, when temporal cointegration is detected, the random walk specification is still to be preferred in some cases even in the presence of short panels.
Anna Gloria Billé
Alessio Tomelleri
Francesco Ravazzolo
Prediction, Spatial Correlation, Panel Data, Regional GVA forecasting
2021-12
Ambiguity, Long-Run Risks, and Asset Prices
http://d.repec.org/n?u=RePEc:fip:fedawp:93476&r=&r=ore
I generalize the long-run risks (LRR) model of Bansal and Yaron (2004) by incorporating recursive smooth ambiguity aversion preferences from Klibanoff et al. (2005, 2009) and time-varying ambiguity. Relative to the Bansal-Yaron model, the generalized LRR model is as tractable but more flexible due to its separation of ambiguity aversion from both risk aversion and the intertemporal elasticity of substitution. This three-way separation allows the model to further account for the variance premium puzzle besides the puzzles of the equity premium, the risk-free rate, and the return predictability. Specifically, the model matches reasonably well key asset-pricing moments with risk aversion under 5. Model calibration shows that the ambiguity aversion channel accounts for 77 percent of the variance premium and 40 percent of the equity premium.
Bin Wei
smooth ambiguity aversion; long-run risks; equity premium puzzle; risk-free rate puzzle; variance premium puzzle; return predictability
2021-09-08
Dividend Momentum and Stock Return Predictability: A Bayesian Approach
http://d.repec.org/n?u=RePEc:fip:fedawp:93480&r=&r=ore
A long tradition in macro finance studies the joint dynamics of aggregate stock returns and dividends using vector autoregressions (VARs), imposing the cross-equation restrictions implied by the Campbell-Shiller (CS) identity to sharpen inference. We take a Bayesian perspective and develop methods to draw from any posterior distribution of a VAR that encodes a priori skepticism about large amounts of return predictability while imposing the CS restrictions. In doing so, we show how a common empirical practice of omitting dividend growth from the system amounts to imposing the extra restriction that dividend growth is not persistent. We highlight that persistence in dividend growth induces a previously overlooked channel for return predictability, which we label "dividend momentum." Compared to estimation based on ordinary least squares, our restricted informative prior leads to a much more moderate, but still significant, degree of return predictability, with forecasts that are helpful out of sample and realistic asset allocation prescriptions with Sharpe ratios that outperform common benchmarks.
Juan Antolin-Diaz
Ivan Petrella
Juan F. Rubio-Ramirez
CS restrictions; Bayesian VAR; optimal allocation
2021-11-10
Accuracy in recursive minimal state space methods
http://d.repec.org/n?u=RePEc:cte:werepe:33753&r=&r=ore
The existence of a recursive minimal state space (MSS) representation is not always guaranteed. However, because of its numerical efficiency, this type of equilibrium is frequently used in practice. What are the consequences of computing and simulating a model without a constructive proof? To answer this question, we identify a condition which is associated with a convergent and computable MSS representation in an RBC model with state-contingent taxes. This condition ensures the existence of a benchmark equilibrium that can be used to test frequently used algorithms. To verify the accuracy of simulations even if this condition does not hold, we derive a closed-form recursive equilibrium which contains the MSS representation. Both benchmark representations are accurate and ergodic. We show that state-of-the-art algorithms, even if they are numerically convergent, may underestimate capital (and thus overestimate the benefits of capital taxes) by at least 65%, a figure which is in line with recent findings using accurate benchmarks. When an existence proof is not available, we find two sources of inaccuracy: the lack of a convergent operator and the absence of a well-defined (stochastic) steady state. Moreover, we identify a connection between the lack of convergence and the equilibrium budget constraint which implies that simulated paths may be distorted not only in the long run but also in any period. When we have a constructive proof, inaccuracy is generated by the lack of qualitative properties in the computed policy functions.
Pierri, Damian Rene
Accuracy; Recursive Equilibrium; State Contingent Fiscal Policy
2021-12-13
Long and short memory in dynamic term structure models
http://d.repec.org/n?u=RePEc:aah:create:2021-15&r=&r=ore
I provide a unified theoretical framework for long memory term structure models and show that the recent state-space approach suffers from a parameter identification problem. I propose a different framework to estimate long memory models in a state-space setup, which addresses the shortcomings of the existing approach. The proposed framework allows asymmetrically treating the physical and risk-neutral dynamics, which simplifies estimation considerably and helps to conduct an extensive comparison with standard term structure models. Relying on a battery of tests, I find that standard term structure models perform just as well as the more complicated long memory models and produce plausible term premium estimates.
Salman Huseynov
Dynamic term structure models, Long memory, Affine model, Shadow rate model
2021-12-20
How Does the Position in Business Group Hierarchies Affect Workers’ Wages?
http://d.repec.org/n?u=RePEc:bav:wpaper:213_eggerjahnkornitzky&r=&r=ore
We merge firm-level data on ownership linkages with administrative data on German workers to analyze how the position in a business group hierarchy affects workers’ wages. To acknowledge that ownership linkages are not one-directional, we propose an index of hierarchical distance to the ultimate owner that accounts for the complex network structure of business groups. After controlling for unobserved heterogeneity, we find a positive effect of larger hierarchical distance to the ultimate owner of a business group on workers’ wages. To explain this finding, we develop a monitoring-based theory of business groups. Our model predicts higher wages to prevent shirking by workers if a larger hierarchical distance to the ultimate owner is associated with lower monitoring efficiency.
Hartmut Egger
Elke Jahn
Stefan Kornitzky
Business groups, ownership networks, workers’ wages, difference-in-differences, hierarchical distance
2021-12
Employment Reconciliation and Nowcasting
http://d.repec.org/n?u=RePEc:gwc:wpaper:2021-007&r=&r=ore
The monthly release of employment data for the U.S. includes two different estimates from two different surveys. One is based on a survey of establishments (payroll) and the other is based on a survey of households. The presence of two different sources of information on broadly the same theoretical concept leads to an obvious question: can we combine the information to obtain an improved estimate of employment? In this paper we build on the research on combining different measures of output to instead combine different measures of employment. We construct a latent employment estimate which reconciles the information from the two separate surveys as well as incorporating the preliminary data revision process of the payroll data. We find that our reconciled latent employment series looks different than the initial release of payroll employment and is closer to the fully-revised data (benchmarked to a near census of employment), particularly during the Great Recession. Once we move to a real-time exercise, however, our findings suggest that the reconciled employment estimate is remarkably similar to the initial release of payroll employment with near zero weight on the household survey information.
Eiji Goto
Jan P.A.M. Jacobs
Tara M. Sinclair
Simon van Norden
employment, United States, real-time data, news, noise
2021-12
Unraveling the Exogenous Forces Behind Analysts’ Macroeconomic Forecasts
http://d.repec.org/n?u=RePEc:bdr:borrec:1184&r=&r=ore
Modern macroeconomics focuses on the identification of the primitive exogenous forces generating business cycles. This is at odds with macroeconomic forecasts collected through surveys, which are about endogenous variables. To address this divorce, our paper uses a semi-structural general equilibrium model as a multivariate filter to infer the shocks behind economic analysts’ forecasts and thus unravel their implicit macroeconomic stories. By interpreting all analysts’ forecasts through the same lenses, it is possible to understand the differences between projected endogenous variables as differences in the types and magnitudes of shocks. It also makes it possible to explain the market’s uncertainty about the future in terms of analysts’ disagreement about these shocks. The usefulness of the approach is illustrated by adapting the canonical SOE semi-structural model in Carabenciov et al. (2008a) to Colombia and then using it to filter forecasts from its Central Bank’s Monthly Expectations Survey during the COVID-19 crisis.
Marcela De Castro-Valderrama
Santiago Forero-Alvarado
Nicolás Moreno-Arias
Sara Naranjo-Saldarriaga
Macroeconomic expectations, Professional forecasters, Semi-structural model, Kalman smoother, Survey expectations
2021-12
Inference On A Distribution From Noisy Draws
http://d.repec.org/n?u=RePEc:tse:wpaper:126252&r=&r=ore
We consider a situation where the distribution of a random variable is being estimated by the empirical distribution of noisy measurements of that variable. This is common practice in, for example, teacher value-added models and other fixed-effect models for panel data. We use an asymptotic embedding where the noise shrinks with the sample size to calculate the leading bias in the empirical distribution arising from the presence of noise. The leading bias in the empirical quantile function is obtained as well. These calculations are new in the literature, where only results on smooth functionals such as the mean and variance have been derived. We provide both analytical and jackknife corrections that recenter the limit distribution and yield confidence intervals with correct coverage in large samples. Our approach can be connected to corrections for selection bias and shrinkage estimation and is to be contrasted with deconvolution. Simulation results confirm the much-improved sampling behavior of the corrected estimators. An empirical illustration on heterogeneity in deviations from the law of one price is also provided.
Jochmans, Koen
Weidner, Martin
2021-12-11
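The flavour of the correction can be conveyed with a stdlib sketch: the variance of the empirical distribution of noisy unit-level means is inflated by a term of order 1/m, and recomputing the estimate from half of the measurements allows a jackknife-style extrapolation to cancel that leading bias. All numbers below are illustrative, and the paper's corrections target the full empirical distribution and quantile function, not just the variance.

```python
import random, statistics

random.seed(4)
N, m, sigma = 4000, 8, 1.0
# Latent heterogeneity theta_i, observed only through m noisy measurements.
theta = [random.gauss(0, 1) for _ in range(N)]
meas = [[t + random.gauss(0, sigma) for _ in range(m)] for t in theta]

def var_of_means(m_use):
    """Variance of the empirical distribution of unit-level means,
    computed from the first m_use measurements of each unit."""
    means = [sum(row[:m_use]) / m_use for row in meas]
    return statistics.pvariance(means)

v_full = var_of_means(m)        # biased upward by roughly sigma**2 / m
v_half = var_of_means(m // 2)   # the bias doubles: sigma**2 / (m / 2)
v_jack = 2 * v_full - v_half    # leading 1/m bias cancels out
```

Since the bias is linear in 1/m, the extrapolation 2*v_full - v_half removes it exactly for the variance; the paper develops the analogous correction for the whole distribution.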
Optimal Investment and Equilibrium Pricing under Ambiguity
http://d.repec.org/n?u=RePEc:chf:rpseri:rp2178&r=&r=ore
We consider portfolio selection under nonparametric alpha-maxmin ambiguity in the neighbourhood of a reference distribution. We show strict concavity of the portfolio problem under ambiguity aversion. Implied demand functions are nondifferentiable, resemble observed bid-ask spreads, and are consistent with existing parametric limiting participation results under ambiguity. Ambiguity seekers exhibit a discontinuous demand function, implying an empty set of reservation prices. If agents have identical, or sufficiently similar, prior beliefs, the first-best equilibrium is no trade. Simple sufficient conditions yield the existence of a Pareto-efficient second-best equilibrium which reconciles many observed phenomena in financial markets, such as liquidity dry-ups, portfolio inertia, and negative risk premia.
Michail Anthropelos
Paul Schneider
ambiguity, equilibrium, asset pricing
2021-11
Cash, and "Drops": Boosting vaccine registrations
http://d.repec.org/n?u=RePEc:pra:mprapa:110912&r=&r=ore
Demand (registrations), supply (availability of vaccines), and throughput (administering of vaccines) are key determinants of the progress of vaccination drives globally, including Malaysia's National COVID-19 Immunisation Programme (Program Imunisasi COVID-19 Kebangsaan, PICK). This paper focuses on the first determinant, demand. Specifically, were major policy "shocks" effective in influencing vaccine registrations? Between 24 February 2021 and 14 June 2021, while the PICK was in progress, several interventions were applied in select districts and states. These provided "natural experiments" to assess the effect of certain policy interventions on vaccine demand. In this paper, we assess the effect of two types of interventions on vaccine registrations in the PICK programme in difference-in-differences (DiD) and panel event study settings: (1) a cash transfer programme for vaccine recipients, and (2) two instances of parallel opt-in "first come, first served" queues. Finally, we rationalise these findings in a simple model of individual demand with preference shocks.
Suah, Jing Lian
COVID-19, vaccination drive, panel event study, difference-in-difference
2021-06-20
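The canonical two-group, two-period DiD comparison behind this design can be sketched in a few lines (an illustration only, not the paper's estimator; function names are ours):

```python
import numpy as np

def did_estimate(y, treated, post):
    """Canonical 2x2 difference-in-differences estimate.

    y: outcomes; treated, post: boolean arrays flagging treated units and
    post-intervention periods. Returns
    (treated post - treated pre) - (control post - control pre),
    which recovers the treatment effect under parallel trends.
    """
    y, treated, post = map(np.asarray, (y, treated, post))
    m = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (m(True, True) - m(True, False)) - (m(False, True) - m(False, False))
```

The panel event study version replaces the single post indicator with leads and lags of the intervention date, tracing out the dynamic response.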
Production and Inventory Dynamics under Ambiguity Aversion
http://d.repec.org/n?u=RePEc:fip:fedkrw:93094&r=&r=ore
We propose a production-cost smoothing model with Knightian uncertainty due to ambiguity aversion to study the joint behavior of production, inventories, and sales. Our model can explain four facts that previous studies find difficult to account for simultaneously: (i) the high volatility of production relative to sales, (ii) the low ratio of inventory-investment volatility to sales volatility, (iii) the positive correlation between sales and inventories, and (iv) the negative correlation between the inventory-to-sales ratio and sales. We find that the stock-out avoidance motive (Kahn 1987) emerges endogenously in our model, helping to reconcile the long-standing debate in the inventory literature between the production-cost smoothing and stock-out avoidance models.
Yulei Luo
Jun Nie
Xiaowen Wang
Eric R. Young
Ambiguity Aversion; Robustness; Knightian Uncertainty; Inventories; Production Cost Smoothing
2021-08-02
High Discounts and Low Fundamental Surplus: An Equivalence Result for Unemployment Fluctuations
http://d.repec.org/n?u=RePEc:fip:fedawp:93477&r=&r=ore
Ljungqvist and Sargent (2017) (LS) show that unemployment fluctuations can be understood in terms of a quantity they call the “fundamental surplus.” However, their analysis ignores risk premia, a force that Hall (2017) shows is important in understanding unemployment fluctuations. We show how the LS framework can be adapted to incorporate risk premia. We derive an equivalence result that relates parameters in economies with risk premia to those of an artificial economy without risk premia. We show how to use properties of the artificial economy to deduce how risk premia affect unemployment dynamics in the original economy.
Indrajit Mitra
Taeuk Seo
Yu Xu
risk premia; fundamental surplus; time-varying discounts; unemployment fluctuations
2021-09-24
An explicit split point procedure in model-based trees allowing for a quick fitting of GLM trees and GLM forests
http://d.repec.org/n?u=RePEc:hal:journl:hal-03448250&r=&r=ore
Classification and regression trees (CART) prove to be a genuine alternative to fully parametric models such as linear models (LM) and generalized linear models (GLM). Although CART suffers from a biased variable-selection issue, it is commonly applied to a variety of topics and used in tree ensembles and random forests because of its simplicity and computational speed. Conditional inference tree and model-based tree algorithms, in which variable selection is handled via fluctuation tests, are known to give more accurate and interpretable results than CART, but yield longer computation times. Using a closed-form maximum likelihood estimator for the GLM, this paper proposes a split point procedure based on the explicit likelihood in order to save time when searching for the best split for a given splitting variable. A simulation study for non-Gaussian responses is performed to assess the computational gain when building GLM trees. We also benchmark GLM trees against CART, conditional inference trees and LM trees on simulated and empirical datasets in order to identify situations where GLM trees are efficient. This approach is extended to multiway split trees and log-transformed distributions. Making GLM trees feasible through a new split point procedure allows us to investigate the use of GLM in ensemble methods. We propose a numerical comparison of GLM forests against other random-forest-type approaches. Our simulation analyses show cases where GLM forests are good challengers to random forests.
Christophe Dutang
Quentin Guibert
GLM,model-based recursive partitioning,GLM trees,random forest,GLM forest
2021-11-11
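The explicit-likelihood split scan can be illustrated for the simplest GLM, a Gaussian model with intercept-only leaves (our own minimal sketch, not the authors' procedure, which covers general GLMs): with running sums, each candidate split's maximized log-likelihood is available in closed form, so the whole scan is a single O(n) pass after sorting.

```python
import numpy as np

def best_split_gaussian(x, y, min_leaf=5):
    """Scan all split points of x for a Gaussian intercept-only model.

    The profile log-likelihood of a leaf with n obs and variance v is
    -(n/2)*log(v) up to constants, so the best split minimizes
    n_L*log(v_L) + n_R*log(v_R), computed from cumulative sums.
    """
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    n = len(ys)
    cs, cs2 = np.cumsum(ys), np.cumsum(ys ** 2)
    best, best_score = None, np.inf
    for i in range(min_leaf, n - min_leaf):
        if xs[i] == xs[i - 1]:
            continue  # no valid cut between tied x-values
        nl, nr = i, n - i
        vl = cs2[i - 1] / nl - (cs[i - 1] / nl) ** 2
        vr = (cs2[-1] - cs2[i - 1]) / nr - ((cs[-1] - cs[i - 1]) / nr) ** 2
        score = nl * np.log(max(vl, 1e-12)) + nr * np.log(max(vr, 1e-12))
        if score < best_score:
            best, best_score = 0.5 * (xs[i - 1] + xs[i]), score
    return best
```

The speed-up in the paper comes from the same principle: avoiding a full model refit at every candidate cut by exploiting a closed-form estimator.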
Macroprudential policies and Brexit: A welfare analysis
http://d.repec.org/n?u=RePEc:not:notcfc:2021/04&r=&r=ore
Brexit will bring many economic and institutional consequences. Among other things, Brexit will have implications for financial stability and the implementation of macroprudential policies. One immediate effect of Brexit is that the United Kingdom (UK) will no longer be subject to the jurisdiction of the European Supervisory Authorities (ESAs) nor the European Systemic Risk Board (ESRB). This paper studies the welfare implications of this change of regime, both for the UK and the European Union (EU). By means of a Dynamic Stochastic General Equilibrium (DSGE) model, I compare the pre-Brexit scenario with the new one, in which the UK sets macroprudential policy independently. I find that, after Brexit, the UK is better off setting its own macroprudential policy without taking into account Europe's welfare as a whole. Given the small relative size of the UK, this implies only a slight welfare loss for the EU.
Margarita Rubio
Brexit, macroprudential policy, DSGE, welfare
2021
A statistical approach for sizing an aircraft electrical generator using extreme value theory
http://d.repec.org/n?u=RePEc:tse:wpaper:126233&r=&r=ore
The sizing of aircraft electrical generators mainly depends on the electrical loads installed in the aircraft. Currently, the generator capacity is estimated by summing the critical loads, but this method tends to overestimate the generator capacity. A new method to challenge this approach is to use the electrical consumption recorded during flights, study the distribution of operational ratios between the actual consumption and the theoretical maximum consumption, and then size future aircraft generators by applying a ratio to the theoretical value. This paper focuses on the application of extreme value theory to these operational ratios to estimate the maximal capacity utilization of a generator. A real data example is provided to illustrate the approach and estimate extreme quantiles and the right endpoint of the distribution of the ratios, together with their approximate confidence intervals in the nominal configuration. In all situations the right endpoint is proven to be finite and does not depend on the use procedures. This approach shows that the electrical load analysis (ELA) overestimates the maximal permanent consumption by 20% with an error level of 10⁻³ in the nominal configuration.
Boulfani, Fériel
Gendre, Xavier
Ruiz-Gazen, Anne
Salvignol, Martina
Electrical load analysis; Aeronautic electrical system; Generalized Pareto distribution; Quantile estimation; Endpoint estimation; Diagnostics for threshold selection
2021-12-08
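The peaks-over-threshold machinery behind such endpoint estimates can be sketched with a method-of-moments fit of the generalized Pareto distribution (GPD) to exceedances (a simple illustration, not the paper's estimator; function names are ours). For the GPD, the mean is m = sigma/(1-xi) and the variance v = sigma^2/((1-xi)^2(1-2xi)), which invert to xi = (1 - m^2/v)/2 and sigma = m(1 + m^2/v)/2; a negative shape xi implies the finite right endpoint threshold - sigma/xi.

```python
import numpy as np

def gpd_tail_fit(data, threshold):
    """Method-of-moments GPD fit to exceedances over `threshold`.

    Returns (xi, sigma, endpoint); the endpoint is finite iff xi < 0.
    """
    exc = data[data > threshold] - threshold
    m, v = exc.mean(), exc.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    endpoint = threshold - sigma / xi if xi < 0 else np.inf
    return xi, sigma, endpoint

def gpd_quantile(p, threshold, xi, sigma, n, n_exc):
    # Extreme p-quantile via the peaks-over-threshold formula,
    # with n total observations and n_exc exceedances.
    t = (n / n_exc) * (1.0 - p)
    return threshold + sigma / xi * (t ** (-xi) - 1.0)
```

A uniform sample, whose tail is GPD with xi = -1, recovers both the shape and the endpoint accurately.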
Neural networks-based algorithms for stochastic control and PDEs in finance
http://d.repec.org/n?u=RePEc:hal:journl:hal-03115503&r=&r=ore
This paper presents machine learning techniques and deep reinforcement learning-based algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative pricing in financial engineering. We survey recent results in the literature, present new developments, notably in the fully nonlinear case, and compare the different schemes illustrated by numerical tests on various financial applications. We conclude by highlighting some future research directions.
Maximilien Germain
Huyên Pham
Xavier Warin
2021
Best-Response Dynamics, Playing Sequences, And Convergence To Equilibrium In Random Games
http://d.repec.org/n?u=RePEc:amz:wpaper:2021-23&r=&r=ore
We analyze the performance of the best-response dynamic across all normal-form games using a random games approach. The playing sequence—the order in which players update their actions—is essentially irrelevant in determining whether the dynamic converges to a Nash equilibrium in certain classes of games (e.g. in potential games) but, when evaluated across all possible games, convergence to equilibrium depends on the playing sequence in an extreme way. Our main asymptotic result shows that the best-response dynamic converges to a pure Nash equilibrium in a vanishingly small fraction of all (large) games when players take turns according to a fixed cyclic order. By contrast, when the playing sequence is random, the dynamic converges to a pure Nash equilibrium if one exists in almost all (large) games.
Pangallo, Marco
Heinrich, Torsten
Jang, Yoojin
Scott, Alex
Tarbush, Bassel
Wiese, Samuel
Mungo, Luca
Best-response dynamics, equilibrium convergence, random games
2021-11
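The dynamic studied here is easy to simulate for two-player games (a minimal sketch under our own conventions, not the authors' code): each player in the given playing sequence myopically best-responds to the current profile, and we check whether the final profile is a pure Nash equilibrium.

```python
import numpy as np

def best_response_dynamic(payoffs, order):
    """Run the best-response dynamic on a 2-player normal-form game.

    payoffs: (2, n, n) array; payoffs[0][a0, a1] and payoffs[1][a0, a1]
    are the players' payoffs at profile (a0, a1). order: sequence of
    player indices to update, e.g. [0, 1, 0, 1, ...] for a cyclic order.
    Returns the final profile if it is a pure Nash equilibrium, else None.
    """
    a = [0, 0]
    for i in order:
        if i == 0:
            a[0] = int(np.argmax(payoffs[0][:, a[1]]))
        else:
            a[1] = int(np.argmax(payoffs[1][a[0], :]))
    if (payoffs[0][a[0], a[1]] >= payoffs[0][:, a[1]].max()
            and payoffs[1][a[0], a[1]] >= payoffs[1][a[0], :].max()):
        return tuple(a)
    return None
```

Running this over many randomly drawn payoff matrices, with cyclic versus randomized `order`, reproduces the kind of convergence-frequency comparison the paper studies asymptotically.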
Double Fuzzy Probabilistic Interval Linguistic Term Set and a Dynamic Fuzzy Decision Making Model based on Markov Process with Its Application in Multiple Criteria Group Decision Making
http://d.repec.org/n?u=RePEc:arx:papers:2111.15255&r=&r=ore
The probabilistic linguistic term set has been proposed to deal with probability distributions over provided linguistic evaluations. However, because it has some fundamental defects, it is often difficult for decision-makers to obtain reasonable linguistic evaluation information for group decision making. In addition, weight information plays a significant role in dynamic information fusion and the decision making process, yet few methods exist to determine dynamic attribute weights over time. In this paper, I propose the concept of the double fuzzy probability interval linguistic term set (DFPILTS). Firstly, fuzzy semantic integration, the DFPILTS definition, its preference relationship, some basic algorithms and aggregation operators are defined. Then, a fuzzy linguistic Markov matrix with its network is developed, and a weight determination method based on distance measures and information entropy is designed to reduce the inconsistency of the DFPILPR and obtain a collective priority vector based on group consensus. Finally, an aggregation-based approach is developed, and an optimal investment case from financial risk is used to illustrate the application of the DFPILTS and the decision method in multi-criteria decision making.
Zongmin Liu
2021-11
Non-Linear Employment Effects of Tax Policy
http://d.repec.org/n?u=RePEc:fip:fedgif:1333&r=&r=ore
We study the non-linear propagation mechanism of tax policy in a heterogeneous agent equilibrium business cycle model with search frictions in the labor market and an extensive margin of employment adjustment. The model exhibits endogenous job destruction and endogenous hiring standards in the form of occasionally-binding zero-surplus constraints. After parameterizing the model using U.S. data, we find that the dynamic response of employment to a temporary change in the labor income tax is highly non-linear, displaying sizable asymmetries and state-dependence. Notably, the response to a tax rate cut is at least twice as large in a recession as in an expansion.
Domenico Ferraro
Giuseppe Fiori
Search frictions; Job destruction; Heterogeneity; Aggregation; Tax policy
2021-12-20
Effects of China's Capital Controls on Individual Asset Categories
http://d.repec.org/n?u=RePEc:kob:dpaper:dp2021-25&r=&r=ore
We empirically assess the effects of China's capital controls on individual asset categories by using the local projection method. Our results show stark differences among individual asset categories. Capital controls on equity and financial credits affect both the corresponding inflows and outflows significantly, whereas those on the other three asset categories (bonds, commercial credits, and direct investment) do not.
Shigeto Kitano
Yang Zhou
Capital controls; China; Local projection
2021-12
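The local projection method used here can be sketched in a few lines (an illustration under simplifying assumptions, not the authors' specification: no control variables, a scalar observed shock, and OLS at each horizon): one regresses the outcome at horizon h on the shock at time t, separately for each h, and reads the impulse response off the slope.

```python
import numpy as np

def local_projection_irf(y, shock, horizons):
    """Jorda-style local projections.

    For each horizon h, regress y_{t+h} on a constant and shock_t by OLS;
    the slope coefficient is the impulse response at horizon h.
    """
    irf = []
    for h in horizons:
        yy = y[h:]
        x = np.column_stack([np.ones(len(yy)), shock[: len(shock) - h]])
        beta = np.linalg.lstsq(x, yy, rcond=None)[0]
        irf.append(beta[1])
    return np.array(irf)
```

For an AR(1) outcome driven by an i.i.d. shock, the estimated responses decay geometrically at the autoregressive rate, as theory predicts.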
Optimal No-Regret Learning in General Games: Bounded Regret with Unbounded Step-Sizes via Clairvoyant MWU
http://d.repec.org/n?u=RePEc:arx:papers:2111.14737&r=&r=ore
In this paper we solve the problem of no-regret learning in general games. Specifically, we provide a simple and practical algorithm that achieves constant regret with fixed step-sizes. The cumulative regret of our algorithm provably decreases linearly as the step-size increases. Our findings depart from the prevailing paradigm that vanishing step-sizes are a prerequisite for low regret as championed by all state-of-the-art methods to date. We shift away from this paradigm by defining a novel algorithm that we call Clairvoyant Multiplicative Weights Updates (CMWU). CMWU is Multiplicative Weights Updates (MWU) equipped with a mental model (jointly shared across all agents) about the state of the system in its next period. Each agent records its mixed strategy, i.e., its belief about what it expects to play in the next period, in this shared mental model which is internally updated using MWU without any changes to the real-world behavior up until it equilibrates, thus marking its consistency with the next day's real-world outcome. It is then and only then that agents take action in the real-world, effectively doing so with the "full knowledge" of the state of the system on the next day, i.e., they are clairvoyant. CMWU effectively acts as MWU with one day look-ahead, achieving bounded regret. At a technical level, we establish that self-consistent mental models exist for any choice of step-sizes and provide bounds on the step-size under which their uniqueness and linear-time computation are guaranteed via contraction mapping arguments. Our arguments extend well beyond normal-form games with little effort.
Georgios Piliouras
Ryann Sim
Stratis Skoulakis
2021-11
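The fixed-point construction described in the abstract can be sketched as follows (a simplified illustration, not the authors' implementation): within each period, the shared mental model z is iterated with MWU against the payoffs z itself induces until it self-equilibrates, and only then do agents adopt z as their actual play. For a small enough step-size eta the inner map is a contraction, so the fixed point is unique and found by plain iteration.

```python
import numpy as np

def cmwu_step(weights, payoff_fns, eta=0.05, inner=500, tol=1e-12):
    """One Clairvoyant MWU period (sketch).

    weights: list of current mixed strategies, one per agent.
    payoff_fns[i](z): payoff vector of agent i's actions when all agents
    play the mental-model profile z. Returns the new strategy profile.
    """
    z = [w.copy() for w in weights]
    for _ in range(inner):
        new = []
        for i, w in enumerate(weights):
            x = w * np.exp(eta * payoff_fns[i](z))  # MWU vs. the model z
            new.append(x / x.sum())
        gap = max(np.abs(a - b).max() for a, b in zip(new, z))
        z = new
        if gap < tol:
            break
    return z
```

In matching pennies, a zero-sum game, repeating `cmwu_step` keeps the strategies on the simplex and drives the time-averaged play toward the uniform equilibrium.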
Magnitude, global variation, and temporal development of the COVID-19 infection fatality burden
http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2021-024&r=&r=ore
Christina Bohk-Ewald
Enrique Acosta
Timothy Riffe
Christian Dudel
Mikko Myrskylä
2021
Politics against Economics: The Case of Spanish Regional Financing
http://d.repec.org/n?u=RePEc:jau:wpaper:2021/15&r=&r=ore
The link between fiscal decentralization and economic growth is a workhorse field of research that has historically arrived at ambiguous conclusions. Nevertheless, less is known about the regional consequences of an asymmetric decentralized system such as Spain's. In this article, we provide evidence for this literature by evaluating the two extreme-case regions (the Basque Country and the Valencian Community) in terms of how they have benefited or been harmed after the approval of the most recent critical laws in the Spanish fiscal decentralization process: (i) the Basque Economic Agreement (BEA, hereinafter) approved in 2002 and (ii) the 2001 model within the common financing system. To undertake this analysis, we develop an empirical strategy based on diff-in-diff regression and the synthetic control method. We intend to demonstrate that an asymmetric fiscal decentralized system, based on cultural or political rather than economic reasons, is not innocuous for the economic development of a given region and has quasi-permanent consequences in terms of convergence for the whole country. We find that the BEA approved in 2002 would have increased the Basque Country's level of GDP per capita under both diff-in-diff regression and the synthetic control method. Conversely, we also find that the approval of the 2001 model within the common financing system has implied a considerable reduction in the Valencian level of GDP per capita, again under both methods.
Daniel Aparicio-Pérez
Maria Teresa Balaguer-Coll
Emili Tortosa-Ausina
economic growth, fiscal decentralization, difference-in-differences, synthetic control method
2021
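The synthetic control step can be sketched as a simplex-constrained least-squares problem (our own minimal illustration, not the authors' estimator, which typically also matches on covariates): find nonnegative donor weights summing to one so the weighted donors track the treated unit's pre-treatment path.

```python
import numpy as np

def synthetic_control_weights(x1, X0, iters=5000, lr=0.05):
    """Donor weights minimizing ||x1 - X0 @ w||^2 over the simplex.

    x1: (k,) treated unit's pre-treatment outcomes; X0: (k, J) donor
    outcomes. Solved by exponentiated gradient, which keeps w >= 0 and
    sum(w) = 1 by construction.
    """
    J = X0.shape[1]
    w = np.full(J, 1.0 / J)
    for _ in range(iters):
        grad = 2.0 * X0.T @ (X0 @ w - x1)
        w = w * np.exp(-lr * grad)
        w /= w.sum()
    return w
```

The post-treatment gap between the treated unit and `X0 @ w` is then read as the treatment effect.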
Option Pricing with State-dependent Pricing Kernel
http://d.repec.org/n?u=RePEc:arx:papers:2112.05308&r=&r=ore
We introduce a new volatility model for option pricing that combines Markov switching with the Realized GARCH framework and leads to a novel pricing kernel with a regime-specific variance risk premium. An analytical approximation method based on an Edgeworth expansion of cumulative returns enables us to derive the pricing formula for European options in this setting. The Markov switching Realized GARCH model is easy to estimate because inferences about regimes can be deduced with realized volatility measures. In an empirical application with S&P 500 index options from 1990 to 2019, we find that investors' aversion to volatility-specific risk is time varying. The proposed framework outperforms competing methods and reduces option pricing errors by 15% or more both in-sample as well as out-of-sample.
Chen Tong
Peter Reinhard Hansen
Zhuo Huang
2021-12
Currency Wars, Trade Wars and Global Demand
http://d.repec.org/n?u=RePEc:jhu:papers:66667&r=&r=ore
This paper presents a tractable model of a global economy in which countries can use a broad range of policy instruments: the nominal interest rate, taxes on imports and exports, taxes on capital flows, or foreign exchange interventions. Low demand may lead to unemployment because of downward nominal wage stickiness. Markov perfect equilibria with and without international cooperation are characterized in closed form. The welfare costs of trade and currency wars crucially depend on the state of global demand and on the policy instruments that are used by national policymakers. Countries have more incentives to deviate from free trade when global demand is low. Trade wars lower employment if they involve tariffs on imports but raise employment if they involve export subsidies. Tariff wars can lead to self-fulfilling global liquidity traps.
Jeanne, Olivier
Tariff, exchange rate, capital control
2021-12-17
Moderating Macroeconomic Bubbles Under Dispersed Information
http://d.repec.org/n?u=RePEc:ufl:wpaper:001005&r=&r=ore
Can waves of optimism and pessimism produce large macroeconomic bubbles, and if so, is there anything that policymakers can do about them? Yes and yes. I study a business cycle model where agents with rational expectations receive noisy signals about future productivity. The model features dispersed information, which allows aggregate noise shocks to produce frequent large bubbles in the capital stock. Because of the information friction, a policymaker with an informational advantage can improve outcomes. I consider policies that affect investment incentives by distorting the intertemporal wedge. I calculate the optimal policy rule, and find that policymakers should discourage investment booms after aggregate news shocks.
Jonathan J Adams
2020-09
Wealth Inequality, Uninsurable Entrepreneurial Risk and Firms' Markup
http://d.repec.org/n?u=RePEc:qed:wpaper:1476&r=&r=ore
This paper examines the effect of wealth concentration on firms' market power when firm entry is driven by entrepreneurs facing uninsurable idiosyncratic risks. Under greater wealth concentration, households in the lower end of the wealth distribution are more risk averse and less willing (or able) to bear the risk of entrepreneurial activities. This has implications for firm entry, competitiveness, and market power. I calibrate a Schumpeterian model of endogenous growth with heterogeneous risk-averse entrepreneurs competing to catch up with firms. This model is unique in that both the household wealth distribution and a measure of firm markup are endogenously determined on a balanced growth path. I find that a spread in the wealth distribution decreases entrepreneurial firm creation, resulting in greater aggregate firm market power. This result is supported by time-series evidence obtained from the estimation of a structural panel VAR with OECD data from eight countries.
Samuel Brien
Wealth inequality, market power, growth, Schumpeterian, endogenous growth, entrepreneur
2021-11
Investor demand in syndicated bond issuances: stylised facts
http://d.repec.org/n?u=RePEc:stm:wpaper:50&r=&r=ore
This study analyses investor demand in syndicated EFSF and ESM bond issuances from 2014 to 2020 at an unprecedented level of granularity: that of individual orders. In particular, we investigate three main aspects of order book dynamics. First, we determine the main factors segmenting investor demand. Second, we analyse price dynamics in the transactions and their relation to investor demand. Third, we examine whether there are any indications of order book inflation that might explain the increased volatility in order book volume. We identify issuance tranche and tenor as the main determinants of investor demand, which are to a large extent anticipated by the envisaged notional amount of the issuance. Further, we note that the pricing of ESM bond issuances is carried out in an economical manner, i.e. the new issue premium tends to be lower in a market context with large demand. Lastly, we look at the drivers of large order books and find a mixture of an above-average number and volume of orders. This confirms that there are no indications of order book inflation in the analysed time period.
Martin Hillebrand
Marko Mravlak
Peter Schwendner
Investor demand, bond issuance, bond syndication, bond primary market, investor behaviour, order books, order book inflation, new issue premium
2021-12-22
Optimal incentives in a limit order book: a SPDE control approach
http://d.repec.org/n?u=RePEc:arx:papers:2112.00375&r=&r=ore
With the fragmentation of electronic markets, exchanges are now competing to attract trading activity to their platforms. Consequently, they have developed several regulatory tools to control liquidity provision and consumption in their liquidity pools. In this paper, we study the problem of an exchange using incentives to increase market liquidity. We model the limit order book as the solution of a stochastic partial differential equation (SPDE) as in [12]. The incentives proposed to market participants are functions of time and of the distance of their limit orders to the mid-price. We formulate the control problem of the exchange, which wishes to modify the shape of the order book by increasing the volume at specific limits. Due to the particular nature of the SPDE control problem, we are able to characterize the solution with a classic Feynman-Kac representation theorem. Moreover, when studying the asymptotic behavior of the solution, a specific penalty function enables the exchange to obtain closed-form incentives at each limit of the order book. We study numerically the form of the incentives and their impact on the shape of the order book, and analyze the sensitivity of the incentives to the market parameters.
Bastien Baldacci
Philippe Bergault
2021-12
Instrumental-Variable Estimation Of Exponential Regression Models With Two-Way Fixed Effects With An Application To Gravity Equations
http://d.repec.org/n?u=RePEc:tse:wpaper:126195&r=&r=ore
This paper introduces instrumental-variable estimators for exponential-regression models that feature two-way fixed effects. These techniques allow us to develop a theory-consistent approach to the estimation of cross-sectional gravity equations that can accommodate the endogeneity of policy variables. We apply this approach to a data set in which the policy decision of interest is the engagement in a free trade agreement. We explore ways to exploit the transitivity observed in the formation of trade agreements to construct instrumental variables with considerable predictive ability. Within a bilateral model, the use of these instruments has strong theoretical foundations. We obtain point estimates of the partial effect of a preferential-trade agreement on trade volume that range between 20% and 30% and find no statistical evidence of endogeneity.
Jochmans, Koen
Verardi, Vincenzo
Bias correction; count data; differencing estimator; endogeneity; fixed effects;; gravity equation; instrumental variable; transitivity
2021-11-30
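The core moment condition of an instrumental-variable exponential regression can be sketched for the just-identified case (a simplified illustration, not the paper's two-way fixed-effects estimator; function names are ours): solve sum_i z_i (y_i - exp(x_i'b)) = 0 by Newton's method, which with Z = X reduces to the familiar Poisson pseudo-ML first-order condition used in gravity equations.

```python
import numpy as np

def iv_exponential(y, X, Z, iters=50):
    """Just-identified IV estimator for an exponential regression model.

    Solves the moment condition Z'(y - exp(X b)) = 0 by Newton's method.
    Z must have the same number of columns as X (one instrument per
    regressor); with Z = X this is the Poisson pseudo-ML estimator.
    """
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        g = Z.T @ (y - mu)              # moment vector
        J = -(Z * mu[:, None]).T @ X    # Jacobian of g with respect to b
        step = np.linalg.solve(J, -g)
        b = b + step
        if np.abs(step).max() < 1e-10:
            break
    return b
```

With endogenous regressors, valid instruments (such as the transitivity-based ones the paper constructs) replace the corresponding columns of Z.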