Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
Operations Research
2021-09-20
The economic impact of weather and climate
http://d.repec.org/n?u=RePEc:sus:susvid:2109&r=&r=ore
Video discussion of stochastic frontier analysis of the impact of climate and weather on economic output.
Richard S.J. Tol
climate change, stochastic frontier analysis, weather, video
2021-09
Nonparametric inference for extremal conditional quantiles
http://d.repec.org/n?u=RePEc:cep:stiecm:616&r=&r=ore
This paper studies the asymptotic properties of the local linear quantile estimator under extremal order quantile asymptotics and develops a practical inference method for conditional quantiles in extreme tail areas. Using a point process technique, the asymptotic distribution of the local linear quantile estimator is derived as the minimizer of a certain functional of a Poisson point process that involves nuisance parameters. To circumvent the difficulty of estimating those nuisance parameters, we propose a subsampling inference method for conditional extreme quantiles based on a self-normalized version of the local linear estimator. A simulation study illustrates the usefulness of our subsampling inference for investigating extremal phenomena.
Daisuke Kurisu
Taisuke Otsu
Quantile regression, Extreme value theory, Point process, Subsampling
2021-09
Drivers of COVID-19 Outcomes: Evidence from a Heterogeneous SAR Panel Data Model
http://d.repec.org/n?u=RePEc:boc:usug21:18&r=&r=ore
In an extension of the standard spatial autoregressive (SAR) model, Aquaro, Bailey and Pesaran (ABP, Journal of Applied Econometrics, 2021) introduced a SAR panel model that produces heterogeneous point estimates for each spatial unit. Their methodology has been implemented as the Stata routine hetsar (Belotti, 2021). As the COVID-19 pandemic has evolved in the U.S. since its first outbreak in February 2020, followed by resurgences in multiple widespread and severe waves, the level of interaction between geographic units (e.g., states and counties) has differed greatly over time in terms of the prevalence of the disease. Applying ABP's HETSAR model to 2020 and 2021 COVID-19 outcomes (confirmed case and death rates) at the state level, we extend our previous spatial econometric analysis (Baum and Henry, 2021) of the socioeconomic and demographic factors influencing the spatial spread of COVID-19 confirmed case and death rates in the U.S.
Christopher F Baum
Miguel Henry
2021-09-12
Estimating macro models and the potentially misleading nature of Bayesian estimation
http://d.repec.org/n?u=RePEc:cdf:wpaper:2021/22&r=&r=ore
We ask whether Bayesian estimation creates a potential estimation bias compared with standard estimation techniques based on the data, such as maximum likelihood or indirect estimation. We investigate this with a Monte Carlo experiment in which the true version of a New Keynesian model may either have high wage/price rigidity or be close to pure flexibility; we treat each in turn as the true model and create Bayesian estimates of it under priors from the true model and from its false alternative. Bayesian estimation of macro models may thus give very misleading results by placing too much weight on prior information relative to observed data; a better method may be indirect estimation, where the bias is found to be low.
Meenagh, David
Minford, Patrick
Wickens, Michael
Bayesian; Maximum Likelihood; Indirect Inference; Estimation Bias
2021-09
Useful results for the simulation of non-optimal economies with heterogeneous agents
http://d.repec.org/n?u=RePEc:cte:werepe:33246&r=&r=ore
This paper deals with infinite horizon non-optimal economies with aggregate uncertainty and a finite number of heterogeneous agents. It derives sufficient conditions for the existence of a recursive structure and of ergodic, stationary, and non-stationary equilibria. It also answers the following question: is it possible to derive a general framework which guarantees that numerical simulations truly reflect the behavior of endogenous variables in the model? We provide sufficient conditions for an affirmative answer to this question for endowment economies with incomplete markets and uncountable exogenous shocks. These conditions guarantee the ergodicity of the process and hold for a particular selection mechanism. For economies with finitely many shocks, or for an arbitrary selection in economies with uncountable shocks, it is only possible to show that a computable, time-independent and recursive representation generates a stationary Markov process. The results in this paper suggest that a well-defined stochastic steady state in heterogeneous-agent models is often sensitive to the initial conditions of the economy, which implies that heterogeneity may have irreversible, long-lasting effects.
Pierri, Damian Rene
Non-Optimal Economies; Markov Equilibrium; Heterogeneous Agents; Simulations
2021-09-07
Inference in the Nonparametric Stochastic Frontier Model
http://d.repec.org/n?u=RePEc:aiz:louvad:2021029&r=&r=ore
This paper is the first in the literature to discuss in detail how to conduct various types of inference in the stochastic frontier model when it is estimated using non-parametric methods. We discuss a general and versatile inferential technique that allows for a range of practical hypotheses of interest to be tested. We also discuss several challenges that currently exist in this framework in an effort to alert researchers to potential pitfalls. Namely, it appears that when one wishes to estimate a stochastic frontier in a fully non-parametric framework, separability between inputs and determinants of inefficiency is an essential ingredient for the correct empirical size of a test. We showcase the performance of the test with a variety of Monte Carlo simulations.
Parmeter, Christopher F.
Simar, Léopold
Van Keilegom, Ingrid
Zelenyuk, Valentin
Stochastic Frontier Analysis, Efficiency, Productivity Analysis, Local-Polynomial Least-Squares
2021-09-09
Dynamic Games in Empirical Industrial Organization
http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-706&r=&r=ore
This survey is organized around three main topics: models, econometrics, and empirical applications. Section 2 presents the theoretical framework, introduces the concept of Markov Perfect Nash Equilibrium, discusses existence and multiplicity, and describes the representation of this equilibrium in terms of conditional choice probabilities. We also discuss extensions of the basic framework, including models in continuous time, the concepts of oblivious equilibrium and experience-based equilibrium, and dynamic games where firms have non-equilibrium beliefs. In section 3, we first provide an overview of the types of data used in this literature before turning to a discussion of identification issues and results, and estimation methods. We review different methods to deal with multiple equilibria and large state spaces. We also describe recent developments for estimating games in continuous time and incorporating serially correlated unobservables, and discuss the use of machine learning methods for solving and estimating dynamic games. Section 4 discusses empirical applications of dynamic games in IO. We start by describing the first empirical applications in this literature during the early 2000s. We then review recent applications dealing with innovation, antitrust and mergers, dynamic pricing, regulation, product repositioning, advertising, uncertainty and investment, airline network competition, dynamic matching, and natural resources. We conclude with our view of the progress made in this literature and the remaining challenges.
Victor Aguirregabiria
Allan Collard-Wexler
Stephen P. Ryan
Dynamic games; Industrial organization; Market competition; Structural models; Estimation; Identification; Counterfactuals
2021-09-03
Bounding Sets for Treatment Effects with Proportional Selection
http://d.repec.org/n?u=RePEc:ums:papers:2021-10&r=&r=ore
In linear econometric models with proportional selection on unobservables, the omitted variable bias in estimated treatment effects is a root of a cubic equation involving estimated parameters from a short and an intermediate regression, the former excluding and the latter including all observable controls. The roots of the cubic are functions of delta, the degree of proportional selection on unobservables, and R_max, the R-squared of a hypothetical long regression that includes the unobservable confounder and all observable controls. This paper proposes a simple method to compute the roots of the cubic over meaningful regions of the delta-R_max plane and to use those roots to construct bounding sets for the true treatment effect. The proposed method is illustrated with both a simulated and an observational data set.
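The grid-and-roots computation described above is easy to sketch. The coefficient mapping below is a simplified, hypothetical stand-in (the paper derives the actual cubic coefficients from many regression moments); only the mechanics — solving the cubic over a delta-R_max grid and collecting real roots into a bounding set — follow the abstract.

```python
import numpy as np

# Illustrative short/intermediate regression statistics (hypothetical values,
# not taken from the paper): treatment coefficients and R-squareds.
beta_short, r2_short = 0.80, 0.10
beta_med, r2_med = 0.60, 0.30

def cubic_coefficients(delta, r2_max):
    """Hypothetical coefficient mapping for the bias cubic.

    A simplified stand-in purely to illustrate the grid-and-roots
    computation; the paper's coefficients are more involved.
    """
    a3 = delta * (r2_max - r2_med)
    a2 = beta_med - beta_short
    a1 = delta * (r2_med - r2_short)
    a0 = -(beta_short - beta_med) * (r2_max - r2_med)
    return [a3, a2, a1, a0]

def bias_bounds(deltas, r2_maxes):
    """Collect real roots of the cubic over a (delta, R_max) grid and
    return the implied bounding set for the bias."""
    roots = []
    for d in deltas:
        for r2m in r2_maxes:
            for root in np.roots(cubic_coefficients(d, r2m)):
                if abs(root.imag) < 1e-10:   # keep real roots only
                    roots.append(root.real)
    return min(roots), max(roots)

lo, hi = bias_bounds(np.linspace(0.5, 1.5, 11), np.linspace(0.35, 0.6, 11))
print(f"bias bounding set: [{lo:.3f}, {hi:.3f}]")
```

Subtracting the bound endpoints from the intermediate-regression coefficient would then give a bounding set for the true treatment effect.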
Deepankar Basu
treatment effect, omitted variable bias
2021
Semi-analytical pricing of barrier options in the time-dependent $\lambda$-SABR model
http://d.repec.org/n?u=RePEc:arx:papers:2109.02134&r=&r=ore
We extend the approach of Carr, Itkin and Muravey (2021) for obtaining semi-analytical prices of barrier options in the time-dependent Heston model with time-dependent barriers by applying it to the so-called $\lambda$-SABR stochastic volatility model. In doing so, we modify the general integral transform method (see Itkin, Lipton, Muravey, Generalized Integral Transforms in Mathematical Finance, World Scientific, 2021) and deliver the solution of this problem in the form of a Fourier-Bessel series. The weights of this series solve a linear mixed Volterra-Fredholm equation (LMVF) of the second kind, also derived in the paper. Numerical examples illustrate the speed and accuracy of our method, which are comparable with those of the finite-difference approach at short maturities and outperform them at long maturities, even using a simplistic implementation of the RBF method for solving the LMVF.
Andrey Itkin
Dmitry Muravey
2021-09
Nowcasting the growth rates of the value of exports and imports by commodity group
http://d.repec.org/n?u=RePEc:pra:mprapa:109557&r=&r=ore
In this paper we consider a set of machine learning and econometric models, namely Elastic Net, Random Forest, XGBoost and SSVS, as applied to nowcasting a large dataset of USD volumes of Russian exports and imports by commodity group. We use lags of the volumes of export and import commodity groups, prices of some imported and exported goods, and other variables, so the curse of dimensionality becomes quite acute. The models we use are very popular and have proven themselves well in forecasting under the curse of dimensionality, when the number of model parameters exceeds the number of observations. The best model is a weighted combination of the machine learning methods, which outperforms the ARIMA benchmark in nowcasting the volumes of both exports and imports. For the largest commodity groups, we often obtain significantly more accurate nowcasts than the ARIMA model according to the Diebold-Mariano test. In addition, the nowcasts turn out to be quite close to the historical forecasts of the Bank of Russia, which were constructed under similar conditions.
Maiorova, Ksenia
Fokin, Nikita
nowcasting; foreign trade; curse of dimensionality; machine learning; Russian economy
2020-06
Instrumental variable estimation of large-T panel data models with common factors
http://d.repec.org/n?u=RePEc:boc:usug21:4&r=&r=ore
We introduce the xtivdfreg command in Stata, which implements a general instrumental variables (IV) approach for estimating panel data models with a large number of time series observations, T, and unobserved common factors or interactive effects, as developed by Norkute, Sarafidis, Yamagata, and Cui (2021, Journal of Econometrics) and Cui, Norkute, Sarafidis, and Yamagata (2020, ISER Discussion Paper). The underlying idea of this approach is to project out the common factors from exogenous covariates using principal components analysis and to run IV regression in two stages, using the defactored covariates as instruments. The resulting two-stage IV (2SIV) estimator is valid for models with homogeneous or heterogeneous slope coefficients and has several advantages over existing popular approaches. In addition, the xtivdfreg command extends the 2SIV approach in two major ways. Firstly, the algorithm accommodates estimation of unbalanced panels. Secondly, the algorithm permits a flexible specification of instruments. We show that when one imposes zero factors, the xtivdfreg command can replicate the results of the popular ivregress Stata command. Notably, unlike ivregress, xtivdfreg permits estimation of the two-way error components panel data model with heterogeneous slope coefficients.
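The core idea — defactoring exogenous covariates by principal components and using them as instruments — can be sketched in a few lines. This is an illustrative simulation of a simplified, pooled one-regressor version of the approach, not a reimplementation of xtivdfreg; the data-generating process, dimensions and seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, r = 30, 200, 1   # units, periods, number of factors

# Illustrative DGP: the covariate and the error of y both load on a
# common factor, so OLS of y on x would be biased.
f = rng.normal(size=(T, r))                  # latent common factor
lam_x = rng.normal(size=(N, r))              # factor loadings in x
lam_y = rng.normal(size=(N, r))              # factor loadings in the error of y
x = f @ lam_x.T + rng.normal(size=(T, N))    # T x N covariate panel
y = 0.5 * x + f @ lam_y.T + rng.normal(size=(T, N))

# Stage 1: estimate the factor space from x by principal components
# (eigenvectors of the T x T second-moment matrix of x), then project it out.
xx = x @ x.T / (N * T)
_, vecs = np.linalg.eigh(xx)
f_hat = vecs[:, -r:] * np.sqrt(T)            # PC estimate of the factors
M = np.eye(T) - f_hat @ f_hat.T / T          # annihilator of the factor space
x_tilde = M @ x                              # defactored covariate

# Stage 2: pooled IV regression of y on x, instrumenting x with the
# defactored covariate (orthogonal to the estimated factors).
beta_hat = np.sum(x_tilde * y) / np.sum(x_tilde * x)
print(f"2SIV-style estimate of beta (true value 0.5): {beta_hat:.3f}")
```

Because the instrument is purged of the common factor, the estimate is close to the true slope despite the factor-driven endogeneity.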
Sebastian Kripfganz
Vasilis Sarafidis
2021-09-12
Simple Matching Protocols for Agent-based Models.
http://d.repec.org/n?u=RePEc:ulp:sbbeta:2021-35&r=&r=ore
The main purpose of this article is to show how simple matching protocols suitable for agent-based models can be developed from scratch. Keeping the features of the underlying economy to a minimum, I develop, detail, and present the code for three matching processes. Their small size and flexibility may act as a stimulus for non-expert students to engage with this stream of literature and address a variety of research topics.
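In the spirit of the article, a minimal matching protocol can indeed be written from scratch in a few lines. The random one-to-one protocol below is a generic textbook example in Python, not the code presented in the paper (which is written in R):

```python
import random

def random_matching(buyers, sellers, rng=random.Random(7)):
    """A minimal random one-to-one matching protocol: shuffle both sides
    and pair them off until the short side is exhausted."""
    b = list(buyers)
    s = list(sellers)
    rng.shuffle(b)
    rng.shuffle(s)
    return list(zip(b, s))    # pairs of (buyer, seller)

# Five buyers meet three sellers: three pairs form, two buyers stay unmatched.
pairs = random_matching(range(5), ["s1", "s2", "s3"])
print(pairs)
```

Preference-ordered or queue-based protocols follow the same skeleton, replacing the shuffles with a sorting or search step.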
Andrea Borsato
Agent-based Modelling, Matching Protocols, Computer Simulation, Linear Matrix Algebra, R.
2021
The tenets of indirect inference in Bayesian models
http://d.repec.org/n?u=RePEc:osf:osfxxx:enzgs&r=&r=ore
This paper extends the application of Bayesian inference to probability distributions defined in terms of their quantile functions. We describe the method of *indirect likelihood*, to be used in Bayesian models with sampling distributions that lack an explicit cumulative distribution function. We provide examples and demonstrate the equivalence of the "quantile-based" (indirect) likelihood to the conventional "density-defined" (direct) likelihood. We consider practical aspects of the numerical inversion of the quantile function by root-finding, as required by the indirect likelihood method. In particular, we consider the problem of ensuring the validity of an arbitrary quantile function with the help of Chebyshev polynomials, and provide useful tips and implementations of these algorithms in Stan and R. We also extend the same method to propose the definition of an *indirect prior* and discuss situations where it can be useful.
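The numerical inversion of a quantile function by root-finding can be illustrated with a distribution whose inverse is known in closed form, so the indirect (quantile-based) likelihood value can be checked against the direct density. This is a minimal pure-Python sketch (the paper's implementations are in Stan and R):

```python
import math

def Q(p, lam=2.0):
    """Quantile function of an Exponential(lam) distribution — a case where
    the inverse is known, so the indirect value can be verified."""
    return -math.log1p(-p) / lam

def dQ(p, lam=2.0, h=1e-6):
    """Numerical derivative of Q (the quantile density function)."""
    return (Q(p + h, lam) - Q(p - h, lam)) / (2 * h)

def invert_quantile(x, lam=2.0, tol=1e-12):
    """Find p with Q(p) = x by bisection; Q is monotone, so this is safe."""
    lo, hi = 0.0, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Q(mid, lam) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.7
p_star = invert_quantile(x)
indirect = 1.0 / dQ(p_star)          # f(x) = 1 / Q'(Q^{-1}(x))
direct = 2.0 * math.exp(-2.0 * x)    # closed-form Exponential(2) density
print(indirect, direct)
```

The two likelihood values agree to numerical precision, which is the equivalence the paper exploits when no explicit CDF or density is available.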
Perepolkin, Dmytro
Goodrich, Benjamin
Sahlin, Ullrika
2021-09-09
A new long-step interior point algorithm for linear programming based on the algebraic equivalent transformation
http://d.repec.org/n?u=RePEc:cvh:coecwp:2021/06&r=&r=ore
In this paper, we investigate a new primal-dual long-step interior point algorithm for linear optimization. Based on the step size, interior point algorithms can be divided into two main groups: short-step and long-step methods. In practice, long-step variants perform better, but better theoretical complexity can usually be achieved for short-step methods. One of the exceptions is the large-update algorithm of Ai and Zhang. The new wide neighbourhood and the main characteristics of the presented algorithm are based on their approach. In addition, we use the algebraic equivalent transformation technique of Darvay to determine the search directions of the method.
E. Nagy, Marianna
Varga, Anita
Mathematical programming; Linear optimization; Interior point algorithms; Algebraic equivalent transformation technique
2021-09-09
The Stata module for CUB models for rating data analysis
http://d.repec.org/n?u=RePEc:boc:usug21:16&r=&r=ore
Many survey questions are posed as ordered rating variables to assess the extent to which a certain perception or opinion holds among respondents. These responses cannot be treated as objective measures, and a proper statistical framework to account for their fuzziness is the class of CUB models (an acronym for Combination of Uniform and shifted Binomial; Piccolo and Simone 2019), which establishes a different paradigm by modelling both individual perception of the items (feeling) and uncertainty. Uncertainty can be regarded as noise in the measurement of feeling, taking the form of heterogeneity in the distribution. CUB models are specified via a two-component discrete mixture combining feeling and uncertainty. In the baseline version, a shifted Binomial distribution accounts for the underlying feeling and a discrete Uniform accounts for heterogeneity, but different specifications are possible, for instance to encompass inflated frequencies. The featured parameters can be linked to subjects' characteristics to derive response profiles. Different items (possibly measured on scales of different lengths) and groups of respondents can then be represented and compared through effective visualization tools. Our contribution presents CUB modelling to the Stata community by discussing the CUB module, with different case studies illustrating its range of application.
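The baseline CUB mixture is easy to write down. Below is a minimal sketch of the pmf on an m-point scale, following the standard parameterization (pi weights the feeling component, xi governs feeling); the parameter values are illustrative assumptions:

```python
import math

def cub_pmf(r, m, pi, xi):
    """CUB probability of response r on a 1..m rating scale: a mixture of
    a shifted Binomial (feeling, parameter xi) and a discrete Uniform
    (uncertainty), with mixing weight pi."""
    binom = math.comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
    return pi * binom + (1 - pi) / m

# A 7-point scale with strong feeling weight and moderate feeling parameter.
m, pi, xi = 7, 0.8, 0.3
probs = [cub_pmf(r, m, pi, xi) for r in range(1, m + 1)]
print([round(p, 3) for p in probs])
print(sum(probs))   # a valid pmf: the probabilities sum to 1
```

Linking pi and xi to respondent covariates (as the Stata module does) turns this into a regression-style model for rating data.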
G. Cerulli
R. Simone
F. Di Iorio
D. Piccolo
C.F. Baum
2021-09-12
Extrapolative bubbles and trading volume
http://d.repec.org/n?u=RePEc:ehl:lserod:110514&r=&r=ore
We propose an extrapolative model of bubbles to explain the sharp rise in prices and volume observed in historical financial bubbles. The model generates a novel mechanism for volume: because of the interaction between extrapolative beliefs and disposition effects, investors are quick to not only buy assets with positive past returns but also sell them if good returns continue. Using account-level transaction data on the 2014–2015 Chinese stock market bubble, we test and confirm the model’s predictions about trading volume. We quantify the magnitude of the proposed mechanism and show that it can increase trading volume by another 30%.
Liao, Jingchi
Peng, Cameron
Zhu, Ning
OUP deal
2021-06-18
High-dimensional statistical learning techniques for time-varying limit order book networks
http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021015&r=&r=ore
This paper provides statistical learning techniques for determining the full own-price market impact and the relevance and effect of cross-price and cross-asset spillover channels from intraday transactions data. The novel tools allow extracting comprehensive information contained in the limit order books (LOB) and quantify their impacts on the size and structure of price interdependencies across stocks. For correct empirical network determination of such dynamic liquidity price effects even in small portfolios, we require high-dimensional statistical learning methods with an integrated general bootstrap procedure. We document the importance of LOB liquidity network spillovers even for a small blue-chip NASDAQ portfolio.
Chen, Shi
Härdle, Wolfgang
Schienle, Melanie
limit order book, high-dimensional statistical learning, liquidity networks, high frequency dynamics, market impact, bootstrap, network
2021
Permanent-Transitory decomposition of cointegrated time series via Dynamic Factor Models, with an application to commodity prices
http://d.repec.org/n?u=RePEc:fem:femwpa:2021.19&r=&r=ore
In this article, we propose a cointegration-based Permanent-Transitory decomposition for non-stationary Dynamic Factor Models. Our methodology exploits the cointegration relations among the observable variables and assumes they are driven by a common and an idiosyncratic component. The common component is further split into a long-term non-stationary part and a short-term stationary one. A Monte Carlo experiment shows that taking the cointegration structure of the DFM into account leads to a much better reconstruction of the space spanned by the factors than the standard technique of applying a factor model to the differenced system. Finally, an application of our procedure to a set of commodity prices allows us to analyse the comovement among different markets. We find that commodity prices move together due to long-term common forces, and that the trend for most primary good prices is declining, whereas metal and energy prices have exhibited an upward, or at least stable, pattern since the 2000s.
Chiara Casoli
Riccardo (Jack) Lucchetti
Cointegration, Dynamic Factor Models, P-T decomposition, Commodity prices co-movement
2021-07
A time-varying network for cryptocurrencies
http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021016&r=&r=ore
Cryptocurrencies' return cross-predictability and technological similarity yield information on risk propagation and market segmentation. To investigate these effects, we build a time-varying network for cryptocurrencies based on the evolution of return cross-predictability and technological similarities. We develop a dynamic covariate-assisted spectral clustering method to consistently estimate the latent community structure of the cryptocurrency network, accounting for both sets of information. We demonstrate that investors can achieve better risk diversification by investing in cryptocurrencies from different communities. A cross-sectional portfolio that implements an inter-crypto momentum trading strategy earns a 1.08% daily return. By dissecting the portfolio returns with behavioral factors, we confirm that our results are not driven by behavioral mechanisms.
Guo, Li
Härdle, Wolfgang
Tao, Yubo
Community detection, Dynamic stochastic blockmodel, Covariates, Co-clustering, Network risk, Momentum
2021
Access to services in rural Spain
http://d.repec.org/n?u=RePEc:bde:opaper:2122&r=&r=ore
This paper analyses the differences in accessibility to services between rural and urban areas in European Union countries. The results indicate that, in Spain, rural areas have poorer accessibility to services than their European counterparts, while the differences are not significant for urban areas. The availability of municipal-level information for the Spanish case allows us to document a deficit in accessibility to services in rural municipalities relative to urban ones within each autonomous region. We also observe some idiosyncrasies in the geography and taxation of rural municipalities that could explain, at least in part, this deficit.
Mario Alloza
Víctor González-Díez
Enrique Moral-Benito
Patrocinio Tello-Casas
accessibility, services, rural and urban areas
2021-09
Introducing stipw: inverse probability weighted parametric survival models
http://d.repec.org/n?u=RePEc:boc:usug21:15&r=&r=ore
Inverse probability weighting (IPW) can be used to estimate marginal treatment effects from survival data. Currently, IPW analyses can be performed in a few steps in Stata (with robust or bootstrap standard errors) or by using stteffects ipw under some assumptions for a small number of marginal treatment effects. stipw has been developed to perform an IPW analysis on survival data and to provide a closed-form variance estimator of the model parameters using M-estimation. This method appropriately accounts for the estimation of the weights and provides a less computationally intensive alternative to bootstrapping. stipw implements the following steps: (1) a binary treatment/exposure variable is modelled against confounders using logistic regression; (2) stabilised or unstabilised weights are estimated; (3) a weighted streg or stpm2 (Royston-Parmar) survival model is fitted with treatment/exposure as the only covariate; (4) the variance is estimated using M-estimation. As the stored variance matrix is updated, post-estimation can easily be performed with the appropriately estimated variance. Useful marginal measures, such as the difference in marginal restricted survival time, can thus be calculated together with their uncertainties. stipw will be demonstrated on a commonly used dataset on primary biliary cirrhosis. Robust, bootstrap and M-estimation standard errors will be presented and compared.
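Steps (1) and (2) — the logistic propensity model and the stabilised weights — can be sketched outside Stata. This is a hand-rolled illustration on simulated data (the confounder, sample size and coefficients are assumptions); the weighted survival model and M-estimation variance of steps (3)-(4) are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Simulated data: one confounder z drives treatment assignment a.
z = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * z)))
a = rng.binomial(1, p_true)

# Step 1: logistic regression of treatment on the confounder via
# Newton-Raphson (a hand-rolled stand-in for a -logit- fit).
X = np.column_stack([np.ones(n), z])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (a - p))

# Step 2: stabilised weights = marginal treatment probability divided by the
# individual propensity (or the complements for the untreated).
p_hat = 1 / (1 + np.exp(-X @ beta))
p_marg = a.mean()
w = np.where(a == 1, p_marg / p_hat, (1 - p_marg) / (1 - p_hat))
print(f"mean stabilised weight: {w.mean():.3f}")  # close to 1 by construction
```

The resulting weights would then be passed to a weighted survival model, which is where stipw's closed-form M-estimation variance comes in.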
Micki Hill
Paul C Lambert
Michael J Crowther
2021-09-12
Industry evidence and the vanishing cyclicality of labor productivity.
http://d.repec.org/n?u=RePEc:vie:viennp:vie2001&r=&r=ore
Aggregate labor productivity used to be strongly procyclical in the United States, but the procyclicality has largely disappeared since the mid-1980s. This paper explores the industry-level evidence in order to discriminate between existing explanations of the vanishing procyclicality of labor productivity. I document the change in the cyclical properties of productivity in the U.S. using industry-level data and focus on a particularly puzzling feature, namely that the correlations of industry productivity with industry output and labor input remained on average much more stable before and after the mid-1980s than the aggregate correlations. In other words, there is little evidence of vanishing cyclicality of labor productivity at the industry level. I construct a simple industry-level RBC model that nests two leading explanations of the vanishing cyclicality of productivity proposed in the literature. I show that the two explanations have qualitatively different predictions for the cyclical properties of industry-level variables. The mechanism based on a structural change in the composition of aggregate shocks is able to replicate the stability of industry-level moments across time. In contrast, the mechanism based on increased labor market flexibility is less successful in matching the industry-level evidence.
Zuzana Molnarova
2020-01
Impact of model parametrization and formulation on the explorative power of electricity network congestion management models
http://d.repec.org/n?u=RePEc:zbw:esprep:240928&r=&r=ore
Integrating increasing shares of weather-dependent renewable energies into energy systems while maintaining a high level of security of supply is a challenge for network utilities. Achieving large shares of renewables-based generation in electricity supply requires effective operation of electricity grids and efficient coordination among grid operators. Detailed modelling of grid operation has therefore become increasingly important in recent years. Methods for modelling the operation of (extra) high-voltage grids are undergoing persistent enhancement in academia and the energy industry. Existing approaches vary in data granularity and computational methods, and assumptions on technical details in grid models also differ. Differences in input data and modelling methods are likely to affect simulation results. This paper aims to identify the most relevant differences present in grid simulation models and methods for studying congestion management in a European context. Differences are studied based on a comparison of grid simulation models from eight German energy modelling institutions. The effects of model parameterization and formulation on congestion management results are further investigated with three case studies focusing on outage simulation, line-constraint relaxation and the modelling of cross-border measures, applying selected grid simulation models. Results indicate that data parametrization can have a large impact on model results for congestion management volumes and the geographic distribution of necessary measures; key model parameters must be calibrated thoroughly. The findings of this research will assist future grid modelers and power system planners in efficiently simulating congestion management and increase the validity and explorative power of grid simulation models.
Hobbie, Hannes
Mehlem, Jonas
Wolff, Christina
Weber, Lukas
Flachsbarth, Franziska
Möst, Dominik
Moser, Albert
Transmission grid, Sustainable development, Renewable energies, Model comparison, Congestion management, Optimal power flow
2021
Large-scale Victorian manufacturers: reconstructing the lost 1881 UK employer census
http://d.repec.org/n?u=RePEc:ehl:lserod:111895&r=&r=ore
We present the first available - and near-complete - list of large UK manufacturers in 1881, complementing the employer data from that year's population census (recovered by the British Business Census of Entrepreneurs project) with employment and capital estimates from other sources. The 438 largest firms, with 1,000 or more employees, accounted for around one-sixth of manufacturing output. Examples can be found in most industries. Exploiting powered machinery, intangible assets, new technologies and venture capital, and generally operating in competitive markets, their exports roughly equalled their domestic sales. The more capital-intensive firms accessed stock markets more - and in larger firms - than in follower economies. Some alleged later causes of UK decline relative to the US or Germany cannot be observed in 1881. Indeed, contemporary overseas observers - capitalist and socialist alike - correctly recognized the distinctive features of UK manufacturing as its exceptional development of quoted corporations, professional managers and "modern," scalable, factory production.
Hannah, Leslie
Bennett, Robert J.
large manufacturers; capital intensity; industrial concentration; stock exchanges
2021-09-01
Outcomes of ICU patients with and without perceptions of excessive care: a comparison between cancer and non-cancer patients
http://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/331296&r=&r=ore
Background: Whether Intensive Care Unit (ICU) clinicians display unconscious bias towards cancer patients is unknown. The aim of this study was to compare the outcomes of critically ill patients with and without perceptions of excessive care (PECs) by ICU clinicians in patients with and without cancer. Methods: This study is a sub-analysis of the large multicentre DISPROPRICUS study. Clinicians of 56 ICUs in Europe and the United States completed a daily questionnaire about the appropriateness of care during a 28-day period. We compared the cumulative incidence of patients with concordant PECs, treatment limitation decisions (TLDs) and death between patients with uncontrolled and controlled cancer, and patients without cancer. Results: Of the 1641 patients, 117 (7.1%) had uncontrolled cancer and 270 (16.4%) had controlled cancer. The cumulative incidence of concordant PECs in patients with uncontrolled and controlled cancer versus patients without cancer was 20.5%, 8.1%, and 9.1% (p
Dominique Benoit
Esther E.N. van der Zee
Michael Darmon
An A.K.L. Reyners
Victoria Metaxa
Djamel Mokart
Alexander Wilmer
Pieter Depuydt
Andreas Hvarfner
Katerina Rusinova
Jan G. Zijlstra
François Vincent
Dimitrios Lathyris
Anne-Pascale Meert
Jacques Devriendt
Emma Uyttersprot
Erwin Jo E.J.O. Kompanje
Ruth R.D. Piers
Elie Azoulay
Bias; Cancer; Critical care; ICU; Perception of care; Prognostication; Treatment limitation
2021-12
Measuring Inflation: Criticism and Solution
http://d.repec.org/n?u=RePEc:pra:mprapa:109724&r=&r=ore
In this study the concept of commodities is formulated according to utility theory. Following the principle of price elasticity of demand, the differences between uncompensated and compensated price changes are clearly interpreted. Because uncompensated and compensated price changes have different averaging properties, two different CPI formulas need to be defined; arbitrary price changes are then broken down into uncompensated and compensated components to obtain a complete, dual CPI formula.
Laczó, Ferenc
Economic Value of a Commodity; Uncompensated vs. Compensated Price Change; Common Units in Measurements; Dual CPI Formula; Supply-Driven and Demand-Driven Economy
2021-06-30
Nonparametric Extrema Analysis in Time Series for Envelope Extraction, Peak Detection and Clustering
http://d.repec.org/n?u=RePEc:arx:papers:2109.02082&r=&r=ore
In this paper, we propose a nonparametric approach that can be used for envelope extraction, peak-burst detection and clustering in time series. Our problem formalization results in a naturally defined splitting/forking of the time series. With a possibly hierarchical implementation, it can be used for various applications in machine learning, signal processing and mathematical finance. From an incoming input signal, our iterative procedure sequentially creates two signals (one upper bounding and one lower bounding signal) by minimizing the cumulative $L_1$ drift. We show that a solution can be efficiently calculated by a Viterbi-like path tracking algorithm together with an optimal elimination rule. We consider many interesting settings where our algorithm has near-linear time complexity.
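To fix intuition for what the two bounding signals look like, here is a deliberately naive envelope baseline: linear interpolation between local extrema. It is not the authors' L1-drift-minimizing, Viterbi-style algorithm — just a simple reference construction on an assumed test signal:

```python
import numpy as np

def naive_envelopes(x):
    """A naive envelope sketch: interpolate linearly between local maxima
    (upper signal) and local minima (lower signal).  This is NOT the
    paper's algorithm, only a baseline illustrating the two envelopes."""
    n = len(x)
    i_max = [0] + [i for i in range(1, n - 1)
                   if x[i] >= x[i - 1] and x[i] >= x[i + 1]] + [n - 1]
    i_min = [0] + [i for i in range(1, n - 1)
                   if x[i] <= x[i - 1] and x[i] <= x[i + 1]] + [n - 1]
    upper = np.interp(np.arange(n), i_max, x[i_max])
    lower = np.interp(np.arange(n), i_min, x[i_min])
    return upper, lower

# Damped oscillation with a fast ripple, so the envelopes are non-trivial.
t = np.linspace(0, 6 * np.pi, 400)
x = np.sin(t) * np.exp(-0.1 * t) + 0.05 * np.sin(25 * t)
upper, lower = naive_envelopes(x)
```

Unlike this baseline, the paper's procedure produces envelopes that are optimal with respect to the cumulative L1 drift criterion and can be applied hierarchically.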
Kaan Gokcesu
Hakan Gokcesu
2021-09
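To make the upper/lower bounding idea concrete, here is a deliberately simple greedy sketch that merely caps the per-step drift of each envelope; it is a hypothetical illustration, not the paper's optimal Viterbi-like procedure (which minimizes the *cumulative* $L_1$ drift globally):

```python
def greedy_envelopes(x, max_drift):
    """Toy greedy envelopes: each envelope may relax toward the signal
    by at most `max_drift` per step, but jumps outward whenever the
    signal crosses it, so upper >= x >= lower always holds."""
    upper, lower = [x[0]], [x[0]]
    for v in x[1:]:
        upper.append(max(v, upper[-1] - max_drift))
        lower.append(min(v, lower[-1] + max_drift))
    return upper, lower

signal = [0, 3, 1, 0, 4, 2, 1, 5, 0]
up, lo = greedy_envelopes(signal, 1)
# The bounding invariant holds at every sample:
assert all(u >= s >= l for u, s, l in zip(up, signal, lo))
```

Peaks show up where the signal touches the upper envelope, and the gap between the two envelopes gives a crude burst indicator.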
The Analysis and the Measurement of Poverty: An Interval-Based Composite Indicator
http://d.repec.org/n?u=RePEc:fem:femwpa:2021.21&r=&r=ore
The analysis and measurement of poverty is a crucial and unresolved issue in social science. This work measures poverty as a multidimensional notion using a new composite indicator. Subjective choices, such as the weighting scheme used in the indicator's construction, can affect its interpretation and policy implications. To overcome this problem, we consider the possible weighting configurations at random and propose interval-based composite indicators built from the results. The aim is to obtain robust and reliable measures based on a relevant conceptual model of poverty, with the identified factors serving as weightings. Methodologically, we propose an original procedure in which a separate composite indicator is computed for each weighting scheme simulated by Monte Carlo, and an interval-based composite indicator is constructed from the results. The intervals are compared using different criteria (upper bound, center, and lower bound), and the resulting rankings help analyze extreme scenarios and policy hypotheses. Critical situations are identified in Sicilia, Calabria, Campania, and Puglia. The results demonstrate a relevant and consistent measurement and show the substantial impact of the shadow sector on the final measures.
Carlo Drago
Poverty, Composite Indicators, Interval Data, Interval-based Composite Indicators, Symbolic Data
2021-08
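The Monte Carlo construction described above can be sketched in a few lines (a minimal illustration with hypothetical names, not the author's code): draw a random weighting scheme, compute each unit's composite score, and summarize the simulated scores as an interval (lower bound, center, upper bound):

```python
import random

def interval_composite(indicators, n_sim=1000, seed=0):
    """indicators: {unit: [normalized sub-indicator values]}.
    Returns {unit: (lower, center, upper)} over random weightings."""
    rng = random.Random(seed)
    k = len(next(iter(indicators.values())))
    scores = {u: [] for u in indicators}
    for _ in range(n_sim):
        w = [rng.random() for _ in range(k)]
        total = sum(w)
        w = [wi / total for wi in w]          # random weights summing to 1
        for u, vals in indicators.items():
            scores[u].append(sum(wi * v for wi, v in zip(w, vals)))
    return {u: (min(s), sum(s) / len(s), max(s)) for u, s in scores.items()}

regions = {"A": [0.9, 0.8, 0.7], "B": [0.1, 0.2, 0.3]}
bounds = interval_composite(regions)
# "A" dominates "B" on every sub-indicator, so its whole interval
# lies above B's regardless of the weighting scheme:
assert bounds["A"][0] > bounds["B"][2]
```

Comparing units by lower bound, center, or upper bound then yields the different rankings used to probe extreme scenarios.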
Serious Games und Gamifizierung: Mehr als nur ein Spiel
http://d.repec.org/n?u=RePEc:zbw:iwkkur:542021&r=&r=ore
The market for video games has boomed during the coronavirus pandemic. Many still consider the games industry a niche for specialists, yet the games market comprises quite different segments. So-called serious games and gamification, in particular, are more widespread than many would think.
Büchel, Jan
2021
Standard Errors for Calibrated Parameters
http://d.repec.org/n?u=RePEc:arx:papers:2109.08109&r=&r=ore
Calibration, the practice of choosing the parameters of a structural model to match certain empirical moments, can be viewed as minimum distance estimation. Existing standard error formulas for such estimators require a consistent estimate of the correlation structure of the empirical moments, which is often unavailable in practice. Instead, the variances of the individual empirical moments are usually readily estimable. Using only these variances, we derive conservative standard errors and confidence intervals for the structural parameters that are valid even under the worst-case correlation structure. In the over-identified case, we show that the moment weighting scheme that minimizes the worst-case estimator variance amounts to a moment selection problem with a simple solution. Finally, we develop tests of over-identifying or parameter restrictions. We apply our methods empirically to a model of menu cost pricing for multi-product firms and to a heterogeneous agent New Keynesian model.
Matthew D. Cocci
Mikkel Plagborg-Møller
2021-09
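In the linearized just-identified case the worst-case bound has a simple closed form: if the parameter is locally a linear combination of the moments, the variance is maximized when the moments are perfectly correlated with signs aligned to the sensitivities. A minimal sketch with hypothetical helper names (not the authors' code):

```python
import math

def worst_case_se(sens, sds):
    """Conservative SE for a parameter that is locally
    sum_i a_i * m_i of empirical moments: over all correlation
    structures, the variance is largest under perfect sign-aligned
    correlation, giving SE <= sum_i |a_i| * sd_i."""
    return sum(abs(a) * s for a, s in zip(sens, sds))

def independence_se(sens, sds):
    """SE under the (non-conservative) assumption of uncorrelated moments."""
    return math.sqrt(sum((a * s) ** 2 for a, s in zip(sens, sds)))

a, s = [1.0, -0.5], [0.2, 0.4]
print(worst_case_se(a, s))    # 0.4
print(independence_se(a, s))  # ~0.2828
```

The worst-case SE always weakly exceeds the independence-based SE, which is what makes the resulting confidence intervals valid under any correlation structure of the moments.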
Bias-Adjusted Treatment Effects Under Equal Selection
http://d.repec.org/n?u=RePEc:ums:papers:2021-05&r=&r=ore
In a recent contribution, Oster (2019) proposed a method to generate bounds on treatment effects in the presence of unobservable confounders. The method can only be implemented if a crucial problem of non-uniqueness is addressed. In this paper I demonstrate that one of the proposed ways to address non-uniqueness, which relies on computing bias-adjusted treatment effects under the assumption of equal selection on observables and unobservables, is problematic on several counts. First, additional assumptions, which cannot be justified on theoretical grounds, are needed to ensure a unique solution; second, the method will not work when the estimate of the treatment effect declines with the addition of controls; and third, the solution, and therefore conclusions about bias, can change dramatically if we deviate from equal selection even by a small magnitude.
Deepankar Basu
treatment effect, omitted variable bias
2021
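For readers unfamiliar with the object being bounded, the widely used approximation to Oster's (2019) bias-adjusted coefficient can be computed in one line; the numbers below are hypothetical, and this is the common textbook approximation, not the exact solution whose non-uniqueness the paper analyzes:

```python
def oster_beta_star(beta_tilde, beta_dot, r_tilde, r_dot, r_max, delta=1.0):
    """Common approximation to Oster's (2019) bias-adjusted coefficient
    under proportional selection of degree delta (delta = 1 is 'equal
    selection'):
        beta* ~= beta_tilde
                 - delta*(beta_dot - beta_tilde)*(r_max - r_tilde)/(r_tilde - r_dot)
    beta_dot, r_dot: uncontrolled coefficient and R^2;
    beta_tilde, r_tilde: controlled coefficient and R^2."""
    scaling = (r_max - r_tilde) / (r_tilde - r_dot)
    return beta_tilde - delta * (beta_dot - beta_tilde) * scaling

# Hypothetical example: the coefficient falls from 0.8 to 0.5 as controls
# raise R^2 from 0.1 to 0.3; extrapolating to R_max = 0.5 under equal
# selection extrapolates the decline further, toward 0.2.
print(oster_beta_star(0.5, 0.8, 0.3, 0.1, 0.5))
```

Note how the whole adjustment hinges on extrapolating the coefficient movement; this sensitivity to delta and to the direction of movement is precisely what the paper criticizes.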
¿Más dinero es más desarrollo municipal? El caso de Colombia
http://d.repec.org/n?u=RePEc:col:000089:019562&r=&r=ore
In Colombia, a policy emerged with Legislative Act No. 4 of 2007, which grants extra funds through the SGP (general-purpose participation line) to municipalities with fewer than 25,000 inhabitants. This revenue is distributed according to the municipality's poverty and rurality. This setting is ideal for impact analysis, since the policy allows allocation to be treated as a quasi-random experiment. Using regression discontinuity, we analyze the relationship between this policy and GDP (and related outcomes) in the municipalities covered by the policy from 2008 to 2017. The results indicate no link between the policy and municipal development, even under different specifications and outcome variables.
Juan David Yépez Torrijos
Public policy, Colombia, Regression discontinuity, Municipalities, Economic development
2021-09-06
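A sharp regression discontinuity estimate of the kind described can be sketched as a pair of local linear fits around the population threshold; this is a generic, hypothetical sketch on toy data (not the paper's estimation code), with treatment assigned *below* the 25,000-inhabitant cutoff as in the policy:

```python
def local_linear_fit(points, cutoff):
    """OLS line through (x - cutoff, y); returns the intercept,
    i.e. the fitted outcome at the cutoff."""
    xs = [x - cutoff for x, _ in points]
    ys = [y for _, y in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx

def rd_effect(data, cutoff, bandwidth):
    """Sharp RD: gap in fitted outcomes at the cutoff, with the
    treated side below it (municipalities under 25,000 inhabitants)."""
    left = [(x, y) for x, y in data if cutoff - bandwidth <= x < cutoff]
    right = [(x, y) for x, y in data if cutoff <= x <= cutoff + bandwidth]
    return local_linear_fit(left, cutoff) - local_linear_fit(right, cutoff)

# Toy data: the outcome trends in population, with a jump of +2 below 25,000.
data = [(x, 0.001 * x + (2.0 if x < 25_000 else 0.0))
        for x in range(20_000, 30_000, 250)]
print(round(rd_effect(data, 25_000, 2_500), 6))  # 2.0
```

A null finding like the paper's would correspond to an estimated gap statistically indistinguishable from zero across bandwidths and outcome definitions.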
Moment generating function of non-Markov self-excited claims processes
http://d.repec.org/n?u=RePEc:aiz:louvad:2021028&r=&r=ore
This article establishes the moment generating function (mgf) of self-excited claim processes with memory functions that admit a Fourier transform representation. In this case, the claim and intensity processes may be reformulated as infinite-dimensional Markov processes in the complex plane. Approximating these processes by discretization and then passing to the limit allows us to find their moment generating function. We illustrate the article by fitting non-Markov self-excited processes to a time series of cyber-attacks targeting medical and other services in the US from 2014 to 2018.
Hainaut, Donatien
self-excited process, shot noise process, Hawkes process
2021-07-02
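The paper works with non-Markov memory kernels; as a baseline illustration of the self-excitation mechanism itself, the standard Markov case (a Hawkes process with exponential kernel) can be simulated with Ogata's thinning algorithm. This is a generic textbook sketch, not the paper's method:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum over past events t_i of alpha*exp(-beta*(t - t_i)).
    Between events the intensity only decays, so its current value
    upper-bounds it until the next event."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)        # candidate arrival time
        if t >= horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:  # accept with prob lam_t/lam_bar
            events.append(t)

# Each claim raises the intensity, so arrivals cluster in bursts --
# the signature pattern of self-excited claim or cyber-attack data.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=20.0)
```

The memory kernels in the paper replace `alpha*exp(-beta*u)` with more general functions admitting a Fourier transform representation, which breaks the Markov property exploited here.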
Inflation in developing economies
http://d.repec.org/n?u=RePEc:ums:papers:2021-08&r=&r=ore
Phillips curves and natural rates of unemployment provide a poor foundation for analyzing inflation in developing economies. Structuralist alternatives have focused on distributional conflict and cross-sectoral interactions, but if the distributional claims are exogenous, the theory has formal similarities with mainstream analysis, generating a natural rate of underemployment. This paper outlines a modified structuralist model in which historically determined distributional claims eliminate this natural rate of underemployment. Economic development and structural transformation are not blocked by immutable distributional claims, but shocks to relative incomes can produce explosive inflation.
Peter Skott
Phillips curve, underemployment, distributional conflict, structuralist model
2021
Los efectos del fin del conflicto armado con las FARC sobre la participación política regional en Colombia
http://d.repec.org/n?u=RePEc:col:000089:019556&r=&r=ore
In Colombia, the armed conflict generated multiple political tensions that led citizens in the regions hardest hit by the conflict to participate less in elections. The peace agreement with the FARC, signed in 2016, has produced changes that could foster political participation in these regions but may also create new conflicts. This empirical study uses a difference-in-differences design with matching to explain changes in political participation in the regions where the FARC had a violent presence in its final years of armed struggle (2010-2014). The results indicate that electoral participation in these regions increased in relative terms during the negotiations, and the effect is stronger in municipalities that showed less support for the military strategy to end the conflict. In the post-agreement years, the effect does not fully persist and is sensitive to the municipal intervention programs derived from the negotiations and to new dynamics of violence.
Emilio Leguízamo Londoño
Peace process; Colombia; Elections; Political participation
2021-08-25
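The core difference-in-differences comparison (leaving aside the matching step, which the paper uses to select comparable control municipalities) reduces to one line of arithmetic; the turnout numbers below are hypothetical:

```python
def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Change in the treated-group mean minus change in the
    control-group mean."""
    mean = lambda v: sum(v) / len(v)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))

# Hypothetical turnout rates (%): both groups rise over time, but
# turnout in FARC-presence municipalities rises 5 points more.
print(diff_in_diff([40, 42], [50, 52], [45, 47], [50, 52]))  # 5.0
```

The identifying assumption is the usual parallel-trends condition: absent the peace process, turnout in both groups would have moved in lockstep, which is what motivates matching on pre-period characteristics.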
Bayesian Persuasion With Costly Information Acquisition
http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2021_296&r=&r=ore
A sender choosing a signal to be disclosed to a receiver can often influence the receiver's actions. Is persuasion harder when the receiver has additional information sources? Does the receiver benefit from having them? We extend Bayesian persuasion to a receiver who can acquire costly information. The game can be solved as standard Bayesian persuasion under an additional constraint: the receiver never learns. The 'threat' of learning hurts the sender. However, the outcome can also be worse for the receiver, in which case the receiver's ability to gather additional information decreases social welfare. Furthermore, we propose a new solution method that does not rely directly on concavification and is also applicable to standard Bayesian persuasion.
Ludmila Matysková
Alfonso Montes
Bayesian persuasion, Rational inattention, Costly information acquisition, Information design
2021-04
The day after tomorrow: mitigation and adaptation policies to deal with uncertainty
http://d.repec.org/n?u=RePEc:fem:femwpa:2021.22&r=&r=ore
Catastrophic events are characterized by "low frequency and high severity". Nevertheless, during the last decades both the frequency and the magnitude of these events have risen significantly worldwide. In 2021, the European Commission adopted a new Strategy on Adaptation to Climate Change aiming to reinforce adaptive capacity and minimize vulnerability to the effects of climate change and natural catastrophes. In a continuous-time framework over an infinite horizon, we solve in closed form the problem of a representative consumer who holds a production technology (firm) and optimizes both intertemporal consumption and the mix between insurance (adaptation) against the magnitude of catastrophic losses and an effort strategy (mitigation) aimed at reducing the frequency of such losses. Catastrophic events are modelled as a Poisson jump process. We then propose numerical simulations calibrated to country-specific data for the five main European economies (Germany, France, Italy, Spain, and the Netherlands). Our model demonstrates that an optimal mix of mitigation and adaptation strategies reduces the volatility of the economic growth rate, even if its level may be lowered by the effort costs. The simulations also show that different countries should optimally react differently to catastrophes, so a one-size-fits-all policy does not appear to be optimal.
Davide Bazzana
Francesco Menoncin
Sergio Vergalli
Uncertainty Modelling, Catastrophic Events, Mitigation, Adaptation, Optimal management
2021-08
Booming gas - A theory of endogenous technological change in resource extraction
http://d.repec.org/n?u=RePEc:zbw:ifwkie:240207&r=&r=ore
This paper introduces endogenous technological change into a Hotelling-Herfindahl model of natural resource use to study recent developments in the U.S. natural gas industry. We consider optimal forward-looking technology investments, study the implications for the order in which conventional gas, shale gas, and a backstop technology are extracted, and characterize the development of gas prices. We find that technology investments increase during the extraction of conventional gas. Once production shifts towards shale gas, investments decline. Consistent with current trends, our theory explains how gas prices can follow a U-shaped path. The calibrated model suggests that U.S. shale gas production continues to grow and prices continue to decrease until 2050. We show analytically and numerically that the introduction of a carbon tax would reduce technology investments and could thus drastically change the temporal pattern of U.S. shale gas extraction. The forward-looking behaviour of firms is crucial for this effect, which does not occur in models that treat the improvement in extraction technology as an unanticipated shock to the industry.
Meier, Felix D.
Quaas, Martin F.
shale gas, endogenous technological change, optimal order of extraction, natural gas prices, extraction costs, renewable backstop, optimal transition, carbon tax
2021