Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2021-04-12
Sharing the cost of cleaning up a polluted river
http://d.repec.org/n?u=RePEc:tin:wpaper:20210028&r=ore
Consider a group of agents located along a polluted river, where every agent must pay a certain cost for cleaning up the river. Following the model of Ni and Wang (2007), we propose the class of alpha-Local Responsibility Sharing methods, which generalizes the Local Responsibility Sharing (LRS) method and the Upstream Equal Sharing (UES) method. We first show that the UES method is characterized by relaxing the independence of upstream costs axiom appearing in Ni and Wang (2007). Then we provide two axiomatizations with endogenous responsibility of the alpha-Local Responsibility Sharing methods, one using this weak independence axiom (taken from the UES method) and one using a weak version of the no blind cost axiom (taken from the LRS method). Moreover, we provide an axiomatization with exogenous responsibility by introducing alpha-responsibility balance. Finally, we define a pollution cost-sharing game and show that, interestingly, the Half Local Responsibility Sharing (HLRS) method coincides with the Shapley value, the nucleolus and the tau-value of the corresponding pollution cost-sharing game. The HLRS method can thus be seen as a middle compromise between the LRS and UES methods.
Wenzhong Li
Genjiu Xu
Rene van den Brink
pollution cost-sharing problems, alpha-Local Responsibility Sharing method, axiomatization, cooperative games
2021-04-05
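The LRS and UES methods can be sketched in a few lines. The alpha-family is assumed here, purely for illustration, to be the convex combination of the two (so that alpha = 1/2 matches the "middle compromise" reading of HLRS); the paper's own definition may differ.

```python
def lrs(costs):
    # Local Responsibility Sharing: each agent pays the cleaning cost
    # of its own river segment
    return list(costs)

def ues(costs):
    # Upstream Equal Sharing (Ni and Wang, 2007): the cost of segment j
    # is split equally among agent j and all of its upstream agents 0..j
    n = len(costs)
    return [sum(costs[j] / (j + 1) for j in range(i, n)) for i in range(n)]

def alpha_lrs(costs, alpha):
    # ASSUMED form: a convex combination of the two methods; alpha = 0.5
    # would then give the Half Local Responsibility Sharing (HLRS) method
    return [alpha * a + (1 - alpha) * b for a, b in zip(lrs(costs), ues(costs))]
```

With three agents and segment costs [6, 6, 6], UES charges the most upstream agent 6 + 3 + 2 = 11, and any alpha keeps the allocation budget-balanced.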
A lattice approach to the Beta distribution induced by stochastic dominance: Theory and applications
http://d.repec.org/n?u=RePEc:arx:papers:2104.01412&r=ore
We provide a comprehensive analysis of the two-parameter Beta distributions seen from the perspective of second-order stochastic dominance. By changing the parameters through a bijective mapping, we work with a bounded subset D instead of an unbounded plane. We show that a mean-preserving spread is equivalent to an increase of the variance, which means that higher moments are irrelevant for comparing the riskiness of Beta distributions. We then derive the lattice structure induced by second-order stochastic dominance, which is feasible thanks to the topological closure of D. Finally, we consider a standard (expected-utility based) portfolio optimization problem whose inputs are the parameters of the Beta distribution. We explicitly characterize the subset of D for which the optimal solution consists of investing 100% of the wealth in the risky asset, and we provide an exhaustive numerical analysis of this optimal solution through (color-coded) graphs.
Yann Braouezec
John Cagnol
2021-04
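The equivalence between a mean-preserving spread and a variance increase can be checked numerically for a concrete pair of Beta distributions: second-order stochastic dominance amounts to comparing integrated CDFs. A minimal sketch — the grid resolution and the example parameters (5, 5) vs (2, 2) are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.stats import beta

def integrated_cdf(dist, grid):
    # cumulative integral of the CDF along the grid (trapezoid rule)
    c = dist.cdf(grid)
    steps = np.diff(grid) * (c[1:] + c[:-1]) / 2.0
    return np.concatenate(([0.0], np.cumsum(steps)))

def ssd_dominates(f, g, grid, tol=1e-9):
    # f second-order stochastically dominates g iff the integrated CDF
    # of f lies weakly below that of g over the whole support
    return bool(np.all(integrated_cdf(f, grid) <= integrated_cdf(g, grid) + tol))

grid = np.linspace(0.0, 1.0, 2001)
low_var, high_var = beta(5, 5), beta(2, 2)  # equal means (0.5), different variances
```

Consistent with the paper's result, the equal-mean, lower-variance Beta(5, 5) dominates Beta(2, 2), and not conversely.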
Bayesian Estimation of Epidemiological Models: Methods, Causality, and Policy Trade-Offs
http://d.repec.org/n?u=RePEc:nbr:nberwo:28617&r=ore
We present a general framework for Bayesian estimation and causality assessment in epidemiological models. The key to our approach is the use of sequential Monte Carlo methods to evaluate the likelihood of a generic epidemiological model. Once we have the likelihood, we specify priors and rely on a Markov chain Monte Carlo to sample from the posterior distribution. We show how to use the posterior simulation outputs as inputs for exercises in causality assessment. We apply our approach to Belgian data for the COVID-19 epidemic during 2020. Our estimated time-varying-parameters SIRD model captures the data dynamics very well, including the three waves of infections. We use the estimated (true) number of new cases and the time-varying effective reproduction number from the epidemiological model as information for structural vector autoregressions and local projections. We document how additional government-mandated mobility curtailments would have reduced deaths at zero cost or a very small cost in terms of output.
Jonas E. Arias
Jesús Fernández-Villaverde
Juan Rubio Ramírez
Minchul Shin
2021-03
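The likelihood-evaluation step can be illustrated with a generic bootstrap particle filter on a toy stochastic SIR-type model with Poisson case reporting. This is a stand-in for, not a reproduction of, the authors' SIRD specification; all parameter values and the reporting model are invented for illustration.

```python
import numpy as np

def sird_step(S, I, beta, gamma, N, rng):
    # one day of a stochastic discrete-time SIR(D)-type model
    new_inf = rng.binomial(S.astype(np.int64), 1.0 - np.exp(-beta * I / N))
    new_out = rng.binomial(I.astype(np.int64), 1.0 - np.exp(-gamma))
    return S - new_inf, I + new_inf - new_out, new_inf

def log_likelihood(cases, beta, gamma, N, n_part=500, report=0.5, seed=1):
    # bootstrap particle filter: propagate particles through the stochastic
    # model and weight them by a Poisson reporting density for observed cases
    rng = np.random.default_rng(seed)
    S = np.full(n_part, N - 10.0)
    I = np.full(n_part, 10.0)
    ll = 0.0
    for y in cases:
        S, I, new_inf = sird_step(S, I, beta, gamma, N, rng)
        lam = np.maximum(report * new_inf, 1e-8)
        logw = y * np.log(lam) - lam          # Poisson log-density up to log(y!)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())            # incremental likelihood
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())  # resample
        S, I = S[idx], I[idx]
    return ll

# synthetic reported cases simulated from the same model (purely illustrative)
rng0 = np.random.default_rng(42)
S0, I0 = np.array([99990.0]), np.array([10.0])
cases = []
for _ in range(40):
    S0, I0, ni = sird_step(S0, I0, 0.3, 0.1, 1e5, rng0)
    cases.append(int(rng0.poisson(0.5 * ni[0])))
```

Wrapping `log_likelihood` in an MCMC sampler over the parameters would give the posterior-simulation step the abstract describes.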
Cost Minimization is Essential for the Sustainable Development of an Industry: A Mathematical Economic Model Approach
http://d.repec.org/n?u=RePEc:pra:mprapa:106924&r=ore
The method of Lagrange multipliers is a powerful technique in multivariable calculus. This study gives an interpretation of the Lagrange multiplier, supported by detailed mathematical calculations, and shows that its value is positive. A cost minimization policy is crucial for the sustainable development of an industry: the main objective of any industry is to minimize production cost in order to maximize profit. We therefore apply the Lagrange multiplier technique to a cost minimization problem subject to a production function as an output constraint. All the calculations needed to predict the future performance of an industry are given in detail. Specifically, we minimize the cost of three inputs of an industry, namely capital, labor and other inputs, subject to a budget constraint, using the Lagrange multiplier technique together with the necessary and sufficient conditions for a minimum.
Mohajan, Haradhan
Lagrange multiplier, cost minimization, mathematical economic models, sustainability
2021-01-30
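A minimal worked instance of this setup can be checked symbolically: minimize input cost subject to a Cobb-Douglas output constraint, and observe that the multiplier is positive (it equals the marginal cost of output, so minimized cost is lambda * Q here). The equal-shares technology and the prices are invented for illustration.

```python
import sympy as sp

K, L, M, lam = sp.symbols('K L M lam', positive=True)
r, w, q, Q = 2, 3, 1, 12                 # illustrative input prices and output target
f = (K * L * M) ** sp.Rational(1, 3)     # Cobb-Douglas production, equal shares
cost = r * K + w * L + q * M

# first-order conditions grad(cost) = lam * grad(f), plus the output constraint
eqs = [sp.Eq(sp.diff(cost, v), lam * sp.diff(f, v)) for v in (K, L, M)]
eqs.append(sp.Eq(f, Q))

# solve the system numerically from a rough starting point
sol = sp.nsolve(eqs, (K, L, M, lam), (10, 7, 20, 5))
K_opt, L_opt, M_opt, lam_opt = [float(v) for v in sol]
```

For this example the multiplier has the closed form lambda = 162^(1/3) ≈ 5.45, and the minimized cost equals lambda * Q, confirming the marginal-cost interpretation.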
Identifying structural shocks to volatility through a proxy-MGARCH model
http://d.repec.org/n?u=RePEc:usg:econwp:2021:03&r=ore
We extend the classical MGARCH specification for volatility modeling by developing a structural MGARCH model targeting identification of shocks and volatility spillovers in a speculative return system. Similarly to the proxy-sVAR framework, we work with auxiliary proxy variables constructed from news-related measures to identify the underlying shock system. We achieve full identification with multiple proxies by chaining Givens rotations. In an empirical application, we identify an equity, bond and currency shock. We study the volatility spillovers implied by these labelled structural shocks. Our analysis shows that symmetric spillover regimes are rejected.
Fengler, Matthias
Polivka, Jeannine
Givens rotations, identification, news-based measures, proxy-MGARCH, shock labelling, structural innovations, volatility spillovers
2021-04
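The chaining device can be sketched directly: composing one Givens rotation per coordinate pair parameterizes a full rotation matrix with n(n-1)/2 angles. This is a generic construction, not the authors' estimation code.

```python
import numpy as np

def givens(n, i, j, theta):
    # n x n rotation by angle theta in the (i, j) coordinate plane
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def chained_rotation(n, angles):
    # chain one Givens rotation per coordinate pair; k runs over
    # the n*(n-1)/2 angles that parameterize the rotation group
    Q = np.eye(n)
    k = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            Q = Q @ givens(n, i, j, angles[k])
            k += 1
    return Q
```

For the three-shock (equity, bond, currency) case, three angles pin down the rotation, which is what makes full identification with multiple proxies tractable.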
Global financial cycles and exchange rate forecast: A factor analysis
http://d.repec.org/n?u=RePEc:pra:mprapa:105358&r=ore
This study applies portfolio balance theory to forecasting exchange rates, arguing for the need to account for the role of the Global Financial Cycle (GFCy). In the first stage of the analysis, we estimate a GFCy model and obtain the idiosyncratic shock. Next, we use the first-stage results as a predictor for the exchange rate. The study builds a dataset for 20 advanced and emerging countries covering 1990Q1-2017Q2. Three results are worth noting. First, our approach to forecasting exchange rates beats the benchmark random walk model. Second, the best predictions are made at short forecasting horizons, i.e. 1 and 4 quarters ahead. Third, forecasting performance in the early part of the sample exceeds that in the later part.
Raheem, Ibrahim
Exchange rate; Factor models; Global financial cycle; Forecasting
2020
A reality check on the GARCH-MIDAS volatility models
http://d.repec.org/n?u=RePEc:hhs:oruesi:2021_002&r=ore
We employ a battery of model evaluation tests for a broad set of GARCH-MIDAS (GM) models and account for data snooping bias. We document that inferences based on standard tests for GM variance components can be misleading. Our data-mining-free results show that the gains from macro-variables in forecasting total (long-run) variance by GM models are overstated (understated). Estimating the different components of volatility is crucial for designing differentiated investment strategies, risk management plans and the pricing of derivative securities. Researchers and practitioners should therefore be wary of data snooping bias, which may contaminate a forecast that appears statistically validated by robust evaluation tests.
Virk, Nader
Javed, Farrukh
Awartani, Basel
GARCH-MIDAS models; component variance forecasts; macro-variables; data snooping
2021-03-30
COVID-19 Time-Varying Reproduction Numbers Worldwide: An Empirical Analysis of Mandatory and Voluntary Social Distancing
http://d.repec.org/n?u=RePEc:fip:feddgw:90500&r=ore
This paper estimates time-varying COVID-19 reproduction numbers worldwide solely based on the number of reported infected cases, allowing for under-reporting. Estimation is based on a moment condition that can be derived from an agent-based stochastic network model of COVID-19 transmission. The outcomes in terms of the reproduction number and the trajectory of per-capita cases through the end of 2020 are very diverse. The reproduction number depends on the transmission rate and the proportion of susceptible population, or the herd immunity effect. Changes in the transmission rate depend on changes in the behavior of the virus, reflecting mutations and vaccinations, and changes in people's behavior, reflecting voluntary or government mandated isolation. Over our sample period, neither mutation nor vaccination are major factors, so one can attribute variation in the transmission rate to variations in behavior. Evidence based on panel data models explaining transmission rates for nine European countries indicates that the diversity of outcomes resulted from the non-linear interaction of mandatory containment measures, voluntary precautionary isolation and the economic incentives that governments provided to support isolation. These effects are precisely estimated and robust to various assumptions. As a result, countries with seemingly different social distancing policies achieved quite similar outcomes in terms of the reproduction number. These results imply that ignoring the voluntary component of social distancing could introduce an upward bias in the estimates of the effects of lock-downs and support policies on the transmission rates.
Alexander Chudik
M. Hashem Pesaran
Alessandro Rebucci
COVID-19; SIR model; epidemics; multiplication factor; under-reporting; social distancing; self-isolation
2021-03-25
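The relationship between the reproduction number, the transmission rate and the susceptible share can be illustrated with textbook SIR accounting under a constant multiplication factor for under-reporting. This is a crude stand-in for the paper's moment-condition estimator; the values of gamma and the reporting rate rho are invented.

```python
import numpy as np

def reproduction_numbers(reported, N, gamma=1 / 14, rho=0.25):
    # back out time-varying R_t from reported new cases, scaling by 1/rho
    # for under-reporting and using the SIR identity
    # c_t = beta_t * S_t * I_t / N  =>  R_t = beta_t * S_t / (gamma * N)
    true_new = np.asarray(reported, float) / rho
    S, I = N - true_new[0], true_new[0]
    R = []
    for c in true_new[1:]:
        beta = c * N / max(S * I, 1e-12)
        R.append(beta * S / (gamma * N))
        S -= c                     # susceptibles deplete (herd immunity effect)
        I += c - gamma * I
    return np.array(R)

# sanity check on data simulated from the same deterministic recursion
N, beta_true, gamma, rho = 1e6, 0.2, 1 / 14, 0.25
S, I, reported = N - 100.0, 100.0, [100.0 * rho]
for _ in range(80):
    c = beta_true * S * I / N
    reported.append(rho * c)
    S -= c
    I += c - gamma * I
R_est = reproduction_numbers(reported, N, gamma=gamma, rho=rho)
```

With a constant transmission rate, the recovered R_t starts at beta/gamma and declines as susceptibles are depleted, separating the behavioral component (beta_t) from the herd-immunity component (S_t/N).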
Dual theory of choice with multivariate risks
http://d.repec.org/n?u=RePEc:arx:papers:2102.02578&r=ore
We propose a multivariate extension of Yaari's dual theory of choice under risk. We show that a decision maker with a preference relation on multidimensional prospects that preserves first order stochastic dominance and satisfies comonotonic independence behaves as if evaluating prospects using a weighted sum of quantiles. Both the notions of quantiles and of comonotonicity are extended to the multivariate framework using optimal transportation maps. Finally, risk averse decision makers are characterized within this framework and their local utility functions are derived. Applications to the measurement of multi-attribute inequality are also discussed.
Alfred Galichon
Marc Henry
2021-02
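The scalar building block of the dual theory — evaluating a prospect as a weighted sum of quantiles, with weights from a probability distortion — is easy to sketch; the multivariate extension via optimal transport maps is beyond a few lines. The grid size and distortion functions below are illustrative.

```python
import numpy as np

def dual_utility(sample, distortion, n_grid=1000):
    # dual-theory evaluation: a weighted sum of sample quantiles, with
    # weights given by the increments of a probability distortion function
    u = (np.arange(n_grid) + 0.5) / n_grid
    q = np.quantile(sample, u)
    w = np.diff(distortion(np.linspace(0.0, 1.0, n_grid + 1)))  # weights sum to 1
    return float(np.sum(w * q))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 200_000)
ev = dual_utility(sample, lambda u: u)           # identity distortion = the mean
tilted = dual_utility(sample, lambda u: u ** 2)  # overweights high quantiles
```

The identity distortion recovers the expected value; a convex distortion such as u^2 shifts weight toward upper quantiles and raises the evaluation, which is how attitudes toward risk enter without curving utility.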
Monte Carlo algorithm for the extrema of tempered stable processes
http://d.repec.org/n?u=RePEc:arx:papers:2103.15310&r=ore
We develop a novel Monte Carlo algorithm for the vector consisting of the supremum, the time at which the supremum is attained and the position of an exponentially tempered Lévy process. The algorithm, based on the increments of the process without tempering, converges geometrically fast (as a function of the computational cost) for discontinuous and locally Lipschitz functions of the vector. We prove that the corresponding multilevel Monte Carlo estimator has optimal computational complexity (i.e. of order $\epsilon^{-2}$ if the mean squared error is at most $\epsilon^{2}$) and provide its central limit theorem (CLT). Using the CLT, we construct confidence intervals for barrier option prices and various drawdown-based risk measures under the tempered stable (CGMY) model calibrated/estimated on real-world data. We provide non-asymptotic and asymptotic comparisons of our algorithm with existing approximations, leading to rule-of-thumb guidelines for choosing the best method for a given set of parameters, and illustrate its performance with numerical examples.
Jorge Ignacio González Cázares
Aleksandar Mijatović
2021-03
A New Approach to the Optimization Problem
http://d.repec.org/n?u=RePEc:ise:remwps:wp01712021&r=ore
A new approach to the problem of optimization is developed using tools such as the concepts of aggregate and combined functions. The solution of a simple calculus-of-variations problem with inequality constraints illustrates the potential of this new method.
João Ferreira do Amaral
optimization, calculus of variations, convexity, quasi-convexity
2021-04
A Global Joint Pricing Model of Stocks and Bonds Based on the Quadratic Gaussian Approach
http://d.repec.org/n?u=RePEc:shg:dpapeb:18&r=ore
This work presents a joint model for bond prices, stock prices, and exchange rates within multi-currency economies. The model includes three types of latent factors: systematic factors that determine the domestic and foreign interest rates, stock-specific factors, and currency-specific factors. By incorporating a stochastic discount factor reflecting these three risk factors, we derive an analytical formula for bond prices, stock prices, and exchange rates based on the quadratic Gaussian approach studied primarily in term structure modeling. Our model has the distinctive feature of capturing market rates in a low interest rate environment. Furthermore, the model not only enables a simultaneous estimation of bond, equity and currency risk premiums but also provides a foundation for solving an investment problem reflecting realistic market conditions.
Kentaro Kikuchi
Stochastic discount factor, No arbitrage condition, Quadratic Gaussian term structure model, Algebraic Riccati equation
Do More School Resources Increase Learning Outcomes? Evidence from an extended school-day reform
http://d.repec.org/n?u=RePEc:uct:uconnp:2021-06&r=ore
Whether allocating more resources improves learning outcomes for students in low-performing public schools remains an open debate. We focus on the effect of increased instructional time, which is theoretically ambiguous due to possible compensating changes in effort by students, teachers or parents. Using a regression discontinuity approach, we find that a reform extending the school day increases math test scores, with a large effect size relative to other interventions. It also improved reading, technical skills and socio-emotional competencies. Our results are partly explained by reductions in home production by students, specialization by teachers and investments in pedagogical assistance to teachers.
Jorge M. Agüero
Marta Favara
Catherine Porter
Alan Sánchez
Extended school-day reform, Jornada Escolar Completa, JEC, Peru, Young Lives
2021-04
Do More School Resources Increase Learning Outcomes? Evidence from an Extended School-Day Reform
http://d.repec.org/n?u=RePEc:iza:izadps:dp14240&r=ore
Whether allocating more resources improves learning outcomes for students in low-performing public schools remains an open debate. We focus on the effect of increased instructional time, which is theoretically ambiguous due to possible compensating changes in effort by students, teachers or parents. Using a regression discontinuity approach, we find that a reform extending the school day increases math test scores, with a large effect size relative to other interventions. It also improved reading, technical skills and socio-emotional competencies. Our results are partly explained by reductions in home production by students, specialization by teachers and investments in pedagogical assistance to teachers.
Agüero, Jorge M.
Favara, Marta
Porter, Catherine
Sanchez, Alan
extended school-day reform, Jornada Escolar Completa, JEC, Peru, Young Lives
2021-03
Cluster-Robust Inference: A Guide to Empirical Practice
http://d.repec.org/n?u=RePEc:qed:wpaper:1456&r=ore
Methods for cluster-robust inference are routinely used in economics and many other disciplines. However, it is only recently that theoretical foundations for the use of these methods in many empirically relevant situations have been developed. In this paper, we use these theoretical results to provide a guide to empirical practice. We do not attempt to present a comprehensive survey of the (very large) literature. Instead, we bridge theory and practice by providing a thorough guide on what to do and why, based on recently available econometric theory and simulation evidence. The paper includes an empirical analysis of the effects of the minimum wage on teenagers using individual data, in which we practice what we preach.
James G. MacKinnon
Morten Ørregaard Nielsen
Matthew D. Webb
2021-04
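The basic object behind cluster-robust inference, the CR1 sandwich estimator, can be sketched in a few lines. This is a textbook implementation on invented simulated data, not the paper's recommended procedures (which also cover bootstrap methods and few-cluster settings).

```python
import numpy as np

def ols_cluster(X, y, clusters):
    # OLS point estimates with CR1 cluster-robust standard errors:
    # V = c * (X'X)^-1 (sum_g X_g' u_g u_g' X_g) (X'X)^-1
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    clusters = np.asarray(clusters)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    u = y - X @ beta
    labels = np.unique(clusters)
    G = len(labels)
    meat = np.zeros((k, k))
    for g in labels:
        m = clusters == g
        s = X[m].T @ u[m]                      # within-cluster score sum
        meat += np.outer(s, s)
    c = (G / (G - 1)) * ((n - 1) / (n - k))    # CR1 finite-sample correction
    V = c * XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

# simulated data with a cluster-level regressor and cluster random effects
rng = np.random.default_rng(3)
G, m = 50, 40
cl = np.repeat(np.arange(G), m)
x_g = rng.normal(size=G)           # regressor constant within cluster
alpha_g = rng.normal(size=G)       # cluster random effect in the error
x = x_g[cl]
y = 1.0 + 2.0 * x + alpha_g[cl] + rng.normal(size=G * m)
X = np.column_stack([np.ones(G * m), x])
beta_hat, se = ols_cluster(X, y, cl)
```

With a cluster-invariant regressor and cluster-correlated errors, the clustered standard error is several times the naive iid one, which is exactly the situation where ignoring clustering misleads.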
Strategic justifications of the TAL-family of rules for bankruptcy problems
http://d.repec.org/n?u=RePEc:pab:wpaper:21.04&r=ore
We follow the Nash program to provide a new strategic justification for the TAL-family of rules for bankruptcy problems. The design of our game is inspired by an axiomatization of the TAL-family of rules exploiting the properties of consistency together with certain degrees of lower and upper bounds to all creditors. Bilateral negotiations of our game follow the spirit of those bounds. By means of consistency, we then extend the bilateral negotiations to an arbitrary number of creditors.
Juan D. Moreno-Ternero
Min-Hung Tsay
Chun-Hsien Yeh
Nash program; bankruptcy problems; strategic justification; consistency; TAL-family of rules
2021
Efficient Effort Equilibrium in Cooperation with Pairwise Cost Reduction
http://d.repec.org/n?u=RePEc:pra:mprapa:105604&r=ore
There are multiple situations in which bilateral interaction between agents results in considerable cost reductions. Such interaction can occur in settings where agents are interested in sharing resources, knowledge or infrastructures. Their common purpose is to obtain individual advantages, e.g. by reducing their respective individual costs. Achieving this pairwise cooperation often requires the agents involved to make some level of effort. It is natural to think that the amount by which one agent could reduce the costs of the other may depend on how much effort the latter exerts. We therefore model the interaction as a two-stage game. In the first stage, agents decide how much effort to exert, which has a direct impact on their pairwise cost reductions. We model this first stage as a non-cooperative game, in which agents determine the level of pairwise effort to reduce the cost of their partners. In the second stage, agents engage in bilateral interaction between independent partners. We study this bilateral cooperation as a cooperative game in which agents reduce each other's costs as a result of cooperation, so that the total reduction in the cost of each agent in a coalition is the sum of the reductions generated by the rest of the members of that coalition. In the non-cooperative game that precedes cooperation with pairwise cost reduction, the agents anticipate the cost allocation that results from the cooperative game in the second stage by incorporating the effect of the effort exerted into their cost functions. Based on this model, we explore the costs, benefits, and challenges associated with setting up a pairwise effort network. We identify a family of cost allocations with weighted pairwise reduction which are always feasible in the cooperative game and contain the Shapley value. We show that there are always cost allocations with weighted pairwise reductions that generate an optimal level of efficient effort, and we provide a procedure for finding the efficient effort equilibrium.
García-Martínez, Jose A.
Mayor-Serra, Antonio J.
Meca, Ana
Allocation, Cost models, Efficiency, Game Theory
2020-12-15
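The second-stage game described here has an additive structure: a coalition's value is the sum of the pairwise reductions among its members. A brute-force Shapley computation on invented numbers shows that, in such a game, the value splits each pair's joint reduction equally between the two agents.

```python
from itertools import permutations

def coalition_value(S, r):
    # total cost reduction achieved inside coalition S: each member's
    # reduction is the sum of pairwise reductions generated by the others
    return sum(r[i][j] for i in S for j in S if i != j)

def shapley(n, r):
    # exact Shapley value: average marginal contribution over all join orders
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        S = set()
        for i in order:
            before = coalition_value(S, r)
            S.add(i)
            phi[i] += coalition_value(S, r) - before
    return [p / len(orders) for p in phi]

# r[i][j]: reduction that agent j generates for agent i (illustrative numbers)
r = [[0, 4, 2],
     [6, 0, 0],
     [2, 8, 0]]
phi = shapley(3, r)
```

Here agent 0's Shapley payoff is (4+6)/2 + (2+2)/2 = 7, and the three payoffs exhaust the grand coalition's total reduction.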
COVID-19 Time-varying Reproduction Numbers Worldwide: An Empirical Analysis of Mandatory and Voluntary Social Distancing
http://d.repec.org/n?u=RePEc:nbr:nberwo:28629&r=ore
This paper estimates time-varying COVID-19 reproduction numbers worldwide solely based on the number of reported infected cases, allowing for under-reporting. Estimation is based on a moment condition that can be derived from an agent-based stochastic network model of COVID-19 transmission. The outcomes in terms of the reproduction number and the trajectory of per-capita cases through the end of 2020 are very diverse. The reproduction number depends on the transmission rate and the proportion of susceptible population, or the herd immunity effect. Changes in the transmission rate depend on changes in the behavior of the virus, reflecting mutations and vaccinations, and changes in people's behavior, reflecting voluntary or government mandated isolation. Over our sample period, neither mutation nor vaccination are major factors, so one can attribute variation in the transmission rate to variations in behavior. Evidence based on panel data models explaining transmission rates for nine European countries indicates that the diversity of outcomes results from the non-linear interaction of mandatory containment measures, voluntary precautionary isolation, and the economic incentives that governments provided to support isolation. These effects are precisely estimated and robust to various assumptions. As a result, countries with seemingly different social distancing policies achieved quite similar outcomes in terms of the reproduction number. These results imply that ignoring the voluntary component of social distancing could introduce an upward bias in the estimates of the effects of lock-downs and support policies on the transmission rates. The full set of estimation results and the replication package are available on the authors' websites.
Alexander Chudik
M. Hashem Pesaran
Alessandro Rebucci
2021-04
Basic Income Simulations for the Province of British Columbia
http://d.repec.org/n?u=RePEc:pra:mprapa:105918&r=ore
An important component of the work to be completed by British Columbia's Expert Panel on Basic Income is to design simulations of how various basic income (BI) models could work in B.C. (B.C. Poverty Reduction, 2018). The intent of these simulations is to identify the potential impacts and financial implications for B.C. residents of different variants of a BI. Given the poverty reduction targets passed by the B.C. government, detailed in Petit and Tedds (2020d), the potential impacts include those on the incidence and depth of poverty in the province (B.C. Poverty Reduction, n.d.). The panel ran over 16,000 different BI scenarios for B.C., modelled using Statistics Canada's Social Policy Simulation Database and Model (SPSD/M). We evaluate the different BI scenarios in terms of their implications for a variety of measures, including cost, number of recipients, rates of poverty, depth of poverty, distributional effects, and inequality impacts. This paper provides details regarding these simulations. Our goal is simply to consider different versions of a basic income in terms of both their cost implications and their implications for poverty reduction. We believe that identifying the most effective variants of a basic income in terms of these two criteria will help sharpen the conversation about the applicability of a basic income as a policy option for B.C.
Green, David A.
Kesselman, Jonathan Rhys
Tedds, Lindsay M.
Crisan, I. Daria
Petit, Gillian
Basic income; Simulations; Statistics Canada's Social Policy Simulation Database and Model; Poverty reduction; Distributional effects; Inequality
2020-12
Gender-Based Analysis Plus (GBA+) of the Current System of Income and Social Supports in British Columbia
http://d.repec.org/n?u=RePEc:pra:mprapa:105942&r=ore
This paper is one of three papers focused on bringing a GBA+ lens to the work of the Expert Panel on Basic Income. In Cameron and Tedds (2020b), background is provided on gender and intersectional analysis and an enhanced GBA+ framework is developed based on the Status of Women Canada’s GBA+ tool. In Cameron and Tedds (2020a), a GBA+ analysis is applied to two policy reforms—basic income and basic services—to consider their potential in the context of B.C.’s poverty reduction strategy. In this paper, we apply the enhanced GBA+ analysis to the current system of income and social supports in B.C. along with the suite of proposed reforms recommended in Petit and Tedds (2020d, 2020e) using BI principles. Both of these—BI principles and GBA+/intersectionality—have transformative potential. Applying a GBA+ lens along with BI principles illuminates ways we can address structural barriers such as institutional and systemic discrimination, reducing the risk of poverty among diverse groups and promoting long-term transformative change.
Petit, Gillian
Tedds, Lindsay M.
GBA+, intersectionality, Basic Income, basic services, income assistance, social assistance, poverty reduction, structural barriers, systems reforms, public policy
2020-12
Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L)
http://d.repec.org/n?u=RePEc:arx:papers:2104.01040&r=ore
This paper addresses distributional offline continuous-time reinforcement learning (DOCTR-L) with stochastic policies for high-dimensional optimal control. A soft distributional version of the classical Hamilton-Jacobi-Bellman (HJB) equation is given by a semilinear partial differential equation (PDE). This `soft HJB equation' can be learned from offline data without assuming that the latter correspond to a previous optimal or near-optimal policy. A data-driven solution of the soft HJB equation uses methods of Neural PDEs and Physics-Informed Neural Networks developed in the field of Scientific Machine Learning (SciML). The suggested approach, dubbed `SciPhy RL', thus reduces DOCTR-L to solving neural PDEs from data. Our algorithm called Deep DOCTR-L converts offline high-dimensional data into an optimal policy in one step by reducing it to supervised learning, instead of relying on value iteration or policy iteration methods. The method enables a computable approach to the quality control of obtained policies in terms of both their expected returns and uncertainties about their values.
Igor Halperin
2021-04
The Pricing Strategies of Online Grocery Retailers
http://d.repec.org/n?u=RePEc:nbr:nberwo:28639&r=ore
Matched product data are collected from the leading online grocers in the U.S., and the same exact products are identified in scanner data. The paper documents pricing strategies within and across online (and offline) retailers. First, online retailers exhibit substantially less uniform pricing than offline retailers. Second, online price differentiation across competing chains in narrow geographies is higher than among offline retailers. Third, variation in offline elasticities, shipping distance, pricing frequency, and local demographics is used to explain price differentiation. Surprisingly, pricing technology (across time) magnifies price differentiation (across locations). This evidence motivates a high-frequency study to unpack the patterns of algorithmic pricing. The data show that algorithms personalize prices at the delivery zipcode level, update prices very frequently and in tiny magnitudes, reduce price synchronization, exhibit lower menu costs, constantly explore the price grid, and often match competitors' prices.
Diego Aparicio
Zachary Metzman
Roberto Rigobon
2021-04
Structural and Predictive Analyses with a Mixed Copula-Based Vector Autoregression Model
http://d.repec.org/n?u=RePEc:pre:wpaper:202108&r=ore
In this study, we introduce a mixed copula-based vector autoregressive (VAR) model for investigating the relationship between random variables. One-step maximum likelihood estimation is used to obtain point estimates of the autoregressive parameters and the mixed copula parameters. More specifically, we combine the likelihoods of the marginals and the mixed copula to construct the full likelihood function. A simulation study confirms the accuracy of the estimation as well as the reliability of the proposed model. Various mixed copula forms built from combinations of Gaussian, Student-t, Clayton, Frank, Gumbel, and Joe copulas are introduced. The proposed model is compared to the traditional VAR model and to single copula-based VAR models to assess its performance, and a real data study is conducted to validate the proposed method. We find that one-step maximum likelihood provides accurate and reliable results, and we show that ignoring the complex, nonlinear correlation between the errors causes a significant efficiency loss in parameter estimation, in terms of bias and MSE. In the application, the mixed copula-based VAR provides the best fit.
Woraphon Yamaka
Rangan Gupta
Sukrit Thongkairat
Paravee Maneejuk
Forecasting; Mixed copula; One step maximum likelihood estimation; Vector autoregressive
2021-01
Will We Travel Less after the Pandemic? (Kommer vi resa mindre efter pandemin?)
http://d.repec.org/n?u=RePEc:pra:mprapa:106156&r=ore
During the pandemic, our everyday lives have changed in many ways. More people have worked from home, and we have avoided shops, restaurants and many leisure activities. This has naturally reduced travel. Will the new habits and possibilities lead us to travel less even after the pandemic? This paper discusses whether the changes seen during the pandemic are likely to have lasting effects on travel in the long run, based on an analysis of historical data on travel and transport. Judging by history, new habits and increased digital accessibility will hardly reduce total travel, at least not to any great extent: historical data do not suggest that better means of remote contact lead to shorter travel times or travel distances overall.
Eliasson, Jonas
Travel behaviour; pandemic; covid-19; travel patterns
2021-04-07
JDOI Variance Reduction Method and the Pricing of American-Style Options
http://d.repec.org/n?u=RePEc:arx:papers:2104.01365&r=ore
The present article revisits the Diffusion Operator Integral (DOI) variance reduction technique originally proposed in Heath and Platen (2002) and extends its theoretical concept to the pricing of American-style options under (time-homogeneous) Lévy stochastic differential equations. The resulting Jump Diffusion Operator Integral (JDOI) method can be combined with numerous Monte Carlo based stopping-time algorithms, including the ubiquitous least-squares Monte Carlo (LSMC) algorithm of Longstaff and Schwartz (cf. Carriere (1996), Longstaff and Schwartz (2001)). We exemplify the usefulness of our theoretical derivations under concrete, though very general, jump-diffusion stochastic volatility dynamics and test the resulting LSMC-based version of the JDOI method. The results provide evidence of a strong variance reduction when compared with a plain application of the LSMC algorithm and prove that applying our technique on top of Monte Carlo based pricing schemes provides a powerful way to speed up these methods.
Johan Auster
Ludovic Mathys
Fabio Maeder
2021-04
Can Automatic Retention Improve Health Insurance Market Outcomes?
http://d.repec.org/n?u=RePEc:nbr:nberwo:28630&r=ore
There is growing interest in market design using default rules and other choice architecture principles to steer consumers toward desirable outcomes. Using data from Massachusetts’ health insurance exchange, we study an "automatic retention" policy intended to prevent coverage interruptions among low-income enrollees. Rather than disenroll people who lapse in paying premiums, the policy automatically switches them to an available free plan until they actively cancel or lose eligibility. We find that automatic retention has a sizable impact, switching 14% of consumers annually and differentially retaining healthy, low-cost individuals. The results illustrate the power of defaults to shape insurance coverage outcomes.
Adrianna L. McIntyre
Mark Shepard
Myles Wagner
2021-04
Inference under Covariate-Adaptive Randomization with Imperfect Compliance
http://d.repec.org/n?u=RePEc:arx:papers:2102.03937&r=ore
This paper studies inference in a randomized controlled trial (RCT) with covariate-adaptive randomization (CAR) and imperfect compliance of a binary treatment. In this context, we study inference on the LATE. As in Bugni et al. (2018, 2019), CAR refers to randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. In contrast to these papers, however, we allow participants of the RCT to endogenously decide to comply or not with the assigned treatment status. We study the properties of an estimator of the LATE derived from a "fully saturated" IV linear regression, i.e., a linear regression of the outcome on all indicators for all strata and their interaction with the treatment decision, with the latter instrumented with the treatment assignment. We show that the proposed LATE estimator is asymptotically normal, and we characterize its asymptotic variance in terms of primitives of the problem. We provide consistent estimators of the standard errors and asymptotically exact hypothesis tests. In the special case when the target proportion of units assigned to each treatment does not vary across strata, we can also consider two other estimators of the LATE, including the one based on the "strata fixed effects" IV linear regression, i.e., a linear regression of the outcome on indicators for all strata and the treatment decision, with the latter instrumented with the treatment assignment. Our characterization of the asymptotic variance of the LATE estimators allows us to understand the influence of the parameters of the RCT. We use this to propose strategies to minimize their asymptotic variance in a hypothetical RCT based on data from a pilot study. We illustrate the practical relevance of these results using a simulation study and an empirical application based on Dupas et al. (2018).
Federico A. Bugni
Mengsi Gao
2021-02
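The stratified Wald/IV logic underlying the abstract can be sketched numerically: within each stratum, the intention-to-treat effects on outcome and take-up are estimated, then aggregated by stratum shares. This is a sketch of one simple aggregation (the ratio of stratum-weighted ITT effects), not the paper's exact estimator or its inference procedure.

```python
import numpy as np

def stratified_late(y, d, z, strata):
    """Stratum-weighted Wald estimate of the LATE: ratio of the
    covariate-weighted ITT effect on the outcome (y) to the ITT effect
    on treatment take-up (d), with z the random assignment.
    Sketch only; no standard errors."""
    y, d, z, strata = map(np.asarray, (y, d, z, strata))
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        w = m.mean()                                       # stratum share
        itt_y = y[m & (z == 1)].mean() - y[m & (z == 0)].mean()
        itt_d = d[m & (z == 1)].mean() - d[m & (z == 0)].mean()
        num += w * itt_y                                   # ITT on outcome
        den += w * itt_d                                   # first stage
    return num / den
```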
Information Communication & Computation Technology (ICCT) as a Strategic Tool for Industry Sectors
http://d.repec.org/n?u=RePEc:pra:mprapa:105619&r=ore
Information Communication and Computation Technology (ICCT) and Nanotechnology (NT) are recently identified universal technologies of the 21st century and are expected to contribute substantially to the development of society by meeting the basic needs, advanced wants, and dreamy desires of human beings. In this paper, the possibilities of using ICCT and its ten most important underlying emerging technologies, namely Artificial intelligence, Big data & business analytics, Cloud computing, Digital marketing, 3D printing, Internet of Things, Online ubiquitous education, Optical computing, Storage technology, and Virtual & Augmented Reality, are explored. The emerging trends in applications of these underlying ICCT technologies in the primary, secondary, tertiary, and quaternary industry sectors of society are discussed, analysed, and predicted using a newly developed predictive analysis model. The advantages, benefits, constraints, and disadvantages of such technologies in enabling human beings to lead a luxurious and comfortable lifestyle are identified and discussed from the points of view of various stakeholders. The paper also focuses on the potential applications of ICCT as a strategic tool for the survival, sustainability, differentiation, and development of various primary, secondary, tertiary, and quaternary industries.
Aithal, Sreeramana
L. M., Madhushree
ICCT, Universal technology, Emerging trends, Information science & technology, Industry sectors, ICCT as a strategic tool
2019-11-15
The Application of Machine Learning Algorithms for Spatial Analysis: Predicting of Real Estate Prices in Warsaw
http://d.repec.org/n?u=RePEc:war:wpaper:2021-05&r=ore
The principal aim of this paper is to investigate the potential of machine learning algorithms in the context of predicting housing prices. The most important issue in modelling spatial data is to account for spatial heterogeneity, which can bias the obtained results when it is not taken into consideration. The purpose of this research is to compare the predictive power of the following methods: linear regression, artificial neural networks, random forests, extreme gradient boosting, and the spatial error model. The evaluation was conducted using train, validation, and test splits as well as k-fold cross-validation. We also examined the ability of the above models to identify spatial dependencies by calculating Moran's I for residuals obtained on in-sample and out-of-sample data.
Dawid Siwicki
spatial analysis, machine learning, housing market, random forest, gradient boosting
2021
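The residual diagnostic mentioned in the abstract, Moran's I, can be computed with plain NumPy. The sketch below uses a row-standardised k-nearest-neighbour spatial weight matrix; the paper does not specify its weighting scheme, so kNN weights are an assumption made here for illustration.

```python
import numpy as np

def morans_i(residuals, coords, k=5):
    """Moran's I for regression residuals, using a row-standardised
    k-nearest-neighbour weight matrix built from point coordinates.
    Values near 0 suggest no spatial autocorrelation; positive values
    suggest spatially clustered residuals. Illustrative sketch."""
    e = np.asarray(residuals, float)
    e = e - e.mean()
    coords = np.asarray(coords, float)
    n = len(e)
    # pairwise distances; exclude self-neighbours
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    nn = np.argsort(dist, axis=1)[:, :k]
    W = np.zeros((n, n))
    for i in range(n):
        W[i, nn[i]] = 1.0 / k          # each row sums to one
    s0 = W.sum()
    return (n / s0) * (e @ W @ e) / (e @ e)
```

Computed on out-of-sample residuals, a value significantly above its expectation of -1/(n-1) indicates that the model has left spatial structure unexplained.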
Rethinking the Role of the Representativeness Heuristic in Macroeconomics and Finance Theory
http://d.repec.org/n?u=RePEc:thk:wpaper:inetwp142&r=ore
We propose a novel interpretation and formalization of Kahneman and Tversky's findings in the Linda experiment, which implies that subjects are rational in the sense of Muth's hypothesis and provides an approach to specifying rational assessments of uncertainty in macroeconomic models. Behavioral-finance theorists have appealed to Kahneman and Tversky's findings as an empirical foundation for a general approach replacing rational expectations. We show that behavioral models' specifications of participants' irrational forecasts and predictable errors are incompatible with Kahneman and Tversky's findings. Our interpretation of Kahneman and Tversky's findings is supportive of Lucas's compelling critique of inconsistent macroeconomic models.
Roman Frydman
Morten Nyboe Tabor
Uncertainty in Economic Models; Kahneman and Tversky's Experimental Findings; Behavioral Finance; Muth's Hypothesis; REH.
2020-12-14
Regularized Estimation of High-Dimensional Vector AutoRegressions with Weakly Dependent Innovations
http://d.repec.org/n?u=RePEc:arx:papers:1912.09002&r=ore
There has been considerable advance in understanding the properties of sparse regularization procedures in high-dimensional models. In the time series context, this understanding is mostly restricted to Gaussian autoregressions or mixing sequences. We study oracle properties of LASSO estimation of weakly sparse vector-autoregressive models with heavy-tailed, weakly dependent innovations, with virtually no assumption on the conditional heteroskedasticity. In contrast to the current literature, our innovation process satisfies an $L^1$ mixingale type condition on the centered conditional covariance matrices. This condition covers $L^1$-NED sequences and strong ($\alpha$-)mixing sequences as particular examples. From a modeling perspective, it covers several multivariate GARCH specifications, such as the BEKK model, and other factor stochastic volatility specifications that were ruled out by assumption in previous studies.
Ricardo P. Masini
Marcelo C. Medeiros
Eduardo F. Mendes
2019-12