Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2019-10-14
Benchmarking Global Optimizers
http://d.repec.org/n?u=RePEc:nbr:nberwo:26340&r=ore
We benchmark seven global optimization algorithms by comparing their performance on challenging multidimensional test functions as well as a method of simulated moments estimation of a panel data model of earnings dynamics. Five of the algorithms are taken from the popular NLopt open-source library: (i) Controlled Random Search with local mutation (CRS), (ii) Improved Stochastic Ranking Evolution Strategy (ISRES), (iii) Multi-Level Single-Linkage (MLSL) algorithm, (iv) Stochastic Global Optimization (StoGo), and (v) Evolutionary Strategy with Cauchy distribution (ESCH). The other two algorithms are versions of TikTak, which is a multistart global optimization algorithm used in some recent economic applications. For completeness, we add three popular local algorithms to the comparison—the Nelder-Mead downhill simplex algorithm, the Derivative-Free Non-linear Least Squares (DFNLS) algorithm, and a popular variant of the Davidon-Fletcher-Powell (DFPMIN) algorithm. To give a detailed comparison of algorithms, we use a set of benchmarking tools recently developed in the applied mathematics literature. We find that the success rate of many optimizers varies dramatically with the characteristics of each problem and the computational budget that is available. Overall, TikTak is the strongest performer on both the math test functions and the economic application. The next-best performing optimizers are StoGo and CRS for the test functions and MLSL for the economic application.
Antoine Arnoud
Fatih Guvenen
Tatjana Kleineberg
2019-10
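TikTak's multistart logic can be illustrated in a few lines. The sketch below is a simplified stand-in, not the authors' implementation: it draws uniform random points rather than a Sobol sequence, keeps the most promising ones, and polishes each with a local Nelder-Mead search via SciPy (the real TikTak also blends each restart point toward the incumbent best).

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def multistart(f, bounds, n_draws=200, n_local=10, seed=0):
    """Two-stage multistart: global exploration, then local polishing."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    # Stage 1: evaluate many random points, keep the most promising ones.
    draws = rng.uniform(lo, hi, size=(n_draws, len(lo)))
    order = np.argsort([f(x) for x in draws])
    starts = draws[order[:n_local]]
    # Stage 2: run a local (Nelder-Mead) search from each kept point.
    results = [minimize(f, x0, method="Nelder-Mead") for x0 in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun
```

For example, `multistart(rastrigin, [(-5, 5)] * 2)` returns a point near one of Rastrigin's local minima; with a larger budget of draws and local searches, finding the global minimum at the origin becomes increasingly likely.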
Comparing Tests for Identification of Bubbles
http://d.repec.org/n?u=RePEc:aah:create:2019-16&r=ore
This paper compares the log periodic power law (LPPL) and the supremum augmented Dickey-Fuller (supremum ADF) procedures with respect to their bubble detection and time-stamping capabilities, in a thorough analysis based on simulated data. A generalized formulation of the LPPL procedure is derived and analysed, demonstrating performance improvements.
Kristoffer Pons Bertelsen
Rational bubbles, explosive processes, log periodic power law, critical points theory
2019-10-11
Multiple Days Ahead Realized Volatility Forecasting: Single, Combined and Average Forecasts
http://d.repec.org/n?u=RePEc:pra:mprapa:96272&r=ore
The task of this paper is to enhance realized volatility forecasts. We investigate whether a mixture of predictions (either the combination or the averaging of forecasts) can provide more accurate volatility forecasts than the forecasts of a single model. We estimate long-memory and heterogeneous autoregressive models under symmetric and asymmetric distributions for the major European Union stock market indices and the exchange rates of the Euro. The majority of models provide qualitatively similar predictions for the next trading day’s volatility forecast. However, with regard to the one-week forecasting horizon, the heterogeneous autoregressive model is statistically superior to the long-memory framework. Moreover, for the two-weeks-ahead forecasting horizon, the combination of realized volatility predictions increases forecasting accuracy, and forecast averaging provides superior predictions to those supplied by a single model. Finally, the modeling of volatility asymmetry is important for the two-weeks-ahead volatility forecasts.
Degiannakis, Stavros
averaging forecasts, combining forecasts, heterogeneous autoregressive, intra-day data, long memory, model confidence set, predictive ability, realized volatility, ultra-high frequency
2018
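The heterogeneous autoregressive (HAR) model referred to above regresses realized volatility on its previous daily value and its weekly (5-day) and monthly (22-day) averages. A minimal sketch, assuming daily realized-volatility data in a NumPy array (the function names are mine, not from the paper):

```python
import numpy as np

def har_design(rv):
    """Build the HAR regressors: daily lag, 5-day and 22-day averages."""
    rv = np.asarray(rv, dtype=float)
    rows, y = [], []
    for t in range(22, len(rv)):
        rows.append([1.0, rv[t-1], rv[t-5:t].mean(), rv[t-22:t].mean()])
        y.append(rv[t])
    return np.array(rows), np.array(y)

def fit_har(rv):
    """Estimate the HAR coefficients by ordinary least squares."""
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def forecast_har(rv, beta):
    """One-step-ahead forecast from the most recent observations."""
    rv = np.asarray(rv, dtype=float)
    x = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return x @ beta
```

Extensions of this baseline add jump components or leverage terms to the regressor set; multi-step forecasts iterate the one-step rule or regress the h-day-ahead value directly on the same regressors.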
Conditional Sum of Squares Estimation of Multiple Frequency Long Memory Models
http://d.repec.org/n?u=RePEc:pra:mprapa:96314&r=ore
We review the multiple frequency Gegenbauer autoregressive moving average model, which is able to reproduce a wide range of autocorrelation functions. Extending the result of Chung (1996a), we propose the asymptotic distributions for a conditional sum of squares estimator of the model parameters. The parameters that determine the cycle lengths are asymptotically independent, converging at rate T for finite cycles. This result does not hold generally, most notably for the differencing parameters associated with the cycle lengths. Remaining parameters are typically not independent and converge at the standard rate of T^{1/2}. We present simulation results to explore small sample properties of the estimator, which strongly support most distributional results while also highlighting areas that merit additional exploration. We demonstrate the applicability of the theory and estimator with an application to IBM trading volume.
Beaumont, Paul
Smallwood, Aaron
k-factor Gegenbauer processes, Asymptotic distributions, ARFIMA, Conditional sum of squares
2019-09-29
Nonzero-sum stochastic differential games with impulse controls: a verification theorem with applications
http://d.repec.org/n?u=RePEc:ehl:lserod:100003&r=ore
We consider a general nonzero-sum impulse game with two players. The main mathematical contribution of the paper is a verification theorem which provides, under some regularity conditions, a suitable system of quasi-variational inequalities for the payoffs and the strategies of the two players at some Nash equilibrium. As an application, we study an impulse game with a one-dimensional state variable, following a real-valued scaled Brownian motion, and two players with linear and symmetric running payoffs. We fully characterize a family of Nash equilibria and provide explicit expressions for the corresponding equilibrium strategies and payoffs. We also prove some asymptotic results with respect to the intervention costs. Finally, we consider two further non-symmetric examples where a Nash equilibrium is found numerically.
Aïd, René
Basei, Matteo
Callegaro, Giorgia
Campi, Luciano
Vargiolu, Tiziano
stochastic differential game; impulse control; Nash equilibrium; quasi-variational inequality
2019-07-17
Imposing Equilibrium Restrictions in the Estimation of Dynamic Discrete Games
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14007&r=ore
Imposing equilibrium restrictions provides substantial gains in the estimation of dynamic discrete games. Estimation algorithms imposing these restrictions -- MPEC, NFXP, NPL, and variations -- have different merits and limitations. MPEC guarantees local convergence, but requires the computation of high-dimensional Jacobians. The NPL algorithm avoids the computation of these matrices, but -- in games -- may fail to converge to the consistent NPL estimator. We study the asymptotic properties of the NPL algorithm treating the iterative procedure as performed in finite samples. We find that there are always samples for which the algorithm fails to converge, and this introduces a selection bias. We also propose a spectral algorithm to compute the NPL estimator. This algorithm satisfies local convergence and avoids the computation of Jacobian matrices. We present simulation evidence illustrating our theoretical results and the good properties of the spectral algorithm.
Aguirregabiria, Victor
Marcoux, Mathieu
convergence; Convergence selection bias; Dynamic discrete games; Nested pseudo-likelihood; Spectral algorithms
2019-09
Matching with Externalities
http://d.repec.org/n?u=RePEc:cpr:ceprdp:13994&r=ore
We incorporate externalities into the stable matching theory of two-sided markets. Extending the classical substitutes condition to allow for externalities, we establish that stable matchings exist when agent choices satisfy substitutability. Furthermore, we show that substitutability is a necessary condition for the existence of a stable matching in a maximal-domain sense and provide a characterization of substitutable choice functions. In addition, we establish novel comparative statics on externalities and show that the standard insights of matching theory, like the existence of side-optimal stable matchings and the deferred acceptance algorithm, remain valid despite the presence of externalities even though the standard fixed-point techniques do not apply.
Pycia, Marek
Yenmez, M. Bumin
2019-09
Will Artificial Intelligence Replace Computational Economists Any Time Soon?
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14024&r=ore
Artificial intelligence (AI) has impressive applications in many fields (speech recognition, computer vision, etc.). This paper demonstrates that AI can also be used to analyze complex and high-dimensional dynamic economic models. We show how to convert three fundamental objects of economic dynamics -- lifetime reward, the Bellman equation and the Euler equation -- into objective functions suitable for deep learning (DL). We introduce an all-in-one integration technique that makes the stochastic gradient unbiased for the constructed objective functions. We show how to use neural networks to deal with multicollinearity and perform model reduction in Krusell and Smith's (1998) model, in which decision functions depend on thousands of state variables -- we literally feed distributions into neural networks! In our examples, the DL method was reliable, accurate and linearly scalable. Our Python code, built on the Dolo and Google TensorFlow platforms, is designed to accommodate a variety of models and applications.
Maliar, Lilia
Maliar, Serguei
Winant, Pablo
artificial intelligence; Bellman equation; deep learning; Dynamic Models; Dynamic programming; Euler Equation; Machine Learning; neural network; stochastic gradient; value function
2019-09
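The "all-in-one integration" idea can be illustrated with a standard two-draw trick: differentiating a single squared Monte Carlo draw gives a biased gradient of a squared expectation, while multiplying two independent draws of the residual keeps the gradient unbiased. This is a toy sketch of the principle, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.0
n = 200_000

# Residual f(x, w) = x**2 + w*x with E[w] = 0, so E[f] = x**2 and the
# target objective (E[f])**2 = x**4 has true gradient 4*x**3 = 4 at x = 1.
w1, w2 = rng.standard_normal(n), rng.standard_normal(n)

f  = lambda w: x**2 + w * x   # residual draw
fx = lambda w: 2 * x + w      # its derivative with respect to x

# Naive estimator differentiates a single squared draw: biased for (E[f])**2
# because Cov(f, f') != 0 contaminates the expectation.
naive = 2 * f(w1) * fx(w1)

# "All-in-one" estimator: two independent draws make the gradient unbiased,
# since E[f'(w1) f(w2) + f(w1) f'(w2)] = 2 E[f'] E[f].
aio = fx(w1) * f(w2) + f(w1) * fx(w2)
```

Here `naive.mean()` lands near 6 (the biased target 4x^3 + 2x), while `aio.mean()` is close to the true gradient 4.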
Higher Orders of Rationality and the Structure of Games
http://d.repec.org/n?u=RePEc:bge:wpaper:1120&r=ore
Identifying individual levels of rationality is crucial to modeling strategic interaction and understanding behavior in games. Nevertheless, there is no consensus on how best to identify levels of higher order rationality, and the identification of an empirical distribution remains highly elusive. In particular, the games used for the task can have a huge impact on the identified distribution. To tackle this fundamental problem, this paper introduces an axiomatic approach that singles out a simple class of games that minimizes the probability of misidentification errors. It then shows that the axioms are empirically meaningful in a within-subject experiment that compares the distribution of orders of rationality across different games, including standard games from the literature. The games singled out by the axioms exhibit the highest correlation both with the distribution of the most frequent rationality level a subject has been classified with and with an independent measure of cognitive ability. Finally, there is no evidence in our sample of within-subject consistency of identified rationality levels across games.
Francesco Cerigioni
Fabrizio Germano
Pedro Rey-Biel
Peio Zuazo-Garin
rationality, higher-order rationality, revealed rationality, levels of thinking
2019-10
Financial Frictions and the Wealth Distribution
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14002&r=ore
This paper investigates how, in a heterogeneous agents model with financial frictions, idiosyncratic individual shocks interact with exogenous aggregate shocks to generate time-varying levels of leverage and endogenous aggregate risk. To do so, we show how such a model can be efficiently computed, despite its substantial nonlinearities, using tools from machine learning. We also illustrate how the model can be structurally estimated with a likelihood function, using tools from inference with diffusions. We document, first, the strong nonlinearities created by financial frictions. Second, we report the existence of multiple stochastic steady states with properties that differ from the deterministic steady state along important dimensions. Third, we illustrate how the generalized impulse response functions of the model are highly state-dependent. In particular, we find that the recovery after a negative aggregate shock is more sluggish when the economy is more leveraged. Fourth, we prove that wealth heterogeneity matters in this economy because of the asymmetric responses of household consumption decisions to aggregate shocks.
Fernández-Villaverde, Jesús
Hurtado, Samuel
Nuño, Galo
Aggregate shocks; continuous-time; Heterogeneous Agents; Machine Learning; structural estimation
2019-09
Forecasting Realized Volatility of Agricultural Commodities
http://d.repec.org/n?u=RePEc:pra:mprapa:96267&r=ore
We forecast the realized and median realized volatility of agricultural commodities using variants of the Heterogeneous AutoRegressive (HAR) model. We obtain tick-by-tick data for five widely traded agricultural commodities (Corn, Rough Rice, Soybeans, Sugar, and Wheat) from the CME/ICE. Real out-of-sample forecasts are produced for 1 up to 66 days ahead. Our in-sample analysis shows that the variants of the HAR model which decompose volatility measures into their continuous path and jump components and incorporate leverage effects offer better fitting in the predictive regressions. However, we convincingly demonstrate that such HAR extensions do not offer any superior predictive ability in the out-of-sample results, since none of these extensions produce significantly better forecasts compared to the simple HAR model. Our results remain robust even when we evaluate them in a Value-at-Risk framework. Thus, there is no benefit from adding more complexity, related to volatility decomposition or transformations of volatility, to the forecasting models.
Degiannakis, Stavros
Filis, George
Klein, Tony
Walther, Thomas
Agricultural Commodities, Realized Volatility, Median Realized Volatility, Heterogeneous Autoregressive model, Forecast.
2019
Quantifying Life Insurance Risk using Least-Squares Monte Carlo
http://d.repec.org/n?u=RePEc:arx:papers:1910.03951&r=ore
This article presents a stochastic framework to quantify the biometric risk of an insurance portfolio in solvency regimes such as Solvency II or the Swiss Solvency Test (SST). The main difficulty in this context lies in the proper representation of long-term risks in the profit-loss distribution over a one-year horizon. We resolve this by using least-squares Monte Carlo methods to quantify the impact of new experience on the annual re-valuation of the portfolio. Our stochastic model can therefore be seen as an example of an internal model, as allowed under Solvency II or the SST. Since our model does not rely on nested simulations, it is computationally fast and easy to implement.
Claus Baumgart
Johannes Krebs
Robert Lempertseder
Oliver Pfaffel
2019-10
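The least-squares Monte Carlo idea -- replacing a nested inner simulation by a cross-sectional regression of noisy one-sample payoffs on the outer state -- can be sketched on a toy example (the quadratic payoff below is my illustration, not the article's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Outer scenarios: the "state" at the one-year horizon.
x = rng.standard_normal(n)

# One noisy inner simulation per outer scenario instead of a nested set.
eps = rng.standard_normal(n)
payoff = (x + eps) ** 2            # toy payoff; E[payoff | x] = x**2 + 1

# Least-squares regression on a polynomial basis approximates the
# conditional expectation E[payoff | x] across all scenarios at once.
coef = np.polyfit(x, payoff, deg=2)
fitted = np.polyval(coef, x)
```

`np.polyval(coef, x0)` then approximates the conditional expectation at any scenario `x0` without re-simulating an inner set, which is what makes the approach fast relative to nested simulation.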
Voter Heterogeneity and Political Corruption
http://d.repec.org/n?u=RePEc:bge:wpaper:1121&r=ore
We show that policies that eliminate corruption can depart from socially desirable policies, and this inefficiency can be large enough to allow corruption to live on. We study political competition between an honest politician (a welfare maximiser) and a corrupt one. In our model the corrupt politician is at a distinct disadvantage: there is no asymmetric information, no voter bias, and voters are fully rational. Yet corruption cannot be eliminated when voters have heterogeneous preferences. Moreover, the corrupt politician can win the majority, as the honest politician must trade off the cost of eliminating corruption against its benefits.
Enriqueta Aragonès
Javier Rivas
Áron Tóth
political corruption, political competition, voting
2019-10
Structures of rational behavior in economics
http://d.repec.org/n?u=RePEc:zbw:kitwps:136&r=ore
To describe individual behavior, we are concerned with an axiom system that can be interpreted via Shephard's distance function. Based on this function, one can recover the individual's preference relation, from which the individual's demand function can be derived. We show that the axiom system describes rational behavior satisfying the Strong Axiom of Revealed Preference. The axiom system presented in this article is closely related to a former one describing consumer behavior by income compensation functions. These different approaches help to illuminate choice behavior from different points of view. We also show that the axiom system presented in this article can be interpreted via the economic quantity index in welfare theory and via distance functions in producer theory.
Fuchs-Seliger, Susanne
Economic Models, Demand Functions, Distance Functions, Rationality, Producer Theory, Welfare Theory
2019
When the U.S. catches a cold, Canada sneezes: a lower-bound tale told by deep learning
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14025&r=ore
The Canadian economy was not initially hit by the 2007-2009 Great Recession but ended up having a prolonged episode of the effective lower bound (ELB) on nominal interest rates. To investigate the Canadian ELB experience, we build a "baby" ToTEM model -- a scaled-down version of the Terms of Trade Economic Model (ToTEM) of the Bank of Canada. Our model includes 49 nonlinear equations and 21 state variables. To solve such a high-dimensional model, we develop a projection deep learning algorithm -- a combination of unsupervised and supervised (deep) machine learning techniques. Our findings are as follows: The Canadian ELB episode was contaminated from abroad via large foreign demand shocks. Prolonged ELB episodes are easy to generate in open-economy models, unlike in closed-economy models. Nonlinearities associated with the ELB constraint have virtually no impact on the Canadian economy, but other nonlinearities do, in particular the degree of uncertainty and the specific closing condition used to induce the model's stationarity.
Lepetyuk, Vadym
Maliar, Lilia
Maliar, Serguei
central banking; clustering analysis; large-scale model; deep learning; Machine Learning; neural networks; New Keynesian Model; supervised learning; ToTEM; ZLB
2019-09
Predictive, finite-sample model choice for time series under stationarity and non-stationarity
http://d.repec.org/n?u=RePEc:ehl:lserod:101748&r=ore
In statistical research there usually exists a choice between structurally simpler or more complex models. We argue that, even if a more complex, locally stationary time series model were true, then a simple, stationary time series model may be advantageous to work with under parameter uncertainty. We present a new model choice methodology, where one of two competing approaches is chosen based on its empirical, finite-sample performance with respect to prediction, in a manner that ensures interpretability. A rigorous, theoretical analysis of the procedure is provided. As an important side result we prove, for possibly diverging model order, that the localised Yule-Walker estimator is strongly, uniformly consistent under local stationarity. An R package, forecastSNSTS, is provided and used to apply the methodology to financial and meteorological data in empirical examples. We further provide an extensive simulation study and discuss when it is preferable to base forecasts on the more volatile time-varying estimates and when it is advantageous to forecast as if the data were from a stationary process, even though they might not be.
Kley, Tobias
Preuss, Philip
Fryzlewicz, Piotr
forecasting; Yule-Walker estimate; local stationarity; covariance stationarity; EP/L014246/1
2019-10-01
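The (non-localised) Yule-Walker estimator referenced above solves the sample autocovariance equations for the AR coefficients; the paper's R package forecastSNSTS implements the localised version. A plain-vanilla Python sketch for illustration:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients from sample autocovariances."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Sample autocovariances gamma(0), ..., gamma(order).
    gamma = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    # Toeplitz system R * phi = gamma(1..order).
    R = np.array([[gamma[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, gamma[1:order + 1])
```

The localised variant in the paper applies the same equations over a rolling window around each time point, so the estimated coefficients may vary with time under local stationarity.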
Interruption of Secondary and Postsecondary Studies in Canada: A Dynamic Analysis
http://d.repec.org/n?u=RePEc:cir:cirpro:2019rp-11&r=ore
This study presents an analysis of individuals' dynamic trajectories between schooling and dropout episodes at the secondary and postsecondary levels. We assess the impact of individual characteristics, the unemployment rate, and several public policy variables on the hazard rate of dropping out and of returning to school. Our research draws on data from Statistics Canada's Youth in Transition Survey (YITS). We conduct separate analyses for young men and young women. Our study relies on a multi-state, multi-episode proportional hazards model with heterogeneity. For men, our estimates show, among other things, that reaching the legal school-leaving age increases the secondary-school dropout rate by about 60%. A 10% increase in the ratio of the minimum wage to the average wage reduces the rate of return to secondary studies after a dropout episode by 12%. Moreover, a 10% increase in tuition fees raises the postsecondary interruption rate by 1.5% and reduces the hazard of undertaking postsecondary studies among men by more than 3%. For women, the same increase in tuition fees raises the risk of postsecondary interruption by 1.7%.
Bernard Fortin
Marcelin Joanis
Safa Ragued
school dropout, interruption of studies, tuition fees, minimum wage, legal school-leaving age, transition model
2019-09-05
Learning under Diverse World Views: Model-Based Inference
http://d.repec.org/n?u=RePEc:pen:papers:19-018&r=ore
People reason with incomplete models. How do people hampered by different, incomplete views learn from each other? We introduce a model of "model-based inference." Model-based reasoners partition an otherwise hopelessly complex state space into a manageable model. Unless the differences in agents' models are trivial, interactions will often not lead agents to common beliefs, and the correct-model belief will typically lie outside the convex hull of the agents' beliefs. However, if the agents' models have enough in common, then interacting will lead agents to similar beliefs, even if their models also exhibit bizarre idiosyncrasies and their information is widely dispersed.
George J. Mailath
Larry Samuelson
Information aggregation, model-based reasoning
2019-09-30
Regularised forecasting via smooth-rough partitioning of the regression coefficients
http://d.repec.org/n?u=RePEc:ehl:lserod:100878&r=ore
Maeng, Hye Young
Fryzlewicz, Piotr
change-point detection; prediction; penalised spline; functional linear regression; EP/L014246/1
2019-06-22
Averaging estimation for instrumental variables quantile regression
http://d.repec.org/n?u=RePEc:umc:wpaper:1907&r=ore
This paper proposes averaging estimation methods to improve the finite-sample efficiency of the instrumental variables quantile regression (IVQR) estimation. First, I apply Cheng, Liao, and Shi's (2019) averaging GMM framework to the IVQR model. I propose using the usual quantile regression moments for averaging to take advantage of cases when endogeneity is not too strong. I also propose using two-stage least squares slope moments to take advantage of cases when heterogeneity is not too strong. The empirical optimal weight formula of Cheng et al. (2019) helps optimize the bias-variance tradeoff, ensuring uniformly better (asymptotic) risk of the averaging estimator over the standard IVQR estimator under certain conditions. My implementation involves many computational considerations and builds on recent developments in the quantile literature. Second, I propose a bootstrap method that directly averages among IVQR, quantile regression, and two-stage least squares estimators. More specifically, I find the optimal weights in the bootstrap world and then apply the bootstrap-optimal weights to the original sample. The bootstrap method is simpler to compute and generally performs better in simulations, but it lacks the formal uniform dominance results of Cheng et al. (2019). Simulation results demonstrate that in the multiple-regressors/instruments case, both the GMM averaging and bootstrap estimators have uniformly smaller risk than the IVQR estimator across data-generating processes (DGPs) with all kinds of combinations of different endogeneity levels and heterogeneity levels. In DGPs with a single endogenous regressor and instrument, where averaging estimation is known to have the least opportunity for improvement, the proposed averaging estimators outperform the IVQR estimator in some cases but not others.
Xin Liu
model selection, model averaging
2019-10
Post-Keynesian Controversy About Uncertainty: Methodological Perspective, Part II
http://d.repec.org/n?u=RePEc:sek:iefpro:9512182&r=ore
In this paper, the author follows a discussion between two post-Keynesian economists, Paul Davidson and Rod O'Donnell, about the nature of uncertainty in economics. The author focuses on two points of this discussion: a controversy about the possibility or impossibility of such a proof, and a criticism of Davidson's allegedly split definition of ergodicity. In the possibility/impossibility controversy, the author criticises O'Donnell for reducing proof to the provision of empirical evidence and, in effect, for omitting extra-empirical cognition. The author accepts O'Donnell's argument concerning Davidson's split definition and draws his own conclusion: the reason why Davidson keeps ignoring the incompatibility of the two definitions of ergodicity is that he does not distinguish cumulative from theoretical probability. The author contends that Davidson's claim about the predetermination of long-run outcomes in ergodic processes draws its persuasiveness from the ambiguity of the concept 'long run': according to the author, Davidson understands 'long run' to mean 'finitely long', while O'Donnell understands it to mean 'in the limit'.
Lukáš Augustin Máslo
ergodicity, uncertainty, probability
2019-10
Productivity estimates for South Africa from CES production functions
http://d.repec.org/n?u=RePEc:rza:wpaper:789&r=ore
This paper provides estimates of the elasticity of substitution and total factor productivity (TFP) for South Africa. Estimates are based on constant elasticity of substitution (CES) production functions. Estimates of potential output and the output gap implied by different CES model specifications are also compared to those from other models.
Daan Steenkamp
constant elasticity of substitution, production functions, productivity, Output gap
2019-07
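For reference, the CES production function underlying such estimates is Y = A(aK^p + (1-a)L^p)^(1/p) with p = (sigma-1)/sigma, nesting Cobb-Douglas as sigma approaches 1; TFP (A) is then the residual given output and inputs. A small sketch (parameter names are generic, not the paper's):

```python
import numpy as np

def ces_output(A, K, L, alpha, sigma):
    """CES production; sigma > 0 is the elasticity of substitution."""
    if abs(sigma - 1.0) < 1e-12:
        # Cobb-Douglas limit as sigma -> 1.
        return A * K**alpha * L**(1.0 - alpha)
    rho = (sigma - 1.0) / sigma
    return A * (alpha * K**rho + (1.0 - alpha) * L**rho) ** (1.0 / rho)

def tfp_residual(Y, K, L, alpha, sigma):
    """Back out TFP as the residual given observed output and inputs."""
    return Y / ces_output(1.0, K, L, alpha, sigma)
```

Because output is linearly homogeneous in A, a measured output series and input series pin down the implied TFP path once alpha and sigma are estimated.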
Identification and Estimation of SVARMA models with Independent and Non-Gaussian Inputs
http://d.repec.org/n?u=RePEc:arx:papers:1910.04087&r=ore
This paper analyzes identifiability properties of structural vector autoregressive moving average (SVARMA) models driven by independent and non-Gaussian shocks. It is well known that SVARMA models driven by Gaussian errors are not identified without imposing further identifying restrictions on the parameters. Even in reduced form and assuming stability and invertibility, vector autoregressive moving average models are in general not identified without requiring certain parameter matrices to be non-singular. Independence and non-Gaussianity of the shocks are used to show that the shocks are identified up to permutations and scalings. In this way, typically imposed identifying restrictions are made testable. Furthermore, we introduce a maximum-likelihood estimator of the non-Gaussian SVARMA model which is consistent and asymptotically normally distributed.
Bernd Funovits
2019-10
Direct and Indirect Effects based on Changes-in-Changes
http://d.repec.org/n?u=RePEc:fri:fribow:fribow00508&r=ore
We propose a novel approach for causal mediation analysis based on changes-in-changes assumptions restricting unobserved heterogeneity over time. This allows disentangling the causal effect of a binary treatment on a continuous outcome into an indirect effect operating through a binary intermediate variable (called mediator) and a direct effect running via other causal mechanisms. We identify average and quantile direct and indirect effects for various subgroups under the condition that the outcome is monotonic in the unobserved heterogeneity and that the distribution of the latter does not change over time conditional on the treatment and the mediator. We also provide a simulation study and an empirical application to the Jobs II programme.
Huber, Martin
Schelker, Mark
Strittmatter, Anthony
2019-09-30
The role of the rand as a shock absorber
http://d.repec.org/n?u=RePEc:rza:wpaper:790&r=ore
This paper investigates the impact of rand shocks on industry output and various other South African macroeconomic variables. We use a factor augmented model, which has the key advantage of providing a rich narrative about the disaggregated impacts of exchange rate shocks. We show that the currency tends to react to changes in the relative fundamentals of the economy, such as those captured by commodity export prices, and that the independent impact on the economy of exchange rate changes that are unrelated to fundamentals is estimated to be small. The results suggest that the exchange rate tends to act as a shock absorber to the shocks that hit the economy: a large proportion of the variation in the rand can be explained by other shocks, while rand shocks themselves explain a relatively small proportion of South Africa's macroeconomic volatility. That said, the role that the exchange rate plays as a shock absorber appears to be weaker in South Africa than for other commodity exporters like Australia and New Zealand.
Luchelle Soobyah
Daan Steenkamp
FAVAR, exchange rate shocks
2019-07
Artificial Intelligence, Data, Ethics: An Holistic Approach for Risks and Regulation
http://d.repec.org/n?u=RePEc:mse:cesdoc:19012&r=ore
An extensive list of risks related to big data frameworks and their use through artificial intelligence models is provided, along with measurements and implementable solutions. Bias, interpretability and ethics are studied in depth, with several interpretations from the points of view of developers, companies and regulators. Our reflections suggest that fragmented frameworks increase the risks of model misspecification, opacity and bias in the results. Domain experts and statisticians need to be involved in the whole process, as the business objective must drive each decision from the data extraction step to the final actionable prediction. We propose a holistic and original approach to take into account the risks encountered throughout the implementation of systems using artificial intelligence, from the choice of the data and the selection of the algorithm to the final decision making.
Alexis Bogroff
Dominique Guégan
Artificial Intelligence; Bias; Big Data; Ethics; Governance; Interpretability; Regulation; Risk
2019-06
A theorem of Kalman and minimal state-space realization of Vector Autoregressive Models
http://d.repec.org/n?u=RePEc:arx:papers:1910.02546&r=ore
We introduce a concept of autoregressive (AR) state-space realization that can be applied to all transfer functions $\boldsymbol{T}(L)$ with $\boldsymbol{T}(0)$ invertible. We show that a theorem of Kalman implies that each Vector Autoregressive model (with exogenous variables) has a minimal AR-state-space realization of the form $\boldsymbol{y}_t = \sum_{i=1}^p\boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}\boldsymbol{x}_{t-i}+\boldsymbol{\epsilon}_t$, where $\boldsymbol{F}$ is a nilpotent Jordan matrix and $\boldsymbol{H}, \boldsymbol{G}$ satisfy certain rank conditions. The case $VARX(1)$ corresponds to reduced-rank regression. As in that case, for a fixed Jordan form $\boldsymbol{F}$, $\boldsymbol{H}$ can be estimated by least squares as a function of $\boldsymbol{G}$. The likelihood function is a determinant ratio generalizing the Rayleigh quotient. It is unchanged if $\boldsymbol{G}$ is replaced by $\boldsymbol{S}\boldsymbol{G}$ for an invertible matrix $\boldsymbol{S}$ commuting with $\boldsymbol{F}$. Using this invariance, the search space for the maximum likelihood estimate can be constrained to equivalence classes of matrices satisfying a number of orthogonality relations, extending the results of reduced-rank analysis. Our results can be considered a multi-lag canonical correlation analysis. The method considered here provides a solution in the general case to the polynomial product regression model of Velu et al. We provide estimation examples. We also explore how the estimates vary with different Jordan matrix configurations and discuss methods to select a configuration. Our approach could provide an important dimension reduction technique with potential applications in time series analysis and linear system identification. In the appendix, we link the reduced configuration space of $\boldsymbol{G}$ with a geometric object called a vector bundle.
Du Nguyen
2019-10
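The nilpotency of $\boldsymbol{F}$ is what makes the realization finite-order: the lag matrices $\boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}$ vanish once $i$ exceeds the nilpotency index. A toy numerical check (the random $\boldsymbol{H}$ and $\boldsymbol{G}$ are my illustration, not matrices from the paper):

```python
import numpy as np

p, m = 3, 2  # nilpotency index (maximum lag) and output dimension

# Nilpotent Jordan block: ones on the superdiagonal, so F**p == 0.
F = np.diag(np.ones(p - 1), k=1)

rng = np.random.default_rng(2)
H = rng.standard_normal((m, p))
G = rng.standard_normal((p, m))

# Lag coefficient matrices of the AR-state-space form
# y_t = sum_i H F**(i-1) G x_{t-i} + e_t.
A = [H @ np.linalg.matrix_power(F, i - 1) @ G for i in range(1, p + 3)]
```

Only the first p lag matrices can be nonzero, so the realization reproduces a finite-order VARX even though it is written with arbitrarily many lags.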