
on Econometrics 
By:  Atsushi Inoue; Lutz Kilian 
Abstract:  Several recent studies have expressed concern that the Haar prior typically imposed in estimating sign-identified VAR models may be unintentionally informative about the implied prior for the structural impulse responses. This question is indeed important, but we show that the tools that have been used in the literature to illustrate this potential problem are invalid. Specifically, we show that it does not make sense from a Bayesian point of view to characterize the impulse response prior based on the distribution of the impulse responses conditional on the maximum likelihood estimator of the reduced-form parameters, since the prior does not, in general, depend on the data. We illustrate that this approach tends to produce highly misleading estimates of the impulse response priors. We formally derive the correct impulse response prior distribution and show that there is no evidence that typical sign-identified VAR models estimated using conventional priors tend to imply unintentionally informative priors for the impulse response vector or that the corresponding posterior is dominated by the prior. Our evidence suggests that concerns about the Haar prior for the rotation matrix have been greatly overstated and that alternative estimation methods are not required in typical applications. Finally, we demonstrate that the alternative Bayesian approach to estimating sign-identified VAR models proposed by Baumeister and Hamilton (2015) suffers from exactly the same conceptual shortcoming as the conventional approach. We illustrate that this alternative approach may imply highly economically implausible impulse response priors. 
Keywords:  Prior; posterior; impulse response; loss function; joint inference; absolute loss; median 
JEL:  C22 C32 C52 E31 Q43 
Date:  2020–12–03 
URL:  http://d.repec.org/n?u=RePEc:fip:feddwp:89121&r=all 
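The Haar prior on the rotation matrix discussed above is, in practice, simulated by drawing orthogonal matrices uniformly from the orthogonal group. A minimal sketch of the standard QR-based draw (a generic illustration, not the paper's own code):

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Draw an n x n orthogonal matrix from the Haar (uniform) measure
    via QR decomposition of a Gaussian matrix."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    # Flip column signs so that diag(R) > 0; without this correction
    # the QR output is not exactly Haar-distributed.
    Q *= np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(0)
Q = haar_orthogonal(3, rng)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: Q is orthogonal
```

In sign-identified VAR applications, each such draw rotates the Cholesky factor of the reduced-form error covariance to generate a candidate set of structural impulse responses.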
By:  Francesco Giancaterini; Alain Hecq 
Abstract:  This paper analyzes the properties of the Maximum Likelihood Estimator for mixed causal and noncausal models when the error term follows a Student's t-distribution. In particular, we compare several existing methods to compute the expected Fisher information matrix and show that they cannot be applied in the heavy-tail framework. For this purpose, we propose a new approach to make inference on causal and noncausal parameters in finite sample sizes. It is based on the empirical variance computed on the generalized Student's t, even when the population variance is not finite. Monte Carlo simulations show the good performance of our new estimator for fat-tailed series. We illustrate how the different approaches lead to different standard errors in four time series: annual debt to GDP for Canada, the variation of daily Covid-19 deaths in Belgium, monthly wheat prices, and the monthly inflation rate in Brazil. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.01888&r=all 
By:  Lee, K.; Linton, O.; Whang, Y.-J. 
Abstract:  We propose nonparametric tests for the null hypothesis of time stochastic dominance. Time stochastic dominance makes a partial order of different prospects over time based on the net present value criteria for general utility and time discount function classes. For example, time stochastic dominance can be used for ranking investment strategies or environmental policies based on the expected net present value of the future benefits. We consider an Lp integrated test statistic and derive its large sample distribution. We suggest a pathwise bootstrap procedure that allows for time dependence in a panel data structure. In addition to the bootstrap method based on the least favorable case, we describe two approaches, the contact-set approach and the numerical delta method, for the purpose of enhancing the power of the test. We prove the asymptotic validity of our testing procedures. We investigate the finite sample performance of the tests in simulation studies. As an illustration, we apply the proposed tests to evaluate the welfare improvement of Thailand’s Million Baht Village Fund Program. 
Keywords:  Bootstrap, Discounting, Stochastic Dominance, Testing 
JEL:  C10 C12 C14 
Date:  2020–12–10 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:20121&r=all 
By:  Vincent W. C. Tan; Stefan Zohren 
Abstract:  We introduce a novel covariance estimator that exploits the heteroscedastic nature of financial time series by employing exponentially weighted moving averages and shrinking the in-sample eigenvalues through cross-validation. Our estimator is model-agnostic in that we make no assumptions on the distribution of the random entries of the matrix or the structure of the covariance matrix. Additionally, we show how Random Matrix Theory can provide guidance for automatic tuning of the hyperparameter which characterizes the time scale for the dynamics of the estimator. By attenuating the noise from both the cross-sectional and time-series dimensions, we empirically demonstrate the superiority of our estimator over competing estimators based on exponentially weighted and uniformly weighted covariance matrices. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.05757&r=all 
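The exponentially weighted ingredient of the estimator described above can be sketched as follows. This is a generic EWMA covariance only; the paper's eigenvalue shrinkage and Random Matrix Theory tuning are omitted, and the `halflife` parameter is an illustrative choice:

```python
import numpy as np

def ewma_covariance(returns, halflife=60.0):
    """Exponentially weighted moving-average covariance of (T, N) returns.

    Recent observations receive larger weights; `halflife` is the number
    of periods over which a weight decays by half.
    """
    T, _ = returns.shape
    decay = 0.5 ** (1.0 / halflife)
    w = decay ** np.arange(T - 1, -1, -1)   # oldest obs -> smallest weight
    w /= w.sum()
    demeaned = returns - returns.mean(axis=0)
    # Weighted outer-product average: symmetric and positive semi-definite
    return (demeaned * w[:, None]).T @ demeaned

rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 5)) * 0.01   # simulated daily returns
cov = ewma_covariance(returns)
```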
By:  Kevin Li 
Abstract:  Regression trees and random forests are popular and effective nonparametric estimators in practical applications. A recent paper by Athey and Wager shows that the random forest estimate at any point is asymptotically Gaussian; in this paper, we extend this result to the multivariate case and show that the vector of estimates at multiple points is jointly normal. Specifically, the covariance matrix of the limiting normal distribution is diagonal, so that the estimates at any two points are independent in sufficiently deep trees. Moreover, the off-diagonal term is bounded by quantities capturing how likely two points belong to the same partition of the resulting tree. Our results rely on a certain stability property when constructing splits, and we give examples of splitting rules for which this assumption is and is not satisfied. We test our proposed covariance bound and the associated coverage rates of confidence intervals in numerical simulations. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.03486&r=all 
By:  Jiti Gao; Fei Liu; Bin Peng 
Abstract:  In this paper, we investigate binary response models for heterogeneous panel data with interactive fixed effects by allowing both the cross-sectional dimension and the temporal dimension to diverge. From a practical point of view, the proposed framework can be applied to predict the probability of corporate failure, conduct credit rating analysis, etc. Theoretically and methodologically, we establish a link between a maximum likelihood estimation and a least squares approach, provide a simple information criterion to detect the number of factors, and derive the asymptotic distributions accordingly. In addition, we conduct intensive simulations to examine the theoretical findings. In the empirical study, we focus on the sign prediction of stock returns, and then use the results of the sign forecast to conduct portfolio analysis. By implementing rolling-window based out-of-sample forecasts, we show the finite-sample performance and demonstrate the practical relevance of the proposed model and estimation method. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.03182&r=all 
By:  Marco Barnabani (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze) 
Abstract:  In linear mixed models the selection of fixed and random effects using a hypothesis testing approach raises several problems. In this paper, we consider the so-called boundary problem and the confounding impact of effects from one set of coefficients on the other set. These problems are addressed by defining two test statistics based on ordinary least squares, obtained by dividing two quadratic forms, one that contains the effect and another that does not. As a result, the test statistics are sufficiently general, easy to compute, and have known finite sample properties. The test on randomness has a known exact distribution under the null and alternative hypotheses; the test on fixed effects is approximated by a noncentral F distribution. Because of its importance in the variable selection approach, the goodness of approximation is examined in depth in final simulations. 
Keywords:  Selection procedure; Hypothesis testing; Linear Mixed Models; Generalized F distribution; 
JEL:  C12 C63 C52 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:fir:econom:wp2020_09&r=all 
By:  Federico Martellosio 
Abstract:  We study identification in autoregressions defined on a general network. Most identification conditions that are available for these models either rely on repeated observations, are only sufficient, or require strong distributional assumptions. We derive conditions that apply even if only one observation of a network is available, are necessary and sufficient for identification, and require weak distributional assumptions. We find that the models are generically identified even without repeated observations, and analyze the combinations of the interaction matrix and the regressor matrix for which identification fails. This is done both in the original model and after certain transformations in the sample space, the latter case being important for some fixed effects specifications. 
Date:  2020–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2011.11084&r=all 
By:  Ilya Archakov; Peter Reinhard Hansen 
Abstract:  We obtain a canonical representation for block matrices. The representation facilitates simple computation of the determinant, the matrix inverse, and other powers of a block matrix, as well as the matrix logarithm and the matrix exponential. These results are particularly useful for block covariance and block correlation matrices, where evaluation of the Gaussian log-likelihood and estimation are greatly simplified. We illustrate this with an empirical application using a large panel of daily asset returns. Moreover, the representation paves new ways to regularize large covariance and correlation matrices and to test block structures in matrices. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.02698&r=all 
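As a numerical illustration of why block structure helps with determinants and inverses, the classical Schur-complement identity already reduces the determinant of a partitioned matrix to determinants of its blocks. This is a textbook identity, not the canonical representation derived in the paper, which goes further:

```python
import numpy as np

# For M = [[A, B], [C, D]] with A invertible:
#   det(M) = det(A) * det(D - C @ inv(A) @ B)   (Schur complement identity)
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # shift keeps A invertible
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2)) + 3 * np.eye(2)
M = np.block([[A, B], [C, D]])

schur = D - C @ np.linalg.inv(A) @ B
lhs = np.linalg.det(M)
rhs = np.linalg.det(A) * np.linalg.det(schur)
print(np.isclose(lhs, rhs))  # True
```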
By:  Philip Marx 
Abstract:  A fundamental question underlying the literature on partial identification is: what can we learn about parameters that are relevant for policy but not necessarily point-identified by the exogenous variation we observe? This paper provides an answer in terms of sharp, closed-form characterizations and bounds for the latent index selection model, which defines a large class of policy-relevant treatment effects via its marginal treatment effect (MTE) function [Heckman and Vytlacil (1999, 2005), Vytlacil (2002)]. The sharp bounds use the full content of identified marginal distributions, and closed-form expressions rely on the theory of stochastic orders. The proposed methods also make it possible to sharply incorporate new auxiliary assumptions on distributions into the latent index selection framework. Empirically, I apply the methods to study the effects of Medicaid on emergency room utilization in the Oregon Health Insurance Experiment, showing that the predictions from extrapolations based on a distribution assumption (rank similarity) differ substantively and consistently from existing extrapolations based on a parametric mean assumption (linearity). This underscores the value of utilizing the model's full empirical content. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.02390&r=all 
By:  Hess Chung; Cristina FuentesAlbero; Matthias Paustian; Damjan Pfajfar 
Abstract:  This paper advocates chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector to better understand model inference about latent variables. Such a double decomposition allows us to gauge the influence of data on latent variables, like the data decomposition. However, by taking into account the transmission mechanisms of each type of shock, we can highlight the economic structure underlying the relationship between the data and the latent variables. We demonstrate the usefulness of this approach by detailing the role of observable variables in estimating the output gap in two models. 
Keywords:  Kalman smoother; Latent variables; Shock decomposition; Data decomposition; Double decomposition 
JEL:  C18 C32 C52 
Date:  2020–12–04 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:2020100&r=all 
By:  Dennis Umlandt 
Abstract:  This paper proposes a new parametric approach to estimate linear factor pricing models with time-varying risk premia. In contrast to recent contributions to the literature, the framework presented abstains from introducing instrument variables to describe the time variation of risk prices. Instead, time-varying risk prices and exposures follow a recursive updating scheme constructed to reduce the one-step-ahead prediction error from a cross-sectional factor model at the current observation. This agnostic approach is particularly useful in situations where instrument variables are unavailable or of poor quality. Estimation and inference are done by likelihood maximization. A Monte Carlo study compares the ability of the method to predict risk prices and returns to that of a regression-based method that uses noisy signals from true risk price predictors. In a realistic setting, the two approaches keep pace when the signal contains 80 percent correct information. An application to a macro-finance model of currency carry trades illustrates the novel approach. 
Keywords:  Dynamic Asset Pricing, Generalized Autoregressive Score Models, Time-varying Risk Premia, Return Predictability 
JEL:  G12 G17 C58 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:trr:qfrawp:202006&r=all 
By:  Hugo Bodory; Martin Huber; Luk\'a\v{s} Laff\'ers 
Abstract:  We consider evaluating the causal effects of dynamic treatments, i.e. of multiple treatment sequences in various periods, based on double machine learning to control for observed, time-varying covariates in a data-driven way under a selection-on-observables assumption. To this end, we make use of so-called Neyman-orthogonal score functions, which imply the robustness of treatment effect estimation to moderate (local) misspecifications of the dynamic outcome and treatment models. This robustness property permits approximating outcome and treatment models by double machine learning even under high-dimensional covariates and is combined with data splitting to prevent overfitting. In addition to effect estimation for the total population, we consider weighted estimation that permits assessing dynamic treatment effects in specific subgroups, e.g. among those treated in the first treatment period. We demonstrate that the estimators are asymptotically normal and $\sqrt{n}$-consistent under specific regularity conditions and investigate their finite sample properties in a simulation study. Finally, we apply the methods to the Job Corps study in order to assess different sequences of training programs under a large set of covariates. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.00370&r=all 
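The double machine learning logic underlying the estimator can be illustrated in a static, one-period version of the partially linear model, with OLS standing in for the machine learners and two-fold cross-fitting. This is a stylized sketch on simulated data, not the authors' dynamic procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, theta = 4000, 5, 0.5

# Simulate confounded data: covariates X drive both treatment D and outcome Y
X = rng.standard_normal((n, p))
D = X @ np.full(p, 0.3) + rng.standard_normal(n)
Y = theta * D + X @ np.full(p, 0.5) + rng.standard_normal(n)

def ols_fit_predict(X_tr, y_tr, X_te):
    """Fit OLS on the training fold, predict on the held-out fold."""
    coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ coef

# Two-fold cross-fitting: residualize Y and D on the opposite fold, then
# estimate theta from the Neyman-orthogonal residual-on-residual score.
folds = np.array_split(rng.permutation(n), 2)
num = den = 0.0
for k in range(2):
    te, tr = folds[k], folds[1 - k]
    v = D[te] - ols_fit_predict(X[tr], D[tr], X[te])   # treatment residual
    u = Y[te] - ols_fit_predict(X[tr], Y[tr], X[te])   # outcome residual
    num += v @ u
    den += v @ v
theta_hat = num / den   # close to the true theta = 0.5
```

The orthogonal score makes `theta_hat` insensitive to small errors in the two nuisance predictions, which is what lets flexible learners replace OLS in high dimensions.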
By:  Daniel Felix Ahelegbey (University of Pavia); Monica Billio (University of Venice); Roberto Casarin (University of Venice) 
Abstract:  Turning points in financial markets are often characterized by changes in the direction and/or magnitude of market movements, with short- to long-term impacts on investors' decisions. This paper develops a Bayesian technique for turning point detection in financial equity markets. We derive the interconnectedness among stock market returns from a piecewise network vector autoregressive model. The empirical application examines turning points in global equity markets over the past two decades. We also compare the Covid-19 induced interconnectedness with that of the global financial crisis in 2008 to identify similarities and the most central market for spillover propagation. 
Keywords:  Bayesian inference, Dynamic Programming, Turning points, Networks, VAR. 
JEL:  C11 C15 C51 C52 C55 C58 G01 
Date:  2020–11 
URL:  http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0195&r=all 
By:  Ilya Archakov; Peter Reinhard Hansen; Asger Lunde 
Abstract:  We propose a novel class of multivariate GARCH models that utilize realized measures of volatilities and correlations. The central component is an unconstrained vector parametrization of the correlation matrix that facilitates modeling of the correlation structure. The parametrization is based on the matrix logarithmic transformation that retains positive definiteness as an innate property. A factor approach offers a way to impose a parsimonious structure in high-dimensional systems, and we show that a factor framework arises naturally in some existing models. We apply the model to returns of nine assets and employ the factor structure that emerges from a block correlation specification. An auxiliary empirical finding is that the empirical distribution of parametrized realized correlations is approximately Gaussian. This observation is analogous to the well-known result for logarithmically transformed realized variances. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.02708&r=all 
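The matrix-logarithmic transformation at the core of the parametrization can be sketched directly: the off-diagonal elements of log(C) form an unconstrained real vector. This shows the forward map only, under the assumption that the paper's vector parametrization collects exactly these elements; the full framework also handles the inverse map and the dynamics:

```python
import numpy as np

# A valid 3x3 correlation matrix (symmetric, unit diagonal, positive definite)
C = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.2],
              [0.3, 0.2, 1.0]])

# Matrix logarithm via the eigendecomposition of a symmetric PD matrix
w, V = np.linalg.eigh(C)
logC = V @ np.diag(np.log(w)) @ V.T

# Off-diagonal upper-triangular elements: an unconstrained real vector
gamma = logC[np.triu_indices_from(logC, k=1)]

# The matrix exponential of logC recovers C exactly
wl, Vl = np.linalg.eigh(logC)
C_back = Vl @ np.diag(np.exp(wl)) @ Vl.T
```

Because `gamma` is unconstrained, dynamics can be specified on it freely while the implied matrix stays a valid correlation matrix, which is the property the abstract highlights.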
By:  Paul Labonne 
Abstract:  Nowcasting methods rely on timely series related to economic growth for producing and updating estimates of GDP growth before publication of official figures. But the statistical uncertainty attached to these forecasts, which is critical to their interpretation, improves only marginally when new data on related series become available. That is particularly problematic in times of high economic uncertainty. As a solution, this paper proposes to model common factors in scale and shape parameters alongside the mixed-frequency dynamic factor model typically used for location parameters in nowcasting frameworks. Scale and shape parameters control the time-varying dispersion and asymmetry around point forecasts which are necessary to capture the increase in variance and negative skewness found in times of recession. It is shown how cross-sectional dependencies in scale and shape parameters may be modelled in mixed-frequency settings, with a particularly convenient approximation for scale parameters in Gaussian models. The benefit of this methodology is explored using vintages of U.S. economic growth data with a focus on the economic depression resulting from the coronavirus pandemic. The results show that modelling common factors in scale and shape parameters improves nowcasting performance towards the end of the nowcasting window in recessionary episodes. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.02601&r=all 
By:  Laurinaityte, Nora; Meinerding, Christoph; Schlag, Christian; Thimme, Julian 
Abstract:  Cross-sectional asset pricing tests with GMM can generate spuriously high explanatory power for factor models when the moment conditions are specified such that they allow the estimated factor means to substantially deviate from the observed sample averages. In fact, by shifting the weights on the moment conditions, any level of cross-sectional fit can be attained. This property is a feature of the GMM estimation design and applies to strong as well as weak factors, and to all sample sizes and test assets. We reveal the origins of this bias theoretically, gauge its size using simulations, and document its relevance empirically. 
Keywords:  asset pricing, cross-section of expected returns, GMM, factor zoo 
JEL:  G00 G12 C21 C13 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdps:622020&r=all 
By:  Sean ELLIOTT (University of Toronto.); Christian GOURIEROUX (University of Toronto, Toulouse School of Economics, and CREST.) 
Abstract:  The aim of this paper is to understand the extreme variability in the estimated reproduction ratio R0 observed in practice. For expository purposes we consider a discrete time stochastic version of the Susceptible-Infected-Recovered (SIR) model, and introduce different approximate maximum likelihood (AML) estimators of R0. We carefully discuss the properties of these estimators and illustrate by a Monte Carlo study the width of confidence intervals on R0. 
Keywords:  SIR Model, Reproduction Ratio, COVID-19, Approximate Maximum Likelihood, EpiEstim, Final Size. 
Date:  2020–12–09 
URL:  http://d.repec.org/n?u=RePEc:crs:wpaper:202031&r=all 
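A discrete-time stochastic SIR model of the kind studied above can be simulated with binomial infection and recovery draws, and a naive moment-style estimate of R0 = beta/gamma computed from the path. This is a generic textbook discretization and a deliberately crude estimator, not the paper's specification or its AML estimators:

```python
import numpy as np

def simulate_sir(beta, gamma, n, i0, steps, rng):
    """Discrete-time stochastic SIR with binomial infection/recovery draws."""
    S, I, R = n - i0, i0, 0
    path = [(S, I, R)]
    for _ in range(steps):
        new_i = rng.binomial(S, 1.0 - np.exp(-beta * I / n))  # new infections
        new_r = rng.binomial(I, gamma)                        # new recoveries
        S, I, R = S - new_i, I + new_i - new_r, R + new_r
        path.append((S, I, R))
    return np.array(path)

rng = np.random.default_rng(42)
n = 100_000
path = simulate_sir(beta=0.4, gamma=0.2, n=n, i0=100, steps=30, rng=rng)

# Crude per-step moment estimates of beta and gamma, averaged over the path;
# the true R0 here is beta / gamma = 2.
S, I = path[:-1, 0], path[:-1, 1]
new_i = path[:-1, 0] - path[1:, 0]
new_r = path[1:, 2] - path[:-1, 2]
beta_hat = np.mean(-np.log(1.0 - new_i / S) * n / I)
gamma_hat = np.mean(new_r / I)
r0_hat = beta_hat / gamma_hat
```

Even in this clean simulated setting, `r0_hat` fluctuates noticeably with the seed when the infected count is small, which gives some intuition for the estimation variability the paper investigates.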
By:  Jose Apesteguia; Miguel Ángel Ballester 
Abstract:  We propose a novel measure of goodness of fit for stochastic choice models: the maximal fraction of data that can be reconciled with the model. The procedure is to separate the data into two parts: one generated by the best specification of the model and another representing residual behavior. We claim that the three elements involved in such a separation are instrumental to understanding the data. We show how to apply our approach to any stochastic choice model and then study the case of four well-known models, each capturing a different notion of randomness. We illustrate our results with an experimental dataset. 
Keywords:  Goodness of fit; Stochastic Choice; Residual Behavior 
JEL:  C91 D81 G12 G20 G41 
Date:  2020–02 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:1757&r=all 