
on Econometrics 
By:  Woosik Gong; Myung Hwan Seo 
Abstract:  This paper develops robust bootstrap inference for a dynamic panel threshold model to improve the finite sample coverage and to be applicable irrespective of the regression's continuity. When the true model is continuous and kinked but this restriction is not imposed in the estimation, we find that the usual rank condition for GMM identification fails, since the Jacobian of the moment function loses the full-column rank property. Instead, we establish identification via a higher-order expansion and derive a slower $n^{1/4}$ convergence rate for the GMM threshold estimator. Furthermore, we show that this degeneracy destroys asymptotic normality for both the coefficient and threshold estimators and invalidates the standard nonparametric bootstrap. We propose two alternative bootstrap schemes that are robust to the continuity and improve the finite sample coverage of the unknown threshold. One is a grid bootstrap that imposes null values of the threshold location. The other is a robust bootstrap whose resampling scheme is adjusted by a data-driven criterion. We show that both bootstraps are consistent. The finite sample performance of the proposed methods is examined through Monte Carlo experiments, and an empirical application is presented. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.04027&r=ecm 
By:  Zequn Jin; Lihua Lin; Zhengyu Zhang 
Abstract:  This paper proposes a new class of heterogeneous causal quantities, named \textit{outcome conditioned} average structural derivatives (OASD), in a general nonseparable model. OASD is the average partial effect of a marginal change in a continuous treatment on the individuals located at different parts of the outcome distribution, irrespective of individuals' characteristics. OASD combines features of both the ATE and the QTE: it is interpreted as straightforwardly as the ATE while being more granular, since it breaks the entire population up according to the rank of the outcome distribution. One contribution of this paper is to establish close relationships between the \textit{outcome conditioned average partial effects} and a class of parameters measuring the effect of counterfactually changing the distribution of a single covariate on the unconditional outcome quantiles. By exploiting this relationship, we obtain a root-$n$ consistent estimator and calculate the semiparametric efficiency bound for these counterfactual effect parameters. We illustrate this point with two examples: equivalence between OASD and the unconditional partial quantile effect (Firpo et al. (2009)), and equivalence between the marginal partial distribution policy effect (Rothe (2012)) and a corresponding outcome conditioned parameter. Because identification of OASD is attained under a conditional exogeneity assumption, by controlling for rich information about covariates, a researcher may ideally use high-dimensional controls in data. We propose for OASD a novel automatic debiased machine learning estimator and present asymptotic statistical guarantees for it. We prove that our estimator is root-$n$ consistent, asymptotically normal, and semiparametrically efficient. We also prove the validity of the bootstrap procedure for uniform inference on the OASD process. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.07903&r=ecm 
By:  Christian Gourieroux; Joann Jasiak 
Abstract:  This paper considers nonlinear dynamic models where the main parameter of interest is a nonnegative matrix characterizing the network (contagion) effects. This network matrix is usually constrained either by assuming a limited number of nonzero elements (sparsity), or by considering a reduced rank approach for nonnegative matrix factorization (NMF). We follow the latter approach and develop a new probabilistic NMF method. We introduce a new Identifying Maximum Likelihood (IML) method for consistent estimation of the identified set of admissible NMFs and derive its asymptotic distribution. Moreover, we propose a maximum likelihood estimator of the parameter matrix for a given nonnegative rank, and derive its asymptotic distribution and the associated efficiency bound. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.11876&r=ecm 
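To fix ideas on the reduced-rank NMF approach described in the abstract above, here is a minimal sketch of a plain nonnegative matrix factorization via the classical Lee-Seung multiplicative updates. This is a generic Frobenius-norm NMF for illustration only, not the paper's probabilistic IML method; all names and settings are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~= W @ H,
    with W and H elementwise non-negative (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factor an exactly rank-2 non-negative matrix
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Note that, as the abstract emphasizes, such factorizations are only set-identified: W and H can be rescaled or rotated within the nonnegative orthant without changing the fit, which is what motivates estimating the identified set.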
By:  Greta Goracci; Davide Ferrari; Simone Giannerini; Francesco Ravazzolo 
Abstract:  Threshold autoregressive moving-average (TARMA) models are popular in time series analysis due to their ability to parsimoniously describe several complex dynamical features. However, neither theory nor estimation methods are currently available when the data present heavy tails or anomalous observations, which is often the case in applications. In this paper, we provide the first theoretical framework for robust M-estimation of TARMA models and also study its practical relevance. Under mild conditions, we show that the robust estimator of the threshold parameter is super-consistent, while the estimators of the autoregressive and moving-average parameters are strongly consistent and asymptotically normal. A Monte Carlo study shows that the M-estimator is superior, in terms of both bias and variance, to the least squares estimator, which can be heavily affected by outliers. The findings suggest that robust M-estimation should generally be preferred to the least squares method. Finally, we apply our methodology to a set of commodity price time series; the robust TARMA fit has smaller standard errors and leads to superior forecasting accuracy compared to the least squares fit. The results support the hypothesis of a two-regime, asymmetric nonlinearity around zero, characterised by slow expansions and fast contractions. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.08205&r=ecm 
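The robustness-to-outliers argument in the abstract above can be illustrated with the simplest possible M-estimation example: a Huber location estimate computed by iteratively reweighted least squares. This is a generic sketch of why M-estimators resist outliers, not the paper's TARMA-specific estimator; the tuning constant and data are illustrative.

```python
import numpy as np

def huber_location(x, c=1.345, n_iter=50):
    """Robust location M-estimate under the Huber loss, computed by
    iteratively reweighted least squares. Observations with large
    standardized residuals get down-weighted by c/|r| instead of 1."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745  # MAD-based scale
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
        mu = np.sum(w * x) / np.sum(w)
    return mu

# Usage: one gross outlier drags the mean far away but barely
# moves the Huber M-estimate
x = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 100.0])
mu_huber = huber_location(x)
mu_mean = x.mean()
```

The same down-weighting idea, applied to the TARMA residuals, is what delivers the bias and variance gains over least squares reported in the Monte Carlo study.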
By:  Lui, Yiu Lim (Dongbei University of Finance and Economics); Phillips, Peter C.B. (Yale University); Yu, Jun (Singapore Management University) 
Abstract:  A heteroskedasticity-autocorrelation robust (HAR) test statistic is proposed to test for the presence of explosive roots in financial or real asset prices when the equation errors are strongly dependent. Limit theory for the test statistic is developed and extended to heteroskedastic models. The new test has stable size properties unlike conventional test statistics that typically lead to size distortion and inconsistency in the presence of strongly dependent equation errors. The new procedure can be used to consistently timestamp the origination and termination of an explosive episode under similar conditions of long memory errors. Simulations are conducted to assess the finite sample performance of the proposed test and estimators. An empirical application to the S&P 500 index highlights the usefulness of the proposed procedures in practical work. 
Keywords:  HAR test; Long memory; Explosiveness; Unit root test; S&P 500 
JEL:  C12 C22 G01 
Date:  2022–10–28 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2022_011&r=ecm 
By:  Kohtaro Hitomi (Kyoto Institute of Technology); Jianwei Jin (Yokohama National University); Keiji Nagai (Yokohama National University); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University); Junfan Tao (Institute of Economic Research, Kyoto University) 
Abstract:  The Dickey-Fuller (DF) unit root tests are widely used in empirical studies in economics. In local-to-unity asymptotic theory, the effects of initial values vanish as the sample size grows. However, for a small sample size, the initial value affects the distribution of the test statistics. When the effect of the initial value is ignored, the left-sided unit root test sets the critical value smaller than it should be; therefore, the size and power of the test become smaller. This paper investigates the effect of the initial value on the DF tests (including the t test). Limiting approximations of the DF test statistics are ratios of two integrals represented via a one-dimensional squared Bessel process. We derive the joint density of the squared Bessel process and its integral, enabling us to compute the distribution of this ratio. For independent normal errors, the exact distribution of the Dickey-Fuller coefficient test statistic is obtained using the Imhof (1961) method for the noncentral chi-squared distribution. Numerical results show that when the sample size is small, the limiting distributions of the DF test statistics with initial values fit the exact or simulated distributions well. We transform the DF test with respect to a local parameter into a test for a shift in the location parameter of normal distributions; as a result, a concise method for computing the powers of DF tests is derived. 
Keywords:  Dickey-Fuller tests, Squared Bessel process, joint density, powers approximated by normal distribution, exact distribution 
JEL:  C12 C22 C46 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:1084&r=ecm 
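The DF coefficient statistic discussed in the abstract above is easy to compute directly; the following sketch regresses y_t on y_{t-1} without an intercept and forms n(rho_hat - 1). The random-walk example is illustrative; assessing the initial-value effect the paper studies would require varying y_0.

```python
import numpy as np

def df_coefficient_stat(y):
    """Dickey-Fuller coefficient statistic n*(rho_hat - 1) for the AR(1)
    regression y_t = rho * y_{t-1} + e_t (no intercept)."""
    y = np.asarray(y, dtype=float)
    y_lag, y_cur = y[:-1], y[1:]
    rho_hat = (y_lag @ y_cur) / (y_lag @ y_lag)
    n = len(y_cur)
    return n * (rho_hat - 1.0), rho_hat

# Usage: under the unit root null with zero initial value, the statistic
# is O_p(1) and rho_hat is close to one; under stationarity the
# statistic diverges to minus infinity.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))  # random walk, y_0 effectively 0
stat, rho = df_coefficient_stat(y)
```

With a large non-zero y_0 the finite-sample distribution of `stat` shifts, which is precisely the effect the paper quantifies via the squared Bessel process representation.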
By:  Naoya Sueishi 
Abstract:  Empirical researchers often perform model specification tests, such as the Hausman test and the overidentifying restrictions test, to confirm the validity of estimators rather than the validity of models. This paper examines the effectiveness of specification pretests in finding invalid estimators. We study the local asymptotic properties of test statistics and estimators and show that locally unbiased specification tests cannot determine whether asymptotically efficient estimators are asymptotically biased. The main message of the paper is that correct specification and valid estimation are different issues. Correct specification is neither necessary nor sufficient for asymptotically unbiased estimation under local overidentification. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.11915&r=ecm 
By:  Federico Crudu; Michael C. Knaus; Giovanni Mellace; Joeri Smits 
Abstract:  Many econometrics textbooks imply that under mean independence of the regressors and the error term, the OLS parameters have a causal interpretation. We show that even when this assumption is satisfied, OLS might identify a pseudo-parameter that does not have a causal interpretation. Even assuming that the linear model is "structural" creates some ambiguity in what the regression error represents and whether the OLS estimand is causal. This issue applies equally to linear IV and panel data models. To give these estimands a causal interpretation, one needs to impose assumptions on a "causal" model, e.g., using the potential outcome framework. This highlights that causal inference requires causal, and not just stochastic, assumptions. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.09502&r=ecm 
By:  Xiaomeng Zhang; Wendun Wang; Xinyu Zhang 
Abstract:  This paper provides new insights into the asymptotic properties of the synthetic control method (SCM). We show that the synthetic control (SC) weight converges to a limiting weight that minimizes the mean squared prediction risk of the treatment-effect estimator when the number of pre-treatment periods goes to infinity, and we also quantify the rate of convergence. Observing the link between the SCM and model averaging, we further establish the asymptotic optimality of the SC estimator under imperfect pre-treatment fit, in the sense that it achieves the lowest possible squared prediction error among all possible treatment effect estimators that are based on an average of control units, such as matching, inverse probability weighting and difference-in-differences. The asymptotic optimality holds regardless of whether the number of control units is fixed or divergent. Thus, our results provide justifications for the SCM in a wide range of applications. The theoretical results are verified via simulations. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.12095&r=ecm 
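The SC weight discussed in the abstract above is the solution of a simplex-constrained least squares problem over the pre-treatment periods. A minimal sketch, using a generic SLSQP solver rather than any implementation from the paper; the data are simulated so that the treated unit is an exact convex combination of the controls.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0, y1):
    """Synthetic-control weights: minimise the pre-treatment MSPE
    ||y1 - Y0 @ w||^2 over the simplex (w >= 0, sum(w) = 1).
    Y0: T0 x J outcomes of control units; y1: length-T0 treated outcomes."""
    J = Y0.shape[1]
    obj = lambda w: np.mean((y1 - Y0 @ w) ** 2)
    res = minimize(obj, np.full(J, 1.0 / J),
                   bounds=[(0.0, 1.0)] * J,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    return res.x

# Usage: recover known weights from a noiseless convex combination
rng = np.random.default_rng(0)
Y0 = rng.standard_normal((30, 4))      # 30 pre-periods, 4 controls
w_true = np.array([0.5, 0.3, 0.2, 0.0])
y1 = Y0 @ w_true
w_hat = sc_weights(Y0, y1)
```

The paper's asymptotics concern the behaviour of exactly this minimizer as the number of pre-treatment periods grows, including the empirically relevant case where no exact convex fit exists.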
By:  Marc S. Paolella (University of Zurich - Department of Banking and Finance; Swiss Finance Institute); Pawel Polak (Stony Brook University - Department of Applied Mathematics and Statistics) 
Abstract:  The CCC-GARCH model, and its dynamic correlation extensions, form the most important model class for multivariate asset returns. For multivariate density and portfolio risk forecasting, a drawback of these models is the underlying assumption of Gaussianity. This paper considers the so-called COMFORT model class, which is the CCC-GARCH model but endowed with multivariate generalized hyperbolic innovations. The novelty of the model is that parameter estimation is conducted by joint maximum likelihood, of all model parameters, using an EM algorithm, and so is feasible for hundreds of assets. This paper demonstrates that the new model (i) is blatantly superior to its Gaussian counterpart in terms of forecasting ability, and (ii) also outperforms the ad-hoc three-step procedures common in the literature for augmenting the CCC and DCC models with a fat-tailed distribution. An extensive empirical study confirms the COMFORT model's superiority in terms of multivariate density and Value-at-Risk forecasting. 
Keywords:  GJR-GARCH, Multivariate Generalized Hyperbolic Distribution, Non-Ellipticity, Value-at-Risk. 
JEL:  C51 C53 G11 G17 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:chf:rpseri:rp2288&r=ecm 
By:  Kyunghoon Ban; Désiré Kédagni 
Abstract:  The difference-in-differences (DID) method identifies the average treatment effect on the treated (ATT) mainly under the so-called parallel trends (PT) assumption. The most common and widely used approach to justifying the PT assumption is examination of the pre-treatment periods: if a null hypothesis of the same trend in the outcome means for both treatment and control groups in the pre-treatment periods is rejected, researchers have less confidence in PT and in the DID results. This paper addresses this issue by developing a generalized DID framework that utilizes all the information available, not only from the pre-treatment periods but also from multiple data sources. Our approach interprets PT in a different way, using a notion of selection bias, which enables us to generalize the standard DID estimand by defining an information set that may contain multiple pre-treatment periods or other baseline covariates. Our main assumption states that the selection bias in the post-treatment period lies within the convex hull of all selection biases in the pre-treatment periods. We provide a sufficient condition for this assumption to hold. Based on the baseline information set we construct, we first provide an identified set for the ATT that always contains the true ATT under our identifying assumption, as well as the standard DID estimand. Second, we propose a class of criteria on the selection biases, from the perspective of policymakers, that can achieve point identification of the ATT. Finally, we illustrate our methodology through numerical and empirical examples. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.06710&r=ecm 
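The convex-hull assumption in the abstract above has a very simple bound form when the information set consists of pre-treatment group mean differences: since no one is treated in the pre-periods, each pre-period treated-minus-control gap is itself a selection bias, and the post-period bias is assumed to lie between the smallest and largest of them. A minimal sketch of the resulting identified set (my own illustrative arithmetic, not code from the paper):

```python
def att_bounds(post_diff, pre_diffs):
    """Identified set for the ATT when the post-treatment selection bias
    lies in the convex hull of the pre-treatment selection biases.
    post_diff: treated-minus-control mean outcome gap after treatment.
    pre_diffs: the same gaps in the pre-treatment periods (each one is a
    selection bias, since treatment has not yet occurred)."""
    lo = post_diff - max(pre_diffs)
    hi = post_diff - min(pre_diffs)
    return lo, hi

# Usage: pre-period gaps of 1.0, 1.4, 1.2 and a post-period gap of 3.0
# bound the ATT between 1.6 and 2.0; the standard DID estimand, which
# subtracts only the last pre-period gap, gives 1.8 and lies inside.
bounds = att_bounds(3.0, [1.0, 1.4, 1.2])
did_estimand = 3.0 - 1.2
```

This also shows why the identified set always contains the standard DID estimand, as the abstract states: the last pre-period bias is one element of the convex hull.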
By:  Jingwen Zhang; Yifang Chen; Amandeep Singh 
Abstract:  The deployment of Multi-Armed Bandits (MAB) has become commonplace in many economic applications. However, regret guarantees for even state-of-the-art linear bandit algorithms (such as Optimism in the Face of Uncertainty Linear bandit, OFUL) make strong exogeneity assumptions w.r.t. arm covariates. This assumption is very often violated in many economic contexts, and using such algorithms can lead to sub-optimal decisions. Further, in social science analysis, it is also important to understand the asymptotic distribution of estimated parameters. To this end, in this paper we consider the problem of online learning in linear stochastic contextual bandit problems with endogenous covariates. We propose an algorithm, termed $\epsilon$-BanditIV, that uses instrumental variables to correct for this bias, and prove an $\tilde{\mathcal{O}}(k\sqrt{T})$ upper bound for the expected regret of the algorithm. Further, we demonstrate the asymptotic consistency and normality of the $\epsilon$-BanditIV estimator. We carry out extensive Monte Carlo simulations to compare the performance of our algorithm with other methods and show that $\epsilon$-BanditIV significantly outperforms existing methods in endogenous settings. Finally, we use data from a real-time bidding (RTB) system to demonstrate how $\epsilon$-BanditIV can be used to estimate the causal impact of advertising in such settings, and we compare its performance with that of other existing methods. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.08649&r=ecm 
By:  James A. Duffy; Sophocles Mavroeidis; Sam Wycherley 
Abstract:  In the literature on nonlinear cointegration, a longstanding open problem relates to how a (nonlinear) vector autoregression, which provides a unified description of the short- and long-run dynamics of a collection of time series, can generate 'nonlinear cointegration' in the profound sense of those series sharing common nonlinear stochastic trends. We consider this problem in the setting of the censored and kinked structural VAR (CKSVAR), which provides a flexible yet tractable framework within which to model time series that are subject to threshold-type nonlinearities, such as those arising due to occasionally binding constraints, of which the zero lower bound (ZLB) on short-term nominal interest rates provides a leading example. We provide a complete characterisation of how common linear and nonlinear stochastic trends may be generated in this model, via unit roots and appropriate generalisations of the usual rank conditions, providing the first extension to date of the Granger-Johansen representation theorem from a linear to a nonlinear setting, and thereby giving the first successful treatment of the open problem. The limiting common trend processes include regulated, censored and kinked Brownian motions, none of which have previously appeared in the literature on cointegrated VARs. Our results and running examples illustrate that the CKSVAR is capable of supporting a far richer variety of long-run behaviour than is a linear VAR, in ways that may be particularly useful for the identification of structural parameters. En route to establishing our main results, we also develop a set of sufficient conditions for the processes generated by a CKSVAR to be stationary, ergodic, and weakly dependent. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.09604&r=ecm 
By:  Lutz Kilian; Michael D. Plante; Alexander W. Richter 
Abstract:  A common practice in empirical macroeconomics is to examine alternative recursive orderings of the variables in structural vector autoregressive (VAR) models. When the implied impulse responses look similar, the estimates are considered trustworthy. When they do not, the estimates are used to bound the true response without directly addressing the identification challenge. A leading example of this practice is the literature on the effects of uncertainty shocks on economic activity. We prove by counterexample that this practice is invalid in general, whether the data generating process is a structural VAR model or a dynamic stochastic general equilibrium model. 
Keywords:  Cholesky Decomposition; endogeneity; uncertainty; business cycle 
JEL:  C32 C51 E32 
Date:  2022–11–23 
URL:  http://d.repec.org/n?u=RePEc:fip:feddwp:95180&r=ecm 
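The practice criticized in the abstract above can be made concrete in two lines of linear algebra: each recursive ordering corresponds to a different Cholesky factor of the same residual covariance, and the factors imply different impact responses. The toy covariance below is my own illustration; the paper's counterexample concerns whether such responses bound the true structural response, which this sketch does not address.

```python
import numpy as np

# Residual covariance of a bivariate VAR (illustrative numbers)
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

# Ordering (x1, x2): lower-triangular Cholesky factor
B_12 = np.linalg.cholesky(Sigma)

# Ordering (x2, x1): permute, factorize, permute back; B_21 @ B_21.T
# still equals Sigma, so both factors fit the data equally well
P = np.array([[0.0, 1.0], [1.0, 0.0]])
B_21 = P @ np.linalg.cholesky(P @ Sigma @ P.T) @ P

# Impact response of variable 1 to the first orthogonalised shock
# differs across the two observationally equivalent orderings
r_12 = B_12[0, 0]
r_21 = B_21[0, 0]
```

Both factors reproduce Sigma exactly, so the data cannot distinguish them; the paper proves that, in general, the set of responses traced out by such orderings need not contain the true structural response.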
By:  Jianwei Jin (Yokohama National University); Keiji Nagai (Yokohama National University) 
Abstract:  This paper examines the effect of initial values and small-sample properties in sequential unit root tests of the first-order autoregressive (AR(1)) process with a coefficient expressed by a local parameter. Adopting a stopping rule based on observed Fisher information as defined by Lai and Siegmund (1983), we use the sequential least squares estimator (LSE) of the local parameter as the test statistic. The sequential LSE is represented as a time-changed Brownian motion with drift. The stopping time is written as the integral of the reciprocal of twice a Bessel process with drift generated by the time-changed Brownian motion. The time change is applied to the joint density and joint Laplace transform derived from the Bessel bridge of the squared Bessel process by Pitman and Yor (1982), from which we derive the limiting joint density and joint Laplace transform of the sequential LSE and stopping time. The joint Laplace transform is needed to calculate joint moments because the joint density oscillates wildly as the value of the stopping time approaches zero. Moreover, this paper also obtains the exact distribution of the stopping time by Imhof's formula, for both normally distributed and fixed initial values. When the autoregressive coefficient is less than 1, the question arises as to whether the local-to-unity or the strongly stationary model should be used. We make this decision by comparing the joint moments of the respective models with those calculated from the exact distribution or simulations. 
Keywords:  Stopping time, observed Fisher information, DDS Brownian motion, local asymptotic normality, Bessel process, initial values, exact distributions 
JEL:  C12 C22 C46 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:1085&r=ecm 
By:  Jack Jewson; Li Li; Laura Battaglia; Stephen Hansen; David Rossell; Piotr Zwiernik 
Abstract:  A frequent challenge when using graphical models in applications is that the sample size is limited relative to the number of parameters to be learned. Our motivation stems from applications where one has external data, in the form of networks between variables, that provides valuable information to help improve inference. Specifically, we depict the relation between COVID-19 cases and social and geographical network data, and between stock market returns and economic and policy networks extracted from text data. We propose a graphical LASSO framework where likelihood penalties are guided by the external network data. We also propose a spike-and-slab prior framework that depicts how partial correlations depend on the networks, which helps interpret the fitted graphical model and its relationship to the network. We develop computational schemes and software implementations in R and probabilistic programming languages. Our applications show how incorporating network data can significantly improve interpretation, statistical accuracy, and out-of-sample prediction, in some instances using significantly sparser graphical models than would have otherwise been estimated. 
Date:  2022–11–08 
URL:  http://d.repec.org/n?u=RePEc:azt:cemmap:20/22&r=ecm 
By:  Damian, Elena (Sciensano); Meuleman, Bart; van Oorschot, Wim 
Abstract:  Multilevel regression analysis is one of the most popular types of analysis in cross-national social studies. However, since its early applications, there have been constant concerns about the relatively small numbers of countries in cross-national surveys and about the method's ability to produce unbiased and accurate country-level effects. A recent review by Bryan and Jenkins (2016) highlights that there are still no clear rules of thumb regarding the minimum number of countries needed; current recommendations vary from 15 to 50 countries, depending on model complexity. This paper aims to offer a better understanding of the consequences of group-level sample size, model complexity, effect size, and estimation procedure for the precision of estimated country-level effects in cross-national studies. The accuracy criteria considered are statistical power, relative parameter bias, relative standard error bias, and convergence rates. We pay special attention to statistical power, a key criterion that has been largely neglected in past research. The results of our Monte Carlo simulation study indicate that the small number of countries found in cross-national surveys seriously affects the accuracy of group-level estimates. Specifically, while a sample size of 30 countries is sufficient to detect large population effects (.5), the probability of detecting a medium (.25) or a small (.10) effect is .4 or .2, respectively. The number of additional group-level variables (i.e., model complexity) included in the model does not disturb the relationship between sample size and statistical power; hence, adding contextual variables one by one does not increase the power to estimate a certain effect if the sample size is small. Even though we find that Bayesian models have more accurate estimates, there are no notable differences in statistical power between Maximum Likelihood and Bayesian models. 
Date:  2022–08–19 
URL:  http://d.repec.org/n?u=RePEc:osf:osfxxx:m94kh&r=ecm 
By:  Vladim\'ir Hol\'y 
Abstract:  We develop a novel observation-driven model for high-frequency prices. We account for irregularly spaced observations, simultaneous transactions, discreteness of prices, and market microstructure noise. The relation between trade durations and price volatility, as well as intraday patterns of trade durations and price volatility, is captured using smoothing splines. The dynamic model is based on the zero-inflated Skellam distribution with time-varying volatility in a score-driven framework. Market microstructure noise is filtered by including a moving average component. The model is estimated by the maximum likelihood method. In an empirical study of the IBM stock, we demonstrate that the model provides a good fit to the data. Besides modeling intraday volatility, it can also be used to measure daily realized volatility. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.12376&r=ecm 
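The zero-inflated Skellam distribution at the core of the model above is easy to write down: price changes in integer ticks follow a Skellam law (the difference of two Poissons), with extra probability mass placed on a zero move. A minimal sketch of the static PMF, with illustrative parameter values; the paper makes the Skellam volatility time-varying via a score-driven recursion, which is not shown here.

```python
from scipy.stats import skellam

def zi_skellam_pmf(k, pi0, mu1, mu2):
    """PMF of a zero-inflated Skellam distribution: with probability pi0
    the tick change is exactly zero; otherwise it is Skellam(mu1, mu2),
    i.e. the difference of two independent Poisson counts."""
    base = skellam.pmf(k, mu1, mu2)
    return pi0 * (k == 0) + (1.0 - pi0) * base

# Usage: zero inflation moves mass from non-zero ticks onto zero
p0 = zi_skellam_pmf(0, pi0=0.3, mu1=1.0, mu2=1.0)
p1 = zi_skellam_pmf(1, pi0=0.3, mu1=1.0, mu2=1.0)
```

Here mu1 + mu2 governs the variance of the non-zero part, which is the quantity the score-driven recursion updates over the trading day.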
By:  Jan Ditzen (Free University of Bozen-Bolzano); Yiannis Karavias (University of Birmingham); Joakim Westerlund (Lund University; Deakin University) 
Abstract:  Economists are concerned about the many recent disruptive events such as the 2007-2008 global financial crisis and the 2020 COVID-19 outbreak, and their likely effect on economic relationships. The fear is that the relationships might have changed, which has implications for both estimation and policymaking. Motivated by this last observation, the present paper develops a new toolbox for multiple structural break detection in panel data models with interactive effects. The toolbox includes several tests for the presence of structural breaks, a break date estimator, and a break date confidence interval. The new toolbox is applied to a large panel data set covering 3,557 US banks between 2005 and 2021, a period characterized by a number of massive quantitative easing programs to lessen the impact of the global financial crisis and the COVID-19 pandemic. The question we ask is: Have these programs been successful in spurring bank lending in the US economy? The short answer turns out to be: ``No''. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.06707&r=ecm 
By:  Andrzej Kocięcki (University of Warsaw, Faculty of Economic Sciences); Marcin Kolasa (SGH Warsaw School of Economics; International Monetary Fund) 
Abstract:  We develop an analytical framework to study global identification in structural models with forward-looking expectations. Our identification condition combines the similarity transformation linking the observationally equivalent state space systems with the constraints imposed on them by the model parameters. The key step of solving the identification problem then reduces to finding all roots of a system of polynomial equations. We show how this can be done using the concept of a Gröbner basis and recently developed algorithms to compute it analytically. In contrast to papers relying on numerical search, our approach can effectively prove whether a model is identified or not at the given parameter point, explicitly delivering the complete set of observationally equivalent parameter vectors. We present the solution to the global identification problem for several popular DSGE models. Our findings indicate that observational equivalence in medium-sized models of this class might actually not be as widespread as suggested by earlier evidence based on small models. 
Keywords:  global identification, state space systems, DSGE models, Gröbner basis 
JEL:  C10 C51 C65 E32 
Date:  2022 
URL:  http://d.repec.org/n?u=RePEc:war:wpaper:202201&r=ecm 
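The key computational step described in the abstract above, finding all roots of a polynomial system via a Gröbner basis, can be sketched with a toy identification problem. The "moments" below are entirely hypothetical: two structural parameters (a, b) are observed only through a^2 + b^2 and a*b, so the observationally equivalent set has four points.

```python
import sympy as sp

# Hypothetical example: parameters (a, b) enter the observables only
# through a^2 + b^2 = 5 and a*b = 2, so several parameter points are
# observationally equivalent. A lex-order Groebner basis triangularises
# the system, after which all roots can be read off exactly.
a, b = sp.symbols("a b")
eqs = [a**2 + b**2 - 5, a * b - 2]
G = sp.groebner(eqs, a, b, order="lex")
solutions = sp.solve(eqs, [a, b])
```

Here the complete observationally equivalent set is {(1, 2), (2, 1), (-1, -2), (-2, -1)}: the model is not globally identified, and the analysis delivers every equivalent parameter vector explicitly rather than just reporting a failed rank condition.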
By:  Andrey Shternshis; Piero Mazzarisi 
Abstract:  Shannon entropy is the most common metric for measuring the degree of randomness of time series in many fields, ranging from physics and finance to medicine and biology. Real-world systems may in general be non-stationary, with an entropy value that is not constant in time. The goal of this paper is to propose a hypothesis testing procedure to test the null hypothesis of constant Shannon entropy for time series, against the alternative of a significant variation of the entropy between two subsequent periods. To this end, we find an unbiased approximation of the variance of the Shannon entropy estimator, up to order $O(n^{-4})$, with $n$ the sample size. In order to characterize the variance of the estimator, we first obtain explicit formulas for the central moments of both the binomial and the multinomial distributions, which describe the distribution of the Shannon entropy. Second, we find the optimal length of the rolling window used for estimating the time-varying Shannon entropy by optimizing a novel self-consistent criterion based on counting significant variations of entropy within a time window. We corroborate our findings by using the new methodology to test for time-varying regimes of entropy in stock price dynamics, in particular considering the case of meme stocks in 2020 and 2021. We empirically show the existence of periods of market inefficiency for meme stocks; in particular, sharp increases in prices and trading volumes correspond to statistically significant drops in Shannon entropy. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.05415&r=ecm 
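The estimator underlying the abstract above is the plug-in Shannon entropy of a discretized return series. A minimal sketch with the standard first-order (Miller-Madow) bias correction; the paper goes further, deriving the estimator's variance to higher order for the hypothesis test, which is not reproduced here.

```python
import math
import random
from collections import Counter

def shannon_entropy(seq, correction=True):
    """Plug-in Shannon entropy (in nats) of a discrete sample, with the
    Miller-Madow first-order bias correction (m - 1)/(2n), where m is
    the number of observed symbols and n the sample size."""
    n = len(seq)
    counts = Counter(seq)
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    if correction:
        h += (len(counts) - 1) / (2 * n)
    return h

# Usage: a fair binary sequence (e.g. signs of price changes) has
# entropy near log(2) ~ 0.693 nats, the maximum for two symbols; a
# predictable, inefficient regime would show a significantly lower value
random.seed(0)
sample = [random.randint(0, 1) for _ in range(10_000)]
h = shannon_entropy(sample)
```

The paper's test compares such estimates across two rolling windows and rejects constancy when their difference is large relative to the derived variance.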
By:  Amoroso, Sara (European Commission, Joint Research Centre); Bruno, Randolph Luca (University College London); Magazzini, Laura (Sant'Anna School of Advanced Studies) 
Abstract:  Recent literature has drawn attention to the estimation of time-invariant variables in both static and dynamic frameworks. In this context, Hausman-Taylor type estimators have been applied, relying crucially on the distinction between exogenous and endogenous variables (in terms of correlation with the time-invariant error component). We show that this provision can be relaxed, and identification can be achieved by relying on the milder assumption that the correlation between the individual effect and the time-varying regressors is homogeneous over time. The methodology is applied to identify the role of inputs from "Science" (the firm-level stock of publications) in firms' labour productivity, showing that the effect is larger for firms with higher levels of R&D investment. The results further support the dual – direct and indirect – role of R&D. 
Keywords:  panel data, time-invariant variables, science, productivity, R&D 
JEL:  C23 O32 L20 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp15708&r=ecm 
By:  Storti, Giuseppe; Wang, Chao 
Abstract:  A new multivariate semiparametric risk forecasting framework is proposed to enable portfolio Value-at-Risk (VaR) and Expected Shortfall (ES) optimization and forecasting. The proposed framework accounts for the dependence structure among asset returns without assuming their distribution. A simulation study is conducted to evaluate the finite sample properties of the estimator employed for the proposed model. An empirically motivated portfolio optimization method, which can be utilized to optimize the portfolio VaR and ES, is developed. A forecasting study at the 2.5% level evaluates the performance of the model in risk forecasting and portfolio optimization, based on the components of the Dow Jones index, for the out-of-sample period from December 2016 to September 2021. Compared to the standard models in the literature, the empirical results are favorable for the proposed model class; in particular, the effectiveness of the proposed framework in portfolio risk optimization is demonstrated. 
Keywords:  semiparametric; Value-at-Risk; Expected Shortfall; multivariate; portfolio optimization. 
JEL:  C14 C32 C51 C58 G17 
Date:  2022–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:115266&r=ecm 
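The two target quantities, VaR and ES at the 2.5% level, can be illustrated with a minimal nonparametric (historical-simulation) sketch. The univariate, simulated setting and the function name are illustrative assumptions only; the paper's framework is semiparametric and multivariate.

```python
import numpy as np

def var_es(returns, alpha=0.025):
    """Empirical VaR and ES at level alpha from a return series.

    VaR is the negated alpha-quantile of returns; ES is the average
    loss in the alpha tail beyond that quantile.
    """
    r = np.sort(np.asarray(returns, dtype=float))
    k = int(np.ceil(alpha * len(r)))   # number of tail observations
    var = -r[k - 1]                    # negated empirical alpha-quantile
    es = -r[:k].mean()                 # mean loss beyond the VaR level
    return var, es

rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=10_000)   # simulated daily returns
v, e = var_es(rets, alpha=0.025)            # ES always exceeds VaR
```

For Gaussian returns with 1% daily volatility, the 2.5% VaR is close to 1.96% and the ES slightly larger, which is what the sketch reproduces empirically.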
By:  Clemens Possnig; Andreea Rot\u{a}rescu; Kyungchul Song 
Abstract:  Spillover of economic outcomes often arises over multiple networks, and distinguishing their separate roles is important in empirical research. For example, the direction of spillover between two groups (such as banks and industrial sectors linked in a bipartite graph) has important economic implications, and a researcher may want to learn which direction is supported in the data. For this, we need an empirical methodology that allows for both directions of spillover simultaneously. In this paper, we develop a dynamic linear panel model and asymptotic inference with large $n$ and small $T$, where both directions of spillover are accommodated through multiple networks. Using the methodology developed here, we perform an empirical study of spillovers between bank weakness and zombie-firm congestion in industrial sectors, using firm-bank matched data from Spain between 2005 and 2012. Overall, we find that there is positive spillover in both directions between banks and sectors. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.08995&r=ecm 
By:  Thanh Trung Huynh; Minh Hieu Nguyen; Thanh Tam Nguyen; Phi Le Nguyen; Matthias Weidlich; Quoc Viet Hung Nguyen; Karl Aberer 
Abstract:  Advances in deep neural network (DNN) architectures have enabled new prediction techniques for stock market data. Unlike other multivariate time-series data, stock markets show two unique characteristics: (i) \emph{multi-order dynamics}, as stock prices are affected by strong non-pairwise correlations (e.g., within the same industry); and (ii) \emph{internal dynamics}, as each individual stock shows some particular behaviour. Recent DNN-based methods capture multi-order dynamics using hypergraphs, but rely on the Fourier basis in the convolution, which is both inefficient and ineffective. In addition, they largely ignore internal dynamics by adopting the same model for each stock, which implies a severe information loss. In this paper, we propose a framework for stock movement prediction to overcome the above issues. Specifically, the framework includes temporal generative filters that implement a memory-based mechanism onto an LSTM network in an attempt to learn individual patterns per stock. Moreover, we employ hypergraph attentions to capture the non-pairwise correlations. Here, using the wavelet basis instead of the Fourier basis enables us to simplify the message passing and focus on localized convolution. Experiments with US market data over six years show that our framework outperforms state-of-the-art methods in terms of profit and stability. Our source code and data are available at \url{https://github.com/thanhtrunghuynh93/estimate}. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.07400&r=ecm 
By:  Youru Li; Zhenfeng Zhu; Xiaobo Guo; Shaoshuai Li; Yuchen Yang; Yao Zhao 
Abstract:  Risk prediction, as a typical time series modeling problem, is usually achieved by learning trends in markers or historical behavior from sequence data, and has been widely applied in healthcare and finance. In recent years, deep learning models, especially Long Short-Term Memory neural networks (LSTMs), have achieved superior performance in such sequence representation learning tasks. Although some attention- or self-attention-based models with time-aware or feature-aware enhancement strategies have outperformed other temporal modeling methods, the improvement is limited by the lack of guidance from a global view. To address this issue, we propose a novel end-to-end Hierarchical Global View-guided (HGV) sequence representation learning framework. Specifically, a Global Graph Embedding (GGE) module is proposed to learn sequential clip-aware representations from a temporal correlation graph at the instance level. Furthermore, following the key-query attention paradigm, a harmonic $\beta$-attention ($\beta$-Attn) is developed to adaptively trade off time-aware decay against observation significance at the channel level. Moreover, the hierarchical representations at both the instance and channel levels are coordinated by heterogeneous information aggregation under the guidance of the global view. Experimental results on a benchmark dataset for healthcare risk prediction, and on a real-world industrial scenario for Small and Midsize Enterprises (SMEs) credit overdue risk prediction at MYBank, Ant Group, show that the proposed model achieves competitive prediction performance compared with known baselines. 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2211.07956&r=ecm 