NEP: New Economics Papers | on Econometrics |
By: | Woosik Gong; Myung Hwan Seo |
Abstract: | This paper develops robust bootstrap inference for a dynamic panel threshold model to improve finite sample coverage and to be applicable irrespective of whether the regression is continuous. When the true model is continuous and kinked but this restriction is not imposed in the estimation, we find that the usual rank condition for GMM identification fails, since the Jacobian of the moment function loses the full-column-rank property. Instead, we establish identification through a higher-order expansion and derive a slower $n^{1/4}$ convergence rate for the GMM threshold estimator. Furthermore, we show that the continuity destroys asymptotic normality for both the coefficient and threshold estimators and invalidates the standard nonparametric bootstrap. We propose two alternative bootstrap schemes that are robust to continuity and improve the finite sample coverage of the unknown threshold. One is a grid bootstrap that imposes null values of the threshold location. The other is a robust bootstrap whose resampling scheme is adjusted by a data-driven criterion. We show that both bootstraps are consistent. The finite sample performance of the proposed methods is examined through Monte Carlo experiments, and an empirical application is presented. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.04027&r=ecm |
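The grid-bootstrap idea can be conveyed in a much simpler setting than the paper's. The sketch below inverts a residual-bootstrap test of H0: threshold = g0 over a grid of candidate values in a static, exogenous threshold regression; it only illustrates the grid-bootstrap logic of imposing the null threshold, and is not the paper's GMM procedure for dynamic panels. The simulated design, variable names, and the simple residual bootstrap are illustrative assumptions.

```python
# Minimal sketch of a grid bootstrap for a threshold parameter, assuming a
# static threshold regression y = b1*x*1{q <= g} + b2*x*1{q > g} + e.
# It inverts a residual-bootstrap test of H0: g = g0 over a grid of g0 values.
# This is NOT the paper's GMM procedure for dynamic panel threshold models.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
q = rng.uniform(size=n)                                 # threshold variable
y = np.where(q <= 0.5, 1.0 * x, 2.0 * x) + rng.normal(scale=0.5, size=n)

def ssr(y, x, q, g):
    """Sum of squared residuals with the threshold fixed at g."""
    res = y.copy()
    for m in (q <= g, q > g):
        if m.sum() > 2:
            res[m] = y[m] - (x[m] @ y[m] / (x[m] @ x[m])) * x[m]
    return res @ res, res

grid = np.quantile(q, np.linspace(0.15, 0.85, 21))
ssr_u = min(ssr(y, x, q, g)[0] for g in grid)           # unrestricted fit

conf_set, B, level = [], 99, 0.90
for g0 in grid:
    ssr_r, res_r = ssr(y, x, q, g0)
    lr = n * (ssr_r - ssr_u) / ssr_u                    # LR-type statistic for H0: g = g0
    fitted_r = y - res_r
    lr_b = np.empty(B)
    for b in range(B):                                  # bootstrap imposing the null g0
        yb = fitted_r + rng.choice(res_r, size=n, replace=True)
        ssr_rb, _ = ssr(yb, x, q, g0)
        ssr_ub = min(ssr(yb, x, q, g)[0] for g in grid)
        lr_b[b] = n * (ssr_rb - ssr_ub) / ssr_ub
    if lr <= np.quantile(lr_b, level):                  # keep g0 if the test does not reject
        conf_set.append(g0)

print("90% grid-bootstrap confidence set:",
      (round(min(conf_set), 3), round(max(conf_set), 3)) if conf_set else "empty")
```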
By: | Zequn Jin; Lihua Lin; Zhengyu Zhang |
Abstract: | This paper proposes a new class of heterogeneous causal quantities, named \textit{outcome conditioned} average structural derivatives (OASD), in a general nonseparable model. OASD is the average partial effect of a marginal change in a continuous treatment on the individuals located at different parts of the outcome distribution, irrespective of individuals' characteristics. OASD combines features of both ATE and QTE: it is interpreted as straightforwardly as ATE while at the same time being more granular than ATE, by breaking the entire population up according to the rank of the outcome distribution. One contribution of this paper is that we establish close relationships between the \textit{outcome conditioned average partial effects} and a class of parameters measuring the effect of counterfactually changing the distribution of a single covariate on the unconditional outcome quantiles. By exploiting this relationship, we can obtain root-$n$ consistent estimators and calculate the semi-parametric efficiency bound for these counterfactual effect parameters. We illustrate this point with two examples: equivalence between OASD and the unconditional partial quantile effect (Firpo et al. (2009)), and equivalence between the marginal partial distribution policy effect (Rothe (2012)) and a corresponding outcome conditioned parameter. Because identification of OASD is attained under a conditional exogeneity assumption, by controlling for rich covariate information a researcher may ideally use high-dimensional controls in data. We propose for OASD a novel automatic debiased machine learning estimator, and present asymptotic statistical guarantees for it. We prove our estimator is root-$n$ consistent, asymptotically normal, and semiparametrically efficient. We also prove the validity of the bootstrap procedure for uniform inference on the OASD process. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.07903&r=ecm |
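As a rough guide to the object being estimated, one schematic way to write an outcome conditioned average structural derivative consistent with the verbal description above is (the notation is illustrative, not the paper's):
\[
  Y = g(T, X, \varepsilon), \qquad
  \mathrm{OASD}(\tau) = E\!\left[\frac{\partial g(T, X, \varepsilon)}{\partial t} \,\middle|\, Y = F_Y^{-1}(\tau)\right], \qquad \tau \in (0, 1),
\]
where $T$ is the continuous treatment, $X$ are covariates, $\varepsilon$ is (possibly multidimensional) unobserved heterogeneity, and $F_Y^{-1}(\tau)$ is the $\tau$-quantile of the outcome; averaging over $\tau$ recovers an ATE-type object, while a fixed $\tau$ isolates units at one part of the outcome distribution.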
By: | Christian Gourieroux; Joann Jasiak |
Abstract: | This paper considers nonlinear dynamic models where the main parameter of interest is a nonnegative matrix characterizing the network (contagion) effects. This network matrix is usually constrained either by assuming a limited number of nonzero elements (sparsity), or by considering a reduced rank approach for nonnegative matrix factorization (NMF). We follow the latter approach and develop a new probabilistic NMF method. We introduce a new Identifying Maximum Likelihood (IML) method for consistent estimation of the identified set of admissible NMFs and derive its asymptotic distribution. Moreover, we propose a maximum likelihood estimator of the parameter matrix for a given nonnegative rank, and derive its asymptotic distribution and the associated efficiency bound. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.11876&r=ecm |
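For readers unfamiliar with NMF, the sketch below runs a plain reduced-rank non-negative factorization by multiplicative updates; it illustrates the kind of decomposition being estimated, not the paper's probabilistic IML method, and the function names and simulated matrix are illustrative.

```python
# Plain non-negative matrix factorization (NMF) by multiplicative updates
# (Lee-Seung), minimizing ||A - W H||_F^2 with W, H >= 0. This illustrates
# the reduced-rank non-negative factorization that the paper builds on;
# it is not the paper's probabilistic IML estimator.
import numpy as np

def nmf(A, rank, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.uniform(size=(n, rank))
    H = rng.uniform(size=(rank, m))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example: a nonnegative "network" matrix with true non-negative rank 2.
rng = np.random.default_rng(1)
A = rng.uniform(size=(10, 2)) @ rng.uniform(size=(2, 10))
W, H = nmf(A, rank=2)
print("reconstruction error:", np.linalg.norm(A - W @ H))
```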
By: | Greta Goracci; Davide Ferrari; Simone Giannerini; Francesco Ravazzolo |
Abstract: | Threshold autoregressive moving-average (TARMA) models are popular in time series analysis due to their ability to parsimoniously describe several complex dynamical features. However, neither theory nor estimation methods are currently available when the data present heavy tails or anomalous observations, which is often the case in applications. In this paper, we provide the first theoretical framework for robust M-estimation for TARMA models and also study its practical relevance. Under mild conditions, we show that the robust estimator for the threshold parameter is super-consistent, while the estimators for autoregressive and moving-average parameters are strongly consistent and asymptotically normal. The Monte Carlo study shows that the M-estimator is superior, in terms of both bias and variance, to the least squares estimator, which can be heavily affected by outliers. The findings suggest that robust M-estimation should be generally preferred to the least squares method. Finally, we apply our methodology to a set of commodity price time series; the robust TARMA fit presents smaller standard errors and leads to superior forecasting accuracy compared to the least squares fit. The results support the hypothesis of a two-regime, asymmetric nonlinearity around zero, characterised by slow expansions and fast contractions. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.08205&r=ecm |
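The flavour of robust M-estimation for threshold time-series models can be conveyed in a pared-down setting. The sketch below fits a two-regime TAR(1), with no moving-average part, by minimizing a Huber loss over a grid of thresholds; it is a toy analogue under assumed simplifications, not the paper's TARMA estimator or its asymptotic theory.

```python
# Robust M-estimation sketch for a two-regime TAR(1) model
#   y_t = phi1*y_{t-1}*1{y_{t-1} <= r} + phi2*y_{t-1}*1{y_{t-1} > r} + e_t,
# using a Huber loss and a grid search over the threshold r. Illustrative
# only: the paper treats TARMA models (with an MA part) and develops theory.
import numpy as np
from scipy.optimize import minimize

def huber(u, c=1.345):
    a = np.abs(u)
    return np.where(a <= c, 0.5 * u**2, c * a - 0.5 * c**2)

def fit_tar1_huber(y, r_grid):
    best = None
    y0, y1 = y[:-1], y[1:]
    for r in r_grid:
        lo = y0 <= r
        def obj(phi):
            resid = y1 - np.where(lo, phi[0] * y0, phi[1] * y0)
            return huber(resid).sum()
        res = minimize(obj, x0=np.zeros(2), method="Nelder-Mead")
        if best is None or res.fun < best[0]:
            best = (res.fun, r, res.x)
    return best  # (objective, threshold, [phi1, phi2])

rng = np.random.default_rng(0)
n, y = 400, np.zeros(401)
for t in range(1, n + 1):               # simulate a TAR(1) with heavy-tailed errors
    phi = 0.7 if y[t - 1] <= 0 else -0.3
    y[t] = phi * y[t - 1] + rng.standard_t(df=3)

obj, r_hat, phi_hat = fit_tar1_huber(y, np.quantile(y, np.linspace(0.1, 0.9, 17)))
print("threshold:", round(r_hat, 3), "regime slopes:", phi_hat.round(3))
```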
By: | Lui, Yiu Lim (Dongbei University of Finance and Economics); Phillips, Peter C.B. (Yale University); Yu, Jun (Singapore Management University) |
Abstract: | A heteroskedasticity-autocorrelation robust (HAR) test statistic is proposed to test for the presence of explosive roots in financial or real asset prices when the equation errors are strongly dependent. Limit theory for the test statistic is developed and extended to heteroskedastic models. The new test has stable size properties unlike conventional test statistics that typically lead to size distortion and inconsistency in the presence of strongly dependent equation errors. The new procedure can be used to consistently time-stamp the origination and termination of an explosive episode under similar conditions of long memory errors. Simulations are conducted to assess the finite sample performance of the proposed test and estimators. An empirical application to the S&P 500 index highlights the usefulness of the proposed procedures in practical work. |
Keywords: | HAR test; Long memory; Explosiveness; Unit root test; S&P 500 |
JEL: | C12 C22 G01 |
Date: | 2022–10–28 |
URL: | http://d.repec.org/n?u=RePEc:ris:smuesw:2022_011&r=ecm |
By: | Kohtaro Hitomi (Kyoto Institute of Technology); Jianwei Jin (Yokohama National University); Keiji Nagai (Yokohama National University); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University); Junfan Tao (Institute of Economic Research, Kyoto University) |
Abstract: | The Dickey-Fuller (DF) unit root tests are widely used in empirical studies in economics. In the local-to-unity asymptotic theory, the effects of initial values vanish as the sample size grows. However, for a small sample size, the initial value will affect the distribution of the test statistics. If the effect of the initial value is ignored, the left-sided unit root test uses a critical value smaller than it should be, so the size and power of the test are reduced. This paper investigates the effect of the initial value on the DF tests (including the t test). The limiting approximations of the DF test statistics are ratios of two integrals, which are represented via a one-dimensional squared Bessel process. We derive the joint density of the squared Bessel process and its integral, enabling us to compute the distribution of this ratio. For independent normal errors, the exact distribution of the Dickey-Fuller coefficient test statistic is obtained using the Imhof (1961) method for the non-central chi-squared distribution. Numerical results show that when the sample size is small, the limiting distributions of the DF test statistics with initial values fit the exact or simulated distributions well. We transform the DF test with respect to a local parameter into a test for a shift in the location parameter of normal distributions. As a result, a concise method for computing the powers of DF tests is derived. |
Keywords: | Dickey-Fuller tests, Squared Bessel process, joint density, powers approximated by normal distribution, exact distribution |
JEL: | C12 C22 C46 |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:kyo:wpaper:1084&r=ecm |
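For reference, the "ratio of two integrals" alluded to above is, in the standard local-to-unity setup with i.i.d. errors and a zero initial value (the case the paper generalizes):
\[
  y_t = \rho_n y_{t-1} + u_t, \quad \rho_n = 1 + \frac{c}{n}: \qquad
  n(\hat\rho_n - 1) \;\Rightarrow\; c + \frac{\int_0^1 J_c(r)\,dW(r)}{\int_0^1 J_c(r)^2\,dr},
  \qquad J_c(r) = \int_0^r e^{c(r - s)}\,dW(s),
\]
where $W$ is a standard Brownian motion and $J_c$ an Ornstein-Uhlenbeck process; a nonzero or random initial value shifts $J_c$ and hence both integrals, which is precisely the small-sample effect studied above.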
By: | Naoya Sueishi |
Abstract: | Empirical researchers often perform model specification tests, such as the Hausman test and the overidentifying restrictions test, to confirm the validity of estimators rather than the validity of models. This paper examines the effectiveness of specification pretests in finding invalid estimators. We study the local asymptotic properties of test statistics and estimators and show that locally unbiased specification tests cannot determine whether asymptotically efficient estimators are asymptotically biased. The main message of the paper is that correct specification and valid estimation are different issues. Correct specification is neither necessary nor sufficient for asymptotically unbiased estimation under local overidentification. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.11915&r=ecm |
By: | Federico Crudu; Michael C. Knaus; Giovanni Mellace; Joeri Smits |
Abstract: | Many econometrics textbooks imply that, under mean independence of the regressors and the error term, the OLS parameters have a causal interpretation. We show that even when this assumption is satisfied, OLS might identify a pseudo-parameter that does not have a causal interpretation. Even assuming that the linear model is "structural" creates some ambiguity about what the regression error represents and whether the OLS estimand is causal. This issue applies equally to linear IV and panel data models. To give these estimands a causal interpretation, one needs to impose assumptions on a "causal" model, e.g., using the potential outcome framework. This highlights that causal inference requires causal, and not just stochastic, assumptions. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.09502&r=ecm |
By: | Xiaomeng Zhang; Wendun Wang; Xinyu Zhang |
Abstract: | This paper provides new insights into the asymptotic properties of the synthetic control method (SCM). We show that the synthetic control (SC) weight converges to a limiting weight that minimizes the mean squared prediction risk of the treatment-effect estimator when the number of pretreatment periods goes to infinity, and we also quantify the rate of convergence. Observing the link between the SCM and model averaging, we further establish the asymptotic optimality of the SC estimator under imperfect pretreatment fit, in the sense that it achieves the lowest possible squared prediction error among all possible treatment effect estimators that are based on an average of control units, such as matching, inverse probability weighting and difference-in-differences. The asymptotic optimality holds regardless of whether the number of control units is fixed or divergent. Thus, our results provide justifications for the SCM in a wide range of applications. The theoretical results are verified via simulations. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.12095&r=ecm |
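For concreteness, the SC weights referred to above solve the standard pre-treatment fitting problem (written here for outcome-only matching; the paper's notation may differ):
\[
  \hat w = \underset{w \in \mathcal W}{\arg\min}\; \sum_{t=1}^{T_0}\Bigl(Y_{1t} - \sum_{j=2}^{J+1} w_j Y_{jt}\Bigr)^{2},
  \qquad \mathcal W = \Bigl\{ w : w_j \ge 0,\ \textstyle\sum_{j=2}^{J+1} w_j = 1 \Bigr\},
\]
with the treatment effect in a post-treatment period $t > T_0$ estimated by $\hat\tau_t = Y_{1t} - \sum_{j} \hat w_j Y_{jt}$. Matching, inverse probability weighting, and difference-in-differences correspond to alternative choices of the weights $w$, which is the class of estimators over which the asymptotic optimality is stated.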
By: | Marc S. Paolella (University of Zurich - Department of Banking and Finance; Swiss Finance Institute); Pawel Polak (Stony Brook University-Department of Applied Mathematics and Statistics) |
Abstract: | The CCC-GARCH model, and its dynamic correlation extensions, form the most important model class for multivariate asset returns. For multivariate density and portfolio risk forecasting, a drawback of these models is the underlying assumption of Gaussianity. This paper considers the so-called COMFORT model class, which is the CCC-GARCH model but endowed with multivariate generalized hyperbolic innovations. The novelty of the model is that parameter estimation is conducted by joint maximum likelihood, of all model parameters, using an EM algorithm, and so is feasible for hundreds of assets. This paper demonstrates that (i) the new model is blatantly superior to its Gaussian counterpart in terms of forecasting ability, and (ii) also outperforms ad-hoc three step procedures common in the literature to augment the CCC and DCC models with a fat-tailed distribution. An extensive empirical study confirms the COMFORT model’s superiority in terms of multivariate density and Value-at-Risk forecasting. |
Keywords: | GJR-GARCH, Multivariate Generalized Hyperbolic Distribution, Non-Ellipticity, Value-at-Risk. |
JEL: | C51 C53 G11 G17 |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2288&r=ecm |
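The tractability of joint estimation rests on the fact that the multivariate generalized hyperbolic family is a normal mean-variance mixture; schematically (the exact COMFORT dynamics are those of the paper):
\[
  r_t \mid W_t \;\sim\; \mathcal N\bigl(\mu + W_t\,\gamma,\; W_t\,\Sigma_t\bigr), \qquad W_t \sim \mathrm{GIG}(\lambda, \chi, \psi),
\]
where $\Sigma_t$ carries the CCC/DCC-type GARCH dynamics and the latent mixing variable $W_t$ generates fat tails and asymmetry; treating $W_t$ as missing data is what makes joint maximum likelihood via an EM algorithm feasible even for hundreds of assets.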
By: | Kyunghoon Ban; Désiré Kédagni |
Abstract: | The difference-in-differences (DID) method identifies the average treatment effect on the treated (ATT) mainly under the so-called parallel trends (PT) assumption. The most common and widely used approach to justify the PT assumption is the examination of pre-treatment periods. If a null hypothesis of the same trend in the outcome means for both treatment and control groups in the pre-treatment periods is rejected, researchers put less faith in PT and the DID results. This paper addresses this issue by developing a generalized DID framework that utilizes all the information available, not only from the pre-treatment periods but also from multiple data sources. Our approach interprets PT in a different way using a notion of selection bias, which enables us to generalize the standard DID estimand by defining an information set that may contain multiple pre-treatment periods or other baseline covariates. Our main assumption states that the selection bias in the post-treatment period lies within the convex hull of all selection biases in the pre-treatment periods. We provide a sufficient condition for this assumption to hold. Based on the baseline information set we construct, we first provide an identified set for the ATT that always contains the true ATT under our identifying assumption, as well as the standard DID estimand. Second, we propose a class of criteria on the selection biases, from the perspective of policymakers, that can achieve point identification of the ATT. Finally, we illustrate our methodology through numerical and empirical examples. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.06710&r=ecm |
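Schematically, the selection-bias reading of parallel trends described above can be written as follows (notation illustrative). Defining
\[
  B_t = E[Y_t(0) \mid D = 1] - E[Y_t(0) \mid D = 0], \qquad
  E[Y_{t^*} \mid D = 1] - E[Y_{t^*} \mid D = 0] = \mathrm{ATT}_{t^*} + B_{t^*}
\]
for a post-treatment period $t^*$, standard DID assumes the post-treatment selection bias equals the common pre-treatment one, whereas the generalized assumption only requires $B_{t^*} \in \mathrm{conv}\{B_t : t\ \text{pre-treatment}\}$; since each pre-treatment $B_t$ is identified from untreated outcomes, this delivers an identified set for $\mathrm{ATT}_{t^*}$ that contains the standard DID estimand.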
By: | Jingwen Zhang; Yifang Chen; Amandeep Singh |
Abstract: | The deployment of Multi-Armed Bandits (MAB) has become commonplace in many economic applications. However, regret guarantees for even state-of-the-art linear bandit algorithms (such as Optimism in the Face of Uncertainty Linear bandit (OFUL)) make strong exogeneity assumptions w.r.t. arm covariates. This assumption is very often violated in many economic contexts, and using such algorithms can lead to sub-optimal decisions. Further, in social science analysis, it is also important to understand the asymptotic distribution of estimated parameters. To this end, in this paper, we consider the problem of online learning in linear stochastic contextual bandit problems with endogenous covariates. We propose an algorithm, termed $\epsilon$-BanditIV, that uses instrumental variables to correct for this bias, and prove an $\tilde{\mathcal{O}}(k\sqrt{T})$ upper bound for the expected regret of the algorithm. Further, we demonstrate the asymptotic consistency and normality of the $\epsilon$-BanditIV estimator. We carry out extensive Monte Carlo simulations to demonstrate the performance of our algorithm compared to other methods. We show that $\epsilon$-BanditIV significantly outperforms other existing methods in endogenous settings. Finally, we use data from a real-time bidding (RTB) system to demonstrate how $\epsilon$-BanditIV can be used to estimate the causal impact of advertising in such settings and compare its performance with other existing methods. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.08649&r=ecm |
By: | James A. Duffy; Sophocles Mavroeidis; Sam Wycherley |
Abstract: | In the literature on nonlinear cointegration, a long-standing open problem relates to how a (nonlinear) vector autoregression, which provides a unified description of the short- and long-run dynamics of a collection of time series, can generate 'nonlinear cointegration' in the profound sense of those series sharing common nonlinear stochastic trends. We consider this problem in the setting of the censored and kinked structural VAR (CKSVAR), which provides a flexible yet tractable framework within which to model time series that are subject to threshold-type nonlinearities, such as those arising due to occasionally binding constraints, of which the zero lower bound (ZLB) on short-term nominal interest rates provides a leading example. We provide a complete characterisation of how common linear and nonlinear stochastic trends may be generated in this model, via unit roots and appropriate generalisations of the usual rank conditions, providing the first extension to date of the Granger-Johansen representation theorem from a linear to a nonlinear setting, and thereby giving the first successful treatment of the open problem. The limiting common trend processes include regulated, censored and kinked Brownian motions, none of which have previously appeared in the literature on cointegrated VARs. Our results and running examples illustrate that the CKSVAR is capable of supporting a far richer variety of long-run behaviour than is a linear VAR, in ways that may be particularly useful for the identification of structural parameters. En route to establishing our main results, we also develop a set of sufficient conditions for the processes generated by a CKSVAR to be stationary, ergodic, and weakly dependent. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.09604&r=ecm |
By: | Lutz Kilian; Michael D. Plante; Alexander W. Richter |
Abstract: | A common practice in empirical macroeconomics is to examine alternative recursive orderings of the variables in structural vector autoregressive (VAR) models. When the implied impulse responses look similar, the estimates are considered trustworthy. When they do not, the estimates are used to bound the true response without directly addressing the identification challenge. A leading example of this practice is the literature on the effects of uncertainty shocks on economic activity. We prove by counterexample that this practice is invalid in general, whether the data generating process is a structural VAR model or a dynamic stochastic general equilibrium model. |
Keywords: | Cholesky Decomposition; endogeneity; uncertainty; business cycle |
JEL: | C32 C51 E32 |
Date: | 2022–11–23 |
URL: | http://d.repec.org/n?u=RePEc:fip:feddwp:95180&r=ecm |
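For context, a recursive ordering identifies the structural impact matrix as the lower-triangular Cholesky factor of the reduced-form error covariance,
\[
  u_t = B_0\,\varepsilon_t, \qquad \Sigma_u = B_0 B_0', \qquad B_0 = \mathrm{chol}(\Sigma_u)\ \text{(lower triangular)},
\]
so the implied impulse responses depend on how the variables are ordered in $B_0$; the counterexamples above show that comparing responses across alternative orderings need not bound the true responses.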
By: | Jianwei Jin (Yokohama National University); Keiji Nagai (Yokohama National University) |
Abstract: | This paper examines the effect of initial values and small-sample properties in sequential unit root tests of the first-order autoregressive (AR(1)) process with a coefficient expressed by a local parameter. Adopting a stopping rule based on observed Fisher information as defined by Lai and Siegmund (1983), we use the sequential least squares estimator (LSE) of the local parameter as the test statistic. The sequential LSE is represented as a time-changed Brownian motion with drift. The stopping time is written as the integral of the reciprocal of twice a Bessel process with drift generated by the time-changed Brownian motion. The time change is applied to the joint density and joint Laplace transform derived from the Bessel bridge of the squared Bessel process by Pitman and Yor (1982), from which we derive the limiting joint density and joint Laplace transform of the sequential LSE and the stopping time. The joint Laplace transform is needed to calculate joint moments because the joint density oscillates wildly as the value of the stopping time approaches zero. Moreover, this paper also obtains the exact distribution of the stopping time via Imhof's formula for both normally distributed and fixed initial values. When the autoregressive coefficient is less than 1, the question arises as to whether the local-to-unity or the strongly stationary model should be used. We make the decision by comparing joint moments for the respective models with those calculated from the exact distribution or simulations. |
Keywords: | Stopping time, observed Fisher information, DDS Brownian motion, local asymptotic normality, Bessel process, initial values, exact distributions |
JEL: | C12 C22 C46 |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:kyo:wpaper:1085&r=ecm |
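Concretely, for the AR(1) model $y_t = \theta y_{t-1} + \varepsilon_t$ the observed Fisher information for the autoregressive coefficient is (up to the error variance) $\sum_{t \le n} y_{t-1}^2$, and a Lai-Siegmund-type stopping rule samples until it reaches a prescribed level $c$ (schematic notation):
\[
  \tau_c = \inf\Bigl\{ n \ge 1 : \sum_{t=1}^{n} y_{t-1}^2 \ge c \Bigr\}, \qquad
  \hat\theta_{\tau_c} = \frac{\sum_{t=1}^{\tau_c} y_{t-1} y_t}{\sum_{t=1}^{\tau_c} y_{t-1}^2},
\]
and stopping on observed information is what lets the sequential LSE be represented as a time-changed Brownian motion with drift, with the stopping time tied to the underlying squared Bessel process.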
By: | Jack Jewson; Li Li; Laura Battaglia; Stephen Hansen; David Rossell; Piotr Zwiernik |
Abstract: | A frequent challenge when using graphical models in applications is that the sample size is limited relative to the number of parameters to be learned. Our motivation stems from applications where one has external data, in the form of networks between variables, that provides valuable information to help improve inference. Specifically, we depict the relation between COVID-19 cases and social and geographical network data, and between stock market returns and economic and policy networks extracted from text data. We propose a graphical LASSO framework where likelihood penalties are guided by the external network data. We also propose a spike-and-slab prior framework that depicts how partial correlations depend on the networks, which helps interpret the fitted graphical model and its relationship to the network. We develop computational schemes and software implementations in R and probabilistic programming languages. Our applications show how incorporating network data can significantly improve interpretation, statistical accuracy, and out-of-sample prediction, in some instances using significantly sparser graphical models than would have otherwise been estimated. |
Date: | 2022–11–08 |
URL: | http://d.repec.org/n?u=RePEc:azt:cemmap:20/22&r=ecm |
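One schematic way to let external networks guide the likelihood penalties, in the spirit of the framework described above (the exact parameterization is the paper's), is an element-wise penalized graphical LASSO:
\[
  \hat\Omega = \underset{\Omega \succ 0}{\arg\max}\; \log\det\Omega - \mathrm{tr}(S\,\Omega) - \sum_{j \ne k} \lambda_{jk}\,\lvert\Omega_{jk}\rvert,
  \qquad \lambda_{jk} = \exp\bigl(\beta_0 + \beta_1 A^{(1)}_{jk} + \cdots + \beta_Q A^{(Q)}_{jk}\bigr),
\]
where $S$ is the sample covariance, $\Omega$ the precision matrix, and $A^{(q)}_{jk}$ encodes whether (or how strongly) variables $j$ and $k$ are linked in the $q$-th external network, so network-supported edges can receive lighter penalties; the spike-and-slab analogue lets the slab probability of a partial correlation depend on the same network covariates.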
By: | Damian, Elena (Sciensano); Meuleman, Bart; van Oorschot, Wim |
Abstract: | Multilevel regression analysis is one of the most popular types of analysis in cross-national social studies. However, since its early applications, there have been constant concerns about the relatively small numbers of countries in cross-national surveys and about the ability of such analyses to produce unbiased and accurate country-level effects. A recent review by Bryan and Jenkins (2016) highlights that there are still no clear rules of thumb regarding the minimum number of countries needed. The current recommendations vary from 15 to 50 countries, depending on model complexity. This paper aims to offer a better understanding of the consequences of group-level sample size, model complexity, effect size, and estimation procedure for the precision of estimated country-level effects in cross-national studies. The accuracy criteria considered are statistical power, relative parameter bias, relative standard error bias, and convergence rates. We pay special attention to statistical power - a key criterion that has been largely neglected in past research. The results of our Monte Carlo simulation study indicate that the small number of countries found in cross-national surveys seriously affects the accuracy of group-level estimates. Specifically, while a sample size of 30 countries is sufficient to detect large population effects (.5), the probability of detecting a medium (.25) or a small effect (.10) is .4 or .2, respectively. The number of additional group-level variables (i.e., model complexity) included in the model does not disturb the relationship between sample size and statistical power. Hence, adding contextual variables one by one does not increase the power to estimate a certain effect if the sample size is small. Even though we find that Bayesian models have more accurate estimates, there are no notable differences in statistical power between Maximum Likelihood and Bayesian models. |
Date: | 2022–08–19 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:m94kh&r=ecm |
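The kind of Monte Carlo power calculation discussed above can be reproduced in miniature. The sketch below simulates a two-level design with a single country-level predictor, fits a random-intercept model, and records how often the country-level coefficient is significant; the sample sizes, effect size, and error variances are illustrative assumptions and do not replicate the study's design.

```python
# Minimal Monte Carlo power check for a country-level effect in a
# random-intercept multilevel model. All design parameters are illustrative
# and do not replicate the study's design (which also varies model
# complexity, estimators, and bias criteria).
import numpy as np
import statsmodels.api as sm

def power(n_countries=30, n_per=200, effect=0.25, reps=100, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        z = rng.normal(size=n_countries)                       # country-level predictor
        u = rng.normal(scale=np.sqrt(1 - effect**2), size=n_countries)
        country = np.repeat(np.arange(n_countries), n_per)
        y = effect * z[country] + u[country] + rng.normal(size=n_countries * n_per)
        X = sm.add_constant(z[country])
        fit = sm.MixedLM(y, X, groups=country).fit(reml=True)
        pvals = np.asarray(fit.pvalues)                        # [const, z, group var]
        hits += pvals[1] < 0.05
    return hits / reps

print("approximate power, 30 countries, standardized effect 0.25:", power())
```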
By: | Vladimír Holý |
Abstract: | We develop a novel observation-driven model for high-frequency prices. We account for irregularly spaced observations, simultaneous transactions, discreteness of prices, and market microstructure noise. The relation between trade durations and price volatility, as well as intraday patterns of trade durations and price volatility, is captured using smoothing splines. The dynamic model is based on the zero-inflated Skellam distribution with time-varying volatility in a score-driven framework. Market microstructure noise is filtered by including a moving average component. The model is estimated by the maximum likelihood method. In an empirical study of IBM stock, we demonstrate that the model provides a good fit to the data. Besides modeling intraday volatility, it can also be used to measure daily realized volatility. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.12376&r=ecm |
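For reference, the Skellam distribution is the law of the difference of two independent Poisson counts; with both intensities equal to $\lambda_t$ (so integer price changes have mean zero and variance $2\lambda_t$), the pmf and its zero-inflated version for $k \in \mathbb Z$ are
\[
  P_{\mathrm{Sk}}(k;\lambda_t) = e^{-2\lambda_t}\, I_{|k|}(2\lambda_t), \qquad
  P(k) =
  \begin{cases}
    \pi + (1-\pi)\,P_{\mathrm{Sk}}(0;\lambda_t), & k = 0,\\
    (1-\pi)\,P_{\mathrm{Sk}}(k;\lambda_t), & k \ne 0,
  \end{cases}
\]
where $I_{|k|}$ is the modified Bessel function of the first kind and $\pi$ is the excess probability of a zero price change; in the score-driven setup described above, the volatility parameter $\lambda_t$ is updated with the scaled score of this likelihood.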
By: | Jan Ditzen (Free University of Bozen-Bolzano); Yiannis Karavias (University of Birmingham); Joakim Westerlund (Lund University; Deakin University) |
Abstract: | Economists are concerned about the many recent disruptive events such as the 2007-2008 global financial crisis and the 2020 COVID-19 outbreak, and their likely effect on economic relationships. The fear is that the relationships might have changed, which has implications for both estimation and policymaking. Motivated by this last observation, the present paper develops a new toolbox for multiple structural break detection in panel data models with interactive effects. The toolbox includes several tests for the presence of structural breaks, a break date estimator, and a break date confidence interval. The new toolbox is applied to a large panel data set covering 3,557 US banks between 2005 and 2021, a period characterized by a number of massive quantitative easing programs to lessen the impact of the global financial crisis and the COVID-19 pandemic. The question we ask is: Have these programs been successful in spurring bank lending in the US economy? The short answer turns out to be: ``No''. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.06707&r=ecm |
By: | Andrzej Kocięcki (University of Warsaw, Faculty of Economic Sciences); Marcin Kolasa (SGH Warsaw School of Economics; International Monetary Fund) |
Abstract: | We develop an analytical framework to study global identification in structural models with forward-looking expectations. Our identification condition combines the similarity transformation linking the observationally equivalent state space systems with the constraints imposed on them by the model parameters. The key step of solving the identification problem then reduces to finding all roots of a system of polynomial equations. We show how it can be done using the concept of a Gröbner basis and recently developed algorithms to compute it analytically. In contrast to papers relying on numerical search, our approach can effectively prove whether a model is identified or not at the given parameter point, explicitly delivering the complete set of observationally equivalent parameter vectors. We present the solution to the global identification problem for several popular DSGE models. Our findings indicate that observational equivalence in medium-sized models of this class might be actually not as widespread as suggested by earlier, small model-based evidence. |
Keywords: | global identification, state space systems, DSGE models, Gröbner basis |
JEL: | C10 C51 C65 E32 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2022-01&r=ecm |
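The role of the Gröbner basis can be seen in a toy system. The sketch below (illustrative equations, not a DSGE restriction set) computes a lexicographic Gröbner basis with SymPy, which triangularizes the system so that all of its finitely many roots, i.e. the candidate observationally equivalent points, can be enumerated exactly.

```python
# Toy illustration of the Groebner-basis step: the basis in lexicographic
# order triangularizes a polynomial system, so all of its (finitely many)
# roots can be enumerated exactly. The actual systems in the paper encode
# the restrictions linking observationally equivalent state-space forms.
import sympy as sp

x, y = sp.symbols('x y')
system = [x**2 + y**2 - 5, x*y - 2]          # two polynomial equations

G = sp.groebner(system, x, y, order='lex')
print("Groebner basis:", list(G))            # last element involves y only

roots = sp.solve(system, [x, y], dict=True)  # complete set of solutions
print("all roots:", roots)
```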
By: | Andrey Shternshis; Piero Mazzarisi |
Abstract: | Shannon entropy is the most common metric to measure the degree of randomness of time series in many fields, ranging from physics and finance to medicine and biology. Real-world systems may in general be non-stationary, with an entropy value that is not constant in time. The goal of this paper is to propose a hypothesis testing procedure to test the null hypothesis of constant Shannon entropy for time series, against the alternative of a significant variation of the entropy between two subsequent periods. To this end, we find an unbiased approximation of the variance of the Shannon entropy estimator, up to order $O(n^{-4})$, with $n$ the sample size. In order to characterize the variance of the estimator, we first obtain the explicit formulas of the central moments for both the binomial and the multinomial distributions, which describe the distribution of the Shannon entropy. Second, we find the optimal length of the rolling window used for estimating the time-varying Shannon entropy by optimizing a novel self-consistent criterion based on counting significant variations of entropy within a time window. We corroborate our findings by using the novel methodology to test for time-varying regimes of entropy in stock price dynamics, in particular considering the case of meme stocks in 2020 and 2021. We empirically show the existence of periods of market inefficiency for meme stocks. In particular, sharp increases in prices and trading volumes correspond to statistically significant drops in Shannon entropy. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.05415&r=ecm |
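As background for the testing problem, the sketch below computes the plug-in Shannon entropy of a symbolized series and the standard first-order (order $n^{-1}$) variance approximation, then compares two windows with a z-statistic; the paper's refinement of the variance up to $O(n^{-4})$ and its data-driven window length are not implemented here, and the simulated windows are illustrative.

```python
# Plug-in Shannon entropy of a symbolized series and the standard first-order
# variance approximation, Var(H_hat) ~ (sum p_i log^2 p_i - H^2) / n,
# used to compare two windows with a z-statistic. The paper derives a sharper
# variance expansion (up to O(n^-4)) and an optimal rolling-window length,
# neither of which this sketch implements.
import numpy as np

def entropy_and_var(symbols, n_categories):
    n = len(symbols)
    counts = np.bincount(symbols, minlength=n_categories)
    p = counts[counts > 0] / n
    h = -(p * np.log(p)).sum()
    var = ((p * np.log(p) ** 2).sum() - h ** 2) / n   # first-order approximation
    return h, var

rng = np.random.default_rng(0)
window1 = rng.integers(0, 2, size=2000)               # ~ fully random: H close to log 2
window2 = rng.choice(2, size=2000, p=[0.65, 0.35])    # less random window

h1, v1 = entropy_and_var(window1, 2)
h2, v2 = entropy_and_var(window2, 2)
z = (h1 - h2) / np.sqrt(v1 + v2)
print(f"H1={h1:.4f}, H2={h2:.4f}, z={z:.2f}")         # large |z|: entropy differs
```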
By: | Amoroso, Sara (European Commission, Joint Research Centre); Bruno, Randolph Luca (University College London); Magazzini, Laura (Sant'Anna School of Advanced Studies) |
Abstract: | Recent literature has drawn attention to the estimation of time-invariant variables in both static and dynamic frameworks. In this context, Hausman-Taylor type estimators have been applied, relying crucially on the distinction between exogenous and endogenous variables (in terms of correlation with the time-invariant error component). We show that this requirement can be relaxed, and identification can be achieved by relying on the milder assumption that the correlation between the individual effect and the time-varying regressors is homogeneous over time. The methodology is applied to identify the role of inputs from "Science" (the firm-level stock of publications) in firms' labour productivity, showing that the effect is larger for firms with higher levels of R&D investment. The results further support the dual – direct and indirect – role of R&D. |
Keywords: | panel data, time-invariant variables, science, productivity, R&D |
JEL: | C23 O32 L20 |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp15708&r=ecm |
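One way to see why homogeneity of the correlation over time can replace the exogenous/endogenous split is the following (schematic notation, not the paper's). In $y_{it} = x_{it}'\beta + z_i'\gamma + \alpha_i + \varepsilon_{it}$ with time-invariant $z_i$,
\[
  E[x_{it}\,\alpha_i] = c \ \ \text{for all } t
  \quad\Longrightarrow\quad
  E\bigl[(x_{it} - \bar x_{i\cdot})\,\alpha_i\bigr] = 0,
\]
so deviations of the time-varying regressors from their individual time means are uncorrelated with the individual effect and can be exploited as instruments for identifying $\gamma$, without having to declare particular regressors exogenous or endogenous.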
By: | Storti, Giuseppe; Wang, Chao |
Abstract: | A new multivariate semi-parametric risk forecasting framework is proposed to enable portfolio Value-at-Risk (VaR) and Expected Shortfall (ES) optimization and forecasting. The proposed framework accounts for the dependence structure among asset returns without assuming their distribution. A simulation study is conducted to evaluate the finite sample properties of the employed estimator for the proposed model. An empirically motivated portfolio optimization method, which can be utilized to optimize the portfolio VaR and ES, is developed. A forecasting study at the 2.5% level evaluates the performance of the model in risk forecasting and portfolio optimization, based on the components of the Dow Jones index for the out-of-sample period from December 2016 to September 2021. Compared to the standard models in the literature, the empirical results are favorable for the proposed model class; in particular, the effectiveness of the proposed framework in portfolio risk optimization is demonstrated. |
Keywords: | semi-parametric; Value-at-Risk; Expected Shortfall; multivariate; portfolio optimization. |
JEL: | C14 C32 C51 C58 G17 |
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:115266&r=ecm |
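For reference, the tail measures being forecast and optimized are, at level $\alpha = 0.025$ for a portfolio return $r_t = w'y_t$ (standard definitions; the dependence dynamics are those of the paper):
\[
  \mathrm{VaR}_t^{\alpha} = F_t^{-1}(\alpha), \qquad
  \mathrm{ES}_t^{\alpha} = E\bigl[r_t \mid r_t \le \mathrm{VaR}_t^{\alpha}\bigr],
\]
where $F_t$ is the conditional distribution of $r_t$; portfolio optimization then chooses the weights $w$ to make these tail measures as favorable as possible subject to the usual portfolio constraints, with the semi-parametric model supplying the joint dynamics of $y_t$ without a full distributional assumption.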
By: | Clemens Possnig; Andreea Rotărescu; Kyungchul Song |
Abstract: | Spillover of economic outcomes often arises over multiple networks, and distinguishing their separate roles is important in empirical research. For example, the direction of spillover between two groups (such as banks and industrial sectors linked in a bipartite graph) has important economic implications, and a researcher may want to learn which direction is supported in the data. For this, we need to have an empirical methodology that allows for both directions of spillover simultaneously. In this paper, we develop a dynamic linear panel model and asymptotic inference with large $n$ and small $T$, where both directions of spillover are accommodated through multiple networks. Using the methodology developed here, we perform an empirical study of spillovers between bank weakness and zombie-firm congestion in industrial sectors, using firm-bank matched data from Spain between 2005 and 2012. Overall, we find that there is positive spillover in both directions between banks and sectors. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.08995&r=ecm |
By: | Thanh Trung Huynh; Minh Hieu Nguyen; Thanh Tam Nguyen; Phi Le Nguyen; Matthias Weidlich; Quoc Viet Hung Nguyen; Karl Aberer |
Abstract: | Advances in deep neural network (DNN) architectures have enabled new prediction techniques for stock market data. Unlike other multivariate time-series data, stock markets show two unique characteristics: (i) \emph{multi-order dynamics}, as stock prices are affected by strong non-pairwise correlations (e.g., within the same industry); and (ii) \emph{internal dynamics}, as each individual stock shows some particular behaviour. Recent DNN-based methods capture multi-order dynamics using hypergraphs, but rely on the Fourier basis in the convolution, which is both inefficient and ineffective. In addition, they largely ignore internal dynamics by adopting the same model for each stock, which implies a severe information loss. In this paper, we propose a framework for stock movement prediction to overcome the above issues. Specifically, the framework includes temporal generative filters that implement a memory-based mechanism onto an LSTM network in an attempt to learn individual patterns per stock. Moreover, we employ hypergraph attentions to capture the non-pairwise correlations. Here, using the wavelet basis instead of the Fourier basis enables us to simplify the message passing and focus on the localized convolution. Experiments with US market data over six years show that our framework outperforms state-of-the-art methods in terms of profit and stability. Our source code and data are available at \url{https://github.com/thanhtrunghuynh93/estimate}. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.07400&r=ecm |
By: | Youru Li; Zhenfeng Zhu; Xiaobo Guo; Shaoshuai Li; Yuchen Yang; Yao Zhao |
Abstract: | Risk prediction, as a typical time series modeling problem, is usually achieved by learning trends in markers or historical behavior from sequence data, and has been widely applied in healthcare and finance. In recent years, deep learning models, especially Long Short-Term Memory neural networks (LSTMs), have led to superior performance in such sequence representation learning tasks. Although some attention or self-attention based models with time-aware or feature-aware enhanced strategies have achieved better performance than other temporal modeling methods, such improvement is limited due to a lack of guidance from a global view. To address this issue, we propose a novel end-to-end Hierarchical Global View-guided (HGV) sequence representation learning framework. Specifically, the Global Graph Embedding (GGE) module is proposed to learn sequential clip-aware representations from a temporal correlation graph at the instance level. Furthermore, following the way of key-query attention, the harmonic $\beta$-attention ($\beta$-Attn) is also developed to make a global trade-off between time-aware decay and observation significance at the channel level adaptively. Moreover, the hierarchical representations at both the instance level and the channel level can be coordinated by heterogeneous information aggregation under the guidance of the global view. Experimental results on a benchmark dataset for healthcare risk prediction, and on a real-world industrial scenario for Small and Mid-size Enterprises (SMEs) credit overdue risk prediction at MYBank, Ant Group, illustrate that the proposed model can achieve competitive prediction performance compared with other known baselines. |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2211.07956&r=ecm |