on Econometrics |
By: | Kylie-Anne Richards (Finance Discipline Group, UTS Business School, University of Technology Sydney); William T. M. Dunsmuir; Gareth W. Peters |
Abstract: | A score statistic for detecting the impact of marks in a linear Hawkes self-exciting point process is proposed; its asymptotic properties, finite-sample performance, power properties via simulation, and an application to real data are presented. A major advantage of the proposed inference procedure is that the Hawkes process can be fitted under the null hypothesis that marks do not impact the intensity process. Hence, for a given record of a point process, the intensity process is estimated once only and then assessed against any number of potential marks without refitting the joint likelihood each time. Marks can be multivariate as well as serially dependent. The score function for any given set of marks is easily constructed as the covariance of functions of future intensities, fit to the unmarked process, with functions of the marks under assessment. The asymptotic distribution of the score statistic is chi-squared, with degrees of freedom equal to the number of parameters required to specify the boost function. Model-based or non-parametric estimation of the required features of the marks' marginal moments and serial dependence can be used. The use of sample moments of the marks in the test statistic construction does not impact the size and power properties. |
Keywords: | Marked Hawkes point process; Score test statistic; Screening marks; High frequency financial data |
JEL: | C10 C15 C12 |
Date: | 2019–05–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:405&r=all |
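The covariance-based score construction described above can be illustrated with a toy sketch. Everything here (the exponential-decay intensity, the centred "boost" function of the marks, and especially the crude variance proxy) is a hypothetical simplification for intuition, not the authors' actual test:

```python
import numpy as np

def hawkes_intensity(times, mu, alpha, beta, t):
    """Exponential-kernel Hawkes intensity at time t (unmarked model)."""
    past = times[times < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def mark_score_statistic(times, marks, mu, alpha, beta):
    """Toy score-type statistic: squared sample covariance between a
    centred function of the marks and a centred function of the fitted
    unmarked intensity, scaled by a crude variance proxy.  Approximately
    chi-squared with 1 degree of freedom under the null in this
    one-parameter illustration."""
    lam = np.array([hawkes_intensity(times, mu, alpha, beta, t) for t in times])
    g = marks - marks.mean()          # centred boost function of the marks
    h = lam - lam.mean()              # centred function of the intensity
    n = len(times)
    cov = (g * h).mean()
    var = (g**2).mean() * (h**2).mean() / n   # crude variance proxy (assumption)
    return cov**2 / var
```

Under the null of no mark impact, a statistic of this type would be compared with a chi-squared critical value (e.g. 3.84 for one degree of freedom at the 5% level); note the key practical point from the abstract is that the intensity is fitted once and re-used for every candidate mark.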
By: | Francesca Rossi (Department of Economics (University of Verona)); Peter M. Robinson (London School of Economics) |
Abstract: | We develop refined inference for spatial regression models with predetermined regressors. The ordinary least squares estimate of the spatial parameter is neither consistent nor asymptotically normal, unless the elements of the spatial weight matrix uniformly vanish as sample size diverges. We develop refined testing of the hypothesis of no spatial dependence, without requiring negligibility of spatial weights, by formal Edgeworth expansions. We also develop higher-order expansions for both an unstudentized and a studentized transformed estimator, where the studentized one can be used to provide refined interval estimates. A Monte Carlo study of finite sample performance is included. |
Keywords: | Spatial autoregression; least squares estimation; higher-order inference; Edgeworth expansion; testing spatial independence. |
JEL: | C12 C13 C21 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:ver:wpaper:04/2020&r=all |
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Ying Wang (The University of Auckland) |
Abstract: | Limit distribution theory in the econometric literature for functional coefficient cointegrating (FCC) regression is shown to be incorrect in important ways, influencing rates of convergence, distributional properties, and practical work. In FCC regression the cointegrating coefficient vector \beta(.) is a function of a covariate z_t. The true limit distribution of the local level kernel estimator of \beta(.) is shown to have multiple forms, each form depending on the bandwidth rate in relation to the sample size n, and with an optimal convergence rate of n^{3/4} which is achieved by letting the bandwidth have order 1/n^{1/2} when z_t is scalar. Unlike stationary regression and contrary to the existing literature on FCC regression, the correct limit theory reveals that component elements from the bias and variance terms in the kernel regression can both contribute to variability in the asymptotics depending on the bandwidth behavior in relation to the sample size. The trade-off between bias and variance that is a common feature of kernel regression consequently takes a different and more complex form in FCC regression whereby balance is achieved via the dual source of variation in the limit with an associated common convergence rate. The error in the literature arises because the random variability of the bias term has been neglected in earlier research. In stationary regression this random variability is of smaller order and can correctly be neglected in asymptotic analysis but with consequences for finite sample performance. In nonstationary regression, variability typically has larger order due to the nonstationary regressor and its omission leads to deficiencies and partial failure in the asymptotics reported in the literature. Existing results are shown to hold only in scalar covariate FCC regression and only when the bandwidth has order larger than 1/n and smaller than 1/n^{1/2}.
The correct results in cases of a multivariate covariate z_t are substantially more complex and are not covered by any existing theory. Implications of the findings for inference, confidence interval construction, bandwidth selection, and stability testing for the functional coefficient are discussed. A novel self-normalized t-ratio statistic is developed which is robust with respect to bandwidth order and persistence in the regressor, enabling improved testing and confidence interval construction. Simulations show superior performance of this robust statistic that corroborate the finite sample relevance of the new limit theory in both stationary and nonstationary regressions. |
Keywords: | Bandwidth selection, Bias variability, Functional coefficient cointegration, Kernel regression, Nonstationarity, Robust inference, Sandwich matrix |
JEL: | C14 C22 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2250&r=all |
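The local-level kernel estimator of \beta(.) discussed above has a simple closed form as a ratio of kernel-weighted cross-moments. The sketch below (Gaussian kernel, hand-picked bandwidth, all variable names hypothetical) is only meant to show the object whose limit theory the paper corrects:

```python
import numpy as np

def beta_hat(z0, y, x, z, h):
    """Local-level kernel estimate of beta(z0) in y_t = beta(z_t) x_t + u_t:
    ratio of kernel-weighted moments sum(K_t x_t y_t) / sum(K_t x_t^2)."""
    k = np.exp(-0.5 * ((z - z0) / h) ** 2)   # Gaussian kernel weights
    return (k * x * y).sum() / (k * x * x).sum()
```

In a simulation with a unit-root regressor x_t and stationary covariate z_t, beta_hat(z0, ...) recovers beta(z0) up to the bias/variance effects the abstract describes; the bandwidth choice here is illustrative, not the optimal 1/n^{1/2} rate.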
By: | Ke Miao (School of Economics, Fudan University); Peter C.B. Phillips (Cowles Foundation, Yale University); Liangjun Su (School of Economics and Management, Tsinghua University) |
Abstract: | This paper studies high-dimensional vector autoregressions (VARs) augmented with common factors that allow for strong cross-section dependence. Models of this type provide a convenient mechanism for accommodating the interconnectedness and temporal co-variability that are often present in large dimensional systems. We propose an ℓ1-nuclear-norm regularized estimator and derive non-asymptotic upper bounds for the estimation errors as well as large sample asymptotics for the estimates. A singular value thresholding procedure is used to determine the correct number of factors with probability approaching one. Both the LASSO estimator and the conservative LASSO estimator are employed to improve estimation precision. The conservative LASSO estimates of the non-zero coefficients are shown to be asymptotically equivalent to the oracle least squares estimates. Simulations demonstrate that our estimators perform reasonably well in finite samples given the complex high dimensional nature of the model with multiple unobserved components. In an empirical illustration we apply the methodology to explore the dynamic connectedness in the volatilities of financial asset prices and the transmission of investor fear. The findings reveal that a large proportion of connectedness is due to common factors. Conditional on the presence of these common factors, the results still document remarkable connectedness due to the interactions between the individual variables, thereby supporting a common factor augmented VAR specification. |
Keywords: | Common factors, Connectedness, Cross-sectional dependence, Investor fear, High-dimensional VAR, Nuclear-norm regularization |
JEL: | C13 C33 C38 C51 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2252&r=all |
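A singular value thresholding step of the kind mentioned for factor-number selection can be sketched as follows. The threshold scaling `c * sqrt(max(n, T))` is an illustrative assumption, not the paper's tuning rule:

```python
import numpy as np

def estimate_rank(Y, c=1.0):
    """Estimate the number of factors in an n x T panel by counting
    singular values above a threshold that grows with the panel
    dimensions (hypothetical scaling for illustration)."""
    n, T = Y.shape
    s = np.linalg.svd(Y, compute_uv=False)
    thresh = c * np.sqrt(max(n, T))
    return int((s > thresh).sum())
```

With strong factors, the leading singular values grow like sqrt(nT) while the noise singular values grow like sqrt(n) + sqrt(T), which is what makes a threshold of this order separate signal from noise.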
By: | Yang, Bill Huajian; Yang, Jenny; Yang, Haoji |
Abstract: | Models for a continuous risk outcome have wide application in portfolio risk management and capital allocation. We introduce a family of interval distributions based on variable transformations. Densities for these distributions are provided. Models with a random effect, targeting a continuous risk outcome, can then be fitted by maximum likelihood approaches assuming an interval distribution. Given the fixed effects, the regression function can be estimated and derived accordingly when required. This provides an alternative regression tool to the fraction response model and the Beta regression model. |
Keywords: | Interval distribution, model with a random effect, tailed index, expected shortfall, heteroscedasticity, Beta regression model, fraction response model, maximum likelihood. |
JEL: | C0 C01 C02 C5 C51 C53 C6 C61 C8 |
Date: | 2020–07–20 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:102219&r=all |
By: | Yuya Sasaki; Takuya Ura; Yichong Zhang |
Abstract: | Credible counterfactual analysis requires high-dimensional controls. This paper considers estimation and inference for heterogeneous counterfactual effects with high-dimensional data. We propose a novel doubly robust score for double/debiased estimation and inference for the unconditional quantile regression (Firpo, Fortin, and Lemieux, 2009) as a measure of heterogeneous counterfactual marginal effects. We propose a multiplier bootstrap inference for the Lasso double/debiased estimator, and develop asymptotic theories to guarantee that the bootstrap works. Simulation studies support our theories. Applying the proposed method to Job Corps survey data, we find that i) marginal effects of counterfactually extending the duration of the exposure to the Job Corps program are globally positive across quantiles regardless of definitions of the treatment and outcome variables, and that ii) these counterfactual effects are larger for higher potential earners than lower potential earners regardless of whether we define the outcome as the level or its logarithm. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.13659&r=all |
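The unconditional quantile regression of Firpo, Fortin, and Lemieux (2009) referenced above builds on the recentered influence function (RIF) of a quantile, RIF(y) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau). A minimal sketch, with f(q_tau) estimated by a Gaussian kernel density and Silverman's rule for the bandwidth (both illustrative choices):

```python
import numpy as np

def rif_quantile(y, tau, bandwidth=None):
    """Recentered influence function of the tau-th unconditional quantile:
    RIF = q + (tau - 1{y <= q}) / f(q), with f(q) estimated by a
    Gaussian kernel density evaluated at the sample quantile."""
    q = np.quantile(y, tau)
    n = len(y)
    h = bandwidth or 1.06 * y.std() * n ** (-1 / 5)   # Silverman's rule of thumb
    f_q = np.exp(-0.5 * ((y - q) / h) ** 2).sum() / (n * h * np.sqrt(2 * np.pi))
    return q + (tau - (y <= q)) / f_q
```

Regressing this RIF on covariates (with high-dimensional controls handled by the paper's doubly robust score and Lasso machinery) yields the heterogeneous counterfactual marginal effects; a basic sanity property is that the RIF averages back to the quantile itself.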
By: | Mathur, Maya B; VanderWeele, Tyler |
Abstract: | Meta-regression analyses usually focus on estimating and testing differences in average effect sizes between individual levels of each meta-regressive effect modifier in turn. These metrics are useful but have limitations: they consider each effect modifier individually, rather than in combination, and they characterize only the mean of a potentially heterogeneous distribution of effects. We propose additional metrics that address both limitations. Given a chosen threshold representing a meaningfully strong effect size, these metrics address the questions: (1) “For a given joint level of the effect modifiers, what percentage of the population effects are meaningfully strong?” and (2) “For any two joint levels of the effect modifiers, what is the difference between these percentages of meaningfully strong effects?” We provide nonparametric methods for estimation and inference and validate their performance in a simulation study. We apply the proposed methods to a meta-regression on memory consolidation, illustrating how the methods can provide more information than standard reporting alone. The methods are straightforward to implement in practice, and we provide simple example code in R to do so. |
Date: | 2020–07–08 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:bmtdq&r=all |
By: | Kwok, Simon |
Abstract: | Understanding the jump dynamics of market prices is important for asset pricing and risk management. Despite their analytical tractability, parametric models may impose unrealistic restrictions on the temporal dependence structure of jumps. In this paper, we introduce a nonparametric inference procedure for the presence of jump autocorrelation in the DGP. Our toolkit includes (i) an omnibus test that jointly detects the autocorrelation of stationary jumps over all lags, and (ii) a jump autocorrelogram that enables visualization and pointwise inference of jump autocorrelation. We establish asymptotic normality and local power of our procedure for a rich set of local alternatives (e.g., self-exciting and/or self-inhibitory jumps). Under a unified framework that combines infill and long-span asymptotics, the joint test for jump autocorrelations is robust to the choice of sampling scheme and different degrees of jump activity. A simulation study confirms its robustness property and reveals its competitive size and power performance relative to existing tests. In an empirical study on high-frequency stock returns, our procedure uncovers a wide array of jump autocorrelation profiles for different stocks in different time periods. |
Keywords: | jump autocorrelation, self-excited jumps, nonparametric inference, financial contagion, high-frequency returns |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:syd:wpaper:2020-09&r=all |
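A simplified stand-in for a jump autocorrelogram: flag jumps by thresholding returns against a robust scale estimate, then correlate the jump indicator with its own lags. The threshold rule and the indicator-based statistic are illustrative choices, not the paper's estimator:

```python
import numpy as np

def jump_autocorrelogram(returns, max_lag=10, k=4.0):
    """Flag jumps as returns exceeding k times a robust scale estimate
    (MAD-based sigma), then compute the autocorrelation of the centred
    jump indicator at lags 1..max_lag."""
    scale = np.median(np.abs(returns)) / 0.6745   # robust sigma via the MAD
    j = (np.abs(returns) > k * scale).astype(float)
    j = j - j.mean()                              # centre the indicator
    denom = (j * j).sum()
    return np.array([(j[l:] * j[:-l]).sum() / denom
                     for l in range(1, max_lag + 1)])
```

For i.i.d. returns the resulting autocorrelations should hover near zero; persistent clustering of the flagged jumps (self-excitation) would show up as positive values at short lags.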
By: | Dzemski, Andreas (Department of Economics, School of Business, Economics and Law, Göteborg University); Okui, Ryo (Department of Economics, School of Business, Economics and Law, Göteborg University) |
Abstract: | We study k-means clustering estimation of panel data models with a latent group structure and N units and T time periods under long panel asymptotics. We show that the group-specific coefficients can be estimated at the parametric root-NT rate even if error variances diverge as T → ∞ and some units are asymptotically misclassified. This limit case approximates empirically relevant settings and is not covered by existing asymptotic results. |
Keywords: | Panel data; latent grouped structure; clustering; k-means; convergence rate; misclassification |
JEL: | C23 C33 C38 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:hhs:gunwpe:0790&r=all |
By: | Chen, Yunxiao; Li, Xiaoou; Zhang, Siliang |
Abstract: | Latent factor models are widely used to measure unobserved latent traits in social and behavioral sciences, including psychology, education, and marketing. When used in a confirmatory manner, design information is incorporated as zero constraints on corresponding parameters, yielding structured (confirmatory) latent factor models. In this paper, we study how such design information affects the identifiability and the estimation of a structured latent factor model. Insights are gained through both asymptotic and non-asymptotic analyses. Our asymptotic results are established under a regime where both the number of manifest variables and the sample size diverge, motivated by applications to large-scale data. Under this regime, we define the structural identifiability of the latent factors and establish necessary and sufficient conditions that ensure structural identifiability. In addition, we propose an estimator which is shown to be consistent and rate optimal when structural identifiability holds. Finally, a non-asymptotic error bound is derived for this estimator, through which the effect of design information is further quantified. Our results shed light on the design of large-scale measurement in education and psychology and have important implications for measurement validity and reliability. |
Keywords: | High-dimensional latent factor model; confirmatory factor analysis; identifiability of latent factors; structured low-rank matrix; large-scale psychological measurement |
JEL: | C1 |
Date: | 2019–07–22 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:101122&r=all |
By: | Abhimanyu Gupta; Javier Hidalgo |
Abstract: | We describe a nonparametric prediction algorithm for spatial data. The algorithm is based on a flexible exponential representation of the model characterized via the spectral density function. We provide theoretical results demonstrating that our predictors have desired asymptotic properties. Finite sample performance is assessed in a Monte Carlo study that also compares our algorithm to a rival nonparametric method based on the infinite AR representation of the dynamics of the data. We apply our method to a real data set in an empirical example that predicts house prices in Los Angeles. |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.04269&r=all |
By: | José Luis Montiel Olea; Mikkel Plagborg-Møller |
Abstract: | Applied macroeconomists often compute confidence intervals for impulse responses using local projections, i.e., direct linear regressions of future outcomes on current covariates. This paper proves that local projection inference robustly handles two issues that commonly arise in applications: highly persistent data and the estimation of impulse responses at long horizons. We consider local projections that control for lags of the variables in the regression. We show that lag-augmented local projections with normal critical values are asymptotically valid uniformly over (i) both stationary and non-stationary data, and also over (ii) a wide range of response horizons. Moreover, lag augmentation obviates the need to correct standard errors for serial correlation in the regression residuals. Hence, local projection inference is arguably both simpler than previously thought and more robust than standard autoregressive inference, whose validity is known to depend sensitively on the persistence of the data and on the length of the horizon. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.13888&r=all |
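A lag-augmented local projection of the kind analyzed above is just an OLS regression of a future outcome on the current shock variable plus lags of both series. A minimal sketch (the variable layout and single-lag choice are illustrative):

```python
import numpy as np

def local_projection_irf(y, x, horizon, lags=1):
    """Lag-augmented local projection: regress y_{t+h} on a constant,
    x_t, and `lags` lags of both y and x.  The OLS coefficient on x_t
    is the estimated impulse response at horizon h."""
    T = len(y)
    rows, rhs = [], []
    for t in range(lags, T - horizon):
        row = [1.0, x[t]]
        for l in range(1, lags + 1):
            row += [y[t - l], x[t - l]]   # lag augmentation
        rows.append(row)
        rhs.append(y[t + horizon])
    X, Y = np.array(rows), np.array(rhs)
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    return beta[1]                        # coefficient on x_t
```

For y_t = 0.5 y_{t-1} + x_t + e_t with i.i.d. x_t, the true response of y_{t+1} to x_t is 0.5; the point of the paper is that with the lag augmentation, plain normal critical values on this coefficient remain valid even for persistent data and long horizons.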
By: | Simon Clinet; William T. M. Dunsmuir; Gareth W. Peters; Kylie-Anne Richards (Finance Discipline Group, UTS Business School, University of Technology Sydney) |
Abstract: | The asymptotic distribution of the score test of the null hypothesis that marks do not impact the intensity of a Hawkes marked self-exciting point process is shown to be chi-squared. For local asymptotic power, the distribution against local alternatives is also established as non-central chi-squared. These asymptotic results are derived using existing asymptotic results for likelihood estimates of the unmarked Hawkes process model together with mild additional conditions on the moments and ergodicity of the marks process and an additional uniform boundedness assumption, shown to be true for the exponential decay Hawkes process. |
Keywords: | Marked Hawkes point process; Ergodicity; Quasi likelihood; Score test; Inferential statistics; Local power |
Date: | 2019–05–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:404&r=all |
By: | Tetsuya Kaji; Elena Manresa; Guillaume Pouliot |
Abstract: | We propose a new simulation-based estimation method, adversarial estimation, for structural models. The estimator is formulated as the solution to a minimax problem between a generator (which generates synthetic observations using the structural model) and a discriminator (which classifies whether an observation is synthetic). The discriminator maximizes the accuracy of its classification while the generator minimizes it. We show that, with a sufficiently rich discriminator, the adversarial estimator attains parametric efficiency under correct specification and the parametric rate under misspecification. We advocate the use of a neural network as a discriminator that can exploit adaptivity properties and attain fast rates of convergence. We apply our method to a model of the elderly's saving decisions and show that including gender and health profiles in the discriminator uncovers the bequest motive as an important source of saving across the wealth distribution, not only for the rich. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.06169&r=all |
By: | Glocker, Christian; Kaniovski, Serguei |
Abstract: | We propose a modeling approach involving a series of small-scale dynamic factor models. They are connected to each other within a cluster, whose linkages are derived from Granger-causality tests. This approach merges the benefits of large-scale macroeconomic and small-scale factor models, rendering our Cluster of Dynamic Factor Models (CDFM) useful for model-consistent nowcasting and forecasting on a larger scale. While the CDFM has a simple structure and is easy to replicate, its forecasts are more precise than those of a wide range of competing models and those of professional forecasters. Moreover, the CDFM allows forecasters to introduce their own judgment and hence produce conditional forecasts. |
Keywords: | Forecasting, Dynamic factor model, Granger causality, Structural modeling |
JEL: | C22 C53 C55 E37 |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:101874&r=all |
By: | Chen, Yunxiao; Zhang, Siliang |
Abstract: | Intensive longitudinal studies are becoming progressively more prevalent across many social science areas, and especially in psychology. New technologies such as smartphones, fitness trackers, and the Internet of Things make it much easier than in the past to collect data for intensive longitudinal studies, providing an opportunity to look deep into the underlying characteristics of individuals under a high temporal resolution. In this paper we introduce a new modelling framework for latent curve analysis that is more suitable for the analysis of intensive longitudinal data than existing latent curve models. Specifically, through the modelling of an individual-specific continuous-time latent process, some unique features of intensive longitudinal data are better captured, including intensive measurements in time and unequally spaced time points of observations. Technically, the continuous-time latent process is modelled by a Gaussian process model. This model can be regarded as a semi-parametric extension of the classical latent curve models and falls under the framework of structural equation modelling. Procedures for parameter estimation and statistical inference are provided under an empirical Bayes framework and evaluated by simulation studies. We illustrate the use of the proposed model through the analysis of an ecological momentary assessment data set. |
Keywords: | Gaussian process; latent curve analysis; structural equation modelling; intensive longitudinal data; ecological momentary assessment; time-varying latent trait |
JEL: | C1 |
Date: | 2020–05–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:101121&r=all |
By: | Federico A. Bugni; Jia Li |
Abstract: | We propose using a permutation test to detect discontinuities in an underlying economic model at a cutoff point. Relative to the existing literature, we show that this test is well suited for event studies based on time-series data. The test statistic measures the distance between the empirical distribution functions of observed data in two local subsamples on the two sides of the cutoff. Critical values are computed via a standard permutation algorithm. Under a high-level condition that the observed data can be coupled by a collection of conditionally independent variables, we establish the asymptotic validity of the permutation test, allowing the sizes of the local subsamples either to be fixed or to grow to infinity. In the latter case, we also establish that the permutation test is consistent. We demonstrate that our high-level condition can be verified in a broad range of problems in the infill asymptotic time-series setting, which justifies using the permutation test to detect jumps in economic variables such as volatility, trading activity, and liquidity. An empirical illustration on a recent sample of daily S&P 500 returns is provided. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.09837&r=all |
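The mechanics of such a permutation test can be sketched directly: compare the empirical CDFs of the two local subsamples with a Kolmogorov-Smirnov-type distance, then recompute the statistic under random relabelling of the pooled sample. The specific distance and permutation count here are illustrative choices:

```python
import numpy as np

def permutation_cdf_test(left, right, n_perm=500, seed=0):
    """Permutation test for a break at a cutoff: the statistic is the
    maximum distance between the empirical CDFs of the two local
    subsamples; the permutation distribution yields the p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([left, right])
    n = len(left)

    def ks_stat(a, b):
        grid = np.sort(pooled)
        Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
        Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
        return np.abs(Fa - Fb).max()

    obs = ks_stat(left, right)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if ks_stat(perm[:n], perm[n:]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)   # permutation p-value
```

A convenient feature, mirroring the abstract, is that validity comes from the exchangeability-style coupling condition rather than from asymptotic normality, so the same recipe applies whether the local subsamples are small or large.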
By: | Ruijun Bu; Jihyun Kim; Bin Wang |
Abstract: | We obtain the uniform convergence rates of nonparametric kernel estimators of the local time, the drift and volatility functions, as well as their derivatives, of discretely sampled diffusion processes. Moreover, modified kernel estimators of the drift and volatility functions are considered and their Lp convergence rates are obtained. Our asymptotic results apply to recurrent diffusions, which include both stationary and nonstationary cases. Our sampling scheme is two-dimensional, with the sampling interval shrinking to zero and the time span increasing to infinity jointly. |
Keywords: | recurrent, diffusion, kernel, uniform convergence, Lp convergence. |
JEL: | C14 C22 C58 |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:liv:livedp:202021&r=all |
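The drift and (squared) volatility functions of a discretely sampled diffusion admit simple Nadaraya-Watson estimators based on conditional moments of the increments, mu(x) ≈ E[ΔX | X = x]/Δ and sigma²(x) ≈ E[(ΔX)² | X = x]/Δ. A sketch under illustrative choices of kernel and bandwidth (not the modified estimators of the paper):

```python
import numpy as np

def nw_drift_vol(x, delta, grid, h):
    """Nadaraya-Watson kernel estimators of the drift mu(.) and squared
    volatility sigma^2(.) of a diffusion observed at sampling interval
    delta, evaluated at the points in `grid` (Gaussian kernel)."""
    dx = np.diff(x)
    xl = x[:-1]
    mu, sig2 = [], []
    for g in grid:
        w = np.exp(-0.5 * ((xl - g) / h) ** 2)       # kernel weights
        mu.append((w * dx).sum() / (w.sum() * delta))
        sig2.append((w * dx**2).sum() / (w.sum() * delta))
    return np.array(mu), np.array(sig2)
```

The paper's two-dimensional sampling scheme corresponds to shrinking `delta` while the observed time span `len(x) * delta` grows, which is what drives the uniform and Lp rates.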
By: | Shuping Shi (Macquarie University); Peter C.B. Phillips (Cowles Foundation, Yale University) |
Abstract: | Housing fever is a popular term to describe an overheated housing market or housing price bubble. Like other financial asset bubbles, housing fever can inflict harm on the real economy, as indeed the US housing bubble did in the period following 2006 leading up to the general financial crisis and great recession. One contribution that econometricians can make to minimize the harm created by a housing bubble is to provide a quantitative 'thermometer' for diagnosing ongoing housing fever. Early diagnosis can enable prompt and effective policy action that reduces long-term damage to the real economy. This paper provides a selective review of the relevant literature on econometric methods for identifying housing bubbles together with some new methods of research and an empirical application. We first present a technical definition of a housing bubble that facilitates empirical work and discuss significant difficulties encountered in practical work and the solutions that have been proposed in the past literature. A major challenge in all econometric identification procedures is to assess prices in relation to fundamentals, which requires measurement of fundamentals. One solution to address this challenge is to estimate the fundamental component from an underlying structural relationship involving measurable variables. A second aim of the paper is to improve the estimation accuracy of fundamentals by means of an easy-to-implement reduced-form approach. Since many of the relevant variables that determine fundamentals are nonstationary and interdependent, we use the IVX (Phillips and Magdalinos, 2009) method to estimate the reduced-form model to reduce the finite sample bias which arises from highly persistent regressors and endogeneity. The recursive evolving test of Phillips, Shi and Yu (2015) is applied to the estimated non-fundamental component for the identification of speculative bubbles. The new bubble test developed here is referred to as PSY-IVX.
An empirical application to the eight Australian capital city housing markets over the period 1999 to 2017 shows that bubble testing results are sensitive to different ways of controlling for fundamentals and highlights the importance of accurate estimation of these housing market fundamentals. |
Keywords: | Housing bubbles, Periodically collapsing, Unobservable, Fundamentals, Explosive, IVX, Australian housing markets |
JEL: | C12 C13 C58 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2248&r=all |
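Recursive right-tailed unit-root testing of the kind underlying the PSY procedure can be sketched as a sup of forward-expanding ADF t-statistics. This toy version omits the rolling start dates, lag augmentation, and critical-value calculations of the actual recursive evolving test:

```python
import numpy as np

def adf_stat(y):
    """t-statistic on rho in the regression dy_t = a + rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ beta
    s2 = (e @ e) / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sup_adf(y, min_window=20):
    """SADF-type statistic: sup of ADF t-stats over expanding samples.
    Large positive values signal explosive (bubble-like) behaviour."""
    return max(adf_stat(y[:k]) for k in range(min_window, len(y) + 1))
```

For a pure random walk this sup statistic stays moderate, while a mildly explosive segment (rho > 0) pushes it sharply positive, which is the basis for date-stamping bubble origination.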
By: | Ye Chen (Singapore Management University); Peter C.B. Phillips (Cowles Foundation, Yale University); Shuping Shi (Macquarie University) |
Abstract: | Price bubbles in multiple assets are sometimes nearly coincident in occurrence. Such near-coincidence is strongly suggestive of co-movement in the associated asset prices and likely driven by certain factors that are latent in the financial or economic system with common effects across several markets. Can we detect the presence of such common factors at the early stages of their emergence? To answer this question, we build a factor model that includes I(1), mildly explosive, and stationary factors to capture normal, exuberant, and collapsing phases in such phenomena. The I(1) factor models the primary driving force of market fundamentals. The explosive and stationary factors model latent forces that underlie the formation and destruction of asset price bubbles, which typically exist only for subperiods of the sample. The paper provides an algorithm for testing the presence of and date-stamping the origination and termination of price bubbles determined by latent factors in a large-dimensional system embodying many markets. Asymptotics of the bubble test statistic are given under the null of no common bubbles and the alternative of a common bubble across these markets. We prove consistency of a factor bubble detection process for the origination and termination dates of the common bubble. Simulations show good finite sample performance of the testing algorithm in terms of its successful detection rates. Our methods are applied to real estate markets covering 89 major cities in China over the period January 2003 to March 2013. Results suggest the presence of three common bubble episodes in what are known as China's Tier 1 and Tier 2 cities over the sample period. There appears to be little evidence of a common bubble in Tier 3 cities. |
Keywords: | Common Bubbles, Mildly Explosive Process, Factor Analysis, Date Stamping, Real Estate Market |
JEL: | C12 C13 C58 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2251&r=all |
By: | Ryan Cumings-Menon; Minchul Shin |
Abstract: | We propose probability and density forecast combination methods that are defined using the entropy regularized Wasserstein distance. First, we provide a theoretical characterization of the combined density forecast based on the regularized Wasserstein distance under the Gaussian assumption. Second, we show how this type of regularization can improve the predictive power of the resulting combined density. Third, we provide a method for choosing the tuning parameter that governs the strength of regularization. Lastly, we apply our proposed method to U.S. inflation rate density forecasting, and illustrate how the entropy regularization can improve the quality of the predictive density relative to its unregularized counterpart. |
Keywords: | Entropy regularization; Wasserstein distance; optimal transport; density forecasting; model combination. |
JEL: | C53 E37 |
Date: | 2020–08–06 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:88545&r=all |
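Entropy-regularized Wasserstein distances are typically computed by Sinkhorn iterations. A minimal sketch for two discrete densities on a common grid (the regularization strength eps and iteration count are illustrative, and this is the distance computation only, not the paper's combination method):

```python
import numpy as np

def sinkhorn_distance(p, q, cost, eps=0.1, n_iter=200):
    """Entropy-regularized optimal transport cost between probability
    vectors p and q on a common grid, via Sinkhorn's fixed-point
    iterations on the Gibbs kernel K = exp(-cost/eps)."""
    K = np.exp(-cost / eps)
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)        # match the q-marginal
        u = p / (K @ v)          # match the p-marginal
    plan = u[:, None] * K * v[None, :]
    return (plan * cost).sum()   # transport cost of the entropic plan
```

The entropic term makes the optimal plan diffuse, so the cost is slightly positive even for identical densities; choosing eps trades off that bias against computational stability, which is the tuning-parameter issue the paper addresses in the forecasting context.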
By: | Carlo Campajola; Domenico Di Gangi; Fabrizio Lillo; Daniele Tantari |
Abstract: | We introduce a generalization of the Kinetic Ising Model using the score-driven approach, which allows the efficient estimation and filtering of time-varying parameters from time series data. We show that this approach allows us to overcome systematic errors in the parameter estimation, and is useful for studying complex systems of interacting variables where the strength of the interactions is not constant in time: in particular, we propose to quantify the amount of noise in the data and the reliability of forecasts, as well as to discriminate between periods of higher or lower endogeneity in the observed dynamics, namely when interactions are more or less relevant in determining the realization of the observations. We apply our methodology to three different financial settings to showcase some realistic applications, focusing on forecasting high-frequency volatility of stocks, measuring its endogenous component during extreme events in the market, and analysing the strategic behaviour of traders around news releases. We find interesting results on financial systems and, given the widespread use of Ising models in multiple fields, we believe our approach can be efficiently adapted to a variety of settings, ranging from neuroscience to social sciences and machine learning. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.15545&r=all |
By: | Sokbae Lee; Bernard Salanié |
Abstract: | Multivalued treatments are commonplace in applications. We explore the use of discrete-valued instruments to control for selection bias in this setting. We establish conditions under which counterfactual averages and treatment effects are identified for heterogeneous complier groups. These conditions require a combination of assumptions that restrict both the unobserved heterogeneity in treatment assignment and how the instruments target the treatments. We introduce the concept of filtered treatment, which takes into account limitations in the analyst's information. Finally, we illustrate the usefulness of our framework by applying it to data from the Student Achievement and Retention Project and the Head Start Impact Study. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.10432&r=all |
By: | Leandro De Magalhaes (University of Bristol); Dominik Hangartner (London School of Economics and Political Science); Salomo Hirvonen (University of Bristol); Jaakko Meriläinen (ITAM); Nelson A. Ruiz (University of Oxford; University of Turku) |
Abstract: | Regression discontinuity designs (RDD) are widely used in the social sciences to estimate causal effects from observational data. Scholars can choose from a range of methods that implement different RDD estimators, but there is a paucity of research on the performance of these different estimators in recovering experimental benchmarks. Leveraging exact ties in local elections in Colombia and Finland, which are resolved by random coin toss, we find that RDD estimation using bias-correction and robust inference (CCT) performs better in replicating experimental estimates of the individual incumbency advantage than local linear regression with conventional inference (LLR). We assess the generalizability of our results by estimating incumbency effects across different subsamples, and in other countries. We find that CCT consistently comes closer to the experimental benchmark, produces smaller estimates than LLR, and that incumbency effects are highly heterogeneous, both in magnitude and sign, across countries with similar open-list PR systems. |
Keywords: | Close elections, personal incumbency advantage, regression discontinuity design |
JEL: | C21 C52 D72 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:tkk:dpaper:dp135&r=all |
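As a rough illustration of the LLR side of the comparison above, here is a minimal sharp-RDD estimate via local linear regression with a triangular kernel (conventional estimation, no bias correction). The simulated data, bandwidth, and jump size are assumptions for the sketch, not the paper's electoral data:

```python
import numpy as np

def llr_rdd(x, y, cutoff=0.0, h=0.5):
    """Sharp-RDD effect via local linear regression with a triangular
    kernel on each side of the cutoff -- a simplified stand-in for the
    conventional LLR estimator discussed in the abstract."""
    def side_fit(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        w = np.clip(1.0 - np.abs(xs) / h, 0.0, None)  # triangular kernel
        X = np.column_stack([np.ones_like(xs), xs])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
        return beta[0]  # intercept = fitted value at the cutoff
    return side_fit(x >= cutoff) - side_fit(x < cutoff)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)          # running variable (e.g. vote margin)
tau = 0.4                             # true jump at the cutoff
y = 1.0 + 0.5 * x + tau * (x >= 0) + 0.1 * rng.standard_normal(x.size)
est = llr_rdd(x, y)
```

The CCT estimator favoured by the paper additionally bias-corrects the point estimate and widens the confidence interval to account for the correction.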
By: | Claudia Foroni; Francesco Ravazzolo; Luca Rossini |
Abstract: | We analyse the importance of low-frequency hard and soft macroeconomic information, respectively the industrial production index and the manufacturing Purchasing Managers' Index surveys, for forecasting high-frequency daily electricity prices in two of the main European markets, Germany and Italy. We do so by means of mixed-frequency models, introducing a Bayesian approach to reverse unrestricted MIDAS models (RU-MIDAS). Despite the generally parsimonious structure of standard MIDAS models, the RU-MIDAS has a large set of parameters when several predictors are considered simultaneously, and Bayesian inference is useful for imposing parameter restrictions. We study the forecasting accuracy for different horizons (from $1$ day ahead to $28$ days ahead) and by considering different specifications of the models. Results indicate that the low-frequency macroeconomic variables are more important for short horizons than for longer horizons. Moreover, accuracy increases by combining hard and soft information, and using only surveys gives less accurate forecasts than using only industrial production data. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.13566&r=all |
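A reverse-MIDAS regression simply places the high-frequency variable on the left-hand side. The toy sketch below regresses a daily series on its own lag and the most recently released monthly value, by plain OLS; the data-generating process and coefficients are invented for illustration, and the paper's Bayesian shrinkage over a richer lag structure is not implemented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 300
# Hypothetical monthly indicator, held constant within each month when
# aligned to daily frequency (the "last released value").
monthly = rng.standard_normal(n_days // 30 + 1)
x_daily = np.repeat(monthly, 30)[:n_days]

# Daily target driven by its own lag and the monthly indicator.
y = np.empty(n_days)
y[0] = 0.0
for t in range(1, n_days):
    y[t] = 0.5 * y[t - 1] + 0.3 * x_daily[t] + 0.1 * rng.standard_normal()

# RU-MIDAS-style regression: high-frequency y on its own lag plus the
# low-frequency predictor, estimated here by OLS.
X = np.column_stack([np.ones(n_days - 1), y[:-1], x_daily[1:]])
beta = np.linalg.lstsq(X, y[1:], rcond=None)[0]
```

With many predictors and longer lag polynomials this design matrix grows quickly, which is where the Bayesian parameter restrictions of the paper become useful.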
By: | Andrii Babii; Ryan T. Ball; Eric Ghysels; Jonas Striaukas |
Abstract: | This paper introduces structured machine learning regressions for prediction and nowcasting with panel data consisting of series sampled at different frequencies. Motivated by the empirical problem of predicting corporate earnings for a large cross-section of firms with macroeconomic, financial, and news time series sampled at different frequencies, we focus on the sparse-group LASSO regularization. This type of regularization can take advantage of the mixed-frequency time series panel data structures, and we find that it empirically outperforms unstructured machine learning methods. We obtain oracle inequalities for the pooled and fixed effects sparse-group LASSO panel data estimators, recognizing that financial and economic data exhibit heavier-than-Gaussian tails. To that end, we leverage a novel Fuk-Nagaev concentration inequality for panel data consisting of heavy-tailed $\tau$-mixing processes, which may be of independent interest in other high-dimensional panel data settings. |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.03600&r=all |
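The sparse-group LASSO penalty combines an elementwise $\ell_1$ term with a groupwise $\ell_2$ term. A standard building block of proximal-gradient solvers for it is the per-group proximal operator sketched below (illustrative only; the authors' pooled and fixed-effects panel estimators are not reproduced):

```python
import numpy as np

def prox_sparse_group_lasso(beta, lam1, lam2):
    """Proximal operator of lam1*||b||_1 + lam2*||b||_2 for one group of
    coefficients, the core update in proximal-gradient solvers for the
    sparse-group LASSO."""
    # Elementwise soft-thresholding (the L1 part).
    b = np.sign(beta) * np.maximum(np.abs(beta) - lam1, 0.0)
    # Group-level shrinkage (the L2-norm part): kill the whole group if
    # its thresholded norm is small, otherwise shrink it toward zero.
    norm = np.linalg.norm(b)
    if norm <= lam2:
        return np.zeros_like(b)
    return (1.0 - lam2 / norm) * b

# A weak group is zeroed out entirely; a strong group is merely shrunk.
small = prox_sparse_group_lasso(np.array([0.05, -0.02]), lam1=0.1, lam2=0.1)
big = prox_sparse_group_lasso(np.array([1.0, -2.0]), lam1=0.1, lam2=0.1)
```

Grouping coefficients by frequency or by data source is what lets the penalty exploit the mixed-frequency panel structure described in the abstract.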
By: | Bhattacharjee, A.; Ditzen, J.; Holly, S. |
Abstract: | We provide a way of representing spatial and temporal equilibria in terms of an Engle-Granger representation theorem in a panel setting. We use the mean group, common correlated effects estimator plus multiple testing to provide a set of weak cross-correlations that we treat as spatial weights. We apply this model to the 324 local authorities of England, and show that our approach successfully mops up weak as well as strong cross-sectional correlations. |
Keywords: | Spatio-temporal Engle-Granger Theorems, cross-sectional dependence |
JEL: | C21 C22 C23 R3 |
Date: | 2020–08–03 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:2075&r=all |
By: | Deborah Kim |
Abstract: | We show that the hybrid test for superior predictability is not pointwise asymptotically of level $\alpha$ under standard conditions, and may lead to rejection rates over 11% when the significance level $\alpha$ is 5% in a simple case. We propose a modified hybrid test which is uniformly asymptotically of level $\alpha$ by properly adapting the generalized moment selection method. |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.02318&r=all |