on Econometrics |
By: | Daniel Ackerberg; Garth Frazer; Kyoo il Kim; Yao Luo; Yingjun Su |
Abstract: | We revisit identification based on timing and information set assumptions in structural models, which have been used in the context of production functions, demand equations, and hedonic pricing models (e.g. Olley and Pakes (1996), Blundell and Bond (2000)). First, we demonstrate a general under-identification problem using these assumptions, illustrating this with a simple version of the Blundell-Bond dynamic panel model. In particular, the basic moment conditions can yield multiple discrete solutions: one at the persistence parameter in the main equation and another at the persistence parameter governing the regressor. Second, we propose possible solutions based on sign restrictions and an augmented moment approach. We show the identification of our approach and propose a consistent estimation procedure. Our Monte Carlo simulations illustrate the under-identification issue and the finite sample performance of our proposed estimator. Lastly, we show that the problem persists in many alternative models of the regressor but disappears in some models under stronger assumptions. |
Keywords: | Production Function, Identification, Timing and Information Set Assumptions, Market Persistence Factor, Monte Carlo Simulation |
JEL: | C14 C18 D24 |
Date: | 2020–11–07 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-679&r=all |
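To make the structure concrete, the display below sketches the kind of timing-based moment conditions at issue, written in our own notation for a Blundell-Bond-style model with a persistent unobservable; it is an orientation aid under our assumptions about timing and instruments, not the authors' exact specification.

```latex
% Schematic only: notation, timing, and instrument choice are our assumptions.
\begin{align*}
  y_{it} &= \beta x_{it} + \omega_{it} + \varepsilon_{it},
  \qquad \omega_{it} = \rho\,\omega_{i,t-1} + \xi_{it},\\
  y_{it} - \rho y_{i,t-1}
    &= \beta\,(x_{it} - \rho x_{i,t-1})
     + \xi_{it} + \varepsilon_{it} - \rho\,\varepsilon_{i,t-1},\\
  0 &= \mathbb{E}\!\left[\left(y_{it} - \rho y_{i,t-1}
        - \beta\,(x_{it} - \rho x_{i,t-1})\right) z_{it}\right],
  \qquad z_{it} \in \mathcal{I}_{i,t-1},
\end{align*}
% where the information set I_{i,t-1} contains, for example, x_{it} and
% x_{i,t-1} when the regressor is chosen one period in advance. The
% under-identification result discussed in the abstract is that such moments
% can also hold at a second (rho, beta) pair, with rho equal to the
% persistence parameter of the regressor process.
```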
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Ekaterina Seregina (University of California Riverside) |
Abstract: | Graphical models are a powerful tool for estimating a high-dimensional inverse covariance (precision) matrix and have been applied to the portfolio allocation problem. These models assume that the precision matrix is sparse. However, when stock returns are driven by common factors, this assumption does not hold. Our paper develops a framework for estimating a high-dimensional precision matrix that combines the benefits of exploiting the factor structure of stock returns and the sparsity of the precision matrix of the factor-adjusted returns. The proposed algorithm is called Factor Graphical Lasso (FGL). We study a high-dimensional portfolio allocation problem when the asset returns admit the approximate factor model. In high dimensions, when the number of assets is large relative to the sample size, the sample covariance matrix of the excess returns is subject to large estimation uncertainty, which leads to unstable solutions for portfolio weights. To resolve this issue, we consider a decomposition into low-rank and sparse components. This strategy allows us to consistently estimate the optimal portfolio in high dimensions, even when the covariance matrix is ill-behaved. We establish consistency of the portfolio weights in a high-dimensional setting without assuming sparsity of the covariance or precision matrix of stock returns. Our theoretical results and simulations demonstrate that FGL is robust to heavy-tailed distributions, which makes our method suitable for financial applications. The empirical application uses daily and monthly data for the constituents of the S&P500 to demonstrate the superior performance of FGL compared to the equal-weighted portfolio, the index, and several prominent precision- and covariance-based estimators. |
Keywords: | High-dimensionality, Portfolio optimization, Graphical Lasso, Approximate Factor Model, Sharpe Ratio, Elliptical Distributions |
JEL: | C13 C55 C58 G11 G17 |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202025&r=all |
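As a rough illustration of the factor-plus-sparsity idea described above (a minimal sketch under our own assumptions, not the authors' Factor Graphical Lasso code; the PCA factor step, the cross-validated graphical lasso, and the toy data are all placeholders), one can estimate factors, apply the graphical lasso to the factor-adjusted residuals, and recombine the two pieces with the Woodbury identity before forming minimum-variance weights:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLassoCV

def factor_graphical_precision(returns, n_factors=3):
    """Rough sketch: low-rank factor part plus sparse residual precision."""
    X = returns - returns.mean(axis=0)          # T x N excess returns
    pca = PCA(n_components=n_factors).fit(X)
    F = pca.transform(X)                        # T x k estimated factors
    B = np.linalg.lstsq(F, X, rcond=None)[0].T  # N x k loadings
    U = X - F @ B.T                             # idiosyncratic residuals

    theta_u = GraphicalLassoCV().fit(U).precision_   # sparse N x N precision
    sigma_f = np.atleast_2d(np.cov(F, rowvar=False)) # k x k factor covariance

    # Woodbury identity for (B Sigma_f B' + Sigma_u)^{-1}
    middle = np.linalg.inv(np.linalg.inv(sigma_f) + B.T @ theta_u @ B)
    return theta_u - theta_u @ B @ middle @ B.T @ theta_u

def gmv_weights(precision):
    """Global minimum-variance weights w = (Theta 1) / (1' Theta 1)."""
    ones = np.ones(precision.shape[0])
    w = precision @ ones
    return w / (ones @ w)

# toy usage with simulated data
rng = np.random.default_rng(0)
R = rng.standard_normal((500, 50)) @ np.diag(rng.uniform(0.5, 2.0, 50))
w = gmv_weights(factor_graphical_precision(R, n_factors=3))
```

The Woodbury step is what allows the sparse residual precision and the low-rank factor part to be combined without ever inverting a dense N x N covariance matrix.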
By: | Sibbertsen, Philipp; Wenger, Kai; Wingert, Simon |
Abstract: | This paper considers estimation and testing of multiple breaks that occur at unknown dates in multivariate long-memory time series. We propose a likelihood-ratio-based approach for estimating breaks in the mean and the covariance of a system of long-memory time series. The limiting distribution of the break estimates is derived and the consistency of the estimators is established. A testing procedure to determine the unknown number of break points, based on iterative testing on the regression residuals, is provided. A Monte Carlo exercise shows the finite sample performance of our method. An empirical application to inflation series illustrates the usefulness of our procedures. |
Keywords: | Multivariate Long Memory ; Multiple Structural Breaks ; Hypothesis Testing |
JEL: | C12 C22 C58 G15 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:han:dpaper:dp-676&r=all |
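As a bare-bones illustration of break-date estimation by criterion maximization (a deliberately simplified sketch of our own: one mean break in one series, ignoring the long-memory and multivariate structure that is the point of the paper), one can scan candidate break dates and keep the one that most reduces the residual sum of squares:

```python
import numpy as np

def single_mean_break(y, trim=0.15):
    """Estimate one break in the mean by minimizing the split RSS.
    A crude sketch only; the paper's LR-based multivariate long-memory
    procedure is considerably more involved."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    rss_full = np.sum((y - y.mean()) ** 2)
    best_k, best_rss = None, np.inf
    for k in range(lo, hi):
        rss = np.sum((y[:k] - y[:k].mean()) ** 2) + np.sum((y[k:] - y[k:].mean()) ** 2)
        if rss < best_rss:
            best_k, best_rss = k, rss
    lr_like = n * np.log(rss_full / best_rss)   # sup-LR-type statistic
    return best_k, lr_like

# toy usage: the mean shifts from 0 to 1 at t = 120
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(1, 1, 130)])
print(single_mean_break(y))
```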
By: | Bergsma, Wicher |
Abstract: | The problem of estimating a parametric or nonparametric regression function in a model with normal errors is considered. For this purpose, a novel objective prior for the regression function is proposed, defined as the distribution maximizing entropy subject to a suitable constraint based on the Fisher information on the regression function. The prior is named the I-prior. For the present model, it is Gaussian with covariance kernel proportional to the Fisher information and mean chosen a priori (e.g., 0). The I-prior has the intuitively appealing property that the more information is available about a linear functional of the regression function, the larger its prior variance, and, broadly speaking, the less influential the prior is on the posterior. Unlike the Jeffreys prior, it can be used in high-dimensional settings. The I-prior methodology can be used as a principled alternative to Tikhonov regularization, which suffers from well-known theoretical problems that are briefly reviewed. The regression function is assumed to lie in a reproducing kernel Hilbert space (RKHS) over a low- or high-dimensional covariate space, giving a high degree of generality. Analysis of some real data sets and a small-scale simulation study show competitive performance of the I-prior methodology, which is implemented in the R package iprior. |
Keywords: | reproducing kernel; RKHS; Fisher information; maximum entropy; objective prior; g-prior; empirical Bayes; regression; nonparametric regression; functional data analysis; classification; Tikhonov regularization |
JEL: | C1 |
Date: | 2020–04–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:102136&r=all |
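A tiny numerical sketch of the general mechanics (our own simplification, not the iprior package): with normal errors and a Gaussian prior on the vector of function values whose covariance is built from a kernel Gram matrix, the posterior mean has a closed form. Taking the prior covariance proportional to K @ K is our reading of "proportional to the Fisher information" for this model; the scale parameters below are fixed for illustration rather than estimated.

```python
import numpy as np

def rbf_gram(x, lengthscale=1.0):
    """Gram matrix of a squared-exponential (RBF) kernel on 1-d inputs."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def iprior_style_fit(x, y, lam=1.0, sigma=0.5):
    """Posterior mean of f under y = f + e, e ~ N(0, sigma^2 I), with a
    Gaussian prior on (f(x_1),...,f(x_n)) whose covariance is lam^2 * K @ K.
    Illustrative scales; the iprior R package estimates them instead."""
    K = rbf_gram(x)
    S = lam ** 2 * K @ K                      # prior covariance of f
    A = S + sigma ** 2 * np.eye(len(x))
    return S @ np.linalg.solve(A, y)          # E[f | y]

# toy usage
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 60)
fhat = iprior_style_fit(x, y)
```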
By: | Michael Ryan (University of Waikato) |
Abstract: | We quantify the effects of policy uncertainty on the economy using a proxy structural vector autoregression (SVAR). Our instrument in the proxy SVAR is a set of exogenous uncertainty events constructed using a text-based narrative approach. Usually the narrative approach involves manually reading texts, which is difficult in our application because our text, the parliamentary record, is unstructured and lengthy. To deal with such circumstances, we develop a procedure based on a natural language processing technique, latent Dirichlet allocation. Our procedure extends the possible applications of the narrative identification approach. We find that the effects of policy uncertainty are significant, and that they are underestimated when alternative identification methods are used. |
Keywords: | Latent Dirichlet allocation; narrative identification; policy uncertainty; Proxy SVAR |
JEL: | C32 C36 C63 D80 E32 L50 |
Date: | 2020–11–03 |
URL: | http://d.repec.org/n?u=RePEc:wai:econwp:20/10&r=all |
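A minimal sketch of the text-processing step described above (generic scikit-learn usage, not the authors' pipeline; the corpus, topic count, and keyword screen are placeholders of ours):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# placeholder corpus standing in for chunks of the parliamentary record
docs = [
    "members debated uncertainty around the proposed tax legislation",
    "the committee reviewed routine procedural matters and adjourned",
    "concern was raised about regulatory uncertainty facing exporters",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)          # per-document topic shares

# inspect the top words per topic to decide which topics proxy for
# policy uncertainty, then flag documents and dates loaded on those topics
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```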
By: | Steele, Fiona; Grundy, Emily |
Abstract: | Exchanges of practical or financial help between people living in different households are a major component of intergenerational exchanges within families and an increasingly important source of support for individuals in need. Using longitudinal data, bivariate dynamic panel models can be applied to study the effects of changes in individual circumstances on help given to and received from non-coresident parents and the reciprocity of exchanges. However, the use of a rotating module for collection of data on exchanges leads to data where the response measurements are unequally spaced and taken less frequently than for the time-varying covariates. Existing approaches to this problem focus on fixed effects linear models for univariate continuous responses. We propose a random effects estimator for a family of dynamic panel models that can handle continuous, binary or ordinal multivariate responses. The performance of the estimator is assessed in a simulation study. A bivariate probit dynamic panel model is then applied to estimate the effects of partnership and employment transitions in the previous year and the presence and age of children in the current year on an individual’s propensity to give or receive help. Annual data on respondents’ partnership and employment status and dependent children and data on exchanges of help collected at 2- and 5-year intervals are used. |
Keywords: | longitudinal data; autoregressive models; lagged response models; unequal spacing; intergenerational exchanges |
JEL: | C1 |
Date: | 2020–11–07 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:106255&r=all |
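For orientation, the sketch below writes down the likelihood of a much simpler cousin of the model above: a univariate random-effects dynamic probit on equally spaced data, integrated by Gauss-Hermite quadrature. It ignores both the bivariate structure and the unequally spaced responses that the paper's estimator is designed to handle, and all names and values are ours.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(params, y, ylag):
    """Random-effects dynamic probit: y*_it = c + gamma*y_{i,t-1} + b_i + e_it.
    Univariate sketch only; the paper's estimator is bivariate and handles
    unequal spacing, neither of which is attempted here."""
    c, gamma, log_sigma = params
    sigma = np.exp(log_sigma)
    nodes, weights = np.polynomial.hermite.hermgauss(15)
    b = np.sqrt(2.0) * sigma * nodes              # random-effect values
    w = weights / np.sqrt(np.pi)                  # quadrature weights
    ll = 0.0
    for yi, yli in zip(y, ylag):                  # loop over individuals
        q = 2 * yi - 1                            # +1 / -1 coding
        # T x M matrix of per-period probabilities at each quadrature node
        probs = norm.cdf(q[:, None] * (c + gamma * yli[:, None] + b[None, :]))
        ll += np.log(np.clip(probs.prod(axis=0) @ w, 1e-300, None))
    return -ll

# toy usage: simulate a small panel and maximize the likelihood
rng = np.random.default_rng(3)
N, T, c0, g0, s0 = 300, 6, -0.3, 0.8, 0.7
y, ylag = [], []
for _ in range(N):
    b = rng.normal(0, s0)
    prev, yi, yli = 0, [], []
    for _ in range(T):
        cur = int(c0 + g0 * prev + b + rng.normal() > 0)
        yli.append(prev); yi.append(cur); prev = cur
    y.append(np.array(yi)); ylag.append(np.array(yli))
res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(y, ylag), method="Nelder-Mead")
```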
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Ekaterina Seregina (University of California Riverside) |
Abstract: | This paper studies forecast combination (as an expert system) using precision matrix estimation of the forecast errors when the latter admit an approximate factor model. This approach incorporates the fact that experts often use common sets of information and hence tend to make common mistakes. This premise is evidenced in many empirical results. For example, the European Central Bank's Survey of Professional Forecasters on Euro-area real GDP growth demonstrates that professional forecasters tend to jointly understate or overstate GDP growth. Motivated by this stylized fact, we develop a novel framework that exploits the factor structure of the forecast errors and the sparsity of the precision matrix of the idiosyncratic components of those errors. The proposed algorithm is called the Factor Graphical Model (FGM). Our approach overcomes the challenge of obtaining forecasts that contain unique information, which has been shown to be necessary to achieve a "winning" forecast combination. In simulations, we demonstrate the merits of the FGM in comparison with equal-weighted forecasts and the standard graphical methods in the literature. An empirical application to forecasting macroeconomic time series in a big-data environment highlights the advantage of the FGM approach in comparison with existing methods of forecast combination. |
Keywords: | High-dimensionality, Graphical Lasso, Approximate Factor Model, Nodewise Regression, Precision Matrix |
JEL: | C13 C38 C55 |
Date: | 2020–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202024&r=all |
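For context, the classical link between the error precision matrix and combination weights (standard Bates-Granger-style algebra, not a result specific to this paper): with forecast-error covariance Sigma and weights constrained to sum to one, minimizing the combined error variance gives

```latex
\[
  w^{*} \;=\; \arg\min_{w:\, w^{\top}\iota = 1} \; w^{\top}\Sigma\, w
        \;=\; \frac{\Sigma^{-1}\iota}{\iota^{\top}\Sigma^{-1}\iota},
\]
% so a good estimate of the precision matrix Sigma^{-1}, here obtained by
% exploiting the factor structure of the forecast errors, translates
% directly into combination weights.
```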
By: | Licht, Adrian; Escribano Saez, Alvaro; Blazsek, Szabolcs Istvan |
Abstract: | In this paper, we introduce Beta-t-QVAR (quasi-vector autoregression) for the joint modelling of score-driven location and scale. Asymptotic theory of the maximum likelihood (ML) estimator is presented, and sufficient conditions of consistency and asymptotic normality of ML are proven. For the joint score-driven modelling of risk premium and volatility, Dow Jones Industrial Average (DJIA) data are used in an empirical illustration. Prediction accuracy of Beta-t-QVAR is superior to the prediction accuracies of Beta-t-EGARCH (exponential generalized AR conditional heteroscedasticity), A-PARCH (asymmetric power ARCH), and GARCH (generalized ARCH). The empirical results motivate the use of Beta-t-QVAR for the valuation of DJIA options. |
Keywords: | Generalized Autoregressive Score; Dynamic Conditional Score; Risk Premium; Volatility |
JEL: | C58 C22 |
Date: | 2020–11–05 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:31339&r=all |
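To fix ideas about the score-driven mechanics, here is a sketch of the simpler univariate Beta-t-EGARCH filter, which Beta-t-QVAR generalizes to a joint location and scale quasi-VAR; the parameter values are illustrative and this is not the authors' specification.

```python
import numpy as np

def beta_t_egarch_filter(y, omega=-0.1, phi=0.97, kappa=0.05, nu=8.0):
    """Score-driven log-scale filter: y_t = exp(lam_t) * eps_t, eps_t ~ t_nu,
    lam_{t+1} = omega + phi * (lam_t - omega) + kappa * u_t,
    where u_t is the conditional score with respect to lam_t (a shifted
    Beta-distributed variable, hence the name)."""
    lam = np.empty(len(y))
    lam[0] = omega
    for t in range(len(y) - 1):
        b = y[t] ** 2 / (nu * np.exp(2 * lam[t]) + y[t] ** 2)
        u = (nu + 1.0) * b - 1.0                  # bounded score, robust to outliers
        lam[t + 1] = omega + phi * (lam[t] - omega) + kappa * u
    return lam                                    # log conditional scale

# toy usage on unit-scale simulated returns
rng = np.random.default_rng(4)
returns = rng.standard_t(df=8, size=1000)
log_scale = beta_t_egarch_filter(returns)
```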
By: | Mauricio Villamizar-Villegas (Banco de la República de Colombia); Yasin Kursat Onder (Ghent University) |
Abstract: | The literature that employs Regression Discontinuity Designs (RDD) typically stacks data across time periods and cutoff values. While practical, this procedure omits useful time heterogeneity. In this paper we decompose the RDD treatment effect into its weighted time-value parts. This analysis adds richness to the RDD estimand, where each time-specific component can be different and informative in a manner that is not expressed by the single-cutoff or pooled regressions. To illustrate our methodology, we present two empirical examples: one using repeated cross-sectional data and another using time series. Overall, we show significant heterogeneity in both cutoff- and time-specific effects. From a policy standpoint, this heterogeneity can pick up key differences in treatment across economically relevant episodes. Finally, we propose a new estimator that uses all observations from the original design and which captures the incremental effect of policy given a state variable. We show that this estimator is generally more precise compared to those that exclude observations exposed to other cutoffs or time periods. Our proposed framework is simple and easily replicable and can be applied to any RDD application that carries an explicitly traceable time dimension. |
Keywords: | Regression discontinuity, multiple cutoffs, time heterogeneity |
JEL: | D10 E24 J16 J22 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:bdr:borrec:1141&r=all |
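To illustrate the decomposition idea in the simplest possible terms, here is a toy sketch of ours with a sharp design, a common cutoff, and naive local linear fits: period-specific jump estimates plus a pooled estimate expressed as their observation-share-weighted average. The paper's weighting scheme and estimator are more intricate than this.

```python
import numpy as np

def local_linear_rdd(x, y, cutoff=0.0, bandwidth=1.0):
    """Sharp-RDD jump estimate from separate linear fits on each side."""
    left = (x < cutoff) & (x > cutoff - bandwidth)
    right = (x >= cutoff) & (x < cutoff + bandwidth)
    bl = np.polyfit(x[left], y[left], 1)
    br = np.polyfit(x[right], y[right], 1)
    return np.polyval(br, cutoff) - np.polyval(bl, cutoff)

def time_decomposed_rdd(x, y, period):
    """Per-period RDD effects plus a pooled effect written as their
    weighted average (weights = share of observations in each period)."""
    effects, weights = {}, {}
    for p in np.unique(period):
        m = period == p
        effects[p] = local_linear_rdd(x[m], y[m])
        weights[p] = m.mean()
    pooled = sum(weights[p] * effects[p] for p in effects)
    return effects, weights, pooled

# toy usage: the jump is 1.0 in period 0 and 2.0 in period 1
rng = np.random.default_rng(5)
n = 4000
x = rng.uniform(-1, 1, n)
period = rng.integers(0, 2, n)
jump = np.where(period == 0, 1.0, 2.0)
y = 0.5 * x + jump * (x >= 0) + rng.normal(0, 0.3, n)
print(time_decomposed_rdd(x, y, period))
```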
By: | Dietmar Pfeifer; Doreen Strassburger; Joerg Philipps |
Abstract: | In this paper we review Bernstein and grid-type copulas for arbitrary dimensions and general grid resolutions in connection with discrete random vectors possessing uniform margins. We further suggest a pragmatic way to fit the dependence structure of multivariate data to Bernstein copulas via grid-type copulas and empirical contingency tables. Finally, we discuss a Monte Carlo study for the simulation and PML estimation of aggregate dependent losses from observed windstorm and flooding data. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.15709&r=all |
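A bivariate sketch of the grid-type ("checkerboard") step only, under our own assumptions: bin pseudo-observations into an empirical contingency table, normalize it to cell probabilities, and simulate from it. The Bernstein-polynomial smoothing and the loss-model layer described in the abstract are omitted.

```python
import numpy as np
from scipy.stats import rankdata

def fit_grid_copula(u, m=8):
    """Grid-type copula density on an m x m grid from pseudo-observations
    u in [0,1]^2 (an empirical contingency table normalized to probabilities)."""
    idx = np.minimum((u * m).astype(int), m - 1)
    counts = np.zeros((m, m))
    for i, j in idx:
        counts[i, j] += 1
    return counts / counts.sum()

def sample_grid_copula(p, n, rng):
    """Draw n pairs: pick a cell by its probability, then uniform within it."""
    m = p.shape[0]
    cells = rng.choice(m * m, size=n, p=p.ravel())
    i, j = np.divmod(cells, m)
    return np.column_stack([(i + rng.random(n)) / m, (j + rng.random(n)) / m])

# toy usage: dependent data -> pseudo-observations -> grid copula -> simulation
rng = np.random.default_rng(6)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
u = np.column_stack([rankdata(z[:, k]) / (len(z) + 1) for k in range(2)])
p = fit_grid_copula(u, m=8)
sim = sample_grid_copula(p, n=5000, rng=rng)
```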
By: | Timo Dimitriadis; Tobias Fissler; Johanna F. Ziegel |
Abstract: | M-estimation and Z-estimation are broadly considered to be equally powerful approaches to parameter estimation in semiparametric models for one-dimensional functionals. This is due to the fact that, under sufficient regularity conditions, there is a one-to-one relation between the corresponding objective functions - strictly consistent loss functions and oriented strict identification functions - via integration and differentiation. When dealing with multivariate functionals such as multiple moments, quantiles, or the pair (Value at Risk, Expected Shortfall), this one-to-one relation fails due to integrability conditions: not every identification function possesses an antiderivative. The most important implication of this failure is an efficiency gap: the most efficient Z-estimator often outperforms the most efficient M-estimator, implying that the semiparametric efficiency bound cannot be attained by the M-estimator in these cases. We show that this phenomenon arises for pairs of quantiles at different levels and for the pair (Value at Risk, Expected Shortfall), where we illustrate the gap through extensive simulations. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.14146&r=all |
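For readers less familiar with the loss/identification-function duality, the one-dimensional quantile case (standard material, not the paper's contribution) illustrates the relation that breaks down in the multivariate setting:

```latex
% For the alpha-quantile, a strictly consistent loss (the pinball loss) and an
% oriented strict identification function are linked by differentiation:
\[
  S_\alpha(\theta, x) = \big(\mathbf{1}\{x \le \theta\} - \alpha\big)(\theta - x),
  \qquad
  V_\alpha(\theta, x) = \mathbf{1}\{x \le \theta\} - \alpha
  = \partial_\theta S_\alpha(\theta, x) \ \ \text{(a.e.)}.
\]
% For vector functionals such as a pair of quantiles or (VaR, ES), admissible
% identification functions can fail to be gradients of any loss, which is the
% integrability failure behind the efficiency gap discussed in the abstract.
```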
By: | Max H. Farrell; Tengyuan Liang; Sanjog Misra |
Abstract: | We propose a methodology for effectively modeling individual heterogeneity using deep learning while still retaining the interpretability and economic discipline of classical models. We pair a transparent, interpretable modeling structure with rich data environments and machine learning methods to estimate heterogeneous parameters based on potentially high-dimensional or complex observable characteristics. Our framework is widely applicable, covering numerous settings of economic interest. We recover, as special cases, well-known examples such as average treatment effects and parametric components of partially linear models. However, we also seamlessly deliver new results for diverse examples such as price elasticities, willingness-to-pay, and surplus measures in choice models, average marginal and partial effects of continuous treatment variables, fractional outcome models, count data, heterogeneous production function components, and more. Deep neural networks are well suited to structured modeling of heterogeneity: we show how the network architecture can be designed to match the global structure of the economic model, giving novel methodology for deep learning as well as, more formally, improved rates of convergence. Our results on deep learning have consequences for other structured modeling environments and applications, such as for additive models. Our inference results are based on an influence function we derive, which we show to be flexible enough to encompass all settings with a single, unified calculation, removing any requirement for case-by-case derivations. The usefulness of the methodology in economics is shown in two empirical applications: the response of 401(k) participation rates to firm matching and the impact of prices on subscription choices for an online service. Extensions to instrumental variables and multinomial choices are shown. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.14694&r=all |
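A schematic of the "structured heterogeneity" idea in the abstract (a toy PyTorch sketch of ours, not the authors' architecture or inference procedure): the economic model fixes the outcome equation, and a network supplies only the covariate-dependent parameters.

```python
import torch
import torch.nn as nn

class HeteroParams(nn.Module):
    """Network maps covariates x to model parameters (alpha(x), beta(x));
    the outcome equation y = alpha(x) + beta(x) * t is imposed by hand."""
    def __init__(self, d_x, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_x, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 2),            # outputs: alpha(x), beta(x)
        )

    def forward(self, x, t):
        theta = self.body(x)
        alpha, beta = theta[:, 0], theta[:, 1]
        return alpha + beta * t             # structured outcome model

# toy usage: simulate data where beta(x) = 1 + x_1 and fit by squared loss
torch.manual_seed(0)
n, d = 2000, 3
x = torch.randn(n, d)
t = torch.rand(n)
y = 0.5 + (1.0 + x[:, 0]) * t + 0.1 * torch.randn(n)

model = HeteroParams(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(x, t) - y) ** 2).mean()
    loss.backward()
    opt.step()
```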
By: | Alexander Wehrli (ETH Zürich); Didier Sornette (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC); Swiss Finance Institute; Southern University of Science and Technology; Tokyo Institute of Technology) |
Abstract: | We introduce a novel modelling framework - the Hawkes(p,q) process - which allows us to parsimoniously disentangle and quantify the time-varying share of high frequency financial price changes that are due to endogenous feedback processes and not exogenous impulses. We show how both flexible exogenous arrival intensities, as well as a time-dependent feedback parameter can be estimated in a structural manner using an Expectation Maximization algorithm. We use this approach to investigate potential characteristic signatures of anomalous market regimes in the vicinity of "flash crashes" - events where prices exhibit highly irregular and cascading dynamics. Our study covers some of the most liquid electronic financial markets, in particular equity and bond futures, foreign exchange and cryptocurrencies. Systematically balancing the degrees of freedom of both exogenously driving processes and endogenous feedback variation using information criteria, we show that the dynamics around such events are not universal, highlighting the usefulness of our approach (i) post-mortem for developing remedies and better future processes - e.g. improving circuit breakers or latency floor designs - and potentially (ii) ex-ante for short-term forecasts in the case of endogenously driven events. Finally, we test our proposed model against a process with refined treatment of exogenous clustering dynamics in the spirit of the recently proposed autoregressive moving-average (ARMA) point process. |
Keywords: | Flash crash; Hawkes process; ARMA point process; High frequency financial data; Market microstructure; EM algorithm; Time-varying parameters |
JEL: | C01 C40 C52 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2092&r=all |
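As background for the endogeneity-share idea, here is a sketch of maximum likelihood for a plain univariate Hawkes process with a constant baseline and exponential kernel (the textbook construction, not the authors' Hawkes(p,q) model with time-varying parameters or their EM scheme); the ratio alpha/beta is the usual branching-ratio measure of endogeneity.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(params, times, T):
    """Univariate Hawkes with intensity mu + sum_j alpha * exp(-beta*(t - t_j)).
    Parameters enter through logs to keep them positive."""
    mu, alpha, beta = np.exp(params)
    A = np.zeros(len(times))                       # recursive excitation term
    for i in range(1, len(times)):
        A[i] = np.exp(-beta * (times[i] - times[i - 1])) * (1.0 + A[i - 1])
    loglik = (np.sum(np.log(mu + alpha * A))
              - mu * T
              - (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times))))
    return -loglik

def fit_hawkes(times, T):
    res = minimize(hawkes_neg_loglik, x0=np.log([0.5, 0.5, 1.0]),
                   args=(np.asarray(times), T), method="Nelder-Mead")
    mu, alpha, beta = np.exp(res.x)
    return mu, alpha, beta, alpha / beta           # last value: branching ratio

# toy usage on a homogeneous Poisson sample (true branching ratio near 0)
rng = np.random.default_rng(7)
events = np.sort(rng.uniform(0, 1000, 800))
print(fit_hawkes(events, T=1000.0))
```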
By: | Christopher Dobronyi; Christian Gouri\'eroux |
Abstract: | We introduce two models of non-parametric random utility for demand systems: the stochastic absolute risk aversion (SARA) model, and the stochastic safety-first (SSF) model. In each model, individual-level heterogeneity is characterized by a distribution $\pi\in\Pi$ of taste parameters, and heterogeneity across consumers is introduced using a distribution $F$ over the distributions in $\Pi$. Demand is non-separable and heterogeneity is infinite-dimensional. Both models admit corner solutions. We consider two frameworks for estimation: a Bayesian framework in which $F$ is known, and a hyperparametric framework in which $F$ is a member of a parametric family. Our methods are illustrated by an application to a large panel of scanner data on alcohol consumption. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13937&r=all |
By: | David Anthoff (University of California); Richard S. J. Tol (University of Sussex) |
Abstract: | Weitzman's Dismal Theorem states that the expected net present value of a stock problem with a stochastic growth rate with unknown variance is unbounded. Cost-benefit analysis can therefore not be applied to greenhouse gas emission control. We use the Generalized Central Limit Theorem to show that the Dismal Theorem can be tested, in a finite sample, by estimating the tail index. We apply this test to social cost of carbon estimates from three commonly used integrated assessment models, and to previously published estimates. Two of the three models do not support the Dismal Theorem, but the third one does for low discount rates. The meta-analysis cannot reject the Dismal Theorem. |
Keywords: | climate policy; dismal theorem; fat tails; social cost of carbon |
JEL: | C46 D81 Q54 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:sus:susewp:1920&r=all |
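A minimal sketch of the kind of tail-index check alluded to above (a generic Hill estimator on simulated draws; the tail fraction and the data are placeholders of ours, not the paper's estimates). A tail index at or below one would correspond to an unbounded mean, the Dismal Theorem case.

```python
import numpy as np

def hill_tail_index(x, tail_fraction=0.05):
    """Hill estimator of the tail index of a heavy right tail.
    Values <= 1 indicate an infinite mean."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(int(tail_fraction * len(x)), 10)       # number of upper order statistics
    tail = x[-k:]
    threshold = x[-k - 1]                          # (k+1)-th largest observation
    return 1.0 / np.mean(np.log(tail / threshold))

# toy usage: Pareto draws with true tail index 1.5 (finite mean, infinite variance)
rng = np.random.default_rng(8)
scc_draws = (1.0 / rng.uniform(size=100_000)) ** (1.0 / 1.5)
print(hill_tail_index(scc_draws))
```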
By: | Aman Ullah (Department of Economics, University of California Riverside) |
Keywords: | Econometrics Inference, Inequality and Poverty, Misspecified Models, Entropy |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202023&r=all |
By: | T. -N. Nguyen; M. -N. Tran; R. Kohn |
Abstract: | We propose a new class of financial volatility models, which we call the REcurrent Conditional Heteroskedastic (RECH) models, to improve both the in-sample analysis and the out-of-sample forecast performance of traditional conditional heteroskedastic models. In particular, we incorporate auxiliary deterministic processes, governed by recurrent neural networks, into the conditional variance of traditional conditional heteroskedastic models, e.g. the GARCH-type models, to flexibly capture the dynamics of the underlying volatility. The RECH models can detect interesting effects in financial volatility that are overlooked by existing conditional heteroskedastic models such as the GARCH (Bollerslev, 1986), GJR (Glosten et al., 1993) and EGARCH (Nelson, 1991). The new models often produce good out-of-sample forecasts while still explaining the stylized facts of financial volatility well, by retaining the well-established structures of the econometric GARCH-type models. These properties are illustrated through simulation studies and applications to four real stock index datasets. A user-friendly software package together with the examples reported in the paper is available at https://github.com/vbayeslab. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.13061&r=all |
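A schematic reading of the recipe above, written as a filter of our own devising: a GARCH(1,1) variance whose intercept is driven by a one-unit recurrent network fed with the lagged return and lagged variance. The weights, the positivity mapping, and the data are illustrative placeholders; the paper estimates such models rather than fixing the weights, and its exact specification may differ.

```python
import numpy as np

def rech_style_filter(y, alpha=0.1, beta=0.85, b0=0.05, b1=0.1,
                      v=(0.5, 0.5), w=0.3):
    """Schematic RECH-style recursion: sigma2_t = omega_t + alpha*y_{t-1}^2
    + beta*sigma2_{t-1}, with omega_t driven by a recurrent hidden state.
    The logistic mapping keeping omega_t positive is our choice."""
    v = np.asarray(v, dtype=float)
    n = len(y)
    sig2 = np.empty(n)
    sig2[0] = np.var(y)
    h = 0.0
    for t in range(1, n):
        x = np.array([y[t - 1], sig2[t - 1]])
        h = np.tanh(v @ x + w * h)                     # recurrent hidden state
        omega_t = b0 + b1 / (1.0 + np.exp(-h))         # time-varying intercept > 0
        sig2[t] = omega_t + alpha * y[t - 1] ** 2 + beta * sig2[t - 1]
    return sig2

# toy usage on unit-scale simulated returns
rng = np.random.default_rng(9)
returns = rng.standard_normal(1000)
cond_var = rech_style_filter(returns)
```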