NEP: New Economics Papers on Econometrics
By: | Richey, Jeremiah; Rosburg, Alicia |
Abstract: | We propose a decomposition method that extends the traditional Oaxaca-Blinder decomposition to a continuous group membership setting and can be applied to any distributional measure of interest. This is achieved by reframing the problem as a decomposition of joint distributions: we decompose the difference between an empirical and a (hypothetical) independent joint distribution of a membership index and an outcome of interest. Differences are divided into a composition effect and a structure effect. The method is based on the estimation of a counterfactual joint distribution via reweighting functions that can be cast into various distributional measures to investigate the drivers of the empirical relationship. We apply the method to U.S. intergenerational economic mobility and investigate multiple versions of the intergenerational elasticity of income (IGE): the traditional linear IGE, quantile regression counterparts, and a nonparametric IGE. Quantile results reveal a U-shaped effect that is primarily compositional in nature; nonparametric results indicate the composition effect is the main driver of the mean parental-offspring link at low levels of parental income, while the structure effect is the main driver at high levels of parental income. Both of these effects are masked by the traditional IGE, which implies an even 50-50 split between the composition and structure effects. |
Keywords: | intergenerational mobility, decomposition methods |
JEL: | C14 C20 J31 J62 |
Date: | 2016–10–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:74744&r=ecm |
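For concreteness, here is a minimal numerical sketch of the reweighting idea described in the abstract above, on simulated toy data (all names, parameter values, and the Gaussian approximations are ours, not the authors'): weighting each observation by f(m)/f(m|x) produces a counterfactual population in which the membership index m is independent of the covariate x, so the reweighted slope isolates the structure effect and the remainder is the composition effect.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # covariate driving the composition channel
m = 0.8 * x + rng.normal(scale=0.6, size=n)   # continuous membership index (e.g., log parental income)
y = 0.4 * m + 0.5 * x + rng.normal(size=n)    # outcome (e.g., log offspring income)

# Reweighting function w = f(m) / f(m|x), both approximated as Gaussians:
# f(m|x) from an OLS fit of m on x, f(m) from the marginal moments.
b = np.polyfit(x, m, 1)
f_m_given_x = norm.pdf(m, loc=np.polyval(b, x), scale=(m - np.polyval(b, x)).std())
f_m = norm.pdf(m, loc=m.mean(), scale=m.std())
w = f_m / f_m_given_x

def slope(y, m, w):
    mbar, ybar = np.average(m, weights=w), np.average(y, weights=w)
    return np.average((m - mbar) * (y - ybar), weights=w) / np.average((m - mbar) ** 2, weights=w)

actual = slope(y, m, np.ones(n))   # empirical IGE-type slope (about 0.8 here)
structure = slope(y, m, w)         # slope once covariates are decoupled from m (about 0.4)
print(actual, structure, actual - structure)   # total, structure, composition
```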
By: | Zhi-Qiang Jiang (ECUST, BU); Gang-Jin Wang (HNU, BU); Askery Canabarro (BU, UFA); Boris Podobnik (ZSEM); Chi Xie (HNU); H. Eugene Stanley (BU); Wei-Xing Zhou (ECUST) |
Abstract: | Being able to predict the occurrence of extreme returns is important in financial risk management. Using the distribution of recurrence intervals---the waiting time between consecutive extremes---we show that these extreme returns are predictable in the short term. Examining a range of different types of returns and thresholds, we find that recurrence intervals follow a $q$-exponential distribution, which we then use to theoretically derive the hazard probability $W(\Delta t |t)$. We define an optimized hazard threshold by maximizing the usefulness of extreme forecasts, and signal a financial extreme occurring within the next day whenever the hazard probability is greater than the optimized threshold. Both in-sample tests and out-of-sample predictions indicate that these forecasts are more accurate than a benchmark that ignores the predictive signals. This recurrence-interval finding deepens our understanding of recurring extreme returns and can be applied to forecast extremes in risk management. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.08230&r=ecm |
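A sketch of the recurrence-interval machinery under one common q-exponential convention (1 < q < 2; the parameterization and function names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def recurrence_intervals(returns, quantile=0.95):
    # Waiting times between consecutive exceedances of a high return threshold
    idx = np.flatnonzero(returns > np.quantile(returns, quantile))
    return np.diff(idx).astype(float)

def fit_q_exponential(tau):
    # MLE of (q, lam) for density (2-q)*lam*(1+(q-1)*lam*t)**(1/(1-q)), 1 < q < 2
    def nll(p):
        q, lam = p
        return -np.sum(np.log((2 - q) * lam * (1 + (q - 1) * lam * tau) ** (1 / (1 - q))))
    res = minimize(nll, x0=[1.3, 1.0 / tau.mean()], method="L-BFGS-B",
                   bounds=[(1.01, 1.99), (1e-8, None)])
    return res.x

def hazard_W(t, dt, q, lam):
    # P(next extreme within dt | t elapsed since the last one)
    S = lambda u: (1 + (q - 1) * lam * u) ** ((2 - q) / (1 - q))
    return (S(t) - S(t + dt)) / S(t)
```

A next-day extreme would then be signalled whenever hazard_W(t, 1, q, lam) exceeds the in-sample optimized threshold.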
By: | Román Mínguez (University of Castilla-La Mancha); María Durbán (Carlos III University); Roberto Basile (Second University of Naples) |
Abstract: | In this paper we propose an extension of the semiparametric P-spline model to spatio-temporal data, including a non-parametric trend as well as a spatial lag of the dependent variable. This model is able to simultaneously control for functional form bias, spatial dependence bias, spatial heterogeneity bias, and omitted time-related factors bias. Specifically, we consider a spatio-temporal ANOVA model disaggregating the trend into spatial and temporal main effects, and second- and third-order interactions between them. The model can include both linear and non-linear effects of the covariates, and other additional fixed or random effects. Recent algorithms based on spatial anisotropic penalties (SAP) are used to estimate all the parameters in a closed form, without the need for multidimensional optimization. An empirical case compares the performance of this model against alternative models such as spatial panel data models. |
Keywords: | spatio-temporal trend, mixed models, P-splines, PS-ANOVA, SAR, spatial panel |
JEL: | C33 C14 C63 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:lui:lleewp:16126&r=ecm |
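Schematically, the PS-ANOVA model with a spatial lag that the abstract describes can be written as follows (our notation: (s1, s2) are spatial coordinates, tau_t the time index, W = (w_ij) the spatial weights matrix):

```latex
y_{it} = \rho \sum_{j} w_{ij}\, y_{jt}
       + f_1(s_{1i}) + f_2(s_{2i}) + f_t(\tau_t)
       + f_{1,2}(s_{1i}, s_{2i}) + f_{1,t}(s_{1i}, \tau_t) + f_{2,t}(s_{2i}, \tau_t)
       + f_{1,2,t}(s_{1i}, s_{2i}, \tau_t) + x_{it}'\beta + \epsilon_{it}
```

Each f-term is a penalized B-spline (P-spline) component; the SAP algorithms estimate all of them in closed form.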
By: | Badi H. Baltagi (Syracuse University); Chihwa Kao (University of Connecticut); Bin Peng (Huazhong University of Science and Technology) |
Abstract: | This paper considers the problem of testing cross-sectional correlation in large panel data models with serially correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions when serial correlation is present in the errors. To control the size, this paper proposes a modification of Pesaran’s CD test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as (N, T) → ∞. The test is distribution free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present. |
Keywords: | cross-sectional correlation test, serial correlation, large panel data model |
JEL: | C13 C33 |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:uct:uconnp:2016-32&r=ecm |
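For reference, the unmodified Pesaran CD statistic that the paper corrects can be computed as in the sketch below; the paper's contribution is a version of this test that remains correctly sized under serial correlation of unknown form.

```python
import numpy as np

def pesaran_cd(resid):
    """Baseline Pesaran CD statistic from a T x N array of regression residuals;
    approximately N(0,1) under the null of no cross-sectional correlation."""
    T, N = resid.shape
    z = (resid - resid.mean(axis=0)) / resid.std(axis=0)
    corr = (z.T @ z) / T                 # pairwise residual correlations
    iu = np.triu_indices(N, k=1)
    return np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()
```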
By: | Johansson, Per (Uppsala University); Lee, Myoung-jae (Korea University) |
Abstract: | We show that the main nonparametric identification finding of Abbring and Van den Berg (2003b, Econometrica) for the effect of a timing-chosen treatment on an event duration of interest does not hold. The main problem is that the identification builds on the competing-risks identification result of Abbring and Van den Berg (2003a, Journal of the Royal Statistical Society, Series B), which requires independence between the waiting duration until treatment and the event duration; this independence assumption does not hold unless there is no treatment effect. We illustrate the problem using constant hazards (i.e., exponential distributions), and, as it turns out, there is no constant-hazard data-generating process satisfying the assumptions in Abbring and Van den Berg (2003b, Econometrica) so long as the effect is not zero. We also suggest an alternative causal model. |
Keywords: | sub-density function, competing risks, treatment effect, treatment timing, duration, identification, hazard regression |
JEL: | C1 C14 C22 |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10247&r=ecm |
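A toy simulation of the constant-hazard illustration (unit hazards assumed by us): the waiting time until treatment V and the realized event duration T are independent only when the treatment effect delta is zero.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 200_000
v = rng.exponential(1.0, n)    # waiting time until treatment (hazard 1)
e0 = rng.exponential(1.0, n)   # latent event time under no treatment (hazard 1)

for delta in (0.0, 1.0):       # treatment multiplies the event hazard by exp(delta)
    extra = rng.exponential(1.0 / np.exp(delta), n)
    t = np.where(e0 > v, v + extra, e0)        # event duration with timing-chosen treatment
    print(delta, round(spearmanr(v, t)[0], 3)) # ~0 for delta=0, clearly nonzero otherwise
```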
By: | Abbring, Jaap H. (Tilburg University); van den Berg, Gerard J. (University of Bristol) |
Abstract: | In their IZA Discussion Paper 10247, Johansson and Lee claim that the main result (Proposition 3) in Abbring and Van den Berg (2003b) does not hold. We show that their claim is incorrect. At a certain point within their line of reasoning, they make a rather basic error while transforming one random variable into another random variable, and this leads them to draw incorrect conclusions. As a result, their paper can be discarded. |
Keywords: | scientific conduct, econometrics, survival analysis, hazard rates |
JEL: | C14 C31 C41 |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10248&r=ecm |
By: | Alexander Razen; Stefan Lang |
Abstract: | Distributional structured additive regression provides a flexible framework for modeling each parameter of a potentially complex response distribution as a function of covariates. Structured additive predictors allow for an additive decomposition of covariate effects, with nonlinear effects and time trends, unit- or cluster-specific heterogeneity, spatial heterogeneity, and complex interactions between covariates of different types. Within this framework, we present a simultaneous estimation approach for multiplicative random effects that allow for cluster-specific heterogeneity with respect to the scaling of a covariate's effect. More specifically, a possibly nonlinear function f(z) of a covariate z may be scaled by a multiplicative cluster-specific random effect (1+alpha). Inference is fully Bayesian and is based on highly efficient Markov chain Monte Carlo (MCMC) algorithms. We investigate the statistical properties of our approach in extensive simulation experiments for different response distributions. Furthermore, we apply the methodology to German real estate data, where we identify significant district-specific scaling factors. According to the deviance information criterion, the models incorporating these factors perform significantly better than standard models without random scaling factors. |
Keywords: | iteratively weighted least squares proposals, MCMC, multiplicative random effects, structured additive predictors |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2016-30&r=ecm |
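The multiplicative random-effect structure is easy to see in a small data-generating sketch (toy numbers ours; the paper estimates f and the alphas jointly by MCMC with iteratively weighted least squares proposals):

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, per_cluster = 50, 40
c = np.repeat(np.arange(n_clusters), per_cluster)   # cluster (e.g., district) labels
z = rng.uniform(-2, 2, size=c.size)

f = np.sin(np.pi * z / 2)                           # nonlinear base effect f(z)
alpha = rng.normal(scale=0.3, size=n_clusters)      # cluster-specific scaling deviations
y = (1 + alpha[c]) * f + rng.normal(scale=0.2, size=c.size)   # (1 + alpha_c) * f(z)
```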
By: | Dean Hyslop (Motu Economic and Public Policy Research); Wilbur Townsend (Motu Economic and Public Policy Research) |
Abstract: | This paper analyses measurement error in the classification of employment. We show that the true employment rate and time-invariant error rates can be identified, given access to two measures of employment with independent errors. Empirical identification requires at least two periods of data over which the employment rate varies. We estimate our model using matched survey and administrative data from Statistics New Zealand’s Integrated Data Infrastructure. We find that both measures have error, with the administrative data being substantially more accurate than the survey data. In both sources, false positives are much more likely than false negatives. Allowing for errors in both sources substantially affects estimated employment rates. |
Keywords: | Unemployment rate, measurement error, validation study |
JEL: | C18 J6 J21 |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:mtu:wpaper:16_19&r=ecm |
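The identification argument can be cast as a small method-of-moments problem; a sketch with hypothetical names follows. With time-invariant error rates, each period contributes three independent cell shares, so two periods with different employment rates just identify the two false-positive and two false-negative rates plus the period employment rates.

```python
import numpy as np
from scipy.optimize import least_squares

def cell_probs(e, p1, p2, q1, q2):
    """P(m1, m2) cells in one period: employment rate e, false-positive rates
    p_k = P(m_k=1 | not employed), false-negative rates q_k = P(m_k=0 | employed),
    with errors independent across the two measures."""
    a1, a2 = 1 - q1, 1 - q2
    return np.array([e*a1*a2         + (1-e)*p1*p2,
                     e*a1*(1-a2)     + (1-e)*p1*(1-p2),
                     e*(1-a1)*a2     + (1-e)*(1-p1)*p2,
                     e*(1-a1)*(1-a2) + (1-e)*(1-p1)*(1-p2)])

def fit(observed):
    """observed: T x 4 empirical cell shares, T >= 2 with varying employment."""
    T = len(observed)
    def resid(theta):
        e, (p1, p2, q1, q2) = theta[:T], theta[T:]
        return np.concatenate([cell_probs(e[t], p1, p2, q1, q2) - observed[t]
                               for t in range(T)])
    theta0 = np.r_[np.full(T, 0.6), 0.02, 0.05, 0.02, 0.05]
    return least_squares(resid, theta0, bounds=(0.0, 1.0)).x
```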
By: | Westermeier, Christian |
Abstract: | Survey data tends to be biased toward the middle class. Often it fails to adequately cover the highly relevant group of multi-millionaires and billionaires, which in turn results in biased estimates of aggregate wealth and top wealth shares. In order to overcome this undercoverage and obtain more reliable measurements of wealth inequality, researchers simulate the top tail of the wealth distribution using Pareto distributions, both with and without information on high-net-worth individuals from rich lists. In a series of Monte Carlo experiments, this study analyzes which assumptions need to be fulfilled for such an exercise to yield reliable results. If survey weights are uninformative about the relationship between non-response and wealth, as is to be expected empirically, the former approach will underestimate top wealth shares and the latter may overestimate them, while both methods yield estimates of aggregate wealth that are inherently biased downwards. In an application using German survey wealth data, it is shown that re-weighting the provided frequency weights based on exogenous information can affect the estimates more severely than choosing the right parameters of the Pareto distribution. However, empirically, the three separate assumptions on non-response yield wildly different estimates. The validity of exogenous data, including the rich-list data, remains a matter of trust on the part of the empiricist. |
Keywords: | differential non-response, non-observation bias, Pareto distribution, survey data, top wealth shares |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:fubsbe:201621&r=ecm |
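A stylized version of the tail-replacement exercise (function and variable names ours; survey weights enter both the Hill-type fit and the totals):

```python
import numpy as np

def pareto_top_share(wealth, weights, threshold, top=0.01):
    """Replace the tail above `threshold` with a fitted Pareto and recompute
    the wealth share of the richest `top` fraction of the population."""
    w, wt = np.asarray(wealth, float), np.asarray(weights, float)
    tail = w > threshold
    # Weighted Hill/MLE estimate of the Pareto shape alpha
    alpha = wt[tail].sum() / np.sum(wt[tail] * np.log(w[tail] / threshold))
    n_tail = wt[tail].sum()
    tail_total = n_tail * threshold * alpha / (alpha - 1)   # Pareto mean, alpha > 1
    total = np.sum(wt[~tail] * w[~tail]) + tail_total
    # Under a Pareto tail, the top fraction f of tail units holds f**(1 - 1/alpha)
    # of tail wealth; assume the top group lies entirely inside the tail.
    f = top * wt.sum() / n_tail
    return f ** (1 - 1 / alpha) * tail_total / total
```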
By: | Petr Jizba; Jan Korbel |
Abstract: | Multifractal analysis is one of the important approaches enabling us to measure the complexity of various data via their scaling properties. We compare the most common techniques used for estimating multifractal exponents from both theoretical and practical points of view. In particular, we discuss methods based on the estimation of Rényi entropy, which provide a powerful tool especially in the presence of heavy-tailed data. To put some flesh on these bare bones, all methods are compared on various real financial datasets, including daily and high-frequency data. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.07028&r=ecm |
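One standard estimator compared in such exercises is the box-counting partition function, whose scaling exponents give the Rényi-type generalized dimensions; a sketch (conventions ours):

```python
import numpy as np

def generalized_dimensions(returns, qs=(-2.0, 2.0, 3.0), scales=(16, 32, 64, 128)):
    """Estimate generalized dimensions D_q from the scaling of the partition
    function Z_q(s) = sum_i p_i(s)**q over boxes of size s (q = 1 excluded)."""
    mass = np.abs(returns) / np.abs(returns).sum()   # normalized measure
    N = mass.size
    out = {}
    for q in qs:
        logZ, logEps = [], []
        for s in scales:
            boxes = mass[: (N // s) * s].reshape(-1, s).sum(axis=1)
            boxes = boxes[boxes > 0]
            logZ.append(np.log(np.sum(boxes ** q)))
            logEps.append(np.log(s / N))
        tau = np.polyfit(logEps, logZ, 1)[0]         # Z_q(eps) ~ eps**tau(q)
        out[q] = tau / (q - 1)
    return out
```

For a monofractal series D_q is flat in q; multifractality shows up as a decreasing D_q. Heavy tails make the negative-q estimates fragile, which is where the Rényi-entropy-based methods discussed in the paper matter.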
By: | Juan Arismendi (ICMA Centre, Henley Business School, University of Reading); Simon Broda (Department of Quantitative Economics, University of Amsterdam and Tinbergen Institute Amsterdam) |
Abstract: | In this study, we derive analytic expressions for the elliptical truncated moment generating function (MGF), and the zeroth-, first-, and second-order moments of quadratic forms of the multivariate normal, Student's t, and generalised hyperbolic distributions. The resulting formulae are tested in a numerical application to calculate an analytic expression for the expected shortfall of quadratic portfolios, with the benefit that moment-based sensitivity measures can be derived from the analytic expression. The convergence rate of the analytic expression is fast (one iteration) for small closed integration domains, and slower for open integration domains when compared to the Monte Carlo integration method. The analytic formulae provide a theoretical framework for calculations in robust estimation, robust regression, outlier detection, design of experiments, and stochastic extensions of deterministic results for elliptical curves. |
Keywords: | Multivariate truncated moments, Quadratic forms, Elliptical Truncation, Tail moments, Parametric distributions, Elliptical functions |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:rdg:icmadp:icma-dp2016-06&r=ecm |
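The analytic formulae themselves are lengthy; as a sanity check, a brute-force Monte Carlo reference for the normal case can be written in a few lines (names ours, rectangular truncation assumed):

```python
import numpy as np

def mc_truncated_quadratic(mu, Sigma, A, lower, upper, n=1_000_000, seed=0):
    """Monte Carlo value of E[X'AX | lower <= X <= upper] for X ~ N(mu, Sigma):
    a reference one could check analytic truncated-moment formulae against."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(mu, Sigma, size=n)
    keep = np.all((X >= lower) & (X <= upper), axis=1)
    Q = np.einsum("ij,jk,ik->i", X[keep], A, X[keep])
    return Q.mean(), keep.mean()    # conditional moment and acceptance rate
```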
By: | Annika Schnücker |
Abstract: | As panel vector autoregressive (PVAR) models can include several countries and variables in one system, they are well suited for global spillover analyses. However, PVARs require restrictions to ensure the feasibility of the estimation. The present paper uses a selection prior for a data-based restriction search. It introduces the stochastic search variable selection for PVAR models (SSVSP) as an alternative estimation procedure for PVARs. This extends Koop and Korobilis’s stochastic search specification selection (S4) to a restriction search on single elements. The SSVSP allows for incorporating dynamic and static interdependencies as well as cross-country heterogeneities. It uses a hierarchical prior to search for data-supported restrictions. The prior differentiates between domestic and foreign variables, thereby allowing a less restrictive panel structure. Absent a matrix structure for restrictions, a Monte Carlo simulation shows that SSVSP outperforms S4 in terms of deviation from the true values. Furthermore, the results of a forecast exercise for G7 countries demonstrate that forecast performance improves for the SSVSP specifications which focus on sparsity in the form of no dynamic interdependencies. |
Keywords: | model selection, stochastic search variable selection, PVAR |
JEL: | C11 C33 C52 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1612&r=ecm |
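The core SSVS ingredient is a Gibbs step that samples, for each coefficient, whether it comes from a near-zero "spike" or a diffuse "slab"; a generic sketch (hyperparameters ours):

```python
import numpy as np
from scipy.stats import norm

def draw_inclusion(beta, tau0=0.01, tau1=1.0, pi=0.5, rng=None):
    """One SSVS Gibbs step: draw inclusion indicators given current coefficients.
    gamma_j = 1 assigns beta_j the slab N(0, tau1^2), gamma_j = 0 the near-zero
    spike N(0, tau0^2); spike draws amount to soft zero restrictions."""
    rng = rng or np.random.default_rng()
    slab = pi * norm.pdf(beta, 0.0, tau1)
    spike = (1 - pi) * norm.pdf(beta, 0.0, tau0)
    return rng.random(np.shape(beta)) < slab / (slab + spike)
```

The SSVSP applies such a step element by element to the PVAR coefficients, with prior inclusion probabilities that differ between domestic and foreign variables.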
By: | Jules Tinang (Toulouse School of Economics); Nour Meddahi (Toulouse School of Economics) |
Abstract: | In this paper, we propose a GMM estimation of the structural parameters of the long-run risk model that allows the frequency of the consumer's optimal decisions to be separated from the frequency at which the econometrician observes the data. Our inference procedure is also robust to weak identification. The key finding is that the long-run risk model adapts well to the data but may not be as good at forecasting or at telling the true story about what drives the evolution of asset prices. Indeed, the model is able to reproduce the qualitative behavior of targeted moments in the long run when the corresponding estimates of the structural parameters are used for simulations, but it also faces a huge tension in keeping track of all the observed moments considered. |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:red:sed016:1107&r=ecm |
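The long-run risk moments themselves are involved; to fix ideas, the generic two-step GMM loop underlying such an estimation looks as follows (a sketch for the iid case; the paper's procedure additionally handles the decision/observation frequency mismatch and weak identification, which this does not):

```python
import numpy as np
from scipy.optimize import minimize

def two_step_gmm(moments, theta0, data):
    """Generic two-step GMM: moments(theta, data) returns an n x m array of
    moment contributions; step one uses W = I, step two the inverse of the
    estimated moment covariance (for illustration only)."""
    def obj(theta, W):
        gbar = moments(theta, data).mean(axis=0)
        return gbar @ W @ gbar
    m = moments(theta0, data).shape[1]
    step1 = minimize(obj, theta0, args=(np.eye(m),), method="Nelder-Mead")
    S = np.cov(moments(step1.x, data).T)     # moment covariance at step-one estimate
    step2 = minimize(obj, step1.x, args=(np.linalg.inv(S),), method="Nelder-Mead")
    return step2.x
```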
By: | Mantobaye Moundigbaye; Clarisse Messemer; Richard W. Parks; W. Robert Reed (University of Canterbury) |
Abstract: | The Parks (1967) estimator is a workhorse for panel data and seemingly unrelated regression equation systems because it allows the incorporation of serial correlation together with heteroskedasticity and cross-sectional correlation. It is efficient both asymptotically and in small samples. Kmenta and Gilbert (1970) and more recently Beck and Katz (1995) note that estimated standard errors are biased downward, often severely. Instead of fixing the Parks standard errors, Beck and Katz abandon the efficient estimator in favor of a Prais-Winsten estimator together with “panel corrected standard errors” (PCSE), a procedure that only partially reduces the standard error bias. In this paper we develop both parametric and nonparametric bootstrap approaches to inference that avoid the need to use biased standard errors. We then illustrate the effectiveness of our procedures using Monte Carlo experiments, which show that the bootstrap gives rejection probabilities close to the nominal level chosen by the researcher. |
Keywords: | Parks model, SUR, panel data, cross-sectional correlation, bootstrap, Monte Carlo, simulation |
JEL: | I31 F52 Z13 |
Date: | 2016–10–22 |
URL: | http://d.repec.org/n?u=RePEc:cbt:econwp:16/22&r=ecm |
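One parametric variant of such a bootstrap, simplified by us to a single-regressor panel re-estimated by pooled OLS (the paper works with the full Parks FGLS estimator and also develops a nonparametric version):

```python
import numpy as np

def ar1_panel_errors(T, rho, Sigma, rng):
    # AR(1)-in-time errors with cross-sectional covariance Sigma (N x N)
    L = np.linalg.cholesky(Sigma)
    u = np.empty((T, Sigma.shape[0]))
    u[0] = (L @ rng.standard_normal(Sigma.shape[0])) / np.sqrt(1 - rho ** 2)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + L @ rng.standard_normal(Sigma.shape[0])
    return u

def bootstrap_se(x, beta_hat, rho_hat, Sigma_hat, B=999, seed=0):
    """Parametric bootstrap for the slope in y = x*beta + u under the fitted
    error process: regenerate the T x N panel B times, re-estimate each time,
    and use the spread of the bootstrap estimates for inference."""
    rng = np.random.default_rng(seed)
    T, N = x.shape
    draws = np.empty(B)
    for b in range(B):
        yb = x * beta_hat + ar1_panel_errors(T, rho_hat, Sigma_hat, rng)
        draws[b] = np.sum(x * yb) / np.sum(x * x)   # pooled OLS for the sketch
    return draws.std(ddof=1)
```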
By: | Marine Carrasco; Guy Tchuente |
Abstract: | This paper studies the asymptotic validity of regularized Anderson-Rubin (AR) tests in linear models with a large number of instruments. The regularized AR tests use information-reduction methods to provide robust inference in instrumental variable (IV) estimation for data-rich environments. We derive the asymptotic properties of the tests. Their asymptotic distributions depend on unknown nuisance parameters. A bootstrap method is used to obtain more reliable inference. The regularized tests are robust to many moment conditions in the sense that they are valid for both few and many instruments, and even for more instruments than the sample size. Our simulations show that the proposed AR tests work well and perform better than competing AR tests when the number of instruments is very large. The usefulness of the regularized tests is shown by proposing confidence intervals for the elasticity of intertemporal substitution (EIS). |
Keywords: | Many weak instruments; AR test; Bootstrap; Factor Model |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:ukc:ukcedp:1608&r=ecm |
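For orientation, the classical Anderson-Rubin statistic, here with an optional Tikhonov (ridge) regularization of the instrument projection as a simple stand-in for the information-reduction schemes studied in the paper:

```python
import numpy as np
from scipy.stats import f as f_dist

def ar_test(y, X, Z, beta0, ridge=0.0):
    """Anderson-Rubin statistic for H0: beta = beta0, with u = y - X @ beta0
    projected on the instruments Z; ridge > 0 regularizes the projection."""
    n, k = Z.shape
    u = y - X @ beta0
    Pu = Z @ np.linalg.solve(Z.T @ Z + ridge * np.eye(k), Z.T @ u)
    ar = ((n - k) / k) * (u @ Pu) / (u @ u - u @ Pu)
    return ar, 1.0 - f_dist.cdf(ar, k, n - k)   # classical F reference distribution
```

With more instruments than observations the unregularized projection is degenerate, which is precisely the regime where regularization and bootstrap critical values matter.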
By: | Jo\"el Bun; Jean-Philippe Bouchaud; Marc Potters |
Abstract: | This review covers recent results concerning the estimation of large covariance matrices using tools from Random Matrix Theory (RMT). We introduce several RMT methods and analytical techniques, such as the Replica formalism and Free Probability, with an emphasis on the Marchenko-Pastur equation that provides information on the resolvent of multiplicatively corrupted noisy matrices. Special care is devoted to the statistics of the eigenvectors of the empirical correlation matrix, which turn out to be crucial for many applications. We show in particular how these results can be used to build consistent "Rotationally Invariant" estimators (RIE) for large correlation matrices when there is no prior on the structure of the underlying process. The last part of this review is dedicated to some real-world applications within financial markets as a case in point. We establish empirically the efficacy of the RIE framework, which is found to be superior in this case to all previously proposed methods. The case of additively (rather than multiplicatively) corrupted noisy matrices is also dealt with in a special Appendix. Several open problems and interesting technical developments are discussed throughout the paper. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.08104&r=ecm |
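A compact sketch of the RIE recipe for correlation matrices (the implementation choices, such as the eta regularizer, are ours, not the authors' code):

```python
import numpy as np

def rie_clean(returns):
    """Rotationally Invariant Estimator of a correlation matrix: keep the
    sample eigenvectors, replace each eigenvalue lam_i by
    lam_i / |1 - q + q * z_i * g(z_i)|^2, with q = N/T, z_i = lam_i - i*eta,
    and g the empirical Stieltjes transform (Bun-Bouchaud-Potters form)."""
    T, N = returns.shape
    q = N / T
    lam, V = np.linalg.eigh(np.corrcoef(returns, rowvar=False))
    z = lam - 1j * N ** -0.5                       # small imaginary regularizer
    g = np.array([np.mean(1.0 / (zi - lam)) for zi in z])
    xi = lam / np.abs(1.0 - q + q * z * g) ** 2
    xi *= lam.sum() / xi.sum()                     # preserve the trace
    return (V * xi) @ V.T
```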
By: | Peter B. Lerner |
Abstract: | The manipulation of LIBOR by a group of banks was one of the major blows to the remaining confidence in the financial industry. Yet, despite an enormous amount of popular literature on the subject, rigorous time-series studies are few. In my paper, I discuss the following hypothesis: if we take as a statistical null that the quotes submitted by the member banks were true, then deviations from the LIBOR should have been entirely random, because they were determined by the idiosyncratic conditions of the member banks. This hypothesis can be statistically verified. Serial correlations of the rates, which cannot be explained by differences in the credit quality of the member banks or their domicile governments, were subjected to correlation tests. A new econometric method, the analysis of the Wigner-Ville function borrowed from quantum mechanics and signal processing, is used and explained for the statistical interpretation of regression residuals. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.08414&r=ecm |
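A standard discretization of the Wigner-Ville distribution (implementation choices ours): under the paper's null, the WVD of the LIBOR regression residuals should show no structured time-frequency content.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a series x (e.g., regression
    residuals): for each time n, FFT the instantaneous autocorrelation
    r_n[tau] = x[n+tau] * conj(x[n-tau]) over the admissible lags."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    W = np.zeros((N, N))
    for n in range(N):
        tmax = min(n, N - 1 - n)
        taus = np.arange(-tmax, tmax + 1)
        kern = np.zeros(N, dtype=complex)
        kern[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.fft.fft(kern).real
    return W   # rows index time, columns index frequency bins
```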
By: | Martin Forde; Hongzhong Zhang |
Abstract: | Using the large deviation principle (LDP) for a re-scaled fractional Brownian motion $B^H_t$ where the rate function is defined via the reproducing kernel Hilbert space, we compute small-time asymptotics for a correlated fractional stochastic volatility model of the form $dS_t=S_t\sigma(Y_t) (\bar{\rho} dW_t +\rho dB_t), \,dY_t=dB^H_t$ where $\sigma$ is $\alpha$-H\"{o}lder continuous for some $\alpha\in(0,1]$; in particular, we show that $t^{H-\frac{1}{2}} \log S_t $ satisfies the LDP as $t\to0$ and the model has a well-defined implied volatility smile as $t \to 0$, when the log-moneyness $k(t)=x t^{\frac{1}{2}-H}$. Thus the smile steepens to infinity or flattens to zero depending on whether $H\in(0,\frac{1}{2})$ or $H\in(\frac{1}{2},1)$. We also compute large-time asymptotics for a fractional local-stochastic volatility model of the form: $dS_t= S_t^{\beta} |Y_t|^p dW_t,dY_t=dB^H_t$, and we generalize two identities in Matsumoto and Yor (2005) to show that $\frac{1}{t^{2H}}\log \frac{1}{t}\int_0^t e^{2 B^H_s} ds$ and $\frac{1}{t^{2H}}(\log \int_0^t e^{2(\mu s+B^H_s)} ds-2 \mu t)$ converge in law to $ 2\mathrm{max}_{0 \le s \le 1} B^H_{s}$ and $2B_1$ respectively for $H \in (0,\frac{1}{2})$ and $\mu>0$ as $t \to \infty$. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.08878&r=ecm |
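For numerical exploration of such asymptotics, fractional Brownian motion can be simulated exactly on a grid via a Cholesky factorization (a sketch; O(n^3), with Davies-Harte being the faster standard alternative):

```python
import numpy as np

def fbm_path(n, H, T=1.0, rng=None):
    """Exact fractional Brownian motion on an n-point grid via a Cholesky
    factor of Cov(B^H_s, B^H_t) = 0.5 * (s^2H + t^2H - |s-t|^2H)."""
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return t, L @ rng.standard_normal(n)
```

Feeding the simulated path into Y_t and then into sigma(Y_t) gives a direct way to inspect the small-time smile behavior described above.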