New Economics Papers on Econometrics |
By: | Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Chu-An Liu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Xiaoxia Shi (University of Wisconsin at Madison) |
Abstract: | We propose a test for a generalized regression monotonicity (GRM) hypothesis. The GRM hypothesis is the sharp testable implication of the monotonicity of certain latent structures, as we show in this paper. Examples include the monotone instrumental variable assumption of Manski and Pepper (2000) and the monotonicity of the conditional mean function when only interval data are available for the dependent variable. These instances of latent monotonicity can be tested using our test. Moreover, the GRM hypothesis includes regression monotonicity and stochastic monotonicity as special cases. Thus, our test also serves as an alternative to existing tests for those hypotheses. We show that our test controls the size uniformly over a broad set of data generating processes asymptotically, is consistent against fixed alternatives, and has nontrivial power against some $n^{-1/2}$ local alternatives. |
JEL: | C01 C12 C21 |
Keywords: | Generalized regression monotonicity, hypothesis testing, monotone instrumental variable, interval outcome, uniform size control |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:sin:wpaper:16-a009&r=ecm |
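As a point of reference for the special cases named in the abstract (the paper's exact GRM formulation is not reproduced here), regression monotonicity and stochastic monotonicity of $Y$ in a scalar conditioning variable $X$ are commonly stated as

$$ H_0^{RM}: \; x_1 \le x_2 \;\Rightarrow\; E[Y \mid X = x_1] \le E[Y \mid X = x_2], $$
$$ H_0^{SM}: \; x_1 \le x_2 \;\Rightarrow\; F_{Y|X}(y \mid x_1) \ge F_{Y|X}(y \mid x_2) \; \text{for all } y, $$

while the monotone instrumental variable assumption of Manski and Pepper (2000) posits that $E[y(t) \mid Z = z]$ is weakly increasing in the instrument $z$ for the latent outcome $y(t)$; only its testable implications for observables can be checked directly.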
By: | Johansson, Per (Uppsala University); Lee, Myoung-jae (Korea University) |
Abstract: | We show that the main nonparametric identification finding of Abbring and Van den Berg (2003b, Econometrica) for the effect of a timing-chosen treatment on an event duration of interest does not hold. The main problem is that the identification is based on the competing-risks identification result of Abbring and Van den Berg (2003a, Journal of the Royal Statistical Society, Series B) that requires independence between the waiting duration until treatment and the event duration, but the independence assumption does not hold unless there is no treatment effect. We illustrate the problem using constant hazards (i.e., exponential distribution), and as it turns out, there is no constant-hazard data generating process satisfying the assumptions in Abbring and Van den Berg (2003b, Econometrica) so long as the effect is not zero. We also suggest an alternative causal model. |
Keywords: | identification, duration, treatment timing, treatment effect, competing risks, sub-density function, hazard regression |
JEL: | C1 C14 C22 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10058&r=ecm |
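A stylized constant-hazard calculation (my own illustration, consistent with the point the abstract makes) shows why independence between the treatment waiting time $S$ and the event duration $Y$ fails once the effect is nonzero. Suppose $Y$ has hazard $\theta$ before treatment and $\theta\delta$ afterwards. Then

$$ P(Y > y \mid S = s) = \begin{cases} e^{-\theta y}, & y \le s, \\ e^{-\theta s - \theta\delta (y - s)}, & y > s, \end{cases} $$

which depends on $s$ whenever $\delta \ne 1$. Hence the independence required by the competing-risks identification result can hold only if the treatment effect is zero.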
By: | Sonali Das; Jeffrey S. Racine |
Abstract: | In this paper we propose a nonparametric methodology designed to facilitate the statistical analysis of complex systems. The proposed approach exploits an ensemble of nonparametric techniques including conditional density function estimation, conditional distribution function estimation, conditional mean estimation (regression) and conditional quantile estimation (quantile regression). By exploiting recent developments in nonparametric methodology and in open source interactive platforms for data visualization and statistical analysis, we are able to provide an approach that facilitates enhanced understanding of complex empirical phenomena. We illustrate this approach by exploring the inherent complexity of the Southern Ocean system as a carbon sink, measured in terms of fugacity of carbon dioxide at sea surface temperature (fCO2), in relation to a number of oceanic state variables, all measured in situ during the annual South African National Antarctic Expedition (SANAE) austral summer trips from Cape Town to the Antarctic, and back, between 2010 and 2015. |
Keywords: | Kernel Smoothing; Conditional Density, Distribution, Mean and Quantile Estimation; Exploratory Data Analysis |
JEL: | C14 |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:mcm:deptwp:2016-07&r=ecm |
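A minimal sketch of the kind of nonparametric ensemble the abstract describes, using off-the-shelf routines from Python's statsmodels on synthetic data (the paper's own empirical analysis and visualization platform are not reproduced here):

    import numpy as np
    from statsmodels.nonparametric.kernel_regression import KernelReg
    from statsmodels.nonparametric.kernel_density import KDEMultivariateConditional

    # Synthetic stand-in for an outcome (e.g. fCO2) and one state variable.
    rng = np.random.default_rng(42)
    x = rng.uniform(0.0, 10.0, 500)
    y = np.sin(x) + 0.3 * rng.standard_normal(500)

    # Conditional mean E[Y|X=x] by local-linear kernel regression.
    mean_fit = KernelReg(endog=y, exog=x, var_type="c", reg_type="ll")
    grid = np.linspace(0.5, 9.5, 50)
    cond_mean, _ = mean_fit.fit(grid)

    # Conditional density f(y|x) and conditional distribution F(y|x).
    cde = KDEMultivariateConditional(endog=y, exog=x, dep_type="c",
                                     indep_type="c", bw="normal_reference")
    ygrid = np.linspace(y.min(), y.max(), 200)
    x0 = np.full(200, 5.0)
    pdf_at_x0 = cde.pdf(endog_predict=ygrid, exog_predict=x0)
    cdf_at_x0 = cde.cdf(endog_predict=ygrid, exog_predict=x0)

    # Conditional median via inversion of the estimated conditional CDF.
    idx = np.searchsorted(cdf_at_x0, 0.5)
    cond_median = ygrid[min(idx, ygrid.size - 1)]
    print(cond_mean[:5], cond_median)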
By: | Abonazel, Mohamed R. |
Abstract: | This paper provides a generalized model for the random-coefficients panel data model in which the errors are cross-sectionally heteroskedastic and contemporaneously correlated, and the time-series errors follow a first-order autoregressive process. The conventional estimators used in the standard random-coefficients panel data model are not suitable for the generalized model, so this paper provides and examines a suitable estimator for this model along with alternative estimators. Moreover, we carry out efficiency comparisons for these estimators in small samples and examine their asymptotic distributions. The Monte Carlo simulation study indicates that the new estimators are more reliable (more efficient) than the conventional estimators in small samples. |
Keywords: | Classical pooling estimation; Contemporaneous covariance; First-order autocorrelation; Heteroskedasticity; Mean group estimation; Monte Carlo simulation; Random coefficient regression. |
JEL: | B23 C1 C23 C4 C5 |
Date: | 2016–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:72586&r=ecm |
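Schematically (my notation, not necessarily the paper's), the generalized model described above combines Swamy-type random coefficients with a richer error structure:

$$ y_{it} = x_{it}'\beta_i + u_{it}, \qquad \beta_i = \bar\beta + v_i, \qquad u_{it} = \rho\, u_{i,t-1} + \varepsilon_{it}, $$

with $E[\varepsilon_{it}\varepsilon_{jt}] = \sigma_{ij}$ allowed to differ across $i = j$ (cross-sectional heteroskedasticity) and $i \ne j$ (contemporaneous correlation), whereas the standard random-coefficients model assumes $\sigma_{ij} = \sigma^2 \mathbf{1}\{i = j\}$ and $\rho = 0$.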
By: | Yuichi Kitamura (Institute for Fiscal Studies and Yale University); Jörg Stoye (Institute for Fiscal Studies and New York University) |
Abstract: | This paper develops and implements a nonparametric test of Random Utility Models. The motivating application is to test the null hypothesis that a sample of cross-sectional demand distributions was generated by a population of rational consumers. We test a necessary and sufficient condition for this that does not rely on any restriction on unobserved heterogeneity or the number of goods. We also propose and implement a control function approach to account for endogenous expenditure. An econometric result of independent interest is a test for linear inequality constraints when these are represented as the vertices of a polyhedron rather than its faces. An empirical application to the U.K. Household Expenditure Survey illustrates the computational feasibility of the method in demand problems with 5 goods. |
JEL: | C14 |
Date: | 2016–06–14 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:27/16&r=ecm |
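The vertex representation mentioned in the abstract amounts to testing whether an estimated vector of choice probabilities lies in the cone spanned by the columns of a known matrix. In stylized form (a sketch, not the authors' exact notation), with $\hat\pi$ the estimated vector and the columns of $A$ the rationalizable (vertex) patterns,

$$ H_0: \; \pi = A\nu \; \text{for some } \nu \ge 0, \qquad J_N = N \min_{\nu \ge 0} (\hat\pi - A\nu)' \Omega (\hat\pi - A\nu), $$

a quadratic program whose solution underlies the test, in contrast to face (half-space) representations of the same polyhedron.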
By: | Siem Jan Koopman (VU University Amsterdam, the Netherlands); Rutger Lit (VU University Amsterdam, the Netherlands); Andre Lucas (VU University Amsterdam, the Netherlands) |
Abstract: | We develop a multivariate unobserved components model to extract business cycle and financial cycle indicators from a panel of economic and financial time series of four large developed economies. Our model is flexible and allows for the inclusion of cycle components in different selections of economic variables with different scales and with possible phase shifts. We find clear evidence of the presence of a financial cycle with a length that is approximately twice the length of a regular business cycle. Moreover, cyclical movements in credit-related variables largely depend on the financial cycle, and only marginally on the business cycle. Property prices appear to have their own idiosyncratic dynamics and do not substantially load on business or financial cycle components. Systemic surveillance policies should therefore account for the different dynamic components in typical macro-financial variables. |
Keywords: | financial cycle; business cycle; phase shift; multivariate state space model; Kalman filtering; panel time series |
JEL: | E32 C22 |
Date: | 2016–07–11 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20160051&r=ecm |
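One standard device for the phase shifts mentioned in the abstract (common in the state space literature; the paper's exact specification may differ in details) lets series $i$ load on a bivariate stochastic cycle $(\psi_t, \psi_t^+)$ with an individual shift $\xi_i$:

$$ y_{it} = \mu_{it} + \delta_i \left[ \cos(\lambda \xi_i)\, \psi_t + \sin(\lambda \xi_i)\, \psi_t^+ \right] + \varepsilon_{it}, $$

so that series with $\xi_i \ne 0$ lead or lag the reference cycle by $\xi_i$ periods at frequency $\lambda$; separate business and financial cycle components of this form can then be extracted jointly by the Kalman filter.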
By: | Eric JONDEAU (University of Lausanne and Swiss Finance Institute); Florian PELGRIN (EDHEC Business School) |
Abstract: | The aggregation of individual random AR(1) models generally leads to an AR(infinity) process. We provide two consistent estimators of aggregate dynamics based on either a parametric regression or a minimum distance approach for use when only macro data are available. Notably, both estimators allow us to recover some moments of the cross-sectional distribution of the autoregressive parameter. Both estimators perform very well in our Monte-Carlo experiment, even with finite samples. |
Keywords: | Autoregressive process, Aggregation, Heterogeneity |
JEL: | C2 C13 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1443&r=ecm |
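To see why aggregate data carry information about the cross-sectional distribution of $\rho_i$ (a textbook calculation, not reproduced from the paper), take $y_{it} = \rho_i y_{i,t-1} + \varepsilon_{it}$ with $\varepsilon_{it}$ i.i.d. across $i$ and $t$ with unit variance, and $\rho_i \sim F$ independent of the shocks. The stationary autocovariances of an individual series are $E[y_{it} y_{i,t-k} \mid \rho_i] = \rho_i^k / (1 - \rho_i^2)$, so the autocovariances of the aggregate involve the functionals

$$ \gamma_k \propto E\!\left[ \frac{\rho^k}{1 - \rho^2} \right], \qquad k = 0, 1, 2, \dots $$

and matching a finite set of sample autocovariances to these functionals (the minimum distance idea) recovers moments of $F$.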
By: | Sam Astill; David Harvey; Stephen Leybourne; Robert Taylor |
Abstract: | In this paper we examine the issue of detecting explosive behaviour in economic and financial time series when an explosive episode is both ongoing at the end of the sample, and of finite length. We propose a testing strategy based on the sub-sampling methods of Andrews (2003), in which a suitable test statistic is calculated on a finite number of end-of-sample observations, with a critical value obtained using sub-sample test statistics calculated on the remaining observations. This approach also has the practical advantage that, by virtue of how the critical values are obtained, it can deliver tests which are robust to, among other things, conditional heteroskedasticity and serial correlation in the driving shocks. We also explore modifications of the raw statistics to account for unconditional heteroskedasticity using studentisation and a White-type correction. We evaluate the finite sample size and power properties of our proposed procedures, and find that they offer promising levels of power, suggesting the possibility for earlier detection of end-of-sample bubble episodes compared to existing procedures. |
Keywords: | Rational bubble; Explosive autoregression; Right-tailed unit root testing; Sub-sampling. |
URL: | http://d.repec.org/n?u=RePEc:not:notgts:16/02&r=ecm |
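A bare-bones sketch of the Andrews (2003)-style sub-sampling logic described above (the statistic here is a generic right-tailed Dickey-Fuller-type t-ratio; the paper's preferred statistics and the studentisation/White-type corrections are not reproduced):

    import numpy as np

    def df_t_stat(y):
        # Right-tailed t-ratio on phi in: Dy_t = phi * y_{t-1} + e_t.
        dy, ylag = np.diff(y), y[:-1]
        phi = np.dot(ylag, dy) / np.dot(ylag, ylag)
        resid = dy - phi * ylag
        se = np.sqrt(resid.var(ddof=1) / np.dot(ylag, ylag))
        return phi / se

    def end_of_sample_test(y, m, alpha=0.05):
        # Statistic on the last m observations; critical value from the
        # empirical distribution of the same statistic on rolling m-windows
        # drawn from the first T - m observations.
        T = len(y)
        stat = df_t_stat(y[T - m:])
        subs = np.array([df_t_stat(y[s:s + m]) for s in range(T - 2 * m + 1)])
        crit = np.quantile(subs, 1.0 - alpha)
        return stat, crit, stat > crit

    # Demo: unit-root sample with an ongoing explosive final stretch.
    rng = np.random.default_rng(1)
    y = np.cumsum(rng.standard_normal(480))
    for _ in range(20):
        y = np.append(y, 1.04 * y[-1] + rng.standard_normal())
    print(end_of_sample_test(y, m=20))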
By: | Erik Bartoš; Richard Pinčák |
Abstract: | Multi-dimensional string objects are introduced as a new alternative for applying string models to time series forecasting in financial market trading. The objects are represented by an open string with two endpoints and a D2-brane, which are continuous enhancements of the 1-endpoint open string model. We show how the new objects' properties can change the statistics of the predictors, which makes them candidates for modeling a wide range of time series systems. String angular momentum is proposed as a further tool, beyond historical volatility, for analyzing the stability of currency rates. To demonstrate the reliability of our approach, we present the results of real demo simulations for four currency exchange pairs. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.05608&r=ecm |
By: | Jianxi Su; Edward Furman |
Abstract: | A new multivariate distribution possessing arbitrarily parametrized and positively dependent univariate Pareto margins is introduced. Unlike the probability law of Asimit et al. (2010) [Asimit, V., Furman, E. and Vernic, R. (2010) On a multivariate Pareto distribution. Insurance: Mathematics and Economics 46(2), 308-316], the structure in this paper is absolutely continuous with respect to the corresponding Lebesgue measure. The distribution is of importance to actuaries through its connections to the popular frailty models, as well as because of its capacity to describe dependent heavy-tailed risks. The genesis of the new distribution is linked to a number of existing probability models, and useful characterization results are proved. Expressions for, e.g., the decumulative distribution and probability density functions, (joint) moments and regressions are developed. The distributions of minima and maxima, as well as some weighted risk measures, are employed to exemplify possible applications of the distribution in insurance. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.04737&r=ecm |
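The frailty connection the abstract alludes to is the classical one (a standard identity, stated here for orientation rather than taken from the paper): if, given a risk factor $\Lambda \sim \mathrm{Gamma}(\gamma, \theta)$, a risk $X$ is conditionally exponential with rate $\Lambda$, then

$$ P(X > x) = E\left[ e^{-\Lambda x} \right] = \left( 1 + \frac{x}{\theta} \right)^{-\gamma}, \qquad x \ge 0, $$

i.e. $X$ is Pareto of the second kind (Lomax); sharing components of $\Lambda$ across coordinates yields positively dependent Pareto margins.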
By: | Cornelissen, Thomas; Dustmann, Christian; Raute, Anna; Schönberg, Uta |
Abstract: | This paper provides an introduction to the estimation of Marginal Treatment Effects (MTE). Compared to existing surveys on the subject, our paper is less technical and speaks to the applied economist with a solid basic understanding of econometric techniques who would like to use MTE estimation. Our framework of analysis is a generalized Roy model based on the potential outcomes framework, within which we define different treatment effects of interest and review the well-known case of IV estimation with a discrete instrument resulting in a local average treatment effect (LATE). Turning to IV estimation with a continuous instrument, we demonstrate that the 2SLS estimator may be viewed as a weighted average of LATEs, and discuss MTE estimation as an alternative and more informative way of exploiting a continuous instrument. We clarify the assumptions underlying the MTE framework and illustrate how MTE estimation is implemented in practice. |
Keywords: | heterogeneous effects; instrumental variables; marginal treatment effects |
JEL: | C26 I26 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11390&r=ecm |
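For readers who want the central object up front (standard definitions, consistent with the framework the abstract describes): in the generalized Roy model $Y = D Y_1 + (1-D) Y_0$ with $D = \mathbf{1}\{ P(Z) \ge U_D \}$ and $U_D \sim U[0,1]$, the marginal treatment effect is

$$ \mathrm{MTE}(x, u) = E[Y_1 - Y_0 \mid X = x, U_D = u] = \frac{\partial E[Y \mid X = x, P(Z) = p]}{\partial p}\bigg|_{p = u}, $$

the local IV expression; ATE, ATT and LATE are then weighted averages of $\mathrm{MTE}(x, u)$ over $u$ with known weights.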
By: | Fausto Corradin; Domenico Sartore |
Abstract: | This paper computes the non-central moments of the Truncated Normal variable, that is, a Normal constrained to assume values in the interval $[a, b]$, with $-\infty \le a < b \le +\infty$. We define two recursive expressions, one of which can be expressed in closed form. Another closed form is derived using the Lower Incomplete Gamma Function. Moreover, an upper bound for the absolute value of the non-central moments is determined. The numerical results of the expressions are compared and the different behavior for high values of the order of the moments is shown. |
Keywords: | truncated normal variable, non-central moments, lower incomplete gamma function. |
JEL: | G11 G14 G24 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:2016:17&r=ecm |
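The recursion below is the standard integration-by-parts identity for truncated normal non-central moments (the paper's own recursive expressions are not reproduced here, but are likely of this flavor); scipy's truncnorm provides an independent check.

    import numpy as np
    from scipy.stats import norm, truncnorm

    def std_trunc_moments(alpha, beta, kmax):
        # Non-central moments M_k = E[xi^k] of a standard normal truncated
        # to [alpha, beta], via integration by parts:
        # M_k = (k-1) M_{k-2} - (beta^{k-1} phi(beta) - alpha^{k-1} phi(alpha)) / Z.
        Z = norm.cdf(beta) - norm.cdf(alpha)
        pa, pb = norm.pdf(alpha), norm.pdf(beta)
        M = [1.0, -(pb - pa) / Z]
        for k in range(2, kmax + 1):
            M.append((k - 1) * M[k - 2]
                     - (beta ** (k - 1) * pb - alpha ** (k - 1) * pa) / Z)
        return M

    alpha, beta = -1.0, 2.0
    M = std_trunc_moments(alpha, beta, 6)
    # Check against scipy's non-central moments of the same truncated normal.
    print([round(m, 6) for m in M[1:]])
    print([round(truncnorm.moment(k, alpha, beta), 6) for k in range(1, 7)])

Moments of a general $N(\mu, \sigma^2)$ variable truncated to $[a, b]$ then follow from the binomial expansion of $E[(\mu + \sigma\xi)^k]$ with $\alpha = (a-\mu)/\sigma$ and $\beta = (b-\mu)/\sigma$.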
By: | Peter Jevtic (Department of Mathematics and Statistics, McMaster University, Canada); Luca Regis (IMT School for Advanced Studies Lucca) |
Abstract: | We formulate, study and calibrate a continuous-time model for the joint evolution of the mortality surface of multiple populations. We model the mortality intensity by age and population as a mixture of stochastic latent factors that can be either population-specific or common to all populations. These factors are described by affine time-(in)homogeneous stochastic processes. Traditional, deterministic mortality laws can be extended to multi-population stochastic counterparts within our framework. We detail the calibration procedure when factors are Gaussian, using a centralized data-fusion Kalman filter. We provide an application based on the mortality of UK males and females. Although parsimonious, the specification we calibrate provides a good fit of the observed mortality surface (ages 0-99) of both sexes between 1960 and 2013. |
Keywords: | multi-population mortality, mortality surface, continuous-time stochastic mortality, Kalman filter estimation, centralized data fusion |
JEL: | C13 C38 G22 J11 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:ial:wpaper:03/2016&r=ecm |
By: | Lamy, Laurent; Patnam, Manasa; Visser, Michael |
Abstract: | This paper proposes a method to estimate the relationship between the price of a good sold at auction, and a post-auction outcome which is observed among auction winners. To account for both the endogeneity of the auction price and sample selection, we develop a control function approach based on the non-parametric identification of an auction model. In our application we estimate a performance equation using unique field data on wages earned by cricket players and their game-specific performances. Our empirical strategy benefits from the fact that wages are determined through randomly ordered sequential English auctions: the order in which players are sold acts as an exogenous shifter of wages. We find that the positive correlation between wages and performance comes (almost) exclusively from the selection and endogeneity effects. |
Keywords: | Sample Selection; Structural Econometrics of Auctions; Wage-Performance Elasticity |
JEL: | C13 C57 D44 M52 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11376&r=ecm |
By: | Simon Clinet; Yoann Potiron |
Abstract: | We introduce and show the existence of a continuous time-varying parameter extension of the self-exciting point process. The kernel shape is assumed to be exponentially decreasing. The quantity of interest is the integrated parameter over time $T^{-1} \int_0^T \theta_t^* dt$, where $\theta_t^*$ is the time-varying parameter. To estimate it naïvely, we chop the data into several blocks, compute the maximum likelihood estimator (MLE) on each block, and take the average of the local estimates. Correspondingly, we give conditions on the parameter process and the block length under which we can establish the local central limit theorem and the boundedness of moments of order $2\kappa$ of the local estimators, where $\kappa > 1$. Under those assumptions, the asymptotic bias of the naïve global estimator explodes. As a consequence, we provide a non-naïve estimator, constructed like the naïve one but with a first-order bias reduction applied to each local MLE. We derive such a first-order bias formula for the self-exciting process, and provide further conditions under which the non-naïve global estimator is asymptotically unbiased. Finally, we obtain the associated global central limit theorem. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.05831&r=ecm |
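A compact sketch of the naïve block-averaging estimator described above, for an exponential-kernel Hawkes process with constant parameters within each block (the paper's first-order bias correction for the non-naïve estimator is not attempted here):

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, t, T):
        # Exponential Hawkes: lambda(s) = mu + alpha * sum_{t_i < s} exp(-beta (s - t_i)).
        mu, alpha, beta = params
        A = np.zeros(len(t))                  # A_i = sum_{j<i} exp(-beta (t_i - t_j))
        for i in range(1, len(t)):
            A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A[i - 1])
        compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - t)))
        return compensator - np.sum(np.log(mu + alpha * A))

    def simulate_hawkes(mu, alpha, beta, T, rng):
        # Cluster representation: Poisson immigrants; each event spawns
        # Poisson(alpha/beta) offspring with Exp(beta) delays.
        todo = list(rng.uniform(0.0, T, rng.poisson(mu * T)))
        out = []
        while todo:
            s = todo.pop()
            out.append(s)
            for d in rng.exponential(1.0 / beta, rng.poisson(alpha / beta)):
                if s + d < T:
                    todo.append(s + d)
        return np.sort(np.array(out))

    def naive_block_mle(t, T, n_blocks):
        # Local MLE on each block, then average (the naive global estimator).
        edges = np.linspace(0.0, T, n_blocks + 1)
        local = []
        for a, b in zip(edges[:-1], edges[1:]):
            tb = t[(t >= a) & (t < b)] - a
            res = minimize(neg_loglik, x0=np.array([0.5, 0.5, 1.5]),
                           args=(tb, b - a), method="L-BFGS-B",
                           bounds=[(1e-4, None)] * 3)
            local.append(res.x)
        return np.mean(local, axis=0)

    rng = np.random.default_rng(7)
    t = simulate_hawkes(mu=0.4, alpha=0.6, beta=1.2, T=4000.0, rng=rng)
    print(naive_block_mle(t, T=4000.0, n_blocks=20))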
By: | Jianxi Su; Edward Furman |
Abstract: | We introduce a class of dependence structures that we call the Multiple Risk Factor (MRF) dependence structures. On the one hand, the new constructions extend the popular CreditRisk+ approach, and as such they formally describe default risk portfolios exposed to an arbitrary number of fatal risk factors with conditionally exponential and dependent hitting (or occurrence) times. On the other hand, the MRF structures can be seen as an encompassing family of multivariate probability distributions with univariate margins distributed Pareto of the second kind, and in this role they can be used to model insurance risk portfolios of dependent and heavy-tailed risk components. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.04739&r=ecm |
By: | Gunes Kamber (Bank for International Settlements); James Morley (School of Economics, UNSW Business School, UNSW); Benjamin Wong (Reserve Bank of New Zealand) |
Abstract: | The Beveridge-Nelson (BN) trend-cycle decomposition based on autoregressive forecasting models of U.S. quarterly real GDP growth produces estimates of the output gap that are strongly at odds with widely-held beliefs about the amplitude, persistence, and even sign of transitory movements in economic activity. These antithetical attributes are related to the autoregressive coefficient estimates implying a very high signal-to-noise ratio in terms of the variance of trend shocks as a fraction of the overall quarterly forecast error variance. When we impose a lower signal-to-noise ratio, the resulting BN decomposition, which we label the “BN filter”, produces a more intuitive estimate of the output gap that is large in amplitude, highly persistent, and typically positive in expansions and negative in recessions. Real-time estimates from the BN filter are also reliable in the sense that they are subject to smaller revisions and predict future output growth and inflation better than estimates from other methods of trend-cycle decomposition that also impose a low signal-to-noise ratio, including deterministic detrending, the Hodrick-Prescott filter, and the bandpass filter. |
Keywords: | Beveridge-Nelson decomposition, output gap, signal-to-noise ratio |
JEL: | C18 E17 E32 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2016-09&r=ecm |
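For orientation, here is the textbook BN decomposition for the AR(1)-in-growth case (a sketch on simulated data; the paper's BN filter additionally imposes a low signal-to-noise ratio through a restricted higher-order AR fit, which is not implemented here):

    import numpy as np

    def bn_cycle_ar1(y):
        # BN cycle of log output y when growth Dy_t follows an AR(1):
        # Dy_t - mu = phi (Dy_{t-1} - mu) + e_t  implies
        # cycle_t = y_t - trend_t = -(phi / (1 - phi)) (Dy_t - mu).
        dy = np.diff(y)
        mu = dy.mean()
        x, z = dy[:-1] - mu, dy[1:] - mu
        phi = np.dot(x, z) / np.dot(x, x)     # OLS AR(1) coefficient
        return -(phi / (1.0 - phi)) * (dy - mu)

    # Demo on a simulated series with persistent growth.
    rng = np.random.default_rng(3)
    g = np.zeros(400)
    for t in range(1, 400):
        g[t] = 0.005 + 0.4 * (g[t - 1] - 0.005) + 0.006 * rng.standard_normal()
    y = np.cumsum(g)
    print(bn_cycle_ar1(y)[-5:])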
By: | Arie Beresteanu |
Abstract: | This paper considers estimation of discrete choice models when agents report their ranking of the alternatives (or some of them) rather than just the utility maximizing alternative. We investigate the parametric conditional rank-ordered Logit model. We show that conditions for identification do not change even if we observe ranking. Moreover, we fill a gap in the literature and show analytically and by Monte Carlo simulations that efficiency increases as we use additional information on the ranking. |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:5878&r=ecm |
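The workhorse here is the rank-ordered ("exploded") Logit likelihood (a standard object, consistent with the model named in the abstract): if an agent ranks alternatives $r_1 \succ r_2 \succ \dots \succ r_J$ with systematic utilities $v_j = x_j'\beta$, then

$$ P(r_1 \succ \dots \succ r_J) = \prod_{k=1}^{J-1} \frac{\exp(v_{r_k})}{\sum_{m=k}^{J} \exp(v_{r_m})}, $$

a product of standard Logit choice probabilities over successively smaller choice sets, which is where the extra ranking information enters and the efficiency gains arise.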
By: | Angie ANDRIKOGIANNOPOULOU (University of Geneva and Swiss Finance Institute); Filippos PAPAKONSTANTINOU (Imperial College London) |
Abstract: | We propose a novel approach to estimating the cross-sectional distribution of skill in the mutual fund industry, the proportion of funds with zero, negative, and positive alpha, and the skill of individual funds. We model the distribution of skill with a point mass at zero and two components, one with negative and one with positive support, and we tackle model specification uncertainty. We find that the skill distribution is highly non-normal, exhibiting heavy tails and negative skewness, and that while 14% of funds generate positive alpha, 76% have negative alpha; these results yield significantly different asset allocation decisions than previous estimates. Furthermore, portfolios formed using our methodology outperform those formed using alternative methodologies. |
Keywords: | Mutual Funds, Skill, Performance, Specification Uncertainty, Point Mass, Bayesian Estimation |
JEL: | G11 G23 C11 C52 C58 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1442&r=ecm |
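In schematic form (my notation), the skill distribution described above is a three-part mixture

$$ f(\alpha) = \pi_0\, \delta_0(\alpha) + \pi_-\, f_-(\alpha) + \pi_+\, f_+(\alpha), \qquad \pi_0 + \pi_- + \pi_+ = 1, $$

with $\delta_0$ a point mass at zero and $f_-$, $f_+$ densities supported on the negative and positive half-lines respectively; the reported estimates correspond to $\pi_+ \approx 0.14$ and $\pi_- \approx 0.76$, with the remaining mass at zero.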
By: | Christoph Engel (Max Planck Institute for Research on Collective Goods); Oliver Kirchkamp (University Jena, School of Economics) |
Abstract: | We provide an example of an errors-in-variables problem that is often neglected yet quite common in lab experimental practice: in one task, attitude towards risk is measured; in another task, participants behave in a way that can possibly be explained by their risk attitude. How should we deal with inconsistent behaviour in the risk task? Ignoring these observations entails two biases: an errors-in-variables bias and a selection bias. We argue that inconsistent observations should be exploited to address the errors-in-variables problem, which can easily be done within a Bayesian framework. |
Keywords: | Risk, lab experiment, public good, errors in variables, Bayesian inference |
JEL: | C91 D43 L41 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:mpg:wpaper:2016_11&r=ecm |
By: | Spencer WHEATLEY (ETH Zurich); Vladimir FILIMONOV (ETH Zurich); Didier SORNETTE (ETH Zurich and Swiss Finance Institute) |
Abstract: | We introduce the Hawkes process with renewal immigration and make its statistical estimation possible with two Expectation Maximization (EM) algorithms. The standard Hawkes process introduces immigrant points via a Poisson process, and each immigrant has a subsequent cluster of associated offspring of multiple generations. We generalize the immigration to come from a renewal process, introducing dependence between neighbouring clusters and allowing for over/under dispersion in cluster locations. This complicates evaluation of the likelihood, since one needs to know which subset of the observed points are immigrants. Two EM algorithms enable estimation here: the first is an extension of an existing algorithm that treats the entire branching structure - which points are immigrants, and which point is the parent of each offspring - as missing data. The second considers only whether a point is an immigrant or not as missing data and can be implemented with linear time complexity. Both algorithms are found to be consistent in simulation studies. Further, we show that misspecifying the immigration process introduces significant bias into model estimation - especially of the branching ratio, which quantifies the strength of self-excitation. Thus, this extended model provides a valuable alternative in practice. |
Keywords: | Expectation-maximization algorithm, Branching process models, Renewal Cluster process models, Point process models, non-parametric estimation, Hawkes process, immigration, branching structure |
JEL: | C40 C63 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1453&r=ecm |
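For the standard Poisson-immigration case that the first algorithm extends, the E-step responsibilities are the familiar branching probabilities (the renewal-immigration generalization modifies the immigrant term): with intensity $\lambda(t_j) = \mu + \sum_{i < j} g(t_j - t_i)$,

$$ p_{0j} = \frac{\mu}{\lambda(t_j)}, \qquad p_{ij} = \frac{g(t_j - t_i)}{\lambda(t_j)}, \quad i < j, $$

i.e. the probability that point $j$ is an immigrant, or an offspring of point $i$, given the data and current parameters; the M-step then reweights the data by these probabilities.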
By: | Carrasco, Marine; Rossi, Barbara |
Abstract: | This paper considers in-sample prediction and out-of-sample forecasting in regressions with many exogenous predictors. We consider four dimension reduction devices: principal components, Ridge, Landweber Fridman, and Partial Least Squares. We derive rates of convergence for two representative models: an ill-posed model and an approximate factor model. The theory is developed for a large cross-section and a large time-series. As all these methods depend on a tuning parameter to be selected, we also propose data-driven selection methods based on cross-validation and establish their optimality. Monte Carlo simulations and an empirical application to forecasting inflation and output growth in the U.S. show that data-reduction methods outperform conventional methods in several relevant settings, and might effectively guard against instabilities in predictors' forecasting ability. |
Keywords: | factor models; Forecasting; GDP forecasts; large datasets; partial least squares; principal components; regularization methods; Ridge; sparsity; variable selection |
JEL: | C22 C52 C53 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11388&r=ecm |
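A minimal sketch of three of the four devices (principal components, Ridge, and Partial Least Squares; Landweber-Fridman and the paper's optimality theory for the tuning parameters are not reproduced) with cross-validated tuning on synthetic approximate-factor data:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression, RidgeCV
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import TimeSeriesSplit, GridSearchCV

    # Synthetic approximate-factor data: T obs, N predictors, few factors.
    rng = np.random.default_rng(0)
    T, N, r = 200, 100, 3
    F = rng.standard_normal((T, r))
    X = F @ rng.standard_normal((r, N)) + rng.standard_normal((T, N))
    y = F @ np.array([1.0, -0.5, 0.25]) + 0.5 * rng.standard_normal(T)

    cv = TimeSeriesSplit(n_splits=5)          # respects time ordering

    # Principal-components regression with CV-chosen number of factors;
    # Ridge with CV-chosen penalty; PLS with CV-chosen components.
    pcr = GridSearchCV(make_pipeline(PCA(), LinearRegression()),
                       {"pca__n_components": [1, 3, 5, 10]}, cv=cv)
    ridge = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=cv)
    pls = GridSearchCV(PLSRegression(), {"n_components": [1, 3, 5, 10]}, cv=cv)

    for name, model in [("PCR", pcr), ("Ridge", ridge), ("PLS", pls)]:
        model.fit(X[:-20], y[:-20])           # hold out the last 20 obs
        mse = np.mean((model.predict(X[-20:]).ravel() - y[-20:]) ** 2)
        print(name, round(mse, 4))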