New Economics Papers on Econometrics |
By: | Taras Bodnar; Arjun K. Gupta; Nestor Parolya |
Abstract: | In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics where the number of variables $p\rightarrow\infty$ and the sample size $n\rightarrow\infty$ so that $p/n\rightarrow c\in (0, +\infty)$. The precision matrix is estimated directly, without inverting the corresponding estimator for the covariance matrix. Recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and to estimate them consistently. The resulting distribution-free estimator almost surely attains the minimum Frobenius loss. Additionally, we prove that the Frobenius norms of the inverse and of the pseudo-inverse sample covariance matrices tend almost surely to deterministic quantities, which we estimate consistently. Finally, a simulation study compares the suggested estimator with precision matrix estimators proposed in the literature. The optimal shrinkage estimator shows significant improvement and robustness even for non-normally distributed data. (An illustrative numerical sketch follows this entry.) |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1308.0931&r=ecm |
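A minimal numerical sketch in Python of the linear shrinkage idea in the entry above. It is not the paper's feasible estimator: the shrinkage intensities below are oracle quantities computed from the true precision matrix inside a simulation, whereas the paper derives deterministic equivalents of these intensities and estimates them from the data alone under $p/n\rightarrow c$. The diagonal population covariance, the sample sizes and the identity shrinkage target are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 100                              # high-dimensional regime: p/n = 2
Sigma = np.diag(np.linspace(0.5, 2.5, p))    # assumed population covariance (illustrative)
Pi = np.linalg.inv(Sigma)                    # true precision matrix, known only inside the simulation

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X / n                              # sample covariance, singular since p > n
A = np.linalg.pinv(S)                        # pseudo-inverse of the sample covariance
I = np.eye(p)

# Oracle shrinkage intensities: argmin over (alpha, beta) of
# ||alpha*A + beta*I - Pi||_F^2, solved via the 2x2 normal equations.
G = np.array([[np.sum(A * A), np.trace(A)],
              [np.trace(A),   float(p)]])
c = np.array([np.sum(A * Pi), np.trace(Pi)])
alpha, beta = np.linalg.solve(G, c)
Pi_shrunk = alpha * A + beta * I

frob = lambda M: np.linalg.norm(M - Pi, "fro")
print(f"Frobenius loss of the pseudo-inverse:   {frob(A):.2f}")
print(f"Frobenius loss of the oracle shrinkage: {frob(Pi_shrunk):.2f}")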
By: | Karun Adusumilli; Taisuke Otsu |
Abstract: | We extend the method of empirical likelihood to cover hypotheses involving the Aumann expectation of random sets. By exploiting the properties of random sets, we convert the testing problem into one involving a continuum of moment restrictions, for which we propose two inferential procedures. The first, which we term marked empirical likelihood, corresponds to constructing a non-parametric likelihood for each moment restriction and assessing the resulting process. The second, termed sieve empirical likelihood, corresponds to constructing a likelihood for a vector of moments with growing dimension. We derive the asymptotic distributions under the null and under a sequence of local alternatives for both types of tests and prove their consistency. The applicability of these inferential procedures is demonstrated in the context of two examples: the mean of interval observations and best linear predictors for interval outcomes. (A short background display follows this entry.) |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/574&r=ecm |
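The conversion of the testing problem into a continuum of moment restrictions mentioned in the entry above typically runs through the support function of a random set. The display below is the textbook characterization of the Aumann expectation of an integrably bounded random compact convex set $X\subset\mathbb{R}^{d}$, offered as background for the abstract rather than as the paper's exact formulation.

\[
s(u, X) = \sup_{x \in X} u^{\top}x, \qquad
s\bigl(u, \mathbb{E}_{A}[X]\bigr) = \mathbb{E}\bigl[s(u, X)\bigr]
\quad \text{for all } u \in \mathbb{S}^{d-1},
\]
so the hypothesis $\mathbb{E}_{A}[X] = \Theta_{0}$ is equivalent to the continuum of moment restrictions
\[
\mathbb{E}\bigl[s(u, X)\bigr] - s(u, \Theta_{0}) = 0, \qquad u \in \mathbb{S}^{d-1}.
\]
In the interval-data example, $X = [Y_{L}, Y_{U}] \subset \mathbb{R}$, the unit sphere is $\{-1, +1\}$, and for $\Theta_{0} = [\theta_{L}, \theta_{U}]$ the restrictions reduce to $\mathbb{E}[Y_{L}] = \theta_{L}$ and $\mathbb{E}[Y_{U}] = \theta_{U}$.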
By: | Taras Bodnar; Arjun K. Gupta; Nestor Parolya |
Abstract: | In this work we construct an optimal linear shrinkage estimator for the covariance matrix in high dimensions. We consider the general asymptotics where the number of variables $p\rightarrow\infty$ and the sample size $n\rightarrow\infty$ so that $p/n\rightarrow c\in (0, +\infty)$. Recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and to estimate them consistently. The developed distribution-free estimators almost surely attain the smallest Frobenius loss over all linear shrinkage estimators for the covariance matrix. Additionally, we prove that the Frobenius norm of the sample covariance matrix tends almost surely to a deterministic quantity, which can be consistently estimated. (A brief benchmark sketch follows this entry.) |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1308.2608&r=ecm |
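For comparison with the entry above, a brief Python sketch of the classic Ledoit-Wolf linear shrinkage of the sample covariance toward a scaled identity, as implemented in scikit-learn; this is a standard reference point in this literature, not the estimator proposed in the paper. The equicorrelation population covariance and the dimensions are illustrative assumptions.

import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)
p, n = 100, 60                                     # p comparable to n, as in the general asymptotics
Sigma = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)    # assumed equicorrelation covariance (illustrative)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

lw = LedoitWolf().fit(X)                           # linear shrinkage of S toward a scaled identity
S = np.cov(X, rowvar=False, bias=True)             # plain sample covariance for comparison

frob = lambda M: np.linalg.norm(M - Sigma, "fro")
print(f"shrinkage intensity:          {lw.shrinkage_:.3f}")
print(f"Frobenius loss, sample cov:   {frob(S):.2f}")
print(f"Frobenius loss, Ledoit-Wolf:  {frob(lw.covariance_):.2f}")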
By: | Barbara Sianesi (Institute for Fiscal Studies and IFS) |
Abstract: | One of the most powerful critiques of the use of randomised experiments in the social sciences is the possibility that individuals might react to the randomisation itself, thereby rendering the causal inference from the experiment irrelevant for policy purposes. In this paper we set out a theoretical framework for the systematic consideration of “randomisation bias”, and provide what is to our knowledge the first empirical evidence on this form of bias in an actual social experiment, the UK Employment Retention and Advancement (ERA) study. Specifically, we empirically test the extent to which random assignment has affected the process of participation in the ERA study. We further propose a non-experimental way of assessing the extent to which the treatment effects stemming from the experimental sample are representative of the impacts that would have been experienced by the population who would have been exposed to the program in routine mode. We consider both the case of administrative outcome measures available for the entire relevant sample and that of survey-based outcome measures. For survey outcomes we extend our estimators to also account for selective non-response based on observed characteristics. For both administrative and survey data, we further extend our proposed estimators to handle the nonlinear case of binary outcomes. |
Date: | 2014–05 |
URL: | http://d.repec.org/n?u=RePEc:ifs:ifsewp:14/10&r=ecm |
By: | Xavier D'Haultfoeuille; Arnaud Maurel; Yichong Zhang |
Abstract: | We consider the estimation of a semiparametric location-scale model subject to endogenous selection, in the absence of an instrument or a large support regressor. Identification relies on the independence between the covariates and selection, for arbitrarily large values of the outcome. In this context, we propose a simple estimator, which combines extremal quantile regressions with minimum distance. We establish the asymptotic normality of this estimator by extending previous results on extremal quantile regressions to allow for selection. Finally, we apply our method to estimate the black-white wage gap among males from the NLSY79 and NLSY97. We find that premarket factors such as AFQT and family background characteristics play a key role in explaining the level and evolution of the black-white wage gap. (A generic illustrative sketch follows this entry.) |
JEL: | C21 C24 J31 |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:20257&r=ecm |
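A generic, mechanics-only Python sketch of combining several high-quantile regression slopes through a simple minimum-distance step, using statsmodels. It is not the paper's estimator: the data below are homoskedastic, selection is completely random rather than endogenous, the quantile levels are fixed rather than drifting to one with the sample size, and the identity weighting reduces the minimum-distance step to a plain average. The sketch only illustrates the estimation mechanics named in the abstract.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)        # homoskedastic linear model, true slope 0.5
observed = rng.uniform(size=n) < 0.7          # completely random selection, purely illustrative
xs, ys = x[observed], y[observed]

X = sm.add_constant(xs)                       # constant first, slope second
taus = [0.90, 0.95, 0.98]                     # illustrative "extreme" quantile levels
slopes = np.array([sm.QuantReg(ys, X).fit(q=t).params[1] for t in taus])

# Minimum-distance combination of the three slope estimates; with an identity
# weighting matrix this reduces to a simple average.
beta_hat = slopes.mean()
print(f"slope estimates at {taus}: {np.round(slopes, 3)}")
print(f"combined slope estimate:   {beta_hat:.3f}")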
By: | Felix Ming Fai Wong; Zhenming Liu; Mung Chiang |
Abstract: | We revisit the problem of predicting directional movements of stock prices based on news articles: here our algorithm uses daily articles from The Wall Street Journal to predict the closing stock prices on the same day. We propose a unified latent space model to characterize the "co-movements" between stock prices and news articles. Unlike many existing approaches, our new model is able to simultaneously leverage the correlations: (a) among stock prices, (b) among news articles, and (c) between stock prices and news articles. Thus, our model is able to make daily predictions on more than 500 stocks (most of which are not even mentioned in any news article) while having low complexity. We carry out extensive backtesting on trading strategies based on our algorithm. The results show that our model achieves a substantially better accuracy rate (55.7%) than many widely used algorithms. The return (56%) and Sharpe ratio of a trading strategy based on our model are also much higher than those of baseline indices. |
Date: | 2014–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1406.7330&r=ecm |
By: | Rahmanov, Ramiz |
Abstract: | The paper describes the evolutionary history of macroeconometrics over the last one hundred years. Three main approaches are distinguished, their underlying principles are discussed, and the weaknesses of each are considered. The paper also reviews current developments in the field and indicates the directions of ongoing research. |
Keywords: | Macroeconometrics, history, Cowles Commission, VAR, VECM, DSGE |
JEL: | B41 C50 C6 C60 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:56869&r=ecm |