Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2016-10-16
High Dimensional Factor Models: An Empirical Bayes Approach
http://d.repec.org/n?u=RePEc:apc:wpaper:2016-075&r=ecm
We propose an empirical Bayesian implementation of principal components analysis for estimating high-dimensional factor models. The method is evaluated in a large Monte Carlo study in which we compare the traditional principal components estimator with our proposed empirical Bayes version. We find that the mean squared error gain obtained from the empirical Bayes implementation increases as the factor specification becomes weaker. We further compare the standard and empirical Bayes principal components estimators with their maximum likelihood counterparts and document that in all cases the maximum likelihood estimates remain more accurate. The methodology is illustrated in two empirical applications: one nowcasting macroeconomic time series and one on portfolio management. First, we find that the empirical Bayesian principal components estimates outperform the standard principal components estimates when compared on the mean squared error for the inner product of the macroeconomic forecast estimates. Second, in the portfolio optimization problem, the covariance matrix of stock returns estimated by empirical Bayes methods achieves, in most cases, the highest information ratio and the highest expected return for the portfolio manager.
James Sampi
Shrinkage, Principal Component Analysis, Posterior modes, Nowcasting, Portfolio Management
2016-10
Confidence Sets for the Break Date in Cointegrating Regressions
http://d.repec.org/n?u=RePEc:hit:econdp:2016-07&r=ecm
In this paper, we propose constructing confidence sets for the break date in cointegrating regressions by inverting a test for the break location, obtained by maximizing the weighted average of power. We find that the limiting distribution of the test depends on the number of I(1) regressors whose coefficients sustain structural change and the number of I(1) regressors whose coefficients are fixed throughout the sample. Monte Carlo simulations then show that, compared with a confidence interval constructed with the existing method based on the limiting distribution of the break point estimator under a shrinking shift, the confidence set proposed in the present paper has a more accurate coverage rate, while its length is comparable. Using the method developed in this paper, we then investigate cointegrating regressions, allowing for a break, of Russian macroeconomic variables with oil prices.
KUROZUMI, Eiji
SKROBOTOV, Anton
Confidence interval, structural change, cointegration, Russian economy, oil price
2016-09
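The test-inversion idea above (collect all candidate break dates the test does not reject) can be sketched in a much simpler setting than the paper's: a mean shift in an i.i.d. series, with a profile quasi-likelihood-ratio statistic. The cutoff value 7.0 is purely illustrative and is not the paper's weighted-average-power critical value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated series with a mean shift at t0 (a simplified stand-in for the
# paper's cointegrating-regression setting)
T, t0 = 200, 120
y = np.concatenate([rng.normal(0.0, 1.0, t0), rng.normal(1.5, 1.0, T - t0)])

def ssr(y, tb):
    """Sum of squared residuals with separate means before/after break tb."""
    y1, y2 = y[:tb], y[tb:]
    return ((y1 - y1.mean()) ** 2).sum() + ((y2 - y2.mean()) ** 2).sum()

trim = 20
dates = np.arange(trim, T - trim)
ssrs = np.array([ssr(y, tb) for tb in dates])

# Profile quasi-likelihood-ratio: small where the candidate break fits well
lr = T * np.log(ssrs / ssrs.min())

# Invert the test: the confidence set collects all dates NOT rejected
conf_set = dates[lr <= 7.0]
print(len(conf_set), int(conf_set.min()), int(conf_set.max()))
```

As in the paper, inversion yields a set of dates rather than a point estimate plus or minus a bandwidth, which is what gives coverage more room to be accurate.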
On the applicability of maximum likelihood methods: From experimental to financial data
http://d.repec.org/n?u=RePEc:zbw:safewp:148&r=ecm
This paper addresses whether and to what extent econometric methods used in experimental studies can be adapted and applied to financial data to detect the best-fitting preference model. To address the research question, we implement a frequently used nonlinear probit model in the style of Hey and Orme (1994) and base our analysis on a simulation study. In detail, we simulate trading sequences for a set of utility models and try to identify, by maximum likelihood, the underlying utility model and the parameterization used to generate these sequences. We find that for a very broad classification of utility models this method provides acceptable outcomes. Yet a closer look at the preference parameters reveals several caveats that come along with typical issues attached to financial data, and some of these issues seem to drive our results. In particular, deviations are attributable to effects stemming from multicollinearity and coherent under-identification problems, where some of these detrimental effects can be captured up to a certain degree by adjusting the error term specification. Furthermore, additional uncertainty stemming from changing market parameter estimates affects the precision of our estimates of risk preferences and cannot simply be remedied by using a higher standard deviation of the error term or a different assumption regarding its stochastic process. Notably, if the variance of the error term becomes large, we detect a tendency to identify SPT as the utility model providing the best fit to simulated trading sequences. We also find that a frequent issue, namely serial correlation of the residuals, does not seem to be significant. However, we detect a tendency to prefer nesting models over nested utility models, which is particularly prevalent if RDU and EXPO utility models are estimated along with EUT and CRRA utility models.
Jakusch, Sven Thorsten
Utility Functions,Model Selection,Parameter Elicitation
2016
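The Hey and Orme (1994) approach mentioned above fits a utility model to observed binary choices by maximum likelihood with an additive choice error. The sketch below simulates 50/50 lottery choices under CRRA expected utility and recovers the risk parameter by a grid-search MLE; it uses a logistic (rather than the paper's probit) error for simplicity, and all payoff ranges and grid bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def crra(x, r):
    """CRRA utility u(x) = x^(1-r) / (1-r), for r != 1."""
    return x ** (1 - r) / (1 - r)

# Simulate binary lottery choices: each task is a 50/50 two-outcome
# lottery A versus lottery B (columns: xA1, xA2, xB1, xB2)
n, r_true, sig_true = 500, 0.5, 0.2
pay = rng.uniform(1.0, 10.0, size=(n, 4))

def eu_diff(r):
    uA = 0.5 * crra(pay[:, 0], r) + 0.5 * crra(pay[:, 1], r)
    uB = 0.5 * crra(pay[:, 2], r) + 0.5 * crra(pay[:, 3], r)
    return uA - uB

# Logistic (Fechner-style) choice error; the paper uses a probit variant
prob_A = 1.0 / (1.0 + np.exp(-eu_diff(r_true) / sig_true))
choose_A = rng.uniform(size=n) < prob_A

# Maximum likelihood by grid search over (r, sigma)
best_ll, r_hat, sig_hat = -np.inf, None, None
for r in np.linspace(0.05, 0.95, 91):
    d = eu_diff(r)
    for s in np.linspace(0.05, 0.5, 46):
        z = np.clip(d / s, -50.0, 50.0)          # avoid overflow in exp
        pA = np.clip(1.0 / (1.0 + np.exp(-z)), 1e-12, 1 - 1e-12)
        ll = np.sum(np.where(choose_A, np.log(pA), np.log(1 - pA)))
        if ll > best_ll:
            best_ll, r_hat, sig_hat = ll, r, s
print(r_hat, sig_hat)
```

Note how the utility scale and the error variance interact in d / s: this is exactly the kind of under-identification the abstract flags, since a rescaled utility with a rescaled error can produce nearly identical choice probabilities.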
A note on upper-patched generators for Archimedean copulas
http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01347869&r=ecm
The class of multivariate Archimedean copulas is defined through a real-valued function called the generator of the copula. This generator satisfies certain properties, including d-monotonicity. We propose here a new basic transformation of this generator that preserves these properties, thus ensuring the validity of the transformed generator and inducing a proper copula. The transformation acts only on a specific portion of the generator; it both avoids reducing the likelihood on a given dataset and allows the upper tail dependence coefficient of the transformed copula to be chosen. Numerical illustrations show the utility of this construction, which can improve the fit of a given copula in both its central part and its tail.
Elena Di Bernardino
Didier Rullière
Archimedean copulas, transformations, distortions, tail dependence coefficients, likelihood
2016-07-21
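The generator machinery the abstract relies on can be made concrete with a standard example. The sketch below builds a bivariate Gumbel copula from its generator and numerically checks the upper tail dependence coefficient against the known closed form 2 - 2^(1/theta); the paper's upper-patched transformation itself is not reproduced here.

```python
import numpy as np

theta = 2.0  # Gumbel parameter (theta >= 1)

def psi(t):
    """Gumbel generator, completely monotone (hence d-monotone for all d)."""
    return np.exp(-t ** (1 / theta))

def psi_inv(u):
    """Inverse generator."""
    return (-np.log(u)) ** theta

def C(u, v):
    """Bivariate Archimedean copula built from the generator."""
    return psi(psi_inv(u) + psi_inv(v))

# Boundary check: C(u, 1) = u for any copula
print(round(C(0.3, 1.0), 6))

# Numerical upper tail dependence coefficient:
# lambda_U = lim_{u->1} (1 - 2u + C(u, u)) / (1 - u)
u = 1 - 1e-6
lam_u = (1 - 2 * u + C(u, u)) / (1 - u)
print(round(lam_u, 3))  # ≈ 2 - 2**(1/theta) ≈ 0.586
```

A transformation of psi that changes its behaviour only near t = 0 changes lam_u while leaving the copula's central part (moderate u, v) essentially untouched, which is the effect the abstract describes.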
Adjusting for Information Content when Comparing Forecast Performance
http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0328&r=ecm
Cross-institutional forecast evaluations may be severely distorted by the fact that forecasts are made at different points in time, and thus with different amounts of information. This paper proposes a method to account for these differences, computing the timing effect and the forecaster's ability simultaneously. Monte Carlo simulations demonstrate that evaluations that do not adjust for differences in information content may be misleading. In addition, the method is applied to a real-world data set of 10 Swedish forecasters for the period 1999-2015. The results show that the ranking of the forecasters is affected by the proposed adjustment.
Andersson, Michael K.
Aranki, Ted
Reslow, André
Forecast error; Forecast comparison; Publication time; Evaluation; Error component model; Panel data
2016-08-01
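The distortion the abstract describes is easy to demonstrate: a forecaster who publishes earlier faces a timing penalty that a raw error ranking misattributes to low skill. The sketch below uses a plain least-squares regression on forecaster dummies plus publication lead as a stand-in for the paper's error-component model; all magnitudes are made-up simulation values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate forecast errors for 5 forecasters publishing at different leads:
# error = ability + timing penalty (longer lead = less information) + noise
n_f, n_t = 5, 40
ability = np.array([0.0, 0.1, 0.2, 0.3, 0.4])    # forecaster 0 is truly best...
base_lead = np.array([5.0, 4.0, 3.0, 2.0, 1.0])  # ...but publishes earliest
lead = base_lead[:, None] + rng.uniform(-1.0, 1.0, (n_f, n_t))
err = ability[:, None] + 0.15 * lead + rng.normal(0.0, 0.2, (n_f, n_t))

# Naive ranking on raw mean error penalizes early publishers
naive = err.mean(axis=1)

# Simultaneous estimation: regress errors on forecaster dummies and lead
D = np.repeat(np.eye(n_f), n_t, axis=0)          # forecaster dummies
X = np.column_stack([D, lead.ravel()])
beta, *_ = np.linalg.lstsq(X, err.ravel(), rcond=None)
adjusted, lead_effect = beta[:n_f], beta[n_f]
print(np.argmin(naive), np.argmin(adjusted))
```

The within-forecaster variation in lead is what identifies the timing effect here; with fixed publication schedules and no variation, ability and timing would be perfectly confounded.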
Making Markowitz's Portfolio Optimization Theory Practically Useful
http://d.repec.org/n?u=RePEc:pra:mprapa:74360&r=ecm
The traditional estimated return for the Markowitz mean-variance optimization has been demonstrated to depart seriously from its theoretic optimal return. We prove that this phenomenon is natural and that the estimated optimal return is always $\sqrt{\gamma}$ times as large as its theoretic counterpart, where $\gamma = \frac{1}{1-y}$ and $y$ is the ratio of the dimension to the sample size. We then develop new bootstrap-corrected estimators for the optimal return and its asset allocation and prove that these bootstrap-corrected estimates are proportionally consistent with their theoretic counterparts. Our theoretical results are further confirmed by simulations, which show that the essence of the portfolio analysis problem can be adequately captured by the proposed approach. This greatly enhances the practical usefulness of the Markowitz mean-variance optimization procedure.
BAI, ZHIDONG
LIU, HUIXIA
WONG, WING-KEUNG
Optimal Portfolio Allocation; Mean-Variance Optimization; Large Random Matrix; Bootstrap Method
2016-10-08
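The inflation the abstract proves can be seen in a quick simulation. The sketch below compares the plug-in Sharpe-type quantity sqrt(mu' Sigma^{-1} mu) with its theoretic value in an i.i.d. Gaussian design with y = N/T = 0.5; this is a related plug-in illustration of the phenomenon, not the paper's exact optimal-return statistic, so the measured ratio is close to but not exactly sqrt(gamma).

```python
import numpy as np

rng = np.random.default_rng(4)

# Dimension-to-sample ratio y = N/T = 0.5, so gamma = 1/(1-y) = 2
N, T = 100, 200
mu = np.full(N, 0.05)          # true mean returns
Sigma = 0.04 * np.eye(N)       # true covariance (iid assets for simplicity)
R = rng.multivariate_normal(mu, Sigma, size=T)

mu_hat = R.mean(axis=0)
S_hat = np.cov(R, rowvar=False)

# Sharpe-type optimal return sqrt(mu' Sigma^{-1} mu): plug-in vs theoretic
theoretic = np.sqrt(mu @ np.linalg.solve(Sigma, mu))
plug_in = np.sqrt(mu_hat @ np.linalg.solve(S_hat, mu_hat))
ratio = plug_in / theoretic
print(round(ratio, 2))         # noticeably above 1
```

The upward bias comes from two sources: the estimated means inherit sampling noise aligned with the estimated inverse covariance, and the sample covariance inverse itself is inflated when N/T is not small. The paper's bootstrap correction targets exactly this gap.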
Interpreting the latent dynamic factors by threshold FAVAR model
http://d.repec.org/n?u=RePEc:boe:boeewp:0622&r=ecm
This paper proposes a method to interpret factors that are otherwise difficult to assign economic meaning to, by utilizing a threshold factor-augmented vector autoregression (FAVAR) model. We observe the frequency with which the factor loadings are induced to zero when they fall below the estimated threshold to infer the economic relevance that the factors carry. The results indicate that we can link the factors to particular economic activities, such as real activity and unemployment, without any prior specification on the data set. Exploiting the flexibility of FAVAR models in structural analysis, we examine impulse response functions of the factors and of individual variables to a monetary policy shock. The results suggest that the proposed method provides a useful framework for the interpretation of factors and the associated shock transmission.
Hacioglu, Sinem
Tuzcuoglu, Kerem
Factor models; FAVAR; latent threshold; MCMC; interpretation of latent factors; shrinkage estimation
2016-10-07
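The interpretation device above — zeroing out loadings below a threshold and reading off which series each factor still loads on — can be sketched without the full MCMC machinery. Below, loadings are estimated by PCA and hard-thresholded at half the per-factor maximum; both the estimator and the threshold rule are illustrative assumptions, since the paper estimates a latent threshold by MCMC.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two blocks of series driven by separate factors (think a "real activity"
# block and a second block); loadings are block-sparse by construction
T, n = 300, 20
f1, f2 = rng.standard_normal(T), rng.standard_normal(T)
X = np.empty((T, n))
X[:, :10] = np.outer(f1, rng.uniform(0.8, 1.2, 10)) \
    + 0.3 * rng.standard_normal((T, 10))
X[:, 10:] = 0.6 * np.outer(f2, rng.uniform(0.8, 1.2, 10)) \
    + 0.3 * rng.standard_normal((T, 10))

# PCA loadings for the first two factors
_, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
loadings = Vt[:2].T * s[:2] / np.sqrt(T)

# Hard-threshold small loadings to zero (crude stand-in for the paper's
# latent-threshold mechanism) and read off which block each factor carries
tau = 0.5 * np.abs(loadings).max(axis=0)
active = np.abs(loadings) > tau
print(active[:10, 0].sum(), active[10:, 1].sum())
```

Counting the active loadings per block is the abstract's "frequency of loadings induced to zero" idea in miniature: factor 1 is interpretable as the first block's activity measure because it survives thresholding only there.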