
on Econometrics 
By:  TaeHwan Kim (School of Economics, Yonsei University); Christophe Muller (AMSE - Aix-Marseille School of Economics, Aix-Marseille Univ., Centre national de la recherche scientifique (CNRS), École des Hautes Études en Sciences Sociales (EHESS), École Centrale de Marseille (ECM)) 
Abstract:  In this paper, we develop a test to detect the presence of endogeneity in conditional quantiles. Our test is a Hausman-type test based on the distance between two estimators, of which one is consistent only under no endogeneity while the other is consistent regardless of the presence of endogeneity in conditional quantile models. We derive the asymptotic distribution of the test statistic under the null hypothesis of no endogeneity. The finite-sample properties of the test are investigated through Monte Carlo simulations, and the test is found to have good size and power properties in finite samples. Unlike the test based on the IVQR estimator of Chernozhukov and Hansen (2006), whose computation becomes infeasible with more than a couple of variables, our approach remains computationally tractable. Finally, we apply our approach to test for endogeneity in conditional quantile models for estimating Engel curves using UK consumption and expenditure data. The pattern of endogeneity in the Engel curve is found to vary substantially across quantiles. 
Keywords:  regression quantile; endogeneity; two-stage estimation; Hausman test; Engel curve 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00854527&r=ecm 
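The Hausman-type comparison described in the abstract above can be sketched generically. The estimators, covariance matrices, and numbers below are hypothetical stand-ins (a textbook efficient-versus-robust setup), not the authors' actual quantile estimators:

```python
import numpy as np

def hausman_statistic(beta_a, beta_b, var_a, var_b):
    """Hausman-type distance statistic: under H0 both estimators are
    consistent and the statistic is asymptotically chi-squared with
    dim(beta) degrees of freedom, assuming var_b - var_a equals the
    covariance of the difference (the classical efficient-vs-robust case)."""
    diff = beta_a - beta_b
    v = var_b - var_a  # covariance of the difference in the classical setup
    return float(diff @ np.linalg.inv(v) @ diff)

# toy example with made-up numbers
beta_eff = np.array([1.02, 0.48])   # consistent only under H0
beta_rob = np.array([1.10, 0.55])   # consistent under H0 and H1
v_eff = np.diag([0.01, 0.02])
v_rob = np.diag([0.03, 0.05])
stat = hausman_statistic(beta_eff, beta_rob, v_eff, v_rob)
```

The statistic would then be compared against a chi-squared critical value with two degrees of freedom.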
By:  Nima Nonejad (Aarhus University and CREATES) 
Abstract:  We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Carlo simulations evaluate the properties of the estimation procedures. Results show that the proposed model is viable and flexible for purposes of forecasting volatility. Model uncertainty is accounted for by employing Bayesian model averaging. Bayesian model averaging provides very competitive forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications. 
Keywords:  Mixture innovation models, Markov chain Monte Carlo, Realized volatility 
JEL:  C11 C22 C51 C53 
Date:  2013–08–13 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201324&r=ecm 
By:  Arturas Juodis 
Abstract:  This paper considers estimation of Panel Vector Autoregressive Models of order 1 (PVAR(1)) with possible cross-sectional heteroscedasticity in the error terms. We focus on fixed-T consistent estimation methods in first differences (FD), with or without additional strictly exogenous regressors. Additional results for the panel FD OLS estimator and the FDLS estimator of Han and Phillips (2010) are provided. In the covariance stationary case it is shown that the univariate moment conditions of the latter estimator are violated for general parameter matrices in the multivariate case. Furthermore, we simplify the analysis of Binder, Hsiao, and Pesaran (2005) by providing analytical results for the first two derivatives of the Transformed Maximum Likelihood (TML) function. We extend the original model by taking into account possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. Moreover, we show that in the three-wave panel the log-likelihood function of the unrestricted TML estimator violates the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study. Results indicate that under effect stationarity the TML estimator encounters problems with global identification even for moderate values of T. 
Date:  2013–06–05 
URL:  http://d.repec.org/n?u=RePEc:ame:wpaper:1306&r=ecm 
By:  Matthew D. Webb (University of Calgary) 
Abstract:  Many empirical projects are well suited to a linear difference-in-differences research design. While estimation is straightforward, reliable inference can be a challenge. Past research has not only demonstrated that estimated standard errors are biased dramatically downwards in models with a group clustered design, but has also suggested a number of bootstrap-based improvements to the inference procedure. In this paper, I first demonstrate, using Monte Carlo experiments, that these bootstrap-based procedures and traditional cluster-robust standard errors perform poorly in situations with fewer than eleven clusters, a setting faced in many empirical applications. With few clusters, the wild cluster bootstrap-t procedure results in p-values that are not point identified. I subsequently introduce two easy-to-implement alternative procedures that involve the wild bootstrap. Further Monte Carlo simulations provide evidence that the use of a 6-point distribution with the wild bootstrap can improve the reliability of inference. 
Keywords:  CRVE, grouped data, clustered data, panel data, wild bootstrap, cluster wild bootstrap 
JEL:  C15 C21 C23 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1315&r=ecm 
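A minimal sketch of the wild cluster bootstrap idea discussed above, with Rademacher weights drawn once per cluster and applied to residuals from the null-restricted model. For brevity this bootstraps the coefficient itself rather than a t-statistic, whereas the paper's procedures are bootstrap-t variants; the model and all data below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(X, y):
    # least-squares coefficients; index 1 is the slope
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def wild_cluster_bootstrap_p(y, x, g, n_boot=399):
    """Wild cluster bootstrap p-value for H0: slope = 0 (non-studentized).
    Rademacher weights are drawn per cluster and applied to the residuals
    of the null-restricted (intercept-only) model."""
    X = np.column_stack([np.ones_like(x), x])
    beta_hat = ols_slope(X, y)
    resid0 = y - y.mean()                 # residuals under H0
    clusters = np.unique(g)
    count = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=clusters.size)  # one weight per cluster
        y_star = y.mean() + resid0 * w[np.searchsorted(clusters, g)]
        if abs(ols_slope(X, y_star)) >= abs(beta_hat):
            count += 1
    return (count + 1) / (n_boot + 1)

# toy clustered data: 8 clusters, cluster effects, true slope zero
g = np.repeat(np.arange(8), 25)
x = rng.normal(size=g.size) + 0.5 * g
y = 1.0 + rng.normal(size=g.size) + 0.3 * rng.normal(size=8)[g]
p = wild_cluster_bootstrap_p(y, x, g)
```

With very few clusters the bootstrap distribution has only 2^G distinct weight draws, which is the source of the point-identification problem the paper highlights.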
By:  Sucarrat, Genaro; Grønneberg, Steffen; Escribano, Alvaro 
Abstract:  Exponential models of Autoregressive Conditional Heteroscedasticity (ARCH) enable richer dynamics (e.g. contrarian or cyclical), provide greater robustness to jumps and outliers, and guarantee the positivity of volatility. The latter is not guaranteed in ordinary ARCH models, in particular when additional exogenous or predetermined variables ("X") are included in the volatility specification. Here, we propose estimation and inference methods, via (V)ARMA-X representations, for univariate and multivariate Generalised log-ARCH-X (i.e. log-GARCH-X) models when the conditional density is not known. The multivariate specification allows for volatility feedback across equations, and time-varying correlations can be fitted in a subsequent step. Finally, our empirical applications on electricity prices show that the model class is particularly useful when the X-vector is high-dimensional. 
Keywords:  ARCH, exponential GARCH, log-GARCH, ARMA-X, Multivariate GARCH 
JEL:  C22 C32 C51 C52 
Date:  2013–08–11 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:49344&r=ecm 
By:  Kaspar Wüthrich 
Abstract:  This paper studies the identification of coefficients in generalized linear predictors where the outcome variable suffers from nonclassical measurement errors. Combining a mixture model of data errors with the bounding procedure proposed by Stoye (2007), I derive bounds on the coefficient vector under different nonparametric assumptions about the structure of the measurement error. The method is illustrated by analyzing a simple earnings equation. 
Keywords:  Generalized linear predictor; Nonclassical measurement error; Contaminated sampling; Corrupt sampling; Multiplicative mean independence; Stochastic dominance; Nonparametric bounds 
JEL:  C2 C21 J24 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1304&r=ecm 
By:  Nima Nonejad (Aarhus University and CREATES) 
Abstract:  This paper proposes a model that simultaneously captures long memory and structural breaks. We model structural breaks through irreversible Markov switching or so-called change-point dynamics. The parameters subject to structural breaks and the unobserved states which determine the position of the structural breaks are sampled from the joint posterior density by sampling from their respective conditional posteriors using Gibbs sampling and Metropolis-Hastings. Monte Carlo simulations demonstrate that the proposed estimation approach is effective in identifying and dating structural breaks. Applied to daily S&P 500 data, one finds strong evidence of three structural breaks. The evidence of these breaks is robust to different specifications, including a GARCH specification for the conditional variance of volatility. 
Keywords:  Long memory, Structural breaks, Change-points, Gibbs sampling 
JEL:  C22 C11 C52 G10 
Date:  2013–08–13 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201326&r=ecm 
By:  Nima Nonejad (Aarhus University and CREATES) 
Abstract:  This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effects or with Student-t distributed errors. We also model the changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where the heteroskedasticity is specified by means of a Gaussian stochastic volatility process. 
Keywords:  Particle filter, Metropolis-Hastings, Unobserved components, Bayes 
JEL:  C22 C11 C63 
Date:  2013–08–13 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201327&r=ecm 
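The particle-filter ingredient of PMCMC can be illustrated with a bootstrap filter for a basic stochastic volatility model; inside PMCMC, this unbiased likelihood estimate would be plugged into a Metropolis-Hastings acceptance ratio. The model, parameter values, and particle count below are illustrative assumptions, not the paper's specifications:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_loglik(y, phi, sigma_eta, beta, n_part=500):
    """Bootstrap particle filter log-likelihood estimate for a basic SV model:
        h_t = phi * h_{t-1} + sigma_eta * eta_t,  y_t ~ N(0, beta^2 * exp(h_t))."""
    h = rng.normal(0.0, sigma_eta / np.sqrt(1 - phi**2), n_part)  # stationary init
    loglik = 0.0
    for t in range(len(y)):
        h = phi * h + sigma_eta * rng.normal(size=n_part)         # propagate particles
        var = beta**2 * np.exp(h)
        logw = -0.5 * (np.log(2 * np.pi * var) + y[t]**2 / var)   # observation density
        m = logw.max()
        w = np.exp(logw - m)                                      # stabilized weights
        loglik += m + np.log(w.mean())
        h = h[rng.choice(n_part, n_part, p=w / w.sum())]          # multinomial resample
    return loglik

# simulate a short series from the model, then evaluate the likelihood
T, phi, sig, beta = 100, 0.95, 0.2, 1.0
h_true = np.zeros(T)
for t in range(1, T):
    h_true[t] = phi * h_true[t - 1] + sig * rng.normal()
y = beta * np.exp(h_true / 2) * rng.normal(size=T)
ll = pf_loglik(y, phi, sig, beta)
```

A particle marginal Metropolis-Hastings sampler would call `pf_loglik` at each proposed parameter draw and accept or reject on the ratio of the estimates.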
By:  Tatiana Komarova 
Abstract:  The paper considers nonparametric estimation of absolutely continuous distribution functions of lifetimes of non-identical components in k-out-of-n systems from the observed "autopsy" data. In economics, ascending "button" or "clock" auctions with n heterogeneous bidders present 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions the estimation problem is shown to be well-posed and the suggested extremum sieve estimator is proven to be consistent. The paper illustrates the suggested estimation method using sieve spaces of Bernstein polynomials, which allow an easy implementation of constraints on the monotonicity of estimated distribution functions. 
Keywords:  k-out-of-n systems, competing risks, sieve estimation, Bernstein polynomials 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/564&r=ecm 
By:  Kees Jan van Garderen; H. Peter Boswijk 
Abstract:  The maximum likelihood estimator of the adjustment coefficient in a cointegrated vector autoregressive model (CVAR) is generally biased. For the case where the cointegrating vector is known in a first-order CVAR with no intercept, we derive a condition for the unbiasedness of the maximum likelihood estimator of the adjustment coefficients, and provide a simple characterization of the bias in case this condition is violated. A feasible bias correction method is shown to virtually eliminate the bias over a large part of the parameter space. 
Date:  2013–06–04 
URL:  http://d.repec.org/n?u=RePEc:ame:wpaper:1305&r=ecm 
By:  Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Yuya Sasaki 
Abstract:  This paper introduces average treatment effects conditional on the outcome variable in an endogenous setup where outcome Y, treatment X and instrument Z are continuous. These objects allow us to refine well-studied treatment effects like ATE and ATT in the case of continuous treatment (see Florens et al (2009)) by breaking them up according to the rank of the outcome distribution. For instance, in the returns-to-schooling case, the outcome conditioned average treatment effect on the treated (ATTO) gives the average effect of a small increase in schooling on the subpopulation characterised by a certain treatment intensity, say 16 years of schooling, and a certain rank in the wage distribution. We show that IV-type approaches are better suited to identify overall averages across the population, like the average partial effect or outcome conditioned versions thereof, while selection-type methods are better suited to identify ATT or ATTO. Importantly, none of the identification results relies on rectangular support of the errors in the identification equation. Finally, we apply all concepts to analyse the nonlinear heterogeneous effects of smoking during pregnancy on infant birth weight. 
Keywords:  Continuous treatment, average treatment effect on the treated, marginal treatment effect, average partial effect, local instrumental variables, nonseparable model, endogeneity, quantiles 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:39/13&r=ecm 
By:  William J. McCausland; A.A.J. Marley 
Abstract:  We complete the development of a testing ground for axioms of discrete stochastic choice. Our contribution here is to develop new posterior simulation methods for Bayesian inference, suitable for a class of prior distributions introduced by McCausland and Marley (2013). These prior distributions are joint distributions over various choice distributions over choice sets of different sizes. Since choice distributions over different choice sets can be mutually dependent, previous methods relying on conjugate prior distributions do not apply. We demonstrate the approach by analyzing data from a previously reported experiment and report evidence for and against various axioms. 
Keywords:  Random utility, discrete choice, Bayesian inference, MCMC 
JEL:  C11 C35 C53 D01 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:mtl:montec:072013&r=ecm 
By:  Halvarsson, Daniel (Ratio) 
Abstract:  The increasing interest in the application of geometric stable distributions has led to a need for appropriate estimators. Building on recent procedures for estimating the Linnik distribution, this paper develops two estimators for the geometric stable distribution. Closed form expressions are provided for the signed and unsigned fractional moments of the distribution. The estimators are then derived using the methods of fractional lower order moments and that of logarithmic moments. Their performance is tested on simulated data, where the lower order estimators, in particular, are found to give efficient results over most of the parameter space. 
Keywords:  Geometric stable distribution; Estimation; Fractional lower order moments; Logarithmic moments; Economics 
JEL:  C00 
Date:  2013–08–21 
URL:  http://d.repec.org/n?u=RePEc:hhs:ratioi:0216&r=ecm 
By:  T.D. Stanley; Hristos Doucouliagos 
Abstract:  This paper revisits and challenges two widely accepted practices in multiple meta-regression analysis: the prevalent use of random-effects meta-regression analysis (RE-MRA) and the correction of standard errors from fixed-effects meta-regression analysis (FE-MRA). Specifically, we investigate the bias of RE-MRA when there is publication selection bias and compare RE-MRA with an alternative weighted least squares meta-regression analysis (WLS-MRA). Simulations and statistical theory show that multiple WLS-MRA provides improved estimates of meta-regression coefficients and their confidence intervals when there is no publication bias. When there is publication selection bias, WLS-MRA dominates RE-MRA, especially when there is additive excess heterogeneity. WLS-MRA is also compared to FE-MRA, where the conventional wisdom is to correct the standard errors by dividing by √MSE. We demonstrate why it is better not to make this correction. 
Keywords:  meta-regression, weighted least squares, random-effects, fixed-effects 
Date:  2013–08–17 
URL:  http://d.repec.org/n?u=RePEc:dkn:econwp:eco_2013_2&r=ecm 
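The WLS-MRA estimator contrasted above is, in essence, inverse-variance weighted least squares without a heterogeneity adjustment. A minimal sketch with made-up study effects and standard errors (the moderator and all numbers are hypothetical):

```python
import numpy as np

def wls_mra(effects, se, X):
    """WLS meta-regression: regress estimated effects on moderators X using
    inverse-variance weights 1/se^2, without the multiplicative or additive
    heterogeneity adjustments of FE-MRA or RE-MRA."""
    root_w = 1.0 / se                      # sqrt of weights 1/se^2
    coef, *_ = np.linalg.lstsq(X * root_w[:, None], effects * root_w, rcond=None)
    return coef

# toy example: six studies, intercept plus one binary moderator
effects = np.array([0.30, 0.25, 0.40, 0.10, 0.20, 0.35])
se = np.array([0.05, 0.10, 0.08, 0.20, 0.15, 0.06])
X = np.column_stack([np.ones(6), np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])])
coef = wls_mra(effects, se, X)
```

With a single binary moderator the fit reduces to the weighted mean effect in each group, so the intercept equals the inverse-variance weighted mean of the moderator-zero studies.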
By:  Piper, Alan 
Abstract:  This short note discusses two alternative ways to model dynamics in happiness regressions. As explained, this may be important when standard fixed effects estimates have serial correlation in the residuals, but it is also potentially useful for providing new insights in the economics of happiness area even when serial correlation is not a problem. The two ways of modelling dynamics the note discusses are via a lagged dependent variable and via an AR(1) process. The usefulness and statistical appropriateness of each is discussed with reference to happiness. Finally, a flow chart is provided summarising the key decisions regarding the choice of, and potential necessity of, modelling dynamics. 
Keywords:  Happiness, Dynamics, Lagged Dependent Variable, AR(1) process, Estimation 
JEL:  C23 C50 I31 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:49364&r=ecm 
By:  Roland Langrock; Théo Michelot; Alexander Sohn; Thomas Kneib 
Abstract:  Stochastic volatility (SV) models mimic many of the stylized facts attributed to time series of asset returns, while maintaining conceptual simplicity. A substantial body of research deals with various techniques for fitting relatively basic SV models, which assume the returns to be conditionally normally distributed or Student-t distributed, given the volatility. In this manuscript, we consider a frequentist framework for estimating the conditional distribution in an SV model in a nonparametric way, thus avoiding any potentially critical assumptions on the shape. More specifically, we suggest representing the density of the conditional distribution as a linear combination of standardized B-spline basis functions, imposing a penalty term in order to arrive at a good balance between goodness of fit and smoothness. This allows us to employ the efficient hidden Markov model machinery in order to fit the model and to assess its predictive performance. We demonstrate the feasibility of the approach in a simulation study before applying it to three series of returns on stocks and one series of stock index returns. The nonparametric approach leads to an improved predictive capacity in some cases, and we find evidence for the conditional distributions being leptokurtic and negatively skewed. 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1308.5836&r=ecm 
By:  Selma Chaker 
Abstract:  Observed high-frequency prices are contaminated with liquidity costs or market microstructure noise. Using such data, we derive a new asset return variance estimator inspired by the market microstructure literature to explicitly model the noise and remove it from observed returns before estimating their variance. The returns adjusted for the estimated liquidity costs are either totally or partially free from noise. If the liquidity costs are fully removed, the sum of squared high-frequency returns – which would be inconsistent for return variance when based on observed returns – becomes a consistent variance estimator when based on adjusted returns. This novel estimator achieves the maximum possible rate of convergence. However, if the liquidity costs are only partially removed, the residual noise is smaller and closer to an exogenous white noise than the original noise. Therefore, any volatility estimator that is robust to noise relies on weaker noise assumptions if it is based on adjusted returns than if it is based on observed returns. 
Keywords:  Econometric and statistical methods; Financial markets; Market structure and pricing 
JEL:  G20 C14 C51 C58 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:1329&r=ecm 
By:  Simon A. Broda 
Abstract:  Countless test statistics can be written as quadratic forms in certain random vectors, or ratios thereof. Consequently, their distribution has received considerable attention in the literature. Except for a few special cases, no closed-form expression for the cdf exists, and one resorts to numerical methods. Traditionally the problem is analyzed under the assumption of joint Gaussianity; the algorithm usually employed is that of Imhof (1961). The present manuscript generalizes this result to the case of multivariate generalized hyperbolic (MGHyp) random vectors. The MGHyp is a very flexible distribution which nests, among others, the multivariate t, Laplace, and variance gamma distributions. An expression for the first partial moment is also obtained, which plays a vital role in financial risk management. The proof involves a generalization of the classic inversion formula due to Gil-Pelaez (1951). Two applications are considered: first, the finite-sample distribution of the 2SLS estimator of a structural parameter; second, the Value at Risk and Expected Shortfall of a quadratic portfolio with heavy-tailed risk factors. 
Date:  2013–05–01 
URL:  http://d.repec.org/n?u=RePEc:ame:wpaper:1304&r=ecm 
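The classical Gaussian benchmark that the paper generalizes can be sketched via Imhof's (1961) inversion of the characteristic function of a weighted sum of independent chi-squared variables, a special case of the Gil-Pelaez formula. The truncation point and grid size below are ad hoc numerical assumptions, not part of the paper:

```python
import numpy as np

def imhof_cdf(x, lam, u_max=100.0, n=200001):
    """P(Q <= x) for Q = sum_j lam_j * chi2_1 (independent central components),
    via numerical inversion of the characteristic function:
        P(Q <= x) = 1/2 - (1/pi) * integral_0^inf sin(theta(u)) / (u * rho(u)) du,
    with theta(u) = 0.5 * sum_j arctan(lam_j u) - x u / 2 and
    rho(u) = prod_j (1 + lam_j^2 u^2)^(1/4)."""
    lam = np.asarray(lam, dtype=float)
    u = np.linspace(1e-8, u_max, n)  # truncate the infinite integral
    theta = 0.5 * np.sum(np.arctan(np.outer(lam, u)), axis=0) - 0.5 * x * u
    rho = np.prod((1.0 + np.outer(lam**2, u**2)) ** 0.25, axis=0)
    f = np.sin(theta) / (u * rho)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2.0  # trapezoid rule
    return 0.5 - integral / np.pi

# sanity check: with a single unit weight, Q is chi-squared(1),
# whose 95th percentile is about 3.841
p = imhof_cdf(3.841, [1.0])
```

For a general quadratic form x'Ax with x ~ N(0, Sigma), the weights lam_j are the eigenvalues of A Sigma; the MGHyp extension in the paper requires a different, generalized inversion formula.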
By:  Deniz Erdemlioglu; Sébastien Laurent; Christopher J. Neely 
Abstract:  This paper attempts to realistically model the underlying exchange rate data generating process. We ask what types of diffusion or jump features are most appropriate. The most plausible model for 1-minute data features Brownian motion and Poisson jumps but not infinite activity jumps. Modeling periodic volatility is necessary to accurately identify the frequency of jump occurrences and their locations. We propose a two-stage method to capture the effects of these periodic volatility patterns. Simulations show that microstructure noise does not significantly impair the statistical tests for jumps and diffusion behavior. 
Keywords:  Foreign exchange; Time-series analysis 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2013024&r=ecm 
By:  Daniele Bianchi (Bocconi University); Massimo Guidolin (IGIER Bocconi University); Francesco Ravazzolo (Norges Bank (Central Bank of Norway) and BI Norwegian Business School) 
Abstract:  This paper proposes a Bayesian estimation framework for a typical multifactor model with time-varying risk exposures to macroeconomic risk factors and corresponding premia to price U.S. stocks and bonds. The model assumes that risk exposures and idiosyncratic volatility follow a break-point latent process, allowing for changes at any point in time but not restricting them to change at all points. An empirical application to 40 years of U.S. data and 23 portfolios shows that the approach yields sensible results compared to previous two-step methods based on naive recursive estimation schemes, as well as a set of alternative model restrictions. A variance decomposition test shows that although most of the predictable variation comes from the market risk premium, a number of additional macroeconomic risks, including real output and inflation shocks, are significantly priced in the cross-section. A Bayes factor analysis decisively favors the proposed change-point model. 
Keywords:  Change-point model, Stochastic volatility, Multifactor linear models 
JEL:  G11 C53 
Date:  2013–08–22 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2013_19&r=ecm 
By:  Paweł Oświęcimka; Stanisław Drożdż; Marcin Forczek; Stanisław Jadach; Jarosław Kwapień 
Abstract:  We propose a modified algorithm, Multifractal Cross-Correlation Analysis (MFCCA), that is able to consistently identify and quantify multifractal cross-correlations between two time series. Our motivation for introducing this algorithm is that the already existing methods, like MF-DXA, have serious limitations for most of the signals describing complex natural processes. The principal component of the related improvement is the proper incorporation of the sign of fluctuations. We present a broad analysis of model fractal stochastic processes as well as of real-world signals and show that MFCCA is a robust tool that allows a reliable quantification of the cross-correlative structure of the analyzed processes. In particular, we analyze the relation between the generalized Hurst exponent and the MFCCA parameter $\lambda_q$. This relation provides information about the character of potential multifractality in the cross-correlations of the processes under study and thus enables selective insight into their dynamics. Using an example of financial time series from the stock market, we also show that waiting times and price increments of the companies are multifractally cross-correlated, but only for relatively large fluctuations, whereas the small ones can be considered mutually independent. 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1308.6148&r=ecm 
By:  Franses, Ph.H.B.F. 
Abstract:  Time series with bubble-like patterns display an unbalance between growth and acceleration, in the sense that growth in the upswing is "too fast" and then there is a collapse. In fact, such time series show periods where both the first differences (1-L) and the second differences (1-L)^2 of the data are positive-valued, after which there is a collapse. For a time series without such bubbles, it can be shown that (1-L)^2-differenced data should be stable. A simple test based on one-step-ahead forecast errors can then be used to monitor in a timely fashion whether a series experiences a bubble and whether a collapse is near. Illustration on simulated data, on two housing price series and on the Nikkei index shows the practical relevance of the new diagnostic. Monte Carlo simulations indicate that the empirical power of the test is high. 
Keywords:  growth; test; acceleration; speculative bubbles 
JEL:  C22 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765039598&r=ecm 
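The growth-versus-acceleration unbalance described above can be illustrated by flagging observations where both (1-L)y and (1-L)^2 y are positive. This is a simplified illustration, not the paper's forecast-error-based monitoring test, and the toy series is invented:

```python
import numpy as np

def bubble_flags(y):
    """Flag observations where both the first difference (1-L)y and the
    second difference (1-L)^2 y are positive, i.e. growth that is itself
    accelerating -- the bubble-like unbalance described in the abstract."""
    d1 = np.diff(y)        # (1-L) y
    d2 = np.diff(y, n=2)   # (1-L)^2 y
    # align: d2[t] pairs with d1[t+1]; both refer to level observation t+2
    return (d1[1:] > 0) & (d2 > 0)

# toy path: explosive growth for 20 periods, then a collapse
t = np.arange(30)
y = np.concatenate([1.05 ** t[:20],
                    1.05 ** 19 * np.exp(-0.2 * np.arange(1, 11))])
flags = bubble_flags(y)
```

During the exponential upswing both differences are positive and the flags fire; after the collapse starts, growth turns negative and the flags switch off.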
By:  Bryan, Mark L.; Jenkins, Stephen P. 
Abstract:  Cross-national differences in outcomes are often analysed using regression analysis of multilevel country datasets, examples of which include the ECHP, ESS, EU-SILC, EVS, ISSP, and SHARE. We review the regression methods applicable to this data structure, pointing out problems with the assessment of country-level factors that appear not to be widely appreciated, and illustrate our arguments using Monte Carlo simulations and an analysis of women's employment probabilities and work hours using EU-SILC data. With large sample sizes of individuals within each country but a small number of countries, analysts can reliably estimate individual-level effects within each country, but estimates of parameters summarising country effects are likely to be unreliable. Multilevel (hierarchical) modelling methods are commonly used in this context but they are no panacea. 
Date:  2013–08–19 
URL:  http://d.repec.org/n?u=RePEc:ese:iserwp:201314&r=ecm 
By:  Kessels, Roselinde; Jones, Bradley; Goos, Peter 
Abstract:  Using maximum likelihood estimation for discrete choice modeling of small datasets causes two problems. The first problem is that the data often exhibit separation, in which case the maximum likelihood estimates do not exist. Also, provided they exist, the maximum likelihood estimates are biased. In this paper, we show how to adapt Firth's bias-adjustment method for use in discrete choice modeling. This approach removes the first-order bias of the estimates, and it also deals with the separation issue. An additional advantage of the bias adjustment is that it is usually accompanied by a reduction in the variance. Using a large-scale simulation study, we identify the situations where Firth's bias-adjustment method is most useful in avoiding the problem of separation as well as removing the bias and reducing the variance. As a special case, we apply the bias-adjustment approach to discrete choice data from individuals, making it possible to construct an empirical distribution of the respondents' preferences without imposing any a priori population distribution. For both research purposes, we base our findings on data from a stated choice study on various forms of employee compensation. 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:ant:wpaper:2013013&r=ecm 
By:  Patrick Gneuss (Faculty of Business Administration and Economics, European University Viadrina, Frankfurt (Oder)); Wolfgang Schmid (Faculty of Business Administration and Economics, European University Viadrina, Frankfurt (Oder)); Reimund Schwarze 
Abstract:  Linear mixed effects models have been widely used in the spatial analysis of environmental processes. However, parameter estimation and spatial predictions involve the inversion and determinant of the n × n spatial covariance matrix of the data process, with n being the number of observations. Nowadays environmental variables are typically obtained through remote sensing, and a single day contains observations of the order of tens or hundreds of thousands, which quickly leads to bottlenecks in terms of computation speed and working memory requirements. Therefore techniques for reducing the dimension of the problem are required. The present work analyzes approaches to approximating the spatial covariance function in a real dataset of remotely sensed carbon dioxide concentrations, obtained from the Atmospheric Infrared Sounder of NASA's 'Aqua' satellite on 1 May 2009. In a cross-validation case study it is shown how fixed rank kriging, stationary covariance tapering and the full-scale approximation are able to notably speed up calculations. However, the loss in predictive performance caused by the approximation differs strongly across methods. The best results were obtained for the full-scale approximation, which was able to overcome the individual weaknesses of fixed rank kriging and covariance tapering. 
Keywords:  spatial covariance function, fixed rank kriging, covariance tapering, full-scale approximation, large spatial data sets, mid-tropospheric CO2, remote sensing, efficient approximation 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:euv:dpaper:009&r=ecm 
By:  Hao Liu (CoFE, University of Konstanz, Germany); Winfried Pohlmeier (CoFE, University of Konstanz, Germany; ZEW, Germany; RCEA, Italy) 
Abstract:  This paper analyzes the estimation risk of efficient portfolio selection. We use the concept of certainty equivalent as the basis for a well-defined statistical loss function and a monetary measure to assess estimation risk. For given risk preferences we provide analytical results for different sources of estimation risk such as sample size, dimension of the portfolio choice problem, and correlation structure of the return process. Our results show that theoretically suboptimal portfolio choice strategies turn out to be superior once estimation risk is taken into account. Since estimation risk crucially depends on risk preferences, the choice of the estimator for a given portfolio strategy becomes endogenous. We show that a shrinkage approach accounting for estimation risk in both the mean and the covariance of the return vector is generally superior to simple theoretically suboptimal strategies. Moreover, focusing on just one source of estimation risk, e.g. risk reduction in covariance estimation, can lead to suboptimal portfolios. 
Keywords:  efficient portfolio, estimation risk, certainty equivalent, shrinkage 
JEL:  G11 G12 G17 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:47_13&r=ecm 
By:  Lars Winkelmann; Markus Bibinger; Tobias Linzert 
Abstract:  This paper proposes a new econometric approach to disentangle two distinct response patterns of the yield curve to monetary policy announcements. Based on cojumps in intraday tick data of a short- and a long-term interest rate, we develop a day-wise test that detects the occurrence of a significant policy surprise and identifies the market-perceived source of the surprise. The new test is applied to 133 policy announcements of the European Central Bank (ECB) in the period from 2001 to 2012. Our main findings indicate good predictability of ECB policy decisions and remarkably stable perceptions about the ECB's policy preferences. 
Keywords:  Central bank communication; yield curve; spectral cojump estimator; high frequency tick data 
JEL:  E58 C14 C58 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013038&r=ecm 