
on Econometrics 
By:  Yebin Cheng (Shanghai University of Finance); Jan G. De Gooijer (University of Amsterdam); Dawit Zerom (California State University at Fullerton) 
Abstract:  In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By making use of an internally normalized kernel smoother, the proposed estimator reduces the computational requirement of the latter by the order of the sample size. The second estimator involves sequential fitting by univariate local polynomial quantile regressions for each additive component with the other additive components replaced by the corresponding estimates from the first estimator. The purpose of the extra local averaging is to reduce the variance of the first estimator. We show that the second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case when all other additive components were known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times using administrative data for the city of Calgary. 
Keywords:  Additive models; Asymptotic properties; Dependent data; Internalized kernel smoother; Local polynomial; Oracle efficiency 
JEL:  C01 C14 
Date:  2009–11–19 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20090104&r=ecm 
By:  Kai Christoffel (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Günter Coenen (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Anders Warne (Directorate General Research, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) 
Abstract:  In this paper we review the methodology of forecasting with log-linearised DSGE models using Bayesian methods. We focus on the estimation of their predictive distributions, with special attention being paid to the mean and the covariance matrix of h-step ahead forecasts. In the empirical analysis, we examine the forecasting performance of the New Area-Wide Model (NAWM) that has been designed for use in the macroeconomic projections at the European Central Bank. The forecast sample covers the period following the introduction of the euro and the out-of-sample performance of the NAWM is compared to non-structural benchmarks, such as Bayesian vector autoregressions (BVARs). Overall, the empirical evidence indicates that the NAWM compares quite well with the reduced-form models and the results are therefore in line with previous studies. Yet there is scope for improving the NAWM’s forecasting performance. For example, the model is not able to explain the moderation in wage growth over the forecast evaluation period and, therefore, it tends to overestimate nominal wages. As a consequence, both the multivariate point and density forecasts using the log determinant and the log predictive score, respectively, suggest that a large BVAR can outperform the NAWM. JEL Classification: C11, C32, E32, E37. 
Keywords:  Bayesian inference, DSGE models, euro area, forecasting, open-economy macroeconomics, vector autoregression. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101185&r=ecm 
By:  Marco Bee 
Abstract:  Copulas are an essential tool for the construction of nonstandard multivariate probability distributions. In the actuarial and financial field they are particularly important because of their relationship with nonlinear dependence and multivariate extreme value theory. In this paper we use a recently proposed generalization of Importance Sampling, called Adaptive Importance Sampling, for simulating copula-based distributions and computing tail probabilities. Unlike existing methods for copula simulation, this algorithm is general, in the sense that it can be used for any copula whose margins have unbounded support. After working out the details for Archimedean copulas, we develop an extension, based on an appropriate transformation of the marginal distributions, for sampling extreme value copulas. Extensive Monte Carlo experiments show that the method works well and its implementation is simple. An example with equity data illustrates the potential of the algorithm for practical applications. 
Keywords:  Adaptive Importance Sampling, Copula, Multivariate Extreme Value Theory, Tail probability. 
JEL:  C15 C63 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:trn:utwpde:1003&r=ecm 
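As a point of reference for the copula tail probabilities targeted above, the following is a minimal plain Monte Carlo sketch (not the paper's adaptive importance sampler): it samples a Clayton copula via the Marshall-Olkin frailty construction and checks the estimate against the closed-form joint survival probability. Function names and parameter values are illustrative.

```python
import numpy as np

def sample_clayton(n, theta, rng):
    # Marshall-Olkin frailty construction of the Clayton copula:
    # U_i = (1 + E_i / V)^(-1/theta), V ~ Gamma(1/theta), E_i ~ Exp(1)
    v = rng.gamma(1.0 / theta, 1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

theta, q = 2.0, 0.95
rng = np.random.default_rng(0)
u = sample_clayton(200_000, theta, rng)
est = np.mean((u[:, 0] > q) & (u[:, 1] > q))   # plain Monte Carlo estimate
# closed-form joint survival for comparison: 1 - 2q + C(q, q)
exact = 1.0 - 2.0 * q + (2.0 * q ** (-theta) - 1.0) ** (-1.0 / theta)
```

An adaptive importance sampler would replace the direct draws with draws from a proposal tuned toward the joint tail, cutting the variance of `est` for far-tail events where plain Monte Carlo rarely lands.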
By:  Dennis Kristensen (Columbia University, Department of Economics); Bernard Salanie (Columbia University, Department of Economics) 
Abstract:  Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer degree of approximation. The NR step removes some or all of the additional bias and variance of the initial approximate estimator. A Monte Carlo simulation on the mixed logit model shows that noticeable improvements can be obtained rather cheaply. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:clu:wpaper:091015&r=ecm 
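The NR refinement idea can be illustrated in a stylized setting (this is a toy, not the paper's mixed logit application): starting from a cheap, inefficient estimator of a Gaussian mean, a single Newton-Raphson step on the exact log-likelihood lands on the MLE, because the Gaussian log-likelihood in the mean is quadratic. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(1.5, 1.0, size=500)

mu0 = np.median(x)            # cheap, inefficient "approximate" estimator
score = np.sum(x - mu0)       # d/dmu of the exact log-likelihood (sigma = 1)
hessian = -float(len(x))      # second derivative of the log-likelihood
mu1 = mu0 - score / hessian   # one Newton-Raphson step
```

Because the objective is exactly quadratic here, `mu1` coincides with the sample mean; in the paper's setting, with a finer but still approximate objective, the step removes the leading bias and variance terms rather than all of them.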
By:  Matteo PICCHIO (Tilburg University, Department of Economics, CentER, ReflecT and IZA, Germany); Chiara MUSSIDA (Department of Economics and Social Sciences, Università Cattolica, Milan and Prometeia spa, Bologna) 
Abstract:  Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semiparametric estimator of densities in the presence of covariates which incorporates sample selection. We describe a simulation algorithm to implement counterfactual comparisons of densities. The proposed methodology is used to investigate the gender wage gap in Italy. It is found that when sample selection is taken into account, the gender wage gap widens, especially at the bottom of the wage distribution. Explanations are offered for this empirical finding. 
Keywords:  gender wage gap, hazard function, sample selection, glass ceiling, sticky floor 
JEL:  C21 C41 J16 J31 J71 
Date:  2010–03–12 
URL:  http://d.repec.org/n?u=RePEc:ctl:louvir:2010005&r=ecm 
By:  Dinghai Xu (Department of Economics, University of Waterloo) 
Abstract:  Rapid development in computer technology has made financial transaction data observable at the finest level. The realized volatility, as a proxy for the "true" volatility, can be constructed using the high frequency data. This paper extends a threshold stochastic volatility specification proposed in So, Li and Lam (2002) by incorporating the high frequency volatility measures. Due to the availability of the volatility time series, parameter estimation can be easily implemented via standard maximum likelihood estimation (MLE) rather than simulation-based Bayesian methods. In the Monte Carlo section, several misspecification and sensitivity experiments are conducted. The proposed methodology shows good performance according to the Monte Carlo results. In the empirical study, three stock indices are examined under the threshold stochastic volatility structure. Empirical results show that in different regimes, the returns and volatilities exhibit asymmetric behavior. In addition, this paper allows the threshold in the model to be flexible and uses a sequential optimization based on MLE to search for the "optimal" threshold value. We find that the model with a flexible threshold is always preferred to the model with a fixed threshold according to the log-likelihood measure. Interestingly, the "optimal" threshold is found to be stable across different sampling realized volatility measures. 
JEL:  C01 C51 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:wat:wpaper:1003&r=ecm 
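The sequential MLE-based threshold search described above can be mimicked in a toy two-regime setting: profile the log-likelihood of a two-regime normal model for the volatility series over a grid of candidate thresholds on the lagged return, and keep the maximizer. This is a deliberately simplified stand-in for the paper's threshold SV model; the function name and simulated design are illustrative.

```python
import numpy as np
from scipy.stats import norm

def profile_threshold(r, v, candidates):
    # profile the two-regime Gaussian log-likelihood of v over candidate
    # thresholds c applied to the lagged return r
    best = None
    for c in candidates:
        lo, hi = v[1:][r[:-1] <= c], v[1:][r[:-1] > c]
        if min(len(lo), len(hi)) < 10:
            continue                      # skip degenerate splits
        ll = (norm.logpdf(lo, lo.mean(), lo.std()).sum()
              + norm.logpdf(hi, hi.mean(), hi.std()).sum())
        if best is None or ll > best[1]:
            best = (c, ll)
    return best

rng = np.random.default_rng(2)
r = rng.normal(size=2000)                               # lagged returns
v = np.where(np.roll(r, 1) <= 0.0, -1.0, 1.0) \
    + rng.normal(0.0, 0.3, 2000)                        # regime-shifted series
c_hat, _ = profile_threshold(r, v, np.linspace(-1.0, 1.0, 41))
```

With a true threshold at zero and well-separated regimes, the profiled likelihood picks the grid point at (or next to) zero.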
By:  Stefan Hlawatsch (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg); Sebastian Ostrowski (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg) 
Abstract:  The aim of our paper is the development of an adequate estimation model for the loss given default, which incorporates the empirically observed bimodality and bounded nature of the distribution. Therefore we introduce an adjusted Expectation Maximization algorithm to estimate the parameters of a univariate mixture distribution, consisting of two beta distributions. Subsequently these estimates are compared with the Maximum Likelihood estimators to test the efficiency and accuracy of both algorithms. Furthermore we compare our estimation model with models proposed in the literature on a synthesized loan portfolio. The simulated loan portfolio consists of possibly loss-influencing parameters that are merged with loss given default observations via a quasi-random approach. Our results show that our proposed model yields more accurate loss given default estimates than the benchmark models for different simulated data sets comprising obligor-specific parameters with either high or low predictive power for the loss given default. 
Keywords:  Bimodality, EM Algorithm, Loss Given Default, Maximum Likelihood, Mixture Distribution, Portfolio Simulation 
JEL:  C01 C13 C15 C16 C5 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:mag:wpaper:100010&r=ecm 
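A hedged sketch of the EM idea for a two-component beta mixture (not the authors' adjusted algorithm): since the weighted beta MLE in the M-step has no closed form, the version below substitutes moment matching for it. All names, the initialisation, and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import beta

def fit_beta_mixture(x, n_iter=150):
    pi, params = 0.5, [(2.0, 8.0), (8.0, 2.0)]   # crude bimodal initialisation
    for _ in range(n_iter):
        # E-step: posterior responsibilities of the first component
        p0 = pi * beta.pdf(x, *params[0])
        p1 = (1.0 - pi) * beta.pdf(x, *params[1])
        r = p0 / (p0 + p1)
        # M-step, by moment matching instead of the exact weighted beta MLE
        params = []
        for w in (r, 1.0 - r):
            m = np.average(x, weights=w)
            v = np.average((x - m) ** 2, weights=w)
            c = m * (1.0 - m) / v - 1.0
            params.append((m * c, (1.0 - m) * c))
        pi = r.mean()
    return pi, params

rng = np.random.default_rng(4)
z = rng.random(3000) < 0.4                       # mixture labels
x = np.where(z, rng.beta(2.0, 10.0, 3000), rng.beta(10.0, 2.0, 3000))
pi_hat, (ab0, ab1) = fit_beta_mixture(x)
```

On well-separated bimodal data the recovered component means sit near the two modes and the mixing weight near its true value; the exact-MLE M-step of the paper would replace the moment-matching lines with a numerical optimisation.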
By:  Krzysztof Kontek (Artal Investments, Warsaw) 
Abstract:  This paper deals with estimating peaked densities over the interval [0,1] using the Uneven Two-Sided Power Distribution (UTP). This distribution is the most complex of all the bounded power distributions introduced by Kotz and van Dorp (2004). The UTP maximum likelihood estimator, a result not derived by Kotz and van Dorp, is presented. The UTP is used to estimate the daily return densities of the DJI and stocks comprising this index. As the returns are found to have high kurtosis values, the UTP provides much more accurate estimates than a smooth distribution. The paper presents the program written in Mathematica which calculates maximum likelihood estimators for all members of the bounded power distribution family. The paper demonstrates that the UTP distribution may be extremely useful in estimating peaked densities over the interval [0,1] and in studying financial data. 
Keywords:  Density Distribution, Maximum Likelihood Estimation, Stock Returns 
JEL:  C01 C02 C13 C16 C46 C87 G10 
Date:  2010–05–08 
URL:  http://d.repec.org/n?u=RePEc:wse:wpaper:43&r=ecm 
By:  Charles S. Bos (VU University Amsterdam); Pawel Janus (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam) 
Abstract:  This paper considers spot variance path estimation from datasets of intraday high frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to extend an existing high frequency jump test statistic, to detect arrival times of jumps and to obtain distributional characteristics of detected jumps. The effectiveness of our approach is explored through Monte Carlo simulations. It is shown that sparse sampling for mitigating the impact of microstructure noise has an adverse effect on both spot variance estimation and jump detection. In our approach we can analyze high frequency price observations that are contaminated with microstructure noise without the need for sparse sampling, say at fifteen-minute intervals. An empirical illustration is presented for intraday EUR/USD exchange rates. Our main finding is that fewer jumps are detected when sampling intervals increase. 
Keywords:  high frequency; intraday periodicity; jump testing; leverage effect; microstructure noise; pre-averaged bipower variation; spot variance 
JEL:  C12 C13 C22 G10 G14 
Date:  2009–12–04 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20090110&r=ecm 
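A minimal sketch of the bipower-variation logic underlying such jump tests: bipower variation is robust to jumps, so the excess of realized variance over it flags a jump component. This is the plain Barndorff-Nielsen and Shephard construction, without the pre-averaging (noise robustness) used in the paper; names and the simulated returns are illustrative.

```python
import numpy as np

def realized_measures(returns):
    # realized variance vs. bipower variation: BV is robust to jumps,
    # so max(RV - BV, 0) is a crude estimate of the jump contribution
    rv = np.sum(returns ** 2)
    bv = (np.pi / 2.0) * np.sum(np.abs(returns[1:] * returns[:-1]))
    return rv, bv, max(rv - bv, 0.0)

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.001, size=390)     # "one-minute" diffusive returns
_, _, j0 = realized_measures(r)
r_jump = r.copy()
r_jump[200] = 0.02                       # inject a single large jump
rv1, bv1, j1 = realized_measures(r_jump)
```

The jump inflates realized variance roughly by its squared size, while bipower variation barely moves, so the jump component estimate rises sharply.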
By:  Yin Liao; Heather Anderson; Farshid Vahid 
Abstract:  Realized volatility of stock returns is often decomposed into two distinct components that are attributed to continuous price variation and jumps. This paper proposes a tobit multivariate factor model for the jumps coupled with a standard multivariate factor model for the continuous sample path to jointly forecast volatility in three Chinese Mainland stocks. Out-of-sample forecast analysis shows that separate multivariate factor models for the two volatility processes outperform a single multivariate factor model of realized volatility, and that a single multivariate factor model of realized volatility outperforms univariate models. 
JEL:  C13 C32 C52 C53 G32 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:acb:cbeeco:2010520&r=ecm 
By:  Michael P. Clements; David F. Hendry 
Abstract:  This chapter describes the issues confronting any realistic context for economic forecasting, which is inevitably based on unknowingly misspecified models, usually estimated from mismeasured data, facing intermittent and often unanticipated location shifts. We focus on mitigating the systematic forecast failures that result in such settings, and describe the background to our approach, the difficulties of evaluating forecasts, and the devices that are more robust when change occurs. 
Keywords:  Economic forecasting, Location shifts, Misspecified models, Robust forecasts 
JEL:  C51 C22 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:484&r=ecm 
By:  Edward Hoyle; Lane P. Hughston; Andrea Macrina 
Abstract:  We develop a non-life reserving model using a stable-1/2 random bridge to simulate the accumulation of paid claims, allowing for an arbitrary choice of a priori distribution for the ultimate loss. Taking a Bayesian approach to the reserving problem, we derive the process of the conditional distribution of the ultimate loss. The `best-estimate ultimate loss process' is given by the conditional expectation of the ultimate loss. We derive explicit expressions for the best-estimate ultimate loss process, and for expected recoveries arising from aggregate excess-of-loss reinsurance treaties. Use of a deterministic time change allows for the matching of any initial (increasing) development pattern for the paid claims. We show that these methods are well-suited to the modelling of claims where there is a non-trivial probability of catastrophic loss. The generalized inverse-Gaussian (GIG) distribution is shown to be a natural choice for the a priori ultimate loss distribution. For particular GIG parameter choices, the best-estimate ultimate loss process can be written as a rational function of the paid-claims process. We extend the model to include a second paid-claims process, and allow the two processes to be dependent. The results obtained can be applied to the modelling of multiple lines of business or multiple origin years. The multi-dimensional model has the attractive property that the dimensionality of calculations remains low, regardless of the number of paid-claims processes. An algorithm is provided for the simulation of the paid-claims processes. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1005.0496&r=ecm 
By:  Marta O Soares (Centre for Health Economics, University of York, UK); L Canto e Castro (Department of Statistics and Operations Research, Faculty of Sciences, University of Lisbon, Portugal.) 
Abstract:  The choice of model design for decision analytic models in cost-effectiveness analysis has been the subject of discussion. The current work addresses this issue by noting that, when time is to be explicitly modelled, we need to represent phenomena occurring in continuous time. Multi-state models evaluated in continuous time might be used, but closed-form solutions of expected time in each state may not exist or may be difficult to obtain. Two approximations can then be used for cost-effectiveness estimation: (1) simulation models, where continuous-time estimates are obtained through Monte Carlo simulation, and (2) discretized models. This work draws recommendations on their use by showing that, when these alternative models can be applied, it is preferable to implement a cohort discretized model rather than a simulation model. Whilst the bias of the discretized model can be minimized by reducing the cycle length, the simulation model is inherently stochastic. Even though specialized literature advocates this framework, the current practice in economic evaluation is to define clinically meaningful cycle lengths for discretized models, disregarding potential biases. 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:chy:respap:56cherp&r=ecm 
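The trade-off described above can be reproduced in a one-hazard toy model: the cohort discretized model's bias shrinks as the cycle length shrinks, while the microsimulation estimate is inherently stochastic and only converges in the number of simulated patients. Parameter values and names are illustrative.

```python
import numpy as np

hazard, horizon = 0.2, 10.0
# closed-form expected time alive under a constant hazard, truncated at horizon
exact = (1.0 - np.exp(-hazard * horizon)) / hazard

def cohort_discretized(cycle):
    # deterministic cohort model with a fixed cycle length
    p_survive = np.exp(-hazard * cycle)
    alive, total = 1.0, 0.0
    for _ in range(int(horizon / cycle)):
        total += alive * cycle      # left-endpoint accumulation => upward bias
        alive *= p_survive
    return total

def microsimulation(n, rng):
    # Monte Carlo: sample individual death times, truncate at the horizon
    t = np.minimum(rng.exponential(1.0 / hazard, size=n), horizon)
    return t.mean()
```

Shrinking the cycle from one year to 0.01 years drives the cohort estimate to the closed-form value; the microsimulation estimate retains Monte Carlo error at any cycle length.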
By:  Grassi, Stefano; Proietti, Tommaso 
Abstract:  We apply a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. We illustrate that the methodology can be quite successfully applied to discriminate between stochastic and deterministic trends. In particular, we formulate autoregressive models with stochastic trends components and decide on whether a specific feature of the series, i.e. the underlying level and/or the rate of drift, are fixed or evolutive. 
Keywords:  Bayesian model selection; stationarity; unit roots; stochastic trends; variable selection. 
JEL:  E32 C52 C22 
Date:  2010–05–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:22569&r=ecm 
By:  V. Abramov; F. Klebaner; R. Liptser 
Abstract:  The CEV model is given by the stochastic differential equation $X_t=X_0+\int_0^t\mu X_s\,ds+\int_0^t\sigma (X^+_s)^p\,dW_s$, $\frac{1}{2}\le p<1$. It features a non-Lipschitz diffusion coefficient and gets absorbed at zero with a positive probability. We show the weak convergence of Euler-Maruyama approximations $X_t^n$ to the process $X_t$, $0\le t\le T$, in the Skorokhod metric. We give a new approximation by continuous processes which allows us to relax some technical conditions in the proof of weak convergence in \cite{HZa}, done in terms of a discrete-time martingale problem. We calculate ruin probabilities as an example of such an approximation. We establish that the ruin probability evaluated by simulations is not guaranteed to converge to the theoretical one, because the point zero is a discontinuity point of the limiting distribution. To establish such convergence we use the Levy metric, and also confirm the convergence numerically. Although the result is given for the specific model, our method works in the more general case of non-Lipschitz diffusions with absorption. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1005.0728&r=ecm 
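A hedged sketch of the Euler-Maruyama scheme for this CEV dynamics, with absorption at zero enforced as in the model. Parameter values are illustrative, and the simulated absorption frequency is subject to exactly the discretization caveats the paper analyses.

```python
import numpy as np

def euler_cev(x0, mu, sigma, p, T, n_steps, rng):
    # Euler-Maruyama for dX = mu*X dt + sigma*(X^+)^p dW, absorbed at zero
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        if x <= 0.0:
            return 0.0                   # absorbed: stays at zero
        x += mu * x * dt + sigma * x ** p * rng.normal(0.0, np.sqrt(dt))
    return max(x, 0.0)

rng = np.random.default_rng(7)
terminal = np.array([euler_cev(1.0, 0.05, 0.6, 0.5, 1.0, 500, rng)
                     for _ in range(2000)])
ruin = np.mean(terminal == 0.0)          # simulated absorption frequency
```

As the abstract warns, this simulated `ruin` need not converge to the theoretical absorption probability as the grid is refined, because zero is a discontinuity point of the limiting distribution.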
By:  Cathy Ning; Dinghai Xu; Tony Wirjanto (Department of Economics, University of Waterloo) 
Abstract:  Volatility clustering is a well-known stylized feature of financial asset returns. In this paper, we investigate the asymmetric pattern of volatility clustering on both the stock and foreign exchange rate markets. To this end, we employ copula-based semiparametric univariate time-series models that accommodate the clusters of both large and small volatilities in the analysis. Using daily realized volatilities of the individual company stocks, stock indices and foreign exchange rates constructed from high frequency data, we find that volatility clustering is strongly asymmetric in the sense that clusters of large volatilities tend to be much stronger than those of small volatilities. In addition, the asymmetric pattern of volatility clusters continues to be visible even when the clusters are allowed to be changing over time, and the volatility clusters themselves remain persistent even after forty days. 
JEL:  C51 G32 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:wat:wpaper:1001&r=ecm 
By:  Mamatzakis, Emmanuel; Christodoulakis, George 
Abstract:  We examine the attribution of premium growth rates for the five main insurance sectors of the United Kingdom over the period 1969–2005: Property, Motor, Pecuniary, Health & Accident, and Liability. In each sector, the growth rates of aggregate insurance premiums are viewed as portfolio returns, which we attribute to a number of factors such as realized and expected losses and expenses, their uncertainty and market power, using the Sharpe (1988, 1992) Style Analysis. Our estimation method departs from the standard least squares practice, which does not provide confidence intervals for style betas, and adopts a Bayesian approach, resulting in a robust estimate of the entire empirical distribution of each beta coefficient for the full sample. We also perform a rolling analysis of robust estimation for a window of seven overlapping samples. Our empirical findings show some notable differences across industries in the weights attributed to the underlying factors. Rolling regressions help us identify the variability of these weights over time, but also across industries. 
Keywords:  Insurance Premiums; Monte Carlo Integration; Non-Negativity Constraints; Return Attribution; Sharpe Style Analysis 
JEL:  C3 G22 C01 
Date:  2010–03–23 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:22516&r=ecm 
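Sharpe style analysis itself (the least-squares variant, not the Bayesian estimation developed in the paper) amounts to tracking-error minimisation under non-negativity and sum-to-one constraints on the style weights. A minimal sketch with synthetic data; names and the simulated design are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def style_weights(y, X):
    # Sharpe style analysis: minimise tracking error subject to
    # non-negative weights that sum to one
    k = X.shape[1]
    res = minimize(lambda w: np.sum((y - X @ w) ** 2),
                   np.full(k, 1.0 / k),               # equal-weight start
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq",
                                "fun": lambda w: w.sum() - 1.0})
    return res.x

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))                         # factor "returns"
w_true = np.array([0.5, 0.3, 0.2])
y = X @ w_true + rng.normal(scale=0.05, size=200)     # sector "returns"
w_hat = style_weights(y, X)
```

The Bayesian approach of the paper replaces this point solution with draws from the posterior of the constrained weights, yielding the full empirical distribution of each beta rather than a single estimate.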
By:  Antonello D’Agostino (Central Bank of Ireland); Kieran McQuinn (Central Bank of Ireland); Karl Whelan (University College Dublin) 
Abstract:  In any dataset with individual forecasts of economic variables, some forecasters will perform better than others. However, it is possible that these ex post differences reflect sampling variation and thus overstate the ex ante differences between forecasters. In this paper, we present a simple test of the null hypothesis that all forecasters in the US Survey of Professional Forecasters have equal ability. We construct a test statistic that reflects both the relative and absolute performance of the forecaster and use bootstrap techniques to compare the empirical results with the equivalents obtained under the null hypothesis of equal forecaster ability. Results suggest limited evidence for the idea that the best forecasters are actually innately better than others, though there is evidence that a relatively small group of forecasters perform very poorly. 
Keywords:  Forecasting, Bootstrap 
Date:  2010–04–15 
URL:  http://d.repec.org/n?u=RePEc:ucn:wpaper:201012&r=ecm 
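The bootstrap logic of such a test can be sketched as follows: compute a dispersion statistic of mean forecaster performance and compare it with its distribution when errors are resampled from a common pool, i.e. under the null of equal ability. The statistic below is a simplification of the paper's (which combines relative and absolute performance); all names are illustrative.

```python
import numpy as np

def equal_ability_pvalue(sq_err, n_boot=1000, seed=0):
    # sq_err: T periods x N forecasters matrix of squared forecast errors
    rng = np.random.default_rng(seed)
    T, N = sq_err.shape
    stat = np.var(sq_err.mean(axis=0))    # dispersion of mean performance
    pooled = sq_err.ravel()
    boots = np.empty(n_boot)
    for b in range(n_boot):
        # under the null, every forecaster draws from the common error pool
        fake = rng.choice(pooled, size=(T, N))
        boots[b] = np.var(fake.mean(axis=0))
    return np.mean(boots >= stat)         # bootstrap p-value

rng = np.random.default_rng(1)
equal = rng.exponential(1.0, size=(60, 20))    # all forecasters equally able
one_bad = equal.copy()
one_bad[:, 0] *= 5.0                           # one clearly poor forecaster
p_equal = equal_ability_pvalue(equal)
p_bad = equal_ability_pvalue(one_bad)
```

With one forecaster made markedly worse, the observed dispersion lands far in the tail of the bootstrap distribution and the null of equal ability is rejected.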
By:  Teresa Bago d'Uva (Erasmus University Rotterdam, and Netspar); Maarten Lindeboom (VU University Amsterdam, and Netspar); Owen O'Donnell (University of Macedonia, University of Lausanne, and Netspar); Eddy van Doorslaer (Erasmus University Rotterdam) 
Abstract:  Anchoring vignettes are increasingly used to identify and correct heterogeneity in the reporting of health, work disability, life satisfaction, political efficacy, etc. with the aim of improving interpersonal comparability of subjective indicators of these constructs. The method relies on two assumptions: vignette equivalence – the vignette description is perceived by all to correspond to the same state; and response consistency – individuals use the same response scales to rate the vignettes and their own situation. We propose tests of these assumptions. For vignette equivalence, we test a necessary condition of no systematic variation with observed characteristics in the perceived difference in states corresponding to any two vignettes. To test response consistency we rely on the assumption that objective indicators fully capture the covariation between the construct of interest and observed individual characteristics, and so offer an alternative way to identify response scales, which can then be compared with those identified from the vignettes. We also introduce a weaker test that is valid under a less stringent assumption. We apply these tests to cognitive functioning and mobility related health problems using data from the English Longitudinal Study of Ageing. Response consistency is rejected for both health domains according to the first test, but the weaker test does not reject for cognitive functioning. The necessary condition for vignette equivalence is rejected for both health domains. These results cast some doubt on the validity of the vignettes approach, at least as applied to these health domains. 
Keywords:  Reporting heterogeneity; Survey methods; Vignettes; Health; Cognition 
JEL:  C35 C42 I12 
Date:  2009–11–04 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20090091&r=ecm 
By:  Hafedh Bouakez; Foued Chihi; Michel Normandin 
Abstract:  Measuring the effects of discretionary fiscal policy is both difficult and controversial, as some explicit or implicit identifying assumptions need to be made to isolate exogenous and unanticipated changes in taxes and government spending. Studies based on structural vector autoregressions typically achieve identification by restricting the contemporaneous interaction of fiscal and nonfiscal variables in a rather arbitrary way. In this paper, we relax those restrictions and identify fiscal policy shocks by exploiting the conditional heteroscedasticity of the structural disturbances. We use this methodology to evaluate the macroeconomic effects of fiscal policy shocks in the U.S. before and after 1979. Our results show substantive differences in the economy’s response to government spending and tax shocks across the two periods. Importantly, we find that increases in public spending are, in general, more effective than tax cuts in stimulating economic activity. A key contribution of this study is to provide a formal test of the identifying restrictions commonly used in the literature. 
Keywords:  Fiscal policy, Government spending, Taxes, Primary deficit, Structural vector autoregression, Identification 
JEL:  C32 E62 H20 H50 H60 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:1016&r=ecm 
By:  Charles Bellemare; Luc Bissonnette; Sabine Kröger 
Abstract:  We show how bounds around preferences parameters can be estimated under various levels of assumptions concerning the beliefs of senders in the investment game. We contrast these bounds with point estimates of the preference parameters obtained using non-incentivized subjective belief data. Our point estimates suggest that expected responses and social preferences both play a significant role in determining investment in the game. Moreover, these point estimates fall within our most reasonable bounds. This suggests that credible inferences can be obtained using non-incentivized beliefs. 
Keywords:  Partial identification, preferences, beliefs, decision making under uncertainty 
JEL:  C81 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:1017&r=ecm 
By:  Gao-Feng Gu; Wei-Xing Zhou 
Abstract:  Detrending moving average (DMA) is a widely used method to quantify the correlation of non-stationary signals. We generalize DMA to multifractal detrending moving average (MFDMA), and then extend the one-dimensional MFDMA to a two-dimensional version. In the paper, we elaborate the one-dimensional and two-dimensional MFDMA methods theoretically and apply them to synthetic multifractal measures. We find that the numerical estimates of the multifractal scaling exponent $\tau(q)$ and the multifractal spectrum $f(\alpha)$ are in good agreement with the theoretical values. We also compare the performance of MFDMA with MFDFA, and report that MFDMA is superior to MFDFA when applied to the analysis of one-dimensional and two-dimensional multifractal measures. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1005.0877&r=ecm 
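A one-dimensional, monofractal sketch of the DMA building block generalized above: detrend the profile (cumulative sum) with a backward moving average and read the scaling exponent off the log-log slope of the fluctuation function. This is plain DMA, not the multifractal MFDMA generalization; names and parameter values are illustrative.

```python
import numpy as np

def dma_fluctuation(x, window):
    # backward detrending moving average of the profile (cumulative sum)
    y = np.cumsum(x - x.mean())
    trend = np.convolve(y, np.ones(window) / window, mode="valid")
    resid = y[window - 1:] - trend        # profile minus its trailing average
    return np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(3)
x = rng.normal(size=20_000)               # uncorrelated noise: expect H ~ 0.5
windows = np.array([10, 20, 40, 80, 160])
fluct = np.array([dma_fluctuation(x, n) for n in windows])
H = np.polyfit(np.log(windows), np.log(fluct), 1)[0]   # scaling exponent
```

The multifractal extension replaces the quadratic mean of the residuals with q-th order moments, yielding the spectrum of exponents from which $\tau(q)$ and $f(\alpha)$ are obtained.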