
New Economics Papers on Econometrics 
By:  Hill, Jonathan B.; Prokhorov, Artem 
Abstract:  We construct a Generalized Empirical Likelihood estimator for a GARCH(1,1) model with a possibly heavy-tailed error. The estimator imbeds tail-trimmed estimating equations allowing for overidentifying conditions, asymptotic normality, efficiency and empirical likelihood based confidence regions for very heavy-tailed random volatility data. We show the implied probabilities from the tail-trimmed Continuously Updated Estimator elevate weight for usable large values, assign large but not maximum weight to extreme observations, and give the lowest weight to non-leverage points. We derive a higher order expansion for GEL with imbedded tail-trimming (GELITT), which reveals higher order bias and efficiency properties, available when the GARCH error has a finite second moment. Higher order asymptotics for GEL without tail-trimming require the error to have moments of substantially higher order. We use first order asymptotics and higher order bias to justify the choice of the number of trimmed observations in any given sample. We also present robust versions of the Generalized Empirical Likelihood Ratio, Wald, and Lagrange Multiplier tests, and an efficient and heavy-tail robust moment estimator with an application to expected shortfall estimation. Finally, we present a broad simulation study for GEL and GELITT, and demonstrate profile-weighted expected shortfall for the Russian Ruble/US Dollar exchange rate. We show that the tail-trimmed CUE-GMM dominates other estimators in terms of bias, MSE and approximate normality. AMS classifications: 62M10, 62F35. 
Keywords:  GEL ; GARCH ; tail trimming ; heavy tails ; robust inference ; efficient moment estimation ; expected shortfall ; Russian Ruble 
JEL:  C13 C49 
Date:  2015–09–11 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13795&r=ecm 
By:  Valerio Scalone (Dipartimento di Scienze Sociali ed Economiche, Sapienza University of Rome, Italy) 
Abstract:  Nonlinear model estimation is generally perceived as impractical and computationally burdensome. This perception has limited the diffusion of nonlinear model estimation. In this paper a simple set of techniques going under the name of Approximate Bayesian Computation (ABC) is proposed. ABC is a set of Bayesian techniques based on moment matching: moments are obtained by simulating the model conditional on draws from the prior distribution. An accept-reject criterion is applied to the simulations and an approximate posterior distribution is obtained from the accepted draws. A series of techniques are presented (ABC-regression, ABC-MCMC, ABC-SMC). To assess their small sample performance, Monte Carlo experiments are run on AR(1) processes and on an RBC model, showing that ABC estimators outperform the Limited Information Method (Kim, 2002), a GMM-style estimator. In the remainder, the estimation of a New Keynesian model with a zero lower bound on the interest rate is performed. Non-Gaussian moments are exploited in the estimation procedure. 
Keywords:  Monte Carlo analysis, Method of moments, Bayesian, Zero Lower Bound, DSGE estimation. 
JEL:  C15 C11 E2 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:saq:wpaper:06/15&r=ecm 
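As background to the accept-reject step described in the abstract above, here is a minimal ABC rejection sampler for an AR(1) process. The uniform prior range, tolerance and summary statistic (the lag-1 autocorrelation) are illustrative choices for this sketch, not those of the paper:

```python
import numpy as np

def simulate_ar1(phi, n, rng):
    """Simulate an AR(1) process y_t = phi * y_{t-1} + e_t with N(0,1) shocks."""
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

def abc_rejection(y_obs, n_draws, tol, rng):
    """Accept prior draws whose simulated summary statistic (here the lag-1
    autocorrelation) falls within `tol` of the observed one."""
    def stat(y):
        return np.corrcoef(y[:-1], y[1:])[0, 1]
    s_obs = stat(y_obs)
    accepted = []
    for _ in range(n_draws):
        phi = rng.uniform(-0.95, 0.95)            # draw from the prior
        y_sim = simulate_ar1(phi, len(y_obs), rng)
        if abs(stat(y_sim) - s_obs) < tol:        # accept-reject criterion
            accepted.append(phi)
    return np.array(accepted)

rng = np.random.default_rng(0)
y_obs = simulate_ar1(0.7, 500, rng)               # "observed" data, true phi = 0.7
post = abc_rejection(y_obs, 2000, 0.05, rng)      # approximate posterior draws
```

The accepted draws form the approximate posterior; shrinking the tolerance trades acceptance rate for approximation accuracy.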
By:  Silvia BACCI; Francesco BARTOLUCCI; Silvia PANDOLFI 
Abstract:  A critical problem in repeated measurement studies is the occurrence of non-ignorable missing observations. A common approach to deal with this problem is to jointly model the longitudinal and survival processes for each individual on the basis of a random effect that is usually assumed to be time constant. We relax this hypothesis by introducing time-varying subject-specific random effects that follow a first-order autoregressive process, AR(1). We also adopt a generalized linear model formulation to accommodate different types of longitudinal response (i.e., continuous, binary, count) and we consider some extended cases, such as counts with excess zeros and multivariate outcomes at each time occasion. Estimation of the parameters of the resulting joint model is based on maximization of the likelihood computed by a recursion developed in the hidden Markov literature. The maximization is performed on the basis of a quasi-Newton algorithm that also provides the information matrix and thus standard errors for the parameter estimates. The proposed approach is illustrated through a Monte Carlo simulation study and through the analysis of certain medical datasets. 
Keywords:  generalized linear models; informative dropout; non-ignorable missing mechanism; sequential quadrature; shared-parameter models 
Date:  2015–10–01 
URL:  http://d.repec.org/n?u=RePEc:pia:papers:00014/2015&r=ecm 
By:  Gerlach, Richard; Wang, Chao 
Abstract:  A new framework named Realized Conditional Autoregressive Expectile (Realized CARE) is proposed, incorporating a measurement equation into the conventional CARE model in a framework analogous to Realized-GARCH. The Range and realized measures (Realized Variance and Realized Range) are employed as the dependent variables of the measurement equation, since they have proven more efficient than returns for volatility estimation. The dependence between the Range and realized measures and the expectile can be modelled with this measurement equation, which also potentially improves the grid-search accuracy of the expectile level. In addition, employing a quadratic fitting target search significantly improves the speed of the grid search. Bayesian adaptive Markov chain Monte Carlo is used for estimation, and demonstrates its superiority over maximum likelihood in a simulation study. Furthermore, we propose an innovative subsampled Realized Range and also adopt an existing scaling scheme, in order to deal with the microstructure noise of the high frequency volatility measures. Compared to the CARE, the parametric GARCH and the Realized-GARCH models, Value-at-Risk and Expected Shortfall forecasting results for six indices and three asset series favor the proposed Realized-CARE model, especially with the Realized Range and the subsampled Realized Range. 
Keywords:  Realized-CARE ; Realized Variance ; Realized Range ; Subsampling Realized Range ; Markov Chain Monte Carlo ; Target Search ; Value-at-Risk ; Expected Shortfall 
Date:  2015–09–11 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13800&r=ecm 
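For readers unfamiliar with expectiles, the building block of the CARE family, the sketch below computes an unconditional tau-expectile by iterating the asymmetric least squares first-order condition. This is background only, not the authors' Realized-CARE model:

```python
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=1000):
    """tau-expectile of a sample: the fixed point of the asymmetrically
    weighted mean, with weight tau above the current value and (1 - tau) below."""
    mu = y.mean()
    for _ in range(max_iter):
        w = np.where(y <= mu, 1 - tau, tau)   # asymmetric least squares weights
        new = (w * y).sum() / w.sum()
        if abs(new - mu) < tol:
            break
        mu = new
    return mu

rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
e50 = expectile(y, 0.5)   # the 0.5-expectile is the sample mean
e90 = expectile(y, 0.9)   # higher tau shifts the expectile to the right
```

The 0.5-expectile equals the mean, just as the 0.5-quantile equals the median; CARE models make the conditional expectile dynamic.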
By:  Tran, MinhNgoc; Kohn, Robert 
Abstract:  Approximate Bayesian Computation (ABC) is a powerful method for carrying out Bayesian inference when the likelihood is computationally intractable. However, a drawback of ABC is that it is an approximate method that induces a systematic error, because a tolerance level must be set to make the computation tractable. The issue of how to optimally set this tolerance level has been the subject of extensive research. This paper proposes an ABC algorithm based on importance sampling that estimates expectations with respect to the exact posterior distribution given the observed summary statistics. This overcomes the need to select the tolerance level. By exact we mean that there is no systematic error and the Monte Carlo error can be made arbitrarily small by increasing the number of importance samples. We provide a formal justification for the method and study its convergence properties. The method is illustrated in two applications, and the empirical results suggest that the proposed ABC-based estimators consistently converge to the true values as the number of importance samples increases. Our approach can be applied more generally to any importance sampling problem where an unbiased estimate of the likelihood is required. 
Keywords:  Debiasing; Ising model; Unbiased likelihood 
Date:  2015–09–23 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13839&r=ecm 
By:  Tomás del Barrio Castro (Universitat de les Illes Balears); Andrii Bodnar; Andreu Sansó Rosselló (Universitat de les Illes Balears) 
Abstract:  This paper implements the approach introduced by MacKinnon (1994, 1996) to estimate the response surface of the test statistics of seasonal unit root tests with OLS and GLS detrending for quarterly and monthly time series. The Gauss code that is available in the supplementary material of the paper produces p-values for five test statistics depending on the sample size, deterministic terms and frequency of the data. A comparison with previous studies is undertaken, and an empirical example using airport passenger arrivals to a tourist destination is carried out. Quantile function coefficients are reported for simple computation of critical values for tests at 1%, 5% and 10% significance levels. 
Keywords:  HEGY test, GLS detrending, response surfaces 
JEL:  C12 C22 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ubi:deawps:73&r=ecm 
By:  Tomás del Barrio Castro (Universitat de les Illes Balears); Paulo M. M. Rodrigues (Bank of Portugal); A. M. Robert Taylor (University of Essex) 
Abstract:  It is well known that (seasonal) unit root tests can be seriously affected by the presence of weak dependence in the driving shocks when this is not accounted for. In the non-seasonal case both parametric (based around augmentation of the test regression with lagged dependent variables) and semi-parametric (based around an estimator of the long run variance of the shocks) unit root tests have been proposed. Of these, the M class of unit root tests introduced by Stock (1999), Perron and Ng (1996) and Ng and Perron (2001) appear to be particularly successful, showing good finite sample size control even in the most problematic (near-cancellation) case where the shocks contain a strong negative moving average component. The aim of this paper is threefold. First, we show the implications that neglected weak dependence in the shocks has on lag-unaugmented versions of the well known regression-based seasonal unit root tests of Hylleberg et al. (1990). Second, in order to complement extant parametrically augmented versions of the tests of Hylleberg et al. (1990), we develop semi-parametric seasonal unit root test procedures, generalising the methods developed in the non-seasonal case to our setting. Third, we compare the finite sample size and power properties of the parametric and semi-parametric seasonal unit root tests considered. Our results suggest that the superior size/power trade-off offered by the M approach in the non-seasonal case carries over to the seasonal case. 
Keywords:  Seasonal unit roots, weak dependence, lag augmentation, long run variance estimator, demodulated process. 
JEL:  C12 C22 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ubi:deawps:72&r=ecm 
By:  Bas van der Klaauw (VU University Amsterdam); Sandra Vriend (VU University Amsterdam, the Netherlands) 
Abstract:  Public programs often use statistical profiling to assess the risk that applicants will become long-term dependent on the program. The literature uses linear probability models and (Cox) proportional hazard models to predict duration outcomes. These either focus on one threshold duration or impose proportionality. In this paper we propose a nonparametric weighted survivor prediction method where the weights depend on the distance in characteristics between individuals. A simulation study shows that an Epanechnikov weighting function with a small bandwidth gives the best predictions, while the choice of distance function is less important for the performance of the weighted survivor prediction method. This yields predictions that are slightly better than Cox survivor function predictions. In an empirical application concerning the outflow to work from unemployment insurance, we do not find that the weighting method outperforms Cox survivor function predictions. 
Keywords:  profiling; Kaplan-Meier estimator; Cox proportional hazard model; distance metrics; weights; matching; unemployment duration 
JEL:  C14 C41 J64 
Date:  2015–11–12 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20150126&r=ecm 
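The weighted survivor prediction idea above can be sketched as a kernel-weighted Kaplan-Meier estimator. This sketch assumes a single scalar covariate with absolute-distance weighting; the paper considers general distance functions over several characteristics:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, zero outside |u| <= 1."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def weighted_km(x0, X, T, D, h):
    """Kernel-weighted Kaplan-Meier survivor curve at covariate point x0.
    X: covariates, T: durations, D: event indicator (1 = exit observed),
    h: bandwidth. Individuals similar to x0 get larger weight."""
    w = epanechnikov(np.abs(X - x0) / h)
    times = np.sort(np.unique(T[D == 1]))
    S, surv = 1.0, []
    for t in times:
        at_risk = w[T >= t].sum()                 # weighted risk set
        events = w[(T == t) & (D == 1)].sum()     # weighted exits at t
        if at_risk > 0:
            S *= 1 - events / at_risk             # product-limit update
        surv.append(S)
    return times, np.array(surv)

# Simulated durations whose mean depends on the covariate, with ~20% censoring
rng = np.random.default_rng(2)
n = 300
X = rng.uniform(0, 1, n)
T = np.round(rng.exponential(0.5 + X), 2)
D = (rng.uniform(size=n) > 0.2).astype(int)
times, S = weighted_km(0.5, X, T, D, h=0.3)
```

With a very large bandwidth all weights coincide and the estimator collapses to the ordinary Kaplan-Meier curve.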
By:  Sofiene El Aoud (FiQuant - Chaire de finance quantitative, Ecole Centrale Paris; MICS - Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec); Frédéric Abergel (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec; FiQuant - Chaire de finance quantitative, Ecole Centrale Paris) 
Abstract:  We present a continuous time Capital Asset Pricing Model where the volatilities of the market index and the stock are both stochastic. Using a singular perturbation technique, we provide approximations for the prices of European options on both the stock and the index. These approximations are functions of the model parameters. We then show that existing estimators of the parameter beta, proposed in the recent literature, are biased in our setting because they are all based on the assumption that the idiosyncratic volatility of the stock is constant. We then provide an unbiased estimator of the parameter beta using only implied volatility data. This estimator is a forward measure of the parameter beta in the sense that it represents the information contained in derivatives prices concerning the forward realization of this parameter. We then test its capacity to predict forward beta and draw a conclusion concerning its predictive power. 
Date:  2014–03–14 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal01006405&r=ecm 
By:  Matyas Barczy; Balazs Nyul; Gyula Pap 
Abstract:  We prove strong consistency and asymptotic normality of least squares estimators for the subcritical Heston model based on continuous time observations. We also present some numerical illustrations of our results. 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1511.05948&r=ecm 
By:  Fabian Dunker; Thorsten Hohage 
Abstract:  In this paper we present nonparametric estimators for coefficients in stochastic differential equations when the data are described by independent, identically distributed random variables. The problem is formulated as a nonlinear ill-posed operator equation with a deterministic forward operator described by the Fokker-Planck equation. We derive convergence rates of the risk for penalized maximum likelihood estimators with convex penalty terms and for Newton-type methods. The assumptions of our general convergence results are verified for estimation of the drift coefficient. The advantages of log-likelihood compared to quadratic data fidelity terms are demonstrated in Monte Carlo simulations. 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1404.0651&r=ecm 
By:  Marczak, Martyna; Proietti, Tommaso; Grassi, Stefano 
Abstract:  This article presents a robust augmented Kalman filter (AKF) that extends the data-cleaning filter (Masreliez and Martin, 1977) to the general state space model featuring nonstationary and regression effects. The robust filter shrinks the observations towards their one-step-ahead prediction based on the past, by bounding the effect of the information carried by a new observation according to an influence function. When maximum likelihood estimation is carried out on the replacement data, an M-type estimator is obtained. We investigate the performance of the robust AKF in two applications using as a modeling framework the basic structural time series model, a popular unobserved components model in the analysis of seasonal time series. First, a Monte Carlo experiment is conducted in order to evaluate the comparative accuracy of the proposed method for estimating the variance parameters. Second, the method is applied in a forecasting context to a large set of European trade statistics series. 
Keywords:  robust filtering, augmented Kalman filter, structural time series model, additive outlier, innovation outlier 
JEL:  C32 C53 C63 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:zbw:hohdps:132015&r=ecm 
By:  Naoya Sueishi (Graduate School of Economics, Kobe University) 
Abstract:  This study gives a simple derivation of the efficiency bound for conditional moment restriction models. The Fisher information is obtained by deriving a least favorable submodel in an explicit form. The proposed method also suggests an asymptotically efficient estimator, which can be viewed as an empirical likelihood estimator for conditional moment restriction models. 
Keywords:  Conditional moment restrictions; Empirical likelihood; Fisher information; Least favorable submodel. 
JEL:  C14 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:koe:wpaper:1531&r=ecm 
By:  Boriss Siliverstovs (KOF Swiss Economic Institute, ETH Zurich, Switzerland) 
Abstract:  In this paper we suggest an approach to comparing models' forecasting performance in unstable environments. Our approach is based on a combination of the Cumulated Sum of Squared Forecast Error Differential (CSSFED) suggested earlier in Welch and Goyal (2008) and the Bayesian change point analysis of Barry and Hartigan (1993). The latter methodology provides a formal statistical analysis of the CSSFED time series, which turns out to be a powerful graphical tool for tracking how the relative forecasting performance of competing models evolves over time. We illustrate the suggested approach using forecasts of the GDP growth rate in Switzerland. 
Keywords:  Forecasting, Forecast Evaluation, Change Point Detection, Bayesian Estimation 
JEL:  C22 C53 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:kof:wpskof:15397&r=ecm 
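The CSSFED series described above is simple to compute from two sequences of forecast errors; a minimal sketch with simulated errors (the Bayesian change-point layer of the paper is not reproduced here):

```python
import numpy as np

def cssfed(e1, e2):
    """Cumulated Sum of Squared Forecast Error Differentials (Welch and
    Goyal, 2008): an upward drift means model 2 is beating model 1 over time."""
    return np.cumsum(e1 ** 2 - e2 ** 2)

rng = np.random.default_rng(0)
e1 = 1.3 * rng.standard_normal(400)   # benchmark model's forecast errors
e2 = rng.standard_normal(400)         # competing model's errors (smaller variance)
path = cssfed(e1, e2)
```

Plotting `path` against time shows where the relative performance of the two models changes, which is what the change-point analysis then formalizes.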
By:  Tsagris, Michail 
Abstract:  Regression analysis with compositional data, for prediction purposes, is the subject of this paper. We examine both cases, where compositional data serve either as response or as predictor variables. A parametric model is assumed, but the interest lies in the accuracy of the predicted values. For this reason, a data-based power transformation is employed in both cases and the results are compared with the standard log-ratio approach. There are some interesting results, and one advantage of the methods proposed here is the handling of zero values. 
Keywords:  Compositional data, regression, prediction, α-transformation, principal component regression 
JEL:  C18 
Date:  2015–09–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67868&r=ecm 
By:  Tamara Burdisso (Central Bank of Argentina); Máximo Sangiácomo (Central Bank of Argentina) 
Abstract:  The document focuses on the econometric treatment of macro panels, known in the literature as panel time series. This new approach rejects the assumption of slope homogeneity and handles nonstationarity. It also recognizes that the presence of cross-section dependence (CSD), i.e. some correlation structure in the error term between units due to the presence of unobservable common factors, squanders the efficiency gains from operating with a panel. This has led to a new set of estimators known in the literature as Common Correlated Effects (CCE), which essentially consist of augmenting the model to be estimated with the averages across individuals, at each time t, of both the dependent variable and the individual-specific regressors. Finally, two Stata codes developed for the evaluation and treatment of cross-section dependence are presented. 
Keywords:  panel time series, nonstationarity, panel unit root, mean group estimator, cross-section dependence, common correlated effects 
JEL:  C13 C23 C87 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:bcr:wpaper:201568&r=ecm 
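The CCE augmentation described above is mechanical: each unit's regression gains the period-by-period cross-section averages of the dependent variable and the regressors as extra columns. A minimal mean-group sketch for one regressor and one common factor (the data-generating process is illustrative, and this is not the authors' Stata code):

```python
import numpy as np

def cce_mean_group(y, X):
    """Common Correlated Effects mean-group estimator: each unit's OLS
    regression is augmented with the cross-section averages of y and x,
    which proxy the unobserved common factor. y: (N, T), X: (N, T)."""
    N, T = y.shape
    ybar, xbar = y.mean(axis=0), X.mean(axis=0)   # cross-section averages
    slopes = []
    for i in range(N):
        Z = np.column_stack([X[i], np.ones(T), ybar, xbar])  # augmented regressors
        b = np.linalg.lstsq(Z, y[i], rcond=None)[0]
        slopes.append(b[0])
    return np.mean(slopes)                         # mean-group slope

# Panel with a common factor f_t entering both y and x (so plain OLS is biased)
rng = np.random.default_rng(1)
N, T, beta = 30, 100, 2.0
f = rng.standard_normal(T)
gam = rng.uniform(0.5, 1.5, N)                     # heterogeneous factor loadings
x = 0.5 * f + rng.standard_normal((N, T))
y = beta * x + gam[:, None] * f + 0.5 * rng.standard_normal((N, T))
bhat = cce_mean_group(y, x)
```

Without the augmentation, the correlation between x and the factor loading term would bias the slope away from 2.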
By:  Sutton, M; Vasnev, A; Gerlach, R 
Abstract:  This paper proposes an ex-post volatility estimator, called generalized variance, that uses high frequency data to provide measurements robust to the idiosyncratic noise of stock markets caused by market microstructures. The new volatility estimator is analyzed theoretically, examined in a simulation study and evaluated empirically against the two currently dominant measures of daily volatility: realized volatility and realized range. The main finding is that generalized variance is robust to the presence of microstructures while delivering accuracy superior to realized volatility and realized range in several circumstances. The empirical study features Australian stocks from the ASX 20. 
Keywords:  Volatility ; Robust estimator 
Date:  2015–04–30 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13263&r=ecm 
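The two benchmark measures the paper compares against can be sketched in a few lines. Note this is background on realized volatility and realized range only; the paper's own generalized variance estimator is not reproduced here:

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: the sum of squared intraday log returns."""
    r = np.diff(np.log(prices))
    return np.sum(r ** 2)

def realized_range(high, low):
    """Realized range: scaled sum of squared intraday log high-low ranges;
    the 1 / (4 ln 2) factor makes it unbiased for Brownian motion."""
    return np.sum(np.log(high / low) ** 2) / (4 * np.log(2))

# One simulated day of tick prices with per-tick return volatility 0.01
rng = np.random.default_rng(0)
logp = np.cumsum(0.01 * rng.standard_normal(10_000))
rv = realized_variance(np.exp(logp))   # should be close to 10_000 * 0.01**2 = 1.0
```

In practice microstructure noise inflates both measures at very high sampling frequencies, which motivates robust alternatives like the one proposed above.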
By:  Hans Colonius 
Abstract:  This paper presents an introduction to the stochastic concepts of coupling and copula. Coupling means the construction of a joint distribution of two or more random variables that need not be defined on one and the same probability space, whereas a copula is a function that joins a multivariate distribution to its one-dimensional margins. Their role in stochastic modeling is illustrated by examples from multisensory perception. Pointers to more advanced and recent treatments are provided. 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1511.05303&r=ecm 
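The copula idea above has a one-line constructive illustration: pushing correlated normals through their own CDF yields uniform margins while preserving the dependence structure. A minimal Gaussian-copula sampler (the paper itself is an expository survey, not tied to this example):

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_sample(rho, n, rng):
    """Draw (U, V) from a bivariate Gaussian copula with correlation rho:
    each margin is Uniform(0,1), but the two coordinates remain dependent."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return norm.cdf(z)   # probability integral transform of each coordinate

rng = np.random.default_rng(0)
u = gaussian_copula_sample(0.8, 5000, rng)
```

Applying any marginal quantile functions to the columns of `u` then produces a joint distribution with those margins and Gaussian-copula dependence, which is exactly the "joining" role of a copula.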
By:  Prokhorov, Artem; Schepsmeier, Ulf; Zhu, Yajing 
Abstract:  We propose a family of goodness-of-fit tests for copulas. The tests use generalizations of the information matrix (IM) equality of White (1982) and so relate to the copula test proposed by Huang and Prokhorov (2014). The idea is that eigenspectrum-based statements of the IM equality reduce the degrees of freedom of the test's asymptotic distribution and lead to better size-power properties, even in high dimensions. The gains are especially pronounced for vine copulas, where additional benefits come from simplifications of score functions and the Hessian. We derive the asymptotic distribution of the generalized tests, accounting for the nonparametric estimation of the marginals, and apply a parametric bootstrap procedure, valid when asymptotic critical values are inaccurate. In Monte Carlo simulations, we study the behavior of the new tests, compare them with several Cramér-von Mises type tests and confirm the desired properties of the new tests in high dimensions. 
Keywords:  information matrix equality; copula; goodness-of-fit; vine copulas; R-vines 
Date:  2015–09–11 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13798&r=ecm 
By:  David Pence Slichter 
Abstract:  This paper studies the employment effects of the minimum wage using a novel empirical strategy which allows the researcher to identify treatment effects when more than one control group is available but each such control group is imperfect. Expanding on previous researchers who have compared regions which increase the minimum wage with nearby regions which do not, I compare border counties in which the minimum wage increases to the set of neighboring counties, the set of neighbor-of-neighboring counties, etc. The key innovation is to model the ratio of the biases of these comparisons. The model I select uses the relative similarity of control groups to the treated group on observables as a guide to their relative similarity on unobservables. Crucially, models of this type have a testable implication when there are enough control groups. Using data from the United States, I find that recent minimum wage increases have produced modest or zero disemployment effects for teenagers. 
JEL:  J38 C21 C29 
Date:  2015–11–14 
URL:  http://d.repec.org/n?u=RePEc:jmp:jm2015:psl76&r=ecm 
By:  Amsler, Christine; Prokhorov, Artem; Schmidt, Peter 
Abstract:  Stochastic frontier models are typically estimated by maximum likelihood (MLE) or corrected ordinary least squares. The consistency of either estimator depends on the exogeneity of the explanatory variables (inputs, in the production frontier setting). We investigate the case where one or more of the inputs is endogenous, in the simultaneous equation sense of endogeneity. That is, we worry that there is correlation between the inputs and statistical noise or inefficiency. In a standard regression setting, simultaneity is handled by a number of procedures that are numerically or asymptotically equivalent. These include 2SLS; using the residuals from the reduced form equations for the endogenous variables as a control function; and MLE of the system that contains the equation of interest plus the unrestricted reduced form equations for the endogenous variables (LIML). We consider modifications of these standard procedures for the stochastic frontier setting. The paper is mostly a survey and combination of existing results from the stochastic frontier literature and the classic simultaneous equations literature, but it also contains some new results. 
Keywords:  endogeneity; stochastic frontier; efficiency measurement 
Date:  2015–02–17 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/12755&r=ecm 
By:  Thor Pajhede (Department of Economics, University of Copenhagen) 
Abstract:  Testing the validity of Value-at-Risk (VaR) forecasts, or backtesting, is an integral part of modern market risk management and regulation. This is often done by applying the independence and coverage tests developed in Christoffersen (1998) to so-called hit sequences derived from VaR forecasts and realized losses. However, as pointed out in the literature, see Christoffersen (2004), these tests suffer from low rejection frequencies, or (empirical) power, when applied to hit sequences derived from simulations matching empirical stylized characteristics of return data. One key observation of these studies is that non-Markovian behavior in the hit sequences may cause the observed lower power performance. To allow for non-Markovian behavior, we propose to generalize the backtest framework for Value-at-Risk forecasts by extending the original first order dependence of Christoffersen (1998) to a higher, or k'th, order dependence. We provide closed form expressions for the tests as well as asymptotic theory. Not only do the generalized tests have power against k'th order dependence by construction, but simulations also indicate improved power performance when replicating the aforementioned studies. 
Keywords:  Value-at-Risk, Backtesting, Risk Management, Markov Chain, Duration-based test, quantile, likelihood ratio, maximum likelihood. 
JEL:  C12 C15 C52 C32 
Date:  2015–11–18 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1518&r=ecm 
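The coverage half of the Christoffersen (1998) framework referenced above is straightforward to implement; the independence part and the paper's k'th-order generalization are not reproduced in this sketch:

```python
import numpy as np
from scipy.stats import chi2

def lr_unconditional_coverage(hits, alpha):
    """Christoffersen (1998) unconditional coverage likelihood-ratio test:
    H0 is that VaR violations ('hits') occur with probability alpha.
    Returns the LR statistic and its asymptotic chi-squared(1) p-value."""
    n, x = len(hits), int(np.sum(hits))
    pihat = np.clip(x / n, 1e-10, 1 - 1e-10)   # guard the log at 0 or 1
    ll0 = x * np.log(alpha) + (n - x) * np.log(1 - alpha)   # restricted
    ll1 = x * np.log(pihat) + (n - x) * np.log(1 - pihat)   # unrestricted
    lr = -2.0 * (ll0 - ll1)
    return lr, chi2.sf(lr, df=1)

good = np.array([1] * 50 + [0] * 950)    # exactly 5% violations out of 1000
bad = np.array([1] * 150 + [0] * 850)    # 15% violations: VaR model too loose
lr_g, p_g = lr_unconditional_coverage(good, 0.05)
lr_b, p_b = lr_unconditional_coverage(bad, 0.05)
```

A hit rate matching the nominal level gives an LR of zero, while systematic over- or under-coverage is rejected; the paper's contribution is testing dependence in the hit sequence beyond first order.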
By:  David M. Kaplan (Department of Economics, University of Missouri) 
Abstract:  Testing whether two parameters have the same sign is a nonstandard problem due to the nonconvex shape of the parameter subspace satisfying the composite null hypothesis, which is a nonlinear inequality constraint. We describe a simple example where the ordering of likelihood ratio (LR), Wald, and Bayesian sign equality tests reverses the “usual” ordering: the Wald rejection region is a subset of LR’s, as is the Bayesian rejection region (either asymptotically or with an uninformative prior). Under general conditions, we show that nonconvexity of the null hypothesis subspace is a necessary but not sufficient condition for this asymptotic frequentist/Bayesian ordering. Since linear inequalities only generate convex regions, a corollary is that frequentist tests are more conservative than Bayesian tests in that setting. We also examine a nearly similar-on-the-boundary, unbiased test of sign equality. Rather than claim moral superiority of one statistical framework or test, we wish to clarify the regrettably ineluctable trade-offs. 
Keywords:  convexity, likelihood ratio, limit experiment, nonlinear inequality constraint, nonstandard inference, unbiased test, Wald 
JEL:  C11 C12 
Date:  2015–07–14 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:1516&r=ecm 
By:  Shin Kanaya (Aarhus University and CREATES) 
Abstract:  In this paper, we derive uniform convergence rates of nonparametric estimators for continuous time diffusion processes. In particular, we consider kernel-based estimators of the Nadaraya-Watson type, introducing a new technical device called a damping function. This device allows us to derive sharp uniform rates over an infinite interval with minimal requirements on the processes: the existence of moments of any order is not required and the boundedness of relevant functions can be significantly relaxed. Restrictions on kernel functions are also minimal: we allow for kernels with discontinuities, unbounded support and slowly decaying tails. Our proofs proceed by using the covering-number technique from empirical process theory and exploiting the mixing and martingale properties of the processes. We also present new results on the path-continuity property of Brownian motions and diffusion processes over an infinite time horizon. These path-continuity results, which should also be of independent interest, are used to control discretization biases of the nonparametric estimators. The obtained convergence results are useful for non- and semiparametric estimation and testing problems for diffusion processes. 
Keywords:  Diffusion process, uniform convergence, kernel estimation, nonparametric. 
JEL:  C14 C32 C58 
Date:  2015–11–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201550&r=ecm 
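A Nadaraya-Watson estimator of the kind analyzed above can be sketched for drift estimation of a discretely observed diffusion. The Ornstein-Uhlenbeck example, Gaussian kernel and bandwidth are illustrative choices; the paper's damping device and its relaxed kernel conditions are not implemented here:

```python
import numpy as np

def nadaraya_watson(x0, X, Y, h):
    """Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel."""
    K = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(K * Y) / np.sum(K)

# Euler-discretized Ornstein-Uhlenbeck path: dX = -theta * X dt + sigma dW.
# The drift at x is then estimated by smoothing the scaled increments:
# drift(x) ~ E[(X_{t+dt} - X_t) / dt | X_t = x].
rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 20_000
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] * (1 - theta * dt) + sigma * np.sqrt(dt) * eps[t]

drift_hat = nadaraya_watson(0.3, x[:-1], np.diff(x) / dt, h=0.1)  # true value: -0.3
```

The estimate is noisy because the scaled increments have variance of order sigma²/dt, which is exactly why uniform convergence results over long horizons require care.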
By:  Levent Bulut (Department of Economics, Ipek University) 
Abstract:  In a Monte Carlo experiment with simulated data, I show that, as a point forecast criterion, the Clark and West (2006) unconditional test of mean squared prediction errors (MSPE) fails to reflect the relative performance of a superior model over a relatively weaker one. The simulation results show that, even when the MSPE of a superior model is far below that of a weaker alternative, the Clark and West (2006) test does not reflect this in its test statistic. Therefore, studies that use this statistic to test the predictive accuracy of alternative exchange rate models, equity risk premium predictions, stock return predictability, inflation forecasts and unemployment forecasts should not put too much weight on the magnitude of a statistically significant Clark and West (2006) test statistic. 
Keywords:  Model comparison, predictive accuracy, point-forecast criterion, the Clark and West test. 
JEL:  F37 F47 G17 C52 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:ipk:wpaper:1509&r=ecm 
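The statistic under discussion is simple to compute; a minimal sketch of the Clark and West (2006) MSPE-adjusted comparison, with an illustrative data-generating process (a zero benchmark forecast against the correctly specified model):

```python
import numpy as np

def clark_west(y, f_small, f_big):
    """Clark and West (2006) MSPE-adjusted comparison of a parsimonious model
    (forecasts f_small) against a larger nesting model (forecasts f_big).
    A large positive t-statistic rejects equal predictive accuracy."""
    adj = (y - f_small) ** 2 - ((y - f_big) ** 2 - (f_small - f_big) ** 2)
    n = len(adj)
    tstat = np.sqrt(n) * adj.mean() / adj.std(ddof=1)
    return adj.mean(), tstat

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.9 * x + rng.standard_normal(500)
mean_adj, t_cw = clark_west(y, np.zeros(500), 0.9 * x)   # benchmark: zero forecast
```

Because the adjustment term removes the estimation-noise penalty of the larger model, the magnitude of the statistic is not a measure of how much lower its MSPE is, which is the paper's warning.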
By:  Demuynck T. (GSBE) 
Abstract:  We show how to obtain bounds on mean treatment effects by solving a simple linear programming problem. The use of a linear programme is convenient from a practical point of view because it avoids the need to derive closed-form solutions. Imposing or omitting monotonicity or concavity restrictions is done by simply adding or removing sets of linear restrictions in the linear programme. 
Keywords:  Semiparametric and Nonparametric Methods: General; Optimization Techniques; Programming Models; Dynamic Analysis; Human Capital; Skills; Occupational Choice; Labor Productivity 
JEL:  C14 C61 J24 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:unm:umagsb:2015027&r=ecm 
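The idea above can be illustrated with worst-case (Manski-style) bounds on the average treatment effect for a bounded outcome, where the unobserved counterfactual means are the LP variables and a monotone treatment response restriction is literally two extra linear rows. This toy setup is an illustration of the LP approach, not the paper's own programme:

```python
import numpy as np
from scipy.optimize import linprog

def ate_bounds(p, m1, m0, lo=0.0, hi=1.0, monotone=False):
    """Bounds on ATE for an outcome in [lo, hi], as a linear programme over
    the unobserved counterfactual means x = (E[Y1 | D=0], E[Y0 | D=1]).
    p = P(D=1), m1 = E[Y | D=1], m0 = E[Y | D=0]."""
    # ATE = p*m1 + (1-p)*x0 - p*x1 - (1-p)*m0, which is linear in x
    c = np.array([1 - p, -p])
    const = p * m1 - (1 - p) * m0
    A_ub = b_ub = None
    if monotone:
        # Mean monotone treatment response: x0 >= m0 and x1 <= m1
        A_ub = np.array([[-1.0, 0.0], [0.0, 1.0]])
        b_ub = np.array([-m0, m1])
    bnds = [(lo, hi)] * 2
    lower = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds).fun + const
    upper = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bnds).fun + const
    return lower, upper

lo1, hi1 = ate_bounds(0.5, 0.7, 0.4)                   # worst-case bounds
lo2, hi2 = ate_bounds(0.5, 0.7, 0.4, monotone=True)    # add monotonicity
```

Adding the monotonicity rows tightens the identified set (here the lower bound rises from -0.35 to 0), exactly the add-or-remove-restrictions convenience the abstract emphasizes.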
By:  Willi Mutschler 
Abstract:  This note derives closed-form expressions for unconditional moments, cumulants and polyspectra of order higher than two for linear and nonlinear (pruned) DSGE models. The procedures are demonstrated by means of the Smets and Wouters (2007) model (first-order approximation), the An and Schorfheide (2007) model (second-order approximation) and the canonical neoclassical growth model (third-order approximation). Both the Gaussian and Student's t-distribution are considered as the underlying stochastic process. Useful matrix tools and computational aspects are also discussed. 
Keywords:  higher-order moments, cumulants, polyspectra, nonlinear DSGE, pruning 
JEL:  C10 C51 E1 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:cqe:wpaper:4315&r=ecm 
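For the linear (first-order) case mentioned above, the unconditional second moments come from a discrete Lyapunov equation; a minimal sketch (the note's higher-order cumulants and polyspectra are beyond this example):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def unconditional_cov(A, B, Sigma_eps):
    """Unconditional covariance of a stable linear law of motion
    x_{t+1} = A x_t + B eps_{t+1}, i.e. the solution of
    Sigma = A Sigma A' + B Sigma_eps B'."""
    return solve_discrete_lyapunov(A, B @ Sigma_eps @ B.T)

# Scalar AR(1) sanity check: Var(x) = sigma^2 / (1 - a^2)
Sigma = unconditional_cov(np.array([[0.9]]),
                          np.array([[1.0]]),
                          np.array([[1.0]]))
```

The same fixed-point logic, applied to the pruned state vector and higher-order products of states, is what yields closed-form higher-order moments.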