Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2015-11-21
GEL Estimation for Heavy-Tailed GARCH Models with Robust Empirical Likelihood Inference
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13795&r=ecm
We construct a Generalized Empirical Likelihood estimator for a GARCH(1,1) model with a possibly heavy-tailed error. The estimator embeds tail-trimmed estimating equations, allowing for over-identifying conditions, asymptotic normality, efficiency and empirical-likelihood-based confidence regions for very heavy-tailed random volatility data. We show the implied probabilities from the tail-trimmed Continuously Updated Estimator elevate weight for usable large values, assign large but not maximum weight to extreme observations, and give the lowest weight to non-leverage points. We derive a higher-order expansion for GEL with embedded tail-trimming (GELITT), which reveals higher-order bias and efficiency properties, available when the GARCH error has a finite second moment. Higher-order asymptotics for GEL without tail-trimming require the error to have moments of substantially higher order. We use first-order asymptotics and higher-order bias to justify the choice of the number of trimmed observations in any given sample. We also present robust versions of Generalized Empirical Likelihood Ratio, Wald, and Lagrange Multiplier tests, and an efficient and heavy-tail-robust moment estimator with an application to expected shortfall estimation. Finally, we present a broad simulation study for GEL and GELITT, and demonstrate profile-weighted expected shortfall for the Russian Ruble - US Dollar exchange rate. We show that tail-trimmed CUE-GMM dominates other estimators in terms of bias, MSE and approximate normality. AMS classifications: 62M10, 62F35.
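The tail-trimming device at the heart of the abstract can be illustrated in a few lines. This is a generic sketch of trimming the most extreme observations before averaging, not the authors' GELITT estimator (the function name and the choice of trimming the k largest absolute values are illustrative):

```python
import numpy as np

def tail_trimmed_mean(x, k):
    """Mean of x with the k observations largest in absolute value set to zero.

    Illustrative sketch of the tail-trimming idea only; the paper trims
    GARCH estimating equations, not raw data.
    """
    x = np.asarray(x, dtype=float)
    if k == 0:
        return x.mean()
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x|
    trimmed = x.copy()
    trimmed[idx] = 0.0                 # trim (zero out) extreme observations
    return trimmed.mean()

rng = np.random.default_rng(0)
sample = rng.standard_t(df=2.5, size=10_000)   # heavy-tailed draws
print(tail_trimmed_mean(sample, k=50))
```

With heavy-tailed draws, a small amount of trimming stabilizes the sample mean considerably.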
Hill, Jonathan B.
Prokhorov, Artem
GEL ; GARCH ; tail trimming ; heavy tails ; robust inference ; efficient moment estimation ; expected shortfall ; Russian Ruble
2015-09-11
Estimating Non-Linear DSGEs with the Approximate Bayesian Computation: an application to the Zero Lower Bound
http://d.repec.org/n?u=RePEc:saq:wpaper:06/15&r=ecm
Non-linear model estimation is generally perceived as impractical and computationally burdensome. This perception has limited the diffusion of non-linear model estimation. In this paper a simple set of techniques going under the name of Approximate Bayesian Computation (ABC) is proposed. ABC is a set of Bayesian techniques based on moment matching: moments are obtained by simulating the model conditional on draws from the prior distribution. An accept-reject criterion is applied to the simulations and an approximate posterior distribution is obtained from the accepted draws. A series of techniques are presented (ABC-regression, ABC-MCMC, ABC-SMC). To assess their small-sample performance, Monte Carlo experiments are run on AR(1) processes and on an RBC model, showing that ABC estimators outperform the Limited Information Method (Kim, 2002), a GMM-style estimator. In the remainder of the paper, a New Keynesian model with a zero lower bound on the interest rate is estimated. Non-Gaussian moments are exploited in the estimation procedure.
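The accept-reject step described above can be sketched for the AR(1) case, matching the first-order autocorrelation; the prior, tolerance and summary statistic here are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(rho, T, rng):
    """Simulate an AR(1) process y_t = rho * y_{t-1} + e_t."""
    e = rng.standard_normal(T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    return y

def abc_reject(y_obs, n_draws, tol, rng):
    """Accept prior draws whose simulated first-order autocorrelation
    falls within `tol` of the observed one (moment matching)."""
    def acf1(y):
        return np.corrcoef(y[:-1], y[1:])[0, 1]
    s_obs = acf1(y_obs)
    accepted = []
    for _ in range(n_draws):
        rho = rng.uniform(-0.95, 0.95)        # draw from the prior
        y_sim = simulate_ar1(rho, len(y_obs), rng)
        if abs(acf1(y_sim) - s_obs) < tol:    # accept-reject step
            accepted.append(rho)
    return np.array(accepted)

y_obs = simulate_ar1(0.6, 500, rng)
post = abc_reject(y_obs, 2000, 0.05, rng)
print(post.mean(), len(post))   # accepted draws concentrate near rho = 0.6
```

The accepted draws form the approximate posterior; ABC-regression, ABC-MCMC and ABC-SMC refine this basic scheme.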
Valerio Scalone
Monte-Carlo analysis, Method of moments, Bayesian, Zero Lower Bound, DSGE estimation.
2015-11
A joint model for longitudinal and survival data based on an AR(1) latent process
http://d.repec.org/n?u=RePEc:pia:papers:00014/2015&r=ecm
A critical problem in repeated measurement studies is the occurrence of non-ignorable missing observations. A common approach to deal with this problem is to jointly model the longitudinal and survival processes for each individual on the basis of a random effect that is usually assumed to be time constant. We relax this hypothesis by introducing time-varying subject-specific random effects that follow a first-order autoregressive process, AR(1). We also adopt a generalized linear model formulation to accommodate different types of longitudinal response (i.e., continuous, binary, count) and we consider some extended cases, such as counts with excess of zeros and multivariate outcomes at each time occasion. Estimation of the parameters of the resulting joint model is based on maximization of the likelihood computed by a recursion developed in the hidden Markov literature. The maximization is performed on the basis of a quasi-Newton algorithm that also provides the information matrix and hence standard errors for the parameter estimates. The proposed approach is illustrated through a Monte Carlo simulation study and through the analysis of certain medical datasets.
Silvia BACCI
Francesco BARTOLUCCI
Silvia PANDOLFI
generalized linear models; informative dropout; nonignorable missing mechanism; sequential quadrature; shared-parameter models
2015-10-01
Bayesian Semi-parametric Realized-CARE Models for Tail Risk Forecasting Incorporating Range and Realized Measures
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13800&r=ecm
A new framework named Realized Conditional Autoregressive Expectile (Realized-CARE) is proposed, incorporating a measurement equation into the conventional CARE model in a framework analogous to Realized-GARCH. The range and realized measures (Realized Variance and Realized Range) are employed as the dependent variables of the measurement equation, since they have proven more efficient than returns for volatility estimation. The dependence between the range and realized measures and the expectile can be modelled with this measurement equation. The accuracy of the grid search for the expectile level is potentially improved by introducing this measurement equation. In addition, through employing a quadratic fitting target search, the speed of the grid search is significantly improved. Bayesian adaptive Markov Chain Monte Carlo is used for estimation, and demonstrates its superiority compared to maximum likelihood in a simulation study. Furthermore, we propose an innovative sub-sampled Realized Range and also adopt an existing scaling scheme, in order to deal with the micro-structure noise of the high frequency volatility measures. Compared to the CARE, the parametric GARCH and the Realized-GARCH models, Value-at-Risk and Expected Shortfall forecasting results on 6 index and 3 asset series favor the proposed Realized-CARE model, especially the Realized-CARE model with Realized Range and sub-sampled Realized Range.
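The expectile that CARE models is the solution of an asymmetric least squares problem; for a plain sample it can be computed by iteratively reweighted means. This generic sketch is not the Realized-CARE model itself:

```python
import numpy as np

def expectile(x, tau, n_iter=100):
    """Sample expectile at level tau via iteratively reweighted means.

    The tau-expectile mu solves tau*E[(x-mu)+] = (1-tau)*E[(mu-x)+],
    the asymmetric-least-squares analogue of a quantile; tau = 0.5
    gives the mean.
    """
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    for _ in range(n_iter):
        w = np.where(x > mu, tau, 1.0 - tau)   # asymmetric weights
        mu = np.sum(w * x) / np.sum(w)         # fixed-point update
    return mu

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)
print(expectile(x, 0.5))   # coincides with the sample mean
print(expectile(x, 0.05))  # a left-tail expectile, relevant for VaR/ES
```

The fixed-point iteration solves the first-order condition of the asymmetric least squares criterion.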
Gerlach, Richard
Wang, Chao
Realized-CARE ; Realized Variance ; Realized Range ; Subsampling Realized Range ; Markov Chain Monte Carlo ; Target Search ; Value-at-Risk ; Expected Shortfall
2015-09-11
Exact ABC using Importance Sampling
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13839&r=ecm
Approximate Bayesian Computation (ABC) is a powerful method for carrying out Bayesian inference when the likelihood is computationally intractable. However, a drawback of ABC is that it is an approximate method that induces a systematic error because it is necessary to set a tolerance level to make the computation tractable. The issue of how to optimally set this tolerance level has been the subject of extensive research. This paper proposes an ABC algorithm based on importance sampling that estimates expectations with respect to the exact posterior distribution given the observed summary statistics. This overcomes the need to select the tolerance level. By exact we mean that there is no systematic error and the Monte Carlo error can be made arbitrarily small by increasing the number of importance samples. We provide a formal justification for the method and study its convergence properties. The method is illustrated in two applications and the empirical results suggest that the proposed ABC-based estimators consistently converge to the true values as the number of importance samples increases. Our proposed approach can be applied more generally to any importance sampling problem where an unbiased estimate of the likelihood is required.
Tran, Minh-Ngoc
Kohn, Robert
Debiasing; Ising model; Unbiased likelihood;
2015-09-23
Numerical Distribution Functions for Seasonal Unit Root Test with and without GLS Detrending
http://d.repec.org/n?u=RePEc:ubi:deawps:73&r=ecm
This paper implements the approach introduced by MacKinnon (1994, 1996) to estimate the response surface of the test statistics of seasonal unit root tests with OLS and GLS detrending for quarterly and monthly time series. The Gauss code that is available in the supplementary material of the paper produces p-values for five test statistics depending on the sample size, deterministic terms and frequency of the data. A comparison with previous studies is undertaken, and an empirical example using airport passenger arrivals to a tourist destination is carried out. Quantile function coefficients are reported for simple computation of critical values for tests at 1%, 5% and 10% significance levels.
Tomás del Barrio Castro
Andrii Bodnar
Andreu Sansó Rosselló
HEGY test, GLS detrending, response surfaces
2015
Semi-Parametric Seasonal Unit Root Tests
http://d.repec.org/n?u=RePEc:ubi:deawps:72&r=ecm
It is well known that (seasonal) unit root tests can be seriously affected by the presence of weak dependence in the driving shocks when this is not accounted for. In the non-seasonal case both parametric (based around augmentation of the test regression with lagged dependent variables) and semi-parametric (based around an estimator of the long-run variance of the shocks) unit root tests have been proposed. Of these, the M class of unit root tests introduced by Stock (1999), Perron and Ng (1996) and Ng and Perron (2001) appears to be particularly successful, showing good finite sample size control even in the most problematic (near-cancellation) case where the shocks contain a strong negative moving average component. The aim of this paper is threefold. First, we show the implications that neglected weak dependence in the shocks has on lag-unaugmented versions of the well-known regression-based seasonal unit root tests of Hylleberg et al. (1990). Second, in order to complement extant parametrically augmented versions of the tests of Hylleberg et al. (1990), we develop semi-parametric seasonal unit root test procedures, generalising the methods developed in the non-seasonal case to our setting. Third, we compare the finite sample size and power properties of the parametric and semi-parametric seasonal unit root tests considered. Our results suggest that the superior size/power trade-off offered by the M approach in the non-seasonal case carries over to the seasonal case.
Tomás del Barrio Castro
Paulo M. M. Rodrigues
A. M. Robert Taylor
Seasonal unit roots, weak dependence, lag augmentation, long run variance estimator, demodulated process.
2015
A Nonparametric Method for Predicting Survival Probabilities
http://d.repec.org/n?u=RePEc:tin:wpaper:20150126&r=ecm
Public programs often use statistical profiling to assess the risk that applicants will become long-term dependent on the program. The literature uses linear probability models and (Cox) proportional hazard models to predict duration outcomes. These either focus on one threshold duration or impose proportionality. In this paper we propose a nonparametric weighted survivor prediction method where the weights depend on the distance in characteristics between individuals. A simulation study shows that an Epanechnikov weighting function with a small bandwidth gives the best predictions while the choice of distance function is less important for the performance of the weighted survivor prediction method. This yields predictions that are slightly better than Cox survivor function predictions. In an empirical application concerning the outflow to work from unemployment insurance, we do not find that the weighting method outperforms Cox survivor function predictions.
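A sketch of the weighted survivor prediction with an Epanechnikov weighting function, simplified to a precomputed scalar distance per donor observation (function names are ours); with zero distances it collapses to the ordinary Kaplan-Meier estimator:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, zero outside [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1.0 - u ** 2), 0.0)

def weighted_km(time, event, dist, h, t_grid):
    """Weighted Kaplan-Meier survivor prediction for one target individual.

    Donor i gets weight K(dist_i / h), where dist_i is the covariate
    distance between donor i and the target individual.
    """
    w = epanechnikov(np.asarray(dist) / h)
    time, event = np.asarray(time), np.asarray(event)
    surv = []
    for t in t_grid:
        s = 1.0
        for u in np.sort(np.unique(time[(event == 1) & (time <= t)])):
            at_risk = w[time >= u].sum()                  # weighted risk set
            failed = w[(time == u) & (event == 1)].sum()  # weighted failures
            if at_risk > 0:
                s *= 1.0 - failed / at_risk
        surv.append(s)
    return np.array(surv)

# Zero distances -> equal weights -> ordinary Kaplan-Meier
time = np.array([1.0, 2.0, 3.0, 4.0])
event = np.array([1, 1, 0, 1])       # the third spell is censored
print(weighted_km(time, event, np.zeros(4), h=1.0, t_grid=[1, 2, 3, 4]))
# survivor probabilities 0.75, 0.5, 0.5, 0.0
```

With nonzero distances, donors similar to the target dominate the product-limit calculation, which is the profiling idea in the abstract.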
Bas van der Klaauw
Sandra Vriend
profiling; Kaplan-Meier estimator; Cox proportional hazard model; distance metrics; weights; matching; unemployment duration
2015-11-12
Calibration of a stock's beta using options prices
http://d.repec.org/n?u=RePEc:hal:journl:hal-01006405&r=ecm
We present in our work a continuous-time Capital Asset Pricing Model in which the volatilities of the market index and the stock are both stochastic. Using a singular perturbation technique, we provide approximations for the prices of European options on both the stock and the index. These approximations are functions of the model parameters. We then show that existing estimators of the parameter beta, proposed in the recent literature, are biased in our setting because they are all based on the assumption that the idiosyncratic volatility of the stock is constant. We then provide an unbiased estimator of the parameter beta using only implied volatility data. This estimator is a forward measure of the parameter beta in the sense that it represents the information contained in derivatives prices concerning the forward realization of this parameter. We then test its ability to predict forward beta and draw a conclusion concerning its predictive power.
Sofiene El Aoud
Frédéric Abergel
2014-03-14
Least squares estimation for the subcritical Heston model based on continuous time observations
http://d.repec.org/n?u=RePEc:arx:papers:1511.05948&r=ecm
We prove strong consistency and asymptotic normality of least squares estimators for the subcritical Heston model based on continuous time observations. We also present some numerical illustrations of our results.
Matyas Barczy
Balazs Nyul
Gyula Pap
2015-11
On parameter identification in stochastic differential equations by penalized maximum likelihood
http://d.repec.org/n?u=RePEc:arx:papers:1404.0651&r=ecm
In this paper we present nonparametric estimators for coefficients in stochastic differential equations when the data are described by independent, identically distributed random variables. The problem is formulated as a nonlinear ill-posed operator equation with a deterministic forward operator described by the Fokker-Planck equation. We derive convergence rates of the risk for penalized maximum likelihood estimators with convex penalty terms and for Newton-type methods. The assumptions of our general convergence results are verified for estimation of the drift coefficient. The advantages of log-likelihood compared to quadratic data fidelity terms are demonstrated in Monte Carlo simulations.
Fabian Dunker
Thorsten Hohage
2014-04
A data-cleaning augmented Kalman filter for robust estimation of state space models
http://d.repec.org/n?u=RePEc:zbw:hohdps:132015&r=ecm
This article presents a robust augmented Kalman filter that extends the data-cleaning filter (Masreliez and Martin, 1977) to the general state space model featuring nonstationary and regression effects. The robust filter shrinks the observations towards their one-step-ahead prediction based on the past, by bounding the effect of the information carried by a new observation according to an influence function. When maximum likelihood estimation is carried out on the replacement data, an M-type estimator is obtained. We investigate the performance of the robust AKF in two applications using as a modeling framework the basic structural time series model, a popular unobserved components model in the analysis of seasonal time series. First, a Monte Carlo experiment is conducted in order to evaluate the comparative accuracy of the proposed method for estimating the variance parameters. Second, the method is applied in a forecasting context to a large set of European trade statistics series.
Marczak, Martyna
Proietti, Tommaso
Grassi, Stefano
robust filtering, augmented Kalman filter, structural time series model, additive outlier, innovation outlier
2015
A Simple Derivation of the Efficiency Bound for Conditional Moment Restriction Models
http://d.repec.org/n?u=RePEc:koe:wpaper:1531&r=ecm
This study gives a simple derivation of the efficiency bound for conditional moment restriction models. The Fisher information is obtained by deriving a least favorable submodel in an explicit form. The proposed method also suggests an asymptotically efficient estimator, which can be viewed as an empirical likelihood estimator for conditional moment restriction models.
Naoya Sueishi
Conditional moment restrictions; Empirical likelihood; Fisher information; Least favorable submodel.
2015-09
Dissecting Models’ Forecasting Performance
http://d.repec.org/n?u=RePEc:kof:wpskof:15-397&r=ecm
In this paper we suggest an approach to comparing models’ forecasting performance in unstable environments. Our approach is based on a combination of the Cumulated Sum of Squared Forecast Error Differential (CSSFED), suggested earlier in Welch and Goyal (2008), and the Bayesian change point analysis of Barry and Hartigan (1993). The latter methodology provides a formal statistical analysis of the CSSFED time series, which turns out to be a powerful graphical tool for tracking how the relative forecasting performance of competing models evolves over time. We illustrate the suggested approach using forecasts of the GDP growth rate in Switzerland.
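The CSSFED component is easy to state: cumulate the difference of squared forecast errors between a benchmark and an alternative model (the Bayesian change-point layer of the paper is not reproduced here):

```python
import numpy as np

def cssfed(e_bench, e_alt):
    """Cumulated Sum of Squared Forecast Error Differential (Welch & Goyal, 2008).

    Positive increments mean the alternative model beat the benchmark at
    that date; an upward-drifting path signals sustained forecasting gains.
    """
    d = np.asarray(e_bench) ** 2 - np.asarray(e_alt) ** 2
    return np.cumsum(d)

# Toy example: the alternative halves the benchmark's forecast errors
e_b = np.array([1.0, -2.0, 1.0])
e_a = e_b / 2
print(cssfed(e_b, e_a))   # cumulative gains: 0.75, 3.75, 4.5
```

The change-point analysis is then applied to this cumulated series to date shifts in relative performance.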
Boriss Siliverstovs
Forecasting, Forecast Evaluation, Change Point Detection, Bayesian Estimation
2015-11
Regression analysis with compositional data containing zero values
http://d.repec.org/n?u=RePEc:pra:mprapa:67868&r=ecm
This paper is concerned with regression analysis with compositional data, for prediction purposes. We examine the cases when compositional data are either response or predictor variables. A parametric model is assumed, but the interest lies in the accuracy of the predicted values. For this reason, a data-based power transformation is employed in both cases and the results are compared with the standard log-ratio approach. There are some interesting results, and one advantage of the methods proposed here is the handling of zero values.
Tsagris, Michail
Compositional data, regression, prediction, α-transformation, principal component regression
2015-09-08
Panel Time Series. Review of the Methodological Evolution
http://d.repec.org/n?u=RePEc:bcr:wpaper:201568&r=ecm
The document focuses on the econometric treatment of macro panels, known in the literature as panel time series. This new approach rejects the assumption of slope homogeneity and handles nonstationarity. It also recognizes that the presence of cross-section dependence (CSD), i.e. some correlation structure in the error term between units due to the presence of unobservable common factors, squanders the efficiency gains from operating with a panel. This led to a new set of estimators known in the literature as Common Correlated Effects (CCE), which essentially consists of augmenting the model to be estimated with the averages over individuals, at each time t, of both the dependent variable and the specific regressors of each individual. Finally, two Stata codes developed for the evaluation and treatment of cross-section dependence are presented.
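A minimal sketch of the CCE mean-group recipe described above, for a single regressor; the function name and the factor-model simulation design are ours:

```python
import numpy as np

def cce_mean_group(Y, X):
    """Common Correlated Effects mean-group estimator (sketch, one regressor).

    Y and X are (N, T) arrays. Each unit's time-series regression is
    augmented with the cross-section averages of y and x at every t,
    which proxy the unobserved common factors; the mean-group estimate
    averages the unit-specific slopes.
    """
    N, T = Y.shape
    ybar, xbar = Y.mean(axis=0), X.mean(axis=0)   # cross-section averages
    slopes = []
    for i in range(N):
        Z = np.column_stack([np.ones(T), X[i], ybar, xbar])
        coef, *_ = np.linalg.lstsq(Z, Y[i], rcond=None)
        slopes.append(coef[1])                    # slope on own regressor
    return float(np.mean(slopes))

# Factor-model simulation: y_it = 2 x_it + g_i f_t + e_it,  x_it = f_t + u_it
rng = np.random.default_rng(5)
N, T = 50, 100
f = rng.standard_normal(T)                 # unobserved common factor
g = rng.normal(1.0, 1.0, size=N)           # heterogeneous factor loadings
X = f + rng.standard_normal((N, T))
Y = 2.0 * X + g[:, None] * f + rng.standard_normal((N, T))
print(cce_mean_group(Y, X))   # close to the true slope 2
```

Without the cross-section averages, the omitted factor would bias the unit-level slopes; augmenting with them absorbs it.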
Tamara Burdisso
Máximo Sangiácomo
panel time series, nonstationarity, panel unit root, mean group estimator, cross-section dependence, common correlated effects
2015-11
Generalized Variance: A Robust Estimator of Stock Price Volatility
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13263&r=ecm
This paper proposes an ex-post volatility estimator, called generalized variance, which uses high frequency data to provide measurements robust to the idiosyncratic noise of stock markets caused by market microstructures. The new volatility estimator is analyzed theoretically, examined in a simulation study and evaluated empirically against the two currently dominant measures of daily volatility: realized volatility and realized range. The main finding is that generalized variance is robust to the presence of microstructures while delivering accuracy superior to realized volatility and realized range in several circumstances. The empirical study features Australian stocks from the ASX 20.
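For reference, the two benchmark measures the paper evaluates against can be computed in a few lines; the simulated path and the 4·log 2 range scaling for Brownian motion are standard, while the generalized variance estimator itself is not reproduced here:

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: sum of squared intraday log returns."""
    r = np.diff(np.log(prices))
    return float(np.sum(r ** 2))

def realized_range(prices, n_blocks):
    """Realized range: sum of squared high-low log ranges over intraday
    blocks, scaled by 4*log(2) so it is unbiased for Brownian motion."""
    p = np.log(prices)
    rr = sum((b.max() - b.min()) ** 2 for b in np.array_split(p, n_blocks))
    return float(rr / (4.0 * np.log(2)))

# Simulated intraday path with unit integrated variance over the "day"
rng = np.random.default_rng(6)
n = 10_000
log_p = np.cumsum(rng.normal(0.0, 0.01, size=n))   # 10_000 * 0.01^2 = 1
prices = np.exp(log_p)
print(realized_variance(prices), realized_range(prices, 100))
```

On a noise-free diffusion both estimates are close to the integrated variance; microstructure noise is what breaks them, motivating robust alternatives such as the paper's.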
Sutton, M
Vasnev, A
Gerlach, R
Volatility ; Robust estimator
2015-04-30
An invitation to coupling and copulas: with applications to multisensory modeling
http://d.repec.org/n?u=RePEc:arx:papers:1511.05303&r=ecm
This paper presents an introduction to the stochastic concepts of coupling and copula. Coupling means the construction of a joint distribution of two or more random variables that need not be defined on one and the same probability space, whereas a copula is a function that joins a multivariate distribution to its one-dimensional margins. Their role in stochastic modeling is illustrated by examples from multisensory perception. Pointers to more advanced and recent treatments are provided.
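A Gaussian copula makes the "join a multivariate distribution to uniform margins" idea concrete: correlate two normals, then push each through the normal CDF so the margins become uniform while the dependence survives. This generic sketch is not from the paper:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, elementwise, via the error function."""
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])

def gaussian_copula_sample(rho, n, rng):
    """Draw n pairs (u, v) on [0,1]^2 from a Gaussian copula with
    correlation parameter rho."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    return norm_cdf(z1), norm_cdf(z2)

rng = np.random.default_rng(3)
u, v = gaussian_copula_sample(0.8, 50_000, rng)
print(u.mean(), np.corrcoef(u, v)[0, 1])   # margins ~ Uniform(0,1), strong dependence
```

Applying any marginal quantile functions to (u, v) then yields a bivariate distribution with exactly those margins and the copula's dependence.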
Hans Colonius
2015-11
Generalized Information Matrix Tests for Copulas
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/13798&r=ecm
We propose a family of goodness-of-fit tests for copulas. The tests use generalizations of the information matrix (IM) equality of White (1982) and so relate to the copula test proposed by Huang and Prokhorov (2014). The idea is that eigenspectrum-based statements of the IM equality reduce the degrees of freedom of the test's asymptotic distribution and lead to better size-power properties, even in high dimensions. The gains are especially pronounced for vine copulas, where additional benefits come from simplifications of score functions and the Hessian. We derive the asymptotic distribution of the generalized tests, accounting for the non-parametric estimation of the marginals and apply a parametric bootstrap procedure, valid when asymptotic critical values are inaccurate. In Monte Carlo simulations, we study the behavior of the new tests, compare them with several Cramer-von Mises type tests and confirm the desired properties of the new tests in high dimensions.
Prokhorov, Artem
Schepsmeier, Ulf
Zhu, Yajing
information matrix equality; copula; goodness-of-fit; vine copulas; R-vines
2015-09-11
The Employment Effects of the Minimum Wage: A Selection Ratio Approach to Measuring Treatment Effects
http://d.repec.org/n?u=RePEc:jmp:jm2015:psl76&r=ecm
This paper studies the employment effects of the minimum wage using a novel empirical strategy which can allow the researcher to identify treatment effects when more than one control group is available but each such control group is imperfect. Expanding on previous researchers who have compared regions which increase the minimum wage with nearby regions which do not change the minimum wage, I compare border counties in which the minimum wage increases to the set of neighboring counties, the set of neighbor-of-neighboring counties, etc. The key innovation is to model the ratio of the bias of these comparisons. The model I select uses the relative similarity of control groups to the treated group on observables as a guide to their relative similarity on unobservables. Crucially, models of this type have a testable implication when there are enough control groups. Using data from the United States, I find that recent minimum wage increases have produced modest or zero disemployment effects for teenagers.
David Pence Slichter
2015-11-14
Endogeneity in Stochastic Frontier Models
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/12755&r=ecm
Stochastic frontier models are typically estimated by maximum likelihood (MLE) or corrected ordinary least squares. The consistency of either estimator depends on exogeneity of the explanatory variables (inputs, in the production frontier setting). We will investigate the case that one or more of the inputs is endogenous, in the simultaneous equation sense of endogeneity. That is, we worry that there is correlation between the inputs and statistical noise or inefficiency. In a standard regression setting, simultaneity is handled by a number of procedures that are numerically or asymptotically equivalent. These include 2SLS; using the residual from the reduced form equations for the endogenous variables as a control function; and MLE of the system that contains the equation of interest plus the unrestricted reduced form equations for the endogenous variables (LIML). We will consider modifications of these standard procedures for the stochastic frontier setting. The paper is mostly a survey and combination of existing results from the stochastic frontier literature and the classic simultaneous equations literature, but it also contains some new results.
Amsler, Christine
Prokhorov, Artem
Schmidt, Peter
endogeneity; stochastic frontier; efficiency measurement
2015-02-17
Backtesting Value-at-Risk: A Generalized Markov Framework
http://d.repec.org/n?u=RePEc:kud:kuiedp:1518&r=ecm
Testing the validity of Value-at-Risk (VaR) forecasts, or backtesting, is an integral part of modern market risk management and regulation. This is often done by applying independence and coverage tests developed in Christoffersen (1998) to so-called hit-sequences derived from VaR forecasts and realized losses. However, as pointed out in the literature, see Christoffersen (2004), these tests suffer from low rejection frequencies, or (empirical) power, when applied to hit-sequences derived from simulations matching empirical stylized characteristics of return data. One key observation of these studies is that non-Markovian behavior in the hit-sequences may cause the observed lower power performance. To allow for non-Markovian behavior, we propose to generalize the backtesting framework for Value-at-Risk forecasts, extending the original first-order dependence of Christoffersen (1998) to allow for a higher, or k’th, order dependence. We provide closed-form expressions for the tests as well as asymptotic theory. Not only do the generalized tests have power against k’th-order dependence by definition, but the included simulations also indicate improved power performance when replicating the aforementioned studies.
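The first-order building block the paper generalizes can be sketched directly: Christoffersen's (1998) unconditional coverage likelihood-ratio test applied to a 0/1 hit-sequence (the k'th-order extension is the paper's contribution and is not reproduced; the example hit-sequences are made up):

```python
import numpy as np

def lr_uc(hits, p):
    """Christoffersen's (1998) unconditional coverage LR test.

    hits: 0/1 sequence of VaR violations; p: nominal violation rate.
    Returns the LR statistic, asymptotically chi-square(1) under H0.
    """
    hits = np.asarray(hits)
    n1 = hits.sum()                  # number of violations
    n0 = len(hits) - n1
    pi = n1 / len(hits)              # empirical violation rate
    def loglik(q):
        return n0 * np.log(1.0 - q) + n1 * np.log(q)
    return -2.0 * (loglik(p) - loglik(pi))

hits_ok = np.array([1] * 5 + [0] * 95)     # exactly 5% violations at p = 0.05
hits_bad = np.array([1] * 15 + [0] * 85)   # far too many violations
print(lr_uc(hits_ok, 0.05), lr_uc(hits_bad, 0.05))
```

The statistic is zero when the empirical rate matches the nominal one and exceeds the 3.84 chi-square(1) critical value for the badly calibrated sequence.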
Thor Pajhede
Value-at-Risk, Backtesting, Risk Management, Markov Chain, Duration-based test, quantile, likelihood ratio, maximum likelihood.
2015-11-18
Bayesian and frequentist tests of sign equality and other nonlinear inequalities
http://d.repec.org/n?u=RePEc:umc:wpaper:1516&r=ecm
Testing whether two parameters have the same sign is a nonstandard problem due to the non-convex shape of the parameter subspace satisfying the composite null hypothesis, which is a nonlinear inequality constraint. We describe a simple example where the ordering of likelihood ratio (LR), Wald, and Bayesian sign equality tests reverses the “usual” ordering: the Wald rejection region is a subset of LR’s, as is the Bayesian rejection region (either asymptotically or with an uninformative prior). Under general conditions, we show that non-convexity of the null hypothesis subspace is a necessary but not sufficient condition for this asymptotic frequentist/Bayesian ordering. Since linear inequalities only generate convex regions, a corollary is that frequentist tests are more conservative than Bayesian tests in that setting. We also examine a nearly similar-on-the-boundary, unbiased test of sign equality. Rather than claim moral superiority of one statistical framework or test, we wish to clarify the regrettably ineluctable tradeoffs.
David M. Kaplan
convexity, likelihood ratio, limit experiment, nonlinear inequality constraint, nonstandard inference, unbiased test, Wald
2015-07-14
Uniform Convergence Rates of Kernel-Based Nonparametric Estimators for Continuous Time Diffusion Processes: A Damping Function Approach
http://d.repec.org/n?u=RePEc:aah:create:2015-50&r=ecm
In this paper, we derive uniform convergence rates of nonparametric estimators for continuous time diffusion processes. In particular, we consider kernel-based estimators of the Nadaraya-Watson type, introducing a new technical device called a damping function. This device allows us to derive sharp uniform rates over an infinite interval with minimal requirements on the processes: the existence of moments of any order is not required and the boundedness of relevant functions can be significantly relaxed. Restrictions on kernel functions are also minimal: we allow for kernels with discontinuity, unbounded support and slowly decaying tails. Our proofs proceed by using the covering-number technique from empirical process theory and exploiting the mixing and martingale properties of the processes. We also present new results on the path-continuity property of Brownian motions and diffusion processes over an infinite time horizon. These path-continuity results, which should also have an independent interest, are used to control discretization biases of the nonparametric estimators. The obtained convergence results are useful for non/semiparametric estimation and testing problems of diffusion processes.
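The Nadaraya-Watson construction the abstract builds on can be sketched for drift estimation from a discretized path; the damping function itself is the paper's contribution and is omitted here, and the Ornstein-Uhlenbeck example, kernel and bandwidth are illustrative:

```python
import numpy as np

def nw_drift(x, dt, grid, h):
    """Nadaraya-Watson drift estimator from a discretely sampled path:
    mu(g) ~ E[X_{t+dt} - X_t | X_t = g] / dt, with a Gaussian kernel."""
    dx = np.diff(x)
    xs = x[:-1]
    out = []
    for g in grid:
        w = np.exp(-0.5 * ((xs - g) / h) ** 2)       # kernel weights
        out.append(np.sum(w * dx) / (dt * np.sum(w)))
    return np.array(out)

# Euler path of an Ornstein-Uhlenbeck process dX = -X dt + dW (true drift -x)
rng = np.random.default_rng(4)
dt, n = 0.01, 200_000
e = np.sqrt(dt) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = x[t - 1] * (1.0 - dt) + e[t]

print(nw_drift(x, dt, grid=[1.0], h=0.2))   # roughly -1 at x = 1
```

The discretization bias this Euler approximation introduces is exactly the kind of error the paper's path-continuity results are used to control.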
Shin Kanaya
Diffusion process, uniform convergence, kernel estimation, nonparametric.
2015-11-12
"Does Higher Test Statistics Imply Better Performance?"
http://d.repec.org/n?u=RePEc:ipk:wpaper:1509&r=ecm
In a Monte Carlo experiment with simulated data, I show that, as a point forecast criterion, Clark and West's (2006) unconditional test of mean squared prediction errors (MSPE) fails to reflect the relative performance of a superior model over a relatively weaker one. The simulation results show that, even though the MSPE of the superior model is far below that of the weaker alternative, the Clark and West (2006) test does not reflect this in its test statistic. Therefore, studies that use this statistic to test the predictive accuracy of alternative exchange rate models, equity risk premium predictions, stock return predictability, inflation forecasting and unemployment forecasting should not put too much weight on the magnitude of statistically significant Clark and West (2006) test statistics.
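For concreteness, the MSPE-adjusted statistic of Clark and West can be computed as below; this is a textbook rendering with a made-up nested-model example, not the paper's simulation design:

```python
import numpy as np

def clark_west(y, f1, f2):
    """Clark-West MSPE-adjusted statistic for nested model comparison.

    f1: forecasts from the parsimonious (null) model, f2: forecasts from
    the larger model. Returns a t-type statistic; compare with one-sided
    standard normal critical values.
    """
    e1, e2 = y - f1, y - f2
    adj = e1 ** 2 - (e2 ** 2 - (f1 - f2) ** 2)   # MSPE difference, adjusted
    return np.sqrt(len(adj)) * adj.mean() / adj.std(ddof=1)

rng = np.random.default_rng(7)
x = rng.standard_normal(500)
y = 0.5 * x + rng.standard_normal(500)
f1 = np.zeros(500)      # null model: forecast zero
f2 = 0.5 * x            # larger model exploits the predictor
print(clark_west(y, f1, f2))   # clearly above the one-sided 5% value 1.645
```

The paper's point is precisely that the magnitude of this statistic, beyond its significance, is not a reliable ranking device.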
Levent Bulut
Model comparison, predictive accuracy, point-forecast criterion, the Clark and West test.
2015-11
Bounding average treatment effects: A linear programming approach
http://d.repec.org/n?u=RePEc:unm:umagsb:2015027&r=ecm
We show how to obtain bounds on the mean treatment effects by solving a simple linear programming problem. The use of a linear programme is convenient from a practical point of view because it avoids the need to derive closed form solutions. Imposing or omitting monotonicity or concavity restrictions is done by simply adding or removing sets of linear restrictions to the linear programme.
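A minimal instance of the idea, assuming SciPy's `linprog` is available: treat the joint distribution of the binary potential outcomes and treatment as the LP variable, the observed moments as linear equality constraints, and the ATE as a linear objective. The observed moments below are hypothetical, and the resulting bounds are the familiar Manski no-assumption bounds:

```python
import numpy as np
from scipy.optimize import linprog

# Enumerate the joint distribution of (Y(0), Y(1), D) over {0,1}^3.
states = [(y0, y1, d) for y0 in (0, 1) for y1 in (0, 1) for d in (0, 1)]

# Hypothetical data: P(D=1)=0.5, P(Y=1|D=1)=0.7, P(Y=1|D=0)=0.4
A_eq = [
    [1.0] * 8,                                     # probabilities sum to one
    [float(d) for _, _, d in states],              # P(D=1) = 0.5
    [float(y1 * d) for _, y1, d in states],        # P(Y(1)=1, D=1) = 0.35
    [float(y0 * (1 - d)) for y0, _, d in states],  # P(Y(0)=1, D=0) = 0.2
]
b_eq = [1.0, 0.5, 0.35, 0.2]

# ATE = P(Y(1)=1) - P(Y(0)=1) is linear in the probabilities.
c = np.array([y1 - y0 for y0, y1, _ in states], dtype=float)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8).fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8).fun
print(lo, hi)   # bounds on the ATE
```

Monotonicity or concavity restrictions would enter here as extra rows of linear inequalities, which is the convenience the abstract emphasizes.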
Demuynck T.
Semiparametric and Nonparametric Methods: General; Optimization Techniques; Programming Models; Dynamic Analysis; Human Capital; Skills; Occupational Choice; Labor Productivity;
2015
Higher-order statistics for DSGE models
http://d.repec.org/n?u=RePEc:cqe:wpaper:4315&r=ecm
This note derives closed-form expressions for unconditional moments, cumulants and polyspectra of order higher than two for linear and nonlinear (pruned) DSGE models. The procedures are demonstrated by means of the Smets and Wouters (2007) model (first-order approximation), the An and Schorfheide (2007) model (second-order approximation) and the canonical neoclassical growth model (third-order approximation). Both the Gaussian and the Student's t-distribution are considered as the underlying stochastic process. Useful matrix tools and computational aspects are also discussed.
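The cumulant-moment relations underlying such higher-order statistics are short enough to state directly; this is a generic sketch, not the note's closed-form DSGE expressions:

```python
import numpy as np

def first_four_cumulants(x):
    """First four cumulants from sample central moments:
    k1 = mean, k2 = mu2, k3 = mu3, k4 = mu4 - 3*mu2**2."""
    x = np.asarray(x, dtype=float)
    k1 = x.mean()
    c = x - k1
    mu2, mu3, mu4 = (c ** 2).mean(), (c ** 3).mean(), (c ** 4).mean()
    return (float(k1), float(mu2), float(mu3), float(mu4 - 3.0 * mu2 ** 2))

# Bernoulli(1/2) sample: k4 = p(1-p)(1 - 6p(1-p)) = -0.125
print(first_four_cumulants(np.array([0.0, 0.0, 1.0, 1.0])))
# -> (0.5, 0.25, 0.0, -0.125)
```

For a Gaussian process k3 and k4 vanish, which is why nonzero higher cumulants and polyspectra carry the extra identifying information the note exploits.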
Willi Mutschler
higher-order moments, cumulants, polyspectra, nonlinear DSGE, pruning
2015-11