
New Economics Papers on Econometrics 
By:  Mantobaye Moundigbaye; William Rea (University of Canterbury); W. Robert Reed (University of Canterbury) 
Abstract:  This study uses Monte Carlo experiments to produce new evidence on the performance of a wide range of panel data estimators. It focuses on estimators that are readily available in statistical software packages such as Stata and EViews, and for which the number of cross-sectional units (N) and time periods (T) are small to moderate in size. The goal is to develop practical guidelines that will enable researchers to select the best estimator for a given type of data. It extends a previous study on the subject (Reed and Ye, 2011), and modifies their recommendations. The new recommendations provide a (virtually) complete decision tree: When it comes to choosing an estimator for efficiency, it uses the size of the panel dataset (N and T) to guide the researcher to the best estimator. When it comes to choosing an estimator for hypothesis testing, it identifies one estimator as superior across all the data scenarios included in the study. An unusual finding is that researchers should use different estimators for estimating coefficients and testing hypotheses. We present evidence that bootstrapping allows one to use the same estimator for both. 
Keywords:  Panel Data Estimators, Monte Carlo simulation, PCSE, Parks model, Bootstrapping 
JEL:  C23 C33 
Date:  2016–09–03 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:16/18&r=ecm 
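As an illustration of the bootstrap idea in a panel setting, here is a minimal numpy sketch of a cross-section (pairs-cluster) bootstrap of a pooled OLS slope; the function names and the simple design are hypothetical, and this is not the authors' exact experimental setup:

```python
import numpy as np

def pooled_ols_slope(y, x):
    """Pooled OLS slope for a panel stacked as (N*T,) arrays."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

def cross_section_bootstrap(y, x, N, T, B=200, seed=0):
    """Resample whole cross-sectional units (blocks of T rows) with
    replacement and re-estimate the slope B times."""
    rng = np.random.default_rng(seed)
    y = y.reshape(N, T)
    x = x.reshape(N, T)
    slopes = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, N, size=N)  # draw units, keep their whole time series
        slopes[b] = pooled_ols_slope(y[idx].ravel(), x[idx].ravel())
    return slopes
```

Resampling whole units preserves the within-unit dependence structure, which is why the resulting bootstrap distribution can be used for hypothesis tests even when the chosen point estimator is not the best one for inference.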
By:  Jiti Gao; Guangming Pan; Yanrong Yang 
Abstract:  This paper considers modelling and detecting structural breaks associated with cross-sectional dependence for large-dimensional panel data models, which are popular in many fields including economics and finance. We propose a dynamic factor structure to measure the degree of cross-sectional dependence. The extent of such cross-sectional dependence is parameterized as an unknown exponent, defined by assuming that a small proportion of the total factor loadings are important. Compared with the usual parameterization, this exponent-based description covers the case in which only a small proportion of the cross-sectional units are cross-sectionally dependent. We establish a 'moment' criterion to estimate this unknown exponent based on the covariances of cross-sectional averages at different time lags. By exploiting the fact that the serial dependence of common factors is stronger than that of idiosyncratic components, the proposed criterion is able to capture weak cross-sectional dependence, which is reflected in relatively small values of the unknown exponent. Because a second parameter is also unknown, both joint and marginal estimators are constructed. The paper establishes that the joint estimators of the pair of unknown parameters converge in distribution to a bivariate normal. When the second parameter is assumed known, an asymptotic distribution for the estimator of the exponent is also established, and it coincides with the corresponding marginal of the joint asymptotic distribution. Simulation results show the finite-sample effectiveness of the proposed method. Empirical applications to cross-country macroeconomic variables and stock returns in the S&P 500 market are also reported to show the practical relevance of the proposed estimation theory. 
Keywords:  cross-sectional averages, dynamic factor model, joint estimation, marginal estimation, strong factor loading 
JEL:  C21 C32 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201612&r=ecm 
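The criterion described above is built from covariances of cross-sectional averages at different time lags. A minimal numpy sketch of that building block (not the authors' full estimator) is:

```python
import numpy as np

def cs_average_autocov(X, max_lag):
    """Autocovariances of the cross-sectional average of a T x N panel.
    Under weak cross-sectional dependence these shrink as N grows; under
    a strong common factor they stay bounded away from zero."""
    xbar = X.mean(axis=1)          # cross-sectional average, length T
    xbar = xbar - xbar.mean()
    T = len(xbar)
    return np.array([xbar[k:] @ xbar[:T - k] / T for k in range(max_lag + 1)])
```

Because common factors are more serially dependent than idiosyncratic noise, nonzero lagged autocovariances of the cross-sectional average signal cross-sectional dependence.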
By:  Yicheng Kang; Xiaodong Gong; Jiti Gao; Peihua Qiu 
Abstract:  Error-in-variables regression is widely used in econometric models. The statistical analysis becomes challenging when the regression function is discontinuous and the distribution of the measurement error is unknown. In this paper, we propose a novel jump-preserving curve estimation method. A major feature of our method is that it removes the noise effectively while preserving the jumps well, without requiring much prior knowledge about the measurement error distribution. The jump-preserving property is achieved mainly by local clustering. We show that the proposed curve estimator is statistically consistent, and that it performs favourably in comparison with an existing jump-preserving estimator. Finally, we demonstrate our method with an application to a health tax policy study in Australia. 
Keywords:  clustering, demand for private health insurance, kernel smoothing, local regression, measurement errors, price elasticity 
JEL:  C13 C14 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201613&r=ecm 
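A simple way to convey the jump-preserving idea is one-sided averaging that picks the side of the window with smaller local variability; this is an illustrative stand-in, not the authors' local-clustering estimator:

```python
import numpy as np

def jump_preserving_smooth(y, h):
    """At each point, average over the left or the right window only,
    choosing the side with the smaller local variance; a plain two-sided
    average would blur any jump inside the window."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        left = y[max(0, i - h):i + 1]
        right = y[i:min(n, i + h + 1)]
        out[i] = left.mean() if left.var() <= right.var() else right.mean()
    return out
```

Near a jump, the window straddling the discontinuity has a large local variance, so the estimator automatically averages only over the side that lies within one regime.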
By:  Chuhui Li; Donald S. Poskitt; Xueyan Zhao 
Abstract:  This paper examines the finite-sample performance of likelihood-based estimators derived from different functional forms. We evaluate the impact of functional form misspecification on the performance of the maximum likelihood estimator derived from the bivariate probit model. We also investigate the practical importance of available instruments under both correct and incorrect distributional specifications. We analyze the finite-sample properties of the endogenous dummy variable, covariate, and correlation coefficient estimates, and we examine the existence of possible "compensating effects" between the latter and estimates of parametric functions such as the predicted probabilities and the average treatment effect. Finally, we provide a bridge between the literature on the bivariate probit model and that on partial identification by demonstrating how the properties of likelihood-based estimators are explicable via a link between the notion of pseudo-true parameter values and the concepts of partial identification. 
Keywords:  partial identification, binary outcome models, misspecification, average treatment effect 
JEL:  C31 C35 C36 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201616&r=ecm 
By:  Maria Grith; Wolfgang K. Härdle; Alois Kneip; Heiko Wagner 
Abstract:  We present two methods based on functional principal component analysis (FPCA) for the estimation of smooth derivatives of a sample of random functions, which are observed on a domain of more than one dimension. We apply an eigenvalue decomposition to (a) the dual covariance matrix of the derivatives and (b) the dual covariance matrix of the observed curves. To handle noisy data from discrete observations, we rely on local polynomial regressions. If the curves are contained in a finite-dimensional function space, the second method performs better asymptotically. We apply our methodology in a simulation and an empirical study in which we estimate state price density (SPD) surfaces from call option prices. We identify three main components, which can be interpreted as volatility, skewness and tail factors. We also find evidence for term structure variation. 
JEL:  C13 C14 G13 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016033&r=ecm 
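The dual-covariance-matrix route to FPCA can be sketched as follows; the helper `dual_fpca` is hypothetical and omits the local polynomial pre-smoothing step the paper uses for noisy data:

```python
import numpy as np

def dual_fpca(Y, k):
    """FPCA via the n x n dual covariance matrix of n curves sampled on a
    common grid of p points (cheap when n << p).  Returns the first k
    eigenvalues and the corresponding estimated eigenfunctions (p-vectors)."""
    Yc = Y - Y.mean(axis=0)
    n, p = Yc.shape
    D = Yc @ Yc.T / (n * p)                 # dual covariance matrix, n x n
    vals, vecs = np.linalg.eigh(D)
    order = np.argsort(vals)[::-1][:k]
    vals, vecs = vals[order], vecs[:, order]
    # map the dual eigenvectors back to eigenfunctions on the grid
    funcs = (Yc.T @ vecs) / np.sqrt(np.maximum(vals, 1e-12) * n * p)
    return vals, funcs
```

The trick is that the n x n dual matrix shares its nonzero eigenvalues (up to scale) with the p x p covariance operator, so the eigenfunctions are recovered from the dual eigenvectors at a fraction of the cost.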
By:  Rothfelder, Mario (Tilburg University, Center For Economic Research); Boldea, Otilia (Tilburg University, Center For Economic Research) 
Abstract:  Using 2SLS estimation, we propose two tests for a threshold in models with endogenous regressors: a sup LR test and a sup Wald test. Here, the 2SLS estimation is not conventional because it uses additional information about whether the first stage is linear. Because of this additional information, our tests can be more accurate than the threshold test in Caner and Hansen (2004), which is based on conventional GMM estimation. We derive the asymptotic distributions of the two tests for both a linear and a threshold reduced form. In both cases the distributions are non-pivotal, and we propose obtaining critical values via a fixed-regressor wild bootstrap. Our simulations show that in small samples the GMM test of Caner and Hansen (2004) can be severely oversized under heteroskedasticity, while the 2SLS tests we propose are much closer to their nominal size. We use our tests to investigate the common claim that the government spending multiplier is larger close to the zero lower bound, and that governments should therefore have spent more in the recent crisis. We find no empirical support for this claim. 
Keywords:  2SLS; GMM; threshold tests; wild bootstrap 
JEL:  C12 C13 C21 C22 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:40ca581ae22849ae911fea707ff4209c&r=ecm 
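A stripped-down version of a sup Wald scan with a linear first stage might look like this; it uses a homoskedastic Wald statistic and omits the wild-bootstrap critical values, so it sketches the structure of the test only:

```python
import numpy as np

def sup_wald_threshold_2sls(y, x, z, q, grid):
    """Sup-Wald scan for a slope threshold in y = beta(q <= gamma)*x + e with x
    endogenous: first project x on the instrument z (linear first stage), then,
    for each candidate gamma, test equality of the two regime slopes of y on
    the fitted x.  Returns the largest Wald statistic over the grid."""
    Z = np.column_stack([np.ones_like(z), z])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fit
    best = 0.0
    for g in grid:
        lo = q <= g
        if lo.sum() < 5 or (~lo).sum() < 5:
            continue
        X = np.column_stack([np.ones_like(xhat), xhat * lo, xhat * ~lo])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ coef
        s2 = e @ e / (len(y) - 3)
        R = np.array([0.0, 1.0, -1.0])                # H0: equal regime slopes
        diff = R @ coef
        var = s2 * R @ np.linalg.inv(X.T @ X) @ R
        best = max(best, diff * diff / var)
    return best
```

Since the limiting distribution is non-pivotal, the statistic above would in practice be compared with bootstrap rather than tabulated critical values.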
By:  Tom Boot (Erasmus University Rotterdam, the Netherlands); Didier Nibbering (Erasmus University Rotterdam, the Netherlands) 
Abstract:  Random subspace methods are a novel approach to obtain accurate forecasts in high-dimensional regression settings. We provide a theoretical justification of the use of random subspace methods and show their usefulness when forecasting monthly macroeconomic variables. We focus on two approaches. The first is random subset regression, where random subsets of the predictors are used to construct a forecast. The second is random projection regression, where artificial predictors are formed by randomly weighting the original predictors. Using recent results from random matrix theory, we obtain a tight bound on the mean squared forecast error for both randomized methods. We identify settings in which one randomized method yields more precise forecasts than the other, and than alternative regularization strategies such as principal component regression, partial least squares, the lasso, and ridge regression. The predictive accuracy on the high-dimensional macroeconomic FRED-MD data set increases substantially when using the randomized methods, with random subset regression outperforming every one of the above-mentioned competing methods for at least 66% of the series. 
Keywords:  dimension reduction; random projections; random subset regression; principal components analysis; forecasting 
JEL:  C32 C38 C53 C55 
Date:  2016–09–06 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20160073&r=ecm 
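The two randomized schemes can be sketched in a few lines of numpy; the averaging over B draws and the Gaussian projection weights follow the generic random subspace recipe rather than the paper's exact tuning:

```python
import numpy as np

def random_subset_forecast(X, y, x_new, k, B=100, seed=0):
    """Average the forecasts of B least-squares fits, each using a random
    subset of k predictors (random subset regression)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(B):
        S = rng.choice(X.shape[1], size=k, replace=False)
        beta = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
        preds.append(x_new[S] @ beta)
    return float(np.mean(preds))

def random_projection_forecast(X, y, x_new, k, B=100, seed=0):
    """Same idea, but each fit uses k artificial predictors formed by a
    random Gaussian weighting of all original predictors."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(B):
        R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
        beta = np.linalg.lstsq(X @ R, y, rcond=None)[0]
        preds.append((x_new @ R) @ beta)
    return float(np.mean(preds))
```

With k equal to the full number of predictors both schemes collapse to ordinary least squares; the gains come from choosing k well below the predictor count.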
By:  Andre Lucas (VU University Amsterdam, the Netherlands); Anne Opschoor (VU University Amsterdam, the Netherlands) 
Abstract:  We introduce a new fractionally integrated model for covariance matrix dynamics based on the long-memory behavior of daily realized covariance matrix kernels and daily return observations. We account for fat tails in both types of data through appropriate distributional assumptions. The covariance matrix dynamics are formulated as a numerically efficient matrix recursion that ensures positive definiteness under simple parameter constraints. Using intraday stock data over the period 2001–2012, we construct realized covariance kernels and show that the new fractionally integrated model statistically and economically outperforms recent alternatives such as the multivariate HEAVY model and the 2006 "long-memory" version of the RiskMetrics model. 
Keywords:  multivariate volatility; fractional integration; realized covariance matrices; heavy tails; matrix-F distribution; score dynamics 
JEL:  C32 C58 
Date:  2016–09–02 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20160069&r=ecm 
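Long-memory dynamics of the kind used here rest on the fractional difference operator (1-L)^d; its coefficients follow a simple recursion (a generic long-memory building block, not the paper's covariance recursion):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients pi_j of the fractional difference operator
    (1-L)^d = sum_j pi_j L^j, computed by the standard recursion
    pi_j = pi_{j-1} * (j - 1 - d) / j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w
```

For 0 < d < 1 the weights decay hyperbolically rather than geometrically, which is exactly the slow decay that fractionally integrated volatility models exploit.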
By:  Michel Philipp; Carolin Strobl; Jimmy de la Torre; Achim Zeileis 
Abstract:  Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or non-mastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model assumptions. However, they require a precise estimate of the standard errors (or of the entire covariance matrix) of the model parameter estimates. In this article, it is shown analytically that the currently widely used form of calculation leads to underestimated standard errors because it includes only the item parameters and omits the parameters of the ability distribution. In a simulation study, we demonstrate that including those parameters in the computation of the covariance matrix consistently improves the quality of the standard errors. The practical importance of this finding is discussed and illustrated using a real data example. 
Keywords:  cognitive diagnosis model, GDINA, standard errors, information matrix 
JEL:  C30 C52 C87 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201625&r=ecm 
By:  James G. MacKinnon (Queen's University); Matthew D. Webb (Carleton University) 
Abstract:  Inference based on cluster-robust standard errors is known to fail when the number of clusters is small, and the wild cluster bootstrap fails dramatically when the number of treated clusters is very small. We propose a family of new procedures called the subcluster wild bootstrap. In the case of pure treatment models, where all the observations in each cluster are either treated or not, the new procedures can work astonishingly well. The key requirement is that the sizes of the treated and untreated clusters should be very similar. Unfortunately, the analog of this requirement is not likely to hold for difference-in-differences regressions. Our theoretical results are supported by extensive simulations and an empirical example. 
Keywords:  CRVE, grouped data, clustered data, wild bootstrap, wild cluster bootstrap, subclustering, treatment model, difference-in-differences, robust inference 
JEL:  C15 C21 C23 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1364&r=ecm 
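The ordinary wild cluster bootstrap that the subcluster procedures refine can be sketched as follows (null-imposed Rademacher weights, one draw per cluster; subclustering would instead draw at a finer level than shown):

```python
import numpy as np

def wild_cluster_bootstrap_p(y, x, cluster, B=999, seed=0):
    """Wild cluster bootstrap p-value for H0: slope = 0, imposing the null.
    One Rademacher draw per cluster flips the sign of that cluster's
    restricted residuals."""
    rng = np.random.default_rng(seed)
    xc = x - x.mean()
    slope = xc @ y / (xc @ xc)
    e0 = y - y.mean()                      # residuals under H0 (intercept only)
    ids = np.unique(cluster)
    stats = np.empty(B)
    for b in range(B):
        flips = rng.choice([-1.0, 1.0], size=len(ids))
        ystar = y.mean() + e0 * flips[np.searchsorted(ids, cluster)]
        stats[b] = xc @ ystar / (xc @ xc)
    return float(np.mean(np.abs(stats) >= abs(slope)))
```

Flipping signs cluster by cluster preserves the within-cluster correlation of the residuals, which is what makes the bootstrap distribution mimic clustered sampling variation.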
By:  Yanchun Jin (Graduate School of Economics, Kyoto University) 
Abstract:  This paper proposes nonparametric tests for the null hypothesis that a treatment has zero effect on the conditional variance for all subpopulations defined by covariates. Beyond the mean of the outcome, which measures the extent to which treatment changes its level, researchers are also interested in how the treatment affects its dispersion. We use the variance to measure dispersion and estimate the conditional variances by the series method. We give a test rule that compares a Wald-type test statistic with the critical value from the chi-squared distribution. We also construct a normalized test statistic that is asymptotically standard normal under the null hypothesis. We illustrate the usefulness of the proposed test with Monte Carlo simulations and an empirical example that investigates the effect of unionism on wage dispersion. 
Keywords:  treatment effect; conditional variance; series estimation 
JEL:  C12 C14 C21 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:948&r=ecm 
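Series estimation of a conditional variance, the ingredient underlying the test, can be sketched with polynomial series; the degree and the simple design are illustrative:

```python
import numpy as np

def series_cond_variance(x, y, degree=3):
    """Series estimate of Var(y|x): fit the conditional mean by a polynomial
    regression, then fit the squared residuals by another polynomial."""
    P = np.vander(x, degree + 1)
    m = P @ np.linalg.lstsq(P, y, rcond=None)[0]            # conditional mean fit
    v = P @ np.linalg.lstsq(P, (y - m) ** 2, rcond=None)[0]  # variance fit
    return np.maximum(v, 0.0)
```

The test would then compare such fits across treated and control groups at common covariate values.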
By:  Bo Zhang; Guangming Pan; Jiti Gao 
Abstract:  This paper first considers some testing issues for a vector of high-dimensional time series. It then establishes a joint distribution for the largest eigenvalues of the corresponding covariance matrix for the case where both the dimensionality and the length of the time series go to infinity. As an application, a new unit root test for a vector of high-dimensional time series is proposed and studied both theoretically and numerically, showing that existing unit root tests for the fixed-dimensional case are not applicable. 
Keywords:  asymptotic normality, largest eigenvalue, linear process, unit root test 
JEL:  C21 C32 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201611&r=ecm 
By:  James G. MacKinnon (Queen's University) 
Abstract:  Inference using large datasets is not nearly as straightforward as conventional econometric theory suggests when the disturbances are clustered, even with very small intra-cluster correlations. The information contained in such a dataset grows much more slowly with the sample size than it would if the observations were independent. Moreover, inferences become increasingly unreliable as the dataset gets larger. These assertions are based on an extensive series of estimations undertaken using a large dataset taken from the U.S. Current Population Survey. 
Keywords:  cluster-robust inference, earnings equation, wild cluster bootstrap, CPS data, sample size, placebo laws 
JEL:  C12 C15 C18 C21 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1365&r=ecm 
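For reference, the standard CRV1 cluster-robust ("sandwich") covariance that such inference relies on can be computed as follows; the helper name is hypothetical:

```python
import numpy as np

def cluster_robust_se(X, y, cluster):
    """OLS with the CRV1 cluster-robust ('sandwich') covariance: score
    contributions are summed within clusters before forming the meat."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    groups = np.unique(cluster)
    G = len(groups)
    n, k = X.shape
    meat = np.zeros((k, k))
    for g in groups:
        sg = X[cluster == g].T @ e[cluster == g]   # within-cluster score sum
        meat += np.outer(sg, sg)
    adj = G / (G - 1) * (n - 1) / (n - k)          # CRV1 small-sample factor
    V = adj * XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))
```

With positive intra-cluster correlation and regressors that vary little within clusters, these standard errors can be many times larger than the naive OLS ones, which is the effect the paper documents at scale.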
By:  YuChin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan) 
Abstract:  The multiplier bootstrap (MB) has been used to approximate the limiting processes of empirical processes in various papers. In this paper, we consider the multiplier bootstrap in three cases. First, we consider MB for the standard empirical process. Second, we extend MB to account for the estimation effects of pre-estimated parameters or unknown nonparametric functions. Last, we consider MB for Nadaraya–Watson nonparametric kernel estimators. 
Keywords:  Empirical Process, Multiplier Bootstrap, Nonparametric Estimator 
JEL:  C01 C15 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:sin:wpaper:16a010&r=ecm 
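For the first case, the standard empirical process, a multiplier bootstrap can be sketched as follows (Gaussian multipliers; the paper's other two cases add estimation-effect corrections not shown here):

```python
import numpy as np

def multiplier_bootstrap_ks(x, B=500, seed=0):
    """Multiplier bootstrap of the empirical process: perturb each indicator
    contribution by an i.i.d. standard normal multiplier and take the sup.
    Returns bootstrap draws of sup_t |G_n*(t)| over the sample points."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ind = (x[:, None] <= np.sort(x)[None, :]).astype(float)  # 1{X_i <= t_j}
    Fn = ind.mean(axis=0)
    sups = np.empty(B)
    for b in range(B):
        xi = rng.standard_normal(n)
        G = (xi @ (ind - Fn)) / np.sqrt(n)
        sups[b] = np.abs(G).max()
    return sups
```

Conditionally on the data, the draws approximate the distribution of the Kolmogorov–Smirnov supremum, so their quantiles can serve as critical values.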
By:  Christoph Breunig 
Abstract:  There are many environments in econometrics which require nonseparable modeling of a structural disturbance. In a nonseparable model, key conditions are validity of instrumental variables and monotonicity of the model in a scalar unobservable. Under these conditions the nonseparable model is equivalent to an instrumental quantile regression model. A failure of the key conditions, however, makes instrumental quantile regression potentially inconsistent. This paper develops a methodology for testing the hypothesis that the instrumental quantile regression model is correctly specified. Our test statistic is asymptotically normally distributed under correct specification and consistent against any alternative model. In addition, test statistics to justify model simplification are established. Finite-sample properties are examined in a Monte Carlo study and an empirical illustration. 
JEL:  C12 C14 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016032&r=ecm 
By:  Asmerilda Hitaj; Friedrich Hubalek; Lorenzo Mercuri; Edit Rroji 
Abstract:  A multivariate version of the Mixed Tempered Stable distribution is proposed. It is a generalization of the normal variance-mean mixtures. Characteristics of this new distribution and its capacity to fit tails and capture the dependence structure between components are investigated. We discuss a random number generation procedure and introduce an estimation methodology based on the minimization of a distance between the empirical and theoretical characteristic functions. The asymptotic tail behavior of the univariate Mixed Tempered Stable distribution is exploited in the estimation procedure in order to obtain a better model fit. Advantages of the multivariate Mixed Tempered Stable distribution are discussed and illustrated via a simulation study. 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1609.00926&r=ecm 
By:  Shin Kanaya (Aarhus University) 
Abstract:  The convergence rates of the sums of α-mixing (or strongly mixing) triangular arrays of heterogeneous random variables are derived. We pay particular attention to the case where central limit theorems may fail to hold, due to relatively strong time-series dependence and/or the non-existence of higher-order moments. Several previous studies have presented various versions of laws of large numbers for sequences/triangular arrays, but their convergence rates were not fully investigated. This study is the first to investigate the convergence rates of the sums of α-mixing triangular arrays whose mixing coefficients are permitted to decay arbitrarily slowly. We consider two kinds of asymptotic assumptions: one is that the time distance between adjacent observations is fixed for any sample size n; the other, called the infill assumption, is that it shrinks to zero as n tends to infinity. Our convergence theorems indicate an explicit trade-off between the rate of convergence and the degree of dependence. While the results under the infill assumption can be seen as a direct extension of those under the fixed-distance assumption, they are new and particularly useful for deriving sharper convergence rates for discretization biases in estimating continuous-time processes from discretely sampled observations. We also discuss some examples to which our results and techniques are applicable: a moving-average process with long-lasting past shocks, a continuous-time diffusion process with weak mean reversion, and a near-unit-root process. 
Keywords:  Law of large numbers; rate of convergence; α-mixing triangular array; infill asymptotics; kernel estimation. 
JEL:  C14 C22 C58 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:947&r=ecm 
By:  Roberto Benedetti; Giuseppe Espa; Emanuele Taufer 
Abstract:   
Keywords:  spatial survey, twodimensional systematic sampling, twodimensional maximal stratification, semivariogram, Gaussian random field 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:trn:utwprg:2016/03&r=ecm 
By:  Manabu Asai (Soka University, Japan); Michael McAleer (National Tsing Hua University, Taiwan; Erasmus University Rotterdam, the Netherlands; Complutense University of Madrid, Spain; Yokohama National University, Japan) 
Abstract:  The paper considers various extended asymmetric multivariate conditional volatility models, and derives appropriate regularity conditions and associated asymptotic theory. This enables checking of internal consistency and allows valid statistical inferences to be drawn based on empirical estimation. For this purpose, we use an underlying vector random coefficient autoregressive process, for which we show the equivalent representation for the asymmetric multivariate conditional volatility model, to derive the asymptotic theory for the quasi-maximum likelihood estimator. As an extension, we develop a new multivariate asymmetric long memory volatility model, and discuss the associated asymptotic properties. 
Keywords:  Multivariate conditional volatility; Vector random coefficient autoregressive process; Asymmetry; Long memory; Dynamic conditional correlations; Regularity conditions; Asymptotic properties 
JEL:  C13 C32 C58 
Date:  2016–09–05 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20160071&r=ecm 
By:  Stelios D. Bekiros; Roberta Cardani; Alessia Paccagnini; Stefania Villa 
Abstract:  In the dynamic stochastic general equilibrium (DSGE) literature there has been increasing awareness of the role that the banking sector can play in macroeconomic activity. We present a DSGE model with financial intermediation as in Gertler and Karadi (2011). The estimation of the shocks and of the structural parameters shows that time variation is crucial in any empirical analysis. Since DSGE modelling usually fails to take into account inherent nonlinearities of the economy, we propose a novel time-varying parameter (TVP) state-space estimation method for VAR processes, for both homoskedastic and heteroskedastic error structures. We conduct an exhaustive empirical exercise to compare the out-of-sample predictive performance of the estimated DSGE model with that of standard ARs, VARs, Bayesian VARs and TVP-VARs. We find that the TVP-VAR provides the best forecasting performance for the series of GDP and the net worth of financial intermediaries at all steps ahead, while the DSGE model outperforms the other specifications in forecasting inflation and the federal funds rate at shorter horizons. 
Keywords:  Financial frictions; DSGE; Time-varying coefficients; Extended Kalman filter; Banking sector 
JEL:  C11 C13 C32 E37 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:ucn:wpaper:201611&r=ecm 
By:  Govert Bijwaard (Netherlands Interdisciplinary Demographic Institute); Christian Schluter (Centre de la Vieille Charite) 
Abstract:  Consider the duration of stay of migrants in a host country. We propose a statistical model of locally interdependent hazards in order to examine whether interactions at the level of the neighbourhood are present and lead to social multipliers. To this end, we propose and study a new two-stage estimation strategy based on an inverted linear rank test statistic. Using a unique large administrative panel dataset for the population of recent labour immigrants to the Netherlands, we quantify the local social multipliers in several factual and counterfactual experiments, and demonstrate that these can be substantial. 
Keywords:  interdependent hazards, local interaction, social multipliers, return migration 
JEL:  C41 C10 C31 J61 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:crm:wpaper:1620&r=ecm 
By:  Nassim Nicholas Taleb 
Abstract:  We examine random variables in the power law/slowly varying class with a stochastic tail exponent, the exponent $\alpha$ having its own distribution. We show the effect of the stochasticity of $\alpha$ on the expectation and higher moments of the random variable. For instance, the moments of a right-tailed or right-asymmetric variable, when finite, increase with the variance of $\alpha$; those of a left-asymmetric one decrease. The same applies to the conditional shortfall (CVaR) and mean-excess functions. We prove the general case and examine the specific situation of lognormally distributed $\alpha \in [b,\infty)$, $b>1$. The stochasticity of the exponent induces a significant bias in the estimation of the mean and higher moments in the presence of data uncertainty. This has consequences for sampling error, as uncertainty about $\alpha$ translates into a higher expected mean. The bias is conserved under summation, even for a number of summands large enough to warrant convergence to the stable distribution. We establish inequalities related to the asymmetry. We also consider the situation of capped power laws (i.e. with compact support) and apply it to the study of violence by Cirillo and Taleb (2016). We show that uncertainty concerning the historical data increases the true mean. 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1609.02369&r=ecm 
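The Jensen-type effect on the mean can be checked directly: because α/(α-1) is convex in α, a Pareto mean evaluated under a stochastic α exceeds the mean at the average α. A tiny numeric check with a hypothetical four-point distribution for α (not the paper's lognormal case):

```python
import numpy as np

def pareto_mean(alpha, xmin=1.0):
    """Mean of a Pareto(alpha) variable with scale xmin (finite for alpha > 1)."""
    return alpha * xmin / (alpha - 1.0)

# Discretized distribution for the tail exponent alpha on (1, inf); compare
# the mean under stochastic alpha with the mean at the average alpha.
alphas = np.array([1.5, 2.0, 2.5, 3.0])
probs = np.array([0.25, 0.25, 0.25, 0.25])
mean_stochastic = float(probs @ pareto_mean(alphas))   # E[alpha/(alpha-1)]
mean_at_average = float(pareto_mean(probs @ alphas))   # at E[alpha]
```

The gap between the two numbers is the bias the abstract describes: treating an uncertain tail exponent as fixed at its average understates the mean.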
By:  D.S. Poskitt 
Abstract:  This paper provides a detailed analysis of the properties of Singular Spectrum Analysis (SSA) under very general conditions on the structure of the observed series. It translates the SSA interpretation of the singular value decomposition of the so-called trajectory matrix as a discrete Karhunen–Loève expansion into conventional principal components analysis, and shows how this motivates a consideration of SSA constructed using standardized or rescaled trajectories (RSSA). The asymptotic properties of RSSA are derived assuming that the true data generating process (DGP) is sufficiently regular to ensure that Grenander's conditions are satisfied. The spectral structure of the different population ensemble models implicit in the large sample properties so derived is examined, and it is shown how the decomposition of the spectrum into discrete and continuous components leads to an application of sequential RSSA series reconstruction. As part of the latter exercise, the paper presents a generalization of Szegő's theorem to fractionally integrated processes. The operation of the theoretical results is demonstrated via simulation experiments, which serve as a vehicle to illustrate the numerical consequences of the results in the context of different processes and to assess the practical impact of the sequential RSSA processing methodology. 
Keywords:  embedding, principal components, rescaled trajectory matrix, singular value decomposition, spectrum. 
JEL:  C14 C22 C52 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201615&r=ecm 
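Basic SSA, the starting point that the paper's rescaled variant (RSSA) modifies, can be sketched as embedding, SVD, and diagonal averaging:

```python
import numpy as np

def ssa_reconstruct(y, L, components):
    """Basic SSA: embed y into an L x K trajectory matrix, keep the chosen
    singular components, and reconstruct by diagonal (Hankel) averaging."""
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])    # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in components)
    out = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):                                     # Hankel averaging
        for k in range(K):
            out[i + k] += Xr[i, k]
            counts[i + k] += 1
    return out / counts
```

Keeping all L components reproduces the series exactly; keeping only the leading ones separates a low-rank signal (a sinusoid has a rank-2 trajectory matrix) from noise.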
By:  Bent Nielsen (Nuffield College, Oxford); Xiyu Jiao (Department of Economics, University of Oxford and Mansfield College) 
Abstract:  We consider outlier detection algorithms for time series regression based on iterated 1-step Huber-skip M-estimators. This paper analyses the role of varying cutoffs in such algorithms. The argument involves an asymptotic theory for a new class of weighted and marked empirical processes allowing for estimation errors in the scale and the regression coefficient. 
Keywords:  Iterated 1-step Huber-skip M-estimator; tightness; fixed point; Poisson approximation to the gauge; weighted and marked empirical processes. 
Date:  2016–08–25 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:1608&r=ecm 
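The iterated 1-step Huber-skip idea, estimate, rescale, drop observations beyond a cutoff, and repeat, can be sketched for a simple regression as follows (illustrative, not the paper's exact algorithm):

```python
import numpy as np

def huber_skip_iterate(x, y, cutoff=2.5, steps=5):
    """Iterated 1-step Huber-skip for a simple regression: at each step,
    estimate slope, intercept, and scale from the currently retained
    observations, then retain exactly those points whose absolute residual
    is below cutoff * scale."""
    keep = np.ones(len(y), dtype=bool)
    slope = 0.0
    for _ in range(steps):
        xk, yk = x[keep], y[keep]
        xc = xk - xk.mean()
        slope = xc @ (yk - yk.mean()) / (xc @ xc)
        intercept = yk.mean() - slope * xk.mean()
        resid = y - intercept - slope * x
        scale = resid[keep].std()
        keep = np.abs(resid) <= cutoff * scale
    return slope, keep
```

The cutoff governs both robustness and efficiency, which is why the paper's analysis of varying cutoffs (and the "gauge", the expected share of good observations wrongly flagged) matters in practice.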
By:  Florian Wickelmaier; Achim Zeileis 
Abstract:  In multinomial processing tree (MPT) models, individual differences between the participants in a study lead to heterogeneity of the model parameters. While subject covariates may explain these differences, it is often unknown in advance how the parameters depend on the available covariates: which variables play a role at all, whether they interact, whether they have a nonlinear influence, and so on. Therefore, a new approach for capturing parameter heterogeneity in MPT models is proposed, based on the machine learning method MOB for model-based recursive partitioning. This recursively partitions the covariate space, leading to an MPT tree with subgroups that are directly interpretable in terms of effects and interactions of the covariates. The pros and cons of MPT trees as a means of analyzing the effects of covariates on MPT model parameters are discussed based on a simulation experiment as well as on two empirical applications from memory research. Software that implements MPT trees is provided via the mpttree function in the psychotree package in R. 
Keywords:  multinomial processing tree, model-based recursive partitioning, parameter heterogeneity 
JEL:  C14 C45 C52 C87 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201626&r=ecm 
By:  Martin M. Andreasen; Jesús FernándezVillaverde; Juan F. RubioRamírez 
Abstract:  This paper studies the pruned state-space system for higher-order perturbation approximations to DSGE models. We show the stability of the pruned approximation up to third order and provide closed-form expressions for first and second unconditional moments and impulse response functions. Our results introduce GMM estimation and impulse-response matching for DSGE models approximated up to third order and provide a foundation for indirect inference and SMM. As an application, we consider a New Keynesian model with Epstein–Zin–Weil preferences and two novel feedback effects from long-term bonds to the real economy, allowing us to match the level and variability of the 10-year term premium in the U.S. with a low relative risk aversion of 5. 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:fda:fdaddt:201607&r=ecm 
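The pruning scheme itself is easy to sketch for a second-order approximation: the second-order correction is driven only by the Kronecker square of the first-order state, so stability of the first-order dynamics carries over. A hypothetical two-state example with generic matrices A, B, H (not taken from any particular DSGE model):

```python
import numpy as np

def simulate_pruned_second_order(A, B, H, shocks):
    """Pruned second-order perturbation: the first-order state xf evolves
    linearly, and the second-order correction xs is driven by kron(xf, xf)
    only, which keeps the recursion stable whenever A is stable."""
    n = A.shape[0]
    xf = np.zeros(n)
    xs = np.zeros(n)
    path = []
    for eps in shocks:
        xs = A @ xs + 0.5 * H @ np.kron(xf, xf)  # second-order part, pruned
        xf = A @ xf + B @ eps                    # first-order part
        path.append(xf + xs)
    return np.array(path)
```

An unpruned second-order recursion would feed xf + xs back into the quadratic term, which can generate explosive sample paths even when A is stable; pruning removes exactly those higher-order feedback terms.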