nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒10‒17
twenty-two papers chosen by
Sune Karlsson
Örebro universitet

  1. Bayesian Inference in a Non-linear/Non-Gaussian Switching State Space Model: Regime-dependent Leverage Effect in the U.S. Stock Market By Kim, Jaeho
  2. Fuzzy Differences-in-Differences By de Chaisemartin, Clement; D'Haultfoeuille, Xavier
  3. Equation-by-Equation Estimation of a Multivariate Log-GARCH-X Model of Financial Returns By Francq, Christian; Sucarrat, Genaro
  4. Functional Coefficient Moving Average Model with Applications to Forecasting Chinese CPI By Chen, Song Xi; Lei, Lihua; Tu, Yundong
  5. Semiparametric Model Averaging of Ultra-High Dimensional Time Series By Jia Chen; Degui Li; Oliver Linton; Zudi Lu
  6. Enhancing Estimation for Interest Rate Diffusion Models with Bond Prices By Zou, Tao; Chen, Song Xi
  7. Seasonal adjustment of state and metro CES jobs data By Phillips, Keith R.; Wang, Jianguo
  8. Correcting for Self-Reporting Bias in BMI: A Multiple Imputation Approach By Donal O'Neill
  9. New Semiparametric Estimation Procedure for Functional Coefficient Longitudinal Data Models By Jia Chen; Degui Li; Yingcun Xia
  10. Textual Analysis in Real Estate By Adam Nowak; Patrick Smith
  11. How Robust Are SVARs at Measuring Monetary Policy in Small Open Economies? By Carrillo Julio A.; Elizondo Rocío
  12. Supplement to Fuzzy Differences-in-Differences By de Chaisemartin, Clement; D'Haultfoeuille, Xavier
  13. A multivariate linear rank test of independence based on a multiparametric copula with cubic sections By Mangold, Benedikt
  14. Who gives Direction to Statistical Testing? Best Practice meets Mathematically Correct Tests By Karl H. Schlag
  15. Identification and Estimation of Dynamic Games when Players' Beliefs Are Not in Equilibrium By Aguirregabiria, Victor; Magesan, Arvind
  16. Stochastic levels and duration dependence in US unemployment By de Bruijn, B.; Franses, Ph.H.B.F.
  17. The Probability of Legislative Shirking: Estimation and Validation By Serguei Kaniovski; David Stadelmann
  18. Not Just Another Mixed Frequency Paper By Sergio Afonso Lago Alves; Angelo Marsiglia Fasolo
  19. Regime-switching Stochastic Volatility Model : Estimation and Calibration to VIX options By Stéphane Goutte; Amine Ismail; Huyên Pham
  20. Forecasting German Car Sales Using Google Data and Multivariate Models By Fantazzini, Dean; Toktamysova, Zhamal
  21. L'histoire (faussement) naïve des modèles DSGE [The (falsely) naïve history of DSGE models] By Francesco Sergi
  22. Estimation of Spatial Autoregressions with Stochastic Weight Matrices By Abhimanyu Gupta

  1. By: Kim, Jaeho
    Abstract: This paper provides two Bayesian algorithms to efficiently estimate non-linear/non-Gaussian switching state space models by extending a standard Particle Markov chain Monte Carlo (PMCMC) method. Instead of iteratively running separate PMCMC steps using conventional approaches, the proposed methods generate continuous-state and discrete-regime indicator variables together from their joint smoothing distribution in one Gibbs block. The proposed Bayesian algorithms, which are built upon the novel ideas of ancestor sampling and particle rejuvenation, are robust to small numbers of particles and degenerate state transition equations. Moreover, the algorithms are applicable to any switching state space model, regardless of the Markovian property. The difficulty in conducting Bayesian model comparisons is overcome by adopting the Deviance Information Criterion (DIC). For illustration, a regime-dependent leverage effect in the U.S. stock market is investigated using the newly developed methods. A conventional regime switching stochastic volatility model is generalized to encompass the regime-dependent leverage effect and is applied to Standard and Poor's 500 and NASDAQ daily return data. The resulting Bayesian posterior estimates indicate that the stronger (weaker) financial leverage effect is associated with a high (low) volatility regime.
    Keywords: Particle Markov Chain Monte Carlo, Regime switching, State space model, Leverage effect
    JEL: C11 C15
    Date: 2015–10–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67153&r=all
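    The DIC adopted above for model comparison has the standard form DIC = Dbar + pD, with pD = Dbar - D(thetabar). A minimal sketch of computing it from MCMC output, assuming a hypothetical log-likelihood function and posterior draws (not the paper's actual code):

      import numpy as np

      def dic(log_lik, theta_draws, theta_mean):
          """Deviance Information Criterion from posterior draws.
          log_lik(theta) returns log p(y | theta)."""
          deviances = np.array([-2.0 * log_lik(th) for th in theta_draws])
          d_bar = deviances.mean()                 # posterior mean deviance
          p_d = d_bar + 2.0 * log_lik(theta_mean)  # pD = Dbar - D(theta_mean)
          return d_bar + p_d

    A lower DIC would favour, say, the regime-dependent-leverage specification over the conventional one.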
  2. By: de Chaisemartin, Clement (Department of Economics University of Warwick); D'Haultfoeuille, Xavier (CREST)
    Abstract: In many applications of the differences-in-differences (DID) method, the treatment increases more in the treatment group, but some units are also treated in the control group. In such fuzzy designs, a popular estimator of treatment effects is the DID of the outcome divided by the DID of the treatment, or OLS and 2SLS regressions with time and group fixed effects estimating weighted averages of this ratio across groups. We start by showing that when the treatment also increases in the control group, this ratio estimates a causal effect only if treatment effects are homogenous in the two groups. Even when the distribution of treatment is stable, it requires that the effect of time be the same on all counterfactual outcomes. As this assumption is not always applicable, we propose two alternative estimators. The first estimator relies on a generalization of common trends assumptions to fuzzy designs, while the second extends the changes-in-changes estimator of Athey & Imbens (2006). When the distribution of treatment changes in the control group, treatment effects are partially identified. Finally, we prove that our estimators are asymptotically normal and use them to revisit applied papers using fuzzy designs.
    JEL: C21
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1065&r=all
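    The ratio estimator analysed above (the Wald-DID) is simply the DID of the outcome divided by the DID of the treatment. A minimal sketch with hypothetical arrays, purely to fix ideas:

      import numpy as np

      def wald_did(y, d, group, post):
          """DID of the outcome y divided by the DID of the treatment d.
          y, d, group, post: NumPy arrays; group: 1 = treatment group,
          post: 1 = post period."""
          def did(v):
              g1 = (v[(group == 1) & (post == 1)].mean()
                    - v[(group == 1) & (post == 0)].mean())
              g0 = (v[(group == 0) & (post == 1)].mean()
                    - v[(group == 0) & (post == 0)].mean())
              return g1 - g0
          return did(y) / did(d)

    Per the abstract, this ratio carries a causal interpretation only under homogeneity or common-trend conditions when treatment also moves in the control group.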
  3. By: Francq, Christian; Sucarrat, Genaro
    Abstract: Estimation of large financial volatility models is plagued by the curse of dimensionality: as the dimension grows, joint estimation of the parameters becomes infeasible in practice. This problem is compounded if covariates or conditioning variables ("X") are added to each volatility equation. The problem is especially acute for non-exponential volatility models (e.g. GARCH models), since the variables and parameters there are restricted to be positive. Here, we propose an estimator for a multivariate log-GARCH-X model that avoids these problems. The model allows for feedback among the equations, admits several stationary regressors as conditioning variables in the X-part (including leverage terms), and allows for time-varying covariances of unknown form. Strong consistency and asymptotic normality of an equation-by-equation least squares estimator are proved, and the results can be used to undertake inference both within and across equations. The flexibility and usefulness of the estimator are illustrated in two empirical applications.
    Keywords: Exponential GARCH, multivariate log-GARCH-X, VARMA-X, Equation-by-Equation Estimation (EBEE), Least Squares
    JEL: C13 C22 C32 C51 C58
    Date: 2015–10–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67140&r=all
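    Least squares is feasible here because a log-GARCH equation has an ARMA-type representation in log(eps_t^2), so each equation can be fitted separately without positivity constraints. A heavily simplified sketch of one equation, using an AR(p)-plus-covariates approximation rather than the paper's full VARMA-X representation:

      import numpy as np

      def ebee_log_vol(eps, X, p=5, c=1e-10):
          """LS fit of log(eps_t^2) on its own lags and covariates X_t.
          eps: (T,) residuals for this equation; X: (T, k) covariate array."""
          ly = np.log(eps**2 + c)          # c guards against log(0)
          T = len(ly)
          lags = np.column_stack([ly[p - j - 1:T - j - 1] for j in range(p)])
          Z = np.column_stack([np.ones(T - p), lags, X[p:]])
          coef, *_ = np.linalg.lstsq(Z, ly[p:], rcond=None)
          return coef                      # intercept, lag and X coefficients

    Because everything is in logs, the fitted variance exp(Z @ coef) is automatically positive, which is the feature the abstract contrasts with ordinary GARCH.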
  4. By: Chen, Song Xi; Lei, Lihua; Tu, Yundong
    Abstract: This article establishes the functional coefficient moving average (FMA) model, which allows the coefficient of the classical moving average model to vary with a covariate. The functional coefficient is identified as a ratio of two conditional moments. A local linear technique is used for estimation, and the asymptotic properties of the resulting estimator are investigated. Its convergence rate depends on whether the underlying function reaches its boundary, and the asymptotic distribution can be nonstandard. A model specification test in the spirit of Härdle and Mammen (1993) is developed to check the stability of the functional coefficient. Intensive simulations study the finite-sample performance of the proposed estimator and the size and power of the test. A real-data example on CPI data from mainland China shows the efficacy of the FMA model, which achieves a more than 20% improvement in relative mean squared prediction error over the classical moving average model.
    Keywords: Moving Average model, functional coefficient model, forecasting, Consumer Price Index.
    JEL: C1 C13 C5 C51 C53
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67074&r=all
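    Since the functional coefficient is a ratio of two conditional moments, a kernel estimator of it is just a ratio of two smoothed means. A minimal local-constant sketch (the paper uses local linear fitting, and the moment variables a_t, b_t are hypothetical stand-ins for whatever the identification delivers):

      import numpy as np

      def moment_ratio(u0, u, a, b, h):
          """theta(u0) = E[a_t | U=u0] / E[b_t | U=u0] via Gaussian-kernel
          weighting; the common normalising constant cancels in the ratio."""
          w = np.exp(-0.5 * ((u - u0) / h) ** 2)
          return (w @ a) / (w @ b)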
  5. By: Jia Chen; Degui Li; Oliver Linton; Zudi Lu
    Abstract: In this paper, we consider semiparametric model averaging for a nonlinear dynamic time series system in which the number of exogenous regressors is ultra large and the number of autoregressors is moderately large. In order to accurately forecast the response variable, we propose two semiparametric approaches to dimension reduction among the exogenous regressors and autoregressors (lags of the response variable). In the first approach, we introduce a Kernel Sure Independence Screening (KSIS) technique for the nonlinear time series setting, which screens out the regressors whose marginal regression (or autoregression) functions do not make a significant contribution to estimating the joint multivariate regression function, and thus reduces the dimension of the regressors from a possible exponential rate to a certain polynomial rate, typically smaller than the sample size. We then consider a semiparametric method of Model Averaging MArginal Regression (MAMAR) for the regressors and autoregressors that survive the screening procedure, and propose a penalised MAMAR method to further select the regressors that have significant effects on estimating the multivariate regression function and predicting the future values of the response variable. In the second approach, we impose an approximate factor structure on the ultra-high dimensional exogenous regressors, use principal component analysis to estimate the latent common factors, and then apply the penalised MAMAR method to select the estimated common factors and the lags of the response variable that are significant. Through either of the two approaches, we can finally determine the optimal combination of the significant marginal regression and autoregression functions. Under some regularity conditions, we derive the asymptotic properties of the two semiparametric dimension-reduction approaches. Some numerical studies, including simulation and an empirical application, illustrate the proposed methodology.
    Keywords: Kernel smoother, penalised MAMAR, principal component analysis, semiparametric approximation, sure independence screening, ultra-high dimensional time series.
    JEL: C14 C22 C52
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:15/18&r=all
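    The screening step (KSIS) can be pictured as ranking candidate regressors by how much of the response their marginal kernel regressions explain, then keeping the top k. A minimal sketch under that reading (the paper's MAMAR averaging and penalisation steps come afterwards):

      import numpy as np

      def nw_fit(x, y, h):
          """Nadaraya-Watson fitted values of y on a single regressor x."""
          w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
          return (w @ y) / w.sum(axis=1)

      def ksis(X, y, k, h=0.5):
          """Indices of the k regressors whose marginal kernel fits have
          the largest sample variance (a simple screening criterion)."""
          scores = [np.var(nw_fit(X[:, j], y, h)) for j in range(X.shape[1])]
          return np.argsort(scores)[::-1][:k]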
  6. By: Zou, Tao; Chen, Song Xi
    Abstract: We consider improving the estimation of parameters in diffusion processes for interest rates by incorporating information in bond prices. This is designed to improve the estimation of the drift parameters, which are known to be subject to large estimation errors. It is shown that having the bond prices together with the short rates leads to more efficient estimation of all parameters for the interest rate models, enhancing the estimation efficiency relative to maximum likelihood estimation based on the interest rate dynamics alone. The combined estimation based on the bond prices and the interest rate dynamics also provides inference on the risk premium parameter. Simulation experiments were conducted to confirm the theoretical properties of the estimators concerned. We analyze the overnight Fed funds rate together with U.S. Treasury bond prices.
    Keywords: Interest Rate Models; Affine Term Structure; Bond Prices; Market Price of Risk; Combined Estimation; Parameter Estimation.
    JEL: C5 C50 C58
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67073&r=all
  7. By: Phillips, Keith R. (Federal Reserve Bank of Dallas); Wang, Jianguo
    Abstract: Hybrid time series data often require special care in estimating seasonal factors. Series such as the state and metro area Current Employment Statistics produced by the Bureau of Labor Statistics (BLS) are composed of two different source series that often have two different seasonal patterns. In this paper we develop a procedure to test for differing seasonal patterns within a hybrid series. We also discuss how to apply differing seasonal factors to the separate parts of the hybrid series. Currently the BLS simply juxtaposes the two different sets of seasonal factors at the transition point between the benchmark part of the data and the survey part. We argue that the seasonal factors should be extrapolated at the transition point, or that an adjustment should be made to the level of the unadjusted data to correct for a bias in the survey part of the data caused by differing seasonal factors at the transition month.
    Keywords: Current Employment Statistics; Seasonal Adjustment; Hybrid Time Series
    Date: 2015–09–01
    URL: http://d.repec.org/n?u=RePEc:fip:feddwp:1505&r=all
  8. By: Donal O'Neill (Department of Economics, Finance and Accounting, Maynooth University)
    Abstract: Measurement error in BMI is known to be a complex process that has serious consequences for traditional estimators. In this paper I examine the extent to which stochastic multiple imputation (MI) approaches can successfully address this problem. Using both Monte Carlo simulations and real-world data, I show that the MI approach can provide an effective solution to measurement error in BMI in appropriate circumstances. The MI approach yields consistent estimates that make efficient use of all the available data.
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:may:mayecw:n263-15.pdf&r=all
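    The pooling step in any multiple-imputation approach follows Rubin's rules. A minimal sketch in which the calibration model linking self-reported to measured BMI is a hypothetical placeholder (in practice its parameters would be estimated from validation data):

      import numpy as np

      rng = np.random.default_rng(0)

      def rubin_pool(ests):
          """Rubin's rules: ests is a list of (estimate, variance) pairs."""
          q = np.array([e[0] for e in ests])
          w = np.mean([e[1] for e in ests])    # within-imputation variance
          b = q.var(ddof=1)                    # between-imputation variance
          m = len(ests)
          return q.mean(), w + (1 + 1 / m) * b

      def mi_mean_bmi(self_report, alpha, beta, sigma, M=20):
          """Impute true BMI M times from BMI = alpha + beta*self_report
          + N(0, sigma^2), estimate the mean in each completed data set,
          then pool across imputations."""
          ests = []
          for _ in range(M):
              bmi = alpha + beta * self_report \
                    + rng.normal(0, sigma, len(self_report))
              ests.append((bmi.mean(), bmi.var(ddof=1) / len(bmi)))
          return rubin_pool(ests)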
  9. By: Jia Chen; Degui Li; Yingcun Xia
    Abstract: In order to achieve dimension reduction for the nonparametric functional coefficients and improve the estimation efficiency, in this paper we introduce a novel semiparametric estimation procedure which combines a principal component analysis of the functional coefficients and a Cholesky decomposition of the within-subject covariance matrices. Under some regularity conditions, we derive the asymptotic distribution for the proposed semiparametric estimators and show that the efficiency of the estimation of the (principal) functional coefficients can be improved when the within-subject covariance structure is correctly specified. Furthermore, we apply two approaches to consistently estimate the Cholesky decomposition, which avoid a possible misspecification of the within-subject covariance structure and ensure the efficiency improvement for the estimation of the (principal) functional coefficients. Some numerical studies including Monte Carlo experiments and an empirical application show that the developed semiparametric method works reasonably well in finite samples.
    Keywords: Cholesky decomposition, functional coefficients, local linear smoothing, principal component analysis, profile least squares, within-subject covariance
    JEL: C14 C23 C51
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:15/17&r=all
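    The role of the Cholesky factor is essentially that of a GLS whitening transform: if Sigma = L L' is the within-subject covariance, premultiplying by the inverse of L makes the errors spherical, which is where the efficiency gain comes from. A minimal sketch, assuming an estimated Sigma (not the paper's full profile-least-squares procedure):

      import numpy as np

      def cholesky_gls(y, X, Sigma):
          """Whiten y and X with L^{-1}, where Sigma = L L', then run LS."""
          L = np.linalg.cholesky(Sigma)
          y_star = np.linalg.solve(L, y)   # L^{-1} y
          X_star = np.linalg.solve(L, X)   # L^{-1} X
          beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
          return beta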
  10. By: Adam Nowak (West Virginia University, Department of Economics); Patrick Smith (Georgia State University, J. Mack Robinson College of Business)
    Abstract: This paper incorporates text data from MLS listings from Atlanta, GA into a hedonic pricing model. Text is found to decrease pricing error by more than 25%. Information from text is incorporated into a linear model using a tokenization approach. By doing so, the implicit prices for various words and phrases are estimated. The estimation focuses on simultaneous variable selection and estimation for linear models in the presence of a large number of variables. The LASSO procedure and variants are shown to outperform least-squares in out-of-sample testing.
    Keywords: textual analysis, big data, real estate valuation
    JEL: C01 C18 C51 C52 C65 R30
    Date: 2015–08
    URL: http://d.repec.org/n?u=RePEc:wvu:wpaper:15-34&r=all
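    A minimal sketch of the tokenisation-plus-LASSO idea with scikit-learn, on hypothetical listing data (the paper's hedonic model also includes the usual property characteristics):

      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LassoCV

      def text_hedonic(remarks, log_price):
          """Regress log price on a bag of words/bigrams from listing
          remarks; the LASSO keeps only tokens with nonzero coefficients."""
          vec = CountVectorizer(ngram_range=(1, 2), min_df=5, binary=True)
          X = vec.fit_transform(remarks).toarray()
          fit = LassoCV(cv=5).fit(X, log_price)
          tokens = np.array(vec.get_feature_names_out())
          keep = fit.coef_ != 0
          return dict(zip(tokens[keep], fit.coef_[keep]))

    The surviving coefficients can be read as estimated implicit (log-price) contributions of individual words and phrases.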
  11. By: Carrillo Julio A.; Elizondo Rocío
    Abstract: We study the ability of exclusion and sign restrictions to measure monetary policy shocks in small open economies. Our Monte Carlo experiments show that sign restrictions systematically overshoot inflation responses to the said shock, so we propose to add prior information to limit the number of economically implausible responses. This modified procedure robustly recovers the transmission of the shock, whereas exclusion restrictions show large sensitivity to the assumed monetary transmission mechanism of the model and the set of foreign variables included in the VAR. An application with Mexican data supports our findings.
    Keywords: Exclusion Restrictions; Sign Restrictions; Small Open Economy; Monetary Policy Shock.
    JEL: C32 E52
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:bdm:wpaper:2015-18&r=all
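    The sign-restriction step being modified here is, in its standard form, a rejection sampler over rotations of the reduced-form shocks. A minimal sketch that checks impact responses only (the paper's proposal adds prior information to discipline this set further):

      import numpy as np

      rng = np.random.default_rng(0)

      def sign_restricted_impacts(Sigma, signs, n_draws=10_000):
          """Keep impact matrices B = chol(Sigma) @ Q whose first column
          (the policy shock) matches `signs` (+1, -1, 0 = unrestricted)."""
          P = np.linalg.cholesky(Sigma)
          kept = []
          for _ in range(n_draws):
              Q, _ = np.linalg.qr(rng.standard_normal(Sigma.shape))
              B = P @ Q
              for flip in (1.0, -1.0):     # shock signs are not identified
                  if np.all(flip * B[:, 0] * signs >= 0):
                      B[:, 0] *= flip
                      kept.append(B)
                      break
          return kept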
  12. By: de Chaisemartin, Clement (Department of Economics University of Warwick); D'Haultfoeuille, Xavier (CREST)
    Abstract: This paper gathers the supplementary material to de Chaisemartin & D'Haultfoeuille (2015). First, we show that two commonly used IV and OLS regressions with time and group fixed effects estimate weighted averages of Wald-DIDs. It then follows from Theorem 3.1 in de Chaisemartin & D'Haultfoeuille (2015) that these regressions estimate weighted sums of LATEs, with potentially many negative weights as we illustrate through two applications. We review all papers published in the American Economic Review between 2010 and 2012 and find that 10.1% of these papers estimate one or the other regression. Second, we consider estimators of the bounds on average and quantile treatment effects derived in Theorems 3.2 and 3.3 in de Chaisemartin & D'Haultfoeuille (2015) and we study their asymptotic behavior. Third, we revisit Gentzkow et al. (2011) and Field (2007) using our estimators. Finally, we present all the remaining proofs not included in the main paper.
    JEL: C21
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1066&r=all
  13. By: Mangold, Benedikt
    Abstract: This paper generalizes the locally optimal linear rank tests based on copulas of Shirahata (1974), Guillén and Isabel (1998), and Genest et al. (2006) to p dimensions, and introduces a new χ²-type test for global independence (the Nelsen test). The test is compared to similar nonparametric tests in terms of power under several alternatives and sample sizes. The particular strength of the Nelsen test, however, is that a test decision can be reached quickly, owing to the closed-form expression for the asymptotic distribution of the test statistic provided in this paper.
    Keywords: Multivariate linear rank test, Copula, Multiparametric copula, Test of independence, Dependogram, Nonparametric statistics, Dependence
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:102015&r=all
  14. By: Karl H.Schlag
    Abstract: We are interested in statistical tests that are able to uncover that one method is better than another. The Wilcoxon-Mann-Whitney rank-sum and the Wilcoxon signed-rank tests are the most popular tests for showing that two methods are different. Yet all of the 32 papers in economics we surveyed misused them to claim evidence that one method is better, without making any additional assumptions. We present eight nonparametric tests that can correctly identify which method is better, in terms of a stochastic inequality, a median difference, or a difference in medians or means, without adding any assumptions. We show that they perform very well in the data sets from the surveyed papers. The two tests for comparing medians are novel, constructed in the spirit of Mood's test.
    JEL: C12 C14 C90
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:vie:viennp:1512&r=all
  15. By: Aguirregabiria, Victor; Magesan, Arvind
    Abstract: This paper deals with the identification and estimation of dynamic games when players' beliefs about other players' actions are biased, i.e., beliefs do not represent the probability distribution of the actual behavior of other players conditional on the information available. First, we show that an exclusion restriction, typically used to identify empirical games, provides testable nonparametric restrictions of the null hypothesis of equilibrium beliefs. Second, we prove that this exclusion restriction, together with consistent estimates of beliefs at several points in the support of the special state variable (i.e., the variable involved in the exclusion restriction), is sufficient for nonparametric point identification of players' payoff and belief functions. The consistent estimates of beliefs at some points of support may come either from an assumption of unbiased beliefs at these points in the state space, or from available data on elicited beliefs for some values of the state variables. Third, we propose a simple two-step estimation method and a sequential generalization of the method that improves its asymptotic and finite sample properties. We illustrate our model and methods using both Monte Carlo experiments and an empirical application to a dynamic game of store location by retail chains. The key conditions for the identification of beliefs and payoffs in our application are the following: (a) the previous year's network of stores of the competitor does not have a direct effect on the profit of a firm, but the firm's own network of stores in the previous year does affect its profit because of the existence of sunk entry costs and economies of density in these costs; and (b) firms' beliefs are unbiased in those markets that are geographically close to the opponent's network of stores, though beliefs are unrestricted, and potentially biased, for unexplored markets farther away from the competitors' network. Our estimates show significant evidence of biased beliefs. Furthermore, imposing the restriction of unbiased beliefs generates a substantial attenuation bias in the estimates of competition effects.
    Keywords: dynamic games; estimation; identification; market entry-exit; rational behavior; rationalizability
    JEL: C73 L13
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:10872&r=all
  16. By: de Bruijn, B.; Franses, Ph.H.B.F.
    Abstract: We introduce a new time series model that can capture the properties of data as typically exemplified by monthly US unemployment data. These data show the familiar nonlinear features, with steeper increases in unemployment during economic downswings than decreases during economic prosperity. At the same time, the levels of unemployment in each of the two states do not seem fixed, nor are the transition periods abrupt. Finally, the model should generate out-of-sample forecasts that mimic the in-sample properties. We demonstrate that our new and flexible model covers all these features, and our illustration with monthly US unemployment data shows its merits, both in and out of sample.
    Keywords: Markov switching, duration dependence, Gibbs sampling, unemployment, stochastic levels
    JEL: C11 C22 C24 C53 E24
    Date: 2015–09–23
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:78710&r=all
  17. By: Serguei Kaniovski; David Stadelmann
    Abstract: We introduce a binomial mixture model for estimating the probability of legislative shirking. The estimated probability strongly correlates with the observed frequency of shirking obtained by matching parliamentary roll-call votes with the will of the median voter revealed in national referenda on identical legislative proposals. Since our estimation method requires the roll-call votes as sole input, it can be used even if the will of the median voter is unknown.
    Keywords: binomial mixture model; legislative shirking; referenda
    JEL: C13 D72
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:cra:wpaper:2015-17&r=all
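    A two-component binomial mixture of the kind described can be fitted by EM in a few lines. A minimal generic sketch (the mapping of the components to shirking versus non-shirking is the paper's interpretation; the starting values here are arbitrary):

      import numpy as np
      from scipy.stats import binom

      def em_binomial_mixture(k, n, iters=200):
          """Fit pi*Bin(n, p1) + (1-pi)*Bin(n, p2) to counts k out of n
          (e.g., deviating votes per legislator) by EM.
          k, n: integer NumPy arrays of equal length."""
          pi, p1, p2 = 0.5, 0.2, 0.8
          for _ in range(iters):
              f1 = pi * binom.pmf(k, n, p1)        # E-step: component
              f2 = (1 - pi) * binom.pmf(k, n, p2)  # posteriors
              g = f1 / (f1 + f2)
              pi = g.mean()                        # M-step: weighted updates
              p1 = (g * k).sum() / (g * n).sum()
              p2 = ((1 - g) * k).sum() / ((1 - g) * n).sum()
          return pi, p1, p2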
  18. By: Sergio Afonso Lago Alves; Angelo Marsiglia Fasolo
    Abstract: This paper presents a new algorithm, based on a two-part Gibbs sampler with the FFBS method, to recover the joint distribution of missing observations in a mixed-frequency dataset. The new algorithm relaxes most of the constraints usually present in the literature, namely: (i) it does not require at least one time series to be observed every period; (ii) it provides an easy way to add linear restrictions based on the state space representation of the VAR; (iii) it does not require regularly-spaced time series at lower frequencies; and (iv) it avoids degeneration problems arising when states, or linear combinations of states, are actually observed. In addition, the algorithm is well suited for embedding high-frequency real-time information to improve nowcasts and forecasts of lower-frequency time series. We evaluate the properties of the algorithm using simulated data. Moreover, as empirical applications, we simulate monthly Brazilian GDP, comparing our results to the Brazilian IBC-BR, and recover what historical PNAD-C unemployment rates would have looked like prior to 2012.
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:400&r=all
  19. By: Stéphane Goutte (LED - Université Vincennes Saint-Denis (Paris 8)); Amine Ismail (LPMA - Laboratoire de Probabilités et Modèles Aléatoires - UPMC - Université Pierre et Marie Curie - Paris 6 - UP7 - Université Paris Diderot - Paris 7 - CNRS); Huyên Pham (LPMA - Laboratoire de Probabilités et Modèles Aléatoires - UPMC - Université Pierre et Marie Curie - Paris 6 - UP7 - Université Paris Diderot - Paris 7 - CNRS, ENSAE Paris-Tech & CREST, Laboratoire de Finance et d'Assurance - ENSAE Paris-Tech & CREST)
    Abstract: We develop and implement a method for maximum likelihood estimation of a regime-switching stochastic volatility model. Our model uses a continuous time stochastic process for the stock dynamics with the instantaneous variance driven by a Cox-Ingersoll-Ross (CIR) process and each parameter modulated by a hidden Markov chain. We propose an extension of the EM algorithm through the Baum-Welch implementation to estimate our model and filter the hidden state of the Markov chain while using the VIX index to invert the latent volatility state. Using Monte Carlo simulations, we test the convergence of our algorithm and compare it with an approximate likelihood procedure where the volatility state is replaced by the VIX index. We find that our method is more accurate than the approximate procedure. Then, we apply Fourier methods to derive a semi-analytical expression for S&P 500 and VIX option prices, which we calibrate to market data. We show that the model is sufficiently rich to encapsulate important features of the joint dynamics of the stock and the volatility, and to consistently fit option market prices.
    Keywords: EM algorithm, Regime-switching model, Stochastic volatility, VIX index, Baum-Welch algorithm.
    Date: 2015–10–06
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01212018&r=all
  20. By: Fantazzini, Dean; Toktamysova, Zhamal
    Abstract: Long-term forecasts are of key importance for the car industry due to the lengthy period of time required for the development and production processes. With this in mind, this paper proposes new multivariate models to forecast monthly car sales data using economic variables and Google online search data. An out-of-sample forecasting comparison with forecast horizons up to 2 years ahead was implemented using the monthly sales of ten car brands in Germany for the period from 2001M1 to 2014M6. Models including Google search data statistically outperformed the competing models for most of the car brands and forecast horizons. These results also hold after several robustness checks which consider nonlinear models, different out-of-sample forecasts, directional accuracy, the variability of Google data and additional car brands.
    Keywords: Car Sales, Forecasting, Google, Google Trends, Global Financial Crisis, Great Recession
    JEL: C22 C32 C52 C53 L62
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67110&r=all
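    One simple member of the class of models compared is a seasonal ARIMA with a Google search index as an exogenous regressor. A minimal sketch with statsmodels, on hypothetical aligned monthly arrays (the paper's preferred specifications are multivariate):

      from statsmodels.tsa.statespace.sarimax import SARIMAX

      def fit_and_forecast(sales, google, horizon=24):
          """Seasonal ARIMA for monthly car sales with a Google Trends
          regressor; google must extend `horizon` months past sales."""
          fit = SARIMAX(sales, exog=google[:len(sales)],
                        order=(1, 0, 1),
                        seasonal_order=(1, 0, 1, 12)).fit(disp=False)
          return fit.forecast(steps=horizon,
                              exog=google[len(sales):len(sales) + horizon])

    Note that feeding realised Google data into the forecast, as here, sidesteps the real-time issue of having to forecast the regressor itself.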
  21. By: Francesco Sergi (Centre d'Economie de la Sorbonne)
    Abstract: The purpose of this article is to analyze and criticize the way DSGE macroeconomists working in policy-making institutions think about the history of their own modeling practice. Our contribution is, first of all, historiographical: it investigates an original literature, focusing on the history of DSGE modeling as told by its own practitioners. The result of this analysis is what we call a "naïve history" of DSGE modeling. Modellers working from this perspective present their models as the achievement of linear and cumulative scientific progress, both in macroeconomic theorizing and in the application of formalized methods and econometric techniques to theory. This article also proposes a critical perspective on this naïve history and sketches, by contrast, the main lines of an alternative, "non-naïve" history. The naïve history of DSGE models is incomplete and imprecise: it largely ignores controversies, failures and blind alleys in previous research, and as a consequence the major theoretical and empirical turning points are made invisible. The naïve history also provides an ahistorical account of the assessment criteria for modeling (especially for evaluating empirical consistency), which hides the underlying methodological and epistemological debates. Finally, we claim that the naïve history plays an active rhetorical role in legitimizing DSGE models as the dominant tool for policy expertise.
    Keywords: DSGE, new neoclassical synthesis, history of macroeconomics, modelling methodology, central banks, rhetoric of economics
    JEL: B22 B41 E60
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:15066&r=all
  22. By: Abhimanyu Gupta
    Abstract: We examine a higher-order spatial autoregressive model with stochastic, but exogenous, spatial weight matrices. Allowing a general spatial linear process form for the disturbances that permits many common types of error specifications as well as potential ‘long memory’, we provide sufficient conditions for consistency and asymptotic normality of instrumental variables and ordinary least squares estimates. The implications of popular weight matrix normalizations and structures for our theoretical conditions are discussed. A set of Monte Carlo simulations examines the behaviour of the estimates in a variety of situations and suggests, like the theory, that spatial weights generated from distributions with ‘smaller’ moments yield better estimates. Our results are especially pertinent in situations where spatial weights are functions of stochastic economic variables.
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:772&r=all
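    For a first-order version y = lambda*W y + X beta + u, the instrumental-variables estimator studied here is standard 2SLS with instruments built from spatial lags of X. A minimal sketch under those assumptions (the paper's theory covers higher-order models and stochastic W):

      import numpy as np

      def sar_iv(y, X, W):
          """2SLS for y = lam * W y + X beta + u, instrumenting the
          spatial lag W y with [X, W X, W^2 X]."""
          R = np.column_stack([W @ y, X])               # regressors
          Z = np.column_stack([X, W @ X, W @ (W @ X)])  # instruments
          Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection matrix
          theta = np.linalg.solve(R.T @ Pz @ R, R.T @ Pz @ y)
          return theta[0], theta[1:]                    # (lam_hat, beta_hat)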

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.