nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒02‒02
24 papers chosen by
Sune Karlsson
Örebro University

  1. Efficient Iterative Maximum Likelihood Estimation of High-Parameterized Time Series Models By Nikolaus Hautsch; Ostap Okhrin; Alexander Ristig
  2. Deriving the Information Bounds for Nonlinear Panel Data Models with Fixed Effects By Haruo Iwakura
  3. Testing exogeneity of multinomial regressors in count data models: does two stage residual inclusion work? By A. Geraci; D. Fabbri; C. Monfardini
  4. Supplement To "Weak Identification in Fuzzy Regression Discontinuity Designs" By Feir, Donna; Lemieux, Thomas; Marmer, Vadim
  5. Structural Vector Autoregressions: Checking Identifying Long-run Restrictions via Heteroskedasticity By Helmut Lütkepohl; Anton Velinov
  6. A latent dynamic factor approach to forecasting multivariate stock market volatility By Gribisch, Bastian
  7. Nonstationary-Volatility Robust Panel Unit Root Tests and the Great Moderation By Czudaj, Robert; Hanck, Christoph
  8. Identification of prior information via moment-matching By Sacht, Stephen
  9. OLS and IV estimation of regression models including endogenous interaction terms By Maurice J.G. Bun; Teresa D. Harrison
  10. Semiparametric Generalized Long Memory Modelling of GCC Stock Market Returns: A Wavelet Approach By Heni Boubaker; Nadia Sghaier
  11. Exclusion bias in empirical social interaction models: causes, consequences and solutions By Bet Caeyers
  12. Nonlinear shrinkage of the covariance matrix for portfolio selection: Markowitz meets Goldilocks By Olivier Ledoit; Michael Wolf
  13. The Asymptotic Size and Power of the Augmented Dickey-Fuller Test for a Unit Root By Paparoditis, Efstathios; Politis, Dimitris N
  14. Convenient links for the estimation of hedonic price indexes: the case of unique, infrequently traded assets By Esmeralda Ramalho; Joaquim Ramalho
  15. Empirical modeling of the impact factor distribution By Michał Brzeziński
  16. Self-affinity in financial asset returns By John Goddard; Enrico Onali
  17. Textbook Estimators of Multiperiod Optimal Hedging Ratios: Methodological Aspects and Application to the European Wheat Market By Gianluca Stefani; Marco Tiberti
  18. Functional regression over irregular domains By Arnab Bhattacharjee; Liqian Cai; Taps Maiti
  19. Endogenous spatial structure and delineation of submarkets: A new framework with application to housing markets By Arnab Bhattacharjee; Eduardo Castro; Taps Maiti; João Marques
  20. Estimate nothing By M. Duembgen; L. C. G. Rogers
  21. Direct Versus Indirect Approach in Seasonal Adjustment By Marcus Scheiblecker
  22. Moment Matching versus Bayesian Estimation: Backward-Looking Behaviour in a New-Keynesian Baseline Model By Sacht, Stephen; Franke, Reiner; Jang, Tae-Seok
  23. Evaluating misspecification in DSGE models using tests for overidentifying restrictions By Reicher, Christopher Phillip
  24. Four Essays in Econometrics By Laurent Davezies; Jean-Marc Robin

  1. By: Nikolaus Hautsch; Ostap Okhrin; Alexander Ristig
    Abstract: We propose an iterative procedure for efficiently estimating models with complex log-likelihood functions and a potentially large number of parameters relative to the number of observations. Given consistent but inefficient estimates of sub-vectors of the parameter vector, the procedure yields computationally tractable, consistent and asymptotically efficient estimates of all parameters. We show asymptotic normality and derive the estimator's asymptotic covariance as a function of the number of iteration steps. To mitigate the curse of dimensionality in high-parameterized models, we combine the procedure with a penalization approach that yields sparsity and reduces model complexity. Small sample properties of the estimator are illustrated for two time series models in a simulation study. In an empirical application, we use the proposed method to estimate the connectedness between companies by extending the approach of Diebold and Yilmaz (2014) to a high-dimensional non-Gaussian setting.
    Keywords: Multi-Step estimation, Sparse estimation, Multivariate time series, Maximum likelihood estimation, Copula
    JEL: C13 C32 C50
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014-010&r=ecm
  2. By: Haruo Iwakura (Graduate School of Economics, Kyoto University)
    Abstract: This paper studies the asymptotic efficiency of estimates in nonlinear panel data models with fixed effects when both the cross-sectional sample size and the length of the time series tend to infinity. The efficiency bounds for regular estimators are derived using the infinite-dimensional convolution theorem of van der Vaart and Wellner (1996). It should be noted that the number of fixed effects increases with the sample size, so they constitute an infinite-dimensional nuisance parameter. The presence of fixed effects makes our derivation of the efficiency bounds non-trivial, and the techniques to overcome the difficulties caused by fixed effects are discussed in detail. Our results include the efficiency bounds for models containing unknown functions (for instance, a distribution function of error terms). We apply our results to show that the bias-corrected fixed effects estimator of Hahn and Newey (2004) is asymptotically efficient.
    Keywords: asymptotic efficiency; convolution theorem; double asymptotics; nonlinear panel data model; fixed effects; interactive effects; factor structure; incidental parameters.
    JEL: C13 C23
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:886&r=ecm
  3. By: A. Geraci; D. Fabbri; C. Monfardini
    Abstract: We study a simple exogeneity test in count data models with possibly endogenous multinomial treatment. The test is based on Two Stage Residual Inclusion (2SRI). Results from a broad Monte Carlo study provide novel evidence on important features of this approach in nonlinear settings. We find differences in the finite sample performance of various likelihood-based tests under correct specification and when the outcome equation is misspecified due to neglected over-dispersion or non-linearity. We compare alternative 2SRI procedures and uncover that standardizing the variance of the first stage residuals leads to higher power of the test and reduces the bias of the treatment coefficients. An original application in health economics corroborates our findings.
    JEL: C12 C31 C35 I11
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:bol:bodewp:wp921&r=ecm
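    As a schematic illustration of the 2SRI exogeneity test described in the abstract above, the following Python sketch (simulated data; all variable names hypothetical) fits a multinomial logit first stage, includes the standardized first-stage residuals in a second-stage Poisson model, and applies a joint Wald test on the residual coefficients:
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      z = rng.normal(size=n)                      # instrument
      x = rng.normal(size=n)                      # exogenous covariate
      u = rng.normal(size=n)                      # unobservable driving endogeneity
      # Endogenous three-category treatment from a latent index
      idx = np.column_stack([np.zeros(n), 0.8 * z + u, -0.5 * z + u])
      d = idx.argmax(axis=1)
      D = np.column_stack([d == 1, d == 2]).astype(float)
      # Count outcome depending on the treatment and the same unobservable
      y = rng.poisson(np.exp(0.2 + 0.5 * D[:, 0] - 0.3 * D[:, 1] + 0.4 * x + 0.5 * u))
      # Stage 1: multinomial logit; residuals = indicators minus fitted probabilities
      X1 = sm.add_constant(np.column_stack([z, x]))
      fs = sm.MNLogit(d, X1).fit(disp=0)
      res = D - fs.predict(X1)[:, 1:]
      res = (res - res.mean(0)) / res.std(0)      # standardized, as the paper recommends
      # Stage 2: Poisson with residual inclusion; exogeneity = residuals irrelevant
      ss = sm.Poisson(y, sm.add_constant(np.column_stack([D, x, res]))).fit(disp=0)
      print(ss.wald_test("x4 = 0, x5 = 0", use_f=False))  # joint test on residual terms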
  4. By: Feir, Donna; Lemieux, Thomas; Marmer, Vadim
    Abstract: This paper reports the results of a Monte Carlo simulation study accompanying Marmer, Feir, and Lemieux, "Weak Identification in Fuzzy Regression Discontinuity Designs".
    Keywords: Nonparametric inference; treatment effect; size distortions; Anderson-Rubin test; robust confidence set; Monte Carlo simulations
    Date: 2014–01–23
    URL: http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer-2014-3&r=ecm
  5. By: Helmut Lütkepohl; Anton Velinov
    Abstract: Long-run restrictions have been used extensively for identifying structural shocks in vector autoregressive (VAR) analysis. Such restrictions are typically just-identifying but can be checked by utilizing changes in volatility. This paper reviews and contrasts the volatility models that have been used for this purpose. Three main approaches have been used: exogenously generated changes in the unconditional residual covariance matrix, changing volatility modelled by a Markov switching mechanism, and multivariate generalized autoregressive conditional heteroskedasticity (GARCH) models. Using changes in volatility for checking long-run identifying restrictions in structural VAR analysis is illustrated by reconsidering models for identifying fundamental components of stock prices.
    Keywords: Vector autoregression, heteroskedasticity, vector GARCH, conditional heteroskedasticity, Markov switching model
    JEL: C32
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1356&r=ecm
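    As background, the long-run restrictions being checked are of the Blanchard-Quah type. A minimal sketch of how such a just-identified structural model is computed from a reduced-form VAR (simulated data; the heteroskedasticity-based check itself, which is the paper's subject, is beyond this sketch):
      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(1)
      y = rng.normal(size=(500, 2))               # toy bivariate system (simulated)
      res = VAR(y).fit(maxlags=2)
      sigma = res.sigma_u                         # reduced-form residual covariance
      A1 = np.eye(2) - sum(res.coefs)             # I - A_1 - ... - A_p
      theta1 = np.linalg.inv(A1)                  # long-run multiplier Theta(1)
      # Long-run restriction: Theta(1) B lower triangular, with B B' = Sigma
      B = A1 @ np.linalg.cholesky(theta1 @ sigma @ theta1.T)
      print(B)                                    # just-identified impact matrix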
  6. By: Gribisch, Bastian
    Abstract: This paper proposes a latent dynamic factor model for low- as well as high-dimensional realized covariance matrices of stock returns. The approach is based on the matrix logarithm and allows for flexible dynamic dependence patterns by combining common latent factors driven by HAR dynamics and idiosyncratic AR(1) factors. The model accounts for symmetry and positive definiteness of covariance matrices without imposing parametric restrictions. Simulated Bayesian parameter estimates as well as positive definite (co)variance forecasts are obtained using Markov Chain Monte Carlo (MCMC) methods. An empirical application to 5-dimensional and 30-dimensional realized covariance matrices of daily New York Stock Exchange (NYSE) stock returns shows that the model outperforms other approaches of the extant literature both in-sample and out-of-sample.
    JEL: C32 C58 G17
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc13:79823&r=ecm
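    The matrix-logarithm device at the heart of the model can be illustrated in a few lines: dynamics are specified freely for the matrix logarithm of the covariance matrix, and mapping back with the matrix exponential guarantees symmetry and positive definiteness. A schematic sketch with simulated data and placeholder dynamics:
      import numpy as np
      from scipy.linalg import expm, logm

      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 5))
      S = X.T @ X / len(X)                 # a (simulated) realized covariance matrix
      L = logm(S)                          # symmetric but otherwise unrestricted
      L_fc = 0.9 * L                       # placeholder for HAR/AR(1) factor dynamics
      S_fc = expm(L_fc)                    # forecast: positive definite by construction
      print(np.linalg.eigvalsh(S_fc).min() > 0)   # True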
  7. By: Czudaj, Robert; Hanck, Christoph
    Abstract: This paper argues that typical applications of panel unit root tests should take into account possible nonstationarity in the volatility process of the innovations of the panel time series. Nonstationary volatility arises, for instance, when there are structural breaks in the innovation variances. A prominent example is the reduction in GDP growth variances enjoyed by many industrialized countries, known as the `Great Moderation.' The paper also proposes a new testing approach for panel unit roots that is, unlike many previously suggested tests, robust to such volatility processes. The panel test is based on Simes' [Biometrika 1986, "An Improved Bonferroni Procedure for Multiple Tests of Significance"] classical multiple test, which combines evidence from time series unit root tests of the series in the panel. As time series unit root tests, we employ recently proposed tests of Cavaliere and Taylor [Journal of Time Series Analysis 2008b, "Time-Transformed Unit Root Tests for Models with Non-Stationary Volatility"]. The panel test is robust to general patterns of cross-sectional dependence and yet is straightforward to implement, requiring only valid p-values of time series unit root tests and no resampling. Monte Carlo experiments show that other panel unit root tests suffer from sometimes severe size distortions in the presence of nonstationary volatility, and that this defect can be remedied using the test proposed here. We use the methods developed here to test for unit roots in OECD panels of gross domestic products and inflation rates, yielding inference robust to the `Great Moderation.' We find little evidence of trend stationarity, and mixed evidence regarding inflation stationarity.
    JEL: C12 C23 E31
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc13:79734&r=ecm
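    Simes' classical multiple test, on which the panel test is built, is simple to state: given the ordered p-values p_(1) <= ... <= p_(N) of the individual unit root tests, reject the joint unit root null at level alpha if p_(i) <= i*alpha/N for some i. A minimal sketch (hypothetical p-values):
      import numpy as np

      def simes_reject(pvals, alpha=0.05):
          # Reject the joint null if p_(i) <= i * alpha / N for some i
          p = np.sort(np.asarray(pvals))
          n = len(p)
          return bool((p <= np.arange(1, n + 1) * alpha / n).any())

      # Hypothetical p-values from unit root tests on the N series of a panel
      print(simes_reject([0.004, 0.03, 0.20, 0.55, 0.81]))   # True: joint null rejected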
  8. By: Sacht, Stephen
    Abstract: In this paper we conduct a sensitivity analysis of two types of prior information considered within the Bayesian estimation of a standard hybrid New-Keynesian model. In particular, we shed light on the impact of micro- and macropriors on the estimation outcome. First, we investigate the impact of transforming those model parameters which are bounded to the unit interval, in order to allow for a more diffuse prior distribution. Second, we combine the Moment-Matching (MM, Franke et al. (2012)) and Bayesian techniques in order to evaluate macropriors. In this respect we define a two-stage estimation procedure - the so-called Moment-Matching based Bayesian (MoMBay) estimation approach - where we take the point estimates evaluated via MM and consider them as prior mean values of the parameters within Bayesian estimation. We show that while (transformed) micropriors are often used in the literature, applying macropriors evaluated via the MoMBay approach leads to a better fit of the structural model to the data. Furthermore, there is evidence for intrinsic (degree of price indexation) rather than extrinsic (autocorrelation in the shock process) persistence - an observation which contradicts the results documented in the recent literature.
    Keywords: Bayesian estimation, moment-matching estimation, MoMBay estimation, New-Keynesian model, micropriors, macropriors
    JEL: C11 C32 C52 E3
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:201404&r=ecm
  9. By: Maurice J.G. Bun; Teresa D. Harrison
    Abstract: We analyze a class of linear regression models including interactions of endogenous regressors and exogenous covariates. We show that, under typical conditions regarding higher-order dependencies between endogenous and exogenous regressors, the OLS estimator of the coefficient of the interaction term is consistent and asymptotically normally distributed. Although not a necessary condition, we demonstrate that multivariate symmetrically distributed data are sufficient for OLS consistency. We propose a Wald test for the validity of these higher-order moment conditions. Applying heteroskedasticity-consistent covariance matrix estimators, we then show that standard inference based on OLS is valid for the coefficient of the interaction term. Furthermore, we analyze several IV estimators and conclude that an implementation exploiting instruments interacted with the exogenous part of the interaction term is to be preferred. Using our theoretical results, we confirm recent empirical findings on the nonlinear causal relation between financial development and economic growth.
    Date: 2014–01–28
    URL: http://d.repec.org/n?u=RePEc:ame:wpaper:1402&r=ecm
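    The OLS result is easy to reproduce in a stylized simulation: with jointly symmetric regressors, OLS recovers the interaction coefficient even though the level coefficient of the endogenous regressor remains biased, and heteroskedasticity-consistent standard errors deliver valid inference on the interaction term. A sketch (simulated data, hypothetical names):
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 5000
      w = rng.normal(size=n)                      # exogenous covariate
      u = rng.normal(size=n)                      # error correlated with x
      x = rng.normal(size=n) + u                  # endogenous regressor
      y = 1 + 0.5 * x + 0.3 * w + 0.4 * x * w + u + rng.normal(size=n)
      df = pd.DataFrame({"y": y, "x": x, "w": w})
      res = smf.ols("y ~ x + w + x:w", data=df).fit(cov_type="HC1")
      print(res.params["x:w"])                    # near 0.4, while "x" itself is biased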
  10. By: Heni Boubaker; Nadia Sghaier
    Abstract: This paper proposes a new class of semiparametric generalized long memory models with FIAPARCH errors (SEMIGARMA-FIAPARCH model) that extends the conventional GARMA model to incorporate a nonlinear deterministic trend in the mean equation and to allow for time-varying volatility in the conditional variance equation. The parameters of this model are estimated in a wavelet domain. We provide an empirical application of this model to examine the dynamics of stock market returns in six GCC countries. The empirical results show that the proposed model offers an interesting framework to describe the seasonal long-range dependence and the nonlinear deterministic trend in the returns, as well as persistence to shocks in the conditional volatility. We also compare its predictive performance to that of the traditional long memory model with FIAPARCH errors (FARMA-FIAPARCH model). The predictive results indicate that the proposed model outperforms the FARMA-FIAPARCH model.
    Keywords: semiparametric generalized long memory process, FIAPARCH errors, wavelet domain, stock market returns.
    JEL: C13 C22 C32 G15
    Date: 2014–01–06
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2014-25&r=ecm
  11. By: Bet Caeyers
    Abstract: This paper formalises a previously unproven source of ordinary least squares (OLS) estimation bias in standard linear-in-means peer effects models. I derive a formula for the magnitude of the bias and discuss its underlying parameters. I show the conditions under which the bias is aggravated in models adding cluster fixed effects and demonstrate how it affects inference and interpretation of estimation results. Further, I show that two-stage least squares (2SLS) estimation strategies eliminate the bias and provide illustrative simulations. The results may explain some counter-intuitive findings in the social interaction literature, such as the observation of OLS estimates of endogenous peer effects that are larger than their 2SLS counterparts.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:csa:wpaper:2014-05&r=ecm
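    The mechanics behind the exclusion bias can be made concrete: because an individual is excluded from her own peer-group average, the within-group (fixed-effects) variation of the leave-one-out peer mean is an exact negative multiple of the individual's own variation. A sketch with simulated groups:
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(4)
      m = 10                                      # group size
      df = pd.DataFrame({
          "group": np.repeat(np.arange(50), m),   # 50 groups of 10
          "x": rng.normal(size=50 * m),
      })
      g = df.groupby("group")["x"]
      # Leave-one-out peer mean: (group sum - own value) / (group size - 1)
      df["peer"] = (g.transform("sum") - df["x"]) / (m - 1)
      # Within groups (i.e. once group fixed effects are partialled out),
      # peer - mean = -(x - mean)/(m - 1), so the correlation is exactly -1
      dx = df["x"] - g.transform("mean")
      dp = df["peer"] - df.groupby("group")["peer"].transform("mean")
      print(dx.corr(dp))                          # -1.0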
  12. By: Olivier Ledoit; Michael Wolf
    Abstract: Markowitz (1952) portfolio selection requires estimates of (i) the vector of expected returns and (ii) the covariance matrix of returns. Many proposals to address the first question exist already. This paper addresses the second question. We promote a new nonlinear shrinkage estimator of the covariance matrix that is more flexible than previous linear shrinkage estimators and has 'just the right number' of free parameters (that is, the Goldilocks principle). In a stylized setting, the nonlinear shrinkage estimator is asymptotically optimal for portfolio selection. In addition to theoretical analysis, we establish superior real-life performance of our new estimator using backtest exercises.
    Keywords: Large-dimensional asymptotics, Markowitz portfolio selection, nonlinear shrinkage
    JEL: C13 G11
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:137&r=ecm
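    For readers wanting a baseline, the authors' earlier linear shrinkage estimator is available off the shelf in scikit-learn; the nonlinear estimator promoted in this paper instead shrinks each sample eigenvalue individually rather than applying one common shrinkage intensity. A sketch with simulated returns and more assets than observations:
      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(5)
      X = rng.normal(size=(60, 100))       # N = 60 observations of p = 100 assets
      lw = LedoitWolf().fit(X)             # linear shrinkage towards a scaled identity
      S = lw.covariance_                   # invertible even though p > N
      print(lw.shrinkage_, np.linalg.cond(S))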
  13. By: Paparoditis, Efstathios; Politis, Dimitris N
    Abstract: It is shown that the limiting distribution of the augmented Dickey-Fuller (ADF) test under the null hypothesis of a unit root is valid under a very general set of assumptions that goes far beyond the linear AR(∞) process assumption typically imposed. In essence, all that is required is that the error process driving the random walk possesses a spectral density that is strictly positive. Given that many economic time series are nonlinear, this extended result may have important applications. Furthermore, under the same weak assumptions, the limiting distribution of the ADF test is derived under the alternative of stationarity, and a theoretical explanation is given for the well-known empirical fact that the test's power is a decreasing function of the autoregressive order p used in the augmented regression equation. The intuitive reason for the reduced power of the ADF test as p tends to infinity is that the p regressors become asymptotically collinear.
    Keywords: Autoregressive Representation, Hypothesis Testing, Integrated Series, Unit Root
    Date: 2013–12–01
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt0784p55m&r=ecm
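    The power result is easy to visualize in simulation: under a stationary AR(1) alternative, the ADF rejection frequency falls as the lag order p of the augmented regression grows. A sketch (simulated data; lag orders and sample size chosen for illustration):
      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(6)
      rejections = {p: 0 for p in (1, 4, 12)}
      for _ in range(200):
          y = np.empty(200)
          y[0] = rng.normal()
          for t in range(1, 200):                 # stationary AR(1), phi = 0.9
              y[t] = 0.9 * y[t - 1] + rng.normal()
          for p in rejections:
              stat, pval, *_ = adfuller(y, maxlag=p, autolag=None)
              rejections[p] += pval < 0.05
      print({p: r / 200 for p, r in rejections.items()})   # power declines in p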
  14. By: Esmeralda Ramalho (Department of Economics and CEFAGE-UE, Universidade de Évora); Joaquim Ramalho (Department of Economics and CEFAGE-UE, Universidade de Évora)
    Abstract: Hedonic methods are a prominent approach in the construction of quality-adjusted price indexes. This paper shows that the process of computing such indexes is substantially simplified if arithmetic (geometric) price indexes are computed based on exponential (log-linear) hedonic functions estimated by the Poisson pseudo maximum likelihood (ordinary least squares) method. A Monte Carlo simulation study based on housing data illustrates the convenience of the links identified and the very attractive properties of the Poisson estimator in the hedonic framework.
    Keywords: Hedonic price indexes; Quality adjustment; Retransformation; House prices; Exponential regression; Poisson pseudo maximum likelihood.
    JEL: C43 C51 E31 R31
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:cfe:wpcefa:2014_01&r=ecm
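    The Poisson pseudo maximum likelihood (PPML) estimation of an exponential hedonic function amounts to a Poisson GLM with a log link applied to the continuous, positive price; no count data are involved. A sketch with simulated, hypothetical housing characteristics:
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 500
      area = rng.uniform(50, 200, size=n)         # hypothetical characteristics
      rooms = rng.integers(1, 6, size=n)
      price = np.exp(10 + 0.006 * area + 0.08 * rooms
                     + rng.normal(scale=0.2, size=n))
      X = sm.add_constant(np.column_stack([area, rooms]))
      # PPML: Poisson GLM with log link, valid for any positive outcome
      ppml = sm.GLM(price, X, family=sm.families.Poisson()).fit(cov_type="HC1")
      print(ppml.params)                          # semi-elasticities of price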
  15. By: Michał Brzeziński (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The distribution of impact factors has been modeled in the recent informetric literature using the two-exponent law proposed by Mansilla et al. (2007). This paper shows that two distributions widely used in economics, namely the Dagum and Singh-Maddala models, possess several advantages over the two-exponent model. Compared to the latter, the former fit data on impact factors in eight important scientific fields as well as or slightly better. In contrast to the two-exponent model, both proposed distributions have closed-form probability density functions and cumulative distribution functions, which facilitates fitting these distributions to data and deriving their statistical properties.
    Keywords: impact factor, two-exponent law, Dagum model, Singh-Maddala model, maximum likelihood estimation, model selection
    JEL: A12 C46 C52
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2014-01&r=ecm
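    Both proposed models are readily fitted by maximum likelihood with standard software: in scipy's parametrization, Singh-Maddala is the Burr XII distribution ("burr12") and Dagum is the Burr III distribution ("burr"). A sketch with simulated impact-factor-like data:
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      # Hypothetical positive data, here simulated from a Singh-Maddala law
      data = stats.burr12.rvs(c=2.0, d=1.5, scale=2.0, size=1000, random_state=rng)
      for name, dist in [("Singh-Maddala", stats.burr12), ("Dagum", stats.burr)]:
          c, d, loc, scale = dist.fit(data, floc=0)      # MLE, location fixed at 0
          ll = dist.logpdf(data, c, d, loc=loc, scale=scale).sum()
          print(f"{name}: log-likelihood = {ll:.1f}")    # compare fits, e.g. via AIC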
  16. By: John Goddard; Enrico Onali
    Abstract: We test for departures from normal and independent and identically distributed (NIID) returns, when returns under the alternative hypothesis are self-affine. Self-affine returns are either fractionally integrated and long-range dependent, or drawn randomly from an L-stable distribution with infinite higher-order moments. The finite sample performance of estimators of the two forms of self-affinity is explored in a simulation study which demonstrates that, unlike rescaled range analysis and other conventional estimation methods, the variant of fluctuation analysis that considers finite sample moments only is able to identify either form of self-affinity. However, when returns are self-affine and long-range dependent under the alternative hypothesis, rescaled range analysis has greater power than fluctuation analysis. The finite-sample properties of the estimators when returns exhibit either form of self-affinity can be exploited to determine the source of self-affinity in empirical returns data. The techniques are illustrated by means of an analysis of the fractal properties of the daily logarithmic returns for the indices of 11 stock markets.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1401.7170&r=ecm
  17. By: Gianluca Stefani (Dipartimento di Scienza per l'Economia e l'Impresa); Marco Tiberti (Dipartimento di Scienza per l'Economia e l'Impresa)
    Abstract: This work deals with methodological and empirical issues related to OLS estimators of multiperiod optimal hedge ratios. We propose an analytical formula for the multiperiod minimum variance hedge ratio starting from the triangular representation of a cointegrated system DGP. Since estimating the hedge ratio by matching the frequency of the data with the hedging horizon leads to a sample size reduction problem, we carry out a Monte Carlo study to investigate the pattern and hedging efficiency of OLS hedge ratios based on overlapping versus non-overlapping observations, exploring a range of hedging horizons and sample sizes. Finally, we apply our approach to real data for a cross hedge involving soft wheat.
    Keywords: Future prices, Hedging, Monte Carlo, Soft wheat
    JEL: C58 G13
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:frz:wpaper:wp2013_29.rdf&r=ecm
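    The overlapping versus non-overlapping design explored in the Monte Carlo study can be sketched directly: for a hedging horizon of h periods, the textbook OLS hedge ratio is the slope of h-period spot price changes on h-period futures price changes, computed either from all overlapping windows or from every h-th one. A sketch with a simulated cointegrated spot-futures pair:
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      T, h = 600, 4                               # e.g. weekly data, 4-week horizon
      f = np.cumsum(rng.normal(size=T))           # futures price (random walk)
      s = 0.8 * f + rng.normal(scale=0.5, size=T) # cointegrated spot price
      ds, df_ = s[h:] - s[:-h], f[h:] - f[:-h]    # h-period price changes
      for label, step in [("overlapping", 1), ("non-overlapping", h)]:
          beta = sm.OLS(ds[::step], sm.add_constant(df_[::step])).fit().params[1]
          print(label, round(beta, 3))            # both near the true ratio 0.8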
  18. By: Arnab Bhattacharjee (Heriot-Watt University); Liqian Cai (Michigan State University); Taps Maiti (Michigan State University)
    Abstract: We develop a method for estimating the functional surface of a regression coefficient that varies over a complex spatial domain with irregular boundaries, peninsulas and interior holes. The method is motivated by, and applied to, data on housing markets, where the central object of inference is estimation of spatially varying effects of living space on house prices. For this purpose, we extend a method of spline smoothing over an irregular domain to the functional regression model. Spatially varying coefficients for a specific regressor are estimated by a combination of three smoothing problems, allowing for additional regressors with spatially fixed coefficients. The estimates adapt well to the irregular and complex spatial domain. Implicit prices for living space vary spatially, being high in the city centre and other desirable locations, and declining towards the periphery along gradients determined by major roads.
    Keywords: Delaunay triangulation, Finite element, Housing markets, Spatial functional regression, Spline smoothing
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:hwe:seecdp:1301&r=ecm
  19. By: Arnab Bhattacharjee (Heriot-Watt University); Eduardo Castro (University of Aveiro); Taps Maiti (Michigan State University); João Marques (University of Aveiro)
    Abstract: The definition of housing submarkets is important at both the conceptual and empirical levels. In the housing studies literature, submarkets have been defined according to three different criteria: i) similarity in hedonic housing characteristics; ii) similarity in hedonic prices; iii) substitutability of housing units. We argue that the simultaneous fulfilment of criteria i) and ii) is a sufficient condition for criterion iii) to be fulfilled. Criterion i) is directly observable, while criterion ii) can be checked by a model able to detect and analyse spatial heterogeneity in the shadow prices. Here, we propose a new framework based on a synthesis of spatial econometrics, functional data analysis (FDA) and geographically weighted regression (GWR). The framework is applied to a hedonic regression model where the dependent variable is the logarithm of house prices per square meter and housing features are regressors. Thus, we delineate submarkets by clustering (jointly) on the surfaces of the estimated functional partial effects and housing features. The above model addresses two main limitations of previous approaches. First, endogeneity in spatial structure can be incorporated in the model. Second, the framework does not require delineation of housing submarkets a priori. Application to the housing market of the Aveiro-Ílhavo urban conglomeration in Portugal implies submarkets that emphasize the historical and endogenous evolution of the urban spatial structure.
    Keywords: Spatial heterogeneity, Submarkets, Spatial lag model, Geographically weighted regression, Functional data analysis
    JEL: C21 R31 C51
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:hwe:seecdp:1403&r=ecm
  20. By: M. Duembgen; L. C. G. Rogers
    Abstract: In the econometrics of financial time series, it is customary to take some parametric model for the data, and then estimate the parameters from historical data. This approach suffers from several problems. Firstly, how is estimation error to be quantified, and then taken into account when making statements about the future behaviour of the observed time series? Secondly, decisions may be taken today committing to future actions over some quite long horizon, as in the trading of derivatives; if the model is re-estimated at some intermediate time, our earlier decisions would need to be revised - but the derivative has already been traded at the earlier price. Thirdly, the exact form of the parametric model to be used is generally taken as given at the outset; other competitor models might possibly work better in some circumstances, but the methodology does not allow them to be factored into the inference. What we propose here is a very simple (Bayesian) alternative approach to inference and action in financial econometrics which deals decisively with all these issues. The key feature is that nothing is being estimated.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1401.5666&r=ecm
  21. By: Marcus Scheiblecker (WIFO)
    Abstract: In seasonal adjustment one has to decide whether to seasonally adjust an aggregate like GDP directly or to sum up its seasonally adjusted components. This choice is usually driven by subjective motives or practical convenience. In the case of seasonal adjustment with chain-linked data, one might feel forced to use the direct approach, as components do not even add up to aggregates before the adjustment. This paper presents a guide for practitioners that recommends a more objective way of making this decision, proposing several indicator-based criteria that can facilitate the choice between the direct and the indirect approach. For the case of chain-linked series, where the indirect approach seems infeasible because components do not add up to an aggregate, the paper presents a method by which the indirect approach of seasonal adjustment can nevertheless be applied. Finally, it deals with a possible balancing process between the results of the direct and the indirect approach, and a practical application example is given.
    Keywords: Seasonal adjustment, direct indirect method, chain-linking
    Date: 2014–01–29
    URL: http://d.repec.org/n?u=RePEc:wfo:wpaper:y:2014:i:460&r=ecm
  22. By: Sacht, Stephen; Franke, Reiner; Jang, Tae-Seok
    Abstract: The paper considers an elementary New-Keynesian three equation model and compares its Bayesian estimation to the results from the method of moments (MM), which seeks to match a finite set of the model-generated second moments of inflation, output and the interest rate to their empirical counterparts. It is found that in the Great Inflation (GI) period - though not in the Great Moderation (GM) - the two estimations imply a significantly different covariance structure. Regarding the parameters, special emphasis is placed on the degree of backward-looking behaviour in the Phillips curve. While, in line with much of the literature, it plays a minor role in the Bayesian estimations, MM yields values of the price indexation parameter close to or even at its maximal value of unity. For both GI and GM, these results are worth noticing since, in (strong or, respectively, weak) contrast to the Bayesian parameters, the covariance matching thus achieved is entirely satisfactory.
    JEL: C52 E32 E37
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc13:79694&r=ecm
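    In spirit, the moment-matching estimation used here chooses the model parameters to minimize a distance between model-implied and empirical second moments. A deliberately simplified sketch, with a univariate AR(1) standing in for the three-equation model and autocovariances as the matched moments:
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(10)
      y = np.empty(500)
      y[0] = rng.normal()
      for t in range(1, 500):                     # simulated "data": AR(1), rho = 0.8
          y[t] = 0.8 * y[t - 1] + rng.normal()
      lags = np.arange(8)
      yc = y - y.mean()
      emp = np.array([(yc[k:] * yc[:len(y) - k]).mean() for k in lags])

      def gap(theta):                             # model-implied minus empirical moments
          rho, sig2 = theta
          return np.sum((sig2 / (1 - rho**2) * rho**lags - emp) ** 2)

      fit = minimize(gap, x0=[0.5, 1.0], bounds=[(-0.99, 0.99), (1e-6, None)])
      print(fit.x)                                # close to (0.8, 1.0)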
  23. By: Reicher, Christopher Phillip
    Abstract: In this paper I discuss the estimation of the process governing the structural shocks (or wedges) to a DSGE model, arguing that a well-specified model would satisfy certain sets of moment conditions. Based on tests for overidentifying restrictions, I compare three specifications of the Taylor rule within a simple New Keynesian model. I find that a rule which allows for the Fed to respond to four lags of inflation shows less evidence of misspecification than one where the Fed responds only to contemporaneous inflation. Raising the coefficient on the output gap to 1 instead of 0.5 gives more ambiguous results.
    JEL: C12 C22 E52
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc13:79955&r=ecm
  24. By: Laurent Davezies; Jean-Marc Robin (Département d'économie)
    Abstract: This thesis consists of four independent pieces of work. The first concerns partially identified models, that is, models in which the value of the parameter of interest cannot be deduced from the distribution of the data and the assumptions of the model. In some situations, no value of the parameter of interest, or on the contrary several values, are compatible with the data and the model's assumptions. Among other results, this work shows that if the set of probability distributions compatible with the model is convex, then the extreme points of this convex set characterise the set of distributions compatible with the model. The second piece of work proposes a method based on an exclusion restriction to correct for endogenous attrition in panels. We apply this method to estimate labour market transitions from the French labour force survey. The third proposes a simple method for estimating a logistic model with fixed effects and state dependence, as studied by Honoré and Kyriazidou. It also proposes a new estimator of the standard errors which appears to have better finite-sample properties. The fourth is an evaluation, at the level of lower secondary schools, of the Réseaux-Ambition-Réussite educational policy launched in 2006. We exploit a discontinuity in the selection of schools to compare schools that were "identical" before the policy was put in place. The results of this evaluation leave little room for optimism regarding the effectiveness of this policy.
    Keywords: Partial identification, Attrition, State dependence, Policy evaluation
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/6o65lgig8d0qcro9p14826c84&r=ecm

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.