nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒04‒18
twenty papers chosen by
Sune Karlsson
Örebro University

  1. Testing for Multiple Structural Changes in Cointegrated Regression Models By Mohitosh Kejriwal; Pierre Perron
  2. Bayesian Estimation of Unknown Regression Error Heteroscedasticity By Hiroaki Chigira; Tsunemasa Shiba
  3. A Simple Panel Stationarity Test in the Presence of Cross-Sectional Dependence By Hadri, Kaddour; Kurozumi, Eiji
  4. A General Framework for Observation Driven Time-Varying Parameter Models By Drew Creal; Siem Jan Koopman; Andre Lucas
  5. Spatial Dynamic Panel Model and System GMM: A Monte Carlo Investigation By Kukenova, Madina; Monteiro, Jose-Antonio
  6. Combination of multivariate volatility forecasts By Alessandra Amendola; Giuseppe Storti
  7. Bayesian and Frequentist Inference in Partially Identified Models By Hyungsik Roger Moon; Frank Schorfheide
  8. Multivariate Realised Kernels: Consistent Positive Semi-Definite Estimators of the Covariation of Equity Prices with Noise and Non-Synchronous Trading By Ole E. Barndorff-Nielsen; Peter Reinhard Hansen; Asger Lunde; Neil Shephard
  9. Optimal Dimension of Transition Probability Matrices for Markov Chain Bootstrapping By Roy Cerqueti; Paolo Falbo; Cristian Pelizzari
  10. A Practitioner's Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models By Andrew Ching; Susumu Imai; Masakazu Ishihara; Neelam Jain
  11. Properties of Hierarchical Archimedean Copulas By Ostap Okhrin; Yarema Okhrin; Wolfgang Schmid
  12. Model Selection Criteria for the Leads-and-Lags Cointegrating Regression By Choi, In; Kurozumi, Eiji
  13. Information Loss in Volatility Measurement with Flat Price Trading By Peter C. B. Phillips; Jun Yu
  14. On the Existence of the Moments of the Asymptotic Trace Statistic By Deniz Dilan Karaman Örsal; Bernd Droge
  15. Structural Breaks, Regime Change and Asymmetric Adjustment: A Short and Long Run Global Approach to the Output/Unemployment Dynamics By Mendonca, Gui Pedro
  16. A Joint Analysis of the KOSPI 200 Option and ODAX Option Markets Dynamics By Ji Cao; Wolfgang Härdle; Julius Mungo
  17. Who Drives the Market? Estimating a Heterogeneous Agent-based Financial Market Model Using a Neural Network Approach By Klein, Achim; Urbig, Diemo; Kirn, Stefan
  18. Anticipated Alternative Instrument-Rate Paths in Policy Simulations By Stefan Laséen; Lars E.O. Svensson
  19. The propagation of regional recessions By James D. Hamilton; Michael T. Owyang
  20. The Econometrics of Social Networks By Yann Bramoullé; Bernard Fortin

  1. By: Mohitosh Kejriwal; Pierre Perron
    Abstract: This paper considers issues related to testing for multiple structural changes in cointegrated systems. We derive the limiting distribution of the Sup-Wald test under mild conditions on the errors and regressors for a variety of testing problems. We show that even if the coefficients of the integrated regressors are held fixed but the intercept is allowed to change, the limit distributions are not the same as would prevail in a stationary framework. Including stationary regressors whose coefficients are not allowed to change does not affect the limiting distribution of the tests under the null hypothesis. We also propose a procedure that allows one to test the null hypothesis of, say, k changes, versus the alternative hypothesis of k + 1 changes. This sequential procedure is useful in that it permits consistent estimation of the number of breaks present. We show via simulations that our tests maintain the correct size in finite samples and are much more powerful than the commonly used LM tests, which suffer from important problems of non-monotonic power in the presence of serial correlation in the errors.
    Keywords: change-point, sequential procedure, wald tests, unit roots, cointegration
    JEL: C22
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:pur:prukra:1216&r=ecm
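    Illustration: the entry above tests for breaks via sup-Wald statistics. The following is a minimal Python sketch of a sup-Wald (Chow-type) test for a single break in a simple regression with simulated data; it is not the authors' cointegrated-system procedure, and the trimming fraction and data-generating process are illustrative assumptions (critical values in the cointegration setting differ, as the abstract stresses).
      import numpy as np

      def sup_wald_single_break(y, X, trim=0.15):
          # Chow/Wald-type statistic for a one-time shift in all coefficients of y = X b + e,
          # maximised over candidate break dates in the trimmed middle of the sample.
          T, k = X.shape
          lo, hi = int(trim * T), int((1 - trim) * T)
          b0 = np.linalg.lstsq(X, y, rcond=None)[0]
          e0 = y - X @ b0
          ssr0 = e0 @ e0                              # restricted (no-break) SSR
          stats = []
          for tb in range(lo, hi):
              e1 = y[:tb] - X[:tb] @ np.linalg.lstsq(X[:tb], y[:tb], rcond=None)[0]
              e2 = y[tb:] - X[tb:] @ np.linalg.lstsq(X[tb:], y[tb:], rcond=None)[0]
              ssr1 = e1 @ e1 + e2 @ e2                # unrestricted SSR with a break at tb
              stats.append((T - 2 * k) * (ssr0 - ssr1) / ssr1)
          return max(stats)

      rng = np.random.default_rng(0)
      T = 200
      x = rng.standard_normal(T)
      shift = np.r_[np.zeros(T // 2), np.ones(T - T // 2)]  # intercept shift at mid-sample
      y = 1.0 + 0.5 * x + shift + rng.standard_normal(T)
      X = np.column_stack([np.ones(T), x])
      print("sup-Wald statistic:", round(sup_wald_single_break(y, X), 2))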
  2. By: Hiroaki Chigira; Tsunemasa Shiba
    Abstract: We propose a Bayesian procedure to estimate heteroscedastic variances of the regression error term, ω, when the form of heteroscedasticity is unknown. The prior information on ω is elicited from the well-known Eicker-White Heteroscedasticity Consistent Variance-Covariance Matrix Estimator. A Markov chain Monte Carlo algorithm is used to simulate posterior pdfs of the unknown elements of ω. In addition to the numerical examples, we present an empirical investigation of the stock prices of Japanese pharmaceutical and biomedical companies to demonstrate the usefulness of the proposed method.
    Keywords: Eicker-White HCCM, orthogonal regressors, informative prior pdf's, MCMC, stock return variance
    JEL: C11 C13
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-051&r=ecm
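    Illustration: the prior in the entry above is built from the Eicker-White heteroscedasticity-consistent covariance matrix. Below is a minimal Python sketch of that frequentist HC0 estimator itself on simulated data, not of the authors' Bayesian procedure; the data-generating process is an illustrative assumption.
      import numpy as np

      def eicker_white_hc0(X, y):
          # OLS coefficients and the Eicker-White (HC0) covariance estimator
          # (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}.
          XtX_inv = np.linalg.inv(X.T @ X)
          beta = XtX_inv @ X.T @ y
          e = y - X @ beta                            # OLS residuals
          meat = X.T @ (X * e[:, None] ** 2)          # X' diag(e^2) X
          return beta, XtX_inv @ meat @ XtX_inv

      rng = np.random.default_rng(1)
      n = 500
      x = rng.standard_normal(n)
      y = 2 + 3 * x + rng.standard_normal(n) * (1 + np.abs(x))   # heteroscedastic errors
      X = np.column_stack([np.ones(n), x])
      beta, V = eicker_white_hc0(X, y)
      print("beta:", beta, "robust s.e.:", np.sqrt(np.diag(V)))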
  3. By: Hadri, Kaddour; Kurozumi, Eiji
    Abstract: This paper develops a simple test for the null hypothesis of stationarity in heterogeneous panel data with cross-sectional dependence in the form of a common factor in the disturbance. We do not estimate the common factor but mop up its effect by employing the same method as the one proposed in Pesaran (2007) in the unit root testing context. Our test is basically the same as the KPSS test, but the regression is augmented by the cross-sectional average of the observations. We also develop a Lagrange multiplier (LM) test allowing for cross-sectional dependence and, under restrictive assumptions, compare our augmented KPSS test with the extended LM test under the null of stationarity, under the local alternative and under the fixed alternative, and discuss the differences between these two tests. We also extend our test to the more realistic case where the shocks are serially correlated. We use Monte Carlo simulations to examine the finite-sample properties of the augmented KPSS test.
    Keywords: Panel data, stationarity, KPSS test, cross-sectional dependence, LM test, locally best test
    JEL: C12 C33
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:hit:ccesdp:7&r=ecm
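    Illustration: a minimal Python sketch of the idea behind the augmented KPSS test above: each unit's regression is augmented with the cross-sectional average to absorb a common factor, and a KPSS-type statistic is formed from the partial sums of the residuals. This is not the authors' exact statistic (no deterministic trend, no long-run variance correction for serial correlation, no panel pooling); data and parameters are illustrative assumptions.
      import numpy as np

      def augmented_kpss_panel(Y):
          # Per-unit KPSS-type statistics where each regression is augmented with the
          # cross-sectional average to soak up a common factor. Y is T x N.
          T, N = Y.shape
          ybar = Y.mean(axis=1)                       # cross-sectional average
          Z = np.column_stack([np.ones(T), ybar])     # intercept + cross-sectional average
          stats = []
          for i in range(N):
              e = Y[:, i] - Z @ np.linalg.lstsq(Z, Y[:, i], rcond=None)[0]
              S = np.cumsum(e)                        # partial sums of residuals
              sigma2 = e @ e / T                      # naive variance (assumes iid shocks)
              stats.append((S @ S) / (T ** 2 * sigma2))
          return np.array(stats)

      rng = np.random.default_rng(2)
      T, N = 200, 10
      f = rng.standard_normal(T)                      # common factor
      Y = f[:, None] * rng.uniform(0.5, 1.5, N) + rng.standard_normal((T, N))
      print("mean augmented KPSS-type statistic:", augmented_kpss_panel(Y).mean().round(3))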
  4. By: Drew Creal; Siem Jan Koopman; Andre Lucas
    Abstract: We propose a new class of observation driven time series models that we refer to as Generalized Autoregressive Score (GAS) models. The driving mechanism of the GAS model is the scaled likelihood score. This provides a unified and consistent framework for introducing time-varying parameters in a wide class of non-linear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity and single source of error models. In addition, the GAS specification gives rise to a wide range of new observation driven models. Examples include non-linear regression models with time-varying parameters, observation driven analogues of unobserved components time series models, multivariate point process models with time-varying parameters and pooling restrictions, new models for time-varying copula functions and models for time-varying higher order moments. We study the properties of GAS models and provide several non-trivial examples of their application.
    Keywords: dynamic models, time-varying parameters, non-linearity, exponential family, marked point processes, copulas
    JEL: C10 C22 C32 C51
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-038&r=ecm
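    Illustration: the driving mechanism of a GAS model is the scaled likelihood score. Below is a minimal Python sketch of a GAS(1,1) recursion for a time-varying variance under a Gaussian density, where inverse-Fisher-information scaling gives the scaled score s_t = y_t^2 - f_t and the update collapses to a GARCH(1,1)-type recursion; the parameter values are illustrative assumptions, not estimates.
      import numpy as np

      def gas_gaussian_variance(y, omega=0.1, A=0.1, B=0.85):
          # GAS(1,1) filter for a time-varying variance f_t under a Gaussian density:
          # f_{t+1} = omega + A * s_t + B * f_t, with scaled score s_t = y_t^2 - f_t.
          f = np.empty(len(y))
          f[0] = np.var(y)                            # initialise at the sample variance
          for t in range(len(y) - 1):
              s_t = y[t] ** 2 - f[t]                  # scaled score in the Gaussian case
              f[t + 1] = omega + A * s_t + B * f[t]
          return f

      rng = np.random.default_rng(3)
      y = rng.standard_normal(1000)
      print("filtered variance, last value:", gas_gaussian_variance(y)[-1].round(3))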
  5. By: Kukenova, Madina; Monteiro, Jose-Antonio
    Abstract: This paper investigates the finite sample properties of estimators for spatial dynamic panel models in the presence of several endogenous variables. So far, none of the available estimators in spatial econometrics allows one to consider spatial dynamic models with one or more endogenous variables. We propose to apply system GMM, since it can correct for the endogeneity of the dependent variable, the spatial lag as well as other potentially endogenous variables using internal and/or external instruments. The Monte Carlo investigation compares the performance of spatial MLE, spatial dynamic MLE (Elhorst (2005)), spatial dynamic QMLE (Yu et al. (2008)), LSDV, difference GMM (Arellano & Bond (1991)) and extended GMM (Arellano & Bover (1995), Blundell & Bond (1998)) in terms of bias, root mean squared error and standard-error accuracy. The results suggest that, in order to account for the endogeneity of several covariates, spatial dynamic panel models should be estimated using extended GMM. This is also important on practical grounds, because system GMM avoids the inversion of high-dimensional spatial weights matrices, which can be computationally unfeasible for large N and/or T.
    Keywords: Spatial Econometrics; Dynamic Panel Model; System GMM; Monte Carlo Simulations
    JEL: C15 C33
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:13404&r=ecm
  6. By: Alessandra Amendola; Giuseppe Storti
    Abstract: This paper proposes a novel approach to the combination of conditional covariance matrix forecasts based on the use of the Generalized Method of Moments (GMM). It is shown how the procedure can be generalized to deal with large dimensional systems by means of a two-step strategy. The finite sample properties of the GMM estimator of the combination weights are investigated by Monte Carlo simulations. Finally, in order to give an appraisal of the economic implications of the combined volatility predictor, the results of an application to tactical asset allocation are presented.
    Keywords: Multivariate GARCH, Forecast Combination, GMM, Portfolio Optimization
    JEL: C52 C53 C32 G11
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-007&r=ecm
  7. By: Hyungsik Roger Moon; Frank Schorfheide
    Abstract: A large sample approximation of the posterior distribution of partially identified structural parameters is derived for models that can be indexed by a finite-dimensional reduced form parameter vector. It is used to analyze the differences between frequentist confidence sets and Bayesian credible sets in partially identified models. A key difference is that frequentist set estimates extend beyond the boundaries of the identified set (conditional on the estimated reduced form parameter), whereas Bayesian credible sets can asymptotically be located in the interior of the identified set. Our asymptotic approximations are illustrated in the context of simple moment inequality models and a numerical illustration for a two-player entry game is provided.
    JEL: C11 C32 C35
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14882&r=ecm
  8. By: Ole E. Barndorff-Nielsen; Peter Reinhard Hansen; Asger Lunde; Neil Shephard
    Abstract: We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement noise of certain types, and can handle non-synchronous trading. It is the first estimator with all three of these properties, which are essential for empirical work in this area. We derive the large-sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on US equity data, comparing our results to previous work based on returns measured over 5- or 10-minute intervals. We show the new estimator is substantially more precise.
    Keywords: HAC estimator, Long run variance estimator, Market frictions, Quadratic variation, Realised variance
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-037&r=ecm
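    Illustration: a minimal univariate Python sketch of a realised kernel, weighting realised autocovariances with a Parzen kernel so that iid-noise bias largely cancels. The paper's estimator is multivariate and handles non-synchronous trading, which this toy on simulated, synchronised data does not attempt; the bandwidth H and noise level are illustrative assumptions.
      import numpy as np

      def parzen(x):
          # Parzen weight function commonly used for realised kernels.
          x = abs(x)
          if x <= 0.5:
              return 1 - 6 * x ** 2 + 6 * x ** 3
          if x <= 1.0:
              return 2 * (1 - x) ** 3
          return 0.0

      def realised_kernel(r, H):
          # sum_{h=-H}^{H} k(h/(H+1)) * gamma_h, gamma_h = h-th realised autocovariance.
          n = len(r)
          return sum(parzen(h / (H + 1)) * np.sum(r[abs(h):] * r[:n - abs(h)])
                     for h in range(-H, H + 1))

      rng = np.random.default_rng(4)
      n = 23400
      true_ret = rng.standard_normal(n) * 0.0002                           # efficient returns
      noisy_price = np.cumsum(true_ret) + rng.standard_normal(n) * 0.0005  # add market-microstructure noise
      r = np.diff(noisy_price)
      print("true IV approx.:", round(np.sum(true_ret ** 2), 5))
      print("RV (noisy):     ", round(np.sum(r ** 2), 5))
      print("realised kernel:", round(realised_kernel(r, H=30), 5))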
  9. By: Roy Cerqueti (University of Macerata); Paolo Falbo (University of Brescia); Cristian Pelizzari (University of Brescia)
    Abstract: While the bulk of the literature on Markov chain bootstrap methods (possibly of order higher than one) has focused on the correct estimation of the transition probabilities, little or no attention has been devoted to the problem of estimating the dimension of the transition probability matrix. Indeed, it is usual to assume that the Markov chain has a one-step memory and that the state space cannot be clustered and coincides with the distinct observed values. In this paper we question such a standard approach. In particular, we advance a method to jointly estimate the order of the Markov chain and identify a suitable clustering of the states. In several real-life applications the "memory" of a process extends well beyond the last observation; in those cases a correct representation of past trajectories requires a significantly richer set than the state space. Conversely, it can happen that some distinct values do not correspond to genuinely different states of a process; this is the natural conclusion whenever, for example, a process taking either of two distinct values at time t has the same distribution at time t+1. Such a situation suggests reducing the dimension of the transition probability matrix. Our methods are based on solving two optimization problems. More specifically, we consider two competing objectives that a researcher will in general pursue when bootstrapping: preserving the similarity between the observed and the bootstrap series, and reducing the probability of obtaining a perfect replication of the original sample. A brief axiomatic discussion defines the desirable properties of such optimality criteria. Two numerical examples illustrate the method.
    Keywords: order of Markov chains, similarity of time series, transition probability matrices, multiplicity of time series, partition of states of Markov chains, Markov chains, bootstrap methods
    JEL: O1 O11
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:mcr:wpdief:wpaper35&r=ecm
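    Illustration: a minimal Python sketch of a plain first-order Markov chain bootstrap, in which the state space is simply the set of distinct observed values. This is exactly the standard approach the entry above questions; it does not implement the authors' joint choice of chain order and state clustering. Data and state labels are illustrative assumptions.
      import numpy as np

      def markov_bootstrap(states, n_boot, rng):
          # Estimate a first-order transition matrix from the observed state sequence,
          # then simulate bootstrap paths of the same length.
          labels = np.unique(states)
          k = len(labels)
          idx = np.searchsorted(labels, states)
          P = np.zeros((k, k))
          for a, b in zip(idx[:-1], idx[1:]):
              P[a, b] += 1
          P /= P.sum(axis=1, keepdims=True)           # row-normalise transition counts
          paths = []
          for _ in range(n_boot):
              s = [rng.integers(k)]                   # uniform initial state (a simplification)
              for _ in range(len(states) - 1):
                  s.append(rng.choice(k, p=P[s[-1]]))
              paths.append(labels[np.array(s)])
          return paths

      rng = np.random.default_rng(5)
      obs = rng.choice(["down", "flat", "up"], size=300, p=[0.3, 0.4, 0.3])
      print(markov_bootstrap(obs, n_boot=2, rng=rng)[0][:10])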
  10. By: Andrew Ching (University of Toronto); Susumu Imai (Queen's University); Masakazu Ishihara (University of Toronto); Neelam Jain (Northern Illinois University)
    Abstract: This paper provides a step-by-step guide to estimating discrete choice dynamic programming (DDP) models using the Bayesian Dynamic Programming algorithm developed by Imai, Jain and Ching (2008) (IJC). The IJC method combines the DDP solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm, which solves the DDP model and estimates its structural parameters simultaneously. The main computational advantage of this estimation algorithm is its efficient use of information obtained in past iterations. In the conventional Nested Fixed Point algorithm, most of the information obtained in past iterations remains unused in the current iteration. In contrast, the Bayesian Dynamic Programming algorithm extensively reuses the computational results of past iterations to help solve the DDP model at the current parameter values. Consequently, it significantly alleviates the computational burden of estimating a DDP model. We carefully discuss how to implement the algorithm in practice, and use a simple dynamic store choice model to illustrate how to apply it to obtain parameter estimates.
    Keywords: Bayesian Dynamic Programming, Discrete Choice Dynamic Programming, Markov Chain Monte Carlo
    JEL: C11 M3
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1201&r=ecm
  11. By: Ostap Okhrin; Yarema Okhrin; Wolfgang Schmid
    Abstract: In this paper we analyse the properties of hierarchical Archimedean copulas. This class generalises the Archimedean copulas and allows for general non-exchangeable dependency structures. We show that the structure of the copula can be uniquely recovered from all bivariate margins. We derive the distribution of the copula value, which is particularly useful for tests and for constructing confidence intervals. Furthermore, we analyse dependence orderings, multivariate dependence measures and extreme value copulas. We pay special attention to tail dependence and derive several tail dependence indices for general hierarchical Archimedean copulas.
    Keywords: copula; multivariate distribution; Archimedean copula; stochastic ordering; hierarchical copula
    JEL: C16 C46
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-014&r=ecm
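    Illustration: a minimal Python sketch of the simplest hierarchical (fully nested) Archimedean copula, built from bivariate Gumbel generators: C(u1,u2,u3) = C_outer(C_inner(u1,u2), u3). The sufficient nesting condition theta_inner >= theta_outer >= 1 is assumed; the parameter values are illustrative, and none of the paper's tail-dependence or ordering results are computed here.
      import numpy as np

      def gumbel(u, v, theta):
          # Bivariate Gumbel (Archimedean) copula C(u, v; theta), theta >= 1.
          return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta))

      def nested_gumbel(u1, u2, u3, theta_inner, theta_outer):
          # Fully nested 3-dimensional hierarchical Archimedean copula:
          # the inner pair (u1, u2) is coupled more strongly than the outer level.
          assert theta_inner >= theta_outer >= 1
          return gumbel(gumbel(u1, u2, theta_inner), u3, theta_outer)

      print(round(nested_gumbel(0.9, 0.8, 0.7, theta_inner=3.0, theta_outer=1.5), 4))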
  12. By: Choi, In; Kurozumi, Eiji
    Abstract: In this paper, Mallows' (1973) Cp criterion, Akaike's (1973) AIC, Hurvich and Tsai's (1989) corrected AIC and the BIC of Akaike (1978) and Schwarz (1978) are derived for the leads-and-lags cointegrating regression. Deriving model selection criteria for the leads-and-lags regression is a nontrivial task since the true model is of infinite dimension. This paper justifies using the conventional formulas of those model selection criteria for the leads-and-lags cointegrating regression. The numbers of leads and lags can be selected in scientific ways using the model selection criteria. Simulation results regarding the bias and mean squared error of the long-run coefficient estimates are reported. It is found that the model selection criteria are successful in reducing bias and mean squared error relative to the conventional, fixed selection rules. Among the model selection criteria, the BIC appears to be most successful in reducing MSE, and Cp in reducing bias. We also observe that, in most cases, the selection rules without the restriction that the numbers of the leads and lags be the same have an advantage over those with it.
    Keywords: Cointegration, Leads-and-lags regression, AIC, Corrected AIC, BIC, Cp
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:hit:ccesdp:6&r=ecm
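    Illustration: a minimal Python sketch of choosing the number of leads and lags in a leads-and-lags (dynamic OLS) cointegrating regression by BIC. It is a deliberately simplified version of the idea: a symmetric number of leads and lags, a usable sample that shrinks with K, and simulated data; it is not the paper's exact implementation of Cp, AIC, corrected AIC or BIC.
      import numpy as np

      def bic_leads_lags(y, x, max_k=8):
          # Regress y_t on a constant, x_t and dx_{t-j}, j = -K,...,K, and pick K by BIC.
          T = len(y)
          dx = np.diff(x)                              # dx[t-1] = x_t - x_{t-1}
          best = None
          for K in range(max_k + 1):
              t_idx = np.arange(K + 1, T - K)          # usable observations for this K
              Z = [np.ones(len(t_idx)), x[t_idx]]
              Z += [dx[t_idx - 1 - j] for j in range(-K, K + 1)]
              Z = np.column_stack(Z)
              yy = y[t_idx]
              e = yy - Z @ np.linalg.lstsq(Z, yy, rcond=None)[0]
              n = len(yy)
              bic = n * np.log(e @ e / n) + Z.shape[1] * np.log(n)
              if best is None or bic < best[0]:
                  best = (bic, K)
          return best[1]

      rng = np.random.default_rng(6)
      T = 400
      x = np.cumsum(rng.standard_normal(T))            # I(1) regressor
      y = 1 + 2 * x + rng.standard_normal(T)           # cointegrated with x
      print("BIC-selected number of leads/lags:", bic_leads_lags(y, x))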
  13. By: Peter C. B. Phillips; Jun Yu
    Abstract: A model of financial asset price determination is proposed that incorporates flat trading features into an efficient price process. The model involves the superposition of a Brownian semimartingale process for the efficient price and a Bernoulli process that determines the extent of flat price trading. The approach is related to sticky price modeling and the Calvo pricing mechanism in macroeconomic dynamics. A limit theory for the conventional realized volatility (RV) measure of integrated volatility is developed. The results show that RV is still consistent but has an inflated asymptotic variance that depends on the probability of flat trading. Estimated quarticity is similarly affected, so that both the feasible central limit theorem and the inferential framework suggested in Barndorff-Nielsen and Shephard (2002) remain valid under flat price trading even though there is information loss due to flat trading effects. The results are related to work by Jacod (1993) and Mykland and Zhang (2006) on realized volatility measures with random and intermittent sampling, and to ACD models for irregularly spaced transactions data. Extensions are given to include models with microstructure noise. Some simulation results are reported. Empirical evaluations with tick-by-tick data indicate that the effect of flat trading on the limit theory under microstructure noise is likely to be minor in most cases, thereby affirming the relevance of existing approaches.
    Keywords: Bernoulli process, Brownian semimartingale, Calvo pricing, Flat trading, Microstructure noise, Quarticity function, Realized volatility, Stopping times
    JEL: C15 G12
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-039&r=ecm
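    Illustration: a toy Python simulation of the flat-trading mechanism described above: a Bernoulli process decides whether the observed price simply repeats the previous observed price, and realised volatility (RV) is computed from the resulting returns. The flat-trading probability and volatility level are illustrative assumptions; the sketch only shows that RV stays close to the integrated variance, it does not reproduce the paper's limit theory.
      import numpy as np

      rng = np.random.default_rng(7)
      n, sigma, p_flat = 23400, 0.0002, 0.4            # p_flat: probability of a flat price

      # efficient (latent) log-price: a simple Brownian-motion stand-in
      efficient = np.cumsum(rng.standard_normal(n) * sigma)

      # observed price: with probability p_flat the previous observed price is repeated
      observed = efficient.copy()
      flat = rng.random(n) < p_flat
      for t in range(1, n):
          if flat[t]:
              observed[t] = observed[t - 1]

      rv = lambda p: np.sum(np.diff(p) ** 2)           # realised volatility of a price path
      print("IV (approx.):         ", round(rv(efficient), 6))
      print("RV under flat trading:", round(rv(observed), 6))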
  14. By: Deniz Dilan Karaman Örsal; Bernd Droge
    Abstract: In this note we establish the existence of the first two moments of the asymptotic trace statistic, which appears as the weak limit of the likelihood ratio statistic for testing the cointegration rank in a vector autoregressive model and whose moments may be used to develop panel cointegration tests. Moreover, we justify the common practice of approximating these moments by simulating a certain statistic, which converges weakly to the asymptotic trace statistic. To accomplish this we show that the moments of the mentioned statistic converge to those of the asymptotic trace statistic as the time dimension tends to infinity.
    Keywords: Cointegration, Trace statistic, Asymptotic moments, Uniform integrability
    JEL: C32 C33 C12
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-012&r=ecm
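    Illustration: the "common practice" mentioned above approximates the moments of the asymptotic trace statistic by simulating its discretised counterpart built from Gaussian random walks. Below is a minimal Python sketch; the number of replications, the time dimension T and the dimension k are chosen purely for illustration.
      import numpy as np

      def simulated_trace_stat(k, T, rng):
          # One draw of the discretised trace statistic
          # tr[(sum_t e_t S_{t-1}')(sum_t S_{t-1} S_{t-1}')^{-1}(sum_t S_{t-1} e_t')],
          # where e_t ~ N(0, I_k) and S_t is its partial-sum (random-walk) process.
          e = rng.standard_normal((T, k))
          S = np.cumsum(e, axis=0)
          S_lag = np.vstack([np.zeros(k), S[:-1]])     # S_{t-1}
          A = e.T @ S_lag                              # sum_t e_t S_{t-1}'
          B = S_lag.T @ S_lag                          # sum_t S_{t-1} S_{t-1}'
          return np.trace(A @ np.linalg.solve(B, A.T))

      rng = np.random.default_rng(8)
      draws = np.array([simulated_trace_stat(k=2, T=500, rng=rng) for _ in range(2000)])
      print("simulated mean and variance (k=2):", draws.mean().round(2), draws.var().round(2))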
  15. By: Mendonca, Gui Pedro
    Abstract: Even though the output-unemployment relation has always been a key theme in applied macroeconometric research, the global hypothesis of modular short- and long-run dynamics under classical macroeconomic assumptions has yet to become a widely discussed subject in the field and therefore leaves ample scope for further improvement, discussion and experimentation. Following recent advances in non-linear bivariate estimation techniques, this paper evaluates the joint hypotheses of endogenous growth, the natural rate hypothesis and asymmetric short-run error correction. To tackle this global proposal, a three-step methodology based on numerical grid-search procedures is applied to data from nineteen OECD countries. First, a numerical grid search is used to estimate linear trend output regimes with structural breaks; long-run natural unemployment rate regimes are then obtained endogenously from these estimates. Finally, different grid-search procedures, based on the original two-step procedure for estimating linear cointegration models, are used to estimate the short-run adjustment process under threshold vector error correction dynamics, following recent proposals on asymmetric Okun adjustment.
    Keywords: Okun Law; Structural Change; Additive-Outlier Models; Bivariate Threshold Vector Error Correction Systems; Output/Unemployment Dynamics
    JEL: C32 C51 C52 E27
    Date: 2008–11–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14648&r=ecm
  16. By: Ji Cao; Wolfgang Härdle; Julius Mungo
    Abstract: Estimating the implied volatility as a function of strike and time to maturity is a challenging task in financial econometrics. Dynamic Semiparametric Factor Models (DSFM) are a model class that allows for the estimation of the implied volatility surface (IVS) in a dynamic context, employing semiparametric factor functions and time-varying loadings. Because financial asset volatilities move over time, across assets and across markets, this paper analyses volatility interaction between the German and Korean stock markets. As a proxy for volatility, factor loading series derived from a DSFM application to option prices are employed. We examine volatility transmission between the markets within a vector autoregressive (VAR) framework. Our results show that a shock to the volatility of one market may not translate directly into greater uncertainty in the other market, and it is unlikely that portfolio investors can benefit from diversification among these markets due to cointegration.
    Keywords: implied volatility surface, dynamic semiparametric factor model, VAR, cointegration
    JEL: C14 G12
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-019&r=ecm
  17. By: Klein, Achim; Urbig, Diemo; Kirn, Stefan
    Abstract: Introduction. The objects of investigation of this work are micro-level behaviors in stock markets. We aim at better understanding which strategies of market participants drive stock markets. The problem is that micro-level data from real stock markets are largely unobservable. We take an estimation perspective to obtain daily time series of fractions of chartists and fundamentalists among market participants. We estimate the heterogeneous agent-based financial market model introduced by Lux and Marchesi [1] to the S&P 500. This model has more realistic time series properties compared to less complex econometric and other agent-based models. Such kinds of models have a rather complex dependency between micro and macro parameters that have to be mapped to empirical data by the estimation method. This poses heavy computational burdens. Our contribution to this field is a new method for indirectly estimating time-varying micro-parameters of highly complex agent-based models at high frequency. Related work. Due to the high complexity, few authors have published on this topic to date (e.g., [2], [3], and [4]). Recent approaches in directly estimating agent-based models are restricted to simpler models, make simplifying assumptions on the estimation procedure, estimate only non-time varying parameters, or estimate only low frequency time series. Approach and computational methods. The indirect estimation method we propose is based on estimating the inverse model of a rich agent-based model that derives realistic macro market behavior from heterogeneous market participants’ behaviors. Applying the inverse model, which maps macro parameters back to micro parameters, to widely available macro-level financial market data, allows for estimating time series of aggregated real world micro-level strategy data at daily frequency. To estimate the inverse model in the first place, a neural network approach is used, as it allows for a large degree of freedom concerning the structure of the mapping to be represented by the neural network. As basis for learning the mapping, micro and macro time series of the market model are generated artificially using a multi-agent simulation based on RePast [5]. After applying several pre-processing and smoothing methods to these time series, a feed-forward multilayer perceptron is trained using a variant of the Levenberg-Marquardt algorithm combined with Bayesian regularization [6]. Finally, the trained network is applied to the S&P 500 to estimate daily time series of fractions of strategies used by market participants. Results. The main contribution of this work is a model-free indirect estimation approach. It allows estimating micro-parameter time series of the underlying agent-based model of high complexity at high frequency. No simplifying assumptions concerning the model or the estimation process have to be applied. Our results also contribute to the understanding of theoretical models. By investigating fundamental dependencies in the Lux and Marchesi model by means of sensitivity analysis of the resulting neural network inverse model, price volatility is found to be a major driver. This provides additional support to findings in [1]. Some face validity for concrete estimation results obtained from the S&P 500 is shown by comparing to results of Boswijk et al. [3]. This is the work which comes closest to our approach, albeit their model is simpler and estimation frequency is yearly. 
We find support for Boswijk et al.'s key finding of a large fraction of chartists during the late-1990s price bubble in technology stocks. Eventually, our work contributes to understanding what kind of micro-level behaviors drive stock markets. Analyzing correlations of our estimation results with historic market events, we find the fraction of chartists to be large at times of crises, crashes, and bubbles. See also http://www.whodrivesthemarket.com for continuously updated and derived live results.
    Keywords: stock market; heterogeneous agent-based models; indirect estimation; inverse model; trading strategies; chartists; fundamentalists; neural networks
    JEL: C32 G12 C45 C81 C15
    Date: 2008–06–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14433&r=ecm
  18. By: Stefan Laséen; Lars E.O. Svensson
    Abstract: This paper specifies how to do policy simulations with alternative instrument-rate paths in DSGE models such as Ramses, the Riksbank's main model for policy analysis and forecasting. The new element is that these alternative instrument-rate paths are anticipated by the private sector. Such simulations correspond to situations where the Riksbank transparently announces that it plans to implement a particular instrument-rate path and where this announcement is believed by the private sector. Previous methods have instead implemented alternative instrument-rate paths by adding unanticipated shocks to an instrument rule, as in the method of modest interventions by Leeper and Zha (2003). This corresponds to a very different situation where the Riksbank would nontransparently and secretly plan to implement deviations from an announced instrument rule. Such deviations are in practical simulations normally both serially correlated and large, which seems inconsistent with the assumption that they would remain unanticipated by the private sector. Simulations with anticipated instrument-rate paths seem more relevant for the transparent flexible inflation targeting that the Riksbank conducts. We provide an algorithm for the computation of policy simulations with arbitrary restrictions on nominal and real instrument-rate paths for an arbitrary number of periods, after which a given policy rule, including targeting rules and explicit, implicit, or forecast-based instrument rules, is implemented. When inflation projections are sufficiently sensitive to the real interest-rate path, restrictions on real interest-rate paths provide more intuitive and robust results, whereas restrictions on nominal interest-rate paths may provide somewhat counter-intuitive results.
    JEL: E52 E58
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14902&r=ecm
  19. By: James D. Hamilton; Michael T. Owyang
    Abstract: This paper develops a framework for inferring common Markov-switching components in a panel data set with large cross-section and time-series dimensions. We apply the framework to studying similarities and differences across U.S. states in the timing of business cycles. We hypothesize that there exists a small number of cluster designations, with individual states in a given cluster sharing certain business cycle characteristics. We find that although oil-producing and agricultural states can sometimes experience a separate recession from the rest of the United States, for the most part, differences across states appear to be a matter of timing, with some states entering recession or recovering before others.
    Keywords: Business cycles ; Recessions
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2009-13&r=ecm
  20. By: Yann Bramoullé; Bernard Fortin
    Abstract: In a social network, agents have their own reference group that may influence their behavior. In turn, the agents' attributes and their behavior affect the formation and the structure of the social network. We survey the econometric literature on both aspects of social networks and discuss the identification and estimation issues they raise.
    Keywords: Social network, peer effects, identification, network formation, pair-wise regressions, separability, mutual consent
    JEL: D85 L14 Z13 C3
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0913&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.