nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒04‒09
28 papers chosen by
Sune Karlsson
Orebro University

  1. Efficient estimation of autoregression parameters and innovation distributions for semiparametric integer-valued AR(p) models By Drost,Feike C.; Akker,Ramon van den; Werker,Bas J.M.
  2. Convenient Estimators for the Panel Probit Model By Bertschek, Irene; Lechner, Michael
  3. Robust Priors in Nonlinear Panel Data Models By Manuel Arellano; Stéphane Bonhomme
  4. Learning, Forecasting and Structural Breaks By John M Maheu; Stephen Gordon
  5. Modelling and Testing for Structural Changes in Panel Cointegration Models with Common and Idiosyncratic Stochastic Trends By Chihwa Kao; Lorenzo Trapani; Giovanni Urga
  6. Sieve bootstrap unit root tests By Patrick Richard
  7. BEVERIDGE NELSON DECOMPOSITION WITH MARKOV SWITCHING By Chin Nam Low; Heather Anderson; Ralph Snyder
  8. Predicting the term structure of interest rates incorporating parameter uncertainty, model uncertainty and macroeconomic information By De Pooter, Michiel; Ravazzolo, Francesco; van Dijk, Dick
  9. Nonlinear Combination of Financial Forecast with Genetic Algorithm By Ozun, Alper; Cifter, Atilla
  10. Another Look at the Identification of Dynamic Discrete Decision Processes: With an Application to Retirement Behavior By Victor Aguirregabiria
  11. A (semi-)parametric functional coefficient autoregressive conditional duration model By Marcelo Fernandes; Marcelo Cunha Medeiros; Alvaro Veiga
  12. Semiparametric identification of structural dynamic optimal stopping time models By Le-Yu Chen
  13. Macro-panels and Reality By Cubadda Gianluca; Hecq Alain; Palm Franz C.
  14. "Multivariate stochastic volatility" By Siddhartha Chib; Yasuhiro Omori; Manabu Asai
  15. Information-Theoretic Distribution Test With Application to Normality By Stengos, T.; Wu, X.
  16. Using the Dynamic Bi-Factor Model with Markov Switching to Predict the Cyclical Turns in the Large European Economies By Konstantin A. Kholodilin
  17. An Extension of the Blinder-Oaxaca Decomposition to Non-Linear Models By Thomas Bauer; Mathias Sinning
  18. Predicting the UK Equity Premium with Dividend Ratios: An Out-Of-Sample Recursive Residuals Graphical Approach By Neil Kellard; John Nankervis; Fotis Papadimitriou
  19. Long Memory and FIGARCH Models for Daily and High Frequency Commodity Prices By Richard T. Baillie; Young-Wook Han; Robert J. Myers; Jeongseok Song
  20. Endogeneity and discrete outcomes By Andrew Chesher
  21. The behaviour of the real exchange rate: Evidence from regression quantiles By Kleopatra Nikolaou
  22. Testing for a common latent variable in a linear regression By Wittenberg, Martin
  23. An economic analysis of exclusion restrictions for instrumental variable estimation By van den Berg, Gerard
  24. Modeling Long-Term Memory Effect in Stock Prices: A Comparative Analysis with GPH Test and Daubechies Wavelets By Ozun, Alper; Cifter, Atilla
  25. Parametric properties of semi-nonparametric distributions, with applications to option valuation By Ángel León; Javier Mencía; Enrique Sentana
  26. Nowcasting GDP and Inflation: The Real-Time Informational Content of Macroeconomic Data Releases By Domenico Giannone; Lucrezia Reichlin; David H Small
  27. A State Space Approach To The Policymaker's Data Uncertainty Problem By Alastair Cunningham; Chris Jeffery; George Kapetanios; Vincent Labhard
  28. Constructing Historical Euro Area Data By Heather Anderson; Mardi Dungey; Denise Osborn; Farshid Vahid

  1. By: Drost,Feike C.; Akker,Ramon van den; Werker,Bas J.M. (Tilburg University, Center for Economic Research)
    Abstract: Integer-valued autoregressive (INAR) processes have been introduced to model nonnegative integer-valued phenomena that evolve over time. The distribution of an INAR(p) process is essentially described by two parameters: a vector of autoregression coefficients and a probability distribution on the nonnegative integers, called an immigration or innovation distribution. Traditionally, parametric models are considered where the innovation distribution is assumed to belong to a parametric family. This paper instead considers a more realistic semiparametric INAR(p) model: essentially there are no restrictions on the innovation distribution. We provide a (semiparametrically) efficient estimator of the autoregression parameters and the innovation distribution.
    Keywords: count data; nonparametric maximum likelihood; infinite-dimensional Z-estimator; semiparametric efficiency
    JEL: C13 C14 C22
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200723&r=ecm
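    A schematic statement of the INAR(p) recursion described in this abstract, using the standard binomial thinning notation (the paper's exact conditions may differ):
        X_t = \alpha_1 \circ X_{t-1} + \dots + \alpha_p \circ X_{t-p} + \varepsilon_t, \qquad \alpha \circ X = \sum_{j=1}^{X} B_j, \quad B_j \sim \text{i.i.d. Bernoulli}(\alpha),
    where the innovations \varepsilon_t are i.i.d. on the nonnegative integers with a distribution that the semiparametric model leaves unrestricted.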
  2. By: Bertschek, Irene; Lechner, Michael (Institut für Volkswirtschaft und Statistik (IVS))
    Abstract: The paper shows that several estimators for the panel probit model suggested in the literature belong to a common class of GMM estimators. They are relatively easy to compute because they are based on conditional moment restrictions involving univariate moments of the binary dependent variable only. Applying nonparametric methods we discuss an estimator that is optimal in this class. A Monte Carlo study shows that a particular variant of this estimator has good small sample properties and that the efficiency loss compared to maximum likelihood is small. An application to the product innovation decisions of German firms reveals the expected efficiency gains.
    JEL: C14 C23 C25
    URL: http://d.repec.org/n?u=RePEc:mea:ivswpa:528&r=ecm
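    The univariate conditional moment restrictions underlying this class can be sketched, in standard notation, as
        E[ y_{it} - \Phi(x_{it}' \beta) \mid x_i ] = 0, \qquad t = 1, \dots, T,
    where \Phi is the standard normal cdf; each restriction involves only a univariate moment of the binary outcome, so no multivariate integration is needed. The optimal choice of instruments within this class is the subject of the paper.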
  3. By: Manuel Arellano (Institute for Fiscal Studies and CEMFI); Stéphane Bonhomme
    Abstract: Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall in this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights must depend on the data unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Finally, we show that random effects estimators are not bias reducing in general and discuss important exceptions. Three examples and some Monte Carlo experiments illustrate the results.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:07/07&r=ecm
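    The object at issue can be sketched as the integrated likelihood for unit i, in which a weight function (prior) \pi is assigned to the individual effect \alpha_i:
        L_i(\theta) = \int f(y_i \mid x_i, \theta, \alpha_i) \, \pi(\alpha_i \mid \theta) \, d\alpha_i.
    The paper characterizes the weights \pi for which the resulting estimator of the common parameter \theta is first-order unbiased, showing that they must in general depend on the data.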
  4. By: John M Maheu; Stephen Gordon
    Abstract: We provide a general methodology for forecasting in the presence of structural breaks induced by unpredictable changes to model parameters. Bayesian methods of learning and model comparison are used to derive a predictive density that takes into account the possibility that a break will occur before the next observation. Estimates for the posterior distribution of the most recent break are generated as a by-product of our procedure. We discuss the importance of using priors that accurately reflect the econometrician's opinions as to what constitutes a plausible forecast. Several applications to macroeconomic time-series data demonstrate the usefulness of our procedure.
    Keywords: Bayesian Model Averaging, Markov Chain Monte Carlo, Real GDP Growth, Phillips Curve
    JEL: C53 C22 C11
    Date: 2007–03–30
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-284&r=ecm
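    A stylized version of the break-aware predictive density, under simplifying assumptions (the paper's construction also tracks the posterior of the most recent break point), is
        p(y_{T+1} \mid Y_T) = (1 - \pi) \int p(y_{T+1} \mid \theta) \, p(\theta \mid Y_T) \, d\theta + \pi \int p(y_{T+1} \mid \theta) \, p(\theta) \, d\theta,
    where \pi is the probability of a break before the next observation; in the post-break branch the prior replaces the posterior, which is why the choice of prior matters for plausible forecasts.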
  5. By: Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Lorenzo Trapani; Giovanni Urga
    Abstract: In this paper, we propose an estimation and testing framework for parameter instability in cointegrated panel regressions with common and idiosyncratic trends. We develop tests for structural change for the slope parameters under the null hypothesis of no structural break against the alternative hypothesis of (at least) one common change point, which is possibly unknown. The limiting distributions of the proposed test statistics are derived. Monte Carlo simulations examine size and power of the proposed tests. We are grateful for discussions with Robert De Jong, Long-Fei Lee, Zongwu Cai, and Yupin Hu. We would also like to thank participants in the International Conferences on "Common Features in London" (Cass, 16-17 December 2004), 2006 New York Econometrics Camp and Breaks and Persistence in Econometrics (Cass, 11-12 December 2006), and econometrics seminars at Ohio State University and Academia Sinica for helpful comments. Part of this work was done while Chihwa Kao was visiting the Centre for Econometric Analysis at Cass (CEA@Cass). Financial support from City University 2005 Pump Priming Fund and CEA@Cass is gratefully acknowledged. Lorenzo Trapani acknowledges financial support from Cass Business School under the RAE Development Fund Scheme.
    Keywords: Panel cointegration, common and idiosyncratic stochastic trends, testing for structural changes.
    JEL: C32 C33 C13
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:92&r=ecm
  6. By: Patrick Richard (GREDI, Département d'économique, Université de Sherbrooke)
    Abstract: We study the use of the sieve bootstrap to conduct ADF unit root tests when the time series' first difference is a general linear process that admits an infinite moving average form. The work of Park (2002) and Chang and Park (2003) suggests that the usual autoregressive (AR) sieve bootstrap provides some accuracy gains under the null hypothesis. The magnitude of this amelioration, however, depends on the nature of the true DGP. For example, the AR sieve test over-rejects almost as much as the asymptotic one when the DGP contains a strong negative moving average root. This lack of robustness is, of course, caused by the poor quality of the AR approximation. We attempt to reduce this problem by proposing to use sieve bootstraps based on moving average (MA) and autoregressive-moving average (ARMA) approximations. Though this is a natural generalisation of the standard AR sieve bootstrap, it has never been suggested in the econometrics literature. Two important theoretical results are shown. First, we establish invariance principles for the partial sum processes built from invertible MA and stationary and invertible ARMA sieve bootstrap DGPs. Second, these are used to provide a proof of the asymptotic validity of the resulting ADF bootstrap tests. Through Monte Carlo experiments, we find that the rejection probability of the MA sieve bootstrap is more robust to the DGP than that of the AR sieve bootstrap. We also find that the ARMA sieve bootstrap requires only a very parsimonious specification to achieve excellent results.
    Keywords: Sieve bootstrap, Unit root, ADF tests, ARMA approximations, Invariance Principle
    JEL: C12 C22
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:shr:wpaper:07-05&r=ecm
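    As a schematic illustration (not the paper's algorithm or code), a minimal AR sieve bootstrap for the ADF test can be sketched in Python with numpy and statsmodels; the lag length p, the number of replications, and the test variant below are illustrative assumptions:
        # AR sieve bootstrap p-value for the ADF unit root test (illustrative sketch).
        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg
        from statsmodels.tsa.stattools import adfuller

        def ar_sieve_bootstrap_adf(y, p=4, n_boot=499, seed=0):
            rng = np.random.default_rng(seed)
            stat = adfuller(y, maxlag=p, autolag=None)[0]    # ADF statistic on the data
            dy = np.diff(y)                                  # impose the unit root null
            fit = AutoReg(dy, lags=p).fit()                  # AR(p) sieve for the differences
            resid = fit.resid - fit.resid.mean()             # centred sieve residuals
            boot_stats = np.empty(n_boot)
            for b in range(n_boot):
                e = rng.choice(resid, size=len(dy) + 50)     # i.i.d. resampling, with burn-in
                dyb = np.zeros(len(e))
                for t in range(p, len(e)):                   # rebuild differences from the sieve
                    dyb[t] = fit.params[0] + fit.params[1:] @ dyb[t - p:t][::-1] + e[t]
                yb = np.cumsum(dyb[50:])                     # integrate back to levels under the null
                boot_stats[b] = adfuller(yb, maxlag=p, autolag=None)[0]
            return stat, np.mean(boot_stats <= stat)         # left-tail bootstrap p-value
    An MA or ARMA sieve, as proposed in the paper, would replace the AR(p) fit with an MA or ARMA approximation of the differenced series before resampling.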
  7. By: Chin Nam Low; Heather Anderson; Ralph Snyder
    Abstract: This paper considers Beveridge-Nelson decomposition in a context where the permanent and transitory components both follow a Markov switching process. Our approach incorporates Markov switching into a single-source-of-error state-space framework, allowing business cycle asymmetries and regime switches in the long-run multiplier.
    JEL: C22 C51 E32
    Date: 2006–07
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2006-18&r=ecm
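    For reference, the standard Beveridge-Nelson decomposition that the paper extends splits the series into a random walk permanent component and a transitory component:
        y_t = \tau_t + c_t, \qquad \tau_t = \lim_{h \to \infty} \left( E_t[y_{t+h}] - h\mu \right),
    with drift \mu; in the paper's setting Markov switching enters both components, including the long-run multiplier that maps current shocks into \tau_t.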
  8. By: De Pooter, Michiel; Ravazzolo, Francesco; van Dijk, Dick
    Abstract: We forecast the term structure of U.S. Treasury zero-coupon bond yields by analyzing a range of models that have been used in the literature. We assess the relevance of parameter uncertainty by examining the added value of using Bayesian inference compared to frequentist estimation techniques, and model uncertainty by combining forecasts from individual models. Following the current literature we also investigate the benefits of incorporating macroeconomic information in yield curve models. Our results show that adding macroeconomic factors is very beneficial for improving the out-of-sample forecasting performance of individual models. Despite this, the predictive accuracy of models varies considerably over time, irrespective of whether the Bayesian or the frequentist approach is used. We show that mitigating model uncertainty by combining forecasts leads to substantial gains in forecasting performance, especially when applying Bayesian model averaging.
    Keywords: Term structure of interest rates; Nelson-Siegel model; Affine term structure model; forecast combination; Bayesian analysis
    JEL: C53 E47
    Date: 2006–11–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2512&r=ecm
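    The forecast combination by Bayesian model averaging referred to here takes the standard form
        p(y_{T+h} \mid D) = \sum_{k=1}^{K} p(y_{T+h} \mid M_k, D) \, \Pr(M_k \mid D),
    where D denotes the observed data and M_1, ..., M_K are the individual yield curve models, each weighted by its posterior model probability.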
  9. By: Ozun, Alper; Cifter, Atilla
    Abstract: Complexity in the financial markets requires intelligent forecasting models for return volatility. In this paper, historical simulation, GARCH, GARCH with skewed student-t distribution and asymmetric normal mixture GJR-GARCH models are combined with the Extreme Value Theory Hill estimator, using artificial neural networks with a genetic algorithm as the combination platform. Employing daily closing values of the Istanbul Stock Exchange from 01/10/1996 to 11/07/2006, Kupiec and Christoffersen tests are performed as the back-testing mechanisms for forecast comparison of the models. Empirical findings show that the fat tails are more properly captured by the combination of GARCH with skewed student-t distribution and the Extreme Value Theory Hill estimator. Modeling return volatility in emerging markets requires “intelligent” combinations of Value-at-Risk models that capture the extreme movements in the markets, rather than individual model forecasts.
    Keywords: Forecast combination; Artificial neural networks; GARCH models; Extreme value theory; Christoffersen test
    JEL: G0 C52 C32
    Date: 2007–02–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2488&r=ecm
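    The Kupiec back-test used for comparison is the unconditional coverage likelihood ratio; with coverage level p, sample size T and N observed VaR violations, it is
        LR_{uc} = -2 \ln\left[ (1-p)^{T-N} p^{N} \right] + 2 \ln\left[ (1 - N/T)^{T-N} (N/T)^{N} \right] \sim \chi^2(1).
    The Christoffersen test adds a likelihood ratio component for the independence of violations.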
  10. By: Victor Aguirregabiria
    Abstract: This paper deals with the estimation of the behavioral and welfare effects of counterfactual policy interventions in dynamic structural models where all the primitive functions are nonparametrically specified (i.e., preferences, technology, transition rules, and distribution of unobserved variables). It proves the nonparametric identification of agents' decision rules, before and after the policy intervention, and of the change in agents' welfare. Based on these results we propose a nonparametric procedure to estimate the behavioral and welfare effects of a general class of counterfactual policy interventions. The nonparametric estimator can be used to construct a test of the validity of a particular parametric specification. We apply this method to evaluate hypothetical reforms in the rules of a public pension system using a model of retirement behavior and a sample of workers in Sweden.
    Keywords: Dynamic discrete decision processes; Nonparametric identification; Counterfactual policy interventions; Retirement behavior.
    JEL: C13 C25
    Date: 2007–03–27
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-282&r=ecm
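    The decision processes studied here solve a Bellman equation of the usual form (schematic; the paper's point is that the utility u, the transition law, and the distribution of unobservables are all nonparametric):
        V(x) = \max_{a} \left\{ u(a, x) + \beta \, E[ V(x') \mid x, a ] \right\},
    and a counterfactual policy intervention changes one of these primitives; the identification results recover the induced change in decision rules and welfare without parametric assumptions.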
  11. By: Marcelo Fernandes (Economics Department, Queen Mary, University of London); Marcelo Cunha Medeiros (Department of Economics PUC-Rio); Alvaro Veiga (Department of Economics,PUC-Rio)
    Abstract: In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
    Keywords: explosive regimes, quasi-maximum likelihood, sieve estimation, smooth transition, stationarity.
    JEL: C22 C41
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:rio:texdis:535&r=ecm
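    As a point of reference, the baseline ACD recursion that the FC-ACD generalises, with a two-regime logistic smooth transition sketched in illustrative notation (the paper allows the number of regimes to grow), is
        x_i = \psi_i \varepsilon_i, \qquad \psi_i = \omega + \left[ \alpha_1 + \alpha_2 G(x_{i-1}; \gamma, c) \right] x_{i-1} + \beta \psi_{i-1}, \qquad G(x; \gamma, c) = \left( 1 + e^{-\gamma (x - c)} \right)^{-1},
    where x_i is the i-th duration and \psi_i its conditional expectation.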
  12. By: Le-Yu Chen (Institute for Fiscal Studies and University College London)
    Abstract: This paper presents new identification results for the class of structural dynamic optimal stopping time models that are built upon the framework of the structural discrete Markov decision processes proposed by Rust (1994). We demonstrate how to semiparametrically identify the deep structural parameters of interest in the case where the utility function of an absorbing choice in the model is parametric but the distribution of unobserved heterogeneity is nonparametric. Our identification strategy depends on the availability of a continuous observed state variable that satisfies certain exclusion restrictions. If such an excluded variable is available, we show that the dynamic optimal stopping model is semiparametrically identified using control function approaches.
    Keywords: Structural dynamic discrete choice models, semiparametric identification, optimal stopping
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:06/07&r=ecm
  13. By: Cubadda Gianluca; Hecq Alain; Palm Franz C. (METEOR)
    Abstract: This note argues that large VAR models with common cyclical feature restrictions provide an attractive framework for parsimonious implied univariate final equations, justifying on the one hand the estimation of homogenous panels with dynamic heterogeneity and a common factor structure, and on the other hand the aggregation of time series. However, starting from too restrictive a DGP might preclude one from looking at interesting empirical issues.
    Keywords: Economics (JEL: A)
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2007009&r=ecm
  14. By: Siddhartha Chib (Olin School of Business, Washington University in St. Louis); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Manabu Asai (Faculty of Economics, Soka University)
    Abstract: The success of univariate stochastic volatility (SV) models in relation to univariate GARCH models has spurred an enormous interest in generalizations of SV models to a multivariate setting. A large number of multivariate SV (MSV) models are now available along with clearly articulated estimation recipes. Our goal in this paper is to provide the first detailed summary of the various model formulations, along with connections and differences, and discuss how the models are estimated. We aim to show that the developments and achievements in this area represent one of the great success stories of financial econometrics.
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2007cf488&r=ecm
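    One basic multivariate SV formulation among the many the survey catalogues is
        y_t = \operatorname{diag}\left( e^{h_{1t}/2}, \dots, e^{h_{pt}/2} \right) \varepsilon_t, \qquad h_t = \mu + \Phi ( h_{t-1} - \mu ) + \eta_t,
    with Gaussian (\varepsilon_t, \eta_t); richer variants reviewed in the paper add factor structures, time-varying correlations, and leverage effects.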
  15. By: Stengos, T.; Wu, X.
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:gue:guelph:2006-4&r=ecm
  16. By: Konstantin A. Kholodilin (DIW Berlin)
    Abstract: Appropriately selected leading indicators can substantially improve the forecasting of the peaks and troughs of the business cycle. Using the novel methodology of the dynamic bi-factor model with Markov switching and data for the three largest European economies (France, Germany, and the UK), we construct a composite leading indicator (CLI) and a composite coincident indicator (CCI) as well as the corresponding recession probabilities. We also estimate a rival Markov-switching VAR model in order to see which of the two models brings better outcomes. The recession dates derived from these models are compared to three reference chronologies: those of the OECD and the ECRI (growth cycles) and those obtained with the quarterly Bry-Boschan procedure (classical cycles). The dynamic bi-factor model and the MS-VAR appear to predict the cyclical turning points equally well, without systematic superiority of one model over the other.
    Keywords: Forecasting turning points, composite
    JEL: E32 C10
    Date: 2007–02–02
    URL: http://d.repec.org/n?u=RePEc:mmf:mmfc06:13&r=ecm
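    A stylized single-factor Markov-switching equation of the kind underlying such models (a sketch only; the bi-factor structure uses separate leading and coincident factors) is
        y_{it} = \lambda_i f_t + u_{it}, \qquad f_t = \mu_{S_t} + \phi f_{t-1} + \eta_t,
    where S_t follows a first-order Markov chain whose filtered probabilities yield the recession probabilities reported in the paper.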
  17. By: Thomas Bauer; Mathias Sinning
    Abstract: In this paper, a general Blinder-Oaxaca decomposition is derived that can also be applied to non-linear models. It allows the differences in a non-linear outcome variable between two groups to be decomposed into a part that is explained by differences in observed characteristics and a part attributable to differences in the estimated coefficients. Departing from this general model, we show how it can be applied to different models with discrete and limited dependent variables.
    Keywords: Blinder-Oaxaca decomposition, non-linear models
    JEL: C13 C20
    Date: 2006–10
    URL: http://d.repec.org/n?u=RePEc:rwi:dpaper:0049&r=ecm
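    The general decomposition described here can be written, in the notation common to this literature, as
        \bar{Y}_A - \bar{Y}_B = \left\{ E_{\beta_A}[Y_A] - E_{\beta_A}[Y_B] \right\} + \left\{ E_{\beta_A}[Y_B] - E_{\beta_B}[Y_B] \right\},
    where E_{\beta_g}[Y_h] is the expected outcome using group h's characteristics evaluated at group g's estimated parameters; in the linear case this reduces to the familiar Blinder-Oaxaca split into characteristics and coefficients components.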
  18. By: Neil Kellard (Essex Finance Centre, Department of Accounting, Finance & Management, University of Essex); John Nankervis (Essex Finance Centre, Department of Accounting, Finance & Management, University of Essex); Fotis Papadimitriou (Essex Finance Centre, Department of Accounting, Finance & Management, University of Essex)
    Abstract: The purpose of this paper is to evaluate the ability of dividend ratios to predict the UK equity premium. Specifically, we apply the Goyal and Welch (2003) methodology to equity premia derived from the UK FTSE All-Share index. This approach provides a powerful graphical diagnostic for predictive ability. Preliminary in-sample univariate regressions reveal that the UK equity premium contains an element of predictability. Moreover, out-of-sample the considered models outperform the historical moving average. In contrast to similar work on the US, the graphical diagnostic then indicates that dividend ratios are useful predictors of excess returns. Finally, Campbell and Shiller (1988) identities are employed to account for the time-varying properties of the dividend yield and dividend growth processes. It is shown that by instrumenting the models with the identities, forecasting ability can be improved.
    Date: 2007–02–02
    URL: http://d.repec.org/n?u=RePEc:mmf:mmfc06:129&r=ecm
  19. By: Richard T. Baillie (Michigan State University and Queen Mary, University of London); Young-Wook Han (Hallym University, Chunchon); Robert J. Myers (Michigan State University); Jeongseok Song (Chung-Ang University, Seoul)
    Abstract: Daily futures returns on six important commodities are found to be well described as FIGARCH fractionally integrated volatility processes, with small departures from the martingale-in-mean property. The paper also analyzes several years of high-frequency intraday commodity futures returns and finds very similar long memory in volatility features at this higher frequency level. Semiparametric local Whittle estimation of the long memory parameter supports the conclusions. Estimating the long memory parameter across many different data sampling frequencies provides consistent estimates of the long memory parameter, suggesting that the series are self-similar. The results have important implications for future empirical work using commodity price and returns data.
    Keywords: Commodity returns, Futures markets, Long memory, FIGARCH
    JEL: C4 C22
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp594&r=ecm
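    For reference, the FIGARCH(1,d,1) conditional variance in the Baillie-Bollerslev-Mikkelsen representation is
        \sigma_t^2 = \omega + \beta \sigma_{t-1}^2 + \left[ 1 - \beta L - (1 - \phi L)(1 - L)^d \right] \varepsilon_t^2,
    where L is the lag operator and 0 < d < 1 governs the hyperbolic (long-memory) decay of shocks to volatility, nesting GARCH at d = 0 and IGARCH at d = 1.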
  20. By: Andrew Chesher (Institute for Fiscal Studies and University College London)
    Abstract: This paper studies models for discrete outcomes which permit explanatory variables to be endogenous. Interesting models for discrete outcomes that admit endogeneity necessarily involve a structural function which is a non-additive function of a latent variate. In the essentially single equation models considered here this latent variate is restricted to be locally independent of instruments but the models are silent about the nature of dependence between the latent variate and the endogenous variable and the role of the instrument in this relationship. These IV models which, when the outcome is continuous, can have point identifying power, have only set identifying power when the outcome is discrete. Identification regions shrink as the support of a discrete outcome grows. The paper extends the analysis of structural quantile functions with endogenous arguments to cases in which there are discrete outcomes, cases which have so far been excluded from consideration. The results point to a neglected consequence of interval censoring and grouping, namely the loss of point identifying power that can result when endogeneity is present.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:05/07&r=ecm
  21. By: Kleopatra Nikolaou (Warwick Business School)
    Abstract: We test for mean reversion in real exchange rates using a recently developed unit root test for non-normal processes based on quantile autoregression inference in semi-parametric and non-parametric settings. The quantile regression approach allows us to directly capture the impact of different magnitudes of shocks that hit the real exchange rate, conditional on its past history, and can detect asymmetric, dynamic adjustment of the real exchange rate towards its long-run equilibrium. Our results suggest that large shocks tend to induce strong mean-reverting tendencies in the exchange rate, with half-lives less than one year in the extreme quantiles. Mean reversion is faster when large shocks originate at points of large real exchange rate deviations from the long-run equilibrium. However, in the absence of shocks no mean reversion is observed. Finally, we report asymmetries in the dynamic adjustment of the RER.
    Keywords: real exchange rate, purchasing power parity, quantile regression
    JEL: F31
    Date: 2007–02–02
    URL: http://d.repec.org/n?u=RePEc:mmf:mmfc06:46&r=ecm
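    A schematic first-order version of the quantile autoregression behind the test (the paper's implementation may include further lags) is
        Q_{y_t}(\tau \mid y_{t-1}) = \alpha_0(\tau) + \alpha_1(\tau) y_{t-1},
    where mean reversion at quantile \tau corresponds to \alpha_1(\tau) < 1, with implied half-life \ln(0.5) / \ln \alpha_1(\tau); letting \alpha_1 vary across \tau is what captures the asymmetric adjustment reported above.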
  22. By: Wittenberg, Martin
    Abstract: We present a test of the hypothesis that a subset of the regressors are all proxying for the same latent variable. This issue will be of interest in cases where there are several correlated measures of elusive concepts such as misgovernance or corruption; in analyses where key variables such as income are not measured at all and one is forced to rely on various proxies; and where the key regressors are badly measured and one is trying to extract a stronger signal from the regression by adding additional proxies as suggested by Lubotsky and Wittenberg (2006). We apply this test in three contexts, each characterised by a different estimation challenge arising from data limitations. We reexamine Mauro's (1995) use of three institutional quality measures in his study of corruption and growth. Here several variables, each potentially measured with error, may all be proxies for a single factor: the quality of governance. Our test suggests that the latent variable is driven primarily by the “red tape” measure, rather than the “corruption” variable on which Mauro focuses. Secondly, we look at the correlates of body mass among black South African women. The key variable of interest, namely “wealth”, is not measured at all. Consequently we construct an index from a series of asset variables as suggested by Filmer and Pritchett (2001). Our test shows that some assets have independent impacts on the dependent variable. Once this is recognised, the “asset index” comes apart. Finally we analyse the determinants of sleep among young South Africans. The income variable in the survey is badly measured and we supplement it with asset proxies. The test again suggests that some assets are not proxying for the badly measured income variable. We can nevertheless get a substantially stronger signal on the income variable.
    Keywords: measurement error; proxy variables; specification test; asset index
    JEL: C12 C52 C13
    Date: 2007–03–31
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2550&r=ecm
  23. By: van den Berg, Gerard (Free University Amsterdam)
    Abstract: Instrumental variable estimation requires untestable exclusion restrictions. With policy effects on individual outcomes, there is typically a time interval between the moment the agent realizes that he may be exposed to the policy and the actual exposure or the announcement of the actual treatment status. In such cases there is an incentive for the agent to acquire information on the value of the IV. This leads to violation of the exclusion restriction. We analyze this in a dynamic economic model framework. This provides a foundation of exclusion restrictions in terms of economic behavior. The results are used to describe policy evaluation settings in which instrumental variables are likely or unlikely to make sense. For the latter cases we analyze the asymptotic bias. The exclusion restriction is more likely to be violated if the outcome of interest strongly depends on interactions between the agent's effort before the outcome is realized and the actual treatment status. The bias has the same sign as this interaction effect. Violation does not causally depend on the weakness of the candidate instrument or the size of the average treatment effect. With experiments, violation is more likely if the treatment and control groups are to be of similar size. We also address side-effects. We develop a novel economic interpretation of placebo effects and provide some empirical evidence for the relevance of the analysis.
    Keywords: Treatment; policy evaluation; information; selection effects; randomization; placebo effect
    JEL: J64
    Date: 2007–02–18
    URL: http://d.repec.org/n?u=RePEc:hhs:ifauwp:2007_010&r=ecm
  24. By: Ozun, Alper; Cifter, Atilla
    Abstract: Long-term memory effects in stock prices, if any, might be captured with alternative models. While the Geweke and Porter-Hudak (1983) test models long memory with an OLS estimator, a newer approach based on wavelet analysis provides a WOLS estimator of the memory effect. This article examines the long-term memory of the Istanbul Stock Index with the Daubechies-20, Daubechies-12, Daubechies-4 and Haar wavelets, and compares the results of the WOLS estimators with those of the OLS estimator based on the Geweke and Porter-Hudak test. While the results of the GPH test imply that the stock returns are memoryless, the fractional integration parameters based on the Daubechies wavelets show that there is an explicit long-memory effect in the stock returns. The results have crucial implications, both methodological and practical. On the theoretical side, the wavelet-based OLS estimator is superior in modeling the behaviour of stock returns in emerging markets, where nonlinearities and high volatility exist due to their chaotic natures. For practical aims, on the other hand, the results show that the Istanbul Stock Exchange is not weak-form efficient, because the prices have memories that are not yet reflected in the prices.
    Keywords: Long-term memory; Wavelets; Stock prices; GPH test
    JEL: C45 G12
    Date: 2007–02–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2481&r=ecm
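    The GPH estimator referred to here is the OLS slope in the log-periodogram regression (standard form, with I(\lambda_j) the periodogram at Fourier frequency \lambda_j = 2\pi j / T):
        \ln I(\lambda_j) = c - d \ln\left[ 4 \sin^2( \lambda_j / 2 ) \right] + u_j, \qquad j = 1, \dots, m,
    where d is the fractional integration (long-memory) parameter; the wavelet-based WOLS variant instead estimates d from a regression involving log wavelet coefficients across scales.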
  25. By: Ángel León (Universidad de Alicante); Javier Mencía (Banco de España); Enrique Sentana (Centro de Estudios Monetarios y Financieros (CEMFI))
    Abstract: We derive the statistical properties of the SNP densities of Gallant and Nychka (1987). We show that these densities, which are always positive, are more flexible than truncated Gram-Charlier expansions with positivity restrictions. We use the SNP densities for financial derivatives valuation. We relate real and risk-neutral measures, obtain closed-form prices for European options, and analyse the semiparametric properties of our pricing model. In an empirical application to S&P500 index options, we compare our model to the standard and Practitioner's Black-Scholes formulas, truncated expansions, and the Generalised Beta and Variance Gamma models.
    Keywords: Kurtosis, density expansions, Gram-Charlier, skewness, S&P index options
    JEL: G13 C16
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:0707&r=ecm
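    The SNP densities studied here take the squared-polynomial form of Gallant and Nychka (schematic statement):
        f(x) = \frac{ \left[ P_K(x) \right]^2 \phi(x) }{ \int \left[ P_K(u) \right]^2 \phi(u) \, du },
    where \phi is the standard normal density and P_K a polynomial of degree K; squaring enforces positivity everywhere, which is the sense in which these densities are more flexible than truncated Gram-Charlier expansions with positivity restrictions.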
  26. By: Domenico Giannone (ECARES Université Libre de Bruxelles); Lucrezia Reichlin (European Central Bank); David H Small (Federal Reserve Board)
    Abstract: This paper formalizes the process of updating the nowcast and forecast of output and inflation as new releases of data become available. The marginal contribution of a particular release for the value of the signal and its precision is evaluated by computing "news" on the basis of an evolving conditioning information set. The marginal contribution is then split into what is due to timeliness of information and what is due to economic content. We find that the Federal Reserve Bank of Philadelphia surveys have a large marginal impact on the nowcast of both inflation variables and real variables, and this effect is larger than that of the Employment Report. When we control for timeliness of the releases, the effect of hard data becomes sizeable. Prices and quantities affect the precision of the estimates of inflation, while GDP is only affected by real variables and interest rates.
    JEL: E52 C33 C53
    Date: 2007–02–02
    URL: http://d.repec.org/n?u=RePEc:mmf:mmfc06:164&r=ecm
  27. By: Alastair Cunningham (Bank of England); Chris Jeffery (Bank of England); George Kapetanios (Queen Mary and Westfield College and Bank of England); Vincent Labhard (European Central Bank)
    Abstract: The paper describes the challenges that uncertainty over the true value of key macroeconomic variables poses for policymakers and the way in which they may form and update their priors in light of a range of indicators. Specifically, it casts the data uncertainty challenge in state space form and illustrates - in this setting - how the policymaker's data uncertainty problem is related to any constraints that an optimising statistical agency might face in resolving its own data uncertainty challenge. The paper uses this intuition to motivate a set of identifying assumptions that might be used in the practical application of the Kalman Filter to form and update priors on the basis of a variety of indicators. In doing so, it moves beyond the simple methodology for deriving "best guesses" of the true value of economic variables outlined in Ashley, Driver, Hayes, and Jeffery (2005).
    Date: 2007–02–02
    URL: http://d.repec.org/n?u=RePEc:mmf:mmfc06:168&r=ecm
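    The generic linear state space form in which the data uncertainty problem is cast is
        y_t = Z \alpha_t + \varepsilon_t, \qquad \alpha_t = T \alpha_{t-1} + \eta_t,
    where y_t stacks the published releases and indicators and \alpha_t contains the true values of the macroeconomic variables; the paper's identifying assumptions restrict the system matrices so that the Kalman Filter can form and update the policymaker's priors.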
  28. By: Heather Anderson (Australian National University); Mardi Dungey (University of Cambridge); Denise Osborn (University of Manchester); Farshid Vahid (Australian National University)
    Abstract: The conduct of time series analysis on the Euro Area currently presents problems in terms of the availability of sufficiently long data sets. The ECB has provided a dataset of quarterly data from 1970 covering many data series in its Area Wide Model (AWM), but not a number of important financial market series. This paper discusses methods for producing such backdata and the resulting difficulties in selecting aggregation methods. Simple application of the AWM weights results in orders-of-magnitude differences in financial series. The use of different aggregation methods across series induces relationships. The effects of different possible methods of constructing data are shown through estimation of simple Taylor rules, which result in different weights on output gaps and inflation deviations for what are purportedly the same data.
    Keywords: Euro area, data aggregation, Taylor rule
    JEL: C82 C43 E58
    Date: 2007–02–02
    URL: http://d.repec.org/n?u=RePEc:mmf:mmfc06:99&r=ecm
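    The simple Taylor rules used in the comparison take the standard form (Taylor's original calibration shown for illustration; the paper estimates the weights):
        i_t = r^* + \pi_t + 0.5 ( \pi_t - \pi^* ) + 0.5 \tilde{y}_t,
    where \tilde{y}_t is the output gap and \pi^* the inflation target; differently constructed euro area backdata yield different estimated weights on the gap and inflation terms.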

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.