nep-ecm New Economics Papers
on Econometrics
Issue of 2006‒04‒01
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Inference via kernel smoothing of bootstrap P values By Jeff Racine; James MacKinnon
  2. Rank Tests for Instrumental Variables Regression with Weak Instruments By Donald W.K. Andrews; Gustavo Soares
  3. Smoothed L-estimation of regression function By Cizek,P.; Tamine,J.; Haerdle,W.
  4. Combining forecasts from nested models By Todd E. Clark; Michael W. McCracken
  5. Seeing the wood for the trees: A critical evaluation of methods to estimate the parameters of stochastic differential equations By Stan Hurn; J. Jeisman; K.A. Lindsay
  6. Modelling Security Market Events in Continuous Time: Intensity Based, Multivariate Point Process Models By Clive G. Bowsher
  7. Impulse Response Functions from Structural Dynamic Factor Models: A Monte Carlo Evaluation By George Kapetanios and Massimiliano Marcellino
  8. Testing Portfolio Efficiency with Conditioning Information By Wayne E. Ferson; Andrew F. Siegel
  9. On the specification of regression models with spatial dependence - an application of the accessibility concept By Andersson, Martin; Gråsjö, Urban
  10. Teaching an old dog new tricks: Improved estimation of the parameters of SDEs by numerical solution of the Fokker-Planck equation By Stan Hurn; J. Jeisman; K.A. Lindsay
  11. Meta-modeling by symbolic regression and Pareto simulated annealing By Stinstra,Erwin; Rennen,Gijs; Teeuwen,Geert
  12. Large dimension forecasting models and random singular value spectra By Jean-Philippe Bouchaud; Laurent Laloux; M. Augusta Miceli; Marc Potters
  13. On a multi-timescale statistical feedback model for volatility fluctuations By Lisa Borland; Jean-Philippe Bouchaud
  14. Power Indices for Revealed Preference Tests By James Andreoni; William T. Harbaugh
  15. Testing for Purchasing Power Parity Under a Target Zone Exchange Rate Regime By J. Isaac Miller
  16. Design of web questionnaires: an information-processing perspective for the effect of response categories By Toepoel,Vera; Vis,Corrie; Das,Marcel; Soest,Arthur van
  17. Stochastic Volatility By Neil Shephard
  18. Consistent Information Multivariate Density Optimizing Methodology By Miguel Segoviano

  1. By: Jeff Racine (McMaster University); James MacKinnon (Queen's University)
    Abstract: Resampling methods such as the bootstrap are routinely used to estimate the finite-sample null distributions of a range of test statistics. We present a simple and tractable way to perform classical hypothesis tests based upon a kernel estimate of the CDF of the bootstrap statistics. This approach has a number of appealing features: i) it can perform well when the number of bootstraps is extremely small, ii) it is approximately exact, and iii) it can yield substantial power gains relative to the conventional approach. The proposed approach is likely to be useful when the statistic being bootstrapped is computationally expensive.
    Keywords: resampling, Monte Carlo test, bootstrap test, percentiles
    JEL: C12 C14 C15
    Date: 2006–03
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1054&r=ecm
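    A minimal sketch of a kernel-smoothed bootstrap p-value of the kind described in the entry above, assuming a Gaussian kernel CDF, a right-tailed test statistic, and a rule-of-thumb bandwidth (all names and choices here are illustrative, not the authors'):

      import numpy as np
      from scipy.stats import norm

      def smoothed_bootstrap_pvalue(tau_hat, tau_boot, bandwidth=None):
          """Right-tail p-value from a kernel estimate of the bootstrap CDF.
          tau_hat  : test statistic computed on the original sample
          tau_boot : array of B bootstrap replications of the statistic"""
          tau_boot = np.asarray(tau_boot, dtype=float)
          B = tau_boot.size
          if bandwidth is None:
              # Silverman-style rule of thumb; the paper's bandwidth choice may differ.
              bandwidth = 1.06 * tau_boot.std(ddof=1) * B ** (-1 / 5)
          # Kernel CDF estimate: F_hat(t) = (1/B) * sum_b Phi((t - tau_b) / h)
          F_hat = norm.cdf((tau_hat - tau_boot) / bandwidth).mean()
          return 1.0 - F_hat  # reject for large values of the statistic

      rng = np.random.default_rng(0)
      print(smoothed_bootstrap_pvalue(2.1, rng.standard_normal(99)))  # usable even when B is small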
  2. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Gustavo Soares (Department of Economics, Yale University)
    Abstract: This paper considers tests in an instrumental variables (IVs) regression model with IVs that may be weak. Tests that have near-optimal asymptotic power properties with Gaussian errors for weak and strong IVs have been determined in Andrews, Moreira, and Stock (2006a). In this paper, we seek tests that have near-optimal asymptotic power with Gaussian errors and improved power with non-Gaussian errors relative to existing tests. Tests with such properties are obtained by introducing rank tests that are analogous to the conditional likelihood ratio test of Moreira (2003). We also introduce a rank test that is analogous to the Lagrange multiplier test of Kleibergen (2002) and Moreira (2001).
    Keywords: Asymptotically similar tests, Conditional likelihood ratio test, Instrumental variables regression, Lagrange multiplier test, Power of test, Rank tests, Thick-tailed distribution, Weak instruments
    JEL: C12 C30
    Date: 2006–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1564&r=ecm
  3. By: Cizek,P.; Tamine,J.; Haerdle,W. (Tilburg University, Center for Economic Research)
    Abstract: The Nadaraya-Watson nonparametric estimator of regression is known to be highly sensitive to the presence of outliers in data. This sensitivity can be reduced, for example, by using local L-estimates of regression. Whereas the local L-estimation is traditionally done using an empirical conditional distribution function, we propose to use instead a smoothed conditional distribution function. The asymptotic distribution of the proposed estimator is derived under mild β-mixing conditions, and additionally, we show that the smoothed L-estimation approach provides computational as well as statistical finite-sample improvements. Finally, the proposed method is applied to the modelling of implied volatility.
    Keywords: nonparametric regression;L-estimation;smoothed cumulative distribution function
    JEL: C13 C14
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200620&r=ecm
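    An illustrative sketch of estimating a smoothed conditional distribution function and reading off a local trimmed-mean L-estimate of the regression function from its quantiles; the Gaussian kernels, bandwidths, and 10% trimming level are assumptions for illustration rather than the authors' choices:

      import numpy as np
      from scipy.stats import norm

      def smoothed_conditional_cdf(x0, X, Y, hx, hy, y_grid):
          # Nadaraya-Watson weights in the x-direction
          w = norm.pdf((x0 - X) / hx)
          w = w / w.sum()
          # Smoothed conditional CDF: F(y|x0) = sum_i w_i * Phi((y - Y_i) / hy)
          return np.array([(w * norm.cdf((y - Y) / hy)).sum() for y in y_grid])

      def local_L_estimate(x0, X, Y, hx, hy, trim=0.1, n_grid=400):
          """Trimmed-mean L-estimate of m(x0) from the smoothed conditional CDF."""
          y_grid = np.linspace(Y.min() - 3 * hy, Y.max() + 3 * hy, n_grid)
          F = smoothed_conditional_cdf(x0, X, Y, hx, hy, y_grid)
          probs = np.linspace(trim, 1 - trim, 50)
          quantiles = np.interp(probs, F, y_grid)   # invert the CDF on the grid
          return quantiles.mean()                    # average of the central quantiles

      rng = np.random.default_rng(1)
      X = rng.uniform(0, 1, 300)
      Y = np.sin(2 * np.pi * X) + rng.standard_normal(300) * 0.3
      print(local_L_estimate(0.5, X, Y, hx=0.08, hy=0.15))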
  4. By: Todd E. Clark; Michael W. McCracken
    Abstract: Motivated by the common finding that linear autoregressive models forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo, and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytics, the unrestricted model is true, but as the sample size grows, the DGP converges to the restricted model. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive MSE-minimizing weights for combining the restricted and unrestricted forecasts. In the Monte Carlo and empirical analysis, we compare the effectiveness of our combination approach against related alternatives, such as Bayesian estimation.
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp06-02&r=ecm
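    A minimal sketch of combining forecasts from a restricted (nested) model and an unrestricted model, with the combination weight chosen to minimise mean squared error over an evaluation window; the grid search and the toy forecast series are illustrative stand-ins for the paper's analytical MSE-minimizing weights:

      import numpy as np

      def combine_nested_forecasts(y, f_restricted, f_unrestricted, weights=None):
          """Pick the weight on the restricted forecast that minimises past MSE."""
          if weights is None:
              weights = np.linspace(0.0, 1.0, 101)
          mse = [np.mean((y - (w * f_restricted + (1 - w) * f_unrestricted)) ** 2)
                 for w in weights]
          w_star = weights[int(np.argmin(mse))]
          return w_star, w_star * f_restricted + (1 - w_star) * f_unrestricted

      rng = np.random.default_rng(2)
      x = rng.standard_normal(200)                               # weak predictor
      y = 0.1 * x + rng.standard_normal(200)                     # target with low predictable content
      f_restricted = np.zeros(200)                               # nested, no-predictability forecast
      f_unrestricted = 0.1 * x + 0.2 * rng.standard_normal(200)  # signal plus estimation noise
      w_star, f_comb = combine_nested_forecasts(y, f_restricted, f_unrestricted)
      print(w_star)  # typically puts substantial weight on the restricted forecast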
  5. By: Stan Hurn; J. Jeisman; K.A. Lindsay (School of Economics and Finance, Queensland University of Technology)
    Abstract: Maximum likelihood (ML) estimates of the parameters of stochastic differential equations (SDEs) are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional density of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This paper provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox-Ingersoll-Ross and Ornstein-Uhlenbeck equations respectively.
    Keywords: stochastic differential equations, parameter estimation, maximum likelihood, simulation, moments
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:qut:sthurn:2006&r=ecm
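    For the Ornstein-Uhlenbeck process mentioned in the entry above, the transitional density is Gaussian in closed form, so exact maximum likelihood is available as a benchmark against which other procedures can be judged; a minimal sketch, assuming the parameterisation dX = kappa*(theta - X)dt + sigma*dW:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def ou_neg_loglik(params, x, dt):
          """Exact negative log-likelihood of an Ornstein-Uhlenbeck process.
          X_{t+dt} | X_t ~ N(theta + (X_t - theta)*exp(-kappa*dt),
                             sigma^2*(1 - exp(-2*kappa*dt)) / (2*kappa))"""
          kappa, theta, sigma = params
          if kappa <= 0 or sigma <= 0:
              return np.inf
          mean = theta + (x[:-1] - theta) * np.exp(-kappa * dt)
          var = sigma ** 2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
          return -norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var)).sum()

      # Simulate a path and recover the parameters
      rng = np.random.default_rng(3)
      kappa, theta, sigma, dt, n = 2.0, 0.05, 0.1, 1 / 52, 2000
      x = np.empty(n); x[0] = theta
      for t in range(1, n):
          m = theta + (x[t - 1] - theta) * np.exp(-kappa * dt)
          v = sigma ** 2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
          x[t] = m + np.sqrt(v) * rng.standard_normal()

      fit = minimize(ou_neg_loglik, x0=[1.0, 0.0, 0.2], args=(x, dt), method="Nelder-Mead")
      print(fit.x)  # estimates of (kappa, theta, sigma)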
  6. By: Clive G. Bowsher (Nuffield College, Oxford University)
    Abstract: A continuous time econometric modelling framework for multivariate financial market event (or 'transactions') data is developed in which the model is specified via the vector conditional intensity. This has the advantage that the conditioning information set is updated continuously in time as new information arrives. Generalised Hawkes (g-Hawkes) models are introduced that are sufficiently flexible to incorporate 'inhibitory' events and dependence between trading days. Novel omnibus specification tests for parametric models based on a multivariate random time change theorem are proposed. A computationally efficient thinning algorithm for simulation of g-Hawkes processes is also developed. A continuous time, bivariate point process model of the timing of trades and mid-quote changes is presented for a New York Stock Exchange stock and the empirical findings are related to the market microstructure literature. The two-way interaction of trades and quote changes is found to be important empirically. Furthermore, the model delivers a continuous record of instantaneous volatility that is conditional on the timing of trades and quote changes.
    Keywords: Point process, conditional intensity, Hawkes process, specification test, random time change, transactions data, market microstructure.
    JEL: C32 C51 C52 G10
    Date: 2005–10–01
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:0526&r=ecm
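    A minimal sketch of simulating a plain univariate exponential Hawkes process by Ogata-style thinning, the basic ingredient that the generalised Hawkes models and the thinning algorithm in the entry above extend; the parameter values are illustrative:

      import numpy as np

      def simulate_hawkes(mu, alpha, beta, T, seed=0):
          """Simulate event times on [0, T] of a Hawkes process with intensity
          lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
          by thinning a dominating Poisson process (Ogata-style)."""
          rng = np.random.default_rng(seed)
          events = []
          t = 0.0
          while True:
              # Between events the intensity is non-increasing, so the intensity
              # evaluated just after the current time is a valid upper bound.
              lam_bar = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
              t += rng.exponential(1.0 / lam_bar)
              if t > T:
                  break
              lam_t = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
              if rng.uniform() * lam_bar <= lam_t:   # accept with probability lam_t / lam_bar
                  events.append(t)
          return np.array(events)

      # Stationarity requires alpha/beta < 1; the long-run rate is mu / (1 - alpha/beta).
      times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=500.0)
      print(len(times), "events")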
  7. By: George Kapetanios and Massimiliano Marcellino
    Abstract: The estimation of structural dynamic factor models (DFMs) for large sets of variables is attracting considerable attention. In this paper we briefly review the underlying theory and then compare the impulse response functions resulting from two alternative estimation methods for the DFM. Finally, as an example, we reconsider the issue of the identification of the driving forces of the US economy, using data for about 150 macroeconomic variables.
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:306&r=ecm
  8. By: Wayne E. Ferson; Andrew F. Siegel
    Abstract: We develop asset pricing models' implications for portfolio efficiency when there is conditioning information in the form of a set of lagged instruments. A model of expected returns identifies a portfolio that should be minimum variance efficient with respect to the conditioning information. Our tests refine previous tests of portfolio efficiency, using the conditioning information optimally. We reject the efficiency of all static or time-varying combinations of the three Fama-French (1996) factors with respect to the conditioning information and also the conditional efficiency of time-varying combinations of the factors, given standard lagged instruments.
    JEL: C12 C51 C52 G12
    Date: 2006–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:12098&r=ecm
  9. By: Andersson, Martin (CESIS - Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology); Gråsjö, Urban (CESIS - Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology)
    Abstract: Using the taxonomy by Anselin (2003), this paper investigates how the inclusion of spatially discounted variables on the ‘right-hand-side’ (RHS) in empirical spatial models affects the extent of spatial autocorrelation. The basic proposition is that the inclusion of inputs external to the spatial observation in question as a separate variable reveals spatial dependence via the parameter estimate. One of the advantages of this method is that it allows for a direct interpretation. The paper also tests to what extent significance of the estimated parameters of the spatially discounted explanatory variables can be interpreted as evidence of spatial dependence. Additionally, the paper advocates the use of the accessibility concept for spatial weights. Accessibility is related to spatial interaction theory and can be motivated theoretically by adhering to the preference structure in random choice theory. Monte Carlo simulations show that the coefficient estimates of the accessibility variables are significantly different from zero in the case of modelled effects. The rejection frequency of the three typical tests (Moran’s I, LM-lag and LM-err) is significantly reduced when these additional variables are included in the model. When the coefficient estimates of the accessibility variables are statistically significant, problems of spatial autocorrelation are significantly reduced, so significance of the accessibility variables can be interpreted as evidence of spatial dependence.
    Keywords: accessibility; spatial dependence; spatial econometrics; Monte Carlo Simulations; spatial spillovers
    JEL: C31 C51 R15
    Date: 2006–03–28
    URL: http://d.repec.org/n?u=RePEc:hhs:cesisp:0051&r=ecm
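    A minimal sketch of constructing an accessibility-type, spatially discounted explanatory variable and placing it on the right-hand side of a regression, in the spirit of the entry above; the exponential distance decay, the simulated geography, and the OLS fit are illustrative assumptions:

      import numpy as np

      def accessibility(values, dist, decay):
          """Accessibility of region i to variable x: A_i = sum_{j != i} exp(-decay*d_ij)*x_j,
          excluding the region's own value so the variable captures external inputs."""
          W = np.exp(-decay * dist)
          np.fill_diagonal(W, 0.0)
          return W @ values

      rng = np.random.default_rng(4)
      n = 100
      coords = rng.uniform(0, 10, size=(n, 2))
      dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
      x = rng.standard_normal(n)                        # local input
      x_acc = accessibility(x, dist, decay=0.5)         # spatially discounted external input
      y = 1.0 + 0.8 * x + 0.4 * x_acc + 0.5 * rng.standard_normal(n)

      X = np.column_stack([np.ones(n), x, x_acc])
      beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(beta_hat)   # a significant coefficient on x_acc signals spatial dependence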
  10. By: Stan Hurn; J. Jeisman; K.A. Lindsay (School of Economics and Finance, Queensland University of Technology)
    Abstract: Many stochastic differential equations (SDEs) do not have readily available closed-form expressions for their transitional probability density functions (PDFs). As a result, a large number of competing estimation approaches have been proposed in order to obtain maximum-likelihood estimates of their parameters. Arguably the most straightforward of these is one in which the required estimates of the transitional PDF are obtained by numerical solution of the Fokker-Planck (or forward-Kolmogorov) partial differential equation. Despite the fact that this method produces accurate estimates and is completely generic, it has not proved popular in the applied literature. Perhaps this is attributable to the fact that this approach requires repeated solution of a parabolic partial differential equation to obtain the transitional PDF and is therefore computationally quite expensive. In this paper, three avenues for improving the reliability and speed of this estimation method are introduced and explored in the context of estimating the parameters of the popular Cox-Ingersoll-Ross and Ornstein-Uhlenbeck models. The recommended algorithm that emerges from this investigation is seen to offer substantial gains in reliability and computational time.
    Keywords: stochastic differential equations, maximum likelihood, finite difference, finite element, cumulative distribution function, interpolation.
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:qut:sthurn:2006-01&r=ecm
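    A minimal sketch of approximating the transitional PDF of a scalar diffusion dX = mu(X)dt + sigma(X)dW by an explicit finite-difference solution of the Fokker-Planck (forward Kolmogorov) equation; the grid, time step, and Gaussian approximation to the initial delta function are illustrative choices, not the recommended algorithm of the paper:

      import numpy as np

      def transition_pdf_fokker_planck(mu, sigma, x0, delta_t, x_grid, n_steps=2000):
          """Approximate p(x, delta_t | x0, 0) by explicit finite differences on
          dp/dt = -d/dx[mu(x) p] + 0.5 d^2/dx^2[sigma(x)^2 p]."""
          dx = x_grid[1] - x_grid[0]
          dt = delta_t / n_steps                       # must be small enough for stability
          # Narrow Gaussian as a smooth stand-in for the initial delta function at x0.
          p = np.exp(-0.5 * ((x_grid - x0) / (2 * dx)) ** 2)
          p /= np.trapz(p, x_grid)
          drift = mu(x_grid)
          diff2 = sigma(x_grid) ** 2
          for _ in range(n_steps):
              f = drift * p
              g = diff2 * p
              p_new = p.copy()
              p_new[1:-1] = (p[1:-1]
                             - dt * (f[2:] - f[:-2]) / (2 * dx)
                             + 0.5 * dt * (g[2:] - 2 * g[1:-1] + g[:-2]) / dx ** 2)
              p_new[0] = p_new[-1] = 0.0               # absorbing boundaries placed far from x0
              p = p_new
          return p

      # Ornstein-Uhlenbeck example: dX = kappa*(theta - X) dt + sig dW
      kappa, theta, sig = 2.0, 0.05, 0.1
      x_grid = np.linspace(-0.6, 0.7, 401)
      p = transition_pdf_fokker_planck(lambda x: kappa * (theta - x),
                                       lambda x: sig * np.ones_like(x),
                                       x0=0.0, delta_t=0.25, x_grid=x_grid)
      print(np.trapz(p, x_grid))   # should remain close to 1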
  11. By: Stinstra,Erwin; Rennen,Gijs; Teeuwen,Geert (Tilburg University, Center for Economic Research)
    Abstract: The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method uses linear regression for the estimation of constants. Interval arithmetic is applied to ensure the consistency of a model. In order to prevent over-fitting, we assess a model not only on its predictions at the data points, but also on its complexity, for which we introduce a new measure. We compare our new method with the Kriging meta-model and with a Symbolic Regression meta-model based on Genetic Programming. We conclude that Pareto Simulated Annealing based Symbolic Regression is very competitive compared to the other meta-model approaches.
    Keywords: approximation;meta-modeling;pareto simulated annealing;symbolic regression
    JEL: C14
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200615&r=ecm
  12. By: Jean-Philippe Bouchaud (Science & Finance, Capital Fund Management; CEA Saclay;); Laurent Laloux (Science & Finance, Capital Fund Management); M. Augusta Miceli; Marc Potters (Science & Finance, Capital Fund Management)
    Abstract: We present a general method to detect and extract from a finite time sample statistically meaningful correlations between input and output variables of large dimensionality. Our central result is derived from the theory of free random matrices, and gives an explicit expression for the interval where singular values are expected in the absence of any true correlations between the variables under study. Our result can be seen as the natural generalization of the Marčenko-Pastur distribution to the case of rectangular correlation matrices. We illustrate the usefulness of our method on a set of macroeconomic time series.
    Date: 2005–12
    URL: http://d.repec.org/n?u=RePEc:sfi:sfiwpa:500066&r=ecm
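    A minimal sketch of the kind of comparison described in the entry above: the singular values of the empirical cross-correlation matrix between standardized inputs and outputs, set against a null band obtained here by random permutation rather than the paper's analytical free-random-matrix result:

      import numpy as np

      def cross_correlation_singular_values(X, Y):
          """Singular values of the N x M cross-correlation matrix between
          standardized inputs X (T x N) and outputs Y (T x M)."""
          Xs = (X - X.mean(0)) / X.std(0)
          Ys = (Y - Y.mean(0)) / Y.std(0)
          C = Xs.T @ Ys / X.shape[0]
          return np.linalg.svd(C, compute_uv=False)

      rng = np.random.default_rng(5)
      T, N, M = 500, 30, 20
      X = rng.standard_normal((T, N))
      Y = rng.standard_normal((T, M))
      Y[:, 0] += 0.5 * X[:, 0]                  # plant one genuine correlation

      sv = cross_correlation_singular_values(X, Y)
      # Permutation null: shuffling the rows of Y destroys any true input-output link.
      null = np.array([cross_correlation_singular_values(X, Y[rng.permutation(T)]).max()
                       for _ in range(200)])
      print(sv.max(), np.quantile(null, 0.99))  # a top singular value above the null band is meaningful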
  13. By: Lisa Borland (Evnine-Vaughan Associates, Inc.); Jean-Philippe Bouchaud (Science & Finance, Capital Fund Management; CEA Saclay;)
    Abstract: We study, both analytically and numerically, an ARCH-like, multiscale model of volatility, which assumes that the volatility is governed by the observed past price changes on different time scales. With a power-law distribution of time horizons, we obtain a model that captures most stylized facts of financial time series: Student-like distribution of returns with a power-law tail, long memory of the volatility, slow convergence of the distribution of returns towards the Gaussian distribution, multifractality and anomalous volatility relaxation after shocks. At variance with recent multifractal models that are strictly time reversal invariant, the model also reproduces the time asymmetry of financial time series: past large-scale volatility influences future small-scale volatility. In order to reproduce all empirical observations quantitatively, the parameters must be chosen such that the model is close to an instability, meaning that (a) the feedback effect is important and substantially increases the volatility, and (b) the model is intrinsically difficult to calibrate because of the very long-range nature of the correlations. By imposing the consistency of the model predictions with a large set of different empirical observations, a reasonable range of parameter values can be determined. The model can easily be generalized to account for jumps, skewness and multi-asset correlations.
    JEL: G10
    Date: 2005–07
    URL: http://d.repec.org/n?u=RePEc:sfi:sfiwpa:500059&r=ecm
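    A stylized simulation of an ARCH-like multiscale feedback in the spirit of the entry above, in which today's variance loads on squared past price changes over many horizons with power-law weights; the functional form and parameter values are assumptions made for illustration, not the authors' specification:

      import numpy as np

      def simulate_multiscale_vol(n=5000, sigma0=0.01, g=0.12, alpha=1.2, max_lag=250, seed=6):
          """r_t = sigma_t * eps_t with
          sigma_t^2 = sigma0^2 + g * sum_{tau} tau^(-alpha) * (p_{t-1} - p_{t-1-tau})^2 / tau,
          i.e. volatility fed back from squared past price moves on many time scales."""
          rng = np.random.default_rng(seed)
          r = np.zeros(n)
          p = np.zeros(n)                                  # log-price
          weights = np.arange(1, max_lag + 1) ** (-alpha)  # power-law horizon weights
          for t in range(1, n):
              lags = np.arange(1, min(t - 1, max_lag) + 1)
              moves2 = (p[t - 1] - p[t - 1 - lags]) ** 2 / lags
              var_t = sigma0 ** 2 + g * np.dot(weights[:lags.size], moves2)
              r[t] = np.sqrt(var_t) * rng.standard_normal()
              p[t] = p[t - 1] + r[t]
          return r

      r = simulate_multiscale_vol()
      # Unconditional st.dev. and a slowly decaying autocorrelation of |r| (volatility clustering)
      print(np.std(r), np.corrcoef(np.abs(r[:-50]), np.abs(r[50:]))[0, 1])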
  14. By: James Andreoni; William T. Harbaugh
    Date: 2006–03–27
    URL: http://d.repec.org/n?u=RePEc:cla:levrem:122247000000001257&r=ecm
  15. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: We show that typical tests for purchasing power parity (PPP) using exchange rates governed by a target zone regime are inherently misspecified. Regardless of whether or not long-run PPP holds, the real exchange rate cannot be mean-reverting in the usual sense, since the nominal exchange rate is generated by a nonlinear transformation of a nonstationary economic fundamental. As an alternative, we propose basing the real exchange rate (and thus a PPP test) on conditional expectations of this unobservable fundamental. As an illustration, we test for long-run PPP between Denmark and the Euro area.
    Keywords: Target Zone Exchange Rates, Purchasing Power Parity, Nonlinear Transformations, Extended Kalman Filter
    JEL: C13 C22 C32
    Date: 2006–03–02
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:0604&r=ecm
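    A minimal sketch of filtering an unobservable fundamental out of a bounded, target-zone-style exchange rate with an extended Kalman filter, in the spirit of the entry above; the random-walk fundamental, the tanh mapping, and the noise variances are illustrative assumptions rather than the paper's model:

      import numpy as np

      def ekf_fundamental(s_obs, g, g_prime, q, r, f0=0.0, p0=1.0):
          """Extended Kalman filter for
              f_t = f_{t-1} + w_t,    w_t ~ N(0, q)   (latent fundamental)
              s_t = g(f_t) + v_t,     v_t ~ N(0, r)   (observed exchange rate)
          returning the filtered estimates E[f_t | s_1..s_t]."""
          f, P = f0, p0
          filtered = []
          for s in s_obs:
              # Predict
              f_pred, P_pred = f, P + q
              # Linearise the observation equation around the prediction
              H = g_prime(f_pred)
              S = H * P_pred * H + r
              K = P_pred * H / S
              # Update with the observed rate
              f = f_pred + K * (s - g(f_pred))
              P = (1.0 - K * H) * P_pred
              filtered.append(f)
          return np.array(filtered)

      # Simulate a target-zone-style rate s_t = tanh(f_t) + noise and filter the fundamental back out.
      rng = np.random.default_rng(7)
      n, q, r = 500, 0.01, 0.0025
      f_true = np.cumsum(np.sqrt(q) * rng.standard_normal(n))
      s = np.tanh(f_true) + np.sqrt(r) * rng.standard_normal(n)
      f_hat = ekf_fundamental(s, np.tanh, lambda x: 1.0 - np.tanh(x) ** 2, q, r)
      print(np.corrcoef(f_true, f_hat)[0, 1])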
  16. By: Toepoel,Vera; Vis,Corrie; Das,Marcel; Soest,Arthur van (Tilburg University, Center for Economic Research)
    Abstract: In this study we use an information-processing perspective to explore the impact of response scales on respondents' answers in a web survey. This paper has four innovations compared to the existing literature: the research is based on a different mode of administration (the web), we use an open-ended format as a benchmark, four different question types are used, and the study is conducted on a representative sample of the population. We find strong effects of response scales. Questions requiring estimation strategies are more affected by the choice of response format than questions in which direct recall is used. Respondents with a low need for cognition and respondents with a low need to form opinions are more affected by the response categories than respondents with a high need for cognition and a high need to evaluate. The sensitivity to contextual clues is also significantly related to gender, age and education.
    Keywords: web survey;questionnaire design;measurement error;context effects;response categories;need for cognition;need to evaluate
    JEL: C42 C81 C93
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200619&r=ecm
  17. By: Neil Shephard (Nuffield College, Oxford University)
    Date: 2005–07–01
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:0517&r=ecm
  18. By: Miguel Segoviano
    Abstract: The estimation of the profit and loss distribution of a loan portfolio requires the modelling of the portfolio's multivariate distribution. This describes the joint likelihood of changes in the credit-risk quality of the loans that make up the portfolio. A significant problem for portfolio credit risk measurement is the greatly restricted data that are available for its modelling. Under these circumstances, convenient parametric assumptions usually do not appropriately describe the behaviour of the assets that are the subject of our interest: loans granted to small and medium enterprises (SMEs), unlisted and arm's length firms. This paper proposes the Consistent Information Multivariate Density Optimizing Methodology (CIMDO), based on the cross-entropy approach, as an alternative way to generate multivariate probability densities from partial information and without making parametric assumptions. Using the probability integral transformation criterion, we show that the distributions recovered by CIMDO outperform distributions that are used for the measurement of portfolio credit risk of loans granted to SMEs, unlisted and arm's length firms.
    Date: 2006–03
    URL: http://d.repec.org/n?u=RePEc:fmg:fmgdps:dp557&r=ecm
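    A minimal sketch of the cross-entropy idea underlying the methodology in the entry above: starting from a convenient prior density, find the density closest to it in relative entropy that satisfies moment constraints implied by the partial information; the univariate setup, the prior, and the single default-probability constraint are simplified for illustration:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def cross_entropy_density(x_grid, prior_pdf, constraints, targets):
          """Density closest to the prior in relative entropy subject to
          E_q[a_k(X)] = b_k; the solution has the exponential-tilting form
          q(x) proportional to prior(x) * exp(sum_k lambda_k * a_k(x))."""
          A = np.vstack([a(x_grid) for a in constraints])          # K x n constraint matrix
          p0 = prior_pdf(x_grid)

          def dual(lam):
              tilt = p0 * np.exp(lam @ A)
              Z = np.trapz(tilt, x_grid)
              return np.log(Z) - lam @ targets                      # convex dual objective

          lam = minimize(dual, x0=np.zeros(len(constraints)), method="BFGS").x
          q = p0 * np.exp(lam @ A)
          return q / np.trapz(q, x_grid)

      # Recover a density from a standard-normal prior, given only that the
      # probability of falling below a 'default threshold' of -1.5 equals 15%.
      x = np.linspace(-6, 6, 2001)
      q = cross_entropy_density(x, norm.pdf,
                                constraints=[lambda z: (z < -1.5).astype(float)],
                                targets=np.array([0.15]))
      print(np.trapz(q * (x < -1.5), x))   # close to 0.15 by construction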

This nep-ecm issue is ©2006 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.