nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒07‒13
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Finite Sample Analysis of Weighted Realized Covariance with Noisy Asynchronous Observations By Taro Kanatani
  2. A new approach to bootstrap inference in functional coefficient models By Herwartz, Helmut; Xu, Fang
  3. Deconvoluting preferences and errors: a model for binomial panel data By Fosgerau, Mogens; Nielsen, Søren Feodor
  4. A sample selection model for unit and item nonresponse in cross-sectional surveys By Giuseppe De Luca; Franco Peracchi
  5. Bayesian Analysis of Hazard Regression Models under Order Restrictions on Covariate Effects and Ageing By Bhattacharjee, Arnab; Bhattacharjee, Madhuchhanda
  6. Maximum likelihood estimation of an extended latent markov model for clustered binary panel data By Francesco Bartolucci; Valentina Nigro
  7. Band Spectral Estimation for Signal Extraction By Tommaso Proietti
  8. Predictive Performance of Conditional Extreme Value Theory and Conventional Methods in Value at Risk Estimation By Ghorbel, Ahmed; Trabelsi, Abdelwahed
  9. A Unifying Framework for Analysing Common Cyclical Features in Cointegrated Time Series By Gianluca Cubadda
  10. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations By Nadine Chlaß; Jens J. Krüger
  11. COINTEGRATION, LONG-RUN STRUCTURAL MODELLING AND WEAK EXOGENEITY: TWO MODELS OF THE UK ECONOMY By Jan P.A.M. Jacobs; Kenneth F. Wallis
  12. A dynamic model for binary panel data with unobserved heterogeneity admitting a √n-consistent conditional estimator By Francesco Bartolucci; Valentina Nigro
  13. Polynomial Cointegration between Stationary Processes with Long Memory By Marco Avarucci; Domenico Marinucci
  14. An approach to the estimation of the distribution of marginal valuations from discrete choice data By Fosgerau, Mogens; Hjort, Katrine; Vincent Lyk-Jensen, Stéphanie
  15. Circumventing the problem of the scale: discrete choice models with multiplicative error terms By Fosgerau, Mogens; Bierlaire, Michel
  16. Second Generation Panel Unit Root Tests By Christophe Hurlin; Valérie Mignon
  17. New proposals for the quantification of qualitative survey data By Tommaso Proietti; Cecilia Frale
  18. Higher order approximations of stochastic rational expectations models By Kowal, Pawel

  1. By: Taro Kanatani (Institute of Economic Research, Kyoto University)
    Abstract: In this paper, we provide a framework for evaluating the finite sample MSE of several realized covariance estimators when nonsynchronous observations are contaminated with microstructure noise. This framework enables us to examine and compare different estimators, and we propose some estimators as an application of the framework. (An illustrative code sketch follows this entry.)
    Keywords: High frequency data; Weighted realized covariance; Nonsynchronous (asynchronous) observation; Microstructure noise
    JEL: C14 C32 C63
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:634&r=ecm
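    Sketch: the paper's weighted estimators are not reproduced here; as a point of reference, the following is a minimal plain realized covariance computed on previous-tick synchronized prices, with two irregularly observed, noise-contaminated log-price series simulated purely for illustration (all tuning choices below are assumptions of this sketch).
      import numpy as np

      def previous_tick(times, prices, grid):
          """Sample a price path at the grid points using the last available observation."""
          idx = np.searchsorted(times, grid, side="right") - 1
          return prices[np.clip(idx, 0, len(prices) - 1)]

      def realized_covariance(t1, p1, t2, p2, n_grid=78):
          """Plain realized covariance of two asynchronously observed log-price series,
          computed on a common previous-tick grid (no correction for microstructure noise)."""
          grid = np.linspace(max(t1[0], t2[0]), min(t1[-1], t2[-1]), n_grid + 1)
          r1 = np.diff(previous_tick(t1, p1, grid))
          r2 = np.diff(previous_tick(t2, p2, grid))
          return np.sum(r1 * r2)

      # Toy data: a shared efficient price observed at different random times with noise.
      rng = np.random.default_rng(0)
      n = 2000
      t1, t2 = np.sort(rng.uniform(0, 1, n)), np.sort(rng.uniform(0, 1, n))
      efficient = np.cumsum(rng.normal(0, 0.01, n))
      p1 = np.interp(t1, np.linspace(0, 1, n), efficient) + rng.normal(0, 0.001, n)
      p2 = np.interp(t2, np.linspace(0, 1, n), efficient) + rng.normal(0, 0.001, n)
      print(realized_covariance(t1, p1, t2, p2))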
  2. By: Herwartz, Helmut; Xu, Fang
    Abstract: We introduce a new, factor-based bootstrap approach for inference in functional coefficient models that is robust to heteroskedastic error terms. Modeling the functional coefficient parametrically, the bootstrap approximation of an F statistic is shown to hold asymptotically. In simulation studies with both parametric and nonparametric functional coefficients, factor-based bootstrap inference outperforms the wild bootstrap and the pairs bootstrap in terms of empirical size. Applying the functional coefficient model to a cross-sectional regression of investment on savings, the saving retention coefficient is found to depend on third variables such as the population growth rate and the openness ratio. (A wild-bootstrap sketch follows this entry.)
    Keywords: Bootstrap, heteroskedasticity, functional coefficient models, Feldstein-Horioka puzzle
    JEL: C12 C14
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:5614&r=ecm
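    Sketch: the authors' factor-based bootstrap is not reproduced here; instead, this is a minimal version of the wild-bootstrap comparator mentioned in the abstract, applied to an F-type test of a constant coefficient against a coefficient that is linear in a third variable, with heteroskedastic data simulated purely for illustration.
      import numpy as np

      rng = np.random.default_rng(1)

      def f_stat(y, X_r, X_u):
          """Classical F statistic for the restriction implied by the two design matrices;
          also returns the restricted residuals and fitted values for resampling."""
          def rss(X):
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              fit = X @ beta
              resid = y - fit
              return resid @ resid, resid, fit
          rss_u, _, _ = rss(X_u)
          rss_r, resid_r, fit_r = rss(X_r)
          q = X_u.shape[1] - X_r.shape[1]
          F = ((rss_r - rss_u) / q) / (rss_u / (len(y) - X_u.shape[1]))
          return F, resid_r, fit_r

      # Simulated functional-coefficient data with heteroskedastic errors.
      n = 200
      z = rng.uniform(0, 1, n)
      x = rng.normal(size=n)
      y = (1.0 + 0.5 * z) * x + np.abs(x) * rng.normal(size=n)

      X_r = np.column_stack([np.ones(n), x])         # null: constant coefficient
      X_u = np.column_stack([np.ones(n), x, z * x])  # alternative: coefficient linear in z

      F, resid_r, fit_r = f_stat(y, X_r, X_u)
      B = 999
      F_boot = np.empty(B)
      for b in range(B):
          v = rng.choice([-1.0, 1.0], size=n)        # Rademacher weights
          F_boot[b], _, _ = f_stat(fit_r + resid_r * v, X_r, X_u)   # impose the null, keep heteroskedasticity
      print("F =", round(float(F), 3), " wild-bootstrap p-value =", round(float(np.mean(F_boot >= F)), 3))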
  3. By: Fosgerau, Mogens; Nielsen, Søren Feodor
    Abstract: Let U be an unobserved random variable with compact support and let e_t be unobserved i.i.d. random errors, also with compact support. Observe the random variables V_t, X_t, and Y_t = 1{U + d X_t + e_t < V_t}, t <= T, where d is an unknown parameter. This type of model is relevant for many stated choice experiments. It is shown that, under weak assumptions on the support of U + e_t, the distributions of U and e_t as well as the unknown parameter d can be consistently estimated using a sieved maximum likelihood estimation procedure. The model is applied to simulated data and to actual data designed for assessing the willingness-to-pay for travel time savings. (A simulation of this data-generating process follows this entry.)
    Keywords: semi-nonparametric; nonparametric; method of sieves; binomial panel; willingness-to-pay; value of time
    JEL: C23 R41 D12 C14 Q51 C25
    Date: 2007–07–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3950&r=ecm
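    Sketch: a small simulation of the data-generating process written out in the abstract; the uniform distributions chosen for U and e_t (compactly supported, as required) and all parameter values are illustrative assumptions only.
      import numpy as np

      rng = np.random.default_rng(0)
      N, T, d = 500, 8, 1.5                          # panel dimensions and the unknown parameter d

      U = rng.uniform(-1.0, 1.0, size=(N, 1))        # unobserved individual term, compact support
      e = rng.uniform(-0.5, 0.5, size=(N, T))        # unobserved i.i.d. errors, compact support
      X = rng.uniform(0.0, 1.0, size=(N, T))         # observed covariate
      V = rng.uniform(-1.0, 3.0, size=(N, T))        # observed threshold (e.g. the bid in a stated choice)

      Y = (U + d * X + e < V).astype(int)            # observed binary responses Y_t = 1{U + d X_t + e_t < V_t}
      print("share of ones:", Y.mean())
    The sieved maximum likelihood procedure of the paper would then recover d and the distributions of U and e_t from the observed (V, X, Y) alone.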
  4. By: Giuseppe De Luca (University of Rome “Tor Vergata”); Franco Peracchi (University of Rome “Tor Vergata”)
    Abstract: We consider a general sample selection model where unit and item nonresponse simultaneously affect a regression relationship of interest, and both types of nonresponse are potentially correlated. We estimate both parametric and semiparametric specifications of the model. The parametric specification assumes that the errors in the latent regression equations follow a trivariate Gaussian distribution. The semiparametric specification avoids distributional assumptions about the underlying regression errors. In our empirical application, we estimate Engel curves for consumption expenditure using data from the first wave of SHARE (Survey of Health, Ageing and Retirement in Europe).
    Keywords: Unit nonresponse, item nonresponse, cross-sectional surveys, sample selection models, Engel curves.
    JEL: C14 C31 C34 D12
    Date: 2007–02–20
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:95&r=ecm
  5. By: Bhattacharjee, Arnab; Bhattacharjee, Madhuchhanda
    Abstract: We propose Bayesian inference in hazard regression models where the baseline hazard is unknown, covariate effects are possibly age-varying (non-proportional), and there is multiplicative frailty with arbitrary distribution. Our framework incorporates a wide variety of order restrictions on covariate dependence and duration dependence (ageing). We propose estimation and evaluation of age-varying covariate effects when covariate dependence is monotone rather than proportional. In particular, we consider situations where the lifetime conditional on a higher value of the covariate ages faster or slower than that conditional on a lower value; this kind of situation is common in applications. In addition, there may be restrictions on the nature of ageing. For example, relevant theory may suggest that the baseline hazard function decreases with age. The proposed framework enables evaluation of order restrictions in the nature of both covariate and duration dependence as well as estimation of hazard regression models under such restrictions. The usefulness of the proposed Bayesian model and inference methods is illustrated with an application to corporate bankruptcies in the UK.
    Keywords: Bayesian nonparametrics; Nonproportional hazards; Frailty; Age-varying covariate effects; Ageing.
    JEL: C41 C14 C11
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3938&r=ecm
  6. By: Francesco Bartolucci (Dipartimento di Economia, Finanza e Statistica, Università di Perugia); Valentina Nigro (Dipartimento di Studi Economico-Finanziari e Metodi Quantitativi, Università di Roma “Tor Vergata”)
    Abstract: Computational aspects of a model for clustered binary panel data are analysed. The model is based on the representation of the behavior of a subject (individual panel member) in a given cluster by means of a latent process that is decomposed into a cluster-specific component, which follows a first-order Markov chain, and an individual-specific component, which is time-invariant and is represented by a discrete random variable. In particular, an algorithm for computing the joint distribution of the response variables is introduced. The algorithm may be used even in the presence of a large number of subjects in the same cluster. An Expectation-Maximization (EM) scheme for the maximum likelihood estimation of the model is also described, showing how the Fisher information matrix can be estimated on the basis of a numerical derivative of the score vector. The estimate of this matrix is used to compute standard errors for the parameter estimates and to check the identifiability of the model and the convergence of the EM algorithm. The approach is illustrated by means of an application to a dataset on Italian employees' illness benefits. (A sketch of the score-derivative device follows this entry.)
    Keywords: EM algorithm; Finite mixture models; Heterogeneity; Latent class model; State dependence.
    JEL: C23 C25 C51 C63
    Date: 2007–02–20
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:96&r=ecm
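    Sketch: the latent Markov model itself is not reproduced here; the following illustrates only the generic device mentioned in the abstract, i.e. estimating the information matrix from a numerical derivative of the score vector and using it for standard errors, applied to a simple logit likelihood for concreteness.
      import numpy as np

      def score(beta, y, X):
          """Analytical score vector of a logit log-likelihood."""
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          return X.T @ (y - p)

      def observed_information(beta, y, X, h=1e-5):
          """Observed information as minus the numerical Jacobian of the score,
          obtained by central finite differences, one parameter at a time."""
          k = len(beta)
          J = np.zeros((k, k))
          for j in range(k):
              step = np.zeros(k)
              step[j] = h
              J[:, j] = (score(beta + step, y, X) - score(beta - step, y, X)) / (2 * h)
          return -J

      # Toy data and a crude Newton iteration for the MLE.
      rng = np.random.default_rng(0)
      n = 1000
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([-0.3, 0.8]))))

      beta = np.zeros(2)
      for _ in range(25):
          beta = beta + np.linalg.solve(observed_information(beta, y, X), score(beta, y, X))

      se = np.sqrt(np.diag(np.linalg.inv(observed_information(beta, y, X))))
      print("estimates:", beta.round(3), " standard errors:", se.round(3))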
  7. By: Tommaso Proietti (SEFEMEQ, Università di Roma “Tor Vergata”)
    Abstract: The paper evaluates the potential of band spectral estimation for extracting signals from economic time series. Two situations are considered. The first deals with trend extraction when the original data have been permanently altered by routine operations, such as prefiltering, temporal aggregation and disaggregation, and seasonal adjustment, which modify the high-frequency properties of economic time series. The second arises when the measurement model is only partially specified, in that it aims to fit the series over a particular frequency range, e.g. to interpret the long-run behaviour. These issues are illustrated with reference to a simple structural model, namely the random walk plus noise model. (A band-restricted estimation sketch follows this entry.)
    Keywords: Temporal Aggregation, Seasonal Adjustment, Trend Component, Frequency Domain.
    JEL: C22 E3
    Date: 2007–05–21
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:104&r=ecm
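    Sketch: a minimal frequency-domain fit of the random walk plus noise model mentioned in the abstract, using a Whittle-type criterion restricted to a low-frequency band of the periodogram of the first differences; the band, sample size and variances are illustrative assumptions, and this is not the paper's full procedure.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Simulate a random walk plus noise: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
      n, sig_eta, sig_eps = 500, 0.5, 1.0
      y = np.cumsum(rng.normal(0, sig_eta, n)) + rng.normal(0, sig_eps, n)

      # Periodogram of the first differences at the Fourier frequencies.
      dy = np.diff(y)
      m = len(dy)
      freqs = 2 * np.pi * np.arange(1, m // 2 + 1) / m
      I = np.abs(np.fft.fft(dy)[1:m // 2 + 1]) ** 2 / (2 * np.pi * m)

      def spec_dy(lam, var_eta, var_eps):
          """Spectral density of dy_t = eta_t + eps_t - eps_{t-1}."""
          return (var_eta + 2 * var_eps * (1 - np.cos(lam))) / (2 * np.pi)

      def band_whittle(params, band):
          """Whittle criterion restricted to the chosen frequency band."""
          var_eta, var_eps = np.exp(params)          # log parameterisation keeps variances positive
          f = spec_dy(freqs[band], var_eta, var_eps)
          return np.sum(np.log(f) + I[band] / f)

      band = freqs < np.pi / 4                       # use only the low-frequency (long-run) band
      res = minimize(band_whittle, x0=np.log([0.1, 0.1]), args=(band,), method="Nelder-Mead")
      print("estimated variances (eta, eps):", np.exp(res.x).round(3))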
  8. By: Ghorbel, Ahmed; Trabelsi, Abdelwahed
    Abstract: This paper conducts a comparative evaluation of the predictive performance of various Value at Risk (VaR) models, such as the GARCH-normal, GARCH-t, EGARCH and TGARCH models, the variance-covariance method, historical simulation and filtered historical simulation, and EVT and conditional EVT methods. Special emphasis is placed on two methodologies related to Extreme Value Theory (EVT): the Peaks over Threshold (POT) and the Block Maxima (BM) approaches. The two estimation techniques are based on limit results for the excess distribution over high thresholds and for block maxima, respectively. We apply both unconditional and conditional EVT models to the management of extreme market risks in stock markets. They are applied to daily returns of the Tunisian stock exchange (BVMT) and CAC 40 indexes with the intention of comparing the performance of the various estimation methods on markets with different capitalization and trading practices. The sample extends over the period July 29, 1994 to December 30, 2005. We use rolling windows of approximately four years (n = 1000 days). The sub-period from July 1998 for the BVMT (from August 4, 1998 for the CAC 40) has been reserved for backtesting purposes. The results we report demonstrate that the conditional POT-EVT method produces the most accurate forecasts of extreme losses, both for standard and for more extreme VaR quantiles. The conditional block maxima EVT method is less accurate. (A conditional POT sketch follows this entry.)
    Keywords: Financial Risk management; Value-at-Risk; Extreme Value Theory; Conditional EVT; Backtesting
    JEL: G0 C22 G15
    Date: 2007–03–31
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3963&r=ecm
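    Sketch: a minimal conditional POT-EVT value-at-risk calculation in the spirit of the method found to perform best in the paper; the returns are simulated and an EWMA filter is used as a simple stand-in for the GARCH models estimated by the authors, so every choice below is an assumption of this sketch.
      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(0)

      # Simulated daily returns with volatility clustering (stand-in for BVMT/CAC 40 data).
      n = 2500
      h, r = np.empty(n), np.zeros(n)
      h[0] = 1e-4
      for t in range(1, n):
          h[t] = 1e-6 + 0.09 * r[t - 1] ** 2 + 0.90 * h[t - 1]
          r[t] = np.sqrt(h[t]) * rng.standard_t(6) / np.sqrt(1.5)   # unit-variance t(6) shocks

      # Step 1: volatility filter (EWMA / RiskMetrics as a stand-in for GARCH).
      lam = 0.94
      sigma2 = np.empty(n)
      sigma2[0] = r.var()
      for t in range(1, n):
          sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
      z = -r / np.sqrt(sigma2)                       # standardized losses

      # Step 2: peaks-over-threshold fit to the upper tail of the standardized losses.
      u = np.quantile(z, 0.90)
      exceed = z[z > u] - u
      xi, _, beta = genpareto.fit(exceed, floc=0)

      # Step 3: GPD tail quantile and next-day conditional VaR at the 99% level.
      p, n_u = 0.99, len(exceed)
      z_q = u + (beta / xi) * (((1 - p) * n / n_u) ** (-xi) - 1)
      sigma_next = np.sqrt(lam * sigma2[-1] + (1 - lam) * r[-1] ** 2)
      print("99% one-day conditional EVT VaR:", round(float(sigma_next * z_q), 5))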
  9. By: Gianluca Cubadda (SEFEMEQ, Università di Roma “Tor Vergata”)
    Abstract: This paper provides a unifying framework in which the coexistence of different forms of common cyclical features can be tested for and imposed on a cointegrated VAR model. This goal is reached by introducing a new notion of common cyclical features, namely the weak form of polynomial serial correlation common features, which encompasses most of the previous ones. Statistical inference is obtained by means of reduced-rank regression, and alternative forms of common cyclical features are detected by means of tests for over-identifying restrictions on the parameters of the new model. Some iterative estimation procedures are then proposed for simultaneously modelling different forms of common features. Concepts and methods are illustrated by an empirical investigation of US business cycle indicators.
    Keywords: Common Cyclical Features, Reduced Rank Regression.
    JEL: C32
    Date: 2007–05–21
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:102&r=ecm
  10. By: Nadine Chlaß (Max Planck Institute of Economics, Strategic Interaction Group); Jens J. Krüger (Friedrich-Schiller-University Jena, Department of Economics)
    Abstract: This Monte Carlo study investigates the sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample dependence, between-sample dependence, and the presence of ties. Our results show that these assumption violations induce severe size distortions and entail power losses. Surprisingly, the consequences vary substantially with other properties the data may display. The results provided are particularly relevant for experimental settings, where ties and within-sample dependence are frequently observed. (A small Monte Carlo sketch follows this entry.)
    Keywords: Wilcoxon signed rank test, ties, dependent observations, size and power
    JEL: C12 C14 C15
    Date: 2007–07–05
    URL: http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2007-032&r=ecm
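    Sketch: a minimal version of the kind of Monte Carlo experiment described in the abstract, with within-sample dependence generated by an AR(1) process and ties produced by rounding to a coarse grid; these specific designs are illustrative assumptions, not necessarily those of the paper.
      import numpy as np
      from scipy.stats import wilcoxon

      rng = np.random.default_rng(0)

      def empirical_size(n=20, reps=2000, rho=0.0, discrete=False, alpha=0.05):
          """Rejection rate of the Wilcoxon signed rank test (H0: symmetry about 0, true here)
          when observations are AR(1)-dependent within the sample and/or discretized."""
          rejections = 0
          for _ in range(reps):
              e = rng.normal(size=n)
              x = np.empty(n)
              x[0] = e[0]
              for t in range(1, n):                  # within-sample dependence
                  x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * e[t]
              if discrete:
                  x = np.round(2 * x) / 2            # coarse grid -> ties and zeros
              if np.any(x != 0):
                  rejections += wilcoxon(x, zero_method="wilcox")[1] < alpha
          return rejections / reps

      for rho in (0.0, 0.5):
          for discrete in (False, True):
              print(f"rho={rho}, discrete={discrete}: size={empirical_size(rho=rho, discrete=discrete):.3f}")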
  11. By: Jan P.A.M. Jacobs; Kenneth F. Wallis
    Abstract: Cointegration ideas as introduced by Granger (1981) are commonly embodied in empirical macroeconomic modelling through the vector error correction model (VECM). It has also become common practice in these models to treat some variables as weakly exogenous, resulting in conditional VECMs. This paper studies the consequences of different approaches to weak exogeneity for the dynamic properties of such models, in the context of two models of the UK economy, one a national-economy model, the other the UK submodel of a global model. Impulse response and common trend analyses are shown to be sensitive to these assumptions and other specification choices.
    JEL: C32 C51 C52
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2007-12&r=ecm
  12. By: Francesco Bartolucci (Dipartimento di Economia, Finanza e Statistica, Università di Perugia); Valentina Nigro (Dipartimento di Studi Economico-Finanziari e Metodi Quantitativi, Università di Roma “Tor Vergata”)
    Abstract: A model for binary panel data is introduced which allows for state dependence and unobserved heterogeneity beyond the effect of strictly exogenous covariates. The model is of quadratic exponential type and its structure closely resembles that of the dynamic logit model. An economic interpretation of its assumptions, based on expectations about future outcomes, is provided. The main advantage of the proposed model, with respect to the dynamic logit model, is that each individual-specific parameter for the unobserved heterogeneity may be eliminated by conditioning on the sum of the corresponding response variables. This yields a conditional likelihood which allows us to identify the structural parameters of the model with at least three observations (including an initial observation assumed to be exogenous), even in the presence of time dummies. A √n-consistent conditional estimator of these parameters also results, which is very simple to compute. Its finite sample properties are studied by means of a simulation study. Extensions of the proposed approach are discussed with reference, in particular, to more elaborate structures for the state dependence and to categorical response variables with more than two levels. (A sketch of the conditioning idea follows this entry.)
    Date: 2007–02–20
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:97&r=ecm
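    Sketch: the quadratic exponential model itself is not reproduced here; the following illustrates only the conditioning idea the estimator rests on, in the simpler static fixed-effects logit, where the individual-specific parameter likewise drops out of the likelihood of a response sequence given the sum of its elements.
      import numpy as np
      from itertools import combinations
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Simulate a fixed-effects logit panel: y_it = 1{alpha_i + beta * x_it + logistic error > 0}.
      N, T, beta_true = 500, 4, 1.0
      alpha = rng.normal(scale=2.0, size=(N, 1))     # unobserved heterogeneity
      x = rng.normal(size=(N, T))
      y = (rng.random((N, T)) < 1.0 / (1.0 + np.exp(-(alpha + beta_true * x)))).astype(int)

      def neg_cond_loglik(params):
          """Conditional log-likelihood: alpha_i cancels once we condition on the
          number of ones in each individual's response sequence."""
          b = params[0]
          ll = 0.0
          for i in range(N):
              s = int(y[i].sum())
              if s in (0, T):                        # all-zero / all-one sequences carry no information
                  continue
              num = np.exp(b * (y[i] * x[i]).sum())
              den = sum(np.exp(b * x[i][list(c)].sum()) for c in combinations(range(T), s))
              ll += np.log(num / den)
          return -ll

      res = minimize(neg_cond_loglik, x0=np.array([0.0]), method="BFGS")
      print("conditional ML estimate of beta:", res.x.round(3))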
  13. By: Marco Avarucci (SEFeMEQ, University of Rome “Tor Vergata”); Domenico Marinucci (Department of Mathematics, University of Rome “Tor Vergata”)
    Abstract: In this paper we consider polynomial cointegrating relationships between stationary processes with long range dependence. We express the regression functions in terms of Hermite polynomials and we consider a form of spectral regression around frequency zero. For these estimates, we establish consistency by means of a more general result on continuously averaged estimates of the spectral density matrix at frequency zero.
    Keywords: Nonlinear cointegration, Long memory, Hermite polynomials, Spectral regression, Diagram formula.
    Date: 2007–03–05
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:99&r=ecm
  14. By: Fosgerau, Mogens; Hjort, Katrine; Vincent Lyk-Jensen, Stéphanie
    Abstract: Models such as the mixed logit are often used to measure the distribution of the marginal value of a good based on discrete choice panel data. There are however serious specification and identification issues that are rarely addressed. The consequences for results may be dramatic. This paper points out the issues and presents an approach to dealing with them that may be applied under some circumstances. The issues and the approach are illustrated using a dataset designed to measure the value of travel time.
    Keywords: Discrete choice; valuation; mixed logit
    JEL: C35 R41 C14 Q51
    Date: 2007–07–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3907&r=ecm
  15. By: Fosgerau, Mogens; Bierlaire, Michel
    Abstract: We propose a multiplicative specification of a discrete choice model that renders choice probabilities independent of the scale of the utility. The scale can thus be random with unspecified distribution. The model mostly outperforms the classical additive formulation over a range of stated choice data sets. In some cases, the improvement in likelihood is greater than that obtained from adding observed and unobserved heterogeneity to the additive specification. The multiplicative specification makes it unnecessary to capture scale heterogeneity and consequently yields a significant potential for reducing model complexity in the presence of heteroscedasticity. The proposed multiplicative formulation should thus be a useful supplement to the techniques available for the analysis of discrete choices. There is, however, a cost to be paid in terms of increased analytical complexity relative to the additive formulation. (A scale-invariance check follows this entry.)
    Keywords: Multivariate extreme value; logsum
    JEL: C25
    Date: 2007–07–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3901&r=ecm
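    Sketch: under a multiplicative specification U_i = V_i * eps_i with negative systematic utilities and positive errors, taking logs gives a model that is additive in ln(-V_i); with suitably (extreme-value) distributed log errors the choice probabilities are proportional to (-V_i)**(-mu). The toy check below only verifies the scale-invariance point of the abstract; the exponent mu and the utilities are arbitrary illustrative values.
      import numpy as np

      def multiplicative_probs(V, mu=2.0):
          """Choice probabilities P_i proportional to (-V_i)**(-mu), i.e. a logit in -mu*ln(-V_i)."""
          w = (-V) ** (-mu)
          return w / w.sum()

      V = np.array([-2.0, -3.0, -5.0])               # negative systematic utilities (e.g. minus cost)
      for scale in (1.0, 7.3, 120.0):
          print(scale, multiplicative_probs(scale * V).round(4))
      # The probabilities are identical for every scale factor, whereas additive-logit
      # probabilities exp(scale * V_i) / sum_j exp(scale * V_j) would change with the scale.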
  16. By: Christophe Hurlin (LEO - Laboratoire d'économie d'Orléans - [CNRS : UMR6221] - [Université d'Orléans]); Valérie Mignon (CEPII - Centre d'études prospectives et d'informations internationales - [Université de Paris X - Nanterre], EconomiX - [CNRS : UMR7166] - [Université de Paris X - Nanterre])
    Abstract: This article proposes an overview of the recent developments relating to panel unit root tests. After a brief review of the first-generation panel unit root tests, the paper focuses on the tests belonging to the second generation. The latter category of tests is characterized by the rejection of the cross-sectional independence hypothesis. Within this second generation of tests, two main approaches are distinguished. The first relies on the factor structure approach and includes the contributions of Bai and Ng (2001), Phillips and Sul (2003a), Moon and Perron (2004a), Choi (2002) and Pesaran (2003), among others. The second approach consists in imposing few or no restrictions on the residual covariance matrix and has been adopted notably by Chang (2002, 2004), who proposed the use of nonlinear instrumental variable methods or bootstrap approaches to solve the nuisance parameter problem due to cross-sectional dependency. (A sketch of one such test follows this entry.)
    Keywords: Nonstationary panel data; unit root, heterogeneity; cross-sectional dependencies
    Date: 2007–07–04
    URL: http://d.repec.org/n?u=RePEc:hal:papers:halshs-00159842_v1&r=ecm
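    Sketch: a minimal version of one second-generation test covered by the survey, Pesaran's cross-sectionally augmented ADF (CADF) regressions averaged into a CIPS-type statistic; the simulated factor structure, lag choice (none) and panel dimensions are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated panel of unit-root series with a common factor (cross-sectional dependence).
      N, T = 20, 100
      factor = np.cumsum(rng.normal(size=T))
      y = np.cumsum(rng.normal(size=(N, T)), axis=1) + rng.normal(size=(N, 1)) * factor

      def cadf_t(yi, ybar):
          """t-statistic on the lagged level in a CADF regression of dy_it on a constant,
          y_{i,t-1}, the cross-section mean level and the cross-section mean difference."""
          dy, dybar = np.diff(yi), np.diff(ybar)
          X = np.column_stack([np.ones(len(dy)), yi[:-1], ybar[:-1], dybar])
          beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
          resid = dy - X @ beta
          s2 = resid @ resid / (len(dy) - X.shape[1])
          cov = s2 * np.linalg.inv(X.T @ X)
          return beta[1] / np.sqrt(cov[1, 1])

      ybar = y.mean(axis=0)
      cips = np.mean([cadf_t(y[i], ybar) for i in range(N)])
      print("CIPS-type statistic:", round(float(cips), 3))
      # This statistic must be compared with the simulated critical values tabulated by
      # Pesaran, not with standard Dickey-Fuller critical values.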
  17. By: Tommaso Proietti (Università di Roma “Tor Vergata”); Cecilia Frale (Università di Roma “Tor Vergata”)
    Abstract: In this paper we deal with several issues related to the quantification of business surveys. In particular, we propose and compare new ways of scoring the ordinal responses concerning the qualitative assessment of the state of the economy, such as the spectral envelope and cumulative logit unobserved components models, and investigate the nature of seasonality in the series. We conclude with an evaluation of the type of business cycle fluctuations that is captured by the qualitative surveys.
    Keywords: Spectral envelope; Seasonality; Deviation cycles; Cumulative Logit Model.
    Date: 2007–03–05
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:98&r=ecm
  18. By: Kowal, Pawel
    Abstract: We describe an algorithm for finding higher-order approximations of stochastic rational expectations models near the deterministic steady state. Using a matrix representation of function derivatives instead of a tensor representation, we obtain simple matrix equations determining the higher-order terms.
    Keywords: perturbation method; DSGE models
    JEL: C63 C61 E17
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3913&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.