nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒02‒23
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. Efficient Estimation of Semiparametric Conditional Moment Models with Possibly Nonsmooth Residuals By Xiaohong Chen; Demian Pouzo
  2. Estimation of Dynamic Latent Variable Models Using Simulated Nonparametric Moments By Michael Creel
  3. Long Memory Modelling of Inflation with Stochastic Variance and Structural Breaks By C.S. Bos; S.J. Koopman; M. Ooms
  4. Power Properties of Invariant Tests for Spatial Autocorrelation in Linear Regression By Martellosio, Federico
  5. Selection of the number of frequencies using bootstrap techniques in log-periodogram regression. By Josu Arteche; Jesus Orbe
  6. Nonparametric Estimation of the Costs of Non-Sequential Search By José Luis Moraga-González; Zsolt Sándor; Matthijs R. Wildenbeest
  7. Inference for double Pareto lognormal queues with applications By Pepa Ramirez; Rosa E. Lillo; Michael P. Wiper; Simon P. Wilson
  8. Analyzing the Term Structure of Interest Rates using the Dynamic Nelson-Siegel Model with Time-Varying Parameters By Siem Jan Koopman; Max I.P. Mallee; Michel van der Wel
  9. On Bayesian estimation of multinomial probabilities under incomplete experimental information By Pepa Ramirez; Brani Vidakovic
  10. Identifying Reduced-Form Relations with Panel Data By Herman R.J. Vollebergh; Bertrand Melenberg; Elbert Dijkgraaf
  11. Time Aggregation and the Contradictions with Causal Relationships: Can Economic Theory Come to the Rescue? By Rangan Gupta; Kibii Komen
  12. Testing a DSGE model and its partner database By Lavan Mahadeva; Juan Carlos Parra
  13. Unit Roots Tests with Smooth Breaks: An Application to the Nelson-Plosser Data Set By Pascalau, Razvan
  14. Mean and Bold? By Kristof De Witte; Elbert Dijkgraaf
  15. Assessing Budget Support with Statistical Impact Evaluation: a Methodological Proposal By Chris Elbers; Jan Willem Gunning; Kobus de Hoop

  1. By: Xiaohong Chen (Cowles Foundation, Yale University); Demian Pouzo (Dept. of Economics, New York University)
    Abstract: For semi/nonparametric conditional moment models containing unknown parametric components (theta) and unknown functions of endogenous variables (h), Newey and Powell (2003) and Ai and Chen (2003) propose sieve minimum distance (SMD) estimation of (theta, h) and derive the large sample properties. This paper greatly extends their results by establishing the following: (1) The penalized SMD (PSMD) estimator (hat{theta}, hat{h}) can simultaneously achieve root-n asymptotic normality of hat{theta} and the nonparametric optimal convergence rate of hat{h}, allowing for models with possibly nonsmooth residuals and/or noncompact infinite dimensional parameter spaces. (2) A simple weighted bootstrap procedure can consistently estimate the limiting distribution of the PSMD hat{theta}. (3) The semiparametric efficiency bound results of Ai and Chen (2003) remain valid for conditional models with nonsmooth residuals, and the optimally weighted PSMD estimator achieves the bounds. (4) The profiled optimally weighted PSMD criterion is asymptotically Chi-square distributed, which provides an alternative consistent construction of confidence regions for the efficient PSMD estimator of theta. All the theoretical results are stated in terms of any consistent nonparametric estimator of conditional mean functions. We illustrate our general theory using a partially linear quantile instrumental variables regression, a Monte Carlo study, and an empirical estimation of the shape-invariant quantile Engel curves with endogenous total expenditure.
    Keywords: Penalized sieve minimum distance, Nonsmooth generalized residuals, Nonparametric endogeneity, Weighted bootstrap, Semiparametric efficiency, Confidence region, Partially linear quantile IV regression
    JEL: C14 C22
    Date: 2008–02
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1640&r=ecm
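    Illustration: a minimal sketch of the weighted-bootstrap idea in result (2), applied to a much simpler estimator (ordinary least squares) rather than the paper's PSMD estimator; each bootstrap draw re-solves the estimation problem with i.i.d. positive random weights. The function and variable names are illustrative assumptions.
      import numpy as np

      def weighted_bootstrap_ols(y, X, n_boot=500, rng=None):
          # Each draw perturbs the criterion with mean-one exponential weights
          # and re-estimates; the draws approximate the sampling distribution.
          rng = rng or np.random.default_rng(0)
          n = len(y)
          draws = []
          for _ in range(n_boot):
              w = rng.standard_exponential(n)              # mean-one positive weights
              Xw = X * w[:, None]
              beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted least squares
              draws.append(beta)
          return np.array(draws)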
  2. By: Michael Creel
    Abstract: Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
    Keywords: dynamic latent variable models; simulation-based estimation; simulated moments; kernel regression; nonparametric estimation
    JEL: C13 C14 C15
    Date: 2008–02–13
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:725.08&r=ecm
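    Illustration: a minimal sketch of the kernel-smoothed simulated-moments idea described above, using a toy AR(1) as the simulable model; the model, bandwidth and criterion are illustrative assumptions, not the paper's estimator.
      import numpy as np

      def simulate_ar1(theta, n, rng):
          # Toy simulable model: AR(1) with theta = (phi, sigma).
          phi, sigma = theta
          y = np.zeros(n)
          for t in range(1, n):
              y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
          return y

      def kernel_conditional_mean(x_sim, y_sim, x_eval, h):
          # Nadaraya-Watson estimate of E[y | x] at the points x_eval.
          u = (x_eval[:, None] - x_sim[None, :]) / h
          w = np.exp(-0.5 * u ** 2)                        # Gaussian kernel weights
          return (w * y_sim[None, :]).sum(axis=1) / w.sum(axis=1)

      def moment_distance(theta, y_obs, n_sim=50000, h=0.1, seed=0):
          # Compare observed data with the simulated conditional mean and
          # collapse the residuals into a crude scalar moment criterion.
          rng = np.random.default_rng(seed)
          y_obs = np.asarray(y_obs, dtype=float)
          y_sim = simulate_ar1(theta, n_sim, rng)
          m = kernel_conditional_mean(y_sim[:-1], y_sim[1:], y_obs[:-1], h)
          g = y_obs[1:] - m
          return float(np.mean(g) ** 2)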
  3. By: C.S. Bos (VU University Amsterdam); S.J. Koopman (VU University Amsterdam); M. Ooms (VU University Amsterdam)
    Abstract: We investigate changes in the time series characteristics of postwar U.S. inflation. In a model-based analysis the conditional mean of inflation is specified by a long memory autoregressive fractionally integrated moving average process and the conditional variance is modelled by a stochastic volatility process. We develop a Monte Carlo maximum likelihood method to obtain efficient estimates of the parameters using a monthly dataset of core inflation for which we consider different subsamples of varying size. Based on the new modelling framework and the associated estimation technique, we find remarkable changes in the variance, in the order of integration, in the short memory characteristics and in the volatility of volatility.
    Keywords: Time varying parameters; Importance sampling; Monte Carlo simulation; Stochastic Volatility; Fractional Integration
    JEL: C15 C32 C51 E23 E31
    Date: 2007–12–18
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070099&r=ecm
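    Illustration: a minimal sketch of the fractional-difference filter (1 - L)^d that underlies ARFIMA long-memory modelling; it does not reproduce the authors' Monte Carlo maximum likelihood procedure, and the truncation at the sample start is an assumption.
      import numpy as np

      def frac_diff_weights(d, n):
          # First n coefficients pi_j of (1 - L)^d, with pi_0 = 1.
          pi = np.empty(n)
          pi[0] = 1.0
          for j in range(1, n):
              pi[j] = pi[j - 1] * (j - 1 - d) / j
          return pi

      def frac_diff(x, d):
          # Apply (1 - L)^d to a series x, truncating at the sample start.
          x = np.asarray(x, dtype=float)
          pi = frac_diff_weights(d, len(x))
          return np.array([pi[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])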
  4. By: Martellosio, Federico
    Abstract: This paper derives some exact power properties of tests for spatial autocorrelation in the context of a linear regression model. In particular, we characterize the circumstances in which the power vanishes as the autocorrelation increases, thus extending the work of Krämer (2005, Journal of Statistical Planning and Inference 128, 489-496). More generally, the analysis in the paper sheds new light on how the power of tests for spatial autocorrelation is affected by the matrix of regressors and by the spatial structure. We mainly focus on the problem of residual spatial autocorrelation, in which case it is appropriate to restrict attention to the class of invariant tests, but we also consider the case when the autocorrelation is due to the presence of a spatially lagged dependent variable among the regressors. A numerical study aimed at assessing the practical relevance of the theoretical results is included.
    Keywords: Cliff-Ord test; invariant tests; linear regression model; point optimal tests; power; similar tests; spatial autocorrelation.
    JEL: C12 C31 C21 C01
    Date: 2008–01–29
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:7255&r=ecm
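    Illustration: a minimal sketch of the raw Cliff-Ord/Moran-type quotient e'We / e'e computed from OLS residuals and a spatial weights matrix W; the standardisation and null distribution, which depend on the regressors and on W, are omitted, and all inputs are placeholders rather than the paper's constructions.
      import numpy as np

      def cliff_ord_quotient(y, X, W):
          # e'We / e'e for the OLS residuals e of the regression of y on X.
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          e = y - X @ beta
          return float(e @ W @ e) / float(e @ e)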
  5. By: Josu Arteche (Fac. CC. Económicas y Empresariales. Dpto. Economía Aplicada III); Jesus Orbe (Fac. CC. Económicas y Empresariales.Dpto. Economía Aplicada III)
    Abstract: The choice of the bandwidth in the local log-periodogram regression is of crucial importance for estimation of the memory parameter of a long memory time series. Different choices may give rise to completely different estimates, which may lead to contradictory conclusions, for example about the stationarity of the series. We propose a data-driven bandwidth selection strategy based on minimizing a bootstrap approximation of the mean squared error, and compare its performance with existing techniques for bandwidth selection that are optimal in a mean squared error sense; the proposed strategy performs better over a wider class of models. The empirical applicability of the strategy is shown with two examples: the Nile river annual minimum levels, widely analysed in the long memory literature, and the input gas rate series of Box and Jenkins.
    Keywords: Bootstrap, long memory, log-periodogram regression, bandwidth selection
    JEL: C15 C22 C63
    Date: 2008–02–21
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:200801&r=ecm
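    Illustration: a minimal sketch of the log-periodogram (Geweke-Porter-Hudak style) regression estimate of the memory parameter d for a given bandwidth m, which shows why the choice of m matters; the bootstrap bandwidth selection proposed in the paper is not reproduced.
      import numpy as np

      def gph_estimate(x, m):
          # Regress log I(lambda_j) on -2*log(2*sin(lambda_j/2)), j = 1..m.
          x = np.asarray(x, dtype=float)
          n = len(x)
          lam = 2.0 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
          dft = np.fft.fft(x - x.mean())[1:m + 1]
          periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)
          reg = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
          Z = np.column_stack([np.ones(m), reg])
          coef = np.linalg.lstsq(Z, np.log(periodogram), rcond=None)[0]
          return coef[1]                                       # estimate of d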
  6. By: José Luis Moraga-González (University of Groningen, and CESifo); Zsolt Sándor (Universidad Carlos III de Madrid); Matthijs R. Wildenbeest (Kelley School of Business, Indiana University)
    Abstract: We study a consumer non-sequential search oligopoly model with search cost heterogeneity. We first prove that an equilibrium in mixed strategies always exists. We then examine the nonparametric identification and estimation of the costs of search. We find that the sequence of points on the support of the search cost distribution that can be identified converges to zero as the number of firms increases. As a result, when the econometrician has price data from only one market, the search cost distribution cannot be identified accurately at quantiles other than the lowest. To overcome this limitation, we propose a richer framework in which the researcher has price data from many markets with the same underlying search cost distribution. We provide conditions under which pooling the data allows for the identification of the search cost distribution at all the points of its support. We estimate the search cost density function directly by a semi-nonparametric density estimator whose parameters are chosen to maximize the joint likelihood corresponding to all the markets. A Monte Carlo study shows the advantages of the new approach, and we present an application using a data set of online prices for memory chips.
    Keywords: consumer search; oligopoly; search costs; semi-nonparametric estimation
    JEL: C14 D43 D83 L13
    Date: 2008–01–04
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070102&r=ecm
  7. By: Pepa Ramirez; Rosa E. Lillo; Michael P. Wiper; Simon P. Wilson
    Abstract: In this article we describe a method for carrying out Bayesian inference for the double Pareto lognormal (dPlN) distribution, which has recently been proposed as a model for heavy-tailed phenomena. We apply our approach to inference for the dPlN/M/1 and M/dPlN/1 queueing systems. These systems cannot be analyzed using standard techniques because the dPlN distribution does not possess a closed-form Laplace transform. This difficulty is overcome using some recent approximations of the Laplace transform developed for the Pareto/M/1 system. Our procedure is illustrated with applications in internet traffic analysis and risk theory.
    Keywords: Heavy tails, Bayesian inference, Queueing theory
    Date: 2008–02
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws080402&r=ecm
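    Illustration: a minimal sketch that simulates draws from a double Pareto lognormal distribution using the representation log X = nu + tau*Z + E1/alpha - E2/beta, with Z standard normal and E1, E2 standard exponentials; the parameter names are illustrative and this is not the paper's inference procedure.
      import numpy as np

      def rdpln(size, alpha, beta, nu, tau, rng=None):
          # A lognormal component plus an asymmetric Laplace component on the
          # log scale yields Pareto-type tails in both directions.
          rng = rng or np.random.default_rng()
          z = rng.standard_normal(size)
          e1 = rng.standard_exponential(size)
          e2 = rng.standard_exponential(size)
          return np.exp(nu + tau * z + e1 / alpha - e2 / beta)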
  8. By: Siem Jan Koopman (VU University Amsterdam); Max I.P. Mallee (VU University Amsterdam); Michel van der Wel (VU University Amsterdam)
    Abstract: In this paper we introduce time-varying parameters in the dynamic Nelson-Siegel yield curve model for the simultaneous analysis and forecasting of interest rates of different maturities, known as the term structure. The Nelson-Siegel model has recently been reformulated as a dynamic factor model in which the latent factors are interpreted as the level, slope and curvature of the term structure. The factors are modelled by a vector autoregressive process. We propose to extend this framework in two directions. First, the factor loadings are made time-varying through a simple single step function, and we show that the model fit increases significantly as a result. The step function can be replaced by a spline function to allow for more smoothness and flexibility. Second, we investigate empirically whether the volatility in interest rates is constant across different time periods. For this purpose, we introduce a common volatility component that is specified as a spline function of time and scaled appropriately for each series. Using a dataset that has been analysed in earlier studies, we present empirical evidence that time-varying loadings and volatilities in the dynamic Nelson-Siegel framework lead to significant increases in model fit. Improvements in the forecasting of the term structure are also reported. Finally, we provide an illustration in which the model is applied to an unbalanced dataset, showing that missing data entries can be estimated accurately.
    Keywords: Yield Curve; Time-varying Volatility; Spline Functions; Kalman Filter; Missing Values
    JEL: C32 C51 E43
    Date: 2007–12–07
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070095&r=ecm
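    Illustration: a minimal sketch of the standard (constant) Nelson-Siegel factor loadings for the level, slope and curvature factors at a set of maturities; the paper's contribution of making these loadings time-varying is not reproduced, and the value of the decay parameter is only an assumption.
      import numpy as np

      def nelson_siegel_loadings(tau, lam=0.0609):
          # Loading matrix (len(tau) x 3) for the level, slope and curvature factors.
          tau = np.asarray(tau, dtype=float)
          slope = (1.0 - np.exp(-lam * tau)) / (lam * tau)
          curvature = slope - np.exp(-lam * tau)
          return np.column_stack([np.ones_like(tau), slope, curvature])

      # Example: loadings at maturities of 3, 12, 60 and 120 months.
      print(nelson_siegel_loadings([3, 12, 60, 120]))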
  9. By: Pepa Ramirez; Brani Vidakovic
    Abstract: In this work, we discuss Bayesian estimation of multinomial probabilities associated with a finite alphabet A under incomplete experimental information. Two types of prior information are considered: (i) the number of letters needed to observe a particular pattern for the first time, and (ii) the knowledge that, of two fixed words, one appeared before the other.
    Keywords: Patterns, Stopping times, Incomplete experimental information
    Date: 2008–02
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws080503&r=ecm
  10. By: Herman R.J. Vollebergh (Faculty of Economics, Erasmus Universiteit Rotterdam); Bertrand Melenberg (Dept. of Economics, Tilburg University, and CentER); Elbert Dijkgraaf (Erasmus Universiteit Rotterdam, Tinbergen Institute, and SEOR-ECRi)
    Abstract: The literature that tests for U-shaped relationships using panel data, such as those between pollution and income or inequality and growth, reports widely divergent (parametric and non-parametric) empirical findings. We explain why lack of identification lies at the root of these differences. To deal with this lack of identification, we propose an identification strategy that explicitly distinguishes between what can be identified on the basis of the data and what is a consequence of subjective choices due to a lack of identification. We apply our methodology to the pollution-income relationship of both CO2- and SO2-emissions. Interestingly, our approach yields estimates of both income (scale) and time (composition and/or technology) effects for these reduced-form relationships that are insensitive to the required subjective choices and consistent with theoretical predictions.
    Keywords: Identification; Panel Data; Reduced-Form (Semi-)Parametric Estimation; Emission-Income Relationships
    JEL: C33 O50 Q40
    Date: 2007–09–17
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070072&r=ecm
  11. By: Rangan Gupta (Department of Economics, University of Pretoria); Kibii Komen (Department of Economics, University of Pretoria)
    Abstract: The literature on causality takes contradictory stands regarding the direction of causal relationships depending on whether one uses temporally aggregated or systematically sampled data. Using the relationship between a nominal target and the instrument used to achieve it as an example, we show that one can fall back on the data itself and analyse it from the perspective of economic theory, not only as a second opinion on econometric theory and Monte Carlo simulations, but also to draw proper conclusions about the form of the causal relationship that may actually exist in the data.
    Keywords: Temporal Aggregation, Systematic Sampling, Granger Causality, Cointegration, Error Correction Models
    JEL: C15 C32 C43
    Date: 2008–02
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:200802&r=ecm
  12. By: Lavan Mahadeva; Juan Carlos Parra
    Abstract: There is now an impetus to apply dynamic stochastic general equilibrium models to forecasting. But these models typically rely on purpose-built data, for example on tradable and nontradable sector outputs. How, then, do we know in advance that the model will forecast well? We develop an early warning test of the match between model and database and apply it to a Colombian model. Our test reveals where the combination should work (consumption) and where it should not (investment). The test can be adapted to look at many likely sources of DSGE model failure.
    Date: 2008–01–29
    URL: http://d.repec.org/n?u=RePEc:col:000094:004507&r=ecm
  13. By: Pascalau, Razvan
    Abstract: This paper reconsiders the nature of the trends (i.e. deterministic or stochastic) in macroeconomic time series. For this purpose, the paper employs two new tests that display robustness to structural breaks of unknown forms, irrespective of the date and/or location of the breaks. These tests approximate structural changes as smooth processes via Flexible Fourier transforms. The tests deliver strong evidence in favor of a nonlinear deterministic trend for real GNP, real per capita GNP, employment, the unemployment rate, and stock prices. Further, the two tests confirm the existence of stochastic trends in nominal GNP, consumer prices, real wages, monetary aggregates, velocity, and bond yields. In general, it appears that real variables are stationary while nominal ones have a unit root.
    Keywords: Unit Roots; Stationarity Tests; Structural Change
    JEL: C50 E10
    Date: 2008–02–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:7220&r=ecm
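    Illustration: a minimal sketch of the mechanics of a smooth-break unit root test: a single Fourier frequency approximates the smooth structural change, the series is detrended, and a Dickey-Fuller style regression is run on the residuals. The critical values for such tests are non-standard, and the details below are assumptions rather than the paper's exact procedures.
      import numpy as np

      def fourier_df_stat(y, k=1):
          # Detrend y on a constant, trend and Fourier terms of frequency k,
          # then compute a Dickey-Fuller type t-statistic on the residuals.
          y = np.asarray(y, dtype=float)
          n = len(y)
          t = np.arange(1, n + 1)
          Z = np.column_stack([np.ones(n), t,
                               np.sin(2 * np.pi * k * t / n),
                               np.cos(2 * np.pi * k * t / n)])
          u = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
          du, lag = np.diff(u), u[:-1]
          rho = (lag @ du) / (lag @ lag)
          resid = du - rho * lag
          se = np.sqrt(resid @ resid / (len(du) - 1) / (lag @ lag))
          return rho / se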
  14. By: Kristof De Witte (University of Leuven); Elbert Dijkgraaf (Erasmus University Rotterdam)
    Abstract: The Dutch drinking water sector experienced two drastic changes over the last 10 years. First, in 1997, the sector association started a voluntary benchmarking exercise aimed at increasing the efficiency and effectiveness of the sector. Second, merger activity arose. This paper develops a tailored nonparametric model to dissect and distinguish the effects of these two developments on efficiency. In particular, we adapt Free Disposal Hull (FDH) analysis to obtain robust and conditional non-oriented efficiency estimates. Parametric COLS (Fourier) tests show the robustness of the model with respect to its specification and variables. We classify merger economies into scale economies and increased incentives to fight inefficiencies. Although we detect a significant efficiency-enhancing effect of benchmarking, we find insignificant merger economies, owing to the absence of both scale economies and increased incentives to fight inefficiencies.
    Keywords: Mergers and acquisitions; efficiency; scale economies; water sector; non-parametric and parametric estimation
    JEL: C13 C14 D20 G34 L95
    Date: 2007–11–27
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070092&r=ecm
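    Illustration: a minimal sketch of the basic input-oriented Free Disposal Hull (FDH) efficiency score; the robust, conditional, non-oriented estimator developed in the paper is considerably more involved and is not reproduced here.
      import numpy as np

      def fdh_input_efficiency(X, Y):
          # X: (n, p) inputs, Y: (n, q) outputs; returns one score per unit.
          X, Y = np.asarray(X, float), np.asarray(Y, float)
          scores = np.empty(X.shape[0])
          for o in range(X.shape[0]):
              dominating = np.all(Y >= Y[o], axis=1)         # units producing at least as much
              ratios = np.max(X[dominating] / X[o], axis=1)  # largest input ratio per candidate
              scores[o] = np.min(ratios)                     # best feasible input contraction
          return scores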
  15. By: Chris Elbers (VU University Amsterdam); Jan Willem Gunning (VU University Amsterdam); Kobus de Hoop (University of Amsterdam)
    Abstract: Donor agencies and recipient governments want to assess the effectiveness of aid-supported sector policies. Unfortunately, existing methods for impact evaluation are designed for the evaluation of homogeneous interventions (‘projects’) where those with and without ‘treatment’ can be compared. The lack of a methodology for evaluations of sector-wide programs is a serious constraint in the debate on aid effectiveness. We propose a method of statistical impact evaluation in situations with heterogeneous interventions, an extension of the double differencing method often used in project evaluations. We illustrate its feasibility with an example from the education sector in Zambia.
    Keywords: Impact Evaluation; Sector-Wide Programs; Aid Effectiveness; Education; Africa; Zambia
    JEL: F35 H43 N37 O10
    Date: 2007–09–24
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070075&r=ecm
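    Illustration: a minimal sketch of the basic double-difference (difference-in-differences) estimator that the paper extends to heterogeneous, sector-wide interventions; the array names are illustrative assumptions.
      import numpy as np

      def double_difference(y_pre, y_post, treated):
          # Change over time for treated units minus change for untreated units.
          change = np.asarray(y_post, float) - np.asarray(y_pre, float)
          treated = np.asarray(treated, dtype=bool)
          return change[treated].mean() - change[~treated].mean()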

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.