nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒06‒14
thirty-six papers chosen by
Sune Karlsson
Örebro University

  1. Simple Endogenous Binary Choice and Selection Panel Model Estimators By Arthur Lewbel
  2. Quantiles and medians By Chambers, Christopher P.
  3. The Reliability of Inflation Forecasts Based on Output Gap Estimates in Real Time By Orphanides, Athanasios; van Norden, Simon
  4. On the Fit and Forecasting Performance of New Keynesian Models By Del Negro, Marco; Schorfheide, Frank; Smets, Frank; Wouters, Rafael
  5. Parameter Instability, Model Uncertainty and the Choice of Monetary Policy By Favero, Carlo A; Milani, Fabio
  6. Forecasting the Spot Exchange Rate with the Term Structure of Forward Premia: Multivariate Threshold Cointegration By van Tol, Michel R; Wolff, Christian C
  7. Loss Functions in Option Valuation: A Framework for Model Selection By Bams, Dennis; Lehnert, Thorsten; Wolff, Christian C
  8. Leading Indicators: What Have We Learned? By Marcellino, Massimiliano
  9. Testing for Reference Dependence: An Application to the Art Market By Beggs, Alan; Graddy, Kathryn
  10. Portfolio Selection with Parameter and Model Uncertainty: A Multi-Prior Approach By Garlappi, Lorenzo; Uppal, Raman; Wang, Tan
  11. Improved HAR Inference By Peter C.B. Phillips; Yixiao Sun; Sainan Jin
  12. GMM with Many Moment Conditions By Chirok Han; Peter C.B. Phillips
  13. Nonstationary Discrete Choice: A Corrigendum and Addendum By Peter C.B. Phillips; Sainan Jin; Ling Hu
  14. Limit Theory for Moderate Deviations from a Unit Root under Weak Dependence By Peter C.B. Phillips; Tassos Magdalinos
  15. Sign Tests for Dependent Observations and Bounds for Path-Dependent Options By Donald J. Brown; Rustam Ibragimov
  16. Semiparametric estimation in perturbed long memory series. By Josu Arteche
  17. A review of backtesting and backtesting procedures By Sean D. Campbell
  18. Phillips-Perron-type unit root tests in the nonlinear ESTAR framework By Rothe, Christoph; Sibbertsen, Philipp
  19. Inverse probability weighted estimation for general missing data problems By Jeffrey M. Wooldridge
  20. Estimation of dynamic linear models in short panels with ordinal observation By Stephen Pudney
  21. Nonparametric inference for unbalanced time series data By Oliver Linton
  22. Identification of sensitivity to variation in endogenous variables By Andrew Chesher
  23. Identification in additive error models with discrete endogenous variables By Andrew Chesher
  24. The Bootstrap and the Edgeworth Correction for Semiparametric Averaged Derivatives By Y. Nishiyama; Peter Robinson
  25. Testing a parametric model against a nonparametric alternative with identification through instrumental variables By Joel Horowitz
  26. A nonparametric test of exogeneity By Richard Blundell; Joel Horowitz
  27. Spatial design matrices and associated quadratic forms: structure and properties By Grant Hillier; Federico Martellosio
  28. Automatic positive semi-definite HAC covariance matrix and GMM estimation By Richard Smith
  29. Nonparametric methods for the characteristic model By Laura Blow; Martin Browning; Ian Crawford
  30. GEL Criteria for Moment Condition Models By Richard Smith
  31. Constructing Panel Data Estimators by Aggregation: A General Moment Estimator and a Suggested Synthesis By Erik Biørn
  32. Non-Bayesian Multiple Imputation By Jan F. Bjørnstad
  33. Identifying Structural Breaks in Cointegrated VAR Models By Håvard Hungnes
  34. Adaptive Estimation of the Regression Discontinuity Model By Yixiao Sun
  35. Another Look At What To Do With Time-Series Cross-Section Data By Xiujian Chen; Shu Lin; W. Robert Reed
  36. Modified Two Stage Least Squares Estimators for the Estimation of a Structural Vector Autoregressive Integrated Process By Cheng Hsiao; Siyan Wang

  1. By: Arthur Lewbel (Boston College)
    Abstract: This paper provides numerically trivial estimators for short panels of either binary choices or of linear models that suffer from confounded, nonignorable sample selection. The estimators allow for fixed effects, endogenous regressors, lagged dependent variables, and heteroskedastic errors with unknown distribution. The estimators, which converge at rate root n, are based on variants of the Honoré and Lewbel (2002) panel binary choice model and Lewbel's (2005) cross section sample selection model.
    Keywords: Panel Data, Fixed Effects, Binary Choice, Binomial Response, Sample Selection, Treatment, Semiparametric, Latent Variable, Predetermined Regressors, Lagged Dependent Variable, Endogeneity, Instrumental Variable.
    Date: 2005–02–01
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:613&r=ecm
  2. By: Chambers, Christopher P.
    Abstract: We provide a list of functional equations that characterize quantile functions for collections of bounded and measurable functions. Our central axiom is ordinal covariance. When a probability measure is exogenously given, we characterize quantiles with respect to that measure through monotonicity with respect to stochastic dominance. When none is given, we characterize those functions which are simply ordinally covariant and monotonic as quantiles with respect to capacities; and we also find an additional condition for finite probability spaces that allows us to represent the capacity as a probability measure. Additionally requiring that a function be covariant under its negation results in a generalized notion of median. Finally, we show that all of our theorems continue to hold under the weaker notion of covariance under increasing, concave transformations. Applications to the theory of ranking infinite utility streams and to the theory of risk measurement are provided.
    Keywords: Ordinal, quantile, median, axiom, risk measure, value at risk, intergenerational equity
    Date: 2005–04
    URL: http://d.repec.org/n?u=RePEc:clt:sswopa:1222&r=ecm
  3. By: Orphanides, Athanasios; van Norden, Simon
    Abstract: A stable predictive relationship between inflation and the output gap, often referred to as a Phillips curve, provides the basis for countercyclical monetary policy in many models. In this paper, we evaluate the usefulness of alternative univariate and multivariate estimates of the output gap for predicting inflation. Many of the ex post output gap measures we examine appear to be quite useful for predicting inflation. However, forecasts using real-time estimates of the same measures do not perform nearly as well. The relative usefulness of real-time output gap estimates diminishes further when compared to simple bivariate forecasting models which use past inflation and output growth. Forecast performance also appears to be unstable over time, with models often performing differently over periods of high and low inflation. These results call into question the practical usefulness of the output gap concept for forecasting inflation.
    Keywords: inflation forecasts; output gap; Phillips curve; real-time data
    JEL: C53 E37
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4830&r=ecm
  4. By: Del Negro, Marco; Schorfheide, Frank; Smets, Frank; Wouters, Rafael
    Abstract: The paper provides new tools for the evaluation of DSGE models and applies them to a large-scale New Keynesian dynamic stochastic general equilibrium (DSGE) model with price and wage stickiness and capital accumulation. Specifically, we approximate the DSGE model by a vector autoregression (VAR), and then systematically relax the implied cross-equation restrictions. Let delta denote the extent to which the restrictions are being relaxed. We document how the in- and out-of-sample fit of the resulting specification (DSGE-VAR) changes as a function of delta. Furthermore, we learn about the precise nature of the misspecification by comparing the DSGE model’s impulse responses to structural shocks with those of the best-fitting DSGE-VAR. We find that the degree of misspecification in large-scale DSGE models is no longer so large as to prevent their use in day-to-day policy analysis, yet it is not so small that it can be ignored.
    Keywords: Bayesian Analysis; DSGE models; model evaluation; vector autoregression
    JEL: C11 C32 C53
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4848&r=ecm
  5. By: Favero, Carlo A; Milani, Fabio
    Abstract: This paper starts from the observation that parameter instability and model uncertainty are relevant problems for the analysis of monetary policy in small macroeconomic models. We propose to deal with these two problems by implementing a novel ‘thick recursive modelling’ approach. At each point in time we estimate all models generated by the combinations of a base-set of k observable regressors for aggregate demand and supply. We compute optimal monetary policies for all possible models and consider alternative ways of summarizing their distribution. Our main results show that thick recursive modelling delivers optimal policy rates that track the observed policy rates better than the optimal policy rates obtained under a constant parameter specification, with no role for model uncertainty.
    Keywords: model uncertainty; optimal monetary policy; parameter instability
    JEL: E44 E52 F41
    Date: 2005–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4909&r=ecm
  6. By: van Tol, Michel R; Wolff, Christian C
    Abstract: In this paper we develop a multivariate threshold vector error correction model of spot and forward exchange rates that allows for different forms of equilibrium reversion in each of the cointegrating residual series. By introducing the notion of an indicator matrix to differentiate between the various regimes in the set of nonlinear processes, we provide a convenient framework for estimation by OLS (a stylized two-regime example is sketched after this entry). Empirically, out-of-sample forecasting exercises demonstrate its superiority over a linear VECM, although it is unable to out-predict a (driftless) random walk model. As such we provide empirical evidence against the findings of Clarida and Taylor (1997).
    Keywords: foreign exchange; multivariate threshold cointegration; TAR models
    JEL: C51 C53 F31
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4958&r=ecm
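    A stylized two-regime illustration of the threshold error-correction idea (generic notation, not necessarily the authors' exact specification): the error-correction coefficient switches according to the regime of the lagged cointegrating residual,
      \Delta y_t = \begin{cases} \alpha_1\,\beta' y_{t-1} + \Gamma_1 \Delta y_{t-1} + \varepsilon_t, & \beta' y_{t-1} \le \tau \\ \alpha_2\,\beta' y_{t-1} + \Gamma_2 \Delta y_{t-1} + \varepsilon_t, & \beta' y_{t-1} > \tau \end{cases}
    where tau is the threshold. Collecting the regime indicators into a diagonal indicator matrix lets the two regimes be stacked into a single regression that is linear in the remaining parameters and hence estimable by OLS for a given tau.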
  7. By: Bams, Dennis; Lehnert, Thorsten; Wolff, Christian C
    Abstract: In this paper, we investigate the importance of different loss functions when estimating and evaluating option pricing models. Our analysis shows that it is important to take into account parameter uncertainty, since this leads to uncertainty in the predicted option price. We illustrate the effect on the out-of-sample pricing errors in an application of the ad hoc Black-Scholes model to DAX index options. Our empirical results suggest that different loss functions lead to uncertainty about the pricing error itself. At the same time, our analysis provides a first yardstick to evaluate the adequacy of the loss function. This is accomplished through a data-driven method that delivers not just a point estimate of the pricing error, but a confidence interval.
    Keywords: estimation risk; GARCH; implied volatility; loss functions; option pricing
    JEL: G12
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4960&r=ecm
  8. By: Marcellino, Massimiliano
    Abstract: We provide an updated summary guide for the construction, use and evaluation of leading indicators, and an assessment of the most relevant recent developments in this field of economic forecasting. To begin with, we analyse the problem of selecting a target coincident variable for the leading indicators, which requires coincident indicator selection, construction of composite coincident indexes, choice of filtering methods, and business cycle dating procedures to transform the continuous target into a binary expansion/recession indicator. Next, we deal with criteria for choosing good leading indicators, and simple non-model-based methods to combine them into composite indexes. Then, we examine models and methods to transform the leading indicators into forecasts of the target variable. Finally, we consider the evaluation of the resulting leading-indicator-based forecasts, and review the recent literature on the forecasting performance of leading indicators.
    Keywords: business cycles; coincident indicators; forecasting; leading indicators; turning points
    JEL: C53 E32 E37
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4977&r=ecm
  9. By: Beggs, Alan; Graddy, Kathryn
    Abstract: This paper tests for reference dependence, using data from Impressionist and Contemporary Art auctions. We distinguish reference dependence based on ‘rule of thumb’ learning from reference dependence based on ‘rational’ learning. Furthermore, we distinguish pure reference dependence from effects due to loss aversion. Thus, we use actual market data to test essential characteristics of Kahneman and Tversky’s Prospect Theory. The main methodological innovations of this paper are, first, that reference dependence can be identified separately from loss aversion and, second, that we introduce a consistent non-linear estimator to deal with the measurement error problems involved in testing for loss aversion. In this dataset, we find strong reference dependence but no loss aversion.
    Keywords: art; auctions; loss aversion; Prospect Theory; Reference Dependence
    JEL: D44 D81 L82
    Date: 2005–04
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4982&r=ecm
  10. By: Garlappi, Lorenzo; Uppal, Raman; Wang, Tan
    Abstract: In this paper, we show how an investor can incorporate uncertainty about expected returns when choosing a mean-variance optimal portfolio. In contrast to the Bayesian approach to estimation error, where there is only a single prior and the investor is neutral to uncertainty, we consider the case where the investor has multiple priors and is averse to uncertainty. We characterize the multiple priors with a confidence interval around the estimated value of expected returns and we model aversion to uncertainty via a minimization over the set of priors (a stylized form of the problem is sketched after this entry). The multi-prior model has several attractive features: One, just like the Bayesian model, it is firmly grounded in decision theory. Two, it is flexible enough to allow for different degrees of uncertainty about expected returns for different subsets of assets, and also about the underlying asset-pricing model generating returns. Three, for several formulations of the multi-prior model we obtain closed-form expressions for the optimal portfolio, and in one special case we prove that the portfolio from the multi-prior model is equivalent to a ‘shrinkage’ portfolio based on the mean-variance and minimum-variance portfolios, which allows for a transparent comparison with Bayesian portfolios. Finally, we illustrate how to implement the multi-prior model for a fund manager allocating wealth across eight international equity indices; our empirical analysis suggests that allowing for parameter and model uncertainty reduces the fluctuation of portfolio weights over time and improves the out-of-sample performance relative to the mean-variance and Bayesian models.
    Keywords: ambiguity; asset allocation; estimation error; portfolio choice; robustness; uncertainty
    JEL: D81 G11
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:5041&r=ecm
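    In stylized form (generic notation; one natural choice of the prior set is an ellipsoidal confidence region around the estimated means), the multi-prior problem replaces the usual mean-variance maximization with a max-min:
      \max_{w}\;\min_{\mu\,:\,(\mu-\hat\mu)'\Sigma^{-1}(\mu-\hat\mu)\le\epsilon}\; w'\mu - \frac{\gamma}{2}\, w'\Sigma w
    where epsilon controls the size of the confidence region (epsilon = 0 recovers the standard mean-variance problem) and gamma is the risk-aversion coefficient; a larger epsilon expresses greater aversion to uncertainty about expected returns.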
  11. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Yixiao Sun (Dept. of Economics, University of California, San Diego); Sainan Jin (Guanghua School of Management, Peking University)
    Abstract: Employing power kernels suggested in earlier work by the authors (2003), this paper shows how to refine methods of robust inference on the mean in a time series that rely on families of untruncated kernel estimates of the long-run parameters (an example of such a kernel family is sketched after this entry). The new methods improve the size properties of heteroskedasticity and autocorrelation robust (HAR) tests in comparison with conventional methods that employ consistent HAC estimates, and they raise test power in comparison with other tests that are based on untruncated kernel estimates. Large power parameter (rho) asymptotic expansions of the nonstandard limit theory are developed in terms of the usual limiting chi-squared distribution, and corresponding large sample size and large rho asymptotic expansions of the finite sample distribution of Wald tests are developed to justify the new approach. Exact finite sample distributions are given using operational techniques. The paper further shows that the optimal rho that minimizes a weighted sum of type I and II errors has an expansion rate of at most O(T^{1/2}) and can even be O(1) for certain loss functions, and is therefore slower than the O(T^{2/3}) rate which minimizes the asymptotic mean squared error of the corresponding long run variance estimator. A new plug-in procedure for implementing the optimal rho is suggested. Simulations show that the new plug-in procedure works well in finite samples.
    Keywords: Asymptotic expansion, consistent HAC estimation, data-determined kernel estimation, exact distribution, HAR inference, large rho asymptotics, long run variance, loss function, power parameter, sharp origin kernel
    JEL: C13 C14 C22 C51
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1513&r=ecm
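    For concreteness, one member of the power-kernel family referred to above is the sharp origin kernel obtained by exponentiating the Bartlett kernel; it is used without truncation, so the long-run variance estimate weights all sample autocovariances (an illustrative sketch, not a full account of the paper):
      k_\rho(x) = (1-|x|)^{\rho}\,\mathbf{1}\{|x|\le 1\}, \qquad \hat\Omega_\rho = \sum_{j=-(T-1)}^{T-1} k_\rho\!\left(\frac{j}{T}\right)\hat\gamma(j)
    with the power parameter rho playing the role usually played by a truncation lag or bandwidth.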
  12. By: Chirok Han (Victoria University of Wellington); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models covering moderate time spans and with correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent LIML estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.
    Keywords: Epiconvergence, GMM, Irrelevant instruments, IV, Large numbers of instruments, LIML estimation, Panel models, Pseudo true value, Signal, Signal Variability, Weak instrumentation
    JEL: C22 C23
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1515&r=ecm
  13. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Sainan Jin (Guanghua School of Management, Peking University); Ling Hu (Dept. of Economics, Ohio State University)
    Abstract: We correct the limit theory presented in an earlier paper by Hu and Phillips (Journal of Econometrics, 2004) for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold where there are dual rates of convergence (n^{1/4} and n^{3/4}), all parameters including the thresholds converge at the rate n^{3/4}. The presence of non-zero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pile-up, or increasing density, towards the limits of the domain of definition.
    Keywords: Brownian motion, Brownian local time, Discrete choices, Integrated processes, Pile-up problem, Threshold parameters
    JEL: C23 C25
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1516&r=ecm
  14. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Tassos Magdalinos (Dept. of Mathematics, University of York)
    Abstract: An asymptotic theory is given for autoregressive time series with weakly dependent innovations and a root of the form rho_{n} = 1+c/n^{alpha}, involving moderate deviations from unity when alpha in (0,1) and c in R are constant parameters (the model and convergence rates are restated in display form after this entry). The limit theory combines a functional law to a diffusion on D[0,infinity) and a central limit theorem. For c > 0, the limit theory of the first order serial correlation coefficient is Cauchy and is invariant to both the distribution and the dependence structure of the innovations. To our knowledge, this is the first invariance principle of its kind for explosive processes. The rate of convergence is found to be n^{alpha}rho_{n}^{n}, which bridges asymptotic rate results for conventional local to unity cases (n) and explosive autoregressions ((1 + c)^{n}). For c < 0, we provide results for alpha in (0,1) that give an n^{(1+alpha)/2} rate of convergence and lead to asymptotic normality for the first order serial correlation, bridging the n^{1/2} and n convergence rates for the stationary and conventional local to unity cases. Weakly dependent errors are shown to induce a bias in the limit distribution, analogous to that of the local to unity case. Linkages to the limit theory in the stationary and explosive cases are established.
    Keywords: Central limit theory; Diffusion; Explosive autoregression; Local to unity; Moderate deviations; Unit root distribution; Weak dependence
    JEL: C22
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1517&r=ecm
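    Restating the setup of the abstract in display form (a first-order autoregression with the stated root):
      y_t = \rho_n\, y_{t-1} + u_t, \qquad \rho_n = 1 + \frac{c}{n^{\alpha}}, \qquad \alpha \in (0,1),\; c \in \mathbb{R},
    so that c > 0 gives a mildly explosive root with convergence rate n^{\alpha}\rho_n^{n} and a Cauchy limit for the serial correlation coefficient, while c < 0 gives a mildly stationary root with rate n^{(1+\alpha)/2} and a Gaussian limit.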
  15. By: Donald J. Brown (Cowles Foundation, Yale University); Rustam Ibragimov (Dept. of Economics, Yale University)
    Abstract: The present paper introduces new sign tests for testing for conditionally symmetric martingale-difference assumptions as well as for testing that conditional distributions of two (arbitrary) martingale-difference sequences are the same. Our analysis is based on the results that demonstrate that randomization over zero values of three-valued random variables in a conditionally symmetric martingale-difference sequence produces a stream of i.i.d. symmetric Bernoulli random variables and thus reduces the problem of estimating the critical values of the tests to computing the quantiles or moments of Binomial or normal distributions. The same is the case for randomization over ties in sign tests for equality of conditional distributions of two martingale-difference sequences. The paper also provides sharp bounds on the expected payoffs and fair prices of European call options and a wide range of path-dependent contingent claims in the trinomial financial market model in which, as is well known, calculation of derivative prices on the basis of no-arbitrage arguments is impossible. These applications show, in particular, that the expected payoff of a European call option in the trinomial model with log-returns forming a martingale-difference sequence is bounded from above by the expected payoff of a call option written on a stock with i.i.d. symmetric two-valued log-returns and thus reduce the problem of derivative pricing in the trinomial model with dependence to the i.i.d. binomial case. Furthermore, we show that the expected payoff of a European call option in the multiperiod trinomial option pricing model is dominated by the expected payoff of a call option in the two-period model with a log-normal asset price. These results thus allow one to reduce the problem of pricing options in the trinomial model to the case of two periods and the standard assumption of normal log-returns. We also obtain bounds on the possible fair prices of call options in the (incomplete) trinomial model in terms of the parameters of the asset's distribution. Sharp bounds entirely analogous to those for European call options also hold for many other contingent claims in the trinomial option pricing model, including those with an arbitrary convex increasing payoff function as well as path-dependent ones, in particular, Asian options written on averages of the underlying asset's prices.
    Keywords: Sign tests, dependence, martingale-difference, Bernoulli random variables, conservative tests, exact tests, option bounds, trinomial model, binomial model, semiparametric estimates, fair prices, expected payoffs, path-dependent contingent claims, efficient market hypothesis
    JEL: C12 C14 G12 G14
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1518&r=ecm
  16. By: Josu Arteche (Dpto. Economía Aplicada III (UPV/EHU))
    Abstract: The estimation of the memory parameter in perturbed long memory series has recently attracted attention, motivated especially by the strong persistence of the volatility in many financial and economic time series and the use of Long Memory in Stochastic Volatility (LMSV) processes to model such behaviour (the signal-plus-noise setup is sketched after this entry). This paper discusses frequency domain semiparametric estimation of the memory parameter and proposes an extension of the log-periodogram regression which explicitly accounts for the added noise, comparing it, asymptotically and in finite samples, with similar extant techniques. Contrary to the nonlinear log-periodogram regression of Sun and Phillips (2003), we do not use a linear approximation of the logarithmic term which accounts for the added noise. A reduction of the asymptotic bias is achieved in this way, which makes a faster convergence possible in long memory signal plus noise series by permitting a larger bandwidth. Monte Carlo results confirm the bias reduction, but at the cost of higher variability. Finally, an application to a series of returns on the Spanish Ibex35 stock index is included.
    Keywords: long memory, stochastic volatility, semiparametric estimation
    JEL: C22
    Date: 2005–06–09
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:200502&r=ecm
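    A minimal sketch of the signal-plus-noise structure behind the LMSV model (generic notation): with returns r_t = sigma exp(h_t/2) eps_t and h_t a long-memory process, squaring and taking logs gives
      \log r_t^2 = \log\sigma^2 + E[\log\varepsilon_t^2] + h_t + \bigl(\log\varepsilon_t^2 - E[\log\varepsilon_t^2]\bigr),
    a long-memory signal observed with added noise. Near the origin the spectrum then behaves like f(\lambda) \approx G\,\lambda^{-2d}\bigl(1 + b\,\lambda^{2d}\bigr), so the log-periodogram regression acquires an extra term,
      \log I(\lambda_j) \approx c - 2d\,\log\lambda_j + \log\bigl(1 + b\,\lambda_j^{2d}\bigr) + e_j, \qquad j = 1,\dots,m,
    which standard estimators omit (biasing the estimate of d downward) and which the extensions discussed above account for, either through a linear approximation of the logarithmic term or, as in this paper, by keeping it intact.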
  17. By: Sean D. Campbell
    Abstract: This paper reviews a variety of backtests that examine the adequacy of Value-at-Risk (VaR) measures. These backtesting procedures are reviewed from both a statistical and risk management perspective. The properties of unconditional coverage and independence are defined and their relation to backtesting procedures is discussed. Backtests are then classified by whether they examine the unconditional coverage property, independence property, or both properties of a VaR measure. Backtests that examine the accuracy of a VaR model at several quantiles, rather than a single quantile, are also outlined and discussed. The statistical power properties of these tests are examined in a simulation experiment. Finally, backtests that are specified in terms of a pre-specified loss function are reviewed and their use in VaR validation is discussed.
    Keywords: Risk management; Bank investments
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2005-21&r=ecm
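    As a concrete illustration of the unconditional coverage property discussed above, the sketch below implements Kupiec's proportion-of-failures likelihood-ratio test, one standard backtest of this type (a minimal sketch; the function name and interface are illustrative, not taken from the paper):

      import numpy as np
      from scipy.stats import chi2

      def kupiec_pof_test(violations, alpha):
          # violations: boolean array, True on days the realized loss exceeded the VaR forecast
          # alpha: nominal exceedance probability, e.g. 0.01 for a 99% VaR
          n = len(violations)
          x = int(np.sum(violations))        # observed number of exceedances
          pi_hat = x / n                     # observed exceedance rate

          def loglik(p):
              # binomial log-likelihood of x exceedances in n days; handle boundary cases
              if p <= 0.0 or p >= 1.0:
                  return 0.0 if (x == 0 and p == 0.0) or (x == n and p == 1.0) else -np.inf
              return x * np.log(p) + (n - x) * np.log(1.0 - p)

          lr = -2.0 * (loglik(alpha) - loglik(pi_hat))   # LR statistic, chi2(1) under the null
          return lr, chi2.sf(lr, df=1)                   # test statistic and p-value

    Under correct unconditional coverage the exceedance rate should match alpha, and a small p-value signals too many (or too few) exceedances; the test is silent about the clustering of exceedances, which is what the independence backtests reviewed in the paper address.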
  18. By: Rothe, Christoph; Sibbertsen, Philipp
    Abstract: In this paper, we propose Phillips-Perron-type semiparametric testing procedures to distinguish a unit root process from a mean-reverting exponential smooth transition autoregressive (ESTAR) one (a stylized ESTAR specification is sketched after this entry). The limiting nonstandard distributions are derived under very general conditions, and simulation evidence shows that the tests perform better than the standard Phillips-Perron or Dickey-Fuller tests in the region of the null.
    Keywords: Exponential smooth transition autoregressive model, Unit roots, Monte Carlo simulations, Purchasing Power Parity
    JEL: C12 C32
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-315&r=ecm
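    One common way to write the null and alternative in this literature (a generic ESTAR specification in the spirit of Kapetanios, Shin and Snell (2003), not necessarily the exact model used in the paper):
      \Delta y_t = \phi\, y_{t-1}\bigl[1 - \exp(-\gamma\, y_{t-1}^{2})\bigr] + u_t, \qquad H_0:\ \gamma = 0 \ (\text{unit root}) \quad \text{vs.} \quad H_1:\ \gamma > 0,\ \phi < 0,
    so that under the alternative the series behaves like a random walk near zero but mean-reverts increasingly strongly the further it strays, the kind of behaviour often associated with deviations from purchasing power parity.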
  19. By: Jeffrey M. Wooldridge
    Abstract: I study inverse probability weighted M-estimation under a general missing data scheme. The cases covered that do not previously appear in the literature include M-estimation with missing data due to a censored survival time, propensity score estimation of the average treatment effect for linear exponential family quasi-log-likelihood functions, and variable probability sampling with observed retainment frequencies. I extend an important result known to hold in special cases: estimating the selection probabilities is generally more efficient than if the known selection probabilities could be used in estimation. For the treatment effect case, the setup allows for a simple characterization of a “double robustness” result due to Scharfstein, Rotnitzky, and Robins (1999): given appropriate choices for the conditional mean function and quasi-log-likelihood function, only one of the conditional mean or selection probability needs to be correctly specified in order to consistently estimate the average treatment effect.
    JEL: C13 C21 C23
    Date: 2004–04
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:05/04&r=ecm
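    In stylized form (generic notation, not the paper's), the inverse probability weighted M-estimator weights each fully observed unit's objective contribution by the inverse of its estimated selection probability:
      \hat\theta = \arg\max_{\theta}\; \sum_{i=1}^{N} \frac{s_i}{\hat{p}(z_i)}\; q(w_i,\theta),
    where s_i is the selection (complete-case) indicator, \hat{p}(z_i) is the estimated probability of being observed given always-observed variables z_i, and q is the objective function, for example a quasi-log-likelihood; the weighting undoes the distortion that nonrandom missingness would otherwise induce in the sample average of q.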
  20. By: Stephen Pudney (Institute for Fiscal Studies and Institute for Social and Economic Research)
    Abstract: We develop a simulated ML method for short-panel estimation of one or more dynamic linear equations, where the dependent variables are only partially observed through ordinal scales. We argue that this latent autoregression (LAR) model is often more appropriate than the usual state-dependence (SD) probit model for attitudinal and interval variables. We propose a score test for assisting in the treatment of initial conditions and a new simulation approach to calculate the required partial derivative matrices. An illustrative application to a model of households’ perceptions of their financial well-being demonstrates the superior fit of the LAR model.
    Keywords: Dynamic panel data models, ordinal variables, simulated maximum likelihood, GHK simulator, BHPS
    JEL: C23 C25 C33 C35 D84
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:05/05&r=ecm
  21. By: Oliver Linton (Institute for Fiscal Studies and London School of Economics)
    Abstract: Estimation of heteroskedasticity and autocorrelation consistent covariance matrices (HACs) is a well established problem in time series. Results have been established under a variety of weak conditions on temporal dependence and heterogeneity that allow one to conduct inference on a variety of statistics, see Newey and West (1987), Hansen (1992), de Jong and Davidson (2000), and Robinson (2004). Indeed there is an extensive literature on automating these procedures starting with Andrews (1991). Alternative methods for conducting inference include the bootstrap, for which there is also now a very active research program in time series especially, see Lahiri (2003) for an overview. One convenient method for time series is the subsampling approach of Politis, Romano, and Wolf (1999). This method was used by Linton, Maasoumi, and Whang (2003) (henceforth LMW) in the context of testing for stochastic dominance. This paper is concerned with the practical problem of conducting inference in a vector time series setting when the data is unbalanced or incomplete. In this case, one can work only with the common sample, to which a standard HAC/bootstrap theory applies, but at the expense of throwing away data and perhaps losing efficiency. An alternative is to use some sort of imputation method, but this requires additional modelling assumptions, which we would rather avoid. We show how the sampling theory changes and how to modify the resampling algorithms to accommodate the problem of missing data. We also discuss efficiency and power. Unbalanced data of the type we consider are quite common in financial panel data, see for example Connor and Korajczyk (1993). These data also occur in cross-country studies.
    Date: 2004–04
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:06/04&r=ecm
  22. By: Andrew Chesher (Institute for Fiscal Studies and University College London)
    Abstract: This lecture explores conditions under which there is identification of the impact on an outcome of exogenous variation in a variable which is endogenous when data are gathered. The starting point is the Cowles Commission linear simultaneous equations model. The parametric and additive error restrictions of that model are successively relaxed and modifications to covariation, order and rank conditions that maintain identifiability are presented. Eventually a just-identifying, non-falsifiable model permitting nonseparability of latent variates and devoid of parametric restrictions is obtained. The model requires the endogenous variable to be continuously distributed. It is shown that relaxing this restriction results in loss of point identification, but set identification is possible if an additional covariation restriction is introduced. Relaxing other restrictions presents significant challenges.
    Date: 2004–07
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:10/04&r=ecm
  23. By: Andrew Chesher (Institute for Fiscal Studies and University College London)
    Abstract: In additive error models with a discrete endogenous variable, identification cannot be achieved under a marginal covariation condition when the support of instruments is sparse relative to the support of the endogenous variable. An iterated covariation condition with a weak monotonicity restriction is shown to have set identifying power.
    Date: 2004–09
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:11/04&r=ecm
  24. By: Y. Nishiyama; Peter Robinson (Institute for Fiscal Studies and London School of Economics)
    Abstract: In a number of semiparametric models, smoothing seems necessary in order to obtain estimates of the parametric component which are asymptotically normal and converge at parametric rate. However, smoothing can inflate the error in the normal approximation, so that refined approximations are of interest, especially in sample sizes that are not enormous. We show that a bootstrap distribution achieves a valid Edgeworth correction in case of density-weighted averaged derivative estimates of semiparametric index models. Approaches to bias-reduction are discussed. We also develop a higher order expansion, to show that the bootstrap achieves a further reduction in size distortion in case of two-sided testing. The finite sample performance of the methods is investigated by means of Monte Carlo simulations from a Tobit model.
    JEL: C23
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:12/04&r=ecm
  25. By: Joel Horowitz (Institute for Fiscal Studies and Northwestern University)
    Abstract: This paper is concerned with inference about a function g that is identified by a conditional moment restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test does not require nonparametric estimation of g and is not subject to the ill-posed inverse problem of nonparametric instrumental variables estimation. Under mild conditions, the test is consistent against any alternative model and has asymptotic power advantages over existing tests. Moreover, it has power arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is O(n^{-1/2}), where n is the sample size.
    Date: 2004–09
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:14/04&r=ecm
  26. By: Richard Blundell (Institute for Fiscal Studies and University College London); Joel Horowitz (Institute for Fiscal Studies and Northwestern University)
    Abstract: This paper is concerned with inference about a function g that is identified by a conditional moment restriction involving instrumental variables. The function is nonparametric. It satisfies mild regularity conditions but is otherwise unknown. The paper presents a test of the hypothesis that g is the mean of a random variable Y conditional on a covariate X. The need to test this hypothesis arises frequently in economics. The test does not require nonparametric instrumental-variables (IV) estimation of g and is not subject to the ill-posed inverse problem that nonparametric IV estimation entails. The test is consistent whenever g differs from the conditional mean function of Y on a set of non-zero probability. Moreover, the power of the test is arbitrarily close to 1 uniformly over a set of functions g whose distance from the conditional mean function is O(n^{-1/2}), where n is the sample size.
    Keywords: Hypothesis test, instrumental variables, specification testing, consistent testing
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:15/04&r=ecm
  27. By: Grant Hillier; Federico Martellosio
    Abstract: The paper provides significant simplifications and extensions of results obtained by Gorsich, Genton, and Strang (J. Multivariate Anal. 80 (2002) 138) on the structure of spatial design matrices. These are the matrices implicitly defined by quadratic forms that arise naturally in modelling intrinsically stationary and isotropic spatial processes. We give concise structural formulae for these matrices, and simple generating functions for them. The generating functions provide formulae for the cumulants of the quadratic forms of interest when the process is Gaussian, second-order stationary and isotropic. We use these to study the statistical properties of the associated quadratic forms, in particular those of the classical variogram estimator, under several assumptions about the actual variogram.
    Keywords: Cumulant, Intrinsically Stationary Process, Kronecker
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:16/04&r=ecm
  28. By: Richard Smith (Institute for Fiscal Studies and University of Warwick)
    Abstract: This paper proposes a new class of HAC covariance matrix estimators. The standard HAC estimation method re-weights estimators of the autocovariances. Here we initially smooth the data observations themselves using kernel-function-based weights. The resultant HAC covariance matrix estimator is the normalised outer product of the smoothed random vectors and is therefore automatically positive semi-definite (a stylized sketch follows this entry). A corresponding efficient GMM criterion may also be defined as a quadratic form in the smoothed moment indicators, whose normalised minimand provides a test statistic for the over-identifying moment conditions.
    Keywords: GMM, HAC Covariance Matrix Estimation, Overidentifying Moments
    JEL: C13 C30
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:17/04&r=ecm
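    In stylized form (generic notation, suppressing exact normalising constants), smoothing the moment indicators g_t(theta) with a kernel and taking the average outer product of the smoothed series yields a covariance estimator that is positive semi-definite by construction:
      g_{tT}(\theta) = \frac{1}{S_T}\sum_{s=1}^{T} k\!\left(\frac{t-s}{S_T}\right) g_s(\theta), \qquad \hat\Omega_T(\theta) = \frac{c_k}{T}\sum_{t=1}^{T} g_{tT}(\theta)\, g_{tT}(\theta)',
    where S_T is a bandwidth and c_k a kernel-dependent constant; because \hat\Omega_T is an average of outer products, it is automatically positive semi-definite, unlike weighted sums of estimated autocovariances, which need not be.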
  29. By: Laura Blow (Institute for Fiscal Studies); Martin Browning (Institute for Fiscal Studies and University of Copenhagen); Ian Crawford (Institute for Fiscal Studies and University of Surrey)
    Abstract: Characteristics models have been found to be useful in many areas of economics. However, their empirical implementation tends to rely heavily on functional form assumptions. In this paper we develop a revealed preference-based nonparametric approach to characteristics models. We derive the minimal necessary and sufficient empirical conditions under which data on the market behaviour of individual, heterogeneous, price-taking consumers are nonparametrically consistent with the consumer characteristics model. Where these conditions hold, we show how information may be recovered on individual consumers' marginal valuations of product attributes. In some cases marginal valuations are point identified and in other cases we can only recover bounds. Where the conditions fail we highlight the role which the introduction of unobserved product attributes can play in rationalising the data. We implement these ideas using consumer panel data on the Danish milk market.
    Keywords: Product characteristics, revealed preference
    JEL: C43 D11
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:18/04&r=ecm
  30. By: Richard Smith (Institute for Fiscal Studies and University of Warwick)
    Abstract: GEL methods which generalize and extend previous contributions are defined and analysed for moment condition models specified in terms of weakly dependent data. These procedures offer alternative one-step estimators and tests that are asymptotically equivalent to their efficient two-step GMM counterparts. The basis for GEL estimation is via a smoothed version of the moment indicators using kernel function weights which incorporate a bandwidth parameter. Examples for the choice of bandwidth parameter and kernel function are provided. Efficient moment estimators based on implied probabilities derived from the GEL method are proposed, a special case of which is estimation of the stationary distribution of the data. The paper also presents a unified set of test statistics for over-identifying moment restrictions and combinations of parametric and moment restriction hypotheses.
    Keywords: GMM, Generalized Empirical Likelihood, Efficient Moment Estimation,
    JEL: C13 C30
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:19/04&r=ecm
  31. By: Erik Biørn (Statistics Norway)
    Abstract: We consider a regression equation for panel data with two-way random or fixed effects, together with a set of individual-specific and period-specific (`within individual' and `within period') estimators of its slope coefficients. They can be given Ordinary Least Squares (OLS) or Instrumental Variables (IV) interpretations. A class of estimators, obtained as an arbitrary linear combination of these `disaggregate' estimators, is defined and an expression for its variance-covariance matrix is derived. Nine familiar `aggregate' estimators which utilize the entire data set, including two between, three within, three GLS, as well as the standard OLS, emerge by specific choices of the weights. Other estimators in this class which are more robust to simultaneity and measurement error bias than the standard aggregate estimators and more efficient than the `disaggregate' estimators are also considered. An empirical illustration of robustness and efficiency, relating to manufacturing productivity, is given.
    Keywords: Panel data. Aggregation. Simultaneity. Measurement error. Method of moments. Factor productivity
    JEL: C13 C23 C43
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:420&r=ecm
  32. By: Jan F. Bjørnstad (Statistics Norway)
    Abstract: Multiple imputation is a method specifically designed for variance estimation in the presence of missing data. Rubin’s combination formula requires that the imputation method is “proper” which essentially means that the imputations are random draws from a posterior distribution in a Bayesian framework. In national statistical institutes (NSI’s) like Statistics Norway, the methods used for imputing for nonresponse are typically non-Bayesian, e.g., some kind of stratified hot-deck. Hence, Rubin’s method of multiple imputation is not valid and cannot be applied in NSI’s. This paper deals with the problem of deriving an alternative combination formula that can be applied for imputation methods typically used in NSI’s and suggests an approach for studying this problem. Alternative combination formulas are derived for certain response mechanisms and hot-deck type imputation methods.
    Keywords: Multiple imputation; survey sampling; nonresponse; hot-deck imputation
    JEL: C42 C13 C15
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:421&r=ecm
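    For reference, the combination rule referred to above pools m completed-data analyses as follows (standard notation, not specific to this paper):
      \bar{Q} = \frac{1}{m}\sum_{j=1}^{m}\hat{Q}_j, \qquad B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{Q}_j - \bar{Q}\bigr)^2, \qquad T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr)B,
    where \hat{Q}_j and U_j are the point estimate and its variance estimate from the j-th imputed data set and \bar{U} is the average of the U_j. The paper's point is that the validity of T as a variance estimate presupposes proper (Bayesian) imputations, which is why an alternative combination formula is needed for the hot-deck-type methods used in national statistical institutes.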
  33. By: Håvard Hungnes (Statistics Norway)
    Abstract: The paper describes a procedure for decomposing the deterministic terms in cointegrated VAR models into growth rate parameters and cointegration mean parameters. These parameters express long-run properties of the model. For example, the growth rate parameters tell us how much the variables in the system are (unconditionally) expected to grow from one period to the next, representing the underlying (steady state) growth in the variables. The procedure can be used for analysing structural breaks when the deterministic terms include shift dummies and broken trends. By decomposing the coefficients into interpretable components, different types of structural breaks can be identified. Both shifts in intercepts and shifts in growth rates, or combinations of these, can be tested for. The ability to distinguish between different types of structural breaks makes the procedure superior to alternative procedures. Furthermore, the procedure utilizes the information more efficiently than alternative procedures. Finally, interpretable coefficients of different types of structural breaks can be identified.
    Keywords: Johansen procedure; cointegrated VAR; structural breaks; growth rates; cointegration mean levels.
    JEL: C32 C51 C52
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:422&r=ecm
  34. By: Yixiao Sun (University of California, San Diego)
    Abstract: In order to reduce the finite sample bias and improve the rate of convergence, local polynomial estimators have been introduced into the econometric literature to estimate the regression discontinuity model (a baseline local-linear sketch follows this entry). In this paper, we show that, when the degree of smoothness is known, the local polynomial estimator achieves the optimal rate of convergence within the Hölder smoothness class. However, when the degree of smoothness is not known, the local polynomial estimator may actually inflate the finite sample bias and reduce the rate of convergence. We propose an adaptive version of the local polynomial estimator which selects both the bandwidth and the polynomial order adaptively, and show that the adaptive estimator achieves the optimal rate of convergence up to a logarithmic factor without knowing the degree of smoothness. Simulation results show that the finite sample performance of the locally cross-validated adaptive estimator is robust to the parameter combinations and data generating processes, reflecting the adaptive nature of the estimator. The root mean squared error of the adaptive estimator compares favorably to that of local polynomial estimators in the Monte Carlo experiments.
    Keywords: Adaptive estimator, local cross validation, local polynomial, minimax rate, optimal bandwidth, optimal smoothness parameter
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–06–07
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0506003&r=ecm
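    For context, the baseline that the adaptive estimator improves on is local polynomial (here local-linear) estimation of the jump at the cutoff. The sketch below is a generic implementation with a fixed, user-supplied bandwidth (names, kernel choice and interface are illustrative, not the paper's):

      import numpy as np

      def local_linear_rd(x, y, cutoff, h):
          # Sharp-RD estimate: difference of local-linear intercepts at the cutoff.
          # x, y: 1-d arrays (running variable, outcome); h: bandwidth, assumed given.
          def boundary_fit(mask):
              xs, ys = x[mask] - cutoff, y[mask]
              w = np.maximum(0.0, 1.0 - np.abs(xs) / h)    # triangular kernel weights
              X = np.column_stack([np.ones_like(xs), xs])  # local-linear design
              XtW = X.T * w                                # weight each observation
              beta = np.linalg.solve(XtW @ X, XtW @ ys)    # weighted least squares
              return beta[0]                               # fitted value at the cutoff
          return boundary_fit(x >= cutoff) - boundary_fit(x < cutoff)

    The paper's contribution is precisely that the choices this sketch takes as given matter: the adaptive estimator selects both the bandwidth and the polynomial order from the data and still attains the optimal rate up to a logarithmic factor when the smoothness class is unknown.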
  35. By: Xiujian Chen (University of Oklahoma); Shu Lin (University of Oklahoma); W. Robert Reed (University of Oklahoma)
    Abstract: Our study revisits Beck and Katz's (1995) comparison of the Parks and PCSE estimators using time-series cross-section (TSCS) data. Our innovation is that we construct simulated statistical environments that are designed to closely match “real-world” TSCS data. We pattern our statistical environments after income and tax data on U.S. states from 1960-1999. While PCSE generally does a better job than Parks in estimating standard errors, it too can be unreliable, sometimes producing standard errors that are substantially off the mark (a minimal PCSE computation is sketched after this entry). Further, we find that the benefits of PCSE can come at a substantial cost in estimator efficiency. Based on our study, we would give the following advice to researchers using TSCS data: given a choice between Parks and PCSE, we recommend PCSE for hypothesis testing, and Parks if the primary interest is accurate coefficient estimates.
    Keywords: Panel Data, Panel Corrected Standard Errors, Monte Carlo analysis
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–06–08
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0506004&r=ecm
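    As a reminder of what the PCSE estimator computes, the sketch below gives a minimal numpy implementation of Beck and Katz's panel-corrected standard errors for a balanced panel stacked unit by unit (function and variable names are illustrative; no attempt is made to reproduce the paper's simulation design):

      import numpy as np

      def pcse(X, y, n_units, n_periods):
          # X: (N*T, k) regressor matrix, y: (N*T,) outcome, rows stacked unit by unit.
          beta = np.linalg.solve(X.T @ X, X.T @ y)            # pooled OLS point estimates
          e = (y - X @ beta).reshape(n_units, n_periods)      # residuals, one row per unit
          sigma = e @ e.T / n_periods                         # cross-unit contemporaneous covariance
          omega = np.kron(sigma, np.eye(n_periods))           # assumed error covariance (no serial correlation)
          bread = np.linalg.inv(X.T @ X)
          cov_beta = bread @ X.T @ omega @ X @ bread          # "panel-corrected" sandwich covariance
          return beta, np.sqrt(np.diag(cov_beta))             # coefficients and their PCSEs

    The coefficient estimates are just pooled OLS; only the standard errors are corrected for cross-sectional correlation and panel heteroskedasticity, which is why the abstract contrasts PCSE's inference properties with the efficiency of the Parks FGLS estimator.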
  36. By: Cheng Hsiao; Siyan Wang
    Abstract: We consider the estimation of a structural vector autoregressive model of nonstationary and possibly cointegrated variables without the prior knowledge of unit roots or rank of cointegration. We propose two modified two stage least squares estimators that are consistent and have limiting distributions that are either normal or mixed normal. Limited Monte Carlo studies are also conducted to evaluate their finite sample properties.
    Keywords: Structural vector autoregression; Unit root; Cointegration; Asymptotic properties; Hypothesis testing
    JEL: C32 C12 C13
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:scp:wpaper:05-23&r=ecm

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.