nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒01‒16
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. Testing Conditional Symmetry Without Smoothing By Tao Chen; Gautam Tripathi
  2. Combining predictive densities using Bayesian filtering with applications to US economic data By Monica Billio; Roberto Casarin; Francesco Ravazzolo; Herman K. van Dijk
  3. "An Asymptotically Optimal Modification of the Panel LIML Estimation for Individual Heteroscedasticity" By Naoto Kunitomo; Kentaro Akashi
  4. Confidence Sets Based on Inverting Anderson-Rubin Tests By Russell Davidson; James G. MacKinnon
  5. Testing the One-Part Fractional Response Model against an Alternative Two-Part Model By Oberhofer, Harald; Pfaffermayr, Michael
  6. Ordered Response Models and Non-Random Personality Traits: Monte Carlo Simulations and a Practical Guide By Ingo Geishecker; Maximilian Riedl
  7. Frontier Techniques: Contrasting the Performance of (Single-) Truncated Order Regression Methods and Replicated Moments By Ana Paula Martins
  8. Evaluating the strength of identification in DSGE models. An a priori approach By Nikolay Iskrev
  9. Identification and Estimation of Preference Distributions When Voters Are Ideological By Antonio Merlo; Aureo de Paula
  10. A monthly indicator of employment in the euro area: real time analysis of indirect estimates By Moauro, Filippo
  11. Option pricing under time varying correlation with conditional dependence: A copula based approach to recover the index skew from the constituent dynamics. By Matthias Fengler; Helmut Herwartz; Christian Werner
  12. Spatial Decentralization and Program Evaluation: Theory and an Example from Indonesia By Nidhiya Menon; Mark M. Pitt
  13. Corporate bond spreads and real activity in the euro area - Least Angle Regression forecasting and the probability of the recession By Marco Buchmann
  14. Production Under Uncertainty: A Simulation Study By Sriram Shankar; Chris O'Donnell; John Quiggin
  15. Forecasting damped trend exponential smoothing: an algebraic viewpoint. By Giacomo Sbrana
  16. Alternative Approaches to Measuring House Price Inflation By Diewert, Erwin

  1. By: Tao Chen (University of Connecticut); Gautam Tripathi (University of Connecticut)
    Abstract: We test the assumption of conditional symmetry used to identify and estimate parameters in regression models with endogenous regressors, without making any distributional assumptions. The specification test proposed here is computationally tractable, does not require nonparametric smoothing, and can detect n^(-1/2) deviations from the null. Since the limiting distribution of the test statistic turns out to be a non-pivotal Gaussian process, the critical values for implementing the test are obtained by simulation. In a Monte Carlo study we use the approach proposed here to test the assumption of conditional symmetry maintained in the seminal paper of Powell (1986b). Results from this finite sample experiment suggest that our test can work very well in moderately sized samples.
    JEL: C12 C14
    Date: 2011–01
  2. By: Monica Billio (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice); Roberto Casarin (University of Brescia and GRETA Assoc); Francesco Ravazzolo (Norges Bank (Central Bank of Norway)); Herman K. van Dijk (Econometrics and Tinbergen Institutes, Erasmus University Rotterdam)
    Abstract: Using a Bayesian framework, this paper provides a multivariate combination approach to prediction based on a distributional state space representation of predictive densities from alternative models. In the proposed approach the model set can be incomplete. Several multivariate time-varying combination strategies are introduced. In particular, we consider weight dynamics driven by the past performance of the predictive densities, as well as the use of learning mechanisms. The approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of US macroeconomic time series and of surveys of stock market prices.
    Keywords: Density Forecast Combination, Survey Forecast, Bayesian Filtering, Sequential Monte Carlo
    JEL: C11 C15 C53 E37
    Date: 2010–12–21
  3. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo); Kentaro Akashi (Institute of Statistical Mathematics)
    Abstract: We consider the estimation of coefficients of a dynamic panel structural equation in the simultaneous equation models. As a semi-parametric method, we introduce a class of modifications of the limited information maximum likelihood (LIML) estimator to improve its asymptotic properties as well as the small sample properties when we have individual heteroscedasticities. We shall show that an asymptotically optimal modification of the LIML estimator, which is called AOM-LIML, removes the asymptotic bias caused by the forward-filtering and improves the LIML and other estimation methods with individual heteroscedasticities.
    Date: 2010–12
  4. By: Russell Davidson (McGill University); James G. MacKinnon (Queen's University)
    Abstract: Economists are often interested in the coefficient of a single endogenous explanatory variable in a linear simultaneous equations model. One way to obtain a confidence set for this coefficient is to invert the Anderson-Rubin test. The "AR confidence sets" that result have correct coverage under classical assumptions. In this paper, however, we show that AR confidence sets also have many undesirable properties. Their coverage conditional on quantities that the investigator can observe, notably the Sargan statistic, can be far from correct. It is well known that they can be unbounded when the instruments are weak. Even when they are bounded, their length may be very misleading. We argue that, at least when the instruments are not so weak that inference is hopeless, it is much better to obtain confidence intervals by bootstrapping either the IV or LIML t statistic on the coefficient of interest in a particular way that we propose.
    Keywords: bootstrap, confidence interval, instrumental variables, LIML, Sargan test, weak instruments
    JEL: C15
    Date: 2011–01
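    For readers who want to see the mechanics, the following is a minimal sketch of inverting the Anderson-Rubin test over a grid of hypothesized coefficient values. The data-generating process, instrument strength, and grid are illustrative assumptions, not the paper's setup (the paper in fact argues for bootstrapping IV or LIML t statistics instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 500, 3                                  # observations, instruments

# Simulate a simple IV model y = beta*x + u with an endogenous regressor x
beta = 1.0
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
u = 0.8 * v + 0.6 * rng.standard_normal(n)     # corr(u, v) != 0 => endogeneity
x = Z @ np.array([0.5, 0.4, 0.3]) + v
y = beta * x + u

PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # projection onto the instrument space

def ar_stat(b0):
    """Anderson-Rubin F statistic for H0: beta = b0 (regress y - b0*x on Z)."""
    e = y - b0 * x
    explained = e @ PZ @ e
    return (explained / k) / ((e @ e - explained) / (n - k))

# Invert the test: keep every b0 on the grid that the 5% AR test does not reject
crit = stats.f.ppf(0.95, k, n - k)
grid = np.linspace(0.0, 2.0, 401)
conf_set = grid[np.array([ar_stat(b) for b in grid]) <= crit]
if conf_set.size:
    print("AR 95%% confidence set: [%.3f, %.3f]" % (conf_set.min(), conf_set.max()))
```

    With strong instruments, as here, the set is a bounded interval around the true coefficient; with weak instruments it can be unbounded or empty, which is part of the pathology the paper documents.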
  5. By: Oberhofer, Harald (University of Salzburg); Pfaffermayr, Michael (Department of Economics and Statistics, University of Innsbruck)
    Abstract: This note proposes a generalized two-part model for fractional response variables that nests the one-part model proposed by Papke and Wooldridge (1996). Consequently, a Wald test allows one to discriminate between these two competing models. A small-scale Monte Carlo simulation demonstrates that the proposed Wald test is properly sized and has higher power than an alternative non-nested P-test.
    Keywords: Fractional response models; two-part model; Wald test; P-test
    JEL: C12 C15 C21 C25
    Date: 2011–01–05
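    The one-part model of Papke and Wooldridge (1996) is estimated by Bernoulli quasi-maximum likelihood, which stays valid for any y in [0, 1]. A minimal sketch on simulated data follows; the design, parameter values, and noise model are illustrative assumptions, not the paper's Monte Carlo setup:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-0.5, 1.0])

# Fractional response in (0, 1) with conditional mean expit(X @ beta)
mu = expit(X @ beta_true)
y = np.clip(mu + 0.1 * rng.standard_normal(n), 1e-6, 1 - 1e-6)

def neg_quasi_loglik(b):
    """Negative Bernoulli quasi-log-likelihood (Papke-Wooldridge one-part model)."""
    p = expit(X @ b)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_quasi_loglik, x0=np.zeros(2), method="BFGS")
print(res.x)   # quasi-ML estimates, close to beta_true in large samples
```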
  6. By: Ingo Geishecker; Maximilian Riedl
    Abstract: The paper compares different estimation strategies of ordered response models in the presence of non-random unobserved heterogeneity. By running Monte Carlo simulations with a range of randomly generated panel data of differing cross-sectional and longitudinal dimension sizes we assess the consistency and efficiency of standard models such as linear fixed effects, ordered and conditional logit and several different binary recoding procedures. Among the analyzed binary recoding procedures is the conditional ordered logit estimator proposed by Ferrer-i-Carbonell and Frijters (2004) that recently has gained some popularity in the analysis of individual well-being. The Ferrer-i-Carbonell and Frijters estimator (FCF) performs best if the number of observations is large and the number of categories on the ordered scale is small. However, a much simpler individual mean based binary recoding scheme performs similarly well and even outperforms the FCF estimator if the number of categories on the ordered scale becomes large. If the researcher is, however, only interested in the relative size of coefficients with respect to a baseline, the easy-to-compute linear fixed effects model essentially delivers the same results as the more elaborate binary recoding schemes.
    Keywords: fixed effects ordered logit, ordered responses, happiness
    JEL: C23 C25 I31
    Date: 2010–11–30
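    The individual-mean-based binary recoding the abstract refers to can be sketched in a few lines. The panel below is randomly generated and purely illustrative; in practice the recoded panel would then be passed to a fixed-effects (conditional) logit:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 4, 5                                    # individuals, periods (toy sizes)

# Ordered responses on a 1..7 scale, stored as an N x T panel
y = rng.integers(1, 8, size=(N, T))

# Individual-mean-based recoding: 1 if a response exceeds the person's
# own average over time, 0 otherwise
y_bin = (y > y.mean(axis=1, keepdims=True)).astype(int)
print(y_bin)
```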
  7. By: Ana Paula Martins
    Abstract: This research contrasts three econometric alternatives for stochastic efficiency frontier analysis: order (inter-quantile) and inverse order regression under the assumption of a truncated error term distribution, and replicated moment estimation. The demonstration starts from a simple linear regression form of the effective frontier; truncated (at zero) errors are then added to it for simulation purposes. For order regression, experiments with standard normal, uniform, exponential, Cauchy and logistic error terms are provided. For complex error structures we rely on normal distributions only. The three alternatives perform satisfactorily for simple error disturbances, especially if they are normal. With more than one residual added to the dependent variable, the weight of the unrestricted-range residual can blur the conclusions regarding observation efficiency.
    Keywords: Stochastic Frontier Model, Generalized Method of Order Statistics, Minimum Distance Method of Order Statistics, Inverse Order Regression, Replicated Moments, Linear Models.
    JEL: C24 C10
    Date: 2010–08–10
  8. By: Nikolay Iskrev
    Abstract: This paper presents a new approach to parameter identification analysis in DSGE models wherein the strength of identification is treated as a property of the underlying model and studied prior to estimation. The strength of identification reflects the empirical importance of the economic features represented by the parameters. Identification problems arise when some parameters are either nearly irrelevant or nearly redundant with respect to the aspects of reality the model is designed to explain. The strength of identification therefore is not only crucial for the estimation of models, but also has important implications for model development. The proposed measure of identification strength is based on the Fisher information matrix of DSGE models and depends on three factors: the parameter values, the set of observed variables and the sample size. By applying the proposed methodology, researchers can determine the effect of each factor on the strength of identification of individual parameters, and study how it is related to structural and statistical characteristics of the economic model. The methodology is illustrated using the medium-scale DSGE model estimated in Smets and Wouters (2007).
    JEL: C32 C51 C52 E32
    Date: 2010
  9. By: Antonio Merlo (Department of Economics, University of Pennsylvania); Aureo de Paula (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies the nonparametric identification and estimation of voters' preferences when voters are ideological. We build on the methods introduced by Degan and Merlo (2009), representing elections as Voronoi tessellations of the ideological space. We exploit the properties of this geometric structure to establish that voter preference distributions and other parameters of interest can be identified from aggregate electoral data. We also show that these objects can be consistently estimated using the methodology proposed by Ai and Chen (2003), and we illustrate our analysis by performing an actual estimation using data from the 1999 European Parliament elections.
    Keywords: Voting, Voronoi tessellation, identification, nonparametric
    JEL: D72 C14
    Date: 2010–12–31
  10. By: Moauro, Filippo
    Abstract: The paper presents the results of an extensive real-time analysis of alternative model-based approaches to derive a monthly indicator of employment for the euro area. In the experiment, the Eurostat quarterly national accounts series of employment is temporally disaggregated using the information coming from the monthly series of unemployment. The strategy benefits from the contribution of the information set of the euro area and its 6 largest member states, as well as the split into the 6 sections of economic activity. The models under comparison include univariate regressions of the Chow and Lin type, where the euro area aggregate is directly and indirectly derived, as well as multivariate structural time series models of small and medium size. The specification in logarithms is also systematically assessed. The largest multivariate setups, up to 49 series, are estimated through the EM algorithm. The main conclusions are the following: mean revision errors of disaggregated estimates of employment are overall small; a gain is obtained when the model strategy takes into account the information by both sector and member state; and the largest multivariate setups outperform those of small size and the strategies based on classical disaggregation methods.
    Keywords: temporal disaggregation methods; multivariate structural time series models; mixed-frequency models; EM algorithm; Kalman filter and smoother
    JEL: C51 C32 C52 C22
    Date: 2010–12–30
  11. By: Matthias Fengler; Helmut Herwartz; Christian Werner
    Abstract: Equity index implied volatility functions are known to be excessively skewed in comparison with implied volatility at the single stock level. We study this stylized fact for the case of a major German stock index, the DAX, by recovering index implied volatility from simulating the 30 dimensional return system of all DAX constituents. Option prices are computed after risk neutralization of the multivariate process which is estimated under the physical probability measure. The multivariate models belong to the class of copula asymmetric dynamic conditional correlation models. We show that moderate tail-dependence coupled with asymmetric correlation response to negative news is essential to explain the index implied volatility skew. Standard dynamic correlation models with zero tail-dependence fail to generate a sufficiently steep implied volatility skew.
    Keywords: Copula Dynamic Conditional Correlation, Basket Options, Multivariate GARCH Models, Change of Measure, Esscher Transform
    JEL: C32 C15 G13 G14
    Date: 2010–12
  12. By: Nidhiya Menon (Department of Economics, Brandeis University); Mark M. Pitt (Brown University)
    Abstract: This paper proposes a novel instrumental variable method for program evaluation that only requires a single cross-section of data on the spatial intensity of programs and outcomes. The instruments are derived from a simple theoretical model of government decision-making in which governments are responsive to the attributes of places and their populations, rather than to the attributes of individuals, in making allocation decisions across space, and have a social welfare function that is spatially weakly separable, that is, that the budgeting process is multi-stage with respect to administrative districts and sub-districts. The spatial instrumental variables model is then estimated and tested by GMM with a single cross-section of Indonesian census data. The results offer support to the identification strategy proposed.
    Keywords: Spatial Decentralization, Program Evaluation, Instrumental Variables, Indonesia
    JEL: C21 H44 O12 C50
    Date: 2010–09
  13. By: Marco Buchmann (European Central Bank, DG Financial Stability, Financial Stability Assessment Division, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: This paper aims at providing a detailed analysis of the leading indicator properties of corporate bond spreads for real economic activity in the euro area. In- and out-of-sample predictive content of corporate bond spreads is examined along three dimensions: the bonds’ quality, their term to maturity, and the forecast horizon at which one intends to predict a change in real activity. Numerous alternative leading indicators capturing macroeconomic and financial conditions are included in the analysis. Along with standard time series forecast models, the Least Angle Regression (LAR) technique is used to build multivariate models recursively. Models built via LAR can be used to produce forecasts and allow one to analyze how the composition and the number of relevant model variables evolve over time. Corporate bond spreads turn out to be valuable predictors for real activity, in particular at forecast horizons beyond one year; medium risk bond spreads with maturities between 5 and 10 years appear particularly rich in content. The spreads also belong to the group of indicators that implied the highest probability of a recession occurring from a pre-crisis perspective.
    Keywords: Corporate bond spreads, point and density forecasting, automatic model building, least angle regression
    JEL: E32 E37 E44 G32
    Date: 2011–01
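    Least Angle Regression itself is a standard algorithm, and a minimal illustration is easy to run with scikit-learn's Lars. The variable-selection setting below is simulated for illustration only, not the paper's bond-spread dataset:

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.standard_normal((n, p))
# Only the first two predictors matter, mimicking a variable-selection setting
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * rng.standard_normal(n)

# LAR adds predictors one at a time; stop after two active variables
lar = Lars(n_nonzero_coefs=2).fit(X, y)
print(np.flatnonzero(lar.coef_))   # indices of the selected predictors
```

    Re-fitting such a model recursively over an expanding sample window is what lets one track how the set of selected indicators evolves over time, as the paper does.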
  14. By: Sriram Shankar (School of Economics, University of Queensland); Chris O'Donnell (School of Economics, University of Queensland); John Quiggin (School of Economics, University of Queensland)
    Abstract: In this article we model production technology in a state-contingent framework. Our model analyzes production under uncertainty without being explicit about the nature of producer risk preferences. In our model producers’ risk preferences are captured by the risk-neutral probabilities they assign to the different states of nature. Using a state-general state-contingent specification of technology we show that rational producers who encounter the same stochastic technology can make significantly different production choices. Further, we develop an econometric methodology to estimate the risk-neutral probabilities and the parameters of stochastic technology when there are two states of nature, only one of which is observed. Finally, we simulate data based on our state-general state-contingent specification of technology. Biased estimates of the technology parameters are obtained when we apply the conventional ordinary least squares (OLS) estimator to the simulated data.
    Keywords: CES, Cobb-Douglas, OLS, output-cubical, risk-neutral, state-allocable, state-contingent
    JEL: C15 C63 D21 D81 Q10
    Date: 2010–12
  15. By: Giacomo Sbrana (BETA/CNRS, Université de Strasbourg, France.)
    Date: 2010
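    No abstract is provided for this entry. For context, the standard additive damped-trend exponential smoothing recursions (Gardner-McKenzie) that the title refers to can be sketched as follows; the smoothing parameters, initialization, and data are illustrative choices, not taken from the paper:

```python
import numpy as np

def damped_trend_forecast(y, alpha=0.5, beta=0.1, phi=0.9, h=4):
    """Additive damped-trend exponential smoothing, h-step-ahead forecasts.

    Level:  l_t = alpha*y_t + (1-alpha)*(l_{t-1} + phi*b_{t-1})
    Trend:  b_t = beta*(l_t - l_{t-1}) + (1-beta)*phi*b_{t-1}
    Forecast: y_{t+h} = l_t + (phi + phi^2 + ... + phi^h) * b_t
    """
    l, b = y[0], y[1] - y[0]                   # simple initialization
    for t in range(1, len(y)):
        l_prev = l
        l = alpha * y[t] + (1 - alpha) * (l_prev + phi * b)
        b = beta * (l - l_prev) + (1 - beta) * phi * b
    damp = np.cumsum(phi ** np.arange(1, h + 1))   # geometrically damped trend sum
    return l + damp * b

y = np.array([10.0, 11.0, 12.5, 13.0, 14.2, 14.8])
fc = damped_trend_forecast(y)
print(fc)
```

    Because 0 < phi < 1, the forecast path flattens toward a finite asymptote instead of extrapolating the trend linearly.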
  16. By: Diewert, Erwin
    Abstract: The paper uses data on sales of detached houses in a small Dutch town over 14 quarters, starting in the first quarter of 2005, to compare various methods for constructing a house price index over this period. Four classes of methods are considered: (i) stratification techniques plus normal index number theory; (ii) time dummy hedonic regression models; (iii) hedonic imputation techniques; and (iv) hedonic regression models that are additive in land and structures. The last approach is used to decompose the price of a house into land and structure components, and it relies on the imposition of some monotonicity constraints or exogenous information on price movements for structures. The problems associated with constructing an index for the stock of houses using information on the sales of houses are also considered.
    Keywords: Property price indexes, hedonic regressions, stratification techniques, rolling year indexes, Fisher ideal indexes
    JEL: C2 C23 C43 D12 E31 R21
    Date: 2011–01–07
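    Of the four classes of methods, the time dummy hedonic regression is the simplest to sketch: regress log price on characteristics plus period dummies and exponentiate the dummy coefficients to get the index. The data below are simulated with a single hypothetical "floor area" characteristic, not the Dutch dataset used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_q, Q = 100, 4                    # toy sample: 100 sales in each of 4 quarters

# Simulated sales: log price = quality effect + quarter effect + noise
area = rng.uniform(50, 200, size=n_per_q * Q)            # floor area in m2
quarter = np.repeat(np.arange(Q), n_per_q)
true_log_level = np.array([0.0, 0.03, 0.05, 0.02])       # log price level per quarter
logp = 10 + 0.008 * area + true_log_level[quarter] + 0.05 * rng.standard_normal(n_per_q * Q)

# Time-dummy hedonic regression: dummies for quarters 2..Q plus the characteristic
D = (quarter[:, None] == np.arange(1, Q)[None, :]).astype(float)
X = np.column_stack([np.ones_like(area), area, D])
coef, *_ = np.linalg.lstsq(X, logp, rcond=None)

index = np.exp(np.concatenate([[0.0], coef[2:]]))        # price index, base quarter = 1
print(index)
```

    The exponentiated dummy coefficients recover the quarterly price levels up to sampling noise; hedonic imputation and stratification methods answer the same question with different weighting of the characteristics.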

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.