nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒12‒18
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Calculating Degrees of Freedom in Multivariate Local Polynomial Regression By Nadine McCloud; Christopher F. Parmeter
  2. M-Estimation of a Nonparametric Threshold Regression Model By Daniel J. Henderson; Christopher F. Parmeter; Liangjun Su
  3. Estimation methods for non-homogeneous regression models: Minimum continuous ranked probability score vs. maximum likelihood By Manuel Gebetsberger; Jakob W. Messner; Georg J. Mayr; Achim Zeileis
  4. Bootstrap-Based Inference for Cube Root Consistent Estimators By Matias D. Cattaneo; Michael Jansson; Kenichi Nagasawa
  5. A Random Attention Model By Matias D. Cattaneo; Xinwei Ma; Yusufcan Masatlioglu; Elchin Suleymanov
  6. Risk-Neutral Moment-Based Estimation of Affine Option Pricing Models By Bruno Feunou; Cédric Okou
  7. Robust Inference for Dynamic Economies - with an application to Financial Frictions By Andreas Tryphonides
  8. Forecasting realized volatility: a review By Bucci, Andrea
  9. A new stochastic frontier model with cross-sectional effects in both noise and inefficiency terms By Orea, Luis; Álvarez, Inmaculada C.
  10. An endogenous regime-switching model of ordered choice with an application to federal funds rate target. By Andrei A. Sirchenko
  11. Higher order moments of the estimated tangency portfolio weights By Javed, Farrukh; Mazur, Stepan; Ngailo, Edward
  12. Kriging : Methods and Applications By Kleijnen, J.P.C.
  13. Inference in Conditional Moment Restriction Models when there is Selection due to Stratification By Antonio Cosma; Andreï Kostyrka; Gautam Tripathi
  14. On the consistency of the two-step estimates of the MS-DFM: a Monte Carlo study By Catherine Doz; Anna Petronevich
  15. How to Estimate Beta? By Hollstein, Fabian; Prokopczuk, Marcel; Wese Simen, Chardin
  16. Mixed Models as an Alternative to Farima By José Igor Morlanes

  1. By: Nadine McCloud (University of the West Indies, Mona); Christopher F. Parmeter (University of Miami)
    Abstract: The matrix that transforms the response variable in a regression to its predicted value is commonly referred to as the hat matrix. The trace of the hat matrix is a standard metric for calculating degrees of freedom. Nonparametric hat matrices do not enjoy all the properties of their parametric counterparts, in part because the former do not always stem directly from a traditional ANOVA decomposition. In the multivariate local polynomial setup with a mix of continuous and discrete covariates, some of which may be irrelevant, we formulate asymptotic expressions for the trace of the resulting non-ANOVA and ANOVA-based hat matrices from the estimator of the unknown conditional mean. The asymptotic expression for the trace of the non-ANOVA hat matrix associated with the conditional mean estimator equals that of the ANOVA-based hat matrix up to a linear combination of kernel-dependent constants. Additionally, we document that the trace of the ANOVA-based hat matrix converges to 0 in any setting where the bandwidths diverge. This attrition outcome can occur in the presence of irrelevant continuous covariates, or it can arise when the underlying data-generating process is in fact of polynomial order. Simulated examples demonstrate that our theoretical contributions are valid in finite samples.
    Keywords: Trace, Degrees of Freedom, Effective Parameters, Nonparametric Regression, Irrelevant Regressors, Bandwidth, Goodness-of-fit
    Publication Status: Submitted
    JEL: C14
    Date: 2017–11–21
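    As an illustrative aside (ours, not from the paper): the trace-of-the-hat-matrix notion of degrees of freedom is easy to compute for a one-dimensional local-constant (Nadaraya-Watson) smoother. The function name, kernel, and bandwidths below are our choices; the paper's multivariate local polynomial setting is not reproduced.

    ```python
    import numpy as np

    def nw_hat_trace(x, h):
        """Trace of the Nadaraya-Watson (local-constant) hat matrix with a
        Gaussian kernel: a standard notion of effective degrees of freedom."""
        u = (x[:, None] - x[None, :]) / h          # pairwise scaled differences
        K = np.exp(-0.5 * u**2)                    # Gaussian kernel weights
        H = K / K.sum(axis=1, keepdims=True)       # row-normalise -> hat matrix
        return np.trace(H)

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 200)
    # A small bandwidth uses many effective parameters; as h diverges the
    # local-constant fit collapses to the global mean and the trace tends to 1.
    print(nw_hat_trace(x, 0.01))   # large
    print(nw_hat_trace(x, 100.0))  # close to 1
    ```

    The diverging-bandwidth limit here illustrates, in the simplest possible setting, how the effective number of parameters shrinks as bandwidths grow.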
  2. By: Daniel J. Henderson (University of Alabama); Christopher F. Parmeter (University of Miami); Liangjun Su (Singapore Management University)
    Abstract: The present work uses semiparametric M-estimation to construct an estimator for a threshold parameter in a nonparametric regression model. Given that this parameter is only weakly identified, we develop a set of sufficient conditions whereby our semiparametric M-estimator is consistent and asymptotically normal. Our work extends the theory of Chen, Linton and Van Keilegom (2003) to settings where there is weak identification for a semiparametric model. A range of Monte Carlo simulations and three empirical examples (threshold, asymmetric time series and regression discontinuity) support the asymptotic developments.
    Keywords: Change Point, M-Estimation, Nonparametric Threshold Regression, Regression Discontinuity, Structural Change
    Publication Status: Submitted
    JEL: G14 G24
    Date: 2017–10–19
  3. By: Manuel Gebetsberger; Jakob W. Messner; Georg J. Mayr; Achim Zeileis
    Abstract: Non-homogeneous regression models are widely used to statistically post-process numerical ensemble weather prediction models. Such regression models are capable of forecasting full probability distributions and correcting ensemble errors in the mean and variance. To estimate the corresponding regression coefficients, minimization of the continuous ranked probability score (CRPS) has been widely used in meteorological post-processing studies and has often been found to yield more calibrated forecasts than maximum likelihood estimation. From a theoretical perspective, both estimators are consistent and should lead to similar results, provided the distributional assumption about the data is correct. Differences between the estimated values indicate a misspecification of the regression model. This study compares the two estimators for probabilistic temperature forecasting with non-homogeneous regression; the results show discrepancies under the classical Gaussian assumption. The heavy-tailed logistic and Student-t distributions can improve forecast performance in terms of sharpness and calibration, and lead to only minor differences between the two estimators. Finally, a simulation study confirms the importance of appropriate distributional assumptions and shows that for a correctly specified model the maximum likelihood estimator is slightly more efficient than the CRPS estimator.
    Keywords: ensemble post-processing, maximum likelihood, CRPS minimization, probabilistic forecasting, distributional regression models
    JEL: C13 C15 C16 C51 C61
    Date: 2017–11
  4. By: Matias D. Cattaneo; Michael Jansson; Kenichi Nagasawa
    Abstract: This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed.
    Date: 2017–04
  5. By: Matias D. Cattaneo; Xinwei Ma; Yusufcan Masatlioglu; Elchin Suleymanov
    Abstract: We introduce a Random Attention Model (RAM) allowing for a large class of stochastic consideration maps in the context of an otherwise canonical limited attention model for decision theory. The model relies on a new restriction on the unobserved, possibly stochastic consideration map, termed Monotonic Attention, which is intuitive and nests many recent contributions in the literature on limited attention. We develop revealed preference theory within RAM and obtain precise testable implications for observable choice probabilities. Using these results, we show that a set (possibly a singleton) of strict preference orderings compatible with RAM is identifiable from the decision maker's choice probabilities, and establish a representation of this identified set of unobserved preferences as a collection of inequality constraints on her choice probabilities. Given this nonparametric identification result, we develop uniformly valid inference methods for the (partially) identifiable preferences. We showcase the performance of our proposed econometric methods using simulations, and provide a general-purpose software implementation of our estimation and inference results in the R package ramchoice. Our proposed econometric methods are computationally very fast to implement.
    Date: 2017–12
  6. By: Bruno Feunou; Cédric Okou
    Abstract: This paper provides a novel methodology for estimating option pricing models based on risk-neutral moments. We synthesize the distribution extracted from a panel of option prices and exploit linear relationships between risk-neutral cumulants and latent factors within the continuous time affine stochastic volatility framework. We find that fitting the Andersen, Fusari, and Todorov (2015b) option valuation model to risk-neutral moments captures the bulk of the information in option prices. Our estimation strategy is effective, easy to implement, and robust, as it allows for a direct linear filtering of the latent factors and a quasi-maximum likelihood estimation of model parameters. From a practical perspective, employing risk-neutral moments instead of option prices also helps circumvent several sources of numerical errors and substantially lessens the computational burden inherent in working with a large panel of option contracts.
    Keywords: Asset pricing; Econometric and statistical methods
    JEL: G12
    Date: 2017
  7. By: Andreas Tryphonides
    Abstract: We propose a new inferential methodology for dynamic economies that is robust to misspecification of the mechanism generating frictions. Economies with frictions are treated as perturbations of a frictionless economy that are consistent with a variety of mechanisms. We derive a representation for the law of motion for such economies and we characterize parameter set identification. We derive a link from model aggregate predictions to distributional information contained in qualitative survey data and specify conditions under which the identified set is refined. The latter is used to semi-parametrically estimate distortions due to frictions in macroeconomic variables. Based on these estimates, we propose a novel test for complete models. Using consumer and business survey data collected by the European Commission, we apply our method to estimate distortions due to financial frictions in the Spanish economy. We investigate the implications of these estimates for the adequacy of the standard model of financial frictions SW-BGG (Smets and Wouters (2007), Bernanke, Gertler, and Gilchrist (1999)).
    Date: 2017–12
  8. By: Bucci, Andrea
    Abstract: Modeling financial volatility is an important part of empirical finance. This paper provides a literature review of the most relevant volatility models, with a particular focus on forecasting models. We first discuss the empirical foundations of different kinds of volatility. The paper then analyses the non-parametric measure of volatility known as realized variance and its empirical applications. A wide range of realized volatility models, both univariate and multivariate, is presented, including time series models, MIDAS and GARCH-MIDAS models, Realized GARCH, and HEAVY models. We further discuss forecast evaluation methods specifically suited for volatility models.
    Keywords: Realized Volatility; Stochastic Volatility; Volatility Models
    JEL: C22 C53 G10
    Date: 2017–12
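    As a minimal illustration of the realized-variance measure the review builds on (our own simulated example, not from the paper): summing squared intraday log returns estimates the day's integrated variance.

    ```python
    import numpy as np

    def realized_variance(intraday_prices):
        """Daily realized variance: sum of squared intraday log returns."""
        r = np.diff(np.log(intraday_prices))
        return np.sum(r**2)

    rng = np.random.default_rng(2)
    # One day of 390 one-minute log-price increments with constant spot
    # volatility; realized variance should estimate the integrated variance.
    sigma = 0.01                      # daily volatility
    n = 390
    increments = rng.normal(0.0, sigma / np.sqrt(n), size=n)
    prices = 100 * np.exp(np.cumsum(np.insert(increments, 0, 0.0)))
    rv = realized_variance(prices)
    print(rv)  # close to sigma**2 = 1e-4
    ```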
  9. By: Orea, Luis; Álvarez, Inmaculada C.
    Abstract: This paper develops a new stochastic frontier model that allows for cross-sectional (spatial) correlation in both the noise and inefficiency terms. The model proposed is useful in efficiency analyses when there are omitted but spatially-correlated variables and firms benefit from best practices implemented by other (adjacent) firms. Unlike the previous literature, our model can be estimated by maximum likelihood using standard software. The model is illustrated with an application to the Norwegian electricity distribution sector.
    Date: 2017
  10. By: Andrei A. Sirchenko
    Abstract: This paper introduces a class of ordered probit models with endogenous switching among N latent regimes and possibly endogenous explanatory variables. The paper contributes to and bridges two strands of the microeconometric literature. First, it extends endogenous switching regressions to models of ordered choice with N unknown regimes. Second, it generalizes the existing zero-inflated ordered probit models to make them suitable for ordinal data that take on negative, zero and positive values and are characterized by abundant and heterogeneous zero observations. From a macroeconomic perspective, it is the first attempt to implement regime switching and accommodate endogenous regressors in discrete-choice monetary policy rules. Recurring oscillating switches among three regimes, evolving endogenously in response to the state of the economy, are detected during a relatively stable policy period such as the Greenspan era. The Monte Carlo experiments and an application to the federal funds rate target demonstrate that ignoring endogeneity and the regime-switching environment can lead to seriously distorted statistical inference. In the simulations, the new models perform well in small samples. In the application, they not only fit the Greenspan era better in sample than the existing models but also forecast better out of sample for the entire Bernanke era, correctly predicting 91 percent of policy decisions.
    JEL: C34 C35 C36 E52
    Date: 2017–11–19
  11. By: Javed, Farrukh (Örebro University School of Business); Mazur, Stepan (Örebro University School of Business); Ngailo, Edward (Stockholm University)
    Abstract: In this paper we consider the estimated weights of the tangency portfolio. The returns are assumed to be independently and multivariate normally distributed. We derive analytical expressions for the higher-order non-central and central moments of these weights. Moreover, the expressions for the mean, variance, skewness and kurtosis of the estimated weights are obtained in closed form. Finally, we complement our results with an empirical study in which we analyze a portfolio with actual returns of eight financial indexes listed on the NASDAQ stock exchange.
    Keywords: Tangency portfolio; higher order moments; Wishart distribution
    JEL: C10 C44
    Date: 2017–12–07
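    For context, the estimated weights studied in the paper are the plug-in quantity w proportional to S^{-1}(m - rf), computed from the sample mean m and sample covariance S. The sketch below (illustrative data and names of ours; the paper's moment derivations are not reproduced) shows the estimator itself.

    ```python
    import numpy as np

    def tangency_weights(returns, rf=0.0):
        """Estimated tangency-portfolio weights from a sample of returns:
        solve S w = (m - rf), then normalise the weights to sum to one."""
        m = returns.mean(axis=0)               # sample mean returns
        S = np.cov(returns, rowvar=False)      # sample covariance matrix
        raw = np.linalg.solve(S, m - rf)
        return raw / raw.sum()

    rng = np.random.default_rng(3)
    # Three assets with independent normal returns: a stylised stand-in for
    # the multivariate-normal assumption in the paper.
    mu = np.array([0.05, 0.07, 0.06])
    returns = rng.normal(mu, 0.1, size=(1000, 3))
    w = tangency_weights(returns)
    print(w, w.sum())  # estimated weights; they sum to 1
    ```

    Because w is a nonlinear function of noisy estimates m and S, its sampling distribution is skewed and heavy-tailed, which is why the higher-order moments derived in the paper matter.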
  12. By: Kleijnen, J.P.C. (Tilburg University, Center For Economic Research)
    Abstract: In this chapter we present Kriging, also known as a Gaussian process (GP) model, which is a mathematical interpolation method. To select the input combinations to be simulated, we use Latin hypercube sampling (LHS); we allow uniform and non-uniform distributions of the simulation inputs. Besides deterministic simulation we discuss random simulation, which requires adjusting the design and analysis. We discuss sensitivity analysis of simulation models, using "functional analysis of variance" (FANOVA), also known as Sobol sensitivity indexes. Finally, we discuss optimization of the simulated system, including "robust" optimization.
    Keywords: Gaussian process; Latin hypercube; deterministic simulation; random simulation; sensitivity analysis; optimization
    JEL: C0 C1 C9 C15 C44
    Date: 2017
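    A minimal kriging sketch (ours, assuming a Gaussian correlation function and a constant trend; the chapter's LHS designs and robust optimization are not shown). For deterministic simulation the kriging predictor interpolates the observed outputs exactly, up to the tiny numerical nugget.

    ```python
    import numpy as np

    def kriging_fit_predict(X, y, Xnew, theta=50.0, nugget=1e-10):
        """Kriging sketch with Gaussian correlation exp(-theta * d^2) and a
        constant trend; the predictor interpolates the design points."""
        def corr(A, B):
            d2 = (A[:, None] - B[None, :])**2
            return np.exp(-theta * d2)
        R = corr(X, X) + nugget * np.eye(len(X))   # correlation matrix + nugget
        mu = y.mean()                              # constant trend term
        alpha = np.linalg.solve(R, y - mu)
        return mu + corr(Xnew, X) @ alpha

    X = np.linspace(0.0, 1.0, 8)     # design points (the chapter uses LHS; a grid here)
    y = np.sin(2 * np.pi * X)        # deterministic simulation output
    yhat = kriging_fit_predict(X, y, X)
    print(np.max(np.abs(yhat - y)))  # ~0: exact interpolation at design points
    ```

    For random simulation, as the chapter notes, the design and analysis change: a non-negligible nugget (noise variance) is added, and the predictor then smooths rather than interpolates.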
  13. By: Antonio Cosma (CREA, Université du Luxembourg); Andreï Kostyrka (CREA, Université du Luxembourg); Gautam Tripathi (CREA, Université du Luxembourg)
    Abstract: We show how to conduct efficient semiparametric inference in models specified as conditional moment equalities when data is collected by variable probability sampling.
    Date: 2017
  14. By: Catherine Doz (PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Anna Petronevich (PSE - Paris School of Economics, CREST - Centre de Recherche en Economie et Statistique [Bruz] - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz])
    Abstract: The Markov-Switching Dynamic Factor Model (MS-DFM) has been used in different applications, notably in business cycle analysis. When the cross-sectional dimension of the data is high, maximum likelihood estimation becomes infeasible due to the excessive number of parameters. In this case, the MS-DFM can be estimated in two steps: in the first step, the common factor is extracted from a database of indicators, and in the second step, a Markov-switching autoregressive model is fitted to the extracted factor. The validity of the two-step method is conventionally accepted, although the asymptotic properties of the two-step estimates have not yet been studied. In this paper we examine their consistency as well as their small-sample behavior with the help of Monte Carlo simulations. Our results indicate that the two-step estimates are consistent when the number of cross-section series and time observations is large; however, as expected, the estimates and their standard errors tend to be biased in small samples.
    Keywords: Markov-switching, Dynamic Factor models, two-step estimation,small-sample performance, consistency, Monte Carlo simulations
    Date: 2017–09
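    The first step of the two-step approach, extracting a common factor from the panel, can be sketched with principal components (our illustration; the second step, fitting the Markov-switching autoregression to the extracted factor, is omitted).

    ```python
    import numpy as np

    def first_step_factor(panel):
        """Step 1: extract a single common factor as the first principal
        component of the standardised panel (T x N)."""
        Z = (panel - panel.mean(0)) / panel.std(0)
        # Leading eigenvector of the sample covariance gives the loadings.
        vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
        loadings = vecs[:, -1]
        return Z @ loadings

    rng = np.random.default_rng(4)
    T, N = 300, 20
    f = np.cumsum(rng.normal(size=T)) * 0.1          # latent common factor
    lam = rng.normal(1.0, 0.2, size=N)               # factor loadings
    panel = np.outer(f, lam) + rng.normal(scale=0.5, size=(T, N))
    fhat = first_step_factor(panel)
    # The estimated factor is strongly correlated (up to sign) with f.
    print(abs(np.corrcoef(fhat, f)[0, 1]))
    ```

    The paper's question is precisely how the estimation error carried over from this first step affects the second-step Markov-switching estimates in finite samples.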
  15. By: Hollstein, Fabian; Prokopczuk, Marcel; Wese Simen, Chardin
    Abstract: Researchers and practitioners face many choices when estimating an asset's sensitivities toward risk factors, i.e., betas. We study the effect of different data sampling frequencies, forecast adjustments, and model combinations for beta estimation. Using the entire U.S. stock universe and a sample period of more than 50 years, we find that a historical estimator based on daily return data with an exponential weighting scheme as well as a shrinkage toward the industry average yield the best predictions for future beta. Adjustments for asynchronous trading, macroeconomic conditions, or regression-based combinations, on the other hand, typically yield very high prediction errors.
    Keywords: beta estimation; forecast combinations; forecast adjustments
    JEL: G12 G11 G17
    Date: 2017–11
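    The winning recipe the authors describe (daily returns, an exponential weighting scheme, shrinkage toward an average) can be sketched as follows; the halflife, shrinkage weight, and target below are illustrative choices of ours, not the paper's calibration.

    ```python
    import numpy as np

    def ewma_beta(stock, market, halflife=63.0, shrink=0.3, target=1.0):
        """Historical beta from daily returns with exponential weights,
        then linear shrinkage toward a prior (e.g. an industry average).
        halflife/shrink/target are illustrative, not the paper's values."""
        n = len(stock)
        w = 0.5 ** (np.arange(n)[::-1] / halflife)   # newest data weighted most
        mw = market - np.average(market, weights=w)
        sw = stock - np.average(stock, weights=w)
        beta = np.sum(w * mw * sw) / np.sum(w * mw * mw)
        return (1 - shrink) * beta + shrink * target

    rng = np.random.default_rng(5)
    market = rng.normal(0, 0.01, size=500)
    stock = 1.5 * market + rng.normal(0, 0.01, size=500)  # true beta = 1.5
    print(ewma_beta(stock, market))  # pulled from 1.5 toward the target 1.0
    ```

    Shrinkage trades a small bias toward the target for a large reduction in estimation variance, which is why it helps out-of-sample beta prediction.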
  16. By: José Igor Morlanes
    Abstract: We construct a new process using a fractional Brownian motion and a fractional Ornstein-Uhlenbeck process of the second kind as building blocks. We consider the increments of the new process in discrete time and, as a result, obtain a more parsimonious process with an autocovariance structure similar to that of a FARIMA. In practice, the variance of the new increment process has a closed-form expression that is easier to compute than that of a FARIMA.
    Date: 2017–12
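    For reference, the increments of fractional Brownian motion (fractional Gaussian noise) have a closed-form autocovariance that underlies such comparisons; this sketch (ours) evaluates it and exhibits the slow, FARIMA-like decay for H > 1/2.

    ```python
    import numpy as np

    def fgn_autocov(k, H, sigma2=1.0):
        """Autocovariance of fractional Gaussian noise (increments of fBm):
        gamma(k) = sigma^2/2 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
        k = np.abs(np.asarray(k, dtype=float))
        return 0.5 * sigma2 * ((k + 1)**(2 * H) - 2 * k**(2 * H)
                               + np.abs(k - 1)**(2 * H))

    lags = np.arange(0, 50)
    g = fgn_autocov(lags, H=0.8)
    # For H > 1/2 the autocovariance is positive and decays hyperbolically
    # (long memory), the same qualitative behaviour as a FARIMA process.
    print(g[0], g[1], g[40])
    ```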

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.