nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒07‒02
fourteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bayesian Analysis of Boundary and Near-Boundary Evidence in Econometric Models with Reduced Rank By Nalan Basturk; Lennart Hoogerheide; Herman K. van Dijk
  2. Factor Models for Non-Stationary Series: Estimates of Monthly U.S. GDP By Martina Hengge; Seton Leonard
  3. Bayesian semiparametric multivariate stochastic volatility with an application to international volatility co-movements By Martina Danielova Zaharieva; Mark Trede; Bernd Wilfling
  4. Smoothing Algorithms by Constrained Maximum Likelihood By Yang, Bill Huajian
  5. Parameter estimation for stable distributions with application to commodity futures log returns By Michael Kateregga; Sure Mataramvura; David Taylor
  6. Theory and Application of an Economic Performance Measure of Risk By Niu, C.; Guo, X.; McAleer, M.J.; Wong, W.K.
  7. Firms' Dynamics and Business Cycle: New Disaggregated Data By Lorenza Rossi; Emilio Zanetti Chini
  8. Estimating Taxable Income Responses with Elasticity Heterogeneity By Kumar, Anil; Liang, Che-Yuan
  9. The Correct Regularity Condition and Interpretation of Asymmetry in EGARCH By Chia-Lin Chang; Michael McAleer
  10. A Comparison Study on Criteria to Select the Most Adequate Weighting Matrix By Marcos Herrera; Jesus Mur; Manuel Ruiz-Marin
  11. Testing for Explosive Bubbles in the Presence of Autocorrelated Innovations By Thomas Quistgaard Pedersen; Erik Christian Montes Schütte
  12. A two-step indirect inference approach to estimate the long-run risk asset pricing model By Grammig, Joachim; Küchlin, Eva-Maria
  13. Farm heterogeneity and agricultural policy impacts on size dynamics: evidence from France By Legrand D. F. Saint-Cyr
  14. Modeling Qualitative Outcomes by Supplementing Participant Data with General Population Data: A Calibrated Qualitative Response Estimation Approach By Erard, Brian

  1. By: Nalan Basturk (Maastricht University, The Netherlands); Lennart Hoogerheide (VU Amsterdam and Tinbergen Institute, The Netherlands); Herman K. van Dijk (Erasmus University and Tinbergen Institute, The Netherlands and Norges Bank, Norway)
    Abstract: Weak empirical evidence near and at the boundary of the parameter region is a predominant feature in econometric models. Examples are macroeconometric models with weak information on the number of stable relations, microeconometric models measuring connectivity between variables with weak instruments, financial econometric models like the random walk with weak evidence on the efficient market hypothesis, and factor models for investment policies with weak information on the number of unobserved factors. A Bayesian analysis is presented of the issue common to these models: reduced rank. We introduce a lasso-type shrinkage prior combined with orthogonal normalization, which restricts the range of the parameters in a plausible way. This can be combined with other shrinkage, smoothness and data-based priors using training samples or dummy observations. Using such classes of priors, it is shown how conditional probabilities of evidence near and at the boundary can be evaluated effectively. These results allow for Bayesian inference using mixtures of posteriors under the boundary state and the near-boundary state. The approach is applied to the estimation of the education-income effect across all US states. The empirical results indicate substantial differences in this effect between almost all states. This may affect important national and state-level policies on the required length of education. The use of the proposed approach may, in general, lead to more accurate forecasting and decision analysis in other problems in economics, finance and marketing. [An illustrative code sketch follows this entry.]
    Keywords: Bayesian analysis, reduced rank, lasso priors, shrinkage, Bayesian mixtures
    JEL: C11 C51
    Date: 2017–06–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170058&r=ecm
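    Sketch: A toy illustration, under strong simplifying assumptions, of the boundary/near-boundary posterior probabilities the paper computes: a normal mean with weak evidence, a lasso-type (Laplace) shrinkage prior, and posterior mass near the boundary evaluated on a grid. The model, prior scale, and threshold below are illustrative, not the paper's (Python/numpy).

      import numpy as np

      rng = np.random.default_rng(9)
      y = rng.normal(0.1, 1.0, size=50)      # weak evidence that the mean differs from 0

      grid = np.linspace(-1.0, 1.0, 2001)    # grid for the mean parameter mu
      loglik = np.array([-0.5 * np.sum((y - m) ** 2) for m in grid])
      logprior = -np.abs(grid) / 0.2         # Laplace (lasso-type) shrinkage prior
      post = np.exp(loglik + logprior - (loglik + logprior).max())
      post /= post.sum()

      # posterior mass at/near the boundary state mu = 0
      print("P(|mu| < 0.05 | y) =", post[np.abs(grid) < 0.05].sum())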
  2. By: Martina Hengge (IHEID, The Graduate Institute of International and Development Studies, Geneva); Seton Leonard (IHEID, The Graduate Institute of International and Development Studies, Geneva)
    Abstract: This paper presents a novel dynamic factor model for non-stationary data. We begin by constructing a simple dynamic stochastic general equilibrium growth model and show that we can represent and estimate it using a simple linear-Gaussian (Kalman) filter. Crucially, consistent estimation does not require differencing the data, even though they are cointegrated of order 1. We then apply our approach to a mixed-frequency model which we use to estimate monthly U.S. GDP from May 1969 to January 2017 using 171 series, with an emphasis on housing-related data. We suggest that, at a quarterly rate, our estimates may in fact be more accurate than the measurement-error-prone observations. Finally, we use our model to construct pseudo real-time GDP nowcasts over the 2007 to 2009 financial crisis. This last exercise shows that a GDP index, as opposed to real-time estimates of GDP itself, may be more helpful in highlighting changes in the state of the macroeconomy. [An illustrative code sketch follows this entry.]
    Keywords: Forecasting; Factor model; Large data sets; Mixed frequency data; Nowcasting; Non-stationarity; Real-time data
    JEL: E27 E52 C53 C33
    Date: 2017–06–10
    URL: http://d.repec.org/n?u=RePEc:gii:giihei:heidwp13-2017&r=ecm
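    Sketch: A minimal linear-Gaussian (Kalman) filter extracting one common factor from a toy three-series panel, the filtering machinery the paper builds on. The dimensions, loadings, and AR(1) factor below are illustrative assumptions; the paper's mixed-frequency, non-stationary setup is far richer (Python/numpy).

      import numpy as np

      def kalman_filter(y, Lam, A, Q, R):
          # state f_t = A f_{t-1} + e_t,  observation y_t = Lam f_t + u_t
          T, n = y.shape
          k = A.shape[0]
          f, P = np.zeros(k), np.eye(k)
          out = np.zeros((T, k))
          for t in range(T):
              f, P = A @ f, A @ P @ A.T + Q                # predict
              S = Lam @ P @ Lam.T + R
              K = P @ Lam.T @ np.linalg.inv(S)             # Kalman gain
              f = f + K @ (y[t] - Lam @ f)                 # update
              P = P - K @ Lam @ P
              out[t] = f
          return out

      rng = np.random.default_rng(0)
      T = 200
      true_f = np.zeros(T)
      for t in range(1, T):
          true_f[t] = 0.9 * true_f[t - 1] + rng.normal()   # persistent factor
      Lam = np.array([[1.0], [0.5], [-0.7]])               # factor loadings
      y = true_f[:, None] @ Lam.T + rng.normal(size=(T, 3))
      est = kalman_filter(y, Lam, np.array([[0.9]]), np.array([[1.0]]), np.eye(3))
      print(np.corrcoef(est[:, 0], true_f)[0, 1])          # close to 1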
  3. By: Martina Danielova Zaharieva; Mark Trede; Bernd Wilfling
    Abstract: In this paper, we establish a Cholesky-type multivariate stochastic volatility estimation framework, in which we let the innovation vector follow a Dirichlet process mixture, thus enabling us to model highly flexible return distributions. The Cholesky decomposition allows parallel univariate process modeling and creates the potential for estimating high-dimensional specifications. We use Markov chain Monte Carlo methods for posterior simulation and predictive density computation. We apply our framework to a five-dimensional stock-return data set and analyze international volatility co-movements among the largest stock markets. [An illustrative code sketch follows this entry.]
    Keywords: Bayesian nonparametrics, Markov Chain Monte Carlo, Dirichlet process mixture, multivariate stochastic volatility
    JEL: C11 C14 C53 C58
    Date: 2017–06
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:6217&r=ecm
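    Sketch: Drawing innovations from a truncated stick-breaking representation of a Dirichlet process mixture of Gaussians, the flexible error distribution the paper places on the innovation vector. The concentration parameter, truncation level, and base measure are illustrative assumptions (Python/numpy).

      import numpy as np

      rng = np.random.default_rng(1)
      alpha, K = 2.0, 25                        # DP concentration, truncation level
      v = rng.beta(1.0, alpha, size=K)          # stick-breaking fractions
      w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))   # mixture weights
      mu = rng.normal(0.0, 1.0, size=K)         # component means from the base measure
      sigma = rng.gamma(2.0, 0.5, size=K)       # component scales from the base measure

      def draw_innovations(n):
          comp = rng.choice(K, size=n, p=w / w.sum())
          return rng.normal(mu[comp], sigma[comp])

      eps = draw_innovations(10_000)            # flexible, possibly multimodal errors
      print(eps.mean(), eps.std())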
  4. By: Yang, Bill Huajian
    Abstract: In the process of loan pricing, stress testing, capital allocation, modeling of the PD term structure, and IFRS 9 expected credit loss estimation, it is widely expected that higher risk grades carry higher default risks, and that an entity is more likely to migrate to a closer non-default rating than to a farther one. In practice, sample estimates of rating-level default rates or rating migration probabilities do not always respect this monotonicity rule, hence the need for smoothing approaches. Regression and interpolation techniques are widely used for this purpose. A common issue with these approaches is that the risk scale for the estimates is not fully justified, leading to possible bias in credit loss estimates. In this paper, we propose smoothing algorithms for rating-level PDs and rating migration probabilities. The smoothed estimates obtained by these approaches are optimal, and carry a fair risk scale, in the sense of constrained maximum likelihood, leading to more robust credit loss estimation. The proposed algorithms can be implemented easily by a modeller using, for example, the SAS procedure PROC NLMIXED, and will provide an effective smoothing tool for practitioners in the field of risk modeling. [An illustrative code sketch follows this entry.]
    Keywords: Credit loss estimation, risk scale, constrained maximum likelihood, PD term structure, rating migration probability
    JEL: C1 C13 C18 C5 C51 C52 C53 C54 C61 C63 E5 G31 G38 O32 O33 O38
    Date: 2017–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:79911&r=ecm
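    Sketch: Smoothing rating-level default rates under a monotonicity constraint by constrained maximum likelihood with a binomial likelihood, mirroring the paper's idea in Python/scipy rather than SAS PROC NLMIXED; the default and obligor counts below are made up for illustration.

      import numpy as np
      from scipy.optimize import minimize

      defaults = np.array([1, 3, 2, 14, 30])        # observed defaults per grade
      obligors = np.array([800, 600, 500, 400, 300])
      raw = defaults / obligors                     # raw rates: not monotone (grade 3 < grade 2)

      def neg_loglik(p):
          p = np.clip(p, 1e-10, 1 - 1e-10)
          return -np.sum(defaults * np.log(p) + (obligors - defaults) * np.log(1 - p))

      cons = [{"type": "ineq", "fun": lambda p, i=i: p[i + 1] - p[i]}
              for i in range(len(raw) - 1)]         # enforce p_1 <= ... <= p_K
      res = minimize(neg_loglik, x0=np.sort(raw), bounds=[(1e-6, 0.5)] * len(raw),
                     constraints=cons, method="SLSQP")
      print("raw      :", np.round(raw, 4))
      print("smoothed :", np.round(res.x, 4))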
  5. By: Michael Kateregga; Sure Mataramvura; David Taylor
    Abstract: This paper explores the theory behind the rich and robust family of α-stable distributions to estimate parameters from financial asset log-return data. We discuss four parameter-estimation methods: the quantile method, the logarithmic moments method, maximum likelihood (ML), and the empirical characteristic function (ECF) method. The contribution of the paper is two-fold. First, we discuss the above parametric approaches and investigate their performance through error analysis, arguing that the ECF performs better than ML over a wide range of values of the shape parameter α, including values closest to 0 and 2, and that the ECF has a better convergence rate than ML. Second, we compare the t location-scale distribution to the general stable distribution and show that the former fails to capture skewness that might exist in the data. This is observed by applying the ECF to commodity futures log-returns to obtain the skewness parameter. [An illustrative code sketch follows this entry.]
    Date: 2017–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1706.09756&r=ecm
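    Sketch: Empirical characteristic function (ECF) estimation of stable-law parameters, the method the abstract favours over ML. The grid of evaluation points and starting values are illustrative assumptions; the characteristic function below is the standard S1 parameterization for alpha != 1 (Python/scipy).

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.stats import levy_stable

      rng = np.random.default_rng(2)
      x = levy_stable.rvs(1.7, 0.3, loc=0.0, scale=1.0, size=5000, random_state=rng)

      t_grid = np.linspace(0.1, 1.0, 10)             # CF evaluation points

      def ecf(t, x):
          return np.exp(1j * np.outer(t, x)).mean(axis=1)

      def stable_cf(t, alpha, beta, delta, sigma):   # S1 parameterization, alpha != 1
          return np.exp(-np.abs(sigma * t) ** alpha
                        * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2))
                        + 1j * delta * t)

      def residuals(theta):
          diff = ecf(t_grid, x) - stable_cf(t_grid, *theta)
          return np.concatenate([diff.real, diff.imag])

      fit = least_squares(residuals, x0=[1.5, 0.0, 0.0, 1.0],
                          bounds=([0.5, -1, -5, 0.01], [2.0, 1, 5, 5.0]))
      print(fit.x)                                   # (alpha, beta, delta, sigma)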
  6. By: Niu, C.; Guo, X.; McAleer, M.J.; Wong, W.K.
    Abstract: Homm and Pigorsch (2012a) use the Aumann and Serrano index to develop a new economic performance measure (EPM), which is well known to have advantages over other measures. In this paper, we extend the theory by constructing a one-sample confidence interval for the EPM, and we construct confidence intervals for the difference of EPMs for two independent samples. We also derive the asymptotic distribution of the EPM and of the difference of two EPMs when the samples are independent. We conduct simulations to show that the proposed theory performs well for one and two independent samples, and that the proposed approach is robust in the dependent case. The theory developed is used to construct both one-sample and two-sample confidence intervals of EPMs for Singapore and USA stock indices. [An illustrative code sketch follows this entry.]
    Keywords: Economic performance measure, Asymptotic confidence interval, Bootstrap-based confidence interval, Method of variance estimates recovery
    JEL: C12 C15
    Date: 2017–06–15
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:100417&r=ecm
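    Sketch: A percentile-bootstrap confidence interval for the EPM, reading the Homm-Pigorsch measure as expected excess return divided by the Aumann-Serrano riskiness R solving E[exp(-X/R)] = 1. This reading, the simulated returns, and the bootstrap (rather than the paper's asymptotic intervals) are assumptions (Python/scipy).

      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(3)
      x = rng.normal(0.05, 0.2, size=1000)     # toy excess returns with positive mean

      def as_riskiness(x):
          # unique root of E[exp(-x/R)] = 1 in R > 0 (needs E[x] > 0, P(x < 0) > 0)
          return brentq(lambda r: np.mean(np.exp(-x / r)) - 1.0, 1e-3, 1e3)

      def epm(x):
          return x.mean() / as_riskiness(x)

      boot = np.array([epm(rng.choice(x, size=x.size, replace=True))
                       for _ in range(500)])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"EPM = {epm(x):.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")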
  7. By: Lorenza Rossi (Department of Economics and Management, University of Pavia); Emilio Zanetti Chini (Department of Economics and Management, University of Pavia)
    Abstract: We provide stylized facts on firm dynamics by disaggregating U.S. yearly data from 1977 to 2013. To this aim, we use an unobserved-components-based method encompassing several classical regression-based techniques currently in use. Our new time series of entry and exit of firms at the establishment level are feasible proxies of the business cycle. Exit is a leading and countercyclical indicator, while entry is lagging and procyclical. A structural econometric analysis supports the findings of the most recent theoretical literature on firm dynamics. Standard macroeconometric models estimated from our data outperform their equivalents estimated using the existing series.
    Keywords: Entry and Exit, State Space, Business Cycle, Disaggregation, SVAR.
    JEL: C13 C32 C40 E30 E32
    Date: 2017–06
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0141&r=ecm
  8. By: Kumar, Anil (Research Department, Federal Reserve Bank of Dallas); Liang, Che-Yuan (Department of Economics)
    Abstract: We explore the implications of heterogeneity in the elasticity of taxable income (ETI) for tax-reform based estimation methods. We theoretically show that existing methods yield elasticities that are biased and lack policy relevance. We illustrate the empirical importance of our theoretical analysis using the NBER tax panel for 1979-1990. We show that elasticity heterogeneity is the main explanation for large differences between estimates in the previous literature. Our preferred, newly suggested method yields elasticity estimates of approximately 0.7 for taxable income and 0.2 for broad income.
    Keywords: elasticity of taxable income; elasticity heterogeneity; tax reforms; panel data; preference heterogeneity
    JEL: D11 H24 J22
    Date: 2017–03–29
    URL: http://d.repec.org/n?u=RePEc:hhs:uunewp:2017_005&r=ecm
  9. By: Chia-Lin Chang (National Chung Hsing University, Taiwan); Michael McAleer (National Tsing Hua University, Taiwan; University of Sydney Business School; Erasmus University Rotterdam, The Netherlands, Complutense University of Madrid, Spain and Yokohama National University, Japan)
    Abstract: In the class of univariate conditional volatility models, the three most popular are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1993), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). For the purposes of deriving the mathematical regularity properties, including invertibility, determining the likelihood function for estimation, and establishing the statistical conditions for asymptotic properties, it is convenient to understand the stochastic properties underlying the three univariate models. The random coefficient autoregressive process was used to obtain GARCH by Tsay (1987), an extension of which was used by McAleer (2004) to obtain GJR. A random coefficient complex nonlinear moving average process was used by McAleer and Hafner (2014) to obtain EGARCH. These models can be used to capture asymmetry, which denotes the different effects on conditional volatility of positive and negative shocks of equal magnitude, and possibly also leverage, which is the negative correlation between return shocks and subsequent shocks to volatility (see Black 1976). McAleer (2014) showed that asymmetry was possible for GJR, but not leverage. McAleer and Hafner (2014) showed that leverage was not possible for EGARCH. Surprisingly, the condition for asymmetry in EGARCH seems to have been ignored in the literature, or the focus has been on incorrect conditions, with no clear explanation and hence with misleading interpretations. The purpose of this paper is to derive the regularity condition for asymmetry in EGARCH and to provide the correct interpretation. It is shown that, in practice, EGARCH always displays asymmetry, though not leverage. [An illustrative code sketch follows this entry.]
    Keywords: Conditional volatility models, random coefficient complex nonlinear moving average process, EGARCH, asymmetry, leverage, regularity condition
    JEL: C22 C52 C58 G32
    Date: 2017–06–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170056&r=ecm
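    Sketch: The one-step news impact of an EGARCH(1,1) recursion, illustrating the asymmetry the paper characterizes: positive and negative shocks of equal magnitude move log-volatility by different amounts whenever the asymmetry coefficient gamma is nonzero. Parameter values are illustrative assumptions (Python/numpy).

      import numpy as np

      omega, alpha, gamma, beta = -0.1, 0.1, -0.08, 0.95   # typical EGARCH values

      def next_log_var(log_var_prev, eps):
          # log h_t = omega + alpha*(|eps| - E|eps|) + gamma*eps + beta*log h_{t-1}
          return (omega + alpha * (abs(eps) - np.sqrt(2 / np.pi))
                  + gamma * eps + beta * log_var_prev)

      for shock in (1.0, -1.0):
          print(f"eps = {shock:+.1f} -> log h_t = {next_log_var(0.0, shock):.4f}")
      # With gamma != 0 the responses differ (asymmetry); with gamma < 0 the
      # negative shock raises volatility by more.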
  10. By: Marcos Herrera (CONICET-IELDE/UNSa); Jesus Mur (Universidad de Zaragoza); Manuel Ruiz-Marin (Universidad Politecnica de Cartagena)
    Abstract: The practice of spatial econometrics revolves around a weighting matrix, which is often supplied by the user based on prior knowledge. This is the so-called W issue, and the aprioristic approach is probably not the best solution. However, we must concede that, nowadays, there are few alternatives for the user. Under these circumstances, our contribution focuses on the problem of selecting a W matrix from among a finite set of matrices, all of them deemed appropriate for the case at hand. We develop a new and simple method based on the entropy of the probability distribution estimated for the data. Other alternatives to ours, which are common in current applied work, are also reviewed. The main part of the paper consists of a large Monte Carlo experiment conducted to calibrate the effectiveness of our approach compared to the others. A case study is also included. [An illustrative code sketch follows this entry.]
    Date: 2017–06
    URL: http://d.repec.org/n?u=RePEc:slt:wpaper:18&r=ecm
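    Sketch: One common (non-entropy) criterion of the kind the paper reviews: choosing among candidate W matrices by the maximized log-likelihood of a spatial lag model y = rho*W*y + X*beta + eps. The k-nearest-neighbour candidates and data-generating process are simulated for illustration; the paper's own entropy-based criterion is not reproduced here (Python/scipy).

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(4)
      n = 60
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      coords = rng.uniform(size=(n, 2))

      def knn_matrix(coords, k):          # row-standardized k-nearest-neighbour W
          D = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
          W = np.zeros((n, n))
          for i in range(n):
              W[i, np.argsort(D[i])[1:k + 1]] = 1.0
          return W / W.sum(axis=1, keepdims=True)

      candidates = {f"knn{k}": knn_matrix(coords, k) for k in (3, 5, 8)}

      # generate y from the knn5 candidate so the criterion has a right answer
      W_true = candidates["knn5"]
      y = np.linalg.solve(np.eye(n) - 0.5 * W_true, X @ [1.0, 2.0] + rng.normal(size=n))

      M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)    # residual maker of X

      def max_loglik(W):
          def neg_conc(rho):              # concentrated negative log-likelihood
              e = M @ (y - rho * W @ y)
              _, logdet = np.linalg.slogdet(np.eye(n) - rho * W)
              return n / 2 * np.log(e @ e / n) - logdet
          res = minimize_scalar(neg_conc, bounds=(-0.99, 0.99), method="bounded")
          return -res.fun

      for name, W in candidates.items():
          print(name, round(max_loglik(W), 2))   # pick the W with the largest value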
  11. By: Thomas Quistgaard Pedersen (Aarhus University and CREATES); Erik Christian Montes Schütte (Aarhus University and CREATES)
    Abstract: We analyze an empirically important issue with the recursive right-tailed unit root tests for bubbles in asset prices. First, we show that serially correlated innovations, a feature present in most financial series used to test for bubbles, can lead to severe size distortions when using either fixed or automatic (information-criteria-based) lag-length selection in the auxiliary regressions underlying the test. Second, we propose sieve-bootstrap versions of these tests and show that they yield more or less perfectly sized test statistics even in the presence of highly autocorrelated innovations. We also find that these improvements in size come at a relatively low cost to the power of the tests. Finally, we apply the bootstrap tests to the housing markets of OECD countries and generally find weaker evidence of bubbles than the existing literature. [An illustrative code sketch follows this entry.]
    Keywords: Right-tailed unit root tests, GSADF, Size and power properties, Sieve bootstrap, International housing market
    JEL: C58 G12
    Date: 2017–02–16
    URL: http://d.repec.org/n?u=RePEc:aah:create:2017-09&r=ecm
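    Sketch: The sieve-bootstrap idea behind the paper's proposal, shown for a single full-sample right-tailed Dickey-Fuller statistic rather than the recursive SADF/GSADF: fit an autoregression to the differences, resample its residuals, rebuild unit-root samples, and compare statistics. AR order, sample size, and innovation process are illustrative assumptions (Python/numpy).

      import numpy as np

      rng = np.random.default_rng(5)

      def df_stat(y):
          # right-tailed DF regression: dy_t = a + b*y_{t-1} + e_t, t-stat on b
          dy, ylag = np.diff(y), y[:-1]
          X = np.column_stack([np.ones(ylag.size), ylag])
          b, *_ = np.linalg.lstsq(X, dy, rcond=None)
          e = dy - X @ b
          cov = (e @ e / (dy.size - 2)) * np.linalg.inv(X.T @ X)
          return b[1] / np.sqrt(cov[1, 1])

      def sieve_bootstrap_pvalue(y, B=499):
          dy = np.diff(y)
          phi = (dy[1:] @ dy[:-1]) / (dy[:-1] @ dy[:-1])   # sieve step: AR(1) on differences
          u = dy[1:] - phi * dy[:-1]
          u = u - u.mean()
          stat, count = df_stat(y), 0
          for _ in range(B):
              ustar = rng.choice(u, size=u.size, replace=True)
              dystar = np.zeros(u.size)
              for t in range(1, u.size):                   # rebuild differences under the null
                  dystar[t] = phi * dystar[t - 1] + ustar[t]
              ystar = np.concatenate([[y[0]], y[0] + np.cumsum(dystar)])
              count += df_stat(ystar) >= stat
          return (1 + count) / (1 + B)

      innov = np.zeros(300)                # unit root with autocorrelated innovations
      for t in range(1, 300):
          innov[t] = 0.5 * innov[t - 1] + rng.normal()
      print(sieve_bootstrap_pvalue(np.cumsum(innov)))      # should not signal explosiveness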
  12. By: Grammig, Joachim; Küchlin, Eva-Maria
    Abstract: The long-run consumption risk model provides a theoretically appealing explanation for prominent asset pricing puzzles, but its intricate structure presents a challenge for econometric analysis. This paper proposes a two-step indirect inference approach that disentangles the estimation of the model's macroeconomic dynamics from that of the investor's preference parameters. A Monte Carlo study explores the feasibility and efficiency of the estimation strategy. We apply the method to recent U.S. data and provide a critical re-assessment of the long-run risk model's ability to reconcile the real economy and financial markets. The two-step indirect inference approach is potentially useful for the econometric analysis of other prominent consumption-based asset pricing models that are equally difficult to estimate. [An illustrative code sketch follows this entry.]
    Keywords: indirect inference estimation, asset pricing, long-run risk
    JEL: C58 G10 G12
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:cfrwps:1701&r=ecm
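    Sketch: The indirect inference principle the paper builds on: choose structural parameters so that auxiliary-model estimates from simulated data match those from observed data. The toy structural model here is an MA(1) with an AR(3) auxiliary model, not the long-run risk model itself (Python/scipy).

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(6)

      def simulate_ma1(theta, n, rng):
          e = rng.normal(size=n + 1)
          return e[1:] + theta * e[:-1]

      def auxiliary(y, p=3):
          # AR(p) coefficients by least squares: the auxiliary "binding function"
          Y = y[p:]
          X = np.column_stack([y[p - j - 1:-j - 1] for j in range(p)])
          b, *_ = np.linalg.lstsq(X, Y, rcond=None)
          return b

      y_obs = simulate_ma1(0.5, 2000, rng)        # pretend these are the data
      beta_obs = auxiliary(y_obs)

      def distance(theta):
          # common random numbers: the same seed is reused across candidate thetas
          sim = simulate_ma1(theta, 20_000, np.random.default_rng(7))
          return np.sum((auxiliary(sim) - beta_obs) ** 2)

      fit = minimize_scalar(distance, bounds=(-0.95, 0.95), method="bounded")
      print(fit.x)                                # close to the true theta = 0.5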
  13. By: Saint-Cyr, Legrand D. F.
    Abstract: This article investigates the impact of agricultural policies on structural change in farming. Since not all farmers may behave alike, a non-stationary mixed-Markov chain modeling (M-MCM) approach is applied to capture unobserved heterogeneity in the transition process of farms. A multinomial logit specification is used for the transition probabilities, and the parameters are estimated by maximum likelihood using the Expectation-Maximization (EM) algorithm. An empirical application to an unbalanced panel dataset from 2000 to 2013 shows that French farming mainly consists of a mixture of two farm types characterized by specific transition processes. The main finding is that the impact of farm subsidies from both pillars of the Common Agricultural Policy (CAP) depends strongly on the farm type. A comparison between the non-stationary M-MCM and a homogeneous non-stationary MCM shows that the latter model leads to either overestimation or underestimation of the impact of agricultural policy on change in farm size. This suggests that more attention should be paid to both observed and unobserved farm heterogeneity in assessing the impact of agricultural policy on structural change in farming. [An illustrative code sketch follows this entry.]
    Keywords: agricultural policy, EM algorithm, farm structural change, mixed-Markov chain model, multinomial logit, unobserved heterogeneity
    JEL: Q12 Q18 C38 C51
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:rae:wpaper:201704&r=ecm
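    Sketch: EM estimation of a two-type mixture of Markov chains, the M-MCM idea applied here to generic size transitions. For brevity the chains are stationary with unrestricted transition matrices; the paper's version is non-stationary with multinomial logit transition probabilities (Python/numpy).

      import numpy as np

      rng = np.random.default_rng(8)
      S, K, N, T = 3, 2, 400, 10                  # states, types, units, periods

      # simulate units from two distinct true transition matrices
      P_true = np.array([[[.8, .15, .05], [.1, .8, .1], [.05, .15, .8]],
                         [[.4, .4, .2], [.3, .4, .3], [.2, .4, .4]]])
      types = rng.choice(K, size=N)
      seqs = np.zeros((N, T), dtype=int)
      for i in range(N):
          for t in range(1, T):
              seqs[i, t] = rng.choice(S, p=P_true[types[i], seqs[i, t - 1]])

      pi = np.full(K, 1 / K)                      # type shares
      P = rng.dirichlet(np.ones(S), size=(K, S))  # random starting transition matrices
      for _ in range(100):
          # E-step: posterior type probabilities per unit (log scale for stability)
          logw = np.log(pi) + np.array(
              [[np.log(P[k, seqs[i, :-1], seqs[i, 1:]]).sum() for k in range(K)]
               for i in range(N)])
          w = np.exp(logw - logw.max(axis=1, keepdims=True))
          w /= w.sum(axis=1, keepdims=True)
          # M-step: type shares and weighted transition counts
          pi = w.mean(axis=0)
          counts = np.zeros((K, S, S))
          for i in range(N):
              for a, b in zip(seqs[i, :-1], seqs[i, 1:]):
                  counts[:, a, b] += w[i]
          P = counts / counts.sum(axis=2, keepdims=True)

      print(np.round(P, 2))   # recovers the two regimes (up to label switching)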
  14. By: Erard, Brian
    Abstract: Often providers of a program or a service have detailed information about their clients, but only very limited information about potential clients. Likewise, ecologists frequently have extensive knowledge regarding habitats where a given animal or plant species is known to be present, but lack comparable information on habitats where it is certain not to be present. In epidemiology, comprehensive information is routinely collected about patients who have been diagnosed with a given disease; however, commensurate information may not be available for individuals who are known to be free of the disease. While it may be highly beneficial to learn about the determinants of participation (in a program or service) or presence (in a habitat or of a disease), the lack of a comparable sample of observations on subjects that are not participants (or not present) precludes the application of standard qualitative response models, such as logit or probit. In this paper, we present some new qualitative response estimators that can be applied by combining information from a primary sample of participants with a general sample from the overall population. Our new estimators rival the best existing estimators for use-control sampling. Furthermore, these new estimators can be applied to stratified samples even when the stratification criteria are unknown, and they are readily generalized to accommodate polychotomous response problems. [An illustrative code sketch follows this entry.]
    Keywords: Qualitative response, Probit, Logit, Case Control Sampling, Use Control Sampling, Presence Pseudo-Absence Sampling, Contaminated Controls, Supplementary Sampling, Prevalence, Take-Up, Habitat Selection
    JEL: C1 C13 C18 C4 C51
    Date: 2017–06–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:79927&r=ecm
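    Sketch: The sampling design the paper addresses: a sample of known participants plus a general-population sample with unknown outcomes ("contaminated controls"). A naive logit of sample membership recovers the participation slope only approximately, and only when participation is rare; the paper's calibrated estimators are designed to do better. All numbers are simulated for illustration (Python/scipy).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(10)
      N = 200_000
      x = rng.normal(size=N)
      p = 1 / (1 + np.exp(-(-4.0 + 1.0 * x)))     # rare participation, true slope 1
      y = rng.uniform(size=N) < p

      x_part = rng.choice(x[y], size=1000)        # primary sample of participants
      x_pop = rng.choice(x, size=1000)            # general population sample
      xx = np.concatenate([x_part, x_pop])
      m = np.concatenate([np.ones(1000), np.zeros(1000)])   # sample-membership indicator

      def negll(b):                               # logit of membership on x
          eta = b[0] + b[1] * xx
          return np.sum(np.log1p(np.exp(eta)) - m * eta)

      fit = minimize(negll, x0=np.zeros(2), method="BFGS")
      print(fit.x[1])   # slope estimate, roughly the true value 1 while p is small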

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.