nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒10‒27
23 papers chosen by
Sune Karlsson
Orebro University

  1. Bayesian Nonparametric Instrumental Variable Regression based on Penalized Splines and Dirichlet Process Mixtures By Manuel Wiesenfarth; Carlos Matías Hisgen; Thomas Kneib; Carmen Cadarso-Suarez
  2. Maximum likelihood estimation and inference for approximate factor models of high dimension By Bai, Jushan; Li, Kunpeng
  3. A simple test for the equality of integration orders. By Javier Hualde
  4. Semiparametric Estimation of a Sample Selection Model in the Presence of Endogeneity By Schwiebert, Jörg
  5. Bias Transmission and Variance Reduction in Two-Stage Quantile Regression By Tae-Hwan Kim; Christophe Muller
  6. Semiparametric Estimation of a Binary Choice Model with Sample Selection By Schwiebert, Jörg
  7. A tractable estimator for general mixed multinomial logit models By Jonathan James
  8. Nonparametric estimation of Value-at-Risk By Ramon Alemany; Catalina Bolancé; Montserrat Guillén
  9. Estimation of the Marginal Expected Shortfall: The Mean when a Related Variable is Extreme By Cai, J.; Einmahl, J.H.J.; Haan, L.F.M. de; Zhou, C.
  10. Imputing Individual Effects in Dynamic Microsimulation Models. An application of the Rank Method By Matteo Richiardi; Ambra Poggi
  11. Strong Consistency of the Least-Squares Estimator in Simple Regression Models with Stochastic Regressors By Norbert Christopeit; Michael Massmann
  12. Analyzing the Composition of the Female Workforce - A Semiparametric Copula Approach By Schwiebert, Jörg
  13. A Detailed Decomposition for Limited Dependent Variable Models By Schwiebert, Jörg
  14. Robust Standard Errors in Small Samples: Some Practical Advice By Guido W. Imbens; Michal Kolesar
  15. A Copula Based Bayesian Approach for Paid-Incurred Claims Models for Non-Life Insurance Reserving By Gareth W. Peters; Alice X. D. Dong; Robert Kohn
  16. A proxy approach to dealing with the infeasibility problem in super-efficiency data envelopment analysis By Cheng, Gang; Zervopoulos, Panagiotis
  17. Capital asset pricing model with fuzzy returns and hypothesis testing By Alfred Mbairadjim Moussa; Jules Sadefo Kamdem; Arnold F. Shapiro; Michel Terraza
  18. Non-Parametric Stochastic Simulations to Investigate Uncertainty around the OECD Indicator Model Forecasts By Elena Rusticelli
  19. On the empirical importance of periodicity in the volatility of financial time series By Blazej Mazur; Mateusz Pipien
  20. SEMIFARMA-HYGARCH Modeling of Dow Jones Return Persistence By Mohamed Chikhi; Anne Péguin-Feissolle; Michel Terraza
  21. Comparing Parametric and Nonparametric Regression Methods for Panel Data: the Optimal Size of Polish Crop Farms By Tomasz Gerard Czekaj; Arne Henningsen
  22. Concept-Based Bayesian Model Averaging and Growth Empirics By Magnus, J.R.; Wang, W.
  23. Inflated Ordered Outcomes By Robert Brooks; Mark N. Harris; Christopher Spencer

  1. By: Manuel Wiesenfarth (University of Mannheim); Carlos Matías Hisgen (Universidad Nacional del Nordeste, Argentina); Thomas Kneib (Georg-August-University Göttingen); Carmen Cadarso-Suarez (University of Santiago de Compostela)
    Abstract: We propose a Bayesian nonparametric instrumental variable approach that allows us to correct for endogeneity bias in regression models where the covariate effects enter with unknown functional form. Bias correction relies on a simultaneous equations specification with flexible modeling of the joint error distribution implemented via a Dirichlet process mixture prior. Both the structural and instrumental variable equations are specified in terms of additive predictors comprising penalized splines for nonlinear effects of continuous covariates. Inference is fully Bayesian, employing efficient Markov Chain Monte Carlo simulation techniques. The resulting posterior samples not only provide point estimates but also allow us to construct simultaneous credible bands for the nonparametric effects, including data-driven smoothing parameter selection. In addition, improved robustness properties are achieved due to the flexible error distribution specification. Both features are extremely challenging to obtain in the classical framework, making the Bayesian approach advantageous. In simulations, we investigate small sample properties, and an investigation of the effect of class size on student performance in Israel illustrates the proposed approach, which is implemented in the R package bayesIV.
    Keywords: Endogeneity; Markov Chain Monte Carlo methods; Simultaneous credible bands
    Date: 2012–10–15
    URL: http://d.repec.org/n?u=RePEc:got:gotcrc:127&r=ecm
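    Read literally, the abstract describes a two-equation additive system with a Dirichlet process mixture on the joint errors; one plausible rendering (notation ours, not taken from the paper) is
        \[
          y_i = f(v_i) + \sum_j s_j(x_{ji}) + \varepsilon_{1i}, \qquad
          v_i = g(w_i) + \sum_j t_j(x_{ji}) + \varepsilon_{2i},
        \]
        \[
          (\varepsilon_{1i}, \varepsilon_{2i})' \sim \sum_{k=1}^{\infty} \pi_k \, N_2(\mu_k, \Sigma_k),
        \]
    where v_i is the endogenous covariate, w_i the instrument, f, g, s_j, t_j are penalized splines, and the mixture weights and atoms carry a Dirichlet process prior.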
  2. By: Bai, Jushan; Li, Kunpeng
    Abstract: An approximate factor model of high dimension has two key features. First, the idiosyncratic errors are correlated and heteroskedastic over both the cross-section and time dimensions; the correlations and heteroskedasticities are of unknown form. Second, the number of variables is comparable to or even greater than the sample size. Thus a large number of parameters exist under a high dimensional approximate factor model. Most widely used estimation approaches are based on principal components. This paper considers maximum likelihood-based estimation of the model. Consistency, rate of convergence, and limiting distributions are obtained under various identification restrictions. A comparison with the principal component method is made. The likelihood-based estimators are more efficient than the principal component based ones. Monte Carlo simulations show the method is easy to implement, and an application to the U.S. yield curves is considered.
    Keywords: Factor analysis; Approximate factor models; Maximum likelihood; Kalman smoother; Principal components; Inferential theory
    JEL: C51 C33
    Date: 2012–01–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:42099&r=ecm
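    For reference, the approximate factor model described above is conventionally written as (notation assumed here)
        \[
          X_{it} = \lambda_i' F_t + e_{it}, \qquad i = 1, \dots, N, \; t = 1, \dots, T,
        \]
    with r common factors F_t, loadings \lambda_i, and idiosyncratic errors e_{it} that may be correlated and heteroskedastic across both i and t; "high dimension" means that N is comparable to, or larger than, T. The paper studies the (quasi) maximum likelihood estimator of this model rather than the principal components estimator.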
  3. By: Javier Hualde (Departamento de Economía-UPNA)
    Abstract: A necessary condition for two time series to be nontrivially cointegrated is the equality of their respective integration orders. Thus, it is standard practice to test for order homogeneity prior to testing for cointegration. Tests for the equality of integration orders are particular cases of more general tests of linear restrictions among memory parameters of different time series, for which asymptotic theory has been developed in parametric and semiparametric settings. However, most tests have been developed in stationary and invertible settings, and, more importantly, many of them are invalid when the observables are cointegrated, because they usually involve inversion of an asymptotically singular matrix. We propose a general testing procedure which does not suffer from this serious drawback, is very simple to compute, covers the stationary/nonstationary and invertible/noninvertible ranges, and, as we show in a Monte Carlo experiment, works well in finite samples.
    Keywords: integration orders, fractional differencing, fractional cointegration.
    JEL: C32
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:nav:ecupna:1206&r=ecm
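    In the notation commonly used for this problem (not taken from the paper), with observables x_t ~ I(\delta_x) and y_t ~ I(\delta_y), the test concerns
        \[
          H_0: \delta_x = \delta_y \quad \text{against} \quad H_1: \delta_x \neq \delta_y,
        \]
    equality of the integration orders being a necessary condition for the pair to be nontrivially cointegrated.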
  4. By: Schwiebert, Jörg
    Abstract: In this paper, we derive a semiparametric estimation procedure for the sample selection model when some covariates are endogenous. Our approach is to augment the main equation of interest with a control function which accounts for sample selectivity as well as endogeneity of covariates. In contrast to existing methods proposed in the literature, our approach allows the same endogenous covariates to enter the main and the selection equation. We show that our proposed estimator is √n-consistent and derive its asymptotic distribution. We provide Monte Carlo evidence on the small sample behavior of our estimator and present an empirical application. Finally, we briefly consider an extension of our model to quantile regression settings and provide guidelines for estimation.
    Keywords: Sample selection model, semiparametric estimation, endogenous covariates, control function approach, quantile regression
    JEL: C21 C24 C26
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-504&r=ecm
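    A generic control-function representation of such a model (our notation; the paper's identifying assumptions and correction terms differ in detail) is
        \[
          y_i = x_i'\beta + u_i \ \text{observed only if } s_i = 1, \qquad
          E[y_i \mid x_i, s_i = 1, r_i] = x_i'\beta + \psi(r_i),
        \]
    where r_i collects control functions built from the selection equation and the first-stage residuals of the endogenous covariates, and \psi(\cdot) is left unspecified and estimated semiparametrically.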
  5. By: Tae-Hwan Kim (Yonsei University, School of Economics); Christophe Muller (Aix-Marseille Université)
    Abstract: In this paper, we propose a variance reduction method for quantile regressions with endogeneity problems. First, we derive the asymptotic distribution of two-stage quantile estimators based on the fitted-value approach under very general conditions on both error terms and exogenous variables. Second, we exhibit a bias transmission property derived from the asymptotic representation of our estimator. Third, using a reformulation of the dependent variable, we improve the efficiency of the two-stage quantile estimators by exploiting a trade-off between an asymptotic bias confined to the intercept estimator and a reduction of the variance of the slope estimator. Monte Carlo simulation results show the excellent performance of our approach. In particular, by combining quantile regressions with first-stage trimmed least-squares estimators, we obtain more accurate slope estimates than 2SLS, 2SLAD and other estimators for a broad range of distributions.
    Keywords: Two-Stage Estimation, Variance Reduction, Quantile Regression, Asymptotic Bias.
    JEL: C13 C30
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:1221&r=ecm
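    A minimal sketch of the fitted-value approach mentioned above, using statsmodels (this is only the plain two-stage estimator; the paper's reformulation of the dependent variable and the resulting variance reduction are not implemented here):

        import numpy as np
        import statsmodels.api as sm

        def two_stage_quantile(y, d, Z, q=0.5):
            """Fitted-value two-stage quantile regression:
            stage 1: OLS of the endogenous regressor d on the instruments Z;
            stage 2: quantile regression of y on the stage-1 fitted values."""
            Z = sm.add_constant(Z)
            d_hat = sm.OLS(d, Z).fit().fittedvalues   # first-stage fitted values
            X2 = sm.add_constant(d_hat)
            return sm.QuantReg(y, X2).fit(q=q)        # second-stage quantile fit

        # illustrative use with simulated data:
        # rng = np.random.default_rng(0)
        # z = rng.normal(size=500); d = z + rng.normal(size=500)
        # y = 1 + 2 * d + rng.normal(size=500)
        # print(two_stage_quantile(y, d, z, q=0.5).params)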
  6. By: Schwiebert, Jörg
    Abstract: In this paper we provide semiparametric estimation strategies for a sample selection model with a binary dependent variable. To the best of our knowledge, this has not been done before. We propose a control function approach based on two different identifying assumptions. This gives rise to semiparametric estimators which are extensions of the Klein and Spady (1993), maximum score (Manski, 1975) and smoothed maximum score (Horowitz, 1992) estimators. We provide Monte Carlo evidence and an empirical example to study the finite sample properties of our estimators. Finally, we outline an extension of these estimators to the case of endogenous covariates.
    Keywords: Sample selection model, binary dependent variable, semiparametric estimation, control function approach, endogenous covariates
    JEL: C21 C24 C25 C26
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-505&r=ecm
  7. By: Jonathan James
    Abstract: The mixed logit is a framework for incorporating unobserved heterogeneity in discrete choice models in a general way. These models are difficult to estimate because they result in a complicated incomplete data likelihood. This paper proposes a new approach for estimating mixed logit models. The estimator is easily implemented as iteratively re-weighted least squares: the well known solution for complete data likelihood logits. The main benefit of this approach is that it requires drastically fewer evaluations of the simulated likelihood function, making it significantly faster than conventional methods that rely on numerically approximating the gradient. The method is rooted in a generalized expectation and maximization (GEM) algorithm, so it is asymptotically consistent, efficient, and globally convergent.
    Keywords: Econometrics ; Econometric models
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:12-19&r=ecm
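    For context, the "complicated incomplete data likelihood" referred to above is the simulated mixed logit log-likelihood; a plain Monte Carlo version (the textbook objective, not the paper's IRLS/GEM estimator) looks like this:

        import numpy as np

        def simulated_mixed_logit_loglik(beta_mean, beta_sd, X, y, n_draws=200, seed=0):
            """Simulated log-likelihood of a mixed logit with independent normal
            random coefficients. X: (N, J, K) alternative attributes, y: (N,)
            index of the chosen alternative."""
            rng = np.random.default_rng(seed)
            N, J, K = X.shape
            ll = 0.0
            for i in range(N):
                draws = beta_mean + beta_sd * rng.standard_normal((n_draws, K))
                util = X[i] @ draws.T                    # (J, n_draws) utilities
                util -= util.max(axis=0, keepdims=True)  # numerical stability
                probs = np.exp(util) / np.exp(util).sum(axis=0, keepdims=True)
                ll += np.log(probs[y[i]].mean())         # average over draws
            return ll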
  8. By: Ramon Alemany (Department of Econometrics, Riskcenter-IREA, University of Barcelona,Av. Diagonal, 690, 08034 Barcelona, Spain); Catalina Bolancé (Department of Econometrics, Riskcenter-IREA, University of Barcelona,Av. Diagonal, 690, 08034 Barcelona, Spain); Montserrat Guillén (Department of Econometrics, Riskcenter-IREA, University of Barcelona,Av. Diagonal, 690, 08034 Barcelona, Spain)
    Abstract: A method to estimate an extreme quantile that requires no distributional assumptions is presented. The approach is based on transformed kernel estimation of the cumulative distribution function (cdf). The proposed method consists of a double transformation kernel estimation. We derive optimal bandwidth selection methods that have a direct expression for the smoothing parameter. The bandwidth can accommodate to the given quantile level. The procedure is useful for large data sets and improves quantile estimation compared to other methods in heavy tailed distributions. Implementation is straightforward and R programs are available.
    Keywords: kernel estimation, bandwidth selection, quantile, risk measures.
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap2012-19&r=ecm
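    A bare-bones version of kernel quantile estimation, to fix ideas (a plain Gaussian-kernel cdf with a rule-of-thumb bandwidth; the double transformation and the optimal bandwidth selection developed in the paper are not reproduced):

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        def kernel_cdf(x, data, bandwidth):
            """Gaussian-kernel estimate of the cdf at point x."""
            return norm.cdf((x - data) / bandwidth).mean()

        def kernel_var(data, level=0.99, bandwidth=None):
            """Value-at-Risk as the `level` quantile of the smoothed cdf."""
            data = np.asarray(data, dtype=float)
            if bandwidth is None:                    # Silverman-type rule of thumb
                bandwidth = 1.06 * data.std() * len(data) ** (-1 / 5)
            lo = data.min() - 10 * bandwidth
            hi = data.max() + 10 * bandwidth
            return brentq(lambda x: kernel_cdf(x, data, bandwidth) - level, lo, hi)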
  9. By: Cai, J.; Einmahl, J.H.J.; Haan, L.F.M. de; Zhou, C. (Tilburg University, Center for Economic Research)
    Abstract: Denote the loss return on the equity of a financial institution as X and that of the entire market as Y. For a given very small value of p > 0, the marginal expected shortfall (MES) is defined as E(X | Y > Q_Y(1−p)), where Q_Y(1−p) is the (1−p)-th quantile of the distribution of Y. The MES is an important factor when measuring the systemic risk of financial institutions. For a wide nonparametric class of bivariate distributions, we construct an estimator of the MES and establish the asymptotic normality of the estimator when p ↓ 0, as the sample size n → ∞. Since we are in particular interested in the case p = O(1/n), we use extreme value techniques for deriving the estimator and its asymptotic behavior. The finite sample performance of the estimator and the adequacy of the limit theorem are shown in a detailed simulation study. We also apply our method to estimate the MES of three large U.S. investment banks.
    Keywords: Asymptotic normality; extreme values; tail dependence
    JEL: C13 C14
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2012080&r=ecm
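    The natural empirical counterpart of E(X | Y > Q_Y(1−p)) simply averages X over the observations for which Y exceeds its sample (1−p) quantile; the sketch below shows it, and also why extreme value techniques are needed once p is of order 1/n and almost no observations lie above the threshold:

        import numpy as np

        def empirical_mes(x, y, p):
            """Naive empirical marginal expected shortfall: average of x on the
            days when y exceeds its (1 - p) sample quantile. Breaks down when p
            is of order 1/n, which is the case the paper's estimator targets."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            threshold = np.quantile(y, 1 - p)
            return x[y > threshold].mean()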
  10. By: Matteo Richiardi; Ambra Poggi
    Abstract: Dynamic microsimulation modeling involves two stages: estimation and forecasting. Unobserved heterogeneity is often considered in estimation, but not in forecasting, beyond trivial cases. Non-trivial cases involve individuals that enter the simulation with a history of previous outcomes. We show that the simple solutions of attributing to these individuals a null effect or a random draw from the estimated unconditional distributions lead to biased forecasts, which are often worse than those obtained neglecting unobserved heterogeneity altogether. We then present a first implementation of the Rank method, a new algorithm for attributing the individual effects to the simulation sample which greatly simplifies those already known in the literature. Out-of-sample validation of our model shows that correctly imputing unobserved heterogeneity significantly improves the quality of the forecasts.
    Keywords: Dynamic microsimulation, Unobserved heterogeneity, Validation, Rank method, Assignment algorithms, Female labor force participation, Italy
    JEL: C53 C18 C23 C25 J11 J12 J21
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:cca:wpaper:267&r=ecm
  11. By: Norbert Christopeit (University of Bonn); Michael Massmann (VU University Amsterdam)
    Abstract: Strong consistency of least squares estimators of the slope parameter in simple linear regression models is established for predetermined stochastic regressors. The main result covers a class of models which falls outside the applicability of what is presently available in the literature. An application to the identification of economic models with adaptive learning is discussed.
    Keywords: linear regression; least-squares; consistency; stochastic regressors; adaptive learning; decreasing gain
    JEL: C13 C22 D83 D84
    Date: 2012–10–12
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20120109&r=ecm
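    The object of study, in standard notation (assumed here), is
        \[
          y_t = \beta x_t + \varepsilon_t, \qquad
          \hat{\beta}_n = \frac{\sum_{t=1}^{n} x_t y_t}{\sum_{t=1}^{n} x_t^2},
        \]
    with x_t a predetermined stochastic regressor; the paper gives conditions under which \hat{\beta}_n \to \beta almost surely in cases not covered by existing results.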
  12. By: Schwiebert, Jörg
    Abstract: We provide a semiparametric copula approach for estimating a "classical" sample selection model. We impose that the joint distribution function of the unobservables can be characterized by a specific copula, while the marginal distribution functions are estimated semiparametrically. In contrast to existing semiparametric estimators for sample selection models, our approach provides a measure of dependence between the unobservables in the main and selection equations which can be used to analyze the composition of, say, the female workforce. We apply our estimation procedure to a female labor supply data set and show that those women with the best skills participate in the labor market; moreover, we find evidence for an ability threshold which implies that women with high ability are to some extent advantaged and have therefore also obtained the best skills.
    Keywords: Sample selection model, semiparametric estimation, copula approach, composition of the female workforce, female labor force participation
    JEL: C21 C24 J21 J31
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-503&r=ecm
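    The copula assumption in the abstract means that the joint distribution of the unobservables (u, v) in the main and selection equations is written as
        \[
          F_{u,v}(a, b) = C_\theta\big(F_u(a), F_v(b)\big),
        \]
    where C_\theta is a parametric copula whose dependence parameter \theta is estimated jointly with semiparametric marginals F_u and F_v; \theta is the dependence measure used to characterize the composition of the workforce.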
  13. By: Schwiebert, Jörg
    Abstract: In this paper, we consider a detailed decomposition method for limited dependent variable models. That is, we propose a method to decompose the differential in the (limited dependent) outcome variable between two groups into the contributions of the explanatory variables. We provide a theoretical derivation of the detailed decomposition and show how this decomposition can be estimated consistently. In contrast to decomposition approaches already presented in the literature, our method leads to a unique decomposition and accounts for the nonlinearity of the underlying econometric model in a rather intuitive way. Our results can be applied to the most common limited dependent variable models such as probit, logit and tobit models.
    Keywords: Decomposition methods, detailed decomposition, explained differential, limited dependent variable models, probit, logit, tobit
    JEL: C40 J31 J70
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-506&r=ecm
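    For orientation, the aggregate (twofold) decomposition for a limited dependent variable with link function G is usually written as
        \[
          \bar{y}_A - \bar{y}_B =
          \big\{ E_A[G(x'\beta_A)] - E_A[G(x'\beta_B)] \big\}
          + \big\{ E_A[G(x'\beta_B)] - E_B[G(x'\beta_B)] \big\},
        \]
    a coefficients term plus an explained (composition) term; the contribution of the paper is to split the explained term uniquely into per-covariate components despite the nonlinearity of G. (Notation assumed, not taken from the paper.)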
  14. By: Guido W. Imbens; Michal Kolesar
    Abstract: In this paper we discuss the properties of confidence intervals for regression parameters based on robust standard errors. We discuss the motivation for a modification suggested by Bell and McCaffrey (2002) to improve the finite sample properties of the confidence intervals based on the conventional robust standard errors. We show that the Bell-McCaffrey modification is the natural extension of a principled approach to the Behrens-Fisher problem, and suggest a further improvement for the case with clustering. We show that these standard errors can lead to substantial improvements in coverage rates even for sample sizes of fifty and more. We recommend researchers calculate the Bell-McCaffrey degrees-of-freedom adjustment to assess potential problems with conventional robust standard errors and use the modification as a matter of routine.
    JEL: C01
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18478&r=ecm
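    In practice, the conventional heteroskedasticity-robust and cluster-robust standard errors discussed above are one line in statsmodels; the Bell-McCaffrey degrees-of-freedom adjustment itself is not shown here and would have to be coded separately:

        import statsmodels.api as sm

        def robust_ols(y, X, groups=None):
            """OLS with HC2 robust standard errors, or cluster-robust standard
            errors when a group indicator is supplied (conventional versions
            only; no Bell-McCaffrey degrees-of-freedom correction)."""
            X = sm.add_constant(X)
            model = sm.OLS(y, X)
            if groups is None:
                return model.fit(cov_type="HC2")
            return model.fit(cov_type="cluster", cov_kwds={"groups": groups})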
  15. By: Gareth W. Peters; Alice X. D. Dong; Robert Kohn
    Abstract: Our article considers the class of recently developed stochastic models that combine claims payments and incurred losses information into a coherent reserving methodology. In particular, we develop a family of Hierarchical Bayesian Paid-Incurred-Claims models, combining the claims reserving models of Hertig et al. (1985) and Gogol et al. (1993). In the process we extend the independent log-normal model of Merz et al. (2010) by incorporating different dependence structures using a Data-Augmented mixture Copula Paid-Incurred-Claims model. The utility and influence of incorporating both payment and incurred losses into the estimation of the full predictive distribution of the outstanding loss liabilities and the resulting reserves is demonstrated in the following cases: (i) an independent payment (P) data model; (ii) the independent Payment-Incurred Claims (PIC) data model of Merz et al. (2010); (iii) a novel dependent lag-year telescoping block diagonal Gaussian Copula PIC data model incorporating conjugacy via transformation; (iv) a novel data-augmented mixture Archimedean copula dependent PIC data model. Inference in such models is developed via a class of adaptive Markov chain Monte Carlo sampling algorithms. These incorporate a data-augmentation framework utilized to efficiently evaluate the likelihood for the copula based PIC model in the loss reserving triangles. The adaptation strategy is based on representing a positive definite covariance matrix by the exponential of a symmetric matrix, as proposed by Leonard et al. (1992).
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1210.3849&r=ecm
  16. By: Cheng, Gang; Zervopoulos, Panagiotis
    Abstract: Super-efficiency data envelopment analysis (SE-DEA) models are expressions of the traditional DEA models featuring the exclusion of the unit under evaluation from the reference set. The SE-DEA models have been applied in various cases such as sensitivity and stability analysis, measurement of productivity changes, outliers’ identification, and classification and ranking of decision making units (DMUs). A major deficiency of the SE-DEA models is their infeasibility in determining super-efficiency scores for some efficient DMUs when variable, non-increasing or non-decreasing returns to scale (VRS, NIRS, NDRS) prevail. The scope of this study is the development of an oriented proxy approach for SE-DEA models in order to tackle the infeasibility problem. The proxy introduced to the SE-DEA models replaces the original infeasible DMU in the sample and guarantees a feasible optimal solution. The proxy approach yields the same scores as the traditional SE-DEA models for the feasible DMUs.
    Keywords: Data envelopment analysis (DEA); Super-efficiency (SE); Infeasibility; Orientation
    JEL: C02 C61 C67
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:42064&r=ecm
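    To make the setting concrete, an input-oriented super-efficiency score under constant returns to scale can be computed with a small linear program (a generic textbook formulation, not the proxy approach of the paper; under VRS an additional convexity constraint sum(lambda) = 1 is imposed, and it is that model which can become infeasible):

        import numpy as np
        from scipy.optimize import linprog

        def super_efficiency_crs(inputs, outputs, k):
            """Input-oriented CRS super-efficiency of DMU k: minimise theta such
            that the remaining DMUs can produce DMU k's outputs using at most
            theta times its inputs. inputs: (n, m) array, outputs: (n, s) array."""
            X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
            n, m = X.shape
            s = Y.shape[1]
            others = [j for j in range(n) if j != k]
            c = np.r_[1.0, np.zeros(len(others))]           # variables: theta, lambda_j
            A_in = np.c_[-X[k].reshape(m, 1), X[others].T]  # sum lambda_j x_ij <= theta x_ik
            A_out = np.c_[np.zeros((s, 1)), -Y[others].T]   # sum lambda_j y_rj >= y_rk
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[k]],
                          bounds=[(0, None)] * (1 + len(others)), method="highs")
            return res.fun if res.success else None         # None if infeasible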
  17. By: Alfred Mbairadjim Moussa; Jules Sadefo Kamdem; Arnold F. Shapiro; Michel Terraza
    Abstract: Over the last four decades, several estimation issues concerning the beta have been discussed extensively in many articles. An emerging consensus is that betas are time-dependent and their estimates are affected by the return interval and the length of the estimation period. These findings bear directly on the practical implementation of the Capital Asset Pricing Model. Our goal in this paper is two-fold: after studying the impact of the return interval on the beta estimates, we analyze the sample size effects on the preceding estimation. Working in the framework of fuzzy set theory, we first associate the returns based on closing prices with the intraperiod volatility, representing them by means of a fuzzy random variable in order to incorporate into the analysis the effect of the interval period over which the returns are measured. Next, we use these fuzzy returns to estimate the beta via the fuzzy least squares method in order to deal efficiently with outliers in returns, often caused by structural breaks and regime switches in the asset prices. A bootstrap test is carried out to investigate whether there is a linear relationship between the market portfolio fuzzy return and the given asset fuzzy return. Finally, the empirical results on French stocks suggest that our beta estimates are more stable than the ordinary least squares (OLS) estimates when the return intervals and the sample size change.
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:lam:wpaper:12-33&r=ecm
  18. By: Elena Rusticelli
    Abstract: The forecasting uncertainty around point macroeconomic forecasts is usually measured by the historical performance of the forecasting model, using measures such as the root mean squared forecasting error (RMSE). This measure, however, has the major drawback that it is constant over time and hence conveys no information on the specific source of uncertainty or on the magnitude and balance of risks in the immediate conjuncture. Moreover, specific parametric assumptions on the probability distribution of forecasting errors are needed in order to draw confidence bands around point forecasts. This paper proposes an alternative time-varying simulated RMSE, obtained by means of non-parametric stochastic simulations, which combines the uncertainty around the model’s parameters and the structural error term to construct asymmetric confidence bands around point forecasts. The procedure is applied, by way of example, to the short-term real GDP growth forecasts generated by the OECD Indicator Model for Germany. The empirical probability distributions of the GDP growth forecasts, derived through the bootstrapping technique, allow the ex ante probability of, for example, a negative GDP growth forecast for the current quarter to be estimated. The results suggest the presence of peaks of higher uncertainty related to economic recession events, with a balance of risks which became negative in the immediate aftermath of the global financial crisis.
    Keywords: GDP, stochastic simulations, forecasting uncertainty, empirical probability distribution
    JEL: C12 C15 C53
    Date: 2012–07–27
    URL: http://d.repec.org/n?u=RePEc:oec:ecoaaa:979-en&r=ecm
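    The final step of the exercise, turning an empirical forecast distribution into an ex ante probability of negative growth, can be illustrated with a plain residual bootstrap (this ignores the parameter-uncertainty component that the paper also simulates):

        import numpy as np

        def bootstrap_forecast_distribution(point_forecast, past_errors, n_boot=10000, seed=0):
            """Empirical forecast distribution from a residual bootstrap:
            add resampled historical forecast errors to the point forecast."""
            rng = np.random.default_rng(seed)
            return point_forecast + rng.choice(past_errors, size=n_boot, replace=True)

        # e.g. probability of negative GDP growth in the current quarter:
        # draws = bootstrap_forecast_distribution(0.3, past_errors)
        # prob_negative = (draws < 0).mean()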
  19. By: Blazej Mazur (Economic Institute, National Bank of Poland and Department of Econometrics and Operations Research, Cracow University of Economics); Mateusz Pipien (Economic Institute, National Bank of Poland and Department of Econometrics and Operations Research, Cracow University of Economics)
    Abstract: We discuss the empirical importance of long term cyclical effects in the volatility of financial returns. Following Čížek and Spokoiny (2009), Amado and Teräsvirta (2012) and others, we consider a general conditionally heteroscedastic process with the stationarity property distorted by a deterministic function that governs the possible variability in time of the unconditional variance. The function proposed in this paper can be interpreted as a finite Fourier approximation of an Almost Periodic (AP) function as defined by Corduneanu (1989). The resulting model has a particular form of a GARCH process with time varying parameters, intensively discussed in the recent literature. In the empirical analyses we apply a generalisation of the Bayesian AR(1)-t-GARCH(1,1) model for daily returns of the S&P500, covering sixty years of the US postwar economy, including the recently observed global financial crisis. The results of a formal Bayesian model comparison clearly indicate the existence of significant long term cyclical patterns in volatility, with a strongly supported periodic component corresponding to a 14-year cycle. This may be interpreted as empirical evidence in favour of a linkage between the business cycle in the US economy and long term changes in the volatility of the basic stock market index.
    Keywords: Periodically correlated stochastic processes, GARCH models, Bayesian inference, volatility, unconditional variance
    JEL: C58 C11 G10
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:124&r=ecm
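    One way to write the deterministic component described above, as a finite Fourier approximation of an almost periodic function (our notation), is
        \[
          \sigma_t^2 = g(t)\, h_t, \qquad
          g(t) = \exp\!\Big( \sum_{k=1}^{K} \big[ a_k \cos(2\pi t/\lambda_k) + b_k \sin(2\pi t/\lambda_k) \big] \Big),
        \]
    where h_t follows a standard GARCH(1,1) recursion and the \lambda_k are cycle lengths; the 14-year component reported for the daily S&P500 corresponds to one such \lambda_k.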
  20. By: Mohamed Chikhi (Université de Ouargla and Université Montpellier I, Lameta); Anne Péguin-Feissolle (CNRS, Greqam); Michel Terraza (Université Montpellier I, Lameta)
    Abstract: This paper analyzes the cyclical behavior of the Dow Jones by testing for the existence of long memory through a new class of semiparametric ARFIMA models with HYGARCH errors (SEMIFARMA-HYGARCH); this class includes a nonparametric deterministic trend, a stochastic trend, short-range and long-range dependence, and long memory heteroscedastic errors. We study the daily returns of the Dow Jones from 1896 to 2006. We estimate several models and find that the coefficients of the SEMIFARMA-HYGARCH model, including the long memory coefficients in the equations for the mean and the conditional variance, are highly significant. The forecasting results show that informational shocks have permanent effects on volatility and that the SEMIFARMA-HYGARCH model performs better than several other models at long and/or short horizons. The predictions from this model are also better than those of the random walk model; accordingly, the weak-form efficiency assumption of financial markets appears to be violated for the Dow Jones returns studied over this long period.
    Keywords: SEMIFARMA model, HYGARCH model, nonparametric deterministic trend, kernel methodology, long memory.
    JEL: C14 C22 C58 G17
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:1214&r=ecm
  21. By: Tomasz Gerard Czekaj (Institute of Food and Resource Economics, University of Copenhagen); Arne Henningsen (Institute of Food and Resource Economics, University of Copenhagen)
    Abstract: We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test rejects both the Cobb-Douglas and the Translog functional form, while a recently developed nonparametric kernel regression method with a fully nonparametric panel data specification delivers plausible results. On average, the nonparametric regression results are similar to results that are obtained from the parametric estimates, although many individual results differ considerably. Moreover, the results from the parametric estimations even lead to incorrect conclusions regarding the technology and the optimal firm size.
    Keywords: production technology, nonparametric econometrics, panel data, Translog, firm size, Polish crop farms
    JEL: C14 C23 D24 Q12
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2012_12&r=ecm
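    The claim about the Translog form can be checked directly: with
        \[
          \ln y = \alpha_0 + \sum_i \alpha_i \ln x_i + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j ,
        \]
    the elasticity of scale is
        \[
          \epsilon(x) = \sum_i \frac{\partial \ln y}{\partial \ln x_i}
                      = \sum_i \alpha_i + \sum_i \sum_j \beta_{ij} \ln x_j ,
        \]
    which is linear in the logarithmic input quantities, and hence in logarithmic firm size along an expansion ray, with a slope governed by the same \beta_{ij} that determine input substitutability.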
  22. By: Magnus, J.R.; Wang, W. (Tilburg University, Center for Economic Research)
    Abstract: In specifying a regression equation, we need to determine which regressors to include, but also how these regressors are measured. This gives rise to two levels of uncertainty: concepts (level 1) and measurements within each concept (level 2). In this paper we propose a hierarchical weighted least squares (HWALS) method to address these uncertainties. We examine the effects of different growth theories taking into account the measurement problem in the growth regression. We find that estimates produced by HWALS provide intuitive and robust explanations. We also consider approximation techniques when the number of variables is large or when computing time is limited, and we propose possible strategies for sensitivity analysis.
    Keywords: Hierarchical model averaging; Growth determinants; Measurement problem
    JEL: C51 C52 C13 C11
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2012017&r=ecm
  23. By: Robert Brooks (Department of Econometrics and Business Statistics, Monash University, Melbourne, Australia); Mark N. Harris (Department of Econometrics and Quantitative Modelling, School of Economics and Finance, Curtin Business School, Curtin University, Perth, Australia, WA 6845); Christopher Spencer (School of Business and Economics, Loughborough University, UK)
    Abstract: We extend Harris and Zhao (2007) by proposing a (Panel) Inflated Ordered Probit model, and demonstrate its usefulness by applying it to Bank of England Monetary Policy Committee voting data.
    Keywords: Panel Inflated Ordered Probit, random effects, inflated outcomes, voting, Monetary Policy Committee
    JEL: E5 C3
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:lbo:lbowps:2012_09&r=ecm
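    In the spirit of Harris and Zhao (2007), an inflated ordered probit mixes a splitting equation with an ordered probit, so that for the inflated category m_0 (in the voting application, presumably the "no change" decision)
        \[
          \Pr(y_i = m) = \Pr(s_i = 0)\,\mathbf{1}\{m = m_0\} + \Pr(s_i = 1)\,\Pr(y_i^{OP} = m),
        \]
    where s_i is a latent binary regime indicator and y_i^{OP} follows a standard ordered probit; the paper extends this structure to panel data with random effects.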

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.