nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒07‒27
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. New Liu Estimators for the Poisson Regression Model: Method and Application By Månsson, Kristofer; Kibria, B. M. Golam; Sjölander, Pär; Shukur, Ghazi
  2. Finite population properties of predictors based on spatial patterns By Francesca Bruno; Daniela Cocchi; Alessandro Vagheggini
  3. Doubly fractional models for dynamic heteroskedastic cycles. By Miguel Artiach; Josu Arteche
  4. Modified Ridge Parameters for Seemingly Unrelated Regression Model By Zeebari , Zangin; Shukur , Ghazi; Kibria, B. M. Golam
  5. Inference on an Extended Roy Model, with an Application to Schooling Decisions in France By Arnaud Maurel; Xavier D'Haultfoeuille
  6. Nonparametric Estimators of Dose-Response Functions By BIA Michela; FLORES Carlos A.; MATTEI Alessandra
  7. Does the Box-Cox transformation help in forecasting macroeconomic time series? By Tommaso, Proietti; Helmut, Luetkepohl
  8. Wild bootstrap of the mean in the infinite variance case By Giuseppe Cavaliere; Iliyan Georgiev; A.M.Robert Taylor
  9. An Order-Theoretic Mixing Condition for Monotone Markov Chains By Takashi Kamihigashi; John Stachurski
  10. A New Method for Measuring Tail Exponents of Firm Size Distributions By Fujimoto, S.; Ishikawa, A.; Mizuno, T.; Watanabe, T.
  11. A Dynamic Model of Demand for Houses and Neighborhoods By Patrick Bayer; Robert McMillan; Alvin Murphy; Christopher Timmins
  12. On Kronecker Products, Tensor Products And Matrix Differential Calculus By Stephen Pollock
  13. Understanding and teaching unequal probability of selection By Raghav, Manu; Barreto, Humberto
  14. Wavelet multiple correlation and cross-correlation: A multiscale analysis of euro zone stock markets. By Javier Fernández-Macho
  15. The explicit Laplace transform for the Wishart process By Alessandro Gnoatto; Martino Grasselli
  16. Identification of Monetary Policy in SVAR Models: A Data-Oriented Perspective By Matteo Fragetta; Giovanni Melina

  1. By: Månsson, Kristofer (Jönköping University); Kibria, B. M. Golam (Florida International University); Sjölander, Pär (Jönköping University); Shukur, Ghazi (Linnaeus University)
    Abstract: A new shrinkage estimator for the Poisson model is introduced in this paper. The method generalises the Liu (1993) estimator, originally developed for the linear regression model, so that it can be used in place of the classical maximum likelihood (ML) method in the presence of multicollinearity, where the mean squared error (MSE) of ML becomes inflated. Furthermore, this paper derives the optimal value of the shrinkage parameter and, based on this value, suggests some methods for estimating it. Using Monte Carlo simulations in which the MSE and mean absolute error (MAE) are calculated, it is shown that the Liu estimator, applied with the proposed estimators of the shrinkage parameter, always outperforms ML. Finally, an empirical application illustrates the usefulness of the new Liu estimators.
    Keywords: Estimation; MSE; MAE; Multicollinearity; Poisson; Liu; Simulation
    JEL: C53
    Date: 2011–06–30
    URL: http://d.repec.org/n?u=RePEc:hhs:huiwps:0051&r=ecm
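    The shrinkage form underlying this family of estimators is easy to state concretely. The sketch below (hypothetical toy data; Python with NumPy) fits a Poisson model by iteratively reweighted least squares and then applies a Liu-type adjustment beta_d = (X'WX + I)^(-1)(X'WX + d I) beta_ML for a fixed d; the paper's own data-driven estimators of the shrinkage parameter d are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy design with two nearly collinear regressors plus an intercept --
    # the multicollinear setting the paper targets.
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + 0.05 * rng.normal(size=n)          # nearly collinear with x1
    X = np.column_stack([np.ones(n), x1, x2])
    beta_true = np.array([0.5, 0.3, 0.2])
    y = rng.poisson(np.exp(X @ beta_true))

    # Poisson ML via iteratively reweighted least squares (Newton steps).
    beta = np.zeros(3)
    for _ in range(50):
        mu = np.exp(X @ beta)
        S = X.T @ (mu[:, None] * X)              # X' W X with W = diag(mu)
        beta = beta + np.linalg.solve(S, X.T @ (y - mu))
    beta_ml = beta

    # Liu-type shrinkage with a fixed parameter d in (0, 1):
    # beta_d = (X'WX + I)^(-1) (X'WX + d I) beta_ml.
    d = 0.5
    mu = np.exp(X @ beta_ml)
    S = X.T @ (mu[:, None] * X)
    I = np.eye(3)
    beta_liu = np.linalg.solve(S + I, (S + d * I) @ beta_ml)
    ```

    Since every eigenvalue of (S + I)^(-1)(S + d I) lies below one when d < 1, the Liu step strictly shrinks the ML coefficient vector, which is the mechanism that trades a little bias for a large variance reduction under multicollinearity.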
  2. By: Francesca Bruno (Università di Bologna); Daniela Cocchi (Università di Bologna); Alessandro Vagheggini (Università di Bologna)
    Abstract: When statistical inference is used for spatial prediction, the model-based framework known as kriging is commonly used. The predictor for an unsampled element of a population is a weighted combination of sampled values, in which weights are obtained by estimating the spatial covariance function. This solution can be affected by model misspecification and can be influenced by sampling design properties. In classical design-based finite population inference, these problems can be overcome; nevertheless, spatial solutions are still seldom used for this purpose. Through the efficient use of spatial information, a conceptual framework for design-based estimation has been developed in this study. We propose a standardized weighted predictor for unsampled spatial data, using the population information regarding spatial locations directly in the weighting system. Our procedure does not require model estimation of the spatial pattern because the spatial relationship is captured exclusively based on the Euclidean distances between locations (which are fixed and do not require assessment after sample selection). The individual predictor is a design-based ratio estimator, and we illustrate its properties for simple random sampling.
    Keywords: spatial sampling; ratio estimator; design-based inference; model-based inference; spatial information in finite population inference
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:107&r=ecm
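    As a purely illustrative sketch of the general idea — prediction weights built from fixed Euclidean distances rather than from an estimated covariance function — the toy code below (hypothetical data) predicts an unsampled location with standardised inverse-distance weights. The paper's own weighting system differs; this only shows why no model estimation is needed once locations are known.

    ```python
    import math

    # Hypothetical sampled locations (coordinates) and observed values.
    sampled = {(0.0, 0.0): 10.0, (1.0, 0.0): 12.0, (0.0, 1.0): 14.0}
    target = (0.25, 0.25)            # unsampled location to predict

    def dist(a, b):
        """Euclidean distance between two locations (fixed, design-known)."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Standardised inverse-distance weights: closer sampled units count more.
    raw = {s: 1.0 / dist(s, target) for s in sampled}
    total = sum(raw.values())
    pred = sum(sampled[s] * w / total for s, w in raw.items())
    ```

    Because the weights sum to one, the predictor is a convex combination of the sampled values and always stays within their range.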
  3. By: Miguel Artiach (Universidad de Alicante); Josu Arteche (UPV/EHU)
    Abstract: Strong persistence is a common phenomenon that has been documented not only in the levels but also in the volatility of many time series. The class of doubly fractional models is extended to include the possibility of long memory at cyclical (non-zero) frequencies in both the levels and the volatility, and a new model, the GARMA-GARMASV (Gegenbauer AutoRegressive Moving Average - Id. Stochastic Volatility), is introduced. A sequential estimation strategy, based on the Whittle approximation to maximum likelihood, is proposed and its finite sample performance is evaluated with a Monte Carlo analysis. Finally, a version of the model that is trifactorial in the mean and bifactorial in the volatility is shown to successfully fit the well-known sunspot index.
    Keywords: Stochastic volatility; cycles; long memory; QML estimation; sunspot index.
    JEL: C22 C13
    Date: 2011–07–14
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:201103&r=ecm
  4. By: Zeebari , Zangin (Departments of Economics and Statistics); Shukur , Ghazi (Departments of Economics and Statistics); Kibria, B. M. Golam (Florida International University)
    Abstract: In this paper, we modify a number of new biased estimators of seemingly unrelated regression (SUR) parameters developed by Alkhamisi and Shukur (2008), AS, for the case in which the explanatory variables are affected by multicollinearity. Nine ridge parameters are modified and compared in terms of the trace mean squared error (TMSE) and the (PR) criterion. The results from this extended study are also compared with those found by AS. A simulation study has been conducted to compare the performance of the modified ridge parameters. The results show that, under certain conditions, the multivariate ridge regression estimator based on the SUR ridge RMSmax is superior to the other estimators in terms of the TMSE and PR criteria. In large samples, and when the collinearity between the explanatory variables is not high, the unbiased SUR estimator produces a smaller TMSE.
    Keywords: Multicollinearity; modified SUR ridge regression; Monte Carlo simulations; TMSE
    JEL: C30
    Date: 2010–10–01
    URL: http://d.repec.org/n?u=RePEc:hhs:huiwps:0043&r=ecm
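    The single-equation analogue of the estimators compared here is easy to sketch. The toy code below (hypothetical data) shows the shrinkage effect of a ridge parameter k on a nearly collinear design; the paper itself works with the multivariate SUR system and with rules for choosing k, neither of which is reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Nearly collinear design -- the setting in which ridge-type
    # estimators are preferred to unbiased least squares.
    n = 100
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)
    X = np.column_stack([x1, x2])
    y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.5, size=n)

    def ridge(X, y, k):
        """Single-equation ridge estimator: (X'X + kI)^(-1) X'y."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

    b_ols = ridge(X, y, 0.0)     # ordinary least squares (k = 0)
    b_rdg = ridge(X, y, 1.0)     # shrunk towards zero
    ```

    Increasing k always reduces the norm of the coefficient vector, which is the bias-variance trade-off that the TMSE comparisons in the paper quantify.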
  5. By: Arnaud Maurel; Xavier D'Haultfoeuille
    Abstract: This paper considers the identification and estimation of an extension of Roy's (1951) model of sectoral choice, which includes a non-pecuniary component in the selection equation and allows for uncertainty on potential earnings. We focus on the identification of the non-pecuniary component, which is key to disentangling the relative importance of monetary incentives versus preferences in the context of sorting across sectors. By making the most of the structure of the selection equation, we show that this component is point identified from the knowledge of the covariates' effects on earnings, as soon as one covariate is continuous. Notably, and in contrast to most results on the identification of Roy models, this implies that identification can be achieved without any exclusion restriction or large support condition on the covariates. As a byproduct, bounds are obtained on the distribution of the ex ante monetary returns. We also propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. Finally, we apply our results to the educational context, by providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.
    Keywords: Roy model, nonparametric identification, schooling choices, ex ante returns to schooling
    JEL: C14 C25 J24
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:11-10&r=ecm
  6. By: BIA Michela; FLORES Carlos A.; MATTEI Alessandra
    Abstract: We propose two semiparametric estimators of the dose-response function based on spline techniques. In many observational studies the treatment is neither binary nor categorical; in such cases one may be interested in estimating the dose-response function for a continuous treatment. Under unconfoundedness, the generalized propensity score can be used to estimate dose-response functions (DRF) and marginal treatment effect functions. We evaluate the performance of the proposed estimators using Monte Carlo simulation methods. The simulation results suggest that the estimated DRF is robust to the specific semiparametric estimator used, while parametric estimates of the DRF are sensitive to model mis-specification. We apply our approach to the problem of evaluating the effect on innovation sales of Research and Development (R&D) financial aid received by Luxembourgish firms in 2004 and 2005.
    Keywords: Continuous treatment; Dose-response function; Generalized Propensity Score; Non-parametric methods; R&D investment
    JEL: C13 J31 J70
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:irs:cepswp:2011-40&r=ecm
  7. By: Tommaso, Proietti; Helmut, Luetkepohl
    Abstract: The paper investigates whether transforming a time series leads to an improvement in forecasting accuracy. The class of transformations considered is the Box-Cox power transformation, which applies to series measured on a ratio scale. We propose a nonparametric approach for estimating the optimal transformation parameter based on the frequency domain estimation of the prediction error variance, and also conduct an extensive recursive forecast experiment on a large set of seasonal monthly macroeconomic time series related to industrial production and retail turnover. In about one fifth of the series considered, the Box-Cox transformation produces forecasts significantly better than those from the untransformed data at the one-step-ahead horizon; in most cases the logarithmic transformation is the relevant one. As the forecast horizon increases, the evidence in favour of a transformation becomes less strong. Typically, the naïve predictor that simply reverses the transformation leads to a lower mean square error than the optimal predictor at short forecast leads. We also discuss whether the preliminary in-sample frequency domain assessment provides reliable guidance as to which series should be transformed in order to improve predictive performance significantly.
    Keywords: Forecasts comparisons; Multi-step forecasting; Rolling forecasts; Nonparametric estimation of prediction error variance.
    JEL: C53 C14 C52 C22
    Date: 2011–07–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:32294&r=ecm
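    The transformation itself, and the naïve back-transformation the abstract refers to, can be sketched as follows (lam is the transformation parameter; the paper's frequency-domain estimator of the optimal lam is not reproduced here).

    ```python
    import numpy as np

    def boxcox(y, lam):
        """Box-Cox power transform for y > 0 (ratio-scale data)."""
        y = np.asarray(y, dtype=float)
        if lam == 0.0:
            return np.log(y)           # the log transform is the lam -> 0 limit
        return (y ** lam - 1.0) / lam

    def boxcox_inverse(z, lam):
        """Naive predictor: simply reverse the transformation."""
        z = np.asarray(z, dtype=float)
        if lam == 0.0:
            return np.exp(z)
        return (lam * z + 1.0) ** (1.0 / lam)

    y = np.array([1.0, 2.0, 4.0, 8.0])
    z = boxcox(y, 0.0)                 # lam = 0: forecasts made in logs
    y_back = boxcox_inverse(z, 0.0)    # naive reversal back to levels
    ```

    Naively inverting a forecast made on the transformed scale does not give the conditional mean on the original scale (that is the bias the "optimal predictor" in the paper corrects), yet the abstract reports that the naïve reversal often wins at short leads.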
  8. By: Giuseppe Cavaliere (Università di Bologna); Iliyan Georgiev (Faculdade de Economia, Universidade Nova de Lisboa); A.M.Robert Taylor (School of Economics, University of Nottingham)
    Abstract: It is well known that the standard i.i.d. bootstrap of the mean is inconsistent in a location model with infinite variance (α-stable) innovations. This occurs because the bootstrap distribution of a normalised sum of infinite variance random variables tends to a random distribution. Consistent bootstrap algorithms based on subsampling methods have been proposed but have the drawback that they deliver much wider confidence sets than those generated by the i.i.d. bootstrap, owing to the fact that they eliminate the dependence of the bootstrap distribution on the sample extremes. In this paper we propose sufficient conditions that allow a simple modification of the bootstrap (Wu, 1986, Ann. Stat.) to be consistent (in a conditional sense) yet also reproduce the narrower confidence sets of the i.i.d. bootstrap. Numerical results demonstrate that our proposed bootstrap method works very well in practice, delivering coverage rates very close to the nominal level and significantly narrower confidence sets than other consistent methods.
    Keywords: Bootstrap, stable distributions, random probability measures, weak convergence
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:108&r=ecm
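    The basic wild bootstrap scheme of Wu (1986), which the paper modifies, keeps each observation in place and perturbs its centred value with an external random weight instead of resampling rows. A minimal sketch with Rademacher weights and hypothetical Cauchy data (Cauchy is α-stable with α = 1, so the variance is infinite); the paper's actual sufficient conditions and weight choices are in the text.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Heavy-tailed hypothetical sample: the setting where the i.i.d.
    # bootstrap of the mean is known to be inconsistent.
    y = rng.standard_cauchy(500)
    ybar = y.mean()

    # Wild bootstrap: flip each centred observation with an external
    # Rademacher weight, preserving the role of the sample extremes.
    B = 999
    stats = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=y.size)   # Rademacher weights
        y_star = ybar + w * (y - ybar)
        stats[b] = y_star.mean() - ybar

    # Symmetric bootstrap confidence interval for the location parameter.
    q = np.quantile(np.abs(stats), 0.95)
    ci = (ybar - q, ybar + q)
    ```

    Because the extremes stay in every bootstrap sample, the interval remains data-adaptive rather than inflated the way subsampling-based intervals are.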
  9. By: Takashi Kamihigashi (Research Institute for Economics and Business Administration, Kobe University); John Stachurski (Research School of Economics, Australian National University, ACT, Australia)
    Abstract: We discuss stability of stationary distributions for discrete-time Markov chains satisfying monotonicity and an order-theoretic mixing condition that can be seen as an alternative to irreducibility. A process satisfying these conditions has at most one stationary distribution, and any such distribution must be globally stable.
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:kob:dpaper:dp2011-24&r=ecm
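    A minimal illustration of why monotonicity matters: when two copies of a monotone chain are driven by the same shocks, paths started from ordered initial conditions stay ordered, and for the contracting toy chain below they also converge together. This sketch uses a hypothetical AR(1)-type chain and does not reproduce the paper's order-theoretic mixing condition.

    ```python
    import random

    random.seed(5)

    # A monotone Markov chain: X' = 0.5 X + e. With a common shock e,
    # a higher current state always maps to a higher next state.
    lo, hi = -10.0, 10.0            # two ordered initial conditions
    for _ in range(200):
        e = random.gauss(0.0, 1.0)  # same shock fed to both paths
        lo, hi = 0.5 * lo + e, 0.5 * hi + e
        assert lo <= hi             # the ordering is preserved at every step
    ```

    The gap hi - lo shrinks deterministically by the factor 0.5 each period, so both paths approach the same stochastic steady state regardless of where they started, which is the flavour of global stability the abstract describes.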
  10. By: Fujimoto, S.; Ishikawa, A.; Mizuno, T.; Watanabe, T.
    Abstract: We propose a new method for estimating the power-law exponent of a firm size variable, such as annual sales. Our focus is on how to empirically identify a range in which a firm size variable follows a power-law distribution. As is well known, a firm size variable follows a power-law distribution only beyond some threshold. On the other hand, in almost all empirical exercises, the right end of the distribution deviates from a power law owing to a finite size effect. We modify the method proposed by Malevergne et al. (2011) so that we can identify both the lower and the upper thresholds, and then estimate the power-law exponent using observations only in the range defined by the two thresholds. We apply this new method to various firm size variables, including annual sales, the number of workers, and tangible fixed assets for firms in more than thirty countries.
    Keywords: Econophysics, power-law distributions, power-law exponents, firm size variables, finite size effect
    JEL: C16 D20 E23
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:hit:cinwps:7&r=ecm
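    The two-threshold idea can be illustrated with a simple Hill-type estimate computed only inside a pre-set window; the paper's actual threshold-selection rule, which modifies Malevergne et al. (2011), is not reproduced here, and the data below are a hypothetical Pareto sample rather than firm sizes.

    ```python
    import math
    import random

    random.seed(0)

    # Hypothetical Pareto(alpha = 2) sample: P(X > x) = x^(-2) for x >= 1.
    alpha_true = 2.0
    xs = [random.paretovariate(alpha_true) for _ in range(20000)]

    # Keep only observations inside a pre-set [lower, upper) window,
    # mimicking the trimming of both the body of the distribution and
    # the finite-size right tail.
    lower, upper = 1.0, 100.0
    window = [x for x in xs if lower <= x < upper]

    # Hill-type estimate of the exponent within the window:
    # alpha_hat = n / sum(log(x_i / lower)).
    alpha_hat = len(window) / sum(math.log(x / lower) for x in window)
    ```

    With the window chosen correctly, the estimate recovers the true exponent; choosing the two thresholds from the data itself is precisely the problem the paper addresses.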
  11. By: Patrick Bayer; Robert McMillan; Alvin Murphy; Christopher Timmins
    Abstract: We develop a tractable model of neighborhood choice in a dynamic setting along with a computationally straightforward estimation approach. This approach uses information about neighborhood choices and the timing of moves to recover moving costs and preferences for dynamically-evolving housing and neighborhood attributes. The model and estimator are potentially applicable to the study of a wide range of dynamic phenomena in housing markets and cities. We focus here on estimating the marginal willingness to pay for non-marketed amenities – neighborhood racial composition, air pollution, and violent crime – using rich dynamic data. Consistent with the time-series properties of each amenity, we find that a static demand model understates willingness to pay to avoid pollution and crime but overstates willingness to pay to live near neighbors of one’s own race. These findings have important implications for the class of static housing demand models typically used to value urban amenities.
    Keywords: Neighborhood Choice, Housing Demand, Hedonic Valuation, Dynamic Discrete Choice
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:11-16&r=ecm
  12. By: Stephen Pollock
    Abstract: The algebra of the Kronecker products of matrices is recapitulated using a notation that reveals the tensor structures of the matrices. It is claimed that many of the difficulties that are encountered in working with the algebra can be alleviated by paying close attention to the indices that are concealed beneath the conventional matrix notation. The vectorisation operations and the commutation transformations that are common in multivariate statistical analysis alter the positional relationship of the matrix elements. These elements correspond to numbers that are liable to be stored in contiguous memory cells of a computer, which should remain undisturbed. It is suggested that, in the absence of an adequate index notation that enables the manipulations to be performed without disturbing the data, even the most clear-headed of computer programmers is liable to perform wholly unnecessary and time-wasting operations that shift data between memory cells.
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:11/34&r=ecm
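    The workhorse identity behind much of this algebra, vec(AXB) = (B' ⊗ A) vec(X) with column-major vectorisation, is easy to verify numerically; note that NumPy's default row-major flattening is not the vec operator of the statistics literature, which is exactly the kind of index bookkeeping the paper is about.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(2, 3))
    X = rng.normal(size=(3, 4))
    B = rng.normal(size=(4, 2))

    def vec(M):
        """Column-major vectorisation: stack the columns of M."""
        return M.flatten(order="F")

    # vec(A X B) = (B' kron A) vec(X)
    lhs = vec(A @ X @ B)
    rhs = np.kron(B.T, A) @ vec(X)
    ```

    The identity lets matrix equations be rewritten as ordinary linear systems in vec(X), which is how commutation and duplication matrices enter multivariate differential calculus.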
  13. By: Raghav, Manu; Barreto, Humberto
    Abstract: This paper focuses on econometrics pedagogy. It demonstrates the importance of including probability weights in regression analysis using data from surveys that do not use simple random samples (SRS). We use concrete, numerical examples and simulation to show how to effectively teach this difficult material to a student audience. We relax the assumption of simple random sampling and show how unequal probability of selection can lead to biased, inconsistent OLS slope estimates. We then explain and apply probability weighted least squares, showing how weighting the observations by the reciprocal of the probability of inclusion in the sample improves performance. The exposition is non-mathematical and relies heavily on intuitive, visual displays to make the content accessible to students. This paper will enable professors to incorporate unequal probability of selection into their courses and allow students to use best practice techniques in analyzing data from complex surveys. The primary delivery vehicle is Microsoft Excel®. Two user-defined array functions, SAMPLE and LINESTW, are included in a prepared Excel workbook. We replicate all results in Stata® and offer a do file for easy analysis in Stata. Documented code in Excel and Stata allows users to see each step in the sampling and probability weighted least squares algorithms. All files and code are available at www.depauw.edu/learn/stata.
    Keywords: unequal probability; complex survey; simulation; weighted regression
    JEL: A22 A23 C8 C01
    Date: 2011–06–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:32334&r=ecm
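    The core mechanism — weighting each observation by the reciprocal of its probability of inclusion in the sample — can be sketched in a few lines with hypothetical simulated data; the paper's own Excel and Stata implementations (SAMPLE, LINESTW) are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical population: y = 1 + 2x + e.
    N = 100_000
    x = rng.uniform(0, 1, N)
    y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=N)

    # Unequal probability of selection that depends on the outcome:
    # units with y above the population median are far more likely sampled.
    p = np.where(y > np.median(y), 0.9, 0.1)
    sel = rng.uniform(size=N) < p
    xs, ys, ps = x[sel], y[sel], p[sel]

    def lstsq(X, y, w=None):
        """(Weighted) least squares via the normal equations."""
        if w is None:
            w = np.ones(len(y))
        return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

    X = np.column_stack([np.ones(sel.sum()), xs])
    b_ols = lstsq(X, ys)                   # ignores the design: biased slope
    b_pwls = lstsq(X, ys, w=1.0 / ps)      # weight = 1 / P(selection)
    ```

    Unweighted OLS on the selected sample is pulled well away from the true slope of 2, while weighting by the reciprocal of the inclusion probability restores it — the same point the paper makes visually for students.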
  14. By: Javier Fernández-Macho (EA3 --- UPV/EHU)
    Abstract: Statistical studies that consider multiscale relationships among several variables use wavelet correlations and cross-correlations between pairs of variables. This procedure requires calculating and comparing a large number of wavelet statistics. The analysis can then be rather confusing, and even frustrating, since it may fail to indicate clearly the overall multiscale relationship that might exist among the variables. This paper presents two new statistical tools that help to determine the overall correlation for the whole multivariate set on a scale-by-scale basis. This is illustrated in the analysis of a multivariate set of daily Eurozone stock market returns during a recent period. Wavelet multiple correlation analysis reveals the existence of a nearly exact linear relationship for periods longer than a year, which can be interpreted as perfect integration of these Euro stock markets at the longest time scales. It also shows that small inconsistencies between Euro markets appear to be short within-year discrepancies, possibly due to the interaction of agents with different trading horizons. On the other hand, multiple cross-correlation analysis shows that the French CAC40 may lead the rest of the Euro markets at those short time scales.
    Keywords: Euro zone, MODWT, multiscale analysis, multivariate analysis, stock markets, returns, wavelet transform.
    JEL: C32 C87 G15
    Date: 2011–07–14
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:201104&r=ecm
  15. By: Alessandro Gnoatto; Martino Grasselli
    Abstract: We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral, which extends the original approach of Bru. We compare our methodology with the alternative results given by the variation of constants method, the linearization of the matrix Riccati ODEs, and the Runge-Kutta algorithm. The new formula turns out to be fast, accurate, and very useful for applications when dealing with stochastic volatility and stochastic correlation modelling.
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1107.2748&r=ecm
  16. By: Matteo Fragetta (University of Salerno); Giovanni Melina (University of Surrey)
    Abstract: This paper applies graphical modelling theory to recover identifying restrictions for the analysis of monetary policy shocks in a VAR of the US economy. Results are in line with the view that only high-frequency data should be assumed to be in the information set of the monetary authority when the interest rate decision is taken.
    Keywords: Monetary policy; SVAR; Graphical modelling
    JEL: E43 E52
    Date: 2011–07
    URL: http://d.repec.org/n?u=RePEc:sur:surrec:0811&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.