New Economics Papers on Econometrics
By: | Yao Luo; Yuanyuan Wan |
Abstract: | This paper considers nonparametric estimation of first-price auction models under the monotonicity restriction on the bidding strategy. Based on an integrated-quantile representation of the first-order condition, we propose a tuning-parameter-free estimator for the valuation quantile function. We establish its cube-root-n consistency and asymptotic distribution under weaker smoothness assumptions than those typically assumed in the empirical literature. If the latter hold, we also provide a trimming-free smoothed estimator and show that it is asymptotically normal and achieves the optimal rate of Guerre, Perrigne, and Vuong (2000). We illustrate our methods using Monte Carlo simulations and an empirical study of California highway procurement auctions. |
Keywords: | First Price Auctions, Monotone Bidding Strategy, Nonparametric Estimation, Tuning-Parameter-Free |
JEL: | D44 D82 C12 C14 |
Date: | 2015-05-06 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-539&r=ecm |
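For reference, a minimal sketch of the first-order condition this estimator builds on, in the spirit of Guerre, Perrigne, and Vuong (2000) (notation illustrative, not necessarily the authors'): with I symmetric bidders, bid distribution G and bid density g, the private value implied by an observed equilibrium bid b is

  v = \xi(b) = b + \frac{G(b)}{(I-1)\, g(b)},

and, writing q(\alpha) for the \alpha-quantile of bids so that g(q(\alpha)) = 1/q'(\alpha), the valuation quantile function satisfies

  v(\alpha) = q(\alpha) + \frac{\alpha\, q'(\alpha)}{I-1}.

Integrating this quantile form replaces the bid density by the bid quantile function alone, which is presumably what makes a bandwidth unnecessary; the exact integrated-quantile representation and the resulting estimator follow the paper, not this sketch.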
By: | Francesco Bartolucci; Silvia Bacci; Claudia Pigini |
Abstract: | An alternative to using normally distributed random effects in modeling clustered binary and ordered responses is based on using a finite mixture. This approach gives rise to a flexible class of generalized linear mixed models for item responses, multilevel data, and longitudinal data. A test of misspecification for these finite-mixture models is proposed which is based on the comparison between the Marginal and the Conditional Maximum Likelihood estimates of the fixed effects, as in Hausman's test. The asymptotic distribution of the test statistic is derived; it is of chi-squared type with a number of degrees of freedom equal to the number of covariates that vary within the cluster. It turns out that the test is simple to perform and may also be used to select the number of components of the finite mixture when this number is unknown. The approach is illustrated by a series of simulations and three empirical examples covering the main fields of application. |
Keywords: | Generalized Linear Mixed Models, Hausman Test, Item Response Theory, Latent Class model, Longitudinal data, Multilevel data |
JEL: | C12 C23 C52 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:64220&r=ecm |
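For reference, a hedged sketch of the generic Hausman-type contrast behind the proposed test (notation illustrative): let \hat\beta_{CML} and \hat\beta_{MML} be the conditional and marginal maximum likelihood estimates of the k fixed effects attached to covariates that vary within clusters, with estimated asymptotic variance matrices \hat V_{CML} and \hat V_{MML}. Under correct specification both estimators are consistent and the marginal one is efficient, so the classical Hausman statistic

  H = (\hat\beta_{CML} - \hat\beta_{MML})' \left[ \hat V_{CML} - \hat V_{MML} \right]^{-1} (\hat\beta_{CML} - \hat\beta_{MML}) \;\xrightarrow{d}\; \chi^2_k

under the null of correct specification, matching the degrees of freedom described in the abstract. The specific variance estimators and the use of the statistic to select the number of mixture components follow the paper.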
By: | Ulrich Hounyo (Oxford-Man Institute, University of Oxford, and Aarhus University and CREATES); Bezirgen Veliyev (Aarhus University and CREATES) |
Abstract: | The main contribution of this paper is to establish the formal validity of Edgeworth expansions for realized volatility estimators. First, in the context of no microstructure effects, our results rigorously justify the Edgeworth expansions for realized volatility derived in Gonçalves and Meddahi (2009). Second, we show that the validity of the Edgeworth expansions for realized volatility may not cover the optimal two-point distribution wild bootstrap proposed by Gonçalves and Meddahi (2009). Then, we propose a new optimal nonlattice distribution which ensures the second-order correctness of the bootstrap. Third, in the presence of microstructure noise, based on our Edgeworth expansions, we show that the new optimal choice proposed in the absence of noise is still valid in noisy data for the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). Finally, we show how confidence intervals for integrated volatility can be constructed using these Edgeworth expansions for noisy data. Our Monte Carlo simulations show that the intervals based on the Edgeworth corrections have improved finite-sample properties relative to the conventional intervals based on the normal approximation. |
Keywords: | Realized volatility, pre-averaging, bootstrap, Edgeworth expansions, confidence intervals. |
JEL: | C15 C22 C58 |
Date: | 2015-05-03 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2015-21&r=ecm |
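To fix ideas on the resampling scheme being analyzed, here is a minimal sketch (not the authors' code) of a wild bootstrap for realized volatility in the spirit of Gonçalves and Meddahi (2009): bootstrap intraday returns are generated as r*_i = r_i * eta_i with an external variable eta satisfying E[eta^2] = 1. The paper's contribution concerns which distribution of eta (two-point versus the new optimal nonlattice choice) yields second-order correct, Edgeworth-corrected intervals; that choice is not implemented below.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder intraday returns; real applications use high-frequency data,
# possibly pre-averaged to handle microstructure noise (Podolskij and Vetter, 2009).
n = 390
r = rng.normal(scale=0.01, size=n)

rv = np.sum(r ** 2)  # realized volatility (realized variance)

# Wild bootstrap: r*_i = r_i * eta_i with E[eta^2] = 1, so E*[RV*] = RV.
# eta is standard normal here purely for illustration; the optimal nonlattice
# distribution of eta studied in the paper is NOT implemented.
B = 999
rv_star = np.empty(B)
for b in range(B):
    eta = rng.normal(size=n)
    rv_star[b] = np.sum((r * eta) ** 2)

# Crude percentile interval for integrated volatility; the paper instead works
# with studentized statistics and Edgeworth corrections.
print(rv, np.percentile(rv_star, [2.5, 97.5]))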
By: | Gloria Gonzalez-Rivera (Department of Economics, University of California Riverside); Wei Lin (Capital University of Economics and Business) |
Abstract: | The current regression models for interval-valued data ignore the extreme nature of the lower and upper bounds of intervals. We propose a new estimation approach that considers the bounds of the interval as realizations of the max/min order statistics coming from a sample of n_t random draws from the conditional density of an underlying stochastic process {Y_t}. This approach is important for data sets in which the relevant information is only available in interval format, e.g., low/high prices. We are interested in the characterization of the latent process as well as in the modeling of the bounds themselves. We estimate a dynamic model for the conditional mean and conditional variance of the latent process, which is assumed to be normally distributed, and for the conditional intensity of the discrete process {n_t}, which follows a negative binomial density function. Under these assumptions, together with the densities of order statistics, we obtain maximum likelihood estimates of the parameters of the model, which are needed to estimate the expected value of the bounds of the interval. We implement this approach with the time series of livestock prices, of which only low/high prices are recorded, making the price process itself latent. We find that the proposed model provides an excellent fit of the intervals of low/high returns with an average coverage rate of 83%. We also offer a comparison with current models for interval-valued data. |
Date: | 2015-05 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201505&r=ecm |
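The order-statistic densities entering the likelihood described above are standard. If Y_t has conditional cdf F_t and density f_t, and the interval bounds are the maximum and minimum of n_t conditionally independent draws, then (notation illustrative)

  f_{\max}(y \mid n_t) = n_t\, F_t(y)^{n_t - 1} f_t(y), \qquad f_{\min}(y \mid n_t) = n_t\, \bigl(1 - F_t(y)\bigr)^{n_t - 1} f_t(y),

and the observed low/high bounds contribute to the likelihood after mixing over the negative binomial distribution of n_t. Whether the bounds enter marginally or through the joint density of the pair (min, max), and the exact dynamics of the latent Gaussian process, follow the paper.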
By: | A. Benchimol; Irene Albarrán; J. Miguel Marín; Pablo J. Alonso |
Abstract: | Some groups of countries are connected not only economically, but also socially and even demographically. This fact can be exploited when trying to forecast the death rates of their populations. In this paper we propose a hierarchical specification of the Lee-Carter model and assume that there is a common latent mortality factor for all of these countries. We introduce an estimation procedure for this kind of structure by means of a data-cloning methodology. To our knowledge, this is the first time that this methodology has been used in the actuarial field. It allows approximating the maximum likelihood estimates, which are not affected by the prior distributions assumed in the computation. Finally, we apply the methodology to data from France, Italy, Portugal and Spain. The forecasts obtained using this methodology can be considered very satisfactory. |
Keywords: | Bayesian inference, Data cloning, Hierarchical model, Lee-Carter model, Longevity risk, Projected life tables |
Date: | 2015-05 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1510&r=ecm |
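For reference, the baseline Lee-Carter specification that the hierarchical model extends: for age x and year t, the log central death rate is modeled as

  \log m_{x,t} = a_x + b_x\, k_t + \varepsilon_{x,t}, \qquad \sum_x b_x = 1, \quad \sum_t k_t = 0,

with a_x an age profile, b_x age-specific sensitivities, and k_t a latent period mortality index, typically given random-walk-with-drift dynamics for forecasting. In the hierarchical version sketched in the abstract, the countries share a common latent mortality factor, and data cloning (running Bayesian MCMC on K copies of the data so that the posterior concentrates at the maximum likelihood estimate as K grows) is used for estimation; the exact multi-country specification follows the paper.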
By: | Joao Henrique Gonçalves Mazzeu; Esther Ruiz; Helena Veiga |
Abstract: | The objective of this paper is to survey the literature on the effects of model uncertainty on the forecast accuracy of linear univariate ARMA models. We consider three specific uncertainties: parameter estimation, error distribution and lag order. We also survey the procedures proposed to deal with each of these sources of uncertainty. The results are illustrated with simulated data. |
Keywords: | Bayesian forecast, Bootstrap, Model misspecification, Parameter uncertainty |
Date: | 2015-05 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1508&r=ecm |
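As a concrete instance of one of the surveyed procedures, here is a minimal sketch (not from the paper) of a residual bootstrap that propagates parameter-estimation and error-distribution uncertainty into a one-step-ahead AR(1) forecast interval; lag-order uncertainty could be layered on top by re-selecting the order inside the loop.

import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) data: y_t = phi * y_{t-1} + e_t (placeholder for a real series)
n, phi_true = 200, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Least-squares estimate of the AR(1) coefficient and centred residuals
phi_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi_hat * y[:-1]
resid = resid - resid.mean()

# Residual bootstrap of the one-step-ahead forecast
B = 999
fc = np.empty(B)
for b in range(B):
    e_star = rng.choice(resid, size=n - 1, replace=True)
    y_star = np.zeros(n)
    y_star[0] = y[0]
    for t in range(1, n):
        y_star[t] = phi_hat * y_star[t - 1] + e_star[t - 1]
    phi_star = np.dot(y_star[1:], y_star[:-1]) / np.dot(y_star[:-1], y_star[:-1])
    # Forecast from the observed last value, adding a resampled future error
    fc[b] = phi_star * y[-1] + rng.choice(resid)

print(np.percentile(fc, [2.5, 97.5]))  # bootstrap 95% one-step forecast interval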
By: | Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Anne Sofie Jore (Norges Bank (Central Bank of Norway)); Francesco Ravazzolo (Norges Bank (Central Bank of Norway) and BI Norwegian Business School) |
Abstract: | We define and forecast classical business cycle turning points for the Norwegian economy. When defining reference business cycles, we compare a univariate and a multivariate Bry-Boschan approach with univariate Markov-switching models and Markov-switching factor models. On the basis of a receiver operating characteristic curve methodology and a comparison of business cycle turning points with Norway's main trading partners, we find that a Markov-switching factor model provides the most reasonable definition of Norwegian business cycles for the sample 1978Q1-2011Q4. In a real-time out-of-sample forecasting exercise, focusing on the last recession, we show that univariate Markov-switching models applied to surveys and a financial conditions index are timely and accurate in calling the last peak in real time. The models are less accurate and timely in calling the trough in real time. |
Keywords: | Business cycle, Dating rules, Turning Points, Real-time data |
JEL: | C32 C52 C53 E37 E52 |
Date: | 2015-05-09 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2015_09&r=ecm |
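For intuition about what "classical turning points" means here, a minimal sketch (not the paper's implementation) of the quarterly Bry-Boschan / Harding-Pagan local-extremum rule: a candidate peak (trough) at t requires y_t to be the strict maximum (minimum) within a window of two quarters on each side; the dating rules compared in the paper add censoring requirements on phase and cycle lengths and, in the multivariate and Markov-switching cases, replace this rule altogether.

import numpy as np

def turning_points(y, k=2):
    """Flag candidate peaks/troughs: y_t is a strict local max/min within +/- k quarters."""
    y = np.asarray(y, dtype=float)
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k: t + k + 1]
        if (window < y[t]).sum() == 2 * k:   # strictly above all 2k neighbours
            peaks.append(t)
        if (window > y[t]).sum() == 2 * k:   # strictly below all 2k neighbours
            troughs.append(t)
    return peaks, troughs

# Example with a simulated (log) GDP-like series
rng = np.random.default_rng(2)
log_gdp = np.cumsum(0.005 + 0.01 * rng.normal(size=140))
print(turning_points(log_gdp))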
By: | Ying-Hui Shao (ECUST); Gao-Feng Gu (ECUST); Zhi-Qiang Jiang (ECUST); Wei-Xing Zhou (ECUST) |
Abstract: | The detrending moving average (DMA) algorithm is one of the best performing methods to quantify the long-term correlations in nonstationary time series. Many long-term correlated time series in real systems contain various trends. We investigate the effects of polynomial trends on the scaling behaviors and the performance of three widely used DMA methods: the backward (BDMA), centered (CDMA) and forward (FDMA) algorithms. We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series, while that at large scales is dominated by the constant shifts or linear trends. We also derive analytically the expressions of the crossover scales and show that the crossover scale depends on the strength of the polynomial trend, the Hurst index, and, in some cases (linear trends for BDMA and FDMA), the length of the time series. In all cases, the BDMA and FDMA methods behave almost the same under the influence of constant shifts or linear trends. Extensive numerical experiments closely confirm the analytical derivations. We conclude that the CDMA method outperforms the BDMA and FDMA methods in the presence of polynomial trends. |
Date: | 2015-04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1505.02750&r=ecm |
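A minimal sketch of the centered DMA (CDMA) fluctuation function discussed above (not the authors' code): build the profile of the series, detrend it with a centered moving average of window size n, compute the root-mean-square residual F(n), and read a scaling exponent off F(n) ~ n^H; the backward and forward variants (BDMA, FDMA) only shift the position of the averaging window.

import numpy as np

def cdma_fluctuation(x, window_sizes):
    """Centered detrending moving average: F(n) for each odd window size n."""
    y = np.cumsum(np.asarray(x, dtype=float))   # profile of the series
    F = []
    for n in window_sizes:
        half = n // 2
        ma = np.convolve(y, np.ones(n) / n, mode="valid")   # centered moving average
        resid = y[half: len(y) - half] - ma                  # align profile with MA
        F.append(np.sqrt(np.mean(resid ** 2)))
    return np.array(F)

# Example: uncorrelated noise should give a scaling exponent near 0.5
rng = np.random.default_rng(3)
x = rng.normal(size=10000)
ns = np.array([11, 21, 41, 81, 161, 321])
F = cdma_fluctuation(x, ns)
print(np.polyfit(np.log(ns), np.log(F), 1)[0])   # estimated exponent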