New Economics Papers on Econometrics
By: | Nikolaus Hautsch; Julia Schaumburg; Melanie Schienle |
Abstract: | Multiplicative error models (MEM) have become a standard tool for modeling conditional durations of intraday transactions, realized volatilities and trading volumes. The parametric estimation of the corresponding multivariate model, the so-called vector MEM (VMEM), requires a specification of the joint error term distribution, which, due to the lack of standard multivariate distribution functions on R_+^d, is defined via a copula. Maximum likelihood estimation is based on the assumption of constant copula parameters and therefore leads to invalid inference if the dependence exhibits time variation or structural breaks. Hence, we suggest testing for time-varying dependence by calibrating a time-varying copula model and re-estimating the VMEM on identified intervals of homogeneous dependence. This paper summarizes the important aspects of (V)MEM, its estimation and a sequential test for changes in the dependence structure. The techniques are applied in an empirical example. |
Keywords: | vector multiplicative error model, copula, time-varying copula, high-frequency data |
JEL: | C32 C51 |
Date: | 2012–09 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-054&r=ecm |
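The building block behind the VMEM is the univariate multiplicative error recursion. As a point of reference, here is a minimal sketch of a MEM(1,1) with unit-mean exponential errors; the parameter values are illustrative, and the paper's copula-linked vector extension is not attempted here.

```python
import numpy as np

def simulate_mem(n, omega=0.1, alpha=0.2, beta=0.7, seed=0):
    """Univariate MEM(1,1): x_t = mu_t * eps_t, with conditional mean
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1} and positive,
    unit-mean errors (standard exponential here)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    mu = omega / (1.0 - alpha - beta)        # start at the unconditional mean
    for t in range(n):
        x[t] = mu * rng.exponential(1.0)     # E[eps] = 1, so mu is the conditional mean
        mu = omega + alpha * x[t] + beta * mu
    return x

durations = simulate_mem(1000)
print(durations.mean())   # near omega / (1 - alpha - beta) = 1.0
```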
By: | Adam McCloskey; Pierre Perron |
Abstract: | We propose estimators of the memory parameter of a time series that are robust to a wide variety of random level shift processes, deterministic level shifts and deterministic time trends. The estimators are simple trimmed versions of the popular log-periodogram regression estimator that employ certain sample size-dependent and, in some cases, data-dependent trimmings which discard low-frequency components. We also show that a previously developed trimmed local Whittle estimator is robust to the same forms of data contamination. Regardless of whether the underlying long/short-memory process is contaminated by level shifts or deterministic trends, the estimators are consistent and asymptotically normal with the same limiting variance as their standard untrimmed counterparts. Simulations show that the trimmed estimators perform their intended purpose quite well, substantially decreasing both finite sample bias and root mean-squared error in the presence of these contaminating components. Furthermore, we assess the tradeoffs involved with their use when such components are not present but the underlying process exhibits strong short-memory dynamics or is contaminated by noise. To balance the potential finite sample biases involved in estimating the memory parameter, we recommend a particular adaptive version of the trimmed log-periodogram estimator that performs well in a wide variety of circumstances. We apply the estimators to stock market volatility data to find that various time series typically thought to be long-memory processes actually appear to be short or very weak long-memory processes contaminated by level shifts or deterministic trends. |
Keywords: | long-memory processes, semiparametric estimators, level shifts, structural change, deterministic trends |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:bro:econwp:2012-15&r=ecm |
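The estimator is essentially least squares on trimmed periodogram ordinates. A minimal sketch, assuming the standard GPH regressor -log(4 sin^2(lambda/2)); the bandwidth m and trimming l below are illustrative, whereas the paper derives sample-size- and data-dependent rules.

```python
import numpy as np

def trimmed_gph(x, m, l):
    """Trimmed log-periodogram (GPH) estimate of the memory parameter d:
    regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2)) over j = l+1,...,m,
    discarding the l lowest frequencies most affected by shifts/trends."""
    n = len(x)
    x = x - x.mean()
    I = np.abs(np.fft.fft(x))**2 / (2 * np.pi * n)   # periodogram ordinates
    j = np.arange(l + 1, m + 1)
    lam = 2 * np.pi * j / n
    y = np.log(I[j])
    reg = -np.log(4 * np.sin(lam / 2.0)**2)
    X = np.column_stack([np.ones_like(reg), reg])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope = d-hat

rng = np.random.default_rng(1)
x = rng.standard_normal(2048)                        # white noise: true d = 0
print(trimmed_gph(x, m=int(len(x)**0.65), l=10))
```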
By: | Adam McCloskey |
Abstract: | We develop powerful new size-correction procedures for nonstandard hypothesis testing environments in which the asymptotic distribution of a test statistic is discontinuous in a parameter under the null hypothesis. Examples of this form of testing problem are pervasive in econometrics and complicate inference by making size difficult to control. This paper introduces two sets of new size-correction methods that correspond to two different general hypothesis testing frameworks. The new methods are designed to maximize the power of the underlying test while maintaining correct asymptotic size uniformly over the parameter space specified by the null hypothesis. They involve the construction of critical values that make use of reasoning derived from Bonferroni bounds. The first set of new methods provides complementary alternatives to existing size-correction methods, entailing substantially higher power for many testing problems. The second set of new methods provides the first available asymptotically size-correct tests for the general class of testing problems to which it applies. This class includes hypothesis tests on parameters after consistent model selection and tests on super-efficient/hard-thresholding estimators. We detail the construction and performance of the new tests in three specific examples: testing after conservative model selection, testing when a nuisance parameter may be on a boundary and testing after consistent model selection. |
Keywords: | Hypothesis testing, uniform inference, asymptotic size, exact size, power, size-correction, model selection, boundary problems, local asymptotics |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:bro:econwp:2012-16&r=ecm |
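The Bonferroni reasoning mentioned in the abstract can be sketched generically: combine a first-stage confidence set for the localization (nuisance) parameter h with a worst-case quantile over that set. Everything below, the quantile function, the grid standing in for the confidence set, and the toy normal-shift family, is illustrative; the paper's constructions refine this bound to recover power.

```python
import numpy as np
from scipy.stats import norm

def bonferroni_cv(quantile_fn, h_set, alpha, beta):
    """Bonferroni-style critical value: with a level-(1-beta) confidence set
    for h and the (1 - alpha + beta)-quantile of the statistic at each h,
    the rejection probability is at most beta + (alpha - beta) = alpha."""
    return max(quantile_fn(1 - alpha + beta, h) for h in h_set)

# toy family: statistic ~ N(h, 1) under the null, h localized in [0, 2]
q = lambda p, h: norm.ppf(p, loc=h)
h_grid = np.linspace(0.0, 2.0, 41)   # stands in for an estimated confidence set
print(bonferroni_cv(q, h_grid, alpha=0.05, beta=0.01))
```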
By: | Adam McCloskey |
Abstract: | I provide conditions under which the trimmed FDQML estimator, advanced by McCloskey (2010) in the context of fully parametric short-memory models, can be used to estimate the long-memory stochastic volatility model parameters in the presence of additive low-frequency contamination in log-squared returns. The types of low-frequency contamination covered include level shifts as well as deterministic trends. I establish consistency and asymptotic normality in the presence or absence of such low-frequency contamination under certain conditions on the growth rate of the trimming parameter. I also provide theoretical guidance on the choice of trimming parameter by heuristically obtaining its asymptotic MSE-optimal rate under certain types of low-frequency contamination. A simulation study examines the finite sample properties of the robust estimator, showing substantial gains from its use in the presence of level shifts. The finite sample analysis also explores how different levels of trimming affect the parameter estimates in the presence and absence of low-frequency contamination and long-memory. |
Keywords: | stochastic volatility, frequency domain estimation, robust estimation, spurious persistence, long-memory, level shifts, structural change, deterministic trends |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:bro:econwp:2012-17&r=ecm |
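For orientation, the trimmed frequency-domain quasi-likelihood can be written down for the simplest fractional model, ARFIMA(0,d,0), whose spectrum is proportional to |2 sin(lambda/2)|^(-2d). This sketch shows only the core mechanic; the paper's estimator targets the full long-memory stochastic volatility specification, and the trimming l below is illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def trimmed_whittle_d(x, m, l):
    """Trimmed Whittle/FDQML estimate of d for ARFIMA(0,d,0).  Frequencies
    j <= l are discarded to guard against low-frequency contamination
    (level shifts, deterministic trends)."""
    n = len(x)
    x = x - x.mean()
    I = np.abs(np.fft.fft(x))**2 / (2 * np.pi * n)
    j = np.arange(l + 1, m + 1)
    g = (2 * np.sin(np.pi * j / n))**2       # |2 sin(lambda_j / 2)|^2

    def objective(d):
        G = np.mean(I[j] * g**d)             # scale parameter concentrated out
        return np.log(G) - d * np.mean(np.log(g))

    return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(2)
x = rng.standard_normal(2048)
x[1024:] += 2.0                              # level shift that biases untrimmed estimates
print(trimmed_whittle_d(x, m=int(2048**0.65), l=10))
```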
By: | Shinichiro Shirota (Graduate School of Economics, University of Tokyo); Takayuki Hizu (Mitsubishi UFJ Trust and Banking); Yasuhiro Omori (Faculty of Economics, University of Tokyo) |
Abstract: | The daily return and the realized volatility are simultaneously modeled in the stochastic volatility model with leverage and long memory. The dependent variable in the stochastic volatility model is the logarithm of the squared return, and its error distribution is approximated by a mixture of normals. In addition, we incorporate the logarithm of the realized volatility into the measurement equation, assuming that the latent log volatility follows an Autoregressive Fractionally Integrated Moving Average (ARFIMA) process to describe its long memory property. Using a state space representation, we propose an efficient Bayesian estimation method implemented using Markov chain Monte Carlo (MCMC). Model comparisons are performed based on the marginal likelihood, and the volatility forecasting performances are investigated using S&P500 stock index returns. |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2012cf869&r=ecm |
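In equations, the state-space structure described above can be sketched as follows; the notation, and in particular the bias term xi in the realized-volatility equation, is illustrative rather than the paper's exact specification.

```latex
\begin{align*}
\log r_t^2 &= h_t + z_t, && z_t = \log \varepsilon_t^2,\quad \varepsilon_t \sim N(0,1),\\
\log RV_t  &= \xi + h_t + u_t, && u_t \sim N(0,\sigma_u^2),\\
(1-L)^d \phi(L)(h_t - \mu) &= \theta(L)\,\eta_t, && \eta_t \sim N(0,\sigma_\eta^2).
\end{align*}
```

The log chi-squared error z_t is what the mixture-of-normals approximation targets: replacing its density by a finite normal mixture makes the system conditionally Gaussian, so standard state-space MCMC machinery applies.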
By: | Nikolay Gospodinov; Raymond Kan; Cesare Robotti |
Abstract: | We derive new results on the asymptotic behavior of the estimated parameters of a linear asset pricing model and their associated t-statistics in the presence of a factor that is independent of the returns. The inclusion of this "useless" factor in the model leads to a violation of the full rank (identification) condition and renders the inference nonstandard. We show that the estimated parameter associated with the useless factor diverges with the sample size but the misspecification-robust t-statistic is still well-behaved and has a standard normal limiting distribution. The asymptotic distributions of the estimates of the remaining parameters and the model specification test are also affected by the presence of a useless factor and are nonstandard. We propose a robust and easy-to-implement model selection procedure that restores the standard inference on the parameters of interest by identifying and removing the factors that do not contribute to improved pricing. The finite-sample properties of our asymptotic approximations and the practical relevance of our results are illustrated using simulations and an empirical application. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:2012-17&r=ecm |
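A toy two-pass simulation makes the useless-factor setup concrete. This is a plain Fama-MacBeth style exercise with illustrative numbers, not the paper's misspecification-robust framework; its point is only that the betas on an independent factor cluster near zero, so the cross-sectional slope on them is erratic.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 600, 25
returns = rng.standard_normal((T, N)) * 0.05   # asset returns
useless = rng.standard_normal(T)               # factor independent of all returns

# pass 1: time-series betas of each asset on the useless factor
betas = np.array([np.polyfit(useless, returns[:, i], 1)[0] for i in range(N)])
# pass 2: cross-sectional regression of mean returns on those betas
lam = np.polyfit(betas, returns.mean(axis=0), 1)[0]
print("betas concentrate near zero:", np.round(np.abs(betas).max(), 3))
print("estimated premium on the useless factor:", lam)
```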
By: | Biørn, Erik (Dept. of Economics, University of Oslo) |
Abstract: | A system of seemingly unrelated regression equations (SURE) for analyzing unbalanced panel data with random heterogeneity in intercepts and coefficients is considered. A Maximum Likelihood (ML) procedure for joint estimation of all parameters is described. Since its implementation for numerical computation is complicated, simplified procedures are presented. The simplifications essentially concern the estimation of the covariance matrices of the random coefficients. The application and ‘anatomy’ of the proposed algorithm for modified ML estimation are illustrated using panel data on output, inputs and costs for 111 manufacturing firms observed for up to 22 years. |
Keywords: | Panel Data; Unbalanced data; Random Coefficients; Heterogeneity; Regression Systems; Iterated Maximum Likelihood |
JEL: | C33 C51 C63 D24 |
Date: | 2012–08–28 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2012_022&r=ecm |
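As a familiar single-equation reference point for the covariance-matrix step the abstract mentions, here is a sketch of Swamy-style two-step random-coefficient estimation: per-unit OLS, a moment estimate of the coefficient dispersion, then precision-weighted averaging. It is not Biørn's SURE/ML algorithm, and the positive-definiteness fix-up is a crude placeholder.

```python
import numpy as np

def swamy_random_coefficients(Xs, ys):
    """Two-step random-coefficient estimation for y_i = X_i b_i + e_i,
    b_i = b + u_i: (1) per-unit OLS; (2) estimate Cov(u_i) by the dispersion
    of unit coefficients minus average sampling noise; (3) GLS-weighted mean."""
    bs, Vs = [], []
    for X, y in zip(Xs, ys):
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y
        resid = y - X @ b
        s2 = resid @ resid / (len(y) - X.shape[1])
        bs.append(b)
        Vs.append(s2 * XtX_inv)
    bs = np.array(bs)
    Delta = np.cov(bs.T) - sum(Vs) / len(Vs)        # Cov(u_i) estimate
    if np.any(np.linalg.eigvalsh(Delta) <= 0):      # crude positive-definite fix-up
        Delta = np.cov(bs.T)
    W = [np.linalg.inv(Delta + V) for V in Vs]      # precision weights
    b_mean = np.linalg.solve(sum(W), sum(Wi @ bi for Wi, bi in zip(W, bs)))
    return b_mean, Delta
```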
By: | Søren Johansen (University of Copenhagen and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES) |
Abstract: | We consider the nonstationary fractional model Δ^{d}X_{t}=ε_{t} with ε_{t} i.i.d.(0,σ²) and d>1/2. We derive an analytical expression for the main term of the asymptotic bias of the maximum likelihood estimator of d conditional on initial values, and we discuss the role of the initial values for the bias. The results are partially extended to other fractional models, and three different applications of the theoretical results are given. |
Keywords: | Asymptotic expansion, bias, conditional inference, fractional integration, initial values, likelihood inference |
JEL: | C22 |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1300&r=ecm |
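The operator in the model Δ^d X_t = ε_t has a concrete binomial expansion, and truncating it at the sample start is one common way of conditioning on (zero) initial values. A minimal sketch; the round-trip check at the end only verifies the mechanics.

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1 - L)^d via its binomial expansion, truncated at the sample
    start: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([pi[:t + 1] @ x[t::-1] for t in range(n)])

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)
X = frac_diff(eps, -0.8)                       # fractionally integrate: d = 0.8 > 1/2
print(np.abs(frac_diff(X, 0.8) - eps).max())   # ~ 0: differencing inverts it
```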
By: | Stefano Favaro (University of Turin and Collegio Carlo Alberto); Antonio Lijoi (Department of Economics and Management, University of Pavia and Collegio Carlo Alberto); Igor Prünster (University of Turin and Collegio Carlo Alberto) |
Abstract: | Species sampling problems have a long history in ecological and biological studies, where a number of issues need to be addressed, including the evaluation of species richness, the design of sampling experiments and the estimation of rare species variety. Such inferential problems have recently emerged also in genomic applications, but with some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes), of which only a small portion has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows us to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, the focus is on prediction of a key aspect of the outcome of an additional sample of size m, namely the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n + m + 1)-th observation, species that have been observed with any given frequency in the enlarged sample of size n + m. Such an estimator admits a closed-form expression that can be evaluated exactly. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries, and the results are illustrated by means of two Expressed Sequence Tags datasets. |
Keywords: | Bayesian nonparametrics; Gibbs–type priors; Rare species discovery; Species sampling models; Two–parameter Poisson–Dirichlet process. |
Date: | 2012–10 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0007&r=ecm |
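Under the two-parameter Poisson-Dirichlet (Pitman-Yor) prior named in the keywords, the one-step-ahead version of the discovery probability has a well-known closed form; a sketch with illustrative data and hyperparameters (the paper's contribution is the far less trivial m-step-ahead case).

```python
def py_predictive(counts, sigma, theta):
    """One-step-ahead predictive probabilities under a Pitman-Yor prior,
    given species frequencies from a basic sample of size n with k distinct
    species: a new species is discovered with prob (theta + k*sigma)/(theta + n);
    a species already seen n_i times recurs with prob (n_i - sigma)/(theta + n)."""
    n, k = sum(counts), len(counts)
    p_new = (theta + k * sigma) / (theta + n)
    p_seen = [(c - sigma) / (theta + n) for c in counts]
    return p_new, p_seen

# e.g. 10 species observed with these frequencies in a sample of n = 30 genes
counts = [10, 5, 4, 3, 2, 2, 1, 1, 1, 1]
p_new, _ = py_predictive(counts, sigma=0.5, theta=10.0)
print("discovery probability at draw n + 1:", p_new)
```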
By: | Stefano Favaro (University of Turin and Collegio Carlo Alberto); Antonio Lijoi (Department of Economics and Management, University of Pavia and Collegio Carlo Alberto); Igor Prünster (University of Turin and Collegio Carlo Alberto) |
Abstract: | Random probability measures are the main tool for Bayesian nonparametric inference, with their laws acting as prior distributions. Many well–known priors used in practice admit different, though (in distribution) equivalent, representations. Some of these are convenient if one wishes to thoroughly analyze the theoretical properties of the priors being used, others are more useful for modeling dependence and for addressing computational issues. As for the latter purpose, so–called stick–breaking constructions certainly stand out. In this paper we focus on the recently introduced normalized inverse Gaussian process and provide a completely explicit stick–breaking representation for it. Such a new result is of interest both from a theoretical viewpoint and for statistical practice. |
Keywords: | Bayesian Nonparametrics; Dirichlet process; Normalized Inverse Gaussian process; Random Probability Measures; Stick–breaking representation. |
Date: | 2012–10 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0008&r=ecm |
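For orientation, Sethuraman's textbook stick-breaking construction for the Dirichlet process is sketched below; the paper's result is the analogous, more involved representation for the normalized inverse Gaussian process, which is not reproduced here.

```python
import numpy as np

def dp_stick_breaking(theta, n_atoms, rng):
    """Sethuraman's stick-breaking weights for a Dirichlet process DP(theta):
    V_i ~ Beta(1, theta), w_i = V_i * prod_{j < i} (1 - V_j)."""
    V = rng.beta(1.0, theta, size=n_atoms)
    return V * np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))

rng = np.random.default_rng(0)
w = dp_stick_breaking(theta=2.0, n_atoms=1000, rng=rng)
print(w.sum())   # close to 1: almost all mass captured by 1000 atoms
```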
By: | Eduardo Rossi (Department of Economics and Management, University of Pavia); Dean Fantazzini (Moscow School of Economics, M.V. Lomonosov Moscow State University) |
Abstract: | Intraday return volatilities are characterized by the contemporaneous presence of periodicity and long memory. This paper proposes two new parameterizations of intraday volatility, the Fractionally Integrated Periodic EGARCH and the Seasonal Fractionally Integrated Periodic EGARCH, which provide the flexibility required to account for both features. The periodic kurtosis and periodic autocorrelations of power transformations of the absolute returns are computed for both models. The empirical application shows that the volatility of hourly E-mini S&P 500 futures returns is characterized by a periodic leverage effect coupled with statistically significant long-range dependence. An out-of-sample forecasting comparison with alternative models shows that a constrained version of the FI-PEGARCH provides superior forecasts. A simulation experiment is carried out to investigate the effect of the sampling frequency on the fractional differencing parameter estimate. |
Keywords: | Intraday volatility, Long memory, FI-PEGARCH, SFI-PEGARCH, Periodic models. |
JEL: | C22 C58 G13 |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:pav:demwpp:015&r=ecm |
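As a baseline, the plain EGARCH(1,1) recursion that both proposed models extend (with periodic coefficients and fractional integration) can be sketched as follows; parameter values are illustrative.

```python
import numpy as np

def simulate_egarch(n, omega=-0.1, beta=0.95, alpha=0.1, gamma=-0.08, seed=0):
    """EGARCH(1,1): log sig2_t = omega + beta * log sig2_{t-1}
                               + alpha * (|z_{t-1}| - E|z|) + gamma * z_{t-1}.
    gamma < 0 produces the leverage effect: negative shocks raise volatility."""
    rng = np.random.default_rng(seed)
    Ez = np.sqrt(2.0 / np.pi)                 # E|z| for standard normal z
    r = np.empty(n)
    log_s2 = omega / (1.0 - beta)             # start at the stationary mean
    z_prev = 0.0
    for t in range(n):
        log_s2 = omega + beta * log_s2 + alpha * (abs(z_prev) - Ez) + gamma * z_prev
        z_prev = rng.standard_normal()
        r[t] = np.exp(0.5 * log_s2) * z_prev
    return r

returns = simulate_egarch(5000)
print(returns.std())
```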
By: | Siegfried Hörmann; Lukasz Kidzinski; Marc Hallin |
Abstract: | In this paper, we address the problem of dimension reduction for sequentially observed functional data (X_k : k ∈ Z). Such functional time series arise frequently, e.g., when a continuous-time process is segmented into smaller natural units, such as days. Then each X_k represents one intraday curve. We argue that functional principal component analysis (FPCA), though a key technique in the field and a benchmark for any competitor, does not provide an adequate dimension reduction in a time series setting. FPCA is a static procedure which ignores valuable information in the serial dependence of the functional data. Therefore, inspired by Brillinger's theory of dynamic principal components, we propose a dynamic version of FPCA which is based on a frequency domain approach. By means of a simulation study and an empirical illustration, we show the considerable improvement our method entails when compared to the usual (static) procedure. While the main part of the article outlines the ideas and the implementation of dynamic FPCA for functional X_k, we provide in the appendices a rigorous theory for general Hilbertian data. |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/131191&r=ecm |
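On a discretized grid, the static FPCA the authors benchmark against reduces to an SVD of the centered day-by-gridpoint matrix; a minimal sketch. The paper's dynamic version additionally filters the scores across days in the frequency domain, which is not attempted here.

```python
import numpy as np

def static_fpca(curves, n_components):
    """Static FPCA on discretized daily curves: rows are days, columns are
    intraday grid points.  The eigenfunctions of the sample covariance are
    the right singular vectors of the centered data matrix."""
    centered = curves - curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ Vt[:n_components].T          # FPC scores per day
    var_share = (s[:n_components]**2) / (s**2).sum() # explained variance
    return Vt[:n_components], scores, var_share

rng = np.random.default_rng(0)
curves = rng.standard_normal((250, 78))   # 250 days of 78 intraday points
efuncs, scores, share = static_fpca(curves, n_components=3)
print(share)
```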
By: | Christophe Ley; Anouk Neven |
Keywords: | Gamma function; Gaussian Distribution; Normalizing Constant; Student t distribution; tail weight |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/131185&r=ecm |
By: | J. N. Lye and J. G. Hirschberg |
Abstract: | In this paper we demonstrate the construction of inverse test confidence intervals for the turning points in estimated nonlinear relationships by the use of the marginal or first derivative function. First, we outline the inverse test confidence interval approach. Then we examine the relationship between the inverse test intervals and the traditional Wald-based confidence intervals for the turning points of cubic, quartic and fractional polynomials estimated via regression analysis. We show that the confidence interval plots of the marginal function can be used to estimate confidence intervals for the turning points that are equivalent to the inverse test. We also provide a method for interpreting the confidence intervals of the second derivative function to draw inferences about the characteristics of the turning point. This method is applied to the turning points found when estimating a quartic and a fractional polynomial from data used for the estimation of an Environmental Kuznets Curve. The Stata do files used to generate these examples are listed in the appendix along with the data. |
Keywords: | Inverse Test Confidence Intervals, Likelihood Profile, Quartic, |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:1160&r=ecm |
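The inverse-test idea can be sketched directly: the confidence set for a turning point is the set of x at which the t-test of H0: f'(x) = 0 does not reject. The cubic specification, the data-generating process and all numbers below are illustrative (the paper works through Stata do files).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2 + 1.5 * x - 0.25 * x**2 + rng.standard_normal(200)   # true turning point at x = 3
X = np.column_stack([np.ones_like(x), x, x**2, x**3])       # cubic regression
b = np.linalg.lstsq(X, y, rcond=None)[0]
s2 = ((y - X @ b)**2).sum() / (len(y) - 4)
V = s2 * np.linalg.inv(X.T @ X)                             # Cov(b-hat)

grid = np.linspace(0, 10, 1001)
G = np.column_stack([np.zeros_like(grid), np.ones_like(grid),
                     2 * grid, 3 * grid**2])                # gradient of f'(x) in b
t = (G @ b) / np.sqrt(np.einsum("ij,jk,ik->i", G, V, G))    # t-stat for f'(x) = 0
accepted = grid[np.abs(t) <= 1.96]                          # non-rejection region
print("95% inverse-test interval for the turning point:",   # an interval here
      accepted.min(), "to", accepted.max())
```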
By: | Dekker, T.; Koster, P.R.; Brouwer, R. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:dgr:vuarem:2012-5&r=ecm |
By: | Dimitris Korobilis |
Abstract: | This paper considers Bayesian variable selection in regressions with a large number of possibly highly correlated macroeconomic predictors. I show that acknowledging the correlation structure in the predictors can improve forecasts over those of existing popular Bayesian variable selection algorithms. |
Keywords: | Bayesian semiparametric selection; Dirichlet process prior; correlated predictors; clustered coefficients |
JEL: | C11 C14 C32 C52 C53 |
Date: | 2012–07 |
URL: | http://d.repec.org/n?u=RePEc:gla:glaewp:2012_12&r=ecm |
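To fix ideas, here is a conventional conjugate benchmark of the kind the paper improves upon: exhaustive model enumeration under a Zellner g-prior, using the closed-form marginal likelihood of Liang et al. (2008). The paper's semiparametric approach (Dirichlet process clustering of coefficients over correlated predictors) is not reproduced here.

```python
import numpy as np
from itertools import combinations

def g_prior_model_probs(X, y, g=None):
    """Score every subset of a small predictor set with the Zellner g-prior
    marginal likelihood: m(y|M) prop. to
    (1+g)^((n-1-k_M)/2) * (1 + g*(1 - R2_M))^(-(n-1)/2)."""
    n, p = X.shape
    g = float(n) if g is None else g
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    tss = yc @ yc
    models, logml = [], []
    for k in range(p + 1):
        for M in combinations(range(p), k):
            if k == 0:
                R2 = 0.0
            else:
                XM = Xc[:, list(M)]
                coef = np.linalg.lstsq(XM, yc, rcond=None)[0]
                R2 = 1.0 - ((yc - XM @ coef)**2).sum() / tss
            logml.append(0.5 * (n - 1 - k) * np.log1p(g)
                         - 0.5 * (n - 1) * np.log1p(g * (1.0 - R2)))
            models.append(M)
    logml = np.asarray(logml)
    w = np.exp(logml - logml.max())
    return models, w / w.sum()

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4))
y = X @ np.array([1.0, 0.0, 0.5, 0.0]) + rng.standard_normal(100)
models, probs = g_prior_model_probs(X, y)
print(models[int(np.argmax(probs))])   # most probable subset, likely (0, 2)
```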
By: | Eirini-Christina Saloniki; Amanda Gosling |
Abstract: | This paper addresses the problem of point identification in the presence of measurement error in discrete variables; in particular, it considers the case of two "noisy" indicators of the same latent variable, without any prior information about the true value of the variable of interest. Based on the concept of the fourfold table, and creating a nonlinear system of simultaneous equations from the observed proportions and predicted wages, we examine the assumptions needed to obtain unique solutions for the system. We show that by imposing simple restrictions on the joint misclassification probabilities, it is possible to measure the extent of the misclassification error in the variable in question. The proposed methodology is then used to identify whether people misreport their disability status, using data from the British Household Panel Survey. Our results show that the probability of underreporting disability is greater than the probability of overreporting it. |
Keywords: | measurement error; discrete; misclassification probabilities; identification; disability |
JEL: | C14 C35 J14 J31 |
Date: | 2012–11 |
URL: | http://d.repec.org/n?u=RePEc:ukc:ukcedp:1214&r=ecm |
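A toy version of the fourfold-table argument, stripped of the paper's wage equations: with two binary indicators that are conditionally independent given the latent status and a no-overreporting restriction, prevalence and both underreporting probabilities are exactly identified from the observed cell proportions. All names and numbers are illustrative.

```python
def identify_misclassification(p11, p10, p01):
    """With latent D in {0,1}, prevalence pi = P(D=1), underreporting rates
    a_k = P(I_k=0 | D=1), conditional independence of I_1, I_2 given D and
    no overreporting (P(I_k=1 | D=0) = 0), the observed cells satisfy
      P(I1=1) = pi*(1-a1),  P(I2=1) = pi*(1-a2),
      P(I1=1, I2=1) = pi*(1-a1)*(1-a2),
    which solves in closed form."""
    m1 = p11 + p10          # P(I1 = 1)
    m2 = p11 + p01          # P(I2 = 1)
    pi = m1 * m2 / p11      # latent prevalence
    a1 = 1.0 - m1 / pi      # underreporting prob. of indicator 1
    a2 = 1.0 - m2 / pi      # underreporting prob. of indicator 2
    return pi, a1, a2

# toy check: pi = 0.30, a1 = 0.20, a2 = 0.10 imply these observed cells
pi, a1, a2 = 0.30, 0.20, 0.10
p11 = pi * (1 - a1) * (1 - a2)
p10 = pi * (1 - a1) * a2
p01 = pi * a1 * (1 - a2)
print(identify_misclassification(p11, p10, p01))   # recovers (0.30, 0.20, 0.10)
```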