
on Econometrics 
By:  Rasmus Søndergaard Pedersen (Department of Economics, Copenhagen University) 
Abstract:  As an alternative to quasi-maximum likelihood, targeting estimation is a widely applied estimation method for univariate and multivariate GARCH models. For variance targeting estimation, recent research has pointed out that at least finite fourth-order moments of the data generating process are required if one wants to perform inference in GARCH models relying on asymptotic normality of the estimator; see Pedersen and Rahbek (2014) and Francq et al. (2011). Such moment conditions may not be satisfied in practice for financial returns, highlighting a serious drawback of variance targeting estimation. In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results for the estimator, stating that its limiting distribution is multivariate stable. The rate of consistency of the estimator is slower than square-root T (the rate obtained by the quasi-maximum likelihood estimator) and depends on the tails of the data generating process. 
Keywords:  Targeting; variance targeting; multivariate GARCH; constant conditional correlation; asymptotic theory; time series; multivariate regular variation; stable distributions 
JEL:  C32 C51 C58 
Date:  2014–02 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1404&r=ecm 
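The variance targeting idea discussed in the abstract above can be illustrated in the simplest univariate GARCH(1,1) case: the intercept is tied to the sample variance, so only the ARCH and GARCH coefficients remain free. A minimal sketch (the function name and setup are illustrative, not taken from the paper):

```python
import numpy as np

def vt_garch_negloglik(params, returns):
    """Gaussian quasi-negative-log-likelihood of a GARCH(1,1) under variance
    targeting: omega = sigma2_bar * (1 - alpha - beta), where sigma2_bar is
    the sample variance, leaving only (alpha, beta) to be estimated.
    Requires alpha + beta < 1 so that omega stays positive."""
    alpha, beta = params
    sigma2_bar = np.var(returns)                 # the "targeted" unconditional variance
    omega = sigma2_bar * (1.0 - alpha - beta)
    h = np.empty_like(returns)
    h[0] = sigma2_bar                            # initialize at the unconditional level
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(h) + returns ** 2 / h)
```

Minimizing this criterion over (alpha, beta) alone is what makes targeting attractive in practice; the paper's point is that the resulting estimator can have a non-Gaussian, stable limit when fourth moments are infinite.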
By:  Wei-Ming Lee (Department of Economics, National Chung Cheng University); Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Chung-Ming Kuan (Department of Finance, National Taiwan University) 
Abstract:  We propose a new robust hypothesis test for (possibly nonlinear) constraints on M-estimators with possibly non-differentiable estimating functions. The proposed test employs a random normalizing matrix computed from recursive M-estimators to eliminate the nuisance parameters arising from the asymptotic covariance matrix. It does not require consistent estimation of any nuisance parameters, in contrast with the conventional heteroskedasticity and autocorrelation consistent (HAC)-type test and the KVB-type test of Kiefer, Vogelsang, and Bunzel (2000). Our test reduces to the KVB-type test in simple location models with OLS estimation, so the error in rejection probability of our test in a Gaussian location model is O(T^{-1} log T). We discuss robust testing in quantile regression and censored regression models in detail. In simulation studies, we find that our test has better size control and better finite-sample power than the HAC-type and KVB-type tests. 
Keywords:  censored regression, generalized method of moments, robust hypothesis testing, KVB approach, M-estimator, quantile regression 
JEL:  C12 C22 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:sin:wpaper:14a004&r=ecm 
By:  Seojeong Lee (School of Economics, Australian School of Business, the University of New South Wales) 
Abstract:  I propose a nonparametric iid bootstrap procedure for the empirical likelihood, the exponential tilting, and the exponentially tilted empirical likelihood estimators that achieves sharp asymptotic refinements for t tests and confidence intervals based on such estimators. Furthermore, the proposed bootstrap is robust to model misspecification, i.e., it achieves asymptotic refinements regardless of whether the assumed moment condition model is correctly specified or not. This result is new, because asymptotic refinements of the bootstrap based on these estimators have not been established in the literature even under correct model specification. Monte Carlo experiments are conducted in a dynamic panel data setting to support the theoretical findings. As an application, bootstrap confidence intervals for the returns to schooling of Hellerstein and Imbens (1999) are calculated. The results suggest that the returns to schooling may be higher. 
Keywords:  generalized empirical likelihood, bootstrap, asymptotic refinement, model misspecification 
JEL:  C14 C15 C31 C33 
Date:  2014–01 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:201402&r=ecm 
By:  Urbain J.R.Y.J.; Karabiyik H.; Westerlund J. (GSBE) 
Abstract:  This paper considers estimation of factor-augmented panel data regression models with homogenous slope coefficients. One of the most popular approaches towards this end is the pooled common correlated effects (CCE) estimator of Pesaran (2006). For this estimator to be consistent at the usual sqrt(NT) rate, where N and T denote the number of cross-section and time series observations, respectively, the number of factors cannot be larger than the number of observables. This is a problem in the typical application involving only a small number of regressors. The current paper proposes a simple extension to the CCE procedure by which this requirement can be relaxed. The CCE approach is based on taking the cross-section average of the observables as an estimator of the common factors. The idea put forth in the current paper is to consider not only the average but also other cross-section combinations. The asymptotic properties of the resulting combination-augmented CCE (C3E) estimator are provided and verified in small samples using Monte Carlo simulation. 
Keywords:  Hypothesis Testing: General; Estimation: General; Multiple or Simultaneous Equation Models: Models with Panel Data; Longitudinal Data; Spatial Time Series 
JEL:  C12 C13 C33 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:unm:umagsb:2014007&r=ecm 
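The CCE idea described above — proxying unobserved common factors with cross-section averages of the observables — can be sketched for a one-regressor panel as follows. This is a simplified illustration of the pooled CCE estimator under our own naming and setup, not the paper's C3E extension:

```python
import numpy as np

def cce_pooled(y, x):
    """Pooled CCE estimator for a one-regressor panel. y and x are (T, N)
    arrays; the unobserved common factors are proxied by the cross-section
    averages of y and x in each period, which are projected out before
    pooling the slope estimate across units."""
    T, N = y.shape
    F = np.column_stack([np.ones(T), y.mean(axis=1), x.mean(axis=1)])
    M = np.eye(T) - F @ np.linalg.pinv(F)       # annihilator of the factor proxies
    num = den = 0.0
    for i in range(N):
        xi = M @ x[:, i]
        num += xi @ (M @ y[:, i])
        den += xi @ xi
    return num / den
```

The paper's point is that when the number of factors exceeds the number of such averages, this proxying breaks down, which motivates augmenting with additional cross-section combinations.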
By:  Smeekes S.; Urbain J.R.Y.J. (GSBE) 
Abstract:  In this paper we consider several modified wild bootstrap methods that, in addition to heteroskedasticity, can take dependence into account. The modified wild bootstrap methods are shown to correctly replicate an invariance principle for multivariate time series that are characterized by general forms of unconditional heteroskedasticity, or nonstationary volatility, as well as dependence within and between different elements of the time series. The invariance principle is then applied to derive the asymptotic validity of the wild bootstrap methods for unit root testing in a multivariate setting. The resulting tests, which can also be interpreted as panel unit root tests, are valid under more general assumptions than most current tests used in the literature. A simulation study is performed to evaluate the small sample properties of the bootstrap unit root tests. 
Keywords:  Statistical Simulation Methods: General; Multiple or Simultaneous Equation Models: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models 
JEL:  C15 C32 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:unm:umagsb:2014008&r=ecm 
By:  David Ardia; Lukasz Gatarek; Lennart F. Hoogerheide 
Abstract:  A novel simulation-based methodology is proposed to test the validity of a set of marginal time series models, where the dependence structure between the time series is taken ‘directly’ from the observed data. The procedure is useful when one wants to summarize the test results for several time series in one joint test statistic and p-value. The proposed test method can have higher power than a test for a single univariate time series, especially for short time series. Therefore our test for multiple time series is particularly useful if one wants to assess Value-at-Risk (or Expected Shortfall) predictions over a small time frame (e.g., a crisis period). We apply our method to test GARCH model specifications for a large panel data set of stock returns. 
Keywords:  Bootstrap test, GARCH, Marginal models, Multiple time series, Value-at-Risk 
JEL:  C1 C12 C22 C44 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:1413&r=ecm 
By:  Aurea Grané; Belén MartínBarragán; Helena Veiga 
Abstract:  Outliers of moderate magnitude cause large changes in financial time series of prices and returns and affect both the estimation of parameters and volatilities when fitting a GARCH-type model. The multivariate setting is still to be studied, but similar biases and impacts on correlation dynamics are believed to exist. The accurate estimation of the correlation structure is crucial in many applications, such as portfolio allocation and risk management. This paper focuses on these issues by studying the impact of additive outliers (isolated and patches of level outliers and volatility outliers) on the estimation of correlations when fitting well-known multivariate GARCH models and by proposing a general detection algorithm based on wavelets that can be applied to a large class of multivariate volatility models. This procedure can also be interpreted as a model misspecification test, since it is based on residual diagnostics. The effectiveness of the new proposal is evaluated by an intensive Monte Carlo study before it is applied to daily stock market indices. The simulation studies show that correlations are highly affected by the presence of outliers and that the new method is both effective and reliable, since it detects very few false outliers. 
Keywords:  Additive Outliers, Correlations, Volatilities, Wavelets 
JEL:  C10 C13 C53 C58 G17 
Date:  2014–02 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws140503&r=ecm 
By:  Reijer, Ard H.J. de; Jacobs, Jan P.A.M.; Otter, Pieter W. (Groningen University) 
Abstract:  This paper derives a new criterion for the determination of the number of factors in static approximate factor models that is strongly associated with the scree test. Our criterion looks for the number of eigenvalues for which the difference between adjacent eigenvalue/component-number blocks is maximized. Monte Carlo experiments compare the properties of our criterion to the Edge Distribution (ED) estimator of Onatski (2010) and the two eigenvalue ratio estimators of Ahn and Horenstein (2013). Our criterion outperforms the latter two for all sample sizes and the ED estimator of Onatski (2010) for samples of up to 300 variables/observations. 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:dgr:rugsom:14008eef&r=ecm 
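A scree-type rule of the kind the abstract associates itself with can be sketched very simply: pick the number of factors at the largest gap between adjacent eigenvalues of the sample covariance matrix. This is a simplified illustration in the spirit of the criterion, not the authors' exact statistic:

```python
import numpy as np

def n_factors_eigendiff(X, kmax=8):
    """Estimate the number of factors in a (T, N) data matrix X as the
    index maximizing the difference between adjacent eigenvalues of the
    sample covariance matrix (a simplified scree-type rule)."""
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending order
    gaps = eig[:kmax] - eig[1:kmax + 1]                      # adjacent differences
    return int(np.argmax(gaps)) + 1
```

With strong factors and weak idiosyncratic noise, the gap between the last factor eigenvalue and the first noise eigenvalue dominates, so the rule recovers the true number of factors.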
By:  Philipp Arbenz; Mathieu Cambou; Marius Hofert 
Abstract:  An importance sampling algorithm for copula models is introduced. The method improves Monte Carlo estimators when the functional of interest depends mainly on the behaviour of the underlying random vector when at least one of the components is large. Such problems often arise from dependence models in finance and insurance. The importance sampling framework we propose is general and can be easily implemented for all classes of copula models from which sampling is feasible. We show how the proposal distribution can be optimized to reduce the sampling error. In a case study inspired by a typical multivariate insurance application, we obtain variance reduction factors between 10 and 20 in comparison to standard Monte Carlo estimators. 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1403.4291&r=ecm 
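The generic mechanism behind the algorithm described above — shifting the sampling distribution toward the region that drives the functional and reweighting by the likelihood ratio — can be sketched in a plain Gaussian setting. This is a textbook illustration of importance sampling, not the authors' copula-specific algorithm:

```python
import numpy as np

def is_tail_prob(threshold, shift, n=100_000, rng=None):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by importance sampling:
    draw from the shifted proposal N(shift, 1) and reweight each draw by the
    likelihood ratio phi(z) / phi(z - shift) = exp(-shift*z + shift**2/2),
    which concentrates samples in the tail that drives the estimate."""
    rng = rng or np.random.default_rng(0)
    z = rng.normal(shift, 1.0, n)
    w = np.exp(-shift * z + 0.5 * shift ** 2)   # likelihood ratio weights
    return np.mean((z > threshold) * w)
```

For a rare event such as P(Z > 4), plain Monte Carlo with the same budget would see only a handful of exceedances, while the shifted proposal yields a stable estimate — the same variance-reduction logic the paper applies to copula samples with one large component.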
By:  Tao Chen (Department of Economics, University of Waterloo, Ontario, Canada); Gautam Tripathi (CREA, Université de Luxembourg) 
Abstract:  We propose a "weighted and sample-size adjusted" Kolmogorov–Smirnov type statistic to test the assumption of conditional symmetry maintained in the symmetrically trimmed least-squares (STLS) approach of Powell (1986b), which is widely used to estimate censored or truncated regression models without making distributional assumptions. The statistic proposed here is consistent and computationally easy to implement because, unlike traditional Kolmogorov–Smirnov statistics, it is not optimized over an uncountable set. Moreover, it does not require any nonparametric smoothing, although we test the validity of a conditional feature. We also propose a bootstrap procedure to obtain the p-values and critical values that are required to carry out the test in practical applications. Results from a simulation study suggest that our test can work very well even in small to moderately sized samples. As an empirical illustration, we apply our test to two datasets that have been used in the literature to estimate censored regression models using Powell's STLS approach, to check whether the assumption of conditional symmetry is supported by these datasets. 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:luc:wpaper:1404&r=ecm 
By:  Irma Hindrayanto; Jan Jacobs; Denise Osborn 
Abstract:  Traditional unobserved component models assume that the trend, cycle and seasonal components of an individual time series evolve separately over time. Although this assumption has been relaxed in recent papers that focus on trend-cycle interactions, it remains at the core of all seasonal adjustment methods applied by official statistical agencies around the world. The present paper develops an unobserved components model that permits nonzero correlations between seasonal and nonseasonal shocks, hence allowing testing of the uncorrelatedness assumption that is traditionally imposed. Identification conditions for estimation of the parameters are discussed, while applications to observed time series illustrate the model and its implications for seasonal adjustment. 
Keywords:  trend-cycle-seasonal decomposition; unobserved components; state-space models; seasonal adjustment; global real economic activity; unemployment 
JEL:  C22 E24 E32 E37 F01 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:dnb:dnbwpp:417&r=ecm 
By:  Tomasz Skoczylas (Faculty of Economic Sciences, University of Warsaw) 
Abstract:  In this paper a new ARCH-type volatility model is proposed. The Range-based Heterogeneous Autoregressive Conditional Heteroskedasticity (RHARCH) model draws inspiration from the Heterogeneous Autoregressive Conditional Heteroskedasticity model presented by Müller et al. (1995), but employs more efficient, range-based volatility estimators instead of simple squared returns in the conditional variance equation. In the first part of this research, range-based volatility estimators (such as the Parkinson or Garman–Klass estimators) are reviewed, followed by a derivation of the RHARCH model. In the second part, the RHARCH model is compared with selected ARCH-type models, with particular emphasis on forecasting accuracy. All models are estimated using data on EUR/PLN spot rate quotations. Results show that the RHARCH model often outperforms return-based models in terms of predictive ability in both in-sample and out-of-sample periods. The properties of the standardized residuals are also very encouraging in the case of the RHARCH model. 
Keywords:  volatility modelling, volatility forecasting, ARCH, range-based volatility estimators, heterogeneity of volatility 
JEL:  C13 C22 C53 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:war:wpaper:201406&r=ecm 
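The two range-based estimators named in the abstract above have simple closed forms. A minimal sketch of per-period variance estimates from open/high/low/close data (standard formulas, independent of the RHARCH model itself):

```python
import numpy as np

def parkinson(high, low):
    """Parkinson range-based variance estimator for one period:
    (ln(H/L))^2 / (4 ln 2)."""
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))

def garman_klass(open_, high, low, close):
    """Garman-Klass variance estimator, combining the high-low range
    with the open-close return:
    0.5*(ln(H/L))^2 - (2 ln 2 - 1)*(ln(C/O))^2."""
    hl = np.log(high / low)
    co = np.log(close / open_)
    return 0.5 * hl ** 2 - (2.0 * np.log(2.0) - 1.0) * co ** 2
```

Because the intraday range carries more information about volatility than a single squared return, these estimators are considerably more efficient, which is what the RHARCH model exploits in its conditional variance equation.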
By:  Yamin Ahmad (Department of Economics, University of Wisconsin  Whitewater); Luiggi Donayre (Department of Economics, University of Minnesota  Duluth) 
Abstract:  We conduct Monte Carlo simulations to investigate the effects of outlier observations on the properties of linearity tests against threshold autoregressive (TAR) processes. By considering different specifications and levels of persistence of the data generating processes, we find that outliers distort the size of the test and that the distortion increases with the level of persistence. However, contrary to what one might expect, we also find that larger outliers could help improve the power of the test in the case of persistent TAR processes. 
Keywords:  Outliers, Persistence, Monte Carlo Simulations, Threshold Autoregression, Size, Power 
JEL:  C15 C22 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:uww:wpaper:1402&r=ecm 
By:  Westerlund, Joakim (Deakin University, Australia); Reese, Simon (Department of Economics, Lund University) 
Abstract:  The use of factor-augmented panel regressions has become very popular in recent years. Existing methods for such regressions require that the common factors are strong, such that their cumulative loadings rise proportionally to the number of cross-sectional units, which of course need not be the case in practice. Motivated by this, the current paper offers an in-depth analysis of the effect of non-strong factors on two of the most popular estimators for factor-augmented regressions, namely principal components (PC) and common correlated effects (CCE). Conditions for consistency and asymptotic normality are established, which are shown to depend critically on a so far overlooked relationship between factor strength and the relative expansion rate of the dimensions of the panel. 
Keywords:  Non-strong common factors; factor-augmented panel regressions; common factor models 
JEL:  C12 C13 C33 
Date:  2014–02–13 
URL:  http://d.repec.org/n?u=RePEc:hhs:lunewp:2014_008&r=ecm 
By:  Yamin Ahmad (Department of Economics, University of Wisconsin  Whitewater); Ivan Paya (Department of Economics, Lancaster University Management School) 
Abstract:  This paper examines the impact of time averaging and interval sampling of data, assuming that the data generating process for a given series follows a random walk with uncorrelated increments. We provide expressions for the corresponding variances and covariances for both the levels and differences of the aggregated series, demonstrating how the degree of temporal aggregation impacts these particular properties. Moreover, we analytically derive any differences that arise between the aggregated series and its disaggregated counterpart, and show that they can be decomposed into a distortionary effect and a small-sample effect. We also provide exact expressions for the variance ratios, Sharpe ratios, and correlation coefficients for any level of aggregation. We discuss our results in the context of asset prices, where these measures have been used extensively. 
Keywords:  Temporal Aggregation, Random Walk, Variance Ratio, Sharpe Ratio 
JEL:  F47 C15 C32 
Date:  2014–01 
URL:  http://d.repec.org/n?u=RePEc:uww:wpaper:1401&r=ecm 
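The contrast between interval sampling and time averaging described above is easy to verify by simulation. For m-period aggregation of a random walk with i.i.d. increments of variance sigma^2, the differences of the sampled series have variance m*sigma^2, while the differences of the averaged series have variance sigma^2*(2m^2 + 1)/(3m) (the classical result attributed to Working, 1960). A sketch under these assumptions:

```python
import numpy as np

def aggregate_random_walk(sigma=1.0, m=5, periods=20000, rng=None):
    """Simulate a random walk and aggregate it over m-period blocks by
    (i) interval (end-of-block) sampling and (ii) within-block time
    averaging; return the variances of the first differences of each
    aggregated series."""
    rng = rng or np.random.default_rng(0)
    x = np.cumsum(rng.normal(0.0, sigma, m * periods))
    blocks = x.reshape(periods, m)
    sampled = blocks[:, -1]           # level at the end of each block
    averaged = blocks.mean(axis=1)    # average level within each block
    return np.var(np.diff(sampled)), np.var(np.diff(averaged))
```

For m = 5 and sigma = 1 the two theoretical values are 5 and 51/15 = 3.4, so averaging visibly dampens the variance of differences — one concrete instance of the distortion the paper characterizes analytically.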
By:  Frühwirth-Schnatter, Sylvia (Vienna University of Economics and Business); Halla, Martin (University of Linz); Posekany, Alexandra (Vienna University of Economics and Business); Pruckner, Gerald J. (University of Linz); Schober, Thomas (University of Linz) 
Abstract:  Prior empirical research on the theoretically proposed interaction between the quantity and the quality of children builds on exogenous variation in family size due to twin births and focuses on human capital outcomes. The typical finding can be described as a statistically non-significant two-stage least squares (2SLS) estimate with substantial standard errors. We regard these conclusions of no empirical support for the quantity-quality trade-off as premature and therefore extend the empirical approach in two ways. First, we add health as an additional outcome dimension. Second, we apply a semiparametric Bayesian IV approach for econometric inference. Our estimation results substantiate the finding of a zero effect: we provide estimates with precision increased by a factor of approximately twenty-three, for a broader set of outcomes. 
Keywords:  quantity-quality model of fertility, family size, human capital, health, semiparametric Bayesian IV approach 
JEL:  J13 C26 C11 I20 J20 I10 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp8024&r=ecm 
By:  Giuseppe Arbia 
Abstract:  This article proposes a new method for the estimation of the parameters of a simple linear regression model which accounts for the role of co-moments in non-Gaussian distributions, being based on the minimization of a quartic loss function. Although the proposed method is very general, we examine its application to finance. In fact, in this field the contribution of the co-moments in explaining the return-generating process is of paramount importance when evaluating the systematic risk of an asset within the framework of the Capital Asset Pricing Model (CAPM). The suggested new method contributes to this literature by showing that, in the presence of non-normality, the regression slope can be expressed as a function of the cokurtosis between the returns of a risky asset and the market proxy. The paper provides an illustration of the method based on empirical financial data for 40 industrial-sector assets of the Italian stock market. 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1403.4171&r=ecm 
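The quartic-loss idea above can be sketched in the simplest through-the-origin case: minimizing sum((y - b*x)^4) gives a first-order condition that is a cubic in b, which can be solved exactly. This is our own simplified illustration, not the paper's estimator:

```python
import numpy as np

def quartic_slope(x, y):
    """Slope b of y = b*x minimizing the quartic loss sum((y - b*x)**4).
    The first-order condition sum(x * (y - b*x)**3) = 0 is a cubic in b:
    -S4*b^3 + 3*S31*b^2 - 3*S22*b + S13 = 0, with S4 = sum(x^4),
    S31 = sum(x^3 y), S22 = sum(x^2 y^2), S13 = sum(x y^3)."""
    coeffs = [-np.sum(x ** 4),
              3.0 * np.sum(x ** 3 * y),
              -3.0 * np.sum(x ** 2 * y ** 2),
              np.sum(x * y ** 3)]
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-8]   # keep real roots only
    losses = [np.sum((y - b * x) ** 4) for b in real]
    return real[int(np.argmin(losses))]
```

Relative to least squares, the quartic criterion weights large residuals much more heavily, which is how higher co-moments such as cokurtosis enter the slope.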
By:  K. Sudhir (Cowles Foundation and Yale School of Management); Nathan Yang (Yale School of Management) 
Abstract:  This paper offers a new identification strategy for disentangling structural state dependence from unobserved heterogeneity in preferences. Our strategy exploits market environments where there is a choice-consumption mismatch. We first demonstrate the effectiveness of our identification strategy in obtaining unbiased state dependence estimates via Monte Carlo analysis and highlight its superiority relative to the extant choice-set variation based approach. In an empirical application that uses data on repeat transactions from the car rental industry, we find evidence of structural state dependence, but show that state dependence effects may be overstated without exploiting the choice-consumption mismatches that materialize through free upgrades. 
Keywords:  Consumer dynamics, Heterogeneity, Quasi-experiment econometrics, Service industry, State dependence 
JEL:  C1 C5 L00 L80 M2 M3 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1941&r=ecm 
By:  Guido W. Imbens 
Abstract:  I review recent work in the statistics literature on instrumental variables methods from an econometrics perspective. I discuss some of the older economic applications, including supply and demand models, and relate them to recent applications in settings of randomized experiments with noncompliance. I discuss the assumptions underlying instrumental variables methods and the settings in which these may be plausible. By providing context for the current applications, a better understanding of the applicability of these methods may arise. 
JEL:  C01 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:19983&r=ecm 