
New Economics Papers on Econometrics 
By:  Xiaohong Chen (Cowles Foundation, Yale University); Zhipeng Liao (Dept. of Economics, UC Los Angeles); Yixiao Sun (Dept. of Economics, UC San Diego) 
Abstract:  The method of sieves has been widely used in estimating semiparametric and nonparametric models. In this paper, we first provide a general theory on the asymptotic normality of plug-in sieve M estimators of possibly irregular functionals of semi/nonparametric time series models. Next, we establish a surprising result that the asymptotic variances of plug-in sieve M estimators of irregular (i.e., slower than root-T estimable) functionals do not depend on temporal dependence. Nevertheless, ignoring the temporal dependence in small samples may not lead to accurate inference. We then propose an easy-to-compute and more accurate inference procedure based on a "pre-asymptotic" sieve variance estimator that captures temporal dependence. We construct a "pre-asymptotic" Wald statistic using an orthonormal series long run variance (OS-LRV) estimator. For sieve M estimators of both regular (i.e., root-T estimable) and irregular functionals, a scaled "pre-asymptotic" Wald statistic is asymptotically F distributed when the number of series terms in the OS-LRV estimator is held fixed. Simulations indicate that our scaled "pre-asymptotic" Wald test with F critical values has more accurate size in finite samples than the usual Wald test with chi-square critical values. 
Keywords:  Weak dependence, Sieve M estimation, Sieve Riesz representor, Irregular functional, Misspecification, Pre-asymptotic variance, Orthogonal series long run variance estimation, F distribution 
JEL:  C12 C14 C32 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1849&r=ecm 
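The OS-LRV construction in the abstract projects an estimated score series onto a fixed number K of orthonormal basis functions and averages the squared projections. A minimal scalar sketch, assuming a cosine basis (one common choice in this literature) and a hypothetical function name:

```python
import numpy as np

def os_lrv(v, K=8):
    """Orthonormal-series long-run variance (OS-LRV) estimator of a
    scalar series v, with the number of basis terms K held fixed.
    Sketch only: cosine basis and K=8 default are illustrative choices."""
    T = len(v)
    t = (np.arange(1, T + 1) - 0.5) / T
    # Projection of v onto each of the first K orthonormal cosine functions
    Lam = np.array([np.sqrt(2.0 / T) * np.sum(np.cos(np.pi * j * t) * v)
                    for j in range(1, K + 1)])
    # Average of squared projections estimates the long-run variance
    return np.mean(Lam ** 2)
```

Because each cosine basis function sums exactly to zero over the sample, the estimator is invariant to the mean of the series; holding K fixed as T grows is what produces the F (rather than chi-square) limit for the scaled Wald statistic.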
By:  Wenjie Wang (Graduate School of Economics, Kyoto University) 
Abstract:  A bootstrap method is proposed for the Anderson-Rubin test and the J test for overidentifying restrictions in linear instrumental variable models with many instruments. We show the bootstrap validity of these test statistics when the number of instruments increases at the same rate as the sample size. Moreover, since the bootstrap has been shown in the literature to be valid when the number of instruments is small, the technique is practically robust to the numerosity of the moment conditions. A small-scale Monte Carlo experiment shows that our procedure has outstanding small-sample performance compared with some existing asymptotic procedures. 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:810&r=ecm 
By:  James Mitchell; George Kapetanios; Yongcheol Shin 
Abstract:  This paper proposes a nonlinear panel data model which can generate endogenously both `weak' and `strong' cross-sectional dependence. The model's distinguishing characteristic is that a given agent's behaviour is influenced by an aggregation of the views or actions of those around them. The model allows for considerable flexibility in terms of the genesis of this herding or clustering type behaviour. At an econometric level, the model is shown to nest various extant dynamic panel data models. These include panel AR models, spatial models, which accommodate weak dependence only, and panel models where cross-sectional averages or factors exogenously generate strong, but not weak, cross-sectional dependence. An important implication is that the appropriate model for the aggregate series becomes intrinsically nonlinear, due to the clustering behaviour, and thus requires the disaggregates to be simultaneously considered with the aggregate. We provide the associated asymptotic theory for estimation and inference. This is supplemented with Monte Carlo studies and two empirical applications which indicate the utility of our proposed model as both a structural and reduced form vehicle to model different types of cross-sectional dependence, including evolving clusters. 
Keywords:  Nonlinear Panel Data Model; Clustering; Cross-section Dependence; Factor Models; Monte Carlo Simulations; Application to Stock Returns and Inflation Expectations 
JEL:  C31 C33 C51 E31 G14 
Date:  2012–01 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:12/01&r=ecm 
By:  Dominique Guegan (Centre d'Economie de la Sorbonne); Zhiping Lu (East China Normal University (ECNU)); BeiJia Zhu (Centre d'Economie de la Sorbonne et East China Normal University (ECNU)) 
Abstract:  In this paper, nine memory parameter estimation procedures for the fractionally integrated I(d) process, semiparametric and parametric, which prevail in the existing literature are reviewed; through a simulation study under the ARFIMA(p,d,q) setting we shed light on the finite sample performance of these estimation procedures for nonstationary long memory time series. As a by-product of this study, we provide a bandwidth parameter selection strategy for the frequency domain estimation and an upper-and-lower scale trimming strategy for the wavelet domain estimation from a practical standpoint. The other objective of this paper is to provide a useful reference for applied researchers and practitioners. 
Keywords:  Finite sample performance comparison, Fourier frequency, GPH, nonstationary long memory time series, wavelet. 
JEL:  C12 C15 C22 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:12008&r=ecm 
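One of the semiparametric frequency-domain procedures of the kind reviewed here is the GPH log-periodogram regression, whose bandwidth m (the number of Fourier frequencies used) is exactly the tuning parameter a bandwidth selection strategy addresses. A minimal sketch of the standard GPH regression, with an illustrative default bandwidth of sqrt(n):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak (GPH) log-periodogram estimate of the memory
    parameter d. Sketch: the sqrt(n) default bandwidth is illustrative,
    not a recommendation."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n        # Fourier frequencies
    # Periodogram at the first m nonzero frequencies
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    # Regress log I(lam_j) on -log(4 sin^2(lam_j / 2)); slope estimates d
    X = np.column_stack([np.ones(m), -np.log(4.0 * np.sin(lam / 2.0) ** 2)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]
```

For white noise the estimate should be near d = 0, while strongly persistent (near unit-root) series push it toward d = 1, the nonstationary region studied in the paper.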
By:  Cristina Amado (Universidade do Minho - NIPE); Timo Teräsvirta (CREATES, Department of Economics and Business, Aarhus University) 
Abstract:  In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For this purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011). The latter component is modelled by incorporating smooth changes so that the unconditional variance is allowed to evolve slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building procedure is illustrated with an application to daily returns of the Dow Jones Industrial Average stock index covering a period of more than ninety years. The main conclusions are as follows. First, the LM tests strongly reject the assumption of constancy of the unconditional variance. Second, the results show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of the volatility forecast accuracy of the new model over the GJR-GARCH model at all horizons for a subset of the long return series. 
Keywords:  Model specification; Conditional heteroskedasticity; Lagrange multiplier test; Time-varying unconditional variance; Long financial time series; Volatility persistence 
JEL:  C12 C22 C51 C52 C53 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:02/2012&r=ecm 
By:  Johanna Kappus 
Abstract:  For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x v(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter. 
Keywords:  Statistics of stochastic processes, Low frequency observed Lévy processes, Nonparametric statistics, Adaptive estimation, Model selection with unknown variance 
JEL:  C14 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012016&r=ecm 
By:  Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  Mixed-data sampling (MIDAS) regressions allow one to estimate dynamic equations that explain a low-frequency variable by high-frequency variables and their lags. When the difference in sampling frequencies between the regressand and the regressors is large, distributed lag functions are typically employed to model dynamics while avoiding parameter proliferation. In macroeconomic applications, however, differences in sampling frequencies are often small. In such a case, it might not be necessary to employ distributed lag functions. In this paper, we discuss the pros and cons of unrestricted lag polynomials in MIDAS regressions. We derive unrestricted MIDAS regressions (U-MIDAS) from linear high-frequency models, discuss identification issues, and show that their parameters can be estimated by OLS. In Monte Carlo experiments, we compare U-MIDAS to MIDAS with functional distributed lags estimated by NLS. We show that U-MIDAS generally performs better than MIDAS when mixing quarterly and monthly data. On the other hand, with larger differences in sampling frequencies, distributed lag functions outperform unrestricted polynomials. In an empirical application on out-of-sample nowcasting of GDP in the US and the euro area using monthly predictors, we find a good performance of U-MIDAS for a number of indicators, although the results depend on the evaluation sample. We suggest considering U-MIDAS as a potential alternative to the existing MIDAS approach, in particular for mixing monthly and quarterly variables. In practice, the choice between the two approaches should be made on a case-by-case basis, depending on their relative performance. 
Keywords:  mixed data sampling, distributed lag polynomials, time aggregation, nowcasting 
JEL:  E37 C53 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:201135&r=ecm 
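In the quarterly/monthly case the abstract highlights, a U-MIDAS regression simply stacks the most recent monthly observations as separate, unrestricted regressors and estimates by OLS. A minimal sketch under that setup; the function name and the quarter-end alignment convention are illustrative:

```python
import numpy as np

def umidas_ols(y_q, x_m, n_lags=3):
    """Unrestricted MIDAS (U-MIDAS) sketch: regress a quarterly variable
    on the n_lags most recent monthly observations, estimated by OLS.
    Assumed alignment: months 3q-2..3q (1-based) belong to quarter q."""
    n_q = len(y_q)
    end = 3 * np.arange(1, n_q + 1) - 1          # 0-based quarter-end month
    # Each monthly lag enters with its own free coefficient (no lag polynomial)
    X = np.column_stack([np.ones(n_q)] +
                        [x_m[end - lag] for lag in range(n_lags)])
    beta, *_ = np.linalg.lstsq(X, y_q, rcond=None)
    return beta
```

With only three high-frequency periods per low-frequency period, the unrestricted polynomial costs just a handful of parameters, which is why the paper finds functional distributed lags unnecessary in this setting.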
By:  Jean-Bernard Chatelain (Centre d'Economie de la Sorbonne - Paris School of Economics); Kirsten Ralf (Ecole Supérieure du Commerce Extérieur (ESCE)) 
Abstract:  This paper shows that a multiple regression with two highly correlated explanatory variables, both of them with a near-zero correlation with the dependent variable, may correspond to a spurious regression or to a homeostatic model, with estimates highly sensitive to outliers. The regression method does not allow one to decide which of the two models is relevant. Statistical significance of the (very high) parameters is easily obtained, as shown by Monte Carlo simulations. An example is provided by the Burnside and Dollar [2000] article on aid, policies and growth. 
Keywords:  Spurious regression, near-multicollinearity, classical suppressor, parameter inflation factor (PIF). 
JEL:  C12 C18 C52 F35 O47 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:12011&r=ecm 
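The configuration the abstract describes — two near-collinear regressors, each nearly uncorrelated with the dependent variable, yet with very large, opposite-signed OLS coefficients — is easy to reproduce in a small Monte Carlo. A sketch with illustrative parameter values:

```python
import numpy as np

# Classical suppressor setup (illustrative magnitudes, not the paper's).
rng = np.random.default_rng(0)
n = 1000
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)       # near-multicollinear pair
y = 50.0 * (x1 - x2) + 0.1 * rng.standard_normal(n)

# Each regressor is almost uncorrelated with y ...
r1 = np.corrcoef(y, x1)[0, 1]
r2 = np.corrcoef(y, x2)[0, 1]

# ... yet OLS recovers huge, opposite-signed coefficients (near +50 / -50)
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
```

The coefficients are also tightly estimated here, illustrating the paper's point that statistical significance of inflated parameters is easily obtained and cannot by itself distinguish a spurious regression from a genuine homeostatic relation.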
By:  Vijverberg, ChuPing C. (Wichita State University); Vijverberg, Wim P. (CUNY Graduate Center) 
Abstract:  The pregibit discrete choice model is built on a distribution that allows symmetry or asymmetry and thick tails, thin tails or no tails. Thus the model is much richer than the traditional models that are typically used to study behavior that generates discrete choice outcomes. Pregibit nests logit, approximately nests probit, log-log, clog-log and gosset models, and yields a linear probability model that is solidly founded on the discrete choice framework that underlies logit and probit. 
Keywords:  discrete choice, asymmetry, logit, probit, postsecondary education, mortgage application 
JEL:  C25 G21 I21 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp6359&r=ecm 
By:  Ann Elizabeth Maharaj; M. Andrés Alonso 
Abstract:  In analyzing ECG data, the main aim is to differentiate between the signal patterns of healthy subjects and those of individuals with specific heart conditions. We propose an approach for classifying multivariate ECG signals based on discriminant and wavelet analyses. For this purpose we use multiple-scale wavelet variances and wavelet correlations to distinguish between the patterns of multivariate ECG signals, based on the variability of the individual components of each ECG signal and the relationships between every pair of these components. Using the results of other ECG classification studies in the literature as references, we demonstrate that our approach, applied to 12-lead ECG signals from a particular database, displays quite favourable performance. We also demonstrate with real and synthetic ECG data that our approach to classifying multivariate time series outperforms other well-known approaches. In simulation studies using multivariate time series with patterns different from those of the ECG signals, we also demonstrate very favourable performance of this approach when compared to these other approaches. 
Keywords:  Time series, Wavelet Variances, Wavelet Correlations, Discriminant Analysis 
JEL:  C38 C22 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws120603&r=ecm 
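Multiple-scale wavelet variances of the kind used as classification features here can be illustrated with the simplest (Haar) decimated wavelet transform: at each scale, the variance of the detail coefficients summarises the signal's variability at that scale. A minimal sketch (the paper's actual wavelet filter and transform may differ):

```python
import numpy as np

def haar_wavelet_variances(x, levels=3):
    """Variance of Haar detail coefficients at scales 1..levels,
    a simple multi-scale feature vector for time series classification.
    Sketch only: decimated Haar transform, illustrative choice."""
    s = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        s = s[: 2 * (len(s) // 2)]                 # trim to even length
        d = (s[1::2] - s[0::2]) / np.sqrt(2.0)     # detail coefficients
        s = (s[1::2] + s[0::2]) / np.sqrt(2.0)     # smooth, carried forward
        out.append(np.var(d, ddof=1))
    return np.array(out)
```

Signals whose variability concentrates at different scales (e.g. different ECG leads or heart conditions) produce different variance profiles across levels, which is what the discriminant analysis then separates; wavelet correlations extend the same idea to pairs of components.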
By:  Khaled, Mohammed S; Keef, Stephen P 
Abstract:  Efficiency in financial markets is tested by applying variance ratio (VR) tests, but many studies also use unit root tests, sometimes in addition to the VR tests. There is a lack of clarity in the literature about the implication of these test results when they seem to disagree. We distinguish between two different types of predictability, called "structural predictability" and "error predictability". Standard unit root tests pick up structural predictability; VR tests pick up both structural and error predictability. 
Keywords:  Unit Root, Weak Form Efficiency, Random Walk, Autocorrelation, Variance Ratio 
Date:  2011–12–20 
URL:  http://d.repec.org/n?u=RePEc:vuw:vuwecf:1993&r=ecm 
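The VR statistic compares the variance of q-period returns with q times the variance of one-period returns; under a random walk it is close to one, while either type of predictability pulls it away from one. A minimal Lo-MacKinlay-style sketch from log prices, without the usual finite-sample bias and overlap corrections:

```python
import numpy as np

def variance_ratio(p, q):
    """Variance ratio VR(q) computed from a log-price series p:
    Var(q-period return) / (q * Var(1-period return)).
    Sketch only: no small-sample or heteroskedasticity corrections."""
    r = np.diff(p)                  # 1-period returns
    rq = p[q:] - p[:-q]             # overlapping q-period returns
    return np.var(rq, ddof=1) / (q * np.var(r, ddof=1))
```

For a pure random walk VR(q) is near 1 at every horizon; mean-reverting (negatively autocorrelated) returns push it below 1, which is how the test detects departures from weak-form efficiency.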
By:  Pablo Pincheira 
Abstract:  It is well known that weighted averages of two competing forecasts may reduce Mean Squared Prediction Errors (MSPE) but may also introduce certain inefficiencies. In this paper we take an in-depth view of one particular type of inefficiency stemming from simple combination schemes. We identify testable conditions under which every linear convex combination of two forecasts displays this type of inefficiency. In particular, we show that the process of taking averages of forecasts may induce inefficiencies in the combination, even when the individual forecasts are efficient. Furthermore, we show that the so-called "optimal weighted average" traditionally presented in the literature may indeed be suboptimal. We propose a simple testable condition to detect if this traditional weighted factor is optimal in a broader sense. An optimal "recombination weight" is introduced. Finally, we illustrate our findings with simulations and an empirical application in the context of the combination of inflation forecasts. 
Date:  2012–01 
URL:  http://d.repec.org/n?u=RePEc:chb:bcchwp:661&r=ecm 
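The "optimal weighted average" traditionally presented in the literature is the Bates-Granger weight that minimises the MSPE of a convex combination of two forecasts — the very weight the paper argues can be suboptimal in a broader sense. A minimal sketch of that traditional weight, with an illustrative function name:

```python
import numpy as np

def optimal_weight(e1, e2):
    """Bates-Granger MSPE-minimising weight w on forecast 1 in the
    combination w*f1 + (1-w)*f2, computed from the two forecast-error
    series e1, e2. Sketch of the traditional weight only."""
    s11 = np.mean(e1 * e1)          # MSPE of forecast 1
    s22 = np.mean(e2 * e2)          # MSPE of forecast 2
    s12 = np.mean(e1 * e2)          # error covariance
    return (s22 - s12) / (s11 + s22 - 2.0 * s12)
```

By construction no other fixed weight in the combination achieves a lower MSPE, yet — as the abstract notes — the combined forecast can still be inefficient, which motivates the paper's "recombination weight".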
By:  Belzil, Christian (Ecole Polytechnique, Paris); Hansen, Jörgen (Concordia University) 
Abstract:  We build on Rosenzweig and Wolpin (2000) and Keane (2010) and show that, in order to fulfill the instrumental variable (IV) identifying moment condition, a policy must be designed so that compliers and non-compliers either have the same average error term, or have an error term ratio equal to their relative share of the population. The former condition (labeled Choice Orthogonality) is essentially a no-selection condition. The latter, referred to as Weighted Opposite Choices, may be viewed as a distributional (functional form) assumption necessary to match the degree of selectivity between compliers and non-compliers to their relative population proportions. These conditions form a core of implicit IV assumptions present in any empirical application. They allow the econometrician to gain substantial insight into the validity of a specific instrument, and they illustrate the link between identification and the statistical strength of an instrument. Finally, our characterization may also help in designing a policy that generates a valid instrument. 
Keywords:  instrumental variable methods, implicit assumptions, treatment effects 
JEL:  B4 C1 C3 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp6339&r=ecm 
By:  Martin Rypdal; Espen Sirnes; Ola Løvsletten; Kristoffer Rypdal 
Abstract:  Maximum likelihood estimation applied to high-frequency data allows us to quantify intermittency in the fluctuations of asset prices. From time records as short as one month these methods permit extraction of a meaningful intermittency parameter λ characterising the degree of volatility clustering of asset prices. We can therefore study the time evolution of volatility clustering and test the statistical significance of this variability. By analysing data from the Oslo Stock Exchange, and comparing the results with the investment grade spread, we find that the estimates of λ are lower at times of high market uncertainty. 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1202.4877&r=ecm 
By:  Boistard, Hélène; LevyLeduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univtlse1.fr/3043/&r=ecm 
By:  Matyas, Laszlo; Hornok, Cecilia; Pus, Daria 
Abstract:  The paper introduces several random effects model specifications for the most frequently used three-dimensional panel data sets. It derives appropriate estimation methods for the balanced and unbalanced cases. An application is also presented in which the bilateral trade of 20 EU countries is analysed for the period 2001-2006. The differences between the fixed and random effects specifications are highlighted through this empirical exercise. 
Keywords:  panel data; multidimensional panel data; random effects; error components model; trade model; gravity model 
JEL:  C13 F11 C23 F17 F1 C21 C01 
Date:  2012–02–17 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:36789&r=ecm 
By:  Walter Krämer 
Abstract:  This article takes issue with a recent book by Ziliak and McCloskey (2008) of the same title. Ziliak and McCloskey argue that statistical significance testing is a barrier rather than a booster for empirical research in economics and should therefore be abandoned altogether. The present article argues that this is good advice in some research areas but not in others. Taking as examples all issues of the German Economic Review which have appeared so far and a recent epidemiological meta-analysis, it shows that there has indeed been a lot of misleading work in the context of significance testing, and that at the same time many promising avenues for fruitfully employing statistical significance tests, disregarded by Ziliak and McCloskey, have not been used. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:rsw:rswwps:rswwps176&r=ecm 
By:  Boistard, Hélène; LevyLeduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univtlse1.fr/3044/&r=ecm 
By:  Victor Chernozhukov (Institute for Fiscal Studies and MIT); Emre Kocatulum; Konrad Menzel 
Abstract:  In this paper we introduce various set inference problems as they appear in finance and propose practical and powerful inferential tools. Our tools will be applicable to any problem where the set of interest solves a system of smooth estimable inequalities, though we will particularly focus on the following two problems: the admissible mean-variance sets of stochastic discount factors and the admissible mean-variance sets of asset portfolios. We propose to make inference on such sets using weighted likelihood-ratio and Wald type statistics, building upon and substantially enriching the available methods for inference on sets. 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:04/12&r=ecm 
By:  Hong Lan; Alexander MeyerGohde 
Abstract:  We prove that standard regularity and saddle stability assumptions for linear approximations are sufficient to guarantee the existence of a unique solution for all undetermined coefficients of nonlinear perturbations of arbitrary order to discrete time DSGE models. We derive the perturbation using a matrix calculus that preserves linear algebraic structures to arbitrary orders of derivatives, enabling the direct application of theorems from matrix analysis to prove our main result. As a consequence, we provide insight into several invertibility assumptions from linear solution methods, prove that the local solution is independent of terms first order in the perturbation parameter, and relax the assumptions needed for the local existence theorem of perturbation solutions. 
Keywords:  Perturbation, matrix calculus, DSGE, solution methods, Bézout theorem, Sylvester equations 
JEL:  C61 C63 E17 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012015&r=ecm 
By:  Boistard, Hélène; LevyLeduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univtlse1.fr/3045/&r=ecm 