
on Econometrics 
By:  Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Ji-Liang Shiu (Institute for Economic and Social Research, Jinan University) 
Abstract:  This paper investigates identification and estimation of semiparametric nonlinear panel data models with correlated random effects (CRE). It is shown that under the Mundlak-type CRE specification, the average (or integrated) likelihood is the convolution of the proposed model and the conditional distribution of the unobserved heterogeneity. The conditional distribution of the unobserved heterogeneity can then be recovered by means of a Fourier transformation without imposing any distributional assumptions on it. Combining the proposed conditional distributions of the outcome variables with the recovered distribution of the unobserved heterogeneity, we can construct a parametric family of average likelihood functions of observables and then show that the parameter vector is identifiable. Based on the identification condition, we propose a semiparametric two-step maximum likelihood estimator which is root-n consistent and asymptotically normal. Compared with conventional parametric CRE approaches, the advantage of our method is that it is not subject to functional form misspecification. We investigate the finite-sample properties of the proposed estimator through a Monte Carlo study and apply our method to determine the persistence effects of union membership. 
Keywords:  Nonlinear panel data models, Semiparametric identification, Correlated random effects, Semiparametric two-step maximum likelihood estimator 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:sin:wpaper:17a002&r=ecm 
By:  Jean-Marie Dufour; Richard Luger 
Abstract:  This paper develops tests of the null hypothesis of linearity in the context of autoregressive models with Markov-switching means and variances. These tests are robust to the identification failures that plague conventional likelihood-based inference methods. The approach exploits the moments of normal mixtures implied by the regime-switching process and uses Monte Carlo test techniques to deal with the presence of an autoregressive component in the model specification. The proposed tests have very respectable power in comparison with the optimal tests for Markov-switching parameters of Carrasco et al. (2014), and they are also quite attractive owing to their computational simplicity. The new tests are illustrated with an empirical application to an autoregressive model of U.S. output growth. 
Keywords:  Mixture distributions; Markov chains; Regime switching; Parametric bootstrap; Monte Carlo tests; Exact inference 
JEL:  C12 C15 C22 C52 
Date:  2016–12–31 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2016s63&r=ecm 
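The Monte Carlo test technique this abstract relies on can be sketched in a few lines: compute the statistic on the data, recompute it on artificial samples drawn under the null, and rank the observed value among the simulated ones. The statistic (excess kurtosis) and null simulator below are hypothetical placeholders, not the authors' specification; only the exact p-value construction is the point.

```python
import numpy as np

def mc_pvalue(stat_obs, stat_fn, simulate_null, n_rep=99, seed=0):
    # Exact Monte Carlo p-value (Dufour-style): rank the observed statistic
    # among statistics computed on artificial samples drawn under the null.
    rng = np.random.default_rng(seed)
    sims = np.array([stat_fn(simulate_null(rng)) for _ in range(n_rep)])
    return (1.0 + np.sum(sims >= stat_obs)) / (n_rep + 1.0)

# Hypothetical illustration: test normality against a mixture alternative
# using excess kurtosis as the test statistic.
def kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

p = mc_pvalue(stat_obs=kurtosis(np.random.default_rng(42).standard_normal(200)),
              stat_fn=kurtosis,
              simulate_null=lambda rng: rng.standard_normal(200))
```

With a pivotal statistic, this p-value is exact in finite samples for any number of replications of the form n_rep = 99, 199, and so on.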
By:  Ana Paula Martins 
Abstract:  This paper inspects a grid search algorithm to estimate the AR(1) process, based on the joint estimation of the canonical AR(1) equation along with its reverse form. The method relies on the GLS principle, accounting for the covariance error structure of the special estimable system. Nevertheless, it may potentially be improved by relying on across-equation-restricted system estimation with a free covariance structure. The algorithm is (computationally) implemented and applied to inference of the AR(1) parameter of simulated series – some stationary, others nonstationary. Additionally, it is argued – and illustrated by simulation – that nonstationary AR(1) processes appear to be consistently estimable by OLS. It is also suggested that the parameter of a stationary AR(1) process is estimable by OLS from the AR(2) representation of its nonstationary “first-integrated” series, or from the joint estimate of the canonical and reverse forms of the AR(1) process by OLS. The paper concludes that further study of differenced, D(p), processes – stationary after being integrated p times – is important. 
Keywords:  Nonlinear Estimation; Grid Search Methods; AR(1) Processes; Integrated Series; Differenced Processes; Factored AR(1) Processes; Unit Roots. 
JEL:  C22 C13 C12 C63 
Date:  2016–11–21 
URL:  http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2016_21&r=ecm 
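A minimal sketch of the grid-search principle described above, using only the canonical equation; the paper's joint canonical/reverse-form GLS system is not reproduced here, so this is an illustration, not the author's estimator.

```python
import numpy as np

def ar1_grid_search(y, grid=None):
    # Grid-search estimation of the AR(1) coefficient: evaluate the sum of
    # squared residuals of the canonical equation y_t = rho*y_{t-1} + e_t
    # over a grid of candidate values and keep the minimiser.
    if grid is None:
        grid = np.linspace(-0.999, 0.999, 1999)
    y_lag, y_cur = y[:-1], y[1:]
    ssr = np.array([np.sum((y_cur - r * y_lag) ** 2) for r in grid])
    return grid[int(np.argmin(ssr))]

# Simulated stationary AR(1) with rho = 0.5
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
rho_hat = ar1_grid_search(y)
```

For this quadratic objective the grid search simply approximates the OLS minimiser; its value lies in settings, like the paper's restricted joint system, where no closed form is available.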
By:  Firmin Doko Tchatoka; Jean-Marie Dufour 
Abstract:  We study the distribution of Durbin-Wu-Hausman (DWH) and Revankar-Hartley (RH) tests for exogeneity from a finite-sample viewpoint, under the null and alternative hypotheses. We consider linear structural models with possibly non-Gaussian errors, where structural parameters may not be identified and where reduced forms can be incompletely specified (or nonparametric). Regarding level control, we characterize the null distributions of all the test statistics. Through conditioning and invariance arguments, we show that these distributions do not involve nuisance parameters. In particular, this applies to several test statistics for which no finite-sample distributional theory is yet available, such as the standard statistic proposed by Hausman (1978). The distributions of the test statistics may be nonstandard – so corrections to the usual asymptotic critical values are needed – but the characterizations are sufficiently explicit to yield finite-sample (Monte Carlo) tests of the exogeneity hypothesis. The procedures so obtained are robust to weak identification, missing instruments, or misspecified reduced forms, and can easily be adapted to allow for parametric non-Gaussian error distributions. We give a general invariance result (block triangular invariance) for exogeneity test statistics. This property yields a convenient exogeneity canonical form and a parsimonious reduction of the parameters on which power depends. In the extreme case where no structural parameter is identified, the distributions under the alternative hypothesis and the null hypothesis are identical, so the power function is flat, for all the exogeneity statistics. However, as soon as identification does not fail completely, this phenomenon typically disappears. We present simulation evidence which confirms the finite-sample theory. 
The theoretical results are illustrated with two empirical examples: the relation between trade and economic growth, and the widely studied problem of the returns to education. 
Keywords:  Exogeneity; Durbin-Wu-Hausman test; weak instrument; incomplete model; non-Gaussian; weak identification; identification robust; finite-sample theory; pivotal; invariance; Monte Carlo test; power 
JEL:  C3 C12 C15 C52 
Date:  2016–12–31 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2016s62&r=ecm 
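As a rough companion to the abstract, the following sketch computes a textbook control-function version of a DWH-type exogeneity statistic. The simulated data and instrument are hypothetical, and the paper's finite-sample Monte Carlo procedure is not implemented here; this only illustrates what the statistic measures.

```python
import numpy as np

def dwh_stat(y, x, z):
    # Durbin-Wu-Hausman-type exogeneity statistic via the control-function
    # route: regress x on the instrument z, append the first-stage residual
    # to the outcome equation, and return the squared t-ratio on that
    # residual (large values signal endogeneity of x).
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    v = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage residual
    X = np.column_stack([np.ones(n), x, v])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
    return (b[2] / se) ** 2

# Hypothetical endogenous design: x is correlated with the structural error
rng = np.random.default_rng(7)
n = 2000
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = z + u                                        # instrument is relevant
y_endog = x + u + 0.5 * rng.standard_normal(n)   # u enters both equations
stat = dwh_stat(y_endog, x, z)
```

Under exogeneity and strong identification the statistic is approximately chi-squared with one degree of freedom; the paper's contribution is precisely to characterize its behaviour when identification is weak or the reduced form is misspecified.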
By:  Susanna Gallani (Harvard Business School, Accounting and Management Unit); Ranjani Krishnan (Eli Broad School of Management, Michigan State University) 
Abstract:  Survey research studies make extensive use of rating scales to measure constructs of interest. The bounded nature of such scales presents econometric estimation challenges. Linear estimation methods (e.g. OLS) often produce predicted values that lie outside the rating scales, and fail to account for non-constant effects of the predictors. Established nonlinear approaches such as logit and probit transformations attenuate many shortcomings of linear methods. However, these nonlinear approaches are challenged by corner solutions, for which they require ad hoc transformations. Censored and truncated regressions alter the composition of the sample, while Tobit methods rely on distributional assumptions that are frequently not reflected in survey data, especially when observations fall at one extreme of the scale owing to surveyor and respondent characteristics. The fractional response model (FRM) (Papke and Wooldridge 1996, 2008) overcomes many limitations of established linear and nonlinear econometric solutions in the study of bounded data. In this study, we first review the econometric characteristics of the FRM and discuss its applicability to survey-based studies in accounting. Second, we present results from Monte Carlo simulations to highlight the advantages of using the FRM relative to conventional models. Finally, we use data from a hospital patient satisfaction survey, compare the estimation results from a traditional OLS method and the FRM, and conclude that the FRM provides an improved methodological approach to the study of bounded dependent variables. 
Keywords:  Fractional response model, bounded variables, simulation 
JEL:  C23 C24 C25 C15 I18 M41 
Date:  2015–08 
URL:  http://d.repec.org/n?u=RePEc:hbs:wpaper:16016&r=ecm 
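A minimal sketch of the fractional logit quasi-MLE in the spirit of Papke and Wooldridge (1996), fitted by direct optimization rather than any packaged routine; the simulated data are hypothetical, chosen only so the conditional mean is logistic.

```python
import numpy as np
from scipy.optimize import minimize

def frm_logit(y, X):
    # Fractional logit quasi-MLE: maximise the Bernoulli quasi-log-likelihood
    # with a logistic conditional mean. Valid for fractional outcomes in
    # [0, 1], including observations at the corners 0 and 1.
    def negloglik(b):
        mu = 1.0 / (1.0 + np.exp(-X @ b))
        mu = np.clip(mu, 1e-10, 1.0 - 1e-10)
        return -np.sum(y * np.log(mu) + (1.0 - y) * np.log(1.0 - mu))
    return minimize(negloglik, np.zeros(X.shape[1]), method="BFGS").x

# Hypothetical simulated example: beta-distributed outcome whose mean is
# exactly the logistic index, so the quasi-MLE is consistent.
rng = np.random.default_rng(5)
n = 3000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
mu = 1.0 / (1.0 + np.exp(-(X @ np.array([0.5, 1.0]))))
y = rng.beta(10.0 * mu, 10.0 * (1.0 - mu))
b_hat = frm_logit(y, X)
```

Consistency here requires only that the conditional mean is correctly specified, which is why the FRM sidesteps the distributional assumptions that trouble Tobit-type approaches.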
By:  Bernd Hayo (University of Marburg) 
Abstract:  There is a widespread belief among economists that adding variables to a regression model causes higher standard errors. This note shows that, in general, this belief is unfounded and that the impact of adding variables on coefficients’ standard errors is ambiguous. The concept of standard-error-decreasing complementarity is introduced, which works against the collinearity-induced increase in standard errors. How standard-error-decreasing complementarity works is illustrated with the help of a non-technical heuristic, and, using an example based on artificial data, it is shown that the outcome of popular econometric approaches can be potentially misleading. 
Keywords:  Standard-error-decreasing complementarity, multivariate regression model, standard error, econometric methodology, multicollinearity, collinearity 
JEL:  C1 B4 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:mar:magkse:201703&r=ecm 
By:  Javier Hualde (Universidad Publica de Navarra); Morten Ørregaard Nielsen (Queen's University and CREATES) 
Abstract:  We consider truncated (or conditional) sum of squares estimation of a parametric model composed of a fractional time series and an additive generalized polynomial trend. Both the memory parameter, which characterizes the behaviour of the stochastic component of the model, and the exponent parameter, which drives the shape of the deterministic component, are considered not only to be unknown real numbers but also to lie in arbitrarily large (but finite) intervals. Thus, our model captures different forms of nonstationarity and noninvertibility. As in related settings, the proof of consistency (which is a prerequisite for proving asymptotic normality) is challenging due to non-uniform convergence of the objective function over a large admissible parameter space; in addition, our framework is substantially more involved due to the competition between the stochastic and deterministic components. We establish consistency and asymptotic normality under quite general circumstances, finding that the results differ crucially depending on the relative strength of the deterministic and stochastic components. 
Keywords:  Asymptotic normality, consistency, deterministic trend, fractional process, generalized polynomial trend, noninvertibility, nonstationarity, truncated sum of squares estimation 
JEL:  C22 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1376&r=ecm 
By:  Mynbaev, Kairat; Martins-Filho, Carlos 
Abstract:  We define a new bandwidth-dependent kernel density estimator that improves existing convergence rates for the bias and preserves that of the variation when the error is measured in L1. No assumptions beyond those in the extant literature are imposed. 
Keywords:  Kernel density estimation, higher order kernels, bias reduction 
JEL:  C14 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:75902&r=ecm 
By:  Rinke, Saskia; Busch, Marie; Leschinski, Christian 
Abstract:  The persistence of inflation rates is of major importance to central banks because it determines the costs of monetary policy according to the Phillips curve. This article is motivated by newly available econometric methods which allow for consistent estimation of the persistence parameter under low-frequency contaminations and consistent break-point estimation under long memory, without a priori assumptions on the presence of breaks. In contrast to previous studies, we allow for smooth trends in addition to breaks as a source of spurious long memory. We support the finding of reduced memory parameters in monthly inflation rates of the G7 countries as well as spurious long memory, except for the US. Nevertheless, only a few breaks can be located. Instead, all countries except the US exhibit significant trends at the 5 percent level. 
Keywords:  Spurious Long Memory; Breaks; Trends; Inflation; G7 countries 
JEL:  C13 E58 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp584&r=ecm 
By:  Barnichon, Régis; Brownlees, Christian 
Abstract:  Vector Autoregressions (VAR) and Local Projections (LP) are well-established methodologies for the estimation of Impulse Responses (IR). These techniques have complementary features: the VAR approach is more efficient when the model is correctly specified, whereas the LP approach is less efficient but more robust to model misspecification. We propose a novel IR estimation methodology – Smooth Local Projections (SLP) – to strike a balance between these approaches. SLP consists of estimating LP under the assumption that the IR is a smooth function of the forecast horizon. Inference is carried out using semiparametric techniques based on penalized B-splines, which are straightforward to implement in practice. SLP preserves the flexibility of standard LP and at the same time can increase precision substantially. A simulation study shows the large gains in IR estimation accuracy of SLP over LP. We show how SLP may be used with common identification schemes such as timing restrictions and instrumental variables to directly recover structural IRs. We illustrate our technique by studying the effects of monetary shocks. 
Keywords:  impulse response; local projections; semiparametric estimation 
JEL:  C14 C32 C53 E47 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:11726&r=ecm 
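A stripped-down sketch of the SLP idea: the paper's penalized B-splines are replaced here by a direct second-difference (P-spline-type) penalty on the IR coefficients, and control variables are omitted, so this is a simplification of the proposed method rather than the authors' implementation.

```python
import numpy as np

def smooth_lp(y, x, H, lam):
    # Penalised local projection: estimate impulse-response coefficients
    # b_0, ..., b_H jointly while shrinking their second differences towards
    # zero, so the IR is a smooth function of the forecast horizon.
    T = len(y)
    sxx = np.zeros(H + 1)
    sxy = np.zeros(H + 1)
    for h in range(H + 1):
        xs, ys = x[: T - h], y[h:]
        sxx[h] = xs @ xs                 # per-horizon X'X
        sxy[h] = xs @ ys                 # per-horizon X'y
    D = np.diff(np.eye(H + 1), n=2, axis=0)   # second-difference operator
    return np.linalg.solve(np.diag(sxx) + lam * D.T @ D, sxy)

# Hypothetical simulated example: true IR is (1, 0.5, 0.25, 0, 0)
rng = np.random.default_rng(2)
T = 5000
x = rng.standard_normal(T)
e = rng.standard_normal(T)
y = np.zeros(T)
y[2:] = x[2:] + 0.5 * x[1:-1] + 0.25 * x[:-2] + 0.3 * e[2:]
b = smooth_lp(y, x, H=4, lam=1.0)
```

Setting lam = 0 recovers per-horizon LP exactly; increasing it trades a little bias for potentially large variance reductions, which is the balance the paper strikes between VAR and LP.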
By:  Hirsch, Tristan; Rinke, Saskia 
Abstract:  Outlying observations in time series influence parameter estimation and testing procedures, leading to biased estimates and spurious test decisions. Further inference based on these results will be misleading. In this paper the effects of outliers on the performance of ratio-based tests for a change in persistence are investigated. We consider two types of outliers, additive outliers and innovative outliers. Our simulation results show that the effect of outliers crucially depends on the outlier type and on the degree of persistence of the underlying process. Additive outliers deteriorate the performance of the tests for high degrees of persistence. In contrast, innovative outliers do not negatively influence the performance of the tests. Since additive outliers lead to severe size distortions when the null hypothesis under consideration is described by a nonstationary process, we apply an outlier detection method designed for unit-root testing. The adjustment of the series results in size improvements and power gains. In an empirical example we apply the tests and the outlier detection method to the G7 inflation rates. 
Keywords:  Additive Outliers; Innovative Outliers; Change in Persistence; Outlier Detection; Monte Carlo 
JEL:  C15 C22 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp583&r=ecm 
By:  Breunig, Christoph; Kummer, Michael; Ohnemus, Jorg; Viete, Steffen 
Abstract:  Missing values are a major problem in all econometric applications based on survey data. A standard approach assumes data are missing-at-random and uses imputation methods, or even listwise deletion. This approach is justified if item nonresponse does not depend on the potentially missing variables' realization. However, assuming missing-at-random may introduce bias if nonresponse is, in fact, selective. Relevant applications range from financial or strategic firm-level data to individual-level data on income or privacy-sensitive behaviors. In this paper, we propose a novel approach to deal with selective item nonresponse in the model's dependent variable. Our approach is based on instrumental variables that affect selection only through potential outcomes. In addition, we allow for endogenous regressors. We establish identification of the structural parameter and propose a simple two-step estimation procedure for it. Our estimator is consistent and robust against biases that would prevail when assuming missingness at random. We implement the estimation procedure using firm-level survey data and a binary instrumental variable to estimate the effect of outsourcing on productivity. 
Keywords:  endogenous selection, IV estimation, inverse probability weighting, missing data, productivity, outsourcing, semiparametric estimation 
JEL:  C14 C36 D24 L24 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:zbw:zewdip:16092&r=ecm 
By:  Bógalo, Juan; Poncela, Pilar; Senra, Eva 
Abstract:  Singular Spectrum Analysis (SSA) is a nonparametric technique for signal extraction in time series based on principal components. However, it requires the intervention of the analyst to identify the frequencies associated with the extracted principal components. We propose a new variant of SSA, Circulant SSA (CSSA), that makes this association automatically. We also prove the validity of CSSA in the nonstationary case. Through several sets of simulations, we show the good properties of our approach: it is reliable, fast, automatic, and produces strongly separable elementary components by frequency. Finally, we apply Circulant SSA to the Industrial Production Index of six countries. We use it to deseasonalize the series and to illustrate that it also reproduces a cycle in accordance with the dated recessions from the OECD. 
Keywords:  circulant matrices, signal extraction, singular spectrum analysis, nonparametric, time series, Toeplitz matrices. 
JEL:  C22 E32 
Date:  2017–01–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:76023&r=ecm 
By:  Carlos Viana de Carvalho (Department of Economics, PUCRio); Ricardo Masini (São Paulo School of Economics, Getúlio Vargas Foundation); Marcelo Cunha Medeiros (Department of Economics, PUCRio) 
Abstract:  Recently, there has been growing interest in developing econometric tools to conduct counterfactual analysis with aggregate data when a “treated” unit suffers an intervention, such as a policy change, and there is no obvious control group. Usually, the proposed methods are based on the construction of an artificial counterfactual from a pool of “untreated” peers, organized in a panel data structure. In this paper, we investigate the consequences of applying such methodologies when the data are formed by integrated processes of order 1. We find that without a cointegration relation (the spurious case) the intervention estimator diverges, resulting in the rejection of the hypothesis of no intervention effect regardless of its existence. When at least one cointegration relation exists, we have a √T-consistent estimator for the intervention effect, albeit with a nonstandard distribution. However, even in this case, the test of no intervention effect is extremely oversized if nonstationarity is ignored. When a drift is present in the data generating processes, the estimator for both cases (cointegrated and spurious) either diverges or is not well defined asymptotically. As a final recommendation, we suggest working in first differences to avoid spurious results. 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:rio:texdis:654&r=ecm 
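The spurious-case finding and the first-difference recommendation are easy to visualize with two independent random walks. This is an illustrative textbook construction, not the paper's counterfactual estimator.

```python
import numpy as np

def tstat_slope(y, x):
    # OLS slope t-ratio from regressing y on a constant and x.
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (len(y) - 2)
    return b[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

def spurious_demo(T, seed):
    # Two independent I(1) processes: the levels regression produces a
    # diverging t-ratio (the spurious effect), while the regression in
    # first differences behaves normally.
    rng = np.random.default_rng(seed)
    y = np.cumsum(rng.standard_normal(T))
    x = np.cumsum(rng.standard_normal(T))
    return tstat_slope(y, x), tstat_slope(np.diff(y), np.diff(x))
```

The levels t-ratio grows with the sample size even though the series are unrelated, which is the mechanism behind the oversized no-intervention-effect tests described in the abstract.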
By:  Stéphanie Aerts; Ines Wilms 
Abstract:  Quadratic and Linear Discriminant Analysis (QDA/LDA) are the most often applied classification rules under normality. In QDA, a separate covariance matrix is estimated for each group. If there are more variables than observations in the groups, the usual estimates are singular and can no longer be used. Assuming homoscedasticity, as in LDA, reduces the number of parameters to estimate. This rather strong assumption is, however, rarely verified in practice. Regularized discriminant techniques that are computable in high dimensions and cover the path between the two extremes, QDA and LDA, have been proposed in the literature. However, these procedures rely on sample covariance matrices. As such, they become inappropriate in the presence of cellwise outliers, a type of outlier that is very likely to occur in high-dimensional datasets. In this paper, we propose cellwise robust counterparts of these regularized discriminant techniques by inserting cellwise robust covariance matrices. Our methodology results in a family of discriminant methods that (i) are robust against outlying cells, (ii) cover the gap between LDA and QDA, and (iii) are computable in high dimensions. The good performance of the new methods is illustrated through simulated and real data examples. As a by-product, visual tools are provided for the detection of outliers. 
Keywords:  Cellwise robust precision matrix, Classification, Discriminant analysis, Penalized estimation 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:ete:kbiper:563648&r=ecm 
By:  Andrew G. Chapple 
Abstract:  Time-to-event data for econometric tragedies, like mass shootings, have largely been ignored from a changepoint-analysis standpoint. We outline a technique for modelling economic changepoint problems using a piecewise constant hazard model to explain different economic phenomena. Specifically, we investigate the rates of mass shootings in the United States since August 20th, 1982 as a case study, examining changes in the rates of these terrible events in an attempt to connect them to the shooters’ covariates or to policy and societal changes. 
Keywords:  Time-to-event Data, Bayesian Analyses, Piecewise Exponential, Reversible Jump, Mass Shooting. 
JEL:  C11 C22 
Date:  2016–11–24 
URL:  http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2016_24&r=ecm 
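A minimal sketch of the piecewise-constant (piecewise exponential) hazard estimate the abstract refers to, under the simplifying assumption that every observed time is an event time (no censoring); the paper's Bayesian reversible-jump machinery for choosing the changepoints is not shown.

```python
import numpy as np

def piecewise_hazard(times, cuts):
    # MLE of a piecewise-constant hazard: within each interval defined by
    # `cuts`, the hazard estimate is
    #   (number of events in the interval) / (exposure time in the interval).
    edges = np.concatenate([[0.0], np.asarray(cuts, float), [np.inf]])
    rates = []
    for a, b in zip(edges[:-1], edges[1:]):
        events = np.sum((times >= a) & (times < b))
        exposure = np.sum(np.clip(times, a, b) - a)   # time at risk in [a,b)
        rates.append(events / exposure if exposure > 0 else 0.0)
    return np.array(rates)
```

A changepoint analysis then asks whether the estimated rates differ materially across intervals, with the cut locations themselves treated as unknown in the paper's Bayesian formulation.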
By:  Yuta Yamauchi (Graduate School of Economics, The University of Tokyo); Yasuhiro Omori (Faculty of Economics, The University of Tokyo) 
Abstract:  Although stochastic volatility and GARCH models have been successful in describing the volatility dynamics of univariate asset returns, their natural extension to multivariate models with dynamic correlations has been difficult due to several major problems. First, there are too many parameters to estimate if the available data are only daily returns, which results in unstable estimates. One solution to this problem is to incorporate additional observations based on intraday asset returns, such as realized covariances. Second, however, since multivariate asset returns are not traded synchronously, we have to use the largest time interval over which all asset returns are observed to compute the realized covariance matrices, so we fail to make full use of the available intraday information when there are less frequently traded assets. Third, it is not straightforward to guarantee that the estimated (and the realized) covariance matrices are positive definite. Our contributions are: (1) we obtain stable parameter estimates for dynamic correlation models using the realized measures; (2) we make full use of intraday information by using pairwise realized correlations; (3) the covariance matrices are guaranteed to be positive definite; (4) we avoid the arbitrariness of the ordering of asset returns; (5) we propose a flexible correlation structure model (e.g., setting some correlations to be identically zero if necessary); and (6) we propose a parsimonious specification for the leverage effect. Our proposed models are applied to daily returns of nine U.S. stocks together with their realized volatilities and pairwise realized correlations, and are shown to outperform the existing models with regard to portfolio performance. 
Date:  2016–11 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2016cf1029&r=ecm 
By:  D.S.G. Pollock 
Abstract:  Discrete-time ARMA processes can be placed in a one-to-one correspondence with a set of continuous-time processes that are bounded in frequency by the Nyquist value of π radians per sample period. It is well known that, if data are sampled from a continuous process whose maximum frequency exceeds the Nyquist value, then there will be a problem of aliasing. However, if the sampling is too rapid, then other problems will arise that may cause the ARMA estimates to be severely biased. The paper reveals the nature of these problems and shows how they may be overcome. 
Keywords:  ARMA Modelling, Stochastic Differential Equations, Frequency-Limited Stochastic Processes, Oversampling 
JEL:  C22 C32 E32 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:17/03&r=ecm 
By:  Christoph Wunderer 
Abstract:  Asset correlations play an important role in credit portfolio modelling. One possible data source for their estimation is default time series. This study investigates the systematic error that is made if the exposure pool underlying a default time series is assumed to be homogeneous when in reality it is not. We find that the asset correlation will always be underestimated if homogeneity with respect to the probability of default (PD) is wrongly assumed, and the error grows with the spread of the PD within the exposure pool. If the exposure pool is inhomogeneous with respect to the asset correlation itself, then the error may go in either direction, but for most PD and asset correlation ranges relevant in practice the asset correlation is systematically underestimated. Both effects stack up, and the error tends to become even larger if in addition we assume a negative correlation between asset correlation and PD within the exposure pool, an assumption that is plausible in many circumstances and consistent with the Basel RWA formula. It is argued that the generic inhomogeneity effect described in this paper is one of the reasons why asset correlations measured from default data tend to be lower than asset correlations derived from asset value data. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.02028&r=ecm 
By:  Fornaro, Paolo; Luomaranta, Henri; Saarinen, Lauri 
Abstract:  We adopt a series of shrinkage and factor-analytic methodologies to compute nowcasts of the main Finnish turnover indexes, using continuously accumulating firm-level data. We show that the estimates based on large-dimensional models provide an accurate and timelier alternative to the ones currently produced by Statistics Finland, even after taking into account data revisions. In particular, we find that the turnovers for the service sector can be estimated with high accuracy five days after the reference month has ended, giving more accurate and faster predictions compared to the first official internal release. For other sectors, the large-dimensional models provide good nowcasting performance, even though there is a timeliness-accuracy trade-off. Finally, we propose a factor-based methodology to improve the accuracy of the current flash estimates by imputing part of the data sources, and find that we are able to provide better predictions in a more expedited fashion for all sectors of interest. 
Keywords:  Dynamic factor models, firm-level data, nowcasting, shrinkage 
JEL:  C31 C53 C55 
Date:  2017–01–10 
URL:  http://d.repec.org/n?u=RePEc:rif:wpaper:46&r=ecm 