Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
2015-01-19
Sune Karlsson

A simple new test for slope homogeneity in panel data models with interactive effects
http://d.repec.org/n?u=RePEc:pra:mprapa:60795&r=ecm
We consider the problem of testing for slope homogeneity in high-dimensional panel data models with cross-sectionally correlated errors. We propose a Swamy-type test for slope homogeneity that incorporates interactive fixed effects, and we show that the proposed test statistic is asymptotically normal. Our test allows the explanatory variables to be correlated with the unobserved factors, the factor loadings, or both. Monte Carlo simulations demonstrate that the proposed test has good size control and good power.
Ando, Tomohiro; Bai, Jushan
2014-12-21
Keywords: Cross-sectional dependence, Endogenous predictors, Slope homogeneity

Estimation and inference of FAVAR models
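As a rough illustration of the Swamy-type idea behind the slope-homogeneity test described above, the sketch below computes a weighted dispersion statistic of unit-by-unit OLS slopes around a pooled estimate in a simulated panel. It omits the paper's interactive fixed effects, and the chi-square standardization is the textbook approximation, not the paper's statistic:

```python
import numpy as np

# Toy panel: y_it = beta_i * x_it + e_it, with homogeneous slopes under the null.
rng = np.random.default_rng(0)
N, T = 50, 100
x = rng.normal(size=(N, T))
y = 1.0 * x + rng.normal(size=(N, T))

# Unit-by-unit OLS slopes and their precision weights.
beta_hat = (x * y).sum(axis=1) / (x * x).sum(axis=1)
resid = y - beta_hat[:, None] * x
sigma2 = (resid ** 2).sum(axis=1) / (T - 1)
w = (x * x).sum(axis=1) / sigma2

# Weighted dispersion of the unit slopes around the pooled (weighted) estimate.
beta_pooled = (w * beta_hat).sum() / w.sum()
S = float((w * (beta_hat - beta_pooled) ** 2).sum())

# Under homogeneity, S is approximately chi2(N - 1), so this standardized
# version is approximately standard normal.
Delta = (S - (N - 1)) / np.sqrt(2 * (N - 1))
```

A large positive Delta would point against slope homogeneity; here the data satisfy the null, so Delta should be moderate.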
http://d.repec.org/n?u=RePEc:pra:mprapa:60960&r=ecm
The factor-augmented vector autoregressive (FAVAR) model, first proposed by Bernanke, Boivin, and Eliasz (2005, QJE), is now widely used in macroeconomics and finance. In this model, observable and unobservable factors jointly follow a vector autoregressive process, which in turn drives the comovement of a large number of observable variables. We study the identification restrictions in the presence of observable factors. We propose a likelihood-based two-step method to estimate the FAVAR model that explicitly accounts for factors being partially observed. We then provide an inferential theory for the estimated factors, factor loadings, and dynamic parameters in the VAR process. We show how and why the limiting distributions differ from existing results.
Bai, Jushan; Li, Kunpeng; Lu, Lina
2014-12
Keywords: high-dimensional analysis; identification restrictions; inferential theory; likelihood-based analysis; VAR; impulse response

Poisson QMLE of count time series models
http://d.repec.org/n?u=RePEc:pra:mprapa:59804&r=ecm
Regularity conditions are given for the consistency of the Poisson quasi-maximum likelihood estimator of the conditional mean parameter of a count time series. The asymptotic distribution of the estimator is studied both when the parameter belongs to the interior of the parameter space and when it lies at the boundary. Tests for the significance of the parameters and for a constant conditional mean are deduced. Applications to specific INAR and INGARCH models are considered. Numerical illustrations, based on Monte Carlo simulations and real data series, are provided.
Ahmad, Ali; Francq, Christian
2014-11
Keywords: Boundary of the parameter space; Consistency and asymptotic normality; Integer-valued AR and GARCH models; Non-normal asymptotic distribution; Poisson quasi-maximum likelihood estimator; Time series of counts

Asymptotic Size of Kleibergen's LM and Conditional LR Tests for Moment Condition Models
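A minimal sketch of the Poisson QMLE idea from the count time series entry above, for a toy INGARCH-type conditional mean lambda_t = omega + alpha * y_{t-1}. The model, parameter values, and optimizer settings are illustrative assumptions, not the paper's setup; the point is only that maximizing the Poisson quasi-likelihood recovers the conditional mean parameters:

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a count series whose conditional mean is omega + alpha * y_{t-1}.
rng = np.random.default_rng(1)
omega_true, alpha_true = 2.0, 0.4
T = 2000
y = np.empty(T)
y[0] = omega_true / (1 - alpha_true)
for t in range(1, T):
    y[t] = rng.poisson(omega_true + alpha_true * y[t - 1])

def neg_qll(theta):
    """Negative Poisson quasi-log-likelihood (constant terms dropped)."""
    omega, alpha = theta
    lam = omega + alpha * y[:-1]          # conditional mean for t = 1..T-1
    return -np.sum(y[1:] * np.log(lam) - lam)

res = minimize(neg_qll, x0=[1.0, 0.1], method="L-BFGS-B",
               bounds=[(1e-6, None), (0.0, 0.99)])
omega_hat, alpha_hat = res.x
```

The QMLE remains consistent for the conditional mean parameters even when the data are not truly Poisson, which is the estimator's main appeal.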
http://d.repec.org/n?u=RePEc:cwl:cwldpp:1977&r=ecm
An influential paper by Kleibergen (2005) introduces Lagrange multiplier (LM) and conditional likelihood ratio-like (CLR) tests for nonlinear moment condition models. These procedures aim to have good size performance even when the parameters are unidentified or poorly identified. However, the asymptotic size and similarity (in a uniform sense) of these procedures have not been determined in the literature. This paper does so. It shows that the LM test has correct asymptotic size and is asymptotically similar for a suitably chosen parameter space of null distributions. It shows that the CLR tests also have these properties when the dimension p of the unknown parameter theta equals 1. When p ≥ 2, however, the asymptotic size properties are found to depend on how the conditioning statistic, upon which the CLR tests depend, is weighted. Two weighting methods have been suggested in the literature. The paper shows that the CLR tests are guaranteed to have correct asymptotic size when p ≥ 2 with one weighting method, combined with the Robin and Smith (2000) rank statistic. The paper also determines a formula for the asymptotic size of the CLR test with the other weighting method. However, the results of the paper do not guarantee correct asymptotic size when p ≥ 2 with the other weighting method, because two key sample quantities are not necessarily asymptotically independent under some identification scenarios. Analogous results for confidence sets are provided. Even for the special case of a linear instrumental variable regression model with two or more right-hand-side endogenous variables, the results of the paper are new to the literature.
Donald W. K. Andrews; Patrik Guggenberger
2014-12
Keywords: Asymptotics, Conditional likelihood ratio test, Confidence set, Identification, Inference, Lagrange multiplier test, Moment conditions, Robust, Test, Weak identification, Weak instruments

Estimating Multivariate Latent-Structure Models
http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/2i27dd3b6h94aarftq0slq652a&r=ecm
A constructive proof of identification of multilinear decompositions of multiway arrays is presented. It can be applied to show identification in a variety of multivariate latent structures, such as finite-mixture models and hidden Markov models. The key step in showing identification is the joint diagonalization of a set of matrices in the same non-orthogonal basis. An estimator of the latent-structure model may then be based on a sample version of this simultaneous-diagonalization problem. Simple algorithms are available for computation. Asymptotic theory is derived for this joint approximate-diagonalization estimator.
Jean-Marc Robin; Stéphane Bonhomme; Koen Jochmans
2014-12
Keywords: hidden Markov model, finite mixture model, latent structure, multilinear restrictions, multivariate data, nonparametric estimation, simultaneous matrix diagonalization

Identification- and Singularity-Robust Inference for Moment Condition Models
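To make the joint-diagonalization step in the latent-structure entry above concrete: when exactly two noiseless matrices share a common non-orthogonal basis A, the eigenvectors of M1 @ inv(M2) recover A up to scaling and permutation. This two-matrix, noise-free special case is only an illustration; the paper's estimator jointly approximately diagonalizes a whole set of noisy sample matrices:

```python
import numpy as np

# Two matrices sharing a common non-orthogonal basis: M_k = A diag(d_k) A^{-1}.
rng = np.random.default_rng(2)
r = 4
A = rng.normal(size=(r, r))
d1, d2 = rng.uniform(1, 2, r), rng.uniform(1, 2, r)
M1 = A @ np.diag(d1) @ np.linalg.inv(A)
M2 = A @ np.diag(d2) @ np.linalg.inv(A)

# Eigenvectors of M1 M2^{-1} give the common basis (columns up to scale/order).
ratios, V = np.linalg.eig(M1 @ np.linalg.inv(M2))

# Check: V diagonalizes both matrices simultaneously.
D1 = np.linalg.inv(V) @ M1 @ V
D2 = np.linalg.inv(V) @ M2 @ V
off1 = float(np.abs(D1 - np.diag(np.diag(D1))).max())
off2 = float(np.abs(D2 - np.diag(np.diag(D2))).max())
```

With sampling noise, no exact joint diagonalizer exists, which is why the paper works with an approximate-diagonalization criterion instead.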
http://d.repec.org/n?u=RePEc:cwl:cwldpp:1978&r=ecm
This paper introduces two new identification- and singularity-robust conditional quasi-likelihood ratio (SR-CQLR) tests and a new identification- and singularity-robust Anderson and Rubin (1949) (SR-AR) test for linear and nonlinear moment condition models. The paper shows that the tests have correct asymptotic size and are asymptotically similar (in a uniform sense) under very weak conditions. For two of the three tests, all that is required is that the moment functions and their derivatives have 2 + gamma bounded moments for some gamma > 0 in i.i.d. scenarios. In stationary strong mixing time series cases, the same condition suffices, but the magnitude of gamma is related to the magnitude of the strong mixing numbers. For the third test, slightly stronger moment conditions and a (standard, though restrictive) multiplicative structure on the moment functions are imposed. For all three tests, no conditions are placed on the expected Jacobian of the moment functions, on the eigenvalues of the variance matrix of the moment functions, or on the eigenvalues of the expected outer product of the (vectorized) orthogonalized sample Jacobian of the moment functions. The two SR-CQLR tests are shown to be asymptotically efficient in a GMM sense under strong and semi-strong identification (for all k ≥ p, where k and p are the numbers of moment conditions and parameters, respectively). The two SR-CQLR tests reduce asymptotically to Moreira's CLR test when p = 1 in the homoskedastic linear IV model. The first SR-CQLR test, which relies on the multiplicative structure on the moment functions, also does so for p ≥ 2.
Donald W. K. Andrews; Patrik Guggenberger
2015-01
Keywords: Asymptotics, Conditional likelihood ratio test, Confidence set, Identification, Inference, Moment conditions, Robust, Singular variance, Test, Weak identification, Weak instruments

The Effect of Measurement Error in the Sharp Regression Discontinuity Design
http://d.repec.org/n?u=RePEc:kyo:wpaper:910&r=ecm
This paper develops a nonparametric analysis for the sharp regression discontinuity (RD) design in which the continuous forcing variable may contain measurement error. We show that if the observed forcing variable contains measurement error, this error causes severe identification bias for the average treatment effect given the "true" forcing variable at the discontinuity point. The bias is critical in the sense that even if there is a significant causal effect, researchers are misled to the incorrect conclusion of no causal effect. Furthermore, the measurement error causes the conditional probability of treatment to be continuous at the threshold. To investigate the average treatment effect using the mismeasured forcing variable, we propose an approximation based on the small error variance approximation (SEVA) originally developed by Chesher (1991). Based on the SEVA, the average treatment effect is approximated up to the order of the variance of the measurement error by an identified parameter when the variance is small. We also develop an estimation procedure for the parameter that approximates the average treatment effect, based on local polynomial regressions and kernel density estimation. Monte Carlo simulations reveal the severity of the identification bias caused by the measurement error and demonstrate that our approximate analysis is successful.
Takahide Yanagi
2014-12
Keywords: Panel data; Regression discontinuity designs; classical measurement error; approximation; nonparametric methods; local polynomial regressions

On Bias in the Estimation of Structural Break Points
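For reference, the error-free benchmark that the measurement-error analysis above perturbs is the standard sharp RD estimator: local linear fits on each side of the cutoff, with the jump in intercepts as the treatment effect. The data-generating process, bandwidth, and triangular kernel below are illustrative assumptions:

```python
import numpy as np

# Sharp RD with a correctly measured forcing variable x and cutoff at 0.
rng = np.random.default_rng(3)
n, tau, h = 5000, 2.0, 0.5
x = rng.uniform(-1, 1, n)
y = 1.0 + 0.8 * x + tau * (x >= 0) + rng.normal(scale=0.5, size=n)

def intercept_at_cutoff(xs, ys, h):
    """Local linear fit around x = 0 with a triangular kernel; returns the intercept."""
    w = np.clip(1 - np.abs(xs) / h, 0, None)
    X = np.column_stack([np.ones_like(xs), xs])
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ ys)[0]

right = x >= 0
tau_hat = intercept_at_cutoff(x[right], y[right], h) \
    - intercept_at_cutoff(x[~right], y[~right], h)
```

The paper's point is that once x is observed with error, the discontinuity in the observed data is smoothed away and this estimand no longer identifies the treatment effect.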
http://d.repec.org/n?u=RePEc:siu:wpaper:22-2014&r=ecm
Based on the Girsanov theorem, this paper obtains the exact finite-sample distribution of the maximum likelihood estimator of structural break points in a continuous-time model. The exact finite-sample theory suggests that, in empirically realistic situations, there is a strong finite-sample bias in the estimator of structural break points. This property is shared by the least squares estimators of both the absolute structural break point and the fractional structural break point in discrete-time models. A simulation-based method based on the indirect estimation approach is proposed to reduce the bias in both continuous-time and discrete-time models. Monte Carlo studies show that the indirect estimation method achieves substantial bias reductions. However, since the binding function has a slope less than one, the variance of the indirect estimator is larger than that of the original estimator.
Liang Jiang; Xiaohu Wang; Jun Yu
2014-12
Keywords: Structural change, Bias reduction, Indirect estimation, Break point

An Instrumental Variable Approach for Identification and Estimation with Nonignorable Nonresponse
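The indirect-estimation logic in the break-point entry above can be sketched in a toy discrete-time mean-shift model: estimate the break fraction by least squares, simulate the estimator's mean as a function of the true fraction (the binding function), and invert. Everything here (sample size, break size, grid, replication count) is an illustrative assumption, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(4)
T, delta = 100, 0.8

def ls_break_fraction(y):
    """Least-squares break fraction: the split minimizing total within-segment SSR."""
    n = y.size
    ks = np.arange(10, n - 10)                  # candidate break dates, 10% trimming
    cs, cs2 = np.cumsum(y), np.cumsum(y ** 2)
    ssr_left = cs2[ks - 1] - cs[ks - 1] ** 2 / ks
    ssr_right = (cs2[-1] - cs2[ks - 1]) - (cs[-1] - cs[ks - 1]) ** 2 / (n - ks)
    return ks[np.argmin(ssr_left + ssr_right)] / n

def binding(f, reps=100):
    """Simulated mean of the LS estimator when the true break fraction is f."""
    k0 = int(T * f)
    sims = [ls_break_fraction(delta * (np.arange(T) >= k0) + rng.normal(size=T))
            for _ in range(reps)]
    return float(np.mean(sims))

# One 'observed' sample with true break fraction 0.3.
y_obs = delta * (np.arange(T) >= 30) + rng.normal(size=T)
f_ls = ls_break_fraction(y_obs)

# Indirect estimate: the f whose binding-function value best matches f_ls.
grid = np.linspace(0.2, 0.8, 13)
b = np.array([binding(f) for f in grid])
f_indirect = float(grid[np.argmin(np.abs(b - f_ls))])
```

Because the binding function is flatter than the 45-degree line, inverting it removes bias but amplifies the estimator's sampling noise, which is the variance trade-off the abstract mentions.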
http://d.repec.org/n?u=RePEc:mpr:mprres:a9593fac2c9746f486d2162f9deede30&r=ecm
Sheng Wang; Jun Shao; Jae Kwang Kim
2014-07-01
Keywords: Consistency and asymptotic normality, generalized method of moments, missing not at random, nonparametric distribution, nonresponse instrument, parametric propensity

A Bayesian Approach to Modelling Bivariate Time-Varying Cointegration and Cointegrating Rank
http://d.repec.org/n?u=RePEc:iae:iaewps:wp2014n27&r=ecm
A bivariate model that allows for both a time-varying cointegrating matrix and time-varying cointegrating rank is presented. The model addresses the issue that, in real data, the validity of a constant cointegrating relationship may be questionable. The model nests the sub-models implied by alternative cointegrating matrix ranks and allows for transitions between stationarity and non-stationarity, and between cointegrating and non-cointegrating relationships, in accordance with the observed behaviour of the data. A Bayesian test of cointegration is also developed. The model is used to assess the validity of the Fisher effect and is also applied to equity market data.
Chew Lian Chua; Sarantis Tsiaplias
2014-12
Keywords: Error correction models, singular value decomposition, cointegration tests

Bayesian Estimation for Partially Linear Models with an Application to Household Gasoline Consumption
http://d.repec.org/n?u=RePEc:msh:ebswps:2014-28&r=ecm
A partially linear model is often estimated in a two-stage procedure, which involves estimating the nonlinear component conditional on initially estimated linear coefficients. We propose a sampling procedure that simultaneously estimates the linear coefficients and the bandwidths involved in the Nadaraya-Watson estimator of the nonlinear component. The performance of this sampling procedure is demonstrated through Monte Carlo simulation studies. The proposed sampling algorithm is applied to partially linear models of gasoline consumption based on US household survey data. Contrary to the implausible price effects reported in the literature, we find a negative price effect on household gasoline consumption.
Haotian Chen; Xibin Zhang
2014
Keywords: backfitting least squares, bandwidth, household income, price elasticity, profile least squares, random-walk Metropolis

The Dynamic Striated Metropolis-Hastings Sampler for High-Dimensional Models
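The Nadaraya-Watson estimator at the heart of the partially linear entry above is a kernel-weighted average of the responses. A minimal sketch on simulated data with a fixed bandwidth h (the paper instead samples h jointly with the linear coefficients):

```python
import numpy as np

# Noisy observations of m(x) = sin(2*pi*x) on [0, 1].
rng = np.random.default_rng(5)
n, h = 500, 0.1
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

def nadaraya_watson(x0, x, y, h):
    """Gaussian-kernel weighted average of y around each evaluation point in x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.05, 0.95, 19)
m_hat = nadaraya_watson(grid, x, y, h)
```

The fit is sensitive to h (small h undersmooths, large h flattens the peaks), which is why treating the bandwidth as a parameter to be estimated, as the paper does, matters in practice.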
http://d.repec.org/n?u=RePEc:fip:fedawp:2014-21&r=ecm
Having efficient and accurate samplers for simulating the posterior distribution is crucial for Bayesian analysis. We develop a generic posterior simulator called the "dynamic striated Metropolis-Hastings (DSMH)" sampler. Grounded in the Metropolis-Hastings algorithm, it draws its strengths from both the equi-energy sampler and the sequential Monte Carlo sampler, while avoiding the weaknesses of the straight Metropolis-Hastings algorithm and of importance sampling. In particular, the DSMH sampler is able to cope with extremely irregular distributions full of winding ridges and multiple peaks, and it has the flexibility to take full advantage of parallelism on either desktop computers or clusters. The high-dimensional application studied in this paper provides a natural platform for putting generic samplers such as the DSMH sampler to the test.
Waggoner, Daniel F.; Wu, Hongwei; Zha, Tao
2014-11-01
Keywords: dynamic striation adjustments; simultaneous equations; Phillips curve; winding ridges; multiple peaks; independent striated draws; irregular posterior distribution; importance weights; tempered posterior density; effective sample size

Reasonable Sample Sizes for Convergence to Normality
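For contrast with the DSMH sampler described above, here is the straight random-walk Metropolis-Hastings baseline it improves on, run on a simple bimodal target. The target, step size, and chain length are illustrative assumptions; on genuinely irregular, high-dimensional posteriors this plain version is exactly what gets stuck:

```python
import numpy as np

rng = np.random.default_rng(6)

def log_target(x):
    """Unnormalized log density: equal mixture of N(-3, 1) and N(3, 1)."""
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

n_draws, step = 20000, 2.5
draws = np.empty(n_draws)
x, lp = 0.0, log_target(0.0)
accepted = 0
for i in range(n_draws):
    prop = x + step * rng.normal()
    lp_prop = log_target(prop)
    # Accept with probability min(1, target(prop)/target(x)).
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
        accepted += 1
    draws[i] = x
accept_rate = accepted / n_draws
```

In one dimension a moderate step size still lets the chain hop between the modes at -3 and 3; the tempering and striation layers of the DSMH sampler are designed for cases where it cannot.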
http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp714&r=ecm
The central limit theorem says that, provided an estimator fulfills certain weak conditions, the sampling distribution of the estimator converges to normality for reasonably large sample sizes. We propose a procedure to find out what a "reasonably large sample size" is. The procedure is based on the properties of Gini's mean difference decomposition. We show the results of implementations of the procedure using simulated datasets and data from the German Socio-Economic Panel.
Carsten Schröder; Shlomo Yitzhaki
2014
Keywords: central limit theorem, Gini's mean difference decomposition

Statistical Power for School-Based RCTs with Binary Outcomes
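The question posed in the entry above, how large a sample must be before the sampling distribution of an estimator looks normal, can be probed by brute-force simulation. The sketch below measures normality by the skewness of the simulated sampling distribution of a mean of exponentials; the paper's actual criterion is built on Gini's mean difference decomposition instead:

```python
import numpy as np

rng = np.random.default_rng(7)

def sampling_skewness(n, reps=4000):
    """Skewness of the sampling distribution of the mean of n exponentials."""
    means = rng.exponential(size=(reps, n)).mean(axis=1)
    z = (means - means.mean()) / means.std()
    return float((z ** 3).mean())

# Theory: skewness of the mean of n exponentials is 2 / sqrt(n),
# so it should shrink markedly as n grows.
skew_small = sampling_skewness(5)
skew_large = sampling_skewness(500)
```

With a heavily skewed parent distribution, n = 5 leaves clear asymmetry while n = 500 is close to symmetric, illustrating why "reasonably large" depends on the parent distribution, which is precisely what the proposed procedure tries to quantify.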
http://d.repec.org/n?u=RePEc:mpr:mprres:eca22893011a4774852c1cc58ae05f38&r=ecm
This article develops a new approach for calculating appropriate sample sizes for school-based randomized controlled trials (RCTs) with binary outcomes, using logit models with and without baseline covariates. The theoretical analysis develops sample size formulas for clustered designs in which random assignment is at the school or teacher level, using generalized estimating equation methods. The key finding is that the samples of 40 to 60 schools typically included in clustered RCTs for student test score or behavioral scale outcomes will often be insufficient for binary outcomes.
Peter Z. Schochet
2013-06-30
Keywords: Statistical Power, Binary Outcomes, Clustered Designs, Randomized Controlled Trials, RCTs, School-based

A Nonparametric Partially Identified Estimator for Equivalence Scales
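A back-of-the-envelope version of the sample-size question in the entry above uses the textbook two-proportion formula inflated by the clustering design effect 1 + (m - 1) * icc. The outcome rates, cluster size, and intraclass correlation below are made-up inputs, and this is the crude benchmark, not the paper's GEE-based logit formulas:

```python
import numpy as np
from scipy.stats import norm

p0, p1 = 0.50, 0.60          # control and treatment outcome rates
m, icc = 60, 0.15            # students per school, intraclass correlation
alpha, power = 0.05, 0.80

# Per-arm n under simple random sampling (two-sided test of p1 vs p0).
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_srs = z ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2

# Inflate by the design effect for school-level assignment, convert to schools.
deff = 1 + (m - 1) * icc
schools_per_arm = int(np.ceil(n_srs * deff / m))
total_schools = 2 * schools_per_arm
```

Even this rough calculation lands well above the 40-to-60-school range, consistent with the paper's finding that typical clustered RCT sizes are often insufficient for binary outcomes.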
http://d.repec.org/n?u=RePEc:rwi:repape:0526&r=ecm
Methods for estimating equivalence scales usually rely on rather strong identifying assumptions. This paper considers a partially identified estimator for equivalence scales derived from the potential-outcomes framework and estimated with nonparametric methods, which requires only mild assumptions. Instead of point estimates, the method yields only lower and upper bounds on equivalence scales. Results of an analysis using German expenditure data show that the range implied by these bounds is rather wide but can be narrowed using additional covariates.
Christian Dudel
2014-12
Keywords: Household equivalence scale; partial identification; matching estimator

A Nonparametric Method for Term Structure Fitting with Automatic Smoothing
http://d.repec.org/n?u=RePEc:hig:wpaper:39/fe/2014&r=ecm
We present a new nonparametric method for fitting the term structure of interest rates from bond prices. Our method is a variant of the smoothing spline approach, but within our framework we are able to determine the smoothing coefficient automatically from the data using generalized cross-validation or maximum likelihood estimates. We present an effective numerical algorithm for simultaneously finding the term structure and the optimal smoothing coefficient. Finally, we compare the proposed nonparametric fitting method with other parametric and nonparametric methods to show its superior performance.
Victor A. Lapshin; Vadim Ya. Kaushanskiy
2014
Keywords: regularization, smoothing splines, term structure of interest rates

Range-Based Volatility Estimation and Forecasting
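The automatic-smoothing idea in the term-structure entry above, choosing the smoothing coefficient by generalized cross-validation (GCV), can be sketched with a discrete Whittaker-type smoother (a second-difference penalty, a close relative of the smoothing spline). The data here are a generic noisy curve, not bond prices, and the GCV grid is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
t = np.linspace(0, 1, n)
y = np.exp(-2 * t) + rng.normal(scale=0.05, size=n)   # noisy smooth curve

D = np.diff(np.eye(n), n=2, axis=0)                   # second-difference operator
I = np.eye(n)

def gcv(lmbda):
    """GCV score n * RSS / (n - tr(S))^2 for the smoother S(lmbda)."""
    S = np.linalg.solve(I + lmbda * D.T @ D, I)       # hat matrix of the smoother
    fit = S @ y
    rss = ((y - fit) ** 2).sum()
    return n * rss / (n - np.trace(S)) ** 2

lambdas = 10.0 ** np.arange(-2, 6)
scores = np.array([gcv(l) for l in lambdas])
lmbda_star = float(lambdas[np.argmin(scores)])
fit = np.linalg.solve(I + lmbda_star * D.T @ D, y)
```

GCV trades off residual fit against the effective degrees of freedom tr(S), so the smoothing coefficient is picked by the data rather than by hand, which is the "automatic" part of the paper's title.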
http://d.repec.org/n?u=RePEc:fau:wpaper:wp2014_34&r=ecm
In this paper, we analyze new possibilities for predicting daily ranges, i.e., differences between daily high and low prices. We empirically assess the efficiency gains in volatility estimation from using range-based estimators as opposed to simple daily ranges, and we explore the use of these more efficient volatility measures as predictors of daily ranges. The array of models used in this paper includes the heterogeneous autoregressive (HAR) model, the conditional autoregressive range model, and a vector error-correction model of daily highs and lows. Contrary to intuition, models based on cointegration of daily highs and lows fail to produce good-quality out-of-sample forecasts of daily ranges. The best one-day-ahead forecasts of daily ranges are produced by a realized-range-based HAR model with a GARCH volatility-of-volatility component.
Daniel Bencik
2014-12
Keywords: volatility, returns, futures contracts, cointegration, prediction

Demand Analysis Using Strategic Reports: An Application to a School Choice Mechanism
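The HAR model named in the range-forecasting entry above is just an OLS regression of tomorrow's volatility measure on its daily, weekly (5-day), and monthly (22-day) trailing averages. The sketch below uses a simulated stand-in series rather than the paper's futures data:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 1000
v = np.abs(rng.normal(size=T)) + 0.5                  # stand-in volatility/range series

def trailing_mean(x, k):
    """Mean of the k most recent values; entry i covers x[i..i+k-1]."""
    c = np.cumsum(np.concatenate([[0.0], x]))
    return (c[k:] - c[:-k]) / k

# Align regressors known at time t with the target v[t+1], t = 21..T-2.
d = v[21:-1]                                          # daily lag v[t]
w = trailing_mean(v, 5)[17:-1]                        # mean of v[t-4..t]
m = trailing_mean(v, 22)[:-1]                         # mean of v[t-21..t]
target = v[22:]

X = np.column_stack([np.ones_like(d), d, w, m])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
one_step_fit = float(X[-1] @ beta)                    # in-sample one-step-ahead fit
```

On real volatility data the three horizon coefficients capture the slowly decaying persistence of volatility; on the i.i.d. stand-in series above they are close to zero by construction.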
http://d.repec.org/n?u=RePEc:nbr:nberwo:20775&r=ecm
Several school districts use assignment systems in which students have a strategic incentive to misrepresent their preferences. Indeed, we find evidence suggesting that reported preferences in Cambridge, MA respond to these incentives. Such strategizing can complicate the analysis of preferences. This paper develops a new method for estimating preferences in such environments. Our approach views the report made by a student as a choice of a probability distribution over assignments to various schools. We introduce a large class of mechanisms for which consistent estimation is feasible. We then study identification of a latent utility preference model under the assumption that agents play a Bayesian Nash equilibrium. Preferences are nonparametrically identified under either sufficient variation in choice environments or sufficient variation in a special regressor. We then propose a tractable estimation procedure for a parametric model based on Gibbs sampling. Estimates from Cambridge suggest that while 84% of students are assigned to their stated first choice, only 75% are assigned to their true first choice. The difference occurs because students avoid ranking competitive schools in favor of less competitive ones. Although the Cambridge mechanism is manipulable, we estimate that welfare for the average student would be lower under the popular deferred acceptance mechanism.
Nikhil Agarwal; Paulo Somaini
2014-12

Firm-Level Productivity Spillovers in China's Chemical Industry: A Spatial Hausman-Taylor Approach
http://d.repec.org/n?u=RePEc:max:cprwps:173&r=ecm
This paper assesses the role of intra-sectoral spillovers in total factor productivity across Chinese producers in the chemical industry. We use a rich panel dataset of 12,552 firms observed over the period 2004-2006 and model firm output as a function of skilled and unskilled labor, capital, materials, and broadly defined total factor productivity. The latter is a composite of observable factors, such as export market participation, foreign as well as public ownership, and the extent of accumulated intangible assets, and of unobservable total factor productivity. Despite its richness, our dataset lacks time variation in the number of skilled workers and in the variable indicating public ownership. We introduce spatial spillovers in total factor productivity through contextual effects of observable variables as well as spatial dependence of the disturbances. We extend the Hausman and Taylor (1981) estimator to account for spatial correlation in the error term. This approach permits estimating the effects of time-invariant variables, which are wiped out by the fixed effects estimator. While the original Hausman and Taylor (1981) estimator assumes homoskedastic error components, we provide spatial variants that allow for both homoskedasticity and heteroskedasticity. Monte Carlo results show that our estimation procedure performs well in small samples. We find evidence of positive spillovers across chemical manufacturers and a large, significant detrimental effect of public ownership on total factor productivity.
Badi H. Baltagi; Peter H. Egger
2014-12
Keywords: Technology Spillovers, Spatial econometrics, Panel data econometrics, Firm-level productivity, Chinese firms

Extending the Oaxaca-Blinder Decomposition to the Independent Double Hurdle Model: With Application to Parental Spending on Education in Malawi
http://d.repec.org/n?u=RePEc:pra:mprapa:60740&r=ecm
The study develops the Blinder-Oaxaca decomposition technique for the independent double hurdle model. The proposed decomposition is done at the aggregate level. Using the Second Malawi Integrated Household Survey (IHS2), the paper applies the proposed decomposition to explain the rural-urban difference in parental spending on own primary school children. The results show that at least 66% of the expenditure differential is explained by differences in characteristics between rural and urban households.
Mussa, Richard
2014
Keywords: Double Hurdle; Decomposition; Blinder-Oaxaca; Malawi
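The baseline that the double-hurdle extension above builds on is the aggregate (two-fold) Blinder-Oaxaca decomposition for a linear model: the mean outcome gap between two groups splits exactly into a part explained by differences in characteristics and an unexplained part. The simulated "urban"/"rural" groups and coefficients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)
nA, nB = 1500, 1500
XA = np.column_stack([np.ones(nA), rng.normal(1.0, 1, nA)])   # group A (e.g. urban)
XB = np.column_stack([np.ones(nB), rng.normal(0.0, 1, nB)])   # group B (e.g. rural)
yA = XA @ np.array([1.0, 2.0]) + rng.normal(size=nA)
yB = XB @ np.array([0.5, 1.5]) + rng.normal(size=nB)

# Group-specific OLS coefficients.
bA, *_ = np.linalg.lstsq(XA, yA, rcond=None)
bB, *_ = np.linalg.lstsq(XB, yB, rcond=None)

# Two-fold decomposition with group B as the reference coefficient vector:
# gap = (mean XA - mean XB) @ bB  +  mean XA @ (bA - bB).
gap = yA.mean() - yB.mean()
explained = (XA.mean(axis=0) - XB.mean(axis=0)) @ bB   # characteristics part
unexplained = XA.mean(axis=0) @ (bA - bB)              # coefficients part
```

With OLS and an intercept the two parts sum to the raw gap exactly; the paper's contribution is deriving an analogous aggregate split when the outcome is generated by an independent double hurdle model rather than a single linear equation.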