NEP: New Economics Papers on Econometrics
By: | Peter Robinson (Institute for Fiscal Studies and London School of Economics); Supachoke Thawornkaiwong |
Abstract: | Central limit theorems are developed for instrumental variables estimates of linear and semiparametric partly linear regression models for spatial data. General forms of spatial dependence and heterogeneity in explanatory variables and unobservable disturbances are permitted. We discuss estimation of the variance matrix, including estimates that are robust to disturbance heteroscedasticity and/or dependence. A Monte Carlo study of finite-sample performance is included. In an empirical example, the estimates and robust and non-robust standard errors are computed from Indian regional data, following tests for spatial correlation in disturbances, and nonparametric regression fitting. Some final comments discuss modifications and extensions. (An illustrative sketch follows this entry.)
Date: | 2011–02 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:08/11&r=ecm |
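The paper's setting is spatial, but the generic computation underneath is instrumental variables estimation with a robust variance matrix. A minimal Python sketch of 2SLS with a White-type heteroscedasticity-robust sandwich variance, assuming generic array inputs; the paper's corrections for spatial dependence in the disturbances are not implemented here:

```python
import numpy as np

def tsls_robust(y, X, Z):
    """2SLS with a heteroscedasticity-robust (sandwich) variance estimate.
    y: (n,) outcomes; X: (n, k) regressors; Z: (n, l) instruments, l >= k."""
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)    # first-stage fitted values
    beta = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)  # 2SLS coefficient estimates
    u = y - X @ beta                                # 2SLS residuals
    A = np.linalg.inv(Xhat.T @ X)
    meat = Xhat.T @ (Xhat * (u ** 2)[:, None])      # White-type middle matrix
    V = A @ meat @ A.T                              # robust variance of beta
    return beta, V
```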
By: | Rong Liu; Lijian Yang; Wolfgang Karl Härdle |
Abstract: | Generalized additive models (GAM) are multivariate nonparametric regressions for non-Gaussian responses including binary and count data. We propose a spline-backfitted kernel (SBK) estimator for the component functions. Our results hold for weakly dependent data and we prove oracle efficiency. The SBK technique is both computationally expedient and theoretically reliable, and thus usable for analyzing high-dimensional time series. Inference can be made on component functions based on asymptotic normality. Simulation evidence strongly corroborates the asymptotic theory. (An illustrative sketch follows this entry.)
Keywords: | Bandwidths, B spline, knots, link function, mixing, Nadaraya-Watson estimator |
JEL: | C00 C14 J01 J31 |
Date: | 2011–03 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-016&r=ecm |
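The iterative device behind additive-model estimators of this kind is backfitting. Below is a minimal sketch of plain backfitting with Nadaraya-Watson smoothers for the identity-link (Gaussian-response) special case; the paper's spline pre-smoothing step and non-Gaussian link functions are omitted:

```python
import numpy as np

def nw(xg, x, r, h):
    """Nadaraya-Watson smoother of responses r on x, evaluated at points xg."""
    K = np.exp(-0.5 * ((xg[:, None] - x[None, :]) / h) ** 2)
    return (K @ r) / K.sum(axis=1)

def backfit(y, X, h=0.2, iters=25):
    """Backfitting for the additive model y = c + sum_d f_d(X[:, d]) + error."""
    n, D = X.shape
    c = y.mean()
    F = np.zeros((n, D))                         # fitted component functions
    for _ in range(iters):
        for d in range(D):
            r = y - c - F.sum(axis=1) + F[:, d]  # partial residuals for f_d
            fd = nw(X[:, d], X[:, d], r, h)
            F[:, d] = fd - fd.mean()             # center for identifiability
    return c, F
```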
By: | Peter Robinson (Institute for Fiscal Studies and London School of Economics) |
Abstract: | Power law or generalized polynomial regressions with unknown real-valued exponents and coefficients, and weakly dependent errors, are considered for observations over time, space or space-time. Consistency and asymptotic normality of nonlinear least squares estimates of the parameters are established. The joint limit distribution is singular, but can be used as a basis for inference on either exponents or coefficients. We discuss issues of implementation, efficiency, potential for improved estimation, and possibilities of extension to more general or alternative trending models, and to allow for irregularly-spaced data or heteroscedastic errors; though it focusses on a particular model to fix ideas, the paper can be viewed as offering machinery useful in developing inference for a variety of models in which power law trends are a component. Indeed, the paper also makes a contribution that is potentially relevant to many other statistical models: our problem is one of many in which consistency of a vector of parameter estimates (which converge at different rates) cannot be established by the usual techniques for coping with implicitly-defined extremum estimates, but requires a more delicate treatment; we present a generic consistency result. (An illustrative sketch follows this entry.)
Date: | 2011–02 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:09/11&r=ecm |
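For a concrete flavour of the estimation problem, here is a minimal sketch of nonlinear least squares for a single power-law trend with an unknown real-valued exponent, on synthetic data; the parameter values below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_trend(t, b0, b1, theta):
    """Power-law trend with intercept and unknown real-valued exponent theta."""
    return b0 + b1 * t ** theta

t = np.arange(1.0, 301.0)
rng = np.random.default_rng(42)
y = power_trend(t, 1.0, 0.5, 0.7) + rng.normal(0.0, 1.0, t.size)

# Nonlinear least squares; note that exponent and coefficient estimates
# converge at different rates, the delicate point addressed in the paper.
(b0, b1, theta), cov = curve_fit(power_trend, t, y, p0=[0.0, 1.0, 0.5])
```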
By: | Pitarakis, J |
Abstract: | In this paper we develop a test of the joint null hypothesis of parameter stability and a unit root within an ADF style autoregressive specification whose entire parameter structure is potentially subject to a structural break at an unknown time period. The maintained underlying null model is a linear autoregression with a unit root, stationary regressors and a constant term. As a byproduct we also obtain the limiting behaviour of a related Wald statistic designed to solely test the null of parameter stability in an environment with a unit root. These distributions are free of nuisance parameters and easily tabulated. The finite sample properties of our tests are subsequently assessed through a series of simulations. (An illustrative sketch follows this entry.)
Keywords: | Structural Breaks; Unit Roots; Nonlinear Dynamics |
JEL: | C10 C22 |
Date: | 2011–02 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:29189&r=ecm |
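A minimal sketch of the kind of computation involved: an ADF-style regression whose entire coefficient vector is interacted with a break dummy, with the Wald statistic for the shift coefficients maximized over trimmed candidate break dates. The helper below is purely illustrative; valid inference requires the nonstandard limit distributions tabulated in the paper:

```python
import numpy as np

def sup_wald_break(y, trim=0.15):
    """Sup-Wald statistic for a one-time break in all parameters of
    dy_t = a + rho * y_{t-1} + phi * dy_{t-1} + e_t."""
    dy = np.diff(y)
    v = dy[1:]                                           # dependent variable
    Z = np.column_stack([np.ones(v.size), y[1:-1], dy[:-1]])
    n, k = Z.shape
    stats = []
    for tb in range(int(trim * n), int((1 - trim) * n)):
        D = (np.arange(n) >= tb).astype(float)[:, None]  # post-break dummy
        X = np.column_stack([Z, Z * D])                  # full interaction
        b, *_ = np.linalg.lstsq(X, v, rcond=None)
        u = v - X @ b
        s2 = u @ u / (n - X.shape[1])
        V = s2 * np.linalg.inv(X.T @ X)
        d = b[k:]                                        # shift coefficients
        stats.append(d @ np.linalg.solve(V[k:, k:], d))
    return max(stats)
```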
By: | José Murteira (Faculdade de Economia Universidade de Coimbra / CEMAPRE); Esmeralda Ramalho (Departamento de Economia and CEFAGE-UE, Universidade de Évora); Joaquim Ramalho (Departamento de Economia and CEFAGE-UE, Universidade de Évora) |
Abstract: | A test for heteroskedasticity within the context of classical linear regression can be based on the difference between Wald statistics in heteroskedasticity-robust and nonrobust forms. The resulting statistic is asymptotically distributed, under the null hypothesis of homoskedasticity, as chi-squared with one degree of freedom. The power of this test is sensitive to the choice of parametric restriction on which the Wald statistics are based, so the supremum of a range of individual test statistics is proposed. Two versions of a supremum-based test are considered: the first version, easier to implement, does not have a known asymptotic null distribution, so the bootstrap is employed in order to assess its behaviour and enable meaningful conclusions from its use in applied work. The second version has a known asymptotic distribution and, in some cases, is asymptotically pivotal under the null. A small simulation study illustrates the implementation and finite-sample performance of both versions of the test. (An illustrative sketch follows this entry.)
Keywords: | Heteroskedasticity testing; White test; Wald test; Supremum |
JEL: | C12 C21 |
Date: | 2011–01 |
URL: | http://d.repec.org/n?u=RePEc:gmf:wpaper:2011-05&r=ecm |
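The two ingredients of the test are the nonrobust and the heteroskedasticity-robust Wald statistics for the same linear restriction. A minimal sketch that computes both forms; the test itself is based on their difference, with the normalization and the supremum construction as detailed in the paper:

```python
import numpy as np

def wald_pair(y, X, R, r):
    """Wald statistics for R @ beta = r in an OLS regression, in nonrobust
    (homoskedastic) and White heteroskedasticity-robust forms."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    u = y - X @ b
    V_nr = (u @ u / (n - k)) * XtX_inv           # nonrobust variance
    meat = X.T @ (X * (u ** 2)[:, None])
    V_r = XtX_inv @ meat @ XtX_inv               # White robust variance
    d = R @ b - r
    W_nr = d @ np.linalg.solve(R @ V_nr @ R.T, d)
    W_r = d @ np.linalg.solve(R @ V_r @ R.T, d)
    return W_nr, W_r
```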
By: | Xavier d'Haultfoeuille; Philippe Fevrier (Crest)
Abstract: | We consider the issue of identifying mixture models nonparametrically. In these models, all observed variables depend on a common unobserved component but are mutually independent conditional on it. Such models are important in the measurement error, auction and matching literatures. Traditional approaches rely on parametric assumptions or strong functional restrictions. We show that these models are actually identified nonparametrically if a moving support assumption is satisfied. More precisely, we suppose that the supports of the observed variables move with the true value of the unobserved component. We show that this assumption is theoretically grounded, empirically relevant and testable. Finally, we compare our approach with the diagonalization technique introduced by Hu and Schennach (2008), which can be used to obtain similar results.
Keywords: | optimal matching |
Date: | 2010–12 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2010-12&r=ecm |
By: | Steven T. Berry (Cowles Foundation, Yale University); Philip A. Haile (Cowles Foundation, Yale University) |
Abstract: | We consider identification in a class of nonparametric simultaneous equations models introduced by Matzkin (2008). These models combine standard exclusion restrictions with a requirement that each structural error enter through a "residual index" function. We provide constructive proofs of identification under several sets of conditions, demonstrating tradeoffs between restrictions on the support of the instruments, shape restrictions on the joint distribution of the structural errors, and restrictions on the form of the residual index function. |
Keywords: | Simultaneous equations, Nonseparable models, Nonparametric identification |
JEL: | C3 C14 |
Date: | 2011–03 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1787&r=ecm |
By: | Yu-chin Chen (University of Washington); Wen-Jen Tsay (Institute of Economics, Academia Sinica, Taipei, Taiwan) |
Abstract: | This paper presents a generalized autoregressive distributed lag (GADL) model for conducting regression estimations that involve mixed-frequency data. As an example, we show that daily asset market information - currency and equity market movements - can produce forecasts of quarterly commodity price changes that are superior to those in the previous literature. Following the traditional ADL literature, our estimation strategy relies on a Vandermonde matrix to parameterize the weighting functions for higher-frequency observations. Accordingly, inferences can be obtained under ordinary least squares principles without Kalman filtering or non-linear optimizations. Our findings provide an easy-to-use method for conducting mixed data-sampling analysis as well as for forecasting world commodity price movements. (An illustrative sketch follows this entry.)
Keywords: | Mixed frequency data, autoregressive distributed lag, commodity prices, forecasting |
JEL: | C22 C53 F31 F47 |
Date: | 2011–03 |
URL: | http://d.repec.org/n?u=RePEc:sin:wpaper:11-a001&r=ecm |
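The key trick is that a polynomial weighting function makes the mixed-frequency regression linear in its parameters. A minimal sketch, assuming m high-frequency observations per low-frequency period stacked in a matrix; the Vandermonde matrix converts them into regressors whose OLS coefficients identify the weighting polynomial:

```python
import numpy as np

def almon_regressors(x_hf, P):
    """x_hf: (T, m) array of m high-frequency observations per period.
    With weights w_j = theta_0 + theta_1 * j + ... + theta_P * j**P, the
    weighted sum is linear in theta, so OLS applies directly."""
    T, m = x_hf.shape
    j = np.arange(1, m + 1)
    V = np.vander(j, P + 1, increasing=True)  # Vandermonde in the lag index
    return x_hf @ V                           # z_{t,p} = sum_j j**p * x_{t,j}

# Regressing quarterly y on almon_regressors(daily_x, P) by OLS recovers
# theta without Kalman filtering or nonlinear optimization.
```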
By: | Xavier d'Haultfoeuille; Arnaud Maurel (Crest)
Abstract: | It is often believed that, without an instrument, endogenous sample selection models are identified only if a covariate with a large support is available (see Chamberlain, 1986, and Lewbel, 2007). We propose a new identification strategy based mainly on the condition that the selection variable becomes independent of the covariates when the outcome, not one of the covariates, tends to infinity. No large support on the covariates is required. Moreover, we prove that this condition is testable. Finally, we show that our strategy can be applied to the identification of generalized Roy models.
Keywords: | optimal matching |
Date: | 2010–10 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2010-10&r=ecm |
By: | Borchani, Anis (ESSAI (Ecole Supérieure de la Statistique et de l’Analyse de l’Information), Tunis)
Abstract: | We propose a method to generate a warning system for the early detection of time clusters in discrete time series. Two approaches are developed: one uses an approximation of the return period of an extreme event, independently of the nature of the data; the other estimates the return period via standard EVT tools after smoothing our discrete data into continuous data. This method allows us to define a surveillance and prediction system, which is applied to finance and public health surveillance. (An illustrative sketch follows this entry.)
Keywords: | applications in insurance and finance; clusters; epidemiology; Extreme Value Theory; extreme quantile; outbreak detection; return level; return period; surveillance |
JEL: | C22 I10 |
Date: | 2010–12 |
URL: | http://d.repec.org/n?u=RePEc:ebg:essewp:dr-10009&r=ecm |
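The second approach rests on standard EVT peaks-over-threshold machinery. A minimal sketch of a return-period estimate for a level z above a threshold u, via a generalized Pareto fit to the exceedances; threshold choice and the paper's discreteness smoothing are not addressed:

```python
import numpy as np
from scipy.stats import genpareto

def return_period(x, u, z):
    """Estimated return period (in observation units) of level z > u."""
    exc = x[x > u] - u                          # threshold exceedances
    zeta = exc.size / x.size                    # empirical P(X > u)
    c, _, scale = genpareto.fit(exc, floc=0.0)  # GPD fit to exceedances
    p = zeta * genpareto.sf(z - u, c, loc=0.0, scale=scale)  # P(X > z)
    return 1.0 / p
```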
By: | Sven Schreiber (Macroeconomic Policy Institute (IMK) in the Hans Boeckler Foundation) |
Abstract: | The topic of this paper is the estimation uncertainty of the Stock-Watson and Gonzalo-Granger permanent-transitory decompositions in the framework of the cointegrated vector autoregression. Specifically, we suggest an approach to construct the confidence interval of the transitory component in a given period (e.g. the latest observation) by conditioning on the observed data in that period. To calculate asymptotically valid confidence intervals we use the delta method and two bootstrap variants. As an illustration we analyze the uncertainty of (US) output gap estimates in a system of output, consumption, and investment.
Keywords: | transitory components, VECM, delta method, bootstrap |
JEL: | C32 C15 E32 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:imk:wpaper:3-2011&r=ecm |
By: | Ralph D. Snyder (Department of Econometrics and Business Statistics, Monash University); J. Keith Ord (McDonough School of Business, Georgetown University); Adrian Beaumont (Department of Econometrics and Business Statistics, Monash University) |
Abstract: | Organizations with large-scale inventory systems typically have a large proportion of items for which demand is intermittent and low volume. We examine different approaches to forecasting for such products, paying particular attention to the need for inventory planning over a multi-period lead-time when the underlying process may be nonstationary. This emphasis leads to consideration of prediction distributions for processes with time-dependent parameters. A wide range of possible distributions could be considered but we focus upon the Poisson (as a widely used benchmark), the negative binomial (as a popular extension of the Poisson) and a hurdle shifted Poisson (which retains Croston’s notion of a Bernoulli process for times between orders). We also develop performance measures related to the entire predictive distribution, rather than focusing exclusively upon point predictions. The three models are compared using data on the monthly demand for 1,046 automobile parts, provided by a US automobile manufacturer. We conclude that inventory planning should be based upon dynamic models using distributions that are more flexible than the traditional Poisson scheme. (An illustrative sketch follows this entry.)
Keywords: | Croston's method; Exponential smoothing; Hurdle shifted Poisson distribution; Intermittent demand; Inventory control; Prediction likelihood; State space models |
JEL: | C25 C53 M21 |
Date: | 2010–05 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2010-003&r=ecm |
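As a point of reference for the models compared in the paper, here is a minimal sketch of Croston's classical method for intermittent demand: separate exponential smoothing of the nonzero demand sizes and of the intervals between them, with the point forecast given by their ratio. This is the benchmark, not the paper's dynamic hurdle shifted Poisson model:

```python
def croston(y, alpha=0.1):
    """Croston point forecast of demand per period for an intermittent series y."""
    z = p = None  # smoothed demand size, smoothed inter-demand interval
    q = 0         # periods since the last positive demand
    for yt in y:
        q += 1
        if yt > 0:
            if z is None:
                z, p = float(yt), float(q)   # initialize at the first demand
            else:
                z += alpha * (yt - z)        # update smoothed size
                p += alpha * (q - p)         # update smoothed interval
            q = 0
    return z / p if z is not None else 0.0

print(croston([0, 0, 3, 0, 0, 0, 2, 0, 5, 0]))
```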
By: | David E Allen (School of Accounting Finance & Economics, Edith Cowan University); Abhay Kumar Singh (School of Accounting Finance & Economics, Edith Cowan University) |
Abstract: | With the growing number of stocks and other financial instruments in the investment market, there is an ongoing need for profitable methods of asset selection. The Fama-French three-factor model makes the problem of asset selection easier by narrowing down the number of parameters, but the usual technique for estimating the coefficients of the three factors, Ordinary Least Squares (OLS), models only the conditional mean of the distribution. In this paper, we use the technique of Data Envelopment Analysis (DEA) applied to the Fama-French three-factor model to choose stocks from the Dow Jones Industrial Index. We use a more robust technique, Quantile Regression, to estimate the coefficients of the factor model and show that the assets selected using this regression method form a higher-return equally weighted portfolio. (An illustrative sketch follows this entry.)
Keywords: | Asset Selection, Factor Model, DEA, Quantile Regression |
Date: | 2010–10 |
URL: | http://d.repec.org/n?u=RePEc:ecu:wpaper:2010-05&r=ecm |
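A minimal sketch of the quantile-regression step on synthetic data, using statsmodels; the column names and coefficient values are illustrative placeholders, not the authors' data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(250, 3)), columns=["mkt_rf", "smb", "hml"])
df["ex_ret"] = (1.0 * df.mkt_rf + 0.3 * df.smb - 0.2 * df.hml
                + rng.standard_t(3, 250))    # heavy-tailed disturbances

# Median regression of excess returns on the three factors, in place of
# the OLS conditional-mean fit.
res = smf.quantreg("ex_ret ~ mkt_rf + smb + hml", df).fit(q=0.5)
print(res.params)
```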
By: | Krivonozhko, Vladimir (Institute for Systems Analysis, Russian Academy of Sciences, Moscow); Førsund, Finn R. (Dept. of Economics, University of Oslo); Lychev, Andrey V. (Accounts Chamber of the Russian Federation, Moscow)
Abstract: | Applications of DEA models show that inadequate results may arise in some cases, two of these inadequacies being: a) too many efficient units may appear in some DEA models; b) a DEA model may classify a unit that experts regard as inefficient as an efficient one. The purpose of this paper is to identify units that may unduly become efficient. The concept of a terminal unit is introduced for such units. A method for improving the adequacy of DEA models based on terminal units is suggested, and an example is shown based on a real-life data set for Russian banks.
Keywords: | Terminal units; DEA; Efficiency; Weight restrictions; Domination cones |
JEL: | C44 C61 C67 D24 |
Date: | 2011–02–17 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2011_004&r=ecm |
By: | David Azriel; Micha Mandel; Yosef Rinott |
Abstract: | We consider the classical problem of selecting the better of two treatments in clinical trials with binary response. The target is to find the design that maximizes the power of the relevant test. Many papers use a normal approximation to the power function and claim that Neyman allocation, which assigns subjects to treatment groups according to the ratio of the responses’ standard deviations, should be used. As the standard deviations are unknown, an adaptive design is often recommended. The asymptotic justification of this approach is arguable, since it uses the normal approximation in tails where the error in the approximation is larger than the estimated quantity. We consider two different approaches to the optimality of designs, related to the Pitman and Bahadur definitions of relative efficiency of tests. We prove that the optimal allocation according to the Pitman criterion is the balanced allocation, and that the optimal allocation according to the Bahadur approach depends on the unknown parameters. Exact calculations reveal that the optimal allocation according to Bahadur is often close to the balanced design, and the powers of both are comparable to the Neyman allocation for small sample sizes and are generally better for large experiments. Our findings have important implications for the design of experiments, as the balanced design is proved to be optimal or close to optimal, and the need for the complications involved in following an adaptive design for the purpose of increasing the power of tests is therefore questionable. (An illustrative sketch follows this entry.)
Keywords: | Neyman allocation, adaptive design, asymptotic power, Normal approximation, Pitman efficiency, Bahadur efficiency, large deviations |
JEL: | K13 |
Date: | 2011–03 |
URL: | http://d.repec.org/n?u=RePEc:huj:dispap:dp568&r=ecm |
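A minimal sketch of the normal-approximation power comparison that motivates the paper, for binary responses: Neyman allocation assigns a fraction s1/(s1+s2) of subjects to the first arm, with si = sqrt(pi*(1-pi)), and can be compared to the balanced design w = 1/2. The success probabilities below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def approx_power(p1, p2, n, w, alpha=0.05):
    """Normal-approximation power of the two-sample test of p1 = p2 with
    n subjects and fraction w allocated to arm 1 (ignoring the far tail)."""
    se = np.sqrt(p1 * (1 - p1) / (w * n) + p2 * (1 - p2) / ((1 - w) * n))
    return norm.sf(norm.ppf(1 - alpha / 2) - abs(p1 - p2) / se)

p1, p2 = 0.9, 0.5
s1, s2 = np.sqrt(p1 * (1 - p1)), np.sqrt(p2 * (1 - p2))
w_neyman = s1 / (s1 + s2)                     # Neyman allocation fraction
print(approx_power(p1, p2, 100, 0.5),         # balanced design
      approx_power(p1, p2, 100, w_neyman))    # Neyman design
```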
By: | Luca Fanelli (Università di Bologna) |
Abstract: | It is known that the identifiability of the structural parameters of the class of Linear(ized) Rational Expectations (LRE) models currently used in monetary policy and business cycle analysis may change dramatically across different regions of the theoretically admissible parameter space. This paper derives novel necessary and sufficient conditions for local identifiability which hold irrespective of whether the LRE model has a determinate (unique stable) reduced form solution or indeterminate (multiple stable) reduced form solutions. These conditions can be interpreted as prerequisites for the likelihood-based (classical or Bayesian) empirical investigation of determinacy/indeterminacy in stationary LRE models, and are particularly useful for the joint estimation of the Euler equations comprising the LRE model by 'limited-information' methods, because checking their validity does not require knowledge of the full set of reduced form solutions.
Keywords: | Determinacy, Identification, Indeterminacy, Linear Rational Expectations model |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:bot:quadip:105&r=ecm |
By: | Klein, Ingo; Fischer, Matthias; Pleier, Thomas |
Abstract: | It is well known that the arithmetic mean of two possibly different copulas again forms a copula. More generally, we focus on the weighted power mean (WPM) of two arbitrary copulas, which is not necessarily a copula again, as various counterexamples reveal. However, conditions regarding the mean function and the underlying copulas are given which guarantee that a proper copula (a so-called WPM copula) results. In this case, we also derive dependence properties of WPM copulas and give a brief application to financial return series. (An illustrative sketch follows this entry.)
Keywords: | copulas, generalized power mean, max id, left tail decreasing, tail dependence
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:012011&r=ecm |
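A minimal numerical sketch of the object in question: the weighted power mean M(u,v) = (w*C1(u,v)^p + (1-w)*C2(u,v)^p)^(1/p) of two copulas, together with a grid check of the 2-increasing (rectangle-volume) property, which is the condition that can fail and makes the counterexamples possible:

```python
import numpy as np

def wpm(C1, C2, w, p):
    """Weighted power mean of two copulas (not necessarily a copula itself)."""
    return lambda u, v: (w * C1(u, v) ** p + (1 - w) * C2(u, v) ** p) ** (1 / p)

indep = lambda u, v: u * v                # independence copula
comon = lambda u, v: np.minimum(u, v)     # comonotonicity copula M

def is_2_increasing(C, m=100):
    """Check that the C-volumes of all grid rectangles are nonnegative."""
    g = np.linspace(0.0, 1.0, m + 1)
    U, V = np.meshgrid(g, g, indexing="ij")
    H = C(U, V)
    vol = H[1:, 1:] - H[:-1, 1:] - H[1:, :-1] + H[:-1, :-1]
    return vol.min() >= -1e-12

print(is_2_increasing(wpm(indep, comon, 0.5, 2.0)))
```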
By: | Cristina Davino, Rosaria Romano (University of Macerata) |
Abstract: | The paper proposes a new approach for analysing the stability of Composite Indicators. Starting from the consideration that different subjective choices occur in their construction, the paper emphasizes the importance of investigating the possible alternatives in order to have a clear and objective picture of the phenomenon under investigation. Methods dealing with Composite Indicator stability are known in the literature as Sensitivity Analysis. In such a framework, the paper presents a new approach based on a combination of explorative and confirmative analysis, aiming to investigate the impact of the different subjective choices on Composite Indicator variability, as well as the related individual differences among the statistical units. (An illustrative sketch follows this entry.)
Keywords: | sensitivity analysis, composite indicators, analysis of variance, principal component analysis
JEL: | C52 C63 |
Date: | 2011–02 |
URL: | http://d.repec.org/n?u=RePEc:mcr:wpaper:wpaper00032&r=ecm |
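A minimal sketch of the exploratory side of such a sensitivity analysis: recompute a composite indicator under many alternative weighting choices and examine how unit rankings vary. The data and the Dirichlet perturbation scheme are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))               # 30 units, 5 elementary indicators
Xz = (X - X.mean(0)) / X.std(0)            # z-score normalization

ranks = []
for _ in range(1000):                      # vary the subjective weighting choice
    w = rng.dirichlet(np.ones(5))
    ci = Xz @ w                            # composite indicator under weights w
    ranks.append((-ci).argsort().argsort())
ranks = np.array(ranks)
print(ranks.std(axis=0))                   # rank instability of each unit
```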
By: | José Bento; Morteza Ibrahimi; Andrea Montanari
Abstract: | Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high-dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is the one of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures. (An illustrative sketch follows this entry.)
Date: | 2011–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1103.1689&r=ecm |
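In the linear case, the estimation counterpart of the paper's lower bound is straightforward: least squares on an Euler discretization of the sample path recovers the interaction matrix. A minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, dt = 5, 20000, 0.01
A = -np.eye(d) + 0.1 * rng.normal(size=(d, d))  # stable drift: dx = A x dt + dW

x = np.zeros((T, d))
for t in range(T - 1):                           # Euler-Maruyama sample path
    x[t + 1] = x[t] + (x[t] @ A.T) * dt + rng.normal(scale=np.sqrt(dt), size=d)

X, dX = x[:-1], np.diff(x, axis=0)
A_hat = np.linalg.solve(X.T @ X, X.T @ dX).T / dt  # least-squares drift estimate
print(np.abs(A_hat - A).max())                     # estimation error
```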
By: | Førsund, Finn R. (Dept. of Economics, University of Oslo)
Abstract: | Measuring productive efficiency is an important research strand within the fields of economics, management science and operations research. One definition of efficiency is the proportional scaling needed for the observations of an inefficient unit to be projected onto an efficient production function; another is a ratio index of weighted outputs over weighted inputs. When linear programming is used to estimate efficiency, the two definitions give identical results due to the fundamental duality of linear programming. Empirical applications of DEA using linear programming showed a prevalence of zero weights, leading to questions about the consequences for the efficiency score estimate based on the ratio definition. The early literature on weight restrictions is based exclusively on the ratio efficiency. It was stated that variables with zero weights had no influence on the efficiency score, in spite of the alleged importance of the variables. This has been one motivation for introducing restrictions on weights. Another empirical result was that often there were too many efficient units; this problem could also be overcome by introducing weight restrictions. Weight restrictions were said to introduce values for inputs and outputs. The paper makes a critical examination of these claims based on defining efficiency relative to a frontier production function. (An illustrative sketch follows this entry.)
Keywords: | Weight restrictions; DEA; efficiency; frontier production function; primal and dual linear programming problems |
JEL: | C61 D20 |
Date: | 2011–02–17 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2011_005&r=ecm |
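The duality the paper builds on is between the ratio (multiplier) form and the envelopment form of the DEA linear program. A minimal sketch of the input-oriented CCR envelopment problem with scipy, without weight restrictions; the toy data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j):
    """Input-oriented CCR efficiency of unit j: min theta subject to
    X' lam <= theta * x_j, Y' lam >= y_j, lam >= 0.
    X: (n, m) inputs and Y: (n, s) outputs for n units."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # variables: theta, lambda
    A_ub = np.block([[-X[j][:, None], X.T],      # input envelopment rows
                     [np.zeros((s, 1)), -Y.T]])  # output envelopment rows
    b_ub = np.r_[np.zeros(m), -Y[j]]
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun                               # efficiency score theta*

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # toy inputs
Y = np.array([[1.0], [1.0], [1.0]])                 # toy single output
print([round(dea_ccr_input(X, Y, j), 3) for j in range(3)])
```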