New Economics Papers on Econometrics
By: | Weidenhammer, Beate; Schmid, Timo; Salvati, Nicola; Tzavidis, Nikos |
Abstract: | In this paper we present recent work on a new unit-level small area methodology that can be used with continuous and discrete outcomes. The proposed method is based on constructing a model-based estimator of the distribution function by using a nested-error regression model for the quantiles of the target outcome. A general set of domain-specific parameters that extends beyond averages is then estimated by sampling from the estimated distribution function. For fitting the model we exploit the link between the Asymmetric Laplace Distribution and maximum likelihood estimation for quantile regression. The specification of the distribution of the random effects is considered in some detail by exploring the use of parametric and non-parametric alternatives. The use of the proposed methodology with discrete (count) outcomes requires appropriate transformations, in particular jittering. For the case of discrete outcomes the methodology relaxes the restrictive assumptions of the Poisson generalised linear mixed model and allows for a potentially more flexible mean-variance relationship. Mean Squared Error estimation is discussed. Extensive model-based simulations are used to compare the proposed methodology with alternative unit-level methodologies for estimating a broad range of complex parameters. [A schematic sketch follows this entry.] |
Keywords: | Asymmetric Laplace Distribution, generalized linear mixed model, jittering, non-parametric estimation, small area estimation |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:fubsbe:201612&r=ecm |
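The jittering-and-inversion idea can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it omits the paper's nested-error random (area) effects entirely, uses statsmodels' QuantReg for the quantile fits, and all data and names are synthetic.

```python
# Minimal sketch, assuming no random effects: jitter a count outcome, fit a
# grid of quantile regressions, then sample from the estimated distribution
# function to recover a domain mean.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = rng.poisson(lam=np.exp(0.5 + 0.4 * x))      # discrete (count) outcome

# Jittering: add U(0,1) noise so the outcome is continuous, as quantile
# regression requires.
z = y + rng.uniform(size=n)

X = sm.add_constant(x)
taus = np.linspace(0.05, 0.95, 19)
fits = [QuantReg(z, X).fit(q=t) for t in taus]  # one fit per quantile index

# Estimate E[y | x0] by sampling from the estimated distribution function:
# draw u ~ U(0,1), interpolate the fitted quantile grid, then un-jitter.
x0 = np.array([1.0, 0.2])                       # [const, x]
qhat = np.sort([f.params @ x0 for f in fits])   # sort: enforce monotonicity
u = rng.uniform(size=5000)
draws = np.interp(u, taus, qhat)
print("estimated mean of y at x0:", np.floor(draws).mean())
```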
By: | In Choi (School of Economics, Sogang University, Seoul) |
Abstract: | This paper proposes new estimators for panel autoregressive (PAR) models with short time dimensions (T) and large cross sections (N). These estimators are based on the cross-sectional regression model using the first time series observations as a regressor and the last as a dependent variable. The regressors and errors of this regression model are correlated. The first estimator is the maximum likelihood estimator (MLE) under the assumption of normal distributions. This estimator is called the cross-sectional MLE (CSMLE). The second estimator is the bias-corrected pooled least squares estimator (BCPLSE) that eliminates the asymptotic bias of the PLSE by using the CSMLE. The CSMLE and BCPLSE are extended to the PAR model with endogenous time-variant and time-invariant regressors. The CSMLE and BCPLSE provide consistent estimates of the PAR coefficients for stationary, unit root and explosive PAR models, estimate the coefficients of time-invariant regressors consistently and can be computed as long as T ≥ 2. Their finite sample properties are compared with those of some other estimators for the PAR model of order 1. The estimators of this paper are shown to perform quite well in finite samples. [A schematic sketch follows this entry.] |
Keywords: | dynamic panels, maximum likelihood estimator, pooled least squares estimator, stationarity, unit root, explosiveness |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:sgo:wpaper:1610&r=ecm |
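A minimal sketch of the cross-sectional regression setup (first observation as regressor, last as dependent variable). Plain OLS below is only the naive benchmark: the paper's CSMLE corrects for the regressor-error correlation noted in the abstract, and that correction is not reproduced here.

```python
# Sketch of the cross-sectional regression idea for a short-T, large-N panel;
# both estimators shown are naive benchmarks that the paper improves on.
import numpy as np

rng = np.random.default_rng(1)
N, T, rho = 2000, 4, 0.6

alpha = rng.normal(size=N)                     # unit effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(size=N)           # first obs correlated with alpha
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(size=N)

# Cross-sectional regression: y_{iT} on y_{i1}. Because y_{i1} is correlated
# with the error (through alpha), plain OLS is biased; the CSMLE fixes this.
b = np.polyfit(y[:, 0], y[:, -1], 1)[0]
print("naive cross-sectional slope:", b)

# Pooled least squares (PLSE) for comparison -- also biased with unit effects.
num = (y[:, 1:] * y[:, :-1]).sum()
den = (y[:, :-1] ** 2).sum()
print("pooled LS estimate of rho:", num / den)
```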
By: | Belloni, Alexandre; Chen, Mingli; Chernozhukov, Victor (Department of Economics, University of Warwick & Department of Psychology, University of Warwick) |
Abstract: | We propose Quantile Graphical Models (QGMs) to characterize predictive and conditional independence relationships within a set of random variables of interest. This framework is intended to quantify dependence in non-Gaussian settings, which are ubiquitous in many econometric applications. We consider two distinct QGMs. First, Conditional Independence QGMs characterize conditional independence at each quantile index, revealing the distributional dependence structure. Second, Predictive QGMs characterize the best linear predictor under asymmetric loss functions. Under Gaussianity these notions essentially coincide, but non-Gaussian settings lead us to different models, as prediction and conditional independence are fundamentally different properties. Combined, the models complement the methods based on normal and nonparanormal distributions that study mean predictability and use covariance and precision matrices for conditional independence. We also propose estimators for each QGM. The estimators are based on high-dimensional techniques, including (a continuum of) l1-penalized quantile regressions and low-biased equations, which allow us to handle a potentially large number of variables. We build upon recent results to obtain a valid choice of the penalty parameters and rates of convergence. These results are derived without any assumptions on separation from zero and are uniformly valid across a wide range of models. With the additional assumption that the coefficients are well separated from zero, we can consistently estimate the graph associated with the dependence structure by hard thresholding the proposed estimators. Further, we show how QGMs can be used to represent the tail interdependence of the variables, which plays an important role in applications concerned with extreme events, as opposed to average behavior. We show that the associated tail risk network can be used for measuring systemic risk contributions. We also apply the framework to study financial contagion and the impact of downside movements in the market on the dependence structure of assets' returns. Finally, we illustrate the properties of the proposed framework through simulated examples. [A schematic sketch follows this entry.] |
Keywords: | High-dimensional sparse model, tail risk, conditional independence, nonlinear correlation, penalized quantile regression, systemic risk, financial contagion, downside movement |
JEL: | I30 I31 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:1125&r=ecm |
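A toy version of the neighborhood-regression view of a quantile graph, assuming sklearn's QuantileRegressor (which solves an l1-penalized quantile regression) as a stand-in for the paper's estimator; the paper's penalty-parameter theory and low-biased equations are not reproduced, and the threshold below is an arbitrary illustrative choice.

```python
# Sketch of a quantile-graph estimate: regress each variable on the rest with
# an l1-penalized quantile regression and hard-threshold the coefficients.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(2)
n, d, tau, thresh = 400, 5, 0.9, 0.05
Z = rng.standard_t(df=4, size=(n, d))          # heavy tails: non-Gaussian
Z[:, 1] += 0.8 * Z[:, 0]                       # one genuine link: 0 -> 1

adj = np.zeros((d, d), dtype=bool)
for j in range(d):
    others = [k for k in range(d) if k != j]
    reg = QuantileRegressor(quantile=tau, alpha=0.05, solver="highs")
    reg.fit(Z[:, others], Z[:, j])
    adj[j, others] = np.abs(reg.coef_) > thresh  # hard thresholding

print("estimated edges at tau=0.9:\n", adj)
```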
By: | Cinzia Daraio (Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), University of Rome La Sapienza, Rome, Italy); Leopold Simar (Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain, Louvain-la-Neuve, Belgium); Paul W. Wilson (Department of Economics and School of Computing, Division of Computer Science, Clemson University, Clemson, SC 29634) |
Abstract: | This paper demonstrates that standard central limit theorem (CLT) results do not hold for means of nonparametric conditional efficiency estimators, and provides new CLTs that do hold, permitting applied researchers to estimate confidence intervals for mean conditional efficiency or to compare mean efficiency across groups of producers along the lines of the test developed by Kneip et al. (JBES, 2015b). The new CLTs are used to develop a test of the "separability" condition that is necessary for second-stage regressions of efficiency estimates on environmental variables. We show that if this condition is violated, not only are second-stage regressions meaningless, but also first-stage, unconditional efficiency estimates are without meaning. As such, the test developed here is of fundamental importance to applied researchers using non-parametric methods for efficiency estimation. Our simulation results indicate that our tests perform well both in terms of size and power. We present a real-world empirical example by updating the analysis performed by Aly et al. (R. E. Stat., 1990) on U.S. commercial banks; our tests easily reject the assumption required for two-stage estimation, calling into question results that appear in hundreds of papers published in recent years. [A schematic sketch follows this entry.] |
Keywords: | technical efficiency; conditional efficiency; two-stage estimation; separability; data envelopment analysis (DEA); free-disposal hull (FDH) |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:aeg:report:2016-02&r=ecm |
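For readers unfamiliar with the estimators under study, here is a minimal sketch of the unconditional input-oriented FDH estimator; the conditional version (which smooths over environmental variables) and the new CLT-based tests themselves are well beyond a few lines.

```python
# Sketch of the input-oriented free-disposal hull (FDH) efficiency estimator
# on synthetic data; scores lie in (0, 1], with 1 meaning FDH-efficient.
import numpy as np

def fdh_input_efficiency(X, Y):
    """For each unit i: theta_i = min over units j with y_j >= y_i of
    max_k x_{jk} / x_{ik} (the standard input-oriented FDH formula)."""
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        dominating = np.all(Y >= Y[i], axis=1)        # units producing >= y_i
        ratios = np.max(X[dominating] / X[i], axis=1)  # worst input ratio
        scores[i] = ratios.min()
    return scores

rng = np.random.default_rng(3)
X = rng.uniform(1, 2, size=(100, 2))                             # inputs
Y = X.sum(axis=1, keepdims=True) * rng.uniform(0.5, 1, (100, 1))  # outputs
print(fdh_input_efficiency(X, Y)[:5])
```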
By: | Jitendra Kumar; Anoop Chaturvedi; Umme Afifa |
Abstract: | The present paper studies the panel autoregressive (PAR) time series model for testing the unit root hypothesis. The posterior odds ratio (POR) is derived under appropriate prior assumptions, and an empirical analysis is then carried out to test the unit root hypothesis for the Net Asset Value of National Pension Scheme (NPS) series under different fund managers. The unit root hypothesis is tested for the model with a linear time trend and for the model with a linear time trend plus an augmentation term. The estimated autoregressive coefficient is far from one for the model with the linear time trend only, so the test is not carried out in that case; with the augmentation term included, however, the coefficient is close to one, and we therefore test the unit root hypothesis using the derived POR. In all cases the unit root hypothesis is rejected, so all NPS series are concluded to be trend stationary. [A schematic sketch follows this entry.] |
Keywords: | Panel data, Stationarity, Autoregressive time series, Unit root, Posterior odds ratio, New Pension Scheme, Net Asset Value. |
JEL: | C11 C12 C22 C23 C39 |
Date: | 2016–01–14 |
URL: | http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2016_14&r=ecm |
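A sketch of the frequentist pre-check described in the abstract: fitting the trend model with an augmentation term and inspecting how close the autoregressive coefficient is to one. The posterior odds ratio itself depends on the paper's prior assumptions and is not reproduced.

```python
# Sketch, assuming a simple AR(1) with linear trend and one augmentation lag,
# estimated by OLS; only the "is rho-hat close to one?" check is illustrated.
import numpy as np

def ar_coefficient(y, augment=True):
    """OLS of y_t on [1, t, y_{t-1}] (optionally plus dy_{t-1}); returns rho-hat."""
    t = np.arange(2, len(y))
    cols = [np.ones_like(t, dtype=float), t.astype(float), y[1:-1]]
    if augment:
        cols.append(y[1:-1] - y[:-2])          # first augmentation term
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return beta[2]                             # coefficient on y_{t-1}

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=300)) + 0.01 * np.arange(300)   # trending RW
print("rho-hat with augmentation:", ar_coefficient(y))
```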
By: | Aparicio, Juan; Cordero, Jose M.; Pastor, Jesús |
Abstract: | Determining the least distance to the efficient frontier for estimating technical inefficiency, with the consequent determination of closest targets, has been one of the relevant issues in the recent Data Envelopment Analysis literature. This new paradigm contrasts with traditional approaches, which yield furthest targets. In this respect, several techniques have been proposed to implement the new paradigm. One group of these techniques is based on identifying all the efficient faces of the polyhedral production possibility set and is therefore associated with the resolution of an NP-hard problem. A second group proposes different models and particular algorithms that solve the problem while avoiding the explicit identification of all these faces. These techniques have been applied more or less successfully. Nonetheless, the new paradigm is still unsatisfactory and incomplete to a certain extent. One remaining challenge is measuring technical inefficiency in the context of oriented models, i.e., models that aim at changing inputs or outputs but not both. In this paper, we show that the existing techniques for determining the least distance without explicitly identifying the frontier structure, which were designed for graph measures that change inputs and outputs at the same time, do not work for oriented models. Consequently, a new methodology that handles these situations satisfactorily is proposed. Finally, the new approach is checked empirically using a recent PISA database consisting of 902 schools. [A schematic sketch follows this entry.] |
Keywords: | Data Envelopment Analysis, least distance, oriented models |
JEL: | C60 C61 H52 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:72630&r=ecm |
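For contrast, a sketch of the traditional input-oriented (variable returns to scale) DEA program, which yields the furthest targets the paper argues against; the least-distance machinery itself is substantially more involved and is not reproduced.

```python
# Sketch of the classical input-oriented VRS DEA model, solved with scipy:
# min theta s.t. X'lam <= theta*x_i, Y'lam >= y_i, sum(lam) = 1, lam >= 0.
import numpy as np
from scipy.optimize import linprog

def dea_input_vrs(X, Y, i):
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([
        [-X[i].reshape(-1, 1), X.T],           # X'lam - theta*x_i <= 0
        [np.zeros((s, 1)), -Y.T],              # -Y'lam <= -y_i
    ])
    b_ub = np.r_[np.zeros(m), -Y[i]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]                            # efficiency score theta

rng = np.random.default_rng(5)
X = rng.uniform(1, 2, (50, 2))
Y = X.sum(axis=1, keepdims=True) * rng.uniform(0.5, 1, (50, 1))
print("theta for unit 0:", dea_input_vrs(X, Y, 0))
```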
By: | Ana Paula Martins |
Abstract: | This paper focuses on two applications of time series methods. The first proposes a simple transformation of the unit-root form of stationarity testing to infer the validity of smoothing a series, or the variables in a linear model, by second-order running averages (here a counterpart to co-integration testing). The second advances a simple iterative algorithm to correct for MA(1) autocorrelation in the residuals of the general linear model that does not require estimating the error-process parameter. [A schematic sketch follows this entry.] |
Keywords: | Smoothing Tests under First Order Autoregressive Processes, Running Averages, Negative Unit Roots, Moving Average Autocorrelation Correction in Linear Models. |
JEL: | C22 C12 C13 |
Date: | 2016–01–12 |
URL: | http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2016_12&r=ecm |
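A sketch of a negative-unit-root test in the Dickey-Fuller style, since negative unit roots are the keyword here; the paper's specific transformation linking such a test to the validity of second-order running-average smoothing is not reproduced.

```python
# Sketch, assuming a plain DF-type regression: test H0: rho = -1 in
# x_t = rho * x_{t-1} + e_t via the t-statistic on (rho + 1).
import numpy as np

def neg_unit_root_t(x):
    y, ylag = x[1:], x[:-1]
    rho = (ylag @ y) / (ylag @ ylag)
    resid = y - rho * ylag
    se = np.sqrt(resid @ resid / (len(y) - 1) / (ylag @ ylag))
    return (rho + 1.0) / se        # compare to DF-type critical values

rng = np.random.default_rng(6)
e = rng.normal(size=400)
x = np.empty(400)
x[0] = e[0]
for t in range(1, 400):
    x[t] = -x[t - 1] + e[t]        # negative unit root process
print("t-stat for rho = -1:", neg_unit_root_t(x))
```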
By: | Spencer WHEATLEY (ETH Zurich); Didier SORNETTE (ETH Zurich and Swiss Finance Institute) |
Abstract: | We consider the detection of multiple outliers in Exponential and Pareto samples -- as well as general samples that have approximately Exponential or Pareto tails, thanks to Extreme Value Theory. It is shown that a simple "robust" modification of common test statistics makes inward sequential testing -- formerly relegated within the literature since the introduction of outward testing -- as powerful as, and potentially less error prone than, outward tests. Moreover, inward testing does not require the complicated type I error control of outward tests. A variety of test statistics, employed in both block and sequential tests, are compared for their power and errors, in cases including no outliers, dispersed outliers (the classical slippage alternative), and clustered outliers (a case seldom considered). We advocate a density mixture approach for detecting clustered outliers. Tests are found to be highly sensitive to the correct specification of the main distribution (Exponential/Pareto), exposing high potential for errors in inference. Further, in five case studies -- financial crashes, nuclear power generation accidents, stock market returns, epidemic fatalities, and cities within countries -- significant outliers are detected and related to the concept of 'Dragon King' events, defined as meaningful outliers of unique origin. [A schematic sketch follows this entry.] |
Keywords: | Outlier Detection, Exponential sample, Pareto sample, Dragon King, Extreme Value Theory |
JEL: | C12 C46 G01 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1528&r=ecm |
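A minimal sketch of inward sequential testing on an Exponential sample, using the classical largest-over-sum statistic with Monte Carlo critical values; the "robust" modification advocated in the paper is not reproduced.

```python
# Sketch of inward testing: test the most extreme point first and move
# inward, stopping at the first acceptance.
import numpy as np

rng = np.random.default_rng(7)

def crit(n, reps=20000, level=0.05):
    """Null distribution of max/sum for an Exponential sample of size n."""
    s = rng.exponential(size=(reps, n))
    return np.quantile(s.max(axis=1) / s.sum(axis=1), 1 - level)

def inward_test(x, level=0.05, max_out=10):
    x = np.sort(np.asarray(x, dtype=float))
    outliers = []
    for _ in range(max_out):
        stat = x[-1] / x.sum()
        if stat <= crit(len(x), level=level):
            break                              # stop at first acceptance
        outliers.append(x[-1])
        x = x[:-1]
    return outliers

sample = np.r_[rng.exponential(size=100), [15.0, 22.0]]   # planted outliers
print("detected outliers:", inward_test(sample))
```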
By: | In Choi (Department of Economics, Sogang University, Seoul); Sun Ho Hwang (Department of Economics, Sogang University, Seoul) |
Abstract: | This paper proposes a new, optimal estimator of the AR(1) coefficient that minimizes the prediction mean squared error. This estimator can be used to generate an optimal predictor. The new estimator's asymptotic distributions are derived for the cases of stationarity and a near unit root. The optimal estimator is also derived for the AR(p) model (p ≥ 2) and its asymptotic distributions are reported. Simulation results confirm the advantages of using the optimal estimator for prediction. [A schematic sketch follows this entry.] |
Keywords: | Autoregressive model, prediction, near unit root |
Date: | 2016–02 |
URL: | http://d.repec.org/n?u=RePEc:sgo:wpaper:1607&r=ecm |
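A sketch of the criterion rather than the paper's estimator: choose the AR(1) coefficient to minimize one-step-ahead squared prediction error instead of estimation error. The grid search below only illustrates that the two objectives pick different coefficients; the paper's closed-form optimal estimator is not reproduced.

```python
# Sketch comparing OLS with a coefficient chosen on a grid to minimize
# held-out one-step-ahead prediction MSE in a near-unit-root AR(1).
import numpy as np

rng = np.random.default_rng(8)
n, rho = 200, 0.95
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.normal()

train, test = y[:150], y[149:]                 # overlap one point for lags
grid = np.linspace(0.5, 1.0, 101)
pmse = [np.mean((test[1:] - r * test[:-1]) ** 2) for r in grid]
r_opt = grid[int(np.argmin(pmse))]

ols = (train[:-1] @ train[1:]) / (train[:-1] @ train[:-1])
print("OLS rho-hat:", ols, " prediction-MSE-minimizing rho:", r_opt)
```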
By: | David T. Frazier; Eric Renault |
Abstract: | The traditional implementation of Indirect Inference (I-I) is to perform inference on structural parameters $\theta$ by matching observed and simulated auxiliary statistics. These auxiliary statistics are consistent estimators of instrumental parameters whose value depends on the value of structural parameters through a binding function. Since instrumental parameters encapsulate the statistical information used for inference about the structural parameters, it sounds paradoxical to constrain these parameters, that is, to restrain the information used for inference. However, there are situations where the definition of instrumental parameters $\beta$ naturally comes with a set of $q$ restrictions. Such situations include: settings where the auxiliary parameters must be estimated subject to $q$ possibly binding strict inequality constraints $g(\cdot) > 0$; cases where the auxiliary model is obtained by imposing $q$ equality constraints $g(\theta) = 0$ on the structural model to define tractable auxiliary parameter estimates of $\beta$ that are seen as an approximation of the true $\theta$, since the simplifying constraints are misspecified; examples where the auxiliary parameters are defined by $q$ estimating equations that overidentify them. We demonstrate that the optimal solution in these settings is to disregard the constrained auxiliary statistics, and perform I-I without these constraints using appropriately modified unconstrained versions of the auxiliary statistics. In each of the above examples, we outline how such unconstrained auxiliary statistics can be constructed and demonstrate that this I-I approach without constraints can be reinterpreted as a standard implementation of I-I through a properly modified binding function. [A schematic sketch follows this entry.] |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.06163&r=ecm |
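A textbook sketch of unconstrained I-I, the baseline the paper generalizes: estimate an MA(1) parameter by matching the first-order autocorrelation (the auxiliary statistic) of observed and simulated data through the binding function.

```python
# Sketch of standard indirect inference on a toy MA(1) model; the paper's
# treatment of constrained auxiliary statistics is the novelty and is not
# reproduced here.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_ma1(theta, n, seed):
    e = np.random.default_rng(seed).normal(size=n + 1)
    return e[1:] + theta * e[:-1]

def acf1(x):
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)          # auxiliary statistic

y_obs = simulate_ma1(0.5, 2000, seed=123)
beta_obs = acf1(y_obs)

# Match observed and simulated auxiliary statistics; a fixed simulation seed
# gives common random numbers, so the objective is smooth in theta.
obj = lambda th: (beta_obs - acf1(simulate_ma1(th, 20000, seed=7))) ** 2
res = minimize_scalar(obj, bounds=(-0.99, 0.99), method="bounded")
print("I-I estimate of theta:", res.x)         # true value is 0.5
```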
By: | Collard-Wexler, Allan; De Loecker, Jan |
Abstract: | Production functions are a central component in a variety of economic analyses. However, these production functions often first need to be estimated using data on individual production units. There is reason to believe that, more than for any other input in the production process, there are severe errors in the recording of capital stock. Thus, when estimating production functions, we need to account for the ubiquity of measurement error in capital stock. This paper shows that commonly used estimation techniques in the productivity literature fail in the presence of plausible amounts of measurement error in capital. We propose an estimator that addresses this measurement error while controlling for unobserved productivity shocks. Our main insight is that investment expenditures are informative about a producer's capital stock, and we propose a hybrid IV-control function approach that instruments capital with (lagged) investment, while relying on standard intermediate input demand equations to offset the simultaneity bias. We rely on a series of Monte Carlo simulations and find that standard approaches yield downward-biased capital coefficients, while our estimator does not. We apply our estimator to two standard datasets, the manufacturing censuses of India and Slovenia, and find capital coefficients that are, on average, twice as large. [A schematic sketch follows this entry.] |
Keywords: | measurement error in inputs; production function |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11399&r=ecm |
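A sketch of the instrumenting idea: 2SLS with (lagged) investment as the instrument for mismeasured capital. The paper's control-function treatment of unobserved productivity is omitted, and all variable names and the data-generating process are illustrative.

```python
# Sketch: attenuation bias from mismeasured capital in OLS, corrected by
# instrumenting observed capital with lagged investment via 2SLS.
import numpy as np

rng = np.random.default_rng(10)
n = 5000
k_true = rng.normal(size=n)                      # true (log) capital stock
i_lag = k_true + 0.3 * rng.normal(size=n)        # lagged investment: informative
k_obs = k_true + 0.5 * rng.normal(size=n)        # capital recorded with error
l = rng.normal(size=n)
y = 0.4 * k_true + 0.6 * l + rng.normal(size=n)  # (log) output

def two_sls(y, X, Z):
    """2SLS: first-stage fitted values of X on Z, then OLS of y on them."""
    Xh = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xh, y, rcond=None)[0]

X = np.column_stack([np.ones(n), k_obs, l])
Z = np.column_stack([np.ones(n), i_lag, l])
print("OLS capital coef :", np.linalg.lstsq(X, y, rcond=None)[0][1])  # biased down
print("2SLS capital coef:", two_sls(y, X, Z)[1])                      # ~0.4
```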
By: | In Choi (School of Economics, Sogang University, Seoul); Dukpa Kim (Department of Economics, Korea University, Seoul); Yun Jung Kim (School of Economics, Sogang University, Seoul); Noh-Sun Kwark (School of Economics, Sogang University, Seoul) |
Abstract: | This paper studies a multilevel factor model with global and country factors. The global factors affect all individuals, while the country factors affect only those within each specific country. A sequential procedure to identify the global and country factors separately is proposed. In the initial step, the global factors are estimated by canonical correlation analysis. Using this initial estimator, the principal component estimators (PCEs) of the global and country factors are constructed. It is shown that the PCEs estimate the spaces of the global and country factors consistently and are normally distributed in the limit. Several information criteria that can estimate the numbers of the country factors are proposed. The number of the global factors is assumed to be known. Extensive simulation results demonstrate that the sequential procedure and the information criteria work well in finite samples. The method of this paper is applied to 25 OECD countries to identify international business cycles. It is reported that the method extracts a global factor reasonably well. [A schematic sketch follows this entry.] |
Keywords: | multilevel factor model, canonical correlation analysis, principal component estimator, information criteria, international business cycles. |
Date: | 2016–04 |
URL: | http://d.repec.org/n?u=RePEc:sgo:wpaper:1609&r=ecm |
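A sketch of the sequential idea on two synthetic "countries", assuming one known global factor: an initial global-factor estimate from canonical correlation analysis, then country factors from principal components of the de-globalized panels. The paper's PCE refinement and information criteria for the numbers of factors are not reproduced.

```python
# Sketch: CCA for the global factor, then PCA on residuals per country.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(11)
T, N = 200, 30
g = rng.normal(size=T)                           # global factor
panels, country_factors = [], []
for c in range(2):
    f = rng.normal(size=T)                       # country factor
    country_factors.append(f)
    panels.append(np.outer(g, rng.normal(size=N))
                  + np.outer(f, rng.normal(size=N))
                  + rng.normal(size=(T, N)))

# Canonical correlation between the two panels isolates the common variation.
u, v = CCA(n_components=1).fit_transform(panels[0], panels[1])
g_hat = (u + v).ravel() / 2
print("|corr(g, g_hat)| =", abs(np.corrcoef(g, g_hat)[0, 1]))

# Country factors: first principal component after projecting out g_hat.
for c, P in enumerate(panels):
    beta = np.linalg.lstsq(g_hat[:, None], P, rcond=None)[0]
    f_hat = np.linalg.svd(P - g_hat[:, None] @ beta,
                          full_matrices=False)[0][:, 0]
    print(f"country {c}: |corr(f, f_hat)| =",
          abs(np.corrcoef(country_factors[c], f_hat)[0, 1]))
```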
By: | Gianluca Cubadda (DEF and CEIS, University of Rome "Tor Vergata"); Barbara Guardabascio (ISTAT); Alain Hecq (Maastricht University) |
Abstract: | This paper introduces a new model for detecting the presence of commonalities in a set of realized volatility measures. In particular, we propose a multivariate generalization of the heterogeneous autoregressive model (HAR) that is endowed with a common index structure. The Vector Heterogeneous Autoregressive Index model has the property of generating a common index that preserves the same temporal cascade structure as in the HAR model, a feature that is not shared by other aggregation methods (e.g., principal components). The parameters of this model can be easily estimated by a proper switching algorithm that increases the Gaussian likelihood at each step. We illustrate our approach with an empirical analysis aiming at combining several realized volatility measures of the same equity index for three different markets. [A schematic sketch follows this entry.] |
Keywords: | Common volatility, HAR models, index models, combinations of realized volatilities, forecasting |
JEL: | C32 |
Date: | 2016–07–22 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:391&r=ecm |
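A sketch of the univariate HAR regression whose daily/weekly/monthly temporal cascade the proposed vector index model preserves; the common-index structure and the likelihood-switching algorithm are not reproduced, and the data below are simulated.

```python
# Sketch of the standard HAR regression:
# RV_t = b0 + b_d RV_{t-1} + b_w mean(RV_{t-5..t-1}) + b_m mean(RV_{t-22..t-1}) + e_t
import numpy as np

def har_design(rv):
    """Build HAR regressors: lagged daily, weekly (5-day), monthly (22-day) RV."""
    rows = [[1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()]
            for t in range(22, len(rv))]
    return np.asarray(rows), rv[22:]

rng = np.random.default_rng(12)
lrv = np.zeros(1000)
for t in range(1, 1000):                        # persistent log-volatility
    lrv[t] = 0.9 * lrv[t - 1] + 0.3 * rng.normal()
rv = np.exp(lrv)                                # a realized volatility series

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR coefficients [const, daily, weekly, monthly]:", beta)
```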