New Economics Papers on Econometrics
By: | Arie Beresteanu |
Abstract: | This paper investigates the identification of quantiles and quantile regression parameters when observations are set valued. We define the identification set of quantiles of random sets in a way that extends the definition of quantiles for regular random variables. We then give a sharp characterization of this set by extending concepts from random set theory. For quantile regression parameters, we show that the identification set is characterized by a system of conditional moment inequalities. This characterization extends that of parametric quantile regression for regular random variables. Estimation and inference theories are developed for continuous cases, discrete cases, nonparametric conditional quantiles, and parametric quantile regressions. A fast computational algorithm for set linear programming is proposed. Monte Carlo experiments support our theoretical results. We apply the proposed method to analyze the effects of the cleanup of hazardous waste sites on local housing values. |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:5991&r=ecm |
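The basic interval-data case gives the flavor of the identification set: if the outcome is only known to lie in an interval, monotonicity of quantiles yields sharp bounds. A minimal sketch of that special case (not the paper's general random-set machinery; the bracket-width setup is invented for illustration):

```python
import numpy as np

def quantile_bounds(y_lo, y_hi, tau):
    """Sharp bounds on the tau-quantile of an interval-observed outcome.

    If the latent Y satisfies y_lo <= Y <= y_hi almost surely, monotonicity
    of quantiles gives q_tau(y_lo) <= q_tau(Y) <= q_tau(y_hi), and both
    endpoints are attainable, so the bounds are sharp.
    """
    return np.quantile(y_lo, tau), np.quantile(y_hi, tau)

# Example: outcomes observed only in brackets of width 10.
rng = np.random.default_rng(0)
y = rng.lognormal(mean=3.0, sigma=0.5, size=5_000)
y_lo = 10 * np.floor(y / 10)      # bracket lower endpoint
y_hi = y_lo + 10                  # bracket upper endpoint
print(quantile_bounds(y_lo, y_hi, tau=0.5))
```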
By: | Myung Hwan Seo; Taisuke Otsu |
Abstract: | This paper examines asymptotic properties of local M-estimators under three sets of high-level conditions. These conditions are sufficiently general to cover the minimum volume predictive region, conditional maximum score estimator for a panel data discrete choice model, and many other widely used estimators in statistics and econometrics. Specifically, they allow for discontinuous criterion functions of weakly dependent observations, which may be localized by kernel smoothing and contain nuisance parameters whose dimension may grow to infinity. Furthermore, the localization can occur around parameter values rather than around a fixed point, and the observations may take limited values, which leads to set estimators. Our theory produces three different nonparametric cube root rates and enables valid inference for the local M-estimators, building on novel maximal inequalities for weakly dependent data. Our results include the standard cube root asymptotics as a special case. To illustrate the usefulness of our results, we verify our conditions for various examples such as the Hough transform estimator with diminishing bandwidth, maximum score-type set estimator, and many others. |
Keywords: | Cube root asymptotics, Maximal inequality, Mixing process, Partial identification, Parameter-dependent localization |
JEL: | C12 C14 |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/589&r=ecm |
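A canonical example covered by such conditions is Manski's maximum score estimator, whose indicator-based criterion is discontinuous and converges at the cube root rate. A minimal sketch of that textbook estimator via grid search over the unit circle (the simulation design is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
x = rng.normal(size=(n, 2))
theta0 = np.array([np.cos(0.7), np.sin(0.7)])   # true direction, ||theta0|| = 1
y = (x @ theta0 + rng.logistic(size=n) >= 0).astype(float)  # median-zero errors

# Maximum score criterion: S(theta) = mean((2y - 1) * 1{x'theta >= 0}).
# The indicator makes S a step function, hence the cube-root rate.
# Scale is not identified, so search over directions on the unit circle.
angles = np.linspace(0.0, 2.0 * np.pi, 1_440, endpoint=False)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
scores = ((2.0 * y - 1.0) * (x @ dirs.T >= 0.0).T).mean(axis=1)
theta_hat = dirs[scores.argmax()]
print("estimate:", theta_hat, "truth:", theta0)
```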
By: | Todd Prono |
Abstract: | Covariances between contemporaneous squared values and lagged levels form the basis for closed-form instrumental variables estimators of ARCH processes. These simple estimators rely on asymmetry for identification (either in the model's rescaled errors or in the conditional variance function) and apply to threshold ARCH(1) and ARCH(p) with p |
Keywords: | ARCH ; Closed form estimation ; Heavy tails ; Instrumental variables ; Regular variation ; Three-step estimation |
JEL: | C13 C22 C58 |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2016-83&r=ecm |
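The ARCH(1) version of the closed-form idea is easy to sketch: write the squared process in its ARMA form and instrument the lagged square with the lagged level, which is a relevant instrument only when the rescaled errors are skewed. A schematic illustration under assumed parameter values (not the paper's full three-step procedure):

```python
import numpy as np

rng = np.random.default_rng(2)
n, omega, alpha = 50_000, 0.2, 0.5

# Skewed rescaled errors (standardized chi-square(4)): the asymmetry is what
# makes the lagged level a relevant instrument for the lagged square.
eps = (rng.chisquare(4, size=n) - 4) / np.sqrt(8)
y = np.empty(n)
s2 = omega / (1 - alpha)                 # start at the unconditional variance
for t in range(n):
    y[t] = np.sqrt(s2) * eps[t]
    s2 = omega + alpha * y[t] ** 2       # ARCH(1) recursion

# ARMA form y_t^2 = omega + alpha*y_{t-1}^2 + v_t with v_t a martingale
# difference: instrument the regressor y_{t-1}^2 with the level y_{t-1}.
y2, y2lag, z = y[1:] ** 2, y[:-1] ** 2, y[:-1]
alpha_iv = np.cov(y2, z)[0, 1] / np.cov(y2lag, z)[0, 1]
omega_iv = y2.mean() - alpha_iv * y2lag.mean()
print(alpha_iv, omega_iv)
```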
By: | Clifford Lam; Pedro C.L. Souza |
Abstract: | In many economic applications, it is often of interest to categorize, classify, or label individuals into groups based on the similarity of their observed behavior. We propose a method that captures group affiliation or, equivalently, estimates the block structure of a neighboring matrix embedded in a spatial econometric model. Our main result shows that the LASSO estimator sets off-diagonal block elements to zero with high probability, a property we term “zero-block consistency”. Furthermore, we present and prove zero-block consistency for the estimated spatial weight matrix even under a thin margin of interaction between groups. The tool developed in this paper can be used by applied researchers to verify a hypothesized block structure or to explore unknown block structures. We analyze US Senate voting data and correctly identify blocks based on party affiliation. Simulations also show that the method performs well. |
Keywords: | spatial weight matrix; LASSO penalization; zero-block consistency; spatial lag/error model; Nagaev-type inequality |
JEL: | C31 C33 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:59898&r=ecm |
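A schematic of the row-wise LASSO idea: simulate a spatial autoregression with two disconnected groups and check that cross-block coefficients are estimated as zeros. This sketch uses scikit-learn's Lasso, ignores the simultaneity that the paper's estimator handles, and all tuning constants are invented:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
N, T = 20, 400

# True spatial weight matrix with two groups of 10 and no cross-group links.
W = np.zeros((N, N))
for blk in (slice(0, 10), slice(10, 20)):
    W[blk, blk] = 0.3 * rng.random((10, 10)) * (rng.random((10, 10)) < 0.3)
np.fill_diagonal(W, 0.0)
W *= 0.8 / np.abs(np.linalg.eigvals(W)).max()    # scale for stability

# Spatial lag model y_t = W y_t + e_t  =>  y_t = (I - W)^{-1} e_t.
Y = rng.normal(size=(T, N)) @ np.linalg.inv(np.eye(N) - W).T

# Row-by-row LASSO: the penalty pushes cross-block coefficients to zero,
# which is the "zero-block consistency" pattern.
W_hat = np.zeros((N, N))
for i in range(N):
    others = [j for j in range(N) if j != i]
    W_hat[i, others] = Lasso(alpha=0.05).fit(Y[:, others], Y[:, i]).coef_
print("largest off-diagonal-block estimate:", np.abs(W_hat[:10, 10:]).max())
```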
By: | Ion Lapteacru (Larefi - Laboratoire d'analyse et de recherche en économie et finance internationales - Université Montesquieu - Bordeaux 4) |
Abstract: | We develop the Murphy-Topel adjustment of the variance-covariance matrix for two-step panel data models and apply it to the competition-fragility nexus in banking, where the two equations are estimated on different samples. This situation is common in this field of research: a competition measure is constructed for the banks of each country (first equation), whereas a risk measure is regressed on the entire sample of countries (second equation). Because of possible correlations between the results of the two models, any statistical adjustment provides only approximate results for the second equation, and the Murphy-Topel method ultimately proves more appropriate. |
Keywords: | Banking, Murphy-Topel adjustment, competition, risk. |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01337726&r=ecm |
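For reference, the two-step correction in its standard likelihood form (Murphy and Topel, 1985), where $V_1$ and $V_2$ are the step-wise asymptotic variances and $L_1$, $L_2$ the two log-likelihoods; the panel-data version developed in the paper builds on this:

```latex
V_2^{*} = V_2 + V_2\left[\, C V_1 C' - R V_1 C' - C V_1 R' \,\right] V_2,
\qquad
C = \mathrm{E}\!\left[\frac{\partial \ln L_2}{\partial \theta_2}
     \frac{\partial \ln L_2}{\partial \theta_1'}\right],
\qquad
R = \mathrm{E}\!\left[\frac{\partial \ln L_2}{\partial \theta_2}
     \frac{\partial \ln L_1}{\partial \theta_1'}\right]
```

The naive second-step variance $V_2$ is thus inflated by terms reflecting the sampling error of the first-step parameters.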
By: | Clark, Todd E. (Federal Reserve Bank of Cleveland); Carriero, Andrea (Queen Mary, University of London); Marcellino, Massimiliano (Bocconi University, IGIER and CEPR)
Abstract: | We propose a new framework for measuring uncertainty and its effects on the economy, based on a large VAR model with errors whose stochastic volatility is driven by two common unobservable factors, representing aggregate macroeconomic and financial uncertainty. The uncertainty measures can also influence the levels of the variables so that, contrary to most existing measures, ours reflect changes in both the conditional mean and volatility of the variables, and their impact on the economy can be assessed within the same framework. Moreover, identification of the uncertainty shocks is simplified with respect to standard VAR-based analysis, in line with the FAVAR approach and with heteroskedasticity-based identification. Finally, the model, which is also applicable in other contexts, is estimated with a new Bayesian algorithm, which is computationally efficient and allows for jointly modeling many variables, while previous VAR models with stochastic volatility could only handle a handful of variables. Empirically, we apply the method to estimate uncertainty and its effects using US data, finding substantial commonality in uncertainty and sizable effects of uncertainty on key macroeconomic and financial variables, with responses in line with economic theory. |
Keywords: | Bayesian VARs; stochastic volatility; large datasets |
JEL: | C11 C13 C33 C55 E44 |
Date: | 2016–10–14 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwp:1622&r=ecm |
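For orientation, a stylized rendering of the kind of structure the abstract describes, a VAR whose log-volatilities load on two common factors that also enter the conditional mean; the symbols and lag structure here are illustrative, not the paper's exact specification:

```latex
y_t = \Pi(L)\, y_{t-1} + \Gamma_m\, m_t + \Gamma_f\, f_t + v_t,
\qquad
v_{j,t} = \lambda_{j,t}^{1/2}\, \varepsilon_{j,t},
\qquad
\ln \lambda_{j,t} = \beta_{j,m} \ln m_t + \beta_{j,f} \ln f_t + u_{j,t}
```

where $m_t$ and $f_t$ are the latent macroeconomic and financial uncertainty factors.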
By: | Hanene Ben Salah (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique); Ali Gannoun (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique); Christian De Peretti (ISFA - Institut des Science Financière et d'Assurances - PRES Université de Lyon); Mathieu Ribatet (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique); Abdelwahed Trabelsi (Laboratoire BESTMOD ISG Tunis - ISG Tunis) |
Abstract: | The DownSide Risk (DSR) model for portfolio optimization overcomes the drawbacks of the classical Mean-Variance model concerning the asymmetry of returns and the risk perception of investors. This optimization model deals with a positive definite matrix that is endogenous with respect to the portfolio weights and hence leads to a nonstandard optimization problem. To bypass this hurdle, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when a finite number of observations is available, the portfolio frontier usually exhibits inflexion points which make the curve not very smooth. To remove these points, Athayde (2003) proposed a mean kernel estimation of returns to obtain a smoother portfolio frontier, with an effect similar to having an infinite number of observations. In spite of the originality of this approach, the proposed algorithm was not neatly written, and no application was presented in the paper. Ben Salah et al (2015), taking advantage of the robustness of the median, replaced the mean estimator in Athayde's model by a nonparametric median estimator of the returns and gave a tidy and comprehensive version of the former algorithm of Athayde (2001, 2003). In all the previous cases, the problem is computationally complex since at each iteration the returns (for each asset and for the portfolio) need to be re-estimated: because the kernel weights change every time, the portfolio is altered. In this paper, a new method to reduce the number of iterations is proposed. Its principle is to start by estimating all the asset returns nonparametrically; the returns of a given portfolio are then derived from these estimated asset returns. Using the DSR criterion and Athayde's algorithm, a smoother portfolio frontier is obtained whether or not short selling is allowed. The proposed approach is applied to the French and Brazilian stock markets. |
Keywords: | DownSide Risk,Kernel Method,Nonparametric Mean Estimation,Nonparametric Median Estimation,Semivariance |
Date: | 2016–04–07 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01299561&r=ecm |
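The computational shortcut described at the end of the abstract is straightforward to sketch: smooth each asset's return series once, then evaluate the downside-risk objective of any candidate portfolio from those smoothed series. A minimal sketch with an assumed Gaussian kernel and scipy's constrained solver (bandwidth, target return, and data are invented; this is not Athayde's recursive algorithm itself):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, p = 500, 4
R = rng.normal(0.0005, 0.01, size=(T, p)) + rng.normal(0, 0.002, size=(T, 1))

# Step 1, done once: kernel-smooth each asset's return series over time.
t = np.arange(T)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 15.0) ** 2)  # Gaussian kernel
R_smooth = (K @ R) / K.sum(axis=1, keepdims=True)

# Step 2: portfolio returns follow from the smoothed asset returns, so the
# kernel estimation is not redone at every candidate weight vector.
def downside_risk(w, benchmark=0.0):
    shortfall = np.minimum(R_smooth @ w - benchmark, 0.0)
    return np.mean(shortfall ** 2)       # semivariance below the benchmark

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},
        {'type': 'eq', 'fun': lambda w: (R_smooth @ w).mean() - 0.0005})
res = minimize(downside_risk, np.full(p, 1 / p), constraints=cons)
print(res.x, downside_risk(res.x))      # short selling allowed (no bounds)
```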
By: | Stelios Arvanitis (Athens University of Economics and Business); Nikolas Topaloglou (Athens University of Economics and Business) |
Abstract: | We develop nonparametric tests for prospect stochastic dominance efficiency (PSDE) and Markowitz stochastic dominance efficiency (MSDE) with rejection regions determined by block bootstrap resampling techniques. Under appropriate conditions we show that they are asymptotically conservative and consistent. We conduct Monte Carlo experiments to assess the finite sample size and power of the tests, allowing for the presence of numerical errors. We use them to empirically analyze investor preferences and beliefs by testing whether the value-weighted market portfolio can be considered efficient according to prospect and Markowitz stochastic dominance criteria when confronted with diversification principles made of risky assets. Our results indicate that we cannot reject the hypothesis of prospect stochastic dominance efficiency for the market portfolio, supporting the claim that this portfolio can be rationalized as the optimal choice for any S-shaped utility function. In contrast, we reject the hypothesis for Markowitz stochastic dominance, which could imply that there exist reverse S-shaped utility functions that do not rationalize the market portfolio. |
Keywords: | Nonparametric test, prospect stochastic dominance efficiency, Markowitz stochastic dominance efficiency, simplicial complex, extremal point, Linear Programming, Mixed Integer Programming, Block Bootstrap, Consistency |
JEL: | C12 C13 C15 C44 D81 G11 |
Date: | 2015–11 |
URL: | http://d.repec.org/n?u=RePEc:aeb:wpaper:201511:y:2015&r=ecm |
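A minimal sketch of the resampling ingredient, a circular block bootstrap, which preserves short-run dependence by resampling wrap-around blocks rather than individual observations (block length, statistic, and data are invented for illustration):

```python
import numpy as np

def circular_block_bootstrap(x, block_len, n_boot, rng):
    """Resample a dependent series in contiguous blocks, wrapping around the
    end of the sample, so dependence within each block is preserved."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty((n_boot, n_blocks * block_len))
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)[None, :]) % n
        out[b] = x[idx].ravel()
    return out[:, :n]                      # trim to the original length

rng = np.random.default_rng(5)
x = 0.01 * rng.normal(size=300).cumsum() + rng.normal(size=300)
boot = circular_block_bootstrap(x, block_len=12, n_boot=999, rng=rng)
stat = boot.mean(axis=1)                   # bootstrap distribution of the mean
crit = np.quantile(stat - x.mean(), 0.95)  # centered 95% critical value
print(crit)
```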
By: | Ning Xu; Jian Hong; Timothy C. G. Fisher |
Abstract: | We study model evaluation and model selection from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. We believe that GA is one way to formally address concerns about the external validity of a model. The GA of a model estimated on a sample can be measured by its empirical out-of-sample errors, called the generalization errors (GE). We derive upper bounds for the GE, which depend on sample size, model complexity, and the distribution of the loss function. The upper bounds can be used to evaluate the GA of a model ex ante. We propose generalization error minimization (GEM) as a framework for model selection. Using GEM, we are able to unify a large class of penalized regression estimators, including lasso, ridge, and bridge, under the same set of assumptions. We establish finite-sample and asymptotic properties (including $\mathcal{L}_2$-consistency) of the GEM estimator for both the $n \geqslant p$ and the $n < p$ cases. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.05448&r=ecm |
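A minimal sketch of the two ingredients: a bridge-type penalized objective that nests lasso (q = 1) and ridge (q = 2), and an empirical generalization error computed on a holdout sample. The penalty level, split, and data are invented, and the paper's GE bounds are analytical rather than simulated:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta0 = np.r_[2.0, -1.0, np.zeros(p - 2)]
y = X @ beta0 + rng.normal(size=n)

def penalized_fit(X, y, lam, q):
    """Bridge-type estimator: squared loss + lam * sum_j |beta_j|^q.
    q = 1 is the lasso, q = 2 is ridge; other q give other bridge members."""
    obj = lambda b: np.mean((y - X @ b) ** 2) + lam * np.sum(np.abs(b) ** q)
    return minimize(obj, np.zeros(X.shape[1]), method='Powell').x

# Empirical generalization error: out-of-sample loss on a holdout split.
tr, te = slice(0, 200), slice(200, n)
for q in (1.0, 2.0):
    b_hat = penalized_fit(X[tr], y[tr], lam=0.1, q=q)
    ge = np.mean((y[te] - X[te] @ b_hat) ** 2)
    print(f"q = {q}: empirical GE = {ge:.3f}")
```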
By: | Stelios Arvanitis (Athens University of Economics and Business) |
Abstract: | We derive the continuity properties of the cdf of a random variable defined as a saddle-type point of a real-valued continuous stochastic process on a compact metric space. This result facilitates the derivation of first order asymptotic properties of tests for stochastic spanning w.r.t. some stochastic dominance relation, based on subsampling. As an illustration, we define the concept of Markowitz stochastic spanning and derive an analytic representation, upon the empirical analog of which we construct a relevant statistical test. The aforementioned result enables the derivation of asymptotic exactness of the subsampling-based procedure when the metric space has the form of a simplicial complex, the spanning set is a compact subset, and the significance level is chosen according to the number of extreme points of the complex inside the spanning set. Consistency is also derived. Such tests are of interest in financial economics since they can provide reductions of portfolio sets. |
Keywords: | Continuous Process, Malliavin Derivative, Nested Optimizations, Saddle-Type Point, Connected Support, Atom, Absolute Continuity, Markowitz Stochastic Dominance, Stochastic Spanning, Spanning Test, Subsampling, Gaussian Process, Brownian Bridge, Asymptotic Exactness, Consistency. |
Date: | 2015–09 |
URL: | http://d.repec.org/n?u=RePEc:aeb:wpaper:201509:y:2015&r=ecm |
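The subsampling ingredient itself is generic and easy to sketch: recompute the rate-scaled statistic on many size-b subsamples and read the critical value off their empirical quantile. A schematic illustration on an invented mean-zero test, not the paper's spanning statistic:

```python
import numpy as np

def subsampling_critical_value(x, statistic, b, level, rng, n_sub=500):
    """Quantile of a statistic recomputed on contiguous size-b subsamples,
    with b << n; valid under much weaker conditions than the bootstrap."""
    starts = rng.integers(0, len(x) - b + 1, size=n_sub)
    vals = np.array([statistic(x[s:s + b]) for s in starts])
    return np.quantile(vals, level)

rng = np.random.default_rng(7)
x = rng.standard_t(df=5, size=2_000)
stat = lambda z: np.sqrt(len(z)) * abs(z.mean())   # root-n scaled statistic
crit = subsampling_critical_value(x, stat, b=100, level=0.95, rng=rng)
print(stat(x), crit, "reject" if stat(x) > crit else "fail to reject")
```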
By: | Kapetanios, George; Marcellino, Massimiliano; Venditti, Fabrizio |
Abstract: | In this paper we introduce a nonparametric estimation method for a large Vector Autoregression (VAR) with time-varying parameters. The estimators and their asymptotic distributions are available in closed form. This makes the method computationally efficient and capable of handling information sets as large as those typically handled by factor models and Factor Augmented VARs (FAVAR). When applied to the problem of forecasting key macroeconomic variables, the method outperforms constant-parameter benchmarks and large (parametric) Bayesian VARs with time-varying parameters. The tool can also be used for structural analysis. As an example, we study the time-varying effects of oil price innovations on sectoral U.S. industrial output. We find that the changing interaction between unexpected oil price increases and business cycle fluctuations is shaped by the durable materials sector, rather than by the automotive sector on which a large part of the literature has typically focused. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11560&r=ecm |
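Closed-form kernel estimation of a time-varying VAR reduces, in its simplest local-constant form, to weighted least squares with kernel weights centered at each date, so no filtering or MCMC is required. A minimal sketch under an assumed Gaussian kernel and an invented simulation design (the paper's estimator and bandwidth choices may differ):

```python
import numpy as np

rng = np.random.default_rng(8)
T, k = 400, 2
# Simulate a bivariate VAR(1) whose coefficient drifts smoothly over time.
B_path = np.array([[[0.5 + 0.3 * np.sin(2 * np.pi * t / T), 0.1],
                    [0.0, 0.4]] for t in range(T)])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = B_path[t] @ y[t - 1] + rng.normal(0, 0.5, size=k)

# Local-constant estimator of B_t: kernel-weighted least squares with
# Gaussian weights centered at each date t -- available in closed form.
H = 40.0
X, Y = y[:-1], y[1:]
t_grid = np.arange(1, T)
B_hat = np.empty((T - 1, k, k))
for i, t in enumerate(t_grid):
    w = np.exp(-0.5 * ((t_grid - t) / H) ** 2)
    Xw = X * w[:, None]
    B_hat[i] = np.linalg.solve(Xw.T @ X, Xw.T @ Y).T
print(B_hat[200][0, 0], "vs truth", B_path[201][0, 0])
```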
By: | Damien Ackerer; Thibault Vatter |
Abstract: | We introduce a class of flexible and tractable static factor models for the joint term structure of default probabilities: the factor copula models. These high-dimensional models remain parsimonious with pair copula constructions and nest numerous standard models as special cases. With finitely supported random losses, the loss distributions of credit portfolios and derivatives can be computed exactly and efficiently. Numerical examples on collateralized debt obligations (CDO), CDO squared, and credit index swaptions illustrate the versatility of our framework. An empirical exercise shows that a simple model specification can fit credit index tranche prices. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1610.03050&r=ecm |
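The conditional-independence structure that makes exact loss computation possible is easiest to see in the one-factor Gaussian copula, a simple member of the factor copula family: given the factor, defaults are independent, so the portfolio loss distribution is a binomial mixture. A minimal sketch integrating over the factor by quadrature (portfolio size, correlation, and default probability are invented):

```python
import numpy as np
from scipy.stats import norm, binom

# One-factor Gaussian copula: X_i = rho*Z + sqrt(1 - rho^2)*e_i,
# name i defaults when X_i <= Phi^{-1}(p).
p, rho, n_names = 0.05, 0.4, 100
c = norm.ppf(p)

# Conditional on Z the defaults are independent; integrate the binomial loss
# distribution over Z with Gauss-Hermite quadrature (exact up to quadrature).
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)          # weights for the N(0,1) density
p_z = norm.cdf((c - rho * nodes) / np.sqrt(1 - rho ** 2))
k = np.arange(n_names + 1)
loss_pmf = (weights[:, None]
            * binom.pmf(k[None, :], n_names, p_z[:, None])).sum(axis=0)
print(loss_pmf.sum(), loss_pmf[:5])             # pmf sums to ~1
```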
By: | Hanene Ben Salah (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique, ISFA - Institut des Science Financière et d'Assurances - PRES Université de Lyon, BESTMOD - Business and Economic Statistics MODeling - ISG - Institut Supérieur de Gestion de Tunis [Tunis] - Université de Tunis [Tunis]); Mohamed Chaouch (United Arab Emirates University); Ali Gannoun (IMAG - Institut Montpelliérain Alexander Grothendieck - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique); Christian De Peretti (ISFA - Institut des Science Financière et d'Assurances - PRES Université de Lyon); Abdelwahed Trabelsi (BESTMOD - Business and Economic Statistics MODeling - ISG - Institut Supérieur de Gestion de Tunis [Tunis] - Université de Tunis [Tunis]) |
Abstract: | The DownSide Risk (DSR) model for portfolio optimisation overcomes the drawbacks of the classical Mean-Variance model concerning the asymmetry of returns and the risk perception of investors. This optimization model deals with a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when a finite number of observations is available, the portfolio frontier presents some discontinuities and is not very smooth. To overcome this, Athayde (2003) proposed a mean kernel estimation of the returns so as to create a smoother portfolio frontier, with an effect similar to the case in which continuous observations are available. In this paper, Athayde's model is reformulated and clarified. Then, taking advantage of the robustness of the median, another nonparametric approach based on median kernel estimation of the returns is proposed in order to construct a portfolio frontier, and a new version of Athayde's algorithm is exhibited. Finally, the properties of this improved portfolio frontier are studied and analysed on the French stock market. |
Keywords: | DownSide Risk, Kernel Method, Nonparametric Mean Estimation, Nonparametric Median Estimation, Portfolio Efficient Frontier, Semi-Variance |
Date: | 2016–04–12 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01300673&r=ecm |
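The robust smoothing step here is a kernel-weighted (local) median of the return series, the median counterpart of the mean kernel smoother above. A minimal sketch with an assumed Gaussian kernel (bandwidth and data invented; this is only the smoothing ingredient, not the full frontier algorithm):

```python
import numpy as np

def kernel_median_smooth(r, h):
    """Kernel-weighted local median of a series: at each date, the weighted
    median with Gaussian weights -- the robust counterpart of mean smoothing."""
    T, t = len(r), np.arange(len(r))
    order = np.argsort(r)                 # sort once, reuse for every date
    r_sorted = r[order]
    out = np.empty(T)
    for i in range(T):
        w = np.exp(-0.5 * ((t - i) / h) ** 2)[order]
        cw = np.cumsum(w) / w.sum()
        out[i] = r_sorted[np.searchsorted(cw, 0.5)]   # weighted median
    return out

rng = np.random.default_rng(9)
r = 0.01 * rng.standard_t(df=3, size=250)  # heavy-tailed "returns"
print(kernel_median_smooth(r, h=10.0)[:5])
```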