on Econometrics
By: | Christopher L. Skeels; Frank Windmeijer |
Abstract: | A standard test for weak instruments compares the first-stage F-statistic to a table of critical values obtained by Stock and Yogo (2005) using simulations. We derive a closed-form solution for the expectation that determines these critical values. Inspection of this new result provides insights not available from simulation, and will allow software implementations to be generalized and improved. Of independent interest, our analysis makes contributions to the theory of confluent hypergeometric functions and the theory of ratios of quadratic forms in normal variables. A by-product of our developments is an expression for the distribution function of the non-central chi-squared distribution that we have not been able to find elsewhere in the literature. Finally, we explore the calculation of p-values for the first-stage F-statistic weak instruments test. We provide the information needed in essentially all cases of practical interest such that any computer software that can evaluate the cumulative distribution of a non-central chi-squared can readily compute p-values. |
Keywords: | Weak instruments, hypothesis testing, Stock-Yogo tables, hypergeometric functions, quadratic forms, p-values. |
JEL: | C12 C36 C46 C52 C65 C88 |
Date: | 2016–11–09 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:16/679&r=ecm |
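The entry above notes that p-values for the first-stage F weak-instruments test only require software that can evaluate the non-central chi-squared CDF. A minimal Python sketch of that single primitive follows; the mapping from the observed F-statistic and the Stock-Yogo threshold to the degrees of freedom and non-centrality parameter below is a hypothetical placeholder, not the paper's derivation.

```python
from scipy.stats import ncx2

# Upper tail of a non-central chi-squared distribution: the only numerical
# primitive the abstract says is needed to compute weak-instrument p-values.
# The values of df and nc below are hypothetical placeholders, not the
# paper's mapping from the F-statistic and the Stock-Yogo threshold.
k, F_obs = 3, 12.0   # number of excluded instruments, observed first-stage F
nc = 30.0            # placeholder non-centrality under the weak-instrument null
p_value = ncx2.sf(k * F_obs, df=k, nc=nc)
print(f"illustrative p-value: {p_value:.4f}")
```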
By: | Delle Monache, Davide; Petrella, Ivan; Venditti, Fabrizio |
Abstract: | In this paper we develop a new theoretical framework for the analysis of state space models with time-varying parameters. We let the driver of the time variation be the score of the predictive likelihood and derive a new filter that allows us to estimate simultaneously the state vector and the time-varying parameters. In this setup the model remains Gaussian, the likelihood function can be evaluated using the Kalman filter and the model parameters can be estimated via maximum likelihood, without requiring the use of computationally intensive methods. Using a Monte Carlo exercise we show that the proposed method works well for a number of different data generating processes. We also present two empirical applications. In the first, we improve the measurement of GDP growth by combining alternative noisy measures; in the second, we construct an index of financial stress and evaluate its usefulness in nowcasting GDP growth in real time. Given that a variety of time series models have a state space representation, the proposed methodology is of wide interest in econometrics and statistics. |
Keywords: | Business cycle; financial stress; score-driven models; state space models; time-varying parameters |
JEL: | C22 C32 C51 C53 E31 |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11599&r=ecm |
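The entry above stresses that the model remains Gaussian, so the likelihood can be evaluated with the Kalman filter. Below is a minimal local-level Kalman filter in Python that evaluates such a Gaussian likelihood with fixed variances; the score-driven updating of time-varying parameters developed in the paper is not reproduced.

```python
import numpy as np

def kalman_loglik(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Gaussian log-likelihood of a local-level model via the Kalman filter:
    y_t = a_t + eps_t,  a_{t+1} = a_t + eta_t (variances held fixed here)."""
    a, p, loglik = a0, p0, 0.0
    for yt in y:
        f = p + sigma_eps2                    # prediction-error variance
        v = yt - a                            # one-step-ahead prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                             # Kalman gain
        a, p = a + k * v, p * (1 - k) + sigma_eta2
    return loglik

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200)) + rng.standard_normal(200)
print(kalman_loglik(y, sigma_eps2=1.0, sigma_eta2=1.0))
```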
By: | Simon Beyeler (Study Center Gerzensee and University of Bern); Sylvia Kaufmann (Study Center Gerzensee) |
Abstract: | We combine the factor augmented VAR framework with recently developed estimation and identification procedures for sparse dynamic factor models. Working with a sparse hierarchical prior distribution allows us to discriminate between zero and non-zero factor loadings. The non-zero loadings identify the unobserved factors and provide a meaningful economic interpretation for them. Given that we work with a general covariance matrix of factor innovations, we can implement different strategies for structural shock identification. Applying our methodology to US macroeconomic data (FRED QD) indeed reveals a high degree of sparsity in the data. The proposed identification procedure yields seven unobserved factors that account for about 52 percent of the variation in the data. We simultaneously identify a monetary policy, a productivity and a news shock by recursive ordering and by applying the method of maximizing the forecast error variance share in a specific variable. Factors and specific variables show sensible responses to the identified shocks. |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:szg:worpap:1608&r=ecm |
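A generic check of the headline quantity in the entry above, the share of variation accounted for by seven factors, computed here by principal components on simulated, standardized data; this is not the authors' sparse hierarchical Bayesian estimator.

```python
import numpy as np

# Share of variation explained by r principal components on standardized data;
# a generic diagnostic only, not the sparse hierarchical Bayesian factor model
# of the paper. Data are simulated with a true r-factor structure.
rng = np.random.default_rng(1)
T, N, r = 200, 120, 7
F = rng.standard_normal((T, r))
L = rng.standard_normal((N, r))
X = F @ L.T + 2.0 * rng.standard_normal((T, N))
Z = (X - X.mean(axis=0)) / X.std(axis=0)

eigvals = np.linalg.eigvalsh(Z.T @ Z / T)[::-1]
print(f"share of variation explained by {r} components: "
      f"{eigvals[:r].sum() / eigvals.sum():.2f}")
```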
By: | Byeong U. Park (Department of Statistics, Seoul National University); Leopold Simar (Inst. de Statistique, Biostatistique et Sciences Actuarielles, Universite Catholique de Louvain); Valentin Zelenyuk (School of Economics, The University of Queensland) |
Abstract: | The non-parametric quasi-likelihood method is generalized to the context of discrete choice models for time series data where the dynamics are modelled via lags of the discrete dependent variable appearing among the regressors. Consistency and asymptotic normality of the estimator for such models in the general case are derived under the assumption of stationarity with a strong mixing condition. Monte Carlo examples are used to illustrate the performance of the proposed estimator relative to the fully parametric approach. Possible applications of the proposed estimator include modelling and forecasting the probability that a subject responds positively to a treatment, that the economy enters a recession in the next period, or that a stock market goes down or up, etc. |
Keywords: | Nonparametric, Dynamic Discrete Choice, Probit |
JEL: | C14 C22 C25 C44 |
Date: | 2016–10 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:116&r=ecm |
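The fully parametric benchmark against which the entry above evaluates its nonparametric quasi-likelihood estimator can be sketched as a probit with the lagged binary outcome among the regressors; the sketch below uses simulated data and does not implement the nonparametric estimator itself.

```python
import numpy as np
import statsmodels.api as sm

# Parametric baseline for a dynamic discrete-choice model: a probit with the
# lagged binary outcome among the regressors, fitted to simulated data. The
# paper's nonparametric quasi-likelihood estimator is not implemented here.
rng = np.random.default_rng(2)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T, dtype=int)
for t in range(1, T):
    y[t] = int(0.8 * y[t - 1] + 0.5 * x[t] + rng.standard_normal() > 0)

X = sm.add_constant(np.column_stack([y[:-1], x[1:]]))  # lagged y and current x
fit = sm.Probit(y[1:], X).fit(disp=False)
print(fit.params)
```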
By: | Michael Creel |
Abstract: | For simulable models, neural networks are used to approximate the limited information posterior mean, which conditions on a vector of statistics, rather than on the full sample. Because the model is simulable, training and testing samples may be generated with sizes large enough to train well a net that is large enough, in terms of the number of hidden layers and neurons, to learn the limited information posterior mean with good accuracy. Targeting the limited information posterior mean using neural nets is simpler, faster, and more successful than is targeting the full information posterior mean, which conditions on the observed sample. The output of the trained net can be used directly as an estimator of the model’s parameters, or as an input to subsequent classical or Bayesian indirect inference estimation. Examples of indirect inference based on the output of the net include a small dynamic stochastic general equilibrium model, estimated using both classical indirect inference methods and approximate Bayesian computing (ABC) methods, and a continuous time jump-diffusion model for stock index returns, estimated using ABC. |
Keywords: | neural networks; indirect inference; approximate Bayesian computing; machine learning; DSGE; jump-diffusion |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:bge:wpaper:942&r=ecm |
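A toy version of the approach described above: simulate (parameter, statistics) pairs from a simple AR(1) model and train a small neural net to map the statistics to the parameter, i.e. to approximate the limited information posterior mean. The AR(1) model, the choice of statistics and the network size are illustrative assumptions; the paper's DSGE and jump-diffusion applications are far richer.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn the mapping from summary statistics to a model parameter with a neural
# net, as a stand-in for the limited information posterior mean. The AR(1)
# model and the three statistics are illustrative assumptions only.
rng = np.random.default_rng(3)

def summary_stats(rho, T=200):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return [np.corrcoef(y[:-1], y[1:])[0, 1], y.var(), np.abs(np.diff(y)).mean()]

rhos = rng.uniform(-0.9, 0.9, 5000)             # parameters drawn from the prior
S = np.array([summary_stats(r) for r in rhos])  # simulated statistics

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(S[:4000], rhos[:4000])                  # training sample
print("test MSE:", np.mean((net.predict(S[4000:]) - rhos[4000:]) ** 2))
```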
By: | Lenka Zbonakova; Wolfgang Karl Härdle; Weining Wang |
Abstract: | In the present paper we study the dynamics of the penalization parameter λ of the least absolute shrinkage and selection operator (Lasso) method proposed by Tibshirani (1996) and extended into the quantile regression context by Li and Zhu (2008). The dynamic behaviour of the parameter λ can be observed when the model is assumed to vary over time and the fitting is therefore performed with the use of moving windows. The proposal of investigating the time series of λ and its dependency on model characteristics was brought into focus by Härdle et al. (2016), which was the foundation of the FinancialRiskMeter (http://frm.wiwi.hu-berlin.de). Following the ideas behind the two aforementioned projects, we use the derivation of the formula for the penalization parameter λ as a result of the optimization problem. This reveals three possible effects driving λ: the variance of the error term, the correlation structure of the covariates and the number of nonzero coefficients of the model. Our aim is to disentangle these three effects and investigate their relationship with the tuning parameter λ, which is done by means of a simulation study. After dealing with the theoretical impact of the three model characteristics on λ, an empirical application is performed and the idea of implementing the parameter λ into a systemic risk measure is presented. The codes used to obtain the results included in this work are available on http://quantlet.de/d3/ia/. |
JEL: | C21 G01 G20 G32 |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-047&r=ecm |
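A sketch of the moving-window exercise described above: re-fit a cross-validated Lasso on rolling windows and record the selected penalty over time. This uses scikit-learn's mean-regression Lasso on simulated data; the quantile-regression Lasso of Li and Zhu (2008) and the FinancialRiskMeter construction are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Rolling-window Lasso: record the cross-validated penalty (sklearn's alpha,
# the lambda of the abstract) over time on simulated data. The quantile
# regression variant used for the FRM is not implemented here.
rng = np.random.default_rng(4)
T, p, window = 600, 20, 120
X = rng.standard_normal((T, p))
beta = np.r_[np.ones(3), np.zeros(p - 3)]       # sparse true coefficients
y = X @ beta + rng.standard_normal(T)

lambdas = []
for start in range(0, T - window, 10):
    sl = slice(start, start + window)
    lambdas.append(LassoCV(cv=5).fit(X[sl], y[sl]).alpha_)
print("time series of selected penalties:", np.round(lambdas, 3))
```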
By: | Shaimaa Yassin (University of Neuchatel (Institute of Economic Research)) |
Abstract: | To be able to redress retrospective panels into random samples and correct for any recall and/or design bias the data might suffer from, this paper builds on the methodology proposed by Langot and Yassin (2015) and extends it to correct the data at the individual transaction level (i.e. the micro level). It creates user-friendly weights that can be readily used by researchers relying on retrospective panels extracted from the Egypt and Jordan Labor Market Panel Surveys (ELMPS and JLMPS, respectively). The technique suggested shows that it is sufficient to have population moments - stocks and/or transitions (for at least one point in time) - to correct over- or under-reporting biases in the retrospective data. The paper proposes two types of micro-data weights: (1) naive proportional weights and (2) differentiated predicted weights. Both transaction-level weights, i.e. for each transition at a certain point in time, and panel weights, i.e. for an entire job or non-employment spell, are built. In order to highlight the importance of these weights, the paper also offers an application using them. The determinants of labor market transitions in Egypt and Jordan are analyzed via a multinomial regression analysis with and without the weights. The impact of these weights on the regression estimates and coefficients is therefore examined and shown to be significant among the different types of labor market transitions, especially separations. |
Keywords: | Panel Data, Retrospective Data, Measurement Error, Micro-data weights, Labor Markets Transitions, Egypt, Jordan. |
JEL: | C83 C81 J01 J62 J64 |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:irn:wpaper:16-07&r=ecm |
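One reading of the "naive proportional weights" mentioned above, as a minimal sketch: weight each reported transition type by the ratio of its population share to its share in the retrospective sample. The transition labels and the population moments below are hypothetical placeholders, not the ELMPS/JLMPS variables.

```python
import pandas as pd

# Naive proportional transition weights: (population share) / (sample share)
# for each transition type. Labels and moments are hypothetical placeholders.
retro = pd.DataFrame({"transition": ["EE", "EU", "UE", "UU"],
                      "sample_share": [0.70, 0.05, 0.10, 0.15]})
population_share = {"EE": 0.62, "EU": 0.10, "UE": 0.12, "UU": 0.16}

retro["weight"] = retro["transition"].map(population_share) / retro["sample_share"]
print(retro)
```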
By: | John C. Whitehead; Daniel K. Lew |
Abstract: | We develop econometric models to jointly estimate revealed preference (RP) and stated preference (SP) models of recreational fishing behavior and preferences using survey data from the 2007 Alaska Saltwater Sportfishing Economic Survey. The RP data are from site choice survey questions, and the SP data are from a discrete choice experiment. Random utility models using only the RP data may be more likely to estimate the effect of cost on site selection well, but catch per day estimates may not reflect the benefits of the trip as perceived by anglers. The SP models may be more likely to estimate the effects of trip characteristics well, but less attention may be paid to the cost variable due to the hypothetical nature of the SP questions. The combination and joint estimation of RP and SP data seeks to exploit the contrasting strengths of both. We find that there are significant gains in econometric efficiency, and differences between RP and SP willingness to pay estimates are mitigated by joint estimation. We compare a number of models that have appeared in the environmental economics literature with the generalized multinomial logit model. The nested logit “trick” model fails to account for the panel nature of the data and is less preferred than the mixed logit error components model that accounts for panel data and scale differences. Naïve (1) scaled, (2) mixed logit, and (3) generalized multinomial logit models produced similar results to a generalized multinomial logit model that accounts for scale differences in RP and SP data. Willingness to pay estimates do not differ across these models but are greater than those in the mixed logit error components model. |
Keywords: | discrete choice experiment, generalized multinomial logit model, hypothetical bias, revealed preference, stated preference, travel cost method |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:apl:wpaper:16-22&r=ecm |
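The core idea of joint RP/SP estimation can be sketched as a pooled conditional logit in which the SP utilities carry a relative scale parameter. The sketch below uses simulated data and a plain logit likelihood; it is not the authors' generalized multinomial logit or mixed logit error components specification.

```python
import numpy as np
from scipy.optimize import minimize

# Pooled RP/SP conditional logit with a relative scale parameter mu on the SP
# utilities; a generic sketch on simulated data, not the authors' generalized
# multinomial logit or mixed logit error components models.
rng = np.random.default_rng(5)
N, J, K = 400, 3, 2                        # choice occasions, alternatives, attributes
X = rng.standard_normal((N, J, K))         # alternative attributes
is_sp = np.repeat([0, 1], N // 2)          # first half RP, second half SP
true_beta, true_mu = np.array([1.0, -0.5]), 0.6
u = np.where(is_sp == 1, true_mu, 1.0)[:, None] * (X @ true_beta) + rng.gumbel(size=(N, J))
choice = u.argmax(axis=1)

def neg_loglik(theta):
    beta, mu = theta[:K], np.exp(theta[K])  # exp() keeps the SP scale positive
    v = np.where(is_sp[:, None] == 1, mu, 1.0) * (X @ beta)
    v -= v.max(axis=1, keepdims=True)       # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(N), choice].sum()

res = minimize(neg_loglik, x0=np.zeros(K + 1), method="BFGS")
print("beta estimates:", res.x[:K], " relative SP scale:", np.exp(res.x[K]))
```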
By: | Hecq, Alain; Telg, Sean; Lieb, Lenard |
Abstract: | This paper investigates the effect of seasonal adjustment filters on the identification of mixed causal-noncausal autoregressive (MAR) models. By means of Monte Carlo simulations, we find that standard seasonal filters might induce spurious autoregressive dynamics, a phenomenon already documented in the literature. Symmetrically, we show that those filters also generate a spurious noncausal component in the seasonally adjusted series. The presence of this spurious noncausal feature has important implications for modelling economic time series driven by expectation relationships. An empirical application on European inflation data illustrates these results. In particular, whereas several inflation rates are forecastable on seasonally adjusted series, they appear to be white noise using raw data. |
Keywords: | seasonality; inflation; seasonal adjustment filters; mixed causal-noncausal models; autoregressive; noncausality; expectations |
JEL: | C22 E37 |
Date: | 2016–11–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:74922&r=ecm |
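A quick simulation of the phenomenon documented above: applying a symmetric seasonal moving-average filter to pure white noise induces autocorrelation that is absent from the raw series. The centered 2x12 filter below is a crude stand-in for standard adjustment procedures; the paper's mixed causal-noncausal (MAR) models are not estimated here.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# White noise has no dynamics, but subtracting a symmetric seasonal moving
# average (a crude stand-in for standard adjustment filters) induces
# autocorrelation in the adjusted series; this is the spurious dynamics at issue.
rng = np.random.default_rng(6)
e = rng.standard_normal(2000)

s = 12                                          # monthly data
w = np.r_[0.5, np.ones(s - 1), 0.5] / s         # centered 2x12 moving-average weights
adjusted = e - np.convolve(e, w, mode="same")   # remove the estimated seasonal/trend part

print("ACF of raw series, lags 1-3:     ", np.round(acf(e, nlags=3)[1:], 3))
print("ACF of adjusted series, lags 1-3:", np.round(acf(adjusted, nlags=3)[1:], 3))
```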
By: | Krüger, Jens J. |
Abstract: | We propose a two-stage procedure for finding realistic benchmarks for nonparametric efficiency analysis. In the first stage, the efficient DMUs are identified by a free disposal hull approach. These benchmarks are directly targeted by directional distance functions, and the extent of inefficiency is measured along the direction towards an existing DMU. Two variants, for finding the closest or the furthest benchmark, are proposed. With this approach there is no need to use linear combinations of existing DMUs as benchmarks, which may not be achievable in reality, and also no need to accept slacks which are not reflected by the efficiency measure. |
Keywords: | directional distance functions, targeting, direct benchmarks |
JEL: | C14 D24 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:darddp:229&r=ecm |
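A sketch of the first stage and the "closest benchmark" variant described above, under the simplest setup: flag the free-disposal-hull efficient DMUs (those not weakly dominated by any other unit) and, for each inefficient DMU, report the nearest dominating efficient unit as its benchmark. The directional distance functions themselves are not coded here.

```python
import numpy as np

# First stage: FDH-efficient DMUs are those not weakly dominated by any other
# unit (another unit with inputs <=, outputs >=, at least one strict). Then,
# for an inefficient DMU, take the closest dominating efficient unit as its
# benchmark. The paper's directional distance functions are not implemented.
rng = np.random.default_rng(7)
n = 15
inputs = rng.uniform(1, 10, (n, 2))
outputs = rng.uniform(1, 10, (n, 2))

def dominates(i, j):
    return (np.all(inputs[i] <= inputs[j]) and np.all(outputs[i] >= outputs[j])
            and (np.any(inputs[i] < inputs[j]) or np.any(outputs[i] > outputs[j])))

efficient = [j for j in range(n) if not any(dominates(i, j) for i in range(n))]

for j in sorted(set(range(n)) - set(efficient)):
    candidates = [i for i in efficient if dominates(i, j)]
    dists = [np.linalg.norm(np.r_[inputs[i] - inputs[j], outputs[i] - outputs[j]])
             for i in candidates]
    print(f"DMU {j}: closest efficient benchmark is DMU {candidates[int(np.argmin(dists))]}")
```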
By: | Chia-Lin Chang (National Chung Hsing University, Taiwan); Michael McAleer (National Tsing Hua University, Taiwan; Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands; Complutense University of Madrid, Spain; Yokohama National University, Japan) |
Abstract: | An early development in testing for causality (technically, Granger non-causality) in the conditional variance (or volatility) associated with financial returns was the portmanteau statistic for non-causality in variance of Cheng and Ng (1996). A subsequent development was the Lagrange Multiplier (LM) test of non-causality in the conditional variance by Hafner and Herwartz (2006), who provided simulation results to show that their LM test was more powerful than the portmanteau statistic. While the LM test for causality proposed by Hafner and Herwartz (2006) is an interesting and useful development, it is nonetheless arbitrary. In particular, the specification on which the LM test is based does not rely on an underlying stochastic process, so that the alternative hypothesis is also arbitrary, which can affect the power of the test. The purpose of the paper is to derive a simple test for causality in volatility that provides regularity conditions arising from the underlying stochastic process, namely a random coefficient autoregressive process, and for which the (quasi-) maximum likelihood estimates have valid asymptotic properties. The simple test is intuitively appealing as it is based on an underlying stochastic process, is sympathetic to Granger’s (1969, 1988) notion of time series predictability, is easy to implement, and has a regularity condition that is not available in the LM test. |
Keywords: | Random coefficient stochastic process; Simple test; Granger non-causality; Regularity conditions; Asymptotic properties; Conditional volatility |
JEL: | C22 C32 C52 C58 |
Date: | 2016–11–07 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20160094&r=ecm |
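Not the authors' test (whose form and regularity conditions come from an underlying random coefficient autoregressive process), but a generic regression-based check in the same spirit, shown only to convey the idea of testing Granger non-causality in variance.

```python
import numpy as np
import statsmodels.api as sm

# Generic regression-based check of Granger non-causality in variance:
# regress y_t^2 on its own lag and on the lagged x_t^2 and inspect the latter
# coefficient. This is NOT the authors' test, whose form is derived from a
# random coefficient autoregressive process; it only conveys the idea.
rng = np.random.default_rng(8)
T = 1000
x = rng.standard_normal(T)
y = np.sqrt(0.2 + 0.5 * np.r_[0.0, x[:-1] ** 2]) * rng.standard_normal(T)

Z = sm.add_constant(np.column_stack([y[:-1] ** 2, x[:-1] ** 2]))
fit = sm.OLS(y[1:] ** 2, Z).fit()
print(fit.summary().tables[1])   # the coefficient on lagged x^2 carries the check
```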
By: | Pavlína Hejduková (Faculty of Economics, University of West Bohemia); Lucie Kureková (Faculty of Economics, University of Economics) |
Abstract: | This paper deals with the causal determination of phenomena (briefly, causality) as a tool for empirical analysis in economics. Although causality is difficult to grasp, many scientific theories, including economic theory, are built on it. Causality is a very topical subject today, both in philosophy and in economics. It is used across many disciplines, and the concept of causality differs between them. In economics, we encounter many assertions that connect cause and effect, but the causal relationships are not clearly expressed. At first glance, cause and effect may be confused, and the phenomena studied can then be viewed in terms of causality and vice versa. Causality plays a very important role in econometrics and economics. The paper focuses on the use of causality in economic and econometric studies. It begins with a brief overview of theoretical definitions of causality. Then, empirical approaches to causality in economics and econometrics and selected tools for causal analysis are presented and discussed, and a case study of a possible use of the Granger Causality Test is shown. At the end of the paper we discuss the significance of the Granger Causality Test in economics. The aims of this paper are the following: to define the different approaches to causality and briefly describe the history of the term, to analyse selected econometric methods in relation to causality, and to show, using the example of the Granger Causality Test, how causality is applied in empirical analysis in economics. |
Keywords: | Causality; Economics; Econometrics; Empirical Analysis; Granger; Granger Causality Test |
JEL: | B16 B23 C10 |
URL: | http://d.repec.org/n?u=RePEc:sek:ibmpro:4407035&r=ecm |
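The case-study tool discussed above, run on simulated data in which one series Granger-causes the other by construction; a minimal sketch using statsmodels.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Granger causality test on simulated data where x Granger-causes y by
# construction, so the test should reject non-causality. The function tests
# whether the second column Granger-causes the first.
rng = np.random.default_rng(9)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```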
By: | Murwan H. M. A. Siddig |
Abstract: | This paper aims to review the methodology behind generalized linear models, which are used in analyzing actuarial situations instead of ordinary multiple linear regression. We show how to assess the adequacy of the model, which includes comparing nested models using the deviance and the scaled deviance. The Akaike information criterion is proposed as a comprehensive tool for selecting the adequate model. We model a simple automobile portfolio using generalized linear models, and use the best chosen model to predict the number of claims made by the policyholders in the portfolio. |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1611.02556&r=ecm |
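A sketch of the workflow in the entry above: fit a Poisson GLM for claim counts, compare nested models via the drop in deviance, and use the AIC for model selection. The covariates and portfolio are simulated placeholders, not the paper's automobile data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Poisson GLM for claim counts: nested-model comparison via deviance and
# model selection via AIC. Covariates are simulated placeholders, not the
# paper's automobile portfolio.
rng = np.random.default_rng(10)
n = 2000
df = pd.DataFrame({"age": rng.integers(18, 80, n).astype(float),
                   "urban": rng.integers(0, 2, n)})
df["claims"] = rng.poisson(np.exp(-2.0 + 0.01 * df["age"] + 0.4 * df["urban"]))

small = smf.glm("claims ~ age", data=df, family=sm.families.Poisson()).fit()
full = smf.glm("claims ~ age + urban", data=df, family=sm.families.Poisson()).fit()

print("drop in deviance (1 df):", small.deviance - full.deviance)
print("AIC, small model:", small.aic, " AIC, full model:", full.aic)
print("predicted claim counts:", full.predict(df.head()).round(3).tolist())
```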
By: | Taras Bodnar; Yarema Okhrin; Nestor Parolya |
Abstract: | In this paper we estimate the mean-variance (MV) portfolio in the high-dimensional case using recent results from the theory of random matrices. We construct a linear shrinkage estimator which is distribution-free and is optimal in the sense of maximizing with probability $1$ the asymptotic out-of-sample expected utility, i.e., the mean-variance objective function. Its asymptotic properties are investigated when the number of assets $p$ together with the sample size $n$ tend to infinity such that $p/n \rightarrow c\in (0,+\infty)$. The results are obtained under weak assumptions imposed on the distribution of the asset returns, namely the existence of the fourth moments. Thereafter we perform numerical and empirical studies in which the small- and large-sample behavior of the derived estimator is investigated. The resulting estimator shows significant improvements over naive diversification and is robust to deviations from normality. |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1611.01958&r=ecm |
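A generic linear shrinkage of the sample mean-variance weights towards the naive 1/N portfolio, to fix ideas. The paper derives the optimal, distribution-free shrinkage intensity in the p/n → c asymptotics; the constant intensity used below is only a placeholder.

```python
import numpy as np

# Linear shrinkage of sample mean-variance weights towards the naive 1/N
# portfolio. The shrinkage intensity alpha below is a placeholder constant,
# not the paper's optimal distribution-free intensity derived under p/n -> c.
rng = np.random.default_rng(11)
n, p, gamma = 250, 100, 5.0                 # sample size, assets, risk aversion
R = 0.0005 + 0.02 * rng.standard_normal((n, p))

mu_hat = R.mean(axis=0)
Sigma_hat = np.cov(R, rowvar=False)
w_mv = np.linalg.pinv(Sigma_hat) @ mu_hat / gamma   # sample mean-variance weights
w_naive = np.ones(p) / p

alpha = 0.5                                 # placeholder shrinkage intensity
w = alpha * w_mv + (1 - alpha) * w_naive
print("in-sample mean-variance objective:",
      w @ mu_hat - 0.5 * gamma * w @ Sigma_hat @ w)
```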
By: | Jan de Haan (Division of Corporate Services, IT and Methodology, Statistics Netherlands); Rens Hendriks (Statistics for Development Division, Pacific Community (SPC)); Michael Scholz (University of Graz) |
Abstract: | This paper compares two model-based multilateral price indexes: the time-product dummy (TPD) index and the time dummy hedonic (TDH) index, both estimated by expenditure-share weighted least squares regression. The TPD model can be viewed as the saturated version of the underlying TDH model, and we argue that the regression residuals are "distorted towards zero" due to overfitting. We decompose the ratio of the two indexes in terms of average regression residuals of the new and disappearing items (plus a third component that depends on the change in the matched items' normalized expenditure shares). The decomposition explains under which conditions the TPD index suffers from quality-change bias or, more generally, lack-of-matching bias. An example using scanner data on men's t-shirts illustrates our theoretical framework. |
Keywords: | hedonic regression; multilateral price indexes; new and disappearing items; quality change; scanner data |
JEL: | C43 E31 |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:grz:wpaper:2016-13&r=ecm |
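A sketch of the time-product-dummy side of the comparison above: regress log prices on time and product dummies by expenditure-share weighted least squares and read the index off the time-dummy coefficients. The data are simulated; the TDH index and the paper's residual decomposition are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Time-product-dummy (TPD) index by expenditure-share weighted least squares:
# regress log price on time and item dummies; the period-t index is the
# exponential of the time-dummy coefficient. Simulated data; the TDH index
# and the paper's residual decomposition are not reproduced here.
rng = np.random.default_rng(12)
periods, items = 6, 30
df = pd.DataFrame([(t, i) for t in range(periods) for i in range(items)],
                  columns=["t", "item"])
df["logp"] = 0.02 * df["t"] + 0.1 * df["item"] + 0.05 * rng.standard_normal(len(df))
df["share"] = rng.dirichlet(np.ones(items), periods).ravel()   # expenditure shares per period

fit = smf.wls("logp ~ C(t) + C(item)", data=df, weights=df["share"]).fit()
index = [1.0] + [float(np.exp(fit.params[f"C(t)[T.{t}]"])) for t in range(1, periods)]
print("TPD index:", np.round(index, 4))
```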
By: | Gao, Junbin; Guo, Yi; Wang, Zhiyong |
Abstract: | Traditional neural networks assume vectorial inputs, as the network is arranged as layers of a single line of computing units called neurons. This special structure requires non-vectorial inputs such as matrices to be converted into vectors. This process can be problematic. Firstly, the spatial information among elements of the data may be lost during vectorisation. Secondly, the solution space becomes very large, which demands very special treatment of the network parameters and high computational cost. To address these issues, we propose matrix neural networks (MatNet), which take matrices directly as inputs. Each neuron senses summarised information through a bilinear mapping from lower layer units in exactly the same way as the classic feed forward neural networks. Under this structure, the combination of back propagation and gradient descent can be utilised to obtain network parameters efficiently. Furthermore, it can be conveniently extended for multimodal inputs. We apply MatNet to MNIST handwritten digit classification and image super resolution tasks to show its effectiveness. Without too much tweaking MatNet achieves comparable performance to the state-of-the-art methods in both tasks with considerably reduced complexity. |
Keywords: | Image Super Resolution; Pattern Recognition; Machine Learning; Back Propagation; Neural Networks |
Date: | 2016–11–02 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/15839&r=ecm |
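A minimal forward pass for the bilinear, matrix-input layer described above: the hidden representation is an element-wise nonlinearity applied to U X V' + B, so the matrix structure of the input is preserved. Training by back propagation and the MNIST and super-resolution experiments are not reproduced.

```python
import numpy as np

# Forward pass of a matrix-input ("MatNet"-style) bilinear layer: the hidden
# representation is tanh(U @ X @ V.T + B), so the input keeps its matrix
# structure instead of being vectorised. Training by back propagation and the
# MNIST / super-resolution experiments are not shown here.
rng = np.random.default_rng(13)

def bilinear_layer(X, U, V, B):
    return np.tanh(U @ X @ V.T + B)         # element-wise nonlinearity

X = rng.standard_normal((28, 28))           # e.g. one MNIST-sized image
U = 0.1 * rng.standard_normal((10, 28))     # mixes rows of X
V = 0.1 * rng.standard_normal((12, 28))     # mixes columns of X
B = np.zeros((10, 12))                      # bias matrix

print("hidden representation shape:", bilinear_layer(X, U, V, B).shape)   # (10, 12)
```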