New Economics Papers on Econometrics
By: | Lavergne, Pascal; Nguimkeu, Pierre |
Abstract: | This paper addresses the issue of detecting misspecified conditional moment restrictions (CMR). We propose a new Hausman-type test based on the comparison of an efficient estimator with an inefficient one, both derived by semiparametrically estimating the CMR using different bandwidths. The proposed test statistic is asymptotically chi-squared distributed under correct specification. We propose a general bootstrap procedure for computing critical values in small samples. The testing procedures are easy to implement and simulation results show that they perform well in small samples. An empirical application to a model of female formal labor force participation and wage determination in urban Ghana is provided. (A generic sketch of the Hausman-type statistic follows this entry.) |
Keywords: | Conditional Moment Restrictions, Hypothesis Testing, Smoothing Methods, Bootstrap. |
JEL: | C12 C14 C15 C52 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:31275&r=ecm |
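For the Lavergne and Nguimkeu entry above, the following is a generic textbook form of a Hausman-type statistic, not the authors' exact smoothing-based construction; the estimators and variance matrices are placeholders.

\[
H = (\hat\theta_I - \hat\theta_E)'\,\big[\widehat{V}(\hat\theta_I) - \widehat{V}(\hat\theta_E)\big]^{-}\,(\hat\theta_I - \hat\theta_E) \;\xrightarrow{d}\; \chi^2_q \quad \text{under correct specification,}
\]

where \hat\theta_E is the efficient estimator, \hat\theta_I the inefficient one, [\,\cdot\,]^{-} a generalized inverse, and q the rank of the variance difference. In the paper both estimators come from semiparametric estimation of the CMR with different bandwidths.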
By: | Morais, Joanna; Simioni, Michel; Thomas-Agnan, Christine |
Abstract: | This paper aims to present and compare statistical modeling methods adapted for shares as dependent variables. Shares are characterized by the following constraints: positivity and sum equal to 1. Four types of models satisfy this requirement: multinomial logit models, widely used in discrete choice models in the econometric literature; market-share models from the marketing literature; Dirichlet covariate models; and compositional regression models from the statistical literature. We highlight the properties, similarities and differences between these models, which stem from the assumptions made on the distribution of the data and from the estimation methods. We prove that all these models can be written in an attraction model form, and that they can be interpreted in terms of direct and cross elasticities. An application to the automobile market is presented in which we model brand market shares as a function of media investments in 6 channels in order to measure their impact, controlling for the brands' average price and a scrapping incentive dummy variable. We propose a cross-validation method to choose the best model according to different quality measures. (The attraction model form is sketched after this entry.) |
Keywords: | Multinomial logit; Market-shares models; Compositional data analysis; Dirichlet regression. |
JEL: | C10 C25 C35 C46 D12 M31 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:31265&r=ecm |
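As background to the Morais, Simioni and Thomas-Agnan entry above, a minimal sketch of the attraction-model form mentioned in the abstract, with the multinomial logit attraction shown as one illustrative special case; the symbols are generic placeholders.

\[
s_i = \frac{A_i}{\sum_{j=1}^{J} A_j}, \qquad A_i > 0, \qquad \text{e.g. } A_i = \exp(x_i'\beta_i) \text{ in the multinomial logit case,}
\]

so the fitted shares are positive and sum to one by construction, which is exactly the constraint the four model classes have to satisfy.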
By: | Li, Kunpeng; Li, Qi; Lu, Lina |
Abstract: | Factor models have been widely used in practice. However, an undesirable feature of a high dimensional factor model is that the model has too many parameters. An effective way to address this issue, proposed in a seminal work by Tsai and Tsay (2010), is to decompose the loadings matrix into the product of a high-dimensional known matrix and a low-dimensional unknown matrix; Tsai and Tsay (2010) name these constrained factor models. This paper investigates the estimation and inferential theory of constrained factor models under a large-N and large-T setup, where N denotes the number of cross-sectional units and T the number of time periods. We propose using the quasi maximum likelihood method to estimate the model and investigate the asymptotic properties of the quasi maximum likelihood estimators, including consistency, rates of convergence and limiting distributions. A new statistic is proposed for testing the null hypothesis of constrained factor models against the alternative of standard factor models. Partially constrained factor models are also investigated. Monte Carlo simulations confirm our theoretical results and show that the quasi maximum likelihood estimators and the proposed new statistic perform well in finite samples. We also consider the extension to an approximate constrained factor model where the idiosyncratic errors are allowed to be weakly dependent processes. (A schematic statement of the constrained factor model follows this entry.) |
Keywords: | Constrained factor models, Maximum likelihood estimation, High dimension, Inferential theory. |
JEL: | C1 C38 |
Date: | 2016–12–20 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:75676&r=ecm |
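A schematic statement of the constrained factor model discussed in the Li, Li and Lu entry above; the notation and dimensions are chosen for illustration and need not match the paper's.

\[
x_t = \Lambda f_t + e_t, \qquad \Lambda = M A,
\]

where x_t is the N-dimensional observation, f_t the r-dimensional factor, and the N x r loading matrix \Lambda is the product of a known N x m matrix M and an unknown m x r matrix A with m much smaller than N, so only the low-dimensional A has to be estimated.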
By: | Shih-Kang Chao; Wolfgang K. Härdle; Ming Yuan |
Abstract: | For many applications, analyzing multiple response variables jointly is desirable because of their dependency, and valuable information about the distribution can be retrieved by estimating quantiles. In this paper, we propose a multi-task quantile regression method that exploits the potential factor structure of multivariate conditional quantiles through nuclear norm regularization. We jointly study the theoretical properties and computational aspects of the estimating procedure. In particular, we develop an efficient iterative proximal gradient algorithm for the non-smooth and non-strictly convex optimization problem incurred in our estimating procedure, and derive oracle bounds for the estimation error in a realistic situation where the sample size and the number of iterative steps are both finite. The finite iteration analysis is particularly useful when the matrix to be estimated is large and the computational cost is high. Merits of the proposed methodology are demonstrated through a Monte Carlo experiment and applications to a climatological and a financial study. Specifically, our method provides an objective foundation for spatial extreme clustering, and gives a fresh look at global financial systemic risk. Supplementary materials for this article are available online. (An illustrative form of the penalized objective follows this entry.) |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-057&r=ecm |
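An illustrative form of a nuclear-norm-penalized multi-task quantile regression objective, in the spirit of the Chao, Härdle and Yuan entry above; the exact objective, weighting and choice of quantile levels in the paper may differ.

\[
\hat\Gamma = \arg\min_{\Gamma}\; \frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{m} \rho_\tau\big(Y_{ik} - X_i'\Gamma_{\cdot k}\big) + \lambda \,\|\Gamma\|_*,
\]

where \rho_\tau(u) = u\,(\tau - 1\{u<0\}) is the check loss, \Gamma_{\cdot k} is the coefficient vector for the k-th response, and the nuclear norm \|\Gamma\|_* (the sum of singular values of \Gamma) encourages a low-rank, factor-like structure across the responses.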
By: | Santiago Pereda Fernández (Bank of Italy) |
Abstract: | Sorting and spillovers can create correlation in individual outcomes. In this situation, standard discrete choice estimators cannot consistently estimate the probability of joint and conditional events, and alternative estimators can yield incoherent statistical models or intractable estimators. I propose a random effects estimator that models the dependence among the unobserved heterogeneity of individuals in the same cluster using a parametric copula. This estimator makes it possible to compute joint and conditional probabilities of the outcome variable, and it is statistically coherent. I describe its properties, establish its efficiency relative to standard random effects estimators, and propose a specification test for the copula. The likelihood function for each cluster is an integral whose dimension equals the size of the cluster, which may require high-dimensional numerical integration. To overcome the curse of dimensionality from which methods like Monte Carlo integration suffer, I propose an algorithm that works for Archimedean copulas. I illustrate this approach by analysing labour supply in married couples. (The Archimedean copula form and the cluster-level integral are sketched after this entry.) |
Keywords: | Copula, high-dimensional integration, nonlinear panel data. |
JEL: | C23 C25 J22 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1092_16&r=ecm |
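For the Pereda Fernández entry above, a generic sketch of the two ingredients named in the abstract: the Archimedean copula form exploited by the algorithm and the cluster-level likelihood integral; the notation is illustrative.

\[
C(u_1,\dots,u_d) = \psi\big(\psi^{-1}(u_1) + \cdots + \psi^{-1}(u_d)\big),
\]

for a suitable generator \psi, and a cluster c of size d contributes the likelihood

\[
L_c = \int \prod_{i \in c} \Pr(y_i \mid x_i, \alpha_i)\, dF(\alpha_1,\dots,\alpha_d),
\]

where the joint distribution F of the cluster's unobserved heterogeneity has C as its dependence structure, so the dimension of the integral equals the cluster size.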
By: | Katrien Antonio; Anastasios Bardoutsos; Wilbert Ouburg |
Abstract: | Life insurers, pension funds, health care providers and social security institutions face increasing expenses due to continuing improvements of mortality rates. The actuarial and demographic literature has introduced a myriad of (deterministic and stochastic) models to forecast mortality rates of single populations. This paper presents a Bayesian analysis of two related multi-population mortality models of log-bilinear type, designed for two or more populations. Using a larger set of data, multi-population mortality models allow joint modelling and projection of mortality rates by identifying characteristics shared by all subpopulations as well as sub-population specific effects on mortality. This is important when modelling and forecasting mortality of males and females, or regions within a country, and when dealing with index-based longevity hedges. Our first model is inspired by the two factor Lee & Carter model of Renshaw and Haberman (2003) and the common factor model of Carter and Lee (1992). The second model is the augmented common factor model of Li and Lee (2005). This paper approaches both models in a statistical way, using a Poisson distribution for the number of deaths at a certain age and in a certain time period. Moreover, we use Bayesian statistics to calibrate the models and to produce mortality forecasts. We develop the technicalities necessary for Markov Chain Monte Carlo (MCMC) simulations and provide software implementation (in R) for the models discussed in the paper. Key benefits of this approach are multiple. We jointly calibrate the Poisson likelihood for the number of deaths and the time series models imposed on the time dependent parameters, we enable full allowance for parameter uncertainty, and we are able to handle missing data as well as small sample populations. We compare and contrast results from both models with the results obtained with a frequentist single population approach and a least squares estimation of the augmented common factor model. (A schematic version of the Poisson log-bilinear setting follows this entry.) |
Keywords: | projected life tables, multi-population stochastic mortality models, Bayesian statistics, Poisson regression, one factor Lee & Carter model, two factor Lee & Carter model, Li & Lee model, augmented common factor model |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp1505&r=ecm |
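A schematic version of the Poisson log-bilinear setting and the augmented common factor structure (Li and Lee, 2005) referred to in the Antonio, Bardoutsos and Ouburg entry above; identification constraints and the exact parameterization used in the paper are omitted.

\[
D_{x,t,i} \sim \text{Poisson}\big(E_{x,t,i}\, \mu_{x,t,i}\big), \qquad \ln \mu_{x,t,i} = \alpha_{x,i} + B_x K_t + \beta_{x,i}\, \kappa_{t,i},
\]

where D_{x,t,i} and E_{x,t,i} are deaths and exposures at age x in year t for population i, B_x K_t is the common factor shared by all populations, and \beta_{x,i} \kappa_{t,i} captures the population-specific deviation; time series models for K_t and \kappa_{t,i} deliver the mortality forecasts.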
By: | Aknouche, Abdelhakim; Al-Eid, Eid; Demouche, Nacer |
Abstract: | This paper establishes consistency and asymptotic normality of the generalized quasi-maximum likelihood estimate (GQMLE) for a general class of periodic conditionally heteroskedastic time series models (PCH). In this class of models, the volatility is expressed as a measurable function of the infinite past of the observed process with periodically time-varying parameters, while the innovation of the model is an independent and periodically distributed sequence. In contrast with the aperiodic case, the proposed GQMLE is based on S instrumental density functions, where S is the period of the model, and the corresponding asymptotic variance takes a "sandwich" form. Applications to the periodic GARCH and the periodic asymmetric power GARCH model are given. Moreover, we discuss how to apply the GQMLE to the prediction-of-powers problem in a one-step framework and to PCH models with complex periodic patterns such as high frequency seasonality and non-integer seasonality. (A periodic GARCH(1,1) specification is sketched after this entry.) |
Keywords: | Periodic conditionally heteroskedastic models, periodic asymmetric power GARCH, generalized QML estimation, consistency and asymptotic normality, prediction of powers, high frequency periodicity, non-integer periodicity. |
JEL: | C13 C18 C51 C58 |
Date: | 2016–02–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:75770&r=ecm |
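For concreteness, a periodic GARCH(1,1) specification, the leading member of the PCH class studied in the Aknouche, Al-Eid and Demouche entry above; the general class replaces this recursion by a measurable function of the infinite past.

\[
\epsilon_t = \sigma_t \eta_t, \qquad \sigma_t^2 = \omega_{s(t)} + \alpha_{s(t)}\, \epsilon_{t-1}^2 + \beta_{s(t)}\, \sigma_{t-1}^2,
\]

where s(t) \in \{1,\dots,S\} is the stage of the period to which date t belongs, the parameters vary periodically with s(t), and \{\eta_t\} is an independent, periodically distributed innovation sequence.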
By: | Alexander Razen; Stefan Lang; Judith Santer |
Abstract: | Multiplicative random effects allow for cluster-specific scaling of covariate effects. In many applications with spatial clustering, however, the random effects additionally show some geographical pattern, which usually cannot be sufficiently captured with existing estimation techniques. Relying on Markov random fields, we present a fully Bayesian inference procedure for spatially correlated scaling factors. The estimation is based on highly efficient Markov Chain Monte Carlo (MCMC) algorithms and is smoothly incorporated into the framework of distributional regression. We run a comprehensive simulation study for different response distributions to examine the statistical properties of our approach. We also compare our results to those of a general estimation procedure for independent random scaling factors. Furthermore, we apply the method to German real estate data and show that exploiting the spatial correlation of the scaling factors further improves the performance of the model. (A sketch of a typical Markov random field prior follows this entry.) |
Keywords: | distributional regression, iteratively weighted least squares proposals, MCMC, multiplicative random effects, spatial smoothing, structured additive predictors |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2016-33&r=ecm |
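A minimal sketch of the kind of Gaussian Markov random field prior commonly used for spatially correlated effects in structured additive distributional regression, of the sort the Razen, Lang and Santer entry above relies on; the exact prior placed on the scaling factors in the paper may differ.

\[
p(\gamma \mid \tau^2) \propto \exp\!\Big(-\frac{1}{2\tau^2}\, \gamma' K \gamma\Big),
\]

where \gamma collects the region-specific scaling factors and K is a neighbourhood precision matrix with K_{rr} equal to the number of neighbours of region r and K_{rs} = -1 whenever regions r and s are adjacent, so neighbouring scaling factors are shrunk towards each other.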
By: | Elena Di Bernardino (CEDRIC - Centre d'Etude et De Recherche en Informatique du Cnam - Conservatoire National des Arts et Métiers [CNAM]); Didier Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1) |
Abstract: | An important topic in Quantitative Risk Management concerns the modeling of dependence among risk sources, and in this regard Archimedean copulas appear to be very useful. However, they exhibit symmetry, which is not always consistent with patterns observed in real-world data. We investigate extensions of the Archimedean copula family that make it possible to deal with asymmetry. Our extension is based on the observation that, when applied to the copula, the inverse function of the generator of an Archimedean copula can be expressed as a linear form of generator inverses. We propose to add a distortion term to this linear part, which leads to asymmetric copulas. The parameters of this new class of copulas are grouped within a matrix, thus facilitating usual applications such as level curve determination or estimation. Choices such as sub-model stability help associate each parameter with one bivariate projection of the copula. We also give admissibility conditions for the considered copulas. We propose several examples, including natural multivariate extensions of the Farlie-Gumbel-Morgenstern and Gumbel-Barnett copulas. |
Keywords: | transformations of Archimedean copulas, Archimedean copulas |
Date: | 2016–12–14 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-01147778&r=ecm |
By: | Simon Clinet; Yoann Potiron |
Abstract: | This paper shows how to carry out efficient asymptotic variance reduction when estimating volatility in the presence of stochastic volatility and microstructure noise with the realized kernels (RK) from [Barndorff-Nielsen et al., 2008] and the quasi-maximum likelihood estimator (QMLE) studied in [Xiu, 2010]. To obtain such a reduction, we chop the data into B blocks, compute the RK (or QMLE) on each block, and aggregate the block estimates. As B increases, the ratio of the asymptotic variance over the bound of asymptotic efficiency converges to the corresponding ratio in the parametric version of the problem, i.e. 1.0025 in the case of the fastest RK, Tukey-Hanning 16, and 1 for the QMLE. The finite sample performance of both estimators is investigated in simulations, while empirical work illustrates the gain in practice. (A sketch of the chop-and-aggregate scheme follows this entry.) |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1701.01185&r=ecm |
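A minimal sketch, in Python, of the chop-and-aggregate scheme described in the Clinet and Potiron entry above. The function block_estimator is a hypothetical placeholder for the user's realized kernel or QMLE routine, and summing the block estimates of integrated variance is an assumption consistent with the description, not the authors' code.

import numpy as np

def aggregated_estimate(log_prices, B, block_estimator):
    # Split the intraday log-price series into B consecutive blocks,
    # apply the per-block volatility estimator (e.g. a realized kernel
    # or QMLE routine), and aggregate by summing the block-level
    # estimates of integrated variance.
    blocks = np.array_split(np.asarray(log_prices), B)
    return sum(block_estimator(block) for block in blocks)

# Usage sketch (realized_kernel is a hypothetical per-block estimator
# returning the estimated integrated variance of its block):
# iv_hat = aggregated_estimate(log_prices, B=10, block_estimator=realized_kernel)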
By: | Mike G. Tsionas (Athens University of Economics and Business) |
Abstract: | In this paper, our proposal is to combine univariate ARMA models to produce a variant of the VARMA model that is much more easily implementable and does not involve certain complications. The original model is reduced to a series of univariate problems, and a copula-like term (a mixture-of-normals density) is introduced to handle dependence. Since the univariate problems are easy to handle by MCMC or other techniques, computations can be parallelized easily, and only univariate distribution functions are needed, which are quite often available in closed form. The results from parallel MCMC or other posterior simulators can then be taken together, and simple sampling-resampling can be used to obtain a draw from the exact posterior which includes the copula-like term. We avoid optimization of the parameters entering the copula mixture form, as its parameters are optimized only once before MCMC begins. We apply the new techniques to three types of challenging problems: large time-varying parameter vector autoregressions (TVP-VAR) with nearly 100 macroeconomic variables, multivariate ARMA models with 25 macroeconomic variables, and multivariate stochastic volatility models with 100 stock returns. Finally, we perform impulse response analysis on the data of Giannone, Lenza, and Primiceri (2015) and compare, as they propose, with results from a dynamic stochastic general equilibrium model. |
Keywords: | Vector Autoregressive Moving Average models; Multivariate Stochastic Volatility models; Copula models; Bayesian analysis |
JEL: | C11 C13 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:bog:wpaper:217&r=ecm |
By: | Mike G. Tsionas (Athens University of Economics and Business) |
Abstract: | In this paper we reconsider large Bayesian Vector Autoregressions (BVAR) from the point of view of Bayesian Compressed Regression (BCR). First, we show that there are substantial gains in terms of out-of-sample forecasting from treating the problem as an error-in-variables formulation and estimating the compression matrix instead of using random draws. As computations can be efficiently organized around a standard Gibbs sampler, timings and computational complexity are not affected severely. Second, we extend the Multivariate Autoregressive Index model to the BCR context and show that, again, we have gains in terms of out-of-sample forecasting. The new techniques are applied to U.S. data featuring medium-size, large and huge BVARs. (The generic compressed-regression idea is sketched after this entry.) |
Keywords: | Bayesian Vector Autoregressions; Bayesian Compressed Regression; Error-in-Variables; Forecasting; Multivariate Autoregressive Index model. |
JEL: | C11 C13 |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:bog:wpaper:216&r=ecm |
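As a point of reference for the Tsionas entry above, the generic compressed-regression idea in a (V)AR setting: the high-dimensional predictor vector is replaced by a low-dimensional projection. The notation is illustrative, and the error-in-variables treatment of the compression matrix proposed in the paper is not shown.

\[
y_t = B\,(\Phi x_t) + \varepsilon_t,
\]

where x_t stacks the k lagged predictors, \Phi is an m x k compression matrix with m much smaller than k, and B is a coefficient matrix of conformable dimension; standard Bayesian compressed regression draws \Phi at random, whereas the paper treats it as an object to be estimated.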
By: | McDonough, Ian K. (University of Nevada, Las Vegas); Millimet, Daniel L. (Southern Methodist University) |
Abstract: | Basmann (Basmann, R.L., 1957, A generalized classical method of linear estimation of coefficients in a structural equation. Econometrica 25, 77-83; Basmann, R.L., 1959, The computation of generalized classical estimates of coefficients in a structural equation. Econometrica 27, 72-81) introduced two-stage least squares (2SLS). In subsequent work, Basmann (Basmann, R.L., F.L. Brown, W.S. Dawes and G.K. Schoepfle, 1971, Exact finite sample density functions of GCL estimators of structural coefficients in a leading exactly identifiable case. Journal of the American Statistical Association 66, 122-126) investigated its finite sample performance. Here, we build on this tradition, focusing on 2SLS estimation of a structural model when data on the endogenous covariate are missing for some observations. Many imputation techniques have been proposed in the literature. However, there is little guidance available for choosing among existing techniques, particularly when the covariate being imputed is endogenous. Moreover, because the finite sample bias of 2SLS is not monotonically decreasing in the degree of measurement accuracy, the most accurate imputation method is not necessarily the method that minimizes the bias of 2SLS. Instead, we explore imputation methods designed to increase the first-stage strength of the instrument(s), even if such methods entail lower imputation accuracy. We do so via simulations as well as with an application related to the medium-run effects of birth weight. (The standard 2SLS estimator is written out after this entry.) |
Keywords: | imputation, missing data, instrumental variables, birth weight, childhood development |
JEL: | C36 C51 J13 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10402&r=ecm |
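For reference alongside the McDonough and Millimet entry above, the standard 2SLS estimator whose finite-sample behaviour under imputation is at issue; the notation is generic.

\[
\hat\beta_{2SLS} = (X'P_Z X)^{-1} X'P_Z y, \qquad P_Z = Z(Z'Z)^{-1}Z',
\]

so the endogenous covariate in X is effectively replaced by its first-stage projection on the instruments Z; imputing missing values of that covariate changes both the strength of this first stage and the measurement error entering it, which is why the most accurate imputation need not minimize the 2SLS bias.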
By: | Grzegorz Koloch |
Abstract: | In this paper we estimate a Smets and Wouters (2007) model with shocks following a closed skew normal (csn) distribution, introduced in Gonzalez-Farias et al. (2004), which nests the normal distribution as a special case. In the paper we discuss priors for the model parameters, including the skewness-related parameters of the shocks, i.e. location, scale and skewness parameters. Using data ranging from 1991Q1 to 2012Q2, we estimate the model and recursively verify its out-of-sample forecasting properties for the period 2007Q1-2012Q2, which includes the recent financial crisis, at forecasting horizons from 1 up to 8 quarters ahead. Using an RMSE measure, we compare the forecasting performance of the model with skewed shocks with that of a model estimated using normally distributed shocks. We find that the inclusion of skewness can help forecast some variables (consumption, investment and hours worked) but, on the other hand, results in a deterioration for the others (output, inflation, wages and the short rate). (One common parameterization of the csn density is given after this entry.) |
Keywords: | DSGE, Forecasting, Closed Skew-Normal Distribution |
JEL: | C51 C13 E32 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:sgh:kaewps:2016023&r=ecm |
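One common parameterization of the closed skew normal density, in the spirit of the csn family of Gonzalez-Farias et al. (2004) cited in the Koloch entry above; the exact parameterization adopted for the DSGE shocks may differ, and the symbols are illustrative.

\[
f(y) = \frac{\phi_p(y;\mu,\Sigma)\; \Phi_q\big(D(y-\mu);\nu,\Delta\big)}{\Phi_q\big(0;\nu,\Delta + D\Sigma D'\big)},
\]

where \phi_p and \Phi_q denote the p-variate normal density and the q-variate normal distribution function, \mu and \Sigma are location and scale parameters, and D, \nu, \Delta govern skewness; with D = 0 the two \Phi_q terms cancel and the density reduces to the ordinary normal, which is the nesting property mentioned in the abstract.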
By: | Steven F. Lehrer; R. Vincent Pohl; Kyungchul Song |
Abstract: | Economic theory often predicts that treatment responses may depend on individuals’ characteristics and location on the outcome distribution. Policymakers need to account for such treatment effect heterogeneity in order to efficiently allocate resources to subgroups that can successfully be targeted by a policy. However, when interpreting treatment effects across subgroups and the outcome distribution, inference has to be adjusted for multiple hypothesis testing to avoid an overestimation of positive treatment effects. We propose six new tests for treatment effect heterogeneity that make corrections for the family-wise error rate and that identify subgroups and ranges of the outcome distribution exhibiting economically and statistically significant treatment effects. We apply these tests to individual responses to welfare reform and show that welfare recipients benefit from the reform in a smaller range of the earnings distribution than previously estimated. Our results shed new light on the effectiveness of welfare reform and demonstrate the importance of correcting for multiple testing. (The family-wise error rate is recalled after this entry.) |
JEL: | C12 C21 I38 J22 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:22950&r=ecm |
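As background to the Lehrer, Pohl and Song entry above, the definition of the family-wise error rate and the simplest (Bonferroni) way of controlling it; this baseline is only a point of reference, not one of the six tests proposed in the paper.

\[
\text{FWER} = \Pr\big(\text{reject at least one true null}\big) \;\le\; \sum_{j \in \text{true nulls}} \Pr\big(\text{reject } H_{0j}\big),
\]

so testing each of the m subgroup/quantile hypotheses at level \alpha/m keeps the FWER below \alpha; corrections of this kind are what prevent the overestimation of positive treatment effects that the abstract warns about.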
By: | Hruschka, Harald |
Abstract: | We compare the performance of several hidden variable models, namely binary factor analysis, topic models (latent Dirichlet allocation, the correlated topic model), the restricted Boltzmann machine and the deep belief net. We briefly present these models and outline their estimation. Performance is measured by log likelihood values of these models for a holdout data set of market baskets. For each model we estimate and evaluate variants with increasing numbers of hidden variables. Binary factor analysis vastly outperforms topic models. The restricted Boltzmann machine and the deep belief net, in turn, attain a similar performance advantage over binary factor analysis. For each model we interpret the relationships between the most important hidden variables and observed category purchases. To demonstrate managerial implications we compute the relative basket size increase due to promoting each category for the better performing models. Recommendations based on the restricted Boltzmann machine and the deep belief net not only have lower uncertainty due to their statistical performance, they also have more managerial appeal than those derived from binary factor analysis. The impressive performance of the restricted Boltzmann machine and the deep belief net suggests continuing this research by extending these models, e.g., by including marketing variables as predictors. (The restricted Boltzmann machine's energy function is given after this entry.) |
Keywords: | Marketing; Market Basket Analysis; Factor Analysis; Topic Models; Restricted Boltzmann Machine; Deep Belief Net |
Date: | 2016–12–15 |
URL: | http://d.repec.org/n?u=RePEc:bay:rdwiwi:34994&r=ecm |
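For reference alongside the Hruschka entry above, the standard energy function of a binary restricted Boltzmann machine; the mapping of baskets to visible units is taken from the abstract, while the exact architecture and estimation details of the paper are not reproduced.

\[
E(v,h) = -a'v - b'h - v'Wh, \qquad p(v,h) = \frac{\exp\{-E(v,h)\}}{Z},
\]

where v is the binary vector of observed category purchases in a basket, h the binary vector of hidden variables, W the weight matrix connecting them, a and b bias vectors, and Z the normalizing constant; a deep belief net stacks several such hidden layers.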
By: | Førsund, Finn (Dept. of Economics, University of Oslo); Krivonozhko, Vladimir W (National University of Science and Technology «MISiS», Moscow, Russia); Lychev, Andrey V. (National University of Science and Technology «MISiS», Moscow, Russia) |
Abstract: | Inadequate results may appear in DEA models, as in any other mathematical model. In the DEA literature, several methods have been proposed to deal with these difficulties. In our previous paper, we introduced the notion of terminal units. It was also substantiated that only terminal units form necessary and sufficient sets of units for smoothing the frontier. Moreover, some relationships were established between terminal units and other sets of units that have been proposed for improving the frontier. In this paper we develop a general algorithm for smoothing the frontier. The construction of the algorithm is based on the notion of terminal units. Our theoretical results are verified by computational results using real-life data sets and are also confirmed by graphical examples. |
Keywords: | Data Envelopment Analysis (DEA); Terminal units; Anchor units; Exterior units |
JEL: | C44 C61 D24 |
Date: | 2016–09–21 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2016_011&r=ecm |
By: | Pötscher, Benedikt M.; Preinerstorfer, David |
Abstract: | Autocorrelation robust tests are notorious for suffering from size distortions and power problems. We investigate under which conditions the size of autocorrelation robust tests can be controlled by an appropriate choice of critical value. |
Keywords: | Autocorrelation robust tests, size control |
JEL: | C22 |
Date: | 2016–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:75657&r=ecm |
By: | Lajos Barath (Institute of Economics, Centre for Economic and Regional Studies, Hungarian Academy of Sciences); Heinrich Hockmann (Department Agricultural Markets - Leibniz Institute of Agricultural Development in Transition Economies (IAMO)) |
Abstract: | Existing studies of agricultural production largely neglect technology heterogeneity. However, the assumption of homogeneous production may result in inadequate policy implications. There is a growing literature on this issue. In this paper we contribute to this literature by modelling heterogeneous technologies and their impact on technological parameters and technical efficiency using a reformulated Random Parameter Model. Our approach is based on the model developed by Alvarez et al. (2004). However, the original version of this model faces one crucial econometric problem: the assumption of independence between technical inefficiency and the input variables does not hold, hence the estimated results are not necessarily consistent. Therefore we reformulate the model to allow for more consistent estimation. Additionally, we examine the importance of fulfilling theoretical consistency: monotonicity and quasi-concavity. In order to fulfil these criteria we apply constrained maximum likelihood estimation; more specifically, we build linear and non-linear constraints into the model and force it to yield theoretically consistent results, not only at the mean but also at different approximation points. For the empirical analysis we use farm level data from the Hungarian FADN Database. The results show that considering technological differences is important. According to model selection criteria, the modified Alvarez model with constraints is the preferred specification. Additionally, the results imply that taking the effect of heterogeneous technologies on production potential and efficiency into account is crucial in order to obtain adequate policy implications. |
Keywords: | technical efficiency, technological heterogeneity, Random Parameter Model, theoretical consistency, monotonicity, quasi-concavity, Hungarian agriculture |
JEL: | C5 D24 Q12 |
Date: | 2016–12 |
URL: | http://d.repec.org/n?u=RePEc:has:discpr:1636&r=ecm |
By: | Kirstin Hubrich; Frauke Skudelny |
Abstract: | The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short-term forecasting models. Our analysis shows that there is considerable variation in the relative performance of the different models over time. To take that into account, we suggest employing performance-based forecast combination methods, in particular one that puts more weight on recent forecast performance. We compare such an approach with equal-weight forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time-varying weights also provide a weighting of the economic interpretations of the forecasts stemming from the different models. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how the forecast accuracy of the combination methods differs between pre-crisis times, the period after the global financial crisis, and the full evaluation period including the global financial crisis with its extraordinary volatility in inflation. Overall, we find, first, that forecast combination helps hedge against bad forecast performance and, second, that performance-based weighting tends to outperform simple averaging. (A sketch of a performance-based weighting scheme follows this entry.) |
Keywords: | Forecasting; Euro area inflation; Forecast combinations; Forecast evaluation |
JEL: | C32 C52 C53 E31 E37 |
Date: | 2016–08 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2016-104&r=ecm |
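A minimal sketch of a performance-based combination scheme of the kind described in the Hubrich and Skudelny entry above, with more weight on recent forecast performance; the discounting scheme is an illustrative assumption, not necessarily the weighting used in the paper.

\[
\hat\pi^{\,comb}_{t+h|t} = \sum_{i=1}^{M} w_{i,t}\, \hat\pi^{(i)}_{t+h|t}, \qquad w_{i,t} = \frac{\Big(\sum_{s \le t} \delta^{\,t-s}\, e_{i,s}^2\Big)^{-1}}{\sum_{j=1}^{M} \Big(\sum_{s \le t} \delta^{\,t-s}\, e_{j,s}^2\Big)^{-1}},
\]

where e_{i,s} is model i's past forecast error and 0 < \delta < 1 discounts older errors, so models that have forecast well recently receive larger weights; setting all w_{i,t} = 1/M recovers the equal-weight benchmark against which the performance-based weights are compared.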