NEP: New Economics Papers on Econometrics
By: | Xuexin Wang; Yixiao Sun |
Abstract: | We propose a simple asymptotic F-distributed Portmanteau test for zero autocorrelations in an otherwise dependent time series. By employing the orthonormal series variance estimator of the variance matrix of sample autocovariances, our test statistic follows an F distribution asymptotically under fixed-smoothing asymptotics. The asymptotic F theory accounts for the estimation error in the underlying variance estimator, which the asymptotic chi-squared theory ignores. Monte Carlo simulations reveal that the F approximation is much more accurate than the corresponding chi-squared approximation in finite samples. Compared with the nonstandard test proposed by Lobato (2001), the asymptotic F test is as easy to use as the chi-squared test: there is no need to obtain critical values by simulation. Further, Monte Carlo simulations indicate that Lobato’s (2001) nonstandard test tends to be heavily undersized under the null and suffers from substantial power loss under the alternatives. |
Keywords: | Lack of autocorrelations; Portmanteau test; Fixed-smoothing asymptotics; F distribution; Orthonormal series variance estimator |
JEL: | C12 C22 |
Date: | 2019–05–24 |
URL: | http://d.repec.org/n?u=RePEc:wyi:wpaper:002407&r=all |
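A minimal univariate sketch of the mechanics, under stated assumptions: a cosine orthonormal basis, illustrative smoothing choices q = 5 and K = 14, and a Wald statistic built on sample autocovariances, rescaled so that the fixed-smoothing reference distribution is F(q, K - q + 1). This illustrates the approach described in the abstract; it is not the authors' code.

```python
import numpy as np
from scipy import stats

def os_portmanteau_F(x, q=5, K=14):
    """Portmanteau test of zero autocorrelations up to lag q with an
    orthonormal-series (cosine basis) variance estimator and the
    fixed-smoothing F approximation."""
    x = np.asarray(x, float)
    xc = x - x.mean()
    n = len(x)
    # moment functions u_t = (x_t - xbar)(x_{t-j} - xbar), j = 1..q
    U = np.column_stack([xc[q:] * xc[q - j:n - j] for j in range(1, q + 1)])
    m = U.shape[0]
    gamma = U.mean(axis=0)                     # sample autocovariances
    Uc = U - gamma
    r = (np.arange(1, m + 1) - 0.5) / m
    V = np.zeros((q, q))                       # OS long-run variance estimate
    for k in range(1, K + 1):
        phi = np.sqrt(2.0) * np.cos(k * np.pi * r)
        lam = Uc.T @ phi / np.sqrt(m)
        V += np.outer(lam, lam)
    V /= K
    W = m * gamma @ np.linalg.solve(V, gamma)  # Wald statistic
    F = (K - q + 1) / (K * q) * W              # F-scaled statistic
    return F, stats.f.sf(F, q, K - q + 1)

rng = np.random.default_rng(0)
print(os_portmanteau_F(rng.standard_normal(500)))   # white noise: H0 holds
```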
By: | Chang, Jinyuan; Qiu, Yumou; Yao, Qiwei; Zou, Tao |
Abstract: | We consider statistical inference for high-dimensional precision matrices. Specifically, we propose a data-driven procedure for constructing a class of simultaneous confidence regions for a subset of the entries of a large precision matrix. The confidence regions can be applied to test for specific structures of a precision matrix and to recover its nonzero components. We first construct an estimator for the precision matrix via penalized node-wise regression. We then develop a Gaussian approximation to the distribution of the maximum difference between the estimated and the true precision coefficients. A computationally feasible parametric bootstrap algorithm is developed to implement the proposed procedure. The theoretical justification is established under a setting that allows temporal dependence among observations; the proposed procedure is therefore applicable both to independent and identically distributed data and to time series data. Numerical results with both simulated and real data confirm the good performance of the proposed method. |
Keywords: | Bias correction; Dependent data; High dimensionality; Kernel estimation; Parametric bootstrap; Precision matrix |
JEL: | C12 C13 C15 |
Date: | 2018–09–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:87513&r=all |
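A sketch of the node-wise regression first step only, using scikit-learn's Lasso with a hypothetical fixed penalty "lam" (the paper's tuning is data-driven, and the Gaussian-approximation and parametric-bootstrap inference steps are not shown):

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_precision(X, lam=0.1):
    """Node-wise Lasso estimator of a precision matrix: regress each column
    on all others, then rescale the fitted coefficients by an estimate of
    the residual variance."""
    n, p = X.shape
    Omega = np.zeros((p, p))
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        fit = Lasso(alpha=lam, max_iter=5000).fit(X[:, idx], X[:, j])
        resid = X[:, j] - fit.predict(X[:, idx])
        tau2 = resid @ resid / n + lam * np.abs(fit.coef_).sum()
        Omega[j, j] = 1.0 / tau2
        Omega[j, idx] = -fit.coef_ / tau2
    return (Omega + Omega.T) / 2      # symmetrize

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
print(np.round(nodewise_precision(X), 2)[:3, :3])
```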
By: | Shakeeb Khan (Boston College); Fu Ouyang (University of Queensland); Elie Tamer (Harvard University) |
Abstract: | In this paper we explore inference on regression coefficients in semiparametric multinomial response models. We consider cross-sectional settings as well as both static and dynamic panel settings, focusing throughout on point inference under sufficient conditions for point identification. The approach to identification uses a matching insight in all three models and relies on variation in regressors: with cross-sectional data we match across individuals, while with panel data we match within individuals over time. Across models, IIA is not assumed, as the unobserved errors across choices are allowed to be arbitrarily correlated. For the cross-sectional model, estimation is based on a localized rank objective function, analogous to that used in Abrevaya, Hausman, and Khan (2010), and presents a generalization of existing approaches. In panel data settings, rates of convergence are shown to exhibit a curse of dimensionality in the number of alternatives. The results for the dynamic panel data model generalize the work of Honoré and Kyriazidou (2000) to cover the multinomial case. A simulation study establishes adequate finite-sample properties of our new procedures, and we apply our estimators to a scanner panel data set. |
Keywords: | Multinomial choice, Rank Estimation, Adaptive Inference, Dynamic Panel Data |
JEL: | C22 C23 C25 |
Date: | 2019–05–12 |
URL: | http://d.repec.org/n?u=RePEc:boc:bocoec:980&r=all |
By: | Federico Belotti (CEIS & DEF University of Rome "Tor Vergata"); Giuseppe Ilardi (Bank of Italy); Andrea Piano Mortari (CEIS, University of Rome "Tor Vergata") |
Abstract: | This paper proposes a stochastic frontier panel data model in which unit-specific inefficiencies are spatially correlated. The model has three important features simultaneously: i) the total inefficiency of a productive unit depends on its own inefficiency and on the inefficiency of its neighbors; ii) the spatially correlated and time-varying inefficiency is disentangled from time-invariant unobserved heterogeneity in a panel data model à la Greene (2005); iii) systematic differences in inefficiency can be explained using exogenous determinants. We propose to estimate both the "true" fixed-effects and random-effects variants of the model using a feasible simulated composite maximum likelihood approach. The finite-sample behavior of the proposed estimators is investigated through a set of Monte Carlo experiments. Our simulation results suggest that the estimators are consistent and exhibit good finite-sample properties, even in small samples. |
Keywords: | Stochastic frontiers model; Spatial inefficiency; Panel data; Fixed-effects model |
JEL: | C13 C23 C15 |
Date: | 2019–05–30 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:459&r=all |
By: | Marc Hallin; Luis K. Hotta; João H. G Mazzeu; Carlos Cesar Trucios-Maza; Pedro L. Valls Pereira; Mauricio Zevallos |
Abstract: | Based on a General Dynamic Factor Model with infinite-dimensional factor space, we develop new estimation and forecasting procedures for conditional covariance matrices in high-dimensional time series. In Monte Carlo experiments, our approach outperforms many alternative methods. The new procedure is used to construct minimum variance portfolios for a high-dimensional panel of assets, and it achieves better out-of-sample portfolio performance than existing alternative procedures. |
Keywords: | Dimension reduction, Large panels, High-dimensional time series, Minimum variance portfolio, Volatility, Multivariate GARCH |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/288066&r=all |
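For intuition about why factor structure helps here, a toy static principal-components factor covariance estimator is sketched below, feeding a minimum-variance portfolio; the paper's procedure is a dynamic GDFM with infinite-dimensional factor space and conditional (GARCH-type) dynamics, which this sketch does not attempt to replicate.

```python
import numpy as np

def factor_covariance(R, k=3):
    """Static k-factor covariance estimate: Sigma ~ B B' + diag(idio),
    with loadings B taken from the top principal components."""
    S = np.cov(R, rowvar=False)
    w, V = np.linalg.eigh(S)
    top = np.argsort(w)[::-1][:k]
    B = V[:, top] * np.sqrt(w[top])          # factor loadings
    common = B @ B.T
    idio = np.clip(np.diag(S) - np.diag(common), 1e-8, None)
    return common + np.diag(idio)

rng = np.random.default_rng(2)
R = rng.standard_normal((500, 20))           # toy panel of asset returns
Sigma = factor_covariance(R, k=3)
w = np.linalg.solve(Sigma, np.ones(20))      # min-variance weights: Sigma^-1 1
w /= w.sum()
print(w.round(3))
```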
By: | Anders Rønn-Nielsen (Center for Statistics, Department of Finance, Copenhagen Business School); Dorte Kronborg (Center for Statistics, Department of Finance, Copenhagen Business School); Mette Asmild (Department of Food and Resource Economics, University of Copenhagen) |
Abstract: | When benchmarking production units by non-parametric methods like data envelopment analysis (DEA), an assumption has to be made about the returns to scale of the underlying technology. Moreover, it is often also relevant to compare the frontiers across samples of producers. Until now, no exact tests for examining returns-to-scale assumptions in DEA, or for testing equality of frontiers, have been available. The few existing tests are based on asymptotic theory relying on large sample sizes, whereas situations with relatively small samples are often encountered in practical applications. In this paper we propose three novel tests based on permutations. The tests are easily implementable from the algorithms provided and give exact significance probabilities, as they are not based on asymptotic properties. The first of the proposed tests is a test of the hypothesis of constant returns to scale in DEA. The others are tests for general frontier differences and for whether the production possibility sets are, in fact, nested. The theoretical advantages of permutation tests are that they are appropriate for small samples and have the correct size. Simulation studies show that the proposed tests do, indeed, have the correct size, and furthermore higher power than the existing alternative tests based on asymptotic theory. |
Keywords: | Data Envelopment Analysis (DEA), returns to scale, equality of production frontiers, exact tests, permutations |
JEL: | C12 C14 C44 C46 C61 D |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:foi:wpaper:2019_04&r=all |
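A permutation-test skeleton in the spirit of the abstract, assuming the textbook one-input/one-output CRS special case of DEA so the efficiency score is a simple ratio; the statistic and the exact-p-value construction are generic, while the authors' tests use full DEA technologies and different statistics.

```python
import numpy as np

def crs_efficiency(x, y):
    """CRS efficiency with one input and one output: each unit's
    output-input ratio relative to the best ratio in the reference set."""
    ratio = y / x
    return ratio / ratio.max()

def permutation_frontier_test(x1, y1, x2, y2, n_perm=999, seed=0):
    """Exact permutation test of equal frontiers: compare mean efficiency
    against the pooled frontier and permute the group labels."""
    rng = np.random.default_rng(seed)
    x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
    g = np.r_[np.zeros(len(x1), bool), np.ones(len(x2), bool)]
    e = crs_efficiency(x, y)              # efficiencies vs. pooled frontier
    stat = lambda gr: abs(e[gr].mean() - e[~gr].mean())
    obs = stat(g)
    count = sum(stat(rng.permutation(g)) >= obs for _ in range(n_perm))
    return obs, (1 + count) / (n_perm + 1)

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(1, 2, 50), rng.uniform(1, 2, 50)
y1 = x1 * rng.uniform(0.6, 1.0, 50)       # same technology in both groups
y2 = x2 * rng.uniform(0.6, 1.0, 50)
print(permutation_frontier_test(x1, y1, x2, y2))   # p should be large here
```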
By: | Huber, Florian (University of Salzburg); Koop, Gary (University of Strathclyde); Onorante, Luca (European Central Bank) |
Abstract: | Time-varying parameter (TVP) models have the potential to be over-parameterized, particularly when the number of variables in the model is large. Global-local priors are increasingly used to induce shrinkage in such models. But the estimates produced by these priors can still have appreciable uncertainty. Sparsification has the potential to remove this uncertainty and improve forecasts. In this paper, we develop computationally simple methods which both shrink and sparsify TVP models. In a simulated data exercise we show the benefits of our shrink-then-sparsify approach in a variety of sparse and dense TVP regressions. In a macroeconomic forecast exercise, we find that our approach substantially improves forecast performance relative to shrinkage alone. |
Keywords: | Sparsity; shrinkage; hierarchical priors; time varying parameter regression |
JEL: | C11 C30 D31 E30 |
Date: | 2019–05–26 |
URL: | http://d.repec.org/n?u=RePEc:ris:sbgwpe:2019_002&r=all |
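One way to make the shrink-then-sparsify idea concrete is the signal-adaptive variable selector (SAVS) of Ray and Bhattacharya (2018), a standard post-shrinkage sparsifier in this literature; treating it as the paper's exact operator is an assumption, and in a TVP model it would be applied to the coefficient path period by period.

```python
import numpy as np

def savs(beta_hat, X):
    """Signal-adaptive variable selector: soft-threshold each shrunk
    coefficient with a penalty inversely proportional to its squared
    magnitude, so small coefficients snap exactly to zero."""
    norms2 = (X ** 2).sum(axis=0)
    mu = 1.0 / np.maximum(np.abs(beta_hat), 1e-12) ** 2
    return np.sign(beta_hat) / norms2 * np.maximum(
        0.0, np.abs(beta_hat) * norms2 - mu)

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 4))
beta_shrunk = np.array([0.90, 0.02, -0.45, 0.001])   # e.g. posterior means
print(savs(beta_shrunk, X).round(3))                 # -> sparse vector
```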
By: | Shakeeb Khan (Boston College); Maria Ponomareva (Northern Illinois University); Elie Tamer (Harvard University) |
Abstract: | We analyze identification in dynamic econometric models of binary choice with fixed effects under general conditions. This class of models is often used in the literature to distinguish between state dependence (referred to in the recent literature as switching costs, inertia or stickiness) and heterogeneity. We first characterize the sharp identified set for the parameters of a dynamic panel binary choice model under conditional stationarity. The identified set can be characterized as a union of convex polyhedra. We conduct the same exercise under the stronger assumption of conditional exchangeability and establish its incremental identifying power. We also extend our identification approach to models with more time periods, and provide sufficient conditions for point identification. For inference in cases with discrete regressors, we provide an approach to constructing confidence sets for the identified sets using a linear program that is simple to implement. The paper then provides simulation-based evidence on the size and shape of the identified sets in varying designs to illustrate the informational content of different assumptions. We also illustrate the inference approach using a data set on women’s labor supply decisions. |
Keywords: | Binary Choice, Dynamic Panel Data, Partial Identification |
JEL: | C22 C23 C25 |
Date: | 2019–03–15 |
URL: | http://d.repec.org/n?u=RePEc:boc:bocoec:979&r=all |
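The LP-based inference can be sketched as a feasibility check: a candidate parameter belongs to one convex polyhedron of the identified set if a linear program in the remaining (nuisance) components is feasible. The matrices below are made-up placeholders, not the paper's moment conditions.

```python
import numpy as np
from scipy.optimize import linprog

def profiled_membership(theta1, A1, A2, b):
    """Is there a nuisance vector t2 with A1 theta1 + A2 t2 <= b?
    Solved as an LP feasibility problem (zero objective); gridding theta1
    and keeping the feasible points traces out one polyhedral piece."""
    res = linprog(c=np.zeros(A2.shape[1]),
                  A_ub=A2, b_ub=b - A1 @ theta1,
                  bounds=[(None, None)] * A2.shape[1], method="highs")
    return res.status == 0        # status 0: a feasible optimum was found

A1 = np.array([[1.0], [-1.0]])    # placeholder constraint matrices
A2 = np.array([[1.0], [1.0]])
print(profiled_membership(np.array([0.3]), A1, A2, b=np.array([1.0, 1.0])))
```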
By: | Harvey, A.; Palumbo, D. |
Abstract: | This paper sets up a statistical framework for modeling realised volatility (RV) using a Dynamic Conditional Score (DCS) model. It first shows how a preliminary analysis of RV, based on fitting a linear Gaussian model to its logarithm, confirms the presence of long-memory effects and suggests a two-component dynamic specification. It also indicates a weekly pattern in the data, and an analysis of squared residuals suggests the presence of heteroscedasticity. Furthermore, working with a Gaussian model in logarithms facilitates a comparison with the popular Heterogeneous Autoregression (HAR), which is a simple way of accounting for long memory in RV. Fitting the two-component specification with leverage and a day-of-the-week effect is then carried out directly on RV with a Generalised Beta of the second kind (GB2) conditional distribution. Estimating log RV with an Exponential Generalised Beta of the second kind (EGB2) distribution gives the same result. The EGB2 model is then fitted with heteroscedasticity and its forecasting performance compared with that of HAR. There is a small gain from using the DCS model. However, its main attraction is that it gives a comprehensive description of the properties of the data and yields multi-step forecasts of the conditional distribution of RV. |
Keywords: | EGARCH, GB2 distribution, HAR model, heteroscedasticity, long memory, weekly volatility pattern |
Date: | 2019–05–30 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1950&r=all |
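For reference, the HAR benchmark mentioned in the abstract is just a regression of RV on its daily, weekly (5-day) and monthly (22-day) averages (Corsi, 2009); a minimal sketch on a simulated log-RV series:

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors: yesterday's RV plus its weekly (5-day) and
    monthly (22-day) averages, to forecast RV one step ahead."""
    rv = np.asarray(rv, float)
    rows, y = [], []
    for t in range(22, len(rv)):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(5)
lrv = np.zeros(600)                       # toy persistent log-RV series
for t in range(1, 600):
    lrv[t] = 0.9 * lrv[t - 1] + 0.3 * rng.standard_normal()
X, y = har_design(lrv)                    # fit in logs, as in the comparison
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(3))                      # [const, daily, weekly, monthly]
```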
By: | Licht, Adrian; Escribano Sáez, Álvaro; Blazsek, Szabolcs Istvan |
Abstract: | We study co-integration and common trends in time series variables by introducing a new nonlinear multivariate dynamic conditional score (DCS) model that is robust to outliers. The new model is named the t-QVARMA (quasi-vector autoregressive moving average) model; it is a score-driven location model for the multivariate t-distribution. Classical VAR models of co-integrated time series are estimated using the outlier-sensitive vector error correction model (VECM) representation. In t-QVARMA, the I(0) and I(1) components of the variables are separated in a way that is similar to the Granger representation of VAR models. We show that a limiting special case of t-QVARMA, named Gaussian-QVARMA, is a Gaussian-VARMA specification with I(0) and I(1) components. For t-QVARMA, we present the reduced-form and structural-form representations and the impulse response function (IRF). As an application, we study the relationship between the federal funds rate and the United States (US) inflation rate for the period July 1954 to January 2019, since those variables are I(1) and co-integrated. We present the outlier-discounting property of t-QVARMA and compare the estimates of the t-QVARMA, Gaussian-QVARMA and Gaussian-VAR alternatives. We find that the statistical performance of t-QVARMA is superior to that of the classical Gaussian-VAR model. |
Keywords: | Quasi-vector autoregressive moving average (QVARMA) model; Common trends; Co-integration; Robustness to outliers; Multivariate dynamic conditional score (DCS) models |
Date: | 2019–05–19 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:28451&r=all |
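The outlier-discounting mechanism can be seen in a univariate score-driven location filter with Student-t innovations, where the updating weight shrinks to zero for large errors; this is a one-dimensional caricature of the multivariate t-QVARMA, with illustrative parameter values.

```python
import numpy as np

def dcs_t_location_filter(y, omega=0.0, phi=0.95, kappa=0.1, nu=5.0, sigma2=1.0):
    """Score-driven location filter under a Student-t likelihood: the
    update u = e / (1 + e^2/(nu*sigma2)) is bounded in the prediction
    error e, so outliers are automatically discounted."""
    mu = np.empty(len(y) + 1)
    mu[0] = y[0]
    for t, yt in enumerate(y):
        e = yt - mu[t]
        u = e / (1.0 + e * e / (nu * sigma2))   # scaled score
        mu[t + 1] = omega + phi * mu[t] + kappa * u
    return mu

rng = np.random.default_rng(6)
y = np.cumsum(rng.standard_normal(300) * 0.1) + rng.standard_t(3, 300)
print(dcs_t_location_filter(y)[-5:].round(2))
```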
By: | Randolph Luca Bruno (School of Slavonic and East European Studies, University College London); Laura Magazzini (Department of Economics (University of Verona)); Marco Stampini (Social Protection and Health Division, Inter-American Development Bank) |
Abstract: | We devise an innovative methodology that exploits information from both singleton and longitudinal observations in the estimation of fixed-effects panel data models. The approach can be applied to join cross-sectional and longitudinal data in order to increase estimation efficiency, while properly tackling the potential bias due to unobserved individual characteristics. Estimation is framed within the GMM context, and we assess its properties by means of Monte Carlo simulations. The method is applied to an unbalanced panel of firm data to estimate a Total Factor Productivity regression based on the well-known Business Environment and Enterprise Performance Survey (BEEPS) database. Under the assumption that the relationship between observed and unobserved characteristics is homogeneous across singleton and longitudinal observations (or across different samples), information from longitudinal data is used to "clean" the bias in the unpaired sample of singletons. This reduces the standard errors of the estimation (in our application, by approximately 8-9 percent) and has the potential to increase the significance of the coefficients. |
Keywords: | Panel Data, Efficient Estimation, Unobserved Heterogeneity, GMM |
JEL: | C23 C33 C51 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:ver:wpaper:04/2018&r=all |
By: | Oliver Wichert; I. Gaia Becheri; Feike C. Drost; Ramon van den Akker |
Abstract: | This paper considers unit-root tests in large-n and large-T heterogeneous panels with cross-sectional dependence generated by unobserved factors. We reconsider the two prevalent approaches in the literature: that of Moon and Perron (2004) and the PANIC setup proposed in Bai and Ng (2004). While these have been treated as completely different setups, we show that, in the case of Gaussian innovations, the frameworks are asymptotically equivalent in the sense that both experiments are locally asymptotically normal (LAN) with the same central sequence. Using Le Cam's theory of statistical experiments, we determine the local asymptotic power envelope and derive an optimal test that applies jointly in both setups. We show that the popular Moon and Perron (2004) and Bai and Ng (2010) tests only attain the power envelope when there is no heterogeneity in the long-run variance of the idiosyncratic components. The new test is asymptotically uniformly most powerful irrespective of possible heterogeneity. Moreover, it turns out that for any test satisfying a mild regularity condition, the size and local asymptotic power are the same under both data-generating processes. Thus, applied researchers do not need to decide on one of the two frameworks to conduct unit-root tests. Monte Carlo simulations corroborate our asymptotic results and document significant gains in finite-sample power when the variances of the idiosyncratic shocks differ substantially among the cross-sectional units. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.11184&r=all |
By: | Gentry Johnson; Brian Quistorff; Matt Goldman |
Abstract: | When pre-processing observational data via matching, we seek to approximate each unit with maximally similar peers that had an alternative treatment status, essentially replicating a randomized block design. However, as one considers a growing number of continuous features, a curse of dimensionality applies, making asymptotically valid inference impossible (Abadie and Imbens, 2006). The alternative of ignoring plausibly relevant features is certainly no better, and the resulting trade-off substantially limits the application of matching methods to "wide" datasets. Instead, Li and Fu (2017) recast the problem of matching in a metric-learning framework that maps features to a low-dimensional space, facilitating "closer matches" while still capturing important aspects of unit-level heterogeneity. However, that method lacks key theoretical guarantees and can produce inconsistent estimates in cases of heterogeneous treatment effects. Motivated by a straightforward extension of existing results in the matching literature, we present alternative techniques that learn latent matching features through either multilayer perceptrons (MLPs) or siamese neural networks trained on a carefully selected loss function. We benchmark the resulting methods in simulations as well as on two experimental data sets, including the canonical NSW worker training program data set, and find superior performance of the neural-net-based methods. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.12020&r=all |
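For context, the final matching step that the learned features feed into is ordinary nearest-neighbour matching; below is a minimal ATT estimator on given features X (the metric-learning networks that would produce such features are not sketched).

```python
import numpy as np

def att_one_nn(X, y, d):
    """ATT via 1-nearest-neighbour matching: for each treated unit, find
    the closest control in feature space and average the outcome gaps."""
    Xt, Xc = X[d == 1], X[d == 0]
    yt, yc = y[d == 1], y[d == 0]
    effects = []
    for i in range(len(Xt)):
        dist = ((Xc - Xt[i]) ** 2).sum(axis=1)
        effects.append(yt[i] - yc[dist.argmin()])
    return float(np.mean(effects))

rng = np.random.default_rng(7)
X = rng.standard_normal((400, 3))
d = (rng.random(400) < 0.5).astype(int)
y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * d + rng.standard_normal(400)
print(att_one_nn(X, y, d))   # close to the true effect of 2 in this toy DGP
```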
By: | Guofang Huang (Krannert School of Management, Purdue University); K. Sudhir (Cowles Foundation & School of Management, Yale University; School of Management, Yale University) |
Abstract: | We propose an instrumental-variable (IV) approach to estimate the causal effect of service satisfaction on customer loyalty, exploiting a common source of randomness in the assignment of service employees to customers in service queues. The approach can be applied at no incremental cost by using the routine repeated cross-sectional customer survey data collected by firms. The IV approach addresses multiple sources of bias that pose challenges in estimating the causal effect from cross-sectional data: (i) the upward bias from common-method variance due to the joint measurement of service satisfaction and loyalty intent in surveys; (ii) the attenuation bias caused by measurement errors in service satisfaction; and (iii) the omitted-variable bias, which may go in either direction. In contrast to the common concern about upward common-method bias in estimates from cross-sectional survey data, we find that ordinary least squares (OLS) substantially underestimates the causal effect, suggesting that the downward bias due to measurement errors and/or omitted variables is dominant. The underestimation is even more significant with a behavioral measure of loyalty, where there is no common-method bias. This downward bias leads to significant underestimation of the positive profit impact of improving service satisfaction and can lead to under-investment by firms in service satisfaction. Finally, we find that the causal effect of service satisfaction on loyalty is greater for more difficult types of services. |
Keywords: | Service satisfaction, Customer loyalty, Common-method bias, Measurement error, Cross-sectional data |
JEL: | M31 |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2177&r=all |
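The attenuation-bias story is easy to reproduce in a toy simulation: with a single regressor and instrument, 2SLS reduces to a ratio of covariances, and OLS on the error-ridden satisfaction measure is biased toward zero. Variable names and the DGP below are invented for illustration.

```python
import numpy as np

def tsls(y, x, z):
    """Two-stage least squares with one regressor and one instrument:
    beta = cov(z, y) / cov(z, x)."""
    zc = z - z.mean()
    return (zc @ (y - y.mean())) / (zc @ (x - x.mean()))

rng = np.random.default_rng(8)
n = 5000
quality = rng.standard_normal(n)               # random employee draw: instrument
sat_true = quality + rng.standard_normal(n)    # true satisfaction
sat_obs = sat_true + rng.standard_normal(n)    # survey measure with error
loyalty = 0.5 * sat_true + rng.standard_normal(n)
ols_slope = np.polyfit(sat_obs, loyalty, 1)[0]           # attenuated, ~0.33
print(round(ols_slope, 2), round(tsls(loyalty, sat_obs, quality), 2))  # IV ~0.5
```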
By: | Vasilis Syrgkanis; Victor Lei; Miruna Oprescu; Maggie Hei; Keith Battocchi; Greg Lewis |
Abstract: | We consider the estimation of heterogeneous treatment effects with arbitrary machine learning methods in the presence of unobserved confounders with the aid of a valid instrument. Such settings arise in A/B tests with an intent-to-treat structure, where the experimenter randomizes over which user will receive a recommendation to take an action, and we are interested in the effect of the downstream action. We develop a statistical learning approach to the estimation of heterogeneous effects, reducing the problem to the minimization of an appropriate loss function that depends on a set of auxiliary models (each corresponding to a separate prediction task). The reduction enables the use of all recent algorithmic advances (e.g. neural nets, forests). We show that the estimated effect model is robust to estimation errors in the auxiliary models, by showing that the loss satisfies a Neyman orthogonality criterion. Our approach can be used to estimate projections of the true effect model on simpler hypothesis spaces. When these spaces are parametric, then the parameter estimates are asymptotically normal, which enables construction of confidence sets. We applied our method to estimate the effect of membership on downstream webpage engagement on TripAdvisor, using as an instrument an intent-to-treat A/B test among 4 million TripAdvisor users, where some users received an easier membership sign-up process. We also validate our method on synthetic data and on public datasets for the effects of schooling on income. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.10176&r=all |
By: | Mawuli Segnon; Manuel Stapper |
Abstract: | This paper introduces a new class of integer-valued long memory processes that are adaptations of the well-known FIGARCH(p, d, q) process of Baillie (1996) and HYGARCH(p, d, q) process of Davidson (2004) to a count data setting. We derive the statistical properties of the models and show that reasonable parameter estimates are easily obtained via conditional maximum likelihood estimation. An empirical application with financial transaction data illustrates the practical importance of the models. |
Keywords: | Count Data, Poisson Autoregression, Fractionally Integrated, INGARCH |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:cqe:wpaper:8219&r=all |
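The short-memory baseline these models extend is the Poisson INGARCH(1,1), where the conditional mean follows a GARCH-type recursion; a conditional-MLE sketch (the paper's fractionally integrated versions replace this recursion with a long-memory one):

```python
import numpy as np
from scipy.optimize import minimize

def ingarch_negloglik(params, y):
    """Negative conditional log-likelihood of a Poisson INGARCH(1,1):
    lambda_t = omega + alpha*y_{t-1} + beta*lambda_{t-1}."""
    omega, alpha, beta = params
    lam = np.empty(len(y))
    lam[0] = y.mean()
    for t in range(1, len(y)):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
    lam = np.maximum(lam, 1e-10)
    return -(y * np.log(lam) - lam).sum()   # Poisson kernel; y! term dropped

rng = np.random.default_rng(9)
y, lam = np.empty(1000, int), 2.0           # simulate from the model
for t in range(1000):
    y[t] = rng.poisson(lam)
    lam = 1.0 + 0.3 * y[t] + 0.2 * lam
res = minimize(ingarch_negloglik, x0=[1.0, 0.2, 0.2], args=(y,),
               bounds=[(1e-6, None), (0, 1), (0, 1)], method="L-BFGS-B")
print(res.x.round(2))                       # should be near (1.0, 0.3, 0.2)
```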
By: | Xing Yan; Qi Wu; Wen Zhang |
Abstract: | We propose a novel probabilistic model to facilitate the learning of multivariate tail dependence of multiple financial assets. Our method allows one to construct, from known random vectors (e.g., standard normal), sophisticated joint heavy-tailed random vectors featuring not only distinct marginal tail heaviness but also a flexible tail dependence structure. The novelty lies in the fact that pairwise tail dependence between any two dimensions is modeled separately from their correlation and can vary according to its own parameter rather than the correlation parameter, an essential advantage over many commonly used methods such as the multivariate $t$ or elliptical distributions. It is also intuitive to interpret, easy to track, and simple to sample compared to the copula approach. We show its flexible tail dependence structure through simulation. Coupled with a GARCH model to eliminate the serial dependence of each individual asset return series, we use this novel method to model and forecast the multivariate conditional distribution of stock returns, and obtain notable performance improvements in multi-dimensional coverage tests. Besides, our empirical finding about the asymmetry of tails of the idiosyncratic component as well as the whole market is interesting and merits further study. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.13425&r=all |
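One simple construction with per-dimension tail heaviness, sketched below, divides each coordinate of a correlated Gaussian vector by its own chi-based mixing variable, giving distinct Student-t marginal tail indices on top of a shared correlation matrix; it is in the spirit of, but not identical to, the model in the abstract.

```python
import numpy as np

def heavy_tailed_sample(n, corr, dof, seed=0):
    """Draw vectors whose coordinates share a Gaussian correlation matrix
    but carry their own Student-t tail index via coordinate-wise
    chi-square mixing (contrast with multivariate t: one shared mixer)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    Z = rng.standard_normal((n, len(dof))) @ L.T
    W = rng.chisquare(df=dof, size=(n, len(dof))) / dof
    return Z / np.sqrt(W)

corr = np.array([[1.0, 0.6], [0.6, 1.0]])
X = heavy_tailed_sample(100000, corr, dof=np.array([3.0, 30.0]))
print(np.abs(X).max(axis=0).round(1))   # first coordinate: far heavier tails
```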
By: | Yibei Li; Ximei Wang; Boualem Djehiche; Xiaoming Hu |
Abstract: | In this paper, the credit scoring problem is studied by incorporating network information, and the advantages of such incorporation are investigated in two scenarios. First, a Bayesian optimal filter is proposed to provide a prediction for lenders, assuming that published credit scores are estimated merely from structured individual data. This prediction serves as a risk-warning indicator for lenders' future financial decisions. Second, we propose a recursive Bayes estimator that improves the accuracy of credit scoring estimation by also incorporating the dynamic interaction topology of clients. It is shown that, under the proposed evolution framework, the designed estimator has higher precision than any efficient estimator, and the mean square errors are strictly smaller than the Cramér-Rao lower bound for clients within a certain range of scores. Finally, simulation results for a specific case illustrate the effectiveness and feasibility of the proposed methods. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.11795&r=all |
By: | Dmitry Arkhangelsky; Vasily Korovkin |
Abstract: | In this paper we construct a parsimonious causal model that addresses multiple issues researchers face when trying to use aggregate time-series shocks for policy evaluation: (a) potential unobserved aggregate confounders, (b) availability of various unit-level characteristics, and (c) time- and unit-level heterogeneity in treatment effects. We develop a new estimation algorithm that uses insights from the treatment effects, panel, and time-series literatures. We construct a variance estimator that is robust to arbitrary clustering patterns across geographical units. We achieve this by considering a finite-population framework, where potential outcomes are treated as fixed and all randomness comes from the exogenous shocks. Finally, we illustrate our approach using data from the study of the causal relationship between foreign aid and conflict in Nunn and Qian [2014]. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.13660&r=all |
By: | Yoonseok Lee; Yulong Wang |
Abstract: | This paper develops a threshold regression model, where the threshold is determined by an unknown relation between two variables. The threshold function is estimated fully nonparametrically. The observations are allowed to be cross-sectionally dependent and the model can be applied to determine an unknown spatial border for sample splitting over a random field. The uniform rate of convergence and the nonstandard limiting distribution of the nonparametric threshold estimator are derived. The root-n consistency and the asymptotic normality of the regression slope parameter estimator are also obtained. Empirical relevance is illustrated by estimating an economic border induced by the housing price difference between Queens and Brooklyn in New York City, where the economic border deviates substantially from the administrative one. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.13140&r=all |
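The classic special case with a scalar threshold variable is estimated by grid-searching the split point that minimizes the two-regime sum of squared residuals; the paper replaces this scalar split with an unknown nonparametric boundary in two variables. A minimal sketch:

```python
import numpy as np

def threshold_ols(y, x, q, grid):
    """Grid-search the split point gamma minimizing the pooled SSR of a
    two-regime linear model: y = x'b1 if q <= gamma, y = x'b2 if q > gamma."""
    best_ssr, best_g = np.inf, None
    for g in grid:
        lo = q <= g
        if lo.sum() < 5 or (~lo).sum() < 5:
            continue                      # keep a few points in each regime
        ssr = 0.0
        for mask in (lo, ~lo):
            b, *_ = np.linalg.lstsq(x[mask], y[mask], rcond=None)
            ssr += ((y[mask] - x[mask] @ b) ** 2).sum()
        if ssr < best_ssr:
            best_ssr, best_g = ssr, g
    return best_g

rng = np.random.default_rng(10)
n = 500
q = rng.uniform(0, 1, n)                  # threshold variable
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = x @ np.array([0.0, 1.0]) + 2.0 * (q > 0.5) + 0.3 * rng.standard_normal(n)
print(threshold_ols(y, x, q, np.linspace(0.1, 0.9, 81)))   # close to 0.5
```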
By: | Domenico Di Gangi; Giacomo Bormetti; Fabrizio Lillo |
Abstract: | Motivated by the evidence that real-world networks evolve in time and may exhibit non-stationary features, we propose an extension of the Exponential Random Graph Models (ERGMs) accommodating time variation in the network parameters. Within the ERGM framework, a network realization is sampled from a static probability distribution defined parametrically in terms of network statistics. Inspired by the fast-growing literature on dynamic conditional score-driven models, in our approach each parameter evolves according to an updating rule driven by the score of the conditional distribution. We demonstrate the flexibility of score-driven ERGMs, both as data-generating processes and as filters, and we prove the advantages of the dynamic version with respect to the static one. Our method captures dynamic network dependencies that emerge from the data and allows for a test discriminating between static and time-varying parameters. Finally, we corroborate our findings with applications to networks from real financial and political systems exhibiting non-stationary dynamics. |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1905.10806&r=all |
By: | Giuseppe De Luca (University of Palermo); Jan R. Magnus (Vrije Universiteit Amsterdam and Tinbergen Institute); Franco Peracchi (Georgetown University and EIEF) |
Abstract: | We derive explicit expressions for arbitrary moments and quantiles of the posterior distribution of the location parameter in the normal location model with Laplace prior, and use the results to approximate the posterior distribution of sums of independent copies of the same parameter. |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:eie:wpaper:1911&r=all |
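The quantities in question are easy to check numerically: with a normal likelihood and a Laplace prior, the posterior of the location parameter can be integrated on a grid. A brute-force sketch with illustrative values sigma = b = 1:

```python
import numpy as np

def laplace_normal_posterior_moments(y, sigma=1.0, b=1.0, k_max=4):
    """Posterior moments of the location parameter mu for a single normal
    observation y ~ N(mu, sigma^2) under a Laplace(0, b) prior, computed
    by Riemann summation; a numerical check of the kind of quantities the
    paper derives in closed form."""
    mu = np.linspace(-10.0, 10.0, 20001)
    dmu = mu[1] - mu[0]
    log_post = -0.5 * ((y - mu) / sigma) ** 2 - np.abs(mu) / b
    w = np.exp(log_post - log_post.max())
    w /= w.sum() * dmu                        # normalize the density
    return [float((mu ** k * w).sum() * dmu) for k in range(1, k_max + 1)]

print(np.round(laplace_normal_posterior_moments(y=1.5), 3))
```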