New Economics Papers on Econometrics
By: | Giuseppe De Luca (University of Palermo); Jan R. Magnus (Vrije Universiteit Amsterdam and Tinbergen Institute); Franco Peracchi (University of Rome Tor Vergata and EIEF) |
Abstract: | We extend the results of De Luca et al. (2021) to inference for linear regression models based on weighted-average least squares (WALS), a frequentist model averaging approach with a Bayesian flavor. We concentrate on inference about a single focus parameter, interpreted as the causal effect of a policy or intervention, in the presence of a potentially large number of auxiliary parameters representing the nuisance component of the model. In our Monte Carlo simulations we compare the performance of WALS with that of several competing estimators, including the unrestricted least-squares estimator (with all auxiliary regressors) and the restricted least-squares estimator (with no auxiliary regressors), two post-selection estimators based on alternative model selection criteria (the Akaike and Bayesian information criteria), various versions of frequentist model averaging estimators (Mallows and jackknife), and one version of a popular shrinkage estimator (the adaptive LASSO). We discuss confidence intervals for the focus parameter and prediction intervals for the outcome of interest, and conclude that the WALS approach leads to superior confidence and prediction intervals, but only if we apply a bias correction. |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:eie:wpaper:2108&r= |
By: | Tingguo Zheng; Han Xiao; Rong Chen |
Abstract: | One of the important and widely used classes of models for non-Gaussian time series is the generalized autoregressive moving average (GARMA) models, which specify an ARMA structure for the conditional mean process of the underlying time series. However, in many applications one often encounters conditional heteroskedasticity. In this paper we propose a new class of models, referred to as GARMA-GARCH models, that jointly specify both the conditional mean and conditional variance processes of a general non-Gaussian time series. Under this general modeling framework, we propose three specific models, as examples, for proportional time series, nonnegative time series, and skewed and heavy-tailed financial time series. The maximum likelihood estimator (MLE) and the Gaussian quasi-MLE (GMLE) are used to estimate the parameters. Simulation studies and three applications are used to demonstrate the properties of the models and the estimation procedures.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.05532&r= |
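The GARMA-GARCH idea above, jointly specifying an ARMA-type conditional mean and a GARCH conditional variance and estimating both by (quasi) maximum likelihood, can be illustrated with a minimal Gaussian QMLE for an AR(1) mean with GARCH(1,1) variance. This is a toy stand-in, not the paper's models, which use link functions suited to proportional, nonnegative, or heavy-tailed data; all starting values and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_qmle(params, y):
    # Gaussian quasi-log-likelihood: AR(1) conditional mean, GARCH(1,1) variance
    c, phi, omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                           # keep the variance positive/stationary
    n = len(y)
    eps, h = np.zeros(n), np.full(n, np.var(y))
    ll = 0.0
    for t in range(1, n):
        eps[t] = y[t] - (c + phi * y[t - 1])                        # mean equation
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]    # variance equation
        ll -= 0.5 * (np.log(2 * np.pi * h[t]) + eps[t] ** 2 / h[t])
    return -ll

# Simulate an AR(1)-GARCH(1,1) series, then recover the parameters by QMLE
rng = np.random.default_rng(0)
n = 1000
y, eps, h = np.zeros(n), np.zeros(n), np.full(n, 0.1)
for t in range(1, n):
    h[t] = 0.05 + 0.10 * eps[t - 1] ** 2 + 0.85 * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    y[t] = 0.1 + 0.5 * y[t - 1] + eps[t]

res = minimize(neg_qmle, x0=[0.0, 0.3, 0.1, 0.1, 0.8], args=(y,), method="Nelder-Mead")
print(res.x)   # should land near (0.1, 0.5, 0.05, 0.10, 0.85)
```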
By: | Hauber, Philipp; Schumacher, Christian |
Abstract: | We propose a new approach to sample unobserved states conditional on available data in (conditionally) linear unobserved component models when some of the observations are missing. The approach is based on the precision matrix of the states and model variables, which is sparse and banded in many economic applications and allows for efficient sampling. The existing literature on precision-based sampling is focused on complete-data applications, whereas the proposed samplers in this paper provide draws for states and missing observations by using permutations of the precision matrix. The approaches can be easily integrated into Bayesian estimation procedures like the Gibbs sampler. By allowing for incomplete data sets, the proposed sampler expands the range of potential applications for precision-based samplers in practice. We derive the sampler for a factor model, although it can be applied to a wider range of empirical macroeconomic models. In an empirical application, we estimate international factors in GDP growth in a large unbalanced data set of about 180 countries. |
Keywords: | Precision-based sampling, Bayesian estimation, state-space models, missing observations, factor models, banded matrices
JEL: | C32 C38 C63 C55 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:112021&r= |
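The computational core of precision-based sampling is a Gaussian draw whose precision (inverse covariance) matrix is sparse and banded: factor the precision once and back-substitute. Below is a minimal dense sketch of that standard step for a toy local-level model; the paper's contribution, permuting the precision matrix so that missing observations are drawn jointly with the states, is not reproduced, and real applications would use a banded or sparse Cholesky to make the draw cost linear in the number of states.

```python
import numpy as np

def sample_from_precision(b, Q, rng):
    # Draw x ~ N(Q^{-1} b, Q^{-1}) via the Cholesky factor of the precision Q
    L = np.linalg.cholesky(Q)                 # Q = L @ L.T
    mu = np.linalg.solve(Q, b)                # conditional mean
    z = rng.standard_normal(len(b))
    # Solving L.T x = z yields cov(x) = (L @ L.T)^{-1} = Q^{-1}
    return mu + np.linalg.solve(L.T, z)

# Toy local-level model: random-walk states observed with noise (no missing data)
T, sig_x, sig_y = 50, 1.0, 0.5
rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(T)) + sig_y * rng.standard_normal(T)
D = np.eye(T) - np.eye(T, k=-1)                       # first-difference operator
Q_post = D.T @ D / sig_x**2 + np.eye(T) / sig_y**2    # banded posterior precision
state_draw = sample_from_precision(y / sig_y**2, Q_post, rng)
```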
By: | Guido M. Kuersteiner; Ingmar R. Prucha; Ying Zeng |
Abstract: | We study linear peer effects models where peers interact in groups, individuals' outcomes are linear in the group mean outcome and characteristics, and group effects are random. Our specification is motivated by the moment conditions imposed in Graham (2008). We show that these moment conditions can be cast in terms of a linear random group effects model and lead to a class of GMM estimators that are generally identified as long as there is sufficient variation in group size. We also show that our class of GMM estimators contains a quasi maximum likelihood estimator (QMLE) for the random group effects model, as well as the Wald estimator of Graham (2008) and the within estimator of Lee (2007) as special cases. Our identification results extend insights in Graham (2008) that show how assumptions about random group effects as well as variation in group size can be used to overcome the reflection problem in identifying peer effects. Our QMLE and GMM estimators can easily be augmented with additional covariates and are valid in situations with a large but finite number of different group sizes. Because our estimators are general moment-based procedures, using instruments other than binary group indicators in estimation is straightforward. Monte Carlo simulations show that the bias of the QMLE decreases with the number of groups and the variation in group size, and increases with group size. We also prove the consistency and asymptotic normality of the estimator under reasonable assumptions.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.04330&r= |
By: | Kim, Namhyun (University of Exeter Business School); W. Saart, Patrick (Cardiff Business School) |
Abstract: | Partially linear semiparametric models are advantageous to use in empirical studies of various economic problems due to a special feature that allows the parametric and nonparametric components to exist simultaneously in the model. However, systematic estimation procedures and methods have not yet been satisfactorily developed to deal effectively with a well-known endogeneity problem that may be present in some empirical applications. In the current paper, we aim to address endogeneity comprehensively, which may take place in either a parametric or a nonparametric component or both, and to provide guidance to an appropriate estimation procedure and method in the presence of such a problem. A significant difficulty we must overcome before such goals can be achieved is a generated regressor problem, which arises because a critical part, known in the literature as the "control variables", is not observable in practice and hence must be estimated. We show theoretically (i.e. through the derivation of a set of important asymptotic properties) and experimentally (i.e. through the use of simulation exercises) that our newly introduced method can help in overcoming the above-mentioned endogeneity problem. For the sake of completeness, we also discuss an adaptive data-driven method of bandwidth selection and show its asymptotic optimality.
Keywords: | Semiparametric Models with Endogeneity |
JEL: | C12 C14 C22 |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:cdf:wpaper:2021/9&r= |
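The control-function logic behind the entry above can be sketched in two steps: estimate the control variable as the first-stage residual (the generated regressor the authors analyze), then include it in the outcome equation. In this toy version a polynomial series stands in for the paper's kernel-based nonparametric component, and the data-generating process and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.standard_normal(n)                         # instrument
u = rng.standard_normal(n)                         # unobservable causing endogeneity
x = 0.8 * z + u + 0.3 * rng.standard_normal(n)     # endogenous regressor
w = rng.standard_normal(n)                         # regressor entering nonparametrically
y = 1.5 * x + np.sin(w) + u + 0.5 * rng.standard_normal(n)

# Step 1: control variable = residual from the first-stage regression of x on z
Z = np.column_stack([np.ones(n), z])
v_hat = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Step 2: outcome regression including the generated control variable;
# a cubic in w crudely replaces the kernel estimate of the nonparametric part
W = np.column_stack([np.ones(n), x, v_hat, w, w**2, w**3])
beta = np.linalg.lstsq(W, y, rcond=None)[0]
print("coefficient on x:", beta[1])                # close to the true value 1.5
```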
By: | Bodnar, Olha (Örebro University School of Business); Bodnar, Taras (Örebro University School of Business) |
Abstract: | Objective Bayesian inference procedures are derived for the parameters of the multivariate random effects model generalized to elliptically contoured distributions. The posterior for the overall mean vector and the between-study covariance matrix is deduced by assigning two noninformative priors to the model parameters, namely the Berger and Bernardo reference prior and the Jeffreys prior, whose analytical expressions are obtained under weak distributional assumptions. It is shown that the only condition needed for the posterior to be proper is that the sample size is larger than the dimension of the data-generating model, independently of the class of elliptically contoured distributions used in the definition of the generalized multivariate random effects model. The theoretical findings of the paper are applied to real data consisting of ten studies about the effectiveness of hypertension treatment for reducing blood pressure, where the treatment effects on both systolic and diastolic blood pressure are investigated.
Keywords: | Multivariate random-effects model; Jeffreys prior; reference prior; propriety; elliptically contoured distribution; multivariate meta-analysis |
JEL: | C11 C13 C15 C16 |
Date: | 2021–05–07 |
URL: | http://d.repec.org/n?u=RePEc:hhs:oruesi:2021_005&r= |
By: | Dong, C.; Li, S. |
Abstract: | This paper proposes the method of Specification-LASSO in a flexible semi-parametric regression model that allows for interactive effects between different covariates. Specification-LASSO extends the LASSO and the Adaptive Group LASSO to achieve both relevant-variable selection and model specification. Specification-LASSO also gives preliminary estimates that facilitate the estimation of the regression model. Monte Carlo simulations show that Specification-LASSO can accurately specify partially linear additive models with interactive regressors. Finally, the proposed methods are applied in an empirical study that revisits Freyberger et al. (2020), who argue that firm size may have interactive effects with other security-specific characteristics in jointly explaining stocks' excess returns.
Keywords: | Variable Selection, Model Selection, Interaction |
JEL: | C14 G12 |
Date: | 2021–05–05 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:2139&r= |
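One ingredient of Specification-LASSO, the adaptive LASSO, can be reduced to an ordinary LASSO by a standard reweighting trick: pilot estimates define penalty weights, and rescaling the design columns by those weights absorbs them into the penalty. The toy design below, with a single relevant interaction term, only illustrates this ingredient, not the paper's full specification procedure; the tuning constant is arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(3)
n, p = 400, 10
X = rng.standard_normal((n, p))
X_full = np.column_stack([X, X[:, 0] * X[:, 1]])   # append one interaction column
y = 2.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1] + rng.standard_normal(n)

# Step 1: pilot OLS estimates define the adaptive penalty weights (gamma = 1)
beta_init = LinearRegression(fit_intercept=False).fit(X_full, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-8)               # small offset avoids division by 0

# Step 2: adaptive LASSO = ordinary LASSO on columns rescaled by 1/w
fit = Lasso(alpha=0.05, fit_intercept=False).fit(X_full / w, y)
beta_hat = fit.coef_ / w
print(np.round(beta_hat, 2))   # expected: nonzero only on x0 and the interaction
```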
By: | Berg, Tobias; Reisinger, Markus; Streitz, Daniel |
Abstract: | Despite their importance, the discussion of spillover effects in empirical research often misses the rigor dedicated to endogeneity concerns. We analyze a broad set of workhorse models of firm interactions and show that spillovers naturally arise in many corporate finance settings. This has important implications for the estimation of treatment effects: i) even with random treatment, spillovers lead to a complicated bias, ii) fixed effects can exacerbate the spillover-induced bias. We propose simple diagnostic tools for empirical researchers and illustrate our guidance in an application. |
Keywords: | Credit supply; direct vs. indirect effects; spillovers
JEL: | C13 C21 G21 G32 M41 M42 R11 R23 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:15549&r= |
By: | W. Saart, Patrick (Cardiff Business School); Kim, Namhyun (University of Exeter Business School); Bateman, Ian (University of Exeter Business School) |
Abstract: | This paper addresses various statistical and empirical challenges associated with modelling farmers' decision-making processes concerning agricultural land-use. These include (i) use of spatially high-resolution data so that idiosyncratic effects of physical environment drivers, e.g. soil textures, can be explicitly modelled; (ii) modelling land-use shares as censored responses, which enables consistent estimation of the unknown parameters; (iii) incorporating spatial error dependence and heterogeneity, which leads to accurate formulation of the variances for the parameter estimates and more effective statistical inference; and (iv) reducing the computational burden and improving estimation accuracy by introducing an alternative GMM/QML hybrid estimation procedure. We also provide extensive evidence suggesting that our approach constructs more accurate land-use predictions than existing methods in the literature. We then apply our method to empirically investigate how climatic, economic, policy and physical determinants influence land-use patterns in England over time and across space. We also examine whether environmental schemes and grants have assisted in freeing up land used for arable farming, rough grazing, and temporary and permanent grassland, and converting it to bio-energy crops, to help achieve deep emission reductions and prepare for climate change.
Keywords: | Agro-environmental policy, land-use, multivariate Tobit, system of censored equations, spatial model, error component model
JEL: | C13 C21 C23 C34 Q15 Q53 |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:cdf:wpaper:2021/7&r= |
By: | Guastadisegni, Lucia; Cagnone, Silvia; Moustaki, Irini; Vasdekis, Vassilis |
Abstract: | This paper studies the Type I error, false positive rates, and power of four versions of the Lagrange Multiplier test to detect measurement non-invariance in Item Response Theory (IRT) models for binary data under model misspecification. The tests considered are the Lagrange Multiplier test computed with the Hessian and cross-product approaches, the Generalized Lagrange Multiplier test and the Generalized Jackknife Score test. The two model misspecifications are local dependence among items and a non-normal distribution of the latent variable. The power of the tests is computed in two ways: empirically, through Monte Carlo simulation methods, and asymptotically, using the asymptotic distribution of each test under the alternative hypothesis. The performance of these tests is evaluated by means of a simulation study. The results highlight that, under mild model misspecification, all tests perform well, while under strong model misspecification their performance deteriorates, especially the false positive rates under local dependence and the power for small sample sizes under misspecification of the latent variable distribution. In general, the Lagrange Multiplier test computed with the Hessian approach and the Generalized Lagrange Multiplier test perform better in terms of false positive rates, while the Lagrange Multiplier test computed with the cross-product approach has the highest power for small sample sizes. The asymptotic power turns out to be a good alternative to the classic empirical power because it is less time-consuming. The Lagrange tests studied here are also applied to a real data set.
JEL: | C1 |
Date: | 2021–04–04 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:110358&r= |
By: | Rustam Ibragimov; Paul Kattuman; Anton Skrobotov |
Abstract: | Empirical analyses of income and wealth inequality, and in other fields of economics and finance, often face the difficulty that the data are heterogeneous, heavy-tailed, or correlated in some unknown fashion. The paper focuses on applications of recently developed t-statistic based robust inference approaches to the analysis of inequality measures and their comparison under these problems. In particular, a robust large-sample test of the equality of two parameters of interest (e.g., inequality measures in two regions or countries) is conducted as follows: the data in the two samples are partitioned into fixed numbers q1, q2 >= 2 (e.g., q1 = q2 = 2, 4, 8) of groups, the parameters (inequality measures) are estimated for each group, and inference is based on a standard two-sample t-test with the resulting q1, q2 group estimators. Robust t-statistic approaches yield valid inference under general conditions: the group estimators of the parameters need only be asymptotically independent, unbiased and Gaussian with possibly different variances, or weakly converge, at an arbitrary rate, to independent scale mixtures of normal random variables. These conditions are typically satisfied in empirical applications even under pronounced heavy-tailedness, heterogeneity, and possible dependence in observations. The methods complement and compare favorably with other inference approaches available in the literature. Their use is illustrated by an empirical analysis of income inequality measures and their comparison across different regions of Russia.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.05335&r= |
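The grouped t-statistic procedure quoted above is short enough to code directly. The sketch below applies it to the Gini coefficient on simulated heavy-tailed income samples; the random split into groups, the choice q1 = q2 = 4, and the Pareto data are all illustrative, and applications would often exploit natural groupings instead.

```python
import numpy as np
from scipy import stats

def gini(x):
    # Gini coefficient of a nonnegative sample (the inequality measure used here)
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def robust_group_ttest(sample1, sample2, q1=4, q2=4, seed=0):
    # Split each sample into q groups, estimate the measure per group, then
    # run a standard unequal-variance two-sample t-test on the group estimates
    rng = np.random.default_rng(seed)
    g1 = [gini(g) for g in np.array_split(rng.permutation(sample1), q1)]
    g2 = [gini(g) for g in np.array_split(rng.permutation(sample2), q2)]
    return stats.ttest_ind(g1, g2, equal_var=False)

rng = np.random.default_rng(4)
incomes_a = rng.pareto(2.5, 4000) + 1.0    # hypothetical region A incomes
incomes_b = rng.pareto(1.8, 4000) + 1.0    # hypothetical region B, heavier tail
print(robust_group_ttest(incomes_a, incomes_b))
```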
By: | Eric Auerbach; Max Tabord-Meehan |
Abstract: | We propose a new unified framework for causal inference when outcomes depend on how agents are linked in a social or economic network. Such network interference is the subject of a large literature on treatment spillovers, social interactions, social learning, information diffusion, social capital formation, and more. Our approach works by first characterizing how an agent is linked in the network using the configuration of other agents and connections nearby, as measured by path distance. The impact of a policy or treatment assignment is then learned by pooling outcome data across similarly configured agents. In the paper, we propose a new nonparametric modeling approach and consider two applications to causal inference. The first application is to testing policy irrelevance/no treatment effects. The second application is to estimating policy effects/treatment response. We conclude by evaluating the finite-sample properties of our estimation and inference procedures via simulation.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.03810&r= |
By: | Julián Martínez-Iriarte (UC San Diego) |
Abstract: | This paper proposes a framework to analyze the effects of counterfactual policies on the unconditional quantiles of an outcome variable. For a given counterfactual policy, we obtain identified sets for the effect of both marginal and global changes in the proportion of treated individuals. To conduct a sensitivity analysis, we introduce the quantile breakdown frontier, a curve that quantifies the maximum amount of selection bias consistent with a given conclusion of interest across different quantiles. We obtain a √n-consistent estimator of the curve, and propose a bootstrap-based inference procedure. To illustrate our method, we perform a sensitivity analysis on the effect of unionizing low income workers on the quantiles of the distribution of (log) wages.
Keywords: | unconditional quantile effects; partial identification; sensitivity analysis; directional differentiability
Date: | 2021–04 |
URL: | http://d.repec.org/n?u=RePEc:aoz:wpaper:52&r= |
By: | Geoffrey Ducournau |
Abstract: | One of the stylized features of financial data is that log-returns are uncorrelated, while absolute or squared log-returns (the fluctuating volatility) are correlated, heavy-tailed in the sense that some moments of the absolute log-returns are infinite, and typically non-Gaussian [20]. These characteristics change across timescales. We propose to model this long-memory phenomenon by superstatistical dynamics and provide a Bayesian inference methodology, drawing on Metropolis-Hastings random-walk sampling, to determine which superstatistics, inverse-Gamma or log-Normal, best describes the complexity of log-returns on different timescales, from high to low frequency. We show that on smaller timescales (minutes), even though the inverse-Gamma superstatistics works best, the log-Normal model remains very reliable and suitable for fitting the probability density of absolute log-returns, with a strong capacity for describing heavy tails and power-law decay. On larger timescales (daily), we show in terms of Bayes factors that the inverse-Gamma superstatistics is preferred to the log-Normal model. We also show evidence of a transition from power-law decay on small timescales to exponential decay with less heavy tails on large timescales, meaning that on larger timescales the fluctuating volatility tends to be memoryless, so that superstatistics becomes less relevant.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.04171&r= |
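The sampler behind the entry above is a random-walk Metropolis-Hastings scheme over the parameters of a candidate volatility distribution. A generic sketch for the log-Normal candidate under a flat prior follows, on simulated heavy-tailed data; the inverse-Gamma alternative and the Bayes-factor comparison between the two are not reproduced here.

```python
import numpy as np
from scipy import stats

def rw_metropolis(logpost, x0, n_draws=5000, scale=0.1, seed=5):
    # Generic random-walk Metropolis-Hastings sampler
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = logpost(x)
    draws = np.empty((n_draws, len(x)))
    for i in range(n_draws):
        prop = x + scale * rng.standard_normal(len(x))
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

rng = np.random.default_rng(6)
r = np.abs(stats.t.rvs(df=3, size=2000, random_state=rng)) * 0.01  # toy |log-returns|

def logpost_lognormal(theta):
    # Flat-prior log-posterior of a log-Normal fit; theta = (mu, log sigma)
    mu, log_sigma = theta
    return stats.lognorm.logpdf(r, s=np.exp(log_sigma), scale=np.exp(mu)).sum()

draws = rw_metropolis(logpost_lognormal, x0=[-5.0, 0.0])
print(draws[1000:].mean(axis=0))   # posterior means after burn-in
```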
By: | Meenagh, David; Minford, Patrick; Wickens, Michael R. |
Abstract: | We ask whether Bayesian estimation creates a potential estimation bias as compared with standard estimation techniques based on the data, such as maximum likelihood or indirect estimation. We investigate this with a Monte Carlo experiment in which the true version of a New Keynesian model may either have high wage/price rigidity or be close to pure flexibility; we treat each in turn as the true model and create Bayesian estimates of it under priors from the true model and its false alternative. Bayesian estimation of macro models may thus give very misleading results by placing too much weight on prior information compared to observed data; a better method may be indirect estimation, where the bias is found to be low.
Keywords: | Bayesian; Estimation Bias; indirect inference; maximum likelihood |
JEL: | C11 E12 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:15684&r= |
By: | Bart Capéau; Liebrecht De Sadeleer; Sebastiaan Maes; André Decoster |
Abstract: | Empirical welfare analyses often impose stringent parametric assumptions on individuals’ preferences and neglect unobserved preference heterogeneity. In this paper, we develop a framework to conduct individual and social welfare analysis for discrete choice that does not suffer from these drawbacks. We first adapt the broad class of individual welfare measures introduced by Fleurbaey (2009) to settings where individual choice is discrete. Allowing for unrestricted, unobserved preference heterogeneity, these measures become random variables. We then show that the distribution of these objects can be derived from choice probabilities, which can be estimated nonparametrically from cross-sectional data. In addition, we derive nonparametric results for the joint distribution of welfare and welfare differences, as well as for social welfare. The former is an important tool in determining whether those who benefit from a price change belong disproportionately to those who were initially well-off. An empirical application illustrates the methods. |
Keywords: | discrete choice, nonparametric welfare analysis, individual welfare, social welfare, money metric utility, compensating variation, equivalent variation |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ete:ceswps:674666&r= |
By: | Silva Lopes, Artur C. |
Abstract: | Motivated by the purpose to assess the income convergence hypothesis, a simple new Fourier-type unit root test of the Dickey-Fuller family is introduced and analysed. In spite of a few shortcomings that it shares with rival tests, the proposed test generally improves upon them in terms of power performance in small samples. The empirical results that it produces for a recent and updated sample of data for 25 countries clearly contrast with previous evidence produced by the Fourier approach and, more generally, they also contradict a recent wave of optimism concerning income convergence, as they are mostly unfavourable to it. |
Keywords: | income convergence; unit root tests; structural breaks |
JEL: | C22 F43 O47 |
Date: | 2021–03–19 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:107676&r= |
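A Fourier-type Dickey-Fuller test augments the usual unit-root regression with low-frequency trigonometric terms so that smooth structural change is absorbed by the deterministic component. The sketch below is a generic single-frequency member of this family, not the paper's specific test; its statistic must be compared against simulated rather than standard Dickey-Fuller critical values.

```python
import numpy as np
import statsmodels.api as sm

def fourier_df_stat(y, k=1):
    # Dickey-Fuller regression with constant, trend and one Fourier frequency k;
    # returns the t-ratio on the lagged level (compare to simulated critical values)
    T = len(y)
    t = np.arange(1, T)
    trig = np.column_stack([np.sin(2 * np.pi * k * t / T),
                            np.cos(2 * np.pi * k * t / T)])
    X = sm.add_constant(np.column_stack([y[:-1], t, trig]))
    res = sm.OLS(np.diff(y), X).fit()
    return res.tvalues[1]

rng = np.random.default_rng(7)
y = rng.standard_normal(200).cumsum()   # a pure random walk: no rejection expected
print(fourier_df_stat(y))
```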
By: | Öberg, Stefan |
Abstract: | The local average treatment effect (LATE) estimator has enabled a wide range of empirical studies using natural experiments to estimate causal effects from observational data. This empirical literature has overlooked a crucial assumption regarding the definition of the treatment—a part of the stable unit treatment value assumption called the consistency assumption in epidemiology. The consequence of ignoring this assumption has been that results have been misinterpreted and over-generalized. I illustrate these problems using examples from seminal studies of the natural experiments literature and present how to improve definitions in future studies. Correctly interpreted LATEs are much more specific than previously acknowledged but reclaim their internal validity. By providing clear and careful definitions of the treatment and interpretations of the results, future studies will increase their usefulness by presenting the causal effect that is actually estimated. |
Date: | 2021–05–05 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:pkyue&r= |
By: | Saparya Suresh (Indian Institute of Management Kozhikode); Malay Bhattacharya (Indian Institute of Management Bangalore) |
Abstract: | An interesting idea in the existing literature states that any probability distribution can be approximated to arbitrary precision by combining a sufficient number of normal components. Motivated by this idea, we propose a new and simple, yet powerful stochastic process and discuss its construction from the microscopic movement of a particle in space. We call this new process the Normal Mixture process. The intrinsic characteristics built into the proposed process give it several desirable properties, making it a good candidate for modelling financial time series data such as stock price movements. The paper also illustrates, through data analysis, how the proposed model gives a better fit than existing models.
Date: | 2021–03 |
URL: | http://d.repec.org/n?u=RePEc:iik:wpaper:410&r= |
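The approximation idea that motivates the Normal Mixture process, that combining normal components can mimic a wide range of distributions, is easy to demonstrate on the unconditional distribution of simulated returns. The sketch below fits only a static two-component mixture; it does not capture the dynamics of the proposed process.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 'returns' drawn from two regimes, calm and turbulent
rng = np.random.default_rng(8)
returns = np.concatenate([rng.normal(0.0, 0.01, 800),
                          rng.normal(0.0, 0.04, 200)])

gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
print(gm.weights_)                         # mixture weights, roughly 0.8 / 0.2
print(gm.means_.ravel())                   # component means, both near zero
print(np.sqrt(gm.covariances_.ravel()))    # component sds, near 0.01 and 0.04
```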
By: | Levy, Matthew; Schiraldi, Pasquale |
Abstract: | This paper studies the non-parametric identification of the discount factor and utility function in the class of dynamic discrete-continuous choice (DDCC) models. In contrast to the discrete-only model, we show that the discount factor is identified. Our results further highlight why Euler equation estimation approaches that ignore agents' discrete choices are inconsistent. We estimate utility and discount factors for a consumption-savings-retirement choice problem using the Panel Study of Income Dynamics (PSID). We show that the relative risk aversion parameter and the intertemporal elasticity of substitution are separately identified, and that the latter varies across agents due to the wealth-dependence of the surplus from the discrete choice. This surplus also implies that the value function may be locally convex in wealth, and we find that a simulated Universal Basic Income (UBI) policy counterintuitively benefits wealthier working households more than poorer ones due to this effect.
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:15719&r= |
By: | Kyle Butts |
Abstract: | Empirical work often uses treatment assigned following geographic boundaries. When the effects of treatment cross over borders, classical difference-in-differences estimation produces biased estimates for the average treatment effect. In this paper, I introduce a potential outcomes framework to model spillover effects and decompose the estimate's bias into two parts: (1) the control group no longer identifies the counterfactual trend, because their outcomes are affected by treatment, and (2) changes in treated units' outcomes reflect the effect of their own treatment status and the effect from the treatment status of "close" units. I propose estimation strategies that can remove both sources of bias and semi-parametrically estimate the spillover effects themselves. I extend Callaway and Sant'Anna (2020) to allow for event-study estimates that control for spillovers. To highlight the importance of spillover effects, I revisit analyses of three place-based interventions.
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.03737&r= |
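The first source of bias described above, controls whose outcomes respond to nearby treatment, can be demonstrated in a stylized one-dimensional geography: the naive difference-in-differences estimate is attenuated, while adding an exposure indicator for 'close' controls recovers both the direct and the spillover effect. This is a toy demonstration on invented data, not the paper's semi-parametric estimator or its extension of Callaway and Sant'Anna (2020).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_units = 300
loc = rng.uniform(0, 10, n_units)               # one-dimensional 'geography'
treated = (loc < 3).astype(int)
dist = np.abs(loc[:, None] - loc[treated == 1][None, :]).min(axis=1)
near = ((treated == 0) & (dist < 1.0)).astype(int)   # controls exposed to spillover

rows = []
for t in (0, 1):                                # one pre and one post period
    y = (0.5 * t + 2.0 * treated * t            # direct treatment effect = 2
         + 1.0 * near * t                       # spillover onto close controls = 1
         + rng.standard_normal(n_units))
    rows.append(pd.DataFrame({"y": y, "post": t, "treated": treated, "near": near}))
df = pd.concat(rows)

# Naive DiD: exposed controls contaminate the counterfactual trend (estimate < 2)
print(smf.ols("y ~ treated * post", df).fit().params["treated:post"])
# Spillover-aware DiD: direct and spillover effects estimated jointly
fit = smf.ols("y ~ treated * post + near * post", df).fit()
print(fit.params[["treated:post", "near:post"]])
```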
By: | Saparya Suresh (Indian Institute of Management Kozhikode); Malay Bhattacharya (Indian Institute of Management Bangalore) |
Abstract: | The correlation structure associated with a portfolio tends to vary across time. Studying structural breaks in the time-dependent correlation matrix of a portfolio has been a subject of interest for better understanding market movements, portfolio selection, etc. The current paper proposes a methodology for testing a change in the time-dependent correlation structure of a portfolio in high-dimensional data using the techniques of the generalized inverse, singular value decomposition and multivariate distribution theory, which has not been addressed so far. The asymptotic properties of the proposed test are derived, and the performance and validity of the method are tested on a real data set.
Date: | 2021–03 |
URL: | http://d.repec.org/n?u=RePEc:iik:wpaper:411&r= |
By: | Piergiorgio Alessandri (Bank of Italy); Andrea Gazzani (Bank of Italy); Alejandro Vicondoa (Universidad Católica de Chile) |
Abstract: | Isolating financial uncertainty shocks is difficult because financial markets rapidly price changes in several economic fundamentals. To bypass this difficulty, we identify uncertainty shocks using daily data and use their monthly averages as an instrument in a VAR. We show that this novel approach is theoretically appealing and has dramatic implications for leading empirical studies on financial uncertainty. Daily interactions between equity returns, bond spreads and expected volatility cause previous identification schemes to fail at the monthly frequency. Once these interactions are explicitly modeled, the impact of uncertainty shocks on output and inflation is significant and similar across specifications. |
Keywords: | uncertainty shocks; financial shocks; structural vector autoregression; high-frequency identification; external instruments
JEL: | C32 C36 E32 |
Date: | 2021–04 |
URL: | http://d.repec.org/n?u=RePEc:aoz:wpaper:61&r= |
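The identification step described above, averaging daily shocks to the monthly frequency and using the average as an external instrument, reduces in its simplest form to projecting the reduced-form VAR residuals on the instrument to recover relative impact responses. Everything below, from the data-generating process to the variable names, is an invented illustration of this generic proxy-SVAR step rather than the authors' procedure.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(10)
days_per_month, months = 22, 240
shock_d = rng.standard_normal(days_per_month * months)     # daily uncertainty shock
inst_m = shock_d.reshape(months, days_per_month).mean(1)   # monthly average instrument

# Toy monthly system in which the shock hits both variables on impact
u = np.column_stack([inst_m + 0.5 * rng.standard_normal(months),
                     -0.8 * inst_m + 0.5 * rng.standard_normal(months)])
y = np.zeros((months, 2))
for t in range(1, months):
    y[t] = 0.5 * y[t - 1] + u[t]

res = VAR(y).fit(1)
z = inst_m[res.k_ar:]                       # align instrument with residuals
b = res.resid.T @ z / len(z)                # covariance of residuals with instrument
print(b / b[0])                             # relative impact vector, near (1, -0.8)
```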