New Economics Papers on Econometrics
By: | Junlong Feng |
Abstract: | In this paper, I show that a nonseparable model where the endogenous variable is multivalued can be point-identified even when the instrument (IV) is only binary. Though the order condition generally fails in this case, I show that exogenous covariates can generate enough moment equations to restore the order condition, as if enlarging the IV's support, under very general selection mechanisms for the endogenous variable. No restrictions, such as separability or monotonicity, are imposed on the way these covariates enter the model. Further, once the order condition is fulfilled, I provide a new sufficient condition, weaker than existing results, for the global uniqueness of the solution to the nonlinear system of equations. Based on the identification result, I propose a sieve estimator; uniform consistency and pointwise asymptotic normality are established under simple low-level conditions. A Monte Carlo experiment examines its performance.
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.01159&r=all |
By: | Tingting Cheng; Jiti Gao; Oliver Linton |
Abstract: | We propose two new nonparametric predictive models: the multi-step nonparametric predictive regression model and the multi-step additive predictive regression model, in which the predictive variables are locally stationary time series. We define estimation methods and establish the large-sample properties of these methods in both the short-horizon and the long-horizon case. We apply our methods to stock return prediction using a number of standard predictors, such as the dividend yield. The empirical results show that all of these models can substantially outperform the traditional linear predictive regression model in terms of both in-sample and out-of-sample performance. In addition, we find that these models always beat the historical mean model in terms of in-sample fit, and in some cases also in terms of out-of-sample forecasting. We also propose a trading strategy based on our methodology and show that it beats the buy-and-hold strategy provided the tuning parameters are well chosen. (A code sketch follows this entry.)
Keywords: | Kernel estimator, locally stationary process, series estimator, stock return prediction |
JEL: | C14 C22 G17 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2019-4&r=all |
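Code sketch: | A minimal illustration of one-step-ahead nonparametric predictive regression, estimating E[y_{t+1} | x_t] with a Nadaraya-Watson kernel smoother; the paper's multi-step and additive estimators for locally stationary predictors are more elaborate, and the data and bandwidth below are hypothetical.
    import numpy as np

    def nw_predict(x_lag, y_next, x0, h):
        """Nadaraya-Watson estimate of E[y_{t+1} | x_t = x0] with bandwidth h."""
        w = np.exp(-0.5 * ((x_lag - x0) / h) ** 2)  # Gaussian kernel weights
        return np.sum(w * y_next) / np.sum(w)

    rng = np.random.default_rng(0)
    T = 500
    x = rng.standard_normal(T)                       # stand-in predictor, e.g. dividend yield
    y_next = np.sin(x[:-1]) + 0.5 * rng.standard_normal(T - 1)  # y_{t+1} = m(x_t) + error
    print(nw_predict(x[:-1], y_next, x0=1.0, h=0.3))  # approaches sin(1) in large samples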
By: | Bulat Gafarov |
Abstract: | This paper proposes both pointwise and uniform confidence sets (CS) for an element $\theta_{1}$ of a parameter vector $\theta\in\mathbb{R}^{d}$ that is partially identified by affine moment equality and inequality conditions. The method is based on an estimator of a regularized support function of the identified set. This estimator is half-median unbiased and has an asymptotic linear representation, which provides closed-form standard errors and enables an optimization-free multiplier bootstrap. The proposed CS can be computed as a solution to a finite number of linear and convex quadratic programs, which leads to a substantial decrease in computation time and a guarantee of a global optimum. As a result, the method provides uniformly valid inference in applications in which the dimension of the parameter space, $d$, and the number of inequalities, $k$, were previously computationally infeasible ($d,k > 100$). The proposed approach is then extended to construct polygon-shaped joint CS for multiple components of $\theta$. Inference for coefficients in the linear IV regression model with an interval outcome is used as an illustrative example. Key words: affine moment inequalities; asymptotic linear representation; delta-method; interval data; intersection bounds; partial identification; regularization; strong approximation; stochastic programming; subvector inference; uniform inference. (A code sketch follows this entry.)
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.00111&r=all |
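Code sketch: | The support-function idea in miniature: for an identified set defined by (estimated) affine moment inequalities {theta : A theta <= b}, the bounds on theta_1 solve linear programs. The regularization, half-median-unbiased estimation, and bootstrap developed in the paper are not reproduced; the matrices below are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # hypothetical estimated constraints
    b = np.array([1.0, 0.0, 0.0])                         # identified set: a simplex

    def support(q):
        """sup of q'theta over {A theta <= b} (linprog minimizes, so flip signs)."""
        res = linprog(-q, A_ub=A, b_ub=b, bounds=[(None, None)] * len(q))
        return -res.fun

    lo = -support(np.array([-1.0, 0.0]))   # lower bound on theta_1
    hi = support(np.array([1.0, 0.0]))     # upper bound on theta_1
    print(lo, hi)                          # projection of the identified set: [0, 1]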
By: | Timothy Christensen; Benjamin Connault |
Abstract: | Researchers frequently make parametric assumptions about the distribution of unobservables when formulating structural models. These assumptions are typically motivated by computational convenience rather than economic theory and are often untestable. Counterfactuals can be particularly sensitive to such assumptions, threatening the credibility of structural modeling exercises. To address this issue, we leverage insights from the literature on ambiguity and model uncertainty to propose a tractable econometric framework for characterizing the sensitivity of counterfactuals with respect to a researcher's assumptions about the distribution of unobservables in a class of structural models. In particular, we show how to construct the smallest and largest values of the counterfactual as the distribution of unobservables spans nonparametric neighborhoods of the researcher's assumed specification while other "structural" features of the model, e.g. equilibrium conditions, are maintained. Our methods are computationally simple to implement, with the nuisance distribution effectively profiled out via a low-dimensional convex program. Our procedure delivers sharp bounds for the identified set of counterfactuals (i.e., without imposing parametric assumptions about the distribution of unobservables) as the neighborhoods become large. Over small neighborhoods, we relate our procedure to a measure of local sensitivity, which is further characterized using an influence function representation. We provide a suitable sampling theory for plug-in estimators. Finally, we apply our procedure to models of strategic interaction and dynamic discrete choice. (A code sketch follows this entry.)
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.00989&r=all |
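Code sketch: | The flavor of the bounds computation: the extremes of a counterfactual E_F[g(U)] over a Kullback-Leibler neighborhood KL(F||F0) <= delta are attained by exponential tilts of F0, so the nuisance distribution is profiled out through a one-dimensional root-finding problem. The neighborhood, the function g, and delta below are hypothetical stand-ins for the paper's richer setup.
    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(1)
    g = rng.standard_normal(5000) ** 2     # counterfactual values under draws from F0

    def tilted(lam):
        a = lam * g
        w = np.exp(a - a.max())            # exponential tilt of the empirical measure
        return w / w.sum()

    def kl(lam):
        w = tilted(lam)
        return np.sum(w * np.log(w * len(g)))  # KL divergence from the uniform weights

    delta = 0.1                            # neighborhood radius
    lam_hi = brentq(lambda l: kl(l) - delta, 1e-8, 5.0)
    lam_lo = brentq(lambda l: kl(l) - delta, -5.0, -1e-8)
    print(tilted(lam_lo) @ g, tilted(lam_hi) @ g)  # lower and upper counterfactual bounds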
By: | Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata") |
Abstract: | The formulation of unobserved components models raises some relevant interpretative issues, owing to the existence of alternative observationally equivalent specifications, differing in the timing of the disturbances and their covariance matrix. We illustrate them with reference to unobserved components models with an ARMA(m, m) reduced form, performing the decomposition of the series into an ARMA(m, q) signal, q ≤ m, and a noise component. We provide a characterization of the set of covariance structures that are observationally equivalent when the models are formulated both in the future form and in the contemporaneous form. Hence, we show that, while the point predictions and the contemporaneous real-time estimates are invariant to the specification of the disturbance covariance matrix, their reliability cannot be identified, except for special cases requiring q < m.
Keywords: | ARMA models, Steady State Kalman filter, Correlated Components, Nonfundamentalness
JEL: | C22 C51 C53 |
Date: | 2019–03–22 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:455&r=all |
By: | Harold D. Chiang; Joel Rodrigue; Yuya Sasaki |
Abstract: | Three-dimensional panel models are widely used in empirical analysis. Researchers use various combinations of fixed effects for three-dimensional panels. When one imposes a parsimonious model and the true model is rich, mis-specification biases arise. When one employs a rich model and the true model is parsimonious, standard errors are larger than necessary. It is therefore useful for researchers to know the correct model. In this light, Lu, Miao, and Su (2018) propose methods of model selection. We advance this literature by proposing a method of post-selection inference for regression parameters. Despite our use of the lasso technique as a means of model selection, our assumptions allow for many, and even all, fixed effects to be nonzero. Simulation studies demonstrate that the proposed method is more precise than under-fitting fixed effect estimators, is more efficient than over-fitting fixed effect estimators, and permits inference as accurate as the oracle estimator. (A code sketch follows this entry.)
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.00211&r=all |
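Code sketch: | The post-selection idea in its simplest form: use the lasso to select which fixed effects are nonzero, then refit by OLS on the regressor plus the selected dummies. One-way effects are used for brevity (the paper's three-dimensional setting, penalty choice, and inferential theory are much richer), and all data below are simulated.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n_units, n_obs = 40, 2000
    unit = rng.integers(0, n_units, n_obs)
    alpha = np.where(np.arange(n_units) < 4, 2.0, 0.0)  # only a few nonzero fixed effects
    x = rng.standard_normal(n_obs)
    y = 0.5 * x + alpha[unit] + 0.3 * rng.standard_normal(n_obs)

    D = np.zeros((n_obs, n_units))
    D[np.arange(n_obs), unit] = 1.0                      # fixed-effect dummies
    Z = np.column_stack([x, D])
    sel = Lasso(alpha=0.05).fit(Z, y).coef_ != 0         # step 1: lasso selection
    sel[0] = True                                        # never drop the regressor of interest
    beta = np.linalg.lstsq(Z[:, sel], y, rcond=None)[0]  # step 2: post-lasso OLS refit
    print(beta[0])                                       # slope on x, near 0.5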
By: | Atsushi Inoue (Vanderbilt University); Lutz Kilian (University of Michigan) |
Abstract: | Existing proofs of the asymptotic validity of conventional methods of impulse response inference based on higher-order autoregressions are pointwise only. In this paper, we establish the uniform asymptotic validity of conventional asymptotic and bootstrap inference about individual impulse responses and vectors of impulse responses at fixed horizons. For inference about vectors of impulse responses based on Wald test statistics to be uniformly valid, lag-augmented autoregressions are required, whereas inference about individual impulse responses is uniformly valid under weak conditions even without lag augmentation. We introduce a new rank condition that ensures the uniform validity of inference on impulse responses and show that this condition holds under weak conditions. Simulations show that the highest finite-sample accuracy is achieved when bootstrapping the lag-augmented autoregression using the bias adjustments of Kilian (1999). The resulting confidence intervals remain accurate even at long horizons. We provide a formal asymptotic justification for this result. (A code sketch follows this entry.)
Keywords: | Impulse response, autoregression, lag augmentation, asymptotic normality, bootstrap, uniform inference |
JEL: | C22 C52 |
Date: | 2019–03–25 |
URL: | http://d.repec.org/n?u=RePEc:van:wpaper:vuecon-sub-19-00001&r=all |
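Code sketch: | Lag augmentation in the simplest univariate case: fit an AR(p+1) by OLS but build the impulse responses from the first p coefficients only. The bootstrap and bias adjustments the paper studies are omitted; the data are simulated.
    import numpy as np

    def lagmat(y, p):
        """Regress y_t on (y_{t-1}, ..., y_{t-p}): returns (X, y) aligned."""
        T = len(y)
        X = np.column_stack([y[p - k: T - k] for k in range(1, p + 1)])
        return X, y[p:]

    def irf_lag_augmented(y, p, H):
        """Estimate an AR(p+1), but use only the first p lags for the IRF."""
        X, yy = lagmat(y, p + 1)
        X = np.column_stack([np.ones(len(yy)), X])
        phi = np.linalg.lstsq(X, yy, rcond=None)[0][1: p + 1]  # drop constant and last lag
        irf = [1.0]
        for h in range(1, H + 1):
            irf.append(sum(phi[k] * irf[h - 1 - k] for k in range(min(p, h))))
        return np.array(irf)

    rng = np.random.default_rng(3)
    y = np.zeros(600)
    for t in range(1, 600):
        y[t] = 0.9 * y[t - 1] + rng.standard_normal()  # AR(1), rho = 0.9
    print(irf_lag_augmented(y, p=1, H=5))              # roughly 0.9 ** h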
By: | Goga, Camelia; Ruiz-Gazen, Anne |
Abstract: | The odds-ratio measure is widely used in health and social surveys, where the aim is to compare the odds of a certain event between a population at risk and a population not at risk. It can be defined using logistic regression through an estimating equation, which allows a generalization to a continuous risk variable. Data from surveys need to be analyzed in a proper way, by taking the survey weights into account. Because the odds ratio is a complex parameter, the analyst has to circumvent some difficulties when estimating confidence intervals. The present paper suggests a nonparametric approach that can take advantage of auxiliary information in order to improve the precision of the odds-ratio estimator. The approach consists in B-spline modelling, which can handle the nonlinear structure of the parameter in a flexible way and is easy to implement. The variance estimation issue is solved through a linearization approach, and confidence intervals are derived. Two small applications are discussed. (A code sketch follows this entry.)
Keywords: | B-spline functions; estimating equation; influence function; linearization; logistic regression; survey data
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:122890&r=all |
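Code sketch: | A survey-weighted logistic regression solved from its weighted estimating equation sum_i w_i (y_i - p_i(beta)) x_i = 0 by Newton-Raphson; exp(beta_1) is then the odds-ratio estimate for a unit change in the risk variable. The B-spline use of auxiliary information and the linearized variance developed in the paper are not shown; the data and weights are simulated.
    import numpy as np

    def weighted_logit(X, y, w, iters=25):
        """Newton-Raphson for the survey-weighted logistic score equations."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            grad = X.T @ (w * (y - p))
            hess = (X * (w * p * (1 - p))[:, None]).T @ X
            beta += np.linalg.solve(hess, grad)
        return beta

    rng = np.random.default_rng(4)
    n = 2000
    risk = rng.standard_normal(n)                     # continuous risk variable
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.7 * risk))))
    w = rng.uniform(1.0, 3.0, n)                      # survey weights
    beta = weighted_logit(np.column_stack([np.ones(n), risk]), y, w)
    print(np.exp(beta[1]))                            # odds ratio, near exp(0.7)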
By: | James G. MacKinnon (Queen's University); Morten Ø. Nielsen (Queen's University and CREATES); Matthew D. Webb (Carleton University) |
Abstract: | We study two cluster-robust variance estimators (CRVEs) for regression models with clustering in two dimensions and give conditions under which t-statistics based on each of them yield asymptotically valid inferences. In particular, one of the CRVEs requires stronger assumptions about the nature of the intra-cluster correlations. We then propose several wild bootstrap procedures and state conditions under which they are asymptotically valid for each type of t-statistic. Extensive simulations suggest that using certain bootstrap procedures with one of the t-statistics generally performs very well. An empirical example confirms that bootstrap inferences can differ substantially from conventional ones. (A code sketch follows this entry.)
Keywords: | CRVE, grouped data, clustered data, cluster-robust variance estimator, two-way clustering, wild cluster bootstrap, robust inference |
JEL: | C15 C21 C23 |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1415&r=all |
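Code sketch: | One of the CRVEs studied in this literature can be computed as V_g + V_h - V_gh: one-way cluster sandwiches for each dimension minus the sandwich for their intersection. The sketch below is a bare-bones version with simulated data; the paper's wild bootstrap procedures are not shown.
    import numpy as np

    def cluster_meat(X, e, ids):
        """Sum of outer products of within-cluster score sums."""
        S = np.zeros((X.shape[1], X.shape[1]))
        for c in np.unique(ids):
            s = (X[ids == c] * e[ids == c, None]).sum(axis=0)
            S += np.outer(s, s)
        return S

    def twoway_crve(X, y, g, h):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ beta
        bread = np.linalg.inv(X.T @ X)
        gh = np.unique(np.column_stack([g, h]), axis=0, return_inverse=True)[1]
        meat = cluster_meat(X, e, g) + cluster_meat(X, e, h) - cluster_meat(X, e, gh)
        return beta, bread @ meat @ bread

    rng = np.random.default_rng(5)
    n = 1000
    g, h = rng.integers(0, 20, n), rng.integers(0, 20, n)
    x = rng.standard_normal(n) + rng.standard_normal(20)[g]   # regressor varies with g
    y = 1 + 2 * x + rng.standard_normal(20)[g] + rng.standard_normal(20)[h] + rng.standard_normal(n)
    X = np.column_stack([np.ones(n), x])
    beta, V = twoway_crve(X, y, g, h)
    print(beta[1], np.sqrt(V[1, 1]))                          # slope and its two-way SE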
By: | Rainer Winkelmann; Lin Xu |
Abstract: | Regression models for proportions are frequently encountered in applied work. The conditional expectation is bounded between 0 and 1 and, therefore, must be non-linear, which requires non-standard panel data extensions. The quasi-maximum likelihood estimator of Papke and Wooldridge (1996) suffers from the incidental parameters problem when fixed effects are included. In this paper, we reconsider the binomial panel logit model with fixed effects (Machado, 2004). We show that the conditional maximum likelihood estimator is very easy to implement using standard software. We investigate the properties of the estimator under misspecification and derive a new test for overdispersion in the binomial fixed effects logit model. The models and the test are applied in a study of contracted work-time percentage, measured as a proportion of full-time work, for women in Switzerland.
Keywords: | Proportions data, unobserved heterogeneity, conditional maximum likelihood, overdispersion |
JEL: | C23 J21 |
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:zur:econwp:321&r=all |
By: | Furukawa, Chishio |
Abstract: | This paper questions the conventional wisdom that publication bias must result from the biased preferences of researchers. When readers compare only the number of positive and negative results of papers to make their decisions, even unbiased researchers will omit noisy null results and inflate some marginally insignificant estimates. Nonetheless, the equilibrium with such publication bias is socially optimal. The model predicts that published non-positive results are either precise null results or noisy but extreme negative results. This paper shows that this prediction holds in some data, and proposes a new stem-based bias correction method that is robust to this and other publication selection processes.
Keywords: | publication bias, information aggregation, meta-analysis, bias correction method
JEL: | C13 C18 D71 D82 D83 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:esprep:194798&r=all |
By: | Pauwels, Laurent; Radchenko, Peter; Vasnev, Andrey |
Abstract: | The majority of financial data exhibit asymmetry and heavy tails, which makes forecasting the entire density critically important. Recently, a forecast combination methodology has been developed to combine predictive densities. We show that combining individual predictive densities that are skewed and/or heavy-tailed results in significantly reduced skewness and kurtosis. We propose a solution to overcome this problem by deriving optimal log score weights under Higher-order Moment Constraints (HMC). The statistical properties of these weights are investigated theoretically and through a simulation study. Consistency and asymptotic distribution results for the optimal log score weights, with and without higher-moment constraints, are derived. An empirical application that uses the S&P 500 daily index returns illustrates that the proposed HMC weight density combinations perform very well relative to other combination methods. (A code sketch follows this entry.)
Keywords: | Forecast combination; Predictive densities; Optimal weights; Skewness; Kurtosis |
Date: | 2019–03–19 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/20175&r=all |
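Code sketch: | The spirit of the HMC weights: choose combination weights to maximize the average log score of the combined density subject to a constraint on a higher moment of the combination (here, the skewness of a two-component normal mixture). All densities, data, and the constraint level are hypothetical.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    y = rng.standard_normal(300)                          # realized outcomes
    mu, sd = np.array([-0.5, 0.8]), np.array([1.0, 1.5])  # two predictive densities

    def neg_log_score(v):
        w = np.array([1 - v[0], v[0]])
        f = w[0] * norm.pdf(y, mu[0], sd[0]) + w[1] * norm.pdf(y, mu[1], sd[1])
        return -np.mean(np.log(f))

    def mix_skew(v):
        """Skewness of the normal mixture with weight v[0] on the second component."""
        w = np.array([1 - v[0], v[0]])
        m = w @ mu
        var = w @ (sd ** 2 + (mu - m) ** 2)
        third = w @ ((mu - m) ** 3 + 3 * (mu - m) * sd ** 2)
        return third / var ** 1.5

    cons = {"type": "ineq", "fun": lambda v: 0.1 - abs(mix_skew(v))}  # |skewness| <= 0.1
    res = minimize(neg_log_score, x0=[0.5], bounds=[(0, 1)], constraints=cons)
    print(res.x)                                          # constrained log-score weight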
By: | Daniel Borup (Aarhus University and CREATES); Bent Jesper Christensen (Aarhus University and CREATES and the Dale T. Mortensen Center); Yunus Emre Ergemen (Aarhus University and CREATES) |
Abstract: | This paper proposes tests of the null hypothesis that model-based forecasts are uninformative in panels, allowing for individual and interactive fixed effects that control for cross-sectional dependence, endogenous predictors, and both short-range and long-range dependence. We consider a Diebold-Mariano style test based on a comparison of the model-based forecast and a nested no-predictability benchmark, an encompassing style test of the same null, and a test of pooled uninformativeness in the entire panel. A simulation study shows that the encompassing style test is reasonably sized in finite samples, whereas the Diebold-Mariano style test is oversized. Both tests have non-trivial local power. The methods are applied to the predictive relation between economic policy uncertainty and future stock market volatility in a multi-country analysis. (A code sketch follows this entry.)
Keywords: | Panel data, predictability, long-range dependence, Diebold-Mariano test, encompassing test |
JEL: | C12 C23 C33 C52 C53 |
Date: | 2019–03–25 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2019-04&r=all |
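Code sketch: | A textbook Diebold-Mariano comparison of squared forecast errors against a benchmark, the single-series ancestor of the panel tests proposed above; long-memory-robust standard errors and fixed effects are omitted, and the data are simulated.
    import numpy as np
    from scipy.stats import norm

    def diebold_mariano(e_model, e_bench):
        """DM statistic for equal squared-error loss (iid-style SE for brevity)."""
        d = e_bench ** 2 - e_model ** 2          # loss differential
        stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
        return stat, 1 - norm.cdf(stat)          # one-sided p-value

    rng = np.random.default_rng(7)
    u = rng.standard_normal(200)
    e_model = u                                  # model forecast errors
    e_bench = u + 0.3 * rng.standard_normal(200) # noisier benchmark errors
    print(diebold_mariano(e_model, e_bench))     # positive statistic favors the model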
By: | Michael McAleer (Department of Quantitative Finance, National Tsing Hua University, Taiwan; Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands; Department of Quantitative Economics, Complutense University of Madrid, Spain; and Institute of Advanced Sciences, Yokohama National University, Japan)
Abstract: | In order to hedge efficiently, persistently high negative covariances or, equivalently, correlations between risky assets and the hedging instruments are needed to mitigate financial risk and subsequent losses. If there is more than one hedging instrument, multivariate covariances and correlations will have to be calculated. As optimal hedge ratios are unlikely to remain constant using high-frequency data, it is essential to specify dynamic time-varying models of covariances and correlations. These values can be determined either analytically or numerically on the basis of highly advanced computer simulations. Analytical developments are occasionally promulgated for multivariate conditional volatility models. The primary purpose of the paper is to analyse purported analytical developments for the only multivariate dynamic conditional correlation model to have been developed to date, namely Engle's (2002) widely used Dynamic Conditional Correlation (DCC) model. Dynamic models are not straightforward (or even possible) to analyse in terms of their algebraic existence, underlying stochastic processes, specification, mathematical regularity conditions, and asymptotic properties of consistency and asymptotic normality, or the lack thereof. The paper presents a critical analysis, discussion, and evaluation of, and caveats relating to, the DCC model, with an emphasis on the numerous dos and don'ts in implementing the DCC and related models in practice. (A code sketch follows this entry.)
Keywords: | Hedging, Covariances, Correlations, Existence, Mathematical regularity, Invertibility, Likelihood function, Statistical asymptotic properties, Caveats, Practical implementation. |
JEL: | C22 C32 C51 C52 C58 C62 G32 |
URL: | http://d.repec.org/n?u=RePEc:ucm:doicae:1917&r=all |
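Code sketch: | The DCC correlation recursion of Engle (2002) discussed above: Q_t = (1 - a - b) S + a e_{t-1} e_{t-1}' + b Q_{t-1}, with R_t obtained by rescaling Q_t to a correlation matrix. This filters given parameters only; the estimation and regularity issues the paper stresses are untouched, and the inputs are simulated.
    import numpy as np

    def dcc_filter(z, a, b):
        """z: T x N standardized residuals; returns the path of R_t."""
        T, N = z.shape
        S = np.corrcoef(z, rowvar=False)          # unconditional correlation target
        Q = S.copy()
        R = np.empty((T, N, N))
        for t in range(T):
            d = 1.0 / np.sqrt(np.diag(Q))
            R[t] = Q * np.outer(d, d)             # rescale Q_t to a correlation matrix
            e = z[t][:, None]
            Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
        return R

    z = np.random.default_rng(8).standard_normal((500, 3))
    R = dcc_filter(z, a=0.05, b=0.90)             # typical parameter magnitudes
    print(R[-1].round(2))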
By: | Christian Gourieroux (University of Toronto and Toulouse School of Economics); Yang Lu (Centre d'Economie de l'Université de Paris Nord (CEPN)) |
Abstract: | We introduce new semi-parametric models for the analysis of rates and proportions, such as proportions of default, (expected) loss-given-default and credit conversion factor encountered in credit risk analysis. These models are especially convenient for the stress test exercises demanded in the current prudential regulation. We show that the Least Impulse Response Estimator, which minimizes the estimated effect of a stress, leads to consistent parameter estimates. The new models with their associated estimation method are compared with the other approaches currently proposed in the literature such as the beta and logistic regressions. The approach is illustrated by both simulation experiments and the case study of a retail P2P lending portfolio. |
Keywords: | Basel Regulation, Stress Test, (Expected) Loss-Given-Default,Impulse Response, Credit Scoring, Pseudo-Maximum Likelihood, LIR Estimation, Beta Regression, Moebius Transformation. |
JEL: | C51 G21 |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:upn:wpaper:2019-05&r=all |
By: | Davide Viviano; Jelena Bradic |
Abstract: | Understanding the effect of a particular treatment or policy is important in many areas, ranging from political economy and marketing to health care and personalized-treatment studies. In this paper, we develop a non-parametric, model-free test for detecting the effects of treatment over time that extends widely used Synthetic Control tests. The test is built on counterfactual predictions arising from many learning algorithms. In the Neyman-Rubin potential outcome framework with possible carry-over effects, we show that the proposed test is asymptotically consistent for stationary, beta-mixing processes. We do not assume that the class of learners necessarily captures the correct model. We also discuss estimates of the average treatment effect, and we provide regret bounds on the predictive performance. To the best of our knowledge, this is the first set of results that allow, for example, any random forest to be used for provably valid statistical inference in the Synthetic Control setting. In experiments, we show that our Synthetic Learner is substantially more powerful than classical methods based on Synthetic Control or Difference-in-Differences, especially in the presence of non-linear outcome models. (A code sketch follows this entry.)
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.01490&r=all |
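Code sketch: | The flavor of the Synthetic Learner: predict the treated unit from donor units with a learner fit on pre-treatment data, then judge whether post-treatment prediction errors are atypically large. The in-sample error benchmark and the statistic below are crude stand-ins for the paper's test; all data are simulated.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(9)
    T0, T1, N = 80, 20, 10                      # pre periods, post periods, donors
    donors = rng.standard_normal((T0 + T1, N))
    weights = np.abs(rng.standard_normal(N)) / N
    treated = donors @ weights + 0.1 * rng.standard_normal(T0 + T1)
    treated[T0:] += 0.8                         # treatment effect kicks in after T0

    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(donors[:T0], treated[:T0])           # learn the counterfactual pre-treatment
    err_pre = treated[:T0] - rf.predict(donors[:T0])   # in-sample, for brevity
    err_post = treated[T0:] - rf.predict(donors[T0:])
    stat = np.sqrt(T1) * err_post.mean() / err_pre.std(ddof=1)
    print(stat)                                 # large values suggest a treatment effect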
By: | Amsler, Christine; Prokhorov, Artem; Schmidt, Peter |
Abstract: | In this paper we propose a new family of copulas for which the copula arguments are uncorrelated but dependent. Specifically, if w1 and w2 are the uniform random variables in the copula, then they are uncorrelated, but w1 is correlated with |w2 - ½|. We show how this family of copulas can be applied to the error structure in an econometric production frontier model. We also generalize the family of copulas to three or more dimensions, and we give an empirical application. (A code sketch follows this entry.)
Keywords: | copulas |
Date: | 2019–03–25 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/20203&r=all |
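Code sketch: | A toy construction with exactly the property described above: w1 and w2 are both Uniform(0,1) and uncorrelated, yet w1 and |w2 - 1/2| are (here perfectly) dependent. This merely illustrates the property; it is not the family of copulas proposed in the paper.
    import numpy as np

    rng = np.random.default_rng(10)
    n = 100_000
    w1 = rng.uniform(size=n)
    s = rng.choice([-1.0, 1.0], size=n)         # random sign, independent of w1
    w2 = 0.5 + s * w1 / 2                       # marginally Uniform(0,1)
    print(np.corrcoef(w1, w2)[0, 1])                 # approximately 0
    print(np.corrcoef(w1, np.abs(w2 - 0.5))[0, 1])   # exactly 1 by construction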
By: | Christoph Engel (Max Planck Institute for Research on Collective Goods) |
Abstract: | Participants in experiments frequently differ in their reaction to treatment, and the heterogeneity is often patterned: discernible types of participants react differently. In principle, a finite mixture model is well suited to simultaneously estimate the probability that a given participant belongs to a certain type and the reaction of this type to treatment. Yet finite mixture models often need more data than the experiment provides, require ex ante knowledge about the number of types, and are hard to estimate for panel data, which is what experiments typically generate. For repeated experiments, this paper offers a simple two-step alternative that is much less data hungry, that makes it possible to find the number of types in the data, and that allows for the estimation of panel data models. It combines machine learning methods with classic frequentist statistics. (A code sketch follows this entry.)
Keywords: | heterogeneous treatment effect, finite mixture model, panel data, two-step approach, machine learning, CART |
JEL: | C14 C23 C91 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:mpg:wpaper:2019_01&r=all |
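Code sketch: | The two-step idea in miniature: (1) cluster participants into types from summary features of their choices (k-means here purely for brevity; the paper combines machine learning methods such as CART with frequentist statistics), then (2) estimate the treatment effect within each estimated type. All data are simulated.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(11)
    n, reps = 120, 10                              # participants, repetitions
    latent = rng.integers(0, 2, n)                 # two latent types
    treated = rng.integers(0, 2, n)                # between-subject treatment
    base = np.where(latent == 0, 0.0, 3.0)         # types differ in level ...
    effect = np.where(latent == 0, 0.0, 1.5)       # ... and in treatment response
    y = base[:, None] + (effect * treated)[:, None] + rng.standard_normal((n, reps))

    feats = np.column_stack([y.mean(axis=1), y.std(axis=1)])   # step 1: features
    types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    for k in range(2):                             # step 2: effect within type
        m = types == k
        print(k, y[m & (treated == 1)].mean() - y[m & (treated == 0)].mean())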
By: | Ben Jann |
Abstract: | Influence functions are useful, for example, because they provide an easy and flexible way to estimate standard errors. This paper contains a brief overview of influence functions in the context of linear regression and illustrates how their empirical counterparts can be computed in Stata, both for unweighted data and for weighted data. Influence functions for regression-adjustment estimators of average treatment effects are also covered. (A code sketch follows this entry.)
Keywords: | influence function, sampling variance, sampling weights, standard error, linear regression, mean difference, regression adjustment, average treatment effect, causal inference |
JEL: | C12 C13 |
Date: | 2019–03–27 |
URL: | http://d.repec.org/n?u=RePEc:bss:wpaper:32&r=all |
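Code sketch: | The empirical influence functions for OLS coefficients, IF_i = (X'X/n)^(-1) x_i e_i; the variance of the IFs divided by n reproduces robust (HC0-type) standard errors. This mirrors in Python, on simulated data, the kind of computation the paper carries out in Stata.
    import numpy as np

    rng = np.random.default_rng(12)
    n = 500
    x = rng.standard_normal(n)
    y = 1 + 2 * x + rng.standard_normal(n)
    X = np.column_stack([np.ones(n), x])

    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    IF = (X * e[:, None]) @ np.linalg.inv(X.T @ X / n)   # n x k influence functions
    se = IF.std(axis=0, ddof=1) / np.sqrt(n)             # robust standard errors
    print(beta, se)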
By: | Antoine Lejay (TOSCA - Inria Sophia Antipolis - Méditerranée; IECL - Institut Élie Cartan de Lorraine, Université de Lorraine, CNRS); Paolo Pigato (WIAS - Weierstrass Institute for Applied Analysis and Stochastics, Forschungsverbund Berlin e.V.)
Abstract: | In financial markets, low prices are generally associated with high volatilities and vice versa, a well-known stylized fact usually referred to as the leverage effect. We propose a local volatility model, given by a stochastic differential equation with piecewise constant coefficients, which accounts for leverage and mean-reversion effects in the dynamics of the prices. This model exhibits a regime switch in the dynamics at a certain threshold. It can be seen as a continuous-time version of the Self-Exciting Threshold Autoregressive (SETAR) model. We propose an estimation procedure for the volatility and drift coefficients as well as for the threshold level. Parameters estimated on the daily prices of 348 NYSE and S&P 500 stocks, over different time windows, show consistent empirical evidence for leverage effects. Mean-reversion effects are also detected, most markedly in crisis periods. (A code sketch follows this entry.)
Keywords: | realized volatility, oscillating Brownian motion, regime switching, threshold diffusion, parametric estimation, Self-Exciting Threshold Autoregressive model, mean reversion, stock price model, leverage effect
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-01669082&r=all |
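Code sketch: | A crude version of the regime idea: split log-returns by whether the lagged price is below or above a threshold r and estimate the volatility separately in each regime. The paper also estimates regime-specific drifts and the threshold itself; the price path below is simulated with a leverage-type effect.
    import numpy as np

    def regime_vols(prices, r):
        """Volatility of log-returns in the low-price and high-price regimes."""
        logret = np.diff(np.log(prices))
        below = prices[:-1] < r
        return logret[below].std(ddof=1), logret[~below].std(ddof=1)

    rng = np.random.default_rng(13)
    p = [100.0]
    for _ in range(5000):
        sigma = 0.03 if p[-1] < 100 else 0.01    # higher volatility at low prices
        p.append(p[-1] * np.exp(sigma * rng.standard_normal()))
    print(regime_vols(np.array(p), r=100.0))     # roughly (0.03, 0.01)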
By: | Vladimir Markov; Olga Vilenskaia; Vlad Rashkovich |
Abstract: | We present a set of models relevant for predicting various aspects of intra-day trading volume for equities and showcase them as an ensemble that projects volume in unison. We introduce econometric methods for predicting total and remaining daily volume, the intra-day volume profile (u-curve), close auction volume, and special-day seasonalities, and emphasize the need for a unified approach in which all sub-models work consistently with one another. Historical and current inputs are combined using Bayesian methods, which have the advantage of providing adaptive and parameterless estimations of volume for a broad range of equities while automatically taking into account the uncertainty of the model's input components. The shortcomings of traditional statistical error metrics for calibrating volume prediction are also discussed, and we introduce the Asymmetrical Logarithmic Error (ALE) to overweight overestimation risk. (A code sketch follows this entry.)
Date: | 2019–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1904.01412&r=all |
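Code sketch: | The abstract does not give the exact form of the Asymmetrical Logarithmic Error, so the function below is a hypothetical illustration of such a metric: a log-scale error in which overestimates of volume receive extra weight.
    import numpy as np

    def ale(pred, actual, over_weight=2.0):
        """Hypothetical asymmetric log error: overestimation penalized more."""
        logerr = np.log(pred) - np.log(actual)
        w = np.where(logerr > 0, over_weight, 1.0)
        return np.mean(w * np.abs(logerr))

    print(ale(np.array([120.0, 90.0]), np.array([100.0, 100.0])))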
By: | Lee, L.; Linton, O.; Whang, Y-J.
Abstract: | We develop the limit theory of the quantilogram and cross-quantilogram under long memory. We establish the sub-root-n central limit theorems for quantilograms that depend on nuisance parameters. We propose a moving block bootstrap (MBB) procedure for inference and we establish its consistency, thereby enabling a consistent confidence interval construction for the quantilograms. The newly developed reduction principles for the quantilograms serve as the main technical devices used to derive the asymptotics and establish the validity of MBB. We report some simulation evidence that our methods work satisfactorily. We apply our method to quantile predictive relations between financial returns and long-memory predictors. (A code sketch follows this entry.)
Keywords: | Long Memory, Moving Block Bootstrap, Nonlinear Dependence, Quantilogram and Cross-Quantilogram, Reduction Principle
JEL: | C22 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1936&r=all |
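Code sketch: | The sample quantilogram at lag k is the autocorrelation of the quantile-hit process psi_t = alpha - 1{y_t < q_alpha}; a moving block bootstrap resamples blocks of the series to approximate its sampling distribution. This sketch ignores the long-memory reduction principles that are the paper's contribution; the data are simulated.
    import numpy as np

    def quantilogram(y, alpha, k):
        q = np.quantile(y, alpha)
        psi = alpha - (y < q)                    # quantile-hit process
        return np.corrcoef(psi[:-k], psi[k:])[0, 1]

    def mbb(y, alpha, k, block, B=500, seed=0):
        """Moving block bootstrap distribution of the lag-k quantilogram."""
        rng = np.random.default_rng(seed)
        n, stats = len(y), np.empty(B)
        for b in range(B):
            starts = rng.integers(0, n - block + 1, n // block + 1)
            idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
            stats[b] = quantilogram(y[idx], alpha, k)
        return stats

    y = np.random.default_rng(14).standard_normal(1000)
    print(quantilogram(y, alpha=0.1, k=1))
    print(np.quantile(mbb(y, 0.1, 1, block=20), [0.025, 0.975]))  # MBB interval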
By: | LeSage, James; Fischer, Manfred M. |
Abstract: | The focus is on cross-sectional dependence in panel trade flow models. We propose alternative specifications for modeling time invariant factors such as socio-cultural indicator variables, e.g., common language and currency. These are typically treated as a source of heterogeneity eliminated using fixed effects transformations, but we find evidence of cross-sectional dependence after eliminating country-specific and time-specific effects. These findings suggest use of alternative simultaneous dependence model specifications that accommodate cross-sectional dependence, which we set forth along with Bayesian estimation methods. Ignoring cross-sectional dependence implies biased estimates from panel trade flow models that rely on fixed effects. |
Keywords: | Bayesian, MCMC estimation, socio-cultural distance, origin-destination flows, treatment of time invariant variables, panel models |
Date: | 2019–03–25 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wus046:6886&r=all |