on Econometrics
By: | Fela ÖZBEY (Çukurova University) |
Abstract: | The multiple linear regression model is a widely used statistical technique in the social and life sciences. The Ordinary Least Squares (OLS) estimator is the Best Linear Unbiased Estimator (BLUE) for the unknown population parameters of this model. Unfortunately, two or more of the regressors may sometimes be moderately or highly correlated, causing a multicollinearity problem. Various biased estimators have been proposed to remedy the ill-conditioning of the X'X matrix and shrink the variance under multicollinearity. The most popular of these is the Ridge estimator, but it may worsen the fit while solving the ill-conditioning problem. The Two-Parameter Ridge (2PR) and Liu-Type (LT) estimators have been proposed to overcome the fitting degeneration of the Ridge estimator by using a tuning parameter. In this study, holding fixed the parameter that remedies the ill-conditioning of the X'X matrix, the success of the tuning parameters of these estimators is investigated. Minimizers of the Prediction Sum of Squares (PRESS) and Generalized Cross-Validation (GCV) statistics are used as estimates of the tuning parameters. The optimal parameter estimates are compared via their Scalar Mean Squared Errors (SMSE). It is observed that the SMSEs of estimates obtained by the LT and 2PR estimators decrease as the estimate of the parameter remedying the ill-conditioning of the X'X matrix increases, and in all cases estimates obtained by the 2PR estimator are much more efficient than those obtained by the LT and OLS estimators. A toy illustration of GCV-based ridge tuning follows this entry. |
Keywords: | Biased Estimators, Estimation, Monte Carlo Simulations, Multicollinearity. |
JEL: | C13 C52 C63 |
Date: | 2018–04 |
URL: | http://d.repec.org/n?u=RePEc:sek:iacpro:7508770&r=ecm |
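To make the tuning step concrete, here is a minimal sketch, not taken from the paper, of choosing a ridge parameter by minimizing the GCV statistic; the simulated collinear data, the parameter grid, and the use of the ordinary Ridge estimator in place of the 2PR and LT estimators are all illustrative assumptions.

```python
import numpy as np

# Simulated design with highly collinear regressors (illustrative assumption).
rng = np.random.default_rng(0)
n, p = 50, 4
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))
y = X @ np.ones(p) + rng.normal(size=n)

def gcv(k):
    # GCV(k) = n * RSS(k) / (n - trace(H(k)))^2 with ridge hat matrix H(k).
    H = X @ np.linalg.solve(X.T @ X + k * np.eye(p), X.T)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

grid = np.logspace(-4, 2, 200)
k_hat = grid[np.argmin([gcv(k) for k in grid])]
print(f"GCV-optimal ridge parameter: {k_hat:.4f}")
```

PRESS-based tuning works the same way, with the leave-one-out residual sum of squares replacing the GCV criterion.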
By: | Laura Liu |
Abstract: | This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients and cross-sectional heteroskedasticity. The panel considered in this paper features a large cross-sectional dimension N but a short time dimension T. Due to the short T, traditional methods have difficulty in disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically, allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by pooling the information from the whole cross-section. Theoretically, I prove that both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast. Methodologically, I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. Monte Carlo simulations and an application to young firm dynamics demonstrate improvements in density forecasts relative to alternative approaches. A stylized empirical-Bayes illustration of this pooling idea follows this entry. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.04178&r=ecm |
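As a much-simplified illustration of why pooling the cross-section helps when T is short, the sketch below shrinks noisy unit-level estimates toward a distribution estimated from all units; a normal prior stands in for the paper's nonparametric one, and all data are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma = 1000, 4, 1.0
lam = rng.normal(0.5, 0.8, size=N)                 # heterogeneous parameters
y = lam[:, None] + sigma * rng.normal(size=(N, T)) # short-T panel

ybar = y.mean(axis=1)                              # noisy unit-level estimates
mu_hat = ybar.mean()                               # pooled hyperparameters
tau2_hat = max(ybar.var() - sigma**2 / T, 1e-8)
w = tau2_hat / (tau2_hat + sigma**2 / T)           # shrinkage weight
lam_eb = w * ybar + (1 - w) * mu_hat               # empirical-Bayes estimates

print("MSE, unit-by-unit:", np.mean((ybar - lam) ** 2).round(3))
print("MSE, pooled (EB) :", np.mean((lam_eb - lam) ** 2).round(3))
```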
By: | Audrone Virbickaite (Universitat de les Illes Balears); Hedibert F. Lopes (Insper Institute of Education and Research); Maria Concepción Ausín (Universidad Carlos III de Madrid); Pedro Galeano (Universidad Carlos III de Madrid) |
Abstract: | This paper designs a Sequential Monte Carlo (SMC) algorithm for the estimation of a Bayesian semi-parametric stochastic volatility model for financial data. In particular, it makes use of one of the most recent particle filters, called Particle Learning (PL). SMC methods are especially well suited for state-space models and can be seen as a cost-efficient alternative to MCMC, since they allow for online inference: the posterior distributions are updated as new data are observed, which is prohibitively costly using MCMC. PL also allows for consistent online model comparison using sequential predictive log Bayes factors. Simulated data are used to compare the posterior outputs of the PL and MCMC schemes, which are shown to be almost identical. Finally, a short real-data application is included. A toy particle-filter sketch follows this entry. |
Keywords: | Bayes factor; Dirichlet Process Mixture; MCMC; Sequential Monte Carlo. |
JEL: | C58 C11 C14 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:88&r=ecm |
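For intuition about the sequential updating that makes SMC attractive here, the sketch below runs a plain bootstrap particle filter on a basic parametric SV model; the Particle Learning scheme and the semi-parametric error distribution of the paper are not implemented, and the known parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, mu, phi, sig = 500, -1.0, 0.95, 0.2

# Simulate log-volatility x and returns y from a standard SV model.
x = np.empty(T)
x[0] = mu
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sig * rng.normal()
y = np.exp(x / 2) * rng.normal(size=T)

M = 2000
part = rng.normal(mu, sig / np.sqrt(1 - phi**2), size=M)  # stationary start
filt = np.empty(T)
for t in range(T):
    part = mu + phi * (part - mu) + sig * rng.normal(size=M)   # propagate
    w = np.exp(-0.5 * y[t] ** 2 * np.exp(-part) - part / 2)    # N(0, e^x) lik.
    w /= w.sum()
    filt[t] = part @ w                                          # filtered mean
    part = rng.choice(part, size=M, p=w)                        # resample

print("corr(filtered, true log-volatility):", np.corrcoef(filt, x)[0, 1].round(3))
```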
By: | Ryo Kato (Research Institute for Economics & Business Administration (RIEB), Kobe University, Japan); Takahiro Hoshino (Department of Economics, Keio University, Japan and RIKEN Center for Advanced Intelligence Project, Japan) |
Abstract: | We develop a new semiparametric Bayes instrumental variables estimation method. The regression function of the reduced-form equation and the disturbances are modelled nonparametrically to achieve better predictive power for the endogenous variables, whereas we use a parametric formulation in the structural equation, which is of interest for inference. Our simulation studies show that in small samples the proposed method obtains more efficient estimates and very precise credible intervals compared with existing IV methods. The existing methods fail to reject the null hypothesis with higher probability due to the larger variance of their estimators. Moreover, the mean squared error of the proposed method may be less than 1/30 of that of the existing procedures, even in the presence of weak instruments. We apply the proposed method to a Mendelian randomization dataset where a large number of instruments are available and a semiparametric specification is appropriate. This is a weak-instrument case; hence, the non-Bayesian IV approach yields inefficient estimates. We obtain statistically significant results that cannot be obtained by the existing methods, including standard Bayesian IV. A sketch of the standard 2SLS baseline follows this entry. |
Keywords: | Instrumental variable, Mendelian Randomization, Semiparametric Bayes model, Probit stick-breaking process mixture |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:kob:dpaper:dp2018-14&r=ecm |
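For reference, the sketch below shows the standard just-identified 2SLS estimator that Bayesian IV methods are compared against, in a simulated weak-instrument design (all numbers are illustrative assumptions); the small first-stage coefficient is what drives the inefficiency the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, pi = 200, 0.1                                  # pi small: weak instrument
z = rng.normal(size=n)                            # instrument
u = rng.normal(size=n)                            # structural error
x = pi * z + 0.8 * u + 0.6 * rng.normal(size=n)   # endogenous regressor
y = 1.0 * x + u                                   # structural eq., beta = 1

beta_ols = (x @ y) / (x @ x)                      # biased by endogeneity
beta_iv = (z @ y) / (z @ x)                       # just-identified 2SLS
print(f"OLS: {beta_ols:.3f}   2SLS: {beta_iv:.3f}   (true beta = 1)")
```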
By: | Cassim, Lucius |
Abstract: | The main objective of this paper is to provide an estimation approach for the non-parametric GARCH(2,2) volatility model. Specifically, by combining aspects of the multivariate adaptive regression splines (MARS) model estimation algorithm proposed by Chung (2012) with an algorithm proposed by Bühlmann and McNeil (2002), the paper develops an algorithm for non-parametrically estimating the GARCH(2,2) volatility model. Just like the MARS algorithm, the algorithm developed in this paper takes a logarithmic transformation as a preliminary step in examining a nonparametric volatility model. It differs from the MARS algorithm, however, in assuming that the innovations are i.i.d. The algorithm follows steps similar to those of Bühlmann and McNeil (2002) but starts from a semiparametric rather than a parametric estimate of the GARCH model, while relaxing the dependence assumption on the innovations to avoid exposing the estimation procedure to the risk of inconsistency in the event of misspecification errors. A toy version of such an iterative scheme follows this entry. |
Keywords: | GARCH(2,2), MARS, Algorithm, Parametric, Semiparametric, Nonparametric |
JEL: | C1 C14 C4 |
Date: | 2018–05–18 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:86861&r=ecm |
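The sketch below gives a toy version of this kind of iterative nonparametric volatility estimation, shown for a GARCH(1,1)-type recursion rather than GARCH(2,2) and with a k-nearest-neighbor smoother as an arbitrary illustrative choice; it is not the algorithm developed in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Simulate GARCH(1,1) data (parameters are illustrative assumptions).
rng = np.random.default_rng(4)
T, omega, alpha, beta = 2000, 0.1, 0.1, 0.85
sig2, x = np.empty(T), np.empty(T)
sig2[0] = omega / (1 - alpha - beta)
x[0] = np.sqrt(sig2[0]) * rng.normal()
for t in range(1, T):
    sig2[t] = omega + alpha * x[t - 1] ** 2 + beta * sig2[t - 1]
    x[t] = np.sqrt(sig2[t]) * rng.normal()

# Iterate: smooth squared returns on the lagged return and the lagged
# volatility estimate, updating the volatility estimate each round.
sig2_hat = np.convolve(x ** 2, np.ones(20) / 20, mode="same")  # crude start
for _ in range(5):
    feats = np.column_stack([x[:-1], sig2_hat[:-1]])
    fit = KNeighborsRegressor(n_neighbors=50).fit(feats, x[1:] ** 2)
    sig2_hat[1:] = fit.predict(feats)

print("corr(estimated, true variance):",
      np.corrcoef(sig2_hat[1:], sig2[1:])[0, 1].round(3))
```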
By: | Michael Zimmert |
Abstract: | This paper contributes to the literature on treatment effect estimation with machine-learning-inspired methods by studying the performance of different estimators based on the Lasso. Building on recent work in the field of high-dimensional statistics, we use the semiparametric efficient score estimation structure to compare different estimators. Alternative weighting schemes are considered, and their suitability for the incorporation of machine learning estimators is assessed using theoretical arguments and various Monte Carlo experiments. Additionally, we propose our own estimator based on doubly robust kernel matching, which is argued to be more robust to nuisance parameter misspecification. In the simulation study we verify the theory-based intuition and find good finite-sample properties of alternative weighting scheme estimators like the one we propose. A sketch of a doubly robust Lasso-based estimator follows this entry. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.05067&r=ecm |
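As one concrete member of the efficient-score family studied here, the sketch below computes an augmented inverse probability weighting (AIPW) estimate with Lasso-based nuisance functions; the simulated design is an illustrative assumption, and cross-fitting is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

rng = np.random.default_rng(5)
n, p = 1000, 20
X = rng.normal(size=(n, p))
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment, true e(x)
Y = 2.0 * D + X[:, 0] + rng.normal(size=n)        # outcome, true ATE = 2

# Lasso-type nuisance estimates: propensity score and outcome regressions.
e = LogisticRegressionCV(penalty="l1", solver="saga", max_iter=5000)\
        .fit(X, D).predict_proba(X)[:, 1]
mu1 = LassoCV().fit(X[D == 1], Y[D == 1]).predict(X)
mu0 = LassoCV().fit(X[D == 0], Y[D == 0]).predict(X)

# Doubly robust (AIPW / efficient score) estimate of the ATE.
psi = mu1 - mu0 + D * (Y - mu1) / e - (1 - D) * (Y - mu0) / (1 - e)
print(f"AIPW estimate of the ATE: {psi.mean():.3f} (true 2.0)")
```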
By: | Schweikert, Karsten |
Abstract: | In this paper, we develop new threshold cointegration tests with SETAR and MTAR adjustment allowing for the presence of structural breaks in the equilibrium equation. We propose a simple procedure to simultaneously estimate the previously unknown breakpoint and test the null hypothesis of no cointegration, thereby extending the well-known residual-based cointegration test with regime shift introduced by Gregory and Hansen (1996a) to include forms of nonlinear adjustment. We derive the asymptotic distribution of the test statistics and demonstrate the finite-sample performance of the tests in a series of Monte Carlo experiments. We find a substantial decrease in the power of conventional threshold cointegration tests caused by a shift in the slope coefficient of the equilibrium equation; the proposed tests are superior in these situations. An application to the 'rockets and feathers' hypothesis of price adjustment in the US gasoline market provides empirical support for this methodology. A stylized version of the test idea follows this entry. |
Keywords: | cointegration, threshold autoregression, structural breaks, SETAR, MTAR, asymmetric price transmission |
JEL: | C12 C32 C34 Q41 |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hohdps:072018&r=ecm |
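A stylized version of the test idea, not the paper's implementation: estimate a level-shift cointegrating regression over a grid of candidate breakpoints and, for each, apply a SETAR-type adjustment test to the residuals. Critical values are not computed, and the simulated data and the threshold at zero are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
x = np.cumsum(rng.normal(size=T))            # I(1) regressor
y = 1.0 + 2.0 * x + rng.normal(size=T)       # cointegrated relation
y[T // 2:] += 3.0                            # level break at mid-sample

def setar_F(u):
    # F-test for SETAR adjustment in the residuals, threshold at zero.
    du, lag = np.diff(u), u[:-1]
    Z = np.column_stack([lag * (lag >= 0), lag * (lag < 0)])
    rho, *_ = np.linalg.lstsq(Z, du, rcond=None)
    r = du - Z @ rho
    return ((du @ du - r @ r) / 2) / (r @ r / (len(du) - 2))

stats = []
for tb in range(30, T - 30):                 # grid over candidate breakpoints
    W = np.column_stack([np.ones(T), (np.arange(T) >= tb).astype(float), x])
    coef, *_ = np.linalg.lstsq(W, y, rcond=None)
    stats.append(setar_F(y - W @ coef))

print("sup-F threshold cointegration statistic:", round(max(stats), 2))
```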
By: | Farzad Sabzikar (Dept. of Statistics, Iowa State University); Qiying Wang (University of Sydney); Peter C.B. Phillips (Cowles Foundation, Yale University) |
Abstract: | This paper develops an asymptotic theory for near-integrated random processes and some associated regressions when the errors are tempered linear processes. Tempered processes are stationary time series that have a semi-long memory property in the sense that the autocovariogram of the process resembles that of a long memory model at moderate lags but eventually diminishes exponentially fast owing to a decay factor governed by a tempering parameter. When the tempering parameter is sample size dependent, the resulting class of processes admits a wide range of behavior that includes long memory, semi-long memory, and short memory processes. The paper develops asymptotic theory for such processes and associated regression statistics, thereby extending earlier findings that fall within certain subclasses of processes involving near-integrated time series. The limit results relate to tempered fractional processes that include tempered fractional Brownian motion and tempered fractional diffusions. The theory is extended to provide the limiting distribution for autoregressions with such tempered near-integrated time series, thereby enabling analysis of the limit properties of statistics of particular interest in econometrics, such as unit root tests, under more general conditions than existing theory. Some extensions of the theory to the multivariate case are reported. A simulation sketch of a tempered fractional process follows this entry. |
Keywords: | Asymptotics, Fractional Brownian motion, Long memory, Near Integration, Tempered processes |
JEL: | C22 C23 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2131&r=ecm |
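The simulation sketch below, an illustration rather than anything from the paper, builds a tempered fractionally integrated series from exponentially damped fractional MA weights and shows the semi-long memory pattern: sizable autocorrelation at moderate lags that has vanished at long lags. Parameter values and the truncation point are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
d, lam, T, J = 0.4, 0.02, 2000, 1000

# Tempered fractional MA weights psi_j = exp(-lam*j) * Gamma(j+d)/(Gamma(d) j!),
# computed by the recursion psi_j = exp(-lam) * psi_{j-1} * (j-1+d)/j.
psi = np.empty(J)
psi[0] = 1.0
for j in range(1, J):
    psi[j] = psi[j - 1] * np.exp(-lam) * (j - 1 + d) / j

eps = rng.normal(size=T + J)
xt = np.convolve(eps, psi)[J - 1:J - 1 + T]   # x_t = sum_j psi_j eps_{t-j}

def acf(x, k):
    xc = x - x.mean()
    return (xc[:-k] @ xc[k:]) / (xc @ xc)

print("ACF at lag 10 :", round(acf(xt, 10), 3))   # long-memory-like
print("ACF at lag 500:", round(acf(xt, 500), 3))  # tempering has killed it
```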
By: | Viet Anh Nguyen; Daniel Kuhn; Peyman Mohajerin Esfahani |
Abstract: | We introduce a distributionally robust maximum likelihood estimation model with a Wasserstein ambiguity set to infer the inverse covariance matrix of a $p$-dimensional Gaussian random vector from $n$ independent samples. The proposed model minimizes the worst case (maximum) of Stein's loss across all normal reference distributions within a prescribed Wasserstein distance from the normal distribution characterized by the sample mean and the sample covariance matrix. We prove that this estimation problem is equivalent to a semidefinite program that is tractable in theory but beyond the reach of general purpose solvers for practically relevant problem dimensions $p$. In the absence of any prior structural information, the estimation problem has an analytical solution that is naturally interpreted as a nonlinear shrinkage estimator. Besides being invertible and well-conditioned even for $p>n$, the new shrinkage estimator is rotation-equivariant and preserves the order of the eigenvalues of the sample covariance matrix. These desirable properties are not imposed ad hoc but emerge naturally from the underlying distributionally robust optimization model. Finally, we develop a sequential quadratic approximation algorithm for efficiently solving the general estimation problem subject to conditional independence constraints typically encountered in Gaussian graphical models. An illustration of the broader shrinkage idea follows this entry. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.07194&r=ecm |
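The sketch below is not the paper's estimator; it uses off-the-shelf Ledoit-Wolf linear shrinkage merely to illustrate the class of rotation-equivariant shrinkage estimators the paper refines: unlike the sample covariance, the shrunk estimate stays invertible and well-conditioned when $p>n$.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(8)
p, n = 50, 30                                   # more dimensions than samples
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)                     # sample covariance: singular
print("rank of sample covariance:", np.linalg.matrix_rank(S), "of", p)

Sig = LedoitWolf().fit(X).covariance_           # shrinkage: well-conditioned
print("condition number after shrinkage:", round(np.linalg.cond(Sig), 1))
Prec = np.linalg.inv(Sig)                       # inverse covariance exists
```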
By: | Baumeister, Christiane; Hamilton, James |
Abstract: | Reporting point estimates and error bands for structural vector autoregressions that are only set identified is a very common practice. However, unless the researcher is persuaded on the basis of prior information that some parameter values are more plausible than others, this common practice has no formal justification. When the role and reliability of prior information are defended, Bayesian posterior probabilities can be used to form an inference that incorporates doubts about the identifying assumptions. We illustrate how prior information can be used about both structural coefficients and the impacts of shocks, and propose a new distribution, which we call the asymmetric t distribution, for incorporating prior beliefs about the signs of equilibrium impacts in a nondogmatic way. We apply these methods to a three-variable macroeconomic model and conclude that monetary policy shocks were not the major driver of output, inflation, or interest rates during the Great Moderation. A sketch of the standard sign-restriction machinery this work scrutinizes follows this entry. |
Keywords: | historical decompositions; impulse-response functions; informative priors; model uncertainty; monetary policy; set identification; structural vector autoregressions |
JEL: | C11 C32 E52 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:12911&r=ecm |
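For context, the sketch below shows the common set-identification machinery whose implicit uniform (Haar) prior the paper scrutinizes: draw random rotations of a Cholesky factor and keep those satisfying sign restrictions on impact effects. The bivariate covariance matrix and the restrictions are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])                 # reduced-form error covariance
P = np.linalg.cholesky(Sigma)

accepted = []
while len(accepted) < 1000:
    Q, R = np.linalg.qr(rng.normal(size=(2, 2)))
    Q = Q @ np.diag(np.sign(np.diag(R)))       # Haar-distributed rotation
    impact = P @ Q                             # candidate impact matrix
    if impact[0, 0] > 0 and impact[1, 0] > 0:  # toy sign restriction, shock 1
        accepted.append(impact[0, 0])

print("impact of shock 1 on variable 1, 16%-84% band:",
      np.percentile(accepted, [16, 84]).round(3))
```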
By: | Jörg Schwiebert (Leuphana University Lueneburg, Germany) |
Abstract: | This paper develops a bivariate fractional probit model for fractional response variables, i.e., variables bounded between zero and one. The model can be applied when there are two seemingly unrelated fractional response variables. Since the model relies on a quite strong bivariate normality assumption, specification tests are discussed and the consequences of misspecification are investigated. It is shown that the model performs well when normal marginal distributions can be established (this can be tested), and does not perform worse when the joint distribution is not characterized by bivariate normality. Simulation evidence shows that the bivariate model generates more efficient estimates than two univariate models applied to each fractional response variable separately. An empirical application illustrates the usefulness of the proposed model in empirical practice. A sketch of the univariate fractional probit building block follows this entry. |
Keywords: | Bivariate model, Fractional probit model, Fractional response variable, Seemingly unrelated regression, Univariate model |
JEL: | C35 |
Date: | 2018–04 |
URL: | http://d.repec.org/n?u=RePEc:lue:wpaper:381&r=ecm |
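As background, the sketch below estimates the univariate building block, a Papke-Wooldridge-style fractional probit fitted by Bernoulli quasi-maximum likelihood; the bivariate extension with correlated errors is the paper's contribution and is not attempted here. Data are simulated assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
b_true = np.array([0.2, 0.8])
# Fractional outcome in [0, 1] with conditional mean roughly Phi(x'b).
y = np.clip(norm.cdf(X @ b_true) + 0.1 * rng.normal(size=n), 1e-4, 1 - 1e-4)

def neg_qll(b):
    # Bernoulli quasi-log-likelihood with probit mean function.
    mu = np.clip(norm.cdf(X @ b), 1e-8, 1 - 1e-8)
    return -np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))

b_hat = minimize(neg_qll, np.zeros(2), method="BFGS").x
print("true:", b_true, "  QMLE:", b_hat.round(3))
```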
By: | Jason Poulos |
Abstract: | This paper proposes an alternative to the synthetic control method (SCM) for estimating the effect of a policy intervention on an outcome over time. Recurrent neural networks (RNNs) are used to predict counterfactual time-series of treated unit outcomes using only the outcomes of control units as inputs. Unlike SCM, the proposed method does not rely on pre-intervention covariates, allows for nonconvex combinations of control units, can handle multiple treated units, and can share model parameters across time-steps. RNNs outperform SCM in terms of recovering experimental estimates from a field experiment extended to a time-series observational setting. In placebo tests run on three different benchmark datasets, RNNs are more accurate than SCM in predicting the post-intervention time-series of control units, while yielding a comparable proportion of false positives. The proposed method contributes to a new literature that uses machine learning techniques for data-driven counterfactual prediction. A stylized counterfactual-prediction sketch follows this entry. |
Date: | 2017–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1712.03553&r=ecm |
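The sketch below mimics the counterfactual-prediction setup with a small feed-forward network standing in for the paper's recurrent architecture: fit a map from control-unit outcomes to the treated unit's outcome on pre-intervention data only, then predict the post-intervention counterfactual. All data are simulated assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
T, T0, J = 100, 70, 10                         # T0 = intervention date

controls = np.empty((T, J))                    # stationary AR(1) controls
controls[0] = rng.normal(size=J)
for t in range(1, T):
    controls[t] = 0.7 * controls[t - 1] + rng.normal(size=J)

treated = controls @ rng.dirichlet(np.ones(J)) + 0.1 * rng.normal(size=T)
treated[T0:] += 5.0                            # true treatment effect = 5

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(controls[:T0], treated[:T0])         # train on pre-period only
counterfactual = model.predict(controls[T0:])
effect = treated[T0:] - counterfactual
print(f"estimated average effect: {effect.mean():.2f} (true 5.0)")
```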
By: | Adams Vallejos; Ignacio Ormazabal; Felix A. Borotto; Hernan F. Astudillo |
Abstract: | It has been pointed out by Patriarca et al. (2005) that the power-law tailed equilibrium distribution in heterogeneous kinetic exchange models with a distributed saving parameter can be resolved as a mixture of Gamma distributions corresponding to particular subsets of agents. Here, we propose a new four-parameter statistical distribution, a $\kappa$-deformation of the Generalized Gamma distribution with a power-law tail, based on the deformed exponential and logarithm functions introduced by Kaniadakis (2001). We find that this new distribution is also an extension of the $\kappa$-Generalized distribution proposed by Clementi et al. (2007), with an additional shape parameter $\nu$, and that it properly reproduces the whole range of the distribution of wealth in such heterogeneous kinetic exchange models. We also provide various associated statistical and inequality measures. A numerical illustration of the $\kappa$-exponential's power-law tail follows this entry. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.06929&r=ecm |
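The numerical sketch below illustrates the key ingredient, the Kaniadakis $\kappa$-exponential: it behaves like the ordinary exponential near the origin but has a power-law tail of order $x^{-1/\kappa}$, which is what allows a deformed Gamma-type density to fit power-law-tailed wealth data. The value of $\kappa$ is an illustrative assumption.

```python
import numpy as np

def exp_k(x, kappa):
    # Kaniadakis deformed exponential; tends to exp(x) as kappa -> 0.
    return (np.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

kappa = 0.5  # implied tail exponent 1/kappa = 2
for x in [1.0, 10.0, 100.0, 1000.0]:
    ratio = exp_k(-10 * x, kappa) / exp_k(-x, kappa)
    # For a pure power law x^(-2), one decade in x divides the tail by 100.
    print(f"x = {x:7.1f}   tail ratio over one decade: {ratio:.3e}")
```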
By: | Lettau, Martin; Pelger, Markus |
Abstract: | We develop an estimator for latent factors in a large-dimensional panel of financial data that can explain expected excess returns. Statistical factor analysis based on Principal Component Analysis (PCA) has problems identifying factors with a small variance that are important for asset pricing. We generalize PCA with a penalty term accounting for the pricing error in expected returns, so that our estimator searches for factors that explain both the expected returns and the covariance structure. We derive the statistical properties of the new estimator and show that it can find asset-pricing factors that cannot be detected with PCA, even if a large amount of data is available. Applying the approach to portfolio data, we find factors with Sharpe ratios more than twice as large as those based on conventional PCA and with significantly smaller pricing errors. A toy comparison of PCA with a mean-penalized variant follows this entry. |
Keywords: | anomalies; cross-section of returns; expected returns; high-dimensional data; latent factors; PCA; weak factors |
JEL: | C14 C38 C52 C58 G12 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:12926&r=ecm |
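The toy comparison below illustrates the idea of penalizing pricing errors: augment the second-moment matrix used by PCA with a term that overweights mean returns, so a low-variance factor that carries a risk premium becomes detectable. The exact weighting, (1/T) X'X + gamma * (mean)(mean)', and all simulated quantities are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(12)
T, N, gamma = 600, 50, 30.0
strong = rng.normal(0.0, 1.0, size=T)           # high variance, no premium
weak = 0.3 + 0.1 * rng.normal(size=T)           # low variance, high premium
B = rng.normal(size=(N, 2))                     # loadings
X = np.column_stack([strong, weak]) @ B.T + rng.normal(size=(T, N))

xbar = X.mean(axis=0)
targets = {"PCA": X.T @ X / T,
           "RP-PCA": X.T @ X / T + gamma * np.outer(xbar, xbar)}

for name, M in targets.items():
    vecs = np.linalg.eigh(M)[1][:, -2:]         # top-2 eigenvectors
    f = X @ vecs                                # estimated factors
    c = max(abs(np.corrcoef(f[:, i], weak)[0, 1]) for i in range(2))
    print(f"{name:6s} | best correlation with the weak factor: {c:.2f}")
```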
By: | Shota Gugushvili; Frank van der Meulen; Moritz Schauer; Peter Spreij |
Abstract: | Aiming at financial applications, we study the problem of learning the volatility under market microstructure noise. Specifically, we consider noisy discrete-time observations from a stochastic differential equation and develop a novel computational method to learn the diffusion coefficient of the equation. We take a nonparametric Bayesian approach, where we model the volatility function a priori as piecewise constant. Its prior is specified via the inverse Gamma Markov chain. Sampling from the posterior is accomplished by incorporating the Forward Filtering Backward Simulation algorithm in the Gibbs sampler. Good performance of the method is demonstrated on two representative synthetic data examples. Finally, we apply the method to the EUR/USD exchange rate dataset. A naive binned-volatility illustration of the piecewise-constant approximation follows this entry. |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1805.05606&r=ecm |
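For intuition only, the sketch below shows the piecewise-constant approximation the prior is built on, estimated naively by binning normalized squared increments of an Euler-simulated path; the paper's inverse Gamma Markov chain prior, Gibbs sampler, and microstructure noise are all omitted here.

```python
import numpy as np

rng = np.random.default_rng(13)
T, n, bins = 1.0, 20000, 10
dt = T / n
t = np.linspace(0, T, n + 1)
sigma = lambda s: 0.5 + 1.0 * (s > 0.5)       # true piecewise-constant vol

x = np.zeros(n + 1)                           # Euler scheme with zero drift
for i in range(n):
    x[i + 1] = x[i] + sigma(t[i]) * np.sqrt(dt) * rng.normal()

inc2 = np.diff(x) ** 2 / dt                   # normalized squared increments
vol_hat = np.sqrt(inc2.reshape(bins, -1).mean(axis=1))
print("binned volatility estimates:", vol_hat.round(2))   # ~0.5 then ~1.5
```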
By: | Becker, Janis; Leschinski, Christian |
Abstract: | Models based on factors such as size, value, or momentum are ubiquitous in asset pricing, so portfolio allocation and risk management require estimates of the volatility of these factors. While realized volatility has become a standard tool for liquid individual assets, this measure is not available for factor models, because they are constructed from the CRSP database, which does not provide high-frequency data and contains a large number of less liquid stocks. Here, we provide a statistical approach to estimating the volatility of these factors. The efficacy of this approach relative to models based on squared returns is demonstrated for forecasts of market volatility and for a portfolio allocation strategy based on volatility timing. A toy volatility-timing sketch follows this entry. |
Keywords: | Asset Pricing; Realized Volatility; Factor Models; Volatility Forecasting |
JEL: | C58 G11 G12 G17 G32 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:han:dpaper:dp-631&r=ecm |
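As a toy version of the downstream use case, the sketch below fits a squared-return-based EWMA variance forecast to simulated factor returns, a simple stand-in for the paper's estimator, and feeds it into a capped inverse-variance volatility-timing weight. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(14)
T = 1000
vol = 0.01 * np.exp(0.5 * np.sin(np.linspace(0, 20, T)))   # slow-moving vol
r = vol * rng.normal(size=T)                               # factor returns

lam, target = 0.94, 0.01
s2 = np.empty(T)
s2[0] = r[:50].var()
for t in range(1, T):                      # EWMA forecast of next-period var
    s2[t] = lam * s2[t - 1] + (1 - lam) * r[t - 1] ** 2

w = np.minimum(target ** 2 / s2, 2.0)      # volatility-timing weight, capped
timed = w * r
print("std of raw factor returns:", r.std().round(4))
print("std of vol-timed returns :", timed.std().round(4))
```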