On Econometrics
By: | Jin Yan (The Chinese University of Hong Kong); Hong Il Yoo (Durham Business School)
Abstract: | We propose two semiparametric methods for estimating the random utility model using rank-ordered choice data. The framework is “semiparametric” in that the utility index includes finite dimensional preference parameters but the error term follows an unspecified distribution. Our methods allow for a flexible form of heteroskedasticity across individuals. With complete preference rankings, our methods also allow for heteroskedastic and correlated errors across alternatives, as well as a variety of random coefficient distributions. The baseline method we develop is the generalized maximum score (GMS) estimator, which is strongly consistent but follows a non-standard asymptotic distribution. To facilitate statistical inference, we make extra regularity assumptions and develop the smoothed GMS estimator, which is asymptotically normal. Monte Carlo experiments show that our estimators perform favorably against popular parametric estimators under a variety of stochastic specifications.
Keywords: | Rank-ordered; Random utility; Semiparametric estimation; Smoothing |
JEL: | C14 C35 |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:dur:durham:2017_02&r=ecm |
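To give a feel for the smoothing idea behind the smoothed GMS estimator, the sketch below implements a smoothed maximum score estimator in the simpler binary-choice setting (in the spirit of Horowitz, 1992) with heteroskedastic errors whose conditional median is zero. The data-generating process, bandwidth h, cdf kernel, and scale normalization are all illustrative assumptions, not the paper's rank-ordered procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
beta_true = np.array([1.0, 0.5])
# heteroskedastic errors: scale depends on x, conditional median is zero
u = rng.logistic(size=n) * (0.5 + 0.5 * np.abs(x[:, 0]))
y = (x @ beta_true + u > 0).astype(float)

def neg_smoothed_score(b2, h=0.1):
    # scale normalization: coefficient on the first regressor fixed at 1
    idx = x[:, 0] + b2 * x[:, 1]
    # replace the indicator 1{idx > 0} with a smooth cdf kernel
    return -np.mean((2 * y - 1) * norm.cdf(idx / h))

res = minimize(lambda b: neg_smoothed_score(b[0]), x0=[0.0],
               method="Nelder-Mead")
print("estimated beta2 (true value 0.5):", res.x[0])
```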
By: | Edwin Fourrier-Nicolai (Aix-Marseille Univ. (Aix-Marseille School of Economics), CNRS, EHESS and Centrale Marseille); Michel Lubrano (Aix-Marseille Univ. (Aix-Marseille School of Economics), CNRS, EHESS and Centrale Marseille) |
Abstract: | TIP curves are cumulative poverty gap curves used for representing the three different aspects of poverty: incidence, intensity and inequality. The paper provides Bayesian inference for TIP curves, linking their expression to a parametric representation of the income distribution using a mixture of lognormal densities. We treat specifically the question of zero-inflated income data and survey weights, which are two important issues in survey analysis. The advantage of the Bayesian approach is that it takes into account all the information contained in the sample and that it provides small sample confidence intervals and tests for TIP dominance. We apply our methodology to evaluate the evolution of child poverty in Germany after 2002, thus updating the portrait of child poverty in Germany given in Corak et al. (2008).
Keywords: | Bayesian inference, mixture model, survey weights, zero-inflated model, poverty, inequality
JEL: | C11 C46 I32 I38 |
Date: | 2017–03 |
URL: | http://d.repec.org/n?u=RePEc:aim:wpaimx:1710&r=ecm |
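The TIP curve object itself is straightforward to compute from data; the sketch below constructs an empirical TIP curve for a toy lognormal-mixture income sample. The Bayesian machinery of the paper (posterior for the mixture, survey weights, zero inflation, dominance tests) is omitted, and the relative poverty line used is an assumption.

```python
import numpy as np

def tip_curve(incomes, z, grid):
    """Empirical TIP curve: cumulated normalized poverty gaps (z - x)_+ / z
    of the incomes sorted ascending, evaluated at population shares `grid`."""
    x = np.sort(np.asarray(incomes, dtype=float))
    gaps = np.maximum(z - x, 0.0) / z                 # normalized poverty gaps
    p = np.arange(1, len(x) + 1) / len(x)
    cum = np.cumsum(gaps) / len(x)
    return np.interp(grid, np.concatenate([[0.0], p]),
                     np.concatenate([[0.0], cum]))

rng = np.random.default_rng(1)
# toy incomes from a two-component lognormal mixture
low = rng.random(5000) < 0.3
y = np.where(low, rng.lognormal(9.5, 0.5, 5000), rng.lognormal(10.5, 0.4, 5000))
z = 0.6 * np.median(y)                                # assumed poverty line
print(tip_curve(y, z, np.linspace(0.0, 1.0, 11)))     # flattens past the poor
```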
By: | Xuehai Zhang (Paderborn University); Yuanhua Feng (Paderborn University); Christian Peitz (Paderborn University) |
Abstract: | The paper proposes a wide class of semiparametric GARCH models by introducing a scale function into a GARCH-class model to capture long-run volatility dynamics; the result can be thought of as an MEM (multiplicative error model) with a varying scale function. Our focus is to estimate the scale function under suitable weak moment conditions by means of the Box-Cox transformation of the absolute returns. The estimation of the scale function is independent of any GARCH specification. To overcome the drawbacks of the kernel and local linear approaches, a non-negativity-constrained local linear estimator of the scale function is proposed; a suitable parametric GARCH model is then fitted to the standardized residuals. Asymptotic properties of the proposed nonparametric and parametric estimators are studied in detail, and iterative plug-in algorithms are developed for selecting the bandwidth and the transformation parameter, the latter based on MLE and the Jarque-Bera statistic. The algorithms can also be carried out without any parametric specification of the stationary part. Applications to real data sets show that the proposals work very well in practice.
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:pdn:ciepap:104&r=ecm |
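A minimal sketch of the scale-function idea: Box-Cox-transform the absolute returns, smooth them over time with a local linear estimator, and invert the transform. The fixed bandwidth and transformation parameter, and the omission of the non-negativity constraint and of the subsequent GARCH step, are simplifications relative to the paper.

```python
import numpy as np

def local_linear(t, y, grid, h):
    """Local linear regression of y on t at `grid`, Epanechnikov kernel."""
    out = np.empty(len(grid))
    for j, g in enumerate(grid):
        w = np.maximum(1.0 - ((t - g) / h) ** 2, 0.0)
        X = np.column_stack([np.ones_like(t), t - g])
        out[j] = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))[0]
    return out

rng = np.random.default_rng(2)
n = 1500
t = np.linspace(0.0, 1.0, n)
scale = 0.5 + 0.4 * np.sin(2 * np.pi * t)     # slowly varying long-run scale
r = scale * rng.standard_normal(n)            # returns (GARCH part omitted)
lam = 0.5                                     # Box-Cox parameter (assumed fixed)
ytr = (np.abs(r) ** lam - 1.0) / lam          # Box-Cox of absolute returns
ghat = local_linear(t, ytr, t, h=0.1)         # smooth the transformed series
xi = (lam * ghat + 1.0) ** (1.0 / lam)        # back-transform (up to a constant)
print("corr(estimated, true scale):", np.corrcoef(xi, scale)[0, 1])
```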
By: | Amir-Ahmadi, Pooyan (University of Illinois at Urbana–Champaign); Drautzburg, Thorsten (Federal Reserve Bank of Philadelphia) |
Abstract: | We analyze set identification in Bayesian vector autoregressions (VARs). Because set identification can be challenging, we propose to include micro data on heterogeneous entities to sharpen inference. First, we provide conditions under which imposing a simple ranking of impulse responses sharpens inference in bivariate and trivariate VARs. Importantly, we show that this set reduction also applies to variables not subject to ranking restrictions. Second, we develop two types of inference to address recent criticism: (1) an efficient fully Bayesian algorithm based on an agnostic prior that directly samples from the admissible set and (2) a prior-robust Bayesian algorithm to sample the posterior bounds of the identified set. Third, we apply our methodology to U.S. data to identify productivity news and defense spending shocks. We find that under both algorithms, the bounds of the identified sets shrink substantially under heterogeneity restrictions relative to standard sign restrictions.
Keywords: | Structural VAR; set-identification; heterogeneity and sign restrictions; posterior bounds; Bayesian inference; sampling methods; productivity news; government spending
Date: | 2017–05–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:17-11&r=ecm |
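The following sketch illustrates set identification under a sign restriction plus a simple ranking restriction in a bivariate setting: candidate impact matrices are generated by rotating a Cholesky factor with Haar-distributed orthogonal matrices, and only draws satisfying the restrictions are kept. This is a textbook accept-reject scheme with an assumed fixed reduced-form covariance, not the paper's fully Bayesian or prior-robust algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.3],          # assumed reduced-form covariance of a
                  [0.3, 0.5]])         # bivariate VAR's innovations
P = np.linalg.cholesky(Sigma)

accepted = []
for _ in range(20000):
    # Haar-distributed rotation via QR of a Gaussian matrix
    Q, R = np.linalg.qr(rng.standard_normal((2, 2)))
    Q = Q @ np.diag(np.sign(np.diag(R)))
    col = (P @ Q)[:, 0]                # candidate impact column of one shock
    if col[0] < 0:
        col = -col                     # sign-normalize the shock
    # sign restriction: both variables rise on impact;
    # ranking restriction: variable 1 responds more strongly than variable 2
    if col[1] > 0 and col[0] > col[1]:
        accepted.append(col)

acc = np.array(accepted)
print("identified-set bounds for the impact on variable 2:",
      acc[:, 1].min(), acc[:, 1].max())
```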
By: | Kerssenfischer, Mark |
Abstract: | Dynamic factor models and external instrument identification are two recent advances in the empirical macroeconomic literature. This paper combines the two approaches in order to study the effects of monetary policy shocks. I use this novel framework to re-examine the effects found by Forni and Gambetti (2010, JME) in a recursively-identified DFM. Given the fundamental differences between the identifying assumptions, the results are strikingly similar overall. Importantly, this finding stands in stark contrast to traditional VAR models, which yield decisively different results under the two identification schemes. This highlights the importance of using extended information sets to properly identify monetary policy shocks.
Keywords: | Monetary Policy, Dynamic Factor Models, External Instrument, High-Frequency Identification
JEL: | C32 E32 E44 E52 F31 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:082017&r=ecm |
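The external-instrument moment condition can be shown in a few lines: if z_t is correlated with the monetary policy shock and uncorrelated with the other shocks, the covariance between the reduced-form residuals and z_t recovers the shock's impact vector up to scale. The toy residuals below stand in for those of a VAR or DFM; the true impact matrix and instrument strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
eps = rng.standard_normal((T, 2))             # structural shocks; column 0 is
A0 = np.array([[1.0, 0.4],                    # the monetary policy shock
               [0.6, 1.0]])                   # true impact matrix (assumed)
u = eps @ A0.T                                # reduced-form residuals
z = 0.8 * eps[:, 0] + 0.3 * rng.standard_normal(T)   # external instrument

b = (u * z[:, None]).mean(axis=0)             # Cov(u_t, z_t) is prop. to impact
b = b / b[0]                                  # unit-effect normalization
print("estimated relative impact:", b, "true:", A0[:, 0] / A0[0, 0])
```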
By: | Siem Koopman (Vrije Universiteit Amsterdam, Tinbergen Institute); André Lucas (Vrije Universiteit Amsterdam, Tinbergen Institute); Marcin Zamojski (University of Gothenburg) |
Abstract: | We consider score-driven time-varying parameters in dynamic yield curve models and investigate their in-sample and out-of-sample performance for two data sets. In a univariate setting, score-driven models have been shown to offer performance competitive with parameter-driven models in terms of in-sample fit and quality of out-of-sample forecasts, but at a lower computational cost. We investigate whether this performance and the related advantages extend to more general and higher-dimensional models. Based on an extensive Monte Carlo study, we show that in multivariate settings the advantages of score-driven models can be even more pronounced than in the univariate setting. We also show how the score-driven approach can be implemented in dynamic yield curve models and extend them to allow for fat-tailed disturbance distributions and for time-varying variances (heteroskedasticity) and covariances.
Keywords: | term-structure, dynamic Nelson-Siegel models, non-Gaussian distributions, time-varying parameters, observation-driven models, parameter-driven models |
JEL: | C15 C32 C33 C58 C63 E43 E52 E58 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:nbp:nbpmis:258&r=ecm |
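As background, the sketch below computes the Nelson-Siegel loadings and recovers the level, slope, and curvature factors by cross-sectional least squares for a single period; in a dynamic (and, in the paper, score-driven) model these factors evolve over time. The decay parameter, maturities, and yield data are illustrative assumptions, and the score-driven updating, fat tails, and time-varying covariances are omitted.

```python
import numpy as np

def ns_loadings(maturities, lam):
    """Nelson-Siegel loadings for level, slope, and curvature."""
    x = lam * np.asarray(maturities, dtype=float)
    slope = (1.0 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(x), slope, slope - np.exp(-x)])

mats = np.array([3, 6, 12, 24, 36, 60, 84, 120]) / 12.0   # years
L = ns_loadings(mats, lam=0.7)                            # assumed decay rate
rng = np.random.default_rng(5)
f_true = np.array([4.0, -1.5, 0.5])                       # level, slope, curve
yields = L @ f_true + 0.05 * rng.standard_normal(len(mats))
f_hat, *_ = np.linalg.lstsq(L, yields, rcond=None)        # one cross-section
print("estimated factors:", f_hat)
```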
By: | Korobilis, Dimitris |
Abstract: | Machine learning methods are becoming increasingly popular in economics, due to the increased availability of large datasets. In this paper I evaluate a recently proposed algorithm called Generalized Approximate Message Passing (GAMP), which has been very popular in signal processing and compressive sensing. I show how this algorithm can be combined with Bayesian hierarchical shrinkage priors typically used in economic forecasting, resulting in computationally efficient schemes for estimating high-dimensional regression models. Using Monte Carlo simulations I establish that in certain scenarios GAMP can achieve estimation accuracy comparable to traditional Markov chain Monte Carlo methods, at a tiny fraction of the computing time. In a forecasting exercise involving a large set of orthogonal macroeconomic predictors, I show that Bayesian shrinkage estimators based on GAMP perform very well compared to a large set of alternatives.
Keywords: | high-dimensional inference; compressive sensing; belief propagation; Bayesian shrinkage; dynamic factor models |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:esy:uefcwp:19565&r=ecm |
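GAMP generalizes the basic approximate message passing (AMP) recursion. The sketch below runs plain AMP with a soft-thresholding denoiser (the LASSO case of Donoho, Maleki, and Montanari) on a sparse regression; the threshold rule and problem sizes are assumptions, and the paper's Bayesian hierarchical shrinkage priors are not implemented.

```python
import numpy as np

def soft(x, th):
    return np.sign(x) * np.maximum(np.abs(x) - th, 0.0)

rng = np.random.default_rng(6)
n, p, k = 200, 400, 20
A = rng.standard_normal((n, p)) / np.sqrt(n)          # normalized design
x0 = np.zeros(p)
x0[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(n)

x, z = np.zeros(p), y.copy()
delta = n / p
for _ in range(30):
    r = x + A.T @ z                                   # pseudo-data
    tau = np.sqrt(np.mean(z ** 2))                    # effective noise level
    x_new = soft(r, 1.5 * tau)                        # denoising step
    # Onsager correction keeps the pseudo-data approximately Gaussian
    z = y - A @ x_new + z * np.mean(np.abs(x_new) > 0) / delta
    x = x_new
print("relative recovery error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```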
By: | Syed Ali Asad Rizvi; Stephen J. Roberts; Michael A. Osborne; Favour Nyikosa |
Abstract: | In this paper we use Gaussian Process (GP) regression to propose a novel approach for predicting the volatility of financial returns by forecasting the envelopes of the time series, and we compare its performance directly with traditional approaches such as GARCH. We compare the forecasting power of three approaches: GP regression on the absolute and squared returns; regression on the envelope of the returns and the absolute returns; and regression on the envelopes of the negative and positive returns separately. We use a maximum a posteriori estimate with a Gaussian prior to determine our hyperparameters, and also test the effect of updating the hyperparameters at each forecasting step. We use our approaches to forecast the out-of-sample volatility of four currency pairs over a two-year period at half-hourly intervals. From three candidate kernels, we select the one giving the best performance for our data. We use two published accuracy measures and four statistical loss functions to evaluate the forecasting ability of GARCH versus GPs. In terms of mean squared error, the GPs perform 20% better than a random walk model and 50% better than GARCH on the same data.
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1705.00891&r=ecm |
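A stripped-down version of the first approach (GP regression on absolute returns) can be put together with scikit-learn, as below. Note one deviation from the paper: hyperparameters are set by maximizing the marginal likelihood rather than by a MAP estimate with a Gaussian prior, and the envelope-based variants are not implemented; the kernel choice and data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
T = 300
t = np.arange(T, dtype=float)[:, None]
sigma = 0.5 + 0.3 * np.sin(2 * np.pi * t.ravel() / 100)   # true volatility path
r = sigma * rng.standard_normal(T)                        # returns

# GP regression of |returns| on time; kernel hyperparameters are tuned by
# maximizing the marginal likelihood (the paper uses a MAP estimate instead)
kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, np.abs(r))
vol_hat = gp.predict(t) * np.sqrt(np.pi / 2)   # E|r| = sigma * sqrt(2/pi)
print("corr(fitted, true volatility):", np.corrcoef(vol_hat, sigma)[0, 1])
```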
By: | Yuanhua Feng (Paderborn University); Thomas Gries (Paderborn University) |
Abstract: | The main purpose of this paper is the development of iterative plug-in algorithms for local polynomial estimation of the trend and its derivatives in macroeconomic time series. In particular, a data-driven lag-window estimator for the variance factor is proposed, so that the bandwidth is selected without any parametric assumption on the stationary errors. Further analysis of the residuals using an ARMA model is discussed briefly. Moreover, confidence bounds for the trend and its derivatives are constructed using asymptotically unbiased estimates and applied to test possible linearity of the trend. These graphical tools also reveal further detailed features of the economic development. Practical performance of the proposals is illustrated by quarterly US and UK GDP data.
Keywords: | Macroeconomic time series, semiparametric modelling, nonparametric regression with dependent errors, bandwidth selection, misspecification test |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:pdn:ciepap:102&r=ecm |
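The key data-driven ingredient is the variance factor of the dependent errors, which enters the asymptotically optimal bandwidth. The sketch below computes a Bartlett lag-window estimate of the long-run variance from residuals of a pilot polynomial fit; the paper's iterative plug-in algorithm refines this jointly with the bandwidth, and the window width M here is an assumption.

```python
import numpy as np

def lag_window_variance(e, M):
    """Bartlett lag-window estimate of the long-run variance (2*pi*f(0))
    of the residual series e, with window width M."""
    e = e - e.mean()
    T = len(e)
    acov = np.array([e[k:] @ e[:T - k] / T for k in range(M + 1)])
    w = 1.0 - np.arange(M + 1) / (M + 1)       # Bartlett weights
    return acov[0] + 2.0 * (w[1:] * acov[1:]).sum()

rng = np.random.default_rng(8)
T = 400
t = np.linspace(0.0, 1.0, T)
e = np.zeros(T)
for i in range(1, T):                          # AR(1) stationary errors
    e[i] = 0.5 * e[i - 1] + rng.standard_normal()
y = 2.0 + 3.0 * t - 2.0 * t ** 2 + 0.2 * e     # trend plus dependent noise
resid = y - np.polyval(np.polyfit(t, y, 2), t) # residuals from a pilot fit
print("variance factor:", lag_window_variance(resid, M=15))
```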
By: | Antonakakis, Nikolaos; Gabauer, David |
Abstract: | In this study, we propose refined measures of dynamic connectedness based on a TVP-VAR approach that overcomes certain shortcomings of the connectedness measures originally introduced by Diebold and Yilmaz (2009, 2012, 2014). We illustrate the advantages of the TVP-VAR-based connectedness approach with an empirical analysis of exchange rate volatility connectedness.
Keywords: | Dynamic connectedness; TVP-VAR; Exchange rate volatility |
JEL: | C32 C50 F31 G15 |
Date: | 2017–04–12 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:78282&r=ecm |
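For reference, the sketch below computes the Diebold-Yilmaz total connectedness index from a constant-parameter VAR via the generalized FEVD; the paper's refinement replaces the fixed coefficients and covariance with TVP-VAR estimates. The lag order, horizon, and simulated data are illustrative.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def total_connectedness(data, p=1, H=10):
    """Diebold-Yilmaz total spillover index from a constant-parameter VAR
    via the generalized FEVD; a TVP-VAR version would recompute this with
    time-varying coefficients and covariance."""
    res = VAR(data).fit(p)
    Sigma = np.asarray(res.sigma_u)
    k = Sigma.shape[0]
    num, den = np.zeros((k, k)), np.zeros(k)
    for Phi in res.ma_rep(maxn=H - 1):         # MA matrices Phi_0..Phi_{H-1}
        A = Phi @ Sigma
        num += (A ** 2) / np.diag(Sigma)       # (e_i' Phi_h Sigma e_j)^2 / s_jj
        den += np.diag(Phi @ Sigma @ Phi.T)
    theta = num / den[:, None]
    theta /= theta.sum(axis=1, keepdims=True)  # row-normalize the GFEVD
    return 100.0 * (1.0 - np.trace(theta) / k)

rng = np.random.default_rng(9)
mix = np.array([[1.0, 0.5, 0.2], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]])
x = rng.standard_normal((500, 3)) @ mix        # correlated toy volatilities
print("total connectedness (%):", total_connectedness(x, p=1, H=10))
```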
By: | Martin Schumann (CREA, University of Luxembourg); Gautam Tripathi (CREA, University of Luxembourg)
Abstract: | We demonstrate that the probit weight function is U-shaped on R, i.e., it is decreasing on (-infinity, 0), strictly increasing on [0, infinity), and strictly convex on R. Knowledge of the shape of the probit weight function can resolve any confusion that may arise from a result in the classic paper of Sampford (1953).
Keywords: | Probit weight function, shape
JEL: | C18 C25 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:luc:wpaper:17-08&r=ecm |
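The abstract does not reproduce the definition of the probit weight function, so the numerical check below uses one common candidate, the weight phi(t) / [Phi(t)(1 - Phi(t))] that appears in the probit score; this choice is an assumption and the paper's exact definition may differ.

```python
import numpy as np
from scipy.stats import norm

def w(t):
    # ASSUMED definition: the weight phi / (Phi * (1 - Phi)) from the probit
    # score; the paper's exact probit weight function may differ
    return norm.pdf(t) / (norm.cdf(t) * norm.sf(t))

t = np.linspace(-5.0, 5.0, 2001)               # grid containing 0 exactly
wt = w(t)
d1, d2 = np.diff(wt), np.diff(wt, 2)
print("decreasing left of 0: ", bool(np.all(d1[t[:-1] < 0] < 0)))
print("increasing right of 0:", bool(np.all(d1[t[:-1] >= 0] > 0)))
print("convex on the grid:   ", bool(np.all(d2 > 0)))
```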
By: | Rendon Aguirre, Janeth Carolina; Prieto Fernández, Francisco Javier; Peña Sánchez de Rivera, Daniel |
Abstract: | Clustering Big Data is an important problem because large samples of many variables are usually heterogeneous and include mixtures of several populations. It often happens that only some of a large set of variables are useful for clustering, and working with all of them would be very inefficient and may make the identification of the clusters more difficult. Thus, searching for spaces of lower dimension that include all the relevant information about the clusters seems a sensible way to proceed in these situations. Peña and Prieto (2001) showed that the extreme kurtosis directions of projected data are optimal when the data have been generated by a mixture of two normal distributions. We generalize this result to any number of mixture components and show that the extreme kurtosis directions of the projected data are linear combinations of the directions that would be optimal for discrimination if we knew the centers of the components of the mixture. In order to separate the groups, we want directions that split the data into two groups, each corresponding to different components of the mixture. We prove that these directions can be found from extreme kurtosis projections. This result suggests a new procedure to deal with many groups, proceeding in a binary fashion and deciding at each step whether the data should be split into two groups or the procedure should stop. The decision is based on comparing a single distribution with a mixture of two distributions. The performance of the algorithm is analyzed through a simulation study.
Keywords: | Mixture models; Projection Pursuit; High dimension |
Date: | 2017–04–27 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:24522&r=ecm |
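The sketch below illustrates the core idea on a balanced two-component Gaussian mixture, where the direction minimizing the kurtosis of the projected data aligns with the line joining the two centers (for very unbalanced mixtures the maximizing direction plays this role). The optimizer, restarts, and data are illustrative; the paper's full binary-splitting procedure is not implemented.

```python
import numpy as np
from scipy.optimize import minimize

def proj_kurtosis(d, X):
    y = X @ (d / np.linalg.norm(d))
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4)

rng = np.random.default_rng(10)
mu = np.zeros(5)
mu[0] = 4.0                                    # centers differ along axis 0
X = np.vstack([rng.standard_normal((300, 5)),
               rng.standard_normal((300, 5)) + mu])

# for a balanced two-component mixture the direction MINIMIZING projected
# kurtosis aligns with the line joining the centers (bimodal projection)
best = min((minimize(proj_kurtosis, rng.standard_normal(5), args=(X,),
                     method="Nelder-Mead") for _ in range(5)),
           key=lambda r: r.fun)
d = best.x / np.linalg.norm(best.x)
print("direction (loads on axis 0, up to sign):", np.round(d, 2))
```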
By: | Hyeongwoo Kim; Kyunghwan Ko |
Abstract: | We present a factor augmented forecasting model for assessing financial vulnerability in Korea. Dynamic factor models often extract latent common factors from a large panel of time series data via the method of principal components (PC). Instead, we employ the partial least squares (PLS) method, which estimates target-specific common factors by utilizing the covariances between the predictors and the target variable. Applying PLS to 198 monthly macroeconomic time series variables and the Bank of Korea's Financial Stress Index (KFSTI), our PLS factor augmented forecasting models consistently outperformed the random walk benchmark model in out-of-sample prediction exercises at all forecast horizons we considered. Our models also outperformed the autoregressive benchmark model at short forecast horizons. We expect our models to provide useful early warning signs of the emergence of systemic risks in Korea's financial markets.
Keywords: | Partial Least Squares; Principal Component Analysis; Financial Stress Index; Out-of-Sample Forecast; RRMSPE |
JEL: | C38 C53 C55 E44 E47 G01 G17 |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2017-03&r=ecm |
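A compact version of the PLS step is sketched below with scikit-learn on simulated data: predictors at time t are related to the target at t + h, so the extracted components are target-specific. The factor structure, horizon, and number of components are illustrative assumptions; the paper's recursive out-of-sample design with the KFSTI is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
T, N = 240, 198                         # months, predictors (as in the panel)
F = rng.standard_normal((T, 2))         # latent factors
X = F @ rng.standard_normal((2, N)) + rng.standard_normal((T, N))
y = F[:, 0] + 0.3 * rng.standard_normal(T)    # target loads on factor 1 only

h = 1                                   # forecast horizon
pls = PLSRegression(n_components=2)     # target-specific components
pls.fit(X[:-h], y[h:])                  # predictors at t, target at t + h
print("one-step-ahead forecast:", pls.predict(X[-h:]).ravel())
```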
By: | Barbara Sianesi (Institute for Fiscal Studies)
Abstract: | Randomised controlled or clinical trials (RCTs) are generally viewed as the most reliable method to draw causal inference as to the effects of a treatment, as they should guarantee that the individuals being compared differ only in terms of their exposure to the treatment of interest. This ‘gold standard’ status, however, hinges on the requirement that the randomisation device determines the random allocation of individuals to the treatment without affecting any other element of the causal model. This ‘no randomisation bias’ assumption is generally untestable, but if violated it would undermine the causal inference emerging from an RCT, both in terms of its internal validity and in terms of its relevance for policy purposes. This paper offers a concise review of how the medical literature identifies and deals with such issues.
Keywords: | randomised trials, medical
Date: | 2016–11–21 |
URL: | http://d.repec.org/n?u=RePEc:ifs:ifsewp:16/23&r=ecm |
By: | Kosaku Takanashi (Faculty of Economics, Keio University) |
Abstract: | We study local asymptotic normality of M-estimates of convex minimization in an infinite dimensional parameter space. The objective function of the M-estimates is not necessarily differentiable and is possibly subject to convex constraints. In these circumstances, narrow convergence with respect to uniform convergence fails to hold, because of the strength of its topology. Our new approach to this lack of uniform convergence is based on Mosco convergence, whose topology is weaker than that of uniform convergence. By applying narrow convergence with respect to the Mosco topology, we develop an infinite-dimensional version of the convexity argument and provide a proof of local asymptotic normality. Our new technique also yields a proof of the asymptotic distribution of the likelihood ratio test statistic defined on real separable Hilbert spaces.
Keywords: | Epi-convergence, Likelihood ratio test, Local asymptotic normality, Mosco-topology |
JEL: | C14 C12 |
Date: | 2017–04–10 |
URL: | http://d.repec.org/n?u=RePEc:keo:dpaper:2017-012&r=ecm |
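For reference, the standard definition of Mosco convergence of convex functions $f_n$ to $f$ on a Hilbert space $H$ combines a weak liminf condition with a strong recovery-sequence condition:

```latex
% Standard definition: convex f_n Mosco-converge to f on a Hilbert space H iff
\[
  x_n \rightharpoonup x \ \text{(weakly)} \;\Longrightarrow\;
  \liminf_{n \to \infty} f_n(x_n) \,\ge\, f(x),
\]
\[
  \forall x \in H \ \exists\, (x_n) \ \text{with} \ x_n \to x \ \text{(strongly)}
  \ \text{and} \ \limsup_{n \to \infty} f_n(x_n) \,\le\, f(x).
\]
```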
By: | Matthew Gentzkow; Bryan T. Kelly; Matt Taddy |
Abstract: | An ever increasing share of human interaction, communication, and culture is recorded as digital text. We provide an introduction to the use of text as an input to economic research. We discuss the features that make text different from other forms of data, offer a practical overview of relevant statistical methods, and survey a variety of applications. |
JEL: | C1 |
Date: | 2017–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:23276&r=ecm |
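A minimal text-as-data pipeline of the kind the survey covers: map documents to a document-term matrix and relate it to a numeric outcome by penalized regression. The corpus and the "hawkishness" outcome below are invented purely for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# invented toy corpus with a numeric outcome attached to each document
docs = ["rates were raised amid inflation concerns",
        "strong growth and robust employment figures",
        "inflation pressures persist and tightening is expected",
        "growth outlook improves as employment rises"]
y = np.array([1.0, -1.0, 1.0, -1.0])           # hypothetical hawkishness score

X = TfidfVectorizer().fit_transform(docs)      # documents -> sparse term matrix
model = Ridge(alpha=1.0).fit(X, y)             # penalized regression on text
print("fitted scores:", model.predict(X).round(2))
```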
By: | Wiper, Michael Peter; Tena, Juan de Dios; Forrest, David; Corona, Francisco |
Abstract: | The paper discusses how to evaluate alternative seeding systems in sports competitions. Prior papers have developed an approach which uses a forecasting model at the level of the individual match and then applies Monte Carlo simulation of the whole tournament to estimate the probabilities associated with various outcomes or combinations of outcomes. This allows, for example, a measure of outcome uncertainty to be attached to each proposed seeding regime. However, this established approach takes no note of the uncertainty surrounding the parameter estimates in the underlying match forecasting model and this precludes testing for statistically significant differences between probabilities or outcome uncertainty measures under alternative regimes. We propose a Bayesian approach which resolves this weakness in standard methodology and illustrate its potential by examining the effect of seeding rule changes implemented in the UEFA Champions League, a major football tournament, in 2015. The reform appears to have increased outcome uncertainty. We identify which clubs and which sorts of clubs were favourably or unfavourably affected by the reform, distinguishing effects on probabilities of progression to different phases of the competition. |
Keywords: | Bayesian; Monte Carlo simulation; football; seeding; OR in sports |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:24521&r=ecm |
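The proposed fix is to propagate posterior parameter uncertainty through the tournament simulation. The sketch below does this for a toy eight-team knockout with a Poisson scoring model: each simulated tournament uses a different posterior draw of team strengths, so outcome probabilities reflect estimation uncertainty. The posterior draws, match model, and random bracket are assumptions; the Champions League structure and seeding rules are not modeled.

```python
import numpy as np

rng = np.random.default_rng(12)
teams = 8
# stand-in posterior draws of log-strengths from a fitted match-level model
posterior = rng.normal(loc=np.linspace(0.4, -0.4, teams), scale=0.15,
                       size=(1000, teams))

def play(i, j, s):
    """One knockout tie under a toy Poisson scoring model."""
    gi = rng.poisson(np.exp(s[i] - s[j]))
    gj = rng.poisson(np.exp(s[j] - s[i]))
    return i if gi > gj or (gi == gj and rng.random() < 0.5) else j

wins = np.zeros(teams)
for s in posterior:                        # one tournament per posterior draw,
    rnd = list(rng.permutation(teams))     # so parameter uncertainty propagates
    while len(rnd) > 1:                    # an unseeded random bracket; seeding
        rnd = [play(rnd[m], rnd[m + 1], s) # rules would constrain this draw
               for m in range(0, len(rnd), 2)]
    wins[rnd[0]] += 1
print("posterior win probabilities:", wins / len(posterior))
```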