on Econometrics |
By: | Nikolay Gospodinov; Serena Ng |
Abstract: | This paper considers estimation of moving average (MA) models with non-Gaussian errors. Information in higher-order cumulants allows identification of the parameters without imposing invertibility. By allowing for an unbounded parameter space, the generalized method of moments estimator of the MA(1) model has classical (root-T consistent and asymptotically normal) properties when the moving average root is inside, outside, and on the unit circle. For more general models where the dependence of the cumulants on the model parameters is analytically intractable, we consider simulation-based estimators with two features that distinguish them from the existing work in the literature. First, identification now requires information from the second and higher-order moments of the data. Thus, in addition to an autoregressive model, new auxiliary regressions need to be considered. Second, the errors used to simulate the model are drawn from a flexible functional form to accommodate a large class of distributions with non-Gaussian features. The proposed simulation estimators are also asymptotically normally distributed without imposing the assumption of invertibility. In the application considered, there is overwhelming evidence of non-invertibility in the Fama-French portfolio returns. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:2013-11&r=ecm |
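The identification argument in the abstract above can be illustrated numerically. The sketch below is our own minimal construction, not the authors' GMM estimator; the centered-exponential errors and the particular cumulant ratio are illustrative choices. It shows that second moments cannot distinguish an MA(1) root θ from its reciprocal 1/θ, while third-order cumulants can:

```python
import numpy as np

def ma1(theta, n, rng):
    # MA(1) with skewed errors: x_t = e_t + theta * e_{t-1},
    # e_t centered exponential, so the third cumulant kappa3 = 2 is nonzero
    e = rng.exponential(1.0, n + 1) - 1.0
    return e[1:] + theta * e[:-1]

def acf1(x):
    x = x - x.mean()
    return np.mean(x[1:] * x[:-1]) / np.var(x)

def cumulant_ratio(x):
    # E[x_t^2 x_{t-1}] = theta^2 * kappa3 and E[x_t * x_{t-1}^2] = theta * kappa3,
    # so their ratio identifies theta itself, including |theta| > 1
    x = x - x.mean()
    return np.mean(x[1:] ** 2 * x[:-1]) / np.mean(x[1:] * x[:-1] ** 2)

rng = np.random.default_rng(0)
n = 500_000
x_inv = ma1(0.5, n, rng)   # invertible root
x_non = ma1(2.0, n, rng)   # non-invertible root

# Second moments cannot tell the roots apart:
# rho(1) = theta / (1 + theta^2) = 0.4 in both cases
print(acf1(x_inv), acf1(x_non))
# Third-order cumulants can: the ratio recovers theta = 0.5 and 2.0 respectively
print(cumulant_ratio(x_inv), cumulant_ratio(x_non))
```

The same logic underlies the GMM moment conditions described in the abstract: adding third-order cumulant conditions to the autocovariances makes the non-invertible root estimable.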
By: | Boris Brodsky (Central Economics and Mathematics Department. National Research University Higher School of Economics. Applied Macroeconomics Department. Professor); Henry Penikas (Junior Research Fellow at International Laboratory of Decision Choice and Analysis, National Research University Higher School of Economics); Irina Safaryan (Institute for Informatics and Automation Problems of the National Academy of Sciences of the Republic of Armenia; Research Fellow.) |
Abstract: | This paper presents results on detecting a structural shift in copula models of multivariate time series. A nonparametric method for identifying and estimating the structural shift is used. The asymptotic characteristics of the proposed method (the probabilities of Type I and Type II errors, and the probability of estimation error) are analyzed. Verification results from simulations for the Clayton and Gumbel copulas are presented and discussed. The empirical part of the paper is devoted to structural shift identification for multivariate time series of interest rates for the Euro, US Dollar and Ruble zones. The empirical application provides strong evidence of the efficiency of the proposed method of structural shift identification. |
Keywords: | Copula, structural shift, Kolmogorov-Smirnov statistics, interest rates |
JEL: | C14 C46 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:hig:wpaper:05/fe/2012&r=ecm |
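A split-sample version of the idea can be sketched as follows. This is not the authors' procedure, only an illustration of how a Kolmogorov-Smirnov-type distance between empirical copulas reacts to a shift in dependence; the Gaussian-copula data, break point, and evaluation grid are all our own illustrative choices:

```python
import numpy as np

def ranks(z):
    # probability-integral transform via ranks, mapped into [0, 1]
    return np.argsort(np.argsort(z)) / (len(z) - 1)

def empirical_copula(u, v, grid):
    # C_n(s, t) = (1/n) #{i : U_i <= s, V_i <= t} evaluated on a grid
    return np.array([[np.mean((u <= s) & (v <= t)) for t in grid] for s in grid])

def ks_shift_stat(x, y, k, grid):
    # KS-type distance between the empirical copulas before and after
    # a candidate break point k (each half uses its own ranks)
    u1, v1 = ranks(x[:k]), ranks(y[:k])
    u2, v2 = ranks(x[k:]), ranks(y[k:])
    return np.max(np.abs(empirical_copula(u1, v1, grid)
                         - empirical_copula(u2, v2, grid)))

rng = np.random.default_rng(1)
n = 2000
grid = np.linspace(0.05, 0.95, 19)

# No shift: Gaussian dependence with rho = 0.2 throughout
z = rng.standard_normal((n, 2))
x0, y0 = z[:, 0], 0.2 * z[:, 0] + np.sqrt(1 - 0.2 ** 2) * z[:, 1]

# Shift at n/2: dependence jumps from rho = 0.2 to rho = 0.8
w = rng.standard_normal((n, 2))
rho = np.where(np.arange(n) < n // 2, 0.2, 0.8)
x1, y1 = w[:, 0], rho * w[:, 0] + np.sqrt(1 - rho ** 2) * w[:, 1]

print(ks_shift_stat(x0, y0, n // 2, grid))   # small: no shift
print(ks_shift_stat(x1, y1, n // 2, grid))   # clearly larger: shift in the copula
```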
By: | Jungyoon Lee; Peter M Robinson |
Abstract: | An asymptotic theory is developed for nonparametric and semiparametric series estimation under general cross-sectional dependence and heterogeneity. A uniform rate of consistency, asymptotic normality, and sufficient conditions for convergence are established, and a data-driven studentization new to cross-sectional data is justified. The conditions accommodate various cross-sectional settings plausible in economic applications, and apply also to panel and time series data. Strong as well as weak dependence is covered, and conditional heteroscedasticity is allowed. |
Keywords: | Series estimation, Nonparametric regression, Spatial data, Cross-sectional dependence, Uniform rate of consistency, Functional central limit theorem, Data-driven studentization |
JEL: | C12 C13 C14 C21 |
Date: | 2013–06 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/570&r=ecm |
By: | Francq, Christian; Sucarrat, Genaro |
Abstract: | Estimation of log-GARCH models via the ARMA representation is attractive because it makes available a vast body of established results from the ARMA literature. We propose an exponential Chi-squared QMLE for log-GARCH models via the ARMA representation. The advantage of the estimator is that it corresponds to the theoretically and empirically important case where the conditional error of the log-GARCH model is normal. We prove the consistency and asymptotic normality of the estimator, and show that, asymptotically, it is as efficient as the standard QMLE in the log-GARCH(1,1) case. We also verify and study our results in finite samples by Monte Carlo simulations. An empirical application illustrates the versatility and usefulness of the estimator. |
Keywords: | Log-GARCH, EGARCH, Quasi Maximum Likelihood, Exponential Chi-Squared, ARMA |
JEL: | C13 C22 C58 |
Date: | 2013–10–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51783&r=ecm |
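The ARMA representation this estimator relies on can be verified numerically. The sketch below is a minimal illustration with assumed parameter values, not the proposed exponential Chi-squared QMLE itself: it simulates a log-GARCH(1,1) process and checks that the ARMA(1,1) identity for log x_t² holds exactly:

```python
import numpy as np

# log-GARCH(1,1):  x_t = sigma_t * eta_t,
#   log sigma_t^2 = omega + alpha * log x_{t-1}^2 + beta * log sigma_{t-1}^2
# (parameter values are illustrative)
omega, alpha, beta = 0.1, 0.05, 0.9
rng = np.random.default_rng(2)
n = 10_000
eta = rng.standard_normal(n)

logsig2 = np.empty(n)
x = np.empty(n)
logsig2[0] = omega / (1 - alpha - beta)
x[0] = np.exp(0.5 * logsig2[0]) * eta[0]
for t in range(1, n):
    logsig2[t] = omega + alpha * np.log(x[t - 1] ** 2) + beta * logsig2[t - 1]
    x[t] = np.exp(0.5 * logsig2[t]) * eta[t]

# ARMA(1,1) representation of y_t = log x_t^2, with u_t = log eta_t^2 - mu:
#   y_t = omega + (1 - beta) * mu + (alpha + beta) * y_{t-1} + u_t - beta * u_{t-1}
# The identity holds exactly for any constant mu used consistently on both sides.
y = np.log(x ** 2)
mu = np.mean(np.log(eta ** 2))   # sample stand-in for E[log eta^2]
u = np.log(eta ** 2) - mu
resid = y[1:] - (omega + (1 - beta) * mu + (alpha + beta) * y[:-1]
                 + u[1:] - beta * u[:-1])
print(np.max(np.abs(resid)))     # zero up to floating-point error
```

In practice u_t is unobserved, which is exactly why the ARMA-based QMLE described in the abstract is needed; the sketch only confirms the algebraic link between the two representations.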
By: | Li, Yushu (Department of Business and Management Science, Norwegian School of Economics); Andersson, Fredrik N. G. (Department of Economics, Lund University) |
Abstract: | Hong and Kao (2004) proposed a panel data test for serial correlation of unknown form. However, their test is computationally difficult to implement, and simulation studies show the test to have poor small-sample properties. We extend Gencay's (2011) time series test for serial correlation to the panel data case in the framework proposed by Hong and Kao (2004). Our new test maintains the advantages of the Hong and Kao (2004) test, and it is simpler and easier to implement. Furthermore, simulation results show that our test converges more quickly and hence has better small-sample properties. |
Keywords: | energy distribution; MODWT; serial correlation; static and dynamic panel models |
JEL: | C11 C12 C15 |
Date: | 2013–11–28 |
URL: | http://d.repec.org/n?u=RePEc:hhs:lunewp:2013_039&r=ecm |
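The energy-distribution idea behind wavelet-based serial-correlation tests can be sketched with level-1 Haar MODWT coefficients. This is not the proposed panel test statistic, only an illustration of the principle it builds on: under serial independence, the level-1 detail coefficients carry exactly half of the total energy, and serial correlation moves that share away from one half:

```python
import numpy as np

def level1_energy_share(x):
    # Haar MODWT level-1 coefficients: detail d_t = (x_t - x_{t-1}) / 2,
    # scaling s_t = (x_t + x_{t-1}) / 2; together they conserve the variance
    d = (x[1:] - x[:-1]) / 2
    s = (x[1:] + x[:-1]) / 2
    return np.mean(d ** 2) / (np.mean(d ** 2) + np.mean(s ** 2))

rng = np.random.default_rng(3)
n = 100_000
white = rng.standard_normal(n)

# AR(1) with phi = 0.8 shifts energy towards low frequencies
shocks = rng.standard_normal(n)
ar1 = np.empty(n)
ar1[0] = shocks[0]
for t in range(1, n):
    ar1[t] = 0.8 * ar1[t - 1] + shocks[t]

# The detail share equals (1 - rho_1) / 2: one half under serial
# independence, (1 - 0.8) / 2 = 0.1 under this AR(1)
print(level1_energy_share(white))
print(level1_energy_share(ar1))
```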
By: | Miguel A. Delgado; Peter M Robinson |
Abstract: | We develop non-nested tests in a general spatial, spatio-temporal or panel data context. The spatial aspect can be interpreted quite generally, in either a geographical sense, or employing notions of economic distance, or even when parametric modelling arises in part from a common factor or other structure. In the former case, observations may be regularly-spaced across one or more dimensions, as is typical with much spatio-temporal data, or irregularly-spaced across all dimensions; both isotropic models and non-isotropic models can be considered, and a wide variety of correlation structures. In the second case, models involving spatial weight matrices are covered, such as "spatial autoregressive models". The setting is sufficiently general to potentially cover other parametric structures such as certain factor models, and vector-valued observations, and here our preliminary asymptotic theory for parameter estimates is of some independent value. The test statistic is based on a Gaussian pseudo-likelihood ratio, and is shown to have an asymptotic standard normal distribution under the null hypothesis that one of the two models is correct. A small Monte Carlo study of finite-sample performance is included. |
Keywords: | Non-nested test, spatial correlation, pseudo maximum likelihood estimation |
JEL: | C12 C21 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/568&r=ecm |
By: | Zhu, Ke; Yu, Philip L.H.; Li, Wai Keung |
Abstract: | This paper investigates a quasi-likelihood ratio (LR) test for the thresholds in buffered autoregressive processes. Under the null hypothesis of no threshold, the LR test statistic converges to a function of a centered Gaussian process. Under local alternatives, this LR test has nontrivial asymptotic power. Furthermore, a bootstrap method is proposed to obtain the critical value for our LR test. Simulation studies and one real example are given to assess the performance of this LR test. The proof in this paper is not standard and can be used in other non-linear time series models. |
Keywords: | AR(p) model; Bootstrap method; Buffered AR(p) model; Likelihood ratio test; Marked empirical process; Threshold AR(p) model. |
JEL: | C1 C12 |
Date: | 2013–11–25 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51706&r=ecm |
By: | Leucht, Anne; Neumann, Michael H.; Kreiss, Jens-Peter |
Abstract: | We provide a consistent specification test for GARCH(1,1) models based on a test statistic of Cramér-von Mises type. Since the limit distribution of the test statistic under the null hypothesis depends on unknown quantities in a complicated manner, we propose a model-based (semiparametric) bootstrap method to approximate critical values of the test and verify its asymptotic validity. Finally, we illuminate the finite sample behavior of the test by some simulations. |
Keywords: | Bootstrap , Cramér-von Mises test , GARCH processes , V-statistic |
JEL: | C12 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:mnh:wpaper:35107&r=ecm |
By: | Boswijk, H. P.; Zu, Y. |
Abstract: | The paper generalises recent unit root tests for nonstationary volatility to a multivariate context. Persistent changes in the innovation variance matrix lead to size distortions in conventional cointegration tests, but also offer the possibility of increased power if the time-varying volatilities and correlations are taken into account. The testing procedures are based on a likelihood analysis of the vector autoregressive model with a conditional covariance matrix that may be estimated nonparametrically. We find that under suitable conditions, adaptation with respect to the volatility matrix process is possible, in the sense that nonparametric volatility estimation does not lead to a loss of asymptotic local power. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:cty:dpaper:13/08&r=ecm |
By: | Jungyoon Lee; Peter M Robinson |
Abstract: | Nonparametric regression is developed for data with both a temporal and a cross-sectional dimension. The model includes additive, unknown, individual-specific components and allows also for cross-sectional and temporal dependence and conditional heteroscedasticity. A simple nonparametric estimate is shown to be dominated by a GLS-type one. Asymptotically optimal bandwidth choices are justified for both estimates. Feasible optimal bandwidths, and feasible optimal regression estimates, are asymptotically justified, with finite sample performance examined in a Monte Carlo study. |
Keywords: | Panel data, Nonparametric regression, Cross-sectional dependence, Generalized least squares, Optimal bandwidth |
JEL: | C13 C14 C23 |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/569&r=ecm |
By: | Joshua C C Chan; Cody Y L Hsiao |
Abstract: | Financial time series often exhibit properties that depart from the usual assumptions of serial independence and normality. These include volatility clustering, heavy-tailedness and serial dependence. A voluminous literature on different approaches for modeling these empirical regularities has emerged in the last decade. In this paper we review the estimation of a variety of highly flexible stochastic volatility models, and introduce some efficient algorithms based on recent advances in state space simulation techniques. These estimation methods are illustrated via empirical examples involving precious metal and foreign exchange returns. The corresponding Matlab code is also provided. |
Keywords: | stochastic volatility, scale mixture of normal, state space model, Markov chain Monte Carlo, financial data |
JEL: | C11 C22 C58 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2013-74&r=ecm |
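A minimal version of the baseline model in this literature can be sketched as follows. This is a plain simulation with illustrative parameter values, not the state space estimation algorithms the paper reviews; it shows how a basic stochastic volatility model reproduces heavy tails and volatility clustering from purely Gaussian shocks:

```python
import numpy as np

# Basic stochastic volatility model, a standard baseline in this literature:
#   r_t = exp(h_t / 2) * eps_t,   h_t = mu + phi * (h_{t-1} - mu) + sigma_v * v_t
# (parameter values below are illustrative)
mu, phi, sigma_v = -1.0, 0.97, 0.2
rng = np.random.default_rng(4)
n = 200_000
v = rng.standard_normal(n)
eps = rng.standard_normal(n)

h = np.empty(n)
h[0] = mu
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_v * v[t]
r = np.exp(h / 2) * eps

def excess_kurtosis(z):
    z = z - z.mean()
    return np.mean(z ** 4) / np.mean(z ** 2) ** 2 - 3.0

def acf1(z):
    z = z - z.mean()
    return np.mean(z[1:] * z[:-1]) / np.var(z)

# Heavy tails despite Gaussian shocks, no linear autocorrelation in returns,
# but clear autocorrelation in squared returns (volatility clustering)
print(excess_kurtosis(r))
print(acf1(r), acf1(r ** 2))
```

Estimating mu, phi and sigma_v from the returns alone is the hard part, since h_t is latent; that is where the MCMC and state space simulation methods surveyed in the paper come in.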
By: | Ilaria Lucrezia Amerise (Dipartimento di Economia, Statistica e Finanza, Università della Calabria) |
Abstract: | In this article we are concerned with a collection of multiple linear regressions that enable the researcher to gain an impression of the entire conditional distribution of a response variable given the specifications for the explanatory variables. In particular, we investigate the advantage of using a new method of parametric estimation for non-crossing quantile regressions. The main tool is a weighting system for the observations that aims to reduce the effect of contamination of the sampled population on the estimated parameters by diminishing the effect of outliers. The performance of the new estimators has been evaluated on a number of data sets. We had considerable success in avoiding intersections and at the same time improving the global fitting of conditional quantile regressions. We conjecture that in other situations (e.g. data with a high level of skewness, non-constant variances, unusual and uncertain data) the method of weighted non-crossing quantiles will lead to estimators with good robustness properties. |
Keywords: | conditional quantiles, monotonicity problem, estimation under constraints |
JEL: | C21 C31 C6 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:clb:wpaper:201308&r=ecm |
By: | Montes-Rojas, G.; Galvao, A. F. |
Abstract: | We propose to model endogeneity bias using prior distributions of moment conditions. The estimator can be obtained both as a method-of-moments estimator and in a Ridge penalized regression framework. We show the estimator's relation to a Bayesian estimator. |
Keywords: | Endogeneity; Shrinkage; Ridge regression; Method of moments |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:cty:dpaper:13/09&r=ecm |
By: | Hjertstrand, Per (Research Institute of Industrial Economics (IFN)) |
Abstract: | Revealed preference tests are widely used in empirical applications of consumer rationality. These are static tests and consequently lack the ability to handle measurement errors in the data. This paper extends and generalizes existing procedures that account for measurement errors in revealed preference tests. In particular, it introduces a very efficient method to implement these procedures, which makes them operational for large data sets. The paper illustrates the new method for both classical and Berkson measurement-error models. |
Keywords: | Berkson measurement errors; Classical measurement errors; GARP; Revealed preference |
JEL: | C43 D12 |
Date: | 2013–11–21 |
URL: | http://d.repec.org/n?u=RePEc:hhs:iuiwop:0990&r=ecm |
By: | W. Robert Reed (University of Canterbury) |
Abstract: | A common practice in applied econometrics consists of replacing a suspected endogenous variable with its lagged values. This note demonstrates that lagging an endogenous variable does not enable one to escape simultaneity bias. The associated estimates are still inconsistent, and hypothesis testing is invalid. I show that the only time a strategy of replacing Xt with Xt-1 enables consistent estimation of structural parameters is when there is no simultaneity to begin with. The implication of this study is that researchers who employ this practice should be explicit about why lagging is an effective estimation strategy. |
Keywords: | Simultaneity, Endogeneity, Lagged variables |
JEL: | C1 C5 C15 |
Date: | 2013–10–16 |
URL: | http://d.repec.org/n?u=RePEc:cbt:econwp:13/32&r=ecm |
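The note's point is easy to reproduce by simulation. The sketch below uses our own illustrative system and parameter values, not anything taken from the paper: it shows that regressing Y_t on X_{t-1} does give a consistent OLS estimate, but of the reduced-form quantity βρ/(1-βδ) rather than of the structural β:

```python
import numpy as np

# Illustrative simultaneous system (not taken from the paper):
#   Y_t = beta * X_t + eps_t,   X_t = delta * Y_t + rho * X_{t-1} + u_t
beta, delta, rho = 0.5, 0.5, 0.4
rng = np.random.default_rng(5)
n = 200_000
eps = rng.standard_normal(n)
u = rng.standard_normal(n)

# Reduced form for X: X_t = (rho * X_{t-1} + delta * eps_t + u_t) / (1 - beta * delta)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = (rho * x[t - 1] + delta * eps[t] + u[t]) / (1 - beta * delta)
y = beta * x + eps

# OLS of Y_t on X_t is simultaneity-biased; OLS of Y_t on X_{t-1} converges,
# but to beta * rho / (1 - beta * delta), not to the structural beta
b_contemp = np.cov(y[1:], x[1:])[0, 1] / np.var(x[1:])
b_lagged = np.cov(y[1:], x[:-1])[0, 1] / np.var(x[:-1])
print(b_contemp)   # above beta = 0.5
print(b_lagged)    # near 0.267, not 0.5
```

So lagging trades one inconsistency for another: the coefficient is estimated precisely, but it is not the parameter of interest unless there is no simultaneity (delta = 0) to begin with.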
By: | Fildes, Robert; Petropoulos, Fotios |
Abstract: | A major problem for many organisational forecasters is to choose the appropriate forecasting method for a large number of data series. Model selection aims to identify the best method of forecasting for an individual series within the data set. Various selection rules have been proposed in order to enhance forecasting accuracy. In theory, model selection is appealing, as no single extrapolation method is better than all others for all series in an organizational data set. However, empirical results have demonstrated limited effectiveness of these often complex rules. The current study explores the circumstances under which model selection is beneficial. Three measures are examined for characterising the data series, namely predictability (in terms of the relative performance of the random walk but also a method, theta, that performs well), trend and seasonality in the series. In addition, the attributes of the data set and the methods also affect selection performance, including the size of the pools of methods under consideration, the stability of methods’ performance and the correlation between methods. In order to assess the efficacy of model selection in the cases considered, simple selection rules are proposed, based on within-sample best fit or best forecasting performance for different forecast horizons. Individual (per series) selection is contrasted against the simpler approach (aggregate selection), where one method is applied to all data series. Moreover, simple combination of methods also provides an operational benchmark. The analysis shows that individual selection works best when specific sub-populations of data are considered (trended or seasonal series), but also when methods’ relative performance is stable over time or no method is dominant across the data series. |
Keywords: | automatic model selection, comparative methods, extrapolative methods, combination, stability |
JEL: | C13 C22 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51772&r=ecm |
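The contrast between individual and aggregate selection can be sketched on a toy mixed population. This is our own construction with two deliberately simple methods; the study itself uses richer method pools, real data sets and more refined selection rules:

```python
import numpy as np

rng = np.random.default_rng(6)
n_series, n_obs = 200, 60

def naive_fc(train):   # random walk method: forecast = last observation
    return train[-1]

def mean_fc(train):    # mean method: forecast = in-sample average
    return train.mean()

methods = {"naive": naive_fc, "mean": mean_fc}
errs = {"naive": [], "mean": [], "select": []}

# Mixed population: half random walks (naive is best), half noisy levels
# (the mean method is best)
for i in range(n_series):
    if i % 2 == 0:
        series = np.cumsum(rng.standard_normal(n_obs + 1))
    else:
        series = 10.0 + rng.standard_normal(n_obs + 1)
    train, actual = series[:n_obs], series[n_obs]
    # individual selection: best one-step fit over the last 10 training points
    fit = {m: np.mean([(series[t] - f(series[:t])) ** 2
                       for t in range(n_obs - 10, n_obs)])
           for m, f in methods.items()}
    best = min(fit, key=fit.get)
    for m, f in methods.items():
        errs[m].append((actual - f(train)) ** 2)
    errs["select"].append((actual - methods[best](train)) ** 2)

mse = {m: np.mean(v) for m, v in errs.items()}
print(mse)   # individual selection beats each single method applied to all series
```

The mixed population is the favourable case the paper identifies: when no single method dominates across series, per-series selection pays off; if one method were best everywhere, aggregate selection would do just as well with less effort.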
By: | Rong Zhang; Brett A. Inder; Xibin Zhang |
Abstract: | We present a Bayesian sampling algorithm for parameter estimation in a discrete-response model, where the dependent variables contain two layers of binary choices and one ordered response. Our investigation is motivated by an empirical study using such a double-selection rule for three labour-market outcomes, namely labour-force participation, employment and occupational skill level. It is of particular interest to measure the marginal effects of some mental health factors on these labour-market outcomes. The contribution of our investigation is to present a sampling algorithm, which is a hybrid of Gibbs and Metropolis-Hastings algorithms. In Monte Carlo simulations, numerical maximization of likelihood fails to converge for more than half of the simulated samples. Our Bayesian method represents a substantial improvement: it converges in every sample, and performs with similar or better precision than maximum likelihood. We apply our sampling algorithm to the double-selection model of labour-force participation, employment and occupational skill level, where marginal effects of explanatory variables, in particular the mental health factors, on the three labour-force outcomes are assessed through 95% Bayesian credible intervals. The proposed sampling algorithm can easily be modified for other multivariate nonlinear models that involve selectivity and are difficult to estimate by other means. |
Keywords: | Gibbs sampler, Marginal effects, Mental illness, Metropolis-Hastings algorithm, Ordered outcome. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2013-24&r=ecm |
By: | Efstathios Avdis; Jessica A. Wachter |
Abstract: | The equity premium is usually estimated by taking the sample average of returns. We propose an alternative estimator, based on maximum likelihood, that takes into account additional information contained in dividends and prices. Applying our method to the postwar sample leads to an economically significant reduction from the sample average of 6.4% to a maximum likelihood estimate of 5.1%. Using simulations, we show that our method is robust to mis-specification and is substantially less noisy than the sample average. |
JEL: | C32 C58 G11 G12 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:19684&r=ecm |
By: | Benedikt Rotermann; Bernd Wilfling |
Abstract: | This paper analyzes conditional stock-price volatility within a present-value framework that includes (rational) periodically collapsing bubbles, as introduced by Evans (1991). To this end, we derive an analytical closed-form volatility formula for the stock price. The formula establishes a direct link between the bubble component and stock-price volatility. Using a Bayesian Monte-Carlo estimation technique (the particle filter), we demonstrate how to fit the parametric volatility equation to stock-market data. |
Keywords: | Present value model, Evans bubbles, conditional volatility, particle filter estimation |
JEL: | C1 G1 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:cqe:wpaper:2813&r=ecm |
By: | Francesca DI IORIO (Università di Napoli "Federico II"); Stefano FACHIN (Università La Sapienza, Roma); Riccardo LUCCHETTI (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali) |
Abstract: | We review the I(2) model in the perspective of its application to near-I(2) data, and report the results of some Monte Carlo simulations on the small sample performance of asymptotic tests on the long-run coefficients in both I(2) and near-I(2) systems. Our findings suggest that these tests suffer from some finite-sample issues, such as size bias. However, the behaviour of these statistics is not markedly different in the I(2) and near-I(2) case at ordinary sample sizes, so the usage of the I(2) model with near-I(2) data is perfectly defensible in finite samples. |
Keywords: | Cointegration, Hypothesis testing, I(2), near-I(2) |
JEL: | C12 C32 C52 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:anc:wpaper:395&r=ecm |
By: | Steven E. Pav |
Abstract: | The asymptotic distribution of the Markowitz portfolio is derived, for the general case (assuming fourth moments of returns exist), and for the case of multivariate normal returns. The derivation allows for inference which is robust to heteroskedasticity and autocorrelation of moments up to order four. As a side effect, one can estimate the proportion of error in the Markowitz portfolio due to mis-estimation of the covariance matrix. A likelihood ratio test is given which generalizes Dempster's Covariance Selection test to allow inference on linear combinations of the precision matrix and the Markowitz portfolio. |
Date: | 2013–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1312.0557&r=ecm |
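The plug-in quantity whose asymptotic distribution the paper derives can be sketched by simulation (illustrative moments, sample size and replication count; this replaces the paper's analytic results with brute force):

```python
import numpy as np

# Plug-in Markowitz portfolio: w_hat = inv(Sigma_hat) @ mu_hat (unnormalized).
# The moments below are illustrative choices, not taken from the paper.
rng = np.random.default_rng(7)
n = 250   # about one year of daily returns
mu = np.array([0.05, 0.03, 0.04])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_true = np.linalg.solve(Sigma, mu)

draws = []
for _ in range(2000):
    R = rng.multivariate_normal(mu, Sigma, size=n)
    # plug sample mean and sample covariance into the Markowitz formula
    draws.append(np.linalg.solve(np.cov(R, rowvar=False), R.mean(axis=0)))
draws = np.array(draws)

print(w_true)
print(draws.mean(axis=0))   # centered near w_true
print(draws.std(axis=0))    # sizeable sampling noise at n = 250
```

The spread of the simulated draws is exactly what the paper's asymptotic distribution characterizes in closed form, including the decomposition into mean-estimation and covariance-estimation error.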
By: | Zuhair Bahraou (Department of Econometrics, Riskcenter-IREA, University of Barcelona, Av. Diagonal, 690, 08034 Barcelona, Spain); Catalina Bolancé (Department of Econometrics, Riskcenter-IREA, University of Barcelona, Av. Diagonal, 690, 08034 Barcelona, Spain); Ana M. Pérez-Marín (Department of Econometrics, Riskcenter-IREA, University of Barcelona, Av. Diagonal, 690, 08034 Barcelona, Spain) |
Abstract: | Testing whether or not data could have been generated by a family of extreme value copulas is difficult. We generalize a test and prove that it can be applied regardless of the alternative hypothesis. We also study the effect of using different extreme value copulas in the context of risk estimation, where risk is measured by a quantile. Our results are motivated by a bivariate sample of losses from a real database of auto insurance claims. The methods are implemented in R. |
Keywords: | Extreme value copula, Extreme value distributions, Quantile |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap2013-09&r=ecm |
By: | Andersson, Fredrik N.G. (Department of Economics, Lund University); Li, Yushu (Department of Business and Management Science, Norwegian School of Economics) |
Abstract: | Several central banks have adopted inflation targets. The implementation of these targets is flexible; the central banks aim to meet the target over the long term but allow inflation to deviate from the target in the short term in order to avoid unnecessary volatility in the real economy. In this paper, we propose modeling the degree of flexibility using an ARFIMA model. Under the assumption that the central bankers control the long-run inflation rates, the fractional integration order captures the flexibility of the inflation targets. A higher integration order is associated with a more flexible target. Several estimators of the fractional integration order have been proposed in the literature. Grassi and Magistris (2011) show that a state-space-based maximum likelihood estimator is superior to other estimators, but our simulations show that this estimator is severely biased for nearly non-stationary time series. We resolve this issue by using a Bayesian Markov chain Monte Carlo (MCMC) estimator. Applying this estimator to inflation from six inflation-targeting countries for the period 1999M1 to 2013M3, we find that inflation is integrated of order 0.8 to 0.9 depending on the country. The inflation targets are thus implemented with a high degree of flexibility. |
Keywords: | fractional integration; inflation-targeting; state space model |
JEL: | C32 E52 |
Date: | 2013–11–28 |
URL: | http://d.repec.org/n?u=RePEc:hhs:lunewp:2013_038&r=ecm |
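Fractional integration of the kind estimated above can be sketched numerically. The example below simulates an ARFIMA(0, d, 0) series and recovers d with the classical Geweke/Porter-Hudak log-periodogram regression; note that the paper itself uses a Bayesian MCMC state-space estimator, and all tuning choices here (sample size, bandwidth m) are illustrative:

```python
import numpy as np

def arfima0d0(d, n, rng):
    # ARFIMA(0, d, 0) via the MA(infinity) expansion of (1 - L)^(-d):
    # psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    e = rng.standard_normal(2 * n)
    return np.convolve(e, psi)[n:2 * n]   # keep the fully-initialized stretch

def gph_estimate(x, m):
    # Geweke/Porter-Hudak log-periodogram regression: the slope of
    # log I(lambda_j) on -log(4 sin^2(lambda_j / 2)) estimates d
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    return np.polyfit(-np.log(4 * np.sin(lam / 2) ** 2), np.log(I), 1)[0]

rng = np.random.default_rng(8)
x = arfima0d0(0.3, 8192, rng)
d_hat = gph_estimate(x, m=150)
print(d_hat)   # roughly 0.3; GPH is noisy, so expect a deviation of a few hundredths
```

The GPH estimator's sampling noise at this bandwidth is exactly the kind of imprecision that motivates the likelihood-based and Bayesian alternatives discussed in the abstract, especially when d is close to the non-stationary boundary.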
By: | Erich Pinzón Fuchs (UP1 UFR02 - Université Paris 1, Panthéon-Sorbonne - UFR d'Économie - Université Paris I - Panthéon-Sorbonne - PRES HESAM) |
Abstract: | Econometrics has become such an obvious, objective - almost natural - tool that economists often forget that it has a history of its own, a complex and sometimes problematic history. Two works - Morgan (1990) and Qin (1993) - constitute the Received View of the history of econometrics. Basing our analysis on Leo Corry's methodological (and historiographical) framework of image and body of knowledge, the main purpose of this dissertation is to provide a critical account of the Received View. Our main criticism is that historians of econometrics have a particular image of knowledge that stems from within econometrics itself, generating a problem of reflexivity. This means that historians of econometrics evaluate econometrics and its history from an econometrician's point of view, setting very specific criteria for what should be considered "true", what should be studied, and what questions the scientific community should ask. This reflexive vision has led the Received View to write an internalist and funnel-shaped history of econometrics, presenting it as a linear process progressing towards the best possible solution: Structural Econometrics and Haavelmo's Probability Approach in Econometrics (1944). The present work suggests that a new history of econometrics is needed - one that overcomes the reflexivity problem and yields a messier and more convoluted, but also richer, vision of econometrics' evolution, rather than the linear path towards progress presented by the Received View. |
Keywords: | history of econometrics, economic methodology and philosophy, history of recent economic thought, quantification in economics, image and body of knowledge, reflexivity |
Date: | 2013–06–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:dumas-00906285&r=ecm |
By: | S. Boragan Aruoba; Luigi Bocola; Frank Schorfheide |
Abstract: | We develop a new class of nonlinear time-series models to identify nonlinearities in the data and to evaluate nonlinear DSGE models. U.S. output growth and the federal funds rate display nonlinear conditional mean dynamics, while inflation and nominal wage growth feature conditional heteroskedasticity. We estimate a DSGE model with asymmetric wage/price adjustment costs and use predictive checks to assess its ability to account for nonlinearities. While it is able to match the nonlinear inflation and wage dynamics, thanks to the estimated downward wage/price rigidities, these do not spill over to output growth or the interest rate. |
Keywords: | Wages ; Prices ; Inflation (Finance) ; Nonlinear theories ; Time-series analysis |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:13-47&r=ecm |