Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics, 2014-11-22, edited by Sune Karlsson

High Dimensional Generalized Empirical Likelihood for Moment Restrictions with Dependent Data
http://d.repec.org/n?u=RePEc:pra:mprapa:59640&r=ecm
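As background for the abstract below (not from the paper itself): in the classical fixed-dimension case, the generalized empirical likelihood estimator of Newey and Smith (2004) solves a saddle-point problem over the parameter and an auxiliary vector, given moment functions g(X_i, theta) and a concave carrier function rho:

```latex
\hat{\theta}_{\mathrm{GEL}}
  = \arg\min_{\theta \in \Theta}\;
    \sup_{\lambda \in \hat{\Lambda}_n(\theta)}
    \frac{1}{n}\sum_{i=1}^{n} \rho\bigl(\lambda' g(X_i, \theta)\bigr)
```

Here $\rho(v)=\log(1-v)$ yields empirical likelihood, $\rho(v)=-e^{v}$ exponential tilting, and a quadratic $\rho$ the continuous-updating GMM estimator. The paper's contribution is to let the dimensions of $\theta$ and $g$ diverge with the sample size under weak dependence.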
This paper considers maximum generalized empirical likelihood (GEL) estimation and inference on parameters identified by high dimensional moment restrictions with weakly dependent data, when the dimensions of the moment restrictions and the parameters diverge along with the sample size. Consistency with rates and asymptotic normality of the GEL estimator are obtained by properly restricting the growth rates of the dimensions of the parameters and the moment restrictions, as well as the degree of data dependence. It is shown that even in the high dimensional time series setting, the GEL ratio can still behave asymptotically like a chi-square random variable. A consistent test for over-identification is proposed, and a penalized GEL method is provided for estimation in sparse settings.
Chang, Jinyuan; Chen, Song Xi; Chen, Xiaohong (2014)
Keywords: Generalized empirical likelihood; High dimensionality; Penalized likelihood; Variable selection; Over-identification test; Weak dependence

Efficient estimation of heterogeneous coefficients in panel data models with common shock
http://d.repec.org/n?u=RePEc:pra:mprapa:59312&r=ecm
This paper investigates efficient estimation of heterogeneous coefficients in panel data models with common shocks, which have been a particular focus of the recent theoretical and empirical literature. We propose a new two-step method to estimate the heterogeneous coefficients. In the first step, maximum likelihood (ML) is used to estimate the loadings and idiosyncratic variances. The second step estimates the heterogeneous coefficients by using the structural relations implied by the model and replacing the unknown parameters with their ML estimates. We establish the asymptotic theory of our estimator, including consistency, asymptotic representation, and limiting distribution. The two-step estimator is asymptotically efficient in the sense that it has the same limiting distribution as the infeasible generalized least squares (GLS) estimator. Intensive Monte Carlo simulations show that the proposed estimator performs robustly in a variety of data setups.
Li, Kunpeng; Lu, Lina (2014-10)
Keywords: Factor analysis; Block diagonal covariance; Panel data models; Common shocks; Maximum likelihood estimation; Heterogeneous coefficients; Inferential theory

Particle learning for Bayesian non-parametric Markov Switching Stochastic Volatility model
http://d.repec.org/n?u=RePEc:cte:wsrepe:ws142819&r=ecm
This paper designs a Particle Learning (PL) algorithm for the estimation of Bayesian nonparametric Stochastic Volatility (SV) models for financial data. The performance of this particle method is compared with that of standard Markov Chain Monte Carlo (MCMC) methods for non-parametric SV models. PL performs as well as MCMC while also allowing for on-line inference: the posterior distributions are updated as new data are observed, which is prohibitively costly with MCMC. Further, a new non-parametric SV model is proposed that incorporates Markov switching jumps. The proposed model is estimated using PL and tested on simulated data. Finally, the performance of the two non-parametric SV models, with and without Markov switching, is compared using real financial time series. The results show that including a Markov switching specification provides higher predictive power in the tails of the distribution.
Audrone Virbickaite; Hedibert F. Lopes; Concepción Ausín; Pedro Galeano (2014-10)
Keywords: Dirichlet Process Mixture; Markov Switching; MCMC; Particle Learning; Stochastic Volatility; Sequential Monte Carlo

Optimal Formulations for Nonlinear Autoregressive Processes
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140103&r=ecm
We develop optimal formulations for nonlinear autoregressive models by representing them as linear autoregressive models with time-varying temporal dependence coefficients. We propose a parameter updating scheme based on the score of the predictive likelihood function at each time point. The resulting time-varying autoregressive model is formulated as a nonlinear autoregressive model and is compared with threshold and smooth-transition autoregressive models. We establish the information theoretic optimality of the score driven nonlinear autoregressive process and the asymptotic theory for maximum likelihood parameter estimation. The performance of our model in extracting time-varying or nonlinear dependence in finite samples is studied in a Monte Carlo exercise. In our empirical study we present the in-sample and out-of-sample performance of our model for a weekly time series of unemployment insurance claims.
Francisco Blasques; Siem Jan Koopman; André Lucas (2014-08-11)
Keywords: Asymptotic theory; Dynamic models; Observation driven time series models; Smooth-transition model; Time-varying parameters; Threshold autoregressive model

Band Width Selection for High Dimensional Covariance Matrix Estimation
http://d.repec.org/n?u=RePEc:pra:mprapa:59641&r=ecm
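The paper below builds on the banding covariance estimator of Bickel and Levina (2008a). A minimal sketch of that banding step follows; the split-sample band width selector here is only an illustrative stand-in, not the paper's Frobenius-risk estimator:

```python
import numpy as np

def banding_estimator(X, k):
    """Bickel-Levina banding: keep sample-covariance entries within k
    of the diagonal and set the rest to zero."""
    S = np.cov(X, rowvar=False)
    idx = np.arange(S.shape[0])
    return S * (np.abs(idx[:, None] - idx[None, :]) <= k)

def select_bandwidth(X, candidates):
    """Illustrative stand-in for the paper's selector: band the covariance
    of one half of the sample and pick the k minimizing the Frobenius
    distance to the other half's sample covariance."""
    n = X.shape[0] // 2
    target = np.cov(X[n:], rowvar=False)
    return min(candidates,
               key=lambda k: np.linalg.norm(banding_estimator(X[:n], k) - target))
```

The paper's actual selector minimizes an empirical estimate of the expected squared Frobenius norm of the estimation error and comes with ratio consistency guarantees; the sample-splitting device above only conveys the flavor of the tuning problem.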
The banding estimator of Bickel and Levina (2008a) and its tapering version of Cai, Zhang and Zhou (2010) are important high dimensional covariance estimators. Both require choosing a band width parameter. We propose a band width selector for the banding covariance estimator that minimizes an empirical estimate of the expected squared Frobenius norm of the estimation error matrix. The ratio consistency of the band width selector to the underlying band width is established. We also provide a lower bound for the coverage probability that an interval around the band width estimate contains the underlying band width. Extensions to band width selection for the tapering estimator and threshold level selection for the thresholding covariance estimator are made. Numerical simulations and a case study on sonar spectrum data confirm and demonstrate the proposed band width and threshold estimation approaches.
Qiu, Yumou; Chen, Song Xi (2014)
Keywords: Bandable covariance; Banding estimator; Large $p$, small $n$; Ratio-consistency; Tapering estimator; Thresholding estimator

Interval-valued Time Series: Model Estimation based on Order Statistics
http://d.repec.org/n?u=RePEc:ucr:wpaper:201429&r=ecm
Gloria Gonzalez-Rivera; Wei Lin (2014-09)

Low Frequency and Weighted Likelihood Solutions for Mixed Frequency Dynamic Factor Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140105&r=ecm
The multivariate analysis of a panel of economic and financial time series with mixed frequencies is a challenging problem. The standard solution is to analyze the mix of monthly and quarterly time series jointly by means of a multivariate dynamic model with a monthly time index: artificial missing values are inserted for the intermediate months of the quarterly time series. In this paper we explore an alternative solution for a class of dynamic factor models that is specified by means of a low frequency quarterly time index. We show that there is no need to introduce artificial missing values, while the high frequency (monthly) information is preserved and can still be analyzed. We also provide evidence that the analysis based on a low frequency specification can be carried out in a computationally more efficient way. A comparison study with existing mixed frequency procedures is presented and discussed. Furthermore, we modify the method of maximum likelihood in the context of a dynamic factor model. We introduce variable-specific weights in the likelihood function so that some variables' equations carry more weight in the estimation process. We derive the asymptotic properties of the weighted maximum likelihood estimator and show that the estimator is consistent and asymptotically normal. We also examine the weighted estimation method in a Monte Carlo study investigating the effect of different choices of weights in different scenarios. Finally, we empirically illustrate the new developments for the extraction of a coincident economic indicator from a small panel of mixed frequency economic time series.
Francisco Blasques; Siem Jan Koopman; Max Mallee (2014-08-11)
Keywords: Asymptotic theory; Forecasting; Kalman filter; Nowcasting; State space

Minimum Distance Estimation of Dynamic Models with Errors-In-Variables
http://d.repec.org/n?u=RePEc:fip:fedawp:2014-11&r=ecm
Empirical analysis often involves using inexact measures of desired predictors. The bias created by the correlation between the problematic regressors and the error term motivates the need for instrumental variables estimation. This paper considers a class of estimators that can be used when external instruments may not be available or are weak. The idea is to exploit the relation between the parameters of the model and the least squares biases. In cases when this mapping is not analytically tractable, a special algorithm is designed to simulate the latent predictors without completely specifying the processes that induce the biases. The estimators perform well in simulations of the autoregressive distributed lag model and the dynamic panel model. The methodology is used to re-examine the Phillips curve, in which the real activity gap is latent.
Gospodinov, Nikolay; Komunjer, Ivana; Ng, Serena (2014-08-01)
Keywords: measurement error; minimum distance; simulation estimation; dynamic panel

Dynamic Panels with Threshold Effect and Endogeneity
http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/577&r=ecm
This paper addresses the important and challenging issue of how best to model nonlinear asymmetric dynamics and cross-sectional heterogeneity simultaneously in a dynamic threshold panel data framework in which both the threshold variable and the regressors are allowed to be endogenous. Depending on whether the threshold variable is strictly exogenous or not, we propose two different estimation methods: first-differenced two-step least squares and first-differenced GMM. The former exploits the strict exogeneity of the threshold variable to achieve super-consistency of the threshold estimator. We provide asymptotic distributions for both estimators. Bootstrap-based tests for the presence of a threshold effect and for the exogeneity of the threshold variable are also developed. Monte Carlo studies provide support for our theoretical predictions. Finally, using UK and US company panel data, we provide two empirical applications investigating the asymmetric sensitivity of investment to cash flow and asymmetric dividend smoothing.
Myung Hwan Seo; Yongcheol Shin (2014-09)
Keywords: Dynamic Panel Threshold Models; Endogenous Threshold Effects and Regressors; FD-GMM and FD-2SLS Estimation; Linearity Test; Exogeneity Test; Investment and Dividend Smoothing

Dynamic Selection and Distributional Bounds on Search Costs in Dynamic Unit-Demand Models
http://d.repec.org/n?u=RePEc:osu:osuewp:14-02&r=ecm
This paper develops a dynamic model of consumer search that, despite placing very little structure on the dynamic problem faced by consumers, allows us to exploit intertemporal variation in within-period price and search cost distributions to estimate the population distribution from which consumers' search costs are initially drawn. We show that static approaches to estimating this distribution generally suffer from a dynamic sample selection bias, because forward-looking consumers with unit demand for a good may delay their purchase in a way that depends on their individual search cost. We analyze identification of the population search cost distribution using only price data, and develop estimable nonparametric upper and lower bounds on the distribution function as well as a nonlinear least squares estimator for parametric models. We also consider the additional identifying power of weak assumptions such as monotonicity of purchase probabilities in search costs. We apply our estimators to analyze the online market for two widely used econometrics textbooks. Our results suggest that static estimates of the search cost distribution are biased upwards, in a distributional sense, relative to the true population distribution. In a small-scale simulation study, we show that this is typical in a dynamic setting where consumers with high search costs are more likely to delay purchase than those with lower search costs.
Jason R. Blevins; Garrett T. Senney (2014-10)
Keywords: nonsequential search; consumer search; dynamic selection; nonparametric bounds

Score Driven Exponentially Weighted Moving Average and Value-at-Risk Forecasting
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140092&r=ecm
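The paper below replaces the squared lagged observation in the EWMA recursion with a score-based update. A stylized sketch under a Student's t forecasting distribution; the weight formula is one common t-based down-weighting and is an assumption here, not necessarily the paper's exact recursion:

```python
import numpy as np

def score_driven_ewma(returns, lam=0.94, nu=5.0):
    """EWMA-style variance recursion in which extreme observations are
    down-weighted via a Student's t score weight (stylized sketch)."""
    s2 = np.empty(len(returns))
    s2[0] = np.var(returns)  # crude initialization
    for t in range(len(returns) - 1):
        # weight shrinks toward 0 when returns[t]^2 is large relative to s2[t],
        # robustifying the update against fat-tailed shocks (assumed form)
        w = (nu + 1.0) / (nu - 2.0 + returns[t] ** 2 / s2[t])
        s2[t + 1] = lam * s2[t] + (1.0 - lam) * w * returns[t] ** 2
    return s2
```

With Gaussian innovations the weight is close to one and the recursion behaves like standard RiskMetrics EWMA; a single extreme return is damped rather than propagated fully into the variance forecast.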
We present a simple new methodology to allow for time variation in volatilities using a recursive updating scheme similar to the familiar RiskMetrics approach. We update parameters using the score of the forecasting distribution rather than squared lagged observations. This allows the parameter dynamics to adapt automatically to any non-normal data features and robustifies the subsequent volatility estimates. Our new approach nests several previously proposed extensions of the exponentially weighted moving average (EWMA) scheme. It also easily handles extensions to dynamic higher-order moments or other choices of the forecasting distribution. We apply the method to Value-at-Risk forecasting with Student's t distributions and a time-varying degrees of freedom parameter, and show that it is competitive with or better than earlier methods for volatility forecasting of individual stock returns and exchange rates.
André Lucas; Xin Zhang (2014-07-22)
Keywords: dynamic volatilities; time-varying higher order moments; integrated generalized autoregressive score models; Exponentially Weighted Moving Average (EWMA); Value-at-Risk (VaR)

Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales
http://d.repec.org/n?u=RePEc:arx:papers:1411.0496&r=ecm
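The paper below estimates regression parameters from detrended variances and covariances. A minimal single-scale sketch with linear detrending over non-overlapping windows; the function names and this simplification are mine, not the paper's:

```python
import numpy as np

def _detrended_residuals(profile, s):
    """Residuals from a linear fit in each non-overlapping window of length s."""
    n = (len(profile) // s) * s
    t = np.arange(s)
    res = []
    for start in range(0, n, s):
        seg = profile[start:start + s]
        res.append(seg - np.polyval(np.polyfit(t, seg, 1), t))
    return np.concatenate(res)

def dfa_beta(x, y, s):
    """Scale-s regression coefficient: detrended covariance of the
    integrated (cumulative-sum) series divided by the detrended variance
    of the integrated x series."""
    px = np.cumsum(x - np.mean(x))
    py = np.cumsum(y - np.mean(y))
    rx = _detrended_residuals(px, s)
    ry = _detrended_residuals(py, s)
    return np.mean(rx * ry) / np.mean(rx * rx)
```

Evaluating dfa_beta over a grid of window sizes s gives the scale-by-scale dependence profile the framework is designed to expose.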
We propose a novel framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and is designed to estimate regression parameters at different scales and under potential non-stationarity and power-law correlations. Selected examples from physics, finance and environmental sciences illustrate the usefulness of the framework.
Ladislav Kristoufek (2014-11)

Spurious Dependence
http://d.repec.org/n?u=RePEc:hhs:stavef:2014_010&r=ecm
We study the problem of potentially spurious attribution of dependence in moderate to large samples, where both the number of variables and the length of variable observations are growing. We approach this question of double asymptotics from both theoretical and empirical perspectives. For the theoretical characterization, we consider a combination of poissonization and large deviation techniques. For the empirics, we simulate a large dataset of i.i.d. variables and estimate dependence as both the sample size and the number of iterations grow. We represent the different effects of sample size versus length of variables via an empirical dependence surface. Finally, we apply the empirical method to a panel of financial data comprising daily stock returns for 60 companies. For both simulated and financial data, increasing the sample size reduces dependence estimates after a certain point. However, increasing the number of variables does not appear to attenuate the potential for spurious dependence, as measured by maximal Kendall's tau.
Chollete, Loran; Pena, Victor de la; Segers, Johan (2014-08-31)
Keywords: Double Asymptotics; Empirical Dependence Surface; Financial Data; Poissonization; Simulation; Spurious Dependence

Nonparametric Identification of Endogenous and Heterogeneous Aggregate Demand Models: Complements, Bundles and the Market Level
http://d.repec.org/n?u=RePEc:ihs:ihsesp:307&r=ecm
This paper studies nonparametric identification in market level demand models for differentiated products. We generalize common models by allowing the heterogeneity parameters (random coefficients) to have a nonparametric distribution across the population, and give conditions under which the density of the random coefficients is identified. We show that key identifying restrictions are provided by (i) a set of moment conditions generated by instrumental variables together with an inversion of aggregate demand in unobserved product characteristics; and (ii) an integral transform (Radon transform) that maps the random coefficient density to the aggregate demand. This feature is shown to be common across a wide class of models, and we illustrate it by studying leading demand models. Our examples include demand models based on multinomial choice (Berry, Levinsohn, Pakes, 1995), the choice of bundles of goods that can be substitutes or complements, and the choice of goods consumed in multiple units.
Dunker, Fabian; Hoderlein, Stefan; Kaido, Hiroaki (2014-10)

Evaluating Option Pricing Model Performance Using Model Uncertainty
http://d.repec.org/n?u=RePEc:crf:wpaper:14-06&r=ecm
The objective of this paper is to evaluate option pricing performance at the cross-sectional level. For this purpose, we propose a statistical framework that in particular accounts for the uncertainty associated with reported pricing performance. Instead of a single figure, we determine an entire probability distribution for the loss function used to measure option pricing performance. This methodology enables us to visualize the effect of parameter uncertainty on reported pricing performance. Using a data driven approach, we confirm previous evidence that standard volatility models with clustering and leverage effects are sufficient for option pricing purposes. In addition, we demonstrate that there is short-term persistence but long-term heterogeneity in cross-sectional option pricing information. This finding has two important implications. First, it justifies the practitioner's routine of refraining from time series approaches and instead estimating option pricing models on a cross-section by cross-section basis. Second, the long-term heterogeneity in option prices pinpoints the importance of measuring, comparing and testing option pricing models for each cross-section separately. To our knowledge, no statistical testing framework has previously been applied to a single cross-section of option prices; we propose a methodology that addresses this need. The proposed framework can be applied to a broad set of models and data. In the empirical part of the paper, we show, by means of an example, an application that uses a discrete time volatility model on S&P 500 European options.
Thorsten Lehnert; Gildas Blanchard; Dennis Bams (2014)
Keywords: option pricing; cross-section; estimation risk; parameter uncertainty; specification test; bootstrapping

Evaluating a Structural Model Forecast: Decomposition Approach
http://d.repec.org/n?u=RePEc:cnb:rpnrpn:2014/02&r=ecm
Macroeconomic forecasters are often criticized for a lack of transparency when presenting their forecasts. To address such criticism, the transparency of the forecasting process should be enhanced by tracing and explaining the effects of data revisions and expert judgment updates on variations in the forecasts. This paper presents a forecast decomposition analysis framework designed to examine the differences between two forecasts generated by a linear structural model. The differences between the forecasts can be decomposed into the contributions of various forecast elements, such as the effect of new data or expert judgment. The framework allows us to evaluate the contributions of forecast assumptions when expert judgment is applied in the expected way. The simplest application of this framework examines alternative forecast scenarios with different forecast assumptions. Next, a one-period difference between the forecasts' initial periods is added to the examination. Finally, a replication of the Inflation Forecast Evaluation presented in Inflation Report III/2013 is created to illustrate the full capabilities of the decomposition framework.
Frantisek Brazdik; Zuzana Humplova; Frantisek Kopriva (2014-08)
Keywords: Data revisions; DSGE models; forecasting; forecast revisions

The Cult of Statistical Significance - A Review
http://d.repec.org/n?u=RePEc:ind:igiwpp:2014-038&r=ecm
I present a review and extended discussion of The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives by Deirdre McCloskey and Stephen Ziliak, a work that raises important issues related to the practice of statistics and that has been widely commented upon. For this review, I draw upon several other works on statistics and my personal experience as a teacher of undergraduate econometrics.
Sripad Motiram (2014-09)
Keywords: Significance; Standard Error; Application of Statistics; Methodology

Density Forecast Evaluation in Unstable Environments
http://d.repec.org/n?u=RePEc:ucr:wpaper:201428&r=ecm
Gloria Gonzalez-Rivera; Yingying Sun (2014-08)

Modeling Systematic Risk and Point-in-Time Probability of Default under the Vasicek Asymptotic Single Risk Factor Model Framework
http://d.repec.org/n?u=RePEc:pra:mprapa:59025&r=ecm
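The paper below works within the Vasicek asymptotic single risk factor (ASRF) framework, whose standard conditional default probability formula is PD(z) = Phi((Phi^{-1}(PD) - sqrt(rho) * z) / sqrt(1 - rho)), with Phi the standard normal CDF, rho the asset correlation, and z the systematic factor realization. A minimal sketch of this well-known formula (not the paper's multifactor extension):

```python
from statistics import NormalDist

_N = NormalDist()

def conditional_pd(pd_uncond, rho, z):
    """Vasicek ASRF: default probability conditional on the systematic
    factor realization z (a negative z is an adverse scenario, so the
    conditional PD rises above the unconditional PD)."""
    return _N.cdf((_N.inv_cdf(pd_uncond) - rho ** 0.5 * z) / (1.0 - rho) ** 0.5)
```

Stress testing in this framework amounts to evaluating conditional_pd at shocked values of z; with rho = 0 the systematic factor drops out and the conditional PD equals the unconditional one.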
Systematic risk has been a focus of stress testing and risk capital assessment. Under the Vasicek asymptotic single risk factor model framework, entity default risk for a risk-homogeneous portfolio divides into two parts: systematic and entity specific. While entity specific risk can be modelled by a probit or logistic model using a relatively short period of portfolio historical data, modelling systematic risk is more challenging. In practice, most default risk models do not fully or dynamically capture systematic risk. In this paper, we propose an approach that models systematic and entity specific risks separately and then aggregates them analytically. Systematic risk is quantified and modelled by a multifactor Vasicek model with a latent residual, a factor accounting for default contagion and feedback effects. The asymptotic maximum likelihood approach for parameter estimation in this model is equivalent to least squares linear regression. Conditional entity PDs for scenario tests and the through-the-cycle entity PD all have analytical solutions. For validation, we model the point-in-time entity PD for a commercial portfolio and stress the portfolio default risk by shocking the systematic risk factors. Rating migration and portfolio loss are then assessed.
Yang, Bill Huajian (2014-03-18)
Keywords: point-in-time PD; through-the-cycle PD; Vasicek model; systematic risk; entity specific risk; stress testing; rating migration; scenario loss

Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window
http://d.repec.org/n?u=RePEc:arx:papers:1410.7799&r=ecm
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, instead using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method on the problem of nowcasting GDP in the Euro area and find that its forecasting performance compares well with that of other methods.
Luca Onorante; Adrian E. Raftery (2014-10)
Keywords: Bayesian model averaging; Model uncertainty; Nowcasting; Occam's window
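The static Occam's window step underlying the method above — retain only models whose posterior model probability is within a factor C of the best model's — can be sketched as follows; the paper's dynamic re-optimization of the model set over time is not reproduced here:

```python
def occams_window(model_probs, c=20.0):
    """Keep models whose posterior model probability is within a factor
    c of the best model's probability (static Occam's window)."""
    best = max(model_probs.values())
    return {m: p for m, p in model_probs.items() if p >= best / c}
```

In a dynamic version, this pruning is re-applied each period after the model probabilities are updated with the new observation, and neighbors of surviving models can be added back to the candidate set so that the model space is explored without ever enumerating it in full.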