New Economics Papers on Econometrics
By: | Joshua C C Chan; Eric Eisenstat |
Abstract: | We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, where the draws are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. Because we generate independent draws instead of correlated MCMC draws, the increase in simulation effort is much smaller should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is grounded in information theory and is therefore optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women’s labor market participation and U.S. macroeconomic time series. In both applications, the proposed CE method compares favorably to existing estimators. |
JEL: | C11 C15 C32 C52 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:acb:camaaa:2012-18&r=ecm |
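To make the adaptive importance sampling idea concrete, here is a minimal sketch in a hypothetical conjugate normal toy model, where the marginal likelihood is available in closed form for verification. The Gaussian importance family, the sample sizes, and all variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 100).
y = rng.normal(1.0, 1.0, size=50)
n = len(y)

def log_prior(theta):
    return stats.norm.logpdf(theta, 0.0, 10.0)

def log_lik(theta):
    return stats.norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)

# Step 1: posterior draws (exact here; in practice a preliminary MCMC run).
post_var = 1.0 / (n + 1.0 / 100.0)
post_mean = post_var * y.sum()
theta_post = rng.normal(post_mean, np.sqrt(post_var), size=2000)

# Step 2 (CE step): pick the importance density within a Gaussian family by
# maximum likelihood on the posterior draws; for exponential families this is
# the closed-form solution of the cross-entropy minimization (moment matching).
mu_g, sd_g = theta_post.mean(), theta_post.std(ddof=1)

# Step 3: independent draws from g and the importance sampling estimate of
# the marginal likelihood, m = E_g[ p(y | theta) p(theta) / g(theta) ].
theta_g = rng.normal(mu_g, sd_g, size=10_000)
log_w = log_lik(theta_g) + log_prior(theta_g) \
        - stats.norm.logpdf(theta_g, mu_g, sd_g)
c = log_w.max()
log_ml = c + np.log(np.mean(np.exp(log_w - c)))

# Analytic benchmark for this conjugate toy model: y ~ N(0, I_n + 100 J_n).
cov = np.eye(n) + 100.0 * np.ones((n, n))
print(log_ml, stats.multivariate_normal.logpdf(y, np.zeros(n), cov))
```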
By: | Eric Gautier (CREST); Alexandre Tsybakov (CREST) |
Keywords: | Instrumental Variables, Sparsity, STIV Estimator, Endogeneity, High-Dimensional Regression, Conic Programming, Optimal Instruments, Heteroscedasticity, Confidence Intervals, Non-Gaussian Errors, Variable Selection, Unknown Variance, Sign Consistency |
Date: | 2011–05 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2011-13&r=ecm |
By: | Fan, Jianqing; Liao, Yuan; Mincheva, Martina |
Abstract: | This paper deals with the estimation of a high-dimensional covariance matrix with a conditional sparsity structure, namely the sum of a low-rank matrix and a sparse matrix. By assuming a sparse error covariance matrix in a multi-factor model, we allow for cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights into when factor analysis is approximately the same as principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms, including the spectral norm. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. |
Keywords: | High dimensionality; approximate factor model; unknown factors; principal components; sparse matrix; low-rank matrix; thresholding; cross-sectional correlation |
JEL: | C13 C01 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38697&r=ecm |
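The POET construction is compact enough to sketch directly: keep the leading principal components of the sample covariance and threshold the principal orthogonal complement. For simplicity the sketch below applies a universal soft threshold at the familiar sqrt(log p / n) rate, whereas the paper uses adaptive thresholding; the function name and constants are illustrative.

```python
import numpy as np

def poet(X, K, C=0.5):
    """Sketch of a POET-style covariance estimator.

    X : (n, p) data matrix; K : number of factors retained;
    C : threshold constant (illustrative; the paper thresholds adaptively).
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)               # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T
    R = S - low_rank                             # principal orthogonal complement
    tau = C * np.sqrt(np.log(p) / n)             # rate-based universal threshold
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))          # never threshold variances
    return low_rank + R_thr

# Usage on a simulated 3-factor model with p = 100, n = 500.
rng = np.random.default_rng(1)
B, F = rng.normal(size=(100, 3)), rng.normal(size=(500, 3))
X = F @ B.T + rng.normal(scale=0.5, size=(500, 100))
Sigma_hat = poet(X, K=3)
```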
By: | Fan, Jianqing; Liao, Yuan |
Abstract: | Most papers on high-dimensional statistics are based on the assumption that none of the regressors are correlated with the regression error, namely, they are exogenous. Yet endogeneity arises easily in high-dimensional regression owing to a large pool of regressors, and this causes the inconsistency of penalized least-squares methods and possible false scientific discoveries. A necessary condition for model selection of a very general class of penalized regression methods is given, which allows us to prove the inconsistency claim formally. To cope with the possible endogeneity, we construct a novel penalized focused generalized method of moments (FGMM) criterion function and offer a new optimization algorithm. The FGMM is not a smooth function. To establish its asymptotic properties, we first study model selection consistency and an oracle property for a general class of penalized regression methods. These results are then used to show that the FGMM possesses an oracle property even in the presence of endogenous predictors, and that the solution is also a near-global minimum under the over-identification assumption. Finally, we also show how the semi-parametric efficiency of estimation can be achieved via a two-step approach. |
Keywords: | Focused GMM; Sparsity recovery; Endogenous variables; Oracle property; Conditional moment restriction; Estimating equation; Over-identification; Global minimization; Semi-parametric efficiency |
JEL: | C13 C52 C01 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38698&r=ecm |
By: | Haan, Peter (DIW Berlin); Kemptner, Daniel (DIW Berlin); Uhlendorff, Arne (University of Mannheim) |
Abstract: | Dynamic discrete choice models usually require a general specification of unobserved heterogeneity. In this paper, we apply Bayesian procedures as a numerical tool for the estimation of a female labor supply model based on a sample size which is typical for common household panels. We provide two important results for the practitioner: First, for a specification with a multivariate normal distribution for the unobserved heterogeneity, the Bayesian MCMC estimator yields almost identical results as a classical Maximum Simulated Likelihood (MSL) estimator. Second, we show that when imposing distributional assumptions which are consistent with economic theory, e.g. log-normally distributed consumption preferences, the Bayesian method performs well and provides reasonable estimates, while the MSL estimator does not converge. These results indicate that Bayesian procedures can be a beneficial tool for the estimation of dynamic discrete choice models. |
Keywords: | Bayesian estimation, dynamic discrete choice models, intertemporal labor supply behavior |
JEL: | C11 C25 J22 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp6544&r=ecm |
By: | Olga Klopp (CREST) |
Abstract: | We propose a new pivotal method for estimating high-dimensional matrices. Assume that we observe a small set of entries or linear combinations of entries of an unknown matrix A0 corrupted by noise. We propose a new method for estimating A0 that does not rely on knowledge or an estimate of the standard deviation of the noise. Our estimator achieves, up to a logarithmic factor, optimal rates of convergence under the Frobenius risk and thus has the same prediction performance as previously proposed estimators that rely on knowledge of the noise level. Our method is based on the solution of a convex optimization problem, which makes it computationally attractive. |
Keywords: | Matrix completion, matrix regression, low rank matrix estimation, recovery of the rank |
Date: | 2012–02 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2012-05&r=ecm |
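For context, estimators of this family are typically computed by proximal gradient descent with singular value soft-thresholding. The sketch below solves the plain nuclear-norm penalized least-squares problem with a user-chosen penalty; it does not reproduce the paper's pivotal construction, whose very point is to avoid calibrating the penalty to an unknown noise level. All names are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(Y, mask, lam, n_iter=300):
    """Proximal gradient for min_A 0.5*||mask*(A - Y)||_F^2 + lam*||A||_*."""
    A = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = mask * (A - Y)      # gradient of the quadratic data-fit term
        A = svt(A - grad, lam)     # step size 1 is safe: the gradient is 1-Lipschitz
    return A

# Usage: recover a rank-2 matrix from 40% of its entries, observed with noise.
rng = np.random.default_rng(0)
A0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(A0.shape) < 0.4
Y = mask * (A0 + 0.1 * rng.normal(size=A0.shape))
A_hat = complete(Y, mask, lam=1.0)
```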
By: | Liao, Yuan; Jiang, Wenxin |
Abstract: | This paper addresses the estimation of a nonparametric conditional moment restricted model that involves an infinite-dimensional parameter g0. We estimate it in a quasi-Bayesian way, based on the limited information likelihood, and investigate the impact of three types of priors on posterior consistency: (i) a truncated prior (a prior supported on a bounded set), (ii) a thin-tail prior (a prior with very thin tails outside a growing bounded set) and (iii) a normal prior with non-shrinking variance. In addition, g0 is allowed to be only partially identified in the frequentist sense, and the parameter space need not be compact. The posterior is regularized using a slowly growing sieve dimension, and it is shown that the posterior converges to any small neighborhood of the identified region. We then apply our results to the nonparametric instrumental regression model. Finally, posterior consistency using a random sieve dimension parameter is studied. |
Keywords: | Identified region; limited information likelihood; sieve approximation; nonparametric instrumental variable; ill-posed problem; partial identification; Bayesian inference; shrinkage prior; regularization |
JEL: | C14 C11 C01 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38700&r=ecm |
By: | Pitarakis, J |
Abstract: | We develop a test of the joint null hypothesis of linearity and nonstationarity within a threshold autoregressive process of order one with deterministic components. We derive the limiting distribution of a Wald-type test statistic and subsequently investigate its local power and finite sample properties. We view our test as a useful diagnostic tool, since a non-rejection of our null hypothesis would remove the need to explore nonlinearities any further and would support a linear autoregression with a unit root. |
Keywords: | Threshold Autoregressive Models; Unit Roots; Near Unit Roots; Brownian Bridge; Augmented Dickey Fuller Test |
JEL: | C50 C22 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38845&r=ecm |
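A stylized version of such a Wald-type statistic can be computed by searching over candidate thresholds, as in the sketch below. The specification (an intercept as the only deterministic component, the threshold variable equal to the lagged level, the grid and trimming) is an illustrative assumption, and the limiting distribution under the joint null is nonstandard, so critical values must be taken from the paper or simulated.

```python
import numpy as np

def sup_wald_tar(y, trim=0.15, n_grid=50):
    """Sup-Wald statistic for the joint null of linearity and a unit root in
    a TAR(1) with intercept:
      dy_t = mu + rho1*y_{t-1}*1{y_{t-1}<=g} + rho2*y_{t-1}*1{y_{t-1}>g} + e_t,
    H0: rho1 = rho2 = 0, maximized over a trimmed grid of thresholds g.
    """
    dy, ylag = np.diff(y), y[:-1]
    grid = np.quantile(ylag, np.linspace(trim, 1 - trim, n_grid))
    best = -np.inf
    for g in grid:
        X = np.column_stack([np.ones_like(ylag),
                             ylag * (ylag <= g), ylag * (ylag > g)])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        e = dy - X @ beta
        V = (e @ e / (len(dy) - 3)) * np.linalg.inv(X.T @ X)
        b, Vb = beta[1:], V[1:, 1:]
        best = max(best, b @ np.linalg.solve(Vb, b))
    # The null distribution is nonstandard: use simulated critical values.
    return best
```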
By: | Claudia Tarantola (Department of Economics and Business, University of Pavia); Ioannis Ntzoufras (Department of Statistics, Athens University of Economics and Business) |
Abstract: | This paper deals with the Bayesian analysis of graphical models of marginal independence for three-way contingency tables. Each marginal independence model corresponds to a particular factorization of the cell probabilities, and a conjugate analysis based on a Dirichlet prior can be performed. We illustrate a comprehensive Bayesian analysis of such models, involving suitable choices of prior parameters, estimation, model determination, as well as the allied computational issues. The posterior distributions of the marginal log-linear parameters are indirectly obtained using simple Monte Carlo schemes. The methodology is illustrated using two real data sets. |
Keywords: | graphical models, marginal log-linear parameterization, Monte Carlo computation, order decomposability, power prior approach. |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:pav:wpaper:172&r=ecm |
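The conjugate analysis described above is easy to mimic: with a Dirichlet prior on the cell probabilities of a three-way table, the posterior is again Dirichlet, and posteriors of marginal quantities are obtained indirectly by transforming Monte Carlo draws. The counts, the flat prior, and the particular marginal log odds ratio below are illustrative choices, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative cell counts for a 2x2x2 table and a Dirichlet(1,...,1) prior.
counts = np.array([[[12.,  5.], [ 7.,  9.]],
                   [[ 4., 11.], [ 8.,  6.]]])
alpha = np.ones_like(counts)

# Conjugacy: the posterior of the joint cell probabilities is
# Dirichlet(alpha + counts).
draws = rng.dirichlet((alpha + counts).ravel(), size=5000).reshape(5000, 2, 2, 2)

# Posterior of a *marginal* log odds ratio (A,B margin), obtained indirectly
# by transforming the Monte Carlo draws of the joint cell probabilities.
pab = draws.sum(axis=3)                    # marginalize out the third variable
log_or = np.log(pab[:, 0, 0] * pab[:, 1, 1] / (pab[:, 0, 1] * pab[:, 1, 0]))
print(log_or.mean(), np.percentile(log_or, [2.5, 97.5]))
```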
By: | Timothy A. Weterings; Mark N. Harris; Bruce Hollingsworth |
Abstract: | This research proposes that, in cases where threshold covariates are either unavailable or difficult to observe, practitioners should treat these characteristics as latent and use simulated maximum likelihood techniques to control for them. Two econometric frameworks for doing so in a more flexible manner are proposed. The finite sample performance of these new specifications is investigated by means of Monte Carlo simulation. Applications of successively more flexible models are then given, with extensive post-estimation analysis utilised to better assess the likely implications of model choice on conclusions drawn in empirical research. |
Keywords: | Ordered Choice Modeling, Unobserved Heterogeneity, Simulated Maximum Likelihood |
JEL: | C01 C23 C52 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2012-12&r=ecm |
By: | Pitarakis, J |
Abstract: | We formally define a concept of functional cointegration linking the dynamics of two time series via a functional coefficient. This is achieved through the use of a concept of summability as an alternative to I(1)-ness, which is no longer suitable under nonlinear dynamics. We subsequently introduce a nonparametric approach for estimating the unknown functional coefficients. Our method is based on a piecewise local least squares principle and is computationally simple to implement. We establish its consistency properties and evaluate its performance in finite samples. |
Keywords: | Functional Coefficients; Unit Roots; Cointegration; Piecewise Local Linear Estimation |
JEL: | C50 C22 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38846&r=ecm |
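The piecewise local least squares idea admits a very short sketch: partition the support of the variable driving the functional coefficient into bins and fit a separate least-squares slope in each. The quantile-bin, no-intercept version below is a deliberate simplification; names and the bin count are illustrative.

```python
import numpy as np

def piecewise_theta(y, x, q, n_bins=10):
    """Binned local least squares for y_t = theta(q_t) * x_t + u_t:
    within each quantile bin of q, theta is the no-intercept OLS slope."""
    edges = np.quantile(q, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.digitize(q, edges[1:-1])          # bin index 0..n_bins-1
    mids = 0.5 * (edges[:-1] + edges[1:])
    thetas = np.full(n_bins, np.nan)
    for j in range(n_bins):
        sel = bins == j
        if sel.sum() > 1:
            thetas[j] = (x[sel] @ y[sel]) / (x[sel] @ x[sel])
    return mids, thetas
```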
By: | Olga Klopp (CREST) |
Abstract: | In the present paper we consider the problem of matrix completion with noise for general sampling schemes. Unlike previous works, in our construction we do not need to know or to evaluate the sampling distribution or the variance of the noise. We propose new nuclear-norm penalized estimators, one of them of the "square-root" type. We prove that, up to a logarithmic factor, our estimators achieve optimal rates with respect to the estimation error. |
Date: | 2012–03 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2012-06&r=ecm |
By: | Bartolucci, Francesco |
Abstract: | We develop a recursion for hidden Markov models of any order h, which allows us to obtain the posterior distribution of the latent state at every occasion, given the previous h states and the observed data. With respect to the well-known Baum-Welch recursions, the proposed recursion has the advantage of being more direct to use and, in particular, of not requiring dummy renormalizations to avoid numerical problems. We also show how this recursion may be expressed in matrix notation, so as to allow for an efficient implementation, and how it may be used to obtain the manifest distribution of the observed data and for parameter estimation within the Expectation-Maximization algorithm. The approach is illustrated by an application to financial data which focuses on the dynamics of the volatility level of log-returns. |
Keywords: | Expectation-Maximization algorithm; forward-backward recursions; latent Markov model; stochastic volatility |
JEL: | C13 C23 C22 |
Date: | 2011–12–31 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38778&r=ecm |
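For comparison with the recursion proposed in the paper, here is the standard normalized forward filter for a first-order hidden Markov model, i.e., the kind of rescaled Baum-Welch-style computation the paper seeks to streamline and extend to order h; the interface is illustrative.

```python
import numpy as np

def forward_filter(logB, Pi, delta):
    """Normalized forward recursion for a first-order HMM.

    logB  : (T, S) log observation densities per state
    Pi    : (S, S) transition matrix, Pi[i, j] = P(s_t = j | s_{t-1} = i)
    delta : (S,) initial state distribution
    Returns filtered probabilities P(s_t | y_{1:t}) and the log-likelihood.
    """
    T, S = logB.shape
    filt = np.zeros((T, S))
    loglik, pred = 0.0, delta
    for t in range(T):
        m = logB[t].max()                     # stabilize the exponentials
        joint = pred * np.exp(logB[t] - m)    # P(s_t | past) * p(y_t | s_t)
        c = joint.sum()
        filt[t] = joint / c
        loglik += np.log(c) + m               # adds log p(y_t | y_{1:t-1})
        pred = filt[t] @ Pi
    return filt, loglik
```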
By: | John K. Dagsvik (Statistics Norway) |
Abstract: | This paper discusses how the specification of probabilistic models for multistate duration data generated by individual choices should be justified on a priori theoretical grounds. Preferences are assumed to be represented by random utilities, where utilities are viewed as random also to the agent himself. First, the paper proposes a characterization of exogenous preferences (that is, the special case with no state dependence effects). The main assumption asserts that when preferences are exogenous, the current and future indirect utilities are uncorrelated with current and past choices, given unobservables that are perfectly known to the agent. It is demonstrated that under rather weak and general regularity conditions this characterization yields an explicit structure of the utility function as a so-called extremal stochastic process. Furthermore, from this utility representation it follows that the choice process is a Markov chain (in continuous or discrete time), with a particular functional form of the transition probabilities as explicit functions of the parameters of the utility function and the choice set. Subsequently, we show how the model can be extended to allow for structural state dependence effects, and how such state dependence effects can be identified. Moreover, it is discussed how a version of Chamberlain’s conditional estimation method applies in the presence of fixed effects. Finally, we discuss two examples of applications. |
Keywords: | Duration models; Random utility models; Habit persistence; True state dependence; Extremal process; Markov chain |
JEL: | C23 C25 C41 C51 D01 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:688&r=ecm |
By: | Christian Francq (CREST); Jean-Michel Zakoïan (CREST) |
Keywords: | alpha-stable distribution, composite likelihood, GEV distribution, GPD, pseudo-likelihood, quasi-marginal maximum likelihood, stock returns distributions |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2011-30&r=ecm |
By: | Puzanova, Natalia |
Abstract: | This paper introduces a multivariate pure-jump Lévy process which allows for skewness and excess kurtosis of single asset returns and for asymptotic tail dependence in the multivariate setting. It is termed Variance Compound Gamma (VCG). The novelty of my approach is that, by applying a two-stage stochastic time change to Brownian motions, I derive a hierarchical structure with different properties of inter- and intra-sector dependence. I investigate the properties of the implied static copula families and conclude that they are ordered with respect to their parameters and that the lower-tail dependence of the intra-sector copula is increasing in the absolute values of the skewness parameters. Furthermore, I show that the joint characteristic function of the VCG asset returns can be explicitly given as a nested Archimedean copula of their marginal characteristic functions. Applied to credit portfolio modelling, the introduced framework results in a more conservative tail risk assessment than a Gaussian framework with the same linear correlation structure, as I show in a simulation study. To improve simulation efficiency, I provide an importance sampling algorithm for the VCG portfolio setting. |
Keywords: | Portfolio Credit Risk, Stochastic Time Change, Brownian Subordination, Jumps, Tail Dependence, Hierarchical Dependence Structure |
JEL: | C46 C63 G12 G21 |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdp2:201116&r=ecm |
By: | Bell, Peter |
Abstract: | This paper explores extensions to the random walk model for time series in finance. There is some disagreement about the suitability of multifractal probability models, although they have compelling attributes. Research that has found no evidence to support the multifractal model has used testing procedures with unknown statistical power, so there is an opportunity for new methodology. This paper presents a testing procedure to determine whether data follow a multifractal or monofractal process. Using simulation, the paper derives the power of the test. Although the power is low, the test suggests that some observed prices do exhibit multifractal behaviour. This is a strong result. Further, this work suggests there will be continued disagreement in the literature owing to the difficulty of identifying multifractal data. |
Keywords: | Statistical methods; fractal geometry; finance |
JEL: | G0 C1 |
Date: | 2012–04–23 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38689&r=ecm |
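One standard diagnostic behind tests of this kind is the partition-function estimate of the scaling function ζ(q): a monofractal process yields a ζ(q) that is linear in q, while concavity points to multifractality. The sketch below computes this diagnostic only; the paper's actual testing procedure and its simulated power analysis go further. Names, moments, and scales are illustrative.

```python
import numpy as np

def scaling_exponents(x, qs=(1, 2, 3, 4, 5), scales=(1, 2, 4, 8, 16, 32)):
    """Partition-function scaling estimates for a (log-)price path x:
    S_q(s) = sum_t |x_{t+s} - x_t|^q is expected to behave like s**zeta(q);
    zeta(q) is estimated as the slope of log S_q on log s."""
    logs = np.log(np.asarray(scales, dtype=float))
    zeta = []
    for q in qs:
        logS = [np.log(np.sum(np.abs(x[s:] - x[:-s]) ** q)) for s in scales]
        zeta.append(np.polyfit(logs, logS, 1)[0])
    # Roughly linear in q: monofractal; markedly concave: multifractal.
    return np.array(zeta)
```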
By: | Eric Gautier (CREST); Stefan Hoderlein (Boston College) |
Keywords: | nonparametric identification, unobserved heterogeneity, treatment effects, deconvolution, Radon transform, hemispherical transform |
Date: | 2011–09 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2011-25&r=ecm |
By: | Xavier d'Haultfoeuille (CREST); Philippe Février (CREST) |
Keywords: | nonparametric identification, discrete instrument, control variable, fixed point, group theory |
JEL: | C14 |
Date: | 2011–05 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2011-28&r=ecm |
By: | Michael Fertig; Katja Görlitz |
Abstract: | This paper investigates how to test and correct for nonresponse selection bias induced by missing income information when estimating wage functions. The novelty is to use the variation in interviewer-specific response rates as an exclusion restriction within the framework of a sample selection model. |
Keywords: | Item nonresponse; wages |
JEL: | J30 |
Date: | 2012–04 |
URL: | http://d.repec.org/n?u=RePEc:rwi:repape:0333&r=ecm |
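The classical implementation of this idea is a Heckman (1979) two-step sample selection correction, sketched below with the interviewer-specific response rate entering only the selection equation as the exclusion restriction. The function and its interface are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def heckman_two_step(y, X, Z, responded):
    """Two-step sample selection correction (Heckman, 1979), sketch.

    responded : 1 if the income item was answered, 0 otherwise
    Z : selection-equation regressors, including the interviewer-specific
        response rate as the exclusion restriction
    X : wage-equation regressors; y : log wage, used where responded == 1
    """
    Zc = sm.add_constant(Z)
    probit = sm.Probit(responded, Zc).fit(disp=0)
    xb = Zc @ probit.params
    mills = norm.pdf(xb) / norm.cdf(xb)        # inverse Mills ratio
    sel = responded == 1
    Xw = sm.add_constant(np.column_stack([X[sel], mills[sel]]))
    return sm.OLS(y[sel], Xw).fit()            # last coefficient: selection term
```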
By: | Huber, Martin; Mellace, Giovanni |
Abstract: | In heterogeneous treatment effect models with endogeneity, the identification of the local average treatment effect (LATE) typically relies on an instrument that satisfies two conditions: (i) joint independence of the potential post-instrument variables and the instrument and (ii) monotonicity of the treatment in the instrument, see Imbens and Angrist (1994). We show that identification is still feasible when replacing monotonicity by a strictly weaker local monotonicity condition. We demonstrate that the latter allows identifying the LATEs on (i) the compliers (whose treatment reacts to the instrument in the intended way), (ii) the defiers (who react counter-intuitively), and (iii) both populations jointly. Furthermore, (i) and (iii) coincide with the standard LATE if monotonicity holds. We also present an application to the quarter-of-birth instrument of Angrist and Krueger (1991). |
Keywords: | Instrumental variable, treatment effects, LATE, local monotonicity |
JEL: | C14 C21 C26 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2012:12&r=ecm |
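Under the standard (global) monotonicity condition, the complier LATE reduces to the familiar Wald ratio, shown below for reference; the paper's local-monotonicity identification of defier and joint-population LATEs requires additional ingredients not sketched here.

```python
import numpy as np

def wald_late(y, d, z):
    """Wald estimator of the complier LATE with binary instrument z and
    binary treatment d: the intention-to-treat effect on y divided by the
    first-stage effect on d."""
    itt = y[z == 1].mean() - y[z == 0].mean()
    first_stage = d[z == 1].mean() - d[z == 0].mean()
    return itt / first_stage
```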
By: | Huber, Martin |
Abstract: | This paper demonstrates the identification of causal mechanisms in experiments with a binary treatment, (primarily) based on inverse probability weighting. That is, we consider the average indirect effect of the treatment, which operates through an intermediate variable (or mediator) situated on the causal path between the treatment and the outcome, as well as the (unmediated) direct effect. Even under random treatment assignment, subsequent selection into the mediator is generally non-random, so that causal mechanisms are only identified when controlling for confounders of the mediator and the outcome. To tackle this issue, units are weighted by the inverse of their conditional treatment propensity given the mediator and observed confounders. We show that the form and applicability of weighting depend on whether the confounders are themselves influenced by the treatment. A simulation study gives the intuition for these results, and an empirical application to the direct and indirect health effects (through employment) of the U.S. Job Corps program is also provided. |
Keywords: | Causal mechanisms, mediation analysis, direct and indirect effects, experiment, inverse probability weighting |
JEL: | C14 C21 I38 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2012:13&r=ecm |
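A rough sketch of the weighting logic for the case in which the confounders are not themselves affected by the treatment: the mean potential outcome E[Y(1, M(0))] is targeted by weighting treated units by (1 - p(M,X)) / (p(M,X) (1 - p(X))), where p(M,X) and p(X) are treatment propensities given (mediator, confounders) and confounders alone. The logistic propensity models, the normalized (Hajek) averaging, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def direct_effect_sketch(y, d, m, x):
    """IPW sketch of the direct effect E[Y(1, M(0))] - E[Y(0, M(0))] for a
    binary treatment d, mediator m, confounders x (d must be a 0/1 array)."""
    Xc = np.column_stack([x])                   # confounders only
    Xmc = np.column_stack([m, x])               # mediator + confounders
    px = LogisticRegression(max_iter=1000).fit(Xc, d).predict_proba(Xc)[:, 1]
    pmx = LogisticRegression(max_iter=1000).fit(Xmc, d).predict_proba(Xmc)[:, 1]
    w1 = d / pmx * (1.0 - pmx) / (1.0 - px)     # targets E[Y(1, M(0))]
    w0 = (1.0 - d) / (1.0 - px)                 # targets E[Y(0, M(0))]
    return np.average(y, weights=w1) - np.average(y, weights=w0)
```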
By: | Bartolucci, Francesco; Giorgio E., Montanari; Pandolfi, Silvia |
Abstract: | The evaluation of nursing homes and the assessment of the quality of the health care provided to their patients are usually based on the administration of questionnaires comprising a large number of polytomous items. In applications involving data collected by questionnaires of this type, the Latent Class (LC) model represents a useful tool for classifying subjects into homogeneous groups. In this paper, we propose an algorithm for item selection which is based on the LC model. The proposed algorithm is aimed at finding the smallest subset of items that provides an amount of information close to that of the initial set. The method sequentially eliminates the items that do not significantly change the classification of the subjects in the sample with respect to the classification based on the full set of items. The LC model, and hence the item selection algorithm, may also be used with missing responses, which are dealt with by assuming a form of latent ignorability. The potential of the proposed approach is illustrated through an application to a nursing home dataset collected within the ULISSE project, which concerns the quality-of-life of elderly patients hosted in Italian nursing homes. The dataset presents several issues, such as missing responses and a very large number of items included in the questionnaire. |
Keywords: | Expectation-Maximization algorithm; Polytomous items; Quality-of-life; ULISSE project |
JEL: | C13 I11 C33 |
Date: | 2012–04–26 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:38757&r=ecm |
By: | Guglielmo D'Amico; Filippo Petroni |
Abstract: | In this paper we propose a new stochastic model based on a generalization of semi-Markov chains to study the high-frequency price dynamics of traded stocks. We assume that the financial returns are described by a weighted indexed semi-Markov chain model. We show, through Monte Carlo simulations, that the model is able to reproduce important stylized facts of financial time series, such as the first-passage-time distributions and the persistence of volatility. The model is applied to data from the Italian and German stock markets from 1 January 2007 until the end of December 2010. |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1205.2551&r=ecm |
By: | Leppin, Julian Sebastian |
Abstract: | This paper examines the predictive power of different estimation approaches for reservation wages. It applies stochastic frontier models for employed workers and the approach of Kiefer and Neumann (1979b) for unemployed workers. Furthermore, the question of whether or not reservation wages decrease over the unemployment period is addressed. This is done using a pseudo-panel with known reservation wages, constructed from data from the German Socio-Economic Panel. The comparison of the estimators is carried out by Monte Carlo simulation. The best results are achieved by the cross-sectional stochastic frontier model. The Kiefer-Neumann approach failed to predict the decreasing reservation wages correctly. |
Keywords: | job search theory, Monte Carlo simulation, reservation wages, stochastic wage frontiers |
JEL: | C21 C23 J64 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hwwirp:124&r=ecm |
By: | Anirban Basu |
Abstract: | This paper builds on the methods of local instrumental variables developed by Heckman and Vytlacil (1999, 2001, 2005) to estimate person-centered treatment (PeT) effects that are conditioned on the person’s observed characteristics and averaged over the potential conditional distribution of unobserved characteristics that lead them to their observed treatment choices. PeT effects are more individualized than conditional treatment effects from a randomized setting with the same observed characteristics. PeT effects can be easily aggregated to construct any of the mean treatment effect parameters and, more importantly, are well suited to understanding individual-level treatment effect heterogeneity. The paper presents the theory behind PeT effects, studies their finite-sample properties using simulations and presents a novel analysis of treatment evaluation in health care. |
JEL: | C21 C26 D04 I12 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:18056&r=ecm |
By: | Fei Chen (Huazhong University of Science and Technology); Francis X. Diebold (Department of Economics, University of Pennsylvania); Frank Schorfheide (Department of Economics, University of Pennsylvania) |
Abstract: | We propose and illustrate a Markov-switching multi-fractal duration (MSMD) model for analysis of inter-trade durations in financial markets. We establish several of its key properties with emphasis on high persistence (indeed long memory). Empirical exploration suggests MSMD's superiority relative to leading competitors. |
Keywords: | High-frequency trading data, point process, long memory, time deformation, scaling law, self-similarity, regime-switching model, market microstructure, liquidity |
JEL: | C41 C22 G1 |
Date: | 2012–05–07 |
URL: | http://d.repec.org/n?u=RePEc:pen:papers:12-020&r=ecm |
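The mechanism can be illustrated by simulation: durations are exponential draws scaled by a product of Markov-switching multipliers whose renewal frequencies grow geometrically across components, which is what generates the long-memory-like persistence. The binomial multiplier distribution and all parameter values below are illustrative assumptions rather than the paper's calibration.

```python
import numpy as np

def simulate_msmd(n, K=6, m0=1.4, gamma1=0.01, b=2.0, psi_bar=1.0, seed=0):
    """Simulate durations tau_i = psi_bar * prod_k M[k] * eps_i, eps_i ~ Exp(1).
    Component k (k = 0..K-1) renews with probability
    gamma_k = 1 - (1 - gamma1)**(b**k), and a renewed multiplier is drawn from
    {m0, 2 - m0} with equal probability, so the multipliers have unit mean."""
    rng = np.random.default_rng(seed)
    gammas = 1.0 - (1.0 - gamma1) ** (b ** np.arange(K))
    M = rng.choice([m0, 2.0 - m0], size=K)
    tau = np.empty(n)
    for i in range(n):
        renew = rng.random(K) < gammas
        M[renew] = rng.choice([m0, 2.0 - m0], size=renew.sum())
        tau[i] = psi_bar * M.prod() * rng.exponential()
    return tau
```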
By: | Julie Poirier (CREST) |
Abstract: | This paper deals with the protest bids issue in choice experiments. In the context of the Water Framework Directive, we examined local residents’ preferences for water quality improvements at a specific river basin in France. We used the choice experiment method with site-specific attributes referring to the four sites that compose our basin. We first estimated a random parameters logit model in order to take into account heterogeneity of preferences. We found positive willingness-to-pay for improvements in water quality. Moreover we observed that a significant proportion of respondents always chose the status quo scenario (which referred to the current management regime and was associated with a zero price) irrespective of the choice set she was presented. Status quo responses are considered as being zero bids and may be categorized into two types: true zero bids, where the respondent really places a zero value on the good, and protest bids, where the respondent states a zero willingness-topay even though her true value for the good is positive. We excluded protest bids from the analysis and re-estimated our random parameters logit model. Results showed that protest bids do affect the outcome. In order to take into account the existence of the two types of zero bids when estimating willingness-to-pay, we then proposed a cross-nested logit model. Implicit prices obtained from this model estimation are larger than those obtained from the random parameters logit model estimation. As a result, the cross-nested logit model allows taking into consideration the peculiarity of protest behaviors |
Keywords: | Choice experiments; Cross-nested logit model; Protest bids; Water Framework Directive;Water quality |
Date: | 2012–01 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2012-02&r=ecm |
By: | Robert Novy-Marx |
Abstract: | Ferson, Sarkissian and Simin (2003) warn that persistence in expected returns generates spurious regression bias in predictive regressions of stock returns, even though stock returns are themselves only weakly autocorrelated. Despite this fact, a growing literature attempts to explain the performance of stock market anomalies with highly persistent investor sentiment. The data suggest, however, that the potential misspecification bias may be large. Predictive regressions of real returns on simulated regressors are too likely to reject the null of independence, and it is far too easy to find real variables that have “significant power” predicting returns. Standard OLS predictive regressions find that the party of the U.S. President, cold weather in Manhattan, global warming, the El Niño phenomenon, atmospheric pressure in the Arctic, the conjunctions of the planets, and sunspots all have “significant power” predicting the performance of anomalies. These issues appear particularly acute for anomalies prominent in the sentiment literature, including those formed on the basis of size, distress, asset growth, investment, profitability, and idiosyncratic volatility. |
JEL: | C53 G0 G12 |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:18063&r=ecm |
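The over-rejection described above is easy to reproduce in a stylized simulation: give returns a small but highly persistent expected-return component and regress them on an independent, equally persistent regressor. With the illustrative parameter values below, a nominal 5% t-test rejects far more often than 5%.

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps, rho = 600, 2000, 0.99
rej = 0
for _ in range(reps):
    mu, x = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        mu[t] = rho * mu[t - 1] + 0.1 * rng.standard_normal()  # expected return
        x[t] = rho * x[t - 1] + rng.standard_normal()          # unrelated regressor
    r = mu + rng.standard_normal(T)    # realized returns: weakly autocorrelated
    X = np.column_stack([np.ones(T - 1), x[:-1]])
    beta, *_ = np.linalg.lstsq(X, r[1:], rcond=None)
    e = r[1:] - X @ beta
    se = np.sqrt((e @ e / (T - 3)) * np.linalg.inv(X.T @ X)[1, 1])
    rej += abs(beta[1] / se) > 1.96
print("nominal 5% test rejects in", 100 * rej / reps, "% of samples")
```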
By: | Adam E Clements (QUT); Joanne Fuller (QUT); Stan Hurn (QUT) |
Abstract: | The occurrence of extreme movements in the spot price of electricity represents a significant source of risk to retailers. Electricity markets are often structured so as to allow retailers to purchase at an unregulated spot price but then sell to consumers at a heavily regulated price. As such, the ability to forecast price spikes is an important aspect of effective risk management. A range of approaches has been considered with respect to modelling electricity prices, including predicting the trajectory of spot prices as well as, more recently, focusing on the prediction of spikes specifically. These models, however, have relied on time series approaches which typically use restrictive decay schemes that place greater weight on more recent observations. This paper develops an alternative, semi-parametric method for forecasting that does not rely on this convention. In this approach, a forecast is a weighted average of historical price data, with the greatest weight given to periods that exhibit similar market conditions to the time at which the forecast is being formed. Weighting is determined by comparing short-term trends in electricity price spike occurrences across time, including other relevant factors such as load, by means of a multivariate kernel scheme. It is found that the semi-parametric method produces forecasts that are more accurate than the previously identified best approach for a short forecast horizon. |
Keywords: | Electricity Prices, Price Spikes, Semi-parametric, Multivariate Kernel |
JEL: | C14 C53 |
Date: | 2012–05–16 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2012_5&r=ecm |
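The weighting scheme described above amounts to a Nadaraya-Watson-style estimator: the spike forecast is a kernel-weighted average of historical spike indicators, with weights reflecting how closely past market conditions resemble today's. The Gaussian product kernel and the interface below are illustrative, not the authors' implementation.

```python
import numpy as np

def spike_prob(spikes, conditions, current, bandwidth):
    """Semi-parametric spike-probability forecast.

    spikes     : (T,) 0/1 indicators of past price spikes
    conditions : (T, k) condition variables per past period (e.g. recent
                 spike frequency, load); current : (k,) today's conditions
    bandwidth  : (k,) kernel bandwidths, one per condition variable
    """
    z = (conditions - current) / bandwidth
    w = np.exp(-0.5 * np.sum(z ** 2, axis=1))   # Gaussian product kernel
    return np.sum(w * spikes) / np.sum(w)
```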
By: | Tack, Jesse B.; Harri, Ardian; Coble, Keith H. |
Abstract: | The objective of this article is to propose the use of moment functions and maximum entropy techniques as a flexible way to estimate conditional crop yield distributions. We present a moment based model that extends previous approaches in several dimensions, and can be easily estimated using standard econometric estimators. Upon identification of the yield moments under a variety of climate and irrigation regimes, we utilize maximum entropy techniques to analyze the distributional impacts from switching regimes. We consider the case of Arkansas, Mississippi, and Texas upland cotton to demonstrate how climate and irrigation affect the shape of the yield distribution, and compare our findings to other moment based approaches. We empirically illustrate several advantages of our moment based maximum entropy approach, including flexibility of the distributional tails across alternative irrigation and climate regimes. |
Keywords: | risk, climate change, moments, entropy, yield, cotton, Crop Production/Industries, Production Economics, Research Methods/Statistical Methods, Risk and Uncertainty |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea12:123330&r=ecm |
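Given estimated conditional moments, the implied maximum entropy density can be recovered numerically by solving the convex dual problem. The sketch below matches the first four moments on a bounded grid by simple quadrature; the grid, the optimizer, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(moments, grid):
    """Maximum entropy density f(x) proportional to exp(sum_j lam_j * x**j)
    subject to E[x**j] = moments[j-1], solved via the convex dual
    min_lam log Z(lam) - lam . mu on a bounded grid."""
    powers = np.column_stack([grid ** j for j in range(1, len(moments) + 1)])
    mu = np.asarray(moments, dtype=float)

    def dual(lam):
        u = powers @ lam
        m = u.max()                             # stabilize the exponentials
        return m + np.log(np.trapz(np.exp(u - m), grid)) - lam @ mu

    lam = minimize(dual, np.zeros(len(mu)), method="BFGS").x
    f = np.exp(powers @ lam - (powers @ lam).max())
    return f / np.trapz(f, grid)

# Illustration: recover a right-skewed density from its first four moments.
grid = np.linspace(-5.0, 10.0, 2001)
sample = np.random.default_rng(2).gamma(4.0, 1.0, 50_000) - 4.0
f_hat = maxent_density([np.mean(sample ** j) for j in range(1, 5)], grid)
```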
By: | Alexander Jordan; Alex Lenkoski |
Abstract: | We develop a fully Bayesian, computationally efficient framework for incorporating model uncertainty into Type II Tobit models and apply this to the investigation of the determinants of Foreign Direct Investment (FDI). While direct evaluation of model probabilities is intractable in this setting, we show that by using conditional Bayes factors, which nest model moves inside a Gibbs sampler, we are able to incorporate model uncertainty in a straightforward fashion. We conclude with a study of global FDI flows from 1988 to 2000. |
Date: | 2012–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1205.2501&r=ecm |