
on Econometrics 
By:  Jiti Gao (School of Economics, University of Adelaide); Peter C. B. Phillips (Cowles Foundation, Yale University) 
Abstract:  A system of vector semiparametric nonlinear time series models is studied with possible dependence structures and nonstationarities in the parametric and nonparametric components. The parametric regressors may be endogenous while the nonparametric regressors are strictly exogenous and represent trends. The parametric regressors may be stationary or nonstationary and the nonparametric regressors are nonstationary time series. This framework allows for the nonparametric treatment of stochastic trends and subsumes many practical cases. Semiparametric least squares (SLS) estimation is considered and its asymptotic properties are derived. Due to endogeneity in the parametric regressors, SLS is generally inconsistent for the parametric component and a semiparametric instrumental variable least squares (SIVLS) method is proposed instead. Under certain regularity conditions, the SIVLS estimator of the parametric component is shown to be consistent with a limiting normal distribution that is amenable to inference. The rate of convergence in the parametric component is the usual √n rate and is explained by the fact that the common (nonlinear) trend in the system is eliminated nonparametrically by stochastic detrending. 
Keywords:  Simultaneous equation, Stochastic detrending, Vector semiparametric regression 
JEL:  C23 C25 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1769&r=ecm 
By:  Xiaoxia Shi (Dept. of Economics, Yale University); Peter C. B. Phillips (Cowles Foundation, Yale University) 
Abstract:  An asymptotic theory is developed for a weakly identified cointegrating regression model in which the regressor is a nonlinear transformation of an integrated process. Weak identification arises from the presence of a loading coefficient for the nonlinear function that may be close to zero. In that case, standard nonlinear cointegrating limit theory does not provide good approximations to the finite sample distributions of nonlinear least squares estimators, resulting in potentially misleading inference. A new local limit theory is developed that approximates the finite sample distributions of the estimators uniformly well irrespective of the strength of the identification. An important technical component of this theory involves new results showing the uniform weak convergence of sample covariances involving nonlinear functions to mixed normal and stochastic integral limits. Based on these asymptotics, we construct confidence intervals for the loading coefficient and the nonlinear transformation parameter and show that these confidence intervals have correct asymptotic size. As in other cases of nonlinear estimation with integrated processes and unlike stationary process asymptotics, the properties of the nonlinear transformations affect the asymptotics and, in particular, give rise to parameter dependent rates of convergence and differences between the limit results for integrable and asymptotically homogeneous functions. 
Keywords:  Integrated process, Local time, Nonlinear regression, Uniform weak convergence, Weak identification 
JEL:  C13 C22 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1768&r=ecm 
By:  Juan Carlos Escanciano (Indiana University); David Jacho-Chavez (Indiana University); Arthur Lewbel (Boston College) 
Abstract:  Let g0(X) be a function of some observable vector X that is identified and can be nonparametrically estimated. This paper provides new results on the identification and estimation of the function F and the vector β0 when E(Y | X) = F[X⊤β0, g0(X)]. Many models fit this framework, including latent index models with an endogenous regressor, and nonlinear models with sample selection. Our identification results show that identification based on functional form, without exclusions or instruments, extends to this semiparametric model. On estimation we provide a new uniform convergence result that allows for random weighting and data dependent bandwidths and trimming. We include Monte Carlo simulations and an empirical application. 
Keywords:  Double index models; Two step estimators; Semiparametric regression; Control function estimators; Sample selection models; Empirical process theory; Limited dependent variables; Oracle estimators; Migration 
JEL:  C13 C14 C21 D24 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:756&r=ecm 
By:  Kasahara, Hiroyuki; Shimotsu, Katsumi 
Abstract:  This article analyzes the identifiability of k-variate, M-component finite mixture models in which each component distribution has independent marginals, including models in latent class analysis. Without making parametric assumptions on the component distributions, we investigate how one can identify the number of components and the component distributions from the distribution function of the observed data. We reveal an important link between the number of variables (k), the number of values each variable can take, and the number of identifiable components. A lower bound on the number of components (M) is nonparametrically identifiable if k >= 2, and the maximum identifiable number of components is determined by the number of different values each variable takes. When M is known, the mixing proportions and the component distributions are nonparametrically identified from matrices constructed from the distribution function of the data if (i) k >= 3, (ii) two of k variables take at least M different values, and (iii) these matrices satisfy some rank and eigenvalue conditions. For the unknown M case, we propose an algorithm that possibly identifies M and the component distributions from data. We discuss a condition for nonparametric identification and its observable implications. In case M cannot be identified, we use our identification condition to develop a procedure that consistently estimates a lower bound on the number of components by estimating the rank of a matrix constructed from the distribution function of observed variables. 
Keywords:  finite mixture, latent class analysis, latent class model, model selection, number of components, rank estimation 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:201009&r=ecm 
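The rank argument in this abstract can be illustrated numerically. The sketch below (all numbers invented, not the paper's) builds the joint pmf matrix of two discrete variables from a two-component mixture with independent marginals, P = Σ_m π_m p_m q_m⊤, and checks that its rank recovers a lower bound on the number of components:

```python
import numpy as np

# Illustrative M = 2 mixture of two discrete variables that are independent
# within each component. The joint pmf matrix is P = sum_m pi_m * outer(p_m, q_m),
# so the rank of P is a lower bound on the number of components M.
pi_m = np.array([0.4, 0.6])                     # mixing proportions
p = np.array([[0.7, 0.2, 0.1],                  # P(X1 = j | component 1)
              [0.1, 0.3, 0.6]])                 # P(X1 = j | component 2)
q = np.array([[0.5, 0.4, 0.1],                  # P(X2 = j | component 1)
              [0.1, 0.2, 0.7]])                 # P(X2 = j | component 2)
P = sum(pi_m[m] * np.outer(p[m], q[m]) for m in range(2))

# rank of the joint-distribution matrix: a lower bound on M
M_lower = np.linalg.matrix_rank(P)
```

Here the component marginals are linearly independent, so the bound is sharp and `M_lower` equals 2.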
By:  Sarafidis, Vasilis; Yamagata, Takashi 
Abstract:  This paper develops an instrumental variable (IV) estimator for consistent estimation of dynamic panel data models with error cross-sectional dependence when both N and T, the cross-section and time series dimensions respectively, are large. Our approach asymptotically projects out the common factors from regressors using principal components analysis and then uses the defactored regressors as instruments to estimate the model in a standard way. Therefore, the proposed estimator is computationally very attractive. Furthermore, our procedure requires estimating only the common factors included in the regressors, leaving those that influence solely the dependent variable in the errors. Hence, aside from computational simplicity, the resulting approach allows parsimonious estimation of the model. The finite-sample performance of the IV estimator and the associated t-test is investigated using simulated data. The results show that the bias of the estimator is very small and the size of the t-test is correct even when (T,N) is as small as (10,50). The performance of an overidentifying restrictions test is also explored and the evidence suggests that it has good power when the key assumption is violated. 
Keywords:  Method of moments; dynamic panel data; cross-sectional dependence 
JEL:  C13 C23 C15 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:25182&r=ecm 
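The defactoring-then-IV idea described in this abstract can be sketched in a deliberately simplified static setting (the paper's estimator is for dynamic panels; the data-generating process and numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 100
f = rng.normal(size=(T, 1))                        # common factor
lam = rng.normal(size=(N, 1))                      # factor loadings
x = f @ lam.T + rng.normal(size=(T, N))            # regressor with factor structure
y = 0.5 * x + 0.8 * f @ lam.T + rng.normal(size=(T, N))  # factor also hits the error

# Step 1: estimate the factor space of x by principal components (via SVD)
u, _, _ = np.linalg.svd(x, full_matrices=False)
M = np.eye(T) - np.outer(u[:, 0], u[:, 0])         # projects off the estimated factor

# Step 2: use the defactored regressor as an instrument for x
z, xv, yv = (M @ x).ravel(), x.ravel(), y.ravel()
beta_iv = (z @ yv) / (z @ xv)                      # close to the true value 0.5
beta_ols = (xv @ yv) / (xv @ xv)                   # biased upward by the common factor
```

The instrument is uncorrelated with the factor component of the error, so the IV estimate is near 0.5 while OLS is visibly biased.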
By:  James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University and CREATES) 
Abstract:  We calculate numerically the asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter, b, which must be estimated, simple tabulation is not feasible. Partly due to the presence of this parameter, the choice of model specification for the response surface regressions used to obtain the numerical distribution functions is more involved than is usually the case. We deal with model uncertainty by model averaging rather than by model selection. We make available a computer program which, given the dimension of the problem, q, and a value of b, provides either a set of critical values or the asymptotic P-value for any value of the likelihood ratio statistic. The use of this program is illustrated by means of an empirical example involving opinion poll data. 
Keywords:  Cofractional process, fractional unit root, fractional cointegration, response surface regression, cointegration rank, numerical distribution function, model averaging. 
JEL:  C12 C16 C22 C32 
Date:  2010–08–03 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201059&r=ecm 
By:  Massimiliano Caporin; Michael McAleer (University of Canterbury) 
Abstract:  This paper focuses on the selection and comparison of alternative non-nested volatility models. We review the traditional in-sample methods commonly applied in the volatility framework, namely diagnostic checking procedures, information criteria, and conditions for the existence of moments and asymptotic theory, as well as the out-of-sample model selection approaches, such as mean squared error and Model Confidence Set approaches. The paper develops some innovative loss functions which are based on Value-at-Risk forecasts. Finally, we present an empirical application based on simple univariate volatility models, namely GARCH, GJR, EGARCH, and Stochastic Volatility that are widely used to capture asymmetry and leverage. 
Keywords:  Volatility model selection; volatility model comparison; non-nested models; model confidence set; Value-at-Risk forecasts; asymmetry; leverage 
JEL:  C11 C22 C52 
Date:  2010–09–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/58&r=ecm 
By:  Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid 
Abstract:  We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and U.S. macroeconomic aggregates growth rates respectively, show the usefulness of the model selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons. 
Date:  2010–09–13 
URL:  http://d.repec.org/n?u=RePEc:fgv:epgewp:707&r=ecm 
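The paper selects lag length and rank jointly; as a minimal sketch of just the lag-length piece via an information criterion (a standard BIC, with invented simulation settings — not the authors' hybrid procedure):

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 400, 2
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])                        # true model is a VAR(1)
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A1.T + rng.normal(size=k)

def var_bic(y, p):
    """Fit a VAR(p) by least squares and return its BIC."""
    T, k = y.shape
    X = np.hstack([y[p - i - 1:T - i - 1] for i in range(p)])  # lags 1..p
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma = (Y - X @ B).T @ (Y - X @ B) / len(Y)
    return np.log(np.linalg.det(sigma)) + np.log(len(Y)) * p * k * k / len(Y)

# BIC's penalty grows with the number of lag parameters, so overfitting loses
best_p = min(range(1, 5), key=lambda p: var_bic(y, p))
```

With T = 400 observations, BIC recovers the true lag length p = 1.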
By:  Trenkler, Carsten; Weber, Enzo 
Abstract:  In this paper we discuss identification of codependent VAR and VEC models. Codependence of order q is given if a linear combination of autocorrelated variables eliminates the serial correlation after q lags. Importantly, maximum likelihood estimation and corresponding likelihood ratio testing are only possible if the codependence restrictions can be uniquely imposed. However, our study reveals that codependent VAR and VEC models are not generally identified. Nevertheless, we show that one can guarantee identification in case of serial correlation common features, i.e. when q=0, and for a single vector generating codependence of order q=1. 
Keywords:  Codependence; identification; VAR; cointegration; serial correlation common features 
JEL:  C32 
Date:  2010–09–15 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:16477&r=ecm 
By:  Sokbae Lee (University College London); Arthur Lewbel (Boston College) 
Abstract:  We provide new conditions for identification of accelerated failure time competing risks models. These include Roy models and some auction models. In our setup, unknown regression functions and the joint survivor function of latent disturbance terms are all nonparametric. We show that this model is identified given covariates that are independent of latent errors, provided that a certain rank condition is satisfied. We present a simple example in which our rank condition for identification is verified. Our identification strategy does not depend on identification at infinity or near zero, and it does not require exclusion assumptions. Given our identification, we show estimation can be accomplished using sieves. 
Keywords:  accelerated failure time models; competing risks; identifiability. 
Date:  2010–04–01 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:755&r=ecm 
By:  Dobrislav P. Dobrev; Pawel J. Szerszen 
Abstract:  We demonstrate that the parameters controlling skewness and kurtosis in popular equity return models estimated at daily frequency can be obtained almost as precisely as if volatility is observable by simply incorporating the strong information content of realized volatility measures extracted from high-frequency data. For this purpose, we introduce asymptotically exact volatility measurement equations in state space form and propose a Bayesian estimation approach. Our highly efficient estimates lead in turn to substantial gains for forecasting various risk measures at horizons ranging from a few days to a few months ahead when taking also into account parameter uncertainty. As a practical rule of thumb, we find that two years of high-frequency data often suffice to obtain the same level of precision as twenty years of daily data, thereby making our approach particularly useful in finance applications where only short data samples are available or economically meaningful to use. Moreover, we find that compared to model inference without high-frequency data, our approach largely eliminates underestimation of risk during bad times or overestimation of risk during good times. We assess the attainable improvements in VaR forecast accuracy on simulated data and provide an empirical illustration on stock returns during the financial crisis of 2007-2008. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:201045&r=ecm 
By:  Di Kuang (Aon, 8 Devonshire Square, London EC2M 4PL, U.K.); Bent Nielsen (Nuffield College, Oxford OX1 1NF, U.K.); Jens Perch Nielsen (Cass Business School, City University London, 106 Bunhill Row, London EC1Y 8TZ, U.K.) 
Abstract:  Reserving in general insurance is often done using chain-ladder-type methods. We propose a method aimed at situations where there is a sudden change in the economic environment affecting the policies for all accident years in the reserving triangle. It is shown that methods for forecasting nonstationary time series are helpful. We illustrate the method using data published in Barnett and Zehnwirth (2000). These data illustrate features we also found in data from the general insurer RSA during the recent credit crunch. 
Keywords:  Calendar effect, canonical parameter, extended chain-ladder, identification problem, forecasting. 
Date:  2010–06–24 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:1005&r=ecm 
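For readers unfamiliar with the chain-ladder baseline this paper extends, the classic method can be sketched on a toy cumulative run-off triangle (numbers invented, not the Barnett-Zehnwirth data):

```python
import numpy as np

# cumulative run-off triangle: rows = accident years, cols = development years
tri = np.array([
    [100., 150., 170., 180.],
    [110., 168., 190., np.nan],
    [120., 180., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
obs = ~np.isnan(tri)

# development factors from accident years observed in both adjacent columns
f = [tri[obs[:, j + 1], j + 1].sum() / tri[obs[:, j + 1], j].sum()
     for j in range(3)]

# complete the triangle by chaining the factors forward
full = tri.copy()
for j in range(3):
    blank = np.isnan(full[:, j + 1])
    full[blank, j + 1] = full[blank, j] * f[j]

# reserve = projected ultimate claims minus the latest observed diagonal
latest = tri[np.arange(4), obs.sum(axis=1) - 1].sum()
reserve = full[:, -1].sum() - latest
```

The paper's point is that a sudden calendar-time shock invalidates the constant development factors assumed here, motivating forecasting methods for nonstationary series instead.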
By:  Schiöler, Linus (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University); Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  Online monitoring is needed to detect outbreaks of diseases like influenza. Surveillance is also needed for other kinds of outbreaks, in the sense of an increasing expected value after a constant period. Information on spatial location or other variables might be available and may be utilized. We adapted a robust method for outbreak detection to a multivariate case. The relation between the times of the onsets of the outbreaks at different locations (or some other variable) was used to determine the sufficient statistic for surveillance. The derived maximum likelihood estimator of the outbreak regression was semiparametric in the sense that the baseline and the slope were nonparametric while the distribution belonged to the exponential family. The estimator was used in a generalized likelihood ratio surveillance method. The method was evaluated with respect to robustness and efficiency in a simulation study and applied to spatial data for detection of influenza outbreaks in Sweden. 
Keywords:  Exponential family; Generalised likelihood; Ordered regression; Regional data; Surveillance 
JEL:  C10 
Date:  2010–09–17 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunsru:2010_002&r=ecm 
By:  Trenkler, Carsten; Weber, Enzo 
Abstract:  We analyze nonstationary time series that not only trend together in the long run but also restore the equilibrium immediately in the period following a deviation. While this represents a common serial correlation feature, the framework is extended to codependence, allowing for delayed adjustment. We show which restrictions are implied for VECMs and lay out a likelihood ratio test. In addition, due to identification problems in codependent VECMs a GMM test approach is proposed. We apply the concept to US and European interest rate data, examining the capability of the Fed and ECB to control overnight money market rates. 
Keywords:  Serial correlation common features; codependence; cointegration; overnight interest rates; central banks 
JEL:  C32 E52 
Date:  2010–09–15 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:16478&r=ecm 
By:  Feldkircher, Martin (Oesterreichische Nationalbank) 
Abstract:  In this study the forecast performance of model averaged forecasts is compared to that of alternative single models. Following Eklund and Karlsson (2007) we form posterior model probabilities (the weights for the combined forecast) based on the predictive likelihood. Extending the work of Fernández et al. (2001a) we carry out a prior sensitivity analysis for a key parameter in Bayesian model averaging (BMA): Zellner's g. The main results based on a simulation study are fourfold: First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, and more strikingly, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model put forward by Barbieri and Berger (2003). On the other hand, model averaging excels in predicting the direction of changes, a finding that is in line with Crespo Cuaresma (2007). Lastly, our recommendation concerning the prior on g is to choose the prior proposed by Laud and Ibrahim (1995) with a holdout sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We finally forecast the monthly industrial production output of six Central Eastern and South Eastern European (CESEE) economies for a one-step-ahead forecasting horizon. Following the aforementioned forecasting recommendations improves the out-of-sample statistics over a 30-period horizon, beating the first-order autoregressive benchmark model for almost all countries. 
Keywords:  Forecast Combination; Bayesian Model Averaging; Median Probability Model; Predictive Likelihood; Industrial Production; Model Uncertainty 
JEL:  C11 C15 C53 
Date:  2010–09–15 
URL:  http://d.repec.org/n?u=RePEc:ris:sbgwpe:2010_014&r=ecm 
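Forming model weights from the predictive likelihood of a holdout sample, as this abstract describes, can be sketched for two nested Gaussian regressions (a simplified stand-in for the paper's BMA setup; data and split are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=200)
train, hold = slice(0, 150), slice(150, 200)       # 25% holdout sample

def predictive_loglik(X):
    """OLS fit on the training split, Gaussian log score on the holdout."""
    b, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    resid_tr = y[train] - X[train] @ b
    s2 = resid_tr @ resid_tr / len(resid_tr)
    resid_h = y[hold] - X[hold] @ b
    return -0.5 * (len(resid_h) * np.log(2 * np.pi * s2) + resid_h @ resid_h / s2)

X0 = np.ones((len(x), 1))                          # model 0: intercept only
X1 = np.column_stack([np.ones_like(x), x])         # model 1: intercept + x
ll = np.array([predictive_loglik(X0), predictive_loglik(X1)])
w = np.exp(ll - ll.max())
w /= w.sum()                                       # weights for the combined forecast
```

The correctly specified model receives nearly all of the weight; the combined forecast is the weight-averaged prediction across models.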
By:  Ole E. Barndorff-Nielsen (The T.N. Thiele Centre for Mathematics in Natural Science, Department of Mathematical Sciences, University of Aarhus, Ny Munkegade, DK-8000 Aarhus C, Denmark & CREATES, University of Aarhus); David G. Pollard (AHL Research, Man Research Laboratory, Eagle House, Walton Well Road, Oxford OX2 6ED, UK); Neil Shephard (Oxford-Man Institute, University of Oxford, Eagle House, Walton Well Road, Oxford OX2 6ED, UK, & Department of Economics, University of Oxford) 
Abstract:  Motivated by features of low latency data in finance we study in detail discrete-valued Lévy processes as the basis of price processes for high frequency econometrics. An important case of this is a Skellam process, which is the difference of two independent Poisson processes. We propose a natural generalisation which is the difference of two negative binomial processes. We apply these models in practice to low latency data for a variety of different types of futures contracts. 
Keywords:  futures markets; high frequency econometrics; low latency data; negative binomial; Skellam distribution. 
Date:  2010–06–18 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:1004&r=ecm 
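A Skellam process, as defined in the abstract, is straightforward to simulate as the difference of two independent Poisson processes (intensities below are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(42)
lam_up, lam_down = 3.0, 2.0           # intensities of up-tick / down-tick processes
n_steps, n_paths, dt = 1000, 2000, 1.0 / 1000

# increments of two independent Poisson processes; their difference is Skellam
up = rng.poisson(lam_up * dt, size=(n_paths, n_steps))
down = rng.poisson(lam_down * dt, size=(n_paths, n_steps))
path = (up - down).cumsum(axis=1)     # integer-valued "price" paths

# at t = 1 the process is Skellam(3, 2): mean 3 - 2 = 1, variance 3 + 2 = 5
terminal = path[:, -1]
```

The integer-valued paths mimic tick-by-tick price moves; the simulated terminal mean and variance match the Skellam moments λ₁ - λ₂ and λ₁ + λ₂.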
By:  Arthur Lewbel (Boston College); Oliver Linton (London School of Economics) 
Abstract:  We consider nonparametric identification and estimation of consumption based asset pricing Euler equations. This entails estimation of pricing kernels or equivalently marginal utility functions up to scale. The standard way of writing these Euler pricing equations yields Fredholm integral equations of the first kind, resulting in an ill-posed inverse problem. We show that these equations can be written in a form that equals (or, with habits, resembles) Fredholm integral equations of the second kind, having well-posed rather than ill-posed inverses. We allow durables, habits, or both to affect utility. We show how to extend the usual method of solving Fredholm integral equations of the second kind to allow for the presence of habits. Using these results, we show with few low-level assumptions that marginal utility functions and pricing kernels are locally nonparametrically identified, and we give conditions for finite set and point identification of these functions. Unlike the case of ill-posed inverse problems, the limiting distribution theory for our nonparametric estimators should be relatively standard. 
JEL:  C14 
Date:  2010–06–01 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:757&r=ecm 
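The well-posedness of second-kind equations, which the abstract contrasts with the ill-posed first-kind case, is visible in the classic Nyström (quadrature) scheme. The kernel below is a toy example, not the paper's pricing kernel:

```python
import numpy as np

# Nystrom solution of a Fredholm equation of the second kind,
#   f(x) = g(x) + lam * \int_0^1 K(x, y) f(y) dy,
# discretized on a midpoint rule. The resulting linear system is
# well conditioned, unlike a discretized first-kind equation.
n, lam, w = 200, 0.5, 1.0 / 200
x = (np.arange(n) + 0.5) / n                       # quadrature nodes
K = np.exp(-np.abs(x[:, None] - x[None, :]))       # toy kernel (illustrative)
g = np.ones(n)

A = np.eye(n) - lam * w * K                        # (I - lam K W) f = g
f = np.linalg.solve(A, g)

resid = f - (g + lam * w * K @ f)                  # discretized equation holds
```

The identity term keeps the eigenvalues of A bounded away from zero, so the solve is stable and no regularization is needed.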
By:  Jeroen Rombouts; Lars Peter Stentoft 
Abstract:  This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher-order moments of the risk-neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%. 
Keywords:  Asymmetric heteroskedastic models, finite mixture models, option pricing, out-of-sample prediction, statistical fit 
JEL:  C11 C15 C22 G13 
Date:  2010–09–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2010s38&r=ecm 
By:  Yingying Dong (California State University, Fullerton); Arthur Lewbel (Boston College) 
Abstract:  Consider a standard regression discontinuity model, where an outcome Y is determined in part by a binary treatment indicator T, which always (in sharp designs) or sometimes (in fuzzy designs) equals one when a running variable X exceeds a threshold c, and zero otherwise. It is well known that in these models a local average treatment effect can be nonparametrically identified under very general conditions. We show that the derivative of this treatment effect with respect to the threshold c is also nonparametrically identified in both sharp and fuzzy designs, and can be easily estimated. This marginal threshold treatment effect (MTTE) may be used to estimate the impacts of small changes in the threshold, e.g., we use it to show how raising the age of Medicare eligibility would change the probability of take-up of various types of health insurance. 
Keywords:  regression discontinuity, sharp design, fuzzy design, treatment effects, program evaluation, threshold, running variable, forcing variable, marginal effects, health insurance, Medicare 
JEL:  C21 C25 
Date:  2010–08–01 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:759&r=ecm 
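The sharp-design building block of this abstract — estimating the jump in Y at the threshold by local linear fits on each side — can be sketched as follows (simulated data with a known effect of 2; bandwidth and design are illustrative, and this is not the paper's MTTE estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, h = 5000, 0.0, 0.2                 # sample size, threshold, bandwidth
x = rng.uniform(-1, 1, n)                # running variable
t = (x >= c).astype(float)               # sharp design: treatment jumps at c
y = 1.0 + 0.5 * x + 2.0 * t + rng.normal(scale=0.3, size=n)  # true effect = 2

def boundary_fit(mask):
    """Intercept of a linear fit within the window, evaluated at x = c."""
    X = np.column_stack([np.ones(mask.sum()), x[mask] - c])
    beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return beta[0]

# local average treatment effect: jump in the conditional mean of y at c
tau_hat = (boundary_fit((x >= c) & (x < c + h))
           - boundary_fit((x < c) & (x >= c - h)))
```

The paper's contribution concerns the derivative of this effect with respect to c, which would be obtained by re-estimating the design at nearby thresholds.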
By:  Tom Engsted (CREATES, School of Economics and Management, Aarhus University, DK-8000 Aarhus C); Bent Nielsen (Nuffield College, Oxford OX1 1NF, U.K.) 
Abstract:  We derive the parameter restrictions that a standard equity market model implies for a bivariate vector autoregression for stock prices and dividends, and we show how to test these restrictions using likelihood ratio tests. The restrictions, which imply that stock returns are unpredictable, are derived both for a model without bubbles and for a model with a rational bubble. In both cases we show how the restrictions can be tested through standard chi-squared inference. The analysis for the no-bubble case is done within the traditional Johansen model for I(1) variables, while the bubble model is analysed using a co-explosive framework. The methodology is illustrated using US stock prices and dividends for the period 1872-2000. 
Keywords:  Rational bubbles, Explosiveness and co-explosiveness, Cointegration, Vector autoregression, Likelihood ratio tests. 
Date:  2010–06–24 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:1006&r=ecm 
By:  Eric French; Christopher Taber 
Abstract:  This chapter discusses identification of common selection models of the labor market. We start with the classic Roy model and show how it can be identified with exclusion restrictions. We then extend the argument to the generalized Roy model, treatment effect models, duration models, search models, and dynamic discrete choice models. In all cases, key ingredients for identification are exclusion restrictions and support conditions. 
Keywords:  Labor market 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:fip:fedhwp:wp201008&r=ecm 
By:  Bent Jesper Christensen (Aarhus University and CREATES); Paolo Santucci de Magistris (University of Pavia and CREATES) 
Abstract:  We propose a simple model in which realized stock market return volatility and implied volatility backed out of option prices are subject to common level shifts corresponding to movements between bull and bear markets. The model is estimated using the Kalman filter in a generalization to the multivariate case of the univariate level shift technique by Lu and Perron (2008). An application to the S&P 500 index and a simulation experiment show that the recently documented empirical properties of strong persistence in volatility and forecastability of future realized volatility from current implied volatility, which have been interpreted as long memory (or fractional integration) in volatility and fractional cointegration between implied and realized volatility, are accounted for by occasional common level shifts. 
Keywords:  Common level shifts, fractional cointegration, fractional VECM, implied volatility, long memory, options, realized volatility. 
JEL:  C32 G13 G14 
Date:  2010–09–09 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201060&r=ecm 
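The core mechanism in this abstract — occasional level shifts mimicking long memory — is easy to reproduce in a toy simulation (shift probability and noise scale are invented; this is not the paper's Kalman-filter model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
shift = rng.random(n) < 0.01             # rare, random regime changes
mu = np.zeros(n)
for t in range(1, n):
    mu[t] = rng.normal() if shift[t] else mu[t - 1]   # common level process
y = mu + 0.3 * rng.normal(size=n)        # observed "volatility" series

def acf(x, k):
    x = x - x.mean()
    return (x[:-k] @ x[k:]) / (x @ x)

# sample autocorrelations decay very slowly, mimicking long memory even
# though the true process is short-memory with occasional level shifts
```

Even at lag 100 the sample autocorrelation remains clearly positive, the kind of slow hyperbolic-looking decay often read as fractional integration.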
By:  Areal, Francisco J; Balcombe, Kelvin; Tiffin, R 
Abstract:  An approach to incorporate spatial dependence into Stochastic Frontier analysis is developed and applied to a sample of 215 dairy farms in England and Wales. A number of alternative specifications for the spatial weight matrix are used to analyse the effect of these on the estimation of spatial dependence. Estimation is conducted using a Bayesian approach and results indicate that spatial dependence is present when explaining technical inefficiency. 
Keywords:  Spatial dependence; technical efficiency; Bayesian; spatial weight matrix 
JEL:  C51 C13 C23 Q12 C11 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:24961&r=ecm 
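One of the spatial weight matrix specifications the abstract alludes to, a row-normalized k-nearest-neighbour matrix, can be constructed as follows (random hypothetical farm locations; not the paper's data or chosen specification):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 20, 3
coords = rng.random((n, 2))                        # hypothetical farm locations
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(d, np.inf)                        # no self-neighbours

W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(d[i])[:k]] = 1.0               # k nearest neighbours
W /= W.sum(axis=1, keepdims=True)                  # row-normalize

# W @ u would then give, for each farm, the average value of u
# (e.g. inefficiency) among its k nearest neighbours
```

Alternative specifications (contiguity, distance-decay) replace only the construction of the unnormalized weights; the row-normalization step is common to all.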
By:  Steffen Andersen; John Fountain; Glenn W. Harrison; Arne Risa Hole; E. Elisabet Rutström 
Abstract:  We propose a method for estimating subjective beliefs, viewed as a subjective probability distribution. The key insight is to characterize beliefs as a parameter to be estimated from observed choices in a welldefined experimental task, and to estimate that parameter as a random coefficient. The experimental task consists of a series of standard lottery choices in which the subject is assumed to use conventional risk attitudes to select one lottery or the other, and then a series of betting choices in which the subject is presented with a range of bookies offering odds on the outcome of some event that the subject has a belief over. Knowledge of the risk attitudes of subjects conditions the inferences about subjective beliefs. Maximum simulated likelihood methods are used to estimate a structural model in which subjects employ subjective beliefs to make bets. We present evidence that some subjective probabilities are indeed best characterized as probability distributions with nonzero variance. 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:exc:wpaper:201014&r=ecm 
By:  Peter C. B. Phillips (Cowles Foundation, Yale University) 
Abstract:  Trends are ubiquitous in economic discourse, play a role in much economic theory, and have been intensively studied in econometrics over the last three decades. Yet the empirical economist, forecaster, and policy maker have little guidance from theory about the source and nature of trend behavior, even less guidance about practical formulations, and are heavily reliant on a limited class of stochastic trend, deterministic drift, and structural break models to use in applications. A vast econometric literature has emerged but the nature of trend remains elusive. In spite of being the dominant characteristic in much economic data, having a role in policy assessment that is often vital, and attracting intense academic and popular interest that extends well beyond the subject of economics, trends are little understood. This essay discusses some implications of these limitations, mentions some research opportunities, and briefly illustrates the extent of the difficulties in learning about trend phenomena even when the time series are far longer than those that are available in economics. 
Keywords:  Climate change, Etymology of trend, Paleoclimatology, Policy, Stochastic trend 
JEL:  C22 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1771&r=ecm 