
on Econometrics 
By:  Stefan Holst Bache (Aarhus University, School of Economics and Management and CREATES) 
Abstract:  A new and alternative quantile regression estimator is developed and shown to be root-n consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and is asymptotically equivalent to the usual quantile regression estimator, while being a genuinely different estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question. 
Keywords:  Quantile regression, nonlinear quantile regression, estimating functions, minimax estimation, empirical process theory 
JEL:  C1 C4 C5 C6 
Date:  2010–08–01 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201054&r=ecm 
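The "usual" quantile regression estimator that this abstract takes as its benchmark minimizes the Koenker-Bassett check (pinball) loss. A minimal numpy sketch of that benchmark via iteratively reweighted least squares (illustrative only; this is not the paper's minimax deviance estimator, and the data below are made up):

```python
import numpy as np

def quantile_regression(X, y, tau=0.5, iters=50, eps=1e-4):
    """Linear quantile regression via iteratively reweighted least
    squares on the Koenker-Bassett check loss (illustrative sketch)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    for _ in range(iters):
        u = y - X @ beta
        # rho_tau(u) = u * (tau - 1{u<0}); IRLS weight = rho_tau(u) / u^2
        w = np.where(u >= 0, tau, 1.0 - tau) / np.maximum(np.abs(u), eps)
        XW = X * w[:, None]
        beta = np.linalg.solve(X.T @ XW, XW.T @ y)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=500)
beta_med = quantile_regression(X, y, tau=0.5)   # close to (1, 2)
```

With symmetric errors the tau = 0.5 fit recovers the conditional mean coefficients, which makes the sketch easy to sanity-check.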
By:  Anders Bredahl Kock (CREATES, Aarhus University) 
Abstract:  This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular, we show that the Bridge estimator is oracle efficient: it can correctly distinguish between relevant and irrelevant variables, and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results of Huang et al. (2008). Furthermore, the number of relevant variables is allowed to be larger than the sample size. 
Keywords:  Panel data, high dimensional modeling, variable selection, Bridge estimators, oracle property 
JEL:  C1 C23 
Date:  2010–09–01 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201056&r=ecm 
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne, CNRS UMR 8174, Université Panthéon-Sorbonne - Paris I; EEP-PSE - Ecole d'Économie de Paris / Paris School of Economics); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne, CNRS UMR 8174, Université Panthéon-Sorbonne - Paris I) 
Abstract:  An empirical forecast accuracy comparison of a nonparametric method, the multivariate nearest neighbor method, with parametric VAR modelling is conducted on euro area GDP. Using both methods for nowcasting and forecasting GDP, through the estimation of economic indicators plugged into bridge equations, we obtain more accurate forecasts with the nearest neighbor method. We also prove the asymptotic normality of the multivariate k-nearest neighbor regression estimator for dependent time series, providing confidence intervals for point forecasts in time series. 
Keywords:  Forecast; Economic indicators; GDP; Euro area; VAR; Multivariate k-nearest neighbor regression; Asymptotic normality 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00511979_v1&r=ecm 
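The nearest neighbor forecasting idea described above can be sketched in a few lines: embed the series in lag-vectors, find the k past vectors closest to the most recent one, and average their successors. A simplified univariate sketch on synthetic data (not the authors' multivariate implementation):

```python
import numpy as np

def knn_forecast(series, k=5, lags=3):
    """One-step-ahead k-nearest-neighbor forecast: average the successors
    of the k historical lag-vectors closest to the current lag-vector."""
    x = np.asarray(series, dtype=float)
    library = np.array([x[t - lags:t] for t in range(lags, len(x))])
    successors = x[lags:]                  # value following each lag-vector
    target = x[-lags:]                     # current lag-vector
    dist = np.linalg.norm(library - target, axis=1)
    nearest = np.argsort(dist)[:k]
    return successors[nearest].mean()

t = np.arange(200)
x = np.sin(2 * np.pi * t / 20)             # period-20 series
pred = knn_forecast(x, k=5, lags=4)        # next value is sin(2*pi*200/20) = 0
```

On a perfectly periodic series the nearest neighbors are exact period-shifted copies, so the forecast reproduces the next value almost exactly.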
By:  Naoto Kunitomo (Faculty of Economics, University of Tokyo); Seisho Sato (Institute of Statistical Mathematics) 
Abstract:  For estimating realized volatility and covariance from high frequency data, we have introduced the Separating Information Maximum Likelihood (SIML) method for settings with possible micro-market noise in Kunitomo and Sato (2008a, 2008b, 2010a, 2010b). The resulting estimator is simple and has a representation as a specific quadratic form of returns. We show that the SIML estimator has reasonable asymptotic properties: it is consistent and asymptotically normal (with stable convergence in the general case) when the sample size is large, under general conditions including some non-Gaussian processes and some volatility models. Based on simulations, we find that the SIML estimator has reasonable finite sample properties and thus would be useful in practice. The SIML estimator has asymptotic robustness properties in the sense that it is consistent when the noise terms are weakly dependent and endogenously correlated with the efficient market price process. We also apply our method to an analysis of Nikkei-225 Futures, which has been the major stock index in the Japanese financial sector. 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:cfi:fseres:cf228&r=ecm 
By:  Yves Dominicy; David Veredas; Hiroaki Ogata 
Abstract:  We estimate the parameters of an elliptical distribution by means of a multivariate extension of the Method of Simulated Quantiles (MSQ) of Dominicy and Veredas (2010). The multivariate extension entails the challenge of constructing a function of quantiles that is informative about the covariation parameters. The interquantile range of a projection of pairwise random variables onto the 45-degree line is very informative about the covariation of the two random variables. MSQ provides the asymptotic theory for the estimators, and a Monte Carlo study reveals good finite sample properties of the estimators. An empirical application to 22 worldwide financial market returns illustrates the usefulness of the method. 
Keywords:  Quantiles; Elliptical Distribution; Heavy Tails 
JEL:  C13 C15 G11 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/61441&r=ecm 
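The covariation measure described in the abstract is easy to illustrate: project each pair onto the 45-degree line and take an interquantile range of the projection, which widens with positive covariation. A hypothetical numpy illustration of that idea (not the authors' full MSQ estimator):

```python
import numpy as np

def projection_iqr(x, y, p=0.75):
    """Interquantile range of the projection z = (x + y)/sqrt(2) onto the
    45-degree line; Var(z) = (Var x + Var y)/2 + Cov(x, y), so this
    spread is informative about the covariation of x and y."""
    z = (x + y) / np.sqrt(2)
    return np.quantile(z, p) - np.quantile(z, 1 - p)

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
eps = rng.normal(size=n)
y_corr = 0.8 * x + 0.6 * eps     # corr(x, y_corr) = 0.8, unit variance
y_indep = eps                    # independent of x
iqr_corr = projection_iqr(x, y_corr)
iqr_indep = projection_iqr(x, y_indep)
```

For Gaussian pairs the 25%-75% range of the projection is 1.349 times its standard deviation, so the two cases separate cleanly (sqrt(1.8) vs. 1 in standard deviation units).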
By:  Christos Ntantamis (School of Economics and Management, University of Aarhus and CREATES) 
Abstract:  Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used to perform these tasks. Breaks are defined as the data points where the underlying Markov chain switches from one state to another. The HMM is estimated using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), which permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes. The locations of the breaks are subsequently obtained by assigning states to data points according to the Maximum Posterior Mode (MPM) algorithm. The Integrated Classification Likelihood-Bayesian Information Criterion (ICL-BIC) determines the number of regimes by taking into account the classification of the data points to their corresponding regimes. The performance of the overall procedure, denoted IMI after the initials of the component algorithms, is validated by two sets of simulations: one in which only the parameters are permitted to differ across regimes, and one that also permits differences in the functional forms. The IMI method performs well in both sets. Moreover, when compared to the Bai and Perron (1998) method, its performance is superior in assessing the number of breaks and their respective locations. Finally, the methodology is applied to the detection of breaks in the monetary policy of the United States, the different functional forms being variants of the Taylor (1993) rule. 
Keywords:  Structural change, Hidden Markov Model, Regime Switching, Bayesian Segmentation, Monetary Policy 
JEL:  C13 C22 C52 
Date:  2010–08–31 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201052&r=ecm 
By:  Millimet, Daniel L. (Southern Methodist University) 
Abstract:  Researchers in economics and other disciplines are often interested in the causal effect of a binary treatment on outcomes. Econometric methods used to estimate such effects are divided into one of two strands depending on whether they require the conditional independence assumption (i.e., independence of potential outcomes and treatment assignment conditional on a set of observable covariates). When this assumption holds, researchers now have a wide array of estimation techniques from which to choose. However, very little is known about their performance – both in absolute and relative terms – when measurement error is present. In this study, the performance of several estimators that require the conditional independence assumption, as well as some that do not, is evaluated in a Monte Carlo study. In all cases, the data-generating process is such that conditional independence holds with the 'real' data. However, measurement error is then introduced. Specifically, three types of measurement error are considered: (i) errors in treatment assignment, (ii) errors in the outcome, and (iii) errors in the vector of covariates. Recommendations for researchers are provided. 
Keywords:  treatment effects, propensity score, unconfoundedness, selection on observables, measurement error 
JEL:  C21 C52 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5140&r=ecm 
By:  McMurry, Timothy L; Politis, D N 
Abstract:  We address the problem of estimating the autocovariance matrix of a stationary process. Under short range dependence assumptions, convergence rates are established for a gradually tapered version of the sample autocovariance matrix and for its inverse. The proposed estimator is formed by leaving the main diagonals of the sample autocovariance matrix intact while gradually down-weighting off-diagonal entries towards zero. In addition, we show the same convergence rates hold for a positive definite version of the estimator, and we introduce a new approach for selecting the banding parameter. The new matrix estimator is shown to perform well theoretically and in simulation studies. As an application, we introduce a new resampling scheme for stationary processes termed the linear process bootstrap (LPB). The LPB is shown to be asymptotically valid for the sample mean and related statistics. The effectiveness of the proposed methods is demonstrated in a simulation study. 
Keywords:  autocovariance matrix, stationary process, bootstrap, block bootstrap, sieve bootstrap 
Date:  2010–03–31 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:1257043&r=ecm 
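A gradually tapered autocovariance matrix of the kind described can be sketched directly: keep low-lag diagonals intact and let a trapezoidal taper bring higher-lag entries to zero. A hypothetical minimal version (the paper's estimator, its positive definite correction, and its banding-parameter selection rule are more involved):

```python
import numpy as np

def sample_acov(x, max_lag):
    """Sample autocovariances gamma_hat(0), ..., gamma_hat(max_lag)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([x[:n - k] @ x[k:] / n for k in range(max_lag + 1)])

def trapezoid_taper(lag, l):
    """Flat-top taper: 1 for |lag| <= l, linear decay to 0 at |lag| = 2l."""
    return np.clip(2.0 - np.abs(lag) / l, 0.0, 1.0)

def tapered_acov_matrix(x, l):
    """Sample autocovariance matrix with off-diagonal entries gradually
    down-weighted towards zero beyond band l."""
    n = len(x)
    gamma = sample_acov(x, n - 1)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return trapezoid_taper(lags, l) * gamma[lags]

rng = np.random.default_rng(2)
x = rng.normal(size=2000)          # white noise: true matrix is the identity
S = tapered_acov_matrix(x, l=5)
```

For white noise the estimate is symmetric, has diagonal near one, and is exactly zero beyond lag 2l, which is the banding effect the abstract describes.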
By:  Grendar, Marian; Judge, George G. 
Abstract:  Empirical Likelihood (EL) and other methods that operate within the Empirical Estimating Equations (E3) approach to estimation and inference are challenged by the Empty Set Problem (ESP): the model set, which is data-dependent, may be empty for some data sets. To avoid ESP we return from E3 back to the Estimating Equations, and explore the Bayesian infinite-dimensional Maximum A Posteriori Probability (MAP) method. The Bayesian MAP with Dirichlet prior motivates a Revised EL (ReEL) method. ReEL (i) avoids ESP as well as the convex hull restriction, (ii) attains the same basic asymptotic properties as EL, and (iii) has computational complexity comparable to that of EL. 
Keywords:  empirical estimating equations, generalized minimum contrast, empirical likelihood, generalized empirical likelihood, empty set problem, convex hull restriction, estimating equations, maximum a posteriori probability 
Date:  2010–07–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:agrebk:1387327&r=ecm 
By:  W. Robert Reed (University of Canterbury); Rachel Webb 
Abstract:  This paper investigates the properties of the Panel-Corrected Standard Error (PCSE) estimator, which is commonly used when working with time-series cross-sectional (TSCS) data. In an influential paper, Beck and Katz (1995) (henceforth BK) demonstrated that FGLS produces coefficient standard errors that are severely underestimated. They report Monte Carlo experiments in which the PCSE estimator produces accurate standard error estimates at no, or little, loss in efficiency compared to FGLS. Our study further investigates the properties of the PCSE estimator. We first reproduce the main experimental results of BK using their Monte Carlo framework. We then show that the PCSE estimator does not perform as well when tested in data environments that better resemble “practical research situations.” When (i) the explanatory variable(s) are characterized by substantial persistence, (ii) there is serial correlation in the errors, and (iii) the time span of the data series is relatively short, coverage rates for the PCSE estimator frequently fall between 80 and 90 percent. Further, we find many “practical research situations” in which the PCSE estimator compares poorly with FGLS on efficiency grounds. 
Keywords:  Panel data estimation; Monte Carlo analysis; FGLS; Parks; PCSE; finite sample 
JEL:  C15 C23 
Date:  2010–08–13 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/53&r=ecm 
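The PCSE estimator under study is straightforward to state: OLS coefficients with a sandwich covariance whose middle term uses the cross-sectional covariance of the OLS residuals. A minimal balanced-panel sketch on simulated data (the unit-major stacking convention is an assumption of this illustration):

```python
import numpy as np

def pcse(X, y, n_units, n_periods):
    """OLS with Beck-Katz panel-corrected standard errors for a balanced
    panel stacked unit by unit (all T rows of unit 1, then unit 2, ...)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    E = (y - X @ beta).reshape(n_units, n_periods)
    sigma = E @ E.T / n_periods                  # N x N contemporaneous cov
    omega = np.kron(sigma, np.eye(n_periods))    # no serial correlation
    bread = np.linalg.inv(X.T @ X)
    cov = bread @ X.T @ omega @ X @ bread
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(3)
N, T = 10, 50
X = np.column_stack([np.ones(N * T), rng.normal(size=N * T)])
y = X @ np.array([0.5, 2.0]) + rng.normal(size=N * T)
beta_hat, se = pcse(X, y, N, T)
```

With spherical errors as simulated here, PCSE standard errors should roughly agree with classical OLS standard errors; the divergences the abstract documents arise under persistence and serial correlation.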
By:  Jeroen Hinloopen (University of Amsterdam); Rien Wagenvoort (European Investment Bank, Luxemburg) 
Abstract:  P-p plots contain all the information that is needed for scale-invariant comparisons. Indeed, Empirical Distribution Function (EDF) tests translate sample p-p plots into a single number. In this paper we characterize the set of all distinct p-p plots for two balanced samples of size <I>n</I> absent ties. Distributions of EDF test statistics are embedded in this set. It is thus used to derive the exact finite sample distribution of the L<sub>1</sub>-version of the Fisz-Cramér-von Mises test. Comparing this distribution with the (known) limiting distribution shows that the latter can always be used for hypothesis testing: although for finite samples the critical percentiles of the limiting distribution differ from the exact values, this does not lead to differences in the rejection of the underlying hypothesis. 
Keywords:  Sample p-p plot; EDF test; finite sample distribution; limiting distribution 
JEL:  C12 C14 C46 
Date:  2010–08–30 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100083&r=ecm 
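The link between sample p-p plots and EDF statistics can be made concrete: evaluate both empirical CDFs on the pooled sample, plot one against the other, and an L1 Cramér-von Mises-type statistic is just the average absolute departure from the diagonal. A hypothetical sketch (the paper's exact statistic and normalization may differ):

```python
import numpy as np

def pp_points(x, y):
    """Sample p-p plot: pairs (F_x(t), F_y(t)) of empirical CDF values
    evaluated at every point t of the pooled sample."""
    pooled = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    return Fx, Fy

def cvm_l1(x, y):
    """L1-type EDF statistic: mean vertical distance of the p-p plot
    from the 45-degree diagonal."""
    Fx, Fy = pp_points(x, y)
    return np.mean(np.abs(Fx - Fy))

rng = np.random.default_rng(4)
same = cvm_l1(rng.normal(size=1000), rng.normal(size=1000))
shifted = cvm_l1(rng.normal(size=1000), rng.normal(loc=1.0, size=1000))
```

Two samples from the same distribution give a p-p plot hugging the diagonal and a small statistic; a location shift pulls the plot away and inflates it.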
By:  Harvey, A. 
Abstract:  The asymptotic distribution of maximum likelihood estimators is derived for a class of exponential generalized autoregressive conditional heteroskedasticity (EGARCH) models. The result carries over to models for duration and realised volatility that use an exponential link function. A key feature of the model formulation is that the dynamics are driven by the score. 
Keywords:  Duration models; gamma distribution; general error distribution; heteroskedasticity; leverage; score; Student's t. 
JEL:  C22 
Date:  2010–08–26 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1040&r=ecm 
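The score-driven dynamics described above can be illustrated for the Student-t case, where the conditional score is a bounded function of the squared observation. A hypothetical filtering sketch with made-up parameter values (not estimates, and not the paper's full model):

```python
import numpy as np

def beta_t_egarch_filter(y, omega=0.0, phi=0.95, kappa=0.05, nu=10.0):
    """Filter log-scale lambda_t, driven by the (bounded) score of the
    Student-t log-likelihood: u_t = (nu + 1) * b_t - 1 with b_t in [0, 1)."""
    lam = np.empty(len(y))
    lam[0] = omega
    for t in range(len(y) - 1):
        r2 = y[t] ** 2 / (nu * np.exp(2.0 * lam[t]))
        u = (nu + 1.0) * r2 / (1.0 + r2) - 1.0   # score; mean zero at the truth
        lam[t + 1] = omega * (1.0 - phi) + phi * lam[t] + kappa * u
    return lam

rng = np.random.default_rng(5)
y = rng.standard_t(df=10, size=3000)
lam = beta_t_egarch_filter(y)
```

Because the score is bounded, a single extreme observation moves the filtered log-scale only a little; on correctly specified data the filter fluctuates around the true log-scale of zero.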
By:  Politis, D N; McElroy, Tucker S 
Abstract:  This paper considers the problem of distribution estimation for the studentized sample mean in the context of long memory and negative memory time series dynamics, adopting the fixed-bandwidth approach now popular in the econometrics literature. The distribution theory complements the short memory results of Kiefer and Vogelsang (2005). In particular, our results highlight the dependence on the employed kernel, on whether or not the taper is nonzero at the boundary, and, most importantly, on whether or not the process has short memory. We also demonstrate that small-bandwidth approaches fail when long memory or negative memory is present, since the limiting distribution is either a point mass at zero or degenerate. Extensive numerical work provides approximations to the quantiles of the asymptotic distribution for a range of tapers and memory parameters; these quantiles can be used in practice for the construction of confidence intervals and hypothesis tests for the mean of the time series. 
Keywords:  confidence intervals, critical values, dependence, Gaussian, kernel spectral density, tapers, testing 
Date:  2009–12–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:1088027&r=ecm 
By:  Gantner, M. (Tilburg University) 
Abstract:  In this thesis some new, nonparametric methods are introduced to explore uni- or bivariate data. First, we introduce the "shorth plot", a graphical method to depict the main features of one-dimensional probability distributions. Second, we define the "Half-Half plot", a useful tool for analyzing regression data. Furthermore, a test for spherical symmetry in an empirical likelihood framework is presented. For all methods the asymptotic behavior is derived. The good performance of the methods is demonstrated through simulated and real data examples. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:124205923&r=ecm 
By:  Grendar, Marian; Judge, George G. 
Abstract:  Methods, like Maximum Empirical Likelihood (MEL), that operate within the Empirical Estimating Equations (E3) approach to estimation and inference are challenged by the Empty Set Problem (ESP). We propose to return from E3 back to the Estimating Equations, and to use the Maximum Likelihood method. In the discrete case the Maximum Likelihood with Estimating Equations (MLEE) method avoids ESP. In the continuous case, how to make MLEE operational is an open question. Instead, we propose a Patched Empirical Likelihood, and demonstrate that it avoids ESP. The methods enjoy, in general, the same asymptotic properties as MEL. 
Keywords:  maximum likelihood, estimating equations, empirical likelihood 
Date:  2010–01–22 
URL:  http://d.repec.org/n?u=RePEc:cdl:agrebk:1119684&r=ecm 
By:  Kruse, Robinson; Sibbertsen, Philipp 
Abstract:  We study the empirical behaviour of semiparametric log-periodogram estimation for long memory models when the true process exhibits a change in persistence. Simulation results confirm theoretical arguments which suggest that evidence for long memory is likely to be found. A recently proposed test by Sibbertsen and Kruse (2009) is shown to exhibit noticeable power to discriminate between long memory and a structural change in autoregressive parameters. 
Keywords:  Long memory; changing persistence; structural break; semiparametric estimation 
JEL:  C12 C22 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp455&r=ecm 
By:  David Greasley (University of Canterbury) 
Abstract:  The paper discusses a range of modern time series methods that have become popular in the past 20 years and considers their usefulness for cliometrics research, both in theory and via a range of applications. Issues such as spurious regression, unit roots, cointegration, persistence, causality, and structural time series methods, including time-varying parameter models, are introduced, as are the estimation and testing implications they involve. Applications include a discussion of the timing and potential causes of the British Industrial Revolution, income 'convergence', and the long run behaviour of English real wages 1264–1913. Finally, some new and potentially useful developments are discussed, including mildly explosive processes, graphical modelling, and long memory. 
Keywords:  Time series; cointegration; unit roots; persistence; causality; cliometrics; convergence; long memory; graphical modelling; British Industrial Revolution 
JEL:  N33 O47 O56 C22 C32 
Date:  2010–09–04 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/56&r=ecm 
By:  Bratti, M.; Miranda, A. 
Abstract:  In this paper we propose a method to estimate, by maximum simulated likelihood, models in which an endogenous dichotomous treatment affects a count outcome in the presence of either sample selection or endogenous participation. We allow the treatment to have an effect both on the sample selection or participation rule and on the main outcome. Applications of this model are frequent in many fields of economics, such as health, labor, and population economics. We show the performance of the model using data from Kenkel and Terza (2001), who investigate the effect of physician advice on the amount of alcohol consumption. Our estimates suggest that in these data (i) neglecting treatment endogeneity leads to a perversely signed effect of physician advice on drinking intensity, and (ii) neglecting endogenous participation leads to an upward biased estimate of the treatment effect of physician advice on drinking intensity. 
Keywords:  count data; drinking; endogenous participation; maximum simulated likelihood; sample selection; treatment effects 
JEL:  C35 I12 I21 
Date:  2010–07 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:10/19&r=ecm 
By:  Politis, Dimitris N 
Abstract:  The problem of prediction is revisited with a view towards going beyond the typical nonparametric setting and reaching a fully model-free environment for predictive inference, i.e., point predictors and predictive intervals. A basic principle of model-free prediction is laid out, based on the notion of transforming a given setup into one that is easier to work with, namely i.i.d. or Gaussian. As an application, the problem of nonparametric regression is addressed in detail; the model-free predictors are worked out and shown to be applicable under minimal assumptions. Interestingly, model-free prediction in regression is a totally automatic technique that does not necessitate the search for an optimal data transformation before model fitting. The resulting model-free predictive distributions and intervals are compared to their corresponding model-based analogs, and the use of cross-validation is extensively discussed. As an aside, improved prediction intervals in linear regression are also obtained. 
Keywords:  bootstrap, cross-validation, heteroskedasticity, nonparametric estimation, predictive distribution, predictive intervals, regression, smoothing 
Date:  2010–03–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:654028&r=ecm 
By:  Norbert Christopeit (University of Bonn); Michael Massmann (VU University Amsterdam) 
Abstract:  In this paper we consider regression models with forecast feedback. Agents' expectations are formed via the recursive estimation of the parameters in an auxiliary model. The learning scheme employed by the agents belongs to the class of stochastic approximation algorithms whose gain sequence is decreasing to zero. Our focus is on the estimation of the parameters in the resulting actual law of motion. For a special case we show that the ordinary least squares estimator is consistent. 
Keywords:  Adaptive learning; forecast feedback; stochastic approximation; linear regression with stochastic regressors; consistency 
JEL:  C13 C22 D83 D84 
Date:  2010–08–23 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100077&r=ecm 
By:  Berthold R. Haag (HypoVereinsbank); Stefan Hoderlein (Boston College); Sonya Mihaleva (Brown University) 
Abstract:  Homogeneity of degree zero has often been rejected in empirical studies that employ parametric models. This paper proposes a test for homogeneity that does not depend on the correct specification of the functional form of the empirical model. The test statistic we propose is based on kernel regression and extends nonparametric specification tests to systems of equations with weakly dependent data. We discuss a number of practically important issues and further extensions. In particular, we focus on a novel bootstrap version of the test statistic. Moreover, we show that the same test also allows one to assess the validity of functional form assumptions. When we apply the test to British household data, we find homogeneity generally well accepted. In contrast, we reject homogeneity with a standard almost ideal parametric demand system. Using our test for functional form, however, we find that it is precisely this functional form assumption which is rejected. Our findings indicate that the rejections of homogeneity obtained thus far are due to misspecification of the functional form and not to incorrectness of the homogeneity assumption. 
Keywords:  Homogeneity, Nonparametric, Bootstrap, Specification Test, System of Equations 
Date:  2009–09–24 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:749&r=ecm 
By:  Christian R. Proano (IMK at the Hans Boeckler Foundation) 
Abstract:  In this paper a dynamic probit model for recession forecasting in pseudo-real time is set up using a large set of macroeconomic and financial monthly indicators for Germany. Using different initial sets of explanatory variables, alternative dynamic probit specifications are obtained through an automated general-to-specific lag selection procedure; these are then pooled in order to decrease the volatility of the estimated recession probabilities and increase their forecasting accuracy. As shown in the paper, this procedure not only features good in-sample forecast statistics, but also good out-of-sample performance, as pseudo-real time evaluation exercises show. 
Keywords:  Dynamic probit models, out-of-sample forecasting, yield curve, real-time econometrics 
JEL:  C25 C53 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:imk:wpaper:102010&r=ecm 
By:  Agostino Tarsitano; Marianna Falcone (Dipartimento di Economia e Statistica, Università della Calabria) 
Abstract:  In this paper we propose a new method for single imputation, reconstruction, and estimation of non-reported, incorrect, or excluded values in both the target and the auxiliary variables, where the former is on a ratio or interval scale and the latter are heterogeneous in measurement scale. Our technique is a variation of the popular nearest neighbor hot deck imputation (NNHDI), where "nearest" is defined in terms of a global distance obtained as a convex combination of the partial distance matrices computed for the various types of variables. In particular, we address the problem of properly weighting the partial distance matrices in order to reflect their significance, reliability, and statistical adequacy. The performance of several weighting schemes is compared under a variety of settings in coordination with imputation of the least power mean. We demonstrate, through analysis of simulated and actual data sets, the appropriateness of this approach. Our main contribution is to show that mixed data may optimally be combined to allow accurate reconstruction of missing values in the target variable even in the absence of some data in the other fields of the record. 
Keywords:  hotdeck imputation, nearest neighbor, general distance coefficient, least power mean. 
JEL:  C13 C63 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:clb:wpaper:201015&r=ecm 
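The global distance at the heart of NNHDI can be sketched as a convex combination of a range-scaled numeric distance and a categorical mismatch rate, with the nearest complete record acting as donor. A hypothetical minimal version with fixed equal weights (choosing those weights well is precisely the problem the paper studies):

```python
import numpy as np

def nn_hotdeck_impute(target, aux_num, aux_cat, w=(0.5, 0.5)):
    """Fill NaN entries of `target` with the value of the nearest complete
    record ('donor'), where distance is a convex combination of a
    range-scaled numeric distance and a categorical mismatch rate."""
    target = np.asarray(target, dtype=float)
    num = np.asarray(aux_num, dtype=float)
    cat = np.asarray(aux_cat)
    span = num.max(axis=0) - num.min(axis=0)
    d_num = (np.abs(num[:, None, :] - num[None, :, :]) / span).mean(axis=2)
    d_cat = (cat[:, None, :] != cat[None, :, :]).mean(axis=2)
    dist = w[0] * d_num + w[1] * d_cat
    out = target.copy()
    donors = np.flatnonzero(~np.isnan(target))
    for i in np.flatnonzero(np.isnan(target)):
        out[i] = target[donors[np.argmin(dist[i, donors])]]
    return out

# Record 3 is missing its target value but matches record 0 exactly on
# both auxiliary variables, so record 0 is its donor.
aux_num = [[0.0, 0.0], [5.0, 5.0], [10.0, 10.0], [0.0, 0.0]]
aux_cat = [["a"], ["b"], ["c"], ["a"]]
filled = nn_hotdeck_impute([1.0, 2.0, 3.0, np.nan], aux_num, aux_cat)
```

Non-missing entries are left untouched; only the NaN entry receives the donor's value.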
By:  Bar, H.; Lillard, D. 
Abstract:  When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand-side variables. We develop a model-based approach to estimate the extent of heaping in the data and how it affects regression parameter estimates. We use smoking cessation data as a motivating example to describe our approach, but the method more generally facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that already have and potentially could collect event data. 
Date:  2010–07 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:10/20&r=ecm 
By:  Zhaosong Lu; Yong Zhang 
Abstract:  In this paper we consider general rank minimization problems with the rank appearing either in the objective function or in a constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems, in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by our method, when applied to the rank constrained minimization problem, is a stationary point of a nonlinear reformulation of the problem. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1008.5373&r=ecm 
By:  Steffen Andersen; John Fountain; Glenn W. Harrison; E. Elisabet Rutström 
Abstract:  Subjective probabilities play a role in many economic decisions. There is a large theoretical literature on the elicitation of subjective probabilities, and an equally large empirical literature. However, there is a gulf between the two. The theoretical literature proposes a range of procedures that can be used to recover subjective probabilities, but stresses the need to make strong auxiliary assumptions or "calibrating adjustments" to elicited reports in order to recover the latent probability. With some notable exceptions, the empirical literature seems intent on either making those strong assumptions or ignoring the need for calibration. We illustrate how the joint estimation of risk attitudes and subjective probabilities using structural maximum likelihood methods can provide the calibration adjustments that theory calls for. This allows the observer to make inferences about the latent subjective probability, calibrating for virtually any well-specified model of choice under uncertainty. We demonstrate our procedures with experiments in which we elicit subjective probabilities. We calibrate the estimates of subjective beliefs assuming that choices are made consistently with expected utility theory or rank-dependent utility theory. Inferred subjective probabilities are significantly different when calibrated according to either theory, thus showing the importance of undertaking such exercises. Our findings also have implications for the interpretation of probabilities inferred from prediction markets. 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:exc:wpaper:201008&r=ecm 
By:  Nayoung Lee; Geert Ridder; John Strauss 
Abstract:  This paper investigates potential measurement error biases in estimated poverty transition matrices. Transition matrices based on survey expenditure data are compared to transition matrices based on measurement-error-free simulated expenditure. The simulation model uses estimates that correct for measurement error in expenditure. This dynamic model needs error-free initial conditions that cannot be derived from these estimates. [Working Paper No. 270] 
Keywords:  Measurement error, Economic mobility, Transition matrix 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ess:wpaper:id:2796&r=ecm 
By:  Christos Ntantamis (School of Economics and Management, University of Aarhus and CREATES) 
Abstract:  The problem of modeling housing prices has attracted considerable attention due to its importance in terms of households' wealth and in terms of public revenues through taxation. One of the main concerns raised in both the theoretical and the empirical literature is the existence of spatial association between prices, which can be attributed, among other things, to unobserved neighborhood effects. In this paper, a model of spatial association for housing markets is introduced. Spatial association is treated in the context of spatial heterogeneity, which is explicitly modeled in both a global and a local framework. The global form of heterogeneity is incorporated in a Hedonic Price Index model that encompasses a nonlinear function of the geographical coordinates of each dwelling. The local form of heterogeneity is subsequently modeled as a Finite Mixture Model for the residuals of the Hedonic Index. The identified mixtures are considered as the different spatial housing submarkets. The main advantage of the approach is that submarkets are recovered from the housing price data rather than imposed by administrative or geographical criteria. The Finite Mixture Model is estimated using the Figueiredo and Jain (2002) approach, due to its ability to endogenously identify the number of submarkets and its computational efficiency, which permits the consideration of large datasets. The different submarkets are subsequently identified using the Maximum Posterior Mode algorithm. The overall ability of the model to identify spatial heterogeneity is validated through a set of simulations. The model is applied to Los Angeles county housing price data for the year 2002. The results suggest that the statistically identified number of submarkets, after taking into account the dwellings' structural characteristics, is considerably smaller than the number imposed either by geographical or administrative boundaries. 
Keywords:  Hedonic Models, Finite Mixture Model, Spatial Heterogeneity, Housing Submarkets 
JEL:  C13 C21 R0 
Date:  2010–08–18 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201053&r=ecm 
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne, CNRS UMR 8174, Université Panthéon-Sorbonne - Paris I; EEP-PSE - Ecole d'Économie de Paris / Paris School of Economics) 
Abstract:  This chapter recalls the main tools useful to compute the Value at Risk associated with an m-dimensional portfolio. Then, the limitations of these tools are explained, once non-stationarities are observed in time series. Indeed, specific behaviours of financial assets, like volatility, jumps, explosions, and pseudo-seasonalities, provoke non-stationarities which affect the distribution function of the portfolio. Thus, a new way of computing VaR is proposed which allows the potential non-invariance of the m-dimensional portfolio distribution function to be avoided. 
Keywords:  Non-stationarity; Value-at-Risk; Dynamic copula; Meta-distribution; POT method 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00511995_v1&r=ecm 
By:  Eduardo Fé 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1016&r=ecm 
By:  Don Harding 
Abstract:  To match the NBER business cycle features it is necessary to employ Generalised dynamic categorical (GDC) models that impose certain phase restrictions and permit multiple indexes. Theory suggests additional shape restrictions in the form of monotonicity and boundedness of certain transition probabilities. Maximum likelihood and constraint weighted bootstrap estimators are developed to impose these restrictions. In the application, these estimators generate improved estimates of how the probability of recession varies with the yield spread. 
JEL:  C22 C53 E32 E37 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:acb:camaaa:201025&r=ecm 
By:  Alfarano, Simone; Lux, Thomas 
Abstract:  Power law behavior has been recognized to be a pervasive feature of many phenomena in natural and social sciences. While immense research efforts have been devoted to the analysis of behavioral mechanisms responsible for the ubiquity of power-law scaling, the strong theoretical foundation of power laws as a very general type of limiting behavior of large realizations of stochastic processes is less well known. In this chapter, we briefly present some of the key results of extreme value theory, which provide a statistical justification for the emergence of power laws as limiting behavior for extreme fluctuations. The remarkable generality of the theory allows us to abstract from the details of the system under investigation, and therefore allows its application in many diverse fields. Moreover, this theory offers new powerful techniques for the estimation of the Pareto index, detailed in the second part of this chapter. 
Keywords:  Extreme Value Theory; Power Laws; Tail index 
JEL:  C13 C14 C01 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:24718&r=ecm 