nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒01‒05
27 papers chosen by
Sune Karlsson
Örebro University

  1. Estimation and Inference by the Method of Projection Minimum Distance By Òscar Jordà; Sharon Kozicki
  2. A Monte Carlo Study for Pure and Pretest Estimators of a Panel Data Model with Spatially Autocorrelated Disturbances By Badi H. Baltagi; Peter Egger; Michael Pfaffermayr
  3. Computationally feasible estimation of the covariance structure in Generalized Linear Mixed Models (GLMM) By Carling, Kenneth; Alam, Moudud
  4. Are there Structural Breaks in Realized Volatility? By Chun Liu; John M Maheu
  5. Least squares volatility change point estimation for partially observed diffusion processes By Alessandro De Gregorio; Stefano Iacus
  6. Is neglected heterogeneity really an issue in logit and probit models? A simulation exercise for binary and fractional data By Joaquim J.S. Ramalho; Esmeralda A. Ramalho
  7. A Metropolis-in-Gibbs Sampler for Estimating Equity Market Factors By Sarantis Tsiaplias
  8. Estimating First-Price Auctions with an Unknown Number of Bidders: A Misclassification Approach By Yingyao Hu; Matthew Shum
  9. A Comparison of Methods for Forecasting Demand for Slow Moving Car Parts By Ralph D. Snyder; Adrian Beaumont
  10. Loss distribution estimation, external data and model averaging By Ethan Cohen-Cole; Todd Prono
  11. Allowing the Data to Speak Freely: The Macroeconometrics of the Cointegrated Vector Autoregression By Kevin D. Hoover; Katarina Juselius; Søren Johansen
  12. A Simple Representation of the Bera-Jarque-Lee Test for Probit Models By Joachim Wilde
  13. Structural breaks in point processes: With an application to reporting delays for trades on the New York stock exchange By Andersson, Jonas; Moberg, Jan-Magnus
  14. The Use of Encompassing Tests for Forecast Combinations By Turgut Kisinbay
  15. Identifying the Returns to Lying When the Truth is Unobserved By Yingyao Hu; Arthur Lewbel
  16. Solving Linear Rational Expectations Models with Lagged Expectations Quickly and Easily By Alexander Meyer-Gohde
  17. Cointegration Vector Estimation by DOLS for a Three-Dimensional Panel By Luis Fernando Melo; John Jairo León; Dagoberto Saboya
  18. Testing Hypotheses in an I(2) Model with Applications to the Persistent Long Swings in the Dmk/$ Rate By Søren Johansen; Katarina Juselius; Roman Frydman; Michael Goldberg
  19. GIS and geographically weighted regression in stated preferences analysis of the externalities produced by linear infrastructures By Giaccaria Sergio; Frontuto Vito
  20. Information combination and forecast (st)ability. Evidence from vintages of time-series data By Carlo Altavilla; Matteo Ciccarelli
  21. Sample Kurtosis, GARCH-t and the Degrees of Freedom Issue By Maria S. Heracleous
  22. Rényi information for ergodic diffusion processes By Alessandro De Gregorio; Stefano Iacus
  23. Bayesian Model Averaging in the Context of Spatial Hedonic Pricing: An Application to Farmland Values By Geerte Cotteleer; Tracy Stobbe; G. Cornelis van Kooten
  24. An Economic Evaluation of Empirical Exchange Rate Models By Della Corte, Pasquale; Sarno, Lucio; Tsiakas, Ilias
  25. Conceptual Frameworks and Experimental Design in Simultaneous Equations By C.L. Skeels
  26. Nonparametric analysis of intergenerational income mobility with application to the United States By Debopam Bhattacharya; Bhashkar Mazumder
  27. Set Identified Linear Models By BONTEMPS, Christian; MAGNAC, Thierry; MAURIN, Eric

  1. By: Òscar Jordà; Sharon Kozicki
    Abstract: A covariance-stationary vector of variables has a Wold representation whose coefficients can be semi-parametrically estimated by local projections (Jordà, 2005). Substituting the Wold representations for variables in model expressions generates restrictions that can be used by the method of minimum distance to estimate model parameters. We call this estimator projection minimum distance (PMD) and show that its parameter estimates are consistent and asymptotically normal. In many cases, PMD is asymptotically equivalent to maximum likelihood estimation (MLE) and nests GMM as a special case. In fact, models whose ML estimation would require numerical routines (such as VARMA models) can often be estimated by simple least-squares routines and almost as efficiently by PMD. Because PMD imposes no constraints on the dynamics of the system, it is consistent in many situations where alternative estimators would be inconsistent. We provide several Monte Carlo experiments and an empirical application in support of the new techniques introduced.
    Keywords: Econometric and statistical methods
    JEL: C32 E47 C53
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:07-56&r=ecm
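    The first stage of PMD estimates impulse response coefficients by local projections, i.e., one OLS regression per horizon. A minimal Python sketch of that first stage only (lag length, variable layout and function name are illustrative assumptions, not the authors' code):

      import numpy as np

      def local_projection_irf(y, x, horizons=12, lags=4):
          """For each horizon h, regress y_{t+h} on x_t plus lagged controls
          (Jorda, 2005); the coefficient on x_t traces out the response."""
          T = len(y)
          irf = np.zeros(horizons)
          for h in range(horizons):
              X = np.column_stack(
                  [np.ones(T - h - lags), x[lags:T - h]]
                  + [y[lags - j:T - h - j] for j in range(1, lags + 1)]
                  + [x[lags - j:T - h - j] for j in range(1, lags + 1)])
              beta, *_ = np.linalg.lstsq(X, y[lags + h:T], rcond=None)
              irf[h] = beta[1]  # response of y at horizon h to x_t
          return irf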
  2. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Peter Egger (University of Munich and CESifo, Poschingerstr. 5, 81679 Munich, Germany); Michael Pfaffermayr (Department of Economics, University of Innsbruck, Universitaetsstrasse 15, 6020 Innsbruck, Austria; Austrian Institute of Economic Research, and CESifo)
    Abstract: This paper examines the consequences of model misspecification using a panel data model with spatially autocorrelated disturbances. The performance of several maximum likelihood estimators assuming different specifications for this model is compared using Monte Carlo experiments. These include (i) MLE of a random effects model that ignores the spatial correlation; (ii) MLE described in Anselin (1988), which assumes that the individual effects are not spatially autocorrelated; (iii) MLE described in Kapoor et al. (2006), which assumes that both the individual effects and the remainder error are governed by the same spatial autocorrelation; (iv) MLE described in Baltagi et al. (2006), which allows the spatial correlation parameter for the individual effects to be different from that of the remainder error term. The latter model encompasses the other models and allows the researcher to test these specifications as restrictions on the general model using LM and LR tests. In fact, based on these tests, we suggest a pretest estimator which is shown to perform well in Monte Carlo experiments, ranking a close second to the true MLE in mean squared error performance.
    Keywords: Panel data; Spatially autocorrelated residuals; Pretest estimator; Maximum-likelihood estimation
    JEL: C12 C23
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:98&r=ecm
  3. By: Carling, Kenneth (Department of Business, Economics, Statistics and Informatics); Alam, Moudud (Department of Business, Economics, Statistics and Informatics)
    Abstract: In this paper we discuss how a regression model with a non-continuous response variable that allows for dependency between observations should be estimated when observations are clustered and there are repeated measurements on the subjects. The cluster sizes are assumed to be large. We find that the conventional estimation technique suggested by the literature on Generalized Linear Mixed Models (GLMM) is slow and often fails due to non-convergence and lack of memory on standard PCs. We suggest estimating the random effects as fixed effects by GLM and deriving the covariance matrix from these estimates. A simulation study shows that our proposal is feasible in terms of mean squared error and computation time. We recommend that our proposal be implemented in software for GLMM techniques so that the estimation procedure can switch between the conventional technique and our proposal depending on the size of the clusters.
    Keywords: Monte-Carlo simulations; large sample; interdependence; cluster error
    JEL: C13 C15 C25 C63
    Date: 2007–09–10
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2007_014&r=ecm
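    A minimal sketch of the proposal as summarized above: treat the cluster (random) effects as fixed effects in an ordinary GLM fit, then recover the random-effect variance from the spread of the estimated effects net of their average sampling variance. The logit link and all names here are illustrative assumptions, not the authors' implementation:

      import numpy as np
      import statsmodels.api as sm

      def re_variance_via_glm(y, X, cluster):
          # One dummy per cluster; X should contain no constant, since the
          # cluster dummies absorb the intercept.
          clusters = np.unique(cluster)
          D = (cluster[:, None] == clusters[None, :]).astype(float)
          fit = sm.GLM(y, np.hstack([X, D]),
                       family=sm.families.Binomial()).fit()
          k = X.shape[1]
          effects = fit.params[k:]                            # estimated cluster effects
          noise = fit.cov_params()[k:, k:].diagonal().mean()  # avg sampling variance
          return max(effects.var(ddof=1) - noise, 0.0)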
  4. By: Chun Liu; John M Maheu
    Abstract: Constructed from high-frequency data, realized volatility (RV) provides an efficient estimate of the unobserved volatility of financial markets. This paper uses a Bayesian approach to investigate the evidence for structural breaks in reduced form time-series models of RV. We focus on the popular heterogeneous autoregressive (HAR) models of the logarithm of realized volatility. Using Monte Carlo simulations we demonstrate that our estimation approach is effective in identifying and dating structural breaks. Applied to daily S&P 500 data from 1993-2004, we find strong evidence of a structural break in early 1997. The main effect of the break is a reduction in the variance of log-volatility. The evidence of a break is robust to different models including a GARCH specification for the conditional variance of log(RV).
    Keywords: realized volatility, change point, marginal likelihood, Gibbs sampling, GARCH
    JEL: C22 C11 G10
    Date: 2007–12–18
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-304&r=ecm
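    For concreteness, the HAR specification underlying the break analysis regresses log RV on daily, weekly and monthly averages of its own lags. A minimal OLS version, without the paper's Bayesian break machinery (layout is an illustrative assumption):

      import numpy as np

      def har_fit(log_rv):
          """Fit log RV_t on a constant and the 1-, 5- and 22-day averages
          of lagged log RV by OLS."""
          T = len(log_rv)
          daily   = log_rv[21:T - 1]
          weekly  = np.array([log_rv[t - 5:t].mean() for t in range(22, T)])
          monthly = np.array([log_rv[t - 22:t].mean() for t in range(22, T)])
          X = np.column_stack([np.ones(T - 22), daily, weekly, monthly])
          beta, *_ = np.linalg.lstsq(X, log_rv[22:], rcond=None)
          return beta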
  5. By: Alessandro De Gregorio (Department of Economics, Business and Statistics, Università di Milano, Italy); Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT)
    Abstract: A one-dimensional diffusion process X={X_t, 0 <= t <= T}, with drift b(x) and diffusion coefficient s(theta, x)=sqrt(theta) s(x), known up to theta>0, is supposed to switch volatility regime at some point t* in (0,T). On the basis of discrete-time observations from X, the problem is that of estimating the instant of change in the volatility structure t*, as well as the two values of theta, say theta_1 and theta_2, before and after the change point. It is assumed that sampling occurs at regularly spaced time intervals of length Delta_n with n*Delta_n=T. We take a least squares approach to this statistical problem. Consistency, rates of convergence and distributional results for the estimators are presented under a high-frequency scheme. We also study the case of a diffusion process with unknown drift and unknown but constant volatility.
    Keywords: discrete observations, diffusion process, change point problem, volatility regime switch, nonparametric estimator
    Date: 2007–09–18
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1063&r=ecm
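    The least squares idea is simple to state: under the model, normalized squared increments estimate theta, so the change point is the split minimizing the within-segment sum of squares. A rough sketch under the simplifying assumption of negligible drift (function and variable names are hypothetical):

      import numpy as np

      def ls_change_point(x, delta, s=lambda v: 1.0):
          # z_t = (X_{t+1} - X_t)^2 / (Delta_n * s(X_t)^2) estimates theta.
          z = np.diff(x) ** 2 / (delta * np.vectorize(s)(x[:-1]) ** 2)
          n = len(z)
          best_k, best_ss = None, np.inf
          for k in range(2, n - 1):
              ss = ((z[:k] - z[:k].mean()) ** 2).sum() \
                   + ((z[k:] - z[k:].mean()) ** 2).sum()
              if ss < best_ss:
                  best_k, best_ss = k, ss
          return best_k, z[:best_k].mean(), z[best_k:].mean()  # t*, theta_1, theta_2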
  6. By: Joaquim J.S. Ramalho (Universidade de Évora – Departamento de Economia); Esmeralda A. Ramalho (Universidade de Évora – Departamento de Economia)
    Abstract: In this paper we examine by simulation whether or not the omission of orthogonal relevant variables is really an issue in logit and probit models. We find that unobserved heterogeneity: (i) attenuates the regression coefficients; (ii) hardly affects the logit estimation of partial effects, while in the probit case there may be some bias in the estimation of those quantities for some forms of heterogeneity; (iii) is almost innocuous in the prediction of outcomes, particularly in logit models; and (iv) does not affect the size of Wald tests for the significance of observed regressors but seriously reduces their power.
    Keywords: Logit, probit, neglected heterogeneity, partial effects
    JEL: C12 C13 C15 C25
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:cfe:wpcefa:2007_08&r=ecm
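    Finding (i), attenuation with little damage to partial effects, is easy to reproduce. A small Monte Carlo sketch in Python; the design and parameter values are our illustrative assumptions, not the authors' experiments:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 20000
      x = rng.normal(size=n)
      u = rng.normal(size=n)              # neglected heterogeneity, orthogonal to x
      y = rng.binomial(1, 1 / (1 + np.exp(-(x + u))))   # true coefficient on x is 1

      fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
      b = fit.params[1]                   # attenuated towards zero
      p = fit.predict()
      ape = (p * (1 - p) * b).mean()      # average partial effect, much less affected
      print(f"coefficient: {b:.3f} (true 1.0), APE: {ape:.3f}")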
  7. By: Sarantis Tsiaplias (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne)
    Abstract: A model incorporating common Markovian regimes and GARCH residuals in a persistent factor environment is considered. Given the intractable and approximate nature of the likelihood function, a Metropolis-in-Gibbs sampler with Bayesian features is constructed for estimation purposes. The common factor drawing procedure is effectively an exact derivation of the Kalman filter with a Markovian regime component and GARCH innovations. To accelerate the drawing procedure, approximations to the conditional density of the common component are considered. The model is applied to equity data for 18 developed markets to derive global, European, and country-specific equity market factors.
    Keywords: Common factors, Kalman filter, Markov switching, Monte Carlo, GARCH, Equities
    JEL: C32 C51
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:iae:iaewps:wp2007n18&r=ecm
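    The generic scheme behind such a sampler: blocks with tractable conditionals are drawn exactly (Gibbs), while the intractable block is updated by a random-walk Metropolis step. A bare-bones sketch of that generic scheme only; the paper's actual factor and regime draws are far more involved, and all names here are hypothetical:

      import numpy as np

      def metropolis_in_gibbs(log_post, draw_block1, theta1, theta2,
                              n_iter=5000, step=0.1, seed=0):
          rng = np.random.default_rng(seed)
          draws = []
          for _ in range(n_iter):
              theta1 = draw_block1(theta2)            # exact conditional draw
              prop = theta2 + step * rng.normal()     # random-walk proposal
              if np.log(rng.uniform()) < log_post(theta1, prop) - log_post(theta1, theta2):
                  theta2 = prop                       # Metropolis accept
              draws.append((theta1, theta2))
          return draws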
  8. By: Yingyao Hu; Matthew Shum
    Abstract: In this paper, we consider nonparametric identification and estimation of first-price auction models when N*, the number of potential bidders, is unknown to the researcher, but observed by bidders. Exploiting results from the recent econometric literature on models with misclassification error, we develop a nonparametric procedure for recovering the distribution of bids conditional on the unknown N*. Monte Carlo results illustrate that the procedure works well in practice. We present illustrative evidence from a dataset of procurement auctions, which shows that accounting for the unobservability of N* can lead to economically meaningful differences in the estimates of bidders' profit margins.
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:541&r=ecm
  9. By: Ralph D. Snyder; Adrian Beaumont
    Abstract: This paper focuses on non-stationary time series formed from small non-negative integer values, which may contain many zeros and may be over-dispersed. It describes a study undertaken to compare various suitable adaptations of the simple exponential smoothing method of forecasting on a database of demand series for slow-moving car parts. The methods considered include simple exponential smoothing with Poisson measurements, a finite sample version of simple exponential smoothing with negative binomial measurements, and the Croston method of forecasting. In the case of the Croston method, a maximum likelihood approach to estimating key quantities, such as the smoothing parameter, is proposed for the first time. The results from the study indicate that the Croston method does not forecast, on average, as well as the other two methods. It is also confirmed that a common fixed smoothing constant across all the car parts works better than maximum likelihood approaches.
    Keywords: Count time series; forecasting; exponential smoothing; Poisson distribution; negative binomial distribution; Croston method.
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-15&r=ecm
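    Of the three methods compared, Croston's is the least standard, so a minimal version may help fix ideas: smooth the nonzero demand sizes and the intervals between them separately, and forecast their ratio. Parameter names below are ours:

      import numpy as np

      def croston(demand, alpha=0.1):
          z = q = None      # smoothed demand size and inter-demand interval
          psi = 0           # periods since the last positive demand
          for d in demand:
              psi += 1
              if d > 0:
                  z = d if z is None else z + alpha * (d - z)
                  q = psi if q is None else q + alpha * (psi - q)
                  psi = 0
          return np.nan if z is None else z / q   # demand-per-period forecast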
  10. By: Ethan Cohen-Cole; Todd Prono
    Abstract: This paper discusses a proposed method for estimating a loss distribution using information from a combination of internally derived data and data from external sources. The relevant context for this analysis is the estimation of operational loss distributions used in the calculation of capital adequacy. We present a robust, easy-to-implement approach that draws on Bayesian inferential methods. The principal intuition behind the method is to let the data themselves determine how they should be incorporated into the loss distribution. This approach avoids the pitfalls of managerial choice on data weighting and cut-off selection and allows for the estimation of a single loss distribution.
    Keywords: Risk
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedbqu:qau07-8&r=ecm
  11. By: Kevin D. Hoover (Duke University); Katarina Juselius (Department of Economics, University of Copenhagen); Søren Johansen (Department of Economics, University of Copenhagen)
    Abstract: This paper explicates the key ideas behind the cointegrated vector autoregression (CVAR) approach. The CVAR approach is related to Haavelmo’s famous “Probability Approach in Econometrics” (1944). It insists on careful stochastic specification as a necessary groundwork for econometric inference and the testing of economic theories. In time-series data, the probability approach requires careful specification of the integration and cointegration properties of variables in systems of equations. The relationships between the CVAR approach and wider methodological issues, and between it and related approaches (e.g., the LSE approach), are explored. The specific-to-general strategy of widening the scope of econometric models to identify stochastic trends and cointegrating relations and to nest theoretical economic models is illustrated with the example of purchasing-power parity.
    Keywords: cointegrated VAR; stochastic trends; Purchasing Power Parity
    JEL: B41 C32 C51
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:0735&r=ecm
  12. By: Joachim Wilde
    Abstract: The inference in probit models relies on the assumption of normality. However, tests of this assumption are not implemented in standard econometric software. Therefore, the paper presents a simple representation of the Bera-Jarque-Lee test that does not require any matrix algebra. Furthermore, the representation is used to compare the Bera-Jarque-Lee test with the RESET-type test proposed by Papke and Wooldridge (1996).
    Keywords: probit model, Lagrange multiplier test, normality assumption, artificial regression
    JEL: C25
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:iwh:dispap:-07&r=ecm
  13. By: Andersson, Jonas (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Moberg, Jan-Magnus (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration)
    Abstract: In this paper some methods to determine the reporting delays for trades on the New York stock exchange are proposed and compared. The most successful method is based on a simple model of the quote revision process and a bootstrap procedure. In contrast to previous methods it accounts for autocorrelation and for variation originating both from the quote process itself and from estimation errors. This is obtained by the use of prediction intervals. The ability of the methods to determine when a trade has occurred is studied and compared with a previous method by Vergote (2005). This is done by means of a simulation study. An extensive empirical study shows the applicability of the method and that more reasonable results are obtained when accounting for autocorrelation and estimation uncertainty.
    Keywords: Quote revisions; bootstrap procedure; simulation
    JEL: C15
    Date: 2007–12–19
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2007_028&r=ecm
  14. By: Turgut Kisinbay
    Abstract: The paper proposes an algorithm that uses forecast encompassing tests for combining forecasts. The algorithm excludes a forecast from the combination if it is encompassed by another forecast. To assess the usefulness of this approach, an extensive empirical analysis is undertaken using a U.S. macroeconomic data set. The results are encouraging: the algorithm's forecasts outperform benchmark model forecasts, in a mean squared error (MSE) sense, in a majority of cases.
    Keywords: Forecasting models, Economic forecasting
    Date: 2007–11–21
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:07/264&r=ecm
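    The building block is the standard encompassing regression: regress one forecast's errors on the difference between a competing forecast and its own; an insignificant slope means the competitor adds nothing. A schematic version of such an algorithm (the paper's ordering and test details may well differ; names are ours):

      import numpy as np
      import statsmodels.api as sm

      def encompasses(y, f_a, f_b, alpha=0.05):
          """True if f_a encompasses f_b, i.e. f_b adds no information."""
          fit = sm.OLS(y - f_a, sm.add_constant(f_b - f_a)).fit()
          return fit.pvalues[1] > alpha

      def combine(y, forecasts, alpha=0.05):
          """Drop any forecast encompassed by another; average the survivors."""
          m = len(forecasts)
          keep = [j for j in range(m)
                  if not any(i != j and encompasses(y, forecasts[i], forecasts[j], alpha)
                             for i in range(m))]
          keep = keep or list(range(m))   # fall back to all if everything is dropped
          return np.mean([forecasts[j] for j in keep], axis=0)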
  15. By: Yingyao Hu; Arthur Lewbel
    Abstract: Consider an observed binary regressor D and an unobserved binary variable D*, both of which affect some other variable Y. This paper considers nonparametric identification and estimation of the effect of D on Y, conditioning on D* = 0. For example, suppose Y is a person’s wage, the unobserved D* indicates if the person has been to college, and the observed D indicates whether the individual claims to have been to college. This paper then identifies and estimates the difference in average wages between those who falsely claim college experience versus those who tell the truth about not having college. We estimate this average returns to lying to be about 7% to 20%. Nonparametric identification without observing D* is obtained either by observing a variable V that is roughly analogous to an instrument for ordinary measurement error, or by imposing restrictions on model error moments.
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:540&r=ecm
  16. By: Alexander Meyer-Gohde
    Abstract: A solution method is derived in this paper for solving a system of linear rational expectations equations with lagged expectations (e.g., models incorporating sticky information) using the method of undetermined coefficients for the infinite MA representation. The method applies to the infinite MA representation a combination of a Generalized Schur decomposition, familiar elsewhere in the literature, and, when lagged expectations are present, a simple system of linear equations. Execution is faster, applicability more general, and use more straightforward than with existing algorithms. Current methods of truncating lagged expectations are shown not to be generally innocuous, and such methods are rendered obsolete by the tremendous gains in computational efficiency of the method presented here, which allows for a solution to floating-point accuracy in a fraction of the time required by standard methods. The associated computational application of the method provides impulse responses to anticipated and unanticipated innovations, simulations, and frequency-domain and simulated moments.
    Keywords: Lagged expectations; linear rational expectations models; block tridiagonal; Generalized Schur Form; QZ decomposition; LAPACK
    JEL: C32 C63
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-069&r=ecm
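    The Generalized Schur (QZ) step the method builds on is available off the shelf. A toy Python illustration of ordering stable generalized eigenvalues first and counting them, as in Blanchard-Kahn style checks; the 2x2 pencil is made up, and the paper's lagged-expectations extension is not shown:

      import numpy as np
      from scipy.linalg import ordqz

      # Pencil (A, B) for a system B*E_t[y_{t+1}] = A*y_t.
      A = np.array([[0.9, 0.1],
                    [0.0, 1.2]])
      B = np.eye(2)

      AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='iuc')  # stable roots first
      eig = alpha / beta
      n_stable = int((np.abs(eig) < 1).sum())
      # A unique stable solution requires n_stable to match the number of
      # predetermined variables.
      print(eig, n_stable)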
  17. By: Luis Fernando Melo; John Jairo León; Dagoberto Saboya
    Abstract: This paper extends the asymptotic results of the dynamic ordinary least squares (DOLS) cointegration vector estimator of Mark and Sul (2003) to a three-dimensional panel. We use a balanced panel with two cross-sectional dimensions, of sizes N and M, observed over T time periods. The cointegration vector is homogeneous across individuals, but we allow for individual heterogeneity through different short-run dynamics, individual-specific fixed effects and individual-specific time trends. Both individual effects are considered for the first two dimensions. We also model some degree of cross-sectional dependence using time-specific effects. This paper was motivated by the three-dimensional panel cointegration analysis used to estimate total factor productivity for Colombian regions and sectors during 1975-2000 by Iregui, Melo and Ramírez (2007). They used the methodology proposed by Marrocu, Paci and Pala (2000); however, hypothesis testing is not valid under this technique. The methodology we propose here allows us to estimate the long-run relationship and to construct asymptotically valid test statistics in the 3D-panel context.
    Date: 2007–12–17
    URL: http://d.repec.org/n?u=RePEc:col:000094:004391&r=ecm
  18. By: Søren Johansen (Department of Economics, University of Copenhagen); Katarina Juselius (Department of Economics, University of Copenhagen); Roman Frydman (New York University); Michael Goldberg (University of New Hampshire)
    Abstract: This paper discusses a number of likelihood ratio tests on long-run relations and common trends in the I(2) model and provides new results on the test of overidentifying restrictions on β’x_t and the asymptotic variance for the stochastic trend parameters, α⊥1. How to specify deterministic components in the I(2) model is discussed at some length. Model specification and tests are illustrated with an empirical analysis of long and persistent swings in the foreign exchange market between Germany and the United States. The data analyzed consist of nominal exchange rates, relative prices, the US inflation rate, two long-term interest rates and two short-term interest rates over the 1975-1999 period. One important aim of the paper is to demonstrate that by structuring the data with the help of the I(2) model one can achieve a better understanding of the empirical regularities underlying the persistent swings in nominal exchange rates, typical of periods of floating exchange rates.
    Keywords: PPP puzzle; forward premium puzzle; cointegrated VAR; likelihood inference
    JEL: C32 C52 F41
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:0734&r=ecm
  19. By: Giaccaria Sergio (University of Turin); Frontuto Vito (University of Turin)
    Abstract: The paper uses contingent valuation (CV) to investigate the externalities from linear infrastructures, with a particular concern for their dependence on characteristics of the local context within which they are perceived. We employ Geographical Information Systems (GIS) and a spatial econometric technique, Geographically Weighted Regression (GWR), integrated into a dichotomous choice CV, in order to improve both the sampling design and the econometric analysis of a CV survey. These tools are helpful when local factors with important spatial variability may have a crucial explanatory role in the structure of individual preferences. Geographically Weighted Regression is introduced, alongside GIS, as a way to enhance the flexibility of a stated preference analysis, by fitting local changes and highlighting spatial non-stationarity in the relationships between estimated WTP and explanatory variables. This local approach is compared with a standard double-bounded contingent valuation through an empirical study of high voltage transmission lines. The GWR methodology has not been applied before in environmental economics. The paper shows its usefulness in testing the consistency of the standard approach by monitoring the spatial patterns in the distribution of WTP and the spatial stability of the estimated parameters.
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:uto:dipeco:200710&r=ecm
  20. By: Carlo Altavilla (University of Naples “Parthenope”, Via Medina, 40 - 80133 Naples, Italy.); Matteo Ciccarelli (Corresponding author: European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: This paper explores the role of model and vintage combination in forecasting, with a novel approach that exploits the information contained in the revision history of a given variable. We analyse the forecast performance of eleven widely used models to predict inflation and GDP growth, in the three dimensions of accuracy, uncertainty and stability, using the real-time data set for macroeconomists developed at the Federal Reserve Bank of Philadelphia. Instead of following the common practice of investigating only the relationship between first available and fully revised data, we analyse the entire revision history for each variable and extract a signal from the entire distribution of vintages of a given variable to improve forecast accuracy and precision. The novelty of our study lies in the interpretation of the vintages of a real-time data base as related realizations or units of a panel data set. The results suggest that imposing appropriate weights on competing forecasting models of inflation and output growth, reflecting the relative ability each model has over different sub-sample periods, substantially increases forecast performance. More interestingly, our results indicate that augmenting the information set with a signal extracted from all available vintages of a time series consistently leads to a substantial improvement in forecast accuracy, precision and stability.
    Keywords: Real-time data, forecast combination, data and model uncertainty
    JEL: C32 C33 C53
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20070846&r=ecm
  21. By: Maria S. Heracleous
    Abstract: Econometric modeling based on the Student’s t distribution introduces an additional parameter: the degrees of freedom. In this paper we use a simulation study to investigate the ability of (i) the GARCH-t model (Bollerslev, 1987) to estimate the true degrees-of-freedom parameter and (ii) the sample kurtosis coefficient to accurately determine the implied degrees of freedom. Simulation results reveal that the GARCH-t model and the sample kurtosis coefficient provide biased and inconsistent estimates of the degrees-of-freedom parameter. Moreover, by varying σ², we find that only the constant term in the conditional variance equation is affected, while the other parameters remain unaffected.
    Keywords: Student’s t distribution, Degrees of freedom, Kurtosis coefficient, GARCH-t model
    JEL: C15 C16 C22
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2007/60&r=ecm
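    The kurtosis-based estimator studied in (ii) simply inverts the Student-t kurtosis formula k = 3 + 6/(v - 4), valid for v > 4. A one-function sketch:

      import numpy as np

      def implied_df(sample_kurtosis):
          """Degrees of freedom implied by a (raw) kurtosis coefficient k > 3."""
          k = np.asarray(sample_kurtosis, dtype=float)
          return np.where(k > 3, 4 + 6 / (k - 3), np.inf)

      print(implied_df(5.0))   # kurtosis 5 implies v = 4 + 6/2 = 7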
  22. By: Alessandro De Gregorio (Università di Milano, Italy); Stefano Iacus (Università di Milano, Italy)
    Abstract: In this paper we derive explicit formulas for the Rényi information, Shannon entropy and Song measure for the invariant density of one-dimensional ergodic diffusion processes. In particular, the diffusion models considered include the hyperbolic, the generalized inverse Gaussian, the Pearson, the exponential family and a new class of skew-t diffusions.
    Keywords: Rényi entropy, Shannon entropy, Song measure, ergodic diffusion processes, invariant law
    Date: 2007–11–12
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1064&r=ecm
  23. By: Geerte Cotteleer; Tracy Stobbe; G. Cornelis van Kooten
    Abstract: In 1973, British Columbia created an Agricultural Land Reserve to protect farmland from development. In this study, we employ GIS-based hedonic pricing models of farmland values to examine factors that affect farmland prices. We take spatial lag and error dependence into explicit account. However, the use of spatial econometric techniques in hedonic pricing models is problematic because there is uncertainty with respect to the choice of the explanatory variables and the spatial weighting matrix. Bayesian model averaging techniques in combination with Markov Chain Monte Carlo Model Composition are used to allow for both types of model uncertainty.
    Keywords: Bayesian model averaging, Markov Chain Monte Carlo Model Composition, spatial econometrics, hedonic pricing, GIS, urban-rural fringe, farmland fragmentation
    JEL: R11 R15 C50 R14
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:rep:wpaper:2007-07&r=ecm
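    When the model space is small, the flavor of Bayesian model averaging can be conveyed without MC3 by enumerating regressor subsets and weighting each model by exp(-BIC/2), a common approximation to posterior model probabilities; MC3 replaces the enumeration with a Markov chain over models (and, in the paper, over spatial weight matrices as well). An illustrative sketch with hypothetical names:

      import itertools
      import numpy as np
      import statsmodels.api as sm

      def bma_weights_bic(y, X, names):
          models, bics = [], []
          k = X.shape[1]
          for r in range(1, k + 1):
              for subset in itertools.combinations(range(k), r):
                  fit = sm.OLS(y, sm.add_constant(X[:, list(subset)])).fit()
                  models.append(subset)
                  bics.append(fit.bic)
          w = np.exp(-(np.array(bics) - min(bics)) / 2)  # BIC model weights
          w /= w.sum()
          return {tuple(names[i] for i in s): wi for s, wi in zip(models, w)}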
  24. By: Della Corte, Pasquale; Sarno, Lucio; Tsiakas, Ilias
    Abstract: This paper provides a comprehensive evaluation of the short-horizon predictive ability of economic fundamentals and forward premia on monthly exchange rate returns in a framework that allows for volatility timing. We implement Bayesian methods for estimation and ranking of a set of empirical exchange rate models, and construct combined forecasts based on Bayesian Model Averaging. More importantly, we assess the economic value of the in-sample and out-of-sample forecasting power of the empirical models, and find two key results: (i) a risk averse investor will pay a high performance fee to switch from a dynamic portfolio strategy based on the random walk model to one which conditions on the forward premium with stochastic volatility innovations; and (ii) strategies based on combined forecasts yield large economic gains over the random walk benchmark. These two results are robust to reasonably high transaction costs.
    Keywords: Bayesian MCMC Estimation; Bayesian Model Averaging; Economic Value; Exchange Rates; Forward Premium; Monetary Fundamentals; Volatility
    JEL: F31 F37 G11
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6598&r=ecm
  25. By: C.L. Skeels
    Abstract: Using examples drawn from two important papers in the recent literature on weak instruments, we demonstrate how observed experimental outcomes can be profoundly influenced by the different conceptual frameworks underlying two experimental designs commonly employed when simulating simultaneous equations.
    Keywords: Simultaneous equations; Experimental design; Simulation experiment
    JEL: C12 C15 C30
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1020&r=ecm
  26. By: Debopam Bhattacharya; Bhashkar Mazumder
    Abstract: This paper concerns the problem of inferring the effects of covariates on intergenerational income mobility, i.e. on the relationship between the incomes of parents and the future earnings of their children. We focus on two different measures of mobility: (i) the traditional transition probability of movement across income quantiles over generations, and (ii) a new direct measure of upward mobility, viz. the probability that an adult child's relative position exceeds that of the parents. We estimate the effect of possibly continuously distributed covariates from data using nonparametric regression and average derivatives and derive the distribution theory for these measures. The analytical novelty in the derivation is that the dependent variables involve nonsmooth functions of estimated components (marginal quantiles for transition probabilities and relative ranks for upward mobility), thus necessitating nontrivial modifications of standard nonparametric regression theory. We use these methods on US data from the National Longitudinal Survey of Youth to study black-white differences in intergenerational mobility, a topic which has received scant attention in the literature. We document that whites experience greater intergenerational mobility than blacks. Estimates of conditional mobility using nonparametric regression reveal that most of the interracial mobility gap can be accounted for by differences in cognitive skills during adolescence. The methods developed here have wider applicability to estimation of nonparametric regression and average derivatives where the dependent variable either involves a preliminary finite-dimensional estimate in a nonsmooth way or is a nonsmooth functional of ranks of one or more random variables.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-07-12&r=ecm
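    The paper's direct upward-mobility measure has a very concrete sample analogue: the share of children whose within-generation rank exceeds their parents' rank. A minimal sketch (names are ours):

      import numpy as np
      from scipy.stats import rankdata

      def upward_mobility(parent_income, child_income):
          """Fraction of adult children whose relative rank in their own
          generation exceeds the parents' rank in theirs."""
          r_parent = rankdata(parent_income) / len(parent_income)
          r_child = rankdata(child_income) / len(child_income)
          return float((r_child > r_parent).mean())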
  27. By: BONTEMPS, Christian; MAGNAC, Thierry; MAURIN, Eric
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:ide:wpaper:8519&r=ecm

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.