
on Econometrics 
By:  Christine De Mol (Universite Libre de Bruxelles – ECARES, Av. F. D. Roosevelt, 50 – CP 114, 1050 Bruxelles, Belgium.); Domenico Giannone (Universite Libre de Bruxelles – ECARES, Av. F. D. Roosevelt, 50 – CP 114, 1050 Bruxelles, Belgium.); Lucrezia Reichlin (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) 
Abstract:  This paper considers Bayesian regression with normal and double-exponential priors as forecasting methods based on large panels of time series. We show that, empirically, these forecasts are highly correlated with principal component forecasts and that they perform equally well for a wide range of prior choices. Moreover, we study the asymptotic properties of the Bayesian regression under a Gaussian prior under the assumption that the data are quasi-collinear, to establish a criterion for setting parameters in a large cross-section. JEL Classification: C11, C13, C33, C53. 
Keywords:  Bayesian VAR, ridge regression, Lasso regression, principal components, large cross-sections. 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060700&r=ecm 
By:  Lux, Thomas 
Abstract:  Multifractal processes have recently been proposed as a new formalism for modelling the time series of returns in finance. The major attraction of these processes is their ability to generate various degrees of long memory in different powers of returns, a feature that has been found in virtually all financial data. Initial difficulties stemming from non-stationarity and the combinatorial nature of the original model have been overcome by the introduction of an iterative Markov-switching multifractal model in Calvet and Fisher (2001), which allows for estimation of its parameters via maximum likelihood and Bayesian forecasting of volatility. However, applicability of MLE is restricted to cases with a discrete distribution of volatility components. From a practical point of view, ML also becomes computationally unfeasible for large numbers of components, even if they are drawn from a discrete distribution. Here we propose an alternative GMM estimator together with linear forecasts, which in principle is applicable for any continuous distribution with any number of volatility components. Monte Carlo studies show that GMM performs reasonably well for the popular Binomial and Lognormal models and that the loss incurred with linear compared to optimal forecasts is small. Extending the number of volatility components beyond what is feasible with MLE leads to gains in forecasting accuracy for some time series. 
Keywords:  Markov-switching, multifractal, forecasting, volatility, GMM estimation 
JEL:  C20 G12 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:5164&r=ecm 
By:  Xiujian Chen; Shu Lin; W. Robert Reed (University of Canterbury) 
Abstract:  Panel data characterized by groupwise heteroscedasticity, cross-sectional correlation, and AR(1) serial correlation pose problems for econometric analyses. It is well known that the asymptotically efficient FGLS estimator (Parks) sometimes performs poorly in finite samples. In a widely cited paper, Beck and Katz (1995) claim that their estimator (PCSE) is able to produce more accurate coefficient standard errors without any loss in efficiency in "practical research situations." This study disputes that claim. We find that the PCSE estimator is usually less efficient than Parks, and substantially so, except when the number of time periods is close to the number of cross-sections. 
Keywords:  Panel data estimation; Monte Carlo analysis; FGLS; Parks; PCSE; finite sample 
JEL:  C15 C23 
Date:  2006–11–03 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:06/14&r=ecm 
By:  Giulietti, Monica (Aston Business School); Otero, Jesus (Universidad del Rosario, Colombia); Smith, Jeremy (University of Warwick) 
Abstract:  This paper presents two alternative methods for modifying the HEGY-IPS test in the presence of cross-sectional dependency. In general, the bootstrap method (BHEGY-IPS) has greater power than the method suggested by Pesaran (2007) (CHEGY-IPS), although for large T and a high degree of cross-sectional dependency the CHEGY-IPS test dominates the BHEGY-IPS test. 
Keywords:  Heterogeneous dynamic panels ; Monte Carlo ; seasonal unit roots ; cross-sectional dependence 
JEL:  C12 C15 C22 C23 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:wrk:warwec:784&r=ecm 
By:  Katsumi Shimotsu (Department of Economics, Queen's University) 
Abstract:  This paper proposes two simple tests that are based on certain time domain properties of I(d) processes. First, if a time series follows an I(d) process, then each subsample of the time series also follows an I(d) process with the same value of d. Second, if a time series follows an I(d) process, then its dth differenced series follows an I(0) process. Simple as they may sound, these properties provide useful tools to distinguish between true and spurious I(d) processes. In the first test, we split the sample into b subsamples, estimate d for each subsample, and compare them with the estimate of d from the full sample. In the second test, we estimate d, use the estimate to take the dth difference of the sample, and apply the KPSS test and Phillips-Perron test to the differenced data and its partial sum. Both tests are applicable to both stationary and nonstationary I(d) processes. Simulations show that the proposed tests have good power against the spurious long memory models considered in the literature. The tests are applied to the daily realized volatility of the S&P 500 index. 
Keywords:  long memory, fractional integration, structural breaks, realized volatility 
JEL:  C12 C13 C14 C22 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1101&r=ecm 
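The first test described in the abstract above can be sketched in a few lines. Here the memory parameter is estimated by the log-periodogram (GPH) regression, which stands in for the semiparametric estimators the paper uses; the simulated series, the number of subsamples, and the bandwidth rule are illustrative choices.

```python
import numpy as np

def gph_estimate(x, alpha=0.65):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    n = len(x)
    m = int(n ** alpha)                        # number of low frequencies used
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2.0 * np.pi * n)   # periodogram ordinates
    reg = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
    return np.polyfit(reg, np.log(I), 1)[0]    # slope estimates d

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)                  # a true I(0) series, so d = 0

d_full = gph_estimate(x)
d_subs = [gph_estimate(sub) for sub in np.split(x, 4)]
# For a true I(d) process the subsample estimates should agree with the
# full-sample estimate; large discrepancies point to spurious long memory.
print(d_full, d_subs)
```

For a spurious long-memory series (e.g. white noise with occasional mean shifts), the subsample estimates would scatter away from the full-sample estimate, which is what the test exploits.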
By:  Chen, Willa; Deo, Rohit 
Abstract:  The distribution of the t-statistic in autoregressive (AR) processes is discontinuous near the unit root, causing problems for interval estimation. We show that the likelihood ratio test (RLRT) based on the restricted likelihood circumvents this problem. Chen and Deo (2006) show that, irrespective of the AR coefficient, the error in the chi-square approximation to the RLRT distribution for stationary AR(1) processes is 0.5 n^(-1) (G_3(.) - G_1(.)) + O(n^(-2)), where G_s is the c.d.f. of a chi-square distribution with s degrees of freedom. In this paper, the non-standard asymptotic distribution of the RLRT for the unit root boundary value is obtained and shown to be almost identical to that of the chi-square in the right tail. Together, the above two results imply that the chi-square distribution approximates the RLRT distribution very well even for nearly integrated series and transitions smoothly to the unit root distribution. The chi-square based confidence intervals obtained by inverting the RLRT thus have almost correct coverage near the unit root and have width shrinking to zero with increasing sample size. Related work by Francke and de Vos (2006) suggests the RLRT intervals may also be close to uniformly most accurate invariant. A simulation study supports the theory presented in the paper. 
Keywords:  Curvature; boundary value; trend stationary; restricted likelihood; confidence interval 
JEL:  C22 C12 
Date:  2006–12–18 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1215&r=ecm 
By:  Westerlund, Joakim; Basher, Syed A. 
Abstract:  A common explanation for the inability of the monetary model to beat the random walk in forecasting future exchange rates is that conventional time series tests may have low power, and that panel data should generate more powerful tests. This paper provides an extensive evaluation of this power argument for the use of panel data in the forecasting context. In particular, by using simulations it is shown that although pooling of the individual prediction tests can lead to substantial power gains, pooling only the parameters of the forecasting equation, as has been suggested in the previous literature, does not seem to generate more powerful tests. The simulation results are illustrated through an empirical application. 
Keywords:  Monetary Exchange Rate Model; Forecasting; Panel Data; Pooling; Bootstrap. 
JEL:  F47 F31 C32 C15 C33 
Date:  2006–12–20 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1229&r=ecm 
By:  Foschi, Paolo 
Abstract:  The estimation of regression models with two-way error component disturbances is considered for the case where both random effects are non-spherically distributed. The usual approach, which first transforms the effects into uncorrelated ones and then applies within and between transformations, cannot be conveniently applied. Here, it is proposed to reverse this scheme by first applying the within and between transformations. This results in a simple general linear model (GLM) which can be partitioned into three smaller GLMs. Then, by exploiting the structure of the models and using the generalized QR decomposition as a tool, a computationally efficient and numerically reliable method for estimating the regression parameters is derived. This estimation method is generalized to the case of a system of seemingly unrelated regressions. 
Keywords:  panel data models; regressions; seemingly unrelated regressions; generalized leastsquares; error components; orthogonal transformation; numerical methods 
JEL:  C32 C33 C63 
Date:  2005–02–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1424&r=ecm 
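For readers unfamiliar with the transformations mentioned in the abstract above, this small numpy sketch shows the within and between transformations for a simple one-way layout with simulated data (the paper treats the harder two-way, non-spherical case).

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 6, 8                          # illustrative panel dimensions
y = rng.standard_normal((N, T))      # one observation per unit and period

# Within transformation: deviations of each unit from its time mean.
# Between transformation: the unit means themselves.
within = y - y.mean(axis=1, keepdims=True)
between = y.mean(axis=1)

# The two parts decompose the data exactly, and the within part is
# orthogonal to unit-level constants (its row sums are zero).
print(np.allclose(within + between[:, None], y))
print(np.allclose(within.sum(axis=1), 0.0))
```

The paper's contribution is to apply such transformations first, before dealing with the non-spherical effects, and then to exploit the resulting block structure with a generalized QR decomposition.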
By:  Matias Mayor Fernandez; Esteban Fernandez Vazquez; Jorge Rodriguez Valez 
Abstract:  Spatial econometrics is a subdiscipline that has gained huge popularity in the last twenty years, not only in theoretical econometrics but in empirical studies as well. Basically, spatial econometric methods measure spatial interaction and incorporate spatial structure into regression analysis. The specification of a matrix of spatial weights W plays a crucial role in the estimation of spatial models. The elements of this matrix measure the spatial relationships between two geographical locations i and j, and they are specified exogenously to the model. Several alternatives for W have been proposed in the literature, although binary matrices based on contiguity among locations or distance matrices are the most common choices. One shortcoming of using this type of matrix for spatial models is the impossibility of estimating "heterogeneous" spatial spillovers: the typical objective is the estimation of a parameter that measures the average spatial effect of the set of locations analysed. Roughly speaking, this is given by "ill-posed" econometric models where the number of (spatial) parameters to estimate is too large. In this paper, we explore the use of generalized maximum entropy econometrics (GME) to estimate spatial structures. This technique is very attractive in situations where one has to deal with the estimation of "ill-posed" or "ill-conditioned" models. We compare, by means of Monte Carlo simulations, "classical" ML estimators with GME estimators in several situations with different availability of information. 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p777&r=ecm 
By:  Volker Krätschmer 
Abstract:  Let W denote a family of probability distributions with parameter space Τ, and WG be a subfamily of W depending on a mapping G: Θ → Τ. Extremum estimations of the parameter vector ν ∈ Θ are considered. Some sufficient conditions are presented to ensure uniqueness with probability one. As important applications, the maximum likelihood estimation in curved exponential families and nonlinear regression models with independent disturbances, as well as the maximum likelihood estimation of the location and scale parameters of Gumbel distributions, are treated. 
Keywords:  Extremum Estimation, Sard’s Theorem, Nonlinear Regression, Curved Exponential Families, Gumbel Distributions. 
JEL:  C13 C16 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006080&r=ecm 
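As an illustration of the Gumbel application mentioned in the abstract above, the following sketch fits the location and scale parameters by maximum likelihood via scipy; the sample size and true parameter values are arbitrary choices made for the example.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(3)
loc_true, scale_true = 2.0, 1.5
sample = gumbel_r.rvs(loc=loc_true, scale=scale_true, size=20000,
                      random_state=rng)

# Maximum likelihood estimation of the Gumbel location and scale.
# The paper's uniqueness results guarantee (under its conditions) that
# this optimisation has a single solution with probability one.
loc_hat, scale_hat = gumbel_r.fit(sample)
print(loc_hat, scale_hat)
```

With 20,000 draws both estimates land close to the true values, as the large-sample theory predicts.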
By:  Marmer, Vadim; Shneyerov, Artyom 
Abstract:  We propose a quantile-based nonparametric approach to inference on the probability density function (PDF) of the private values in first-price sealed-bid auctions with independent private values. Our method of inference is based on a fully nonparametric kernel-based estimator of the quantiles and PDF of observable bids. Our estimator attains the optimal rate of Guerre, Perrigne, and Vuong (2000), and is also asymptotically normal with the appropriate choice of the bandwidth. As an application, we consider the problem of inference on the optimal reserve price. 
Keywords:  First-price auctions; independent private values; nonparametric estimation; kernel estimation; quantiles; optimal reserve price 
JEL:  C14 D44 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1193&r=ecm 
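The kernel building blocks of the approach above can be illustrated as follows. The bid distribution here is a made-up uniform sample, and the step from bid quantiles and densities to the valuation PDF (the paper's actual contribution) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
bids = rng.uniform(0.0, 1.0, size=2000)       # hypothetical observed bids

def kde(x, data, h):
    """Gaussian-kernel density estimate of the bid PDF at point x."""
    z = (x - data) / h
    return np.mean(np.exp(-0.5 * z ** 2)) / (h * np.sqrt(2.0 * np.pi))

h = 1.06 * bids.std() * len(bids) ** (-0.2)   # rule-of-thumb bandwidth
g_hat = kde(0.5, bids, h)                     # estimated bid density at 0.5
q_hat = np.quantile(bids, 0.5)                # empirical bid quantile
print(g_hat, q_hat)
```

For uniform bids on [0, 1] the true density at 0.5 is 1 and the median is 0.5, so both estimates should land close to those values.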
By:  Hurvich, Clifford; Wang, Yi 
Abstract:  We propose a new transaction-level bivariate log-price model, which yields fractional or standard cointegration. To the best of our knowledge, all existing models for cointegration require the choice of a fixed sampling frequency Delta t. By contrast, our proposed model is constructed at the transaction level, thus determining the properties of returns at all sampling frequencies. The two ingredients of our model are a Long Memory Stochastic Duration process for the waiting times tau(k) between trades, and a pair of stationary noise processes (e(k) and eta(k)) which determine the jump sizes in the pure-jump log-price process. The e(k), assumed to be iid Gaussian, produce a martingale component in log prices. We assume that the microstructure noise eta(k) obeys a certain model with memory parameter d(eta) in (-1/2, 0) (fractional cointegration case) or d(eta) = -1 (standard cointegration case). Our log-price model includes feedback between the shocks of the two series. This feedback yields cointegration, in that there exists a linear combination of the two components that reduces the memory parameter from 1 to 1+d(eta) in (1/2, 1) (fractional case) or to 0 (standard case). Returns at sampling frequency Delta t are asymptotically uncorrelated at any fixed lag as Delta t increases. We prove that the cointegrating parameter can be consistently estimated by the ordinary least-squares estimator, and obtain a lower bound on the rate of convergence. We propose transaction-level method-of-moments estimators of several of the other parameters in our model. We present a data analysis, which provides evidence of fractional cointegration. We then consider special cases and generalizations of our model, mostly in simulation studies, to argue that the suitably modified model is able to capture a variety of additional properties and stylized facts, including leverage, portfolio return autocorrelation due to nonsynchronous trading, Granger causality, and volatility feedback. 
The ability of the model to capture these effects stems in most cases from the fact that the model treats the (stochastic) intertrade durations in a fully endogenous way. 
Keywords:  Tick Time; Long Memory Stochastic Duration; Information Share; Granger causality. 
JEL:  C00 C01 
Date:  2006–12–04 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1413&r=ecm 
By:  Clements, Michael P. (University of Warwick); Galvão, Ana Beatriz (Queen Mary, University of London); Kim, Jae H. (Monash University) 
Abstract:  Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the HAR model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts. 
Keywords:  realized volatility ; quantile forecasting ; MIDAS ; HAR ; exchange rates 
JEL:  C32 C53 F37 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:wrk:warwec:777&r=ecm 
By:  Dirk Temme; Lutz Hildebrandt 
Abstract:  Many researchers seem to be unsure about how to specify formative measurement models in software programs like LISREL or AMOS and how to establish identification of the corresponding structural equation model. In order to make identification easier, a new, mainly graphically oriented approach is presented for a specific class of recursive models with formative indicators. Using this procedure, it is shown that some models have erroneously been considered underidentified. Furthermore, it is shown that specifying formative indicators as exogenous variables raises serious conceptual and substantive issues in the case that the formative construct is truly endogenous (i.e. influenced by more remote causes). An empirical study on the effects and causes of brand competence illustrates this point. 
Keywords:  Formative Indicators; Latent Variables; Covariance Structure Analysis; Identification 
JEL:  C31 C51 C52 M31 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006083&r=ecm 
By:  Lux, Thomas; Kaizoji, Taisei 
Abstract:  We investigate the predictability of both volatility and volume for a large sample of Japanese stocks. The particular emphasis of this paper is on assessing the performance of long memory time series models in comparison to their shortmemory counterparts. Since long memory models should have a particular advantage over long forecasting horizons, we consider predictions of up to 100 days ahead. In most respects, the long memory models (ARFIMA, FIGARCH and the recently introduced multifractal model) dominate over GARCH and ARMA models. However, while FIGARCH and ARFIMA also have quite a number of cases with dramatic failures of their forecasts, the multifractal model does not suffer from this shortcoming and its performance practically always improves upon the naïve forecast provided by historical volatility. As a somewhat surprising result, we also find that, for FIGARCH and ARFIMA models, pooled estimates (i.e. averages of parameter estimates from a sample of time series) give much better results than individually estimated models. 
Keywords:  forecasting, long memory models, volume, volatility 
JEL:  C22 C53 G12 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:5160&r=ecm 
By:  Henk Folmer; Johan Oud 
Abstract:  A strong increase in the availability of space-time data has occurred during the past decades. This has led to the development of a substantial literature dealing with the two particular problems inherent in this kind of data, i.e. serial dependence between the observations on each spatial unit over time, and spatial dependence between the observations on the spatial units at each point in time (e.g. Elhorst, 2001, 2003). Typical for spatial panel data models is that the causal direction cannot be based on instantaneous relationships between simultaneously measured variables. Rather, so-called cross-lagged panel design studies compare the effects of variables on each other across time. Although they circumvent the difficult problem of assessing causal direction in cross-sectional research, cross-lagged panel design studies are usually performed in discrete time (Oud, 2002). Because of different discrete-time observation intervals within and between studies, outcomes are often incomparable or appear to be contradictory (Gollob & Reichardt, 1987). This paper describes the problems of cross-lagged space-time models in discrete time and proposes how these problems can be solved through a continuous-time approach. In this regard, special attention is paid to structural equation modelling (SEM). In addition, we describe how space-time dependence can be handled in a SEM framework. 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p19&r=ecm 
By:  Chris Heaton (Department of Economics, Macquarie University); Victor Solo (University of New South Wales) 
Abstract:  The use of principal component techniques to estimate approximate factor models with large cross-sectional dimension is now well established. However, recent work by Inklaar, Jacobs and Romp (2003) and Boivin and Ng (2005) has cast some doubt on the importance of a large cross-sectional dimension for the precision of the estimates. This paper presents some new theory for approximate factor model estimation. Consistency is proved and rates of convergence are derived under conditions that allow for a greater degree of cross-correlation in the model disturbances than previously published results. The rates of convergence depend on the rate at which the cross-sectional correlation of the model disturbances grows as the cross-sectional dimension grows. The consequences for applied economic analysis are discussed. 
Keywords:  Factor analysis, time series models, principal components 
JEL:  C13 C32 C43 C53 
Date:  2006–09 
URL:  http://d.repec.org/n?u=RePEc:mac:wpaper:0605&r=ecm 
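A minimal simulation of the principal-component estimator discussed in the abstract above; the dimensions, number of factors, and noise level are illustrative choices, and the estimated factors are judged by how well they span the true ones (factors are identified only up to rotation).

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, r = 200, 100, 2                      # time span, cross-section, factors

F = rng.standard_normal((T, r))            # latent factors
L = rng.standard_normal((N, r))            # loadings
X = F @ L.T + rng.standard_normal((T, N))  # approximate factor model

# Principal-component estimator of the factors: the first r left
# singular vectors of the data matrix, conventionally scaled.
U = np.linalg.svd(X, full_matrices=False)[0]
F_hat = np.sqrt(T) * U[:, :r]

# R^2 of projecting the true factors on the estimated factor space.
proj = F_hat @ np.linalg.lstsq(F_hat, F, rcond=None)[0]
r2 = 1 - ((F - proj) ** 2).sum() / ((F - F.mean(0)) ** 2).sum()
print(r2)
```

As N grows, the estimated factor space tracks the true one increasingly well; the paper's results characterise how cross-correlated disturbances slow this convergence down.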
By:  Andreas S. Andreou (University of Cyprus); George A. Zombanakis (Bank of Greece) 
Abstract:  This paper applies computational intelligence methods to exchange rate forecasting. In particular, it employs neural network methodology in order to predict developments of the Euro exchange rate versus the U.S. Dollar and the Japanese Yen. Following a study of our series using traditional as well as specialized nonparametric methods, together with Monte Carlo simulations, we employ selected Neural Networks (NNs) trained to forecast rate fluctuations. Despite the fact that the data series have been shown by Rescaled Range Statistic (R/S) analysis to exhibit random behaviour, their internal dynamics have been successfully captured by certain NN topologies, thus yielding accurate predictions of the two exchange-rate series. 
Keywords:  Exchange-rate forecasting, Neural networks 
JEL:  C53 
Date:  2006–11 
URL:  http://d.repec.org/n?u=RePEc:bog:wpaper:49&r=ecm 
By:  Hirano, Keisuke; Porter, Jack 
Abstract:  This paper develops asymptotic optimality theory for statistical treatment rules in smooth parametric and semiparametric models. Manski (2000, 2002, 2004) and Dehejia (2005) have argued that the problem of choosing treatments to maximize social welfare is distinct from the point estimation and hypothesis testing problems usually considered in the treatment effects literature, and advocate formal analysis of decision procedures that map empirical data into treatment choices. We develop large-sample approximations to statistical treatment assignment problems in both randomized experiments and observational data settings in which treatment effects are identified. We derive a local asymptotic minimax regret bound on social welfare, and a local asymptotic risk bound for a two-point loss function. We show that certain natural treatment assignment rules attain these bounds. 
Keywords:  treatment effect; statistical decision theory; minimax regret; treatment assignment rules 
JEL:  C1 
Date:  2006–08–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1173&r=ecm 
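One "natural treatment assignment rule" of the kind analysed above is the empirical success rule: treat everyone when the estimated average treatment effect is positive. This toy sketch applies it to made-up experimental data (the effect size, noise level, and sample sizes are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical randomized-experiment data: outcomes under treatment
# and under control, with a true average treatment effect of +0.3.
y_treated = rng.normal(0.3, 1.0, size=500)
y_control = rng.normal(0.0, 1.0, size=500)

# Empirical success rule: map the data to a treatment choice by the
# sign of the estimated average treatment effect.
ate_hat = y_treated.mean() - y_control.mean()
assign_treatment = ate_hat > 0
print(ate_hat, assign_treatment)
```

The paper's contribution is to show that rules of roughly this form attain local asymptotic minimax regret bounds, i.e. no decision procedure does much better in large samples.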
By:  Weron, Rafal; Misiorek, Adam 
Abstract:  In this paper we assess the short-term forecasting power of different time series models in the Nord Pool electricity spot market. We evaluate the accuracy of both point and interval predictions; the latter are specifically important for risk management purposes, where one is more interested in predicting intervals for future price movements than simply point estimates. We find evidence that nonlinear regime-switching models outperform their linear counterparts and that the interval forecasts of all models are overestimated in the relatively non-volatile periods. 
Keywords:  Wholesale electricity price; Point forecast; Interval forecast; AR model; Threshold AR model 
JEL:  L94 Q40 C53 C22 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1363&r=ecm 
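A self-contained sketch of why regime switching can matter for point forecasts: a two-regime threshold AR(1) is simulated and fitted by regime-wise least squares. The data-generating parameters and the zero threshold are invented for illustration; the spot-price models compared in the paper are richer.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 2000

# Simulate a threshold AR(1): persistence differs above and below zero.
y = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if y[t - 1] <= 0.0 else 0.3
    y[t] = phi * y[t - 1] + rng.standard_normal()

# With a known threshold, the regime-specific AR coefficients are
# estimated by ordinary least squares on each regime's observations.
lag, cur = y[:-1], y[1:]
low, high = lag <= 0.0, lag > 0.0
phi_low = (lag[low] @ cur[low]) / (lag[low] @ lag[low])
phi_high = (lag[high] @ cur[high]) / (lag[high] @ lag[high])

# One-step point forecast from the fitted threshold model.
forecast = (phi_low if y[-1] <= 0.0 else phi_high) * y[-1]
print(phi_low, phi_high, forecast)
```

A linear AR(1) fitted to the same data would average the two persistence levels and mis-forecast in both regimes, which is the intuition behind the paper's finding that regime-switching models outperform linear ones.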
By:  Feridun, Mete 
Abstract:  This article aims at modeling and forecasting inflation in Pakistan. For this purpose, a number of econometric approaches are implemented and their results compared. In ARIMA models, adding additional lags for p and/or q necessarily reduces the sum of squares of the estimated residuals. When a model is estimated using lagged variables, some observations are lost. Results further indicate that the VAR models do not perform better than the ARIMA(2,1,2) models, and that the two-factor model with ARIMA(2,1,2) performs slightly better than the ARIMA(2,1,2). Although the study focuses on the problem of macroeconomic forecasting, the empirical results have more general implications for small-scale macroeconometric models. 
Keywords:  Modeling and forecasting inflation; ARIMA; VAR. 
JEL:  G1 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1026&r=ecm 
By:  Giuseppe Arbia; Roberto Basile; Gianfranco Piras 
Abstract:  In this paper we suggest an alternative estimator and an alternative graphical analysis, both developed by Hyndman et al. (1996), to describe the law of motion of cross-sectional distributions of per-capita income and its components in Europe. This estimator has better properties than the kernel density estimator generally used in the literature on intra-distribution dynamics (cf. Quah, 1997). By using the new estimator, we obtain evidence of a very strong persistent behavior of the regions considered in the study; that is, poor regions tend to remain poorer and rich regions tend to remain richer. These results are also in line with the most recent literature available on the distribution dynamics approach to regional convergence (Pittau and Zelli, 2006). 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p262&r=ecm 
By:  Knetsch, Thomas A.; Reimers, HansEggert 
Abstract:  Elements of an econometric examination of benchmark revisions in real-time data are suggested. Structural break tests may be applied to detect heterogeneities within vintages. Systems cointegration tests are helpful to reveal inconsistencies across vintages. Differencing and rebasing, often used to adjust for benchmark revisions, are generally not sufficient to ensure consistent real-time macroeconomic data. Vintage transformation functions estimated by cointegrating regressions are more flexible. Inappropriate conversion may cause observed revision statistics to be affected by nuisance parameters. In German industrial production and orders statistics, remaining revisions are generally biased and serially correlated. 
Keywords:  real-time data, benchmark revisions, industrial production, orders 
JEL:  C22 C32 C82 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:5155&r=ecm 
By:  Mishra, SK 
Abstract:  Arnold Zellner and Nagesh Revankar in their well-known paper “Generalized Production Functions” [The Review of Economic Studies, 36(2), pp. 241-250, 1969] introduced a new generalized production function, which was illustrated by an example of fitting the generalized Cobb-Douglas function to U.S. data for the Transportation Equipment Industry. For estimating the parameters of their production function, they used a method in which one of the parameters (theta) is chosen on a trial basis and the other parameters, relating to elasticity and returns to scale, are estimated so as to maximize the likelihood function. Repeated trials are made with different values of theta so as to obtain the global maximum of the likelihood function. In this paper we show that the method suggested and used by Zellner and Revankar (ZR) may easily be caught in a local optimum trap. We also show that the estimated parameters reported by them are grossly suboptimal. Using the Differential Evolution (DE) and the Repulsive Particle Swarm (RPS) methods of global optimization, the present paper re-estimates the parameters of the ZR production function with the U.S. data used by ZR. We find that the DE and RPS estimates of the parameters are significantly different from (and much better than) those estimated by ZR. We also find that the returns to scale do not vary with the size of output as reported by ZR. 
Keywords:  Zellner-Revankar production function; maximum likelihood; global optimization; Repulsive Particle Swarm; Differential Evolution; U.S. data; Transport Equipment Industry; variable returns to scale; suboptimality 
JEL:  D24 C61 C63 C31 
Date:  2006–12–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:1172&r=ecm 
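The local-optimum trap discussed above is easy to reproduce on a toy objective. The function below is an arbitrary multimodal stand-in for a likelihood surface, not the ZR likelihood itself; it is minimised once with a local optimiser started off the global basin and once with Differential Evolution.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# A deliberately multimodal 1-D objective: global minimum 0 at x = 0,
# with local minima near every other integer.
def f(x):
    return x[0] ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x[0]))

# A local optimiser started at 2.6 rolls into the basin near x = 3...
local = minimize(f, x0=[2.6], method="Nelder-Mead")

# ...while Differential Evolution searches the whole interval.
best = differential_evolution(f, bounds=[(-5.0, 5.0)], seed=6)

print(local.fun, best.fun)
```

The same qualitative failure can occur when a likelihood is profiled over a grid of theta values, which is the pitfall the paper attributes to the original ZR estimation scheme.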
By:  Anastassios Karaganis; Angelos Mimis 
Abstract:  In this paper a pioneering method for evaluating the probability of occurrence of a traffic accident on different segments of a national road is described. The line segments describing the road have been divided into sections, and for each of those sections it is assumed that the probability of a traffic accident follows an inhomogeneous Poisson distribution. The overall model is built up by using a spatial point process. The Poisson parameter is estimated with the help of a spatial SUR method which exploits the characteristics of the national road as described by a specially designed weight matrix. This methodology has been applied to data collected from the Greek national road and analyzed in this perspective. Specifically, the influence of weather conditions as well as the presence of daylight has been examined. 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p640&r=ecm 
By:  Katharina Hampel; Marcus Kunz; Norbert Schanne; Ruediger Wapler; Antje Weyh 
Abstract:  Labour-market policies are increasingly being decided on a regional level. This implies that institutions have an increased need for regional forecasts as a guideline for their decision-making process. Therefore, we forecast regional unemployment in the 176 German labour market districts. We use an augmented structural component (SC) model and compare the results from this model with those from basic SC and autoregressive integrated moving average (ARIMA) models. Basic SC models lack two important dimensions: First, they only use level, trend, seasonal and cyclical components, although former periods of the dependent variable generally have a significant influence on the current value. Second, as spatial units become smaller, the influence of "neighbour effects" becomes more important. In this paper we augment the SC model for structural breaks, autoregressive components and spatial autocorrelation. Using unemployment data from the Federal Employment Services in Germany for the period December 1997 to August 2005, we first estimate basic SC models with components for structural breaks and ARIMA models for each spatial unit separately. In a second stage, autoregressive components are added to the SC model. Third, spatial autocorrelation is introduced into the SC model. We assume that unemployment in adjacent districts is not independent for two reasons: One source of spatial autocorrelation may be that the effect of certain determinants of unemployment is not limited to the particular district but also spills over to neighbouring districts. Second, factors may exist which influence a whole region but are not fully captured by exogenous variables and are reflected in the residuals. We test the quality of the forecasts from the basic models and the augmented SC model by ex-post estimation for the period September 2004 to August 2005. 
First results show that the SC model with autoregressive elements and spatial autocorrelation is superior to basic SC and ARIMA models in most of the German labour market districts. 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p196&r=ecm 
By:  Zsolt Darvas (Corvinus University of Budapest); Gábor Vadas (Magyar Nemzeti Bank) 
Abstract:  Decomposing output into trend and cyclical components is an uncertain exercise and depends on the method applied. It is an especially dubious task for countries undergoing large structural changes, such as transition countries. Despite their deficiencies, however, univariate detrending methods are frequently adopted for both policy-oriented and academic research. This paper proposes a new procedure for combining univariate detrending techniques which is based on revisions of the estimated output gaps adjusted by the variance of and the correlation among output gaps. The procedure is applied to the study of the similarity of business cycles between the euro area and new EU Member States. 
Keywords:  combination, detrending, new EU members, OCA, output gap, revision 
JEL:  C22 E32 
Date:  2005–08–15 
URL:  http://d.repec.org/n?u=RePEc:mkg:wpaper:0505&r=ecm 
By:  Gunnar Flötteröd (Berlin University of Technology, Group for Transport Systems Planning and Transport Telematics); Kai Nagel (Berlin University of Technology, Group for Transport Systems Planning and Transport Telematics) 
Abstract:  This article describes a behavioral model of combined route and activity location choice. The model can be simulated by a combination of a time variant best path algorithm and dynamic programming, yielding a behavioral pattern that minimizes a traveler’s perceived cost. Furthermore, the model is extended in a Bayesian manner, providing behavioral probabilities not only based on subjective costs, but also allowing for the incorporation of anonymous traffic measurements and the formulation of a traffic state estimation problem. 
Keywords:  transportation, route choice, modelling 
JEL:  L91 C6 
URL:  http://d.repec.org/n?u=RePEc:cni:wpaper:200606&r=ecm 
By:  Pascucci, Andrea; Foschi, Paolo 
Abstract:  We propose a general class of non-constant volatility models with dependence on the past. The framework includes path-dependent volatility models, such as that of Hobson and Rogers, and also path-dependent contracts such as options of Asian style. A key feature of the model is that market completeness is preserved. Some empirical analysis, based on comparison with the performance of standard local volatility and Heston models, shows the effectiveness of the path-dependent volatility. 
Keywords:  option pricing; stochastic volatility; path-dependent options 
JEL:  G1 C02 
Date:  2006–11–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:973&r=ecm 
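To make the notion of a path-dependent contract concrete, here is a Monte Carlo price of an arithmetic-average Asian call under plain constant-volatility Black-Scholes dynamics. All parameter values are arbitrary examples, and the constant volatility is exactly the assumption the paper relaxes; replacing it with a path-dependent volatility is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(8)
S0, K, r, sigma, T_mat = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T_mat / n_steps

# Simulate risk-neutral geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt
                      + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)

# Asian payoff: the option pays on the average price over the path,
# so the payoff depends on the whole trajectory, not just S(T).
payoff = np.maximum(S.mean(axis=1) - K, 0.0)
price = np.exp(-r * T_mat) * payoff.mean()
print(round(price, 2))
```

Because the payoff averages over the path, the option is cheaper than the corresponding vanilla call; under a path-dependent volatility model, the same simulation scheme applies once the volatility is updated along each path.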