
on Econometrics 
By:  Laura Mayoral 
Abstract:  A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of residuals obtained after filtering a series through ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and nonstationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a byproduct, the estimation procedure provides an immediate check on the adequacy of the specified model. This is so because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided. 
Keywords:  Fractional integration, nonstationary long-memory time series, minimum distance estimation 
JEL:  C13 C22 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:959&r=ecm 
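The filter-then-score idea behind this estimator can be sketched in a few lines. The sketch below is illustrative only (hypothetical function names, and a crude grid search standing in for the paper's full optimization): filter the series with the fractional-difference operator (1-L)^d, then score a candidate d by the Box-Pierce statistic of the filtered series, which the abstract identifies as the criterion function.

```python
def frac_diff(y, d):
    # expand (1-L)^d: pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j
    n = len(y)
    pi = [1.0]
    for j in range(1, n):
        pi.append(pi[-1] * (j - 1 - d) / j)
    return [sum(pi[j] * y[t - j] for j in range(t + 1)) for t in range(n)]

def box_pierce(u, m):
    # Q = n * sum_{k=1}^m rho_k^2, with rho_k the lag-k autocorrelation
    n = len(u)
    ubar = sum(u) / n
    c0 = sum((x - ubar) ** 2 for x in u) / n
    q = 0.0
    for k in range(1, m + 1):
        ck = sum((u[t] - ubar) * (u[t - k] - ubar) for t in range(k, n)) / n
        q += (ck / c0) ** 2
    return n * q

def estimate_d(y, m=10, grid=None):
    # minimum-distance estimate: the d whose filtered series looks
    # most like white noise under the Box-Pierce criterion
    grid = grid or [i / 100 for i in range(0, 121)]
    return min(grid, key=lambda d: box_pierce(frac_diff(y, d), m))
```

Note that d = 0 leaves the series unchanged and d = 1 reduces to first differencing, so the grid nests the usual I(0)/I(1) cases; the minimized criterion doubles as the goodness-of-fit check the abstract mentions.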
By:  Hugo Kruiniger (Queen Mary, University of London) 
Abstract:  In this paper we consider GMM-based estimation and inference for the panel AR(1) model when the data are persistent and the time dimension of the panel is fixed. We find that the nature of the weak instruments problem of the Arellano-Bond estimator depends on the distributional properties of the initial observations. Subsequently, we derive local asymptotic approximations to the finite sample distributions of the Arellano-Bond estimator and the System estimator, respectively, under a variety of distributional assumptions about the initial observations and discuss the implications of the results we obtain for doing inference. We also propose two LM-type panel unit root tests. 
Keywords:  Dynamic panel data, GMM, Weak instruments, Weak identification, Local asymptotics, Multi-index asymptotics, Diagonal path asymptotics, LM test, Panel unit root test 
JEL:  C12 C13 C23 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp560&r=ecm 
By:  Quoreshi, Shahiduzzaman (Department of Economics, Umeå University) 
Abstract:  A model to account for the long memory property in a count data framework is proposed and applied to high-frequency stock transactions data. The unconditional and conditional first- and second-order moments are given. The CLS and FGLS estimators are discussed. In its empirical application to two stock series, for AstraZeneca and Ericsson B, we find that both series have a fractional integration property. 
Keywords:  Intraday; High frequency; Estimation; Fractional integration; Reaction time 
JEL:  C13 C22 C25 C51 G12 G14 
Date:  2006–04–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:umnees:0673&r=ecm 
By:  Kapetanios, George; Marcellino, Massimiliano 
Abstract:  The estimation of dynamic factor models for large sets of variables has attracted considerable attention recently, due to the increased availability of large datasets. In this paper we propose a new parametric methodology for estimating factors from large datasets based on state space models and discuss its theoretical properties. In particular, we show that it is possible to estimate consistently the factor space. We also develop a consistent information criterion for the determination of the number of factors to be included in the model. Finally, we conduct a set of simulation experiments that show that our approach compares well with existing alternatives. 
Keywords:  factor models; principal components; subspace algorithms 
JEL:  C32 C51 E52 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5620&r=ecm 
By:  Ai Deng (Department of Economics, Boston University); Pierre Perron (Department of Economics, Boston University) 
Abstract:  We consider the power properties of the CUSUM and CUSUM of squares tests in the presence of a one-time change in the parameters of a linear regression model. A result due to Ploberger and Krämer (1990) is that the CUSUM of squares test has only trivial asymptotic local power in this case, while the CUSUM test has nontrivial local asymptotic power unless the change is orthogonal to the mean regressor. We argue that such conclusions obtained from a local asymptotic framework are not reliable guides to what happens in finite samples. The approach we take is to derive expansions of the test statistics to order O_p(T^{-1/2}) that retain terms related to the magnitude of the change under the alternative hypothesis. This enables us to analyze what happens for breaks that are not local to zero. Our theoretical results are able to explain how the power function of the tests can be drastically different depending on whether one deals with a static regression with uncorrelated errors, a static regression with correlated errors, or a dynamic regression with lagged dependent variables, or whether a correction for non-normality is applied in the case of the CUSUM of squares. We discuss in which cases the tests are subject to a non-monotonic power function that goes to zero as the magnitude of the change increases, and uncover some curious properties. All theoretical results are verified to yield good guides to the finite sample power through simulation experiments. We finally highlight the practical importance of our results. 
Keywords:  Changepoint, Mean shift, Local asymptotic power, Recursive residuals, Dynamic models 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2005047&r=ecm 
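For readers wanting to experiment, the CUSUM of squares statistic is straightforward to compute from recursive residuals. A minimal sketch for the mean-only model y_t = mu + e_t follows (illustrative names; the general regression case replaces the running mean with recursive OLS, and this is not the authors' code):

```python
import math

def cusum_sq(y):
    # Recursive residuals for the mean-only model:
    #   w_t = (y_t - mean(y_1..y_{t-1})) / sqrt(1 + 1/(t-1)),  t = 2..T.
    # Statistic: max_t | S_t - t/(T-1) |, where S_t is the cumulative
    # proportion of the squared recursive residuals.
    w = []
    running_sum, n = y[0], 1
    for t in range(1, len(y)):
        w.append((y[t] - running_sum / n) / math.sqrt(1.0 + 1.0 / n))
        running_sum += y[t]
        n += 1
    total = sum(x * x for x in w)
    cum, stat = 0.0, 0.0
    for j, x in enumerate(w, start=1):
        cum += x * x
        stat = max(stat, abs(cum / total - j / len(w)))
    return stat
```

A variance break pushes the cumulative proportion away from the 45-degree line, which is what the statistic measures; the statistic always lies in [0, 1].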
By:  Laura Mayoral 
Abstract:  Although it is commonly accepted that most macroeconomic variables are nonstationary, it is often difficult to identify the source of the nonstationarity. In particular, it is well-known that integrated and short memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) plus breaks by considering a more general class of models under the null hypothesis: nonstationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived and it is shown by simulation that it is very well-behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided. 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:956&r=ecm 
By:  Juan J. Dolado; Jesús Gonzalo; Laura Mayoral 
Abstract:  This paper discusses the role of deterministic components in the DGP and in the auxiliary regression model which underlies the implementation of the Fractional Dickey-Fuller (FDF) test for I(1) against I(d) processes with d ∈ [0, 1). This is an important test in many economic applications because I(d) processes with d < 1 are mean-reverting although, when 0.5 ≤ d < 1, they are nonstationary, like I(1) processes. We show how simple the implementation of the FDF test is in these situations, and argue that it has better properties than LM tests. A simple testing strategy entailing only asymptotically normally distributed tests is also proposed. Finally, an empirical application is provided where the FDF test allowing for deterministic components is used to test for long memory in the per capita GDP of several OECD countries, an issue that has important consequences for discriminating between growth theories, and on which there is some controversy. 
Keywords:  Deterministic components, Dickey-Fuller test, fractional Dickey-Fuller test, fractional processes, long memory, trends, unit roots 
JEL:  C12 C22 C40 
Date:  2005–02 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:957&r=ecm 
By:  Quoreshi, Shahiduzzaman (Department of Economics, Umeå University) 
Abstract:  A vector integer-valued moving average (VINMA) model is introduced. The VINMA model allows for both positive and negative correlations between the counts. The conditional and unconditional first- and second-order moments are obtained. The CLS and FGLS estimators are discussed. The model is capable of capturing the covariance between and within intraday time series of transaction frequency data due to macroeconomic news and news related to a specific stock. Empirically, it is found that the spillover effect from Ericsson B to AstraZeneca is larger than that from AstraZeneca to Ericsson B. 
Keywords:  Count data; Intraday; Time series; Estimation; Reaction 
JEL:  C13 C22 C25 C51 G12 G14 
Date:  2006–04–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:umnees:0674&r=ecm 
By:  Heiss, Florian; Winschel, Viktor 
Abstract:  For the estimation of many econometric models, integrals without analytical solutions have to be evaluated. Examples include limited dependent variables and nonlinear panel data models. In the case of one-dimensional integrals, Gaussian quadrature is known to work efficiently for a large class of problems. In higher dimensions, similar approaches discussed in the literature are either very specific and hard to implement or suffer from exponentially rising computational costs in the number of dimensions, a problem known as the "curse of dimensionality" of numerical integration. We propose a strategy that shares the advantages of Gaussian quadrature methods, is very general and easily implemented, and does not suffer from the curse of dimensionality. Monte Carlo experiments for the random parameters logit model indicate the superior performance of the proposed method over simulation techniques. 
JEL:  C25 C15 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:lmu:muenec:916&r=ecm 
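The one-dimensional efficiency the abstract refers to is easy to see in code. The sketch below (illustrative only, not the authors' construction) hard-codes the 3-point Gauss-Hermite rule, which integrates polynomials up to degree 5 exactly against a normal density; a naive product rule in d dimensions would need 3^d such points, which is exactly the curse of dimensionality the paper works around.

```python
import math

# 3-point Gauss-Hermite rule for the weight exp(-x^2):
# nodes 0, +/- sqrt(3/2); exact for all polynomials of degree <= 5.
GH3 = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6.0),
       (0.0, 2.0 * math.sqrt(math.pi) / 3.0),
       (math.sqrt(1.5), math.sqrt(math.pi) / 6.0)]

def normal_expect(f):
    # E[f(Z)] for Z ~ N(0,1): change of variable z = sqrt(2)*x,
    # then divide by the total weight sqrt(pi)
    return sum(w * f(math.sqrt(2.0) * x) for x, w in GH3) / math.sqrt(math.pi)
```

Three function evaluations recover E[Z^2] = 1 and E[Z^4] = 3 to machine precision, whereas plain simulation would need many thousands of draws for comparable accuracy.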
By:  Eva Cantoni; Joanna Mills Flemming; Elvezio Ronchetti 
Abstract:  We adapt Breiman's (1995) nonnegative garrote method to perform variable selection in nonparametric additive models. The technique avoids methods of testing for which no reliable distributional theory is available. In addition it removes the need for a full search of all possible models, something which is computationally intensive, especially when the number of variables is moderate to high. The method has the advantages of being conceptually simple and computationally fast. It provides accurate predictions and is effective at identifying the variables generating the model. For illustration, we consider both a study of Boston housing prices and two simulation settings. In all cases our methods perform as well as or better than available alternatives like the Component Selection and Smoothing Operator (COSSO). 
Keywords:  cross-validation, nonnegative garrote, nonparametric regression, shrinkage methods, variable selection 
Date:  2006–03 
URL:  http://d.repec.org/n?u=RePEc:gen:geneem:2006.02&r=ecm 
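In the orthonormal-design linear case, Breiman's garrote has a closed form that shows how it selects variables: each OLS coefficient beta_j is rescaled by c_j = max(0, 1 - lambda/beta_j^2), so weak coefficients are zeroed out while strong ones are barely shrunk. A sketch of that shrinkage rule (the paper's nonparametric additive version replaces coefficients with fitted component functions, which this toy code does not attempt):

```python
def garrote_factors(beta_ols, lam):
    # Nonnegative garrote, orthonormal-design closed form:
    # c_j = max(0, 1 - lam / beta_j^2); the shrunk coefficient
    # is c_j * beta_j, and c_j = 0 drops variable j entirely.
    return [max(0.0, 1.0 - lam / (b * b)) if b != 0.0 else 0.0
            for b in beta_ols]
```

Raising lambda zeroes out more factors, which is the continuous analogue of subset selection that avoids searching all models.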
By:  Li-Chun Zhang and Ib Thomsen (Statistics Norway) 
Abstract:  Standard approaches to sample surveys take as the point of departure the estimation of one or several population totals (or means), or a few predefined subtotals (or sub-means). While the model-based prediction approach provides an attractive framework for estimation and inference, a model-based theory for the variety of randomization sampling designs has been lacking. In this paper we extend the model-based approach to the prediction of individuals in addition to totals and means. Since, given the sample, the conditional prediction error is zero for the selected units but positive for the units outside of the sample, it is possible to use the sampling design to control the unconditional individual prediction mean square errors. This immediately raises the need for probability sampling. It turns out that balancing between optimal prediction of the population total and control over individual predictions provides a fruitful model-based approach to sampling design. Apart from raising the need for probability sampling in general, it leads naturally to a number of important design features that are firmly established in sampling practice, including the use of simple random sampling for homogeneous populations and unequal probability sampling otherwise, the division of a business population into take-all, take-some and take-none units, the most common two-stage sampling designs, the use of stratification with proportional allocation, etc. Most of them have not received adequate model-based treatment previously. Our approach enables us to give an appraisal of these methods from a prediction point of view. 
Keywords:  Individual prediction; business survey; unequal probability sampling; two-stage sampling; linear regression population; common parameter model 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:ssb:dispap:440&r=ecm 
By:  Gang Liu, Terje Skjerpen, Anders Rygh Swensen and Kjetil Telle (Statistics Norway) 
Abstract:  Time-series regressions including nonlinear transformations of an integrated variable are not uncommon in various fields of economics. In particular, within the Environmental Kuznets Curve (EKC) literature, where the effect of income levels on the environment is investigated, it is standard procedure to include a third-order polynomial in the income variable. When the income variable is an I(1) variable and this variable is also included nonlinearly in the regression relation, the properties of the estimators and standard inferential procedures are unknown. Surprisingly, such problems have received rather limited attention in applied work, and appear disregarded in the EKC literature. We investigate the properties of the estimators of long-run parameters using Monte Carlo simulations. We find that the means of the ordinary least squares estimates are very similar to the true values and that standard testing procedures based on normality behave rather well. 
Keywords:  Emissions; Environmental Kuznets Curve; Unit Roots; Monte Carlo Simulations 
JEL:  C15 C16 C22 C32 O13 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:ssb:dispap:443&r=ecm 
By:  Juan J. Dolado; Jesús Gonzalo; Laura Mayoral 
Abstract:  This paper proposes a new time-domain test of a process being I(d), 0 < d ≤ 1, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the existing identification issue between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on y_{t-1} in an OLS regression of Δ^d y_t on a simple transformation of the above-mentioned deterministic components and y_{t-1}, possibly augmented by a suitable number of lags of Δ^d y_t to account for serial correlation in the error terms. The case where d = 1 coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches if the break date is known or unknown, respectively. The statistic is labelled the SB-FDF (Structural Break Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided. 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:954&r=ecm 
By:  Laura Mayoral 
Abstract:  The well-known lack of power of unit root tests has often been attributed to the short length of macroeconomic variables and also to DGPs that depart from the I(1)/I(0) alternatives. This paper shows that by using long spans of annual real GNP and GNP per capita (133 years) high power can be achieved, leading to the rejection of both the unit root and the trend-stationary hypotheses. This suggests that possibly neither model provides a good characterization of these data. Next, more flexible representations are considered, namely, processes containing structural breaks (SB) and fractional orders of integration (FI). Economic justification for the presence of these features in GNP is provided. It is shown that the latter models (FI and SB) are in general preferred to the ARIMA (I(1) or I(0)) ones. As a novelty in this literature, new techniques are applied to discriminate between FI and SB models. It turns out that the FI specification is preferred, implying that GNP and GNP per capita are nonstationary, highly persistent but mean-reverting series. Finally, it is shown that the results are robust when breaks in the deterministic component are allowed for in the FI model. Some macroeconomic implications of these findings are also discussed. 
Keywords:  GNP, unit roots, fractional integration, structural change, long memory, exogenous growth models 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:955&r=ecm 
By:  Brännäs, Kurt (Department of Economics, Umeå University); Lönnbark, Carl (Department of Economics, Umeå University) 
Abstract:  This note gives dynamic effects of discrete and continuous explanatory variables for count data or integer-valued moving average models. An illustration based on a model for the number of transactions in a stock is included. 
Keywords:  INMA model; Marginal effect; Intraday; Financial data 
JEL:  C22 C25 G12 
Date:  2006–04–05 
URL:  http://d.repec.org/n?u=RePEc:hhs:umnees:0679&r=ecm 
By:  Andre Monteiro (Vrije Universiteit Amsterdam); Georgi V. Smirnov (University of Porto); Andre Lucas (Vrije Universiteit Amsterdam) 
Abstract:  We propose procedures for estimating the time-dependent transition matrices for the general class of finite nonhomogeneous continuous-time semi-Markov processes. We prove the existence and uniqueness of solutions for the system of Volterra integral equations defining the transition matrices, thereby showing that these empirical transition probabilities can be estimated from window-censored event-history data. An implementation of the method is presented based on nonparametric estimators of the hazard rate functions in the general and separable cases. A Monte Carlo study is performed to assess the small sample behavior of the resulting estimators. We use these new estimators for dealing with a central issue in credit risk. We consider the problem of obtaining estimates of the historical corporate default and rating migration probabilities using a dataset on credit ratings from Standard & Poor's. 
Keywords:  Nonhomogeneous semi-Markov processes; transition matrix; Volterra integral equations; separability; credit risk 
JEL:  C13 C14 C33 C41 G11 
Date:  2006–03–08 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20060024&r=ecm 
By:  Frits Bijleveld (SWOV Institute for Road Safety Research, Netherlands); Jacques Commandeur (SWOV Institute for Road Safety Research, Netherlands); Phillip Gould (Monash University, Melbourne); Siem Jan Koopman (Vrije Universiteit Amsterdam) 
Abstract:  Risk is at the center of many policy decisions in companies, governments and other institutions. The risk of road fatalities concerns local governments in planning countermeasures, the risk and severity of counterparty default concerns bank risk managers on a daily basis and the risk of infection has actuarial and epidemiological consequences. However, risk cannot be observed directly and it usually varies over time. Measuring risk is therefore an important exercise. In this paper we introduce a general multivariate framework for the time series analysis of risk that is modelled as a latent process. The latent risk time series model extends existing approaches by the simultaneous modelling of (i) the exposure to an event, (ii) the risk of that event occurring and (iii) the severity of the event. First, we discuss existing time series approaches for the analysis of risk which have been applied to road safety, actuarial and epidemiological problems. Second, we present a general model for the analysis of risk and discuss its statistical treatment based on linear state space methods. Third, we apply the methodology to time series of insurance claims, credit card purchases and road safety. It is shown that the general methodology can be effectively used in the assessment of risk. 
Keywords:  Actuarial statistics; Dynamic factor analysis; Kalman filter; Maximum likelihood; Road casualties; State space model; Unobserved components 
JEL:  C32 G33 
Date:  2005–12–19 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050118&r=ecm 
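The linear state space treatment the authors rely on reduces, in its simplest univariate form, to the Kalman filter for a local level (random-walk-plus-noise) model, with the latent level playing the role of the unobserved risk. A minimal sketch (illustrative only, far simpler than the paper's multivariate latent risk model):

```python
def local_level_filter(y, var_eps, var_eta, a0=0.0, p0=1e7):
    # Kalman filter for the local level model:
    #   y_t = alpha_t + eps_t,   alpha_{t+1} = alpha_t + eta_t.
    # a, p are the predicted state mean and variance; the large
    # (diffuse) prior p0 lets the data dominate the first updates.
    a, p = a0, p0
    filtered = []
    for obs in y:
        f = p + var_eps              # prediction variance of y_t
        k = p / f                    # Kalman gain
        a = a + k * (obs - a)        # filtered state mean
        p = p * (1.0 - k) + var_eta  # state variance, one step ahead
        filtered.append(a)
    return filtered
```

The filtered path is an estimate of the latent level given data up to each date; the full model stacks several such components for exposure, risk and severity.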
By:  Roberto Patuelli (Department of Spatial Economics, Vrije Universiteit Amsterdam); Aura Reggiani (Department of Economics, University of Bologna, Italy); Peter Nijkamp (Department of Spatial Economics, Vrije Universiteit Amsterdam); Uwe Blien (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nuremberg) 
Abstract:  In this paper, a set of neural network (NN) models is developed to compute short-term forecasts of regional employment patterns in Germany. NNs are modern statistical tools based on learning algorithms that are able to process large amounts of data. NNs are enjoying increasing interest in several fields, because of their effectiveness in handling complex data sets when the functional relationship between dependent and independent variables is not explicitly specified. The present paper compares two NN methodologies. First, it uses NNs to forecast regional employment in both the former West and East Germany. Each model implemented computes single estimates of employment growth rates for each German district, with a 2-year forecasting range. Next, additional forecasts are computed, by combining the NN methodology with Shift-Share Analysis (SSA). Since SSA aims to identify variations observed among the labour districts, its results are used as further explanatory variables in the NN models. The data set used in our experiments consists of a panel of 439 German districts. Because of differences in the size and time horizons of the data, the forecasts for West and East Germany are computed separately. The out-of-sample forecasting ability of the models is evaluated by means of several appropriate statistical indicators. 
Keywords:  networks; forecasts; regional employment; shift-share analysis; shift-share regression 
JEL:  C23 E27 R12 
Date:  2006–02–17 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20060020&r=ecm 
By:  Elena Pesavento; Barbara Rossi 
Abstract:  This paper is a comprehensive comparison of existing methods for constructing confidence bands for univariate impulse response functions in the presence of high persistence. Monte Carlo results show that Kilian (1998a), Wright (2000), Gospodinov (2004) and Pesavento and Rossi (2005) have favorable coverage properties, although they differ in terms of robustness at various horizons, median unbiasedness, and reliability in the possible presence of a unit or mildly explosive root. On the other hand, methods like Runkle's (1987) bootstrap, Andrews and Chen (1994), and regressions in levels or first differences (even when based on pretests) may not have accurate coverage properties. The paper makes recommendations as to the appropriateness of each method in empirical work. 
Date:  2006–03 
URL:  http://d.repec.org/n?u=RePEc:emo:wp2003:0603&r=ecm 
By:  H.P. Boswijk (Department of Quantitative Economics, Universiteit van Amsterdam); D. Fok (Department of Econometrics, Erasmus Universiteit Rotterdam); P.H. Franses (Department of Econometrics, Erasmus Universiteit Rotterdam) 
Abstract:  To examine cross-country diffusion of new products, marketing researchers have to rely on a multivariate product growth model. We put forward such a model, and show that it is a natural extension of the original Bass (1969) model. We contrast our model with multivariate models currently in use and show that inference is much easier and interpretation is straightforward. In fact, parameter estimation can be done using standard commercially available software. We illustrate the benefits of our model relative to other models in simulation experiments. An application to a three-country CD sales series shows the merits of our model in practice. 
Keywords:  Diffusion; International marketing; econometric models 
JEL:  C39 M31 
Date:  2006–03–16 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20060027&r=ecm 
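For reference, the univariate Bass (1969) model that the proposed multivariate model extends has a closed-form adoption path. A minimal sketch of that single-market building block (illustrative parameter values; the paper's contribution, linking such equations across countries, is not reproduced here):

```python
import math

def bass_F(t, p, q):
    # Cumulative adoption fraction in the Bass model;
    # p: innovation coefficient, q: imitation coefficient.
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def bass_sales(periods, m, p, q):
    # adopters per period for a market of ultimate size m
    return [m * (bass_F(t + 1.0, p, q) - bass_F(t, p, q))
            for t in range(periods)]
```

With q > p the per-period adoptions trace the familiar hump-shaped diffusion curve, and cumulative adoptions approach the market size m.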
By:  Duangkamon Chotikapanich; William E. Griffiths 
Abstract:  Hypothesis tests for dominance in income distributions have received considerable attention in recent literature. See, for example, Barrett and Donald (2003), Davidson and Duclos (2000) and references therein. Such tests are useful for assessing progress towards eliminating poverty and for evaluating the effectiveness of various policy initiatives directed towards welfare improvement. To date the focus in the literature has been on sampling theory tests. Such tests can be set up in various ways, with dominance as the null or alternative hypothesis, and with dominance in either direction (X dominates Y or Y dominates X). The result of a test is expressed as rejection of, or failure to reject, a null hypothesis. In this paper we develop and apply Bayesian methods of inference to problems of Lorenz and stochastic dominance. The result from a comparison of two income distributions is reported in terms of the posterior probabilities for each of the three possible outcomes: (a) X dominates Y, (b) Y dominates X, and (c) neither X nor Y is dominant. Reporting results about uncertain outcomes in terms of probabilities has the advantage of being more informative than a simple reject/do-not-reject outcome. Whether a probability is sufficiently high or low for a policy maker to take a particular action is then a decision for that policy maker. The methodology is applied to data for Canada from the Family Expenditure Survey for the years 1978 and 1986. We assess the likelihood of dominance from one time period to the next. Two alternative assumptions are made about the income distributions, Dagum and Singh-Maddala, and in each case the posterior probability of dominance is given by the proportion of times a relevant parameter inequality is satisfied by the posterior observations generated by Markov chain Monte Carlo. 
Keywords:  Bayesian, Income Distributions, Lorenz 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:mlb:wpaper:960&r=ecm 
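The reported probabilities have a simple Monte Carlo form: the share of posterior draws in which one distribution's Lorenz curve lies weakly above the other's at every grid point. A toy sketch (hypothetical inputs; in the paper the draws come from MCMC on Dagum or Singh-Maddala parameters):

```python
def dominance_probs(lorenz_x_draws, lorenz_y_draws):
    # Each element is one posterior draw: Lorenz ordinates on a common
    # grid. X dominates in a draw if its ordinates are weakly higher
    # everywhere (ties are credited to X in this toy version).
    n = len(lorenz_x_draws)
    x_dom = y_dom = 0
    for lx, ly in zip(lorenz_x_draws, lorenz_y_draws):
        if all(a >= b for a, b in zip(lx, ly)):
            x_dom += 1
        elif all(b >= a for a, b in zip(lx, ly)):
            y_dom += 1
    return x_dom / n, y_dom / n, 1.0 - (x_dom + y_dom) / n
```

The three returned proportions estimate the posterior probabilities of outcomes (a), (b) and (c) described in the abstract.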
By:  Miguel Carriquiry; Bruce A. Babcock (Center for Agricultural and Rural Development (CARD); Midwest Agribusiness Trade Research and Information Center (MATRIC)); Chad E. Hart (Center for Agricultural and Rural Development (CARD); Food and Agricultural Policy Research Institute (FAPRI)) 
Abstract:  The effect of sampling error in estimation of farmers' mean yields for crop insurance purposes is explored using farm-level corn yield data in Iowa from 1990 to 2000 and Monte Carlo simulations. We find that sampling error combined with nonlinearities in the insurance indemnity function will result in empirically estimated crop insurance rates that exceed actuarially fair values by between 2 and 16 percent, depending on the coverage level and the number of observations used to estimate mean yields. Accounting for the adverse selection caused by sampling error results in crop insurance rates that will exceed fair values by between 42 and 127 percent. We propose a new estimator for mean yields based on a common decomposition of farm yields into systemic and idiosyncratic components. The proposed estimator reduces sampling variance by approximately 45 percent relative to the current estimator. 
Keywords:  actual production history (APH), crop insurance, mean yields estimation, sampling error. 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:ias:cpaper:05wp387&r=ecm 
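The core mechanism, sampling error in the estimated mean interacting with a convex indemnity function, can be reproduced in a few lines. The sketch below uses illustrative numbers and normal yields rather than the paper's empirical distribution: it compares the expected indemnity at the true mean with its average when the yield guarantee is set from a noisy n-observation mean, and Jensen's inequality pushes the ratio above one.

```python
import math
import random

def norm_indemnity(guar, mu, sd):
    # E[max(guar - Y, 0)] for Y ~ N(mu, sd^2), in closed form
    z = (guar - mu) / sd
    big_phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    small_phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (guar - mu) * big_phi + sd * small_phi

def rate_inflation(mu, sd, coverage, n_hist, n_mc=20000, seed=7):
    # Ratio of the expected indemnity when the guarantee is based on a
    # noisy n_hist-observation mean to the fair (true-mean) indemnity.
    rng = random.Random(seed)
    fair = norm_indemnity(coverage * mu, mu, sd)
    total = 0.0
    for _ in range(n_mc):
        mu_hat = mu + rng.gauss(0.0, sd / math.sqrt(n_hist))
        total += norm_indemnity(coverage * mu_hat, mu, sd)
    return total / n_mc / fair
```

With few historical observations (small n_hist) the ratio rises, mirroring the paper's finding that short yield histories inflate empirically estimated rates above fair values.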
By:  Tony Lancaster (Institute for Fiscal Studies and Brown University); Sung Jae Jun 
Abstract:  Recent work by Schennach (2005) has opened the way to a Bayesian treatment of quantile regression. Her method, called Bayesian exponentially tilted empirical likelihood (BETEL), provides a likelihood for data y subject only to a set of m moment conditions of the form E[g(y, θ)] = 0, where θ is a k-dimensional parameter of interest and k may be smaller than, equal to, or larger than m. The method may be thought of as the construction of a likelihood supported on the n data points that is minimally informative, in the sense of maximum entropy, subject to the moment conditions. 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:05/06&r=ecm 
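The maximum-entropy construction can be made concrete for a single scalar moment condition: tilt the empirical weights exponentially, w_i proportional to exp(lam * g_i), choosing lam so the tilted moment is zero. A sketch (illustrative names and a scalar case only; the paper handles vector moments and embeds the resulting likelihood in a posterior):

```python
import math

def etel_weights(g):
    # Exponentially tilted weights for one scalar moment E[g] = 0:
    # w_i ~ exp(lam * g_i), with lam found by bisection so that the
    # tilted mean of g is zero. Requires min(g) < 0 < max(g).
    def tilted_mean(lam):
        w = [math.exp(lam * x) for x in g]
        return sum(wi * x for wi, x in zip(w, g)) / sum(w)
    lo, hi = -50.0, 50.0  # tilted_mean is increasing in lam
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in g]
    s = sum(w)
    return [wi / s for wi in w]
```

Among all weight vectors satisfying the moment condition, these minimize the Kullback-Leibler divergence from the uniform weights 1/n, which is the minimal-informativeness property the abstract describes.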
By:  Nabil Annabi; John Cockburn; Bernard Decaluwé 
Abstract:  This study focuses on the choice of functional forms and their parametrization (estimation of free parameters and calibration of other parameters) in the context of CGE models. Various types of elasticities are defined, followed by a presentation of the functional forms most commonly used in these models and various econometric methods for estimating their free parameters. Following this presentation of the theoretical framework, we review parameter estimates used in the literature. This brief literature review is intended as a guideline for the choice of parameters for CGE models of developing countries. 
Keywords:  Trade liberalization, Poverty, Elasticities, Functional forms, Calibration, Computable General Equilibrium (CGE) model 
JEL:  C51 C81 C82 D58 E27 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:lvl:mpiacr:200604&r=ecm 
By:  José Luis Moraga-González (Groningen University); Matthijs R. Wildenbeest (Erasmus Universiteit Rotterdam) 
Abstract:  In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online prices for different computer memory chips. The estimates of the search cost distribution suggest that consumers have either quite high or quite low search costs, so they either search for all prices in the market or for at most three prices. According to Kolmogorov-Smirnov goodness-of-fit tests, we cannot reject the null hypothesis that the observed prices are generated by the model. 
Keywords:  consumer search; oligopoly; price dispersion; structural estimation; maximum likelihood 
JEL:  C14 D43 D83 L13 
Date:  2006–02–15 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20060019&r=ecm 
By:  François-Éric Racicot (Département des sciences administratives, Université du Québec (Outaouais) et LRSP); Raymond Théoret (Département de stratégie des affaires, Université du Québec (Montréal)) 
Abstract:  Monte Carlo simulation has an advantage over the binomial tree in that it can take into account the multiple dimensions of a problem. However, its convergence speed is slower. In this article, we show how this method may be improved by various means: antithetic variables, control variates, and low-discrepancy sequences (Faure, Sobol and Halton sequences). We show how to compute the standard deviation of a Monte Carlo simulation when the payoffs of a claim, like a contingent claim, are nonlinear. In this case, we must compute this standard deviation by running a great number of repeated simulations, so that the results approach a normal distribution. The mean of the means of these simulations is then a good estimator of the desired price. We also show how to combine Halton numbers with antithetic variables to improve the convergence of quasi-Monte Carlo (QMC) simulation. This version of QMC is aptly named because its result varies from one simulation to another, whereas the result of a classical QMC, like that of the binomial tree, is fixed (not random). 
Keywords:  Financial engineering, derivatives, Monte Carlo simulation, low-discrepancy sequences 
JEL:  G12 G13 G33 
Date:  2006–04–10 
URL:  http://d.repec.org/n?u=RePEc:pqs:wpaper:052006&r=ecm 
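The antithetic-variable idea is the simplest of the improvements listed: pair each normal draw z with -z so that the two payoff evaluations are negatively correlated, shrinking the variance of the mean. A minimal sketch for a European call under geometric Brownian motion (illustrative parameters, not the authors' implementation):

```python
import math
import random

def mc_call_antithetic(s0, k, r, sigma, t, n_pairs, seed=42):
    # European call priced by Monte Carlo with antithetic normal
    # draws (z, -z): each pair contributes two payoff evaluations.
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        for zz in (z, -z):  # the antithetic pair
            s_t = s0 * math.exp(drift + vol * zz)
            payoff_sum += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff_sum / (2 * n_pairs)
```

With s0 = k = 100, r = 0.05, sigma = 0.2 and t = 1, the estimate lands near the Black-Scholes value of about 10.45; repeating the whole simulation with different seeds and averaging, as the abstract describes, gives the standard deviation of the price estimate.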