nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒06‒17
thirty papers chosen by
Sune Karlsson
Örebro University

  1. Instrumental Variable Quantile Estimation of Spatial Autoregressive Models By Zhenlin Yang; Liangjun Su
  2. Nonparametric Structural Estimation via Continuous Location Shifts in an Endogenous Regressor By Peter C.B. Phillips; Liangjun Su
  3. Dynamic Misspecification in Nonparametric Cointegrating Regression By Ioannis Kasparis; Peter C.B. Phillips
  4. LAD Asymptotics under Conditional Heteroskedasticity with Possibly Infinite Error Densities By Jin Seo Cho; Chirok Han; Peter C.B. Phillips
  5. Real Time Detection of Structural Breaks in GARCH Models By Zhongfang He; John M. Maheu
  6. The empirical saddlepoint likelihood estimator applied to two-step GMM By Sowell, Fallaw
  7. Infinite Density at the Median and the Typical Shape of Stock Return Distributions By Chirok Han; Jin Seo Cho; Peter C.B. Phillips
  8. A Paradox of Inconsistent Parametric and Consistent Nonparametric Regression By Peter C.B. Phillips; Liangjun Su
  10. Explosive Behavior in the 1990s Nasdaq: When Did Exuberance Escalate Asset Values? By Peter C.B. Phillips; Yangru Wu; Jun Yu
  13. Is neglected heterogeneity really an issue in binary and fractional regression models? A simulation exercise for logit, probit and loglog models By Esmeralda A. Ramalho; Joaquim J. S. Ramalho
  14. De copulis non est disputandum - Copulae: An Overview By Wolfgang Härdle; Ostap Okhrin
  15. Bayesian Fuzzy Regression Analysis and Model Selection: Theory and Evidence By Hui Feng; David E. Giles
  16. Misspecification and Heterogeneity in Single-Index, Binary Choice Models By Chen, Pian; Velamuri, Malathi
  17. On the Existence of the Maximum Likelihood Estimates for Poisson Regression By J. M. C. Santos Silva; Silvana Tenreyro
  18. Pricing Model Performance and the Two-Pass Cross-Sectional Regression Methodology By Raymond Kan; Cesare Robotti; Jay Shanken
  19. Bayesian networks of customer satisfaction survey data By Silvia SALINI; Ron S. KENETT
  20. Identifying common dynamic features in stock returns By Caiado, Jorge; Crato, Nuno
  21. Further Simulation Evidence on the Performance of the Poisson Pseudo-Maximum Likelihood Estimator By J. M. C. Santos Silva; Silvana Tenreyro
  22. Parameter estimation for differential equations using fractal-based methods and applications to economics By Cinzia COLAPINTO; Matteo FINI; Herb E. KUNZE; Jelena LONCAR
  23. Volatility Models : from GARCH to Multi-Horizon Cascades. By Alexander Subbotin; Thierry Chauveau; Kateryna Shapovalova
  24. Low-Pass Filter Design using Locally Weighted Polynomial Regression and Discrete Prolate Spheroidal Sequences By Proietti, Tommaso; Luati, Alessandra
  25. Solving inverse problems for random equations and applications By Herb E. KUNZE; Davide LA TORRE; Edward R. VRSCAY
  26. Methodological overview of Rasch model and application in customer satisfaction survey data By Francesca DE BATTISTI; Giovanna NICOLINI; Silvia SALINI
  27. On a Conditional Multiple in Portfolio Insurance: CAViaR for Managers? By Benjamin Hamidi; Emmanuel Jurczenko; Bertrand Maillet
  28. Yield Curve Predictability, Regimes, and Macroeconomic Information: A Data-Driven Approach By Francesco Audrino; Kameliya Filipova
  29. Parameter identification for deterministic and stochastic differential equations using the "collage method" for fixed point equations By Vincenzo CAPASSO; Herb E. KUNZE; Davide LA TORRE; Edward R. VRSCAY
  30. A note on positive semi-definiteness of some non-Pearsonian correlation matrices By Mishra, SK

  1. By: Zhenlin Yang (School of Economics, Singapore Management University); Liangjun Su (School of Economics, Singapore Management University)
    Abstract: We propose an instrumental variable quantile regression (IVQR) estimator for spatial autoregressive (SAR) models. Like the GMM estimators of Lin and Lee (2006) and Kelejian and Prucha (2006), the IVQR estimator is robust against heteroscedasticity. Unlike the GMM estimators, the IVQR estimator is also robust against outliers and requires weaker moment conditions. More importantly, it allows us to characterize the heterogeneous impact of variables on different points (quantiles) of a response distribution. We derive the limiting distribution of the new estimator. Simulation results show that the new estimator performs well in finite samples at various quantile points. In the special case of median restriction, it outperforms the conventional QML estimator that does not take heteroscedasticity in the errors into account; it also outperforms the GMM estimators whether or not they allow for heteroscedasticity.
    Keywords: Spatial Autoregressive Model; Quantile Regression; Instrumental Variable; Quasi Maximum Likelihood; GMM; Robustness.
    JEL: C13 C21 C51
    Date: 2007–08
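As a rough illustration of the instrumental-variable quantile idea (not the authors' IVQR estimator for SAR models, which involves spatial weight matrices), the sketch below simulates an endogenous regressor, projects it on an instrument, and runs a median regression on the fitted values; the data-generating process and the Nelder-Mead quantile routine are invented for this example:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # regressor, endogenous through u
y = 1.0 + 2.0 * x + u

def check_loss(beta, X, yy, tau):
    """Koenker-Bassett check-function objective for quantile tau."""
    r = yy - X @ beta
    return np.sum(r * (tau - (r < 0)))

def quantile_reg(X, yy, tau):
    b0 = np.linalg.lstsq(X, yy, rcond=None)[0]          # OLS starting values
    return minimize(check_loss, b0, args=(X, yy, tau),
                    method="Nelder-Mead").x

# stage 1: replace the endogenous regressor by its projection on the instrument
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# stage 2: median regression on the instrumented regressor
beta_med = quantile_reg(np.column_stack([np.ones(n), x_hat]), y, tau=0.5)
print(beta_med)  # slope close to the true value 2 despite the endogeneity
```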
  2. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Liangjun Su (School of Economics, Singapore Management University)
    Abstract: Recent work by Wang and Phillips (2009b, c) has shown that ill-posed inverse problems do not arise in nonstationary nonparametric regression and there is no need for nonparametric instrumental variable estimation. Instead, simple Nadaraya-Watson nonparametric estimation of a (possibly nonlinear) cointegrating regression equation is consistent with a limiting (mixed) normal distribution irrespective of the endogeneity in the regressor, near integration as well as integration in the regressor, and serial dependence in the regression equation. The present paper shows that some closely related results apply in the case of structural nonparametric regression with independent data when there are continuous location shifts in the regressor. In such cases, location shifts serve as an instrumental variable in tracing out the regression line similar to the random wandering nature of the regressor in a cointegrating regression. Asymptotic theory is given for local level and local linear nonparametric estimators, links with nonstationary cointegrating regression theory and nonparametric IV regression are explored, and extensions to the stationary strong mixing case are given. In contrast to standard nonparametric limit theory, local level and local linear estimators have identical limit distributions, so the local linear approach has no apparent advantage in the present context. Some interesting cases are discovered, which appear to be new in the literature, where nonparametric estimation is consistent whereas parametric regression is inconsistent even when the true (parametric) regression function is known. The methods are further applied to establish a limit theory for nonparametric estimation of structural panel data models with endogenous regressors and individual effects. Some simulation evidence is reported.
    Keywords: Fixed effects, Kernel regression, Location shift, Mixing, Nonparametric IV, Nonstationarity, Panel model, Structural estimation
    JEL: C13 C14
    Date: 2009–06
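For readers unfamiliar with the Nadaraya-Watson (local level) estimator that recurs throughout these abstracts, a minimal self-contained version with a Gaussian kernel on simulated i.i.d. data (bandwidth and design chosen arbitrarily for illustration):

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Local-level kernel regression: weighted average of y with Gaussian weights."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 400)
y = np.sin(x) + 0.1 * rng.normal(size=400)

grid = np.linspace(-1.0, 1.0, 5)
est = nadaraya_watson(grid, x, y, h=0.2)
print(est)  # tracks sin(grid)
```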
  3. By: Ioannis Kasparis (University of Cyprus); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: Linear cointegration is known to have the important property of invariance under temporal translation. The same property is shown not to apply for nonlinear cointegration. The requisite limit theory involves sample covariances of integrable transformations of non-stationary sequences and time translated sequences, allowing for the presence of a bandwidth parameter so as to accommodate kernel regression. The theory is an extension of Wang and Phillips (2008) and is useful for the analysis of nonparametric regression models with a misspecified lag structure and in situations where temporal aggregation issues arise. The limit properties of the Nadaraya-Watson (NW) estimator for cointegrating regression under misspecified lag structure are derived, showing the NW estimator to be inconsistent with a "pseudo-true function" limit that is a local average of the true regression function. In this respect nonlinear cointegrating regression differs importantly from conventional linear cointegration which is invariant to time translation. When centred on the pseudo-function and appropriately scaled, the NW estimator still has a mixed Gaussian limit distribution. The convergence rates are the same as those obtained under correct specification but the variance of the limit distribution is larger. Some applications of the limit theory to non-linear distributed lag cointegrating regression are given and the practical import of the results for index models, functional regression models, and temporal aggregation are discussed.
    Keywords: Dynamic misspecification, Functional regression, Integrable function, Integrated process, Local time, Misspecification, Mixed normality, Nonlinear cointegration, Nonparametric regression
    JEL: C22 C32
    Date: 2009–06
  4. By: Jin Seo Cho (Dept. of Economics, Korea University); Chirok Han (Dept. of Economics, Korea University); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: Least absolute deviations (LAD) estimation of linear time-series models is considered under conditional heteroskedasticity and serial correlation. The limit theory of the LAD estimator is obtained without assuming the finite density condition for the errors that is required in standard LAD asymptotics. The results are particularly useful in application of LAD estimation to financial time series data.
    Keywords: Asymptotic leptokurtosis, Convex function, Infinite density, Least absolute deviations, Median, Weak convergence
    JEL: C12 G11
    Date: 2009–06
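A bare-bones LAD fit on simulated data with heavy-tailed Student-t(2) errors — a toy stand-in for the financial-returns setting, not the paper's asymptotic framework:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
e = rng.standard_t(df=2, size=n)    # heavy tails: the variance barely exists
y = 0.5 + 1.5 * x + e

# LAD: minimize the sum of absolute residuals, starting from the OLS fit
X = np.column_stack([np.ones(n), x])
lad = minimize(lambda b: np.abs(y - X @ b).sum(),
               x0=np.linalg.lstsq(X, y, rcond=None)[0],
               method="Nelder-Mead").x
print(lad)  # near the true (0.5, 1.5)
```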
  5. By: Zhongfang He (University of Toronto (Canada)); John M. Maheu (University of Toronto (Canada); Rimini Centre for Economic Analysis, Rimini, Italy);
    Abstract: A sequential Monte Carlo method for estimating GARCH models subject to an unknown number of structural breaks is proposed. Particle filtering techniques allow for fast and efficient updates of posterior quantities and forecasts in real time. The method conveniently deals with the path-dependence problem that arises in this type of model. The performance of the method is shown to work well using simulated data. Applied to daily NASDAQ returns, the evidence favors a partial structural break specification in which only the intercept of the conditional variance equation has breaks, compared to the full structural break specification in which all parameters are subject to change. The empirical application underscores the importance of model assumptions when investigating breaks. A model with normal return innovations results in strong evidence of breaks, while more flexible return distributions such as t-innovations or a GARCH-jump mixture model still favor breaks but indicate much more uncertainty about their timing and impact.
    Keywords: particle filter, GARCH model, change point, sequential Monte Carlo
    Date: 2009–01
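A quick simulation of the "partial structural break" design the abstract describes — a GARCH(1,1) whose variance intercept jumps at mid-sample while alpha and beta stay fixed (all parameter values are arbitrary choices for illustration):

```python
import numpy as np

def simulate_garch_break(n, omega1, omega2, alpha, beta, seed=0):
    """GARCH(1,1) returns with a mid-sample break in the variance intercept."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    h = omega1 / (1.0 - alpha - beta)       # start at the pre-break stationary level
    for t in range(n):
        if t > 0:
            omega = omega1 if t < n // 2 else omega2
            h = omega + alpha * r[t - 1] ** 2 + beta * h
        r[t] = np.sqrt(h) * rng.standard_normal()
    return r

r = simulate_garch_break(2000, omega1=0.05, omega2=0.20, alpha=0.05, beta=0.90)
print(r[:1000].var(), r[1000:].var())  # unconditional variance roughly quadruples
```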
  6. By: Sowell, Fallaw
    Abstract: The empirical saddlepoint likelihood (ESPL) estimator is introduced. The ESPL provides improvement over one-step GMM estimators by including additional terms to automatically reduce higher order bias. The first order sampling properties are shown to be equivalent to efficient two-step GMM. New tests are introduced for hypothesis on the model's parameters. The higher order bias is calculated and situations of practical interest are noted where this bias will be smaller than for currently available estimators. As an application, the ESPL is used to investigate an overidentified moment model. It is shown how the model's parameters can be estimated with both the ESPL and a conditional ESPL (CESPL), conditional on the overidentifying restrictions being satisfied. This application leads to several new tests for overidentifying restrictions. Simulations demonstrate that ESPL and CESPL have smaller bias than currently available one-step GMM estimators. The simulations also show new tests for overidentifying restrictions that have performance comparable to or better than currently available tests. The computations needed to calculate the ESPL estimator are comparable to those needed for a one-step GMM estimator.
    Keywords: Generalized method of moments estimator; test of overidentifying restrictions; sampling distribution; empirical saddlepoint approximation; asymptotic distribution; higher order bias
    JEL: C5 C1
    Date: 2009–02
  7. By: Chirok Han (Dept. of Economics, Korea University); Jin Seo Cho (Dept. of Economics, Korea University); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: Statistics are developed to test for the presence of an asymptotic discontinuity (or infinite density or peakedness) in a probability density at the median. The approach makes use of work by Knight (1998) on L_1 estimation asymptotics in conjunction with non-parametric kernel density estimation methods. The size and power of the tests are assessed, and conditions under which the tests have good performance are explored in simulations. The new methods are applied to stock returns of leading companies across major U.S. industry groups. The results confirm the presence of infinite density at the median as significant new empirical evidence on the shape of stock return distributions.
    Keywords: Asymptotic leptokurtosis, Infinite density at the median, Least absolute deviations, Kernel density estimation, Stock returns, Stylized facts
    JEL: C12 G11
    Date: 2009–06
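The phenomenon is easy to visualize with a kernel density estimate at the median: for a distribution whose density diverges at zero, the estimate keeps growing as the bandwidth shrinks, while for a normal it settles near φ(0) ≈ 0.399. The simulated design below (x = ±U², whose density behaves like 1/(4√|x|) near zero) is our illustration, not the paper's test statistic:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000
x = rng.choice([-1.0, 1.0], n) * rng.uniform(size=n) ** 2   # density blows up at 0
z = rng.standard_normal(n)                                   # finite density at 0

def kde_at(point, data, h):
    """Gaussian kernel density estimate at a single point."""
    return np.exp(-0.5 * ((data - point) / h) ** 2).mean() / (h * np.sqrt(2 * np.pi))

d_coarse, d_fine = kde_at(0.0, x, 0.01), kde_at(0.0, x, 0.001)
d_norm = kde_at(0.0, z, 0.01)
print(d_coarse, d_fine, d_norm)  # d_fine >> d_coarse; d_norm stays near 0.399
```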
  8. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Liangjun Su (School of Economics, Singapore Management University)
    Abstract: This paper explores a paradox discovered in recent work by Phillips and Su (2009). That paper gave an example in which nonparametric regression is consistent whereas parametric regression is inconsistent even when the true regression functional form is known and used in regression. This appears to be a paradox, as knowing the true functional form should not in general be detrimental in regression. In the present case, local regression methods turn out to have a distinct advantage because of endogeneity in the regressor. The paradox arises because additional correct information is not necessarily advantageous when information is incomplete. In the present case, endogeneity in the regressor introduces bias when the true functional form is known, but interestingly does not do so in local nonparametric regression. We examine this example in detail and propose two new consistent estimators for the parametric regression, which address the endogeneity in the regressor by means of spatial bounding and bias correction using nonparametric estimation. Some simulations are reported illustrating the paradox and the new procedures.
    Keywords: Bias-correction, Endogeneity, Kernel regression, L_{2} regression, Location shift, Nonparametric IV, Nonstationarity, Paradox, Spatial regression, Structural Estimation
    JEL: C13 C14
    Date: 2009–06
  9. By: Roberto Patuelli (University of Lugano, Switzerland The Rimini Centre for Economic Analysis, Rimini, Italy); Daniel A. Griffith (University of Texas at Dallas, USA); Michael Tiefelsdorf (University of Texas at Dallas, USA); Peter Nijkamp (VU University Amsterdam, The Netherlands)
    Abstract: Regions, independent of their geographic level of aggregation, are known to be interrelated partly due to their relative locations. Similar economic performance among regions can be attributed to proximity. Consequently, a proper understanding, and accounting, of spatial linkages is needed in order to effectively forecast regional economic variables. Several spatial econometric techniques are available in the literature, which deal with the spatial autocorrelation in geographically-referenced data. The experiments carried out in this paper are concerned with the analysis of the spatial autocorrelation observed for unemployment rates in 439 NUTS-3 German districts. We employ a semi-parametric approach – spatial filtering – in order to uncover spatial patterns that are consistently significant over time. We first provide a brief overview of the spatial filtering method and illustrate the data set. Subsequently, we describe the empirical application carried out: that is, the spatial filtering analysis of regional unemployment rates in Germany. Furthermore, we exploit the resulting spatial filter as an explanatory variable in a panel modelling framework. Additional explanatory variables, such as average daily wages, are used in concurrence with the spatial filter. Our experiments show that the computed spatial filters account for most of the residual spatial autocorrelation in the data.
    Keywords: spatial filtering, eigenvectors, Germany, unemployment
    JEL: C33 E24 R12
    Date: 2009–01
  10. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Yangru Wu (Rutgers Business School - Newarks and New Brunswick, Rutgers University); Jun Yu (School of Economics, Singapore Management University)
    Abstract: A recursive test procedure is suggested that provides a mechanism for testing explosive behavior, date-stamping the origination and collapse of economic exuberance, and providing valid confidence intervals for explosive growth rates. The method involves the recursive implementation of a right-side unit root test and a sup test, both of which are easy to use in practical applications, and some new limit theory for mildly explosive processes. The test procedure is shown to have discriminatory power in detecting periodically collapsing bubbles, thereby overcoming a weakness in earlier applications of unit root tests for economic bubbles. An empirical application to the Nasdaq stock price index in the 1990s provides confirmation of explosiveness and date-stamps the origination of financial exuberance to mid-1995, prior to the famous remark in December 1996 by Alan Greenspan about irrational exuberance in financial markets, thereby giving the remark empirical content.
    Keywords: Explosive root, Irrational exuberance, Mildly explosive process, Nasdaq bubble, Periodically collapsing bubble, Sup test, Unit root test
    JEL: G10 C22
    Date: 2009–06
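The recursive right-tailed unit-root idea can be mimicked in a few lines: compute Dickey-Fuller t-statistics over expanding samples and take their supremum, which stays small for a pure random walk but spikes once a mildly explosive regime begins. This toy version omits the paper's lag augmentation, date-stamping step, and formal critical values:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in the regression  dy_t = a + rho * y_{t-1} + e_t."""
    dy, lag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(lag.size), lag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ beta
    s2 = e @ e / (dy.size - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sup_df(y, min_window=40):
    """Supremum of forward-recursive DF statistics over expanding windows."""
    return max(df_tstat(y[:k]) for k in range(min_window, len(y) + 1))

rng = np.random.default_rng(4)
rw = np.cumsum(rng.standard_normal(200))      # unit root throughout
expl = np.zeros(200)
for t in range(1, 200):                       # mildly explosive after t = 100
    rho = 1.0 if t < 100 else 1.03
    expl[t] = rho * expl[t - 1] + rng.standard_normal()

s_rw, s_expl = sup_df(rw), sup_df(expl)
print(s_rw, s_expl)  # the explosive series produces a much larger sup statistic
```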
  11. By: Carlos Llano (Universidad Autonoma de Madrid, Spain The Rimini Centre for Economic Analysis, Rimini, Italy); Wolfgang Polasek (Institute for Advanced Studies, Vienna, Austria and The Rimini Centre for Economic Analysis, Italy); Richard Sellner (Institute for Advanced Studies, Vienna, Austria)
    Abstract: Completing data sets that are collected in heterogeneous units is a quite frequent problem. Chow and Lin (1971) were the rst to develop a unied framework for the three problems (interpolation, extrapolation and distribution) of predicting times series by related series (the `indicators'). This paper develops a spatial Chow-Lin procedure for cross-sectional and panel data and compares the classical and Bayesian estimation methods. We outline the error covariance structure in a spatial context and derive the BLUE for the ML and Bayesian MCMC estimation. Finally, we apply the procedure to Spanish regional GDP data between 2000-2004. We assume that only NUTS-2 GDP is known and predict GDP at NUTS-3 level by using socio-economic and spatial information available at NUTS-3. The spatial neighborhood is dened by either km distance, travel time, contiguity and trade relationships. After running some sensitivity analysis, we present the forecast accuracy criteria comparing the predicted values with the observed ones.
    Keywords: Interpolation, Spatial panel econometrics, MCMC, Spatial
    Date: 2009–01
  12. By: Don Harding; Adrian Pagan
    Abstract: Macroeconometric and financial researchers often use secondary or constructed binary random variables that differ in terms of their statistical properties from the primary random variables used in microeconometric studies. One important difference between primary and secondary binary variables is that, while the former are, in many instances, independently distributed (i.d.), the latter are rarely i.d. We show how popular rules for constructing the binary states interact with the stochastic processes of the variables they are constructed from, so that the binary states need to be treated as Markov processes. Consequently, one needs to recognize this when performing analyses with the binary variables, and it is not valid to adopt a model like static Probit which fails to recognize such dependence. Moreover, these binary variables are often censored, in that they are constructed in such a way as to result in sequences possessing the same sign. Such censoring imposes restrictions upon the DGP of the binary states and creates difficulties if one tries to utilize a dynamic Probit model with them. Given this, we describe methods for modeling these variables that explicitly deal with any censoring constraints. An application is provided that investigates the relation between the business cycle and the yield spread.
    JEL: C22 C53 E32 E37
    Date: 2009–01
  13. By: Esmeralda A. Ramalho (Universidade de Evora, Departamento de Economia, CEFAGE-UE); Joaquim J. S. Ramalho (Universidade de Evora, Departamento de Economia, CEFAGE-UE)
    Abstract: In this paper we examine, theoretically and by simulation, whether or not unobserved heterogeneity independent of the included regressors is really an issue in logit, probit and loglog models with both binary and fractional data. We find that unobserved heterogeneity: (i) produces an attenuation bias in the estimation of regression coefficients; (ii) is innocuous for logit estimation of average sample partial effects, while in the probit and loglog cases there may be important biases in the estimation of those quantities; (iii) has much more destructive effects on the estimation of population partial effects; (iv) does not substantially affect the prediction of outcomes only in the logit case; and (v) is innocuous for the size and consistency of Wald tests for the significance of observed regressors but, in small samples, reduces their power substantially.
    Keywords: Binary models; fractional models; neglected heterogeneity; partial effects; prediction; Wald tests.
    JEL: C12 C13 C15 C25
    Date: 2009
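Finding (i) — attenuation of coefficients under independent neglected heterogeneity — is easy to reproduce. Below, a logit with true slope 1 plus an omitted normal component is fit by a plain Newton-Raphson logit that ignores the heterogeneity; the slope estimate settles well below 1 (design and numbers are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10000
x = rng.normal(size=n)
c = rng.normal(size=n)                          # neglected heterogeneity, independent of x
p = 1.0 / (1.0 + np.exp(-(1.0 * x + c)))        # true latent index has slope 1 on x
y = (rng.uniform(size=n) < p).astype(float)

X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):                             # Newton-Raphson for the misspecified logit
    q = 1.0 / (1.0 + np.exp(-(X @ b)))
    b += np.linalg.solve(X.T @ ((q * (1 - q))[:, None] * X), X.T @ (y - q))
print(b[1])  # attenuated towards zero, clearly below the true slope of 1
```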
  14. By: Wolfgang Härdle; Ostap Okhrin
    Abstract: Normal distribution of the residuals is the traditional assumption in classical multivariate time series models. Nevertheless, it is often inconsistent with real data. Copulae allow for an extension of the classical time series models to non-elliptically distributed residuals. In this paper we apply different copulae to the calculation of the static and dynamic Value-at-Risk of portfolio returns and the Profit-and-Loss function. In our findings, copula-based multivariate models provide better results than those based on the normal distribution.
    Keywords: copula, multivariate distribution, value-at-risk, multivariate dependence
    JEL: C13 C14 C50
    Date: 2009–05
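A minimal static example of the copula-based VaR calculation the abstract refers to: simulate a two-asset portfolio from a Gaussian copula with Student-t margins and read the 99% Value-at-Risk off the simulated loss distribution (the correlation, degrees of freedom, and return scale are arbitrary choices, not the paper's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, rho, nu = 100000, 0.6, 4

# Gaussian copula sample on [0,1]^2 ...
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
zc = rng.standard_normal((n, 2)) @ L.T
u = stats.norm.cdf(zc)

# ... pushed through heavy-tailed t(4) margins, scaled to daily-return size
r = stats.t.ppf(u, df=nu) * 0.01

port = r.mean(axis=1)                 # equally weighted portfolio return
var_99 = -np.quantile(port, 0.01)     # 99% Value-at-Risk
print(var_99)
```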
  15. By: Hui Feng (Department of Economics, Business & Mathematics, King's University College at University of Western Ontario); David E. Giles (Department of Economics, University of Victoria)
    Abstract: In this study we suggest a Bayesian approach to fuzzy clustering analysis – the Bayesian fuzzy regression. Bayesian Posterior Odds analysis is employed to select the correct number of clusters for the fuzzy regression analysis. In this study, we use a natural conjugate prior for the parameters, and we find that the Bayesian Posterior Odds provide a very powerful tool for choosing the number of clusters. The results from a Monte Carlo experiment and three illustrative applications with economic data are very encouraging.
    Keywords: Bayesian posterior odds, model selection, fuzzy regression, fuzzy clustering
    JEL: C1 C6 C8 C90
    Date: 2009–06–11
  16. By: Chen, Pian; Velamuri, Malathi
    Abstract: We propose a nonparametric approach for estimating single-index, binary-choice models when parametric models such as Probit and Logit are potentially misspecified. The new approach involves two steps: first, we estimate index coefficients using sliced inverse regression without specifying a parametric probability function a priori; second, we estimate the unknown probability function using kernel regression of the binary choice variable on the single index estimated in the first step. The estimated probability functions for different demographic groups indicate that the conventional dummy variable approach cannot fully capture heterogeneous effects across groups. Using both simulated and labor market data, we demonstrate the merits of this new approach in solving model misspecification and heterogeneity problems.
    Keywords: Probit; Logit; Sliced Inverse Regression; categorical variables; treatment heterogeneity
    JEL: C14 C52 C21
    Date: 2009–05
  17. By: J. M. C. Santos Silva; Silvana Tenreyro
    Abstract: We note that the existence of the maximum likelihood estimates for Poisson regression depends on the data configuration. Because standard software does not check for this problem, the practitioner may be surprised to find that in some applications estimation of the Poisson regression is unusually difficult or even impossible. More seriously, the estimation algorithm may lead to spurious maximum likelihood estimates. We identify the signs of the non-existence of the maximum likelihood estimates and propose a simple empirical strategy to single out the regressors causing this type of identification failure.
    Keywords: Poisson estimation, gravity equation
    JEL: C13 C50 F10
    Date: 2009–05
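The failure mode is easy to provoke: if the outcome is always zero whenever some dummy regressor equals one, the likelihood increases without bound as that coefficient goes to minus infinity, and a naive Newton-Raphson simply drifts. A small simulated demonstration (our own construction, with an artificially zeroed-out group):

```python
import numpy as np

def poisson_newton(X, y, steps):
    """Fixed number of Newton-Raphson steps on the Poisson log-likelihood."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        mu = np.exp(X @ b)
        b += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return b

rng = np.random.default_rng(6)
n = 200
d = (np.arange(n) < 50).astype(float)       # dummy regressor
y = rng.poisson(1.0, n).astype(float)
y[d == 1] = 0.0                             # y is always 0 when d = 1: no ML estimate exists

X = np.column_stack([np.ones(n), d])
paths = {steps: poisson_newton(X, y, steps)[1] for steps in (5, 10, 20)}
print(paths)  # the dummy's coefficient keeps drifting towards -infinity
```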
  18. By: Raymond Kan; Cesare Robotti; Jay Shanken
    Abstract: Since Black, Jensen, and Scholes (1972) and Fama and MacBeth (1973), the two-pass cross-sectional regression (CSR) methodology has become the most popular approach for estimating and testing asset pricing models. Statistical inference with this method is typically conducted under the assumption that the models are correctly specified, i.e., expected returns are exactly linear in asset betas. This can be a problem in practice since all models are, at best, approximations of reality and are likely to be subject to a certain degree of misspecification. We propose a general methodology for computing misspecification-robust asymptotic standard errors of the risk premia estimates. We also derive the asymptotic distribution of the sample CSR R2 and develop a test of whether two competing beta pricing models have the same population R2. This provides a formal alternative to the common heuristic of simply comparing the R2 estimates in evaluating relative model performance. Finally, we provide an empirical application which demonstrates the importance of our new results when applied to a variety of asset pricing models.
    JEL: C1 C12 C13 C4 C52 G12
    Date: 2009–06
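The two-pass procedure itself fits in a dozen lines: time-series regressions deliver betas, and a cross-sectional regression of mean returns on those betas delivers the risk premium. The single-factor simulation below (all numbers invented) shows only the mechanics, not the paper's contribution of misspecification-robust standard errors and R2 comparison tests:

```python
import numpy as np

rng = np.random.default_rng(10)
T, N, lam, mu_f = 1000, 25, 0.5, 0.3
beta = np.linspace(0.5, 1.5, N)                        # true betas
f = mu_f + rng.standard_normal(T)                      # single factor
r = lam * beta + np.outer(f - mu_f, beta) + 2.0 * rng.standard_normal((T, N))

# pass 1: time-series betas, asset by asset
F = np.column_stack([np.ones(T), f])
beta_hat = np.linalg.lstsq(F, r, rcond=None)[0][1]

# pass 2: cross-section of average returns on the estimated betas
B = np.column_stack([np.ones(N), beta_hat])
gamma = np.linalg.lstsq(B, r.mean(axis=0), rcond=None)[0]
print(gamma)  # gamma[1] recovers the risk premium lam = 0.5
```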
  19. By: Silvia SALINI; Ron S. KENETT
    Abstract: A Bayesian Network is a probabilistic graphical model that represents a set of variables and their probabilistic dependencies. Formally, Bayesian Networks are directed acyclic graphs whose nodes represent variables, and whose arcs encode the conditional dependencies between the variables. Nodes can represent any kind of variable, be it a measured parameter, a latent variable or a hypothesis. They are not restricted to representing random variables, which forms the "Bayesian" aspect of a Bayesian network. Efficient algorithms exist that perform inference and learning in Bayesian Networks. Bayesian Networks that model sequences of variables are called Dynamic Bayesian Networks. Harel et al. (2007) provide a comparison between Markov Chains and Bayesian Networks in the analysis of web usability from e-commerce data. A comparison of regression models, SEMs, and Bayesian networks is presented in Anderson et al. (2004). In this paper we apply Bayesian Networks to the analysis of Customer Satisfaction Surveys and demonstrate the potential of the approach. Bayesian Networks offer advantages in implementing models of cause and effect over other statistical techniques designed primarily for testing hypotheses. Other advantages include the ability to conduct probabilistic inference for prediction and diagnostic purposes with an output that can be intuitively understood by managers.
    Keywords: Bayesian Networks, Customer Satisfaction, Eurobarometer, Service Quality
    JEL: C11 C42
    Date: 2007–10–08
  20. By: Caiado, Jorge; Crato, Nuno
    Abstract: This paper proposes spectral and asymmetric-volatility based methods for cluster analysis of stock returns. Using the information about both the periodogram of the squared returns and the estimated parameters in the TARCH equation, we compute a distance matrix for the stock returns. Clusters are formed by inspecting the hierarchical structure tree (or dendrogram) and the computed principal coordinates. We employ these techniques to investigate the similarities and dissimilarities between the "blue-chip" stocks used to compute the Dow Jones Industrial Average (DJIA) index. For reference, we also investigate the similarities among stock returns using mean and squared correlation methods.
    Keywords: Asymmetric effects; Cluster analysis; DJIA stock returns; Periodogram; Threshold ARCH model; Volatility
    JEL: C32 G1 G10
    Date: 2009–04
  21. By: J. M. C. Santos Silva; Silvana Tenreyro
    Abstract: We extend the simulation results given in Santos-Silva and Tenreyro (2006, 'The Log of Gravity', The Review of Economics and Statistics, 88, pp. 641-658) by considering data generated as a finite mixture of gamma variates. Data generated in this way can naturally have a large proportion of zeros and is fully compatible with constant elasticity models such as the gravity equation. Our results confirm that the Poisson pseudo maximum likelihood estimator is generally well behaved.
    Keywords: Gravity equation, Heteroskedasticity, Jensen's inequality
    JEL: C13 C50 F10
    Date: 2009–05
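The estimator under study is straightforward to code: Poisson pseudo-ML only requires the conditional mean to be correctly specified, so it stays consistent when the multiplicative error is a heteroskedasticity-inducing gamma variate and the data contain many zeros. A toy version of such an experiment (our own DGP, simpler than the paper's finite-mixture design):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
eta = rng.gamma(shape=0.5, scale=2.0, size=n)   # multiplicative error with E[eta] = 1
y = rng.poisson(np.exp(0.5 + 1.0 * x) * eta)    # constant-elasticity mean, many zeros

X = np.column_stack([np.ones(n), x])
b = np.array([np.log(y.mean()), 0.0])           # warm start for Newton-Raphson
for _ in range(25):                             # Poisson pseudo-ML iterations
    m = np.exp(X @ b)
    b += np.linalg.solve(X.T @ (m[:, None] * X), X.T @ (y - m))
print((y == 0).mean(), b)  # sizable share of zeros; b close to (0.5, 1.0)
```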
  22. By: Cinzia COLAPINTO; Matteo FINI; Herb E. KUNZE; Jelena LONCAR
    Abstract: Many problems from the area of economics and finance can be described using dynamical models. When time is the only independent variable and we work in a continuous framework, these models take the form of deterministic differential equations (DEs). We may study these models in two fundamental ways: the direct problem and the inverse problem. The direct problem is stated as follows: given all of the parameters in a system of DEs, find a solution or determine its properties either analytically or numerically. The inverse problem reads: given a system of DEs with unknown parameters and some observational data, determine the values of the parameters such that the system admits the data as an approximate solution. The inverse problem is crucial for the calibration of the model; starting from a series of data we wish to describe them using deterministic differential equations in which the parameters have to be estimated from data samples. The solutions of the inverse problems are the estimations of the unknown parameters, and we use fractal-based methods to obtain them. We then show some applications to technological change and competition models.
    Keywords: Differential equations, collage methods, inverse problems, parameter estimation, Lotka Volterra models, technological change, boat-fishery model
    Date: 2008–04–07
  23. By: Alexander Subbotin (Centre d'Economie de la Sorbonne et Higher School of Economics); Thierry Chauveau (Centre d'Economie de la Sorbonne); Kateryna Shapovalova (Centre d'Economie de la Sorbonne)
    Abstract: We overview different methods of modeling volatility of stock prices and exchange rates, focusing on their ability to reproduce the empirical properties in the corresponding time series. The properties of price fluctuations vary across the time scales of observation. The adequacy of different models for describing price dynamics at several time horizons simultaneously is the central topic of this study. We propose a detailed survey of recent volatility models, accounting for multiple horizons. These models are based on different and sometimes competing theoretical concepts. They belong either to GARCH or stochastic volatility model families and often borrow methodological tools from statistical physics. We compare their properties and comment on their practical usefulness and prospects.
    Keywords: Volatility modeling, GARCH, stochastic volatility, volatility cascade, multiple horizons in volatility.
    JEL: G10 C13
    Date: 2009–05
  24. By: Proietti, Tommaso; Luati, Alessandra
    Abstract: The paper concerns the design of nonparametric low-pass filters that have the property of reproducing a polynomial of a given degree. Two approaches are considered. The first is locally weighted polynomial regression (LWPR), which leads to linear filters depending on three parameters: the bandwidth, the order of the fitting polynomial, and the kernel. We find a remarkable linear (hyperbolic) relationship between the cutoff period (frequency) and the bandwidth, conditional on the choices of the order and the kernel, upon which we build the design of a low-pass filter. The second approach hinges on a generalization of the maximum concentration approach, leading to filters related to discrete prolate spheroidal sequences (DPSS). In particular, we propose a new class of low-pass filters that maximize the concentration over a specified frequency range, subject to polynomial reproducing constraints. The design of generalized DPSS filters depends on three parameters: the bandwidth, the polynomial order, and the concentration frequency. We discuss the properties of the corresponding filters in relation to the LWPR filters, and illustrate their use for the design of low-pass filters by investigating how the three parameters are related to the cutoff frequency.
    Keywords: Trend filters; Kernels; Concentration; Filter Design.
    JEL: E32 C14 C22
    Date: 2009–06–01
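    The LWPR construction described above can be sketched directly: a degree-p polynomial is fitted by kernel-weighted least squares over a symmetric window, and the centre row of the resulting hat matrix gives the filter weights, which reproduce polynomials of degree up to p by construction. This is a minimal illustration under assumed choices (local linear fit, an Epanechnikov-type kernel), not the authors' code; `lwpr_filter_weights` is a hypothetical helper.

    ```python
    import numpy as np

    def lwpr_filter_weights(h, p=1, kernel=lambda u: np.maximum(1.0 - u**2, 0.0)):
        """Central weights of a locally weighted polynomial regression filter.

        Fits a degree-p polynomial over the window {-h, ..., h} by weighted
        least squares and returns the row of the hat matrix that produces the
        fitted value at the window centre. By construction these weights
        reproduce any polynomial of degree <= p exactly.
        """
        x = np.arange(-h, h + 1, dtype=float)
        K = np.diag(kernel(x / (h + 1)))              # kernel weights on the window
        X = np.vander(x, p + 1, increasing=True)      # design columns 1, x, ..., x^p
        H = X @ np.linalg.solve(X.T @ K @ X, X.T @ K) # hat matrix of the WLS fit
        return H[h]                                   # centre row = filter weights

    # A local linear filter with bandwidth 6 (window of 13 observations)
    w = lwpr_filter_weights(h=6, p=1)
    ```

    For a symmetric kernel and p = 1, the weights sum to one (constants are reproduced) and have zero first moment (linear trends pass through unchanged), which is the polynomial-reproduction property the paper requires.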
  25. By: Herb E. KUNZE; Davide LA TORRE; Edward R. VRSCAY
    Abstract: Most natural phenomena are subject to small variations in the environment within which they take place; data gathered from many runs of the same experiment may well show differences that are most suitably accounted for by a model that incorporates some randomness. Differential equations with random coefficients are one such class of useful models. In this paper we consider such equations T(w,x(w))=x(w) as random fixed point equations, where T:Y x X -> X is a given operator, Y is a probability space and (X,d) is a complete metric space. We consider the following inverse problem for such equations: given a set of realizations of the fixed point of T (possibly the interpolations of different observational data sets), determine the operator T or the mean value of its random components, as appropriate. We solve the inverse problem for this class of equations by using the collage theorem.
    Keywords: Inverse problems, collage method, random differential equations
    Date: 2007–12–01
  26. By: Francesca DE BATTISTI; Giovanna NICOLINI; Silvia SALINI
    Abstract: This paper deals with the measurement of service or product quality using Customer Satisfaction Survey results. Many different methods are used to analyse customer satisfaction data. Some use statistical models which estimate the relationship between latent and manifest variables (LISREL, PLS, etc.), whilst others use dimensionality reduction methods (FA, PCA, etc.). All of these methods require a numerical quantification of the categories; consequently, the distance between the numerical labels is fixed and a linear relationship between the variables is implicitly assumed. Moreover, these methods produce a customer satisfaction measure for each subject and, for each item, an evaluation of its importance for the satisfaction level. When analyzing quality and satisfaction levels together, the Rasch model (RM) appears particularly appropriate. A Likert scale is not required and non-linear relationships are allowed. Moreover, a Rasch analysis can also act as a useful diagnostic tool for calibrating the questionnaire itself. In this paper we present three different applications of the Rasch model for measuring quality and customer satisfaction levels. For each technique we highlight its peculiarities, give an interpretation of the parameters used, analyse the model’s fit to the data and perform a critical analysis of the results.
    Keywords: Latent trait model, data reduction methods, ordinal variables
    JEL: C02 C13 M31
    Date: 2008–02–29
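    As a point of reference for the model form the abstract invokes (not the paper's estimation procedure), the dichotomous Rasch model gives the probability of a positive response as a logistic function of the difference between a person parameter and an item parameter; no Likert-style numerical scoring of the categories is involved. A minimal sketch, with `rasch_prob` as an illustrative helper and invented parameter values:

    ```python
    import numpy as np

    def rasch_prob(theta, beta):
        """Rasch model: P(positive response) for person parameters theta
        (e.g. satisfaction levels) and item parameters beta (the 'difficulty'
        of endorsing each item). Returns a persons-by-items matrix."""
        theta = np.asarray(theta, dtype=float)[:, None]
        beta = np.asarray(beta, dtype=float)[None, :]
        return 1.0 / (1.0 + np.exp(-(theta - beta)))

    theta = np.array([-1.0, 0.0, 1.0])   # three hypothetical respondents
    beta = np.array([-0.5, 0.5])         # two hypothetical items
    P = rasch_prob(theta, beta)          # endorsement probabilities
    ```

    The key structural property is monotonicity: for a fixed item, the endorsement probability increases with the person parameter, which is what lets the model separate respondent satisfaction from item difficulty without fixing distances between category labels.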
  27. By: Benjamin Hamidi (Centre d'Economie de la Sorbonne et A.A.Advisors-QCG (ABN AMRO)Variances); Emmanuel Jurczenko (ESCP-EAP); Bertrand Maillet (Centre d'Economie de la Sorbonne, A.A.Advisors-QCG (ABN AMRO)Variances et IEF)
    Abstract: In a Constant Proportion Portfolio Insurance (CPPI) framework, a constant risk exposure is defined by the multiple of the strategy. This article proposes an alternative conditional multiple estimation model, which is based on an autoregressive quantile regression dynamic approach. We estimate several specifications of the conditional multiple model on the American equity market, and we compare relative performances of cushioned portfolios using conditional and unconditional multiples.
    Keywords: Portfolio insurance, CPPI, quantile regression.
    JEL: G11 C13 C14 C22 C32
    Date: 2009–05
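    For context, the CPPI mechanics the abstract builds on can be sketched in a few lines: at each step the exposure to the risky asset equals the multiple times the cushion (portfolio value minus floor), so the floor is preserved as long as per-period losses stay above -1/multiple. This is a stylised illustration with an unconditional (constant) multiple and assumed parameter values, not the authors' conditional quantile-regression model:

    ```python
    import numpy as np

    def cppi_path(returns, multiple=4.0, floor_frac=0.9, v0=100.0, rf=0.0):
        """Simulate a simple CPPI strategy with a constant multiple.

        At each step the risky exposure is multiple * cushion, where
        cushion = portfolio value - floor; the remainder earns rf.
        """
        floor = floor_frac * v0
        v = v0
        path = [v]
        for r in returns:
            cushion = max(v - floor, 0.0)
            exposure = min(multiple * cushion, v)  # no leverage in this sketch
            v = exposure * (1.0 + r) + (v - exposure) * (1.0 + rf)
            path.append(v)
        return np.array(path)

    # Example: the floor survives a simulated bear market, because each
    # per-period loss stays above -1/multiple = -25%.
    rng = np.random.default_rng(0)
    rets = rng.normal(-0.01, 0.03, size=250)
    path = cppi_path(rets)
    ```

    The paper's contribution is precisely to replace the constant `multiple` above with a conditional estimate; the gap-risk intuition (floor breach only if a single-period loss exceeds 1/multiple) is what makes the choice of multiple consequential.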
  28. By: Francesco Audrino; Kameliya Filipova
    Abstract: We propose an empirical approach to determine the various economic sources driving the US yield curve. We allow the conditional dynamics of yields at different maturities to change in reaction to past information from several relevant predictor variables. We consider both endogenous (yield curve) factors and exogenous (macroeconomic) factors as predictors in our model, letting the data themselves choose the most important variables. We find clear and distinct economic patterns in the local dynamics and regime specification of the yields, depending on the maturity. Moreover, we present strong empirical evidence for the accuracy of the model in fitting the yield curve in-sample and predicting it out-of-sample, in comparison with several alternative approaches.
    Keywords: Yield curve modeling and forecasting; Macroeconomic variables; Tree-structured models; Threshold regimes; GARCH; Bagging
    JEL: C22 C51 C53 E43 E44
    Date: 2009–05
  29. By: Vincenzo CAPASSO; Herb E. KUNZE; Davide LA TORRE; Edward R. VRSCAY
    Abstract: A number of inverse problems may be viewed in terms of the approximation of a target element x in a complete metric space (X,d) by the fixed point x* of a contraction function T : X -> X. In practice, from a family of contraction functions T(a) one wishes to find the parameter a for which the approximation error d(x,x*(a)) is as small as possible. Thanks to a simple consequence of Banach's fixed point theorem known as the Collage Theorem, most practical methods of solving the inverse problem for fixed point equations seek to find an operator T(a) for which the so-called collage distance d(x,T(a)x) is as small as possible. We first show how to solve inverse problems for deterministic and random differential equations and then we switch to the analysis of stochastic differential equations. Here inverse problems can be solved by minimizing the collage distance in an appropriate metric space. At the end we show an application of this approach to a system of coupled stochastic differential equations which describes the interaction between particles in a physical system.
    Keywords: Inverse problems, stochastic differential equations, fixed point equations, Monge-Kantorovich distance, Wasserstein metric, Collage Theorem
    Date: 2008–04–08
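    To illustrate the collage idea in the simplest deterministic case (a sketch under strong assumptions, not the paper's stochastic setting): for the scalar ODE x'(t) = a·x(t) with Picard operator (T_a x)(t) = x0 + a·∫₀ᵗ x(s) ds, the collage distance ||x − T_a x|| in L2 is quadratic in a, so minimising it reduces to ordinary least squares. `collage_estimate` is a hypothetical helper:

    ```python
    import numpy as np

    def collage_estimate(t, x):
        """Collage-method estimate of a in x'(t) = a * x(t), x(0) = x0.

        Minimises the L2 collage distance ||x - x0 - a * I(x)|| over a,
        where I(x)(t) is the cumulative integral of the data (trapezoid rule).
        The minimiser is the least-squares solution of (x - x0) ~ a * I(x).
        """
        x0 = x[0]
        I = np.concatenate(
            [[0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * np.diff(t))]
        )
        return float(I @ (x - x0) / (I @ I))

    # Synthetic data generated with a = 0.7, x0 = 2: the collage minimiser
    # recovers the parameter without solving the ODE repeatedly.
    t = np.linspace(0.0, 1.0, 201)
    x = 2.0 * np.exp(0.7 * t)
    a_hat = collage_estimate(t, x)
    ```

    The appeal, and the reason the Collage Theorem matters, is that the collage distance is cheap to evaluate (no forward solve per candidate parameter) while Banach's inequality bounds the true approximation error d(x, x*(a)) by the collage distance times 1/(1-c), where c is the contraction factor.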
  30. By: Mishra, SK
    Abstract: The Pearsonian coefficient of correlation as a measure of association between two variates is highly prone to the deleterious effects of outlier observations in the data. Statisticians have proposed a number of formulas to obtain robust measures of correlation that are considered to be less affected by errors of observation, perturbation or the presence of outliers. Spearman’s rho, Blomqvist’s signum, Bradley’s absolute r and Shevlyakov’s median correlation are some such robust measures of correlation. However, many applications require correlation matrices that satisfy the criterion of positive semi-definiteness. Our investigation finds that while Spearman’s rho, Blomqvist’s signum and Bradley’s absolute r yield positive semi-definite correlation matrices, Shevlyakov’s median correlation very often fails to do so. The use of correlation matrices based on Shevlyakov’s formula is therefore problematic.
    Keywords: Robust correlation; outliers; Spearman’s rho; Blomqvist’s signum; Bradley’s absolute correlation; Shevlyakov’s median correlation; positive semi-definite matrix; Fortran 77; computer program
    JEL: C14 C63
    Date: 2009–06–14
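    The paper's central check, whether a robust correlation matrix is positive semi-definite, is straightforward to reproduce for Spearman's rho, which is simply the Pearson correlation of the ranks and is therefore always PSD. A minimal sketch (assumed helper names, continuous data with no ties):

    ```python
    import numpy as np

    def spearman_matrix(X):
        """Spearman rank-correlation matrix of the columns of X:
        the Pearson correlation matrix computed on the ranks."""
        ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
        return np.corrcoef(ranks, rowvar=False)

    def is_psd(M, tol=1e-10):
        """Check positive semi-definiteness via the smallest eigenvalue
        of the symmetric matrix M."""
        return bool(np.linalg.eigvalsh(M).min() >= -tol)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    X[0] += 50.0            # plant an outlier row; ranks are barely affected
    R = spearman_matrix(X)  # remains a valid (PSD) correlation matrix
    ```

    Because Spearman's rho is a genuine Pearson correlation of transformed data, positive semi-definiteness holds by construction; pairwise measures such as Shevlyakov's median correlation carry no such guarantee, which is the failure mode the paper documents.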

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.