nep-ecm New Economics Papers
on Econometrics
Issue of 2004‒12‒12
33 papers chosen by
Sune Karlsson
Stockholm School of Economics

  1. Out of Sample Tests of Model Specification By Steven Lugauer
  2. Average Difference Estimation of Nonlinear Pricing in Labor Markets By Mark Coppejans; Holger Sieg
  3. Refinement of the partial adjustment model using continuous-time econometrics By Arie ten Cate
  5. Decomposing Granger Causality over the Spectrum By Lemmens, A.; Croux, C.; Dekimpe, M.G.
  6. Forecasting with a Bayesian DSGE model: an application to the Euro area By Smets,F.; Wouters,R.
  7. An Extension of the Markov-Switching Model with Time-Varying Transition Probabilities: Bull-Bear Analysis of the Japanese Stock Market By Akifumi Isogai; Satoru Kanoh; Toshifumi Tokunaga
  8. Stochastic Trends, Demographics and Demand Systems By Clifford Attfield
  9. Bounds on Parameters in Dynamic Discrete Choice Models By Bo E. Honoré; Elie Tamer
  10. Identification and Estimation of the Economic Performance of Outmigrants using Panel Attrition By Charles Bellemare
  11. Some Least Squares Results Without Least Squares Algebra By E.H. Oksanen
  12. A Bayesian Approach for Measuring Economies of Scale with Application to Large Scale Banks By M. W. Luke Chan; Dading Li; Dean C. Mountain
  13. Exploring the Use of a Nonparametrically Generated Instrumental Variable in the Estimation of a Linear Parametric Equation By Frank T. Denton
  14. Box-Cox Stochastic Volatility Models with Heavy-Tails and Correlated Errors By Xibin Zhang; Maxwell L. King
  15. Using Hit Rate Tests to Test for Racial Bias in Law Enforcement: Vehicle Searches in Wichita By Nicola Persico; Petra Todd
  16. Simulating properties of the likelihood ratio test for a unit root in an explosive second order autoregression By Bent Nielsen; J. James Reade
  17. A Central Limit Theorem for Realised Power and Bipower Variations of Continuous Semimartingales By Ole Barndorff-Nielsen; Svend Erik Graversen; Jean Jacod; Mark Podolskij; Neil Shephard
  18. Stochastic volatility with leverage: fast likelihood inference By Yasuhiro Omori; Siddhartha Chib; Neil Shephard; Jouchi Nakajima
  19. Likelihood based inference for diffusion driven models By Siddhartha Chib; Michael K Pitt; Neil Shephard
  20. Multipower Variation and Stochastic Volatility By Ole Barndorff-Nielsen; Neil Shephard
  21. Testing for structural Change in Regression: An Empirical Likelihood Ratio Approach By Lauren Bin Dong
  22. Non-stationarities in stock returns By Catalin Starica; Clive Granger
  23. Changes of structure in financial time series and the GARCH model By Thomas Mikosch; Catalin Starica
  24. Long range dependence effects and ARCH modelling By Thomas Mikosch; Catalin Starica
  25. The Swedish Inflation Fan Charts: An Evaluation of the Riksbank’s Inflation Density Forecasts By Kevin Dowd
  26. Too Good to be True? The (In)credibility of the UK Inflation Fan Charts By Kevin Dowd
  27. FOMC Forecasts of Macroeconomic Risks By Kevin Dowd
  28. A Dynamic Factor Approach to Nonlinear Stability Analysis By Mototsugu Shintani
  30. Estimation and Model Selection of Semiparametric Copula-Based Multivariate Dynamic Models under Copula Misspecification By Xiaohong Chen; Yanqin Fan
  31. Efficient Estimation of Semiparametric Multivariate Copula Models By Xiaohong Chen; Yanqin Fan; Victor Tsyrennifov
  32. Income distribution and inequality measurement: the problem of extreme values By Frank A. Cowell; Emmanuel Flachaire
  33. Simultaneous Equation Econometrics: The Missing Example By Dennis Epple; Bennett McCallum

  1. By: Steven Lugauer
  2. By: Mark Coppejans; Holger Sieg
    Abstract: In this paper, we derive a nonparametric average difference estimator. We show that this estimator is consistent and root-$N$ asymptotically normally distributed. Furthermore, the average difference estimator converges to the well-known average derivative estimator as the increment used to compute the difference converges to zero. We apply this estimator to test for differences between average and marginal compensation of workers. We estimate different versions of the model using repeated cross-sectional data from the CPS for a number of narrowly defined occupations. The average difference estimator yields plausible estimates for the average marginal compensation in all subsamples of the CPS considered in this paper. Our results highlight the importance of choosing bandwidth parameters in nonparametric estimation. If important covariates are measured discretely, standard approaches for choosing optimal bandwidth parameters do not necessarily apply. Our main empirical findings suggest that, at least for the preferred range of bandwidth parameters, marginal compensation exceeds average compensation, implying that average compensation is at best a noisy measure of the unobserved productivity of workers.
  3. By: Arie ten Cate
    Abstract: This paper presents some suggestions for the specification of dynamic models. These suggestions are based on the supposed continuous-time nature of most economic processes. In particular, the partial adjustment model --or Koyck lag model-- is discussed. The refinement of this model is derived from the continuous-time econometric literature. We find three alternative formulas for this refinement, depending on the particular econometric literature which is used. Two of these formulas agree with an intuitive example. In passing, it is shown that the continuous-time models of Sims and Bergstrom are closely related. The inverse of Bergstrom’s approximate analog is also introduced, making use of engineering mathematics.
    Keywords: dynamics; continuous time; econometrics; Koyck; Bergstrom
    JEL: C22 C51
    Date: 2004–11
  4. By: Alberto Mora-Galan; Ana Perez; Esther Ruiz
    Abstract: It has often been empirically observed that the sample autocorrelations of absolute financial returns are larger than those of squared returns. This property, known as the Taylor effect, is analysed in this paper in the Stochastic Volatility (SV) model framework. We show that the stationary autoregressive SV model is able to generate this property for realistic parameter specifications. On the other hand, the Taylor effect is shown not to be a sampling phenomenon due to estimation biases of the sample autocorrelations. Therefore, financial models that aim to explain the behaviour of financial returns should take this property into account.
    Date: 2004–11
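As an illustration (not the paper's calculations), the Taylor effect can be checked on simulated SV data; the parameter values below are arbitrary but in the "realistic" range the abstract refers to:

```python
import math
import random

def sample_autocorr(x, lag=1):
    """Lag-`lag` sample autocorrelation of a sequence."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / var

random.seed(0)
# Stationary autoregressive SV model: h_t = phi*h_{t-1} + sigma_eta*eta_t,
# r_t = exp(h_t/2)*eps_t, with illustrative (not the paper's) parameters.
phi, sigma_eta = 0.98, 0.2
h, returns = 0.0, []
for _ in range(50000):
    h = phi * h + sigma_eta * random.gauss(0, 1)
    returns.append(math.exp(h / 2) * random.gauss(0, 1))

ac_abs = sample_autocorr([abs(r) for r in returns])
ac_sq = sample_autocorr([r * r for r in returns])
print(ac_abs, ac_sq)  # Taylor effect: the first exceeds the second
```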
  5. By: Lemmens, A.; Croux, C.; Dekimpe, M.G. (Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam)
    Abstract: We develop a bivariate spectral Granger-causality test that can be applied at each individual frequency of the spectrum. The spectral approach to Granger causality has the distinct advantage that it allows one to disentangle (potentially) different Granger-causality relationships over different time horizons. We illustrate the usefulness of the proposed approach in the context of the predictive value of European production expectation surveys.
    Keywords: Business Surveys;Granger Causality;Production Expectations;Spectral Analysis;
    Date: 2004–12–01
  6. By: Smets,F.; Wouters,R. (Nationale Bank van Belgie)
    Date: 2004
  7. By: Akifumi Isogai; Satoru Kanoh; Toshifumi Tokunaga
    Abstract: This paper extends the Markov-switching model with time-varying transition probabilities (TVTP). The transition probabilities in the conventional TVTP model are functions of exogenous variables that are time-dependent but have constant coefficients. In this paper the coefficient parameters that express the sensitivities of the exogenous variables are also allowed to vary with time. Using data on Japanese monthly stock returns, it is shown that the explanatory power of the extended model is superior to that of conventional models.
    Keywords: Gibbs sampling, Kalman filter, Marginal likelihood, Market dynamics, Time-varying sensitivity
    Date: 2004–11
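As a hypothetical sketch (not the authors' specification), the conventional TVTP mechanism can be simulated with a logistic link; the paper's extension would additionally let the coefficients below drift over time:

```python
import math
import random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
T = 500
x = [random.gauss(0, 1) for _ in range(T)]  # hypothetical exogenous covariate

# Conventional TVTP: state-persistence probabilities depend on x_t through
# constant coefficients (all values below are illustrative).
a0, a1 = 2.0, 0.5   # state 0 ("bull") persistence
b0, b1 = 1.5, -0.5  # state 1 ("bear") persistence

state, states = 0, []
for t in range(T):
    if state == 0:
        state = 0 if random.random() < logistic(a0 + a1 * x[t]) else 1
    else:
        state = 1 if random.random() < logistic(b0 + b1 * x[t]) else 0
    states.append(state)

frac_bear = sum(states) / T
print(frac_bear)  # fraction of time spent in the "bear" regime
```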
  8. By: Clifford Attfield
    Abstract: Techniques for determining the number of stochastic trends generating a set of non-stationary panel data are applied to budget shares for a number of commodity groups from the Family Expenditure Survey (FES) for the UK for the years 1973-2001. It is argued that some stochastic trends in macro data are generated by the aggregation of fixed demographic effects in the micro data. From cross section data, fixed effect coefficients are estimated which incorporate both age and income distribution effects. The estimated coefficients are combined with age proportion variables to form a set of I(1) indices for broad commodity groups which are then incorporated into a system of aggregate demand equations. The equations are estimated and tested in a non-stationary time series setting.
    Keywords: Demand Equations, Age Demographics, Stochastic Trends.
    JEL: C1 C3 D1
    Date: 2004–04
  9. By: Bo E. Honoré (Department of Economics, Princeton University); Elie Tamer (Department of Economics, Princeton University)
    Abstract: Identification of dynamic nonlinear panel data models is an important and delicate problem in econometrics. In this paper we provide insights that shed light on the identification of parameters of some commonly used models. Using this insight, we are able to show through simple calculations that point identification often fails in these models. On the other hand, these calculations also suggest that the model restricts the parameter to lie in a region that is very small in many cases, and the failure of point identification may therefore be of little practical importance in those cases. Although the emphasis is on identification, our techniques are constructive in that they can easily form the basis for consistent estimates of the identified sets.
    JEL: C23 C25
    Date: 2002–10
  10. By: Charles Bellemare
    Abstract: This paper presents conditions providing semiparametric identification of the conditional expectation of economic outcomes characterizing outmigrants using data on immigrant sample attrition. The approach does not require that individual immigrant departures be observed. Outcomes of interest are labor market earnings, labor force participation, and labor supply. We present a panel model which extracts the information on outmigrant performance from sample attrition and estimate it using German data. We find strong evidence of self-selection of outmigrants based on unobserved individual characteristics. Simulations are performed to quantify the gap in labor market earnings and labor force participation rates between immigrant stayers and outmigrants.
    Keywords: Migration movements, Semiparametric identification, immigrant performance, Panel data models
    JEL: J24 C33 J61
    Date: 2004
  11. By: E.H. Oksanen
    Abstract: Certain propositions conventionally derived via least squares algebra can be derived very simply without that algebra by treating the vector of residuals as a regressor.
    Date: 1998–07
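The flavour of the argument can be seen numerically: once the residual vector e is in hand, its orthogonality to the regressors delivers standard least squares results directly. A small illustrative simulation (all data invented):

```python
import random

random.seed(2)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0, 0.5) for xi in x]

# Closed-form OLS of y on a constant and x.
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
e = [yi - a - b * xi for xi, yi in zip(x, y)]

# Properties that follow from treating e itself as a regressor:
s_const = sum(e)                             # e orthogonal to the constant
s_x = sum(ei * xi for ei, xi in zip(e, x))   # e orthogonal to x
coef_e = sum(ei * yi for ei, yi in zip(e, y)) / sum(ei * ei for ei in e)
print(s_const, s_x, coef_e)  # ~0, ~0, and 1 exactly, since e'y = e'e
```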
  12. By: M. W. Luke Chan; Dading Li; Dean C. Mountain
    Abstract: Traditionally, the literature has not found economies of scale for the very large banks. Among the reasons are that large banks are usually not the sole focus of analysis and that the analyzed banks may be subject to a diverse set of regulatory restrictions and limitations with respect to banking services. Our paper draws upon a panel data set containing information on the relatively large Schedule I Canadian banks. Although small in number, they offer extensive interbranch banking services under one set of regulations. In light of this, we propose a Bayesian methodology for estimating returns to scale. This technique allows for random coefficients and distinct input-allocative coefficients for each bank, and it provides reliable estimates of economies of scale using a panel data set with a small number of very large banks. In conclusion, we do find significant economies of scale.
    Date: 1999–01
  13. By: Frank T. Denton
    Abstract: The use of a nonparametrically generated instrumental variable in estimating a single-equation linear parametric model is explored, using kernel and other smoothing functions. The method, termed IVOS (Instrumental Variables Obtained by Smoothing), is applied in the estimation of measurement error and endogenous regressor models. Asymptotic and small-sample properties are investigated by simulation, using artificial data sets. IVOS is easy to apply and the simulation results exhibit good statistical properties. It can be used in situations in which standard IV cannot because suitable instruments are not available.
    Keywords: single equation models; nonparametric; instrumental variables
    JEL: C13 C14 C21
    Date: 2004–12
  14. By: Xibin Zhang; Maxwell L. King
    Abstract: This paper presents a Markov chain Monte Carlo (MCMC) algorithm to estimate parameters and latent stochastic processes in the asymmetric stochastic volatility (SV) model, in which the Box-Cox transformation of the squared volatility follows an autoregressive Gaussian distribution and the marginal density of asset returns has heavy tails. To test for the significance of the Box-Cox transformation parameter, we present the likelihood ratio statistic, in which likelihood functions can be approximated using a particle filter and a Monte Carlo kernel likelihood. When applying the heavy-tailed asymmetric Box-Cox SV model and the proposed sampling algorithm to continuously compounded daily returns of the Australian stock index, we find significant empirical evidence supporting the Box-Cox transformation of the squared volatility against the alternative model involving a logarithmic transformation.
    Keywords: Leverage effect; Likelihood ratio test; Markov Chain Monte Carlo; Monte Carlo kernel likelihood; Particle filter
    JEL: C12 C15 C52
    Date: 2004–11
  15. By: Nicola Persico; Petra Todd
    Abstract: This paper considers the use of outcomes-based tests for detecting racial bias in the context of police searches of motor vehicles. It shows that the test proposed in Knowles, Persico and Todd (2001) can also be applied in a more general environment where police officers are heterogeneous in their tastes for discrimination and in their costs of search, and motorists are heterogeneous in their benefits and costs from criminal behavior. We characterize the police and motorist decision problems in a game theoretic framework and establish properties of the equilibrium. We also extend the model to the case where drivers' characteristics are mutable in the sense that drivers can adapt some of their characteristics to reduce the probability of being monitored. After developing the theory that justifies the application of outcomes-based tests, we apply the tests to data on police searches of motor vehicles gathered by the Wichita Police Department. The empirical findings are consistent with the notion that police in Wichita choose their search strategies to maximize successful searches, and not out of racial bias.
    JEL: J70 K42
    Date: 2004–12
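The core of a hit rate test is a comparison of search success rates across groups; equal hit rates are consistent with statistical discrimination rather than taste-based bias. A minimal two-proportion sketch (the counts below are invented, not the Wichita data):

```python
import math

def hit_rate_z(hits_a, searches_a, hits_b, searches_b):
    """Two-proportion z statistic for equality of search success (hit) rates."""
    p_a = hits_a / searches_a
    p_b = hits_b / searches_b
    p = (hits_a + hits_b) / (searches_a + searches_b)  # pooled hit rate
    se = math.sqrt(p * (1 - p) * (1 / searches_a + 1 / searches_b))
    return (p_a - p_b) / se

# Hypothetical counts for two groups of searched motorists:
z = hit_rate_z(hits_a=60, searches_a=200, hits_b=40, searches_b=150)
print(z)  # |z| < 1.96: cannot reject equal hit rates at the 5% level
```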
  16. By: Bent Nielsen (Nuffield College, Oxford University, UK); J. James Reade (St. Cross College, University of Oxford)
    Abstract: This paper provides a means of accurately simulating explosive autoregressive processes, and uses this method to analyse the distribution of the likelihood ratio test statistic for an explosive second order autoregressive process. Nielsen (2001) has shown that for the asymptotic distribution of the likelihood ratio unit root test statistic in a higher order autoregressive model, the assumption that the remaining roots are stationary is unnecessary, and as such the approximating asymptotic distribution for the test in the difference stationary region is valid in the explosive region also. However, simulations of statistics in the explosive region are beset by the magnitude of the numbers involved, which causes numerical inaccuracies; this has previously been a bar to supporting asymptotic results by means of simulation and to analysing the finite sample properties of tests in the explosive region.
    Date: 2004–10–19
  17. By: Ole Barndorff-Nielsen (University of Aarhus); Svend Erik Graversen (University of Aarhus); Jean Jacod (Universtie P. et M. Curie); Mark Podolskij (Ruhr University of Bochum); Neil Shephard (Nuffield College, University of Oxford, UK)
    Abstract: Consider a semimartingale of the form Y_t = Y_0 + \int_0^t a_s ds + \int_0^t \sigma_{s-} dW_s, where a is a locally bounded predictable process, \sigma (the "volatility") is an adapted right-continuous process with left limits, and W is a Brownian motion. We define the realised bipower variation process V(Y;r,s)_t^n = n^{(r+s)/2 - 1} \sum_{i=1}^{[nt]} |Y_{i/n} - Y_{(i-1)/n}|^r |Y_{(i+1)/n} - Y_{i/n}|^s, where r and s are nonnegative reals with r+s > 0. We prove that V(Y;r,s)_t^n converges locally uniformly in time, in probability, to a limiting process V(Y;r,s)_t (the "bipower variation process"). If further \sigma is a possibly discontinuous semimartingale driven by a Brownian motion which may be correlated with W and by a Poisson random measure, we prove a central limit theorem, in the sense that \sqrt{n} (V(Y;r,s)^n - V(Y;r,s)) converges in law to a process which is the stochastic integral with respect to some other Brownian motion W', which is independent of the driving terms of Y and \sigma. We also provide a multivariate version of these results.
    Date: 2004–11–01
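A sketch of the realised bipower variation statistic on simulated Brownian data with constant volatility (illustrative parameters; for r = s = 1 the probability limit is mu_1^2 times integrated variance, with mu_1 = E|N(0,1)| = sqrt(2/pi)):

```python
import math
import random

random.seed(3)
n, sigma = 10000, 1.0
# Increments of Y_t = sigma * W_t observed on a grid of mesh 1/n over [0, 1].
dY = [sigma * math.sqrt(1 / n) * random.gauss(0, 1) for _ in range(n)]

def bipower(dY, r, s):
    """Realised bipower variation V(Y; r, s)_1^n on [0, 1]."""
    n = len(dY)
    return n ** ((r + s) / 2 - 1) * sum(
        abs(dY[i]) ** r * abs(dY[i + 1]) ** s for i in range(n - 1)
    )

v11 = bipower(dY, 1, 1)  # estimates mu_1^2 * integrated variance
print(v11, (2 / math.pi) * sigma ** 2)  # both close to 2/pi
```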
  18. By: Yasuhiro Omori (University of Tokyo); Siddhartha Chib (Washington University); Neil Shephard (Nuffield College, University of Oxford, UK); Jouchi Nakajima (University of Tokyo)
    Abstract: Kim, Shephard and Chib (1998) provided a Bayesian analysis of stochastic volatility models based on a very fast and reliable Markov chain Monte Carlo (MCMC) algorithm. Their method ruled out the leverage effect, which limited its scope for applications. Despite this, their basic method has been extensively used in the financial economics literature and more recently in macroeconometrics. In this paper we show how to overcome the limitation of this analysis so that the essence of the Kim, Shephard and Chib (1998) approach can be used to deal with the leverage effect, greatly extending the applicability of this method. Several illustrative examples are provided.
    Keywords: Leverage effect, Markov chain Monte Carlo, Mixture sampler, Stochastic volatility, Stock returns.
    Date: 2004–08–22
  19. By: Siddhartha Chib (Olin School of Business, Washington University); Michael K Pitt (University of Warwick); Neil Shephard (Nuffield College, University of Oxford, UK)
    Abstract: This paper provides methods for carrying out likelihood based inference for diffusion driven models, for example discretely observed multivariate diffusions, continuous time stochastic volatility models and counting process models. The diffusions can potentially be non-stationary. Although our methods are sampling based, making use of Markov chain Monte Carlo methods to sample the posterior distribution of the relevant unknowns, our general strategies and details are different from previous work along these lines. The methods we develop are simple to implement and simulation efficient. Importantly, unlike previous methods, the performance of our technique is not worsened, in fact it improves, as the degree of latent augmentation is increased to reduce the bias of the Euler approximation. In addition, our method is not subject to a degeneracy that afflicts previous techniques when the degree of latent augmentation is increased. We also discuss issues of model choice, model checking and filtering. The techniques and ideas are applied to both simulated and real data.
    Keywords: Bayes estimation, Brownian bridge, Non-linear diffusion, Euler approximation, Markov chain Monte Carlo, Metropolis-Hastings algorithm, Missing data, Simulation, Stochastic differential equation.
    Date: 2004–08–22
  20. By: Ole Barndorff-Nielsen (University of Aarhus); Neil Shephard (Nuffield College, University of Oxford, UK)
    Abstract: In this brief note we review some of our recent results on the use of high frequency financial data to estimate objects like integrated variance in stochastic volatility models. Interesting issues include multipower variation, jumps and market microstructure effects.
    Date: 2004–11–18
  21. By: Lauren Bin Dong (Statistics Canada)
    Abstract: In this paper we derive an empirical likelihood type Wald (ELW) test for the problem of testing for structural change in a linear regression model when the variance of the error term is not known to be equal across regimes. The sampling properties of the ELW test are analyzed using Monte Carlo simulation, and compared with those of three other commonly used tests (Jayatissa, Weerahandi, and Wald). The finding is that the ELW test has very good power properties.
    Keywords: Empirical Likelihood, Wald test, Monte Carlo Simulation, Power and size, structural change
    JEL: C12 C15 C16
    Date: 2004–12–06
  22. By: Catalin Starica (Dept. Mathematical Statistics, Chalmers University of Technology); Clive Granger (Dept. Economics, UCSD)
    Abstract: The paper outlines a methodology for analyzing daily stock returns that relinquishes the assumption of global stationarity. Giving up this common working hypothesis reflects our belief that fundamental features of the financial markets are continuously and significantly changing. Our approach approximates locally the non-stationary data by stationary models. The methodology is applied to the S&P 500 series of returns covering a period of over seventy years of market activity. We find most of the dynamics of this time series to be concentrated in shifts of the unconditional variance. The forecasts based on our non-stationary unconditional modeling were found to be superior to those obtained in a stationary long memory framework or to those based on a stationary Garch(1,1) data generating process.
    Keywords: stock returns, non-stationarities, locally stationary processes, volatility, sample autocorrelation, long range dependence, Garch(1,1) data generating process.
    JEL: C14 C22 C52 C53
    Date: 2004–11–22
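A crude version of the locally stationary idea can be sketched on simulated data with a single shift in unconditional variance (the paper's methodology is considerably more refined than this rolling-window estimate):

```python
import math
import random

random.seed(4)
# Returns whose unconditional variance shifts halfway through the sample.
returns = [random.gauss(0, 1.0) for _ in range(1000)] + \
          [random.gauss(0, 2.0) for _ in range(1000)]

def rolling_std(x, window):
    """Local (rolling-window) standard deviation: a simple locally
    stationary approximation to a non-stationary series."""
    out = []
    for i in range(window, len(x) + 1):
        chunk = x[i - window:i]
        m = sum(chunk) / window
        out.append(math.sqrt(sum((v - m) ** 2 for v in chunk) / window))
    return out

vol = rolling_std(returns, window=250)
print(vol[0], vol[-1])  # the local estimate roughly doubles after the shift
```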
  23. By: Thomas Mikosch (Dept. Actuarial Mathematics, University of Copenhagen); Catalin Starica (Dept. Mathematical Statistics & Economics, Gothenburg University & CTH)
    Abstract: In this paper we propose a goodness of fit test that checks the resemblance of the spectral density of a GARCH process to that of the log-returns. The asymptotic behavior of the test statistics are given by a functional central limit theorem for the integrated periodogram of the data. A simulation study investigates the small sample behavior, the size and the power of our test. We apply our results to the S&P500 returns and detect changes in the structure of the data related to shifts of the unconditional variance. We show how a long range dependence type behavior in the sample ACF of absolute returns might be induced by these shifts.
    Keywords: integrated periodogram, spectral distribution, functional central limit theorem, Kiefer-Müller process, Brownian bridge, sample autocorrelation, change point, GARCH process, long range dependence, IGARCH, non-stationarity
    JEL: C22 C52
    Date: 2004–12–06
  24. By: Thomas Mikosch (Dept. Actuarial Mathematics, University of Copenhagen); Catalin Starica (Dept. Mathematical Statistics & Economics, Gothenburg University & CTH)
    Abstract: Our study supports the hypothesis of global non-stationarity of the return time series. We bring forth both theoretical and empirical evidence that the long range dependence (LRD) type behavior of the sample ACF and the periodogram of absolute return series and the IGARCH effect documented in the econometrics literature could be due to the impact of non-stationarity on statistical instruments and estimation procedures. In particular, contrary to the commonly held belief that the LRD characteristic and the IGARCH phenomena carry meaningful information about the price generating process, these so-called stylized facts could be just artifacts due to structural changes in the data. The effect that the switch to a different regime has on the sample ACF and the periodogram is theoretically explained and empirically documented using time series that were the object of LRD modeling efforts (S&P500, DEM/USD FX) in various publications.
    Keywords: sample autocorrelation, change point, GARCH process, long range dependence.
    JEL: C22 C52
    Date: 2004–12–06
  25. By: Kevin Dowd (Nottingham University Business School)
    Abstract: This paper evaluates the inflation density forecasts published by the Swedish central bank, the Sveriges Riksbank. Realized inflation outcomes are mapped to their forecasted percentiles, which are then transformed to be standard normal under the null that the forecasting model is good. Results suggest that the Riksbank’s inflation density forecasts have a skewness problem, and their longer term forecasts have a kurtosis problem as well.
    Keywords: Inflation density forecasting, Sveriges Riksbank, forecast evaluation
    JEL: C53 E31 E52
    Date: 2004–01–12
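The evaluation idea can be sketched as follows, assuming (purely for illustration) Gaussian density forecasts; the outcomes and forecast parameters below are invented:

```python
from statistics import NormalDist

def pit_to_normal(realized, forecast_mean, forecast_std):
    """Map a realized outcome to its forecast percentile (PIT), then to a
    standard-normal value; under a good forecasting model these values
    are i.i.d. N(0, 1)."""
    u = NormalDist(forecast_mean, forecast_std).cdf(realized)
    return NormalDist().inv_cdf(u)

# Hypothetical inflation outcomes with Gaussian density forecasts
# (realized, forecast mean, forecast std):
data = [(2.1, 2.0, 0.5), (1.4, 2.0, 0.5), (2.9, 2.5, 0.6)]
z = [pit_to_normal(x, m, s) for x, m, s in data]
print(z)  # sample skewness/kurtosis of these values diagnose forecast problems
```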
  26. By: Kevin Dowd (Nottingham University Business School)
    Abstract: This paper presents some simple methods to estimate the probability that realized inflation will breach a given inflation target range over a specified period, based on the Bank of England’s RPIX inflation forecasting model and the Monetary Policy Committee’s forecasts of the parameters on which this model is built. Illustrative results for plausible target ranges over the period up to 04Q1 indicate that these probabilities are low, if not very low, and strongly suggest that the Bank’s model over-estimates inflation risk.
    Keywords: Inflation, inflation risk, fan charts
    JEL: C53 E47 E52
    Date: 2004–01–12
  27. By: Kevin Dowd (Nottingham University Business School)
    Abstract: This paper presents a new approach to the evaluation of FOMC macroeconomic forecasts. Its distinctive feature is the interpretation, under reasonable conditions, of the minimum and maximum forecasts reported in FOMC meetings as indicative of probability density forecasts for these variables. This leads to some straightforward binomial tests of the performance of the FOMC forecasts as forecasts of macroeconomic risks. Empirical results suggest that there are serious problems with the FOMC forecasts. Most particularly, there are problems with the FOMC forecasts of the tails of the macroeconomic density functions, including a tendency to under-estimate the tails of macroeconomic risks.
    Keywords: Macroeconomic risks, FOMC forecasts, density forecasting
    JEL: C53 E47 E52
    Date: 2004–01–12
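A minimal binomial sketch of such a tail-coverage test, under the illustrative assumption that the reported minimum and maximum forecasts are read as the 10% and 90% points of the density forecast (the percentile choice and the counts below are invented):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))

# If min/max are the 10%/90% points, realized outcomes should fall outside
# that range about 20% of the time. Hypothetical: 12 of 30 fell outside.
pval = binom_tail(30, 12, 0.2)
print(pval)  # a small p-value suggests tail risks are under-estimated
```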
  28. By: Mototsugu Shintani (Department of Economics, Vanderbilt University)
    Abstract: A method of principal components is employed to investigate nonlinear dynamic factor structure using a large panel data set. The evidence suggests the possibility of nonlinearity in the U.S., while excluding the class of nonlinearity that can generate endogenous fluctuation or chaos.
    Keywords: Chaos, Dynamic Factor Model, Lyapunov Exponents, Nonparametric Regression, Principal Components
    JEL: C14 C33
    Date: 2004–08
  29. By: Donald M. Pianto; Sergei Soares
    Abstract: The structure of some household surveys allows the evaluation of social programs which are implemented gradually by municipality and whose objectives are measurable by survey variables. Such evaluations do not require oversampling of areas in which the program was implemented, nor the application of additional questionnaires, while providing baseline data and non-experimental comparison groups. We use the PNAD survey to evaluate the impact of the Program for the Eradication of Child Labor on child labor, schooling, and income for municipalities which entered the program from 1997-1999. We present results both from a reflexive comparison and from matching municipalities to form a comparison group and measuring the difference in differences (D in D). Only the reduction of child labor is robust to the D in D analysis, while the reflexive results also demonstrate a significant increase in school attendance. We find the program to be more effective in smaller municipalities, as suggested by Rocha (1999).
    JEL: I32 I38
    Date: 2004
  30. By: Xiaohong Chen (Department of Economics, New York University); Yanqin Fan (Department of Economics, Vanderbilt University)
    Abstract: Recently Chen and Fan (2003a) introduced a new class of semiparametric copula-based multivariate dynamic (SCOMDY) models. A SCOMDY model specifies the conditional mean and the conditional variance of a multivariate time series parametrically (such as VAR, GARCH), but specifies the multivariate distribution of the standardized innovation semiparametrically as a parametric copula evaluated at nonparametric marginal distributions. In this paper, we first study large sample properties of the estimators of SCOMDY model parameters under a misspecified parametric copula, and then establish pseudo likelihood ratio (PLR) tests for model selection between two SCOMDY models with possibly misspecified copulas. Finally we develop PLR tests for model selection between more than two SCOMDY models along the lines of the reality check of White (2000). The limiting distributions of the estimators of copula parameters and the PLR tests do not depend on the estimation of conditional mean and conditional variance parameters. Although the tests are affected by the estimation of unknown marginal distributions of standardized innovations, they have standard parametric rates and the limiting null distributions are very easy to simulate. Empirical applications to multiple daily exchange rate data indicate the simplicity and usefulness of the proposed tests. Although a SCOMDY model with a Gaussian copula might be a reasonable model for some bivariate FX series, a SCOMDY model with a copula which has (asymmetric) tail dependence is generally preferred for tri-variate and higher dimensional FX series.
    Keywords: Multivariate dynamic models; Misspecified copulas; Multiple model selection; Semiparametric inference; Mixture copulas; t copula; Gaussian copula
    JEL: C14 G22
    Date: 2004–02
  31. By: Xiaohong Chen (Department of Economics, New York University); Yanqin Fan (Department of Economics, Vanderbilt University); Victor Tsyrennifov (Department of Economics, New York University)
    Abstract: We propose a sieve maximum likelihood (ML) estimation procedure for a broad class of semiparametric multivariate distribution models. A joint distribution in this class is characterized by a parametric copula function evaluated at nonparametric marginal distributions. This class of models has gained popularity in diverse fields due to a) its flexibility in separately modeling the dependence structure and the marginal behaviors of a multivariate random variable, and b) its circumvention of the "curse of dimensionality" associated with purely nonparametric multivariate distributions. We show that the plug-in sieve ML estimates of all smooth functionals, including the finite dimensional copula parameters and the unknown marginal distributions, are semiparametrically efficient; and that their asymptotic variances can be estimated consistently. Moreover, prior restrictions on the marginal distributions can be easily incorporated into the sieve ML procedure to achieve further efficiency gains. Two such cases are studied in the paper: (i) the marginal distributions are equal but otherwise unspecified, and (ii) some but not all marginal distributions are parametric. Monte Carlo studies indicate that the sieve ML estimates perform well in finite samples, especially so when prior information on the marginal distributions is incorporated.
    Keywords: Multivariate copula, sieve maximum likelihood, semiparametric efficiency
    JEL: C13 C14
    Date: 2002–05
  32. By: Frank A. Cowell (STICERD, London School of Economics); Emmanuel Flachaire (EUREQua)
    Abstract: We examine the statistical performance of inequality indices in the presence of extreme values in the data and show that these indices are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy, even when standard bootstrap methods are employed. However, use of appropriate methods for modelling the upper tail can greatly improve the performance of even those inequality indices that are normally considered particularly sensitive to extreme values.
    Keywords: Inequality measures; statistical performance; robustness
    JEL: C1 D63
    Date: 2004–10
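The sensitivity the authors document can be illustrated with a toy Gini computation (the index choice and data are illustrative; the paper studies a broader class of inequality measures):

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

base = [10, 20, 30, 40, 50]
g0 = gini(base)
g1 = gini(base + [5000])  # add a single extreme observation
print(g0, g1)  # the index jumps sharply with one extreme value
```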
  33. By: Dennis Epple; Bennett McCallum
    Abstract: For introductory presentation of issues involving identification and estimation of simultaneous equation systems, a natural vehicle is a model consisting of supply and demand relationships to explain price and quantity variables for a single good. One would accordingly expect to find in introductory econometrics textbooks a supply-demand example featuring actual data in which structural estimation methods yield more satisfactory results than ordinary least squares. In a search of 26 existing textbooks, however, we have found no such example; indeed, no example with actual data in which all parameter estimates are of the proper sign and statistically significant. This absence is documented in the present paper. Its main contribution, however, is the development of a simple but satisfying example, for broiler chickens, based on U.S. annual data over 1960-1999.
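A minimal sketch of why structural (instrumental-variables) estimation outperforms OLS in a supply-demand setting, using simulated rather than broiler data (all parameter values and the instrument are invented for illustration):

```python
import random

random.seed(5)
n = 5000
# Simultaneity in miniature: price x is correlated with the demand shock u,
# so OLS of quantity y on x is biased; z is a hypothetical supply shifter
# (think feed cost) that is valid as an instrument.
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]  # endogenous
y = [2.0 * xi + ui for xi, ui in zip(x, u)]                 # true slope is 2

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # inconsistent: tends to 2 + 1/3
beta_iv = cov(z, y) / cov(z, x)   # instrumental-variables slope: tends to 2
print(beta_ols, beta_iv)
```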

This nep-ecm issue is ©2004 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.