
on Econometrics 
By:  Steven Lugauer 
URL:  http://d.repec.org/n?u=RePEc:cmu:gsiawp:1091385995&r=ecm 
By:  Mark Coppejans; Holger Sieg 
Abstract:  In this paper, we derive a nonparametric average difference estimator. We show that this estimator is consistent and root-$N$ asymptotically normally distributed. Furthermore, the average difference estimator converges to the well-known average derivative estimator as the increment used to compute the difference converges to zero. We apply this estimator to test for differences between average and marginal compensation of workers. We estimate different versions of the model using repeated cross-sectional data from the CPS for a number of narrowly defined occupations. The average difference estimator yields plausible estimates for the average marginal compensation in all subsamples of the CPS considered in this paper. Our results highlight the importance of choosing bandwidth parameters in nonparametric estimation. If important covariates are measured discretely, standard approaches for choosing optimal bandwidth parameters do not necessarily apply. Our main empirical findings suggest that, at least for the preferred range of bandwidth parameters, marginal compensation exceeds average compensation, which suggests that average compensation is at best a noisy measure of the unobserved productivity of workers. 
URL:  http://d.repec.org/n?u=RePEc:cmu:gsiawp:1909861039&r=ecm 
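The idea of the average difference estimator, averaging a finite difference of a nonparametric regression fit over the sample so that it approaches the average derivative as the increment shrinks, can be sketched on simulated data. This is an illustration of the general principle only, not the authors' estimator; the Nadaraya-Watson smoother, the bandwidth, and the boundary trimming are all choices made here.

```python
import numpy as np

def nw_fit(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x = x0]."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def average_difference(x, y, delta, h, trim=0.7):
    """Average of (m(x_i + delta/2) - m(x_i - delta/2)) / delta over the
    sample, evaluated away from the boundary; as delta -> 0 this
    approaches the average derivative."""
    interior = x[np.abs(x) < trim]
    diffs = [(nw_fit(xi + delta / 2, x, y, h) - nw_fit(xi - delta / 2, x, y, h)) / delta
             for xi in interior]
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)
y = 2.0 * x + rng.normal(0, 0.1, 400)  # true derivative is 2 everywhere

est = average_difference(x, y, delta=0.2, h=0.15)
```

With a linear target function and an interior evaluation region, the average difference lands close to the true average derivative of 2.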
By:  Arie ten Cate 
Abstract:  This paper presents some suggestions for the specification of dynamic models. These suggestions are based on the supposed continuous-time nature of most economic processes. In particular, the partial adjustment model or Koyck lag model is discussed. The refinement of this model is derived from the continuous-time econometric literature. We find three alternative formulas for this refinement, depending on the particular econometric literature which is used. Two of these formulas agree with an intuitive example. In passing, it is shown that the continuous-time models of Sims and Bergstrom are closely related. In addition, the inverse of Bergstrom’s approximate analog is introduced, making use of engineering mathematics. 
Keywords:  dynamics; continuous time; econometrics; Koyck; Bergstrom 
JEL:  C22 C51 
Date:  2004–11 
URL:  http://d.repec.org/n?u=RePEc:cpb:discus:41&r=ecm 
By:  Alberto MoraGalan; Ana Perez; Esther Ruiz 
Abstract:  It has often been empirically observed that the sample autocorrelations of absolute financial returns are larger than those of squared returns. This property, known as the Taylor effect, is analysed in this paper in the Stochastic Volatility (SV) model framework. We show that the stationary autoregressive SV model is able to generate this property for realistic parameter specifications. We also show that the Taylor effect is not a sampling phenomenon due to estimation biases of the sample autocorrelations. Therefore, financial models that aim to explain the behaviour of financial returns should take this property into account. 
Date:  2004–11 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws046315&r=ecm 
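The Taylor effect described above is easy to reproduce by simulation. The sketch below draws from a stationary autoregressive SV model (the persistence and volatility-of-volatility values are illustrative, not taken from the paper) and compares the sample autocorrelations of absolute and squared returns.

```python
import numpy as np

def sample_acf(z, lag):
    """Sample autocorrelation of z at the given lag."""
    z = z - z.mean()
    return float(np.sum(z[:-lag] * z[lag:]) / np.sum(z * z))

# Simulate a stationary autoregressive SV model:
#   r_t = exp(h_t / 2) * eps_t,   h_t = phi * h_{t-1} + sigma_eta * eta_t
rng = np.random.default_rng(1)
n, phi, sigma_eta = 200_000, 0.98, 0.2
eta = rng.normal(size=n)
h = np.zeros(n)
for t in range(1, n):
    h[t] = phi * h[t - 1] + sigma_eta * eta[t]
r = np.exp(h / 2) * rng.normal(size=n)

# Taylor effect: autocorrelations of |r| exceed those of r^2
acf_abs = [sample_acf(np.abs(r), k) for k in (1, 5, 10)]
acf_sq = [sample_acf(r ** 2, k) for k in (1, 5, 10)]
```

At these parameter values the absolute-return autocorrelations dominate the squared-return ones at every lag checked, which is exactly the property the paper studies.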
By:  Lemmens, A.; Croux, C.; Dekimpe, M.G. (Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam) 
Abstract:  We develop a bivariate spectral Granger-causality test that can be applied at each individual frequency of the spectrum. The spectral approach to Granger causality has the distinct advantage that it allows one to disentangle (potentially) different Granger-causality relationships over different time horizons. We illustrate the usefulness of the proposed approach in the context of the predictive value of European production expectation surveys. 
Keywords:  Business Surveys; Granger Causality; Production Expectations; Spectral Analysis 
Date:  2004–12–01 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:30001959&r=ecm 
By:  Smets, F.; Wouters, R. (Nationale Bank van Belgie) 
Date:  2004 
URL:  http://d.repec.org/n?u=RePEc:att:belgnw:200460&r=ecm 
By:  Akifumi Isogai; Satoru Kanoh; Toshifumi Tokunaga 
Abstract:  This paper attempts to extend the Markov-switching model with time-varying transition probabilities (TVTP). The transition probabilities in the conventional TVTP model are functions of exogenous variables that are time-dependent but have constant coefficients. In this paper the coefficient parameters that express the sensitivities of the exogenous variables are also allowed to vary with time. Using data on Japanese monthly stock returns, it is shown that the explanatory power of the extended model is superior to that of conventional models. 
Keywords:  Gibbs sampling, Kalman filter, Marginal likelihood, Market dynamics, Time-varying sensitivity 
Date:  2004–11 
URL:  http://d.repec.org/n?u=RePEc:hst:hstdps:d0443&r=ecm 
By:  Clifford Attfield 
Abstract:  Techniques for determining the number of stochastic trends generating a set of nonstationary panel data are applied to budget shares for a number of commodity groups from the Family Expenditure Survey (FES) for the UK for the years 1973–2001. It is argued that some stochastic trends in macro data are generated by the aggregation of fixed demographic effects in the micro data. From cross section data, fixed effect coefficients are estimated which incorporate both age and income distribution effects. The estimated coefficients are combined with age proportion variables to form a set of I(1) indices for broad commodity groups which are then incorporated into a system of aggregate demand equations. The equations are estimated and tested in a nonstationary time series setting. 
Keywords:  Demand Equations, Age Demographics, Stochastic Trends. 
JEL:  C1 C3 D1 
Date:  2004–04 
URL:  http://d.repec.org/n?u=RePEc:bri:uobdis:04/563&r=ecm 
By:  Bo E. Honoré (Department of Economics, Princeton University); Elie Tamer (Department of Economics, Princeton University) 
Abstract:  Identification of dynamic nonlinear panel data models is an important and delicate problem in econometrics. In this paper we provide insights that shed light on the identification of parameters of some commonly used models. Using this insight, we are able to show through simple calculations that point identification often fails in these models. On the other hand, these calculations also suggest that the model restricts the parameter to lie in a region that is very small in many cases, and the failure of point identification may therefore be of little practical importance in those cases. Although the emphasis is on identification, our techniques are constructive in that they can easily form the basis for consistent estimates of the identified sets. 
JEL:  C23 C25 
Date:  2002–10 
URL:  http://d.repec.org/n?u=RePEc:kud:kuieca:2004_23&r=ecm 
By:  Charles Bellemare 
Abstract:  This paper presents conditions providing semiparametric identification of the conditional expectation of economic outcomes characterizing out-migrants, using data on immigrant sample attrition. The approach does not require that individual immigrant departures be observed. Outcomes of interest are labor market earnings, labor force participation, and labor supply. We present a panel model which extracts the information on out-migrant performance from sample attrition and estimate it using German data. We find strong evidence of self-selection of out-migrants based on unobserved individual characteristics. Simulations are performed to quantify the gap in labor market earnings and labor force participation rates between immigrant stayers and out-migrants. 
Keywords:  Migration movements, Semiparametric identification, Immigrant performance, Panel data models 
JEL:  J24 C33 J61 
Date:  2004 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:0429&r=ecm 
By:  E.H. Oksanen 
Abstract:  Certain propositions conventionally derived via least squares algebra can be derived very simply without that algebra by treating the vector of residuals as a regressor. 
Date:  1998–07 
URL:  http://d.repec.org/n?u=RePEc:mcm:deptwp:199807&r=ecm 
By:  M. W. Luke Chan; Dading Li; Dean C. Mountain 
Abstract:  Traditionally, the literature has not found economies of scale for the very large banks. Among the reasons for this are that large banks are usually not the sole focus of analysis and that the analyzed banks may be subject to a diverse set of regulatory restrictions and limitations with respect to banking services. Our paper draws upon a panel data set containing information on the relatively large Schedule I Canadian banks. Although small in number, they offer extensive interbranch banking services under one set of regulations. In light of this, we propose a Bayesian methodology for estimating returns to scale. This technique allows for random coefficients and distinct input-allocative coefficients for each bank, and it provides reliable estimates of economies of scale using a panel data set with a small number of very large banks. In conclusion, we do find significant economies of scale. 
Date:  1999–01 
URL:  http://d.repec.org/n?u=RePEc:mcm:deptwp:199901&r=ecm 
By:  Frank T. Denton 
Abstract:  The use of a nonparametrically generated instrumental variable in estimating a single-equation linear parametric model is explored, using kernel and other smoothing functions. The method, termed IVOS (Instrumental Variables Obtained by Smoothing), is applied in the estimation of measurement error and endogenous regressor models. Asymptotic and small-sample properties are investigated by simulation, using artificial data sets. IVOS is easy to apply and the simulation results exhibit good statistical properties. It can be used in situations in which standard IV cannot because suitable instruments are not available. 
Keywords:  single equation models; nonparametric; instrumental variables 
JEL:  C13 C14 C21 
Date:  2004–12 
URL:  http://d.repec.org/n?u=RePEc:mcm:qseprr:390&r=ecm 
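Denton's IVOS construction is not reproduced here, but the generic idea of a nonparametrically generated instrument can be sketched for a measurement-error model: kernel-smooth the observed regressor on an exogenous variable and use the fitted values as the instrument. All names and parameter values below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(0, 1, n)               # exogenous variable
x_star = z + rng.normal(0, 0.5, n)    # true regressor
x = x_star + rng.normal(0, 1.0, n)    # observed with measurement error
y = 2.0 * x_star + rng.normal(0, 0.5, n)

# OLS of y on x: attenuated toward zero by the measurement error
beta_ols = np.cov(x, y)[0, 1] / np.var(x)

# Instrument obtained by smoothing: Nadaraya-Watson fit of x on z
h = 0.3
K = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
w = (K @ x) / K.sum(axis=1)           # smoothed instrument, a function of z only

# IV estimate using the smoothed instrument
beta_iv = np.cov(w, y)[0, 1] / np.cov(w, x)[0, 1]
```

Because the smoothed instrument depends on the data only through z, it is uncorrelated with the measurement error, so the IV ratio recovers the true slope of 2 while OLS is biased toward roughly 1.1 at these parameter values.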
By:  Xibin Zhang; Maxwell L. King 
Abstract:  This paper presents a Markov chain Monte Carlo (MCMC) algorithm to estimate parameters and latent stochastic processes in the asymmetric stochastic volatility (SV) model, in which the Box-Cox transformation of the squared volatility follows an autoregressive Gaussian distribution and the marginal density of asset returns has heavy tails. To test for the significance of the Box-Cox transformation parameter, we present the likelihood ratio statistic, in which likelihood functions can be approximated using a particle filter and a Monte Carlo kernel likelihood. When applying the heavy-tailed asymmetric Box-Cox SV model and the proposed sampling algorithm to continuously compounded daily returns of the Australian stock index, we find significant empirical evidence supporting the Box-Cox transformation of the squared volatility against the alternative model involving a logarithmic transformation. 
Keywords:  Leverage effect; Likelihood ratio test; Markov Chain Monte Carlo; Monte Carlo kernel likelihood; Particle filter 
JEL:  C12 C15 C52 
Date:  2004–11 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200426&r=ecm 
By:  Nicola Persico; Petra Todd 
Abstract:  This paper considers the use of outcomes-based tests for detecting racial bias in the context of police searches of motor vehicles. It shows that the test proposed in Knowles, Persico and Todd (2001) can also be applied in a more general environment where police officers are heterogeneous in their tastes for discrimination and in their costs of search, and motorists are heterogeneous in their benefits and costs from criminal behavior. We characterize the police and motorist decision problems in a game-theoretic framework and establish properties of the equilibrium. We also extend the model to the case where drivers' characteristics are mutable in the sense that drivers can adapt some of their characteristics to reduce the probability of being monitored. After developing the theory that justifies the application of outcomes-based tests, we apply the tests to data on police searches of motor vehicles gathered by the Wichita Police Department. The empirical findings are consistent with the notion that police in Wichita choose their search strategies to maximize successful searches, and not out of racial bias. 
JEL:  J70 K42 
Date:  2004–12 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:10947&r=ecm 
By:  Bent Nielsen (Nuffield College, Oxford University, UK); J. James Reade (St. Cross College, University of Oxford) 
Abstract:  This paper provides a means of accurately simulating explosive autoregressive processes, and uses this method to analyse the distribution of the likelihood ratio test statistic for an explosive second order autoregressive process. Nielsen (2001) has shown that for the asymptotic distribution of the likelihood ratio unit root test statistic in a higher order autoregressive model, the assumption that the remaining roots are stationary is unnecessary, and as such the approximating asymptotic distribution for the test in the difference stationary region is valid in the explosive region also. However, simulations of statistics in the explosive region are beset by the magnitude of the numbers involved, which causes numerical inaccuracies, and this has previously constituted a bar on supporting asymptotic results by means of simulation and on analysing the finite sample properties of tests in the explosive region. 
Date:  2004–10–19 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0424&r=ecm 
By:  Ole Barndorff-Nielsen (University of Aarhus); Svend Erik Graversen (University of Aarhus); Jean Jacod (Université P. et M. Curie); Mark Podolskij (Ruhr University Bochum); Neil Shephard (Nuffield College, University of Oxford, UK) 
Abstract:  Consider a semimartingale of the form $Y_t = Y_0 + \int_0^t a_s\,ds + \int_0^t \sigma_s\,dW_s$, where $a$ is a locally bounded predictable process, $\sigma$ (the "volatility") is an adapted right-continuous process with left limits, and $W$ is a Brownian motion. We define the realised bipower variation process $V(Y;r,s)^n_t = n^{(r+s)/2-1} \sum_{i=1}^{[nt]} |Y_{i/n}-Y_{(i-1)/n}|^r \, |Y_{(i+1)/n}-Y_{i/n}|^s$, where $r$ and $s$ are nonnegative reals with $r+s>0$. We prove that $V(Y;r,s)^n_t$ converges locally uniformly in time, in probability, to a limiting process $V(Y;r,s)_t$ (the "bipower variation process"). If further $\sigma$ is a possibly discontinuous semimartingale driven by a Brownian motion which may be correlated with $W$ and by a Poisson random measure, we prove a central limit theorem, in the sense that $\sqrt{n}\,(V(Y;r,s)^n - V(Y;r,s))$ converges in law to a process which is the stochastic integral with respect to some other Brownian motion $W'$, which is independent of the driving terms of $Y$ and $\sigma$. We also provide a multivariate version of these results. 
Date:  2004–11–01 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0429&r=ecm 
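For r = s = 1 and constant volatility sigma, the bipower variation limit is (2/pi) * sigma^2 * t, and, unlike realised variance, it is essentially unaffected by a single jump. The simulation sketch below illustrates both facts; the discretisation scheme and parameter values are choices made here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, t = 100_000, 0.5, 1.0
dY = sigma * np.sqrt(t / n) * rng.normal(size=n)  # increments of Y_t = sigma * W_t

def realised_bipower(dY):
    """V(Y;1,1)^n_t = sum_i |Y_{i/n} - Y_{(i-1)/n}| * |Y_{(i+1)/n} - Y_{i/n}|."""
    a = np.abs(dY)
    return float(np.sum(a[:-1] * a[1:]))

rv = float(np.sum(dY ** 2))             # realised variance
bv = realised_bipower(dY)
limit = (2 / np.pi) * sigma ** 2 * t    # theoretical bipower limit for r = s = 1

# Add a single jump: realised variance moves by about jump^2,
# while bipower variation barely moves.
dY_jump = dY.copy()
dY_jump[n // 2] += 1.0
rv_j = float(np.sum(dY_jump ** 2))
bv_j = realised_bipower(dY_jump)
```

The jump robustness comes from the product structure: a jump enters the bipower sum only multiplied by two small adjacent diffusive increments.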
By:  Yasuhiro Omori (University of Tokyo); Siddhartha Chib (Washington University); Neil Shephard (Nuffield College, University of Oxford, UK); Jouchi Nakajima (University of Tokyo) 
Abstract:  Kim, Shephard and Chib (1998) provided a Bayesian analysis of stochastic volatility models based on a very fast and reliable Markov chain Monte Carlo (MCMC) algorithm. Their method ruled out the leverage effect, which limited its scope for applications. Despite this, their basic method has been extensively used in the financial economics literature and more recently in macroeconometrics. In this paper we show how to overcome this limitation, so that the essence of the Kim, Shephard and Chib (1998) approach can be used to deal with the leverage effect, greatly extending the applicability of the method. Several illustrative examples are provided. 
Keywords:  Leverage effect, Markov chain Monte Carlo, Mixture sampler, Stochastic volatility, Stock returns. 
Date:  2004–08–22 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0419&r=ecm 
By:  Siddhartha Chib (Olin School of Business, Washington University); Michael K Pitt (University of Warwick); Neil Shephard (Nuffield College, University of Oxford, UK) 
Abstract:  This paper provides methods for carrying out likelihood-based inference for diffusion driven models, for example discretely observed multivariate diffusions, continuous-time stochastic volatility models and counting process models. The diffusions can potentially be nonstationary. Although our methods are sampling based, making use of Markov chain Monte Carlo methods to sample the posterior distribution of the relevant unknowns, our general strategies and details are different from previous work along these lines. The methods we develop are simple to implement and simulation efficient. Importantly, unlike previous methods, the performance of our technique is not worsened, in fact it improves, as the degree of latent augmentation is increased to reduce the bias of the Euler approximation. In addition, our method is not subject to a degeneracy that afflicts previous techniques when the degree of latent augmentation is increased. We also discuss issues of model choice, model checking and filtering. The techniques and ideas are applied to both simulated and real data. 
Keywords:  Bayes estimation, Brownian bridge, Nonlinear diffusion, Euler approximation, Markov chain Monte Carlo, MetropolisHastings algorithm, Missing data, Simulation, Stochastic differential equation. 
Date:  2004–08–22 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0420&r=ecm 
By:  Ole Barndorff-Nielsen (University of Aarhus); Neil Shephard (Nuffield College, University of Oxford, UK) 
Abstract:  In this brief note we review some of our recent results on the use of high frequency financial data to estimate objects like integrated variance in stochastic volatility models. Interesting issues include multipower variation, jumps and market microstructure effects. 
Date:  2004–11–18 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0430&r=ecm 
By:  Lauren Bin Dong (Statistics Canada) 
Abstract:  In this paper we derive an empirical likelihood type Wald (ELW) test for the problem of testing for structural change in a linear regression model when the variance of the error term is not known to be equal across regimes. The sampling properties of the ELW test are analyzed using Monte Carlo simulation. Comparisons of these properties with those of three other commonly used tests (Jayatissa, Weerahandi, and Wald) are conducted. The finding is that the ELW test has very good power properties. 
Keywords:  Empirical Likelihood, Wald test, Monte Carlo Simulation, Power and size, structural change 
JEL:  C12 C15 C16 
Date:  2004–12–06 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:0405&r=ecm 
By:  Catalin Starica (Dept. Mathematical Statistics, Chalmers University of Technology); Clive Granger (Dept. Economics, UCSD) 
Abstract:  The paper outlines a methodology for analyzing daily stock returns that relinquishes the assumption of global stationarity. Giving up this common working hypothesis reflects our belief that fundamental features of the financial markets are continuously and significantly changing. Our approach approximates locally the nonstationary data by stationary models. The methodology is applied to the S&P 500 series of returns covering a period of over seventy years of market activity. We find most of the dynamics of this time series to be concentrated in shifts of the unconditional variance. The forecasts based on our nonstationary unconditional modeling were found to be superior to those obtained in a stationary long memory framework or to those based on a stationary Garch(1,1) data generating process. 
Keywords:  stock returns, nonstationarities, locally stationary processes, volatility, sample autocorrelation, long range dependence, Garch(1,1) data generating process. 
JEL:  C14 C22 C52 C53 
Date:  2004–11–22 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0411016&r=ecm 
By:  Thomas Mikosch (Dept. Actuarial Mathematics, University of Copenhagen); Catalin Starica (Dept. Mathematical Statistics & Economics, Gothenburg University & CTH) 
Abstract:  In this paper we propose a goodness-of-fit test that checks the resemblance of the spectral density of a GARCH process to that of the log-returns. The asymptotic behavior of the test statistic is given by a functional central limit theorem for the integrated periodogram of the data. A simulation study investigates the small sample behavior, the size and the power of our test. We apply our results to the S&P500 returns and detect changes in the structure of the data related to shifts of the unconditional variance. We show how a long range dependence type behavior in the sample ACF of absolute returns might be induced by these shifts. 
Keywords:  integrated periodogram, spectral distribution, functional central limit theorem, Kiefer-Müller process, Brownian bridge, sample autocorrelation, change point, GARCH process, long range dependence, IGARCH, nonstationarity 
JEL:  C22 C52 
Date:  2004–12–06 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0412003&r=ecm 
By:  Thomas Mikosch (Dept. Actuarial Mathematics, University of Copenhagen); Catalin Starica (Dept. Mathematical Statistics & Economics, Gothenburg University & CTH) 
Abstract:  Our study supports the hypothesis of global nonstationarity of the return time series. We bring forth both theoretical and empirical evidence that the long range dependence (LRD) type behavior of the sample ACF and the periodogram of absolute return series and the IGARCH effect documented in the econometrics literature could be due to the impact of nonstationarity on statistical instruments and estimation procedures. In particular, contrary to the commonly held belief that the LRD characteristic and the IGARCH phenomenon carry meaningful information about the price generating process, these so-called stylized facts could be just artifacts due to structural changes in the data. The effect that the switch to a different regime has on the sample ACF and the periodogram is theoretically explained and empirically documented using time series that were the object of LRD modeling efforts (S&P500, DEM/USD FX) in various publications. 
Keywords:  sample autocorrelation, change point, GARCH process, long range dependence. 
JEL:  C22 C52 
Date:  2004–12–06 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0412004&r=ecm 
By:  Kevin Dowd (Nottingham University Business School) 
Abstract:  This paper evaluates the inflation density forecasts published by the Swedish central bank, the Sveriges Riksbank. Realized inflation outcomes are mapped to their forecasted percentiles, which are then transformed to be standard normal under the null that the forecasting model is good. Results suggest that the Riksbank’s inflation density forecasts have a skewness problem, and their longer term forecasts have a kurtosis problem as well. 
Keywords:  Inflation density forecasting, Sveriges Riksbank, forecast evaluation 
JEL:  C53 E31 E52 
Date:  2004–01–12 
URL:  http://d.repec.org/n?u=RePEc:nub:occpap:10&r=ecm 
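The evaluation recipe described above, mapping each realized outcome to its forecast percentile and pushing the result through the inverse standard-normal CDF before inspecting the moments, can be sketched on simulated forecasts. The Gaussian forecast densities below are an assumption of the illustration, not the Riksbank's model; under a well-calibrated forecaster the transformed series should be standard normal, with skewness near 0 and kurtosis near 3.

```python
import numpy as np
from statistics import NormalDist

def pit_to_normal(outcomes, means, sds):
    """Map each outcome to its forecast percentile (PIT), then through the
    inverse standard-normal CDF; the result should be i.i.d. N(0,1) under
    the null that the forecasting model is good."""
    u = np.array([NormalDist(m, s).cdf(x) for x, m, s in zip(outcomes, means, sds)])
    u = np.clip(u, 1e-12, 1 - 1e-12)
    inv = NormalDist().inv_cdf
    return np.array([inv(ui) for ui in u])

rng = np.random.default_rng(4)
n = 5000
means = rng.normal(2.0, 0.5, n)            # hypothetical density-forecast means
outcomes = means + rng.normal(0, 1.0, n)   # a well-calibrated forecaster
z = pit_to_normal(outcomes, means, np.ones(n))

zc = z - z.mean()
skewness = float(np.mean(zc ** 3) / z.std() ** 3)
kurtosis = float(np.mean(zc ** 4) / z.std() ** 4)
```

Departures of the sample skewness from 0 or kurtosis from 3 in the transformed series are the kinds of problems the paper reports for the Riksbank forecasts.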
By:  Kevin Dowd (Nottingham University Business School) 
Abstract:  This paper presents some simple methods to estimate the probability that realized inflation will breach a given inflation target range over a specified period, based on the Bank of England’s RPIX inflation forecasting model and the Monetary Policy Committee’s forecasts of the parameters on which this model is built. Illustrative results for plausible target ranges over the period up to 04Q1 indicate that these probabilities are low, if not very low, and strongly suggest that the Bank’s model overestimates inflation risk. 
Keywords:  Inflation, inflation risk, fan charts 
JEL:  C53 E47 E52 
Date:  2004–01–12 
URL:  http://d.repec.org/n?u=RePEc:nub:occpap:11&r=ecm 
By:  Kevin Dowd (Nottingham University Business School) 
Abstract:  This paper presents a new approach to the evaluation of FOMC macroeconomic forecasts. Its distinctive feature is the interpretation, under reasonable conditions, of the minimum and maximum forecasts reported in FOMC meetings as indicative of probability density forecasts for these variables. This leads to some straightforward binomial tests of the performance of the FOMC forecasts as forecasts of macroeconomic risks. Empirical results suggest that there are serious problems with the FOMC forecasts. Most particularly, there are problems with the FOMC forecasts of the tails of the macroeconomic density functions, including a tendency to underestimate the tails of macroeconomic risks. 
Keywords:  Macroeconomic risks, FOMC forecasts, density forecasting 
JEL:  C53 E47 E52 
Date:  2004–01–12 
URL:  http://d.repec.org/n?u=RePEc:nub:occpap:12&r=ecm 
By:  Mototsugu Shintani (Department of Economics, Vanderbilt University) 
Abstract:  A method of principal components is employed to investigate nonlinear dynamic factor structure using a large panel data set. The evidence suggests the possibility of nonlinearity in the U.S. data, while excluding the class of nonlinearity that can generate endogenous fluctuations or chaos. 
Keywords:  Chaos, Dynamic Factor Model, Lyapunov Exponents, Nonparametric Regression, Principal Components 
JEL:  C14 C33 
Date:  2004–08 
URL:  http://d.repec.org/n?u=RePEc:van:wpaper:0418&r=ecm 
By:  Donald M. Pianto; Sergei Soares 
Abstract:  The structure of some household surveys allows the evaluation of social programs which are implemented gradually by municipality and whose objectives are measurable by survey variables. Such evaluations do not require oversampling of areas in which the program was implemented, nor the application of additional questionnaires, while providing baseline data and nonexperimental comparison groups. We use the PNAD survey to evaluate the impact of the Program for the Eradication of Child Labor on child labor, schooling, and income for municipalities which entered the program from 1997 to 1999. We present results both from a reflexive comparison and from matching municipalities to form a comparison group and measuring the difference in differences (D in D). Only the reduction of child labor is robust to the D in D analysis, while the reflexive results also demonstrate a significant increase in school attendance. We find the program to be more effective in smaller municipalities, as suggested by Rocha (1999). 
JEL:  I32 I38 
Date:  2004 
URL:  http://d.repec.org/n?u=RePEc:anp:en2004:133&r=ecm 
By:  Xiaohong Chen (Department of Economics, New York University); Yanqin Fan (Department of Economics, Vanderbilt University) 
Abstract:  Recently Chen and Fan (2003a) introduced a new class of semiparametric copula-based multivariate dynamic (SCOMDY) models. A SCOMDY model specifies the conditional mean and the conditional variance of a multivariate time series parametrically (such as VAR, GARCH), but specifies the multivariate distribution of the standardized innovation semiparametrically as a parametric copula evaluated at nonparametric marginal distributions. In this paper, we first study large sample properties of the estimators of SCOMDY model parameters under a misspecified parametric copula, and then establish pseudo likelihood ratio (PLR) tests for model selection between two SCOMDY models with possibly misspecified copulas. Finally we develop PLR tests for model selection between more than two SCOMDY models along the lines of the reality check of White (2000). The limiting distributions of the estimators of copula parameters and the PLR tests do not depend on the estimation of conditional mean and conditional variance parameters. Although the tests are affected by the estimation of unknown marginal distributions of standardized innovations, they have standard parametric rates and the limiting null distributions are very easy to simulate. Empirical applications to multiple daily exchange rate data indicate the simplicity and usefulness of the proposed tests. Although a SCOMDY model with a Gaussian copula might be a reasonable model for some bivariate FX series, a SCOMDY model with a copula which has (asymmetric) tail dependence is generally preferred for trivariate and higher dimensional FX series. 
Keywords:  Multivariate dynamic models; Misspecified copulas; Multiple model selection; Semiparametric inference; Mixture copulas; t copula; Gaussian copula 
JEL:  C14 G22 
Date:  2004–02 
URL:  http://d.repec.org/n?u=RePEc:van:wpaper:0419&r=ecm 
By:  Xiaohong Chen (Department of Economics, New York University); Yanqin Fan (Department of Economics, Vanderbilt University); Victor Tsyrennifov (Department of Economics, New York University) 
Abstract:  We propose a sieve maximum likelihood (ML) estimation procedure for a broad class of semiparametric multivariate distribution models. A joint distribution in this class is characterized by a parametric copula function evaluated at nonparametric marginal distributions. This class of models has gained popularity in diverse fields due to a) its flexibility in separately modeling the dependence structure and the marginal behaviors of a multivariate random variable, and b) its circumvention of the "curse of dimensionality" associated with purely nonparametric multivariate distributions. We show that the plug-in sieve ML estimates of all smooth functionals, including the finite dimensional copula parameters and the unknown marginal distributions, are semiparametrically efficient, and that their asymptotic variances can be estimated consistently. Moreover, prior restrictions on the marginal distributions can be easily incorporated into the sieve ML procedure to achieve further efficiency gains. Two such cases are studied in the paper: (i) the marginal distributions are equal but otherwise unspecified, and (ii) some but not all marginal distributions are parametric. Monte Carlo studies indicate that the sieve ML estimates perform well in finite samples, especially so when prior information on the marginal distributions is incorporated. 
Keywords:  Multivariate copula, sieve maximum likelihood, semiparametric efficiency 
JEL:  C13 C14 
Date:  2002–05 
URL:  http://d.repec.org/n?u=RePEc:van:wpaper:0420&r=ecm 
By:  Frank A. Cowell (STICERD, London School of Economics); Emmanuel Flachaire (EUREQua) 
Abstract:  We examine the statistical performance of inequality indices in the presence of extreme values in the data and show that these indices are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy, even when standard bootstrap methods are employed. However, use of appropriate methods for modelling the upper tail can greatly improve the performance of even those inequality indices that are normally considered particularly sensitive to extreme values. 
Keywords:  Inequality measures; statistical performance; robustness 
JEL:  C1 D63 
Date:  2004–10 
URL:  http://d.repec.org/n?u=RePEc:mse:wpsorb:v04101&r=ecm 
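A minimal illustration of this sensitivity, assuming lognormal incomes and an arbitrary contamination size (neither taken from the paper): the Gini coefficient before and after appending a single extreme observation.

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted mean-difference formula:
    G = sum_i (2i - n - 1) x_(i) / (n^2 * mean(x))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return float(np.sum((2 * i - n - 1) * x) / (n * n * x.mean()))

rng = np.random.default_rng(5)
# lognormal incomes; theoretical Gini is 2 * Phi(sigma / sqrt(2)) - 1, about 0.276 here
incomes = rng.lognormal(mean=0.0, sigma=0.5, size=1000)

g_base = gini(incomes)
g_contaminated = gini(np.append(incomes, incomes.max() * 50))  # one extreme value
```

A single extreme observation moves the index by far more than sampling noise would, which is the instability the paper documents for heavy-tailed income distributions.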
By:  Dennis Epple; Bennett McCallum 
Abstract:  For an introductory presentation of issues involving identification and estimation of simultaneous equation systems, a natural vehicle is a model consisting of supply and demand relationships to explain price and quantity variables for a single good. One would accordingly expect to find in introductory econometrics textbooks a supply-demand example featuring actual data in which structural estimation methods yield more satisfactory results than ordinary least squares. In a search of 26 existing textbooks, however, we have found no such example; indeed, no example with actual data in which all parameter estimates are of the proper sign and statistically significant. This absence is documented in the present paper. Its main contribution, however, is the development of a simple but satisfying example, for broiler chickens, based on U.S. annual data over 1960–1999. 
URL:  http://d.repec.org/n?u=RePEc:cmu:gsiawp:1132963504&r=ecm 
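The simultaneity problem the abstract describes is easy to exhibit by simulation: in a supply-demand system, OLS on equilibrium data is badly biased for the demand slope, while IV with a supply shifter recovers it. The numbers and the instrument below are invented for the sketch; the broiler-chicken data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
w = rng.normal(0, 1, n)    # exogenous supply shifter (e.g. an input cost)
u_d = rng.normal(0, 1, n)  # demand shock
u_s = rng.normal(0, 1, n)  # supply shock

# demand: q = 10 - 1.0 * p + u_d ;  supply: q = 2 + 1.0 * p - 0.5 * w + u_s
# solving the two equations gives the equilibrium price and quantity:
p = (8.0 + 0.5 * w + u_d - u_s) / 2.0
q = 10.0 - p + u_d

# OLS of q on p is badly biased for the demand slope (true value -1.0),
# because p is correlated with the demand shock u_d ...
b_ols = np.cov(p, q)[0, 1] / np.var(p)

# ... while IV using the supply shifter recovers it
b_iv = np.cov(w, q)[0, 1] / np.cov(w, p)[0, 1]
```

At these parameter values the OLS slope is near -0.1 in probability limit, nowhere close to the structural demand slope of -1, which is exactly the failure mode the paper says textbook examples should illustrate.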