New Economics Papers on Econometrics
By: | Chirok Han (Victoria University of Wellington); Peter C.B. Phillips (Cowles Foundation, Yale University) |
Abstract: | This paper develops new estimation and inference procedures for dynamic panel data models with fixed effects and incidental trends. A simple consistent GMM estimation method is proposed that avoids the weak moment condition problem that is known to affect conventional GMM estimation when the autoregressive coefficient (rho) is near unity. In both panel and time series cases, the estimator has standard Gaussian asymptotics for all values of rho in (-1, 1] irrespective of how the composite cross section and time series sample sizes pass to infinity. Simulations reveal that the estimator has little bias even in very small samples. The approach is applied to panel unit root testing. |
Keywords: | Asymptotic normality, Asymptotic power envelope, Moment conditions, Panel unit roots, Point optimal test, Unit root tests, Weak instruments |
JEL: | C22 C23 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1599&r=ecm |
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University) |
Abstract: | This paper overviews maximum likelihood and Gaussian methods of estimating continuous time models used in finance. Since the exact likelihood can be constructed only in special cases, much attention has been devoted to the development of methods designed to approximate the likelihood. These approaches range from crude Euler-type approximations and higher order stochastic Taylor series expansions to more complex polynomial-based expansions and infill approximations to the likelihood based on a continuous time data record. The methods are discussed, their properties are outlined and their relative finite sample performance is compared in a simulation experiment with the nonlinear CIR diffusion model, which is popular in empirical finance. Bias correction methods are also considered and particular attention is given to jackknife and indirect inference estimators. The latter retains the good asymptotic properties of ML estimation while removing finite sample bias. This method demonstrates superior performance in finite samples.
Keywords: | Maximum likelihood, Transition density, Discrete sampling, Continuous record, Realized volatility, Bias reduction, Jackknife, Indirect inference |
JEL: | C22 C32 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1597&r=ecm |
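The crudest approximation surveyed in the abstract above, the Euler scheme, replaces the true transition density with a Gaussian whose conditional mean and variance are correct to first order in the sampling interval. A minimal sketch for the CIR model dX = kappa*(mu - X)dt + sigma*sqrt(X)dW follows; the function names and simulation settings are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.optimize import minimize

def euler_negloglik(params, x, dt):
    """Negative Euler pseudo-log-likelihood for the CIR diffusion
    dX = kappa*(mu - X) dt + sigma*sqrt(X) dW, observed at spacing dt."""
    kappa, mu, sigma = params
    if kappa <= 0 or mu <= 0 or sigma <= 0:
        return np.inf
    x0, x1 = x[:-1], x[1:]
    mean = x0 + kappa * (mu - x0) * dt   # conditional mean, first order in dt
    var = sigma**2 * x0 * dt             # conditional variance, first order in dt
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x1 - mean)**2 / var)

# simulate a CIR path (Euler scheme, reflected at zero) and fit it
rng = np.random.default_rng(0)
kappa, mu, sigma, dt, n = 0.5, 0.06, 0.15, 1 / 12, 600
x = np.empty(n); x[0] = mu
for t in range(n - 1):
    x[t + 1] = abs(x[t] + kappa * (mu - x[t]) * dt
                   + sigma * np.sqrt(x[t] * dt) * rng.standard_normal())
fit = minimize(euler_negloglik, x0=[1.0, 0.05, 0.1], args=(x, dt),
               method="Nelder-Mead")
print(fit.x)  # the mean-reversion parameter is typically estimated with bias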
By: | Eduardo Mendes (Department of Electrical Engineering, PUC-Rio); Alvaro Veiga (Department of Electrical Engineering, PUC-Rio); Marcelo Cunha Medeiros (Department of Economics, PUC-Rio)
Abstract: | In this paper a new model of mixtures of distributions is proposed, where the mixing structure is determined by a smooth transition tree architecture. Models based on mixtures of distributions are useful for approximating unknown conditional distributions of multivariate data. The tree structure yields a model that is simpler, and in some cases more interpretable, than previous proposals in the literature. Based on the Expectation-Maximization (EM) algorithm, a quasi-maximum likelihood estimator is derived and its asymptotic properties are established under mild regularity conditions. In addition, a specific-to-general model building strategy is proposed in order to avoid possible identification problems. Both the estimation procedure and the model building strategy are evaluated in a Monte Carlo experiment, which gives strong support for the theory developed in small samples. The approximation capabilities of the model are also analyzed in a simulation experiment. Finally, two applications with real datasets are considered.
Keywords: | Mixture models, smooth transition, EM algorithm, asymptotic properties, time series, conditional distribution
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:rio:texdis:538&r=ecm |
By: | Hugo Kruiniger (Queen Mary, University of London) |
Abstract: | In this paper we show that the Quasi ML estimation method yields consistent Random and Fixed Effects estimators for the autoregression parameter ρ in the panel AR(1) model with arbitrary initial conditions even when the errors are drawn from heterogeneous distributions. We compare both analytically and by means of Monte Carlo simulations the QML estimators with the GMM estimator proposed by Arellano and Bond (1991) [AB], which ignores some of the moment conditions implied by the model. Unlike the AB GMM estimator, the QML estimators for ρ only suffer from a weak instruments problem when ρ is close to one if the cross-sectional average of the variances of the errors is constant over time, e.g. under time-series homoskedasticity. However, even in this case the QML estimators are still consistent when ρ is equal to one and they display only a relatively small bias when ρ is close to one. In contrast, the AB GMM estimator is inconsistent when ρ is equal to one, and is severely biased when ρ is close to one. Finally, we study the finite sample properties of two types of estimators for the standard errors of the QML estimators for ρ, and the bounds of QML based confidence intervals for ρ.
Keywords: | Dynamic panel data, Initial conditions, Quasi ML, GMM, Weak moment conditions, Local-to-zero asymptotics |
JEL: | C12 C13 C23 |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp582&r=ecm |
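For context on the weak-instrument problem discussed above: in the panel AR(1) model y_it = ρ·y_{i,t-1} + η_i + ε_it, first-differencing removes the fixed effect and lagged levels instrument the differenced lag. The sketch below implements only the simplest such moment condition (an Anderson-Hsiao-type IV, not the full Arellano-Bond GMM); the simulation design is an illustrative assumption.

```python
import numpy as np

def panel_ar1_iv(y):
    """Anderson-Hsiao-type IV for rho in y_it = rho*y_{i,t-1} + eta_i + eps_it:
    first-difference out eta_i, instrument dy_{t-1} with the level y_{t-2}.
    This uses only one of the moment conditions E[y_{i,t-2} * d_eps_it] = 0
    that the Arellano-Bond GMM estimator stacks."""
    dy  = y[:, 2:] - y[:, 1:-1]   # dependent variable: dy_t
    dy1 = y[:, 1:-1] - y[:, :-2]  # regressor: dy_{t-1}
    z   = y[:, :-2]               # instrument: y_{t-2}
    return np.sum(z * dy) / np.sum(z * dy1)

# near rho = 1 the instrument barely correlates with the differenced
# regressor, so the estimator becomes erratic (weak-instrument problem)
rng = np.random.default_rng(1)
for rho in (0.5, 0.95):
    N, T = 200, 8
    eta = rng.standard_normal(N)
    y = np.zeros((N, T)); y[:, 0] = eta + rng.standard_normal(N)
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + eta + rng.standard_normal(N)
    print(rho, panel_ar1_iv(y))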
By: | Nakatani, Tomoaki (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (School of Management and Economics) |
Abstract: | In this paper we propose a Lagrange multiplier (LM) test for volatility interactions among markets or assets. The null hypothesis is the Constant Conditional Correlation (CCC) GARCH model of Bollerslev (1990), in which the volatility of an asset is described only through its own lagged squared innovations and volatility. The alternative hypothesis is an extension of that model in which volatility is modelled as a linear combination not only of its own lagged squared residuals and volatility but also of those in the other equations, while keeping the conditional correlation structure constant. This configuration enables us to test for volatility transmissions among variables in the model. We derive an LM test of the null hypothesis. Monte Carlo experiments show that the test has satisfactory finite sample properties. The size distortions become negligible when the sample size reaches 2000. The test is applied to pairs of foreign exchange returns and individual stock returns. Results indicate that six of the seven pairs investigated seem to have volatility interactions, and that significant interaction effects typically result from the lagged squared innovations of the other variables.
Keywords: | Multivariate GARCH; Volatility interactions; Lagrange multiplier test; Monte Carlo simulation; Conditional correlations |
JEL: | C12 C32 C51 C52 G19 |
Date: | 2007–01–05 |
URL: | http://d.repec.org/n?u=RePEc:hhs:hastef:0649&r=ecm |
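The alternative hypothesis described above can be made concrete through the variance recursion h_t = ω + A·ε²_{t-1} + B·h_{t-1}, where the CCC-GARCH null restricts A and B to be diagonal. A minimal sketch of the extended recursion, with parameter values chosen purely for illustration:

```python
import numpy as np

def ecc_garch_variances(eps, omega, A, B):
    """Conditional variances in the extended CCC-GARCH:
    h_t = omega + A @ eps_{t-1}**2 + B @ h_{t-1}.
    Bollerslev's CCC null restricts A and B to be diagonal; the LM test
    targets the off-diagonal (volatility-interaction) elements."""
    T, n = eps.shape
    h = np.empty((T, n))
    h[0] = omega / (1 - np.diag(A) - np.diag(B))  # rough startup value
    for t in range(1, T):
        h[t] = omega + A @ eps[t - 1]**2 + B @ h[t - 1]
    return h

# bivariate example with a one-way volatility spillover from asset 1 to 2
omega = np.array([0.05, 0.05])
A = np.array([[0.08, 0.00],
              [0.05, 0.08]])   # lower-left entry is zero under the CCC null
B = np.diag([0.90, 0.88])
rng = np.random.default_rng(2)
eps = 0.5 * rng.standard_normal((1000, 2))
print(ecc_garch_variances(eps, omega, A, B)[-1])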
By: | Chuan Goh |
Abstract: | This paper considers a class of semiparametric estimators that take the form of density-weighted averages. These arise naturally in a consideration of semiparametric methods for the estimation of index and sample-selection models involving preliminary kernel density estimates. The question considered in this paper is that of selecting the degree of smoothing to be used in computing the preliminary density estimate. This paper proposes a bootstrap method for estimating the mean squared error and associated optimal bandwidth. The particular bootstrap method suggested here involves using a resample of smaller size than the original sample. This method of bandwidth selection is presented with specific reference to the case of estimators of average densities, of density-weighted average derivatives and of density-weighted conditional covariances. |
Keywords: | bandwidth selection, density-weighted averages, bootstrap, m-out-of-n bootstrap, kernel density estimation |
JEL: | C14 |
Date: | 2007–01–02 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-274&r=ecm |
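A rough sketch of the m-out-of-n bootstrap idea for one of the cases mentioned above, the average density E[f(X)] estimated by a leave-one-out kernel U-statistic: resamples of size m < n estimate the MSE as a function of the bandwidth. The pilot bandwidth, grid and sample sizes are assumptions for illustration; the paper's procedure additionally rescales the subsample-optimal bandwidth to the full sample size.

```python
import numpy as np

def avg_density(x, h):
    """Average density theta = E[f(X)] via the leave-one-out U-statistic
    with a Gaussian kernel."""
    n = len(x)
    u = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)
    return k.sum() / (n * (n - 1) * h)

def boot_mse(x, h_grid, m, n_boot=200, seed=0):
    """m-out-of-n bootstrap estimate of MSE(h) on a bandwidth grid, taking
    the full-sample estimate at a rule-of-thumb pilot bandwidth as target."""
    rng = np.random.default_rng(seed)
    n = len(x)
    target = avg_density(x, 1.06 * x.std() * n**-0.2)
    mse = np.zeros(len(h_grid))
    for _ in range(n_boot):
        xs = rng.choice(x, size=m, replace=True)   # resample of size m < n
        for j, h in enumerate(h_grid):
            mse[j] += (avg_density(xs, h) - target)**2 / n_boot
    return mse

rng = np.random.default_rng(3)
x = rng.standard_normal(400)
h_grid = np.linspace(0.1, 1.0, 10)
mse = boot_mse(x, h_grid, m=100)
print(h_grid[mse.argmin()])  # subsample-optimal h; the paper then rescales to n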
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University) |
Abstract: | A new methodology is proposed to estimate theoretical prices of financial contingent-claims whose values are dependent on some other underlying financial assets. In the literature the preferred choice of estimator is usually maximum likelihood (ML). ML has strong asymptotic justification but is not necessarily the best method in finite samples. The present paper proposes instead a simulation-based method that improves the finite sample performance of the ML estimator while maintaining its good asymptotic properties. The methods are implemented and evaluated here in the Black-Scholes option pricing model and in the Vasicek bond pricing model, but have wider applicability. Monte Carlo studies show that the proposed procedures achieve bias reductions over ML estimation in pricing contingent claims. The bias reductions are sometimes accompanied by reductions in variance, leading to significant overall gains in mean squared estimation error. Empirical applications to US treasury bills highlight the differences between the bond prices implied by the simulation-based approach and those delivered by ML. Some consequences for the statistical testing of contingent-claim pricing models are discussed. |
Keywords: | Bias reduction, Bond pricing, Indirect inference, Option pricing, Simulation-based estimation |
JEL: | C15 G12 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1596&r=ecm |
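The indirect inference idea is easiest to see in a first-order autoregression, which also underlies the discretized Vasicek model. A minimal sketch, assuming a grid search over the binding function; the grid and simulation sizes are illustrative, and the paper applies the method to option and bond prices rather than to ρ directly.

```python
import numpy as np

def ols_ar1(y):
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(rho, n, n_paths, rng):
    y = np.zeros((n_paths, n))
    for t in range(1, n):
        y[:, t] = rho * y[:, t - 1] + rng.standard_normal(n_paths)
    return y

def indirect_inference(rho_hat, n, rng, n_paths=500):
    """Invert the binding function: pick the rho whose mean simulated OLS
    estimate is closest to the estimate obtained on the data."""
    grid = np.linspace(0.5, 0.999, 100)
    binding = np.array([np.mean([ols_ar1(p) for p in
                                 simulate_ar1(r, n, n_paths, rng)])
                        for r in grid])
    return grid[np.abs(binding - rho_hat).argmin()]

rng = np.random.default_rng(4)
y = simulate_ar1(0.95, 80, 1, rng)[0]
rho_ml = ols_ar1(y)                      # downward-biased in short samples
print(rho_ml, indirect_inference(rho_ml, len(y), rng))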
By: | Yixiao Sun (University of California, San Diego) |
Abstract: | This paper proposes and implements a tractable approach to detect group structure in panel data. The mechanism works by means of a panel structure model, which assumes that individuals form a number of homogeneous groups in a heterogeneous population. Within each group, the (linear) regression coefficients are the same, while they may be different across different groups. The econometrician is not presumed to know the group structure. Instead, a multinomial logistic regression is used to infer which individuals belong to which groups. The model is estimated via maximum likelihood. We prove the consistency and asymptotic normality of a global MLE under the mild assumption that the time dimension is larger than the number of regressors in the linear regression. We propose a likelihood ratio test to test the null of one group against the alternative of multiple groups. Simulation studies show that the MLE performs quite well and the likelihood ratio test has good size and power properties in finite samples.
Keywords: | dynamic panel data model; group structure; logistic regression; nonregular test; parameter heterogeneity
Date: | 2005–10–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:2005-11&r=ecm |
By: | Alberto Bisin; Andrea Moro; Giorgio Topa (Microeconomic and Regional Studies, Federal Reserve Bank of New York)
Abstract: | We consider a generic environment with (potentially) multiple equilibria and analyze conditions for identification of the structural parameters. We then study conditions that allow for the estimation of both the structural parameters and the "selected equilibrium". We focus on an "easy to compute" consistent 2-step estimator and use Monte Carlo methods on a model with social interactions to describe its finite sample properties.
Keywords: | multiple equilibria, identification, structural estimation, Monte Carlo simulations
JEL: | C13 C21 J71 |
Date: | 2006–12–03 |
URL: | http://d.repec.org/n?u=RePEc:red:sed006:660&r=ecm |
By: | Ivana Komunjer (University of California, San Diego); Quang Vuong (The Pennsylvania State University) |
Abstract: | In this paper we consider the problem of efficient estimation in conditional quantile models with time series data. Our first result is to derive the semiparametric efficiency bound in time series models of conditional quantiles; this is a nontrivial extension of a large body of work on efficient estimation, which has traditionally focused on models with independent and identically distributed data. In particular, we generalize the bound derived by Newey and Powell (1990) to the case where the data is weakly dependent and heterogeneous. We then proceed by constructing an M-estimator which achieves the semiparametric efficiency bound. Our efficient M-estimator is obtained by minimizing an objective function which depends on a nonparametric estimator of the conditional distribution of the variable of interest rather than its density.
Keywords: | semiparametric efficiency, time series models, dependence, parametric submodels, conditional quantiles
Date: | 2006–10–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:2006-10&r=ecm |
By: | Giovanni Trovato (University of Rome II - Faculty of Economics); Marco Alfò (Università degli Studi La Sapienza; University of Rome II - Faculty of Economics)
Abstract: | The analysis of overdispersed counts has been the focus of a large amount of literature, with the general objective of providing reliable parameter estimates in the presence of heterogeneity or dependence among subjects. In this paper we extend the standard variance component models to the analysis of multivariate counts, defining the dependence among counts through a set of correlated random coefficients. Estimation is carried out by numerical integration through an EM algorithm without parametric assumptions upon the random coefficients distribution. The proposed model is computationally parsimonious and, when applied to a real dataset, seems to produce better results than parametric models. A simulation study has been carried out to investigate the behavior of the proposed models in a series of empirical situations. |
Keywords: | Correlated counts, Multivariate counts, Correlated random effects, Non-parametric ML |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:51&r=ecm |
By: | Peter C.B. Phillips (Cowles Foundation, Yale University); Donggyu Sul (University of Auckland) |
Abstract: | A new panel data model is proposed to represent the behavior of economies in transition allowing for a wide range of possible time paths and individual heterogeneity. The model has both common and individual specific components and is formulated as a nonlinear time varying factor model. When applied to a micro panel, the decomposition provides flexibility in idiosyncratic behavior over time and across section, while retaining some commonality across the panel by means of an unknown common growth component. This commonality means that when the heterogeneous time varying idiosyncratic components converge over time to a constant, a form of panel convergence holds, analogous to the concept of conditional sigma convergence. The paper provides a framework of asymptotic representations for the factor components which enables the development of econometric procedures of estimation and testing. In particular, a simple regression based convergence test is developed, whose asymptotic properties are analyzed under both null and local alternatives, and a new method of clustering panels into club convergence groups is constructed. These econometric methods are applied to analyze convergence in cost of living indices among 19 U.S. metropolitan cities.
Keywords: | Club convergence, Relative convergence, Common factor, Convergence, log t regression test, Panel data, Transition |
JEL: | C33 F21 G12 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1595&r=ecm |
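The log t regression test mentioned in the keywords admits a compact description: compute relative transition paths h_it, their cross-section variance H_t, and regress log(H_1/H_t) - 2 log(log t) on log t over the later part of the sample; convergence is rejected when the one-sided t-statistic on the slope is sufficiently negative. A sketch under those definitions, using plain OLS standard errors where the paper uses HAC ones, with an illustrative simulated panel:

```python
import numpy as np

def log_t_test(y, trim=0.3):
    """Phillips-Sul-type log t regression on an N x T panel of positive
    levels: returns the slope and its (plain OLS) t-statistic."""
    N, T = y.shape
    h = y / y.mean(axis=0)                    # relative transition paths
    H = ((h - 1)**2).mean(axis=0)             # cross-section variance
    t = np.arange(int(trim * T), T)           # discard the early sample
    dep = np.log(H[0] / H[t]) - 2 * np.log(np.log(t + 1.0))
    X = np.column_stack([np.ones(len(t)), np.log(t + 1.0)])
    b = np.linalg.lstsq(X, dep, rcond=None)[0]
    u = dep - X @ b
    se = np.sqrt(u @ u / (len(t) - 2) * np.linalg.inv(X.T @ X)[1, 1])
    return b[1], b[1] / se

# converging panel: idiosyncratic deviations die out at a power rate
rng = np.random.default_rng(5)
N, T = 30, 100
tt = np.arange(1, T + 1)
delta = rng.uniform(-1.0, 1.0, (N, 1))
y = np.exp(0.02 * tt) * (1 + 0.5 * delta * tt**-0.5)
print(log_t_test(y))   # positive slope: convergence not rejected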
By: | John K. Dagsvik, Torbjørn Hægeland and Arvid Raknerud (Statistics Norway) |
Abstract: | In this paper we develop a full information maximum likelihood method for the estimation of a joint model for the choice of length of schooling and the corresponding earnings equation. The model for schooling is assumed to be an ordered probit model, whereas the earnings equation is allowed to be very general with explanatory variables that are flexible transformations of schooling and experience. The coefficients associated with length of schooling and experience are allowed to be random and all the random terms of the model may be correlated. Under normality assumptions, we show that the joint probability distribution for schooling and earnings can be expressed in closed form that is tractable for empirical analysis.
Keywords: | Schooling choice; earnings equation; treatment effects; self-selection; ordered probit; random coefficients; full information maximum likelihood |
JEL: | C31 I20 J30 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:486&r=ecm |
By: | Dimitris Politis (University of California, San Diego) |
Abstract: | A new class of HAC covariance matrix estimators is proposed based on the notion of a flat-top kernel as in Politis and Romano (1995) and Politis (2001). The new estimators are shown to be higher-order accurate when higher-order accuracy is possible, and a discussion on kernel choice is given. The higher-order accuracy of flat-top kernel estimators typically comes at the sacrifice of the positive semi-definite property. Nevertheless, we show how a modified flat-top estimator is positive semi-definite while maintaining its higher-order accuracy. In addition, an automatic and consistent procedure for optimal bandwidth choice for flat-top kernel HAC estimators is given. The general problem of spectral matrix estimation is also treated.
Keywords: | HAC estimation, spectral estimation
Date: | 2005–03–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:2005-03&r=ecm |
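A flat-top kernel is identically one in a neighborhood of the origin, which is what removes the usual smoothing bias; the trapezoidal member from the cited Politis-Romano work gives a simple scalar long-run-variance sketch. The zero-truncation below is a crude positivity fix, not the paper's modified estimator, and the AR(1) design is illustrative.

```python
import numpy as np

def flat_top_kernel(x):
    """Trapezoidal flat-top kernel: 1 on [0, 1/2], linear down to 0 at 1."""
    ax = np.abs(x)
    return np.where(ax <= 0.5, 1.0,
                    np.where(ax <= 1.0, 2.0 * (1.0 - ax), 0.0))

def flat_top_lrv(u, M):
    """Scalar HAC / long-run variance: sum_k lambda(k/M) * gamma_hat(k).
    Not guaranteed positive, hence the crude truncation at zero."""
    u = u - u.mean()
    T = len(u)
    gammas = np.array([u[:T - k] @ u[k:] / T for k in range(T)])
    w = flat_top_kernel(np.arange(T) / M)
    return max(gammas[0] + 2 * (w[1:] * gammas[1:]).sum(), 0.0)

# AR(1) errors: the true long-run variance is 1/(1 - phi)^2 = 25
rng = np.random.default_rng(6)
phi, T = 0.8, 2000
u = np.zeros(T)
for t in range(1, T):
    u[t] = phi * u[t - 1] + rng.standard_normal()
print(flat_top_lrv(u, M=40), 1 / (1 - phi)**2)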
By: | Caporin Massimiliano (Department of Economics, University of Padova, Italy); Paruolo Paolo (Department of Economics, University of Insubria, Italy) |
Abstract: | This paper proposes a new approach for the specification of multivariate GARCH models for data sets with a potentially large cross-section dimension. The approach exploits the spatial dependence structure associated with asset characteristics, like industrial sectors and capitalization size. We use the acronym SEARCH for this model, short for Spatial Effects in ARCH. This parametrization extends current feasible specifications for large scale GARCH models, while keeping the number of parameters linear with respect to the number of assets. An application to daily returns on 20 stocks from the NYSE for the period January 1994 to June 2001 shows the benefits of the present specification.
JEL: | C32 C51 C52 |
Date: | 2005–05 |
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0501&r=ecm |
By: | Jeremy Berkowitz (University of Houston); Peter Christoffersen (McGill University); Denis Pelletier (Department of Economics, North Carolina State University) |
Abstract: | We present new evidence on disaggregated profit and loss and VaR forecasts obtained from a large international commercial bank. Our dataset includes daily P/L generated by four separate business lines within the bank. All four business lines are involved in securities trading and each is observed daily for a period of at least two years. Given this rich dataset, we provide an integrated, unifying framework for assessing the accuracy of VaR forecasts. A thorough Monte Carlo comparison of the various methods is conducted to provide guidance as to which of these many tests have the best finite-sample size and power properties. The CAViaR test of Engle and Manganelli (2004) performs best overall but duration-based tests also perform well in many cases.
Keywords: | risk management, backtesting, volatility, disclosure |
JEL: | G21 G32 |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:ncs:wpaper:010&r=ecm |
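As a point of reference for the backtests compared above, the simplest member of the family is the unconditional coverage likelihood-ratio test (a Kupiec-type test, not the CAViaR or duration tests themselves). A minimal sketch with simulated violation indicators:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_test(hits, alpha):
    """Unconditional coverage LR test: do VaR violations occur with the
    promised frequency alpha? hits is a 0/1 array with 1 whenever the
    day's loss exceeded the reported VaR."""
    n, x = len(hits), int(hits.sum())
    pi = x / n
    if pi in (0.0, 1.0):
        return np.nan, np.nan
    lr = -2 * (x * np.log(alpha / pi)
               + (n - x) * np.log((1 - alpha) / (1 - pi)))
    return lr, chi2.sf(lr, df=1)   # statistic and asymptotic p-value

# a VaR model that is too conservative: reported 1% VaR, true rate 0.2%
rng = np.random.default_rng(7)
hits = rng.random(500) < 0.002
print(kupiec_test(hits, alpha=0.01))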
By: | Sergio Firpo (Department of Economics, PUC-Rio); Nicole M. Fortin (University of British Columbia); Thomas Lemieux (University of British Columbia)
Abstract: | We propose a new regression method to estimate the impact of explanatory variables on quantiles of the unconditional distribution of an outcome variable. The proposed method consists of running a regression of the (recentered) influence function (RIF) of the unconditional quantile on the explanatory variables. The influence function is a widely used tool in robust estimation that can easily be computed for each quantile of interest. We show how standard partial effects, as well as policy effects, can be estimated using our regression approach. We propose three different regression estimators based on a standard OLS regression (RIF-OLS), a Logit regression (RIF-Logit), and a nonparametric Logit regression (RIF-NP). We also discuss how our approach can be generalized to other distributional statistics besides quantiles.
Keywords: | Influence Functions, Unconditional Quantile, Quantile Regressions. |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:rio:texdis:533&r=ecm |
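For the quantile case, the recentered influence function has the closed form RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau), so RIF-OLS amounts to replacing the outcome by this transform and running least squares. A sketch, with the density at the quantile estimated by a Gaussian KDE (an implementation choice, not necessarily the authors'):

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_ols(y, X, tau):
    """RIF-OLS: regress the recentered influence function of the tau-th
    unconditional quantile on the covariates."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]               # density at the quantile
    rif = q + (tau - (y <= q)) / f_q          # closed-form RIF transform
    Xc = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xc, rif, rcond=None)[0]  # intercept first

rng = np.random.default_rng(8)
n = 2000
x = rng.standard_normal(n)
y = 1 + 0.5 * x + (1 + 0.5 * np.abs(x)) * rng.standard_normal(n)
print(rif_ols(y, x, tau=0.9))   # partial effect at the 90th percentile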
By: | Hidehiko Ichimura (Faculty of Economics, University of Tokyo); Petra E. Todd (Department of Economics, University of Pennsylvania) |
Abstract: | This chapter reviews recent advances in nonparametric and semiparametric estimation, with an emphasis on applicability to empirical research and on resolving issues that arise in implementation. It considers techniques for estimating densities, conditional mean functions, derivatives of functions and conditional quantiles in a flexible way that imposes minimal functional form assumptions. The chapter begins by illustrating how flexible modeling methods have been applied in empirical research, drawing on recent examples of applications from labor economics, consumer demand estimation and treatment effects models. Then, key concepts in semiparametric and nonparametric modeling are introduced that do not have counterparts in parametric modeling, such as the so-called curse of dimensionality, the notion of models with an infinite number of parameters, the criteria used to define optimal convergence rates, and "dimension-free" estimators. After defining these new concepts, a large literature on nonparametric estimation is reviewed and a unifying framework presented for thinking about how different approaches relate to one another. Local polynomial estimators are discussed in detail and their distribution theory is developed. The chapter then shows how nonparametric estimators form the building blocks for many semiparametric estimators, such as estimators for average derivatives, index models, partially linear models, and additively separable models. Semiparametric methods offer a middle ground between fully nonparametric and parametric approaches. Their main advantage is that they typically achieve faster rates of convergence than fully nonparametric approaches. In many cases, they converge at the parametric rate. The second part of the chapter considers in detail two issues that are central with regard to implementing flexible modeling methods: how to select the values of smoothing parameters in an optimal way and how to implement "trimming" procedures. It also reviews newly developed techniques for deriving the distribution theory of semiparametric estimators. The chapter concludes with an overview of approximation methods that speed up the computation of nonparametric estimates and make flexible estimation feasible even in very large size samples. |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2006cf452&r=ecm |
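As one concrete instance of the local polynomial estimators the chapter develops, a local linear regression fits a kernel-weighted line at each evaluation point and keeps the intercept. A minimal sketch with a Gaussian kernel and an illustrative bandwidth:

```python
import numpy as np

def local_linear(x, y, grid, h):
    """Local linear estimator of m(x) = E[y|x]: at each grid point, run a
    kernel-weighted regression of y on (x - x0) and keep the intercept."""
    fits = np.empty(len(grid))
    for i, x0 in enumerate(grid):
        w = np.exp(-0.5 * ((x - x0) / h)**2)   # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        XtW = X.T * w
        fits[i] = np.linalg.solve(XtW @ X, XtW @ y)[0]
    return fits

rng = np.random.default_rng(10)
x = rng.uniform(-2, 2, 400)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(400)
grid = np.linspace(-1.5, 1.5, 7)
print(local_linear(x, y, grid, h=0.25))   # close to sin(2 * grid)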
By: | Timothy J. Halliday (Department of Economics, University of Hawaii at Manoa; John A. Burns School of Medicine) |
Abstract: | We consider the identification of state dependence in a dynamic Logit model with time-variant transition probabilities and an arbitrary distribution of the unobserved heterogeneity. We derive a simple result that allows us to test for the presence of state dependence in this model. Monte Carlo evidence suggests that this test has desirable properties even when there are some violations of the model’s assumptions. We also consider alternative tests for state dependence that will have desirable properties only when the transition probabilities do not depend on time and provide evidence that there is an "acceptable" range in which ignoring time-dependence does not matter too much. We conclude with an application to the Barker Hypothesis.
Keywords: | Dynamic Panel Data Models, State Dependence, Health |
Date: | 2006–12–31 |
URL: | http://d.repec.org/n?u=RePEc:hai:wpaper:200614&r=ecm |
By: | Richard Carson; Jordan Louviere (School of Marketing, University of Technology, Sydney) |
Abstract: | Consideration sets have become a central concept in the study of consumer behavior. Frequently, consumers are asked to split choice alternatives into those that they would consider and those that they would not. Information on alternatives not in the consideration set is then typically not used in subsequent analysis. This practice is shown to lead to biased estimates of preference parameters. The reason for this is shown to be a form of sample selection bias.
Keywords: | choice models, random utility, sample selection bias
Date: | 2006–07–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:2006-07&r=ecm |
By: | Hisayuki Tsukuma (Department of Medical Informatics, Toho University); Tatsuya Kubokawa (Faculty of Economics, University of Tokyo) |
Abstract: | This paper treats the problem of simultaneously estimating the precision matrices in multivariate normal distributions. A condition for improvement on the unbiased estimators of the precision matrices is derived under a quadratic loss function. The improvement condition is similar to the superharmonic condition established by Stein (1981). The condition allows us not only to provide various alternative estimators such as shrinkage type and enlargement type estimators for the unbiased estimators, but also to present a condition on a prior density under which the resulting generalized Bayes estimators dominate the unbiased estimators. Also, a unified method improving upon both the shrinkage and the enlargement type estimators is discussed.
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2006cf459&r=ecm |
By: | Victor Aguirregabiria (Department of Economics, Boston University)
Abstract: | This paper presents a method to estimate the effects of a counterfactual policy intervention in the context of dynamic structural models where all the structural functions (i.e., preferences, technology, transition probabilities, and the distribution of unobservable variables) are nonparametrically specified. We show that agents' behavior, before and after the policy intervention, and the change in agents' utility are nonparametrically identified. Based on this result we propose a nonparametric procedure to estimate the behavioral and welfare effects of a general class of counterfactual policy interventions. We apply this method to evaluate hypothetical reforms in the rules of a public pension system using a model of retirement behavior and a sample of blue-collar workers in Sweden.
Keywords: | Dynamic discrete decision processes; Nonparametric identification; Counterfactual policy interventions; Retirement behavior. |
JEL: | C14 C25 J26 |
Date: | 2006–12–03 |
URL: | http://d.repec.org/n?u=RePEc:red:sed006:169&r=ecm |
By: | Fonseca Giovanni (Department of Economics, University of Insubria, Italy) |
Abstract: | In the present paper we study the stability of a threshold continuous-time model that belongs to the class of Piecewise Deterministic Markov Processes. We derive a sufficient condition on the coefficients of the model to ensure the exponential ergodicity of the process under two different assumptions on the jumps.
Keywords: | Threshold process, Compound Poisson Process, Stationary process, Ergodicity. |
Date: | 2005–05 |
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0502&r=ecm |
By: | John K. Dagsvik and Gang Liu (Statistics Norway) |
Abstract: | In this paper we develop a framework for analyzing panel data with observations on rank ordered alternatives that allows for correlated random taste shifters across time and across alternatives. As a special case we obtain a nested logit type model for rank ordered alternatives. We have applied this framework to estimate several model versions for household demand for conventional and alternative fuel automobiles in Shanghai based on rank ordered data obtained from a stated preference survey. The preferred model is then used to calculate demand probabilities and elasticities and the willingness-to-pay for alternative fuel vehicles.
Keywords: | Random utility models; Nested rank ordered logit models; Automobile demand; Alternative fuel vehicles |
JEL: | C25 C33 L92 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:480&r=ecm |
By: | Manfred M. Fischer; Daniel A. Griffith |
Abstract: | The need to account for spatial autocorrelation is well known in spatial analysis. Many spatial statistics and spatial econometric texts detail the way spatial autocorrelation can be identified and modelled in the case of object and field data. The literature on spatial autocorrelation is much less developed in the case of spatial interaction data. The focus of interest in this paper is on the problem of spatial autocorrelation in a spatial interaction context. The paper aims to illustrate that eigenfunction-based spatial filtering offers a powerful methodology that can efficiently account for spatial autocorrelation effects within a Poisson spatial interaction model context, serving to identify and measure spatial separation effects on interregional knowledge spillovers as captured by patent citations among high-technology firms in Europe.
Date: | 2006–08 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa06p10&r=ecm |
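Eigenfunction-based filtering typically works with eigenvectors of the doubly-centered spatial weight matrix, which are then added as regressors to the Poisson model. A toy sketch on a ring of regions; the weight matrix, the number of filters, and the bare-bones IRLS fitter are all illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def moran_eigenvectors(W, k):
    """Leading eigenvectors of M @ W @ M with M = I - 11'/n: the spatial
    filters, added as regressors to absorb spatial autocorrelation."""
    n = W.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(M @ ((W + W.T) / 2) @ M)
    order = np.argsort(vals)[::-1]      # largest Moran-type eigenvalues
    return vecs[:, order[:k]]

def poisson_irls(X, y, n_iter=25):
    """Bare-bones Poisson regression (log link) via Newton/IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return beta

# toy count data on a ring of 20 regions with ring-contiguity weights
rng = np.random.default_rng(9)
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0
E = moran_eigenvectors(W, k=3)
x = rng.standard_normal(n)
y = rng.poisson(np.exp(0.3 * x + E[:, 0]))   # spatially patterned counts
X = np.column_stack([np.ones(n), x, E])      # filters enter the design
print(poisson_irls(X, y))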
By: | Cerqueti, Roy; Costantini, Mauro |
Abstract: | This paper presents new results on the rational bubbles hypothesis for a panel of 9 OECD countries using the Campbell, Lo and MacKinlay (1997) model. The contribution offered by this paper is an analysis of international data that exploits the increased power deriving from the panel unit root and cointegration methodology, together with the flexibility of allowing explicitly for multiple endogenous structural breaks in the individual series. Differently from the time series methodology, the panel data approach allows for a global analysis of the financial crashes that are related to rational bubbles. Strong evidence in favor of bubble phenomena is found.
Keywords: | Panel data, Co-integration, International Financial markets, Rational bubbles.
JEL: | C12 C33 G15
Date: | 2006–12–21 |
URL: | http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp06030&r=ecm |
By: | Rolf Aaberge (Statistics Norway) |
Abstract: | The purpose of this paper is to justify the use of the Gini coefficient and two close relatives for summarizing the basic information of inequality in distributions of income. To this end we employ a specific transformation of the Lorenz curve, the scaled conditional mean curve, rather than the Lorenz curve as the basic formal representation of inequality in distributions of income. The scaled conditional mean curve is shown to possess several attractive properties as an alternative interpretation of the information content of the Lorenz curve and furthermore proves to yield essential information on polarization in the population. The paper also provides asymptotic distribution results for the empirical scaled conditional mean curve and the related family of empirical measures of inequality. |
Keywords: | The scaled conditional mean curve; measures of inequality; the Gini coefficient; the Bonferroni coefficient; measures of social welfare; principles of transfer sensitivity; estimation; asymptotic distributions. |
JEL: | D3 D63 |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:491&r=ecm |
By: | Genaro Sucarrat (Université catholique de Louvain, Department of Economics)
Abstract: | The reduction theory of David F. Hendry provides a comprehensive probabilistic framework for the analysis and classification of the reductions associated with empirical econometric models. However, it is unable to provide an analysis on the same underlying probability space of the first reduction - and hence the subsequent reductions - given a commonplace theory of social reality, namely the joint hypotheses that the course of history is indeterministic, that history does not repeat itself, and that the future depends on the past. As a solution this essay proposes that the elements of the underlying outcome space in Hendry's theory are interpreted as indeterministic worlds made up of historically inherited particulars.
Keywords: | Theory of reduction, DGP, Possible worlds, Measurement error, Probabilistic causality
JEL: | B40 C50 |
Date: | 2006–09–15 |
URL: | http://d.repec.org/n?u=RePEc:ctl:louvec:2006041&r=ecm |
By: | Marie Cottrell (SAMOS - Statistique Appliquée et MOdélisation Stochastique - [Université Panthéon-Sorbonne - Paris I], MATISSE - Modélisation Appliquée, Trajectoires Institutionnelles et Stratégies Socio-Économiques - [CNRS : UMR8595] - [Université Panthéon-Sorbonne - Paris I]); Patrice Gaubert (SAMOS - Statistique Appliquée et MOdélisation Stochastique - [Université Panthéon-Sorbonne - Paris I], MATISSE - Modélisation Appliquée, Trajectoires Institutionnelles et Stratégies Socio-Économiques - [CNRS : UMR8595] - [Université Panthéon-Sorbonne - Paris I], LEMMA - LEMMA - [Université du Littoral Côte d'Opale]) |
Abstract: | Pseudo panels constituted from repeated cross-sections are good substitutes for true panel data. But the individuals grouped in a cohort are not the same in successive periods, which results in measurement error and inconsistent estimators. The solution is to constitute cohorts that contain large numbers of individuals yet are as homogeneous as possible. This paper explains a new way to do this: by using a self-organizing map, whose properties are well suited to achieving these objectives. It is applied to a set of Canadian surveys in order to estimate income elasticities for 18 consumption functions.
Keywords: | Pseudo panels; self-organizing maps
Date: | 2007–01–05 |
URL: | http://d.repec.org/n?u=RePEc:hal:papers:hal-00122817_v1&r=ecm |
By: | Carlos Carmona (University of California, San Diego) |
Abstract: | Inflation forecasts of the Federal Reserve systematically under-predicted inflation before Volcker and systematically over-predicted it afterward. Furthermore, under quadratic loss, commercial forecasts have information not contained in those forecasts. To investigate the cause, this paper recovers the loss function implied by the Federal Reserve's forecasts. It finds that the cost of having inflation above an implicit time-varying target was larger than the cost of having inflation below it for the period since Volcker, and that the opposite was true for the pre-Volcker era. Once these asymmetries are taken into account, the Federal Reserve is found to be rational.
Keywords: | Inflation Forecasts, Asymmetric Loss, Federal Reserve
JEL: | C53 E52
Date: | 2005–07–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:2005-05&r=ecm |