nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒11‒14
29 papers chosen by
Sune Karlsson
Orebro University

  1. Small area estimation on poverty indicators By Isabel Molina; J.N.K. Rao
  2. Testing the null hypothesis of no regime switching with an application to GDP growth rates By Marmer, Vadim
  3. Predictive density construction and accuracy testing with multiple possibly misspecified diffusion models By Valentina Corradi; Norman R. Swanson
  4. Unconditional Quantile Regression for Panel Data with Exogenous or Endogenous Regressors By David Powell
  5. The ‘Puzzles’ Methodology: En Route to Indirect Inference? By Vo Phuong Mai Le; Patrick Minford; Michael Wickens
  6. Robust Inference with Multi-way Clustering By Miller, Douglas; Cameron, A. Colin; Gelbach, Jonah
  7. Nonlinearity, Nonstationarity, and Spurious Forecasts By Marmer, Vadim
  8. A Simple GMM Estimator for the Semi-Parametric Mixed Proportional Hazard Model By Bijwaard, Govert; Ridder, Geert
  9. Forecasting Inflation Using Dynamic Model Averaging By Gary Koop; Dimitris Korobilis
  10. A simple improvement of the IV estimator for the classical errors-in-variables problem By Andersson, Jonas; Møen, Jarle
  11. On Loss Functions and Ranking Forecasting Performances of Multivariate Volatility Models By Sébastien Laurent; Jeroen V.K. Rombouts; Francesco Violante
  12. Modelling Realized Covariances By Xin Jin; John M Maheu
  13. On the Probability Distribution of Economic Growth By Öller, L-E; Stockhammar, P
  14. What Belongs Where? Variable Selection for Zero-Inflated Count Models with an Application to the Demand for Health Care By Markus Jochmann
  15. Density forecasting of the Dow Jones share index By Öller, L-E; Stockhammar, P
  16. Real-time datasets really do make a difference: definitional change, data release, and forecasting By Andres Fernandez; Norman R. Swanson
  17. Nowcasting Euro Area Economic Activity in Real-Time: The Role of Confidence Indicators By Domenico Giannone; Lucrezia Reichlin; Saverio Simonelli
  18. On economic evaluation of directional forecasts By Oliver Blaskowitz; Helmut Herwartz
  19. Sample selection correction in panel data models when selectivity is due to two sources By Di Novi, Cinzia
  20. Extremal behavior of aggregated economic processes in a structural growth model By Stéphane Auray; Aurélien Eyquem; Frédéric Jouneau-Sion
  21. "Identification of Stochastic Sequential Bargaining Models" By Antonio Merlo; Xun Tang
  22. "Interdependent Durations" Third Version By Bo E. Honoré; Aureo de Paula
  23. Large-scale portfolios using realized covariance matrix: evidence from the Japanese stock market By Masato Ubukata
  24. Generalized canonical correlation analysis with missing values By Velden, M. van de; Takane, Y.
  25. Modeling Asymmetric Volatility Clusters Using Copulas and High Frequency Data By Cathy Ning; Dinghai Xu; Tony Wirjanto
  26. Illegal Migration, Wages, and Remittances: Semi-Parametric Estimation of Illegality Effects By Schluter, Christian; Wahba, Jackline
  27. Testable implications of general equilibrium models: an integer programming approach By Laurens CHERCHYE; Thomas DEMUYNCK; Bram DE ROCK
  28. On the equivalence of location choice models: conditional logit, nested logit and Poisson By Kurt Schmidheiny; Marius Brülhart
  29. Accounting for Respondent Uncertainty to Improve Willingness-to-Pay Estimates By Moore, Rebecca; Bishop, Richard C.; Provencher, Bill; Champ, Patricia

  1. By: Isabel Molina; J.N.K. Rao
    Abstract: We propose to estimate non-linear small area population quantities by using Empirical Best (EB) estimators based on a nested error model. EB estimators are obtained by Monte Carlo approximation. We focus on poverty indicators as particular non-linear quantities of interest, but the proposed methodology is applicable to general non-linear quantities. Small sample properties of EB estimators are analyzed by model-based and design-based simulation studies. Results show large reductions in mean squared error relative to direct estimators and estimators obtained by simulated censuses. An application is also given to estimate poverty incidences and poverty gaps in Spanish provinces by sex with mean squared errors estimated by parametric bootstrap. In the Spanish data, results show a significant reduction in coefficient of variation of the proposed EB estimators over direct estimators for most domains.
    Keywords: Empirical best estimator, Parametric bootstrap, Poverty mapping, Small area estimation
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws091505&r=ecm
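    Sketch: a minimal Python illustration of the Monte Carlo EB step for the poverty incidence (head-count ratio) under the nested error model. The parameter values, poverty line and toy data below are assumed for the example; the paper estimates the parameters from the sample and obtains MSEs by parametric bootstrap.

      import numpy as np

      rng = np.random.default_rng(0)
      beta = np.array([1.0, 0.5])          # assumed regression coefficients
      s2u, s2e, z = 0.25, 1.0, 1.2         # assumed variances and poverty line

      def eb_poverty_incidence(Xs, ys, Xr, L=500):
          """EB estimate of the head-count ratio for one area.
          Xs, ys: sampled units; Xr: covariates of non-sampled units."""
          n = len(ys)
          gamma = s2u / (s2u + s2e / n)            # shrinkage factor
          mu_u = gamma * np.mean(ys - Xs @ beta)   # E[u_d | sample]
          var_u = s2u * (1.0 - gamma)              # Var[u_d | sample]
          F0 = 0.0
          for _ in range(L):                       # L Monte Carlo censuses
              u = rng.normal(mu_u, np.sqrt(var_u))
              yr = Xr @ beta + u + rng.normal(0.0, np.sqrt(s2e), len(Xr))
              F0 += np.mean(np.concatenate([ys, yr]) < z)
          return F0 / L                            # EB poverty incidence

      # toy area: 20 sampled units out of a population of 500
      Xs = np.column_stack([np.ones(20), rng.normal(size=20)])
      Xr = np.column_stack([np.ones(480), rng.normal(size=480)])
      ys = Xs @ beta + rng.normal(0, np.sqrt(s2u)) + rng.normal(0, np.sqrt(s2e), 20)
      print(eb_poverty_incidence(Xs, ys, Xr))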
  2. By: Marmer, Vadim
    Abstract: This paper presents tests for the null hypothesis of no regime switching in Hamilton's (1989) regime switching model. The test procedures exploit similarities between regime switching models, autoregressions with measurement errors, and finite mixture models. The proposed tests are computationally simple and, contrary to likelihood-based tests, have a standard distribution under the null. When the methodology is applied to US GDP growth rates, no strong evidence of regime switching is found.
    Keywords: regime switching, LM tests, GMM, matching methods, GDP growth rates
    Date: 2009–11–02
    URL: http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer-2009-59&r=ecm
  3. By: Valentina Corradi; Norman R. Swanson
    Abstract: This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, the authors first outline a simple simulation-based framework for constructing predictive densities for one-factor and stochastic volatility models. Then, they construct accuracy assessment tests that are in the spirit of Diebold and Mariano (1995) and White (2000). In order to establish the asymptotic properties of their tests, the authors also develop a recursive variant of the nonparametric simulated maximum likelihood estimator of Fermanian and Salanié (2004). In an empirical illustration, the predictive densities from several models of the one-month federal funds rates are compared.
    Keywords: Econometric models - Evaluation ; Stochastic analysis
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:09-29&r=ecm
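    Sketch: the accuracy tests are in the spirit of Diebold and Mariano (1995); the basic DM statistic on two series of predictive losses looks as follows. The paper's actual tests additionally handle model misspecification and recursive estimation, which this simple statistic does not.

      import numpy as np
      from scipy import stats

      def diebold_mariano(loss1, loss2, h=1):
          """Test of equal predictive accuracy from two loss series;
          for h-step forecasts use a Bartlett HAC variance up to lag h-1."""
          d = np.asarray(loss1) - np.asarray(loss2)
          T, dbar = len(d), d.mean()
          lrv = np.mean((d - dbar) ** 2)
          for k in range(1, h):
              cov = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
              lrv += 2.0 * (1.0 - k / h) * cov
          dm = dbar / np.sqrt(lrv / T)
          return dm, 2.0 * stats.norm.sf(abs(dm))   # statistic, p-value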
  4. By: David Powell
    Abstract: Quantile treatment effects are difficult to estimate in the presence of fixed effects. Panel data are used when fixed effects or differences are necessary to identify the parameters of interest. The inclusion of fixed effects or differencing the data, however, redefines the quantiles. This paper introduces a quantile estimator for panel data which conditions on the fixed effect for identification purposes but allows the parameters to be interpreted in the same manner as cross-sectional quantile estimates. The quantiles are unconditional in the fixed effect and are defined by the "total residual," including the fixed effect.
    Keywords: Quantile regression, panel data, fixed effects, instrumental variables
    JEL: C13 C31 C33 C51
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:ran:wpaper:710&r=ecm
  5. By: Vo Phuong Mai Le; Patrick Minford; Michael Wickens
    Abstract: We review the methods used in many papers to evaluate DSGE models by comparing their simulated moments with data moments. We compare these with the method of Indirect Inference to which they are closely related. We illustrate the comparison with contrasting assessments of a two-country model in two recent papers. We conclude that Indirect Inference is the proper end point of the puzzles methodology.
    Keywords: Bootstrap, US-EU Model, DSGE, VAR, Indirect Inference, Wald Statistic, Anomaly, Puzzle.
    JEL: C12 C32 C52 E1
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:san:cdmacp:0903&r=ecm
  6. By: Miller, Douglas (University of California, Davis); Cameron, A. Colin (University of California, Davis); Gelbach, Jonah (University of Arizona)
    Abstract: In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to studies in the empirical literature where two-way clustering is present.
    JEL: C12 C21 C23
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:ecl:ucdeco:09-9&r=ecm
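    Sketch: for OLS the two-way cluster-robust variance adds the one-way sandwich over each clustering dimension and subtracts the sandwich over their intersection. A minimal version, without the finite-sample corrections used in practice:

      import numpy as np

      def meat(X, u, groups):
          """Sum over clusters of (sum_i x_i u_i)(sum_i x_i u_i)'."""
          M = np.zeros((X.shape[1], X.shape[1]))
          for g in np.unique(groups):
              s = (X[groups == g] * u[groups == g, None]).sum(axis=0)
              M += np.outer(s, s)
          return M

      def twoway_cluster_vcov(X, y, g1, g2):
          bread = np.linalg.inv(X.T @ X)
          u = y - X @ (bread @ X.T @ y)                         # OLS residuals
          g12 = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])  # intersection
          M = meat(X, u, g1) + meat(X, u, g2) - meat(X, u, g12)
          return bread @ M @ bread          # V_g1 + V_g2 - V_(g1 ∩ g2)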
  7. By: Marmer, Vadim
    Abstract: Implications of nonlinearity, nonstationarity and misspecification are considered from a forecasting perspective. Our model allows for small departures from the martingale difference sequence hypothesis by including a nonlinear component, formulated as a general, integrable transformation of the I(1) predictor. We assume that the true generating mechanism is unknown to the econometrician, who is therefore forced to use some approximating functions. It is shown that in this framework linear regression techniques lead to spurious forecasts. Improvements in forecast accuracy are possible with properly chosen nonlinear transformations of the predictor. The paper derives the limiting distribution of the forecasts' MSE. In the case of square integrable approximants, it depends on the L₂-distance between the nonlinear component and the approximating function. Optimal forecasts are available for a given class of approximants.
    Keywords: Forecasting; integrated time series; misspecified models; nonlinear transformations; stock returns
    Date: 2009–11–03
    URL: http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer-2009-60&r=ecm
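    Sketch: a toy simulation of the setup with an assumed DGP (ours, not the author's): y depends on an integrable transform of an I(1) predictor, and we compare the out-of-sample MSE of a linear approximant with that of a properly chosen integrable one.

      import numpy as np

      rng = np.random.default_rng(1)
      T = 2000
      x = np.cumsum(rng.normal(size=T))        # I(1) predictor
      f = lambda s: np.exp(-s ** 2)            # integrable nonlinear component
      y = f(x[:-1]) + 0.1 * rng.normal(size=T - 1)

      split = (T - 1) // 2                     # estimate on the first half
      approximants = {
          "linear":     lambda s: np.column_stack([np.ones_like(s), s]),
          "integrable": lambda s: f(s)[:, None],
      }
      for name, g in approximants.items():
          b = np.linalg.lstsq(g(x[:split]), y[:split], rcond=None)[0]
          mse = np.mean((y[split:] - g(x[split:T - 1]) @ b) ** 2)
          print(name, round(mse, 4))           # compare out-of-sample MSEs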
  8. By: Bijwaard, Govert (NIDI - Netherlands Interdisciplinary Demographic Institute); Ridder, Geert (University of Southern California)
    Abstract: Ridder and Woutersen (2003) have shown that under a weak condition on the baseline hazard there exist root-N consistent estimators of the parameters in a semiparametric Mixed Proportional Hazard model with a parametric baseline hazard and unspecified distribution of the unobserved heterogeneity. We extend the Linear Rank Estimator (LRE) of Tsiatis (1990) and Robins and Tsiatis (1991) to this class of models. The optimal LRE is a two-step estimator. We propose a simple first-step estimator that is close to optimal if there is no unobserved heterogeneity. The efficiency gain associated with the optimal LRE increases with the degree of unobserved heterogeneity.
    Keywords: mixed proportional hazard, linear rank estimation, counting process
    JEL: C41 C14
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4543&r=ecm
  9. By: Gary Koop (Department of Economics, University of Strathclyde and RCEA); Dimitris Korobilis (Department of Economics, University of Strathclyde and RCEA)
    Abstract: There is a large literature on forecasting inflation using the generalized Phillips curve (i.e. using forecasting models where inflation depends on past inflation, the unemployment rate and other predictors). The present paper extends this literature through the use of econometric methods which incorporate dynamic model averaging. These not only allow the coefficients to change over time (i.e. the marginal effect of a predictor for inflation can change), but also allow the entire forecasting model to change over time (i.e. different sets of predictors can be relevant at different points in time). In an empirical exercise involving quarterly US inflation, we find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark approaches (e.g. random walk or recursive OLS forecasts) and more sophisticated approaches such as those using time-varying coefficient models.
    Keywords: Inflation forecasting, dynamic model averaging, time-varying coefficients, Phillips curve
    JEL: E31 E37 C11 C53
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:34_09&r=ecm
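    Sketch: a stripped-down dynamic model averaging recursion with a forgetting factor, where recursive OLS stands in for the paper's time-varying coefficient models (a simplification of ours). Model probabilities are discounted, used to average the candidate forecasts, then updated by realized predictive likelihoods.

      import numpy as np
      from scipy import stats

      def dma_forecast(y, X_list, alpha=0.99, t0=20):
          """One-step DMA forecasts from K candidate regressions."""
          K, n = len(X_list), len(y)
          prob = np.full(K, 1.0 / K)            # pi_{t-1|t-1}
          fc = np.full(n, np.nan)
          for t in range(t0, n):
              preds, liks = np.empty(K), np.empty(K)
              for k, X in enumerate(X_list):
                  b = np.linalg.lstsq(X[:t], y[:t], rcond=None)[0]
                  e = y[:t] - X[:t] @ b
                  s = np.sqrt(e @ e / max(t - X.shape[1], 1))
                  preds[k] = X[t] @ b
                  liks[k] = stats.norm.pdf(y[t], preds[k], s)
              w = prob ** alpha                 # forgetting: pi_{t|t-1}
              w /= w.sum()
              fc[t] = w @ preds                 # DMA forecast of y[t]
              prob = w * liks                   # Bayes update: pi_{t|t}
              prob /= prob.sum()
          return fc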
  10. By: Andersson, Jonas (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Møen, Jarle (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration)
    Abstract: Two measures of an error-ridden explanatory variable make it possible to solve the classical errors-in-variables problem by using one measure as an instrument for the other. It is well known that a second IV estimate can be obtained by reversing the roles of the two measures. We explore a simple estimator, a linear combination of these two estimates, that minimizes the asymptotic mean squared error. In a Monte Carlo study we show that the gain in precision is significant compared to using only one of the original IV estimates. The proposed estimator also compares well with full information maximum likelihood under normality.
    Keywords: Measurement errors; Classical Errors-in-Variables; multiple indicator method; Instrumental variable techniques
    JEL: C13 C30 C80
    Date: 2009–09–15
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2009_010&r=ecm
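    Sketch: the two IV slopes and their minimum-variance linear combination. Here the combination weight is backed out of a bootstrap covariance of the two estimates, a crude stand-in for the paper's analytic asymptotic-MSE weight; the data are simulated toy values.

      import numpy as np

      def iv_slope(y, x, z):
          return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

      rng = np.random.default_rng(2)
      n, beta = 2000, 1.5
      xstar = rng.normal(size=n)                  # latent true regressor
      z1 = xstar + 0.5 * rng.normal(size=n)       # first noisy measure
      z2 = xstar + 0.5 * rng.normal(size=n)       # second noisy measure
      y = beta * xstar + rng.normal(size=n)

      b1 = iv_slope(y, z1, z2)                    # z2 instruments z1
      b2 = iv_slope(y, z2, z1)                    # roles reversed
      boot = np.array([(iv_slope(y[i], z1[i], z2[i]),
                        iv_slope(y[i], z2[i], z1[i]))
                       for i in (rng.integers(0, n, n) for _ in range(500))])
      V = np.cov(boot.T)                          # joint sampling variability
      w = (V[1, 1] - V[0, 1]) / (V[0, 0] + V[1, 1] - 2 * V[0, 1])
      print(w * b1 + (1 - w) * b2)                # minimum-variance combination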
  11. By: Sébastien Laurent; Jeroen V.K. Rombouts; Francesco Violante
    Abstract: A large number of parameterizations have been proposed to model conditional variance dynamics in a multivariate framework. However, little is known about the ranking of multivariate volatility models in terms of their forecasting ability. The ranking of multivariate volatility models is inherently problematic because it requires the use of a proxy for the unobservable volatility matrix, and this substitution may severely affect the ranking. We address this issue by investigating the properties of the ranking with respect to alternative statistical loss functions used to evaluate model performance. We provide conditions on the functional form of the loss function that ensure that the proxy-based ranking is consistent for the true one, i.e., the ranking that would be obtained if the true variance matrix were observable. We identify a large set of loss functions that yield a consistent ranking. In a simulation study, we sample data from a continuous time multivariate diffusion process and compare the ordering delivered by both consistent and inconsistent loss functions. We further discuss the sensitivity of the ranking to the quality of the proxy and the degree of similarity between models. An application to three foreign exchange rates, where we compare the forecasting performance of 16 multivariate GARCH specifications, is provided.
    Keywords: Volatility, multivariate GARCH, Matrix norm, Loss function, Model confidence set
    JEL: C10 C32 C51 C52 C53 G10
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:lvl:lacicr:0948&r=ecm
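    Sketch: the Frobenius distance between a conditionally unbiased proxy and a forecast is one member of the class of losses that deliver a consistent ranking. A toy Monte Carlo check (our construction) that the proxy-based ordering of two forecasts agrees with the true one on average:

      import numpy as np

      def frobenius_loss(proxy, H):
          d = proxy - H
          return np.trace(d @ d.T)

      rng = np.random.default_rng(3)
      Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])   # true variance matrix
      H1, H2 = Sigma + 0.05, Sigma + 0.20          # H1 is the better forecast
      L1 = L2 = 0.0
      for _ in range(5000):
          E = 0.3 * rng.normal(size=(2, 2))
          E = (E + E.T) / 2.0                      # mean-zero symmetric noise
          proxy = Sigma + E                        # conditionally unbiased proxy
          L1 += frobenius_loss(proxy, H1)
          L2 += frobenius_loss(proxy, H2)
      print(L1 < L2)   # proxy-based ranking matches the true ranking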
  12. By: Xin Jin; John M Maheu
    Abstract: This paper proposes a new dynamic model of realized covariance (RCOV) matrices based on recent work in time-varying Wishart distributions. The specifications can be linked to returns for a joint multivariate model of returns and covariance dynamics that is both easy to estimate and forecast. Realized covariance matrices are constructed for 5 stocks using high-frequency intraday prices, based on positive semi-definite realized kernel estimates. We extend the model to capture the strong persistence properties of RCOV. Out-of-sample performance based on statistical and economic metrics shows the importance of this extension. We discuss which features of the model are necessary to provide improvements over a traditional multivariate GARCH model that only uses daily returns.
    Keywords: eigenvalues, dynamic conditional correlation, predictive likelihoods, MCMC
    JEL: C11 C32 C53
    Date: 2009–11–10
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-382&r=ecm
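    Sketch: the basic realized covariance construction is the sum of outer products of intraday return vectors. The paper instead uses noise-robust, positive semi-definite realized kernel estimates, so the raw sum below is only the starting point.

      import numpy as np

      def realized_cov(intraday_returns):
          """intraday_returns: (m, k) array of m return vectors on k assets."""
          R = np.asarray(intraday_returns)
          return R.T @ R                 # sum of outer products r_i r_i'

      # toy day: 78 five-minute returns on 5 assets
      rng = np.random.default_rng(4)
      print(realized_cov(rng.normal(scale=0.001, size=(78, 5))))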
  13. By: Öller , L-E; Stockhammar, P
    Abstract: Normality is often mechanically, and without sufficient reason, assumed in econometric models. In this paper three important and significantly heteroscedastic GDP series are studied. Heteroscedasticity is removed and the distributions of the filtered series are then compared to the Normal, the Normal Mixture and the Normal-Asymmetric Laplace (NAL) distributions. The NAL represents a reduced and empirical form of the Aghion and Howitt (1992) model for economic growth, based on Schumpeter's idea of creative destruction. Statistical properties of the NAL distribution are provided and it is shown that the NAL competes well with the alternatives.
    Keywords: The Aghion-Howitt model; asymmetric innovations; mixed normal-asymmetric Laplace distribution; Kernel density estimation; Method of Moments estimation.
    JEL: C16
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:18581&r=ecm
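    Sketch: the NAL density as a two-component mixture of a normal and an asymmetric Laplace. The asymmetric Laplace parameterization and the parameter names below are ours, for illustration only.

      import numpy as np

      def asym_laplace_pdf(x, m=0.0, lam=1.0, kappa=1.0):
          """Asymmetric Laplace density; kappa != 1 skews the two tails."""
          x = np.asarray(x, dtype=float)
          c = lam / (kappa + 1.0 / kappa)
          return np.where(x >= m,
                          c * np.exp(-lam * kappa * (x - m)),
                          c * np.exp(lam * (x - m) / kappa))

      def nal_pdf(x, p=0.5, mu=0.0, sigma=1.0, m=0.0, lam=1.0, kappa=1.0):
          """Mixture of a normal and an asymmetric Laplace component."""
          x = np.asarray(x, dtype=float)
          phi = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
          return p * phi + (1.0 - p) * asym_laplace_pdf(x, m, lam, kappa)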
  14. By: Markus Jochmann (Department of Economics, University of Strathclyde)
    Abstract: This paper develops stochastic search variable selection (SSVS) for zero-inflated count models which are commonly used in health economics. This allows for either model averaging or model selection in situations with many potential regressors. The proposed techniques are applied to a data set from Germany considering the demand for health care. A package for the free statistical software environment R is provided.
    Keywords: Bayesian, model selection, model averaging, count data, zero-inflation, demand for health care
    JEL: C11 C25 I11
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:0923&r=ecm
  15. By: Öller, L-E; Stockhammar, P
    Abstract: The distribution of differences in logarithms of the Dow Jones share index is compared to the Normal (N), the Normal Mixture (NM) and a weighted sum of a Normal and an Asymmetric Laplace distribution (NAL). It is found that the NAL fits best. We reached this result by studying samples with high, medium and low volatility, thus circumventing strong heteroscedasticity in the entire series. The NAL distribution also fitted economic growth, thus revealing a new analogy between financial data and real growth.
    Keywords: Density forecasting; heteroscedasticity; mixed Normal-Asymmetric Laplace distribution; Method of Moments estimation; connection with economic growth.
    JEL: C20
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:18582&r=ecm
  16. By: Andres Fernandez; Norman R. Swanson
    Abstract: In this paper, the authors empirically assess the extent to which early release inefficiency and definitional change affect prediction precision. In particular, they carry out a series of ex-ante prediction experiments in order to examine: the marginal predictive content of the revision process, the trade-offs associated with predicting different releases of a variable, the importance of particular forms of definitional change, which the authors call "definitional breaks," and the rationality of early releases of economic variables. An important feature of their rationality tests is that they are based solely on the examination of ex-ante predictions, rather than on in-sample regression analysis, as are many tests in the extant literature. Their findings point to the importance of making real-time datasets available to forecasters: the revision process has marginal predictive content, and predictive accuracy increases when multiple releases of data are used in specifying and estimating prediction models. The authors also present new evidence that early releases of money are rational, whereas those of prices and output are irrational. Moreover, they find that regardless of which release of their price variable one specifies as the "target" variable to be predicted, using only "first release" data in model estimation and prediction construction yields mean square forecast error (MSFE) "best" predictions. On the other hand, models estimated and implemented using "latest available release" data are MSFE-best for predicting all releases of money. The authors argue that these contradictory findings are due to the relevance of definitional breaks in the data generating processes of the variables that they examine. In an empirical analysis, they examine the real-time predictive content of money for income, and they find that vector autoregressions with money do not perform significantly worse than autoregressions when predicting output during the last 20 years.
    Keywords: Economic forecasting ; Econometrics
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:09-28&r=ecm
  17. By: Domenico Giannone (ECARES, Université Libre de Bruxelles and CEPR); Lucrezia Reichlin (London Business School and CEPR); Saverio Simonelli (Università di Napoli Federico II, EUI and CSEF)
    Abstract: This paper assesses the role of surveys in the early estimates of GDP in the euro area in a model-based automated procedure which exploits the timeliness of their release. The analysis is conducted using both a historical evaluation and a real-time case study on the current conjuncture.
    Keywords: Forecasting; factor model; real time data; large data sets; survey
    JEL: E52 C33 C53
    Date: 2009–11–06
    URL: http://d.repec.org/n?u=RePEc:sef:csefwp:240&r=ecm
  18. By: Oliver Blaskowitz; Helmut Herwartz
    Abstract: It is commonly accepted that information is helpful if it can be exploited to improve a decision making process. In economics, decisions are often based on forecasts of up- or downward movements of the variable of interest. We point out that directional forecasts can provide a useful framework to assess the economic forecast value when loss functions (or success measures) are properly formulated to account for realized signs and realized magnitudes of directional movements. We discuss a general approach to evaluate (directional) forecasts which is simple to implement, robust to outlying or unreasonable forecasts, and which provides an economically interpretable loss/success functional framework. As such, the measure of directional forecast value is a readily available alternative to the commonly used squared error loss criterion.
    Keywords: Directional forecasts, directional forecast value, forecast evaluation, economic forecast value, mean squared forecast error, mean absolute forecast error
    JEL: C52 E17 E27 E37 E47 F17 F37 F47
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-052&r=ecm
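    Sketch: one loss/success functional in the spirit described, crediting each correctly forecast direction with the realized magnitude of the move (the exact functional form here is our illustration). Unlike squared error, it is insensitive to the size of an outlying forecast; only its sign matters.

      import numpy as np

      def directional_value(forecasts, realized):
          """Mean realized move signed by directional hit or miss: a correct
          sign earns |realized|, a wrong sign loses it."""
          f, r = np.asarray(forecasts), np.asarray(realized)
          return np.mean(np.sign(f) * r)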
  19. By: Di Novi, Cinzia
    Abstract: This paper proposes a specification of Wooldridge's (1995) two step estimation method in which selectivity bias is due to two sources rather than one. The main objective of the paper is to show how the method can be applied in practice. The application concerns an important problem in health economics: the presence of adverse selection in the private health insurance markets on which there exists a large literature. The data for the empirical application is drawn from the 2003/2004 Medical Expenditure Panel Survey in conjunction with the 2002 National Health Interview Survey.
    JEL: I11 D82
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:uca:ucapdv:137&r=ecm
  20. By: Stéphane Auray (Université Lille 3 (GREMARS), Université de Sherbrooke (GREDI) and CIRPÉE); Aurélien Eyquem (GATE, UMR 5824, Université de Lyon and Ecole Normale Supérieure Lettres et Sciences Humaines, France); Frédéric Jouneau-Sion (Universites Lille Nord de France)
    Abstract: This paper proposes a structural approach to growth modeling relying on random returns to scale. An RBC-like model in which returns to scale may be strictly increasing or decreasing depending on shocks is explicitly derived. We show that the relevant components of usual macroeconomic models (including capital, growth and various relative prices) are all related to a random autoregressive coefficient (RCA) model. Recent work on the extremal behavior of dependent processes highlights properties of this model that are noteworthy from the economic viewpoint. First, the model typically displays fat-tailed behavior even if the shocks do not. Second, records (both historically high and low points) are less frequent than in the usual stationary case but tend to appear in clusters. We show that both fat tails and clustering of extreme values are consistent with arbitrarily small variations of the autoregressive coefficient around the usual unit-root case. Distinguishing an RCA model from the more usual constant-coefficient autoregressive model using available macro data may therefore be difficult and typically requires very long data sets. To this end, we propose a direct test based on the annual series of real wages in England recorded from the thirteenth century onwards. The test clearly rejects the constant AR model and supports the random coefficient hypothesis.
    Keywords: Economic growth, extremal behavior, dependent processes
    JEL: C22 C46 N13 O41 O47
    Date: 2009–09–01
    URL: http://d.repec.org/n?u=RePEc:shr:wpaper:09-17&r=ecm
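    Sketch: a toy random-coefficient AR(1) near the unit root, with parameter values assumed by us; even tiny coefficient randomness fattens the tails of the increments relative to the constant-coefficient case.

      import numpy as np

      rng = np.random.default_rng(5)
      T = 50_000

      def simulate_increments(s_eta, phi=0.999):
          """y_t = phi * (1 + s_eta * eta_t) * y_{t-1} + eps_t."""
          y = np.zeros(T)
          for t in range(1, T):
              y[t] = phi * (1.0 + s_eta * rng.normal()) * y[t - 1] + rng.normal()
          return np.diff(y)

      for s in (0.0, 0.05):
          d = simulate_increments(s)
          kurt = np.mean(d ** 4) / np.mean(d ** 2) ** 2
          print(s, round(kurt, 2))    # excess kurtosis appears once s > 0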
  21. By: Antonio Merlo (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: Stochastic sequential bargaining games (Merlo and Wilson (1995, 1998)) have found wide applications in various fields including political economy and macroeconomics due to their flexibility in explaining delays in reaching agreement. In this paper, we present new results in nonparametric identification of such models under different scenarios of data availability. First, with complete data on players’ decisions, the sizes of the surplus to be shared (cakes) and the agreed allocations, both the mapping from states to the total surplus (i.e. the "cake function") and the players’ common discount rate are identified, if the unobservable state variable (USV) is independent of observable ones (OSV), and the total surplus is strictly increasing in the USV conditional on the OSV. Second, when the cake size is only observed under agreements and is additively separable in OSV and USV, the contribution by OSV is identified provided the USV distribution satisfies some distributional exclusion restrictions. Third, if data only report when an agreement is reached but never report the cake sizes, we propose a simple algorithm that exploits exogenously given shape restrictions on the cake function and the independence of USV from OSV to recover all rationalizable probabilities for reaching an agreement under counterfactual state transitions. Numerical examples show the set of rationalizable counterfactual outcomes so recovered can be informative.
    Keywords: Nonparametric identification, non-cooperative bargaining, stochastic sequential bargaining, rationalizable counterfactual outcomes
    JEL: C14 C35 C73 C78
    Date: 2009–10–15
    URL: http://d.repec.org/n?u=RePEc:pen:papers:09-037&r=ecm
  22. By: Bo E. Honoré (Department of Economics, Princeton University); Aureo de Paula (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies the identification of a simultaneous equation model involving duration measures. It proposes a game theoretic model in which durations are determined by strategic agents. In the absence of strategic motives, the model delivers a version of the generalized accelerated failure time model. In its most general form, the system resembles a classical simultaneous equation model in which endogenous variables interact with observable and unobservable exogenous components to characterize an economic environment. In this paper, the endogenous variables are the individually chosen equilibrium durations. Even though a unique solution to the game is not always attainable in this context, the structural elements of the economic system are shown to be semiparametrically identified. We also present a brief discussion of estimation ideas and a set of simulation studies on the model.
    Keywords: duration, empirical games, identification
    JEL: C10 C30 C41
    Date: 2009–11–04
    URL: http://d.repec.org/n?u=RePEc:pen:papers:09-039&r=ecm
  23. By: Masato Ubukata (Graduate School of Economics, Osaka University)
    Abstract: The objective of this paper is to examine effects of realized covariance matrix estimators based on intraday returns on large-scale minimum-variance equity portfolio optimization. We empirically assess out-of-sample performance of portfolios with different covariance matrix estimators: the realized covariance matrix estimators and Bayesian shrinkage estimators based on the past monthly and daily returns. The main results are: (1) the realized covariance matrix estimators using the past intraday returns yield a lower standard deviation of the large-scale portfolio returns than the Bayesian shrinkage estimators based on the monthly and daily historical returns; (2) gains to switching to strategies using the realized covariance matrix estimators are higher for an investor with higher relative risk aversion; and (3) the better portfolio performance of the realized covariance approach implied by ex-post returns in excess of the risk-free rate, the standard deviations of the excess returns, the return per unit of risk (Sharpe ratio) and the switching fees seems to be robust to the level of transaction costs.
    Keywords: Large-scale portfolio selection; Realized covariance matrix; Intraday data
    JEL: G11
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:osk:wpaper:0930&r=ecm
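    Sketch: the minimum-variance portfolio behind this kind of comparison has a closed form, w = inv(Sigma) 1 / (1' inv(Sigma) 1); Sigma can be a realized covariance matrix or a shrinkage estimate, which is exactly the comparison the paper runs.

      import numpy as np

      def min_variance_weights(Sigma):
          """Solve min w'Σw subject to w'1 = 1."""
          w = np.linalg.solve(Sigma, np.ones(Sigma.shape[0]))
          return w / w.sum()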
  24. By: Velden, M. van de; Takane, Y. (Erasmus Econometric Institute)
    Abstract: Two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach, missing values are imputed in such a way that the generalized canonical correlation analysis objective function does not increase in subsequent steps. Convergence is achieved when the value of the objective function remains constant. By means of a simulation study, we assess the performance of the new methods and compare the results with those of two available methods: the missing-data passive method, introduced in Gifi's homogeneity analysis framework, and the GENCOM algorithm developed by Green and Carroll.
    Keywords: generalized canonical correlation analysis; missing values
    Date: 2009–11–02
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765017106&r=ecm
  25. By: Cathy Ning (Department of Economics, Ryerson University, Toronto, Canada); Dinghai Xu (Department of Economics, University of Waterloo, Waterloo, Ontario, Canada); Tony Wirjanto (School of Accounting & Finance and Department of Statistics & Actuarial Science,University of Waterloo, Waterloo, Ontario, Canada)
    Abstract: Volatility clustering is a well-known stylized feature of financial asset returns. In this paper, we investigate the asymmetric pattern of volatility clustering on both the stock and foreign exchange rate markets. To this end, we employ copula-based semi-parametric univariate time-series models that accommodate the clusters of both large and small volatilities in the analysis. Using daily realized volatilities of the individual company stocks, stock indices and foreign exchange rates constructed from high frequency data, we find that volatility clustering is strongly asymmetric in the sense that clusters of large volatilities tend to be much stronger than those of small volatilities. In addition, the asymmetric pattern of volatility clusters continues to be visible even when the clusters are allowed to be changing over time, and the volatility clusters themselves remain persistent even after forty days.
    Keywords: Volatility clustering, Copulas, Realized volatility, High-frequency data.
    JEL: C51 G32
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:rye:wpaper:wp006&r=ecm
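    Sketch: a model-free analogue (ours) of the asymmetry the paper measures with copulas: compare the probability that volatility stays in the top decile k days later with the same probability for the bottom decile. The paper's actual analysis is copula-based.

      import numpy as np

      def cluster_persistence(vol, k=1, q=0.10):
          """P(vol stays in top decile k days later) vs bottom decile."""
          v = np.asarray(vol)
          hi, lo = np.quantile(v, 1 - q), np.quantile(v, q)
          p_hi = np.mean(v[k:][v[:-k] >= hi] >= hi)
          p_lo = np.mean(v[k:][v[:-k] <= lo] <= lo)
          return p_hi, p_lo   # p_hi > p_lo indicates asymmetric clustering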
  26. By: Schluter, Christian (University of Southampton); Wahba, Jackline (University of Southampton)
    Abstract: We consider the issue of illegal migration from Mexico to the US, and examine whether the lack of legal status causally impacts on outcomes, specifically wages and remitting behavior. These outcomes are of particular interest given the extent of legal and illegal migration, and the resulting financial flows. We formalize this question and highlight the principal empirical problem using a potential outcome framework with endogenous selection. The selection bias is captured by a control function, which is estimated non-parametrically. The framework for remitting is extended to allow for endogenous regressors (e.g. wages). We propose a new re-parametrisation of the control function, which is linear in the case of a normal error structure, and test linearity. Using Mexican Migration Project data, we find considerable and robust illegality effects on wages, the penalty being about 12% in the 1980s and 22% in the 1990s. For the latter period, the selection bias is not created by a normal error structure; wrongly imposing normality overestimates the illegality effect on wages by 50%, while wrongly ignoring selection leads to a 50% underestimate. In contrast to these wage penalties, legal status appears to have mixed effects on remitting behavior.
    Keywords: non-parametric estimation, control functions, selection, counterfactuals, illegality effects, illegal migration, intermediate outcomes, Mexican Migration Project
    JEL: J61 J30 J40
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4527&r=ecm
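    Sketch: the control-function idea in its simplest parametric linear form, for orientation only; the paper estimates the control function non-parametrically and re-parametrises it so that a normal error structure corresponds to linearity, which can then be tested.

      import numpy as np

      def control_function_ols(y, d, x, z):
          """Outcome y, endogenous regressor d, exogenous x, instrument z."""
          ones = np.ones_like(x)
          Z1 = np.column_stack([ones, x, z])                # first stage
          v = d - Z1 @ np.linalg.lstsq(Z1, d, rcond=None)[0]
          Z2 = np.column_stack([ones, d, x, v])             # v: control function
          return np.linalg.lstsq(Z2, y, rcond=None)[0]      # absorbs selection bias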
  27. By: Laurens CHERCHYE; Thomas DEMUYNCK; Bram DE ROCK
    Abstract: Focusing on the testable implications on the equilibrium manifold, we show that the rationalizability problem is NP-complete. Subsequently, we present an integer programming (IP) approach to characterizing general equilibrium models. This approach avoids the use of the Tarski-Seidenberg algorithm for quantifier elimination that is commonly used in the literature. The IP approach naturally applies to settings with any number of observations, which is attractive for empirical applications. In addition, it can easily be adjusted to analyze the testable implications of alternative general equilibrium models (that include, e.g., public goods, externalities and/or production). Further, we show that the IP framework can easily address recoverability questions (pertaining to the structural model that underlies the observed equilibrium behavior), and account for empirical issues when bringing the IP methodology to the data (such as goodness-of-fit and power). Finally, we show how to develop easy-to-implement heuristics that give a quick (but possibly inconclusive) answer to whether or not the data satisfy the general equilibrium models.
    Keywords: General equilibrium, equilibrium manifold, exchange economies, production economies, NP-completeness, nonparametric restrictions, GARP, integer programming.
    JEL: C60 D10 D51
    Date: 2009–07
    URL: http://d.repec.org/n?u=RePEc:ete:ceswps:ces09.14&r=ecm
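    Sketch: the keywords mention GARP; the standard GARP check below (Varian's revealed-preference test via a Warshall transitive closure) is the classic building block for this kind of analysis, not the paper's integer-programming formulation for equilibrium-manifold restrictions.

      import numpy as np

      def satisfies_garp(P, X):
          """P, X: (T, n) arrays of observed prices and quantities."""
          P, X = np.asarray(P, float), np.asarray(X, float)
          E = P @ X.T                     # E[i, j]: cost of bundle j at prices i
          R = E.diagonal()[:, None] >= E  # direct revealed preference x_i R0 x_j
          for k in range(len(R)):         # Warshall transitive closure
              R = R | (R[:, [k]] & R[[k], :])
          S = E.diagonal()[:, None] > E   # strict revealed preference
          return not np.any(R & S.T)      # no x_i R x_j with x_j strictly preferred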
  28. By: Kurt Schmidheiny (Universitat Pompeu Fabra); Marius Brülhart (Université de Lausanne)
    Abstract: It is well understood that the two most popular empirical models of location choice, conditional logit and Poisson, return identical coefficient estimates when the regressors are not individual specific. We show that these two models differ starkly in terms of their implied predictions. The conditional logit model represents a zero-sum world, in which one region's gain is the other regions' loss. In contrast, the Poisson model implies a positive-sum economy, in which one region's gain is no other region's loss. We also show that all intermediate cases can be represented as a nested logit model with a single outside option. The nested logit turns out to be a linear combination of the conditional logit and Poisson models. Conditional logit and Poisson elasticities mark the polar cases and can therefore serve as boundary values in applied research.
    Keywords: firm location, residential choice, conditional logit, nested logit, Poisson count model
    JEL: C25 R3 H73
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ieb:wpaper:2009/10/doc2009-14&r=ecm
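    Sketch: a numerical check of the equivalence the paper starts from, on toy data of ours: when regressors vary only across regions, the conditional logit and Poisson likelihoods are maximized at the same slope.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)
      J, b_true = 8, 0.7
      x = rng.normal(size=J)                     # one attribute per region
      p = np.exp(b_true * x); p /= p.sum()
      counts = rng.multinomial(1000, p)          # 1000 agents choose a region

      def neg_clogit(b):                         # conditional logit log-lik
          return -(counts @ (b[0] * x)
                   - counts.sum() * np.log(np.exp(b[0] * x).sum()))

      def neg_poisson(t):                        # Poisson with free intercept
          lam = np.exp(t[0] + t[1] * x)
          return -(counts @ (t[0] + t[1] * x) - lam.sum())

      b_cl = minimize(neg_clogit, [0.0]).x[0]
      b_po = minimize(neg_poisson, [0.0, 0.0]).x[1]
      print(round(b_cl, 4), round(b_po, 4))      # identical slope estimates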
  29. By: Moore, Rebecca (University of Georgia); Bishop, Richard C. (University of Wisconsin); Provencher, Bill (University of Wisconsin); Champ, Patricia (US Forest Service)
    Abstract: In this paper we develop an econometric model of willingness to pay that integrates data on respondent uncertainty regarding their own willingness to pay. The integration is utility-consistent and does not involve calibrating the contingent responses to actual payment data, so the approach can "stand alone". In an application to a valuation study related to whooping crane restoration, we find that this model generates a statistically lower expected WTP than the standard CV model. Moreover, the WTP function estimated with this model is not statistically different from that estimated using actual payment data, suggesting that, when properly analyzed using data on respondent uncertainty, contingent valuation decisions can simulate actual payment decisions. The method allows for more reliable estimates of WTP that incorporate respondent uncertainty, without the need for collecting comparable actual payment data.
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:ecl:wisagr:537&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.