nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒02‒19
twenty-six papers chosen by
Sune Karlsson
Orebro University

  1. Identification-robust estimation and testing of the zero-beta CAPM By Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf
  2. Asymptotic properties of weighted least squares estimation in weak parma models By Francq, Christian; Roy, Roch; Saidi, Abdessamad
  3. Modelling asset correlations: A nonparametric approach By Aslanidis, Nektarios; Casas, Isabel
  4. Out-Of-Sample Comparisons of Overfit Models By Calhoun, Gray
  5. On the criticality of inferred models By Iacopo Mastromatteo; Matteo Marsili
  6. Model Selection in Equations with Many 'Small' Effects By Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry
  7. Modeling Data Revisions By Juan Manuel Julio Román
  8. FaMIDAS: A Mixed Frequency Factor Model with MIDAS structure By Cecilia Frale; Libero Monteforte
  9. Some Remarks on Consistency and Strong Inconsistency of Bayesian Inference By Kociecki, Andrzej
  10. Empirical Economic Model Discovery and Theory Evaluation By David F. Hendry
  11. Plant-Level Productivity and Imputation of Missing Data in the Census of Manufactures By T. Kirk White; Jerome P. Reiter; Amil Petrin
  12. Inference of Signs of Interaction Effects in Simultaneous Games with Incomplete Information, Second Version By Aureo de Paula; Xun Tang
  13. How wrong can you be? Implications of incorrect utility function specification for welfare measurement in choice experiments By Hanley, Nick; Riera, Antoni; Torres, Cati
  14. Do Food Stamps Cause Obesity? A Generalised Bayesian Instrumental Variable Approach in the Presence of Heteroscedasticity By Salois, Matthew; Balcombe, Kelvin
  15. On multivariate control charts By Frisén, Marianne
  16. Persistence of regional unemployment : Application of a spatial filtering approach to local labour markets in Germany By Patuelli, Roberto; Schanne, Norbert; Griffith, Daniel A.; Nijkamp, Peter
  17. Methods and evaluations for surveillance in industry, business, finance, and public health By Frisén, Marianne
  18. Spectral Analysis Informs the Proper Frequency in the Sampling of Financial Time Series Data By Taufemback, Cleiton; Da Silva, Sergio
  19. Statistical Inference for Time-changed Brownian Motion Credit Risk Models By T. R. Hurd; Zhuowei Zhou
  20. A Copula Approach on the Dynamics of Statistical Dependencies in the US Stock Market By Michael C. Münnix; Rudi Schäfer
  21. The Factors of Growth of Small Family Businesses: A Robust Estimation of the Behavioral Consistency in the Panel Data Models By Vladimír Benáček; Eva Michalíková
  22. A Monte Carlo Study of Old and New Frontier Methods for Efficiency Measurement By Krüger, Jens
  23. Towards Unrestricted Public Use Business Microdata: The Synthetic Longitudinal Business Database By Satkartar K. Kinney; Jerome P. Reiter; Arnold P. Reznek; Javier Miranda; Ron S. Jarmin; John M. Abowd
  24. Estimation and evaluation of DSGE models: progress and challenges By Frank Schorfheide
  25. Mathematical Models and Economic Forecasting: Some Uses and Mis-Uses of Mathematicsin Economics By David F. Hendry
  26. Quantifying and Modeling Long-Range Cross-Correlations in Multiple Time Series with Applications to World Stock Indices By Duan Wang; Boris Podobnik; Davor Horvatić; H. Eugene Stanley

  1. By: Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf
    Abstract: We propose exact simulation-based procedures for: (i) testing mean-variance efficiency when the zero-beta rate is unknown, and (ii) building confidence intervals for the zero-beta rate. On observing that this parameter may be weakly identified, we propose LR-type statistics as well as heteroskedasticity and autocorrelation corrected (HAC) Wald-type procedures, which are robust to weak identification and allow for non-Gaussian distributions including parametric GARCH structures. In particular, we propose confidence sets for the zero-beta rate based on “inverting” exact tests for this parameter; these sets provide a multivariate extension of Fieller’s technique for inference on ratios (a schematic sketch of the test-inversion idea follows this entry). The exact distribution of LR-type statistics for testing efficiency is studied under both the null and the alternative hypotheses. The relevant nuisance parameter structure is established and finite-sample bound procedures are proposed, which extend and improve available Gaussian-specific bounds. Furthermore, we study the invariance to portfolio repacking of the proposed tests and confidence sets. The statistical properties of available and proposed methods are analyzed via a Monte Carlo study. Empirical results on NYSE returns show that exact confidence sets are very different from the asymptotic ones, and that allowing for non-Gaussian distributions affects inference results. Simulation and empirical results suggest that LR-type statistics, with p-values corrected using the Maximized Monte Carlo test method, are generally preferable to their Wald-HAC counterparts in terms of both size control and power.
    Keywords: capital asset pricing model, CAPM; Black, mean-variance efficiency, non-normality, weak identification, Fieller, multivariate linear regression, uniform linear hypothesis, exact test, Monte Carlo test, bootstrap, nuisance parameters, GARCH, portfolio repacking.
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-21&r=ecm
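    Code sketch: a rough, hypothetical illustration of the test-inversion (Fieller-type) idea only, not the authors' exact simulation-based procedures: a classical GRS-type efficiency test is applied over a grid of candidate zero-beta rates and the non-rejected values are collected into a confidence set. The data, parameter values, and function names (grs_pvalue, zero_beta_confidence_set) are invented for the example.

```python
import numpy as np
from scipy.stats import f as f_dist

def grs_pvalue(excess_returns, excess_market):
    """p-value of a GRS-type F-test that all intercepts are zero in the
    regression of N asset returns (in excess of a candidate zero-beta
    rate) on the market return in excess of the same rate."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), excess_market])
    B, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    resid = excess_returns - X @ B
    Sigma = resid.T @ resid / T                 # MLE of residual covariance
    alpha = B[0]                                # estimated intercepts
    mu_m, var_m = excess_market.mean(), excess_market.var()
    stat = (T - N - 1) / N * (alpha @ np.linalg.solve(Sigma, alpha)) \
           / (1 + mu_m**2 / var_m)
    return 1 - f_dist.cdf(stat, N, T - N - 1)

def zero_beta_confidence_set(returns, market, grid, level=0.05):
    """Fieller-type confidence set: keep the candidate zero-beta rates at
    which the efficiency restriction is not rejected."""
    return [g for g in grid if grs_pvalue(returns - g, market - g) > level]

# Hypothetical data generated under the Black CAPM with zero-beta rate 0.003
rng = np.random.default_rng(10)
T, N, gamma0 = 600, 5, 0.003
market = 0.005 + 0.04 * rng.standard_normal(T)
betas = 0.5 + rng.random(N)
returns = gamma0 + np.outer(market - gamma0, betas) \
          + 0.02 * rng.standard_normal((T, N))
grid = np.linspace(-0.01, 0.02, 61)
cs = zero_beta_confidence_set(returns, market, grid)
print(f"confidence set: [{min(cs):.4f}, {max(cs):.4f}]" if cs else "empty set")
```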
  2. By: Francq, Christian; Roy, Roch; Saidi, Abdessamad
    Abstract: The aim of this work is to investigate the asymptotic properties of weighted least squares (WLS) estimation for causal and invertible periodic autoregressive moving average (PARMA) models with uncorrelated but dependent errors. Under mild assumptions, it is shown that the WLS estimators of PARMA models are strongly consistent and asymptotically normal. This extends Theorem 3.1 of Basawa and Lund (2001) on least squares estimation of PARMA models with independent errors. The asymptotic covariance matrix of the WLS estimators obtained under dependent errors is generally different from that obtained with independent errors; when the errors are in fact dependent, the impact on standard inference methods that assume independence can be dramatic. Examples and simulation results illustrate the practical relevance of our findings, and an application to financial data is also presented. (A schematic sketch of weighted least squares for a simple periodic autoregression follows this entry.)
    Keywords: Weak periodic autoregressive moving average models; Seasonality; Weighted least squares; Asymptotic normality; Strong consistency; Weak periodic white noise; Strong mixing.
    JEL: C22
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28721&r=ecm
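    Code sketch: a minimal, hypothetical illustration of season-by-season weighted least squares for a periodic AR(1), a simple special case of the PARMA class studied in the paper; this is not the authors' estimator, and the data, weights, and function name (fit_par1_wls) are invented for the example.

```python
import numpy as np

def fit_par1_wls(y, period, weights=None):
    """Season-by-season weighted least squares for a periodic AR(1):
    y_t = phi_{s(t)} * y_{t-1} + e_t, with season s(t) = t mod period.
    `weights` is an optional array of per-observation weights."""
    T = len(y)
    if weights is None:
        weights = np.ones(T)
    phi = np.zeros(period)
    t_idx = np.arange(1, T)
    for s in range(period):
        idx = t_idx[t_idx % period == s]
        x, z, w = y[idx - 1], y[idx], weights[idx]
        phi[s] = np.sum(w * x * z) / np.sum(w * x * x)
    return phi

# Simulate a periodic AR(1) with 4 "seasons" and recover the coefficients
rng = np.random.default_rng(9)
true_phi = np.array([0.9, 0.2, -0.5, 0.6])
T = 4000
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_phi[t % 4] * y[t - 1] + rng.standard_normal()
print(fit_par1_wls(y, period=4).round(2))  # should be close to true_phi
```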
  3. By: Aslanidis, Nektarios; Casas, Isabel
    Abstract: This article proposes a time-varying nonparametric estimator and a time-varying semiparametric estimator of the correlation matrix. We discuss representation, estimation based on kernel smoothing and inference. An extensive Monte Carlo simulation study is performed to compare the semiparametric and nonparametric models with the DCC specification. Our bivariate simulation results show that the semiparametric and nonparametric models are best in DGPs with gradual changes or structural breaks in correlations. However, in DGPs with rapid changes or constancy in correlations the DCC delivers the best outcome. Moreover, in multivariate simulations the semiparametric and nonparametric models fare the best in DGPs with substantial time-variability in correlations, while when allowing for little variability in the correlations the DCC is the dominant specification. The methodologies are illustrated by estimating the correlations for two interesting portfolios. The first portfolio consists of the equity sector SPDRs and the S&P 500 composite, while the second one contains major currencies that are actively traded in the foreign exchange market. Portfolio evaluation results show that the nonparametric estimator generally dominates its competitors, with a statistically significantly lower portfolio variance. (A schematic kernel-smoothing sketch follows this entry.)
    Keywords: Portfolio Evaluation; DCC; Local Linear Estimator; Nonparametric Correlations; Semiparametric Conditional Correlation Model
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2123/7171&r=ecm
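    Code sketch: a minimal sketch of the general idea of a kernel-smoothed, time-varying correlation (a local-constant estimator over rescaled time); it is not the authors' exact nonparametric or semiparametric estimator, and the bandwidth, data, and function name are illustrative.

```python
import numpy as np

def kernel_correlation(x, y, bandwidth=0.05):
    """Local-constant (Nadaraya-Watson) estimate of a time-varying
    correlation between two return series x and y.  Time is rescaled to
    [0, 1]; a Gaussian kernel weights observations near each point."""
    T = len(x)
    t = np.linspace(0.0, 1.0, T)
    rho = np.empty(T)
    for i in range(T):
        w = np.exp(-0.5 * ((t - t[i]) / bandwidth) ** 2)
        w /= w.sum()
        mx, my = np.sum(w * x), np.sum(w * y)
        vx = np.sum(w * (x - mx) ** 2)
        vy = np.sum(w * (y - my) ** 2)
        cxy = np.sum(w * (x - mx) * (y - my))
        rho[i] = cxy / np.sqrt(vx * vy)
    return rho

# Example: a correlation that drifts smoothly over the sample
rng = np.random.default_rng(0)
T = 1000
true_rho = np.linspace(0.1, 0.8, T)
z1, z2 = rng.standard_normal(T), rng.standard_normal(T)
x = z1
y = true_rho * z1 + np.sqrt(1 - true_rho**2) * z2
print(kernel_correlation(x, y)[::200])  # should rise from ~0.1 to ~0.8
```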
  4. By: Calhoun, Gray
    Abstract: This paper uses dimension asymptotics to study why overfit linear regression models should be compared out-of-sample; we let the number of predictors used by the larger model increase with the number of observations so that their ratio remains uniformly positive. Under this limit theory, the naive Diebold-Mariano-West out-of-sample test can test hypotheses about a key quantity for evaluating forecasting models---a time series analogue to the generalization error---as long as the out-of-sample period is small relative to the total sample size. Moreover, tests that are designed to reject if the larger model is true, such as the usual in-sample Wald and LM tests and also Clark and McCracken's (2001, 2005a), McCracken's (2007) and Clark and West's (2006, 2007) out-of-sample statistics, will choose the larger model too often when the smaller model is more accurate. (A schematic sketch of the out-of-sample comparison follows this entry.)
    Keywords: Generalization Error; Forecasting; Model Selection; t-test; Dimension Asymptotics
    JEL: C01 C12 C22 C52 C53
    Date: 2011–02–10
    URL: http://d.repec.org/n?u=RePEc:isu:genres:32462&r=ecm
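    Code sketch: a rough illustration of the naive Diebold-Mariano-West out-of-sample comparison discussed in the paper: a t-statistic on the mean squared-error loss differential with a Newey-West long-run variance. The forecast errors and the lag choice are hypothetical.

```python
import numpy as np

def dmw_statistic(e_small, e_big, lags=4):
    """Naive Diebold-Mariano-West t-statistic comparing out-of-sample
    squared-error losses of two forecasting models.  Positive values
    indicate the larger model forecasts worse."""
    d = e_big**2 - e_small**2            # per-period loss differential
    P = len(d)
    dbar = d.mean()
    u = d - dbar
    # Newey-West (Bartlett kernel) long-run variance of the differential
    lrv = np.mean(u * u)
    for k in range(1, lags + 1):
        gk = np.mean(u[k:] * u[:-k])
        lrv += 2 * (1 - k / (lags + 1)) * gk
    return dbar / np.sqrt(lrv / P)

# Example with hypothetical forecast errors from a small and a large model
rng = np.random.default_rng(1)
e_small = rng.standard_normal(200)
e_big = rng.standard_normal(200) * 1.1   # the larger model is noisier here
print(dmw_statistic(e_small, e_big))
```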
  5. By: Iacopo Mastromatteo; Matteo Marsili
    Abstract: Advanced inference techniques allow one to reconstruct the pattern of interaction from high dimensional data sets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to a phase transition. On the one hand, we show that the reparameterization-invariant metric on the space of probability distributions of these models (the Fisher information) is directly related to the model's susceptibility. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. On the other hand, this region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time-scales naturally yield models which are close to criticality.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.1624&r=ecm
  6. By: Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry
    Abstract: General unrestricted models (GUMs) may include important individual determinants, many small relevant effects, and irrelevant variables. Automatic model selection procedures can handle perfect collinearity and more candidate variables than observations, allowing substantial dimension reduction from GUMs with salient regressors, lags, non-linear transformations, and multiple location shifts, together with all the principal components representing ‘factor’ structures, which can also capture small influences that selection may not retain individually. High dimensional GUMs and even the final model can implicitly include more variables than observations entering via ‘factors’. We simulate selection in several special cases to illustrate.
    Keywords: Model selection, high dimensionality, principal components, non-linearity, Monte Carlos
    JEL: C51 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:528&r=ecm
  7. By: Juan Manuel Julio Román
    Abstract: A dynamic linear model for data revisions and delays is proposed. The model extends that of Jacobs & Van Norden [13] in two ways: first, the "true" data series is assumed to be observable up to a fixed period of time M; and second, preliminary figures may be biased estimates of the true series. In all other respects the model follows Jacobs & Van Norden [13], so their gains carry over under the new assumptions. These assumptions represent the data release process more realistically under particular circumstances and improve the overall identification of the model. An application to the year-on-year growth of the Colombian quarterly GDP reveals that preliminary growth reports under-estimate the true growth, and that measurement errors are predictable from the information available at the time of the data release; the models implemented in this note are designed for that purpose.
    Date: 2011–02–08
    URL: http://d.repec.org/n?u=RePEc:col:000094:007929&r=ecm
  8. By: Cecilia Frale (MEF-Ministry of the Economy and Finance-Italy, Treasury Department); Libero Monteforte (Bank of Italy and MEF-Ministry of the Economy and Finance-Italy, Treasury Department)
    Abstract: In this paper a dynamic factor model with mixed frequency is proposed (FaMIDAS), in which past observations of high frequency indicators are used following the MIDAS approach (a schematic sketch of the MIDAS weighting scheme follows this entry). This structure is able to represent the information content of the economic indicators with richer dynamics and produces smoothed factors and forecasts. In addition, the Kalman filter is applied, which is particularly suited for dealing with unbalanced data sets and revisions in the preliminary data. In the empirical application to Italian quarterly GDP, the short-term forecasting performance is evaluated against other mixed frequency models in a pseudo-real time experiment, also allowing for pooled forecasts from factor models.
    Keywords: mixed frequency models, dynamic factor models, MIDAS, forecasting.
    JEL: E32 E37 C53
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_788_11&r=ecm
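    Code sketch: a minimal illustration of the MIDAS ingredient alone (exponential Almon lag weights collapsing a monthly indicator into a quarterly regressor); the full FaMIDAS model combines this with a dynamic factor structure and the Kalman filter, which is not reproduced here. The parameter values, data, and function names are illustrative.

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalised to sum to one."""
    k = np.arange(1, n_lags + 1, dtype=float)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

def midas_aggregate(monthly, n_lags, theta1, theta2):
    """Collapse the last n_lags monthly observations before each quarter
    end into a single quarterly regressor using the Almon weights."""
    w = exp_almon_weights(n_lags, theta1, theta2)
    quarters = len(monthly) // 3
    out = []
    for q in range(1, quarters + 1):
        end = 3 * q
        if end >= n_lags:
            window = monthly[end - n_lags:end][::-1]  # most recent month first
            out.append(np.dot(w, window))
    return np.array(out)

monthly_indicator = np.random.default_rng(2).standard_normal(120)  # 10 years
x_quarterly = midas_aggregate(monthly_indicator, n_lags=12,
                              theta1=0.05, theta2=-0.01)
print(x_quarterly.shape)
```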
  9. By: Kociecki, Andrzej
    Abstract: The paper provides new sufficient conditions for consistent and coherent Bayesian inference when a model is invariant under some group of transformations. Building on our theoretical results we reexamine an example from Stone (1976) giving some new insights. The priors for multivariate normal models and Structural Vector AutoRegression models that entail consistent and coherent Bayesian inference are also discussed.
    Keywords: invariant models; coherence; strong inconsistency; groups
    JEL: C11
    Date: 2011–02–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28731&r=ecm
  10. By: David F. Hendry
    Abstract: Economies are so high dimensional and non-constant that many features of models cannot be derived by prior reasoning, intrinsically involving empirical discovery and requiring theory evaluation. Despite important differences, discovery and evaluation in economics are similar to those of science. Fitting a pre-specified equation limits discovery, but automatic methods can formulate much more general models with many variables, long lag lengths and non-linearities, allowing for outliers, data contamination, and parameter shifts; select congruent parsimonious-encompassing models even with more candidate variables than observations, while embedding the theory; then rigorously evaluate selected models to ascertain their viability.
    Keywords: Empirical discovery, theory evaluation, model selection, Autometrics
    JEL: B40
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:529&r=ecm
  11. By: T. Kirk White; Jerome P. Reiter; Amil Petrin
    Abstract: In the U.S. Census of Manufactures, the Census Bureau imputes missing values using a combination of mean imputation, ratio imputation, and conditional mean imputation. It is well known that imputations based on these methods can result in underestimation of variability and potential bias in multivariate inferences. We show that this appears to be the case for the existing imputations in the Census of Manufactures. We then present an alternative strategy for handling the missing data based on multiple imputation. Specifically, we impute missing values via sequences of classification and regression trees, which offer a computationally straightforward and flexible approach for semi-automatic, large-scale multiple imputation (a schematic sketch follows this entry). We also present an approach to evaluating these imputations based on posterior predictive checks. We use the multiple imputations, and the imputations currently employed by the Census Bureau, to estimate production function parameters and productivity dispersions. The results suggest that the two approaches provide quite different answers about productivity.
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:11-02&r=ecm
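    Code sketch: a single-chain sketch of sequential imputation by regression trees, in the spirit of the CART-based multiple imputation described in the abstract, using scikit-learn's DecisionTreeRegressor. It is not the authors' code: a real application would generate several completed data sets and use proper within-leaf donor draws. The data and function names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

def cart_impute(df, n_sweeps=5, seed=0):
    """One chain of sequential CART imputation for numeric columns.
    Each variable with missing values is modelled by a regression tree on
    the other (currently completed) variables; missing entries are set to
    the tree prediction (leaf mean) plus a resampled observed residual."""
    rng = np.random.default_rng(seed)
    data = df.copy()
    miss = {c: data[c].isna() for c in data.columns}
    for c in data.columns:                       # start from mean imputation
        data[c] = data[c].fillna(data[c].mean())
    for _ in range(n_sweeps):
        for c in data.columns:
            if not miss[c].any():
                continue
            X = data.drop(columns=c)
            tree = DecisionTreeRegressor(min_samples_leaf=10, random_state=0)
            tree.fit(X[~miss[c]], df.loc[~miss[c], c])
            pred = tree.predict(X[miss[c]])
            resid = df.loc[~miss[c], c] - tree.predict(X[~miss[c]])
            noise = rng.choice(resid.to_numpy(), size=miss[c].sum())
            data.loc[miss[c], c] = pred + noise
    return data

# Hypothetical plant-level data with missing labour inputs
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.lognormal(size=(500, 3)),
                  columns=["output", "labour", "materials"])
df.loc[rng.random(500) < 0.2, "labour"] = np.nan
print(cart_impute(df).isna().sum().sum())  # 0 missing values remain
```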
  12. By: Aureo de Paula (Department of Economics, University of Pennsylvania); Xun Tang (Department of Economics, University of Pennsylvania)
    Abstract: This paper studies the inference of interaction effects (impacts of players' actions on each other's payoffs) in discrete simultaneous games with incomplete information. We propose an easily implementable test for the signs of state-dependent interaction effects that does not require parametric specifications of players' payoffs, the distributions of their private signals or the equilibrium selection mechanism. The test relies on the commonly invoked assumption that players' private signals are independent conditional on observed states. The procedure is valid in (but does not rely on) the presence of multiple equilibria in the data-generating process (DGP). As a by-product, we propose a formal test for multiple equilibria in the DGP. We also show how to extend our arguments to identify signs of interaction effects when private signals are correlated. We provide Monte Carlo evidence of the test's good performance in finite samples. We then implement the test using data on radio programming of commercial breaks in the U.S., and infer stations' incentives to synchronize their commercial breaks. Our results support the earlier finding by Sweeting (2009) that stations have strong incentives to synchronize their commercial breaks.
    Keywords: identification, inference, multiple equilibria, incomplete information games
    JEL: C01 C72
    Date: 2010–06–29
    URL: http://d.repec.org/n?u=RePEc:pen:papers:11-003&r=ecm
  13. By: Hanley, Nick; Riera, Antoni; Torres, Cati
    Abstract: Despite the vital role of the utility function in welfare measurement, the implications of working with incorrect utility specifications have been largely neglected in the choice experiments literature. This paper addresses the importance of specification with a special emphasis on the effects of mistaken assumptions about the marginal utility of income. Monte Carlo experiments were conducted using different functional forms of utility to generate simulated choices. Multinomial Logit and Mixed Logit models were then estimated on these choices under correct and incorrect assumptions about the true, underlying utility function. Estimated willingness-to-pay measures from these choice modelling results are then compared with the equivalent measures directly calculated from the true utility specifications. Results show that, for the parameter values and functional forms considered, a continuous-quadratic or a discrete-linear attribute specification is a good option regardless of the true effects the attribute has on utility. We also find that mistaken assumptions about preferences over costs magnify attribute mis-specification effects.
    Keywords: Monte Carlo analysis; choice experiments; efficiency; accuracy; welfare measurement; attributes; utility specification
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:stl:stledp:2010-12&r=ecm
  14. By: Salois, Matthew; Balcombe, Kelvin
    Abstract: The impact of covariates on obesity in the US is investigated, with particular attention given to the role of the Supplemental Nutrition Assistance Program (SNAP). The potential endogeneity of SNAP participation is treated as a problem in investigating its causal influence on obesity and is addressed using instrumental variable (IV) approaches. Because of the presence of heteroscedasticity in the errors, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). This approach leads to substantively different findings from those of a standard classical IV approach to correcting for heteroscedasticity. Although the findings support the contention that the SNAP participation rate is associated with a greater prevalence of obesity, the evidence for this impact is substantially weakened when using the methods introduced in the paper.
    Keywords: Bayesian; Food Stamps; Food Insecurity; Instrumental Variables; Heteroscedasticity; Obesity.
    JEL: I38 I00 C31 D10 C11
    Date: 2011–02–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28745&r=ecm
  15. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: Industrial production requires multivariate control charts to enable monitoring of several components. Recently there has also been increased interest in other areas, such as the detection of bioterrorism, spatial surveillance and transaction strategies in finance. In the literature, several types of multivariate counterparts to the univariate Shewhart, EWMA and CUSUM methods have been proposed. We review general approaches to multivariate control charts (a schematic sketch of one such chart follows this entry). Suggestions are made on the special challenges of evaluating multivariate surveillance methods.
    Keywords: Surveillance; monitoring; quality control; multivariate evaluation; sufficiency
    JEL: C10
    Date: 2011–02–10
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_002&r=ecm
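    Code sketch: one standard multivariate counterpart to the univariate Shewhart chart reviewed in this literature is Hotelling's T². The minimal sketch below assumes the in-control mean and covariance are known, which is a simplification; the data, shift size, and control limit are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2(obs, mean, cov_inv):
    """Hotelling T^2 statistic for a single multivariate observation
    against a known in-control mean and covariance."""
    d = obs - mean
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(4)
p = 3
mu0 = np.zeros(p)
sigma0 = np.eye(p)
cov_inv = np.linalg.inv(sigma0)
limit = chi2.ppf(0.999, df=p)          # Shewhart-type control limit

# The process mean of the first component shifts after t = 50
for t in range(100):
    shift = np.array([2.5, 0.0, 0.0]) if t >= 50 else np.zeros(p)
    x = rng.multivariate_normal(mu0 + shift, sigma0)
    t2 = hotelling_t2(x, mu0, cov_inv)
    if t2 > limit:
        print(f"alarm at t={t}, T2={t2:.2f}")
```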
  16. By: Patuelli, Roberto; Schanne, Norbert (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Griffith, Daniel A.; Nijkamp, Peter
    Abstract: The geographical distribution and persistence of regional/local unemployment rates in heterogeneous economies (such as Germany) have been, in recent years, the subject of various theoretical and empirical studies. Several researchers have shown an interest in analysing the dynamic adjustment processes of unemployment and the average degree of dependence of current unemployment rates or gross domestic product on those observed in the past. In this paper, we present a new econometric approach to the study of regional unemployment persistence, in order to account for spatial heterogeneity and/or spatial autocorrelation in both the levels and the dynamics of unemployment. First, we propose an econometric procedure suggesting the use of spatial filtering techniques as a substitute for fixed effects in a panel estimation framework. The spatial filter computed here is a proxy for spatially distributed region-specific information (e.g., the endowment of natural resources, or the size of the 'home market') that is usually incorporated in the fixed effects parameters. The same argument applies to the spatial filter modelling of the heterogeneous dynamics. The advantages of our proposed procedure are that the spatial filter, by incorporating region-specific information that generates spatial autocorrelation, frees up degrees of freedom, simultaneously corrects for time-stable spatial autocorrelation in the residuals, and provides insights about the spatial patterns in regional adjustment processes. We present several experiments in order to investigate the spatial pattern of the heterogeneous autoregressive parameters estimated for unemployment data for German NUTS-3 regions. We find widely heterogeneous but generally high persistence in regional unemployment rates.
    Keywords: unemployment rate, persistence, estimation, regional disparity
    JEL: C31 E24 E27 R11
    Date: 2011–02–10
    URL: http://d.repec.org/n?u=RePEc:iab:iabdpa:201103&r=ecm
  17. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: An overview of surveillance in different areas is given. Even though methods have been developed under different scientific cultures, the statistical concepts can be the same. When the statistical problems are the same, progress in one area can be used also in other areas. The aim of surveillance is to detect an important change in an underlying process as soon as possible after the change has occurred. In practice, we face complexities such as gradual changes and multivariate settings. Approaches to handling some of these complexities are discussed. The correspondence between the measures for evaluation and the aims of the application is important. Thus, the choice of evaluation measure deserves attention. The commonly used ARL criterion should be used with care.
    Keywords: expected delay; gradual change; likelihood ratio; monitoring; multivariate surveillance
    JEL: C10
    Date: 2011–02–10
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_003&r=ecm
  18. By: Taufemback, Cleiton; Da Silva, Sergio
    Abstract: Applied econometricians have long neglected the choice of a proper frequency at which to sample time series data. The present study shows how spectral analysis can be usefully employed to address this problem (a schematic sketch follows this entry). The case is illustrated with ultra-high-frequency data and daily prices of four selected stocks listed on the Sao Paulo stock exchange.
    Keywords: Econophysics; Spectral analysis; Aliasing; Sampling; Financial time series
    JEL: C81
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28720&r=ecm
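    Code sketch: a rough illustration of the underlying idea: inspect the periodogram of a finely sampled series to see where its spectral mass lies, and choose a sampling interval whose Nyquist frequency still covers that mass, avoiding aliasing. The simulated series and the 99% cutoff are hypothetical choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import periodogram

# Hypothetical price-like series sampled once per minute: a slow cycle of
# roughly one trading day plus noise.  If most spectral mass sits at low
# frequencies, sampling less often loses little information.
rng = np.random.default_rng(5)
n = 4000                                   # minutes
t = np.arange(n)
series = np.sin(2 * np.pi * t / 400) + 0.05 * rng.standard_normal(n)

freqs, power = periodogram(series, fs=1.0)         # fs = 1 obs per minute
cum = np.cumsum(power) / np.sum(power)
f99 = freqs[np.searchsorted(cum, 0.99)]            # freq holding 99% of power
print(f"99% of spectral power lies below {f99:.4f} cycles/min")
print(f"-> aliasing-safe sampling interval ~ {0.5 / f99:.0f} minutes")
```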
  19. By: T. R. Hurd; Zhuowei Zhou
    Abstract: We consider structural credit modeling in the important special case where the log-leverage ratio of the firm is a time-changed Brownian motion (TCBM), with the time change taken to be an independent increasing process. Following the approach of Black and Cox, the time of default is defined as the first passage time of the log-leverage ratio below the level zero. Rather than adopt the classical notion of first passage, with its associated numerical challenges, we use an alternative notion applicable to TCBMs called "first passage of the second kind". We demonstrate how statistical inference can be efficiently implemented in this new class of models. This allows us to compare the performance of two versions of TCBMs, the variance gamma (VG) model and the exponential jump model (EXP), to the Black-Cox model. When applied to a 4.5-year data set of weekly credit default swap (CDS) quotes for Ford Motor Co, the conclusion is that the two TCBM models, with essentially one extra parameter, can significantly outperform the classic Black-Cox model.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.2412&r=ecm
  20. By: Michael C. Münnix; Rudi Schäfer
    Abstract: We analyze the statistical dependency structure of the S&P 500 constituents in the 4-year period from 2007 to 2010 using intraday data from the New York Stock Exchange's TAQ database. With a copula-based approach, we find that the statistical dependencies are very strong in the tails of the marginal distributions. This tail dependence is higher than in a bivariate Gaussian distribution, which is implied in the calculation of many correlation coefficients (a schematic sketch of rank-based tail-dependence estimation follows this entry). We compare the tail dependence to the market's average correlation level as a commonly used quantity and find a nearly linear relation.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.1099&r=ecm
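    Code sketch: a minimal sketch of measuring lower-tail dependence on the copula (rank) scale and comparing it with a bivariate Gaussian benchmark; the quantile level, sample sizes, and the Student-t comparison are illustrative choices, not the authors' procedure.

```python
import numpy as np
from scipy.stats import rankdata

def lower_tail_dependence(x, y, q=0.05):
    """Empirical lower-tail dependence at quantile level q:
    P(U <= q, V <= q) / q, where U, V are the ranks (copula scale)."""
    u = rankdata(x) / (len(x) + 1)
    v = rankdata(y) / (len(y) + 1)
    return np.mean((u <= q) & (v <= q)) / q

rng = np.random.default_rng(6)
n = 100_000
rho = 0.5

# Benchmark: bivariate Gaussian with the same correlation
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
print("Gaussian copula:", lower_tail_dependence(z[:, 0], z[:, 1]))

# A Student-t copula with few degrees of freedom has fatter joint tails
g = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
chi = rng.chisquare(3, size=n) / 3
t = g / np.sqrt(chi)[:, None]
print("t(3) copula:    ", lower_tail_dependence(t[:, 0], t[:, 1]))
```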
  21. By: Vladimír Benáček (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic); Eva Michalíková (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic)
    Abstract: The paper quantifies the role of factors associated with the growth (or decline) of micro and small businesses in European economies. Growth is related to employment and value added in enterprises as well as to ten institutional variables. We test the data for consistency of behavioural patterns across countries and gradually remove outlying observations, a rather unique approach in panel data analysis, since such observations can lead to erroneous conclusions when classical estimators are used. In the first part of the paper we outline a highly robust method of estimation based on fixed effects and least trimmed squares (LTS); a schematic LTS sketch follows this entry. In the second part we apply this method to panel data for 28 countries over 2002-2008, testing the hypothesis that micro and small businesses in Europe use different strategies for their growth. We run a series of econometric tests in which we regress employment and total net production in micro and small businesses on three economic factors: gross capital returns, labour cost gaps in small relative to large enterprises, and GDP per capita. In addition, we test the role of ten institutional factors in the growth of family businesses.
    Keywords: Family business, robust estimator, LTS, fixed effects
    JEL: C01 C23 C51 C82 F21 F40
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2011_06&r=ecm
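    Code sketch: a minimal sketch of the least trimmed squares (LTS) idea for a plain linear regression, using random elemental starts and concentration steps; the authors combine LTS with fixed effects in a panel setting, which is not reproduced here. The data and tuning constants are hypothetical.

```python
import numpy as np

def lts_regression(X, y, h_frac=0.75, n_starts=50, n_csteps=20, seed=0):
    """Least trimmed squares: minimise the sum of the h smallest squared
    residuals.  Random elemental starts are followed by concentration
    steps (refit on the h best-fitting observations until convergence)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = int(h_frac * n)
    best_obj, best_beta = np.inf, None
    for _ in range(n_starts):
        subset = rng.choice(n, size=p, replace=False)   # elemental start
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        for _ in range(n_csteps):
            r2 = (y - X @ beta) ** 2
            keep = np.argsort(r2)[:h]                   # h best observations
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

# Example with 10% gross outliers contaminating the response
rng = np.random.default_rng(7)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + 0.5 * rng.standard_normal(n)
y[:30] += 20                                            # outliers
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0])
print("LTS :", lts_regression(X, y))
```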
  22. By: Krüger, Jens
    Abstract: This study presents the results of an extensive Monte Carlo experiment comparing different methods of efficiency analysis. In addition to traditional parametric-stochastic and nonparametric-deterministic methods, recently developed robust nonparametric-stochastic methods are considered. The experimental design comprises a wide variety of situations with different returns-to-scale regimes, substitution elasticities and outlying observations. As the results show, the new robust nonparametric-stochastic methods should not be used without cross-checking by other methods such as stochastic frontier analysis or data envelopment analysis. These latter methods appear quite robust in the experiments.
    Keywords: Monte Carlo experiment, efficiency measurement, nonparametric stochastic methods
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:dar:ddpeco:48892&r=ecm
  23. By: Satkartar K. Kinney; Jerome P. Reiter; Arnold P. Reznek; Javier Miranda; Ron S. Jarmin; John M. Abowd
    Abstract: In most countries, national statistical agencies do not release establishment-level business microdata, because doing so represents too large a risk to establishments' confidentiality. One approach with the potential for overcoming these risks is to release synthetic data; that is, the released establishment data are simulated from statistical models designed to mimic the distributions of the underlying real microdata. In this article, we describe an application of this strategy to create a public use file for the Longitudinal Business Database, an annual economic census of establishments in the United States comprising more than 20 million records dating back to 1976. The U.S. Bureau of the Census and the Internal Revenue Service recently approved the release of these synthetic microdata for public use, making the synthetic Longitudinal Business Database the first-ever business microdata set publicly released in the United States. We describe how we created the synthetic data, evaluated analytical validity, and assessed disclosure risk.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:11-04&r=ecm
  24. By: Frank Schorfheide
    Abstract: Estimated dynamic stochastic general equilibrium (DSGE) models are now widely used for empirical research in macroeconomics as well as for quantitative policy analysis and forecasting at central banks around the world. This paper reviews recent advances in the estimation and evaluation of DSGE models, discusses current challenges, and outlines avenues for future research.
    Keywords: Econometric models ; Stochastic analysis
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:11-7&r=ecm
  25. By: David F. Hendry
    Abstract: We consider three ‘case studies’ of the uses and mis-uses of mathematics in economics and econometrics. The first concerns economic forecasting, where a mathematical analysis is essential, and is independent of the specific forecasting model and of how the process being forecast behaves. The second concerns model selection with more candidate variables than observations. Again, an understanding of the properties of extended general-to-specific procedures is impossible without advanced mathematical analysis. The third concerns inter-temporal optimization and the formation of ‘rational expectations’, where misleading results follow from present mathematical approaches for realistic economies. The appropriate mathematics remains to be developed, and may end up being ‘problem specific’ rather than generic.
    Keywords: Economic forecasting, structural breaks, model selection, expectations, impulse-indicator saturation, mathematical analyses
    JEL: C02 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:530&r=ecm
  26. By: Duan Wang; Boris Podobnik; Davor Horvatić; H. Eugene Stanley
    Abstract: We propose a modified time lag random matrix theory in order to study time lag cross-correlations in multiple time series. We apply the method to 48 world indices, one for each of 48 different countries. We find long-range power-law cross-correlations in the absolute values of returns that quantify risk, and find that they decay much more slowly than cross-correlations between the returns. The magnitude of the cross-correlations constitutes "bad news" for international investment managers who may believe that risk is reduced by diversifying across countries. We find that when a market shock is transmitted around the world, the risk decays very slowly. We explain these time lag cross-correlations by introducing a global factor model (GFM) in which all index returns fluctuate in response to a single global factor (a schematic sketch follows this entry). For each pair of individual time series of returns, the cross-correlations between returns (or magnitudes) can be modeled with the auto-correlations of the global factor returns (or magnitudes). We estimate the global factor using principal component analysis, which minimizes the variance of the residuals after removing the global trend. Using random matrix theory, a significant fraction of the world index cross-correlations can be explained by the global factor, which supports the utility of the GFM. We demonstrate applications of the GFM in forecasting risks at the world level, and in finding uncorrelated individual indices. We find that 10 indices are practically uncorrelated with the global factor and with the remainder of the world indices, which is relevant information for international portfolio managers seeking to reduce risk. Finally, we argue that this general method can be applied to a wide range of phenomena in which time series are measured, ranging from seismology and physiology to atmospheric geophysics.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.2240&r=ecm
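    Code sketch: a rough illustration of the global factor model step only: the first principal component of standardized index returns serves as the estimated global factor, and the average pairwise cross-correlation is compared before and after removing each index's exposure to it. The simulated returns, loadings, and dimensions are hypothetical.

```python
import numpy as np

def mean_offdiag_corr(X):
    """Average off-diagonal element of the correlation matrix of the columns."""
    c = np.corrcoef(X, rowvar=False)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(8)
T, N = 2000, 48                      # days x country indices (hypothetical)
global_factor = rng.standard_normal(T)
loadings = 0.5 + 0.3 * rng.random(N)
returns = np.outer(global_factor, loadings) + rng.standard_normal((T, N))

# First principal component as the estimated global factor
Z = (returns - returns.mean(0)) / returns.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
f_hat = Z @ Vt[0]                    # scores on the first principal component

# Residuals after removing each index's exposure to the global factor
beta = Z.T @ f_hat / (f_hat @ f_hat)
resid = Z - np.outer(f_hat, beta)

print("mean cross-correlation, raw     :", round(mean_offdiag_corr(Z), 3))
print("mean cross-correlation, residual:", round(mean_offdiag_corr(resid), 3))
```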

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.