nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒02‒20
28 papers chosen by
Sune Karlsson
Orebro University

  1. Parametric estimation of hidden stochastic model by contrast minimization and deconvolution: application to the Stochastic Volatility Model By Salima El Kolei
  2. Bootstrap determination of the co-integration rank in VAR models By Giuseppe Cavaliere; Anders Rahbek; A.M. Robert Taylor
  3. Single Equation Instrumental Variable Models: Identification under discrete variation. By [no author]
  4. Identification of linear panel data models when instruments are not available By Laura Magazzini; Giorgio Calzolari
  5. Triangular simultaneous equations model under structural misspecification. By Kandemir, I.
  6. Particle Filters for Markov Switching Stochastic Volatility Models By Yun Bao; Carl Chiarella
  7. Testing CAPM with a Large Number of Assets By M Hashem Pesaran; Takashi Yamagata
  8. Time-varying conditional Johnson SU density in value-at-risk (VaR) methodology By Cayton, Peter Julian A.; Mapa, Dennis S.
  9. Evaluating alternative estimators for optimal order quantities in the newsvendor model with skewed demand By Halkos, George; Kevork, Ilias
  10. Modelling zero-inflated count data when exposure varies: with an application to sick leave By Gregori Baetschmann; Rainer Winkelmann
  11. Selection of Control Variables in Propensity Score Matching: Evidence from a Simulation Study By Nguyen Viet, Cuong
  12. On tests for linearity against STAR models with deterministic trends By Kaufmann, Hendrik; Kruse, Robinson; Sibbertsen, Philipp
  13. Robust FDI Determinants: Bayesian Model Averaging In The Presence Of Selection Bias By Theo S Eicher; Lindy Helfman; Alex Lenkoski
  14. Exponent of Cross-sectional Dependence: Estimation and Inference By Bailey, Natalia; Kapetanios, George; Pesaran, Hashem
  15. Almost periodically correlated time series in business fluctuations analysis By Łukasz Lenart; Mateusz Pipień
  16. The Gompertz distribution and maximum likelihood estimation of its parameters - a revision By Adam Lenart
  17. Evaluating the calibration of multi-step-ahead density forecasts using raw moments By Knüppel, Malte
  18. Volatility timing and portfolio selection: How best to forecast volatility By Adam E Clements; Annastiina Silvennoinen
  19. Identifying speculative bubbles with an infinite hidden Markov model By Song, Yong; Shi, Shuping
  20. The Oaxaca-Blinder unexplained component as a treatment effects estimator By Tymon Sloczynski
  21. Evidence on a DSGE Business Cycle model subject to Neutral and Investment-Specific Technology Shocks using Bayesian Model Averaging By Rodney W. Strachan; Herman K. van Dijk
  22. Biases in Bias Elicitation By Giancarlo Manzi; Martin Forster
  23. Do Health Care Report Cards Cause Providers to Select Patients and Raise Quality of Care? By Yijuan Chen; Juergen Meinecke
  24. Plant-level Productivity and Imputation of Missing Data in U.S. Census Manufacturing Data By T. Kirk White; Jerome P. Reiter; Amil Petrin
  25. Validity and precision of estimates in the classical newsvendor model with exponential and Rayleigh demand By Halkos, George; Kevork, Ilias
  26. Looking for Rational Bubbles in Agricultural Commodity Markets By Gutierrez, Luciano
  27. Functional form issues in the regression analysis of financial leverage ratios By Joaquim Ramalho; Jacinto Vidigal da Silva
  28. Forecasting multivariate volatility in larger dimensions: some practical issues By Adam E Clements; Ayesha Scott; Annastiina Silvennoinen

  1. By: Salima El Kolei
    Abstract: We study a new parametric approach for particular hidden stochastic models such as the Stochastic Volatility model. This method is based on contrast minimization and deconvolution. After proving consistency and asymptotic normality of the estimator, which lead to asymptotic confidence intervals, we provide a thorough numerical study comparing most of the classical methods used in practice (the Quasi Maximum Likelihood estimator, the Simulated Expectation Maximization Likelihood estimator and Bayesian estimators). We show that our estimator clearly outperforms the Maximum Likelihood Estimator in terms of computing time, and also most of the other methods. We also show that this contrast method is the most robust with respect to non-Gaussianity of the error and does not require any tuning parameter.
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1202.2559&r=ecm
  2. By: Giuseppe Cavaliere (Università di Bologna); Anders Rahbek; A.M. Robert Taylor
    Abstract: This paper discusses a consistent bootstrap implementation of the likelihood ratio [LR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying VAR model which obtain under the reduced rank null hypothesis. A full asymptotic theory is provided which shows that, unlike the bootstrap procedure in Swensen (2006) where a combination of unrestricted and restricted estimates from the VAR model is used, the resulting bootstrap data are I(1) and satisfy the null co-integration rank, regardless of the true rank. This ensures that the bootstrap LR test is asymptotically correctly sized and that the probability that the bootstrap sequential procedure selects a rank smaller than the true rank converges to zero. Monte Carlo evidence suggests that our bootstrap procedures work very well in practice.
    Keywords: Bootstrap; Co-integration; Trace statistic; Rank determination
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:113&r=ecm
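    A heavily simplified Python sketch of the bootstrap idea for the first step of the sequential procedure (testing rank 0 in a bivariate VAR with one lagged difference): bootstrap samples are generated from the model estimated with rank 0 imposed, and the Johansen trace statistic is recomputed on each sample. The data, lag order and replication count are illustrative assumptions; statsmodels' coint_johansen supplies the trace statistic.

# Simplified restricted-estimates bootstrap of the Johansen trace test for rank 0
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(9)
T, k = 200, 2
# two independent driftless random walks => true co-integration rank 0
y = np.cumsum(rng.normal(size=(T, k)), axis=0)

def trace_stat_rank0(data):
    # trace statistic for H0: rank = 0, no deterministic terms, one lagged difference
    return coint_johansen(data, det_order=-1, k_ar_diff=1).lr1[0]

# restricted (rank 0) model: regress dy_t on dy_{t-1} only
dy = np.diff(y, axis=0)
X, Y = dy[:-1], dy[1:]
gamma = np.linalg.lstsq(X, Y, rcond=None)[0]          # (k x k) short-run coefficient matrix
resid = Y - X @ gamma

stat_obs = trace_stat_rank0(y)
B, count = 499, 0
for _ in range(B):
    # resample residuals and rebuild the data recursively under the rank-0 null
    e = resid[rng.integers(0, len(resid), size=len(resid))]
    dy_b = np.zeros_like(dy)
    dy_b[0] = dy[0]
    for t in range(1, len(dy)):
        dy_b[t] = dy_b[t - 1] @ gamma + e[t - 1]
    y_b = np.vstack([y[0], y[0] + np.cumsum(dy_b, axis=0)])
    count += trace_stat_rank0(y_b) >= stat_obs
print(f"bootstrap p-value for H0: rank = 0 -> {(1 + count) / (B + 1):.3f}")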
  3. By: [no author]
    Abstract: Over the last decade, substantial interest in theoretical econometrics and microeconometrics has been directed towards nonparametric models. Much work has been devoted to the development of novel identification and estimation techniques and, in particular, to the identifying power of econometric models under various types of restrictions. Notable attention has been focused on the conditional independence restriction and on instrumental variable methods for both continuous and discrete data problems. This effort has led to important theoretical findings and, most importantly, to new empirical practices. Nowadays, the emphasis is on imposing minimal restrictions on the nuisance parameters of the model while focusing on specific structural features. New models permit the relaxation of implausible restrictions that are frequently, and often unintentionally, imposed in the empirical analysis of traditional econometric models. In this spirit, recent developments in microeconometrics have given rise to increasing interest in partially identified models. In these models, to preserve the credibility of claims, the feature of interest is bounded to a set rather than being a point in the space of parameters or functions. This, in turn, has its own place in economic practice. Among the many commonly investigated economic circumstances, partial identification frequently arises when researchers face discrete data, which are omnipresent in survey studies. Examples include a very general class of limited information discrete outcome models with endogeneity, in which very little is known about the process generating the endogenous variable. This thesis contributes to this line of research and seeks to address a somewhat limited, but I believe important, range of issues in great depth. These issues concern the specification of identified sets in so-called single equation models with endogeneity. We achieve identification via instrumental variable restrictions and focus on discrete outcomes as well as discrete endogenous variables. Our focus on discrete, ordered outcome models complements the vast majority of research on econometric design under continuous variation; the latter, even though theoretically sound, often becomes practically infeasible. We believe that this study provides a level of unity to the partial identification framework as a whole and makes steps forward in understanding some aspects of single equation instrumental variable models under discrete variation.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ner:euiflo:urn:hdl:1814/20214&r=ecm
  4. By: Laura Magazzini (Department of Economics (University of Verona)); Giorgio Calzolari (University of Florence)
    Abstract: One of the major virtues of panel data models is the possibility to control for unobserved and unobservable heterogeneity at the unit (individual, firm, sector...) level, even when this is correlated with the variables included on the right-hand side of the equation. Under an additive error structure, identification of the model parameters stems from transformations of the data that wipe out the individual component. We propose an alternative identification strategy, where the equation of interest is embedded in a structural system that properly accounts for the endogeneity of the right-hand-side variables (without distinguishing between correlation with the individual component and correlation with the idiosyncratic term). We show that, under certain conditions, the system is identified even when no exogenous variable is available, owing to the presence of cross-equation restrictions. Estimation of the model parameters can rely on an iterated Zellner-type estimator, with remarkable performance gains over traditional GMM approaches.
    Keywords: panel data, identification, cross-equation restrictions
    JEL: C23 C33
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:ver:wpaper:06/2012&r=ecm
  5. By: Kandemir, I.
    Abstract: Triangular simultaneous equation models are commonly used in econometric analysis to deal with endogeneity problems caused, among other things, by individual choice or market equilibrium. Empirical researchers usually specify such models in an ad hoc linear form, without testing the validity of this specification. In this paper, the approximation properties of a linear fit for the structural function in a triangular system of simultaneous equations are explored. I demonstrate that the linear fit can be interpreted as the best linear prediction of the underlying structural function in a weighted mean squared error (WMSE) sense. Furthermore, it is shown that when the endogenous variable is a continuous treatment variable, under misspecification the pseudo-parameter that defines the returns to treatment intensity is a weighted average of the Marginal Treatment Effects (MTE) of Heckman and Vytlacil (2001). Misspecification-robust asymptotic variance formulas for estimators of the pseudo-true parameters are also derived. The approximation properties are further investigated with Monte Carlo experiments.
    Date: 2011–12–28
    URL: http://d.repec.org/n?u=RePEc:ner:ucllon:http://discovery.ucl.ac.uk/1336878/&r=ecm
  6. By: Yun Bao (Toyota Financial Services Australia); Carl Chiarella (Finance Discipline Group, UTS Business School, University of Technology, Sydney; Finance Discipline Group, UTS Business School, University of Technology, Sydney)
    Abstract: This paper proposes an auxiliary particle filter algorithm for inference in regime-switching stochastic volatility models in which the regime state is governed by a first-order Markov chain. We propose a continuously updated Dirichlet distribution to estimate the transition probabilities of the Markov chain within the auxiliary particle filter. A simulation-based algorithm is presented for the method, which demonstrates that we are able to estimate a class of models in which the probability that the system state transits from one regime to a different regime is relatively high. The methodology is applied to a real time series: the exchange rate of the Australian dollar against the South Korean won.
    Keywords: Particle filters; Markov switching stochastic volatility models; Sequential Monte Carlo simulation
    JEL: C61 D11
    Date: 2012–01–01
    URL: http://d.repec.org/n?u=RePEc:uts:rpaper:299&r=ecm
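    A compact Python sketch of a bootstrap particle filter for a plain single-regime stochastic volatility model, as a stepping stone to the regime-switching case treated in the paper; the parameter values and simulated series are assumptions, not the authors' specification.

# Bootstrap particle filter for h_t = mu + phi*(h_{t-1}-mu) + sigma*eta_t, y_t = exp(h_t/2)*eps_t
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
T, mu, phi, sigma = 500, -1.0, 0.95, 0.2
h = np.zeros(T)
h[0] = mu + sigma / np.sqrt(1 - phi ** 2) * rng.normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)                  # simulated returns

M = 2000                                                # number of particles
particles = mu + sigma / np.sqrt(1 - phi ** 2) * rng.normal(size=M)
h_filtered = np.zeros(T)
for t in range(T):
    # propagate particles through the AR(1) log-volatility transition
    particles = mu + phi * (particles - mu) + sigma * rng.normal(size=M)
    # weight by the measurement density of y_t given each particle
    w = stats.norm.pdf(y[t], scale=np.exp(particles / 2))
    w /= w.sum()
    h_filtered[t] = w @ particles
    # multinomial resampling
    particles = particles[rng.choice(M, size=M, p=w)]

print("corr(filtered, true log-volatility):", round(np.corrcoef(h_filtered, h)[0, 1], 2))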
  7. By: M Hashem Pesaran; Takashi Yamagata
    Abstract: This paper is concerned with testing the time series implications of the capital asset pricing model (CAPM) due to Sharpe (1964) and Lintner (1965), when the number of securities, N, is large relative to the time dimension, T, of the return series. Two new tests of CAPM are proposed that exploit recent advances in the analysis of large panel data models, and are valid even if N>T. When the errors are Gaussian and cross-sectionally independent, a test, denoted by J_{α,1}, is proposed which is distributed as N(0,1) as N→∞, with T fixed. Even when the errors are non-Gaussian, we are still able to show that J_{α,1} tends to N(0,1) so long as the errors are cross-sectionally independent and N/T³→0, with N and T→∞ jointly. In the case of cross-sectionally correlated errors, using a threshold estimator of the average squares of pairwise error correlations, a modified version of J_{α,1}, denoted by J_{α,2}, is proposed. Small sample properties of the tests are compared using Monte Carlo experiments designed specifically to match the correlations, volatilities, and other distributional features of the residuals of Fama-French three-factor regressions of individual securities in the Standard & Poor's 500 index. Overall, the proposed tests perform best in terms of power, with empirical sizes very close to the chosen nominal value even in cases where N is much larger than T. The J_{α,2} test (which allows for non-Gaussian and weakly cross-correlated errors) is applied to all securities in the S&P 500 index with 60 months of return data at the end of each month over the period September 1989 to September 2011. Statistically significant evidence against the Sharpe-Lintner CAPM is found mainly during the recent financial crisis. Furthermore, a strong negative correlation is found between twelve-month moving averages of the p-values of the J_{α,2} test and the returns of long/short equity strategies relative to the return on the S&P 500 over the period December 2006 to September 2011, suggesting that abnormal profits are earned during episodes of market inefficiency.
    Keywords: CAPM, Testing for alpha, Market efficiency, Long/short equity returns, Large panels, Weak and strong cross-sectional dependence.
    JEL: C12 C15 C23 G11 G12
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:12/05&r=ecm
  8. By: Cayton, Peter Julian A.; Mapa, Dennis S.
    Abstract: A stylized fact of financial time series is that return volatility departs from normality, exhibiting features such as leverage effects and heavier tails that lead to larger extreme losses. Value-at-risk is a standard method of forecasting possible future losses on investments. A procedure for estimating value-at-risk using a time-varying conditional Johnson SU distribution is introduced and assessed against econometric models. The Johnson distribution makes it possible to model higher-order parameters with a time-varying structure using maximum likelihood estimation techniques. Two estimation procedures are introduced: joint estimation of the volatility and the higher-order parameters, and a two-step procedure in which the volatility is estimated separately from the higher-order parameters. The procedures are demonstrated on Philippine foreign exchange rates and the Philippine stock exchange index, and assessed with forecast evaluation measures in comparison with other value-at-risk methodologies. The research opens up modelling procedures in which the manipulation of higher-order parameters can be integrated into the value-at-risk methodology.
    Keywords: Time Varying Parameters; GARCH models; Nonnormal distributions; Risk Management
    JEL: G12 C53 G32 C22
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:36206&r=ecm
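    A minimal two-step Python sketch in the spirit of the second procedure: a simple EWMA volatility filter followed by fitting a Johnson SU density to the standardized returns and reading off value-at-risk. The smoothing constant and synthetic returns are assumptions, and scipy's johnsonsu is used for the fit.

# Two-step VaR sketch: EWMA volatility, then Johnson SU fit to standardized returns
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000) * 0.01       # synthetic daily returns

# Step 1: EWMA conditional variance (RiskMetrics-style smoothing constant)
lam, var = 0.94, np.empty_like(returns)
var[0] = returns.var()
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
std_resid = returns / np.sqrt(var)

# Step 2: fit Johnson SU to standardized residuals and compute a 1% VaR forecast
a, b, loc, scale = stats.johnsonsu.fit(std_resid)
q01 = stats.johnsonsu.ppf(0.01, a, b, loc=loc, scale=scale)
var_next = lam * var[-1] + (1 - lam) * returns[-1] ** 2     # one-step-ahead variance
var_1pct = -q01 * np.sqrt(var_next)                         # next-day 1% VaR as a positive loss
print(f"1% one-day VaR: {var_1pct:.4f}")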
  9. By: Halkos, George; Kevork, Ilias
    Abstract: This paper considers the classical Newsvendor model, also known as the Newsboy problem, with demand assumed to be fully observed and to follow, in successive inventory cycles, one of the Exponential, Rayleigh, or Log-Normal distributions. For each distribution, appropriate estimators for the optimal order quantity are considered and their sampling distributions are derived. Then, through Monte Carlo simulations, we evaluate the performance of the corresponding exact and asymptotic confidence intervals for the true optimal order quantity. The case where normality of demand is erroneously assumed is also investigated. Asymptotic confidence intervals produce higher precision, but to attain equality between their actual and nominal confidence levels, samples of at least a certain size should be available. This size depends upon the coefficients of variation, skewness and kurtosis. The paper concludes that having data on the skewed demand available for enough inventory cycles makes it possible (i) to detect non-normality, and (ii) to use the appropriate asymptotic confidence intervals so that the estimates of the optimal order quantity are valid and precise.
    Keywords: Inventory Control; Newsboy Problem; Skewed Demand; Exact and Asymptotic Confidence Intervals; Monte-Carlo Simulations
    JEL: C13 M11 C44 C15 D24
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:36205&r=ecm
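    A self-contained Python sketch of one ingredient discussed above: an exact confidence interval for the optimal order quantity under exponential demand, with its coverage checked by Monte Carlo. The critical ratio, sample size and replication count are illustrative assumptions.

# Exact CI for the optimal order quantity under Exp(theta) demand, coverage via Monte Carlo
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta, n, reps = 100.0, 30, 5000           # true mean demand, sample size, MC replications
cr = 0.75                                  # critical ratio cu/(cu+co)
q_true = -theta * np.log(1 - cr)           # optimal order quantity for exponential demand
alpha = 0.05

covered = 0
for _ in range(reps):
    x = rng.exponential(theta, size=n)
    xbar = x.mean()
    # 2*n*xbar/theta ~ chi2(2n) => exact CI for theta, then transform monotonically to Q*
    lo = 2 * n * xbar / stats.chi2.ppf(1 - alpha / 2, 2 * n)
    hi = 2 * n * xbar / stats.chi2.ppf(alpha / 2, 2 * n)
    covered += (-lo * np.log(1 - cr) <= q_true <= -hi * np.log(1 - cr))
print(f"empirical coverage: {covered / reps:.3f}")   # should be close to 0.95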
  10. By: Gregori Baetschmann; Rainer Winkelmann
    Abstract: This paper is concerned with the analysis of zero-inflated count data when time of exposure varies. It proposes a new zero-inflated count data model that is based on two homogeneous Poisson processes and accounts for exposure time in a theory-consistent way. The new model is used in an application to the effect of insurance generosity on the number of absent days.
    Keywords: Exposure, Poisson regression, complementary log-log link
    JEL: J29 C25
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:061&r=ecm
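    A hedged Python sketch using the standard zero-inflated Poisson with an exposure offset (statsmodels' ZeroInflatedPoisson), not the paper's new two-process model; the simulated data and variable names are assumptions.

# Zero-inflated Poisson with varying exposure entering the count part as log(exposure)
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
exposure = rng.uniform(0.5, 2.0, size=n)               # e.g. contracted work time
always_zero = rng.uniform(size=n) < 0.3                # inflation component
mu = np.exp(0.2 + 0.5 * x) * exposure
y = np.where(always_zero, 0, rng.poisson(mu))

X = sm.add_constant(x)
res = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                          exposure=exposure).fit(method="bfgs", maxiter=500, disp=False)
print(res.summary())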
  11. By: Nguyen Viet, Cuong
    Abstract: Propensity score matching is a widely used method to measure the effect of a treatment in the social as well as the health sciences. An important issue in propensity score matching is how to select the conditioning variables used in the estimation of the propensity score. It is commonly recommended that only variables which affect both program participation and outcomes be included. Using Monte Carlo simulation, this paper shows that efficiency in the estimation of the Average Treatment Effect on the Treated can be gained if all the available observed variables in the outcome equation are included in the estimation of the propensity score.
    Keywords: Impact evaluation; treatment effect; propensity score matching; covariate selection; Monte Carlo
    JEL: C14 H43 C15
    Date: 2012–02–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:36377&r=ecm
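    A small Python simulation sketch of the comparison described above: an assumed data-generating process with one confounder and one variable that affects only the outcome, with the ATT estimated by one-to-one propensity-score matching under both propensity-score specifications.

# Compare ATT matching estimators: confounder-only PS vs. PS including the outcome-only predictor
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

def att_by_matching(y, d, ps):
    """1-to-1 nearest-neighbour matching on the propensity score (ATT)."""
    treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
    idx = controls[np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)]
    return (y[treated] - y[idx]).mean()

def one_rep(n=1000, tau=1.0):
    x1 = rng.normal(size=n)            # confounder (affects D and Y)
    x2 = rng.normal(size=n)            # affects Y only
    d = (rng.uniform(size=n) < 1 / (1 + np.exp(-x1))).astype(int)
    y = tau * d + x1 + x2 + rng.normal(size=n)
    ps_small = sm.Logit(d, sm.add_constant(x1)).fit(disp=0).predict()
    ps_full = sm.Logit(d, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0).predict()
    return att_by_matching(y, d, ps_small), att_by_matching(y, d, ps_full)

ests = np.array([one_rep() for _ in range(200)])
print("std. dev. of ATT estimates (confounder only, all outcome predictors):",
      ests.std(axis=0).round(3))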
  12. By: Kaufmann, Hendrik; Kruse, Robinson; Sibbertsen, Philipp
    Abstract: This paper considers linearity testing against smooth transition autoregressive (STAR) models when deterministic trends are potentially present in the data. In contrast to results recently reported in Zhang (2012), we show that linearity tests against STAR models lead to useful results in this setting.
    Keywords: Nonlinearity, Smooth transition, Deterministic trend
    JEL: C12 C22
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-492&r=ecm
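    A Python sketch of a Terasvirta-type LM linearity test against a STAR alternative with the lagged dependent variable as the transition variable (the paper's setting additionally involves deterministic trends); the simulated AR(1) series is an assumption.

# LM-type linearity test: regress y_t on its lag and powers of the lag, F-test the nonlinear terms
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
T = 500
y = np.zeros(T)
for t in range(1, T):                                   # linear AR(1) under the null
    y[t] = 0.5 * y[t - 1] + rng.normal()

ylag, yt = y[:-1], y[1:]
# auxiliary regressors: y_{t-1} and y_{t-1}*s_t^j for j=1,2,3 with transition variable s_t = y_{t-1}
aux = np.column_stack([ylag, ylag ** 2, ylag ** 3, ylag ** 4])
X = sm.add_constant(aux)
res = sm.OLS(yt, X).fit()
# H0 (linearity): coefficients on the last three (nonlinear) terms are zero
print(res.f_test(np.eye(5)[2:]))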
  13. By: Theo S Eicher; Lindy Helfman; Alex Lenkoski
    Abstract: The literature on Foreign Direct Investment (FDI) determinants is remarkably diverse in terms of competing theories and empirical results. We utilize Bayesian Model Averaging (BMA) to resolve the model uncertainty that surrounds the validity of the competing FDI theories. Since the structure of existing FDI data is known to induce selection bias, we extend BMA theory to HeckitBMA to address model uncertainty in the presence of selection bias. We then show that more than half of the previously suggested FDI determinants are no longer robust and highlight theories that receive support from the data. In addition, our selection approach allows us to highlight that the determinants of margins of FDI (intensive and extensive) differ profoundly in the data, while FDI theories do not usually model this aspect explicitly.
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2011-07-fc&r=ecm
  14. By: Bailey, Natalia (University of Cambridge); Kapetanios, George (Queen Mary, University of London); Pesaran, Hashem (University of Cambridge)
    Abstract: An important issue in the analysis of cross-sectional dependence, which has received renewed interest in the past few years, is the need for a better understanding of the extent and nature of such cross dependencies. In this paper we focus on measures of cross-sectional dependence and how such measures are related to the behaviour of the aggregates defined as cross-sectional averages. We endeavour to determine the rate at which the cross-sectional weighted average of a set of variables, appropriately demeaned, tends to zero. One parameterisation sets this to be O(N^(2α-2)), for 1/2 < α ≤ 1. Given the fashion in which it arises, we refer to α as the exponent of cross-sectional dependence. We derive an estimator of α from the estimated variance of the cross-sectional average of the variables under consideration. We propose bias-corrected estimators, derive their asymptotic properties and consider a number of extensions. We include a detailed Monte Carlo study supporting the theoretical results. Finally, we undertake an empirical investigation of α using the S&P 500 dataset and a large number of macroeconomic variables across and within countries.
    Keywords: cross correlations, cross-sectional dependence, cross-sectional averages, weak and strong factor models, Capital Asset Pricing Model
    JEL: C21 C32
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6318&r=ecm
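    A Python sketch of the simplest, uncorrected version of an estimator of the exponent α based on the variance of the cross-sectional average, alpha_hat = 1 + 0.5*ln(var(xbar_t))/ln(N); the bias corrections proposed in the paper are omitted, and the factor design is an assumption.

# Naive exponent-of-cross-sectional-dependence estimator from the variance of the cross-sectional mean
import numpy as np

rng = np.random.default_rng(11)
N, T, alpha_true = 200, 500, 0.8
factor = rng.normal(size=T)
loadings = np.zeros(N)
m = int(N ** alpha_true)                                # only N^alpha units load on the factor
loadings[:m] = rng.normal(loc=1.0, size=m)
x = np.outer(factor, loadings) + rng.normal(size=(T, N))

x_std = (x - x.mean(axis=0)) / x.std(axis=0)            # standardize each unit
xbar = x_std.mean(axis=1)                               # cross-sectional average per period
alpha_hat = 1 + 0.5 * np.log(xbar.var()) / np.log(N)
print(f"alpha_hat = {alpha_hat:.2f} (true {alpha_true})")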
  15. By: Łukasz Lenart (Economic Institute in National Bank of Poland, Department of Mathematics in Cracow University of Economics); Mateusz Pipień (Economic Institute in National Bank of Poland, Department of Econometrics and Operations Research in Cracow University of Economics)
    Abstract: We propose a non-standard subsampling procedure to make formal statistical inference about the business cycle, one of the most important unobserved features characterising fluctuations in economic growth. We show that some characteristics of the business cycle can be modelled in a non-parametric way by the discrete spectrum of an Almost Periodically Correlated (APC) time series. On the basis of the estimated characteristics of this spectrum, the business cycle is extracted by filtering. As an illustration, we characterise the main properties of business cycles in the industrial production index for the Polish economy.
    Keywords: business cycle, industrial production index, almost periodically correlated time series, subsampling procedure
    JEL: C01 C02 C14
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:107&r=ecm
  16. By: Adam Lenart (Max Planck Institute for Demographic Research, Rostock, Germany)
    Abstract: The Gompertz distribution is widely used to describe the distribution of adult deaths. Previous work concentrated on formulating approximate relationships to characterise it. However, using the generalized integro-exponential function of Milgram (1985), exact formulas can be derived for its moment-generating function and central moments. Based on the exact central moments, higher-accuracy approximations can be defined for them. In demographic and actuarial applications, maximum likelihood estimation is often used to determine the parameters of the Gompertz distribution. By solving for one of the maximum likelihood estimates analytically, the dimension of the optimization problem can be reduced to one, in the case of both discrete and continuous data.
    JEL: J1 Z0
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2012-008&r=ecm
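    A Python sketch of the one-dimensional reduction: for a fixed Gompertz shape parameter b, the level a has a closed-form maximum likelihood solution, so only the profile likelihood in b needs to be optimized numerically. The parametrization (hazard a*exp(b*x)) and the simulated sample are assumptions.

# Profile maximum likelihood for the Gompertz distribution with hazard a*exp(b*x)
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
a_true, b_true = 0.0002, 0.1
# scipy's gompertz uses shape c = a/b and scale = 1/b
x = stats.gompertz.rvs(c=a_true / b_true, scale=1 / b_true, size=5000, random_state=rng)

def a_hat(b):
    # closed-form MLE of a for a given b
    return len(x) * b / np.sum(np.expm1(b * x))

def neg_profile_loglik(b):
    a = a_hat(b)
    return -np.sum(np.log(a) + b * x - (a / b) * np.expm1(b * x))

res = optimize.minimize_scalar(neg_profile_loglik, bounds=(1e-4, 1.0), method="bounded")
b_mle = res.x
print("MLE (a, b):", a_hat(b_mle), b_mle)   # should be close to (a_true, b_true)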
  17. By: Knüppel, Malte
    Abstract: The evaluation of multi-step-ahead density forecasts is complicated by the serial correlation of the corresponding probability integral transforms. In the literature, three testing approaches can be found which take this problem into account. However, these approaches can be computationally burdensome, ignore important information and therefore lack power, or suffer from size distortions even asymptotically. In this work, a fourth testing approach based on raw moments is proposed. It is easy to implement, uses standard critical values, can include all moments regarded as important, and has correct asymptotic size. It is found to have good size and power properties if it is based directly on the (standardized) probability integral transforms.
    Keywords: density forecast evaluation, normality tests
    JEL: C12 C52 C53
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:201132&r=ecm
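    A minimal Python sketch of a raw-moment check on probability integral transforms (PITs): under a well-calibrated density forecast the k-th raw moment of the PITs equals 1/(k+1), and the serial correlation induced by multi-step forecasts is handled with a HAC variance. Illustrative only, not the paper's exact test statistic.

# Raw-moment calibration checks on PITs with Newey-West (HAC) standard errors
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
pit = rng.uniform(size=500)            # stand-in for PITs of 2-step-ahead density forecasts

for k in (1, 2, 3, 4):
    z = pit ** k - 1.0 / (k + 1)       # deviation of k-th raw moment from its U(0,1) value
    res = sm.OLS(z, np.ones_like(z)).fit(cov_type="HAC", cov_kwds={"maxlags": 1})
    print(f"moment {k}: t = {res.tvalues[0]:.2f}")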
  18. By: Adam E Clements (QUT); Annastiina Silvennoinen (QUT)
    Abstract: Within the context of volatility timing and portfolio selection, this paper considers how best to estimate a volatility model. Two issues are dealt with, namely the frequency of data used to construct volatility estimates, and the loss function used to estimate the parameters of a volatility model. We find support for the use of intraday data for estimating volatility, which is consistent with earlier research. We also find that the choice of loss function is important and show that a simple mean squared error loss overall provides the best forecasts of volatility on which to form optimal portfolios.
    Keywords: Volatility, volatility timing, utility, portfolio allocation, realized volatility
    JEL: C22 G11 G17
    Date: 2011–10–12
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2011_7&r=ecm
  19. By: Song, Yong; Shi, Shuping
    Abstract: This paper proposes an infinite hidden Markov model (iHMM) to detect, date-stamp, and estimate speculative bubbles. Three features make this new approach attractive to practitioners. First, the iHMM is capable of capturing the nonlinear dynamics of different types of bubble behaviors, as it allows an infinite number of regimes. Second, the implementation of this procedure is straightforward, as the detection, dating, and estimation of bubbles are done simultaneously in a coherent Bayesian framework. Third, the iHMM, by assuming hierarchical structures, is parsimonious and superior in out-of-sample forecasting. Two empirical applications are presented: one to the Argentinian money base, exchange rate, and consumer prices from January 1983 to November 1989; and the other to the U.S. oil price from April 1983 to December 2010. We find prominent results which have not been discovered by existing finite hidden Markov models. Model comparison shows that the iHMM is strongly supported by the predictive likelihood.
    Keywords: speculative bubbles; infinite hidden Markov model; Dirichlet process
    JEL: C14 C15 C11
    Date: 2012–02–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:36455&r=ecm
  20. By: Tymon Sloczynski (Warsaw School of Economics)
    Abstract: In this paper I use the National Supported Work (NSW) data to examine the validity of the Oaxaca–Blinder unexplained component as an estimator of the population average treatment effect on the treated (PATT). More precisely, I utilize the dataset and variable selections used in previous studies of the NSW data to compare the performance of the Oaxaca–Blinder unexplained component with methods based on the propensity score (Dehejia and Wahba, 1999) and bias-corrected matching estimators (Abadie and Imbens, 2011). I show that in both cases the Oaxaca–Blinder unexplained component outperforms the previously analyzed estimators, provided that common support is imposed.
    Keywords: decomposition methods, Manpower training, Treatment effects
    JEL: C21 J24
    Date: 2012–02–13
    URL: http://d.repec.org/n?u=RePEc:wse:wpaper:61&r=ecm
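    A Python sketch of the Oaxaca-Blinder unexplained component used as an ATT estimator on assumed simulated data: fit the outcome equation on the control group, predict counterfactual means for the treated, and take the difference.

# Oaxaca-Blinder unexplained component as an ATT estimator
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=(n, 2))
d = (rng.uniform(size=n) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)   # selection on x[:, 0]
y = 1.0 * d + x @ np.array([1.0, 0.5]) + rng.normal(size=n)          # true ATT = 1.0

X = sm.add_constant(x)
beta0 = sm.OLS(y[d == 0], X[d == 0]).fit().params       # control-group outcome coefficients
att_ob = y[d == 1].mean() - (X[d == 1] @ beta0).mean()  # unexplained component
print(f"Oaxaca-Blinder unexplained component (ATT estimate): {att_ob:.3f}")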
  21. By: Rodney W. Strachan; Herman K. van Dijk
    Abstract: The empirical support for a DSGE-type real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure that makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is extended to permit equilibrium restrictions and restrictions on long-run responses to technology shocks, apart from having a range of lag structures and deterministic processes. These model features are weighted by posterior probabilities, which are computed using MCMC and analytical methods. Uncertainty exists as to the most appropriate model for our data, with five models receiving significant support. The model set used has substantial implications for the results obtained. We do find support for a number of features implied by the real business cycle model. Business cycle volatility seems due more to investment-specific technology shocks than to neutral technology shocks, and this result is robust to model specification. These technology shocks appear to account for all stochastic trends in our system after 1984. We provide evidence on the uncertainty bands associated with these results.
    JEL: C11 C32 C52
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2012-03&r=ecm
  22. By: Giancarlo Manzi; Martin Forster
    Abstract: We consider the biases that can arise in bias elicitation when expert assessors make random errors. We illustrate the phenomenon for two sources of bias: that due to omitting important variables in a least squares regression and that which arises in adjusting relative risks for treatment effects using an elicitation scale. Results show that, even when assessors' elicitations of bias have desirable properties (such as unbiasedness and independence), the nonlinear nature of biases can lead to elicitations of bias that are, themselves, biased. We show the corrections which can be made to remove this bias and discuss the implications for the applied literature which employs these methods.
    Keywords: bias reduction, expert elicitation, elicitation scales, omitted variable bias
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:12/04&r=ecm
  23. By: Yijuan Chen; Juergen Meinecke
    Abstract: We exploit a brief period of asymmetric information during the implementation of Pennsylvania’s “report card” scheme for coronary artery bypass graft surgery to test for improvements in quality of care and selection of patients by health care providers. During the first three years of the 1990s, providers in Pennsylvania had an incentive to bias report cards by selecting patients strategically, while patients had no access to the report cards. This dichotomy enables us to separate providers’ selection of patients from patients’ selection of providers. Using data from the Nationwide Inpatient Sample, we estimate a non-linear difference-in-differences model and derive asymptotic standard errors. The mortality rate for bypass patients decreases by only 0.05 percentage points due to the report cards, which we interpret as evidence that quality of bypass surgery did not improve (at least in the short term) and that patient selection by providers did not occur. Our timing, estimation, and asymptotics are readily applicable to many other report card schemes.
    Keywords: health care report cards; provider moral hazard; quality improvement; difference-in-differences estimation
    JEL: C23 D82 I18
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:auu:dpaper:657&r=ecm
  24. By: T. Kirk White; Jerome P. Reiter; Amil Petrin
    Abstract: Within-industry differences in measured plant-level productivity are large. A large literature has been devoted to explaining these differences. In the U.S. Census Bureau's manufacturing data, the Bureau imputes missing values using methods known to result in underestimation of variability and potential bias in multivariate inferences. We present an alternative strategy for handling the missing data based on multiple imputation via sequences of classification and regression trees. We use our imputations and the Bureau's imputations to estimate within-industry productivity dispersion. The results suggest that there may be more within-industry productivity dispersion than previous research has indicated.
    JEL: C80 L11 L60
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:17816&r=ecm
  25. By: Halkos, George; Kevork, Ilias
    Abstract: In this paper we consider the classical newsvendor model with profit maximization. When demand is fully observed in each period and follows either the Rayleigh or the exponential distribution, appropriate estimators for the optimal order quantity and the maximum expected profit are established and their distributions are derived. Measuring the validity and precision of the corresponding confidence intervals by, respectively, the actual confidence level and the expected half-length divided by the true quantity (optimal order quantity or maximum expected profit), we prove that the intervals have a very important and useful property: whether they refer to the optimal order quantity or to the maximum expected profit, the measurements of validity and precision take on exactly the same values. Furthermore, validity and precision do not depend upon the values assigned to the revenue and cost parameters of the model. To offer, therefore, a priori knowledge of the attainable levels of precision and validity, values of the two statistical criteria, that is, the actual confidence level and the relative expected half-length, are provided for different combinations of sample size and nominal confidence levels of 90%, 95% and 99%. The values of the two criteria have been estimated by developing appropriate Monte Carlo simulations. For the relative expected half-length, values are also computed analytically.
    Keywords: Inventory Control; Classical newsvendor model; Exponential and Rayleigh Distributions; Confidence Intervals; Monte-Carlo Simulations
    JEL: C13 M11 C44 C15 D24
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:36460&r=ecm
  26. By: Gutierrez, Luciano
    Abstract: In this paper, we use a bootstrap methodology that helps us compute the finite sample probability distribution of the asymptotic tests recently proposed in Phillips et al. (2009b) and Phillips and Yu (2009c). Simulation shows that the bootstrap methodology works well and allows us to identify explosive processes and collapsing bubbles. We apply the bootstrap procedure to wheat and rough rice commodity prices. We find some evidence of price exuberance for both prices in the 2007-2008 period.
    Keywords: Rational Speculative Bubbles, Bootstrap, Unit Root Tests, Commodity Prices
    JEL: G14 Q14 C12 C15
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ags:eaae11:120377&r=ecm
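    A simplified Python sketch of bootstrapping a right-tailed (explosive-alternative) ADF statistic under the unit-root null, in the spirit of obtaining finite-sample distributions for such tests; it is not the recursive SADF procedure of Phillips et al., and the synthetic price series is an assumption.

# Bootstrap the right-tailed ADF test: resample first differences under the unit-root null
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(10)
price = np.cumsum(rng.normal(size=300)) + 100           # synthetic commodity price with a unit root

stat_obs = adfuller(price, maxlag=1, autolag=None, regression="c")[0]

dp = np.diff(price)
B, count = 999, 0
for _ in range(B):
    # resample first differences i.i.d. and re-cumulate: imposes the unit-root null
    p_b = price[0] + np.concatenate([[0.0], np.cumsum(rng.choice(dp, size=len(dp)))])
    stat_b = adfuller(p_b, maxlag=1, autolag=None, regression="c")[0]
    count += stat_b >= stat_obs                         # right tail: large values signal explosiveness
print(f"bootstrap right-tail p-value: {(1 + count) / (B + 1):.3f}")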
  27. By: Joaquim Ramalho (Department of Economics and CEFAGE- UE, Universidade de Évora); Jacinto Vidigal da Silva (Department of Management and CEFAGE- UE, Universidade de Évora)
    Abstract: Linear models are typically used in the regression analysis of capital structure choices. However, given the proportional and bounded nature of leverage ratios, models such as the tobit, the fractional regression model and its two-part variant are a better alternative. In this paper, we discuss the main econometric assumptions and features of those models, provide a theoretical foundation for their use in the regression analysis of leverage ratios, and review some statistical tests suitable for assessing their specification. Using a dataset previously considered in the literature, we carry out a comprehensive comparison of the alternative models, finding that in this framework the most relevant functional form issue is the choice between a single model for all capital structure decisions and a two-part model that explains separately the decision to issue debt and, conditional on that decision, the amount of debt to issue.
    Keywords: Capital structure; Zero leverage; Fractional regression model; Tobit; Two-part model.
    JEL: G32 C25
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:cfe:wpcefa:2011_28&r=ecm
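    A hedged Python sketch of a fractional logit regression (Papke-Wooldridge-style quasi-likelihood) for a ratio bounded in [0, 1], such as a leverage ratio; the covariates and simulated data are illustrative assumptions.

# Fractional logit via GLM with a Binomial family and robust standard errors
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
size, tang = rng.normal(size=n), rng.normal(size=n)            # hypothetical firm covariates
mu = 1 / (1 + np.exp(-(-0.5 + 0.6 * size + 0.3 * tang)))
leverage = np.clip(mu + rng.normal(scale=0.1, size=n), 0, 1)   # fractional outcome with point masses at 0 and 1

X = sm.add_constant(np.column_stack([size, tang]))
res = sm.GLM(leverage, X, family=sm.families.Binomial()).fit(cov_type="HC1")
print(res.summary())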
  28. By: Adam E Clements (QUT); Ayesha Scott (QUT); Annastiina Silvennoinen (QUT)
    Abstract: The importance of covariance modelling has long been recognised in the field of portfolio management, and large dimensional multivariate problems are increasingly becoming the focus of research. This paper provides a straightforward and commonsense approach to investigating whether simpler moving-average-based correlation forecasting methods have predictive accuracy equal to that of their more complex multivariate GARCH counterparts for large dimensional problems. We find that simpler forecasting techniques do provide equal (and often superior) predictive accuracy in a minimum variance sense. A portfolio allocation problem is used to compare the forecasting methods, employing the global minimum variance portfolio and the Model Confidence Set (Hansen, Lunde, and Nason, 2003), while portfolio weight stability and computational time are also considered.
    Keywords: Volatility, multivariate GARCH, portfolio allocation
    JEL: C22 G11 G17
    Date: 2012–02–06
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2012_3&r=ecm

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.